Paper Group ANR 371
Mediation Challenges and Socio-Technical Gaps for Explainable Deep Learning Applications. Using Causal Analysis to Learn Specifications from Task Demonstrations. Parameterized Convolutional Neural Networks for Aspect Level Sentiment Classification. Hyperspectral and multispectral image fusion under spectrally varying spatial blurs – Application to …
Mediation Challenges and Socio-Technical Gaps for Explainable Deep Learning Applications
Title | Mediation Challenges and Socio-Technical Gaps for Explainable Deep Learning Applications |
Authors | Rafael Brandão, Joel Carbonera, Clarisse de Souza, Juliana Ferreira, Bernardo Gonçalves, Carla Leitão |
Abstract | The presumed data owners’ right to explanations brought about by the General Data Protection Regulation in Europe has shed light on the social challenges of explainable artificial intelligence (XAI). In this paper, we present a case study with Deep Learning (DL) experts from a research and development laboratory focused on the delivery of industrial-strength AI technologies. Our aim was to investigate the social meaning (i.e. meaning to others) that DL experts assign to what they do, given a richly contextualized and familiar domain of application. Using qualitative research techniques to collect and analyze empirical data, our study has shown that participating DL experts did not spontaneously engage in considerations about the social meaning of the machine learning models that they build. Moreover, when explicitly stimulated to do so, these experts expressed expectations that, with real-world DL applications, there will be mediators available to bridge the gap between the technical meanings that drive DL work and the social meanings that AI technology users assign to it. We concluded that current research incentives and values guiding the participants’ scientific interests and conduct are at odds with those required to face some of the scientific challenges involved in advancing XAI, and thus in responding to the alleged data owners’ right to explanations or similar societal demands emerging from current debates. As a concrete contribution to mitigate what seems to be a more general problem, we propose three preliminary XAI Mediation Challenges with the potential to bring together the technical and social meanings of DL applications, as well as to foster much-needed interdisciplinary collaboration between AI and Social Sciences researchers. |
Tasks | |
Published | 2019-07-16 |
URL | https://arxiv.org/abs/1907.07178v1 |
https://arxiv.org/pdf/1907.07178v1.pdf | |
PWC | https://paperswithcode.com/paper/mediation-challenges-and-socio-technical-gaps |
Repo | |
Framework | |
Using Causal Analysis to Learn Specifications from Task Demonstrations
Title | Using Causal Analysis to Learn Specifications from Task Demonstrations |
Authors | Daniel Angelov, Yordan Hristov, Subramanian Ramamoorthy |
Abstract | Learning models of user behaviour is an important problem that is broadly applicable across many application domains requiring human-robot interaction. In this work we show that it is possible to learn a generative model for distinct user behavioral types, extracted from human demonstrations, by enforcing clustering of preferred task solutions within the latent space. We use this model to differentiate between user types and to find cases with overlapping solutions. Moreover, we can alter an initially guessed solution to satisfy the preferences that constitute a particular user type by backpropagating through the learned differentiable model. An advantage of structuring generative models in this way is that it allows us to extract causal relationships between symbols that might form part of the user’s specification of the task, as manifested in the demonstrations. We show that the proposed method is capable of correctly distinguishing between three user types, who differ in degrees of cautiousness in their motion, while performing the task of moving objects with a kinesthetically driven robot in a tabletop environment. Our method successfully identifies the correct type, within the specified time, in 99% [97.8 - 99.8] of the cases, which outperforms an IRL baseline. We also show that our proposed method correctly changes a default trajectory to one satisfying a particular user specification even with unseen objects. The resulting trajectory is shown to be directly implementable on a PR2 humanoid robot completing the same task. |
Tasks | |
Published | 2019-03-04 |
URL | http://arxiv.org/abs/1903.01267v1 |
http://arxiv.org/pdf/1903.01267v1.pdf | |
PWC | https://paperswithcode.com/paper/using-causal-analysis-to-learn-specifications |
Repo | |
Framework | |
Parameterized Convolutional Neural Networks for Aspect Level Sentiment Classification
Title | Parameterized Convolutional Neural Networks for Aspect Level Sentiment Classification |
Authors | Binxuan Huang, Kathleen M. Carley |
Abstract | We introduce a novel parameterized convolutional neural network for aspect level sentiment classification. Using parameterized filters and parameterized gates, we incorporate aspect information into convolutional neural networks (CNN). Experiments demonstrate that our parameterized filters and parameterized gates effectively capture the aspect-specific features, and our CNN-based models achieve excellent results on SemEval 2014 datasets. |
Tasks | Sentiment Analysis |
Published | 2019-09-13 |
URL | https://arxiv.org/abs/1909.06276v1 |
https://arxiv.org/pdf/1909.06276v1.pdf | |
PWC | https://paperswithcode.com/paper/parameterized-convolutional-neural-networks-1 |
Repo | |
Framework | |
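The core idea in the abstract, conv filters and gates generated from the aspect representation, so the same sentence is processed differently per aspect, can be sketched as follows. This is a hedged toy illustration, not the paper's architecture: the filter-generation map (a single linear layer), all shapes, and the `tanh` feature map are assumptions for the example.

```python
import math

def generate_filter(aspect_vec, weights, width, dim):
    """Linear map from the aspect vector to a conv filter of shape width x dim.

    weights: width*dim columns, each a list of len(aspect_vec) coefficients.
    """
    flat = [sum(a * w for a, w in zip(aspect_vec, col)) for col in weights]
    return [flat[i * dim:(i + 1) * dim] for i in range(width)]

def aspect_conv(sentence, aspect_vec, weights, width=2):
    """1-D convolution over word vectors with an aspect-generated filter."""
    dim = len(sentence[0])
    filt = generate_filter(aspect_vec, weights, width, dim)
    feats = []
    for i in range(len(sentence) - width + 1):
        # Dot product of the generated filter with one window of word vectors.
        s = sum(filt[j][d] * sentence[i + j][d]
                for j in range(width) for d in range(dim))
        feats.append(math.tanh(s))
    return feats

# Toy weights mapping a 2-d aspect vector to a 2x2 filter (4 columns):
weights = [[0.5, 0.0], [0.0, 0.5], [0.5, 0.5], [0.1, 0.2]]
sentence = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(aspect_conv(sentence, [1.0, -1.0], weights))
```

Feeding a different aspect vector to `aspect_conv` yields a different filter, and hence different features, for the same sentence, which is the aspect-conditioning effect the abstract describes.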
Hyperspectral and multispectral image fusion under spectrally varying spatial blurs – Application to high dimensional infrared astronomical imaging
Title | Hyperspectral and multispectral image fusion under spectrally varying spatial blurs – Application to high dimensional infrared astronomical imaging |
Authors | Claire Guilloteau, Thomas Oberlin, Olivier Berné, Nicolas Dobigeon |
Abstract | Hyperspectral imaging has become a significant source of valuable data for astronomers over the past decades. Current instrumental and observing time constraints allow direct acquisition of multispectral images, with high spatial but low spectral resolution, and hyperspectral images, with low spatial but high spectral resolution. To enhance scientific interpretation of the data, we propose a data fusion method which combines the benefits of each image to recover a high spatio-spectral resolution datacube. The proposed inverse problem accounts for the specificities of astronomical instruments, such as spectrally variant blurs. We provide a fast implementation by solving the problem in the frequency domain and in a low-dimensional subspace to efficiently handle the convolution operators as well as the high dimensionality of the data. We conduct experiments on a realistic synthetic dataset of simulated observations of the upcoming James Webb Space Telescope, and we show that our fusion algorithm outperforms state-of-the-art methods commonly used in remote sensing for Earth observation. |
Tasks | |
Published | 2019-12-26 |
URL | https://arxiv.org/abs/1912.11868v1 |
https://arxiv.org/pdf/1912.11868v1.pdf | |
PWC | https://paperswithcode.com/paper/hyperspectral-and-multispectral-image-fusion-1 |
Repo | |
Framework | |
Knowledge Consistency between Neural Networks and Beyond
Title | Knowledge Consistency between Neural Networks and Beyond |
Authors | Ruofan Liang, Tianlin Li, Longfei Li, Jing Wang, Quanshi Zhang |
Abstract | This paper aims to analyze knowledge consistency between pre-trained deep neural networks. We propose a generic definition for knowledge consistency between neural networks at different fuzziness levels. A task-agnostic method is designed to disentangle feature components, which represent the consistent knowledge, from raw intermediate-layer features of each neural network. As a generic tool, our method can be broadly used for different applications. In preliminary experiments, we have used knowledge consistency as a tool to diagnose representations of neural networks. Knowledge consistency provides new insights to explain the success of existing deep-learning techniques, such as knowledge distillation and network compression. More crucially, knowledge consistency can also be used to refine pre-trained networks and boost performance. |
Tasks | |
Published | 2019-08-05 |
URL | https://arxiv.org/abs/1908.01581v2 |
https://arxiv.org/pdf/1908.01581v2.pdf | |
PWC | https://paperswithcode.com/paper/knowledge-isomorphism-between-neural-networks |
Repo | |
Framework | |
Fast and Full-Resolution Light Field Deblurring using a Deep Neural Network
Title | Fast and Full-Resolution Light Field Deblurring using a Deep Neural Network |
Authors | Jonathan Samuel Lumentut, Tae Hyun Kim, Ravi Ramamoorthi, In Kyu Park |
Abstract | Restoring a sharp light field image from its blurry input has become essential due to the increasing popularity of parallax-based image processing. State-of-the-art blind light field deblurring methods suffer from several issues such as slow processing, reduced spatial size, and a limited motion blur model. In this work, we address these challenging problems by generating a complex blurry light field dataset and proposing a learning-based deblurring approach. In particular, we model the full 6-degree of freedom (6-DOF) light field camera motion, which is used to create the blurry dataset using a combination of real light fields captured with a Lytro Illum camera, and synthetic light field renderings of 3D scenes. Furthermore, we propose a light field deblurring network that is built with the capability of large receptive fields. We also introduce a simple strategy of angular sampling to train on the large-scale blurry light field effectively. We evaluate our method through both quantitative and qualitative measurements and demonstrate superior performance compared to the state-of-the-art method with a massive speedup in execution time. Our method is about 16K times faster than Srinivasan et al. [22] and can deblur a full-resolution light field in less than 2 seconds. |
Tasks | Deblurring |
Published | 2019-03-31 |
URL | http://arxiv.org/abs/1904.00352v1 |
http://arxiv.org/pdf/1904.00352v1.pdf | |
PWC | https://paperswithcode.com/paper/fast-and-full-resolution-light-field |
Repo | |
Framework | |
Understanding, Categorizing and Predicting Semantic Image-Text Relations
Title | Understanding, Categorizing and Predicting Semantic Image-Text Relations |
Authors | Christian Otto, Matthias Springstein, Avishek Anand, Ralph Ewerth |
Abstract | Two modalities are often used to convey information in a complementary and beneficial manner, e.g., in online news, videos, educational resources, or scientific publications. The automatic understanding of semantic correlations between text and associated images as well as their interplay has a great potential for enhanced multimodal web search and recommender systems. However, automatic understanding of multimodal information is still an unsolved research problem. Recent approaches such as image captioning focus on precisely describing visual content and translating it to text, but typically address neither semantic interpretations nor the specific role or purpose of an image-text constellation. In this paper, we go beyond previous work and investigate, inspired by research in visual communication, useful semantic image-text relations for multimodal information retrieval. We derive a categorization of eight semantic image-text classes (e.g., “illustration” or “anchorage”) and show how they can systematically be characterized by a set of three metrics: cross-modal mutual information, semantic correlation, and the status relation of image and text. Furthermore, we present a deep learning system to predict these classes by utilizing multimodal embeddings. To obtain a sufficiently large amount of training data, we have automatically collected and augmented data from a variety of data sets and web resources, which enables future research on this topic. Experimental results on a demanding test set demonstrate the feasibility of the approach. |
Tasks | Image Captioning, Information Retrieval, Recommendation Systems |
Published | 2019-06-20 |
URL | https://arxiv.org/abs/1906.08595v1 |
https://arxiv.org/pdf/1906.08595v1.pdf | |
PWC | https://paperswithcode.com/paper/understanding-categorizing-and-predicting |
Repo | |
Framework | |
Adaptive scale-invariant online algorithms for learning linear models
Title | Adaptive scale-invariant online algorithms for learning linear models |
Authors | Michał Kempka, Wojciech Kotłowski, Manfred K. Warmuth |
Abstract | We consider online learning with linear models, where the algorithm predicts on sequentially revealed instances (feature vectors), and is compared against the best linear function (comparator) in hindsight. Popular algorithms in this framework, such as Online Gradient Descent (OGD), have parameters (learning rates), which ideally should be tuned based on the scales of the features and the optimal comparator, but these quantities only become available at the end of the learning process. In this paper, we resolve the tuning problem by proposing online algorithms making predictions which are invariant under arbitrary rescaling of the features. The algorithms have no parameters to tune, do not require any prior knowledge on the scale of the instances or the comparator, and achieve regret bounds matching (up to a logarithmic factor) that of OGD with optimally tuned separate learning rates per dimension, while retaining comparable runtime performance. |
Tasks | |
Published | 2019-02-20 |
URL | http://arxiv.org/abs/1902.07528v1 |
http://arxiv.org/pdf/1902.07528v1.pdf | |
PWC | https://paperswithcode.com/paper/adaptive-scale-invariant-online-algorithms |
Repo | |
Framework | |
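The invariance property this paper targets, predictions unchanged under arbitrary per-feature rescaling, can be demonstrated with a much simpler toy learner than the paper's algorithm. The sketch below is only an illustration of the property, not the proposed method: it normalizes each coordinate by its running maximum absolute value before an ordinary gradient step, and the normalizer (unlike the paper's parameter-free scheme) is an assumption of this example.

```python
def scale_free_ogd(stream, dim, eta=0.5):
    """Online squared-loss learner whose predictions are invariant to
    per-feature rescaling. stream: iterable of (x, y); returns predictions."""
    w = [0.0] * dim
    scale = [0.0] * dim          # running max |feature value| per coordinate
    preds = []
    for x, y in stream:
        scale = [max(s, abs(v)) for s, v in zip(scale, x)]
        xn = [v / s if s > 0 else 0.0 for v, s in zip(x, scale)]
        p = sum(wi * xi for wi, xi in zip(w, xn))
        preds.append(p)
        g = 2 * (p - y)          # gradient of (p - y)^2 w.r.t. p
        w = [wi - eta * g * xi for wi, xi in zip(w, xn)]
    return preds

data = [([1.0, 2.0], 1.0), ([2.0, 1.0], 0.0), ([1.5, 1.5], 1.0)]
rescaled = [([100 * a, 0.01 * b], y) for (a, b), y in data]
print(scale_free_ogd(data, 2))
print(scale_free_ogd(rescaled, 2))  # same predictions despite rescaling
```

Because rescaling a feature by a constant rescales its running maximum by the same constant, the normalized inputs, and therefore all predictions, are unchanged.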
Motion-Based Generator Model: Unsupervised Disentanglement of Appearance, Trackable and Intrackable Motions in Dynamic Patterns
Title | Motion-Based Generator Model: Unsupervised Disentanglement of Appearance, Trackable and Intrackable Motions in Dynamic Patterns |
Authors | Jianwen Xie, Ruiqi Gao, Zilong Zheng, Song-Chun Zhu, Ying Nian Wu |
Abstract | Dynamic patterns are characterized by complex spatial and motion patterns. Understanding dynamic patterns requires a disentangled representational model that separates the factorial components. A commonly used model for dynamic patterns is the state space model, where the state evolves over time according to a transition model and the state generates the observed image frames according to an emission model. To model the motions explicitly, it is natural for the model to be based on the motions or the displacement fields of the pixels. Thus in the emission model, we let the hidden state generate the displacement field, which warps the trackable component in the previous image frame to generate the next frame while adding a simultaneously emitted residual image to account for the change that cannot be explained by the deformation. The warping of the previous image is about the trackable part of the change of image frame, while the residual image is about the intrackable part of the image. We use a maximum likelihood algorithm to learn the model that iterates between inferring latent noise vectors that drive the transition model and updating the parameters given the inferred latent vectors. Meanwhile, we adopt a regularization term to penalize the norms of the residual images to encourage the model to explain the change of image frames by trackable motion. Unlike existing methods on dynamic patterns, we learn our model in an unsupervised setting without ground truth displacement fields. In addition, our model defines a notion of intrackability by the separation of the warped component and the residual component in each image frame. We show that our method can synthesize realistic dynamic patterns and disentangle appearance, trackable and intrackable motions. The learned models are useful for motion transfer, and they can naturally be adopted to define and measure the intrackability of a dynamic pattern. |
Tasks | |
Published | 2019-11-26 |
URL | https://arxiv.org/abs/1911.11294v1 |
https://arxiv.org/pdf/1911.11294v1.pdf | |
PWC | https://paperswithcode.com/paper/motion-based-generator-model-unsupervised |
Repo | |
Framework | |
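The emission step described in the abstract, warp the previous frame with a per-pixel displacement field (the trackable part), then add a residual image (the intrackable part), can be sketched concretely. This is a hedged simplification: the nearest-neighbor pull-warp and the tiny example below are illustrative, not the paper's differentiable warping or its data.

```python
def warp_frame(frame, displacement):
    """frame: H x W grid of floats; displacement: H x W grid of (dy, dx).
    Pull-warp: out[y][x] is read from frame at (y + dy, x + dx), clamped."""
    h, w = len(frame), len(frame[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dy, dx = displacement[y][x]
            sy = min(max(int(round(y + dy)), 0), h - 1)
            sx = min(max(int(round(x + dx)), 0), w - 1)
            out[y][x] = frame[sy][sx]
    return out

def emit_next_frame(frame, displacement, residual):
    # Next frame = warped trackable component + intrackable residual image.
    warped = warp_frame(frame, displacement)
    return [[warped[y][x] + residual[y][x] for x in range(len(frame[0]))]
            for y in range(len(frame))]

# A 2x2 frame shifted left by one pixel, with a zero residual:
frame = [[1.0, 2.0], [3.0, 4.0]]
disp = [[(0, 1), (0, 1)], [(0, 1), (0, 1)]]
zero = [[0.0, 0.0], [0.0, 0.0]]
print(emit_next_frame(frame, disp, zero))  # → [[2.0, 2.0], [4.0, 4.0]]
```

Penalizing the norm of `residual`, as the abstract describes, pushes the model to explain as much frame-to-frame change as possible through the displacement field alone.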
Learning Nonlinear Input-Output Maps with Dissipative Quantum Systems
Title | Learning Nonlinear Input-Output Maps with Dissipative Quantum Systems |
Authors | Jiayin Chen, Hendra I. Nurdin |
Abstract | In this paper, we develop a theory of learning nonlinear input-output maps with fading memory by dissipative quantum systems, as a quantum counterpart of the theory of approximating such maps using classical dynamical systems. The theory identifies the properties required for a class of dissipative quantum systems to be {\em universal}, in that any input-output map with fading memory can be approximated arbitrarily closely by an element of this class. We then introduce an example class of dissipative quantum systems that is provably universal. Numerical experiments illustrate that with a small number of qubits, this class can achieve comparable performance to classical learning schemes with a large number of tunable parameters. Further numerical analysis suggests that the exponentially increasing Hilbert space presents a potential resource for dissipative quantum systems to surpass classical learning schemes for input-output maps. |
Tasks | |
Published | 2019-01-07 |
URL | https://arxiv.org/abs/1901.01653v3 |
https://arxiv.org/pdf/1901.01653v3.pdf | |
PWC | https://paperswithcode.com/paper/learning-nonlinear-input-output-maps-with |
Repo | |
Framework | |
On the convergence of single-call stochastic extra-gradient methods
Title | On the convergence of single-call stochastic extra-gradient methods |
Authors | Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos |
Abstract | Variational inequalities have recently attracted considerable interest in machine learning as a flexible paradigm for models that go beyond ordinary loss function minimization (such as generative adversarial networks and related deep learning systems). In this setting, the optimal $\mathcal{O}(1/t)$ convergence rate for solving smooth monotone variational inequalities is achieved by the Extra-Gradient (EG) algorithm and its variants. Aiming to alleviate the cost of an extra gradient step per iteration (which can become quite substantial in deep learning applications), several algorithms have been proposed as surrogates to Extra-Gradient with a \emph{single} oracle call per iteration. In this paper, we develop a synthetic view of such algorithms, and we complement the existing literature by showing that they retain a $\mathcal{O}(1/t)$ ergodic convergence rate in smooth, deterministic problems. Subsequently, beyond the monotone deterministic case, we also show that the last iterate of single-call, \emph{stochastic} extra-gradient methods still enjoys a $\mathcal{O}(1/t)$ local convergence rate to solutions of \emph{non-monotone} variational inequalities that satisfy a second-order sufficient condition. |
Tasks | |
Published | 2019-08-22 |
URL | https://arxiv.org/abs/1908.08465v2 |
https://arxiv.org/pdf/1908.08465v2.pdf | |
PWC | https://paperswithcode.com/paper/on-the-convergence-of-single-call-stochastic |
Repo | |
Framework | |
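The single-call idea can be illustrated on a toy bilinear saddle problem. The sketch below is hedged, it is not the paper's code: `past_extra_gradient` reuses the previously computed gradient for the extrapolation step (in the spirit of Popov's method), so each iteration makes one new oracle call instead of Extra-Gradient's two.

```python
def grad(x, y):
    # Monotone operator of f(x, y) = x*y: descend in x, ascend in y.
    return y, -x

def extra_gradient(x, y, eta=0.1, steps=2000):
    for _ in range(steps):
        gx, gy = grad(x, y)                   # oracle call 1: extrapolate
        xh, yh = x - eta * gx, y - eta * gy
        gx, gy = grad(xh, yh)                 # oracle call 2: update
        x, y = x - eta * gx, y - eta * gy
    return x, y

def past_extra_gradient(x, y, eta=0.1, steps=2000):
    gx, gy = grad(x, y)                       # gradient stored across steps
    for _ in range(steps):
        xh, yh = x - eta * gx, y - eta * gy   # extrapolate with old gradient
        gx, gy = grad(xh, yh)                 # the only new oracle call
        x, y = x - eta * gx, y - eta * gy
    return x, y

for method in (extra_gradient, past_extra_gradient):
    x, y = method(1.0, 1.0)
    print(f"{method.__name__}: ({x:.5f}, {y:.5f})")  # both approach (0, 0)
```

On this problem both iterates contract toward the saddle point (0, 0), while the single-call variant halves the per-iteration gradient cost, which is the trade-off the paper analyzes.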
An Evaluation of the Human-Interpretability of Explanation
Title | An Evaluation of the Human-Interpretability of Explanation |
Authors | Isaac Lage, Emily Chen, Jeffrey He, Menaka Narayanan, Been Kim, Sam Gershman, Finale Doshi-Velez |
Abstract | Recent years have seen a boom in interest in machine learning systems that can provide a human-understandable rationale for their predictions or decisions. However, exactly what kinds of explanation are truly human-interpretable remains poorly understood. This work advances our understanding of what makes explanations interpretable under three specific tasks that users may perform with machine learning systems: simulation of the response, verification of a suggested response, and determining whether the correctness of a suggested response changes under a change to the inputs. Through carefully controlled human-subject experiments, we identify regularizers that can be used to optimize for the interpretability of machine learning systems. Our results show that the type of complexity matters: cognitive chunks (newly defined concepts) affect performance more than variable repetitions, and these trends are consistent across tasks and domains. This suggests that there may exist some common design principles for explanation systems. |
Tasks | |
Published | 2019-01-31 |
URL | https://arxiv.org/abs/1902.00006v2 |
https://arxiv.org/pdf/1902.00006v2.pdf | |
PWC | https://paperswithcode.com/paper/an-evaluation-of-the-human-interpretability |
Repo | |
Framework | |
Achieving Fairness in the Stochastic Multi-armed Bandit Problem
Title | Achieving Fairness in the Stochastic Multi-armed Bandit Problem |
Authors | Vishakha Patil, Ganesh Ghalme, Vineet Nair, Y. Narahari |
Abstract | We study an interesting variant of the stochastic multi-armed bandit problem, called the Fair-SMAB problem, where each arm is required to be pulled for at least a given fraction of the total available rounds. We investigate the interplay between learning and fairness in terms of a pre-specified vector denoting the fractions of guaranteed pulls. We define a fairness-aware regret, called $r$-Regret, that takes into account the above fairness constraints and naturally extends the conventional notion of regret. Our primary contribution is characterizing a class of Fair-SMAB algorithms by two parameters: the unfairness tolerance and the learning algorithm used as a black-box. We provide a fairness guarantee for this class that holds uniformly over time irrespective of the choice of the learning algorithm. In particular, when the learning algorithm is UCB1, we show that our algorithm achieves $O(\ln T)$ $r$-Regret. Finally, we evaluate the cost of fairness in terms of the conventional notion of regret. |
Tasks | |
Published | 2019-07-23 |
URL | https://arxiv.org/abs/1907.10516v2 |
https://arxiv.org/pdf/1907.10516v2.pdf | |
PWC | https://paperswithcode.com/paper/achieving-fairness-in-the-stochastic-multi |
Repo | |
Framework | |
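The two-parameter structure described in the abstract, a fairness rule wrapped around a black-box learner, can be sketched as a simple loop. This is a hedged illustration in the spirit of the paper, not its algorithm: whenever some arm has fallen below its guaranteed share of pulls, the fairness step pulls the most-starved arm; otherwise UCB1 chooses. The tolerance handling and tie-breaking here are assumptions.

```python
import math, random

def fair_ucb1(means, fractions, horizon, tolerance=0.0, seed=0):
    """Bernoulli bandit with per-arm guaranteed pull fractions; returns counts."""
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k
    sums = [0.0] * k
    for t in range(1, horizon + 1):
        starved = [i for i in range(k) if counts[i] < fractions[i] * t - tolerance]
        if starved:
            # Fairness step: pull the arm furthest below its guaranteed share.
            arm = min(starved, key=lambda i: counts[i] - fractions[i] * t)
        elif 0 in counts:
            arm = counts.index(0)            # initialize: pull each arm once
        else:
            # Learning step: UCB1 index = empirical mean + exploration bonus.
            arm = max(range(k), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        counts[arm] += 1
        sums[arm] += 1.0 if rng.random() < means[arm] else 0.0
    return counts

counts = fair_ucb1(means=[0.9, 0.5, 0.2], fractions=[0.1, 0.1, 0.1], horizon=10000)
print(counts)  # each arm gets roughly its 10% share; the surplus goes to arm 0
```

The fairness guarantee holds regardless of which learner fills the `else` branch, which mirrors the paper's black-box characterization.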
UCT-ADP Progressive Bias Algorithm for Solving Gomoku
Title | UCT-ADP Progressive Bias Algorithm for Solving Gomoku |
Authors | Xu Cao, Yanghao Lin |
Abstract | We combine Adaptive Dynamic Programming (ADP), a reinforcement learning method, and the UCB applied to trees (UCT) algorithm with a more powerful heuristic function based on the Progressive Bias method and two pruning strategies for the traditional board game Gomoku. For the Adaptive Dynamic Programming part, we train a shallow feed-forward neural network to give a quick evaluation of Gomoku board situations. UCT is a general approach in MCTS as a tree policy. Our framework uses UCT to balance the exploration and exploitation of Gomoku game trees, while we also apply powerful pruning strategies and the heuristic function to re-select the available 2-adjacent grids of the state, and use ADP instead of simulation to give estimated values of expanded nodes. Experimental results show that this method can eliminate the search depth defect of the simulation process and converge to the correct value faster than UCT alone. This approach can be applied to design new Gomoku AIs and to solve other Gomoku-like board games. |
Tasks | |
Published | 2019-12-11 |
URL | https://arxiv.org/abs/1912.05407v1 |
https://arxiv.org/pdf/1912.05407v1.pdf | |
PWC | https://paperswithcode.com/paper/uct-adp-progressive-bias-algorithm-for |
Repo | |
Framework | |
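The selection rule the abstract combines, a standard UCT exploration term plus a progressive-bias term that injects a heuristic evaluation whose influence fades with visits, can be sketched directly. This is a hedged illustration: the constants, the `heuristic` field (which in the paper's setting would come from the ADP network), and the bias decay `1/(n+1)` are placeholder choices, not taken from the paper.

```python
import math

def uct_progressive_bias(parent_visits, children, c=1.4, w=1.0):
    """children: list of dicts with 'visits', 'value_sum', 'heuristic'.
    Returns the index of the child maximizing UCT + progressive bias."""
    def score(ch):
        if ch["visits"] == 0:
            return float("inf")              # expand unvisited children first
        exploit = ch["value_sum"] / ch["visits"]
        explore = c * math.sqrt(math.log(parent_visits) / ch["visits"])
        bias = w * ch["heuristic"] / (ch["visits"] + 1)  # fades with visits
        return exploit + explore + bias
    return max(range(len(children)), key=lambda i: score(children[i]))

children = [
    {"visits": 10, "value_sum": 6.0, "heuristic": 0.1},
    {"visits": 2,  "value_sum": 1.0, "heuristic": 0.9},
    {"visits": 0,  "value_sum": 0.0, "heuristic": 0.5},
]
print(uct_progressive_bias(parent_visits=12, children=children))  # → 2 (unvisited)
```

Replacing the random-rollout value with a quick network evaluation at expanded nodes, as the abstract proposes, is what removes the simulation's search-depth defect.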
Is coding a relevant metaphor for building AI? A commentary on “Is coding a relevant metaphor for the brain?”, by Romain Brette
Title | Is coding a relevant metaphor for building AI? A commentary on “Is coding a relevant metaphor for the brain?”, by Romain Brette |
Authors | Adam Santoro, Felix Hill, David Barrett, David Raposo, Matthew Botvinick, Timothy Lillicrap |
Abstract | Brette contends that the neural coding metaphor is an invalid basis for theories of what the brain does. Here, we argue that it is an insufficient guide for building an artificial intelligence that learns to accomplish short- and long-term goals in a complex, changing environment. |
Tasks | |
Published | 2019-04-18 |
URL | http://arxiv.org/abs/1904.10396v1 |
http://arxiv.org/pdf/1904.10396v1.pdf | |
PWC | https://paperswithcode.com/paper/is-coding-a-relevant-metaphor-for-building-ai |
Repo | |
Framework | |