Paper Group NANR 110
Equivariant Entity-Relationship Networks. Adversarial Inductive Transfer Learning with input and output space adaptation. Are Transformers universal approximators of sequence-to-sequence functions? Generalizing Reinforcement Learning to Unseen Actions. Leveraging Simple Model Predictions for Enhancing its Performance. Learning to Transfer via Modelling Multi-level Task Dependency. LEARNING DIFFICULT PERCEPTUAL TASKS WITH HODGKIN-HUXLEY NETWORKS. Learning Multi-Agent Communication Through Structured Attentive Reasoning. S2VG: Soft Stochastic Value Gradient method. Learning to Defense by Learning to Attack. Learning Functionally Decomposed Hierarchies for Continuous Navigation Tasks. Through the Lens of Neural Network: Analyzing Neural QA Models via Quantized Latent Representation. Denoising and Regularization via Exploiting the Structural Bias of Convolutional Generators. Learning with Protection: Rejection of Suspicious Samples under Adversarial Environment. Laplacian Denoising Autoencoder.
Equivariant Entity-Relationship Networks
Title | Equivariant Entity-Relationship Networks |
Authors | Anonymous |
Abstract | Due to its extensive use in databases, the relational model is ubiquitous in representing big data. However, recent progress in deep learning with relational data has been focused on (knowledge) graphs. In this paper we propose Equivariant Entity-Relationship Networks, the class of parameter-sharing neural networks derived from the entity-relationship model. We prove that our proposed feed-forward layer is the most expressive linear layer under the given equivariance constraints, and subsumes recently introduced equivariant models for sets, exchangeable tensors, and graphs. The proposed feed-forward layer has linear complexity in the data and can be used for both inductive and transductive reasoning about relational databases, including database embedding and the prediction of missing records. This provides a principled theoretical foundation for the application of deep learning to one of the most abundant forms of data. |
Tasks | Knowledge Graphs |
Published | 2020-01-01 |
URL | https://openreview.net/forum?id=Hkx6p6EFDr |
PDF | https://openreview.net/pdf?id=Hkx6p6EFDr |
PWC | https://paperswithcode.com/paper/equivariant-entity-relationship-networks |
Repo | |
Framework | |
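As an illustration of the kind of parameter-sharing equivariant layer the abstract describes, the sketch below implements the special case it subsumes for a single two-entity relation stored as an exchangeable matrix (the general entity-relationship layer ties parameters analogously across all relations). The four-weight tying pattern is a standard construction, not code from the paper.

```python
import torch
import torch.nn as nn

class ExchangeableMatrixLayer(nn.Module):
    """Linear layer equivariant to independent row/column permutations of X."""
    def __init__(self):
        super().__init__()
        # four tied weights: identity, row-pooled, column-pooled, globally pooled
        self.w = nn.Parameter(torch.randn(4))
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, X):                        # X: (n1, n2) relation table
        row_mean = X.mean(dim=1, keepdim=True)   # (n1, 1), broadcast over columns
        col_mean = X.mean(dim=0, keepdim=True)   # (1, n2), broadcast over rows
        all_mean = X.mean()                      # scalar, broadcast everywhere
        return (self.w[0] * X + self.w[1] * row_mean
                + self.w[2] * col_mean + self.w[3] * all_mean + self.b)

# equivariance check: permuting input rows permutes output rows the same way
layer = ExchangeableMatrixLayer()
X, perm = torch.randn(5, 7), torch.randperm(5)
assert torch.allclose(layer(X[perm]), layer(X)[perm], atol=1e-5)
```

Note that the number of free parameters is independent of the table size, which is what gives the layer its linear complexity in the data.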
Adversarial Inductive Transfer Learning with input and output space adaptation
Title | Adversarial Inductive Transfer Learning with input and output space adaptation |
Authors | Anonymous |
Abstract | We propose Adversarial Inductive Transfer Learning (AITL), a method for addressing discrepancies in input and output spaces between source and target domains. AITL utilizes adversarial domain adaptation and multi-task learning to address these discrepancies. Our motivating application is pharmacogenomics where the goal is to predict drug response in patients using their genomic information. The challenge is that clinical data (i.e. patients) with drug response outcome is very limited, creating a need for transfer learning to bridge the gap between large pre-clinical pharmacogenomics datasets (e.g. cancer cell lines) and clinical datasets. Discrepancies exist between 1) the genomic data of pre-clinical and clinical datasets (the input space), and 2) the different measures of the drug response (the output space). To the best of our knowledge, AITL is the first adversarial inductive transfer learning method to address both input and output discrepancies. Experimental results indicate that AITL outperforms state-of-the-art pharmacogenomics and transfer learning baselines and may guide precision oncology more accurately. |
Tasks | Domain Adaptation, Multi-Task Learning, Transfer Learning |
Published | 2020-01-01 |
URL | https://openreview.net/forum?id=ryeRn3NtPH |
PDF | https://openreview.net/pdf?id=ryeRn3NtPH |
PWC | https://paperswithcode.com/paper/adversarial-inductive-transfer-learning-with |
Repo | |
Framework | |
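The abstract names the two ingredients AITL combines; the hedged sketch below shows one conventional way to wire them up: a gradient-reversal domain discriminator for the input-space discrepancy, and separate regression/classification heads for the two output spaces (continuous response for cell lines, binary response for patients). All dimensions, loss weights, and module names are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

feature = nn.Sequential(nn.Linear(1000, 128), nn.ReLU())   # shared genomic encoder
domain_head = nn.Linear(128, 2)                             # pre-clinical vs clinical
reg_head = nn.Linear(128, 1)                                # continuous response (source)
clf_head = nn.Linear(128, 2)                                # binary response (target)

def aitl_loss(x_src, y_src, x_tgt, y_tgt, lam=0.1):
    h_src, h_tgt = feature(x_src), feature(x_tgt)
    # multi-task losses on the two different output spaces
    loss_reg = nn.functional.mse_loss(reg_head(h_src).squeeze(-1), y_src)
    loss_clf = nn.functional.cross_entropy(clf_head(h_tgt), y_tgt)
    # adversarial alignment of the input-space representations
    h = torch.cat([h_src, h_tgt], dim=0)
    d = torch.cat([torch.zeros(len(h_src)), torch.ones(len(h_tgt))]).long()
    loss_dom = nn.functional.cross_entropy(domain_head(GradReverse.apply(h, lam)), d)
    return loss_reg + loss_clf + loss_dom
```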
Are Transformers universal approximators of sequence-to-sequence functions?
Title | Are Transformers universal approximators of sequence-to-sequence functions? |
Authors | Anonymous |
Abstract | Despite the widespread adoption of Transformer models for NLP tasks, the expressive power of these models is not well-understood. In this paper, we establish that Transformer models are universal approximators of continuous permutation equivariant sequence-to-sequence functions with compact support, which is quite surprising given the amount of shared parameters in these models. Furthermore, using positional encodings, we circumvent the restriction of permutation equivariance, and show that Transformer models can universally approximate arbitrary continuous sequence-to-sequence functions on a compact domain. Interestingly, our proof techniques clearly highlight the different roles of the self-attention and the feed-forward layers in Transformers. In particular, we prove that fixed width self-attention layers can compute contextual mappings of the input sequences, playing a key role in the universal approximation property of Transformers. Based on this insight from our analysis, we consider other architectures that can compute contextual mappings and empirically evaluate them. |
Tasks | |
Published | 2020-01-01 |
URL | https://openreview.net/forum?id=ByxRM0Ntvr |
PDF | https://openreview.net/pdf?id=ByxRM0Ntvr |
PWC | https://paperswithcode.com/paper/are-transformers-universal-approximators-of |
Repo | |
Framework | |
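Read as a formal statement, the abstract's claim has roughly the following shape; the specific distance below is an assumed formalization of "universal approximation on a compact domain", since the abstract itself does not fix the metric.

```latex
% Paraphrase of the claimed guarantee; d_p is an assumed choice of distance.
\[
  \forall\, \varepsilon > 0:\quad
  \exists\ \text{a Transformer } g \ \text{with}\quad
  d_p(f, g) \;=\; \Bigl( \int_{\mathcal{D}} \lVert f(X) - g(X) \rVert_p^p \, dX \Bigr)^{1/p} \;\le\; \varepsilon,
\]
% where f is any continuous, permutation-equivariant sequence-to-sequence map with
% compact support D; adding positional encodings removes the equivariance restriction.
```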
Generalizing Reinforcement Learning to Unseen Actions
Title | Generalizing Reinforcement Learning to Unseen Actions |
Authors | Anonymous |
Abstract | A fundamental trait of intelligence is the ability to achieve goals in the face of novel circumstances. In this work, we address one such setting which requires solving a task with a novel set of actions. Empowering machines with this ability requires generalization in the way an agent perceives its available actions along with the way it uses these actions to solve tasks. Hence, we propose a framework to enable generalization over both these aspects: understanding an action’s functionality, and using actions to solve tasks through reinforcement learning. Specifically, an agent interprets an action’s behavior using unsupervised representation learning over a collection of data samples reflecting the diverse properties of that action. We employ a reinforcement learning architecture which works over these action representations, and propose regularization metrics essential for enabling generalization in a policy. We illustrate the generalizability of the representation learning method and policy by enabling zero-shot generalization to previously unseen actions in challenging sequential decision-making environments. Our results and videos can be found at sites.google.com/view/action-generalization/ |
Tasks | Decision Making, Representation Learning, Unsupervised Representation Learning |
Published | 2020-01-01 |
URL | https://openreview.net/forum?id=rkx35lHKwB |
PDF | https://openreview.net/pdf?id=rkx35lHKwB |
PWC | https://paperswithcode.com/paper/generalizing-reinforcement-learning-to-unseen |
Repo | |
Framework | |
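A minimal sketch of the policy side of this setup, under the assumption (not spelled out in the abstract) that the policy scores each available action's learned embedding against a state encoding; unseen actions can then be used zero-shot by plugging in their embeddings.

```python
import torch
import torch.nn as nn

class ActionSetPolicy(nn.Module):
    def __init__(self, state_dim, act_emb_dim, hidden=64):
        super().__init__()
        self.state_net = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.act_net = nn.Sequential(nn.Linear(act_emb_dim, hidden), nn.ReLU())

    def forward(self, state, action_embs):
        # state: (state_dim,); action_embs: (num_available_actions, act_emb_dim),
        # each row learned offline by unsupervised representation learning
        s = self.state_net(state)                      # (hidden,)
        a = self.act_net(action_embs)                  # (k, hidden)
        logits = a @ s                                 # one score per available action
        return torch.distributions.Categorical(logits=logits)

policy = ActionSetPolicy(state_dim=8, act_emb_dim=16)
dist = policy(torch.randn(8), torch.randn(5, 16))      # 5 actions available this step
action_index = dist.sample()
```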
Leveraging Simple Model Predictions for Enhancing its Performance
Title | Leveraging Simple Model Predictions for Enhancing its Performance |
Authors | Anonymous |
Abstract | There has been recent interest in improving the performance of simple models for multiple reasons such as interpretability, robust learning from small data, deployment in memory-constrained settings, as well as environmental considerations. In this paper, we propose a novel method, SRatio, that can utilize information from high-performing complex models (viz. deep neural networks, boosted trees, random forests) to reweight a training dataset for a potentially low-performing simple model such as a decision tree or a shallow network, enhancing its performance. Our method also leverages the per-sample hardness estimate of the simple model, which is not the case with prior works that primarily consider the complex model’s confidences/predictions; our approach is thus conceptually novel. Moreover, we generalize and formalize the concept of attaching probes to intermediate layers of a neural network, which was one of the main ideas in previous work \citep{profweight}, to other commonly used classifiers and incorporate this into our method. The benefit of these contributions is seen in experiments on 6 UCI datasets and CIFAR-10, where we outperform competitors in a majority (16 out of 27) of the cases and tie for best performance in the remaining cases. In fact, in a couple of cases, we even approach the complex model’s performance. We also conduct further experiments to validate our assertions and intuitively understand why our method works. Theoretically, we motivate our approach by showing that the weighted loss minimized by simple models using our weighting upper-bounds the loss of the complex model. |
Tasks | |
Published | 2020-01-01 |
URL | https://openreview.net/forum?id=HyxQbaEYPr |
PDF | https://openreview.net/pdf?id=HyxQbaEYPr |
PWC | https://paperswithcode.com/paper/leveraging-simple-model-predictions-for-1 |
Repo | |
Framework | |
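The abstract does not state the weighting formula. The sketch below is a guess consistent with the name SRatio and with the two quantities the text mentions (the complex model's confidence and the simple model's own per-sample confidence/hardness); treat it as an illustration rather than the paper's method.

```python
import numpy as np

def sratio_weights(p_complex, p_simple, eps=1e-6, w_max=10.0):
    """p_complex, p_simple: probabilities each model assigns to the true label,
    shape (n_samples,). Returns per-sample weights for retraining the simple model."""
    w = p_complex / np.clip(p_simple, eps, None)   # up-weight points the complex model
    return np.clip(w, 0.0, w_max)                  # gets right but the simple model finds hard

# usage with a hypothetical scikit-learn simple model:
# simple_model.fit(X_train, y_train, sample_weight=sratio_weights(p_c, p_s))
```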
Learning to Transfer via Modelling Multi-level Task Dependency
Title | Learning to Transfer via Modelling Multi-level Task Dependency |
Authors | Anonymous |
Abstract | Multi-task learning has been successful in modeling multiple related tasks with large, carefully curated labeled datasets. By leveraging the relationships among different tasks, a multi-task learning framework can improve performance significantly. However, most existing works assume that the predefined tasks are related to each other, which limits their real-world applications, because real-world tasks are rarely so closely related. Besides, the relationships among tasks have been ignored by most current methods. Along this line, we propose a novel multi-task learning framework - Learning To Transfer Via Modelling Multi-level Task Dependency - which constructs attention-based dependency relationships among different tasks. At the same time, the dependency relationships can be used to guide what knowledge should be transferred, thus further improving the performance of our model. To show the effectiveness of our model and the importance of considering multi-level dependency relationships, we conduct experiments on several public datasets, on which we obtain significant improvements over current methods. |
Tasks | Multi-Task Learning |
Published | 2020-01-01 |
URL | https://openreview.net/forum?id=BklhsgSFvB |
PDF | https://openreview.net/pdf?id=BklhsgSFvB |
PWC | https://paperswithcode.com/paper/learning-to-transfer-via-modelling-multi |
Repo | |
Framework | |
LEARNING DIFFICULT PERCEPTUAL TASKS WITH HODGKIN-HUXLEY NETWORKS
Title | LEARNING DIFFICULT PERCEPTUAL TASKS WITH HODGKIN-HUXLEY NETWORKS |
Authors | Anonymous |
Abstract | This paper demonstrates that a computational neural network model using ion channel-based conductances to transmit information can solve standard computer vision datasets at near state-of-the-art performance. Although not fully biologically accurate, this model incorporates fundamental biophysical principles underlying the control of membrane potential and the processing of information by Ohmic ion channels. The key computational step employs Conductance-Weighted Averaging (CWA) in place of the traditional affine transformation, representing a fundamentally different computational principle. Importantly, CWA-based networks are self-normalizing and range-limited. We also demonstrate for the first time that a network with excitatory and inhibitory neurons and nonnegative synapse strengths can successfully solve computer vision problems. Although CWA models do not yet surpass the current state-of-the-art in deep learning, the results are competitive on CIFAR-10. There remain avenues for improving these networks, e.g. by more closely modeling ion channel function and the connectivity patterns of excitatory and inhibitory neurons found in the brain. |
Tasks | |
Published | 2020-01-01 |
URL | https://openreview.net/forum?id=H1lMogrKDH |
PDF | https://openreview.net/pdf?id=H1lMogrKDH |
PWC | https://paperswithcode.com/paper/learning-difficult-perceptual-tasks-with |
Repo | |
Framework | |
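A hedged sketch of Conductance-Weighted Averaging as the abstract describes it: each output is a conductance-weighted average of fixed reversal potentials rather than an affine transform, so with nonnegative synapse strengths the output is automatically normalized and bounded by the reversal potentials. The reversal potentials (+1/-1), the leak term, and the softplus parameterization are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CWALayer(nn.Module):
    def __init__(self, n_in, n_out, e_exc=1.0, e_inh=-1.0, g_leak=1.0):
        super().__init__()
        # separate nonnegative synapse strengths for excitatory and inhibitory inputs
        self.w_exc = nn.Parameter(torch.rand(n_in, n_out))
        self.w_inh = nn.Parameter(torch.rand(n_in, n_out))
        self.e_exc, self.e_inh, self.g_leak = e_exc, e_inh, g_leak

    def forward(self, x):                      # x >= 0: presynaptic activity, (batch, n_in)
        g_exc = x @ F.softplus(self.w_exc)     # excitatory conductances, nonnegative
        g_inh = x @ F.softplus(self.w_inh)     # inhibitory conductances, nonnegative
        num = g_exc * self.e_exc + g_inh * self.e_inh   # drive toward reversal potentials
        den = g_exc + g_inh + self.g_leak               # total conductance (never zero)
        return num / den                       # bounded in (e_inh, e_exc): self-normalizing

y = CWALayer(784, 100)(torch.rand(32, 784))
```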
Learning Multi-Agent Communication Through Structured Attentive Reasoning
Title | Learning Multi-Agent Communication Through Structured Attentive Reasoning |
Authors | Anonymous |
Abstract | Learning communication via deep reinforcement learning has recently been shown to be an effective way to solve cooperative multi-agent tasks. However, learning which communicated information is beneficial for each agent’s decision-making remains a challenging task. In order to address this problem, we introduce a fully differentiable framework for communication and reasoning, enabling agents to solve cooperative tasks in partially-observable environments. The framework is designed to facilitate explicit reasoning between agents, through a novel memory-based attention network that can learn selectively from its past memories. The model communicates through a series of reasoning steps that decompose each agent’s intentions into learned representations that are used first to compute the relevance of communicated information, and second to extract information from memories given newly received information. By selectively interacting with new information, the model effectively learns a communication protocol directly, in an end-to-end manner. We empirically demonstrate the strength of our model in cooperative multi-agent tasks, where inter-agent communication and reasoning over prior information substantially improves performance compared to baselines. |
Tasks | Decision Making |
Published | 2020-01-01 |
URL | https://openreview.net/forum?id=H1lVIxHtPS |
PDF | https://openreview.net/pdf?id=H1lVIxHtPS |
PWC | https://paperswithcode.com/paper/learning-multi-agent-communication-through |
Repo | |
Framework | |
S2VG: Soft Stochastic Value Gradient method
Title | S2VG: Soft Stochastic Value Gradient method |
Authors | Anonymous |
Abstract | Model-based reinforcement learning (MBRL) has shown its advantages in sample efficiency over model-free reinforcement learning (MFRL). Despite the impressive results it achieves, it still faces a trade-off between the ease of data generation and model bias. In this paper, we propose a simple and elegant model-based reinforcement learning algorithm called the soft stochastic value gradient method (S2VG). S2VG combines the merits of maximum-entropy reinforcement learning and MBRL, and exploits both real and imaginary data. In particular, we embed the model in the policy training and learn $Q$ and $V$ functions from the real (or imaginary) data set. Such embedding enables us to compute an analytic policy gradient through back-propagation rather than likelihood-ratio estimation, which can reduce the variance of the gradient estimation. We name our algorithm the Soft Stochastic Value Gradient method to indicate its connection with the well-known stochastic value gradient method in \citep{heess2015Learning}. |
Tasks | |
Published | 2020-01-01 |
URL | https://openreview.net/forum?id=r1l-HTNtDB |
PDF | https://openreview.net/pdf?id=r1l-HTNtDB |
PWC | https://paperswithcode.com/paper/s2vg-soft-stochastic-value-gradient-method |
Repo | |
Framework | |
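A sketch of the core idea as described in the abstract: because the learned dynamics model and a reparameterized policy are differentiable, a soft (entropy-regularized) value estimate can be differentiated directly through the model by back-propagation, avoiding likelihood-ratio gradients. The one-step rollout, the module signatures, and the temperature alpha are assumptions for illustration, not the paper's algorithm.

```python
import torch

def soft_value_gradient_loss(policy, model, value_fn, states, alpha=0.2, gamma=0.99):
    """policy(s) -> (action, log_prob) via reparameterization; model(s, a) -> (s', r)."""
    a, logp = policy(states)                 # reparameterized sample, gradients flow to policy
    next_s, r = model(states, a)             # differentiable imagined transition
    soft_v = r - alpha * logp + gamma * value_fn(next_s)
    return -soft_v.mean()                    # minimizing this ascends the soft value

# loss.backward() then propagates d(soft_v)/d(policy params) analytically
# through the dynamics model and the learned value function.
```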
Learning to Defense by Learning to Attack
Title | Learning to Defense by Learning to Attack |
Authors | Anonymous |
Abstract | Adversarial training provides a principled approach for training robust neural networks. From an optimization perspective, adversarial training is essentially solving a minimax robust optimization problem. The outer minimization is trying to learn a robust classifier, while the inner maximization is trying to generate adversarial samples. Unfortunately, such a minimax problem is very difficult to solve due to the lack of convex-concave structure. This work proposes a new adversarial training method based on a generic learning-to-learn (L2L) framework. Specifically, instead of applying the existing hand-designed algorithms for the inner problem, we learn an optimizer, which is parametrized as a convolutional neural network. At the same time, a robust classifier is learned to defend against the adversarial attacks generated by the learned optimizer. Our experiments over the CIFAR-10 and CIFAR-100 datasets demonstrate that L2L outperforms existing adversarial training methods in both classification accuracy and computational efficiency. Moreover, our L2L framework can be extended to generative adversarial imitation learning and stabilizes its training. |
Tasks | Adversarial Attack, Imitation Learning |
Published | 2020-01-01 |
URL | https://openreview.net/forum?id=HklsthVYDH |
PDF | https://openreview.net/pdf?id=HklsthVYDH |
PWC | https://paperswithcode.com/paper/learning-to-defense-by-learning-to-attack-1 |
Repo | |
Framework | |
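A hedged sketch of the learning-to-learn adversarial training loop the abstract outlines: a learned attacker network maps an input and its loss gradient to a bounded perturbation (the inner maximization), and the classifier is trained on the resulting adversarial examples (the outer minimization). The attacker's input format, the tanh bound, and epsilon are illustrative choices, not the paper's exact parametrization.

```python
import torch
import torch.nn.functional as F

def l2l_adv_step(classifier, attacker, clf_opt, atk_opt, x, y, eps=8 / 255):
    # gradient of the classification loss w.r.t. the input, fed to the attacker
    x = x.clone().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(classifier(x), y), x)[0]
    delta = eps * torch.tanh(attacker(torch.cat([x, grad], dim=1)))  # bounded perturbation

    # inner problem: the attacker maximizes the classifier's loss
    atk_loss = -F.cross_entropy(classifier(x + delta), y)
    atk_opt.zero_grad(); atk_loss.backward(); atk_opt.step()

    # outer problem: the classifier minimizes loss on the generated adversarial examples
    clf_loss = F.cross_entropy(classifier(x + delta.detach()), y)
    clf_opt.zero_grad(); clf_loss.backward(); clf_opt.step()
```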
Learning Functionally Decomposed Hierarchies for Continuous Navigation Tasks
Title | Learning Functionally Decomposed Hierarchies for Continuous Navigation Tasks |
Authors | Anonymous |
Abstract | Solving long-horizon sequential decision-making tasks in environments with sparse rewards is a longstanding problem in reinforcement learning (RL) research. Hierarchical Reinforcement Learning (HRL) has held the promise to enhance the capabilities of RL agents via operation on different levels of temporal abstraction. Despite the success of recent works in dealing with inherent nonstationarity and sample complexity, it remains difficult to generalize to unseen environments and to transfer different layers of the policy to other agents. In this paper, we propose a novel HRL architecture, Hierarchical Decompositional Reinforcement Learning (HiDe), which decomposes the hierarchical layers into independent subtasks, yet allows for joint training of all layers in an end-to-end manner. The main insight is to combine a control policy on a lower level with an image-based planning policy on a higher level. We evaluate our method on various complex continuous control tasks for navigation, demonstrating that generalization across environments and transfer of higher-level policies can be achieved. See videos at https://sites.google.com/view/hide-rl |
Tasks | Continuous Control, Decision Making, Hierarchical Reinforcement Learning |
Published | 2020-01-01 |
URL | https://openreview.net/forum?id=r1nSxrKPH |
PDF | https://openreview.net/pdf?id=r1nSxrKPH |
PWC | https://paperswithcode.com/paper/learning-functionally-decomposed-hierarchies |
Repo | |
Framework | |
Through the Lens of Neural Network: Analyzing Neural QA Models via Quantized Latent Representation
Title | Through the Lens of Neural Network: Analyzing Neural QA Models via Quantized Latent Representation |
Authors | Anonymous |
Abstract | In recent years, deep learning models have remained black boxes, with a decision-making process that is still opaque to humans. In this work, we explore the possibility of understanding how the machine thinks when doing question-answering tasks. In general, words are represented by continuous latent representations in neural-based QA models. Here we train the QA models with discrete latent representations, so each word in the context is also a token in the model. In this way, we can know what a word sequence in the context looks like through the lens of the QA models. We analyze the QA models trained on QuAC (Question Answering in Context) and CoQA (A Conversational Question Answering Challenge) and summarize several rules the models obey when dealing with this kind of QA task. We also find that the models maintain much of the original performance after some hidden layers are quantized. |
Tasks | Decision Making, Question Answering |
Published | 2020-01-01 |
URL | https://openreview.net/forum?id=HkgeUeHFPB |
PDF | https://openreview.net/pdf?id=HkgeUeHFPB |
PWC | https://paperswithcode.com/paper/through-the-lens-of-neural-network-analyzing |
Repo | |
Framework | |
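One standard way to realize the discrete latent representations the abstract mentions is vector quantization of hidden states with a straight-through estimator, sketched below as an assumption about the mechanism; the resulting code indices are the discrete "tokens" one can inspect and interpret.

```python
import torch
import torch.nn as nn

class Quantizer(nn.Module):
    def __init__(self, num_codes=512, dim=256):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, h):                             # h: (seq_len, dim) hidden states
        d = torch.cdist(h, self.codebook.weight)      # distance to every codebook entry
        idx = d.argmin(dim=-1)                        # discrete "token" per position
        q = self.codebook(idx)                        # quantized hidden states
        q = h + (q - h).detach()                      # straight-through gradient
        return q, idx                                 # idx is what one inspects

q, idx = Quantizer()(torch.randn(20, 256))
```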
Denoising and Regularization via Exploiting the Structural Bias of Convolutional Generators
Title | Denoising and Regularization via Exploiting the Structural Bias of Convolutional Generators |
Authors | Anonymous |
Abstract | Convolutional Neural Networks (CNNs) have emerged as highly successful tools for image generation, recovery, and restoration. This success is often attributed to large amounts of training data. On the contrary, a number of recent experimental results suggest that a major contributing factor to this success is that convolutional networks impose strong prior assumptions about natural images. A surprising experiment that highlights this structural bias towards simple, natural images is that one can remove various kinds of noise and corruptions from a corrupted natural image by simply fitting (via gradient descent) a randomly initialized, over-parameterized convolutional generator to this single image. While this over-parameterized model can eventually fit the corrupted image perfectly, surprisingly after a few iterations of gradient descent one obtains the uncorrupted image, without using any training data. This intriguing phenomenon has enabled state-of-the-art CNN-based denoising as well as regularization in linear inverse problems such as compressive sensing. In this paper we take a step towards demystifying this experimental phenomenon by attributing this effect to particular architectural choices of convolutional networks, namely fixed convolutional operations. We then formally characterize the dynamics of fitting a two-layer convolutional generator to a noisy signal and prove that early-stopped gradient descent denoises/regularizes. This result relies on showing that convolutional generators fit the structured part of an image significantly faster than the corrupted portion. |
Tasks | Compressive Sensing, Denoising, Image Generation |
Published | 2020-01-01 |
URL | https://openreview.net/forum?id=HJeqhA4YDS |
PDF | https://openreview.net/pdf?id=HJeqhA4YDS |
PWC | https://paperswithcode.com/paper/denoising-and-regularization-via-exploiting-1 |
Repo | |
Framework | |
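The experiment the abstract analyzes can be sketched as follows: fit a randomly initialized, over-parameterized convolutional generator to a single noisy image by gradient descent and stop early, so the structured content (fitted fast) is kept while the noise (fitted slowly) is not. The generator architecture and step count below are illustrative; the paper's analysis concerns generators built from fixed convolutional operations.

```python
import torch
import torch.nn as nn

def early_stopped_denoise(noisy, steps=500, lr=0.01):
    # noisy: (1, 3, H, W); fixed random input code z, over-parameterized conv generator
    z = torch.randn(1, 64, noisy.shape[2], noisy.shape[3])
    gen = nn.Sequential(
        nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.BatchNorm2d(128),
        nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(), nn.BatchNorm2d(128),
        nn.Conv2d(128, 3, 3, padding=1), nn.Sigmoid(),
    )
    opt = torch.optim.Adam(gen.parameters(), lr=lr)
    for _ in range(steps):                  # early stopping: do NOT run to convergence
        opt.zero_grad()
        loss = ((gen(z) - noisy) ** 2).mean()
        loss.backward()
        opt.step()
    return gen(z).detach()                  # early-stopped output approximates the clean image
```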
Learning with Protection: Rejection of Suspicious Samples under Adversarial Environment
Title | Learning with Protection: Rejection of Suspicious Samples under Adversarial Environment |
Authors | Anonymous |
Abstract | We propose a novel framework for avoiding the misclassification of data by using a framework of learning with rejection and adversarial examples. Recent developments in machine learning have opened new opportunities for industrial innovations such as self-driving cars. However, many machine learning models are vulnerable to adversarial attacks and industrial practitioners are concerned about accidents arising from misclassification. To avoid critical misclassifications, we define a sample that is likely to be mislabeled as a suspicious sample. Our main idea is to apply a framework of learning with rejection and adversarial examples to assist in the decision making for such suspicious samples. We propose two frameworks, learning with rejection under adversarial attacks and learning with protection. Learning with rejection under adversarial attacks is a naive extension of the learning with rejection framework for handling adversarial examples. Learning with protection is a practical application of learning with rejection under adversarial attacks. This algorithm transforms the original multi-class classification problem into a binary classification for a specific class, and we reject suspicious samples to protect a specific label. We demonstrate the effectiveness of the proposed method in experiments. |
Tasks | Decision Making, Self-Driving Cars |
Published | 2020-01-01 |
URL | https://openreview.net/forum?id=HylKvyHYwS |
PDF | https://openreview.net/pdf?id=HylKvyHYwS |
PWC | https://paperswithcode.com/paper/learning-with-protection-rejection-of |
Repo | |
Framework | |
Laplacian Denoising Autoencoder
Title | Laplacian Denoising Autoencoder |
Authors | Anonymous |
Abstract | While deep neural networks have been shown to perform remarkably well in many machine learning tasks, labeling a large amount of supervised data is usually very costly to scale. Therefore, learning robust representations with unlabeled data is critical in relieving human effort and vital for many downstream applications. Recent advances in unsupervised and self-supervised learning approaches for visual data benefit greatly from domain knowledge. Here we are interested in a more generic unsupervised learning framework that can be easily generalized to other domains. In this paper, we propose to learn data representations with a novel type of denoising autoencoder, where the noisy input data are generated by corrupting the clean data in the gradient domain. This can be naturally generalized to span multiple scales with a Laplacian pyramid representation of the input data. In this way, the agent has to learn more robust representations that can exploit the underlying data structures across multiple scales. Experiments on several visual benchmarks demonstrate that better representations can be learned with the proposed approach, compared to its counterpart with single-scale corruption. Besides, we also demonstrate that the learned representations perform well when transferring to other vision tasks. |
Tasks | Denoising |
Published | 2020-01-01 |
URL | https://openreview.net/forum?id=HygHtpVtPH |
PDF | https://openreview.net/pdf?id=HygHtpVtPH |
PWC | https://paperswithcode.com/paper/laplacian-denoising-autoencoder |
Repo | |
Framework | |
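A hedged sketch of the corruption scheme the abstract describes: noise is injected in the gradient domain, here via a multi-scale Laplacian-pyramid decomposition, and a denoising autoencoder is then trained to map the corrupted image back to the clean one. Pyramid depth and noise level are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def laplacian_corrupt(img, levels=3, sigma=0.1):
    """img: (B, C, H, W) with H, W divisible by 2**levels."""
    pyramid, cur = [], img
    for _ in range(levels):
        down = F.avg_pool2d(cur, 2)
        up = F.interpolate(down, scale_factor=2, mode="bilinear", align_corners=False)
        pyramid.append(cur - up)          # band-pass (gradient-domain) detail
        cur = down
    # corrupt the detail bands, keep the low-pass residual clean
    pyramid = [band + sigma * torch.randn_like(band) for band in pyramid]
    # collapse the pyramid back into an image
    out = cur
    for band in reversed(pyramid):
        out = F.interpolate(out, scale_factor=2, mode="bilinear", align_corners=False) + band
    return out

# training pair for the denoising autoencoder: (laplacian_corrupt(x), x)
```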