Paper Group NANR 101
Variational Hetero-Encoder Randomized GANs for Joint Image-Text Modeling
Title | Variational Hetero-Encoder Randomized GANs for Joint Image-Text Modeling |
Authors | Anonymous |
Abstract | For bidirectional joint image-text modeling, we develop a variational hetero-encoder (VHE) randomized generative adversarial network (GAN) that integrates a probabilistic text decoder, a probabilistic image encoder, and a GAN into a coherent end-to-end multi-modality learning framework. VHE randomized GAN (VHE-GAN) encodes an image to decode its associated text, and feeds the variational posterior as the source of randomness into the GAN image generator. We plug three off-the-shelf modules, including a deep topic model, a ladder-structured image encoder, and StackGAN++, into VHE-GAN, which already achieves competitive performance. This further motivates the development of VHE-raster-scan-GAN, which generates photo-realistic images not only in a multi-scale low-to-high-resolution manner, but also in a hierarchical-semantic coarse-to-fine fashion. By capturing and relating hierarchical semantic and visual concepts with end-to-end training, VHE-raster-scan-GAN achieves state-of-the-art performance in a wide variety of image-text multi-modality learning and generation tasks. PyTorch code is provided. |
Tasks | |
Published | 2020-01-01 |
URL | https://openreview.net/forum?id=H1x5wRVtvS |
https://openreview.net/pdf?id=H1x5wRVtvS | |
PWC | https://paperswithcode.com/paper/variational-hetero-encoder-randomized-gans |
Repo | |
Framework | |
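The abstract above describes a specific wiring: a variational image encoder whose posterior both drives a text decoder and supplies the randomness for a GAN image generator. The toy PyTorch sketch below uses my own minimal stand-ins (an MLP encoder, a bag-of-words text decoder, a single-scale generator), not the paper's deep topic model, ladder encoder, or StackGAN++ modules; it only illustrates that wiring and the associated losses.

```python
# Toy sketch of the VHE-GAN wiring (illustrative only): an image encoder produces
# a variational posterior that (a) decodes a bag-of-words text representation and
# (b) seeds the GAN image generator. Module sizes and the simple MLP components
# are assumptions, not the paper's StackGAN++ / deep topic model modules.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT, IMG, VOCAB = 32, 64 * 64 * 3, 1000

class ImageEncoder(nn.Module):            # q(z | image)
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(IMG, 256), nn.ReLU())
        self.mu, self.logvar = nn.Linear(256, LATENT), nn.Linear(256, LATENT)
    def forward(self, x):
        h = self.net(x.flatten(1))
        return self.mu(h), self.logvar(h)

encoder = ImageEncoder()
text_decoder = nn.Linear(LATENT, VOCAB)       # p(text | z), bag-of-words logits
generator = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(),
                          nn.Linear(256, IMG), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
                              nn.Linear(256, 1))

def vhe_gan_losses(image, bow_counts):
    """image: (B, IMG) in [-1, 1]; bow_counts: (B, VOCAB) word counts."""
    mu, logvar = encoder(image)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterized sample
    # VHE part: reconstruct the associated text from the image's posterior sample.
    text_nll = -(bow_counts * F.log_softmax(text_decoder(z), dim=-1)).sum(-1).mean()
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
    # GAN part: the same posterior sample is the generator's source of randomness.
    fake = generator(z)
    d_loss = (F.softplus(-discriminator(image)) +
              F.softplus(discriminator(fake.detach()))).mean()
    g_loss = F.softplus(-discriminator(fake)).mean()
    return text_nll + kl, d_loss, g_loss

vhe_loss, d_loss, g_loss = vhe_gan_losses(torch.rand(4, IMG) * 2 - 1,
                                          torch.randint(0, 3, (4, VOCAB)).float())
```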
Discovering Motor Programs by Recomposing Demonstrations
Title | Discovering Motor Programs by Recomposing Demonstrations |
Authors | Tanmay Shankar, Shubham Tulsiani, Lerrel Pinto, Abhinav Gupta |
Abstract | In this paper, we present an approach to learn recomposable motor primitives across large-scale and diverse manipulation demonstrations. Current approaches to decomposing demonstrations into primitives often assume manually defined primitives and bypass the difficulty of discovering these primitives. On the other hand, approaches in primitive discovery put restrictive assumptions on the complexity of a primitive, which limit applicability to narrow tasks. Our approach attempts to circumvent these challenges by jointly learning the underlying motor primitives and recomposing them to form the original demonstration. Through constraints on both the parsimony of primitive decomposition and the simplicity of a given primitive, we are able to learn a diverse set of motor primitives, as well as a coherent latent representation for these primitives. We demonstrate, both qualitatively and quantitatively, that our learned primitives capture semantically meaningful aspects of a demonstration. This allows us to compose these primitives in a hierarchical reinforcement learning setup to efficiently solve robotic manipulation tasks like reaching and pushing. |
Tasks | Hierarchical Reinforcement Learning |
Published | 2020-01-01 |
URL | https://openreview.net/forum?id=rkgHY0NYwr |
https://openreview.net/pdf?id=rkgHY0NYwr | |
PWC | https://paperswithcode.com/paper/discovering-motor-programs-by-recomposing |
Repo | |
Framework | |
Representing Unordered Data Using Multiset Automata and Complex Numbers
Title | Representing Unordered Data Using Multiset Automata and Complex Numbers |
Authors | Anonymous |
Abstract | Unordered, variable-sized inputs arise in many settings across multiple fields. The ability of set- and multiset-oriented neural networks to handle this type of input has been the focus of much work in recent years. We propose to represent multisets using complex-weighted multiset automata and show how the multiset representations of certain existing neural architectures can be viewed as special cases of ours. Namely, (1) we provide a new theoretical and intuitive justification for the Transformer model’s representation of positions using sinusoidal functions, and (2) we extend the DeepSets model to use complex numbers, enabling it to outperform the existing model on an extension of one of their tasks. |
Tasks | |
Published | 2020-01-01 |
URL | https://openreview.net/forum?id=SJxmfgSYDB |
https://openreview.net/pdf?id=SJxmfgSYDB | |
PWC | https://paperswithcode.com/paper/representing-unordered-data-using-multiset |
Repo | |
Framework | |
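The paper's claim (1) can be made concrete with a few lines of NumPy: each sine/cosine pair of the standard Transformer position encoding is the imaginary/real part of a unit-modulus complex number, and advancing the position multiplies the encoding elementwise by a fixed diagonal of complex phases, i.e. a multiset-automaton-style transition. The frequency schedule below is the standard Transformer one; the framing as a numerical check is mine.

```python
# Sinusoidal position encodings as complex exponentials (illustrative sketch).
# Each (sin, cos) feature pair of the standard Transformer encoding is the
# imaginary/real part of exp(i * omega_k * pos); advancing the position by
# `delta` is elementwise multiplication by the fixed diagonal exp(i*omega_k*delta),
# i.e. a unitary automaton-like transition, which is the paper's observation.
import numpy as np

d_model = 64
omega = 1.0 / 10000 ** (np.arange(d_model // 2) / (d_model // 2))  # standard frequencies

def complex_encoding(pos):
    return np.exp(1j * omega * pos)            # shape (d_model/2,), unit modulus

pos, delta = 7, 5
shifted = complex_encoding(pos) * complex_encoding(delta)   # diagonal "transition"
assert np.allclose(shifted, complex_encoding(pos + delta))

# Real/imaginary parts recover the usual cos/sin features.
enc = complex_encoding(pos)
sin_cos = np.stack([enc.imag, enc.real], axis=-1).reshape(-1)
```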
Evaluating Lossy Compression Rates of Deep Generative Models
Title | Evaluating Lossy Compression Rates of Deep Generative Models |
Authors | Anonymous |
Abstract | Deep generative models have achieved remarkable progress in recent years. Despite this progress, quantitative evaluation and comparison of generative models remains one of the important challenges. One of the most popular metrics for evaluating generative models is the log-likelihood. While the direct computation of log-likelihood can be intractable, it has been recently shown that the log-likelihood of some of the most interesting generative models, such as variational autoencoders (VAE) or generative adversarial networks (GAN), can be efficiently estimated using annealed importance sampling (AIS). In this work, we argue that the log-likelihood metric by itself cannot represent all the different performance characteristics of generative models, and propose to use rate-distortion curves to evaluate and compare deep generative models. We show that we can approximate the entire rate-distortion curve using a single run of AIS for roughly the same computational cost as a single log-likelihood estimate. We evaluate the lossy compression rates of different deep generative models such as VAEs, GANs (and their variants), and adversarial autoencoders (AAE) on MNIST and CIFAR10, and arrive at a number of insights not obtainable from log-likelihoods alone. |
Tasks | |
Published | 2020-01-01 |
URL | https://openreview.net/forum?id=ryga2CNKDH |
https://openreview.net/pdf?id=ryga2CNKDH | |
PWC | https://paperswithcode.com/paper/evaluating-lossy-compression-rates-of-deep |
Repo | |
Framework | |
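For intuition about what a rate-distortion curve of a generative model looks like, here is a deliberately simple stand-in: a closed-form linear-Gaussian toy where, for each trade-off weight beta, we minimize distortion plus beta times rate and record the resulting (rate, distortion) point. This is not the paper's single-run AIS procedure, only an illustration of the quantity being traced.

```python
# Not the paper's AIS procedure: a closed-form toy that traces a rate-distortion
# curve for a fixed linear-Gaussian "decoder" p(x|z) = N(2z, 0.1), prior N(0, 1),
# data x ~ N(0, 4), and a Gaussian variational encoder q(z|x) = N(a*x, s^2).
# For each beta we minimize distortion + beta * rate over (a, s).
import numpy as np
from scipy.optimize import minimize

sigma_x2 = 4.0   # data variance

def rate_distortion(params):
    a, log_s = params
    s2 = np.exp(2 * log_s)
    rate = 0.5 * (a**2 * sigma_x2 + s2 - 1.0 - np.log(s2))   # E_x KL(q || prior)
    distortion = (1 - 2 * a) ** 2 * sigma_x2 + 4 * s2        # E ||x - 2z||^2
    return rate, distortion

def objective(params, beta):
    r, d = rate_distortion(params)
    return d + beta * r

for beta in [0.1, 0.5, 1.0, 2.0, 8.0]:
    res = minimize(objective, x0=[0.4, -1.0], args=(beta,))
    r, d = rate_distortion(res.x)
    print(f"beta={beta:4.1f}  rate={r:6.3f} nats  distortion={d:6.3f}")
```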
Attentive Sequential Neural Processes
Title | Attentive Sequential Neural Processes |
Authors | Anonymous |
Abstract | Sequential Neural Processes (SNP) is a new class of models that can meta-learn a temporal stochastic process of stochastic processes by modeling temporal transitions between Neural Processes. As Neural Processes (NP) suffer from underfitting, SNP is also prone to the same problem, even more severely due to its temporal context compression. Applying attention, which resolves this problem for NP, is however a challenge in SNP, because SNP cannot store the past contexts over which it is supposed to apply attention. In this paper, we propose Attentive Sequential Neural Processes (ASNP), which resolve the underfitting in SNP by introducing a novel imaginary context as a latent variable and by applying attention over this imaginary context. We evaluate our model on 1D Gaussian Process regression and 2D moving MNIST/CelebA regression. We also apply ASNP to implement an Attentive Temporal GQN and evaluate it on the moving-CelebA task. |
Tasks | |
Published | 2020-01-01 |
URL | https://openreview.net/forum?id=SJlEs1HKDr |
https://openreview.net/pdf?id=SJlEs1HKDr | |
PWC | https://paperswithcode.com/paper/attentive-sequential-neural-processes |
Repo | |
Framework | |
Controlling generative models with continuous factors of variations
Title | Controlling generative models with continuous factors of variations |
Authors | Anonymous |
Abstract | Recent deep generative models can provide photo-realistic images as well as visual or textual content embeddings useful for addressing various tasks in computer vision and natural language processing. Their usefulness is nevertheless often limited by the lack of control over the generative process or a poor understanding of the learned representation. To overcome these major issues, very recent work has shown the interest of studying the semantics of the latent space of generative models. In this paper, we propose to advance the interpretability of the latent space of generative models by introducing a new method that finds meaningful directions in the latent space of any generative model, along which we can move to precisely control specific properties of the generated image, such as the position or scale of the object in the image. Our method is weakly supervised and particularly well suited to finding directions that encode simple transformations of the generated image, such as translation, zoom, or color variations. We demonstrate the effectiveness of our method qualitatively and quantitatively, both for GANs and variational auto-encoders. |
Tasks | |
Published | 2020-01-01 |
URL | https://openreview.net/forum?id=H1laeJrKDB |
https://openreview.net/pdf?id=H1laeJrKDB | |
PWC | https://paperswithcode.com/paper/controlling-generative-models-with-continuous |
Repo | |
Framework | |
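One simple weakly supervised recipe consistent with the abstract above (a guess at the general idea, not necessarily the authors' exact method) is to regress a measured continuous factor of the generated images onto the latent codes and take the regression weights as the direction to traverse. The synthetic data below stands in for latent codes and factor measurements.

```python
# A minimal weakly supervised sketch (one possible approach, not necessarily the
# paper's exact method): given latent codes z_i and a scalar measurement t_i of a
# continuous factor in the generated image G(z_i) (e.g. object x-position), fit
# t ~ <u, z> + b; the unit vector u is a latent direction controlling that factor.
import numpy as np

rng = np.random.default_rng(0)
latent_dim, n = 128, 500
Z = rng.standard_normal((n, latent_dim))          # latent codes used to sample images
true_dir = rng.standard_normal(latent_dim)
true_dir /= np.linalg.norm(true_dir)
t = Z @ true_dir + 0.1 * rng.standard_normal(n)   # stand-in for measured factor values

u, *_ = np.linalg.lstsq(np.c_[Z, np.ones(n)], t, rcond=None)
direction = u[:-1] / np.linalg.norm(u[:-1])
print("cosine with ground-truth direction:", float(direction @ true_dir))

# To edit an image, decode G(z + alpha * direction) for a range of alpha values.
```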
Improving Generalization in Meta Reinforcement Learning using Neural Objectives
Title | Improving Generalization in Meta Reinforcement Learning using Neural Objectives |
Authors | Anonymous |
Abstract | Biological evolution has distilled the experiences of many learners into the general learning algorithms of humans. Our novel meta-reinforcement learning algorithm MetaGenRL is inspired by this process. MetaGenRL distills the experiences of many complex agents to meta-learn a low-complexity neural objective function that affects how future individuals will learn. Unlike recent meta-RL algorithms, MetaGenRL can generalize to new environments that are entirely different from those used for meta-training. In some cases, it even outperforms human-engineered RL algorithms. MetaGenRL uses off-policy second-order gradients during meta-training that greatly increase its sample efficiency. |
Tasks | |
Published | 2020-01-01 |
URL | https://openreview.net/forum?id=S1evHerYPr |
https://openreview.net/pdf?id=S1evHerYPr | |
PWC | https://paperswithcode.com/paper/improving-generalization-in-meta |
Repo | |
Framework | |
Jelly Bean World: A Testbed for Never-Ending Learning
Title | Jelly Bean World: A Testbed for Never-Ending Learning |
Authors | Anonymous |
Abstract | Machine learning has shown growing success in recent years. However, current machine learning systems are highly specialized, trained for particular problems or domains, and typically on a single narrow dataset. Human learning, on the other hand, is highly general and adaptable. Never-ending learning is a machine learning paradigm that aims to bridge this gap, with the goal of encouraging researchers to design machine learning systems that can learn to perform a wider variety of inter-related tasks in more complex environments. To date, there is no environment or testbed to facilitate the development and evaluation of never-ending learning systems. To this end, we propose the Jelly Bean World testbed. The Jelly Bean World allows experimentation over two-dimensional grid worlds which are filled with items and in which agents can navigate. This testbed provides environments that are sufficiently complex and where more generally intelligent algorithms ought to perform better than current state-of-the-art reinforcement learning approaches. It does so by producing non-stationary environments and facilitating experimentation with multi-task, multi-agent, multi-modal, and curriculum learning settings. We hope that this new freely-available software will prompt new research and interest in the development and evaluation of never-ending learning systems and more broadly, general intelligence systems. |
Tasks | |
Published | 2020-01-01 |
URL | https://openreview.net/forum?id=Byx_YAVYPH |
https://openreview.net/pdf?id=Byx_YAVYPH | |
PWC | https://paperswithcode.com/paper/jelly-bean-world-a-testbed-for-never-ending |
Repo | |
Framework | |
Universal approximations of permutation invariant/equivariant functions by deep neural networks
Title | Universal approximations of permutation invariant/equivariant functions by deep neural networks |
Authors | Anonymous |
Abstract | In this paper, we develop a theory of the relationship between $G$-invariant/equivariant functions and deep neural networks for a finite group $G$. In particular, for a given $G$-invariant/equivariant function, we construct a universal approximator given by a deep neural network whose layers are equipped with $G$-actions and whose affine transformations are $G$-equivariant/invariant. Using representation theory, we show that this approximator has exponentially fewer free parameters than usual models. |
Tasks | |
Published | 2020-01-01 |
URL | https://openreview.net/forum?id=HkeZQJBKDB |
https://openreview.net/pdf?id=HkeZQJBKDB | |
PWC | https://paperswithcode.com/paper/universal-approximations-of-permutation-1 |
Repo | |
Framework | |
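As a concrete special case of $G$-invariance, the sketch below shows the familiar sum-pooling (DeepSets-style) permutation-invariant network and checks the invariance numerically. It is only the $G = S_n$ instance, not the paper's general $G$-equivariant construction or its parameter-counting argument.

```python
# Illustration of the special case G = symmetric group: a DeepSets-style network
# (shared elementwise map, permutation-invariant sum pooling, readout) and a
# numerical check that permuting the input leaves the output unchanged.
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.standard_normal((3, 16))   # shared elementwise map phi
W2 = rng.standard_normal((16, 1))   # readout rho

def invariant_net(X):               # X: (set_size, 3)
    h = np.tanh(X @ W1)             # phi applied to each element independently
    pooled = h.sum(axis=0)          # sum pooling: invariant to element order
    return np.tanh(pooled @ W2)

X = rng.standard_normal((10, 3))
perm = rng.permutation(10)
assert np.allclose(invariant_net(X), invariant_net(X[perm]))
```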
V-MPO: On-Policy Maximum a Posteriori Policy Optimization for Discrete and Continuous Control
Title | V-MPO: On-Policy Maximum a Posteriori Policy Optimization for Discrete and Continuous Control |
Authors | Anonymous |
Abstract | Some of the most successful applications of deep reinforcement learning to challenging domains in discrete and continuous control have used policy gradient methods in the on-policy setting. However, policy gradients can suffer from large variance that may limit performance, and in practice require carefully tuned entropy regularization to prevent policy collapse. As an alternative to policy gradient algorithms, we introduce V-MPO, an on-policy adaptation of Maximum a Posteriori Policy Optimization (MPO) that performs policy iteration based on a learned state-value function. We show that V-MPO surpasses previously reported scores for both the Atari-57 and DMLab-30 benchmark suites in the multi-task setting, and does so reliably without importance weighting, entropy regularization, or population-based tuning of hyperparameters. On individual DMLab and Atari levels, the proposed algorithm can achieve scores that are substantially higher than has previously been reported. V-MPO is also applicable to problems with high-dimensional, continuous action spaces, which we demonstrate in the context of learning to control simulated humanoids with 22 degrees of freedom from full state observations and 56 degrees of freedom from pixel observations, as well as example OpenAI Gym tasks where V-MPO achieves substantially higher asymptotic scores than previously reported. |
Tasks | Continuous Control, Policy Gradient Methods |
Published | 2020-01-01 |
URL | https://openreview.net/forum?id=SylOlp4FvH |
https://openreview.net/pdf?id=SylOlp4FvH | |
PWC | https://paperswithcode.com/paper/v-mpo-on-policy-maximum-a-posteriori-policy-1 |
Repo | |
Framework | |
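The core V-MPO policy update described in the abstract can be sketched in a few lines of PyTorch: advantages from the learned state-value function are filtered to their top half, converted into normalized exponential weights with a learned temperature, and used to weight the policy log-likelihoods, with a dual loss adapting the temperature. The KL trust-region terms are omitted and the constraint value is an assumption; this is a paraphrase, not the authors' implementation.

```python
# Sketch of V-MPO's core policy update (a paraphrase, not the authors' code):
# advantages are filtered to their top half, converted to normalized exponential
# weights with temperature eta, and used to weight log-likelihoods. The eta dual
# term is shown; the KL trust-region (Lagrange multipliers alpha) is omitted for
# brevity, and eps_eta is an assumed hyperparameter value.
import torch

def vmpo_policy_loss(log_probs, advantages, eta, eps_eta=0.01):
    """log_probs, advantages: (N,) tensors for a batch of state-action pairs."""
    k = advantages.numel() // 2
    top_adv, idx = torch.topk(advantages, k)          # keep top-half advantages
    top_logp = log_probs[idx]
    weights = torch.softmax(top_adv / eta, dim=0)     # normalized exp(A / eta)
    policy_loss = -(weights.detach() * top_logp).sum()
    # Dual loss that adapts the temperature eta (constraint strength eps_eta).
    eta_loss = eta * eps_eta + eta * (torch.logsumexp(top_adv / eta, dim=0)
                                      - torch.log(torch.tensor(float(k))))
    return policy_loss + eta_loss

eta = torch.tensor(1.0, requires_grad=True)
logp = torch.randn(64, requires_grad=True)
adv = torch.randn(64)
vmpo_policy_loss(logp, adv, eta).backward()
```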
CATER: A diagnostic dataset for Compositional Actions & TEmporal Reasoning
Title | CATER: A diagnostic dataset for Compositional Actions & TEmporal Reasoning |
Authors | Anonymous |
Abstract | Computer vision has undergone a dramatic revolution in performance, driven in large part by deep features trained on large-scale supervised datasets. However, many of these improvements have focused on static image analysis; video understanding has seen rather modest improvements. Even though new datasets and spatiotemporal models have been proposed, simple frame-by-frame classification methods often still remain competitive. We posit that current video datasets are plagued with implicit biases over scene and object structure that can dwarf variations in temporal structure. In this work, we build a video dataset with fully observable and controllable object and scene bias, one that truly requires spatiotemporal understanding in order to be solved. Our dataset, named CATER, is rendered synthetically using a library of standard 3D objects, and tests the ability to recognize compositions of object movements that require long-term reasoning. In addition to being a challenging dataset, CATER also provides a plethora of diagnostic tools to analyze modern spatiotemporal video architectures by being completely observable and controllable. Using CATER, we provide insights into some of the most recent state-of-the-art deep video architectures. |
Tasks | Video Understanding |
Published | 2020-01-01 |
URL | https://openreview.net/forum?id=HJgzt2VKPB |
https://openreview.net/pdf?id=HJgzt2VKPB | |
PWC | https://paperswithcode.com/paper/cater-a-diagnostic-dataset-for-compositional-1 |
Repo | |
Framework | |
MEMORY-BASED GRAPH NETWORKS
Title | MEMORY-BASED GRAPH NETWORKS |
Authors | Anonymous |
Abstract | Graph Neural Networks (GNNs) are a class of deep models that operate on data with arbitrary topology and order-invariant structure represented as graphs. We introduce an efficient memory layer for GNNs that can learn to jointly perform graph representation learning and graph pooling. We also introduce two new networks based on our memory layer: the Memory-Based Graph Neural Network (MemGNN) and the Graph Memory Network (GMN), which can learn hierarchical graph representations by coarsening the graph throughout the layers of memory. The experimental results demonstrate that the proposed models achieve state-of-the-art results in six out of seven graph classification and regression benchmarks. We also show that the learned representations could correspond to chemical features in the molecule data. |
Tasks | Graph Classification, Graph Representation Learning, Representation Learning |
Published | 2020-01-01 |
URL | https://openreview.net/forum?id=r1laNeBYPB |
https://openreview.net/pdf?id=r1laNeBYPB | |
PWC | https://paperswithcode.com/paper/memory-based-graph-networks |
Repo | |
Framework | |
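A minimal reading of the memory layer described above: node embeddings are softly assigned to a set of learned memory keys, and the assignment matrix pools the graph to a coarser set of centroids. The sketch below uses dot-product similarity, a single head, and arbitrary sizes, all of which are assumptions rather than the paper's exact design.

```python
# Minimal sketch of a memory-based pooling layer in the spirit of the abstract:
# soft assignment of node embeddings to learned memory keys, then coarsening.
# Dimensions, the dot-product similarity and single-head setup are assumptions.
import torch
import torch.nn as nn

class MemoryPool(nn.Module):
    def __init__(self, in_dim, out_dim, n_keys):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(n_keys, in_dim))  # learned memory keys
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x):
        # x: (num_nodes, in_dim) node embeddings of one graph.
        assign = torch.softmax(x @ self.keys.t(), dim=-1)      # (nodes, n_keys)
        coarsened = assign.t() @ x                              # (n_keys, in_dim)
        return torch.relu(self.proj(coarsened))                 # coarser "graph"

pool = MemoryPool(in_dim=32, out_dim=64, n_keys=8)
graph_nodes = torch.randn(50, 32)
print(pool(graph_nodes).shape)   # torch.Size([8, 64]): 50 nodes pooled to 8
```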
Pixel Co-Occurence Based Loss Metrics for Super Resolution Texture Recovery
Title | Pixel Co-Occurence Based Loss Metrics for Super Resolution Texture Recovery |
Authors | Anonymous |
Abstract | Single Image Super Resolution (SISR) has significantly improved with Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs), often achieving order-of-magnitude better pixelwise accuracies (distortions) and state-of-the-art perceptual accuracy. Due to the stochastic nature of GAN reconstruction and the ill-posed nature of the problem, perceptual accuracy tends to correlate inversely with pixelwise accuracy, which is especially detrimental to SISR, where preservation of original content is an objective. GAN stochastics can be guided by intermediate loss functions such as the VGG featurewise loss, but these features are typically derived from biased pre-trained networks. Similarly, measurements of perceptual quality such as the human Mean Opinion Score (MOS) and no-reference measures have issues with pre-trained bias. The spatial relationships between pixel values can be measured without bias using the Grey Level Co-occurrence Matrix (GLCM), which was found to match the cardinality and comparative value of the MOS while reducing subjectivity and automating the analytical process. In this work, the GLCM is also directly used as a loss function to guide the generation of perceptually accurate images based on the spatial collocation of pixel values. We compare GLCM-based loss against scenarios where (1) no intermediate guiding loss function is used, and (2) the VGG feature loss is used. Experimental validation is carried out on X-ray images of rock samples, characterised by a significant number of high-frequency texture features. We find GLCM-based loss to result in images with higher pixelwise accuracy and better perceptual scores. |
Tasks | Image Super-Resolution, Super-Resolution |
Published | 2020-01-01 |
URL | https://openreview.net/forum?id=rylrI1HtPr |
https://openreview.net/pdf?id=rylrI1HtPr | |
PWC | https://paperswithcode.com/paper/pixel-co-occurence-based-loss-metrics-for |
Repo | |
Framework | |
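To make the GLCM comparison concrete, the NumPy sketch below quantizes two images to a small number of grey levels, counts horizontally adjacent grey-level pairs, normalizes the counts, and takes an L1 difference between the resulting matrices. The hard quantization here is not differentiable, so a trainable loss would need a soft relaxation, which the sketch leaves out.

```python
# Illustration of comparing images via their Grey Level Co-occurrence Matrices:
# quantize to a few grey levels, count horizontally adjacent level pairs,
# normalize, and take an L1 difference. The hard quantization used here is not
# differentiable; a trainable loss would need a soft/relaxed variant.
import numpy as np

def glcm(img, levels=16):
    """img: 2-D array in [0, 1]; counts co-occurrences at horizontal offset (0, 1)."""
    q = np.clip((img * levels).astype(int), 0, levels - 1)
    left, right = q[:, :-1].ravel(), q[:, 1:].ravel()
    counts = np.zeros((levels, levels))
    np.add.at(counts, (left, right), 1)
    return counts / counts.sum()

def glcm_l1(img_a, img_b, levels=16):
    return np.abs(glcm(img_a, levels) - glcm(img_b, levels)).sum()

rng = np.random.default_rng(0)
hr = rng.random((64, 64))                                   # stand-in "high-res" image
sr = np.clip(hr + 0.05 * rng.standard_normal((64, 64)), 0, 1)  # stand-in "super-res" image
print("GLCM L1 distance:", glcm_l1(hr, sr))
```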
Pareto Optimality in No-Harm Fairness
Title | Pareto Optimality in No-Harm Fairness |
Authors | Anonymous |
Abstract | Common fairness definitions in machine learning focus on balancing various notions of disparity and utility. In this work we study fairness in the context of risk disparity among sub-populations. We introduce the framework of Pareto-optimal fairness, where the goal of reducing risk disparity gaps is secondary only to the principle of not doing unnecessary harm, a concept that is especially applicable to high-stakes domains such as healthcare. We provide analysis and methodology to obtain maximally-fair no-harm classifiers on finite datasets. We argue that even in domains where fairness at cost is required, no-harm fairness can prove to be the optimal first step. This same methodology can also be applied to any unbalanced classification task, where we want to dynamically equalize the misclassification risks across outcomes without degrading overall performance any more than strictly necessary. We test the proposed methodology on real case-studies of predicting income, ICU patient mortality, classifying skin lesions from images, and assessing credit risk, demonstrating how the proposed framework compares favorably to other traditional approaches. |
Tasks | |
Published | 2020-01-01 |
URL | https://openreview.net/forum?id=B1e5TA4FPr |
https://openreview.net/pdf?id=B1e5TA4FPr | |
PWC | https://paperswithcode.com/paper/pareto-optimality-in-no-harm-fairness |
Repo | |
Framework | |
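A tiny sketch of the no-harm selection rule as I read it from the abstract: a candidate classifier is admissible only if it does not increase any group's risk relative to a baseline, and among admissible candidates the one with the smallest risk disparity is preferred. The group names and risk numbers below are synthetic placeholders, not results from the paper.

```python
# Small sketch of the no-harm selection rule described in the abstract: accept a
# candidate only if it does not increase any group's risk relative to a baseline
# classifier, then prefer the admissible candidate with the smallest disparity.
# The per-group risks below are synthetic placeholders.
baseline = {"group_a": 0.12, "group_b": 0.25}          # per-group risks of baseline
candidates = {
    "cand_1": {"group_a": 0.12, "group_b": 0.20},      # closes the gap, harms no one
    "cand_2": {"group_a": 0.10, "group_b": 0.26},      # harms group_b -> rejected
    "cand_3": {"group_a": 0.15, "group_b": 0.15},      # equal risks, but harms group_a
}

def no_harm(cand):
    return all(cand[g] <= baseline[g] + 1e-12 for g in baseline)

def disparity(cand):
    risks = list(cand.values())
    return max(risks) - min(risks)

admissible = {name: c for name, c in candidates.items() if no_harm(c)}
best = min(admissible, key=lambda name: disparity(admissible[name]))
print("admissible:", sorted(admissible), "-> selected:", best)
```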
Quantum algorithm for finding the negative curvature direction
Title | Quantum algorithm for finding the negative curvature direction |
Authors | Anonymous |
Abstract | We present an efficient quantum algorithm for finding the negative curvature direction used to escape saddle points, which is a critical subroutine for many second-order non-convex optimization algorithms. We prove that our algorithm can produce the target state corresponding to the negative curvature direction with query complexity O(polylog(d) ε^(-1)), where d is the dimension of the optimization function. The quantum negative curvature finding algorithm is exponentially faster than any known classical method, which takes time at least O(d ε^(-1/2)). Moreover, we propose an efficient algorithm to achieve the classical read-out of the target state. Our classical read-out algorithm is exponentially faster in its dependence on d than existing counterparts. |
Tasks | |
Published | 2020-01-01 |
URL | https://openreview.net/forum?id=r1xpF0VYDS |
https://openreview.net/pdf?id=r1xpF0VYDS | |
PWC | https://paperswithcode.com/paper/quantum-algorithm-for-finding-the-negative-1 |
Repo | |
Framework | |
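For reference, the quantity the quantum algorithm targets is, classically, the Hessian eigenvector with the most negative eigenvalue at a saddle point. The sketch below computes it by dense eigendecomposition on a toy function purely for illustration; the classical baselines the abstract refers to rely on Hessian-vector products rather than forming the full Hessian.

```python
# Classical reference computation of the negative curvature direction at a saddle
# point: the eigenvector of the Hessian with the most negative eigenvalue. This
# dense eigendecomposition is only an illustration of the quantity the quantum
# algorithm targets; practical classical methods use Hessian-vector products.
import numpy as np

def f_hessian(x):
    # Toy function f(x) = x0^2 - x1^2 + 0.5*x2^2, which has a saddle at the origin.
    return np.diag([2.0, -2.0, 1.0])

x = np.zeros(3)                              # saddle point: gradient vanishes here
eigvals, eigvecs = np.linalg.eigh(f_hessian(x))
neg_dir = eigvecs[:, np.argmin(eigvals)]     # direction of most negative curvature
print("min eigenvalue:", eigvals.min(), "escape direction:", neg_dir)
# Moving along +/- neg_dir decreases f to second order, escaping the saddle.
```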