February 1, 2020

3089 words 15 mins read

Paper Group AWR 155


Learning-based Optimization of the Under-sampling Pattern in MRI. Self-Supervised Learning For Few-Shot Image Classification. Genetic Algorithm for the 0/1 Multidimensional Knapsack Problem. BERT for Evidence Retrieval and Claim Verification. Revisiting Local Descriptor based Image-to-Class Measure for Few-shot Learning. TextCaps : Handwritten Char …

Learning-based Optimization of the Under-sampling Pattern in MRI

Title Learning-based Optimization of the Under-sampling Pattern in MRI
Authors Cagla Deniz Bahadir, Adrian V. Dalca, Mert R. Sabuncu
Abstract Acquisition of Magnetic Resonance Imaging (MRI) scans can be accelerated by under-sampling in k-space (i.e., the Fourier domain). In this paper, we consider the problem of optimizing the sub-sampling pattern in a data-driven fashion. Since the reconstruction model’s performance depends on the sub-sampling pattern, we combine the two problems. For a given sparsity constraint, our method optimizes the sub-sampling pattern and reconstruction model, using an end-to-end learning strategy. Our algorithm learns from full-resolution data that are under-sampled retrospectively, yielding a sub-sampling pattern and reconstruction model that are customized to the type of images represented in the training data. The proposed method, which we call LOUPE (Learning-based Optimization of the Under-sampling PattErn), was implemented by modifying a U-Net, a widely-used convolutional neural network architecture, which we append with a forward model that encodes the under-sampling process. Our experiments with T1-weighted structural brain MRI scans show that the optimized sub-sampling pattern can yield significantly more accurate reconstructions compared to standard random uniform, variable-density, or equispaced under-sampling schemes. The code is made available at: https://github.com/cagladbahadir/LOUPE.
Tasks
Published 2019-01-07
URL http://arxiv.org/abs/1901.01960v2
PDF http://arxiv.org/pdf/1901.01960v2.pdf
PWC https://paperswithcode.com/paper/learning-based-optimization-of-the-under
Repo https://github.com/cagladbahadir/LOUPE
Framework tf
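
The crux of LOUPE is making the binary sampling mask differentiable so it can be trained jointly with the reconstruction network. Below is a minimal PyTorch sketch of that end-to-end idea (the released code is TensorFlow); the sigmoid slope, budget rescaling, and relaxed Bernoulli sampling are simplified assumptions, and `recon_net` is a two-layer stand-in for the paper's U-Net.

```python
import torch
import torch.nn as nn

class LearnableMask(nn.Module):
    """Probabilistic under-sampling mask: a sigmoid over per-pixel logits,
    rescaled so the expected sampling rate roughly matches a target sparsity."""
    def __init__(self, shape=(128, 128), sparsity=0.25, slope=5.0):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(shape))
        self.sparsity, self.slope = sparsity, slope

    def forward(self, kspace):
        probs = torch.sigmoid(self.slope * self.logits)
        probs = probs * (self.sparsity / probs.mean())  # approximate budget in expectation
        # Relaxed Bernoulli sampling keeps the mask differentiable.
        u = torch.rand_like(probs)
        mask = torch.sigmoid(self.slope * (probs - u))
        return kspace * mask, mask

# End-to-end: mask the k-space, take a zero-filled inverse FFT, then reconstruct.
mask_layer = LearnableMask()
recon_net = nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(32, 1, 3, padding=1))  # stand-in for the U-Net
image = torch.randn(1, 1, 128, 128)
kspace = torch.fft.fft2(image)
masked, _ = mask_layer(kspace)
zero_filled = torch.fft.ifft2(masked)
recon = recon_net(torch.cat([zero_filled.real, zero_filled.imag], dim=1))
loss = (recon - image).abs().mean()  # train mask logits and reconstructor jointly
```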

Self-Supervised Learning For Few-Shot Image Classification

Title Self-Supervised Learning For Few-Shot Image Classification
Authors Da Chen, Yuefeng Chen, Yuhong Li, Feng Mao, Yuan He, Hui Xue
Abstract Few-shot image classification aims to classify unseen classes with limited labeled samples. Recent works benefit from the meta-learning process with episodic tasks and can adapt quickly from training classes to testing classes. Due to the limited number of samples for each task, the initial embedding network for meta-learning becomes an essential component and can largely affect performance in practice. To this end, many pre-training methods have been proposed, but most of them are trained in a supervised way with limited transfer ability to unseen classes. In this paper, we propose to train a more generalized embedding network with self-supervised learning (SSL), which can provide slow and robust representations for downstream tasks by learning from the data itself. We evaluate our work through extensive comparisons with previous baseline methods on two few-shot classification datasets (i.e., MiniImageNet and CUB). Based on the evaluation results, the proposed method achieves significantly better performance, improving 1-shot and 5-shot tasks by nearly 3% and 4% on MiniImageNet, and by nearly 9% and 3% on CUB. Moreover, the proposed method gains further improvements of (15%, 13%) on MiniImageNet and (15%, 8%) on CUB by pretraining with more unlabeled data. Our code will be available at https://github.com/phecy/SSL-FEW-SHOT.
Tasks Few-Shot Image Classification, Image Classification, Meta-Learning
Published 2019-11-14
URL https://arxiv.org/abs/1911.06045v2
PDF https://arxiv.org/pdf/1911.06045v2.pdf
PWC https://paperswithcode.com/paper/self-supervised-learning-for-few-shot-image
Repo https://github.com/phecy/SSL-FEW-SHOT
Framework pytorch
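
As a concrete illustration of SSL pretraining for an embedding network, here is a sketch using rotation prediction as the pretext task. This is a generic stand-in: the paper's actual SSL objective may differ, and `encoder`/`head` are hypothetical modules.

```python
import torch
import torch.nn.functional as F

def rotation_pretext_loss(encoder, head, images):
    """Self-supervised pretext task: predict which of four rotations was applied.
    `encoder` maps images to embeddings; `head` is a 4-way linear classifier.
    images: (B, C, H, W) unlabeled batch."""
    rotated, labels = [], []
    for k in range(4):  # 0, 90, 180, 270 degrees
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    x = torch.cat(rotated)
    y = torch.cat(labels)
    return F.cross_entropy(head(encoder(x)), y)
```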

Genetic Algorithm for the 0/1 Multidimensional Knapsack Problem

Title Genetic Algorithm for the 0/1 Multidimensional Knapsack Problem
Authors Shalin Shah
Abstract The 0/1 multidimensional knapsack problem is the 0/1 knapsack problem with m constraints, which makes it difficult to solve using traditional methods like dynamic programming or branch-and-bound algorithms. We present a genetic algorithm for the multidimensional knapsack problem, with Java and C++ code, that is able to solve publicly available instances in a very short computation time. Our algorithm uses iteratively computed Lagrangian multipliers as constraint weights to augment the greedy algorithm for the multidimensional knapsack problem, and uses that information in a greedy crossover within a genetic algorithm. The algorithm uses several other hyperparameters which can be set in the code to control convergence. Our algorithm improves upon the algorithm of Chu and Beasley in that it converges to optimum or near-optimum solutions much faster.
Tasks
Published 2019-07-20
URL https://arxiv.org/abs/1908.08022v2
PDF https://arxiv.org/pdf/1908.08022v2.pdf
PWC https://paperswithcode.com/paper/genetic-algorithm-for-the-01-multidimensional
Repo https://github.com/shah314/gamultiknapsack
Framework none
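
A toy Python version of the overall GA loop is sketched below. It keeps the greedy-repair idea but substitutes plain uniform crossover and a simple value-to-load ratio for the paper's Lagrangian-weighted greedy crossover; see the released Java/C++ code for the real solver.

```python
import random

def ga_mkp(values, weights, capacities, pop=60, gens=200, mut=0.02):
    """Toy GA for the 0/1 multidimensional knapsack. weights[j][i] is the
    load of item i on constraint j; greedy repair keeps offspring feasible."""
    n, m = len(values), len(capacities)

    def repair(x):
        # Drop items with the worst value-to-load ratio until feasible.
        load = [sum(weights[j][i] * x[i] for i in range(n)) for j in range(m)]
        order = sorted(range(n), key=lambda i: values[i] /
                       (1 + sum(weights[j][i] for j in range(m))))
        for i in order:
            if all(load[j] <= capacities[j] for j in range(m)):
                break
            if x[i]:
                x[i] = 0
                for j in range(m):
                    load[j] -= weights[j][i]
        return x

    def fitness(x):
        return sum(v * b for v, b in zip(values, x))

    popu = [repair([random.randint(0, 1) for _ in range(n)]) for _ in range(pop)]
    for _ in range(gens):
        # Tournament selection, uniform crossover, bit-flip mutation.
        a = max(random.sample(popu, 3), key=fitness)
        b = max(random.sample(popu, 3), key=fitness)
        child = [ai if random.random() < 0.5 else bi for ai, bi in zip(a, b)]
        child = repair([1 - g if random.random() < mut else g for g in child])
        worst = min(range(pop), key=lambda i: fitness(popu[i]))
        popu[worst] = child  # steady-state replacement
    return max(popu, key=fitness)
```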

BERT for Evidence Retrieval and Claim Verification

Title BERT for Evidence Retrieval and Claim Verification
Authors Amir Soleimani, Christof Monz, Marcel Worring
Abstract Motivated by the promising performance of pre-trained language models, we investigate BERT in an evidence retrieval and claim verification pipeline for the FEVER fact extraction and verification challenge. To this end, we propose to use two BERT models, one for retrieving potential evidence sentences supporting or rejecting claims, and another for verifying claims based on the predicted evidence sets. To train the BERT retrieval system, we use pointwise and pairwise loss functions and examine the effect of hard negative mining. A second BERT model is trained to classify the samples as supported, refuted, or not enough information. Our system achieves a new state-of-the-art recall of 87.1 for retrieving the top five sentences out of the FEVER documents, which comprise 50K Wikipedia pages, and ranks second on the official leaderboard with a FEVER score of 69.7.
Tasks
Published 2019-10-07
URL https://arxiv.org/abs/1910.02655v1
PDF https://arxiv.org/pdf/1910.02655v1.pdf
PWC https://paperswithcode.com/paper/bert-for-evidence-retrieval-and-claim
Repo https://github.com/thunlp/KernelGAT
Framework pytorch
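
To make the retrieval half of the pipeline concrete, here is a hedged sketch of pointwise cross-encoder scoring with Hugging Face Transformers. The base checkpoint and the untrained scoring are illustrative assumptions; in the paper the retrieval model is first trained with pointwise or pairwise losses and hard negative mining.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Each (claim, sentence) pair gets a relevance logit from a cross-encoder;
# hard negatives would be mined from high-scoring non-evidence sentences.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1)

def score(claim, sentences):
    batch = tok([claim] * len(sentences), sentences,
                padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return model(**batch).logits.squeeze(-1)  # higher = more relevant

claim = "The Eiffel Tower is in Berlin."
candidates = ["The Eiffel Tower is located in Paris.",
              "Berlin is the capital of Germany."]
top5 = sorted(zip(score(claim, candidates).tolist(), candidates),
              reverse=True)[:5]
```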

Revisiting Local Descriptor based Image-to-Class Measure for Few-shot Learning

Title Revisiting Local Descriptor based Image-to-Class Measure for Few-shot Learning
Authors Wenbin Li, Lei Wang, Jinglin Xu, Jing Huo, Yang Gao, Jiebo Luo
Abstract Few-shot learning in image classification aims to learn a classifier to classify images when only a few training examples are available for each class. Recent work has achieved promising classification performance, where an image-level feature based measure is usually used. In this paper, we argue that a measure at such a level may not be effective enough in light of the scarcity of examples in few-shot learning. Instead, we think a local descriptor based image-to-class measure should be used, inspired by its surprising success in the heydays of local invariant features. Specifically, building upon the recent episodic training mechanism, we propose a Deep Nearest Neighbor Neural Network (DN4 in short) and train it in an end-to-end manner. Its key difference from the literature is the replacement of the image-level feature based measure in the final layer by a local descriptor based image-to-class measure. This measure is conducted online via a k-nearest neighbor search over the deep local descriptors of convolutional feature maps. The proposed DN4 not only learns the optimal deep local descriptors for the image-to-class measure, but also utilizes the higher efficiency of such a measure in the case of example scarcity, thanks to the exchangeability of visual patterns across the images in the same class. Our work leads to a simple, effective, and computationally efficient framework for few-shot learning. Experimental study on benchmark datasets consistently shows its superiority over the related state-of-the-art, with the largest absolute improvement of 17% over the next best. The source code is available at https://github.com/WenbinLee/DN4.git.
Tasks Few-Shot Image Classification, Few-Shot Learning, Image Classification
Published 2019-03-28
URL http://arxiv.org/abs/1903.12290v2
PDF http://arxiv.org/pdf/1903.12290v2.pdf
PWC https://paperswithcode.com/paper/revisiting-local-descriptor-based-image-to
Repo https://github.com/WenbinLee/DN4
Framework pytorch
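
The image-to-class measure described in the abstract reduces to a k-nearest-neighbor search over local descriptors. A minimal PyTorch sketch, assuming descriptors have already been extracted from the convolutional feature maps:

```python
import torch
import torch.nn.functional as F

def image_to_class_score(query_desc, class_desc, k=3):
    """DN4-style image-to-class measure: for each local descriptor of the
    query image, find its k nearest descriptors within a class and sum the
    cosine similarities.
    query_desc: (n_q, d) local descriptors of one query image
    class_desc: (n_c, d) descriptors pooled from all support images of a class
    """
    q = F.normalize(query_desc, dim=1)
    c = F.normalize(class_desc, dim=1)
    sim = q @ c.t()                    # (n_q, n_c) cosine similarities
    topk = sim.topk(k, dim=1).values   # k nearest class descriptors per query descriptor
    return topk.sum()

# Descriptors are the spatial positions of a conv feature map,
# e.g. a (d, h, w) map reshaped to (h*w, d).
```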

TextCaps : Handwritten Character Recognition with Very Small Datasets

Title TextCaps : Handwritten Character Recognition with Very Small Datasets
Authors Vinoj Jayasundara, Sandaru Jayasekara, Hirunima Jayasekara, Jathushan Rajasegaran, Suranga Seneviratne, Ranga Rodrigo
Abstract Many localized languages struggle to reap the benefits of recent advancements in character recognition systems due to the lack of a substantial amount of labeled training data. This is due to the difficulty of generating large amounts of labeled data for such languages and the inability of deep learning techniques to properly learn from a small number of training samples. We solve this problem by introducing a technique for generating new training samples from existing samples, with realistic augmentations that reflect actual variations present in human handwriting, by adding random controlled noise to their corresponding instantiation parameters. Our results with a mere 200 training samples per class surpass existing character recognition results on the EMNIST-letter dataset while matching the existing results on three other datasets: EMNIST-balanced, EMNIST-digits, and MNIST. We also develop a strategy to effectively use a combination of loss functions to improve reconstructions. Our system is useful for character recognition in localized languages that lack much labeled training data, and even in other related, more general contexts such as object recognition.
Tasks Few-Shot Image Classification, Image Classification, Image Generation
Published 2019-04-17
URL http://arxiv.org/abs/1904.08095v1
PDF http://arxiv.org/pdf/1904.08095v1.pdf
PWC https://paperswithcode.com/paper/textcaps-handwritten-character-recognition
Repo https://github.com/vinojjayasundara/textcaps
Framework tf
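
The augmentation idea (perturbing a capsule's instantiation parameters and decoding the result) can be sketched as follows; the noise scale, masking probability, and the `decoder` call are hypothetical, not the paper's settings.

```python
import numpy as np

def perturb_instantiation(params, noise_scale=0.1, mask_prob=0.5):
    """Sketch of the augmentation idea: add random controlled noise to a
    capsule's instantiation-parameter vector so a trained decoder reconstructs
    a realistic variant of the input character. Values here are illustrative."""
    params = np.asarray(params, dtype=float)
    mask = np.random.rand(*params.shape) < mask_prob  # perturb only some dimensions
    noise = np.random.uniform(-noise_scale, noise_scale, params.shape)
    return params + mask * noise

# new_image = decoder(perturb_instantiation(capsule_params))
# `decoder` would be the reconstruction network of a trained CapsNet.
```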

Real-Time Reinforcement Learning

Title Real-Time Reinforcement Learning
Authors Simon Ramstedt, Christopher Pal
Abstract Markov Decision Processes (MDPs), the mathematical framework underlying most algorithms in Reinforcement Learning (RL), are often used in a way that wrongfully assumes that the state of an agent’s environment does not change during action selection. As RL systems based on MDPs begin to find application in real-world safety critical situations, this mismatch between the assumptions underlying classical MDPs and the reality of real-time computation may lead to undesirable outcomes. In this paper, we introduce a new framework, in which states and actions evolve simultaneously, and show how it is related to the classical MDP formulation. We analyze existing algorithms under the new real-time formulation and show why they are suboptimal when used in real-time. We then use those insights to create a new algorithm, Real-Time Actor-Critic (RTAC), that outperforms the existing state-of-the-art continuous control algorithm, Soft Actor-Critic, both in real-time and non-real-time settings. Code and videos can be found at https://github.com/rmst/rtrl.
Tasks Continuous Control
Published 2019-11-11
URL https://arxiv.org/abs/1911.04448v4
PDF https://arxiv.org/pdf/1911.04448v4.pdf
PWC https://paperswithcode.com/paper/real-time-reinforcement-learning
Repo https://github.com/rmst/rtrl
Framework pytorch
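
The real-time framework can be emulated with an environment wrapper: the action chosen now takes effect one step later, and the pending action becomes part of the state, so states and actions evolve simultaneously. A sketch against the classic 4-tuple gym API (hypothetical, not the authors' code):

```python
import gym
import numpy as np

class RealTimeWrapper(gym.Wrapper):
    """Real-time MDP sketch: the submitted action is applied at the *next*
    step, and the previous action is appended to the observation."""
    def __init__(self, env):
        super().__init__(env)
        self.prev_action = np.zeros(env.action_space.shape)

    def reset(self, **kwargs):
        obs = self.env.reset(**kwargs)
        self.prev_action = np.zeros(self.env.action_space.shape)
        return np.concatenate([obs, self.prev_action])

    def step(self, action):
        # Apply the action chosen one step ago, queue the new one.
        obs, reward, done, info = self.env.step(self.prev_action)
        self.prev_action = action
        return np.concatenate([obs, action]), reward, done, info
```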

Policy Optimization Through Approximate Importance Sampling

Title Policy Optimization Through Approximate Importance Sampling
Authors Marcin B. Tomczak, Dongho Kim, Peter Vrancx, Kee-Eung Kim
Abstract Recent policy optimization approaches (Schulman et al., 2015a; 2017) have achieved substantial empirical successes by constructing new proxy optimization objectives. These proxy objectives allow stable and low-variance policy learning, but require small policy updates to ensure that the proxy objective remains an accurate approximation of the target policy value. In this paper we derive an alternative objective that obtains the value of the target policy by applying importance sampling (IS). However, the basic importance sampled objective is not suitable for policy optimization, as it incurs too much variance in policy updates. We therefore introduce an approximation that allows us to directly trade off the bias of the approximation against the variance in policy updates. We show that our approximation unifies previously developed approaches and allows us to interpolate between them. We develop a practical algorithm by optimizing the introduced objective with proximal policy optimization techniques (Schulman et al., 2017). We also provide a theoretical analysis of the introduced policy optimization objective, demonstrating its bias-variance trade-off. We empirically demonstrate that the resulting algorithm improves upon the state of the art in on-policy policy optimization on continuous control benchmarks.
Tasks Continuous Control
Published 2019-10-09
URL https://arxiv.org/abs/1910.03857v2
PDF https://arxiv.org/pdf/1910.03857v2.pdf
PWC https://paperswithcode.com/paper/policy-optimization-through-approximated
Repo https://github.com/marctom/POTAIS
Framework none
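
For intuition on trading IS variance for bias, here is PPO's clipped surrogate, the family of proxy objectives the paper builds on; the paper's own interpolated objective differs from this sketch.

```python
import torch

def approx_is_objective(logp_new, logp_old, advantages, eps=0.2):
    """Clipped importance-sampled surrogate (PPO-style). The importance
    weights estimate the target policy's value from behavior-policy data;
    clipping reduces the variance of that estimate at the cost of bias."""
    ratio = torch.exp(logp_new - logp_old)               # importance weights
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantages
    return torch.min(unclipped, clipped).mean()          # maximize this
```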

Measuring Compositionality in Representation Learning

Title Measuring Compositionality in Representation Learning
Authors Jacob Andreas
Abstract Many machine learning algorithms represent input data with vector embeddings or discrete codes. When inputs exhibit compositional structure (e.g. objects built from parts or procedures from subroutines), it is natural to ask whether this compositional structure is reflected in the inputs’ learned representations. While the assessment of compositionality in languages has received significant attention in linguistics and adjacent fields, the machine learning literature lacks general-purpose tools for producing graded measurements of compositional structure in more general (e.g. vector-valued) representation spaces. We describe a procedure for evaluating compositionality by measuring how well the true representation-producing model can be approximated by a model that explicitly composes a collection of inferred representational primitives. We use the procedure to provide formal and empirical characterizations of compositional structure in a variety of settings, exploring the relationship between compositionality and learning dynamics, human judgments, representational similarity, and generalization.
Tasks Representation Learning
Published 2019-02-19
URL http://arxiv.org/abs/1902.07181v2
PDF http://arxiv.org/pdf/1902.07181v2.pdf
PWC https://paperswithcode.com/paper/measuring-compositionality-in-representation
Repo https://github.com/jacobandreas/tre
Framework pytorch
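
The measurement procedure can be sketched compactly: fit primitive embeddings so that their compositions approximate the observed representations, and report the residual error. Using summation as the composition function is a simplifying assumption here; the paper allows other learned compositions.

```python
import torch

def tree_reconstruction_error(reps, derivations, n_prims, d, steps=500):
    """Fit one embedding per primitive, compose by summation, and measure
    how well composed embeddings approximate the observed representations.
    reps: (n, d) learned representations
    derivations: list of length n; each entry lists the primitive indices
                 whose composition should reproduce that representation."""
    prim = torch.randn(n_prims, d, requires_grad=True)
    opt = torch.optim.Adam([prim], lr=0.1)
    for _ in range(steps):
        composed = torch.stack([prim[idx].sum(0) for idx in derivations])
        loss = (composed - reps).pow(2).sum(1).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()  # low error = representations look compositional
```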

Making Deep Q-learning methods robust to time discretization

Title Making Deep Q-learning methods robust to time discretization
Authors Corentin Tallec, Léonard Blier, Yann Ollivier
Abstract Despite remarkable successes, Deep Reinforcement Learning (DRL) is not robust to hyperparameterization, implementation details, or small environment changes (Henderson et al. 2017, Zhang et al. 2018). Overcoming such sensitivity is key to making DRL applicable to real world problems. In this paper, we identify sensitivity to time discretization in near continuous-time environments as a critical factor; this covers, e.g., changing the number of frames per second, or the action frequency of the controller. Empirically, we find that Q-learning-based approaches such as Deep Q-learning (Mnih et al., 2015) and Deep Deterministic Policy Gradient (Lillicrap et al., 2015) collapse with small time steps. Formally, we prove that Q-learning does not exist in continuous time. We detail a principled way to build an off-policy RL algorithm that yields similar performance over a wide range of time discretizations, and confirm this robustness empirically.
Tasks Q-Learning
Published 2019-01-28
URL http://arxiv.org/abs/1901.09732v2
PDF http://arxiv.org/pdf/1901.09732v2.pdf
PWC https://paperswithcode.com/paper/making-deep-q-learning-methods-robust-to-time
Repo https://github.com/ctallec/continuous-rl
Framework pytorch
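
One way to see the fix, following the paper's advantage-based reasoning: as the time step shrinks, Q collapses onto V, so parameterize Q(s, a) = V(s) + dt · A(s, a) and learn the advantage separately, which keeps action ranking meaningful at any discretization. A PyTorch sketch of such a parameterization, with hypothetical network sizes:

```python
import torch
import torch.nn as nn

class DtRobustQ(nn.Module):
    """Q(s, a) = V(s) + dt * A(s, a): the dt-scaled advantage term vanishes
    as dt -> 0, matching the continuous-time limit, while A still ranks
    actions at every time discretization."""
    def __init__(self, obs_dim, act_dim, dt, hidden=64):
        super().__init__()
        self.v = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, 1))
        self.adv = nn.Sequential(nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))
        self.dt = dt

    def forward(self, obs, act):
        return self.v(obs) + self.dt * self.adv(torch.cat([obs, act], dim=-1))
```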

Fixup Initialization: Residual Learning Without Normalization

Title Fixup Initialization: Residual Learning Without Normalization
Authors Hongyi Zhang, Yann N. Dauphin, Tengyu Ma
Abstract Normalization layers are a staple in state-of-the-art deep neural network architectures. They are widely believed to stabilize training, enable higher learning rates, accelerate convergence, and improve generalization, though the reason for their effectiveness is still an active research topic. In this work, we challenge the commonly-held beliefs by showing that none of the perceived benefits is unique to normalization. Specifically, we propose fixed-update initialization (Fixup), an initialization motivated by solving the exploding and vanishing gradient problem at the beginning of training via properly rescaling a standard initialization. We find training residual networks with Fixup to be as stable as training with normalization – even for networks with 10,000 layers. Furthermore, with proper regularization, Fixup enables residual networks without normalization to achieve state-of-the-art performance in image classification and machine translation.
Tasks Image Classification, Machine Translation
Published 2019-01-27
URL http://arxiv.org/abs/1901.09321v2
PDF http://arxiv.org/pdf/1901.09321v2.pdf
PWC https://paperswithcode.com/paper/fixup-initialization-residual-learning
Repo https://github.com/hongyi-zhang/Fixup
Framework pytorch
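
The core Fixup rule is mechanical: for residual branches with m layers, rescale a standard initialization by L^(-1/(2m-2)) (L = number of residual blocks) and zero-initialize the branch's last layer. A sketch for m = 2, assuming a hypothetical `resnet.blocks` layout exposing `conv1`/`conv2`:

```python
import torch.nn as nn

def fixup_init(resnet, num_layers):
    """Fixup sketch for residual branches with m = 2 convolutions: scale the
    first conv of each branch by L^(-1/(2m-2)) = L^(-1/2) and zero-init the
    last conv, so every residual branch starts as (near-)identity. The paper
    additionally inserts scalar biases and a multiplier, omitted here."""
    scale = num_layers ** (-0.5)
    for block in resnet.blocks:  # assumes blocks expose conv1/conv2
        nn.init.kaiming_normal_(block.conv1.weight)
        block.conv1.weight.data.mul_(scale)
        nn.init.zeros_(block.conv2.weight)
```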

Structure-Aware Residual Pyramid Network for Monocular Depth Estimation

Title Structure-Aware Residual Pyramid Network for Monocular Depth Estimation
Authors Xiaotian Chen, Xuejin Chen, Zheng-Jun Zha
Abstract Monocular depth estimation is an essential task for scene understanding. The underlying structure of objects and stuff in a complex scene is critical to recovering accurate and visually-pleasing depth maps. Global structure conveys scene layouts, while local structure reflects shape details. Recently developed approaches based on convolutional neural networks (CNNs) significantly improve the performance of depth estimation. However, few of them take into account multi-scale structures in complex scenes. In this paper, we propose a Structure-Aware Residual Pyramid Network (SARPN) to exploit multi-scale structures for accurate depth prediction. We propose a Residual Pyramid Decoder (RPD) which expresses global scene structure in upper levels to represent layouts, and local structure in lower levels to present shape details. At each level, we propose Residual Refinement Modules (RRM) that predict residual maps to progressively add finer structures on the coarser structure predicted at the upper level. In order to fully exploit multi-scale image features, an Adaptive Dense Feature Fusion (ADFF) module, which adaptively fuses effective features from all scales for inferring structures of each scale, is introduced. Experimental results on the challenging NYU-Depth v2 dataset demonstrate that our proposed approach achieves state-of-the-art performance in both qualitative and quantitative evaluation. The code is available at https://github.com/Xt-Chen/SARPN.
Tasks Depth Estimation, Monocular Depth Estimation, Scene Understanding
Published 2019-07-13
URL https://arxiv.org/abs/1907.06023v1
PDF https://arxiv.org/pdf/1907.06023v1.pdf
PWC https://paperswithcode.com/paper/structure-aware-residual-pyramid-network-for
Repo https://github.com/Xt-Chen/SARPN
Framework pytorch
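
The residual pyramid can be sketched as coarse-to-fine refinement: predict depth at the coarsest level, then at each finer level add a predicted residual to the upsampled estimate. The fusion module (ADFF) and channel sizes below are simplified assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualPyramidDecoder(nn.Module):
    """Sketch of the residual pyramid idea: a coarse depth map captures the
    scene layout; each finer level predicts a residual that adds shape
    detail to the upsampled coarser prediction."""
    def __init__(self, channels=(256, 128, 64)):
        super().__init__()
        self.coarse = nn.Conv2d(channels[0], 1, 3, padding=1)
        self.refine = nn.ModuleList(
            nn.Conv2d(c + 1, 1, 3, padding=1) for c in channels[1:])

    def forward(self, feats):  # feats: coarsest-to-finest feature maps
        depth = self.coarse(feats[0])
        for rrm, f in zip(self.refine, feats[1:]):
            depth = F.interpolate(depth, size=f.shape[2:],
                                  mode="bilinear", align_corners=False)
            depth = depth + rrm(torch.cat([f, depth], dim=1))  # residual refinement
        return depth
```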

Learning agile and dynamic motor skills for legged robots

Title Learning agile and dynamic motor skills for legged robots
Authors Jemin Hwangbo, Joonho Lee, Alexey Dosovitskiy, Dario Bellicoso, Vassilios Tsounis, Vladlen Koltun, Marco Hutter
Abstract Legged robots pose one of the greatest challenges in robotics. Dynamic and agile maneuvers of animals cannot be imitated by existing methods that are crafted by humans. A compelling alternative is reinforcement learning, which requires minimal craftsmanship and promotes the natural evolution of a control policy. However, so far, reinforcement learning research for legged robots has mainly been limited to simulation, and only a few comparatively simple examples have been deployed on real systems. The primary reason is that training with real robots, particularly with dynamically balancing systems, is complicated and expensive. In the present work, we introduce a method for training a neural network policy in simulation and transferring it to a state-of-the-art legged system, thereby leveraging fast, automated, and cost-effective data generation schemes. The approach is applied to the ANYmal robot, a sophisticated medium-dog-sized quadrupedal system. Using policies trained in simulation, the quadrupedal machine achieves locomotion skills that go beyond what had been achieved with prior methods: ANYmal is capable of precisely and energy-efficiently following high-level body velocity commands, running faster than before, and recovering from falling even in complex configurations.
Tasks Legged Robots
Published 2019-01-24
URL http://arxiv.org/abs/1901.08652v1
PDF http://arxiv.org/pdf/1901.08652v1.pdf
PWC https://paperswithcode.com/paper/learning-agile-and-dynamic-motor-skills-for
Repo https://github.com/junja94/anymal_science_robotics_supplementary
Framework none

BoTorch: Programmable Bayesian Optimization in PyTorch

Title BoTorch: Programmable Bayesian Optimization in PyTorch
Authors Maximilian Balandat, Brian Karrer, Daniel R. Jiang, Samuel Daulton, Benjamin Letham, Andrew Gordon Wilson, Eytan Bakshy
Abstract Bayesian optimization provides sample-efficient global optimization for a broad range of applications, including automatic machine learning, molecular chemistry, and experimental design. We introduce BoTorch, a modern programming framework for Bayesian optimization. Enabled by Monte-Carlo (MC) acquisition functions and auto-differentiation, BoTorch’s modular design facilitates flexible specification and optimization of probabilistic models written in PyTorch, radically simplifying implementation of novel acquisition functions. Our MC approach is made practical by a distinctive algorithmic foundation that leverages fast predictive distributions and hardware acceleration. In experiments, we demonstrate the improved sample efficiency of BoTorch relative to other popular libraries. BoTorch is open source and available at https://github.com/pytorch/botorch.
Tasks Bayesian Optimisation
Published 2019-10-14
URL https://arxiv.org/abs/1910.06403v1
PDF https://arxiv.org/pdf/1910.06403v1.pdf
PWC https://paperswithcode.com/paper/botorch-programmable-bayesian-optimization-in
Repo https://github.com/pytorch/botorch
Framework pytorch
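
A minimal BoTorch loop on a toy objective, using the API as of the paper's release (some names have since been renamed, e.g. `fit_gpytorch_model` became `fit_gpytorch_mll` in later versions):

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_model
from botorch.acquisition import ExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

# Toy maximization problem on the unit square.
train_x = torch.rand(10, 2, dtype=torch.double)
train_y = -((train_x - 0.5) ** 2).sum(dim=1, keepdim=True)

# Fit a GP surrogate, then optimize an MC-friendly acquisition function.
gp = SingleTaskGP(train_x, train_y)
fit_gpytorch_model(ExactMarginalLogLikelihood(gp.likelihood, gp))
acq = ExpectedImprovement(gp, best_f=train_y.max())
candidate, _ = optimize_acqf(
    acq,
    bounds=torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double),
    q=1, num_restarts=5, raw_samples=32)
# `candidate` is the next point to evaluate; append it and repeat.
```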

Function Space Particle Optimization for Bayesian Neural Networks

Title Function Space Particle Optimization for Bayesian Neural Networks
Authors Ziyu Wang, Tongzheng Ren, Jun Zhu, Bo Zhang
Abstract While Bayesian neural networks (BNNs) have drawn increasing attention, their posterior inference remains challenging, due to their high-dimensional and over-parameterized nature. To address this issue, several highly flexible and scalable variational inference procedures based on the idea of particle optimization have been proposed. These methods directly optimize a set of particles to approximate the target posterior. However, their application to BNNs often yields sub-optimal performance, as such methods have a particular failure mode on over-parameterized models. In this paper, we propose to solve this issue by performing particle optimization directly in the space of regression functions. We demonstrate through extensive experiments that our method successfully overcomes this issue, and outperforms strong baselines in a variety of tasks including prediction, defense against adversarial examples, and reinforcement learning.
Tasks
Published 2019-02-26
URL https://arxiv.org/abs/1902.09754v2
PDF https://arxiv.org/pdf/1902.09754v2.pdf
PWC https://paperswithcode.com/paper/function-space-particle-optimization-for
Repo https://github.com/thu-ml/fpovi
Framework tf
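
A heavily simplified sketch of the function-space idea: run an SVGD-style update on the particles' outputs at a minibatch, then pull the update back to the weights via autograd. The RBF kernel, fixed bandwidth, and `dlogp` interface are assumptions, not the paper's exact algorithm.

```python
import torch

def fspace_svgd_step(particles, optimizers, x, dlogp, h=1.0):
    """Function-space particle update: particles are networks; the SVGD
    direction is computed on their function values at minibatch x, then
    propagated into the weights by the chain rule.
    dlogp(f): d log-posterior / d f at function values f, shape (n, b)."""
    outs = [p(x).flatten() for p in particles]         # per-particle function values
    f = torch.stack([o.detach() for o in outs])        # (n, b), detached for kernel math
    diff = f.unsqueeze(1) - f.unsqueeze(0)             # (n, n, b) pairwise differences
    k = torch.exp(-diff.pow(2).sum(-1) / (2 * h))      # RBF kernel on function values
    grad_k = -(diff / h) * k.unsqueeze(-1)             # d k(f_i, f_j) / d f_i
    phi = (k @ dlogp(f) + grad_k.sum(0)) / len(particles)  # SVGD direction per particle
    for o, ph, opt in zip(outs, phi, optimizers):
        opt.zero_grad()
        (-o * ph).sum().backward()                     # chain rule into the weights
        opt.step()
```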