April 3, 2020

2912 words 14 mins read

Paper Group ANR 54


Unique Properties of Wide Minima in Deep Networks. The Big Three: A Methodology to Increase Data Science ROI by Answering the Questions Companies Care About. Learning Flat Latent Manifolds with VAEs. Detecting Replay Attacks Using Multi-Channel Audio: A Neural Network-Based Method. Recovering Geometric Information with Learned Texture Perturbations …

Unique Properties of Wide Minima in Deep Networks

Title Unique Properties of Wide Minima in Deep Networks
Authors Rotem Mulayoff, Tomer Michaeli
Abstract It is well known that (stochastic) gradient descent has an implicit bias towards wide minima. In deep neural network training, this mechanism serves to screen out minima. However, the precise effect that this has on the trained network is not yet fully understood. In this paper, we characterize the wide minima of linear neural networks trained with a quadratic loss. First, we show that linear ResNets with zero initialization necessarily converge to the widest of all minima. We then prove that these minima correspond to nearly balanced networks, in which the gain from the input to any intermediate representation does not change drastically from one layer to the next. Finally, we show that consecutive layers in wide-minima solutions are coupled: one of the left singular vectors of each weight matrix equals one of the right singular vectors of the next matrix. This forms a distinct path from input to output which, as we show, is dedicated to the signal that experiences the largest gain end-to-end. Experiments indicate that these properties are characteristic of both linear and nonlinear models trained in practice.
Published 2020-02-11
URL https://arxiv.org/abs/2002.04710v1
PDF https://arxiv.org/pdf/2002.04710v1.pdf
PWC https://paperswithcode.com/paper/unique-properties-of-wide-minima-in-deep
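The balancedness and coupling properties described in the abstract are easy to illustrate numerically. The sketch below builds a two-layer linear network from a balanced SVD factorization (an assumed construction chosen for illustration, not the paper's training procedure, which reaches such solutions implicitly via gradient descent) and checks both properties:

```python
import numpy as np

rng = np.random.default_rng(0)
# Build a two-layer linear network W2 @ W1 = M from a balanced SVD
# factorization: M = U S V^T gives W1 = S^{1/2} V^T and W2 = U S^{1/2}.
M = rng.normal(size=(4, 4))
U, s, Vt = np.linalg.svd(M)
W1 = np.diag(np.sqrt(s)) @ Vt
W2 = U @ np.diag(np.sqrt(s))
assert np.allclose(W2 @ W1, M)

# Balancedness: the layer-wise gains (spectral norms) match.
g1, g2 = np.linalg.norm(W1, 2), np.linalg.norm(W2, 2)
assert np.isclose(g1, g2)

# Coupling: a left singular vector of W1 coincides (up to sign) with a
# right singular vector of W2, forming a dedicated input-to-output path
# for the direction of largest end-to-end gain.
u1 = np.linalg.svd(W1)[0][:, 0]   # top left singular vector of W1
v2 = np.linalg.svd(W2)[2][0, :]   # top right singular vector of W2
assert np.isclose(abs(u1 @ v2), 1.0)
```

The `abs` in the last check absorbs the sign ambiguity inherent in the SVD.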

The Big Three: A Methodology to Increase Data Science ROI by Answering the Questions Companies Care About

Title The Big Three: A Methodology to Increase Data Science ROI by Answering the Questions Companies Care About
Authors Daniel K. Griffin
Abstract Companies may be achieving only a third of the value they could be getting from data science in industry applications. In this paper, we propose a methodology for categorizing and answering ‘The Big Three’ questions (what is going on, what is causing it, and what actions can I take that will optimize what I care about) using data science. The applications of data science seem nearly endless in today’s landscape, with each company jockeying for position in the new data and insights economy. Yet data scientists seem to focus solely on using classification, regression, and clustering methods to answer the question ‘what is going on’. Answering questions about why things are happening, or how to take optimal actions to improve metrics, is relegated to niche fields of research and generally neglected in industry data science analysis. We survey technical methods to answer these other important questions, describe areas in which some of these methods are being applied, and provide a practical example of how to apply our methodology and selected methods to a real business use case.
Published 2020-02-12
URL https://arxiv.org/abs/2002.07069v1
PDF https://arxiv.org/pdf/2002.07069v1.pdf
PWC https://paperswithcode.com/paper/the-big-three-a-methodology-to-increase-data

Learning Flat Latent Manifolds with VAEs

Title Learning Flat Latent Manifolds with VAEs
Authors Nutan Chen, Alexej Klushyn, Francesco Ferroni, Justin Bayer, Patrick van der Smagt
Abstract Measuring the similarity between data points often requires domain knowledge. This can in part be compensated for by relying on unsupervised methods such as latent-variable models, where similarity/distance is estimated in a more compact latent space. Prevalent is the use of the Euclidean metric, which has the drawback of ignoring information about similarity of data stored in the decoder, as captured by the framework of Riemannian geometry. Alternatives, such as approximating the geodesic, are often computationally inefficient, rendering the methods impractical. We propose an extension to the framework of variational auto-encoders that allows learning flat latent manifolds, where the Euclidean metric is a proxy for the similarity between data points. This is achieved by defining the latent space as a Riemannian manifold and by regularising the metric tensor to be a scaled identity matrix. Additionally, we replace the compact prior typically used in variational auto-encoders with a recently presented, more expressive hierarchical one, and formulate the learning problem as a constrained optimisation problem. We evaluate our method on a range of data-sets, including a video-tracking benchmark, where the performance of our unsupervised approach nears that of state-of-the-art supervised approaches, while retaining the computational efficiency of straight-line-based approaches.
Tasks Latent Variable Models
Published 2020-02-12
URL https://arxiv.org/abs/2002.04881v1
PDF https://arxiv.org/pdf/2002.04881v1.pdf
PWC https://paperswithcode.com/paper/learning-flat-latent-manifolds-with-vaes
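The metric-tensor regularisation has a particularly compact form in the simplest case. Below is a toy numpy sketch (my own illustration, not the paper's code): for a linear decoder f(z) = Az the Jacobian is constant, so the pullback metric tensor is G = AᵀA, and pushing it towards a scaled identity c²I is plain gradient descent on a Frobenius penalty:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy linear "decoder" f(z) = A z: its Jacobian is A everywhere, so the
# pullback metric tensor is G = A^T A. The flatness regulariser pushes
# G towards a scaled identity c^2 I, making Euclidean latent distances
# proportional to distances in decoder-output space. (Illustrative only;
# the paper applies this to a full VAE decoder, not a linear map.)
A = rng.normal(size=(5, 3))
c2 = 1.0     # target scale c^2 (assumed fixed here)
lr = 0.01

def flatness_penalty(A):
    G = A.T @ A
    return np.sum((G - c2 * np.eye(A.shape[1])) ** 2)

for _ in range(2000):
    G = A.T @ A
    grad = 4 * A @ (G - c2 * np.eye(3))   # d/dA ||A^T A - c^2 I||_F^2
    A -= lr * grad

# After optimisation the metric tensor is (numerically) a scaled identity.
assert flatness_penalty(A) < 1e-8
```

In the full model the Jacobian varies with z, so the paper evaluates the penalty stochastically rather than in closed form as here.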

Detecting Replay Attacks Using Multi-Channel Audio: A Neural Network-Based Method

Title Detecting Replay Attacks Using Multi-Channel Audio: A Neural Network-Based Method
Authors Yuan Gong, Jian Yang, Christian Poellabauer
Abstract With the rapidly growing number of security-sensitive systems that use voice as the primary input, it becomes increasingly important to address these systems’ potential vulnerability to replay attacks. Previous efforts to address this concern have focused primarily on single-channel audio. In this paper, we introduce a novel neural network-based replay attack detection model that further leverages spatial information of multi-channel audio and is able to significantly improve the replay attack detection performance.
Published 2020-03-18
URL https://arxiv.org/abs/2003.08225v1
PDF https://arxiv.org/pdf/2003.08225v1.pdf
PWC https://paperswithcode.com/paper/detecting-replay-attacks-using-multi-channel

Recovering Geometric Information with Learned Texture Perturbations

Title Recovering Geometric Information with Learned Texture Perturbations
Authors Jane Wu, Yongxu Jin, Zhenglin Geng, Hui Zhou, Ronald Fedkiw
Abstract Regularization is used to avoid overfitting when training a neural network; unfortunately, this reduces the attainable level of detail hindering the ability to capture high-frequency information present in the training data. Even though various approaches may be used to re-introduce high-frequency detail, it typically does not match the training data and is often not time coherent. In the case of network inferred cloth, these sentiments manifest themselves via either a lack of detailed wrinkles or unnaturally appearing and/or time incoherent surrogate wrinkles. Thus, we propose a general strategy whereby high-frequency information is procedurally embedded into low-frequency data so that when the latter is smeared out by the network the former still retains its high-frequency detail. We illustrate this approach by learning texture coordinates which when smeared do not in turn smear out the high-frequency detail in the texture itself but merely smoothly distort it. Notably, we prescribe perturbed texture coordinates that are subsequently used to correct the over-smoothed appearance of inferred cloth, and correcting the appearance from multiple camera views naturally recovers lost geometric information.
Published 2020-01-20
URL https://arxiv.org/abs/2001.07253v1
PDF https://arxiv.org/pdf/2001.07253v1.pdf
PWC https://paperswithcode.com/paper/recovering-geometric-information-with-learned

Adaptive control for hindlimb locomotion in a simulated mouse through temporal cerebellar learning

Title Adaptive control for hindlimb locomotion in a simulated mouse through temporal cerebellar learning
Authors T. P. Jensen, S. Tata, A. J. Ijspeert, S. Tolu
Abstract Human beings and other vertebrates show remarkable performance and efficiency in locomotion, but the functioning of their biological control systems for locomotion is still only partially understood. The basic patterns and timing for locomotion are provided by a central pattern generator (CPG) in the spinal cord. The cerebellum is known to play an important role in adaptive locomotion. Recent studies have given insights into the error signals responsible for driving the cerebellar adaptation in locomotion. However, the question of how the cerebellar output influences the gait remains unanswered. We hypothesize that the cerebellar correction is applied to the pattern formation part of the CPG. Here, a bio-inspired control system for adaptive locomotion of the musculoskeletal system of the mouse is presented, where a cerebellar-like module adapts the step time by using the double support interlimb asymmetry as a temporal teaching signal. The control system is tested on a simulated mouse in a split-belt treadmill setup similar to those used in experiments with real mice. The results show adaptive locomotion behavior in the interlimb parameters similar to that seen in humans and mice. The control system adaptively decreases the double support asymmetry that occurs due to environmental perturbations in the split-belt protocol.
Published 2020-02-07
URL https://arxiv.org/abs/2002.02807v2
PDF https://arxiv.org/pdf/2002.02807v2.pdf
PWC https://paperswithcode.com/paper/adaptive-control-for-hindlimb-locomotion-in-a

NLPMM: a Next Location Predictor with Markov Modeling

Title NLPMM: a Next Location Predictor with Markov Modeling
Authors Meng Chen, Yang Liu, Xiaohui Yu
Abstract In this paper, we address the problem of predicting the next locations of moving objects given a historical dataset of trajectories. We present a Next Location Predictor with Markov Modeling (NLPMM), which has the following advantages: (1) it considers both individual and collective movement patterns in making predictions, (2) it is effective even when the trajectory data is sparse, and (3) it considers the time factor and builds models suited to different time periods. We have conducted extensive experiments on a real dataset, and the results demonstrate the superiority of NLPMM over existing methods.
Published 2020-03-16
URL https://arxiv.org/abs/2003.07037v1
PDF https://arxiv.org/pdf/2003.07037v1.pdf
PWC https://paperswithcode.com/paper/nlpmm-a-next-location-predictor-with-markov
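The core Markov building block is simple to sketch. The code below is a minimal first-order Markov next-location predictor in the spirit of NLPMM: count observed transitions between locations, then predict the most frequent successor of the current location. (Illustrative only; the paper additionally models collective patterns and time periods.)

```python
from collections import Counter, defaultdict

def fit_transitions(trajectories):
    """Count first-order transitions between consecutive locations."""
    counts = defaultdict(Counter)
    for traj in trajectories:
        for cur, nxt in zip(traj, traj[1:]):
            counts[cur][nxt] += 1
    return counts

def predict_next(counts, location):
    if location not in counts:
        return None   # unseen location: back off (e.g. to a collective model)
    return counts[location].most_common(1)[0][0]

trajectories = [
    ["home", "cafe", "office"],
    ["home", "cafe", "gym"],
    ["home", "cafe", "office"],
]
counts = fit_transitions(trajectories)
print(predict_next(counts, "cafe"))   # -> office (2 of 3 observed transitions)
```

Sparsity is handled above only by the `None` back-off; NLPMM's answer is to blend such individual models with collective and time-period-specific ones.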

Prediction of Drug Synergy by Ensemble Learning

Title Prediction of Drug Synergy by Ensemble Learning
Authors Işıksu Ekşioğlu, Mehmet Tan
Abstract One of the promising methods for the treatment of complex diseases such as cancer is combination therapy. Due to the combinatorial complexity, machine learning models can be useful in this field, where significant improvements have recently been achieved in determining synergistic combinations. In this study, we investigate the effectiveness of different compound representations in predicting drug synergy. On a large drug combination screen dataset, we first demonstrate the use of a promising representation that has not been used for this problem before, then we propose an ensemble over representation-model combinations that outperforms each of the baseline models.
Published 2020-01-07
URL https://arxiv.org/abs/2001.01997v1
PDF https://arxiv.org/pdf/2001.01997v1.pdf
PWC https://paperswithcode.com/paper/prediction-of-drug-synergy-by-ensemble
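The ensembling idea, abstracted away from the chemistry, can be sketched as follows: each base model is fit on its own featurisation of the same compound pairs, and the ensemble averages their synergy predictions. The features, targets, and ridge models below are toy stand-ins, not the representations or learners used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
rep_a = rng.normal(size=(n, 4))                   # e.g. fingerprint-style features
rep_b = np.tanh(rep_a @ rng.normal(size=(4, 6)))  # e.g. a learned embedding
y = rep_a @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * rng.normal(size=n)

def ridge_fit(X, y, lam=1e-2):
    # Closed-form ridge regression: (X^T X + lam I)^{-1} X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# One prediction vector per representation-model combination.
preds = [rep_a @ ridge_fit(rep_a, y), rep_b @ ridge_fit(rep_b, y)]
ensemble = np.mean(preds, axis=0)                 # simple averaging ensemble
print(np.mean((ensemble - y) ** 2))               # ensemble mean-squared error
```

Averaging is only the simplest combiner; the paper's point is that the diversity across representations is what gives the ensemble its edge.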

Scalable and Practical Natural Gradient for Large-Scale Deep Learning

Title Scalable and Practical Natural Gradient for Large-Scale Deep Learning
Authors Kazuki Osawa, Yohei Tsuji, Yuichiro Ueno, Akira Naruse, Chuan-Sheng Foo, Rio Yokota
Abstract Large-scale distributed training of deep neural networks results in models with worse generalization performance, owing to the increase in the effective mini-batch size. Previous approaches attempt to address this problem by varying the learning rate and batch size over epochs and layers, or through ad hoc modifications of batch normalization. We propose Scalable and Practical Natural Gradient Descent (SP-NGD), a principled approach for training models that allows them to attain generalization performance similar to models trained with first-order optimization methods, but with accelerated convergence. Furthermore, SP-NGD scales to large mini-batch sizes with negligible computational overhead compared to first-order methods. We evaluated SP-NGD on a benchmark task where highly optimized first-order methods are available as references: training a ResNet-50 model for image classification on ImageNet. We demonstrate convergence to a top-1 validation accuracy of 75.4% in 5.5 minutes using a mini-batch size of 32,768 with 1,024 GPUs, as well as an accuracy of 74.9% with an extremely large mini-batch size of 131,072 in 873 steps of SP-NGD.
Tasks Image Classification
Published 2020-02-13
URL https://arxiv.org/abs/2002.06015v1
PDF https://arxiv.org/pdf/2002.06015v1.pdf
PWC https://paperswithcode.com/paper/scalable-and-practical-natural-gradient-for
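For readers unfamiliar with natural gradient descent itself: the update preconditions the gradient with the Fisher information matrix. The sketch below shows the textbook update (Fisher scoring) on toy logistic regression; SP-NGD's contribution is making this kind of update practical for deep networks at scale via layer-wise approximations, none of which appear here.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy logistic regression data.
n = 200
X = rng.normal(size=(n, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = (1 / (1 + np.exp(-X @ w_true)) > rng.uniform(size=n)).astype(float)

# Natural-gradient updates: precondition the gradient with the empirical
# Fisher matrix F = X^T D X / n, where D = diag(p(1-p)).
w = np.zeros(3)
for _ in range(30):
    p = 1 / (1 + np.exp(-X @ w))
    grad = X.T @ (p - y) / n
    F = X.T @ (X * (p * (1 - p))[:, None]) / n        # empirical Fisher
    w -= np.linalg.solve(F + 1e-6 * np.eye(3), grad)  # natural-gradient step

print(w)   # maximum-likelihood estimate; near w_true up to sampling noise
```

The small diagonal damping term plays the same stabilising role that damping does in practical second-order methods.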

Online Agnostic Boosting via Regret Minimization

Title Online Agnostic Boosting via Regret Minimization
Authors Nataly Brukhim, Xinyi Chen, Elad Hazan, Shay Moran
Abstract Boosting is a widely used machine learning approach based on the idea of aggregating weak learning rules. While in statistical learning numerous boosting methods exist both in the realizable and agnostic settings, in online learning they exist only in the realizable case. In this work we provide the first agnostic online boosting algorithm; that is, given a weak learner with only marginally-better-than-trivial regret guarantees, our algorithm boosts it to a strong learner with sublinear regret. Our algorithm is based on an abstract (and simple) reduction to online convex optimization, which efficiently converts an arbitrary online convex optimizer to an online booster. Moreover, this reduction extends to the statistical as well as the online realizable settings, thus unifying the 4 cases of statistical/online and agnostic/realizable boosting.
Published 2020-03-02
URL https://arxiv.org/abs/2003.01150v1
PDF https://arxiv.org/pdf/2003.01150v1.pdf
PWC https://paperswithcode.com/paper/online-agnostic-boosting-via-regret
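To give a concrete feel for the reduction to online convex optimization: treat the booster's combination weights over k weak learners as an OCO problem. Each round the booster plays a weighted vote, suffers a convex surrogate loss, and updates the weights by projected online gradient descent on the simplex. This is an illustrative OCO-style booster of my own construction, not the algorithm or guarantees in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u > css / np.arange(1, len(v) + 1))[0][-1]
    return np.maximum(v - css[rho] / (rho + 1), 0.0)

T, k = 500, 3
# Weak learner 0 matches the target; 1 is irrelevant; 2 is random noise.
weak = lambda x: np.array([np.sign(x[0]), np.sign(x[1]), rng.choice([-1.0, 1.0])])
w = np.ones(k) / k
for t in range(T):
    x = rng.normal(size=2)
    y = np.sign(x[0])                          # target label
    h = weak(x)                                # weak learners' predictions
    grad = 2.0 * (w @ h - y) * h               # gradient of (w.h - y)^2 in w
    w = project_simplex(w - 0.05 / np.sqrt(t + 1) * grad)

print(w)   # the weight should concentrate on the informative learner 0
```

The paper's reduction is more general: any online convex optimizer can be plugged in, and the argument also covers the statistical and realizable settings.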

Unsupervised Multi-Modal Image Registration via Geometry Preserving Image-to-Image Translation

Title Unsupervised Multi-Modal Image Registration via Geometry Preserving Image-to-Image Translation
Authors Moab Arar, Yiftach Ginger, Dov Danon, Ilya Leizerson, Amit Bermano, Daniel Cohen-Or
Abstract Many applications, such as autonomous driving, rely heavily on multi-modal data, where spatial alignment between the modalities is required. Most multi-modal registration methods struggle to compute the spatial correspondence between the images using prevalent cross-modality similarity measures. In this work, we bypass the difficulties of developing cross-modality similarity measures by training an image-to-image translation network on the two input modalities. This learned translation allows training the registration network using simple and reliable mono-modality metrics. We perform multi-modal registration using two networks: a spatial transformation network and a translation network. We show that by encouraging our translation network to be geometry preserving, we manage to train an accurate spatial transformation network. Compared to state-of-the-art multi-modal methods, our method is unsupervised, requires no pairs of aligned modalities for training, and can be adapted to any pair of modalities. We evaluate our method quantitatively and qualitatively on commercial datasets, showing that it performs well on several modalities and achieves accurate alignment.
Tasks Autonomous Driving, Image Registration, Image-to-Image Translation
Published 2020-03-18
URL https://arxiv.org/abs/2003.08073v1
PDF https://arxiv.org/pdf/2003.08073v1.pdf
PWC https://paperswithcode.com/paper/unsupervised-multi-modal-image-registration

Nested-Wasserstein Self-Imitation Learning for Sequence Generation

Title Nested-Wasserstein Self-Imitation Learning for Sequence Generation
Authors Ruiyi Zhang, Changyou Chen, Zhe Gan, Zheng Wen, Wenlin Wang, Lawrence Carin
Abstract Reinforcement learning (RL) has been widely studied for improving sequence-generation models. However, the conventional rewards used for RL training typically cannot capture sufficient semantic information and therefore introduce model bias. Further, the sparse and delayed rewards make RL exploration inefficient. To alleviate these issues, we propose the concept of nested-Wasserstein distance for distributional semantic matching. To further exploit it, a novel nested-Wasserstein self-imitation learning framework is developed, encouraging the model to exploit historical high-reward sequences for enhanced exploration and better semantic matching. Our solution can be understood as approximately executing proximal policy optimization with Wasserstein trust regions. Experiments on a variety of unconditional and conditional sequence-generation tasks demonstrate that the proposed approach consistently leads to improved performance.
Tasks Imitation Learning
Published 2020-01-20
URL https://arxiv.org/abs/2001.06944v1
PDF https://arxiv.org/pdf/2001.06944v1.pdf
PWC https://paperswithcode.com/paper/nested-wasserstein-self-imitation-learning

Training Neural Network Controllers Using Control Barrier Functions in the Presence of Disturbances

Title Training Neural Network Controllers Using Control Barrier Functions in the Presence of Disturbances
Authors Shakiba Yaghoubi, Georgios Fainekos, Sriram Sankaranarayanan
Abstract Control Barrier Functions (CBFs) have recently been utilized in the design of provably safe feedback control laws for nonlinear systems. These feedback control methods typically compute the next control input by solving an online Quadratic Program (QP). Solving a QP in real time can be a computationally expensive process for resource-constrained systems. In this work, we propose to use imitation learning to learn neural-network-based feedback controllers that satisfy the CBF constraints. In the process, we also develop a new class of High-Order CBFs for systems under external disturbances. We demonstrate the framework on a unicycle model subject to external disturbances, e.g., wind or currents.
Tasks Imitation Learning
Published 2020-01-18
URL https://arxiv.org/abs/2001.08088v1
PDF https://arxiv.org/pdf/2001.08088v1.pdf
PWC https://paperswithcode.com/paper/training-neural-network-controllers-using
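The online QP that the learned controller is meant to replace can be made concrete in one dimension. For the integrator ẋ = u with safe set h(x) = x ≥ 0, the CBF condition ḣ(x) + αh(x) ≥ 0 reduces to u ≥ −αx, so the minimum-deviation QP min_u (u − u_des)² subject to u ≥ −αx has the closed-form solution below. (A 1-D illustration of the standard CBF-QP idea, not the paper's disturbance-aware formulation; a learned controller would imitate the map (x, u_des) → u to avoid solving the QP online.)

```python
def cbf_filter(x, u_des, alpha=1.0):
    """Minimum-deviation safe control for the 1-D CBF constraint u >= -alpha*x."""
    lower = -alpha * x          # CBF condition for h(x) = x: u + alpha*x >= 0
    return max(u_des, lower)    # closed-form QP solution: clip from below

# Simulate: the nominal controller tries to drive x negative, but the
# filter keeps the state in the safe set (x stays >= 0).
x, dt = 1.0, 0.01
for _ in range(1000):
    u = cbf_filter(x, u_des=-5.0)
    x += dt * u
assert x >= 0
print(x)
```

In higher dimensions the same constraint is affine in u, which is why the online problem is a QP rather than something harder.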

Learning Task-Driven Control Policies via Information Bottlenecks

Title Learning Task-Driven Control Policies via Information Bottlenecks
Authors Vincent Pacelli, Anirudha Majumdar
Abstract This paper presents a reinforcement learning approach to synthesizing task-driven control policies for robotic systems equipped with rich sensory modalities (e.g., vision or depth). Standard reinforcement learning algorithms typically produce policies that tightly couple control actions to the entirety of the system’s state and rich sensor observations. As a consequence, the resulting policies can often be sensitive to changes in task-irrelevant portions of the state or observations (e.g., changing background colors). In contrast, the approach we present here learns to create a task-driven representation that is used to compute control actions. Formally, this is achieved by deriving a policy gradient-style algorithm that creates an information bottleneck between the states and the task-driven representation; this constrains actions to only depend on task-relevant information. We demonstrate our approach in a thorough set of simulation results on multiple examples including a grasping task that utilizes depth images and a ball-catching task that utilizes RGB images. Comparisons with a standard policy gradient approach demonstrate that the task-driven policies produced by our algorithm are often significantly more robust to sensor noise and task-irrelevant changes in the environment.
Published 2020-02-04
URL https://arxiv.org/abs/2002.01428v1
PDF https://arxiv.org/pdf/2002.01428v1.pdf
PWC https://paperswithcode.com/paper/learning-task-driven-control-policies-via

On Computation and Generalization of Generative Adversarial Imitation Learning

Title On Computation and Generalization of Generative Adversarial Imitation Learning
Authors Minshuo Chen, Yizhou Wang, Tianyi Liu, Zhuoran Yang, Xingguo Li, Zhaoran Wang, Tuo Zhao
Abstract Generative Adversarial Imitation Learning (GAIL) is a powerful and practical approach for learning sequential decision-making policies. Unlike Reinforcement Learning (RL), GAIL takes advantage of demonstration data from experts (e.g., humans) and learns both the policy and the reward function of the unknown environment. Despite significant empirical progress, the theory behind GAIL is still largely unknown. The major difficulty comes from the underlying temporal dependency of the demonstration data and the minimax computational formulation of GAIL, which lacks a convex-concave structure. To bridge this gap between theory and practice, this paper investigates the theoretical properties of GAIL. Specifically, we show: (1) for GAIL with general reward parameterization, generalization can be guaranteed as long as the class of reward functions is properly controlled; (2) when the reward is parameterized as a reproducing kernel function, GAIL can be efficiently solved by stochastic first-order optimization algorithms, which attain sublinear convergence to a stationary solution. To the best of our knowledge, these are the first results on statistical and computational guarantees of imitation learning with reward/policy function approximation. Numerical experiments are provided to support our analysis.
Tasks Decision Making, Imitation Learning
Published 2020-01-09
URL https://arxiv.org/abs/2001.02792v2
PDF https://arxiv.org/pdf/2001.02792v2.pdf
PWC https://paperswithcode.com/paper/on-computation-and-generalization-of-1