May 7, 2019

3116 words 15 mins read

Paper Group AWR 95

iCaRL: Incremental Classifier and Representation Learning. Generalized Kalman Smoothing: Modeling and Algorithms. Modular Tracking Framework: A Unified Approach to Registration based Tracking. Understanding Deep Convolutional Networks. Tying Word Vectors and Word Classifiers: A Loss Framework for Language Modeling. Automatic chemical design using a …

iCaRL: Incremental Classifier and Representation Learning

Title iCaRL: Incremental Classifier and Representation Learning
Authors Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, Christoph H. Lampert
Abstract A major open problem on the road to artificial intelligence is the development of incrementally learning systems that learn about more and more concepts over time from a stream of data. In this work, we introduce a new training strategy, iCaRL, that allows learning in such a class-incremental way: only the training data for a small number of classes has to be present at the same time and new classes can be added progressively. iCaRL learns strong classifiers and a data representation simultaneously. This distinguishes it from earlier works that were fundamentally limited to fixed data representations and therefore incompatible with deep learning architectures. We show by experiments on CIFAR-100 and ImageNet ILSVRC 2012 data that iCaRL can learn many classes incrementally over a long period of time where other strategies quickly fail.
Tasks Representation Learning
Published 2016-11-23
URL http://arxiv.org/abs/1611.07725v2
PDF http://arxiv.org/pdf/1611.07725v2.pdf
PWC https://paperswithcode.com/paper/icarl-incremental-classifier-and
Repo https://github.com/srebuffi/iCaRL
Framework tf
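
The distinctive classification rule in iCaRL is nearest-mean-of-exemplars: each class is represented by the mean feature vector of its stored exemplars. A minimal numpy sketch of that rule, assuming features are already extracted and L2-normalized (names and shapes are illustrative, not the repo's API):

```python
import numpy as np

def nearest_mean_of_exemplars(x_feat, exemplar_feats):
    """iCaRL's nearest-mean-of-exemplars classification rule (sketch).

    x_feat: L2-normalized feature vector of the test image, shape (d,).
    exemplar_feats: list with one (n_k, d) array of L2-normalized
        exemplar features per class seen so far.
    """
    # Each class is represented by the mean of its stored exemplars.
    prototypes = np.stack([f.mean(axis=0) for f in exemplar_feats])
    prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)
    # Assign the class whose prototype is closest to the input.
    return int(np.argmin(np.linalg.norm(prototypes - x_feat, axis=1)))
```

Because the prototypes are recomputed from exemplars under the current representation, the rule stays consistent as the feature extractor is incrementally retrained.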

Generalized Kalman Smoothing: Modeling and Algorithms

Title Generalized Kalman Smoothing: Modeling and Algorithms
Authors A. Y. Aravkin, J. V. Burke, L. Ljung, A. Lozano, G. Pillonetto
Abstract State-space smoothing has found many applications in science and engineering. Under linear and Gaussian assumptions, smoothed estimates can be obtained using efficient recursions, for example the Rauch-Tung-Striebel and Mayne-Fraser algorithms. Such schemes are equivalent to linear algebraic techniques that minimize a convex quadratic objective function with structure induced by the dynamic model. These classical formulations fall short in many important circumstances. For instance, smoothers obtained using quadratic penalties can fail when outliers are present in the data, and cannot track impulsive inputs and abrupt state changes. Motivated by these shortcomings, generalized Kalman smoothing formulations have been proposed in the last few years, replacing quadratic models with more suitable, often nonsmooth, convex functions. In contrast to classical models, these general estimators require the use of iterated algorithms, which have received increased attention from the control, signal processing, machine learning, and optimization communities. In this survey we show that the optimization viewpoint provides the control and signal processing community with great freedom in the development of novel modeling and inference frameworks for dynamical systems. We discuss general statistical models for dynamic systems, making full use of nonsmooth convex penalties and constraints, and providing links to important models in signal processing and machine learning. We also survey optimization techniques for these formulations, paying close attention to dynamic problem structure. Modeling concepts and algorithms are illustrated with numerical examples.
Tasks
Published 2016-09-20
URL http://arxiv.org/abs/1609.06369v2
PDF http://arxiv.org/pdf/1609.06369v2.pdf
PWC https://paperswithcode.com/paper/generalized-kalman-smoothing-modeling-and
Repo https://github.com/UW-AMO/TimeSeriesES-Cell
Framework none
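
To make "replace quadratic penalties with robust convex losses" concrete, here is a toy sketch (not code from the linked repo): a 1D random-walk smoother whose measurement loss is Huber rather than quadratic, solved as a generic convex program. All constants are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def huber(r, delta=1.0):
    # Quadratic near zero, linear in the tails: robust to outliers.
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

def robust_smooth(y, lam=10.0, delta=1.0):
    """Toy generalized smoother for a 1D random-walk state model.

    Replaces the quadratic measurement loss of the classical
    Kalman/RTS smoother with a Huber penalty, in the spirit of the
    survey's outlier-robust formulations. lam weights smoothness.
    """
    def objective(x):
        meas = huber(y - x, delta).sum()        # robust data fit
        proc = lam * np.sum(np.diff(x) ** 2)    # quadratic dynamics
        return meas + proc
    return minimize(objective, y.copy(), method="L-BFGS-B").x
```

On data with a few gross outliers, this estimator tracks the underlying signal where a purely quadratic objective would be dragged toward the outliers.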

Modular Tracking Framework: A Unified Approach to Registration based Tracking

Title Modular Tracking Framework: A Unified Approach to Registration based Tracking
Authors Abhineet Singh, Martin Jagersand
Abstract This paper presents a modular, extensible and highly efficient open source framework for registration based tracking called Modular Tracking Framework (MTF). Targeted at robotics applications, it is implemented entirely in C++ and designed from the ground up to easily integrate with systems that support any of several major vision and robotics libraries including OpenCV, ROS, ViSP and Eigen. It implements more methods, runs faster, and is more precise than other existing systems. Further, the theoretical basis for its design is a new way to conceptualize registration based trackers that decomposes them into three constituent sub-modules: Search Method (SM), Appearance Model (AM) and State Space Model (SSM). In the process, we integrate many important advances published after Baker & Matthews’ landmark work in 2004. In addition to being a practical solution for fast and high precision tracking, MTF can also serve as a useful research tool by allowing existing and new methods for any of the sub-modules to be studied better. When a new method is introduced for one of these, the breakdown can help to experimentally find the combination of methods for the others that is optimal for it. By extensive use of generic programming, MTF makes it easy to plug in a new method for any of the sub-modules so that it can not only be tested comprehensively with existing methods but also become immediately available for deployment in any project that uses the framework. With 16 AMs, 11 SMs and 13 SSMs implemented already, MTF provides over 2000 distinct single layer trackers. It also allows two or more of these to be combined in several ways to create a practically unlimited variety of novel multi layer trackers.
Tasks
Published 2016-02-29
URL http://arxiv.org/abs/1602.09130v4
PDF http://arxiv.org/pdf/1602.09130v4.pdf
PWC https://paperswithcode.com/paper/modular-tracking-framework-a-unified-approach
Repo https://github.com/abhineet123/MTF
Framework none
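
MTF itself is C++, but the SM/AM/SSM decomposition is easy to state as interfaces. A hypothetical Python rendering, just to show how the three sub-modules compose (the names are mine, not MTF's):

```python
from abc import ABC, abstractmethod
import numpy as np

class AppearanceModel(ABC):
    """AM: measures how similar a warped patch is to the template."""
    @abstractmethod
    def similarity(self, patch: np.ndarray, template: np.ndarray) -> float: ...

class StateSpaceModel(ABC):
    """SSM: parameterizes the warp (translation, affine, homography, ...)."""
    @abstractmethod
    def warp(self, image: np.ndarray, params: np.ndarray) -> np.ndarray: ...

class SearchMethod(ABC):
    """SM: searches for warp parameters that maximize the AM's similarity."""
    @abstractmethod
    def track(self, image: np.ndarray, am: AppearanceModel,
              ssm: StateSpaceModel, prev_params: np.ndarray) -> np.ndarray: ...
```

Any SM can then be paired with any AM and SSM, which is how 16 AMs x 11 SMs x 13 SSMs yield the 2000+ single layer trackers mentioned in the abstract.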

Understanding Deep Convolutional Networks

Title Understanding Deep Convolutional Networks
Authors Stéphane Mallat
Abstract Deep convolutional networks provide state-of-the-art classification and regression results over many high-dimensional problems. We review their architecture, which scatters data with a cascade of linear filter weights and non-linearities. A mathematical framework is introduced to analyze their properties. Computations of invariants involve multiscale contractions, the linearization of hierarchical symmetries, and sparse separations. Applications are discussed.
Tasks
Published 2016-01-19
URL http://arxiv.org/abs/1601.04920v1
PDF http://arxiv.org/pdf/1601.04920v1.pdf
PWC https://paperswithcode.com/paper/understanding-deep-convolutional-networks
Repo https://github.com/INFOSIGA/TriagemAutom
Framework tf
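
The "cascade of linear filter weights and non-linearities" Mallat analyzes has a precise form in his scattering framework. In the notation of the scattering-transform literature (a summary, not a quote from this paper), an order-$m$ scattering coefficient is

$$
S_J[\lambda_1,\ldots,\lambda_m]\,x \;=\; \Big|\cdots\big|\,|x \star \psi_{\lambda_1}| \star \psi_{\lambda_2}\,\big|\cdots \star \psi_{\lambda_m}\Big| \star \phi_J ,
$$

where each $\psi_{\lambda}$ is a wavelet filter, the complex modulus $|\cdot|$ supplies the non-linearity, and the final low-pass average $\phi_J$ produces descriptors that are locally translation invariant and stable to small deformations.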

Tying Word Vectors and Word Classifiers: A Loss Framework for Language Modeling

Title Tying Word Vectors and Word Classifiers: A Loss Framework for Language Modeling
Authors Hakan Inan, Khashayar Khosravi, Richard Socher
Abstract Recurrent neural networks have been very successful at predicting sequences of words in tasks such as language modeling. However, all such models are based on the conventional classification framework, where the model is trained against one-hot targets, and each word is represented both as an input and as an output in isolation. This causes inefficiencies in learning both in terms of utilizing all of the information and in terms of the number of parameters needed to train. We introduce a novel theoretical framework that facilitates better learning in language modeling, and show that our framework leads to tying together the input embedding and the output projection matrices, greatly reducing the number of trainable variables. Our framework leads to state of the art performance on the Penn Treebank with a variety of network models.
Tasks Language Modelling
Published 2016-11-04
URL http://arxiv.org/abs/1611.01462v3
PDF http://arxiv.org/pdf/1611.01462v3.pdf
PWC https://paperswithcode.com/paper/tying-word-vectors-and-word-classifiers-a
Repo https://github.com/JianGoForIt/YellowFin_Pytorch
Framework pytorch
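
The headline consequence of the framework, tying the input embedding to the output projection, is a one-line change in a PyTorch language model. A minimal sketch (the paper additionally proposes an augmented KL-based loss, omitted here; sizes are illustrative):

```python
import torch.nn as nn

class TiedLM(nn.Module):
    """Tiny LSTM language model with tied input/output embeddings."""
    def __init__(self, vocab_size, dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.decoder = nn.Linear(dim, vocab_size, bias=False)
        # The tying the paper's framework leads to: the output projection
        # reuses the input embedding matrix, removing vocab*dim parameters.
        self.decoder.weight = self.embed.weight

    def forward(self, tokens):              # tokens: (B, T) int ids
        h, _ = self.lstm(self.embed(tokens))
        return self.decoder(h)              # logits over the vocabulary
```

Note that tying requires the hidden size to match the embedding size, which is why both are `dim` above.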

Automatic chemical design using a data-driven continuous representation of molecules

Title Automatic chemical design using a data-driven continuous representation of molecules
Authors Rafael Gómez-Bombarelli, Jennifer N. Wei, David Duvenaud, José Miguel Hernández-Lobato, Benjamín Sánchez-Lengeling, Dennis Sheberla, Jorge Aguilera-Iparraguirre, Timothy D. Hirzel, Ryan P. Adams, Alán Aspuru-Guzik
Abstract We report a method to convert discrete representations of molecules to and from a multidimensional continuous representation. This model allows us to generate new molecules for efficient exploration and optimization through open-ended spaces of chemical compounds. A deep neural network was trained on hundreds of thousands of existing chemical structures to construct three coupled functions: an encoder, a decoder and a predictor. The encoder converts the discrete representation of a molecule into a real-valued continuous vector, and the decoder converts these continuous vectors back to discrete molecular representations. The predictor estimates chemical properties from the latent continuous vector representation of the molecule. Continuous representations allow us to automatically generate novel chemical structures by performing simple operations in the latent space, such as decoding random vectors, perturbing known chemical structures, or interpolating between molecules. Continuous representations also allow the use of powerful gradient-based optimization to efficiently guide the search for optimized functional compounds. We demonstrate our method in the domain of drug-like molecules and also in the set of molecules with fewer than nine heavy atoms.
Tasks Efficient Exploration
Published 2016-10-07
URL http://arxiv.org/abs/1610.02415v3
PDF http://arxiv.org/pdf/1610.02415v3.pdf
PWC https://paperswithcode.com/paper/automatic-chemical-design-using-a-data-driven
Repo https://github.com/aksub99/molecular-vae
Framework pytorch
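
A sketch of the "powerful gradient-based optimization" step in latent space, assuming trained encoder/decoder/predictor networks; the function names and hyperparameters are placeholders, not the linked repo's API:

```python
import torch

def optimize_in_latent_space(z0, predictor, decoder, steps=100, lr=0.05):
    """Gradient-based search for better molecules in the latent space.

    z0: starting latent code (e.g., the encoding of a known molecule).
    predictor: differentiable map latent -> predicted property score.
    decoder: maps latent codes back to discrete molecule strings.
    All three stand in for the paper's trained networks.
    """
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -predictor(z).sum()  # ascend the predicted property
        loss.backward()
        opt.step()
    return decoder(z.detach())      # decode the optimized latent point
```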

Music transcription modelling and composition using deep learning

Title Music transcription modelling and composition using deep learning
Authors Bob L. Sturm, João Felipe Santos, Oded Ben-Tal, Iryna Korshunova
Abstract We apply deep learning methods, specifically long short-term memory (LSTM) networks, to music transcription modelling and composition. We build and train LSTM networks using approximately 23,000 music transcriptions expressed with a high-level vocabulary (ABC notation), and use them to generate new transcriptions. Our practical aim is to create music transcription models useful in particular contexts of music composition. We present results from three perspectives: 1) at the population level, comparing descriptive statistics of the set of training transcriptions and generated transcriptions; 2) at the individual level, examining how a generated transcription reflects the conventions of a music practice in the training transcriptions (Celtic folk); 3) at the application level, using the system for idea generation in music composition. We make our datasets, software and sound examples open and available: https://github.com/IraKorshunova/folk-rnn.
Tasks
Published 2016-04-29
URL http://arxiv.org/abs/1604.08723v1
PDF http://arxiv.org/pdf/1604.08723v1.pdf
PWC https://paperswithcode.com/paper/music-transcription-modelling-and-composition
Repo https://github.com/9552nZ/SmartSheetMusic
Framework none
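
Generating new transcriptions from such a model is ordinary autoregressive sampling over ABC tokens. A hedged numpy sketch, where `step_fn` is an assumed wrapper around the trained LSTM and the vocabulary maps are placeholders (a character-level vocabulary is assumed for simplicity):

```python
import numpy as np

def sample_abc(step_fn, stoi, itos, seed="X:1\n", max_len=400):
    """Autoregressively sample an ABC transcription (toy sketch).

    step_fn(token_id, state) -> (probs, state) wraps the trained LSTM;
    stoi/itos map characters to vocabulary ids and back.
    """
    state, out = None, list(seed)
    for ch in seed:                                  # warm up on the seed
        probs, state = step_fn(stoi[ch], state)
    while len(out) < max_len:
        tok = int(np.random.choice(len(probs), p=probs))
        out.append(itos[tok])
        probs, state = step_fn(tok, state)
    return "".join(out)
```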

Cooperative Inverse Reinforcement Learning

Title Cooperative Inverse Reinforcement Learning
Authors Dylan Hadfield-Menell, Anca Dragan, Pieter Abbeel, Stuart Russell
Abstract For an autonomous system to be helpful to humans and to pose no unwarranted risks, it needs to align its values with those of the humans in its environment in such a way that its actions contribute to the maximization of value for the humans. We propose a formal definition of the value alignment problem as cooperative inverse reinforcement learning (CIRL). A CIRL problem is a cooperative, partial-information game with two agents, human and robot; both are rewarded according to the human’s reward function, but the robot does not initially know what this is. In contrast to classical IRL, where the human is assumed to act optimally in isolation, optimal CIRL solutions produce behaviors such as active teaching, active learning, and communicative actions that are more effective in achieving value alignment. We show that computing optimal joint policies in CIRL games can be reduced to solving a POMDP, prove that optimality in isolation is suboptimal in CIRL, and derive an approximate CIRL algorithm.
Tasks Active Learning
Published 2016-06-09
URL http://arxiv.org/abs/1606.03137v3
PDF http://arxiv.org/pdf/1606.03137v3.pdf
PWC https://paperswithcode.com/paper/cooperative-inverse-reinforcement-learning
Repo https://github.com/chanlaw/assistive-bandits
Framework tf
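
In the paper's formulation (paraphrased here, so treat the details as approximate), a CIRL game is a tuple

$$
M \;=\; \big\langle\, S,\ \{A^{H}, A^{R}\},\ T,\ \Theta,\ R,\ P_0,\ \gamma \,\big\rangle ,
$$

with states $S$, human and robot action sets $A^{H}$ and $A^{R}$, transition dynamics $T(s' \mid s, a^{H}, a^{R})$, a reward $R(s, a^{H}, a^{R}; \theta)$ received by both agents and parameterized by $\theta \in \Theta$, an initial distribution $P_0$ over state-parameter pairs in which $\theta$ is observed only by the human, and discount $\gamma$. The reduction mentioned in the abstract treats $\theta$ as the hidden component of a POMDP state from the robot's perspective.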

Likelihood-free inference by ratio estimation

Title Likelihood-free inference by ratio estimation
Authors Owen Thomas, Ritabrata Dutta, Jukka Corander, Samuel Kaski, Michael U. Gutmann
Abstract We consider the problem of parametric statistical inference when likelihood computations are prohibitively expensive but sampling from the model is possible. Several so-called likelihood-free methods have been developed to perform inference in the absence of a likelihood function. The popular synthetic likelihood approach infers the parameters by modelling summary statistics of the data by a Gaussian probability distribution. In another popular approach called approximate Bayesian computation, the inference is performed by identifying parameter values for which the summary statistics of the simulated data are close to those of the observed data. Synthetic likelihood is easier to use as no measure of “closeness” is required but the Gaussianity assumption is often limiting. Moreover, both approaches require judiciously chosen summary statistics. We here present an alternative inference approach that is as easy to use as synthetic likelihood but not as restricted in its assumptions, and that, in a natural way, enables automatic selection of relevant summary statistics from a large set of candidates. The basic idea is to frame the problem of estimating the posterior as a problem of estimating the ratio between the data generating distribution and the marginal distribution. This problem can be solved by logistic regression, and including regularising penalty terms enables automatic selection of the summary statistics relevant to the inference task. We illustrate the general theory on canonical examples and employ it to perform inference for challenging stochastic nonlinear dynamical systems and high-dimensional summary statistics.
Tasks
Published 2016-11-30
URL https://arxiv.org/abs/1611.10242v5
PDF https://arxiv.org/pdf/1611.10242v5.pdf
PWC https://paperswithcode.com/paper/likelihood-free-inference-by-ratio-estimation
Repo https://github.com/mjarvenpaa/parallel-GP-SL
Framework none
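
The core recipe is simple enough to sketch with scikit-learn: label simulations from the candidate parameter as 1 and simulations from the marginal (parameters drawn from the prior) as 0, fit a penalized logistic regression, and read the log density ratio off the log-odds. A sketch under assumed inputs (not the linked repo's code):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def log_ratio_estimate(sims_theta, sims_marginal, observed):
    """Estimate log p(x_obs | theta) / p(x_obs) via logistic regression.

    sims_theta: summaries simulated at a candidate theta, shape (n, d).
    sims_marginal: summaries simulated with theta ~ prior, shape (m, d).
    observed: summary statistics of the observed data, shape (d,).
    With an L1 penalty, uninformative summaries get zero weight, which
    is the paper's automatic selection of relevant statistics.
    """
    X = np.vstack([sims_theta, sims_marginal])
    y = np.concatenate([np.ones(len(sims_theta)), np.zeros(len(sims_marginal))])
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
    clf.fit(X, y)
    # decision_function returns the log-odds, i.e. the log density ratio
    # (up to the known n/m prior-odds constant, ignored in this sketch).
    return clf.decision_function(observed.reshape(1, -1))[0]

# Unnormalized log posterior at theta:
#   log prior(theta) + log_ratio_estimate(sims_theta, sims_marginal, observed)
```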

Modeling Relationships in Referential Expressions with Compositional Modular Networks

Title Modeling Relationships in Referential Expressions with Compositional Modular Networks
Authors Ronghang Hu, Marcus Rohrbach, Jacob Andreas, Trevor Darrell, Kate Saenko
Abstract People often refer to entities in an image in terms of their relationships with other entities. For example, “the black cat sitting under the table” refers to both a “black cat” entity and its relationship with another “table” entity. Understanding these relationships is essential for interpreting and grounding such natural language expressions. Most prior work focuses on either grounding entire referential expressions holistically to one region, or localizing relationships based on a fixed set of categories. In this paper we instead present a modular deep architecture capable of analyzing referential expressions into their component parts, identifying entities and relationships mentioned in the input expression and grounding them all in the scene. We call this approach Compositional Modular Networks (CMNs): a novel architecture that learns linguistic analysis and visual inference end-to-end. Our approach is built around two types of neural modules that inspect local regions and pairwise interactions between regions. We evaluate CMNs on multiple referential expression datasets, outperforming state-of-the-art approaches on all tasks.
Tasks Visual Question Answering
Published 2016-11-30
URL http://arxiv.org/abs/1611.09978v1
PDF http://arxiv.org/pdf/1611.09978v1.pdf
PWC https://paperswithcode.com/paper/modeling-relationships-in-referential
Repo https://github.com/hengyuan-hu/bottom-up-attention-vqa
Framework pytorch
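
One of the two module types is easy to sketch: a localization module that scores each candidate region against the embedding of a textual component. A hypothetical PyTorch sketch (the paper's relationship module scores region pairs analogously from pair features; the dimensions are placeholders):

```python
import torch
import torch.nn as nn

class LocalizationModule(nn.Module):
    """Scores how well each region matches a textual component (sketch)."""
    def __init__(self, vis_dim, txt_dim, dim=256):
        super().__init__()
        self.v = nn.Linear(vis_dim, dim)
        self.t = nn.Linear(txt_dim, dim)
        self.score = nn.Linear(dim, 1)

    def forward(self, region_feats, phrase_emb):
        # region_feats: (n, vis_dim); phrase_emb: (txt_dim,)
        # Element-wise fusion of visual and textual features, then score.
        fused = torch.tanh(self.v(region_feats) * self.t(phrase_emb))
        return self.score(fused).squeeze(-1)   # one score per region
```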

Regressing Robust and Discriminative 3D Morphable Models with a very Deep Neural Network

Title Regressing Robust and Discriminative 3D Morphable Models with a very Deep Neural Network
Authors Anh Tuan Tran, Tal Hassner, Iacopo Masi, Gerard Medioni
Abstract The 3D shapes of faces are well known to be discriminative. Yet despite this, they are rarely used for face recognition and always under controlled viewing conditions. We claim that this is a symptom of a serious but often overlooked problem with existing methods for single view 3D face reconstruction: when applied “in the wild”, their 3D estimates are either unstable and change for different photos of the same subject or they are over-regularized and generic. In response, we describe a robust method for regressing discriminative 3D morphable face models (3DMM). We use a convolutional neural network (CNN) to regress 3DMM shape and texture parameters directly from an input photo. We overcome the shortage of training data required for this purpose by offering a method for generating huge numbers of labeled examples. The 3D estimates produced by our CNN surpass state of the art accuracy on the MICC data set. Coupled with a 3D-3D face matching pipeline, we show the first competitive face recognition results on the LFW, YTF and IJB-A benchmarks using 3D face shapes as representations, rather than the opaque deep feature vectors used by other modern systems.
Tasks 3D Face Reconstruction, Face Recognition, Face Reconstruction, Face Verification
Published 2016-12-15
URL http://arxiv.org/abs/1612.04904v1
PDF http://arxiv.org/pdf/1612.04904v1.pdf
PWC https://paperswithcode.com/paper/regressing-robust-and-discriminative-3d
Repo https://github.com/fengju514/Expression-Net
Framework tf
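
The regression network itself is conceptually plain: a very deep CNN with its classification head swapped for a 3DMM coefficient regressor. A torchvision sketch; the ResNet-101 backbone and the 99+99 shape/texture output split follow my reading of the paper and should be treated as assumptions:

```python
import torch.nn as nn
from torchvision.models import resnet101

class ThreeDMMRegressor(nn.Module):
    """CNN regressing 3DMM shape+texture coefficients from one photo."""
    def __init__(self, n_params=198):        # assumed: 99 shape + 99 texture
        super().__init__()
        backbone = resnet101(weights=None)
        # Replace the 1000-way classifier with a coefficient regressor.
        backbone.fc = nn.Linear(backbone.fc.in_features, n_params)
        self.net = backbone

    def forward(self, images):                # images: (B, 3, 224, 224)
        return self.net(images)               # 3DMM coefficients per image
```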

Asynchronous Methods for Deep Reinforcement Learning

Title Asynchronous Methods for Deep Reinforcement Learning
Authors Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu
Abstract We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
Tasks Atari Games
Published 2016-02-04
URL http://arxiv.org/abs/1602.01783v2
PDF http://arxiv.org/pdf/1602.01783v2.pdf
PWC https://paperswithcode.com/paper/asynchronous-methods-for-deep-reinforcement
Repo https://github.com/cdesilv1/sc2_ai_cdes
Framework tf
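
The per-worker update at the heart of the method is an n-step advantage actor-critic loss; the asynchrony (many workers pushing gradients to shared parameters without locks) sits around it and is omitted here. A simplified PyTorch sketch of one rollout's loss, with illustrative coefficients:

```python
import torch

def a3c_loss(log_probs, values, entropies, rewards, gamma=0.99, beta=0.01):
    """One worker's n-step advantage actor-critic loss (simplified A3C).

    log_probs, values, entropies: per-step tensors from the shared
    policy/value network over one rollout; rewards: list of floats.
    """
    R = torch.zeros(1)
    policy_loss, value_loss = 0.0, 0.0
    for t in reversed(range(len(rewards))):
        R = rewards[t] + gamma * R                    # n-step return
        advantage = R - values[t]
        value_loss = value_loss + 0.5 * advantage.pow(2)
        # detach(): the advantage acts as a baseline-corrected weight,
        # not a gradient path into the critic.
        policy_loss = policy_loss - log_probs[t] * advantage.detach() \
                      - beta * entropies[t]           # entropy bonus
    return policy_loss + value_loss
```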

Unsupervised Neural Hidden Markov Models

Title Unsupervised Neural Hidden Markov Models
Authors Ke Tran, Yonatan Bisk, Ashish Vaswani, Daniel Marcu, Kevin Knight
Abstract In this work, we present the first results for neuralizing an Unsupervised Hidden Markov Model. We evaluate our approach on tag induction. Our approach outperforms existing generative models and is competitive with the state of the art, though with a simpler model that is easily extended to include additional context.
Tasks
Published 2016-09-28
URL http://arxiv.org/abs/1609.09007v1
PDF http://arxiv.org/pdf/1609.09007v1.pdf
PWC https://paperswithcode.com/paper/unsupervised-neural-hidden-markov-models
Repo https://github.com/ketranm/neuralHMM
Framework torch
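
"Neuralizing" an HMM leaves the dynamic program untouched: as long as the prior, transition, and emission scores are differentiable log-probabilities (here, outputs of neural networks), the forward algorithm yields a differentiable log-likelihood to train on. A minimal PyTorch sketch:

```python
import torch

def hmm_log_likelihood(log_trans, log_emit, log_prior):
    """Forward algorithm in log space; fully differentiable, so the
    scores can come from neural networks as in the paper.

    log_trans: (K, K) log p(z_t = j | z_{t-1} = i)
    log_emit:  (T, K) log p(x_t | z_t = k) for the observed sequence
    log_prior: (K,)   log p(z_1)
    """
    alpha = log_prior + log_emit[0]
    for t in range(1, log_emit.size(0)):
        # logsumexp over previous states keeps everything differentiable.
        alpha = torch.logsumexp(alpha.unsqueeze(1) + log_trans, dim=0) + log_emit[t]
    return torch.logsumexp(alpha, dim=0)   # log p(x_1..T)
```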

The Partially Observable Hidden Markov Model and its Application to Keystroke Dynamics

Title The Partially Observable Hidden Markov Model and its Application to Keystroke Dynamics
Authors John V. Monaco, Charles C. Tappert
Abstract The partially observable hidden Markov model is an extension of the hidden Markov model in which the hidden state is conditioned on an independent Markov chain. This structure is motivated by the presence of discrete metadata, such as an event type, that may partially reveal the hidden state but itself emanates from a separate process. Such a scenario is encountered in keystroke dynamics whereby a user’s typing behavior is dependent on the text that is typed. Under the assumption that the user can be in either an active or passive state of typing, the keyboard key names are event types that partially reveal the hidden state due to the presence of relatively longer time intervals between words and sentences than between letters of a word. Using five public datasets, the proposed model is shown to consistently outperform other anomaly detectors, including the standard HMM, in biometric identification and verification tasks and is generally preferred over the HMM in a Monte Carlo goodness of fit test.
Tasks
Published 2016-07-13
URL http://arxiv.org/abs/1607.03854v7
PDF http://arxiv.org/pdf/1607.03854v7.pdf
PWC https://paperswithcode.com/paper/the-partially-observable-hidden-markov-model
Repo https://github.com/vmonaco/pohmm-keystroke
Framework none
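
The structural twist is small: the forward recursion is the standard HMM one, except that each step's parameters are selected by the observed event type (the key name). A toy numpy sketch under assumed shapes, with emissions taken as already conditioned on the event types:

```python
import numpy as np

def pohmm_log_likelihood(log_trans, log_emit, log_prior, events):
    """Forward algorithm for a partially observable HMM (toy sketch).

    log_trans: (E, K, K) transition log-probs, one matrix per event type
    log_emit:  (T, K) per-step emission log-probs
    log_prior: (K,) initial state log-probs; events: (T,) integer types
    """
    alpha = log_prior + log_emit[0]
    for t in range(1, len(events)):
        A = log_trans[events[t]]   # the event type picks the dynamics
        alpha = np.logaddexp.reduce(alpha[:, None] + A, axis=0) + log_emit[t]
    return np.logaddexp.reduce(alpha)
```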

Understanding Trainable Sparse Coding via Matrix Factorization

Title Understanding Trainable Sparse Coding via Matrix Factorization
Authors Thomas Moreau, Joan Bruna
Abstract Sparse coding is a core building block in many data analysis and machine learning pipelines. Typically it is solved by relying on generic optimization techniques, that are optimal in the class of first-order methods for non-smooth, convex functions, such as the Iterative Soft Thresholding Algorithm and its accelerated version (ISTA, FISTA). However, these methods exploit neither the particular structure of the problem at hand nor the input data distribution. An acceleration using neural networks, coined LISTA, was proposed by Gregor and LeCun (2010), who showed empirically that one could achieve high quality estimates with few iterations by modifying the parameters of the proximal splitting appropriately. In this paper we study the reasons for such acceleration. Our mathematical analysis reveals that it is related to a specific matrix factorization of the Gram kernel of the dictionary, which attempts to nearly diagonalise the kernel with a basis that produces a small perturbation of the $\ell_1$ ball. When this factorization succeeds, we prove that the resulting splitting algorithm enjoys an improved convergence bound with respect to the non-adaptive version. Moreover, our analysis also shows that conditions for acceleration occur mostly at the beginning of the iterative process, consistent with numerical experiments. We further validate our analysis by showing that on dictionaries where this factorization does not exist, adaptive acceleration fails.
Tasks
Published 2016-09-01
URL http://arxiv.org/abs/1609.00285v4
PDF http://arxiv.org/pdf/1609.00285v4.pdf
PWC https://paperswithcode.com/paper/understanding-trainable-sparse-coding-via
Repo https://github.com/tomMoral/AdaptiveOptim
Framework tf
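
The object of study, Gregor and LeCun's LISTA, unrolls a fixed number of ISTA iterations and learns the matrices in place of the fixed dictionary-derived ones. A compact PyTorch sketch (dimensions, step count, and initialization are illustrative):

```python
import torch
import torch.nn as nn

def soft_threshold(x, theta):
    # Proximal operator of the l1 norm.
    return torch.sign(x) * torch.clamp(torch.abs(x) - theta, min=0.0)

class LISTA(nn.Module):
    """Learned ISTA with a fixed number of unrolled iterations (sketch).

    ISTA iterates z <- soft(z - (1/L) D^T (D z - x), lambda/L) with a
    fixed dictionary D; LISTA replaces the resulting two matrices with
    learned weights We, S and a learned threshold, trained end-to-end
    to produce high quality codes in few iterations.
    """
    def __init__(self, input_dim, code_dim, n_steps=3):
        super().__init__()
        self.We = nn.Linear(input_dim, code_dim, bias=False)
        self.S = nn.Linear(code_dim, code_dim, bias=False)
        self.theta = nn.Parameter(torch.full((code_dim,), 0.1))
        self.n_steps = n_steps

    def forward(self, x):
        b = self.We(x)
        z = soft_threshold(b, self.theta)
        for _ in range(self.n_steps - 1):
            z = soft_threshold(b + self.S(z), self.theta)
        return z
```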