May 6, 2019

3008 words 15 mins read

Paper Group ANR 387

Combination of LMS Adaptive Filters with Coefficients Feedback. Unsupervised convolutional neural networks for motion estimation. Interaction Screening: Efficient and Sample-Optimal Learning of Ising Models. Learning an Astronomical Catalog of the Visible Universe through Scalable Bayesian Inference. The Neural Noisy Channel. Bayesian Optical Flow …

Combination of LMS Adaptive Filters with Coefficients Feedback

Title Combination of LMS Adaptive Filters with Coefficients Feedback
Authors Luiz F. O. Chamon, Cassio G. Lopes
Abstract Parallel combinations of adaptive filters have been effectively used to improve the performance of adaptive algorithms and address well-known trade-offs, such as convergence rate vs. steady-state error. Nevertheless, typical combinations suffer from a convergence stagnation issue due to the fact that the component filters run independently. Solutions to this issue usually involve conditional transfers of coefficients between filters, which although effective, are hard to generalize to combinations with more filters or when there is no clearly faster adaptive filter. In this work, a more natural solution is proposed by cyclically feeding back the combined coefficient vector to all component filters. Besides coping with convergence stagnation, this new topology improves tracking and supervisor stability, and bridges an important conceptual gap between combinations of adaptive filters and variable step size schemes. We analyze the steady-state, tracking, and transient performance of this topology for LMS component filters and supervisors with generic activation functions. Numerical examples are used to illustrate how coefficients feedback can improve the performance of parallel combinations at a small computational overhead.
Tasks
Published 2016-08-10
URL http://arxiv.org/abs/1608.03248v2
PDF http://arxiv.org/pdf/1608.03248v2.pdf
PWC https://paperswithcode.com/paper/combination-of-lms-adaptive-filters-with
Repo
Framework
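
The abstract's key idea, cyclically feeding the combined coefficient vector back to the component filters, can be illustrated with a small NumPy sketch. The step sizes, feedback period, and sigmoid supervisor below are illustrative assumptions, not the paper's exact configuration; the feedback step is the only difference from a standard convex combination of independently running LMS filters.

```python
import numpy as np

def combined_lms_with_feedback(x, d, M=8, mu_fast=0.05, mu_slow=0.005,
                               mu_a=1.0, feedback_period=100):
    """Convex combination of a fast and a slow LMS filter with cyclic
    coefficient feedback (illustrative sketch only): every `feedback_period`
    samples, the combined weight vector overwrites both component filters."""
    N = len(d)
    w_fast = np.zeros(M)
    w_slow = np.zeros(M)
    a = 0.0                                   # auxiliary variable of the supervisor
    y = np.zeros(N)
    for n in range(M, N):
        u = x[n - M:n][::-1]                  # regressor (most recent sample first)
        y_fast, y_slow = u @ w_fast, u @ w_slow
        lam = 1.0 / (1.0 + np.exp(-a))        # sigmoid mixing weight in (0, 1)
        y[n] = lam * y_fast + (1.0 - lam) * y_slow
        e = d[n] - y[n]
        # independent LMS updates of the component filters
        w_fast += mu_fast * (d[n] - y_fast) * u
        w_slow += mu_slow * (d[n] - y_slow) * u
        # supervisor update: stochastic gradient on the combined squared error
        a += mu_a * e * (y_fast - y_slow) * lam * (1.0 - lam)
        # cyclic coefficient feedback: both components restart from the combination
        if n % feedback_period == 0:
            w_comb = lam * w_fast + (1.0 - lam) * w_slow
            w_fast[:] = w_comb
            w_slow[:] = w_comb
    return y
```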

Unsupervised convolutional neural networks for motion estimation

Title Unsupervised convolutional neural networks for motion estimation
Authors Aria Ahmadi, Ioannis Patras
Abstract Traditional methods for motion estimation estimate the motion field F between a pair of images as the one that minimizes a predesigned cost function. In this paper, we propose a direct method and train a Convolutional Neural Network (CNN) that, when given a pair of images as input at test time, produces a dense motion field F at its output layer. In the absence of large datasets with ground truth motion that would allow classical supervised training, we propose to train the network in an unsupervised manner. The proposed cost function, which is optimized during training, is based on the classical optical flow constraint. The latter is differentiable with respect to the motion field and, therefore, allows backpropagation of the error to previous layers of the network. Our method is tested on both synthetic and real image sequences and performs similarly to the state-of-the-art methods.
Tasks Motion Estimation, Optical Flow Estimation
Published 2016-01-22
URL http://arxiv.org/abs/1601.06087v1
PDF http://arxiv.org/pdf/1601.06087v1.pdf
PWC https://paperswithcode.com/paper/unsupervised-convolutional-neural-networks
Repo
Framework
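
As a rough illustration of the unsupervised training signal described above, the sketch below implements a brightness-constancy (optical flow constraint) loss with a simple smoothness term in PyTorch. The tensor layout, L1 penalties, and `smooth_weight` are assumptions for the sketch, not the paper's exact cost function.

```python
import torch
import torch.nn.functional as F

def photometric_flow_loss(img1, img2, flow, smooth_weight=0.1):
    """Unsupervised loss sketch: warp img2 towards img1 with the predicted flow,
    penalise the photometric difference, and add first-order flow smoothness.

    img1, img2: (B, C, H, W) tensors; flow: (B, 2, H, W) in pixel units, with
    channel 0 the horizontal and channel 1 the vertical displacement (assumed).
    """
    B, _, H, W = flow.shape
    # pixel-coordinate grid, shifted by the predicted flow
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(flow.device)   # (2, H, W)
    coords = base.unsqueeze(0) + flow                              # (B, 2, H, W)
    # normalise to [-1, 1] as required by grid_sample, shape (B, H, W, 2)
    gx = 2.0 * coords[:, 0] / (W - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (H - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)
    warped = F.grid_sample(img2, grid, align_corners=True)
    photometric = (img1 - warped).abs().mean()
    smooth = (flow[:, :, :, 1:] - flow[:, :, :, :-1]).abs().mean() + \
             (flow[:, :, 1:, :] - flow[:, :, :-1, :]).abs().mean()
    return photometric + smooth_weight * smooth
```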

Interaction Screening: Efficient and Sample-Optimal Learning of Ising Models

Title Interaction Screening: Efficient and Sample-Optimal Learning of Ising Models
Authors Marc Vuffray, Sidhant Misra, Andrey Y. Lokhov, Michael Chertkov
Abstract We consider the problem of learning the underlying graph of an unknown Ising model on p spins from a collection of i.i.d. samples generated from the model. We suggest a new estimator that is computationally efficient and requires a number of samples that is near-optimal with respect to the previously established information-theoretic lower bound. Our statistical estimator has a physical interpretation in terms of “interaction screening”. The estimator is consistent and is efficiently implemented using convex optimization. We prove that with appropriate regularization, the estimator recovers the underlying graph using a number of samples that is logarithmic in the system size p and exponential in the maximum coupling intensity and maximum node degree.
Tasks
Published 2016-05-24
URL http://arxiv.org/abs/1605.07252v3
PDF http://arxiv.org/pdf/1605.07252v3.pdf
PWC https://paperswithcode.com/paper/interaction-screening-efficient-and-sample
Repo
Framework
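
A minimal NumPy sketch of the interaction-screening idea for a single node: minimise the empirical average of exp(-σ_u ⟨θ, σ_rest⟩) with an l1 penalty, here by proximal gradient descent rather than the off-the-shelf convex solver one would use in practice. The learning rate, penalty weight, and iteration count are illustrative.

```python
import numpy as np

def interaction_screening_node(samples, u, lam=0.1, lr=0.1, iters=500):
    """Sketch of the interaction-screening estimator for one node of an Ising
    model; thresholding the returned vector recovers that node's neighbourhood.

    samples: (n, p) array of +/-1 spins; u: index of the node to reconstruct.
    Returns the estimated couplings of node u to the other p-1 spins.
    """
    n, p = samples.shape
    s_u = samples[:, u]                              # (n,)
    s_rest = np.delete(samples, u, axis=1)           # (n, p-1)
    theta = np.zeros(p - 1)
    for _ in range(iters):
        z = np.exp(-s_u * (s_rest @ theta))          # interaction-screening terms
        grad = -(s_rest * (s_u * z)[:, None]).mean(axis=0)
        theta -= lr * grad
        # proximal step for the l1 regulariser (soft-thresholding)
        theta = np.sign(theta) * np.maximum(np.abs(theta) - lr * lam, 0.0)
    return theta
```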

Learning an Astronomical Catalog of the Visible Universe through Scalable Bayesian Inference

Title Learning an Astronomical Catalog of the Visible Universe through Scalable Bayesian Inference
Authors Jeffrey Regier, Kiran Pamnany, Ryan Giordano, Rollin Thomas, David Schlegel, Jon McAuliffe, Prabhat
Abstract Celeste is a procedure for inferring astronomical catalogs that attains state-of-the-art scientific results. To date, Celeste has been scaled to at most hundreds of megabytes of astronomical images: Bayesian posterior inference is notoriously demanding computationally. In this paper, we report on a scalable, parallel version of Celeste, suitable for learning catalogs from modern large-scale astronomical datasets. Our algorithmic innovations include a fast numerical optimization routine for Bayesian posterior inference and a statistically efficient scheme for decomposing astronomical optimization problems into subproblems. Our scalable implementation is written entirely in Julia, a new high-level dynamic programming language designed for scientific and numerical computing. We use Julia’s high-level constructs for shared and distributed memory parallelism, and demonstrate effective load balancing and efficient scaling on up to 8192 Xeon cores on the NERSC Cori supercomputer.
Tasks Bayesian Inference
Published 2016-11-10
URL http://arxiv.org/abs/1611.03404v1
PDF http://arxiv.org/pdf/1611.03404v1.pdf
PWC https://paperswithcode.com/paper/learning-an-astronomical-catalog-of-the
Repo
Framework

The Neural Noisy Channel

Title The Neural Noisy Channel
Authors Lei Yu, Phil Blunsom, Chris Dyer, Edward Grefenstette, Tomas Kocisky
Abstract We formulate sequence to sequence transduction as a noisy channel decoding problem and use recurrent neural networks to parameterise the source and channel models. Unlike direct models which can suffer from explaining-away effects during training, noisy channel models must produce outputs that explain their inputs, and their component models can be trained with not only paired training samples but also unpaired samples from the marginal output distribution. Using a latent variable to control how much of the conditioning sequence the channel model needs to read in order to generate a subsequent symbol, we obtain a tractable and effective beam search decoder. Experimental results on abstractive sentence summarisation, morphological inflection, and machine translation show that noisy channel models outperform direct models, and that they significantly benefit from increased amounts of unpaired output data that direct models cannot easily use.
Tasks Machine Translation, Morphological Inflection
Published 2016-11-08
URL http://arxiv.org/abs/1611.02554v2
PDF http://arxiv.org/pdf/1611.02554v2.pdf
PWC https://paperswithcode.com/paper/the-neural-noisy-channel
Repo
Framework
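
The following sketch shows the noisy-channel scoring idea in its simplest (rescoring) form: candidate outputs are ranked by channel score plus weighted language-model score. The `channel_logprob` and `lm_logprob` callables, the interpolation weight, and the length bonus are hypothetical placeholders; the paper's decoder interleaves these scores inside an incremental beam search rather than rescoring complete candidates.

```python
def noisy_channel_score(candidates, source_x, channel_logprob, lm_logprob,
                        lam=1.0, length_bonus=0.2):
    """Rank candidate outputs y for input x by log p(x | y) + lam * log p(y)
    plus an optional length bonus, and return the best one. The channel model
    must explain the input given the output; the language model scores the
    output alone, so it can be trained on unpaired output data."""
    scored = []
    for y in candidates:
        score = (channel_logprob(source_x, y)      # how well y explains x
                 + lam * lm_logprob(y)             # fluency / prior over outputs
                 + length_bonus * len(y))          # crude length normalisation
        scored.append((score, y))
    return max(scored, key=lambda t: t[0])[1]
```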

Bayesian Optical Flow with Uncertainty Quantification

Title Bayesian Optical Flow with Uncertainty Quantification
Authors Jie Sun, Fernando J. Quevedo, Erik Bollt
Abstract Optical flow refers to the visual motion observed between two consecutive images. Since the number of degrees of freedom is typically much larger than the number of constraints imposed by the image observations, the straightforward formulation of optical flow as an inverse problem is ill-posed. Standard approaches to determining optical flow rely on formulating and solving an optimization problem that contains both a data fidelity term and a regularization term, the latter effectively resolving the otherwise ill-posed nature of the inverse problem. In this work, we depart from the deterministic formalism and instead treat optical flow as a statistical inverse problem. We discuss how a classical optical flow solution can be interpreted as a point estimate in this more general framework. The statistical approach, whose “solution” is a distribution of flow fields that we refer to as Bayesian optical flow, allows not only “point” estimates (e.g., the computation of the average flow field), but also statistical estimates (e.g., quantification of uncertainty) that are beyond any standard method for optical flow. As an application, we benchmark Bayesian optical flow together with uncertainty quantification using several types of prescribed ground-truth flow fields and images.
Tasks Optical Flow Estimation
Published 2016-11-04
URL http://arxiv.org/abs/1611.01230v2
PDF http://arxiv.org/pdf/1611.01230v2.pdf
PWC https://paperswithcode.com/paper/bayesian-optical-flow-with-uncertainty
Repo
Framework
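
To make the "distribution over flow fields" concrete, here is a linear-Gaussian sketch: with the linearised brightness-constancy constraint as the likelihood and a smoothness prior, the posterior over the flow is Gaussian, its mean recovers a classical regularised estimate and its covariance quantifies uncertainty. The dense solve below is only feasible for tiny images, and the noise and prior parameters are illustrative; this is not the paper's algorithm.

```python
import numpy as np

def bayesian_flow_posterior(Ix, Iy, It, sigma2=1e-2, lam=1.0):
    """Gaussian posterior over the stacked flow field [u; v] for the linearised
    constraint Ix*u + Iy*v + It ~ N(0, sigma2) with a grid-Laplacian prior.

    Ix, Iy, It: (H, W) spatial and temporal image derivatives.
    Returns the posterior mean as an (H, W, 2) flow and the full covariance.
    """
    H, W = Ix.shape
    n = H * W
    # observation operator: one brightness-constancy constraint per pixel
    A = np.hstack((np.diag(Ix.ravel()), np.diag(Iy.ravel())))       # (n, 2n)
    b = -It.ravel()
    # grid Laplacian as the smoothness prior precision (applied to u and v)
    L = np.zeros((n, n))
    for i in range(H):
        for j in range(W):
            k = i * W + j
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < H and 0 <= jj < W:
                    L[k, k] += 1.0
                    L[k, ii * W + jj] -= 1.0
    P = np.block([[L, np.zeros((n, n))], [np.zeros((n, n)), L]])    # prior precision
    post_prec = A.T @ A / sigma2 + lam * P + 1e-8 * np.eye(2 * n)   # tiny ridge
    post_cov = np.linalg.inv(post_prec)
    post_mean = post_cov @ (A.T @ b / sigma2)
    flow_mean = np.stack((post_mean[:n].reshape(H, W),
                          post_mean[n:].reshape(H, W)), axis=-1)
    return flow_mean, post_cov
```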

Once for All: a Two-flow Convolutional Neural Network for Visual Tracking

Title Once for All: a Two-flow Convolutional Neural Network for Visual Tracking
Authors Kai Chen, Wenbing Tao
Abstract One of the main challenges of visual object tracking comes from the arbitrary appearance of objects. Most existing algorithms try to resolve this problem as an object-specific task, i.e., the model is trained to regenerate or classify a specific object. As a result, the model needs to be initialized and retrained for different objects. In this paper, we propose a more generic approach utilizing a novel two-flow convolutional neural network (named YCNN). The YCNN takes two inputs (one is the object image patch, the other is the search image patch), then outputs a response map which predicts how likely the object is to appear at a specific location. Unlike object-specific approaches, the YCNN is trained to measure the similarity between two image patches. Thus it will not be confined to any specific object. Furthermore, the network can be trained end-to-end to extract both shallow and deep convolutional features which are dedicated to visual tracking. And once properly trained, the YCNN can be applied to track all kinds of objects without further training and updating. Benefiting from the once-for-all model, our algorithm is able to run at a very high speed of 45 frames per second. The experiments on 51 sequences also show that our algorithm achieves an outstanding performance.
Tasks Object Tracking, Visual Object Tracking, Visual Tracking
Published 2016-04-26
URL http://arxiv.org/abs/1604.07507v1
PDF http://arxiv.org/pdf/1604.07507v1.pdf
PWC https://paperswithcode.com/paper/once-for-all-a-two-flow-convolutional-neural
Repo
Framework
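
A minimal PyTorch sketch of the two-flow idea: shared convolutional features for the object and search patches, followed by cross-correlation to produce a response map. Layer sizes, patch sizes, and the sigmoid output are assumptions, not the YCNN architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoFlowTracker(nn.Module):
    """Two inputs (object patch, larger search patch) pass through shared conv
    layers; the object embedding is used as a correlation filter over the
    search embedding, yielding a map of how likely the object is at each shift."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )

    def forward(self, obj_patch, search_patch):
        f_obj = self.features(obj_patch)        # (B, 64, h, w)
        f_srch = self.features(search_patch)    # (B, 64, H, W), H >= h
        maps = []
        for i in range(f_obj.size(0)):          # per-sample cross-correlation
            kernel = f_obj[i:i + 1]              # treat object features as a filter
            maps.append(F.conv2d(f_srch[i:i + 1], kernel))
        return torch.sigmoid(torch.cat(maps, dim=0))   # (B, 1, H-h+1, W-w+1)

# usage sketch: a 64x64 object patch against a 128x128 search patch
# net = TwoFlowTracker()
# response = net(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 128, 128))
```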

Context-guided diffusion for label propagation on graphs

Title Context-guided diffusion for label propagation on graphs
Authors Kwang In Kim, James Tompkin, Hanspeter Pfister, Christian Theobalt
Abstract Existing approaches for diffusion on graphs, e.g., for label propagation, are mainly focused on isotropic diffusion, which is induced by the commonly-used graph Laplacian regularizer. Inspired by the success of diffusivity tensors for anisotropic diffusion in image processing, we present anisotropic diffusion on graphs and the corresponding label propagation algorithm. We develop positive definite diffusivity operators on the vector bundles of Riemannian manifolds, and discretize them to diffusivity operators on graphs. This enables us to easily define new robust diffusivity operators which significantly improve semi-supervised learning performance over existing diffusion algorithms.
Tasks
Published 2016-02-20
URL http://arxiv.org/abs/1602.06439v1
PDF http://arxiv.org/pdf/1602.06439v1.pdf
PWC https://paperswithcode.com/paper/context-guided-diffusion-for-label
Repo
Framework
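
For reference, a small NumPy sketch of the baseline setting the paper improves on: label propagation by graph diffusion, where known labels are clamped and diffused through an affinity matrix via the graph Laplacian. Swapping the plain Laplacian for an anisotropic, context-guided diffusivity operator is the paper's contribution and is not implemented here; `alpha` and the ridge term are illustrative.

```python
import numpy as np

def label_propagation(W, labels, mask, alpha=10.0):
    """Solve (L + alpha * diag(mask)) f = alpha * diag(mask) @ labels, i.e.
    diffuse clamped labels through the graph defined by the affinities W.

    W: (n, n) symmetric non-negative affinities; labels: (n, c) one-hot rows,
    valid only where the boolean array `mask` is True.
    Returns the predicted class index for every node.
    """
    n = W.shape[0]
    D = np.diag(W.sum(axis=1))
    L = D - W                                        # (isotropic) graph Laplacian
    C = alpha * np.diag(np.asarray(mask, dtype=float))   # clamp labelled nodes
    f = np.linalg.solve(L + C + 1e-9 * np.eye(n), C @ labels)
    return f.argmax(axis=1)
```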

Handwritten Signature Verification Using Hand-Worn Devices

Title Handwritten Signature Verification Using Hand-Worn Devices
Authors Ben Nassi, Alona Levy, Yuval Elovici, Erez Shmueli
Abstract Online signature verification technologies, such as those available in banks and post offices, rely on dedicated digital devices such as tablets or smart pens to capture, analyze and verify signatures. In this paper, we suggest a novel method for online signature verification that relies on the increasingly available hand-worn devices, such as smartwatches or fitness trackers, instead of dedicated ad-hoc devices. Our method uses a set of known genuine and forged signatures, recorded using the motion sensors of a hand-worn device, to train a machine learning classifier. Then, given the recording of an unknown signature and a claimed identity, the classifier can determine whether the signature is genuine or forged. In order to validate our method, we applied it to 1,980 recordings of genuine and forged signatures collected from 66 subjects in our institution. Using our method, we were able to successfully distinguish between genuine and forged signatures with a high degree of accuracy (0.98 AUC and 0.05 EER).
Tasks
Published 2016-12-19
URL http://arxiv.org/abs/1612.06305v1
PDF http://arxiv.org/pdf/1612.06305v1.pdf
PWC https://paperswithcode.com/paper/handwritten-signature-verification-using-hand
Repo
Framework
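
A sketch of the verification pipeline, under assumed choices: summary statistics of each motion-sensor recording as features, a random-forest classifier, and AUC as the metric. The paper's exact features and classifier may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def verify_signatures(recordings, genuine_flags):
    """Train a genuine-vs-forged classifier on hand-worn motion-sensor data.

    recordings: list of (T_i, num_channels) arrays of accelerometer/gyroscope
    samples; genuine_flags: matching list of 0/1 labels. Returns test AUC.
    """
    feats = np.array([
        np.concatenate([r.mean(axis=0), r.std(axis=0),
                        r.min(axis=0), r.max(axis=0),
                        [len(r)]])                   # duration as a feature
        for r in recordings
    ])
    X_tr, X_te, y_tr, y_te = train_test_split(
        feats, np.asarray(genuine_flags), test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```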

Self-taught learning of a deep invariant representation for visual tracking via temporal slowness principle

Title Self-taught learning of a deep invariant representation for visual tracking via temporal slowness principle
Authors Jason Kuen, Kian Ming Lim, Chin Poo Lee
Abstract Visual representation is crucial for a visual tracking method’s performance. Conventionally, visual representations adopted in visual tracking rely on hand-crafted computer vision descriptors. These descriptors were developed generically without considering tracking-specific information. In this paper, we propose to learn complex-valued invariant representations from tracked sequential image patches, via a strong temporal slowness constraint and stacked convolutional autoencoders. The deep slow local representations are learned offline on unlabeled data and transferred to the observational model of our proposed tracker. The proposed observational model retains old training samples to alleviate drift, and collects negative samples which are coherent with the target’s motion pattern for better discriminative tracking. With the learned representation and online training samples, a logistic regression classifier is adopted to distinguish the target from the background, and retrained online to adapt to appearance changes. Subsequently, the observational model is integrated into a particle filter framework to perform visual tracking. Experimental results on various challenging benchmark sequences demonstrate that the proposed tracker performs favourably against several state-of-the-art trackers.
Tasks Visual Tracking
Published 2016-04-14
URL http://arxiv.org/abs/1604.04144v1
PDF http://arxiv.org/pdf/1604.04144v1.pdf
PWC https://paperswithcode.com/paper/self-taught-learning-of-a-deep-invariant
Repo
Framework
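
The temporal slowness constraint can be sketched as a loss term that penalises changes in the encoding of consecutive tracked patches, on top of an autoencoder reconstruction loss. The tiny fully-connected autoencoder and `slow_weight` below are illustrative assumptions; the paper stacks convolutional autoencoders with complex-valued representations.

```python
import torch
import torch.nn as nn

class SlowAutoencoder(nn.Module):
    """Minimal autoencoder whose code is trained to vary slowly over time."""

    def __init__(self, dim=1024, code=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, code), nn.Tanh())
        self.dec = nn.Linear(code, dim)

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

def slowness_loss(model, patch_t, patch_t1, slow_weight=1.0):
    """patch_t, patch_t1: (B, dim) flattened patches from consecutive frames."""
    rec_t, z_t = model(patch_t)
    rec_t1, z_t1 = model(patch_t1)
    reconstruction = ((rec_t - patch_t) ** 2).mean() + ((rec_t1 - patch_t1) ** 2).mean()
    slowness = ((z_t - z_t1) ** 2).mean()      # encodings should vary slowly in time
    return reconstruction + slow_weight * slowness
```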

Learning Motion Patterns in Videos

Title Learning Motion Patterns in Videos
Authors Pavel Tokmakov, Karteek Alahari, Cordelia Schmid
Abstract The problem of determining whether an object is in motion, irrespective of camera motion, is far from being solved. We address this challenging task by learning motion patterns in videos. The core of our approach is a fully convolutional network, which is learned entirely from synthetic video sequences, and their ground-truth optical flow and motion segmentation. This encoder-decoder style architecture first learns a coarse representation of the optical flow field features, and then refines it iteratively to produce motion labels at the original high resolution. We further improve this labeling with an objectness map and a conditional random field, to account for errors in optical flow, and also to focus on moving “things” rather than “stuff”. The output label of each pixel denotes whether it has undergone independent motion, i.e., irrespective of camera motion. We demonstrate the benefits of this learning framework on the moving object segmentation task, where the goal is to segment all objects in motion. Our approach outperforms the top method on the recently released DAVIS benchmark dataset, comprising real-world sequences, by 5.6%. We also evaluate on the Berkeley motion segmentation database, achieving state-of-the-art results.
Tasks Motion Segmentation, Optical Flow Estimation, Semantic Segmentation
Published 2016-12-21
URL http://arxiv.org/abs/1612.07217v2
PDF http://arxiv.org/pdf/1612.07217v2.pdf
PWC https://paperswithcode.com/paper/learning-motion-patterns-in-videos
Repo
Framework

Online Learning with Feedback Graphs Without the Graphs

Title Online Learning with Feedback Graphs Without the Graphs
Authors Alon Cohen, Tamir Hazan, Tomer Koren
Abstract We study an online learning framework introduced by Mannor and Shamir (2011) in which the feedback is specified by a graph, in a setting where the graph may vary from round to round and is \emph{never fully revealed} to the learner. We show a large gap between the adversarial and the stochastic cases. In the adversarial case, we prove that even for dense feedback graphs, the learner cannot improve upon a trivial regret bound obtained by ignoring any additional feedback besides her own loss. In contrast, in the stochastic case we give an algorithm that achieves $\widetilde \Theta(\sqrt{\alpha T})$ regret over $T$ rounds, provided that the independence numbers of the hidden feedback graphs are at most $\alpha$. We also extend our results to a more general feedback model, in which the learner does not necessarily observe her own loss, and show that, even in simple cases, concealing the feedback graphs might render a learnable problem unlearnable.
Tasks
Published 2016-05-23
URL http://arxiv.org/abs/1605.07018v1
PDF http://arxiv.org/pdf/1605.07018v1.pdf
PWC https://paperswithcode.com/paper/online-learning-with-feedback-graphs-without
Repo
Framework
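
To give a feel for why side observations help in the stochastic case, here is a simple UCB-style loop in which playing an arm also updates the estimates of every other arm that this round's (hidden) feedback graph happened to reveal. This only illustrates the feedback model; it is not the paper's algorithm and carries none of its sqrt(alpha * T) guarantee.

```python
import numpy as np

def side_observation_ucb(bandit_sample, p, T):
    """Loss-minimising UCB with side observations.

    bandit_sample(i) -> (loss_i, observed), where `observed` maps arm indices
    (including i) to the losses observed this round; the feedback graph itself
    is never revealed to the learner.
    """
    counts = np.zeros(p)
    means = np.zeros(p)
    total_loss = 0.0
    for t in range(1, T + 1):
        # optimistic (lower-confidence) loss estimates; unplayed arms go first
        lcb = means - np.sqrt(2.0 * np.log(t + 1) / np.maximum(counts, 1.0))
        arm = int(np.argmin(np.where(counts == 0, -np.inf, lcb)))
        loss, observed = bandit_sample(arm)
        total_loss += loss
        for j, l in observed.items():          # exploit every side observation
            counts[j] += 1
            means[j] += (l - means[j]) / counts[j]
    return total_loss
```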

Variational Bayesian Inference of Line Spectra

Title Variational Bayesian Inference of Line Spectra
Authors Mihai-Alin Badiu, Thomas Lundgaard Hansen, Bernard Henri Fleury
Abstract In this paper, we address the fundamental problem of line spectral estimation in a Bayesian framework. We target model order and parameter estimation via variational inference in a probabilistic model in which the frequencies are continuous-valued, i.e., not restricted to a grid; and the coefficients are governed by a Bernoulli-Gaussian prior model turning model order selection into binary sequence detection. Unlike earlier works which retain only point estimates of the frequencies, we undertake a more complete Bayesian treatment by estimating the posterior probability density functions (pdfs) of the frequencies and computing expectations over them. Thus, we additionally capture and operate with the uncertainty of the frequency estimates. Aiming to maximize the model evidence, variational optimization provides analytic approximations of the posterior pdfs and also gives estimates of the additional parameters. We propose an accurate representation of the pdfs of the frequencies by mixtures of von Mises pdfs, which yields closed-form expectations. We define the algorithm VALSE in which the estimates of the pdfs and parameters are iteratively updated. VALSE is a gridless, convergent method, does not require parameter tuning, can easily include prior knowledge about the frequencies and provides approximate posterior pdfs based on which the uncertainty in line spectral estimation can be quantified. Simulation results show that accounting for the uncertainty of frequency estimates, rather than computing just point estimates, significantly improves the performance. The performance of VALSE is superior to that of state-of-the-art methods and closely approaches the Cramér-Rao bound computed for the true model order.
Tasks Bayesian Inference
Published 2016-04-13
URL http://arxiv.org/abs/1604.03744v2
PDF http://arxiv.org/pdf/1604.03744v2.pdf
PWC https://paperswithcode.com/paper/variational-bayesian-inference-of-line
Repo
Framework
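
The probabilistic model that VALSE performs inference in can be sketched as a generator: continuous-valued (off-grid) frequencies with Bernoulli-Gaussian coefficients, observed in complex Gaussian noise. The default parameter values below are illustrative.

```python
import numpy as np

def sample_line_spectral_signal(M=64, K_max=8, p_active=0.5, noise_var=0.01,
                                rng=None):
    """Draw one observation from the assumed line spectral model: a sparse
    superposition of complex sinusoids whose frequencies are not restricted to
    a grid and whose coefficients follow a Bernoulli-Gaussian prior (the
    Bernoulli part performs model-order selection)."""
    rng = rng or np.random.default_rng(0)
    freqs = rng.uniform(-np.pi, np.pi, size=K_max)        # continuous-valued
    active = rng.random(K_max) < p_active                  # Bernoulli support
    coeffs = (rng.normal(size=K_max) + 1j * rng.normal(size=K_max)) / np.sqrt(2)
    coeffs = coeffs * active
    n = np.arange(M)
    A = np.exp(1j * np.outer(n, freqs))                    # steering matrix
    noise = np.sqrt(noise_var / 2) * (rng.normal(size=M) + 1j * rng.normal(size=M))
    return A @ coeffs + noise, freqs[active], coeffs[active]
```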

Spectral Angle Based Unary Energy Functions for Spatial-Spectral Hyperspectral Classification using Markov Random Fields

Title Spectral Angle Based Unary Energy Functions for Spatial-Spectral Hyperspectral Classification using Markov Random Fields
Authors Utsav B. Gewali, Sildomar T. Monteiro
Abstract In this paper, we propose and compare two spectral angle based approaches for spatial-spectral classification. Our methods use the spectral angle to generate unary energies in a grid-structured Markov random field defined over the pixel labels of a hyperspectral image. The first approach is to use the exponential spectral angle mapper (ESAM) kernel/covariance function, a spectral angle based function, with the support vector machine and the Gaussian process classifier. The second approach is to directly use the minimum spectral angle between the test pixel and the training pixels as the unary energy. We compare the proposed methods with the state-of-the-art Markov random field methods that use support vector machines and Gaussian processes with the squared exponential kernel/covariance function. In our experiments with two datasets, using the minimum spectral angle as the unary energy produces results that are better than or comparable to the existing methods, at a smaller running time.
Tasks Gaussian Processes
Published 2016-10-22
URL http://arxiv.org/abs/1610.06985v1
PDF http://arxiv.org/pdf/1610.06985v1.pdf
PWC https://paperswithcode.com/paper/spectral-angle-based-unary-energy-functions
Repo
Framework
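
The second approach described above, using the minimum spectral angle as the unary energy, is easy to sketch in NumPy; the pairwise MRF terms and the MRF inference are not shown. The small epsilon for numerical stability is an implementation assumption.

```python
import numpy as np

def spectral_angle_unaries(test_pixels, train_pixels, train_labels, num_classes):
    """Unary energy of assigning class c to a test pixel = minimum spectral
    angle between that pixel and the training pixels of class c. These unaries
    would then feed a grid-structured MRF over the pixel labels.

    test_pixels: (n, b) spectra; train_pixels: (m, b); train_labels: (m,) ints.
    Returns an (n, num_classes) matrix of unary energies.
    """
    def spectral_angle(a, B):
        cos = (B @ a) / (np.linalg.norm(B, axis=1) * np.linalg.norm(a) + 1e-12)
        return np.arccos(np.clip(cos, -1.0, 1.0))

    energies = np.zeros((len(test_pixels), num_classes))
    for c in range(num_classes):
        class_pixels = train_pixels[train_labels == c]
        for i, x in enumerate(test_pixels):
            energies[i, c] = spectral_angle(x, class_pixels).min()
    return energies
```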

Optimal Transport vs. Fisher-Rao distance between Copulas for Clustering Multivariate Time Series

Title Optimal Transport vs. Fisher-Rao distance between Copulas for Clustering Multivariate Time Series
Authors Gautier Marti, Sébastien Andler, Frank Nielsen, Philippe Donnat
Abstract We present a methodology for clustering N objects which are described by multivariate time series, i.e., several sequences of real-valued random variables. This clustering methodology leverages copulas, which are distributions encoding the dependence structure between several random variables. To take the dependence information fully into account while clustering, we need a distance between copulas. In this work, we compare renowned distances between distributions: the Fisher-Rao geodesic distance, related divergences and optimal transport, and discuss their advantages and disadvantages. Applications of such a methodology can be found in the clustering of financial assets. A tutorial, experiments and implementation for reproducible research can be found at www.datagrapple.com/Tech.
Tasks Clustering Multivariate Time Series, Time Series
Published 2016-04-28
URL http://arxiv.org/abs/1604.08634v2
PDF http://arxiv.org/pdf/1604.08634v2.pdf
PWC https://paperswithcode.com/paper/optimal-transport-vs-fisher-rao-distance
Repo
Framework
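
A sketch of the optimal-transport side of the comparison, assuming the POT (Python Optimal Transport) package is available: rank-transform each multivariate series to its empirical copula and compute an exact EMD between the resulting point clouds. The Fisher-Rao and divergence-based alternatives discussed in the paper would replace `copula_ot_distance` as the dissimilarity fed to the clustering step.

```python
import numpy as np
import ot                                     # POT: Python Optimal Transport
from scipy.stats import rankdata
from scipy.spatial.distance import cdist

def empirical_copula(series):
    """Map a (T, d) multivariate time series to its empirical copula samples
    via the rank transform (each margin becomes approximately uniform on [0, 1])."""
    return np.column_stack([rankdata(col) / len(col) for col in series.T])

def copula_ot_distance(series_a, series_b):
    """Squared Wasserstein distance between the empirical copulas of two
    multivariate time series, using uniform weights on the copula samples."""
    ca, cb = empirical_copula(series_a), empirical_copula(series_b)
    M = cdist(ca, cb, metric="sqeuclidean")   # ground cost between copula samples
    wa = np.full(len(ca), 1.0 / len(ca))
    wb = np.full(len(cb), 1.0 / len(cb))
    return ot.emd2(wa, wb, M)                 # exact optimal-transport cost
```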