May 5, 2019

2947 words 14 mins read

Paper Group ANR 483

Action2Activity: Recognizing Complex Activities from Sensor Data. Mining Discriminative Triplets of Patches for Fine-Grained Classification. Sequence Tutor: Conservative Fine-Tuning of Sequence Generation Models with KL-control. Hyperspectral Subspace Identification Using SURE. Loss-aware Binarization of Deep Networks. Ups and Downs: Modeling the V …

Action2Activity: Recognizing Complex Activities from Sensor Data

Title Action2Activity: Recognizing Complex Activities from Sensor Data
Authors Ye Liu, Liqiang Nie, Lei Han, Luming Zhang, David S Rosenblum
Abstract As compared to simple actions, activities are much more complex, but semantically consistent with a human’s real life. Techniques for action recognition from sensor-generated data are mature. However, there has been relatively little work on bridging the gap between actions and activities. To this end, this paper presents a novel approach for complex activity recognition comprising two components. The first component is temporal pattern mining, which provides a mid-level feature representation for activities, encodes temporal relatedness among actions, and captures the intrinsic properties of activities. The second component is adaptive Multi-Task Learning, which captures relatedness among activities and selects discriminant features. Extensive experiments on a real-world dataset demonstrate the effectiveness of our work.
Tasks Activity Recognition, Multi-Task Learning, Temporal Action Localization
Published 2016-11-07
URL http://arxiv.org/abs/1611.01872v1
PDF http://arxiv.org/pdf/1611.01872v1.pdf
PWC https://paperswithcode.com/paper/action2activity-recognizing-complex
Repo
Framework

Mining Discriminative Triplets of Patches for Fine-Grained Classification

Title Mining Discriminative Triplets of Patches for Fine-Grained Classification
Authors Yaming Wang, Jonghyun Choi, Vlad I. Morariu, Larry S. Davis
Abstract Fine-grained classification involves distinguishing between similar sub-categories based on subtle differences in highly localized regions; therefore, accurate localization of discriminative regions remains a major challenge. We describe a patch-based framework to address this problem. We introduce triplets of patches with geometric constraints to improve the accuracy of patch localization, and automatically mine discriminative geometrically-constrained triplets for classification. The resulting approach only requires object bounding boxes. Its effectiveness is demonstrated using four publicly available fine-grained datasets, on which it outperforms or achieves comparable performance to the state-of-the-art in classification.
Tasks
Published 2016-05-04
URL http://arxiv.org/abs/1605.01130v1
PDF http://arxiv.org/pdf/1605.01130v1.pdf
PWC https://paperswithcode.com/paper/mining-discriminative-triplets-of-patches-for
Repo
Framework

Sequence Tutor: Conservative Fine-Tuning of Sequence Generation Models with KL-control

Title Sequence Tutor: Conservative Fine-Tuning of Sequence Generation Models with KL-control
Authors Natasha Jaques, Shixiang Gu, Dzmitry Bahdanau, José Miguel Hernández-Lobato, Richard E. Turner, Douglas Eck
Abstract This paper proposes a general method for improving the structure and quality of sequences generated by a recurrent neural network (RNN), while maintaining information originally learned from data, as well as sample diversity. An RNN is first pre-trained on data using maximum likelihood estimation (MLE), and the probability distribution over the next token in the sequence learned by this model is treated as a prior policy. Another RNN is then trained using reinforcement learning (RL) to generate higher-quality outputs that account for domain-specific incentives while retaining proximity to the prior policy of the MLE RNN. To formalize this objective, we derive novel off-policy RL methods for RNNs from KL-control. The effectiveness of the approach is demonstrated on two applications: 1) generating novel musical melodies, and 2) computational molecular generation. For both problems, we show that the proposed method improves the desired properties and structure of the generated sequences, while maintaining information learned from data.
Tasks
Published 2016-11-09
URL http://arxiv.org/abs/1611.02796v9
PDF http://arxiv.org/pdf/1611.02796v9.pdf
PWC https://paperswithcode.com/paper/sequence-tutor-conservative-fine-tuning-of
Repo
Framework
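
The KL-control idea above can be illustrated with a toy reward computation: shape the task reward with a penalty for drifting from the MLE prior. This is a simplified sketch, not the paper's exact derivation; the additive form and the coefficient `c` are assumptions for illustration.

```python
import numpy as np

def kl_control_reward(task_reward, log_p_policy, log_p_prior, c=0.5):
    """Combine a task-specific reward with a KL-style penalty that keeps
    the RL policy close to the MLE prior (illustrative form only)."""
    # Tokens the prior considers unlikely relative to the policy are penalized.
    return task_reward + c * (log_p_prior - log_p_policy)

# Toy example: two candidate next tokens with equal task reward.
task_reward = np.array([1.0, 1.0])
log_p_policy = np.log(np.array([0.5, 0.5]))
log_p_prior = np.log(np.array([0.9, 0.1]))  # prior strongly favors token 0

r = kl_control_reward(task_reward, log_p_policy, log_p_prior)
# Token 0, favored by the prior, receives the higher shaped reward.
```

In expectation over the policy, this shaping term is exactly a (scaled) negative KL divergence from the policy to the prior, which is what keeps fine-tuning conservative.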

Hyperspectral Subspace Identification Using SURE

Title Hyperspectral Subspace Identification Using SURE
Authors Behnood Rasti, Magnus O. Ulfarsson, Johannes R. Sveinsson
Abstract Remote sensing hyperspectral sensors collect large volumes of high dimensional spectral and spatial data. However, due to spectral and spatial redundancy the true hyperspectral signal lies on a subspace of much lower dimension than the original data. The identification of the signal subspace is a very important first step for most hyperspectral algorithms. In this paper we investigate the important problem of identifying the hyperspectral signal subspace by minimizing the mean squared error (MSE) between the true signal and an estimate of the signal. Since the MSE is uncomputable in practice, due to its dependency on the true signal, we propose a method based on Stein’s unbiased risk estimator (SURE) that provides an unbiased estimate of the MSE. The resulting method is simple and fully automatic and we evaluate it using both simulated and real hyperspectral data sets. Experimental results show that our proposed method compares well to recent state-of-the-art subspace identification methods.
Tasks
Published 2016-06-01
URL http://arxiv.org/abs/1606.00219v1
PDF http://arxiv.org/pdf/1606.00219v1.pdf
PWC https://paperswithcode.com/paper/hyperspectral-subspace-identification-using
Repo
Framework
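
The rank-selection idea can be sketched numerically: minimize an unbiased surrogate of the MSE over candidate subspace dimensions. The sketch below assumes the noise level `sigma` is known and uses the common divergence approximation `df(k) ≈ k(m + n − k)` for rank-k SVD truncation; both are simplifications of the paper's estimator.

```python
import numpy as np

def sure_rank(Y, sigma):
    """Pick a signal-subspace dimension by minimizing a SURE-style
    unbiased MSE estimate: residual - m*n*sigma^2 + 2*sigma^2*df(k)."""
    m, n = Y.shape
    s = np.linalg.svd(Y, compute_uv=False)
    risks = []
    for k in range(1, min(m, n) + 1):
        resid = np.sum(s[k:] ** 2)            # ||Y - Y_k||_F^2
        df = k * (m + n - k)                  # approximate degrees of freedom
        risks.append(resid - m * n * sigma**2 + 2 * sigma**2 * df)
    return int(np.argmin(risks)) + 1

# Simulated data: a rank-3 signal plus Gaussian noise.
rng = np.random.default_rng(0)
sigma = 0.05
signal = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 50))
Y = signal + sigma * rng.normal(size=(100, 50))
k_hat = sure_rank(Y, sigma)
```

With noise this small relative to the signal, the estimated dimension lands at or near the true rank of 3.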

Loss-aware Binarization of Deep Networks

Title Loss-aware Binarization of Deep Networks
Authors Lu Hou, Quanming Yao, James T. Kwok
Abstract Deep neural network models, though very powerful and highly successful, are computationally expensive in terms of space and time. Recently, there have been a number of attempts at binarizing the network weights and activations. This greatly reduces the network size, and replaces the underlying multiplications with additions or even XNOR bit operations. However, existing binarization schemes are based on simple matrix approximation and ignore the effect of binarization on the loss. In this paper, we propose a proximal Newton algorithm with diagonal Hessian approximation that directly minimizes the loss w.r.t. the binarized weights. The underlying proximal step has an efficient closed-form solution, and the second-order information can be efficiently obtained from the second moments already computed by the Adam optimizer. Experiments on both feedforward and recurrent networks show that the proposed loss-aware binarization algorithm outperforms existing binarization schemes, and is also more robust for wide and deep networks.
Tasks
Published 2016-11-05
URL http://arxiv.org/abs/1611.01600v3
PDF http://arxiv.org/pdf/1611.01600v3.pdf
PWC https://paperswithcode.com/paper/loss-aware-binarization-of-deep-networks
Repo
Framework
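
The closed-form proximal step described in the abstract can be sketched as a curvature-weighted scaling of the weight sign pattern. This is a sketch reconstructed from the abstract's description, not the paper's verified formula; `d` stands in for the diagonal Hessian estimate obtained from Adam's second moments.

```python
import numpy as np

def loss_aware_binarize(w, d):
    """Binarize weights to alpha * sign(w), where the scale alpha weights
    each entry by a diagonal curvature estimate d, so that entries the
    loss is sensitive to dominate the scale (illustrative sketch)."""
    alpha = np.sum(d * np.abs(w)) / np.sum(d)   # curvature-weighted magnitude
    return alpha * np.sign(w)

w = np.array([0.8, -0.3, 0.05, -1.2])
d = np.array([1.0, 0.5, 0.1, 2.0])              # hypothetical curvature values
wb = loss_aware_binarize(w, d)
```

Note the contrast with plain sign binarization: the high-curvature entries (here the last weight) pull the shared scale toward their magnitudes.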

Ups and Downs: Modeling the Visual Evolution of Fashion Trends with One-Class Collaborative Filtering

Title Ups and Downs: Modeling the Visual Evolution of Fashion Trends with One-Class Collaborative Filtering
Authors Ruining He, Julian McAuley
Abstract Building a successful recommender system depends on understanding both the dimensions of people’s preferences and their dynamics. In certain domains, such as fashion, modeling such preferences can be incredibly difficult, due to the need to simultaneously model the visual appearance of products as well as their evolution over time. The subtle semantics and non-linear dynamics of fashion evolution raise unique challenges especially considering the sparsity and large scale of the underlying datasets. In this paper we build novel models for the One-Class Collaborative Filtering setting, where our goal is to estimate users’ fashion-aware personalized ranking functions based on their past feedback. To uncover the complex and evolving visual factors that people consider when evaluating products, our method combines high-level visual features extracted from a deep convolutional neural network, users’ past feedback, as well as evolving trends within the community. Experimentally we evaluate our method on two large real-world datasets from Amazon.com, where we show it to outperform state-of-the-art personalized ranking measures, and also use it to visualize the high-level fashion trends across the 11-year span of our dataset.
Tasks Recommendation Systems
Published 2016-02-04
URL http://arxiv.org/abs/1602.01585v1
PDF http://arxiv.org/pdf/1602.01585v1.pdf
PWC https://paperswithcode.com/paper/ups-and-downs-modeling-the-visual-evolution
Repo
Framework
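
The One-Class Collaborative Filtering setting above can be illustrated with a minimal BPR-style pairwise update: observed items should be ranked above unobserved ones for each user. The paper's visual features and temporal dynamics are omitted here; factor sizes, learning rate, and regularization are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_items, dim, lr, reg = 5, 10, 4, 0.05, 0.01
U = 0.1 * rng.normal(size=(n_users, dim))   # user latent factors
V = 0.1 * rng.normal(size=(n_items, dim))   # item latent factors

def bpr_step(u, i, j):
    """One BPR update: push item i (observed feedback) above item j
    (unobserved) in user u's personalized ranking."""
    x_uij = U[u] @ (V[i] - V[j])
    g = 1.0 / (1.0 + np.exp(x_uij))          # gradient of -log sigmoid(x_uij)
    U[u] += lr * (g * (V[i] - V[j]) - reg * U[u])
    V[i] += lr * (g * U[u] - reg * V[i])
    V[j] += lr * (-g * U[u] - reg * V[j])

# Toy training: user 0 has interacted with item 2 but not item 7.
for _ in range(200):
    bpr_step(0, 2, 7)
score_pos, score_neg = U[0] @ V[2], U[0] @ V[7]
```

After training, the observed item scores above the unobserved one, which is exactly the pairwise ranking property the one-class objective optimizes.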

Neuromorphic Deep Learning Machines

Title Neuromorphic Deep Learning Machines
Authors Emre Neftci, Charles Augustine, Somnath Paul, Georgios Detorakis
Abstract An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient descent Back Propagation (BP) rule, often relies on the immediate availability of network-wide information stored with high-precision memory, and precise operations that are difficult to realize in neuromorphic hardware. Remarkably, recent work showed that exact backpropagated weights are not essential for learning deep representations. Random BP replaces feedback weights with random ones and encourages the network to adjust its feed-forward weights to learn pseudo-inverses of the (random) feedback weights. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses an error-modulated synaptic plasticity for learning deep representations in neuromorphic computing hardware. The rule requires only one addition and two comparisons for each synaptic weight using a two-compartment leaky Integrate & Fire (I&F) neuron, making it very suitable for implementation in digital or mixed-signal neuromorphic hardware. Our results show that using eRBP, deep representations are rapidly learned, achieving nearly identical classification accuracies compared to artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning.
Tasks
Published 2016-12-16
URL http://arxiv.org/abs/1612.05596v2
PDF http://arxiv.org/pdf/1612.05596v2.pdf
PWC https://paperswithcode.com/paper/neuromorphic-deep-learning-machines
Repo
Framework
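
The random-BP ingredient of eRBP can be shown in a stripped-down, non-spiking form: output errors are routed to the hidden layer through fixed random weights rather than the transpose of the forward weights. The I&F neuron model and event-driven plasticity of eRBP are omitted; this is plain feedback alignment on a tiny dense network.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, lr = 8, 16, 4, 0.01
W1 = 0.5 * rng.normal(size=(n_hid, n_in))    # forward weights, layer 1
W2 = 0.5 * rng.normal(size=(n_out, n_hid))   # forward weights, layer 2
B = rng.normal(size=(n_hid, n_out))          # fixed random feedback weights

def fa_step(x, target):
    """One random-BP (feedback alignment) update: the output error is
    propagated to the hidden layer through B instead of W2.T."""
    global W1, W2
    h = np.tanh(W1 @ x)
    y = W2 @ h
    e = y - target                            # output error
    delta_h = (B @ e) * (1 - h**2)            # error via random feedback
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)
    return float(np.sum(e**2))

x = rng.normal(size=n_in)
t = rng.normal(size=n_out)
losses = [fa_step(x, t) for _ in range(200)]
```

The squared error shrinks even though the feedback pathway never sees `W2`; the forward weights adapt to make the random feedback useful, which is the phenomenon the paper exploits in hardware.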

Quadratic Projection Based Feature Extraction with Its Application to Biometric Recognition

Title Quadratic Projection Based Feature Extraction with Its Application to Biometric Recognition
Authors Yan Yan, Hanzi Wang, Si Chen, Xiaochun Cao, David Zhang
Abstract This paper presents a novel quadratic projection based feature extraction framework, where a set of quadratic matrices is learned to distinguish each class from all other classes. We formulate quadratic matrix learning (QML) as a standard semidefinite programming (SDP) problem. However, the conventional interior-point SDP solvers do not scale well to the problem of QML for high-dimensional data. To solve the scalability of QML, we develop an efficient algorithm, termed DualQML, based on the Lagrange duality theory, to extract nonlinear features. To evaluate the feasibility and effectiveness of the proposed framework, we conduct extensive experiments on biometric recognition. Experimental results on three representative biometric recognition tasks, including face, palmprint, and ear recognition, demonstrate the superiority of the DualQML-based feature extraction algorithm compared to the current state-of-the-art algorithms.
Tasks
Published 2016-03-25
URL http://arxiv.org/abs/1603.07797v1
PDF http://arxiv.org/pdf/1603.07797v1.pdf
PWC https://paperswithcode.com/paper/quadratic-projection-based-feature-extraction
Repo
Framework
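
The per-class quadratic scoring at the heart of the framework is easy to sketch: class c scores a sample x as x^T M_c x. In the paper the matrices M_c are learned via SDP (or DualQML for high dimensions); the hand-built diagonal matrices below are purely hypothetical stand-ins.

```python
import numpy as np

def quadratic_scores(x, Ms):
    """Score a sample against per-class quadratic matrices: class c gets
    the quadratic form x^T M_c x; predict the class with the max score."""
    return np.array([x @ M @ x for M in Ms])

# Two toy classes: one matrix rewards energy along the first coordinate,
# the other along the second.
M0 = np.diag([1.0, -1.0])
M1 = np.diag([-1.0, 1.0])
x = np.array([2.0, 0.5])                     # lies mostly along axis 0
scores = quadratic_scores(x, [M0, M1])
pred = int(np.argmax(scores))
```

Because the decision function is quadratic rather than linear in x, a single matrix per class can carve out nonlinear (conic) decision regions, which is what makes the extracted features nonlinear.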

Selecting the Best Player Formation for Corner-Kick Situations Based on Bayes’ Estimation

Title Selecting the Best Player Formation for Corner-Kick Situations Based on Bayes’ Estimation
Authors Jordan Henrio, Thomas Henn, Tomoharu Nakashima, Hidehisa Akiyama
Abstract In the domain of the Soccer Simulation 2D League of the RoboCup project, appropriate player positioning against a given opponent team is an important factor of soccer team performance. This work proposes a model which decides the strategy that should be applied against a particular opponent team. This task can be realized by first applying a learning phase in which the model determines the most effective strategies against clusters of opponent teams. The model determines the best strategies by using sequential Bayes’ estimators. As a first trial of the system, the proposed model is used to determine the association of player formations with opponent teams in the particular situation of corner kicks. The implemented model shows a satisfying ability to compare player formations that are similar to each other in terms of performance, and determines the correct ranking even with a moderate number of simulation games.
Tasks
Published 2016-06-03
URL http://arxiv.org/abs/1606.01015v1
PDF http://arxiv.org/pdf/1606.01015v1.pdf
PWC https://paperswithcode.com/paper/selecting-the-best-player-formation-for
Repo
Framework
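
Sequential Bayes' estimation of a formation's success rate can be sketched with a conjugate Beta-Bernoulli model: each simulated corner kick updates the posterior, and formations are ranked by posterior mean. This is an illustrative stand-in for the paper's estimators; the success probabilities and game counts are hypothetical.

```python
import numpy as np

class FormationEstimator:
    """Sequential Bayes' estimation of a formation's success probability
    under a Beta(a, b) prior (conjugate updates, illustrative only)."""
    def __init__(self, a=1.0, b=1.0):
        self.a, self.b = a, b                # uniform Beta(1, 1) prior

    def update(self, success):
        # Conjugate update after observing one simulated corner kick.
        if success:
            self.a += 1
        else:
            self.b += 1

    def mean(self):
        return self.a / (self.a + self.b)    # posterior mean success rate

# Compare two formations over simulated corner-kick outcomes.
rng = np.random.default_rng(2)
f1, f2 = FormationEstimator(), FormationEstimator()
for _ in range(1000):
    f1.update(rng.random() < 0.6)            # formation 1: 60% true success
    f2.update(rng.random() < 0.5)            # formation 2: 50% true success
best = 1 if f1.mean() > f2.mean() else 2
```

The posterior means concentrate around the true rates as games accumulate, so even formations with similar performance can eventually be ranked correctly.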

Policy Search with High-Dimensional Context Variables

Title Policy Search with High-Dimensional Context Variables
Authors Voot Tangkaratt, Herke van Hoof, Simone Parisi, Gerhard Neumann, Jan Peters, Masashi Sugiyama
Abstract Direct contextual policy search methods learn to improve policy parameters and simultaneously generalize these parameters to different context or task variables. However, learning from high-dimensional context variables, such as camera images, is still a prominent problem in many real-world tasks. A naive application of unsupervised dimensionality reduction methods to the context variables, such as principal component analysis, is insufficient as task-relevant input may be ignored. In this paper, we propose a contextual policy search method in the model-based relative entropy stochastic search framework with integrated dimensionality reduction. We learn a model of the reward that is locally quadratic in both the policy parameters and the context variables. Furthermore, we perform supervised linear dimensionality reduction on the context variables by nuclear norm regularization. The experimental results show that the proposed method outperforms naive dimensionality reduction via principal component analysis and a state-of-the-art contextual policy search method.
Tasks Dimensionality Reduction
Published 2016-11-10
URL http://arxiv.org/abs/1611.03231v1
PDF http://arxiv.org/pdf/1611.03231v1.pdf
PWC https://paperswithcode.com/paper/policy-search-with-high-dimensional-context
Repo
Framework
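
The nuclear norm regularization used above for supervised linear dimensionality reduction has a standard proximal operator: singular value thresholding, which shrinks singular values and so drives solutions toward low rank. The snippet below shows that operator on a random matrix; it is the generic building block, not the paper's full contextual policy search method.

```python
import numpy as np

def svt(W, tau):
    """Singular value thresholding: the proximal operator of tau*||W||_*.
    Shrinking singular values by tau zeroes out the small ones, yielding
    a low-rank (dimensionality-reducing) matrix."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(3)
W = rng.normal(size=(6, 4))
W_low = svt(W, tau=1.0)
rank_before = np.linalg.matrix_rank(W)
rank_after = np.linalg.matrix_rank(W_low)
```

Iterating this proximal step inside a gradient method is the usual way a nuclear-norm-regularized objective ends up selecting a low-dimensional projection of the context variables.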

A Kernel Independence Test for Geographical Language Variation

Title A Kernel Independence Test for Geographical Language Variation
Authors Dong Nguyen, Jacob Eisenstein
Abstract Quantifying the degree of spatial dependence for linguistic variables is a key task for analyzing dialectal variation. However, existing approaches have important drawbacks. First, they are based on parametric models of dependence, which limits their power in cases where the underlying parametric assumptions are violated. Second, they are not applicable to all types of linguistic data: some approaches apply only to frequencies, others to boolean indicators of whether a linguistic variable is present. We present a new method for measuring geographical language variation, which solves both of these problems. Our approach builds on Reproducing Kernel Hilbert space (RKHS) representations for nonparametric statistics, and takes the form of a test statistic that is computed from pairs of individual geotagged observations without aggregation into predefined geographical bins. We compare this test with prior work using synthetic data as well as a diverse set of real datasets: a corpus of Dutch tweets, a Dutch syntactic atlas, and a dataset of letters to the editor in North American newspapers. Our proposed test is shown to support robust inferences across a broad range of scenarios and types of data.
Tasks
Published 2016-01-25
URL http://arxiv.org/abs/1601.06579v2
PDF http://arxiv.org/pdf/1601.06579v2.pdf
PWC https://paperswithcode.com/paper/a-kernel-independence-test-for-geographical
Repo
Framework
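
The RKHS machinery the test builds on can be illustrated with the (biased) HSIC statistic using Gaussian kernels, a standard kernel dependence measure in the same family. This is not the paper's exact geotagged-observation test; the bandwidth is fixed at 1 for illustration.

```python
import numpy as np

def hsic(X, Y, sigma=1.0):
    """Biased HSIC estimate with Gaussian kernels: trace(K H L H)/(n-1)^2,
    a nonparametric statistic that is near zero for independent samples
    and grows with dependence."""
    n = X.shape[0]
    def gram(Z):
        sq = np.sum(Z**2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2 * Z @ Z.T
        return np.exp(-d2 / (2 * sigma**2))
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    K, L = gram(X), gram(Y)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(4)
x = rng.normal(size=(200, 1))
dependent = hsic(x, x + 0.1 * rng.normal(size=(200, 1)))
independent = hsic(x, rng.normal(size=(200, 1)))
```

Because the statistic is computed from pairwise kernel evaluations on individual observations, no aggregation into predefined bins is needed, which mirrors the design goal stated in the abstract.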

Joint Spatial-Angular Sparse Coding for dMRI with Separable Dictionaries

Title Joint Spatial-Angular Sparse Coding for dMRI with Separable Dictionaries
Authors Evan Schwab, René Vidal, Nicolas Charon
Abstract Diffusion MRI (dMRI) provides the ability to reconstruct neuronal fibers in the brain, $\textit{in vivo}$, by measuring water diffusion along angular gradient directions in q-space. High angular resolution diffusion imaging (HARDI) can produce better estimates of fiber orientation than the popularly used diffusion tensor imaging, but the high number of samples needed to estimate diffusivity requires longer patient scan times. To accelerate dMRI, compressed sensing (CS) has been utilized by exploiting a sparse dictionary representation of the data, discovered through sparse coding. The sparser the representation, the fewer samples are needed to reconstruct a high resolution signal with limited information loss, and so an important area of research has focused on finding the sparsest possible representation of dMRI. Current reconstruction methods, however, rely on an angular representation $\textit{per voxel}$ with added spatial regularization, and so, for non-zero signals, one is required to have at least one non-zero coefficient per voxel. This means that the global level of sparsity must be greater than the number of voxels. In contrast, we propose a joint spatial-angular representation of dMRI that will allow us to achieve levels of global sparsity that are below the number of voxels. A major challenge, however, is the computational complexity of solving a global sparse coding problem over large-scale dMRI. In this work, we present novel adaptations of popular sparse coding algorithms that become better suited for solving large-scale problems by exploiting spatial-angular separability. Our experiments show that our method achieves significantly sparser representations of HARDI than is possible by the state of the art.
Tasks
Published 2016-12-18
URL http://arxiv.org/abs/1612.05846v3
PDF http://arxiv.org/pdf/1612.05846v3.pdf
PWC https://paperswithcode.com/paper/joint-spatial-angular-sparse-coding-for-dmri
Repo
Framework
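
The separability being exploited is the Kronecker identity vec(Γ C Ψᵀ) = (Ψ ⊗ Γ) vec(C): applying a joint spatial-angular dictionary never requires forming the (potentially enormous) Kronecker product. The dictionary sizes below are hypothetical; the point is only the identity itself.

```python
import numpy as np

rng = np.random.default_rng(5)
Psi = rng.normal(size=(20, 15))              # spatial dictionary (hypothetical)
Gamma = rng.normal(size=(10, 8))             # angular dictionary (hypothetical)
C = rng.normal(size=(8, 15))                 # joint coefficient matrix

# Naive joint reconstruction: form kron(Psi, Gamma) explicitly -- large
# and slow at dMRI scale.
naive = np.kron(Psi, Gamma) @ C.reshape(-1, order="F")

# Separable form: two small matrix products, never forming the Kronecker
# product. This is the structural trick behind the large-scale algorithms.
separable = (Gamma @ C @ Psi.T).reshape(-1, order="F")
```

For realistic dMRI sizes the Kronecker matrix would not even fit in memory, so the separable form is what makes global spatial-angular sparse coding tractable.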

Statistical Machine Translation for Indian Languages: Mission Hindi 2

Title Statistical Machine Translation for Indian Languages: Mission Hindi 2
Authors Raj Nath Patel, Prakash B. Pimpale
Abstract This paper presents Centre for Development of Advanced Computing Mumbai’s (CDACM) submission to the NLP Tools Contest on Statistical Machine Translation in Indian Languages (ILSMT) 2015 (collocated with ICON 2015). The aim of the contest was to collectively explore the effectiveness of Statistical Machine Translation (SMT) while translating within Indian languages and between English and Indian languages. In this paper, we report our work on all five language pairs, namely Bengali-Hindi, Marathi-Hindi, Tamil-Hindi, Telugu-Hindi, and English-Hindi, for the Health, Tourism, and General domains. We have used suffix separation, compound splitting, and preordering prior to SMT training and testing.
Tasks Machine Translation
Published 2016-10-25
URL http://arxiv.org/abs/1610.08000v1
PDF http://arxiv.org/pdf/1610.08000v1.pdf
PWC https://paperswithcode.com/paper/statistical-machine-translation-for-indian
Repo
Framework

Exploiting Vagueness for Multi-Agent Consensus

Title Exploiting Vagueness for Multi-Agent Consensus
Authors Michael Crosscombe, Jonathan Lawry
Abstract A framework for consensus modelling is introduced using Kleene’s three valued logic as a means to express vagueness in agents’ beliefs. Explicitly borderline cases are inherent to propositions involving vague concepts where sentences of a propositional language may be absolutely true, absolutely false or borderline. By exploiting these intermediate truth values, we can allow agents to adopt a more vague interpretation of underlying concepts in order to weaken their beliefs and reduce the levels of inconsistency, so as to achieve consensus. We consider a consensus combination operation which results in agents adopting the borderline truth value as a shared viewpoint if they are in direct conflict. Simulation experiments are presented which show that applying this operator to agents chosen at random (subject to a consistency threshold) from a population, with initially diverse opinions, results in convergence to a smaller set of more precise shared beliefs. Furthermore, if the choice of agents for combination is dependent on the payoff of their beliefs, this acting as a proxy for performance or usefulness, then the system converges to beliefs which, on average, have higher payoff.
Tasks
Published 2016-07-19
URL http://arxiv.org/abs/1607.05540v2
PDF http://arxiv.org/pdf/1607.05540v2.pdf
PWC https://paperswithcode.com/paper/exploiting-vagueness-for-multi-agent
Repo
Framework
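
The consensus combination operator described above is simple to state on a single proposition with Kleene truth values {0, ½, 1}: direct conflict collapses to borderline, and a borderline opinion defers to a definite one. The sketch below follows the abstract's description at the level of individual truth values; the population dynamics and payoff-weighted selection are omitted.

```python
# Truth values: 0 = false, 0.5 = borderline, 1 = true (Kleene's logic).
def consensus(a, b):
    """Pairwise consensus on one proposition: true vs false yields
    borderline; borderline defers to a definite value; identical values
    are kept."""
    if {a, b} == {0, 1}:
        return 0.5                           # conflicting agents compromise
    if a == 0.5:
        return b                             # borderline defers to definite
    if b == 0.5:
        return a
    return a                                 # identical definite values

beliefs = [(1, 0), (1, 0.5), (0, 0.5), (0.5, 0.5), (1, 1)]
results = [consensus(a, b) for a, b in beliefs]
```

Applied repeatedly across a population, conflicts first soften to borderline and then resolve toward whichever definite values dominate, which is the convergence behavior the simulations report.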

Similarity Mapping with Enhanced Siamese Network for Multi-Object Tracking

Title Similarity Mapping with Enhanced Siamese Network for Multi-Object Tracking
Authors Minyoung Kim, Stefano Alletto, Luca Rigazio
Abstract Multi-object tracking has recently become an important area of computer vision, especially for Advanced Driver Assistance Systems (ADAS). Despite growing attention, achieving high performance tracking is still challenging, with state-of-the-art systems resulting in high complexity with a large number of hyperparameters. In this paper, we focus on reducing overall system complexity and the number of hyperparameters that need to be tuned to a specific environment. We introduce a novel tracking system based on similarity mapping by Enhanced Siamese Neural Network (ESNN), which accounts for both appearance and geometric information, and is trainable end-to-end. Our system achieves competitive performance in both speed and accuracy on the MOT16 challenge, compared to known state-of-the-art methods.
Tasks Multi-Object Tracking, Object Tracking
Published 2016-09-28
URL http://arxiv.org/abs/1609.09156v2
PDF http://arxiv.org/pdf/1609.09156v2.pdf
PWC https://paperswithcode.com/paper/similarity-mapping-with-enhanced-siamese
Repo
Framework