Paper Group ANR 365
Distributed Adaptive Learning of Graph Signals
Deep encoding of etymological information in TEI
A Model of Multi-Agent Consensus for Vague and Uncertain Beliefs
Riemannian stochastic variance reduced gradient on Grassmann manifold
Emotion-Based Crowd Representation for Abnormality Detection
Solving Large-scale Systems of Random Quadratic Equat …
Distributed Adaptive Learning of Graph Signals
Title | Distributed Adaptive Learning of Graph Signals |
Authors | P. Di Lorenzo, P. Banelli, S. Barbarossa, S. Sardellitti |
Abstract | The aim of this paper is to propose distributed strategies for adaptive learning of signals defined over graphs. Assuming the graph signal to be bandlimited, the method enables distributed reconstruction, with guaranteed performance in terms of mean-square error, and tracking from a limited number of sampled observations taken from a subset of vertices. A detailed mean square analysis is carried out and illustrates the role played by the sampling strategy on the performance of the proposed method. Finally, some useful strategies for distributed selection of the sampling set are provided. Several numerical results validate our theoretical findings, and illustrate the performance of the proposed method for distributed adaptive learning of signals defined over graphs. |
Tasks | Action Localization |
Published | 2016-09-20 |
URL | http://arxiv.org/abs/1609.06100v4 |
http://arxiv.org/pdf/1609.06100v4.pdf | |
PWC | https://paperswithcode.com/paper/distributed-adaptive-learning-of-graph |
Repo | |
Framework | |
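The adaptive update at the heart of the abstract above can be sketched in a few lines. This is a minimal centralized sketch, assuming a ring graph, a known bandwidth, an every-other-vertex sampling set, and the projected-LMS form x ← x + μBD(y − x); the paper's actual algorithm is distributed, and its mean-square analysis and sampling-set design strategies are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small ring graph: Laplacian eigenvectors give the graph Fourier basis.
N = 20
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1
L = np.diag(A.sum(1)) - A
_, U = np.linalg.eigh(L)

F = 5                      # assumed bandwidth (first 5 graph frequencies)
B = U[:, :F] @ U[:, :F].T  # projector onto the bandlimited subspace

# True bandlimited signal and a sampling set S of vertices.
x_true = U[:, :F] @ rng.normal(size=F)
S = np.arange(0, N, 2)     # sample every other vertex
D = np.zeros((N, N))
D[S, S] = 1.0

# LMS-style adaptive reconstruction from streaming noisy samples on S.
mu = 0.5
x = np.zeros(N)
for _ in range(2000):
    y = x_true + 0.01 * rng.normal(size=N)   # noisy observation at time n
    x = x + mu * B @ D @ (y - x)             # project the error onto the band

assert np.linalg.norm(x - x_true) < 0.1
```

The bandlimited projector B keeps every iterate in the signal subspace, so observations on the sampled vertices alone suffice for reconstruction, provided the sampling set makes D U_F full rank.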
Deep encoding of etymological information in TEI
Title | Deep encoding of etymological information in TEI |
Authors | Jack Bowers, Laurent Romary |
Abstract | This paper aims to provide a comprehensive modeling and representation of etymological data in digital dictionaries. The purpose is to integrate in one coherent framework both digital representations of legacy dictionaries, and also born-digital lexical databases that are constructed manually or semi-automatically. We want to propose a systematic and coherent set of modeling principles for a variety of etymological phenomena that may contribute to the creation of a continuum between existing and future lexical constructs, where anyone interested in tracing the history of words and their meanings will be able to seamlessly query lexical resources. Instead of designing an ad hoc model and representation language for digital etymological data, we will focus on identifying all the possibilities offered by the TEI guidelines for the representation of lexical information. |
Tasks | |
Published | 2016-11-30 |
URL | http://arxiv.org/abs/1611.10122v1 |
http://arxiv.org/pdf/1611.10122v1.pdf | |
PWC | https://paperswithcode.com/paper/deep-encoding-of-etymological-information-in |
Repo | |
Framework | |
A Model of Multi-Agent Consensus for Vague and Uncertain Beliefs
Title | A Model of Multi-Agent Consensus for Vague and Uncertain Beliefs |
Authors | Michael Crosscombe, Jonathan Lawry |
Abstract | Consensus formation is investigated for multi-agent systems in which agents’ beliefs are both vague and uncertain. Vagueness is represented by a third truth state meaning \emph{borderline}. This is combined with a probabilistic model of uncertainty. A belief combination operator is then proposed which exploits borderline truth values to enable agents with conflicting beliefs to reach a compromise. A number of simulation experiments are carried out in which agents apply this operator in pairwise interactions, under the bounded confidence restriction that the two agents’ beliefs must be sufficiently consistent with each other before agreement can be reached. As well as studying the consensus operator in isolation we also investigate scenarios in which agents are influenced either directly or indirectly by the state of the world. For the former we conduct simulations which combine consensus formation with belief updating based on evidence. For the latter we investigate the effect of assuming that the closer an agent’s beliefs are to the truth the more visible they are in the consensus building process. In all cases applying the consensus operators results in the population converging to a single shared belief which is both crisp and certain. Furthermore, simulations which combine consensus formation with evidential updating converge faster to a shared opinion which is closer to the actual state of the world than those in which beliefs are only changed as a result of directly receiving new evidence. Finally, if agent interactions are guided by belief quality measured as similarity to the true state of the world, then applying the consensus operator alone results in the population converging to a high quality shared belief. |
Tasks | |
Published | 2016-12-11 |
URL | http://arxiv.org/abs/1612.03433v2 |
http://arxiv.org/pdf/1612.03433v2.pdf | |
PWC | https://paperswithcode.com/paper/a-model-of-multi-agent-consensus-for-vague |
Repo | |
Framework | |
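One plausible reading of the pairwise combination the abstract describes, on individual three-valued valuations, can be sketched as follows. The specific combine rule (borderline defers to a definite value; a direct true/false conflict compromises to borderline), the conflict-count form of the bounded-confidence test, and all simulation parameters are assumptions for illustration; the paper's probabilistic layer over valuations is omitted.

```python
import random

# Truth values: 0 = false, 0.5 = borderline, 1 = true.
def combine(u, v):
    """Pairwise consensus of truth values: borderline defers to the definite
    value; a direct true/false conflict compromises to borderline."""
    if u == v:
        return u
    if u == 0.5:
        return v
    if v == 0.5:
        return u
    return 0.5            # u, v in {0, 1} and u != v

def consensus(b1, b2, max_conflicts):
    """Merge two belief vectors only if they are sufficiently consistent
    (bounded confidence); otherwise leave both unchanged."""
    conflicts = sum(1 for u, v in zip(b1, b2) if {u, v} == {0, 1})
    if conflicts > max_conflicts:
        return b1, b2
    merged = tuple(combine(u, v) for u, v in zip(b1, b2))
    return merged, merged

random.seed(1)
n_props = 5
pop = [tuple(random.choice((0, 0.5, 1)) for _ in range(n_props))
       for _ in range(30)]

for _ in range(5000):
    i, j = random.sample(range(len(pop)), 2)
    pop[i], pop[j] = consensus(pop[i], pop[j], max_conflicts=n_props)

# With this loose confidence bound every interacting pair aligns, and the
# population's belief diversity collapses toward a single shared belief.
assert len(set(pop)) < 30
```

Note how borderline values act as the compromise channel: two agents holding 0 and 1 on a proposition both move to 0.5, from which a later interaction can pull them to either definite value.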
Riemannian stochastic variance reduced gradient on Grassmann manifold
Title | Riemannian stochastic variance reduced gradient on Grassmann manifold |
Authors | Hiroyuki Kasai, Hiroyuki Sato, Bamdev Mishra |
Abstract | Stochastic variance reduction algorithms have recently become popular for minimizing the average of a large, but finite, number of loss functions. In this paper, we propose a novel Riemannian extension of the Euclidean stochastic variance reduced gradient algorithm (R-SVRG) to a compact manifold search space. To this end, we show the developments on the Grassmann manifold. The key challenges of averaging, addition, and subtraction of multiple gradients are addressed with notions like logarithm mapping and parallel translation of vectors on the Grassmann manifold. We present a global convergence analysis of the proposed algorithm with decay step-sizes and a local convergence rate analysis under fixed step-size with some natural assumptions. The proposed algorithm is applied on a number of problems on the Grassmann manifold like principal components analysis, low-rank matrix completion, and the Karcher mean computation. In all these cases, the proposed algorithm outperforms the standard Riemannian stochastic gradient descent algorithm. |
Tasks | Low-Rank Matrix Completion, Matrix Completion |
Published | 2016-05-24 |
URL | http://arxiv.org/abs/1605.07367v3 |
http://arxiv.org/pdf/1605.07367v3.pdf | |
PWC | https://paperswithcode.com/paper/riemannian-stochastic-variance-reduced |
Repo | |
Framework | |
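The R-SVRG recipe (snapshot full gradient, variance-reduced stochastic steps, Riemannian corrections) can be sketched for the simplest Grassmann case Gr(n, 1), i.e. leading-eigenvector PCA on the sphere. This sketch substitutes a metric-projection retraction and tangent-space projection for the exponential map and parallel translation the paper uses, and all problem sizes and step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data with a dominant direction; goal: the leading eigenvector,
# i.e. minimize f(u) = -(1/m) sum_i (x_i . u)^2 over the sphere (Gr(n,1)).
n, m = 20, 200
v = rng.normal(size=n)
v /= np.linalg.norm(v)
X = rng.normal(size=(m, n)) + 3.0 * rng.normal(size=(m, 1)) * v

def rgrad(u, xi):
    """Riemannian gradient of -(xi.u)^2 at u: Euclidean gradient projected
    onto the tangent space of the sphere at u."""
    g = -2.0 * (xi @ u) * xi
    return g - (g @ u) * u

def retract(u, t):
    """Metric-projection retraction: step in the tangent direction, renormalize."""
    w = u + t
    return w / np.linalg.norm(w)

u = rng.normal(size=n)
u /= np.linalg.norm(u)
step = 0.001
for epoch in range(50):
    u0 = u.copy()                               # snapshot point
    full = sum(rgrad(u0, x) for x in X) / m     # full gradient at the snapshot
    for i in rng.permutation(m):
        # Variance-reduced direction; moving the correction between tangent
        # spaces is approximated here by projection onto T_u (not parallel
        # translation, as in the paper).
        d = rgrad(u, X[i]) - rgrad(u0, X[i]) + full
        d = d - (d @ u) * u
        u = retract(u, -step * d)

# u should align with the dominant direction (up to sign).
assert abs(u @ v) > 0.95
```

The per-epoch full gradient anchors the stochastic steps, which is what lets a fixed step size converge where plain Riemannian SGD would need a decaying one.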
Emotion-Based Crowd Representation for Abnormality Detection
Title | Emotion-Based Crowd Representation for Abnormality Detection |
Authors | Hamidreza Rabiee, Javad Haddadnia, Hossein Mousavi, Moin Nabi, Vittorio Murino, Nicu Sebe |
Abstract | In crowd behavior understanding, a model of crowd behavior needs to be trained using the information extracted from video sequences. Since there is no ground-truth available in crowd datasets except the crowd behavior labels, most of the methods proposed so far are based only on low-level visual features. However, there is a huge semantic gap between low-level motion/appearance features and the high-level concept of crowd behaviors. In this paper we propose an attribute-based strategy to alleviate this problem. While similar strategies have been recently adopted for object and action recognition, as far as we know, we are the first to show that crowd emotions can be used as attributes for crowd behavior understanding. The main idea is to train a set of emotion-based classifiers, which can subsequently be used to represent the crowd motion. For this purpose, we collect a large dataset of video clips and annotate them with both “crowd behaviors” and “crowd emotions”. We show the results of the proposed method on our dataset, which demonstrate that the crowd emotions enable the construction of more descriptive models for crowd behaviors. We aim to publish the dataset with the article, to be used as a benchmark by the community. |
Tasks | Anomaly Detection, Temporal Action Localization |
Published | 2016-07-26 |
URL | http://arxiv.org/abs/1607.07646v1 |
http://arxiv.org/pdf/1607.07646v1.pdf | |
PWC | https://paperswithcode.com/paper/emotion-based-crowd-representation-for |
Repo | |
Framework | |
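The two-stage idea (train emotion classifiers, then use their scores as a mid-level representation) can be sketched on synthetic data. Everything here is an illustrative assumption: the features, the emotion labels (made linearly decodable on purpose), and the use of ridge regression as the per-emotion scorer.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy low-level motion/appearance features for 2000 clips.
n_clips, n_feat, n_emotions = 2000, 10, 3
X = rng.normal(size=(n_clips, n_feat))

# Hypothetical binary emotion annotations, made linearly decodable from the
# features for this toy example (multi-label, values in {-1, +1}).
W_true = rng.normal(size=(n_feat, n_emotions))
y = np.sign(X @ W_true)

# Stage 1: one ridge-regression scorer per emotion label.
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ y)

# Stage 2: the emotion scores themselves form the mid-level "attribute"
# representation that a downstream crowd-behavior classifier would consume.
attributes = X @ W                  # shape (n_clips, n_emotions)
pred = np.sign(attributes)
assert np.mean(pred == y) > 0.9
```

The point of the intermediate representation is that each dimension of `attributes` is semantically meaningful (an emotion score), unlike raw motion features.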
Solving Large-scale Systems of Random Quadratic Equations via Stochastic Truncated Amplitude Flow
Title | Solving Large-scale Systems of Random Quadratic Equations via Stochastic Truncated Amplitude Flow |
Authors | Gang Wang, Georgios B. Giannakis, Jie Chen |
Abstract | A novel approach termed \emph{stochastic truncated amplitude flow} (STAF) is developed to reconstruct an unknown $n$-dimensional real-/complex-valued signal $\bm{x}$ from $m$ `phaseless’ quadratic equations of the form $\psi_i=|\langle\bm{a}_i,\bm{x}\rangle|$. This problem, also known as phase retrieval from magnitude-only information, is \emph{NP-hard} in general. Adopting an amplitude-based nonconvex formulation, STAF leads to an iterative solver comprising two stages: s1) Orthogonality-promoting initialization through a stochastic variance reduced gradient algorithm; and, s2) A series of iterative refinements of the initialization using stochastic truncated gradient iterations. Both stages involve a single equation per iteration, thus rendering STAF a simple, scalable, and fast approach amenable to large-scale implementations that is useful when $n$ is large. When $\{\bm{a}_i\}_{i=1}^m$ are independent Gaussian, STAF provably recovers exactly any $\bm{x}\in\mathbb{R}^n$ exponentially fast based on order of $n$ quadratic equations. STAF is also robust in the presence of additive noise of bounded support. Simulated tests involving real Gaussian $\{\bm{a}_i\}$ vectors demonstrate that STAF empirically reconstructs any $\bm{x}\in\mathbb{R}^n$ exactly from about $2.3n$ magnitude-only measurements, outperforming state-of-the-art approaches and narrowing the gap from the information-theoretic number of equations $m=2n-1$. Extensive experiments using synthetic data and real images corroborate markedly improved performance of STAF over existing alternatives. |
Tasks | |
Published | 2016-10-29 |
URL | http://arxiv.org/abs/1610.09540v1 |
http://arxiv.org/pdf/1610.09540v1.pdf | |
PWC | https://paperswithcode.com/paper/solving-large-scale-systems-of-random |
Repo | |
Framework | |
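The single-equation-per-iteration refinement stage can be sketched in numpy. This is a simplified sketch: a classical spectral initialization stands in for the paper's orthogonality-promoting stage, the truncation rule is a plausible reading (keep an equation only if the current fit is not too inconsistent with it), and the step size and threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Recover a real x from magnitude-only measurements psi_i = |<a_i, x>|.
n, m = 20, 300
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
psi = np.abs(A @ x_true)

# Spectral initialization (a classical stand-in for the paper's
# orthogonality-promoting stage): leading eigenvector of a weighted matrix,
# scaled by the norm estimate sqrt(mean(psi^2)).
Y = (A * (psi**2)[:, None]).T @ A / m
_, vecs = np.linalg.eigh(Y)
z = vecs[:, -1] * np.sqrt(np.mean(psi**2))

# Stochastic truncated amplitude-flow refinements: one equation per step,
# skipping equations whose current fit is too inconsistent (truncation).
mu, gamma = 0.02, 0.3
for _ in range(60 * m):
    i = rng.integers(m)
    v = A[i] @ z
    if np.abs(v) >= psi[i] / (1.0 + gamma):
        z -= mu * (v - psi[i] * np.sign(v)) * A[i]

# Recovery is only possible up to a global sign.
err = min(np.linalg.norm(z - x_true), np.linalg.norm(z + x_true))
assert err / np.linalg.norm(x_true) < 1e-2
```

Each accepted step is a Kaczmarz-like correction along one measurement vector, which is why the method scales: no full gradient over all m equations is ever formed after initialization.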
Cseq2seq: Cyclic Sequence-to-Sequence Learning
Title | Cseq2seq: Cyclic Sequence-to-Sequence Learning |
Authors | Biao Zhang, Deyi Xiong, Jinsong Su |
Abstract | The vanilla sequence-to-sequence learning (seq2seq) reads and encodes a source sequence into a fixed-length vector only once, suffering from its insufficiency in modeling structural correspondence between the source and target sequence. Instead of handling this insufficiency with a linearly weighted attention mechanism, in this paper, we propose to use a recurrent neural network (RNN) as an alternative (Cseq2seq-I). During decoding, Cseq2seq-I cyclically feeds the previous decoding state back to the encoder as the initial state of the RNN, and reencodes source representations to produce context vectors. We surprisingly find that the introduced RNN succeeds in dynamically detecting translation-related source tokens according to the partial target sequence. Based on this finding, we further hypothesize that the partial target sequence can act as a feedback to improve the understanding of the source sequence. To test this hypothesis, we propose cyclic sequence-to-sequence learning (Cseq2seq-II) which differs from the seq2seq only in the reintroduction of previous decoding state into the same encoder. We further perform parameter sharing on Cseq2seq-II to reduce parameter redundancy and enhance regularization. In particular, we share the weights of the encoder and decoder, and two target-side word embeddings, making Cseq2seq-II equivalent to a single conditional RNN model, with 31% parameters pruned but even better performance. Cseq2seq-II not only preserves the simplicity of seq2seq but also yields comparable and promising results on machine translation tasks. Experiments on Chinese-English and English-German translation show that Cseq2seq achieves significant and consistent improvements over seq2seq and is as competitive as the attention-based seq2seq model. |
Tasks | Machine Translation, Word Embeddings |
Published | 2016-07-29 |
URL | http://arxiv.org/abs/1607.08725v2 |
http://arxiv.org/pdf/1607.08725v2.pdf | |
PWC | https://paperswithcode.com/paper/cseq2seq-cyclic-sequence-to-sequence-learning |
Repo | |
Framework | |
Deep Learning Driven Visual Path Prediction from a Single Image
Title | Deep Learning Driven Visual Path Prediction from a Single Image |
Authors | Siyu Huang, Xi Li, Zhongfei Zhang, Zhouzhou He, Fei Wu, Wei Liu, Jinhui Tang, Yueting Zhuang |
Abstract | Capabilities of inference and prediction are significant components of visual systems. In this paper, we address one of their important and challenging tasks: visual path prediction. Its goal is to infer the future path of a visual object in a static scene. This task is complicated as it needs high-level semantic understanding of both the scenes and the motion patterns underlying video sequences. In practice, cluttered situations have also raised higher demands on the effectiveness and robustness of the considered models. Motivated by these observations, we propose a deep learning framework which simultaneously performs deep feature learning for visual representation in conjunction with spatio-temporal context modeling. After that, we propose a unified path planning scheme to make accurate future path prediction based on the analytic results of the context models. The highly effective visual representation and deep context models ensure that our framework makes a deep semantic understanding of the scene and motion pattern, consequently improving the performance of the visual path prediction task. In order to comprehensively evaluate the model’s performance on the visual path prediction task, we construct two large benchmark datasets from the adaptation of video tracking datasets. The qualitative and quantitative experimental results show that our approach outperforms the existing approaches and has better generalization capability. |
Tasks | |
Published | 2016-01-27 |
URL | http://arxiv.org/abs/1601.07265v1 |
http://arxiv.org/pdf/1601.07265v1.pdf | |
PWC | https://paperswithcode.com/paper/deep-learning-driven-visual-path-prediction |
Repo | |
Framework | |
Residual CNDS
Title | Residual CNDS |
Authors | Hussein A. Al-Barazanchi, Hussam Qassim, Abhishek Verma |
Abstract | Convolutional neural networks are nowadays of tremendous importance for any image classification system. One of the most investigated ways to increase the accuracy of a CNN is to increase its depth. Increasing the depth by stacking more layers, however, makes training more difficult as well as computationally expensive. Some research has found that adding auxiliary supervision forks after intermediate layers increases accuracy. Which intermediate layers should have a fork was addressed only recently, using a simple rule to detect the positions of intermediate layers that need an auxiliary supervision fork. This technique is known as convolutional neural networks with deep supervision (CNDS). It enhanced classification accuracy over a straightforward CNN on the MIT Places dataset and ImageNet. On the other hand, residual learning is another recently emerged technique that eases the training of very deep CNNs. The residual learning framework changes the learning of layers from unreferenced functions to learning residual functions with respect to the layer’s input. Residual learning achieved state-of-the-art results in the ImageNet 2015 and COCO competitions. In this paper, we study the effect of adding residual connections to a CNDS network. Our experimental results show an increase in accuracy over using CNDS alone. |
Tasks | Image Classification |
Published | 2016-08-07 |
URL | http://arxiv.org/abs/1608.02201v1 |
http://arxiv.org/pdf/1608.02201v1.pdf | |
PWC | https://paperswithcode.com/paper/residual-cnds |
Repo | |
Framework | |
Beyond standard benchmarks: Parameterizing performance evaluation in visual object tracking
Title | Beyond standard benchmarks: Parameterizing performance evaluation in visual object tracking |
Authors | Luka Čehovin Zajc, Alan Lukežič, Aleš Leonardis, Matej Kristan |
Abstract | Object-to-camera motion produces a variety of apparent motion patterns that significantly affect performance of short-term visual trackers. Despite being crucial for designing robust trackers, their influence is poorly explored in standard benchmarks due to weakly defined, biased and overlapping attribute annotations. In this paper we propose to go beyond pre-recorded benchmarks with post-hoc annotations by presenting an approach that utilizes omnidirectional videos to generate realistic, consistently annotated, short-term tracking scenarios with exactly parameterized motion patterns. We have created an evaluation system, constructed a fully annotated dataset of omnidirectional videos and the generators for typical motion patterns. We provide an in-depth analysis of major tracking paradigms which is complementary to the standard benchmarks and confirms the expressiveness of our evaluation approach. |
Tasks | Object Tracking, Visual Object Tracking |
Published | 2016-12-01 |
URL | http://arxiv.org/abs/1612.00089v2 |
http://arxiv.org/pdf/1612.00089v2.pdf | |
PWC | https://paperswithcode.com/paper/beyond-standard-benchmarks-parameterizing |
Repo | |
Framework | |
Convolutional Regression for Visual Tracking
Title | Convolutional Regression for Visual Tracking |
Authors | Kai Chen, Wenbing Tao |
Abstract | Recently, discriminatively learned correlation filters (DCF) have drawn much attention in the visual object tracking community. The success of DCF is potentially attributed to the fact that a large amount of samples are utilized to train the ridge regression model and predict the location of the object. To solve the regression problem in an efficient way, these samples are all generated by circularly shifting from a search patch. However, these synthetic samples also induce some negative effects which weaken the robustness of DCF based trackers. In this paper, we propose a Convolutional Regression framework for visual tracking (CRT). Instead of learning the linear regression model in a closed form, we try to solve the regression problem by optimizing a one-channel-output convolution layer with Gradient Descent (GD). In particular, the receptive field size of the convolution layer is set to the size of the object. Contrary to DCF, it is possible to incorporate all “real” samples clipped from the whole image. A critical issue of the GD approach is that most of the convolutional samples are negative and the contribution of positive samples will be suppressed. To address this problem, we propose a novel Automatic Hard Negative Mining method to eliminate easy negatives and enhance positives. Extensive experiments are conducted on a widely-used benchmark with 100 sequences. The results show that the proposed algorithm achieves outstanding performance and outperforms almost all the existing DCF based algorithms. |
Tasks | Object Tracking, Visual Object Tracking, Visual Tracking |
Published | 2016-11-14 |
URL | http://arxiv.org/abs/1611.04215v2 |
http://arxiv.org/pdf/1611.04215v2.pdf | |
PWC | https://paperswithcode.com/paper/convolutional-regression-for-visual-tracking |
Repo | |
Framework | |
Discriminative Scale Space Tracking
Title | Discriminative Scale Space Tracking |
Authors | Martin Danelljan, Gustav Häger, Fahad Shahbaz Khan, Michael Felsberg |
Abstract | Accurate scale estimation of a target is a challenging research problem in visual object tracking. Most state-of-the-art methods employ an exhaustive scale search to estimate the target size. The exhaustive search strategy is computationally expensive and struggles in the presence of large scale variations. This paper investigates the problem of accurate and robust scale estimation in a tracking-by-detection framework. We propose a novel scale adaptive tracking approach by learning separate discriminative correlation filters for translation and scale estimation. The explicit scale filter is learned online using the target appearance sampled at a set of different scales. Contrary to standard approaches, our method directly learns the appearance change induced by variations in the target scale. Additionally, we investigate strategies to reduce the computational cost of our approach. Extensive experiments are performed on the OTB and the VOT2014 datasets. Compared to the standard exhaustive scale search, our approach achieves a gain of 2.5% in average overlap precision on the OTB dataset. Additionally, our method is computationally efficient, operating at a 50% higher frame rate compared to the exhaustive scale search. Our method obtains the top rank in performance by outperforming 19 state-of-the-art trackers on OTB and 37 state-of-the-art trackers on VOT2014. |
Tasks | Object Tracking, Visual Object Tracking |
Published | 2016-09-20 |
URL | http://arxiv.org/abs/1609.06141v1 |
http://arxiv.org/pdf/1609.06141v1.pdf | |
PWC | https://paperswithcode.com/paper/discriminative-scale-space-tracking |
Repo | |
Framework | |
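The closed-form correlation filter underlying this family of trackers (applied by DSST once along translation and once along the scale axis) can be sketched in one dimension. The single-channel, single-sample form below is a simplification; the pattern, desired output, and regularizer value are illustrative assumptions.

```python
import numpy as np

def train_filter(f, g, lam=1e-2):
    """Closed-form correlation filter in the Fourier domain:
    H* = G F* / (F F* + lam)."""
    F, G = np.fft.fft(f), np.fft.fft(g)
    return G * np.conj(F) / (F * np.conj(F) + lam)

def respond(h_conj, f):
    """Correlation response of the learned filter on a new sample."""
    return np.real(np.fft.ifft(np.fft.fft(f) * h_conj))

S = 33                                    # e.g. 33 scale samples, as in DSST
grid = np.arange(S)
template = np.exp(-0.5 * ((grid - 16.0) / 2.0) ** 2)   # appearance sample
g = np.exp(-0.5 * ((grid - 16.0) / 2.0) ** 2)          # desired Gaussian peak

h_conj = train_filter(template, g)

# If the pattern shifts by 4 bins (read: the target changed scale by 4 scale
# steps), the response peak moves with it.
resp = respond(h_conj, np.roll(template, 4))
assert np.argmax(resp) == 20
```

Locating the response maximum over the scale axis replaces the exhaustive scale search, which is where the reported speedup comes from.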
Modeling and Propagating CNNs in a Tree Structure for Visual Tracking
Title | Modeling and Propagating CNNs in a Tree Structure for Visual Tracking |
Authors | Hyeonseob Nam, Mooyeol Baek, Bohyung Han |
Abstract | We present an online visual tracking algorithm by managing multiple target appearance models in a tree structure. The proposed algorithm employs Convolutional Neural Networks (CNNs) to represent target appearances, where multiple CNNs collaborate to estimate target states and determine the desirable paths for online model updates in the tree. By maintaining multiple CNNs in diverse branches of tree structure, it is convenient to deal with multi-modality in target appearances and preserve model reliability through smooth updates along tree paths. Since multiple CNNs share all parameters in convolutional layers, it takes advantage of multiple models with little extra cost by saving memory space and avoiding redundant network evaluations. The final target state is estimated by sampling target candidates around the state in the previous frame and identifying the best sample in terms of a weighted average score from a set of active CNNs. Our algorithm illustrates outstanding performance compared to the state-of-the-art techniques in challenging datasets such as online tracking benchmark and visual object tracking challenge. |
Tasks | Object Tracking, Visual Object Tracking, Visual Tracking |
Published | 2016-08-25 |
URL | http://arxiv.org/abs/1608.07242v1 |
http://arxiv.org/pdf/1608.07242v1.pdf | |
PWC | https://paperswithcode.com/paper/modeling-and-propagating-cnns-in-a-tree |
Repo | |
Framework | |
Critical Echo State Networks that Anticipate Input using Morphable Transfer Functions
Title | Critical Echo State Networks that Anticipate Input using Morphable Transfer Functions |
Authors | Norbert Michael Mayer |
Abstract | The paper investigates a new type of truly critical echo state networks where individual transfer functions for every neuron can be modified to anticipate the expected next input. Deviations from expected input are only forgotten slowly in power law fashion. The paper outlines the theory, numerically analyzes a one neuron model network and finally discusses technical and also biological implications of this type of approach. |
Tasks | |
Published | 2016-06-12 |
URL | http://arxiv.org/abs/1606.03674v2 |
http://arxiv.org/pdf/1606.03674v2.pdf | |
PWC | https://paperswithcode.com/paper/critical-echo-state-networks-that-anticipate |
Repo | |
Framework | |
The Multivariate Generalised von Mises distribution: Inference and applications
Title | The Multivariate Generalised von Mises distribution: Inference and applications |
Authors | Alexandre K. W. Navarro, Jes Frellsen, Richard E. Turner |
Abstract | Circular variables arise in a multitude of data-modelling contexts ranging from robotics to the social sciences, but they have been largely overlooked by the machine learning community. This paper partially redresses this imbalance by extending some standard probabilistic modelling tools to the circular domain. First we introduce a new multivariate distribution over circular variables, called the multivariate Generalised von Mises (mGvM) distribution. This distribution can be constructed by restricting and renormalising a general multivariate Gaussian distribution to the unit hyper-torus. Previously proposed multivariate circular distributions are shown to be special cases of this construction. Second, we introduce a new probabilistic model for circular regression, that is inspired by Gaussian Processes, and a method for probabilistic principal component analysis with circular hidden variables. These models can leverage standard modelling tools (e.g. covariance functions and methods for automatic relevance determination). Third, we show that the posterior distribution in these models is a mGvM distribution which enables development of an efficient variational free-energy scheme for performing approximate inference and approximate maximum-likelihood learning. |
Tasks | Gaussian Processes |
Published | 2016-02-16 |
URL | http://arxiv.org/abs/1602.05003v6 |
http://arxiv.org/pdf/1602.05003v6.pdf | |
PWC | https://paperswithcode.com/paper/the-multivariate-generalised-von-mises |
Repo | |
Framework | |
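The construction in the abstract (a multivariate Gaussian restricted to the unit hyper-torus) translates directly into an unnormalised log-density, and the single-circle special case can be checked against the ordinary von Mises distribution. The embedding order used below (all cosines, then all sines) is an illustrative convention, not the paper's notation.

```python
import numpy as np

def log_mgvm_unnorm(theta, mu, prec):
    """Unnormalised log-density of the mGvM construction: a 2d-dimensional
    Gaussian (mean mu, precision prec) restricted to the unit hyper-torus,
    with angles embedded as (cos t_1, ..., cos t_d, sin t_1, ..., sin t_d)."""
    x = np.concatenate([np.cos(theta), np.sin(theta)])
    d = x - mu
    return -0.5 * d @ prec @ d

# Sanity check of the special-case claim: on a single circle, an isotropic
# Gaussian with its mean at radius kappa restricts to a von Mises
# distribution with concentration kappa and mode phi, since on the unit
# circle the quadratic term is constant and only kappa*cos(t - phi) remains.
kappa, phi = 2.0, 0.7
mu = kappa * np.array([np.cos(phi), np.sin(phi)])
P = np.eye(2)

t1, t2 = 0.3, 2.1
lhs = (log_mgvm_unnorm(np.array([t1]), mu, P)
       - log_mgvm_unnorm(np.array([t2]), mu, P))
rhs = kappa * (np.cos(t1 - phi) - np.cos(t2 - phi))
assert abs(lhs - rhs) < 1e-12
```

A non-diagonal precision matrix is what couples the circular variables, which is the step beyond products of independent von Mises factors.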