May 6, 2019

2846 words 14 mins read

Paper Group ANR 334

Scaling up Dynamic Topic Models. Discovering Useful Parts for Pose Estimation in Sparsely Annotated Datasets. Liquid Democracy: An Analysis in Binary Aggregation and Diffusion. Advances in Very Deep Convolutional Neural Networks for LVCSR. Dynamic Scene Deblurring using a Locally Adaptive Linear Blur Model. Support Vector Machines and Generalisatio …

Scaling up Dynamic Topic Models

Title Scaling up Dynamic Topic Models
Authors Arnab Bhadury, Jianfei Chen, Jun Zhu, Shixia Liu
Abstract Dynamic topic models (DTMs) are very effective in discovering topics and capturing their evolution trends in time series data. To do posterior inference of DTMs, existing methods are all batch algorithms that scan the full dataset before each update of the model and make inexact variational approximations with mean-field assumptions. Due to the lack of a more scalable inference algorithm, DTMs, despite their usefulness, have not been used to capture large topic dynamics. This paper fills this research void and presents a fast and parallelizable inference algorithm using Gibbs Sampling with Stochastic Gradient Langevin Dynamics that does not make any unwarranted assumptions. We also present a Metropolis-Hastings based $O(1)$ sampler for topic assignments for each word token. In a distributed environment, our algorithm requires very little communication between workers during sampling (almost embarrassingly parallel) and scales up to large-scale applications. We learn the largest Dynamic Topic Model to our knowledge, capturing the dynamics of 1,000 topics from 2.6 million documents in less than half an hour, and our empirical results show that our algorithm is not only orders of magnitude faster than the baselines but also achieves lower perplexity.
Tasks Time Series, Topic Models
Published 2016-02-19
URL http://arxiv.org/abs/1602.06049v1
PDF http://arxiv.org/pdf/1602.06049v1.pdf
PWC https://paperswithcode.com/paper/scaling-up-dynamic-topic-models
Repo
Framework
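
The key computational move in the abstract is replacing batch variational inference with stochastic-gradient MCMC. Below is a minimal, generic sketch of a single stochastic gradient Langevin dynamics update of the kind such a sampler iterates; the function and argument names are illustrative and not the paper's implementation.

```python
import numpy as np

def sgld_step(theta, grad_log_prior, grad_log_lik_minibatch, scale, step_size, rng):
    """One generic SGLD update (illustrative, not the paper's code).

    theta: current parameters (e.g. one topic's time-slice parameters).
    grad_log_prior: gradient of log p(theta) at theta.
    grad_log_lik_minibatch: gradient of the log-likelihood on a minibatch.
    scale: corpus size / minibatch size, so the minibatch gradient is an
           unbiased estimate of the full-data gradient.
    """
    drift = 0.5 * step_size * (grad_log_prior + scale * grad_log_lik_minibatch)
    noise = rng.normal(0.0, np.sqrt(step_size), size=theta.shape)
    return theta + drift + noise

# Toy usage: standard-normal prior, a fake minibatch likelihood gradient.
rng = np.random.default_rng(0)
theta = np.zeros(5)
theta = sgld_step(theta, -theta, 1.0 - theta, scale=100.0, step_size=1e-3, rng=rng)
print(theta)
```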

Discovering Useful Parts for Pose Estimation in Sparsely Annotated Datasets

Title Discovering Useful Parts for Pose Estimation in Sparsely Annotated Datasets
Authors Mikhail Breslav, Tyson L. Hedrick, Stan Sclaroff, Margrit Betke
Abstract Our work introduces a novel way to increase pose estimation accuracy by discovering parts from unannotated regions of training images. Discovered parts are used to generate more accurate appearance likelihoods for traditional part-based models like Pictorial Structures [13] and its derivatives. Our experiments on images of a hawkmoth in flight show that our proposed approach significantly improves over existing work [27] for this application, while also being more generally applicable. Our proposed approach localizes landmarks at least twice as accurately as a baseline based on a Mixture of Pictorial Structures (MPS) model. Our unique High-Resolution Moth Flight (HRMF) dataset is made publicly available with annotations.
Tasks Pose Estimation
Published 2016-05-02
URL http://arxiv.org/abs/1605.00707v1
PDF http://arxiv.org/pdf/1605.00707v1.pdf
PWC https://paperswithcode.com/paper/discovering-useful-parts-for-pose-estimation
Repo
Framework

Liquid Democracy: An Analysis in Binary Aggregation and Diffusion

Title Liquid Democracy: An Analysis in Binary Aggregation and Diffusion
Authors Zoé Christoff, Davide Grossi
Abstract The paper proposes an analysis of liquid democracy (or, delegable proxy voting) from the perspective of binary aggregation and of binary diffusion models. We show how liquid democracy on binary issues can be embedded into the framework of binary aggregation with abstentions, enabling the transfer of known results about the latter, such as impossibility theorems, to the former. This embedding also sheds light on the relation between delegation cycles in liquid democracy and the probability of collective abstentions, as well as the issue of individual rationality in a delegable proxy voting setting. We then show how liquid democracy on binary issues can be modeled and analyzed also as a specific process of dynamics of binary opinions on networks. These processes, called Boolean DeGroot processes, are a special case of the DeGroot stochastic model of opinion diffusion. We establish the convergence conditions of such processes and show they provide some novel insights on how the effects of delegation cycles and individual rationality could be mitigated within liquid democracy. The study is a first attempt to provide theoretical foundations for the delegable proxy features of the liquid democracy voting system. Our analysis suggests recommendations on how the system may be modified to make it more resilient with respect to the handling of delegation cycles and of inconsistent majorities.
Tasks
Published 2016-12-23
URL http://arxiv.org/abs/1612.08048v2
PDF http://arxiv.org/pdf/1612.08048v2.pdf
PWC https://paperswithcode.com/paper/liquid-democracy-an-analysis-in-binary
Repo
Framework
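
To make the delegation-cycle issue discussed in the abstract concrete, here is a small illustrative sketch (not from the paper) that resolves delegable-proxy ballots on a single binary issue and treats voters whose delegation chain ends in a cycle, or in a dangling proxy, as abstaining.

```python
def resolve_votes(delegates, direct_votes):
    """Resolve liquid-democracy ballots on one binary issue.

    delegates: dict voter -> voter they delegate to.
    direct_votes: dict voter -> True/False for voters who vote themselves.
    Voters whose delegation chain never reaches a direct vote (it ends in a
    cycle or a missing proxy) are treated as abstaining, marked None.
    """
    resolved = {}
    for voter in set(delegates) | set(direct_votes):
        seen, v = set(), voter
        while v not in direct_votes:
            if v in seen or v not in delegates:   # delegation cycle or dangling proxy
                resolved[voter] = None            # collective abstention
                break
            seen.add(v)
            v = delegates[v]
        else:
            resolved[voter] = direct_votes[v]
    return resolved

# Example: c -> b -> a, who votes True; d and e delegate in a cycle and abstain.
print(resolve_votes({"b": "a", "c": "b", "d": "e", "e": "d"}, {"a": True}))
```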

Advances in Very Deep Convolutional Neural Networks for LVCSR

Title Advances in Very Deep Convolutional Neural Networks for LVCSR
Authors Tom Sercu, Vaibhava Goel
Abstract Very deep CNNs with small 3x3 kernels have recently been shown to achieve very strong performance as acoustic models in hybrid NN-HMM speech recognition systems. In this paper we investigate how to efficiently scale these models to larger datasets. Specifically, we address the design choice of pooling and padding along the time dimension which renders convolutional evaluation of sequences highly inefficient. We propose a new CNN design without time-padding and without time-pooling, which is slightly suboptimal for accuracy, but has two significant advantages: it enables sequence training and deployment by allowing efficient convolutional evaluation of full utterances, and it allows batch normalization to be straightforwardly adopted for CNNs on sequence data. Through batch normalization, we recover the performance lost from removing the time-pooling, while keeping the benefit of efficient convolutional evaluation. We demonstrate the performance of our models both on larger scale data than before, and after sequence training. Our very deep CNN model sequence trained on the 2000h Switchboard dataset obtains 9.4 word error rate on the Hub5 test set, matching with a single model the performance of the 2015 IBM system combination, which was the previous best published result.
Tasks Large Vocabulary Continuous Speech Recognition, Speech Recognition
Published 2016-04-06
URL http://arxiv.org/abs/1604.01792v2
PDF http://arxiv.org/pdf/1604.01792v2.pdf
PWC https://paperswithcode.com/paper/advances-in-very-deep-convolutional-neural
Repo
Framework
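
As a rough illustration of the design choice described above, the sketch below (PyTorch as a stand-in framework, with illustrative layer sizes) builds a small convolutional stack that pads and pools only along the frequency axis, never along time, with batch normalization after each convolution, so full utterances can be evaluated convolutionally.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=(0, 1)),  # no time padding
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

model = nn.Sequential(
    conv_block(1, 64),
    conv_block(64, 64),
    nn.MaxPool2d(kernel_size=(1, 2)),   # pool over frequency only
    conv_block(64, 128),
    conv_block(128, 128),
)

# Each 3x3 conv without time padding trims one frame from each side, so the
# output covers (time - 2 * num_convs) frames of a (batch, 1, time, freq) input.
x = torch.randn(2, 1, 100, 40)          # 100 frames, 40 log-mel features
print(model(x).shape)                   # -> torch.Size([2, 128, 92, 20])
```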

Dynamic Scene Deblurring using a Locally Adaptive Linear Blur Model

Title Dynamic Scene Deblurring using a Locally Adaptive Linear Blur Model
Authors Tae Hyun Kim, Seungjun Nah, Kyoung Mu Lee
Abstract State-of-the-art video deblurring methods cannot handle blurry videos recorded in dynamic scenes, since they are built under a strong assumption that the captured scenes are static. Contrary to the existing methods, we propose a video deblurring algorithm that can deal with general blurs inherent in dynamic scenes. To handle general and locally varying blurs caused by various sources, such as moving objects, camera shake, depth variation, and defocus, we estimate pixel-wise non-uniform blur kernels. We infer bidirectional optical flows to handle motion blurs, and also estimate Gaussian blur maps to remove optical blur from defocus in our new blur model. Accordingly, we propose a single energy model that jointly estimates optical flows, defocus blur maps and latent frames. We also provide a framework and efficient solvers to minimize the proposed energy model. By optimizing the energy model, we achieve significant improvements in removing general blurs, estimating optical flows, and extending depth-of-field in blurry frames. Moreover, in this work, to evaluate the performance of non-uniform deblurring methods objectively, we have constructed a new realistic dataset with ground truths. In addition, extensive experiments on publicly available challenging video data demonstrate that the proposed method produces qualitatively superior results compared to state-of-the-art methods, which often fail in either deblurring or optical flow estimation.
Tasks Deblurring, Optical Flow Estimation
Published 2016-03-14
URL http://arxiv.org/abs/1603.04265v1
PDF http://arxiv.org/pdf/1603.04265v1.pdf
PWC https://paperswithcode.com/paper/dynamic-scene-deblurring-using-a-locally
Repo
Framework
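
One ingredient the abstract describes is approximating each pixel's blur with a locally linear kernel derived from its motion. The sketch below is an illustrative simplification (not the paper's energy model): it builds such a kernel from a single flow vector by accumulating samples along a line segment.

```python
import numpy as np

def linear_blur_kernel(u, v, size=15, samples=64):
    """Approximate one pixel's motion-blur kernel as a line along (u, v).

    The blur during exposure is modelled as a straight segment along the local
    motion; evenly spaced samples on that segment are accumulated into a grid
    and the kernel is normalized to sum to one.
    """
    k = np.zeros((size, size))
    c = size // 2
    for t in np.linspace(-0.5, 0.5, samples):
        x = int(round(c + t * u))
        y = int(round(c + t * v))
        if 0 <= x < size and 0 <= y < size:
            k[y, x] += 1.0
    return k / k.sum()

# A pixel moving 6 px right and 2 px down gets an oriented streak kernel.
print(np.flatnonzero(linear_blur_kernel(6, 2)).size)   # number of non-zero taps
```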

Support Vector Machines and Generalisation in HEP

Title Support Vector Machines and Generalisation in HEP
Authors A. Bethani, A. J. Bevan, J. Hays, T. J. Stevenson
Abstract We review the concept of support vector machines (SVMs) and discuss examples of their use. One of the benefits of SVM algorithms, compared with neural networks and decision trees, is that they can be less susceptible to overfitting than those other algorithms are to overtraining. This issue is related to the generalisation of a multivariate algorithm (MVA), a problem that has often been overlooked in particle physics. We discuss cross validation and how this can be used to improve the generalisation of an MVA in the context of High Energy Physics analyses. The examples presented use the Toolkit for Multivariate Analysis (TMVA) based on ROOT and describe our improvements to the SVM functionality and new tools introduced for cross validation within this framework.
Tasks
Published 2016-10-19
URL http://arxiv.org/abs/1610.09932v1
PDF http://arxiv.org/pdf/1610.09932v1.pdf
PWC https://paperswithcode.com/paper/support-vector-machines-and-generalisation-in-1
Repo
Framework
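
The cross-validation workflow discussed above can be illustrated outside TMVA; the sketch below uses scikit-learn on synthetic data as a stand-in, scoring RBF-SVM hyperparameters on held-out folds so that the chosen settings are not tuned to a single split.

```python
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import make_classification

# Toy two-class "signal vs background" sample standing in for HEP features.
X, y = make_classification(n_samples=600, n_features=8, n_informative=5, random_state=0)

# 5-fold cross-validation over the RBF-SVM hyperparameters: each (C, gamma)
# pair is scored on held-out folds, guarding against tuning to one split.
search = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```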

Adapting Models to Signal Degradation using Distillation

Title Adapting Models to Signal Degradation using Distillation
Authors Jong-Chyi Su, Subhransu Maji
Abstract Model compression and knowledge distillation have been successfully applied for cross-architecture and cross-domain transfer learning. However, a key requirement is that training examples are in correspondence across the domains. We show that in many scenarios of practical importance such aligned data can be synthetically generated using computer graphics pipelines, allowing domain adaptation through distillation. We apply this technique to learn models for recognizing low-resolution images using labeled high-resolution images, non-localized objects using labeled localized objects, line drawings using labeled color images, etc. Experiments on various fine-grained recognition datasets demonstrate that the technique improves recognition performance on the low-quality data and beats strong baselines for domain adaptation. Finally, we present insights into the workings of the technique through visualizations and by relating it to the existing literature.
Tasks Domain Adaptation, Model Compression, Transfer Learning
Published 2016-04-01
URL http://arxiv.org/abs/1604.00433v2
PDF http://arxiv.org/pdf/1604.00433v2.pdf
PWC https://paperswithcode.com/paper/adapting-models-to-signal-degradation-using
Repo
Framework
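
A generic sketch of the kind of cross-quality distillation objective described above is shown below (PyTorch; the temperature and weighting are illustrative, not the paper's settings): the student sees the degraded input and matches the teacher's softened predictions on the corresponding clean input while also fitting the ground-truth labels.

```python
import torch
import torch.nn.functional as F

def cross_quality_distillation_loss(student_logits, teacher_logits, labels,
                                    T=4.0, alpha=0.5):
    """Distillation loss for paired degraded/clean inputs (generic sketch).

    student_logits: student outputs on the degraded input (e.g. low resolution).
    teacher_logits: teacher outputs on the corresponding clean input.
    Both are (batch, num_classes); labels are class indices.
    """
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                   # soften and match the teacher
    hard = F.cross_entropy(student_logits, labels)  # still fit the true labels
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage with random logits for a batch of 4 examples over 10 classes.
print(cross_quality_distillation_loss(torch.randn(4, 10), torch.randn(4, 10),
                                      torch.tensor([1, 3, 0, 7])))
```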

Pointing the Unknown Words

Title Pointing the Unknown Words
Authors Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, Yoshua Bengio
Abstract The problem of rare and unknown words is an important issue that can potentially influence the performance of many NLP systems, including both the traditional count-based and the deep learning models. We propose a novel way to deal with rare and unseen words in neural network models using attention. Our model uses two softmax layers to predict the next word in conditional language models: one predicts the location of a word in the source sentence, and the other predicts a word in the shortlist vocabulary. At each time-step, the decision of which softmax layer to use is made adaptively by an MLP conditioned on the context. We motivate our work with psychological evidence that humans naturally tend to point towards objects in the context or the environment when the name of an object is not known. We observe improvements with our proposed model on two tasks: neural machine translation on the Europarl English-to-French parallel corpora, and text summarization on the Gigaword dataset.
Tasks Machine Translation, Text Summarization
Published 2016-03-26
URL http://arxiv.org/abs/1603.08148v3
PDF http://arxiv.org/pdf/1603.08148v3.pdf
PWC https://paperswithcode.com/paper/pointing-the-unknown-words
Repo
Framework
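
The sketch below (PyTorch, with illustrative dimensions) shows the general shape of such a two-softmax head: a shortlist softmax, a pointer distribution over source positions, and a learned switch that arbitrates between them. The actual model conditions these components on richer decoder and attention context.

```python
import torch
import torch.nn as nn

class PointerSoftmax(nn.Module):
    """Minimal two-softmax decoder head (illustrative, not the paper's model)."""

    def __init__(self, hidden_dim, vocab_size):
        super().__init__()
        self.vocab_proj = nn.Linear(hidden_dim, vocab_size)
        self.switch = nn.Sequential(nn.Linear(hidden_dim, 1), nn.Sigmoid())

    def forward(self, dec_state, enc_states):
        # dec_state: (batch, hidden); enc_states: (batch, src_len, hidden)
        vocab_dist = torch.softmax(self.vocab_proj(dec_state), dim=-1)
        scores = torch.bmm(enc_states, dec_state.unsqueeze(-1)).squeeze(-1)
        pointer_dist = torch.softmax(scores, dim=-1)   # over source positions
        p_shortlist = self.switch(dec_state)           # probability of shortlist softmax
        return p_shortlist * vocab_dist, (1 - p_shortlist) * pointer_dist

head = PointerSoftmax(hidden_dim=64, vocab_size=1000)
v, p = head(torch.randn(2, 64), torch.randn(2, 7, 64))
print(v.shape, p.shape)   # torch.Size([2, 1000]) torch.Size([2, 7])
```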

Social Scene Understanding: End-to-End Multi-Person Action Localization and Collective Activity Recognition

Title Social Scene Understanding: End-to-End Multi-Person Action Localization and Collective Activity Recognition
Authors Timur Bagautdinov, Alexandre Alahi, François Fleuret, Pascal Fua, Silvio Savarese
Abstract We present a unified framework for understanding human social behaviors in raw image sequences. Our model jointly detects multiple individuals, infers their social actions, and estimates the collective actions with a single feed-forward pass through a neural network. We propose a single architecture that does not rely on external detection algorithms but rather is trained end-to-end to generate dense proposal maps that are refined via a novel inference scheme. The temporal consistency is handled via a person-level matching Recurrent Neural Network. The complete model takes as input a sequence of frames and outputs detections along with the estimates of individual actions and collective activities. We demonstrate state-of-the-art performance of our algorithm on multiple publicly available benchmarks.
Tasks Action Localization, Activity Recognition, Scene Understanding
Published 2016-11-28
URL http://arxiv.org/abs/1611.09078v1
PDF http://arxiv.org/pdf/1611.09078v1.pdf
PWC https://paperswithcode.com/paper/social-scene-understanding-end-to-end-multi
Repo
Framework

Dictionary Integration using 3D Morphable Face Models for Pose-invariant Collaborative-representation-based Classification

Title Dictionary Integration using 3D Morphable Face Models for Pose-invariant Collaborative-representation-based Classification
Authors Xiaoning Song, Zhen-Hua Feng, Guosheng Hu, Josef Kittler, William Christmas, Xiao-Jun Wu
Abstract The paper presents a dictionary integration algorithm using 3D morphable face models (3DMM) for pose-invariant collaborative-representation-based face classification. To this end, we first fit a 3DMM to the 2D face images of a dictionary to reconstruct the 3D shape and texture of each image. The 3D faces are used to render a number of virtual 2D face images with arbitrary pose variations to augment the training data, by merging the original and rendered virtual samples to create an extended dictionary. Second, to reduce the information redundancy of the extended dictionary and improve the sparsity of reconstruction coefficient vectors using collaborative-representation-based classification (CRC), we exploit an on-line elimination scheme to optimise the extended dictionary by identifying the most representative training samples for a given query. The final goal is to perform pose-invariant face classification using the proposed dictionary integration method and the on-line pruning strategy under the CRC framework. Experimental results obtained for a set of well-known face datasets demonstrate the merits of the proposed method, especially its robustness to pose variations.
Tasks
Published 2016-11-01
URL http://arxiv.org/abs/1611.00284v3
PDF http://arxiv.org/pdf/1611.00284v3.pdf
PWC https://paperswithcode.com/paper/dictionary-integration-using-3d-morphable
Repo
Framework
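
The CRC step the abstract builds on can be sketched in a few lines: code the query over the whole (possibly extended) dictionary with a ridge penalty, then assign it to the class with the smallest reconstruction residual. The example below is a generic illustration, not the paper's optimised pipeline with 3DMM rendering and on-line dictionary pruning.

```python
import numpy as np

def crc_classify(D, labels, y, lam=0.01):
    """Collaborative-representation-based classification (generic sketch).

    D: (d, n) dictionary whose columns are training samples.
    labels: (n,) class ids for the columns of D.
    y: (d,) query, coded over the whole dictionary with a ridge penalty and
       assigned to the class with the smallest reconstruction residual.
    """
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
    best, best_res = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        res = np.linalg.norm(y - D[:, mask] @ alpha[mask])
        if res < best_res:
            best, best_res = c, res
    return best

rng = np.random.default_rng(0)
D = rng.normal(size=(50, 40))               # 40 training samples, 50-dim features
labels = np.repeat(np.arange(4), 10)        # 4 identities, 10 samples each
query = D[:, 3] + 0.05 * rng.normal(size=50)
print(crc_classify(D, labels, query))       # expected: 0 (noisy copy of a class-0 sample)
```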

An expressive dissimilarity measure for relational clustering using neighbourhood trees

Title An expressive dissimilarity measure for relational clustering using neighbourhood trees
Authors Sebastijan Dumancic, Hendrik Blockeel
Abstract Clustering is an underspecified task: there are no universal criteria for what makes a good clustering. This is especially true for relational data, where similarity can be based on the features of individuals, the relationships between them, or a mix of both. Existing methods for relational clustering have strong and often implicit biases in this respect. In this paper, we introduce a novel similarity measure for relational data. It is the first measure to incorporate a wide variety of types of similarity, including similarity of attributes, similarity of relational context, and proximity in a hypergraph. We experimentally evaluate how using this similarity affects the quality of clustering on very different types of datasets. The experiments demonstrate that (a) using this similarity in standard clustering methods consistently gives good results, whereas other measures work well only on datasets that match their bias; and (b) on most datasets, the novel similarity outperforms even the best among the existing ones.
Tasks
Published 2016-04-29
URL http://arxiv.org/abs/1604.08934v2
PDF http://arxiv.org/pdf/1604.08934v2.pdf
PWC https://paperswithcode.com/paper/an-expressive-dissimilarity-measure-for
Repo
Framework
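
As a toy illustration of mixing attribute similarity with relational context (a much-simplified stand-in for the neighbourhood-tree measure), the sketch below combines a feature-space distance with a Jaccard distance over neighbour sets.

```python
import numpy as np

def relational_dissimilarity(a, b, attrs, neighbours, w_attr=0.5, w_rel=0.5):
    """Toy dissimilarity mixing attribute and relational-context distance.

    attrs: dict node -> feature vector; neighbours: dict node -> set of nodes.
    The weights and the Jaccard term are illustrative simplifications.
    """
    d_attr = np.linalg.norm(attrs[a] - attrs[b])
    na, nb = neighbours[a], neighbours[b]
    union = na | nb
    d_rel = (1.0 - len(na & nb) / len(union)) if union else 0.0  # Jaccard distance
    return w_attr * d_attr + w_rel * d_rel

attrs = {"x": np.array([1.0, 0.0]), "y": np.array([0.9, 0.1]), "z": np.array([0.0, 1.0])}
neigh = {"x": {"z"}, "y": {"z"}, "z": {"x", "y"}}
print(relational_dissimilarity("x", "y", attrs, neigh))
```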

Supervised Classification of RADARSAT-2 Polarimetric Data for Different Land Features

Title Supervised Classification of RADARSAT-2 Polarimetric Data for Different Land Features
Authors Abhishek Maity
Abstract The percentage of pixels belonging to a user-defined area that are assigned to each cluster in a confusion matrix has been analysed for the classification of RADARSAT-2 data over the Vancouver area. In this study, supervised Wishart and Support Vector Machine (SVM) classifiers are computed and compared on RADARSAT-2 (RS2) fine quad-pol mode Single Look Complex (SLC) product data. In comparison with conventional single-channel or dual-channel polarization, RADARSAT-2 is fully polarimetric, allowing it to offer better land-feature contrast for classification.
Tasks
Published 2016-08-01
URL http://arxiv.org/abs/1608.00501v1
PDF http://arxiv.org/pdf/1608.00501v1.pdf
PWC https://paperswithcode.com/paper/supervised-classification-of-radarsat-2
Repo
Framework

Object Recognition Based on Amounts of Unlabeled Data

Title Object Recognition Based on Amounts of Unlabeled Data
Authors Fuqiang Liu, Fukun Bi, Liang Chen
Abstract This paper proposes a novel semi-supervised method for object recognition. First, based on Boost Picking, a universal algorithm, Boost Picking Teaching (BPT), is proposed to train an effective binary classifier using only a few labeled examples and large amounts of unlabeled data. Then, an ensemble strategy is detailed to synthesize multiple BPT-trained binary classifiers into a high-performance multi-classifier. The rationality of the strategy is also analyzed in theory. Finally, the proposed method is tested on two databases, CIFAR-10 and CIFAR-100. Using 2% labeled data and 98% unlabeled data, the accuracies of the proposed method on the two datasets are 78.39% and 50.77% respectively.
Tasks Object Recognition
Published 2016-03-25
URL http://arxiv.org/abs/1603.07957v1
PDF http://arxiv.org/pdf/1603.07957v1.pdf
PWC https://paperswithcode.com/paper/object-recognition-based-on-amounts-of
Repo
Framework
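
The overall recipe, a few labels plus many unlabeled points used to train binary learners that are then combined into a multi-class ensemble, can be illustrated with generic self-training in place of Boost Picking Teaching; the scikit-learn sketch below is a stand-in, not the paper's algorithm.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

# Three-class toy data; only ~5% of the points keep their labels.
X, y_true = make_classification(n_samples=900, n_features=10, n_informative=6,
                                n_classes=3, n_clusters_per_class=1, random_state=0)
rng = np.random.default_rng(0)
unlabeled = rng.random(len(y_true)) > 0.05

binary_scores = []
for c in range(3):                                            # one binary learner per class
    yc = np.where(unlabeled, -1, (y_true == c).astype(int))   # -1 marks unlabeled points
    clf = SelfTrainingClassifier(SVC(probability=True), threshold=0.8)
    clf.fit(X, yc)
    binary_scores.append(clf.predict_proba(X)[:, 1])          # score for "is class c"

pred = np.argmax(np.stack(binary_scores, axis=1), axis=1)     # ensemble by max score
print("accuracy on the training pool:", round(float((pred == y_true).mean()), 3))
```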

The Phylogenetic LASSO and the Microbiome

Title The Phylogenetic LASSO and the Microbiome
Authors Stephen T Rush, Christine H Lee, Washington Mio, Peter T Kim
Abstract Scientific investigations that incorporate next generation sequencing involve analyses of high-dimensional data where the need to organize, collate and interpret the outcomes is pressingly important. Currently, data can be collected at the microbiome level, leading to the possibility of personalized medicine whereby treatments can be tailored at this scale. In this paper, we lay down a statistical framework for this type of analysis with a view toward synthesis of products tailored to individual patients. Although the paper applies the technique to data for a particular infectious disease, the methodology is sufficiently rich to be expanded to other problems in medicine, especially those in which coincident '-omics' covariates and clinical responses are simultaneously captured.
Tasks
Published 2016-07-29
URL http://arxiv.org/abs/1607.08877v1
PDF http://arxiv.org/pdf/1607.08877v1.pdf
PWC https://paperswithcode.com/paper/the-phylogenetic-lasso-and-the-microbiome
Repo
Framework

Syntactic and semantic classification of verb arguments using dependency-based and rich semantic features

Title Syntactic and semantic classification of verb arguments using dependency-based and rich semantic features
Authors Francesco Elia
Abstract Corpus Pattern Analysis (CPA) has been the topic of SemEval 2015 Task 15, aimed at producing a system that can aid lexicographers in their efforts to build a dictionary of meanings for English verbs using the CPA annotation process. CPA parsing is one of the subtasks of which this annotation process is made, and it is the focus of this report. A supervised machine-learning approach has been implemented, in which syntactic features derived from parse trees and semantic features derived from WordNet and word embeddings are used. It is shown that this approach performs well, even with the data sparsity issues that characterize the dataset, and can obtain better results than other systems by a margin of about 4% F-score.
Tasks Word Embeddings
Published 2016-04-19
URL http://arxiv.org/abs/1604.05747v1
PDF http://arxiv.org/pdf/1604.05747v1.pdf
PWC https://paperswithcode.com/paper/syntactic-and-semantic-classification-of-verb
Repo
Framework
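
A minimal sketch of the feature-combination setup: hand-written (hypothetical) syntactic and semantic feature dictionaries are vectorized and fed to a linear classifier. It only illustrates the general recipe, not the report's actual feature extraction from parse trees, WordNet, and embeddings.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical feature dicts mixing syntactic (dependency relation, POS) and
# semantic (WordNet supersense, embedding-cluster id) cues for verb arguments.
train_feats = [
    {"deprel": "nsubj", "pos": "NN",  "supersense": "noun.person",   "emb_cluster": 12},
    {"deprel": "dobj",  "pos": "NN",  "supersense": "noun.artifact", "emb_cluster": 3},
    {"deprel": "nsubj", "pos": "NNP", "supersense": "noun.person",   "emb_cluster": 12},
    {"deprel": "dobj",  "pos": "NNS", "supersense": "noun.food",     "emb_cluster": 7},
]
train_labels = ["Agent", "Patient", "Agent", "Patient"]

clf = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(train_feats, train_labels)
print(clf.predict([{"deprel": "nsubj", "pos": "NN",
                    "supersense": "noun.person", "emb_cluster": 12}]))  # -> ['Agent']
```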