May 5, 2019

2976 words 14 mins read

Paper Group ANR 444


A Review of Multivariate Distributions for Count Data Derived from the Poisson Distribution

Title A Review of Multivariate Distributions for Count Data Derived from the Poisson Distribution
Authors David I. Inouye, Eunho Yang, Genevera I. Allen, Pradeep Ravikumar
Abstract The Poisson distribution has been widely studied and used for modeling univariate count-valued data. Multivariate generalizations of the Poisson distribution that permit dependencies, however, have been far less popular. Yet, real-world high-dimensional count-valued data found in word counts, genomics, and crime statistics, for example, exhibit rich dependencies, and motivate the need for multivariate distributions that can appropriately model this data. We review multivariate distributions derived from the univariate Poisson, categorizing these models into three main classes: 1) where the marginal distributions are Poisson, 2) where the joint distribution is a mixture of independent multivariate Poisson distributions, and 3) where the node-conditional distributions are derived from the Poisson. We discuss the development of multiple instances of these classes and compare the models in terms of interpretability and theory. Then, we empirically compare multiple models from each class on three real-world datasets that have varying data characteristics from different domains, namely traffic accident data, biological next generation sequencing data, and text data. These empirical experiments develop intuition about the comparative advantages and disadvantages of each class of multivariate distribution that was derived from the Poisson. Finally, we suggest new research directions as explored in the subsequent discussion section.
Tasks
Published 2016-08-31
URL http://arxiv.org/abs/1609.00066v2
PDF http://arxiv.org/pdf/1609.00066v2.pdf
PWC https://paperswithcode.com/paper/a-review-of-multivariate-distributions-for
Repo
Framework
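
As a concrete illustration of the first class (Poisson marginals), the sketch below samples from a common-shock bivariate Poisson in which a shared latent count induces positive dependence; the rates are made-up example values, not taken from the paper.

```python
# Hypothetical illustration of the "Poisson marginals" class: a common-shock
# bivariate Poisson. The rates lam1, lam2, lam0 are invented example values.
import numpy as np

rng = np.random.default_rng(0)
lam1, lam2, lam0 = 3.0, 5.0, 2.0           # independent and shared components
n = 100_000

y1 = rng.poisson(lam1, n)
y2 = rng.poisson(lam2, n)
y0 = rng.poisson(lam0, n)                   # shared "common shock"

x1, x2 = y1 + y0, y2 + y0                   # marginals are Poisson(lam1+lam0) and Poisson(lam2+lam0)
print("means:", x1.mean(), x2.mean())       # roughly 5.0 and 7.0
print("corr :", np.corrcoef(x1, x2)[0, 1])  # positive dependence induced by y0
```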

DeepStance at SemEval-2016 Task 6: Detecting Stance in Tweets Using Character and Word-Level CNNs

Title DeepStance at SemEval-2016 Task 6: Detecting Stance in Tweets Using Character and Word-Level CNNs
Authors Prashanth Vijayaraghavan, Ivan Sysoev, Soroush Vosoughi, Deb Roy
Abstract This paper describes our approach for the Detecting Stance in Tweets task (SemEval-2016 Task 6). We utilized recent advances in short text categorization using deep learning to create word-level and character-level models. The choice between word-level and character-level models in each particular case was informed through validation performance. Our final system is a combination of classifiers using word-level or character-level models. We also employed novel data augmentation techniques to expand and diversify our training dataset, thus making our system more robust. Our system achieved a macro-average precision, recall and F1-scores of 0.67, 0.61 and 0.635 respectively.
Tasks Data Augmentation, Text Categorization
Published 2016-06-17
URL http://arxiv.org/abs/1606.05694v1
PDF http://arxiv.org/pdf/1606.05694v1.pdf
PWC https://paperswithcode.com/paper/deepstance-at-semeval-2016-task-6-detecting
Repo
Framework
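
The character-level side of such a system can be sketched as a small convolutional classifier over character indices. The PyTorch module below is an illustrative stand-in with made-up sizes, not the authors' exact architecture.

```python
# A minimal character-level CNN text classifier, sketching the kind of model
# the paper combines with word-level CNNs; all sizes are illustrative.
import torch
import torch.nn as nn

class CharCNN(nn.Module):
    def __init__(self, vocab_size=70, embed_dim=32, n_filters=64, n_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, n_filters, kernel_size=5, padding=2)
        self.fc = nn.Linear(n_filters, n_classes)    # e.g. FAVOR / AGAINST / NONE

    def forward(self, char_ids):                     # (batch, seq_len) of character ids
        x = self.embed(char_ids).transpose(1, 2)     # (batch, embed_dim, seq_len)
        x = torch.relu(self.conv(x))
        x = x.max(dim=2).values                      # global max pooling over time
        return self.fc(x)

logits = CharCNN()(torch.randint(0, 70, (8, 140)))   # 8 tweets of 140 characters
print(logits.shape)                                   # torch.Size([8, 3])
```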

Less is More: Learning Prominent and Diverse Topics for Data Summarization

Title Less is More: Learning Prominent and Diverse Topics for Data Summarization
Authors Jian Tang, Cheng Li, Ming Zhang, Qiaozhu Mei
Abstract Statistical topic models efficiently facilitate the exploration of large-scale data sets. Many models have been developed and broadly used to summarize the semantic structure in news, science, social media, and digital humanities. However, a common and practical objective in data exploration tasks is not to enumerate all existing topics, but to quickly extract representative ones that broadly cover the content of the corpus, i.e., a few topics that serve as a good summary of the data. Most existing topic models fit exactly the number of topics a user specifies, which imposes an unnecessary burden on users who have limited prior knowledge. We instead propose new models that are able to learn fewer but more representative topics for the purpose of data summarization. We propose a reinforced random walk that allows prominent topics to absorb tokens from similar and smaller topics, thus enhancing the diversity among the top topics extracted. With this reinforced random walk embedded as a general process in classical topic models, we obtain diverse topic models that are able to extract the most prominent and diverse topics from data. The inference procedures of these diverse topic models remain as simple and efficient as those of the classical models. Experimental results demonstrate that the diverse topic models not only discover topics that better summarize the data, but also require minimal prior knowledge from the users.
Tasks Data Summarization, Topic Models
Published 2016-11-29
URL http://arxiv.org/abs/1611.09921v2
PDF http://arxiv.org/pdf/1611.09921v2.pdf
PWC https://paperswithcode.com/paper/less-is-more-learning-prominent-and-diverse
Repo
Framework
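
The reinforced random walk has a rich-get-richer flavor: topics that already hold more tokens attract more of them. The toy below is a Polya-urn style illustration of that effect only, not the paper's actual inference procedure.

```python
# Toy rich-get-richer process (not the paper's exact reinforced random walk):
# topics with more accumulated mass attract proportionally more tokens, so a
# few prominent topics end up absorbing most of the data.
import numpy as np

rng = np.random.default_rng(1)
K, n_tokens, alpha = 10, 5000, 1.0
mass = np.ones(K) * alpha                   # initial topic mass

for _ in range(n_tokens):
    k = rng.choice(K, p=mass / mass.sum())  # choice probability reinforced by past mass
    mass[k] += 1.0

print(np.sort(mass)[::-1].round())          # a handful of topics dominate
```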

The constrained Dantzig selector with enhanced consistency

Title The constrained Dantzig selector with enhanced consistency
Authors Yinfei Kong, Zemin Zheng, Jinchi Lv
Abstract The Dantzig selector has received popularity for many applications such as compressed sensing and sparse modeling, thanks to its computational efficiency as a linear programming problem and its nice sampling properties. Existing results show that it can recover sparse signals mimicking the accuracy of the ideal procedure, up to a logarithmic factor of the dimensionality. Such a factor has been shown to hold for many regularization methods. An important question is whether this factor can be reduced to a logarithmic factor of the sample size in ultra-high dimensions under mild regularity conditions. To provide an affirmative answer, in this paper we suggest the constrained Dantzig selector, which has more flexible constraints and parameter space. We prove that the suggested method can achieve convergence rates within a logarithmic factor of the sample size of the oracle rates and improved sparsity, under a fairly weak assumption on the signal strength. Such improvement is significant in ultra-high dimensions. This method can be implemented efficiently through sequential linear programming. Numerical studies confirm that the sample size needed for a certain level of accuracy in these problems can be much reduced.
Tasks
Published 2016-05-11
URL http://arxiv.org/abs/1605.03311v1
PDF http://arxiv.org/pdf/1605.03311v1.pdf
PWC https://paperswithcode.com/paper/the-constrained-dantzig-selector-with
Repo
Framework
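
For reference, the plain Dantzig selector is the linear program min ||beta||_1 subject to ||X^T(y - X beta)||_inf <= lambda; the sketch below solves it with SciPy's linprog on synthetic data. The constrained variant's additional constraints and the sequential linear programming scheme are not reproduced here.

```python
# Plain Dantzig selector as an LP, to show the structure the constrained
# variant builds on. Data and lambda are synthetic example values.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, p, lam = 50, 20, 1.0
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.standard_normal(n)

G = X.T @ X
c = np.ones(2 * p)                        # beta = u - v, minimize sum(u) + sum(v)
A = np.vstack([np.hstack([ G, -G]),       #  X^T X beta <=  X^T y + lam
               np.hstack([-G,  G])])      # -X^T X beta <= -X^T y + lam
b = np.concatenate([X.T @ y + lam, lam - X.T @ y])
res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (2 * p))
beta_hat = res.x[:p] - res.x[p:]
print(np.round(beta_hat, 2))              # roughly sparse, large entries near beta_true
```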

Modeling Human Reading with Neural Attention

Title Modeling Human Reading with Neural Attention
Authors Michael Hahn, Frank Keller
Abstract When humans read text, they fixate some words and skip others. However, there have been few attempts to explain skipping behavior with computational models, as most existing work has focused on predicting reading times (e.g., using surprisal). In this paper, we propose a novel approach that models both skipping and reading, using an unsupervised architecture that combines neural attention with autoencoding, trained on raw text using reinforcement learning. Our model explains human reading behavior as a tradeoff between precision of language understanding (encoding the input accurately) and economy of attention (fixating as few words as possible). We evaluate the model on the Dundee eye-tracking corpus, showing that it accurately predicts skipping behavior and reading times, is competitive with surprisal, and captures known qualitative features of human reading.
Tasks Eye Tracking
Published 2016-08-19
URL http://arxiv.org/abs/1608.05604v2
PDF http://arxiv.org/pdf/1608.05604v2.pdf
PWC https://paperswithcode.com/paper/modeling-human-reading-with-neural-attention
Repo
Framework
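
The precision/economy tradeoff can be caricatured as scoring a fixation pattern by expected comprehension minus a per-fixation cost. The toy below only conveys that objective; the word predictabilities and the cost are invented, and the paper's neural architecture and reinforcement learning are not modeled.

```python
# Stylized fixate-or-skip tradeoff: skipping a word saves attention but risks
# misreading it. All numbers here are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
words = 20
p_correct_if_skipped = rng.uniform(0.5, 1.0, words)  # toy per-word predictability
fixation_cost = 0.1

def expected_score(fixate):               # fixate: boolean mask over words
    acc = np.where(fixate, 1.0, p_correct_if_skipped).mean()
    return acc - fixation_cost * fixate.mean()

fixate_all = np.ones(words, dtype=bool)
skip_predictable = p_correct_if_skipped < 0.9         # fixate only unpredictable words
print(expected_score(fixate_all), expected_score(skip_predictable))
```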

Co-localization with Category-Consistent Features and Geodesic Distance Propagation

Title Co-localization with Category-Consistent Features and Geodesic Distance Propagation
Authors Hieu Le, Chen-Ping Yu, Gregory Zelinsky, Dimitris Samaras
Abstract Co-localization is the problem of localizing objects of the same class using only the set of images that contain them. This is a challenging task because the object detector must be built without negative examples, which would otherwise provide more informative supervision signals. The main idea of our method is to cluster the feature space of a generically pre-trained CNN to find a set of CNN features that are consistently and highly activated for an object category, which we call category-consistent CNN features. Then, we propagate their combined activation map using superpixel geodesic distances for co-localization. In our first set of experiments, we show that the proposed method achieves state-of-the-art performance on three related benchmarks: PASCAL 2007, PASCAL 2012, and the Object Discovery dataset. We also show that our method is able to detect and localize truly unseen categories, on six held-out ImageNet categories, with accuracy that is significantly higher than the previous state-of-the-art. Our intuitive approach achieves this success without any region proposals or object detectors and can be based on a CNN that was pre-trained purely on image classification tasks without further fine-tuning.
Tasks Image Classification
Published 2016-12-10
URL https://arxiv.org/abs/1612.03236v3
PDF https://arxiv.org/pdf/1612.03236v3.pdf
PWC https://paperswithcode.com/paper/co-localization-with-category-consistent-cnn
Repo
Framework
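
A rough sketch of the feature-selection step: cluster CNN channels by their activation pattern across images of one category and keep the most strongly and consistently active cluster. The activations below are synthetic and the clustering details are assumptions, not the paper's exact procedure.

```python
# Toy selection of "category-consistent" CNN channels via clustering of
# per-image pooled activations; the data here is simulated.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_images, n_channels = 40, 512
acts = rng.gamma(1.0, 1.0, (n_images, n_channels))    # fake pooled activations
acts[:, :30] += 3.0                                    # 30 channels fire on this category

km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(acts.T)   # cluster channels
scores = [acts[:, km.labels_ == k].mean() for k in range(8)]
cc_channels = np.where(km.labels_ == int(np.argmax(scores)))[0]
print(len(cc_channels), cc_channels[:10])              # mostly the first 30 channels
```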

A characterization of product-form exchangeable feature probability functions

Title A characterization of product-form exchangeable feature probability functions
Authors Marco Battiston, Stefano Favaro, Daniel M. Roy, Yee Whye Teh
Abstract We characterize the class of exchangeable feature allocations assigning probability $V_{n,k}\prod_{l=1}^{k}W_{m_{l}}U_{n-m_{l}}$ to a feature allocation of $n$ individuals, displaying $k$ features with counts $(m_{1},\ldots,m_{k})$ for these features. Each element of this class is parametrized by a countable matrix $V$ and two sequences $U$ and $W$ of non-negative weights. Moreover, a consistency condition is imposed to guarantee that the distribution for feature allocations of $n-1$ individuals is recovered from that of $n$ individuals, when the last individual is integrated out. In Theorem 1.1, we prove that the only members of this class satisfying the consistency condition are mixtures of the Indian Buffet Process over its mass parameter $\gamma$ and mixtures of the Beta–Bernoulli model over its dimensionality parameter $N$. Hence, we provide a characterization of these two models as the only, up to randomization of the parameters, consistent exchangeable feature allocations having the required product form.
Tasks
Published 2016-07-07
URL http://arxiv.org/abs/1607.02066v1
PDF http://arxiv.org/pdf/1607.02066v1.pdf
PWC https://paperswithcode.com/paper/a-characterization-of-product-form
Repo
Framework
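
One of the two families singled out by Theorem 1.1 is the Indian Buffet Process; the standard sequential construction with mass parameter gamma is sketched below.

```python
# Standard sequential sampler for the Indian Buffet Process with mass
# parameter gamma: individual i reuses feature l with probability m_l / i and
# adds Poisson(gamma / i) brand-new features.
import numpy as np

def sample_ibp(n, gamma, rng):
    counts = []                                    # m_l: individuals holding feature l
    for i in range(1, n + 1):
        for l, m in enumerate(counts):
            if rng.random() < m / i:               # reuse existing feature l
                counts[l] += 1
        counts += [1] * rng.poisson(gamma / i)     # brand-new features for individual i
    return counts

print(sample_ibp(10, 2.0, np.random.default_rng(0)))
```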

Semantic Parsing to Probabilistic Programs for Situated Question Answering

Title Semantic Parsing to Probabilistic Programs for Situated Question Answering
Authors Jayant Krishnamurthy, Oyvind Tafjord, Aniruddha Kembhavi
Abstract Situated question answering is the problem of answering questions about an environment such as an image or diagram. This problem requires jointly interpreting a question and an environment using background knowledge to select the correct answer. We present Parsing to Probabilistic Programs (P3), a novel situated question answering model that can use background knowledge and global features of the question/environment interpretation while retaining efficient approximate inference. Our key insight is to treat semantic parses as probabilistic programs that execute nondeterministically and whose possible executions represent environmental uncertainty. We evaluate our approach on a new, publicly-released data set of 5000 science diagram questions, outperforming several competitive classical and neural baselines.
Tasks Question Answering, Semantic Parsing
Published 2016-06-22
URL http://arxiv.org/abs/1606.07046v2
PDF http://arxiv.org/pdf/1606.07046v2.pdf
PWC https://paperswithcode.com/paper/semantic-parsing-to-probabilistic-programs
Repo
Framework
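
The core idea, treating a parse as a program whose nondeterministic choices encode environmental uncertainty, can be illustrated by enumerating weighted executions of a toy program; the groundings and weights below are invented.

```python
# Toy "probabilistic program" for one question: uncertain groundings of diagram
# labels are nondeterministic choices, and marginalizing over executions gives
# a distribution over answers. Environment and weights are made up.
from itertools import product

groundings = {
    "label_A": [("sun", 0.7), ("lamp", 0.3)],
    "label_B": [("plant", 0.9), ("rock", 0.1)],
}

def execute(assignment):                        # the "program" compiled from a parse
    return "yes" if assignment["label_A"] == "sun" else "no"

answer_prob = {}
for choices in product(*groundings.values()):
    assignment = dict(zip(groundings, (name for name, _ in choices)))
    weight = 1.0
    for _, w in choices:
        weight *= w
    ans = execute(assignment)
    answer_prob[ans] = answer_prob.get(ans, 0.0) + weight

print(answer_prob)                              # {'yes': 0.7, 'no': 0.3}
```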

Interactive Semantic Featuring for Text Classification

Title Interactive Semantic Featuring for Text Classification
Authors Camille Jandot, Patrice Simard, Max Chickering, David Grangier, Jina Suh
Abstract In text classification, dictionaries can be used to define human-comprehensible features. We propose an improvement to dictionary features called smoothed dictionary features. These features recognize document contexts instead of n-grams. We describe a principled methodology to solicit dictionary features from a teacher, and present results showing that models built using these human-comprehensible features are competitive with models trained with Bag of Words features.
Tasks Text Classification
Published 2016-06-24
URL http://arxiv.org/abs/1606.07545v1
PDF http://arxiv.org/pdf/1606.07545v1.pdf
PWC https://paperswithcode.com/paper/interactive-semantic-featuring-for-text
Repo
Framework
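
A plain (unsmoothed) dictionary feature is simply the fraction of tokens that hit a human-provided dictionary, as sketched below; the paper's smoothed variant, which matches document contexts rather than exact n-grams, is not reproduced here.

```python
# Baseline dictionary feature: fraction of tokens appearing in a teacher-provided
# dictionary. The dictionary and document below are made-up examples.
def dictionary_feature(tokens, dictionary):
    hits = sum(t.lower() in dictionary for t in tokens)
    return hits / max(len(tokens), 1)

sports = {"game", "score", "team", "coach", "season"}
doc = "The team lost the game despite a strong season".split()
print(dictionary_feature(doc, sports))   # 3 hits out of 9 tokens = 0.333...
```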

Distributed Cooperative Decision-Making in Multiarmed Bandits: Frequentist and Bayesian Algorithms

Title Distributed Cooperative Decision-Making in Multiarmed Bandits: Frequentist and Bayesian Algorithms
Authors Peter Landgren, Vaibhav Srivastava, Naomi Ehrich Leonard
Abstract We study distributed cooperative decision-making under the explore-exploit tradeoff in the multiarmed bandit (MAB) problem. We extend the state-of-the-art frequentist and Bayesian algorithms for single-agent MAB problems to cooperative distributed algorithms for multi-agent MAB problems in which agents communicate according to a fixed network graph. We rely on a running consensus algorithm for each agent’s estimation of mean rewards from its own rewards and the estimated rewards of its neighbors. We prove the performance of these algorithms and show that they asymptotically recover the performance of a centralized agent. Further, we rigorously characterize the influence of the communication graph structure on the decision-making performance of the group.
Tasks Decision Making
Published 2016-06-02
URL https://arxiv.org/abs/1606.00911v3
PDF https://arxiv.org/pdf/1606.00911v3.pdf
PWC https://paperswithcode.com/paper/distributed-cooperative-decision-making-in
Repo
Framework
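
A hedged sketch of the overall scheme: each agent plays UCB on its own running estimates and then averages its reward sums and counts with its neighbors via a doubly stochastic weight matrix. The graph, weights, and constants below are illustrative, not the paper's exact algorithm.

```python
# Cooperative UCB with a running-consensus step on a 4-cycle graph; rewards are
# Bernoulli with invented means, and the exploration constant is generic.
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.2, 0.5, 0.8])                   # true arm means
A_, K, T = 4, len(means), 2000
W = np.array([[.5, .25, 0, .25], [.25, .5, .25, 0], # doubly stochastic weights
              [0, .25, .5, .25], [.25, 0, .25, .5]])
S = np.zeros((A_, K)); N = np.zeros((A_, K))        # running reward sums / counts

for t in range(1, T + 1):
    for a in range(A_):
        ucb = S[a] / np.maximum(N[a], 1) + np.sqrt(2 * np.log(t) / np.maximum(N[a], 1))
        k = int(np.argmax(np.where(N[a] > 0, ucb, np.inf)))   # untried arms first
        r = rng.random() < means[k]
        S[a, k] += r; N[a, k] += 1
    S, N = W @ S, W @ N                             # consensus step over the graph

print((S.sum(0) / N.sum(0)).round(2))               # estimates concentrate near `means`
```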

A Functional Regression approach to Facial Landmark Tracking

Title A Functional Regression approach to Facial Landmark Tracking
Authors Enrique Sánchez-Lozano, Georgios Tzimiropoulos, Brais Martinez, Fernando De la Torre, Michel Valstar
Abstract Linear regression is a fundamental building block in many face detection and tracking algorithms, typically used to predict shape displacements from image features through a linear mapping. This paper presents a Functional Regression solution to the least squares problem, which we coin Continuous Regression, resulting in the first real-time incremental face tracker. Contrary to prior work in Functional Regression, in which B-splines or Fourier series were used, we propose to approximate the input space by its first-order Taylor expansion, yielding a closed-form solution for the continuous domain of displacements. We then extend the continuous least squares problem to correlated variables, and demonstrate the generalisation of our approach. We incorporate Continuous Regression into the cascaded regression framework, and show its computational benefits for both training and testing. We then present a fast approach for incremental learning within Cascaded Continuous Regression, coined iCCR, and show that its complexity allows real-time face tracking, being 20 times faster than the state of the art. To the best of our knowledge, this is the first incremental face tracker that is shown to operate in real-time. We show that iCCR achieves state-of-the-art performance on the 300-VW dataset, the most recent, large-scale benchmark for face tracking.
Tasks Face Detection, Landmark Tracking
Published 2016-12-07
URL http://arxiv.org/abs/1612.02203v2
PDF http://arxiv.org/pdf/1612.02203v2.pdf
PWC https://paperswithcode.com/paper/a-functional-regression-approach-to-facial
Repo
Framework
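
The building block being generalized is a closed-form linear regressor from image features to shape displacements; the ridge least-squares stage below shows that block on synthetic data, while the paper's Taylor-expansion-based integration over the continuous displacement domain is not reproduced.

```python
# One closed-form regression stage of the kind cascaded (continuous) regression
# builds on: map features at perturbed shapes to displacements back to the
# ground truth. Dimensions and data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n, d_feat, d_shape, lam = 500, 128, 10, 1e-3
F = rng.standard_normal((n, d_feat))               # features at perturbed shapes
D = rng.standard_normal((n, d_shape))              # displacements to ground truth

R = np.linalg.solve(F.T @ F + lam * np.eye(d_feat), F.T @ D)   # ridge least squares
print(R.shape)                                     # (128, 10): feature -> displacement map
```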

Optimal Recommendation to Users that React: Online Learning for a Class of POMDPs

Title Optimal Recommendation to Users that React: Online Learning for a Class of POMDPs
Authors Rahul Meshram, Aditya Gopalan, D. Manjunath
Abstract We describe and study a model for an Automated Online Recommendation System (AORS) in which a user’s preferences can be time-dependent and can also depend on the history of past recommendations and play-outs. The three key features of the model that makes it more realistic compared to existing models for recommendation systems are (1) user preference is inherently latent, (2) current recommendations can affect future preferences, and (3) it allows for the development of learning algorithms with provable performance guarantees. The problem is cast as an average-cost restless multi-armed bandit for a given user, with an independent partially observable Markov decision process (POMDP) for each item of content. We analyze the POMDP for a single arm, describe its structural properties, and characterize its optimal policy. We then develop a Thompson sampling-based online reinforcement learning algorithm to learn the parameters of the model and optimize utility from the binary responses of the users to continuous recommendations. We then analyze the performance of the learning algorithm and characterize the regret. Illustrative numerical results and directions for extension to the restless hidden Markov multi-armed bandit problem are also presented.
Tasks Recommendation Systems
Published 2016-03-30
URL http://arxiv.org/abs/1603.09233v1
PDF http://arxiv.org/pdf/1603.09233v1.pdf
PWC https://paperswithcode.com/paper/optimal-recommendation-to-users-that-react
Repo
Framework
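
The learning component rests on posterior sampling from binary user feedback; the Thompson-sampling toy below shows that piece in isolation, without the restless-bandit/POMDP state dynamics that the paper actually models.

```python
# Thompson sampling on binary (like/dislike) feedback with Beta priors; the
# item probabilities are invented and assumed static, unlike the paper's model.
import numpy as np

rng = np.random.default_rng(0)
p_like = np.array([0.3, 0.6, 0.45])       # hidden like-probabilities per item
wins = np.ones(3); losses = np.ones(3)    # Beta(1, 1) priors

for _ in range(2000):
    k = int(np.argmax(rng.beta(wins, losses)))    # sample a belief, recommend best item
    liked = rng.random() < p_like[k]
    wins[k] += liked; losses[k] += 1 - liked

print((wins / (wins + losses)).round(2), wins + losses - 2)   # estimates, play counts
```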

Cascaded Continuous Regression for Real-time Incremental Face Tracking

Title Cascaded Continuous Regression for Real-time Incremental Face Tracking
Authors Enrique Sánchez-Lozano, Brais Martinez, Georgios Tzimiropoulos, Michel Valstar
Abstract This paper introduces a novel real-time algorithm for facial landmark tracking. Compared to detection, tracking has both additional challenges and opportunities. Arguably the most important aspect in this domain is updating a tracker’s models as tracking progresses, also known as incremental (face) tracking. While this should result in more accurate localisation, how to do this online and in real time without causing a tracker to drift is still an important open research question. We address this question in the cascaded regression framework, the state-of-the-art approach for facial landmark localisation. Because incremental learning for cascaded regression is costly, we propose a much more efficient yet equally accurate alternative using continuous regression. More specifically, we first propose cascaded continuous regression (CCR) and show its accuracy is equivalent to the Supervised Descent Method. We then derive the incremental learning updates for CCR (iCCR) and show that it is an order of magnitude faster than standard incremental learning for cascaded regression, bringing the time required for the update from seconds down to a fraction of a second, thus enabling real-time tracking. Finally, we evaluate iCCR and show the importance of incremental learning in achieving state-of-the-art performance. Code for our iCCR is available from http://www.cs.nott.ac.uk/~psxes1
Tasks Face Alignment, Landmark Tracking
Published 2016-08-03
URL http://arxiv.org/abs/1608.01137v2
PDF http://arxiv.org/pdf/1608.01137v2.pdf
PWC https://paperswithcode.com/paper/cascaded-continuous-regression-for-real-time
Repo
Framework
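
The incremental flavor can be sketched by maintaining the sufficient statistics of the least-squares problem and refreshing the regressor per tracked frame; this mirrors the spirit of iCCR rather than its exact update equations.

```python
# Incremental least squares: keep A = F^T F and b = F^T D and fold in each new
# frame, so the regressor can be refreshed online without revisiting old data.
import numpy as np

class IncrementalRegressor:
    def __init__(self, d_feat, d_shape, lam=1e-3):
        self.A = lam * np.eye(d_feat)          # running F^T F (ridge-initialized)
        self.b = np.zeros((d_feat, d_shape))   # running F^T D

    def update(self, f, d):                    # f: feature vector, d: shape displacement
        self.A += np.outer(f, f)
        self.b += np.outer(f, d)
        return np.linalg.solve(self.A, self.b) # refreshed regressor after this frame

rng = np.random.default_rng(0)
reg = IncrementalRegressor(64, 10)
R = reg.update(rng.standard_normal(64), rng.standard_normal(10))
print(R.shape)                                 # (64, 10)
```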

Anchoring and Agreement in Syntactic Annotations

Title Anchoring and Agreement in Syntactic Annotations
Authors Yevgeni Berzak, Yan Huang, Andrei Barbu, Anna Korhonen, Boris Katz
Abstract We present a study on two key characteristics of human syntactic annotations: anchoring and agreement. Anchoring is a well known cognitive bias in human decision making, where judgments are drawn towards pre-existing values. We study the influence of anchoring on a standard approach to creation of syntactic resources where syntactic annotations are obtained via human editing of tagger and parser output. Our experiments demonstrate a clear anchoring effect and reveal unwanted consequences, including overestimation of parsing performance and lower quality of annotations in comparison with human-based annotations. Using sentences from the Penn Treebank WSJ, we also report systematically obtained inter-annotator agreement estimates for English dependency parsing. Our agreement results control for parser bias, and are consequential in that they are on par with state of the art parsing performance for English newswire. We discuss the impact of our findings on strategies for future annotation efforts and parser evaluations.
Tasks Decision Making, Dependency Parsing
Published 2016-05-15
URL http://arxiv.org/abs/1605.04481v3
PDF http://arxiv.org/pdf/1605.04481v3.pdf
PWC https://paperswithcode.com/paper/anchoring-and-agreement-in-syntactic
Repo
Framework
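
The agreement side of the study comes down to statistics such as pairwise agreement on dependency heads; a minimal computation on made-up annotations is shown below.

```python
# Unlabeled attachment agreement between two dependency annotations of the same
# sentence; the head indices here are invented.
def head_agreement(heads_a, heads_b):
    assert len(heads_a) == len(heads_b)
    return sum(a == b for a, b in zip(heads_a, heads_b)) / len(heads_a)

ann1 = [2, 0, 2, 5, 3]    # head index per token (0 = root), annotator 1
ann2 = [2, 0, 2, 3, 3]    # annotator 2 disagrees on one attachment
print(head_agreement(ann1, ann2))   # 0.8
```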

Resource allocation using metaheuristic search

Title Resource allocation using metaheuristic search
Authors Andy M. Connor, Amit Shah
Abstract This research is focused on solving problems in the area of software project management using metaheuristic search algorithms and as such is research in the field of search-based software engineering. The main aim of this research is to evaluate the performance of different metaheuristic search techniques in resource allocation and scheduling problems that would be typical of software development projects. This paper reports a set of experiments which evaluate the performance of three algorithms, namely simulated annealing, tabu search and genetic algorithms. The experimental results indicate that all of the metaheuristic search techniques can be used to solve problems in resource allocation and scheduling within a software project. Finally, a comparative analysis suggests that overall the genetic algorithm performed better than simulated annealing and tabu search.
Tasks
Published 2016-05-06
URL http://arxiv.org/abs/1605.01855v1
PDF http://arxiv.org/pdf/1605.01855v1.pdf
PWC https://paperswithcode.com/paper/resource-allocation-using-metaheuristic
Repo
Framework
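
A compact simulated-annealing sketch for a task-to-resource assignment problem (minimizing the makespan) is shown below; the durations and cooling schedule are illustrative, and the tabu search and genetic algorithm variants are not shown.

```python
# Simulated annealing for assigning tasks to resources to minimize the makespan.
# Task durations, temperature, and cooling rate are made-up example values.
import random, math

random.seed(0)
durations = [4, 2, 7, 3, 5, 6, 1, 8]          # task durations
n_res = 3

def makespan(assign):
    load = [0.0] * n_res
    for task, res in enumerate(assign):
        load[res] += durations[task]
    return max(load)

cur = [random.randrange(n_res) for _ in durations]
best, temp = list(cur), 10.0
while temp > 0.01:
    cand = list(cur)
    cand[random.randrange(len(durations))] = random.randrange(n_res)   # move one task
    delta = makespan(cand) - makespan(cur)
    if delta <= 0 or random.random() < math.exp(-delta / temp):        # accept rule
        cur = cand
        if makespan(cur) < makespan(best):
            best = list(cur)
    temp *= 0.995                                                      # geometric cooling
print(best, makespan(best))
```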