May 6, 2019

3112 words 15 mins read

Paper Group ANR 397

Search Tracker: Human-derived object tracking in-the-wild through large-scale search and retrieval. Minimum Description Length Principle in Supervised Learning with Application to Lasso. Reinforcement Learning in Conflicting Environments for Autonomous Vehicles. Real-time Eye Gaze Direction Classification Using Convolutional Neural Network. Twitter …

Search Tracker: Human-derived object tracking in-the-wild through large-scale search and retrieval

Title Search Tracker: Human-derived object tracking in-the-wild through large-scale search and retrieval
Authors Archith J. Bency, S. Karthikeyan, Carter De Leo, Santhoshkumar Sunderrajan, B. S. Manjunath
Abstract Humans use context and scene knowledge to easily localize moving objects in conditions of complex illumination changes, scene clutter and occlusions. In this paper, we present a method to leverage human knowledge in the form of annotated video libraries in a novel search and retrieval based setting to track objects in unseen video sequences. For every video sequence, a document that represents motion information is generated. Documents of the unseen video are queried against the library at multiple scales to find videos with similar motion characteristics. This provides us with coarse localization of objects in the unseen video. We further adapt these retrieved object locations to the new video using an efficient warping scheme. The proposed method is validated on in-the-wild video surveillance datasets where we outperform state-of-the-art appearance-based trackers. We also introduce a new challenging dataset with complex object appearance changes.
Tasks Object Tracking
Published 2016-02-05
URL http://arxiv.org/abs/1602.01890v1
PDF http://arxiv.org/pdf/1602.01890v1.pdf
PWC https://paperswithcode.com/paper/search-tracker-human-derived-object-tracking
Repo
Framework
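
To make the retrieval idea concrete, here is a minimal Python sketch assuming a simple bag-of-motion-words histogram as the per-video "document"; the paper's actual document construction, multi-scale querying and warping scheme are not reproduced here.

```python
# Sketch of retrieval over "motion documents" (assumption: a plain
# bag-of-motion-words histogram; the paper's exact document format,
# multi-scale querying and warping step are not reproduced here).
import numpy as np

def motion_document(flow_angles, n_bins=16):
    """Quantize per-pixel optical-flow directions into a histogram 'document'."""
    hist, _ = np.histogram(flow_angles, bins=n_bins, range=(-np.pi, np.pi))
    hist = hist.astype(float)
    return hist / (hist.sum() + 1e-12)

def retrieve(query_doc, library_docs, k=5):
    """Return indices of the k library videos with the most similar motion."""
    L = np.vstack(library_docs)
    q = query_doc / (np.linalg.norm(query_doc) + 1e-12)
    Ln = L / (np.linalg.norm(L, axis=1, keepdims=True) + 1e-12)
    sims = Ln @ q                      # cosine similarity
    return np.argsort(-sims)[:k], sims

# Usage: build documents for the annotated library videos once, then query an
# unseen video's document and take coarse object locations from the
# annotations of the retrieved neighbours.
```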

Minimum Description Length Principle in Supervised Learning with Application to Lasso

Title Minimum Description Length Principle in Supervised Learning with Application to Lasso
Authors Masanori Kawakita, Jun’ichi Takeuchi
Abstract The minimum description length (MDL) principle in supervised learning is studied. One of the most important theories for the MDL principle is Barron and Cover’s theory (BC theory), which gives a mathematical justification of the MDL principle. The original BC theory, however, can be applied to supervised learning only approximately and in limited settings. Though Barron et al. recently succeeded in removing a similar approximation in the case of unsupervised learning, their idea cannot, in essence, be applied to supervised learning in general. To overcome this issue, an extension of BC theory to supervised learning is proposed. The derived risk bound has several advantages inherited from the original BC theory. First, the risk bound holds for finite sample size. Second, it requires remarkably few assumptions. Third, the risk bound takes the form of the redundancy of the two-stage code for the MDL procedure. Hence, the proposed extension gives a mathematical justification of the MDL principle for supervised learning, just as the original BC theory does. As an important example of application, new risk and (probabilistic) regret bounds for lasso with random design are derived. The derived risk bound holds for any finite sample size $n$ and feature number $p$, even if $n\ll p$, without boundedness of features, in contrast to past work. The behavior of the regret bound is investigated by numerical simulations. We believe that this is the first extension of BC theory to general supervised learning with random design without approximation.
Tasks
Published 2016-07-11
URL http://arxiv.org/abs/1607.02914v1
PDF http://arxiv.org/pdf/1607.02914v1.pdf
PWC https://paperswithcode.com/paper/minimum-description-length-principle-in
Repo
Framework
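
As a rough illustration of two-stage MDL model selection for lasso, the sketch below scores each regularization strength by a Gaussian, BIC-style proxy for L(model) + L(data | model); this proxy is an assumption for illustration, not the paper's exact code construction or risk bound.

```python
# Rough illustration of two-stage MDL model selection over a lasso path
# (assumption: a Gaussian, BIC-style proxy for the two-stage code length;
# NOT the paper's exact code construction or bound).
import numpy as np
from sklearn.linear_model import Lasso

def two_stage_codelength(X, y, lam):
    n = len(y)
    beta = Lasso(alpha=lam, max_iter=10000).fit(X, y).coef_
    rss = np.sum((y - X @ beta) ** 2)
    k = np.count_nonzero(beta)
    data_bits = 0.5 * n * np.log(rss / n + 1e-12)   # L(data | model), up to constants
    model_bits = 0.5 * k * np.log(n)                # L(model): crude parameter cost
    return data_bits + model_bits

# Pick the lambda whose two-stage description length is smallest.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 200))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(size=100)
lams = np.logspace(-3, 0, 20)
best = min(lams, key=lambda lam: two_stage_codelength(X, y, lam))
```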

Reinforcement Learning in Conflicting Environments for Autonomous Vehicles

Title Reinforcement Learning in Conflicting Environments for Autonomous Vehicles
Authors Dominik Meyer, Johannes Feldmaier, Hao Shen
Abstract In this work, we investigate the application of Reinforcement Learning to two well known decision dilemmas, namely Newcomb’s Problem and Prisoner’s Dilemma. These problems are exemplary for dilemmas that autonomous agents are faced with when interacting with humans. Furthermore, we argue that a Newcomb-like formulation is more adequate in the human-machine interaction case and demonstrate empirically that the unmodified Reinforcement Learning algorithms end up with the well known maximum expected utility solution.
Tasks Autonomous Vehicles
Published 2016-10-22
URL http://arxiv.org/abs/1610.07089v1
PDF http://arxiv.org/pdf/1610.07089v1.pdf
PWC https://paperswithcode.com/paper/reinforcement-learning-in-conflicting
Repo
Framework
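
A minimal sketch of applying unmodified reinforcement learning to the Prisoner's Dilemma treated as a one-state bandit; the payoff matrix and the fixed stochastic opponent are illustrative assumptions rather than the paper's exact setup. Defection dominates for any opponent mixing probability, so Q-learning settles on the maximum-expected-utility action.

```python
# Tabular Q-learning on the one-shot Prisoner's Dilemma as a bandit
# (payoffs and opponent model are illustrative assumptions).
import random

# Payoff to the learner for (my_action, opponent_action): 0 = cooperate, 1 = defect.
PAYOFF = {(0, 0): 3, (0, 1): 0, (1, 0): 5, (1, 1): 1}

def run(episodes=10000, alpha=0.1, eps=0.1, p_opp_coop=0.5):
    Q = [0.0, 0.0]                       # single state, two actions
    for _ in range(episodes):
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=Q.__getitem__)
        opp = 0 if random.random() < p_opp_coop else 1
        r = PAYOFF[(a, opp)]
        Q[a] += alpha * (r - Q[a])       # bandit update (no next state)
    return Q

Q = run()
# Q[1] > Q[0]: defection is the expected-utility-maximizing action.
print(Q)
```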

Real-time Eye Gaze Direction Classification Using Convolutional Neural Network

Title Real-time Eye Gaze Direction Classification Using Convolutional Neural Network
Authors Anjith George, Aurobinda Routray
Abstract Estimating eye gaze direction is useful in various human-computer interaction tasks. Knowledge of gaze direction can give valuable information regarding a user’s point of attention. Certain patterns of eye movements, known as eye accessing cues, are reported to be related to cognitive processes in the human brain. We propose a real-time framework for the classification of eye gaze direction and estimation of eye accessing cues. In the first stage, the algorithm detects faces using a modified version of the Viola-Jones algorithm. A rough eye region is obtained using geometric relations and facial landmarks. The eye region obtained is used in the subsequent stage to classify the eye gaze direction. A convolutional neural network is employed in this work for the classification of eye gaze direction. The proposed algorithm was tested on the Eye Chimera database and found to outperform state-of-the-art methods. The computational cost of the algorithm in the testing phase is very low. The algorithm achieved an average frame rate of 24 fps in a desktop environment.
Tasks
Published 2016-05-17
URL http://arxiv.org/abs/1605.05258v1
PDF http://arxiv.org/pdf/1605.05258v1.pdf
PWC https://paperswithcode.com/paper/real-time-eye-gaze-direction-classification
Repo
Framework
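
A schematic of the two-stage pipeline in Python: OpenCV's stock Haar cascade stands in for the paper's modified Viola-Jones detector, and the small Keras CNN and its gaze-direction classes are placeholders, not the published architecture.

```python
# Schematic of the pipeline: face detection -> geometric eye crop -> CNN classifier.
# The stock Haar cascade and the CNN below are stand-ins, not the paper's models.
import cv2
from tensorflow.keras import layers, models

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def eye_region(gray_frame):
    faces = face_cascade.detectMultiScale(gray_frame, 1.3, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    # Rough geometric prior: eyes lie in the upper part of the face box.
    return cv2.resize(gray_frame[y + h // 5: y + h // 2, x: x + w], (64, 32))

def build_classifier(n_directions=7):    # placeholder number of gaze classes
    return models.Sequential([
        layers.Input((32, 64, 1)),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(), layers.Dense(128, activation="relu"),
        layers.Dense(n_directions, activation="softmax"),
    ])

# At test time: crop = eye_region(gray); model.predict(crop[None, ..., None] / 255.0)
```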

Twitter Opinion Topic Model: Extracting Product Opinions from Tweets by Leveraging Hashtags and Sentiment Lexicon

Title Twitter Opinion Topic Model: Extracting Product Opinions from Tweets by Leveraging Hashtags and Sentiment Lexicon
Authors Kar Wai Lim, Wray Buntine
Abstract Aspect-based opinion mining is widely applied to review data to aggregate or summarize opinions of a product, and the current state-of-the-art is achieved with Latent Dirichlet Allocation (LDA)-based models. Although social media data like tweets are laden with opinions, their “dirty” nature (as natural language) has discouraged researchers from applying LDA-based opinion models to product review mining. Tweets are often informal, unstructured and lack labeled data such as categories and ratings, making product opinion mining challenging. In this paper, we propose an LDA-based opinion model named Twitter Opinion Topic Model (TOTM) for opinion mining and sentiment analysis. TOTM leverages hashtags, mentions, emoticons and strong sentiment words that are present in tweets in its discovery process. It improves opinion prediction by modeling the target-opinion interaction directly, thus discovering target-specific opinion words that are neglected in existing approaches. Moreover, we propose a new formulation for incorporating sentiment prior information into a topic model by utilizing an existing public sentiment lexicon; this formulation is novel in that it learns and updates with the data. We conduct experiments on 9 million tweets on electronic products, and demonstrate the improved performance of TOTM in both quantitative evaluations and qualitative analysis. We show that aspect-based opinion analysis on a massive volume of tweets provides useful opinions on products.
Tasks Opinion Mining, Sentiment Analysis
Published 2016-09-21
URL http://arxiv.org/abs/1609.06578v1
PDF http://arxiv.org/pdf/1609.06578v1.pdf
PWC https://paperswithcode.com/paper/twitter-opinion-topic-model-extracting
Repo
Framework

An efficient K-means algorithm for Massive Data

Title An efficient K-means algorithm for Massive Data
Authors Marco Capó, Aritz Pérez, José Antonio Lozano
Abstract Due to the progressive growth of the amount of data available in a wide variety of scientific fields, it has become more difficult to manipulate and analyze such information. Even though datasets have grown in size, the K-means algorithm remains one of the most popular clustering methods, in spite of its dependency on the initial settings and its high computational cost, especially in terms of distance computations. In this work, we propose an efficient approximation to the K-means problem intended for massive data. Our approach recursively partitions the entire dataset into a small number of subsets, each of which is characterized by its representative (center of mass) and weight (cardinality); afterwards, a weighted version of the K-means algorithm is applied over this local representation, which can drastically reduce the number of distances computed. In addition to some theoretical properties, experimental results indicate that our method outperforms well-known approaches, such as K-means++ and minibatch K-means, in terms of the relation between the number of distance computations and the quality of the approximation.
Tasks
Published 2016-05-10
URL http://arxiv.org/abs/1605.02989v1
PDF http://arxiv.org/pdf/1605.02989v1.pdf
PWC https://paperswithcode.com/paper/an-efficient-k-means-algorithm-for-massive
Repo
Framework
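
The coarse-then-refine idea can be sketched as follows. The kd-tree-style splitting rule used here is an assumption standing in for the paper's recursive partitioning scheme; the weighted K-means step over (representative, weight) pairs is the core of the approach.

```python
# Sketch: recursively summarize the data into weighted representatives,
# then run weighted Lloyd iterations over the summaries only.
import numpy as np

def partition(X, leaf_size=256):
    """Recursively split along the widest dimension; return (centers, weights)."""
    if len(X) <= leaf_size:
        return [X.mean(axis=0)], [len(X)]
    j = np.argmax(X.var(axis=0))
    med = np.median(X[:, j])
    left, right = X[X[:, j] <= med], X[X[:, j] > med]
    if len(left) == 0 or len(right) == 0:      # degenerate split
        return [X.mean(axis=0)], [len(X)]
    cl, wl = partition(left, leaf_size)
    cr, wr = partition(right, leaf_size)
    return cl + cr, wl + wr

def weighted_kmeans(R, w, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    C = R[rng.choice(len(R), k, replace=False)]
    for _ in range(iters):
        lab = np.argmin(((R[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            m = lab == j
            if m.any():
                C[j] = np.average(R[m], axis=0, weights=w[m])
    return C

X = np.random.default_rng(1).normal(size=(100000, 2))
reps, weights = partition(X)
centers = weighted_kmeans(np.array(reps), np.array(weights), k=10)
```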

A Hierarchical Model of Reviews for Aspect-based Sentiment Analysis

Title A Hierarchical Model of Reviews for Aspect-based Sentiment Analysis
Authors Sebastian Ruder, Parsa Ghaffari, John G. Breslin
Abstract Opinion mining from customer reviews has become pervasive in recent years. Sentences in reviews, however, are usually classified independently, even though they form part of a review’s argumentative structure. Intuitively, sentences in a review build and elaborate upon each other; knowledge of the review structure and sentential context should thus inform the classification of each sentence. We demonstrate this hypothesis for the task of aspect-based sentiment analysis by modeling the interdependencies of sentences in a review with a hierarchical bidirectional LSTM. We show that the hierarchical model outperforms two non-hierarchical baselines, obtains results competitive with the state-of-the-art, and outperforms the state-of-the-art on five multilingual, multi-domain datasets without any hand-engineered features or external resources.
Tasks Aspect-Based Sentiment Analysis, Opinion Mining, Sentiment Analysis
Published 2016-09-09
URL http://arxiv.org/abs/1609.02745v1
PDF http://arxiv.org/pdf/1609.02745v1.pdf
PWC https://paperswithcode.com/paper/a-hierarchical-model-of-reviews-for-aspect
Repo
Framework
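
A rough Keras sketch of the hierarchical bidirectional LSTM: a word-level BiLSTM encodes each sentence, a review-level BiLSTM reads the sequence of sentence vectors, and every sentence receives a sentiment prediction. Layer sizes and the handling of aspects are placeholders, not the paper's exact configuration.

```python
# Hierarchical BiLSTM sketch: words -> sentence vectors -> per-sentence labels.
from tensorflow.keras import layers, models

MAX_SENTS, MAX_WORDS, VOCAB, N_CLASSES = 20, 40, 20000, 3

# Sentence encoder: a word-level BiLSTM producing one vector per sentence.
word_in = layers.Input((MAX_WORDS,), dtype="int32")
x = layers.Embedding(VOCAB, 128)(word_in)
sent_vec = layers.Bidirectional(layers.LSTM(64))(x)
sentence_encoder = models.Model(word_in, sent_vec)

# Review model: a sentence-level BiLSTM over the encoded sentences.
review_in = layers.Input((MAX_SENTS, MAX_WORDS), dtype="int32")
s = layers.TimeDistributed(sentence_encoder)(review_in)
s = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(s)
out = layers.TimeDistributed(layers.Dense(N_CLASSES, activation="softmax"))(s)

model = models.Model(review_in, out)
model.compile("adam", "sparse_categorical_crossentropy")
```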

Nonparametric Detection of Geometric Structures over Networks

Title Nonparametric Detection of Geometric Structures over Networks
Authors Shaofeng Zou, Yingbin Liang, H. Vincent Poor
Abstract Nonparametric detection of existence of an anomalous structure over a network is investigated. Nodes corresponding to the anomalous structure (if one exists) receive samples generated by a distribution q, which is different from a distribution p generating samples for other nodes. If an anomalous structure does not exist, all nodes receive samples generated by p. It is assumed that the distributions p and q are arbitrary and unknown. The goal is to design statistically consistent tests with probability of errors converging to zero as the network size becomes asymptotically large. Kernel-based tests are proposed based on maximum mean discrepancy that measures the distance between mean embeddings of distributions into a reproducing kernel Hilbert space. Detection of an anomalous interval over a line network is first studied. Sufficient conditions on minimum and maximum sizes of candidate anomalous intervals are characterized in order to guarantee the proposed test to be consistent. It is also shown that certain necessary conditions must hold to guarantee any test to be universally consistent. Comparison of sufficient and necessary conditions yields that the proposed test is order-level optimal and nearly optimal respectively in terms of minimum and maximum sizes of candidate anomalous intervals. Generalization of the results to other networks is further developed. Numerical results are provided to demonstrate the performance of the proposed tests.
Tasks
Published 2016-04-05
URL http://arxiv.org/abs/1604.01351v2
PDF http://arxiv.org/pdf/1604.01351v2.pdf
PWC https://paperswithcode.com/paper/nonparametric-detection-of-geometric
Repo
Framework
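
The statistic underlying the proposed tests can be sketched directly: an unbiased estimate of squared maximum mean discrepancy (MMD) with an RBF kernel between samples from a candidate anomalous node set and the remaining nodes.

```python
# Unbiased estimate of squared MMD with an RBF kernel.
import numpy as np

def rbf(X, Y, sigma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2_unbiased(X, Y, sigma=1.0):
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = rbf(X, X, sigma), rbf(Y, Y, sigma), rbf(X, Y, sigma)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2 * Kxy.mean()

# A scan-style detector computes this statistic for every candidate interval of
# nodes against the rest and flags intervals whose statistic exceeds a threshold.
rng = np.random.default_rng(0)
print(mmd2_unbiased(rng.normal(0, 1, (200, 1)), rng.normal(0.5, 1, (200, 1))))
```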

Haploid-Diploid Evolutionary Algorithms

Title Haploid-Diploid Evolutionary Algorithms
Authors Larry Bull
Abstract This paper uses the recent idea that the fundamental haploid-diploid lifecycle of eukaryotic organisms implements a rudimentary form of learning within evolution. A general approach for evolutionary computation is here derived that differs from all previous known work using diploid representations. The primary role of recombination is also changed from that previously considered in both natural and artificial evolution under the new view. Using well-known abstract tuneable models it is shown that varying fitness landscape ruggedness varies the benefit of the new approach.
Tasks
Published 2016-08-19
URL http://arxiv.org/abs/1608.05578v3
PDF http://arxiv.org/pdf/1608.05578v3.pdf
PWC https://paperswithcode.com/paper/haploid-diploid-evolutionary-algorithms
Repo
Framework
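
One plausible reading of the haploid-diploid cycle, sketched on a toy OneMax landscape; the genotype-to-phenotype rule and the reproduction operators below are assumptions for illustration and may differ from the paper's specific scheme.

```python
# Generic haploid-diploid evolutionary cycle on OneMax (operators are assumptions).
import random

L, POP, GENS, MUT = 32, 30, 100, 1.0 / 32

def phenotype(diploid):
    """Express one haploid allele per gene, chosen at random (a crude dominance rule)."""
    a, b = diploid
    return [random.choice(pair) for pair in zip(a, b)]

def fitness(bits):
    return sum(bits)                       # OneMax: count of ones

def mutate(h):
    return [1 - g if random.random() < MUT else g for g in h]

pop = [[[random.randint(0, 1) for _ in range(L)] for _ in range(2)] for _ in range(POP)]
for _ in range(GENS):
    scored = sorted(pop, key=lambda d: fitness(phenotype(d)), reverse=True)
    parents = scored[: POP // 2]
    children = []
    while len(parents) + len(children) < POP:
        pa, pb = random.sample(parents, 2)
        # Each child receives one (mutated) haploid from each parent.
        children.append([mutate(random.choice(pa)), mutate(random.choice(pb))])
    pop = parents + children
```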

A Distance Function for Comparing Straight-Edge Geometric Figures

Title A Distance Function for Comparing Straight-Edge Geometric Figures
Authors Apoorva Honnegowda Roopa, Shrisha Rao
Abstract This paper defines a distance function that measures the dissimilarity between planar geometric figures formed with straight lines. This function can in turn be used in partial matching of different geometric figures. For a given pair of geometric figures that are graphically isomorphic, one function measures the angular dissimilarity and another measures the edge-length disproportionality. The distance function is then defined as the convex sum of these two functions. The novelty of the presented function is that it satisfies all the properties of a distance function, and it is computed by projecting appropriate features onto a Cartesian plane. To compute the deviation from the angular similarity property, the Euclidean distance between the given angular pairs and the corresponding points on the $y=x$ line is measured. Further, while computing the deviation from the edge-length proportionality property, the best-fit line through the origin for the set of edge lengths is found, and the Euclidean distance between the given edge-length pairs and the corresponding points on the $y=mx$ line is calculated. The Iterative Proportional Fitting Procedure (IPFP) is used to find this best-fit line. We demonstrate the behavior of the defined function for some sample pairs of figures.
Tasks
Published 2016-11-25
URL http://arxiv.org/abs/1612.01400v1
PDF http://arxiv.org/pdf/1612.01400v1.pdf
PWC https://paperswithcode.com/paper/a-distance-function-for-comparing-straight
Repo
Framework
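
A small sketch of the convex-sum distance; an ordinary least-squares fit through the origin stands in here for the IPFP step, and matched angle and edge-length correspondences are assumed to be given.

```python
# Convex sum of angular dissimilarity (deviation from y = x) and edge-length
# disproportionality (deviation from a through-origin best-fit line y = m x).
import numpy as np

def angular_dissimilarity(angles_a, angles_b):
    a, b = np.asarray(angles_a, float), np.asarray(angles_b, float)
    # Euclidean distance of each (a_i, b_i) point from the line y = x.
    return np.mean(np.abs(a - b) / np.sqrt(2))

def edge_disproportionality(lengths_a, lengths_b):
    x, y = np.asarray(lengths_a, float), np.asarray(lengths_b, float)
    m = (x @ y) / (x @ x)                       # least-squares slope of y = m x
    return np.mean(np.abs(m * x - y) / np.sqrt(1 + m * m))

def figure_distance(angles_a, angles_b, lengths_a, lengths_b, lam=0.5):
    return lam * angular_dissimilarity(angles_a, angles_b) + \
           (1 - lam) * edge_disproportionality(lengths_a, lengths_b)

# Example: a square vs. a 1x2 rectangle (same angles, disproportionate edges).
print(figure_distance([90] * 4, [90] * 4, [1, 1, 1, 1], [1, 2, 1, 2]))
```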

Cross-Domain Visual Matching via Generalized Similarity Measure and Feature Learning

Title Cross-Domain Visual Matching via Generalized Similarity Measure and Feature Learning
Authors Liang Lin, Guangrun Wang, Wangmeng Zuo, Xiangchu Feng, Lei Zhang
Abstract Cross-domain visual data matching is one of the fundamental problems in many real-world vision tasks, e.g., matching persons across ID photos and surveillance videos. Conventional approaches to this problem usually involve two steps: i) projecting samples from different domains into a common space, and ii) computing (dis-)similarity in this space based on a certain distance. In this paper, we present a novel pairwise similarity measure that advances existing models by i) expanding traditional linear projections into affine transformations and ii) fusing affine Mahalanobis distance and Cosine similarity by a data-driven combination. Moreover, we unify our similarity measure with feature representation learning via deep convolutional neural networks. Specifically, we incorporate the similarity measure matrix into the deep architecture, enabling an end-to-end way of model optimization. We extensively evaluate our generalized similarity model in several challenging cross-domain matching tasks: person re-identification under different views and face verification over different modalities (i.e., faces from still images and videos, older and younger faces, and sketch and photo portraits). The experimental results demonstrate the superior performance of our model over other state-of-the-art methods.
Tasks Face Verification, Person Re-Identification, Representation Learning
Published 2016-05-13
URL http://arxiv.org/abs/1605.04039v1
PDF http://arxiv.org/pdf/1605.04039v1.pdf
PWC https://paperswithcode.com/paper/cross-domain-visual-matching-via-generalized
Repo
Framework
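
A sketch of the fused similarity between a feature x from one domain and y from the other: an affine Mahalanobis term combined with a cosine-style term. All matrices below are randomly initialized stand-ins for parameters that the paper learns jointly with the CNN features.

```python
# Generalized similarity sketch: affine projections, Mahalanobis distance,
# and a cosine-like bilinear term fused by a combination weight.
import numpy as np

rng = np.random.default_rng(0)
d = 128
A = np.eye(d)                                            # Mahalanobis metric (PSD), learnable
La, Lb = rng.normal(size=(d, d)) * 0.01, rng.normal(size=(d, d)) * 0.01  # affine projections
ba, bb = np.zeros(d), np.zeros(d)                        # affine offsets
W = np.eye(d)                                            # bilinear (cosine-like) term, learnable
mu = 0.5                                                 # data-driven combination weight

def similarity(x, y):
    xa, yb = La @ x + ba, Lb @ y + bb                    # affine transformations
    mahal = (xa - yb) @ A @ (xa - yb)                    # affine Mahalanobis distance
    cos = (xa @ W @ yb) / (np.linalg.norm(xa) * np.linalg.norm(yb) + 1e-12)
    return -mu * mahal + (1 - mu) * cos                  # higher = more similar

x, y = rng.normal(size=d), rng.normal(size=d)
print(similarity(x, y))
```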

Satisfying Real-world Goals with Dataset Constraints

Title Satisfying Real-world Goals with Dataset Constraints
Authors Gabriel Goh, Andrew Cotter, Maya Gupta, Michael Friedlander
Abstract The goal of minimizing misclassification error on a training set is often just one of several real-world goals that might be defined on different datasets. For example, one may require a classifier to also make positive predictions at some specified rate for some subpopulation (fairness), or to achieve a specified empirical recall. Other real-world goals include reducing churn with respect to a previously deployed model, or stabilizing online training. In this paper we propose handling multiple goals on multiple datasets by training with dataset constraints, using the ramp penalty to accurately quantify costs, and present an efficient algorithm to approximately optimize the resulting non-convex constrained optimization problem. Experiments on both benchmark and real-world industry datasets demonstrate the effectiveness of our approach.
Tasks
Published 2016-06-24
URL http://arxiv.org/abs/1606.07558v2
PDF http://arxiv.org/pdf/1606.07558v2.pdf
PWC https://paperswithcode.com/paper/satisfying-real-world-goals-with-dataset
Repo
Framework
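
The ramp penalty can be sketched as a clipped hinge that keeps each example's contribution in [0, 1], so the surrogate stays close to the 0/1 count that a dataset constraint actually refers to; the recall constraint below is an illustrative example, not the paper's exact formulation.

```python
# Ramp (clipped hinge) surrogate for counting-style dataset constraints.
import numpy as np

def ramp(z):
    """Clipped hinge: 1 if z <= 0, 0 if z >= 1, linear in between."""
    return np.clip(1.0 - z, 0.0, 1.0)

def recall_constraint_violation(scores_pos, target=0.9):
    """Approximate (target - recall) on a dataset of positive examples."""
    approx_true_positives = np.sum(1.0 - ramp(scores_pos))   # ~ count of correct positives
    recall = approx_true_positives / len(scores_pos)
    return max(0.0, target - recall)

scores_pos = np.array([2.1, 0.3, -0.5, 1.7, 0.9])   # classifier margins on positives
print(recall_constraint_violation(scores_pos))
```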

Towards Segmenting Consumer Stereo Videos: Benchmark, Baselines and Ensembles

Title Towards Segmenting Consumer Stereo Videos: Benchmark, Baselines and Ensembles
Authors Wei-Chen Chiu, Fabio Galasso, Mario Fritz
Abstract Are we ready to segment consumer stereo videos? The amount of this data type is rapidly increasing, and it encompasses rich information on appearance, motion and depth cues. However, the segmentation of such data is still largely unexplored. First, we therefore propose a new benchmark: videos, annotations and metrics to measure progress on this emerging challenge. Second, we evaluate several state-of-the-art segmentation methods and propose a novel ensemble method based on recent spectral theory, which combines existing image and video segmentation techniques in an efficient scheme. Finally, we propose and integrate into this model a novel regressor, learnt to optimize the stereo segmentation performance directly via a differentiable proxy. The regressor makes our segmentation ensemble adaptive to each stereo video and outperforms the segmentations of the ensemble as well as a recent RGB-D segmentation technique.
Tasks Video Semantic Segmentation
Published 2016-09-03
URL http://arxiv.org/abs/1609.00836v2
PDF http://arxiv.org/pdf/1609.00836v2.pdf
PWC https://paperswithcode.com/paper/towards-segmenting-consumer-stereo-videos
Repo
Framework

ScribbleSup: Scribble-Supervised Convolutional Networks for Semantic Segmentation

Title ScribbleSup: Scribble-Supervised Convolutional Networks for Semantic Segmentation
Authors Di Lin, Jifeng Dai, Jiaya Jia, Kaiming He, Jian Sun
Abstract Large-scale data is of crucial importance for learning semantic segmentation models, but annotating per-pixel masks is a tedious and inefficient procedure. We note that for the topic of interactive image segmentation, scribbles are very widely used in academic research and commercial software, and are recognized as one of the most user-friendly ways of interacting. In this paper, we propose to use scribbles to annotate images, and develop an algorithm to train convolutional networks for semantic segmentation supervised by scribbles. Our algorithm is based on a graphical model that jointly propagates information from scribbles to unmarked pixels and learns network parameters. We present competitive object semantic segmentation results on the PASCAL VOC dataset by using scribbles as annotations. Scribbles are also favored for annotating stuff (e.g., water, sky, grass) that has no well-defined shape, and our method shows excellent results on the PASCAL-CONTEXT dataset thanks to extra inexpensive scribble annotations. Our scribble annotations on PASCAL VOC are available at http://research.microsoft.com/en-us/um/people/jifdai/downloads/scribble_sup
Tasks Semantic Segmentation
Published 2016-04-18
URL http://arxiv.org/abs/1604.05144v1
PDF http://arxiv.org/pdf/1604.05144v1.pdf
PWC https://paperswithcode.com/paper/scribblesup-scribble-supervised-convolutional
Repo
Framework

Burstiness Scale: a highly parsimonious model for characterizing random series of events

Title Burstiness Scale: a highly parsimonious model for characterizing random series of events
Authors Rodrigo A S Alves, Renato Assunção, Pedro O S Vaz de Melo
Abstract The problem of accurately and parsimoniously characterizing random series of events (RSEs) present on the Web, such as e-mail conversations or Twitter hashtags, is not trivial. Reports found in the literature reveal two apparently conflicting visions of how RSEs should be modeled. On one side are Poissonian processes, in which consecutive events follow each other at relatively regular intervals and should not be correlated. On the other side are self-exciting processes, which are able to generate bursts of correlated events and periods of inactivity. The existence of many, and sometimes conflicting, approaches to modeling RSEs is a consequence of the unpredictability of the aggregated dynamics of our individual and routine activities, which sometimes show simple patterns but sometimes result in irregular rising and falling trends. In this paper we propose a highly parsimonious way to characterize general RSEs, namely the Burstiness Scale (BuSca) model. BuSca views each RSE as a mix of two independent processes: a Poissonian one and a self-exciting one. Here we describe a fast method to extract the two parameters of BuSca that, together, give the burstiness scale, which represents how much of the RSE is due to bursty and viral effects. We validated our method on eight diverse and large datasets containing real random series of events seen in Twitter, Yelp, e-mail conversations, Digg, and online forums. Results showed that, even using only two parameters, BuSca is able to accurately describe RSEs seen in these diverse systems, which can benefit many applications.
Tasks
Published 2016-02-20
URL http://arxiv.org/abs/1602.06431v1
PDF http://arxiv.org/pdf/1602.06431v1.pdf
PWC https://paperswithcode.com/paper/burstiness-scale-a-highly-parsimonious-model
Repo
Framework
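
BuSca's generative view can be sketched by superposing a homogeneous Poisson component with a self-exciting (Hawkes-like) component, simulated here through its branching representation; parameter names and values are assumptions, and the paper's fast fitting method is not reproduced.

```python
# Sketch of the BuSca generative view: Poisson + self-exciting superposition.
import numpy as np

rng = np.random.default_rng(0)

def simulate_busca(T=1000.0, mu=0.5, nu=0.1, branching=0.8, decay=1.0):
    """Return event times and the fraction attributable to the bursty part."""
    poisson_part = np.sort(rng.uniform(0, T, rng.poisson(mu * T)))
    # Self-exciting part via clusters: immigrants arrive at rate nu, and each
    # event spawns Poisson(branching) offspring after Exponential(decay) delays.
    bursty, frontier = [], list(rng.uniform(0, T, rng.poisson(nu * T)))
    while frontier:
        t = frontier.pop()
        bursty.append(t)
        for _ in range(rng.poisson(branching)):
            child = t + rng.exponential(1.0 / decay)
            if child < T:
                frontier.append(child)
    bursty = np.array(bursty)
    events = np.sort(np.concatenate([poisson_part, bursty]))
    burstiness_scale = len(bursty) / max(len(events), 1)
    return events, burstiness_scale

events, scale = simulate_busca()
print(f"{len(events)} events, ~{scale:.2f} of them from the self-exciting part")
```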