May 6, 2019

3164 words 15 mins read

Paper Group ANR 398

Achieving stable subspace clustering by post-processing generic clustering results

Title Achieving stable subspace clustering by post-processing generic clustering results
Authors Duc-Son Pham, Ognjen Arandjelovic, Svetha Venkatesh
Abstract We propose an effective subspace selection scheme as a post-processing step to improve results obtained by sparse subspace clustering (SSC). Our method starts with the computation of stable subspaces using a novel random sampling scheme. The preliminary subspaces thus constructed are used to identify initially misclustered data points and then to reassign them to more suitable clusters based on their goodness-of-fit to the preliminary model. To improve the robustness of the algorithm, we use a dominant nearest-subspace classification scheme that controls the level of sensitivity against reassignment. We demonstrate that our algorithm is convergent and superior to the direct application of a generic alternative such as principal component analysis. On several popular motion segmentation and face clustering datasets pervasively used in the sparse subspace clustering literature, the proposed method is shown to greatly reduce the incidence of clustering errors while introducing negligible disturbance to the data points already correctly clustered.
Tasks Motion Segmentation
Published 2016-05-27
URL http://arxiv.org/abs/1605.08680v1
PDF http://arxiv.org/pdf/1605.08680v1.pdf
PWC https://paperswithcode.com/paper/achieving-stable-subspace-clustering-by-post
Repo
Framework
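
The reassignment step described in the abstract can be pictured with a short sketch. This is not the authors' algorithm: it estimates each preliminary subspace with a plain SVD per cluster and reassigns every point to its nearest subspace by projection residual, omitting the random sampling scheme and the dominant-voting sensitivity control; `dim` is an assumed, user-chosen subspace dimension.

```python
import numpy as np

def fit_subspace(X, dim):
    # Orthonormal basis for the best-fit dim-dimensional subspace of the rows of X.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:dim].T                      # shape: (n_features, dim)

def residual(x, U):
    # Distance from point x to the subspace spanned by the columns of U.
    return np.linalg.norm(x - U @ (U.T @ x))

def reassign(X, labels, dim=3):
    # Fit one subspace per preliminary cluster, then relabel each point
    # by the subspace it fits best (nearest-subspace classification).
    bases = {k: fit_subspace(X[labels == k], dim) for k in np.unique(labels)}
    return np.array([min(bases, key=lambda k: residual(x, bases[k])) for x in X])

X = np.random.default_rng(0).normal(size=(60, 10))
labels = np.repeat([0, 1, 2], 20)
print(reassign(X, labels)[:10])
```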

Using Word Embeddings in Twitter Election Classification

Title Using Word Embeddings in Twitter Election Classification
Authors Xiao Yang, Craig Macdonald, Iadh Ounis
Abstract Word embeddings and convolutional neural networks (CNN) have attracted extensive attention in various classification tasks for Twitter, e.g. sentiment classification. However, the effect of the configuration used to train and generate the word embeddings on the classification performance has not been studied in the existing literature. In this paper, using a Twitter election classification task that aims to detect election-related tweets, we investigate the impact of the background dataset used to train the embedding models, the context window size and the dimensionality of word embeddings on the classification performance. By comparing the classification results of two word embedding models trained on different background corpora (e.g. Wikipedia articles and Twitter microposts), we show that the background data type should align with the Twitter classification dataset to achieve better performance. Moreover, by evaluating word embedding models trained with various context window sizes and dimensionalities, we find that larger context windows and dimensionalities are preferable for improving performance. Our experimental results also show that using word embeddings and a CNN leads to statistically significant improvements over various baselines such as random, SVM with TF-IDF, and SVM with word embeddings.
Tasks Sentiment Analysis, Word Embeddings
Published 2016-06-22
URL http://arxiv.org/abs/1606.07006v3
PDF http://arxiv.org/pdf/1606.07006v3.pdf
PWC https://paperswithcode.com/paper/using-word-embeddings-in-twitter-election
Repo
Framework
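
As a concrete picture of the hyper-parameters studied above, here is a hedged sketch of sweeping the context window and dimensionality of a skip-gram model, assuming gensim 4.x; the toy corpus and the grid values are illustrative, not the paper's.

```python
from gensim.models import Word2Vec

# Toy stand-in for a background corpus (the paper compares Wikipedia vs. Twitter).
corpus = [["vote", "early", "on", "election", "day"],
          ["the", "candidates", "debate", "the", "election", "tonight"]]

for window in (5, 10):                 # context window sizes to compare
    for dim in (100, 300):             # embedding dimensionalities to compare
        model = Word2Vec(corpus, vector_size=dim, window=window,
                         min_count=1, sg=1, seed=0)
        # These vectors would initialize the embedding layer of the CNN classifier.
        print(window, dim, model.wv["election"].shape)
```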

Fast Trajectory Simplification Algorithm for Natural User Interfaces in Robot Programming by Demonstration

Title Fast Trajectory Simplification Algorithm for Natural User Interfaces in Robot Programming by Demonstration
Authors Daniel L. Marino, Milos Manic
Abstract Trajectory simplification is a problem encountered in areas like robot programming by demonstration, CAD/CAM, computer vision, and GPS-based applications such as traffic analysis. The problem entails reducing the points in a given trajectory while keeping the relevant points that preserve important information. The benefits include reduced storage and computational expense, while making the data more manageable. Common techniques formulate a minimization problem to be solved iteratively under some error metric, which causes the algorithms to run in super-linear time. We present an algorithm called FastSTray, which selects the relevant points in the trajectory in linear time by following an open-loop heuristic approach. While most current trajectory simplification algorithms are tailored for GPS trajectories, our approach focuses on smooth trajectories for robot programming by demonstration recorded using motion capture systems. Two variations of the algorithm are presented: the first preserves both shape and temporal information; the second preserves only shape information. We then use cubic splines to interpolate between the points in the simplified trajectory and recreate the original trajectory. The presented algorithm was tested on trajectories recorded from a hand-tracking system. It was able to eliminate about 90% of the points in the original trajectories while maintaining errors between 0.78 and 2 cm, which corresponds to 1%-2.4% relative error with respect to the bounding box of the trajectories.
Tasks Motion Capture
Published 2016-08-25
URL http://arxiv.org/abs/1608.07338v1
PDF http://arxiv.org/pdf/1608.07338v1.pdf
PWC https://paperswithcode.com/paper/fast-trajectory-simplification-algorithm-for
Repo
Framework
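
To illustrate the reconstruction side of the pipeline, here is a minimal sketch: the point selection below is a uniform-stride stand-in for FastSTray's open-loop heuristic, while the cubic-spline interpolation and bounding-box relative error follow the abstract. Assumes NumPy and SciPy.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Toy smooth trajectory, densely sampled as a hand-tracking system might record it.
t = np.linspace(0.0, 1.0, 500)
traj = np.c_[np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)]

# Stand-in selection: keep every 50th point plus the final one (the real
# algorithm picks relevant points in linear time with a shape-aware heuristic).
keep = np.unique(np.r_[np.arange(0, len(t), 50), len(t) - 1])

# Recreate the trajectory with cubic splines through the kept points.
spline = CubicSpline(t[keep], traj[keep])
err = np.linalg.norm(spline(t) - traj, axis=1)

# Relative error against the bounding-box diagonal, the abstract's figure of merit.
diag = np.linalg.norm(traj.max(axis=0) - traj.min(axis=0))
print(f"kept {len(keep)}/{len(t)} points, max relative error {err.max() / diag:.2%}")
```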

Estimation of Passenger Route Choice Pattern Using Smart Card Data for Complex Metro Systems

Title Estimation of Passenger Route Choice Pattern Using Smart Card Data for Complex Metro Systems
Authors Juanjuan Zhao, Fan Zhang, Lai Tu, Chengzhong Xu, Dayong Shen, Chen Tian, Xiang-Yang Li, Zhengxi Li
Abstract Nowadays, metro systems play an important role in meeting the urban transportation demand of large cities. Understanding passenger route choice is critical for public transit management. The wide deployment of Automated Fare Collection (AFC) systems opens up a new opportunity. However, only each trip's tap-in and tap-out timestamps and stations can be obtained directly from AFC records; the train and route chosen by a passenger, which are necessary to solve our problem, are unknown. While existing methods work well in some specific situations, they fail in more complicated ones. In this paper, we propose a solution that requires no equipment or human involvement beyond the AFC systems. We develop a probabilistic model that estimates from empirical analysis how passenger flows are dispatched to different routes and trains. We validate our approach using a large-scale dataset collected from the Shenzhen metro system. The measured results provide useful inputs for building the passenger path choice model.
Tasks
Published 2016-04-19
URL http://arxiv.org/abs/1605.08390v1
PDF http://arxiv.org/pdf/1605.08390v1.pdf
PWC https://paperswithcode.com/paper/estimation-of-passenger-route-choice-pattern
Repo
Framework
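
One way to read the probabilistic model described above is as Bayesian inference over candidate routes given an observed journey time. The sketch below is a simplification under assumed Gaussian travel-time models (the route names, means and priors are hypothetical), not the paper's full train-assignment model.

```python
from scipy.stats import norm

# Hypothetical per-route travel-time models for one origin-destination pair:
# (mean minutes, std minutes), as might be estimated from AFC data.
routes = {"via_A": (32.0, 3.0), "via_B": (38.0, 4.0)}
prior = {"via_A": 0.5, "via_B": 0.5}

def route_posterior(observed_minutes):
    # Bayes rule: P(route | time) is proportional to P(time | route) * P(route).
    like = {r: norm.pdf(observed_minutes, mu, sd) * prior[r]
            for r, (mu, sd) in routes.items()}
    z = sum(like.values())
    return {r: v / z for r, v in like.items()}

print(route_posterior(33.5))   # tap-out minus tap-in of one observed trip
```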

DeeperBind: Enhancing Prediction of Sequence Specificities of DNA Binding Proteins

Title DeeperBind: Enhancing Prediction of Sequence Specificities of DNA Binding Proteins
Authors Hamid Reza Hassanzadeh, May D. Wang
Abstract Transcription factors (TFs) are macromolecules that bind to cis-regulatory specific sub-regions of DNA promoters and initiate transcription. Finding the exact location of these binding sites (aka motifs) is important in a variety of domains such as drug design and development. To address this need, several in vivo and in vitro techniques have been developed that try to characterize and predict the binding specificity of a protein to different DNA loci. The major problem with these techniques is that they are not accurate enough in predicting the binding affinity and characterizing the corresponding motifs. As a result, downstream analysis is required to uncover the locations where proteins of interest bind. Here, we propose DeeperBind, a long short-term recurrent convolutional network for predicting protein binding specificities with respect to DNA probes. DeeperBind can model the positional dynamics of probe sequences and hence effectively reckons with the contributions made by individual sub-regions in DNA sequences. Moreover, it can be trained and tested on datasets containing varying-length sequences. We apply our pipeline to datasets derived from protein binding microarrays (PBMs), an in vitro high-throughput technology for quantifying protein-DNA binding preferences, and present promising results. To the best of our knowledge, this is the most accurate pipeline that can predict binding specificities of DNA sequences from data produced by high-throughput technologies, by utilizing the power of deep learning for feature generation and positional dynamics modeling.
Tasks
Published 2016-11-17
URL http://arxiv.org/abs/1611.05777v1
PDF http://arxiv.org/pdf/1611.05777v1.pdf
PWC https://paperswithcode.com/paper/deeperbind-enhancing-prediction-of-sequence
Repo
Framework
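
The architecture sketched in the abstract (convolution for motif detection, recurrence for positional dynamics) might look as follows in PyTorch; all layer sizes are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DeeperBindSketch(nn.Module):
    # Conv1d scans one-hot DNA (channels A, C, G, T) for motif-like patterns;
    # the LSTM then models how responses vary along the probe, and the final
    # hidden state is mapped to a binding-specificity score.
    def __init__(self, n_filters=16, motif_len=8, hidden=32):
        super().__init__()
        self.conv = nn.Conv1d(4, n_filters, kernel_size=motif_len)
        self.lstm = nn.LSTM(n_filters, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                   # x: (batch, 4, seq_len)
        h = torch.relu(self.conv(x))        # (batch, n_filters, positions)
        _, (h_n, _) = self.lstm(h.transpose(1, 2))
        return self.head(h_n[-1])           # (batch, 1) specificity score

x = torch.randn(2, 4, 40)                   # random stand-in for one-hot probes
print(DeeperBindSketch()(x).shape)          # torch.Size([2, 1])
```

Because the recurrence consumes however many positions the convolution emits, the same network handles the varying-length probes mentioned in the abstract (in practice via padding or packed sequences).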

Generalized Topic Modeling

Title Generalized Topic Modeling
Authors Avrim Blum, Nika Haghtalab
Abstract Recently there has been significant activity in developing algorithms with provable guarantees for topic modeling. In standard topic models, a topic (such as sports, business, or politics) is viewed as a probability distribution $\vec a_i$ over words, and a document is generated by first selecting a mixture $\vec w$ over topics, and then generating words i.i.d. from the associated mixture $A{\vec w}$. Given a large collection of such documents, the goal is to recover the topic vectors and then to correctly classify new documents according to their topic mixture. In this work we consider a broad generalization of this framework in which words are no longer assumed to be drawn i.i.d. and instead a topic is a complex distribution over sequences of paragraphs. Since one could not hope to even represent such a distribution in general (even if paragraphs are given using some natural feature representation), we aim instead to directly learn a document classifier. That is, we aim to learn a predictor that given a new document, accurately predicts its topic mixture, without learning the distributions explicitly. We present several natural conditions under which one can do this efficiently and discuss issues such as noise tolerance and sample complexity in this model. More generally, our model can be viewed as a generalization of the multi-view or co-training setting in machine learning.
Tasks Topic Models
Published 2016-11-04
URL http://arxiv.org/abs/1611.01259v1
PDF http://arxiv.org/pdf/1611.01259v1.pdf
PWC https://paperswithcode.com/paper/generalized-topic-modeling
Repo
Framework

Expectile Matrix Factorization for Skewed Data Analysis

Title Expectile Matrix Factorization for Skewed Data Analysis
Authors Rui Zhu, Di Niu, Linglong Kong, Zongpeng Li
Abstract Matrix factorization is a popular approach to solving matrix estimation problems based on partial observations. Existing matrix factorization is based on least squares and aims to yield a low-rank matrix to interpret the conditional sample means given the observations. However, in many real applications with skewed and extreme data, least squares cannot explain their central tendency or tail distributions, yielding undesired estimates. In this paper, we propose \emph{expectile matrix factorization} by introducing asymmetric least squares, a key concept in expectile regression analysis, into the matrix factorization framework. We propose an efficient algorithm to solve the new problem based on alternating minimization and quadratic programming. We prove that our algorithm converges to a global optimum and exactly recovers the true underlying low-rank matrices when noise is zero. For synthetic data with skewed noise and a real-world dataset containing web service response times, the proposed scheme achieves lower recovery errors than the existing matrix factorization method based on least squares in a wide range of settings.
Tasks
Published 2016-06-07
URL http://arxiv.org/abs/1606.01984v3
PDF http://arxiv.org/pdf/1606.01984v3.pdf
PWC https://paperswithcode.com/paper/expectile-matrix-factorization-for-skewed
Repo
Framework
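
The asymmetric least squares at the heart of the method is easy to state in code. A minimal sketch, assuming the low-rank factorization form M ~ U V^T from the abstract; the alternating-minimization solver itself is omitted.

```python
import numpy as np

def expectile_loss(residuals, omega):
    # Asymmetric least squares: positive residuals are weighted by omega and
    # negative ones by 1 - omega. omega = 0.5 recovers ordinary least squares;
    # omega near 0 or 1 targets the tails of skewed data instead of the mean.
    w = np.where(residuals >= 0, omega, 1.0 - omega)
    return np.sum(w * residuals ** 2)

# Evaluate the loss on observed entries of a toy low-rank model M ~ U @ V.T.
rng = np.random.default_rng(0)
M = rng.exponential(size=(20, 15))            # skewed observations
mask = rng.random(M.shape) < 0.5              # partial-observation pattern
U, V = rng.normal(size=(20, 3)), rng.normal(size=(15, 3))
print(expectile_loss((M - U @ V.T)[mask], omega=0.9))
```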

Occlusion-Aware Video Deblurring with a New Layered Blur Model

Title Occlusion-Aware Video Deblurring with a New Layered Blur Model
Authors Byeongjoo Ahn, Tae Hyun Kim, Wonsik Kim, Kyoung Mu Lee
Abstract We present a deblurring method for scenes with occluding objects using a carefully designed layered blur model. Layered blur models are frequently used in motion deblurring to handle locally varying blurs, which are caused by object motions or depth variations in a scene. However, conventional models have a limitation in representing the layer interactions occurring at occlusion boundaries. In this paper, we address this limitation both theoretically and experimentally, and propose a new layered blur model that reflects the actual blur generation process. Based on this model, we develop an occlusion-aware deblurring method that can estimate not only the clear foreground and background, but also the object motion, more accurately. We also provide a novel analysis of the blur kernel at object boundaries, which reveals distinctive characteristics of the blur kernel that cannot be captured by conventional blur models. Experimental results on synthetic and real blurred videos demonstrate that the proposed method yields superior results, especially at object boundaries.
Tasks Deblurring
Published 2016-11-29
URL http://arxiv.org/abs/1611.09572v1
PDF http://arxiv.org/pdf/1611.09572v1.pdf
PWC https://paperswithcode.com/paper/occlusion-aware-video-deblurring-with-a-new
Repo
Framework
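
For intuition, the sketch below composites a toy two-layer scene under one common form of the conventional layered blur model that the paper argues against; the box filter stands in for a motion-blur kernel, and the exact composite is an assumption for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

F = np.zeros((64, 64)); F[20:40, 20:40] = 1.0   # foreground intensities
A = (F > 0).astype(float)                        # foreground alpha (occlusion mask)
G = np.full((64, 64), 0.5)                       # background intensities

blur = lambda img: uniform_filter(img, size=7)   # stand-in motion-blur kernel

# Conventional layered composite: blur the matted foreground, fill the rest
# with background. The paper's analysis targets exactly the boundary pixels
# where this additive form misrepresents foreground/background interaction.
B = blur(F * A) + (1.0 - blur(A)) * G
print(B.min(), B.max())
```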

Learning Runtime Parameters in Computer Systems with Delayed Experience Injection

Title Learning Runtime Parameters in Computer Systems with Delayed Experience Injection
Authors Michael Schaarschmidt, Felix Gessert, Valentin Dalibard, Eiko Yoneki
Abstract Learning effective configurations in computer systems without hand-crafting models for every parameter is a long-standing problem. This paper investigates the use of deep reinforcement learning for runtime parameters of cloud databases under latency constraints. Cloud services serve up to thousands of concurrent requests per second and can adjust critical parameters by leveraging performance metrics. In this work, we use continuous deep reinforcement learning to learn optimal cache expirations for HTTP caching in content delivery networks. To this end, we introduce a technique for asynchronous experience management called delayed experience injection, which facilitates delayed reward and next-state computation in concurrent environments where measurements are not immediately available. Evaluation results show that our approach based on normalized advantage functions and asynchronous CPU-only training outperforms a statistical estimator.
Tasks
Published 2016-10-31
URL http://arxiv.org/abs/1610.09903v1
PDF http://arxiv.org/pdf/1610.09903v1.pdf
PWC https://paperswithcode.com/paper/learning-runtime-parameters-in-computer
Repo
Framework
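
Delayed experience injection, as described, decouples taking an action from completing its transition once the delayed measurement arrives. A minimal sketch of that bookkeeping (the names and the cache-TTL example are hypothetical):

```python
import collections

Pending = collections.namedtuple("Pending", "state action")

pending = {}    # transitions awaiting their reward / next-state measurement
replay = []     # completed transitions, ready for asynchronous training

def act(step_id, state, action):
    # The agent acts now, but the reward is not yet measurable.
    pending[step_id] = Pending(state, action)

def inject(step_id, reward, next_state):
    # Later, when monitoring delivers the measurement, complete the transition.
    p = pending.pop(step_id)
    replay.append((p.state, p.action, reward, next_state))

act(0, state=(0.20,), action=60)               # e.g. set a 60 s cache expiration
inject(0, reward=1.3, next_state=(0.25,))      # metrics arrive asynchronously
print(replay)
```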

Active Learning from Imperfect Labelers

Title Active Learning from Imperfect Labelers
Authors Songbai Yan, Kamalika Chaudhuri, Tara Javidi
Abstract We study active learning where the labeler can not only return incorrect labels but also abstain from labeling. We consider different noise and abstention conditions of the labeler. We propose an algorithm that utilizes abstention responses, and analyze its statistical consistency and query complexity under fairly natural assumptions on the noise and abstention rate of the labeler. The algorithm is adaptive in the sense that it automatically requests fewer queries from a more informed or less noisy labeler. We couple our algorithm with lower bounds to show that under some technical conditions, it achieves nearly optimal query complexity.
Tasks Active Learning
Published 2016-10-30
URL http://arxiv.org/abs/1610.09730v1
PDF http://arxiv.org/pdf/1610.09730v1.pdf
PWC https://paperswithcode.com/paper/active-learning-from-imperfect-labelers
Repo
Framework
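
A caricature of how abstention responses can be exploited, in the spirit of the abstract: a one-dimensional threshold search where an abstention near the boundary localizes it as effectively as a label. The labeler model and constants are assumptions for illustration, not the paper's algorithm.

```python
import random

random.seed(0)
BOUNDARY = 0.37                          # unknown threshold to be learned

def labeler(x):
    if abs(x - BOUNDARY) < 0.02 and random.random() < 0.7:
        return None                      # abstain when uncertain
    return x >= BOUNDARY                 # label elsewhere (label noise omitted)

lo, hi = 0.0, 1.0
for _ in range(25):
    mid = (lo + hi) / 2
    y = labeler(mid)
    if y is None:                        # abstention is informative, not wasted:
        lo = max(lo, mid - 0.05)         # the boundary must be near mid
        hi = min(hi, mid + 0.05)
    elif y:
        hi = mid                         # boundary lies at or below mid
    else:
        lo = mid                         # boundary lies above mid
print(f"boundary localized to [{lo:.3f}, {hi:.3f}]")
```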

Fast and Energy-Efficient CNN Inference on IoT Devices

Title Fast and Energy-Efficient CNN Inference on IoT Devices
Authors Mohammad Motamedi, Daniel Fong, Soheil Ghiasi
Abstract Convolutional Neural Networks (CNNs) exhibit remarkable performance in various machine learning tasks. As sensor-equipped internet of things (IoT) devices permeate every aspect of modern life, it is increasingly important to run CNN inference, a computationally intensive application, on resource-constrained devices. We present a technique for fast and energy-efficient CNN inference on mobile SoC platforms, which are projected to be a major player in the IoT space. We propose techniques for efficient parallelization of CNN inference targeting mobile GPUs, and explore the underlying tradeoffs. Experiments with running SqueezeNet on three different mobile devices confirm the effectiveness of our approach. For further study, please refer to the project repository available on our GitHub page: https://github.com/mtmd/Mobile_ConvNet
Tasks
Published 2016-11-22
URL http://arxiv.org/abs/1611.07151v1
PDF http://arxiv.org/pdf/1611.07151v1.pdf
PWC https://paperswithcode.com/paper/fast-and-energy-efficient-cnn-inference-on
Repo
Framework

Optimized Polynomial Evaluation with Semantic Annotations

Title Optimized Polynomial Evaluation with Semantic Annotations
Authors Daniel Rubio Bonilla, Colin W. Glass, Jan Kuper
Abstract In this paper we discuss how semantic annotations can be used to expose the mathematical, algorithmic structure of the underlying imperative code, enabling compilers to apply code transformations that yield better performance. This approach achieves not only good performance but also better programmability, maintainability and portability across different hardware architectures. To exemplify this, we use polynomial equations of different degrees.
Tasks
Published 2016-03-04
URL http://arxiv.org/abs/1603.01520v3
PDF http://arxiv.org/pdf/1603.01520v3.pdf
PWC https://paperswithcode.com/paper/optimized-polynomial-evaluation-with-semantic
Repo
Framework
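
A classic instance of the kind of transformation such annotations enable: once a compiler knows a loop evaluates a polynomial, it can rewrite naive power-by-power evaluation into Horner's rule. A sketch in Python for illustration (the paper itself targets annotated imperative code; Horner is one standard rewriting, not necessarily the paper's):

```python
from functools import reduce

coeffs = [2.0, -3.0, 0.0, 5.0]           # p(x) = 2 - 3x + 5x^3, low degree first

def naive(x):
    # Direct evaluation: one exponentiation per term.
    return sum(c * x ** i for i, c in enumerate(coeffs))

def horner(x):
    # Horner's rule: ((c3 * x + c2) * x + c1) * x + c0, one multiply-add per term.
    return reduce(lambda acc, c: acc * x + c, reversed(coeffs), 0.0)

assert abs(naive(1.7) - horner(1.7)) < 1e-9
```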

Collaborative Information Bottleneck

Title Collaborative Information Bottleneck
Authors Matías Vera, Leonardo Rey Vega, Pablo Piantanida
Abstract This paper investigates a multi-terminal source coding problem under a logarithmic loss fidelity which does not necessarily lead to an additive distortion measure. The problem is motivated by an extension of the Information Bottleneck method to a multi-source scenario where several encoders have to cooperatively build rate-limited descriptions of their sources in order to maximize information about other unobserved (hidden) sources. More precisely, we study the fundamental information-theoretic limits of the (i) Two-way Collaborative Information Bottleneck (TW-CIB) and (ii) Collaborative Distributed Information Bottleneck (CDIB) problems. The TW-CIB problem consists of two distant encoders that separately observe marginal (dependent) components $X_1$ and $X_2$ and can cooperate through multiple exchanges of limited information with the aim of extracting information about hidden variables $(Y_1,Y_2)$, which can be arbitrarily dependent on $(X_1,X_2)$. In CDIB, on the other hand, there are two cooperating encoders which separately observe $X_1$ and $X_2$, and a third node which can listen to the exchanges between the two encoders in order to obtain information about a hidden variable $Y$. The relevance (figure of merit) is measured in terms of a normalized (per-sample) multi-letter mutual information metric (log-loss fidelity), and an interesting tradeoff arises by constraining the complexity of descriptions, measured in terms of the rates needed for the exchanges between the encoders and decoders involved. Inner and outer bounds on the complexity-relevance region of these problems are derived, from which optimality is characterized for several cases of interest. Our resulting theoretical complexity-relevance regions are finally evaluated for binary symmetric and Gaussian statistical models.
Tasks
Published 2016-04-05
URL http://arxiv.org/abs/1604.01433v2
PDF http://arxiv.org/pdf/1604.01433v2.pdf
PWC https://paperswithcode.com/paper/collaborative-information-bottleneck
Repo
Framework

Supervised Dimensionality Reduction via Distance Correlation Maximization

Title Supervised Dimensionality Reduction via Distance Correlation Maximization
Authors Praneeth Vepakomma, Chetan Tonde, Ahmed Elgammal
Abstract We propose a novel formulation for supervised dimensionality reduction based on a nonlinear dependency criterion called Statistical Distance Correlation (Székely et al., 2007). We propose an objective that is free of distributional assumptions on the regression variables and of regression model assumptions. Our formulation is based on learning a low-dimensional feature representation $\mathbf{z}$ that maximizes the squared sum of Distance Correlations between the low-dimensional features $\mathbf{z}$ and the response $y$, and between the features $\mathbf{z}$ and the covariates $\mathbf{x}$. We propose a novel algorithm to optimize this objective using the Generalized Minimization-Maximization method of Parizi et al. (2015). We show superior empirical results on multiple datasets, demonstrating the effectiveness of our approach over several relevant state-of-the-art supervised dimensionality reduction methods.
Tasks Dimensionality Reduction
Published 2016-01-03
URL http://arxiv.org/abs/1601.00236v1
PDF http://arxiv.org/pdf/1601.00236v1.pdf
PWC https://paperswithcode.com/paper/supervised-dimensionality-reduction-via
Repo
Framework
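
The dependency criterion itself is simple to compute. Below is a minimal sketch of the sample distance correlation of Székely et al. (2007), the quantity the proposed objective sums (squared) between features and response and between features and covariates; it assumes NumPy/SciPy and skips degenerate-denominator checks.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def dcor(X, Y):
    # Double-center each pairwise-distance matrix, then correlate them.
    def centered(D):
        return D - D.mean(axis=0) - D.mean(axis=1)[:, None] + D.mean()
    A = centered(squareform(pdist(X)))
    B = centered(squareform(pdist(Y)))
    dcov2 = (A * B).mean()
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

rng = np.random.default_rng(1)
x = rng.normal(size=(200, 1))
y = x ** 2                       # nonlinear dependence, near-zero Pearson r
print(f"distance correlation: {dcor(x, y):.3f}")
```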

An information theoretic formulation of the Dictionary Learning and Sparse Coding Problems on Statistical Manifolds

Title An information theoretic formulation of the Dictionary Learning and Sparse Coding Problems on Statistical Manifolds
Authors Rudrasis Chakraborty, Monami Banerjee, Victoria Crawford, Baba C. Vemuri
Abstract In this work, we propose a novel information-theoretic framework for dictionary learning (DL) and sparse coding (SC) on a statistical manifold (the manifold of probability distributions). Unlike the traditional DL and SC framework, our new formulation does not explicitly incorporate any sparsity-inducing norm in the cost function, yet still yields sparse codes. Moreover, we extend this framework to the manifold of symmetric positive definite matrices, $\mathcal{P}_n$. Our algorithm approximates the data points, which are probability distributions, by the weighted Kullback-Leibler center (KL-center) of the dictionary atoms. The KL-center is the minimizer of the maximum KL-divergence between the unknown center and the members of the set whose center is being sought. Further, we prove that this KL-center is a sparse combination of the dictionary atoms. Since the data reside on a statistical manifold, the data fidelity term cannot be as simple as in the vector-space case. We therefore employ the geodesic distance between the data and a sparse approximation of the data element. This cost function is minimized using an accelerated gradient descent algorithm. An extensive set of experimental results shows the effectiveness of our proposed framework. We present several experiments involving a variety of classification problems in computer vision applications. Further, we demonstrate the performance of our algorithm by comparing it to several state-of-the-art methods, both in terms of classification accuracy and sparsity.
Tasks Dictionary Learning
Published 2016-04-23
URL http://arxiv.org/abs/1604.06939v2
PDF http://arxiv.org/pdf/1604.06939v2.pdf
PWC https://paperswithcode.com/paper/an-information-theoretic-formulation-of-the
Repo
Framework
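
The KL-center definition above lends itself to a tiny numeric sketch: minimize the maximum KL divergence from a candidate distribution to a set of discrete "atoms". The direction of the divergence and the optimizer choice are assumptions here; the paper works with general distributions on a statistical manifold, not this 4-symbol toy.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import rel_entr

atoms = np.array([[0.70, 0.10, 0.10, 0.10],     # toy dictionary atoms:
                  [0.10, 0.70, 0.10, 0.10],     # distributions over 4 symbols
                  [0.25, 0.25, 0.25, 0.25]])

def max_kl(logits):
    p = np.exp(logits) / np.exp(logits).sum()   # softmax keeps p on the simplex
    # KL(p || atom) for each atom; the center minimizes the worst case.
    return max(rel_entr(p, a).sum() for a in atoms)

res = minimize(max_kl, np.zeros(4), method="Nelder-Mead")
p = np.exp(res.x) / np.exp(res.x).sum()
print(np.round(p, 3))                            # the approximate KL-center
```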