May 6, 2019

3165 words 15 mins read

Paper Group ANR 395


Efficient Clustering of Correlated Variables and Variable Selection in High-Dimensional Linear Models. Structured Matrix Recovery via the Generalized Dantzig Selector. Robust Discriminative Clustering with Sparse Regularizers. A Detailed Rubric for Motion Segmentation. Fast On-Line Kernel Density Estimation for Active Object Localization. Graph Based Convolutional Neural Network. Hollywood in Homes: Crowdsourcing Data Collection for Activity Understanding. Binary classification of multi-channel EEG records based on the $ε$-complexity of continuous vector functions. Temporally Consistent Motion Segmentation from RGB-D Video. mpEAd: Multi-Population EA Diagrams. Tournament selection in zeroth-level classifier systems based on average reward reinforcement learning. Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity. Recurrent Convolutional Networks for Pulmonary Nodule Detection in CT Imaging. Deep Symbolic Representation Learning for Heterogeneous Time-series Classification. Divide and Conquer Networks.

Efficient Clustering of Correlated Variables and Variable Selection in High-Dimensional Linear Models

Title Efficient Clustering of Correlated Variables and Variable Selection in High-Dimensional Linear Models
Authors Niharika Gauraha, Swapan K. Parui
Abstract In this paper, we introduce the Adaptive Cluster Lasso (ACL) method for variable selection in high-dimensional sparse regression models with strongly correlated variables. To handle correlated variables, the concept of clustering or grouping variables and then pursuing model fitting is widely accepted. When the dimension is very high, finding an appropriate group structure is as difficult as the original problem. The ACL is a three-stage procedure where, at the first stage, we use the Lasso (or its adaptive or thresholded version) to do initial selection, then we also include those variables which are not selected by the Lasso but are strongly correlated with the variables selected by the Lasso. At the second stage we cluster the variables based on the reduced set of predictors, and in the third stage we perform sparse estimation such as the Lasso on cluster representatives or the group Lasso based on the structures generated by the clustering procedure. We show that our procedure is consistent and efficient in finding the true underlying population group structure (under the assumption of irrepresentable and beta-min conditions). We also study the group selection consistency of our method, and we support the theory using simulated and pseudo-real dataset examples.
Tasks
Published 2016-03-11
URL http://arxiv.org/abs/1603.03724v1
PDF http://arxiv.org/pdf/1603.03724v1.pdf
PWC https://paperswithcode.com/paper/efficient-clustering-of-correlated-variables
Repo
Framework
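The correlation-expansion and clustering stages of the ACL idea can be illustrated with a small sketch. This is a hedged illustration only, not the authors' code: the 0.8 correlation threshold and the greedy single-linkage grouping are my assumptions for the example.

```python
import numpy as np

def expand_by_correlation(X, selected, threshold=0.8):
    """ACL stage-1 expansion (sketch): after an initial Lasso selection,
    also keep predictors strongly correlated with any selected one."""
    corr = np.corrcoef(X, rowvar=False)  # p x p correlation matrix
    keep = set(selected)
    for j in range(X.shape[1]):
        if j in keep:
            continue
        if any(abs(corr[j, s]) >= threshold for s in selected):
            keep.add(j)
    return sorted(keep)

def cluster_by_correlation(X, candidates, threshold=0.8):
    """ACL stage-2 sketch: greedily group the reduced predictor set
    by absolute correlation (single linkage)."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    clusters = []
    for j in candidates:
        for c in clusters:
            if any(corr[j, k] >= threshold for k in c):
                c.append(j)
                break
        else:
            clusters.append([j])
    return clusters

# Toy example: variables 0 and 1 are near-duplicates; Lasso picked only 0.
rng = np.random.default_rng(0)
x0 = rng.normal(size=200)
X = np.column_stack([x0, x0 + 0.05 * rng.normal(size=200),
                     rng.normal(size=200)])
reduced = expand_by_correlation(X, selected=[0])
print(reduced)                              # variable 1 joins variable 0
print(cluster_by_correlation(X, reduced))   # the pair forms one cluster
```

Stage 3 (Lasso on cluster representatives or the group Lasso over these clusters) would then run on the output of `cluster_by_correlation`.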

Structured Matrix Recovery via the Generalized Dantzig Selector

Title Structured Matrix Recovery via the Generalized Dantzig Selector
Authors Sheng Chen, Arindam Banerjee
Abstract In recent years, structured matrix recovery problems have gained considerable attention for their real-world applications, such as recommender systems and computer vision. Much of the existing work has focused on matrices with low-rank structure, and limited progress has been made on matrices with other types of structure. In this paper we present a non-asymptotic analysis for estimation of generally structured matrices via the generalized Dantzig selector under generic sub-Gaussian measurements. We show that the estimation error can always be succinctly expressed in terms of a few geometric measures of suitable sets which only depend on the structure of the underlying true matrix. In addition, we derive general bounds on these geometric measures for structures characterized by unitarily invariant norms, a large family covering most matrix norms of practical interest. Examples are provided to illustrate the utility of our theoretical development.
Tasks Recommendation Systems
Published 2016-04-12
URL http://arxiv.org/abs/1604.03492v1
PDF http://arxiv.org/pdf/1604.03492v1.pdf
PWC https://paperswithcode.com/paper/structured-matrix-recovery-via-the
Repo
Framework

Robust Discriminative Clustering with Sparse Regularizers

Title Robust Discriminative Clustering with Sparse Regularizers
Authors Nicolas Flammarion, Balamurugan Palaniappan, Francis Bach
Abstract Clustering high-dimensional data often requires some form of dimensionality reduction, where clustered variables are separated from “noise-looking” variables. We cast this problem as finding a low-dimensional projection of the data which is well-clustered. This yields a one-dimensional projection in the simplest situation with two clusters, and extends naturally to a multi-label scenario for more than two clusters. In this paper, (a) we first show that this joint clustering and dimension reduction formulation is equivalent to previously proposed discriminative clustering frameworks, thus leading to convex relaxations of the problem, (b) we propose a novel sparse extension, which is still cast as a convex relaxation and allows estimation in higher dimensions, (c) we propose a natural extension for the multi-label scenario, (d) we provide a new theoretical analysis of the performance of these formulations with a simple probabilistic model, leading to scalings of the form $d=O(\sqrt{n})$ for the affine invariant case and $d=O(n)$ for the sparse case, where $n$ is the number of examples and $d$ the ambient dimension, and finally, (e) we propose an efficient iterative algorithm with running-time complexity proportional to $O(nd^2)$, improving on earlier algorithms which had quadratic complexity in the number of examples.
Tasks Dimensionality Reduction
Published 2016-08-29
URL http://arxiv.org/abs/1608.08052v1
PDF http://arxiv.org/pdf/1608.08052v1.pdf
PWC https://paperswithcode.com/paper/robust-discriminative-clustering-with-sparse
Repo
Framework

A Detailed Rubric for Motion Segmentation

Title A Detailed Rubric for Motion Segmentation
Authors Pia Bideau, Erik Learned-Miller
Abstract Motion segmentation is currently an active area of research in computer vision. The task of comparing different methods of motion segmentation is complicated by the fact that researchers may use subtly different definitions of the problem. Questions such as “Which objects are moving?”, “What is background?”, and “How can we use motion of the camera to segment objects, whether they are static or moving?” are clearly related to each other, but lead to different algorithms, and imply different versions of the ground truth. This report has two goals. The first is to offer a precise definition of motion segmentation so that the intent of an algorithm is as well-defined as possible. The second is to report on new versions of three previously existing data sets that are compatible with this definition. We hope that this more detailed definition, and the three data sets that go with it, will allow more meaningful comparisons of certain motion segmentation methods.
Tasks Motion Segmentation
Published 2016-10-31
URL http://arxiv.org/abs/1610.10033v1
PDF http://arxiv.org/pdf/1610.10033v1.pdf
PWC https://paperswithcode.com/paper/a-detailed-rubric-for-motion-segmentation
Repo
Framework

Fast On-Line Kernel Density Estimation for Active Object Localization

Title Fast On-Line Kernel Density Estimation for Active Object Localization
Authors Anthony D. Rhodes, Max H. Quinn, Melanie Mitchell
Abstract A major goal of computer vision is to enable computers to interpret visual situations—abstract concepts (e.g., “a person walking a dog,” “a crowd waiting for a bus,” “a picnic”) whose image instantiations are linked more by their common spatial and semantic structure than by low-level visual similarity. In this paper, we propose a novel method for prior learning and active object localization for this kind of knowledge-driven search in static images. In our system, prior situation knowledge is captured by a set of flexible, kernel-based density estimations—a situation model—that represent the expected spatial structure of the given situation. These estimations are efficiently updated by information gained as the system searches for relevant objects, allowing the system to use context as it is discovered to narrow the search. More specifically, at any given time in a run on a test image, our system uses image features plus contextual information it has discovered to identify a small subset of training images—an importance cluster—that is deemed most similar to the given test image, given the context. This subset is used to generate an updated situation model in an on-line fashion, using an efficient multipole expansion technique. As a proof of concept, we apply our algorithm to a highly varied and challenging dataset consisting of instances of a “dog-walking” situation. Our results support the hypothesis that dynamically-rendered, context-based probability models can support efficient object localization in visual situations. Moreover, our approach is general enough to be applied to diverse machine learning paradigms requiring interpretable, probabilistic representations generated from partially observed data.
Tasks Active Object Localization, Density Estimation, Object Localization
Published 2016-11-16
URL http://arxiv.org/abs/1611.05369v1
PDF http://arxiv.org/pdf/1611.05369v1.pdf
PWC https://paperswithcode.com/paper/fast-on-line-kernel-density-estimation-for
Repo
Framework
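The core of the situation model is a kernel density estimate that is updated as evidence arrives. The following is a hedged sketch of that idea only: a naive on-line Gaussian KDE over 1-D locations with an O(n) query, whereas the paper accelerates queries with a multipole expansion; the bandwidth and example values are my choices.

```python
import math

class OnlineKDE:
    """Naive on-line Gaussian kernel density estimate (sketch).
    Every query sums over all stored samples; the paper's multipole
    expansion avoids exactly this per-query cost."""
    def __init__(self, bandwidth=1.0):
        self.h = bandwidth
        self.samples = []

    def update(self, x):
        # New evidence discovered during the search is simply appended.
        self.samples.append(x)

    def density(self, x):
        if not self.samples:
            return 0.0
        z = 1.0 / (len(self.samples) * self.h * math.sqrt(2 * math.pi))
        return z * sum(math.exp(-0.5 * ((x - s) / self.h) ** 2)
                       for s in self.samples)

kde = OnlineKDE(bandwidth=0.5)
for obs in [0.0, 0.1, -0.1]:
    kde.update(obs)
print(round(kde.density(0.0), 3))   # high near the observed cluster
print(round(kde.density(3.0), 3))   # near zero far from all evidence
```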

Graph Based Convolutional Neural Network

Title Graph Based Convolutional Neural Network
Authors Michael Edwards, Xianghua Xie
Abstract The benefit of localized features within the regular domain has given rise to the use of Convolutional Neural Networks (CNNs) in machine learning, with great proficiency in image classification. The use of CNNs becomes problematic within the irregular spatial domain because the design and convolution of a kernel filter are non-trivial. One solution to this problem is to utilize graph signal processing techniques and the convolution theorem to perform convolutions on the graph of the irregular domain to obtain feature map responses to learnt filters. We propose graph convolution and pooling operators analogous to those in the regular domain. We also provide gradient calculations on the input data and spectral filters, which allow for the deep learning of an irregular spatial domain problem. Signal filters take the form of spectral multipliers, applying convolution in the graph spectral domain. Applying smooth multipliers results in localized convolutions in the spatial domain, with smoother multipliers providing sharper feature maps. Algebraic Multigrid is presented as a graph pooling method, reducing the resolution of the graph through agglomeration of nodes between layers of the network. Evaluation of performance on the MNIST digit classification problem in both the regular and irregular domain is presented, with comparison drawn to a standard CNN. The proposed graph CNN provides a deep learning method for the irregular domains present in the machine learning community, obtaining 94.23% on the regular grid, and 94.96% on a spatially irregular subsampled MNIST.
Tasks Image Classification
Published 2016-09-28
URL http://arxiv.org/abs/1609.08965v1
PDF http://arxiv.org/pdf/1609.08965v1.pdf
PWC https://paperswithcode.com/paper/graph-based-convolutional-neural-network
Repo
Framework
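The spectral convolution step described in the abstract can be sketched directly from the convolution theorem: transform the vertex signal into the Laplacian eigenbasis, multiply by a spectral filter, and transform back. This is my reading of that one step, not the authors' implementation; the paper additionally learns the multipliers and uses Algebraic Multigrid pooling, neither of which is shown, and the path graph and low-pass filter are example choices.

```python
import numpy as np

def graph_convolve(adj, signal, spectral_multiplier):
    """Filter a vertex signal by multiplication in the graph spectral
    domain (sketch of the convolution-theorem step)."""
    degree = np.diag(adj.sum(axis=1))
    laplacian = degree - adj
    eigvals, U = np.linalg.eigh(laplacian)   # graph Fourier basis
    spectrum = U.T @ signal                  # forward graph Fourier transform
    filtered = spectral_multiplier(eigvals) * spectrum
    return U @ filtered                      # inverse transform

# 4-node path graph; a smooth low-pass multiplier yields a spatially
# localized smoothing of an impulse at node 0, as the abstract notes.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
signal = np.array([1.0, 0.0, 0.0, 0.0])
smoothed = graph_convolve(adj, signal, lambda lam: np.exp(-lam))
print(np.round(smoothed, 3))
```

Because the multiplier equals 1 at the zero eigenvalue, the total signal mass is preserved while high-frequency components are damped.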

Hollywood in Homes: Crowdsourcing Data Collection for Activity Understanding

Title Hollywood in Homes: Crowdsourcing Data Collection for Activity Understanding
Authors Gunnar A. Sigurdsson, Gül Varol, Xiaolong Wang, Ali Farhadi, Ivan Laptev, Abhinav Gupta
Abstract Computer vision has a great potential to help our daily lives by searching for lost keys, watering flowers or reminding us to take a pill. To succeed with such tasks, computer vision methods need to be trained from real and diverse examples of our daily dynamic scenes. Since most such scenes are not particularly exciting, they typically do not appear on YouTube, in movies or TV broadcasts. So how do we collect sufficiently many diverse but boring samples representing our lives? We propose a novel Hollywood in Homes approach to collect such data. Instead of shooting videos in the lab, we ensure diversity by distributing and crowdsourcing the whole process of video creation from script writing to video recording and annotation. Following this procedure we collect a new dataset, Charades, with hundreds of people recording videos in their own homes, acting out casual everyday activities. The dataset is composed of 9,848 annotated videos with an average length of 30 seconds, showing activities of 267 people from three continents. Each video is annotated by multiple free-text descriptions, action labels, action intervals and classes of interacted objects. In total, Charades provides 27,847 video descriptions, 66,500 temporally localized intervals for 157 action classes and 41,104 labels for 46 object classes. Using this rich data, we evaluate and provide baseline results for several tasks including action recognition and automatic description generation. We believe that the realism, diversity, and casual nature of this dataset will present unique challenges and new opportunities for the computer vision community.
Tasks Temporal Action Localization
Published 2016-04-06
URL http://arxiv.org/abs/1604.01753v3
PDF http://arxiv.org/pdf/1604.01753v3.pdf
PWC https://paperswithcode.com/paper/hollywood-in-homes-crowdsourcing-data
Repo
Framework

Binary classification of multi-channel EEG records based on the $ε$-complexity of continuous vector functions

Title Binary classification of multi-channel EEG records based on the $ε$-complexity of continuous vector functions
Authors Boris Darkhovsky, Alexandra Piryatinska, Alexander Kaplan
Abstract A methodology for binary classification of EEG records which correspond to different mental states is proposed. This model-free methodology is based on our theory of the $\epsilon$-complexity of continuous functions which is extended here (see Appendix) to the case of vector functions. This extension permits us to handle multichannel EEG recordings. The essence of the methodology is to use the $\epsilon$-complexity coefficients as features to classify (using well known classifiers) different types of vector functions representing EEG-records corresponding to different types of mental states. We apply our methodology to the problem of classification of multichannel EEG-records related to a group of healthy adolescents and a group of adolescents with schizophrenia. We found that our methodology permits accurate classification of the data in the four-dimensional feature space of the $\epsilon$-complexity coefficients.
Tasks EEG
Published 2016-10-05
URL http://arxiv.org/abs/1610.01633v1
PDF http://arxiv.org/pdf/1610.01633v1.pdf
PWC https://paperswithcode.com/paper/binary-classification-of-multi-channel-eeg
Repo
Framework
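The flavor of the $\epsilon$-complexity coefficients can be conveyed with a simplified sketch. This is my loose reading of the idea, not the authors' procedure: downsample a series at several rates, reconstruct by linear interpolation, and fit a line to log(error) versus log(fraction of retained samples); the fitted intercept and slope play the role of the two complexity coefficients. The downsampling steps and test signals are my choices.

```python
import numpy as np

def complexity_coefficients(values):
    """Fit log(reconstruction error) vs log(fraction of samples kept)
    and return (intercept, slope) as crude complexity features
    (simplified sketch, not the paper's exact definition)."""
    n = len(values)
    grid = np.arange(n)
    fractions, errors = [], []
    for step in (2, 3, 4, 5, 6):
        kept = grid[::step]
        approx = np.interp(grid, kept, values[kept])
        err = np.max(np.abs(values - approx))
        if err > 0:
            fractions.append(len(kept) / n)
            errors.append(err)
    slope, intercept = np.polyfit(np.log(fractions), np.log(errors), 1)
    return intercept, slope

t = np.linspace(0, 1, 601)
smooth = np.sin(2 * np.pi * t)
rough = np.sin(2 * np.pi * t) + 0.3 * np.sin(60 * np.pi * t)
print(complexity_coefficients(smooth))   # low intercept: simple signal
print(complexity_coefficients(rough))    # higher intercept: complex signal
```

A classifier would then be trained on these coefficients, computed per EEG channel.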

Temporally Consistent Motion Segmentation from RGB-D Video

Title Temporally Consistent Motion Segmentation from RGB-D Video
Authors Peter Bertholet, Alexandru-Eugen Ichim, Matthias Zwicker
Abstract We present a method for temporally consistent motion segmentation from RGB-D videos assuming a piecewise rigid motion model. We formulate global energies over entire RGB-D sequences in terms of the segmentation of each frame into a number of objects, and the rigid motion of each object through the sequence. We develop a novel initialization procedure that clusters feature tracks obtained from the RGB data by leveraging the depth information. We minimize the energy using a coordinate descent approach that includes novel techniques to assemble object motion hypotheses. A main benefit of our approach is that it enables us to fuse consistently labeled object segments from all RGB-D frames of an input sequence into individual 3D object reconstructions.
Tasks Motion Segmentation
Published 2016-08-16
URL http://arxiv.org/abs/1608.04642v1
PDF http://arxiv.org/pdf/1608.04642v1.pdf
PWC https://paperswithcode.com/paper/temporally-consistent-motion-segmentation
Repo
Framework

mpEAd: Multi-Population EA Diagrams

Title mpEAd: Multi-Population EA Diagrams
Authors Sebastian Lenartowicz, Mark Wineberg
Abstract Multi-population evolutionary algorithms are, by nature, highly complex and difficult to describe. Even two populations working in concert (or opposition) present a myriad of potential configurations that are often difficult to relate using text alone. Little effort has been made, however, to depict these kinds of systems, relying solely on the simple structural connections (related using ad hoc diagrams) between populations and often leaving out crucial details. In this paper, we propose a notation and accompanying formalism for powerfully depicting these structures and the relationships within them in an intuitive and consistent way. Using our notation, we examine simple co-evolutionary systems and discover new configurations by the simple process of “drawing on a whiteboard”. Finally, we demonstrate that even complex, highly-interconnected systems with large numbers of populations can be understood with ease using the advanced features of our formalism.
Tasks
Published 2016-07-18
URL http://arxiv.org/abs/1607.05213v1
PDF http://arxiv.org/pdf/1607.05213v1.pdf
PWC https://paperswithcode.com/paper/mpead-multi-population-ea-diagrams
Repo
Framework

Tournament selection in zeroth-level classifier systems based on average reward reinforcement learning

Title Tournament selection in zeroth-level classifier systems based on average reward reinforcement learning
Authors Zhaoxiang Zang, Zhao Li, Junying Wang, Zhiping Dan
Abstract As a genetics-based machine learning technique, the zeroth-level classifier system (ZCS) is based on a discounted reward reinforcement learning algorithm, the bucket-brigade algorithm, which optimizes the discounted total reward received by an agent but is not suitable for all multi-step problems, especially large ones. There are undiscounted reinforcement learning methods available, such as R-learning, which optimize the average reward per time step. In this paper, R-learning is used as the reinforcement learning method employed by ZCS, replacing its discounted reward reinforcement learning approach, and tournament selection is used to replace roulette wheel selection in ZCS. The modification results in classifier systems that can support long action chains and are thus able to solve large multi-step problems.
Tasks
Published 2016-04-26
URL http://arxiv.org/abs/1604.07704v1
PDF http://arxiv.org/pdf/1604.07704v1.pdf
PWC https://paperswithcode.com/paper/tournament-selection-in-zeroth-level
Repo
Framework
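The two ingredients swapped into ZCS can each be sketched in a few lines. These are the generic textbook forms (one common variant of the R-learning update, and plain tournament selection), not the authors' classifier-system code; the dictionary-of-dictionaries value table and the toy transition are my assumptions.

```python
import random

def r_learning_update(Q, rho, s, a, r, s2, alpha=0.1, beta=0.01):
    """One R-learning step: update the action value toward the
    average-reward-adjusted target, and nudge the average-reward
    estimate rho only when the chosen action was greedy."""
    greedy = Q[s][a] == max(Q[s].values())
    Q[s][a] += alpha * (r - rho + max(Q[s2].values()) - Q[s][a])
    if greedy:
        rho += beta * (r + max(Q[s2].values()) - max(Q[s].values()) - rho)
    return rho

def tournament_select(population, fitness, k=3):
    """Tournament selection replacing ZCS's roulette wheel:
    draw k rules at random and keep the fittest."""
    return max(random.sample(population, k), key=fitness)

# Toy run: a single transition with reward 1.
Q = {'s0': {'go': 0.0}, 's1': {'go': 0.0}}
rho = r_learning_update(Q, 0.0, 's0', 'go', 1.0, 's1')
print(Q['s0']['go'], rho)
```

Note that rho tracks average reward per step rather than a discounted return, which is what allows long action chains to remain distinguishable.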

Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity

Title Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity
Authors Amit Daniely, Roy Frostig, Yoram Singer
Abstract We develop a general duality between neural networks and compositional kernels, striving towards a better understanding of deep learning. We show that initial representations generated by common random initializations are sufficiently rich to express all functions in the dual kernel space. Hence, though the training objective is hard to optimize in the worst case, the initial weights form a good starting point for optimization. Our dual view also reveals a pragmatic and aesthetic perspective of neural networks and underscores their expressive power.
Tasks
Published 2016-02-18
URL http://arxiv.org/abs/1602.05897v2
PDF http://arxiv.org/pdf/1602.05897v2.pdf
PWC https://paperswithcode.com/paper/toward-deeper-understanding-of-neural
Repo
Framework

Recurrent Convolutional Networks for Pulmonary Nodule Detection in CT Imaging

Title Recurrent Convolutional Networks for Pulmonary Nodule Detection in CT Imaging
Authors Petros-Pavlos Ypsilantis, Giovanni Montana
Abstract Computed tomography (CT) generates a stack of cross-sectional images covering a region of the body. The visual assessment of these images for the identification of potential abnormalities is a challenging and time-consuming task due to the large amount of information that needs to be processed. In this article we propose a deep artificial neural network architecture, ReCTnet, for the fully-automated detection of pulmonary nodules in CT scans. The architecture learns to distinguish nodules and normal structures at the pixel level and generates three-dimensional probability maps highlighting areas that are likely to harbour the objects of interest. Convolutional and recurrent layers are combined to learn expressive image representations exploiting the spatial dependencies across axial slices. We demonstrate that leveraging intra-slice dependencies substantially increases the sensitivity to detect pulmonary nodules without inflating the false positive rate. On the publicly available LIDC/IDRI dataset consisting of 1,018 annotated CT scans, ReCTnet reaches a detection sensitivity of 90.5% with an average of 4.5 false positives per scan. Comparisons with a competing multi-channel convolutional neural network for multi-slice segmentation and other published methodologies using the same dataset provide evidence that ReCTnet offers significant performance gains.
Tasks Computed Tomography (CT)
Published 2016-09-28
URL http://arxiv.org/abs/1609.09143v2
PDF http://arxiv.org/pdf/1609.09143v2.pdf
PWC https://paperswithcode.com/paper/recurrent-convolutional-networks-for
Repo
Framework

Deep Symbolic Representation Learning for Heterogeneous Time-series Classification

Title Deep Symbolic Representation Learning for Heterogeneous Time-series Classification
Authors Shengdong Zhang, Soheil Bahrampour, Naveen Ramakrishnan, Mohak Shah
Abstract In this paper, we consider the problem of event classification with multi-variate time series data consisting of heterogeneous (continuous and categorical) variables. The complex temporal dependencies between the variables, combined with the sparsity of the data, make the event classification problem particularly challenging. Most state-of-the-art approaches address this either by designing hand-engineered features or by breaking up the problem over homogeneous variates. In this work, we propose and compare three representation learning algorithms over symbolized sequences which enable classification of heterogeneous time-series data using a deep architecture. The proposed representations are trained jointly along with the rest of the network architecture in an end-to-end fashion that makes the learned features discriminative for the given task. Experiments on three real-world datasets demonstrate the effectiveness of the proposed approaches.
Tasks Representation Learning, Time Series, Time Series Classification
Published 2016-12-05
URL http://arxiv.org/abs/1612.01254v1
PDF http://arxiv.org/pdf/1612.01254v1.pdf
PWC https://paperswithcode.com/paper/deep-symbolic-representation-learning-for
Repo
Framework
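The "symbolized sequences" the abstract builds on presuppose a discretization step. The following is a generic quantile-based (SAX-style) symbolization sketch; the paper learns representations over symbolized sequences, and its exact symbolization scheme may differ, so the four-symbol alphabet and example values here are my assumptions.

```python
import numpy as np

def symbolize(series, n_symbols=4):
    """Map a continuous series to a small symbol alphabet by binning
    values at their empirical quantiles (generic sketch)."""
    edges = np.quantile(series, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.digitize(series, edges)

x = np.array([0.1, 2.0, -1.5, 0.3, 5.0, -2.0])
print(symbolize(x))   # each value replaced by its quantile-bin symbol
```

Categorical variables need no such step; after symbolization, both kinds of variates feed the deep architecture as discrete tokens.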

Divide and Conquer Networks

Title Divide and Conquer Networks
Authors Alex Nowak-Vila, David Folqué, Joan Bruna
Abstract We consider the learning of algorithmic tasks by mere observation of input-output pairs. Rather than studying this as a black-box discrete regression problem with no assumption whatsoever on the input-output mapping, we concentrate on tasks that are amenable to the principle of divide and conquer, and study its implications in terms of learning. This principle creates a powerful inductive bias that we leverage with neural architectures that are defined recursively and dynamically, by learning two scale-invariant atomic operations: how to split a given input into smaller sets, and how to merge two partially solved tasks into a larger partial solution. Our model can be trained in weakly supervised environments, namely by just observing input-output pairs, and in even weaker environments, using a non-differentiable reward signal. Moreover, thanks to the dynamic aspect of our architecture, we can incorporate the computational complexity as a regularization term that can be optimized by backpropagation. We demonstrate the flexibility and efficiency of the Divide-and-Conquer Network on several combinatorial and geometric tasks: convex hull, clustering, knapsack and Euclidean TSP. Thanks to the dynamic programming nature of our model, we show significant improvements in terms of generalization error and computational complexity.
Tasks
Published 2016-11-08
URL http://arxiv.org/abs/1611.02401v7
PDF http://arxiv.org/pdf/1611.02401v7.pdf
PWC https://paperswithcode.com/paper/divide-and-conquer-networks
Repo
Framework
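The recursive control flow the paper learns can be shown as a plain skeleton. In the paper the split and merge operators are trained neural modules; here they are hand-written functions (with sorting as the worked instance) so the recursion itself can be run, which is my illustrative choice rather than anything from the paper.

```python
def divide_and_conquer(items, split, merge, atomic):
    """Skeleton of the recursion: split the input, solve each part
    recursively, then merge the partial solutions."""
    if len(items) <= 1:
        return atomic(items)
    left, right = split(items)
    return merge(divide_and_conquer(left, split, merge, atomic),
                 divide_and_conquer(right, split, merge, atomic))

def halve(xs):
    # Stand-in for the learned split operator.
    mid = len(xs) // 2
    return xs[:mid], xs[mid:]

def merge_sorted(a, b):
    # Stand-in for the learned merge operator.
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

print(divide_and_conquer([3, 1, 4, 1, 5, 9, 2, 6],
                         halve, merge_sorted, list))
```

Because both operators act on inputs of any size, the same learned modules generalize across problem scales, which is the inductive bias the abstract emphasizes.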