Paper Group ANR 59
Maximally Divergent Intervals for Anomaly Detection. Demonstrating the Feasibility of Automatic Game Balancing. Generalizing the Convolution Operator to extend CNNs to Irregular Domains. Twitter as a Source of Global Mobility Patterns for Social Good. Review of Action Recognition and Detection Methods. Learning Abstract Classes using Deep Learning. …
Maximally Divergent Intervals for Anomaly Detection
Title | Maximally Divergent Intervals for Anomaly Detection |
Authors | Erik Rodner, Björn Barz, Yanira Guanche, Milan Flach, Miguel Mahecha, Paul Bodesheim, Markus Reichstein, Joachim Denzler |
Abstract | We present new methods for batch anomaly detection in multivariate time series. Our methods are based on maximizing the Kullback-Leibler divergence between the data distribution within and outside an interval of the time series. An empirical analysis shows the benefits of our algorithms compared to methods that treat each time step independently and do not optimize over all possible intervals. |
Tasks | Anomaly Detection, Time Series |
Published | 2016-10-21 |
URL | http://arxiv.org/abs/1610.06761v1 |
http://arxiv.org/pdf/1610.06761v1.pdf | |
PWC | https://paperswithcode.com/paper/maximally-divergent-intervals-for-anomaly |
Repo | |
Framework | |
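The core idea — score every candidate interval by the KL divergence between the distribution inside it and the distribution outside it — can be sketched for the univariate Gaussian case. This is a simplification of the paper's multivariate setting, and the function names and scan limits are ours:

```python
import numpy as np

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    # KL(N(mu_p, var_p) || N(mu_q, var_q)) for univariate Gaussians
    return 0.5 * (np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def most_divergent_interval(x, min_len=5, max_len=50):
    """Scan all intervals [a, b) and return the one whose empirical
    Gaussian diverges most (in KL) from the rest of the series."""
    n = len(x)
    best, best_score = None, -np.inf
    for a in range(n):
        for b in range(a + min_len, min(a + max_len, n) + 1):
            inside, outside = x[a:b], np.concatenate([x[:a], x[b:]])
            if len(outside) < 2:
                continue
            score = gaussian_kl(inside.mean(), inside.var() + 1e-8,
                                outside.mean(), outside.var() + 1e-8)
            if score > best_score:
                best, best_score = (a, b), score
    return best, best_score
```

The brute-force scan is quadratic in the series length; the paper's contribution is, among other things, making this kind of interval search efficient.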
Demonstrating the Feasibility of Automatic Game Balancing
Title | Demonstrating the Feasibility of Automatic Game Balancing |
Authors | Vanessa Volz, Günter Rudolph, Boris Naujoks |
Abstract | Game balancing is an important part of the (computer) game design process, in which designers adapt a game prototype so that the resulting gameplay is as entertaining as possible. In industry, the evaluation of a game is often based on costly playtests with human players. A natural next step is to automate this process using surrogate models for the prediction of gameplay and outcome. In this paper, the feasibility of automatic balancing using simulation- and deck-based objectives is investigated for the card game Top Trumps. Additionally, the necessity of a multi-objective approach is asserted by a comparison with the only known (single-objective) method. We apply a multi-objective evolutionary algorithm to obtain decks that optimise objectives, e.g. win rate and average number of tricks, developed to express the fairness and the excitement of a game of Top Trumps. The resulting decks are compared with published Top Trumps decks using simulation-based objectives. We demonstrate that it is possible to generate decks at least as good as published ones in terms of these objectives. Our results indicate that automatic balancing with the presented approach is feasible even for more complex games such as real-time strategy games. |
Tasks | Real-Time Strategy Games |
Published | 2016-03-11 |
URL | http://arxiv.org/abs/1603.03795v1 |
http://arxiv.org/pdf/1603.03795v1.pdf | |
PWC | https://paperswithcode.com/paper/demonstrating-the-feasibility-of-automatic |
Repo | |
Framework | |
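The simulation-based balancing loop can be sketched with toy Top Trumps rules and a single fairness objective (the paper uses a multi-objective evolutionary algorithm and the real game rules; the rules, objective, and parameters below are our own simplification):

```python
import numpy as np

rng = np.random.default_rng(1)

def play(deck, split, max_turns=200):
    """One simplified game: the active player names the best attribute of
    their top card; the higher value on that attribute wins both cards."""
    hands = [list(split[0]), list(split[1])]
    active = 0
    for _ in range(max_turns):
        if not hands[0] or not hands[1]:
            break
        c0, c1 = hands[0][0], hands[1][0]
        attr = int(np.argmax(deck[hands[active][0]]))
        if deck[c0][attr] == deck[c1][attr]:
            active = 1 - active  # tie: pass the turn
            continue
        winner = 0 if deck[c0][attr] > deck[c1][attr] else 1
        hands[0].pop(0); hands[1].pop(0)
        hands[winner] += [c0, c1]
        active = winner
    return 0 if len(hands[0]) >= len(hands[1]) else 1

def fairness(deck, games=40):
    """Toy objective: deviation of the first player's win rate from 0.5
    over random deals (0 = perfectly balanced)."""
    ids = np.arange(len(deck))
    wins = 0
    for _ in range(games):
        perm = rng.permutation(ids)
        wins += play(deck, (perm[:len(ids) // 2], perm[len(ids) // 2:])) == 0
    return abs(wins / games - 0.5)

def evolve(deck, steps=30):
    """Elitist (1+1) evolution: mutate one attribute value, keep if no worse."""
    best, best_f = deck.copy(), fairness(deck)
    for _ in range(steps):
        cand = best.copy()
        i, j = rng.integers(len(cand)), rng.integers(cand.shape[1])
        cand[i, j] = rng.integers(1, 20)
        f = fairness(cand)
        if f <= best_f:
            best, best_f = cand, f
    return best, best_f
```

Extending this to several objectives (e.g. adding average trick count and keeping a Pareto archive instead of a single elite) recovers the multi-objective setup the paper argues for.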
Generalizing the Convolution Operator to extend CNNs to Irregular Domains
Title | Generalizing the Convolution Operator to extend CNNs to Irregular Domains |
Authors | Jean-Charles Vialatte, Vincent Gripon, Grégoire Mercier |
Abstract | Convolutional Neural Networks (CNNs) have become the state-of-the-art in supervised learning vision tasks. Their convolutional filters are of paramount importance because they allow the network to learn patterns while disregarding their locations in input images. When facing highly irregular domains, generalized convolution operators based on an underlying graph structure have been proposed. However, these operators do not exactly match standard ones on grid graphs, and introduce unwanted additional invariance (e.g. with regard to rotations). We propose a novel approach to generalize CNNs to irregular domains using weight sharing and graph-based operators. Using experiments, we show that these models resemble CNNs on regular domains and offer better performance than multilayer perceptrons on distorted ones. |
Tasks | |
Published | 2016-06-03 |
URL | http://arxiv.org/abs/1606.01166v4 |
http://arxiv.org/pdf/1606.01166v4.pdf | |
PWC | https://paperswithcode.com/paper/generalizing-the-convolution-operator-to |
Repo | |
Framework | |
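The weight-sharing idea — edges of the graph carry labels, and all edges with the same label share one weight — can be sketched as follows (a minimal formulation of ours, not the paper's exact operator). On a path graph with labels left/self/right it reduces to a standard 1-D convolution stencil, which is exactly the "matches standard convolution on grid graphs" property the abstract asks for:

```python
import numpy as np

def graph_conv(x, labeled_edges, weights):
    """Generalized convolution: y[i] = sum over edges (i, j, label l)
    of weights[l] * x[j]; edges sharing a label share a weight."""
    y = np.zeros_like(x, dtype=float)
    for i, j, l in labeled_edges:
        y[i] += weights[l] * x[j]
    return y

def path_edges(n):
    """Labels 0/1/2 = left neighbour / self / right neighbour, so the
    graph operator matches an ordinary 1-D convolution on this grid."""
    edges = []
    for i in range(n):
        if i > 0:
            edges.append((i, i - 1, 0))
        edges.append((i, i, 1))
        if i < n - 1:
            edges.append((i, i + 1, 2))
    return edges
```

On an irregular graph, the same `graph_conv` applies unchanged; only the edge labeling has to be supplied.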
Twitter as a Source of Global Mobility Patterns for Social Good
Title | Twitter as a Source of Global Mobility Patterns for Social Good |
Authors | Mark Dredze, Manuel García-Herranz, Alex Rutherford, Gideon Mann |
Abstract | Data on human spatial distribution and movement is essential for understanding and analyzing social systems. However, existing sources for these data are lacking in various ways: they are difficult to access, biased, have poor geographical or temporal resolution, or are significantly delayed. In this paper, we describe how geolocation data from Twitter can be used to estimate global mobility patterns and address these shortcomings. These findings will inform how this novel data source can be harnessed to address humanitarian and development efforts. |
Tasks | |
Published | 2016-06-20 |
URL | http://arxiv.org/abs/1606.06343v1 |
http://arxiv.org/pdf/1606.06343v1.pdf | |
PWC | https://paperswithcode.com/paper/twitter-as-a-source-of-global-mobility |
Repo | |
Framework | |
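A minimal version of the mobility-estimation step — turning per-user sequences of geotagged tweets into aggregate origin–destination flows — might look like this (the data layout, threshold, and function names are our assumptions, not the paper's pipeline):

```python
import math
from collections import Counter

def haversine_km(lat1, lon1, lat2, lon2):
    # great-circle distance in kilometres
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def mobility_flows(tweets, min_km=100.0):
    """tweets: {user: [(timestamp, lat, lon, country), ...]}.
    Count transitions between consecutive geotagged tweets of each user
    that exceed min_km, aggregated as (origin, destination) country flows."""
    flows = Counter()
    for user, seq in tweets.items():
        seq = sorted(seq)
        for (t0, la0, lo0, c0), (t1, la1, lo1, c1) in zip(seq, seq[1:]):
            if haversine_km(la0, lo0, la1, lo1) >= min_km:
                flows[(c0, c1)] += 1
    return flows
```

The distance threshold filters out within-city jitter so that only genuine relocations contribute to the flow matrix.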
Review of Action Recognition and Detection Methods
Title | Review of Action Recognition and Detection Methods |
Authors | Soo Min Kang, Richard P. Wildes |
Abstract | In computer vision, action recognition refers to the act of classifying an action that is present in a given video, and action detection involves locating actions of interest in space and/or time. Videos, which store photometric information (e.g. RGB intensity values) on a lattice, contain cues that can assist in identifying the imaged action. The process of action recognition and detection often begins with extracting useful features and encoding them so that they are discriminative for the task at hand. Encoded features are then processed through a classifier to identify the action class and its spatial and/or temporal locations. In this report, a thorough review of various action recognition and detection algorithms in computer vision is provided by analyzing the two-step process of a typical action recognition and detection algorithm: (i) extraction and encoding of features, and (ii) classifying features into action classes. With the goal of matching the human ability to identify actions irrespective of the nuisance variables present in the field of view, state-of-the-art methods are reviewed and remaining open problems are addressed in the final chapter. |
Tasks | Action Detection, Temporal Action Localization |
Published | 2016-10-21 |
URL | http://arxiv.org/abs/1610.06906v2 |
http://arxiv.org/pdf/1610.06906v2.pdf | |
PWC | https://paperswithcode.com/paper/review-of-action-recognition-and-detection |
Repo | |
Framework | |
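The two-step pipeline the review analyzes (feature extraction/encoding, then classification) can be illustrated end to end with a deliberately crude motion descriptor and a nearest-class-mean classifier (both are our stand-ins for the many real methods the report surveys):

```python
import numpy as np

def motion_features(video):
    """video: (T, H, W) array. Step (i): a crude motion descriptor --
    mean absolute temporal difference, pooled over a coarse 2x2 grid."""
    diff = np.abs(np.diff(video.astype(float), axis=0)).mean(axis=0)
    h, w = diff.shape
    return np.array([diff[i * h // 2:(i + 1) * h // 2,
                          j * w // 2:(j + 1) * w // 2].mean()
                     for i in range(2) for j in range(2)])

class NearestMeanClassifier:
    """Step (ii): assign the class whose mean feature vector is closest."""
    def fit(self, X, y):
        self.classes_ = sorted(set(y))
        self.means_ = {c: np.mean([x for x, l in zip(X, y) if l == c], axis=0)
                       for c in self.classes_}
        return self

    def predict(self, x):
        return min(self.classes_, key=lambda c: np.linalg.norm(x - self.means_[c]))
```

Swapping in stronger descriptors (optical flow, learned features) and classifiers changes only the two plug-in components, which is precisely why the review organizes methods along these two axes.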
Learning Abstract Classes using Deep Learning
Title | Learning Abstract Classes using Deep Learning |
Authors | Sebastian Stabinger, Antonio Rodriguez-Sanchez, Justus Piater |
Abstract | Humans are generally good at learning abstract concepts about objects and scenes (e.g. spatial orientation, relative sizes, etc.). In recent years, convolutional neural networks have achieved near-human performance in recognizing concrete classes (i.e. specific object categories). This paper tests the performance of a current CNN (GoogLeNet) on the task of differentiating between abstract classes that are trivially differentiable for humans. We trained and tested the CNN on the two abstract classes of horizontal and vertical orientation and determined how well the network is able to transfer the learned classes to other, previously unseen objects. |
Tasks | |
Published | 2016-06-17 |
URL | http://arxiv.org/abs/1606.05506v1 |
http://arxiv.org/pdf/1606.05506v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-abstract-classes-using-deep-learning |
Repo | |
Framework | |
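The experimental protocol — learn horizontal vs. vertical on one object, then test transfer to a different object — can be mimicked with a synthetic stimulus generator and a simple orientation statistic standing in for GoogLeNet (everything below is our illustration, not the paper's stimuli or network):

```python
import numpy as np

def oriented_image(orientation, shape="bar", size=16, rng=None):
    """Render a horizontal or vertical object; 'shape' switches the object
    so transfer to previously unseen objects can be tested."""
    rng = rng or np.random.default_rng()
    img = np.zeros((size, size))
    c = int(rng.integers(4, size - 4))
    if shape == "bar":
        line = slice(2, size - 2)
    else:  # "dashes": same orientations, a different object
        line = np.arange(2, size - 2, 2)
    if orientation == "horizontal":
        img[c, line] = 1.0
    else:
        img[line, c] = 1.0
    return img

def orientation_score(img):
    """Proxy classifier: a horizontal object concentrates its mass in one
    row, so the variance of the row means exceeds that of the column means."""
    return "horizontal" if img.mean(axis=1).var() > img.mean(axis=0).var() else "vertical"
```

In the paper's setting, the interesting question is whether a CNN trained only on `shape="bar"` generalizes to `shape="dashes"`; the hand-crafted statistic here transfers trivially, which is exactly what makes the CNN's behaviour worth measuring.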
Statistical Inference for Cluster Trees
Title | Statistical Inference for Cluster Trees |
Authors | Jisu Kim, Yen-Chi Chen, Sivaraman Balakrishnan, Alessandro Rinaldo, Larry Wasserman |
Abstract | A cluster tree provides a highly-interpretable summary of a density function by representing the hierarchy of its high-density clusters. It is estimated using the empirical tree, which is the cluster tree constructed from a density estimator. This paper addresses the basic question of quantifying our uncertainty by assessing the statistical significance of topological features of an empirical cluster tree. We first study a variety of metrics that can be used to compare different trees, analyze their properties and assess their suitability for inference. We then propose methods to construct and summarize confidence sets for the unknown true cluster tree. We introduce a partial ordering on cluster trees which we use to prune some of the statistically insignificant features of the empirical tree, yielding interpretable and parsimonious cluster trees. Finally, we illustrate the proposed methods on a variety of synthetic examples and furthermore demonstrate their utility in the analysis of a Graft-versus-Host Disease (GvHD) data set. |
Tasks | |
Published | 2016-05-20 |
URL | http://arxiv.org/abs/1605.06416v3 |
http://arxiv.org/pdf/1605.06416v3.pdf | |
PWC | https://paperswithcode.com/paper/statistical-inference-for-cluster-trees |
Repo | |
Framework | |
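A one-dimensional intuition for the paper's question — which topological features of the empirical tree are statistically significant — is to bootstrap the number of KDE modes (the leaves of a 1-D cluster tree). This is a crude stand-in for the paper's confidence sets and pruning ordering, with our own bandwidth and grid choices:

```python
import numpy as np

def kde_modes(x, grid, bw):
    """Number of local maxima of a Gaussian KDE evaluated on a grid --
    the leaves of the (one-dimensional) cluster tree."""
    d = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / bw) ** 2).sum(axis=1)
    return sum(1 for i in range(1, len(d) - 1) if d[i - 1] < d[i] > d[i + 1])

def bootstrap_mode_counts(x, bw, n_boot=100, seed=0):
    """Bootstrap distribution of the mode count: features that appear in
    only a small fraction of resamples are candidates for pruning."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(x.min() - 1, x.max() + 1, 200)
    counts = [kde_modes(rng.choice(x, size=len(x)), grid, bw)
              for _ in range(n_boot)]
    return np.bincount(counts)
```

A mode that survives in nearly all bootstrap resamples is plausibly real; one that appears in few of them would be pruned as insignificant, in the spirit of the paper's partial ordering.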
Stochastic dual averaging methods using variance reduction techniques for regularized empirical risk minimization problems
Title | Stochastic dual averaging methods using variance reduction techniques for regularized empirical risk minimization problems |
Authors | Tomoya Murata, Taiji Suzuki |
Abstract | We consider a composite convex minimization problem associated with regularized empirical risk minimization, which often arises in machine learning. We propose two new stochastic gradient methods based on the stochastic dual averaging method with variance reduction. Our methods generate a sparser solution than existing methods because we do not need to take the average of the history of the solutions. This is favorable in terms of both interpretability and generalization. Moreover, our methods have theoretical support for both strongly and non-strongly convex regularizers and achieve the best known convergence rates among existing non-accelerated stochastic gradient methods. |
Tasks | |
Published | 2016-03-08 |
URL | http://arxiv.org/abs/1603.02412v1 |
http://arxiv.org/pdf/1603.02412v1.pdf | |
PWC | https://paperswithcode.com/paper/stochastic-dual-averaging-methods-using |
Repo | |
Framework | |
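To make the ingredients concrete, here is a sketch of regularized dual averaging (whose closed-form l1 step yields exact zeros, hence the sparsity claim) fed with SVRG-style variance-reduced gradients. This is our own toy combination on a lasso problem, not the paper's algorithms, and `lam`/`gamma` are constants we picked for the example:

```python
import numpy as np

def soft_rda_update(g_bar, t, lam, gamma):
    # closed-form dual averaging step for an l1 regularizer:
    # coordinates with |average gradient| <= lam are exactly zero
    return -(np.sqrt(t) / gamma) * np.sign(g_bar) * np.maximum(np.abs(g_bar) - lam, 0.0)

def svrg_rda(A, b, lam=0.05, gamma=10.0, epochs=20, seed=0):
    """Dual averaging with variance-reduced gradients for the lasso problem
    min_w 1/(2n)||Aw - b||^2 + lam*||w||_1 (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    w, g_sum, t = np.zeros(d), np.zeros(d), 0
    for _ in range(epochs):
        w_snap = w.copy()
        mu = A.T @ (A @ w_snap - b) / n          # full gradient at the snapshot
        for _ in range(n):
            i = rng.integers(n)
            gi = A[i] * (A[i] @ w - b[i])        # stochastic gradient at w
            gi_snap = A[i] * (A[i] @ w_snap - b[i])
            v = gi - gi_snap + mu                # variance-reduced gradient
            t += 1
            g_sum += v
            w = soft_rda_update(g_sum / t, t, lam, gamma)
    return w
```

Note that the iterate itself is sparse at every step; no averaging of the solution history is needed, which is the property the abstract emphasizes.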
Hand Action Detection from Ego-centric Depth Sequences with Error-correcting Hough Transform
Title | Hand Action Detection from Ego-centric Depth Sequences with Error-correcting Hough Transform |
Authors | Chi Xu, Lakshmi Narasimhan Govindarajan, Li Cheng |
Abstract | Detecting hand actions from ego-centric depth sequences is a practically challenging problem, owing mostly to the complex and dexterous nature of hand articulations as well as non-stationary camera motion. We address this problem via a Hough transform based approach coupled with a discriminatively learned error-correcting component to tackle the well known issue of incorrect votes from the Hough transform. In this framework, local parts vote collectively for the start & end positions of each action over time. We also construct an in-house annotated dataset of 300 long videos, containing 3,177 single-action subsequences over 16 action classes collected from 26 individuals. Our system is empirically evaluated on this real-life dataset for both the action recognition and detection tasks, and is shown to produce satisfactory results. To facilitate reproduction, the new dataset and our implementation are also provided online. |
Tasks | Action Detection, Temporal Action Localization |
Published | 2016-06-07 |
URL | http://arxiv.org/abs/1606.02031v1 |
http://arxiv.org/pdf/1606.02031v1.pdf | |
PWC | https://paperswithcode.com/paper/hand-action-detection-from-ego-centric-depth |
Repo | |
Framework | |
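The temporal voting scheme — local parts cast votes for an action's start and end frames, and the accumulator's peak gives the detection — can be sketched as follows (this omits the paper's learned error-correcting component, which would reweight or veto inconsistent votes; data layout is ours):

```python
import numpy as np

def hough_localize(detections, n_frames):
    """detections: list of (frame, start_offset, end_offset) local-part votes.
    Each part votes for the interval [frame - start_offset, frame + end_offset];
    the argmax of the 2-D (start, end) accumulator is the detected interval."""
    acc = np.zeros((n_frames, n_frames))
    for f, ds, de in detections:
        s, e = f - ds, f + de
        if 0 <= s <= e < n_frames:
            acc[s, e] += 1
    s, e = np.unravel_index(np.argmax(acc), acc.shape)
    return int(s), int(e)
```

A few spurious votes land in isolated accumulator cells and are outvoted by the consistent ones, which is the basic robustness of Hough voting; the error-correcting component targets the cases where incorrect votes do cluster.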
Surround suppression explained by long-range recruitment of local competition, in a columnar V1 model
Title | Surround suppression explained by long-range recruitment of local competition, in a columnar V1 model |
Authors | Hongzhi You, Giacomo Indiveri, Dylan Richard Muir |
Abstract | Although neurons in columns of visual cortex of adult carnivores and primates share similar orientation tuning preferences, responses of nearby neurons are surprisingly sparse and temporally uncorrelated, especially in response to complex visual scenes. The mechanisms underlying this counter-intuitive combination of response properties are still unknown. Here we present a computational model of columnar visual cortex which explains experimentally observed integration of complex features across the visual field, and which is consistent with anatomical and physiological profiles of cortical excitation and inhibition. In this model, sparse local excitatory connections within columns, coupled with strong unspecific local inhibition and functionally-specific long-range excitatory connections across columns, give rise to competitive dynamics that reproduce experimental observations. Our results explain surround modulation of responses to simple and complex visual stimuli, including reduced correlation of nearby excitatory neurons, increased excitatory response selectivity, increased inhibitory selectivity, and complex orientation-tuning of surround modulation. |
Tasks | |
Published | 2016-11-03 |
URL | https://arxiv.org/abs/1611.00945v2 |
https://arxiv.org/pdf/1611.00945v2.pdf | |
PWC | https://paperswithcode.com/paper/surround-suppression-explained-by-long-range |
Repo | |
Framework | |
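The central mechanism — sparse recurrent excitation plus strong unspecific local inhibition producing competition within a column — can be captured in a few lines with a rate model (a minimal sketch with parameters of our choosing, far smaller than the paper's columnar model):

```python
import numpy as np

def simulate_column(inputs, w_exc=0.2, w_inh=3.0, steps=2000, dt=0.01):
    """Rate model of one column: each unit receives its own input, weak
    self-excitation, and strong inhibition pooled over all units, which
    yields competitive (soft winner-take-all) dynamics."""
    x = np.zeros(len(inputs))
    for _ in range(steps):
        inh = w_inh * x.sum()                     # unspecific pooled inhibition
        drive = inputs + w_exc * x - inh          # recurrent drive per unit
        x += dt * (-x + np.maximum(drive, 0.0))   # Euler step of the rate ODE
    return x
```

With nearly equal inputs, the most strongly driven unit suppresses the weakest one entirely: the hallmark of competition that the paper recruits over long range to explain surround suppression.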
Learning Determinantal Point Processes in Sublinear Time
Title | Learning Determinantal Point Processes in Sublinear Time |
Authors | Christophe Dupuy, Francis Bach |
Abstract | We propose a new class of determinantal point processes (DPPs) which can be manipulated for inference and parameter learning in potentially sublinear time in the number of items. This class, based on a specific low-rank factorization of the marginal kernel, is particularly suited to a subclass of continuous DPPs and DPPs defined on exponentially many items. We apply this new class to modelling text documents as sampling a DPP of sentences, and propose a conditional maximum likelihood formulation to model topic proportions, which is made possible with no approximation for our class of DPPs. We present an application to document summarization with a DPP on $2^{500}$ items. |
Tasks | Document Summarization, Point Processes |
Published | 2016-10-19 |
URL | http://arxiv.org/abs/1610.05925v1 |
http://arxiv.org/pdf/1610.05925v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-determinantal-point-processes-in |
Repo | |
Framework | |
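The computational payoff of a low-rank marginal kernel can be shown directly: with K = V V^T for an n x r factor V, inclusion probabilities need only the |S| relevant rows of V, never the n x n kernel. (The plain factorization below stands in for the paper's specific low-rank structure.)

```python
import numpy as np

def inclusion_prob(V, S):
    """P(S subset of Y) = det(K_S) for a DPP with marginal kernel K = V V^T.
    Only the |S| rows of V indexed by S are touched, so the cost is
    independent of the total number of items n."""
    Vs = V[list(S)]
    return float(np.linalg.det(Vs @ Vs.T))
```

For the paper's application with $2^{500}$ items, forming K explicitly is impossible, so operating through the factor is not just faster but essential.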
Multi-view Generative Adversarial Networks
Title | Multi-view Generative Adversarial Networks |
Authors | Mickaël Chen, Ludovic Denoyer |
Abstract | Learning over multi-view data is a challenging problem with strong practical applications. Most related studies focus on the classification point of view and assume that all the views are available at any time. We consider an extension of this framework in two directions. First, based on the BiGAN model, the Multi-view BiGAN (MV-BiGAN) is able to perform density estimation from multi-view inputs. Second, it can deal with missing views and is able to update its prediction when additional views are provided. We illustrate these properties on a set of experiments over different datasets. |
Tasks | Density Estimation |
Published | 2016-11-07 |
URL | http://arxiv.org/abs/1611.02019v2 |
http://arxiv.org/pdf/1611.02019v2.pdf | |
PWC | https://paperswithcode.com/paper/multi-view-generative-adversarial-networks |
Repo | |
Framework | |
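The property that lets MV-BiGAN accept missing views and refine its output as views arrive can be illustrated by an aggregation scheme that sums view-specific encodings over whichever views are present (an illustrative mechanism of ours, not the authors' architecture):

```python
import numpy as np

def encode_views(views, view_weights, agg_bias):
    """Aggregate an arbitrary subset of views into one representation by
    summing view-specific linear encodings; absent views simply contribute
    nothing, so the encoder works with any subset and can be updated when
    additional views arrive."""
    h = agg_bias.copy()
    for name, x in views.items():
        h += view_weights[name] @ x
    return np.tanh(h)
```

In the full model, this kind of aggregated code is what the BiGAN-style generator and discriminator condition on; here it just demonstrates that adding a view changes (refines) the representation rather than breaking it.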
Improved Quick Hypervolume Algorithm
Title | Improved Quick Hypervolume Algorithm |
Authors | Andrzej Jaszkiewicz |
Abstract | In this paper, we present a significant improvement of the Quick Hypervolume algorithm, one of the state-of-the-art algorithms for calculating the exact hypervolume of the space dominated by a set of d-dimensional points. This value is often used as a quality indicator in multiobjective evolutionary algorithms and other multiobjective metaheuristics, and the efficiency of calculating it is of crucial importance, especially for large sets or many-dimensional objective spaces. We use a divide-and-conquer scheme similar to that of the original Quick Hypervolume algorithm, but our algorithm splits the problem into smaller sub-problems in a different way. Through both theoretical analysis and a computational study, we show that our approach improves the computational complexity of the algorithm and its practical running times. |
Tasks | |
Published | 2016-12-11 |
URL | http://arxiv.org/abs/1612.03402v4 |
http://arxiv.org/pdf/1612.03402v4.pdf | |
PWC | https://paperswithcode.com/paper/improved-quick-hypervolume-algorithm |
Repo | |
Framework | |
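For reference, the quantity being computed is the volume of the union of the boxes spanned by each point and the reference point. A brute-force inclusion-exclusion version (exponential in the number of points, so nothing like the paper's divide-and-conquer algorithm) is a handy correctness oracle for tiny fronts:

```python
import numpy as np
from itertools import combinations

def hypervolume(points, ref):
    """Exact hypervolume (minimization: union of boxes [p, ref]) by
    inclusion-exclusion over all non-empty subsets of points."""
    pts = np.asarray(points, float)
    ref = np.asarray(ref, float)
    n, total = len(pts), 0.0
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            corner = pts[list(idx)].max(axis=0)   # lower corner of the boxes' intersection
            vol = np.prod(np.maximum(ref - corner, 0.0))
            total += (-1) ** (k + 1) * vol
    return total
```

Fast algorithms such as Quick Hypervolume replace this exponential enumeration with recursive splitting of the dominated region; the improvement in the paper lies in how those sub-problems are formed.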
Attend in groups: a weakly-supervised deep learning framework for learning from web data
Title | Attend in groups: a weakly-supervised deep learning framework for learning from web data |
Authors | Bohan Zhuang, Lingqiao Liu, Yao Li, Chunhua Shen, Ian Reid |
Abstract | Large-scale datasets have driven the rapid development of deep neural networks for visual recognition. However, annotating a massive dataset is expensive and time-consuming. Web images and their labels are, in comparison, much easier to obtain, but direct training on such automatically harvested images can lead to unsatisfactory performance, because the noisy labels of Web images adversely affect the learned recognition models. To address this drawback we propose an end-to-end weakly-supervised deep learning framework which is robust to the label noise in Web images. The proposed framework relies on two unified strategies – random grouping and attention – to effectively reduce the negative impact of noisy web image annotations. Specifically, random grouping stacks multiple images into a single training instance and thus increases the labeling accuracy at the instance level. Attention, on the other hand, suppresses the noisy signals from both incorrectly labeled images and less discriminative image regions. By conducting intensive experiments on two challenging datasets, including a newly collected fine-grained dataset with Web images of different car models, the superior performance of the proposed methods over competitive baselines is clearly demonstrated. |
Tasks | |
Published | 2016-11-30 |
URL | http://arxiv.org/abs/1611.09960v1 |
http://arxiv.org/pdf/1611.09960v1.pdf | |
PWC | https://paperswithcode.com/paper/attend-in-groups-a-weakly-supervised-deep |
Repo | |
Framework | |
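The intuition behind random grouping — a stacked instance is effectively correctly labeled as long as at least one member image carries the right label, so instance-level noise drops from p to roughly p^m for groups of size m — can be checked by simulation (a sanity check of the statistical argument, not the authors' training code):

```python
import numpy as np

def instance_noise_rate(p, group_size, trials=100000, seed=0):
    """Monte-Carlo estimate of the fraction of stacked instances in which
    every member image is mislabeled (per-image noise rate p)."""
    rng = np.random.default_rng(seed)
    wrong = rng.random((trials, group_size)) < p
    return wrong.all(axis=1).mean()
```

With 30% per-image label noise and groups of four, only about 0.8% of instances are fully corrupted, which is why grouping alone already blunts the impact of noisy web annotations before attention suppresses the remaining noisy signals.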
Doubly Random Parallel Stochastic Methods for Large Scale Learning
Title | Doubly Random Parallel Stochastic Methods for Large Scale Learning |
Authors | Aryan Mokhtari, Alec Koppel, Alejandro Ribeiro |
Abstract | We consider learning problems over training sets in which both the number of training examples and the dimension of the feature vectors are large. To solve these problems we propose the random parallel stochastic algorithm (RAPSA). We call the algorithm random parallel because it utilizes multiple processors to operate on a randomly chosen subset of blocks of the feature vector. We call the algorithm parallel stochastic because processors choose elements of the training set randomly and independently. Algorithms that are parallel in either of these dimensions exist, but RAPSA is the first attempt at a methodology that is parallel in both the selection of blocks and the selection of elements of the training set. In RAPSA, processors utilize the randomly chosen functions to compute the stochastic gradient component associated with a randomly chosen block. The technical contribution of this paper is to show that this minimally coordinated algorithm converges to the optimal classifier when the training objective is convex. In particular, we show that: (i) when using decreasing stepsizes, RAPSA converges almost surely over the random choice of blocks and functions; (ii) when using constant stepsizes, convergence is to a neighborhood of optimality with a rate that is linear in expectation. RAPSA is numerically evaluated on the MNIST digit recognition problem. |
Tasks | |
Published | 2016-03-22 |
URL | http://arxiv.org/abs/1603.06782v1 |
http://arxiv.org/pdf/1603.06782v1.pdf | |
PWC | https://paperswithcode.com/paper/doubly-random-parallel-stochastic-methods-for |
Repo | |
Framework | |
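The "doubly random" update is easy to state in code: each step draws a random coordinate block and a random minibatch, and applies the minibatch gradient to that block only. The serial sketch below (our own, for least squares; in RAPSA the per-block updates run on parallel processors) shows the core update:

```python
import numpy as np

def rapsa_step(w, A, b, block, batch, lr):
    """One doubly random update: a stochastic gradient from the sampled
    batch, applied only to the sampled coordinate block."""
    g = A[batch][:, block].T @ (A[batch] @ w - b[batch]) / len(batch)
    w[block] -= lr * g
    return w

def rapsa(A, b, n_blocks=4, batch_size=8, lr=0.02, steps=4000, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    blocks = np.array_split(np.arange(d), n_blocks)
    w = np.zeros(d)
    for _ in range(steps):
        block = blocks[rng.integers(n_blocks)]    # random coordinate block
        batch = rng.integers(n, size=batch_size)  # random training samples
        w = rapsa_step(w, A, b, block, batch, lr)
    return w
```

Because each update touches only one block and one batch, processors handling different blocks need essentially no coordination, which is the property the convergence analysis has to contend with.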