May 5, 2019

3232 words 16 mins read

Paper Group ANR 494

Efficient Classification of Multi-Labelled Text Streams by Clashing. Compliance-Aware Bandits. Self-Correcting Models for Model-Based Reinforcement Learning. An Artificial Neural Networks based Temperature Prediction Framework for Network-on-Chip based Multicore Platform. Uncovering Voice Misuse Using Symbolic Mismatch. What is the Best Way for Ext …

Efficient Classification of Multi-Labelled Text Streams by Clashing

Title Efficient Classification of Multi-Labelled Text Streams by Clashing
Authors Ricardo Ñanculef, Ilias Flaounas, Nello Cristianini
Abstract We present a method for the classification of multi-labelled text documents explicitly designed for data stream applications that require processing a virtually infinite sequence of data using constant memory and constant processing time. Our method is composed of an online procedure used to efficiently map text into a low-dimensional feature space and a partition of this space into a set of regions for which the system extracts and keeps statistics used to predict multi-label text annotations. Documents are fed into the system as a sequence of words, mapped to a region of the partition, and annotated using the statistics computed from the labelled instances colliding in the same region. This approach is referred to as clashing. We illustrate the method on real-world text data, comparing the results with those obtained using other text classifiers. In addition, we provide an analysis of the effect of the representation-space dimensionality on the predictive performance of the system. Our results show that the online embedding indeed approximates the geometry of the full corpus-wise TF and TF-IDF space. The model obtains competitive F measures with respect to the most accurate methods, using significantly fewer computational resources. In addition, the method achieves a higher macro-averaged F measure than methods with similar running time. Furthermore, the system is able to learn faster than the other methods from partially labelled streams.
Tasks
Published 2016-04-12
URL http://arxiv.org/abs/1604.03200v1
PDF http://arxiv.org/pdf/1604.03200v1.pdf
PWC https://paperswithcode.com/paper/efficient-classification-of-multi-labelled
Repo
Framework
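
The abstract above outlines the full pipeline; the following is a minimal sketch of the clashing idea in Python. The hashing scheme, the number of hyperplanes, the 64-dimensional embedding and the 0.5 voting threshold are illustrative assumptions, not the paper's exact construction.

```python
# Sketch only: feature-hash words, quantize the embedding via random hyperplane
# signs into a region, and keep per-region label counts (constant memory per region).
import hashlib
import numpy as np

class ClashingSketch:
    def __init__(self, dim=64, n_planes=10, seed=0):
        rng = np.random.default_rng(seed)
        self.dim = dim
        self.planes = rng.normal(size=(n_planes, dim))  # region-defining hyperplanes
        self.stats = {}                                  # region -> label counts

    def _embed(self, words):
        v = np.zeros(self.dim)
        for w in words:                                  # hashed term-frequency vector
            v[int(hashlib.md5(w.encode()).hexdigest(), 16) % self.dim] += 1.0
        return v

    def _region(self, words):
        return tuple((self.planes @ self._embed(words) > 0).astype(int))

    def update(self, words, labels):
        counts = self.stats.setdefault(self._region(words), {"_n": 0})
        counts["_n"] += 1
        for lab in labels:
            counts[lab] = counts.get(lab, 0) + 1

    def predict(self, words, threshold=0.5):
        counts = self.stats.get(self._region(words), {"_n": 0})
        n = max(counts["_n"], 1)
        return [lab for lab, c in counts.items() if lab != "_n" and c / n >= threshold]

clf = ClashingSketch()
clf.update("stocks fell sharply today".split(), ["finance"])
clf.update("markets and stocks rallied".split(), ["finance", "economy"])
print(clf.predict("stocks rallied today".split()))
```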

Compliance-Aware Bandits

Title Compliance-Aware Bandits
Authors Nicolás Della Penna, Mark D. Reid, David Balduzzi
Abstract Motivated by clinical trials, we study bandits with observable non-compliance. At each step, the learner chooses an arm; afterwards, instead of observing only the reward, it also observes the action that took place. We show that such non-compliance can be helpful or hurtful to the learner in general. Unfortunately, naively incorporating compliance information into bandit algorithms loses guarantees on sublinear regret. We present hybrid algorithms that maintain regret bounds up to a multiplicative factor and can incorporate compliance information. Simulations based on real data from the International Stroke Trial show the practical potential of these algorithms.
Tasks
Published 2016-02-09
URL http://arxiv.org/abs/1602.02852v1
PDF http://arxiv.org/pdf/1602.02852v1.pdf
PWC https://paperswithcode.com/paper/compliance-aware-bandits
Repo
Framework
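
As a toy illustration of bandits with observable non-compliance (not the paper's hybrid algorithms), the sketch below runs an epsilon-greedy learner that keeps separate reward statistics for the arm it chose and for the action actually taken; the arm means, compliance probabilities and epsilon are made-up values.

```python
# Two-armed bandit with observable non-compliance: track estimates keyed both by
# the chosen arm and by the action actually taken; act on the chosen-arm stats.
import numpy as np

rng = np.random.default_rng(0)
true_mean = [0.3, 0.6]            # expected reward of the action actually taken
comply_prob = [0.9, 0.5]          # probability the chosen arm is actually taken

n_arms, horizon = 2, 2000
chosen_stats = np.zeros((n_arms, 2))   # per chosen arm: [reward sum, pulls]
taken_stats = np.zeros((n_arms, 2))    # per taken action: side information only

for t in range(horizon):
    # epsilon-greedy on the chosen-arm ("intent-to-treat") estimates
    if rng.random() < 0.1 or chosen_stats[:, 1].min() == 0:
        arm = rng.integers(n_arms)
    else:
        arm = int(np.argmax(chosen_stats[:, 0] / chosen_stats[:, 1]))
    taken = arm if rng.random() < comply_prob[arm] else 1 - arm  # non-compliance
    reward = float(rng.random() < true_mean[taken])
    chosen_stats[arm] += (reward, 1)
    taken_stats[taken] += (reward, 1)

print("estimate per chosen arm:", chosen_stats[:, 0] / chosen_stats[:, 1])
print("estimate per taken action:", taken_stats[:, 0] / taken_stats[:, 1])
```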

Self-Correcting Models for Model-Based Reinforcement Learning

Title Self-Correcting Models for Model-Based Reinforcement Learning
Authors Erik Talvitie
Abstract When an agent cannot represent a perfectly accurate model of its environment’s dynamics, model-based reinforcement learning (MBRL) can fail catastrophically. Planning involves composing the predictions of the model; when flawed predictions are composed, even minor errors can compound and render the model useless for planning. Hallucinated Replay (Talvitie 2014) trains the model to “correct” itself when it produces errors, substantially improving MBRL with flawed models. This paper theoretically analyzes this approach, illuminates settings in which it is likely to be effective or ineffective, and presents a novel error bound, showing that a model’s ability to self-correct is more tightly related to MBRL performance than one-step prediction error. These results inspire an MBRL algorithm for deterministic MDPs with performance guarantees that are robust to model class limitations.
Tasks
Published 2016-12-19
URL http://arxiv.org/abs/1612.06018v2
PDF http://arxiv.org/pdf/1612.06018v2.pdf
PWC https://paperswithcode.com/paper/self-correcting-models-for-model-based
Repo
Framework
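
A minimal sketch of the hallucinated-replay training signal analyzed above: besides fitting real transitions (s, a) -> s', the model is also trained to map its own prediction of s back to the real s', so that rollout errors are corrected rather than compounded. The 1-D linear dynamics and the linear model are assumptions for illustration only.

```python
# Sketch: train on both (real state, action) -> next state and
# (hallucinated state, action) -> next state, so rollout errors self-correct.
import numpy as np

rng = np.random.default_rng(0)

def true_step(s, a):
    return 0.9 * s + a + 0.05 * rng.normal()       # unknown environment dynamics

w = np.zeros(3)                                    # model: next state ~ w . [s, a, 1]
lr, s, prev_pred = 0.01, 0.0, 0.0                  # prev_pred = model's guess of current s

for t in range(10000):
    a = rng.choice([-1.0, 1.0])
    s_next = true_step(s, a)
    for state_input in (s, prev_pred):             # real input and hallucinated input
        x = np.array([state_input, a, 1.0])
        w += lr * (s_next - w @ x) * x             # SGD on squared prediction error
    prev_pred = w @ np.array([s, a, 1.0])          # hallucinated version of s_next
    s = s_next

print("learned [state, action, bias] weights:", np.round(w, 3))
```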

An Artificial Neural Networks based Temperature Prediction Framework for Network-on-Chip based Multicore Platform

Title An Artificial Neural Networks based Temperature Prediction Framework for Network-on-Chip based Multicore Platform
Authors Sandeep Aswath Narayana
Abstract Continuous improvement in silicon process technologies has made possible the integration of hundreds of cores on a single chip. However, power and heat have become dominant constraints in designing these massive multicore chips, causing issues with reliability, timing variations and reduced chip lifetime. Dynamic Thermal Management (DTM) is a solution to avoid high temperatures on the die. Typical DTM schemes only address core-level thermal issues. However, the Network-on-Chip (NoC) paradigm, which has emerged as an enabling methodology for integrating hundreds to thousands of cores on the same die, can contribute significantly to the thermal issues. Moreover, typical DTM is triggered reactively based on temperature measurements from on-chip thermal sensors, requiring long reaction times, whereas predictive DTM methods estimate future temperature in advance, eliminating the chance of temperature overshoot. Artificial Neural Networks (ANNs) have been used in various domains for modeling and prediction with high accuracy due to their ability to learn and adapt. This thesis concentrates on designing an ANN prediction engine to predict the thermal profile of the cores and Network-on-Chip elements of the chip. This thermal profile of the chip is then used by the predictive DTM that combines both core-level and network-level DTM techniques. An on-chip wireless interconnect, which has recently been envisioned to enable energy-efficient data exchange between cores in a multicore environment, will be used to provide a broadcast-capable medium to efficiently distribute thermal control messages to trigger and manage the DTM schemes.
Tasks
Published 2016-12-12
URL http://arxiv.org/abs/1612.04197v1
PDF http://arxiv.org/pdf/1612.04197v1.pdf
PWC https://paperswithcode.com/paper/an-artificial-neural-networks-based
Repo
Framework
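
Purely as an illustration of the kind of ANN-based temperature predictor the thesis describes, the sketch below fits a small feed-forward network to synthetic per-core features; the feature set, data and network size are placeholder assumptions, not the thesis's actual prediction engine.

```python
# Illustrative only: a small MLP mapping synthetic per-core features
# (utilization, power, NoC router traffic) to a predicted temperature.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
util = rng.uniform(0, 1, n)                                   # core utilization
power = 0.5 + 2.0 * util + rng.normal(0, 0.05, n)             # watts (synthetic)
traffic = rng.uniform(0, 1, n)                                # NoC traffic load
temp = 45 + 20 * power + 8 * traffic + rng.normal(0, 0.5, n)  # degrees C

X = np.column_stack([util, power, traffic])
X_tr, X_te, y_tr, y_te = train_test_split(X, temp, random_state=0)

ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
ann.fit(X_tr, y_tr)
print("R^2 on held-out samples:", round(ann.score(X_te, y_te), 3))
```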

Uncovering Voice Misuse Using Symbolic Mismatch

Title Uncovering Voice Misuse Using Symbolic Mismatch
Authors Marzyeh Ghassemi, Zeeshan Syed, Daryush D. Mehta, Jarrad H. Van Stan, Robert E. Hillman, John V. Guttag
Abstract Voice disorders affect an estimated 14 million working-age Americans, and many more worldwide. We present the first large-scale study of vocal misuse based on long-term ambulatory data collected by an accelerometer placed on the neck. We investigate an unsupervised data mining approach to uncovering latent information about voice misuse. We segment signals from over 253 days of data from 22 subjects into over a hundred million single glottal pulses (closures of the vocal folds), cluster segments into symbols, and use symbolic mismatch to uncover differences between patients and matched controls, and between patients pre- and post-treatment. Our results show significant behavioral differences between patients and controls, as well as between some pre- and post-treatment patients. Our proposed approach provides an objective basis for helping diagnose behavioral voice disorders, and is a first step towards a more data-driven understanding of the impact of voice therapy.
Tasks
Published 2016-08-08
URL http://arxiv.org/abs/1608.02301v1
PDF http://arxiv.org/pdf/1608.02301v1.pdf
PWC https://paperswithcode.com/paper/uncovering-voice-misuse-using-symbolic
Repo
Framework
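
A heavily simplified sketch of the pipeline described above: k-means clusters fixed-length segments into a symbol alphabet, each subject is summarized by a symbol histogram, and the mismatch between two subjects is a distance between histograms. The actual glottal-pulse segmentation and the paper's symbolic-mismatch measure are more involved; all signals below are synthetic.

```python
# Simplified stand-in: cluster segments into symbols, compare symbol histograms.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def segments(freq, n=300, length=20):
    """Synthetic stand-in for glottal-pulse segments from one subject."""
    t = np.linspace(0, 1, length)
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.normal(size=(n, length))

control = segments(freq=3.0)
patient = segments(freq=4.5)

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0)
kmeans.fit(np.vstack([control, patient]))        # shared symbol alphabet

def histogram(segs):
    counts = np.bincount(kmeans.predict(segs), minlength=8)
    return counts / counts.sum()

mismatch = 0.5 * np.abs(histogram(control) - histogram(patient)).sum()
print("symbol-histogram mismatch (total variation):", round(mismatch, 3))
```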

What is the Best Way for Extracting Meaningful Attributes from Pictures?

Title What is the Best Way for Extracting Meaningful Attributes from Pictures?
Authors Liangchen Liu, Arnold Wiliem, Shaokang Chen, Brian C. Lovell
Abstract Automatic attribute discovery methods have gained in popularity to extract sets of visual attributes from images or videos for various tasks. Despite their good performance in some classification tasks, it is difficult to evaluate whether the attributes discovered by these methods are meaningful and which methods are the most appropriate to discover attributes for visual descriptions. In its simplest form, such an evaluation can be performed by manually verifying whether there is any consistent identifiable visual concept distinguishing between positive and negative exemplars labelled by an attribute. This manual checking is tedious, expensive and labour intensive. In addition, comparisons between different methods could also be problematic as it is not clear how one could quantitatively decide which attribute is more meaningful than the others. In this paper, we propose a novel attribute meaningfulness metric to address this challenging problem. With this metric, automatic quantitative evaluation can be performed on the attribute sets, thus reducing the enormous effort of manual evaluation. The proposed metric is applied to some recent automatic attribute discovery and hashing methods on four attribute-labelled datasets. To further validate the efficacy of the proposed method, we conducted a user study. In addition, we also compared our metric with a semi-supervised attribute discovery method using the mixture of probabilistic PCA. In our evaluation, we gleaned several insights that could be beneficial in developing new automatic attribute discovery methods.
Tasks
Published 2016-10-17
URL http://arxiv.org/abs/1610.04957v1
PDF http://arxiv.org/pdf/1610.04957v1.pdf
PWC https://paperswithcode.com/paper/what-is-the-best-way-for-extracting
Repo
Framework
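
The paper's attribute-meaningfulness metric is not reproduced here; the sketch below only illustrates the kind of automatic quantitative check the abstract argues for, using a deliberately naive proxy (the best correlation of each discovered attribute with any human-labelled attribute) on synthetic data.

```python
# Naive proxy only, not the paper's metric: score each discovered attribute by
# its best absolute correlation with any human-labelled attribute.
import numpy as np

rng = np.random.default_rng(0)
n_images = 500

human = rng.integers(0, 2, size=(n_images, 5)).astype(float)   # human-labelled attributes
noise = rng.normal(0, 1, size=(n_images, 4))
discovered = np.column_stack([
    human[:, 0] + 0.2 * noise[:, 0],    # tracks a nameable concept
    human[:, 3] + 0.2 * noise[:, 1],    # tracks another nameable concept
    noise[:, 2],                        # meaningless direction
    noise[:, 3],                        # meaningless direction
])

corr = np.corrcoef(discovered.T, human.T)[:4, 4:]   # discovered x human correlations
per_attribute = np.abs(corr).max(axis=1)
print("per-attribute proxy scores:", np.round(per_attribute, 2))
print("mean meaningfulness proxy:", round(per_attribute.mean(), 2))
```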

Classification accuracy as a proxy for two sample testing

Title Classification accuracy as a proxy for two sample testing
Authors Ilmun Kim, Aaditya Ramdas, Aarti Singh, Larry Wasserman
Abstract When data analysts train a classifier and check if its accuracy is significantly different from chance, they are implicitly performing a two-sample test. We investigate the statistical properties of this flexible approach in the high-dimensional setting. We prove two results that hold for all classifiers in any dimension: if a classifier’s true error remains $\epsilon$-better than chance for some $\epsilon>0$ as $d,n \to \infty$, then (a) the permutation-based test is consistent (has power approaching one), and (b) a computationally efficient test based on a Gaussian approximation of the null distribution is also consistent. To get a finer understanding of the rates of consistency, we study a specialized setting of distinguishing Gaussians with mean-difference $\delta$ and common (known or unknown) covariance $\Sigma$, when $d/n \to c \in (0,\infty)$. We study variants of Fisher’s linear discriminant analysis (LDA) such as “naive Bayes” in a nontrivial regime when $\epsilon \to 0$ (the Bayes classifier has true accuracy approaching 1/2), and contrast their power with corresponding variants of Hotelling’s test. Surprisingly, the expressions for their power match exactly in terms of $n,d,\delta,\Sigma$, and the LDA approach is only worse by a constant factor, achieving an asymptotic relative efficiency (ARE) of $1/\sqrt{\pi}$ for balanced samples. We also extend our results to high-dimensional elliptical distributions with finite kurtosis. Other results of independent interest include minimax lower bounds, and the optimality of Hotelling’s test when $d=o(n)$. Simulation results validate our theory, and we present practical takeaway messages along with natural open problems.
Tasks
Published 2016-02-06
URL https://arxiv.org/abs/1602.02210v4
PDF https://arxiv.org/pdf/1602.02210v4.pdf
PWC https://paperswithcode.com/paper/classification-accuracy-as-a-proxy-for-two
Repo
Framework
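
The sketch below implements the basic recipe the abstract studies: train a classifier to distinguish the two samples, measure held-out accuracy, and calibrate it with a permutation null. The classifier choice, sample sizes and number of permutations are arbitrary.

```python
# Classifier two-sample test: held-out accuracy calibrated by a permutation null.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
d, n = 20, 300
X = np.vstack([rng.normal(0.0, 1, size=(n, d)),       # sample P
               rng.normal(0.3, 1, size=(n, d))])       # sample Q (shifted mean)
y = np.repeat([0, 1], n)                                # origin labels

def held_out_accuracy(X, y, seed=0):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=seed)
    return LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

observed = held_out_accuracy(X, y)
null = [held_out_accuracy(X, rng.permutation(y)) for _ in range(200)]
p_value = (1 + sum(a >= observed for a in null)) / (1 + len(null))
print(f"accuracy={observed:.3f}, permutation p-value={p_value:.3f}")
```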

Linear Bandit algorithms using the Bootstrap

Title Linear Bandit algorithms using the Bootstrap
Authors Nandan Sudarsanam, Balaraman Ravindran
Abstract This study presents two new algorithms for solving linear stochastic bandit problems. The proposed methods use an approach from non-parametric statistics called bootstrapping to create confidence bounds. This is achieved without making any assumptions about the distribution of noise in the underlying system. We present the X-Random and X-Fixed bootstrap bandits, which correspond to the two well-known approaches in the literature for bootstrapping models. The proposed methods are compared to other popular solutions for linear stochastic bandit problems, namely OFUL, LinUCB and Thompson Sampling. The comparisons are carried out using a simulation study on a hierarchical probability meta-model, built from published data of experiments run on real systems. The model representing the response surfaces is conceptualized as a Bayesian Network which is presented with varying degrees of noise for the simulations. One of the proposed methods, the X-Random bootstrap, performs better than the baselines in terms of cumulative regret across various degrees of noise and different numbers of trials. In certain settings the cumulative regret of this method is less than half that of the best baseline. The X-Fixed bootstrap performs comparably in most situations and particularly well when the number of trials is low. The study concludes that these algorithms could be a preferred alternative for solving linear bandit problems, especially when the distribution of the noise in the system is unknown.
Tasks
Published 2016-05-04
URL http://arxiv.org/abs/1605.01185v1
PDF http://arxiv.org/pdf/1605.01185v1.pdf
PWC https://paperswithcode.com/paper/linear-bandit-algorithms-using-the-bootstrap
Repo
Framework
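
In the spirit of the X-Random variant, but not a faithful implementation of it, the sketch below resamples the observed history with replacement each round, refits a least-squares model, and pulls the arm that the refit model scores highest; the problem instance and forced initial pulls are illustrative choices.

```python
# Bootstrap-flavoured linear bandit: resample history, refit, act greedily.
import numpy as np

rng = np.random.default_rng(0)
d, n_arms, horizon = 5, 10, 500
theta = rng.normal(size=d)                 # unknown reward parameter
arms = rng.normal(size=(n_arms, d))        # fixed arm feature vectors

X_hist, r_hist = [], []
regret = 0.0
best = arms @ theta

for t in range(horizon):
    if t < n_arms:
        a = t                              # pull every arm once
    else:
        idx = rng.integers(len(r_hist), size=len(r_hist))   # bootstrap resample
        Xb, rb = np.array(X_hist)[idx], np.array(r_hist)[idx]
        theta_hat, *_ = np.linalg.lstsq(Xb, rb, rcond=None)
        a = int(np.argmax(arms @ theta_hat))
    reward = arms[a] @ theta + 0.1 * rng.normal()
    X_hist.append(arms[a]); r_hist.append(reward)
    regret += best.max() - arms[a] @ theta

print("cumulative regret after", horizon, "rounds:", round(regret, 2))
```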

Fitness-based Adaptive Control of Parameters in Genetic Programming: Adaptive Value Setting of Mutation Rate and Flood Mechanisms

Title Fitness-based Adaptive Control of Parameters in Genetic Programming: Adaptive Value Setting of Mutation Rate and Flood Mechanisms
Authors Michal Gregor, Juraj Spalek
Abstract This paper concerns applications of genetic algorithms and genetic programming to tasks for which it is difficult to find a representation that does not map to a highly complex and discontinuous fitness landscape. In such cases the standard algorithm is prone to getting trapped in local optima. The paper proposes several adaptive mechanisms that are useful in preventing the search from getting trapped.
Tasks
Published 2016-05-05
URL http://arxiv.org/abs/1605.01514v1
PDF http://arxiv.org/pdf/1605.01514v1.pdf
PWC https://paperswithcode.com/paper/fitness-based-adaptive-control-of-parameters
Repo
Framework
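
The mechanisms in the paper target genetic programming; the sketch below illustrates the same general idea of fitness-based parameter control on a plain bitstring GA instead: the mutation rate grows while the best fitness stagnates, resets on improvement, and a crude stand-in for a flood mechanism re-seeds part of the population after prolonged stagnation. All thresholds are arbitrary.

```python
# Fitness-based adaptive mutation rate plus a crude re-seeding "flood", on OneMax.
import numpy as np

rng = np.random.default_rng(0)
n_bits, pop_size = 40, 30
pop = rng.integers(0, 2, size=(pop_size, n_bits))

def fitness(p):
    return p.sum(axis=1)                   # OneMax toy objective

mut_rate, best, stagnant = 1.0 / n_bits, -1, 0
for gen in range(300):
    f = fitness(pop)
    if f.max() > best:                     # progress: reset the mutation rate
        best, stagnant, mut_rate = int(f.max()), 0, 1.0 / n_bits
    else:                                  # stagnation: raise the mutation rate
        stagnant += 1
        mut_rate = min(0.25, mut_rate * 1.5)
    if stagnant > 20:                      # crude "flood": re-seed half the population
        pop[pop_size // 2:] = rng.integers(0, 2, size=(pop_size // 2, n_bits))
        stagnant = 0
        continue                           # evaluate the re-seeded population next round
    i = rng.integers(pop_size, size=pop_size)
    j = rng.integers(pop_size, size=pop_size)
    children = pop[np.where(f[i] >= f[j], i, j)].copy()   # binary tournaments
    children ^= (rng.random(children.shape) < mut_rate).astype(children.dtype)
    pop = children

print("best OneMax fitness reached:", best, "of", n_bits)
```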

Robust Estimation of Multiple Inlier Structures

Title Robust Estimation of Multiple Inlier Structures
Authors Xiang Yang, Peter Meer
Abstract The robust estimator presented in this paper processes each structure independently. The scales of the structures are estimated adaptively and no threshold is involved in spite of different objective functions. The user has to specify only the number of elemental subsets for random sampling. After classifying all the input data, the segmented structures are sorted by their strengths and the strongest inlier structures come out at the top. Like any robust estimator, this algorithm also has limitations, which are described in detail. Several synthetic and real examples are presented to illustrate every aspect of the algorithm.
Tasks
Published 2016-09-20
URL http://arxiv.org/abs/1609.06371v3
PDF http://arxiv.org/pdf/1609.06371v3.pdf
PWC https://paperswithcode.com/paper/robust-estimation-of-multiple-inlier
Repo
Framework
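
The following is not the paper's estimator but a least-median-of-squares style stand-in for the same sequential idea: fit candidates from random elemental subsets, derive an adaptive MAD-based scale from the best candidate's residuals, peel off its inliers and repeat. Unlike the paper's method, it assumes each structure is a majority of the points that remain.

```python
# Sequential line fitting with adaptive (MAD-based) scale; LMedS-style stand-in.
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.uniform(-5, 5, 250)
x2 = rng.uniform(-5, 5, 120)
pts = np.vstack([
    np.column_stack([x1, 2 * x1 + 1 + 0.1 * rng.normal(size=250)]),   # structure 1
    np.column_stack([x2, -x2 + 4 + 0.1 * rng.normal(size=120)]),      # structure 2
    rng.uniform(-10, 10, size=(60, 2)),                               # gross outliers
])

def fit_line(p, q):
    """Return normalized (a, b, c) of ax + by + c = 0 through two points."""
    a, b = q[1] - p[1], p[0] - q[0]
    n = np.hypot(a, b)
    return a / n, b / n, -(a * p[0] + b * p[1]) / n

remaining = pts
for structure in range(2):
    best = None
    for _ in range(500):                                  # elemental subsets
        p, q = remaining[rng.choice(len(remaining), 2, replace=False)]
        if np.allclose(p, q):
            continue
        a, b, c = fit_line(p, q)
        res = np.abs(remaining @ np.array([a, b]) + c)
        med = np.median(res)
        if best is None or med < best[0]:
            best = (med, res)
    scale = 1.4826 * best[0]                              # adaptive MAD-based scale
    inliers = best[1] < 2.5 * scale
    print(f"structure {structure}: {int(inliers.sum())} inliers, scale {scale:.3f}")
    remaining = remaining[~inliers]
```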

Spatial contrasting for deep unsupervised learning

Title Spatial contrasting for deep unsupervised learning
Authors Elad Hoffer, Itay Hubara, Nir Ailon
Abstract Convolutional networks have marked their place over the last few years as the best-performing models for various visual tasks. They are, however, most suited for supervised learning from large amounts of labeled data. Previous attempts have been made to use unlabeled data to improve model performance by applying unsupervised techniques. These attempts require different architectures and training methods. In this work we present a novel approach for unsupervised training of convolutional networks that is based on contrasting between spatial regions within images. This criterion can be employed within conventional neural networks and trained using standard techniques such as SGD and back-propagation, thus complementing supervised methods.
Tasks
Published 2016-11-21
URL http://arxiv.org/abs/1611.06996v1
PDF http://arxiv.org/pdf/1611.06996v1.pdf
PWC https://paperswithcode.com/paper/spatial-contrasting-for-deep-unsupervised
Repo
Framework
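
A sketch of the spatial contrasting criterion on pre-computed patch features, in the spirit of the abstract above: a patch from an image should be closer to another patch of the same image than to patches from other images, scored with a softmax over negative squared distances. The conv-net feature extraction is omitted and the feature arrays below are stand-ins.

```python
# Spatial-contrasting-style loss over patch features (conv net omitted).
import numpy as np

def spatial_contrasting_loss(anchor, positive):
    """anchor, positive: (batch, dim) features of two patches per image."""
    # squared distances between every anchor and every candidate patch
    d2 = ((anchor[:, None, :] - positive[None, :, :]) ** 2).sum(-1)
    logits = -d2                                   # closer patch -> larger logit
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # the matching patch (same image, diagonal entry) is the "correct class"
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
features = rng.normal(size=(8, 32))                    # one embedding per image
anchor = features + 0.1 * rng.normal(size=(8, 32))     # patch 1 of each image
positive = features + 0.1 * rng.normal(size=(8, 32))   # patch 2 of each image
print("spatial contrasting loss:", round(spatial_contrasting_loss(anchor, positive), 3))
```
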

Evaluating Link Prediction Accuracy on Dynamic Networks with Added and Removed Edges

Title Evaluating Link Prediction Accuracy on Dynamic Networks with Added and Removed Edges
Authors Ruthwik R. Junuthula, Kevin S. Xu, Vijay K. Devabhaktuni
Abstract The task of predicting future relationships in a social network, known as link prediction, has been studied extensively in the literature. Many link prediction methods have been proposed, ranging from common neighbors to probabilistic models. Recent work by Yang et al. has highlighted several challenges in evaluating link prediction accuracy. In dynamic networks where edges are both added and removed over time, the link prediction problem is more complex and involves predicting both newly added and newly removed edges. This results in new challenges in the evaluation of dynamic link prediction methods, and the recommendations provided by Yang et al. are no longer applicable, because they do not address edge removal. In this paper, we investigate several metrics currently used for evaluating accuracies of dynamic link prediction methods and demonstrate why they can be misleading in many cases. We provide several recommendations on evaluating dynamic link prediction accuracy, including separation into two categories of evaluation. Finally we propose a unified metric to characterize link prediction accuracy effectively using a single number.
Tasks Dynamic Link Prediction, Link Prediction
Published 2016-07-25
URL http://arxiv.org/abs/1607.07330v1
PDF http://arxiv.org/pdf/1607.07330v1.pdf
PWC https://paperswithcode.com/paper/evaluating-link-prediction-accuracy-on
Repo
Framework
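
The sketch below only shows the two-category separation the paper recommends: score new-edge prediction over previously unconnected pairs and removed-edge prediction over previously connected pairs, reporting an AUC for each. The paper's unified single-number metric is not reproduced, and the data and predictor are synthetic.

```python
# Two-category evaluation: separate AUCs for added and removed edges.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_pairs = 1000
prev = rng.integers(0, 2, n_pairs)                          # edge present at time t?
nxt = np.where(rng.random(n_pairs) < 0.8, prev, 1 - prev)   # edges at time t+1
scores = 0.6 * nxt + 0.4 * rng.random(n_pairs)              # a predictor's edge scores

new_mask = prev == 0                                        # candidate new edges
removed_mask = prev == 1                                    # candidate removed edges
auc_new = roc_auc_score(nxt[new_mask], scores[new_mask])
auc_removed = roc_auc_score(1 - nxt[removed_mask], -scores[removed_mask])
print(f"AUC on added edges: {auc_new:.3f}, AUC on removed edges: {auc_removed:.3f}")
```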

Statistical Learning Theory Approach for Data Classification with l-diversity

Title Statistical Learning Theory Approach for Data Classification with l-diversity
Authors Koray Mancuhan, Chris Clifton
Abstract Corporations are retaining ever-larger corpuses of personal data; the frequency of breaches and the corresponding privacy impact have been rising accordingly. One way to mitigate this risk is through use of anonymized data, limiting the exposure of individual data to only where it is absolutely needed. This would seem particularly appropriate for data mining, where the goal is generalizable knowledge rather than data on specific individuals. In practice, corporate data miners often insist on original data, for fear that they might “miss something” with anonymized or differentially private approaches. This paper provides a theoretical justification for the use of anonymized data. Specifically, we show that a support vector classifier trained on anatomized data satisfying l-diversity should be expected to do as well as on the original data. Anatomy preserves all data values, but introduces uncertainty in the mapping between identifying and sensitive values, thus satisfying l-diversity. The theoretical effectiveness of the proposed approach is validated using several publicly available datasets, showing that we outperform the state of the art for support vector classification using training data protected by k-anonymity, and are comparable to learning on the original data.
Tasks
Published 2016-10-18
URL http://arxiv.org/abs/1610.05815v1
PDF http://arxiv.org/pdf/1610.05815v1.pdf
PWC https://paperswithcode.com/paper/statistical-learning-theory-approach-for-data
Repo
Framework
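
As a toy illustration of the claim, and not the paper's construction, the sketch below "anatomizes" a training set by permuting a sensitive feature within groups of l records, breaking the identifying/sensitive linkage, and compares an SVM trained on the anatomized data with one trained on the original data. The data, group formation and the choice l=3 are synthetic assumptions.

```python
# Toy comparison: SVM trained on original vs. group-permuted ("anatomized") data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, l = 900, 3
quasi = rng.normal(size=(n, 4))                     # quasi-identifying features
sensitive = rng.normal(size=n)                      # sensitive feature
y = (quasi[:, 0] + 0.5 * sensitive + 0.3 * rng.normal(size=n) > 0).astype(int)

X = np.column_stack([quasi, sensitive])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

anat = X_tr.copy()
for g in range(0, len(anat) - l + 1, l):            # groups of l records
    anat[g:g + l, -1] = rng.permutation(anat[g:g + l, -1])   # unlink sensitive values

acc_orig = SVC().fit(X_tr, y_tr).score(X_te, y_te)
acc_anat = SVC().fit(anat, y_tr).score(X_te, y_te)
print(f"SVM accuracy - original: {acc_orig:.3f}, anatomized: {acc_anat:.3f}")
```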

Data-Driven Background Subtraction Algorithm for in-Camera Acceleration in Thermal Imagery

Title Data-Driven Background Subtraction Algorithm for in-Camera Acceleration in Thermal Imagery
Authors Konstantinos Makantasis, Antonis Nikitakis, Anastasios Doulamis, Nikolaos Doulamis, Yannis Papaefstathiou
Abstract Detection of moving objects in videos is a crucial step towards successful surveillance and monitoring applications. A key component for such tasks, called background subtraction, tries to extract regions of interest from the image background for further processing or action. For this reason, its accuracy and real-time performance are of great significance. Although effective background subtraction methods have been proposed, only a few of them take into consideration the special characteristics of thermal imagery. In this work, we propose a background subtraction scheme which models the thermal responses of each pixel as a mixture of Gaussians with an unknown number of components. Following a Bayesian approach, our method automatically estimates the mixture structure, while simultaneously avoiding over-/under-fitting. The pixel density estimate is followed by an efficient and highly accurate updating mechanism, which permits our system to be automatically adapted to dynamically changing operation conditions. We propose a reference implementation of our method in reconfigurable hardware achieving both adequate performance and low power consumption. Adopting a High Level Synthesis design, demanding floating-point arithmetic operations are mapped onto reconfigurable hardware, demonstrating fast prototyping and on-field customization at the same time.
Tasks
Published 2016-07-31
URL http://arxiv.org/abs/1608.00229v2
PDF http://arxiv.org/pdf/1608.00229v2.pdf
PWC https://paperswithcode.com/paper/data-driven-background-subtraction-algorithm
Repo
Framework
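
A heavily simplified per-pixel background model in the spirit of the method above: the paper fits a Bayesian mixture with an unknown number of components per pixel, whereas this sketch keeps a single adaptive Gaussian per pixel, updated with an exponential forgetting factor, and flags pixels far from their model as foreground. The thermal frames are synthetic.

```python
# Simplified per-pixel background model (single adaptive Gaussian, not the
# paper's Bayesian mixture) on synthetic thermal frames.
import numpy as np

rng = np.random.default_rng(0)
h, w, alpha = 32, 32, 0.05
mean = np.full((h, w), 25.0)              # initial background temperature
var = np.full((h, w), 1.0)

def make_frame(t):
    frame = 25 + 0.5 * rng.normal(size=(h, w))       # background thermal noise
    if t >= 40:                                      # a warm object enters
        frame[10:18, 10:18] += 8.0
    return frame

for t in range(60):
    frame = make_frame(t)
    dist2 = (frame - mean) ** 2 / var
    foreground = dist2 > 9.0                         # > 3 sigma from the model
    upd = ~foreground                                # update only matching pixels
    mean[upd] = (1 - alpha) * mean[upd] + alpha * frame[upd]
    var[upd] = (1 - alpha) * var[upd] + alpha * (frame[upd] - mean[upd]) ** 2

print("foreground pixels in last frame:", int(foreground.sum()))
```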

An Asymptotically Optimal Contextual Bandit Algorithm Using Hierarchical Structures

Title An Asymptotically Optimal Contextual Bandit Algorithm Using Hierarchical Structures
Authors Mohammadreza Mohaghegh Neyshabouri, Kaan Gokcesu, Huseyin Ozkan, Suleyman S. Kozat
Abstract We propose online algorithms for sequential learning in the contextual multi-armed bandit setting. Our approach is to partition the context space and then optimally combine all of the possible mappings between the partition regions and the set of bandit arms in a data-driven manner. We show that in our approach, the best mapping is able to approximate the best arm selection policy to any desired degree under mild Lipschitz conditions. Therefore, we design our algorithms based on the optimal adaptive combination and asymptotically achieve the performance of the best mapping as well as the best arm selection policy. This optimality is also guaranteed to hold even in adversarial environments since we do not rely on any statistical assumptions regarding the contexts or the loss of the bandit arms. Moreover, we design efficient implementations for our algorithms in various hierarchical partitioning structures such as lexicographical or arbitrary position splitting and binary trees (and several other partitioning examples). For instance, in the case of binary tree partitioning, the computational complexity is only log-linear in the number of regions in the finest partition. In conclusion, we provide significant performance improvements by introducing upper bounds (w.r.t. the best arm selection policy) that are mathematically proven to vanish, in the average-loss-per-round sense, at a faster rate than the state-of-the-art. Our experimental work extensively covers various scenarios ranging from bandit settings to multi-class classification with real and synthetic data. In these experiments, we show that our algorithms substantially outperform state-of-the-art techniques while maintaining the introduced mathematical guarantees and reasonable computational scalability.
Tasks
Published 2016-12-05
URL http://arxiv.org/abs/1612.01367v2
PDF http://arxiv.org/pdf/1612.01367v2.pdf
PWC https://paperswithcode.com/paper/an-asymptotically-optimal-contextual-bandit
Repo
Framework
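
A heavily reduced sketch of the setting: partition the context space with the leaves of a depth-3 binary tree and run an independent UCB learner per leaf. The paper's algorithm additionally combines all partitions induced by the tree in a data-driven way; that adaptive combination is omitted here, and the reward model is synthetic.

```python
# Reduced sketch: UCB per leaf of a fixed binary-tree partition of [0, 1).
import numpy as np

rng = np.random.default_rng(0)
n_arms, depth, horizon = 3, 3, 5000
n_leaves = 2 ** depth
pulls = np.ones((n_leaves, n_arms))        # optimistic init: one fake pull each
rewards = np.ones((n_leaves, n_arms))

def mean_reward(context, arm):
    # the best arm changes with the context, so a context-free bandit fails
    return 0.5 + 0.4 * np.sin(2 * np.pi * context + arm)

for t in range(1, horizon + 1):
    ctx = rng.random()
    leaf = min(int(ctx * n_leaves), n_leaves - 1)    # leaf of the binary tree
    ucb = rewards[leaf] / pulls[leaf] + np.sqrt(2 * np.log(t) / pulls[leaf])
    arm = int(np.argmax(ucb))
    r = float(rng.random() < mean_reward(ctx, arm))
    pulls[leaf, arm] += 1
    rewards[leaf, arm] += r

print("estimated best arm per leaf:", np.argmax(rewards / pulls, axis=1))
```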