July 28, 2019

2743 words 13 mins read

Paper Group ANR 217

Taming Non-stationary Bandits: A Bayesian Approach. The Causality/Repair Connection in Databases: Causality-Programs. Algorithmic infeasibility of community detection in higher-order networks. Learning Local Receptive Fields and their Weight Sharing Scheme on Graphs. Data preprocessing methods for robust Fourier ptychographic microscopy. On Reject …

Taming Non-stationary Bandits: A Bayesian Approach

Title Taming Non-stationary Bandits: A Bayesian Approach
Authors Vishnu Raj, Sheetal Kalyani
Abstract We consider the multi-armed bandit problem in non-stationary environments. Based on the Bayesian method, we propose a variant of Thompson Sampling that can be used in both rested and restless bandit scenarios. By applying discounting to the parameters of the prior distribution, we describe a way to systematically reduce the effect of past observations. Further, we derive the exact expression for the probability of picking sub-optimal arms. By increasing the exploitative value of Bayes' samples, we also provide an optimistic version of the algorithm. Extensive empirical analysis is conducted under various scenarios to validate the utility of the proposed algorithms. A comparison study with various state-of-the-art algorithms is also included. (A minimal sketch of the discounting idea follows this entry.)
Tasks
Published 2017-07-31
URL http://arxiv.org/abs/1707.09727v1
PDF http://arxiv.org/pdf/1707.09727v1.pdf
PWC https://paperswithcode.com/paper/taming-non-stationary-bandits-a-bayesian
Repo
Framework
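
A minimal sketch of the discounting idea for Bernoulli rewards: multiply the Beta pseudo-counts by a factor gamma each round before adding the new observation, so stale evidence fades. The specific update rule, the value of gamma, and the toy arm means are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def discounted_ts(arm_means, horizon=2000, gamma=0.95, seed=0):
    """Bernoulli Thompson Sampling with discounted Beta pseudo-counts.

    All pseudo-counts are multiplied by `gamma` each round before the new
    observation is added, so old rewards gradually stop influencing the
    posterior -- a simple way to track a drifting environment.
    """
    rng = np.random.default_rng(seed)
    k = len(arm_means)
    alpha = np.ones(k)          # Beta "success" pseudo-counts
    beta = np.ones(k)           # Beta "failure" pseudo-counts
    total = 0.0
    for _ in range(horizon):
        theta = rng.beta(alpha, beta)           # one posterior sample per arm
        a = int(np.argmax(theta))               # play the arm with the best sample
        r = float(rng.random() < arm_means[a])  # Bernoulli reward
        alpha *= gamma                          # forget old evidence ...
        beta *= gamma
        alpha[a] += r                           # ... and add the new observation
        beta[a] += 1.0 - r
        total += r
    return total

print(discounted_ts([0.3, 0.5, 0.7]))
```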

The Causality/Repair Connection in Databases: Causality-Programs

Title The Causality/Repair Connection in Databases: Causality-Programs
Authors Leopoldo Bertossi
Abstract In this work, answer-set programs that specify repairs of databases are used as a basis for solving computational and reasoning problems about causes for query answers from databases.
Tasks
Published 2017-04-17
URL http://arxiv.org/abs/1704.05136v2
PDF http://arxiv.org/pdf/1704.05136v2.pdf
PWC https://paperswithcode.com/paper/the-causalityrepair-connection-in-databases
Repo
Framework

Algorithmic infeasibility of community detection in higher-order networks

Title Algorithmic infeasibility of community detection in higher-order networks
Authors Tatsuro Kawamoto
Abstract In principle, higher-order networks that have multiple edge types are more informative than their lower-order counterparts. In practice, however, excessively rich information may be algorithmically infeasible to extract: it requires an algorithm that assumes a high-dimensional model, and such an algorithm may perform poorly or be extremely sensitive to the initial estimate of the model parameters. Herein, we address this problem of community detection through a detectability analysis. We focus on the expectation-maximization (EM) algorithm with belief propagation (BP), and analytically derive its algorithmic detectability threshold, i.e., the limit of the modular structure strength below which the algorithm can no longer detect any modular structures. The results indicate the existence of a phase in which the community detection of a lower-order network outperforms its higher-order counterpart.
Tasks Community Detection
Published 2017-10-24
URL http://arxiv.org/abs/1710.08816v1
PDF http://arxiv.org/pdf/1710.08816v1.pdf
PWC https://paperswithcode.com/paper/algorithmic-infeasibility-of-community
Repo
Framework

Learning Local Receptive Fields and their Weight Sharing Scheme on Graphs

Title Learning Local Receptive Fields and their Weight Sharing Scheme on Graphs
Authors Jean-Charles Vialatte, Vincent Gripon, Gilles Coppin
Abstract We propose a simple and generic layer formulation that extends the properties of convolutional layers to any domain that can be described by a graph. Namely, we use the support of its adjacency matrix to design learnable weight-sharing filters able to exploit the underlying structure of signals in the same fashion as for images. The proposed formulation makes it possible to learn the weights of the filter as well as a scheme that controls how they are shared across the graph. We perform validation experiments with image datasets and show that these filters offer performance comparable to that of convolutional filters. (A forward-pass sketch of such a layer follows this entry.)
Tasks
Published 2017-06-08
URL http://arxiv.org/abs/1706.02684v3
PDF http://arxiv.org/pdf/1706.02684v3.pdf
PWC https://paperswithcode.com/paper/learning-local-receptive-fields-and-their
Repo
Framework
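
A forward-pass sketch of the general idea: every edge in the adjacency support softly mixes a small bank of shared filter weights through a learned assignment. The soft-max parametrization, the sizes, and the random initialization are assumptions for illustration, not the paper's exact scheme, and no training loop is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: N nodes, support = adjacency + self-loops.
N, F_in, F_out, M = 6, 3, 4, 5                  # M = number of shared filter weights
A = (rng.random((N, N)) < 0.4).astype(float)
S = ((A + A.T + np.eye(N)) > 0).astype(float)   # symmetric support with self-loops

# Learnable parameters (random here): per-edge soft assignment to the M shared
# filters, and the shared filter bank itself.
edge_logits = rng.normal(size=(N, N, M))
W = rng.normal(size=(M, F_in, F_out))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def graph_layer(X):
    """Each support entry (i, j) mixes the M shared filters through its learned
    soft assignment; node i then aggregates its neighbours j."""
    assign = softmax(edge_logits) * S[..., None]      # zero outside the support
    W_edge = np.einsum('ijm,mfo->ijfo', assign, W)    # per-edge F_in x F_out filter
    out = np.einsum('ijfo,jf->io', W_edge, X)         # aggregate neighbour features
    return np.maximum(out, 0.0)                       # ReLU

X = rng.normal(size=(N, F_in))
print(graph_layer(X).shape)   # (6, 4)
```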

Data preprocessing methods for robust Fourier ptychographic microscopy

Title Data preprocessing methods for robust Fourier ptychographic microscopy
Authors Yan Zhang, An Pan, Ming Lei, Baoli Yao
Abstract Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging technique with both high resolution and a wide field of view. In the current FP experimental setup, the dark-field images acquired under high-angle illumination are easily submerged by stray light and background noise due to the low signal-to-noise ratio, significantly degrading the reconstruction quality and imposing a major restriction on the synthetic numerical aperture (NA) of the FP approach. To this end, an overall and systematic data preprocessing scheme for noise removal from FP's raw dataset is provided, which involves sampling analysis as well as underexposure/overexposure treatment, followed by the elimination of unknown stray light and suppression of inevitable background noise, especially Gaussian noise and CCD dark current in our experiments. The reported non-parametric scheme greatly enhances FP's performance, and we demonstrate experimentally that the benefits of noise removal by these methods far outweigh the concomitant signal loss. In addition, it can be flexibly combined with existing state-of-the-art algorithms, making the FP approach more robust in various applications. (A crude dark-field cleanup sketch follows this entry.)
Tasks
Published 2017-06-04
URL http://arxiv.org/abs/1707.03716v1
PDF http://arxiv.org/pdf/1707.03716v1.pdf
PWC https://paperswithcode.com/paper/data-preprocessing-methods-for-robust-fourier
Repo
Framework
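
A crude stand-in for the kind of cleanup described above: dark-frame subtraction, a stray-light offset estimated from the dimmest pixels, and a threshold on residual noise. The quantile and sigma cutoffs are invented for illustration; the paper's actual treatments (sampling analysis, exposure handling) are not reproduced here.

```python
import numpy as np

def preprocess_darkfield(img, dark_frame=None, bg_quantile=0.05, noise_sigmas=3.0):
    """Crude cleanup of a dark-field FPM frame.

    Subtract the camera dark current (a pre-recorded dark frame if available),
    estimate a stray-light/background offset from the dimmest pixels, remove it,
    and clip values below a noise threshold.
    """
    img = img.astype(float)
    if dark_frame is not None:
        img = img - dark_frame                      # CCD dark current removal
    bg_pixels = img[img <= np.quantile(img, bg_quantile)]
    offset, sigma = bg_pixels.mean(), bg_pixels.std()
    img = img - offset                              # uniform stray-light estimate
    img[img < noise_sigmas * sigma] = 0.0           # suppress residual Gaussian noise
    return np.clip(img, 0.0, None)

# Toy frame: weak signal sitting on a noisy, offset background.
rng = np.random.default_rng(1)
frame = 20 + rng.normal(0, 2, (64, 64))
frame[20:30, 20:30] += 50
print(preprocess_darkfield(frame).max() > 0)
```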

On Reject and Refine Options in Multicategory Classification

Title On Reject and Refine Options in Multicategory Classification
Authors Chong Zhang, Wenbo Wang, Xingye Qiao
Abstract In many real applications of statistical learning, a decision made from misclassification can be too costly to afford; in this case, a reject option, which defers the decision until further investigation is conducted, is often preferred. In recent years, there has been much development for binary classification with a reject option. Yet, little progress has been made for the multicategory case. In this article, we propose margin-based multicategory classification methods with a reject option. In addition, and more importantly, we introduce a new and unique refine option for the multicategory problem, where the class of an observation is predicted to be from a set of class labels, whose cardinality is not necessarily one. The main advantage of both options lies in their capacity for identifying error-prone observations. Moreover, the refine option can provide more constructive information for classification by effectively ruling out implausible classes. Efficient implementations have been developed for the proposed methods. On the theoretical side, we offer a novel statistical learning theory and show a fast convergence rate of the excess $\ell$-risk of our methods with emphasis on diverging dimensionality and number of classes. The results can be further improved under a low noise assumption. A set of comprehensive simulation and real data studies has shown the usefulness of the new learning tools compared to regular multicategory classifiers. Detailed proofs of theorems and extended numerical results are included in the supplemental materials available online. (A toy illustration of the reject and refine decision rules follows this entry.)
Tasks
Published 2017-01-09
URL http://arxiv.org/abs/1701.02265v1
PDF http://arxiv.org/pdf/1701.02265v1.pdf
PWC https://paperswithcode.com/paper/on-reject-and-refine-options-in-multicategory
Repo
Framework
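
A toy, probability-threshold caricature of the two options (the paper's methods are margin-based and come with theory): predict when confident, return a refined label set when only a few classes are plausible, and reject otherwise. Both thresholds are arbitrary choices.

```python
import numpy as np

def reject_or_refine(probs, reject_thresh=0.5, refine_thresh=0.2):
    """Toy decision rule on top of estimated class probabilities.

    - If the top class is confident enough, predict it.
    - Otherwise, if a small set of classes is plausible, return that set
      (the "refine" option).
    - If nothing is plausible, defer the decision (the "reject" option).
    """
    probs = np.asarray(probs, dtype=float)
    top = int(np.argmax(probs))
    if probs[top] >= reject_thresh:
        return {top}                                  # ordinary prediction
    plausible = set(np.flatnonzero(probs >= refine_thresh).tolist())
    return plausible if plausible else None           # None = reject outright

print(reject_or_refine([0.70, 0.20, 0.10]))                       # confident: {0}
print(reject_or_refine([0.40, 0.35, 0.25]))                       # refined set: {0, 1, 2}
print(reject_or_refine([0.40, 0.35, 0.25], refine_thresh=0.45))   # reject: None
```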

Linking Sequences of Events with Sparse or No Common Occurrence across Data Sets

Title Linking Sequences of Events with Sparse or No Common Occurrence across Data Sets
Authors Yunsung Kim
Abstract Data of practical interest - such as personal records, transaction logs, and medical histories - are sequential collections of events relevant to a particular source entity. Recent studies have attempted to link sequences that represent a common entity across data sets to allow more comprehensive statistical analyses and to identify potential privacy failures. Yet, current approaches remain tailored to their specific domains of application, and they fail when co-referent sequences in different data sets contain sparse or no common events, which frequently occurs in practice. To address this, we formalize the general problem of “sequence linkage” and describe “LDA-Link,” a generic solution that is applicable even when co-referent event sequences contain no common items at all. LDA-Link is built upon the “Split-Document” model, a new mixed-membership probabilistic model for the generation of event sequence collections. It detects the latent similarity of sequences and thus remains robust particularly when co-referent sequences share sparse or no event overlap. We apply LDA-Link in the context of social media profile reconciliation, where users make no common posts across platforms, and compare it to the state-of-the-art generic solution for sequence linkage. (A simplified topic-similarity linkage sketch follows this entry.)
Tasks
Published 2017-11-12
URL http://arxiv.org/abs/1711.04248v1
PDF http://arxiv.org/pdf/1711.04248v1.pdf
PWC https://paperswithcode.com/paper/linking-sequences-of-events-with-sparse-or-no
Repo
Framework
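
A much-simplified stand-in for the Split-Document/LDA-Link idea: fit one topic model over event "documents" from both data sets and link sequences by the similarity of their topic mixtures. The event strings, the shared-LDA shortcut, and the cosine matching are invented for illustration; they are not the paper's model.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Each "sequence" is a bag of event tokens from one of two data sets (toy data;
# unlike the paper's hardest setting, these toy sequences do share some tokens).
set_a = ["login search buy buy logout", "post like like comment", "stream pause stream stop"]
set_b = ["cart checkout buy login", "comment like share post", "play pause seek stop"]

vec = CountVectorizer()
counts = vec.fit_transform(set_a + set_b)

# Fit a shared topic model, then represent every sequence by its topic mixture.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
theta = lda.fit_transform(counts)
theta_a, theta_b = theta[: len(set_a)], theta[len(set_a):]

# Link each sequence in set A to its most similar topic profile in set B.
sim = theta_a @ theta_b.T
sim /= np.linalg.norm(theta_a, axis=1, keepdims=True) * np.linalg.norm(theta_b, axis=1)
print(sim.argmax(axis=1))   # candidate links A -> B
```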

Deep Convolutional Neural Networks for Interpretable Analysis of EEG Sleep Stage Scoring

Title Deep Convolutional Neural Networks for Interpretable Analysis of EEG Sleep Stage Scoring
Authors Albert Vilamala, Kristoffer H. Madsen, Lars K. Hansen
Abstract Sleep studies are important for diagnosing sleep disorders such as insomnia, narcolepsy or sleep apnea. They rely on manual scoring of sleep stages from raw polysomnography signals, which is a tedious visual task requiring the workload of highly trained professionals. Consequently, research efforts to pursue automatic stage scoring based on machine learning techniques have been carried out over recent years. In this work, we resort to multitaper spectral analysis to create visually interpretable images of sleep patterns from EEG signals as inputs to a deep convolutional network trained to solve visual recognition tasks. As a working example of transfer learning, a system able to accurately classify sleep stages in new unseen patients is presented. Evaluations on a widely used, publicly available dataset compare favourably to state-of-the-art results, while providing a framework for visual interpretation of outcomes. (A rough sketch of this spectrogram-plus-CNN pipeline follows this entry.)
Tasks EEG, Sleep Stage Detection, Transfer Learning
Published 2017-10-02
URL http://arxiv.org/abs/1710.00633v1
PDF http://arxiv.org/pdf/1710.00633v1.pdf
PWC https://paperswithcode.com/paper/deep-convolutional-neural-networks-for-10
Repo
Framework
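
A rough sketch of the pipeline: turn one EEG epoch into a multitaper spectrogram image and feed it to an ImageNet-style CNN whose last layer is resized for five sleep stages. The window/taper settings, the VGG-16 backbone, and the five-class head are assumptions; `weights=None` would need to be swapped for pretrained weights to actually perform transfer learning.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import windows
from torchvision import models

def multitaper_spectrogram(x, fs=100, win_s=2.0, step_s=1.0, nw=3, k=5):
    """Average spectrograms over K DPSS tapers to get a smooth time-frequency image."""
    win, step = int(win_s * fs), int(step_s * fs)
    tapers = windows.dpss(win, nw, k)                                   # (k, win)
    frames = np.stack([x[i:i + win] for i in range(0, len(x) - win + 1, step)])
    spec = np.mean(np.abs(np.fft.rfft(frames[None] * tapers[:, None], axis=-1)) ** 2, axis=0)
    return np.log(spec.T + 1e-8)                                        # (freq, time)

# Toy 30-second EEG epoch -> image -> CNN with a new 5-class head.
eeg = np.random.default_rng(0).normal(size=30 * 100)
img = multitaper_spectrogram(eeg)
img = (img - img.min()) / (img.max() - img.min())
x = torch.tensor(img, dtype=torch.float32).unsqueeze(0).repeat(3, 1, 1).unsqueeze(0)
x = nn.functional.interpolate(x, size=(224, 224))

net = models.vgg16(weights=None)            # use pretrained ImageNet weights in practice
net.classifier[-1] = nn.Linear(4096, 5)     # 5 sleep stages, fine-tune this head
print(net(x).shape)                         # torch.Size([1, 5])
```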

Triagem virtual de imagens de imuno-histoquímica usando redes neurais artificiais e espectro de padrões

Title Triagem virtual de imagens de imuno-histoquímica usando redes neurais artificiais e espectro de padrões (Virtual triage of immunohistochemistry images using artificial neural networks and pattern spectra)
Authors Higor Neto Lima, Wellington Pinheiro dos Santos, Mêuser Jorge Silva Valença
Abstract The importance of organizing medical images according to their nature, application and relevance is increasing. Furthermore, a prior selection of medical images can be useful to accelerate analysis by pathologists. In this work we propose an image classifier to be integrated into a CBIR (Content-Based Image Retrieval) selection system. This classifier is based on pattern spectra and neural networks. Feature selection is performed using pattern spectra and principal component analysis, whilst image classification is based on multilayer perceptrons and a composition of self-organizing maps and learning vector quantization. These methods were applied to content selection of immunohistochemical images of placentas and of lungs from deceased newborns. Results demonstrated that this approach can reach reasonable classification performance. (A toy pattern-spectrum classification sketch follows this entry.)
Tasks Content-Based Image Retrieval, Feature Selection, Image Classification, Image Retrieval, Quantization
Published 2017-12-03
URL http://arxiv.org/abs/1712.01695v1
PDF http://arxiv.org/pdf/1712.01695v1.pdf
PWC https://paperswithcode.com/paper/triagem-virtual-de-imagens-de-imuno
Repo
Framework
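
A toy sketch of one branch of the approach: pattern spectra computed by greyscale morphological openings, PCA for feature reduction, and an MLP classifier. The SOM/LVQ composition from the paper is omitted, and the synthetic blob images are stand-ins for immunohistochemistry slides.

```python
import numpy as np
from scipy import ndimage
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

def pattern_spectrum(img, max_size=6):
    """Granulometry: mass removed by greyscale openings of increasing size."""
    areas = [ndimage.grey_opening(img, size=s).sum() for s in range(1, max_size + 1)]
    return -np.diff([img.sum()] + areas)            # mass lost at each scale

# Toy data set: blobs of two different characteristic sizes.
rng = np.random.default_rng(0)
def make_image(blob):
    img = np.zeros((32, 32))
    for _ in range(5):
        r, c = rng.integers(4, 28, size=2)
        img[r - blob:r + blob, c - blob:c + blob] = 1.0
    return img

X = np.array([pattern_spectrum(make_image(b)) for b in [1, 3] * 30])
y = np.array([0, 1] * 30)

clf = make_pipeline(PCA(n_components=3),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
print(clf.fit(X, y).score(X, y))
```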

Clusters of Driving Behavior from Observational Smartphone Data

Title Clusters of Driving Behavior from Observational Smartphone Data
Authors Josh Warren, Jeff Lipkowitz, Vadim Sokolov
Abstract Understanding driving behaviors is essential for improving the safety and mobility of our transportation systems. Data is usually collected via simulator-based studies or naturalistic driving studies. These techniques allow for understanding relations between demographics, road conditions and safety, but they are very costly and time-consuming. Thanks to the ubiquity of smartphones, we have an opportunity to substantially complement more traditional data collection techniques with data extracted from phone sensors, such as GPS, accelerometer, gyroscope and camera. We developed statistical models that provide insight into driver behavior in the San Francisco metro area based on tens of thousands of driver logs. We used cell phone sensor data drawn from five hundred drivers in San Francisco to understand the speed of traffic across the city as well as the maneuvers of drivers in different areas. Specifically, we clustered drivers based on their driving behavior, examined driver norms by street, and flagged driving behaviors that deviated from the norm. (A toy feature-extraction and clustering sketch follows this entry.)
Tasks
Published 2017-10-12
URL http://arxiv.org/abs/1710.04502v3
PDF http://arxiv.org/pdf/1710.04502v3.pdf
PWC https://paperswithcode.com/paper/clusters-of-driving-behavior-from
Repo
Framework
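
A toy sketch of the clustering step: summarize each driver's phone-sensor log with a few hand-picked features and cluster the feature vectors. The simulated logs, the feature choices, and the use of k-means are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

def driver_features(speed, accel):
    """Per-driver summary features from phone sensor logs (toy choice)."""
    return [speed.mean(), speed.std(), np.percentile(speed, 95),
            (np.abs(accel) > 3.0).mean()]           # share of harsh accel/brake events

# Simulate logs for "calm" and "aggressive" drivers.
drivers = []
for style in ["calm"] * 25 + ["aggressive"] * 25:
    n = 500
    if style == "calm":
        speed, accel = rng.normal(12, 2, n), rng.normal(0, 1.0, n)
    else:
        speed, accel = rng.normal(18, 5, n), rng.normal(0, 2.5, n)
    drivers.append(driver_features(speed, accel))

X = StandardScaler().fit_transform(drivers)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(labels))    # two behaviour clusters
```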

Revenue Optimization with Approximate Bid Predictions

Title Revenue Optimization with Approximate Bid Predictions
Authors Andrés Muñoz Medina, Sergei Vassilvitskii
Abstract In the context of advertising auctions, finding good reserve prices is a notoriously challenging learning problem. This is due to the heterogeneity of ad opportunity types and the non-convexity of the objective function. In this work, we show how to reduce reserve price optimization to the standard setting of prediction under squared loss, a well-understood problem in the learning community. We further bound the gap between the expected bid and revenue in terms of the average loss of the predictor. This is the first result that formally relates the revenue gained to the quality of a standard machine learned model. (A toy sketch of predicting bids and setting reserves follows this entry.)
Tasks
Published 2017-06-15
URL http://arxiv.org/abs/1706.04732v2
PDF http://arxiv.org/pdf/1706.04732v2.pdf
PWC https://paperswithcode.com/paper/revenue-optimization-with-approximate-bid
Repo
Framework
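
A toy sketch of the reduction in spirit: train an off-the-shelf squared-loss regressor to predict the bid, then turn predictions into reserve prices. The constant shading factor, the synthetic features, and the bid model are invented for illustration; the paper's actual construction and its revenue guarantee are more refined.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Toy auction data: features describe the ad opportunity, b is the top bid.
X = rng.normal(size=(5000, 4))
b = np.exp(0.5 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 0.3, 5000))

# Step 1: reduce to squared-loss prediction of the bid.
h = GradientBoostingRegressor(random_state=0).fit(X[:4000], b[:4000])

# Step 2: turn predictions into (shaded) reserve prices and measure revenue,
# assuming a take-it-or-leave-it reserve: revenue = r if b >= r else 0.
shade = 0.8                                  # illustrative safety margin
r = shade * h.predict(X[4000:])
revenue = np.where(b[4000:] >= r, r, 0.0).mean()
print(round(revenue, 3))
```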

Online Learning with Regularized Kernel for One-class Classification

Title Online Learning with Regularized Kernel for One-class Classification
Authors Chandan Gautam, Aruna Tiwari, Sundaram Suresh, Kapil Ahuja
Abstract This paper presents an online learning approach with a regularized kernel-based one-class extreme learning machine (ELM) classifier, referred to as online RK-OC-ELM. The baseline kernel hyperplane model considers the whole data set in a single chunk with a regularized ELM approach for offline learning in the case of one-class classification (OCC). In this paper, the basic hyperplane model is further adapted in an online fashion from a stream of training samples. Two frameworks, viz. boundary and reconstruction, are presented to detect the target class in online RK-OC-ELM. The boundary-framework one-class classifier consists of a single-node output architecture, and the classifier endeavors to approximate all data to any real number. The reconstruction-framework one-class classifier, in contrast, is an autoencoder architecture, where the output nodes are identical to the input nodes and the classifier endeavors to reconstruct the input layer at the output layer. Both frameworks employ regularized kernel ELM-based online learning, and consistency-based model selection is employed to select the learning algorithm parameters. The performance of online RK-OC-ELM has been evaluated on standard benchmark datasets as well as on artificial datasets, and the results are compared with existing state-of-the-art one-class classifiers. The results indicate that the online one-class classifier is slightly better than or on par with batch learning based approaches. Since the proposed classifiers are built on the ELM, they also inherit its benefit of faster computation compared to traditional autoencoder-based one-class classifiers. (A minimal sketch of the boundary framework follows this entry.)
Tasks Model Selection, One-class classifier
Published 2017-01-17
URL http://arxiv.org/abs/1701.04508v2
PDF http://arxiv.org/pdf/1701.04508v2.pdf
PWC https://paperswithcode.com/paper/online-learning-with-regularized-kernel-for
Repo
Framework
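
A boundary-framework caricature: regress every stored sample onto the constant target 1 with kernel ridge regression over a sliding window, and flag points whose output strays far from 1. This naively re-solves the system at each step rather than using the paper's recursive ELM update; the kernel width, window size, and regularization are illustrative.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

class OnlineKernelOneClass:
    """Boundary-style one-class model: regress every stored sample to the
    constant target 1 with kernel ridge regression over a sliding window;
    a large |f(x) - 1| marks x as an outlier."""

    def __init__(self, window=100, gamma=1.0, reg=0.1):
        self.window, self.gamma, self.reg = window, gamma, reg
        self.X = np.empty((0, 0))

    def partial_fit(self, x):
        x = np.atleast_2d(x)
        self.X = x if self.X.size == 0 else np.vstack([self.X, x])[-self.window:]
        K = rbf(self.X, self.X, self.gamma)
        self.beta = np.linalg.solve(K + self.reg * np.eye(len(K)), np.ones(len(K)))

    def score(self, x):
        return np.abs(rbf(np.atleast_2d(x), self.X, self.gamma) @ self.beta - 1.0)

rng = np.random.default_rng(0)
model = OnlineKernelOneClass()
for point in rng.normal(0, 1, size=(120, 2)):      # stream of target-class points
    model.partial_fit(point)
print(model.score([0.0, 0.0]), model.score([6.0, 6.0]))   # inlier vs. obvious outlier
```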

Multiple component decomposition from millimeter single-channel data

Title Multiple component decomposition from millimeter single-channel data
Authors Iván Rodríguez-Montoya, David Sánchez-Argüelles, Itziar Aretxaga, Emanuele Bertone, Miguel Chávez-Dagostino, David H. Hughes, Alfredo Montaña, Grant W. Wilson, Milagros Zeballos
Abstract We present an implementation of a blind source separation algorithm to remove foregrounds from millimeter surveys made with single-channel instruments. To make such a decomposition possible over single-wavelength data, we generate levels of artificial redundancy, perform a blind decomposition, calibrate the resulting maps, and lastly measure physical information. We simulate the reduction pipeline using mock data (atmospheric fluctuations, extended astrophysical foregrounds, and point-like sources) and then apply the same methodology to the AzTEC/ASTE survey of the Great Observatories Origins Deep Survey-South (GOODS-S). In both applications, our technique robustly decomposes redundant maps into their underlying components, reducing flux bias, improving signal-to-noise, and minimizing information loss. In particular, the GOODS-S survey is decomposed into four independent physical components: one is the already known map of point sources, two are atmospheric and systematic foregrounds, and the fourth is an extended emission that can be interpreted as the confusion background of faint sources. (A toy redundancy-and-separation sketch follows this entry.)
Tasks
Published 2017-11-23
URL http://arxiv.org/abs/1711.08456v1
PDF http://arxiv.org/pdf/1711.08456v1.pdf
PWC https://paperswithcode.com/paper/multiple-component-decomposition-from
Repo
Framework
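
A toy sketch of the redundancy-then-separation idea: build several re-weighted noisy copies of the same one-dimensional map and run a generic blind source separation on them. FastICA from scikit-learn stands in for the paper's algorithm, and all component shapes (point sources, extended emission, "atmosphere") are invented.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n = 64 * 64

# Underlying components of a single-channel map: point sources, a smooth
# extended foreground, and correlated "atmospheric" noise.
points = np.zeros(n)
points[rng.choice(n, 40, replace=False)] = rng.exponential(5, 40)
x = np.linspace(0, 1, n)
extended = np.sin(2 * np.pi * 3 * x)
atmosphere = np.convolve(rng.normal(size=n), np.ones(50) / 50, mode="same")
sources = np.vstack([points, extended, atmosphere])

# Artificial redundancy: several re-weighted noisy copies of the same sky.
mixing = rng.uniform(0.5, 1.5, size=(6, 3))
maps = mixing @ sources + 0.05 * rng.normal(size=(6, n))

ica = FastICA(n_components=3, random_state=0)
recovered = ica.fit_transform(maps.T).T            # (3, n) estimated components
print(np.round(np.corrcoef(recovered, sources)[:3, 3:], 2))
```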

Neural Variational Inference and Learning in Undirected Graphical Models

Title Neural Variational Inference and Learning in Undirected Graphical Models
Authors Volodymyr Kuleshov, Stefano Ermon
Abstract Many problems in machine learning are naturally expressed in the language of undirected graphical models. Here, we propose black-box learning and inference algorithms for undirected models that optimize a variational approximation to the log-likelihood of the model. Central to our approach is an upper bound on the log-partition function parametrized by a function q that we express as a flexible neural network. Our bound makes it possible to track the partition function during learning, to speed up sampling, and to train a broad class of hybrid directed/undirected models via a unified variational inference framework. We empirically demonstrate the effectiveness of our method on several popular generative modeling datasets. (A toy upper-bound sketch on the log-partition function follows this entry.)
Tasks
Published 2017-11-07
URL http://arxiv.org/abs/1711.02679v2
PDF http://arxiv.org/pdf/1711.02679v2.pdf
PWC https://paperswithcode.com/paper/neural-variational-inference-and-learning-in
Repo
Framework
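
A toy sketch in the same spirit, not necessarily the paper's exact bound: a generic chi-square-style upper bound on the log-partition function, log Z <= 0.5 log E_q[(p_tilde/q)^2], parametrized by a learnable Gaussian q and minimized by gradient descent. The 2-D Gaussian "MRF" is chosen so the true log Z is known for comparison; the Monte Carlo estimate only approximates the bound.

```python
import math
import torch

torch.manual_seed(0)

# Unnormalised 2-D Gaussian "MRF": p_tilde(x) = exp(-0.5 x^T A x).
A = torch.tensor([[2.0, 0.6], [0.6, 1.0]])
log_p_tilde = lambda x: -0.5 * torch.einsum('bi,ij,bj->b', x, A, x)
true_log_z = 0.5 * math.log((2 * math.pi) ** 2 / torch.det(A).item())

# Variational proposal q = diagonal Gaussian with learnable parameters.
mu = torch.zeros(2, requires_grad=True)
log_std = torch.zeros(2, requires_grad=True)
opt = torch.optim.Adam([mu, log_std], lr=0.05)

for step in range(500):
    eps = torch.randn(256, 2)
    x = mu + eps * log_std.exp()                       # reparameterised samples from q
    log_q = (-0.5 * eps ** 2 - log_std - 0.5 * math.log(2 * math.pi)).sum(-1)
    log_w = log_p_tilde(x) - log_q                     # importance log-weights
    bound = 0.5 * torch.logsumexp(2 * log_w, 0) - 0.5 * math.log(256)  # approx >= log Z
    opt.zero_grad(); bound.backward(); opt.step()

print(round(bound.item(), 3), round(true_log_z, 3))    # bound tightens towards log Z
```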

A LBP Based Correspondence Identification Scheme for Multi-view Sensing Network

Title A LBP Based Correspondence Identification Scheme for Multi-view Sensing Network
Authors Raghavendra Kandukuri
Abstract In this paper, we describe a correspondence identification method between two views of a regular RGB camera that can run in real time. The basic idea is to first apply normalized cross-correlation to retrieve a sparse set of matching pairs from the image pair. A loopy belief propagation scheme is then applied to the set of possible candidates to densely identify correspondences across the views. The experimental results demonstrate accuracy and precision that outperform the state of the art in the computer vision field. Meanwhile, the implementation is simple enough that it can be optimized for real-time performance. We give a detailed comparison with existing approaches and show that this method can enable various practical applications, from 3D reconstruction to image search. (A sketch of the sparse NCC matching step follows this entry.)
Tasks 3D Reconstruction, Image Retrieval
Published 2017-09-16
URL http://arxiv.org/abs/1709.06509v1
PDF http://arxiv.org/pdf/1709.06509v1.pdf
PWC https://paperswithcode.com/paper/a-lbp-based-correspondence-identification
Repo
Framework
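
Only the first stage is sketched here: sparse candidate matching by normalized cross-correlation over a small search window. The loopy belief propagation densification is omitted, and the patch size, search radius, and acceptance threshold are arbitrary choices.

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation between two equally sized patches."""
    a, b = a - a.mean(), b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum()) + 1e-12
    return float((a * b).sum() / denom)

def sparse_matches(img1, img2, patch=8, stride=8, search=6, thresh=0.9):
    """For each patch in img1, search a small window in img2 for the best NCC
    match and keep only confident pairs (the sparse seed set)."""
    h, w = img1.shape
    pairs = []
    for y in range(0, h - patch, stride):
        for x in range(0, w - patch, stride):
            p = img1[y:y + patch, x:x + patch]
            best, best_xy = -1.0, None
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - patch and 0 <= xx <= w - patch:
                        s = ncc(p, img2[yy:yy + patch, xx:xx + patch])
                        if s > best:
                            best, best_xy = s, (yy, xx)
            if best >= thresh:
                pairs.append(((y, x), best_xy))
    return pairs

# Second view = first view shifted by a few pixels plus noise.
rng = np.random.default_rng(0)
view1 = rng.random((48, 48))
view2 = np.roll(view1, (3, 2), axis=(0, 1)) + 0.01 * rng.normal(size=(48, 48))
print(sparse_matches(view1, view2)[:3])
```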