October 18, 2019

2886 words 14 mins read

Paper Group ANR 506

Semblance: A Rank-Based Kernel on Probability Spaces for Niche Detection. A Bayesian Model for Activities Recommendation and Event Structure Optimization Using Visitors Tracking. Neural Architecture Search: A Survey. Variance Suppression: Balanced Training Process in Deep Learning. Comparison of Semantic Segmentation Approaches for Horizon/Sky Line …

Semblance: A Rank-Based Kernel on Probability Spaces for Niche Detection

Title Semblance: A Rank-Based Kernel on Probability Spaces for Niche Detection
Authors Divyansh Agarwal, Nancy R. Zhang
Abstract In data science, determining proximity between observations is critical to many downstream analyses such as clustering, information retrieval and classification. However, when the underlying structure of the data probability space is unclear, the function used to compute similarity between data points is often arbitrarily chosen. Here, we present a novel concept of proximity, Semblance, that uses the empirical distribution across all observations to inform the similarity between each pair. The advantage of Semblance lies in its distribution-free formulation and its ability to detect niche features by placing greater emphasis on similarity between observation pairs that fall at the outskirts of the data distribution, as opposed to those that fall towards the center. We prove that Semblance is a valid Mercer kernel, thus allowing its principled use in kernel-based learning machines. Semblance can be applied to any data modality, and we demonstrate its consistently improved performance against conventional methods through simulations and three real case studies from very different applications, viz. cell type classification using single cell RNA sequencing, selecting predictors of positive return on real estate investments, and image compression.
Tasks Image Compression, Image Reconstruction, Information Retrieval
Published 2018-08-06
URL https://arxiv.org/abs/1808.02061v4
PDF https://arxiv.org/pdf/1808.02061v4.pdf
PWC https://paperswithcode.com/paper/semblance-a-rank-based-kernel-on-probability
Repo
Framework
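
The abstract above describes a rank-based, tail-emphasizing notion of similarity. The sketch below is only an illustration of that idea, not the Semblance formula from the paper: the per-feature weighting is an assumption chosen to mimic the described behaviour, and, unlike Semblance, this sketch is not guaranteed to be a Mercer kernel.

```python
# Illustrative sketch only: a rank-based, tail-emphasizing similarity.
# This is NOT the exact Semblance formula; the weighting below is an assumption
# chosen to mimic the described behaviour (pairs falling in the tails of the
# empirical distribution score higher), and the result need not be PSD.
import numpy as np
from scipy.stats import rankdata

def rank_tail_similarity(X):
    """X: (n_samples, n_features) data matrix. Returns an (n, n) similarity matrix."""
    n, p = X.shape
    # Empirical CDF values (ranks scaled to (0, 1)) per feature.
    U = np.column_stack([rankdata(X[:, j]) / (n + 1) for j in range(p)])
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # Pairs whose ranks are close AND far from the median (0.5) get more
            # credit than pairs agreeing near the centre of the distribution.
            closeness = 1.0 - np.abs(U[i] - U[j])          # per-feature rank agreement
            tail_weight = np.abs((U[i] + U[j]) / 2 - 0.5)  # per-feature distance from centre
            K[i, j] = np.mean(closeness * tail_weight)
    return K

# Example: K = rank_tail_similarity(np.random.randn(50, 10))
```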

A Bayesian Model for Activities Recommendation and Event Structure Optimization Using Visitors Tracking

Title A Bayesian Model for Activities Recommendation and Event Structure Optimization Using Visitors Tracking
Authors Henrique X. Goulart, Guilherme A. Wachs-Lopes
Abstract In events composed of many activities, there is a problem of retrieving and managing information about the visitors who attend those activities. This management is crucial for finding activities that draw the visitors' attention, identifying an ideal positioning for activities, and determining which paths are most frequented by visitors. In this work, these features are studied using complex network theory. First, an artificial database was generated to study the mentioned features. Second, this work presents a method for optimizing the event structure that outperforms a random baseline, and a recommendation system that achieves ~95% accuracy.
Tasks
Published 2018-02-28
URL http://arxiv.org/abs/1802.10393v1
PDF http://arxiv.org/pdf/1802.10393v1.pdf
PWC https://paperswithcode.com/paper/a-bayesian-model-for-activities
Repo
Framework

Neural Architecture Search: A Survey

Title Neural Architecture Search: A Survey
Authors Thomas Elsken, Jan Hendrik Metzen, Frank Hutter
Abstract Deep Learning has enabled remarkable progress over the last years on a variety of tasks, such as image recognition, speech recognition, and machine translation. One crucial aspect of this progress is the design of novel neural architectures. Currently employed architectures have mostly been developed manually by human experts, which is a time-consuming and error-prone process. Because of this, there is growing interest in automated neural architecture search methods. We provide an overview of existing work in this field of research and categorize these methods according to three dimensions: search space, search strategy, and performance estimation strategy.
Tasks Machine Translation, Neural Architecture Search, Speech Recognition
Published 2018-08-16
URL http://arxiv.org/abs/1808.05377v3
PDF http://arxiv.org/pdf/1808.05377v3.pdf
PWC https://paperswithcode.com/paper/neural-architecture-search-a-survey
Repo
Framework

Variance Suppression: Balanced Training Process in Deep Learning

Title Variance Suppression: Balanced Training Process in Deep Learning
Authors Tao Yi, Xingxuan Wang
Abstract Stochastic gradient descent updates parameters with a gradient summed over a random data batch. This summation leads to an unbalanced training process if the data are unbalanced. To address this issue, this paper takes both the error variance and the error mean into consideration. An approach for adaptively adjusting the trade-off between the two terms is also part of our algorithm. Because this algorithm suppresses the error variance, we name it Variance Suppression Gradient Descent (VSSGD). Experimental results demonstrate that VSSGD can accelerate the training process, effectively prevent overfitting, and improve the network's learning capacity from small samples.
Tasks
Published 2018-11-20
URL http://arxiv.org/abs/1811.08163v3
PDF http://arxiv.org/pdf/1811.08163v3.pdf
PWC https://paperswithcode.com/paper/variance-suppression-balanced-training
Repo
Framework
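
The abstract does not spell out the VSSGD update, so the following is only a sketch under the assumption that the batch objective trades off the mean of the per-sample losses against their variance, with a fixed coefficient alpha standing in for the paper's adaptive adjustment.

```python
# Sketch only: assumes the batch objective is mean(loss) + alpha * var(loss);
# the paper's exact update and adaptive trade-off are not given in the abstract.
import torch

def variance_suppressed_loss(per_sample_losses: torch.Tensor, alpha: float = 0.1) -> torch.Tensor:
    """per_sample_losses: shape (batch_size,), one loss per example in the batch."""
    mean_term = per_sample_losses.mean()
    var_term = ((per_sample_losses - mean_term) ** 2).mean()  # spread across the batch
    return mean_term + alpha * var_term

# Usage with a classifier (alpha is an assumed hyperparameter):
# criterion = torch.nn.CrossEntropyLoss(reduction="none")
# loss = variance_suppressed_loss(criterion(logits, targets), alpha=0.1)
# loss.backward(); optimizer.step()
```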

Comparison of Semantic Segmentation Approaches for Horizon/Sky Line Detection

Title Comparison of Semantic Segmentation Approaches for Horizon/Sky Line Detection
Authors Touqeer Ahmad, Pavel Campr, Martin Čadík, George Bebis
Abstract Horizon or skyline detection plays a vital role in mountainous visual geo-localization; however, most of the recently proposed visual geo-localization approaches rely on user-in-the-loop skyline detection methods. Detecting such a segmenting boundary fully autonomously would be a clear step forward for these localization approaches. This paper provides a quantitative comparison of four such methods for autonomous horizon/sky line detection on an extensive data set. Specifically, we compare four recently proposed segmentation methods: one explicitly targeting the problem of horizon detection [Ahmad15], a second focused on visual geo-localization but relying on accurate skyline detection [Saurer16], and two proposed for general semantic segmentation, Fully Convolutional Networks (FCN) [Long15] and SegNet [Badrinarayanan15]. Each of the first two methods is trained on a common training set [Baatz12] comprising about 200 images, while models for the third and fourth methods are fine-tuned for the sky segmentation problem through transfer learning using the same data set. Each method is tested on an extensive test set (about 3K images) covering various challenging geographical, weather, illumination, and seasonal conditions. We report the average accuracy and average absolute pixel error for each of the presented formulations.
Tasks Semantic Segmentation, Transfer Learning
Published 2018-05-21
URL http://arxiv.org/abs/1805.08105v1
PDF http://arxiv.org/pdf/1805.08105v1.pdf
PWC https://paperswithcode.com/paper/comparison-of-semantic-segmentation
Repo
Framework
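
The two reported metrics can be made concrete with a short sketch; it assumes the sky line is represented as one row index per image column and the segmentation as a binary sky mask, which may differ from the papers' exact evaluation protocol.

```python
# Assumed conventions: masks are 2D arrays with 1 = sky (at the top of the image)
# and 0 = non-sky; the horizon is the first non-sky row in each column.
import numpy as np

def segmentation_accuracy(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Fraction of pixels labelled correctly (sky vs. non-sky)."""
    return float((pred_mask == gt_mask).mean())

def mean_absolute_pixel_error(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Average per-column vertical distance between predicted and true sky lines."""
    pred_rows = np.argmax(pred_mask == 0, axis=0)  # first non-sky pixel per column
    gt_rows = np.argmax(gt_mask == 0, axis=0)
    return float(np.abs(pred_rows - gt_rows).mean())
```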

Indoor Scene Understanding in 2.5/3D for Autonomous Agents: A Survey

Title Indoor Scene Understanding in 2.5/3D for Autonomous Agents: A Survey
Authors Muzammal Naseer, Salman H Khan, Fatih Porikli
Abstract With the availability of low-cost and compact 2.5/3D visual sensing devices, the computer vision community is experiencing a growing interest in visual scene understanding of indoor environments. This survey paper provides a comprehensive background to this research topic. We begin with a historical perspective, followed by popular 3D data representations and a comparative analysis of available datasets. Before delving into application-specific details, this survey provides a succinct introduction to the core technologies that underlie the methods extensively used in the literature. Afterwards, we review the developed techniques according to a taxonomy based on scene understanding tasks. This covers holistic indoor scene understanding as well as subtasks such as scene classification, object detection, pose estimation, semantic segmentation, 3D reconstruction, saliency detection, physics-based reasoning, and affordance prediction. We then summarize the performance metrics used for evaluation in different tasks and give a quantitative comparison of recent state-of-the-art techniques. We conclude this review with the current challenges and an outlook on the open research problems requiring further investigation.
Tasks 3D Reconstruction, Object Detection, Pose Estimation, Saliency Detection, Scene Classification, Scene Understanding, Semantic Segmentation
Published 2018-03-09
URL http://arxiv.org/abs/1803.03352v2
PDF http://arxiv.org/pdf/1803.03352v2.pdf
PWC https://paperswithcode.com/paper/indoor-scene-understanding-in-253d-for
Repo
Framework

Local Stability and Performance of Simple Gradient Penalty mu-Wasserstein GAN

Title Local Stability and Performance of Simple Gradient Penalty mu-Wasserstein GAN
Authors Cheolhyeong Kim, Seungtae Park, Hyung Ju Hwang
Abstract Wasserstein GAN (WGAN) is a model that minimizes the Wasserstein distance between a data distribution and a sample distribution. Recent studies have proposed stabilizing the training process for the WGAN and implementing the Lipschitz constraint. In this study, we prove the local stability of optimizing the simple gradient penalty $\mu$-WGAN (SGP $\mu$-WGAN) under suitable assumptions regarding the equilibrium and penalty measure $\mu$. The measure-valued differentiation concept is employed to deal with the derivative of the penalty terms, which is helpful for handling abstract singular measures with lower-dimensional support. Based on this analysis, we claim that penalizing the data manifold or sample manifold is the key to regularizing the original WGAN with a gradient penalty. Experimental results obtained with unintuitive penalty measures that satisfy our assumptions are also provided to support our theoretical results.
Tasks
Published 2018-10-05
URL http://arxiv.org/abs/1810.02528v1
PDF http://arxiv.org/pdf/1810.02528v1.pdf
PWC https://paperswithcode.com/paper/local-stability-and-performance-of-simple
Repo
Framework
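
For context, a squared-gradient-norm penalty of the kind the title suggests can be written as $\mathbb{E}_{x\sim\mu}[\lVert\nabla_x D(x)\rVert^2]$. The PyTorch sketch below is an illustration under that assumption, with the penalty measure $\mu$ taken to be the real or generated batch and lambda_gp an assumed hyperparameter; the paper's precise penalty term and conditions on $\mu$ are given there.

```python
# Sketch of a "simple" gradient penalty E_mu[ ||grad_x D(x)||^2 ] for a WGAN critic.
# The choice of penalty measure mu and the weight lambda_gp are assumptions.
import torch

def simple_gradient_penalty(discriminator, x_penalty: torch.Tensor) -> torch.Tensor:
    """x_penalty: a batch drawn from the penalty measure mu."""
    x = x_penalty.clone().requires_grad_(True)
    d_out = discriminator(x)
    grads = torch.autograd.grad(outputs=d_out.sum(), inputs=x, create_graph=True)[0]
    return grads.pow(2).flatten(start_dim=1).sum(dim=1).mean()

# Critic loss with an assumed penalty weight lambda_gp, penalizing on real data:
# loss_D = D(fake).mean() - D(real).mean() + lambda_gp * simple_gradient_penalty(D, real)
```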

Unsupervised Anomalous Data Space Specification

Title Unsupervised Anomalous Data Space Specification
Authors Ian J Davis
Abstract Computer algorithms are written with the intent that, when run, they perform a useful function. Typically, any information obtained is unknown until the algorithm is run. However, if the behavior of an algorithm can be fully described by precomputing just once how this algorithm will respond when executed on any input, this precomputed result provides a complete specification for all solutions in the problem domain. We apply this idea to a previous anomaly detection algorithm, and in doing so transform it from one that merely detects individual anomalies when asked to discover potentially anomalous values into an algorithm also capable of generating a complete specification for the values it would deem anomalous. This specification is derived by examining no more than a small training data set, can be obtained in a small constant amount of time, and is inherently far more useful than results obtained by repeated execution of this tool. For example, armed with such a specification one can ask how close an anomaly is to being deemed normal, and can validate this answer not by exhaustively testing the algorithm but by examining whether the generated specification is indeed correct. This powerful idea can be applied to any algorithm whose runtime behavior can be recovered from its construction and so has wide applicability.
Tasks Anomaly Detection
Published 2018-10-18
URL http://arxiv.org/abs/1810.08309v1
PDF http://arxiv.org/pdf/1810.08309v1.pdf
PWC https://paperswithcode.com/paper/unsupervised-anomalous-data-space
Repo
Framework
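
To make the idea of a precomputed specification concrete, here is an illustration with a deliberately simple detector (not the authors' algorithm): per-feature interval thresholds fitted once from training data double as a complete description of the anomalous region, from which questions like "how close is this point to being deemed normal?" can be answered directly.

```python
# Illustration only, NOT the paper's algorithm: the fitted (lo, hi) bounds are a
# complete specification of what this simple detector would flag as anomalous.
import numpy as np

def fit_specification(train: np.ndarray, k: float = 3.0):
    """Returns per-feature (lo, hi) bounds; values outside are deemed anomalous."""
    mu, sigma = train.mean(axis=0), train.std(axis=0)
    return mu - k * sigma, mu + k * sigma

def is_anomalous(x: np.ndarray, spec) -> bool:
    lo, hi = spec
    return bool(np.any((x < lo) | (x > hi)))

def distance_to_normal(x: np.ndarray, spec) -> float:
    """How far x would have to move, per the specification, to be deemed normal."""
    lo, hi = spec
    return float(np.maximum(np.maximum(lo - x, x - hi), 0.0).sum())
```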

Self Organizing Classifiers and Niched Fitness

Title Self Organizing Classifiers and Niched Fitness
Authors Danilo Vasconcellos Vargas, Hirotaka Takano, Junichi Murata
Abstract Learning classifier systems are adaptive learning systems that have been widely applied in a multitude of application domains. However, some generalization problems remain unsolved. The hurdle is that fitness and niching pressures are difficult to balance. Here, a new algorithm called Self Organizing Classifiers is proposed which approaches this problem from a different perspective. Instead of balancing the pressures, the two pressures are separated and no balance is necessary. In fact, the proposed algorithm possesses a dynamical population structure that self-organizes to better project the input space into a map. The niched fitness concept is defined along with its dynamical population structure; both are indispensable for understanding the proposed method. Promising results are shown on two continuous multi-step problems, one of which is more challenging than previous problems of this class in the literature.
Tasks
Published 2018-11-20
URL http://arxiv.org/abs/1811.08226v1
PDF http://arxiv.org/pdf/1811.08226v1.pdf
PWC https://paperswithcode.com/paper/self-organizing-classifiers-and-niched
Repo
Framework

My camera can see through fences: A deep learning approach for image de-fencing

Title My camera can see through fences: A deep learning approach for image de-fencing
Authors Sankaraganesh Jonna, Krishna Kanth Nakka, Rajiv R. Sahay
Abstract In recent times, the availability of inexpensive image capturing devices such as smartphones/tablets has led to an exponential increase in the number of images/videos captured. However, sometimes the amateur photographer is hindered by fences in the scene, which have to be removed after the image has been captured. Conventional approaches to image de-fencing suffer from inaccurate and non-robust fence detection, apart from being limited to processing images of only static occluded scenes. In this paper, we propose a semi-automated de-fencing algorithm using a video of the dynamic scene. We use convolutional neural networks for detecting fence pixels. We provide qualitative as well as quantitative comparison results with existing lattice detection algorithms on the existing PSU NRT data set and a proposed challenging fenced image dataset. The inverse problem of fence removal is solved using the split Bregman technique, assuming the total variation of the de-fenced image as the regularization constraint.
Tasks
Published 2018-05-18
URL http://arxiv.org/abs/1805.07442v1
PDF http://arxiv.org/pdf/1805.07442v1.pdf
PWC https://paperswithcode.com/paper/my-camera-can-see-through-fences-a-deep
Repo
Framework
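
The abstract's final step can be written, in a single-frame simplification with assumed notation, as the TV-regularized inverse problem

$$\min_{x}\;\frac{\lambda}{2}\,\lVert M\odot(x-y)\rVert_2^2+\lVert\nabla x\rVert_1,$$

where $y$ is the observed frame, $M$ masks the non-fence pixels detected by the CNN, and $\lVert\nabla x\rVert_1$ is the total variation of the de-fenced image $x$. The split Bregman technique introduces an auxiliary variable $d\approx\nabla x$ and iterates

$$(x^{k+1},d^{k+1})=\arg\min_{x,\,d}\;\frac{\lambda}{2}\lVert M\odot(x-y)\rVert_2^2+\lVert d\rVert_1+\frac{\mu}{2}\lVert d-\nabla x-b^{k}\rVert_2^2,\qquad b^{k+1}=b^{k}+\nabla x^{k+1}-d^{k+1},$$

alternating a quadratic update in $x$ with a soft-thresholding update in $d$; the paper's actual formulation additionally exploits multiple video frames of the dynamic scene.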

Privacy Preserving Machine Learning: Threats and Solutions

Title Privacy Preserving Machine Learning: Threats and Solutions
Authors Mohammad Al-Rubaie, J. Morris Chang
Abstract For privacy concerns to be addressed adequately in current machine learning systems, the knowledge gap between the machine learning and privacy communities must be bridged. This article aims to provide an introduction to the intersection of both fields with special emphasis on the techniques used to protect the data.
Tasks
Published 2018-03-27
URL http://arxiv.org/abs/1804.11238v1
PDF http://arxiv.org/pdf/1804.11238v1.pdf
PWC https://paperswithcode.com/paper/privacy-preserving-machine-learning-threats
Repo
Framework

On the Approximation Properties of Random ReLU Features

Title On the Approximation Properties of Random ReLU Features
Authors Yitong Sun, Anna Gilbert, Ambuj Tewari
Abstract We study the approximation properties of random ReLU features through their reproducing kernel Hilbert space (RKHS). We first prove a universality theorem for the RKHS induced by random features whose feature maps are of the form of nodes in neural networks. The universality result implies that the random ReLU features method is a universally consistent learning algorithm. We prove that despite the universality of the RKHS induced by the random ReLU features, composition of functions in it generates substantially more complicated functions that are harder to approximate than those functions simply in the RKHS. We also prove that such composite functions can be efficiently approximated by multi-layer ReLU networks with bounded weights. This depth separation result shows that the random ReLU features models suffer from the same weakness as that of shallow models. We show in experiments that the performance of random ReLU features is comparable to that of random Fourier features and, in general, has a lower computational cost. We also demonstrate that when the target function is the composite function as described in the depth separation theorem, 3-layer neural networks indeed outperform both random ReLU features and 2-layer neural networks.
Tasks
Published 2018-10-10
URL https://arxiv.org/abs/1810.04374v3
PDF https://arxiv.org/pdf/1810.04374v3.pdf
PWC https://paperswithcode.com/paper/on-the-approximation-capabilities-of-relu
Repo
Framework
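
A minimal recipe for random ReLU features (random, untrained directions passed through a ReLU, followed by a linear model) is sketched below; the weight distribution, bias term, and hyperparameters are assumptions, and the paper's contribution is the analysis of the induced RKHS rather than this recipe.

```python
# Sketch of random ReLU features; the Gaussian weights and bias are assumptions.
import numpy as np

def random_relu_features(X: np.ndarray, n_features: int = 512, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.standard_normal((d, n_features))   # random, untrained directions
    b = rng.standard_normal(n_features)
    return np.maximum(X @ W + b, 0.0)          # ReLU feature map, shape (n, n_features)

# Fit a linear (ridge) model on top of the random features:
# from sklearn.linear_model import Ridge
# model = Ridge(alpha=1.0).fit(random_relu_features(X_train), y_train)
```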

Scalable Gaussian Process Inference with Finite-data Mean and Variance Guarantees

Title Scalable Gaussian Process Inference with Finite-data Mean and Variance Guarantees
Authors Jonathan H. Huggins, Trevor Campbell, Mikołaj Kasprzak, Tamara Broderick
Abstract Gaussian processes (GPs) offer a flexible class of priors for nonparametric Bayesian regression, but popular GP posterior inference methods are typically prohibitively slow or lack desirable finite-data guarantees on quality. We develop an approach to scalable approximate GP regression with finite-data guarantees on the accuracy of pointwise posterior mean and variance estimates. Our main contribution is a novel objective for approximate inference in the nonparametric setting: the preconditioned Fisher (pF) divergence. We show that, unlike the Kullback–Leibler divergence (used in variational inference), the pF divergence bounds the 2-Wasserstein distance, which in turn provides tight bounds on the pointwise difference of the mean and variance functions. We demonstrate that, for sparse GP likelihood approximations, we can minimize the pF divergence efficiently. Our experiments show that optimizing the pF divergence has the same computational requirements as variational sparse GPs while providing comparable empirical performance, in addition to our novel finite-data quality guarantees.
Tasks Gaussian Processes
Published 2018-06-26
URL http://arxiv.org/abs/1806.10234v4
PDF http://arxiv.org/pdf/1806.10234v4.pdf
PWC https://paperswithcode.com/paper/scalable-gaussian-process-inference-with
Repo
Framework
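
The quantities carrying the finite-data guarantees are the pointwise posterior mean and variance of GP regression. For reference, a textbook exact-GP computation of those quantities is sketched below (with an assumed RBF kernel and noise level); the paper's contribution is a scalable approximation whose error relative to these quantities is bounded.

```python
# Exact GP regression posterior mean and pointwise variance (textbook formulas);
# the RBF kernel and its hyperparameters are assumptions for illustration.
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def gp_posterior(X_train, y_train, X_test, noise=0.1):
    K = rbf_kernel(X_train, X_train) + noise**2 * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s.T @ alpha
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
    return mean, np.diag(cov)   # pointwise posterior mean and variance
```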

Automatically Explaining Machine Learning Prediction Results: A Demonstration on Type 2 Diabetes Risk Prediction

Title Automatically Explaining Machine Learning Prediction Results: A Demonstration on Type 2 Diabetes Risk Prediction
Authors Gang Luo
Abstract Background: Predictive modeling is a key component of solutions to many healthcare problems. Among all predictive modeling approaches, machine learning methods often achieve the highest prediction accuracy, but suffer from a long-standing open problem precluding their widespread use in healthcare. Most machine learning models give no explanation for their prediction results, whereas interpretability is essential for a predictive model to be adopted in typical healthcare settings. Methods: This paper presents the first complete method for automatically explaining results for any machine learning predictive model without degrading accuracy. We implemented the method in software. Using the electronic medical record data set from the Practice Fusion diabetes classification competition, containing patient records from all 50 states in the United States, we demonstrated the method on predicting type 2 diabetes diagnosis within the next year. Results: For the champion machine learning model of the competition, our method explained prediction results for 87.4% of the patients who were correctly predicted by the model to have a type 2 diabetes diagnosis within the next year. Conclusions: Our demonstration showed the feasibility of automatically explaining results for any machine learning predictive model without degrading accuracy.
Tasks
Published 2018-12-06
URL http://arxiv.org/abs/1812.02852v1
PDF http://arxiv.org/pdf/1812.02852v1.pdf
PWC https://paperswithcode.com/paper/automatically-explaining-machine-learning
Repo
Framework

The Inductive Bias of Restricted f-GANs

Title The Inductive Bias of Restricted f-GANs
Authors Shuang Liu, Kamalika Chaudhuri
Abstract Generative adversarial networks are a novel method for statistical inference that have achieved much empirical success; however, the factors contributing to this success remain ill-understood. In this work, we attempt to analyze generative adversarial learning – that is, statistical inference as the result of a game between a generator and a discriminator – with the view of understanding how it differs from classical statistical inference solutions such as maximum likelihood inference and the method of moments. Specifically, we provide a theoretical characterization of the distribution inferred by a simple form of generative adversarial learning called restricted f-GANs – where the discriminator is a function in a given function class, the distribution induced by the generator is restricted to lie in a pre-specified distribution class and the objective is similar to a variational form of the f-divergence. A consequence of our result is that for linear KL-GANs – that is, when the discriminator is a linear function over some feature space and f corresponds to the KL-divergence – the distribution induced by the optimal generator is neither the maximum likelihood nor the method of moments solution, but an interesting combination of both.
Tasks
Published 2018-09-12
URL http://arxiv.org/abs/1809.04542v1
PDF http://arxiv.org/pdf/1809.04542v1.pdf
PWC https://paperswithcode.com/paper/the-inductive-bias-of-restricted-f-gans
Repo
Framework
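
For context, the variational form of the $f$-divergence that the abstract refers to is

$$D_f(p\,\|\,q)\;\ge\;\sup_{T\in\mathcal{T}}\;\mathbb{E}_{x\sim p}[T(x)]-\mathbb{E}_{x\sim q}[f^{*}(T(x))],$$

where $f^{*}$ is the convex conjugate of $f$ and $\mathcal{T}$ is the (restricted) discriminator class; the generator is restricted to a distribution class and plays the minimizing role over $q$. For the KL case discussed in the abstract, $f(u)=u\log u$ and $f^{*}(t)=e^{t-1}$. The notation here follows the standard f-GAN formulation; the paper's precise restrictions on the function and distribution classes are stated there.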