October 19, 2019


Paper Group ANR 158


RGB-based 3D Hand Pose Estimation via Privileged Learning with Depth Images

Title RGB-based 3D Hand Pose Estimation via Privileged Learning with Depth Images
Authors Shanxin Yuan, Bjorn Stenger, Tae-Kyun Kim
Abstract This paper proposes a method for hand pose estimation from RGB images that uses both external large-scale depth image datasets and paired depth and RGB images as privileged information at training time. We show that providing depth information during training significantly improves the performance of pose estimation from RGB images during testing. We explore different ways of using this privileged information: (1) using depth data to initially train a depth-based network, (2) using the features from the depth-based network of the paired depth images to constrain mid-level RGB network weights, and (3) using the foreground mask, obtained from the depth data, to suppress responses from the background area. By using paired RGB and depth images, we are able to supervise the RGB-based network to learn middle-layer features that mimic those of the corresponding depth-based network, which is trained on large-scale, accurately annotated depth data. During testing, when only an RGB image is available, our method produces accurate 3D hand pose predictions. Our method is also tested on 2D hand pose estimation. Experiments on three public datasets show that the method outperforms state-of-the-art methods for hand pose estimation using RGB image input.
Tasks Hand Pose Estimation, Pose Estimation
Published 2018-11-18
URL http://arxiv.org/abs/1811.07376v1
PDF http://arxiv.org/pdf/1811.07376v1.pdf
PWC https://paperswithcode.com/paper/rgb-based-3d-hand-pose-estimation-via
Repo
Framework
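The mid-level feature supervision and mask-based background suppression described above can be sketched as a simple mimic loss between paired feature maps. This is an illustrative numpy sketch under assumed names, shapes, and a plain L2 mimic loss; it is not the paper's actual networks or training objective:

```python
import numpy as np

def mimic_loss(rgb_feats: np.ndarray, depth_feats: np.ndarray) -> float:
    """Mean squared error pushing mid-level RGB features toward the
    features the depth network produces for the paired depth image."""
    return float(np.mean((rgb_feats - depth_feats) ** 2))

def masked_response(feats: np.ndarray, fg_mask: np.ndarray) -> np.ndarray:
    """Suppress background responses with a depth-derived foreground mask."""
    return feats * fg_mask  # positions where mask == 0 are zeroed

# toy paired feature maps (assumed 8x8 mid-level responses)
rng = np.random.default_rng(0)
rgb = rng.normal(size=(8, 8))
depth = rng.normal(size=(8, 8))
mask = (rng.random((8, 8)) > 0.5).astype(float)

loss = mimic_loss(rgb, depth)
suppressed = masked_response(rgb, mask)
```

In the full method this loss term would be added to the pose regression loss during training; at test time only the RGB branch is evaluated.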

XPCA: Extending PCA for a Combination of Discrete and Continuous Variables

Title XPCA: Extending PCA for a Combination of Discrete and Continuous Variables
Authors Clifford Anderson-Bergman, Tamara G. Kolda, Kina Kincher-Winoto
Abstract Principal component analysis (PCA) is arguably the most popular tool in multivariate exploratory data analysis. In this paper, we consider the question of how to handle heterogeneous variables that include continuous, binary, and ordinal types. In the probabilistic interpretation of low-rank PCA, the data has a normal multivariate distribution and, therefore, normal marginal distributions for each column. If some marginals are continuous but not normal, the semiparametric copula-based principal component analysis (COCA) method is an alternative to PCA that combines a Gaussian copula with nonparametric marginals. If some marginals are discrete or semi-continuous, we propose a new extended PCA (XPCA) method that also uses a Gaussian copula and nonparametric marginals, and accounts for discrete variables in the likelihood calculation by integrating over appropriate intervals. Like PCA, the factors produced by XPCA can be used to find latent structure in data, build predictive models, and perform dimensionality reduction. We present the new model, its induced likelihood function, and a fitting algorithm which can be applied in the presence of missing data. We demonstrate how to use XPCA to produce an estimated full conditional distribution for each data point, and use this to provide estimates for missing data that are automatically range-respecting. We compare the methods as applied to simulated and real-world data sets that have a mixture of discrete and continuous variables.
Tasks Dimensionality Reduction
Published 2018-08-22
URL http://arxiv.org/abs/1808.07510v1
PDF http://arxiv.org/pdf/1808.07510v1.pdf
PWC https://paperswithcode.com/paper/xpca-extending-pca-for-a-combination-of
Repo
Framework
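The key likelihood change in XPCA, replacing a point density with an integral of the latent Gaussian over an interval for each discrete observation, can be sketched in a few lines. The function names and the choice of interval endpoints are illustrative assumptions; this is not the paper's full copula fitting algorithm:

```python
import math

def normal_cdf(x: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    """Standard Gaussian CDF via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def discrete_likelihood(lo: float, hi: float, mu: float, sigma: float) -> float:
    """Likelihood contribution of a discrete value: P(lo < Z <= hi) for
    the latent Gaussian, i.e. the density integrated over the interval
    rather than evaluated at a point."""
    return normal_cdf(hi, mu, sigma) - normal_cdf(lo, mu, sigma)
```

In practice the interval (lo, hi] for an observed discrete level would be derived from the empirical marginal CDF of that column.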

Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation

Title Deep Reinforcement Learning in Ice Hockey for Context-Aware Player Evaluation
Authors Guiliang Liu, Oliver Schulte
Abstract A variety of machine learning models have been proposed to assess the performance of players in professional sports. However, they have only a limited ability to model how player performance depends on the game context. This paper proposes a new approach to capturing game context: we apply Deep Reinforcement Learning (DRL) to learn an action-value Q function from 3M play-by-play events in the National Hockey League (NHL). The neural network representation integrates both continuous context signals and game history, using a possession-based LSTM. The learned Q-function is used to value players’ actions under different game contexts. To assess a player’s overall performance, we introduce a novel Game Impact Metric (GIM) that aggregates the values of the player’s actions. Empirical evaluation shows that GIM is consistent throughout a play season, and correlates highly with standard success measures and future salary.
Tasks
Published 2018-05-26
URL http://arxiv.org/abs/1805.11088v3
PDF http://arxiv.org/pdf/1805.11088v3.pdf
PWC https://paperswithcode.com/paper/deep-reinforcement-learning-in-ice-hockey-for
Repo
Framework
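The aggregation step behind the Game Impact Metric can be illustrated with a minimal sketch, assuming each action is scored by the change in the learned Q-value it causes. The `(q_before, q_after)` pairing is a simplification for illustration, not the paper's exact definition:

```python
def game_impact_metric(action_values):
    """Sum of per-action impacts: Q after the action minus Q before,
    aggregated over all of a player's actions in the period considered."""
    return sum(q_after - q_before for q_before, q_after in action_values)

# e.g. a shot that raises Q from 0.20 to 0.50, then a giveaway 0.50 -> 0.40
player_actions = [(0.20, 0.50), (0.50, 0.40)]
gim = game_impact_metric(player_actions)
```

A season-level rating would apply this aggregation to every event attributed to the player, with Q supplied by the trained LSTM network.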

Unsupervised learning with contrastive latent variable models

Title Unsupervised learning with contrastive latent variable models
Authors Kristen Severson, Soumya Ghosh, Kenney Ng
Abstract In unsupervised learning, dimensionality reduction is an important tool for data exploration and visualization. Because these aims are typically open-ended, it can be useful to frame the problem as looking for patterns that are enriched in one dataset relative to another. These pairs of datasets occur commonly, for instance a population of interest vs. control or signal vs. signal-free recordings. However, there are few methods that work on sets of data as opposed to data points or sequences. Here, we present a probabilistic model for dimensionality reduction to discover signal that is enriched in the target dataset relative to the background dataset. The data in these sets do not need to be paired or grouped beyond set membership. By using a probabilistic model where some structure is shared between the two datasets and some is unique to the target dataset, we are able to recover interesting structure in the latent space of the target dataset. The method also has the advantages of a probabilistic model, namely that it allows for the incorporation of prior information, handles missing data, and can be generalized to different distributional assumptions. We describe several possible variations of the model and demonstrate the application of the technique to de-noising, feature selection, and subgroup discovery settings.
Tasks Dimensionality Reduction, Feature Selection, Latent Variable Models
Published 2018-11-14
URL http://arxiv.org/abs/1811.06094v1
PDF http://arxiv.org/pdf/1811.06094v1.pdf
PWC https://paperswithcode.com/paper/unsupervised-learning-with-contrastive-latent
Repo
Framework
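The shared-plus-unique structure the model assumes can be made concrete with a small generative sketch: both datasets load on a shared factor matrix, while only the target dataset has an additional set of factors. All names and dimensions here are illustrative assumptions; the paper's inference procedure is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k_shared, k_target, n = 10, 3, 2, 500

S = rng.normal(size=(d, k_shared))   # structure shared by both datasets
W = rng.normal(size=(d, k_target))   # structure unique to the target

z_bg = rng.normal(size=(k_shared, n))    # shared factors, background set
z_tg = rng.normal(size=(k_shared, n))    # shared factors, target set
t_tg = rng.normal(size=(k_target, n))    # target-only factors

# background observations: shared structure plus noise
background = S @ z_bg + 0.1 * rng.normal(size=(d, n))
# target observations: shared structure plus target-only structure
target = S @ z_tg + W @ t_tg + 0.1 * rng.normal(size=(d, n))
```

Fitting this model would recover `t_tg`, the latent structure enriched in the target set, which is the quantity of interest for the contrastive analysis.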

Finding Frequent Entities in Continuous Data

Title Finding Frequent Entities in Continuous Data
Authors Ferran Alet, Rohan Chitnis, Leslie P. Kaelbling, Tomas Lozano-Perez
Abstract In many applications that involve processing high-dimensional data, it is important to identify a small set of entities that account for a significant fraction of detections. Rather than formalize this as a clustering problem, in which all detections must be grouped into hard or soft categories, we formalize it as an instance of the frequent items or heavy hitters problem, which finds groups of tightly clustered objects that have a high density in the feature space. We show that the heavy hitters formulation generates solutions that are more accurate and effective than the clustering formulation. In addition, we present a novel online algorithm for heavy hitters, called HAC, which addresses problems in continuous space, and demonstrate its effectiveness on real video and household domains.
Tasks
Published 2018-05-08
URL http://arxiv.org/abs/1805.02874v1
PDF http://arxiv.org/pdf/1805.02874v1.pdf
PWC https://paperswithcode.com/paper/finding-frequent-entities-in-continuous-data
Repo
Framework
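The continuous-space heavy-hitters setting can be illustrated with a small online sketch: a point close to an existing candidate strengthens it, and capacity is enforced Misra-Gries style by decrementing all counts when no slot is free. This stand-in conveys the intuition only; it is not the paper's actual HAC algorithm:

```python
def update(candidates, point, radius=1.0, capacity=5):
    """One online step. candidates maps a center (tuple) to its count."""
    for center in candidates:
        # point falls within `radius` of an existing candidate: count it
        if sum((a - b) ** 2 for a, b in zip(center, point)) <= radius ** 2:
            candidates[center] += 1
            return candidates
    if len(candidates) < capacity:
        candidates[tuple(point)] = 1          # open a new candidate
    else:
        # Misra-Gries style: decrement everyone, dropping exhausted entries
        for center in list(candidates):
            candidates[center] -= 1
            if candidates[center] == 0:
                del candidates[center]
    return candidates
```

Run over a detection stream, the surviving high-count centers are the tightly clustered, high-density entities the abstract describes.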

Adversarial Training for Adverse Conditions: Robust Metric Localisation using Appearance Transfer

Title Adversarial Training for Adverse Conditions: Robust Metric Localisation using Appearance Transfer
Authors Horia Porav, Will Maddern, Paul Newman
Abstract We present a method of improving visual place recognition and metric localisation under very strong appearance change. We learn an invertible generator that can transform the conditions of images, e.g. from day to night, summer to winter etc. This image transforming filter is explicitly designed to aid and abet feature-matching using a new loss based on SURF detector and dense descriptor maps. A network is trained to output synthetic images optimised for feature matching given only an input RGB image, and these generated images are used to localise the robot against a previously built map using traditional sparse matching approaches. We benchmark our results using multiple traversals of the Oxford RobotCar Dataset over a year-long period, using one traversal as a map and the other to localise. We show that this method significantly improves place recognition and localisation under changing and adverse conditions, while reducing the number of mapping runs needed to successfully achieve reliable localisation.
Tasks Visual Place Recognition
Published 2018-03-09
URL http://arxiv.org/abs/1803.03341v1
PDF http://arxiv.org/pdf/1803.03341v1.pdf
PWC https://paperswithcode.com/paper/adversarial-training-for-adverse-conditions
Repo
Framework

Differentially-Private “Draw and Discard” Machine Learning

Title Differentially-Private “Draw and Discard” Machine Learning
Authors Vasyl Pihur, Aleksandra Korolova, Frederick Liu, Subhash Sankuratripati, Moti Yung, Dachuan Huang, Ruogu Zeng
Abstract In this work, we propose a novel framework for privacy-preserving client-distributed machine learning. It is motivated by the desire to achieve differential privacy guarantees in the local model of privacy in a way that satisfies all systems constraints using asynchronous client-server communication and provides attractive model learning properties. We call it “Draw and Discard” because it relies on random sampling of models for load distribution (scalability), which also provides additional server-side privacy protections and improved model quality through averaging. We present the mechanics of client and server components of “Draw and Discard” and demonstrate how the framework can be applied to learning Generalized Linear models. We then analyze the privacy guarantees provided by our approach against several types of adversaries and showcase experimental results that provide evidence for the framework’s viability in practical deployments.
Tasks
Published 2018-07-11
URL http://arxiv.org/abs/1807.04369v2
PDF http://arxiv.org/pdf/1807.04369v2.pdf
PWC https://paperswithcode.com/paper/differentially-private-draw-and-discard
Repo
Framework
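The server-side mechanics the abstract describes can be sketched directly: the server keeps k model instances; each client draws one at random, updates it locally (the local-DP noise addition is elided here), and the server discards a randomly chosen instance in favour of the returned one. Scalar "models" and these function names are simplifying assumptions:

```python
import random

def draw(instances, rng=random):
    """Client side: sample one of the k model instances at random."""
    return rng.choice(instances)

def discard_and_insert(instances, updated, rng=random):
    """Server side: overwrite a randomly chosen instance with the
    client's returned update, keeping the pool size fixed at k."""
    instances[rng.randrange(len(instances))] = updated
    return instances

def served_model(instances):
    """Averaging the k instances yields the model actually served."""
    return sum(instances) / len(instances)
```

Random draw/discard is what gives the scheme its load-balancing and the averaging effect on model quality noted in the abstract.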

Assessment of Deep Convolutional Neural Networks for Road Surface Classification

Title Assessment of Deep Convolutional Neural Networks for Road Surface Classification
Authors Marcus Nolte, Nikita Kister, Markus Maurer
Abstract When parameterizing vehicle control algorithms for stability or trajectory control, the road-tire friction coefficient is an essential model parameter when it comes to control performance. One major influence on the friction coefficient is the condition of the road surface. A camera-based, forward-looking classification of the road surface helps enable an early parametrization of vehicle control algorithms. In this paper, we train and compare two different Deep Convolutional Neural Network models regarding their application for road friction estimation, and describe the challenges of training the classifier in terms of available training data and the construction of suitable datasets.
Tasks
Published 2018-04-24
URL http://arxiv.org/abs/1804.08872v2
PDF http://arxiv.org/pdf/1804.08872v2.pdf
PWC https://paperswithcode.com/paper/assessment-of-deep-convolutional-neural
Repo
Framework

ABACUS: Unsupervised Multivariate Change Detection via Bayesian Source Separation

Title ABACUS: Unsupervised Multivariate Change Detection via Bayesian Source Separation
Authors Wenyu Zhang, Daniel Gilbert, David Matteson
Abstract Change detection involves segmenting sequential data such that observations in the same segment share some desired properties. Multivariate change detection continues to be a challenging problem due to the variety of ways change points can be correlated across channels and the potentially poor signal-to-noise ratio on individual channels. In this paper, we are interested in locating additive outliers (AO) and level shifts (LS) in the unsupervised setting. We propose ABACUS, Automatic BAyesian Changepoints Under Sparsity, a Bayesian source separation technique to recover latent signals while also detecting changes in model parameters. Multi-level sparsity achieves both dimension reduction and modeling of signal changes. We show ABACUS has competitive or superior performance in simulation studies against state-of-the-art change detection methods and established latent variable models. We also illustrate ABACUS on two real applications: modeling genomic profiles and analyzing household electricity consumption.
Tasks Dimensionality Reduction, Latent Variable Models
Published 2018-10-15
URL http://arxiv.org/abs/1810.06167v1
PDF http://arxiv.org/pdf/1810.06167v1.pdf
PWC https://paperswithcode.com/paper/abacus-unsupervised-multivariate-change
Repo
Framework

Using Apple Machine Learning Algorithms to Detect and Subclassify Non-Small Cell Lung Cancer

Title Using Apple Machine Learning Algorithms to Detect and Subclassify Non-Small Cell Lung Cancer
Authors Andrew A. Borkowski, Catherine P. Wilson, Steven A. Borkowski, Lauren A. Deland, Stephen M. Mastorides
Abstract Lung cancer continues to be a major healthcare challenge with high morbidity and mortality rates among both men and women worldwide. The majority of lung cancer cases are of non-small cell lung cancer type. With the advent of targeted cancer therapy, it is imperative not only to properly diagnose but also sub-classify non-small cell lung cancer. In our study, we evaluated the utility of using Apple Create ML module to detect and sub-classify non-small cell carcinomas based on histopathological images. After module optimization, the program detected 100% of non-small cell lung cancer images and successfully subclassified the majority of the images. Trained modules, such as ours, can be utilized in diagnostic smartphone-based applications, augmenting diagnostic services in understaffed areas of the world.
Tasks
Published 2018-08-24
URL http://arxiv.org/abs/1808.08230v2
PDF http://arxiv.org/pdf/1808.08230v2.pdf
PWC https://paperswithcode.com/paper/using-apple-machine-learning-algorithms-to
Repo
Framework

Learning to Detect Instantaneous Changes with Retrospective Convolution and Static Sample Synthesis

Title Learning to Detect Instantaneous Changes with Retrospective Convolution and Static Sample Synthesis
Authors Chao Chen, Sheng Zhang, Cuibing Du
Abstract Change detection has been a challenging visual task due to the dynamic nature of real-world scenes. Good performance of existing methods depends largely on prior background images or a long-term observation. These methods, however, suffer severe degradation when they are applied to detection of instantaneously occurred changes with only a few preceding frames provided. In this paper, we exploit spatio-temporal convolutional networks to address this challenge, and propose a novel retrospective convolution, which features efficient change information extraction between the current frame and frames from historical observation. To address the problem of foreground-specific over-fitting in learning-based methods, we further propose a data augmentation method, named static sample synthesis, to guide the network to focus on learning change-cued information rather than specific spatial features of foreground. Trained end-to-end with complex scenarios, our framework proves to be accurate in detecting instantaneous changes and robust in combating diverse noises. Extensive experiments demonstrate that our proposed method significantly outperforms existing methods.
Tasks Data Augmentation
Published 2018-11-20
URL http://arxiv.org/abs/1811.08138v1
PDF http://arxiv.org/pdf/1811.08138v1.pdf
PWC https://paperswithcode.com/paper/learning-to-detect-instantaneous-changes-with
Repo
Framework
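The retrospective idea from the abstract, extracting change-cued information by comparing the current frame against a short history, can be conveyed with a toy sketch. The per-pixel absolute difference against a mean of prior frames is an assumption standing in for the learned retrospective convolution:

```python
import numpy as np

def retrospective_change(current: np.ndarray, history: list) -> np.ndarray:
    """Response map highlighting what changed relative to recent history."""
    reference = np.mean(history, axis=0)   # aggregate the few prior frames
    return np.abs(current - reference)     # change-cued response

frames = [np.zeros((4, 4)) for _ in range(3)]     # static short history
new_frame = np.zeros((4, 4))
new_frame[1, 1] = 1.0                             # instantaneous change
response = retrospective_change(new_frame, frames)
```

The paper learns this comparison end-to-end inside a spatio-temporal network; the sketch only shows why a few preceding frames suffice to localise an instantaneous change.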

Improving Context-Aware Semantic Relationships in Sparse Mobile Datasets

Title Improving Context-Aware Semantic Relationships in Sparse Mobile Datasets
Authors Peter Hansel, Nik Marda, William Yin
Abstract Traditional semantic similarity models often fail to encapsulate the external context in which texts are situated. However, textual datasets generated on mobile platforms can help us build a truer representation of semantic similarity by introducing multimodal data. This is especially important in sparse datasets, where solely text-driven interpretation of context is more difficult. In this paper, we develop new algorithms for building external features into sentence embeddings and semantic similarity scores. Then, we test them on embedding spaces built from Twitter data, using each tweet’s time and geolocation to better understand its context. Ultimately, we show that applying PCA with eight components to the embedding space and appending multimodal features yields the best outcomes. This yields a considerable improvement over pure text-based approaches for discovering similar tweets. Our results suggest that our new algorithm can help improve semantic understanding in various settings.
Tasks Semantic Similarity, Semantic Textual Similarity, Sentence Embeddings
Published 2018-12-23
URL http://arxiv.org/abs/1812.09650v1
PDF http://arxiv.org/pdf/1812.09650v1.pdf
PWC https://paperswithcode.com/paper/improving-context-aware-semantic
Repo
Framework
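The best-performing recipe reported in the abstract, reducing the embedding space to eight principal components and then appending multimodal features, can be sketched with a plain SVD-based PCA. The feature names (hour-of-day, latitude, longitude) and scaling are assumptions for illustration:

```python
import numpy as np

def pca_reduce(X: np.ndarray, n_components: int = 8) -> np.ndarray:
    """Project centered embeddings onto their top principal components."""
    Xc = X - X.mean(axis=0)            # center the embeddings
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T    # shape (n_samples, n_components)

def augment(reduced: np.ndarray, context: np.ndarray) -> np.ndarray:
    """Append multimodal context columns to the reduced embeddings."""
    return np.hstack([reduced, context])

rng = np.random.default_rng(0)
emb = rng.normal(size=(50, 300))       # 50 tweets, 300-d embeddings
ctx = rng.normal(size=(50, 3))         # e.g. hour-of-day, lat, lon
feats = augment(pca_reduce(emb), ctx)  # 8 PCA dims + 3 context dims
```

Similarity between tweets would then be computed (e.g. by cosine distance) in this 11-dimensional augmented space rather than on raw text embeddings.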

Segmentation hiérarchique faiblement supervisée (weakly supervised hierarchical segmentation)

Title Segmentation hiérarchique faiblement supervisée
Authors Amin Fehri, Santiago Velasco-Forero, Fernand Meyer
Abstract Image segmentation is the process of partitioning an image into a set of meaningful regions according to some criteria. Hierarchical segmentation has emerged as a major trend in this regard as it favors the emergence of important regions at different scales. On the other hand, many methods allow us to have prior information on the position of structures of interest in the images. In this paper, we present a versatile hierarchical segmentation method that takes into account any prior spatial information and outputs a hierarchical segmentation that emphasizes the contours or regions of interest while preserving the important structures in the image. An application of this method to the weakly-supervised segmentation problem is presented.
Tasks Semantic Segmentation
Published 2018-02-20
URL http://arxiv.org/abs/1802.07008v1
PDF http://arxiv.org/pdf/1802.07008v1.pdf
PWC https://paperswithcode.com/paper/segmentation-hierarchique-faiblement
Repo
Framework

Mixed-Integer Convex Nonlinear Optimization with Gradient-Boosted Trees Embedded

Title Mixed-Integer Convex Nonlinear Optimization with Gradient-Boosted Trees Embedded
Authors Miten Mistry, Dimitrios Letsios, Gerhard Krennrich, Robert M. Lee, Ruth Misener
Abstract Decision trees usefully represent sparse, high dimensional and noisy data. Having learned a function from this data, we may want to thereafter integrate the function into a larger decision-making problem, e.g., for picking the best chemical process catalyst. We study a large-scale, industrially-relevant mixed-integer nonlinear nonconvex optimization problem involving both gradient-boosted trees and penalty functions mitigating risk. This mixed-integer optimization problem with convex penalty terms broadly applies to optimizing pre-trained regression tree models. Decision makers may wish to optimize discrete models to repurpose legacy predictive models, or they may wish to optimize a discrete model that particularly well represents a data set. We develop several heuristic methods to find feasible solutions, and an exact branch-and-bound algorithm leveraging structural properties of the gradient-boosted trees and penalty functions. We computationally test our methods on a concrete mixture design instance and a chemical catalysis industrial instance.
Tasks Decision Making
Published 2018-03-02
URL https://arxiv.org/abs/1803.00952v3
PDF https://arxiv.org/pdf/1803.00952v3.pdf
PWC https://paperswithcode.com/paper/mixed-integer-convex-nonlinear-optimization
Repo
Framework

Typhoon track prediction using satellite images in a Generative Adversarial Network

Title Typhoon track prediction using satellite images in a Generative Adversarial Network
Authors Mario Rüttgers, Sangseung Lee, Donghyun You
Abstract Tracks of typhoons are predicted using satellite images as input for a Generative Adversarial Network (GAN). The satellite images have time gaps of 6 hours and are marked with a red square at the location of the typhoon center. The GAN uses images from the past to generate an image one time step ahead. The generated image shows the future location of the typhoon center, as well as the future cloud structures. The errors between predicted and real typhoon centers are measured quantitatively in kilometers. 42.4% of all typhoon center predictions have absolute errors of less than 80 km, 32.1% lie within a range of 80-120 km, and the remaining 25.5% have errors above 120 km. The relative error relates the above-mentioned absolute error to the distance traveled by the typhoon over the past 6 hours. High relative errors are found in three types of situations: when a typhoon moves on the open sea far away from land, when a typhoon changes its course suddenly, and when a typhoon is about to hit the mainland. The cloud structure prediction is evaluated qualitatively. It is shown that the GAN is able to predict trends in cloud motion. In order to improve both the typhoon center and cloud motion predictions, the present study suggests adding information about the sea surface temperature, surface pressure, and velocity fields to the input data.
Tasks motion prediction
Published 2018-08-16
URL http://arxiv.org/abs/1808.05382v1
PDF http://arxiv.org/pdf/1808.05382v1.pdf
PWC https://paperswithcode.com/paper/typhoon-track-prediction-using-satellite
Repo
Framework
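The relative-error measure defined in the abstract is a simple ratio: the absolute center-prediction error divided by the distance the typhoon traveled over the preceding 6 hours. Function and argument names below are assumptions for illustration:

```python
def relative_error(abs_error_km: float, distance_6h_km: float) -> float:
    """Center-prediction error relative to how far the typhoon actually
    moved in the past 6 hours (the interval between satellite images)."""
    return abs_error_km / distance_6h_km

# e.g. an 80 km error for a typhoon that moved 160 km in 6 hours -> 0.5
example = relative_error(80.0, 160.0)
```

This normalisation explains the reported failure modes: a slow-moving typhoon on the open sea yields a small denominator, so even a modest absolute error becomes a large relative error.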