January 30, 2020

3125 words 15 mins read

Paper Group ANR 393

Deep Learning for Ranking Response Surfaces with Applications to Optimal Stopping Problems. Option-Critic in Cooperative Multi-agent Systems. Forward Vehicle Collision Warning Based on Quick Camera Calibration. Improved Sample Complexities for Deep Networks and Robust Classification via an All-Layer Margin. Hypothetical answers to continuous queries …

Deep Learning for Ranking Response Surfaces with Applications to Optimal Stopping Problems

Title Deep Learning for Ranking Response Surfaces with Applications to Optimal Stopping Problems
Authors Ruimeng Hu
Abstract In this paper, we propose deep learning algorithms for ranking response surfaces, with applications to optimal stopping problems in financial mathematics. The problem of ranking response surfaces is motivated by estimating optimal feedback policy maps in stochastic control problems, aiming to efficiently find the index associated with the minimal response across the entire continuous input space $\mathcal{X} \subseteq \mathbb{R}^d$. By considering points in $\mathcal{X}$ as pixels and indices of the minimal surfaces as labels, we recast the problem as an image segmentation problem, which assigns a label to every pixel in an image such that pixels with the same label share certain characteristics. This provides an alternative method for efficiently solving the problem instead of using sequential design in our previous work [R. Hu and M. Ludkovski, SIAM/ASA Journal on Uncertainty Quantification, 5 (2017), 212–239]. Deep learning algorithms are scalable, parallel and model-free, i.e., no parametric assumptions are needed on the response surfaces. Considering ranking response surfaces as image segmentation allows one to use a broad class of deep neural networks, e.g., UNet, SegNet, DeconvNet, which have been widely applied and numerically shown to possess high accuracy in the field. We also systematically study the dependence of deep learning algorithms on the input data generated on uniform grids or by sequential design sampling, and observe that the performance of deep learning is not sensitive to the noise and locations (close to/away from boundaries) of training data. We present a few examples including synthetic ones and the Bermudan option pricing problem to show the efficiency and accuracy of this method.
Tasks Semantic Segmentation
Published 2019-01-11
URL https://arxiv.org/abs/1901.03478v2
PDF https://arxiv.org/pdf/1901.03478v2.pdf
PWC https://paperswithcode.com/paper/deep-learning-for-ranking-response-surfaces
Repo
Framework
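
The recasting described in the abstract above (design points as pixels, the index of the minimal surface as the pixel's label) can be illustrated with a minimal sketch. The two response surfaces, grid size and noise level below are illustrative assumptions rather than the paper's benchmarks; a UNet-style segmentation network would then be trained on the resulting label image.

```python
import numpy as np

# Hypothetical stand-ins for two noisy response surfaces on a 2-D input domain.
def f1(x, y):
    return np.sin(3 * x) + 0.5 * y

def f2(x, y):
    return np.cos(2 * y) - 0.3 * x

# Treat a uniform grid over the input space as the "image": one pixel per design point.
n = 64
xs = np.linspace(0.0, 1.0, n)
grid_x, grid_y = np.meshgrid(xs, xs, indexing="ij")

# Noisy simulator outputs (one Monte Carlo draw per surface and pixel).
rng = np.random.default_rng(0)
samples = np.stack([f1(grid_x, grid_y), f2(grid_x, grid_y)], axis=0)
samples += 0.1 * rng.standard_normal(samples.shape)

# The "segmentation mask": index of the minimal response at each pixel.
labels = np.argmin(samples, axis=0)          # shape (n, n), values in {0, 1}

# A segmentation network (e.g. a small UNet) would now be trained to map the
# stacked surface estimates (channels) to this label image.
features = samples.transpose(1, 2, 0)        # (n, n, 2) input "image"
print(features.shape, labels.shape, labels.mean())
```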

Option-Critic in Cooperative Multi-agent Systems

Title Option-Critic in Cooperative Multi-agent Systems
Authors Jhelum Chakravorty, Nadeem Ward, Julien Roy, Maxime Chevalier-Boisvert, Sumana Basu, Andrei Lupu, Doina Precup
Abstract In this paper, we investigate learning temporal abstractions in cooperative multi-agent systems, using the options framework (Sutton et al., 1999). First, we address the planning problem for the decentralized POMDP represented by the multi-agent system, by introducing a \emph{common information approach}. We use the notion of \emph{common beliefs} and broadcasting to solve an equivalent centralized POMDP problem. Then, we propose the Distributed Option Critic (DOC) algorithm, which uses centralized option evaluation and decentralized intra-option improvement. We theoretically analyze the asymptotic convergence of DOC and build a new multi-agent environment to demonstrate its validity. Our experiments empirically show that DOC performs competitively against baselines and scales with the number of agents.
Tasks
Published 2019-11-28
URL https://arxiv.org/abs/1911.12825v3
PDF https://arxiv.org/pdf/1911.12825v3.pdf
PWC https://paperswithcode.com/paper/option-critic-in-cooperative-multi-agent
Repo
Framework

Forward Vehicle Collision Warning Based on Quick Camera Calibration

Title Forward Vehicle Collision Warning Based on Quick Camera Calibration
Authors Yuwei Lu, Yuan Yuan, Qi Wang
Abstract Forward Vehicle Collision Warning (FCW) is one of the most important functions for autonomous vehicles. In this procedure, vehicle detection and distance measurement are core components, requiring accurate localization and estimation. In this paper, we propose a simple but efficient forward vehicle collision warning framework by aggregating monocular distance measurement and precise vehicle detection. In order to obtain the forward vehicle distance, a quick camera calibration method that needs only three physical points to calibrate the relevant camera parameters is utilized. As for forward vehicle detection, a multi-scale detection algorithm that regards the result of calibration as a distance prior is proposed to improve the precision. Intensive experiments are conducted on our established real-scene dataset, and the results demonstrate the effectiveness of the proposed framework.
Tasks Autonomous Vehicles, Calibration
Published 2019-04-22
URL http://arxiv.org/abs/1904.12642v1
PDF http://arxiv.org/pdf/1904.12642v1.pdf
PWC https://paperswithcode.com/paper/190412642
Repo
Framework
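
The paper's three-point quick calibration is not reproduced here, but the monocular distance measurement it enables can be sketched with the standard pinhole ground-plane relation, assuming a level camera with known focal length, principal point and mounting height (all placeholder values below).

```python
# Minimal pinhole ground-plane distance sketch with assumed camera parameters.
FOCAL_PX = 1000.0      # focal length in pixels (assumed)
CAM_HEIGHT_M = 1.3     # camera mounting height in metres (assumed)
CY = 360.0             # principal point row, i.e. the horizon row for zero pitch

def forward_distance(bbox_bottom_row: float) -> float:
    """Distance to a vehicle whose bounding-box bottom edge touches the ground.

    For a level camera, a ground point at distance Z projects to image row
    v = CY + FOCAL_PX * CAM_HEIGHT_M / Z, hence Z = FOCAL_PX * CAM_HEIGHT_M / (v - CY).
    """
    dv = bbox_bottom_row - CY
    if dv <= 0:
        raise ValueError("Bounding-box bottom must lie below the horizon row.")
    return FOCAL_PX * CAM_HEIGHT_M / dv

# Example: a detection whose bottom edge sits at image row 480.
print(f"estimated forward distance: {forward_distance(480.0):.1f} m")
```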

Improved Sample Complexities for Deep Networks and Robust Classification via an All-Layer Margin

Title Improved Sample Complexities for Deep Networks and Robust Classification via an All-Layer Margin
Authors Colin Wei, Tengyu Ma
Abstract For linear classifiers, the relationship between (normalized) output margin and generalization is captured in a clear and simple bound: a large output margin implies good generalization. Unfortunately, for deep models, this relationship is less clear: existing analyses of the output margin give complicated bounds which sometimes depend exponentially on depth. In this work, we propose to instead analyze a new notion of margin, which we call the “all-layer margin.” Our analysis reveals that the all-layer margin has a clear and direct relationship with generalization for deep models. This enables the following concrete applications of the all-layer margin: 1) by analyzing the all-layer margin, we obtain tighter generalization bounds for neural nets which depend on Jacobian and hidden layer norms and remove the exponential dependency on depth, 2) our neural net results easily translate to the adversarially robust setting, giving the first direct analysis of robust test error for deep networks, and 3) we present a theoretically inspired training algorithm for increasing the all-layer margin and demonstrate that our algorithm improves test performance over strong baselines in practice.
Tasks
Published 2019-10-09
URL https://arxiv.org/abs/1910.04284v1
PDF https://arxiv.org/pdf/1910.04284v1.pdf
PWC https://paperswithcode.com/paper/improved-sample-complexities-for-deep
Repo
Framework
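
For context, the classical (normalized) output margin that the abstract contrasts with the all-layer margin can be computed directly. The sketch below uses a random linear classifier and placeholder data; it does not implement the all-layer margin itself, which additionally accounts for perturbations at every hidden layer.

```python
import torch

torch.manual_seed(0)
W = torch.randn(3, 20)                      # linear classifier weights (placeholder)
x = torch.randn(16, 20)                     # a small batch of inputs
y = torch.randint(0, 3, (16,))              # true labels

scores = x @ W.T                            # (16, 3) class scores
true_score = scores.gather(1, y.unsqueeze(1)).squeeze(1)
others = scores.clone()
others.scatter_(1, y.unsqueeze(1), float("-inf"))   # mask out the true class

# Normalized output margin: (true score - best competing score) / ||W||.
margin = (true_score - others.max(dim=1).values) / W.norm()
print(margin)                               # positive where correctly classified
```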

Hypothetical answers to continuous queries over data streams

Title Hypothetical answers to continuous queries over data streams
Authors Luís Cruz-Filipe, Graça Gaspar, Isabel Nunes
Abstract Continuous queries over data streams may suffer from blocking operations and/or unbounded wait, which may delay answers until some relevant input arrives through the data stream. These delays may render answers obsolete by the time they arrive for users who sometimes have to make decisions with no help whatsoever. Therefore, it can be useful to provide hypothetical answers - “given the current information, it is possible that X will become true at time t” - instead of no information at all. In this paper we present a semantics for queries and corresponding answers that covers such hypothetical answers, together with an online algorithm for updating the set of facts that are consistent with the currently available information.
Tasks
Published 2019-05-23
URL https://arxiv.org/abs/1905.09610v2
PDF https://arxiv.org/pdf/1905.09610v2.pdf
PWC https://paperswithcode.com/paper/hypothetical-answers-to-continuous-queries
Repo
Framework

Probabilistic End-to-end Noise Correction for Learning with Noisy Labels

Title Probabilistic End-to-end Noise Correction for Learning with Noisy Labels
Authors Kun Yi, Jianxin Wu
Abstract Deep learning has achieved excellent performance in various computer vision tasks, but requires a lot of training examples with clean labels. It is easy to collect a dataset with noisy labels, but such noise makes networks overfit severely and causes accuracy to drop dramatically. To address this problem, we propose an end-to-end framework called PENCIL, which can update both network parameters and label estimations as label distributions. PENCIL is independent of the backbone network structure and does not need an auxiliary clean dataset or prior information about noise; thus it is more general and robust than existing methods and is easy to apply. PENCIL outperforms previous state-of-the-art methods by large margins on both synthetic and real-world datasets with different noise types and noise rates. Experiments show that PENCIL is robust on clean datasets, too.
Tasks Image Classification
Published 2019-03-19
URL http://arxiv.org/abs/1903.07788v1
PDF http://arxiv.org/pdf/1903.07788v1.pdf
PWC https://paperswithcode.com/paper/probabilistic-end-to-end-noise-correction-for
Repo
Framework
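
A much-simplified sketch of the PENCIL idea follows: alongside the network weights, keep a learnable label distribution for every training example and update both by back-propagation. The tiny linear model, loss weights and learning rates are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn.functional as F

num_samples, num_classes, feat_dim = 1000, 10, 32
model = torch.nn.Linear(feat_dim, num_classes)

noisy_labels = torch.randint(0, num_classes, (num_samples,))
# Initialise per-example label logits from the (possibly wrong) observed labels.
label_logits = torch.nn.Parameter(10.0 * F.one_hot(noisy_labels, num_classes).float())

opt = torch.optim.SGD(
    [{"params": model.parameters()},
     {"params": [label_logits], "lr": 100.0}],   # label estimates need a much larger lr
    lr=0.01,
)

def pencil_step(x, idx):
    pred = F.log_softmax(model(x), dim=1)                 # network predictions
    label_dist = F.softmax(label_logits[idx], dim=1)      # current label estimates

    # 1) fit predictions to the current label estimates,
    # 2) keep label estimates compatible with the observed noisy labels,
    # 3) entropy term pushing predictions away from the uniform distribution.
    loss_fit = F.kl_div(pred, label_dist, reduction="batchmean")
    loss_compat = F.cross_entropy(label_logits[idx], noisy_labels[idx])
    loss_entropy = -(pred.exp() * pred).sum(dim=1).mean()
    loss = loss_fit + 0.1 * loss_compat + 0.1 * loss_entropy

    opt.zero_grad()
    loss.backward()          # gradients flow to both the network and the label logits
    opt.step()
    return loss.item()

idx = torch.arange(0, 64)
x = torch.randn(64, feat_dim)
print(pencil_step(x, idx))
```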

Latent Space Cartography: Generalised Metric-Inspired Measures and Measure-Based Transformations for Generative Models

Title Latent Space Cartography: Generalised Metric-Inspired Measures and Measure-Based Transformations for Generative Models
Authors Max F. Frenzel, Bogdan Teleaga, Asahi Ushio
Abstract Deep generative models are universal tools for learning data distributions on high dimensional data spaces via a mapping to lower dimensional latent spaces. We provide a study of latent space geometries and extend and build upon previous results on Riemannian metrics. We show how a class of heuristic measures gives more flexibility in finding meaningful, problem-specific distances, and how it can be applied to diverse generator types such as autoregressive generators commonly used in e.g. language and other sequence modeling. We further demonstrate how a diffusion-inspired transformation previously studied in cartography can be used to smooth out latent spaces, stretching them according to a chosen measure. In addition to providing more meaningful distances directly in latent space, this also provides a unique tool for novel kinds of data visualizations. We believe that the proposed methods can be a valuable tool for studying the structure of latent spaces and learned data distributions of generative models.
Tasks
Published 2019-02-06
URL http://arxiv.org/abs/1902.02113v1
PDF http://arxiv.org/pdf/1902.02113v1.pdf
PWC https://paperswithcode.com/paper/latent-space-cartography-generalised-metric
Repo
Framework
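
The Riemannian view the abstract builds on can be made concrete with the standard pullback metric of a decoder, G(z) = J_g(z)^T J_g(z), under which latent curve lengths become measure-dependent. The toy decoder below is an assumption for illustration; the paper's heuristic measures generalise this construction to other generator types.

```python
import torch

latent_dim, data_dim = 2, 5
decoder = torch.nn.Sequential(          # toy decoder g: latent space -> data space
    torch.nn.Linear(latent_dim, 16),
    torch.nn.Tanh(),
    torch.nn.Linear(16, data_dim),
)

def pullback_metric(z: torch.Tensor) -> torch.Tensor:
    """Metric tensor induced in latent space by the decoder at point z."""
    J = torch.autograd.functional.jacobian(decoder, z)   # (data_dim, latent_dim)
    return J.T @ J                                        # (latent_dim, latent_dim)

def curve_length(z_points: torch.Tensor) -> torch.Tensor:
    """Discretised length of a latent curve under the pullback metric."""
    length = torch.zeros(())
    for a, b in zip(z_points[:-1], z_points[1:]):
        dz = (b - a).unsqueeze(1)
        G = pullback_metric((a + b) / 2)
        length = length + torch.sqrt((dz.T @ G @ dz).squeeze())
    return length

ts = torch.linspace(0, 1, 10)
z_path = torch.stack([ts, 1.0 - ts], dim=1)   # a straight line in latent space
print(curve_length(z_path))
```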

Interactive Learning of Environment Dynamics for Sequential Tasks

Title Interactive Learning of Environment Dynamics for Sequential Tasks
Authors Robert Loftin, Bei Peng, Matthew E. Taylor, Michael L. Littman, David L. Roberts
Abstract In order for robots and other artificial agents to efficiently learn to perform useful tasks defined by an end user, they must understand not only the goals of those tasks, but also the structure and dynamics of that user’s environment. While existing work has looked at how the goals of a task can be inferred from a human teacher, the agent is often left to learn about the environment on its own. To address this limitation, we develop an algorithm, Behavior Aware Modeling (BAM), which incorporates a teacher’s knowledge into a model of the transition dynamics of an agent’s environment. We evaluate BAM both in simulation and with real human teachers, learning from a combination of task demonstrations and evaluative feedback, and show that it can outperform approaches which do not explicitly consider this source of dynamics knowledge.
Tasks
Published 2019-07-19
URL https://arxiv.org/abs/1907.08478v1
PDF https://arxiv.org/pdf/1907.08478v1.pdf
PWC https://paperswithcode.com/paper/interactive-learning-of-environment-dynamics
Repo
Framework

Optimizing Data Usage via Differentiable Rewards

Title Optimizing Data Usage via Differentiable Rewards
Authors Xinyi Wang, Hieu Pham, Paul Michel, Antonios Anastasopoulos, Graham Neubig, Jaime Carbonell
Abstract To acquire a new skill, humans learn better and faster if a tutor, based on their current knowledge level, informs them of how much attention they should pay to particular content or practice problems. Similarly, a machine learning model could potentially be trained better with a scorer that “adapts” to its current learning state and estimates the importance of each training data instance. Training such an adaptive scorer efficiently is a challenging problem; in order to precisely quantify the effect of a data instance at a given time during the training, it is typically necessary to first complete the entire training process. To efficiently optimize data usage, we propose a reinforcement learning approach called Differentiable Data Selection (DDS). In DDS, we formulate a scorer network as a learnable function of the training data, which can be efficiently updated along with the main model being trained. Specifically, DDS updates the scorer with an intuitive reward signal: it should up-weight data whose gradient is similar to the gradient on a dev set on which we would ultimately like to perform well. Without significant computing overhead, DDS delivers strong and consistent improvements over several strong baselines on two very different tasks of machine translation and image classification.
Tasks Image Classification, Machine Translation
Published 2019-11-22
URL https://arxiv.org/abs/1911.10088v1
PDF https://arxiv.org/pdf/1911.10088v1.pdf
PWC https://paperswithcode.com/paper/optimizing-data-usage-via-differentiable-1
Repo
Framework
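
The reward signal described above, up-weighting examples whose gradient aligns with the dev-set gradient, can be sketched independently of the full scorer network and RL machinery. The linear model and random data are placeholders.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(20, 3)          # placeholder "main model"

def flat_grad(loss):
    """Gradient of a loss w.r.t. all model parameters, flattened into one vector."""
    grads = torch.autograd.grad(loss, model.parameters())
    return torch.cat([g.reshape(-1) for g in grads])

# Gradient of the loss on a held-out dev set.
x_dev, y_dev = torch.randn(32, 20), torch.randint(0, 3, (32,))
dev_grad = flat_grad(F.cross_entropy(model(x_dev), y_dev))

# Per-example reward: alignment between the example's gradient and the dev gradient.
x_train, y_train = torch.randn(8, 20), torch.randint(0, 3, (8,))
rewards = []
for xi, yi in zip(x_train, y_train):
    gi = flat_grad(F.cross_entropy(model(xi.unsqueeze(0)), yi.unsqueeze(0)))
    rewards.append(F.cosine_similarity(gi, dev_grad, dim=0).item())

print(rewards)   # high values mark the examples a scorer should up-weight
```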

Neuron Interaction Based Representation Composition for Neural Machine Translation

Title Neuron Interaction Based Representation Composition for Neural Machine Translation
Authors Jian Li, Xing Wang, Baosong Yang, Shuming Shi, Michael R. Lyu, Zhaopeng Tu
Abstract Recent NLP studies reveal that substantial linguistic information can be attributed to single neurons, i.e., individual dimensions of the representation vectors. We hypothesize that modeling strong interactions among neurons helps to better capture complex information by composing the linguistic properties embedded in individual neurons. Starting from this intuition, we propose a novel approach to compose representations learned by different components in neural machine translation (e.g., multi-layer networks or multi-head attention), based on modeling strong interactions among neurons in the representation vectors. Specifically, we leverage bilinear pooling to model pairwise multiplicative interactions among individual neurons, and a low-rank approximation to make the model computationally feasible. We further propose extended bilinear pooling to incorporate first-order representations. Experiments on WMT14 English-German and English-French translation tasks show that our model consistently improves performance over the SOTA Transformer baseline. Further analyses demonstrate that our approach indeed captures more syntactic and semantic information as expected.
Tasks Machine Translation
Published 2019-11-22
URL https://arxiv.org/abs/1911.09877v1
PDF https://arxiv.org/pdf/1911.09877v1.pdf
PWC https://paperswithcode.com/paper/neuron-interaction-based-representation
Repo
Framework
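
Low-rank bilinear pooling of two representation vectors, the core operation described above, has a compact form z = P^T((U^T x) * (V^T y)) with "*" the element-wise product. The dimensions below are assumptions, and the sketch omits the paper's integration into the Transformer and the extended variant with first-order terms.

```python
import torch

d_model, rank, d_out = 512, 128, 512   # assumed dimensions
U = torch.nn.Linear(d_model, rank, bias=False)
V = torch.nn.Linear(d_model, rank, bias=False)
P = torch.nn.Linear(rank, d_out, bias=False)

def bilinear_pool(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Pairwise multiplicative neuron interactions via a low-rank factorisation."""
    return P(U(x) * V(y))

# e.g. composing the outputs of two encoder components for a batch of tokens
x = torch.randn(8, 10, d_model)        # (batch, seq_len, d_model)
y = torch.randn(8, 10, d_model)
print(bilinear_pool(x, y).shape)       # torch.Size([8, 10, 512])
```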

Learning Models from Data with Measurement Error: Tackling Underreporting

Title Learning Models from Data with Measurement Error: Tackling Underreporting
Authors Roy Adams, Yuelong Ji, Xiaobin Wang, Suchi Saria
Abstract Measurement error in observational datasets can lead to systematic bias in inferences based on these datasets. As studies based on observational data are increasingly used to inform decisions with real-world impact, it is critical that we develop a robust set of techniques for analyzing and adjusting for these biases. In this paper we present a method for estimating the distribution of an outcome given a binary exposure that is subject to underreporting. Our method is based on a missing data view of the measurement error problem, where the true exposure is treated as a latent variable that is marginalized out of a joint model. We prove three different conditions under which the outcome distribution can still be identified from data containing only error-prone observations of the exposure. We demonstrate this method on synthetic data and analyze its sensitivity to near violations of the identifiability conditions. Finally, we use this method to estimate the effects of maternal smoking and opioid use during pregnancy on childhood obesity, two important problems in public health. Using the proposed method, we estimate these effects using only subject-reported drug use data and substantially refine the range of estimates generated by a sensitivity analysis-based approach. Further, the estimates produced by our method are consistent with existing literature on both the effects of maternal smoking and the rate at which subjects underreport smoking.
Tasks
Published 2019-01-25
URL http://arxiv.org/abs/1901.09060v1
PDF http://arxiv.org/pdf/1901.09060v1.pdf
PWC https://paperswithcode.com/paper/learning-models-from-data-with-measurement
Repo
Framework
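
A heavily simplified version of the missing-data view can be written down directly: marginalise the latent true exposure out of the likelihood, treating an observed flag of 0 as either truly unexposed or exposed-but-unreported. The known reporting rate and Gaussian outcome model below are illustrative assumptions, not the paper's identification conditions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

REPORT_RATE = 0.6   # P(S=1 | A=1), assumed known for this sketch

def neg_log_lik(params, y, s):
    p_a, mu0, mu1, sigma = params
    f0 = norm.pdf(y, mu0, sigma)          # p(y | A=0)
    f1 = norm.pdf(y, mu1, sigma)          # p(y | A=1)
    # S=1: exposure truly present and reported (no false positives assumed).
    lik_s1 = p_a * REPORT_RATE * f1
    # S=0: either truly unexposed, or exposed but unreported (A marginalised out).
    lik_s0 = (1 - p_a) * f0 + p_a * (1 - REPORT_RATE) * f1
    lik = np.where(s == 1, lik_s1, lik_s0)
    return -np.sum(np.log(lik + 1e-12))

# Synthetic data just to exercise the likelihood.
rng = np.random.default_rng(1)
a = rng.binomial(1, 0.3, 2000)                       # latent true exposure
s = a * rng.binomial(1, REPORT_RATE, 2000)           # underreported observation
y = rng.normal(1.0 * a, 1.0)                         # outcome

res = minimize(neg_log_lik, x0=[0.5, 0.0, 0.5, 1.0], args=(y, s),
               bounds=[(0.01, 0.99), (None, None), (None, None), (0.1, None)])
print(res.x)   # estimates of (P(A=1), mu0, mu1, sigma)
```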

Improving Reproducible Deep Learning Workflows with DeepDIVA

Title Improving Reproducible Deep Learning Workflows with DeepDIVA
Authors Michele Alberti, Vinaychandran Pondenkandath, Lars Vögtlin, Marcel Würsch, Rolf Ingold, Marcus Liwicki
Abstract The field of deep learning is experiencing a trend towards producing reproducible research. Nevertheless, it is still often a frustrating experience to reproduce scientific results. This is especially true in the machine learning community, where it is considered acceptable to have black boxes in one's experiments. We present DeepDIVA, a framework designed to facilitate easy experimentation and its reproduction. This framework allows researchers to share their experiments with others, while providing functionality that allows for easy experimentation, such as: boilerplate code, experiment management, hyper-parameter optimization, verification of data integrity, and visualization of data and results. Additionally, the code of DeepDIVA is well-documented and supported by several tutorials that allow a new user to quickly familiarize themselves with the framework.
Tasks
Published 2019-06-11
URL https://arxiv.org/abs/1906.04736v1
PDF https://arxiv.org/pdf/1906.04736v1.pdf
PWC https://paperswithcode.com/paper/improving-reproducible-deep-learning
Repo
Framework

Metric Pose Estimation for Human-Machine Interaction Using Monocular Vision

Title Metric Pose Estimation for Human-Machine Interaction Using Monocular Vision
Authors Christoph Heindl, Markus Ikeda, Gernot Stübl, Andreas Pichler, Josef Scharinger
Abstract The rapid growth of collaborative robotics in production requires new automation technologies that take human and machine equally into account. In this work, we describe a monocular camera based system to detect human-machine interactions from a bird’s-eye perspective. Our system predicts poses of humans and robots from a single wide-angle color image. Even though our approach works on 2D color input, we lift the majority of detections to a metric 3D space. Our system merges pose information with predefined virtual sensors to coordinate human-machine interactions. We demonstrate the advantages of our system in three use cases.
Tasks Pose Estimation
Published 2019-10-08
URL https://arxiv.org/abs/1910.03239v1
PDF https://arxiv.org/pdf/1910.03239v1.pdf
PWC https://paperswithcode.com/paper/metric-pose-estimation-for-human-machine
Repo
Framework
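
Lifting a 2D detection to metric 3D, as described above, typically amounts to intersecting the pixel's viewing ray with the ground plane once intrinsics and camera pose are known. The calibration values below are placeholders, not the system's actual parameters.

```python
import numpy as np

# Placeholder intrinsics and pose for a bird's-eye camera looking straight down.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
R = np.array([[1.0,  0.0,  0.0],
              [0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0]])
C = np.array([0.0, 0.0, 4.0])             # camera centre 4 m above the ground
t = -R @ C                                # so that X_cam = R @ X_world + t

def lift_to_ground(u: float, v: float) -> np.ndarray:
    """Back-project pixel (u, v) onto the ground plane z = 0 in world coordinates."""
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction, camera frame
    d_world = R.T @ d_cam                              # ray direction, world frame
    s = -C[2] / d_world[2]                             # ray/plane intersection parameter
    return C + s * d_world

print(lift_to_ground(700.0, 400.0))       # metric (x, y, 0) ground position
```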

Deep-Gap: A deep learning framework for forecasting crowdsourcing supply-demand gap based on imaging time series and residual learning

Title Deep-Gap: A deep learning framework for forecasting crowdsourcing supply-demand gap based on imaging time series and residual learning
Authors Ahmed Ben Said, Abdelkarim Erradi
Abstract Mobile crowdsourcing has become easier thanks to the widespread adoption of smartphones capable of seamlessly collecting and pushing the desired data to cloud services. However, the success of mobile crowdsourcing relies on balancing the supply and demand by first accurately forecasting, spatially and temporally, the supply-demand gap, and then providing efficient incentives to encourage participant movements to maintain the desired balance. In this paper, we propose Deep-Gap, a deep learning approach based on residual learning to predict the gap between mobile crowdsourced service supply and demand at a given time and space. The prediction can drive the incentive model to achieve a geographically balanced service coverage in order to avoid the case where some areas are over-supplied while other areas are under-supplied. This allows anticipating the supply-demand gap and redirecting crowdsourced service providers towards target areas. Deep-Gap relies on historical supply-demand time series data as well as available external data such as weather conditions and day type (e.g., weekday, weekend, holiday). First, we roll and encode the time series of supply-demand as images using the Gramian Angular Summation Field (GASF), Gramian Angular Difference Field (GADF) and the Recurrence Plot (REC). These images are then used to train deep Convolutional Neural Networks (CNN) to extract the low- and high-level features and forecast the crowdsourced services gap. We conduct a comprehensive comparative study by establishing two supply-demand gap forecasting scenarios: with and without external data. Compared to state-of-the-art approaches, Deep-Gap achieves the lowest forecasting errors in both scenarios.
Tasks Time Series
Published 2019-11-02
URL https://arxiv.org/abs/1911.07625v1
PDF https://arxiv.org/pdf/1911.07625v1.pdf
PWC https://paperswithcode.com/paper/deep-gap-a-deep-learning-framework-for
Repo
Framework
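
The Gramian Angular Field encodings mentioned in the abstract have closed forms that are easy to sketch: rescale the series to [-1, 1], map values to angles, and build the summation/difference matrices. The toy series below stands in for a real supply-demand gap series.

```python
import numpy as np

def gramian_angular_fields(x: np.ndarray):
    """GASF and GADF image encodings of a 1-D time series."""
    # Rescale to [-1, 1] and map each value to an angle phi = arccos(x_scaled).
    x_scaled = (2 * x - x.max() - x.min()) / (x.max() - x.min())
    x_scaled = np.clip(x_scaled, -1.0, 1.0)
    phi = np.arccos(x_scaled)
    gasf = np.cos(phi[:, None] + phi[None, :])   # summation field
    gadf = np.sin(phi[:, None] - phi[None, :])   # difference field
    return gasf, gadf

# Toy stand-in for a supply-demand gap series.
supply_demand_gap = np.sin(np.linspace(0, 4 * np.pi, 48)) + 0.1 * np.random.randn(48)
gasf, gadf = gramian_angular_fields(supply_demand_gap)
print(gasf.shape, gadf.shape)   # two 48x48 "images" to feed a CNN
```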

A statistical test for correspondence of texts to the Zipf-Mandelbrot law

Title A statistical test for correspondence of texts to the Zipf-Mandelbrot law
Authors Anik Chakrabarty, Mikhail Chebunin, Artyom Kovalevskii, Ilya Pupyshev, Natalia Zakrevskaya, Qianqian Zhou
Abstract We analyse correspondence of a text to a simple probabilistic model. The model assumes that the words are selected independently from an infinite dictionary. The probability distribution corresponds to the Zipf-Mandelbrot law. We count sequentially the numbers of different words in the text and obtain the process of the numbers of different words. Then we estimate the Zipf-Mandelbrot law parameters using the same sequence and construct an estimate of the expectation of the number of different words in the text. Then we subtract the corresponding values of the estimate from the sequence and normalize along the coordinate axes, obtaining a random process on a segment from 0 to 1. We prove that this process (the empirical text bridge) converges weakly in the uniform metric on $C(0,1)$ to a centered Gaussian process with a.s. continuous paths. We develop and implement an algorithm for approximate calculation of eigenvalues of the covariance function of the limit Gaussian process, and then an algorithm for calculating the probability distribution of the integral of the square of this process. We use the algorithm to analyze uniformity of texts in English, French, Russian and Chinese.
Tasks
Published 2019-12-25
URL https://arxiv.org/abs/1912.11600v1
PDF https://arxiv.org/pdf/1912.11600v1.pdf
PWC https://paperswithcode.com/paper/a-statistical-test-for-correspondence-of
Repo
Framework
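
The first ingredient of the test, the sequential count of distinct words, is straightforward to sketch; fitting the Zipf-Mandelbrot parameters and building the limiting Gaussian process are omitted here, and the text below is a placeholder.

```python
import numpy as np

def distinct_word_process(words):
    """R_k = number of different words among the first k words, k = 1..n."""
    seen, counts = set(), []
    for w in words:
        seen.add(w)
        counts.append(len(seen))
    return np.asarray(counts)

text = "the quick brown fox jumps over the lazy dog the fox".split()
R = distinct_word_process(text)
print(R)   # [1 2 3 4 5 6 6 7 8 8 8]

# After estimating the Zipf-Mandelbrot parameters, one would subtract the fitted
# expectation E[R_k] and rescale both axes to obtain the empirical "text bridge"
# on C(0,1) analysed in the paper.
```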