January 30, 2020

2988 words 15 mins read

Paper Group ANR 423

A maximum principle argument for the uniform convergence of graph Laplacian regressors. Seeding the Singularity for A.I. Convergence and sample complexity of gradient methods for the model-free linear quadratic regulator problem. Streamlined Variational Inference for Linear Mixed Models with Crossed Random Effects. Technical Report: Co-learning of …

A maximum principle argument for the uniform convergence of graph Laplacian regressors

Title A maximum principle argument for the uniform convergence of graph Laplacian regressors
Authors Nicolas Garcia Trillos, Ryan Murray
Abstract We study asymptotic consistency guarantees for a non-parametric regression problem with Laplacian regularization. In particular, we consider $(x_1, y_1), \dots, (x_n, y_n)$ samples from some distribution on the cross product $\mathcal{M} \times \mathbb{R}$, where $\mathcal{M}$ is an $m$-dimensional manifold embedded in $\mathbb{R}^d$. A geometric graph on the cloud $\{x_1, \dots, x_n\}$ is constructed by connecting points that are within some specified distance $\varepsilon_n$. A suitable semi-linear equation involving the resulting graph Laplacian is used to obtain a regressor for the observed values of $y$. We establish probabilistic error rates for the uniform difference between the regressor constructed from the observed data and the Bayes regressor (or trend) associated to the ground-truth distribution. We give the explicit dependence of the rates in terms of the parameter $\varepsilon_n$, the strength of regularization $\beta_n$, and the number of data points $n$. Our argument relies on a simple, yet powerful, maximum principle for the graph Laplacian. We also address a simple extension of the framework to a semi-supervised setting.
Tasks
Published 2019-01-29
URL http://arxiv.org/abs/1901.10089v2
PDF http://arxiv.org/pdf/1901.10089v2.pdf
PWC https://paperswithcode.com/paper/a-maximum-principle-argument-for-the-uniform
Repo
Framework
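
To make the construction above concrete, here is a minimal sketch of $\varepsilon$-graph Laplacian regression: build the neighborhood graph on the point cloud, form the unnormalized graph Laplacian, and solve a regularized linear system. The ridge-type linear objective and all parameter values below are illustrative simplifications of the paper's semi-linear equation, not the authors' exact estimator.

```python
import numpy as np
from scipy.spatial.distance import cdist

def graph_laplacian_regressor(x, y, eps, beta):
    """Simplified epsilon-graph Laplacian regression (a linear stand-in for
    the semi-linear graph equation discussed in the abstract)."""
    # Adjacency: connect points within distance eps, no self-loops.
    dist = cdist(x, x)
    W = (dist <= eps).astype(float)
    np.fill_diagonal(W, 0.0)
    # Unnormalized graph Laplacian L = D - W.
    L = np.diag(W.sum(axis=1)) - W
    # Minimize beta * f^T L f + ||f - y||^2, i.e. solve (I + beta L) f = y.
    return np.linalg.solve(np.eye(len(y)) + beta * L, y)

# Toy usage: noisy observations of a smooth trend on a circle embedded in R^2.
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 300)
x = np.column_stack([np.cos(t), np.sin(t)])
y = np.sin(3 * t) + 0.3 * rng.normal(size=t.size)
f_hat = graph_laplacian_regressor(x, y, eps=0.3, beta=1.0)
```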

Seeding the Singularity for A.I.

Title Seeding the Singularity for A.I.
Authors Pavel Kraikivski
Abstract The singularity refers to the idea that once a machine with artificial intelligence surpassing human intelligence capacity is created, it will trigger explosive technological and intelligence growth. I propose to test the hypothesis that machine intelligence capacity can grow autonomously starting with an intelligence comparable to that of bacteria - microbial intelligence. The goal will be to demonstrate that rapid growth in intelligence capacity can be realized at all in artificial computing systems. I propose the following three properties that may allow an artificial intelligence to exhibit a steady growth in its intelligence capacity: (i) learning with the ability to modify itself when exposed to more data, (ii) acquiring new functionalities (skills), and (iii) expanding or replicating itself. The algorithms must demonstrate a rapid growth in skills of data processing and analysis and gain qualitatively different functionalities, at least as long as the current computing technology supports their scalable development. The existing algorithms that already encompass some of these or similar properties, as well as missing abilities that must yet be implemented, will be reviewed in this work. Future computational tests could support or oppose the hypothesis that artificial intelligence can potentially grow to the level of superintelligence which overcomes the limitations in hardware by producing necessary processing resources or by changing the physical realization of computation from using chip circuits to using quantum computing principles.
Tasks
Published 2019-08-04
URL https://arxiv.org/abs/1908.01766v1
PDF https://arxiv.org/pdf/1908.01766v1.pdf
PWC https://paperswithcode.com/paper/seeding-the-singularity-for-ai
Repo
Framework

Convergence and sample complexity of gradient methods for the model-free linear quadratic regulator problem

Title Convergence and sample complexity of gradient methods for the model-free linear quadratic regulator problem
Authors Hesameddin Mohammadi, Armin Zare, Mahdi Soltanolkotabi, Mihailo R. Jovanović
Abstract Model-free reinforcement learning attempts to find an optimal control action for an unknown dynamical system by directly searching over the parameter space of controllers. The convergence behavior and statistical properties of these approaches are often poorly understood because of the nonconvex nature of the underlying optimization problems as well as the lack of exact gradient computation. In this paper, we take a step towards demystifying the performance and efficiency of such methods by focusing on the standard infinite-horizon linear quadratic regulator problem for continuous-time systems with unknown state-space parameters. We establish exponential stability for the ordinary differential equation (ODE) that governs the gradient-flow dynamics over the set of stabilizing feedback gains and show that a similar result holds for the gradient descent method that arises from the forward Euler discretization of the corresponding ODE. We also provide theoretical bounds on the convergence rate and sample complexity of a random search method. Our results demonstrate that the required simulation time for achieving $\epsilon$-accuracy in a model-free setup and the total number of function evaluations both scale as $\log(1/\epsilon)$.
Tasks
Published 2019-12-26
URL https://arxiv.org/abs/1912.11899v1
PDF https://arxiv.org/pdf/1912.11899v1.pdf
PWC https://paperswithcode.com/paper/convergence-and-sample-complexity-of-gradient
Repo
Framework
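
A small sketch of the gradient-free random search idea analyzed above, using a two-point perturbation estimate of the LQR cost gradient. The discrete-time toy system, the finite-horizon rollout cost, and all step-size constants are assumptions for illustration only; the paper studies the continuous-time, infinite-horizon problem.

```python
import numpy as np

def lqr_cost(K, A, B, Q, R, x0s, horizon=200):
    """Finite-horizon surrogate of the LQR cost under feedback u = -K x,
    averaged over a few initial states (model-free: rollouts only)."""
    total = 0.0
    for x0 in x0s:
        x, c = x0.copy(), 0.0
        for _ in range(horizon):
            u = -K @ x
            c += x @ Q @ x + u @ R @ u
            x = A @ x + B @ u
        total += c
    return total / len(x0s)

def random_search(K, cost, n_iters=100, step=0.05, smoothing=0.1, seed=0):
    """Basic two-point random search (gradient-free), in the spirit of the
    methods analyzed in the paper; all constants here are illustrative."""
    rng = np.random.default_rng(seed)
    for _ in range(n_iters):
        U = rng.normal(size=K.shape)
        grad_est = (cost(K + smoothing * U) - cost(K - smoothing * U)) / (2 * smoothing) * U
        K = K - step * grad_est
    return K

# Toy double-integrator example with hypothetical weights.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)
x0s = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
K0 = np.array([[0.5, 0.5]])  # assumed stabilizing initial gain
K_hat = random_search(K0, lambda K: lqr_cost(K, A, B, Q, R, x0s))
```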

Streamlined Variational Inference for Linear Mixed Models with Crossed Random Effects

Title Streamlined Variational Inference for Linear Mixed Models with Crossed Random Effects
Authors Marianne Menictas, Gioia Di Credico, Matt P. Wand
Abstract We derive streamlined mean field variational Bayes algorithms for fitting linear mixed models with crossed random effects. In the most general situation, where the dimensions of the crossed groups are arbitrarily large, streamlining is hindered by lack of sparseness in the underlying least squares system. Because of this fact we also consider a hierarchy of relaxations of the mean field product restriction. The least stringent product restriction delivers a high degree of inferential accuracy. However, this accuracy must be weighed against its higher storage and computing demands. Faster sparse storage and computing alternatives are also provided, but come at the price of diminished inferential accuracy. This article provides full algorithmic details of three variational inference strategies, presents detailed empirical results on their pros and cons and, thus, guides users in their choice of variational inference approach depending on the problem size and computing resources.
Tasks
Published 2019-10-04
URL https://arxiv.org/abs/1910.01799v2
PDF https://arxiv.org/pdf/1910.01799v2.pdf
PWC https://paperswithcode.com/paper/streamlined-variational-inference-for-linear
Repo
Framework
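
The sparseness issue mentioned in the abstract can be seen directly from the random-effects design matrices: with two crossed grouping factors, the off-diagonal block $Z_1^\top Z_2$ of the least squares system is generally dense, unlike the nested case. Below is a small numerical illustration with arbitrary group counts; it is not the paper's variational algorithm.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(1)
n, n_groups_1, n_groups_2 = 1000, 20, 15
g1 = rng.integers(0, n_groups_1, size=n)   # factor 1 labels (e.g. subjects)
g2 = rng.integers(0, n_groups_2, size=n)   # factor 2 labels (e.g. items)

# Sparse indicator (design) matrices for the two crossed random effects.
Z1 = sparse.csr_matrix((np.ones(n), (np.arange(n), g1)), shape=(n, n_groups_1))
Z2 = sparse.csr_matrix((np.ones(n), (np.arange(n), g2)), shape=(n, n_groups_2))

# The cross block of the least squares system is essentially fully dense.
cross_block = (Z1.T @ Z2).toarray()
fill = np.count_nonzero(cross_block) / cross_block.size
print(f"fraction of nonzero entries in Z1'Z2: {fill:.2f}")
```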

Technical Report: Co-learning of geometry and semantics for online 3D mapping

Title Technical Report: Co-learning of geometry and semantics for online 3D mapping
Authors Marcela Carvalho, Maxime Ferrera, Alexandre Boulch, Julien Moras, Bertrand Le Saux, Pauline Trouvé-Peloux
Abstract This paper is a technical report about our submission for the ECCV 2018 3DRMS Workshop Challenge on Semantic 3D Reconstruction [Tylecek et al., 2018]. In this paper, we address 3D semantic reconstruction for autonomous navigation using co-learning of depth map and semantic segmentation. The core of our pipeline is a deep multi-task neural network which tightly refines depth and also produces accurate semantic segmentation maps. Its inputs are an image and a raw depth map produced from a pair of images by standard stereo vision. The resulting semantic 3D point clouds are then merged in order to create a consistent 3D mesh, in turn used to produce dense semantic 3D reconstruction maps. The performance of each step of the proposed method is evaluated on the dataset and multiple tasks of the 3DRMS Challenge, and repeatedly surpasses state-of-the-art approaches.
Tasks 3D Reconstruction, Autonomous Navigation, Semantic Segmentation
Published 2019-11-04
URL https://arxiv.org/abs/1911.01082v1
PDF https://arxiv.org/pdf/1911.01082v1.pdf
PWC https://paperswithcode.com/paper/technical-report-co-learning-of-geometry-and
Repo
Framework
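
A toy sketch of a shared-encoder, two-head network of the kind described above, taking an image together with a raw stereo depth map and producing a refined depth map plus semantic logits trained under a joint loss. All layer sizes, the 4-channel input, and the loss weighting are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class CoLearningHead(nn.Module):
    """Illustrative shared-encoder / two-head network: one branch refines
    depth, the other predicts semantic labels."""
    def __init__(self, num_classes=9):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.depth_head = nn.Conv2d(64, 1, 3, padding=1)          # refined depth
        self.seg_head = nn.Conv2d(64, num_classes, 3, padding=1)  # class logits

    def forward(self, rgb, raw_depth):
        x = torch.cat([rgb, raw_depth], dim=1)   # RGB + raw stereo depth
        feats = self.encoder(x)
        return self.depth_head(feats), self.seg_head(feats)

# Joint multi-task loss with an assumed equal weighting.
model = CoLearningHead()
rgb = torch.randn(2, 3, 64, 64)
raw_depth = torch.randn(2, 1, 64, 64)
gt_depth = torch.randn(2, 1, 64, 64)
gt_labels = torch.randint(0, 9, (2, 64, 64))
pred_depth, logits = model(rgb, raw_depth)
loss = nn.functional.l1_loss(pred_depth, gt_depth) + nn.functional.cross_entropy(logits, gt_labels)
```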

Surface Reconstruction from 3D Line Segments

Title Surface Reconstruction from 3D Line Segments
Authors Pierre-Alain Langlois, Alexandre Boulch, Renaud Marlet
Abstract In man-made environments such as indoor scenes, when point-based 3D reconstruction fails due to the lack of texture, lines can still be detected and used to support surfaces. We present a novel method for watertight piecewise-planar surface reconstruction from 3D line segments with visibility information. First, planes are extracted by a novel RANSAC approach for line segments that allows multiple shape support. Then, each 3D cell of a plane arrangement is labeled full or empty based on line attachment to planes, visibility and regularization. Experiments show the robustness to sparse input data, noise and outliers.
Tasks 3D Reconstruction
Published 2019-11-01
URL https://arxiv.org/abs/1911.00451v1
PDF https://arxiv.org/pdf/1911.00451v1.pdf
PWC https://paperswithcode.com/paper/surface-reconstruction-from-3d-line-segments
Repo
Framework
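
A simplified sketch of plane extraction from 3D line segments via RANSAC, as described above: planes are proposed from pairs of randomly sampled segments and kept when enough whole segments lie close to them. The thresholds are illustrative, and the paper's multi-shape support and the subsequent full/empty cell labeling are not reproduced here.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points: unit normal n and offset d with n.x + d ~ 0."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    return n, -n @ centroid

def ransac_planes_from_segments(segments, n_iters=500, tol=0.02, min_inliers=10, seed=0):
    """Greedy RANSAC over 3D line segments (each a (2, 3) array of endpoints):
    propose a plane from two random segments, keep it if enough whole segments
    lie within `tol`, then remove the inliers and repeat."""
    rng = np.random.default_rng(seed)
    segments = list(segments)
    planes = []
    while len(segments) > min_inliers:
        best = None
        for _ in range(n_iters):
            i, j = rng.choice(len(segments), size=2, replace=False)
            n, d = fit_plane(np.vstack([segments[i], segments[j]]))
            inliers = [k for k, s in enumerate(segments)
                       if np.all(np.abs(s @ n + d) < tol)]
            if best is None or len(inliers) > len(best[1]):
                best = ((n, d), inliers)
        if len(best[1]) < min_inliers:
            break
        planes.append(best[0])
        segments = [s for k, s in enumerate(segments) if k not in set(best[1])]
    return planes

# Toy usage: segments lying on two axis-aligned planes plus a few outliers.
rng = np.random.default_rng(1)
def seg_on_plane(axis, offset):
    p = rng.uniform(-1, 1, size=(2, 3))
    p[:, axis] = offset
    return p
segments = ([seg_on_plane(2, 0.0) for _ in range(30)]
            + [seg_on_plane(0, 0.5) for _ in range(30)]
            + [rng.uniform(-1, 1, size=(2, 3)) for _ in range(5)])
planes = ransac_planes_from_segments(segments)
```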

MFAS: Multimodal Fusion Architecture Search

Title MFAS: Multimodal Fusion Architecture Search
Authors Juan-Manuel Pérez-Rúa, Valentin Vielzeuf, Stéphane Pateux, Moez Baccouche, Frédéric Jurie
Abstract We tackle the problem of finding good architectures for multimodal classification problems. We propose a novel and generic search space that spans a large number of possible fusion architectures. In order to find an optimal architecture for a given dataset in the proposed search space, we leverage an efficient sequential model-based exploration approach that is tailored for the problem. We demonstrate the value of posing multimodal fusion as a neural architecture search problem by extensive experimentation on a toy dataset and two other real multimodal datasets. We discover fusion architectures that exhibit state-of-the-art performance for problems with different domains and dataset sizes, including the NTU RGB+D dataset, the largest multi-modal action recognition dataset available.
Tasks Neural Architecture Search, Temporal Action Localization
Published 2019-03-15
URL http://arxiv.org/abs/1903.06496v1
PDF http://arxiv.org/pdf/1903.06496v1.pdf
PWC https://paperswithcode.com/paper/mfas-multimodal-fusion-architecture-search
Repo
Framework
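
A schematic of sequential model-based search over a toy fusion search space, in the spirit of the approach above: evaluate a few random candidates, fit a surrogate predictor of validation accuracy, and spend the next evaluations on the candidates the surrogate ranks highest. The encoding, the random-forest surrogate, and the hypothetical `evaluate_architecture` placeholder are illustrative, not the MFAS algorithm.

```python
import itertools
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy search space: per fusion step, pick a layer from each modality and an op.
N_LAYERS_A, N_LAYERS_B, N_OPS, N_STEPS = 4, 4, 2, 2

def all_candidates():
    step_choices = [np.array(s) for s in
                    itertools.product(range(N_LAYERS_A), range(N_LAYERS_B), range(N_OPS))]
    return [np.concatenate(c) for c in itertools.product(step_choices, repeat=N_STEPS)]

def evaluate_architecture(encoding):
    # Hypothetical stand-in for training the fusion network and reporting
    # validation accuracy; in practice this is the expensive step.
    return float(np.random.default_rng(int(encoding.sum())).random())

candidates = all_candidates()
rng = np.random.default_rng(0)

# Warm start: evaluate a random subset of candidates.
idx = rng.choice(len(candidates), size=16, replace=False)
history = {tuple(candidates[i]): evaluate_architecture(candidates[i]) for i in idx}

# Sequential rounds: fit the surrogate, then evaluate its top-ranked candidates.
for _ in range(3):
    X = np.array([list(k) for k in history])
    y = np.array(list(history.values()))
    surrogate = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
    pool = [c for c in candidates if tuple(c) not in history]
    scores = surrogate.predict(np.array(pool))
    for i in np.argsort(scores)[-4:]:          # top-4 predicted candidates
        history[tuple(pool[i])] = evaluate_architecture(pool[i])

best = max(history, key=history.get)
```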

Reinforcement Learning with Attention that Works: A Self-Supervised Approach

Title Reinforcement Learning with Attention that Works: A Self-Supervised Approach
Authors Anthony Manchin, Ehsan Abbasnejad, Anton van den Hengel
Abstract Attention models have had a significant positive impact on deep learning across a range of tasks. However, previous attempts at integrating attention with reinforcement learning have failed to produce significant improvements. We propose the first combination of self-attention and reinforcement learning that is capable of producing significant improvements, including new state-of-the-art results in the Arcade Learning Environment. Unlike the selective attention models used in previous attempts, which constrain the attention via preconceived notions of importance, our implementation utilises the Markovian properties inherent in the state input. Our method produces a faithful visualisation of the policy, focusing on the behaviour of the agent. Our experiments demonstrate that the trained policies use multiple simultaneous foci of attention, and are able to modulate attention over time to deal with situations of partial observability.
Tasks Atari Games
Published 2019-04-06
URL http://arxiv.org/abs/1904.03367v1
PDF http://arxiv.org/pdf/1904.03367v1.pdf
PWC https://paperswithcode.com/paper/reinforcement-learning-with-attention-that
Repo
Framework
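
An illustrative state encoder combining convolutional features with self-attention over spatial positions, the kind of combination the abstract describes; the attention weights can be reshaped over the feature grid to visualise where the policy attends. Layer sizes, the pooling choice, and the use of a recent PyTorch `batch_first` attention module are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ConvSelfAttentionEncoder(nn.Module):
    """Illustrative state encoder: conv features followed by self-attention
    over spatial positions, producing a pooled embedding for an RL policy."""
    def __init__(self, in_channels=4, dim=64, heads=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, dim, 4, stride=2), nn.ReLU(),
        )
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, obs):
        feats = self.conv(obs)                      # (B, dim, H, W)
        b, c, h, w = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)   # (B, H*W, dim)
        attended, weights = self.attn(tokens, tokens, tokens)
        # Each row of `weights` can be reshaped to (h, w) to visualise
        # the spatial foci of attention for that query position.
        return attended.mean(dim=1), weights        # pooled state embedding

encoder = ConvSelfAttentionEncoder()
obs = torch.randn(2, 4, 84, 84)                     # stacked Atari-style frames
embedding, attn_weights = encoder(obs)
```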

Meta Dropout: Learning to Perturb Features for Generalization

Title Meta Dropout: Learning to Perturb Features for Generalization
Authors Hae Beom Lee, Taewook Nam, Eunho Yang, Sung Ju Hwang
Abstract A machine learning model that generalizes well should obtain low errors on the unseen test examples. Test examples could be understood as perturbations of training examples, which means that if we know how to optimally perturb training examples to simulate test examples, we could achieve better generalization at test time. However, obtaining such perturbation is not possible in standard machine learning frameworks as the distribution of the test data is unknown. To tackle this challenge, we propose a meta-learning framework that learns to perturb the latent features of training examples for generalization. Specifically, we meta-learn a noise generator that will output the optimal noise distribution for latent features across all network layers to obtain low error on the test instances, in an input-dependent manner. Then, the learned noise generator will perturb the training examples of unseen tasks at the meta-test time. We show that our method, Meta-dropout, could also be understood as meta-learning of the variational inference framework for a specific graphical model, and describe its connection to existing regularizers. Finally, we validate Meta-dropout on multiple benchmark datasets for few-shot classification, whose results show that it not only significantly improves the generalization performance of meta-learners but also allows them to obtain fast convergence.
Tasks Meta-Learning
Published 2019-05-30
URL https://arxiv.org/abs/1905.12914v1
PDF https://arxiv.org/pdf/1905.12914v1.pdf
PWC https://paperswithcode.com/paper/meta-dropout-learning-to-perturb-features-for
Repo
Framework
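
A simplified sketch of the core ingredient described above: an input-dependent noise generator that perturbs latent features. Here a small network maps each feature vector to the scale of multiplicative Gaussian noise; the parameterisation is an assumption, and the meta-learning outer loop that trains the generator across tasks is omitted.

```python
import torch
import torch.nn as nn

class NoiseGenerator(nn.Module):
    """Input-dependent multiplicative noise on latent features (a simplified
    sketch of the meta-dropout idea; architecture is illustrative)."""
    def __init__(self, dim):
        super().__init__()
        self.scale_net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h):
        scale = nn.functional.softplus(self.scale_net(h))   # per-feature noise scale
        noise = 1.0 + scale * torch.randn_like(h)           # multiplicative perturbation
        return h * noise

# Perturbing a batch of latent features from some base learner; the perturbed
# features would feed the inner-loop loss during meta-training.
gen = NoiseGenerator(dim=128)
h = torch.randn(32, 128)
h_perturbed = gen(h)
```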

Leveraging Vision Reconstruction Pipelines for Satellite Imagery

Title Leveraging Vision Reconstruction Pipelines for Satellite Imagery
Authors Kai Zhang, Jin Sun, Noah Snavely
Abstract Reconstructing 3D geometry from satellite imagery is an important topic of research. However, disparities exist between how this 3D reconstruction problem is handled in the remote sensing context and how multi-view reconstruction pipelines have been developed in the computer vision community. In this paper, we explore whether state-of-the-art reconstruction pipelines from the vision community can be applied to satellite imagery. Along the way, we address several challenges in adapting vision-based structure-from-motion and multi-view stereo methods. We show that vision pipelines can offer competitive speed and accuracy in the satellite context.
Tasks 3D Reconstruction
Published 2019-10-07
URL https://arxiv.org/abs/1910.02989v2
PDF https://arxiv.org/pdf/1910.02989v2.pdf
PWC https://paperswithcode.com/paper/leveraging-vision-reconstruction-pipelines
Repo
Framework

Sampling Good Latent Variables via CPP-VAEs: VAEs with Condition Posterior as Prior

Title Sampling Good Latent Variables via CPP-VAEs: VAEs with Condition Posterior as Prior
Authors Sadegh Aliakbarian, Fatemeh Sadat Saleh, Mathieu Salzmann, Lars Petersson, Stephen Gould
Abstract In practice, conditional variational autoencoders (CVAEs) perform conditioning by combining two sources of information which are computed completely independently; CVAEs first compute the condition, then sample the latent variable, and finally concatenate these two sources of information. However, these two processes should be tied together such that the model samples a latent variable given the conditioning signal. In this paper, we directly address this by conditioning the sampling of the latent variable on the CVAE condition, thus encouraging it to carry relevant information. We study this specifically for tasks with strong conditioning signals and where the generative models have highly expressive decoders able to generate a sample based solely on the information contained in the condition. In particular, we experiment with the two challenging tasks of diverse human motion generation and diverse image captioning, for which our results suggest that unifying latent variable sampling and conditioning not only yields samples of higher quality, but also helps the model avoid posterior collapse, a known problem of VAEs with expressive decoders.
Tasks Image Captioning
Published 2019-12-18
URL https://arxiv.org/abs/1912.08521v1
PDF https://arxiv.org/pdf/1912.08521v1.pdf
PWC https://paperswithcode.com/paper/sampling-good-latent-variables-via-cpp-vaes
Repo
Framework
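
A minimal sketch of the conditioning idea above: rather than a fixed N(0, I) prior, a prior network maps the condition to the latent prior's parameters, and the KL term compares the posterior q(z|x, c) with this condition-dependent prior p(z|c). The dimensions and single-layer networks are illustrative, not the paper's model.

```python
import torch
import torch.nn as nn

class ConditionalPriorVAE(nn.Module):
    """CVAE sketch with a condition-dependent prior, so latent sampling is
    tied to the conditioning signal."""
    def __init__(self, x_dim=64, c_dim=16, z_dim=8):
        super().__init__()
        self.prior_net = nn.Linear(c_dim, 2 * z_dim)            # p(z | c)
        self.encoder = nn.Linear(x_dim + c_dim, 2 * z_dim)      # q(z | x, c)
        self.decoder = nn.Linear(z_dim + c_dim, x_dim)          # p(x | z, c)

    @staticmethod
    def _gaussian(params):
        mu, logvar = params.chunk(2, dim=-1)
        return mu, logvar

    def forward(self, x, c):
        p_mu, p_logvar = self._gaussian(self.prior_net(c))
        q_mu, q_logvar = self._gaussian(self.encoder(torch.cat([x, c], dim=-1)))
        z = q_mu + torch.exp(0.5 * q_logvar) * torch.randn_like(q_mu)   # reparameterize
        x_hat = self.decoder(torch.cat([z, c], dim=-1))
        # KL between the diagonal Gaussians q(z|x,c) and p(z|c).
        kl = 0.5 * (p_logvar - q_logvar
                    + (q_logvar.exp() + (q_mu - p_mu) ** 2) / p_logvar.exp() - 1).sum(-1)
        recon = ((x - x_hat) ** 2).sum(-1)
        return (recon + kl).mean()

model = ConditionalPriorVAE()
loss = model(torch.randn(32, 64), torch.randn(32, 16))
```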

Sparse Equisigned PCA: Algorithms and Performance Bounds in the Noisy Rank-1 Setting

Title Sparse Equisigned PCA: Algorithms and Performance Bounds in the Noisy Rank-1 Setting
Authors Arvind Prasadan, Raj Rao Nadakuditi, Debashis Paul
Abstract Singular value decomposition (SVD) based principal component analysis (PCA) breaks down in the high-dimensional and limited sample size regime below a certain critical eigen-SNR that depends on the dimensionality of the system and the number of samples. Below this critical eigen-SNR, the estimates returned by the SVD are asymptotically uncorrelated with the latent principal components. We consider a setting where the left singular vector of the underlying rank one signal matrix is assumed to be sparse and the right singular vector is assumed to be equisigned, that is, having either only nonnegative or only nonpositive entries. We consider six different algorithms for estimating the sparse principal component based on different statistical criteria and prove that by exploiting sparsity, we recover consistent estimates in the low eigen-SNR regime where the SVD fails. Our analysis reveals conditions under which a coordinate selection scheme based on a \textit{sum-type decision statistic} outperforms schemes that utilize the $\ell_1$ and $\ell_2$ norm-based statistics. We derive lower bounds on the size of detectable coordinates of the principal left singular vector and utilize these lower bounds to derive lower bounds on the worst-case risk. Finally, we verify our findings with numerical simulations and illustrate the performance with a video data example, where the interest is in identifying objects.
Tasks
Published 2019-05-22
URL https://arxiv.org/abs/1905.09369v2
PDF https://arxiv.org/pdf/1905.09369v2.pdf
PWC https://paperswithcode.com/paper/sparse-equisigned-pca-algorithms-and
Repo
Framework
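
A sketch of coordinate selection with a sum-type decision statistic in the noisy rank-1 model: because the right singular vector is equisigned, rows carrying signal have large absolute row sums, so rows whose normalised row sum exceeds a threshold are kept and the SVD is run on that submatrix. The threshold and the toy signal strength below are arbitrary choices made so that detection is easy; the paper derives principled values and risk bounds.

```python
import numpy as np

def sum_statistic_spca(Y, threshold):
    """Select coordinates of the sparse left singular vector in the model
    Y = theta * u v^T + noise (sparse u, equisigned v) via a sum-type
    statistic, then estimate u by an SVD on the selected rows."""
    n, m = Y.shape
    row_stat = np.abs(Y.sum(axis=1)) / np.sqrt(m)     # sum-type decision statistic
    selected = np.flatnonzero(row_stat > threshold)
    u_hat = np.zeros(n)
    if selected.size:
        U, _, _ = np.linalg.svd(Y[selected], full_matrices=False)
        u_hat[selected] = U[:, 0]
    return u_hat, selected

# Toy rank-1 example: sparse left singular vector, nonnegative right one.
rng = np.random.default_rng(0)
n, m, k = 200, 500, 10
u = np.zeros(n)
u[:k] = 1 / np.sqrt(k)
v = np.abs(rng.normal(size=m))
v /= np.linalg.norm(v)
Y = 25.0 * np.outer(u, v) + rng.normal(size=(n, m))
u_hat, selected = sum_statistic_spca(Y, threshold=3.0)
```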

Dynamic Search – Optimizing the Game of Information Seeking

Title Dynamic Search – Optimizing the Game of Information Seeking
Authors Zhiwen Tang, Grace Hui Yang
Abstract This article presents the emerging topic of dynamic search (DS). To position dynamic search in a larger research landscape, the article discusses in detail its relationship to related research topics and disciplines. The article reviews approaches to modeling dynamics during information seeking, with an emphasis on Reinforcement Learning (RL)-enabled methods. Details are given for how different approaches are used to model interactions among the human user, the search system, and the environment. The paper ends with a review of evaluations of dynamic search systems.
Tasks
Published 2019-09-26
URL https://arxiv.org/abs/1909.12425v1
PDF https://arxiv.org/pdf/1909.12425v1.pdf
PWC https://paperswithcode.com/paper/dynamic-search-optimizing-the-game-of
Repo
Framework

NTP : A Neural Network Topology Profiler

Title NTP : A Neural Network Topology Profiler
Authors Raghavendra Bhat, Pravin Chandran, Juby Jose, Viswanath Dibbur, Prakash Sirra Ajith
Abstract Performance of end-to-end neural networks on a given hardware platform is a function of their compute and memory signature, which, in turn, is governed by a wide range of parameters such as topology size, primitives used, framework used, batching strategy, latency requirements, precision, etc. Current benchmarking tools suffer from limitations such as a) being too granular, like DeepBench [1], b) mandating a working implementation that is framework-specific, hardware-architecture-specific, or both, or c) providing only high-level benchmark metrics. In this paper, we present NTP (Neural Net Topology Profiler), a sophisticated benchmarking framework to effectively identify the memory and compute signature of an end-to-end topology on multiple hardware architectures, without the need for an actual implementation. NTP is tightly integrated with hardware-specific benchmarking tools to enable exhaustive data collection and analysis. Using NTP, a deep learning researcher can quickly establish the baselines needed to understand the performance of an end-to-end neural network topology and make high-level architectural decisions. Further, integration of NTP with frameworks like TensorFlow, PyTorch, and Intel OpenVINO allows for performance comparison along several axes: a) comparison of different frameworks on given hardware, b) comparison of different hardware using a given framework, and c) comparison across heterogeneous hardware configurations for a given framework. These capabilities empower a researcher to effortlessly make the architectural decisions needed for achieving optimized performance on any hardware platform. The paper documents the architectural approach of NTP and demonstrates the capabilities of the tool by benchmarking Mozilla DeepSpeech, a popular speech recognition topology.
Tasks Quantization, Speech Recognition
Published 2019-05-22
URL https://arxiv.org/abs/1905.09063v2
PDF https://arxiv.org/pdf/1905.09063v2.pdf
PWC https://paperswithcode.com/paper/ntp-a-neural-network-topology-profiler
Repo
Framework

Modeling Drug-Disease Relations with Linguistic and Knowledge Graph Constraints

Title Modeling Drug-Disease Relations with Linguistic and Knowledge Graph Constraints
Authors Bruno Godefroy, Christopher Potts
Abstract FDA drug labels are rich sources of information about drugs and drug-disease relations, but their complexity makes them challenging texts to analyze in isolation. To overcome this, we situate these labels in two health knowledge graphs: one built from precise structured information about drugs and diseases, and another built entirely from a database of clinical narrative texts using simple heuristic methods. We show that Probabilistic Soft Logic models defined over these graphs are superior to text-only and relation-only variants, and that the clinical narratives graph delivers exceptional results with little manual effort. Finally, we release a new dataset of drug labels with annotations for five distinct drug-disease relations.
Tasks Knowledge Graphs
Published 2019-03-31
URL http://arxiv.org/abs/1904.00313v1
PDF http://arxiv.org/pdf/1904.00313v1.pdf
PWC https://paperswithcode.com/paper/modeling-drug-disease-relations-with
Repo
Framework