July 27, 2019

3014 words 15 mins read

Paper Group ANR 682

Asymptotic Confidence Regions for High-dimensional Structured Sparsity. Open-Category Classification by Adversarial Sample Generation. Meta learning Framework for Automated Driving. Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory. CT Image Reconstruction in a Low Dimensional Manifold. Map-based Multi-Policy Reinforcem …

Asymptotic Confidence Regions for High-dimensional Structured Sparsity

Title Asymptotic Confidence Regions for High-dimensional Structured Sparsity
Authors Benjamin Stucky, Sara van de Geer
Abstract In the setting of high-dimensional linear regression models, we propose two frameworks for constructing pointwise and group confidence sets for penalized estimators which incorporate prior knowledge about the organization of the non-zero coefficients. This is done by desparsifying the estimator as in van de Geer et al. [18] and van de Geer and Stucky [17], and then using an appropriate estimator for the precision matrix $\Theta$. In order to estimate the precision matrix, a corresponding structured matrix-norm penalty has to be introduced. After normalization, the result is an asymptotic pivot. The asymptotic behavior is studied, and simulations are added to study the differences between the two schemes.
Tasks
Published 2017-06-28
URL http://arxiv.org/abs/1706.09231v1
PDF http://arxiv.org/pdf/1706.09231v1.pdf
PWC https://paperswithcode.com/paper/asymptotic-confidence-regions-for-high
Repo
Framework
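
The de-sparsified construction the abstract refers to can be summarized compactly. A minimal sketch in the standard de-sparsified lasso form of van de Geer et al. [18]; the structured-sparsity variant studied in the paper refines this, so the display below is a reminder of the general recipe rather than the paper's exact estimator:

```latex
% \hat\beta: penalized estimator, \hat\Theta: estimate of the precision
% matrix \Theta = \Sigma^{-1}, \hat\Sigma = X^\top X / n.
\hat{b} \;=\; \hat{\beta} \;+\; \tfrac{1}{n}\,\hat{\Theta}\,X^{\top}\bigl(Y - X\hat{\beta}\bigr),
\qquad
\sqrt{n}\,\bigl(\hat{b} - \beta^{0}\bigr) \;\approx\; \mathcal{N}\bigl(0,\;\sigma^{2}\,\hat{\Theta}\,\hat{\Sigma}\,\hat{\Theta}^{\top}\bigr).
```

Normalizing each coordinate (or group) of $\hat{b} - \beta^{0}$ by its estimated standard deviation is what yields the asymptotic pivot mentioned in the abstract.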

Open-Category Classification by Adversarial Sample Generation

Title Open-Category Classification by Adversarial Sample Generation
Authors Yang Yu, Wei-Yang Qu, Nan Li, Zimin Guo
Abstract In real-world classification tasks, it is difficult to collect training samples from all possible categories of the environment. Therefore, when an instance of an unseen class appears in the prediction stage, a robust classifier should be able to tell that it is from an unseen class, instead of classifying it as any known category. In this paper, adopting the idea of adversarial learning, we propose the ASG framework for open-category classification. ASG generates positive and negative samples of seen categories in an unsupervised manner via an adversarial learning strategy. With the generated samples, ASG then learns to tell seen from unseen in a supervised manner. Experiments performed on several datasets show the effectiveness of ASG.
Tasks
Published 2017-05-24
URL http://arxiv.org/abs/1705.08722v2
PDF http://arxiv.org/pdf/1705.08722v2.pdf
PWC https://paperswithcode.com/paper/open-category-classification-by-adversarial
Repo
Framework
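
A highly simplified sketch of the two-stage idea in the abstract: first generate negative samples that lie near, but off, the seen-data distribution, then train a seen-vs-unseen detector on real and generated samples in a supervised manner. The Gaussian-perturbation generator, function names, and hyperparameters below are illustrative assumptions standing in for ASG's adversarial generation, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def generate_negatives(X_seen, scale=0.5, n_per_sample=3, rng=None):
    """Crude stand-in for ASG's adversarial generation: perturb seen samples
    so the negatives lie near, but off, the seen-data manifold."""
    rng = rng if rng is not None else np.random.default_rng(0)
    noise = rng.normal(0.0, scale, size=(len(X_seen) * n_per_sample, X_seen.shape[1]))
    return np.repeat(X_seen, n_per_sample, axis=0) + noise

def fit_open_category_detector(X_seen):
    """Train a 'seen vs. generated-unseen' detector in a supervised manner."""
    X_neg = generate_negatives(X_seen)
    X = np.vstack([X_seen, X_neg])
    y = np.concatenate([np.ones(len(X_seen)), np.zeros(len(X_neg))])
    return RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Usage: if detector.predict_proba(x)[:, 1] falls below a threshold,
# the instance is treated as belonging to an unseen class.
```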

Meta learning Framework for Automated Driving

Title Meta learning Framework for Automated Driving
Authors Ahmad El Sallab, Mahmoud Saeed, Omar Abdel Tawab, Mohammed Abdou
Abstract The success of automated driving deployment depends heavily on the ability to develop an efficient and safe driving policy. The problem is well formulated under the framework of optimal control as a cost optimization problem. Model-based solutions using traditional planning are efficient, but require knowledge of the environment model. On the other hand, model-free solutions suffer from sample inefficiency and require too many interactions with the environment, which is infeasible in practice. Methods under the Reinforcement Learning framework usually require the notion of a reward function, which is not available in the real world. Imitation learning helps improve sample efficiency by introducing prior knowledge obtained from the demonstrated behavior, at the risk of exact behavior cloning without generalizing to unseen environments. In this paper we propose a meta-learning framework, based on dataset aggregation, to improve the generalization of imitation learning algorithms. Under the proposed framework, we propose MetaDAgger, a novel algorithm which tackles the generalization issues in traditional imitation learning. We use The Open Racing Car Simulator (TORCS) to test our algorithm. Results on unseen test tracks show significant improvement over traditional imitation learning algorithms, improving learning time and sample efficiency at the same time. The results are also supported by visualization of the learnt features to demonstrate the generalization of the captured details.
Tasks Imitation Learning, Meta-Learning
Published 2017-06-11
URL http://arxiv.org/abs/1706.04038v1
PDF http://arxiv.org/pdf/1706.04038v1.pdf
PWC https://paperswithcode.com/paper/meta-learning-framework-for-automated-driving
Repo
Framework
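
MetaDAgger builds on dataset aggregation (DAgger). The sketch below is a generic DAgger-style loop run over several training environments (e.g., TORCS tracks); the `envs`, `expert_action`, and `train_policy` interfaces are placeholders, not the paper's code.

```python
def dagger_across_envs(envs, expert_action, train_policy, n_iters=5, horizon=1000):
    """Generic DAgger-style aggregation: roll out the current policy, relabel the
    visited states with expert actions, aggregate, and retrain. Aggregating data
    across several environments is what encourages the learned policy to
    generalize beyond any single training environment."""
    dataset = []                      # aggregated (state, expert_action) pairs
    policy = None
    for _ in range(n_iters):
        for env in envs:
            state = env.reset()
            for _ in range(horizon):
                # First iteration: follow the expert; afterwards: follow the learner.
                action = expert_action(state) if policy is None else policy(state)
                dataset.append((state, expert_action(state)))   # relabel with expert
                state, done = env.step(action)                   # placeholder interface
                if done:
                    break
        policy = train_policy(dataset)   # supervised learning on the aggregate
    return policy
```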

Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory

Title Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory
Authors Peter Richtárik, Martin Takáč
Abstract We develop a family of reformulations of an arbitrary consistent linear system into a stochastic problem. The reformulations are governed by two user-defined parameters: a positive definite matrix defining a norm, and an arbitrary discrete or continuous distribution over random matrices. Our reformulation has several equivalent interpretations, allowing for researchers from various communities to leverage their domain specific insights. In particular, our reformulation can be equivalently seen as a stochastic optimization problem, stochastic linear system, stochastic fixed point problem and a probabilistic intersection problem. We prove sufficient, and necessary and sufficient conditions for the reformulation to be exact. Further, we propose and analyze three stochastic algorithms for solving the reformulated problem—basic, parallel and accelerated methods—with global linear convergence rates. The rates can be interpreted as condition numbers of a matrix which depends on the system matrix and on the reformulation parameters. This gives rise to a new phenomenon which we call stochastic preconditioning, and which refers to the problem of finding parameters (matrix and distribution) leading to a sufficiently small condition number. Our basic method can be equivalently interpreted as stochastic gradient descent, stochastic Newton method, stochastic proximal point method, stochastic fixed point method, and stochastic projection method, with fixed stepsize (relaxation parameter), applied to the reformulations.
Tasks Stochastic Optimization
Published 2017-06-04
URL https://arxiv.org/abs/1706.01108v4
PDF https://arxiv.org/pdf/1706.01108v4.pdf
PWC https://paperswithcode.com/paper/stochastic-reformulations-of-linear-systems
Repo
Framework
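
One concrete instance of the basic method: if the norm-defining matrix is the identity and the random matrix is a single coordinate vector (one sampled row of $A$ per step), the iteration reduces to randomized Kaczmarz. A self-contained sketch under that special-case assumption:

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=10_000, seed=0):
    """Randomized Kaczmarz: a special case of the paper's basic method, obtained
    by sampling S as a single coordinate vector (one row of A per step) and
    taking the norm-defining matrix B to be the identity."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = (A ** 2).sum(axis=1)
    probs = row_norms / row_norms.sum()       # sample rows proportionally to their norm
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]   # project onto the i-th hyperplane
    return x

# Usage: for a consistent system A @ x_true = b, the iterates converge
# linearly in expectation to a solution of the system.
```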

CT Image Reconstruction in a Low Dimensional Manifold

Title CT Image Reconstruction in a Low Dimensional Manifold
Authors Wenxiang Cong, Ge Wang, Qingsong Yang, Jiang Hsieh, Jia Li, Rongjie Lai
Abstract Regularization methods are commonly used in X-ray CT image reconstruction. Different regularization methods reflect the characterization of different prior knowledge of images. In a recent work, a new regularization method called the low-dimensional manifold model (LDMM) was investigated to characterize the low-dimensional patch-manifold structure of natural images, where the manifold dimensionality characterizes structural information of an image. In this paper, we propose a CT image reconstruction method based on prior knowledge of the low-dimensional manifold of CT images. Using clinical raw projection data from GE, we compare CT image reconstruction among the proposed method, the simultaneous algebraic reconstruction technique (SART) with total variation (TV) regularization, and the filtered back-projection (FBP) method. Results show that the proposed method can successfully recover structural details of an imaging object, and achieves higher spatial and contrast resolution in the reconstructed image than the FBP and SART-TV counterparts.
Tasks Image Reconstruction
Published 2017-04-16
URL http://arxiv.org/abs/1704.04825v1
PDF http://arxiv.org/pdf/1704.04825v1.pdf
PWC https://paperswithcode.com/paper/ct-image-reconstruction-in-a-low-dimensional
Repo
Framework
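
The comparison in the abstract fits the standard variational reconstruction template below, with different choices of regularizer $R(u)$: total variation for SART-TV, and the dimension of the patch manifold for the proposed LDMM-based method. This is a generic sketch of the objective, not the paper's exact formulation.

```latex
% u: image, A: CT system (projection) matrix, p: measured projection data
\hat{u} \;=\; \arg\min_{u}\; \tfrac{1}{2}\,\|A u - p\|_{2}^{2} \;+\; \lambda\, R(u),
\qquad
R_{\mathrm{TV}}(u) = \|\nabla u\|_{1},
\quad
R_{\mathrm{LDMM}}(u) = \dim\!\bigl(\mathcal{M}(\mathcal{P}(u))\bigr),
```

where $\mathcal{P}(u)$ is the set of patches extracted from $u$ and $\mathcal{M}(\mathcal{P}(u))$ is the manifold those patches sample.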

Map-based Multi-Policy Reinforcement Learning: Enhancing Adaptability of Robots by Deep Reinforcement Learning

Title Map-based Multi-Policy Reinforcement Learning: Enhancing Adaptability of Robots by Deep Reinforcement Learning
Authors Ayaka Kume, Eiichi Matsumoto, Kuniyuki Takahashi, Wilson Ko, Jethro Tan
Abstract In order for robots to perform mission-critical tasks, it is essential that they are able to quickly adapt to changes in their environment as well as to injuries or other bodily changes. Deep reinforcement learning has been shown to be successful in training robot control policies for operation in complex environments. However, existing methods typically employ only a single policy. This can limit adaptability, since a large environmental modification might require a completely different behavior compared to the learning environment. To solve this problem, we propose Map-based Multi-Policy Reinforcement Learning (MMPRL), which aims to search for and store multiple policies that encode different behavioral features while maximizing the expected reward, in advance of the environment change. Thanks to these policies, which are stored in a multi-dimensional discrete map according to their behavioral features, adaptation can be performed within a reasonable time without retraining the robot. An appropriate pre-trained policy can be recalled from the map using Bayesian optimization. Our experiments show that MMPRL enables robots to quickly adapt to large changes without requiring any prior knowledge of the type of injuries that could occur. A highlight of the learned behaviors can be found here: https://youtu.be/QwInbilXNOE .
Tasks
Published 2017-10-17
URL http://arxiv.org/abs/1710.06117v2
PDF http://arxiv.org/pdf/1710.06117v2.pdf
PWC https://paperswithcode.com/paper/map-based-multi-policy-reinforcement-learning
Repo
Framework
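
A toy sketch of the map-storage idea in the abstract: policies are binned by a behavior descriptor (in the spirit of MAP-Elites) and, after a change in the environment, a stored policy is recalled by re-evaluating candidates. The descriptor layout and the exhaustive recall loop below are illustrative assumptions; the paper recalls policies with Bayesian optimization rather than the full scan shown here.

```python
class PolicyMap:
    """Multi-dimensional discrete map: each cell keeps the best policy found
    for one behavior descriptor (descriptor components assumed to lie in [0, 1])."""

    def __init__(self, bins_per_dim=(10, 10)):
        self.bins = bins_per_dim
        self.cells = {}          # descriptor cell -> (policy, training_reward)

    def _cell(self, descriptor):
        return tuple(min(int(d * b), b - 1) for d, b in zip(descriptor, self.bins))

    def add(self, policy, descriptor, reward):
        """Store the policy if its cell is empty or it beats the incumbent."""
        key = self._cell(descriptor)
        if key not in self.cells or reward > self.cells[key][1]:
            self.cells[key] = (policy, reward)

    def recall_best(self, evaluate):
        """Exhaustive stand-in for the Bayesian-optimization recall step:
        re-evaluate stored policies in the changed environment and return the best."""
        scored = [(evaluate(policy), policy) for policy, _ in self.cells.values()]
        return max(scored, key=lambda t: t[0])[1]
```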

Nonlinear Supervised Dimensionality Reduction via Smooth Regular Embeddings

Title Nonlinear Supervised Dimensionality Reduction via Smooth Regular Embeddings
Authors Cem Ornek, Elif Vural
Abstract The recovery of the intrinsic geometric structures of data collections is an important problem in data analysis. Supervised extensions of several manifold learning approaches have been proposed in recent years. However, existing methods primarily focus on the embedding of the training data, and the generalization of the embedding to initially unseen test data is rather ignored. In this work, we build on recent theoretical results on the generalization performance of supervised manifold learning algorithms. Motivated by these performance bounds, we propose a supervised manifold learning method that computes a nonlinear embedding while constructing a smooth and regular interpolation function that extends the embedding to the whole data space in order to achieve satisfactory generalization. The embedding and the interpolator are jointly learnt such that the Lipschitz regularity of the interpolator is imposed while ensuring the separation between different classes. Experimental results on several image data sets show that the proposed method outperforms traditional classifiers and the compared supervised dimensionality reduction algorithms in terms of classification accuracy in most settings.
Tasks Dimensionality Reduction
Published 2017-10-19
URL http://arxiv.org/abs/1710.07120v2
PDF http://arxiv.org/pdf/1710.07120v2.pdf
PWC https://paperswithcode.com/paper/nonlinear-supervised-dimensionality-reduction
Repo
Framework

Count-Based Exploration with Neural Density Models

Title Count-Based Exploration with Neural Density Models
Authors Georg Ostrovski, Marc G. Bellemare, Aaron van den Oord, Remi Munos
Abstract Bellemare et al. (2016) introduced the notion of a pseudo-count, derived from a density model, to generalize count-based exploration to non-tabular reinforcement learning. This pseudo-count was used to generate an exploration bonus for a DQN agent and combined with a mixed Monte Carlo update was sufficient to achieve state of the art on the Atari 2600 game Montezuma’s Revenge. We consider two questions left open by their work: First, how important is the quality of the density model for exploration? Second, what role does the Monte Carlo update play in exploration? We answer the first question by demonstrating the use of PixelCNN, an advanced neural density model for images, to supply a pseudo-count. In particular, we examine the intrinsic difficulties in adapting Bellemare et al.‘s approach when assumptions about the model are violated. The result is a more practical and general algorithm requiring no special apparatus. We combine PixelCNN pseudo-counts with different agent architectures to dramatically improve the state of the art on several hard Atari games. One surprising finding is that the mixed Monte Carlo update is a powerful facilitator of exploration in the sparsest of settings, including Montezuma’s Revenge.
Tasks Atari Games, Montezuma’s Revenge
Published 2017-03-03
URL http://arxiv.org/abs/1703.01310v2
PDF http://arxiv.org/pdf/1703.01310v2.pdf
PWC https://paperswithcode.com/paper/count-based-exploration-with-neural-density
Repo
Framework
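
The pseudo-count the abstract builds on (Bellemare et al., 2016) is derived from how much the density model's probability of a state rises after observing it once more. Writing $\rho_{n}(x)$ for the model's probability of $x$ after $n$ observations and $\rho'_{n}(x)$ for its probability after one additional observation of $x$:

```latex
\hat{N}_{n}(x) \;=\; \frac{\rho_{n}(x)\,\bigl(1-\rho'_{n}(x)\bigr)}{\rho'_{n}(x)-\rho_{n}(x)},
\qquad
r^{+}_{n}(x) \;=\; \beta\,\bigl(\hat{N}_{n}(x) + \epsilon\bigr)^{-1/2},
```

where $r^{+}$ is the intrinsic exploration bonus added to the environment reward and $\epsilon$ is a small constant for numerical stability; in this paper the density model $\rho$ is a PixelCNN over frames.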

Multilayer Spectral Graph Clustering via Convex Layer Aggregation: Theory and Algorithms

Title Multilayer Spectral Graph Clustering via Convex Layer Aggregation: Theory and Algorithms
Authors Pin-Yu Chen, Alfred O. Hero
Abstract Multilayer graphs are commonly used for representing different relations between entities and handling heterogeneous data processing tasks. Non-standard multilayer graph clustering methods are needed for assigning clusters to a common multilayer node set and for combining information from each layer. This paper presents a multilayer spectral graph clustering (SGC) framework that performs convex layer aggregation. Under a multilayer signal plus noise model, we provide a phase transition analysis of clustering reliability. Moreover, we use the phase transition criterion to propose a multilayer iterative model order selection algorithm (MIMOSA) for multilayer SGC, which features automated cluster assignment and layer weight adaptation, and provides statistical clustering reliability guarantees. Numerical simulations on synthetic multilayer graphs verify the phase transition analysis, and experiments on real-world multilayer graphs show that MIMOSA is competitive or better than other clustering methods.
Tasks Graph Clustering, Spectral Graph Clustering
Published 2017-08-08
URL http://arxiv.org/abs/1708.02620v1
PDF http://arxiv.org/pdf/1708.02620v1.pdf
PWC https://paperswithcode.com/paper/multilayer-spectral-graph-clustering-via-1
Repo
Framework
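
A minimal sketch of the computational core named in the title: convexly aggregate the layer adjacency matrices and run standard spectral clustering on the aggregated graph. The layer weights here are supplied by hand; MIMOSA's automated weight adaptation and model-order selection are not reproduced.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def multilayer_spectral_clustering(adjacencies, weights, k):
    """adjacencies: list of symmetric (n x n) layer adjacency matrices.
    weights: convex-combination coefficients (nonnegative, summing to one).
    k: number of clusters."""
    W = sum(w * A for w, A in zip(weights, adjacencies))   # convex layer aggregation
    d = W.sum(axis=1)
    L = np.diag(d) - W                                     # unnormalized graph Laplacian
    _, vecs = eigh(L)                                      # eigenvectors, ascending eigenvalues
    embedding = vecs[:, 1:k]                               # drop the trivial constant eigenvector
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embedding)
```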

Finding Modes by Probabilistic Hypergraphs Shifting

Title Finding Modes by Probabilistic Hypergraphs Shifting
Authors Yang Wang, Lin Wu
Abstract In this paper, we develop a novel paradigm, namely hypergraph shift, to find robust graph modes by a probabilistic voting strategy; these modes are semantically sound besides satisfying the self-cohesiveness requirement in forming graph modes. Unlike existing techniques that seek graph modes by shifting vertices based on pair-wise edges (i.e., an edge with two ends), our paradigm is based on shifting high-order edges (hyperedges) to deliver graph modes. Specifically, we convert the problem of seeking graph modes into the problem of seeking maximizers of a novel objective function, with the aim of generating good graph modes based on sifting edges in hypergraphs. As a result, the generated graph modes based on dense subhypergraphs may more accurately capture the object semantics besides the self-cohesiveness requirement. We also formally prove that our technique always converges. Extensive empirical studies on clustering and graph matching, using synthetic and real-world data sets, demonstrate that our techniques significantly outperform existing techniques.
Tasks Graph Matching
Published 2017-04-12
URL http://arxiv.org/abs/1704.03612v1
PDF http://arxiv.org/pdf/1704.03612v1.pdf
PWC https://paperswithcode.com/paper/finding-modes-by-probabilistic-hypergraphs
Repo
Framework

Dual Recovery Network with Online Compensation for Image Super-Resolution

Title Dual Recovery Network with Online Compensation for Image Super-Resolution
Authors Sifeng Xia, Wenhan Yang, Jiaying Liu, Zongming Guo
Abstract Image super-resolution (SR) methods essentially lead to a loss of some high-frequency (HF) information when predicting high-resolution (HR) images from low-resolution (LR) images without using external references. To address this issue, we additionally utilize online retrieved data to facilitate image SR in a unified deep framework. A novel dual high-frequency recovery network (DHN) is proposed to predict an HR image with three parts: an LR image, an internal inferred HF (IHF) map (HF missing part inferred solely from the LR image) and an external extracted HF (EHF) map. In particular, we infer the HF information based on both the LR image and similar HR references which are retrieved online. For the EHF map, we align the references with affine transformation and then in the aligned references, part of HF signals are extracted by the proposed DHN to compensate for the HF loss. Extensive experimental results demonstrate that our DHN achieves notably better performance than state-of-the-art SR methods.
Tasks Image Super-Resolution, Super-Resolution
Published 2017-01-20
URL http://arxiv.org/abs/1701.05652v3
PDF http://arxiv.org/pdf/1701.05652v3.pdf
PWC https://paperswithcode.com/paper/dual-recovery-network-with-online
Repo
Framework

Sublinear Time Low-Rank Approximation of Positive Semidefinite Matrices

Title Sublinear Time Low-Rank Approximation of Positive Semidefinite Matrices
Authors Cameron Musco, David P. Woodruff
Abstract We show how to compute a relative-error low-rank approximation to any positive semidefinite (PSD) matrix in sublinear time, i.e., for any $n \times n$ PSD matrix $A$, in $\tilde O(n \cdot \mathrm{poly}(k/\epsilon))$ time we output a rank-$k$ matrix $B$, in factored form, for which $\|A-B\|_F^2 \leq (1+\epsilon)\|A-A_k\|_F^2$, where $A_k$ is the best rank-$k$ approximation to $A$. When $k$ and $1/\epsilon$ are not too large compared to the sparsity of $A$, our algorithm does not need to read all entries of the matrix. Hence, we significantly improve upon previous $\mathrm{nnz}(A)$ time algorithms based on oblivious subspace embeddings, and bypass an $\mathrm{nnz}(A)$ time lower bound for general matrices (where $\mathrm{nnz}(A)$ denotes the number of non-zero entries in the matrix). We prove time lower bounds for low-rank approximation of PSD matrices, showing that our algorithm is close to optimal. Finally, we extend our techniques to give sublinear time algorithms for low-rank approximation of $A$ in the (often stronger) spectral norm metric $\|A-B\|_2^2$ and for ridge regression on PSD matrices.
Tasks
Published 2017-04-11
URL http://arxiv.org/abs/1704.03371v3
PDF http://arxiv.org/pdf/1704.03371v3.pdf
PWC https://paperswithcode.com/paper/sublinear-time-low-rank-approximation-of-1
Repo
Framework
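
The abstract's sublinear-time guarantee exploits PSD structure to avoid reading the whole matrix. The simplest related construction with the same flavor is the Nyström column-sampling approximation sketched below; this is a simplified relative of the paper's algorithm, not the algorithm itself, and it lacks the paper's adaptive sampling and relative-error guarantees.

```python
import numpy as np

def nystrom_psd_lowrank(A, c, seed=0):
    """Nystrom approximation of a PSD matrix A from c sampled columns:
    A ~ C @ W_pinv @ C.T, where C = A[:, S] and W = A[np.ix_(S, S)].
    Only about n*c entries of A are read, instead of all n^2."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    S = rng.choice(n, size=c, replace=False)
    C = A[:, S]                      # n x c block of sampled columns
    W = A[np.ix_(S, S)]              # c x c intersection block
    return C, np.linalg.pinv(W)      # factored form: A_approx = C @ W_pinv @ C.T

# Usage: A_approx = C @ W_pinv @ C.T has rank at most c.
```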

Procedural Modeling and Physically Based Rendering for Synthetic Data Generation in Automotive Applications

Title Procedural Modeling and Physically Based Rendering for Synthetic Data Generation in Automotive Applications
Authors Apostolia Tsirikoglou, Joel Kronander, Magnus Wrenninge, Jonas Unger
Abstract We present an overview and evaluation of a new, systematic approach for generation of highly realistic, annotated synthetic data for training of deep neural networks in computer vision tasks. The main contribution is a procedural world modeling approach enabling high variability coupled with physically accurate image synthesis, and is a departure from the hand-modeled virtual worlds and approximate image synthesis methods used in real-time applications. The benefits of our approach include flexible, physically accurate and scalable image synthesis, implicit wide coverage of classes and features, and complete data introspection for annotations, which all contribute to quality and cost efficiency. To evaluate our approach and the efficacy of the resulting data, we use semantic segmentation for autonomous vehicles and robotic navigation as the main application, and we train multiple deep learning architectures using synthetic data with and without fine-tuning on organic (i.e., real-world) data. The evaluation shows that our approach improves the neural network's performance and that even modest implementation efforts produce state-of-the-art results.
Tasks Autonomous Vehicles, Image Generation, Semantic Segmentation, Synthetic Data Generation
Published 2017-10-17
URL http://arxiv.org/abs/1710.06270v2
PDF http://arxiv.org/pdf/1710.06270v2.pdf
PWC https://paperswithcode.com/paper/procedural-modeling-and-physically-based
Repo
Framework

Improved Bayesian Compression

Title Improved Bayesian Compression
Authors Marco Federici, Karen Ullrich, Max Welling
Abstract Compression of neural networks (NNs) has become a highly studied topic in recent years. The main reason for this is the demand for industrial-scale usage of NNs, such as deploying them on mobile devices, storing them efficiently, transmitting them via band-limited channels and, most importantly, doing inference at scale. In this work, we propose to join the Soft Weight-Sharing and Variational Dropout approaches, which individually show strong results, to define a new state of the art in model compression.
Tasks Model Compression
Published 2017-11-17
URL http://arxiv.org/abs/1711.06494v2
PDF http://arxiv.org/pdf/1711.06494v2.pdf
PWC https://paperswithcode.com/paper/improved-bayesian-compression
Repo
Framework
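
The two ingredients being joined have standard forms: soft weight sharing places a learned mixture-of-Gaussians prior over the network weights, and variational dropout training minimizes a variational objective with a per-weight posterior $q_{\phi}(w)$. A compact, generic reminder of the pieces (notation is illustrative, not the paper's):

```latex
% Soft weight sharing: mixture-of-Gaussians prior over weights w_i
p(w) \;=\; \prod_{i} \sum_{j=1}^{J} \pi_{j}\, \mathcal{N}\!\bigl(w_{i}\mid \mu_{j}, \sigma_{j}^{2}\bigr),
\qquad
% Variational objective (negative ELBO) minimized during compression-aware training
\mathcal{L}(\phi) \;=\; \mathbb{E}_{q_{\phi}(w)}\bigl[-\log p(\mathcal{D}\mid w)\bigr]
\;+\; \mathrm{KL}\bigl(q_{\phi}(w)\,\|\,p(w)\bigr).
```

Weights whose posterior collapses onto a shared mixture component (or whose learned dropout noise is large) can then be quantized or pruned, which is where the compression comes from.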

Decentralized High-Dimensional Bayesian Optimization with Factor Graphs

Title Decentralized High-Dimensional Bayesian Optimization with Factor Graphs
Authors Trong Nghia Hoang, Quang Minh Hoang, Ruofei Ouyang, Kian Hsiang Low
Abstract This paper presents a novel decentralized high-dimensional Bayesian optimization (DEC-HBO) algorithm that, in contrast to existing HBO algorithms, can exploit the interdependent effects of various input components on the output of the unknown objective function f for boosting the BO performance and still preserve scalability in the number of input dimensions without requiring prior knowledge or the existence of a low (effective) dimension of the input space. To realize this, we propose a sparse yet rich factor graph representation of f to be exploited for designing an acquisition function that can be similarly represented by a sparse factor graph and hence be efficiently optimized in a decentralized manner using distributed message passing. Despite richly characterizing the interdependent effects of the input components on the output of f with a factor graph, DEC-HBO can still guarantee no-regret performance asymptotically. Empirical evaluation on synthetic and real-world experiments (e.g., sparse Gaussian process model with 1811 hyperparameters) shows that DEC-HBO outperforms the state-of-the-art HBO algorithms.
Tasks
Published 2017-11-19
URL http://arxiv.org/abs/1711.07033v3
PDF http://arxiv.org/pdf/1711.07033v3.pdf
PWC https://paperswithcode.com/paper/decentralized-high-dimensional-bayesian
Repo
Framework
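
Reading from the abstract, the key structural assumption is that $f$ decomposes additively over a factor graph, so the acquisition function decomposes the same way and can be maximized by distributed message passing. A hedged sketch of that decomposition, written in a generic GP-UCB style that may differ from the exact acquisition used in the paper:

```latex
f(x) \;\approx\; \sum_{I \in \mathcal{F}} f_{I}(x_{I}),
\qquad
\alpha(x) \;=\; \sum_{I \in \mathcal{F}} \Bigl[\, \mu_{I}(x_{I}) \;+\; \beta^{1/2}\,\sigma_{I}(x_{I}) \,\Bigr],
```

where each factor $f_{I}$ depends only on a small subset $x_{I}$ of the inputs, $\mu_{I}$ and $\sigma_{I}$ are that factor's posterior mean and standard deviation, and the additive structure lets $\max_{x} \alpha(x)$ be computed with max-sum message passing on the factor graph rather than a global search over all input dimensions.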