January 28, 2020

3080 words 15 mins read

Paper Group ANR 974

Differentiable Sampling with Flexible Reference Word Order for Neural Machine Translation

Title Differentiable Sampling with Flexible Reference Word Order for Neural Machine Translation
Authors Weijia Xu, Xing Niu, Marine Carpuat
Abstract Despite some empirical success at correcting exposure bias in machine translation, scheduled sampling algorithms suffer from a major drawback: they incorrectly assume that words in the reference translations and in sampled sequences are aligned at each time step. Our new differentiable sampling algorithm addresses this issue by optimizing the probability that the reference can be aligned with the sampled output, based on a soft alignment predicted by the model itself. As a result, the output distribution at each time step is evaluated with respect to the whole predicted sequence. Experiments on IWSLT translation tasks show that our approach improves BLEU compared to maximum likelihood and scheduled sampling baselines. In addition, our approach is simpler to train, with no need for a sampling schedule, and yields models that achieve larger improvements with smaller beam sizes.
Tasks Machine Translation
Published 2019-04-04
URL https://arxiv.org/abs/1904.04079v2
PDF https://arxiv.org/pdf/1904.04079v2.pdf
PWC https://paperswithcode.com/paper/differentiable-sampling-with-flexible
Repo
Framework
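
The soft-alignment idea described in the abstract above can be sketched in a few lines of PyTorch. This is a hedged illustration only, not the authors' implementation: the scaled dot-product alignment over token embeddings, the tensor shapes, and the name soft_alignment_loss are assumptions.

```python
import torch.nn.functional as F

def soft_alignment_loss(step_logits, sampled_emb, reference_emb, reference_ids):
    """Score each reference word against every sampled position via a soft
    alignment, then evaluate the model's output distributions at those
    positions, so each step is judged against the whole predicted sequence.

    step_logits:   (T_out, V)  decoder logits at each sampled time step
    sampled_emb:   (T_out, d)  embeddings of the sampled output tokens
    reference_emb: (T_ref, d)  embeddings of the reference tokens
    reference_ids: (T_ref,)    reference token ids (LongTensor)
    """
    scale = sampled_emb.size(-1) ** 0.5
    align = F.softmax(reference_emb @ sampled_emb.T / scale, dim=-1)  # (T_ref, T_out)
    log_probs = F.log_softmax(step_logits, dim=-1)                    # (T_out, V)
    ref_log_probs = log_probs[:, reference_ids].T                     # (T_ref, T_out)
    # Expected negative log-likelihood of the reference under the soft alignment.
    return -(align * ref_log_probs).sum(dim=-1).mean()
```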

Dynamic Approach for Lane Detection using Google Street View and CNN

Title Dynamic Approach for Lane Detection using Google Street View and CNN
Authors Rama Sai Mamidala, Uday Uthkota, Mahamkali Bhavani Shankar, A. Joseph Antony, A. V. Narasimhadhan
Abstract Lane detection algorithms have been key enablers for fully-assistive and autonomous navigation systems. In this paper, a novel and pragmatic approach for lane detection is proposed using a convolutional neural network (CNN) model based on the SegNet encoder-decoder architecture. The encoder block renders low-resolution feature maps of the input and the decoder block provides pixel-wise classification from the feature maps. The proposed model has been trained on a 2,000-image dataset and tested against the corresponding ground truth provided in the dataset for evaluation. To enable real-time navigation, we extend the model’s predictions by interfacing it with the existing Google APIs, evaluating the metrics of the model while tuning the hyper-parameters. The novelty of this approach lies in the integration of the existing SegNet architecture with Google APIs. This interface makes it handy for assistive robotic systems. The observed results show that the proposed method is robust under challenging occlusion conditions due to the pre-processing involved, and gives superior performance compared to existing methods.
Tasks Autonomous Navigation, Lane Detection
Published 2019-09-02
URL https://arxiv.org/abs/1909.00798v1
PDF https://arxiv.org/pdf/1909.00798v1.pdf
PWC https://paperswithcode.com/paper/dynamic-approach-for-lane-detection-using
Repo
Framework
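
As a rough illustration of the encoder-decoder structure described above, the sketch below shows a minimal SegNet-style network in PyTorch. Layer sizes are assumptions, and bilinear upsampling stands in for SegNet's pooling-index unpooling; this is not the trained model from the paper.

```python
import torch.nn as nn

class MiniSegNet(nn.Module):
    """Minimal encoder-decoder for pixel-wise lane segmentation."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d(2),                                   # low-resolution feature maps
            nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(128, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, num_classes, 3, padding=1),          # per-pixel class scores
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```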

Extracting Novel Facts from Tables for Knowledge Graph Completion (Extended version)

Title Extracting Novel Facts from Tables for Knowledge Graph Completion (Extended version)
Authors Benno Kruit, Peter Boncz, Jacopo Urbani
Abstract We propose a new end-to-end method for extending a Knowledge Graph (KG) from tables. Existing techniques tend to interpret tables by focusing on information that is already in the KG, and therefore tend to extract many redundant facts. Our method aims to find more novel facts. We introduce a new technique for table interpretation based on a scalable graphical model using entity similarities. Our method further disambiguates cell values using KG embeddings as an additional ranking method. Other distinctive features are the lack of assumptions about the underlying KG and the ability to finely tune the precision/recall trade-off of the extracted facts. Our experiments show that our approach has a higher recall during the interpretation process than the state of the art, and is more resistant to the bias toward extracting mostly redundant facts, since it produces more novel extractions.
Tasks Knowledge Graph Completion
Published 2019-06-28
URL https://arxiv.org/abs/1907.00083v2
PDF https://arxiv.org/pdf/1907.00083v2.pdf
PWC https://paperswithcode.com/paper/extracting-novel-facts-from-tables-for
Repo
Framework
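
The abstract mentions using KG embeddings as an additional ranking signal when disambiguating cell values. A minimal sketch of what such a re-ranking step might look like is given below; the function name, the use of already-linked row/column entities as context, and the cosine scoring are all assumptions rather than the paper's actual model.

```python
import numpy as np

def rank_candidates_by_embedding(cell_candidates, context_entities, kg_emb):
    """Rank candidate entities for a table cell by their embedding similarity
    to the entities already linked in the same row/column.

    cell_candidates:  list of candidate entity ids for the cell
    context_entities: entity ids already linked in the surrounding table
    kg_emb:           dict mapping entity id -> embedding vector
    """
    ctx = np.mean([kg_emb[e] for e in context_entities], axis=0)

    def score(c):
        v = kg_emb[c]
        return float(v @ ctx / (np.linalg.norm(v) * np.linalg.norm(ctx) + 1e-9))

    return sorted(cell_candidates, key=score, reverse=True)
```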

Exploiting Model Sparsity in Adaptive MPC: A Compressed Sensing Viewpoint

Title Exploiting Model Sparsity in Adaptive MPC: A Compressed Sensing Viewpoint
Authors Monimoy Bujarbaruah, Charlott Vallon
Abstract This paper proposes an Adaptive Stochastic Model Predictive Control (MPC) strategy for stable linear time-invariant systems in the presence of bounded disturbances. We consider multi-input, multi-output systems that can be expressed by a Finite Impulse Response (FIR) model. The parameters of the FIR model corresponding to each output are unknown but assumed sparse. We estimate these parameters using the Recursive Least Squares algorithm. The estimates are then improved using set-based bounds obtained by solving the Basis Pursuit Denoising [1] problem. Our approach is able to handle hard input constraints and probabilistic output constraints. Using tools from distributionally robust optimization, we reformulate the probabilistic output constraints as tractable convex second-order cone constraints, which enables us to pose our MPC design task as a convex optimization problem. The efficacy of the developed algorithm is highlighted with a thorough numerical example, where we demonstrate a performance gain over the counterpart algorithm of [2], which does not utilize the sparsity information of the system impulse response parameters during control design.
Tasks Denoising
Published 2019-12-09
URL https://arxiv.org/abs/1912.04408v1
PDF https://arxiv.org/pdf/1912.04408v1.pdf
PWC https://paperswithcode.com/paper/exploiting-model-sparsity-in-adaptive-mpc-a
Repo
Framework
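
The Basis Pursuit Denoising step referenced in the abstract has a standard convex formulation; a minimal CVXPY sketch is shown below. The regressor and observation names and the scalar residual bound sigma are assumptions, and the paper's set-based bound construction around this step is not reproduced.

```python
import cvxpy as cp

def bpdn_fir_estimate(Phi, y, sigma):
    """Sparse FIR parameter estimate via Basis Pursuit Denoising:
    minimize ||theta||_1 subject to ||Phi @ theta - y||_2 <= sigma,
    where Phi stacks past inputs and y stacks measured outputs."""
    theta = cp.Variable(Phi.shape[1])
    problem = cp.Problem(cp.Minimize(cp.norm1(theta)),
                         [cp.norm(Phi @ theta - y, 2) <= sigma])
    problem.solve()
    return theta.value
```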

Wind Estimation Using Quadcopter Motion: A Machine Learning Approach

Title Wind Estimation Using Quadcopter Motion: A Machine Learning Approach
Authors Sam Allison, He Bai, Balaji Jayaraman
Abstract In this article, we study the well-known problem of wind estimation in atmospheric turbulence using small unmanned aerial systems (sUAS). We present a machine learning approach to wind velocity estimation based on quadcopter state measurements without a wind sensor. We accomplish this by training a long short-term memory (LSTM) neural network (NN) on roll and pitch angles and quadcopter position inputs with forcing wind velocities as the targets. The datasets are generated using a simulated quadcopter in turbulent wind fields. The trained neural network is deployed to estimate the turbulent winds as generated by the Dryden gust model as well as a realistic large eddy simulation (LES) of a near-neutral atmospheric boundary layer (ABL) over flat terrain. The resulting NN predictions are compared to a wind triangle approach that uses tilt angle as an approximation of airspeed. Results from this study indicate that the LSTM-NN based approach yields lower errors in both the mean and variance of the local wind field compared to the wind triangle approach. The work reported in this article demonstrates the potential of machine learning for sensor-less wind estimation and has strong implications for large-scale, low-altitude atmospheric sensing using sUAS for environmental and autonomous navigation applications.
Tasks Autonomous Navigation
Published 2019-07-11
URL https://arxiv.org/abs/1907.05720v1
PDF https://arxiv.org/pdf/1907.05720v1.pdf
PWC https://paperswithcode.com/paper/wind-estimation-using-quadcopter-motion-a
Repo
Framework
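
A minimal sketch of the kind of LSTM regressor the abstract describes, mapping sequences of quadcopter attitude and position to a wind velocity estimate. The feature count, hidden size, and the choice to predict only at the final time step are assumptions.

```python
import torch.nn as nn

class WindLSTM(nn.Module):
    """LSTM regressor: (roll, pitch, x, y, z) sequences -> wind velocity."""
    def __init__(self, n_features=5, hidden=64, n_outputs=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_outputs)

    def forward(self, x):                # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])     # wind estimate at the final step
```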

3D Surface Reconstruction from Voxel-based Lidar Data

Title 3D Surface Reconstruction from Voxel-based Lidar Data
Authors Luis Roldão, Raoul de Charette, Anne Verroust-Blondet
Abstract To achieve fully autonomous navigation, vehicles need to compute an accurate model of their immediate surroundings. In this paper, a 3D surface reconstruction algorithm from heterogeneous-density 3D data is presented. The proposed method is based on a TSDF voxel-based representation, where an adaptive neighborhood kernel sourced on a Gaussian confidence evaluation is introduced. This makes it possible to keep a good trade-off between the density of the reconstructed mesh and its accuracy. Experimental evaluations carried out on both synthetic (CARLA) and real (KITTI) 3D data show good performance compared to a state-of-the-art method used for surface reconstruction.
Tasks Autonomous Navigation
Published 2019-06-25
URL https://arxiv.org/abs/1906.10515v1
PDF https://arxiv.org/pdf/1906.10515v1.pdf
PWC https://paperswithcode.com/paper/3d-surface-reconstruction-from-voxel-based
Repo
Framework
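
As a loose illustration of a voxel distance field with Gaussian confidence weighting built from lidar points, here is a brute-force sketch. It uses an unsigned, truncated point-to-voxel distance for simplicity (a real TSDF stores the signed distance along the sensor ray), it does not reproduce the paper's adaptive neighborhood kernel, and all parameters are assumptions.

```python
import numpy as np

def gaussian_weighted_distance_field(points, grid_min, voxel_size, dims, trunc, sigma):
    """For every voxel centre, accumulate a truncated distance to the input
    points, weighted by a Gaussian confidence kernel."""
    axes = [np.arange(d) for d in dims]
    centres = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    centres = (centres + 0.5) * voxel_size + np.asarray(grid_min)
    field = np.full(len(centres), float(trunc))
    weight = np.zeros(len(centres))
    for p in points:
        d = np.linalg.norm(centres - p, axis=1)
        w = np.exp(-d ** 2 / (2 * sigma ** 2))       # Gaussian confidence
        upd = w > 1e-3                               # skip far-away voxels
        new = np.clip(d[upd], 0.0, trunc)            # truncated (unsigned) distance
        field[upd] = (weight[upd] * field[upd] + w[upd] * new) / (weight[upd] + w[upd])
        weight[upd] += w[upd]
    return field.reshape(dims), weight.reshape(dims)
```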

Latent Variables on Spheres for Autoencoders in High Dimensions

Title Latent Variables on Spheres for Autoencoders in High Dimensions
Authors Deli Zhao, Jiapeng Zhu, Bo Zhang
Abstract The Variational Auto-Encoder (VAE) has been widely applied as a fundamental generative model in machine learning. For complex samples like imagery objects or scenes, however, the VAE suffers from a dimensional dilemma between reconstruction precision, which needs high-dimensional latent codes, and probabilistic inference, which favors a low-dimensional latent space. By virtue of high-dimensional geometry, we propose a very simple algorithm, called the Spherical Auto-Encoder (SAE), completely different from existing VAEs, to address this issue. SAE is in essence the vanilla autoencoder with spherical normalization on the latent space. We analyze the unique characteristics of random variables on spheres in high dimensions and argue that random variables on spheres are agnostic to various prior distributions and data modes when the dimension is sufficiently high. Therefore, SAE can harness a high-dimensional latent space to improve the inference precision of latent codes while maintaining the property of stochastic sampling from priors. Experiments on sampling and inference validate our theoretical analysis and the superiority of SAE.
Tasks
Published 2019-12-21
URL https://arxiv.org/abs/1912.10233v2
PDF https://arxiv.org/pdf/1912.10233v2.pdf
PWC https://paperswithcode.com/paper/latent-variables-on-spheres-for-sampling-and-1
Repo
Framework
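
The central mechanism, spherical normalization of the latent code of an otherwise vanilla autoencoder, is simple enough to sketch directly. Layer sizes and the flattened-image input dimension are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SphericalAutoEncoder(nn.Module):
    """Vanilla autoencoder whose latent codes are projected onto the unit sphere."""
    def __init__(self, in_dim=784, latent_dim=512):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 1024), nn.ReLU(),
                                     nn.Linear(1024, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 1024), nn.ReLU(),
                                     nn.Linear(1024, in_dim))

    def forward(self, x):
        z = F.normalize(self.encoder(x), dim=-1)   # spherical normalization
        return self.decoder(z), z

# Sampling: a normalized Gaussian vector is uniform on the sphere, so
# z = F.normalize(torch.randn(16, 512), dim=-1) can be fed to model.decoder(z).
```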

Quantum tensor singular value decomposition with applications to recommendation systems

Title Quantum tensor singular value decomposition with applications to recommendation systems
Authors Xiaoqiang Wang, Lejia Gu, Joseph Heung-wing Joseph Lee, Guofeng Zhang
Abstract In this paper, we present a quantum singular value decomposition algorithm for third-order tensors inspired by the classical algorithm of tensor singular value decomposition (t-svd), and then extend it to order-$p$ tensors. It can be proved that the quantum version of the t-svd for a third-order tensor $\mathcal{A} \in \mathbb{R}^{N\times N \times N}$ achieves the complexity of $\mathcal{O}(N{\rm polylog}(N))$, an exponential speedup compared with its classical counterpart. As an application, we propose a quantum algorithm for recommendation systems which incorporates the contextual situation of users into the personalized recommendation. We provide recommendations varying with contexts by measuring the output quantum state corresponding to an approximation of this user’s preferences. This algorithm runs in expected time $\mathcal{O}(N{\rm polylog}(N){\rm poly}(k))$ if every frontal slice of the preference tensor has a good rank-$k$ approximation. Finally, we provide a quantum algorithm for tensor completion based on a different truncation method, which is shown to perform well in dynamic video completion.
Tasks Recommendation Systems
Published 2019-10-03
URL https://arxiv.org/abs/1910.01262v2
PDF https://arxiv.org/pdf/1910.01262v2.pdf
PWC https://paperswithcode.com/paper/quantum-tensor-singular-value-decomposition
Repo
Framework
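
For reference, the classical t-SVD that the quantum algorithm accelerates can be sketched in a few lines of NumPy (this is the classical baseline, not the quantum routine, and it assumes square frontal slices):

```python
import numpy as np

def classical_tsvd(A):
    """t-SVD of a third-order tensor A (N x N x N): FFT along the third mode,
    ordinary SVD of every frontal slice in the Fourier domain, inverse FFT."""
    Ahat = np.fft.fft(A, axis=2)
    U, S, V = np.zeros_like(Ahat), np.zeros_like(Ahat), np.zeros_like(Ahat)
    for k in range(A.shape[2]):
        u, s, vh = np.linalg.svd(Ahat[:, :, k])
        U[:, :, k], S[:, :, k], V[:, :, k] = u, np.diag(s), vh.conj().T
    return (np.fft.ifft(U, axis=2).real,
            np.fft.ifft(S, axis=2).real,
            np.fft.ifft(V, axis=2).real)
```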

eSLAM: An Energy-Efficient Accelerator for Real-Time ORB-SLAM on FPGA Platform

Title eSLAM: An Energy-Efficient Accelerator for Real-Time ORB-SLAM on FPGA Platform
Authors Runze Liu, Jianlei Yang, Yiran Chen, Weisheng Zhao
Abstract Simultaneous Localization and Mapping (SLAM) is a critical task for autonomous navigation. However, due to the computational complexity of SLAM algorithms, it is very difficult to achieve real-time implementation on low-power platforms. We propose an energy-efficient architecture for a real-time ORB (Oriented-FAST and Rotated-BRIEF) based visual SLAM system by accelerating the most time-consuming stages of feature extraction and matching on an FPGA platform. Moreover, the original ORB descriptor pattern is reformed in a rotationally symmetric manner, which is much more hardware-friendly. Optimizations including rescheduling and parallelizing are further utilized to improve the throughput and reduce the memory footprint. Compared with Intel i7 and ARM Cortex-A9 CPUs on the TUM dataset, our FPGA realization achieves up to 3X and 31X frame rate improvements, as well as up to 71X and 25X energy efficiency improvements, respectively.
Tasks Autonomous Navigation, Simultaneous Localization and Mapping
Published 2019-06-03
URL https://arxiv.org/abs/1906.05096v1
PDF https://arxiv.org/pdf/1906.05096v1.pdf
PWC https://paperswithcode.com/paper/eslam-an-energy-efficient-accelerator-for
Repo
Framework
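
For context, the feature extraction and matching stages that eSLAM moves into hardware correspond to the following software path using OpenCV's ORB (a reference sketch of the workload, not the FPGA design; the file names are placeholders):

```python
import cv2

orb = cv2.ORB_create(nfeatures=500)                          # Oriented-FAST + Rotated-BRIEF
img1 = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)      # placeholder frames
img2 = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)
kp1, des1 = orb.detectAndCompute(img1, None)                 # feature extraction
kp2, des2 = orb.detectAndCompute(img2, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)   # Hamming distance for binary descriptors
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
```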

You Are Here: Geolocation by Embedding Maps and Images

Title You Are Here: Geolocation by Embedding Maps and Images
Authors Obed Samano Abonce, Mengjie Zhou, Andrew Calway
Abstract We present a novel approach to geolocating images on a 2-D map based on learning a low-dimensional embedded space, which allows a comparison between an image captured at a location and local neighbourhoods of the map. The representation is not sufficiently discriminative to allow localisation from a single image, but when images are concatenated along a route, localisation converges quickly, with over 90% accuracy achieved for routes up to 200 m in length when using Google Street View and Open Street Map data. The approach generalises a previous fixed semantic-feature-based approach and achieves faster convergence and higher accuracy without the need to include turn information.
Tasks
Published 2019-11-20
URL https://arxiv.org/abs/1911.08797v1
PDF https://arxiv.org/pdf/1911.08797v1.pdf
PWC https://paperswithcode.com/paper/you-are-here-geolocation-by-embedding-maps
Repo
Framework
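
A very rough sketch of the route-matching idea from the abstract: score each embedded image against every embedded map neighbourhood and accumulate the evidence along the route. The cosine scoring, the simple running sum, and the tensor shapes are assumptions; the paper's actual route model is richer than this.

```python
import torch.nn.functional as F

def route_localisation_scores(image_embs, tile_embs):
    """image_embs: (route_len, d) embeddings of images captured along a route
    tile_embs:  (num_tiles, d) embeddings of local map neighbourhoods
    Returns running per-tile scores of shape (route_len, num_tiles)."""
    sims = F.cosine_similarity(image_embs.unsqueeze(1), tile_embs.unsqueeze(0), dim=-1)
    return sims.cumsum(dim=0)   # localisation sharpens as more images arrive
```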

A Nonlinear Acceleration Method for Iterative Algorithms

Title A Nonlinear Acceleration Method for Iterative Algorithms
Authors Mahdi Shamsi, Mahmoud Ghandi, Farokh Marvasti
Abstract Iterative methods have led to a better understanding of, and solutions to, problems such as missing sampling, deconvolution, inverse systems, and impulsive and salt-and-pepper noise removal. However, challenges such as the speed of convergence and/or the accuracy of the answer still remain. In order to improve existing iterative algorithms, a non-linear method is discussed in this paper. The method is analyzed from different aspects, including its convergence and its ability to accelerate recursive algorithms. We show that this method is capable of improving the Iterative Method (IM) as a non-uniform sampling reconstruction algorithm and some iterative sparse recovery algorithms such as Iterative Reweighted Least Squares (IRLS), the Iterative Method with Adaptive Thresholding (IMAT), Smoothed l0 (SL0) and the Alternating Direction Method of Multipliers (ADMM) for solving the LASSO problem family (including Lasso itself, Lasso-LSQR and group-Lasso). It is also capable of both accelerating and stabilizing the well-known Chebyshev Acceleration (CA) method. Furthermore, the proposed algorithm can extend the stability range by reducing the sensitivity of iterative algorithms to changes in the adaptation rate.
Tasks Salt-And-Pepper Noise Removal
Published 2019-06-04
URL https://arxiv.org/abs/1906.01595v1
PDF https://arxiv.org/pdf/1906.01595v1.pdf
PWC https://paperswithcode.com/paper/a-nonlinear-acceleration-method-for-iterative
Repo
Framework
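
The abstract does not spell out the paper's nonlinear method in closed form, so as a clearly labelled stand-in the sketch below uses Anderson acceleration, a standard (and different) technique, to illustrate the kind of extrapolation such accelerators perform on a fixed-point iteration x = g(x).

```python
import numpy as np

def anderson(g, x0, m=5, iters=100, tol=1e-10):
    """Accelerate x <- g(x) by combining the last few iterates with
    least-squares mixing weights; x0 is a 1-D array."""
    xs = [np.asarray(x0, dtype=float)]
    gs = [np.asarray(g(xs[0]), dtype=float)]
    for _ in range(iters):
        fs = [gv - xv for xv, gv in zip(xs, gs)]          # residuals g(x) - x
        if len(xs) == 1:
            x_next = gs[-1]                               # plain fixed-point step
        else:
            mk = min(m, len(xs) - 1)
            dF = np.column_stack([fs[-1] - fs[-2 - j] for j in range(mk)])
            dG = np.column_stack([gs[-1] - gs[-2 - j] for j in range(mk)])
            gamma, *_ = np.linalg.lstsq(dF, fs[-1], rcond=None)
            x_next = gs[-1] - dG @ gamma                  # extrapolated iterate
        if np.linalg.norm(x_next - xs[-1]) < tol:
            return x_next
        xs.append(x_next)
        gs.append(np.asarray(g(x_next), dtype=float))
        xs, gs = xs[-(m + 1):], gs[-(m + 1):]             # bounded history
    return xs[-1]
```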

X-Ray Sobolev Variational Auto-Encoders

Title X-Ray Sobolev Variational Auto-Encoders
Authors Gabriel Turinici
Abstract The quality of generative models (Generative Adversarial Networks, Variational Auto-Encoders, …) depends heavily on the choice of a good probability distance. However, some popular metrics lack convenient properties such as (geodesic) convexity, fast evaluation, and so on. To address these shortcomings, we introduce a class of distances that have built-in convexity. We investigate the relationship with some known paradigms (sliced distances, reproducing kernel Hilbert spaces, energy distances). The distances are shown to possess fast implementations and are included in an adapted Variational Auto-Encoder termed the X-ray Sobolev Variational Auto-Encoder (XS-VAE), which produces good-quality results on standard generative datasets.
Tasks
Published 2019-11-29
URL https://arxiv.org/abs/1911.13135v2
PDF https://arxiv.org/pdf/1911.13135v2.pdf
PWC https://paperswithcode.com/paper/x-ray-sobolev-variational-auto-encoders
Repo
Framework
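
Among the paradigms the abstract relates the new distances to, a sliced distance is easy to sketch. The snippet below is a standard sliced construction for two equally-sized samples, not the paper's XS distance itself.

```python
import torch

def sliced_distance(x, y, n_projections=64):
    """x, y: (n, d) samples from two distributions (equal sample sizes assumed).
    Project onto random unit directions, sort the 1-D projections, and average
    the squared differences."""
    theta = torch.randn(x.size(1), n_projections)
    theta = theta / theta.norm(dim=0, keepdim=True)     # unit projection directions
    px = (x @ theta).sort(dim=0).values                 # (n, n_projections)
    py = (y @ theta).sort(dim=0).values
    return ((px - py) ** 2).mean()
```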

Tensor-based Cooperative Control for Large Scale Multi-intersection Traffic Signal Using Deep Reinforcement Learning and Imitation Learning

Title Tensor-based Cooperative Control for Large Scale Multi-intersection Traffic Signal Using Deep Reinforcement Learning and Imitation Learning
Authors Yusen Huo, Qinghua Tao, Jianming Hu
Abstract Traffic signal control has long been considered a critical topic in intelligent transportation systems. Most existing learning methods mainly focus on isolated intersections and suffer from inefficient training. This paper aims at cooperative control for large-scale multi-intersection traffic signals, in which a novel end-to-end learning-based model is established and an efficient training method is proposed correspondingly. In the proposed model, the input traffic status across multiple intersections is represented by a tensor, which not only significantly reduces dimensionality compared with using a single matrix but also avoids information loss. For the output, a multidimensional boolean vector is employed for the control policy to indicate whether the signal state changes or not, which simplifies the representation and abides by the practical phase-changing rules. In the proposed model, a multi-task learning structure is used to learn the cooperative policy. Instead of using only reinforcement learning to train the model, we employ imitation learning to integrate a rule-based model with neural networks for pre-training, which provides a reliable and satisfactory initial solution and greatly accelerates convergence. Afterwards, the reinforcement learning method is adopted to continue the fine training, where the proximal policy optimization algorithm is incorporated to solve the policy collapse problem in the multi-dimensional output situation. In numerical experiments, the advantages of the proposed model are demonstrated in comparison with related state-of-the-art methods.
Tasks Imitation Learning, Multi-Task Learning
Published 2019-09-30
URL https://arxiv.org/abs/1909.13428v1
PDF https://arxiv.org/pdf/1909.13428v1.pdf
PWC https://paperswithcode.com/paper/tensor-based-cooperative-control-for-large
Repo
Framework
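
A minimal sketch of the input/output interface described above: the multi-intersection traffic state enters as a tensor and the policy emits one Bernoulli "change phase or not" probability per intersection. The grid size, channel count and network depth are assumptions, and the training pipeline (imitation pre-training followed by PPO) is not shown.

```python
import torch
import torch.nn as nn

class SignalPolicy(nn.Module):
    """Tensor state in, per-intersection phase-change probabilities out."""
    def __init__(self, n_intersections=16, channels=4, grid=(4, 4)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * grid[0] * grid[1], n_intersections),
        )

    def forward(self, state):                  # state: (batch, channels, H, W)
        return torch.sigmoid(self.net(state))  # relaxed multidimensional boolean policy
```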

Latent Variable Algorithms for Multimodal Learning and Sensor Fusion

Title Latent Variable Algorithms for Multimodal Learning and Sensor Fusion
Authors Lijiang Guo
Abstract Multimodal learning has been lacking principled ways of combining information from different modalities and learning a low-dimensional manifold of meaningful representations. We study multimodal learning and sensor fusion from a latent variable perspective. We first present a regularized recurrent attention filter for sensor fusion. This algorithm can dynamically combine information from different types of sensors in a sequential decision-making task. Each sensor is bonded with a modular neural network to maximize the utility of its own information. A gating modular neural network dynamically generates a set of mixing weights for the outputs from the sensor networks by balancing the utility of all sensors’ information. We design a co-learning mechanism to encourage co-adaptation and independent learning of each sensor at the same time, and propose a regularization-based co-learning method. In the second part, we focus on recovering the manifold of latent representations. We propose a co-learning approach using a probabilistic graphical model which imposes a structural prior on the generative model: the multimodal variational RNN (MVRNN) model, and derive a variational lower bound for its objective function. In the third part, we extend the siamese structure to sensor fusion for robust acoustic event detection. We perform experiments to investigate the latent representations that are extracted; further work will be carried out in the following months. Our experiments show that the recurrent attention filter can dynamically combine different sensor inputs according to the information carried in the inputs. We consider that MVRNN can identify latent representations that are useful for many downstream tasks such as speech synthesis, activity recognition, and control and planning. Both algorithms are general frameworks which can be applied to other tasks where different types of sensors are jointly used for decision making.
Tasks Activity Recognition, Decision Making, Sensor Fusion, Speech Synthesis
Published 2019-04-23
URL http://arxiv.org/abs/1904.10450v1
PDF http://arxiv.org/pdf/1904.10450v1.pdf
PWC https://paperswithcode.com/paper/latent-variable-algorithms-for-multimodal
Repo
Framework
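
The gating mechanism described above, per-sensor modular networks combined by dynamically generated mixing weights, can be sketched as follows; all dimensions and the softmax gate are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedSensorFusion(nn.Module):
    """Each sensor gets its own module; a gating network mixes their outputs."""
    def __init__(self, sensor_dims, hidden=64, out_dim=32):
        super().__init__()
        self.sensor_nets = nn.ModuleList([
            nn.Sequential(nn.Linear(d, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))
            for d in sensor_dims
        ])
        self.gate = nn.Linear(sum(sensor_dims), len(sensor_dims))

    def forward(self, inputs):                 # list of per-sensor (batch, d_i) tensors
        feats = torch.stack([net(x) for net, x in zip(self.sensor_nets, inputs)], dim=1)
        weights = F.softmax(self.gate(torch.cat(inputs, dim=-1)), dim=-1)
        return (weights.unsqueeze(-1) * feats).sum(dim=1)   # fused representation
```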

PointPainting: Sequential Fusion for 3D Object Detection

Title PointPainting: Sequential Fusion for 3D Object Detection
Authors Sourabh Vora, Alex H. Lang, Bassam Helou, Oscar Beijbom
Abstract Camera and lidar are important sensor modalities for robotics in general and self-driving cars in particular. The sensors provide complementary information, offering an opportunity for tight sensor fusion. Surprisingly, lidar-only methods outperform fusion methods on the main benchmark datasets, suggesting a gap in the literature. In this work, we propose PointPainting: a sequential fusion method to fill this gap. PointPainting works by projecting lidar points into the output of an image-only semantic segmentation network and appending the class scores to each point. The appended (painted) point cloud can then be fed to any lidar-only method. Experiments show large improvements on three different state-of-the-art methods, Point-RCNN, VoxelNet and PointPillars, on the KITTI and nuScenes datasets. The painted version of PointRCNN represents a new state of the art on the KITTI leaderboard for the bird’s-eye view detection task. In ablation, we study how the effects of Painting depend on the quality and format of the semantic segmentation output, and demonstrate how latency can be minimized through pipelining.
Tasks 3D Object Detection, Object Detection, Self-Driving Cars, Semantic Segmentation, Sensor Fusion
Published 2019-11-22
URL https://arxiv.org/abs/1911.10150v1
PDF https://arxiv.org/pdf/1911.10150v1.pdf
PWC https://paperswithcode.com/paper/pointpainting-sequential-fusion-for-3d-object
Repo
Framework
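
The painting step itself is compact enough to sketch. The function below assumes a single camera with a 3x4 lidar-to-image projection matrix and a per-pixel class-score map from the segmentation network; the function and argument names are assumptions, not the authors' code.

```python
import numpy as np

def paint_points(points, seg_scores, cam_from_lidar, image_hw):
    """Project lidar points into the image, look up the per-pixel class scores,
    and append them to each point; points outside the image keep zero scores.

    points:         (n, k) lidar points, xyz in the first three columns
    seg_scores:     (h, w, num_classes) semantic segmentation scores
    cam_from_lidar: (3, 4) projection matrix
    image_hw:       (h, w)
    """
    n, (h, w) = points.shape[0], image_hw
    hom = np.hstack([points[:, :3], np.ones((n, 1))])   # homogeneous lidar coords
    proj = hom @ cam_from_lidar.T                        # (n, 3) image-plane coords
    u = (proj[:, 0] / proj[:, 2]).astype(int)
    v = (proj[:, 1] / proj[:, 2]).astype(int)
    valid = (proj[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    painted = np.zeros((n, seg_scores.shape[-1]))
    painted[valid] = seg_scores[v[valid], u[valid]]
    return np.hstack([points, painted])                  # painted point cloud
```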