October 18, 2019

3295 words 16 mins read

Paper Group ANR 589


Importance Weighted Evolution Strategies. Simple Hyper-heuristics Control the Neighbourhood Size of Randomised Local Search Optimally for LeadingOnes. Multitask and Multilingual Modelling for Lexical Analysis. OffsetNet: Deep Learning for Localization in the Lung using Rendered Images. Attention to Refine through Multi-Scales for Semantic Segmentation …

Importance Weighted Evolution Strategies

Title Importance Weighted Evolution Strategies
Authors Víctor Campos, Xavier Giro-i-Nieto, Jordi Torres
Abstract Evolution Strategies (ES) emerged as a scalable alternative to popular Reinforcement Learning (RL) techniques, providing an almost perfect speedup when distributed across hundreds of CPU cores thanks to a reduced communication overhead. Despite providing large improvements in wall-clock time, ES is data inefficient when compared to competing RL methods. One of the main causes of such inefficiency is the collection of large batches of experience, which are discarded after each policy update. In this work, we study how to perform more than one update per batch of experience by means of Importance Sampling while preserving the scalability of the original method. The proposed method, Importance Weighted Evolution Strategies (IW-ES), shows promising results and is a first step towards designing efficient ES algorithms.
Tasks
Published 2018-11-12
URL http://arxiv.org/abs/1811.04624v1
PDF http://arxiv.org/pdf/1811.04624v1.pdf
PWC https://paperswithcode.com/paper/importance-weighted-evolution-strategies
Repo
Framework
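
For concreteness, here is a minimal NumPy sketch of the reuse step the abstract describes: a batch of perturbations is sampled once and then used for several updates, with each sample importance-weighted by the ratio of its density under the current and the original search distributions. The toy objective, step sizes and population size are assumptions for illustration, not the authors' exact setup.

```python
import numpy as np

def fitness(theta):
    # Toy objective (assumption): maximize the negative squared norm.
    return -np.sum(theta ** 2)

def log_gauss(x, mean, sigma):
    # Log-density of an isotropic Gaussian, up to a constant shared by all samples.
    return -np.sum((x - mean) ** 2) / (2 * sigma ** 2)

def iw_es_step(theta, sigma=0.1, lr=0.02, pop=50, n_reuse=3, rng=np.random.default_rng(0)):
    eps = rng.standard_normal((pop, theta.size))
    samples = theta + sigma * eps                      # perturbed parameter vectors
    returns = np.array([fitness(s) for s in samples])  # one rollout per sample
    behaviour = theta.copy()                           # distribution the batch was drawn from
    for _ in range(n_reuse):                           # several updates from one batch
        # Importance weights: density under the current search distribution
        # divided by density under the behaviour distribution.
        log_w = np.array([log_gauss(s, theta, sigma) - log_gauss(s, behaviour, sigma)
                          for s in samples])
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        adv = (returns - returns.mean()) / (returns.std() + 1e-8)
        grad = (w[:, None] * adv[:, None] * (samples - theta)).sum(axis=0) / sigma ** 2
        theta = theta + lr * grad
    return theta

theta = iw_es_step(np.ones(10))
```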

Simple Hyper-heuristics Control the Neighbourhood Size of Randomised Local Search Optimally for LeadingOnes

Title Simple Hyper-heuristics Control the Neighbourhood Size of Randomised Local Search Optimally for LeadingOnes
Authors Andrei Lissovoi, Pietro S. Oliveto, John Alasdair Warwicker
Abstract Selection hyper-heuristics (HHs) are randomised search methodologies which choose and execute heuristics from a set of low-level heuristics during the optimisation process. A machine learning mechanism is generally used to decide which low-level heuristic should be applied in each decision step. In this paper we analyse whether sophisticated learning mechanisms are always necessary for HHs to perform well. To this end we consider the simplest HHs from the literature and rigorously analyse their performance for the LeadingOnes function. Our analysis shows that the standard Simple Random, Permutation, Greedy and Random Gradient HHs show no signs of learning. While the former HHs do not attempt to learn from the past performance of low-level heuristics, the idea behind the Random Gradient HH is to continue to exploit the currently selected heuristic as long as it is successful. Hence, it is embedded with a reinforcement learning mechanism with the shortest possible memory. However, the probability that a promising heuristic is successful in the next step is relatively low when perturbing a reasonable solution to a combinatorial optimisation problem. We generalise the simple Random Gradient HH so that success can be measured over a fixed period of time tau, instead of a single iteration. For LeadingOnes we prove that the Generalised Random Gradient (GRG) HH can learn to adapt the neighbourhood size of Randomised Local Search (RLS) to optimality during the run, and that it achieves the best possible performance attainable with the low-level heuristics. We also prove that the performance of the HH improves as the number of low-level local search heuristics to choose from increases. Finally, we show that the advantages of GRG over RLS and evolutionary algorithms using standard bit mutation increase if the anytime performance is considered. Experimental analyses confirm these results for different problem sizes.
Tasks
Published 2018-01-23
URL https://arxiv.org/abs/1801.07546v6
PDF https://arxiv.org/pdf/1801.07546v6.pdf
PWC https://paperswithcode.com/paper/simple-hyper-heuristics-optimise-leadingones
Repo
Framework
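
To make the mechanism concrete, below is a small sketch of the Generalised Random Gradient idea on LeadingOnes: a low-level heuristic (flip k distinct bits) is chosen at random and kept as long as it produces an improvement within a window of tau evaluations. The problem size, the set of neighbourhood sizes and tau are illustrative assumptions, not the values analysed in the paper.

```python
import random

def leading_ones(x):
    # Number of consecutive ones from the left.
    n = 0
    for bit in x:
        if bit != 1:
            break
        n += 1
    return n

def rls_k(x, k, rng):
    # Low-level heuristic RLS_k: flip k distinct bit positions.
    y = x[:]
    for i in rng.sample(range(len(x)), k):
        y[i] ^= 1
    return y

def generalised_random_gradient(n=100, ks=(1, 2), tau=50, rng=random.Random(0)):
    x = [rng.randint(0, 1) for _ in range(n)]
    evals = 0
    while leading_ones(x) < n:
        k = rng.choice(ks)                 # pick a low-level heuristic at random
        while True:                        # exploit it window by window
            success = False
            for _ in range(tau):
                y = rls_k(x, k, rng)
                evals += 1
                if leading_ones(y) > leading_ones(x):
                    x, success = y, True
                    break                  # improvement found: keep the same heuristic
            if not success:
                break                      # no improvement within tau steps: reselect
    return evals

print(generalised_random_gradient())
```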

Multitask and Multilingual Modelling for Lexical Analysis

Title Multitask and Multilingual Modelling for Lexical Analysis
Authors Johannes Bjerva
Abstract In Natural Language Processing (NLP), one traditionally considers a single task (e.g. part-of-speech tagging) for a single language (e.g. English) at a time. However, recent work has shown that it can be beneficial to take advantage of relatedness between tasks, as well as between languages. In this work I examine the concept of relatedness and explore how it can be utilised to build NLP models that require less manually annotated data. A large selection of NLP tasks is investigated for a substantial language sample comprising 60 languages. The results show potential for joint multitask and multilingual modelling, and hint at linguistic insights which can be gained from such models.
Tasks Lexical Analysis, Part-Of-Speech Tagging
Published 2018-09-07
URL http://arxiv.org/abs/1809.02428v1
PDF http://arxiv.org/pdf/1809.02428v1.pdf
PWC https://paperswithcode.com/paper/multitask-and-multilingual-modelling-for
Repo
Framework
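
One common way to realise joint multitask and multilingual modelling is a shared encoder with task-specific prediction heads, trained on data from many languages. The PyTorch sketch below is a hedged illustration of that pattern under assumed vocabulary, task and dimension choices; it is not necessarily the architecture used in the thesis.

```python
import torch
import torch.nn as nn

class MultiTaskTagger(nn.Module):
    """Shared encoder, task-specific heads; the vocabulary covers all languages."""
    def __init__(self, vocab_size, emb_dim, hidden, task_label_sizes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.heads = nn.ModuleDict({
            task: nn.Linear(2 * hidden, n_labels)
            for task, n_labels in task_label_sizes.items()
        })

    def forward(self, token_ids, task):
        states, _ = self.encoder(self.embed(token_ids))
        return self.heads[task](states)        # per-token label scores for this task

# Hypothetical usage: POS tagging and morphological tagging share one encoder.
model = MultiTaskTagger(vocab_size=20000, emb_dim=64, hidden=128,
                        task_label_sizes={"pos": 17, "morph": 200})
scores = model(torch.randint(0, 20000, (2, 12)), task="pos")  # (batch, seq, labels)
```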

OffsetNet: Deep Learning for Localization in the Lung using Rendered Images

Title OffsetNet: Deep Learning for Localization in the Lung using Rendered Images
Authors Jake Sganga, David Eng, Chauncey Graetzel, David Camarillo
Abstract Navigating surgical tools in the dynamic and tortuous anatomy of the lung’s airways requires accurate, real-time localization of the tools with respect to the preoperative scan of the anatomy. Such localization can inform human operators or enable closed-loop control by autonomous agents, which would require accuracy not yet reported in the literature. In this paper, we introduce a deep learning architecture, called OffsetNet, to accurately localize a bronchoscope in the lung in real-time. After training on only 30 minutes of recorded camera images in conserved regions of a lung phantom, OffsetNet tracks the bronchoscope’s motion on a held-out recording through these same regions at an update rate of 47 Hz and an average position error of 1.4 mm. Because this model performs poorly in less conserved regions, we augment the training dataset with simulated images from these regions. To bridge the gap between camera and simulated domains, we implement domain randomization and a generative adversarial network (GAN). After training on simulated images, OffsetNet tracks the bronchoscope’s motion in less conserved regions at an average position error of 2.4 mm, which meets conservative thresholds required for successful tracking.
Tasks
Published 2018-09-15
URL http://arxiv.org/abs/1809.05645v1
PDF http://arxiv.org/pdf/1809.05645v1.pdf
PWC https://paperswithcode.com/paper/offsetnet-deep-learning-for-localization-in
Repo
Framework
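
As a rough illustration of the localization idea, the sketch below regresses a small 6-DoF pose offset from a pair of images (the current camera frame and a view rendered from the estimated pose). The input convention, network depth and output parameterisation are assumptions, not the published OffsetNet architecture.

```python
import torch
import torch.nn as nn

class OffsetRegressor(nn.Module):
    """Sketch: predict a pose offset (3 translation + 3 rotation params) from an image pair."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 5, stride=2), nn.ReLU(),   # 2 channels: camera + rendered view
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 6)                    # 6-DoF offset

    def forward(self, camera, rendered):
        x = torch.cat([camera, rendered], dim=1)        # (batch, 2, H, W) grayscale pair
        return self.head(self.features(x).flatten(1))

net = OffsetRegressor()
offset = net(torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128))
```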

Attention to Refine through Multi-Scales for Semantic Segmentation

Title Attention to Refine through Multi-Scales for Semantic Segmentation
Authors Shiqi Yang, Gang Peng
Abstract This paper proposes a novel attention model for semantic segmentation, which aggregates multi-scale and context features to refine the prediction. Specifically, the skeleton convolutional neural network framework takes inputs at multiple different scales, so that the CNN can obtain representations at different scales. The proposed attention model handles the features from the different scale streams separately and integrates them. A location attention branch of the model learns to softly weight the multi-scale features at each pixel location. Moreover, we add a recalibrating branch, parallel to the location attention branch, to recalibrate the score map per class. We achieve quite competitive results on the PASCAL VOC 2012 and ADE20K datasets, surpassing the baseline and related works.
Tasks Semantic Segmentation
Published 2018-07-09
URL http://arxiv.org/abs/1807.02917v1
PDF http://arxiv.org/pdf/1807.02917v1.pdf
PWC https://paperswithcode.com/paper/attention-to-refine-through-multi-scales-for
Repo
Framework
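
The location-attention branch can be illustrated with a short PyTorch sketch: score maps produced at several input scales are resized to a common resolution and fused with softly learned per-pixel, per-scale weights. The backbone, number of scales and class count are placeholders, and the recalibrating branch is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleAttentionFusion(nn.Module):
    """Fuse per-scale score maps with softly learned per-pixel, per-scale weights."""
    def __init__(self, n_classes, n_scales):
        super().__init__()
        self.attention = nn.Conv2d(n_scales * n_classes, n_scales, kernel_size=1)

    def forward(self, score_maps):
        # score_maps: list of (batch, n_classes, h_i, w_i) tensors, one per input scale.
        size = score_maps[0].shape[-2:]
        maps = [F.interpolate(m, size=size, mode="bilinear", align_corners=False)
                for m in score_maps]
        weights = torch.softmax(self.attention(torch.cat(maps, dim=1)), dim=1)
        fused = sum(weights[:, i:i + 1] * maps[i] for i in range(len(maps)))
        return fused

fusion = ScaleAttentionFusion(n_classes=21, n_scales=2)
out = fusion([torch.rand(1, 21, 64, 64), torch.rand(1, 21, 32, 32)])
```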

Smartphone picture organization: A hierarchical approach

Title Smartphone picture organization: A hierarchical approach
Authors Stefan Lonn, Petia Radeva, Mariella Dimiccoli
Abstract We live in a society where the large majority of the population has a camera-equipped smartphone. In addition, hard drives and cloud storage are getting cheaper and cheaper, leading to a tremendous growth in stored personal photos. Unlike photo collections captured by a digital camera, which are typically pre-processed by the user who organizes them into event-related folders, smartphone pictures are automatically stored in the cloud. As a consequence, photo collections captured by a smartphone are highly unstructured, and because smartphones are ubiquitous, they present a larger variability compared to pictures captured by a digital camera. To address the need to organize large smartphone photo collections automatically, we propose here a new methodology for hierarchical photo organization into topics and topic-related categories. Our approach successfully estimates latent topics in the pictures by applying probabilistic Latent Semantic Analysis, and automatically assigns a name to each topic by relying on a lexical database. Topic-related categories are then estimated by using a set of topic-specific Convolutional Neural Networks. To validate our approach, we assemble and make public a large dataset of more than 8,000 smartphone pictures from 10 persons. Experimental results demonstrate better user satisfaction with respect to state-of-the-art solutions in terms of organization.
Tasks
Published 2018-03-15
URL https://arxiv.org/abs/1803.05940v2
PDF https://arxiv.org/pdf/1803.05940v2.pdf
PWC https://paperswithcode.com/paper/a-picture-is-worth-a-thousand-words-but-how
Repo
Framework
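
A very rough sketch of the topic-discovery stage follows, using scikit-learn's NMF as a stand-in for probabilistic Latent Semantic Analysis on a toy bag-of-tags matrix (an assumption; the paper uses pLSA, and the feature extraction, topic naming via the lexical database, and the topic-specific CNNs are omitted).

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy bag-of-tags matrix: rows are photos, columns are detected concept tags (assumption).
rng = np.random.default_rng(0)
photo_tag_counts = rng.poisson(1.0, size=(200, 50))

n_topics = 5
model = NMF(n_components=n_topics, init="nndsvda", random_state=0)
photo_topics = model.fit_transform(photo_tag_counts)   # photo-topic affinities
topic_tags = model.components_                         # topic-tag affinities

# Assign each photo to its dominant latent topic; per-topic categories would then be
# predicted by a topic-specific classifier, as in the paper's second stage.
dominant_topic = photo_topics.argmax(axis=1)
top_tags_per_topic = topic_tags.argsort(axis=1)[:, -3:]  # tag indices used to name each topic
```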

A Machine Learning Approach to Shipping Box Design

Title A Machine Learning Approach to Shipping Box Design
Authors Guang Yang, Cun Mu
Abstract Having the right assortment of shipping boxes in the fulfillment warehouse to pack and ship customers' online orders is an indispensable and integral part of today's eCommerce business, as it not only helps maintain a profitable business but also creates great experiences for customers. However, it is an extremely challenging operations task to strategically select the best combination of tens of box sizes from thousands of feasible ones to serve the hundreds of thousands of orders placed daily on millions of inventory products. In this paper, we present a machine learning approach to tackle the task by formulating the box design problem prescriptively as a generalized version of the weighted k-medoids clustering problem, where the parameters are estimated through a variety of descriptive analytics. We test this approach on fulfillment data collected from Walmart U.S. eCommerce, and show that it is capable of improving the box utilization rate by more than 10%.
Tasks
Published 2018-09-26
URL http://arxiv.org/abs/1809.10210v3
PDF http://arxiv.org/pdf/1809.10210v3.pdf
PWC https://paperswithcode.com/paper/a-machine-learning-approach-to-shipping-box
Repo
Framework
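
The formulation can be illustrated with a compact weighted k-medoids loop: candidate box sizes play the role of medoid candidates, orders (weighted by how often they occur) are the points, and the cost of assigning an order to a box is the wasted volume, or infinity if the order does not fit. The cost function, toy data and update rule below are simplifying assumptions, not the paper's exact prescriptive model.

```python
import itertools
import numpy as np

def fit_cost(order_dims, box_dims):
    """Wasted volume if the order fits in the box (under any axis permutation), else infinity."""
    for perm in itertools.permutations(order_dims):
        if all(o <= b for o, b in zip(perm, box_dims)):
            return np.prod(box_dims) - np.prod(order_dims)
    return np.inf

def weighted_k_medoids(orders, weights, candidate_boxes, k, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    cost = np.array([[fit_cost(o, b) for b in candidate_boxes] for o in orders])
    medoids = rng.choice(len(candidate_boxes), size=k, replace=False)
    for _ in range(n_iter):
        assign = cost[:, medoids].argmin(axis=1)          # each order -> cheapest chosen box
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(assign == j)[0]
            if len(members) == 0:
                continue
            # Swap in the candidate box that minimises the weighted cost of this cluster.
            cluster_cost = (weights[members, None] * cost[members, :]).sum(axis=0)
            new_medoids[j] = cluster_cost.argmin()
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return [candidate_boxes[m] for m in medoids]

orders = [(10, 8, 2), (30, 20, 10), (12, 12, 4)]          # order dimensions (toy data)
weights = np.array([500.0, 120.0, 300.0])                 # how often each order occurs
candidates = [(12, 10, 4), (32, 22, 12), (15, 15, 6), (40, 30, 20)]
print(weighted_k_medoids(orders, weights, candidates, k=2))
```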

Accelerating CNN inference on FPGAs: A Survey

Title Accelerating CNN inference on FPGAs: A Survey
Authors Kamel Abdelouahab, Maxime Pelcat, Jocelyn Serot, François Berry
Abstract Convolutional Neural Networks (CNNs) are currently adopted to solve an ever greater number of problems, ranging from speech recognition to image classification and segmentation. The large amount of processing required by CNNs calls for dedicated and tailored hardware support methods. Moreover, CNN workloads have a streaming nature, well suited to reconfigurable hardware architectures such as FPGAs. The amount and diversity of research on CNN FPGA acceleration within the last three years demonstrates the tremendous industrial and academic interest. This paper surveys the state of the art in CNN inference accelerators on FPGAs. The computational workloads, their parallelism and the involved memory accesses are analyzed. At the level of neurons, optimizations of the convolutional and fully connected layers are explained and the performance of the different methods is compared. At the network level, approximate computing and datapath optimization methods are covered and state-of-the-art approaches are compared. The methods and tools investigated in this survey represent the recent trends in FPGA CNN inference accelerators and will fuel future advances in efficient hardware deep learning.
Tasks Image Classification, Speech Recognition
Published 2018-05-26
URL http://arxiv.org/abs/1806.01683v1
PDF http://arxiv.org/pdf/1806.01683v1.pdf
PWC https://paperswithcode.com/paper/accelerating-cnn-inference-on-fpgas-a-survey
Repo
Framework
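
The workload analysis such a survey performs starts from the multiply-accumulate count and the data volume a convolutional layer moves, which together bound the attainable throughput and bandwidth needs on a given FPGA. A small worked calculation for one assumed layer shape (the numbers are illustrative, not taken from the paper):

```python
# Workload of one convolutional layer: C_in input channels, C_out output channels,
# K x K kernels, H x W output feature map (illustrative shape, not from the paper).
C_in, C_out, K, H, W = 128, 256, 3, 28, 28

macs = C_in * C_out * K * K * H * W                 # multiply-accumulates
weights = C_in * C_out * K * K                      # parameters to stream or cache on-chip
activations_in = C_in * (H + K - 1) * (W + K - 1)   # input feature map (same padding)
activations_out = C_out * H * W

print(f"MACs: {macs / 1e6:.1f} M, weights: {weights / 1e3:.1f} k, "
      f"activations in/out: {activations_in / 1e3:.1f} k / {activations_out / 1e3:.1f} k")
```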

Ego-Lane Analysis System (ELAS): Dataset and Algorithms

Title Ego-Lane Analysis System (ELAS): Dataset and Algorithms
Authors Rodrigo F. Berriel, Edilson de Aguiar, Alberto F. de Souza, Thiago Oliveira-Santos
Abstract Decreasing costs of vision sensors and advances in embedded hardware have boosted lane-related research (detection, estimation, and tracking) in the past two decades. The interest in this topic has increased even more with the demand for advanced driver assistance systems (ADAS) and self-driving cars. Although extensively studied independently, there is still a need for studies that propose a combined solution for the multiple problems related to the ego-lane, such as lane departure warning (LDW), lane change detection, lane marking type (LMT) classification, road marking detection and classification, and detection of adjacent lanes (i.e., immediate left and right lanes). In this paper, we propose a real-time Ego-Lane Analysis System (ELAS) capable of estimating ego-lane position, classifying LMTs and road markings, performing LDW and detecting lane change events. The proposed vision-based system works on a temporal sequence of images. Lane marking features are extracted in perspective and Inverse Perspective Mapping (IPM) images, which are combined to increase robustness. The final estimated lane is modeled as a spline using a combination of methods (Hough lines with a Kalman filter and a spline with a particle filter). Based on the estimated lane, all other events are detected. To validate ELAS and address the lack of lane datasets in the literature, a new dataset with more than 20 different scenes (more than 15,000 frames) covering a variety of scenarios (urban roads, highways, traffic, shadows, etc.) was created. The dataset was manually annotated and made publicly available to enable evaluation of several events that are of interest to the research community (i.e., lane estimation, change, and centering; road markings; intersections; LMTs; crosswalks and adjacent lanes). ELAS achieved high detection rates in all real-world events and proved to be ready for real-time applications.
Tasks Self-Driving Cars
Published 2018-06-15
URL http://arxiv.org/abs/1806.05984v1
PDF http://arxiv.org/pdf/1806.05984v1.pdf
PWC https://paperswithcode.com/paper/ego-lane-analysis-system-elas-dataset-and
Repo
Framework
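
Two of the ingredients mentioned in the abstract, Inverse Perspective Mapping and Hough-based lane-marking extraction, can be sketched with OpenCV as below. The source points of the road trapezoid and the edge/Hough thresholds are placeholders, and the Kalman/particle-filter tracking and event detection are omitted.

```python
import cv2
import numpy as np

def inverse_perspective_mapping(frame, src_pts, out_size=(400, 600)):
    """Warp the road region to a bird's-eye view. src_pts: 4 road-plane corners (assumed)."""
    dst_pts = np.float32([[0, out_size[1]], [out_size[0], out_size[1]],
                          [out_size[0], 0], [0, 0]])
    H = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
    return cv2.warpPerspective(frame, H, out_size)

def lane_marking_candidates(gray):
    """Detect edge segments as lane-marking candidates via the probabilistic Hough transform."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=10)
    return [] if lines is None else [l[0] for l in lines]

frame = np.zeros((480, 640, 3), dtype=np.uint8)            # placeholder frame
src = [(100, 470), (540, 470), (400, 300), (240, 300)]     # assumed road trapezoid corners
ipm = inverse_perspective_mapping(frame, src)
segments = lane_marking_candidates(cv2.cvtColor(ipm, cv2.COLOR_BGR2GRAY))
```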

Entity Linking in 40 Languages using MAG

Title Entity Linking in 40 Languages using MAG
Authors Diego Moussallem, Ricardo Usbeck, Michael Röder, Axel-Cyrille Ngonga Ngomo
Abstract A plethora of Entity Linking (EL) approaches has recently been developed. While many claim to be multilingual, the MAG (Multilingual AGDISTIS) approach has recently been shown to outperform the state of the art in multilingual EL on 7 languages. With this demo, we extend MAG to support EL in 40 different languages, including low-resource languages such as Ukrainian, Greek, Hungarian, Croatian, Portuguese, Japanese and Korean. Our demo relies on online web services which allow for easy access to our entity linking approaches and can disambiguate against DBpedia and Wikidata. During the demo, we will show how to use MAG by means of POST requests as well as through its user-friendly web interface. All data used in the demo is available at https://hobbitdata.informatik.uni-leipzig.de/agdistis/
Tasks Entity Linking
Published 2018-05-29
URL http://arxiv.org/abs/1805.11467v1
PDF http://arxiv.org/pdf/1805.11467v1.pdf
PWC https://paperswithcode.com/paper/entity-linking-in-40-languages-using-mag
Repo
Framework
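
A hedged sketch of driving the service via a POST request with Python's requests library is shown below. The endpoint URL and form fields are assumptions modelled on the AGDISTIS family of services, so the demo page should be consulted for the actual interface.

```python
import requests

# Hypothetical endpoint and parameters -- consult the MAG demo for the actual API.
MAG_ENDPOINT = "http://example.org/AGDISTIS"

text = "<entity>Barack Obama</entity> visited <entity>Berlin</entity>."
response = requests.post(MAG_ENDPOINT,
                         data={"text": text, "type": "agdistis", "lang": "en"},
                         timeout=30)
response.raise_for_status()
for mention in response.json():          # expected: one disambiguated resource per mention
    print(mention)
```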

Nonparametric Bayesian Sparse Graph Linear Dynamical Systems

Title Nonparametric Bayesian Sparse Graph Linear Dynamical Systems
Authors Rahi Kalantari, Joydeep Ghosh, Mingyuan Zhou
Abstract A nonparametric Bayesian sparse graph linear dynamical system (SGLDS) is proposed to model sequentially observed multivariate data. SGLDS uses the Bernoulli-Poisson link together with a gamma process to generate an infinite dimensional sparse random graph to model state transitions. Depending on the sparsity pattern of the corresponding row and column of the graph affinity matrix, a latent state of SGLDS can be categorized as either a non-dynamic state or a dynamic one. A normal-gamma construction is used to shrink the energy captured by the non-dynamic states, while the dynamic states can be further categorized into live, absorbing, or noise-injection states, which capture different types of dynamical components of the underlying time series. The state-of-the-art performance of SGLDS is demonstrated with experiments on both synthetic and real data.
Tasks Time Series
Published 2018-02-21
URL http://arxiv.org/abs/1802.07434v1
PDF http://arxiv.org/pdf/1802.07434v1.pdf
PWC https://paperswithcode.com/paper/nonparametric-bayesian-sparse-graph-linear
Repo
Framework
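
The sparse transition graph at the heart of the model can be sketched in a few lines: node weights are drawn from a (truncated) gamma process and edges through the Bernoulli-Poisson link, so that edge (i, j) exists with probability 1 - exp(-r_i r_j). The truncation level and hyperparameters below are assumptions, and the linear dynamical system built on top of the graph is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 30                                         # truncation level of the gamma process
r = rng.gamma(shape=0.5, scale=1.0, size=K)    # node weights from a (truncated) gamma process

# Bernoulli-Poisson link: edge (i, j) exists with probability 1 - exp(-r_i * r_j).
rate = np.outer(r, r)
adjacency = rng.random((K, K)) < 1.0 - np.exp(-rate)
np.fill_diagonal(adjacency, True)              # keep self-transitions (a modelling choice here)

# Nodes whose row and column are empty apart from the diagonal behave as non-dynamic states.
degree = adjacency.sum(axis=0) + adjacency.sum(axis=1) - 2
print("sparse edges:", adjacency.sum(), "of", K * K, "| isolated states:", (degree == 0).sum())
```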

Predictions of short-term driving intention using recurrent neural network on sequential data

Title Predictions of short-term driving intention using recurrent neural network on sequential data
Authors Zhou Xing, Fei Xiao
Abstract Predicting drivers’ intentions and their on-road behaviors is of great importance for the planning and decision making processes of autonomous driving vehicles. In particular, relatively short-term driving intentions are the fundamental units that constitute more sophisticated driving goals and behaviors, such as overtaking the slow vehicle in front, or exiting or merging onto a highway. While human drivers can usually rationalize, in advance, various on-road behaviors and intentions, as well as the associated risks, aggressiveness, and reciprocity characteristics, such reasoning skills can be challenging and difficult for an autonomous driving system to learn. In this article, we demonstrate a disciplined methodology that can be used to build and train a predictive driving system, and thereby to learn the aforementioned on-road characteristics.
Tasks Autonomous Driving, Decision Making
Published 2018-03-28
URL http://arxiv.org/abs/1804.00532v1
PDF http://arxiv.org/pdf/1804.00532v1.pdf
PWC https://paperswithcode.com/paper/predictions-of-short-term-driving-intention
Repo
Framework
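
A minimal PyTorch sketch of the prediction setup: an LSTM consumes a sequence of per-timestep driving features and classifies the short-term intention (e.g. lane keep, left or right lane change). The feature set, sequence length and intention classes are assumptions, not the paper's exact inputs.

```python
import torch
import torch.nn as nn

class IntentionRNN(nn.Module):
    """LSTM over per-timestep driving features, classifying the short-term intention."""
    def __init__(self, n_features=8, hidden=64, n_intentions=3):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, n_intentions)

    def forward(self, sequences):
        # sequences: (batch, time, features), e.g. speed, heading, lateral offset, ...
        _, (h_n, _) = self.rnn(sequences)
        return self.classifier(h_n[-1])          # logits over intention classes

model = IntentionRNN()
logits = model(torch.rand(4, 20, 8))             # 4 sequences of 20 timesteps
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 2, 0]))
```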

Deep Perm-Set Net: Learn to predict sets with unknown permutation and cardinality using deep neural networks

Title Deep Perm-Set Net: Learn to predict sets with unknown permutation and cardinality using deep neural networks
Authors S. Hamid Rezatofighi, Roman Kaskman, Farbod T. Motlagh, Qinfeng Shi, Daniel Cremers, Laura Leal-Taixé, Ian Reid
Abstract Many real-world problems, e.g. object detection, have outputs that are naturally expressed as sets of entities. This creates a challenge for traditional deep neural networks, which naturally deal with structured outputs such as vectors, matrices or tensors. We present a novel approach for learning to predict sets with unknown permutation and cardinality using deep neural networks. Specifically, in our formulation we incorporate the permutation as an unobservable variable and estimate its distribution during the learning process using alternating optimization. We demonstrate the validity of this new formulation on two relevant vision problems: object detection, for which our formulation outperforms state-of-the-art detectors such as Faster R-CNN and YOLO, and a complex CAPTCHA test, where we observe that, surprisingly, our set-based network acquired the ability to mimic arithmetic without any rules being coded.
Tasks Object Detection
Published 2018-05-02
URL http://arxiv.org/abs/1805.00613v4
PDF http://arxiv.org/pdf/1805.00613v4.pdf
PWC https://paperswithcode.com/paper/deep-perm-set-net-learn-to-predict-sets-with
Repo
Framework
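
Training a network to emit a set requires a loss that is invariant to the ordering of its outputs. The paper treats the permutation as a latent variable optimised alternately with the network weights; a common concrete ingredient of such losses, shown below for illustration, is matching predictions to ground-truth elements with the Hungarian algorithm. This sketch is an assumption for clarity and omits the paper's treatment of unknown cardinality.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def permutation_invariant_loss(pred, target):
    """Match each predicted element to a ground-truth element, then sum the matched costs.

    pred, target: (n, d) arrays of set elements (e.g. box parameters)."""
    cost = np.linalg.norm(pred[:, None, :] - target[None, :, :], axis=-1)  # pairwise costs
    rows, cols = linear_sum_assignment(cost)       # optimal one-to-one assignment
    return cost[rows, cols].sum()

pred = np.array([[0.1, 0.1], [0.9, 0.8], [0.5, 0.4]])
target = np.array([[0.5, 0.5], [0.0, 0.0], [1.0, 1.0]])
print(permutation_invariant_loss(pred, target))
```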

State-Augmentation Transformations for Risk-Sensitive Reinforcement Learning

Title State-Augmentation Transformations for Risk-Sensitive Reinforcement Learning
Authors Shuai Ma, Jia Yuan Yu
Abstract In the MDP framework, although the general reward function takes three arguments (current state, action, and successor state), it is often simplified to a function of two arguments (current state and action). The former is called a transition-based reward function, whereas the latter is called a state-based reward function. When the objective involves only the expected cumulative reward, this simplification works perfectly. However, when the objective is risk-sensitive, it leads to an incorrect value. We present state-augmentation transformations (SATs), which preserve the reward sequences as well as the reward distributions and the optimal policy in risk-sensitive reinforcement learning. In risk-sensitive scenarios, we first prove that, for every MDP with a stochastic transition-based reward function, there exists an MDP with a deterministic state-based reward function such that, for any given (randomized) policy for the first MDP, there exists a corresponding policy for the second MDP such that both Markov reward processes share the same reward sequence. Second, we illustrate two situations in an inventory control problem that require the proposed SATs: using Q-learning (or other learning methods) on MDPs with transition-based reward functions, and using methods designed for Markov processes with a deterministic state-based reward function on Markov processes with general reward functions. We show the advantage of the SATs by considering Value-at-Risk as an example, which is a risk measure on the reward distribution rather than on measures (such as mean and variance) of the distribution. We illustrate the error in the reward distribution estimation from the direct use of Q-learning, and show how the SATs enable a variance formula to work on Markov processes with general reward functions.
Tasks Q-Learning
Published 2018-04-16
URL http://arxiv.org/abs/1804.05950v2
PDF http://arxiv.org/pdf/1804.05950v2.pdf
PWC https://paperswithcode.com/paper/state-augmentation-transformations-for-risk
Repo
Framework
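
The construction behind the SATs can be sketched on a tiny finite MDP: the augmented state records (previous state, previous action, current state), so the reward becomes a deterministic function of the augmented state while the transition probabilities are carried over. The transition and reward tables below are toy assumptions.

```python
# Toy MDP (assumption): small tables just to show the construction.
states, actions = ["s0", "s1"], ["a0", "a1"]
P = {("s0", "a0"): {"s0": 0.3, "s1": 0.7}, ("s0", "a1"): {"s1": 1.0},
     ("s1", "a0"): {"s0": 1.0},            ("s1", "a1"): {"s1": 1.0}}
R = {("s0", "a0", "s0"): 0.0, ("s0", "a0", "s1"): 1.0, ("s0", "a1", "s1"): 0.5,
     ("s1", "a0", "s0"): -1.0, ("s1", "a1", "s1"): 2.0}

# Augmented state = (previous state, previous action, current state).
# The reward of an augmented state is then a deterministic function of the state itself.
aug_states = [(s, a, s2) for (s, a), succ in P.items() for s2 in succ]
aug_R = {aug: R[aug] for aug in aug_states}                      # state-based, deterministic

def aug_P(aug_state, action):
    """Transition kernel of the augmented MDP, reusing the original kernel P."""
    _, _, current = aug_state
    return {(current, action, nxt): p for nxt, p in P[(current, action)].items()}

print(aug_P(("s0", "a0", "s1"), "a0"), aug_R[("s0", "a0", "s1")])
```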

Dynamic Ensemble Selection VS K-NN: why and when Dynamic Selection obtains higher classification performance?

Title Dynamic Ensemble Selection VS K-NN: why and when Dynamic Selection obtains higher classification performance?
Authors Rafael M. O. Cruz, Hiba H. Zakane, Robert Sabourin, George D. C. Cavalcanti
Abstract Multiple classifier systems (MCS) focus on the combination of classifiers to obtain better performance than a single robust one. These systems unfold in three major phases: pool generation, selection and integration. One of the most promising MCS approaches is Dynamic Selection (DS), which relies on finding the most competent classifier or ensemble of classifiers to predict each test sample. The majority of DS techniques are based on the K-Nearest Neighbors (K-NN) definition, and the quality of the neighborhood has a huge impact on the performance of DS methods. In this paper, we perform an analysis comparing the classification results of DS techniques and the K-NN classifier under different conditions. Experiments are performed with 18 state-of-the-art DS techniques over 30 classification datasets, and the results show that DS methods present a significant boost in classification accuracy even though they use the same neighborhood as the K-NN. The outperformance of DS techniques over the K-NN classifier stems from the fact that DS techniques can deal with samples with a high degree of instance hardness (samples located close to the decision border), as opposed to the K-NN. In this paper, we not only explain why DS techniques achieve higher classification performance than the K-NN but also when DS should be used.
Tasks
Published 2018-04-21
URL http://arxiv.org/abs/1804.07882v1
PDF http://arxiv.org/pdf/1804.07882v1.pdf
PWC https://paperswithcode.com/paper/dynamic-ensemble-selection-vs-k-nn-why-and
Repo
Framework
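
One classical dynamic selection rule, Overall Local Accuracy (OLA), illustrates the K-NN-defined region of competence the paper analyses: for each query, the classifier from the pool with the highest accuracy on the query's nearest neighbours in a validation set makes the prediction. The sketch below uses scikit-learn pieces and toy data as assumptions; libraries such as DESlib implement these and many other DS techniques.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, random_state=0)
X_train, X_dsel, y_train, y_dsel = train_test_split(X, y, test_size=0.5, random_state=0)

# Pool generation: bagged decision trees (one of many possible pools).
pool = BaggingClassifier(DecisionTreeClassifier(max_depth=3), n_estimators=10,
                         random_state=0).fit(X_train, y_train)
nn = NearestNeighbors(n_neighbors=7).fit(X_dsel)

def ola_predict(x):
    """Overall Local Accuracy: use the pool classifier most accurate in x's neighbourhood."""
    _, idx = nn.kneighbors([x])
    region_X, region_y = X_dsel[idx[0]], y_dsel[idx[0]]
    local_acc = [clf.score(region_X, region_y) for clf in pool.estimators_]
    best = int(np.argmax(local_acc))
    return pool.estimators_[best].predict([x])[0]

print([ola_predict(x) for x in X_dsel[:5]])
```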