January 28, 2020

2976 words 14 mins read

Paper Group ANR 1054

Advances on CNN-based super-resolution of Sentinel-2 images. Deep Distributional Sequence Embeddings Based on a Wasserstein Loss. Towards Automated Sexual Violence Report Tracking. A Control-Model-Based Approach for Reinforcement Learning. Identification In Missing Data Models Represented By Directed Acyclic Graphs. Design and Results of the Second …

Advances on CNN-based super-resolution of Sentinel-2 images

Title Advances on CNN-based super-resolution of Sentinel-2 images
Authors Massimiliano Gargiulo
Abstract Thanks to their temporal-spatial coverage and free access, Sentinel-2 images are very interesting for the community. However, a relatively coarse spatial resolution, compared to that of state-of-the-art commercial products, motivates the study of super-resolution techniques to mitigate such a limitation. Specifically, thirteen bands are sensed simultaneously but at different spatial resolutions: 10, 20, and 60 meters depending on the spectral location. Here, building upon our previous convolutional neural network (CNN) based method, we propose an improved CNN solution to super-resolve the 20-m resolution bands by benefiting from the spatial details conveyed by the accompanying 10-m spectral bands.
Tasks Super-Resolution
Published 2019-02-07
URL http://arxiv.org/abs/1902.02513v1
PDF http://arxiv.org/pdf/1902.02513v1.pdf
PWC https://paperswithcode.com/paper/advances-on-cnn-based-super-resolution-of
Repo
Framework
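
As a rough illustration of the fusion idea described in the abstract (not the authors' architecture), the sketch below upsamples the 20-m bands to the 10-m grid, concatenates them with the 10-m bands, and lets a small CNN predict a residual correction. Band counts, layer widths, and the bicubic upsampling are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionSRNet(nn.Module):
    """Toy fusion CNN: 6 coarse (20 m) bands + 4 fine (10 m) bands -> 6 super-resolved bands."""
    def __init__(self, n20=6, n10=4, width=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(n20 + n10, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, n20, 3, padding=1),
        )

    def forward(self, bands20, bands10):
        # Bring the coarse bands onto the 10 m grid, then fuse them with the fine bands.
        up20 = F.interpolate(bands20, size=bands10.shape[-2:],
                             mode="bicubic", align_corners=False)
        x = torch.cat([up20, bands10], dim=1)
        # Predict a residual correction on top of plain interpolation.
        return up20 + self.body(x)

# 64x64 patches of the 20 m bands, 128x128 patches of the co-registered 10 m bands.
net = FusionSRNet()
sr = net(torch.randn(1, 6, 64, 64), torch.randn(1, 4, 128, 128))
print(sr.shape)  # torch.Size([1, 6, 128, 128])
```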

Deep Distributional Sequence Embeddings Based on a Wasserstein Loss

Title Deep Distributional Sequence Embeddings Based on a Wasserstein Loss
Authors Ahmed Abdelwahab, Niels Landwehr
Abstract Deep metric learning employs deep neural networks to embed instances into a metric space such that distances between instances of the same class are small and distances between instances from different classes are large. In most existing deep metric learning techniques, the embedding of an instance is given by a feature vector produced by a deep neural network and Euclidean distance or cosine similarity defines distances between these vectors. In this paper, we study deep distributional embeddings of sequences, where the embedding of a sequence is given by the distribution of learned deep features across the sequence. This has the advantage of capturing statistical information about the distribution of patterns within the sequence in the embedding. When embeddings are distributions rather than vectors, measuring distances between embeddings involves comparing their respective distributions. We propose a distance metric based on Wasserstein distances between the distributions and a corresponding loss function for metric learning, which leads to a novel end-to-end trainable embedding model. We empirically observe that distributional embeddings outperform standard vector embeddings and that training with the proposed Wasserstein metric outperforms training with other distance functions.
Tasks Metric Learning
Published 2019-12-04
URL https://arxiv.org/abs/1912.01933v1
PDF https://arxiv.org/pdf/1912.01933v1.pdf
PWC https://paperswithcode.com/paper/deep-distributional-sequence-embeddings-based
Repo
Framework
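
A minimal sketch of the distributional-embedding idea, under simplifying assumptions: per-timestep features from a GRU encoder are treated as an empirical distribution, and a coordinate-wise 1-D Wasserstein-1 distance (computed by sorting, valid for equal-length sequences with uniform weights) stands in for the paper's Wasserstein loss inside a standard triplet-style objective.

```python
import torch
import torch.nn as nn

class SeqEncoder(nn.Module):
    """Maps a sequence (batch, T, d_in) to per-timestep deep features (batch, T, d_emb)."""
    def __init__(self, d_in=16, d_emb=32):
        super().__init__()
        self.rnn = nn.GRU(d_in, d_emb, batch_first=True)

    def forward(self, x):
        feats, _ = self.rnn(x)
        return feats  # the embedding is the empirical distribution of these T feature vectors

def coordwise_wasserstein(fa, fb):
    """Coordinate-wise 1-D Wasserstein-1 distance between two equal-size feature sets (T, d).
    For equal-weight empirical samples this is the mean gap between sorted order statistics."""
    a = torch.sort(fa, dim=0).values
    b = torch.sort(fb, dim=0).values
    return (a - b).abs().mean()

# Triplet-style metric-learning loss on one anchor/positive/negative triple.
enc = SeqEncoder()
xa, xp, xn = (torch.randn(1, 50, 16) for _ in range(3))
fa, fp, fn = (enc(x)[0] for x in (xa, xp, xn))          # each: (50, 32) feature "samples"
margin = 1.0
loss = torch.relu(margin + coordwise_wasserstein(fa, fp) - coordwise_wasserstein(fa, fn))
loss.backward()
print(float(loss))
```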

Towards Automated Sexual Violence Report Tracking

Title Towards Automated Sexual Violence Report Tracking
Authors Naeemul Hassan, Amrit Poudel, Jason Hale, Claire Hubacek, Khandakar Tasnim Huq, Shubhra Kanti Karmaker Santu, Syed Ishtiaque Ahmed
Abstract Tracking sexual violence is a challenging task. In this paper, we present a supervised learning-based automated sexual violence report tracking model that is more scalable and reliable than its crowdsource-based counterparts. We define the sexual violence report tracking problem by considering the victim and perpetrator contexts and the nature of the violence. We find that our model can identify sexual violence reports with a precision and recall of 80.4% and 83.4%, respectively. We also applied the model during and after the #MeToo movement and discovered several interesting findings that are not easily identifiable from a shallow analysis.
Tasks
Published 2019-11-16
URL https://arxiv.org/abs/1911.06961v1
PDF https://arxiv.org/pdf/1911.06961v1.pdf
PWC https://paperswithcode.com/paper/towards-automated-sexual-violence-report
Repo
Framework
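
The paper's exact features and model are not reproduced here; the sketch below only illustrates the general supervised set-up: a generic TF-IDF plus logistic-regression text classifier evaluated with precision and recall on placeholder data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_fscore_support
from sklearn.pipeline import make_pipeline

# Placeholder data: 1 = personal report of sexual violence, 0 = other content.
train_texts = ["he grabbed me at the station last night",
               "lovely weather in town today",
               "my manager kept touching my shoulder at work",
               "the game last night was great"]
train_labels = [1, 0, 1, 0]
test_texts = ["a stranger followed and groped her on the train",
              "a new cafe opened downtown"]
test_labels = [1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(train_texts, train_labels)
prec, rec, _, _ = precision_recall_fscore_support(
    test_labels, clf.predict(test_texts), average="binary", zero_division=0)
print(f"precision={prec:.2f} recall={rec:.2f}")
```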

A Control-Model-Based Approach for Reinforcement Learning

Title A Control-Model-Based Approach for Reinforcement Learning
Authors Yingdong Lu, Mark S. Squillante, Chai Wah Wu
Abstract We consider a new form of model-based reinforcement learning that directly learns the optimal control parameters, instead of learning the underlying dynamical system. This includes a form of exploration and exploitation in learning and applying the optimal control parameters over time. It also includes a general framework that manages a collection of such control-model-based reinforcement learning methods running in parallel and selects the best decision from among these parallel methods, with the different methods interactively learning together. We derive theoretical results for the optimal control of linear and nonlinear instances of the new control-model-based reinforcement learning methods. Our empirical results demonstrate and quantify the significant benefits of our approach.
Tasks
Published 2019-05-28
URL https://arxiv.org/abs/1905.12009v1
PDF https://arxiv.org/pdf/1905.12009v1.pdf
PWC https://paperswithcode.com/paper/a-control-model-based-approach-for
Repo
Framework
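
A toy sketch of the control-model-based idea under strong simplifications, not the paper's algorithm: instead of learning the dynamics, a small collection of candidate feedback gains is evaluated in parallel, bandit-style, with epsilon-greedy exploration and incremental cost estimates. The scalar system, cost weights, and gain grid are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def episode_cost(k, a=1.05, b=0.5, T=50, noise=0.05):
    """Roll out a scalar linear system x_{t+1} = a x_t + b u_t + w_t with feedback u_t = -k x_t."""
    x, cost = 1.0, 0.0
    for _ in range(T):
        u = -k * x
        cost += x * x + 0.1 * u * u
        x = a * x + b * u + noise * rng.standard_normal()
    return cost

# A small collection of candidate control parameters, evaluated bandit-style in parallel.
gains = np.linspace(0.5, 3.0, 6)           # candidate feedback gains
est = np.zeros_like(gains)                 # running cost estimates
counts = np.zeros_like(gains)

for _ in range(200):
    # epsilon-greedy exploration/exploitation over the parallel controllers
    i = rng.integers(len(gains)) if rng.random() < 0.1 else int(np.argmin(est))
    c = episode_cost(gains[i])
    counts[i] += 1
    est[i] += (c - est[i]) / counts[i]     # incremental mean update

print("best learned gain:", gains[int(np.argmin(est))])
```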

Identification In Missing Data Models Represented By Directed Acyclic Graphs

Title Identification In Missing Data Models Represented By Directed Acyclic Graphs
Authors Rohit Bhattacharya, Razieh Nabi, Ilya Shpitser, James M. Robins
Abstract Missing data is a pervasive problem in data analyses, resulting in datasets that contain censored realizations of a target distribution. Many approaches to inference on the target distribution using censored observed data rely on missing data models represented as a factorization with respect to a directed acyclic graph. In this paper we consider the identifiability of the target distribution within this class of models, and show that the most general identification strategies proposed so far retain a significant gap in that they fail to identify a wide class of identifiable distributions. To address this gap, we propose a new algorithm that significantly generalizes the types of manipulations used in the ID algorithm, developed in the context of causal inference, in order to obtain identification.
Tasks Causal Inference
Published 2019-06-29
URL https://arxiv.org/abs/1907.00241v1
PDF https://arxiv.org/pdf/1907.00241v1.pdf
PWC https://paperswithcode.com/paper/identification-in-missing-data-models
Repo
Framework
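
For intuition only, the snippet below works through the simplest identifiable case: missingness of X2 depends only on the fully observed X1, so the target mean is identified by conditioning on R2 = 1 within levels of X1. The paper's contribution is an ID-style algorithm for far more general graphs, which this snippet does not implement.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Toy missingness DAG: X1 -> X2 and X1 -> R2, i.e. X2 is missing at random given X1.
x1 = rng.binomial(1, 0.5, n)
x2 = rng.normal(2.0 * x1, 1.0)                     # target variable, true mean = 1.0
r2 = rng.binomial(1, np.where(x1 == 1, 0.9, 0.3))  # observation indicator for X2
x2_obs = np.where(r2 == 1, x2, np.nan)             # the censored dataset we actually see

naive = np.nanmean(x2_obs)                         # complete-case mean: biased upwards here
# Identified functional: E[X2] = sum_v p(X1 = v) * E[X2 | X1 = v, R2 = 1]
adjusted = sum(np.mean(x1 == v) * np.nanmean(x2_obs[x1 == v]) for v in (0, 1))
print(f"true {x2.mean():.3f}  complete-case {naive:.3f}  adjusted {adjusted:.3f}")
```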

Design and Results of the Second International Competition on Computational Models of Argumentation

Title Design and Results of the Second International Competition on Computational Models of Argumentation
Authors Sarah A. Gaggl, Thomas Linsbichler, Marco Maratea, Stefan Woltran
Abstract Argumentation is a major topic in the study of Artificial Intelligence. Since the first edition in 2015, advancements in solving (abstract) argumentation frameworks have been assessed in competition events, similar to other closely related problem solving technologies. In this paper, we report on the design and results of the Second International Competition on Computational Models of Argumentation, which was jointly organized by TU Dresden (Germany), TU Wien (Austria), and the University of Genova (Italy), in affiliation with the 2017 International Workshop on Theory and Applications of Formal Argumentation. This second edition maintains some of the design choices made in the first event, e.g. the I/O formats, the basic reasoning problems, and the organization into tasks and tracks. At the same time, it introduces significant novelties, e.g. three additional prominent semantics, and an instance selection stage for classifying instances according to their empirical hardness.
Tasks Abstract Argumentation
Published 2019-09-02
URL https://arxiv.org/abs/1909.00621v1
PDF https://arxiv.org/pdf/1909.00621v1.pdf
PWC https://paperswithcode.com/paper/design-and-results-of-the-second
Repo
Framework
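
For flavour, here is a tiny sketch of one classical semantics handled by such solvers: the grounded extension, computed as the least fixed point of the characteristic function on a toy framework. It is unrelated to the competition's actual solvers, I/O formats, or benchmark instances.

```python
def grounded_extension(args, attacks):
    """Least fixed point of the characteristic function of an abstract argumentation framework.
    args: iterable of argument names; attacks: set of (attacker, target) pairs."""
    attackers = {a: {b for (b, c) in attacks if c == a} for a in args}
    s = set()
    while True:
        # an argument is acceptable w.r.t. s if every one of its attackers is attacked by s
        nxt = {a for a in args
               if all(any((d, b) in attacks for d in s) for b in attackers[a])}
        if nxt == s:
            return s
        s = nxt

# a -> b -> c: the grounded extension contains a (unattacked) and c (defended by a).
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))  # {'a', 'c'}
```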

Minimum Stein Discrepancy Estimators

Title Minimum Stein Discrepancy Estimators
Authors Alessandro Barp, Francois-Xavier Briol, Andrew B. Duncan, Mark Girolami, Lester Mackey
Abstract When maximum likelihood estimation is infeasible, one often turns to score matching, contrastive divergence, or minimum probability flow to obtain tractable parameter estimates. We provide a unifying perspective of these techniques as minimum Stein discrepancy estimators, and use this lens to design new diffusion kernel Stein discrepancy (DKSD) and diffusion score matching (DSM) estimators with complementary strengths. We establish the consistency, asymptotic normality, and robustness of DKSD and DSM estimators, then derive stochastic Riemannian gradient descent algorithms for their efficient optimisation. The main strength of our methodology is its flexibility, which allows us to design estimators with desirable properties for specific models at hand by carefully selecting a Stein discrepancy. We illustrate this advantage for several challenging problems for score matching, such as non-smooth, heavy-tailed or light-tailed densities.
Tasks
Published 2019-06-19
URL https://arxiv.org/abs/1906.08283v2
PDF https://arxiv.org/pdf/1906.08283v2.pdf
PWC https://paperswithcode.com/paper/minimum-stein-discrepancy-estimators
Repo
Framework
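
As a concrete member of the family of estimators the paper unifies, the sketch below implements classical (Hyvärinen) score matching for a 1-D Gaussian model; it is not the paper's DKSD or DSM estimator, and the parameterisation and optimiser are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.5, size=5000)

def sm_objective(params, x=data):
    """Score-matching objective J = E[ 0.5 * s(x)^2 + s'(x) ] for a Gaussian model,
    where the model score is s(x) = -(x - mu) / sigma^2 and s'(x) = -1 / sigma^2."""
    mu, log_sigma = params
    sigma2 = np.exp(2.0 * log_sigma)
    s = -(x - mu) / sigma2
    return np.mean(0.5 * s ** 2 - 1.0 / sigma2)

res = minimize(sm_objective, x0=np.array([0.0, 0.0]))
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(f"score-matching estimates: mu={mu_hat:.3f}, sigma={sigma_hat:.3f}")
```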

Statistical Performance of Radio Interferometric Calibration

Title Statistical Performance of Radio Interferometric Calibration
Authors Sarod Yatawatta
Abstract Calibration is an essential step in radio interferometric data processing that corrects the data for systematic errors and, in addition, subtracts bright foreground interference to reveal weak signals hidden in the residual. These weak and unknown signals are much sought after for many science goals, but the effect of calibration on such signals is an ever-present concern. The main reason for this is the incompleteness of the model used in calibration. Distributed calibration based on consensus optimization has been shown to mitigate the effect due to model incompleteness by calibrating data covering a wide bandwidth in a computationally efficient manner. In this paper, we study the statistical performance of direction dependent distributed calibration, i.e., the distortion caused by calibration on the residual statistics. In order to study this, we consider the mapping between the input uncalibrated data and the output residual data. We derive an analytical relationship for the influence of the input on the residual and use this to find the relationship between the input and output probability density functions. Using simulations we show that the smallest eigenvalue of the Jacobian of this mapping is a reliable indicator of the statistical performance of calibration. The analysis developed in this paper can also be applied to other data processing steps in radio interferometry such as imaging and foreground subtraction, as well as to many other machine learning problems.
Tasks Calibration, Radio Interferometry
Published 2019-02-27
URL http://arxiv.org/abs/1902.10448v2
PDF http://arxiv.org/pdf/1902.10448v2.pdf
PWC https://paperswithcode.com/paper/statistical-performance-of-radio
Repo
Framework
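
A deliberately tiny, real-valued toy of the Jacobian analysis (not the paper's direction-dependent distributed calibration): a single gain is calibrated by least squares, the Jacobian of the data-to-residual mapping is estimated by finite differences, and its smallest eigenvalue comes out near zero because signal along the model direction is absorbed by calibration.

```python
import numpy as np

rng = np.random.default_rng(1)
m = rng.normal(size=8)                       # hypothetical "sky model" direction (toy)

def residual(d):
    """Calibrate a single gain g by least squares and return the residual d - g_hat * m."""
    g_hat = (m @ d) / (m @ m)
    return d - g_hat * m

# Numerical Jacobian of the data -> residual mapping at a sample data vector.
d0 = rng.normal(size=8)
eps = 1e-6
J = np.empty((8, 8))
for j in range(8):
    e = np.zeros(8); e[j] = eps
    J[:, j] = (residual(d0 + e) - residual(d0 - e)) / (2 * eps)

eigvals = np.linalg.eigvalsh((J + J.T) / 2)   # symmetric here: the mapping is a projection
print("smallest eigenvalue:", eigvals.min())  # ~0: the component along the model is absorbed
```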

What Should/Do/Can LSTMs Learn When Parsing Auxiliary Verb Constructions?

Title What Should/Do/Can LSTMs Learn When Parsing Auxiliary Verb Constructions?
Authors Miryam de Lhoneux, Sara Stymne, Joakim Nivre
Abstract This article is a linguistic investigation of a neural parser. We look at transitivity and agreement information of auxiliary verb constructions (AVCs) in comparison to finite main verbs (FMVs). This comparison is motivated by theoretical work in dependency grammar and in particular the work of Tesnière (1959), where AVCs and FMVs are both instances of a nucleus, the basic unit of syntax. An AVC is a dissociated nucleus consisting of at least two words, whereas an FMV is its non-dissociated counterpart, consisting of exactly one word. We suggest that the representations of AVCs and FMVs should capture similar information. We use diagnostic classifiers to probe agreement and transitivity information in vectors learned by a transition-based neural parser in four typologically different languages. We find that the parser learns different information about AVCs and FMVs if only sequential models (BiLSTMs) are used in the architecture, but similar information when a recursive layer is used. We find explanations for why this is the case by looking closely at how information is learned in the network and at what happens with different dependency representations of AVCs.
Tasks
Published 2019-07-18
URL https://arxiv.org/abs/1907.07950v1
PDF https://arxiv.org/pdf/1907.07950v1.pdf
PWC https://paperswithcode.com/paper/what-shoulddocan-lstms-learn-when-parsing
Repo
Framework
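
A minimal sketch of the diagnostic-classifier technique on placeholder data: a linear probe is trained to predict a linguistic feature from fixed hidden vectors, and its held-out accuracy indicates whether that information is linearly recoverable. The vectors and labels here are synthetic, not from the authors' parser.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for the parser's hidden vectors and the linguistic property being probed
# (e.g. a binary agreement feature); both are synthetic here.
hidden = rng.normal(size=(1000, 128))
agreement = (hidden[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(hidden, agreement, test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("diagnostic-classifier accuracy:", probe.score(X_te, y_te))
# High held-out accuracy suggests the probed information is linearly present in the vectors.
```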

Interactive Decision Making for Autonomous Vehicles in Dense Traffic

Title Interactive Decision Making for Autonomous Vehicles in Dense Traffic
Authors David Isele
Abstract Dense urban traffic environments can produce situations where accurate prediction and dynamic models are insufficient for successful autonomous vehicle motion planning. We investigate how an autonomous agent can safely negotiate with other traffic participants, enabling the agent to handle potential deadlocks. Specifically, we consider merges where the gap between cars is smaller than the size of the ego vehicle. We propose a game theoretic framework capable of generating and responding to interactive behaviors. Our main contribution is to show how game-tree decision making can be executed by an autonomous vehicle, including approximations and reasoning that make the tree search computationally tractable. Additionally, to test our model, we develop a stochastic rule-based traffic agent capable of generating interactive behaviors that can be used as a benchmark for simulating traffic participants in a crowded merge setting.
Tasks Autonomous Vehicles, Decision Making, Motion Planning
Published 2019-09-27
URL https://arxiv.org/abs/1909.12914v1
PDF https://arxiv.org/pdf/1909.12914v1.pdf
PWC https://paperswithcode.com/paper/interactive-decision-making-for-autonomous
Repo
Framework
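
A two-move, Stackelberg-style toy of the negotiation idea, far simpler than the paper's game-tree search and approximations: the ego vehicle commits first, the other driver is assumed to respond rationally, and the payoffs (hypothetical numbers) penalise collisions heavily.

```python
# Toy two-move game tree for a merge: the ego vehicle moves first, the other driver responds.
# Payoffs are (ego, other); a collision is heavily penalised for both.
PAYOFFS = {
    ("merge", "yield"): (+1.0, -0.1),
    ("merge", "go"):    (-10.0, -10.0),   # both commit -> collision
    ("wait",  "yield"): (-0.1, -0.1),     # deadlock-ish: nobody progresses
    ("wait",  "go"):    (-0.1, +1.0),
}

def best_ego_action():
    best, best_val = None, float("-inf")
    for ego in ("merge", "wait"):
        # assume the other driver responds rationally to the ego's visible commitment
        other = max(("yield", "go"), key=lambda o: PAYOFFS[(ego, o)][1])
        val = PAYOFFS[(ego, other)][0]
        if val > best_val:
            best, best_val = ego, val
    return best, best_val

print(best_ego_action())  # ('merge', 1.0): committing makes yielding the other's best response
```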

A systematic review of fuzzing based on machine learning techniques

Title A systematic review of fuzzing based on machine learning techniques
Authors Yan Wang, Peng Jia, Luping Liu, Jiayong Liu
Abstract Security vulnerabilities play a vital role in network security systems. Fuzzing is widely used as a vulnerability discovery technique to reduce damage in advance. However, traditional fuzzing techniques face many challenges, such as how to mutate input seed files, how to increase code coverage, and how to effectively bypass verification. Machine learning has been introduced into fuzzing to alleviate these challenges. This paper reviews recent research on using machine learning for fuzzing, analyzes how machine learning improves the fuzzing process and its results, and sheds light on future work in fuzzing. Firstly, this paper discusses the reasons why machine learning techniques can be used in fuzzing scenarios and identifies six different stages in which machine learning has been used. Then it systematically studies machine learning based fuzzing models in terms of the choice of machine learning algorithm, pre-processing methods, datasets, evaluation metrics, and hyperparameter settings. Next, it assesses the performance of the machine learning models based on the most frequently used evaluation metrics. The evaluation shows that machine learning has an acceptable predictive capability for fuzzing. Finally, the vulnerability-discovery capability of traditional fuzzing tools is compared with that of machine learning based fuzzing tools. The results show that introducing machine learning can improve fuzzing performance, but some limitations remain, such as unbalanced training samples and the difficulty of extracting features related to vulnerabilities.
Tasks
Published 2019-08-04
URL https://arxiv.org/abs/1908.01262v1
PDF https://arxiv.org/pdf/1908.01262v1.pdf
PWC https://paperswithcode.com/paper/a-systematic-review-of-fuzzing-based-on
Repo
Framework

Evaluating Older Users’ Experiences with Commercial Dialogue Systems: Implications for Future Design and Development

Title Evaluating Older Users’ Experiences with Commercial Dialogue Systems: Implications for Future Design and Development
Authors Libby Ferland, Thomas Huffstutler, Jacob Rice, Joan Zheng, Shi Ni, Maria Gini
Abstract Understanding the needs of a variety of distinct user groups is vital in designing effective, desirable dialogue systems that will be adopted by the largest possible segment of the population. Despite the increasing popularity of dialogue systems in both mobile and home formats, user studies remain relatively infrequent and often sample a segment of the user population that is not representative of the needs of the potential user population as a whole. This is especially the case for users who may be more reluctant adopters, such as older adults. In this paper we discuss the results of a recent user study performed over a large population of adults aged 50 and over in the Midwestern United States who have experience using a variety of commercial dialogue systems. We show the common preferences, use cases, and feature gaps identified by older adult users in interacting with these systems. Based on these results, we propose a new, robust user modeling framework that addresses common issues facing older adult users, which can then be generalized to the wider user population.
Tasks
Published 2019-01-30
URL http://arxiv.org/abs/1902.04393v1
PDF http://arxiv.org/pdf/1902.04393v1.pdf
PWC https://paperswithcode.com/paper/evaluating-older-users-experiences-with
Repo
Framework

DeepGait: Planning and Control of Quadrupedal Gaits using Deep Reinforcement Learning

Title DeepGait: Planning and Control of Quadrupedal Gaits using Deep Reinforcement Learning
Authors Vassilios Tsounis, Mitja Alge, Joonho Lee, Farbod Farshidian, Marco Hutter
Abstract This paper addresses the problem of legged locomotion in non-flat terrain. As legged robots such as quadrupeds are to be deployed in terrains with geometries which are difficult to model and predict, the need arises to equip them with the capability to generalize well to unforeseen situations. In this work, we propose a novel technique for training neural-network policies for terrain-aware locomotion, which combines state-of-the-art methods for model-based motion planning and reinforcement learning. Our approach is centered on formulating Markov decision processes using the evaluation of dynamic feasibility criteria in place of physical simulation. We thus employ policy-gradient methods to independently train policies which respectively plan and execute foothold and base motions in 3D environments using both proprioceptive and exteroceptive measurements. We apply our method within a challenging suite of simulated terrain scenarios which contain features such as narrow bridges, gaps and stepping-stones, and train policies which succeed in locomoting effectively in all cases.
Tasks Legged Robots, Motion Planning, Policy Gradient Methods
Published 2019-09-18
URL https://arxiv.org/abs/1909.08399v2
PDF https://arxiv.org/pdf/1909.08399v2.pdf
PWC https://paperswithcode.com/paper/deepgait-planning-and-control-of-quadrupedal
Repo
Framework
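
Not the paper's method, but a minimal policy-gradient loop in the same spirit: a REINFORCE update on a one-step toy task where the reward comes from a hypothetical feasibility check rather than a physics simulator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal REINFORCE on a one-step toy task: two candidate "footholds", one of which is
# deemed infeasible by a hypothetical feasibility criterion and therefore gets low reward.
theta = np.zeros(2)                              # softmax logits over the two actions

def feasibility_reward(action):
    return 1.0 if action == 0 else -1.0          # hypothetical: only foothold 0 is feasible

for _ in range(2000):
    probs = np.exp(theta - theta.max()); probs /= probs.sum()
    a = rng.choice(2, p=probs)
    r = feasibility_reward(a)
    grad_logp = -probs; grad_logp[a] += 1.0      # gradient of log softmax(theta)[a]
    theta += 0.1 * r * grad_logp                 # REINFORCE update

probs = np.exp(theta - theta.max()); probs /= probs.sum()
print("learned action probabilities:", np.round(probs, 3))  # mass concentrates on the feasible action
```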

Automated Gleason Grading of Prostate Biopsies using Deep Learning

Title Automated Gleason Grading of Prostate Biopsies using Deep Learning
Authors Wouter Bulten, Hans Pinckaers, Hester van Boven, Robert Vink, Thomas de Bel, Bram van Ginneken, Jeroen van der Laak, Christina Hulsbergen-van de Kaa, Geert Litjens
Abstract The Gleason score is the most important prognostic marker for prostate cancer patients but suffers from significant inter-observer variability. We developed a fully automated deep learning system to grade prostate biopsies. The system was developed using 5834 biopsies from 1243 patients. A semi-automatic labeling technique was used to circumvent the need for full manual annotation by pathologists. The developed system achieved a high agreement with the reference standard. In a separate observer experiment, the deep learning system outperformed 10 out of 15 pathologists. The system has the potential to improve prostate cancer prognostics by acting as a first or second reader.
Tasks
Published 2019-07-18
URL https://arxiv.org/abs/1907.07980v1
PDF https://arxiv.org/pdf/1907.07980v1.pdf
PWC https://paperswithcode.com/paper/automated-gleason-grading-of-prostate
Repo
Framework

Bayesian Local Sampling-based Planning

Title Bayesian Local Sampling-based Planning
Authors Tin Lai, Philippe Morere, Fabio Ramos, Gilad Francis
Abstract Sampling-based planning is the predominant paradigm for motion planning in robotics. Most sampling-based planners use a global random sampling scheme to guarantee probabilistic completeness. However, such schemes are often inefficient, as the samples are drawn from a global proposal distribution and do not exploit relevant local structures. Local sampling-based motion planners, on the other hand, make sequential random-walk decisions to sample valid trajectories in configuration space. However, current approaches do not adapt their strategies according to the successes and failures of past samples. In this work, we introduce a local sampling-based motion planner with a Bayesian learning scheme for modelling an adaptive sampling proposal distribution. The proposal distribution is sequentially updated based on previous samples, consequently shaping it according to local obstacles and constraints in the configuration space. Thus, through learning from past observed outcomes, we maximise the likelihood of sampling in regions that have a higher probability of forming trajectories within narrow passages. We provide the formulation of a sample-efficient distribution, along with the theoretical foundation for sequentially updating this distribution. We demonstrate experimentally that by using a Bayesian proposal distribution, a solution is found faster, requiring fewer samples, and without any noticeable performance overhead.
Tasks Motion Planning
Published 2019-09-08
URL https://arxiv.org/abs/1909.03452v2
PDF https://arxiv.org/pdf/1909.03452v2.pdf
PWC https://paperswithcode.com/paper/local-sampling-based-planning-with-sequential
Repo
Framework
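
A toy adaptive random-walk sampler, not the paper's planner: a Dirichlet posterior over a few discrete step directions is reinforced whenever a sampled step stays collision-free inside a hypothetical narrow corridor, so the proposal gradually concentrates on directions that traverse the passage.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D configuration space: a narrow horizontal corridor |y| < 0.1 is free space.
# A local random-walk sampler adapts a Dirichlet posterior over 8 step directions,
# so directions whose samples keep landing in free space become more likely.
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
steps = 0.05 * np.stack([np.cos(angles), np.sin(angles)], axis=1)
alpha = np.ones(8)                                  # Dirichlet prior over directions

def collision_free(p):
    return abs(p[1]) < 0.1                          # hypothetical corridor walls

pos = np.array([0.0, 0.0])
for _ in range(2000):
    probs = rng.dirichlet(alpha)                    # sample a proposal distribution
    i = rng.choice(len(steps), p=probs)
    cand = pos + steps[i]
    if collision_free(cand):
        alpha[i] += 1.0                             # successful sample: reinforce this direction
        pos = cand
    # failed samples simply do not reinforce their direction

print("learned direction weights:", np.round(alpha / alpha.sum(), 2))
```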