January 31, 2020

3007 words 15 mins read

Paper Group ANR 21

How to Manipulate CNNs to Make Them Lie: the GradCAM Case. A Divergence Minimization Perspective on Imitation Learning Methods. Image captioning with weakly-supervised attention penalty. A Novel Continuous Representation of Genetic Programmings using Recurrent Neural Networks for Symbolic Regression. An Action Recognition network for specific targe …

How to Manipulate CNNs to Make Them Lie: the GradCAM Case

Title How to Manipulate CNNs to Make Them Lie: the GradCAM Case
Authors Tom Viering, Ziqi Wang, Marco Loog, Elmar Eisemann
Abstract Recently many methods have been introduced to explain CNN decisions. However, it has been shown that some methods can be sensitive to manipulation of the input. We continue this line of work and investigate the explanation method GradCAM. Instead of manipulating the input, we consider an adversary that manipulates the model itself to attack the explanation. By changing weights and architecture, we demonstrate that it is possible to generate any desired explanation, while leaving the model’s accuracy essentially unchanged. This illustrates that GradCAM cannot explain the decision of every CNN and provides a proof of concept showing that it is possible to obfuscate the inner workings of a CNN. Finally, we combine input and model manipulation. To this end we put a backdoor in the network: the explanation is correct unless there is a specific pattern present in the input, which triggers a malicious explanation. Our work raises new security concerns, especially in settings where explanations of models may be used to make decisions, such as in the medical domain.
Tasks
Published 2019-07-25
URL https://arxiv.org/abs/1907.10901v2
PDF https://arxiv.org/pdf/1907.10901v2.pdf
PWC https://paperswithcode.com/paper/how-to-manipulate-cnns-to-make-them-lie-the
Repo
Framework
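
For context, a minimal NumPy sketch of the Grad-CAM quantity such an adversary would target: given the activations of a chosen convolutional layer and the gradients of the class score with respect to them (both assumed precomputed here), the map is a ReLU of the gradient-weighted sum of the channels. Variable names are illustrative, not taken from the paper.

```python
import numpy as np

def grad_cam_map(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Grad-CAM heatmap from one conv layer.

    activations: (C, H, W) feature maps of the chosen layer.
    gradients:   (C, H, W) d(class score)/d(activations), e.g. from autograd.
    """
    # Channel weights: global-average-pooled gradients.
    weights = gradients.mean(axis=(1, 2))             # (C,)
    # Weighted combination of feature maps, then ReLU.
    cam = np.tensordot(weights, activations, axes=1)  # (H, W)
    cam = np.maximum(cam, 0.0)
    # Normalise to [0, 1] for visualisation.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

Because the map depends only on one layer's activations and gradients, weights elsewhere in the network can in principle be altered to reshape it while the final predictions stay nearly the same, which is the manipulation the paper demonstrates.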

A Divergence Minimization Perspective on Imitation Learning Methods

Title A Divergence Minimization Perspective on Imitation Learning Methods
Authors Seyed Kamyar Seyed Ghasemipour, Richard Zemel, Shixiang Gu
Abstract In many settings, it is desirable to learn decision-making and control policies through learning or bootstrapping from expert demonstrations. The most common approaches under this Imitation Learning (IL) framework are Behavioural Cloning (BC), and Inverse Reinforcement Learning (IRL). Recent methods for IRL have demonstrated the capacity to learn effective policies with access to a very limited set of demonstrations, a scenario in which BC methods often fail. Unfortunately, due to multiple factors of variation, directly comparing these methods does not provide adequate intuition for understanding this difference in performance. In this work, we present a unified probabilistic perspective on IL algorithms based on divergence minimization. We present $f$-MAX, an $f$-divergence generalization of AIRL [Fu et al., 2018], a state-of-the-art IRL method. $f$-MAX enables us to relate prior IRL methods such as GAIL [Ho & Ermon, 2016] and AIRL [Fu et al., 2018], and understand their algorithmic properties. Through the lens of divergence minimization we tease apart the differences between BC and successful IRL approaches, and empirically evaluate these nuances on simulated high-dimensional continuous control domains. Our findings conclusively identify that IRL’s state-marginal matching objective contributes most to its superior performance. Lastly, we apply our new understanding of IL methods to the problem of state-marginal matching, where we demonstrate that in simulated arm pushing environments we can teach agents a diverse range of behaviours using simply hand-specified state distributions and no reward functions or expert demonstrations. For datasets and reproducing results please refer to https://github.com/KamyarGh/rl_swiss/blob/master/reproducing/fmax_paper.md .
Tasks Continuous Control, Decision Making, Imitation Learning
Published 2019-11-06
URL https://arxiv.org/abs/1911.02256v1
PDF https://arxiv.org/pdf/1911.02256v1.pdf
PWC https://paperswithcode.com/paper/a-divergence-minimization-perspective-on
Repo
Framework
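
A worked statement of the divergence-minimization view, in my own notation rather than the paper's: imitation is framed as matching the learner's state-action marginal to the expert's under some f-divergence, with GAIL and AIRL recovered (as the paper relates them) by particular choices of f.

```latex
% f-divergence between expert and policy state-action marginals
D_f\!\left(\rho^{\mathrm{exp}} \,\big\|\, \rho^{\pi}\right)
  = \mathbb{E}_{(s,a)\sim\rho^{\pi}}
    \left[ f\!\left(\frac{\rho^{\mathrm{exp}}(s,a)}{\rho^{\pi}(s,a)}\right) \right],
\qquad f \text{ convex},\; f(1)=0.

% f-MAX objective: imitation as marginal matching
\min_{\pi}\; D_f\!\left(\rho^{\mathrm{exp}} \,\big\|\, \rho^{\pi}\right)
```

Choosing the Jensen-Shannon divergence for f corresponds to GAIL, while the reverse KL choice corresponds to AIRL; this state-marginal matching objective is what the experiments identify as the main driver of IRL's advantage over behavioural cloning.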

Image captioning with weakly-supervised attention penalty

Title Image captioning with weakly-supervised attention penalty
Authors Jiayun Li, Mohammad K. Ebrahimpour, Azadeh Moghtaderi, Yen-Yun Yu
Abstract Stories are essential for genealogy research since they can help build emotional connections with people. Many family stories are preserved in historical photos and albums. Recent developments in image captioning models make it feasible to “tell stories” for photos automatically. The attention mechanism has been widely adopted in many state-of-the-art encoder-decoder based image captioning models, since it can bridge the gap between the visual part and the language part. Most existing captioning models train attention modules implicitly with a word-likelihood loss. Meanwhile, many studies have investigated intrinsic attentions for visual models using gradient-based approaches. Ideally, attention maps predicted by captioning models should be consistent with intrinsic attentions from visual models for any given visual concept. However, no work has been done to align implicitly learned attention maps with intrinsic visual attentions. In this paper, we propose a novel model that measures consistency between attentions predicted by the captioning model and intrinsic visual attentions. This alignment loss allows explicit attention correction without using any expensive bounding box annotations. We developed and evaluated our model on the COCO dataset as well as a genealogical dataset from Ancestry.com Operations Inc., which contains billions of historical photos. The proposed model achieved better performance on all commonly used language evaluation metrics for both datasets.
Tasks Image Captioning
Published 2019-03-06
URL http://arxiv.org/abs/1903.02507v1
PDF http://arxiv.org/pdf/1903.02507v1.pdf
PWC https://paperswithcode.com/paper/image-captioning-with-weakly-supervised
Repo
Framework
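
One plausible form of the attention-consistency penalty, written as a hypothetical sketch rather than the authors' exact loss: normalise the decoder's predicted attention map and a gradient-based saliency map for the same word, then penalise their divergence alongside the usual word-likelihood loss. A minimal PyTorch sketch:

```python
import torch
import torch.nn.functional as F

def attention_alignment_loss(pred_attn: torch.Tensor,
                             intrinsic_attn: torch.Tensor) -> torch.Tensor:
    """KL penalty between a caption model's attention map and a gradient-based
    saliency map for the same word (both of shape (B, H*W), unnormalised)."""
    p = F.log_softmax(pred_attn, dim=-1)    # predicted attention as log-probs
    q = F.softmax(intrinsic_attn, dim=-1)   # intrinsic (visual) attention as probs
    return F.kl_div(p, q, reduction="batchmean")

# Total training loss would combine this with the word-likelihood term, e.g.
# loss = word_likelihood_loss + lambda_align * attention_alignment_loss(a_pred, a_intr)
```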

A Novel Continuous Representation of Genetic Programmings using Recurrent Neural Networks for Symbolic Regression

Title A Novel Continuous Representation of Genetic Programmings using Recurrent Neural Networks for Symbolic Regression
Authors Aftab Anjum, Fengyang Sun, Lin Wang, Jeff Orchard
Abstract Neuro-encoded expression programming, which aims to offer a novel continuous representation of combinatorial encodings for genetic programming methods, is proposed in this paper. Genetic programming with linear representation uses nature-inspired operators to tune expressions and finally search out the best explicit function to model the data. The encoding mechanism is essential for genetic programming to find a desirable solution efficiently. However, linear representation methods manipulate the expression tree in a discrete solution space, where a small change of the input can cause a large change of the output. The unsmooth landscape destroys local information and makes searching difficult. Neuro-encoded expression programming constructs the gene string with a recurrent neural network (RNN), and the weights of the network are optimized by powerful continuous evolutionary algorithms. The neural network mapping smooths the sharp fitness landscape and provides rich neighborhood information to find the best expression. The experiments indicate that the novel approach improves test accuracy and efficiency on several well-known symbolic regression problems.
Tasks
Published 2019-04-06
URL http://arxiv.org/abs/1904.03368v1
PDF http://arxiv.org/pdf/1904.03368v1.pdf
PWC https://paperswithcode.com/paper/a-novel-continuous-representation-of-genetic
Repo
Framework
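
A rough sketch of the encoding idea under my own assumptions (the symbol set, decoding rule, and optimiser are illustrative): an RNN driven only by its weights emits a score vector per gene position, the argmax symbols form the expression string, and a continuous evolutionary method then searches over the flattened weight vector.

```python
import numpy as np

SYMBOLS = ["+", "-", "*", "/", "x", "1"]   # toy symbol set (assumption)

def decode_expression(weights: np.ndarray, length: int = 8,
                      hidden: int = 16) -> list:
    """Run a tiny vanilla RNN from a flat weight vector and decode one
    symbol per gene position by argmax over symbol scores."""
    n_sym = len(SYMBOLS)
    i = 0
    w_hh = weights[i:i + hidden * hidden].reshape(hidden, hidden); i += hidden * hidden
    b_h  = weights[i:i + hidden];                                  i += hidden
    w_ho = weights[i:i + n_sym * hidden].reshape(n_sym, hidden)
    h = np.zeros(hidden)
    genes = []
    for _ in range(length):
        h = np.tanh(w_hh @ h + b_h)    # state update (a position input could be added)
        genes.append(SYMBOLS[int(np.argmax(w_ho @ h))])
    return genes

# A continuous optimiser (e.g. CMA-ES) would perturb `weights`, scoring each
# candidate by the regression error of the decoded expression on the data.
```

The point of the construction is that small, continuous changes to the weights tend to change the decoded gene string gradually, which is the smoothing of the fitness landscape the abstract describes.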

An Action Recognition network for specific target based on rMC and RPN

Title An Action Recognition network for specific target based on rMC and RPN
Authors Mingjie Li, Youqian Feng, Zhonghai Yin, Cheng Zhou, Fanghao Dong, Yuan Lin, Yuhao Dong
Abstract Traditional action recognition methods are not specific to the operator, so the results are easily disturbed when other actions appear in the video. A network based on a mixed-convolution ResNet (rMC) and an RPN is proposed in this paper. The rMC is tested on the UCF-101 dataset and compared with the R3D method; its accuracy reaches 71.07%. The action recognition network is also tested on our gesture and body-posture datasets for a specific target, where it performs well and runs at 200 FPS. Finally, the model is improved by introducing a regression block and performs even better, which shows the great potential of this model.
Tasks
Published 2019-06-19
URL https://arxiv.org/abs/1906.07944v1
PDF https://arxiv.org/pdf/1906.07944v1.pdf
PWC https://paperswithcode.com/paper/an-action-recognition-network-for-specific
Repo
Framework
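
A hedged sketch of how an RPN and a spatiotemporal classifier could be chained for operator-specific recognition; the cropping step and module interfaces are my assumptions, not the paper's exact design. Region proposals localise the target, the clip is cropped to that box, and a mixed-convolution backbone classifies the cropped clip.

```python
import torch
import torch.nn.functional as F

def recognise_target_action(clip: torch.Tensor, rpn, backbone) -> torch.Tensor:
    """clip:     (T, C, H, W) video tensor.
    rpn:      callable returning one (x1, y1, x2, y2) box for the target person.
    backbone: spatiotemporal CNN (e.g. a mixed-conv ResNet) -> class logits."""
    x1, y1, x2, y2 = rpn(clip[0])              # propose the operator's region
    cropped = clip[:, :, y1:y2, x1:x2]         # restrict every frame to that box
    cropped = F.interpolate(cropped, size=(112, 112),
                            mode="bilinear", align_corners=False)
    return backbone(cropped.unsqueeze(0))      # (1, num_classes) logits
```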

Learning to Generate Questions by Learning What not to Generate

Title Learning to Generate Questions by Learning What not to Generate
Authors Bang Liu, Mingjun Zhao, Di Niu, Kunfeng Lai, Yancheng He, Haojie Wei, Yu Xu
Abstract Automatic question generation is an important technique that can improve the training of question answering, help chatbots to start or continue a conversation with humans, and provide assessment materials for educational purposes. Existing neural question generation models are not sufficient mainly due to their inability to properly model the process of how each word in the question is selected, i.e., whether it repeats the given passage or is generated from a vocabulary. In this paper, we propose our Clue Guided Copy Network for Question Generation (CGC-QG), which is a sequence-to-sequence generative model with a copying mechanism, yet employing a variety of novel components and techniques to boost the performance of question generation. In CGC-QG, we design a multi-task labeling strategy to identify whether a question word should be copied from the input passage or be generated instead, guiding the model to learn the accurate boundaries between copying and generation. Furthermore, our input passage encoder takes as input, among a diverse range of other features, the prediction made by a clue word predictor, which helps identify whether each word in the input passage is a potential clue to be copied into the target question. The clue word predictor is designed based on a novel application of Graph Convolutional Networks onto a syntactic dependency tree representation of each passage, thus being able to predict clue words only based on their context in the passage and their relative positions to the answer in the tree. We jointly train the clue prediction as well as question generation with multi-task learning and a number of practical strategies to reduce the complexity. Extensive evaluations show that our model significantly improves the performance of question generation and outperforms all previous state-of-the-art neural question generation models by a substantial margin.
Tasks Multi-Task Learning, Question Answering, Question Generation
Published 2019-02-27
URL http://arxiv.org/abs/1902.10418v1
PDF http://arxiv.org/pdf/1902.10418v1.pdf
PWC https://paperswithcode.com/paper/learning-to-generate-questions-by-learning
Repo
Framework
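
A small sketch of the copy/generate labelling idea, under a simple assumption of my own (a passage token is labelled "copy" if its lowercased form appears in the target question; the paper's actual strategy may be more refined):

```python
def copy_labels(passage_tokens, question_tokens):
    """1 = this passage token should be copied into the question, 0 = generated.
    Toy heuristic: exact lowercase match against the target question."""
    question_vocab = {t.lower() for t in question_tokens}
    return [1 if tok.lower() in question_vocab else 0 for tok in passage_tokens]

# Example
labels = copy_labels(
    ["The", "Eiffel", "Tower", "is", "in", "Paris", "."],
    ["Where", "is", "the", "Eiffel", "Tower", "?"])
# -> [1, 1, 1, 1, 0, 0, 0]
```

Labels of this kind give the decoder an explicit supervision signal for when to copy versus generate, which is the boundary the multi-task strategy is meant to teach.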

A convolution recurrent autoencoder for spatio-temporal missing data imputation

Title A convolution recurrent autoencoder for spatio-temporal missing data imputation
Authors Reza Asadi, Amelia Regan
Abstract When sensors collect spatio-temporal data over a large geographical area, missing data are unavoidable. Missing data negatively impact the performance of data analysis and machine learning algorithms. In this paper, we study deep autoencoders for missing data imputation in spatio-temporal problems. We propose a convolution bidirectional-LSTM for capturing spatial and temporal patterns. Moreover, we analyze the autoencoder’s latent feature representation of spatio-temporal data and illustrate its performance for missing data imputation. Traffic flow data are used to evaluate our models. The results show that the proposed convolution recurrent neural network outperforms state-of-the-art methods.
Tasks Imputation, Multivariate Time Series Imputation
Published 2019-04-29
URL http://arxiv.org/abs/1904.12413v1
PDF http://arxiv.org/pdf/1904.12413v1.pdf
PWC https://paperswithcode.com/paper/a-convolution-recurrent-autoencoder-for
Repo
Framework
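
A minimal PyTorch sketch of a convolution plus bidirectional-LSTM autoencoder for spatio-temporal imputation; the layer sizes and the zero-filling convention for gaps are my assumptions. A 1-D convolution mixes neighbouring sensors at each time step, a bidirectional LSTM propagates information across time, and a linear head reconstructs the full sensor vector, from which missing entries are read off.

```python
import torch
import torch.nn as nn

class ConvBiLSTMImputer(nn.Module):
    def __init__(self, n_sensors: int, hidden: int = 64):
        super().__init__()
        self.conv = nn.Conv1d(1, 8, kernel_size=3, padding=1)      # spatial mixing
        self.lstm = nn.LSTM(8 * n_sensors, hidden, batch_first=True,
                            bidirectional=True)                    # temporal mixing
        self.out = nn.Linear(2 * hidden, n_sensors)                # reconstruction

    def forward(self, x):                       # x: (B, T, n_sensors), zeros at gaps
        B, T, S = x.shape
        h = self.conv(x.reshape(B * T, 1, S))   # (B*T, 8, S)
        h = h.reshape(B, T, 8 * S)
        h, _ = self.lstm(h)                     # (B, T, 2*hidden)
        return self.out(h)                      # reconstructed (B, T, n_sensors)

# Training: MSE on observed entries only; imputed values = model output at gaps.
```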

Data-Driven Vehicle Trajectory Forecasting

Title Data-Driven Vehicle Trajectory Forecasting
Authors Shayan Jawed, Eya Boumaiza, Josif Grabocka, Lars Schmidt-Thieme
Abstract An active area of research is to increase the safety of self-driving vehicles. Although safety cannot be guaranteed completely, the capability of a vehicle to predict the future trajectories of its surrounding vehicles could help ensure this notion of safety to a great extent. We cast the trajectory forecast problem as a multi-time-step forecasting problem and develop a Convolutional Neural Network based approach to learn from trajectory sequences generated from a completely raw dataset in real time. Results show improvement over baselines.
Tasks
Published 2019-02-09
URL http://arxiv.org/abs/1902.05400v1
PDF http://arxiv.org/pdf/1902.05400v1.pdf
PWC https://paperswithcode.com/paper/data-driven-vehicle-trajectory-forecasting
Repo
Framework
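
A hedged sketch of casting trajectory forecasting as multi-step prediction with a 1-D CNN; the history length, horizon, and layer sizes are illustrative rather than the paper's configuration. Past (x, y) positions form the input channels along time and the network regresses all future positions at once.

```python
import torch
import torch.nn as nn

class TrajectoryCNN(nn.Module):
    """Input: past positions (B, 2, t_in); output: future positions (B, horizon, 2)."""
    def __init__(self, t_in: int = 20, horizon: int = 10):
        super().__init__()
        self.horizon = horizon
        self.net = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * t_in, horizon * 2),   # all future steps in one shot
        )

    def forward(self, past):
        return self.net(past).view(-1, self.horizon, 2)
```

Predicting the whole horizon in one pass avoids the error accumulation of feeding single-step predictions back in, which is one common motivation for the multi-time-step formulation.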

Semi-Parametric Uncertainty Bounds for Binary Classification

Title Semi-Parametric Uncertainty Bounds for Binary Classification
Authors Balázs Csanád Csáji, Ambrus Tamás
Abstract The paper studies binary classification and aims at estimating the underlying regression function which is the conditional expectation of the class labels given the inputs. The regression function is the key component of the Bayes optimal classifier, moreover, besides providing optimal predictions, it can also assess the risk of misclassification. We aim at building non-asymptotic confidence regions for the regression function and suggest three kernel-based semi-parametric resampling methods. We prove that all of them guarantee regions with exact coverage probabilities and they are strongly consistent.
Tasks
Published 2019-03-23
URL http://arxiv.org/abs/1903.09790v1
PDF http://arxiv.org/pdf/1903.09790v1.pdf
PWC https://paperswithcode.com/paper/semi-parametric-uncertainty-bounds-for-binary
Repo
Framework
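
For orientation, a worked statement of the quantities involved, using the common ±1 label convention (which the paper may or may not adopt): the regression function is the conditional expectation of the label, the Bayes optimal classifier is its sign, and the conditional misclassification risk follows directly from it.

```latex
f^{*}(x) = \mathbb{E}[\,Y \mid X = x\,]
         = \mathbb{P}(Y = 1 \mid X = x) - \mathbb{P}(Y = -1 \mid X = x),
\qquad Y \in \{-1, +1\},

g^{*}(x) = \operatorname{sign}\!\big(f^{*}(x)\big),
\qquad
\mathbb{P}\big(g^{*}(X) \neq Y \mid X = x\big) = \tfrac{1}{2}\big(1 - |f^{*}(x)|\big).
```

The last identity is what the abstract means by the regression function also assessing the risk of misclassification: the closer |f*(x)| is to 1, the more confident the optimal prediction at x.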

Cycle-Consistency for Robust Visual Question Answering

Title Cycle-Consistency for Robust Visual Question Answering
Authors Meet Shah, Xinlei Chen, Marcus Rohrbach, Devi Parikh
Abstract Despite significant progress in Visual Question Answering over the years, the robustness of today’s VQA models leaves much to be desired. We introduce a new evaluation protocol and associated dataset (VQA-Rephrasings) and show that state-of-the-art VQA models are notoriously brittle to linguistic variations in questions. VQA-Rephrasings contains 3 human-provided rephrasings for 40k questions spanning 40k images from the VQA v2.0 validation dataset. As a step towards improving robustness of VQA models, we propose a model-agnostic framework that exploits cycle consistency. Specifically, we train a model to not only answer a question, but also generate a question conditioned on the answer, such that the answer predicted for the generated question is the same as the ground truth answer to the original question. Without the use of additional annotations, we show that our approach is significantly more robust to linguistic variations than state-of-the-art VQA models, when evaluated on the VQA-Rephrasings dataset. In addition, our approach outperforms state-of-the-art approaches on the standard VQA and Visual Question Generation tasks on the challenging VQA v2.0 dataset.
Tasks Question Answering, Question Generation, Visual Question Answering
Published 2019-02-15
URL http://arxiv.org/abs/1902.05660v1
PDF http://arxiv.org/pdf/1902.05660v1.pdf
PWC https://paperswithcode.com/paper/cycle-consistency-for-robust-visual-question
Repo
Framework
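
A compressed sketch of the cycle-consistency objective as I read the abstract; function names, argument conventions, and the loss weighting are placeholders. Answer the original question, generate a question conditioned on the ground-truth answer, and require the model's answer to the generated question to match that ground truth.

```python
def cycle_consistency_objective(vqa_model, qgen_model, image, question, answer_gt,
                                vqa_loss, qgen_loss, lam=1.0):
    """Three-term objective: VQA loss, question-generation loss, and a
    consistency term on the answer to the generated question."""
    answer_pred = vqa_model(image, question)          # A(Q, I)
    q_rephrased = qgen_model(image, answer_gt)        # Q' conditioned on the answer
    answer_cycle = vqa_model(image, q_rephrased)      # A(Q', I)
    return (vqa_loss(answer_pred, answer_gt)
            + qgen_loss(q_rephrased, question)
            + lam * vqa_loss(answer_cycle, answer_gt))
```

The consistency term is what pushes the model to give the same answer across paraphrases, without any extra annotation beyond the original question-answer pairs.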

Exploration-Exploitation Trade-off in Reinforcement Learning on Online Markov Decision Processes with Global Concave Rewards

Title Exploration-Exploitation Trade-off in Reinforcement Learning on Online Markov Decision Processes with Global Concave Rewards
Authors Wang Chi Cheung
Abstract We consider an agent who is involved in a Markov decision process and receives a vector of outcomes every round. Her objective is to maximize a global concave reward function on the average vectorial outcome. The problem models applications such as multi-objective optimization, maximum entropy exploration, and constrained optimization in Markovian environments. In our general setting where a stationary policy could have multiple recurrent classes, the agent faces a subtle yet consequential trade-off in alternating among different actions for balancing the vectorial outcomes. In particular, stationary policies are in general sub-optimal. We propose a no-regret algorithm based on online convex optimization (OCO) tools (Agrawal and Devanur 2014) and UCRL2 (Jaksch et al. 2010). Importantly, we introduce a novel gradient threshold procedure, which carefully controls the switches among actions to handle the subtle trade-off. By delaying the gradient updates, our procedure produces a non-stationary policy that diversifies the outcomes for optimizing the objective. The procedure is compatible with a variety of OCO tools.
Tasks
Published 2019-05-15
URL https://arxiv.org/abs/1905.06466v1
PDF https://arxiv.org/pdf/1905.06466v1.pdf
PWC https://paperswithcode.com/paper/exploration-exploitation-trade-off-in
Repo
Framework
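
The objective, written out in my own notation rather than the paper's: the agent receives a vector outcome each round and maximises a concave function of the long-run average outcome.

```latex
\max_{\text{policy}} \; f\!\left( \lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} \mathbf{v}_t \right),
\qquad \mathbf{v}_t \in \mathbb{R}^{d} \text{ the outcome vector at round } t,
\qquad f \text{ concave}.
```

For instance, if f is the minimum over coordinates (a max-min fairness objective), the optimal behaviour may need to alternate among actions that each favour a different coordinate, which no single stationary policy achieves in general; this is the trade-off the gradient threshold procedure is designed to manage.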

Artificial Intelligence Strategies for National Security and Safety Standards

Title Artificial Intelligence Strategies for National Security and Safety Standards
Authors Erik Blasch, James Sung, Tao Nguyen, Chandra P. Daniel, Alisa P. Mason
Abstract Recent advances in artificial intelligence (AI) have led to an explosion of multimedia applications (e.g., computer vision (CV) and natural language processing (NLP)) for different domains such as commercial, industrial, and intelligence. In particular, the use of AI applications in a national security environment is often problematic because the opaque nature of the systems leads to an inability for a human to understand how the results came about. A reliance on ‘black boxes’ to generate predictions and inform decisions is potentially disastrous. This paper explores how the application of standards during each stage of the development of an AI system deployed and used in a national security environment would help enable trust. Specifically, we focus on the standards outlined in Intelligence Community Directive 203 (Analytic Standards) to subject machine outputs to the same rigorous standards as analysis performed by humans.
Tasks
Published 2019-11-03
URL https://arxiv.org/abs/1911.05727v1
PDF https://arxiv.org/pdf/1911.05727v1.pdf
PWC https://paperswithcode.com/paper/artificial-intelligence-strategies-for
Repo
Framework

A Survey on Traffic Signal Control Methods

Title A Survey on Traffic Signal Control Methods
Authors Hua Wei, Guanjie Zheng, Vikash Gayah, Zhenhui Li
Abstract Traffic signal control is an important and challenging real-world problem, which aims to minimize the travel time of vehicles by coordinating their movements at road intersections. Current traffic signal control systems in use still rely heavily on oversimplified information and rule-based methods, although we now have richer data, more computing power and advanced methods to drive the development of intelligent transportation. With the growing interest in intelligent transportation using machine learning methods like reinforcement learning, this survey covers the widely acknowledged transportation approaches and a comprehensive list of recent literature on reinforcement learning for traffic signal control. We hope this survey can foster interdisciplinary research on this important topic.
Tasks
Published 2019-04-17
URL https://arxiv.org/abs/1904.08117v3
PDF https://arxiv.org/pdf/1904.08117v3.pdf
PWC https://paperswithcode.com/paper/a-survey-on-traffic-signal-control-methods
Repo
Framework

Automated Ground Truth Estimation For Automotive Radar Tracking Applications With Portable GNSS And IMU Devices

Title Automated Ground Truth Estimation For Automotive Radar Tracking Applications With Portable GNSS And IMU Devices
Authors Nicolas Scheiner, Stefan Haag, Nils Appenrodt, Bharanidhar Duraisamy, Jürgen Dickmann, Martin Fritzsche, Bernhard Sick
Abstract Baseline generation for tracking applications is a difficult task when working with real-world radar data. Data sparsity usually only allows an indirect way of estimating the original tracks, as most objects’ centers are not represented in the data. This article proposes an automated way of acquiring reference trajectories by using a highly accurate hand-held global navigation satellite system (GNSS). An embedded inertial measurement unit (IMU) is used for estimating orientation and motion behavior. This article makes two major contributions. First, a method for associating radar data with vulnerable road user (VRU) tracks is described, and it is evaluated how accurately the system performs under different GNSS reception conditions and how carrying a reference system alters radar measurements. Second, the system is used to track pedestrians and cyclists over many measurement cycles in order to generate object-centered occupancy grid maps. The reference system allows real-world radar data distributions of VRUs to be generated much more precisely than conventional methods do. Hereby, an important step towards radar-based VRU tracking is accomplished.
Tasks
Published 2019-05-28
URL https://arxiv.org/abs/1905.11987v2
PDF https://arxiv.org/pdf/1905.11987v2.pdf
PWC https://paperswithcode.com/paper/automated-ground-truth-estimation-for
Repo
Framework
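
A simple hedged sketch of associating radar detections with a GNSS reference trajectory; this nearest-neighbour gating is my own illustration, and the paper's association method is likely more involved. Interpolate the reference position at each detection's timestamp and accept the detection if it falls within a gating radius.

```python
import numpy as np

def associate_detections(det_t, det_xy, ref_t, ref_xy, gate_m=2.0):
    """det_t: (N,) detection timestamps;  det_xy: (N, 2) detection positions.
    ref_t: (M,) reference timestamps (sorted); ref_xy: (M, 2) GNSS reference positions.
    Returns a boolean mask of detections within `gate_m` metres of the track."""
    ref_x = np.interp(det_t, ref_t, ref_xy[:, 0])   # interpolate the reference track
    ref_y = np.interp(det_t, ref_t, ref_xy[:, 1])   # at each detection time
    dist = np.hypot(det_xy[:, 0] - ref_x, det_xy[:, 1] - ref_y)
    return dist <= gate_m
```

Accumulating the accepted detections in the reference track's moving coordinate frame is what would then yield object-centered occupancy grid maps of the kind described above.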

Event-Driven Models

Title Event-Driven Models
Authors Dimiter Dobrev
Abstract In Reinforcement Learning we look for meaning in the flow of input/output information. If we do not find meaning, the information flow is not more than noise to us. Before we are able to find meaning, we should first learn how to discover and identify objects. What is an object? In this article we will demonstrate that an object is an event-driven model. These models are a generalization of action-driven models. In Markov Decision Process we have an action-driven model which changes its state at each step. The advantage of event-driven models is their greater sustainability as they change their states only upon the occurrence of particular events. These events may occur very rarely, therefore the state of the event-driven model is much more predictable.
Tasks
Published 2019-06-24
URL https://arxiv.org/abs/1906.10740v1
PDF https://arxiv.org/pdf/1906.10740v1.pdf
PWC https://paperswithcode.com/paper/event-driven-models
Repo
Framework
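
A toy sketch contrasting the two kinds of model the abstract describes; the event predicate and the state contents are mine, purely for illustration. An action-driven model updates its state every step, whereas an event-driven model changes state only when a designated event fires, so its state is stable and predictable between rare events.

```python
class ActionDrivenModel:
    """State changes on every step, as in a standard MDP model."""
    def __init__(self):
        self.state = 0

    def step(self, observation):
        self.state += 1
        return self.state


class EventDrivenModel:
    """State changes only when the event predicate fires."""
    def __init__(self, event):
        self.event = event              # predicate: observation -> bool
        self.state = 0

    def step(self, observation):
        if self.event(observation):
            self.state += 1
        return self.state


# Example: the event "a door opens" fires rarely, so the event-driven
# state stays constant and predictable across many intervening steps.
door_model = EventDrivenModel(event=lambda obs: obs == "door_opened")
```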