January 27, 2020

3271 words 16 mins read

Paper Group ANR 1182


Monte-Carlo Tree Search for Simulation-based Strategy Analysis

Title Monte-Carlo Tree Search for Simulation-based Strategy Analysis
Authors Alexander Zook, Brent Harrison, Mark O. Riedl
Abstract Games are often designed to shape player behavior in a desired way; however, it can be unclear how design decisions affect the space of behaviors in a game. Designers usually explore this space through human playtesting, which can be time-consuming and of limited effectiveness in exhausting the space of possible behaviors. In this paper, we propose the use of automated planning agents to simulate humans of varying skill levels to generate game playthroughs. Metrics can then be gathered from these playthroughs to evaluate the current game design and identify its potential flaws. We demonstrate this technique in two games: the popular word game Scrabble and a collectible card game of our own design named Cardonomicon. Using these case studies, we show how using simulated agents to model humans of varying skill levels allows us to extract metrics to describe game balance (in the case of Scrabble) and highlight potential design flaws (in the case of Cardonomicon).
Tasks
Published 2019-08-04
URL https://arxiv.org/abs/1908.01423v1
PDF https://arxiv.org/pdf/1908.01423v1.pdf
PWC https://paperswithcode.com/paper/monte-carlo-tree-search-for-simulation-based
Repo
Framework
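The paper's idea of simulating players of varying skill can be sketched with a flat Monte-Carlo move picker on a toy game, where the playout budget stands in for skill level (the real system uses full MCTS on Scrabble and Cardonomicon; the game, move values, and budget below are invented for illustration):

```python
import random

# Toy stand-in for a playthrough simulator: each move has a hidden win rate.
TRUE_VALUE = {"a": 0.2, "b": 0.8, "c": 0.5}

def simulate_playout(move, rng):
    """One noisy playout of a move; returns 1.0 on a win, 0.0 otherwise."""
    return 1.0 if rng.random() < TRUE_VALUE[move] else 0.0

def choose_move(moves, budget, rng):
    """Pick the move with the best average playout result.

    `budget` (playouts allowed) stands in for skill: low-skill agents get
    few playouts and pick noisily, high-skill agents converge on the
    strongest move, which is what lets simulated skill levels expose
    balance problems in a design.
    """
    totals = {m: 0.0 for m in moves}
    counts = {m: 0 for m in moves}
    for i in range(budget):
        m = moves[i % len(moves)]          # round-robin playout allocation
        totals[m] += simulate_playout(m, rng)
        counts[m] += 1
    return max(moves, key=lambda m: totals[m] / max(counts[m], 1))

rng = random.Random(0)
assert choose_move(["a", "b", "c"], budget=600, rng=rng) == "b"
```

Metrics such as win rate per skill level can then be aggregated over many such simulated playthroughs instead of human playtests.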

Playing it Safe: Adversarial Robustness with an Abstain Option

Title Playing it Safe: Adversarial Robustness with an Abstain Option
Authors Cassidy Laidlaw, Soheil Feizi
Abstract We explore adversarial robustness in the setting in which it is acceptable for a classifier to abstain—that is, output no class—on adversarial examples. Adversarial examples are small perturbations of normal inputs to a classifier that cause the classifier to give incorrect output; they present security and safety challenges for machine learning systems. In many safety-critical applications, it is less costly for a classifier to abstain on adversarial examples than to give incorrect output for them. We first introduce a novel objective function for adversarial robustness with an abstain option which characterizes an explicit tradeoff between robustness and accuracy. We then present a simple baseline in which an adversarially-trained classifier abstains on all inputs within a certain distance of the decision boundary, which we theoretically and experimentally evaluate. Finally, we propose Combined Abstention Robustness Learning (CARL), a method for jointly learning a classifier and the region of the input space on which it should abstain. We explore different variations of the PGD and DeepFool adversarial attacks on CARL in the abstain setting. Evaluating against these attacks, we demonstrate that training with CARL results in a more accurate, robust, and efficient classifier than the baseline.
Tasks
Published 2019-11-25
URL https://arxiv.org/abs/1911.11253v1
PDF https://arxiv.org/pdf/1911.11253v1.pdf
PWC https://paperswithcode.com/paper/playing-it-safe-adversarial-robustness-with
Repo
Framework
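The baseline described in the abstract, abstaining on inputs within a certain distance of the decision boundary, is easiest to see for a linear classifier, where the distance to the boundary has a closed form (this is a minimal sketch, not the adversarially-trained deep model the paper evaluates):

```python
import numpy as np

def predict_with_abstain(x, w, b, margin):
    """Linear classifier that abstains (returns None) near the boundary.

    Abstain whenever the input lies within `margin` of the hyperplane
    w.x + b = 0, where a small adversarial perturbation could flip the
    predicted label.
    """
    score = float(np.dot(w, x) + b)
    distance = abs(score) / float(np.linalg.norm(w))
    if distance < margin:
        return None                     # too close: possibly adversarial
    return 1 if score > 0 else 0

w, b = np.array([1.0, 0.0]), 0.0        # boundary is the vertical axis
assert predict_with_abstain(np.array([2.0, 0.0]), w, b, margin=0.5) == 1
assert predict_with_abstain(np.array([-2.0, 1.0]), w, b, margin=0.5) == 0
assert predict_with_abstain(np.array([0.1, 3.0]), w, b, margin=0.5) is None
```

The tradeoff the paper's objective formalizes is visible even here: widening `margin` rejects more adversarial inputs but also abstains on more clean ones.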

Deep Reinforcement Learning for Trading

Title Deep Reinforcement Learning for Trading
Authors Zihao Zhang, Stefan Zohren, Stephen Roberts
Abstract We adopt Deep Reinforcement Learning algorithms to design trading strategies for continuous futures contracts. Both discrete and continuous action spaces are considered and volatility scaling is incorporated to create reward functions which scale trade positions based on market volatility. We test our algorithms on the 50 most liquid futures contracts from 2011 to 2019, and investigate how performance varies across different asset classes including commodities, equity indices, fixed income and FX markets. We compare our algorithms against classical time series momentum strategies, and show that our method outperforms such baseline models, delivering positive profits despite heavy transaction costs. The experiments show that the proposed algorithms can follow large market trends without changing positions and can also scale down, or hold, through consolidation periods.
Tasks Time Series
Published 2019-11-22
URL https://arxiv.org/abs/1911.10107v1
PDF https://arxiv.org/pdf/1911.10107v1.pdf
PWC https://paperswithcode.com/paper/deep-reinforcement-learning-for-trading
Repo
Framework
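The volatility scaling mentioned in the abstract can be sketched as a per-step reward that scales P&L by target-over-realized volatility and charges for position changes; the parameter names and the simple cost model here are illustrative assumptions, not the paper's exact formulation:

```python
def volatility_scaled_reward(position, ret, realized_vol, prev_position,
                             target_vol=0.10, cost_bp=1e-4):
    """One-step reward: volatility-scaled P&L minus a turnover cost.

    Scaling by target/realized volatility shrinks positions in turbulent
    markets; the turnover term penalizes trading, which is what lets the
    learned policy hold through consolidation periods.
    """
    scale = target_vol / max(realized_vol, 1e-8)
    pnl = position * ret * scale
    transaction_cost = cost_bp * abs(position - prev_position)
    return pnl - transaction_cost

# The same signal earns more in a calm market than a volatile one.
calm = volatility_scaled_reward(1.0, 0.01, realized_vol=0.05, prev_position=1.0)
wild = volatility_scaled_reward(1.0, 0.01, realized_vol=0.20, prev_position=1.0)
assert calm > wild
# Flipping the position incurs a cost that holding does not.
held = volatility_scaled_reward(1.0, 0.0, 0.10, prev_position=1.0)
flipped = volatility_scaled_reward(1.0, 0.0, 0.10, prev_position=-1.0)
assert held > flipped
```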

Safe and Fast Tracking on a Robot Manipulator: Robust MPC and Neural Network Control

Title Safe and Fast Tracking on a Robot Manipulator: Robust MPC and Neural Network Control
Authors Julian Nubert, Johannes Köhler, Vincent Berenz, Frank Allgöwer, Sebastian Trimpe
Abstract Fast feedback control and safety guarantees are essential in modern robotics. We present an approach that achieves both by combining novel robust model predictive control (MPC) with function approximation via (deep) neural networks (NNs). The result is a new approach for complex tasks with nonlinear, uncertain, and constrained dynamics as are common in robotics. Specifically, we leverage recent results in MPC research to propose a new robust setpoint tracking MPC algorithm, which achieves reliable and safe tracking of a dynamic setpoint while guaranteeing stability and constraint satisfaction. The presented robust MPC scheme constitutes a one-layer approach that unifies the often separated planning and control layers, by directly computing the control command based on a reference and possibly obstacle positions. As a separate contribution, we show how the computation time of the MPC can be drastically reduced by approximating the MPC law with a NN controller. The NN is trained and validated from offline samples of the MPC, yielding statistical guarantees, and used in lieu thereof at run time. Our experiments on a state-of-the-art robot manipulator are the first to show that both the proposed robust and approximate MPC schemes scale to real-world robotic systems.
Tasks
Published 2019-12-22
URL https://arxiv.org/abs/1912.10360v2
PDF https://arxiv.org/pdf/1912.10360v2.pdf
PWC https://paperswithcode.com/paper/safe-and-fast-tracking-control-on-a-robot
Repo
Framework
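The offline-approximation workflow in the second contribution, sample the MPC law offline, fit a cheap surrogate, then validate it statistically on held-out samples, can be sketched with a least-squares model standing in for the paper's neural network (the linear "MPC law" below is made up for the demonstration):

```python
import numpy as np

def fit_mpc_surrogate(states, controls):
    """Fit an affine surrogate of an MPC law from offline (state, command)
    samples. The paper trains a neural network; least squares is a minimal
    stand-in that shows the same workflow."""
    X = np.hstack([states, np.ones((len(states), 1))])
    W, *_ = np.linalg.lstsq(X, controls, rcond=None)
    return W

def surrogate_accuracy(W, states, controls, tol):
    """Fraction of held-out samples within `tol` of the true MPC command,
    i.e. the empirical quantity behind a statistical validation guarantee."""
    X = np.hstack([states, np.ones((len(states), 1))])
    err = np.linalg.norm(X @ W - controls, axis=1)
    return float((err <= tol).mean())

rng = np.random.default_rng(0)
K = np.array([[0.5, -0.2], [0.1, 0.9]])       # hypothetical linear "MPC law"
train_x = rng.normal(size=(200, 2))
train_u = train_x @ K.T
W = fit_mpc_surrogate(train_x, train_u)
test_x = rng.normal(size=(50, 2))
assert surrogate_accuracy(W, test_x, test_x @ K.T, tol=1e-6) == 1.0
```

At run time the surrogate replaces the expensive online optimization, which is where the drastic reduction in computation time comes from.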

FRODO: Free rejection of out-of-distribution samples: application to chest x-ray analysis

Title FRODO: Free rejection of out-of-distribution samples: application to chest x-ray analysis
Authors Erdi Çallı, Keelin Murphy, Ecem Sogancioglu, Bram van Ginneken
Abstract In this work, we propose a method to reject out-of-distribution samples which can be adapted to any network architecture and requires no additional training data. Publicly available chest x-ray data (38,353 images) is used to train a standard ResNet-50 model to detect emphysema. Feature activations of intermediate layers are used as descriptors defining the training data distribution. A novel metric, FRODO, is measured by using the Mahalanobis distance of a new test sample to the training data distribution. The method is tested using a held-out test dataset of 21,176 chest x-rays (in-distribution) and a set of 14,821 out-of-distribution x-ray images of incorrect orientation or anatomy. In classifying test samples as in or out-of distribution, our method achieves an AUC score of 0.99.
Tasks
Published 2019-07-02
URL https://arxiv.org/abs/1907.01253v1
PDF https://arxiv.org/pdf/1907.01253v1.pdf
PWC https://paperswithcode.com/paper/frodo-free-rejection-of-out-of-distribution
Repo
Framework
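The core of the method, a Mahalanobis distance from a test sample's feature activations to the training-data distribution, can be sketched in a few lines (the synthetic Gaussian features below stand in for the intermediate ResNet-50 activations the paper uses):

```python
import numpy as np

def fit_feature_stats(features):
    """Mean and (pseudo-)inverse covariance of training feature activations."""
    mu = features.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(features, rowvar=False))
    return mu, cov_inv

def mahalanobis_score(x, mu, cov_inv):
    """Distance of a test sample's features to the training distribution;
    large scores flag out-of-distribution inputs."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(500, 4))   # stand-in for network features
mu, cov_inv = fit_feature_stats(train)
in_dist = rng.normal(0.0, 1.0, size=4)
out_dist = np.full(4, 8.0)                    # far outside the training cloud
assert mahalanobis_score(out_dist, mu, cov_inv) > mahalanobis_score(in_dist, mu, cov_inv)
```

Because the statistics come from features the network already computes, the rejection is "free": no extra training data or architecture changes are needed.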

The Convergence of Iterative Delegations in Liquid Democracy in a Social Network

Title The Convergence of Iterative Delegations in Liquid Democracy in a Social Network
Authors Bruno Escoffier, Hugo Gilbert, Adèle Pass-Lanneau
Abstract Liquid democracy is a collective decision making paradigm which lies between direct and representative democracy. One of its main features is that voters can delegate their votes in a transitive manner such that: A delegates to B and B delegates to C means that A indirectly delegates to C. These delegations can be effectively empowered by implementing liquid democracy in a social network, so that voters can delegate their votes to any of their neighbors in the network. However, it is uncertain that such a delegation process will lead to a stable state where all voters are satisfied with the people representing them. We study the stability (w.r.t. voters' preferences) of the delegation process in liquid democracy and model it as a game in which the players are the voters and the strategies are their possible delegations. We answer several questions on the equilibria of this process in any social network or in social networks that correspond to restricted types of graphs. We show that a Nash equilibrium may not exist, and that it is even NP-complete to decide whether one exists or not. This holds even if the social network is a complete graph or a bounded-degree graph. We further show that this existence problem is W[1]-hard w.r.t. the treewidth of the social network. Besides these hardness results, we demonstrate that an equilibrium always exists, whatever the preferences of the voters, iff the social network is a tree. We design a dynamic programming procedure to determine some desirable equilibria (e.g., minimizing the dissatisfaction of the voters) in polynomial time for tree social networks. Lastly, we study the convergence of delegation dynamics. Unfortunately, when an equilibrium exists, we show that a best response dynamics may not converge, even if the social network is a path or a complete graph.
Tasks Decision Making
Published 2019-04-10
URL https://arxiv.org/abs/1904.05775v2
PDF https://arxiv.org/pdf/1904.05775v2.pdf
PWC https://paperswithcode.com/paper/the-convergence-of-iterative-delegations-in-1
Repo
Framework
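The transitive delegation rule described in the abstract (A delegates to B, B to C, so A is represented by C) amounts to following delegation pointers until reaching a direct voter, with cycles leaving a voter unrepresented; a minimal sketch:

```python
def resolve_delegations(delegates):
    """Follow transitive delegations to each voter's final representative.

    `delegates[v]` is the neighbor v delegates to, or None if v votes
    directly. Voters caught in a delegation cycle get None: no one ends up
    casting their vote, one reason stability of the process matters.
    """
    def guru(v):
        seen = set()
        while delegates.get(v) is not None:
            if v in seen:
                return None          # delegation cycle: no representative
            seen.add(v)
            v = delegates[v]
        return v
    return {v: guru(v) for v in delegates}

# A delegates to B, B delegates to C, C votes directly.
chain = {"A": "B", "B": "C", "C": None}
assert resolve_delegations(chain) == {"A": "C", "B": "C", "C": "C"}
```

The game the paper studies lets each voter change their delegation pointer to improve their own representative, and asks when such best responses settle into a Nash equilibrium.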

Multi-level Similarity Learning for Low-Shot Recognition

Title Multi-level Similarity Learning for Low-Shot Recognition
Authors Hongwei Xv, Xin Sun, Junyu Dong, Shu Zhang, Qiong Li
Abstract Low-shot learning indicates the ability to recognize unseen objects based on very limited labeled training samples, which simulates human visual intelligence. Following this concept, we propose a multi-level similarity model (MLSM) to capture the deep encoded distance metric between the support and query samples. Our approach builds on the fact that image similarity learning can be decomposed into image-level, global-level, and object-level similarity. Once the similarity function is established, MLSM is able to classify images of unseen classes by computing the similarity scores between a limited number of labeled samples and the target images. Furthermore, we conduct 5-way experiments with both 1-shot and 5-shot settings on the Caltech-UCSD datasets. The results demonstrate that the proposed model achieves promising results compared with existing methods in practical applications.
Tasks
Published 2019-12-13
URL https://arxiv.org/abs/1912.06418v1
PDF https://arxiv.org/pdf/1912.06418v1.pdf
PWC https://paperswithcode.com/paper/multi-level-similarity-learning-for-low-shot
Repo
Framework

Vector spaces as Kripke frames

Title Vector spaces as Kripke frames
Authors Giuseppe Greco, Fei Liang, Michael Moortgat, Alessandra Palmigiano
Abstract In recent years, the compositional distributional approach in computational linguistics has opened the way for an integration of the \emph{lexical} aspects of meaning into Lambek’s type-logical grammar program. This approach is based on the observation that a sound semantics for the associative, commutative and unital Lambek calculus can be based on vector spaces by interpreting fusion as the tensor product of vector spaces. In this paper, we build on this observation and extend it to a ‘vector space semantics’ for the {\em general} Lambek calculus, based on {\em algebras over a field} $\mathbb{K}$ (or $\mathbb{K}$-algebras), i.e. vector spaces endowed with a bilinear binary product. Such structures are well known in algebraic geometry and algebraic topology, since they are important instances of Lie algebras and Hopf algebras. Applying results and insights from duality and representation theory for the algebraic semantics of nonclassical logics, we regard $\mathbb{K}$-algebras as ‘Kripke frames’ the complex algebras of which are complete residuated lattices. This perspective makes it possible to establish a systematic connection between vector space semantics and the standard Routley-Meyer semantics of (modal) substructural logics.
Tasks
Published 2019-08-15
URL https://arxiv.org/abs/1908.05528v2
PDF https://arxiv.org/pdf/1908.05528v2.pdf
PWC https://paperswithcode.com/paper/vector-spaces-as-kripke-frames
Repo
Framework

Automatic Neural Network Compression by Sparsity-Quantization Joint Learning: A Constrained Optimization-based Approach

Title Automatic Neural Network Compression by Sparsity-Quantization Joint Learning: A Constrained Optimization-based Approach
Authors Haichuan Yang, Shupeng Gui, Yuhao Zhu, Ji Liu
Abstract Deep Neural Networks (DNNs) are applied in a wide range of use cases. There is an increased demand for deploying DNNs on devices that do not have abundant resources such as memory and computation units. Recently, network compression through a variety of techniques such as pruning and quantization has been proposed to reduce the resource requirement. A key parameter that all existing compression techniques are sensitive to is the compression ratio (e.g., pruning sparsity, quantization bitwidth) of each layer. Traditional solutions treat the compression ratios of each layer as hyper-parameters and tune them using human heuristics. More recent work uses black-box hyper-parameter optimization, but this introduces new hyper-parameters and has efficiency issues. In this paper, we propose a framework to jointly prune and quantize DNNs automatically according to a target model size, without using any hyper-parameters to manually set the compression ratio for each layer. In the experiments, we show that our framework can compress the weights of ResNet-50 to be 836$\times$ smaller without accuracy loss on CIFAR-10, and compress AlexNet to be 205$\times$ smaller without accuracy loss on ImageNet classification.
Tasks Neural Network Compression, Quantization
Published 2019-10-14
URL https://arxiv.org/abs/1910.05897v3
PDF https://arxiv.org/pdf/1910.05897v3.pdf
PWC https://paperswithcode.com/paper/learning-sparsity-and-quantization-jointly
Repo
Framework
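The two operations the framework tunes jointly, pruning sparsity and quantization bitwidth, can be illustrated naively with fixed hand-set ratios (exactly what the paper's constrained-optimization approach avoids; this sketch only shows what each knob does):

```python
import numpy as np

def prune_and_quantize(w, sparsity, bits):
    """Magnitude pruning followed by uniform quantization of the survivors.

    `sparsity` and `bits` are fixed by hand here; the paper instead chooses
    them per layer automatically to meet a target model size.
    """
    flat = np.abs(w).ravel()
    k = int(sparsity * flat.size)
    thresh = np.sort(flat)[k] if k > 0 else -np.inf
    mask = np.abs(w) >= thresh            # drop the smallest-magnitude weights
    pruned = w * mask
    levels = 2 ** bits - 1                # uniform grid for surviving weights
    wmax = max(float(np.abs(pruned).max()), 1e-12)
    quantized = np.round(pruned / wmax * levels) / levels * wmax
    return quantized, mask

w = np.array([0.9, -0.05, 0.4, 0.01, -0.7, 0.3, 0.02, -0.6])
q, mask = prune_and_quantize(w, sparsity=0.5, bits=4)
assert mask.sum() == 4                               # half the weights removed
assert len(np.unique(np.abs(q[q != 0]))) <= 2 ** 4   # few distinct magnitudes
```

Sparse masks shrink storage via the zero entries; low bitwidths shrink each surviving weight, and the compression ratio depends jointly on both.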

Leveraging Semantic Embeddings for Safety-Critical Applications

Title Leveraging Semantic Embeddings for Safety-Critical Applications
Authors Thomas Brunner, Frederik Diehl, Michael Truong Le, Alois Knoll
Abstract Semantic Embeddings are a popular way to represent knowledge in the field of zero-shot learning. We observe their interpretability and discuss their potential utility in a safety-critical context. Concretely, we propose to use them to add introspection and error detection capabilities to neural network classifiers. First, we show how to create embeddings from symbolic domain knowledge. We discuss how to use them for interpreting mispredictions and propose a simple error detection scheme. We then introduce the concept of semantic distance: a real-valued score that measures confidence in the semantic space. We evaluate this score on a traffic sign classifier and find that it achieves near state-of-the-art performance, while being significantly faster to compute than other confidence scores. Our approach requires no changes to the original network and is thus applicable to any task for which domain knowledge is available.
Tasks Zero-Shot Learning
Published 2019-05-19
URL https://arxiv.org/abs/1905.07733v1
PDF https://arxiv.org/pdf/1905.07733v1.pdf
PWC https://paperswithcode.com/paper/leveraging-semantic-embeddings-for-safety
Repo
Framework
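The semantic-distance confidence score can be sketched as the distance from a network's predicted embedding to its nearest class embedding; the class vectors and threshold below are hypothetical, whereas the paper derives its embeddings from symbolic domain knowledge:

```python
import numpy as np

def semantic_distance(pred_embedding, class_embeddings):
    """Distance from a predicted embedding to the nearest class embedding.

    A small distance means the prediction lands squarely on a known class;
    a large distance signals low confidence and a likely misprediction,
    which is the error-detection use discussed in the paper.
    """
    dists = np.linalg.norm(class_embeddings - pred_embedding, axis=1)
    nearest = int(np.argmin(dists))
    return nearest, float(dists[nearest])

# Two made-up class embeddings, e.g. "stop sign" vs. "speed limit".
classes = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0]])
cls, dist = semantic_distance(np.array([0.9, 0.1, 0.0]), classes)
assert cls == 0 and dist < 0.5          # confident: close to class 0
cls, dist = semantic_distance(np.array([0.5, 0.5, 0.7]), classes)
assert dist > 0.5                       # far from every class: flag as error
```

Since the score is a single nearest-neighbor distance in embedding space, it is cheap to compute relative to sampling-based confidence estimates.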

From Search Engines to Search Services: An End-User Driven Approach

Title From Search Engines to Search Services: An End-User Driven Approach
Authors Gabriela Bosetti, Sergio Firmenich, Alejandro Fernandez, Marco Winckler, Gustavo Rossi
Abstract The World Wide Web is a vast and continuously changing source of information where searching is a frequent, and sometimes critical, user task. Searching is not always the user’s primary goal but an ancillary task performed to find complementary information that allows completing another task. In this paper, we explore primary and/or ancillary search tasks and propose an approach for simplifying the user interaction during search tasks. Rather than focusing on dedicated search engines, our approach allows the user to abstract search engines already provided by Web applications into pervasive search services that will be available for performing searches from any other Web site. We also propose to allow users to manage the way in which search results are displayed and the interaction with them. In order to illustrate the feasibility of this approach, we have built a support tool based on a plug-in architecture that allows users to integrate new search services (created by themselves by means of visual tools) and execute them in the context of both kinds of searches. A case study illustrates the use of these tools. We also present the results of two evaluations that demonstrate the feasibility of the approach and the benefits of its use.
Tasks
Published 2019-05-24
URL https://arxiv.org/abs/1905.10215v1
PDF https://arxiv.org/pdf/1905.10215v1.pdf
PWC https://paperswithcode.com/paper/from-search-engines-to-search-services-an-end
Repo
Framework

Make Up Your Mind! Adversarial Generation of Inconsistent Natural Language Explanations

Title Make Up Your Mind! Adversarial Generation of Inconsistent Natural Language Explanations
Authors Oana-Maria Camburu, Brendan Shillingford, Pasquale Minervini, Thomas Lukasiewicz, Phil Blunsom
Abstract To increase trust in artificial intelligence systems, a growing number of works enhance these systems with the capability of producing natural language explanations that support their predictions. In this work, we show that such appealing frameworks are nonetheless prone to generating inconsistent explanations, such as “A dog is an animal” and “A dog is not an animal”, which are likely to decrease users’ trust in these systems. To detect such inconsistencies, we introduce a simple but effective adversarial framework for generating a complete target sequence, a scenario that has not been addressed so far. Finally, we apply our framework to a state-of-the-art neural model that provides natural language explanations on SNLI, and we show that this model is capable of generating a significant number of inconsistencies.
Tasks
Published 2019-10-07
URL https://arxiv.org/abs/1910.03065v2
PDF https://arxiv.org/pdf/1910.03065v2.pdf
PWC https://paperswithcode.com/paper/make-up-your-mind-adversarial-generation-of
Repo
Framework
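The kind of inconsistency the paper targets, an explanation and its direct negation, can be illustrated with a toy string-level check (the paper instead uses an adversarial framework to *generate* inputs that elicit such pairs from a neural model; this detector is only a stand-in for the final comparison step):

```python
def is_negation_inconsistent(e1, e2):
    """Flag the simplest inconsistency pattern: one explanation is the
    direct "is"/"is not" negation of the other."""
    norm = lambda s: s.strip().lower().rstrip(".")
    a, b = norm(e1), norm(e2)
    return a.replace(" is ", " is not ", 1) == b or \
           b.replace(" is ", " is not ", 1) == a

assert is_negation_inconsistent("A dog is an animal.", "A dog is not an animal")
assert not is_negation_inconsistent("A dog is an animal", "A cat is an animal")
```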

Creative Procedural-Knowledge Extraction From Web Design Tutorials

Title Creative Procedural-Knowledge Extraction From Web Design Tutorials
Authors Longqi Yang, Chen Fang, Hailin Jin, Walter Chang, Deborah Estrin
Abstract Complex design tasks often require performing diverse actions in a specific order. To (semi-)autonomously accomplish these tasks, applications need to understand and learn a wide range of design procedures, i.e., Creative Procedural-Knowledge (CPK). Prior knowledge base construction and mining have not typically addressed the creative fields, such as design and arts. In this paper, we formalize an ontology of CPK using five components: goal, workflow, action, command and usage; and extract components’ values from online design tutorials. We scraped 19.6K tutorial-related webpages and built a web application for professional designers to identify and summarize CPK components. The annotated dataset consists of 819 unique commands, 47,491 actions, and 2,022 workflows and goals. Based on this dataset, we propose a general CPK extraction pipeline and demonstrate that existing text classification and sequence-to-sequence models are limited in identifying, predicting and summarizing complex operations described in heterogeneous styles. Through quantitative and qualitative error analysis, we discuss CPK extraction challenges that need to be addressed by future research.
Tasks Text Classification
Published 2019-04-18
URL http://arxiv.org/abs/1904.08587v1
PDF http://arxiv.org/pdf/1904.08587v1.pdf
PWC https://paperswithcode.com/paper/creative-procedural-knowledge-extraction-from
Repo
Framework

Learning Filter Basis for Convolutional Neural Network Compression

Title Learning Filter Basis for Convolutional Neural Network Compression
Authors Yawei Li, Shuhang Gu, Luc Van Gool, Radu Timofte
Abstract Convolutional neural networks (CNNs) based solutions have achieved state-of-the-art performances for many computer vision tasks, including classification and super-resolution of images. Usually the success of these methods comes with a cost of millions of parameters due to stacking deep convolutional layers. Moreover, quite a large number of filters are also used for a single convolutional layer, which exaggerates the parameter burden of current methods. Thus, in this paper, we try to reduce the number of parameters of CNNs by learning a basis of the filters in convolutional layers. For the forward pass, the learned basis is used to approximate the original filters and then used as parameters for the convolutional layers. We validate our proposed solution for multiple CNN architectures on image classification and image super-resolution benchmarks and compare favorably to the existing state-of-the-art in terms of reduction of parameters and preservation of accuracy.
Tasks Image Classification, Image Super-Resolution, Neural Network Compression, Super-Resolution
Published 2019-08-23
URL https://arxiv.org/abs/1908.08932v2
PDF https://arxiv.org/pdf/1908.08932v2.pdf
PWC https://paperswithcode.com/paper/learning-filter-basis-for-convolutional
Repo
Framework
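The parameter saving from expressing many filters over a shared basis can be sketched with a truncated SVD: flatten the filters into rows, take the top-k right singular vectors as the basis, and keep only per-filter coefficients (the paper *learns* the basis end-to-end; SVD is a closed-form stand-in):

```python
import numpy as np

def filter_basis_approx(filters, k):
    """Approximate N flattened filters as coefficients over a k-filter basis.

    filters: (N, d) matrix, one flattened convolutional filter per row.
    Returns (N, k) coefficients and a (k, d) shared basis, replacing N*d
    parameters with N*k + k*d.
    """
    U, S, Vt = np.linalg.svd(filters, full_matrices=False)
    basis = Vt[:k]                       # top-k right singular vectors
    coeffs = filters @ basis.T           # per-filter mixing coefficients
    return coeffs, basis

rng = np.random.default_rng(0)
# 16 filters that genuinely live in a 3-dimensional subspace of R^27.
latent = rng.normal(size=(16, 3))
mix = rng.normal(size=(3, 27))
filters = latent @ mix
coeffs, basis = filter_basis_approx(filters, k=3)
assert np.allclose(coeffs @ basis, filters, atol=1e-8)
# Parameter count drops from 16*27 to 16*3 + 3*27.
```

In the forward pass the reconstructed filters `coeffs @ basis` are used in place of the originals, which is how the method preserves accuracy while cutting parameters.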

Auxiliary Learning for Deep Multi-task Learning

Title Auxiliary Learning for Deep Multi-task Learning
Authors Yifan Liu, Bohan Zhuang, Chunhua Shen, Hao Chen, Wei Yin
Abstract Multi-task learning (MTL) is an efficient solution for solving multiple tasks simultaneously in order to achieve better speed and performance than handling each task separately. Most current methods can be categorized as either: (i) hard parameter sharing, where a subset of the parameters is shared among tasks while other parameters are task-specific; or (ii) soft parameter sharing, where all parameters are task-specific but jointly regularized. Both methods suffer from limitations: the shared hidden layers of the former are difficult to optimize due to competing objectives, while the complexity of the latter grows linearly with the number of tasks. To mitigate these drawbacks, this paper proposes an alternative in which we explicitly construct an auxiliary module to mimic soft parameter sharing and assist the optimization of the hard parameter sharing layers during training. In particular, the auxiliary module takes the outputs of the shared hidden layers as inputs and is supervised by the auxiliary task loss. During training, the auxiliary module is jointly optimized with the MTL network, serving as a regularizer that introduces an inductive bias to the shared layers. In the testing phase, only the original MTL network is kept. Our method thus avoids the limitations of both categories. We evaluate the proposed auxiliary module on pixel-wise prediction tasks, including semantic segmentation, depth estimation, and surface normal prediction, with different network structures. Extensive experiments over various settings verify the effectiveness of our method.
Tasks Auxiliary Learning, Depth Estimation, Multi-Task Learning, Semantic Segmentation
Published 2019-09-05
URL https://arxiv.org/abs/1909.02214v2
PDF https://arxiv.org/pdf/1909.02214v2.pdf
PWC https://paperswithcode.com/paper/training-compact-neural-networks-via
Repo
Framework