April 3, 2020

2846 words 14 mins read

Paper Group ANR 32

Moving Target Monte Carlo. Can ML predict the solution value for a difficult combinatorial problem? Algorithmic Recourse: from Counterfactual Explanations to Interventions. POPCORN: Partially Observed Prediction COnstrained ReiNforcement Learning. Verifiable RNN-Based Policies for POMDPs Under Temporal Logic Constraints. Du$^2$Net: Learning Depth …

Moving Target Monte Carlo

Title Moving Target Monte Carlo
Authors Haoyun Ying, Keheng Mao, Klaus Mosegaard
Abstract Markov chain Monte Carlo (MCMC) methods are popular when sampling from a high-dimensional random variable $\mathbf{x}$ with a possibly unnormalised probability density $p$ and observed data $\mathbf{d}$. However, MCMC requires evaluating the posterior distribution $p(\mathbf{x}\mid\mathbf{d})$ of the proposed candidate $\mathbf{x}$ at each iteration when constructing the acceptance rate. This is costly when such evaluations are intractable. In this paper, we introduce a new non-Markovian sampling algorithm called Moving Target Monte Carlo (MTMC). The acceptance rate at the $n$-th iteration is constructed using an iteratively updated approximation of the posterior distribution $a_n(\mathbf{x})$ instead of $p(\mathbf{x}\mid\mathbf{d})$. The true value of the posterior $p(\mathbf{x}\mid\mathbf{d})$ is only calculated if the candidate $\mathbf{x}$ is accepted. The approximation $a_n$ utilises these evaluations and converges to $p$ as $n \rightarrow \infty$. A proof of convergence and estimates of the convergence rate in different situations are given.
Published 2020-03-10
URL https://arxiv.org/abs/2003.04873v1
PDF https://arxiv.org/pdf/2003.04873v1.pdf
PWC https://paperswithcode.com/paper/moving-target-monte-carlo
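To make the idea concrete, here is a hypothetical one-dimensional sketch of the MTMC scheme (the `mtmc` function and the nearest-neighbour surrogate are illustrative assumptions, not the paper's construction of $a_n$): the acceptance test uses a cheap approximation built from cached true evaluations, and the expensive log-posterior is only called when a candidate is accepted.

```python
import math
import random

def mtmc(log_post, x0, n_iter=2000, step=0.5, seed=0):
    """Toy MTMC-style sampler: accept/reject using a cheap surrogate a_n
    (here: the cached value at the nearest previously evaluated point),
    and evaluate the true log-posterior only on acceptance, which grows
    the cache and refines the surrogate over time."""
    rng = random.Random(seed)
    x = x0
    cache = {x0: log_post(x0)}  # true evaluations collected so far

    def approx(z):
        # a_n: log-posterior of the nearest previously evaluated point
        return cache[min(cache, key=lambda c: abs(c - z))]

    samples = []
    for _ in range(n_iter):
        cand = x + rng.gauss(0.0, step)
        if math.log(rng.random() + 1e-300) < approx(cand) - approx(x):
            cache[cand] = log_post(cand)  # expensive call only on acceptance
            x = cand
        samples.append(x)
    return samples
```

Run against a standard-normal log-density, the chain's sample mean and spread settle near 0 and 1, while the number of true posterior evaluations stays well below the number of iterations.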

Can ML predict the solution value for a difficult combinatorial problem?

Title Can ML predict the solution value for a difficult combinatorial problem?
Authors Constantine Goulimis, Gastón Simone
Abstract We look at whether machine learning can predict the final objective function value of a difficult combinatorial optimisation problem from the input. Our context is the pattern reduction problem, one industrially important but difficult aspect of the cutting stock problem. Machine learning appears to have higher prediction accuracy than a naïve model, reducing mean absolute percentage error (MAPE) from 12.0% to 8.7%.
Published 2020-03-06
URL https://arxiv.org/abs/2003.03181v1
PDF https://arxiv.org/pdf/2003.03181v1.pdf
PWC https://paperswithcode.com/paper/can-ml-predict-the-solution-value-for-a
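For reference, the metric the abstract reports (12.0% vs 8.7%) can be computed as:

```python
def mape(actual, predicted):
    """Mean absolute percentage error: average of |a - p| / |a|, in percent."""
    return 100.0 * sum(abs(a - p) / abs(a)
                       for a, p in zip(actual, predicted)) / len(actual)
```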

Algorithmic Recourse: from Counterfactual Explanations to Interventions

Title Algorithmic Recourse: from Counterfactual Explanations to Interventions
Authors Amir-Hossein Karimi, Bernhard Schölkopf, Isabel Valera
Abstract As machine learning is increasingly used to inform consequential decision-making (e.g., pre-trial bail and loan approval), it becomes important to explain how the system arrived at its decision, and also suggest actions to achieve a favorable decision. Counterfactual explanations – “how the world would have (had) to be different for a desirable outcome to occur” – aim to satisfy these criteria. Existing works have primarily focused on designing algorithms to obtain counterfactual explanations for a wide range of settings. However, one of the main objectives of “explanations as a means to help a data-subject act rather than merely understand” has been overlooked. In layman’s terms, counterfactual explanations inform an individual where they need to get to, but not how to get there. In this work, we rely on causal reasoning to caution against the use of counterfactual explanations as a recommendable set of actions for recourse. Instead, we propose a shift of paradigm from recourse via nearest counterfactual explanations to recourse through minimal interventions, moving the focus from explanations to recommendations. Finally, we provide the reader with an extensive discussion on how to realistically achieve recourse beyond structural interventions.
Tasks Decision Making
Published 2020-02-14
URL https://arxiv.org/abs/2002.06278v2
PDF https://arxiv.org/pdf/2002.06278v2.pdf
PWC https://paperswithcode.com/paper/algorithmic-recourse-from-counterfactual
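The gap between counterfactual explanations and interventions can be illustrated with a toy structural causal model (the decision rule and structural equation below are invented for illustration, not from the paper): a counterfactual explanation changes one feature while holding the others fixed, whereas an intervention propagates through the causal graph.

```python
def savings_eq(income, noise=2.0):
    """Toy structural equation: savings is causally downstream of income."""
    return 0.5 * income + noise

def approved(income, savings):
    """Toy decision rule standing in for the ML model."""
    return income + savings >= 14.0

# Factual individual: income 6 -> savings 5 -> loan rejected.
income = 6.0
savings = savings_eq(income)

# Counterfactual explanation "raise income to 8" treats features as
# independent: holding savings fixed at 5, the applicant is still rejected.
cf_rejected = not approved(8.0, savings)

# Minimal intervention do(income := 8) propagates through the structural
# equation: savings becomes 6, and the applicant is approved.
iv_approved = approved(8.0, savings_eq(8.0))
```

The same feature change succeeds as an intervention but fails as a counterfactual recommendation, which is exactly the caution the abstract raises.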

POPCORN: Partially Observed Prediction COnstrained ReiNforcement Learning

Title POPCORN: Partially Observed Prediction COnstrained ReiNforcement Learning
Authors Joseph Futoma, Michael C. Hughes, Finale Doshi-Velez
Abstract Many medical decision-making tasks can be framed as partially observed Markov decision processes (POMDPs). However, prevailing two-stage approaches that first learn a POMDP and then solve it often fail because the model that best fits the data may not be well suited for planning. We introduce a new optimization objective that (a) produces both high-performing policies and high-quality generative models, even when some observations are irrelevant for planning, and (b) does so in batch off-policy settings that are typical in healthcare, when only retrospective data is available. We demonstrate our approach on synthetic examples and a challenging medical decision-making problem.
Tasks Decision Making
Published 2020-01-13
URL https://arxiv.org/abs/2001.04032v2
PDF https://arxiv.org/pdf/2001.04032v2.pdf
PWC https://paperswithcode.com/paper/popcorn-partially-observed-prediction
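A minimal sketch of the prediction-constrained flavour of the objective (this is an illustrative penalty form, not the paper's exact objective): reward expected policy value, but penalise parameters whose generative log-likelihood falls below a required floor.

```python
def popcorn_style_objective(policy_value, log_lik, log_lik_floor, trade_off=10.0):
    """Illustrative prediction-constrained objective: the hinge penalty is
    zero while the generative model's log-likelihood meets the floor, so
    the optimiser cannot sacrifice model quality for policy value for free."""
    penalty = max(0.0, log_lik_floor - log_lik)
    return policy_value - trade_off * penalty
```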

Verifiable RNN-Based Policies for POMDPs Under Temporal Logic Constraints

Title Verifiable RNN-Based Policies for POMDPs Under Temporal Logic Constraints
Authors Steven Carr, Nils Jansen, Ufuk Topcu
Abstract Recurrent neural networks (RNNs) have emerged as an effective representation of control policies in sequential decision-making problems. However, a major drawback in the application of RNN-based policies is the difficulty in providing formal guarantees on the satisfaction of behavioral specifications, e.g. safety and/or reachability. By integrating techniques from formal methods and machine learning, we propose an approach to automatically extract a finite-state controller (FSC) from an RNN, which, when composed with a finite-state system model, is amenable to existing formal verification tools. Specifically, we introduce an iterative modification to the so-called quantized bottleneck insertion technique to create an FSC as a randomized policy with memory. For the cases in which the resulting FSC fails to satisfy the specification, verification generates diagnostic information. We utilize this information to either adjust the amount of memory in the extracted FSC or perform focused retraining of the RNN. While generally applicable, we detail the resulting iterative procedure in the context of policy synthesis for partially observable Markov decision processes (POMDPs), which is known to be notoriously hard. The numerical experiments show that the proposed approach outperforms traditional POMDP synthesis methods by three orders of magnitude while remaining within 2% of optimal benchmark values.
Tasks Decision Making
Published 2020-02-13
URL https://arxiv.org/abs/2002.05615v1
PDF https://arxiv.org/pdf/2002.05615v1.pdf
PWC https://paperswithcode.com/paper/verifiable-rnn-based-policies-for-pomdps
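The core of the quantized-bottleneck idea can be sketched as follows (the level set and helper names are hypothetical; the paper's technique inserts and trains the bottleneck inside the network): snapping each hidden unit to a small set of discrete levels makes the set of reachable memory states finite, and those states become the nodes of the extracted FSC.

```python
def quantize_hidden(h, levels=(-1.0, 0.0, 1.0)):
    """Snap each hidden unit of an RNN state to the nearest discrete level,
    so the continuous memory collapses onto a finite set of states."""
    return tuple(min(levels, key=lambda l: abs(l - x)) for x in h)

def extract_fsc_states(hidden_trajectory):
    """Collect the distinct quantised states visited along rollouts --
    these serve as the memory nodes of the finite-state controller."""
    return {quantize_hidden(h) for h in hidden_trajectory}
```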

Du$^2$Net: Learning Depth Estimation from Dual-Cameras and Dual-Pixels

Title Du$^2$Net: Learning Depth Estimation from Dual-Cameras and Dual-Pixels
Authors Yinda Zhang, Neal Wadhwa, Sergio Orts-Escolano, Christian Häne, Sean Fanello, Rahul Garg
Abstract Computational stereo has reached a high level of accuracy, but degrades in the presence of occlusions, repeated textures, and correspondence errors along edges. We present a novel approach based on neural networks for depth estimation that combines stereo from dual cameras with stereo from a dual-pixel sensor, which is increasingly common on consumer cameras. Our network uses a novel architecture to fuse these two sources of information and can overcome the above-mentioned limitations of pure binocular stereo matching. Our method provides a dense depth map with sharp edges, which is crucial for computational photography applications like synthetic shallow-depth-of-field or 3D Photos. Additionally, we avoid the inherent ambiguity due to the aperture problem in stereo cameras by designing the stereo baseline to be orthogonal to the dual-pixel baseline. We present experiments and comparisons with state-of-the-art approaches to show that our method offers a substantial improvement over previous works.
Tasks Depth Estimation, Stereo Matching
Published 2020-03-31
URL https://arxiv.org/abs/2003.14299v1
PDF https://arxiv.org/pdf/2003.14299v1.pdf
PWC https://paperswithcode.com/paper/du-2-net-learning-depth-estimation-from-dual

MetaFuse: A Pre-trained Fusion Model for Human Pose Estimation

Title MetaFuse: A Pre-trained Fusion Model for Human Pose Estimation
Authors Rongchang Xie, Chunyu Wang, Yizhou Wang
Abstract Cross-view feature fusion is the key to addressing the occlusion problem in human pose estimation. Current fusion methods need to train a separate model for every pair of cameras, making them difficult to scale. In this work, we introduce MetaFuse, a pre-trained fusion model learned from a large number of cameras in the Panoptic dataset. The model can be efficiently adapted or finetuned for a new pair of cameras using a small number of labeled images. The strong adaptation power of MetaFuse is due in large part to the proposed factorization of the original fusion model into two parts: (1) a generic fusion model shared by all cameras, and (2) lightweight camera-dependent transformations. Furthermore, the generic model is learned from many cameras by a meta-learning-style algorithm to maximize its adaptation capability to various camera poses. We observe in experiments that MetaFuse finetuned on the public datasets outperforms the state of the art by a large margin, which validates its value in practice.
Tasks Meta-Learning, Pose Estimation
Published 2020-03-30
URL https://arxiv.org/abs/2003.13239v1
PDF https://arxiv.org/pdf/2003.13239v1.pdf
PWC https://paperswithcode.com/paper/metafuse-a-pre-trained-fusion-model-for-human
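The factorization can be sketched abstractly (the scale-and-shift adaptation below is a hypothetical stand-in for the paper's camera-dependent transformations): a generic fusion weight matrix is shared by all cameras, and only a lightweight per-camera-pair transform is fitted for a new rig.

```python
def adapt_fusion_weights(generic_w, cam_scale, cam_shift):
    """Adapt a shared (generic) fusion weight matrix to a new camera pair
    with a lightweight transform -- here an elementwise scale and shift,
    so only two parameters are camera-specific."""
    return [[cam_scale * w + cam_shift for w in row] for row in generic_w]
```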

When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey

Title When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey
Authors Chongzhen Zhang, Jianrui Wang, Gary G. Yen, Chaoqiang Zhao, Qiyu Sun, Yang Tang, Feng Qian, Jürgen Kurths
Abstract With widespread applications of artificial intelligence (AI), the capabilities of perception, understanding, decision-making and control for autonomous systems have improved significantly in recent years. When autonomous systems consider accuracy and transferability simultaneously, several AI methods, like adversarial learning, reinforcement learning (RL) and meta-learning, show their powerful performance. Here, we review the learning-based approaches in autonomous systems from the perspectives of accuracy and transferability. Accuracy means that a well-trained model shows good results during the testing phase, in which the testing set shares the same task or data distribution with the training set. Transferability means that when a trained model is transferred to other testing domains, the accuracy is still good. Firstly, we introduce some basic concepts of transfer learning and then present some preliminaries of adversarial learning, RL and meta-learning. Secondly, we focus on reviewing accuracy and transferability to show the advantages of adversarial learning, like generative adversarial networks (GANs), in typical computer vision tasks in autonomous systems, including image style transfer, image super-resolution, image deblurring/dehazing/rain removal, semantic segmentation, depth estimation and person re-identification. Then, we further review the performance of RL and meta-learning from the aspects of accuracy and transferability in autonomous systems, involving robot navigation and robotic manipulation. Finally, we discuss several challenges and future topics for using adversarial learning, RL and meta-learning in autonomous systems.
Tasks Deblurring, Decision Making, Depth Estimation, Image Super-Resolution, Meta-Learning, Person Re-Identification, Rain Removal, Robot Navigation, Semantic Segmentation, Style Transfer, Super-Resolution, Transfer Learning
Published 2020-03-29
URL https://arxiv.org/abs/2003.12948v1
PDF https://arxiv.org/pdf/2003.12948v1.pdf
PWC https://paperswithcode.com/paper/when-autonomous-systems-meet-accuracy-and

Goal-Conditioned End-to-End Visuomotor Control for Versatile Skill Primitives

Title Goal-Conditioned End-to-End Visuomotor Control for Versatile Skill Primitives
Authors Oliver Groth, Chia-Man Hung, Andrea Vedaldi, Ingmar Posner
Abstract Visuomotor control (VMC) is an effective means of achieving basic manipulation tasks such as pushing or pick-and-place from raw images. Conditioning VMC on desired goal states is a promising way of achieving versatile skill primitives. However, common conditioning schemes either rely on task-specific fine-tuning (e.g. using meta-learning) or on sampling approaches using a forward model of scene dynamics (i.e. model-predictive control), leaving deployability and planning horizon severely limited. In this paper we propose a conditioning scheme which avoids these pitfalls by learning the controller and its conditioning in an end-to-end manner. Our model predicts complex action sequences based directly on a dynamic image representation of the robot motion and the distance to a given target observation. In contrast to related works, this enables our approach to efficiently perform complex pushing and pick-and-place tasks from raw image observations without predefined control primitives. We report significant improvements in task success over a representative model-predictive controller and also demonstrate our model’s generalisation capabilities in challenging, unseen tasks handling unfamiliar objects.
Tasks Meta-Learning
Published 2020-03-19
URL https://arxiv.org/abs/2003.08854v1
PDF https://arxiv.org/pdf/2003.08854v1.pdf
PWC https://paperswithcode.com/paper/goal-conditioned-end-to-end-visuomotor

Inherent Dependency Displacement Bias of Transition-Based Algorithms

Title Inherent Dependency Displacement Bias of Transition-Based Algorithms
Authors Mark Anderson, Carlos Gómez-Rodríguez
Abstract A wide variety of transition-based algorithms are currently used for dependency parsers. Empirical studies have shown that performance varies across different treebanks in such a way that one algorithm outperforms another on one treebank and the reverse is true for a different treebank. There is often no discernible reason for what causes one algorithm to be more suitable for a certain treebank and less so for another. In this paper we shed some light on this by introducing the concept of an algorithm’s inherent dependency displacement distribution. This characterises the bias of the algorithm in terms of dependency displacement, which quantifies both the distance and direction of syntactic relations. We show that the similarity of an algorithm’s inherent distribution to a treebank’s displacement distribution is clearly correlated with the algorithm’s parsing performance on that treebank, specifically with highly significant and substantial correlations for the predominant sentence lengths in Universal Dependency treebanks. We also obtain results which show that a more discrete analysis of dependency displacement does not result in any meaningful correlations.
Published 2020-03-31
URL https://arxiv.org/abs/2003.14282v1
PDF https://arxiv.org/pdf/2003.14282v1.pdf
PWC https://paperswithcode.com/paper/inherent-dependency-displacement-bias-of
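Dependency displacement itself is easy to compute from a CoNLL-style head list: the signed offset from a dependent to its head, with the sign giving the direction of the relation. A small sketch (the head-list encoding here is the standard 1-based convention with 0 marking the root):

```python
from collections import Counter

def displacement_distribution(heads):
    """Displacement of each dependency: head index minus dependent index
    (positive = head to the right of the dependent). `heads` is a 1-based
    head list, with 0 marking the root word, which has no displacement."""
    return Counter(h - i for i, h in enumerate(heads, start=1) if h != 0)
```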

Robust Mean Estimation under Coordinate-level Corruption

Title Robust Mean Estimation under Coordinate-level Corruption
Authors Zifan Liu, Jongho Park, Nils Palumbo, Theodoros Rekatsinas, Christos Tzamos
Abstract Data corruption, systematic or adversarial, may skew statistical estimation severely. Recent work provides computationally efficient estimators that nearly match the information-theoretic optimal statistic. Yet the corruption model they consider measures sample-level corruption and is not fine-grained enough for many real-world applications. In this paper, we propose a coordinate-level metric of distribution shift over high-dimensional settings with n coordinates. We introduce and analyze robust mean estimation techniques against an adversary who may hide individual coordinates of samples while being bounded by that metric. We show that for structured distribution settings, methods that leverage structure to fill in missing entries before mean estimation can improve the estimation accuracy by a factor of approximately n compared to structure-agnostic methods. We also leverage recent progress in matrix completion to obtain estimators for recovering the true mean of the samples in settings of unknown structure. We demonstrate with real-world data that our methods can capture the dependencies across attributes and provide accurate mean estimation even in high-magnitude corruption settings.
Tasks Matrix Completion
Published 2020-02-10
URL https://arxiv.org/abs/2002.04137v1
PDF https://arxiv.org/pdf/2002.04137v1.pdf
PWC https://paperswithcode.com/paper/robust-mean-estimation-under-coordinate-level
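The "fill in missing entries before mean estimation" idea can be sketched with a deliberately simple imputer (the paper leverages matrix completion to exploit structure; the coordinate-wise median below is a hypothetical stand-in):

```python
import statistics

def fill_then_mean(samples):
    """Impute hidden coordinates (None) with the coordinate-wise median of
    the observed entries, then return the coordinate-wise mean. Structure-
    aware fill-in (e.g. matrix completion) would replace the median step."""
    d = len(samples[0])
    meds = [statistics.median(row[j] for row in samples if row[j] is not None)
            for j in range(d)]
    filled = [[row[j] if row[j] is not None else meds[j] for j in range(d)]
              for row in samples]
    return [sum(col) / len(filled) for col in zip(*filled)]
```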

On Two Distinct Sources of Nonidentifiability in Latent Position Random Graph Models

Title On Two Distinct Sources of Nonidentifiability in Latent Position Random Graph Models
Authors Joshua Agterberg, Minh Tang, Carey E. Priebe
Abstract Two separate and distinct sources of nonidentifiability arise naturally in the context of latent position random graph models, though neither are unique to this setting. In this paper we define and examine these two nonidentifiabilities, dubbed subspace nonidentifiability and model-based nonidentifiability, in the context of random graph inference. We give examples where each type of nonidentifiability comes into play, and we show how in certain settings one need worry about only one or the other type of nonidentifiability. Then, we characterize the limit for model-based nonidentifiability both with and without subspace nonidentifiability. We further obtain additional limiting results for covariances and $U$-statistics of stochastic block models and generalized random dot product graphs.
Published 2020-03-31
URL https://arxiv.org/abs/2003.14250v1
PDF https://arxiv.org/pdf/2003.14250v1.pdf
PWC https://paperswithcode.com/paper/on-two-distinct-sources-of-nonidentifiability

QnAMaker: Data to Bot in 2 Minutes

Title QnAMaker: Data to Bot in 2 Minutes
Authors Parag Agrawal, Tulasi Menon, Aya Kamel, Michel Naim, Chaikesh Chouragade, Gurvinder Singh, Rohan Kulkarni, Anshuman Suri, Sahithi Katakam, Vineet Pratik, Prakul Bansal, Simerpreet Kaur, Neha Rajput, Anand Duggal, Achraf Chalabi, Prashant Choudhari, Reddy Satti, Niranjan Nayak
Abstract Having a bot for seamless conversations is a much-desired feature that products and services today seek for their websites and mobile apps. These bots help reduce traffic received by human support significantly by handling frequent and directly answerable known questions. Many such services have huge reference documents such as FAQ pages, which makes it hard for users to browse through this data. A conversation layer over such raw data can lower traffic to human support by a great margin. We demonstrate QnAMaker, a service that creates a conversational layer over semi-structured data such as FAQ pages, product manuals, and support documents. QnAMaker is the popular choice for Extraction and Question-Answering as a service and is used by over 15,000 bots in production. Beyond bots, it is also used by search interfaces.
Tasks Question Answering
Published 2020-03-19
URL https://arxiv.org/abs/2003.08553v1
PDF https://arxiv.org/pdf/2003.08553v1.pdf
PWC https://paperswithcode.com/paper/qnamaker-data-to-bot-in-2-minutes

ConvLab-2: An Open-Source Toolkit for Building, Evaluating, and Diagnosing Dialogue Systems

Title ConvLab-2: An Open-Source Toolkit for Building, Evaluating, and Diagnosing Dialogue Systems
Authors Qi Zhu, Zheng Zhang, Yan Fang, Xiang Li, Ryuichi Takanobu, Jinchao Li, Baolin Peng, Jianfeng Gao, Xiaoyan Zhu, Minlie Huang
Abstract We present ConvLab-2, an open-source toolkit that enables researchers to build task-oriented dialogue systems with state-of-the-art models, perform an end-to-end evaluation, and diagnose the weakness of systems. As the successor of ConvLab (Lee et al., 2019b), ConvLab-2 inherits ConvLab’s framework but integrates more powerful dialogue models and supports more datasets. Besides, we have developed an analysis tool and an interactive tool to assist researchers in diagnosing dialogue systems. The analysis tool presents rich statistics and summarizes common mistakes from simulated dialogues, which facilitates error analysis and system improvement. The interactive tool provides a user interface that allows developers to diagnose an assembled dialogue system by interacting with the system and modifying the output of each system component.
Tasks Task-Oriented Dialogue Systems
Published 2020-02-12
URL https://arxiv.org/abs/2002.04793v1
PDF https://arxiv.org/pdf/2002.04793v1.pdf
PWC https://paperswithcode.com/paper/convlab-2-an-open-source-toolkit-for-building

Large-scale Ontological Reasoning via Datalog

Title Large-scale Ontological Reasoning via Datalog
Authors Mario Alviano, Marco Manna
Abstract Reasoning over OWL 2 is a very expensive task in general, and therefore the W3C identified tractable profiles exhibiting good computational properties. Ontological reasoning for many fragments of OWL 2 can be reduced to the evaluation of Datalog queries. This paper surveys some of these compilations, and in particular the one addressing queries over Horn-$\mathcal{SHIQ}$ knowledge bases and its implementation in DLV2 enhanced by a new version of the Magic Sets algorithm.
Published 2020-03-21
URL https://arxiv.org/abs/2003.09698v1
PDF https://arxiv.org/pdf/2003.09698v1.pdf
PWC https://paperswithcode.com/paper/large-scale-ontological-reasoning-via-datalog
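The "reasoning as Datalog evaluation" reduction can be illustrated with the classic transitive-closure program, evaluated here by a naive bottom-up fixpoint loop in Python (a real engine such as DLV2 applies optimisations like Magic Sets on top of this basic semantics):

```python
def ancestor(parent_facts):
    """Naive bottom-up evaluation of the Datalog program:
         ancestor(X, Y) :- parent(X, Y).
         ancestor(X, Z) :- ancestor(X, Y), parent(Y, Z).
       Apply the rules repeatedly until no new facts are derived."""
    derived = set(parent_facts)
    while True:
        new = {(x, z) for (x, y) in derived
                      for (y2, z) in parent_facts if y == y2} - derived
        if not new:
            return derived
        derived |= new
```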