April 2, 2020

3035 words 15 mins read

Paper Group ANR 293

Learning a generative model for robot control using visual feedback. Parameterized Complexity Analysis of Randomized Search Heuristics. Deep Meditations: Controlled navigation of latent space. Automated Formal Synthesis of Lyapunov Neural Networks. Retrospective Analysis of the 2019 MineRL Competition on Sample Efficient Reinforcement Learning. Dee …

Learning a generative model for robot control using visual feedback

Title Learning a generative model for robot control using visual feedback
Authors Nishad Gothoskar, Miguel Lázaro-Gredilla, Abhishek Agarwal, Yasemin Bekiroglu, Dileep George
Abstract We introduce a novel formulation for incorporating visual feedback in controlling robots. We define a generative model from actions to image observations of features on the end-effector. Inference in the model allows us to infer the robot state corresponding to target locations of the features. This, in turn, guides the motion of the robot and allows for matching the target locations of the features in significantly fewer steps than state-of-the-art visual servoing methods. The training procedure for our model enables effective learning of the kinematics, feature structure, and camera parameters simultaneously. This can be done with no prior information about the robot, its structure, or the cameras that observe it. Learning is done sample-efficiently and shows strong generalization to test data. Since our formulation is modular, we can modify components of our setup, such as cameras and objects, and relearn them quickly online. Our method can handle noise in the observed state and noise in the controllers that we interact with. We demonstrate the effectiveness of our method by executing grasping and tight-fit insertions on robots with inaccurate controllers.
Tasks
Published 2020-03-10
URL https://arxiv.org/abs/2003.04474v1
PDF https://arxiv.org/pdf/2003.04474v1.pdf
PWC https://paperswithcode.com/paper/learning-a-generative-model-for-robot-control
Repo
Framework
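
As a rough illustration of the idea above (and not the paper's actual model), the sketch below fits a purely hypothetical linear generative model from joint angles to 2D feature observations using random exploratory motions, then inverts it to infer the joint configuration that matches a set of target feature locations. The real method learns kinematics, feature structure, and camera parameters jointly; everything here, including `observe_features`, is a stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)
true_W = rng.normal(size=(4, 3))              # 2 features x 2 pixel coords, 3 joints

def observe_features(q):
    """Hypothetical camera observation of end-effector features for joint angles q."""
    return true_W @ q + 0.01 * rng.normal(size=4)

# 1) Learn a generative model from actions (joint angles) to feature observations.
Q = rng.uniform(-1, 1, size=(50, 3))          # exploratory joint configurations
Y = np.stack([observe_features(q) for q in Q])
W_hat, *_ = np.linalg.lstsq(Q, Y, rcond=None) # least-squares fit: Y ~= Q @ W_hat
W_hat = W_hat.T

# 2) Inference: find the joint configuration whose predicted features hit the target.
target = observe_features(np.array([0.3, -0.2, 0.5]))
q_star = np.linalg.pinv(W_hat) @ target
print("feature-space error:", np.linalg.norm(W_hat @ q_star - target))
```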

Parameterized Complexity Analysis of Randomized Search Heuristics

Title Parameterized Complexity Analysis of Randomized Search Heuristics
Authors Frank Neumann, Andrew M. Sutton
Abstract This chapter compiles a number of results that apply the theory of parameterized algorithmics to the running-time analysis of randomized search heuristics such as evolutionary algorithms. The parameterized approach articulates the running time of algorithms solving combinatorial problems in finer detail than traditional approaches from classical complexity theory. We outline the main results and proof techniques for a collection of randomized search heuristics tasked to solve NP-hard combinatorial optimization problems such as finding a minimum vertex cover in a graph, finding a maximum leaf spanning tree in a graph, and the traveling salesperson problem.
Tasks Combinatorial Optimization
Published 2020-01-15
URL https://arxiv.org/abs/2001.05120v1
PDF https://arxiv.org/pdf/2001.05120v1.pdf
PWC https://paperswithcode.com/paper/parameterized-complexity-analysis-of
Repo
Framework
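
For readers unfamiliar with the parameterized viewpoint: running times are bounded in terms of both the instance size $n$ and a problem-specific parameter $k$ (for example, the size of the sought vertex cover), and a problem is fixed-parameter tractable if it can be solved in time

$$ f(k) \cdot n^{O(1)} $$

for some computable function $f$ that depends only on $k$. The results surveyed in the chapter express expected optimization times of search heuristics in this finer-grained form rather than in terms of $n$ alone.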

Deep Meditations: Controlled navigation of latent space

Title Deep Meditations: Controlled navigation of latent space
Authors Memo Akten, Rebecca Fiebrink, Mick Grierson
Abstract We introduce a method which allows users to creatively explore and navigate the vast latent spaces of deep generative models. Specifically, our method enables users to discover and design trajectories in these high-dimensional spaces, to construct stories, and to produce time-based media such as videos, with meaningful control over narrative. Our goal is to encourage and aid the use of deep generative models as a medium for creative expression and storytelling with meaningful human control. Our method is analogous to traditional video production pipelines in that we use a conventional non-linear video editor with proxy clips, and conform with arrays of latent space vectors. Examples can be seen at http://deepmeditations.ai.
Tasks
Published 2020-02-27
URL https://arxiv.org/abs/2003.00910v1
PDF https://arxiv.org/pdf/2003.00910v1.pdf
PWC https://paperswithcode.com/paper/deep-meditations-controlled-navigation-of
Repo
Framework
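
The pipeline above centers on a conventional video editor; the underlying latent-space operation is simply decoding points along a designed trajectory. The sketch below, with a placeholder `decode` function standing in for a real generator, shows linear interpolation between hand-picked latent keyframes (production systems often prefer spherical interpolation, which is omitted here).

```python
import numpy as np

def decode(z):
    """Placeholder for a generative model's decoder, e.g. a GAN generator."""
    return np.tanh(z)          # stands in for an image

def latent_trajectory(keyframes, steps_per_segment=30):
    """Linearly interpolate between consecutive latent keyframes and decode each point."""
    frames = []
    for z0, z1 in zip(keyframes[:-1], keyframes[1:]):
        for t in np.linspace(0.0, 1.0, steps_per_segment, endpoint=False):
            frames.append(decode((1 - t) * z0 + t * z1))
    frames.append(decode(keyframes[-1]))
    return frames

rng = np.random.default_rng(0)
keyframes = [rng.normal(size=512) for _ in range(4)]   # hand-picked latent vectors
video = latent_trajectory(keyframes)
print(len(video), "frames")
```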

Automated Formal Synthesis of Lyapunov Neural Networks

Title Automated Formal Synthesis of Lyapunov Neural Networks
Authors Alessandro Abate, Daniele Ahmed, Mirco Giacobbe, Andrea Peruffo
Abstract We propose an automated and sound technique to synthesize provably correct Lyapunov functions. We exploit a counterexample-guided approach composed of two parts: a learner provides candidate Lyapunov functions, and a verifier either guarantees the correctness of the candidate or offers counterexamples, which are used incrementally to further guide the synthesis of Lyapunov functions. Whilst the verifier employs a formal SMT solver, thus ensuring the overall soundness of the procedure, a neural network is used to learn and synthesize candidates over a domain of interest. Our approach flexibly supports neural networks of arbitrary size and depth, thus displaying interesting learning capabilities. In particular, we test our methodology over non-linear models that do not admit global polynomial Lyapunov functions, and compare the results against a cognate $\delta$-complete approach, and against an approach based on convex (SOS) optimization. The proposed technique outperforms these alternatives, synthesizing Lyapunov functions faster and over wider spatial domains.
Tasks
Published 2020-03-19
URL https://arxiv.org/abs/2003.08910v1
PDF https://arxiv.org/pdf/2003.08910v1.pdf
PWC https://paperswithcode.com/paper/automated-formal-synthesis-of-lyapunov-neural
Repo
Framework
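
A minimal sketch of the counterexample-guided loop described above, with two simplifications that are mine: a sampling-based numerical check stands in for the formal SMT verifier, and a quadratic candidate $V(x) = x^\top P x$ stands in for the neural network. It illustrates only the learner/verifier interaction, not the paper's soundness guarantees.

```python
import numpy as np

def f(x):                        # example dynamics: a stable linear system
    A = np.array([[-1.0, 0.5], [-0.5, -1.0]])
    return A @ x

def lyapunov(x, P):              # candidate V(x) = x^T P x
    return x @ P @ x

def violates(x, P, eps=1e-6):    # Lyapunov conditions: V > 0 and dV/dt < 0 away from 0
    dV = 2 * x @ P @ f(x)
    return lyapunov(x, P) <= eps or dV >= -eps

def verify(P, n_samples=10_000, rng=np.random.default_rng(0)):
    """Stand-in verifier: sample the domain and return a counterexample, if any."""
    for _ in range(n_samples):
        x = rng.uniform(-1, 1, size=2)
        if np.linalg.norm(x) > 1e-3 and violates(x, P):
            return x
    return None                  # no counterexample found (not a formal proof)

# Learner: start from a guess and nudge P on each counterexample.
P = np.eye(2)
for _ in range(100):
    cex = verify(P)
    if cex is None:
        break
    P += 0.1 * np.outer(cex, cex)   # crude update; the paper trains a neural network instead
print("candidate P:\n", P)
```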

Retrospective Analysis of the 2019 MineRL Competition on Sample Efficient Reinforcement Learning

Title Retrospective Analysis of the 2019 MineRL Competition on Sample Efficient Reinforcement Learning
Authors Stephanie Milani, Nicholay Topin, Brandon Houghton, William H. Guss, Sharada P. Mohanty, Keisuke Nakata, Oriol Vinyals, Noboru Sean Kuno
Abstract To facilitate research in the direction of sample-efficient reinforcement learning, we held the MineRL Competition on Sample-Efficient Reinforcement Learning Using Human Priors at the Thirty-fourth Conference on Neural Information Processing Systems (NeurIPS 2019). The primary goal of this competition was to promote the development of algorithms that use human demonstrations alongside reinforcement learning to reduce the number of samples needed to solve complex, hierarchical, and sparse environments. We describe the competition and provide an overview of the top solutions, each of which uses deep reinforcement learning and/or imitation learning. We also discuss the impact of our organizational decisions on the competition as well as future directions for improvement.
Tasks Imitation Learning
Published 2020-03-10
URL https://arxiv.org/abs/2003.05012v3
PDF https://arxiv.org/pdf/2003.05012v3.pdf
PWC https://paperswithcode.com/paper/the-minerl-competition-on-sample-efficient-1
Repo
Framework

DeepLPF: Deep Local Parametric Filters for Image Enhancement

Title DeepLPF: Deep Local Parametric Filters for Image Enhancement
Authors Sean Moran, Pierre Marza, Steven McDonagh, Sarah Parisot, Gregory Slabaugh
Abstract Digital artists often improve the aesthetic quality of digital photographs through manual retouching. Beyond global adjustments, professional image editing programs provide local adjustment tools operating on specific parts of an image. Options include parametric (graduated, radial filters) and unconstrained brush tools. These highly expressive tools enable a diverse set of local image enhancements. However, their use can be time-consuming and requires artistic capability. State-of-the-art automated image enhancement approaches typically focus on learning pixel-level or global enhancements. The former can be noisy and lack interpretability, while the latter can fail to capture fine-grained adjustments. In this paper, we introduce a novel approach to automatically enhance images using learned spatially local filters of three different types (Elliptical Filter, Graduated Filter, Polynomial Filter). We introduce a deep neural network, dubbed Deep Local Parametric Filters (DeepLPF), which regresses the parameters of these spatially localized filters that are then automatically applied to enhance the image. DeepLPF provides a natural form of model regularization and enables interpretable, intuitive adjustments that lead to visually pleasing results. We report on multiple benchmarks and show that DeepLPF produces state-of-the-art performance on two variants of the MIT-Adobe-5K dataset, often using a fraction of the parameters required for competing methods.
Tasks Image Enhancement
Published 2020-03-31
URL https://arxiv.org/abs/2003.13985v1
PDF https://arxiv.org/pdf/2003.13985v1.pdf
PWC https://paperswithcode.com/paper/deeplpf-deep-local-parametric-filters-for
Repo
Framework
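
As a hedged illustration of what "local parametric filters" can look like, the function below applies a hypothetical graduated (linear-gradient) brightness adjustment whose scalar parameters would, in a DeepLPF-style model, be regressed by the network; the exact parametric forms used in the paper are not reproduced here.

```python
import numpy as np

def graduated_filter(img, slope, offset, strength):
    """Apply a hypothetical graduated (linear-gradient) brightness adjustment.

    img: H x W x 3 float array in [0, 1]. In a DeepLPF-style model the scalar
    parameters would be regressed by a network; here they are simply given.
    """
    h, w, _ = img.shape
    ys = np.linspace(0.0, 1.0, h)[:, None]          # vertical coordinate per row
    mask = np.clip(slope * ys + offset, 0.0, 1.0)   # smooth 0..1 spatial mask
    return np.clip(img + strength * mask[..., None], 0.0, 1.0)

rng = np.random.default_rng(0)
image = rng.uniform(size=(64, 64, 3))
enhanced = graduated_filter(image, slope=1.5, offset=-0.25, strength=0.2)
print(enhanced.shape, enhanced.min(), enhanced.max())
```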

Self-explaining AI as an alternative to interpretable AI

Title Self-explaining AI as an alternative to interpretable AI
Authors Daniel C. Elton
Abstract The ability to explain decisions made by AI systems is highly sought after, especially in domains where human lives are at stake, such as medicine or autonomous vehicles. While it is always possible to approximate the input-output relations of deep neural networks with human-understandable rules or a post-hoc model, the discovery of the double descent phenomenon suggests that no such approximation will ever map onto the actual mechanistic functioning of deep neural networks. Double descent indicates that deep neural networks typically operate by smoothly interpolating between data points rather than by extracting a few high-level rules. As a result, neural networks trained on complex real-world data are inherently hard to interpret and prone to failure if used outside their domain of applicability (i.e., for extrapolation). To show how we might be able to trust AI despite these problems, we introduce the concept of self-explaining AI. Self-explaining AIs are capable of providing a human-understandable explanation of each decision along with confidence levels for both the decision and the explanation. Some difficulties with this approach, along with possible solutions, are sketched. Finally, we argue it is also important that AI systems warn their users when they are asked to perform outside their domain of applicability.
Tasks Autonomous Vehicles
Published 2020-02-12
URL https://arxiv.org/abs/2002.05149v3
PDF https://arxiv.org/pdf/2002.05149v3.pdf
PWC https://paperswithcode.com/paper/self-explainability-as-an-alternative-to
Repo
Framework
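
The abstract is conceptual and prescribes no architecture; one possible reading of "explanation plus confidence for both decision and explanation" is a model with two heads, each reporting a confidence. The PyTorch sketch below is entirely my illustration of that reading, not the author's proposal.

```python
import torch
import torch.nn as nn

class SelfExplainingNet(nn.Module):
    """Hypothetical model with separate decision and explanation heads,
    each returning a confidence, loosely following the abstract's description."""
    def __init__(self, in_dim=32, n_classes=10, n_concepts=5):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.decision_head = nn.Linear(64, n_classes)
        self.explanation_head = nn.Linear(64, n_concepts)   # e.g. concept scores

    def forward(self, x):
        h = self.backbone(x)
        decision = torch.softmax(self.decision_head(h), dim=-1)
        explanation = torch.softmax(self.explanation_head(h), dim=-1)
        # Use the top probability of each head as a simple confidence proxy.
        return {
            "decision": decision.argmax(dim=-1),
            "decision_confidence": decision.max(dim=-1).values,
            "explanation": explanation,
            "explanation_confidence": explanation.max(dim=-1).values,
        }

model = SelfExplainingNet()
out = model(torch.randn(4, 32))
print(out["decision"], out["decision_confidence"])
```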

One-Shot GAN Generated Fake Face Detection

Title One-Shot GAN Generated Fake Face Detection
Authors Hadi Mansourifar, Weidong Shi
Abstract Fake face detection is a significant challenge for intelligent systems as generative models become more powerful every single day. As the quality of fake faces increases, trained models become less and less effective at detecting novel fake faces, since the corresponding training data is quickly outdated. In this setting, robust one-shot learning methods are better suited to the requirements of rapidly changing training data. In this paper, we propose a universal one-shot GAN-generated fake face detection method that can be used in significantly different areas of anomaly detection. The proposed method is based on extracting out-of-context objects from faces via scene understanding models. To do so, we use state-of-the-art scene understanding and object detection methods as a pre-processing tool to detect anomalous objects in the face. Second, we create a bag of words from all out-of-context objects detected across the training data. In this way, we transform each image into a sparse vector in which each feature holds the confidence score of the corresponding detected object. Our experiments show that we can discriminate fake faces from real ones in terms of out-of-context features: different sets of objects are detected in fake faces compared to real ones when analyzed with scene understanding and object detection models. Our experiments on StyleGAN-generated fake faces show that the proposed method outperforms previous approaches.
Tasks Anomaly Detection, Face Detection, Object Detection, One-Shot Learning, Scene Understanding
Published 2020-03-27
URL https://arxiv.org/abs/2003.12244v1
PDF https://arxiv.org/pdf/2003.12244v1.pdf
PWC https://paperswithcode.com/paper/one-shot-gan-generated-fake-face-detection
Repo
Framework
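
A small sketch of the bag-of-words step described above, using hard-coded hypothetical detections in place of the scene-understanding and object-detection models the paper relies on: each image becomes a sparse vector whose entries hold the confidence scores of detected out-of-context objects.

```python
import numpy as np

# Hypothetical detections: (label, confidence) pairs per image from an
# off-the-shelf object detector; the actual models used in the paper differ.
train_detections = [
    [("glasses", 0.9), ("earring", 0.7)],
    [("hat", 0.8)],
    [("earring", 0.6), ("microphone", 0.5)],
]

# Build the bag-of-words vocabulary over all out-of-context objects seen in training.
vocab = sorted({label for dets in train_detections for label, _ in dets})
index = {label: i for i, label in enumerate(vocab)}

def to_sparse_vector(detections):
    """Each feature holds the confidence score of the corresponding detected object."""
    v = np.zeros(len(vocab))
    for label, conf in detections:
        if label in index:
            v[index[label]] = max(v[index[label]], conf)
    return v

X_train = np.stack([to_sparse_vector(d) for d in train_detections])
print(vocab)
print(X_train)
```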

An implicit function learning approach for parametric modal regression

Title An implicit function learning approach for parametric modal regression
Authors Yangchen Pan, Ehsan Imani, Martha White, Amir-massoud Farahmand
Abstract For multi-valued functions—such as when the conditional distribution on targets given the inputs is multi-modal—standard regression approaches are not always desirable because they provide the conditional mean. Modal regression aims to instead find the conditional mode, but is restricted to nonparametric approaches. Such methods can be difficult to scale, and cannot benefit from parametric function approximation, like neural networks, which can learn complex relationships between inputs and targets. In this work, we propose a parametric modal regression algorithm, by using the implicit function theorem to develop an objective for learning a joint parameterized function over inputs and targets. We empirically demonstrate on several synthetic problems that our method (i) can learn multi-valued functions and produce the conditional modes, (ii) scales well to high-dimensional inputs and (iii) is even more effective for certain uni-modal problems, particularly for high frequency data where the joint function over inputs and targets can better capture the complex relationship between them. We then demonstrate that our method is practically useful in a real-world modal regression problem. We conclude by showing that our method provides small improvements on two regression datasets that have asymmetric distributions over the targets.
Tasks
Published 2020-02-14
URL https://arxiv.org/abs/2002.06195v1
PDF https://arxiv.org/pdf/2002.06195v1.pdf
PWC https://paperswithcode.com/paper/an-implicit-function-learning-approach-for-1
Repo
Framework
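
The abstract does not spell out the training objective, so the PyTorch sketch below should be read as a guess at the general shape of the idea: learn a joint function $g_\theta(x, y)$ that is driven to zero on observed input-target pairs, with a derivative regularizer (my addition) to rule out the trivial all-zero solution. At test time, the conditional modes for an input $x$ would be found by solving $g_\theta(x, y) = 0$ over $y$.

```python
import torch
import torch.nn as nn

# Toy multi-valued data: for each x, the target is either +sqrt(x) or -sqrt(x).
torch.manual_seed(0)
x = torch.rand(256, 1)
sign = torch.where(torch.rand(256, 1) < 0.5, torch.ones(256, 1), -torch.ones(256, 1))
y = sign * torch.sqrt(x)

g = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(g.parameters(), lr=1e-2)

for step in range(2000):
    xy = torch.cat([x, y], dim=1).requires_grad_(True)
    out = g(xy)
    zero_loss = (out ** 2).mean()                 # push g(x, y) to zero on data pairs
    dy = torch.autograd.grad(out.sum(), xy, create_graph=True)[0][:, 1]
    slope_loss = ((dy.abs() - 1.0) ** 2).mean()   # keep dg/dy away from zero (assumption)
    loss = zero_loss + 0.1 * slope_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final training loss:", float(loss))
```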

Making Metadata Fit for Next Generation Language Technology Platforms: The Metadata Schema of the European Language Grid

Title Making Metadata Fit for Next Generation Language Technology Platforms: The Metadata Schema of the European Language Grid
Authors Penny Labropoulou, Katerina Gkirtzou, Maria Gavriilidou, Miltos Deligiannis, Dimitrios Galanis, Stelios Piperidis, Georg Rehm, Maria Berger, Valérie Mapelli, Mickaël Rigault, Victoria Arranz, Khalid Choukri, Gerhard Backfried, José Manuel Gómez Pérez, Andres Garcia Silva
Abstract The current scientific and technological landscape is characterised by the increasing availability of data resources and processing tools and services. In this setting, metadata have emerged as a key factor facilitating management, sharing and usage of such digital assets. In this paper we present ELG-SHARE, a rich metadata schema catering for the description of Language Resources and Technologies (processing and generation services and tools, models, corpora, term lists, etc.), as well as related entities (e.g., organizations, projects, supporting documents, etc.). The schema powers the European Language Grid platform that aims to be the primary hub and marketplace for industry-relevant Language Technology in Europe. ELG-SHARE has been based on various metadata schemas, vocabularies, and ontologies, as well as related recommendations and guidelines.
Tasks
Published 2020-03-30
URL https://arxiv.org/abs/2003.13236v1
PDF https://arxiv.org/pdf/2003.13236v1.pdf
PWC https://paperswithcode.com/paper/making-metadata-fit-for-next-generation
Repo
Framework

Defending against Backdoor Attack on Deep Neural Networks

Title Defending against Backdoor Attack on Deep Neural Networks
Authors Hao Cheng, Kaidi Xu, Sijia Liu, Pin-Yu Chen, Pu Zhao, Xue Lin
Abstract Although deep neural networks (DNNs) have achieved great success in various computer vision tasks, it has recently been found that they are vulnerable to adversarial attacks. In this paper, we focus on the so-called backdoor attack, which injects a backdoor trigger into a small portion of training data (also known as data poisoning) such that the trained DNN misclassifies examples containing this trigger. To be specific, we carefully study the effect of both real and synthetic backdoor attacks on the internal response of vanilla and backdoored DNNs through the lens of Grad-CAM. Moreover, we show that the backdoor attack induces a significant bias in neuron activation in terms of the $\ell_\infty$ norm of an activation map compared to its $\ell_1$ and $\ell_2$ norms. Spurred by our results, we propose $\ell_\infty$-based neuron pruning to remove the backdoor from the backdoored DNN. Experiments show that our method effectively decreases the attack success rate while maintaining high classification accuracy on clean images.
Tasks data poisoning
Published 2020-02-26
URL https://arxiv.org/abs/2002.12162v1
PDF https://arxiv.org/pdf/2002.12162v1.pdf
PWC https://paperswithcode.com/paper/defending-against-backdoor-attack-on-deep
Repo
Framework
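
The pruning rule is only named above, so the numpy sketch below is a rough reading of it: collect activation maps on clean data, rank channels by the $\ell_\infty$ norm of their maps, and zero out the most extreme ones. The fraction pruned and the way activations are aggregated are assumptions of this sketch.

```python
import numpy as np

def linf_prune_mask(activations, prune_fraction=0.1):
    """Rank channels by the l_inf norm of their activation maps and mark the
    most extreme ones for pruning (a rough reading of l_inf-based neuron pruning).

    activations: array of shape (n_images, n_channels, H, W) collected on clean data.
    Returns a boolean mask of channels to keep.
    """
    n_channels = activations.shape[1]
    # l_inf norm of each channel's activation map, averaged over images
    linf = np.abs(activations).max(axis=(2, 3)).mean(axis=0)
    n_prune = int(prune_fraction * n_channels)
    prune_idx = np.argsort(linf)[-n_prune:]           # channels with the largest l_inf
    keep = np.ones(n_channels, dtype=bool)
    keep[prune_idx] = False
    return keep

acts = np.random.default_rng(0).normal(size=(32, 64, 8, 8))
mask = linf_prune_mask(acts, prune_fraction=0.1)
print("channels kept:", int(mask.sum()), "of", mask.size)
```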

FR-Train: A mutual information-based approach to fair and robust training

Title FR-Train: A mutual information-based approach to fair and robust training
Authors Yuji Roh, Kangwook Lee, Steven Euijong Whang, Changho Suh
Abstract Trustworthy AI is a critical issue in machine learning where, in addition to training a model that is accurate, one must consider both fair and robust training in the presence of data bias and poisoning. However, the existing model fairness techniques mistakenly view poisoned data as an additional bias, resulting in severe performance degradation. To fix this problem, we propose FR-Train, which holistically performs fair and robust model training. We provide a mutual information-based interpretation of an existing adversarial training-based fairness-only method, and apply this idea to architect an additional discriminator that can identify poisoned data using a clean validation set and reduce its influence. In our experiments, FR-Train shows almost no decrease in fairness and accuracy in the presence of data poisoning by both mitigating the bias and defending against poisoning. We also demonstrate how to construct clean validation sets using crowdsourcing, and release new benchmark datasets.
Tasks data poisoning
Published 2020-02-24
URL https://arxiv.org/abs/2002.10234v1
PDF https://arxiv.org/pdf/2002.10234v1.pdf
PWC https://paperswithcode.com/paper/fr-train-a-mutual-information-based-approach
Repo
Framework
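
The abstract suggests a classifier trained adversarially against two discriminators: one that tries to recover the sensitive attribute from the classifier's output (fairness) and one that tries to distinguish training pairs from a clean validation set (robustness). The PyTorch fragment below is my guess at that structure, not the authors' code; in full training the discriminators would be updated alternately, GAN-style, to minimize their own losses.

```python
import torch
import torch.nn as nn

# Three players, loosely following the abstract. All dimensions and losses are assumptions.
clf = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
fair_disc = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
robust_disc = nn.Sequential(nn.Linear(11, 16), nn.ReLU(), nn.Linear(16, 1))
bce = nn.BCEWithLogitsLoss()

def classifier_loss(x, y, s, x_clean, y_clean, lam=0.5, mu=0.5):
    logits = clf(x)
    pred_loss = bce(logits, y)
    # Fairness: penalize the classifier when its outputs reveal the group s.
    fair_loss = bce(fair_disc(torch.sigmoid(logits)), s)
    # Robustness: penalize the classifier when its (x, prediction) pairs are
    # distinguishable from pairs drawn from the clean validation set.
    fake = torch.cat([x, torch.sigmoid(logits)], dim=1)
    real = torch.cat([x_clean, y_clean], dim=1)
    rob_loss = bce(robust_disc(fake), torch.ones_like(y)) + \
               bce(robust_disc(real), torch.zeros_like(y_clean))
    return pred_loss - lam * fair_loss - mu * rob_loss   # adversarial signs

x = torch.randn(64, 10); y = torch.randint(0, 2, (64, 1)).float()
s = torch.randint(0, 2, (64, 1)).float()
x_val = torch.randn(16, 10); y_val = torch.randint(0, 2, (16, 1)).float()
print(float(classifier_loss(x, y, s, x_val, y_val)))
```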

Quantifying the Effects of Recommendation Systems

Title Quantifying the Effects of Recommendation Systems
Authors Sunshine Chong, Andrés Abeliuk
Abstract Recommendation systems today exert a strong influence on consumer behavior and individual perceptions of the world. By using collaborative filtering (CF) methods to create recommendations, these systems generate a continuous feedback loop in which user behavior is magnified by the algorithm: popular items get recommended more frequently, creating a bias that affects and alters user preferences. To visualize and compare these biases, we analyze the effects of recommendation systems and quantify the inequalities resulting from them.
Tasks Recommendation Systems
Published 2020-02-04
URL https://arxiv.org/abs/2002.01077v1
PDF https://arxiv.org/pdf/2002.01077v1.pdf
PWC https://paperswithcode.com/paper/quantifying-the-effects-of-recommendation
Repo
Framework
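
The abstract does not give the simulation or metrics used, so the toy below is only meant to make the feedback-loop claim concrete: items are "recommended" in proportion to their past interaction counts, which amplifies early popularity differences, and a Gini coefficient summarizes the resulting inequality.

```python
import numpy as np

def gini(counts):
    """Gini coefficient of a non-negative vector (0 = equal, 1 = maximally unequal)."""
    x = np.sort(np.asarray(counts, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

rng = np.random.default_rng(0)
n_items, n_steps = 100, 5000
counts = np.ones(n_items)                 # start every item with one interaction

for _ in range(n_steps):
    # "Recommend" items in proportion to past interactions: a crude stand-in
    # for a popularity-driven collaborative filter.
    item = rng.choice(n_items, p=counts / counts.sum())
    counts[item] += 1                      # the recommendation generates more interactions

print("Gini after feedback loop:", round(gini(counts), 3))
```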

GAN-based Priors for Quantifying Uncertainty

Title GAN-based Priors for Quantifying Uncertainty
Authors Dhruv V. Patel, Assad A. Oberai
Abstract Bayesian inference is used extensively to quantify the uncertainty in an inferred field given the measurement of a related field when the two are linked by a mathematical model. Despite its many applications, Bayesian inference faces challenges when inferring fields that have discrete representations of large dimension, and/or have prior distributions that are difficult to characterize mathematically. In this work we demonstrate how the approximate distribution learned by a deep generative adversarial network (GAN) may be used as a prior in a Bayesian update to address both these challenges. We demonstrate the efficacy of this approach on two distinct, and remarkably broad, classes of problems. The first class leads to supervised learning algorithms for image classification with superior out-of-distribution detection and accuracy, and for image inpainting with built-in variance estimation. The second class leads to unsupervised learning algorithms for image denoising and for solving physics-driven inverse problems.
Tasks Bayesian Inference, Denoising, Image Classification, Image Denoising, Image Inpainting, Out-of-Distribution Detection
Published 2020-03-27
URL https://arxiv.org/abs/2003.12597v1
PDF https://arxiv.org/pdf/2003.12597v1.pdf
PWC https://paperswithcode.com/paper/gan-based-priors-for-quantifying-uncertainty
Repo
Framework
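
The abstract leaves the Bayesian update unspecified; a common way to use a generator as a prior (and the one sketched below, with a toy "generator" and forward model of my own) is MAP estimation in latent space: minimize the measurement misfit of $G(z)$ plus a Gaussian penalty on $z$. A full Bayesian treatment would instead sample the posterior over $z$, e.g. with MCMC.

```python
import torch

torch.manual_seed(0)
W = torch.randn(100, 16)           # placeholder "generator" weights

def G(z):
    """Stand-in for a pretrained GAN generator mapping latent z to a field."""
    return torch.tanh(W @ z)

A = torch.randn(20, 100)           # forward model linking the field to measurements
z_true = torch.randn(16)
y = A @ G(z_true) + 0.01 * torch.randn(20)

# MAP estimate in latent space: data misfit + Gaussian prior on z.
z = torch.zeros(16, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)
for _ in range(500):
    loss = ((A @ G(z) - y) ** 2).sum() + 0.1 * (z ** 2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("measurement misfit:", float(((A @ G(z) - y) ** 2).sum()))
```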

Context-dependent self-exciting point processes: models, methods, and risk bounds in high dimensions

Title Context-dependent self-exciting point processes: models, methods, and risk bounds in high dimensions
Authors Lili Zheng, Garvesh Raskutti, Rebecca Willett, Benjamin Mark
Abstract High-dimensional autoregressive point processes model how current events trigger or inhibit future events, such as how activity by one member of a social network can affect the future activity of his or her neighbors. While past work has focused on estimating the underlying network structure based solely on the times at which events occur on each node of the network, this paper examines the more nuanced problem of estimating context-dependent networks that reflect how features associated with an event (such as the content of a social media post) modulate the strength of influences among nodes. Specifically, we leverage ideas from compositional time series and regularization methods in machine learning to conduct network estimation for high-dimensional marked point processes. Two models and corresponding estimators are considered in detail: an autoregressive multinomial model suited to categorical marks and a logistic-normal model suited to marks with mixed membership in different categories. Importantly, the logistic-normal model leads to a convex negative log-likelihood objective and captures dependence across categories. We provide theoretical guarantees for both estimators, which we validate by simulations and a synthetic data-generating model. We further validate our methods through two real data examples and demonstrate the advantages and disadvantages of both approaches.
Tasks Point Processes, Time Series
Published 2020-03-16
URL https://arxiv.org/abs/2003.07429v1
PDF https://arxiv.org/pdf/2003.07429v1.pdf
PWC https://paperswithcode.com/paper/context-dependent-self-exciting-point
Repo
Framework
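
Neither model's exact specification is reproduced above, so the simulation below is a loose, discrete-time caricature of a context-dependent self-exciting process: whether an event fires on a node depends, through an influence tensor, on which nodes fired at the previous step and on the marks those events carried. Marks here are drawn uniformly rather than from the paper's autoregressive multinomial or logistic-normal models.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_marks, T = 5, 3, 200

# Hypothetical influence tensor: how a past event with a given mark on node u
# changes the event probability on each node v (an illustration, not the
# paper's exact parameterization).
influence = 0.3 * rng.normal(size=(n_nodes, n_nodes, n_marks))
baseline = -1.5 * np.ones(n_nodes)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

events = np.zeros((T, n_nodes), dtype=int)           # 0 = no event, else mark + 1
for t in range(1, T):
    # Sum the influence of last step's events (and their marks) on each node.
    drive = baseline.copy()
    for u in range(n_nodes):
        if events[t - 1, u] > 0:
            drive += influence[u, :, events[t - 1, u] - 1]
    occur = rng.random(n_nodes) < sigmoid(drive)      # does an event occur?
    marks = rng.integers(1, n_marks + 1, size=n_nodes)
    events[t] = np.where(occur, marks, 0)

print("events per node:", events.astype(bool).sum(axis=0))
```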