January 27, 2020

3341 words 16 mins read

Paper Group ANR 1310

Accurate Congenital Heart Disease Model Generation for 3D Printing. Tuning-Free Disentanglement via Projection. Online Simultaneous Semi-Parametric Dynamics Model Learning. Information-Theoretic Considerations in Batch Reinforcement Learning. A Copula approach for hyperparameter transfer learning. High Throughput Computation of Reference Ranges of …

Accurate Congenital Heart Disease Model Generation for 3D Printing

Title Accurate Congenital Heart Disease Model Generation for 3D Printing
Authors Xiaowei Xu, Tianchen Wang, Dewen Zeng, Yiyu Shi, Qianjun Jia, Haiyun Yuan, Meiping Huang, Jian Zhuang
Abstract 3D printing has been widely adopted for clinical decision making and interventional planning of congenital heart disease (CHD), while whole heart and great vessel segmentation is the most significant but time-consuming step in model generation for 3D printing. While various automatic whole heart and great vessel segmentation frameworks have been developed in the literature, they are ineffective when applied to medical images in CHD, which have significant variations in heart structure and great vessel connections. To address the challenge, we leverage the power of deep learning in processing regular structures and that of graph algorithms in dealing with large variations, and propose a framework that combines both for whole heart and great vessel segmentation in CHD. Particularly, we first use deep learning to segment the four chambers and myocardium, followed by the blood pool, where variations are usually small. We then extract the connection information and apply graph matching to determine the categories of all the vessels. Experimental results using 68 3D CT images covering 14 types of CHD show that our method can increase the Dice score by 11.9% on average compared with the state-of-the-art whole heart and great vessel segmentation method in normal anatomy. The segmentation results are also printed out using 3D printers for validation.
Tasks Decision Making, Graph Matching
Published 2019-07-06
URL https://arxiv.org/abs/1907.05273v2
PDF https://arxiv.org/pdf/1907.05273v2.pdf
PWC https://paperswithcode.com/paper/accurate-congenital-heart-disease
Repo
Framework
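The vessel-labelling step pairs CNN output with classical graph matching. Below is a minimal sketch of that idea, assuming a networkx connectivity graph of the segmented blood pool whose chamber nodes already carry an `anatomy` label from the CNN; the template graphs and attribute names are illustrative, not the authors' implementation.

```python
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def label_vessels(patient_graph, template_graphs):
    """Assign vessel categories by matching the patient's blood-pool
    connectivity graph against per-CHD-type template graphs."""
    def node_match(tpl_attrs, pat_attrs):
        # CNN-labelled chambers act as anchors; unlabelled vessels match anything.
        return pat_attrs.get("anatomy") in (None, tpl_attrs.get("anatomy"))

    for template in template_graphs:
        gm = GraphMatcher(template, patient_graph, node_match=node_match)
        if gm.is_isomorphic():
            # Transfer vessel categories from the matched template nodes.
            return {pat: template.nodes[tpl]["anatomy"]
                    for tpl, pat in gm.mapping.items()}
    return None  # no template fits this anatomy
```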

Tuning-Free Disentanglement via Projection

Title Tuning-Free Disentanglement via Projection
Authors Yue Bai, Leo L. Duan
Abstract In representation learning and non-linear dimension reduction, there is great interest in learning ‘disentangled’ latent variables, where each sub-coordinate almost uniquely controls a facet of the observed data. While many regularization approaches have been proposed on variational autoencoders, heuristic tuning is required to balance between disentanglement and loss in reconstruction accuracy – due to the unsupervised nature, there is no principled way to find an optimal weight for regularization. Motivated to completely bypass regularization, we consider a projection strategy: modifying the canonical Gaussian encoder, we add a layer of scaling and rotation to the Gaussian mean, such that the marginal correlations among latent sub-coordinates become exactly zero. This achieves a theoretically maximal disentanglement, as guaranteed by zero cross-correlation between one latent sub-coordinate and the observed data varying with the rest. Unlike regularizations, the extra projection layer does not impact the flexibility of the previous encoder layers, leading to almost no loss in expressiveness. This approach is simple to implement in practice. Our numerical experiments demonstrate very good performance, with no tuning required.
Tasks Dimensionality Reduction, Representation Learning
Published 2019-06-27
URL https://arxiv.org/abs/1906.11732v2
PDF https://arxiv.org/pdf/1906.11732v2.pdf
PWC https://paperswithcode.com/paper/tuning-free-disentanglement-via-projection
Repo
Framework
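The core of the method is a deterministic decorrelation of the encoder means. Here is a minimal sketch of such a scaling-and-rotation projection, assuming a batch of Gaussian means from a standard VAE encoder; the paper's exact parameterisation may differ.

```python
import torch

def decorrelate_means(mu):
    """Rotate and rescale a batch of encoder means so the latent
    sub-coordinates have exactly zero marginal correlation."""
    mu_c = mu - mu.mean(dim=0, keepdim=True)        # centre the batch
    cov = mu_c.T @ mu_c / (mu.shape[0] - 1)         # sample covariance
    eigvals, eigvecs = torch.linalg.eigh(cov)       # spectral decomposition
    # Project onto the eigenbasis and rescale: the resulting coordinates
    # are uncorrelated with unit variance.
    return mu_c @ eigvecs / eigvals.clamp_min(1e-8).sqrt()
```

Because the projection is an invertible linear map appended after the encoder, it does not restrict what the earlier layers can represent, which is consistent with the authors' claim of almost no loss in expressiveness.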

Online Simultaneous Semi-Parametric Dynamics Model Learning

Title Online Simultaneous Semi-Parametric Dynamics Model Learning
Authors Joshua Smith, Michael Mistry
Abstract Accurate models of robots’ dynamics are critical for control, stability, motion optimization, and interaction. Semi-Parametric approaches to dynamics learning combine physics-based Parametric models with unstructured Non-Parametric regression, in the hope of achieving both accuracy and generalizability. In this paper we highlight the non-stationary learning problem created when attempting to adapt both Parametric and Non-Parametric components simultaneously. We present a consistency transform designed to compensate for this non-stationary effect, such that the contributions of both models can adapt simultaneously without adversely affecting the performance of the platform. We are thus able to apply the Semi-Parametric learning approach for continuous iterative online adaptation, without relying on batch or offline updates. We validate the transform via a perfect virtual model as well as by applying the overall system on a Kuka LWR IV manipulator. We demonstrate improved tracking performance during online learning and show a clear transference of contribution between the two components, with a learning bias towards the Parametric component.
Tasks
Published 2019-10-09
URL https://arxiv.org/abs/1910.04297v2
PDF https://arxiv.org/pdf/1910.04297v2.pdf
PWC https://paperswithcode.com/paper/online-simultaneous-semi-parametric-dynamics
Repo
Framework
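To make the setup concrete, here is a hedged sketch of a semi-parametric dynamics model with naive simultaneous adaptation; the regressor matrix `Y`, the `residual_model` interface, and the gradient step are illustrative placeholders, and the paper's consistency transform (not shown) is what compensates for the non-stationarity this naive scheme creates.

```python
import numpy as np

class SemiParametricModel:
    """Physics-based linear-in-parameters model Y(x) @ theta plus a
    non-parametric residual learner (e.g. an incremental kernel regressor)."""
    def __init__(self, n_params, residual_model, lr=1e-3):
        self.theta = np.zeros(n_params)   # inertial parameters (parametric part)
        self.residual = residual_model    # non-parametric part
        self.lr = lr

    def predict(self, Y, x):
        return Y @ self.theta + self.residual.predict(x)

    def update(self, Y, x, tau):
        # Naive simultaneous update: each component chases a target that the
        # other keeps moving; this is the non-stationarity highlighted above.
        err = tau - self.predict(Y, x)
        self.theta += self.lr * Y.T @ err
        self.residual.partial_fit(x, tau - Y @ self.theta)
```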

Information-Theoretic Considerations in Batch Reinforcement Learning

Title Information-Theoretic Considerations in Batch Reinforcement Learning
Authors Jinglin Chen, Nan Jiang
Abstract Value-function approximation methods that operate in batch mode have foundational importance to reinforcement learning (RL). Finite sample guarantees for these methods often crucially rely on two types of assumptions: (1) mild distribution shift, and (2) representation conditions that are stronger than realizability. However, the necessity (“why do we need them?”) and the naturalness (“when do they hold?”) of such assumptions have largely eluded the literature. In this paper, we revisit these assumptions, provide theoretical results towards answering the above questions, and take steps towards a deeper understanding of value-function approximation.
Tasks
Published 2019-05-01
URL http://arxiv.org/abs/1905.00360v1
PDF http://arxiv.org/pdf/1905.00360v1.pdf
PWC https://paperswithcode.com/paper/information-theoretic-considerations-in-batch
Repo
Framework
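For context, the assumptions in question govern batch value-function approximation methods such as fitted Q-iteration. A minimal FQI loop, with an illustrative dataset format and an arbitrary regression oracle (not from the paper):

```python
import numpy as np

def fitted_q_iteration(dataset, make_regressor, n_actions, gamma=0.99, iters=50):
    """Batch-mode Q-learning: repeatedly regress Bellman backups onto a
    function class. Guarantees for this scheme typically need both low
    distribution shift and (approximate) completeness of the class."""
    S  = np.array([t[0] for t in dataset])
    A  = np.array([t[1] for t in dataset])
    R  = np.array([t[2] for t in dataset])
    S2 = np.array([t[3] for t in dataset])
    q = None
    for _ in range(iters):
        if q is None:
            targets = R                               # first pass: regress on rewards
        else:
            q_next = np.stack(
                [q.predict(np.column_stack([S2, np.full(len(S2), a)]))
                 for a in range(n_actions)], axis=1)
            targets = R + gamma * q_next.max(axis=1)  # Bellman backup
        q = make_regressor()
        q.fit(np.column_stack([S, A]), targets)       # projection onto the class
    return q
```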

A Copula approach for hyperparameter transfer learning

Title A Copula approach for hyperparameter transfer learning
Authors David Salinas, Huibin Shen, Valerio Perrone
Abstract Bayesian optimization (BO) is a popular methodology to tune the hyperparameters of expensive black-box functions. Despite its success, standard BO focuses on a single task at a time and is not designed to leverage information from related functions, such as tuning performance metrics of the same algorithm across multiple datasets. In this work, we introduce a novel approach to achieve transfer learning across different datasets as well as different metrics. The main idea is to regress the mapping from hyperparameter to metric quantiles with a semi-parametric Gaussian Copula distribution, which provides robustness against different scales or outliers that can occur in different tasks. We introduce two methods to leverage this estimation: a Thompson sampling strategy as well as a Gaussian Copula process using such quantile estimate as a prior. We show that these strategies can combine the estimation of multiple metrics such as runtime and accuracy, steering the optimization toward cheaper hyperparameters for the same level of accuracy. Experiments on an extensive set of hyperparameter tuning tasks demonstrate significant improvements over state-of-the-art methods.
Tasks Transfer Learning
Published 2019-09-30
URL https://arxiv.org/abs/1909.13595v1
PDF https://arxiv.org/pdf/1909.13595v1.pdf
PWC https://paperswithcode.com/paper/a-copula-approach-for-hyperparameter-transfer-1
Repo
Framework
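The quantile mapping at the heart of the approach can be sketched in a few lines: transform each task's metric values through its empirical CDF and the Gaussian quantile function, then regress in the transformed space. The rank-based estimator below is one standard choice, not necessarily the paper's exact one.

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_transform(y):
    """Map raw metric values to Gaussian space via the empirical CDF and
    the probit function, making downstream regression insensitive to the
    scale and outliers of individual tasks."""
    n = len(y)
    ranks = np.argsort(np.argsort(y)) + 1   # ranks 1..n
    cdf = ranks / (n + 1)                   # empirical CDF, strictly inside (0, 1)
    return norm.ppf(cdf)                    # probit transform
```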

High Throughput Computation of Reference Ranges of Biventricular Cardiac Function on the UK Biobank Population Cohort

Title High Throughput Computation of Reference Ranges of Biventricular Cardiac Function on the UK Biobank Population Cohort
Authors Rahman Attar, Marco Pereanez, Ali Gooya, Xenia Alba, Le Zhang, Stefan K. Piechnik, Stefan Neubauer, Steffen E. Petersen, Alejandro F. Frangi
Abstract The exploitation of large-scale population data has the potential to improve healthcare by discovering and understanding patterns and trends within this data. To enable high throughput analysis of cardiac imaging data automatically, a pipeline should comprise quality monitoring of the input images, segmentation of the cardiac structures, assessment of the segmentation quality, and parsing of cardiac functional indexes. We present a fully automatic, high throughput image parsing workflow for the analysis of cardiac MR images, and test its performance on the UK Biobank (UKB) cardiac dataset. The proposed pipeline is capable of performing end-to-end image processing including: data organisation, image quality assessment, shape model initialisation, segmentation, segmentation quality assessment, and functional parameter computation; all without any user interaction. To the best of our knowledge, this is the first paper tackling the fully automatic 3D analysis of the UKB population study, providing reference ranges for all key cardiovascular functional indexes from both left and right ventricles of the heart. We tested our workflow on a reference cohort of 800 healthy subjects for which manual delineations and reference functional indexes exist. Our results show statistically significant agreement between the manually obtained reference indexes and those automatically computed using our framework.
Tasks Image Quality Assessment
Published 2019-01-10
URL http://arxiv.org/abs/1901.03326v1
PDF http://arxiv.org/pdf/1901.03326v1.pdf
PWC https://paperswithcode.com/paper/high-throughput-computation-of-reference
Repo
Framework
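The pipeline stages enumerated in the abstract compose naturally; a skeleton under the assumption that each stage is supplied as a callable (the actual components are the paper's own models):

```python
def analyse_cardiac_mr(images, image_qa, init_shape, segment, seg_qa, indexes):
    """Fully automatic cardiac MR analysis: quality-check the inputs,
    segment with a shape-model initialisation, quality-check the
    segmentations, then compute the functional indexes."""
    usable = [im for im in images if image_qa(im)]           # image QA
    segs = [segment(im, init_shape(im)) for im in usable]    # initialise + segment
    good = [s for s in segs if seg_qa(s)]                    # segmentation QA
    return [indexes(s) for s in good]                        # LV/RV functional indexes
```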

Ensemble-Based Deep Reinforcement Learning for Chatbots

Title Ensemble-Based Deep Reinforcement Learning for Chatbots
Authors Heriberto Cuayáhuitl, Donghyeon Lee, Seonghan Ryu, Yongjin Cho, Sungja Choi, Satish Indurthi, Seunghak Yu, Hyungtak Choi, Inchul Hwang, Jihie Kim
Abstract Trainable chatbots that exhibit fluent and human-like conversations remain a big challenge in artificial intelligence. Deep Reinforcement Learning (DRL) is promising for addressing this challenge, but its successful application remains an open question. This article describes a novel ensemble-based approach applied to value-based DRL chatbots, which use finite action sets as a form of meaning representation. In our approach, while dialogue actions are derived from sentence clustering, the training datasets in our ensemble are derived from dialogue clustering. The latter aims to induce specialised agents that learn to interact in a particular style. In order to facilitate neural chatbot training using our proposed approach, we assume dialogue data in raw text only – without any manually-labelled data. Experimental results using chitchat data reveal that (1) near human-like dialogue policies can be induced, (2) generalisation to unseen data is a difficult problem, and (3) training an ensemble of chatbot agents is essential for improved performance over using a single agent. In addition to evaluations using held-out data, our results are further supported by a human evaluation that rated dialogues in terms of fluency, engagingness and consistency – which revealed that our proposed dialogue rewards strongly correlate with human judgements.
Tasks Chatbot
Published 2019-08-27
URL https://arxiv.org/abs/1908.10422v1
PDF https://arxiv.org/pdf/1908.10422v1.pdf
PWC https://paperswithcode.com/paper/ensemble-based-deep-reinforcement-learning
Repo
Framework
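A rough sketch of the ensemble construction: cluster the dialogues, then train one specialised value-based agent per cluster. The agent interface (`train`, `q_value`, `actions`) is hypothetical; only the clustering call is a real scikit-learn API.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_ensemble(dialogue_vecs, dialogues, make_agent, n_clusters=5):
    """One agent per dialogue cluster, so each specialises in a style."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(dialogue_vecs)
    agents = []
    for c in range(n_clusters):
        subset = [d for d, lbl in zip(dialogues, km.labels_) if lbl == c]
        agent = make_agent()
        agent.train(subset)        # hypothetical DRL training call
        agents.append(agent)
    return agents

def ensemble_act(agents, state):
    # Act with the highest value estimate across all ensemble members.
    scored = [(a.q_value(state, act), act)
              for a in agents for act in a.actions(state)]
    return max(scored)[1]
```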

On Attribution of Recurrent Neural Network Predictions via Additive Decomposition

Title On Attribution of Recurrent Neural Network Predictions via Additive Decomposition
Authors Mengnan Du, Ninghao Liu, Fan Yang, Shuiwang Ji, Xia Hu
Abstract RNN models have achieved state-of-the-art performance in a wide range of text mining tasks. However, these models are often regarded as black-boxes and are criticized for their lack of interpretability. In this paper, we enhance the interpretability of RNNs by providing interpretable rationales for RNN predictions. Nevertheless, interpreting RNNs is a challenging problem. Firstly, unlike existing methods that rely on local approximation, we aim to provide rationales that are more faithful to the decision making process of RNN models. Secondly, a flexible interpretation method should be able to assign contribution scores to text segments of varying lengths, instead of only to individual words. To tackle these challenges, we propose a novel attribution method, called REAT, to provide interpretations for RNN predictions. REAT decomposes the final prediction of an RNN into the additive contributions of each word in the input text. This additive decomposition enables REAT to further obtain phrase-level attribution scores. In addition, REAT is generally applicable to various RNN architectures, including GRU, LSTM and their bidirectional versions. Experimental results demonstrate the faithfulness and interpretability of the proposed attribution method. Comprehensive analysis shows that our attribution method can unveil useful linguistic knowledge captured by RNNs. Further analysis demonstrates that our method can be used as a debugging tool to examine the vulnerability and failure reasons of RNNs, which may point to several promising directions for improving the generalization ability of RNNs.
Tasks Decision Making
Published 2019-03-27
URL http://arxiv.org/abs/1903.11245v1
PDF http://arxiv.org/pdf/1903.11245v1.pdf
PWC https://paperswithcode.com/paper/on-attribution-of-recurrent-neural-network
Repo
Framework
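A naive version of additive decomposition (a simplification, not the exact REAT formulation) scores word t by how much the output logit moves after the RNN consumes it; summing consecutive scores then gives phrase-level attributions.

```python
import torch

def additive_word_contributions(cell, classifier, embeddings):
    """Per-word contributions for a single-logit classifier on top of a
    recurrent cell (e.g. torch.nn.GRUCell). `embeddings` is (seq_len, dim)."""
    h = torch.zeros(1, cell.hidden_size)
    prev = classifier(h)
    contribs = []
    for x_t in embeddings:
        h = cell(x_t.unsqueeze(0), h)          # advance the recurrence one word
        logit = classifier(h)
        contribs.append((logit - prev).item()) # change attributable to this word
        prev = logit
    return contribs                            # sums over runs give phrase scores
```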

Hierarchical Deep Q-Network from Imperfect Demonstrations in Minecraft

Title Hierarchical Deep Q-Network from Imperfect Demonstrations in Minecraft
Authors Alexey Skrynnik, Aleksey Staroverov, Ermek Aitygulov, Kirill Aksenov, Vasilii Davydov, Aleksandr I. Panov
Abstract We present hierarchical Deep Q-Network with Forgetting (HDQF), which took first place in the MineRL competition. HDQF works from imperfect demonstrations by utilizing the hierarchical structure of expert trajectories, extracting effective sequences of meta-actions and subgoals. We introduce a structured, task-dependent replay buffer and a forgetting technique that allow the HDQF agent to gradually erase poor-quality expert data from the buffer. In this paper we present the details of the HDQF algorithm and give experimental results in the Minecraft domain.
Tasks
Published 2019-12-18
URL https://arxiv.org/abs/1912.08664v2
PDF https://arxiv.org/pdf/1912.08664v2.pdf
PWC https://paperswithcode.com/paper/hierarchical-deep-q-network-with-forgetting
Repo
Framework
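One way to realise the forgetting idea, sketched with an assumed scoring rule (the paper's exact criterion is not reproduced here): keep agent and expert transitions in separate pools and periodically drop the expert transitions the current value function disagrees with most.

```python
import random

class ForgettingReplayBuffer:
    """Replay buffer that gradually erases poor-quality expert data."""
    def __init__(self):
        self.agent_data, self.expert_data = [], []

    def add(self, transition, expert=False):
        (self.expert_data if expert else self.agent_data).append(transition)

    def forget(self, td_errors, keep_frac=0.9):
        # Rank expert transitions by TD error and drop the worst ones.
        ranked = sorted(zip(td_errors, range(len(self.expert_data))))
        keep = {i for _, i in ranked[:int(keep_frac * len(ranked))]}
        self.expert_data = [t for i, t in enumerate(self.expert_data) if i in keep]

    def sample(self, batch_size, expert_frac=0.25):
        k = min(int(batch_size * expert_frac), len(self.expert_data))
        return (random.sample(self.expert_data, k) +
                random.sample(self.agent_data,
                              min(batch_size - k, len(self.agent_data))))
```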

Remaining Useful Lifetime Prediction via Deep Domain Adaptation

Title Remaining Useful Lifetime Prediction via Deep Domain Adaptation
Authors Paulo R. de O. da Costa, Alp Akcay, Yingqian Zhang, Uzay Kaymak
Abstract In Prognostics and Health Management (PHM), sufficient prior observed degradation data is usually critical for Remaining Useful Lifetime (RUL) prediction. Most previous data-driven prediction methods assume that training (source) and testing (target) condition monitoring data have similar distributions. However, due to different operating conditions, fault modes, noise, and equipment updates, distribution shift exists across different data domains. This shift reduces the performance of predictive models previously built for specific conditions when no observed run-to-failure data is available for retraining. To address this issue, this paper proposes a new data-driven approach for domain adaptation in prognostics using Long Short-Term Memory (LSTM) neural networks. We use a time window approach to extract temporal information from time-series data in a source domain with observed RUL values and a target domain containing only sensor information. We propose a Domain Adversarial Neural Network (DANN) approach to learn domain-invariant features that can be used to predict the RUL in the target domain. The experimental results show that the proposed method can provide more reliable RUL predictions under datasets with different operating conditions and fault modes. These results suggest that the proposed method offers a promising approach to performing domain adaptation in practical PHM applications.
Tasks Domain Adaptation, Time Series
Published 2019-07-17
URL https://arxiv.org/abs/1907.07480v1
PDF https://arxiv.org/pdf/1907.07480v1.pdf
PWC https://paperswithcode.com/paper/remaining-useful-lifetime-prediction-via-deep
Repo
Framework
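The DANN component rests on a gradient reversal layer: identity on the forward pass, negated gradient on the backward pass, so the feature extractor learns domain-invariant features while the domain classifier tries to separate them. A standard PyTorch sketch (the surrounding LSTM and RUL regressor are omitted):

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity forward; gradient scaled by -lam on the way back."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Usage sketch: LSTM features from sliding time windows feed a RUL head
# (trained on labelled source data) and, through grad_reverse, a domain
# classifier trained to tell source windows from target windows.
```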

Contextual Prediction Difference Analysis

Title Contextual Prediction Difference Analysis
Authors Jindong Gu, Volker Tresp
Abstract The interpretation of black-box models has been investigated in recent years. A number of model-aware saliency methods were proposed to explain individual classification decisions by creating saliency maps. However, they are not applicable when the parameters and the gradients of the underlying models are unavailable. Recently, model-agnostic methods have received increased attention. As one of them, Prediction Difference Analysis (PDA), a probabilistically sound methodology, was proposed. In this work, we first show that PDA can suffer from saturated classifiers. The saturation phenomenon exists widely in current neural network-based classifiers. To understand the decisions of saturated classifiers better, we further propose Contextual PDA, which runs hundreds of times faster than PDA. The experiments show the superiority of our method by explaining image classifications of state-of-the-art deep convolutional neural networks. We also apply our method to commercial general vision recognition systems.
Tasks
Published 2019-10-21
URL https://arxiv.org/abs/1910.09086v1
PDF https://arxiv.org/pdf/1910.09086v1.pdf
PWC https://paperswithcode.com/paper/contextual-prediction-difference-analysis
Repo
Framework
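Basic PDA (the baseline the paper speeds up, not its contextual variant) scores a feature patch by how much the class log-odds drop when the patch is marginalised out by resampling; a sketch with an assumed `model` callable returning a class probability:

```python
import numpy as np

def pda_relevance(model, x, patch_idx, samples):
    """Relevance of x[patch_idx]: log-odds with the patch minus log-odds
    with the patch marginalised over candidate replacement values."""
    def log_odds(p, eps=1e-8):
        p = np.clip(p, eps, 1 - eps)
        return np.log(p / (1 - p))

    p_full = model(x)
    probs = []
    for s in samples:            # replacement values for the patch
        x_mod = x.copy()
        x_mod[patch_idx] = s
        probs.append(model(x_mod))
    return log_odds(p_full) - log_odds(np.mean(probs))
```

When a classifier saturates, `p_full` barely moves under perturbations, which is one intuition for why plain PDA struggles in that regime.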

General Purpose Incremental Covariance Update and Efficient Belief Space Planning via Factor-Graph Propagation Action Tree

Title General Purpose Incremental Covariance Update and Efficient Belief Space Planning via Factor-Graph Propagation Action Tree
Authors Dmitry Kopitkov, Vadim Indelman
Abstract Fast covariance calculation is required both for SLAM (e.g., in order to solve data association) and for evaluating the information-theoretic term for different candidate actions in belief space planning (BSP). In this paper we make two primary contributions. First, we develop a novel general-purpose incremental covariance update technique, which efficiently recovers specific covariance entries after any change in the inference problem, such as introduction of new observations/variables or re-linearization of the state vector. Our approach is shown to recover them faster than other state-of-the-art methods. Second, we present a computationally efficient approach for BSP in high-dimensional state spaces, leveraging our incremental covariance update method. State-of-the-art BSP approaches perform belief propagation for each candidate action and then evaluate an objective function that typically includes an information-theoretic term, such as entropy or information gain. Yet, candidate actions often have similar parts (e.g., common trajectory parts), which are nevertheless evaluated separately for each candidate. Moreover, calculating the information-theoretic term involves a costly determinant computation of the entire information (covariance) matrix, which is O(n^3) with n being the dimension of the state, or costly Schur complement operations if only the marginal posterior covariance of certain variables is of interest. Our approach, rAMDL-Tree, extends our previous BSP method rAMDL by exploiting incremental covariance calculation and reusing calculations between common parts of non-myopic candidate actions, such that these parts are evaluated only once, in contrast to existing approaches.
Tasks
Published 2019-06-05
URL https://arxiv.org/abs/1906.02249v1
PDF https://arxiv.org/pdf/1906.02249v1.pdf
PWC https://paperswithcode.com/paper/general-purpose-incremental-covariance-update
Repo
Framework
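The costly marginal-covariance recovery the paper accelerates is, in its direct form, a Schur complement over the information matrix; a plain numpy version for reference:

```python
import numpy as np

def marginal_covariance(Lmbda, idx):
    """Marginal covariance of the variables in `idx`, recovered from the
    full information matrix via the Schur complement:
    Sigma_aa = (L_aa - L_ab L_bb^{-1} L_ba)^{-1}."""
    idx = np.asarray(idx)
    rest = np.setdiff1d(np.arange(Lmbda.shape[0]), idx)
    L_aa = Lmbda[np.ix_(idx, idx)]
    L_ab = Lmbda[np.ix_(idx, rest)]
    L_bb = Lmbda[np.ix_(rest, rest)]
    return np.linalg.inv(L_aa - L_ab @ np.linalg.solve(L_bb, L_ab.T))
```

The incremental technique avoids redoing this dense computation after every small change to the inference problem, and rAMDL-Tree further reuses it across candidate actions that share trajectory parts.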

Deep Reinforcement Learning for Adaptive Traffic Signal Control

Title Deep Reinforcement Learning for Adaptive Traffic Signal Control
Authors Kai Liang Tan, Subhadipto Poddar, Anuj Sharma, Soumik Sarkar
Abstract Many existing traffic signal controllers are either simple adaptive controllers based on sensors placed around traffic intersections, or optimized by traffic engineers on a fixed schedule. Optimizing traffic controllers is time-consuming and usually requires experienced traffic engineers. Recent research has demonstrated the potential of using deep reinforcement learning (DRL) in this context. However, most of the studies do not consider realistic settings that could seamlessly transition into deployment. In this paper, we propose a DRL-based adaptive traffic signal control framework that explicitly considers realistic traffic scenarios, sensors, and physical constraints. Within this framework, we also propose a novel reward function that yields significantly improved traffic performance compared to the typical baseline pre-timed and fully-actuated traffic signal controllers. The framework is implemented and validated on a simulation platform emulating real-life traffic scenarios and sensor data streams.
Tasks
Published 2019-11-14
URL https://arxiv.org/abs/1911.06294v1
PDF https://arxiv.org/pdf/1911.06294v1.pdf
PWC https://paperswithcode.com/paper/deep-reinforcement-learning-for-adaptive
Repo
Framework
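The abstract does not spell out the proposed reward, but a common baseline formulation in DRL signal control makes the shape of the problem concrete: reward the agent for reducing cumulative vehicle waiting time between decisions (illustrative only, not the paper's reward).

```python
def waiting_time_reward(prev_cum_wait, cum_wait):
    """Negative change in cumulative waiting time across the intersection:
    positive when the last signal action shortened queues."""
    return prev_cum_wait - cum_wait
```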

On Maintaining Linear Convergence of Distributed Learning and Optimization under Limited Communication

Title On Maintaining Linear Convergence of Distributed Learning and Optimization under Limited Communication
Authors Sindri Magnússon, Hossein Shokri-Ghadikolaei, Na Li
Abstract In distributed optimization and machine learning, multiple nodes coordinate to solve large problems. To do this, the nodes need to compress important algorithm information to bits so that it can be communicated over a digital channel. The communication time of these algorithms follows a complex interplay between a) the algorithm’s convergence properties, b) the compression scheme, and c) the transmission rate offered by the digital channel. We explore these relationships for a general class of linearly convergent distributed algorithms. In particular, we illustrate how to design quantizers for these algorithms that compress the communicated information to a few bits while still preserving the linear convergence. Moreover, we characterize the communication time of these algorithms as a function of the available transmission rate. We illustrate our results on learning algorithms using different communication structures, such as decentralized algorithms where a single master coordinates information from many workers, and fully distributed algorithms where only neighbours in a communication graph can communicate. We conclude that a co-design of machine learning and communication protocols is essential for machine learning over networks to flourish.
Tasks Distributed Optimization
Published 2019-02-26
URL https://arxiv.org/abs/1902.11163v3
PDF https://arxiv.org/pdf/1902.11163v3.pdf
PWC https://paperswithcode.com/paper/on-maintaining-linear-convergence-of
Repo
Framework
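A typical quantizer with the property described above spends a fixed bit budget on a box that contracts around the previous iterate at the algorithm's convergence rate; a sketch (the paper's exact construction may differ):

```python
import numpy as np

def adaptive_quantize(x, center, radius, bits):
    """Uniformly quantize each coordinate of x to `bits` bits inside the
    box [center - radius, center + radius]. Shrinking `radius` at the
    algorithm's linear rate keeps the quantization error contracting too,
    which is what preserves linear convergence."""
    levels = 2 ** bits - 1
    lo = center - radius
    clipped = np.clip(x, lo, center + radius)
    step = 2 * radius / levels
    codes = np.round((clipped - lo) / step)   # integers in [0, levels]
    return lo + codes * step                  # de-quantized value
```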

Comparative Analysis of Predictive Methods for Early Assessment of Compliance with Continuous Positive Airway Pressure Therapy

Title Comparative Analysis of Predictive Methods for Early Assessment of Compliance with Continuous Positive Airway Pressure Therapy
Authors Xavier Rafael-Palou, Cecilia Turino, Alexander Steblin, Manuel Sánchez-de-la-Torre, Ferran Barbé, Eloisa Vargiu
Abstract Patients suffering from obstructive sleep apnea are mainly treated with continuous positive airway pressure (CPAP). Good compliance with this therapy is broadly accepted as more than 4 hours of average nightly CPAP use. Although it is a highly effective treatment, compliance is difficult to achieve, with serious consequences for the patients’ health. Previous works already reported factors significantly related to compliance with the therapy. However, further research is still required to help clinicians anticipate patients’ therapy compliance early. This work takes a further step in this direction by building compliance classifiers with CPAP therapy at three different moments of the patient follow-up (i.e. before the therapy starts and at months 1 and 3 after the baseline). Results of the clinical trial confirmed that month 3 was the time point with the most accurate classifier, reaching f1-scores of 87% and 84% in cross-validation and test. At month 1, performance was almost as high as at month 3, with f1-scores of 82% and 84%. At baseline, where no information about patients’ CPAP use was available yet, the best classifier achieved f1-scores of 73% and 76% in cross-validation and on the test set, respectively. Subsequent analyses carried out with the best classifier of each time point revealed that certain baseline factors (i.e. headaches, psychological symptoms, arterial hypertension and the EuroQol visual analogue scale) were closely related to the prediction of compliance independently of the time point. In addition, among the variables taken only during the follow-up of the patients, the Epworth score and the average nighttime hours were the most important for predicting compliance with CPAP.
Tasks
Published 2019-12-27
URL https://arxiv.org/abs/1912.12116v1
PDF https://arxiv.org/pdf/1912.12116v1.pdf
PWC https://paperswithcode.com/paper/comparative-analysis-of-predictive-methods
Repo
Framework
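The evaluation protocol (one classifier per follow-up time point, scored by cross-validated and held-out f1) can be sketched generically; the actual models and feature sets of the study are not specified here.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def evaluate_time_point(X, y):
    """Fit a classifier on the features available at one time point of
    follow-up and report mean cross-validated f1."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    return cross_val_score(clf, X, y, cv=5, scoring="f1").mean()
```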