February 2, 2020

3176 words 15 mins read

Paper Group AWR 34

DREAM: A Challenge Dataset and Models for Dialogue-Based Reading Comprehension. Explainable Prediction of Adverse Outcomes Using Clinical Notes. Attention-based Multi-instance Neural Network for Medical Diagnosis from Incomplete and Low Quality Data. Dense Extreme Inception Network: Towards a Robust CNN Model for Edge Detection. Multi-Stage Documen …

DREAM: A Challenge Dataset and Models for Dialogue-Based Reading Comprehension

Title DREAM: A Challenge Dataset and Models for Dialogue-Based Reading Comprehension
Authors Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Yejin Choi, Claire Cardie
Abstract We present DREAM, the first dialogue-based multiple-choice reading comprehension dataset. Collected from English-as-a-foreign-language examinations designed by human experts to evaluate the comprehension level of Chinese learners of English, our dataset contains 10,197 multiple-choice questions for 6,444 dialogues. In contrast to existing reading comprehension datasets, DREAM is the first to focus on in-depth multi-turn multi-party dialogue understanding. DREAM is likely to present significant challenges for existing reading comprehension systems: 84% of answers are non-extractive, 85% of questions require reasoning beyond a single sentence, and 34% of questions also involve commonsense knowledge. We apply several popular neural reading comprehension models that primarily exploit surface information within the text and find them to, at best, just barely outperform a rule-based approach. We next investigate the effects of incorporating dialogue structure and different kinds of general world knowledge into both rule-based and (neural and non-neural) machine learning-based reading comprehension models. Experimental results on the DREAM dataset show the effectiveness of dialogue structure and general world knowledge. DREAM will be available at https://dataset.org/dream/.
Tasks Dialogue Understanding, Reading Comprehension
Published 2019-02-01
URL http://arxiv.org/abs/1902.00164v1
PDF http://arxiv.org/pdf/1902.00164v1.pdf
PWC https://paperswithcode.com/paper/dream-a-challenge-dataset-and-models-for
Repo https://github.com/nlpdata/dream
Framework tf
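
The paper notes that surface-level neural readers barely beat a rule-based approach on DREAM. Below is a minimal sketch of such a lexical-overlap baseline for dialogue-based multiple-choice questions; the toy dialogue and the data layout are illustrative assumptions, not the released DREAM format or the authors' baseline.

```python
# A minimal lexical-overlap baseline for dialogue-based multiple-choice reading
# comprehension. The data layout below is an assumption for illustration, not
# the exact DREAM release format.

def score_choice(dialogue_turns, question, choice):
    """Count how many tokens of question+choice appear in the dialogue."""
    context_vocab = set(" ".join(dialogue_turns).lower().split())
    query = (question + " " + choice).lower().split()
    return sum(1 for tok in query if tok in context_vocab)

def answer(dialogue_turns, question, choices):
    """Pick the choice with the highest lexical overlap with the dialogue."""
    return max(range(len(choices)),
               key=lambda i: score_choice(dialogue_turns, question, choices[i]))

# Toy example (hypothetical dialogue, not from the dataset).
dialogue = ["M: Are you coming to the party tonight?",
            "W: I can't, I have to study for my English exam."]
question = "Why can't the woman go to the party?"
choices = ["She has to study for an exam.",
           "She is hosting another party.",
           "She dislikes parties."]
print(answer(dialogue, question, choices))  # expected: 0
```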

Explainable Prediction of Adverse Outcomes Using Clinical Notes

Title Explainable Prediction of Adverse Outcomes Using Clinical Notes
Authors Justin R. Lovelace, Nathan C. Hurley, Adrian D. Haimovich, Bobak J. Mortazavi
Abstract Clinical notes contain a large amount of clinically valuable information that is ignored in many clinical decision support systems due to the difficulty of mining that information. Recent work has found success leveraging deep learning models for the prediction of clinical outcomes using clinical notes. However, these models fail to provide clinically relevant and interpretable information that clinicians can utilize for informed clinical care. In this work, we augment a popular convolutional model with an attention mechanism and apply it to unstructured clinical notes for the prediction of ICU readmission and mortality. We find that the addition of the attention mechanism leads to competitive performance while allowing for straightforward interpretation of predictions. We develop clear visualizations to present important spans of text for both individual predictions and high-risk cohorts. We then conduct a qualitative analysis and demonstrate that our model consistently attends to clinically meaningful portions of the narrative for all of the outcomes that we explore.
Tasks
Published 2019-10-30
URL https://arxiv.org/abs/1910.14095v2
PDF https://arxiv.org/pdf/1910.14095v2.pdf
PWC https://paperswithcode.com/paper/explainable-prediction-of-adverse-outcomes
Repo https://github.com/justinlovelace/explainable-mimic-predictions
Framework pytorch
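
The abstract describes augmenting a convolutional text model with an attention mechanism whose weights highlight the note spans behind each prediction. Below is a minimal PyTorch sketch of that general pattern; the vocabulary size, dimensions, kernel width, and two-outcome head are illustrative assumptions rather than the authors' configuration.

```python
# A hedged PyTorch sketch of a convolutional text classifier augmented with an
# attention mechanism over the convolved positions, in the spirit of the model
# described above. All hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class AttnConvClassifier(nn.Module):
    def __init__(self, vocab_size=30000, emb_dim=100, n_filters=128,
                 kernel_size=5, n_outcomes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size, padding=kernel_size // 2)
        self.attn = nn.Linear(n_filters, 1)          # scores each note position
        self.out = nn.Linear(n_filters, n_outcomes)  # e.g. readmission, mortality

    def forward(self, tokens):                        # tokens: (batch, seq_len)
        x = self.emb(tokens).transpose(1, 2)          # (batch, emb_dim, seq_len)
        h = torch.relu(self.conv(x)).transpose(1, 2)  # (batch, seq_len, n_filters)
        scores = self.attn(h).squeeze(-1)             # (batch, seq_len)
        weights = torch.softmax(scores, dim=1)        # attention over note positions
        context = (weights.unsqueeze(-1) * h).sum(1)  # weighted summary vector
        return self.out(context), weights             # weights serve as the explanation

model = AttnConvClassifier()
logits, attn = model(torch.randint(1, 30000, (4, 256)))
print(logits.shape, attn.shape)  # torch.Size([4, 2]) torch.Size([4, 256])
```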

Attention-based Multi-instance Neural Network for Medical Diagnosis from Incomplete and Low Quality Data

Title Attention-based Multi-instance Neural Network for Medical Diagnosis from Incomplete and Low Quality Data
Authors Zeyuan Wang, Josiah Poon, Shiding Sun, Simon Poon
Abstract One way to extract patterns from clinical records is to treat each patient record as a bag containing a variable number of instances in the form of symptoms. Medical diagnosis then amounts to discovering the informative instances and mapping them to one or more diseases. In many cases, patients are represented as vectors in some feature space and a classifier is then applied to generate diagnosis results. However, in many real-world settings the data is of low quality for a variety of reasons, such as problems with consistency, integrity, completeness, and accuracy. In this paper, we propose a novel approach, the attention-based multi-instance neural network (AMI-Net), which performs single-disease classification based only on the existing and valid information in real-world outpatient records. For each patient, it takes a bag of instances as input and outputs the bag label directly in an end-to-end way. An embedding layer is adopted at the beginning, mapping instances into an embedding space that represents the individual patient’s condition. The correlations among instances and their importance for the final classification are captured by a multi-head attention transformer, instance-level multi-instance pooling, and bag-level multi-instance pooling. The proposed approach was tested on two non-standardized and highly imbalanced datasets, one in the Traditional Chinese Medicine (TCM) domain and the other in the Western Medicine (WM) domain. Our preliminary results show that the proposed approach outperforms all baselines by a significant margin.
Tasks Medical Diagnosis
Published 2019-04-09
URL http://arxiv.org/abs/1904.04460v1
PDF http://arxiv.org/pdf/1904.04460v1.pdf
PWC https://paperswithcode.com/paper/attention-based-multi-instance-neural-network
Repo https://github.com/Zeyuan-Wang/AMI-Net
Framework tf
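
Below is a minimal sketch of the bag-of-instances idea: symptoms are embedded, correlated via multi-head self-attention, and pooled with attention weights into a bag-level representation for a single-disease prediction. It is written in PyTorch for brevity even though the released code uses TensorFlow, and all dimensions and pooling choices are illustrative assumptions.

```python
# A hedged PyTorch-style sketch of attention-based multi-instance pooling as
# described above; not the authors' TensorFlow implementation.
import torch
import torch.nn as nn

class AMINetSketch(nn.Module):
    def __init__(self, n_symbols=5000, d_model=64, n_heads=4):
        super().__init__()
        self.emb = nn.Embedding(n_symbols, d_model, padding_idx=0)
        # Multi-head self-attention captures correlations among instances.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.instance_score = nn.Linear(d_model, 1)   # instance-level pooling weights
        self.classifier = nn.Linear(d_model, 1)       # single-disease bag label

    def forward(self, bag, pad_mask):                  # bag: (batch, n_instances)
        x = self.emb(bag)                              # (batch, n_instances, d_model)
        h, _ = self.attn(x, x, x, key_padding_mask=pad_mask)
        w = self.instance_score(h).squeeze(-1)         # (batch, n_instances)
        w = w.masked_fill(pad_mask, float("-inf"))
        w = torch.softmax(w, dim=1)                    # weight the informative instances
        bag_repr = (w.unsqueeze(-1) * h).sum(dim=1)    # bag-level pooled representation
        return torch.sigmoid(self.classifier(bag_repr)).squeeze(-1)

bag = torch.tensor([[3, 17, 250, 0, 0]])               # padded symptom ids (toy values)
mask = bag.eq(0)                                       # True where padding
print(AMINetSketch()(bag, mask))                       # bag-level probability
```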

Dense Extreme Inception Network: Towards a Robust CNN Model for Edge Detection

Title Dense Extreme Inception Network: Towards a Robust CNN Model for Edge Detection
Authors Xavier Soria, Edgar Riba, Angel D. Sappa
Abstract This paper proposes a deep learning based edge detector inspired by both HED (Holistically-Nested Edge Detection) and the Xception network. The proposed approach generates thin edge maps that are plausible to the human eye, and it can be used in any edge detection task without prior training or fine-tuning. As a second contribution, a large dataset with carefully annotated edges has been generated. This dataset has been used for training the proposed approach as well as the state-of-the-art algorithms used for comparison. Quantitative and qualitative evaluations have been performed on different benchmarks, showing improvements with the proposed method when the F-measures of ODS and OIS are considered.
Tasks Edge Detection
Published 2019-09-04
URL https://arxiv.org/abs/1909.01955v2
PDF https://arxiv.org/pdf/1909.01955v2.pdf
PWC https://paperswithcode.com/paper/dense-extreme-inception-network-towards-a
Repo https://github.com/xavysp/DexiNed
Framework tf
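
The following sketch shows the HED-style ingredient the approach builds on: side outputs from intermediate feature maps are upsampled to the input resolution, fused, and each supervised against the ground-truth edge map. The tiny backbone and channel sizes here are illustrative assumptions and not the DexiNed architecture itself.

```python
# A hedged PyTorch sketch of HED-style deep supervision for edge detection:
# multiple side outputs plus a learned fusion, all trained against the edge map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEdgeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.side1 = nn.Conv2d(16, 1, 1)   # side output at full resolution
        self.side2 = nn.Conv2d(32, 1, 1)   # side output at half resolution
        self.fuse = nn.Conv2d(2, 1, 1)     # learned fusion of the side outputs

    def forward(self, x):
        h1 = self.block1(x)
        h2 = self.block2(h1)
        s1 = self.side1(h1)
        s2 = F.interpolate(self.side2(h2), size=x.shape[-2:], mode="bilinear",
                           align_corners=False)
        fused = self.fuse(torch.cat([s1, s2], dim=1))
        return [s1, s2, fused]             # every output is supervised

net = TinyEdgeNet()
img = torch.rand(1, 3, 64, 64)
edges = (torch.rand(1, 1, 64, 64) > 0.9).float()   # toy ground-truth edge map
loss = sum(F.binary_cross_entropy_with_logits(o, edges) for o in net(img))
print(loss.item())
```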

Multi-Stage Document Ranking with BERT

Title Multi-Stage Document Ranking with BERT
Authors Rodrigo Nogueira, Wei Yang, Kyunghyun Cho, Jimmy Lin
Abstract The advent of deep neural networks pre-trained via language modeling tasks has spurred a number of successful applications in natural language processing. This work explores one such popular model, BERT, in the context of document ranking. We propose two variants, called monoBERT and duoBERT, that formulate the ranking problem as pointwise and pairwise classification, respectively. These two models are arranged in a multi-stage ranking architecture to form an end-to-end search system. One major advantage of this design is the ability to trade off quality against latency by controlling the admission of candidates into each pipeline stage, and by doing so, we are able to find operating points that offer a good balance between these two competing metrics. On two large-scale datasets, MS MARCO and TREC CAR, experiments show that our model produces results that are either at or comparable to the state of the art. Ablation studies show the contributions of each component and characterize the latency/quality tradeoff space.
Tasks Document Ranking, Language Modelling
Published 2019-10-31
URL https://arxiv.org/abs/1910.14424v1
PDF https://arxiv.org/pdf/1910.14424v1.pdf
PWC https://paperswithcode.com/paper/multi-stage-document-ranking-with-bert
Repo https://github.com/castorini/duobert
Framework tf
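
A hedged sketch of the two ranking formulations with the Hugging Face transformers API is shown below. The `bert-base-uncased` checkpoint is a placeholder (the paper fine-tunes its own monoBERT and duoBERT models), packing the document pair into one segment with a literal [SEP] is an assumption for illustration, and without fine-tuning the scores are meaningless.

```python
# A hedged sketch of pointwise (monoBERT-style) and pairwise (duoBERT-style)
# re-ranking. Checkpoint names and input packing are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.eval()

def mono_score(query, doc):
    """Pointwise relevance score for a single (query, document) pair."""
    inputs = tok(query, doc, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.softmax(dim=-1)[0, 1].item()   # P(relevant)

def duo_score(query, doc_i, doc_j):
    """Pairwise preference: probability that doc_i beats doc_j for the query."""
    inputs = tok(query, doc_i + " [SEP] " + doc_j,
                 truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.softmax(dim=-1)[0, 1].item()

# Multi-stage idea: keep only the top candidates from the cheaper pointwise
# stage before running the more expensive pairwise stage.
docs = ["a passage about neural ranking", "an unrelated passage about cooking"]
top = sorted(docs, key=lambda d: mono_score("neural document ranking", d), reverse=True)[:2]
print(duo_score("neural document ranking", top[0], top[1]))
```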

EKT: Exercise-aware Knowledge Tracing for Student Performance Prediction

Title EKT: Exercise-aware Knowledge Tracing for Student Performance Prediction
Authors Qi Liu, Zhenya Huang, Yu Yin, Enhong Chen, Hui Xiong, Yu Su, Guoping Hu
Abstract For offering proactive services to students in intelligent education, one of the fundamental tasks is predicting their performance (e.g., scores) on future exercises, which requires tracking each student’s knowledge acquisition during her exercising activities. However, existing approaches can only exploit students’ exercising records, and the problem of extracting the rich information contained in the exercise materials (e.g., knowledge concepts, exercise content) to achieve both precise predictions of student performance and interpretable analysis of knowledge acquisition remains underexplored. In this paper, we present a holistic study of student performance prediction. To directly address the primary goal of prediction, we first propose a general Exercise-Enhanced Recurrent Neural Network (EERNN) framework that exploits both students’ records and the exercise contents. In EERNN, we simply summarize each student’s state into an integrated vector and trace it with a recurrent neural network, where we design a bidirectional LSTM to learn the encoding of each exercise’s content. For making predictions, we propose two implementations under EERNN with different strategies, i.e., EERNNM with a Markov property and EERNNA with an attention mechanism. Then, to explicitly track a student’s knowledge acquisition on multiple knowledge concepts, we extend EERNN to an explainable Exercise-aware Knowledge Tracing (EKT) framework by incorporating knowledge concept effects, where the student’s integrated state vector is extended to a knowledge state matrix. In EKT, we further develop a memory network for quantifying how much each exercise can affect students’ mastery of concepts during the exercising process. Finally, we conduct extensive experiments on large-scale real-world data. The results demonstrate the prediction effectiveness of the two frameworks as well as the superior interpretability of EKT.
Tasks Knowledge Tracing
Published 2019-06-07
URL https://arxiv.org/abs/1906.05658v1
PDF https://arxiv.org/pdf/1906.05658v1.pdf
PWC https://paperswithcode.com/paper/ekt-exercise-aware-knowledge-tracing-for
Repo https://github.com/bigdata-ustc/ekt
Framework pytorch
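
A rough PyTorch sketch of the EERNN skeleton follows: a bidirectional LSTM encodes each exercise's text, a recurrent cell traces the student state from (exercise, correctness) pairs, and, as in the attention variant, predictions attend over past states weighted by exercise similarity. All dimensions, the GRU cell, and the cosine-similarity attention are illustrative assumptions rather than the paper's exact design.

```python
# A hedged sketch of exercise-aware performance prediction in the EERNN spirit.
import torch
import torch.nn as nn

class EERNNSketch(nn.Module):
    def __init__(self, vocab=8000, emb=64, ex_dim=64, state_dim=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, emb, padding_idx=0)
        self.encoder = nn.LSTM(emb, ex_dim // 2, bidirectional=True, batch_first=True)
        self.tracer = nn.GRUCell(ex_dim + 1, state_dim)   # exercise encoding + correctness
        self.predict = nn.Linear(ex_dim + state_dim, 1)

    def encode(self, exercise_tokens):                    # (1, text_len)
        out, _ = self.encoder(self.word_emb(exercise_tokens))
        return out.mean(dim=1)                            # (1, ex_dim) content encoding

    def forward(self, exercises, responses):
        """exercises: list of token tensors; responses: list of 0/1 floats."""
        state = torch.zeros(1, self.tracer.hidden_size)
        states, encodings, preds = [], [], []
        for tokens, r in zip(exercises, responses):
            e = self.encode(tokens)
            if states:   # attention variant: weight past states by exercise similarity
                sims = torch.softmax(torch.stack(
                    [torch.cosine_similarity(e, past_e) for past_e in encodings]), dim=0)
                attended = sum(w * s for w, s in zip(sims, states))
            else:
                attended = state
            preds.append(torch.sigmoid(self.predict(torch.cat([e, attended], dim=1))))
            state = self.tracer(torch.cat([e, torch.tensor([[r]])], dim=1), state)
            states.append(state)
            encodings.append(e)
        return torch.cat(preds, dim=0)

model = EERNNSketch()
ex = [torch.randint(1, 8000, (1, 12)) for _ in range(3)]   # toy exercise texts
print(model(ex, [1.0, 0.0, 1.0]).shape)                    # torch.Size([3, 1])
```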

Unsupervised Visual Domain Adaptation: A Deep Max-Margin Gaussian Process Approach

Title Unsupervised Visual Domain Adaptation: A Deep Max-Margin Gaussian Process Approach
Authors Minyoung Kim, Pritish Sahu, Behnam Gholami, Vladimir Pavlovic
Abstract In unsupervised domain adaptation, it is widely known that the target domain error can be provably reduced by having a shared input representation that makes the source and target domains indistinguishable from each other. Very recently it has been shown that not only matching the marginal input distributions but also aligning the output (class) distributions is critical. The latter can be achieved by minimizing the maximum discrepancy of predictors (classifiers). In this paper, we adopt this principle, but propose a more systematic and effective way to achieve hypothesis consistency via Gaussian processes (GP). The GP allows us to define/induce a hypothesis space of classifiers from the posterior distribution of the latent random functions, turning the learning into a simple large-margin posterior separation problem, far easier to solve than previous approaches based on adversarial minimax optimization. We formulate a learning objective that effectively pushes the posterior to minimize the maximum discrepancy. This is further shown to be equivalent to maximizing margins and minimizing uncertainty of the class predictions in the target domain, a well-established principle in classical (semi-)supervised learning. Empirical results demonstrate that our approach is comparable or superior to existing methods on several benchmark domain adaptation datasets.
Tasks Domain Adaptation, Gaussian Processes, Unsupervised Domain Adaptation
Published 2019-02-23
URL http://arxiv.org/abs/1902.08727v1
PDF http://arxiv.org/pdf/1902.08727v1.pdf
PWC https://paperswithcode.com/paper/unsupervised-visual-domain-adaptation-a-deep
Repo https://github.com/seqam-lab/GPDA
Framework pytorch

ECGadv: Generating Adversarial Electrocardiogram to Misguide Arrhythmia Classification System

Title ECGadv: Generating Adversarial Electrocardiogram to Misguide Arrhythmia Classification System
Authors Huangxun Chen, Chenyu Huang, Qianyi Huang, Qian Zhang, Wei Wang
Abstract Deep neural network (DNN)-powered electrocardiogram (ECG) diagnosis systems have recently made promising progress toward taking over tedious examinations from cardiologists. However, their vulnerability to adversarial attacks still lacks comprehensive investigation. Existing attacks in the image domain are not directly applicable because of the distinct visual and dynamic properties of ECGs. This paper therefore takes a step toward thoroughly exploring adversarial attacks on DNN-powered ECG diagnosis systems. We analyze the properties of ECGs to design effective attack schemes under two attack models. Our results demonstrate the blind spots of DNN-powered diagnosis systems under adversarial attacks, which calls for adequate countermeasures.
Tasks
Published 2019-01-12
URL https://arxiv.org/abs/1901.03808v4
PDF https://arxiv.org/pdf/1901.03808v4.pdf
PWC https://paperswithcode.com/paper/ecgadv-generating-adversarial
Repo https://github.com/codespace123/ECGadv
Framework tf
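
The paper designs ECG-specific attack schemes under two threat models; the sketch below shows only the generic projected-gradient building block on a toy 1D classifier, as a hedged illustration of how a small additive perturbation is optimized to change a predicted diagnosis.

```python
# A hedged sketch of a generic L-infinity projected-gradient perturbation on a
# 1D signal classifier. A toy, untrained CNN stands in for the diagnosis
# network; the paper's ECG-tailored attacks are not reproduced here.
import torch
import torch.nn as nn
import torch.nn.functional as F

classifier = nn.Sequential(                 # toy stand-in for a DNN ECG classifier
    nn.Conv1d(1, 8, 7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(8, 4))   # 4 rhythm classes

def pgd_attack(x, true_label, eps=0.01, step=0.002, iters=20):
    """Find a small additive perturbation that raises the classification loss."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        loss = F.cross_entropy(classifier(x + delta), true_label)
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()       # ascend the loss
            delta.clamp_(-eps, eps)                 # stay within the L-inf budget
            delta.grad.zero_()
    return (x + delta).detach()

ecg = torch.randn(1, 1, 1000)                        # toy single-lead segment
label = torch.tensor([0])
adv = pgd_attack(ecg, label)
print(classifier(ecg).argmax().item(), classifier(adv).argmax().item())
```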

Learning Correspondence from the Cycle-Consistency of Time

Title Learning Correspondence from the Cycle-Consistency of Time
Authors Xiaolong Wang, Allan Jabri, Alexei A. Efros
Abstract We introduce a self-supervised method for learning visual correspondence from unlabeled video. The main idea is to use cycle-consistency in time as a free supervisory signal for learning visual representations from scratch. At training time, our model learns a feature map representation that is useful for performing cycle-consistent tracking. At test time, we use the acquired representation to find nearest neighbors across space and time. We demonstrate the generalizability of the representation – without finetuning – across a range of visual correspondence tasks, including video object segmentation, keypoint tracking, and optical flow. Our approach outperforms previous self-supervised methods and performs competitively with strongly supervised methods.
Tasks Optical Flow Estimation, Semantic Segmentation, Video Object Segmentation, Video Semantic Segmentation
Published 2019-03-18
URL http://arxiv.org/abs/1903.07593v2
PDF http://arxiv.org/pdf/1903.07593v2.pdf
PWC https://paperswithcode.com/paper/learning-correspondence-from-the-cycle
Repo https://github.com/xiaolonw/TimeCycle
Framework pytorch
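
A hedged sketch of the cycle-consistency signal follows: a patch representation is tracked forward through a few frames with soft attention over feature positions, tracked back, and penalized for not returning to its starting position. Random features stand in for the learned CNN representation, and the spatial localization details of the paper are omitted.

```python
# A hedged sketch of cycle-consistent soft tracking on toy feature grids.
import torch
import torch.nn.functional as F

def hop(rep, feats, temperature=0.07):
    """One soft tracking step: attend from rep (d,) to feats (n, d)."""
    weights = torch.softmax(feats @ rep / temperature, dim=0)    # (n,) location dist
    return weights, weights @ feats                              # new representation

torch.manual_seed(0)
d, n = 16, 25                                   # feature dim, 5x5 grid of positions
frames = [F.normalize(torch.randn(n, d), dim=1) for _ in range(4)]
start_idx = 12                                  # start the track at the center cell
rep = frames[0][start_idx]

for feats in frames[1:]:                        # track forward in time
    _, rep = hop(rep, feats)
for feats in reversed(frames[1:-1]):            # track backward in time
    _, rep = hop(rep, feats)
back_weights, _ = hop(rep, frames[0])           # final hop back to the first frame

# Cycle-consistency loss: the backward track should land where it started.
loss = -torch.log(back_weights[start_idx] + 1e-8)
print(loss.item())
```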

Unsupervised Grounding of Plannable First-Order Logic Representation from Images

Title Unsupervised Grounding of Plannable First-Order Logic Representation from Images
Authors Masataro Asai
Abstract Recently, there has been increasing interest in obtaining the relational structures of the environment in the Reinforcement Learning community. However, the resulting “relations” are not the discrete, logical predicates compatible with symbolic reasoning such as classical planning or goal recognition. Meanwhile, Latplan (Asai and Fukunaga 2018) bridged the gap between deep-learning perceptual systems and symbolic classical planners. One key component of the system is a neural network called the State AutoEncoder (SAE), which encodes an image-based input into a propositional representation compatible with classical planning. To get the best of both worlds, we propose the First-Order State AutoEncoder, an unsupervised architecture for grounding first-order logic predicates and facts. Each predicate models a relationship between objects by taking interpretable arguments and returning a propositional value. In experiments using 8-Puzzle and a photo-realistic Blocksworld environment, we show that (1) the resulting predicates capture interpretable relations (e.g. spatial), (2) they help obtain a compact, abstract model of the environment, and finally, (3) the resulting model is compatible with symbolic classical planning.
Tasks
Published 2019-02-21
URL http://arxiv.org/abs/1902.08093v4
PDF http://arxiv.org/pdf/1902.08093v4.pdf
PWC https://paperswithcode.com/paper/unsupervised-grounding-of-plannable-first
Repo https://github.com/guicho271828/latplan
Framework tf

Conversing by Reading: Contentful Neural Conversation with On-demand Machine Reading

Title Conversing by Reading: Contentful Neural Conversation with On-demand Machine Reading
Authors Lianhui Qin, Michel Galley, Chris Brockett, Xiaodong Liu, Xiang Gao, Bill Dolan, Yejin Choi, Jianfeng Gao
Abstract Although neural conversation models are effective in learning how to produce fluent responses, their primary challenge lies in knowing what to say to make the conversation contentful and non-vacuous. We present a new end-to-end approach to contentful neural conversation that jointly models response generation and on-demand machine reading. The key idea is to provide the conversation model with relevant long-form text on the fly as a source of external knowledge. The model performs QA-style reading comprehension on this text in response to each conversational turn, thereby allowing for more focused integration of external knowledge than has been possible in prior approaches. To support further research on knowledge-grounded conversation, we introduce a new large-scale conversation dataset grounded in external web pages (2.8M turns, 7.4M sentences of grounding). Both human evaluation and automated metrics show that our approach results in more contentful responses compared to a variety of previous methods, improving both the informativeness and diversity of generated output.
Tasks Reading Comprehension
Published 2019-06-06
URL https://arxiv.org/abs/1906.02738v2
PDF https://arxiv.org/pdf/1906.02738v2.pdf
PWC https://paperswithcode.com/paper/conversing-by-reading-contentful-neural
Repo https://github.com/qkaren/converse_reading_cmr
Framework pytorch

Where Is My Mirror?

Title Where Is My Mirror?
Authors Xin Yang, Haiyang Mei, Ke Xu, Xiaopeng Wei, Baocai Yin, Rynson W. H. Lau
Abstract Mirrors are everywhere in our daily lives. Existing computer vision systems do not consider mirrors, and hence may get confused by the reflected content inside a mirror, resulting in severe performance degradation. However, separating the real content outside a mirror from the reflected content inside it is non-trivial. The key challenge is that mirrors typically reflect contents similar to their surroundings, making it very difficult to differentiate the two. In this paper, we present a novel method to segment mirrors from an input image. To the best of our knowledge, this is the first work to address the mirror segmentation problem with a computational approach. We make the following contributions. First, we construct a large-scale mirror dataset that contains mirror images with corresponding manually annotated masks. This dataset covers a variety of daily life scenes, and will be made publicly available for future research. Second, we propose a novel network, called MirrorNet, for mirror segmentation, by modeling both semantic and low-level color/texture discontinuities between the contents inside and outside of the mirrors. Third, we conduct extensive experiments to evaluate the proposed method, and show that it outperforms the carefully chosen baselines from the state-of-the-art detection and segmentation methods.
Tasks
Published 2019-08-24
URL https://arxiv.org/abs/1908.09101v2
PDF https://arxiv.org/pdf/1908.09101v2.pdf
PWC https://paperswithcode.com/paper/where-is-my-mirror
Repo https://github.com/Mhaiyang/ICCV2019_MirrorNet
Framework pytorch

Sliced Gromov-Wasserstein

Title Sliced Gromov-Wasserstein
Authors Titouan Vayer, Rémi Flamary, Romain Tavenard, Laetitia Chapel, Nicolas Courty
Abstract Recently used in various machine learning contexts, the Gromov-Wasserstein distance (GW) allows for comparing distributions whose supports do not necessarily lie in the same metric space. However, this Optimal Transport (OT) distance requires solving a complex non-convex quadratic program which is usually very costly in both time and memory. Contrary to GW, the Wasserstein distance (W) enjoys several properties ({\em e.g.} duality) that permit large-scale optimization. Among those, the solution of W on the real line, which only requires sorting discrete samples in 1D, allows defining the Sliced Wasserstein (SW) distance. This paper proposes a new divergence based on GW akin to SW. We first derive a closed form for GW when dealing with 1D distributions, based on a new result for the related quadratic assignment problem. We then define a novel OT discrepancy that can deal with large-scale distributions via a slicing approach, and we show how it relates to the GW distance while being $O(n\log(n))$ to compute. We illustrate the behavior of this so-called Sliced Gromov-Wasserstein (SGW) discrepancy in experiments where we demonstrate its ability to tackle similar problems as GW while being several orders of magnitude faster to compute.
Tasks
Published 2019-05-24
URL https://arxiv.org/abs/1905.10124v2
PDF https://arxiv.org/pdf/1905.10124v2.pdf
PWC https://paperswithcode.com/paper/sliced-gromov-wasserstein
Repo https://github.com/tvayer/SGW
Framework pytorch
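
A hedged NumPy sketch of the slicing idea follows: both point clouds are projected onto random 1D directions (one direction per space), sorted, and the GW quadratic cost is evaluated for the sorted-to-sorted and sorted-to-anti-sorted matchings, which the paper's 1D result singles out. The naive O(n^2) cost evaluation here replaces the paper's O(n log n) computation, and equal sample sizes are assumed.

```python
# A hedged NumPy sketch of a sliced Gromov-Wasserstein-style discrepancy.
import numpy as np

def gw_1d_cost(x, y):
    """GW quadratic cost between 1D samples x, y matched in the given order."""
    cx = np.abs(x[:, None] - x[None, :])      # pairwise distances within x
    cy = np.abs(y[:, None] - y[None, :])      # pairwise distances within y
    return np.mean((cx - cy) ** 2)

def sliced_gw(xs, xt, n_proj=50, seed=0):
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_proj):
        ts = rng.normal(size=xs.shape[1]); ts /= np.linalg.norm(ts)   # direction in source space
        tt = rng.normal(size=xt.shape[1]); tt /= np.linalg.norm(tt)   # direction in target space
        px, pt = np.sort(xs @ ts), np.sort(xt @ tt)
        # In 1D only the sorted and anti-sorted matchings need to be compared.
        total += min(gw_1d_cost(px, pt), gw_1d_cost(px, pt[::-1]))
    return total / n_proj

# Distributions living in different spaces (2D vs 3D) can still be compared.
a = np.random.default_rng(1).normal(size=(100, 2))
b = np.random.default_rng(2).normal(size=(100, 3))
print(sliced_gw(a, b))
```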

Bidirectional One-Shot Unsupervised Domain Mapping

Title Bidirectional One-Shot Unsupervised Domain Mapping
Authors Tomer Cohen, Lior Wolf
Abstract We study the problem of mapping between a domain $A$, in which there is a single training sample, and a domain $B$, for which we have a richer training set. The method we present is able to perform this mapping in both directions. For example, we can transfer all MNIST images to the visual domain captured by a single SVHN image and transform the SVHN image to the domain of the MNIST images. Our method is based on employing one encoder and one decoder for each domain, without utilizing weight sharing. The autoencoder of the single sample domain is trained to match both this sample and the latent space of domain $B$. Our results demonstrate convincing mapping between domains, where either the source or the target domain are defined by a single sample, far surpassing existing solutions. Our code is made publicly available at https://github.com/tomercohen11/BiOST
Tasks One Shot Image to Image Translation
Published 2019-09-04
URL https://arxiv.org/abs/1909.01595v1
PDF https://arxiv.org/pdf/1909.01595v1.pdf
PWC https://paperswithcode.com/paper/bidirectional-one-shot-unsupervised-domain
Repo https://github.com/tomercohen11/BiOST
Framework pytorch

Gradient-Based Neural DAG Learning

Title Gradient-Based Neural DAG Learning
Authors Sébastien Lachapelle, Philippe Brouillard, Tristan Deleu, Simon Lacoste-Julien
Abstract We propose a novel score-based approach to learning a directed acyclic graph (DAG) from observational data. We adapt a recently proposed continuous constrained optimization formulation to allow for nonlinear relationships between variables using neural networks. This extension allows us to model complex interactions while avoiding the combinatorial nature of the problem. In addition to comparing our method to existing continuous optimization methods, we provide missing empirical comparisons to nonlinear greedy search methods. On both synthetic and real-world data sets, this new method outperforms current continuous methods on most tasks, while being competitive with existing greedy search methods on important metrics for causal inference.
Tasks Causal Inference
Published 2019-06-05
URL https://arxiv.org/abs/1906.02226v2
PDF https://arxiv.org/pdf/1906.02226v2.pdf
PWC https://paperswithcode.com/paper/gradient-based-neural-dag-learning
Repo https://github.com/kurowasan/GraN-DAG
Framework pytorch
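
The continuous constrained formulation that this work adapts enforces acyclicity through a differentiable penalty, commonly h(A) = tr(exp(A∘A)) - d, which is zero exactly when the weighted adjacency matrix encodes a DAG. The sketch below shows that penalty applied to a raw parameter matrix; in GraN-DAG the matrix is derived from neural-network path weights, and this toy loop omits the data-fit term and the full augmented Lagrangian.

```python
# A hedged PyTorch sketch of the differentiable acyclicity constraint used by
# continuous DAG-learning formulations of this kind.
import torch

def acyclicity(adj):
    """h(A) = tr(exp(A ∘ A)) - d; zero iff the graph with weights A is acyclic."""
    d = adj.shape[0]
    return torch.trace(torch.matrix_exp(adj * adj)) - d

d = 4
A = torch.nn.Parameter(torch.rand(d, d) * 0.1)     # toy weighted adjacency matrix
opt = torch.optim.Adam([A], lr=0.05)

# Toy loop: push the constraint toward zero. Real methods combine this penalty
# with a data-fit (likelihood) term inside an augmented Lagrangian.
for step in range(200):
    loss = 10.0 * acyclicity(A)
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        A.clamp_(min=0.0)            # keep edge weights non-negative
print(float(acyclicity(A)))          # should be close to zero
```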