April 3, 2020

# Paper Group ANR 12

Intelligent Chest X-ray Worklist Prioritization by CNNs: A Clinical Workflow Simulation. Decentralized MCTS via Learned Teammate Models. Classifying Wikipedia in a fine-grained hierarchy: what graphs can contribute. Machine Learning for Motor Learning: EEG-based Continuous Assessment of Cognitive Engagement for Adaptive Rehabilitation Robots. Certi …

#### Intelligent Chest X-ray Worklist Prioritization by CNNs: A Clinical Workflow Simulation

Title Intelligent Chest X-ray Worklist Prioritization by CNNs: A Clinical Workflow Simulation
Authors Ivo M. Baltruschat, Leonhard Steinmeister, Hannes Nickisch, Axel Saalbach, Michael Grass, Gerhard Adam, Harald Ittrich, Tobias Knopp
Abstract Growing radiologic workload and a shortage of medical experts worldwide often lead to delayed or even unreported examinations, which poses a risk to patient safety in case of unrecognized findings in chest radiographs (CXR). The aim was to evaluate whether deep learning algorithms for intelligent worklist prioritization might optimize the radiology workflow and reduce report turnaround times (RTAT) for critical findings, compared with reporting according to the First-In-First-Out principle (FIFO). Furthermore, we investigated the problem of false-negative prediction in the context of worklist prioritization. To assess the potential benefit of intelligent worklist prioritization, three different workflow simulations based on our analysis were run and RTAT compared: FIFO (non-prioritized), Prio1 (prioritized) and Prio2 (prioritized, with RTATmax.). Examination triage was performed by “ChestXCheck”, a convolutional neural network classifying eight different pathological findings ranked in descending order of urgency: pneumothorax, pleural effusion, infiltrate, congestion, atelectasis, cardiomegaly, mass and foreign object. The average RTAT for all critical findings was significantly reduced by both Prio simulations compared to the FIFO simulation (e.g. pneumothorax: 32.1 min vs. 69.7 min; p < 0.0001), while the average RTAT for normal examinations increased at the same time (69.5 min vs. 90.0 min; p < 0.0001). Both effects were slightly smaller with Prio2 than with Prio1, whereas the maximum RTAT with Prio1 was substantially higher for all classes, due to individual examinations rated false negative. Our Prio2 simulation demonstrated that intelligent worklist prioritization by deep learning algorithms reduces average RTAT for critical findings in chest X-ray while maintaining a maximum RTAT similar to FIFO.
Published 2020-01-23
URL https://arxiv.org/abs/2001.08625v1
PDF https://arxiv.org/pdf/2001.08625v1.pdf
PWC https://paperswithcode.com/paper/intelligent-chest-x-ray-worklist
Repo
Framework
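The core mechanism in this abstract — reading examinations in order of predicted urgency rather than arrival — can be sketched with a priority queue. This is a minimal toy, not the paper's simulation: the `URGENCY` table reuses the eight-finding ranking from the abstract (plus a hypothetical `normal` class), and `prioritized_order` is an illustrative helper name.

```python
import heapq

# Urgency ranks from the abstract, most urgent first (lower rank = higher priority).
# The "normal" class is a hypothetical addition for unremarkable examinations.
URGENCY = {
    "pneumothorax": 0, "pleural_effusion": 1, "infiltrate": 2, "congestion": 3,
    "atelectasis": 4, "cardiomegaly": 5, "mass": 6, "foreign_object": 7,
    "normal": 8,
}

def prioritized_order(exams):
    """Return exam ids in reading order: most urgent predicted finding first,
    ties broken by arrival order (i.e. FIFO within the same urgency class)."""
    heap = [(URGENCY[finding], arrival, exam_id)
            for arrival, (exam_id, finding) in enumerate(exams)]
    heapq.heapify(heap)
    return [exam_id for _, _, exam_id in
            [heapq.heappop(heap) for _ in range(len(heap))]]

worklist = [("A", "normal"), ("B", "pneumothorax"), ("C", "cardiomegaly"), ("D", "normal")]
print(prioritized_order(worklist))  # ['B', 'C', 'A', 'D']
```

This reproduces the qualitative effect the abstract reports: critical findings (B, C) jump the queue, while normal examinations (A, D) wait longer than under pure FIFO.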

#### Decentralized MCTS via Learned Teammate Models

Title Decentralized MCTS via Learned Teammate Models
Authors Aleksander Czechowski, Frans Oliehoek
Abstract A key difficulty of cooperative decentralized planning lies in making accurate predictions about the decisions of other agents. In this paper we present a policy improvement operator for learning to plan in iterated cooperative multi-agent scenarios. At each application of our method, a selected agent learns an approximation of policies of its teammates from data from past simulations. Under the assumption of ideal function approximation, successive iterations of our algorithm are guaranteed to improve the policies, and eventually lead to convergence to a Nash equilibrium in a coordinate ascent manner. We combine the policy improvement operator with the decentralized Monte Carlo Tree Search planning method and demonstrate the application of the algorithm on several scenarios in the spatial task allocation problem introduced in (Claes et al., 2015). We show that deep learning and convolutional neural networks can be efficiently employed to produce policy approximators which exploit the spatial features of the problem, and that the proposed algorithm improves over the baseline planning performance for particularly challenging domain configurations.
Published 2020-03-19
URL https://arxiv.org/abs/2003.08727v1
PDF https://arxiv.org/pdf/2003.08727v1.pdf
PWC https://paperswithcode.com/paper/decentralized-mcts-via-learned-teammate
Repo
Framework
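The coordinate-ascent property in this abstract — agents alternately best-responding to fixed teammate policies until no one can improve — can be illustrated on a toy two-agent task-allocation game. This is not the paper's MCTS planner: `PAYOFF`, `best_response` and `coordinate_ascent` are hypothetical stand-ins, with an exact best response playing the role of planning against a learned teammate model.

```python
# Toy 2-agent coordination game: each agent picks one of two tasks, and the
# shared team reward is higher when they split tasks (as in task allocation).
PAYOFF = {  # (action_a, action_b) -> shared team reward
    (0, 0): 1, (0, 1): 4, (1, 0): 3, (1, 1): 1,
}

def best_response(fixed_other_action, me_is_a):
    """Pick my action assuming the teammate's (learned) policy is fixed."""
    if me_is_a:
        return max((0, 1), key=lambda a: PAYOFF[(a, fixed_other_action)])
    return max((0, 1), key=lambda b: PAYOFF[(fixed_other_action, b)])

def coordinate_ascent(a=0, b=0, iters=6):
    """Alternate best responses; team reward never decreases, and the
    process settles at a Nash equilibrium (not necessarily the global optimum)."""
    for i in range(iters):
        if i % 2 == 0:
            a = best_response(b, me_is_a=True)
        else:
            b = best_response(a, me_is_a=False)
    return a, b, PAYOFF[(a, b)]

print(coordinate_ascent())  # (1, 0, 3): a local Nash equilibrium
```

Note that from the start state (0, 0) the process converges to (1, 0) with reward 3, even though (0, 1) with reward 4 exists — illustrating why the abstract claims convergence to *a* Nash equilibrium rather than a global optimum.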

#### Classifying Wikipedia in a fine-grained hierarchy: what graphs can contribute

Title Classifying Wikipedia in a fine-grained hierarchy: what graphs can contribute
Authors Tiphaine Viard, Thomas McLachlan, Hamidreza Ghader, Satoshi Sekine
Abstract Wikipedia is a huge opportunity for machine learning, being the largest semi-structured base of knowledge available. Because of this, many works examine its contents, and focus on structuring it in order to make it usable in learning tasks, for example by classifying it into an ontology. Beyond its textual contents, Wikipedia also displays a typical graph structure, where pages are linked together through citations. In this paper, we address the task of integrating graph (i.e. structure) information to classify Wikipedia into a fine-grained named entity ontology (NE), the Extended Named Entity hierarchy. To address this task, we first start by assessing the relevance of the graph structure for NE classification. We then explore two directions, one related to feature vectors using graph descriptors commonly used in large-scale network analysis, and one extending flat classification to a weighted model taking into account semantic similarity. We conduct practical, at-scale experiments on a manually labeled subset of 22,000 pages extracted from the Japanese Wikipedia. Our results show that integrating graph information succeeds at reducing sparsity of the input feature space, and yields classification results that are comparable or better than previous works.
Tasks Semantic Similarity, Semantic Textual Similarity
Published 2020-01-21
URL https://arxiv.org/abs/2001.07558v2
PDF https://arxiv.org/pdf/2001.07558v2.pdf
PWC https://paperswithcode.com/paper/classifying-wikipedia-in-a-fine-grained
Repo
Framework
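As a concrete illustration of "feature vectors using graph descriptors", degree-based descriptors can be computed directly from a citation-link graph and appended to each page's feature vector. The toy graph and `graph_features` helper below are invented for illustration; the paper's actual descriptor set is not specified here.

```python
# Toy link graph: page -> pages it cites.
links = {
    "Tokyo": ["Japan", "City"],
    "Japan": ["Tokyo"],
    "City": [],
    "Haskell": ["Programming_language"],
    "Programming_language": [],
}

def graph_features(graph):
    """Per-page descriptors usable as extra feature-vector entries:
    out-degree (citations made), in-degree (citations received), total."""
    in_deg = {page: 0 for page in graph}
    for src, targets in graph.items():
        for t in targets:
            if t in in_deg:
                in_deg[t] += 1
    return {page: {"out_degree": len(graph[page]),
                   "in_degree": in_deg[page],
                   "total_degree": len(graph[page]) + in_deg[page]}
            for page in graph}

feats = graph_features(links)
print(feats["Tokyo"])  # {'out_degree': 2, 'in_degree': 1, 'total_degree': 3}
```

In practice these dense per-node scalars are concatenated with the sparse textual features, which is one way graph information can reduce effective sparsity of the input space.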

#### Machine Learning for Motor Learning: EEG-based Continuous Assessment of Cognitive Engagement for Adaptive Rehabilitation Robots

Title Machine Learning for Motor Learning: EEG-based Continuous Assessment of Cognitive Engagement for Adaptive Rehabilitation Robots
Authors Neelesh Kumar, Konstantinos P. Michmizos
Abstract Although cognitive engagement (CE) is crucial for motor learning, it remains underutilized in rehabilitation robots, partly because its assessment currently relies on subjective and gross measurements taken intermittently. Here, we propose an end-to-end computational framework that assesses CE in real-time, using electroencephalography (EEG) signals as objective measurements. The framework consists of i) a deep convolutional neural network (CNN) that extracts task-discriminative spatiotemporal EEG features to predict the level of CE for two classes – cognitively engaged vs. disengaged; and ii) a novel sliding window method that predicts continuous levels of CE in real-time. We evaluated our framework on 8 subjects using an in-house Go/No-Go experiment that adapted its gameplay parameters to induce cognitive fatigue. The proposed CNN had an average leave-one-out accuracy of 88.13%. The CE prediction correlated well with a commonly used behavioral metric based on self-reports taken every 5 minutes ($\rho$=0.93). Our results objectify CE in real-time and pave the way for using CE as a rehabilitation parameter for tailoring robotic therapy to each patient’s needs and skills.
Published 2020-02-18
URL https://arxiv.org/abs/2002.07541v2
PDF https://arxiv.org/pdf/2002.07541v2.pdf
PWC https://paperswithcode.com/paper/machine-learning-for-motor-learning-eeg-based
Repo
Framework
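The sliding-window idea — turning a per-window classifier into a continuous stream of engagement estimates — can be sketched as follows. The CNN is replaced by a hypothetical thresholding stand-in (`toy_classifier`), and the window and step sizes are arbitrary.

```python
def sliding_window_predict(signal, classify, window=4, step=2):
    """Continuous prediction: slide a fixed-length window over the EEG stream
    and emit one engagement label per window position."""
    scores = []
    for start in range(0, len(signal) - window + 1, step):
        scores.append(classify(signal[start:start + window]))
    return scores

# Hypothetical stand-in for the CNN: mean-amplitude thresholding.
toy_classifier = lambda w: 1 if sum(w) / len(w) > 0.5 else 0

stream = [0.0, 0.2, 0.9, 0.8, 0.7, 0.9, 0.2, 0.0]
print(sliding_window_predict(stream, toy_classifier))  # [0, 1, 0]
```

Overlapping windows (`step` smaller than `window`) are what make the output quasi-continuous in time rather than one label per trial.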

#### Certified Robustness to Label-Flipping Attacks via Randomized Smoothing

Title Certified Robustness to Label-Flipping Attacks via Randomized Smoothing
Authors Elan Rosenfeld, Ezra Winston, Pradeep Ravikumar, J. Zico Kolter
Abstract Machine learning algorithms are known to be susceptible to data poisoning attacks, where an adversary manipulates the training data to degrade performance of the resulting classifier. While many heuristic defenses have been proposed, few defenses exist which are certified against worst-case corruption of the training data. In this work, we propose a strategy to build linear classifiers that are certifiably robust against a strong variant of label-flipping, where each test example is targeted independently. In other words, for each test point, our classifier makes a prediction and includes a certification that its prediction would be the same had some number of training labels been changed adversarially. Our approach leverages randomized smoothing, a technique that has previously been used to guarantee—with high probability—test-time robustness to adversarial manipulation of the input to a classifier. We derive a variant which provides a deterministic, analytical bound, sidestepping the probabilistic certificates that traditionally result from the sampling subprocedure. Further, we obtain these certified bounds with no additional runtime cost over standard classification. We generalize our results to the multi-class case, providing what we believe to be the first multi-class classification algorithm that is certifiably robust to label-flipping attacks.
Published 2020-02-07
URL https://arxiv.org/abs/2002.03018v1
PDF https://arxiv.org/pdf/2002.03018v1.pdf
PWC https://paperswithcode.com/paper/certified-robustness-to-label-flipping
Repo
Framework
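The flavor of such a certificate can be shown with a deliberately simplified margin argument: assume (hypothetically) that flipping any single training label moves a linear classifier's decision score by at most `max_influence`. This is not the paper's randomized-smoothing bound, only an illustration of how a prediction margin converts into a certified number of label flips.

```python
def certified_flips(score, max_influence):
    """Toy certificate: a decision score with margin |score| keeps its sign
    under any r label flips as long as r * max_influence < |score|.
    Returns the largest such r (0 if no flips can be certified)."""
    margin = abs(score)
    r = int(margin / max_influence)
    if r * max_influence >= margin:  # boundary case: need strict inequality
        r -= 1
    return max(r, 0)

# A confident prediction tolerates more adversarial flips than a marginal one.
print(certified_flips(score=0.9, max_influence=0.2))  # 4
print(certified_flips(score=1.0, max_influence=0.5))  # 1
```

The paper's contribution is much stronger than this sketch — a deterministic, analytical bound with no extra runtime cost — but the shape of the guarantee ("the prediction would be the same had some number of training labels been changed") is the same.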

#### Inferring Spatial Uncertainty in Object Detection

Title Inferring Spatial Uncertainty in Object Detection
Authors Zining Wang, Di Feng, Yiyang Zhou, Wei Zhan, Lars Rosenbaum, Fabian Timm, Klaus Dietmayer, Masayoshi Tomizuka
Abstract The availability of real-world datasets is the prerequisite to develop object detection methods for autonomous driving. While ambiguity exists in object labels due to the error-prone annotation process or sensor observation noise, current object detection datasets only provide deterministic annotations, without considering their uncertainty. This precludes an in-depth evaluation among different object detection methods, especially for those that explicitly model predictive probability. In this work, we propose a generative model to estimate bounding box label uncertainties from LiDAR point clouds, and define a new representation of the probabilistic bounding box through spatial distribution. Comprehensive experiments show that the proposed model represents uncertainties commonly seen in driving scenarios. Based on the spatial distribution, we further propose an extension of IoU, called the Jaccard IoU (JIoU), as a new evaluation metric that incorporates label uncertainty. The experiments on the KITTI and the Waymo Open Datasets show that JIoU is superior to IoU when evaluating probabilistic object detectors.
Published 2020-03-07
URL https://arxiv.org/abs/2003.03644v1
PDF https://arxiv.org/pdf/2003.03644v1.pdf
PWC https://paperswithcode.com/paper/inferring-spatial-uncertainty-in-object
Repo
Framework
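One plausible way to extend IoU to probabilistic boxes, in the spirit of (though not necessarily identical to) the paper's JIoU, is a fuzzy Jaccard index over discretized spatial occupancy distributions: per-cell minima over per-cell maxima, which reduces to ordinary IoU for crisp 0/1 grids.

```python
def fuzzy_jaccard(p, q):
    """Jaccard index generalized to spatial probability maps:
    sum of per-cell minima over sum of per-cell maxima.
    For 0/1 occupancy grids this is exactly the usual IoU."""
    inter = sum(min(a, b) for a, b in zip(p, q))
    union = sum(max(a, b) for a, b in zip(p, q))
    return inter / union if union else 0.0

# 1-D toy "grids": a crisp box vs. a box with uncertain borders.
crisp     = [0, 1, 1, 1, 0]
uncertain = [0, 0.5, 1, 1, 0.5]
print(round(fuzzy_jaccard(crisp, uncertain), 3))  # 0.714
```

The key property this sketch shares with a label-uncertainty-aware metric is graded overlap: a detection that misses only the uncertain border cells is penalized less than one that misses the confident core.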

#### Deep Learning of Movement Intent and Reaction Time for EEG-informed Adaptation of Rehabilitation Robots

Title Deep Learning of Movement Intent and Reaction Time for EEG-informed Adaptation of Rehabilitation Robots
Authors Neelesh Kumar, Konstantinos P. Michmizos
Abstract Mounting evidence suggests that adaptation is a crucial mechanism for rehabilitation robots in promoting motor learning. Yet, it is commonly based on robot-derived movement kinematics, which is a rather subjective measurement of performance, especially in the presence of a sensorimotor impairment. Here, we propose a deep convolutional neural network (CNN) that uses electroencephalography (EEG) as an objective measurement of two kinematics components that are typically used to assess motor learning and thereby adaptation: i) the intent to initiate a goal-directed movement, and ii) the reaction time (RT) of that movement. We evaluated our CNN on data acquired from an in-house experiment where 13 subjects moved a rehabilitation robotic arm in four directions on a plane, in response to visual stimuli. Our CNN achieved average test accuracies of 80.08% and 79.82% in a binary classification of the intent (intent vs. no intent) and RT (slow vs. fast), respectively. Our results demonstrate how individual movement components implicated in distinct types of motor learning can be predicted from synchronized EEG data acquired before the start of the movement. Our approach can, therefore, inform robotic adaptation in real-time and has the potential to further improve one’s ability to perform the rehabilitation task.
Published 2020-02-18
URL https://arxiv.org/abs/2002.08354v1
PDF https://arxiv.org/pdf/2002.08354v1.pdf
PWC https://paperswithcode.com/paper/deep-learning-of-movement-intent-and-reaction
Repo
Framework

#### On the Similarity of Deep Learning Representations Across Didactic and Adversarial Examples

Title On the Similarity of Deep Learning Representations Across Didactic and Adversarial Examples
Authors Pamela K. Douglas, Farzad Vasheghani Farahani
Abstract The increasing use of deep neural networks (DNNs) has motivated a parallel endeavor: the design of adversaries that profit from successful misclassifications. However, not all adversarial examples are crafted for malicious purposes. For example, real world systems often contain physical, temporal, and sampling variability across instrumentation. Adversarial examples in the wild may inadvertently prove deleterious for accurate predictive modeling. Conversely, naturally occurring covariance of image features may serve didactic purposes. Here, we studied the stability of deep learning representations for neuroimaging classification across didactic and adversarial conditions characteristic of MRI acquisition variability. We show that representational similarity and performance vary according to the frequency of adversarial examples in the input space.
Published 2020-02-17
URL https://arxiv.org/abs/2002.06816v1
PDF https://arxiv.org/pdf/2002.06816v1.pdf
PWC https://paperswithcode.com/paper/on-the-similarity-of-deep-learning
Repo
Framework

#### Measuring Diversity in Heterogeneous Information Networks

Title Measuring Diversity in Heterogeneous Information Networks
Authors Pedro Ramaciotti Morales, Robin Lamarche-Perrin, Raphael Fournier-S’niehotta, Remy Poulain, Lionel Tabourier, Fabien Tarissan
Abstract Diversity is a concept relevant to numerous domains of research varying from ecology, to information theory, and to economics, to cite a few. It is a notion that is steadily gaining attention in the information retrieval, network analysis, and artificial neural networks communities. While the use of diversity measures in network-structured data counts a growing number of applications, no clear and comprehensive description is available for the different ways in which diversities can be measured. In this article, we develop a formal framework for the application of a large family of diversity measures to heterogeneous information networks (HINs), a flexible, widely-used network data formalism. This extends the application of diversity measures, from systems of classifications and apportionments, to more complex relations that can be better modeled by networks. In doing so, we not only provide an effective organization of multiple practices from different domains, but also unearth new observables in systems modeled by heterogeneous information networks. We illustrate the pertinence of our approach by developing different applications related to various domains concerned by both diversity and networks. In particular, we illustrate the usefulness of these new proposed observables in the domains of recommender systems and social media studies, among other fields.
Published 2020-01-05
URL https://arxiv.org/abs/2001.01296v2
PDF https://arxiv.org/pdf/2001.01296v2.pdf
PWC https://paperswithcode.com/paper/measuring-diversity-in-heterogeneous
Repo
Framework
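A standard example of the "large family of diversity measures" the abstract refers to is the family of Hill numbers (true diversities), parametrized by an order q. The sketch below assumes this family; the paper's exact choice of measures is not specified here.

```python
import math

def hill_diversity(proportions, q):
    """Hill number of order q: the effective number of categories.
    q=0 counts categories, q->1 gives exp(Shannon entropy),
    q=2 is the inverse Simpson index; larger q weights common
    categories more heavily."""
    ps = [p for p in proportions if p > 0]
    if abs(q - 1.0) < 1e-9:  # limit case: exponential of Shannon entropy
        return math.exp(-sum(p * math.log(p) for p in ps))
    return sum(p ** q for p in ps) ** (1.0 / (1.0 - q))

even   = [0.25, 0.25, 0.25, 0.25]   # e.g. a user's attention spread evenly
skewed = [0.85, 0.05, 0.05, 0.05]   # attention concentrated on one category
print(hill_diversity(even, 2))      # 4.0: four effectively used categories
print(hill_diversity(skewed, 2))    # well below 4: low effective diversity
```

In an HIN setting, `proportions` would come from the distribution of one node's links across node types or neighbors, making such diversities the kind of "new observables" the abstract describes for recommender-system or social-media analysis.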

#### ParKCa: Causal Inference with Partially Known Causes

Title ParKCa: Causal Inference with Partially Known Causes
Authors Raquel Aoki, Martin Ester
Abstract Causal Inference methods based on observational data are an alternative for applications where collecting the counterfactual data or realizing a more standard experiment is not possible. In this work, our goal is to combine several observational causal inference methods to learn new causes in applications where some causes are well known. We validate the proposed method on The Cancer Genome Atlas (TCGA) dataset to identify genes that potentially cause metastasis.
Published 2020-03-17
URL https://arxiv.org/abs/2003.07952v1
PDF https://arxiv.org/pdf/2003.07952v1.pdf
PWC https://paperswithcode.com/paper/parkca-causal-inference-with-partially-known
Repo
Framework

#### Coronavirus on Social Media: Analyzing Misinformation in Twitter Conversations

Title Coronavirus on Social Media: Analyzing Misinformation in Twitter Conversations
Authors Karishma Sharma, Sungyong Seo, Chuizheng Meng, Sirisha Rambhatla, Aastha Dua, Yan Liu
Abstract The ongoing Coronavirus Disease (COVID-19) pandemic highlights the interconnectedness of our present-day globalized world. With social distancing policies in place, virtual communication has become an important source of (mis)information. As an increasing number of people rely on social media platforms for news, identifying misinformation has emerged as a critical task in these unprecedented times. In addition to being malicious, the spread of such information poses a serious public health risk. To this end, we design a dashboard to track misinformation on the popular social media news sharing platform Twitter. Our dashboard allows visibility into the social media discussions around Coronavirus and the quality of information shared on the platform as the situation evolves. We collect streaming data using the Twitter API from March 1, 2020 to date and provide analysis of topic clusters and social sentiments related to important emerging policies such as “#socialdistancing” and “#workfromhome”. We track emerging hashtags over time, and provide location and time sensitive analysis of sentiments. In addition, we study the challenging problem of misinformation on social media, and provide a detection method to identify false, misleading and clickbait contents from Twitter information cascades. The dashboard maintains an evolving list of detected misinformation cascades with the corresponding detection scores, accessible online at https://ksharmar.github.io/index.html.
Published 2020-03-26
URL https://arxiv.org/abs/2003.12309v1
PDF https://arxiv.org/pdf/2003.12309v1.pdf
PWC https://paperswithcode.com/paper/coronavirus-on-social-media-analyzing
Repo
Framework
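The hashtag-tracking component described above can be sketched with a simple counter over the tweet stream. The tweets and the `top_hashtags` helper are invented for illustration; a real dashboard would additionally bucket counts by time window and location.

```python
import re
from collections import Counter

HASHTAG = re.compile(r"#\w+")

def top_hashtags(tweets, k=2):
    """Count hashtags across a stream of tweets (case-folded) and return
    the k most frequent, as a dashboard would track emerging tags."""
    counts = Counter(tag.lower() for text in tweets
                     for tag in HASHTAG.findall(text))
    return counts.most_common(k)

tweets = [
    "Stay home! #SocialDistancing #WorkFromHome",
    "Day 3 of #workfromhome",
    "#SocialDistancing saves lives. #socialdistancing",
]
print(top_hashtags(tweets))
# [('#socialdistancing', 3), ('#workfromhome', 2)]
```

Case-folding before counting is what lets variants like `#SocialDistancing` and `#socialdistancing` aggregate into one emerging tag.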

#### Effective Correlates of Motor Imagery Performance based on Default Mode Network in Resting-State

Title Effective Correlates of Motor Imagery Performance based on Default Mode Network in Resting-State
Authors Jae-Geun Yoon, Minji Lee
Abstract Motor imagery based brain-computer interfaces (MI-BCIs) allow the control of devices and communication by imagining different muscle movements. However, most studies have reported a problem of “BCI-illiteracy”, in which performance is insufficient to use an MI-BCI. Therefore, understanding subjects with poor performance and finding the cause of performance variation is still an important challenge. In this study, we proposed predictors of MI performance using effective connectivity in resting-state EEG. As a result, the high and low MI performance groups differed significantly, with a 23% difference in MI performance. We also found that connectivity from right lateral parietal to left lateral parietal regions in resting-state EEG was correlated significantly with MI performance (r = -0.37). These findings could help to understand BCI-illiteracy and to consider alternatives that are appropriate for the subject.
Published 2020-02-11
URL https://arxiv.org/abs/2002.08468v1
PDF https://arxiv.org/pdf/2002.08468v1.pdf
PWC https://paperswithcode.com/paper/effective-correlates-of-motor-imagery
Repo
Framework

#### DeepBrain: Towards Personalized EEG Interaction through Attentional and Embedded LSTM Learning

Title DeepBrain: Towards Personalized EEG Interaction through Attentional and Embedded LSTM Learning
Authors Di Wu, Huayan Wan, Siping Liu, Weiren Yu, Zhanpeng Jin, Dakuo Wang
Abstract The “mind-controlling” capability has always been in mankind’s fantasy. With the recent advancements of electroencephalograph (EEG) techniques, brain-computer interface (BCI) researchers have explored various solutions to allow individuals to perform various tasks using their minds. However, the commercial off-the-shelf devices that can collect accurate EEG signals are usually expensive, and the comparably cheaper devices can only produce coarse results, which prevents the practical application of these devices in domestic services. To tackle this challenge, we propose and develop an end-to-end solution that enables fine brain-robot interaction (BRI) through embedded learning of coarse EEG signals from low-cost devices, namely DeepBrain, so that people who have difficulty moving, such as the elderly, can command and control a robot to perform basic household tasks. Our contributions are twofold: 1) We present a stacked long short-term memory (Stacked LSTM) structure with specific pre-processing techniques to handle the time-dependency of EEG signals and their classification. 2) We propose a personalized design to capture multiple features and achieve accurate recognition of individual EEG signals by enhancing the signal interpretation of the Stacked LSTM with an attention mechanism. Our real-world experiments demonstrate that the proposed end-to-end solution with low cost can achieve satisfactory run-time speed, accuracy and energy-efficiency.
Published 2020-02-06
URL https://arxiv.org/abs/2002.02086v1
PDF https://arxiv.org/pdf/2002.02086v1.pdf
PWC https://paperswithcode.com/paper/deepbrain-towards-personalized-eeg
Repo
Framework
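The attention mechanism used to enhance the stacked LSTM can be sketched as attention pooling: softmax-normalize one relevance score per timestep, then take the weighted sum of the per-timestep hidden states. The scores and states below are toy values; in the paper the scores would themselves be learned.

```python
import math

def attention_pool(hidden_states, scores):
    """Attention over per-timestep LSTM outputs: softmax the scores,
    then return the weighted sum of hidden states (one pooled vector)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]       # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(hidden_states[0])
    return [sum(w * h[d] for w, h in zip(weights, hidden_states))
            for d in range(dim)]

# Three timesteps of a 2-d hidden state; the middle step gets most weight.
states = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
pooled = attention_pool(states, scores=[0.0, 2.0, 0.0])
print([round(x, 2) for x in pooled])  # [0.21, 0.89]
```

Because the weights depend on the scores, the pooled vector is dominated by the timesteps deemed most informative — which is how attention lets the classifier emphasize the discriminative segments of a noisy, coarse EEG sequence.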

#### Towards Clarifying the Theory of the Deconfounder

Title Towards Clarifying the Theory of the Deconfounder
Authors Yixin Wang, David M. Blei
Abstract Wang and Blei (2019) studies multiple causal inference and proposes the deconfounder algorithm. The paper discusses theoretical requirements and presents empirical studies. Several refinements have been suggested around the theory of the deconfounder. Among these, Imai and Jiang clarified the assumption of “no unobserved single-cause confounders.” Using their assumption, this paper clarifies the theory. Furthermore, Ogburn et al. (2020) proposes counterexamples to the theory. But the proposed counterexamples do not satisfy the required assumptions.
Published 2020-03-10
URL https://arxiv.org/abs/2003.04948v1
PDF https://arxiv.org/pdf/2003.04948v1.pdf
PWC https://paperswithcode.com/paper/towards-clarifying-the-theory-of-the
Repo
Framework

#### Brain Tumor Classification Using Deep Learning Technique – A Comparison between Cropped, Uncropped, and Segmented Lesion Images with Different Sizes

Title Brain Tumor Classification Using Deep Learning Technique – A Comparison between Cropped, Uncropped, and Segmented Lesion Images with Different Sizes
Authors Ali Mohammad Alqudah, Hiam Alquraan, Isam Abu Qasmieh, Amin Alqudah, Wafaa Al-Sharu
Abstract Deep learning is the newest trend in machine learning and has attracted a great deal of researchers’ attention in recent years. As a proven powerful machine learning tool, deep learning has been widely used in several applications to solve complex problems that require extremely high accuracy and sensitivity, particularly in the medical field. In general, brain tumors are among the most common and aggressive malignant tumor diseases, leading to a very short life expectancy if diagnosed at a higher grade. Based on that, brain tumor grading is a very critical step after detecting the tumor, in order to devise an effective treatment plan. In this paper, we used a Convolutional Neural Network (CNN), one of the most widely used deep learning architectures, for classifying a dataset of 3064 T1-weighted contrast-enhanced brain MR images to grade (classify) the brain tumors into three classes (Glioma, Meningioma, and Pituitary Tumor). The proposed CNN classifier is a powerful tool; its overall performance reached an accuracy of 98.93% and a sensitivity of 98.18% for the cropped lesions, while the results for the uncropped lesions are 99% accuracy and 98.52% sensitivity, and for the segmented lesion images 97.62% accuracy and 97.40% sensitivity.