January 28, 2020

3382 words 16 mins read

Paper Group ANR 987

Deep Learning Under the Microscope: Improving the Interpretability of Medical Imaging Neural Networks. Towards Deep Learning-Based EEG Electrode Detection Using Automatically Generated Labels. Thyroid Cancer Malignancy Prediction From Whole Slide Cytopathology Images. InfoMask: Masked Variational Latent Representation to Localize Chest Disease. It …

Deep Learning Under the Microscope: Improving the Interpretability of Medical Imaging Neural Networks

Title Deep Learning Under the Microscope: Improving the Interpretability of Medical Imaging Neural Networks
Authors Magdalini Paschali, Muhammad Ferjad Naeem, Walter Simson, Katja Steiger, Martin Mollenhauer, Nassir Navab
Abstract In this paper, we propose a novel interpretation method tailored to histological Whole Slide Image (WSI) processing. A Deep Neural Network (DNN), inspired by Bag-of-Features models, is equipped with a Multiple Instance Learning (MIL) branch and trained with weak supervision for WSI classification. MIL avoids label ambiguity and enhances our model’s expressive power without guiding its attention. We utilize a fine-grained logit heatmap of the model’s activations to interpret its decision-making process. The proposed method is quantitatively and qualitatively evaluated on two challenging histology datasets, outperforming a variety of baselines. In addition, two expert pathologists were consulted regarding the interpretability provided by our method and acknowledged its potential for integration into several clinical applications.
Tasks Decision Making, Multiple Instance Learning
Published 2019-04-05
URL http://arxiv.org/abs/1904.03127v2
PDF http://arxiv.org/pdf/1904.03127v2.pdf
PWC https://paperswithcode.com/paper/deep-learning-under-the-microscope-improving
Repo
Framework
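
As a minimal sketch of the MIL aggregation idea described above (not the authors' implementation; the grid layout, max-pooling rule, and sigmoid output are assumptions), per-patch logits can be pooled into a slide-level prediction and reshaped into an interpretable heatmap:

```python
import numpy as np

def mil_slide_prediction(patch_logits, grid_shape):
    """Aggregate per-patch logits into a slide-level score and a heatmap.

    patch_logits: (n_patches,) array of raw logits, one per WSI tile
    grid_shape:   (rows, cols) layout of the tiles on the slide
    """
    # Max pooling over instances: the slide is positive if any patch is.
    slide_logit = patch_logits.max()
    slide_prob = 1.0 / (1.0 + np.exp(-slide_logit))
    # Fine-grained heatmap of the model's activations for interpretation.
    heatmap = patch_logits.reshape(grid_shape)
    return slide_prob, heatmap

# Toy usage with random logits for a 4x5 tile grid.
logits = np.random.randn(20)
prob, heat = mil_slide_prediction(logits, (4, 5))
print(f"slide-level probability: {prob:.3f}")
```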

Towards Deep Learning-Based EEG Electrode Detection Using Automatically Generated Labels

Title Towards Deep Learning-Based EEG Electrode Detection Using Automatically Generated Labels
Authors Nils Gessert, Martin Gromniak, Marcel Bengs, Lars Matthäus, Alexander Schlaefer
Abstract Electroencephalography (EEG) allows for source measurement of electrical brain activity. Particularly for inverse localization, the electrode positions on the scalp need to be known. Often, systems such as optical digitizing scanners are used for accurate localization with a stylus. However, the approach is time-consuming as each electrode needs to be scanned manually and the scanning systems are expensive. We propose using an RGBD camera to directly track electrodes in the images using deep learning methods. Studying and evaluating deep learning methods requires large amounts of labeled data. To overcome the time-consuming data annotation, we generate a large number of ground-truth labels using a robotic setup. We demonstrate that deep learning-based electrode detection is feasible with a mean absolute error of 5.69 ± 6.1 mm and that our annotation scheme provides a useful environment for studying deep learning methods for electrode detection.
Tasks EEG
Published 2019-08-12
URL https://arxiv.org/abs/1908.04186v1
PDF https://arxiv.org/pdf/1908.04186v1.pdf
PWC https://paperswithcode.com/paper/towards-deep-learning-based-eeg-electrode
Repo
Framework
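
A small sketch of how the reported localization error could be computed, assuming it is the per-electrode Euclidean distance between detected and ground-truth 3D positions (the paper may define the metric differently):

```python
import numpy as np

def electrode_position_error(pred_xyz, true_xyz):
    """Per-electrode Euclidean error (mm) between predicted and ground-truth
    3D positions, plus mean and standard deviation across electrodes.

    pred_xyz, true_xyz: (n_electrodes, 3) arrays in millimetres.
    """
    errors = np.linalg.norm(pred_xyz - true_xyz, axis=1)
    return errors.mean(), errors.std()

# Toy usage: 32 electrodes with ~5 mm of simulated detection noise.
rng = np.random.default_rng(0)
true_pos = rng.uniform(-90, 90, size=(32, 3))
pred_pos = true_pos + rng.normal(0, 5, size=(32, 3))
mae, std = electrode_position_error(pred_pos, true_pos)
print(f"mean error: {mae:.2f} mm +- {std:.2f} mm")
```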

Thyroid Cancer Malignancy Prediction From Whole Slide Cytopathology Images

Title Thyroid Cancer Malignancy Prediction From Whole Slide Cytopathology Images
Authors David Dov, Shahar Kovalsky, Jonathan Cohen, Danielle Range, Ricardo Henao, Lawrence Carin
Abstract We consider preoperative prediction of thyroid cancer based on ultra-high-resolution whole-slide cytopathology images. Inspired by how human experts perform diagnosis, our approach first identifies and classifies diagnostic image regions containing informative thyroid cells, which only comprise a tiny fraction of the entire image. These local estimates are then aggregated into a single prediction of thyroid malignancy. Several unique characteristics of thyroid cytopathology guide our deep-learning-based approach. While our method is closely related to multiple-instance learning, it deviates from these methods by using a supervised procedure to extract diagnostically relevant regions. Moreover, we propose to simultaneously predict thyroid malignancy, as well as a diagnostic score assigned by a human expert, which further allows us to devise an improved training strategy. Experimental results show that the proposed algorithm achieves performance comparable to human experts, and demonstrate the potential of using the algorithm for screening and as an assistive tool for the improved diagnosis of indeterminate cases.
Tasks Multiple Instance Learning
Published 2019-03-29
URL http://arxiv.org/abs/1904.00839v1
PDF http://arxiv.org/pdf/1904.00839v1.pdf
PWC https://paperswithcode.com/paper/thyroid-cancer-malignancy-prediction-from
Repo
Framework
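
A hypothetical illustration of the two-step idea, supervised selection of informative regions followed by aggregation into a single slide-level malignancy estimate; the thresholding and mean aggregation below are assumptions, not the paper's exact procedure:

```python
import numpy as np

def aggregate_slide_prediction(region_scores, informative_probs, threshold=0.5):
    """Aggregate local region estimates into a single slide-level prediction.

    region_scores:     (n_regions,) malignancy scores for candidate regions
    informative_probs: (n_regions,) probability that a region contains
                       diagnostically informative thyroid cells
    """
    keep = informative_probs > threshold        # supervised region selection
    if not keep.any():                          # fall back to all regions
        keep = np.ones_like(keep, dtype=bool)
    return float(region_scores[keep].mean())    # slide-level malignancy estimate

# Toy usage: most candidate regions are uninformative background.
scores = np.array([0.9, 0.8, 0.1, 0.2, 0.85])
informative = np.array([0.95, 0.9, 0.05, 0.1, 0.8])
print(aggregate_slide_prediction(scores, informative))
```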

InfoMask: Masked Variational Latent Representation to Localize Chest Disease

Title InfoMask: Masked Variational Latent Representation to Localize Chest Disease
Authors Saeid Asgari Taghanaki, Mohammad Havaei, Tess Berthier, Francis Dutil, Lisa Di Jorio, Ghassan Hamarneh, Yoshua Bengio
Abstract The scarcity of richly annotated medical images is limiting supervised deep learning based solutions to medical image analysis tasks, such as localizing discriminatory radiomic disease signatures. Therefore, it is desirable to leverage unsupervised and weakly supervised models. Most recent weakly supervised localization methods apply attention maps or region proposals in a multiple instance learning formulation. While attention maps can be noisy, leading to erroneously highlighted regions, it is not simple to decide on an optimal window/bag size for multiple instance learning approaches. In this paper, we propose a learned spatial masking mechanism to filter out irrelevant background signals from attention maps. The proposed method minimizes mutual information between a masked variational representation and the input while maximizing the information between the masked representation and class labels. This results in more accurate localization of discriminatory regions. We tested the proposed model on the ChestX-ray8 dataset to localize pneumonia from chest X-ray images without using any pixel-level or bounding-box annotations.
Tasks Multiple Instance Learning
Published 2019-03-28
URL https://arxiv.org/abs/1903.11741v2
PDF https://arxiv.org/pdf/1903.11741v2.pdf
PWC https://paperswithcode.com/paper/infomask-masked-variational-latent
Repo
Framework
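
A hedged PyTorch-style sketch of a loss in the spirit of the description above: cross-entropy ties the masked representation to the labels, a KL term limits the information retained about the input, and a sparsity penalty encourages the mask to suppress background. The weighting constants and the exact form of each term are assumptions, not the paper's objective:

```python
import torch
import torch.nn.functional as F

def infomask_style_loss(logits, labels, mu, logvar, mask, beta=1e-3, sparsity=1e-4):
    """Sketch of a masked variational objective in the spirit of InfoMask.

    logits:     class predictions from the masked latent representation
    labels:     ground-truth image-level labels
    mu, logvar: parameters of the variational posterior q(z|x)
    mask:       learned spatial mask in [0, 1] applied to the latent map
    """
    # Maximise information between the masked representation and the labels.
    cls_loss = F.cross_entropy(logits, labels)
    # Minimise information between the representation and the input via the
    # usual variational KL term towards a standard normal prior.
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Encourage the mask to switch off irrelevant background regions.
    mask_penalty = mask.abs().mean()
    return cls_loss + beta * kl + sparsity * mask_penalty
```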

It Takes Nine to Smell a Rat: Neural Multi-Task Learning for Check-Worthiness Prediction

Title It Takes Nine to Smell a Rat: Neural Multi-Task Learning for Check-Worthiness Prediction
Authors Slavena Vasileva, Pepa Atanasova, Lluís Màrquez, Alberto Barrón-Cedeño, Preslav Nakov
Abstract We propose a multi-task deep-learning approach for estimating the check-worthiness of claims in political debates. Given a political debate, such as the 2016 US Presidential and Vice-Presidential ones, the task is to predict which statements in the debate should be prioritized for fact-checking. While different fact-checking organizations would naturally make different choices when analyzing the same debate, we show that it pays to learn from multiple sources simultaneously (PolitiFact, FactCheck, ABC, CNN, NPR, NYT, Chicago Tribune, The Guardian, and Washington Post) in a multi-task learning setup, even when a particular source is chosen as a target to imitate. Our evaluation shows state-of-the-art results on a standard dataset for the task of check-worthiness prediction.
Tasks Multi-Task Learning
Published 2019-08-19
URL https://arxiv.org/abs/1908.07912v1
PDF https://arxiv.org/pdf/1908.07912v1.pdf
PWC https://paperswithcode.com/paper/190807912
Repo
Framework
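
A rough sketch of one way such a multi-task setup could be wired up; the shared LSTM encoder, embedding sizes, and per-source sigmoid heads are illustrative choices, not the authors' model:

```python
import torch
import torch.nn as nn

class MultiSourceCheckWorthiness(nn.Module):
    """Shared sentence encoder with one prediction head per fact-checking
    source (PolitiFact, FactCheck, ABC, ...), trained jointly."""

    def __init__(self, vocab_size, n_sources=9, emb_dim=100, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, 1) for _ in range(n_sources)]
        )

    def forward(self, token_ids):
        emb = self.embed(token_ids)
        _, (h, _) = self.encoder(emb)        # final hidden state per sentence
        shared = h[-1]
        # One check-worthiness score per source; the target source's head is
        # used at evaluation time, the others provide auxiliary supervision.
        return [torch.sigmoid(head(shared)) for head in self.heads]
```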

Pornographic Image Recognition via Weighted Multiple Instance Learning

Title Pornographic Image Recognition via Weighted Multiple Instance Learning
Authors Jin Xin, Wang Yuhui, Tan Xiaoyang
Abstract In the era of the Internet, recognizing pornographic images is of great significance for protecting children’s physical and mental health. However, this task is very challenging as the key pornographic contents (e.g., breast and private part) in an image often lie in local regions of small size. In this paper, we model each image as a bag of regions, and follow a multiple instance learning (MIL) approach to train a generic region-based recognition model. Specifically, we take into account the region’s degree of pornography, and make three main contributions. First, we show that based on very few annotations of the key pornographic contents in a training image, we can generate a bag of properly sized regions, among which the potential positive regions usually contain useful contexts that can aid recognition. Second, we present a simple quantitative measure of a region’s degree of pornography, which can be used to weigh the importance of different regions in a positive image. Third, we formulate the recognition task as a weighted MIL problem under the convolutional neural network framework, with a bag probability function introduced to combine the importance of different regions. Experiments on our newly collected large-scale dataset demonstrate the effectiveness of the proposed method, achieving a 97.52% true positive rate at a 1% false positive rate, tested on 100K pornographic images and 100K normal images.
Tasks Multiple Instance Learning
Published 2019-02-11
URL http://arxiv.org/abs/1902.03771v1
PDF http://arxiv.org/pdf/1902.03771v1.pdf
PWC https://paperswithcode.com/paper/pornographic-image-recognition-via-weighted
Repo
Framework
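
The abstract does not give the exact bag probability function, so the following is only one plausible choice, a weighted noisy-OR in which a region's degree of pornography sets its weight:

```python
import numpy as np

def weighted_bag_probability(region_probs, region_weights):
    """One plausible weighted bag probability: a weighted noisy-OR that lets
    regions with a higher degree of pornography dominate the bag score.

    region_probs:   (n,) per-region positive probabilities from the CNN
    region_weights: (n,) importance weights in [0, 1] (degree of pornography)
    """
    p = np.asarray(region_probs, dtype=float)
    w = np.asarray(region_weights, dtype=float)
    w = w / (w.sum() + 1e-12)                 # normalise importance weights
    return 1.0 - np.prod((1.0 - p) ** w)

# Toy usage: one strongly weighted positive region dominates the bag.
print(weighted_bag_probability([0.9, 0.1, 0.2], [0.8, 0.1, 0.1]))
```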

Emotional Neural Language Generation Grounded in Situational Contexts

Title Emotional Neural Language Generation Grounded in Situational Contexts
Authors Sashank Santhanam, Samira Shaikh
Abstract Emotional language generation is one of the keys to human-like artificial intelligence. Humans use different types of emotions depending on the situation of the conversation. Emotions also play an important role in mediating the engagement level with conversational partners. However, current conversational agents do not effectively account for emotional content in the language generation process. To address this problem, we develop a language modeling approach that generates affective content when the dialogue is situated in a given context. We use the recently released Empathetic-Dialogues corpus to build our models. Through detailed experiments, we find that our approach outperforms the state-of-the-art method on the perplexity metric by about 5 points and achieves a higher BLEU metric score.
Tasks Language Modelling, Text Generation
Published 2019-11-25
URL https://arxiv.org/abs/1911.11161v1
PDF https://arxiv.org/pdf/1911.11161v1.pdf
PWC https://paperswithcode.com/paper/emotional-neural-language-generation-grounded
Repo
Framework
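
Since the comparison above is reported in perplexity, here is a reminder sketch of how corpus perplexity follows from per-token negative log-likelihoods (the standard definition, not code from the paper):

```python
import math

def perplexity(token_nlls):
    """Corpus perplexity: exp of the mean per-token negative log-likelihood (nats)."""
    return math.exp(sum(token_nlls) / len(token_nlls))

# Toy usage: lower per-token NLL means lower perplexity.
print(perplexity([2.1, 3.0, 2.4, 1.8]))
```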

Structural Supervision Improves Learning of Non-Local Grammatical Dependencies

Title Structural Supervision Improves Learning of Non-Local Grammatical Dependencies
Authors Ethan Wilcox, Peng Qian, Richard Futrell, Miguel Ballesteros, Roger Levy
Abstract State-of-the-art LSTM language models trained on large corpora learn sequential contingencies in impressive detail and have been shown to acquire a number of non-local grammatical dependencies with some success. Here we investigate whether supervision with hierarchical structure enhances learning of a range of grammatical dependencies, a question that has previously been addressed only for subject-verb agreement. Using controlled experimental methods from psycholinguistics, we compare the performance of word-based LSTM models versus two models that represent hierarchical structure and deploy it in left-to-right processing: Recurrent Neural Network Grammars (RNNGs) (Dyer et al., 2016) and an incrementalized version of the Parsing-as-Language-Modeling configuration from Charniak et al. (2016). Models are tested on a diverse range of configurations for two classes of non-local grammatical dependencies in English: Negative Polarity licensing and Filler–Gap Dependencies. Using the same training data across models, we find that structurally-supervised models outperform the LSTM, with the RNNG demonstrating best results on both types of grammatical dependencies and even learning many of the Island Constraints on the filler–gap dependency. Structural supervision thus provides data efficiency advantages over purely string-based training of neural language models in acquiring human-like generalizations about non-local grammatical dependencies.
Tasks Language Modelling
Published 2019-03-03
URL http://arxiv.org/abs/1903.00943v2
PDF http://arxiv.org/pdf/1903.00943v2.pdf
PWC https://paperswithcode.com/paper/structural-supervision-improves-learning-of
Repo
Framework
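
A small sketch of the kind of controlled comparison described above, assuming per-token probabilities from a language model are converted to surprisal and compared across minimal pairs (the probabilities below are made up for illustration):

```python
import math

def surprisal(token_probs):
    """Total surprisal (bits) of a sentence given per-token probabilities
    assigned by a language model."""
    return sum(-math.log2(p) for p in token_probs)

# Psycholinguistics-style contrast: the model should assign lower surprisal
# to the licensed NPI continuation than to the unlicensed one.
licensed   = surprisal([0.20, 0.40, 0.30, 0.25])   # "No student has ever ..."
unlicensed = surprisal([0.20, 0.40, 0.30, 0.02])   # "The student has ever ..."
print(licensed < unlicensed)  # expected: True
```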

Techniques and Applications for Crawling, Ingesting and Analyzing Blockchain Data

Title Techniques and Applications for Crawling, Ingesting and Analyzing Blockchain Data
Authors Evan Brinckman, Andrey Kuehlkamp, Jarek Nabrzyski, Ian J. Taylor
Abstract As the public Ethereum network surpasses half a billion transactions and enterprise Blockchain systems become highly capable of meeting the demands of global deployments, production Blockchain applications are fast becoming commonplace across a diverse range of business and scientific verticals. In this paper, we reflect on work we have been conducting recently surrounding the ingestion, retrieval and analysis of Blockchain data. We describe the scaling and semantic challenges when extracting Blockchain data in a way that preserves the original metadata of each transaction by cross-referencing the Smart Contract interface with the on-chain data. We then discuss a scientific use case in the area of scientific workflows by describing how we can harvest data from tasks and dependencies in a generic way. We then discuss how crawled public blockchain data can be analyzed using two unsupervised machine learning algorithms, which are designed to identify outlier accounts or smart contracts in the system. We compare and contrast the two machine learning methods and cross-correlate with public Websites to illustrate the effectiveness of such approaches.
Tasks
Published 2019-09-22
URL https://arxiv.org/abs/1909.09925v1
PDF https://arxiv.org/pdf/1909.09925v1.pdf
PWC https://paperswithcode.com/paper/190909925
Repo
Framework
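
A hedged sketch of the unsupervised outlier-detection step using scikit-learn's IsolationForest over hypothetical per-account features; the feature set and the specific algorithm are assumptions, since the abstract only says that two unsupervised methods are compared:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-account features crawled from the chain:
# [tx_count, total_value_sent, total_value_received, n_contract_calls]
rng = np.random.default_rng(0)
accounts = rng.lognormal(mean=2.0, sigma=1.0, size=(1000, 4))

# Unsupervised outlier detection over accounts / smart contracts.
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(accounts)        # -1 marks outlier accounts
outliers = np.where(labels == -1)[0]
print(f"flagged {len(outliers)} suspicious accounts")
```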

Generalized Mean Estimation in Monte-Carlo Tree Search

Title Generalized Mean Estimation in Monte-Carlo Tree Search
Authors Tuan Dam, Pascal Klink, Carlo D’Eramo, Jan Peters, Joni Pajarinen
Abstract We consider Monte-Carlo Tree Search (MCTS) applied to Markov Decision Processes (MDPs) and Partially Observable MDPs (POMDPs), and the well-known Upper Confidence bound for Trees (UCT) algorithm. In UCT, a tree with nodes (states) and edges (actions) is incrementally built by the expansion of nodes, and the values of nodes are updated through a backup strategy based on the average value of child nodes. However, it has been shown that with enough samples the maximum operator yields more accurate node value estimates than averaging. Instead of settling for one of these value estimates, we go a step further, proposing a novel backup strategy that uses the power mean operator, which computes a value between the average and maximum value. We call our new approach Power-UCT and argue how the use of the power mean operator helps to speed up the learning in MCTS. We theoretically analyze our method, providing guarantees of convergence to the optimum. Moreover, we discuss a heuristic approach to balance the greediness of backups by tuning the power mean operator according to the number of visits to each node. Finally, we empirically demonstrate the effectiveness of our method in well-known MDP and POMDP benchmarks, showing significant improvement in performance and convergence speed w.r.t. UCT.
Tasks
Published 2019-11-01
URL https://arxiv.org/abs/1911.00384v1
PDF https://arxiv.org/pdf/1911.00384v1.pdf
PWC https://paperswithcode.com/paper/generalized-mean-estimation-in-monte-carlo
Repo
Framework
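
A minimal sketch of the power mean backup itself (not the full Power-UCT algorithm): with p = 1 it reduces to the visit-weighted average, and large p approaches the maximum child value. The visit-count weighting is an assumption about how the mean is taken:

```python
import numpy as np

def power_mean_backup(child_values, child_visits, p=2.0):
    """Power mean backup used in place of the plain average in UCT.

    p = 1 recovers the average backup; p -> infinity approaches the maximum,
    so intermediate p interpolates between the two value estimates.
    """
    values = np.asarray(child_values, dtype=float)
    weights = np.asarray(child_visits, dtype=float)
    weights = weights / weights.sum()
    return (weights * values ** p).sum() ** (1.0 / p)

# Toy backup over three children with different visit counts.
print(power_mean_backup([0.2, 0.5, 0.9], [10, 30, 5], p=1.0))   # ~average
print(power_mean_backup([0.2, 0.5, 0.9], [10, 30, 5], p=8.0))   # closer to max
```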

Distributed Online Linear Regression

Title Distributed Online Linear Regression
Authors Deming Yuan, Alexandre Proutiere, Guodong Shi
Abstract We study online linear regression problems in a distributed setting, where the data is spread over a network. In each round, each network node proposes a linear predictor, with the objective of fitting the \emph{network-wide} data. It then updates its predictor for the next round according to the received local feedback and information received from neighboring nodes. The predictions made at a given node are assessed through the notion of regret, defined as the difference between their cumulative network-wide square errors and those of the best off-line network-wide linear predictor. Various scenarios are investigated, depending on the nature of the local feedback (full information or bandit feedback), on the set of available predictors (the decision set), and the way data is generated (by an oblivious or adaptive adversary). We propose simple and natural distributed regression algorithms, involving, at each node and in each round, a local gradient descent step and a communication and averaging step where nodes aim at aligning their predictors to those of their neighbors. We establish regret upper bounds typically in ${\cal O}(T^{3/4})$ when the decision set is unbounded and in ${\cal O}(\sqrt{T})$ in case of bounded decision set.
Tasks
Published 2019-02-13
URL http://arxiv.org/abs/1902.04774v1
PDF http://arxiv.org/pdf/1902.04774v1.pdf
PWC https://paperswithcode.com/paper/distributed-online-linear-regression
Repo
Framework
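
A simplified sketch of one round of such a protocol under full-information feedback: a local gradient step on each node's own sample, then averaging with neighbors. The step size, loss, and averaging rule are illustrative assumptions, not the paper's algorithms:

```python
import numpy as np

def distributed_regression_round(thetas, X, y, neighbors, lr=0.05):
    """One round: local gradient descent on each node's data, then a
    communication step that averages each predictor with its neighbors'.

    thetas:    (n_nodes, d) current linear predictors
    X, y:      per-node data for this round, shapes (n_nodes, d) and (n_nodes,)
    neighbors: dict mapping node index -> list of neighbor indices (incl. self)
    """
    n_nodes, _ = thetas.shape
    updated = np.empty_like(thetas)
    for i in range(n_nodes):
        grad = 2.0 * (thetas[i] @ X[i] - y[i]) * X[i]   # local square-loss gradient
        updated[i] = thetas[i] - lr * grad
    # Consensus step: align each predictor with its neighborhood average.
    return np.array([updated[neighbors[i]].mean(axis=0) for i in range(n_nodes)])

# Toy usage: three nodes on a line graph fitting a common network-wide target.
rng = np.random.default_rng(0)
thetas = np.zeros((3, 2))
neighbors = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2]}
for _ in range(300):
    X = rng.normal(size=(3, 2))
    y = X @ np.array([1.0, -2.0])
    thetas = distributed_regression_round(thetas, X, y, neighbors)
print(thetas.round(2))   # rows drift toward the common predictor [1, -2]
```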

Analyzing the Fine Structure of Distributions

Title Analyzing the Fine Structure of Distributions
Authors Michael C. Thrun, Tino Gehlert, Alfred Ultsch
Abstract One aim of data mining is the identification of interesting structures in data. Basic properties of the empirical distribution, such as skewness and possible clipping, i.e., hard limits in value ranges, need to be assessed. Of particular interest is the question whether the data originates from one process, or contains subsets related to different states of the data producing process. Data visualization tools should deliver a sensitive picture of the univariate probability density function (PDF) for each feature. Visualization tools for PDFs are typically kernel density estimates and range from the classical histogram to modern tools like bean or violin plots. Conventional methods have difficulties in visualizing the PDF in the case of uniform, multimodal, skewed, and clipped data if density estimation parameters remain at a default setting. As a consequence, a new visualization tool called the Mirrored Density plot (MD plot) is proposed, which is particularly designed to discover interesting structures in continuous features. The MD plot does not require any adjustments of density estimation parameters, which makes its usage compelling for non-experts. The visualization tools are evaluated in comparison to statistical tests for the typical challenges of explorative distribution analysis. The results are presented on bimodal Gaussian and skewed distributions as well as several features with published PDFs. In exploratory data analysis of 12 features describing quarterly financial statements, where statistical testing becomes a demanding task, only the MD plots can identify the structure of their PDFs. Overall, the MD plot can outperform the methods mentioned above.
Tasks Density Estimation
Published 2019-08-15
URL https://arxiv.org/abs/1908.06081v1
PDF https://arxiv.org/pdf/1908.06081v1.pdf
PWC https://paperswithcode.com/paper/analyzing-the-fine-structure-of-distributions
Repo
Framework
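
A simplified mirrored-density sketch using a Gaussian KDE from SciPy; note that the actual MD plot relies on a different density estimator and needs no parameter tuning, so this is only a rough visual analogue:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

def mirrored_density(ax, data, position, width=0.4):
    """Draw a single mirrored density (violin-like) shape for one feature."""
    kde = gaussian_kde(data)
    grid = np.linspace(data.min(), data.max(), 200)
    density = kde(grid)
    density = density / density.max() * width
    ax.fill_betweenx(grid, position - density, position + density, alpha=0.6)

# Toy usage: a bimodal feature that a default histogram can easily obscure.
rng = np.random.default_rng(0)
bimodal = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(2, 0.5, 500)])
fig, ax = plt.subplots()
mirrored_density(ax, bimodal, position=1)
ax.set_xticks([1])
ax.set_xticklabels(["bimodal feature"])
plt.show()
```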

Interpretable Image Recognition with Hierarchical Prototypes

Title Interpretable Image Recognition with Hierarchical Prototypes
Authors Peter Hase, Chaofan Chen, Oscar Li, Cynthia Rudin
Abstract Vision models are interpretable when they classify objects on the basis of features that a person can directly understand. Recently, methods relying on visual feature prototypes have been developed for this purpose. However, in contrast to how humans categorize objects, these approaches have not yet made use of any taxonomical organization of class labels. With such an approach, for instance, we may see why a chimpanzee is classified as a chimpanzee, but not why it was considered to be a primate or even an animal. In this work we introduce a model that uses hierarchically organized prototypes to classify objects at every level in a predefined taxonomy. Hence, we may find distinct explanations for the prediction an image receives at each level of the taxonomy. The hierarchical prototypes enable the model to perform another important task: interpretably classifying images from previously unseen classes at the level of the taxonomy to which they correctly relate, e.g. classifying a hand gun as a weapon, when the only weapons in the training data are rifles. With a subset of ImageNet, we test our model against its counterpart black-box model on two tasks: 1) classification of data from familiar classes, and 2) classification of data from previously unseen classes at the appropriate level in the taxonomy. We find that our model performs approximately as well as its counterpart black-box model while allowing for each classification to be interpreted.
Tasks
Published 2019-06-25
URL https://arxiv.org/abs/1906.10651v2
PDF https://arxiv.org/pdf/1906.10651v2.pdf
PWC https://paperswithcode.com/paper/interpretable-image-recognition-with
Repo
Framework
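
A toy sketch of classification by nearest prototype at each taxonomy level; the distance-based scoring and the data layout are assumptions for illustration, not the paper's architecture:

```python
import numpy as np

def classify_with_prototypes(feature, prototypes_per_level):
    """Assign a label at every taxonomy level by the closest prototype.

    feature:              (d,) embedding of the image
    prototypes_per_level: list of dicts, one per level, mapping
                          class name -> (n_prototypes, d) prototype array
    """
    path = []
    for level in prototypes_per_level:
        scores = {
            cls: np.min(np.linalg.norm(protos - feature, axis=1))
            for cls, protos in level.items()
        }
        path.append(min(scores, key=scores.get))   # closest-prototype class
    return path

# Toy usage: a coarse level (animal vs. weapon) and a finer level below it.
rng = np.random.default_rng(0)
levels = [
    {"animal": rng.normal(size=(3, 8)), "weapon": rng.normal(size=(3, 8))},
    {"primate": rng.normal(size=(2, 8)), "rifle": rng.normal(size=(2, 8))},
]
print(classify_with_prototypes(rng.normal(size=8), levels))
```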

Sentiment Analysis on IMDB Movie Comments and Twitter Data by Machine Learning and Vector Space Techniques

Title Sentiment Analysis on IMDB Movie Comments and Twitter Data by Machine Learning and Vector Space Techniques
Authors İlhan Tarımer, Adil Çoban, Arif Emre Kocaman
Abstract This study’s goal is to create a sentiment analysis model on 2000 rows of IMDB movie comments and 3200 Twitter posts by using machine learning and vector space techniques, providing positive or negative preliminary information about the text. In the study, a vector space was created in the KNIME Analytics platform, and a classification study was performed on this vector space with the Decision Tree, Naïve Bayes, and Support Vector Machine classification algorithms. The results obtained were compared across the algorithms. The classification results for the IMDB movie comments are 94.00%, 73.20%, and 85.50% for the Decision Tree, Naïve Bayes, and SVM algorithms. The classification results for the Twitter data set are 82.76%, 75.44%, and 72.50% for the Decision Tree, Naïve Bayes, and SVM algorithms, respectively. It is seen that the best classification results on both data sets were obtained by the SVM algorithm.
Tasks Sentiment Analysis
Published 2019-03-18
URL http://arxiv.org/abs/1903.11983v1
PDF http://arxiv.org/pdf/1903.11983v1.pdf
PWC https://paperswithcode.com/paper/sentiment-analysis-on-imdb-movie-comments-and
Repo
Framework
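
A scikit-learn analogue of the described pipeline on toy data (the vectorizer settings and tiny corpus are placeholders; the study itself used the KNIME platform, not scikit-learn):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

texts = ["a wonderful, moving film", "dull plot and terrible acting",
         "loved every minute", "a complete waste of time"]
labels = [1, 0, 1, 0]

# Vector space model of the comments, then three classifiers on top of it.
X = TfidfVectorizer().fit_transform(texts)

for name, clf in [("Decision Tree", DecisionTreeClassifier()),
                  ("Naive Bayes", MultinomialNB()),
                  ("SVM", LinearSVC())]:
    scores = cross_val_score(clf, X, labels, cv=2)
    print(name, scores.mean())
```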

Accelerating Experimental Design by Incorporating Experimenter Hunches

Title Accelerating Experimental Design by Incorporating Experimenter Hunches
Authors Cheng Li, Santu Rana, Sunil Gupta, Vu Nguyen, Svetha Venkatesh, Alessandra Sutti, David Rubin, Teo Slezak, Murray Height, Mazher Mohammed, Ian Gibson
Abstract Experimental design is a process of obtaining a product with a target property via experimentation. Bayesian optimization offers a sample-efficient tool for experimental design when experiments are expensive. Often, expert experimenters have ‘hunches’ about the behavior of the experimental system, offering the potential to further improve the efficiency. In this paper, we consider a per-variable monotonic trend in the underlying property that results in a unimodal trend in those variables for a target value optimization. For example, the sweetness of a candy is monotonic in the sugar content. However, to obtain a target sweetness, the utility of the sugar content becomes a unimodal function, which peaks at the value giving the target sweetness and falls off both ways. We propose a novel method to solve such problems that achieves two main objectives: a) the monotonicity information is used to the fullest extent possible, whilst ensuring that b) the convergence guarantee remains intact. This is achieved by a two-stage Gaussian process modeling, where the first stage uses the monotonicity trend to model the underlying property, and the second stage uses ‘virtual’ samples, sampled from the first, to model the target value optimization function. The process is made theoretically consistent by adding an appropriate adjustment factor in the posterior computation, necessitated by the use of the ‘virtual’ samples. The proposed method is evaluated through both simulations and real-world experimental design problems of a) a new short polymer fiber with a target length, and b) the design of a new three-dimensional porous scaffolding with a target porosity. In all scenarios our method demonstrates faster convergence than the basic Bayesian optimization approach not using such ‘hunches’.
Tasks
Published 2019-07-22
URL https://arxiv.org/abs/1907.09065v1
PDF https://arxiv.org/pdf/1907.09065v1.pdf
PWC https://paperswithcode.com/paper/accelerating-experimental-design-by
Repo
Framework
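
A heavily simplified illustration of the two-stage idea with scikit-learn Gaussian processes: the first GP models the monotonic property from real experiments, ‘virtual’ samples from its posterior mean build a unimodal target-value utility, and a second GP models that utility. The posterior adjustment factor and convergence analysis from the paper are not reproduced here; the kernel choice and data are made up:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

target = 0.7                               # desired value of the property
X_obs = np.array([[0.1], [0.4], [0.9]])    # measured experiments
y_obs = np.array([0.2, 0.55, 0.95])        # property is monotonic in x

# Stage 1: model the underlying (monotonic) property from real experiments.
gp_property = GaussianProcessRegressor(kernel=RBF(0.3)).fit(X_obs, y_obs)

# Draw 'virtual' samples from the first stage on a dense grid.
X_virtual = np.linspace(0, 1, 25).reshape(-1, 1)
y_virtual = gp_property.predict(X_virtual)

# Stage 2: model the unimodal target-value utility with the virtual samples.
utility = -np.abs(y_virtual - target)      # peaks where the property hits target
gp_utility = GaussianProcessRegressor(kernel=RBF(0.3)).fit(X_virtual, utility)

x_next = X_virtual[np.argmax(gp_utility.predict(X_virtual))]
print("next experiment suggested at x =", float(x_next))
```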