October 19, 2019

2997 words 15 mins read

Paper Group ANR 216

Hierarchical ResNeXt Models for Breast Cancer Histology Image Classification. Shape and Margin-Aware Lung Nodule Classification in Low-dose CT Images via Soft Activation Mapping. Fast CapsNet for Lung Cancer Screening. Joint Learning for Pulmonary Nodule Segmentation, Attributes and Malignancy Prediction. Superpixel-guided Two-view Deterministic Ge …

Hierarchical ResNeXt Models for Breast Cancer Histology Image Classification

Title Hierarchical ResNeXt Models for Breast Cancer Histology Image Classification
Authors Ismaël Koné, Lahsen Boulmane
Abstract Microscopic histology image analysis is a cornerstone of early breast cancer detection. However, these images are very large, and manual analysis is error-prone and time-consuming, so automating the process is in high demand. We propose a hierarchical system of convolutional neural networks (CNN) that automatically classifies patches of these images into four pathologies: normal, benign, in situ carcinoma, and invasive carcinoma. We evaluated the system on the BACH challenge dataset for image-wise classification and on a small dataset we used to extend it. With a 75%/25% train/test split, we achieved an accuracy of 0.99 on the BACH test split and 0.96 on that of the extension. On the BACH challenge test set, we reached an accuracy of 0.81, ranking 8th out of 51 teams.
Tasks Breast Cancer Histology Image Classification, Image Classification
Published 2018-10-21
URL http://arxiv.org/abs/1810.09025v1
PDF http://arxiv.org/pdf/1810.09025v1.pdf
PWC https://paperswithcode.com/paper/hierarchical-resnext-models-for-breast-cancer
Repo
Framework
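To make the hierarchical idea concrete, below is a minimal sketch (not the authors’ released code) of a two-level classifier built from ResNeXt-50 backbones in PyTorch: a coarse carcinoma vs. non-carcinoma stage followed by two finer binary stages. The particular split of the hierarchy, the backbone variant, and the way the branch posteriors are combined are illustrative assumptions.

```python
# Minimal sketch of a two-level hierarchical ResNeXt classifier (assumed hierarchy).
import torch
import torch.nn as nn
from torchvision import models

def resnext_head(num_classes: int) -> nn.Module:
    """ResNeXt-50 backbone with a fresh classification head."""
    net = models.resnext50_32x4d(weights=None)
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net

class HierarchicalClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.level1 = resnext_head(2)            # carcinoma vs. non-carcinoma (assumed split)
        self.benign_branch = resnext_head(2)     # normal vs. benign
        self.carcinoma_branch = resnext_head(2)  # in situ vs. invasive

    def forward(self, x):
        coarse = self.level1(x).softmax(dim=1)                     # (B, 2)
        fine_benign = self.benign_branch(x).softmax(dim=1)         # (B, 2)
        fine_carcinoma = self.carcinoma_branch(x).softmax(dim=1)   # (B, 2)
        # Combine into a 4-way posterior: [normal, benign, in situ, invasive]
        return torch.cat([coarse[:, :1] * fine_benign,
                          coarse[:, 1:] * fine_carcinoma], dim=1)

if __name__ == "__main__":
    model = HierarchicalClassifier()
    patches = torch.randn(4, 3, 224, 224)   # a batch of histology patches
    print(model(patches).shape)              # torch.Size([4, 4])
```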

Shape and Margin-Aware Lung Nodule Classification in Low-dose CT Images via Soft Activation Mapping

Title Shape and Margin-Aware Lung Nodule Classification in Low-dose CT Images via Soft Activation Mapping
Authors Yiming Lei, Yukun Tian, Hongming Shan, Junping Zhang, Ge Wang, Mannudeep Kalra
Abstract A number of studies on lung nodule classification lack clinical/biological interpretations of the features extracted by the convolutional neural network (CNN). Methods such as class activation mapping (CAM) and gradient-based CAM (Grad-CAM) are tailored to interpreting localization and classification tasks but ignore fine-grained features. They therefore cannot provide optimal interpretations for lung nodule categorization in low-dose CT images, where fine-grained pathological clues, such as discrete and irregular nodule shapes and margins, can improve the sensitivity and specificity of CNN-based nodule classification. In this paper, we first develop a soft activation mapping (SAM) that enables fine-grained lung nodule shape and margin (LNSM) feature analysis with a CNN, giving it access to rich discrete features. Second, by combining high-level convolutional features with SAM, we propose a high-level feature enhancement scheme (HESAM) to localize LNSM features. Experiments on the LIDC-IDRI dataset indicate that 1) SAM captures more fine-grained and discrete attention regions than existing methods, 2) HESAM localizes LNSM features more accurately and obtains state-of-the-art predictive performance while reducing the false positive rate, and 3) a visual matching experiment involving radiologists increases the confidence level for applying our method to clinical diagnosis.
Tasks Data Augmentation, Lung Nodule Classification, Mapping of Lung Nodules in Low-Dose CT Images
Published 2018-10-30
URL https://arxiv.org/abs/1810.12494v2
PDF https://arxiv.org/pdf/1810.12494v2.pdf
PWC https://paperswithcode.com/paper/soft-activation-mapping-of-lung-nodules-in
Repo
Framework
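For orientation, here is a minimal sketch of plain class activation mapping (CAM), the baseline interpretation method the paper starts from; the SAM and HESAM modules described above refine this idea and are not reproduced here. The tiny CNN and the single-channel CT patch are illustrative placeholders.

```python
# Minimal sketch of plain class activation mapping (CAM) on a toy CNN.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CAMNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(64, num_classes)  # applied after global average pooling

    def forward(self, x):
        fmap = self.features(x)          # (B, C, H, W)
        pooled = fmap.mean(dim=(2, 3))   # global average pooling
        return self.classifier(pooled), fmap

def class_activation_map(fmap, weights, class_idx):
    """Weight the feature maps by the classifier weights of the target class."""
    cam = torch.einsum("c,bchw->bhw", weights[class_idx], fmap)
    return F.relu(cam)

if __name__ == "__main__":
    net = CAMNet()
    ct_patch = torch.randn(1, 1, 64, 64)   # toy low-dose CT nodule patch
    logits, fmap = net(ct_patch)
    cam = class_activation_map(fmap, net.classifier.weight, logits.argmax(dim=1).item())
    print(cam.shape)   # torch.Size([1, 64, 64])
```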

Fast CapsNet for Lung Cancer Screening

Title Fast CapsNet for Lung Cancer Screening
Authors Aryan Mobiny, Hien Van Nguyen
Abstract Lung cancer has been the leading cause of cancer-related deaths in recent years. A major challenge in lung cancer screening is the detection of lung nodules from computed tomography (CT) scans. State-of-the-art approaches to automated lung nodule classification use deep convolutional neural networks (CNNs). However, these networks require a large number of training samples to generalize well. This paper investigates the use of capsule networks (CapsNets) as an alternative to CNNs. We show that CapsNets significantly outperform CNNs when the number of training samples is small. To increase computational efficiency, we propose a consistent dynamic routing mechanism that yields a $3\times$ speedup of CapsNet. Finally, we show that the original image reconstruction method of CapsNets performs poorly on lung nodule data, and we propose an efficient alternative, called the convolutional decoder, that yields lower reconstruction error and higher classification accuracy.
Tasks Computed Tomography (CT), Image Reconstruction, Lung Nodule Classification
Published 2018-06-19
URL http://arxiv.org/abs/1806.07416v1
PDF http://arxiv.org/pdf/1806.07416v1.pdf
PWC https://paperswithcode.com/paper/fast-capsnet-for-lung-cancer-screening
Repo
Framework
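As context for the routing speedup, below is a minimal sketch of the standard dynamic-routing-by-agreement step from the original CapsNet formulation; the paper’s consistent dynamic routing is a modification of this loop and is not reproduced here. Tensor shapes are illustrative.

```python
# Minimal sketch of standard dynamic routing by agreement (original CapsNet).
import torch
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    norm2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm2 / (1.0 + norm2)) * s / torch.sqrt(norm2 + eps)

def dynamic_routing(u_hat, num_iters=3):
    """u_hat: prediction vectors of shape (B, in_caps, out_caps, out_dim)."""
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)    # routing logits
    for _ in range(num_iters):
        c = F.softmax(b, dim=2)                               # coupling coefficients
        s = (c.unsqueeze(-1) * u_hat).sum(dim=1)              # weighted sum over input capsules
        v = squash(s)                                         # output capsules (B, out_caps, out_dim)
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)          # agreement update
    return v

if __name__ == "__main__":
    u_hat = torch.randn(2, 1152, 10, 16)   # toy primary-to-class capsule predictions
    print(dynamic_routing(u_hat).shape)    # torch.Size([2, 10, 16])
```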

Joint Learning for Pulmonary Nodule Segmentation, Attributes and Malignancy Prediction

Title Joint Learning for Pulmonary Nodule Segmentation, Attributes and Malignancy Prediction
Authors Botong Wu, Zhen Zhou, Jianwei Wang, Yizhou Wang
Abstract In the literature on lung nodule classification, many studies adopt Convolutional Neural Networks (CNN) to directly predict the malignancy of lung nodules from the original thoracic Computed Tomography (CT) scan and the nodule location. However, these studies cannot explain how the CNN arrives at its malignancy prediction for a given nodule; for example, it is hard to tell from the output of the CNN whether the region within the nodule or the contextual information matters. In this paper, we propose an interpretable, multi-task learning CNN: Joint learning for Pulmonary Nodule Segmentation, Attributes and Malignancy Prediction (PN-SAMP). It not only accurately predicts the malignancy of lung nodules, but also provides high-level semantic attributes and the areas of the detected nodules. Moreover, combining nodule segmentation, attribute prediction, and malignancy prediction helps improve the performance of each individual task. In addition, inspired by the fact that radiologists often change window widths and window centers to help decide on uncertain nodules, PN-SAMP mixes multiple window width/window center (WW/WC) settings to extract more information from the raw CT input images. To verify the effectiveness of the proposed method, we evaluate it on the public LIDC-IDRI dataset, one of the largest datasets for lung nodule malignancy prediction. Experiments indicate that PN-SAMP achieves significant improvement in lung nodule classification and promising performance on lung nodule segmentation and attribute learning compared with state-of-the-art methods.
Tasks Computed Tomography (CT), Lung Nodule Classification, Lung Nodule Segmentation, Multi-Task Learning
Published 2018-02-10
URL http://arxiv.org/abs/1802.03584v1
PDF http://arxiv.org/pdf/1802.03584v1.pdf
PWC https://paperswithcode.com/paper/joint-learning-for-pulmonary-nodule
Repo
Framework
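The WW/WC mixing mentioned in the abstract can be illustrated with a small sketch: several radiological windows are applied to the raw Hounsfield units and stacked as input channels. The specific window centers and widths below are common illustrative choices, not necessarily the paper’s settings.

```python
# Minimal sketch of multi-window (WW/WC) CT preprocessing with illustrative windows.
import numpy as np

def apply_window(hu: np.ndarray, center: float, width: float) -> np.ndarray:
    """Clip HU values to [center - width/2, center + width/2] and rescale to [0, 1]."""
    lo, hi = center - width / 2.0, center + width / 2.0
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)

def multi_window_stack(hu: np.ndarray) -> np.ndarray:
    windows = [(-600, 1500), (40, 400), (-160, 1600)]   # lung / mediastinal / wide (assumed)
    return np.stack([apply_window(hu, c, w) for c, w in windows], axis=0)

if __name__ == "__main__":
    ct_slice = np.random.randint(-1024, 400, size=(64, 64)).astype(np.float32)
    x = multi_window_stack(ct_slice)
    print(x.shape, x.min(), x.max())   # (3, 64, 64) with values in [0, 1]
```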

Superpixel-guided Two-view Deterministic Geometric Model Fitting

Title Superpixel-guided Two-view Deterministic Geometric Model Fitting
Authors Guobao Xiao, Hanzi Wang, Yan Yan, David Suter
Abstract Geometric model fitting is a fundamental research topic in computer vision that aims to fit and segment multiple-structure data. In this paper, we propose a novel superpixel-guided two-view geometric model fitting method (called SDF), which obtains reliable and consistent results on real images. Specifically, SDF has three main parts: a deterministic sampling algorithm, a model hypothesis updating strategy, and a novel model selection algorithm. The deterministic sampling algorithm generates a set of initial model hypotheses according to the prior information of superpixels. The updating strategy then further improves the quality of the model hypotheses. After that, by analyzing the properties of the updated model hypotheses, the model selection algorithm extends the conventional “fit-and-remove” framework to estimate model instances in multiple-structure data. The three parts are tightly coupled to boost the performance of SDF in both speed and accuracy, and SDF is deterministic by nature. Experimental results show that SDF has significant advantages over several state-of-the-art fitting methods when applied to real images with single-structure and multiple-structure data.
Tasks Model Selection
Published 2018-05-03
URL http://arxiv.org/abs/1805.01158v1
PDF http://arxiv.org/pdf/1805.01158v1.pdf
PWC https://paperswithcode.com/paper/superpixel-guided-two-view-deterministic
Repo
Framework
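To illustrate the “fit-and-remove” framework that SDF extends, here is a minimal sketch for fitting multiple 2D lines: hypotheses are generated deterministically from spatially grouped points (a crude stand-in for superpixel guidance), the best hypothesis by inlier count is kept, its inliers are removed, and the process repeats. SDF’s actual hypothesis updating and model selection steps are not reproduced.

```python
# Minimal sketch of deterministic fit-and-remove for multi-line fitting.
import numpy as np

def fit_line(points):
    """Fit ax + by + c = 0 with ||(a, b)|| = 1 by total least squares."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    a, b = vt[-1]
    return np.array([a, b, -(a * centroid[0] + b * centroid[1])])

def distances(points, line):
    return np.abs(points @ line[:2] + line[2])

def seed_hypotheses(points, cell=0.1, min_pts=8):
    """Group points by a coarse grid cell (crude stand-in for superpixels); fit one line per group."""
    groups = {}
    for p, key in zip(points, map(tuple, np.floor(points / cell).astype(int))):
        groups.setdefault(key, []).append(p)
    return [fit_line(np.array(g)) for g in groups.values() if len(g) >= min_pts]

def fit_and_remove(points, threshold=0.05, min_inliers=30):
    models, remaining = [], points.copy()
    while len(remaining) >= min_inliers:
        hypotheses = seed_hypotheses(remaining)
        if not hypotheses:
            break
        best = max(hypotheses, key=lambda h: (distances(remaining, h) < threshold).sum())
        inliers = distances(remaining, best) < threshold
        if inliers.sum() < min_inliers:
            break
        models.append(fit_line(remaining[inliers]))   # refit on the inliers, then remove them
        remaining = remaining[~inliers]
    return models

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 200)
    line1 = np.stack([t, 0.5 * t + 0.1 + 0.01 * np.random.randn(200)], axis=1)
    line2 = np.stack([t, 1.0 - t + 0.01 * np.random.randn(200)], axis=1)
    print(len(fit_and_remove(np.vstack([line1, line2]))))   # expected: 2
```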

The Higher-Order Prover Leo-III (Extended Version)

Title The Higher-Order Prover Leo-III (Extended Version)
Authors Alexander Steen, Christoph Benzmüller
Abstract The automated theorem prover Leo-III for classical higher-order logic with Henkin semantics and choice is presented. Leo-III is based on extensional higher-order paramodulation and accepts every common TPTP dialect (FOF, TFF, THF), including their recent extensions to rank-1 polymorphism (TF1, TH1). In addition, the prover natively supports almost every normal higher-order modal logic. Leo-III cooperates with first-order reasoning tools using translations to many-sorted first-order logic and produces verifiable proof certificates. The prover is evaluated on heterogeneous benchmark sets.
Tasks
Published 2018-02-08
URL http://arxiv.org/abs/1802.02732v2
PDF http://arxiv.org/pdf/1802.02732v2.pdf
PWC https://paperswithcode.com/paper/the-higher-order-prover-leo-iii-extended
Repo
Framework

Design Challenges of Multi-UAV Systems in Cyber-Physical Applications: A Comprehensive Survey, and Future Directions

Title Design Challenges of Multi-UAV Systems in Cyber-Physical Applications: A Comprehensive Survey, and Future Directions
Authors Reza Shakeri, Mohammed Ali Al-Garadi, Ahmed Badawy, Amr Mohamed, Tamer Khattab, Abdulla Al-Ali, Khaled A. Harras, Mohsen Guizani
Abstract The use of Unmanned Aerial Vehicles (UAVs) has recently grown rapidly, facilitating a wide range of innovative applications that can fundamentally change the way cyber-physical systems (CPSs) are designed. CPSs are a modern generation of systems with synergic cooperation between computational and physical potentials that can interact with humans through several new mechanisms. The main advantages of using UAVs in CPS applications are their exceptional features, including mobility, dynamism, effortless deployment, adaptive altitude, agility, adjustability, and effective appraisal of real-world functions anytime and anywhere. Furthermore, from the technology perspective, UAVs are predicted to be a vital element of the development of advanced CPSs. Therefore, in this survey, we aim to pinpoint the most fundamental and important design challenges of multi-UAV systems for CPS applications. We highlight key and versatile aspects that span the coverage and tracking of targets and infrastructure objects, energy-efficient navigation, and image analysis using machine learning for fine-grained CPS applications. Key prototypes and testbeds are also investigated to show how these practical technologies can facilitate CPS applications. We present and propose state-of-the-art algorithms to address design challenges with both quantitative and qualitative methods, and we map these challenges to important CPS applications to draw insightful conclusions on the challenges of each application. Finally, we summarize potential new directions and ideas that could shape future research in these areas.
Tasks
Published 2018-10-23
URL http://arxiv.org/abs/1810.09729v1
PDF http://arxiv.org/pdf/1810.09729v1.pdf
PWC https://paperswithcode.com/paper/design-challenges-of-multi-uav-systems-in
Repo
Framework

Courteous Autonomous Cars

Title Courteous Autonomous Cars
Authors Liting Sun, Wei Zhan, Masayoshi Tomizuka, Anca D. Dragan
Abstract Typically, autonomous cars optimize for a combination of safety, efficiency, and driving quality. But as we get better at this optimization, we start seeing behavior go from too conservative to too aggressive. The car’s behavior exposes the incentives we provide in its cost function. In this work, we argue for cars that are not optimizing a purely selfish cost, but also try to be courteous to other interactive drivers. We formalize courtesy as a term in the objective that measures the increase in another driver’s cost induced by the autonomous car’s behavior. Such a courtesy term enables the robot car to be aware of possible irrationality of the human behavior, and plan accordingly. We analyze the effect of courtesy in a variety of scenarios. We find, for example, that courteous robot cars leave more space when merging in front of a human driver. Moreover, we find that such a courtesy term can help explain real human driver behavior on the NGSIM dataset.
Tasks
Published 2018-08-08
URL http://arxiv.org/abs/1808.02633v2
PDF http://arxiv.org/pdf/1808.02633v2.pdf
PWC https://paperswithcode.com/paper/courteous-autonomous-cars
Repo
Framework
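The courtesy term described above can be sketched as a penalty on the increase in the human driver’s cost caused by the robot’s plan, relative to what the human’s cost would have been otherwise. The toy cost functions, weights, and trajectories below are illustrative placeholders, not the paper’s formulation.

```python
# Minimal sketch of a courteous planning objective with toy cost terms.
import numpy as np

def human_cost(human_traj, robot_traj=None):
    """Toy human cost: control effort plus a proximity penalty when a robot is present."""
    effort = np.sum(np.diff(human_traj, axis=0) ** 2)
    if robot_traj is None:
        return effort
    gaps = np.linalg.norm(human_traj - robot_traj, axis=1)
    return effort + np.sum(np.exp(-5.0 * gaps))

def total_robot_cost(robot_traj, human_traj, human_traj_alone, lam=1.0):
    selfish = np.sum(np.diff(robot_traj, axis=0) ** 2)   # toy selfish cost (effort only)
    # Courtesy: how much worse off the human is because of the robot's plan.
    courtesy = max(0.0, human_cost(human_traj, robot_traj) - human_cost(human_traj_alone))
    return selfish + lam * courtesy

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 20)[:, None]
    robot = np.hstack([t, 0.2 * np.ones_like(t)])   # robot merging close to the human's lane
    human = np.hstack([t, np.zeros_like(t)])        # human's trajectory with the robot present
    print(total_robot_cost(robot, human, human_traj_alone=human.copy(), lam=2.0))
```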

Retrieval-Based Neural Code Generation

Title Retrieval-Based Neural Code Generation
Authors Shirley Anugrah Hayati, Raphael Olivier, Pravalika Avvaru, Pengcheng Yin, Anthony Tomasic, Graham Neubig
Abstract In models to generate program source code from natural language, representing this code in a tree structure has been a common approach. However, existing methods often fail to generate complex code correctly due to a lack of ability to memorize large and complex structures. We introduce ReCode, a method based on subtree retrieval that makes it possible to explicitly reference existing code examples within a neural code generation model. First, we retrieve sentences that are similar to input sentences using a dynamic-programming-based sentence similarity scoring method. Next, we extract n-grams of action sequences that build the associated abstract syntax tree. Finally, we increase the probability of actions that cause the retrieved n-gram action subtree to be in the predicted code. We show that our approach improves the performance on two code generation tasks by up to +2.6 BLEU.
Tasks Code Generation
Published 2018-08-29
URL http://arxiv.org/abs/1808.10025v1
PDF http://arxiv.org/pdf/1808.10025v1.pdf
PWC https://paperswithcode.com/paper/retrieval-based-neural-code-generation
Repo
Framework
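A minimal sketch of the retrieval-and-boost idea: find training sentences similar to the input, collect n-grams from their associated action sequences, and add a bonus to candidate actions whose recent context completes a retrieved n-gram. The Jaccard similarity and additive bonus below are simplified stand-ins for the paper’s dynamic-programming similarity and probability update.

```python
# Minimal sketch of retrieval-guided action boosting (simplified stand-in for ReCode).
from collections import Counter

def similarity(a, b):
    """Token-overlap (Jaccard) similarity between two sentences."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(1, len(ta | tb))

def retrieve(query, corpus, k=2):
    """corpus: list of (sentence, action_sequence) training pairs."""
    return sorted(corpus, key=lambda ex: similarity(query, ex[0]), reverse=True)[:k]

def ngram_bonus(context, candidate, retrieved, n=3, bonus=0.1):
    """Boost a candidate action if context[-(n-1):] + [candidate] occurs as an n-gram in retrieved actions."""
    target = tuple(context[-(n - 1):] + [candidate])
    grams = Counter()
    for _, actions in retrieved:
        grams.update(tuple(actions[i:i + n]) for i in range(len(actions) - n + 1))
    return bonus if grams[target] else 0.0

if __name__ == "__main__":
    corpus = [
        ("sort the list in reverse order", ["Call", "sorted", "Arg", "reverse=True"]),
        ("open a file for reading", ["Call", "open", "Arg", "mode='r'"]),
    ]
    retrieved = retrieve("sort items in reverse order", corpus)
    print(ngram_bonus(["Call", "sorted"], "Arg", retrieved))   # 0.1
```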

Word learning and the acquisition of syntactic–semantic overhypotheses

Title Word learning and the acquisition of syntactic–semantic overhypotheses
Authors Jon Gauthier, Roger Levy, Joshua B. Tenenbaum
Abstract Children learning their first language face multiple problems of induction: how to learn the meanings of words, and how to build meaningful phrases from those words according to syntactic rules. We consider how children might solve these problems efficiently by solving them jointly, via a computational model that learns the syntax and semantics of multi-word utterances in a grounded reference game. We select a well-studied empirical case in which children are aware of patterns linking the syntactic and semantic properties of words — that the properties picked out by base nouns tend to be related to shape, while prenominal adjectives tend to refer to other properties such as color. We show that children applying such inductive biases are accurately reflecting the statistics of child-directed speech, and that inducing similar biases in our computational model captures children’s behavior in a classic adjective learning experiment. Our model incorporating such biases also demonstrates a clear data efficiency in learning, relative to a baseline model that learns without forming syntax-sensitive overhypotheses of word meaning. Thus solving a more complex joint inference problem may make the full problem of language acquisition easier, not harder.
Tasks Language Acquisition
Published 2018-05-14
URL http://arxiv.org/abs/1805.04988v1
PDF http://arxiv.org/pdf/1805.04988v1.pdf
PWC https://paperswithcode.com/paper/word-learning-and-the-acquisition-of
Repo
Framework

Embedding Electronic Health Records for Clinical Information Retrieval

Title Embedding Electronic Health Records for Clinical Information Retrieval
Authors Xing Wei, Carsten Eickhoff
Abstract Neural network representation learning frameworks have recently been shown to be highly effective at a wide range of tasks, from radiography interpretation via data-driven diagnostics to clinical decision support. This often superior performance comes at the price of dramatically increased training data requirements that cannot be satisfied in every institution or scenario. As a means of countering such data sparsity, distant supervision alleviates the need for scarce in-domain data by relying on a related, resource-rich task for training. This study presents an end-to-end neural clinical decision support system that recommends relevant literature for individual patients (few available resources) via distant supervision on the well-known MIMIC-III collection (an abundant resource). Our experiments show significant improvements in retrieval effectiveness over traditional statistical as well as purely locally supervised retrieval models.
Tasks Information Retrieval, Representation Learning
Published 2018-11-13
URL http://arxiv.org/abs/1811.05402v1
PDF http://arxiv.org/pdf/1811.05402v1.pdf
PWC https://paperswithcode.com/paper/embedding-electronic-health-records-for
Repo
Framework
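The retrieval scaffold can be sketched as follows: embed the patient record and the candidate literature, then rank by cosine similarity. The paper learns neural embeddings with distant supervision on MIMIC-III; the TF-IDF representation and toy documents below are only stand-ins.

```python
# Minimal sketch of embedding-based literature retrieval for a patient record.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

literature = [
    "management of acute decompensated heart failure",
    "antibiotic therapy for community acquired pneumonia",
    "insulin dosing strategies in type 2 diabetes",
]
patient_note = "elderly patient with shortness of breath and pulmonary edema, reduced ejection fraction"

# Embed all documents plus the patient note in the same vector space (TF-IDF stand-in).
vectorizer = TfidfVectorizer()
vectors = vectorizer.fit_transform(literature + [patient_note])
scores = cosine_similarity(vectors[-1], vectors[:-1]).ravel()

# Rank candidate literature by similarity to the patient note.
for rank, idx in enumerate(scores.argsort()[::-1], start=1):
    print(rank, round(float(scores[idx]), 3), literature[idx])
```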

Actor-Critic Deep Reinforcement Learning for Dynamic Multichannel Access

Title Actor-Critic Deep Reinforcement Learning for Dynamic Multichannel Access
Authors Chen Zhong, Ziyang Lu, M. Cenk Gursoy, Senem Velipasalar
Abstract We consider the dynamic multichannel access problem, which can be formulated as a partially observable Markov decision process (POMDP). We first propose a model-free actor-critic deep reinforcement learning framework to learn the sensing policy. To evaluate the performance of the proposed sensing policy and the framework’s tolerance to uncertainty, we test the framework in scenarios with different channel switching patterns and different switching probabilities. We then consider a time-varying environment to assess the adaptive ability of the proposed framework. Additionally, we provide comparisons with the Deep Q-network (DQN) based framework proposed in [1], in terms of both average reward and time efficiency.
Tasks
Published 2018-10-08
URL http://arxiv.org/abs/1810.03695v1
PDF http://arxiv.org/pdf/1810.03695v1.pdf
PWC https://paperswithcode.com/paper/actor-critic-deep-reinforcement-learning-for
Repo
Framework
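A minimal actor-critic sketch for channel selection: the actor outputs a distribution over channels given an observation, the critic estimates the state value, and both are updated from the observed reward. The network sizes, observation encoding, and one-step return below are simplifications, not the paper’s configuration.

```python
# Minimal one-step actor-critic sketch for dynamic channel selection.
import torch
import torch.nn as nn

NUM_CHANNELS, OBS_DIM = 4, 16

actor = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, NUM_CHANNELS))
critic = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=1e-3)

def step(obs, reward_fn):
    dist = torch.distributions.Categorical(logits=actor(obs))
    action = dist.sample()                   # which channel to sense
    reward = reward_fn(action.item())        # e.g., 1.0 if the channel was idle, else 0.0
    value = critic(obs).squeeze(-1)
    advantage = reward - value               # one-step advantage (no bootstrap, for brevity)
    actor_loss = -(dist.log_prob(action) * advantage.detach()).mean()
    critic_loss = advantage.pow(2).mean()
    opt.zero_grad()
    (actor_loss + critic_loss).backward()
    opt.step()
    return action.item(), reward

if __name__ == "__main__":
    obs = torch.randn(1, OBS_DIM)
    print(step(obs, reward_fn=lambda ch: 1.0 if ch == 0 else 0.0))
```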

Binary Ensemble Neural Network: More Bits per Network or More Networks per Bit?

Title Binary Ensemble Neural Network: More Bits per Network or More Networks per Bit?
Authors Shilin Zhu, Xin Dong, Hao Su
Abstract Binary neural networks (BNNs) have been studied extensively since they run dramatically faster, with lower memory and power consumption, than floating-point networks, thanks to the efficiency of bit operations. However, contemporary BNNs, whose weights and activations are both single bits, suffer from severe accuracy degradation. To understand why, we investigate the representation ability, speed, and bias/variance of BNNs through extensive experiments. We conclude that the error of BNNs is predominantly caused by intrinsic instability (at training time) and non-robustness (at train and test time). Inspired by this investigation, we propose the Binary Ensemble Neural Network (BENN), which leverages ensemble methods to improve the performance of BNNs at limited efficiency cost. While ensemble techniques have broadly been believed to be only marginally helpful for strong classifiers such as deep neural networks, our analyses and experiments show that they are a natural fit for boosting BNNs. We find that our BENN, which is faster and much more robust than state-of-the-art binary networks, can even surpass the accuracy of a full-precision floating-point network with the same architecture.
Tasks
Published 2018-06-20
URL http://arxiv.org/abs/1806.07550v2
PDF http://arxiv.org/pdf/1806.07550v2.pdf
PWC https://paperswithcode.com/paper/binary-ensemble-neural-network-more-bits-per
Repo
Framework
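The two ingredients the paper combines can be sketched separately: binarizing a network’s weights to {-1, +1} with a per-layer scale, and averaging the predictions of an ensemble of binary networks. The training procedure and the bagging/boosting variants studied in the paper are omitted.

```python
# Minimal sketch of weight binarization plus ensemble averaging (training omitted).
import torch
import torch.nn as nn

def binarize_(module: nn.Module):
    """Replace each Linear/Conv weight with sign(W) scaled by its mean absolute value."""
    for m in module.modules():
        if isinstance(m, (nn.Linear, nn.Conv2d)):
            with torch.no_grad():
                alpha = m.weight.abs().mean()
                m.weight.copy_(alpha * torch.sign(m.weight))

def make_net():
    return nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

def ensemble_predict(nets, x):
    return torch.stack([net(x) for net in nets]).mean(dim=0)   # average logits over members

if __name__ == "__main__":
    members = [make_net() for _ in range(5)]
    for net in members:
        binarize_(net)   # in practice each member is trained with binarized weights
    x = torch.randn(8, 1, 28, 28)
    print(ensemble_predict(members, x).argmax(dim=1))
```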

Fast, Trainable, Multiscale Denoising

Title Fast, Trainable, Multiscale Denoising
Authors Sungjoon Choi, John Isidoro, Pascal Getreuer, Peyman Milanfar
Abstract Denoising is a fundamental imaging problem, and mobile camera systems demand filtering that is both versatile and fast. We present an approach to multiscale filtering that allows real-time applications on low-powered devices. The key idea is to learn a set of kernels that upscale, filter, and blend patches of different scales, guided by local structure analysis. The approach is trainable, so the learned filters can handle diverse noise patterns and artifacts. Experimental results show that the presented approach produces results comparable to state-of-the-art algorithms while being orders of magnitude faster.
Tasks Denoising
Published 2018-02-16
URL http://arxiv.org/abs/1802.06130v1
PDF http://arxiv.org/pdf/1802.06130v1.pdf
PWC https://paperswithcode.com/paper/fast-trainable-multiscale-denoising
Repo
Framework
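A minimal sketch of the multiscale filter-and-blend idea: denoise at two scales and blend per pixel using a weight derived from local structure (gradient magnitude), so flat regions take the coarse result and edges keep the fine one. The fixed Gaussian filters here stand in for the learned upscaling, filtering, and blending kernels of the paper.

```python
# Minimal sketch of structure-guided multiscale denoising with fixed Gaussian filters.
import numpy as np
from scipy import ndimage

def multiscale_denoise(img: np.ndarray) -> np.ndarray:
    fine = ndimage.gaussian_filter(img, sigma=1.0)                     # fine-scale filtering
    coarse_small = ndimage.gaussian_filter(ndimage.zoom(img, 0.5), sigma=1.0)
    coarse = ndimage.zoom(coarse_small, np.array(img.shape) / np.array(coarse_small.shape))
    gx, gy = np.gradient(ndimage.gaussian_filter(img, sigma=2.0))      # local structure analysis
    edge = np.hypot(gx, gy)
    w = edge / (edge.max() + 1e-8)                                     # ~1 near edges, ~0 in flat areas
    return w * fine + (1.0 - w) * coarse

if __name__ == "__main__":
    clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
    noisy = clean + 0.1 * np.random.randn(64, 64)
    out = multiscale_denoise(noisy)
    print(out.shape, float(np.mean((out - clean) ** 2)) < float(np.mean((noisy - clean) ** 2)))
```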

MARVIN: An Open Machine Learning Corpus and Environment for Automated Machine Learning Primitive Annotation and Execution

Title MARVIN: An Open Machine Learning Corpus and Environment for Automated Machine Learning Primitive Annotation and Execution
Authors Chris A. Mattmann, Sujen Shah, Brian Wilson
Abstract In this demo paper, we introduce the DARPA D3M program for automated machine learning (ML) and JPL’s MARVIN tool, which provides an environment to locate, annotate, and execute machine learning primitives for use in ML pipelines. MARVIN is a web-based application with an associated back-end interface written in Python that enables composition of ML pipelines from hundreds of primitives from Scikit-Learn, Keras, DL4J, and other widely used libraries. MARVIN allows the creation of Docker containers that run on Kubernetes clusters within DARPA to provide an execution environment for automated machine learning. MARVIN currently contains over 400 datasets and challenge problems from a wide array of ML domains, ranging from routine classification and regression to advanced video/image classification and remote sensing.
Tasks Image Classification
Published 2018-08-11
URL http://arxiv.org/abs/1808.03753v1
PDF http://arxiv.org/pdf/1808.03753v1.pdf
PWC https://paperswithcode.com/paper/marvin-an-open-machine-learning-corpus-and
Repo
Framework
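As a rough illustration of the kind of primitive composition MARVIN manages, here is a plain scikit-learn pipeline; MARVIN’s own annotation format, web interface, and Docker/Kubernetes execution environment are not shown, and none of its API is assumed.

```python
# Minimal illustration of composing ML "primitives" into a pipeline with scikit-learn.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Each step plays the role of one primitive in an ML pipeline.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("reduce", PCA(n_components=32)),
    ("classify", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X_tr, y_tr)
print(round(pipeline.score(X_te, y_te), 3))
```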