October 16, 2019

3265 words 16 mins read

Paper Group ANR 1070

Classification of normal/abnormal heart sound recordings based on multi-domain features and back propagation neural network. Triple Attention Mixed Link Network for Single Image Super Resolution. On Regularized Losses for Weakly-supervised CNN Segmentation. High Dimensional Bayesian Optimization Using Dropout. IDD: A Dataset for Exploring Problems …

Classification of normal/abnormal heart sound recordings based on multi-domain features and back propagation neural network

Title Classification of normal/abnormal heart sound recordings based on multi-domain features and back propagation neural network
Authors Hong Tang, Huaming Chen, Ting Li, Mingjun Zhong
Abstract This paper aims to classify a single PCG recording as normal or abnormal for computer-aided diagnosis. The proposed framework for this challenge has four steps: preprocessing, feature extraction, training and validation. In the preprocessing step, a recording is segmented into four states, i.e., the first heart sound, systolic interval, the second heart sound, and diastolic interval, by the Springer segmentation algorithm. In the feature extraction step, the authors extract 324 features from multiple domains to perform classification. A back propagation neural network is used as the prediction model. The optimal threshold for distinguishing normal from abnormal is determined from the statistics of the model output for both normal and abnormal recordings. Tested on the six training sets, the proposed predictor achieves sensitivity 0.812 and specificity 0.860 (overall accuracy 0.836). However, performance drops to sensitivity 0.807 and specificity 0.829 (overall accuracy 0.818) on the hidden test set.
Tasks
Published 2018-10-17
URL http://arxiv.org/abs/1810.09253v1
PDF http://arxiv.org/pdf/1810.09253v1.pdf
PWC https://paperswithcode.com/paper/classification-of-normalabnormal-heart-sound
Repo
Framework
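
The thresholding step described in the abstract is simple to reproduce in outline. The sketch below trains a small back-propagation network and picks a decision threshold from the score statistics; the random stand-in features, the network size, and the Youden-style threshold rule are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 324))      # stand-in for the 324 multi-domain features
y = rng.integers(0, 2, size=200)     # 0 = normal, 1 = abnormal

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
scores = net.predict_proba(X)[:, 1]

def youden(t):
    # sensitivity + specificity - 1 at threshold t
    pred = scores >= t
    sens = (pred & (y == 1)).sum() / max((y == 1).sum(), 1)
    spec = (~pred & (y == 0)).sum() / max((y == 0).sum(), 1)
    return sens + spec - 1

best_t = max(np.linspace(0, 1, 101), key=youden)   # chosen operating threshold
```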

Triple Attention Mixed Link Network for Single Image Super Resolution

Title Triple Attention Mixed Link Network for Single Image Super Resolution
Authors Xi Cheng, Xiang Li, Jian Yang
Abstract Single image super resolution is of great importance as a low-level computer vision task. Recent approaches with deep convolutional neural networks have achieved impressive performance. However, existing architectures have limitations due to their less sophisticated structure and weaker representational power. In this work, to significantly enhance the feature representation, we propose the Triple Attention mixed link Network (TAN), which consists of 1) three different aspects (i.e., kernel, spatial and channel) of attention mechanisms and 2) a fusion of both powerful residual and dense connections (i.e., mixed link). Specifically, the network with multiple kernels learns hierarchical representations under different receptive fields. The output features are recalibrated by the effective kernel and channel attentions and fed into the next layer partly residually and partly densely, which filters the information and enables the network to learn more powerful representations. The features finally pass through the spatial attention in the reconstruction network, which generates a fusion of local and global information, letting the network restore more details and improving the quality of the reconstructed images. Thanks to the diverse feature recalibrations and the advanced information flow topology, our proposed model is strong enough to perform against the state-of-the-art methods on the benchmark evaluations.
Tasks Image Super-Resolution, Super-Resolution
Published 2018-10-08
URL http://arxiv.org/abs/1810.03254v1
PDF http://arxiv.org/pdf/1810.03254v1.pdf
PWC https://paperswithcode.com/paper/triple-attention-mixed-link-network-for
Repo
Framework
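
Of the three attentions, the channel branch is the easiest to illustrate. Below is a generic squeeze-and-excitation-style channel attention in PyTorch, a minimal sketch of the kind of recalibration the abstract describes, not TAN's exact module (the kernel and spatial attentions are omitted).

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # squeeze: global context
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                  # per-channel gate in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(x)                            # recalibrate feature maps

feats = torch.randn(1, 64, 32, 32)
out = ChannelAttention(64)(feats)                          # same shape, reweighted
```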

On Regularized Losses for Weakly-supervised CNN Segmentation

Title On Regularized Losses for Weakly-supervised CNN Segmentation
Authors Meng Tang, Federico Perazzi, Abdelaziz Djelouah, Ismail Ben Ayed, Christopher Schroers, Yuri Boykov
Abstract Minimization of regularized losses is a principled approach to weak supervision well-established in deep learning, in general. However, it is largely overlooked in semantic segmentation currently dominated by methods mimicking full supervision via “fake” fully-labeled training masks (proposals) generated from available partial input. To obtain such full masks the typical methods explicitly use standard regularization techniques for “shallow” segmentation, e.g. graph cuts or dense CRFs. In contrast, we integrate such standard regularizers directly into the loss functions over partial input. This approach simplifies weakly-supervised training by avoiding extra MRF/CRF inference steps or layers explicitly generating full masks, while improving both the quality and efficiency of training. This paper proposes and experimentally compares different losses integrating MRF/CRF regularization terms. We juxtapose our regularized losses with earlier proposal-generation methods using explicit regularization steps or layers. Our approach achieves state-of-the-art accuracy in semantic segmentation with near full-supervision quality.
Tasks Semantic Segmentation
Published 2018-03-26
URL http://arxiv.org/abs/1803.09569v2
PDF http://arxiv.org/pdf/1803.09569v2.pdf
PWC https://paperswithcode.com/paper/on-regularized-losses-for-weakly-supervised
Repo
Framework
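
The core idea — moving the regularizer from a proposal-generation step into the loss itself — can be sketched in a few lines. The pairwise smoothness term below is a crude stand-in for the paper's relaxed MRF/CRF regularizers; only the partial cross-entropy over scribbled pixels is standard.

```python
import torch
import torch.nn.functional as F

def regularized_loss(logits, scribbles, weight=0.5):
    # logits: (B, C, H, W); scribbles: (B, H, W) with -1 where unlabeled
    labeled = scribbles >= 0
    ce = F.cross_entropy(logits.permute(0, 2, 3, 1)[labeled], scribbles[labeled])
    p = logits.softmax(dim=1)
    # crude Potts-style relaxation: penalize disagreement between horizontal neighbors
    smooth = ((p[:, :, :, 1:] - p[:, :, :, :-1]) ** 2).mean()
    return ce + weight * smooth

logits = torch.randn(2, 5, 16, 16, requires_grad=True)
scribbles = torch.full((2, 16, 16), -1, dtype=torch.long)
scribbles[:, 4, 4] = 2                       # a couple of scribbled pixels
regularized_loss(logits, scribbles).backward()
```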

High Dimensional Bayesian Optimization Using Dropout

Title High Dimensional Bayesian Optimization Using Dropout
Authors Cheng Li, Sunil Gupta, Santu Rana, Vu Nguyen, Svetha Venkatesh, Alistair Shilton
Abstract Scaling Bayesian optimization to high dimensions is a challenging task, as the global optimization of a high-dimensional acquisition function can be expensive and often infeasible. Existing methods depend either on a limited set of active variables or on an additive form of the objective function. We propose a new method for high-dimensional Bayesian optimization that uses a dropout strategy to optimize only a subset of variables at each iteration. We derive theoretical bounds for the regret and show how they can inform the derivation of our algorithm. We demonstrate the efficacy of our algorithms for optimization on two benchmark functions and two real-world applications: training cascade classifiers and optimizing alloy composition.
Tasks
Published 2018-02-15
URL http://arxiv.org/abs/1802.05400v1
PDF http://arxiv.org/pdf/1802.05400v1.pdf
PWC https://paperswithcode.com/paper/high-dimensional-bayesian-optimization-using
Repo
Framework
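
The dropout strategy itself is easy to sketch: each iteration optimizes over a random coordinate subset and fills the remaining coordinates in from the incumbent best point. In the sketch below, random search stands in for maximizing a GP acquisition, and the "fill in from the best point" rule is just one possible fill-in strategy; both are illustrative assumptions.

```python
import numpy as np

def dropout_bo_step(objective, best_x, d_active, rng, n_cand=256):
    d = best_x.size
    active = rng.choice(d, size=d_active, replace=False)   # dims "dropped in"
    cand = np.tile(best_x, (n_cand, 1))                    # fill in from incumbent
    cand[:, active] = rng.uniform(0, 1, size=(n_cand, d_active))
    vals = np.apply_along_axis(objective, 1, cand)         # acquisition stand-in
    return cand[vals.argmin()]

rng = np.random.default_rng(0)
f = lambda x: np.sum((x - 0.3) ** 2)     # toy 50-dimensional objective on [0, 1]^50
x = rng.uniform(0, 1, size=50)
for _ in range(100):
    cand = dropout_bo_step(f, x, d_active=5, rng=rng)
    if f(cand) < f(x):
        x = cand                         # keep the better point as incumbent
```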

IDD: A Dataset for Exploring Problems of Autonomous Navigation in Unconstrained Environments

Title IDD: A Dataset for Exploring Problems of Autonomous Navigation in Unconstrained Environments
Authors Girish Varma, Anbumani Subramanian, Anoop Namboodiri, Manmohan Chandraker, C V Jawahar
Abstract While several datasets for autonomous navigation have become available in recent years, they tend to focus on structured driving environments. This usually corresponds to well-delineated infrastructure such as lanes, a small number of well-defined categories for traffic participants, low variation in object or background appearance and strict adherence to traffic rules. We propose IDD, a novel dataset for road scene understanding in unstructured environments where the above assumptions are largely not satisfied. It consists of 10,004 images, finely annotated with 34 classes collected from 182 drive sequences on Indian roads. The label set is expanded in comparison to popular benchmarks such as Cityscapes, to account for new classes. It also reflects label distributions of road scenes significantly different from existing datasets, with most classes displaying greater within-class diversity. Consistent with real driving behaviours, it also identifies new classes such as drivable areas besides the road. We propose a new four-level label hierarchy, which allows varying degrees of complexity and opens up possibilities for new training methods. Our empirical study provides an in-depth analysis of the label characteristics. State-of-the-art methods for semantic segmentation achieve much lower accuracies on our dataset, demonstrating its distinction compared to Cityscapes. Finally, we propose that our dataset is an ideal opportunity for new problems such as domain adaptation, few-shot learning and behaviour prediction in road scenes.
Tasks Autonomous Navigation, Domain Adaptation, Few-Shot Learning, Scene Understanding, Semantic Segmentation
Published 2018-11-26
URL http://arxiv.org/abs/1811.10200v1
PDF http://arxiv.org/pdf/1811.10200v1.pdf
PWC https://paperswithcode.com/paper/idd-a-dataset-for-exploring-problems-of
Repo
Framework
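
A four-level label hierarchy of the kind the abstract describes can be pictured as a mapping from fine labels to coarser ancestors, so a model can be trained at any level. The label names below are hypothetical stand-ins, not IDD's actual class list.

```python
# Hypothetical fine labels mapped to (level-3, level-2, level-1) ancestors.
HIERARCHY = {
    "autorickshaw":      ("vehicle", "traffic participant", "dynamic"),
    "drivable fallback": ("drivable area", "road surface", "static"),
}

def coarsen(label: str, level: int) -> str:
    """Map a fine (level-4) label to its ancestor at a coarser level (1-3)."""
    return label if level == 4 else HIERARCHY[label][3 - level]
```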

CrystalGAN: Learning to Discover Crystallographic Structures with Generative Adversarial Networks

Title CrystalGAN: Learning to Discover Crystallographic Structures with Generative Adversarial Networks
Authors Asma Nouira, Nataliya Sokolovska, Jean-Claude Crivello
Abstract Our main motivation is to propose an efficient approach to generate novel multi-element stable chemical compounds that can be used in real-world applications. This task can be formulated as a combinatorial problem, and it takes human experts many hours to construct and evaluate new data. Unsupervised learning methods such as Generative Adversarial Networks (GANs) can be efficiently used to produce new data. Cross-domain Generative Adversarial Networks were reported to achieve exciting results in image processing applications. However, in the domain of materials science, there is a need to synthesize data with higher-order complexity compared to the observed samples, and state-of-the-art cross-domain GANs cannot be adapted directly. In this contribution, we propose a novel GAN called CrystalGAN which generates new chemically stable crystallographic structures with increased domain complexity. We introduce an original architecture, provide the corresponding loss functions, and show that CrystalGAN generates very reasonable data. We illustrate the efficiency of the proposed method on a real problem of novel hydride discovery that can be further used in the development of hydrogen storage materials.
Tasks
Published 2018-10-26
URL https://arxiv.org/abs/1810.11203v3
PDF https://arxiv.org/pdf/1810.11203v3.pdf
PWC https://paperswithcode.com/paper/crystalgan-learning-to-discover
Repo
Framework

DSNet for Real-Time Driving Scene Semantic Segmentation

Title DSNet for Real-Time Driving Scene Semantic Segmentation
Authors Wenfu Wang, Zhijie Pan
Abstract We focus on the very challenging task of semantic segmentation for autonomous driving systems, which must deliver decent semantic segmentation results for traffic-critical objects in real time. In this paper, we propose a very efficient yet powerful deep neural network for driving scene semantic segmentation, termed the Driving Segmentation Network (DSNet). DSNet achieves a state-of-the-art balance between accuracy and inference speed through efficient units and an architecture design inspired by ShuffleNet V2 and ENet. More importantly, DSNet highlights the classes most critical to driving decision making through our novel Driving Importance-weighted Loss. We evaluate DSNet on the Cityscapes dataset; it achieves 71.8% mean Intersection-over-Union (IoU) on the validation set and 69.3% on the test set. Class-wise IoU scores show that the Driving Importance-weighted Loss improves most driving-critical classes by a large margin. Compared with ENet, DSNet is 18.9% more accurate and 1.1+ times faster, which implies great potential for autonomous driving applications.
Tasks Autonomous Driving, Decision Making, Semantic Segmentation
Published 2018-12-06
URL https://arxiv.org/abs/1812.07049v2
PDF https://arxiv.org/pdf/1812.07049v2.pdf
PWC https://paperswithcode.com/paper/dsnet-for-real-time-driving-scene-semantic
Repo
Framework
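
An importance-weighted segmentation loss can be read as per-pixel cross-entropy with larger weights on driving-critical classes. The sketch below shows that shape; the class order and weight values are hypothetical, not the paper's.

```python
import torch
import torch.nn.functional as F

# Hypothetical class order: 0 = road, 1 = person, 2 = rider, 3 = sky
class_weights = torch.tensor([1.5, 2.0, 2.0, 0.5])   # critical classes upweighted

def driving_weighted_loss(logits, target):
    # logits: (B, C, H, W); target: (B, H, W) with class indices
    return F.cross_entropy(logits, target, weight=class_weights)

logits = torch.randn(2, 4, 8, 8)
target = torch.randint(0, 4, (2, 8, 8))
loss = driving_weighted_loss(logits, target)
```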

Semi-Supervised Domain Adaptation with Representation Learning for Semantic Segmentation across Time

Title Semi-Supervised Domain Adaptation with Representation Learning for Semantic Segmentation across Time
Authors Assia Benbihi, Matthieu Geist, Cédric Pradalier
Abstract Deep learning generates state-of-the-art semantic segmentation provided that a large number of images together with pixel-wise annotations are available. To alleviate the expensive data collection process, we propose a semi-supervised domain adaptation method for the specific case of images with similar semantic content but different pixel distributions. A network trained with supervision on a past dataset is finetuned on the new dataset to conserve its feature maps. The domain adaptation becomes a simple regression between feature maps and does not require annotations on the new dataset. This method reaches performances similar to classic transfer learning on the PASCAL VOC dataset with synthetic transformations.
Tasks Domain Adaptation, Representation Learning, Semantic Segmentation, Transfer Learning
Published 2018-05-10
URL https://arxiv.org/abs/1805.04141v2
PDF https://arxiv.org/pdf/1805.04141v2.pdf
PWC https://paperswithcode.com/paper/deep-representation-learning-for-domain
Repo
Framework
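
The feature-map regression needs no target-domain labels: the network is finetuned so its features on new-domain images stay close to the frozen pretrained features. The minimal sketch below assumes corresponding image pairs across the two domains; the toy backbone, layer choice, and pairing are placeholder assumptions, not the paper's exact setup.

```python
import copy
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 16, 3, padding=1))   # "trained" on past data
source = copy.deepcopy(backbone).eval()
for p in source.parameters():
    p.requires_grad_(False)                   # frozen teacher of feature maps

adapted = copy.deepcopy(backbone)             # finetuned copy
opt = torch.optim.Adam(adapted.parameters(), lr=1e-4)

old_imgs = torch.randn(4, 3, 64, 64)          # past-domain images
new_imgs = torch.randn(4, 3, 64, 64)          # same scenes, new pixel distribution
loss = nn.functional.mse_loss(adapted(new_imgs), source(old_imgs))
loss.backward()                               # pure feature regression, no labels
opt.step()
```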

Patch-Based Sparse Representation For Bacterial Detection

Title Patch-Based Sparse Representation For Bacterial Detection
Authors Ahmed Karam Eldaly, Yoann Altmann, Ahsan Akram, Antonios Perperidis, Kevin Dhaliwal, Stephen McLaughlin
Abstract In this paper, we propose an unsupervised approach for bacterial detection in optical endomicroscopy images. This approach splits each image into a set of overlapping patches and assumes that observed intensities are linear combinations of the actual intensity values associated with background image structures, corrupted by additive Gaussian noise and potentially by a sparse outlier term modelling anomalies (which are considered to be candidate bacteria). The actual intensity term representing background structures is modelled as a linear combination of a few atoms drawn from a dictionary which is learned from bacteria-free data and then fixed while analyzing new images. The bacteria detection task is formulated as a minimization problem and an alternating direction method of multipliers (ADMM) is then used to estimate the unknown parameters. Simulations conducted using two ex vivo lung datasets show good detection and correlation performance between bacteria counts identified by a trained clinician and those of the proposed method.
Tasks
Published 2018-10-29
URL http://arxiv.org/abs/1810.12043v2
PDF http://arxiv.org/pdf/1810.12043v2.pdf
PWC https://paperswithcode.com/paper/patch-based-sparse-representation-for
Repo
Framework
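
The patch model in the abstract decomposes each observed patch as y = D a + e + n: a dictionary code for background structure, a sparse outlier term for candidate bacteria, and Gaussian noise. The sketch below shows one illustrative proximal update, soft-thresholding the reconstruction residual, as a simplified stand-in for the paper's full ADMM iterations.

```python
import numpy as np

def soft_threshold(v, lam):
    # proximal operator of the l1 norm: shrinks small entries to zero
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 16))              # fixed dictionary, pre-learned on
                                           # bacteria-free data (random stand-in)
y = rng.normal(size=64)                    # one vectorized 8x8 patch

a = np.linalg.lstsq(D, y, rcond=None)[0]   # background code (least squares)
e = soft_threshold(y - D @ a, lam=1.0)     # sparse outliers = candidate bacteria
```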

3D Convolutional Encoder-Decoder Network for Low-Dose CT via Transfer Learning from a 2D Trained Network

Title 3D Convolutional Encoder-Decoder Network for Low-Dose CT via Transfer Learning from a 2D Trained Network
Authors Hongming Shan, Yi Zhang, Qingsong Yang, Uwe Kruger, Mannudeep K. Kalra, Ling Sun, Wenxiang Cong, Ge Wang
Abstract Low-dose computed tomography (CT) has attracted major attention in the medical imaging field, since CT-associated x-ray radiation carries health risks for patients. Reducing the CT radiation dose, however, compromises the signal-to-noise ratio, and may degrade image quality and diagnostic performance. Recently, deep-learning-based algorithms have achieved promising results in low-dose CT denoising, especially convolutional neural networks (CNNs) and generative adversarial networks (GANs). This article introduces a Contracting Path-based Convolutional Encoder-decoder (CPCE) network in 2D and 3D configurations within the GAN framework for low-dose CT denoising. A novel feature of our approach is that an initial 3D CPCE denoising model can be directly obtained by extending a trained 2D CNN and then fine-tuned to incorporate 3D spatial information from adjacent slices. Based on the transfer learning from 2D to 3D, the 3D network converges faster and achieves a better denoising performance than one trained from scratch. By comparing the CPCE with recently published methods on the simulated Mayo dataset and the real MGH dataset, we demonstrate that the 3D CPCE denoising model has a better performance, suppressing image noise and preserving subtle structures.
Tasks Computed Tomography (CT), Denoising, Transfer Learning
Published 2018-02-15
URL http://arxiv.org/abs/1802.05656v2
PDF http://arxiv.org/pdf/1802.05656v2.pdf
PWC https://paperswithcode.com/paper/3d-convolutional-encoder-decoder-network-for
Repo
Framework
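
The 2D-to-3D transfer can be pictured as inflating each trained 2D kernel along the depth axis before fine-tuning. The replicate-and-rescale recipe below is one common inflation scheme and an assumption here, not necessarily the paper's exact extension.

```python
import torch
import torch.nn as nn

conv2d = nn.Conv2d(1, 32, kernel_size=3, padding=1)       # a trained 2D layer
depth = 3
conv3d = nn.Conv3d(1, 32, kernel_size=(depth, 3, 3), padding=(1, 1, 1))

with torch.no_grad():
    w2d = conv2d.weight                                    # (32, 1, 3, 3)
    # replicate along depth and divide by depth so the initial 3D response
    # on a constant stack of slices matches the 2D response
    conv3d.weight.copy_(w2d.unsqueeze(2).repeat(1, 1, depth, 1, 1) / depth)
    conv3d.bias.copy_(conv2d.bias)
```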

Learning to Teach

Title Learning to Teach
Authors Yang Fan, Fei Tian, Tao Qin, Xiang-Yang Li, Tie-Yan Liu
Abstract Teaching plays a very important role in our society, by spreading human knowledge and educating our next generations. A good teacher will select appropriate teaching materials, impart suitable methodologies, and set up targeted examinations, according to the learning behaviors of the students. In the field of artificial intelligence, however, the role of teaching has not been fully explored, and most attention is paid to machine *learning*. In this paper, we argue that equal attention, if not more, should be paid to teaching, and furthermore, that an optimization framework (instead of heuristics) should be used to obtain good teaching strategies. We call this approach 'learning to teach'. In this approach, two intelligent agents interact with each other: a student model (which corresponds to the learner in traditional machine learning algorithms), and a teacher model (which determines the appropriate data, loss function, and hypothesis space to facilitate the training of the student model). The teacher model leverages the feedback from the student model to optimize its own teaching strategies by means of reinforcement learning, so as to achieve teacher-student co-evolution. To demonstrate the practical value of our proposed approach, we take the training of deep neural networks (DNN) as an example, and show that by using the learning-to-teach techniques, we are able to use much less training data and fewer iterations to achieve almost the same accuracy for different kinds of DNN models (e.g., multi-layer perceptrons, convolutional neural networks and recurrent neural networks) on various machine learning tasks (e.g., image classification and text understanding).
Tasks Image Classification
Published 2018-05-09
URL http://arxiv.org/abs/1805.03643v1
PDF http://arxiv.org/pdf/1805.03643v1.pdf
PWC https://paperswithcode.com/paper/learning-to-teach
Repo
Framework
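
The teacher-student interaction reduces to a loop: the teacher scores candidate training data, the student trains on the selected subset, and the change in validation accuracy becomes the teacher's reinforcement signal. The toy perceptron student and REINFORCE-style update below are schematic stand-ins for the paper's models, not its actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy task: the student is a perceptron; the teacher learns which examples to feed it.
X = rng.normal(size=(500, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
Xv, yv = X[:100], y[:100]                 # validation split drives the teacher's reward
Xt, yt = X[100:], y[100:]

w = np.zeros(5)                           # student (perceptron) weights
theta = np.zeros(5)                       # teacher policy weights

def val_acc(w):
    return (((Xv @ w) > 0) == yv).mean()

for step in range(200):
    batch = rng.choice(len(Xt), 32, replace=False)
    feats = Xt[batch]
    probs = 1 / (1 + np.exp(-(feats @ theta)))     # teacher's keep-probabilities
    keep = rng.random(32) < probs
    before = val_acc(w)
    for i in np.flatnonzero(keep):                 # student takes perceptron steps
        xi, yi = feats[i], yt[batch[i]]
        if float((xi @ w) > 0) != yi:
            w = w + (2 * yi - 1) * xi
    reward = val_acc(w) - before                   # reward: validation improvement
    theta += 0.5 * reward * (feats.T @ (keep - probs))   # REINFORCE-style update
```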

Deep Learning vs. Human Graders for Classifying Severity Levels of Diabetic Retinopathy in a Real-World Nationwide Screening Program

Title Deep Learning vs. Human Graders for Classifying Severity Levels of Diabetic Retinopathy in a Real-World Nationwide Screening Program
Authors Paisan Raumviboonsuk, Jonathan Krause, Peranut Chotcomwongse, Rory Sayres, Rajiv Raman, Kasumi Widner, Bilson J L Campana, Sonia Phene, Kornwipa Hemarat, Mongkol Tadarati, Sukhum Silpa-Acha, Jirawut Limwattanayingyong, Chetan Rao, Oscar Kuruvilla, Jesse Jung, Jeffrey Tan, Surapong Orprayoon, Chawawat Kangwanwongpaisan, Ramase Sukulmalpaiboon, Chainarong Luengchaichawang, Jitumporn Fuangkaew, Pipat Kongsap, Lamyong Chualinpha, Sarawuth Saree, Srirat Kawinpanitan, Korntip Mitvongsa, Siriporn Lawanasakol, Chaiyasit Thepchatri, Lalita Wongpichedchai, Greg S Corrado, Lily Peng, Dale R Webster
Abstract Deep learning algorithms have been used to detect diabetic retinopathy (DR) with specialist-level accuracy. This study aims to validate one such algorithm on a large-scale clinical population, and compare the algorithm performance with that of human graders. 25,326 gradable retinal images of patients with diabetes from the community-based, nation-wide screening program of DR in Thailand were analyzed for DR severity and referable diabetic macular edema (DME). Grades adjudicated by a panel of international retinal specialists served as the reference standard. Across different severity levels of DR for determining referable disease, deep learning significantly reduced the false negative rate (by 23%) at the cost of slightly higher false positive rates (2%). Deep learning algorithms may serve as a valuable tool for DR screening.
Tasks
Published 2018-10-18
URL http://arxiv.org/abs/1810.08290v1
PDF http://arxiv.org/pdf/1810.08290v1.pdf
PWC https://paperswithcode.com/paper/deep-learning-vs-human-graders-for
Repo
Framework

A Sheaf Model of Contradictions and Disagreements. Preliminary Report and Discussion

Title A Sheaf Model of Contradictions and Disagreements. Preliminary Report and Discussion
Authors Wlodek Zadrozny, Luciana Garbayo
Abstract We introduce a new formal model – based on the mathematical construct of sheaves – for representing contradictory information in textual sources. This model has the advantage of letting us (a) identify the causes of the inconsistency; (b) measure how strong it is; and (c) do something about it, e.g., suggest ways to reconcile inconsistent advice. This model naturally represents the distinction between contradictions and disagreements. It is based on the idea of representing natural language sentences as formulas with parameters sitting on lattices, creating partial orders based on predicates shared by theories, and building sheaves on these partial orders with products of lattices as stalks. Degrees of disagreement are measured by the existence of global and local sections. Limitations of the sheaf approach and connections to recent work in natural language processing, as well as the topics of contextuality in physics, data fusion, topological data analysis and epistemology are also discussed.
Tasks Topological Data Analysis
Published 2018-01-27
URL http://arxiv.org/abs/1801.09036v1
PDF http://arxiv.org/pdf/1801.09036v1.pdf
PWC https://paperswithcode.com/paper/a-sheaf-model-of-contradictions-and
Repo
Framework
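
The construction in the abstract has a standard backbone: a presheaf on a poset of contexts. A minimal formal sketch, with sets standing in for the paper's products-of-lattices stalks:

```latex
% Presheaf on a poset (P, <=): a contravariant assignment with restriction maps.
\[
  F : P^{\mathrm{op}} \to \mathbf{Set}, \qquad
  q \le p \;\Longrightarrow\; \rho_{pq} : F(p) \to F(q),
\]
\[
  \rho_{pp} = \mathrm{id}_{F(p)}, \qquad
  \rho_{qr} \circ \rho_{pq} = \rho_{pr} \quad \text{for } r \le q \le p .
\]
% A global section is a compatible family $(s_p)_{p \in P}$ with
% $\rho_{pq}(s_p) = s_q$ whenever $q \le p$; a disagreement shows up as
% local sections that admit no global extension.
```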

Efficient Algorithms and Lower Bounds for Robust Linear Regression

Title Efficient Algorithms and Lower Bounds for Robust Linear Regression
Authors Ilias Diakonikolas, Weihao Kong, Alistair Stewart
Abstract We study the problem of high-dimensional linear regression in a robust model where an $\epsilon$-fraction of the samples can be adversarially corrupted. We focus on the fundamental setting where the covariates of the uncorrupted samples are drawn from a Gaussian distribution $\mathcal{N}(0, \Sigma)$ on $\mathbb{R}^d$. We give nearly tight upper bounds and computational lower bounds for this problem. Specifically, our main contributions are as follows: For the case that the covariance matrix is known to be the identity, we give a sample near-optimal and computationally efficient algorithm that outputs a candidate hypothesis vector $\widehat{\beta}$ which approximates the unknown regression vector $\beta$ within $\ell_2$-norm $O(\epsilon \log(1/\epsilon) \sigma)$, where $\sigma$ is the standard deviation of the random observation noise. An error of $\Omega (\epsilon \sigma)$ is information-theoretically necessary, even with infinite sample size. Prior work gave an algorithm for this problem with sample complexity $\tilde{\Omega}(d^2/\epsilon^2)$ whose error guarantee scales with the $\ell_2$-norm of $\beta$. For the case of unknown covariance, we show that we can efficiently achieve the same error guarantee as in the known covariance case using an additional $\tilde{O}(d^2/\epsilon^2)$ unlabeled examples. On the other hand, an error of $O(\epsilon \sigma)$ can be information-theoretically attained with $O(d/\epsilon^2)$ samples. We prove a Statistical Query (SQ) lower bound providing evidence that this quadratic tradeoff in the sample size is inherent. More specifically, we show that any polynomial time SQ learning algorithm for robust linear regression (in Huber’s contamination model) with estimation complexity $O(d^{2-c})$, where $c>0$ is an arbitrarily small constant, must incur an error of $\Omega(\sqrt{\epsilon} \sigma)$.
Tasks
Published 2018-05-31
URL http://arxiv.org/abs/1806.00040v1
PDF http://arxiv.org/pdf/1806.00040v1.pdf
PWC https://paperswithcode.com/paper/efficient-algorithms-and-lower-bounds-for
Repo
Framework
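
The setting itself is concrete enough to simulate: Gaussian covariates, linear responses with noise of scale sigma, and an eps-fraction of adversarially corrupted samples. The sketch below generates such data and shows ordinary least squares degrading; the crude corruption rule is purely illustrative, and the paper's actual algorithms are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, eps, sigma = 20, 2000, 0.05, 1.0
beta = rng.normal(size=d)                          # unknown regression vector

X = rng.normal(size=(n, d))                        # N(0, I) covariates
y = X @ beta + sigma * rng.normal(size=n)          # clean responses
bad = rng.choice(n, size=int(eps * n), replace=False)
y[bad] += 50.0                                     # eps-fraction corrupted

ols = np.linalg.lstsq(X, y, rcond=None)[0]         # breaks down under outliers
err = np.linalg.norm(ols - beta)                   # vs. the O(eps log(1/eps) sigma)
                                                   # guarantee from the abstract
```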

Theory of Generative Deep Learning : Probe Landscape of Empirical Error via Norm Based Capacity Control

Title Theory of Generative Deep Learning : Probe Landscape of Empirical Error via Norm Based Capacity Control
Authors Wendi Xu, Ming Zhang
Abstract Despite its remarkable empirical success as a highly competitive branch of artificial intelligence, deep learning is often blamed for its widely known low interpretability and lack of a firm and rigorous mathematical foundation. However, most theoretical endeavor has been devoted to the discriminative case of deep learning, whose complementary part is generative deep learning. To the best of our knowledge, we are the first to highlight the landscape of the empirical error in the generative case, completing the full picture through a careful design of image super-resolution under norm-based capacity control. Our theoretical advance in interpreting the training dynamics is achieved from both mathematical and biological sides.
Tasks Image Super-Resolution, Super-Resolution
Published 2018-10-03
URL http://arxiv.org/abs/1810.01622v1
PDF http://arxiv.org/pdf/1810.01622v1.pdf
PWC https://paperswithcode.com/paper/theory-of-generative-deep-learning-probe
Repo
Framework