April 3, 2020

3217 words 16 mins read

Paper Group ANR 86

Validation and Optimization of Multi-Organ Segmentation on Clinical Imaging Archives. Adversarial Incremental Learning. Offline Grid-Based Coverage path planning for guards in games. Max-Affine Spline Insights into Deep Generative Networks. Can speed up the convergence rate of stochastic gradient methods to $\mathcal{O}(1/k^2)$ by a gradient averaging strategy? …

Validation and Optimization of Multi-Organ Segmentation on Clinical Imaging Archives

Title Validation and Optimization of Multi-Organ Segmentation on Clinical Imaging Archives
Authors Yuchen Xu, Olivia Tang, Yucheng Tang, Ho Hin Lee, Yunqiang Chen, Dashan Gao, Shizhong Han, Riqiang Gao, Michael R. Savona, Richard G. Abramson, Yuankai Huo, Bennett A. Landman
Abstract Segmentation of abdominal computed tomography (CT) provides spatial context, morphological properties, and a framework for tissue-specific radiomics to guide quantitative radiological assessment. A 2015 MICCAI challenge spurred substantial innovation in multi-organ abdominal CT segmentation with both traditional and deep learning methods. Recent innovations in deep methods have driven performance toward levels for which clinical translation is appealing. However, continued cross-validation on open datasets presents the risk of indirect knowledge contamination and could result in circular reasoning. Moreover, ‘real world’ segmentations can be challenging due to the wide variability of abdominal physiology within patients. Herein, we perform two data retrievals to capture clinically acquired deidentified abdominal CT cohorts with respect to a recently published variation on 3D U-Net (baseline algorithm). First, we retrieved 2004 deidentified studies on 476 patients with diagnosis codes involving spleen abnormalities (cohort A). Second, we retrieved 4313 deidentified studies on 1754 patients without diagnosis codes involving spleen abnormalities (cohort B). We performed a prospective evaluation of the existing algorithm on both cohorts, yielding failure rates of 13% and 8%, respectively. We then identified 51 subjects in cohort A with segmentation failures and manually corrected the liver and gallbladder labels. Re-training the model with these manual labels added reduced the failure rates to 9% and 6% for cohorts A and B, respectively. In summary, the performance of the baseline on the prospective cohorts was similar to that on previously published datasets. Moreover, adding data from the first cohort substantively improved performance when evaluated on the second, withheld validation cohort.
Tasks Computed Tomography (CT)
Published 2020-02-10
URL https://arxiv.org/abs/2002.04102v1
PDF https://arxiv.org/pdf/2002.04102v1.pdf
PWC https://paperswithcode.com/paper/validation-and-optimization-of-multi-organ
Repo
Framework
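
The study above reports per-cohort failure rates for a baseline segmentation model before and after re-training with corrected labels. As a minimal sketch of how such failure rates might be tallied (assuming a Dice-overlap criterion with an illustrative 0.7 threshold, which is my assumption rather than the paper's stated rule):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

def failure_rate(cases, threshold=0.7):
    """Fraction of cases whose Dice falls below `threshold`.

    `cases` is an iterable of (predicted_mask, reference_mask) pairs;
    the 0.7 cut-off is an illustrative assumption, not the paper's criterion.
    """
    scores = [dice_score(p, t) for p, t in cases]
    return float(np.mean([s < threshold for s in scores]))

# Toy example: two perfect segmentations and one empty prediction -> 1/3 failure rate.
a = np.ones((4, 4), dtype=bool)
cases = [(a, a), (a, a), (np.zeros_like(a), a)]
print(failure_rate(cases))  # 0.333...
```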

Adversarial Incremental Learning

Title Adversarial Incremental Learning
Authors Ankur Singh
Abstract Although deep learning performs really well in a wide variety of tasks, it still suffers from catastrophic forgetting – the tendency of neural networks to forget previously learned information upon learning new tasks when previous data is not available. Earlier methods of incremental learning tackle this problem by using a part of the old dataset, by generating exemplars, or by using memory networks. Although these methods have shown good results, using exemplars or generating them increases memory and computation requirements. To solve these problems, we propose an adversarial-discriminator-based method that does not make use of old data at all while training on new tasks. We particularly tackle the class-incremental learning problem in image classification, where data is provided in a class-based sequential manner. For this problem, the network is trained using an adversarial loss along with the traditional cross-entropy loss. The cross-entropy loss helps the network progressively learn new classes while the adversarial loss helps in preserving information about the existing classes. Using this approach, we are able to outperform other state-of-the-art methods on the CIFAR-100, SVHN, and MNIST datasets.
Tasks Image Classification
Published 2020-01-30
URL https://arxiv.org/abs/2001.11152v2
PDF https://arxiv.org/pdf/2001.11152v2.pdf
PWC https://paperswithcode.com/paper/adversarial-incremental-learning
Repo
Framework
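
The approach above combines a cross-entropy loss on new classes with an adversarial loss that preserves knowledge of existing classes. A hedged PyTorch sketch of that combined objective follows; the layer sizes, discriminator, and weighting `lam` are assumptions for illustration, not the paper's architecture:

```python
import torch
import torch.nn as nn

# Hypothetical components: a shared feature extractor, a classifier over the
# classes seen so far, and a discriminator over the feature space.
feature_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
classifier = nn.Linear(256, 10)
discriminator = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))

ce_loss = nn.CrossEntropyLoss()
adv_loss = nn.BCEWithLogitsLoss()
lam = 0.1  # assumed trade-off weight between the two terms

def training_step(images, labels):
    """Combined objective for one batch of the current task (illustrative only)."""
    feats = feature_net(images)
    # Cross-entropy drives progressive learning of the new classes.
    loss_cls = ce_loss(classifier(feats), labels)
    # Adversarial term: push current features to look "old" to the discriminator,
    # which would be trained separately to retain knowledge of existing classes.
    logits = discriminator(feats)
    loss_adv = adv_loss(logits, torch.ones_like(logits))
    return loss_cls + lam * loss_adv

images = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 10, (8,))
print(training_step(images, labels).item())
```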

Offline Grid-Based Coverage path planning for guards in games

Title Offline Grid-Based Coverage path planning for guards in games
Authors Wael Al Enezi, Clark Verbrugge
Abstract Algorithmic approaches to exhaustive coverage have application in video games, enabling automatic game level exploration. Current designs use simple heuristics that frequently result in poor performance or exhibit unnatural behaviour. In this paper, we introduce a novel algorithm for covering a 2D polygonal area (possibly with holes). We assume prior knowledge of the map layout and use a grid-based world representation. Experimental analysis over several scenarios, ranging from simple layouts to more complex maps used in actual games, shows good performance. This work serves as an initial step towards building a more efficient coverage path planning algorithm for non-player characters.
Tasks
Published 2020-01-15
URL https://arxiv.org/abs/2001.05462v1
PDF https://arxiv.org/pdf/2001.05462v1.pdf
PWC https://paperswithcode.com/paper/offline-grid-based-coverage-path-planning-for
Repo
Framework
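
For a flavour of grid-based coverage, here is a minimal boustrophedon (lawnmower) sweep over a 2D occupancy grid. It is a generic baseline sketch, not the algorithm proposed in the paper, and it ignores connectivity between consecutive cells (a real planner would insert repositioning moves around obstacles):

```python
def boustrophedon_coverage(grid):
    """Row-by-row sweep over the free cells of a 2D occupancy grid.

    `grid[r][c] == 0` means free, 1 means blocked. Returns a visit order for
    the free cells; a simple illustration of grid-based coverage only.
    """
    path = []
    for r, row in enumerate(grid):
        # Alternate sweep direction every row (lawnmower pattern).
        cols = range(len(row)) if r % 2 == 0 else reversed(range(len(row)))
        path.extend((r, c) for c in cols if row[c] == 0)
    return path

level = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(boustrophedon_coverage(level))
```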

Max-Affine Spline Insights into Deep Generative Networks

Title Max-Affine Spline Insights into Deep Generative Networks
Authors Randall Balestriero, Sebastien Paris, Richard Baraniuk
Abstract We connect a large class of Generative Deep Networks (GDNs) with spline operators in order to derive their properties, limitations, and new opportunities. By characterizing the latent space partition, dimension and angularity of the generated manifold, we relate the manifold dimension and approximation error to the sample size. The manifold-per-region affine subspace defines a local coordinate basis; we provide necessary and sufficient conditions relating those basis vectors with disentanglement. We also derive the output probability density mapped onto the generated manifold in terms of the latent space density, which enables the computation of key statistics such as its Shannon entropy. This finding also enables the computation of the GDN likelihood, which provides a new mechanism for model comparison as well as providing a quality measure for (generated) samples under the learned distribution. We demonstrate how low-entropy and/or multimodal distributions are not naturally modeled by GDNs and are a cause of training instabilities.
Tasks
Published 2020-02-26
URL https://arxiv.org/abs/2002.11912v1
PDF https://arxiv.org/pdf/2002.11912v1.pdf
PWC https://paperswithcode.com/paper/max-affine-spline-insights-into-deep
Repo
Framework
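
As a hedged reminder of the max-affine spline view the abstract builds on (notation mine, not necessarily the paper's exact statements): within each region of the latent-space partition the generator acts as an affine map, and for an injective generator the standard injective change-of-variables identity yields the density on the generated manifold.

```latex
% Per-region affine form of a continuous piecewise-affine generator G:
\[
  G(z) = A_\omega z + b_\omega, \qquad z \in \omega ,
\]
% where \omega ranges over the regions of the latent-space spline partition.
% For an injective G, the density pushed onto the generated manifold follows
% the injective change-of-variables identity:
\[
  p_G\!\bigl(G(z)\bigr) = \frac{p_z(z)}{\sqrt{\det\!\bigl(A_\omega^{\top} A_\omega\bigr)}},
  \qquad z \in \omega .
\]
```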

Can speed up the convergence rate of stochastic gradient methods to $\mathcal{O}(1/k^2)$ by a gradient averaging strategy?

Title Can speed up the convergence rate of stochastic gradient methods to $\mathcal{O}(1/k^2)$ by a gradient averaging strategy?
Authors Xin Xu, Xiaopeng Luo
Abstract In this paper we consider the question of whether it is possible to apply a gradient averaging strategy to improve on the sublinear convergence rates without any increase in storage. Our analysis reveals that a positive answer requires an appropriate averaging strategy and iterations that satisfy the variance dominant condition. As an interesting fact, we show that if the iterative variance we defined is always dominant, even by a small margin, in the stochastic gradient iterations, the proposed gradient averaging strategy can increase the convergence rate from $\mathcal{O}(1/k)$ to $\mathcal{O}(1/k^2)$ in probability for strongly convex objectives with Lipschitz gradients. This conclusion suggests how we should control the stochastic gradient iterations to improve the rate of convergence.
Tasks
Published 2020-02-25
URL https://arxiv.org/abs/2002.10769v1
PDF https://arxiv.org/pdf/2002.10769v1.pdf
PWC https://paperswithcode.com/paper/can-speed-up-the-convergence-rate-of
Repo
Framework
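
To make the mechanism concrete, here is a toy NumPy comparison of plain SGD against a variant that steps along a running average of past stochastic gradients, on a strongly convex quadratic. The uniform running average used here is a generic choice for illustration; it is not claimed to be the paper's averaging strategy or to realize the $\mathcal{O}(1/k^2)$ rate:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.diag([1.0, 5.0])            # strongly convex quadratic f(x) = 0.5 x^T A x
x0 = np.array([5.0, 5.0])
x_sgd, x_avg = x0.copy(), x0.copy()
g_bar = np.zeros(2)

for k in range(1, 201):
    noise = rng.normal(scale=0.1, size=2)
    lr = 1.0 / (5.0 * k)                       # standard O(1/k) step size
    # Plain SGD step on the noisy gradient.
    x_sgd = x_sgd - lr * (A @ x_sgd + noise)
    # Averaged-gradient step: keep a running mean of past stochastic gradients
    # and move along that mean instead of the latest noisy gradient.
    g_bar = ((k - 1) * g_bar + (A @ x_avg + noise)) / k
    x_avg = x_avg - lr * g_bar

print("plain SGD objective   :", float(0.5 * x_sgd @ A @ x_sgd))
print("averaged-grad objective:", float(0.5 * x_avg @ A @ x_avg))
```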

Deep Randomized Neural Networks

Title Deep Randomized Neural Networks
Authors Claudio Gallicchio, Simone Scardapane
Abstract Randomized Neural Networks explore the behavior of neural systems where the majority of connections are fixed, either in a stochastic or a deterministic fashion. Typical examples of such systems consist of multi-layered neural network architectures where the connections to the hidden layer(s) are left untrained after initialization. Limiting the training algorithms to operate on a reduced set of weights inherently characterizes the class of Randomized Neural Networks with a number of intriguing features. Among them, the extreme efficiency of the resulting learning processes is undoubtedly a striking advantage with respect to fully trained architectures. Besides, despite the involved simplifications, randomized neural systems possess remarkable properties both in practice, achieving state-of-the-art results in multiple domains, and theoretically, allowing one to analyze intrinsic properties of neural architectures (e.g., before training of the hidden layers’ connections). In recent years, the study of Randomized Neural Networks has been extended towards deep architectures, opening new research directions to the design of effective yet extremely efficient deep learning models in vectorial as well as in more complex data domains. This chapter surveys all the major aspects regarding the design and analysis of Randomized Neural Networks, and some of the key results with respect to their approximation capabilities. In particular, we first introduce the fundamentals of randomized neural models in the context of feed-forward networks (i.e., Random Vector Functional Link and equivalent models) and convolutional filters, before moving to the case of recurrent systems (i.e., Reservoir Computing networks). For both, we focus specifically on recent results in the domain of deep randomized systems, and (for recurrent models) their application to structured domains.
Tasks
Published 2020-02-27
URL https://arxiv.org/abs/2002.12287v1
PDF https://arxiv.org/pdf/2002.12287v1.pdf
PWC https://paperswithcode.com/paper/deep-randomized-neural-networks
Repo
Framework
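
The defining trait described above — fixed random hidden connections with only the readout trained — can be shown in a few lines. A Random-Vector-Functional-Link-style sketch with a closed-form ridge readout follows; the sizes and regularization are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) with noise.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=200)

# Random, untrained hidden layer (fixed after initialization).
W = rng.normal(size=(1, 100))
b = rng.normal(size=100)
H = np.tanh(X @ W + b)

# RVFL-style direct links: concatenate the raw input with the random features.
Z = np.hstack([H, X])

# Only the linear readout is trained, here in closed form (ridge regression).
lam = 1e-3
beta = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)

print("train MSE:", float(np.mean((Z @ beta - y) ** 2)))
```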

Mining Commonsense Facts from the Physical World

Title Mining Commonsense Facts from the Physical World
Authors Yanyan Zou, Wei Lu, Xu Sun
Abstract Textual descriptions of the physical world implicitly mention commonsense facts, while commonsense knowledge bases explicitly represent such facts as triples. Compared to the dramatic growth of text data, the coverage of existing knowledge bases is far from complete. Most of the prior studies on populating knowledge bases mainly focus on Freebase. Automatically completing commonsense knowledge bases to improve their coverage remains under-explored. In this paper, we propose a new task of mining commonsense facts from raw text that describes the physical world. We build an effective new model that fuses information from both the text sequence and existing knowledge base resources. We then create two large annotated datasets, each with approximately 200k instances, for commonsense knowledge base completion. Empirical results demonstrate that our model significantly outperforms baselines.
Tasks Knowledge Base Completion
Published 2020-02-08
URL https://arxiv.org/abs/2002.03149v2
PDF https://arxiv.org/pdf/2002.03149v2.pdf
PWC https://paperswithcode.com/paper/mining-commonsense-facts-from-the-physical
Repo
Framework
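
The model described above fuses a text encoder with knowledge base resources to score candidate triples. A heavily simplified, hypothetical PyTorch sketch of that fusion idea follows; the encoder, embedding sizes, and scoring head are all placeholders of my own, not the paper's architecture:

```python
import torch
import torch.nn as nn

class TripleScorer(nn.Module):
    """Scores (head, relation, tail) triples using sentence context plus KB embeddings."""

    def __init__(self, vocab=1000, n_entities=500, n_relations=20, dim=64):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        self.ent_emb = nn.Embedding(n_entities, dim)
        self.rel_emb = nn.Embedding(n_relations, dim)
        self.scorer = nn.Linear(4 * dim, 1)

    def forward(self, tokens, head, rel, tail):
        _, (h_n, _) = self.encoder(self.word_emb(tokens))   # sentence summary
        ctx = h_n[-1]                                        # (batch, dim)
        fused = torch.cat([ctx, self.ent_emb(head),
                           self.rel_emb(rel), self.ent_emb(tail)], dim=-1)
        return self.scorer(fused).squeeze(-1)                # plausibility logit

model = TripleScorer()
tok = torch.randint(0, 1000, (2, 12))                        # two toy sentences
print(model(tok, torch.tensor([3, 7]), torch.tensor([1, 4]), torch.tensor([9, 2])).shape)
```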

Two-Level Transformer and Auxiliary Coherence Modeling for Improved Text Segmentation

Title Two-Level Transformer and Auxiliary Coherence Modeling for Improved Text Segmentation
Authors Goran Glavaš, Swapna Somasundaran
Abstract Breaking down the structure of long texts into semantically coherent segments makes the texts more readable and supports downstream applications like summarization and retrieval. Starting from an apparent link between text coherence and segmentation, we introduce a novel supervised model for text segmentation with simple but explicit coherence modeling. Our model – a neural architecture consisting of two hierarchically connected Transformer networks – is a multi-task learning model that couples the sentence-level segmentation objective with the coherence objective that differentiates correct sequences of sentences from corrupt ones. The proposed model, dubbed Coherence-Aware Text Segmentation (CATS), yields state-of-the-art segmentation performance on a collection of benchmark datasets. Furthermore, by coupling CATS with cross-lingual word embeddings, we demonstrate its effectiveness in zero-shot language transfer: it can successfully segment texts in languages unseen in training.
Tasks Multi-Task Learning, Word Embeddings
Published 2020-01-03
URL https://arxiv.org/abs/2001.00891v1
PDF https://arxiv.org/pdf/2001.00891v1.pdf
PWC https://paperswithcode.com/paper/two-level-transformer-and-auxiliary-coherence
Repo
Framework
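
CATS couples a sentence-level segmentation objective with an auxiliary coherence objective. The sketch below shows only that multi-task loss wiring, with simple placeholder encoders standing in for the two hierarchically connected Transformers; the loss weight `alpha` and all sizes are assumptions:

```python
import torch
import torch.nn as nn

dim = 32
sentence_encoder = nn.Linear(300, dim)                  # stand-in for the lower-level encoder
context_encoder = nn.GRU(dim, dim, batch_first=True)    # stand-in for the upper-level encoder
boundary_head = nn.Linear(dim, 2)                       # is this sentence a segment boundary?
coherence_head = nn.Linear(dim, 1)                      # is this sentence sequence in its true order?

def multitask_loss(sent_vecs, boundary_labels, is_coherent, alpha=0.5):
    """Segmentation cross-entropy plus coherence BCE; `alpha` is an assumed weight."""
    h, _ = context_encoder(sentence_encoder(sent_vecs))  # (batch, n_sent, dim)
    seg_loss = nn.functional.cross_entropy(
        boundary_head(h).flatten(0, 1), boundary_labels.flatten())
    coh_logit = coherence_head(h.mean(dim=1)).squeeze(-1)  # pooled sequence score
    coh_loss = nn.functional.binary_cross_entropy_with_logits(coh_logit, is_coherent)
    return seg_loss + alpha * coh_loss

vecs = torch.randn(4, 10, 300)                   # 4 snippets of 10 sentence embeddings each
labels = torch.randint(0, 2, (4, 10))
coherent = torch.tensor([1.0, 1.0, 0.0, 0.0])    # last two stand for corrupted orderings
print(multitask_loss(vecs, labels, coherent).item())
```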

Opposite Structure Learning for Semi-supervised Domain Adaptation

Title Opposite Structure Learning for Semi-supervised Domain Adaptation
Authors Can Qin, Lichen Wang, Qianqian Ma, Yu Yin, Huan Wang, Yun Fu
Abstract Current adversarial adaptation methods attempt to align the cross-domain features, whereas two challenges remain unsolved: 1) conditional distribution mismatch between different domains and 2) the bias of the decision boundary towards the source domain. To solve these challenges, we propose a novel framework for semi-supervised domain adaptation by unifying the learning of opposite structures (UODA). UODA consists of a generator and two classifiers (i.e., a source-based and a target-based classifier) that are trained with opposite forms of losses for a unified objective. The target-based classifier attempts to cluster the target features to improve intra-class density and enlarge inter-class divergence. Meanwhile, the source-based classifier is designed to scatter the source features to enhance the smoothness of the decision boundary. Through the alternation of source-feature expansion and target-feature clustering procedures, the target features are well-enclosed within the dilated boundary of the corresponding source features. This strategy effectively aligns the cross-domain features. To avoid model collapse during training, we progressively update the measurement of distance and the feature representation on both domains via an adversarial training paradigm. Extensive experiments on the DomainNet and Office-Home benchmarks demonstrate the effectiveness of our approach over state-of-the-art methods.
Tasks Domain Adaptation
Published 2020-02-06
URL https://arxiv.org/abs/2002.02545v1
PDF https://arxiv.org/pdf/2002.02545v1.pdf
PWC https://paperswithcode.com/paper/opposite-structure-learning-for-semi
Repo
Framework
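
One way to read the "opposite structures" idea is as opposing entropy terms on the two classifiers: the target-based classifier clusters target features (low prediction entropy) while the source-based classifier scatters source features (high entropy). The sketch below illustrates only that reading with assumed weights; the actual UODA losses and adversarial update schedule are more involved:

```python
import torch
import torch.nn as nn

def entropy(logits):
    """Mean Shannon entropy of the softmax predictions."""
    p = torch.softmax(logits, dim=1)
    return -(p * torch.log(p + 1e-8)).sum(dim=1).mean()

generator = nn.Sequential(nn.Linear(100, 64), nn.ReLU())
clf_source = nn.Linear(64, 10)
clf_target = nn.Linear(64, 10)
ce = nn.CrossEntropyLoss()

def opposite_structure_loss(xs, ys, xt_unlabeled):
    """Supervised CE on source plus opposing entropy terms (illustrative weights)."""
    fs, ft = generator(xs), generator(xt_unlabeled)
    sup = ce(clf_source(fs), ys)
    cluster_target = entropy(clf_target(ft))     # minimized: tighter target clusters
    scatter_source = -entropy(clf_source(fs))    # minimizing the negative spreads source
    return sup + 0.1 * cluster_target + 0.1 * scatter_source

xs, ys = torch.randn(8, 100), torch.randint(0, 10, (8,))
xt = torch.randn(8, 100)
print(opposite_structure_loss(xs, ys, xt).item())
```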

GTNet: Generative Transfer Network for Zero-Shot Object Detection

Title GTNet: Generative Transfer Network for Zero-Shot Object Detection
Authors Shizhen Zhao, Changxin Gao, Yuanjie Shao, Lerenhan Li, Changqian Yu, Zhong Ji, Nong Sang
Abstract We propose a Generative Transfer Network (GTNet) for zero shot object detection (ZSD). GTNet consists of an Object Detection Module and a Knowledge Transfer Module. The Object Detection Module can learn large-scale seen domain knowledge. The Knowledge Transfer Module leverages a feature synthesizer to generate unseen class features, which are applied to train a new classification layer for the Object Detection Module. In order to synthesize features for each unseen class with both the intra-class variance and the IoU variance, we design an IoU-Aware Generative Adversarial Network (IoUGAN) as the feature synthesizer, which can be easily integrated into GTNet. Specifically, IoUGAN consists of three unit models: Class Feature Generating Unit (CFU), Foreground Feature Generating Unit (FFU), and Background Feature Generating Unit (BFU). CFU generates unseen features with the intra-class variance conditioned on the class semantic embeddings. FFU and BFU add the IoU variance to the results of CFU, yielding class-specific foreground and background features, respectively. We evaluate our method on three public datasets and the results demonstrate that our method performs favorably against the state-of-the-art ZSD approaches.
Tasks Object Detection, Transfer Learning, Zero-Shot Object Detection
Published 2020-01-19
URL https://arxiv.org/abs/2001.06812v2
PDF https://arxiv.org/pdf/2001.06812v2.pdf
PWC https://paperswithcode.com/paper/gtnet-generative-transfer-network-for-zero
Repo
Framework
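
At the core of IoUGAN's Class Feature Generating Unit is a conditional generator that maps a class semantic embedding plus noise to a synthetic visual feature. A hedged sketch of such a unit follows; the dimensions and layers are illustrative guesses, and the FFU/BFU stages and GAN training loop are omitted entirely:

```python
import torch
import torch.nn as nn

class ClassFeatureGenerator(nn.Module):
    """Conditional generator: class semantic embedding + noise -> synthetic feature."""

    def __init__(self, sem_dim=300, noise_dim=64, feat_dim=1024):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(sem_dim + noise_dim, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, feat_dim), nn.ReLU())

    def forward(self, sem, n_samples=4):
        # Repeat each class embedding and pair each copy with fresh noise,
        # giving intra-class variance in the synthesized features.
        sem = sem.repeat_interleave(n_samples, dim=0)
        z = torch.randn(sem.size(0), self.noise_dim)
        return self.net(torch.cat([sem, z], dim=1))

gen = ClassFeatureGenerator()
unseen_class_embeddings = torch.randn(3, 300)   # e.g., word vectors for 3 unseen classes
fake_features = gen(unseen_class_embeddings)
print(fake_features.shape)                      # torch.Size([12, 1024])
```

As the abstract describes, features synthesized this way would then be used to train a new classification layer for the detection module.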

Covid-19: Automatic detection from X-Ray images utilizing Transfer Learning with Convolutional Neural Networks

Title Covid-19: Automatic detection from X-Ray images utilizing Transfer Learning with Convolutional Neural Networks
Authors Ioannis D. Apostolopoulos, Tzani Bessiana
Abstract In this study, a dataset of X-Ray images from patients with common pneumonia, Covid-19, and normal incidents was utilized for the automatic detection of the Coronavirus. The aim of the study is to evaluate the performance of state-of-the-art Convolutional Neural Network architectures proposed over recent years for medical image classification. Specifically, the procedure called transfer learning was adopted. With transfer learning, the detection of various abnormalities in small medical image datasets is an achievable target, often yielding remarkable results. The dataset utilized in this experiment is a collection of 1427 X-Ray images: 224 images with confirmed Covid-19, 700 images with confirmed common pneumonia, and 504 images of normal conditions. The data was collected from the available X-Ray images on public medical repositories. With transfer learning, an overall accuracy of 97.82% in the detection of Covid-19 is achieved.
Tasks Image Classification, Transfer Learning
Published 2020-03-25
URL https://arxiv.org/abs/2003.11617v1
PDF https://arxiv.org/pdf/2003.11617v1.pdf
PWC https://paperswithcode.com/paper/covid-19-automatic-detection-from-x-ray
Repo
Framework
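
The transfer learning recipe used here is the standard one: start from an ImageNet-pretrained CNN, freeze the convolutional features, and train a new classification head for the three classes. A generic torchvision sketch follows; the VGG16 backbone, learning rate, and input handling are assumptions, not the paper's exact configuration (the pretrained weights are downloaded on first use):

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained backbone and freeze its convolutional features.
backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in backbone.features.parameters():
    p.requires_grad = False

# Replace the final layer with a 3-way head: Covid-19 / common pneumonia / normal.
backbone.classifier[6] = nn.Linear(4096, 3)

optimizer = torch.optim.Adam(
    (p for p in backbone.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative step on random tensors standing in for preprocessed X-ray batches.
x, y = torch.randn(4, 3, 224, 224), torch.randint(0, 3, (4,))
loss = criterion(backbone(x), y)
loss.backward()
optimizer.step()
print(loss.item())
```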

Redistribution Systems and PRAM

Title Redistribution Systems and PRAM
Authors Paul Cohen, Tomasz Loboda
Abstract Redistribution systems iteratively redistribute mass between groups under the control of rules. PRAM is a framework for building redistribution systems. We discuss the relationships between redistribution systems, agent-based systems, compartmental models and Bayesian models. PRAM puts agent-based models on a sound probabilistic footing by reformulating them as redistribution systems. This provides a basis for integrating agent-based and probabilistic models. PRAM extends the themes of probabilistic relational models and lifted inference to incorporate dynamical models and simulation. We illustrate PRAM with an epidemiological example.
Tasks
Published 2020-03-18
URL https://arxiv.org/abs/2003.08783v2
PDF https://arxiv.org/pdf/2003.08783v2.pdf
PWC https://paperswithcode.com/paper/redistribution-systems-and-pram
Repo
Framework
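
The core redistribution loop is easy to picture: groups hold mass, and rules move fractions of that mass between groups at each step. A tiny sketch with an SIR-flavoured example follows; the rule form and rates are generic illustrations, not PRAM's actual rule language:

```python
def step(groups, rules):
    """Apply each rule, which moves a fraction of mass from one group to another."""
    flows = {}
    for src, dst, frac in rules:
        flows[(src, dst)] = groups[src] * frac
    for (src, dst), amount in flows.items():
        groups[src] -= amount
        groups[dst] += amount
    return groups

# Epidemiological flavour (as in the paper's example domain): S -> I -> R.
population = {"S": 0.99, "I": 0.01, "R": 0.0}
rules = [("S", "I", 0.05), ("I", "R", 0.10)]   # illustrative rates only
for _ in range(10):
    population = step(population, rules)
print({k: round(v, 3) for k, v in population.items()})
```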

FedVision: An Online Visual Object Detection Platform Powered by Federated Learning

Title FedVision: An Online Visual Object Detection Platform Powered by Federated Learning
Authors Yang Liu, Anbu Huang, Yun Luo, He Huang, Youzhi Liu, Yuanyuan Chen, Lican Feng, Tianjian Chen, Han Yu, Qiang Yang
Abstract Visual object detection is a computer vision-based artificial intelligence (AI) technique which has many practical applications (e.g., fire hazard monitoring). However, due to privacy concerns and the high cost of transmitting video data, it is highly challenging to build object detection models on centrally stored large training datasets following the current approach. Federated learning (FL) is a promising approach to resolve this challenge. Nevertheless, there is currently no easy-to-use tool that enables computer vision application developers who are not experts in federated learning to conveniently leverage this technology and apply it in their systems. In this paper, we report FedVision - a machine learning engineering platform to support the development of federated learning powered computer vision applications. The platform has been deployed through a collaboration between WeBank and Extreme Vision to help customers develop computer vision-based safety monitoring solutions in smart city applications. Over four months of usage, it has achieved significant efficiency improvement and cost reduction while removing the need to transmit sensitive data for three major corporate customers. To the best of our knowledge, this is the first real application of FL in computer vision-based tasks.
Tasks Object Detection
Published 2020-01-17
URL https://arxiv.org/abs/2001.06202v1
PDF https://arxiv.org/pdf/2001.06202v1.pdf
PWC https://paperswithcode.com/paper/fedvision-an-online-visual-object-detection
Repo
Framework
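
Federated learning, as used by FedVision, keeps training data on the clients and shares only model updates, which a server aggregates. A minimal FedAvg-style aggregation sketch follows; it is a generic illustration of the aggregation step, not FedVision's implementation:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style aggregation).

    `client_weights` is a list of dicts mapping parameter names to arrays;
    `client_sizes` gives each client's local sample count.
    """
    total = float(sum(client_sizes))
    keys = client_weights[0].keys()
    return {
        k: sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in keys
    }

# Three clients with differing amounts of local (never-transmitted) data.
clients = [{"w": np.array([1.0, 2.0])},
           {"w": np.array([3.0, 4.0])},
           {"w": np.array([5.0, 6.0])}]
print(federated_average(clients, client_sizes=[10, 30, 60]))
```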

Accuracy of MRI Classification Algorithms in a Tertiary Memory Center Clinical Routine Cohort

Title Accuracy of MRI Classification Algorithms in a Tertiary Memory Center Clinical Routine Cohort
Authors Alexandre Morin, Jorge Samper-González, Anne Bertrand, Sebastian Stroer, Didier Dormont, Aline Mendes, Pierrick Coupé, Jamila Ahdidan, Marcel Lévy, Dalila Samri, Harald Hampel, Bruno Dubois, Marc Teichmann, Stéphane Epelbaum, Olivier Colliot
Abstract BACKGROUND: Automated volumetry software (AVS) has recently become widely available to neuroradiologists. MRI volumetry with AVS may support the diagnosis of dementias by identifying regional atrophy. Moreover, automatic classifiers using machine learning techniques have recently emerged as promising approaches to assist diagnosis. However, the performance of both AVS and automatic classifiers has been evaluated mostly in the artificial setting of research datasets. OBJECTIVE: Our aim was to evaluate the performance of two AVS and an automatic classifier in the clinical routine condition of a memory clinic. METHODS: We studied 239 patients with cognitive troubles from a single memory center cohort. Using clinical routine T1-weighted MRI, we evaluated the classification performance of: 1) univariate volumetry using two AVS (volBrain and Neuroreader$^{TM}$); 2) a Support Vector Machine (SVM) automatic classifier, using either the AVS volumes (SVM-AVS) or whole gray matter (SVM-WGM); 3) reading by two neuroradiologists. The performance measure was the balanced diagnostic accuracy. The reference standard was consensus diagnosis by three neurologists using clinical, biological (cerebrospinal fluid) and imaging data and following international criteria. RESULTS: Univariate AVS volumetry provided only moderate accuracies (46% to 71% with hippocampal volume). The accuracy improved when using the SVM-AVS classifier (52% to 85%), becoming close to that of SVM-WGM (52% to 90%). Visual classification by neuroradiologists ranged between SVM-AVS and SVM-WGM. CONCLUSION: In the routine practice of a memory clinic, the use of volumetric measures provided by AVS yields only moderate accuracy. Automatic classifiers can improve accuracy and could be a useful tool to assist diagnosis.
Tasks
Published 2020-03-19
URL https://arxiv.org/abs/2003.09260v1
PDF https://arxiv.org/pdf/2003.09260v1.pdf
PWC https://paperswithcode.com/paper/accuracy-of-mri-classification-algorithms-in
Repo
Framework
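
The SVM-AVS setting amounts to training a support vector machine on the regional volumes returned by the volumetry software and scoring it with balanced accuracy. A scikit-learn sketch of that pipeline on synthetic stand-in data follows; the features and labels here are random placeholders, not the study's cohort:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for AVS output: a few regional volumes per subject (e.g., hippocampus, ...).
n_subjects, n_regions = 239, 6
X = rng.normal(size=(n_subjects, n_regions))
y = rng.integers(0, 2, size=n_subjects)     # placeholder diagnostic labels

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5, scoring="balanced_accuracy")
print("balanced accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```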

Unconstrained Periocular Recognition: Using Generative Deep Learning Frameworks for Attribute Normalization

Title Unconstrained Periocular Recognition: Using Generative Deep Learning Frameworks for Attribute Normalization
Authors Luiz A. Zanlorensi, Hugo Proença, David Menotti
Abstract Ocular biometric systems working in unconstrained environments usually face the problem of small within-class compactness caused by the multiple factors that jointly degrade the quality of the obtained data. In this work, we propose an attribute normalization strategy based on deep learning generative frameworks that reduces the variability of the samples used in pairwise comparisons without reducing their discriminability. The proposed method can be seen as a preprocessing step that contributes to data regularization and improves the recognition accuracy, being fully agnostic to the recognition strategy used. As proof of concept, we consider the “eyeglasses” and “gaze” factors, comparing the levels of performance of five different recognition methods with and without the proposed normalization strategy. We also introduce a new dataset for unconstrained periocular recognition, composed of images acquired by mobile devices, particularly suited to perceive the impact of “wearing eyeglasses” on recognition effectiveness. Our experiments were performed on two different datasets and support the usefulness of our attribute normalization scheme in improving recognition performance.
Tasks
Published 2020-02-10
URL https://arxiv.org/abs/2002.03985v1
PDF https://arxiv.org/pdf/2002.03985v1.pdf
PWC https://paperswithcode.com/paper/unconstrained-periocular-recognition-using
Repo
Framework