October 19, 2019

3346 words 16 mins read

Paper Group ANR 331

AVRA: Automatic Visual Ratings of Atrophy from MRI images using Recurrent Convolutional Neural Networks. Cortical-inspired image reconstruction via sub-Riemannian geometry and hypoelliptic diffusion. A Novel ECOC Algorithm with Centroid Distance Based Soft Coding Scheme. Multi-View Network Embedding Via Graph Factorization Clustering and Co-Regular …

AVRA: Automatic Visual Ratings of Atrophy from MRI images using Recurrent Convolutional Neural Networks

Title AVRA: Automatic Visual Ratings of Atrophy from MRI images using Recurrent Convolutional Neural Networks
Authors Gustav Mårtensson, Daniel Ferreira, Lena Cavallin, J-Sebastian Muehlboeck, Lars-Olof Wahlund, Chunliang Wang, Eric Westman
Abstract Quantifying the degree of atrophy is done clinically by neuroradiologists following established visual rating scales. For these assessments to be reliable, the rater requires substantial training and experience, and even then the rating agreement between two radiologists is not perfect. We have developed a model we call AVRA (Automatic Visual Ratings of Atrophy) based on machine learning methods and trained on 2350 visual ratings made by an experienced neuroradiologist. It provides fast and automatic ratings for Scheltens’ scale of medial temporal atrophy (MTA), the frontal subscale of Pasquier’s Global Cortical Atrophy (GCA-F) scale, and Koedam’s scale of Posterior Atrophy (PA). We demonstrate substantial inter-rater agreement between AVRA’s ratings and those of a neuroradiologist, with Cohen’s weighted kappa values of $\kappa_w$ = 0.74/0.72 (MTA left/right), $\kappa_w$ = 0.62 (GCA-F) and $\kappa_w$ = 0.74 (PA), and an inherent intra-rater agreement of $\kappa_w$ = 1. We conclude that automatic visual ratings of atrophy can potentially have great clinical and scientific value, and aim to present AVRA as a freely available toolbox.
Tasks
Published 2018-12-23
URL http://arxiv.org/abs/1901.00418v1
PDF http://arxiv.org/pdf/1901.00418v1.pdf
PWC https://paperswithcode.com/paper/avra-automatic-visual-ratings-of-atrophy-from
Repo
Framework
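
The agreement metric quoted above, Cohen's weighted kappa, is straightforward to reproduce. Below is a minimal sketch using scikit-learn's cohen_kappa_score; the two rating arrays are invented placeholders, not data from the paper.

```python
# Minimal sketch of the agreement metric reported above: Cohen's weighted kappa
# between two raters on an ordinal scale (e.g. MTA rated 0-4). The rating
# arrays are invented placeholders, not data from the paper.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rater_a = np.array([0, 1, 2, 2, 3, 4, 1, 0, 2, 3])  # e.g. neuroradiologist
rater_b = np.array([0, 1, 2, 3, 3, 4, 1, 1, 2, 3])  # e.g. model output

# "linear" weights penalize disagreements in proportion to their distance on
# the scale; "quadratic" weights penalize large disagreements more strongly.
kappa_w = cohen_kappa_score(rater_a, rater_b, weights="linear")
print(f"weighted kappa: {kappa_w:.2f}")
```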

Cortical-inspired image reconstruction via sub-Riemannian geometry and hypoelliptic diffusion

Title Cortical-inspired image reconstruction via sub-Riemannian geometry and hypoelliptic diffusion
Authors Ugo Boscain, Roman Chertovskih, Jean-Paul Gauthier, Dario Prandi, Alexey Remizov
Abstract In this paper we review several algorithms for image inpainting based on the hypoelliptic diffusion naturally associated with a mathematical model of the primary visual cortex. In particular, we present one algorithm that does not exploit the information of where the image is corrupted, and others that do. While the first algorithm is able to reconstruct only images that our visual system is still capable of recognizing, we show that those of the second type overcome this limitation, providing state-of-the-art reconstructions in image inpainting. This can be interpreted as a validation of the fact that our visual cortex actually encodes the first type of algorithm.
Tasks Image Inpainting, Image Reconstruction
Published 2018-01-11
URL http://arxiv.org/abs/1801.03800v1
PDF http://arxiv.org/pdf/1801.03800v1.pdf
PWC https://paperswithcode.com/paper/cortical-inspired-image-reconstruction-via
Repo
Framework
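
As a rough illustration of the diffusion idea (not the paper's method), the sketch below runs ordinary isotropic heat diffusion restricted to a known corruption mask. The actual algorithms lift the image to the roto-translation group and apply hypoelliptic diffusion there; the function name and parameters here are hypothetical.

```python
# Much-simplified sketch: isotropic heat diffusion inside a known corruption
# mask. The paper's algorithms instead lift the image to the roto-translation
# group and run hypoelliptic diffusion there; this only conveys the general
# "fill in by diffusion" idea. Periodic borders are used for brevity.
import numpy as np

def diffuse_inpaint(image, mask, n_iter=500, dt=0.2):
    """image: 2D float array; mask: True where pixels are corrupted/unknown."""
    u = image.copy()
    u[mask] = image[~mask].mean()          # crude initialization of the hole
    for _ in range(n_iter):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)  # 5-point Laplacian
        u[mask] += dt * lap[mask]          # update only the unknown pixels
    return u
```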

A Novel ECOC Algorithm with Centroid Distance Based Soft Coding Scheme

Title A Novel ECOC Algorithm with Centroid Distance Based Soft Coding Scheme
Authors Kaijie Feng, Kunhong Liu, Beizhan Wang
Abstract In the ECOC framework, the ternary coding strategy is widely deployed in the coding process. It relabels classes with {-1, 0, 1}, where -1/1 assigns the corresponding classes to the negative/positive group, and label 0 means the corresponding classes are ignored during training. However, the use of hard labels may lose information about the tendency of class distributions. Instead, we propose a Centroid distance-based Soft coding scheme to indicate this tendency, named CSECOC. In our algorithm, Sequential Forward Floating Selection (SFFS) is applied to search for an optimal class assignment by minimizing the ratio of intra-group to inter-group distance. In this way, a hard coding matrix is generated initially. We then propose a measure, named coverage, to describe the probability of a sample in a class falling into the correct group. The coverage of a class in a group replaces the corresponding hard element, forming a soft coding matrix. Compared with hard elements, such soft elements can reflect the tendency of a class to belong to the positive or negative group. Instead of classifiers, regressors are used as base learners in this algorithm. To the best of our knowledge, this is the first time a soft coding scheme has been proposed. The results on five UCI datasets show that, compared with some state-of-the-art ECOC algorithms, our algorithm can produce comparable or better classification accuracy with small-scale ensembles.
Tasks
Published 2018-06-22
URL http://arxiv.org/abs/1806.08465v1
PDF http://arxiv.org/pdf/1806.08465v1.pdf
PWC https://paperswithcode.com/paper/a-novel-ecoc-algorithm-with-centroid-distance
Repo
Framework
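
A hedged sketch of the "coverage" idea from the abstract above: for a class assigned to one group of a dichotomy, estimate the fraction of its samples that fall nearer to the correct group centroid. The exact definition in the paper may differ; names and arguments here are illustrative.

```python
# Illustrative sketch of the "coverage" measure described above: the fraction
# of a class's samples that land nearer to the centroid of the group the class
# was assigned to. The paper's exact formulation may differ.
import numpy as np

def coverage(X_class, pos_centroid, neg_centroid, assigned_to_positive):
    d_pos = np.linalg.norm(X_class - pos_centroid, axis=1)
    d_neg = np.linalg.norm(X_class - neg_centroid, axis=1)
    correct = d_pos < d_neg if assigned_to_positive else d_neg < d_pos
    return correct.mean()

# The soft code element would then replace the hard label, e.g.
#   +coverage(...) for a class in the positive group,
#   -coverage(...) for a class in the negative group.
```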

Multi-View Network Embedding Via Graph Factorization Clustering and Co-Regularized Multi-View Agreement

Title Multi-View Network Embedding Via Graph Factorization Clustering and Co-Regularized Multi-View Agreement
Authors Yiwei Sun, Ngot Bui, Tsung-Yu Hsieh, Vasant Honavar
Abstract Real-world social networks and digital platforms are composed of individuals (nodes) that are linked to other individuals or entities through multiple types of relationships (links). Sub-networks of such a network based on each type of link correspond to distinct views of the underlying network. In real-world applications, each node is typically linked to only a small subset of other nodes. Hence, practical approaches to problems such as node labeling have to cope with the resulting sparse networks. While low-dimensional network embeddings offer a promising approach to this problem, most current network embedding methods focus primarily on single-view networks. We introduce a novel multi-view network embedding (MVNE) algorithm for constructing low-dimensional node embeddings from multi-view networks. MVNE adapts and extends an approach to single-view network embedding (SVNE) using graph factorization clustering (GFC) to the multi-view setting, using an objective function that maximizes the agreement between views based on both the local and global structure of the underlying multi-view graph. Our experiments with several benchmark real-world single-view networks show that GFC-based SVNE yields network embeddings that are competitive with or superior to those produced by state-of-the-art single-view network embedding methods when the embeddings are used for labeling unlabeled nodes in the networks. Our experiments with several multi-view networks show that MVNE substantially outperforms the single-view methods on the integrated view and the state-of-the-art multi-view methods. We further show that even when the goal is to predict labels of nodes within a single target view, MVNE outperforms its single-view counterpart, suggesting that MVNE is able to extract information useful for labeling nodes in the target view from all of the views.
Tasks Network Embedding
Published 2018-11-06
URL http://arxiv.org/abs/1811.02616v3
PDF http://arxiv.org/pdf/1811.02616v3.pdf
PWC https://paperswithcode.com/paper/multi-view-network-embedding-via-graph
Repo
Framework

Multi-level hypothesis testing for populations of heterogeneous networks

Title Multi-level hypothesis testing for populations of heterogeneous networks
Authors Guilherme Gomes, Vinayak Rao, Jennifer Neville
Abstract In this work, we consider hypothesis testing and anomaly detection on datasets where each observation is a weighted network. Examples of such data include brain connectivity networks from fMRI flow data, or word co-occurrence counts for populations of individuals. Current approaches to hypothesis testing for weighted networks typically require thresholding the edge weights to transform the data into binary networks. This results in a loss of information, and outcomes are sensitive to the choice of threshold level. Our work avoids this, and we consider weighted-graph observations in two situations: 1) where each graph belongs to one of two populations, and 2) where entities belong to one of two populations, with each entity possessing multiple graphs (indexed e.g. by time). Specifically, we propose a hierarchical Bayesian hypothesis testing framework that models each population with a mixture of latent space models for weighted networks, and then tests populations of networks for differences in distribution over components. Our framework is capable of population-level, entity-specific, as well as edge-specific hypothesis testing. We apply it to synthetic data and three real-world datasets: two social media datasets involving word co-occurrences from discussions on Twitter of the political unrest in Brazil, and on Instagram concerning Attention Deficit Hyperactivity Disorder (ADHD) and depression drugs, and one medical dataset involving fMRI brain scans of human subjects. The results show that our proposed method has lower Type I error and higher statistical power compared to alternatives that need to threshold the edge weights. Moreover, they show that our proposed method is better suited to deal with highly heterogeneous datasets.
Tasks Anomaly Detection
Published 2018-09-07
URL http://arxiv.org/abs/1809.02512v1
PDF http://arxiv.org/pdf/1809.02512v1.pdf
PWC https://paperswithcode.com/paper/multi-level-hypothesis-testing-for
Repo
Framework

“Why Should I Trust Interactive Learners?” Explaining Interactive Queries of Classifiers to Users

Title “Why Should I Trust Interactive Learners?” Explaining Interactive Queries of Classifiers to Users
Authors Stefano Teso, Kristian Kersting
Abstract Although interactive learning puts the user into the loop, the learner remains mostly a black box for the user. Understanding the reasons behind queries and predictions is important when assessing how the learner works and, in turn, trust. Consequently, we propose the novel framework of explanatory interactive learning: in each step, the learner explains its interactive query to the user, and she can query any active classifier to visualize explanations of the corresponding predictions. We demonstrate that this can boost the predictive and explanatory power of, and the trust in, the learned model, using text (e.g. SVMs) and image classification (e.g. neural networks) experiments as well as a user study.
Tasks Image Classification
Published 2018-05-22
URL http://arxiv.org/abs/1805.08578v1
PDF http://arxiv.org/pdf/1805.08578v1.pdf
PWC https://paperswithcode.com/paper/why-should-i-trust-interactive-learners
Repo
Framework
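
The snippet below is a generic illustration (not the paper's protocol or explanation method) of pairing an uncertainty-based query with a simple explanation: the queried instance is reported together with the largest per-feature contributions of a linear model.

```python
# Generic sketch of an "explain the query" step in active learning: pick the
# most uncertain pool instance and report which features drive its prediction
# (largest weighted contributions of a linear model). This is not the paper's
# exact protocol or explainer; data and names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(40, 5))
y_labeled = (X_labeled[:, 0] > 0).astype(int)
X_pool = rng.normal(size=(100, 5))
feature_names = [f"f{j}" for j in range(5)]

model = LogisticRegression().fit(X_labeled, y_labeled)
proba = model.predict_proba(X_pool)[:, 1]
i = int(np.argmin(np.abs(proba - 0.5)))       # most uncertain pool instance
contrib = model.coef_[0] * X_pool[i]          # per-feature contribution to the logit
top = np.argsort(-np.abs(contrib))[:3]
print("query instance:", i)
print("explanation:", [(feature_names[j], round(float(contrib[j]), 3)) for j in top])
```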

A Simple Riemannian Manifold Network for Image Set Classification

Title A Simple Riemannian Manifold Network for Image Set Classification
Authors Rui Wang, Xiao-Jun Wu, Josef Kittler
Abstract In the domain of image-set based classification, a considerable advance has been made by representing original image sets as covariance matrices, which typically lie on a Riemannian manifold, specifically the Symmetric Positive Definite (SPD) manifold. Traditional manifold learning methods inevitably suffer from high computational complexity or weak feature representations. To overcome these limitations, we propose a very simple Riemannian manifold network for image set classification. Inspired by deep learning architectures, we design a fully connected layer to generate new, more powerful SPD matrices, and exploit a rectifying layer to prevent the input SPD matrices from becoming singular. We also introduce a non-linear learning scheme for the proposed network with an innovative objective function. Furthermore, we devise a pooling layer to further reduce the redundancy of the input SPD matrices, and a log-map layer to project the SPD manifold to Euclidean space. For learning the connection weights between the input layer and the fully connected layer, we use the two-directional two-dimensional Principal Component Analysis ((2D)2PCA) algorithm. The proposed Riemannian manifold network (RieMNet) avoids complex computations and can be built and trained extremely easily and efficiently. We have also developed a deep version of RieMNet, named DRieMNet. The proposed RieMNet and DRieMNet are evaluated on three tasks: video-based face recognition, set-based object categorization, and set-based cell identification. Extensive experimental results show the superiority of our method over the state-of-the-art.
Tasks Face Recognition
Published 2018-05-27
URL http://arxiv.org/abs/1805.10628v2
PDF http://arxiv.org/pdf/1805.10628v2.pdf
PWC https://paperswithcode.com/paper/a-simple-riemannian-manifold-network-for
Repo
Framework
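
The three SPD-matrix operations named in the abstract above (a bilinear "fully connected" map, a rectifying layer, and a log-map layer) can be sketched in a few lines of numpy. The weight matrix and threshold below are illustrative, not the paper's; the (2D)2PCA-based weight learning is not shown.

```python
# Rough numpy sketch of the layer types described above for SPD inputs.
# W and eps are illustrative; the paper learns W with (2D)2PCA.
import numpy as np

def fc_layer(X, W):
    """Bilinear map W X W^T: keeps the output symmetric positive semidefinite."""
    return W @ X @ W.T

def reeig_layer(X, eps=1e-4):
    """Rectifying layer: clip small eigenvalues so the matrix stays non-singular."""
    vals, vecs = np.linalg.eigh(X)
    return (vecs * np.maximum(vals, eps)) @ vecs.T

def logeig_layer(X):
    """Log-map layer: matrix logarithm projects the SPD manifold to Euclidean space."""
    vals, vecs = np.linalg.eigh(X)
    return (vecs * np.log(vals)) @ vecs.T
```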

White matter hyperintensity segmentation from T1 and FLAIR images using fully convolutional neural networks enhanced with residual connections

Title White matter hyperintensity segmentation from T1 and FLAIR images using fully convolutional neural networks enhanced with residual connections
Authors Dakai Jin, Ziyue Xu, Adam P. Harrison, Daniel J. Mollura
Abstract Segmentation and quantification of white matter hyperintensities (WMHs) are of great importance in studying and understanding various neurological and geriatric disorders. Although automatic methods have been proposed for WMH segmentation on magnetic resonance imaging (MRI), manual corrections are often necessary to achieve clinically practical results. Major challenges for WMH segmentation stem from their inhomogeneous MRI intensities, random location and size distributions, and MRI noise. The presence of other brain anatomies or diseases with enhanced intensities adds further difficulties. To cope with these challenges, we present a specifically designed fully convolutional neural network (FCN) with residual connections to segment WMHs by using combined T1 and fluid-attenuated inversion recovery (FLAIR) images. Our customized FCN is designed to be straightforward and generalizable, providing efficient end-to-end training due to its enhanced information propagation. We tested our method on the open WMH Segmentation Challenge MICCAI2017 dataset, and, despite our method’s relative simplicity, results show that it performs amongst the leading techniques across five metrics. More importantly, our method achieves the best score for Hausdorff distance and average volume difference in testing datasets from two MRI scanners that were not included in training, demonstrating the better generalization ability of our proposed method over its competitors.
Tasks
Published 2018-03-19
URL http://arxiv.org/abs/1803.06782v1
PDF http://arxiv.org/pdf/1803.06782v1.pdf
PWC https://paperswithcode.com/paper/white-matter-hyperintensity-segmentation-from
Repo
Framework
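
A hedged PyTorch sketch of the ingredients named in the abstract above: a fully convolutional network over two-channel (T1 + FLAIR) input with a residual connection inside each block. Channel counts and depth are placeholders, not the paper's architecture.

```python
# Illustrative residual FCN block over 2-channel (T1, FLAIR) input; channel
# counts and depth are placeholders, not the paper's exact architecture.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1, self.bn2 = nn.BatchNorm2d(channels), nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)              # residual connection

model = nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), ResidualBlock(32),
                      ResidualBlock(32), nn.Conv2d(32, 1, 1))
logits = model(torch.randn(1, 2, 128, 128))    # per-pixel WMH logits, shape (1, 1, 128, 128)
```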

Using deceased-donor kidneys to initiate chains of living donor kidney paired donations: algorithms and experimentation

Title Using deceased-donor kidneys to initiate chains of living donor kidney paired donations: algorithms and experimentation
Authors Cristina Cornelio, Lucrezia Furian, Antonio Nicolo’, Francesca Rossi
Abstract We design a flexible algorithm that exploits deceased donor kidneys to initiate chains of living donor kidney paired donations, combining deceased and living donor allocation mechanisms to improve the quantity and quality of kidney transplants. The advantages of this approach have been measured using retrospective data on the pool of donor/recipient incompatible and desensitized pairs at the Padua University Hospital, the largest center for living donor kidney transplants in Italy. The experiments show a remarkable improvement in the number of patients with an incompatible donor who could be transplanted, a decrease in the number of desensitization procedures, and an increase in the number of UT patients (that is, patients unlikely to be transplanted for immunological reasons) on the waiting list who could receive an organ.
Tasks
Published 2018-12-17
URL http://arxiv.org/abs/1901.02420v1
PDF http://arxiv.org/pdf/1901.02420v1.pdf
PWC https://paperswithcode.com/paper/using-deceased-donor-kidneys-to-initiate
Repo
Framework
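
The chain idea can be sketched in a few lines of plain Python: a deceased-donor kidney goes to the recipient of an incompatible pair, which frees that pair's living donor to donate to the next pair, and so on. The compatibility function and greedy selection below are stand-ins; the paper's allocation algorithm and priorities are considerably more elaborate.

```python
# Toy sketch of a donation chain started by a deceased-donor kidney. The
# compatibility function and greedy selection are stand-ins for the paper's
# actual allocation rules and priorities.
def build_chain(deceased_donor, pairs, compatible):
    """pairs: list of (living_donor, recipient); compatible(donor, recipient) -> bool."""
    chain, current_donor, remaining = [], deceased_donor, list(pairs)
    while True:
        nxt = next((p for p in remaining if compatible(current_donor, p[1])), None)
        if nxt is None:
            break
        chain.append((current_donor, nxt[1]))  # current donor gives to this pair's recipient
        current_donor = nxt[0]                 # the pair's living donor continues the chain
        remaining.remove(nxt)
    # the final donor's kidney can go back to a patient on the waiting list
    return chain, current_donor
```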

ClaiRE at SemEval-2018 Task 7 - Extended Version

Title ClaiRE at SemEval-2018 Task 7 - Extended Version
Authors Lena Hettinger, Alexander Dallmann, Albin Zehe, Thomas Niebler, Andreas Hotho
Abstract In this paper we describe our post-evaluation results for SemEval-2018 Task 7 on classification of semantic relations in scientific literature for clean (subtask 1.1) and noisy data (subtask 1.2). This is an extended version of our workshop paper (Hettinger et al., 2018) including further technical details (Sections 3.2 and 4.3) and changes made to the preprocessing step in the post-evaluation phase (Section 2.1). Due to these changes, Classification of Relations using Embeddings (ClaiRE) achieved an improved F1 score of 75.11% for the first subtask and 81.44% for the second.
Tasks
Published 2018-04-16
URL http://arxiv.org/abs/1804.05825v3
PDF http://arxiv.org/pdf/1804.05825v3.pdf
PWC https://paperswithcode.com/paper/claire-at-semeval-2018-task-7-extended
Repo
Framework

Improved SVD-based Initialization for Nonnegative Matrix Factorization using Low-Rank Correction

Title Improved SVD-based Initialization for Nonnegative Matrix Factorization using Low-Rank Correction
Authors Atif Muhammad Syed, Sameer Qazi, Nicolas Gillis
Abstract Due to the iterative nature of most nonnegative matrix factorization (NMF) algorithms, initialization is a key aspect as it significantly influences both the convergence and the final solution obtained. Many initialization schemes have been proposed for NMF, among which one of the most popular classes of methods is based on the singular value decomposition (SVD). However, these SVD-based initializations do not satisfy a rather natural condition, namely that the error should decrease as the rank of the factorization increases. In this paper, we propose a novel SVD-based NMF initialization to specifically address this shortcoming by taking into account the SVD factors that were discarded to obtain a nonnegative initialization. This method, referred to as nonnegative SVD with low-rank correction (NNSVD-LRC), allows us to significantly reduce the initial error at a negligible additional computational cost using the low-rank structure of the discarded SVD factors. NNSVD-LRC has two other advantages compared to previous SVD-based initializations: (1) it provably generates sparse initial factors, and (2) it is faster as it only requires computing a truncated SVD of rank $\lceil r/2 + 1 \rceil$ where $r$ is the factorization rank of the sought NMF decomposition (as opposed to a rank-$r$ truncated SVD for other methods). We show on several standard dense and sparse data sets that our new method competes favorably with state-of-the-art SVD-based initializations for NMF.
Tasks
Published 2018-07-11
URL http://arxiv.org/abs/1807.04020v1
PDF http://arxiv.org/pdf/1807.04020v1.pdf
PWC https://paperswithcode.com/paper/improved-svd-based-initialization-for
Repo
Framework
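
For context, the sketch below shows the classic SVD-based NMF initialization (NNDSVD-style) that methods of this family build on: each leading singular triplet is split into its nonnegative parts and the dominant pair is kept. The paper's low-rank correction step is not reproduced here.

```python
# Baseline NNDSVD-style initialization for NMF (illustrative; the paper's
# NNSVD-LRC adds a low-rank correction on top of an idea like this).
import numpy as np

def nndsvd_init(A, r):
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    W, H = np.zeros((A.shape[0], r)), np.zeros((r, A.shape[1]))
    for k in range(r):
        u, v = U[:, k], Vt[k, :]
        up, un = np.maximum(u, 0), np.maximum(-u, 0)
        vp, vn = np.maximum(v, 0), np.maximum(-v, 0)
        # keep whichever sign pattern of the rank-one term carries more energy
        if np.linalg.norm(up) * np.linalg.norm(vp) >= np.linalg.norm(un) * np.linalg.norm(vn):
            x, y = up, vp
        else:
            x, y = un, vn
        scale = np.sqrt(S[k] * np.linalg.norm(x) * np.linalg.norm(y))
        W[:, k] = scale * x / (np.linalg.norm(x) + 1e-12)
        H[k, :] = scale * y / (np.linalg.norm(y) + 1e-12)
    return W, H
```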

Synthesis in pMDPs: A Tale of 1001 Parameters

Title Synthesis in pMDPs: A Tale of 1001 Parameters
Authors Murat Cubuktepe, Nils Jansen, Sebastian Junges, Joost-Pieter Katoen, Ufuk Topcu
Abstract This paper considers parametric Markov decision processes (pMDPs) whose transitions are equipped with affine functions over a finite set of parameters. The synthesis problem is to find a parameter valuation such that the instantiated pMDP satisfies a specification under all strategies. We show that this problem can be formulated as a quadratically-constrained quadratic program (QCQP) and is non-convex in general. To deal with the NP-hardness of such problems, we exploit a convex-concave procedure (CCP) to iteratively obtain local optima. An appropriate interplay between CCP solvers and probabilistic model checkers creates a procedure — realized in the open-source tool PROPhESY — that solves the synthesis problem for models with thousands of parameters.
Tasks
Published 2018-03-05
URL http://arxiv.org/abs/1803.02884v4
PDF http://arxiv.org/pdf/1803.02884v4.pdf
PWC https://paperswithcode.com/paper/synthesis-in-pmdps-a-tale-of-1001-parameters
Repo
Framework

Dust concentration vision measurement based on moment of inertia in gray level-rank co-occurrence matrix

Title Dust concentration vision measurement based on moment of inertia in gray level-rank co-occurrence matrix
Authors Zhiwen Luo, Guohui Li, Junfeng Du, Jieping Wu
Abstract To improve the accuracy of existing dust concentration measurements, this paper proposes a dust concentration measurement based on the moment of inertia of the Gray level-Rank Co-occurrence Matrix (GRCM), computed from dust image samples captured by a machine vision system. First, a Polynomial computational model between dust Concentration and Moment of inertia (PCM) is established through experiments and curve fitting. Computing methods for the GRCM and its moment of inertia are then derived through theoretical and mathematical analysis. An on-line dust concentration vision measurement system is developed, and cement dust concentration measurement in a cement production workshop is taken as a practical example using the system and the PCM model. The results show that the measurement error is within 9% over a measurement range of 0.5-1000 mg/m3. Finally, compared with filter membrane weighing, light scattering, and laser measurements, the proposed PCM measurement has advantages in error and cost, providing a valuable reference for dust concentration vision measurements.
Tasks
Published 2018-05-10
URL http://arxiv.org/abs/1805.03788v1
PDF http://arxiv.org/pdf/1805.03788v1.pdf
PWC https://paperswithcode.com/paper/dust-concentration-vision-measurement-based
Repo
Framework
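
For orientation, the sketch below computes a standard gray-level co-occurrence matrix and its moment of inertia (often called contrast), then fits a polynomial mapping that feature to concentration. The paper's GRCM uses gray level-rank pairs and a calibrated measurement system, so treat this purely as an analogy; the calibration step in the comments is hypothetical.

```python
# Analogy only: a plain gray-level co-occurrence matrix and its moment of
# inertia ("contrast"), plus a polynomial fit to concentration. The paper's
# gray level-RANK co-occurrence matrix and calibration procedure differ.
import numpy as np

def cooccurrence_inertia(img, levels=16):
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    P = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):  # horizontal neighbor pairs
        P[a, b] += 1
    P /= P.sum()
    i, j = np.indices(P.shape)
    return np.sum((i - j) ** 2 * P)  # moment of inertia of the co-occurrence matrix

# Hypothetical calibration step (measured pairs would come from reference samples):
# coeffs = np.polyfit(inertia_values, concentrations_mg_per_m3, deg=3)
# estimate = np.polyval(coeffs, cooccurrence_inertia(new_image))
```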

Is feature selection secure against training data poisoning?

Title Is feature selection secure against training data poisoning?
Authors Huang Xiao, Battista Biggio, Gavin Brown, Giorgio Fumera, Claudia Eckert, Fabio Roli
Abstract Learning in adversarial settings is becoming an important task for application domains where attackers may inject malicious data into the training set to subvert normal operation of data-driven technologies. Feature selection has been widely used in machine learning for security applications to improve generalization and computational efficiency, although it is not clear whether its use may be beneficial or even counterproductive when training data are poisoned by intelligent attackers. In this work, we shed light on this issue by providing a framework to investigate the robustness of popular feature selection methods, including LASSO, ridge regression and the elastic net. Our results on malware detection show that feature selection methods can be significantly compromised under attack (we can reduce LASSO to almost random choices of feature sets by careful insertion of less than 5% poisoned training samples), highlighting the need for specific countermeasures.
Tasks data poisoning, Feature Selection, Malware Detection
Published 2018-04-21
URL http://arxiv.org/abs/1804.07933v1
PDF http://arxiv.org/pdf/1804.07933v1.pdf
PWC https://paperswithcode.com/paper/is-feature-selection-secure-against-training
Repo
Framework
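
A minimal sketch of the kind of experiment described above: watch how the set of features LASSO keeps shifts when a small fraction of adversarial points is appended to the training set. The injected points here are just mislabeled random samples, not the optimized poisoning attack studied in the paper.

```python
# Sketch of LASSO feature selection under simple data poisoning. The injected
# points are naive (random features, adversarial targets), unlike the paper's
# optimized attack; dataset and alpha are illustrative.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
w_true = np.zeros(50)
w_true[:5] = 1.0                                   # only 5 truly relevant features
y = X @ w_true + 0.1 * rng.normal(size=200)

def selected(X, y, alpha=0.05):
    return set(np.flatnonzero(Lasso(alpha=alpha).fit(X, y).coef_))

X_poison = np.vstack([X, rng.normal(size=(10, 50))])   # ~5% extra points
y_poison = np.concatenate([y, -5.0 * np.ones(10)])     # adversarial targets
print("kept on clean data:    ", sorted(selected(X, y)))
print("kept after poisoning:  ", sorted(selected(X_poison, y_poison)))
```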

Integrating Multiple Receptive Fields through Grouped Active Convolution

Title Integrating Multiple Receptive Fields through Grouped Active Convolution
Authors Yunho Jeon, Junmo Kim
Abstract Convolutional networks have achieved great success in various vision tasks. This is mainly due to a considerable amount of research on network structure. In this study, instead of focusing on architectures, we focused on the convolution unit itself. The existing convolution unit has a fixed shape and is limited to observing restricted receptive fields. In earlier work, we proposed the active convolution unit (ACU), which can freely define its shape and learn by itself. In this paper, we provide a detailed analysis of the previously proposed unit and show that it is an efficient representation of a sparse weight convolution. Furthermore, we extend an ACU to a grouped ACU, which can observe multiple receptive fields in one layer. We found that the performance of a naive grouped convolution is degraded by increasing the number of groups; however, the proposed unit retains the accuracy even though the number of parameters decreases. Based on this result, we suggest a depthwise ACU, and various experiments have shown that our unit is efficient and can replace the existing convolutions.
Tasks
Published 2018-11-11
URL http://arxiv.org/abs/1811.04387v2
PDF http://arxiv.org/pdf/1811.04387v2.pdf
PWC https://paperswithcode.com/paper/integrating-multiple-receptive-fields-through
Repo
Framework
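
The parameter-count effect of grouping mentioned in the abstract above is easy to see directly; the PyTorch snippet below compares a standard, a grouped, and a depthwise 3x3 convolution. The active convolution unit's learnable sampling positions are not reproduced here, and the channel sizes are arbitrary.

```python
# Parameter counts for standard vs. grouped vs. depthwise 3x3 convolutions.
# Illustrates why naive grouping shrinks the model; the active convolution
# unit's learnable receptive-field offsets are not shown.
import torch.nn as nn

def n_params(module):
    return sum(p.numel() for p in module.parameters())

cin = cout = 64
variants = {
    "standard ": nn.Conv2d(cin, cout, kernel_size=3, padding=1),
    "groups=4 ": nn.Conv2d(cin, cout, kernel_size=3, padding=1, groups=4),
    "depthwise": nn.Conv2d(cin, cout, kernel_size=3, padding=1, groups=cin),
}
for name, conv in variants.items():
    print(f"{name}: {n_params(conv)} parameters")
```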