January 28, 2020

3132 words 15 mins read

Paper Group ANR 957

When Can Neural Networks Learn Connected Decision Regions?. Conv2Warp: An unsupervised deformable image registration with continuous convolution and warping. MarmoNet: a pipeline for automated projection mapping of the common marmoset brain from whole-brain serial two-photon tomography. A User Study of Perceived Carbon Footprint. Deformable Medical …

When Can Neural Networks Learn Connected Decision Regions?

Title When Can Neural Networks Learn Connected Decision Regions?
Authors Trung Le, Dinh Phung
Abstract Previous work has questioned the conditions under which the decision regions of a neural network are connected and further showed the implications of the corresponding theory for the problem of adversarial manipulation of classifiers. It has been proven that, for a class of activation functions including leaky ReLU, neural networks with a pyramidal structure, that is, no layer has more hidden units than the input dimension, necessarily produce connected decision regions. In this paper, we advance this important result by further developing the sufficient and necessary conditions under which the decision regions of a neural network are connected. We then apply our framework to overcome the limits of existing work and further study the capacity of neural networks to learn connected regions for a much wider class of activation functions, including those widely used, namely ReLU, sigmoid, tanh, softplus, and the exponential linear function.
Tasks
Published 2019-01-25
URL http://arxiv.org/abs/1901.08710v1
PDF http://arxiv.org/pdf/1901.08710v1.pdf
PWC https://paperswithcode.com/paper/when-can-neural-networks-learn-connected
Repo
Framework
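
As a concrete illustration of the pyramidal condition discussed above, the following PyTorch sketch builds a tiny classifier in which no hidden layer is wider than the input dimension and all activations are leaky ReLU, exactly the setting for which connected decision regions were previously proven. The layer widths and input dimension are arbitrary choices for illustration, not values taken from the paper.

```python
# A toy pyramidal network in the sense of the abstract above: every hidden
# layer is no wider than the input dimension, and all activations are
# leaky ReLU. Layer widths are illustrative, not taken from the paper.
import torch
import torch.nn as nn

input_dim = 32  # hypothetical input dimension

model = nn.Sequential(
    nn.Linear(input_dim, 24),   # 24 <= 32: respects the pyramidal condition
    nn.LeakyReLU(0.1),
    nn.Linear(24, 16),          # 16 <= 24
    nn.LeakyReLU(0.1),
    nn.Linear(16, 2),           # binary classification head
)

x = torch.randn(8, input_dim)
logits = model(x)               # per the cited result, each class's decision
print(logits.shape)             # region of argmax(logits) is connected
```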

Conv2Warp: An unsupervised deformable image registration with continuous convolution and warping

Title Conv2Warp: An unsupervised deformable image registration with continuous convolution and warping
Authors Sharib Ali, Jens Rittscher
Abstract Recent successes in deep learning based deformable image registration (DIR) methods have demonstrated that complex deformations can be learnt directly from data while reducing computation time compared to traditional methods. However, the reliance on fully linear convolutional layers imposes a uniform sampling of pixel/voxel locations, which ultimately limits their performance. To address this problem, we propose a novel approach of learning a continuous warp of the source image. Here, the required deformation vector fields are obtained from concatenated linear and non-linear convolution layers and a learnable bicubic Catmull-Rom spline resampler. This allows computing a smooth deformation field and achieving more accurate alignment than using only linear convolutions and linear resampling. In addition, the continuous warping technique penalizes disagreements that are due to topological changes. Our experiments demonstrate that this approach captures large non-linear deformations and minimizes the propagation of interpolation errors. While improving accuracy, the method is computationally efficient. We present comparative results on a range of public 4D CT lung (POPI) and brain datasets (CUMC12, MGH10).
Tasks Image Registration
Published 2019-08-16
URL https://arxiv.org/abs/1908.06194v1
PDF https://arxiv.org/pdf/1908.06194v1.pdf
PWC https://paperswithcode.com/paper/conv2warp-an-unsupervised-deformable-image
Repo
Framework
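
To make the resampling idea concrete, here is a minimal, generic 1-D Catmull-Rom interpolation in NumPy. It only illustrates the kind of smooth spline resampling the paper builds on; the actual Conv2Warp resampler is multi-dimensional and learnable, and this function is not the authors' code.

```python
# Minimal 1-D Catmull-Rom interpolation, only to illustrate the kind of
# smooth resampling the paper builds into its (multi-dimensional, learnable)
# resampler. This is the generic textbook formula, not the authors' code.
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Interpolate between p1 and p2 for t in [0, 1]."""
    t2, t3 = t * t, t * t * t
    return 0.5 * (2 * p1
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t3)

signal = np.array([0.0, 1.0, 4.0, 3.0, 2.0])
# Resample between samples 1 and 2 at a non-integer location (t = 0.25).
value = catmull_rom(signal[0], signal[1], signal[2], signal[3], 0.25)
print(value)  # a smooth estimate between signal[1] and signal[2]
```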

MarmoNet: a pipeline for automated projection mapping of the common marmoset brain from whole-brain serial two-photon tomography

Title MarmoNet: a pipeline for automated projection mapping of the common marmoset brain from whole-brain serial two-photon tomography
Authors Henrik Skibbe, Akiya Watakabe, Ken Nakae, Carlos Enrique Gutierrez, Hiromichi Tsukada, Junichi Hata, Takashi Kawase, Rui Gong, Alexander Woodward, Kenji Doya, Hideyuki Okano, Tetsuo Yamamori, Shin Ishii
Abstract Understanding the connectivity in the brain is an important prerequisite for understanding how the brain processes information. In the Brain/MINDS project, a connectivity study on marmoset brains uses two-photon microscopy fluorescence images of axonal projections to collect the neuron connectivity from defined brain regions at the mesoscopic scale. The processing of the images requires the detection and segmentation of the axonal tracer signal. The objective is to detect as much tracer signal as possible while not misclassifying other background structures as signal. This is challenging because imaging noise, a cluttered image background, distortions and varying image contrast all cause problems. We are developing MarmoNet, a pipeline that processes and analyzes tracer image data of the common marmoset brain. The pipeline incorporates state-of-the-art machine learning techniques based on convolutional neural networks (CNNs) and image registration techniques to extract and map all relevant information in a robust manner. The pipeline processes new images in a fully automated way. This report introduces the current state of the tracer signal analysis part of the pipeline.
Tasks Image Registration
Published 2019-08-02
URL https://arxiv.org/abs/1908.00876v1
PDF https://arxiv.org/pdf/1908.00876v1.pdf
PWC https://paperswithcode.com/paper/marmonet-a-pipeline-for-automated-projection
Repo
Framework

A User Study of Perceived Carbon Footprint

Title A User Study of Perceived Carbon Footprint
Authors Victor Kristof, Valentin Quelquejay-Leclère, Robin Zbinden, Lucas Maystre, Matthias Grossglauser, Patrick Thiran
Abstract We propose a statistical model to understand people’s perception of their carbon footprint. Driven by the observation that few people think of CO2 impact in absolute terms, we design a system to probe people’s perception from simple pairwise comparisons of the relative carbon footprint of their actions. The formulation of the model enables us to take an active-learning approach to selecting the pairs of actions that are maximally informative about the model parameters. We define a set of 18 actions and collect a dataset of 2183 comparisons from 176 users on a university campus. The early results reveal promising directions to improve climate communication and enhance climate mitigation.
Tasks Active Learning
Published 2019-11-26
URL https://arxiv.org/abs/1911.11658v2
PDF https://arxiv.org/pdf/1911.11658v2.pdf
PWC https://paperswithcode.com/paper/a-user-study-of-perceived-carbon-footprint
Repo
Framework
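
The following sketch shows one plausible way to combine a Bradley-Terry-style pairwise model with active pair selection, in the spirit of the abstract above. The paper's actual model and informativeness criterion may differ; the actions, comparisons and learning rate below are made up for illustration.

```python
# A generic Bradley-Terry-style sketch: learn per-action "footprint" scores
# from pairwise comparisons, then pick the most informative next pair as the
# one whose outcome is most uncertain. Data and hyperparameters are made up.
import numpy as np
from itertools import combinations

n_actions = 5
scores = np.zeros(n_actions)             # log-scale perceived footprints
comparisons = [(0, 1, 1), (2, 3, 0)]     # (i, j, 1 if i judged larger than j)

def prob_i_larger(s, i, j):
    return 1.0 / (1.0 + np.exp(-(s[i] - s[j])))

# Crude gradient ascent on the Bradley-Terry log-likelihood.
for _ in range(200):
    grad = np.zeros_like(scores)
    for i, j, y in comparisons:
        p = prob_i_larger(scores, i, j)
        grad[i] += y - p
        grad[j] -= y - p
    scores += 0.1 * grad

# Active selection: query the pair whose predicted outcome is closest to 50/50.
pairs = list(combinations(range(n_actions), 2))
next_pair = min(pairs, key=lambda ij: abs(prob_i_larger(scores, *ij) - 0.5))
print(scores, next_pair)
```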

Deformable Medical Image Registration Using a Randomly-Initialized CNN as Regularization Prior

Title Deformable Medical Image Registration Using a Randomly-Initialized CNN as Regularization Prior
Authors Max-Heinrich Laves, Sontje Ihler, Tobias Ortmaier
Abstract We present deformable unsupervised medical image registration using a randomly-initialized deep convolutional neural network (CNN) as a regularization prior. Conventional registration methods predict a transformation by minimizing dissimilarities between an image pair. The minimization is usually regularized with manually engineered priors, which limits the potential of the registration. By learning transformation priors from a large dataset, CNNs have achieved great success in deformable registration. However, learned methods are restricted to domain-specific data, and the required amounts of medical data are difficult to obtain. Our approach uses the idea of deep image priors to combine convolutional networks with conventional registration methods based on manually engineered priors. The proposed method is applied to brain MRI scans. We show that our approach registers image pairs with state-of-the-art accuracy by providing dense, pixel-wise correspondence maps. It does not rely on prior training and is therefore not limited to a specific image domain.
Tasks Deformable Medical Image Registration, Image Registration, Medical Image Registration
Published 2019-08-02
URL https://arxiv.org/abs/1908.00788v1
PDF https://arxiv.org/pdf/1908.00788v1.pdf
PWC https://paperswithcode.com/paper/deformable-medical-image-registration-using-a
Repo
Framework
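
A condensed sketch of the deep-image-prior idea applied to registration is given below: a randomly initialized CNN maps a fixed noise input to a displacement field, and only its weights are optimized to align a single image pair. The architecture, similarity loss and hyperparameters are illustrative assumptions, not the authors' settings.

```python
# Deep-image-prior-style registration sketch: the randomly initialized CNN
# itself acts as the implicit regularizer while its weights are fit to one
# image pair. Network size, loss and optimizer settings are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

H = W = 64
fixed = torch.rand(1, 1, H, W)     # stand-in images; real use: an MRI pair
moving = torch.rand(1, 1, H, W)

net = nn.Sequential(               # tiny CNN producing a 2-channel flow
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 2, 3, padding=1),
)
z = torch.randn(1, 1, H, W)        # fixed random input, never optimized

# Identity sampling grid in [-1, 1] x [-1, 1] (grid_sample convention).
ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                        torch.linspace(-1, 1, W), indexing="ij")
identity = torch.stack([xs, ys], dim=-1).unsqueeze(0)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(200):
    flow = net(z).permute(0, 2, 3, 1)          # (1, H, W, 2) displacement
    warped = F.grid_sample(moving, identity + flow, align_corners=True)
    loss = F.mse_loss(warped, fixed)           # image dissimilarity only
    opt.zero_grad()
    loss.backward()
    opt.step()
```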

Data Poisoning Attacks on Neighborhood-based Recommender Systems

Title Data Poisoning Attacks on Neighborhood-based Recommender Systems
Authors Liang Chen, Yangjun Xu, Fenfang Xie, Min Huang, Zibin Zheng
Abstract Nowadays, collaborative filtering recommender systems are widely deployed by commercial companies to increase profit, and neighbourhood-based collaborative filtering is common and effective. To date, despite their effectiveness, there has been little effort to explore their robustness and the impact of data poisoning attacks on their performance. Can neighbourhood-based recommender systems be easily fooled? To answer this question, we shed light on the robustness of neighbourhood-based recommender systems and propose a novel data poisoning attack framework against them that encodes the attack's purpose and constraints. We first illustrate how to calculate the optimal data poisoning attack, namely UNAttack: we inject a few well-designed fake users into the recommender system such that target items will be recommended to as many normal users as possible. Extensive experiments are conducted on three real-world datasets to validate the effectiveness and transferability of the proposed method. In addition, some interesting phenomena emerge, for example: 1) neighbourhood-based recommender systems with Euclidean distance-based similarity are strongly robust, and 2) the fake users transfer to attack state-of-the-art collaborative filtering recommender systems such as Neural Collaborative Filtering and Bayesian Personalized Ranking Matrix Factorization.
Tasks data poisoning, Recommendation Systems
Published 2019-12-01
URL https://arxiv.org/abs/1912.04109v1
PDF https://arxiv.org/pdf/1912.04109v1.pdf
PWC https://paperswithcode.com/paper/data-poisoning-attacks-on-neighborhood-based
Repo
Framework
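
The toy NumPy example below sets up a user-based neighbourhood recommender and injects naive fake profiles that co-rate a target item with popular items. It only illustrates the attack surface; the optimization that defines UNAttack itself is not reproduced, and the data, neighbourhood size and number of fake users are arbitrary.

```python
# A toy user-based neighbourhood recommender plus a *naive* fake-user
# injection. This is not UNAttack's optimized attack, only a sketch of how
# injected profiles can shift neighbourhood-based recommendations.
import numpy as np

rng = np.random.default_rng(0)
R = rng.integers(0, 2, size=(20, 10)).astype(float)   # 20 users x 10 items

def recommend(R, user, k=5, top_n=3):
    sims = R @ R[user] / (np.linalg.norm(R, axis=1) * np.linalg.norm(R[user]) + 1e-9)
    sims[user] = -np.inf                               # exclude the user itself
    neighbours = np.argsort(sims)[-k:]
    scores = R[neighbours].mean(axis=0)
    scores[R[user] > 0] = -np.inf                      # skip already-seen items
    return np.argsort(scores)[-top_n:]

target_item = 7
popular = np.argsort(R.sum(axis=0))[-3:]               # co-rate popular items
fake = np.zeros((3, R.shape[1]))
fake[:, target_item] = 1.0
fake[:, popular] = 1.0
R_poisoned = np.vstack([R, fake])

print(recommend(R, 0), recommend(R_poisoned, 0))       # target may now appear
```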

Complex Valued Gated Auto-encoder for Video Frame Prediction

Title Complex Valued Gated Auto-encoder for Video Frame Prediction
Authors Niloofar Azizi, Nils Wandel, Sven Behnke
Abstract In recent years, complex-valued artificial neural networks have gained increasing interest, as they allow neural networks to learn richer representations while potentially using fewer parameters. Especially in the domain of computer graphics, many traditional operations rely heavily on computations in the complex domain, so complex-valued neural networks apply naturally. In this paper, we perform frame prediction in video sequences using a complex-valued gated auto-encoder. First, we motivate our method by showing how the Fourier transform can be seen as the basis for translational operations. Then, we present how a complex neural network can learn such transformations and compare its performance and parameter efficiency to a real-valued gated auto-encoder. Furthermore, we show how extending both the real- and complex-valued networks with convolutional units significantly improves prediction performance and parameter efficiency. The networks are assessed on a moving-noise and a bouncing-ball dataset.
Tasks
Published 2019-03-08
URL http://arxiv.org/abs/1903.03336v1
PDF http://arxiv.org/pdf/1903.03336v1.pdf
PWC https://paperswithcode.com/paper/complex-valued-gated-auto-encoder-for-video
Repo
Framework
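
As a building block of the kind such models rely on, the sketch below implements a complex-valued linear layer with real tensors, using (a + bi)(c + di) = (ac - bd) + (ad + bc)i. This is a generic construction for illustration, not the authors' gated auto-encoder architecture.

```python
# A complex-valued linear layer built from real tensors: the complex weight
# matrix W = W_re + i*W_im acts on x = x_re + i*x_im via the usual rule
# (a + bi)(c + di) = (ac - bd) + (ad + bc)i. Generic building block only.
import torch
import torch.nn as nn

class ComplexLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.re = nn.Linear(in_features, out_features, bias=False)  # W_re
        self.im = nn.Linear(in_features, out_features, bias=False)  # W_im

    def forward(self, x_re, x_im):
        out_re = self.re(x_re) - self.im(x_im)
        out_im = self.re(x_im) + self.im(x_re)
        return out_re, out_im

layer = ComplexLinear(8, 4)
x_re, x_im = torch.randn(2, 8), torch.randn(2, 8)
y_re, y_im = layer(x_re, x_im)
print(y_re.shape, y_im.shape)   # torch.Size([2, 4]) twice
```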

Revealing Perceptible Backdoors, without the Training Set, via the Maximum Achievable Misclassification Fraction Statistic

Title Revealing Perceptible Backdoors, without the Training Set, via the Maximum Achievable Misclassification Fraction Statistic
Authors Zhen Xiang, David J. Miller, George Kesidis
Abstract Recently, a special type of data poisoning (DP) attack, known as a backdoor, was proposed. These attacks aim to have a classifier learn to classify to a target class whenever the backdoor pattern is present in a test sample. In this paper, we address post-training detection of perceptible backdoor patterns in DNN image classifiers, wherein the defender does not have access to the poisoned training set, but only to the trained classifier itself, as well as to clean (unpoisoned) examples from the classification domain. This problem is challenging since a perceptible backdoor pattern could be any seemingly innocuous object in a scene, and, without the poisoned training set, we have no hint about the actual backdoor pattern used during training. We identify two important properties of perceptible backdoor patterns, based upon which we propose a novel detector using the maximum achievable misclassification fraction (MAMF) statistic. We detect whether the trained DNN has been backdoor-attacked and infer the source and target classes used for devising the attack. Our detector, with an easily chosen threshold, is evaluated on five datasets, five DNN structures and nine backdoor patterns, and shows strong detection capability. Coupled with an imperceptible backdoor detector, our approach helps achieve detection for all evasive backdoors of interest.
Tasks data poisoning
Published 2019-11-18
URL https://arxiv.org/abs/1911.07970v1
PDF https://arxiv.org/pdf/1911.07970v1.pdf
PWC https://paperswithcode.com/paper/revealing-perceptible-backdoors-without-the
Repo
Framework
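
A simplified sketch of estimating the MAMF statistic for one hypothetical (source, target) class pair follows: a small square patch is optimized on clean source-class images to push the classifier toward the target class, and the achieved misclassification fraction is reported. The patch shape, placement, optimizer and the `model`/`source_imgs` arguments are assumptions for illustration, not the paper's exact procedure.

```python
# Simplified MAMF-style estimate for one (source, target) pair: optimize a
# perceptible patch on clean source-class images and report the fraction
# pushed to the target class. Patch size/placement are illustrative choices.
import torch
import torch.nn.functional as F

def mamf_estimate(model, source_imgs, target_class, patch_size=6, steps=200):
    patch = torch.zeros(1, source_imgs.shape[1], patch_size, patch_size,
                        requires_grad=True)
    opt = torch.optim.Adam([patch], lr=0.05)
    target = torch.full((source_imgs.shape[0],), target_class, dtype=torch.long)
    for _ in range(steps):
        x = source_imgs.clone()
        x[:, :, :patch_size, :patch_size] = torch.sigmoid(patch)  # keep in [0,1]
        loss = F.cross_entropy(model(x), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        x = source_imgs.clone()
        x[:, :, :patch_size, :patch_size] = torch.sigmoid(patch)
        preds = model(x).argmax(dim=1)
    return (preds == target_class).float().mean().item()  # achieved fraction

# Hypothetical usage: mamf_estimate(cnn, clean_source_batch, target_class=3)
```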

Caveats in Generating Medical Imaging Labels from Radiology Reports

Title Caveats in Generating Medical Imaging Labels from Radiology Reports
Authors Tobi Olatunji, Li Yao, Ben Covington, Alexander Rhodes, Anthony Upton
Abstract Acquiring high-quality annotations in medical imaging is usually a costly process. Automatic label extraction with natural language processing (NLP) has emerged as a promising workaround to bypass the need for expert annotation. Despite the convenience, the limitations of such an approximation have not been carefully examined and are not well understood. With a challenging set of 1,000 chest X-ray studies and their corresponding radiology reports, we show that there exists a surprisingly large discrepancy between what radiologists visually perceive and what they clinically report. Furthermore, with inherently flawed reports as ground truth, state-of-the-art medical NLP fails to produce high-fidelity labels.
Tasks
Published 2019-05-06
URL https://arxiv.org/abs/1905.02283v1
PDF https://arxiv.org/pdf/1905.02283v1.pdf
PWC https://paperswithcode.com/paper/caveats-in-generating-medical-imaging-labels
Repo
Framework

Personalized explanation in machine learning: A conceptualization

Title Personalized explanation in machine learning: A conceptualization
Authors Johanes Schneider, Joshua Handali
Abstract Explanation in machine learning and related fields such as artificial intelligence aims at making machine learning models and their decisions understandable to humans. Existing work suggests that personalizing explanations might help to improve understandability. In this work, we derive a conceptualization of personalized explanation by defining and structuring the problem based on prior work on machine learning explanation, personalization (in machine learning), and concepts and techniques from other domains such as privacy and knowledge elicitation. We categorize the explainee data used in the process of personalization and describe means to collect this data. We also identify three key explanation properties that are amenable to personalization: complexity, decision information and presentation. Finally, we enhance existing work on explanation by introducing additional desiderata and measures to quantify the quality of personalized explanations.
Tasks
Published 2019-01-03
URL http://arxiv.org/abs/1901.00770v2
PDF http://arxiv.org/pdf/1901.00770v2.pdf
PWC https://paperswithcode.com/paper/personalized-explanation-in-machine-learning
Repo
Framework

Kernel Instrumental Variable Regression

Title Kernel Instrumental Variable Regression
Authors Rahul Singh, Maneesh Sahani, Arthur Gretton
Abstract Instrumental variable (IV) regression is a strategy for learning causal relationships in observational data. If measurements of input X and output Y are confounded, the causal relationship can nonetheless be identified if an instrumental variable Z is available that influences X directly, but is conditionally independent of Y given X and the unmeasured confounder. The classic two-stage least squares algorithm (2SLS) simplifies the estimation problem by modeling all relationships as linear functions. We propose kernel instrumental variable regression (KIV), a nonparametric generalization of 2SLS, modeling relations among X, Y, and Z as nonlinear functions in reproducing kernel Hilbert spaces (RKHSs). We prove the consistency of KIV under mild assumptions, and derive conditions under which convergence occurs at the minimax optimal rate for unconfounded, single-stage RKHS regression. In doing so, we obtain an efficient ratio between the training sample sizes used in the algorithm's first and second stages. In experiments, KIV outperforms state-of-the-art alternatives for nonparametric IV regression.
Tasks
Published 2019-06-01
URL https://arxiv.org/abs/1906.00232v5
PDF https://arxiv.org/pdf/1906.00232v5.pdf
PWC https://paperswithcode.com/paper/190600232
Repo
Framework
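
The NumPy sketch below walks through the two stages on a synthetic confounded problem with Gaussian kernels: stage 1 learns the conditional mean embedding of X given Z on one sample split, and stage 2 ridge-regresses Y on those embeddings on the other split. Kernel bandwidths, regularizers and the synthetic data are illustrative, and the paper's data-splitting and tuning procedure is omitted.

```python
# Compact two-stage KIV-style sketch with Gaussian kernels on toy data.
import numpy as np

def gauss_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)
n = m = 200
z = rng.normal(size=(n + m, 1))                     # instrument
conf = rng.normal(size=(n + m, 1))                  # unobserved confounder
x = z + conf + 0.1 * rng.normal(size=(n + m, 1))    # endogenous input
y = np.sin(x) + conf + 0.1 * rng.normal(size=(n + m, 1))

x1, z1 = x[:n], z[:n]                               # stage-1 split
z2, y2 = z[n:], y[n:]                               # stage-2 split
lam, xi = 1e-3, 1e-3

Kzz = gauss_kernel(z1, z1)
Kxx = gauss_kernel(x1, x1)
A = np.linalg.solve(Kzz + n * lam * np.eye(n), gauss_kernel(z1, z2))  # n x m
M = A.T @ Kxx @ A                                    # stage-2 Gram matrix
c = np.linalg.solve(M + m * xi * np.eye(m), y2)      # stage-2 ridge weights

def h(x_new):
    """Estimated structural function at new inputs."""
    return (c.T @ A.T @ gauss_kernel(x1, x_new)).T

print(h(np.array([[0.0], [1.0]])))    # roughly sin(0), sin(1) if the fit works
```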

Gradual Network for Single Image De-raining

Title Gradual Network for Single Image De-raining
Authors Zhe Huang, Weijiang Yu, Wayne Zhang, Litong Feng, Nong Xiao
Abstract Most advances in single image de-raining face a key challenge: removing rain streaks of different scales and shapes while preserving image details. Existing single image de-raining approaches treat rain-streak removal directly as pixel-wise regression. However, they fall short of balancing over-de-raining (e.g. removing texture details in rain-free regions) against under-de-raining (e.g. leaving rain streaks). In this paper, we first propose a coarse-to-fine network called Gradual Network (GraNet), consisting of a coarse stage and a fine stage, to address single image de-raining at different granularities. Specifically, to reveal coarse-grained rain-streak characteristics (e.g. long and thick rain streaks/raindrops), the coarse stage exploits local-global spatial dependencies via a local-global sub-network composed of region-aware blocks. Taking as input the residual (the coarse de-rained result) between the rainy image (i.e. the input data) and the output of the coarse stage (i.e. the learnt rain mask), the fine stage continues to de-rain by removing fine-grained rain streaks (e.g. light rain streaks and water mist), producing a rain-free and well-reconstructed output image via a unified contextual merging sub-network with dense blocks and a merging block. Extensive experiments on synthetic and real data demonstrate that GraNet significantly outperforms state-of-the-art methods by removing rain streaks of various densities, scales and shapes while keeping the image details of rain-free regions well preserved.
Tasks Rain Removal
Published 2019-09-20
URL https://arxiv.org/abs/1909.09677v1
PDF https://arxiv.org/pdf/1909.09677v1.pdf
PWC https://paperswithcode.com/paper/190909677
Repo
Framework
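
A bare-bones coarse-to-fine sketch following the structure described above: a coarse stage predicts a rain mask (residual) and a fine stage refines the coarse de-rained image. The plain convolutions below stand in for GraNet's region-aware and dense blocks and are placeholders for illustration only.

```python
# Coarse-to-fine de-raining skeleton: coarse stage learns the rain layer,
# fine stage removes what is left. Plain conv blocks replace GraNet's
# region-aware / dense blocks purely for illustration.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU())

class CoarseToFineDerain(nn.Module):
    def __init__(self):
        super().__init__()
        self.coarse = nn.Sequential(conv_block(3, 32), conv_block(32, 32),
                                    nn.Conv2d(32, 3, 3, padding=1))
        self.fine = nn.Sequential(conv_block(3, 32), conv_block(32, 32),
                                  nn.Conv2d(32, 3, 3, padding=1))

    def forward(self, rainy):
        rain_mask = self.coarse(rainy)              # coarse stage: rain layer
        coarse_out = rainy - rain_mask              # coarse de-rained result
        return coarse_out - self.fine(coarse_out)   # fine stage refinement

out = CoarseToFineDerain()(torch.randn(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 3, 64, 64])
```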

Video Surveillance of Highway Traffic Events by Deep Learning Architectures

Title Video Surveillance of Highway Traffic Events by Deep Learning Architectures
Authors Matteo Tiezzi, Stefano Melacci, Marco Maggini, Angelo Frosini
Abstract In this paper we describe a video surveillance system able to detect traffic events in videos acquired by fixed video cameras on highways. The events of interest consist of a specific sequence of situations occurring in the video, for instance a vehicle stopping on the emergency lane. Hence, detecting these events requires analyzing a temporal sequence in the video stream. We compare different approaches that exploit architectures based on Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs). A first approach extracts vectors of features, mostly related to motion, from each video frame and exploits an RNN fed with the resulting sequence of vectors. The other approaches work directly on the sequence of frames, optionally enriched with pixel-wise motion information. The obtained stream is processed by an architecture that stacks a CNN and an RNN, and we also investigate a transfer-learning-based model. The results are very promising, and the best architecture will be tested online in real operative conditions.
Tasks Transfer Learning
Published 2019-09-06
URL https://arxiv.org/abs/1909.12235v1
PDF https://arxiv.org/pdf/1909.12235v1.pdf
PWC https://paperswithcode.com/paper/video-surveillance-of-highway-traffic-events
Repo
Framework
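
A generic CNN-plus-RNN stack of the kind compared in the paper can be sketched as follows: a small CNN encodes each frame and an LSTM aggregates the sequence into an event prediction. Layer sizes and the number of event classes are illustrative assumptions, not the paper's configuration.

```python
# Generic frame-sequence classifier: per-frame CNN features fed to an LSTM,
# with the final hidden state mapped to event logits. Sizes are illustrative.
import torch
import torch.nn as nn

class FrameSequenceClassifier(nn.Module):
    def __init__(self, n_events=4, feat_dim=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.rnn = nn.LSTM(feat_dim, 128, batch_first=True)
        self.head = nn.Linear(128, n_events)

    def forward(self, clips):                  # clips: (B, T, 3, H, W)
        B, T = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1))  # encode every frame
        feats = feats.view(B, T, -1)
        _, (h_n, _) = self.rnn(feats)          # last hidden state summarizes clip
        return self.head(h_n[-1])

model = FrameSequenceClassifier()
logits = model(torch.randn(2, 8, 3, 64, 64))   # 2 clips of 8 frames
print(logits.shape)                             # torch.Size([2, 4])
```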

Confidence Measure Guided Single Image De-raining

Title Confidence Measure Guided Single Image De-raining
Authors Rajeev Yasarla, Vishal M. Patel
Abstract Single image de-raining is an extremely challenging problem since rainy images contain rain streaks that often vary in size, direction and density. This varying characteristic of rain streaks affects different parts of the image differently. Previous approaches have attempted to address this problem by leveraging prior information to remove rain streaks from a single image. One of the major limitations of these approaches is that they do not consider the location information of raindrops in the image. The proposed Image Quality-based single image Deraining using Confidence measure (QuDeC) network addresses this issue by learning the quality or distortion level of each patch in the rainy image, and further processes this information to learn the rain content at different scales. In addition, we introduce a technique which guides the network to learn the network weights based on a confidence measure about the estimate of both the quality at each location and the residual rain-streak information (residual map). Extensive experiments on synthetic and real datasets demonstrate that the proposed method achieves significant improvements over recent state-of-the-art methods.
Tasks Rain Removal, Single Image Deraining
Published 2019-09-10
URL https://arxiv.org/abs/1909.04207v1
PDF https://arxiv.org/pdf/1909.04207v1.pdf
PWC https://paperswithcode.com/paper/confidence-measure-guided-single-image-de
Repo
Framework

Long Range Neural Navigation Policies for the Real World

Title Long Range Neural Navigation Policies for the Real World
Authors Ayzaan Wahid, Alexander Toshev, Marek Fiser, Tsang-Wei Edward Lee
Abstract Learned neural network policies have shown promising results for robot navigation. However, most of these approaches fall short of being usable on a real robot due to the extensive simulated training they require. These simulations lack the visuals and dynamics of the real world, which makes the resulting policies infeasible to deploy on a real robot. We present a novel neural-network-based policy, NavNet, which allows for easy deployment on a real robot. It consists of two sub-policies: a high-level policy that understands real images and performs long-range planning expressed in high-level commands, and a low-level policy that translates the long-range plan into low-level commands on a specific platform in a safe and robust manner. For every new deployment, the high-level policy is trained on an easily obtainable scan of the environment modeling its visuals and layout. We detail the design of such an environment and how one can use it for training a final navigation policy. Further, we demonstrate a learned low-level policy. We deploy the model in a large office building and test it extensively, achieving a $0.80$ success rate over long navigation runs and outperforming SLAM-based models in the same settings.
Tasks Robot Navigation
Published 2019-03-23
URL https://arxiv.org/abs/1903.09870v2
PDF https://arxiv.org/pdf/1903.09870v2.pdf
PWC https://paperswithcode.com/paper/long-range-neural-navigation-policies-for-the
Repo
Framework