October 18, 2019

3215 words 16 mins read

Paper Group ANR 550



Escaping from Collapsing Modes in a Constrained Space

Title Escaping from Collapsing Modes in a Constrained Space
Authors Chia-Che Chang, Chieh Hubert Lin, Che-Rung Lee, Da-Cheng Juan, Wei Wei, Hwann-Tzong Chen
Abstract Generative adversarial networks (GANs) often suffer from unpredictable mode collapse during training. We study the issue of mode collapse in the Boundary Equilibrium Generative Adversarial Network (BEGAN), which is one of the state-of-the-art generative models. Despite its potential for generating high-quality images, we find that BEGAN tends to collapse at some modes after a period of training. We propose a new model, called \emph{BEGAN with a Constrained Space} (BEGAN-CS), which includes a latent-space constraint in the loss function. We show that BEGAN-CS can significantly improve training stability and suppress mode collapse without either increasing the model complexity or degrading the image quality. Further, we visualize the distribution of latent vectors to elucidate the effect of the latent-space constraint. The experimental results show that our method has the additional advantages of being able to train on small datasets and to generate images similar to a given real image yet with variations of designated attributes on-the-fly.
Tasks
Published 2018-08-22
URL http://arxiv.org/abs/1808.07258v1
PDF http://arxiv.org/pdf/1808.07258v1.pdf
PWC https://paperswithcode.com/paper/escaping-from-collapsing-modes-in-a
Repo
Framework
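
A rough sketch of the idea: a latent-space constraint added as an extra term to a BEGAN-style generator objective. The `discriminator.encode` interface and the weight `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def began_cs_generator_loss(z, fake_img, discriminator, alpha=0.1):
    """Illustrative BEGAN-CS-style generator loss: the usual BEGAN
    autoencoder reconstruction term plus a latent-space constraint that
    encourages the discriminator's encoder to recover the sampled code z.
    `discriminator.encode` and `alpha` are assumptions for this sketch."""
    recon = discriminator(fake_img)              # autoencoding discriminator
    ae_loss = F.l1_loss(recon, fake_img)         # standard BEGAN term
    z_hat = discriminator.encode(fake_img)       # latent code of the fake
    latent_constraint = F.l1_loss(z_hat, z)      # constrained-space term
    return ae_loss + alpha * latent_constraint
```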

Area-preserving mapping of 3D ultrasound carotid artery images using density-equalizing reference map

Title Area-preserving mapping of 3D ultrasound carotid artery images using density-equalizing reference map
Authors Gary P. T. Choi, Bernard Chiu, Chris H. Rycroft
Abstract Carotid atherosclerosis is a focal disease at the bifurcations of the carotid artery. To quantitatively monitor the local changes in the vessel-wall-plus-plaque thickness (VWT) and compare the VWT distributions for different patients or for the same patients at different ultrasound scanning sessions, a mapping technique is required to adjust for the geometric variability of different carotid artery models. In this work, we propose a novel method called density-equalizing reference map (DERM) for mapping 3D carotid surfaces to a standardized 2D carotid template, with an emphasis on preserving the local geometry of the carotid surface by minimizing the local area distortion. The initial map was generated by a previously described arc-length scaling (ALS) mapping method, which projects a 3D carotid surface onto a 2D non-convex L-shaped domain. A smooth and area-preserving flattened map was subsequently constructed by deforming the ALS map using the proposed algorithm that combines the density-equalizing map and the reference map techniques. This combination allows, for the first time, one-to-one mapping from a 3D surface to a standardized non-convex planar domain in an area-preserving manner. Evaluations using 20 carotid surface models show that the proposed method reduced the area distortion of the flattening maps by over 80% as compared to the ALS mapping method.
Tasks
Published 2018-12-09
URL http://arxiv.org/abs/1812.03434v1
PDF http://arxiv.org/pdf/1812.03434v1.pdf
PWC https://paperswithcode.com/paper/area-preserving-mapping-of-3d-ultrasound
Repo
Framework
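
A minimal sketch of the evaluation criterion the abstract reports: per-triangle area distortion between a 3D carotid surface mesh and its 2D flattening. The triangle-mesh representation is an assumption; the DERM algorithm itself is not reproduced here.

```python
import numpy as np

def triangle_areas(vertices, faces):
    """Unsigned areas of mesh triangles; vertices may be 2D or 3D."""
    a = vertices[faces[:, 1]] - vertices[faces[:, 0]]
    b = vertices[faces[:, 2]] - vertices[faces[:, 0]]
    if vertices.shape[1] == 2:                   # planar (flattened) mesh
        return 0.5 * np.abs(a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0])
    return 0.5 * np.linalg.norm(np.cross(a, b), axis=1)

def log_area_distortion(v3d, v2d, faces):
    """Per-triangle log area ratio between the 3D surface and its flattening;
    identically zero for a perfectly area-preserving map (up to global scale)."""
    a3 = triangle_areas(v3d, faces)
    a2 = triangle_areas(v2d, faces)
    a3, a2 = a3 / a3.sum(), a2 / a2.sum()        # factor out global scaling
    return np.log(a2 / a3)
```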

Cooperative Starting Movement Detection of Cyclists Using Convolutional Neural Networks and a Boosted Stacking Ensemble

Title Cooperative Starting Movement Detection of Cyclists Using Convolutional Neural Networks and a Boosted Stacking Ensemble
Authors Maarten Bieshaar, Stefan Zernetsch, Andreas Hubert, Bernhard Sick, Konrad Doll
Abstract In the future, vehicles and other traffic participants will be interconnected and equipped with various types of sensors, allowing for cooperation on different levels, such as situation prediction or intention detection. In this article we present a cooperative approach to starting movement detection of cyclists using a boosted stacking ensemble realizing feature- and decision-level cooperation. We introduce a novel method based on a 3D Convolutional Neural Network (CNN) to detect starting motions in image sequences by learning spatio-temporal features. The CNN is complemented by a smart-device-based starting movement detection using devices carried by the cyclist. Both model outputs are combined in a stacking ensemble using an extreme gradient boosting classifier, resulting in a fast yet robust cooperative starting movement detector. We evaluate our cooperative approach on real-world data originating from experiments with 49 test subjects, comprising 84 starting motions.
Tasks
Published 2018-03-09
URL http://arxiv.org/abs/1803.03487v2
PDF http://arxiv.org/pdf/1803.03487v2.pdf
PWC https://paperswithcode.com/paper/cooperative-starting-movement-detection-of
Repo
Framework
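
A hedged sketch of the decision-level stacking step: first-stage probabilities from the 3D CNN and the smart-device detector feed an XGBoost meta-classifier. Array shapes and hyperparameters are illustrative stand-ins, and the `xgboost` package is assumed.

```python
import numpy as np
from xgboost import XGBClassifier

# Stand-ins for first-stage scores; in the paper these come from the 3D CNN
# on image sequences and from the smart-device movement detector.
cnn_probs = np.random.rand(1000, 1)
device_probs = np.random.rand(1000, 1)
labels = np.random.randint(0, 2, 1000)     # 1 = starting motion

# Stacking ensemble: an extreme-gradient-boosting meta-classifier combines
# both detectors' outputs into one cooperative starting-movement score.
meta_features = np.hstack([cnn_probs, device_probs])
meta_clf = XGBClassifier(n_estimators=100, max_depth=3)
meta_clf.fit(meta_features, labels)
cooperative_score = meta_clf.predict_proba(meta_features)[:, 1]
```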

Deep saliency: What is learnt by a deep network about saliency?

Title Deep saliency: What is learnt by a deep network about saliency?
Authors Sen He, Nicolas Pugeault
Abstract Deep convolutional neural networks have achieved impressive performance on a broad range of problems, beating prior art on established benchmarks, but it often remains unclear what representations are learnt by those systems and how they achieve such performance. This article examines the specific problem of saliency detection, where benchmarks are currently dominated by CNN-based approaches, and investigates the properties of the learnt representation by visualizing the artificial neurons’ receptive fields. We demonstrate that fine-tuning a pre-trained network on the saliency detection task leads to a profound transformation of the network’s deeper layers. Moreover, we argue that this transformation leads to the emergence of receptive fields conceptually similar to the centre-surround filters hypothesized by early research on visual saliency.
Tasks Saliency Detection
Published 2018-01-12
URL http://arxiv.org/abs/1801.04261v2
PDF http://arxiv.org/pdf/1801.04261v2.pdf
PWC https://paperswithcode.com/paper/deep-saliency-what-is-learnt-by-a-deep
Repo
Framework
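
The abstract describes visualizing learnt receptive fields; one common, generic way to do this is activation maximization by gradient ascent on the input, sketched below with PyTorch/torchvision. The layer and channel indices are arbitrary choices, and this is not necessarily the authors' exact procedure.

```python
import torch
from torchvision import models

# Gradient ascent on the input image to maximize one channel's activation,
# revealing the pattern that unit responds to most strongly.
model = models.vgg16(weights="IMAGENET1K_V1").features.eval()
img = torch.randn(1, 3, 224, 224, requires_grad=True)
layer, channel = 20, 42                      # illustrative choices

optimizer = torch.optim.Adam([img], lr=0.05)
for _ in range(200):
    optimizer.zero_grad()
    x = img
    for i, m in enumerate(model):            # forward up to the target layer
        x = m(x)
        if i == layer:
            break
    loss = -x[0, channel].mean()             # maximize mean activation
    loss.backward()
    optimizer.step()
```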

Deep feature transfer between localization and segmentation tasks

Title Deep feature transfer between localization and segmentation tasks
Authors Szu-Yeu Hu, Andrew Beers, Ken Chang, Kathi Höbel, J. Peter Campbell, Deniz Erdogumus, Stratis Ioannidis, Jennifer Dy, Michael F. Chiang, Jayashree Kalpathy-Cramer, James M. Brown
Abstract In this paper, we propose a new pre-training scheme for U-net based image segmentation. We first train the encoding arm as a localization network to predict the center of the target, before extending it into a U-net architecture for segmentation. We apply our proposed method to the problem of segmenting the optic disc from fundus photographs. Our work shows that the features learned by the encoding arm can be transferred to the segmentation network to reduce the annotation burden. We propose that this approach could have broad utility for medical image segmentation, alleviating the burden of delineating complex structures by pre-training on annotations that are much easier to acquire.
Tasks Medical Image Segmentation, Semantic Segmentation
Published 2018-11-06
URL http://arxiv.org/abs/1811.02539v2
PDF http://arxiv.org/pdf/1811.02539v2.pdf
PWC https://paperswithcode.com/paper/deep-feature-transfer-between-localization
Repo
Framework
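
A minimal PyTorch sketch of the two-stage scheme as described: pre-train the encoding arm with a localization head that regresses the target's center, then reuse the same encoder in a segmentation network. Layer sizes are illustrative, and the real U-net would add skip connections between encoder and decoder.

```python
import torch.nn as nn

# Shared encoding arm (illustrative sizes).
encoder = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
)

# Stage 1: localization pre-training, regressing the (x, y) target center.
localizer = nn.Sequential(encoder, nn.AdaptiveAvgPool2d(1),
                          nn.Flatten(), nn.Linear(64, 2))

# Stage 2: extend into a U-net-style decoder; the encoder weights transfer.
decoder = nn.Sequential(
    nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
    nn.ConvTranspose2d(32, 1, 2, stride=2), nn.Sigmoid(),
)
segmenter = nn.Sequential(encoder, decoder)
```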

Per-Tensor Fixed-Point Quantization of the Back-Propagation Algorithm

Title Per-Tensor Fixed-Point Quantization of the Back-Propagation Algorithm
Authors Charbel Sakr, Naresh Shanbhag
Abstract The high computational and parameter complexity of neural networks makes their training very slow and their deployment on energy- and storage-constrained computing systems difficult. Many network-complexity reduction techniques have been proposed, including fixed-point implementation. However, a systematic approach for designing full fixed-point training and inference of deep neural networks remains elusive. We describe a precision assignment methodology for neural network training in which all network parameters, i.e., activations and weights in the feedforward path, gradients and weight accumulators in the feedback path, are assigned close to minimal precision. The precision assignment is derived analytically and enables tracking the convergence behavior of the full-precision training, known to converge a priori. Thus, our work leads to a systematic methodology for determining suitable precision for fixed-point training. The near-optimality (minimality) of the resulting precision assignment is validated empirically for four networks on the CIFAR-10, CIFAR-100, and SVHN datasets. The complexity reduction arising from our approach is compared with other fixed-point neural network designs.
Tasks Quantization
Published 2018-12-31
URL http://arxiv.org/abs/1812.11732v1
PDF http://arxiv.org/pdf/1812.11732v1.pdf
PWC https://paperswithcode.com/paper/per-tensor-fixed-point-quantization-of-the
Repo
Framework
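
A generic fixed-point quantizer, for intuition only; the paper's contribution is the analytical assignment of per-tensor bit widths, which is not reproduced here.

```python
import numpy as np

def to_fixed_point(x, int_bits, frac_bits):
    """Quantize a tensor to signed fixed point with the given integer and
    fractional bit widths (a generic scheme; the paper derives how many bits
    each tensor type -- activations, weights, gradients, accumulators -- needs)."""
    scale = 2.0 ** frac_bits
    lo = -(2.0 ** (int_bits + frac_bits - 1))
    hi = 2.0 ** (int_bits + frac_bits - 1) - 1
    return np.clip(np.round(x * scale), lo, hi) / scale

w = np.random.randn(4, 4)
w_q = to_fixed_point(w, int_bits=2, frac_bits=6)   # e.g. an 8-bit weight tensor
```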

Just Interpolate: Kernel “Ridgeless” Regression Can Generalize

Title Just Interpolate: Kernel “Ridgeless” Regression Can Generalize
Authors Tengyuan Liang, Alexander Rakhlin
Abstract In the absence of explicit regularization, Kernel “Ridgeless” Regression with nonlinear kernels has the potential to fit the training data perfectly. It has been observed empirically, however, that such interpolated solutions can still generalize well on test data. We isolate a phenomenon of implicit regularization for minimum-norm interpolated solutions which is due to a combination of high dimensionality of the input data, curvature of the kernel function, and favorable geometric properties of the data such as an eigenvalue decay of the empirical covariance and kernel matrices. In addition to deriving a data-dependent upper bound on the out-of-sample error, we present experimental evidence suggesting that the phenomenon occurs in the MNIST dataset.
Tasks
Published 2018-08-01
URL http://arxiv.org/abs/1808.00387v2
PDF http://arxiv.org/pdf/1808.00387v2.pdf
PWC https://paperswithcode.com/paper/just-interpolate-kernel-ridgeless-regression
Repo
Framework
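
The core construction is easy to state concretely: the minimum-norm interpolant solves the kernel system exactly, with no ridge term added. A minimal NumPy sketch (the RBF kernel and random data are illustrative):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# "Ridgeless": solve K alpha = y with no lambda * I, so the fitted function
# interpolates every training point exactly.
X = np.random.randn(50, 10)
y = np.random.randn(50)
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K, y)

X_test = np.random.randn(5, 10)
y_pred = rbf_kernel(X_test, X) @ alpha
```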

predictSLUMS: A new model for identifying and predicting informal settlements and slums in cities from street intersections using machine learning

Title predictSLUMS: A new model for identifying and predicting informal settlements and slums in cities from street intersections using machine learning
Authors Mohamed R. Ibrahim, Helena Titheridge, Tao Cheng, James Haworth
Abstract Identifying current and future informal regions within cities remains a crucial issue for policymakers and governments in developing countries. The delineation process of identifying such regions in cities requires a lot of resources. While there are various studies that identify informal settlements based on satellite image classification, relying on both supervised and unsupervised machine learning approaches, these models either require multiple input data to function or need further development with regard to precision. In this paper, we introduce a novel method for identifying and predicting informal settlements using only street intersection data, regardless of variation in urban form, number of floors, construction materials or street width. With such minimal input data, we attempt to provide planners and policy-makers with a pragmatic tool that can aid in identifying informal zones in cities. The algorithm of the model is based on spatial statistics and a machine learning approach, using Multinomial Logistic Regression (MNL) and Artificial Neural Networks (ANN). The proposed model relies on defining informal settlements by two ubiquitous characteristics: these regions tend to be filled with smaller subdivided housing lots relative to the formal areas within the local context, and they suffer a paucity of the services and infrastructure, within their boundaries, that require relatively bigger lots. We applied the model in five major cities in Egypt and India that have spatial structures in which informality is present: Greater Cairo, Alexandria, Hurghada and Minya in Egypt, and Mumbai in India. The predictSLUMS model shows high validity and accuracy for identifying and predicting informality within the same city the model was trained on or in different cities of a similar context.
Tasks Image Classification
Published 2018-08-14
URL http://arxiv.org/abs/1808.06470v1
PDF http://arxiv.org/pdf/1808.06470v1.pdf
PWC https://paperswithcode.com/paper/predictslums-a-new-model-for-identifying-and
Repo
Framework
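
A hedged sklearn sketch of the two classifiers the abstract names, MNL and ANN, trained on placeholder intersection-derived features; the paper's actual spatial statistics and labels are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Illustrative stand-ins for intersection-density statistics per zone.
X = np.random.rand(500, 3)
y = np.random.randint(0, 2, 500)      # 1 = informal settlement

# Multinomial Logistic Regression (MNL) and a small Artificial Neural Network.
mnl = LogisticRegression(multi_class="multinomial", max_iter=1000).fit(X, y)
ann = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000).fit(X, y)
```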

Automated Test Generation to Detect Individual Discrimination in AI Models

Title Automated Test Generation to Detect Individual Discrimination in AI Models
Authors Aniya Agarwal, Pranay Lohia, Seema Nagar, Kuntal Dey, Diptikalyan Saha
Abstract Dependability of AI models is of utmost importance to ensure full acceptance of AI systems. One of the key aspects of a dependable AI system is to ensure that all its decisions are fair and not biased towards any individual. In this paper, we address the problem of detecting whether a model exhibits individual discrimination. Such discrimination exists when two individuals who differ only in the values of their protected attributes (such as gender or race), while the values of their non-protected attributes are exactly the same, receive different decisions. Measuring individual discrimination requires exhaustive testing, which is infeasible for a non-trivial system. In this paper, we present an automated technique to generate test inputs, geared towards finding individual discrimination. Our technique combines the well-known technique of symbolic execution with local explainability to generate effective test cases. Our experimental results clearly demonstrate that our technique produces 3.72 times more successful test cases than the existing state-of-the-art across all our chosen benchmarks.
Tasks
Published 2018-09-10
URL http://arxiv.org/abs/1809.03260v1
PDF http://arxiv.org/pdf/1809.03260v1.pdf
PWC https://paperswithcode.com/paper/automated-test-generation-to-detect
Repo
Framework
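
The definition of individual discrimination above translates directly into a brute-force check: flip the protected attribute and see whether the decision changes. The paper generates such inputs far more efficiently via symbolic execution and local explanations; the sketch below only illustrates the criterion.

```python
import numpy as np

def individual_discrimination_cases(model, X, protected_idx, values):
    """Flag inputs whose model decision changes when only the protected
    attribute is varied -- the definition of individual discrimination above.
    `model` is any classifier with a sklearn-style predict()."""
    flagged = []
    for x in X:
        preds = set()
        for v in values:
            x2 = x.copy()
            x2[protected_idx] = v           # vary only the protected attribute
            preds.add(int(model.predict(x2.reshape(1, -1))[0]))
        if len(preds) > 1:                  # decision changed: discrimination
            flagged.append(x)
    return np.array(flagged)
```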

Structural Risk Minimization for $C^{1,1}(\mathbb{R}^d)$ Regression

Title Structural Risk Minimization for $C^{1,1}(\mathbb{R}^d)$ Regression
Authors Adam Gustafson, Matthew Hirn, Kitty Mohammed, Hariharan Narayanan, Jason Xu
Abstract One means of fitting functions to high-dimensional data is by providing smoothness constraints. Recently, the following smooth function approximation problem was proposed: given a finite set $E \subset \mathbb{R}^d$ and a function $f: E \rightarrow \mathbb{R}$, interpolate the given information with a function $\widehat{f} \in \dot{C}^{1, 1}(\mathbb{R}^d)$ (the class of first-order differentiable functions with Lipschitz gradients) such that $\widehat{f}(a) = f(a)$ for all $a \in E$, and the value of $\mathrm{Lip}(\nabla \widehat{f})$ is minimal. An algorithm is provided that constructs such an approximating function $\widehat{f}$ and estimates the optimal Lipschitz constant $\mathrm{Lip}(\nabla \widehat{f})$ in the noiseless setting. We address statistical aspects of reconstructing the approximating function $\widehat{f}$ from a closely-related class $C^{1, 1}(\mathbb{R}^d)$ given samples from noisy data. We observe independent and identically distributed samples $y(a) = f(a) + \xi(a)$ for $a \in E$, where $\xi(a)$ is a noise term and the set $E \subset \mathbb{R}^d$ is fixed and known. We obtain uniform bounds relating the empirical risk and true risk over the class $\mathcal{F}_{\widetilde{M}} = \{f \in C^{1, 1}(\mathbb{R}^d) \mid \mathrm{Lip}(\nabla f) \leq \widetilde{M}\}$, where the quantity $\widetilde{M}$ grows with the number of samples at a rate governed by the metric entropy of the class $C^{1, 1}(\mathbb{R}^d)$. Finally, we provide an implementation using Vaidya’s algorithm, supporting our results via numerical experiments on simulated data.
Tasks
Published 2018-03-29
URL http://arxiv.org/abs/1803.10884v2
PDF http://arxiv.org/pdf/1803.10884v2.pdf
PWC https://paperswithcode.com/paper/structural-risk-minimization-for-c11mathbbrd
Repo
Framework
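
For concreteness, the structural risk minimization problem implied by the abstract can be written as follows (squared loss is an assumption for illustration; the paper's exact risk functional may differ):

```latex
% ERM over the smoothness-constrained class from the abstract:
\widehat{f}_{\widetilde{M}}
  \;=\; \operatorname*{arg\,min}_{f \in \mathcal{F}_{\widetilde{M}}}
        \frac{1}{|E|} \sum_{a \in E} \bigl( y(a) - f(a) \bigr)^2,
\qquad
\mathcal{F}_{\widetilde{M}}
  = \bigl\{\, f \in C^{1,1}(\mathbb{R}^d) \;\big|\; \mathrm{Lip}(\nabla f) \le \widetilde{M} \,\bigr\}.
```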

Fairness for Whom? Critically reframing fairness with Nash Welfare Product

Title Fairness for Whom? Critically reframing fairness with Nash Welfare Product
Authors Ansh Patel
Abstract Recent studies on disparate impact in machine learning applications have sparked a debate around the concept of fairness, along with attempts to formalize its different criteria. Many of these approaches focus on reducing prediction errors while maximizing the sole utility of the institution. This work seeks to reconceptualize and critically frame the existing discourse on fairness by underlining the implicit biases embedded in common understandings of fairness in the literature and how they contrast with its corresponding economic and legal definitions. This paper expands the concepts of utility and fairness by bringing in concepts from established literature in welfare economics and game theory. We then translate these concepts to the algorithmic prediction domain by defining a formalization of the Nash Welfare Product that expands utility by collapsing that of the institution using the prediction tool and that of the individual subject to the prediction into one function. We then apply a modulating function that makes the fairness and welfare trade-offs explicit based on designated policy goals, and extend it to a temporal model to take into account the effects of decisions beyond the scope of one-shot predictions. We apply this to a binary classification problem and present results of a multi-epoch simulation based on the UCI Adult Income dataset and a test-case analysis of the ProPublica recidivism dataset, which show that expanding the concept of utility results in a fairer distribution, correcting for the embedded biases in the dataset without sacrificing classifier accuracy.
Tasks
Published 2018-10-19
URL http://arxiv.org/abs/1810.08540v1
PDF http://arxiv.org/pdf/1810.08540v1.pdf
PWC https://paperswithcode.com/paper/fairness-for-whom-critically-reframing
Repo
Framework
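
A loose, illustrative reading of the Nash Welfare Product objective: collapse the institution's utility and the individuals' utilities into one product, with a modulating exponent standing in for the paper's modulating function. This is a sketch of the concept, not the paper's formalization.

```python
import numpy as np

def nash_welfare_product(u_institution, u_individuals, gamma=1.0):
    """Illustrative NWP-style objective: the institution's utility and the
    prediction subjects' utilities collapsed into one product. `gamma` is a
    stand-in for the modulating function that makes the fairness/welfare
    trade-off explicit; the paper's exact form is not reproduced here."""
    return u_institution * np.prod(np.asarray(u_individuals) ** gamma)
```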

Stochastic Dynamic Programming Heuristics for Influence Maximization-Revenue Optimization

Title Stochastic Dynamic Programming Heuristics for Influence Maximization-Revenue Optimization
Authors Trisha Lawrence
Abstract The well-known Influence Maximization (IM) problem has been actively studied by researchers over the past decade, with emphasis on marketing and social networks. Existing research has obtained solutions to the IM problem by obtaining the influence spread and utilizing the property of submodularity. This paper is based on a novel approach to the IM problem geared towards optimizing clicks, and consequently revenue, within an Online Social Network (OSN). Our approach diverges from existing approaches by adopting a novel, decision-making perspective through implementing Stochastic Dynamic Programming (SDP). Thus, we define a new problem, Influence Maximization-Revenue Optimization (IM-RO), and propose SDP as a method by which this problem can be solved. The SDP method has lucrative gains for an advertiser in terms of optimizing clicks and generating revenue; however, one drawback to the method is its associated “curse of dimensionality”, particularly for problems involving a large state space. Thus, we introduce the Lawrence Degree Heuristic (LDH), Adaptive Hill-Climbing (AHC) and Multistage Particle Swarm Optimization (MPSO) heuristics as methods which are orders of magnitude faster than the SDP method whilst achieving near-optimal results. Through a comparative analysis on various synthetic and real-world networks, we present the AHC and LDH as heuristics well suited to the IM-RO problem in terms of their accuracy, running times and scalability under ideal model parameters. In this paper we also present a compelling survey of the SDP method as a practical and lucrative method for spreading information and optimizing revenue within the context of OSNs.
Tasks Decision Making
Published 2018-02-28
URL http://arxiv.org/abs/1802.10515v2
PDF http://arxiv.org/pdf/1802.10515v2.pdf
PWC https://paperswithcode.com/paper/stochastic-dynamic-programming-heuristics-for
Repo
Framework
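
A toy finite-horizon SDP (backward value iteration) to make the method concrete; the state space, transitions and rewards below are random stand-ins, not the IM-RO model.

```python
import numpy as np

# Tiny stochastic dynamic program: at each stage choose an action (e.g. a seed
# set) to maximize the expected revenue-to-go.
n_states, n_actions, horizon = 10, 4, 5
P = np.random.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = np.random.rand(n_states, n_actions)         # expected revenue per step

V = np.zeros(n_states)
for t in reversed(range(horizon)):
    Q = R + P @ V                               # Q[s, a] = r(s, a) + E[V(s')]
    V = Q.max(axis=1)                           # Bellman backup
policy = Q.argmax(axis=1)                       # greedy stage-0 policy
```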

Integrating Episodic Memory into a Reinforcement Learning Agent using Reservoir Sampling

Title Integrating Episodic Memory into a Reinforcement Learning Agent using Reservoir Sampling
Authors Kenny J. Young, Richard S. Sutton, Shuo Yang
Abstract Episodic memory is a psychology term which refers to the ability to recall specific events from the past. We suggest one advantage of this particular type of memory is the ability to easily assign credit to a specific state when remembered information is found to be useful. Inspired by this idea, and by the increasing popularity of external memory mechanisms for handling long-term dependencies in deep learning systems, we propose a novel algorithm which uses a reservoir sampling procedure to maintain an external memory consisting of a fixed number of past states. The algorithm allows a deep reinforcement learning agent to learn online to preferentially remember those states which are found to be useful to recall later on. Critically, this method allows for efficient online computation of gradient estimates with respect to the write process of the external memory. Thus, unlike most prior mechanisms for external memory, it is feasible to use in an online reinforcement learning setting.
Tasks
Published 2018-06-01
URL http://arxiv.org/abs/1806.00540v1
PDF http://arxiv.org/pdf/1806.00540v1.pdf
PWC https://paperswithcode.com/paper/integrating-episodic-memory-into-a
Repo
Framework
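
The memory-maintenance step rests on classic reservoir sampling, sketched below; the paper's learned, differentiable write process is not reproduced, only the uniform sampler it builds on.

```python
import random

class ReservoirMemory:
    """Fixed-size external memory of past states maintained by classic
    reservoir sampling: after t writes, each state seen so far is retained
    with equal probability capacity/t."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.memory = []
        self.t = 0

    def write(self, state):
        self.t += 1
        if len(self.memory) < self.capacity:
            self.memory.append(state)
        else:
            j = random.randrange(self.t)     # uniform over all states so far
            if j < self.capacity:
                self.memory[j] = state       # evict a random slot
```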

What is an Ontology?

Title What is an Ontology?
Authors Fabian Neuhaus
Abstract In the knowledge engineering community “ontology” is usually defined in the tradition of Gruber as an “explicit specification of a conceptualization”. Several variations of this definition exist. In the paper we argue that (with one notable exception) these definitions are of no explanatory value, because they violate one of the basic rules for good definitions: The defining statement (the definiens) should be clearer than the term that is defined (the definiendum). In the paper we propose a different definition of “ontology” and discuss how it helps to explain various phenomena: the ability of ontologies to change, the role of the choice of vocabulary, the significance of annotations, the possibility of collaborative ontology development, and the relationship between ontological conceptualism and ontological realism.
Tasks
Published 2018-10-22
URL http://arxiv.org/abs/1810.09171v1
PDF http://arxiv.org/pdf/1810.09171v1.pdf
PWC https://paperswithcode.com/paper/what-is-an-ontology
Repo
Framework

The importance of being dissimilar in Recommendation

Title The importance of being dissimilar in Recommendation
Authors Vito Walter Anelli, Joseph Trotta, Tommaso Di Noia, Eugenio Di Sciascio, Azzurra Ragone
Abstract Similarity measures play a fundamental role in memory-based nearest-neighbor approaches. They recommend items to a user based on the similarity of either items or users in a neighborhood. In this paper we argue that, although similarity remains of leading importance in computing recommendations, the similarity between users or items should be paired with a value of dissimilarity (computed not merely as the complement of the similarity value). We formally modeled and injected this notion into some of the most widely used similarity measures, and evaluated our approach, showing its effectiveness in terms of accuracy.
Tasks
Published 2018-07-11
URL https://arxiv.org/abs/1807.04207v2
PDF https://arxiv.org/pdf/1807.04207v2.pdf
PWC https://paperswithcode.com/paper/the-importance-of-being-dissimilar-in
Repo
Framework
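
A toy sketch of pairing similarity with an explicitly computed dissimilarity, rather than treating dissimilarity as 1 − similarity. The combination rule, the dissimilarity measure and `beta` are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def paired_score(sim, dissim, beta=0.5):
    """Combine a similarity value with a separately computed dissimilarity;
    the linear rule and `beta` are illustrative only."""
    return sim - beta * dissim

def cosine_sim(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy example: dissimilarity focused on items both users actually rated.
u = np.array([5.0, 1.0, 4.0, 0.0])           # 0 = unrated
v = np.array([4.0, 2.0, 0.0, 5.0])
rated_both = (u > 0) & (v > 0)
dissim = np.abs(u[rated_both] - v[rated_both]).mean() / 4.0  # scaled to [0, 1]
score = paired_score(cosine_sim(u, v), dissim)
```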