January 28, 2020


Paper Group ANR 798

Applications of Nature-Inspired Algorithms for Dimension Reduction: Enabling Efficient Data Analytics. Simultaneous Detection and Removal of Dynamic Objects in Multi-view Images. Training Object Detectors from Few Weakly-Labeled and Many Unlabeled Images. Modeling Vocabulary for Big Code Machine Learning. Playing a Strategy Game with Knowledge-Base …

Applications of Nature-Inspired Algorithms for Dimension Reduction: Enabling Efficient Data Analytics

Title Applications of Nature-Inspired Algorithms for Dimension Reduction: Enabling Efficient Data Analytics
Authors Farid Ghareh Mohammadi, M. Hadi Amini, Hamid R. Arabnia
Abstract In [1], we explored the theoretical aspects of feature selection and evolutionary algorithms. In this chapter, we focus on optimization algorithms for enhancing the data analytics process, i.e., we explore applications of nature-inspired algorithms in data science. Feature selection optimization is a hybrid approach that leverages feature selection techniques and evolutionary algorithms to optimize the selected features; prior works solve this problem iteratively to converge to an optimal feature subset. Feature selection optimization is not specific to any one domain. Data scientists mainly attempt to find advanced ways to analyze data with high computational efficiency and low time complexity, leading to efficient data analytics. As the amount of generated/measured/sensed data from various sources increases, the analysis, manipulation, and illustration of data grow exponentially. Owing to such large-scale data sets, the curse of dimensionality (CoD) is one of the NP-hard problems in data science. Hence, several efforts have focused on leveraging evolutionary algorithms (EAs) to address complex issues in large-scale data analytics problems. Dimension reduction, together with EAs, lends itself to solving CoD and complex problems efficiently in terms of time complexity. In this chapter, we first provide a brief overview of previous studies that solved CoD using a feature extraction optimization process. We then discuss practical examples of research studies that successfully tackled application domains such as image processing, sentiment analysis, network traffic/anomaly analysis, credit score analysis, and other benchmark functions/data sets.
Tasks Dimensionality Reduction, Feature Selection, Sentiment Analysis
Published 2019-08-22
URL https://arxiv.org/abs/1908.08563v1
PDF https://arxiv.org/pdf/1908.08563v1.pdf
PWC https://paperswithcode.com/paper/applications-of-nature-inspired-algorithms
Repo
Framework
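The chapter's running theme, an evolutionary algorithm searching for a compact feature subset, can be sketched with a toy genetic algorithm. Everything here is illustrative: the `relevance` scores, the additive fitness, and the hyperparameters are hypothetical stand-ins for a real wrapper evaluation such as classifier accuracy.

```python
import random

def fitness(mask, relevance, penalty=0.1):
    # Toy fitness: total relevance of chosen features minus a size penalty.
    chosen = [i for i, bit in enumerate(mask) if bit]
    return sum(relevance[i] for i in chosen) - penalty * len(chosen)

def ga_feature_selection(relevance, pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    n = len(relevance)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda m: fitness(m, relevance), reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)       # one-point crossover
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]
            child[rng.randrange(n)] ^= 1          # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda m: fitness(m, relevance))

relevance = [0.9, 0.05, 0.8, 0.02, 0.7]           # hypothetical per-feature scores
best = ga_feature_selection(relevance)
print(best)
```

The iterate-until-convergence loop is exactly the "solve this problem iteratively to converge to an optimal feature subset" pattern the abstract describes.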

Simultaneous Detection and Removal of Dynamic Objects in Multi-view Images

Title Simultaneous Detection and Removal of Dynamic Objects in Multi-view Images
Authors Gagan Kanojia, Shanmuganathan Raman
Abstract Consider a set of images of a scene consisting of moving objects captured using a hand-held camera. In this work, we propose an algorithm which takes this set of multi-view images as input, detects the dynamic objects present in the scene, and replaces them with the static regions which are being occluded by them. The proposed algorithm scans the reference image in the row-major order at the pixel level and classifies each pixel as static or dynamic. During the scan, when a pixel is classified as dynamic, the proposed algorithm replaces that pixel value with the corresponding pixel value of the static region which is being occluded by that dynamic region. We show that we achieve artifact-free removal of dynamic objects in multi-view images of several real-world scenes. To the best of our knowledge, we propose the first method which simultaneously detects and removes the dynamic objects present in multi-view images.
Tasks
Published 2019-12-11
URL https://arxiv.org/abs/1912.05591v1
PDF https://arxiv.org/pdf/1912.05591v1.pdf
PWC https://paperswithcode.com/paper/simultaneous-detection-and-removal-of-dynamic
Repo
Framework
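The row-major scan-and-replace idea in the abstract can be illustrated on toy, pre-aligned grayscale grids. The cross-view median test and the `threshold` below are assumptions for illustration; the paper's actual pixel classifier and its handling of camera motion are more involved.

```python
from statistics import median

def remove_dynamic_pixels(views, ref_index=0, threshold=10):
    """Scan the reference view in row-major order; a pixel that deviates
    strongly from the cross-view median is classified as dynamic and
    replaced by that median (the occluded static value)."""
    ref = views[ref_index]
    h, w = len(ref), len(ref[0])
    out = [row[:] for row in ref]
    for y in range(h):                        # row-major scan
        for x in range(w):
            vals = [v[y][x] for v in views]
            m = median(vals)
            if abs(ref[y][x] - m) > threshold:   # classified as dynamic
                out[y][x] = m
    return out

# Three aligned 2x3 grayscale views; a moving object (value 255)
# occupies a different pixel in each view.
views = [
    [[10, 255, 12], [11, 13, 12]],
    [[10, 14, 12], [255, 13, 12]],
    [[10, 14, 12], [11, 13, 255]],
]
clean = remove_dynamic_pixels(views)
print(clean)  # -> [[10, 14, 12], [11, 13, 12]]
```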

Training Object Detectors from Few Weakly-Labeled and Many Unlabeled Images

Title Training Object Detectors from Few Weakly-Labeled and Many Unlabeled Images
Authors Zhaohui Yang, Miaojing Shi, Yannis Avrithis, Chao Xu, Vittorio Ferrari
Abstract Weakly-supervised object detection attempts to limit the amount of supervision by dispensing with the need for bounding boxes, but still assumes image-level labels are available for the entire training set. In this work, we study the problem of training an object detector from one or a few images with image-level labels and a larger set of completely unlabeled images. This is an extreme case of semi-supervised learning where the labeled data are not enough to bootstrap the learning of a detector. Our solution is to train a weakly-supervised student model on image-level pseudo-labels generated on the unlabeled set by a teacher model, bootstrapped by region-level similarities to labeled images. Building upon PCL, a recent representative weakly-supervised pipeline, our method effectively makes use of more unlabeled images and achieves performance competitive with or superior to many state-of-the-art weakly-supervised detection solutions.
Tasks Object Detection, Weakly Supervised Object Detection
Published 2019-12-01
URL https://arxiv.org/abs/1912.00384v2
PDF https://arxiv.org/pdf/1912.00384v2.pdf
PWC https://paperswithcode.com/paper/training-object-detectors-from-few-weakly-1
Repo
Framework
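The teacher step described above, pseudo-labels assigned by similarity to a few labeled images, can be sketched at the feature level. The cosine-similarity teacher and the confidence `threshold` are hypothetical simplifications; the paper's teacher works on region-level similarities within the PCL pipeline.

```python
def similarity(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def pseudo_label(labeled, unlabeled, threshold=0.9):
    """Teacher step: give each unlabeled example the label of its most
    similar labeled example, keeping only confident matches. The student
    detector would then be trained on the returned pairs."""
    out = []
    for feat in unlabeled:
        best_label, best_sim = None, -1.0
        for ref, lab in labeled:
            s = similarity(feat, ref)
            if s > best_sim:
                best_label, best_sim = lab, s
        if best_sim >= threshold:
            out.append((feat, best_label))
    return out

labeled = [([1.0, 0.0], "cat"), ([0.0, 1.0], "dog")]
unlabeled = [[0.9, 0.1], [0.1, 0.95], [0.7, 0.7]]
pseudo = pseudo_label(labeled, unlabeled)
```

The third, ambiguous example falls below the threshold and is discarded rather than mislabeled, which is the point of thresholding pseudo-labels.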

Modeling Vocabulary for Big Code Machine Learning

Title Modeling Vocabulary for Big Code Machine Learning
Authors Hlib Babii, Andrea Janes, Romain Robbes
Abstract When building machine learning models that operate on source code, several decisions have to be made to model the source-code vocabulary. These decisions can have a large impact: some can prevent models from being trained at all, while others significantly affect performance, particularly for Neural Language Models. Yet, these decisions are not often fully described. This paper lists important modeling choices for source code vocabulary and explores their impact on the resulting vocabulary on a large-scale corpus of 14,436 projects. We show that a subset of these decisions has decisive characteristics, allowing accurate Neural Language Models to be trained quickly on a large corpus of 10,106 projects.
Tasks
Published 2019-04-03
URL http://arxiv.org/abs/1904.01873v1
PDF http://arxiv.org/pdf/1904.01873v1.pdf
PWC https://paperswithcode.com/paper/modeling-vocabulary-for-big-code-machine
Repo
Framework
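Two of the vocabulary-modeling choices the paper examines, identifier splitting and lowercasing, can be demonstrated with a small tokenizer. The regexes below are illustrative, not the paper's pipeline; the point is that compound identifiers share subtokens once split, which is what keeps vocabulary growth in check on large corpora.

```python
import re

def tokenize(code, split_identifiers=False, lowercase=False):
    """Illustrative vocabulary-modeling choices for source code:
    whether to split compound identifiers and whether to lowercase."""
    tokens = re.findall(r"[A-Za-z_][A-Za-z0-9_]*|\S", code)
    if split_identifiers:
        out = []
        for t in tokens:
            # Split camelCase and snake_case into subtokens.
            parts = re.findall(r"[A-Z]?[a-z0-9]+|[A-Z]+(?![a-z])", t)
            out.extend(parts or [t])
        tokens = out
    if lowercase:
        tokens = [t.lower() for t in tokens]
    return tokens

code = "int maxValue = getMaxValue(raw_value);"
raw = set(tokenize(code))
split = set(tokenize(code, split_identifiers=True, lowercase=True))
print(sorted(split))
```

With splitting, `maxValue`, `getMaxValue`, and `raw_value` all reuse the subtoken `value` instead of contributing three unrelated vocabulary entries.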

Playing a Strategy Game with Knowledge-Based Reinforcement Learning

Title Playing a Strategy Game with Knowledge-Based Reinforcement Learning
Authors Viktor Voss, Liudmyla Nechepurenko, Dr. Rudi Schaefer, Steffen Bauer
Abstract This paper presents Knowledge-Based Reinforcement Learning (KB-RL) as a method that combines a knowledge-based approach and a reinforcement learning (RL) technique into one method for intelligent problem solving. The proposed approach focuses on multi-expert knowledge acquisition, with reinforcement learning applied as a conflict resolution strategy aimed at integrating the knowledge of multiple experts into one knowledge base. The article describes the KB-RL approach in detail and applies the method to one of the most challenging problems of current Artificial Intelligence (AI) research, namely playing a strategy game. The results show that the KB-RL system is able to play and complete the full FreeCiv game, and to win against the computer players in various game settings. Moreover, with more games played, the system improves its gameplay by reducing the number of rounds it takes to win the game. Overall, the reported experiment supports the idea that, grounded in human knowledge and empowered by reinforcement learning, the KB-RL system can deliver strong solutions to complex, multi-strategic problems and, importantly, improve those solutions with increased experience.
Tasks
Published 2019-08-15
URL https://arxiv.org/abs/1908.05472v1
PDF https://arxiv.org/pdf/1908.05472v1.pdf
PWC https://paperswithcode.com/paper/playing-a-strategy-game-with-knowledge-based
Repo
Framework
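The conflict-resolution idea, RL arbitrating between disagreeing knowledge-base experts, can be sketched with a Q-table. The state, experts, and rewards are invented for illustration; the actual system acquires its rules from multiple human experts playing FreeCiv.

```python
import random

def kb_rl_choose(q, experts, state, eps=0.1, rng=random):
    """Conflict resolution sketch: each knowledge-base expert proposes an
    action for the state; a learned Q-table arbitrates between them."""
    proposals = [rule(state) for rule in experts]
    if rng.random() < eps:                       # occasional exploration
        return rng.choice(proposals)
    return max(proposals, key=lambda a: q.get((state, a), 0.0))

def kb_rl_update(q, state, action, reward, alpha=0.5):
    # Simple running-average Q-value update from observed game outcomes.
    key = (state, action)
    q[key] = q.get(key, 0.0) + alpha * (reward - q.get(key, 0.0))

# Two hypothetical experts disagree in state "early_game".
experts = [lambda s: "expand", lambda s: "build_army"]
q = {}
kb_rl_update(q, "early_game", "expand", reward=1.0)
kb_rl_update(q, "early_game", "build_army", reward=-0.5)
choice = kb_rl_choose(q, experts, "early_game", eps=0.0)
print(choice)  # -> expand
```

As more games are played, the Q-values shift toward the experts whose advice wins faster, matching the improvement over repeated games reported in the abstract.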

Scanner Invariant Multiple Sclerosis Lesion Segmentation from MRI

Title Scanner Invariant Multiple Sclerosis Lesion Segmentation from MRI
Authors Shahab Aslani, Vittorio Murino, Michael Dayan, Roger Tam, Diego Sona, Ghassan Hamarneh
Abstract This paper presents a simple and effective generalization method for magnetic resonance imaging (MRI) segmentation when data is collected from multiple MRI scanning sites and as a consequence is affected by (site-)domain shifts. We propose to integrate a traditional encoder-decoder network with a regularization network. This added network includes an auxiliary loss term which is responsible for the reduction of the domain shift problem and for the resulting improved generalization. The proposed method was evaluated on multiple sclerosis lesion segmentation from MRI data. We tested the proposed model on an in-house clinical dataset including 117 patients from 56 different scanning sites. In the experiments, our method showed better generalization performance than other baseline networks.
Tasks Lesion Segmentation
Published 2019-10-22
URL https://arxiv.org/abs/1910.10035v1
PDF https://arxiv.org/pdf/1910.10035v1.pdf
PWC https://paperswithcode.com/paper/scanner-invariant-multiple-sclerosis-lesion
Repo
Framework
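One plausible reading of the auxiliary loss is a term that pulls per-site feature statistics together. The variance-of-site-means penalty below is an assumption made for illustration, not the paper's actual regularization network; it only shows how an added loss term can discourage (site-)domain shift.

```python
def site_variance_penalty(features_by_site):
    """Hypothetical auxiliary regularizer: penalize the spread of
    per-site mean activations, pushing site representations together."""
    means = [sum(f) / len(f) for f in features_by_site]
    grand = sum(means) / len(means)
    return sum((m - grand) ** 2 for m in means) / len(means)

def total_loss(seg_loss, features_by_site, lam=0.5):
    # Segmentation loss plus the weighted domain-shift penalty.
    return seg_loss + lam * site_variance_penalty(features_by_site)

# Activations from two scanning sites: shifted statistics raise the penalty.
shifted = [[0.9, 1.1, 1.0], [2.0, 2.2, 1.8]]
aligned = [[1.0, 1.1, 0.9], [1.0, 0.9, 1.1]]
print(total_loss(0.3, shifted) > total_loss(0.3, aligned))  # -> True
```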

The frontier of simulation-based inference

Title The frontier of simulation-based inference
Authors Kyle Cranmer, Johann Brehmer, Gilles Louppe
Abstract Many domains of science have developed complex simulations to describe phenomena of interest. While these simulations provide high-fidelity models, they are poorly suited for inference and lead to challenging inverse problems. We review the rapidly developing field of simulation-based inference and identify the forces giving new momentum to the field. Finally, we describe how the frontier is expanding so that a broad audience can appreciate the profound change these developments may have on science.
Tasks
Published 2019-11-04
URL https://arxiv.org/abs/1911.01429v2
PDF https://arxiv.org/pdf/1911.01429v2.pdf
PWC https://paperswithcode.com/paper/the-frontier-of-simulation-based-inference
Repo
Framework
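As a concrete entry point to the field being reviewed, here is the classic simulation-based inference scheme, rejection ABC: draw parameters from the prior, run the simulator, and keep the draws whose output lands near the observation. The toy simulator, prior, and tolerance are invented for illustration; the review covers far more efficient modern variants.

```python
import random

def abc_rejection(simulator, observed, prior_sample, n=20000, tol=0.2, seed=0):
    """Approximate Bayesian computation by rejection: accepted parameter
    draws approximate samples from the posterior given `observed`."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n):
        theta = prior_sample(rng)
        if abs(simulator(theta, rng) - observed) < tol:
            accepted.append(theta)
    return accepted

# Toy simulator: data = theta + Gaussian noise; uniform prior on [0, 2].
sim = lambda theta, rng: theta + rng.gauss(0.0, 0.1)
post = abc_rejection(sim, observed=1.0,
                     prior_sample=lambda rng: rng.uniform(0.0, 2.0))
post_mean = sum(post) / len(post)
```

This is exactly the "inverse problem" framing in the abstract: the simulator is only run forward, never inverted, yet the accepted draws characterize the posterior.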

Deep Hedging: Learning to Simulate Equity Option Markets

Title Deep Hedging: Learning to Simulate Equity Option Markets
Authors Magnus Wiese, Lianjun Bai, Ben Wood, Hans Buehler
Abstract We construct realistic equity option market simulators based on generative adversarial networks (GANs). We consider recurrent and temporal convolutional architectures, and assess the impact of state compression. Option market simulators are highly relevant because they allow us to extend the limited real-world data sets available for the training and evaluation of option trading strategies. We show that network-based generators outperform classical methods on a range of benchmark metrics, and adversarial training achieves the best performance. Our work demonstrates for the first time that GANs can be successfully applied to the task of generating multivariate financial time series.
Tasks Time Series
Published 2019-11-05
URL https://arxiv.org/abs/1911.01700v1
PDF https://arxiv.org/pdf/1911.01700v1.pdf
PWC https://paperswithcode.com/paper/deep-hedging-learning-to-simulate-equity
Repo
Framework

A Simple Method to Reduce Off-chip Memory Accesses on Convolutional Neural Networks

Title A Simple Method to Reduce Off-chip Memory Accesses on Convolutional Neural Networks
Authors Doyun Kim, Kyoung-Young Kim, Sangsoo Ko, Sanghyuck Ha
Abstract For convolutional neural networks, we propose a simple algorithm that reduces off-chip memory accesses by maximally utilizing the on-chip memory of a neural processing unit. In particular, the algorithm provides an effective way to process a module consisting of multiple branches and a merge layer. For Inception-V3 on Samsung’s NPU in Exynos, our evaluation shows that the proposed algorithm reduces off-chip memory accesses to 1/50 of their original volume, achieving a 97.59% reduction in the amount of feature-map data transferred to/from off-chip memory.
Tasks
Published 2019-01-28
URL http://arxiv.org/abs/1901.09614v1
PDF http://arxiv.org/pdf/1901.09614v1.pdf
PWC https://paperswithcode.com/paper/a-simple-method-to-reduce-off-chip-memory
Repo
Framework
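A back-of-the-envelope model shows why keeping intermediate feature maps on-chip matters: every map that spills to DRAM costs a write and a read back. The sizes and the all-or-nothing capacity rule below are hypothetical and much simpler than the branch-and-merge scheduling the paper proposes.

```python
def offchip_traffic(feature_map_sizes, onchip_capacity):
    """Toy cost model: an intermediate feature map is written to and read
    back from off-chip memory only if it does not fit on-chip."""
    traffic = 0
    for size in feature_map_sizes:
        if size > onchip_capacity:
            traffic += 2 * size           # write out + read back
    return traffic

feature_maps = [64, 256, 512, 128]        # hypothetical sizes in KB
print(offchip_traffic(feature_maps, onchip_capacity=256))   # -> 1024
print(offchip_traffic(feature_maps, onchip_capacity=1024))  # -> 0
```

Under this model, scheduling branches so their intermediates stay under the on-chip capacity drives the off-chip traffic toward zero, which is the effect the abstract quantifies.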

Convex Formulation of Overparameterized Deep Neural Networks

Title Convex Formulation of Overparameterized Deep Neural Networks
Authors Cong Fang, Yihong Gu, Weizhong Zhang, Tong Zhang
Abstract Analysis of over-parameterized neural networks has drawn significant attention in recent years. It was shown that such systems behave like convex systems under various restricted settings, such as for two-level neural networks, and when learning is only restricted locally in the so-called neural tangent kernel space around specialized initializations. However, there are no theoretical techniques that can analyze fully trained deep neural networks encountered in practice. This paper solves this fundamental problem by investigating such overparameterized deep neural networks when fully trained. We generalize a new technique called neural feature repopulation, originally introduced in (Fang et al., 2019a) for two-level neural networks, to analyze deep neural networks. It is shown that under suitable representations, overparameterized deep neural networks are inherently convex, and when optimized, the system can learn effective features suitable for the underlying learning task under mild conditions. This new analysis is consistent with empirical observations that deep neural networks are capable of learning efficient feature representations. Therefore, the highly unexpected result of this paper can satisfactorily explain the practical success of deep neural networks. Empirical studies confirm that predictions of our theory are consistent with results observed in practice.
Tasks
Published 2019-11-18
URL https://arxiv.org/abs/1911.07626v1
PDF https://arxiv.org/pdf/1911.07626v1.pdf
PWC https://paperswithcode.com/paper/convex-formulation-of-overparameterized-deep
Repo
Framework

Earthquake Prediction With Artificial Neural Network Method: The Application Of West Anatolian Fault In Turkey

Title Earthquake Prediction With Artificial Neural Network Method: The Application Of West Anatolian Fault In Turkey
Authors Handan Cam, Osman Duman
Abstract A method that can predict earthquakes precisely in advance and generalize across them has yet to be developed; nevertheless, numerous methods attempt to predict earthquakes. One of these methods, artificial neural networks, produces appropriate outputs for different patterns by learning the relationship between given inputs and outputs. In this study, we developed a feedforward back-propagation artificial neural network that is tied to the Gutenberg-Richter relationship and based on the b-value used in earthquake prediction. The artificial neural network was trained on earthquake data from four regions with intense seismic activity in western Turkey. After the training process, earthquake data from later dates in the same regions were used for testing, and the performance of the network was assessed. Examining the prediction results of the developed network, its predictions that an earthquake is not going to occur are quite accurate in all regions, whereas its predictions that an earthquake is going to occur differ somewhat across the studied regions.
Tasks
Published 2019-05-26
URL https://arxiv.org/abs/1907.02209v1
PDF https://arxiv.org/pdf/1907.02209v1.pdf
PWC https://paperswithcode.com/paper/earthquake-prediction-with-artificial-neural
Repo
Framework
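The network's key input is the b-value of the Gutenberg-Richter relation log10 N = a - b*M. A standard way to estimate it from a catalog (not described in the abstract, but conventional in seismology) is Aki's maximum-likelihood formula, b = log10(e) / (mean(M) - Mc), where Mc is the magnitude of completeness. The catalog below is invented for illustration.

```python
import math

def b_value(magnitudes, completeness_mag):
    """Maximum-likelihood b-value estimate (Aki, 1965) for the
    Gutenberg-Richter relation log10 N = a - b*M."""
    mean_mag = sum(magnitudes) / len(magnitudes)
    return math.log10(math.e) / (mean_mag - completeness_mag)

mags = [3.1, 3.4, 3.2, 4.0, 3.6, 3.3, 3.8, 3.5]   # hypothetical catalog
print(round(b_value(mags, completeness_mag=3.0), 2))  # -> 0.89
```

A time series of such b-value estimates over a fault region is the kind of feature a feedforward network like the one in the study could take as input.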

Proximal gradient flow and Douglas-Rachford splitting dynamics: global exponential stability via integral quadratic constraints

Title Proximal gradient flow and Douglas-Rachford splitting dynamics: global exponential stability via integral quadratic constraints
Authors Sepideh Hassan-Moghaddam, Mihailo R. Jovanović
Abstract Many large-scale and distributed optimization problems can be brought into a composite form in which the objective function is given by the sum of a smooth term and a nonsmooth regularizer. Such problems can be solved via a proximal gradient method and its variants, thereby generalizing gradient descent to a nonsmooth setup. In this paper, we view proximal algorithms as dynamical systems and leverage techniques from control theory to study their global properties. In particular, for problems with strongly convex objective functions, we utilize the theory of integral quadratic constraints to prove global exponential stability of the differential equations that govern the evolution of proximal gradient and Douglas-Rachford splitting flows. In our analysis, we use the fact that these algorithms can be interpreted as variable-metric gradient methods on the forward-backward and the Douglas-Rachford envelopes and exploit structural properties of the nonlinear terms that arise from the gradient of the smooth part of the objective function and the proximal operator associated with the nonsmooth regularizer. We also demonstrate that these envelopes can be obtained from the augmented Lagrangian associated with the original nonsmooth problem and establish conditions for global exponential convergence even in the absence of strong convexity.
Tasks Distributed Optimization
Published 2019-08-23
URL https://arxiv.org/abs/1908.09043v1
PDF https://arxiv.org/pdf/1908.09043v1.pdf
PWC https://paperswithcode.com/paper/proximal-gradient-flow-and-douglas-rachford
Repo
Framework
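The proximal gradient iteration analyzed in the paper, x_{k+1} = prox_{αg}(x_k − α∇f(x_k)), is easy to state in code. For the toy choice f(x) = ½‖x − b‖² and g(x) = λ‖x‖₁ the minimizer is known in closed form (soft-thresholding of b), which makes the sketch self-checking; the step size and iteration count here are arbitrary, not tied to the paper's stability analysis.

```python
def soft_threshold(v, t):
    """Proximal operator of t*||.||_1, applied elementwise."""
    return [0.0 if abs(x) <= t else (x - t if x > 0 else x + t) for x in v]

def proximal_gradient(b, lam, step=0.5, iters=100):
    """x_{k+1} = prox_{step*lam*||.||_1}(x_k - step * grad f(x_k))
    for f(x) = 0.5*||x - b||^2; the minimizer is soft_threshold(b, lam)."""
    x = [0.0] * len(b)
    for _ in range(iters):
        grad = [xi - bi for xi, bi in zip(x, b)]       # grad f(x) = x - b
        x = soft_threshold([xi - step * g for xi, g in zip(x, grad)],
                           step * lam)
    return x

b = [3.0, -0.2, 1.5]
x = proximal_gradient(b, lam=0.5)
print([round(v, 3) for v in x])  # -> [2.5, 0.0, 1.0] = soft_threshold(b, 0.5)
```

The continuous-time flows studied in the paper are the limit of this discrete iteration as the step size goes to zero.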

DeepNC: Deep Generative Network Completion

Title DeepNC: Deep Generative Network Completion
Authors Cong Tran, Won-Yong Shin, Andreas Spitz, Michael Gertz
Abstract Most network data are collected from only partially observable networks with both missing nodes and edges, for example due to limited resources and privacy settings specified by users on social media. Thus, it stands to reason that inferring the missing parts of the networks by performing network completion should precede downstream mining or learning tasks on the networks. However, despite this need, the recovery of missing nodes and edges in such incomplete networks is an insufficiently explored problem. In this paper, we present DeepNC, a novel method for inferring the missing parts of a network based on a deep generative graph model. Specifically, our model first learns a likelihood over edges via a recurrent neural network (RNN)-based generative graph model, and then identifies the graph that maximizes the learned likelihood conditioned on the observable graph topology. Moreover, we propose a computationally efficient DeepNC algorithm that consecutively finds a single node to maximize the probability in each node generation step, whose runtime complexity is almost linear in the number of nodes in the network. We empirically show the superiority of DeepNC over state-of-the-art network completion approaches on a variety of synthetic and real-world networks.
Tasks
Published 2019-07-17
URL https://arxiv.org/abs/1907.07381v1
PDF https://arxiv.org/pdf/1907.07381v1.pdf
PWC https://paperswithcode.com/paper/deepnc-deep-generative-network-completion
Repo
Framework

Bayesian Inference with Generative Adversarial Network Priors

Title Bayesian Inference with Generative Adversarial Network Priors
Authors Dhruv Patel, Assad A Oberai
Abstract Bayesian inference is used extensively to infer and to quantify the uncertainty in a field of interest from a measurement of a related field when the two are linked by a physical model. Despite its many applications, Bayesian inference faces challenges when inferring fields that have discrete representations of large dimension, and/or have prior distributions that are difficult to represent mathematically. In this manuscript we consider the use of Generative Adversarial Networks (GANs) in addressing these challenges. A GAN is a type of deep neural network equipped with the ability to learn the distribution implied by multiple samples of a given field. Once trained on these samples, the generator component of a GAN maps the iid components of a low-dimensional latent vector to an approximation of the distribution of the field of interest. In this work we demonstrate how this approximate distribution may be used as a prior in a Bayesian update, and how it addresses the challenges associated with characterizing complex prior distributions and the large dimension of the inferred field. We demonstrate the efficacy of this approach by applying it to the problem of inferring and quantifying uncertainty in the initial temperature field in a heat conduction problem from a noisy measurement of the temperature at a later time.
Tasks Bayesian Inference
Published 2019-07-22
URL https://arxiv.org/abs/1907.09987v1
PDF https://arxiv.org/pdf/1907.09987v1.pdf
PWC https://paperswithcode.com/paper/bayesian-inference-with-generative
Repo
Framework
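The GAN-prior update can be sketched as MAP estimation over the latent vector: minimize the data misfit of F(G(z)) plus the standard-normal prior term on z. The one-dimensional `G`, `F`, measurement, and noise variance below are toys standing in for a trained generator and a physical forward model (e.g., heat conduction).

```python
def map_latent(G, F, y, z0=0.0, lr=0.05, iters=500, noise_var=0.1):
    """Gradient descent on the negative log-posterior over the latent z:
    J(z) = (F(G(z)) - y)^2 / (2*noise_var) + z^2 / 2,
    using a central-difference numerical gradient."""
    def J(z):
        r = F(G(z)) - y
        return r * r / (2 * noise_var) + z * z / 2
    z, h = z0, 1e-5
    for _ in range(iters):
        g = (J(z + h) - J(z - h)) / (2 * h)
        z -= lr * g
    return z

G = lambda z: 2.0 * z + 1.0    # toy "generator" (would be a trained GAN)
F = lambda x: 0.5 * x          # toy forward model
y = 2.0                        # noisy measurement
z_map = map_latent(G, F, y)
print(round(z_map, 3))  # -> 1.364 (analytic optimum is 15/11)
```

Pushing `z_map` back through `G` yields the MAP estimate of the field itself; sampling z instead of optimizing it would quantify the uncertainty.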

Compressed Learning of Deep Neural Networks for OpenCL-Capable Embedded Systems

Title Compressed Learning of Deep Neural Networks for OpenCL-Capable Embedded Systems
Authors Sangkyun Lee, Jeonghyun Lee
Abstract Deep neural networks (DNNs) have been quite successful in solving many complex learning problems. However, DNNs tend to have a large number of learning parameters, leading to large memory and computation requirements. In this paper, we propose a model compression framework for efficient training and inference of deep neural networks on embedded systems. Our framework provides data structures and kernels for OpenCL-based parallel forward and backward computation in a compressed form. In particular, our method learns sparse representations of parameters using $\ell_1$-based sparse coding while training, storing them in compressed sparse matrices. Unlike previous works, our method does not require a pre-trained model as input and can therefore be more versatile across application environments. Even though the use of $\ell_1$-based sparse coding for model compression is not new, we show that it can be far more effective than previously reported when we use proximal point algorithms and the technique of debiasing. Our experiments show that our method can produce minimal learning models suitable for small embedded devices.
Tasks Model Compression
Published 2019-05-20
URL https://arxiv.org/abs/1905.07931v1
PDF https://arxiv.org/pdf/1905.07931v1.pdf
PWC https://paperswithcode.com/paper/compressed-learning-of-deep-neural-networks
Repo
Framework
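The two ingredients the abstract highlights, ℓ1 proximal steps during training and debiasing afterwards, can be shown in one dimension. The data, learning rate, and λ are invented; the point is that soft-thresholding shrinks the surviving weight, and a least-squares refit on the selected support removes that bias.

```python
def lasso_1d(xs, ys, lam=1.0, iters=200, lr=0.01):
    """1-D lasso via proximal (ISTA) steps: a gradient step on the squared
    error followed by soft-thresholding with threshold lr*lam."""
    w, n = 0.0, len(xs)
    for _ in range(iters):
        grad = sum((w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * grad
        t = lr * lam
        w = 0.0 if abs(w) <= t else (w - t if w > 0 else w + t)
    return w

def debias(xs, ys, w):
    """Debiasing: if the l1 step kept the weight (nonzero support),
    refit it by least squares to undo the soft-thresholding shrinkage."""
    if w == 0.0:
        return 0.0
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]          # true weight = 2
w_sparse = lasso_1d(xs, ys)
w_debiased = debias(xs, ys, w_sparse)
print(round(w_sparse, 3), w_debiased)
```

The shrunken weight lands below 2 while the debiased refit recovers it exactly, mirroring the gain the authors report from combining proximal algorithms with debiasing.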