January 28, 2020

Paper Group ANR 865

Fast Multi-Agent Temporal-Difference Learning via Homotopy Stochastic Primal-Dual Optimization

Title Fast Multi-Agent Temporal-Difference Learning via Homotopy Stochastic Primal-Dual Optimization
Authors Dongsheng Ding, Xiaohan Wei, Zhuoran Yang, Zhaoran Wang, Mihailo R. Jovanović
Abstract We consider a distributed multi-agent policy evaluation problem in reinforcement learning. In our setup, a group of agents with jointly observed states and private local actions and rewards collaborates to learn the value function of a given policy. When the dimension of the state-action space is large, temporal-difference learning with linear function approximation is widely used. Under the assumption that the samples are i.i.d., the best-known convergence rate for multi-agent temporal-difference learning is $O(1/\sqrt{T})$ for minimizing the mean-square projected Bellman error. In this paper, we formulate temporal-difference learning as a distributed stochastic saddle point problem, and propose a new homotopy primal-dual algorithm that adaptively restarts the gradient update from the average of previous iterations. We prove that our algorithm enjoys an $O(1/T)$ convergence rate up to logarithmic factors of $T$, thereby significantly improving the previously known convergence results on multi-agent temporal-difference learning. Furthermore, since our result explicitly takes into account the Markovian nature of sampling in policy evaluation, it addresses a broader class of problems than the commonly used i.i.d. sampling scenario. From a stochastic optimization perspective, to the best of our knowledge, the proposed homotopy primal-dual algorithm is the first to achieve an $O(1/T)$ convergence rate for distributed stochastic saddle point problems.
Tasks Stochastic Optimization
Published 2019-08-07
URL https://arxiv.org/abs/1908.02805v2
PDF https://arxiv.org/pdf/1908.02805v2.pdf
PWC https://paperswithcode.com/paper/fast-multi-agent-temporal-difference-learning
Repo
Framework
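
The homotopy restarting scheme is the paper's main algorithmic ingredient. As a rough illustration of the saddle-point view of TD learning it builds on, here is a minimal single-agent sketch in Python: the updates follow the standard primal-dual formulation of the projected Bellman error, while the restart schedule, step sizes, and halving rule are illustrative assumptions rather than the authors' exact choices.

```python
import numpy as np

def homotopy_primal_dual_td(samples, d, gamma=0.95, rounds=5,
                            iters_per_round=1000, alpha0=0.5, beta0=0.5):
    """Schematic homotopy primal-dual TD(0) with linear function approximation.

    `samples` yields transitions (phi, reward, phi_next) with phi in R^d.
    Each round restarts from the running average of the previous round's
    iterates and halves the step sizes (an illustrative schedule, not
    the paper's exact one).
    """
    theta = np.zeros(d)      # primal variable (value-function weights)
    w = np.zeros(d)          # dual variable
    alpha, beta = alpha0, beta0
    for _ in range(rounds):
        theta_avg, w_avg = np.zeros(d), np.zeros(d)
        for t in range(1, iters_per_round + 1):
            phi, r, phi_next = next(samples)
            delta = r + gamma * theta @ phi_next - theta @ phi  # TD error
            # dual ascent on the saddle-point form of the projected Bellman error
            w += beta * (delta - phi @ w) * phi
            # primal descent along the dual-corrected direction
            theta += alpha * (phi - gamma * phi_next) * (phi @ w)
            theta_avg += (theta - theta_avg) / t                # running averages
            w_avg += (w - w_avg) / t
        theta, w = theta_avg, w_avg      # restart from the averaged iterates
        alpha, beta = alpha / 2, beta / 2
    return theta
```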

A unified framework of predicting binary interestingness of images based on discriminant correlation analysis and multiple kernel learning

Title A unified framework of predicting binary interestingness of images based on discriminant correlation analysis and multiple kernel learning
Authors Qiang Sun, Liting Wang, Maohui Li, Longtao Zhang, Yuxiang Yang
Abstract In modern content-based image retrieval systems, there is increasing interest in constructing a computationally efficient model to predict the interestingness of images, since a measure of image interestingness could improve human-centered search satisfaction and the user experience in different applications. In this paper, we propose a unified framework to predict the binary interestingness of images based on discriminant correlation analysis (DCA) and multiple kernel learning (MKL) techniques. More specifically, on the one hand, to reduce feature redundancy in describing the interestingness cues of images, the DCA or multi-set discriminant correlation analysis (MDCA) technique is adopted to fuse multiple feature sets of the same type for individual cues, taking into account the class structure among the samples, so that the three classical interestingness cues, unusualness, aesthetics and general preferences, are each described with a compact and representative feature set. On the other hand, to make good use of the heterogeneity of the three sets of high-level features describing the interestingness cues, the SimpleMKL method is employed to enhance the generalization ability of the built model for the task of binary interestingness classification. Experimental results on the publicly released interestingness prediction data set demonstrate the rationality and effectiveness of the proposed framework for the binary prediction of image interestingness, where we have conducted several groups of comparative studies across different interestingness feature combinations, different interestingness cues, and different feature types for the three interestingness cues.
Tasks Content-Based Image Retrieval, Image Retrieval
Published 2019-10-14
URL https://arxiv.org/abs/1910.05996v1
PDF https://arxiv.org/pdf/1910.05996v1.pdf
PWC https://paperswithcode.com/paper/a-unified-framework-of-predicting-binary
Repo
Framework
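
To make the two-stage pipeline concrete, the sketch below replaces SimpleMKL's gradient-based weight updates with a coarse grid search over convex kernel combinations; the RBF kernel choice, the grid, and model selection on the training score are simplifications, and the DCA/MDCA fusion step is assumed to have already produced the per-cue feature matrices.

```python
import itertools
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def mkl_predict(feature_sets_train, feature_sets_test, y_train):
    """Schematic stand-in for SimpleMKL: pick a convex combination of
    per-cue kernels by grid search instead of SimpleMKL's gradient steps.

    Each element of `feature_sets_*` is the (already DCA/MDCA-fused)
    feature matrix for one interestingness cue.
    """
    # one base kernel per cue; the kernel choice here is an assumption
    K_train = [rbf_kernel(F, F) for F in feature_sets_train]
    K_test = [rbf_kernel(Ft, F)
              for Ft, F in zip(feature_sets_test, feature_sets_train)]
    best = (None, -np.inf)
    grid = np.linspace(0, 1, 5)
    for w in itertools.product(grid, repeat=len(K_train)):
        if not np.isclose(sum(w), 1.0):   # convex combinations only
            continue
        K = sum(wi * Ki for wi, Ki in zip(w, K_train))
        clf = SVC(kernel="precomputed").fit(K, y_train)
        score = clf.score(K, y_train)     # training score; use CV in practice
        if score > best[1]:
            best = ((w, clf), score)
    (w, clf), _ = best
    return clf.predict(sum(wi * Ki for wi, Ki in zip(w, K_test)))
```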

Deep Aggregation of Regional Convolutional Activations for Content Based Image Retrieval

Title Deep Aggregation of Regional Convolutional Activations for Content Based Image Retrieval
Authors Konstantin Schall, Kai Uwe Barthel, Nico Hezel, Klaus Jung
Abstract One of the key challenges of deep learning based image retrieval remains in aggregating convolutional activations into one highly representative feature vector. Ideally, this descriptor should encode semantic, spatial and low-level information. Even though off-the-shelf pre-trained neural networks can already produce good representations in combination with aggregation methods, appropriate fine-tuning for the task of image retrieval has been shown to significantly boost retrieval performance. In this paper, we present a simple yet effective supervised aggregation method built on top of existing regional pooling approaches. In addition to the maximum activation of a given region, we calculate regional average activations of extracted feature maps. Subsequently, weights for each of the pooled feature vectors are learned to perform a weighted aggregation into a single feature vector. Furthermore, we apply our newly proposed NRA loss function for deep metric learning to fine-tune the backbone neural network and to learn the aggregation weights. Our method achieves state-of-the-art results for the INRIA Holidays data set and competitive results for the Oxford Buildings and Paris data sets while reducing the training time significantly.
Tasks Content-Based Image Retrieval, Image Retrieval, Metric Learning
Published 2019-09-20
URL https://arxiv.org/abs/1909.09420v2
PDF https://arxiv.org/pdf/1909.09420v2.pdf
PWC https://paperswithcode.com/paper/deep-aggregation-of-regional-convolutional
Repo
Framework
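
A minimal PyTorch sketch of the aggregation step described above: regional max and average pooling followed by a learned weighted sum. The region grid, the softmax normalization of the weights, and the final L2 normalization are assumptions for illustration; the NRA loss used for fine-tuning is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedRegionalAggregation(nn.Module):
    """Schematic regional aggregation: pool each region of a CNN feature
    map with both max and average pooling, then combine all pooled vectors
    with learned weights into one global descriptor.
    """
    def __init__(self, n_regions: int):
        super().__init__()
        # one learned weight per (region, pooling-type) vector
        self.weights = nn.Parameter(torch.ones(2 * n_regions))

    def forward(self, fmap: torch.Tensor, regions):
        # fmap: (C, H, W); regions: list of n_regions (y0, y1, x0, x1) boxes
        pooled = []
        for (y0, y1, x0, x1) in regions:
            r = fmap[:, y0:y1, x0:x1]
            pooled.append(r.amax(dim=(1, 2)))    # regional max activation
            pooled.append(r.mean(dim=(1, 2)))    # regional average activation
        pooled = torch.stack(pooled)             # (2 * n_regions, C)
        w = torch.softmax(self.weights, dim=0)   # keep weights normalized
        desc = (w.unsqueeze(1) * pooled).sum(dim=0)
        return F.normalize(desc, dim=0)          # L2-normalized descriptor
```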

Implications of Computer Vision Driven Assistive Technologies Towards Individuals with Visual Impairment

Title Implications of Computer Vision Driven Assistive Technologies Towards Individuals with Visual Impairment
Authors Linda Wang, Alexander Wong
Abstract Computer vision based technology is becoming ubiquitous in society. One application area that has seen an increase in computer vision is assistive technologies, specifically for those with visual impairment. Research has shown the ability of computer vision models to achieve tasks such as providing scene captions, detecting objects and recognizing faces. Although assisting individuals with visual impairment with these tasks increases their independence and autonomy, concerns over bias, privacy and potential usefulness arise. This paper addresses the positive and negative implications computer vision based assistive technologies have on individuals with visual impairment, as well as considerations for computer vision researchers and developers that would mitigate the negative implications.
Tasks
Published 2019-05-20
URL https://arxiv.org/abs/1905.07844v1
PDF https://arxiv.org/pdf/1905.07844v1.pdf
PWC https://paperswithcode.com/paper/implications-of-computer-vision-driven
Repo
Framework

Compressive Hyperspherical Energy Minimization

Title Compressive Hyperspherical Energy Minimization
Authors Rongmei Lin, Weiyang Liu, Zhen Liu, Chen Feng, Zhiding Yu, James M. Rehg, Li Xiong, Le Song
Abstract Recent work on minimum hyperspherical energy (MHE) has demonstrated its potential in regularizing neural networks and improving their generalization. MHE was inspired by the Thomson problem in physics, where the distribution of multiple mutually repelling electrons on a unit sphere can be modeled by minimizing some potential energy. Despite its practical effectiveness, MHE suffers from local minima, whose number increases dramatically in high dimensions, limiting MHE from unleashing its full potential in improving network generalization. To address this issue, we propose compressive minimum hyperspherical energy (CoMHE) as an alternative regularization for neural networks. Specifically, CoMHE utilizes a projection mapping to reduce the dimensionality of neurons and minimizes their hyperspherical energy. According to different constructions for the projection matrix, we propose two major variants: random projection CoMHE and angle-preserving CoMHE. Furthermore, we provide theoretical insights to justify their effectiveness. We show that CoMHE consistently outperforms MHE by a significant margin in comprehensive experiments, and demonstrate its diverse applications to a variety of tasks such as image recognition and point cloud recognition.
Tasks
Published 2019-06-12
URL https://arxiv.org/abs/1906.04892v1
PDF https://arxiv.org/pdf/1906.04892v1.pdf
PWC https://paperswithcode.com/paper/compressive-hyperspherical-energy
Repo
Framework
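
A minimal sketch of the random projection variant, assuming a Riesz s-energy as the hyperspherical energy and averaging over a few Gaussian projections; the exact energy function, projection construction, and hyperparameters in the paper may differ.

```python
import torch

def comhe_regularizer(W: torch.Tensor, proj_dim: int, s: float = 2.0,
                      n_proj: int = 4, eps: float = 1e-6) -> torch.Tensor:
    """Schematic random-projection CoMHE term for one layer.

    W: (n_neurons, d) weight matrix. Projects neurons to `proj_dim`
    dimensions with random Gaussian maps, normalizes them onto the unit
    hypersphere, and averages the resulting hyperspherical energies.
    """
    n, d = W.shape
    energy = W.new_zeros(())
    for _ in range(n_proj):
        P = torch.randn(d, proj_dim, device=W.device) / proj_dim ** 0.5
        V = torch.nn.functional.normalize(W @ P, dim=1)  # points on the sphere
        dist = torch.cdist(V, V) + eps                   # pairwise distances
        iu = torch.triu_indices(n, n, offset=1)          # each pair once
        energy = energy + dist[iu[0], iu[1]].pow(-s).mean()
    return energy / n_proj  # add to the task loss with a small coefficient
```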

Symmetry-constrained Rectification Network for Scene Text Recognition

Title Symmetry-constrained Rectification Network for Scene Text Recognition
Authors MingKun Yang, Yushuo Guan, Minghui Liao, Xin He, Kaigui Bian, Song Bai, Cong Yao, Xiang Bai
Abstract Reading text in the wild is a very challenging task due to the diversity of text instances and the complexity of natural scenes. Recently, the community has paid increasing attention to the problem of recognizing text instances with irregular shapes. One intuitive and effective way to handle this problem is to rectify irregular text into a canonical form before recognition. However, these methods might struggle when dealing with highly curved or distorted text instances. To tackle this issue, we propose in this paper a Symmetry-constrained Rectification Network (ScRN) based on local attributes of text instances, such as center line, scale and orientation. Such constraints, together with an accurate description of text shape, enable ScRN to generate better rectification results than existing methods and thus lead to higher recognition accuracy. Our method achieves state-of-the-art performance on text with both regular and irregular shapes. Specifically, the system outperforms existing algorithms by a large margin on datasets that contain a large proportion of irregular text instances, e.g., ICDAR 2015, SVT-Perspective and CUTE80.
Tasks Scene Text Recognition
Published 2019-08-06
URL https://arxiv.org/abs/1908.01957v1
PDF https://arxiv.org/pdf/1908.01957v1.pdf
PWC https://paperswithcode.com/paper/symmetry-constrained-rectification-network
Repo
Framework
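
The geometric core of the symmetry constraint can be sketched in a few lines: given predicted center-line, scale, and orientation attributes, the upper and lower text borders are placed symmetrically about the center line, yielding control points for rectification. The attribute-prediction network and the thin-plate-spline warp are omitted; the construction below is one illustrative reading of the abstract, not the authors' implementation.

```python
import numpy as np

def symmetric_borders(center_line: np.ndarray, scales: np.ndarray):
    """Place upper/lower fiducial points symmetrically about a text
    center line, along its local normal, at half the local text scale.

    center_line: (N, 2) sampled points; scales: (N,) per-point text heights.
    """
    # tangent direction along the center line (the orientation attribute)
    tang = np.gradient(center_line, axis=0)
    tang /= np.linalg.norm(tang, axis=1, keepdims=True)
    normal = np.stack([-tang[:, 1], tang[:, 0]], axis=1)  # rotate 90 degrees
    upper = center_line + 0.5 * scales[:, None] * normal
    lower = center_line - 0.5 * scales[:, None] * normal
    return upper, lower   # control points for a TPS-style rectification
```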

Challenging deep image descriptors for retrieval in heterogeneous iconographic collections

Title Challenging deep image descriptors for retrieval in heterogeneous iconographic collections
Authors Dimitri Gominski, Martyna Poreba, Valérie Gouet-Brunet, Liming Chen
Abstract This article proposes to study the behavior of recent and efficient state-of-the-art deep-learning based image descriptors for content-based image retrieval, facing a panel of complex variations appearing in heterogeneous image datasets, in particular in cultural collections that may involve multi-source, multi-date and multi-view content.
Tasks Content-Based Image Retrieval, Image Retrieval
Published 2019-09-19
URL https://arxiv.org/abs/1909.08866v1
PDF https://arxiv.org/pdf/1909.08866v1.pdf
PWC https://paperswithcode.com/paper/challenging-deep-image-descriptors-for
Repo
Framework

Automated Segmentation of the Optic Disk and Cup using Dual-Stage Fully Convolutional Networks

Title Automated Segmentation of the Optic Disk and Cup using Dual-Stage Fully Convolutional Networks
Authors Lei Bi, Yuyu Guo, Qian Wang, Dagan Feng, Michael Fulham, Jinman Kim
Abstract Automated segmentation of the optic cup and disk on retinal fundus images is fundamental for the automated detection and analysis of glaucoma. Traditional segmentation approaches depend heavily upon hand-crafted features and a priori knowledge of the user. As such, these methods are difficult to adapt to the clinical environment. Recently, deep learning methods based on fully convolutional networks (FCNs) have been successful in resolving segmentation problems. However, the reliance on large annotated training data is problematic when dealing with medical images. If a sufficient amount of annotated training data to cover all possible variations is not available, FCNs do not provide accurate segmentation. In addition, FCNs have a large receptive field in the convolutional layers, and hence produce coarse boundary outputs. Hence, we propose a new fully automated method that we refer to as a dual-stage fully convolutional network (DSFCN). Our approach leverages deep residual architectures and FCNs, and learns and infers the location of the optic cup and disk in a step-wise manner with fine-grained details. During training, our approach learns from the training data and the estimated results derived from the previous iteration. The ability to learn from the previous iteration optimizes the learning of the optic cup and disk boundaries. During testing (prediction), DSFCN uses test (input) images and the estimated probability map derived from previous iterations to gradually improve the segmentation accuracy. Our method achieved an average Dice coefficient of 0.8488 and 0.9441 for optic cup and disk segmentation, respectively, and an area under the curve (AUC) of 0.9513 for glaucoma detection.
Tasks
Published 2019-02-13
URL http://arxiv.org/abs/1902.04713v1
PDF http://arxiv.org/pdf/1902.04713v1.pdf
PWC https://paperswithcode.com/paper/automated-segmentation-of-the-optic-disk-and
Repo
Framework
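
A minimal sketch of the dual-stage idea: a second network refines the first network's probability map by consuming it alongside the input image. The sub-network architectures, the number of refinement passes, and the sigmoid output are stand-in assumptions; the paper uses deep residual FCNs for both stages.

```python
import torch
import torch.nn as nn

class DualStageFCN(nn.Module):
    """Schematic dual-stage FCN: stage 1 predicts a coarse cup/disk
    probability map from the fundus image; stage 2 refines it from the
    image concatenated with the previous estimate.
    """
    def __init__(self, fcn1: nn.Module, fcn2: nn.Module, n_refine: int = 2):
        super().__init__()
        # fcn2 must accept image channels + 1 probability-map channel
        self.fcn1, self.fcn2, self.n_refine = fcn1, fcn2, n_refine

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        prob = torch.sigmoid(self.fcn1(image))      # coarse estimate
        for _ in range(self.n_refine):              # step-wise refinement
            x = torch.cat([image, prob], dim=1)     # condition on last map
            prob = torch.sigmoid(self.fcn2(x))
        return prob
```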

Ontology-based Design of Experiments on Big Data Solutions

Title Ontology-based Design of Experiments on Big Data Solutions
Authors Maximilian Zocholl, Elena Camossi, Anne-Laure Jousselme, Cyril Ray
Abstract Big data solutions are designed to cope with data of huge Volume and wide Variety that need to be ingested at high Velocity and may have Veracity issues, challenging characteristics that are usually referred to as the “4Vs of Big Data”. In order to evaluate possibly complex big data solutions, stress tests require assessing a large number of combinations of sub-components jointly with the possible big data variations. A formalization of the Design of Experiments (DoE) on big data solutions aims at ensuring the reproducibility of the experiments, facilitating their partitioning into sub-experiments and guaranteeing the consistency of their outcomes in a global assessment. In this paper, an ontology-based approach is proposed to support the evaluation of a big data system in two ways. Firstly, the approach formalizes a decomposition and recombination of the big data solution, allowing for the aggregation of component evaluation results at the inter-component level. Secondly, existing work on DoE is translated into an ontology for supporting the selection of experiments. The proposed ontology-based approach offers the possibility to combine knowledge from the evaluation domain and the application domain. It exploits domain and inter-domain specific restrictions on the factor combinations in order to reduce the number of experiments. Contrary to existing approaches, the proposed use of ontologies is not limited to the assertional description and exploitation of past experiments, but offers richer terminological descriptions for the development of a DoE from scratch. As an application example, a maritime big data solution to the problem of detecting and predicting vessel suspicious behaviour through mobility analysis is selected. The article concludes with a sketch of future work.
Tasks
Published 2019-04-18
URL http://arxiv.org/abs/1904.08626v1
PDF http://arxiv.org/pdf/1904.08626v1.pdf
PWC https://paperswithcode.com/paper/ontology-based-design-of-experiments-on-big
Repo
Framework
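
The experiment-reduction idea can be illustrated without any ontology machinery: restrictions on factor combinations prune the full factorial design before execution. The factor names and the restriction in the example below are invented for illustration; the paper encodes such knowledge as ontology axioms rather than Python predicates.

```python
from itertools import product

def design_experiments(factors: dict, restrictions) -> list:
    """Enumerate only the factor combinations that satisfy every
    domain restriction, in the spirit of the paper's ontology-based DoE.
    """
    names = list(factors)
    runs = []
    for values in product(*(factors[n] for n in names)):
        run = dict(zip(names, values))
        if all(ok(run) for ok in restrictions):
            runs.append(run)
    return runs

# hypothetical maritime example (names are assumptions, not the paper's)
factors = {
    "ingestion_rate": ["1k msg/s", "10k msg/s"],
    "storage": ["row", "columnar"],
    "detector": ["rule-based", "learned"],
}
restrictions = [lambda r: not (r["storage"] == "row"
                               and r["ingestion_rate"] == "10k msg/s")]
print(len(design_experiments(factors, restrictions)))  # 6 instead of 8
```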

Multi-Agent Task Allocation in Complementary Teams: A Hunter and Gatherer Approach

Title Multi-Agent Task Allocation in Complementary Teams: A Hunter and Gatherer Approach
Authors Mehdi Dadvar, Saeed Moazami, Harley R. Myler, Hassan Zargarzadeh
Abstract Consider a dynamic task allocation problem, where tasks are unknowingly distributed over an environment. This paper considers each task as comprising two sequential subtasks, detection and completion, where each subtask can only be carried out by a certain type of agent. We address this problem using a novel nature-inspired approach called “hunter and gatherer”. The proposed method employs two complementary teams of agents: one agile in detecting (hunters) and another skillful in completing (gatherers) the tasks. To minimize the collective cost of task accomplishments in a distributed manner, a game-theoretic solution is introduced to couple agents from the complementary teams. We utilize market-based negotiation models to develop incentive-based decision-making algorithms relying on innovative notions of “certainty and uncertainty profit margins”. The simulation results demonstrate that employing two complementary teams of hunters and gatherers can effectively improve the number of tasks completed by agents compared to conventional methods, while the collective cost of accomplishments is minimized. In addition, the stability and efficacy of the proposed solutions are studied using Nash equilibrium analysis and statistical analysis, respectively. It is also numerically shown that the proposed solutions function fairly, i.e., for each type of agent, the overall workload is distributed equally.
Tasks Decision Making
Published 2019-12-12
URL https://arxiv.org/abs/1912.05748v2
PDF https://arxiv.org/pdf/1912.05748v2.pdf
PWC https://paperswithcode.com/paper/multi-agent-task-allocation-in-complementary
Repo
Framework
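
As a toy illustration of the market-based coupling between the two teams, the sketch below assigns each hunter-detected task to the gatherer with the lowest bid. The single fixed profit margin is a placeholder for the paper's certainty and uncertainty profit margins, which are more elaborate.

```python
import numpy as np

def auction_tasks(task_xy: np.ndarray, gatherer_xy: np.ndarray,
                  margin: float = 0.1) -> dict:
    """Greedy market-based assignment: each detected task (announced by a
    hunter) goes to the free gatherer whose bid, travel cost plus a profit
    margin, is lowest. Returns {task_index: gatherer_index}.
    """
    assignment, busy = {}, set()
    for t, task in enumerate(task_xy):
        bids = {g: np.linalg.norm(task - pos) * (1 + margin)
                for g, pos in enumerate(gatherer_xy) if g not in busy}
        if not bids:
            break                      # all gatherers committed
        winner = min(bids, key=bids.get)
        assignment[t] = winner
        busy.add(winner)
    return assignment
```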

ED2: Two-stage Active Learning for Error Detection – Technical Report

Title ED2: Two-stage Active Learning for Error Detection – Technical Report
Authors Felix Neutatz, Mohammad Mahdavi, Ziawasch Abedjan
Abstract Traditional error detection approaches require user-defined parameters and rules. Thus, the user has to know both the error detection system and the data. However, we can also formulate error detection as a semi-supervised classification problem that only requires domain expertise. The challenges for such an approach are twofold: (1) to represent the data in a way that enables a classification model to identify various kinds of data errors, and (2) to pick the most promising data values for learning. In this paper, we address these challenges with ED2, our new example-driven error detection method. First, we present a new two-dimensional multi-classifier sampling strategy for active learning. Second, we propose novel multi-column features. The combined application of these techniques provides fast convergence of the classification task with high detection accuracy. On several real-world datasets, ED2 requires, on average, less than 1% labels to outperform existing error detection approaches. This report extends the peer-reviewed paper “ED2: A Case for Active Learning in Error Detection”. All source code related to this project is available on GitHub.
Tasks Active Learning
Published 2019-08-17
URL https://arxiv.org/abs/1908.06309v1
PDF https://arxiv.org/pdf/1908.06309v1.pdf
PWC https://paperswithcode.com/paper/ed2-two-stage-active-learning-for-error
Repo
Framework
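
A rough sketch of the two-dimensional sampling idea: one error classifier per column, with the query budget spent on the least confident column's least confident cells. The feature extraction, classifier choice, and bootstrap rule are simplifications of ED2's actual strategy, assumed here for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def ed2_style_query(col_features, labeled, batch=20):
    """Pick (column, cell indices) to label next.

    col_features: {column: (n_cells, n_features) array};
    labeled: {column: (indices, labels)} collected so far.
    """
    scores = {}
    for col, X in col_features.items():
        idx, y = labeled.get(col, ([], []))
        if len(set(y)) < 2:
            return col, list(range(min(batch, len(X))))   # bootstrap column
        clf = GradientBoostingClassifier().fit(X[idx], y)
        proba = clf.predict_proba(X)
        conf = np.abs(proba[:, 1] - 0.5)                  # distance from boundary
        scores[col] = (conf.mean(), np.argsort(conf)[:batch])
    col = min(scores, key=lambda c: scores[c][0])         # least certain column
    return col, scores[col][1].tolist()
```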

Clear Skies Ahead: Towards Real-Time Automatic Sky Replacement in Video

Title Clear Skies Ahead: Towards Real-Time Automatic Sky Replacement in Video
Authors Tavi Halperin, Harel Cain, Ofir Bibi, Michael Werman
Abstract Digital videos such as those captured by a smartphone often exhibit exposure inconsistencies, a poorly exposed sky, or simply suffer from an uninteresting or plain looking sky. Professionals may edit these videos using advanced and time-consuming tools unavailable to most users, to replace the sky with a more expressive or imaginative sky. In this work, we propose an algorithm for automatic replacement of the sky region in a video with a different sky, providing nonprofessional users with a simple yet efficient tool to seamlessly replace the sky. The method is fast, achieving close to real-time performance on mobile devices and the user’s involvement can remain as limited as simply selecting the replacement sky.
Tasks
Published 2019-03-06
URL http://arxiv.org/abs/1903.02582v1
PDF http://arxiv.org/pdf/1903.02582v1.pdf
PWC https://paperswithcode.com/paper/clear-skies-ahead-towards-real-time-automatic
Repo
Framework
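
The compositing step at the heart of sky replacement is simple once a sky mask is available; the sketch below shows only that alpha-blending step, assuming the mask is given, and omits the paper's automatic sky segmentation, color harmonization, and temporal handling.

```python
import numpy as np

def replace_sky(frame: np.ndarray, new_sky: np.ndarray,
                sky_mask: np.ndarray) -> np.ndarray:
    """Alpha-blend a replacement sky into a frame.

    frame, new_sky: (H, W, 3) uint8 images; sky_mask: (H, W) per-pixel
    sky probability in [0, 1] (estimated automatically in the paper).
    """
    alpha = sky_mask.astype(np.float32)[..., None]        # (H, W, 1)
    out = (alpha * new_sky.astype(np.float32)
           + (1.0 - alpha) * frame.astype(np.float32))
    return out.clip(0, 255).astype(np.uint8)
```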

Enhanced Variational Inference with Dyadic Transformation

Title Enhanced Variational Inference with Dyadic Transformation
Authors Sarin Chandy, Amin Rasekh
Abstract The variational autoencoder (VAE) is a powerful deep generative model with variational inference. The practice of modeling latent variables in the VAE’s original formulation as normal distributions with a diagonal covariance matrix limits the flexibility to match the true posterior distribution. We propose a new transformation, the dyadic transformation (DT), that can model a multivariate normal distribution. DT is a single-stage transformation with low computational requirements. We demonstrate empirically on the MNIST dataset that DT enhances posterior flexibility and attains competitive results compared to other VAE enhancements.
Tasks
Published 2019-01-30
URL http://arxiv.org/abs/1901.10621v2
PDF http://arxiv.org/pdf/1901.10621v2.pdf
PWC https://paperswithcode.com/paper/enhanced-variational-inference-with-dyadic
Repo
Framework
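
The abstract does not spell the transformation out; one plausible reading of "dyadic" is a rank-one (outer-product) linear map applied to the diagonal-Gaussian sample, which induces a full covariance at low cost. The sketch below implements that reading purely as an assumption, including the log-determinant term an ELBO correction would need.

```python
import torch

def dyadic_sample(mu, log_sigma, u, v):
    """Assumed rank-one ('dyadic') posterior transformation:
    z' = (I + u v^T) z applied to the usual diagonal-Gaussian sample.
    mu, log_sigma, u, v: tensors of shape (batch, latent_dim).
    """
    eps = torch.randn_like(mu)
    z = mu + log_sigma.exp() * eps                      # diagonal-covariance sample
    z = z + u * (v * z).sum(dim=-1, keepdim=True)       # rank-one update
    # log |det(I + u v^T)| = log |1 + v . u| by the matrix determinant lemma,
    # needed to correct the ELBO's entropy term:
    log_det = torch.log((1 + (u * v).sum(dim=-1)).abs())
    return z, log_det
```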

Is the Policy Gradient a Gradient?

Title Is the Policy Gradient a Gradient?
Authors Chris Nota, Philip S. Thomas
Abstract The policy gradient theorem describes the gradient of the expected discounted return with respect to an agent’s policy parameters. However, most policy gradient methods drop the discount factor from the state distribution and therefore do not optimize the discounted objective. What do they optimize instead? This has been an open question for several years, and this lack of theoretical clarity has led to an abundance of misstatements in the literature. We answer this question by proving that the update direction approximated by most methods is not the gradient of any function. Further, we argue that algorithms that follow this direction are not guaranteed to converge to a “reasonable” fixed point, by constructing a counterexample wherein the fixed point is globally pessimal with respect to both the discounted and undiscounted objectives. We motivate this work by surveying the literature and showing that there remains a widespread misunderstanding regarding discounted policy gradient methods, with errors present even in highly-cited papers published at top conferences.
Tasks Policy Gradient Methods
Published 2019-06-17
URL https://arxiv.org/abs/1906.07073v2
PDF https://arxiv.org/pdf/1906.07073v2.pdf
PWC https://paperswithcode.com/paper/is-the-policy-gradient-a-gradient
Repo
Framework
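
The gap the paper analyzes can be stated with the standard form of the policy gradient theorem for the discounted objective:

```latex
\nabla_\theta J(\theta)
  = \sum_{s} d^{\pi}_{\gamma}(s) \sum_{a} \nabla_\theta \pi_\theta(a \mid s)\, Q^{\pi}(s,a),
\qquad
d^{\pi}_{\gamma}(s) \propto \sum_{t=0}^{\infty} \gamma^{t} \Pr(S_t = s \mid \pi).
```

Most implementations instead weight states by the undiscounted distribution $d^{\pi}$, i.e. they drop the $\gamma^{t}$ factor from $d^{\pi}_{\gamma}$; the paper proves that the resulting update direction is not the gradient of any function.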

Differentially Private Objective Perturbation: Beyond Smoothness and Convexity

Title Differentially Private Objective Perturbation: Beyond Smoothness and Convexity
Authors Seth Neel, Aaron Roth, Giuseppe Vietri, Zhiwei Steven Wu
Abstract One of the most effective algorithms for differentially private learning and optimization is objective perturbation. This technique augments a given optimization problem (e.g., one derived from an ERM problem) with a random linear term, and then exactly solves it. However, to date, analyses of this approach crucially rely on the convexity and smoothness of the objective function. We give two algorithms that extend this approach substantially. The first algorithm requires nothing except boundedness of the loss function, and operates over a discrete domain. Its privacy and accuracy guarantees hold even without assuming convexity. The second algorithm operates over a continuous domain and requires only that the loss function be bounded and Lipschitz in its continuous parameter. Its privacy analysis does not even require convexity. Its accuracy analysis does require convexity, but does not require second-order conditions like smoothness. We complement our theoretical results with an empirical evaluation of the non-convex case, in which we use an integer program solver as our optimization oracle. We find that for the problem of learning linear classifiers, directly optimizing for 0/1 loss using our approach can outperform the more standard approach of privately optimizing a convex surrogate loss function on the Adult dataset.
Tasks
Published 2019-09-03
URL https://arxiv.org/abs/1909.01783v1
PDF https://arxiv.org/pdf/1909.01783v1.pdf
PWC https://paperswithcode.com/paper/differentially-private-objective-perturbation
Repo
Framework
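
For context, the classical convex-and-smooth instance of objective perturbation that the paper generalizes can be sketched as follows: a random linear term is added to a regularized ERM objective, which is then solved exactly. The noise scale below is a simplified placeholder, not a calibrated privacy guarantee; see the paper and its references for the exact calibration.

```python
import numpy as np
from scipy.optimize import minimize

def objective_perturbation(X, y, eps: float, lam: float = 1.0, seed: int = 0):
    """Schematic objective perturbation for regularized logistic regression.

    X: (n, d) features; y: labels in {-1, +1}; eps: privacy parameter
    (used here only to set an illustrative noise scale).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    b = rng.normal(scale=2.0 / eps, size=d)       # random linear term

    def loss(theta):
        margins = y * (X @ theta)
        return (np.logaddexp(0, -margins).mean()  # logistic loss
                + lam * theta @ theta / (2 * n)   # regularizer
                + b @ theta / n)                  # the perturbation
    return minimize(loss, np.zeros(d)).x          # solve the perturbed problem
```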