January 29, 2020

3131 words 15 mins read

Paper Group ANR 695

Meta Module Network for Compositional Visual Reasoning. Testing Self-Organizing Multiagent Systems. Distributed Learning with Compressed Gradient Differences. Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers. Finite-Time Performance of Distributed Temporal Difference Learning with Linear Function Ap …

Meta Module Network for Compositional Visual Reasoning

Title Meta Module Network for Compositional Visual Reasoning
Authors Wenhu Chen, Zhe Gan, Linjie Li, Yu Cheng, William Wang, Jingjing Liu
Abstract There are two main lines of research on visual reasoning: neural module network (NMN) with explicit multi-hop reasoning through handcrafted neural modules, and monolithic network with implicit reasoning in the latent feature space. The former excels in interpretability and compositionality, while the latter usually achieves better performance due to model flexibility and parameter efficiency. In order to bridge the gap between the two and leverage the merits of both, we present Meta Module Network (MMN), a novel hybrid approach that can utilize a Meta Module to perform versatile functionalities, while preserving compositionality and interpretability through modularized design. The proposed model first parses an input question into a functional program through a Program Generator. Instead of handcrafting a task-specific network to represent each function as in traditional NMN, we propose a Meta Module, which can read a recipe (function specifications) to dynamically instantiate the task-specific Instance Modules for compositional reasoning. To endow different instance modules with designated functionalities, we design a symbolic teacher which can execute against provided scene graphs to generate guidelines for the instantiated modules (student) to follow during training. Experiments conducted on the GQA benchmark demonstrate that MMN outperforms both NMN and monolithic network baselines, with good generalization ability to handle unseen functions.
Tasks Visual Reasoning
Published 2019-10-08
URL https://arxiv.org/abs/1910.03230v2
PDF https://arxiv.org/pdf/1910.03230v2.pdf
PWC https://paperswithcode.com/paper/meta-module-network-for-compositional-visual
Repo
Framework
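
For a sense of how a single meta module can host many instance modules, here is a minimal PyTorch sketch under strong assumptions: recipes are reduced to integer function IDs, dimensions are invented, and the program generator and symbolic teacher are omitted entirely.

```python
# A minimal sketch of the Meta Module idea; not the authors' code.
import torch
import torch.nn as nn

class MetaModule(nn.Module):
    """Embeds a function recipe and conditions a shared network on it."""
    def __init__(self, num_functions, d_model=128):
        super().__init__()
        self.func_embed = nn.Embedding(num_functions, d_model)
        self.net = nn.Sequential(nn.Linear(2 * d_model, d_model), nn.ReLU(),
                                 nn.Linear(d_model, d_model))

    def forward(self, func_id, state):
        # The recipe embedding specializes the shared parameters into an
        # "instance module" for this particular function.
        recipe = self.func_embed(func_id).expand_as(state)
        return self.net(torch.cat([recipe, state], dim=-1))

meta = MetaModule(num_functions=10)
state = torch.randn(4, 128)              # features from the previous hop
out = meta(torch.tensor([3]), state)     # instantiate function #3
print(out.shape)                         # torch.Size([4, 128])
```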

Testing Self-Organizing Multiagent Systems

Title Testing Self-Organizing Multiagent Systems
Authors Nathalia Nascimento, Carlos Lucena, Paulo Alencar, Carlos Juliano Viana
Abstract Multiagent Systems (MASs) involve characteristics such as autonomy, asynchrony, and social features, which make these systems difficult to understand. Thus, there is a lack of procedures guaranteeing that multiagent systems will behave as desired. Further complicating the situation is the fact that current agent-based approaches may also involve non-deterministic characteristics, such as learning, self-adaptation and self-organization (SASO). Nonetheless, there is a gap in the literature regarding the testing of systems with these features. This paper presents a publish-subscribe-based approach to develop test applications that facilitate the process of failure diagnosis in a self-organizing MAS. These tests are able to detect failures in the global behavior of the system or in the local properties of its parts. To illustrate the use of this approach, we developed a self-organizing MAS in the context of the Internet of Things (IoT), which simulates a set of smart street lights, and we performed functional ad-hoc tests. The street lights need to interact with each other in order to achieve the global goals of reducing energy consumption while maintaining maximum visual comfort in illuminated areas. To achieve these global behaviors, the street lights develop local behaviors automatically through a self-organizing process based on machine learning algorithms.
Tasks
Published 2019-04-03
URL http://arxiv.org/abs/1904.01736v1
PDF http://arxiv.org/pdf/1904.01736v1.pdf
PWC https://paperswithcode.com/paper/testing-self-organizing-multiagent-systems
Repo
Framework
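
The testing idea rests on a publish-subscribe channel through which test code can observe both local agent properties and emergent global behavior. A minimal sketch, assuming invented topic names and thresholds rather than the authors' framework:

```python
# A minimal sketch of publish-subscribe test instrumentation for a
# self-organizing MAS; topics and the energy budget are illustrative.
from collections import defaultdict

class Bus:
    def __init__(self):
        self.subs = defaultdict(list)
    def subscribe(self, topic, fn):
        self.subs[topic].append(fn)
    def publish(self, topic, msg):
        for fn in self.subs[topic]:
            fn(msg)

bus = Bus()
readings = []

# The test harness subscribes to the agents' local reports...
bus.subscribe("light/energy", readings.append)

# ...and the agents (street lights) publish their local properties.
for light_id, watts in [("l1", 40.0), ("l2", 35.0), ("l3", 55.0)]:
    bus.publish("light/energy", {"id": light_id, "watts": watts})

# Global-behavior check: total consumption stays under a budget.
assert sum(r["watts"] for r in readings) <= 150.0
# Local-property check: no single light exceeds its cap.
assert all(r["watts"] <= 60.0 for r in readings)
```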

Distributed Learning with Compressed Gradient Differences

Title Distributed Learning with Compressed Gradient Differences
Authors Konstantin Mishchenko, Eduard Gorbunov, Martin Takáč, Peter Richtárik
Abstract Training large machine learning models requires a distributed computing approach, with communication of the model updates being the bottleneck. For this reason, several methods based on the compression (e.g., sparsification and/or quantization) of updates were recently proposed, including QSGD (Alistarh et al., 2017), TernGrad (Wen et al., 2017), SignSGD (Bernstein et al., 2018), and DQGD (Khirirat et al., 2018). However, none of these methods are able to learn the gradients, which renders them incapable of converging to the true optimum in the batch mode and incompatible with non-smooth regularizers, and slows down their convergence. In this work we propose a new distributed learning method, DIANA, which resolves these issues via compression of gradient differences. We perform a theoretical analysis in the strongly convex and nonconvex settings and show that our rates are superior to existing rates. Our analysis of block-quantization and of the differences between $\ell_2$ and $\ell_\infty$ quantization closes the gaps in theory and practice. Finally, by applying our analysis technique to TernGrad, we establish the first convergence rate for this method.
Tasks Quantization
Published 2019-01-26
URL https://arxiv.org/abs/1901.09269v2
PDF https://arxiv.org/pdf/1901.09269v2.pdf
PWC https://paperswithcode.com/paper/distributed-learning-with-compressed-gradient
Repo
Framework
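
The key mechanism, compressing the difference between the local gradient and a learned shift vector rather than the gradient itself, can be sketched in a few lines. This is a single-worker toy with random sparsification standing in for the compressor; step sizes and dimensions are illustrative.

```python
# A minimal single-worker sketch of DIANA's update; not the full method.
import numpy as np

def compress(v, k=2):
    """Keep k random coordinates (unbiased: rescale by dim/k)."""
    idx = np.random.choice(v.size, k, replace=False)
    out = np.zeros_like(v)
    out[idx] = v[idx] * v.size / k
    return out

def diana_step(grad, h, alpha=0.5):
    """Compress the *difference* grad - h, then shift h toward grad."""
    delta_hat = compress(grad - h)
    g_hat = h + delta_hat          # estimate of the true gradient
    h = h + alpha * delta_hat      # learned shift drifts toward the gradient
    return g_hat, h

h = np.zeros(4)
for grad in [np.array([1.0, -2.0, 0.5, 3.0])] * 3:
    g_hat, h = diana_step(grad, h)
print(h)   # h approaches the (here constant) gradient over iterations
```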

Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers

Title Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers
Authors Xishan Zhang, Shaoli Liu, Rui Zhang, Chang Liu, Di Huang, Shiyi Zhou, Jiaming Guo, Yu Kang, Qi Guo, Zidong Du, Yunji Chen
Abstract The recently emerged quantization technique has been applied to the inference of deep neural networks for fast and efficient execution. However, directly applying quantization to training can cause significant accuracy loss, and it thus remains an open challenge.
Tasks Image Classification, Machine Translation, Object Detection, Quantization
Published 2019-11-01
URL https://arxiv.org/abs/1911.00361v2
PDF https://arxiv.org/pdf/1911.00361v2.pdf
PWC https://paperswithcode.com/paper/adaptive-precision-training-quantify-back
Repo
Framework
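
As background for what training with fixed-point numbers involves, here is a minimal sketch of symmetric per-tensor fixed-point quantization applied to a gradient tensor. The bit width and scaling rule are illustrative; the paper's adaptive precision selection is not reproduced.

```python
# A minimal sketch of symmetric fixed-point quantization; illustrative only.
import numpy as np

def to_fixed_point(x, bits=8):
    """Quantize x to signed fixed-point with a per-tensor scale."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax if np.any(x) else 1.0
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q.astype(np.int32), scale

def from_fixed_point(q, scale):
    return q.astype(np.float32) * scale

grads = np.random.randn(5).astype(np.float32)
q, s = to_fixed_point(grads, bits=8)
print(grads)
print(from_fixed_point(q, s))  # dequantized values closely track the originals
```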

Finite-Time Performance of Distributed Temporal Difference Learning with Linear Function Approximation

Title Finite-Time Performance of Distributed Temporal Difference Learning with Linear Function Approximation
Authors Thinh T. Doan, Siva Theja Maguluri, Justin Romberg
Abstract We study the policy evaluation problem in multi-agent reinforcement learning, modeled by a Markov decision process. In this problem, the agents operate in a common environment under a fixed control policy, working together to discover the value (global discounted accumulative reward) associated with each environmental state. Over a series of time steps, the agents act, get rewarded, update their local estimate of the value function, then communicate with their neighbors. The local update at each agent can be interpreted as a distributed variant of the popular temporal difference learning methods {\sf TD}$(\lambda)$. Our main contribution is to provide a finite-time analysis of the performance of this distributed {\sf TD}$(\lambda)$ algorithm for both constant and time-varying step sizes. The key idea in our analysis is to use the geometric mixing time $\tau$ of the underlying Markov chain: although the “noise” in our algorithm is Markovian, its dependence is very weak between samples spaced out by $\tau$. We provide an explicit upper bound on the convergence rate of the proposed method as a function of the network topology, the discount factor, the constant $\lambda$, and the mixing time $\tau$. Our results also provide a mathematical explanation for observations that have appeared previously in the literature about the choice of $\lambda$. Our upper bound illustrates the trade-off between approximation accuracy and convergence speed implicit in the choice of $\lambda$. When $\lambda=1$, the solution will correspond to the best possible approximation of the value function, while choosing $\lambda = 0$ leads to faster convergence when the noise in the algorithm has large variance.
Tasks Multi-agent Reinforcement Learning
Published 2019-07-25
URL https://arxiv.org/abs/1907.12530v2
PDF https://arxiv.org/pdf/1907.12530v2.pdf
PWC https://paperswithcode.com/paper/finite-time-performance-of-distributed
Repo
Framework
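
A minimal sketch of the algorithm's two ingredients: a local TD($\lambda$) update with eligibility traces, followed by a consensus (mixing) step over neighbors. The environment, features, and mixing matrix below are invented placeholders.

```python
# A minimal sketch of distributed TD(lambda) with linear function
# approximation; all data and the mixing matrix are placeholders.
import numpy as np

def td_lambda_step(theta, z, phi_s, phi_s2, r, gamma=0.9, lam=0.5, step=0.05):
    z = gamma * lam * z + phi_s                       # eligibility trace
    delta = r + gamma * phi_s2 @ theta - phi_s @ theta
    return theta + step * delta * z, z

d, n_agents = 3, 4
thetas = [np.zeros(d) for _ in range(n_agents)]
traces = [np.zeros(d) for _ in range(n_agents)]
W = np.full((n_agents, n_agents), 1.0 / n_agents)     # doubly stochastic mixing

for _ in range(100):
    for i in range(n_agents):
        phi_s, phi_s2 = np.random.rand(d), np.random.rand(d)
        r = phi_s.sum()                               # toy local reward
        thetas[i], traces[i] = td_lambda_step(thetas[i], traces[i],
                                              phi_s, phi_s2, r)
    # Consensus step: each agent mixes its parameters with its neighbors'.
    thetas = list(W @ np.stack(thetas))
print(np.round(thetas[0], 2))
```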

Spectral Clustering via Ensemble Deep Autoencoder Learning (SC-EDAE)

Title Spectral Clustering via Ensemble Deep Autoencoder Learning (SC-EDAE)
Authors Severine Affeldt, Lazhar Labiod, Mohamed Nadif
Abstract Recently, a number of works have studied clustering strategies that combine classical clustering algorithms and deep learning methods. These approaches follow either a sequential way, where a deep representation is learned using a deep autoencoder before obtaining clusters with k-means, or a simultaneous way, where deep representation and clusters are learned jointly by optimizing a single objective function. Both strategies improve clustering performance; however, the robustness of these approaches is impeded by several deep autoencoder settings, such as the weight initialization, the width and number of layers, and the number of epochs. To alleviate the impact of such hyperparameter settings on clustering performance, we propose a new model which combines the spectral clustering and deep autoencoder strengths in an ensemble learning framework. Extensive experiments on various benchmark datasets demonstrate the potential and robustness of our approach compared to state-of-the-art deep clustering methods.
Tasks
Published 2019-01-08
URL https://arxiv.org/abs/1901.02291v2
PDF https://arxiv.org/pdf/1901.02291v2.pdf
PWC https://paperswithcode.com/paper/spectral-clustering-via-ensemble-deep
Repo
Framework
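
A minimal sketch of the ensemble pipeline, assuming scikit-learn autoencoders (MLPs trained to reconstruct their input) whose hidden representations are fused before spectral clustering; widths and data are invented, and the paper's specific fusion over affinity graphs may differ from this feature-level concatenation.

```python
# A minimal sketch of ensembling autoencoder embeddings for spectral
# clustering; sizes, data, and the fusion step are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.cluster import SpectralClustering

rng = np.random.RandomState(0)
X = rng.rand(200, 20)

embeddings = []
for width in (8, 12, 16):                      # ensemble over architectures
    ae = MLPRegressor(hidden_layer_sizes=(width,), max_iter=500,
                      random_state=0).fit(X, X)  # reconstruct the input
    # Encoder output = hidden representation: relu(X W1 + b1).
    H = np.maximum(0, X @ ae.coefs_[0] + ae.intercepts_[0])
    embeddings.append(H)

Z = np.hstack(embeddings)                      # fuse ensemble embeddings
labels = SpectralClustering(n_clusters=3, random_state=0).fit_predict(Z)
print(np.bincount(labels))
```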

Approximate Causal Abstraction

Title Approximate Causal Abstraction
Authors Sander Beckers, Frederick Eberhardt, Joseph Y. Halpern
Abstract Scientific models describe natural phenomena at different levels of abstraction. Abstract descriptions can provide the basis for interventions on the system and explanation of observed phenomena at a level of granularity that is coarser than the most fundamental account of the system. Beckers and Halpern (2019), building on work of Rubenstein et al. (2017), developed an account of abstraction for causal models that is exact. Here we extend this account to the more realistic case where an abstract causal model offers only an approximation of the underlying system. We show how the resulting account handles the discrepancy that can arise between low- and high-level causal models of the same system, and in the process provide an account of how one causal model approximates another, a topic of independent interest. Finally, we extend the account of approximate abstractions to probabilistic causal models, indicating how and where uncertainty can enter into an approximate abstraction.
Tasks
Published 2019-06-27
URL https://arxiv.org/abs/1906.11583v2
PDF https://arxiv.org/pdf/1906.11583v2.pdf
PWC https://paperswithcode.com/paper/approximate-causal-abstraction
Repo
Framework
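
A toy numeric illustration of approximate abstraction, assuming invented structural equations: a low-level model over $X_1, X_2$ is abstracted to a high-level model over their sum, and the maximum disagreement across interventions quantifies how approximate the abstraction is.

```python
# A toy sketch of approximate abstraction between two causal models;
# the structural equations and error metric are illustrative.
import itertools

def low_level(x1, x2):            # low-level structural equation for Y
    return 2.0 * x1 + 1.9 * x2    # coefficients differ slightly

def high_level(x):                # abstract model keeps only the sum:
    return 2.0 * x                # Y ~ 2 * (X1 + X2)

# Compare the two models over all low-level interventions on a small grid;
# the max discrepancy measures how *approximate* the abstraction is.
err = max(abs(low_level(x1, x2) - high_level(x1 + x2))
          for x1, x2 in itertools.product(range(4), repeat=2))
print(err)   # ~0.3 here; an exact abstraction would give 0
```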

Prioritized Guidance for Efficient Multi-Agent Reinforcement Learning Exploration

Title Prioritized Guidance for Efficient Multi-Agent Reinforcement Learning Exploration
Authors Qisheng Wang, Qichao Wang
Abstract Exploration efficiency is a challenging problem in multi-agent reinforcement learning (MARL), as the policy learned by cooperative MARL depends on the collaborative approach among multiple agents. Another important problem is that the less informative reward restricts the learning speed of MARL compared with the informative labels in supervised learning. In this work, we leverage a novel communication method to guide MARL toward accelerated exploration: we propose a predictive network to forecast the reward of the current state-action pair and use the guidance learned by the predictive network to modify the reward function. An improved prioritized experience replay, which utilizes the temporal-difference (TD) error more effectively, is employed to better take advantage of the different knowledge learned by different agents. Experimental results demonstrate that the proposed algorithm outperforms existing methods in cooperative multi-agent environments. We remark that this algorithm can be extended to supervised learning to speed up its training.
Tasks Multi-agent Reinforcement Learning
Published 2019-07-18
URL https://arxiv.org/abs/1907.07847v3
PDF https://arxiv.org/pdf/1907.07847v3.pdf
PWC https://paperswithcode.com/paper/prioritized-guidance-for-efficient-multi
Repo
Framework
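
One of the building blocks, experience replay prioritized by TD error, can be sketched compactly; the buffer contents, priority exponent, and epsilon below are illustrative, and the paper's reward-predictive network is omitted.

```python
# A minimal sketch of prioritized experience replay keyed on TD error.
import numpy as np

class PrioritizedReplay:
    def __init__(self, eps=1e-2, alpha=0.6):
        self.items, self.prios = [], []
        self.eps, self.alpha = eps, alpha

    def add(self, transition, td_error):
        self.items.append(transition)
        self.prios.append((abs(td_error) + self.eps) ** self.alpha)

    def sample(self, k):
        p = np.array(self.prios) / sum(self.prios)
        idx = np.random.choice(len(self.items), size=k, p=p)
        return [self.items[i] for i in idx]

buf = PrioritizedReplay()
for t, td in enumerate([0.1, 2.0, 0.05, 1.5]):
    buf.add(("transition", t), td)
print(buf.sample(2))   # high-|TD| transitions are sampled more often
```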

Voting-Based Multi-Agent Reinforcement Learning for Intelligent IoT

Title Voting-Based Multi-Agent Reinforcement Learning for Intelligent IoT
Authors Yue Xu, Zengde Deng, Mengdi Wang, Wenjun Xu, Anthony Man-Cho So, Shuguang Cui
Abstract The recent success of single-agent reinforcement learning (RL) in Internet of things (IoT) systems motivates the study of multi-agent reinforcement learning (MARL), which is more challenging but more useful in large-scale IoT. In this paper, we consider a voting-based MARL problem, in which the agents vote to make group decisions and the goal is to maximize the globally averaged returns. To this end, we formulate the MARL problem based on the linear programming form of the policy optimization problem and propose a distributed primal-dual algorithm to obtain the optimal solution. We also propose a voting mechanism through which the distributed learning achieves the same sublinear convergence rate as centralized learning. In other words, the distributed decision making does not slow down the process of achieving global consensus on optimality. Lastly, we verify the convergence of our proposed algorithm with numerical simulations and conduct case studies in practical multi-agent IoT systems.
Tasks Decision Making, Multi-agent Reinforcement Learning
Published 2019-07-02
URL https://arxiv.org/abs/1907.01385v2
PDF https://arxiv.org/pdf/1907.01385v2.pdf
PWC https://paperswithcode.com/paper/voting-based-multi-agent-reinforcement
Repo
Framework
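
A minimal sketch of the voting mechanism itself: each agent votes for an action from its local policy and the group executes the majority choice. The policies here are deterministic placeholders, not the paper's primal-dual learners.

```python
# A minimal sketch of voting-based group decision making; policies are
# stand-ins, not the paper's distributed primal-dual algorithm.
import numpy as np
from collections import Counter

n_agents, n_actions = 5, 3

def local_vote(agent, state):
    # Placeholder for each agent's learned policy (deterministic per
    # agent/state so the example is reproducible).
    logits = np.random.RandomState(agent + state).rand(n_actions)
    return int(np.argmax(logits))

state = 7
votes = [local_vote(i, state) for i in range(n_agents)]
group_action = Counter(votes).most_common(1)[0][0]
print(votes, "->", group_action)   # the group executes the majority action
```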

On a scalable problem transformation method for multi-label learning

Title On a scalable problem transformation method for multi-label learning
Authors Dora Jambor, Peng Yu
Abstract Binary relevance is a simple approach to solving multi-label learning problems, where an independent binary classifier is built for each label. A common challenge in real-world applications is that the label space can be very large, making it difficult to apply binary relevance to larger-scale problems. In this paper, we propose a scalable alternative by transforming the multi-label problem into a single binary classification problem. We experiment with a few variations of our method and show that it achieves higher precision than binary relevance and faster execution times on a top-K recommender system task.
Tasks Multi-Label Learning, Recommendation Systems
Published 2019-05-27
URL https://arxiv.org/abs/1905.11518v1
PDF https://arxiv.org/pdf/1905.11518v1.pdf
PWC https://paperswithcode.com/paper/on-a-scalable-problem-transformation-method
Repo
Framework
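
The transformation can be sketched directly: pair every instance with every label, attach a label descriptor to the features, and train one binary classifier on "is this label relevant for this instance?". Data, features, and the one-hot label descriptor below are invented.

```python
# A minimal sketch of casting multi-label learning as ONE binary problem
# over (instance, label) pairs; all data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X = rng.rand(100, 5)                 # instances
Y = rng.rand(100, 4) > 0.7           # multi-label targets, 4 labels
label_feats = np.eye(4)              # one-hot label descriptor

# Pair every instance with every label; target = "is this label relevant?"
pairs = np.hstack([np.repeat(X, 4, axis=0), np.tile(label_feats, (100, 1))])
targets = Y.ravel()

clf = LogisticRegression(max_iter=1000).fit(pairs, targets)

# Scoring a new instance: rank labels by predicted relevance (top-K style).
x = rng.rand(1, 5)
scores = clf.predict_proba(np.hstack([np.repeat(x, 4, axis=0),
                                      label_feats]))[:, 1]
print(scores.argsort()[::-1])        # labels sorted by relevance
```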

Stacked Autoencoder Based Deep Random Vector Functional Link Neural Network for Classification

Title Stacked Autoencoder Based Deep Random Vector Functional Link Neural Network for Classification
Authors Rakesh Katuwal, P. N. Suganthan
Abstract Extreme learning machine (ELM), which can be viewed as a variant of Random Vector Functional Link (RVFL) network without the input-output direct connections, has been extensively used to create multi-layer (deep) neural networks. Such networks employ randomization based autoencoders (AE) for unsupervised feature extraction followed by an ELM classifier for final decision making. Each randomization based AE acts as an independent feature extractor and a deep network is obtained by stacking several such AEs. Inspired by the better performance of RVFL over ELM, in this paper, we propose several deep RVFL variants by utilizing the framework of stacked autoencoders. Specifically, we introduce direct connections (feature reuse) from preceding layers to the fore layers of the network as in the original RVFL network. Such connections help to regularize the randomization and also reduce the model complexity. Furthermore, we also introduce a denoising criterion, recovering clean inputs from their corrupted versions, in the autoencoders to achieve better higher-level representations than ordinary autoencoders. Extensive experiments on several classification datasets show that our proposed deep networks achieve overall better and faster generalization than the other relevant state-of-the-art deep neural networks.
Tasks Decision Making, Denoising
Published 2019-10-04
URL https://arxiv.org/abs/1910.01858v4
PDF https://arxiv.org/pdf/1910.01858v4.pdf
PWC https://paperswithcode.com/paper/stacked-autoencoder-based-deep-random-vector
Repo
Framework
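
A minimal sketch of the RVFL building block with direct links: random hidden weights stay fixed, the raw input is concatenated with the hidden features, and only the output weights are solved in closed form via ridge regression. Sizes, data, and regularization are invented, and the stacking and denoising parts are omitted.

```python
# A minimal sketch of an RVFL layer with direct input-output connections.
import numpy as np

rng = np.random.RandomState(0)
X = rng.rand(120, 10)
y = (X.sum(axis=1) > 5).astype(float)            # toy binary target

W = rng.randn(10, 32)                            # random, never trained
b = rng.randn(32)
H = np.maximum(0, X @ W + b)                     # random hidden features
D = np.hstack([X, H])                            # direct links: reuse raw input

lam = 1e-2                                       # ridge regularization
beta = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)

pred = (D @ beta > 0.5).astype(float)
print((pred == y).mean())                        # training accuracy
```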

Learner-aware Teaching: Inverse Reinforcement Learning with Preferences and Constraints

Title Learner-aware Teaching: Inverse Reinforcement Learning with Preferences and Constraints
Authors Sebastian Tschiatschek, Ahana Ghosh, Luis Haug, Rati Devidze, Adish Singla
Abstract Inverse reinforcement learning (IRL) enables an agent to learn complex behavior by observing demonstrations from a (near-)optimal policy. The typical assumption is that the learner’s goal is to match the teacher’s demonstrated behavior. In this paper, we consider the setting where the learner has its own preferences that it additionally takes into consideration. These preferences can for example capture behavioral biases, mismatched worldviews, or physical constraints. We study two teaching approaches: learner-agnostic teaching, where the teacher provides demonstrations from an optimal policy ignoring the learner’s preferences, and learner-aware teaching, where the teacher accounts for the learner’s preferences. We design learner-aware teaching algorithms and show that significant performance improvements can be achieved over learner-agnostic teaching.
Tasks
Published 2019-06-02
URL https://arxiv.org/abs/1906.00429v2
PDF https://arxiv.org/pdf/1906.00429v2.pdf
PWC https://paperswithcode.com/paper/190600429
Repo
Framework
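
A toy sketch of the contrast between the two teaching modes, assuming invented rewards and learner preference costs: the learner discounts demonstrated actions by its own costs, so the teacher-optimal demonstration can be worse for the learner than a compromise.

```python
# A toy sketch of learner-agnostic vs learner-aware teaching; rewards and
# preference costs are invented to illustrate the gap between the two.
teacher_reward = {"a": 1.0, "b": 0.8, "c": 0.2}
learner_cost = {"a": 0.9, "b": 0.1, "c": 0.0}    # the learner dislikes "a"

def learner_value(action):
    # The learner's net value: demonstrated reward minus its own cost.
    return teacher_reward[action] - learner_cost[action]

agnostic_demo = max(teacher_reward, key=teacher_reward.get)   # picks "a"
aware_demo = max(teacher_reward, key=learner_value)           # picks "b"
print(agnostic_demo, learner_value(agnostic_demo))  # a 0.1
print(aware_demo, learner_value(aware_demo))        # b 0.7: better for learner
```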

Learning Continuous Face Age Progression: A Pyramid of GANs

Title Learning Continuous Face Age Progression: A Pyramid of GANs
Authors Hongyu Yang, Di Huang, Yunhong Wang, Anil K. Jain
Abstract The two underlying requirements of face age progression, i.e. aging accuracy and identity permanence, are not well studied in the literature. This paper presents a novel generative adversarial network based approach to address the issues in a coupled manner. It separately models the constraints for the intrinsic subject-specific characteristics and the age-specific facial changes with respect to the elapsed time, ensuring that the generated faces present desired aging effects while simultaneously keeping personalized properties stable. To ensure photo-realistic facial details, high-level age-specific features conveyed by the synthesized face are estimated by a pyramidal adversarial discriminator at multiple scales, which simulates the aging effects with finer details. Further, an adversarial learning scheme is introduced to simultaneously train a single generator and multiple parallel discriminators, resulting in smooth continuous face aging sequences. The proposed method is applicable even in the presence of variations in pose, expression, makeup, etc., achieving remarkably vivid aging effects. Quantitative evaluations by a COTS face recognition system demonstrate that the target age distributions are accurately recovered, and 99.88% and 99.98% of age-progressed faces can be correctly verified at 0.001% FAR after age transformations of approximately 28 and 23 years elapsed time on the MORPH and CACD databases, respectively. Both visual and quantitative assessments show that the approach advances the state-of-the-art.
Tasks Face Recognition
Published 2019-01-10
URL http://arxiv.org/abs/1901.07528v1
PDF http://arxiv.org/pdf/1901.07528v1.pdf
PWC https://paperswithcode.com/paper/learning-continuous-face-age-progression-a
Repo
Framework
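
A minimal PyTorch sketch of the pyramidal discriminator idea: a small critic applied to the face image at several resolutions, with the scores combined. The architecture and scales are invented, and the generator and identity constraints are omitted.

```python
# A minimal sketch of a multi-scale (pyramidal) discriminator; sizes are
# illustrative, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleCritic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 4, stride=2, padding=1))
    def forward(self, x):
        return self.net(x).mean(dim=(1, 2, 3))     # one score per image

class PyramidDiscriminator(nn.Module):
    def __init__(self, scales=(1.0, 0.5, 0.25)):
        super().__init__()
        self.scales = scales
        self.critics = nn.ModuleList(ScaleCritic() for _ in scales)
    def forward(self, x):
        # Apply one critic per resolution of the input pyramid.
        scores = [c(F.interpolate(x, scale_factor=s, mode="bilinear"))
                  for c, s in zip(self.critics, self.scales)]
        return torch.stack(scores, dim=1).mean(dim=1)

d = PyramidDiscriminator()
faces = torch.randn(2, 3, 64, 64)
print(d(faces).shape)   # torch.Size([2]): one realism score per face
```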

Prototypical Networks for Multi-Label Learning

Title Prototypical Networks for Multi-Label Learning
Authors Zhuo Yang, Yufei Han, Guoxian Yu, Xiangliang Zhang
Abstract We propose to address multi-label learning by jointly estimating the distribution of positive and negative instances for all labels. By a shared mapping function, each label’s positive and negative instances are mapped into a new space forming a mixture distribution of two components (positive and negative). Due to the dependency among labels, positive instances are mapped close if they share common labels, while positive and negative embeddings of the same label are pushed away. The distribution is learned in the new space, and thus captures both the distance between instances in their original feature space and their common membership w.r.t. different categories. By measuring the density function values, new instances mapped to the new space can easily identify their membership in possibly multiple categories. We use neural networks for learning the mapping function and use the expectations of the positive and negative embeddings as prototypes of the positive and negative components for each label, respectively. Therefore, we name our proposed method PNML (prototypical networks for multi-label learning). Extensive experiments verify that PNML significantly outperforms state-of-the-art methods.
Tasks Multi-Label Learning
Published 2019-11-17
URL https://arxiv.org/abs/1911.07203v1
PDF https://arxiv.org/pdf/1911.07203v1.pdf
PWC https://paperswithcode.com/paper/prototypical-networks-for-multi-label
Repo
Framework
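
A minimal sketch of the prototype construction for a single label, assuming an untrained stand-in for the learned mapping: the positive and negative prototypes are the mean embeddings of the label's positive and negative instances, and a new point is scored by which prototype it is closer to.

```python
# A minimal sketch of per-label positive/negative prototypes; the mapping
# is an untrained placeholder, not the paper's learned network.
import numpy as np

rng = np.random.RandomState(0)
W = rng.randn(10, 4)                      # stand-in for the learned mapping
phi = lambda X: np.tanh(X @ W)

X = rng.rand(50, 10)
y = (X[:, 0] > 0.5).astype(int)           # toy single label

pos_proto = phi(X[y == 1]).mean(axis=0)   # expectation of positive embeddings
neg_proto = phi(X[y == 0]).mean(axis=0)   # expectation of negative embeddings

def predict(x):
    z = phi(x[None])[0]
    # Label is "on" if the embedding lies closer to the positive prototype.
    return int(np.linalg.norm(z - pos_proto) < np.linalg.norm(z - neg_proto))

print(predict(rng.rand(10)))
```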

Classification of pulsars with Dirichlet process Gaussian mixture model

Title Classification of pulsars with Dirichlet process Gaussian mixture model
Authors F. Ay, G. İnce, M. E. Kamaşak, K. Y. Ekşi
Abstract Young isolated neutron stars (INS) most commonly manifest themselves as rotationally powered pulsars (RPPs) which involve conventional radio pulsars as well as gamma-ray pulsars (GRPs) and rotating radio transients (RRATs). Some other young INS families manifest themselves as anomalous X-ray pulsars (AXPs) and soft gamma-ray repeaters (SGRs) which are commonly accepted as magnetars, i.e. magnetically powered neutron stars with decaying superstrong fields. Yet some other young INS are identified as central compact objects (CCOs) and X-ray dim isolated neutron stars (XDINSs) which are cooling objects powered by their thermal energy. Older pulsars, as a result of a previous long episode of accretion from a companion, manifest themselves as millisecond pulsars and more commonly appear in binary systems. We use Dirichlet process Gaussian mixture model (DPGMM), an unsupervised machine learning algorithm, for analyzing the distribution of these pulsar families in the parameter space of period and period derivative. We compare the average values of the characteristic age, magnetic dipole field strength, surface temperature and transverse velocity of all discovered clusters. We verify that DPGMM is robust and provides hints for inferring relations between different classes of pulsars. We discuss the implications of our findings for the magneto-thermal spin evolution models and fallback discs.
Tasks
Published 2019-04-08
URL https://arxiv.org/abs/1904.04204v2
PDF https://arxiv.org/pdf/1904.04204v2.pdf
PWC https://paperswithcode.com/paper/classification-of-pulsars-with-dirichlet
Repo
Framework
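
A minimal sketch of fitting a truncated Dirichlet process Gaussian mixture in the (log period, log period-derivative) plane with scikit-learn's BayesianGaussianMixture; the synthetic two-family data below stands in for the pulsar catalog.

```python
# A minimal sketch of DPGMM clustering in the P-Pdot plane; the data is
# synthetic, standing in for the pulsar population.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.RandomState(0)
# Two synthetic "families": normal pulsars and millisecond pulsars.
normal = np.column_stack([rng.normal(0, 0.4, 300),     # log10 P ~ 1 s
                          rng.normal(-15, 0.7, 300)])  # log10 Pdot
msp = np.column_stack([rng.normal(-2.5, 0.2, 100),
                       rng.normal(-20, 0.5, 100)])
X = np.vstack([normal, msp])

dpgmm = BayesianGaussianMixture(
    n_components=10,                                   # truncation level
    weight_concentration_prior_type="dirichlet_process",
    max_iter=500, random_state=0).fit(X)

labels = dpgmm.predict(X)
print(np.unique(labels).size, "clusters used out of 10 components")
```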