January 29, 2020

3340 words 16 mins read

Paper Group ANR 717

Sample Complexity of Kalman Filtering for Unknown Systems. Environmental drivers of systematicity and generalization in a situated agent. Multinomial Random Forest: Toward Consistency and Privacy-Preservation. Semantic-Guided Multi-Attention Localization for Zero-Shot Learning. Parameter Optimization and Learning in a Spiking Neural Network for UAV …

Sample Complexity of Kalman Filtering for Unknown Systems

Title Sample Complexity of Kalman Filtering for Unknown Systems
Authors Anastasios Tsiamis, Nikolai Matni, George J. Pappas
Abstract In this paper, we consider the task of designing a Kalman Filter (KF) for an unknown and partially observed autonomous linear time-invariant system driven by process and sensor noise. To do so, we propose studying the following two-step process: first, using system identification tools rooted in subspace methods, we obtain coarse finite-data estimates of the state-space parameters and Kalman gain describing the autonomous system; and second, we use these approximate parameters to design a filter which produces estimates of the system state. We show that when the system identification step produces sufficiently accurate estimates, or when the underlying true KF is sufficiently robust, a Certainty Equivalent (CE) KF, i.e., one designed using the estimated parameters directly, enjoys provable sub-optimality guarantees. We further show that when these conditions fail, and in particular, when the CE KF is marginally stable (i.e., has eigenvalues very close to the unit circle), imposing additional robustness constraints on the filter leads to similar sub-optimality guarantees. We further show that with high probability, both the CE and robust filters have mean prediction error bounded by $\tilde O(1/\sqrt{N})$, where $N$ is the number of data points collected in the system identification step. To the best of our knowledge, these are the first end-to-end sample complexity bounds for the Kalman Filtering of an unknown system.
Tasks
Published 2019-12-27
URL https://arxiv.org/abs/1912.12309v2
PDF https://arxiv.org/pdf/1912.12309v2.pdf
PWC https://paperswithcode.com/paper/sample-complexity-of-kalman-filtering-for
Repo
Framework
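The certainty-equivalent design step above is easy to illustrate once identified parameters are in hand: solve the filtering Riccati equation with the estimated matrices and read off the steady-state gain. The sketch below assumes estimates A_hat, C_hat, Q_hat, R_hat already produced by some identification routine (the paper's subspace-based step and its robust filter variant are not reproduced here).

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def certainty_equivalent_kalman_gain(A_hat, C_hat, Q_hat, R_hat):
    """Steady-state Kalman gain designed directly from estimated parameters.

    A_hat, C_hat : estimated state and output matrices
    Q_hat, R_hat : estimated process and sensor noise covariances
    (All of these are assumed to come from a prior identification step.)
    """
    # Filtering Riccati equation: P = A P A' - A P C'(C P C' + R)^-1 C P A' + Q
    P = solve_discrete_are(A_hat.T, C_hat.T, Q_hat, R_hat)
    # Gain L such that x_{t+1|t} = A x_{t|t-1} + L (y_t - C x_{t|t-1})
    L = A_hat @ P @ C_hat.T @ np.linalg.inv(C_hat @ P @ C_hat.T + R_hat)
    return L, P

# Hypothetical 2-state, 1-output system estimates
A_hat = np.array([[0.9, 0.1], [0.0, 0.8]])
C_hat = np.array([[1.0, 0.0]])
Q_hat = 0.1 * np.eye(2)
R_hat = np.array([[0.5]])
L, P = certainty_equivalent_kalman_gain(A_hat, C_hat, Q_hat, R_hat)
```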

Environmental drivers of systematicity and generalization in a situated agent

Title Environmental drivers of systematicity and generalization in a situated agent
Authors Felix Hill, Andrew Lampinen, Rosalia Schneider, Stephen Clark, Matthew Botvinick, James L. McClelland, Adam Santoro
Abstract The question of whether deep neural networks are good at generalising beyond their immediate training experience is of critical importance for learning-based approaches to AI. Here, we consider tests of out-of-sample generalisation that require an agent to respond to never-seen-before instructions by manipulating and positioning objects in a 3D Unity simulated room. We first describe a comparatively generic agent architecture that exhibits strong performance on these tests. We then identify three aspects of the training regime and environment that make a significant difference to its performance: (a) the number of object/word experiences in the training set; (b) the visual invariances afforded by the agent’s perspective, or frame of reference; and (c) the variety of visual input inherent in the agent’s perceptual experience. Our findings indicate that the degree of generalisation that networks exhibit can depend critically on particulars of the environment in which a given task is instantiated. They further suggest that the propensity for neural networks to generalise in systematic ways may increase if, like human children, those networks have access to many frames of richly varying, multi-modal observations as they learn.
Tasks
Published 2019-10-01
URL https://arxiv.org/abs/1910.00571v4
PDF https://arxiv.org/pdf/1910.00571v4.pdf
PWC https://paperswithcode.com/paper/emergent-systematic-generalization-in-a-1
Repo
Framework
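To make the notion of "never-seen-before instructions" concrete, a systematic-generalisation split holds out particular word combinations while keeping every individual word in training. The toy split below is purely illustrative; the vocabulary and the Unity room tasks in the paper are different.

```python
import itertools

# Hypothetical instruction vocabulary (not the paper's actual word/object sets)
verbs = ["lift", "push", "find", "put"]
objects_ = ["cup", "ball", "book", "pencil"]

# Held-out combinations: every word is seen in training, but these particular
# verb-object pairings never are, so success at test time requires systematic
# (compositional) generalisation rather than memorisation.
test_pairs = [(verbs[i], objects_[(i + 1) % len(objects_)]) for i in range(len(verbs))]
train_pairs = [p for p in itertools.product(verbs, objects_) if p not in test_pairs]

# Sanity check: every verb and every object still occurs in the training set.
assert {v for v, _ in train_pairs} == set(verbs)
assert {o for _, o in train_pairs} == set(objects_)

train_instructions = [f"{v} the {o}" for v, o in train_pairs]
test_instructions = [f"{v} the {o}" for v, o in test_pairs]
```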

Multinomial Random Forest: Toward Consistency and Privacy-Preservation

Title Multinomial Random Forest: Toward Consistency and Privacy-Preservation
Authors Yiming Li, Jiawang Bai, Jiawei Li, Xue Yang, Yong Jiang, Chun Li, Shutao Xia
Abstract Despite the impressive performance of standard random forests (RF), their theoretical properties have not been thoroughly understood. In this paper, we propose a novel RF framework, dubbed multinomial random forest (MRF), to study consistency and privacy preservation. Instead of a deterministic greedy split rule, the MRF adopts two impurity-based multinomial distributions to randomly select a split feature and a split value, respectively. Theoretically, we prove the consistency of the proposed MRF and analyze its privacy-preservation within the framework of differential privacy. We also demonstrate with multiple datasets that its performance is on par with the standard RF. To the best of our knowledge, MRF is the first consistent RF variant that has comparable performance to the standard RF.
Tasks
Published 2019-03-10
URL https://arxiv.org/abs/1903.04003v2
PDF https://arxiv.org/pdf/1903.04003v2.pdf
PWC https://paperswithcode.com/paper/multinomial-random-forests-fill-the-gap
Repo
Framework
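The key departure from a standard RF is that each split is sampled rather than chosen greedily. A minimal sketch, assuming a softmax-style multinomial over impurity decreases (the paper uses two such distributions, one for split features and one for split values):

```python
import numpy as np

def sample_split(impurity_decreases, temperature=1.0, rng=None):
    """Sample a split index from a multinomial over candidate splits.

    impurity_decreases : array of impurity reductions, one per candidate
                         (feature, threshold) pair.
    A greedy tree would take np.argmax(impurity_decreases); sampling instead
    is what enables consistency and differential-privacy style analyses.
    """
    rng = np.random.default_rng() if rng is None else rng
    scores = np.asarray(impurity_decreases, dtype=float) / temperature
    scores -= scores.max()                       # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return rng.choice(len(probs), p=probs)

# Hypothetical impurity decreases for five candidate splits
candidates = [0.02, 0.15, 0.14, 0.01, 0.05]
split_idx = sample_split(candidates, temperature=0.05, rng=np.random.default_rng(0))
```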

Semantic-Guided Multi-Attention Localization for Zero-Shot Learning

Title Semantic-Guided Multi-Attention Localization for Zero-Shot Learning
Authors Yizhe Zhu, Jianwen Xie, Zhiqiang Tang, Xi Peng, Ahmed Elgammal
Abstract Zero-shot learning extends conventional object classification to unseen class recognition by introducing semantic representations of classes. Existing approaches predominantly focus on learning the proper mapping function for visual-semantic embedding, while neglecting the effect of learning discriminative visual features. In this paper, we study the significance of the discriminative region localization. We propose a semantic-guided multi-attention localization model, which automatically discovers the most discriminative parts of objects for zero-shot learning without any human annotations. Our model jointly learns cooperative global and local features from the whole object as well as the detected parts to categorize objects based on semantic descriptions. Moreover, with the joint supervision of embedding softmax loss and class-center triplet loss, the model is encouraged to learn features with high inter-class dispersion and intra-class compactness. Through comprehensive experiments on three widely used zero-shot learning benchmarks, we show the efficacy of multi-attention localization, and our proposed approach improves the state-of-the-art results by a considerable margin.
Tasks Object Classification, Zero-Shot Learning
Published 2019-03-01
URL https://arxiv.org/abs/1903.00502v2
PDF https://arxiv.org/pdf/1903.00502v2.pdf
PWC https://paperswithcode.com/paper/learning-where-to-look-semantic-guided-multi
Repo
Framework
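One plausible reading of the class-center triplet loss mentioned in the abstract is a hinge on distances to learned class centers; the exact formulation in the paper may differ, so treat the following as a sketch under that assumption.

```python
import numpy as np

def class_center_triplet_loss(embedding, label, centers, margin=1.0):
    """Hinge loss on squared distances to learned class centers.

    embedding : (d,) visual feature of one sample
    label     : int, ground-truth class index
    centers   : (num_classes, d) current class-center estimates
    Pulls the feature toward its own center and pushes it away from the
    nearest other center, i.e. intra-class compactness / inter-class dispersion.
    """
    d2 = np.sum((centers - embedding) ** 2, axis=1)   # squared distance to each center
    pos = d2[label]                                   # distance to own class center
    neg = np.min(np.delete(d2, label))                # nearest other class center
    return max(0.0, margin + pos - neg)

# Hypothetical 3-class example with 4-dimensional embeddings
rng = np.random.default_rng(0)
centers = rng.normal(size=(3, 4))
feature = centers[1] + 0.1 * rng.normal(size=4)
loss = class_center_triplet_loss(feature, label=1, centers=centers)
```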

Parameter Optimization and Learning in a Spiking Neural Network for UAV Obstacle Avoidance targeting Neuromorphic Processors

Title Parameter Optimization and Learning in a Spiking Neural Network for UAV Obstacle Avoidance targeting Neuromorphic Processors
Authors Llewyn Salt, David Howard, Giacomo Indiveri, Yulia Sandamirskaya
Abstract The Lobula Giant Movement Detector (LGMD) is an identified neuron of the locust that detects looming objects and triggers the insect’s escape responses. Understanding the neural principles and network structure that lead to these fast and robust responses can facilitate the design of efficient obstacle avoidance strategies for robotic applications. Here we present a neuromorphic spiking neural network model of the LGMD driven by the output of a neuromorphic Dynamic Vision Sensor (DVS), which incorporates spiking frequency adaptation and synaptic plasticity mechanisms, and which can be mapped onto existing neuromorphic processor chips. However, as the model has a wide range of parameters, and the mixed signal analogue-digital circuits used to implement the model are affected by variability and noise, it is necessary to optimise the parameters to produce robust and reliable responses. Here we propose to use Differential Evolution (DE) and Bayesian Optimisation (BO) techniques to optimise the parameter space and investigate the use of Self-Adaptive Differential Evolution (SADE) to ameliorate the difficulties of finding appropriate input parameters for the DE technique. We quantify the performance of the methods proposed with a comprehensive comparison of different optimisers applied to the model, and demonstrate the validity of the approach proposed using recordings made from a DVS sensor mounted on a UAV.
Tasks Bayesian Optimisation
Published 2019-10-17
URL https://arxiv.org/abs/1910.07960v1
PDF https://arxiv.org/pdf/1910.07960v1.pdf
PWC https://paperswithcode.com/paper/parameter-optimization-and-learning-in-a
Repo
Framework
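A bare-bones DE/rand/1/bin loop conveys the optimisation strategy; the real objective would simulate the LGMD spiking network on DVS recordings and score its obstacle-avoidance responses, which is replaced here by a placeholder, and SADE would additionally adapt F and CR online rather than keeping them fixed.

```python
import numpy as np

def objective(params):
    # Placeholder for the real fitness: simulate the LGMD network with these
    # neuron/synapse parameters and score its responses to looming stimuli.
    return np.sum((params - 0.3) ** 2)

def differential_evolution(bounds, pop_size=20, F=0.8, CR=0.9, generations=100, seed=0):
    rng = np.random.default_rng(seed)
    low, high = bounds[:, 0], bounds[:, 1]
    dim = len(bounds)
    pop = rng.uniform(low, high, size=(pop_size, dim))
    fitness = np.array([objective(p) for p in pop])
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), low, high)   # DE/rand/1 mutation
            cross = rng.random(dim) < CR                   # binomial crossover
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            f = objective(trial)
            if f < fitness[i]:                             # greedy selection
                pop[i], fitness[i] = trial, f
    return pop[np.argmin(fitness)], fitness.min()

# Hypothetical 5-dimensional parameter space, each parameter in [0, 1]
best, best_fit = differential_evolution(np.array([[0.0, 1.0]] * 5))
```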

Speeding up convolutional networks pruning with coarse ranking

Title Speeding up convolutional networks pruning with coarse ranking
Authors Zi Wang, Chengcheng Li, Dali Wang, Xiangyang Wang, Hairong Qi
Abstract Channel-based pruning, whose pipeline is an iterative three-step procedure of ranking, pruning and fine-tuning, has achieved significant successes in accelerating deep convolutional neural networks. However, this iterative procedure is computationally expensive. In this study, we present a novel computationally efficient channel pruning approach based on coarse ranking, which utilizes the intermediate results during fine-tuning to rank the importance of filters, built upon state-of-the-art works with data-driven ranking criteria. The goal of this work is not to propose a single improved approach built upon a specific channel pruning method, but to introduce a new general framework that works for a series of channel pruning methods. Various benchmark image datasets (CIFAR-10, ImageNet, Birds-200, and Flowers-102) and network architectures (AlexNet and VGG-16) are utilized to evaluate the proposed approach for object classification. Experimental results show that the proposed method can achieve almost identical performance to the corresponding state-of-the-art works (baseline) while our ranking time is negligibly short. Specifically, with the proposed method, 75% and 54% of the total computation time for the whole pruning procedure can be reduced for AlexNet on CIFAR-10 and for VGG-16 on ImageNet, respectively. Our approach would significantly facilitate pruning practice, especially on resource-constrained platforms.
Tasks Object Classification
Published 2019-02-18
URL http://arxiv.org/abs/1902.06385v1
PDF http://arxiv.org/pdf/1902.06385v1.pdf
PWC https://paperswithcode.com/paper/speeding-up-convolutional-networks-pruning
Repo
Framework
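The coarse-ranking idea can be sketched as accumulating a per-filter importance score from activations that the fine-tuning pass produces anyway, so no separate ranking pass is needed. The mean-absolute-activation criterion below is one plausible data-driven choice, not necessarily the one used by the underlying pruning methods.

```python
import numpy as np

class CoarseFilterRanker:
    """Accumulates filter importance from activations seen during fine-tuning."""

    def __init__(self, num_filters):
        self.scores = np.zeros(num_filters)
        self.batches = 0

    def update(self, activations):
        # activations: (batch, num_filters, H, W) feature maps from one
        # fine-tuning step; importance = mean absolute response per filter.
        self.scores += np.abs(activations).mean(axis=(0, 2, 3))
        self.batches += 1

    def filters_to_prune(self, prune_ratio):
        avg = self.scores / max(self.batches, 1)
        k = int(len(avg) * prune_ratio)
        return np.argsort(avg)[:k]        # indices of least important filters

# Hypothetical layer with 64 filters, scores gathered over 10 fine-tuning batches
rng = np.random.default_rng(0)
ranker = CoarseFilterRanker(num_filters=64)
for _ in range(10):
    ranker.update(rng.random((8, 64, 16, 16)))
prune_idx = ranker.filters_to_prune(prune_ratio=0.3)
```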

Harnessing spatial MRI normalization: patch individual filter layers for CNNs

Title Harnessing spatial MRI normalization: patch individual filter layers for CNNs
Authors Fabian Eitel, Jan Philipp Albrecht, Friedemann Paul, Kerstin Ritter
Abstract Neuroimaging studies based on magnetic resonance imaging (MRI) typically employ rigorous forms of preprocessing. Images are spatially normalized to a standard template using linear and non-linear transformations. Thus, one can assume that a patch at location (x, y, height, width) contains the same brain region across the entire data set. Most analyses applied to brain MRI using convolutional neural networks (CNNs) ignore this distinction from natural images. Here, we suggest a new layer type called patch individual filter (PIF) layer, which trains higher-level filters locally, as we assume that more abstract features are locally specific after spatial normalization. We evaluate PIF layers on three different tasks, namely sex classification as well as either Alzheimer’s disease (AD) or multiple sclerosis (MS) detection. We demonstrate that CNNs using PIF layers outperform their counterparts in several settings, especially those with low sample sizes.
Tasks
Published 2019-11-14
URL https://arxiv.org/abs/1911.06278v1
PDF https://arxiv.org/pdf/1911.06278v1.pdf
PWC https://paperswithcode.com/paper/harnessing-spatial-mri-normalization-patch
Repo
Framework
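A PIF layer can be thought of as a locally connected layer over patches: because spatial normalization aligns anatomy across subjects, each patch is allowed its own filter weights instead of sharing them across the image. The single-channel numpy sketch below only illustrates that weight layout and is not the authors' implementation.

```python
import numpy as np

def pif_layer(feature_map, weights, patch=8):
    """Apply a different filter to each spatial patch (no weight sharing).

    feature_map : (H, W) single-channel input, H and W divisible by `patch`
    weights     : (H // patch, W // patch, patch, patch) one filter per patch
    Returns one scalar response per patch; a real layer would keep more
    channels and finer outputs, this only shows the per-patch weights idea.
    """
    H, W = feature_map.shape
    out = np.zeros((H // patch, W // patch))
    for i in range(H // patch):
        for j in range(W // patch):
            region = feature_map[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            out[i, j] = np.sum(region * weights[i, j])   # patch-specific filter
    return out

# Hypothetical 64x64 spatially normalized slice split into 8x8 patches
rng = np.random.default_rng(0)
x = rng.random((64, 64))
w = rng.normal(size=(8, 8, 8, 8))
y = pif_layer(x, w)
```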

Self-Supervised Deep Learning on Point Clouds by Reconstructing Space

Title Self-Supervised Deep Learning on Point Clouds by Reconstructing Space
Authors Jonathan Sauder, Bjarne Sievers
Abstract Point clouds provide a flexible and natural representation usable in countless applications such as robotics or self-driving cars. Recently, deep neural networks operating on raw point cloud data have shown promising results on supervised learning tasks such as object classification and semantic segmentation. While massive point cloud datasets can be captured using modern scanning technology, manually labelling such large 3D point clouds for supervised learning tasks is a cumbersome process. This necessitates methods that can learn from unlabelled data to significantly reduce the number of annotated samples needed in supervised learning. We propose a self-supervised learning task for deep learning on raw point cloud data in which a neural network is trained to reconstruct point clouds whose parts have been randomly rearranged. While solving this task, representations that capture semantic properties of the point cloud are learned. Our method is agnostic of network architecture and outperforms current unsupervised learning approaches in downstream object classification tasks. We show experimentally that pre-training with our method before supervised training improves the performance of state-of-the-art models and significantly improves sample efficiency.
Tasks Object Classification, Self-Driving Cars, Semantic Segmentation
Published 2019-01-24
URL https://arxiv.org/abs/1901.08396v2
PDF https://arxiv.org/pdf/1901.08396v2.pdf
PWC https://paperswithcode.com/paper/context-prediction-for-unsupervised-deep
Repo
Framework
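The pretext task can be pictured as: partition the point cloud into axis-aligned parts, move the parts to shuffled locations, and train the network to predict each point's original part so that it learns to "reconstruct space". The data-preparation sketch below assumes a simple 3x3x3 voxel partition, which may differ from the paper's exact scheme.

```python
import numpy as np

def rearrange_parts(points, splits=3, rng=None):
    """Build a self-supervised sample: shuffled point cloud + per-point labels.

    points : (N, 3) raw point cloud
    Each point is assigned to one of splits**3 axis-aligned voxels; the voxels
    are then swapped at random, and the original voxel index becomes the label
    the network must predict from the rearranged cloud.
    """
    rng = np.random.default_rng() if rng is None else rng
    mins, maxs = points.min(0), points.max(0)
    cell = (maxs - mins) / splits + 1e-9
    voxel_xyz = np.minimum(((points - mins) // cell).astype(int), splits - 1)
    labels = voxel_xyz @ np.array([splits * splits, splits, 1])   # voxel id per point

    perm = rng.permutation(splits ** 3)                           # shuffle voxel positions
    src_centers = (np.stack(np.unravel_index(np.arange(splits ** 3), (splits,) * 3), 1) + 0.5) * cell + mins
    dst_centers = src_centers[perm]
    # Move each point by the offset between its voxel and the voxel it is sent to.
    shuffled = points + (dst_centers - src_centers)[labels]
    return shuffled, labels

rng = np.random.default_rng(0)
pts = rng.normal(size=(2048, 3))
shuffled_pts, part_labels = rearrange_parts(pts, rng=rng)
```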

Decomposing predictability: Semantic feature overlap between words and the dynamics of reading for meaning

Title Decomposing predictability: Semantic feature overlap between words and the dynamics of reading for meaning
Authors Markus J. Hofmann, Mareike A. Kleemann, Andre Roelke, Christian Vorstius, Ralph Radach
Abstract The present study uses a computational approach to examine the role of semantic constraints in normal reading. This methodology avoids confounds inherent in conventional measures of predictability, allowing for theoretically deeper accounts of semantic processing. We start from a definition of associations between words based on the significant log likelihood that two words co-occur frequently together in the sentences of a large text corpus. Direct associations between stimulus words were controlled, and semantic feature overlap between prime and target words was manipulated by their common associates. The stimuli consisted of sentences of the form pronoun, verb, article, adjective and noun, followed by a series of closed-class words, e.g., “She rides the grey elephant on one of her many exploratory voyages”. The results showed that verb-noun overlap reduces single and first fixation durations of the target noun, and adjective-noun overlap reduces go-past durations. A dynamic spreading-of-activation account suggests that associates of the prime words take some time to become activated: The verb can act on the target noun’s early eye-movement measures presented three words later, while the adjective is presented immediately prior to the target, which induces sentence re-examination after a difficult adjective-noun semantic integration.
Tasks
Published 2019-12-06
URL https://arxiv.org/abs/1912.10164v1
PDF https://arxiv.org/pdf/1912.10164v1.pdf
PWC https://paperswithcode.com/paper/decomposing-predictability-semantic-feature
Repo
Framework
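The "significant log likelihood" association measure is commonly computed as Dunning's log-likelihood ratio (G²) over a 2x2 contingency table of sentence co-occurrence counts; the sketch below assumes that statistic and uses made-up counts.

```python
import numpy as np

def log_likelihood_ratio(k11, k12, k21, k22):
    """Dunning's G^2 for a 2x2 co-occurrence contingency table.

    k11 : sentences containing both word A and word B
    k12 : sentences containing A but not B
    k21 : sentences containing B but not A
    k22 : sentences containing neither
    """
    table = np.array([[k11, k12], [k21, k22]], dtype=float)
    total = table.sum()
    expected = np.outer(table.sum(1), table.sum(0)) / total
    terms = np.where(table > 0, table * np.log(table / expected), 0.0)
    return 2.0 * terms.sum()

# Hypothetical counts: "grey" and "elephant" co-occur in 40 of 100000 sentences
g2 = log_likelihood_ratio(k11=40, k12=160, k21=960, k22=98840)
# A large G^2 (relative to a chi-square threshold) marks a significant association.
```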

BRIDGE: Byzantine-resilient Decentralized Gradient Descent

Title BRIDGE: Byzantine-resilient Decentralized Gradient Descent
Authors Zhixiong Yang, Waheed U. Bajwa
Abstract Decentralized optimization techniques are increasingly being used to learn machine learning models from data distributed over multiple locations without gathering the data at any one location. Unfortunately, methods that are designed for faultless networks typically fail in the presence of node failures. In particular, Byzantine failures—corresponding to the scenario in which faulty/compromised nodes are allowed to arbitrarily deviate from an agreed-upon protocol—are the hardest to safeguard against in decentralized settings. This paper introduces a Byzantine-resilient decentralized gradient descent (BRIDGE) method for decentralized learning that, when compared to existing works, is more efficient and scalable in higher-dimensional settings and that is deployable in networks having topologies that go beyond the star topology. The main contributions of this work include theoretical analysis of BRIDGE for strongly convex learning objectives and numerical experiments demonstrating the efficacy of BRIDGE for both convex and nonconvex learning tasks.
Tasks
Published 2019-08-21
URL https://arxiv.org/abs/1908.08098v1
PDF https://arxiv.org/pdf/1908.08098v1.pdf
PWC https://paperswithcode.com/paper/bridge-byzantine-resilient-decentralized
Repo
Framework
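One concrete screening rule for Byzantine resilience in this setting is a coordinate-wise trimmed mean of the models received from neighbours, applied before the local gradient step. BRIDGE covers several screening variants, so the update below should be read as a sketch of the trimmed-mean case only.

```python
import numpy as np

def trimmed_mean(vectors, b):
    """Coordinate-wise trimmed mean: in every coordinate, drop the b largest and
    b smallest values before averaging (b = assumed number of Byzantine neighbours)."""
    stacked = np.sort(np.stack(vectors), axis=0)
    return stacked[b:len(vectors) - b].mean(axis=0)

def bridge_step(own_model, neighbour_models, gradient, step_size, b):
    """One decentralized update: screen neighbours, then descend on the local loss."""
    screened = trimmed_mean(neighbour_models + [own_model], b)
    return screened - step_size * gradient

# Hypothetical node with 4 neighbours, one of which may be Byzantine (b = 1)
rng = np.random.default_rng(0)
w = rng.normal(size=10)
neighbours = [w + 0.01 * rng.normal(size=10) for _ in range(3)]
neighbours.append(rng.normal(size=10) * 100)   # arbitrarily corrupted model
grad = rng.normal(size=10)                     # gradient of the local loss at w
w_next = bridge_step(w, neighbours, grad, step_size=0.1, b=1)
```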

The OMG-Empathy Dataset: Evaluating the Impact of Affective Behavior in Storytelling

Title The OMG-Empathy Dataset: Evaluating the Impact of Affective Behavior in Storytelling
Authors Pablo Barros, Nikhil Churamani, Angelica Lim, Stefan Wermter
Abstract Processing human affective behavior is important for developing intelligent agents that interact with humans in complex interaction scenarios. A large number of current approaches that address this problem focus on classifying emotion expressions by grouping them into known categories. Such strategies neglect, among other aspects, the impact of the affective responses from an individual on their interaction partner, thus ignoring how people empathize with each other. This is also reflected in the datasets used to train models for affective processing tasks. Most of the recent datasets, in particular, the ones which capture natural interactions (“in-the-wild” datasets), are designed, collected, and annotated based on the recognition of displayed affective reactions, ignoring how these displayed or expressed emotions are perceived. In this paper, we propose a novel dataset composed of dyadic interactions designed, collected and annotated with a focus on measuring the affective impact that eight different stories have on the listener. Each video of the dataset contains around 5 minutes of interaction where a speaker tells a story to a listener. After each interaction, the listener annotated, using a valence scale, how the story impacted their affective state, reflecting how they empathized with the speaker as well as the story. We also propose different evaluation protocols and a baseline that encourages participation in the advancement of the field of artificial empathy and emotion contagion.
Tasks
Published 2019-08-30
URL https://arxiv.org/abs/1908.11706v1
PDF https://arxiv.org/pdf/1908.11706v1.pdf
PWC https://paperswithcode.com/paper/the-omg-empathy-dataset-evaluating-the-impact
Repo
Framework

Code Generation as a Dual Task of Code Summarization

Title Code Generation as a Dual Task of Code Summarization
Authors Bolin Wei, Ge Li, Xin Xia, Zhiyi Fu, Zhi Jin
Abstract Code summarization (CS) and code generation (CG) are two crucial tasks in the field of automatic software development. Various neural network-based approaches have been proposed to solve these two tasks separately. However, there exists a specific intuitive correlation between CS and CG, which has not been exploited in previous work. In this paper, we apply the relation between the two tasks to improve the performance of both. In other words, exploiting the duality between the two tasks, we propose a dual training framework to train the two tasks simultaneously. In this framework, we consider the dualities on probability and attention weights, and design corresponding regularization terms to constrain the duality. We evaluate our approach on two datasets collected from GitHub, and experimental results show that our dual framework can improve the performance of CS and CG tasks over baselines.
Tasks Code Generation, Code Summarization
Published 2019-10-14
URL https://arxiv.org/abs/1910.05923v1
PDF https://arxiv.org/pdf/1910.05923v1.pdf
PWC https://paperswithcode.com/paper/code-generation-as-a-dual-task-of-code
Repo
Framework
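The probability duality is typically enforced by penalizing deviations from P(code) P(summary|code) = P(summary) P(code|summary) in log space, with the marginals supplied by language models and the conditionals by the CG and CS models. The regularizer below is a sketch under that assumption; the attention-weight duality mentioned in the abstract is not shown.

```python
def duality_regularizer(log_p_code, log_p_summary_given_code,
                        log_p_summary, log_p_code_given_summary):
    """Penalty for violating P(code) P(summary|code) = P(summary) P(code|summary).

    The marginal terms would come from language models over code and summaries,
    the conditional terms from the code-summarization and code-generation models.
    """
    gap = (log_p_code + log_p_summary_given_code
           - log_p_summary - log_p_code_given_summary)
    return gap ** 2

# Hypothetical log-probabilities for one (code, summary) pair
reg = duality_regularizer(log_p_code=-42.0, log_p_summary_given_code=-18.5,
                          log_p_summary=-25.0, log_p_code_given_summary=-36.0)
# The penalty would be added, with a weight, to the two tasks' cross-entropy losses.
```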

Modeling Intent, Dialog Policies and Response Adaptation for Goal-Oriented Interactions

Title Modeling Intent, Dialog Policies and Response Adaptation for Goal-Oriented Interactions
Authors Saurav Sahay, Shachi H Kumar, Eda Okur, Haroon Syed, Lama Nachman
Abstract Building a machine learning driven spoken dialog system for goal-oriented interactions involves careful design of intents and data collection along with development of intent recognition models and dialog policy learning algorithms. The models should be robust enough to handle various user distractions during the interaction flow and should steer the user back into an engaging interaction for successful completion of the interaction. In this work, we have designed a goal-oriented interaction system where children can engage with agents for a series of interactions involving ‘Meet & Greet’ and ‘Simon Says’ game play. We have explored various feature extractors and models for improved intent recognition and looked at leveraging previous user and system interactions in novel ways with attention models. We have also looked at dialog adaptation methods for entrained response selection. Our bootstrapped models from limited training data perform better than many baseline approaches we have looked at for intent recognition and dialog action prediction.
Tasks
Published 2019-12-20
URL https://arxiv.org/abs/1912.10130v1
PDF https://arxiv.org/pdf/1912.10130v1.pdf
PWC https://paperswithcode.com/paper/modeling-intent-dialog-policies-and-response
Repo
Framework

Towards Practical Multi-Object Manipulation using Relational Reinforcement Learning

Title Towards Practical Multi-Object Manipulation using Relational Reinforcement Learning
Authors Richard Li, Allan Jabri, Trevor Darrell, Pulkit Agrawal
Abstract Learning robotic manipulation tasks using reinforcement learning with sparse rewards is currently impractical due to the outrageous data requirements. Many practical tasks require manipulation of multiple objects, and the complexity of such tasks increases with the number of objects. Learning from a curriculum of increasingly complex tasks appears to be a natural solution, but unfortunately, does not work for many scenarios. We hypothesize that the inability of the state-of-the-art algorithms to effectively utilize a task curriculum stems from the absence of inductive biases for transferring knowledge from simpler to complex tasks. We show that graph-based relational architectures overcome this limitation and enable learning of complex tasks when provided with a simple curriculum of tasks with increasing numbers of objects. We demonstrate the utility of our framework on a simulated block stacking task. Starting from scratch, our agent learns to stack six blocks into a tower. Despite using step-wise sparse rewards, our method is orders of magnitude more data-efficient and outperforms the existing state-of-the-art method that utilizes human demonstrations. Furthermore, the learned policy exhibits zero-shot generalization, successfully stacking blocks into taller towers and previously unseen configurations such as pyramids, without any further training.
Tasks
Published 2019-12-23
URL https://arxiv.org/abs/1912.11032v1
PDF https://arxiv.org/pdf/1912.11032v1.pdf
PWC https://paperswithcode.com/paper/towards-practical-multi-object-manipulation
Repo
Framework
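The relational inductive bias can be sketched as self-attention over per-object feature vectors: the block produces one updated vector per object regardless of how many objects are present, which is what allows a policy trained on a curriculum with few blocks to be evaluated on more. The single attention head below (plain numpy) is only meant to show that structure, not the paper's full architecture.

```python
import numpy as np

def relational_block(objects, Wq, Wk, Wv):
    """Single-head self-attention over a set of object feature vectors.

    objects : (n_objects, d) one row per block/goal in the scene
    Wq, Wk, Wv : (d, d) learned projections (random here for illustration)
    The output keeps one row per object, independent of n_objects.
    """
    Q, K, V = objects @ Wq, objects @ Wk, objects @ Wv
    logits = Q @ K.T / np.sqrt(objects.shape[1])
    attn = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ V

# Hypothetical scene with 6 blocks described by 16-dimensional features
rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 16))
W = [rng.normal(size=(16, 16)) * 0.1 for _ in range(3)]
relations = relational_block(feats, *W)
```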

Deep Mouse: An End-to-end Auto-context Refinement Framework for Brain Ventricle and Body Segmentation in Embryonic Mice Ultrasound Volumes

Title Deep Mouse: An End-to-end Auto-context Refinement Framework for Brain Ventricle and Body Segmentation in Embryonic Mice Ultrasound Volumes
Authors Tongda Xu, Ziming Qiu, William Das, Chuiyu Wang, Jack Langerman, Nitin Nair, Orlando Aristizabal, Jonathan Mamou, Daniel H. Turnbull, Jeffrey A. Ketterling, Yao Wang
Abstract High-frequency ultrasound (HFU) is well suited for imaging embryonic mice due to its noninvasive and real-time characteristics. However, manual segmentation of the brain ventricles (BVs) and body requires substantial time and expertise. This work proposes a novel deep learning based end-to-end auto-context refinement framework, consisting of two stages. The first stage produces a low-resolution segmentation of the BV and body simultaneously. The resulting probability map for each object (BV or body) is then used to crop a region of interest (ROI) around the target object in both the original image and the probability map to provide context to the refinement segmentation network. Joint training of the two stages provides significant improvement in Dice Similarity Coefficient (DSC) over using only the first stage (0.818 to 0.906 for the BV, and 0.919 to 0.934 for the body). The proposed method significantly reduces the inference time (102.36 to 0.09 s/volume, around 1000x faster) while slightly improving the segmentation accuracy over previous methods using sliding-window approaches.
Tasks
Published 2019-10-20
URL https://arxiv.org/abs/1910.09061v2
PDF https://arxiv.org/pdf/1910.09061v2.pdf
PWC https://paperswithcode.com/paper/deep-mouse-an-end-to-end-auto-context
Repo
Framework
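The auto-context step can be sketched as thresholding the stage-one probability map, cropping a bounding box (plus a margin) around the detected object, and feeding the cropped volume together with the cropped probability map to the refinement network. The cropping logic below assumes a 3D volume and a fixed voxel margin; it is not the authors' code.

```python
import numpy as np

def crop_roi(volume, prob_map, threshold=0.5, margin=4):
    """Crop an ROI around a coarse segmentation for the refinement stage.

    volume   : (D, H, W) original ultrasound volume
    prob_map : (D, H, W) stage-one probability map for one object (BV or body)
    Returns the cropped image and cropped probability map, which together form
    the auto-context input of the second-stage network.
    """
    mask = prob_map > threshold
    if not mask.any():
        return volume, prob_map                      # fall back to the full volume
    idx = np.argwhere(mask)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + margin + 1, volume.shape)
    sl = tuple(slice(l, h) for l, h in zip(lo, hi))
    return volume[sl], prob_map[sl]

# Hypothetical 64^3 volume with a blob of high probability near the centre
vol = np.random.default_rng(0).random((64, 64, 64))
prob = np.zeros_like(vol)
prob[24:40, 20:44, 28:36] = 0.9
roi_img, roi_prob = crop_roi(vol, prob)
```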