January 27, 2020

Paper Group ANR 1159

Interactive Plan Explicability in Human-Robot Teaming

Title Interactive Plan Explicability in Human-Robot Teaming
Authors Mehrdad Zakershahrak, Yu Zhang
Abstract Human-robot teaming is one of the most important applications of artificial intelligence in the fast-growing field of robotics. For effective teaming, a robot must not only maintain a behavioral model of its human teammates to project the team status, but also be aware of its human teammates’ expectations of itself. Being aware of these expectations leads to robot behaviors that better align with human expectations, thus facilitating more efficient and potentially safer teams. Our work addresses the problem of human-robot cooperation with such teammate models in sequential domains by leveraging the concept of plan explicability. In plan explicability, however, the human is considered solely an observer. In this paper, we extend plan explicability to interactive settings where human and robot behaviors can influence each other. We term this new measure Interactive Plan Explicability. We compare the joint plan generated with this measure using the Fast-Forward (FF) planner against the plan created by FF without it, as well as against plans created by actual human subjects. Results indicate that the explicability score of plans generated by our algorithm is comparable to that of the human plans, and better than that of the plans created by FF without the measure, implying that the plans created by our algorithm align better with the joint plans the human expects during execution. This can lead to more efficient collaboration in practice.
Tasks
Published 2019-01-17
URL http://arxiv.org/abs/1901.05642v1
PDF http://arxiv.org/pdf/1901.05642v1.pdf
PWC https://paperswithcode.com/paper/interactive-plan-explicability-in-human-robot
Repo
Framework
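
As a rough sketch of the idea behind an explicability score (not the authors' implementation, and with a made-up action-level agreement measure standing in for the paper's learned one), a robot's candidate plan can be compared against the plan a model of the human would expect:

```python
def explicability_score(robot_plan, expected_plan):
    """Toy explicability measure: fraction of positions where the robot's
    plan matches the plan predicted by the human's model of the robot.
    An illustrative stand-in for the paper's learned measure."""
    length = max(len(robot_plan), len(expected_plan))
    if length == 0:
        return 1.0
    matches = sum(a == b for a, b in zip(robot_plan, expected_plan))
    return matches / length

# Hypothetical plans over a shared task
robot_plan    = ["pick(cup)", "move(table)", "place(cup)"]
expected_plan = ["pick(cup)", "move(shelf)", "place(cup)"]
print(explicability_score(robot_plan, expected_plan))  # 0.666...
```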

Deep Convolutional Spiking Neural Networks for Image Classification

Title Deep Convolutional Spiking Neural Networks for Image Classification
Authors Ruthvik Vaila, John Chiasson, Vishal Saxena
Abstract Spiking neural networks are biologically plausible counterparts of artificial neural networks: artificial neural networks are usually trained with stochastic gradient descent, while spiking neural networks are trained with spike-timing-dependent plasticity (STDP). Training deep convolutional neural networks is a memory- and power-intensive job, and spiking networks could potentially reduce the power usage. There is a large pool of tools to choose from for training artificial neural networks of any size; on the other hand, the available tools for simulating spiking neural networks are geared towards computational neuroscience applications and are not suitable for real-life applications. In this work we focus on implementing a spiking CNN using TensorFlow to examine the behaviour of the network, empirically study the effect of various parameters on its learning capabilities, and study catastrophic forgetting in the spiking CNN and the weight-initialization problem in R-STDP, using the MNIST and N-MNIST data sets.
Tasks Image Classification
Published 2019-03-28
URL https://arxiv.org/abs/1903.12272v2
PDF https://arxiv.org/pdf/1903.12272v2.pdf
PWC https://paperswithcode.com/paper/deep-convolutional-spiking-neural-networks
Repo
Framework
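
A minimal sketch of the pair-based STDP rule at the heart of such training, assuming illustrative constants rather than the paper's values:

```python
import numpy as np

def stdp_update(w, pre_spike_t, post_spike_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Simplified pair-based STDP: potentiate the synapse if the
    presynaptic spike precedes the postsynaptic one, depress it
    otherwise. Constants are illustrative, not the paper's values."""
    dt = post_spike_t - pre_spike_t
    if dt > 0:   # pre before post -> strengthen
        dw = a_plus * np.exp(-dt / tau)
    else:        # post before pre -> weaken
        dw = -a_minus * np.exp(dt / tau)
    return np.clip(w + dw, 0.0, 1.0)

w = 0.5
w = stdp_update(w, pre_spike_t=10.0, post_spike_t=15.0)
print(w)  # slightly above 0.5
```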

Gradient Descent can Learn Less Over-parameterized Two-layer Neural Networks on Classification Problems

Title Gradient Descent can Learn Less Over-parameterized Two-layer Neural Networks on Classification Problems
Authors Atsushi Nitanda, Geoffrey Chinot, Taiji Suzuki
Abstract Recently, several studies have proven the global convergence and generalization abilities of the gradient descent method for two-layer ReLU networks. With few exceptions, these studies focused on regression problems with the squared loss function, and the importance of the positivity of the neural tangent kernel has been pointed out. On the other hand, the performance of gradient descent on classification problems using the logistic loss function has not been well studied, and this problem structure invites further investigation. In this work, we demonstrate that the separability assumption using a neural tangent model is more reasonable than the positivity condition of the neural tangent kernel, and we provide a refined convergence analysis of gradient descent for two-layer networks with smooth activations. A remarkable point of our result is that our convergence and generalization bounds have much better dependence on the network width than related studies. Consequently, our theory provides a generalization guarantee for less over-parameterized two-layer networks, while most studies require much higher over-parameterization.
Tasks
Published 2019-05-23
URL https://arxiv.org/abs/1905.09870v3
PDF https://arxiv.org/pdf/1905.09870v3.pdf
PWC https://paperswithcode.com/paper/refined-generalization-analysis-of-gradient
Repo
Framework
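
To make the setting concrete, here is a minimal sketch of the regime the analysis covers: gradient descent on the logistic loss for a two-layer network with a smooth activation, with only the first layer trained. The width, learning rate, and data below are placeholders, not choices from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 200, 5, 64                               # samples, input dim, hidden width
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=n))    # +/-1 labels

W = rng.normal(size=(m, d)) / np.sqrt(d)           # trained first layer
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)   # fixed output layer

def act(z):        # smooth activation (the analysis assumes smoothness)
    return np.tanh(z)

def act_grad(z):
    return 1.0 - np.tanh(z) ** 2

lr = 0.5
for _ in range(500):
    H = X @ W.T                        # (n, m) pre-activations
    f = act(H) @ a                     # network output
    # logistic loss log(1 + exp(-y f)); its gradient w.r.t. f:
    g = -y / (1.0 + np.exp(y * f))
    grad_W = ((g[:, None] * act_grad(H)) * a[None, :]).T @ X / n
    W -= lr * grad_W

acc = np.mean(np.sign(act(X @ W.T) @ a) == y)
print(f"train accuracy: {acc:.2f}")
```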

A Joint Model for Definition Extraction with Syntactic Connection and Semantic Consistency

Title A Joint Model for Definition Extraction with Syntactic Connection and Semantic Consistency
Authors Amir Pouran Ben Veyseh, Franck Dernoncourt, Dejing Dou, Thien Huu Nguyen
Abstract Definition Extraction (DE) is one of the well-known topics in Information Extraction that aims to identify terms and their corresponding definitions in unstructured texts. This task can be formalized either as a sentence classification task (i.e., containing term-definition pairs or not) or a sequential labeling task (i.e., identifying the boundaries of the terms and definitions). Previous work on DE has focused on only one of the two approaches, failing to model the inter-dependencies between the two tasks. In this work, we propose a novel model for DE that simultaneously performs the two tasks in a single framework to benefit from their inter-dependencies. Our model features deep learning architectures to exploit the global structures of the input sentences as well as the semantic consistencies between the terms and the definitions, thereby improving the quality of the representation vectors for DE. Besides the joint inference between sentence classification and sequential labeling, the proposed model is fundamentally different from prior work on DE in that prior work has only employed the local structures of the input sentences (i.e., word-to-word relations) and has not considered the semantic consistencies between terms and definitions. To implement these novel ideas, our model presents a multi-task learning framework that employs graph convolutional neural networks and predicts the dependency paths between the terms and the definitions. We also seek to enforce the consistency between the representations of the terms and definitions both globally (i.e., increasing semantic consistency between the representations of the entire sentences and the terms/definitions) and locally (i.e., promoting the similarity between the representations of the terms and the definitions).
Tasks Multi-Task Learning, Sentence Classification
Published 2019-11-05
URL https://arxiv.org/abs/1911.01678v3
PDF https://arxiv.org/pdf/1911.01678v3.pdf
PWC https://paperswithcode.com/paper/a-joint-model-for-definition-extraction-with
Repo
Framework
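
A minimal sketch of the joint-modeling idea, assuming a plain BiLSTM encoder with two heads; the paper's actual model additionally uses graph convolutions over dependency structures and the semantic-consistency terms:

```python
import torch
import torch.nn as nn

class JointDEModel(nn.Module):
    """Sketch of joint definition extraction: one shared encoder feeds
    both a sentence-level classifier and a token-level tagger, so the two
    tasks are trained together. Sizes and the encoder are assumptions."""
    def __init__(self, vocab=10000, dim=128, n_tags=5):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.enc = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.sent_head = nn.Linear(2 * dim, 2)       # has a definition?
        self.tag_head = nn.Linear(2 * dim, n_tags)   # BIO term/definition tags

    def forward(self, tokens):
        h, _ = self.enc(self.emb(tokens))            # (B, T, 2*dim)
        sent_logits = self.sent_head(h.mean(dim=1))  # pooled sentence view
        tag_logits = self.tag_head(h)                # per-token view
        return sent_logits, tag_logits

model = JointDEModel()
tokens = torch.randint(0, 10000, (4, 20))
sent_logits, tag_logits = model(tokens)
# Joint loss: sentence classification + sequence labeling (dummy targets)
loss = nn.functional.cross_entropy(sent_logits, torch.zeros(4, dtype=torch.long)) \
     + nn.functional.cross_entropy(tag_logits.reshape(-1, 5), torch.zeros(80, dtype=torch.long))
print(loss.item())
```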

Multiple Linear Regression Haze-removal Model Based on Dark Channel Prior

Title Multiple Linear Regression Haze-removal Model Based on Dark Channel Prior
Authors Binghan Li, Wenrui Zhang, Mi Lu
Abstract Dark Channel Prior (DCP) is a widely recognized traditional dehazing algorithm. However, it may fail in bright regions, and the restored image is often darker than the hazy input. In this paper, we propose an effective method to optimize DCP. We build a multiple-linear-regression haze-removal model based on the DCP atmospheric scattering model and train it on the RESIDE dataset, aiming to reduce the errors caused by the rough estimations of the transmission map t(x) and the atmospheric light A. RESIDE provides enough synthetic hazy images with corresponding ground-truth images for training and testing. We compare the performance of different dehazing algorithms in terms of two important full-reference metrics: the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). The experimental results show that our model achieves the highest SSIM value, and its PSNR value is also higher than that of most state-of-the-art dehazing algorithms. Our results also overcome the weakness of DCP on real-world hazy images.
Tasks
Published 2019-04-25
URL http://arxiv.org/abs/1904.11587v1
PDF http://arxiv.org/pdf/1904.11587v1.pdf
PWC https://paperswithcode.com/paper/multiple-linear-regression-haze-removal-model
Repo
Framework
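
For context, here is a sketch of the two rough DCP estimates the regression model is trained to correct: the dark channel itself and the usual atmospheric-light estimate. The patch size and quantile are conventional choices, not the paper's settings:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Dark channel of an RGB image in [0, 1]: per-pixel minimum over
    channels, then a minimum filter over a local patch (standard DCP step)."""
    min_rgb = img.min(axis=2)
    return minimum_filter(min_rgb, size=patch)

def estimate_atmospheric_light(img, dark, top=0.001):
    """Pick the brightest color among the haziest (top 0.1%) dark-channel
    locations; the usual rough estimate of A that the regression refines."""
    flat = dark.ravel()
    k = max(1, int(top * flat.size))
    idx = np.argpartition(flat, -k)[-k:]
    return img.reshape(-1, 3)[idx].max(axis=0)

img = np.random.rand(64, 64, 3)   # stand-in for a hazy image
dark = dark_channel(img)
A = estimate_atmospheric_light(img, dark)
print(A)
```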

Perceptual Values from Observation

Title Perceptual Values from Observation
Authors Ashley D. Edwards, Charles L. Isbell
Abstract Imitation by observation is an approach for learning from expert demonstrations that lack action information, such as videos. Recent approaches to this problem fall into two broad categories: training dynamics models that aim to predict the actions taken between states, and learning rewards or features from which rewards can be computed for Reinforcement Learning (RL). In this paper, we introduce a novel approach that learns values, rather than rewards, directly from observations. We show that by using values, we can significantly speed up RL by removing the need to bootstrap action-values, as compared to sparse-reward specifications.
Tasks
Published 2019-05-20
URL https://arxiv.org/abs/1905.07861v1
PDF https://arxiv.org/pdf/1905.07861v1.pdf
PWC https://paperswithcode.com/paper/perceptual-values-from-observation
Repo
Framework
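
A toy illustration of the core idea, under the strong assumption of a single successful, action-free demonstration: each observed state can be assigned the discounted value of reaching the final state, with no action labels or bootstrapping needed. The paper learns such values with a neural network over image observations; this is only the underlying intuition:

```python
import numpy as np

def values_from_demo(num_states, gamma=0.99):
    """If an action-free demo of num_states states ends in success, assign
    each observed state the discounted value of reaching the end:
    V(s_t) = gamma ** (T - t). A toy stand-in for the learned model."""
    T = num_states - 1
    return np.array([gamma ** (T - t) for t in range(num_states)])

print(values_from_demo(5))  # values rise toward 1.0 at the final state
```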

Zoea – Composable Inductive Programming Without Limits

Title Zoea – Composable Inductive Programming Without Limits
Authors Edward McDaid, Sarah McDaid
Abstract Automatic generation of software from some form of specification has been a long-standing goal of computer science research. To date, successful results have been reported only for the production of relatively small programs. This paper presents Zoea, a simple programming language that allows software to be generated from a specification format that closely resembles a set of automated functional tests. Zoea incorporates a number of advances that enable it to generate software large enough to have commercial value. Zoea also allows programs to be composed to form still larger programs; as a result, Zoea can be used to produce software of any size and complexity. An overview of the core Zoea language is provided, together with a high-level description of the symbolic-AI-based Zoea compiler.
Tasks
Published 2019-11-13
URL https://arxiv.org/abs/1911.08286v1
PDF https://arxiv.org/pdf/1911.08286v1.pdf
PWC https://paperswithcode.com/paper/zoea-composable-inductive-programming-without
Repo
Framework
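
Zoea's specifications resemble automated functional tests. The following is a hypothetical illustration of that programming-by-example paradigm in general, not Zoea's actual syntax or compiler: enumerate compositions of primitives until one satisfies every input/output case:

```python
from itertools import product

# Toy inductive synthesis in the spirit of test-cases-as-specification.
# Illustrates the paradigm only; Zoea's language and compiler differ.
PRIMITIVES = {
    "double": lambda x: x * 2,
    "inc": lambda x: x + 1,
    "square": lambda x: x * x,
}

def synthesize(examples, max_depth=2):
    """Return the first composition of primitives consistent with all
    input/output examples, searching shallow programs first."""
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            def program(x, names=names):
                for n in names:
                    x = PRIMITIVES[n](x)
                return x
            if all(program(i) == o for i, o in examples):
                return names
    return None

# Specification given purely as input/output cases
print(synthesize([(2, 5), (3, 7)]))  # ('double', 'inc')
```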

Learning Semantic Correspondence Exploiting an Object-level Prior

Title Learning Semantic Correspondence Exploiting an Object-level Prior
Authors Junghyup Lee, Dohyung Kim, Wonkyung Lee, Jean Ponce, Bumsub Ham
Abstract We address the problem of semantic correspondence, that is, establishing a dense flow field between images depicting different instances of the same object or scene category. We propose to use images annotated with binary foreground masks and subjected to synthetic geometric deformations to train a convolutional neural network (CNN) for this task. Using these masks as part of the supervisory signal provides an object-level prior for the semantic correspondence task and offers a good compromise between semantic flow methods, where the amount of training data is limited by the cost of manually selecting point correspondences, and semantic alignment ones, where the regression of a single global geometric transformation between images may be sensitive to image-specific details such as background clutter. We propose a new CNN architecture, dubbed SFNet, which implements this idea. It leverages a new and differentiable version of the argmax function for end-to-end training, with a loss that combines mask and flow consistency with smoothness terms. Experimental results demonstrate the effectiveness of our approach, which significantly outperforms the state of the art on standard benchmarks.
Tasks
Published 2019-11-29
URL https://arxiv.org/abs/1911.12914v1
PDF https://arxiv.org/pdf/1911.12914v1.pdf
PWC https://paperswithcode.com/paper/learning-semantic-correspondence-exploiting
Repo
Framework
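
The differentiable argmax is the piece that makes end-to-end training possible. Below is a simplified sketch of the standard soft-argmax construction (the temperature and shapes are assumptions; SFNet's kernel soft-argmax additionally applies a Gaussian window):

```python
import torch

def soft_argmax_2d(corr, beta=100.0):
    """Differentiable argmax over a 2D correlation map (B, H, W): a
    temperature-scaled softmax followed by an expectation over pixel
    coordinates. A simplified form of the trick SFNet trains through."""
    B, H, W = corr.shape
    prob = torch.softmax(beta * corr.view(B, -1), dim=-1).view(B, H, W)
    ys = torch.arange(H, dtype=corr.dtype).view(1, H, 1)
    xs = torch.arange(W, dtype=corr.dtype).view(1, 1, W)
    y = (prob * ys).sum(dim=(1, 2))   # expected row index
    x = (prob * xs).sum(dim=(1, 2))   # expected column index
    return torch.stack([x, y], dim=-1)

corr = torch.randn(2, 16, 16, requires_grad=True)
coords = soft_argmax_2d(corr)
coords.sum().backward()              # gradients flow through the "argmax"
print(coords)
```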

Buffer-aware Wireless Scheduling based on Deep Reinforcement Learning

Title Buffer-aware Wireless Scheduling based on Deep Reinforcement Learning
Authors Chen Xu, Jian Wang, Tianhang Yu, Chuili Kong, Yourui Huangfu, Rong Li, Yiqun Ge, Jun Wang
Abstract In this paper, the downlink packet scheduling problem for cellular networks is modeled so as to jointly optimize throughput, fairness, and packet drop rate. Two genie-aided heuristic search methods are employed to explore the solution space. A deep reinforcement learning (DRL) framework with the A2C algorithm is proposed for the optimization problem. Several methods are utilized in the framework to improve sampling and training efficiency and to adapt the algorithm to the specific scheduling problem. Numerical results show that DRL outperforms the baseline algorithm and achieves performance similar to the genie-aided methods without using future information.
Tasks
Published 2019-11-13
URL https://arxiv.org/abs/1911.05281v1
PDF https://arxiv.org/pdf/1911.05281v1.pdf
PWC https://paperswithcode.com/paper/buffer-aware-wireless-scheduling-based-on
Repo
Framework
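
A sketch of what a reward jointly valuing throughput, fairness, and packet drops might look like, using Jain's fairness index; the weights and functional form are assumptions, not the paper's reward:

```python
import numpy as np

def scheduling_reward(rates, drops, w_tput=1.0, w_fair=0.5, w_drop=1.0):
    """Illustrative per-step reward for a DRL packet scheduler that jointly
    values throughput, fairness (Jain's index), and packet drops. Weights
    and the exact functional form are assumptions."""
    throughput = rates.sum()
    fairness = rates.sum() ** 2 / (len(rates) * (rates ** 2).sum() + 1e-9)
    drop_rate = drops.mean()
    return w_tput * throughput + w_fair * fairness - w_drop * drop_rate

rates = np.array([1.2, 0.8, 1.0])    # per-user delivered rates (Mbps)
drops = np.array([0.0, 0.1, 0.05])   # per-user drop fractions
print(scheduling_reward(rates, drops))
```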

Locality-Promoting Representation Learning

Title Locality-Promoting Representation Learning
Authors Johannes Schneider
Abstract This work investigates fundamental questions related to learning features in convolutional neural networks (CNN). Empirical findings across multiple architectures such as VGG, ResNet, Inception, DenseNet and MobileNet indicate that weights near the center of a filter are larger than weights on the outside. Current regularization schemes violate this principle. Thus, we introduce Locality-promoting Regularization (LOCO-Reg), which yields accuracy gains across multiple architectures and datasets. We also show theoretically that the empirical finding is a consequence of maximizing feature cohesion under the assumption of spatial locality.
Tasks Representation Learning
Published 2019-05-25
URL https://arxiv.org/abs/1905.10661v2
PDF https://arxiv.org/pdf/1905.10661v2.pdf
PWC https://paperswithcode.com/paper/locality-promoting-representation-learning
Repo
Framework
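
A sketch of a locality-promoting penalty consistent with the stated principle: weight decay that grows with a weight's distance from the filter center, so outer weights shrink faster than central ones. The exact weighting used by LOCO-Reg may differ:

```python
import torch

def loco_penalty(kernel, strength=1e-4):
    """Distance-weighted L2 penalty on a conv kernel (out, in, kh, kw):
    weights farther from the spatial center are penalized more strongly,
    promoting locality. Illustrative form; LOCO-Reg's may differ."""
    kh, kw = kernel.shape[-2:]
    ys = torch.arange(kh, dtype=kernel.dtype) - (kh - 1) / 2
    xs = torch.arange(kw, dtype=kernel.dtype) - (kw - 1) / 2
    dist2 = ys.view(-1, 1) ** 2 + xs.view(1, -1) ** 2   # (kh, kw)
    return strength * (dist2 * kernel ** 2).sum()

conv_weight = torch.randn(64, 32, 3, 3, requires_grad=True)
loss = loco_penalty(conv_weight)
loss.backward()   # add this term to the task loss during training
print(loss.item())
```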

Efficiently avoiding saddle points with zero order methods: No gradients required

Title Efficiently avoiding saddle points with zero order methods: No gradients required
Authors Lampros Flokas, Emmanouil-Vasileios Vlatakis-Gkaragkounis, Georgios Piliouras
Abstract We consider the case of derivative-free algorithms for non-convex optimization, also known as zero-order algorithms, that use only function evaluations rather than gradients. For a wide variety of gradient approximators based on finite differences, we establish asymptotic convergence to second-order stationary points using a carefully tailored application of the Stable Manifold Theorem. Regarding efficiency, we introduce a noisy zero-order method that converges to second-order stationary points, i.e., avoids saddle points. Our algorithm uses only $\tilde{\mathcal{O}}(1 / \epsilon^2)$ approximate gradient calculations and thus matches the convergence rate guarantees of its exact-gradient counterparts up to constants. In contrast to previous work, our convergence rate analysis avoids imposing additional dimension-dependent slowdowns in the number of iterations required for non-convex zero-order optimization.
Tasks
Published 2019-10-29
URL https://arxiv.org/abs/1910.13021v1
PDF https://arxiv.org/pdf/1910.13021v1.pdf
PWC https://paperswithcode.com/paper/efficiently-avoiding-saddle-points-with-zero
Repo
Framework
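
A minimal sketch of the ingredients: a central finite-difference gradient estimator plus a small random perturbation, applied near a saddle point. Constants are illustrative:

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, noise=1e-6, rng=None):
    """Central finite-difference gradient estimate with a small random
    perturbation, in the spirit of noisy zero-order methods that escape
    saddles. Step size and noise scale are illustrative."""
    rng = rng or np.random.default_rng()
    g = np.empty_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = mu
        g[i] = (f(x + e) - f(x - e)) / (2 * mu)
    return g + noise * rng.normal(size=x.shape)

f = lambda x: x[0] ** 2 - x[1] ** 2   # saddle at the origin
x = np.array([0.0, 1e-8])             # start almost exactly at the saddle
for _ in range(100):
    x -= 0.1 * zo_gradient(f, x)
print(x)  # drifts away along x[1], the negative-curvature direction
```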

Improved Differentially Private Analysis of Variance

Title Improved Differentially Private Analysis of Variance
Authors Marika Swanberg, Ira Globus-Harris, Iris Griffith, Anna Ritz, Adam Groce, Andrew Bray
Abstract Hypothesis testing is one of the most common types of data analysis and forms the backbone of scientific research in many disciplines. Analysis of variance (ANOVA) in particular is used to detect dependence between a categorical and a numerical variable. Here we show how one can carry out this hypothesis test under the restrictions of differential privacy. We show that the $F$-statistic, the optimal test statistic in the public setting, is no longer optimal in the private setting, and we develop a new test statistic $F_1$ with much higher statistical power. We show how to rigorously compute a reference distribution for the $F_1$ statistic and give an algorithm that outputs accurate $p$-values. We implement our test and experimentally optimize several parameters. We then compare our test to the only previous work on private ANOVA testing, using the same effect size as that work. We see an order of magnitude improvement, with our test requiring only 7% as much data to detect the effect.
Tasks
Published 2019-03-01
URL http://arxiv.org/abs/1903.00534v1
PDF http://arxiv.org/pdf/1903.00534v1.pdf
PWC https://paperswithcode.com/paper/improved-differentially-private-analysis-of
Repo
Framework
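
For orientation, a generic sketch of the private-statistic pattern: compute a test statistic on bounded data and add Laplace noise calibrated to its sensitivity. The classic F-statistic is shown; the paper's F1 statistic and its sensitivity analysis differ, and the sensitivity below is a placeholder:

```python
import numpy as np

def noisy_f_statistic(groups, epsilon=1.0, sensitivity=1.0, rng=None):
    """Generic sketch of a differentially private test statistic: the
    classic ANOVA F-statistic on data assumed bounded in [0, 1], plus
    Laplace noise. The paper's F1 statistic and its rigorously computed
    sensitivity differ; `sensitivity` here is a placeholder."""
    rng = rng or np.random.default_rng()
    all_data = np.concatenate(groups)
    grand = all_data.mean()
    ssb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_b, df_w = len(groups) - 1, len(all_data) - len(groups)
    f = (ssb / df_b) / (ssw / df_w)
    return f + rng.laplace(scale=sensitivity / epsilon)

groups = [np.random.rand(50) * 0.5, np.random.rand(50) * 0.5 + 0.3]
print(noisy_f_statistic(groups))
```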

Graph Convolutional Policy for Solving Tree Decomposition via Reinforcement Learning Heuristics

Title Graph Convolutional Policy for Solving Tree Decomposition via Reinforcement Learning Heuristics
Authors Taras Khakhulin, Roman Schutski, Ivan Oseledets
Abstract We propose a Reinforcement Learning based approach to approximately solve the Tree Decomposition (TD) problem. TD is a combinatorial problem that is central to the analysis of graph minor structure and computational complexity, as well as to algorithms for probabilistic inference, register allocation, and other practical tasks. Recently, it has been shown that combinatorial problems can be successfully solved by learned heuristics. However, the majority of existing works do not address the question of the generalization of learning-based solutions. Our model is based on a graph convolutional neural network (GCN) for learning graph representations. We show that an agent built on the GCN and trained on a single graph using an Actor-Critic method can efficiently generalize to real-world TD problem instances. We establish that our method successfully generalizes from small graphs, where TD can be found by exact algorithms, to large instances of practical interest, while still having very low time-to-solution. Moreover, the agent-based approach surpasses all greedy heuristics in the quality of the solution.
Tasks
Published 2019-10-18
URL https://arxiv.org/abs/1910.08371v2
PDF https://arxiv.org/pdf/1910.08371v2.pdf
PWC https://paperswithcode.com/paper/graph-convolutional-policy-for-solving-tree
Repo
Framework
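
One of the classic greedy baselines such an agent is measured against is the min-degree elimination heuristic, sketched here; it returns an upper bound on the treewidth:

```python
def min_degree_treewidth(adj):
    """Min-degree elimination ordering, a classic greedy heuristic for
    tree decomposition. Repeatedly eliminates the lowest-degree vertex,
    connecting its neighbors into a clique; the largest neighborhood seen
    upper-bounds the treewidth. `adj` maps vertex -> set of neighbors."""
    adj = {v: set(ns) for v, ns in adj.items()}   # defensive copy
    width = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))   # lowest-degree vertex
        nbrs = adj[v]
        width = max(width, len(nbrs))
        for a in nbrs:                            # turn neighbors into a clique
            for b in nbrs:
                if a != b:
                    adj[a].add(b)
        for a in nbrs:
            adj[a].discard(v)
        del adj[v]
    return width

cycle5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(min_degree_treewidth(cycle5))  # 2: cycles have treewidth 2
```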

Bayesian Generative Active Deep Learning

Title Bayesian Generative Active Deep Learning
Authors Toan Tran, Thanh-Toan Do, Ian Reid, Gustavo Carneiro
Abstract Deep learning models have demonstrated outstanding performance in several problems, but their training process tends to require immense amounts of computational and human resources for training and labeling, constraining the types of problems that can be tackled. Therefore, the design of effective training methods that require small labeled training sets is an important research direction that will allow a more effective use of resources. Among current approaches designed to address this issue, two are particularly interesting: data augmentation and active learning. Data augmentation achieves this goal by artificially generating new training points, while active learning relies on the selection of the “most informative” subset of unlabeled training samples to be labeled by an oracle. Although successful in practice, data augmentation can waste computational resources because it indiscriminately generates samples that are not guaranteed to be informative, and active learning selects a small subset of informative samples (from a large un-annotated set) that may be insufficient for the training process. In this paper, we propose a Bayesian generative active deep learning approach that combines active learning with data augmentation: we provide theoretical and empirical evidence (on MNIST, CIFAR-{10, 100}, and SVHN) that our approach trains more efficiently and yields better classification results than data augmentation and active learning alone.
Tasks Active Learning, Data Augmentation
Published 2019-04-26
URL http://arxiv.org/abs/1904.11643v1
PDF http://arxiv.org/pdf/1904.11643v1.pdf
PWC https://paperswithcode.com/paper/bayesian-generative-active-deep-learning
Repo
Framework
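
A sketch of the acquisition half of such a pipeline, using MC-dropout predictive entropy to rank unlabeled samples; the paper pairs a selection step like this with a deep generative model that synthesizes additional informative labeled samples, which is omitted here:

```python
import torch

def mc_dropout_entropy(model, x, passes=10):
    """Bayesian-flavored acquisition sketch: keep dropout active at test
    time, average softmax outputs over several stochastic passes, and
    score unlabeled samples by predictive entropy (higher = more
    informative). A stand-in for the paper's acquisition function."""
    model.train()                        # keep dropout layers stochastic
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(passes)]).mean(dim=0)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

model = torch.nn.Sequential(
    torch.nn.Linear(20, 64), torch.nn.ReLU(),
    torch.nn.Dropout(0.5), torch.nn.Linear(64, 10),
)
pool = torch.randn(100, 20)              # unlabeled pool
scores = mc_dropout_entropy(model, pool)
query = scores.topk(8).indices           # most informative samples to label
print(query)
```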

Multi-Domain Neural Machine Translation with Word-Level Adaptive Layer-wise Domain Mixing

Title Multi-Domain Neural Machine Translation with Word-Level Adaptive Layer-wise Domain Mixing
Authors Haoming Jiang, Chen Liang, Chong Wang, Tuo Zhao
Abstract Many multi-domain neural machine translation (NMT) models achieve knowledge transfer by enforcing one encoder to learn shared embeddings across domains. However, this design lacks adaptation to individual domains. To overcome this limitation, we propose a novel multi-domain NMT model that uses individual modules for each domain, to which we apply word-level, adaptive, layer-wise domain mixing. We first observe that words in a sentence are often related to multiple domains, so we assume each word has a domain proportion that indicates its domain preference. Word representations are then obtained by mixing their embeddings from the individual domains according to these domain proportions. We show this can be achieved by carefully designing multi-head dot-product attention modules for different domains and taking weighted averages of their parameters by word-level, layer-wise domain proportions. Through this, we achieve effective domain-knowledge sharing while also capturing fine-grained domain-specific knowledge. Our experiments show that the proposed model outperforms existing ones in several NMT tasks.
Tasks Machine Translation, Transfer Learning
Published 2019-11-07
URL https://arxiv.org/abs/1911.02692v1
PDF https://arxiv.org/pdf/1911.02692v1.pdf
PWC https://paperswithcode.com/paper/multi-domain-neural-machine-translation-with-1
Repo
Framework
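
A minimal sketch of word-level domain mixing for embeddings, with per-domain tables averaged by predicted domain proportions; the paper applies the same mixing layer-wise to the attention-module parameters as well. All sizes are placeholders:

```python
import torch
import torch.nn as nn

class DomainMixedEmbedding(nn.Module):
    """Sketch of word-level domain mixing: each domain has its own
    embedding table, and every word's representation is a weighted
    average of its per-domain embeddings, weighted by a predicted domain
    proportion. Only the embedding case is shown."""
    def __init__(self, vocab=1000, dim=64, n_domains=3):
        super().__init__()
        self.tables = nn.ModuleList(nn.Embedding(vocab, dim)
                                    for _ in range(n_domains))
        self.domain_logits = nn.Embedding(vocab, n_domains)

    def forward(self, tokens):
        props = torch.softmax(self.domain_logits(tokens), dim=-1)      # (B, T, D)
        embs = torch.stack([t(tokens) for t in self.tables], dim=-1)   # (B, T, dim, D)
        return (embs * props.unsqueeze(2)).sum(dim=-1)                 # (B, T, dim)

layer = DomainMixedEmbedding()
out = layer(torch.randint(0, 1000, (2, 7)))
print(out.shape)  # torch.Size([2, 7, 64])
```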