Paper Group ANR 882
Video based Contextual Question Answering
Title | Video based Contextual Question Answering |
Authors | Akash Ganesan, Divyansh Pal, Karthik Muthuraman, Shubham Dash |
Abstract | The primary aim of this project is to build a contextual question-answering model for videos. Current methodologies provide robust models for image-based question answering, but we aim to generalize this approach to videos. We propose a graphical representation of video that is able to handle several types of queries across the whole video. For example, if a frame shows a man and a cat sitting, the model should be able to handle queries like “Where is the cat sitting with respect to the man?” or “What is the man holding in his hand?”. It should also be able to answer queries about temporal relationships. |
Tasks | Question Answering |
Published | 2018-04-19 |
URL | http://arxiv.org/abs/1804.07399v1 |
http://arxiv.org/pdf/1804.07399v1.pdf | |
PWC | https://paperswithcode.com/paper/video-based-contextual-question-answering |
Repo | |
Framework | |
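
The graphical representation sketched in the abstract can be pictured as a time-stamped scene graph per frame: entities plus spatial relations, so that both spatial and temporal queries reduce to lookups over the graph sequence. The snippet below is a minimal illustration of such a data structure; all object names, relation labels, and helpers are hypothetical, not taken from the paper.

```python
# Hypothetical frame-level scene graphs: objects plus spatial relations,
# time-stamped so queries can range over the whole video.
video_graphs = [
    {"t": 12.3,
     "objects": ["man", "cat"],
     "relations": [("cat", "sitting_left_of", "man"),
                   ("man", "holding", "cup")]},   # illustrative content only
]

def first_time(graphs, triple):
    """Answer a simple temporal query: when does a relation first hold?"""
    return next((g["t"] for g in graphs if triple in g["relations"]), None)

print(first_time(video_graphs, ("man", "holding", "cup")))  # -> 12.3
```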
Few Shot Learning with Simplex
Title | Few Shot Learning with Simplex |
Authors | Bowen Zhang, Xifan Zhang, Fan Cheng, Deli Zhao |
Abstract | Deep learning has made remarkable achievements in many fields. However, learning the parameters of neural networks usually demands a large amount of labeled data. Deep learning algorithms therefore encounter difficulties when applied to supervised learning problems where only a small amount of data is available. This specific task is called few-shot learning. To address it, we propose a novel algorithm for few-shot learning using discrete geometry, in the sense that the samples in a class are modeled as a reduced simplex. The volume of the simplex is used as a measure of class scatter. During testing, a new simplex is formed from the test sample combined with the points in the class. The similarity between the test sample and the class can then be quantified by the ratio of the volume of the new simplex to that of the original class simplex. Moreover, we present an approach to constructing simplices using local regions of feature maps yielded by convolutional neural networks. Experiments on Omniglot and miniImageNet verify the effectiveness of our simplex algorithm for few-shot learning. |
Tasks | Few-Shot Learning, Omniglot |
Published | 2018-07-27 |
URL | http://arxiv.org/abs/1807.10726v2 |
http://arxiv.org/pdf/1807.10726v2.pdf | |
PWC | https://paperswithcode.com/paper/few-shot-learning-with-simplex |
Repo | |
Framework | |
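
The volume-ratio test described in the abstract can be sketched directly from the Gram-determinant formula for simplex volume. This is a minimal reading of the method, not the paper's implementation, which additionally builds simplices from local regions of CNN feature maps.

```python
import math
import numpy as np

def simplex_volume(points):
    """Volume of the (k-1)-simplex spanned by k points in R^d,
    via sqrt(det(A A^T)) / (k-1)!, where A stacks the edge vectors."""
    a = points[1:] - points[0]                    # (k-1, d) edge vectors
    gram = a @ a.T                                # (k-1, k-1) Gram matrix
    return math.sqrt(max(np.linalg.det(gram), 0.0)) / math.factorial(len(a))

def volume_ratio(class_points, query):
    """Similarity as in the abstract: volume of the simplex with the test
    sample appended, relative to the class simplex. Smaller ratios mean
    the query fits the class better."""
    v_class = simplex_volume(class_points)
    v_new = simplex_volume(np.vstack([class_points, query[None]]))
    return v_new / (v_class + 1e-12)

# classification assigns the query to the class with the smallest ratio:
# label = min(classes, key=lambda c: volume_ratio(support[c], query))
```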
Avoiding Latent Variable Collapse With Generative Skip Models
Title | Avoiding Latent Variable Collapse With Generative Skip Models |
Authors | Adji B. Dieng, Yoon Kim, Alexander M. Rush, David M. Blei |
Abstract | Variational autoencoders learn distributions of high-dimensional data. They model data with a deep latent-variable model and then fit the model by maximizing a lower bound of the log marginal likelihood. VAEs can capture complex distributions, but they can also suffer from an issue known as “latent variable collapse,” especially if the likelihood model is powerful. Specifically, the lower bound involves an approximate posterior of the latent variables; this posterior “collapses” when it is set equal to the prior, i.e., when the approximate posterior is independent of the data. While VAEs learn good generative models, latent variable collapse prevents them from learning useful representations. In this paper, we propose a simple new way to avoid latent variable collapse by including skip connections in our generative model; these connections enforce strong links between the latent variables and the likelihood function. We study generative skip models both theoretically and empirically. Theoretically, we prove that skip models increase the mutual information between the observations and the inferred latent variables. Empirically, we study images (MNIST and Omniglot) and text (Yahoo). Compared to existing VAE architectures, we show that generative skip models maintain similar predictive performance but lead to less collapse and provide more meaningful representations of the data. |
Tasks | Omniglot |
Published | 2018-07-12 |
URL | http://arxiv.org/abs/1807.04863v2 |
http://arxiv.org/pdf/1807.04863v2.pdf | |
PWC | https://paperswithcode.com/paper/avoiding-latent-variable-collapse-with |
Repo | |
Framework | |
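
A minimal PyTorch sketch of the skip idea: the latent z is concatenated back into every decoder layer, so the likelihood retains a direct path to the latent variables. Layer sizes and the MLP form are illustrative, not the paper's architectures.

```python
import torch
import torch.nn as nn

class SkipDecoder(nn.Module):
    """VAE decoder with generative skip connections (sketch): z re-enters
    each hidden layer, so the output cannot become independent of it."""
    def __init__(self, z_dim=32, h_dim=256, x_dim=784):
        super().__init__()
        self.fc1 = nn.Linear(z_dim, h_dim)
        self.fc2 = nn.Linear(h_dim + z_dim, h_dim)   # skip: z enters again
        self.out = nn.Linear(h_dim + z_dim, x_dim)   # ...and once more

    def forward(self, z):
        h = torch.relu(self.fc1(z))
        h = torch.relu(self.fc2(torch.cat([h, z], dim=-1)))
        return torch.sigmoid(self.out(torch.cat([h, z], dim=-1)))
```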
Visual Attention for Behavioral Cloning in Autonomous Driving
Title | Visual Attention for Behavioral Cloning in Autonomous Driving |
Authors | Sourav Pal, Tharun Mohandoss, Pabitra Mitra |
Abstract | The goal of our work is to use visual attention to enhance autonomous driving performance. We present two methods of predicting visual attention maps. The first is a supervised learning approach in which we collect eye-gaze data for the task of driving and use it to train a model for predicting the attention map. The second is a novel unsupervised approach in which we train a model to learn to predict attention as it learns to drive a car. Finally, we present a comparative study of our results and show that, when incorporated, the supervised approach to predicting attention performs better than the other approaches. |
Tasks | Autonomous Driving |
Published | 2018-12-05 |
URL | http://arxiv.org/abs/1812.01802v1 |
http://arxiv.org/pdf/1812.01802v1.pdf | |
PWC | https://paperswithcode.com/paper/visual-attention-for-behavioral-cloning-in |
Repo | |
Framework | |
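
One plausible shape for the unsupervised variant: a one-channel attention map is predicted from intermediate features and gates them before the control head, trained end-to-end with the driving loss; the supervised variant would add a loss tying the predicted map to recorded eye gaze. The architecture below is an assumption for illustration, not the paper's model.

```python
import torch
import torch.nn as nn

class AttentionGatedPolicy(nn.Module):
    """Behavioral cloning with a learned attention map (sketch)."""
    def __init__(self, c=16):
        super().__init__()
        self.features = nn.Conv2d(3, c, 5, stride=2, padding=2)
        self.attn = nn.Conv2d(c, 1, 1)       # 1-channel attention map
        self.head = nn.Linear(c, 1)          # steering angle

    def forward(self, img):
        f = torch.relu(self.features(img))   # (B, c, H, W)
        a = torch.sigmoid(self.attn(f))      # (B, 1, H, W) in [0, 1]
        g = (f * a).mean(dim=(2, 3))         # attention-weighted pooling
        return self.head(g), a               # angle + map (for a gaze loss)
```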
Developing parsimonious ensembles using ensemble diversity within a reinforcement learning framework
Title | Developing parsimonious ensembles using ensemble diversity within a reinforcement learning framework |
Authors | Ana Stanescu, Gaurav Pandey |
Abstract | Heterogeneous ensembles built from the predictions of a wide variety and large number of diverse base predictors represent a potent approach to building predictive models for problems where the ideal base/individual predictor may not be obvious. Ensemble selection is an especially promising approach here, not only for improving prediction performance, but also because of its ability to select a collectively predictive subset, often a relatively small one, of the base predictors. In this paper, we present a set of algorithms that explicitly incorporate ensemble diversity, a known factor influencing the predictive performance of ensembles, into a reinforcement learning framework for ensemble selection. We rigorously tested these approaches on several challenging problems and associated data sets, finding that several of them produce more accurate ensembles than those that don’t explicitly consider diversity. More importantly, these diversity-incorporating ensembles were much smaller in size, i.e., more parsimonious, than the latter types of ensembles. This can eventually aid the interpretation or reverse engineering of the predictive models assimilated into the resultant ensemble(s). |
Tasks | |
Published | 2018-05-05 |
URL | http://arxiv.org/abs/1805.02103v1 |
http://arxiv.org/pdf/1805.02103v1.pdf | |
PWC | https://paperswithcode.com/paper/developing-parsimonious-ensembles-using |
Repo | |
Framework | |
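
A simplified greedy stand-in for the selection loop, scoring candidate ensembles by accuracy plus an explicit pairwise-disagreement diversity bonus. The paper's actual algorithms work within a reinforcement learning framework with different reward designs; this only illustrates how a diversity term enters the objective and encourages parsimony.

```python
import numpy as np

def select_ensemble(preds, y, alpha=0.5, max_size=10):
    """preds: (n_models, n_samples) 0/1 predictions; y: 0/1 labels.
    Greedily add the model that most improves accuracy + diversity."""
    def score(idx):
        ens = (preds[idx].mean(0) >= 0.5).astype(int)   # majority vote
        acc = (ens == y).mean()
        pairs = [(preds[i] != preds[j]).mean()          # disagreement
                 for i in idx for j in idx if i < j]
        return acc + alpha * (np.mean(pairs) if pairs else 0.0)

    chosen = []
    for _ in range(max_size):
        rest = [m for m in range(len(preds)) if m not in chosen]
        if not rest:
            break
        best = max(rest, key=lambda m: score(chosen + [m]))
        if chosen and score(chosen + [best]) <= score(chosen):
            break                                       # stop when no gain
        chosen.append(best)
    return chosen
```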
Context Models for OOV Word Translation in Low-Resource Languages
Title | Context Models for OOV Word Translation in Low-Resource Languages |
Authors | Angli Liu, Katrin Kirchhoff |
Abstract | Out-of-vocabulary word translation is a major problem for the translation of low-resource languages that suffer from a lack of parallel training data. This paper evaluates the contributions of target-language context models towards the translation of OOV words, specifically in those cases where OOV translations are derived from external knowledge sources, such as dictionaries. We develop both neural and non-neural context models and evaluate them within both phrase-based and self-attention based neural machine translation systems. Our results show that neural language models that integrate additional context beyond the current sentence are the most effective in disambiguating possible OOV word translations. We present an efficient second-pass lattice-rescoring method for wide-context neural language models and demonstrate performance improvements over state-of-the-art self-attention based neural MT systems in five out of six low-resource language pairs. |
Tasks | Machine Translation |
Published | 2018-01-26 |
URL | http://arxiv.org/abs/1801.08660v1 |
http://arxiv.org/pdf/1801.08660v1.pdf | |
PWC | https://paperswithcode.com/paper/context-models-for-oov-word-translation-in |
Repo | |
Framework | |
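
The second-pass idea can be sketched as re-ranking: hypotheses that differ only in their dictionary-derived OOV translation are rescored with a language model that sees context beyond the current sentence. The interface below is assumed for illustration; the paper rescores lattices rather than n-best lists.

```python
def rescore(hypotheses, context_lm, base_w=1.0, lm_w=0.5):
    """hypotheses: dicts with 'text', 'context' and 'model_score'.
    context_lm(text, context) -> log-probability under a wide-context LM
    (a hypothetical callable standing in for the paper's context models)."""
    return max(hypotheses,
               key=lambda h: base_w * h["model_score"]
                             + lm_w * context_lm(h["text"], h["context"]))
```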
Fast Automatic Smoothing for Generalized Additive Models
Title | Fast Automatic Smoothing for Generalized Additive Models |
Authors | Yousra El-Bachir, Anthony C. Davison |
Abstract | Multiple generalized additive models (GAMs) are a type of distributional regression wherein parameters of probability distributions depend on predictors through smooth functions, with selection of the degree of smoothness via $L_2$ regularization. Multiple GAMs allow finer statistical inference by incorporating explanatory information in any or all of the parameters of the distribution. Owing to their nonlinearity, flexibility and interpretability, GAMs are widely used, but reliable and fast methods for automatic smoothing in large datasets are still lacking, despite recent advances. We develop a general methodology for automatically learning the optimal degree of $L_2$ regularization for multiple GAMs using an empirical Bayes approach. The smooth functions are penalized by different amounts, which are learned simultaneously by maximization of a marginal likelihood through an approximate expectation-maximization algorithm that involves a double Laplace approximation at the E-step, and leads to an efficient M-step. Empirical analysis shows that the resulting algorithm is numerically stable, faster than all existing methods and achieves state-of-the-art accuracy. For illustration, we apply it to an important and challenging problem in the analysis of extremal data. |
Tasks | |
Published | 2018-09-25 |
URL | http://arxiv.org/abs/1809.09445v1 |
http://arxiv.org/pdf/1809.09445v1.pdf | |
PWC | https://paperswithcode.com/paper/fast-automatic-smoothing-for-generalized |
Repo | |
Framework | |
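
As a toy analogue of the empirical-Bayes criterion: in the Gaussian case with a ridge-type prior beta ~ N(0, I/lambda), the marginal likelihood is available in closed form and can be maximized over lambda directly. The paper's contribution is doing this efficiently for general multiple GAMs via an approximate EM algorithm with Laplace approximations; the sketch below only shows the criterion being maximized.

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_evidence(lam, X, y, sigma2=1.0):
    """Exact Gaussian evidence with prior beta ~ N(0, I/lam):
    marginally, y ~ N(0, X X^T / lam + sigma2 * I)."""
    n = len(y)
    cov = X @ X.T / lam + sigma2 * np.eye(n)
    return multivariate_normal.logpdf(y, mean=np.zeros(n), cov=cov)

# empirical-Bayes choice of the smoothing level on a grid:
# lam_hat = max(np.logspace(-4, 4, 60), key=lambda l: log_evidence(l, X, y))
```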
Improving Neural Sequence Labelling using Additional Linguistic Information
Title | Improving Neural Sequence Labelling using Additional Linguistic Information |
Authors | Mahtab Ahmed, Muhammad Rifayat Samee, Robert E. Mercer |
Abstract | Sequence labelling is the task of assigning categorical labels to a data sequence. In Natural Language Processing, sequence labelling can be applied to various fundamental problems, such as Part of Speech (POS) tagging, Named Entity Recognition (NER), and Chunking. In this study, we propose a method to add various linguistic features to the neural sequence framework to improve sequence labelling. Besides word level knowledge, sense embeddings are added to provide semantic information. Additionally, selective readings of character embeddings are added to capture contextual as well as morphological features for each word in a sentence. Compared to previous methods, these added linguistic features allow us to design a more concise model and perform more efficient training. Our proposed architecture achieves state of the art results on the benchmark datasets of POS, NER, and chunking. Moreover, the convergence rate of our model is significantly better than the previous state of the art models. |
Tasks | Chunking, Named Entity Recognition, Part-Of-Speech Tagging |
Published | 2018-07-27 |
URL | http://arxiv.org/abs/1807.10805v1 |
http://arxiv.org/pdf/1807.10805v1.pdf | |
PWC | https://paperswithcode.com/paper/improving-neural-sequence-labelling-using |
Repo | |
Framework | |
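
A sketch of the feature-concatenation idea: word, sense, and character-derived embeddings are concatenated before a bidirectional LSTM tagger. Dimensions, the character encoder, and the absence of a CRF output layer are all simplifications, not the paper's exact design.

```python
import torch
import torch.nn as nn

class FeatureRichTagger(nn.Module):
    """Sequence labeller over concatenated linguistic features (sketch)."""
    def __init__(self, n_words, n_senses, n_chars, n_tags, d=64):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, d)
        self.sense_emb = nn.Embedding(n_senses, d)   # semantic information
        self.char_emb = nn.Embedding(n_chars, d)
        self.char_lstm = nn.LSTM(d, d // 2, bidirectional=True,
                                 batch_first=True)   # morphological features
        self.lstm = nn.LSTM(3 * d, d, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * d, n_tags)

    def forward(self, words, senses, chars):
        b, s, L = chars.shape                        # (batch, seq, word_len)
        c = self.char_emb(chars).view(b * s, L, -1)
        _, (h, _) = self.char_lstm(c)                # final fwd/bwd states
        char_feat = h.transpose(0, 1).reshape(b, s, -1)
        x = torch.cat([self.word_emb(words),
                       self.sense_emb(senses), char_feat], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)                           # per-token tag scores
```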
Hybrid Pruning: Thinner Sparse Networks for Fast Inference on Edge Devices
Title | Hybrid Pruning: Thinner Sparse Networks for Fast Inference on Edge Devices |
Authors | Xiaofan Xu, Mi Sun Park, Cormac Brick |
Abstract | We introduce hybrid pruning, which combines both coarse-grained channel and fine-grained weight pruning to reduce model size, computation and power demands with little to no loss in accuracy, enabling the deployment of modern networks on resource-constrained devices such as always-on security cameras and drones. Additionally, to effectively perform channel pruning, we propose a fast sensitivity test that helps us quickly identify the sensitivity of the output accuracy, within and across the layers of a network, to a target number of multiplier-accumulator (MAC) operations or a given accuracy tolerance. Our experiments show significantly better results for ResNet50 on ImageNet compared to existing work, even with the additional constraint that channel counts be hardware-friendly numbers. |
Tasks | |
Published | 2018-11-01 |
URL | http://arxiv.org/abs/1811.00482v1 |
http://arxiv.org/pdf/1811.00482v1.pdf | |
PWC | https://paperswithcode.com/paper/hybrid-pruning-thinner-sparse-networks-for |
Repo | |
Framework | |
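
A minimal sketch of the two pruning stages combined, with the channel count rounded to a hardware-friendly multiple as the abstract's final constraint suggests. Thresholds and ranking criteria are assumptions; the paper's sensitivity test for choosing per-layer rates is not reproduced here.

```python
import numpy as np

def hybrid_prune(weights, channel_keep=0.5, weight_sparsity=0.5, round_to=8):
    """Coarse channel pruning by L1 norm, then fine weight pruning by
    magnitude within the surviving channels (sketch).
    weights: (out_channels, in_channels, kH, kW)."""
    norms = np.abs(weights).sum(axis=(1, 2, 3))      # per-channel L1 norm
    n_keep = max(round_to,
                 int(len(norms) * channel_keep) // round_to * round_to)
    keep = np.argsort(norms)[-n_keep:]               # strongest channels
    pruned = weights[keep]
    thresh = np.quantile(np.abs(pruned), weight_sparsity)
    pruned = np.where(np.abs(pruned) >= thresh, pruned, 0.0)
    return pruned, keep
```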
Communication-Computation Efficient Gradient Coding
Title | Communication-Computation Efficient Gradient Coding |
Authors | Min Ye, Emmanuel Abbe |
Abstract | This paper develops coding techniques to reduce the running time of distributed learning tasks. It characterizes the fundamental tradeoff for computing gradients (and, more generally, vector summations) in terms of three parameters: computation load, straggler tolerance and communication cost. It further gives an explicit coding scheme that achieves the optimal tradeoff based on recursive polynomial constructions, coding both across data subsets and vector components. As a result, the proposed scheme allows the running time for gradient computations to be minimized. Implementations are made on Amazon EC2 clusters using Python with the mpi4py package. Results show that the proposed scheme maintains the same generalization error while reducing the running time by 32% compared to uncoded schemes and 23% compared to prior coded schemes focusing only on stragglers (Tandon et al., ICML 2017). |
Tasks | |
Published | 2018-02-09 |
URL | http://arxiv.org/abs/1802.03475v1 |
http://arxiv.org/pdf/1802.03475v1.pdf | |
PWC | https://paperswithcode.com/paper/communication-computation-efficient-gradient |
Repo | |
Framework | |
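
For intuition about the computation/straggler tradeoff, here is the classic fractional-repetition scheme of Tandon et al., the straggler-only baseline the abstract compares against: each group of s+1 workers replicates one block of data parts, so the full gradient survives any s stragglers. The paper's recursive polynomial codes go further by also coding across vector components to cut communication; that construction is not reproduced here.

```python
import numpy as np

n_workers, s = 6, 1                          # tolerate any s stragglers
parts = np.random.randn(n_workers, 4)        # one toy gradient per data part
group = lambda w: w // (s + 1)               # groups of s + 1 workers
# each worker computes the sum over its group's block of parts
reply = {w: parts[(s + 1) * group(w):(s + 1) * (group(w) + 1)].sum(0)
         for w in range(n_workers)}
# master proceeds once every group has at least one responder
alive = [w for w in range(n_workers) if w != 5]     # worker 5 straggles
chosen = {group(w): w for w in alive}               # one survivor per group
recovered = sum(reply[chosen[g]] for g in chosen)
assert np.allclose(recovered, parts.sum(0))         # full gradient recovered
```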
Deterministic Policy Gradients With General State Transitions
Title | Deterministic Policy Gradients With General State Transitions |
Authors | Qingpeng Cai, Ling Pan, Pingzhong Tang |
Abstract | We study a reinforcement learning setting, where the state transition function is a convex combination of a stochastic continuous function and a deterministic function. Such a setting generalizes the widely-studied stochastic state transition setting, namely the setting of deterministic policy gradient (DPG). We first give a simple example to illustrate that the deterministic policy gradient may be infinite under deterministic state transitions, and introduce a theoretical technique to prove the existence of the policy gradient in this generalized setting. Using this technique, we prove that the deterministic policy gradient indeed exists for a certain set of discount factors, and further prove two conditions that guarantee the existence for all discount factors. We then derive a closed form of the policy gradient whenever it exists. Furthermore, to overcome the challenge of the high sample complexity of DPG in this setting, we propose the Generalized Deterministic Policy Gradient (GDPG) algorithm. The main innovation of the algorithm is a new method of applying model-based techniques to the model-free algorithm, the deep deterministic policy gradient algorithm (DDPG). GDPG optimizes the long-term rewards of the model-based augmented MDP subject to a constraint that the long-term rewards of the augmented MDP are less than those of the original one. We finally conduct extensive experiments comparing GDPG with state-of-the-art methods and the direct model-based extension method of DDPG on several standard continuous control benchmarks. Results demonstrate that GDPG substantially outperforms DDPG, the model-based extension of DDPG and other baselines in terms of both convergence and long-term rewards in most environments. |
Tasks | Continuous Control |
Published | 2018-07-10 |
URL | http://arxiv.org/abs/1807.03708v3 |
http://arxiv.org/pdf/1807.03708v3.pdf | |
PWC | https://paperswithcode.com/paper/deterministic-policy-gradients-with-general |
Repo | |
Framework | |
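
For reference, the standard deterministic policy gradient of Silver et al., which this paper generalizes to mixed stochastic/deterministic transitions, is

$$\nabla_\theta J(\mu_\theta) = \mathbb{E}_{s \sim \rho^{\mu}}\!\left[\nabla_\theta\, \mu_\theta(s)\, \nabla_a Q^{\mu}(s,a)\big|_{a=\mu_\theta(s)}\right],$$

and the paper's existence questions concern exactly when this quantity is finite once the transition function has a deterministic component.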
The Space-Efficient Core of Vadalog
Title | The Space-Efficient Core of Vadalog |
Authors | Gerald Berger, Georg Gottlob, Andreas Pieris, Emanuel Sallinger |
Abstract | Vadalog is a system for performing complex reasoning tasks such as those required in advanced knowledge graphs. The logical core of the underlying Vadalog language is the warded fragment of tuple-generating dependencies (TGDs). This formalism ensures tractable reasoning in data complexity, while a recent analysis focusing on a practical implementation led to the reasoning algorithm around which the Vadalog system is built. A fundamental question that has emerged in the context of Vadalog is the following: can we limit the recursion allowed by wardedness in order to obtain a formalism that provides a convenient syntax for expressing useful recursive statements, and at the same time achieves space-efficiency? After analyzing several real-life examples of warded sets of TGDs provided by our industrial partners, as well as recent benchmarks, we observed that recursion is often used in a restricted way: the body of a TGD contains at most one atom whose predicate is mutually recursive with a predicate in the head. We show that this type of recursion, known as piece-wise linear in the Datalog literature, is the answer to our main question. We further show that piece-wise linear recursion alone, without the wardedness condition, is not enough as it leads to the undecidability of reasoning. We finally study the relative expressiveness of the query languages based on (piece-wise linear) warded sets of TGDs. |
Tasks | Knowledge Graphs |
Published | 2018-09-16 |
URL | http://arxiv.org/abs/1809.05951v1 |
http://arxiv.org/pdf/1809.05951v1.pdf | |
PWC | https://paperswithcode.com/paper/the-space-efficient-core-of-vadalog |
Repo | |
Framework | |
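
The piece-wise linear restriction is easy to see on a familiar example: transitive closure is piece-wise linear because the recursive rule's body mentions the recursive predicate exactly once,

$$\mathrm{Edge}(x,y) \rightarrow \mathrm{Reach}(x,y), \qquad \mathrm{Reach}(x,y) \wedge \mathrm{Edge}(y,z) \rightarrow \mathrm{Reach}(x,z),$$

whereas a rule such as $\mathrm{Reach}(x,y) \wedge \mathrm{Reach}(y,z) \rightarrow \mathrm{Reach}(x,z)$, with two mutually recursive body atoms, would fall outside the fragment. (This example is illustrative; the paper works with warded sets of existential rules, i.e. TGDs.)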
Robust Deep Reinforcement Learning for Security and Safety in Autonomous Vehicle Systems
Title | Robust Deep Reinforcement Learning for Security and Safety in Autonomous Vehicle Systems |
Authors | Aidin Ferdowsi, Ursula Challita, Walid Saad, Narayan B. Mandayam |
Abstract | To operate effectively in tomorrow’s smart cities, autonomous vehicles (AVs) must rely on intra-vehicle sensors such as camera and radar as well as inter-vehicle communication. Such dependence on sensors and communication links exposes AVs to cyber-physical (CP) attacks by adversaries that seek to take control of the AVs by manipulating their data. Thus, to ensure safe and optimal AV dynamics control, the data processing functions at AVs must be robust to such CP attacks. To this end, in this paper, the state estimation process for monitoring AV dynamics in the presence of CP attacks is analyzed, and a novel adversarial deep reinforcement learning (RL) algorithm is proposed to maximize the robustness of AV dynamics control to CP attacks. The attacker’s action and the AV’s reaction to CP attacks are studied in a game-theoretic framework. In the formulated game, the attacker seeks to inject faulty data into AV sensor readings so as to manipulate the inter-vehicle optimal safe spacing and potentially increase the risk of AV accidents or reduce the vehicle flow on the roads. Meanwhile, the AV, acting as a defender, seeks to minimize the deviations of spacing so as to ensure robustness to the attacker’s actions. Since the AV has no information about the attacker’s action and due to the infinite possibilities for data value manipulations, the outcomes of the players’ past interactions are fed to long short-term memory (LSTM) blocks. Each player’s LSTM block learns the expected spacing deviation resulting from its own action and feeds it to its RL algorithm. Then, the attacker’s RL algorithm chooses the action that maximizes the spacing deviation, while the AV’s RL algorithm tries to find the optimal action that minimizes such deviation. |
Tasks | Autonomous Vehicles |
Published | 2018-05-02 |
URL | http://arxiv.org/abs/1805.00983v2 |
http://arxiv.org/pdf/1805.00983v2.pdf | |
PWC | https://paperswithcode.com/paper/robust-deep-reinforcement-learning-for |
Repo | |
Framework | |
CMI: An Online Multi-objective Genetic Autoscaler for Scientific and Engineering Workflows in Cloud Infrastructures with Unreliable Virtual Machines
Title | CMI: An Online Multi-objective Genetic Autoscaler for Scientific and Engineering Workflows in Cloud Infrastructures with Unreliable Virtual Machines |
Authors | David A. Monge, Elina Pacini, Cristian Mateos, Enrique Alba, Carlos García Garino |
Abstract | Cloud Computing is becoming the leading paradigm for executing scientific and engineering workflows. The large-scale nature of the experiments they model and their variable workloads make clouds the ideal execution environment due to prompt and elastic access to huge amounts of computing resources. Autoscalers are middleware-level software components that allow scaling the computing platform up and down by acquiring or terminating virtual machines (VMs) as the workflow’s tasks are being scheduled. In this work we propose a novel online multi-objective autoscaler for workflows called Cloud Multi-objective Intelligence (CMI), which aims at minimizing makespan, monetary cost and the potential impact of errors derived from unreliable VMs. In addition, this problem is subject to monetary budget constraints. CMI is responsible for periodically solving the autoscaling problems encountered along the execution of a workflow. Simulation experiments on four well-known workflows show that CMI significantly outperforms a state-of-the-art autoscaler of similar characteristics called Spot Instances Aware Autoscaling (SIAA). These results provide a solid basis for further study of other meta-heuristic methods for autoscaling workflow applications using cheap but unreliable infrastructures. |
Tasks | |
Published | 2018-11-02 |
URL | http://arxiv.org/abs/1811.00989v1 |
http://arxiv.org/pdf/1811.00989v1.pdf | |
PWC | https://paperswithcode.com/paper/cmi-an-online-multi-objective-genetic |
Repo | |
Framework | |
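
At the core of any multi-objective genetic autoscaler of this kind is a Pareto-dominance test over objective vectors. A minimal version for the three objectives named in the abstract (makespan, monetary cost, error impact; the tuple layout is an assumption):

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b over minimized objectives
    (makespan, cost, failure_risk): no worse in all, better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

# e.g. (120.0, 3.5, 0.02) dominates (130.0, 3.5, 0.04)
```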
Virtual-Taobao: Virtualizing Real-world Online Retail Environment for Reinforcement Learning
Title | Virtual-Taobao: Virtualizing Real-world Online Retail Environment for Reinforcement Learning |
Authors | Jing-Cheng Shi, Yang Yu, Qing Da, Shi-Yong Chen, An-Xiang Zeng |
Abstract | Applying reinforcement learning in physical-world tasks is extremely challenging. It is commonly infeasible to sample a large number of trials, as required by current reinforcement learning methods, in a physical environment. This paper reports our project on using reinforcement learning for better commodity search in Taobao, one of the largest online retail platforms and meanwhile a physical environment with a high sampling cost. Instead of training reinforcement learning in Taobao directly, we present our approach: first we build Virtual Taobao, a simulator learned from historical customer behavior data through the proposed GAN-SD (GAN for Simulating Distributions) and MAIL (multi-agent adversarial imitation learning), and then we train policies in Virtual Taobao at no physical cost, where the proposed ANC (Action Norm Constraint) strategy is used to reduce over-fitting. In experiments, Virtual Taobao is trained from hundreds of millions of customers’ records, and its properties are compared with the real environment. The results show that Virtual Taobao faithfully recovers important properties of the real environment. We also show that the policies trained in Virtual Taobao can achieve significantly superior online performance to traditional supervised approaches. We hope our work can shed some light on reinforcement learning applications in complex physical environments. |
Tasks | Imitation Learning |
Published | 2018-05-25 |
URL | http://arxiv.org/abs/1805.10000v1 |
http://arxiv.org/pdf/1805.10000v1.pdf | |
PWC | https://paperswithcode.com/paper/virtual-taobao-virtualizing-real-world-online |
Repo | |
Framework | |