July 27, 2019

2575 words 13 mins read

Paper Group ANR 717

Robust Incremental Neural Semantic Graph Parsing. Cross-modal Recurrent Models for Weight Objective Prediction from Multimodal Time-series Data. Boosting the kernelized shapelets: Theory and algorithms for local features. Proportional Representation in Vote Streams. The Price of Differential Privacy For Online Learning. Worst-case vs Average-case D …

Robust Incremental Neural Semantic Graph Parsing

Title Robust Incremental Neural Semantic Graph Parsing
Authors Jan Buys, Phil Blunsom
Abstract Parsing sentences to linguistically-expressive semantic representations is a key goal of Natural Language Processing. Yet statistical parsing has focused almost exclusively on bilexical dependencies or domain-specific logical forms. We propose a neural encoder-decoder transition-based parser which is the first full-coverage semantic graph parser for Minimal Recursion Semantics (MRS). The model architecture uses stack-based embedding features, predicting graphs jointly with unlexicalized predicates and their token alignments. Our parser is more accurate than attention-based baselines on MRS, and on an additional Abstract Meaning Representation (AMR) benchmark, and GPU batch processing makes it an order of magnitude faster than a high-precision grammar-based parser. Further, the 86.69% Smatch score of our MRS parser is higher than the upper-bound on AMR parsing, making MRS an attractive choice as a semantic representation.
Tasks AMR Parsing
Published 2017-04-24
URL http://arxiv.org/abs/1704.07092v2
PDF http://arxiv.org/pdf/1704.07092v2.pdf
PWC https://paperswithcode.com/paper/robust-incremental-neural-semantic-graph
Repo
Framework

Cross-modal Recurrent Models for Weight Objective Prediction from Multimodal Time-series Data

Title Cross-modal Recurrent Models for Weight Objective Prediction from Multimodal Time-series Data
Authors Petar Veličković, Laurynas Karazija, Nicholas D. Lane, Sourav Bhattacharya, Edgar Liberis, Pietro Liò, Angela Chieh, Otmane Bellahsen, Matthieu Vegreville
Abstract We analyse multimodal time-series data corresponding to weight, sleep and steps measurements. We focus on predicting whether a user will successfully achieve his/her weight objective. For this, we design several deep long short-term memory (LSTM) architectures, including a novel cross-modal LSTM (X-LSTM), and demonstrate their superiority over baseline approaches. The X-LSTM improves parameter efficiency by processing each modality separately and allowing for information flow between them by way of recurrent cross-connections. We present a general hyperparameter optimisation technique for X-LSTMs, which allows us to significantly improve on the LSTM and a prior state-of-the-art cross-modal approach, using a comparable number of parameters. Finally, we visualise the model’s predictions, revealing implications about latent variables in this task.
Tasks Time Series
Published 2017-09-23
URL http://arxiv.org/abs/1709.08073v2
PDF http://arxiv.org/pdf/1709.08073v2.pdf
PWC https://paperswithcode.com/paper/cross-modal-recurrent-models-for-weight
Repo
Framework
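
The cross-connection idea is easy to sketch. Below is a minimal PyTorch approximation, not the authors' exact architecture: each modality gets its own LSTM stream, and every stream's second recurrent layer also receives the first-layer hidden states of the other streams. Module names, sizes, and the plain concatenation scheme are illustrative assumptions (the actual X-LSTM gates/projects its cross-connections rather than concatenating everything).

```python
import torch
import torch.nn as nn

class CrossModalLSTM(nn.Module):
    """Minimal sketch of an X-LSTM-style model: one LSTM stream per modality,
    with cross-connections feeding every stream's layer-1 output into every
    stream's layer-2 input. Sizes are illustrative, not taken from the paper."""

    def __init__(self, input_sizes, hidden_size=32, num_classes=2):
        super().__init__()
        self.layer1 = nn.ModuleList(
            [nn.LSTM(d, hidden_size, batch_first=True) for d in input_sizes]
        )
        # Layer 2 of each stream sees its own stream plus all other streams.
        cross_in = hidden_size * len(input_sizes)
        self.layer2 = nn.ModuleList(
            [nn.LSTM(cross_in, hidden_size, batch_first=True) for _ in input_sizes]
        )
        self.out = nn.Linear(hidden_size * len(input_sizes), num_classes)

    def forward(self, xs):  # xs: list of (batch, time, d_i) tensors
        h1 = [lstm(x)[0] for lstm, x in zip(self.layer1, xs)]
        shared = torch.cat(h1, dim=-1)                # cross-connection
        h2 = [lstm(shared)[0][:, -1] for lstm in self.layer2]
        return self.out(torch.cat(h2, dim=-1))

model = CrossModalLSTM(input_sizes=[1, 3, 1])  # e.g. weight, sleep, steps channels
logits = model([torch.randn(8, 14, 1), torch.randn(8, 14, 3), torch.randn(8, 14, 1)])
```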

Boosting the kernelized shapelets: Theory and algorithms for local features

Title Boosting the kernelized shapelets: Theory and algorithms for local features
Authors Daiki Suehiro, Kohei Hatano, Eiji Takimoto, Shuji Yamamoto, Kenichi Bannai, Akiko Takeda
Abstract We consider binary classification problems using local features of objects. One motivating application is time-series classification, where features reflecting some local closeness measure between a time series and a pattern sequence called a shapelet are useful. Despite the empirical success of such approaches using local features, the generalization ability of the resulting hypotheses is not fully understood, and previous work relies on a number of heuristics. In this paper, we formulate a class of hypotheses using local features, where the richness of features is controlled by kernels. We derive generalization bounds for sparse ensembles over this class which are exponentially better than a standard analysis in terms of the number of possible local features. The resulting optimization problem is well suited to the boosting approach, and the weak learning problem is formulated as a DC program, for which practical algorithms exist. In preliminary experiments on time-series data sets, our method achieves accuracy competitive with state-of-the-art algorithms at a small parameter-tuning cost.
Tasks Time Series, Time Series Classification
Published 2017-09-05
URL http://arxiv.org/abs/1709.01300v3
PDF http://arxiv.org/pdf/1709.01300v3.pdf
PWC https://paperswithcode.com/paper/boosting-the-kernelized-shapelets-theory-and
Repo
Framework
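
As a rough illustration of the local-feature idea (not the paper's kernelized formulation or its DC-program boosting procedure), one can turn each time series into a vector of minimum distances to a set of candidate shapelets and boost a simple learner on those features. The toy data and the AdaBoost stand-in below are assumptions for the sketch.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def shapelet_features(series, shapelets):
    """Each feature is the minimum Euclidean distance between a shapelet and
    any window of the series (a simple local-closeness measure)."""
    feats = []
    for s in shapelets:
        windows = np.lib.stride_tricks.sliding_window_view(series, len(s))
        feats.append(np.min(np.linalg.norm(windows - s, axis=1)))
    return np.array(feats)

rng = np.random.default_rng(0)

# Toy data: class-1 series contain a bump, class-0 series are pure noise.
def make_series(label, n=100):
    x = rng.normal(size=n)
    if label:
        i = rng.integers(0, n - 10)
        x[i:i + 10] += 3.0
    return x

labels = np.arange(200) % 2
X_raw = [make_series(lab) for lab in labels]
shapelets = [x[i:i + 10] for x in X_raw[:20] for i in (10, 50)]  # crude candidates
X = np.array([shapelet_features(x, shapelets) for x in X_raw])

clf = AdaBoostClassifier(n_estimators=50).fit(X[:150], labels[:150])
print("holdout accuracy:", clf.score(X[150:], labels[150:]))
```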

Proportional Representation in Vote Streams

Title Proportional Representation in Vote Streams
Authors Palash Dey, Nimrod Talmon, Otniel van Handel
Abstract We consider elections where the voters come one at a time, in a streaming fashion, and devise space-efficient algorithms which identify an approximate winning committee with respect to common multiwinner proportional representation voting rules; specifically, we consider the Approval-based and the Borda-based variants of both the Chamberlin–Courant rule and the Monroe rule. We complement our algorithms with lower bounds. Somewhat surprisingly, our results imply that, using space which does not depend on the number of voters, it is possible to efficiently identify an approximate representative committee of fixed size over vote streams with a huge number of voters.
Tasks
Published 2017-02-28
URL http://arxiv.org/abs/1702.08862v1
PDF http://arxiv.org/pdf/1702.08862v1.pdf
PWC https://paperswithcode.com/paper/proportional-representation-in-vote-streams
Repo
Framework
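
A minimal way to see why sublinear space can suffice (a simplified sampling sketch, not the paper's algorithms or its bounds): reservoir-sample a fixed number of approval ballots from the stream, then run the standard greedy max-coverage approximation of Approval-Chamberlin–Courant on the sample. Sample size and the toy stream below are illustrative.

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Keep a uniform sample of k ballots using O(k) space."""
    rng = random.Random(seed)
    sample = []
    for i, ballot in enumerate(stream):
        if i < k:
            sample.append(ballot)
        else:
            j = rng.randint(0, i)
            if j < k:
                sample[j] = ballot
    return sample

def greedy_approval_cc(ballots, candidates, committee_size):
    """Greedy max-coverage approximation of Approval-Chamberlin-Courant:
    repeatedly add the candidate covering the most still-uncovered voters."""
    committee, covered = [], set()
    for _ in range(committee_size):
        best = max(
            (c for c in candidates if c not in committee),
            key=lambda c: sum(1 for i, b in enumerate(ballots)
                              if i not in covered and c in b),
        )
        committee.append(best)
        covered |= {i for i, b in enumerate(ballots) if best in b}
    return committee

# Toy stream: 10,000 approval ballots over candidates 0..9.
stream = ({random.randint(0, 9) for _ in range(3)} for _ in range(10_000))
sample = reservoir_sample(stream, k=500)
print(greedy_approval_cc(sample, candidates=range(10), committee_size=3))
```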

The Price of Differential Privacy For Online Learning

Title The Price of Differential Privacy For Online Learning
Authors Naman Agarwal, Karan Singh
Abstract We design differentially private algorithms for the problem of online linear optimization in the full information and bandit settings with optimal $\tilde{O}(\sqrt{T})$ regret bounds. In the full-information setting, our results demonstrate that $\epsilon$-differential privacy may be ensured for free – in particular, the regret bounds scale as $O(\sqrt{T})+\tilde{O}\left(\frac{1}{\epsilon}\right)$. For bandit linear optimization, and as a special case, for non-stochastic multi-armed bandits, the proposed algorithm achieves a regret of $\tilde{O}\left(\frac{1}{\epsilon}\sqrt{T}\right)$, while the previously known best regret bound was $\tilde{O}\left(\frac{1}{\epsilon}T^{\frac{2}{3}}\right)$.
Tasks Multi-Armed Bandits
Published 2017-01-27
URL http://arxiv.org/abs/1701.07953v2
PDF http://arxiv.org/pdf/1701.07953v2.pdf
PWC https://paperswithcode.com/paper/the-price-of-differential-privacy-for-online
Repo
Framework
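
To make the setting concrete, here is a simplified noisy follow-the-regularized-leader sketch for online linear optimization over the unit ball. It is only an illustration of the shape of a private online update: the paper's algorithms use tree-based aggregation for the noise and careful privacy accounting, both of which are omitted here, so this snippet should not be read as achieving the stated guarantees.

```python
import numpy as np

def noisy_ftrl(losses, dim, epsilon, eta=0.1, seed=0):
    """Illustrative noisy FTRL for online linear optimization on the unit L2 ball.
    Laplace noise is added to the running gradient sum before each prediction."""
    rng = np.random.default_rng(seed)
    grad_sum = np.zeros(dim)
    total_loss = 0.0
    for g in losses:                        # g_t: the linear loss vector at round t
        noisy = grad_sum + rng.laplace(scale=1.0 / epsilon, size=dim)
        x = -eta * noisy
        norm = np.linalg.norm(x)
        if norm > 1:                        # project onto the unit ball
            x /= norm
        total_loss += float(g @ x)
        grad_sum += g
    return total_loss

T, d = 1000, 5
losses = np.random.default_rng(1).normal(size=(T, d))
print(noisy_ftrl(losses, dim=d, epsilon=1.0))
```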

Worst-case vs Average-case Design for Estimation from Fixed Pairwise Comparisons

Title Worst-case vs Average-case Design for Estimation from Fixed Pairwise Comparisons
Authors Ashwin Pananjady, Cheng Mao, Vidya Muthukumar, Martin J. Wainwright, Thomas A. Courtade
Abstract Pairwise comparison data arises in many domains, including tournament rankings, web search, and preference elicitation. Given noisy comparisons of a fixed subset of pairs of items, we study the problem of estimating the underlying comparison probabilities under the assumption of strong stochastic transitivity (SST). We also consider the noisy sorting subclass of the SST model. We show that when the assignment of items to the topology is arbitrary, these permutation-based models, unlike their parametric counterparts, do not admit consistent estimation for most comparison topologies used in practice. We then demonstrate that consistent estimation is possible when the assignment of items to the topology is randomized, thus establishing a dichotomy between worst-case and average-case designs. We propose two estimators in the average-case setting and analyze their risk, showing that it depends on the comparison topology only through the degree sequence of the topology. The rates achieved by these estimators are shown to be optimal for a large class of graphs. Our results are corroborated by simulations on multiple comparison topologies.
Tasks
Published 2017-07-19
URL http://arxiv.org/abs/1707.06217v1
PDF http://arxiv.org/pdf/1707.06217v1.pdf
PWC https://paperswithcode.com/paper/worst-case-vs-average-case-design-for
Repo
Framework

Efficient Rank Aggregation via Lehmer Codes

Title Efficient Rank Aggregation via Lehmer Codes
Authors Pan Li, Arya Mazumdar, Olgica Milenkovic
Abstract We propose a novel rank aggregation method based on converting permutations into their corresponding Lehmer codes or other subdiagonal images. Lehmer codes, also known as inversion vectors, are vector representations of permutations in which each coordinate can take values not restricted by the values of other coordinates. This transformation allows for decoupling of the coordinates and for performing aggregation via simple scalar median or mode computations. We present simulation results illustrating the performance of this completely parallelizable approach and analytically prove that both the mode and median aggregation procedures recover the correct centroid aggregate with small sample complexity when the permutations are drawn according to the well-known Mallows models. The proposed Lehmer code approach may also be used on partial rankings, with similar performance guarantees.
Tasks
Published 2017-01-28
URL http://arxiv.org/abs/1701.09083v1
PDF http://arxiv.org/pdf/1701.09083v1.pdf
PWC https://paperswithcode.com/paper/efficient-rank-aggregation-via-lehmer-codes
Repo
Framework
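
The core trick has a very concrete reading: map each permutation to its Lehmer code (inversion vector), aggregate coordinate-wise, and decode. A minimal sketch follows (coordinate-wise lower median shown; the paper also analyzes the mode, partial rankings, and the Mallows-model guarantees, none of which appear here).

```python
import numpy as np

def to_lehmer(perm):
    """Lehmer code (inversion vector): c[i] counts elements after position i
    that are smaller than perm[i]."""
    return [sum(perm[j] < perm[i] for j in range(i + 1, len(perm)))
            for i in range(len(perm))]

def from_lehmer(code):
    """Decode: the i-th entry selects the code[i]-th smallest remaining item."""
    remaining = list(range(len(code)))
    return [remaining.pop(c) for c in code]

def aggregate(perms):
    """Coordinate-wise (lower) median in Lehmer-code space, then decode.
    The decoupled coordinates make each median a simple scalar computation."""
    codes = np.array([to_lehmer(p) for p in perms])
    median_code = np.sort(codes, axis=0)[len(perms) // 2]
    return from_lehmer(list(median_code))

rankings = [[0, 1, 2, 3], [0, 2, 1, 3], [1, 0, 2, 3], [0, 1, 3, 2], [0, 1, 2, 3]]
print(aggregate(rankings))   # -> [0, 1, 2, 3], the central ranking
```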

Interpretable and Pedagogical Examples

Title Interpretable and Pedagogical Examples
Authors Smitha Milli, Pieter Abbeel, Igor Mordatch
Abstract Teachers intentionally pick the most informative examples to show their students. However, if the teacher and student are neural networks, the examples that the teacher network learns to give, although effective at teaching the student, are typically uninterpretable. We show that training the student and teacher iteratively, rather than jointly, can produce interpretable teaching strategies. We evaluate interpretability by (1) measuring the similarity of the teacher’s emergent strategies to intuitive strategies in each domain and (2) conducting human experiments to evaluate how effective the teacher’s strategies are at teaching humans. We show that the teacher network learns to select or generate interpretable, pedagogical examples to teach rule-based, probabilistic, boolean, and hierarchical concepts.
Tasks
Published 2017-11-02
URL http://arxiv.org/abs/1711.00694v2
PDF http://arxiv.org/pdf/1711.00694v2.pdf
PWC https://paperswithcode.com/paper/interpretable-and-pedagogical-examples
Repo
Framework

Grammatical Error Correction with Neural Reinforcement Learning

Title Grammatical Error Correction with Neural Reinforcement Learning
Authors Keisuke Sakaguchi, Matt Post, Benjamin Van Durme
Abstract We propose a neural encoder-decoder model with reinforcement learning (NRL) for grammatical error correction (GEC). Unlike conventional maximum likelihood estimation (MLE), the model directly optimizes towards an objective that considers a sentence-level, task-specific evaluation metric, avoiding the exposure bias issue in MLE. We demonstrate that NRL outperforms MLE both in human and automated evaluation metrics, achieving the state-of-the-art on a fluency-oriented GEC corpus.
Tasks Grammatical Error Correction
Published 2017-07-02
URL http://arxiv.org/abs/1707.00299v1
PDF http://arxiv.org/pdf/1707.00299v1.pdf
PWC https://paperswithcode.com/paper/grammatical-error-correction-with-neural
Repo
Framework
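
The MLE-vs-RL distinction in the abstract comes down to which gradient is followed: instead of maximizing token-level likelihood of the reference, sample a correction, score it with a sentence-level metric, and weight the log-likelihood gradient by that reward. The toy sketch below uses a categorical policy over a handful of candidate corrections and a crude overlap reward standing in for GLEU; the paper's seq2seq model, reward, and training details differ.

```python
import torch

# Toy setting: the "decoder" is a categorical policy over candidate corrections
# of one source sentence. Candidates, reward, and sizes are illustrative only.
candidates = [
    "she go to school yesterday",
    "she goes to school yesterday",
    "she went to school yesterday",
]
reference = "she went to school yesterday"

def sentence_reward(hyp, ref):
    """Crude sentence-level reward: fraction of positions matching the
    reference (stands in for GLEU or another task-specific metric)."""
    h, r = hyp.split(), ref.split()
    return sum(a == b for a, b in zip(h, r)) / max(len(r), 1)

logits = torch.zeros(len(candidates), requires_grad=True)
opt = torch.optim.SGD([logits], lr=0.5)

for step in range(200):
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()                        # sample a correction
    reward = sentence_reward(candidates[int(action)], reference)
    loss = -reward * dist.log_prob(action)        # REINFORCE objective
    opt.zero_grad()
    loss.backward()
    opt.step()

print(candidates[int(torch.argmax(logits))])      # -> "she went to school yesterday"
```

In practice a baseline is subtracted from the reward to reduce the variance of this estimator; the toy example converges without one because the action space is tiny.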

Cross-Media Similarity Evaluation for Web Image Retrieval in the Wild

Title Cross-Media Similarity Evaluation for Web Image Retrieval in the Wild
Authors Jianfeng Dong, Xirong Li, Duanqing Xu
Abstract In order to retrieve unlabeled images by textual queries, cross-media similarity computation is a key ingredient. Although novel methods are continuously introduced, little has been done to evaluate these methods together with large-scale query log analysis. Consequently, how far these methods have brought us in answering real-user queries is unclear. Given baseline methods that compute cross-media similarity using relatively simple text/image matching, how much progress advanced models have made is also unclear. This paper takes a pragmatic approach to answering the two questions. Queries are automatically categorized according to the proposed query visualness measure, and later connected to the evaluation of multiple cross-media similarity models on three test sets. Such a connection reveals that the success of the state-of-the-art is mainly attributed to their good performance on visual-oriented queries, while these queries account for only a small part of real-user queries. To quantify the current progress, we propose a simple text2image method, representing a novel test query by a set of images selected from a large-scale query log. Consequently, computing cross-media similarity between the test query and a given image boils down to comparing the visual similarity between the given image and the selected images. Image retrieval experiments on the challenging Clickture dataset show that the proposed text2image compares favorably to recent deep learning-based alternatives.
Tasks Image Retrieval
Published 2017-09-05
URL http://arxiv.org/abs/1709.01305v2
PDF http://arxiv.org/pdf/1709.01305v2.pdf
PWC https://paperswithcode.com/paper/cross-media-similarity-evaluation-for-web
Repo
Framework
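
The text2image baseline has a direct computational reading (a sketch under assumed inputs; the feature extractor and query-log lookup are placeholders): replace the textual query by the visual features of images previously associated with it in the log, and let cross-media similarity reduce to image-image similarity.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def text2image_similarity(query, candidate_feat, query_log, image_feats, k=5):
    """Represent the query by (the features of) up to k images taken from the
    query log, then score a candidate image by its average visual similarity
    to that set. `query_log` maps query -> image ids and `image_feats` maps
    image id -> feature vector; both are assumed, pre-built inputs."""
    selected = query_log.get(query, [])[:k]
    if not selected:
        return 0.0
    return float(np.mean([cosine(image_feats[i], candidate_feat) for i in selected]))

# Toy example with random 4-d "visual features".
rng = np.random.default_rng(0)
image_feats = {i: rng.normal(size=4) for i in range(10)}
query_log = {"red sports car": [0, 3, 7]}
print(text2image_similarity("red sports car", image_feats[3], query_log, image_feats))
```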

Reinforcement Learning Based Dynamic Selection of Auxiliary Objectives with Preserving of the Best Found Solution

Title Reinforcement Learning Based Dynamic Selection of Auxiliary Objectives with Preserving of the Best Found Solution
Authors Irina Petrova, Arina Buzdalova
Abstract Efficiency of single-objective optimization can be improved by introducing auxiliary objectives. Ideally, auxiliary objectives should be helpful. However, in practice, objectives may be efficient at some optimization stages but obstructive at others. In this paper we propose a modification of the EA+RL method which dynamically selects optimized objectives using reinforcement learning. The proposed modification prevents losing the best found solution. We analysed the proposed modification and compared it with the EA+RL method and Random Local Search on the XdivK, Generalized OneMax and LeadingOnes problems. The proposed modification outperforms the EA+RL method on all problem instances. It also outperforms the single-objective approach on most problem instances. We also provide a detailed analysis of how different components of the considered algorithms influence the efficiency of optimization. In addition, we present a theoretical analysis of the proposed modification on the XdivK problem.
Tasks
Published 2017-04-24
URL http://arxiv.org/abs/1704.07187v1
PDF http://arxiv.org/pdf/1704.07187v1.pdf
PWC https://paperswithcode.com/paper/reinforcement-learning-based-dynamic
Repo
Framework
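
The EA+RL idea, an agent choosing which objective the evolutionary step optimizes while elitism on the target objective guarantees the best found solution is never lost, can be sketched on OneMax-style bit strings. The sketch below uses a simple epsilon-greedy value estimate instead of the paper's full RL setup; rewards, mutation rate, and parameters are illustrative.

```python
import random

def onemax(x):           # target objective
    return sum(x)

def leading_ones(x):     # auxiliary objective (may help or obstruct)
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count

def ea_rl(n=50, iters=3000, eps=0.1, seed=0):
    rng = random.Random(seed)
    objectives = [onemax, leading_ones]
    q = [0.0, 0.0]                       # running value estimate per objective
    counts = [0, 0]
    x = [rng.randint(0, 1) for _ in range(n)]
    best = list(x)                       # elitism on the *target* objective
    for _ in range(iters):
        a = rng.randrange(2) if rng.random() < eps else max(range(2), key=lambda i: q[i])
        y = [b ^ (rng.random() < 1 / n) for b in x]       # standard bit-flip mutation
        if objectives[a](y) >= objectives[a](x):
            reward = onemax(y) - onemax(x)                # progress on the target
            x = y
        else:
            reward = 0.0
        counts[a] += 1
        q[a] += (reward - q[a]) / counts[a]
        if onemax(x) > onemax(best):                      # never lose the best solution
            best = list(x)
    return best

print(onemax(ea_rl()))
```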

Exploring Automated Essay Scoring for Nonnative English Speakers

Title Exploring Automated Essay Scoring for Nonnative English Speakers
Authors Amber Nigam
Abstract Automated Essay Scoring (AES) has been quite popular and is being widely used. However, the lack of an appropriate methodology for rating nonnative English speakers’ essays has meant a lopsided advancement in this field. In this paper, we report initial results of our experiments with nonnative AES that learns from manual evaluation of nonnative essays. For this purpose, we conducted an exercise in which essays written by nonnative English speakers in a test environment were rated both manually and by the automated system designed for the experiment. In the process, we experimented with a few features to learn about nuances linked to nonnative evaluation. The proposed methodology of automated essay evaluation has yielded a correlation coefficient of 0.750 with the manual evaluation.
Tasks
Published 2017-06-11
URL http://arxiv.org/abs/1706.03335v3
PDF http://arxiv.org/pdf/1706.03335v3.pdf
PWC https://paperswithcode.com/paper/exploring-automated-essay-scoring-for
Repo
Framework

Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning

Title Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning
Authors Stefan Depeweg, José Miguel Hernández-Lobato, Finale Doshi-Velez, Steffen Udluft
Abstract Bayesian neural networks with latent variables are scalable and flexible probabilistic models: They account for uncertainty in the estimation of the network weights and, by making use of latent variables, can capture complex noise patterns in the data. We show how to extract and decompose uncertainty into epistemic and aleatoric components for decision-making purposes. This allows us to successfully identify informative points for active learning of functions with heteroscedastic and bimodal noise. Using the decomposition we further define a novel risk-sensitive criterion for reinforcement learning to identify policies that balance expected cost, model-bias and noise aversion.
Tasks Active Learning, Decision Making
Published 2017-10-19
URL http://arxiv.org/abs/1710.07283v4
PDF http://arxiv.org/pdf/1710.07283v4.pdf
PWC https://paperswithcode.com/paper/decomposition-of-uncertainty-in-bayesian-deep
Repo
Framework
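
For a predictive distribution built from an ensemble of weight samples, the decomposition has a compact form: total predictive entropy splits into the expected (aleatoric) entropy plus the mutual information between weights and prediction (epistemic). A small sketch for a classifier follows; it assumes you already have per-sample class probabilities, whereas the paper works with Bayesian neural networks with latent variables and also treats the regression case.

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    return -np.sum(p * np.log(p + eps), axis=axis)

def decompose_uncertainty(probs):
    """probs: array of shape (num_weight_samples, num_classes) holding the
    predictive class probabilities for one input under different weight draws.
    Returns (total, aleatoric, epistemic) with total = aleatoric + epistemic."""
    mean_p = probs.mean(axis=0)
    total = entropy(mean_p)              # H[ E_w p(y|x,w) ]
    aleatoric = entropy(probs).mean()    # E_w H[ p(y|x,w) ]
    epistemic = total - aleatoric        # mutual information I(y; w | x)
    return total, aleatoric, epistemic

# Weight samples that disagree confidently -> mostly epistemic uncertainty.
print(decompose_uncertainty(np.array([[0.95, 0.05], [0.05, 0.95]])))
# Weight samples that agree but are unsure -> mostly aleatoric uncertainty.
print(decompose_uncertainty(np.array([[0.5, 0.5], [0.5, 0.5]])))
```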

Dropout Sampling for Robust Object Detection in Open-Set Conditions

Title Dropout Sampling for Robust Object Detection in Open-Set Conditions
Authors Dimity Miller, Lachlan Nicholson, Feras Dayoub, Niko Sünderhauf
Abstract Dropout Variational Inference, or Dropout Sampling, has been recently proposed as an approximation technique for Bayesian Deep Learning and evaluated for image classification and regression tasks. This paper investigates the utility of Dropout Sampling for object detection for the first time. We demonstrate how label uncertainty can be extracted from a state-of-the-art object detection system via Dropout Sampling. We evaluate this approach on a large synthetic dataset of 30,000 images, and a real-world dataset captured by a mobile robot in a versatile campus environment. We show that this uncertainty can be utilized to increase object detection performance under the open-set conditions that are typically encountered in robotic vision. A Dropout Sampling network is shown to achieve a 12.3% increase in recall (for the same precision score as a standard network) and a 15.1% increase in precision (for the same recall score as the standard network).
Tasks Image Classification, Object Detection, Robust Object Detection
Published 2017-10-18
URL http://arxiv.org/abs/1710.06677v2
PDF http://arxiv.org/pdf/1710.06677v2.pdf
PWC https://paperswithcode.com/paper/dropout-sampling-for-robust-object-detection
Repo
Framework
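
The measurement the abstract refers to can be sketched generically: keep dropout active at test time, run several stochastic forward passes, average the softmax scores, and use the entropy of the average as label uncertainty, rejecting detections that are too uncertain. The stand-in detector head, number of passes, and threshold below are assumptions, not the paper's settings.

```python
import numpy as np

def dropout_sampling(forward_pass, x, num_samples=20):
    """forward_pass(x) must return class probabilities with dropout *enabled*
    (e.g. a detector head kept in training-mode dropout). Returns the mean
    softmax score and a simple label-uncertainty measure (predictive entropy)."""
    samples = np.stack([forward_pass(x) for _ in range(num_samples)])
    mean_probs = samples.mean(axis=0)
    uncertainty = -np.sum(mean_probs * np.log(mean_probs + 1e-12))
    return mean_probs, uncertainty

# Stand-in stochastic "detector head": noisy softmax over 3 classes.
rng = np.random.default_rng(0)
def noisy_head(x):
    logits = np.array([2.0, 0.5, -1.0]) + rng.normal(scale=0.5, size=3)
    e = np.exp(logits - logits.max())
    return e / e.sum()

probs, u = dropout_sampling(noisy_head, x=None)
accept = u < 0.6        # reject detections that are too uncertain (open-set handling)
print(probs, u, accept)
```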

ADVISE: Symbolism and External Knowledge for Decoding Advertisements

Title ADVISE: Symbolism and External Knowledge for Decoding Advertisements
Authors Keren Ye, Adriana Kovashka
Abstract In order to convey the most content in their limited space, advertisements embed references to outside knowledge via symbolism. For example, a motorcycle stands for adventure (a positive property the ad wants associated with the product being sold), and a gun stands for danger (a negative property to dissuade viewers from undesirable behaviors). We show how to use symbolic references to better understand the meaning of an ad. We further show how anchoring ad understanding in general-purpose object recognition and image captioning improves results. We formulate the ad understanding task as matching the ad image to human-generated statements that describe the action that the ad prompts, and the rationale it provides for taking this action. Our proposed method outperforms the state of the art on this task, and on an alternative formulation of question-answering on ads. We show additional applications of our learned representations for matching ads to slogans, and clustering ads according to their topic, without extra training.
Tasks Image Captioning, Object Recognition, Question Answering
Published 2017-11-17
URL http://arxiv.org/abs/1711.06666v2
PDF http://arxiv.org/pdf/1711.06666v2.pdf
PWC https://paperswithcode.com/paper/advise-symbolism-and-external-knowledge-for
Repo
Framework