Paper Group NANR 151
Which Melbourne? Augmenting Geocoding with Maps. Sensala: a Dynamic Semantics System for Natural Language Processing. Neural Style Transfer via Meta Networks. Interpretable Rationale Augmented Charge Prediction System. Proceedings of the 27th International Conference on Computational Linguistics: Tutorial Abstracts. Deep Bayesian Learning and Understanding …
Which Melbourne? Augmenting Geocoding with Maps
Title | Which Melbourne? Augmenting Geocoding with Maps |
Authors | Milan Gritta, Mohammad Taher Pilehvar, Nigel Collier |
Abstract | The purpose of text geolocation is to associate geographic information contained in a document with a set (or sets) of coordinates, either implicitly by using linguistic features and/or explicitly by using geographic metadata combined with heuristics. We introduce a geocoder (location mention disambiguator) that achieves state-of-the-art (SOTA) results on three diverse datasets by exploiting the implicit lexical clues. Moreover, we propose a new method for systematic encoding of geographic metadata to generate two distinct views of the same text. To that end, we introduce the Map Vector (MapVec), a sparse representation obtained by plotting prior geographic probabilities, derived from population figures, on a World Map. We then integrate the implicit (language) and explicit (map) features to significantly improve a range of metrics. We also introduce an open-source dataset for geoparsing of news events covering global disease outbreaks and epidemics to help future evaluation in geoparsing. |
Tasks | |
Published | 2018-07-01 |
URL | https://www.aclweb.org/anthology/P18-1119/ |
https://www.aclweb.org/anthology/P18-1119 | |
PWC | https://paperswithcode.com/paper/which-melbourne-augmenting-geocoding-with |
Repo | |
Framework | |
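A minimal sketch of the MapVec idea from the abstract above, assuming a coarse equirectangular grid and a toy, hand-made candidate list; the 2° cell size, the population figures and the candidate coordinates are illustrative placeholders, not the paper's actual configuration.

```python
import numpy as np

def map_vector(candidates, cell_deg=2.0):
    """Toy MapVec: spread population-derived prior probabilities for one place
    name's candidate locations onto a world grid, then flatten to a sparse vector."""
    rows = int(180 / cell_deg)          # latitude bins
    cols = int(360 / cell_deg)          # longitude bins
    grid = np.zeros((rows, cols))
    total_pop = sum(pop for _, _, pop in candidates)
    for lat, lon, pop in candidates:
        r = min(int((90 - lat) / cell_deg), rows - 1)
        c = min(int((lon + 180) / cell_deg), cols - 1)
        grid[r, c] += pop / total_pop   # prior proportional to population
    return grid.ravel()                 # the "map view" fed alongside lexical features

# Hypothetical candidates for the name "Melbourne": (lat, lon, population)
melbourne = [(-37.81, 144.96, 5_000_000),   # Melbourne, Australia
             (28.08, -80.61, 84_000)]       # Melbourne, Florida
vec = map_vector(melbourne)
print(vec.shape, round(vec.sum(), 6))       # (16200,) 1.0
```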
Sensala: a Dynamic Semantics System for Natural Language Processing
Title | Sensala: a Dynamic Semantics System for Natural Language Processing |
Authors | Daniyar Itegulov, Ekaterina Lebedeva, Bruno Woltzenlogel Paleo |
Abstract | Here we describe Sensala, an open source framework for the semantic interpretation of natural language that provides the logical meaning of a given text. The framework's theory is based on a lambda calculus with exception handling and uses contexts, continuations, events and dependent types to handle a wide range of complex linguistic phenomena, such as donkey anaphora, verb phrase anaphora, propositional anaphora, presuppositions and implicatures. |
Tasks | |
Published | 2018-08-01 |
URL | https://www.aclweb.org/anthology/C18-2027/ |
https://www.aclweb.org/anthology/C18-2027 | |
PWC | https://paperswithcode.com/paper/sensala-a-dynamic-semantics-system-for |
Repo | |
Framework | |
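Sensala itself is built on a typed lambda calculus with exception handling; the toy sketch below only illustrates the general continuation-and-context style of dynamic semantics mentioned in the abstract (indefinites extend a context of discourse referents, pronouns look them up) and is not the system's actual API or implementation language.

```python
# Meanings: context (list of discourse referents) -> continuation -> formula (str).

def indefinite(noun):
    """'a <noun>': introduce a fresh referent, extend the context, continue."""
    def meaning(ctx, k):
        x = f"x{len(ctx)}"
        return f"exists {x}. ({noun}({x}) & {k(ctx + [x], x)})"
    return meaning

def pronoun(ctx, k):
    """'it': naively resolve to the most recently introduced referent."""
    return k(ctx, ctx[-1])

def transitive(subj, verb, obj):
    def meaning(ctx, k):
        return subj(ctx, lambda c1, s:
                    obj(c1, lambda c2, o: f"{verb}({s},{o}) & {k(c2)}"))
    return meaning

def intransitive(subj, verb):
    def meaning(ctx, k):
        return subj(ctx, lambda c1, s: f"{verb}({s}) & {k(c1)}")
    return meaning

def discourse(*sents):
    """Sequence sentences, threading the growing context through continuations."""
    def go(ctx, rest):
        return "true" if not rest else rest[0](ctx, lambda c: go(c, rest[1:]))
    return go([], list(sents))

# "A farmer owns a donkey. It brays."
print(discourse(transitive(indefinite("farmer"), "owns", indefinite("donkey")),
                intransitive(pronoun, "brays")))
# exists x0. (farmer(x0) & exists x1. (donkey(x1) & owns(x0,x1) & brays(x1) & true))
```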
Neural Style Transfer via Meta Networks
Title | Neural Style Transfer via Meta Networks |
Authors | Falong Shen, Shuicheng Yan, Gang Zeng |
Abstract | In this paper we propose a novel method to generate the specified network parameters through one feed-forward propagation in the meta networks for neural style transfer. Recent works on style transfer typically need to train image transformation networks for every new style, and the style is encoded in the network parameters by enormous iterations of stochastic gradient descent, which lacks generalization ability to new styles in the inference stage. To tackle these issues, we build a meta network which takes in the style image and generates a corresponding image transformation network directly. Compared with optimization-based methods for every style, our meta networks can handle an arbitrary new style within 19 milliseconds on one modern GPU card. The fast image transformation network generated by our meta network is only 449 KB, which is capable of real-time running on a mobile device. We also investigate the manifold of the style transfer networks by operating on the hidden features from meta networks. Experiments have well validated the effectiveness of our method. Code and trained models will be released. |
Tasks | Style Transfer |
Published | 2018-06-01 |
URL | http://openaccess.thecvf.com/content_cvpr_2018/html/Shen_Neural_Style_Transfer_CVPR_2018_paper.html |
http://openaccess.thecvf.com/content_cvpr_2018/papers/Shen_Neural_Style_Transfer_CVPR_2018_paper.pdf | |
PWC | https://paperswithcode.com/paper/neural-style-transfer-via-meta-networks |
Repo | |
Framework | |
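A heavily simplified sketch of the meta-network idea: a small network maps a style representation to the weights of a one-layer image transformation network, which is then applied with a functional convolution. The layer sizes, the stand-in style features and the missing perceptual losses are placeholders; this is not the paper's 449 KB architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetaNet(nn.Module):
    """Map a style feature vector to the parameters of a tiny transformation net."""
    def __init__(self, style_dim=128, out_ch=3, in_ch=3, k=3):
        super().__init__()
        self.out_ch, self.in_ch, self.k = out_ch, in_ch, k
        n_params = out_ch * in_ch * k * k + out_ch          # conv weight + bias
        self.fc = nn.Sequential(nn.Linear(style_dim, 256), nn.ReLU(),
                                nn.Linear(256, n_params))

    def forward(self, style_feat):
        p = self.fc(style_feat)                              # flat parameter vector
        w_end = self.out_ch * self.in_ch * self.k * self.k
        weight = p[:w_end].view(self.out_ch, self.in_ch, self.k, self.k)
        bias = p[w_end:]
        return weight, bias

def transform(content_img, weight, bias):
    """Apply the generated one-layer 'transformation network' functionally."""
    return torch.tanh(F.conv2d(content_img, weight, bias, padding=1))

meta = MetaNet()
style_feat = torch.randn(128)          # stand-in for encoded style image features
weight, bias = meta(style_feat)
content = torch.randn(1, 3, 64, 64)    # stand-in content image
stylized = transform(content, weight, bias)
print(stylized.shape)                  # torch.Size([1, 3, 64, 64])
```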
Interpretable Rationale Augmented Charge Prediction System
Title | Interpretable Rationale Augmented Charge Prediction System |
Authors | Xin Jiang, Hai Ye, Zhunchen Luo, WenHan Chao, Wenjia Ma |
Abstract | This paper proposes a neural-based system to address the interpretability problem in text classification, especially in the charge prediction task. First, we use a deep reinforcement learning method to extract rationales, i.e. short, readable and decisive snippets, from the input text. Then a rationale-augmented classification model is proposed to elevate the prediction accuracy. Naturally, the extracted rationales serve as the introspective explanation for the model's prediction, enhancing the transparency of the model. Experimental results demonstrate that our system is able to extract readable rationales that are highly consistent with manual annotation and is comparable with the attention model in prediction accuracy. |
Tasks | Text Classification |
Published | 2018-08-01 |
URL | https://www.aclweb.org/anthology/C18-2032/ |
https://www.aclweb.org/anthology/C18-2032 | |
PWC | https://paperswithcode.com/paper/interpretable-rationale-augmented-charge |
Repo | |
Framework | |
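A toy sketch of rationale extraction with policy gradients in the spirit of the abstract: a selector samples a binary mask over tokens, the classifier reads only the kept tokens, and the selector is updated with a REINFORCE-style term plus a sparsity cost. The model sizes, pooling scheme and reward shaping are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class RationaleModel(nn.Module):
    """Toy rationale-augmented classifier: sample a binary keep-mask over tokens,
    classify from the kept tokens only; the mask is the extracted rationale."""
    def __init__(self, vocab, dim=64, n_classes=5):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.selector = nn.Linear(dim, 1)            # per-token keep probability
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, tokens):
        e = self.emb(tokens)                                       # (T, dim)
        keep_prob = torch.sigmoid(self.selector(e)).squeeze(-1)    # (T,)
        mask = torch.bernoulli(keep_prob)                          # sampled rationale
        pooled = (e * mask.unsqueeze(-1)).sum(0) / (mask.sum() + 1e-6)
        logits = self.classifier(pooled)
        log_p_mask = (mask * torch.log(keep_prob + 1e-6)
                      + (1 - mask) * torch.log(1 - keep_prob + 1e-6)).sum()
        return logits, mask, log_p_mask

model = RationaleModel(vocab=1000)
tokens, label = torch.randint(0, 1000, (20,)), torch.tensor(2)
logits, mask, log_p = model(tokens)
ce = nn.functional.cross_entropy(logits.unsqueeze(0), label.unsqueeze(0))
sparsity = 0.1 * mask.mean()                    # encourage short rationales
loss = ce + (ce.detach() + sparsity) * log_p    # REINFORCE-style selector term
loss.backward()
print(mask.int().tolist())                      # which tokens form the rationale
```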
Proceedings of the 27th International Conference on Computational Linguistics: Tutorial Abstracts
Title | Proceedings of the 27th International Conference on Computational Linguistics: Tutorial Abstracts |
Authors | |
Abstract | |
Tasks | |
Published | 2018-08-01 |
URL | https://www.aclweb.org/anthology/C18-3000/ |
https://www.aclweb.org/anthology/C18-3000 | |
PWC | https://paperswithcode.com/paper/proceedings-of-the-27th-international-2 |
Repo | |
Framework | |
Deep Bayesian Learning and Understanding
Title | Deep Bayesian Learning and Understanding |
Authors | Jen-Tzung Chien |
Abstract | |
Tasks | Document Summarization, Machine Translation, Question Answering, Sentence Embeddings, Sentiment Analysis, Speech Recognition, Text Classification |
Published | 2018-08-01 |
URL | https://www.aclweb.org/anthology/C18-3004/ |
https://www.aclweb.org/anthology/C18-3004 | |
PWC | https://paperswithcode.com/paper/deep-bayesian-learning-and-understanding |
Repo | |
Framework | |
Character Level Convolutional Neural Network for Arabic Dialect Identification
Title | Character Level Convolutional Neural Network for Arabic Dialect Identification |
Authors | Mohamed Ali |
Abstract | This is the system description paper for our submission to the ADI shared task. |
Tasks | |
Published | 2018-08-01 |
URL | https://www.aclweb.org/anthology/W18-3913/ |
https://www.aclweb.org/anthology/W18-3913 | |
PWC | https://paperswithcode.com/paper/character-level-convolutional-neural-network-2 |
Repo | |
Framework | |
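The paper itself is a short system description; the sketch below is only a generic character-level CNN for dialect identification of the kind the title refers to (embed characters, convolve with several filter widths, max-pool over time, classify), with all sizes chosen arbitrarily.

```python
import torch
import torch.nn as nn

class CharCNN(nn.Module):
    """Generic character-level CNN classifier for dialect identification."""
    def __init__(self, n_chars=256, emb=32, n_filters=64, widths=(3, 4, 5), n_dialects=5):
        super().__init__()
        self.emb = nn.Embedding(n_chars, emb)
        self.convs = nn.ModuleList(nn.Conv1d(emb, n_filters, w) for w in widths)
        self.out = nn.Linear(n_filters * len(widths), n_dialects)

    def forward(self, char_ids):                       # (batch, seq_len)
        x = self.emb(char_ids).transpose(1, 2)         # (batch, emb, seq_len)
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.out(torch.cat(pooled, dim=1))      # dialect logits

model = CharCNN()
batch = torch.randint(0, 256, (8, 120))                # 8 utterances as char ids
print(model(batch).shape)                              # torch.Size([8, 5])
```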
Learning in Games with Lossy Feedback
Title | Learning in Games with Lossy Feedback |
Authors | Zhengyuan Zhou, Panayotis Mertikopoulos, Susan Athey, Nicholas Bambos, Peter W. Glynn, Yinyu Ye |
Abstract | We consider a game-theoretical multi-agent learning problem where the feedback information can be lost during the learning process and rewards are given by a broad class of games known as variationally stable games. We propose a simple variant of the classical online gradient descent algorithm, called reweighted online gradient descent (ROGD), and show that in variationally stable games, if each agent adopts ROGD, then almost sure convergence to the set of Nash equilibria is guaranteed, even when the feedback loss is asynchronous and arbitrarily correlated among agents. We then extend the framework to deal with unknown feedback loss probabilities by using an estimator (constructed from past data) in their place. Finally, we further extend the framework to accommodate both asynchronous loss and stochastic rewards and establish that multi-agent ROGD learning still converges to the set of Nash equilibria in such settings. Together, these results contribute to the broad landscape of multi-agent online learning by significantly relaxing the feedback information that is required to achieve desirable outcomes. |
Tasks | |
Published | 2018-12-01 |
URL | http://papers.nips.cc/paper/7760-learning-in-games-with-lossy-feedback |
http://papers.nips.cc/paper/7760-learning-in-games-with-lossy-feedback.pdf | |
PWC | https://paperswithcode.com/paper/learning-in-games-with-lossy-feedback |
Repo | |
Framework | |
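A minimal single-agent sketch of the reweighting intuition behind ROGD as described in the abstract: when feedback arrives only with probability p, the observed gradient is scaled by 1/p so the update stays unbiased in expectation. The quadratic toy game, the known loss probability and the step size are illustrative assumptions, not the paper's general setting.

```python
import numpy as np

rng = np.random.default_rng(0)

def rogd(grad, x0, p_feedback=0.7, lr=0.05, steps=2000):
    """Reweighted online gradient descent under lossy feedback: with probability
    1 - p the gradient is lost (no update); when it arrives it is scaled by 1/p."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        if rng.random() < p_feedback:            # feedback received this round
            x -= lr * grad(x) / p_feedback       # importance-reweighted step
    return x

# Toy variationally stable game: the loss is ||x - target||^2 / 2.
target = np.array([1.0, -2.0])
grad = lambda x: x - target                      # gradient of the quadratic loss
print(rogd(grad, x0=[5.0, 5.0]))                 # converges near [1., -2.]
```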
The Hitchhiker’s Guide to Testing Statistical Significance in Natural Language Processing
Title | The Hitchhiker’s Guide to Testing Statistical Significance in Natural Language Processing |
Authors | Rotem Dror, Gili Baumer, Segev Shlomov, Roi Reichart |
Abstract | Statistical significance testing is a standard statistical tool designed to ensure that experimental results are not coincidental. In this opinion/theoretical paper we discuss the role of statistical significance testing in Natural Language Processing (NLP) research. We establish the fundamental concepts of significance testing and discuss the specific aspects of NLP tasks, experimental setups and evaluation measures that affect the choice of significance tests in NLP research. Based on this discussion we propose a simple practical protocol for statistical significance test selection in NLP setups and accompany this protocol with a brief survey of the most relevant tests. We then survey recent empirical papers published in ACL and TACL during 2017 and show that while our community assigns great value to experimental results, statistical significance testing is often ignored or misused. We conclude with a brief discussion of open issues that should be properly addressed so that this important tool can be applied in NLP research in a statistically sound manner. |
Tasks | |
Published | 2018-07-01 |
URL | https://www.aclweb.org/anthology/P18-1128/ |
https://www.aclweb.org/anthology/P18-1128 | |
PWC | https://paperswithcode.com/paper/the-hitchhikeras-guide-to-testing-statistical |
Repo | |
Framework | |
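The paper surveys which significance tests fit which NLP setups; as one concrete example of the kind of test it covers, here is a common formulation of the paired bootstrap test over per-example scores of two systems on the same test set (the toy scores are synthetic).

```python
import numpy as np

def paired_bootstrap(scores_a, scores_b, n_resamples=10_000, seed=0):
    """Paired bootstrap test: estimate a p-value for 'system A is not better
    than system B' from per-example scores of both systems on the same test set."""
    rng = np.random.default_rng(seed)
    scores_a, scores_b = np.asarray(scores_a), np.asarray(scores_b)
    observed = scores_a.mean() - scores_b.mean()
    n, not_better = len(scores_a), 0
    for _ in range(n_resamples):
        idx = rng.integers(0, n, n)                        # resample test examples
        if scores_a[idx].mean() - scores_b[idx].mean() <= 0:
            not_better += 1                                # resample where A is not ahead
    return observed, not_better / n_resamples

# Toy per-example scores (e.g. per-sentence accuracy) for two systems.
rng = np.random.default_rng(1)
sys_a = rng.normal(0.72, 0.1, 500)
sys_b = rng.normal(0.70, 0.1, 500)
delta, p = paired_bootstrap(sys_a, sys_b)
print(f"delta={delta:.4f}, p={p:.4f}")
```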
Comparing Dynamics: Deep Neural Networks versus Glassy Systems
Title | Comparing Dynamics: Deep Neural Networks versus Glassy Systems |
Authors | Marco Baity-Jesi, Levent Sagun, Mario Geiger, Stefano Spigler, Gerard Ben Arous, Chiara Cammarota, Yann LeCun, Matthieu Wyart, Giulio Biroli |
Abstract | We analyze numerically the training dynamics of deep neural networks (DNN) by using methods developed in statistical physics of glassy systems. The two main issues we address are the complexity of the loss-landscape and of the dynamics within it, and to what extent DNNs share similarities with glassy systems. Our findings, obtained for different architectures and data-sets, suggest that during the training process the dynamics slows down because of an increasingly large number of flat directions. At large times, when the loss is approaching zero, the system diffuses at the bottom of the landscape. Despite some similarities with the dynamics of mean-field glassy systems, in particular, the absence of barrier crossing, we find distinctive dynamical behaviors in the two cases, thus showing that the statistical properties of the corresponding loss and energy landscapes are different. In contrast, when the network is under-parametrized we observe a typical glassy behavior, thus suggesting the existence of different phases depending on whether the network is under-parametrized or over-parametrized. |
Tasks | |
Published | 2018-07-01 |
URL | https://icml.cc/Conferences/2018/Schedule?showEvent=2190 |
http://proceedings.mlr.press/v80/baity-jesi18a/baity-jesi18a.pdf | |
PWC | https://paperswithcode.com/paper/comparing-dynamics-deep-neural-networks |
Repo | |
Framework | |
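One diagnostic in the spirit of the paper's glassy-dynamics analysis (not necessarily its exact observable) is the mean-squared displacement of the parameter vector between a waiting time and later training times; the toy trajectory below is a shrinking-step random walk standing in for saved training snapshots.

```python
import numpy as np

def mean_squared_displacement(snapshots, t_w):
    """Mean-squared displacement of the flattened parameter vector between a
    waiting time t_w and every later snapshot, used to probe aging/slowing dynamics."""
    ref = snapshots[t_w]
    return np.array([np.mean((theta - ref) ** 2) for theta in snapshots[t_w:]])

# Toy "training trajectory": a random walk whose step size shrinks over time,
# mimicking dynamics that slow down as flat directions proliferate.
rng = np.random.default_rng(0)
theta = np.zeros(1000)
snapshots = []
for t in range(200):
    theta = theta + rng.normal(0, 1.0 / (1 + t), size=theta.shape)
    snapshots.append(theta.copy())

for t_w in (10, 50, 150):
    msd = mean_squared_displacement(snapshots, t_w)
    print(t_w, round(float(msd[-1]), 5))   # later waiting times move less: aging
```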
Towards Understanding Text Factors in Oral Reading
Title | Towards Understanding Text Factors in Oral Reading |
Authors | Anastassia Loukina, Van Rynald T. Liceralde, Beata Beigman Klebanov |
Abstract | Using a case study, we show that variation in oral reading rate across passages for professional narrators is consistent across readers and much of it can be explained using features of the texts being read. While text complexity is a poor predictor of the reading rate, a substantial share of variability can be explained by timing and story-based factors, with performance reaching r=0.75 for unseen passages and narrators. |
Tasks | Language Acquisition |
Published | 2018-06-01 |
URL | https://www.aclweb.org/anthology/N18-1195/ |
https://www.aclweb.org/anthology/N18-1195 | |
PWC | https://paperswithcode.com/paper/towards-understanding-text-factors-in-oral |
Repo | |
Framework | |
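A small sketch of the kind of prediction setup the abstract describes: regress per-passage reading rate on text features and evaluate by correlation on passages held out from training. The synthetic features, the ridge model and the grouping scheme are placeholders, not the paper's features or data.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GroupKFold
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Toy data: 40 passages x 5 narrators, three text features per passage,
# reading rate driven mostly by the text plus per-recording noise.
n_passages, n_narrators = 40, 5
passage_features = rng.normal(size=(n_passages, 3))   # e.g. timing/story features
true_w = np.array([0.8, -0.5, 0.3])
rows, rates, groups = [], [], []
for p in range(n_passages):
    for _ in range(n_narrators):
        rows.append(passage_features[p])
        rates.append(passage_features[p] @ true_w + rng.normal(0, 0.3))
        groups.append(p)                                # group = passage id
X, y, groups = np.array(rows), np.array(rates), np.array(groups)

# Evaluate on passages never seen in training, mirroring the "unseen passages" setup.
preds = np.zeros_like(y)
for train, test in GroupKFold(n_splits=5).split(X, y, groups):
    preds[test] = Ridge(alpha=1.0).fit(X[train], y[train]).predict(X[test])
print(f"r = {pearsonr(y, preds)[0]:.2f}")
```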
A multilingual collection of CoNLL-U-compatible morphological lexicons
Title | A multilingual collection of CoNLL-U-compatible morphological lexicons |
Authors | Benoît Sagot |
Abstract | |
Tasks | Dependency Parsing, Part-Of-Speech Tagging |
Published | 2018-05-01 |
URL | https://www.aclweb.org/anthology/L18-1292/ |
https://www.aclweb.org/anthology/L18-1292 | |
PWC | https://paperswithcode.com/paper/a-multilingual-collection-of-conll-u |
Repo | |
Framework | |
Multilingual Universal Dependency Parsing from Raw Text with Low-Resource Language Enhancement
Title | Multilingual Universal Dependency Parsing from Raw Text with Low-Resource Language Enhancement |
Authors | Yingting Wu, Hai Zhao, Jia-Jun Tong |
Abstract | This paper describes the system of our team Phoenix for participating in the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. Given the annotated gold-standard data in CoNLL-U format, we train the tokenizer, tagger and parser separately for each treebank based on the open-source pipeline tool UDPipe. Our system reads plain text as input, performs the pre-processing steps (tokenization, lemmatization, morphology) and finally outputs the syntactic dependencies. For the low-resource languages with no training data, we instead use cross-lingual techniques to build models from closely related languages. In the official evaluation, our system achieves macro-averaged scores of 65.61%, 52.26% and 55.71% for LAS, MLAS and BLEX respectively. |
Tasks | Dependency Parsing, Part-Of-Speech Tagging, Tokenization |
Published | 2018-10-01 |
URL | https://www.aclweb.org/anthology/K18-2007/ |
https://www.aclweb.org/anthology/K18-2007 | |
PWC | https://paperswithcode.com/paper/multilingual-universal-dependency-parsing |
Repo | |
Framework | |
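A short sketch of running a trained UDPipe model end to end from raw text to CoNLL-U, in the spirit of the pipeline described above; it assumes the `ufal.udpipe` Python bindings and an already trained, treebank-specific `.udpipe` model file (the model path is hypothetical, and per-treebank training is done separately with UDPipe's training mode).

```python
# Assumes: pip install ufal.udpipe, plus a treebank-specific model trained beforehand.
from ufal.udpipe import Model, Pipeline, ProcessingError

model = Model.load("english-ewt.udpipe")          # hypothetical model path
pipeline = Pipeline(model, "tokenize",            # read raw text and tokenize
                    Pipeline.DEFAULT,             # tag (UPOS, feats, lemmas)
                    Pipeline.DEFAULT,             # parse dependencies
                    "conllu")                     # output format
error = ProcessingError()
conllu = pipeline.process("The quick brown fox jumps over the lazy dog.", error)
if error.occurred():
    raise RuntimeError(error.message)
print(conllu)                                     # CoNLL-U with tokens, tags, deps
```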
Using Linguistic Resources to Evaluate the Quality of Annotated Corpora
Title | Using Linguistic Resources to Evaluate the Quality of Annotated Corpora |
Authors | Max Silberztein |
Abstract | Statistical and neural-network-based methods that compute their results by comparing a given text to be analyzed with a reference corpus assume that the reference corpus is complete and reliable enough. In this article, I conduct several experiments on an extract of the Open American National Corpus to verify this assumption. |
Tasks | |
Published | 2018-08-01 |
URL | https://www.aclweb.org/anthology/W18-3802/ |
https://www.aclweb.org/anthology/W18-3802 | |
PWC | https://paperswithcode.com/paper/using-linguistic-resources-to-evaluate-the |
Repo | |
Framework | |
IBM Research at the CoNLL 2018 Shared Task on Multilingual Parsing
Title | IBM Research at the CoNLL 2018 Shared Task on Multilingual Parsing |
Authors | Hui Wan, Tahira Naseem, Young-Suk Lee, Vittorio Castelli, Miguel Ballesteros |
Abstract | This paper presents the IBM Research AI submission to the CoNLL 2018 Shared Task on Parsing Universal Dependencies. Our system implements a new joint transition-based parser, based on the Stack-LSTM framework and the arc-standard algorithm, that handles tokenization, part-of-speech tagging, morphological tagging and dependency parsing in a single model. By leveraging a combination of character-based modeling of words and recursive composition of partially built linguistic structures, we ranked 13th overall and 7th in the low-resource category. We also present a new sentence segmentation neural architecture based on Stack-LSTMs that was the 4th best overall. |
Tasks | Dependency Parsing, Morphological Tagging, Part-Of-Speech Tagging, Tokenization |
Published | 2018-10-01 |
URL | https://www.aclweb.org/anthology/K18-2009/ |
https://www.aclweb.org/anthology/K18-2009 | |
PWC | https://paperswithcode.com/paper/ibm-research-at-the-conll-2018-shared-task-on |
Repo | |
Framework | |
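The abstract mentions the arc-standard transition system; below is a minimal sketch of its three transitions over a stack and a buffer, with a hand-picked action sequence standing in for the paper's Stack-LSTM model, which would normally score and choose the actions.

```python
def shift(stack, buffer, arcs):
    stack.append(buffer.pop(0))

def left_arc(stack, buffer, arcs, label="dep"):
    # second-from-top becomes a dependent of the top: s2 <- s1
    dep = stack.pop(-2)
    arcs.append((stack[-1], label, dep))

def right_arc(stack, buffer, arcs, label="dep"):
    # top becomes a dependent of second-from-top: s2 -> s1
    dep = stack.pop()
    arcs.append((stack[-1], label, dep))

# Parse "She ate fish" with a hand-picked transition sequence.
stack, buffer, arcs = ["ROOT"], ["She", "ate", "fish"], []
for action in (shift, shift, left_arc, shift, right_arc, right_arc):
    action(stack, buffer, arcs)
print(arcs)   # [('ate', 'dep', 'She'), ('ate', 'dep', 'fish'), ('ROOT', 'dep', 'ate')]
```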