January 24, 2020

2434 words 12 mins read

Paper Group NANR 168

Proceedings of the Student Research Workshop Associated with RANLP 2019. Unsupervised Hierarchical Story Infilling. Calculating the Optimal Step in Shift-Reduce Dependency Parsing: From Cubic to Linear Time. Tree LSTMs with Convolution Units to Predict Stance and Rumor Veracity in Social Media Conversations. Copula Multi-label Learning. Fully Quant …

Proceedings of the Student Research Workshop Associated with RANLP 2019

Title Proceedings of the Student Research Workshop Associated with RANLP 2019
Authors
Abstract
Tasks
Published 2019-09-01
URL https://www.aclweb.org/anthology/R19-2000/
PDF https://www.aclweb.org/anthology/R19-2000
PWC https://paperswithcode.com/paper/proceedings-of-the-student-research-workshop-8
Repo
Framework

Unsupervised Hierarchical Story Infilling

Title Unsupervised Hierarchical Story Infilling
Authors Daphne Ippolito, David Grangier, Chris Callison-Burch, Douglas Eck
Abstract Story infilling involves predicting words to go into a missing span from a story. This challenging task has the potential to transform interactive tools for creative writing. However, state-of-the-art conditional language models have trouble balancing fluency and coherence with novelty and diversity. We address this limitation with a hierarchical model which first selects a set of rare words and then generates text conditioned on that set. By relegating the high entropy task of picking rare words to a word-sampling model, the second-stage model conditioned on those words can achieve high fluency and coherence by searching for likely sentences, without sacrificing diversity.
Tasks
Published 2019-06-01
URL https://www.aclweb.org/anthology/W19-2405/
PDF https://www.aclweb.org/anthology/W19-2405
PWC https://paperswithcode.com/paper/unsupervised-hierarchical-story-infilling
Repo
Framework
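
To make the two-stage decomposition above concrete, here is a minimal sketch in plain Python: a stage-one sampler picks a few rare anchor words, and a stage-two placeholder stands in for the conditional language model that would generate the span around them. All function names and the toy vocabulary are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the two-stage decomposition, with hypothetical components.
import random

def sample_rare_words(vocab_freq, k=3):
    """Stage 1 (hypothetical): pick k low-frequency anchor words.
    A real word-sampling model would also condition on the story context."""
    rare = sorted(vocab_freq, key=vocab_freq.get)[: max(1, len(vocab_freq) // 2)]
    return random.sample(rare, min(k, len(rare)))

def infill_span(left_context, right_context, anchor_words):
    """Stage 2 (placeholder): a real system would run a conditional language
    model (e.g. with beam search) conditioned on the anchors; here we only
    splice them in to show the data flow between the two stages."""
    return f"{left_context} {' '.join(anchor_words)} {right_context}"

vocab_freq = {"the": 1000, "a": 900, "walked": 120, "whispered": 5, "dragon": 3, "lighthouse": 2}
anchors = sample_rare_words(vocab_freq)
print(infill_span("She opened the door and", "before dawn.", anchors))
```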

Calculating the Optimal Step in Shift-Reduce Dependency Parsing: From Cubic to Linear Time

Title Calculating the Optimal Step in Shift-Reduce Dependency Parsing: From Cubic to Linear Time
Authors Mark-Jan Nederhof
Abstract We present a new cubic-time algorithm to calculate the optimal next step in shift-reduce dependency parsing, relative to ground truth, commonly referred to as dynamic oracle. Unlike existing algorithms, it is applicable if the training corpus contains non-projective structures. We then show that for a projective training corpus, the time complexity can be improved from cubic to linear.
Tasks Dependency Parsing
Published 2019-03-01
URL https://www.aclweb.org/anthology/Q19-1018/
PDF https://www.aclweb.org/anthology/Q19-1018
PWC https://paperswithcode.com/paper/calculating-the-optimal-step-in-shift-reduce
Repo
Framework
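
As background for the oracle discussed above, the sketch below shows the plain arc-standard shift-reduce transition system whose steps a dynamic oracle scores. This is standard parsing machinery, not the paper's cubic- or linear-time oracle algorithm.

```python
# Arc-standard transitions over a stack, a buffer, and a set of arcs.

def shift(stack, buffer, arcs):
    stack.append(buffer.pop(0))

def left_arc(stack, buffer, arcs):
    dependent = stack.pop(-2)            # second-from-top becomes a dependent
    arcs.append((stack[-1], dependent))  # ...of the new stack top

def right_arc(stack, buffer, arcs):
    dependent = stack.pop()              # stack top becomes a dependent
    arcs.append((stack[-1], dependent))  # ...of the element below it

# Parse the toy sentence "she(1) reads(2) books(3)" with a hand-picked sequence.
stack, buffer, arcs = [0], [1, 2, 3], []  # 0 is the artificial root
for action in (shift, shift, left_arc, shift, right_arc, right_arc):
    action(stack, buffer, arcs)
print(arcs)  # [(2, 1), (2, 3), (0, 2)]
```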

Tree LSTMs with Convolution Units to Predict Stance and Rumor Veracity in Social Media Conversations

Title Tree LSTMs with Convolution Units to Predict Stance and Rumor Veracity in Social Media Conversations
Authors Sumeet Kumar, Kathleen Carley
Abstract Learning from social-media conversations has gained significant attention recently because of its applications in areas like rumor detection. In this research, we propose a new way to represent social-media conversations as binarized constituency trees that allow comparing features in source posts and their replies effectively. Moreover, we propose to use convolution units in Tree LSTMs that are better at learning patterns in features obtained from the source and reply posts. Our Tree LSTM models employ multi-task (stance + rumor) learning and propagate the useful stance signal up the tree for rumor classification at the root node. The proposed models achieve state-of-the-art performance, outperforming the current best model by 12% and 15% on F1-macro for the rumor-veracity classification and stance classification tasks, respectively.
Tasks
Published 2019-07-01
URL https://www.aclweb.org/anthology/P19-1498/
PDF https://www.aclweb.org/anthology/P19-1498
PWC https://paperswithcode.com/paper/tree-lstms-with-convolution-units-to-predict
Repo
Framework
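
A rough sketch of the intuition behind a "convolution unit" on paired source/reply features: slide a small filter over the stacked vectors so the unit responds to local contrasts between a post and its reply. The shapes, the kernel, and the NumPy formulation are assumptions for illustration; the paper integrates such units inside Tree LSTM cells.

```python
import numpy as np

def conv_unit(source_feat, reply_feat, kernel):
    stacked = np.stack([source_feat, reply_feat])        # shape (2, d)
    k = kernel.shape[1]                                   # kernel shape (2, k)
    windows = [(stacked[:, i:i + k] * kernel).sum()       # local source/reply contrast
               for i in range(stacked.shape[1] - k + 1)]
    return np.maximum(np.array(windows), 0.0)             # ReLU nonlinearity

rng = np.random.default_rng(0)
src, rep = rng.normal(size=16), rng.normal(size=16)
out = conv_unit(src, rep, kernel=rng.normal(size=(2, 3)))
print(out.shape)  # (14,)
```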

Copula Multi-label Learning

Title Copula Multi-label Learning
Authors Weiwei Liu
Abstract A formidable challenge in multi-label learning is to model the interdependencies between labels and features. Unfortunately, the statistical properties of existing multi-label dependency models are still not well understood. Copulas are a powerful tool for modeling the dependence of multivariate data and have achieved great success in a wide range of applications, such as finance, econometrics and systems neuroscience. This inspires us to develop a novel copula multi-label learning paradigm for modeling label and feature dependencies. The copula-based paradigm makes it possible to reveal new statistical insights in multi-label learning. In particular, the paper first leverages the kernel trick to construct a continuous distribution in the output space, and then estimates our proposed model semiparametrically, where the copula is modeled parametrically while the marginal distributions are modeled nonparametrically. Theoretically, we show that our estimator is unbiased and consistent and asymptotically follows a normal distribution. Moreover, we bound the mean squared error of the estimator. Experimental results from various domains validate the superiority of our proposed approach.
Tasks Multi-Label Learning
Published 2019-12-01
URL http://papers.nips.cc/paper/8863-copula-multi-label-learning
PDF http://papers.nips.cc/paper/8863-copula-multi-label-learning.pdf
PWC https://paperswithcode.com/paper/copula-multi-label-learning
Repo
Framework
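
For background, the decomposition that any copula-based model builds on is Sklar's theorem, which factors a joint distribution into its marginal distributions and a copula carrying all the dependence (standard theory, not the paper's estimator):

```latex
F(y_1, \dots, y_d) = C\bigl(F_1(y_1), \dots, F_d(y_d)\bigr),
\qquad C : [0,1]^d \to [0,1].
```

The semiparametric scheme described above models the copula C parametrically while leaving the marginals F_1, ..., F_d nonparametric.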

Fully Quantized Network for Object Detection

Title Fully Quantized Network for Object Detection
Authors Rundong Li, Yan Wang, Feng Liang, Hongwei Qin, Junjie Yan, Rui Fan
Abstract Efficient neural network inference is important in a number of practical domains, such as deployment in mobile settings. An effective method for increasing inference efficiency is to use low bitwidth arithmetic, which can subsequently be accelerated using dedicated hardware. However, designing effective quantization schemes while maintaining network accuracy is challenging. In particular, current techniques face difficulty in performing fully end-to-end quantization, making use of aggressively low bitwidth regimes such as 4-bit, and applying quantized networks to complex tasks such as object detection. In this paper, we demonstrate that many of these difficulties arise because of instability during the fine-tuning stage of the quantization process, and propose several novel techniques to overcome these instabilities. We apply our techniques to produce fully quantized 4-bit detectors based on RetinaNet and Faster R-CNN, and show that these achieve state-of-the-art performance for quantized detectors. The mAP loss due to quantization using our methods is more than 3.8x less than the loss from existing methods.
Tasks Object Detection, Quantization
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Li_Fully_Quantized_Network_for_Object_Detection_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Li_Fully_Quantized_Network_for_Object_Detection_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/fully-quantized-network-for-object-detection
Repo
Framework
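
As a reference point for the "aggressively low bitwidth regimes" mentioned above, here is a textbook affine (uniform) quantizer/dequantizer pair for a b-bit setting. It illustrates the arithmetic being approximated; the paper's actual quantization scheme and its fine-tuning stabilisation techniques are not reproduced here.

```python
import numpy as np

def quantize(x, bits=4):
    qmax = 2 ** bits - 1                                   # e.g. 15 levels for 4-bit
    scale = (x.max() - x.min()) / qmax if x.max() > x.min() else 1.0
    zero_point = x.min()
    q = np.clip(np.round((x - zero_point) / scale), 0, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return q.astype(np.float32) * scale + zero_point

w = np.random.default_rng(0).normal(size=(3, 3)).astype(np.float32)
q, s, z = quantize(w, bits=4)
print(float(np.abs(w - dequantize(q, s, z)).max()))  # error stays below one quantization step
```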

Guiding the Flowing of Semantics: Interpretable Video Captioning via POS Tag

Title Guiding the Flowing of Semantics: Interpretable Video Captioning via POS Tag
Authors Xinyu Xiao, Lingfeng Wang, Bin Fan, Shiming Xiang, Chunhong Pan
Abstract In current video captioning models, the video frames are processed in a single network and the semantics are mixed into one feature, which not only increases the difficulty of caption decoding but also decreases the interpretability of the captioning models. To address these problems, we propose an Adaptive Semantic Guidance Network (ASGN), which instantiates the whole video's semantics as different POS-aware semantics under the supervision of part-of-speech (POS) tags. In the encoding process, the POS tag activates the related neurons and parses the overall semantic information into corresponding encoded video representations. Furthermore, the potential of the model is stimulated by the POS-aware video features. In the decoding process, the video features related to nouns and verbs are used as supervision to construct a new adaptive attention model that can decide whether or not to attend to the video feature. By explicitly improving the interpretability of the network, the learning process becomes more transparent and the results more predictable. Extensive experiments demonstrate the effectiveness of our model compared with state-of-the-art models.
Tasks Video Captioning
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-1213/
PDF https://www.aclweb.org/anthology/D19-1213
PWC https://paperswithcode.com/paper/guiding-the-flowing-of-semantics
Repo
Framework
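
A generic sketch of an adaptive-attention gate in the spirit of the decoding step described above: a learned scalar gate decides how strongly to rely on attended video features versus the decoder's own state. The weights, shapes, and NumPy formulation are illustrative assumptions, not the ASGN architecture itself.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adaptive_attend(decoder_state, video_feats, w_gate, w_att):
    scores = video_feats @ w_att @ decoder_state            # one score per frame
    att = np.exp(scores - scores.max())
    att /= att.sum()                                        # softmax over frames
    visual_ctx = att @ video_feats                          # attended video feature
    gate = sigmoid(w_gate @ decoder_state)                  # how much to trust the visual context
    return gate * visual_ctx + (1.0 - gate) * decoder_state

rng = np.random.default_rng(0)
d = 8
out = adaptive_attend(rng.normal(size=d), rng.normal(size=(5, d)),
                      rng.normal(size=d), rng.normal(size=(d, d)))
print(out.shape)  # (8,)
```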

Speculation and Negation detection in French biomedical corpora

Title Speculation and Negation detection in French biomedical corpora
Authors Clément Dalloux, Vincent Claveau, Natalia Grabar
Abstract In this work, we address the detection of negation and speculation, and of their scope, in French biomedical documents. It has indeed been observed that they play an important role and provide crucial clues for other NLP applications. Our methods are based on CRFs and BiLSTMs. We reach up to 97.21% and 91.30% F-measure for the detection of negation and speculation cues, respectively, using CRFs. For scope computation, we reach up to 90.81% and 86.73% F-measure on negation and speculation, respectively, using a BiLSTM-CRF fed with word embeddings.
Tasks Negation Detection, Word Embeddings
Published 2019-09-01
URL https://www.aclweb.org/anthology/R19-1026/
PDF https://www.aclweb.org/anthology/R19-1026
PWC https://paperswithcode.com/paper/speculation-and-negation-detection-in-french
Repo
Framework
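
For readers unfamiliar with the task framing, cue and scope detection are typically cast as token-level sequence labelling, which is what the CRF and BiLSTM-CRF models above are trained for. The example sentence and labels below are invented for illustration and do not come from the paper's corpora.

```python
# Hypothetical French sentence with separate cue and scope annotations.
tokens = ["Le", "patient", "ne",  "présente", "pas", "de", "fièvre", "."]
cues   = ["O",  "O",       "CUE", "O",        "CUE", "O",  "O",      "O"]
scope  = ["O",  "O",       "O",   "IN",       "O",   "IN", "IN",     "O"]

for token, cue, sc in zip(tokens, cues, scope):
    print(f"{token:10s} {cue:4s} {sc}")
```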

Text-Based Interactive Recommendation via Constraint-Augmented Reinforcement Learning

Title Text-Based Interactive Recommendation via Constraint-Augmented Reinforcement Learning
Authors Ruiyi Zhang, Tong Yu, Yilin Shen, Hongxia Jin, Changyou Chen
Abstract Text-based interactive recommendation provides richer user preferences and has demonstrated advantages over traditional interactive recommender systems. However, recommendations can easily violate preferences of users from their past natural-language feedback, since the recommender needs to explore new items for further improvement. To alleviate this issue, we propose a novel constraint-augmented reinforcement learning (RL) framework to efficiently incorporate user preferences over time. Specifically, we leverage a discriminator to detect recommendations violating user historical preference, which is incorporated into the standard RL objective of maximizing expected cumulative future rewards. Our proposed framework is general and is further extended to the task of constrained text generation. Empirical results show that the proposed method yields consistent improvement relative to standard RL methods.
Tasks Recommendation Systems, Text Generation
Published 2019-12-01
URL http://papers.nips.cc/paper/9657-text-based-interactive-recommendation-via-constraint-augmented-reinforcement-learning
PDF http://papers.nips.cc/paper/9657-text-based-interactive-recommendation-via-constraint-augmented-reinforcement-learning.pdf
PWC https://paperswithcode.com/paper/text-based-interactive-recommendation-via
Repo
Framework
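
A minimal sketch of the constraint idea described above: the usual RL reward is reduced whenever a discriminator (stubbed out here with a simple rule) flags a candidate recommendation as violating the user's past feedback. All names and the penalty form are illustrative assumptions, not the paper's implementation.

```python
def violates_history(item, history):
    """Stand-in for the learned discriminator over natural-language feedback."""
    return any(tag in history["disliked_tags"] for tag in item["tags"])

def constrained_reward(base_reward, item, history, penalty=1.0):
    return base_reward - penalty * float(violates_history(item, history))

history = {"disliked_tags": {"leather"}}
item = {"name": "boots", "tags": {"leather", "winter"}}
print(constrained_reward(0.7, item, history))  # 0.7 base reward minus the 1.0 violation penalty
```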

Nikolov-Radivchev at SemEval-2019 Task 6: Offensive Tweet Classification with BERT and Ensembles

Title Nikolov-Radivchev at SemEval-2019 Task 6: Offensive Tweet Classification with BERT and Ensembles
Authors Alex Nikolov, Victor Radivchev
Abstract This paper examines different approaches and models for offensive tweet classification that were used as part of the OffensEval 2019 competition. It covers tweet preprocessing, techniques for overcoming the unbalanced class distribution in the provided test data, and a comparison of the machine learning models that were attempted.
Tasks
Published 2019-06-01
URL https://www.aclweb.org/anthology/S19-2123/
PDF https://www.aclweb.org/anthology/S19-2123
PWC https://paperswithcode.com/paper/nikolov-radivchev-at-semeval-2019-task-6
Repo
Framework
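
One standard remedy for the class imbalance mentioned above is inverse-frequency class weighting of the training loss; the snippet below computes such weights. This is a generic technique offered for context, not necessarily the exact approach the authors used.

```python
from collections import Counter

def inverse_frequency_weights(labels):
    counts = Counter(labels)
    total = len(labels)
    return {cls: total / (len(counts) * n) for cls, n in counts.items()}

labels = ["NOT"] * 80 + ["OFF"] * 20       # toy 80/20 split, mirroring a skewed dataset
print(inverse_frequency_weights(labels))   # {'NOT': 0.625, 'OFF': 2.5}
```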

Personalizing Grammatical Error Correction: Adaptation to Proficiency Level and L1

Title Personalizing Grammatical Error Correction: Adaptation to Proficiency Level and L1
Authors Maria Nadejde, Joel Tetreault
Abstract Grammar error correction (GEC) systems have become ubiquitous in a variety of software applications, and have started to approach human-level performance for some datasets. However, very little is known about how to efficiently personalize these systems to the user's characteristics, such as their proficiency level and first language, or to emerging domains of text. We present the first results on adapting a general purpose neural GEC system to both the proficiency level and the first language of a writer, using only a few thousand annotated sentences. Our study is the broadest of its kind, covering five proficiency levels and twelve different languages, and comparing three different adaptation scenarios: adapting to the proficiency level only, to the first language only, or to both aspects simultaneously. We show that tailoring to both scenarios achieves the largest performance improvement (3.6 F0.5) relative to a strong baseline.
Tasks Grammatical Error Correction
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-5504/
PDF https://www.aclweb.org/anthology/D19-5504
PWC https://paperswithcode.com/paper/personalizing-grammatical-error-correction
Repo
Framework
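
A sketch of the adaptation setup described above, assuming a corpus where each sentence pair is annotated with the writer's proficiency level and first language: the general model would then be fine-tuned on one such bucket. The data records and the fine_tune placeholder are hypothetical.

```python
def select_bucket(corpus, proficiency=None, l1=None):
    """Keep only the examples matching the requested proficiency level and L1."""
    return [ex for ex in corpus
            if (proficiency is None or ex["cefr"] == proficiency)
            and (l1 is None or ex["l1"] == l1)]

corpus = [
    {"src": "He go to school .", "tgt": "He goes to school .", "cefr": "A2", "l1": "Spanish"},
    {"src": "She have two cat .", "tgt": "She has two cats .", "cefr": "B1", "l1": "German"},
]
adaptation_set = select_bucket(corpus, proficiency="A2", l1="Spanish")
print(len(adaptation_set))  # 1
# adapted_model = fine_tune(general_gec_model, adaptation_set)  # placeholder, not a real API
```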

Scalable Neural Theorem Proving on Knowledge Bases and Natural Language

Title Scalable Neural Theorem Proving on Knowledge Bases and Natural Language
Authors Pasquale Minervini, Matko Bosnjak, Tim Rocktäschel, Edward Grefenstette, Sebastian Riedel
Abstract Reasoning over text and Knowledge Bases (KBs) is a major challenge for Artificial Intelligence, with applications in machine reading, dialogue, and question answering. Transducing text to logical forms which can be operated on is a brittle and error-prone process. Operating directly on text by jointly learning representations and transformations thereof, by means of neural architectures that lack the ability to learn and exploit general rules, can be very data-inefficient and may not generalise correctly. These issues are addressed by Neural Theorem Provers (NTPs) (Rocktäschel & Riedel, 2017), neuro-symbolic systems based on a continuous relaxation of Prolog's backward chaining algorithm, where symbolic unification between atoms is replaced by a differentiable operator computing the similarity between their embedding representations. In this paper, we first propose Neighbourhood-approximated Neural Theorem Provers (NaNTPs), consisting of two extensions to NTPs, namely a) a method for drastically reducing the previously prohibitive time and space complexity during inference and learning, and b) an attention mechanism for improving the rule learning process, making them usable on real-world datasets. Then, we propose a novel approach for jointly reasoning over KB facts and textual mentions, by jointly embedding them in a shared embedding space. The proposed method is able to extract rules and provide explanations, involving both textual patterns and KB relations, from large KBs and text corpora. We show that NaNTPs perform on par with NTPs at a fraction of the cost, and can achieve competitive link prediction results on challenging large-scale datasets, including WN18, WN18RR, and FB15k-237 (with and without textual mentions), while being able to provide explanations for each prediction and extract interpretable rules.
Tasks Automated Theorem Proving, Link Prediction, Question Answering, Reading Comprehension
Published 2019-05-01
URL https://openreview.net/forum?id=BJzmzn0ctX
PDF https://openreview.net/pdf?id=BJzmzn0ctX
PWC https://paperswithcode.com/paper/scalable-neural-theorem-proving-on-knowledge
Repo
Framework
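
The core idea named above, replacing symbolic unification with a differentiable similarity between embeddings, can be illustrated in a few lines. The Gaussian kernel and toy embeddings below are illustrative only; see the paper for the operator actually used.

```python
import numpy as np

# Toy relation embeddings; in an NTP these are learned parameters.
embeddings = {
    "grandpaOf":     np.array([0.9, 0.1, 0.0]),
    "grandfatherOf": np.array([0.85, 0.15, 0.05]),
    "locatedIn":     np.array([-0.9, 0.2, 0.9]),
}

def soft_unify(sym_a, sym_b, mu=1.0):
    """Soft unification: a similarity score in (0, 1] instead of a hard match."""
    d = np.linalg.norm(embeddings[sym_a] - embeddings[sym_b])
    return float(np.exp(-d ** 2 / (2 * mu ** 2)))

print(round(soft_unify("grandpaOf", "grandfatherOf"), 3))  # close to 1: near-synonyms unify softly
print(round(soft_unify("grandpaOf", "locatedIn"), 3))      # much lower: unrelated relations
```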

Developing and Orchestrating a Portfolio of Natural Legal Language Processing and Document Curation Services

Title Developing and Orchestrating a Portfolio of Natural Legal Language Processing and Document Curation Services
Authors Georg Rehm, Julián Moreno-Schneider, Jorge Gracia, Artem Revenko, Victor Mireles, Maria Khvalchik, Ilan Kernerman, Andis Lagzdins, Marcis Pinnis, Artus Vasilevskis, Elena Leitner, Jan Milde, Pia Weißenhorn
Abstract We present a portfolio of natural legal language processing and document curation services currently under development in a collaborative European project. First, we give an overview of the project and the different use cases, while, in the main part of the article, we focus upon the 13 different processing services that are being deployed in different prototype applications using a flexible and scalable microservices architecture. Their orchestration is operationalised using a content and document curation workflow manager.
Tasks
Published 2019-06-01
URL https://www.aclweb.org/anthology/W19-2207/
PDF https://www.aclweb.org/anthology/W19-2207
PWC https://paperswithcode.com/paper/developing-and-orchestrating-a-portfolio-of
Repo
Framework

Slang Detection and Identification

Title Slang Detection and Identification
Authors Zhengqi Pei, Zhewei Sun, Yang Xu
Abstract The prevalence of informal language such as slang presents challenges for natural language systems, particularly in the automatic discovery of flexible word usages. Previous work has explored slang in terms of dictionary construction, sentiment analysis, word formation, and interpretation, but scarce research has attempted the basic problem of slang detection and identification. We examine the extent to which deep learning methods support automatic detection and identification of slang from natural sentences using a combination of bidirectional recurrent neural networks, conditional random field, and multilayer perceptron. We test these models based on a comprehensive set of linguistic features in sentence-level detection and token-level identification of slang. We found that a prominent feature of slang is the surprising use of words across syntactic categories or syntactic shift (e.g., verb-noun). Our best models detect the presence of slang at the sentence level with an F1-score of 0.80 and identify its exact position at the token level with an F1-Score of 0.50.
Tasks Sentiment Analysis
Published 2019-11-01
URL https://www.aclweb.org/anthology/K19-1082/
PDF https://www.aclweb.org/anthology/K19-1082
PWC https://paperswithcode.com/paper/slang-detection-and-identification
Repo
Framework
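
A toy illustration of the "syntactic shift" cue highlighted above: a word appearing in a part of speech other than its usual one (e.g. a verb used as a noun). The mini-lexicon and tagger output are hand-made for illustration only.

```python
usual_pos = {"ask": "VERB", "fail": "VERB", "party": "NOUN"}

def syntactic_shift(token, observed_pos):
    """Flag tokens whose observed POS differs from their usual category."""
    expected = usual_pos.get(token.lower())
    return expected is not None and expected != observed_pos

tagged = [("That", "DET"), ("was", "VERB"), ("an", "DET"), ("epic", "ADJ"), ("fail", "NOUN")]
print([tok for tok, pos in tagged if syntactic_shift(tok, pos)])  # ['fail']
```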

AX Semantics’ Submission to the SIGMORPHON 2019 Shared Task

Title AX Semantics’ Submission to the SIGMORPHON 2019 Shared Task
Authors Andreas Madsack, Robert Weißgraeber
Abstract This paper describes the AX Semantics' submission to the SIGMORPHON 2019 shared task on morphological reinflection. We implemented two systems, both tackling the task for all languages in one codebase, without any underlying language-specific features. The first one is an encoder-decoder model using AllenNLP; the second system uses the same model modified by a custom trainer that trains only with the target language resources after a specific threshold. We especially focused on building an implementation using AllenNLP with out-of-the-box methods to facilitate easy operation and reuse.
Tasks
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-4201/
PDF https://www.aclweb.org/anthology/W19-4201
PWC https://paperswithcode.com/paper/ax-semantics-submission-to-the-sigmorphon
Repo
Framework