July 26, 2019

1760 words 9 mins read

Paper Group NANR 141

If No Media Were Allowed inside the Venue, Was Anybody Allowed?. Adapting to Learner Errors with Minimal Supervision. Argument Structure and Referent Systems. Balancing information exposure in social networks. Addressing Domain Adaptation for Chinese Word Segmentation with Global Recurrent Structure. Deeper Attention to Abusive User Content Moderat …

If No Media Were Allowed inside the Venue, Was Anybody Allowed?

Title If No Media Were Allowed inside the Venue, Was Anybody Allowed?
Authors Zahra Sarabi, Eduardo Blanco
Abstract This paper presents a framework to understand negation in positive terms. Specifically, we extract positive meaning from negation when the negation cue syntactically modifies a noun or adjective. Our approach is grounded on generating potential positive interpretations automatically, and then scoring them. Experimental results show that interpretations scored high can be reliably identified.
Tasks Question Answering
Published 2017-04-01
URL https://www.aclweb.org/anthology/E17-1081/
PDF https://www.aclweb.org/anthology/E17-1081
PWC https://paperswithcode.com/paper/if-no-media-were-allowed-inside-the-venue-was
Repo
Framework
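
The abstract above describes a two-step pipeline: generate candidate positive interpretations for a negated noun or adjective, then score them. The Python sketch below only mirrors that generate-then-score shape; the substitution table and the length-based scorer are invented placeholders, not the paper's resources or ranking model.

```python
# Toy generate-then-score pipeline for positive interpretations of negation.
# CANDIDATES and score() are made-up placeholders for illustration only.
from typing import List, Tuple

CANDIDATES = {"media": ["press", "photographers", "television crews"]}

def generate_interpretations(sentence: str, negated_word: str) -> List[str]:
    """Drop the negation cue and substitute candidate positive terms."""
    template = sentence.replace("No " + negated_word, "{}").replace("no " + negated_word, "{}")
    return [template.format(term) for term in CANDIDATES.get(negated_word, [])]

def score(candidate: str) -> float:
    """Placeholder plausibility score; a real system would use a trained ranker."""
    return 1.0 / (1.0 + len(candidate.split()))

def rank(sentence: str, negated_word: str) -> List[Tuple[str, float]]:
    return sorted(((c, score(c)) for c in generate_interpretations(sentence, negated_word)),
                  key=lambda x: -x[1])

for cand, s in rank("No media were allowed inside the venue.", "media"):
    print(f"{s:.3f}  {cand}")
```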

Adapting to Learner Errors with Minimal Supervision

Title Adapting to Learner Errors with Minimal Supervision
Authors Alla Rozovskaya, Dan Roth, Mark Sammons
Abstract This article considers the problem of correcting errors made by English as a Second Language writers from a machine learning perspective, and addresses an important issue of developing an appropriate training paradigm for the task, one that accounts for error patterns of non-native writers using minimal supervision. Existing training approaches present a trade-off between large amounts of cheap data offered by the native-trained models and additional knowledge of learner error patterns provided by the more expensive method of training on annotated learner data. We propose a novel training approach that draws on the strengths offered by the two standard training paradigms (training either on native or on annotated learner data) and that outperforms both of these standard methods. Using the key observation that parameters relating to error regularities exhibited by non-native writers are relatively simple, we develop models that can incorporate knowledge about error regularities based on a small annotated sample but that are otherwise trained on native English data. The key contribution of this article is the introduction and analysis of two methods for adapting the learned models to error patterns of non-native writers; one method that applies to generative classifiers and a second that applies to discriminative classifiers. Both methods demonstrated state-of-the-art performance in several text correction competitions. In particular, the Illinois system that implements these methods ranked at the top in two recent CoNLL shared tasks on error correction. We conduct further evaluation of the proposed approaches studying the effect of using error data from speakers of the same native language, languages that are closely related linguistically, and unrelated languages.
Tasks
Published 2017-12-01
URL https://www.aclweb.org/anthology/J17-4002/
PDF https://www.aclweb.org/anthology/J17-4002
PWC https://paperswithcode.com/paper/adapting-to-learner-errors-with-minimal
Repo
Framework
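
The article's two adaptation methods are only summarized at a high level here; as a heavily simplified illustration of the generative case, the sketch below trains Naive Bayes feature likelihoods on "native" examples and swaps in a class prior estimated from a small annotated learner sample (the word the writer actually used versus the intended correction). The confusion set, features, vocabulary size, and data are all invented for the example; this is not the Illinois system.

```python
# Toy Naive Bayes with a learner-adapted prior: likelihoods from native data,
# prior P(correct label | word the writer used) from a small learner sample.
from collections import Counter, defaultdict
import math

LABELS = ["in", "on", "at"]  # invented confusion set for preposition correction

def train_likelihoods(native_examples):
    """P(context feature | label), counted from native data."""
    counts = defaultdict(Counter)
    for features, label in native_examples:
        counts[label].update(features)
    return counts

def learner_prior(learner_examples):
    """P(correct label | observed source word), from annotated learner data."""
    prior = defaultdict(Counter)
    for source_word, correct_label in learner_examples:
        prior[source_word][correct_label] += 1
    return prior

def predict(features, source_word, likelihoods, prior, alpha=1.0, vocab_size=10_000):
    best, best_score = None, -math.inf
    for label in LABELS:
        denom = sum(likelihoods[label].values()) + alpha * vocab_size
        log_p = math.log((prior[source_word][label] + alpha) /
                         (sum(prior[source_word].values()) + alpha * len(LABELS)))
        for f in features:
            log_p += math.log((likelihoods[label][f] + alpha) / denom)
        if log_p > best_score:
            best, best_score = label, log_p
    return best

native = [(("prev=interested",), "in"), (("prev=depends",), "on")]
learner = [("on", "in"), ("on", "on"), ("at", "in")]
likelihoods, prior = train_likelihoods(native), learner_prior(learner)
print(predict(("prev=interested",), "on", likelihoods, prior))  # prints "in"
```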

Argument Structure and Referent Systems

Title Argument Structure and Referent Systems
Authors Marcus Kracht, Yousuf Aboamer
Abstract
Tasks Semantic Composition
Published 2017-01-01
URL https://www.aclweb.org/anthology/W17-6918/
PDF https://www.aclweb.org/anthology/W17-6918
PWC https://paperswithcode.com/paper/argument-structure-and-referent-systems
Repo
Framework

Balancing information exposure in social networks

Title Balancing information exposure in social networks
Authors Kiran Garimella, Aristides Gionis, Nikos Parotsidis, Nikolaj Tatti
Abstract Social media has brought a revolution on how people are consuming news. Beyond the undoubtedly large number of advantages brought by social-media platforms, a point of criticism has been the creation of echo chambers and filter bubbles, caused by social homophily and algorithmic personalization. In this paper we address the problem of balancing the information exposure in a social network. We assume that two opposing campaigns (or viewpoints) are present in the network, and that network nodes have different preferences towards these campaigns. Our goal is to find two sets of nodes to employ in the respective campaigns, so that the overall information exposure for the two campaigns is balanced. We formally define the problem, characterize its hardness, develop approximation algorithms, and present experimental evaluation results. Our model is inspired by the literature on influence maximization, but we offer significant novelties. First, balance of information exposure is modeled by a symmetric difference function, which is neither monotone nor submodular, and thus, not amenable to existing approaches. Second, while previous papers consider a setting with selfish agents and provide bounds on best response strategies (i.e., move of the last player), we consider a setting with a centralized agent and provide bounds for a global objective function.
Tasks
Published 2017-12-01
URL http://papers.nips.cc/paper/7052-balancing-information-exposure-in-social-networks
PDF http://papers.nips.cc/paper/7052-balancing-information-exposure-in-social-networks.pdf
PWC https://paperswithcode.com/paper/balancing-information-exposure-in-social
Repo
Framework
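
To make the balance objective concrete, the sketch below scores a pair of seed sets by the expected number of nodes that end up exposed to both campaigns or to neither, estimated with Monte Carlo runs of a generic independent-cascade spread. The propagation model, the estimator, and the toy graph are illustrative assumptions; the paper's approximation algorithms and hardness results are not reproduced here.

```python
# Monte Carlo estimate of "balanced exposure" for two seed sets under a
# generic independent cascade. Everything here is a simplified stand-in.
import random
from typing import Dict, List, Set, Tuple

Graph = Dict[int, List[Tuple[int, float]]]  # node -> [(neighbor, edge prob)]

def cascade(graph: Graph, seeds: Set[int], rng: random.Random) -> Set[int]:
    """One simulated spread from the given seeds."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        node = frontier.pop()
        for nbr, p in graph.get(node, []):
            if nbr not in active and rng.random() < p:
                active.add(nbr)
                frontier.append(nbr)
    return active

def balance_score(graph: Graph, seeds_a: Set[int], seeds_b: Set[int],
                  runs: int = 200, seed: int = 0) -> float:
    """Expected number of nodes exposed to both campaigns or to neither."""
    rng = random.Random(seed)
    nodes = set(graph) | {v for nbrs in graph.values() for v, _ in nbrs}
    total = 0
    for _ in range(runs):
        exp_a = cascade(graph, seeds_a, rng)
        exp_b = cascade(graph, seeds_b, rng)
        total += sum(1 for v in nodes if (v in exp_a) == (v in exp_b))
    return total / runs

# Toy usage on a 5-node line graph with campaigns seeded at opposite ends.
g: Graph = {0: [(1, 0.5)], 1: [(2, 0.5)], 2: [(3, 0.5)], 3: [(4, 0.5)]}
print(balance_score(g, {0}, {4}))
```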

Addressing Domain Adaptation for Chinese Word Segmentation with Global Recurrent Structure

Title Addressing Domain Adaptation for Chinese Word Segmentation with Global Recurrent Structure
Authors Shen Huang, Xu Sun, Houfeng Wang
Abstract Boundary features are widely used in traditional Chinese Word Segmentation (CWS) methods as they can utilize unlabeled data to help improve the Out-of-Vocabulary (OOV) word recognition performance. Although various neural network methods for CWS have achieved performance competitive with state-of-the-art systems, these methods, constrained by the domain and size of the training corpus, do not work well in domain adaptation. In this paper, we propose a novel BLSTM-based neural network model which incorporates a global recurrent structure designed for modeling boundary features dynamically. Experiments show that the proposed structure can effectively boost the performance of Chinese Word Segmentation, especially OOV-Recall, which brings benefits to domain adaptation. We achieved state-of-the-art results on 6 domains of CNKI articles, and competitive results to the best reported on the 4 domains of SIGHAN Bakeoff 2010 data.
Tasks Chinese Word Segmentation, Domain Adaptation, Feature Engineering
Published 2017-11-01
URL https://www.aclweb.org/anthology/I17-1019/
PDF https://www.aclweb.org/anthology/I17-1019
PWC https://paperswithcode.com/paper/addressing-domain-adaptation-for-chinese-word
Repo
Framework
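
For context, the sketch below is a plain BiLSTM character tagger over BMES segmentation tags in PyTorch, i.e. the kind of neural baseline the paper builds on. The proposed global recurrent structure for modeling boundary features dynamically is not implemented here, and all dimensions are placeholder values.

```python
# Generic BiLSTM character tagger for Chinese word segmentation (BMES tags).
# This is the baseline architecture only, not the paper's proposed model.
import torch
import torch.nn as nn

class BiLSTMSegmenter(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 64,
                 hidden: int = 128, num_tags: int = 4):  # B, M, E, S
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_tags)

    def forward(self, char_ids: torch.Tensor) -> torch.Tensor:
        # char_ids: (batch, seq_len) -> logits: (batch, seq_len, num_tags)
        h, _ = self.lstm(self.emb(char_ids))
        return self.out(h)

# Toy forward pass over a batch of two 6-character sentences.
model = BiLSTMSegmenter(vocab_size=5000)
logits = model(torch.randint(0, 5000, (2, 6)))
print(logits.shape)  # torch.Size([2, 6, 4])
```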

Deeper Attention to Abusive User Content Moderation

Title Deeper Attention to Abusive User Content Moderation
Authors John Pavlopoulos, Prodromos Malakasiotis, Ion Androutsopoulos
Abstract Experimenting with a new dataset of 1.6M user comments from a news portal and an existing dataset of 115K Wikipedia talk page comments, we show that an RNN operating on word embeddings outperforms the previous state of the art in moderation, which used logistic regression or an MLP classifier with character or word n-grams. We also compare against a CNN operating on word embeddings, and a word-list baseline. A novel, deep, classification-specific attention mechanism improves the performance of the RNN further, and can also highlight suspicious words for free, without including highlighted words in the training data. We consider both fully automatic and semi-automatic moderation.
Tasks Word Embeddings
Published 2017-09-01
URL https://www.aclweb.org/anthology/D17-1117/
PDF https://www.aclweb.org/anthology/D17-1117
PWC https://paperswithcode.com/paper/deeper-attention-to-abusive-user-content
Repo
Framework
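
The sketch below is a minimal GRU-plus-attention comment classifier in PyTorch of the general kind the abstract describes: attention weights pool the recurrent states into a comment vector and can be inspected per word. It is a generic illustration with arbitrary dimensions, not the authors' deeper, classification-specific attention mechanism.

```python
# Minimal RNN-with-attention comment scorer: GRU over word embeddings,
# attention pooling, sigmoid reject/accept score. Illustrative only.
import torch
import torch.nn as nn

class AttentionRNNModerator(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 100, hidden: int = 128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.clf = nn.Linear(2 * hidden, 1)

    def forward(self, word_ids: torch.Tensor):
        h, _ = self.rnn(self.emb(word_ids))          # (batch, seq, 2*hidden)
        a = torch.softmax(self.attn(h), dim=1)       # per-word attention weights
        comment_vec = (a * h).sum(dim=1)             # attention-weighted pooling
        return torch.sigmoid(self.clf(comment_vec)).squeeze(-1), a.squeeze(-1)

# Toy usage: score one 8-word comment and inspect the attention weights.
model = AttentionRNNModerator(vocab_size=20000)
prob, weights = model(torch.randint(0, 20000, (1, 8)))
print(prob.item(), weights.shape)  # rejection probability, torch.Size([1, 8])
```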

Creating a gold standard corpus for terminological annotation from online forum data

Title Creating a gold standard corpus for terminological annotation from online forum data
Authors Anna Hätty, Simon Tannert, Ulrich Heid
Abstract
Tasks
Published 2017-09-01
URL https://www.aclweb.org/anthology/W17-7002/
PDF https://www.aclweb.org/anthology/W17-7002
PWC https://paperswithcode.com/paper/creating-a-gold-standard-corpus-for
Repo
Framework

Keynote Lecture 1: NLP in Tomorrow’s Profiling - Words May Fail You

Title Keynote Lecture 1: NLP in Tomorrow’s Profiling - Words May Fail You
Authors Björn W. Schuller
Abstract
Tasks
Published 2017-12-01
URL https://www.aclweb.org/anthology/W17-7501/
PDF https://www.aclweb.org/anthology/W17-7501
PWC https://paperswithcode.com/paper/keynote-lecture-1-nlp-in-tomorrows-profiling
Repo
Framework

Keynote Lecture 2: Grammatical Error Correction: Past, Present and Future

Title Keynote Lecture 2: Grammatical Error Correction: Past, Present and Future
Authors Hwee Tou Ng
Abstract
Tasks Grammatical Error Correction
Published 2017-12-01
URL https://www.aclweb.org/anthology/W17-7513/
PDF https://www.aclweb.org/anthology/W17-7513
PWC https://paperswithcode.com/paper/keynote-lecture-2-grammatical-error
Repo
Framework

Unsupervised Acquisition of Comprehensive Multiword Lexicons using Competition in an n-gram Lattice

Title Unsupervised Acquisition of Comprehensive Multiword Lexicons using Competition in an n-gram Lattice
Authors Julian Brooke, Jan Šnajder, Timothy Baldwin
Abstract We present a new model for acquiring comprehensive multiword lexicons from large corpora based on competition among n-gram candidates. In contrast to the standard approach of simple ranking by association measure, in our model n-grams are arranged in a lattice structure based on subsumption and overlap relationships, with nodes inhibiting other nodes in their vicinity when they are selected as a lexical item. We show how the configuration of such a lattice can be optimized tractably, and demonstrate using annotations of sampled n-grams that our method consistently outperforms alternatives by at least 0.05 F-score across several corpora and languages.
Tasks
Published 2017-01-01
URL https://www.aclweb.org/anthology/Q17-1032/
PDF https://www.aclweb.org/anthology/Q17-1032
PWC https://paperswithcode.com/paper/unsupervised-acquisition-of-comprehensive
Repo
Framework
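
As a rough illustration of the competition idea, the sketch below greedily picks n-grams by a made-up association score while suppressing candidates that subsume, are subsumed by, or overlap an already selected item. The paper optimizes the whole lattice configuration rather than selecting greedily; only the inhibition intuition carries over.

```python
# Greedy toy version of lattice competition: selecting an n-gram inhibits
# overlapping and subsuming/subsumed candidates. Scores are invented.
from typing import Dict, List, Tuple

def conflicts(a: Tuple[str, ...], b: Tuple[str, ...]) -> bool:
    """True if one n-gram contains the other or they share a word."""
    a_s, b_s = " ".join(a), " ".join(b)
    return a_s in b_s or b_s in a_s or bool(set(a) & set(b))

def select_lexicon(scored: Dict[Tuple[str, ...], float]) -> List[Tuple[str, ...]]:
    chosen: List[Tuple[str, ...]] = []
    for ngram, _ in sorted(scored.items(), key=lambda kv: -kv[1]):
        if not any(conflicts(ngram, c) for c in chosen):
            chosen.append(ngram)   # selection inhibits its neighborhood
    return chosen

candidates = {
    ("hot", "dog"): 3.2,
    ("hot", "dog", "stand"): 2.9,  # competes with its sub- and super-grams
    ("dog", "stand"): 1.1,
    ("ice", "cream"): 3.0,
}
print(select_lexicon(candidates))  # [('hot', 'dog'), ('ice', 'cream')]
```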

Learning Compositionality Functions on Word Embeddings for Modelling Attribute Meaning in Adjective-Noun Phrases

Title Learning Compositionality Functions on Word Embeddings for Modelling Attribute Meaning in Adjective-Noun Phrases
Authors Matthias Hartung, Fabian Kaupmann, Soufian Jebbara, Philipp Cimiano
Abstract Word embeddings have been shown to be highly effective in a variety of lexical semantic tasks. They tend to capture meaningful relational similarities between individual words, at the expense of lacking the capability of making the underlying semantic relation explicit. In this paper, we investigate the attribute relation that often holds between the constituents of adjective-noun phrases. We use CBOW word embeddings to represent word meaning and learn a compositionality function that combines the individual constituents into a phrase representation, thus capturing the compositional attribute meaning. The resulting embedding model, while being fully interpretable, outperforms count-based distributional vector space models that are tailored to attribute meaning in the two tasks of attribute selection and phrase similarity prediction. Moreover, as the model captures a generalized layer of attribute meaning, it bears the potential to be used for predictions over various attribute inventories without re-training.
Tasks Word Embeddings
Published 2017-04-01
URL https://www.aclweb.org/anthology/E17-1006/
PDF https://www.aclweb.org/anthology/E17-1006
PWC https://paperswithcode.com/paper/learning-compositionality-functions-on-word
Repo
Framework
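
The sketch below shows the shape of one simple compositionality function: concatenate the adjective and noun vectors and fit a linear map into the phrase space by least squares. The embeddings and targets are random stand-ins; the paper learns its composition functions from CBOW embeddings with real attribute-labeled data.

```python
# Linear compositionality function for adjective-noun phrases, fit by least
# squares on synthetic vectors. Only the shape of the computation is shown.
import numpy as np

rng = np.random.default_rng(0)
d = 50                      # embedding dimensionality (placeholder)
n_pairs = 200               # number of training adjective-noun pairs

adj = rng.normal(size=(n_pairs, d))      # adjective vectors (stand-in for CBOW)
noun = rng.normal(size=(n_pairs, d))     # noun vectors
target = rng.normal(size=(n_pairs, d))   # target phrase/attribute vectors

X = np.hstack([adj, noun])               # [adj; noun] concatenation, (n, 2d)
W, *_ = np.linalg.lstsq(X, target, rcond=None)   # composition map, (2d, d)

def compose(a: np.ndarray, n: np.ndarray) -> np.ndarray:
    """Phrase vector for an adjective-noun pair under the learned map."""
    return np.concatenate([a, n]) @ W

phrase = compose(adj[0], noun[0])
print(phrase.shape)                      # (50,)
```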

“Who Mentions Whom?” - Understanding the Psycho-Sociological Aspects of Twitter Mention Network

Title “Who Mentions Whom?” - Understanding the Psycho-Sociological Aspects of Twitter Mention Network
Authors R Sudhesh Solomon, Abhay Narayan, Srinivas P Y K L, Amitava Das
Abstract
Tasks
Published 2017-12-01
URL https://www.aclweb.org/anthology/W17-7552/
PDF https://www.aclweb.org/anthology/W17-7552
PWC https://paperswithcode.com/paper/who-mentions-whom-understanding-the-psycho
Repo
Framework

Neural Networks for Semantic Textual Similarity

Title Neural Networks for Semantic Textual Similarity
Authors Derek Prijatelj, Jugal Kalita, Jonathan Ventura
Abstract
Tasks Semantic Textual Similarity
Published 2017-12-01
URL https://www.aclweb.org/anthology/W17-7556/
PDF https://www.aclweb.org/anthology/W17-7556
PWC https://paperswithcode.com/paper/neural-networks-for-semantic-textual
Repo
Framework

Automating Biomedical Evidence Synthesis: RobotReviewer

Title Automating Biomedical Evidence Synthesis: RobotReviewer
Authors Iain Marshall, Joël Kuiper, Edward Banner, Byron C. Wallace
Abstract
Tasks
Published 2017-07-01
URL https://www.aclweb.org/anthology/P17-4002/
PDF https://www.aclweb.org/anthology/P17-4002
PWC https://paperswithcode.com/paper/automating-biomedical-evidence-synthesis
Repo
Framework

Two Layers of Annotation for Representing Event Mentions in News Stories

Title Two Layers of Annotation for Representing Event Mentions in News Stories
Authors Maria Pia di Buono, Martin Tutek, Jan Šnajder, Goran Glavaš, Bojana Dalbelo Bašić, Nataša Milić-Frayling
Abstract In this paper, we describe our preliminary study on annotating event mentions as a part of our research on high-precision news event extraction models. To this end, we propose a two-layer annotation scheme, designed to separately capture the functional and conceptual aspects of event mentions. We hypothesize that the precision of models can be improved by modeling and extracting separately the different aspects of news events, and then combining the extracted information by leveraging the complementarities of the models. In addition, we carry out a preliminary annotation using the proposed scheme and analyze the annotation quality in terms of inter-annotator agreement.
Tasks
Published 2017-04-01
URL https://www.aclweb.org/anthology/W17-0810/
PDF https://www.aclweb.org/anthology/W17-0810
PWC https://paperswithcode.com/paper/two-layers-of-annotation-for-representing
Repo
Framework
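
Purely as an illustration of keeping the two layers separate, the record below sketches one possible structure for an annotated event mention, with functional and conceptual information held apart. Every field name and value is invented for this example and does not reflect the authors' actual scheme.

```python
# Hypothetical two-layer event-mention record: functional and conceptual
# aspects are stored in separate layers. Fields are invented for this sketch.
event_mention = {
    "text": "Protesters gathered outside the parliament on Friday.",
    "functional_layer": {          # how the mention functions in the discourse
        "anchor": "gathered",
        "span": [11, 19],
        "realis": "actual",
    },
    "conceptual_layer": {          # what kind of event it denotes
        "event_type": "demonstration",
        "participants": ["Protesters"],
        "location": "parliament",
        "time": "Friday",
    },
}

print(sorted(event_mention))  # ['conceptual_layer', 'functional_layer', 'text']
```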