October 15, 2019

2260 words 11 mins read

Paper Group NANR 204

A neural parser as a direct classifier for head-final languages. Now I Remember! Episodic Memory For Reinforcement Learning. Alignment, Acceptance, and Rejection of Group Identities in Online Political Discourse. Verb Alternations and Their Impact on Frame Induction. QuantTree: Histograms for Change Detection in Multivariate Data Streams. A Neural …

A neural parser as a direct classifier for head-final languages

Title A neural parser as a direct classifier for head-final languages
Authors Hiroshi Kanayama, Masayasu Muraoka, Ryosuke Kohita
Abstract This paper demonstrates a neural parser implementation suitable for consistently head-final languages such as Japanese. Unlike the transition- and graph-based algorithms in most state-of-the-art parsers, our parser directly selects the head word of a dependent from a limited number of candidates. This method drastically simplifies the model so that we can easily interpret the output of the neural model. Moreover, by exploiting grammatical knowledge to restrict possible modification types, we can control the output of the parser to reduce specific errors without adding annotated corpora. The neural parser performed well both on conventional Japanese corpora and on the Japanese version of the Universal Dependencies corpus, and the advantages of distributed representations were observed in a comparison with a conventional non-neural model.
Tasks Dependency Parsing, Tokenization
Published 2018-07-01
URL https://www.aclweb.org/anthology/W18-2906/
PDF https://www.aclweb.org/anthology/W18-2906
PWC https://paperswithcode.com/paper/a-neural-parser-as-a-direct-classifier-for
Repo
Framework
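
A minimal sketch of the head-selection idea described in the entry above, assuming a strict head-final constraint (every head lies to the dependent's right) and a fixed candidate window; the toy embeddings, random scoring weights, and the `embed`/`select_head` helpers are illustrative placeholders, not the paper's actual model.

```python
# Head selection as direct classification for a head-final language:
# every dependent's head lies to its right, so we score only the K
# nearest right-hand candidates and pick the argmax.
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM, K = 8, 4                       # toy embedding size, candidate window
W = rng.normal(size=2 * EMB_DIM)        # stand-in for trained scorer weights

def embed(token: str) -> np.ndarray:
    """Deterministic toy embedding (hash-seeded); a real parser would use
    trained word/POS embeddings."""
    return np.random.default_rng(abs(hash(token)) % 2**32).normal(size=EMB_DIM)

def select_head(tokens, i):
    """Return the index of the predicted head of tokens[i], chosen directly
    from at most K candidates to its right (head-final constraint)."""
    candidates = list(range(i + 1, min(i + 1 + K, len(tokens))))
    feats = [np.concatenate([embed(tokens[i]), embed(tokens[j])]) for j in candidates]
    scores = np.array(feats) @ W        # one score per candidate head
    return candidates[int(np.argmax(scores))]

tokens = ["kare", "wa", "hon", "o", "yonda"]   # "he read a book"
for i in range(len(tokens) - 1):               # last token acts as the root
    print(tokens[i], "->", tokens[select_head(tokens, i)])
```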

Now I Remember! Episodic Memory For Reinforcement Learning

Title Now I Remember! Episodic Memory For Reinforcement Learning
Authors Ricky Loynd, Matthew Hausknecht, Lihong Li, Li Deng
Abstract Humans rely on episodic memory constantly, in remembering the name of someone they met 10 minutes ago, the plot of a movie as it unfolds, or where they parked the car. Endowing reinforcement learning agents with episodic memory is a key step on the path toward replicating human-like general intelligence. We analyze why standard RL agents lack episodic memory today, and why existing RL tasks don’t require it. We design a new form of external memory called Masked Experience Memory, or MEM, modeled after key features of human episodic memory. To evaluate episodic memory we define an RL task based on the common children’s game of Concentration. We find that a MEM RL agent leverages episodic memory effectively to master Concentration, unlike the baseline agents we tested.
Tasks
Published 2018-01-01
URL https://openreview.net/forum?id=SJxE3jlA-
PDF https://openreview.net/pdf?id=SJxE3jlA-
PWC https://paperswithcode.com/paper/now-i-remember-episodic-memory-for
Repo
Framework
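
A toy sketch of one plausible reading of the Masked Experience Memory idea above: past observations are stored verbatim, and retrieval compares the current query to memory only over masked (relevant) feature dimensions. The `MaskedMemory` class, the fixed mask, and the softmax read are illustrative assumptions, not the paper's exact architecture, where the mask is learned.

```python
import numpy as np

class MaskedMemory:
    def __init__(self, dim, mask=None):
        self.dim = dim
        self.mask = np.ones(dim) if mask is None else np.asarray(mask, float)
        self.slots = []                      # stored past observations

    def write(self, obs):
        self.slots.append(np.asarray(obs, float))

    def read(self, query, temperature=1.0):
        """Softmax-weighted sum of stored observations, with similarity
        computed only over the masked feature dimensions."""
        if not self.slots:
            return np.zeros(self.dim)
        M = np.stack(self.slots)             # (n_slots, dim)
        sims = (M * self.mask) @ (np.asarray(query, float) * self.mask)
        w = np.exp(sims / temperature)
        w /= w.sum()
        return w @ M

# Toy usage: remember two "card" observations, query with a noisy repeat.
mem = MaskedMemory(dim=4, mask=[1, 1, 0, 0])     # last two dims treated as noise
mem.write([1.0, 0.0, 0.3, 0.9])
mem.write([0.0, 1.0, 0.7, 0.1])
print(mem.read([1.0, 0.1, 0.5, 0.5]).round(2))   # retrieves mostly the first slot
```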

Alignment, Acceptance, and Rejection of Group Identities in Online Political Discourse

Title Alignment, Acceptance, and Rejection of Group Identities in Online Political Discourse
Authors Hagyeong Shin, Gabriel Doyle
Abstract Conversation is a joint social process, with participants cooperating to exchange information. This process is helped along through linguistic alignment: participants' adoption of each other's word use. This alignment is robust, appearing in many settings, and is nearly always positive. We create an alignment model for examining alignment in Twitter conversations across antagonistic groups. This model finds that some word categories, specifically pronouns used to establish group identity and common ground, are negatively aligned. This negative alignment is observed even though other categories, less related to the group dynamics, show the standard positive alignment. This suggests that alignment is strongly biased toward cooperative alignment, but that different linguistic features can show substantially different behaviors.
Tasks
Published 2018-06-01
URL https://www.aclweb.org/anthology/N18-4001/
PDF https://www.aclweb.org/anthology/N18-4001
PWC https://paperswithcode.com/paper/alignment-acceptance-and-rejection-of-group
Repo
Framework
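
The paper fits a model-based alignment measure; as a rough illustration of the underlying quantity, the sketch below computes a simple conditional-probability alignment score for a single word category: does a reply use the category more often when the preceding message used it? The category set, the `uses` tokenizer, and the example message pairs are made up for illustration.

```python
from collections import Counter

FIRST_PERSON_PLURAL = {"we", "us", "our", "ours"}

def uses(category, text):
    return any(tok in category for tok in text.lower().split())

def alignment(pairs, category):
    """pairs: iterable of (prompt, reply) message texts.
    Returns P(reply uses cat | prompt uses cat) - P(reply uses cat | prompt doesn't)."""
    counts = Counter()
    for prompt, reply in pairs:
        counts[(uses(category, prompt), uses(category, reply))] += 1
    def p(prompt_used):
        yes = counts[(prompt_used, True)]
        no = counts[(prompt_used, False)]
        return yes / (yes + no) if yes + no else 0.0
    return p(True) - p(False)

pairs = [("we should organize", "our plan is ready"),
         ("we must act", "you people never listen"),
         ("the vote is tomorrow", "we will be there"),
         ("see the report", "interesting numbers")]
print(round(alignment(pairs, FIRST_PERSON_PLURAL), 3))   # positive => alignment
```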

Verb Alternations and Their Impact on Frame Induction

Title Verb Alternations and Their Impact on Frame Induction
Authors Esther Seyffarth
Abstract Frame induction is the automatic creation of frame-semantic resources similar to FrameNet or PropBank, which map lexical units of a language to frame representations of each lexical unit's semantics. For verbs, these representations usually include a specification of their argument slots and of the selectional restrictions that apply to each slot. Verbs that participate in diathesis alternations have different syntactic realizations whose semantics are closely related, but not identical. We discuss the influence that such alternations have on frame induction, compare several possible frame structures for verbs in the causative alternation, and propose a systematic analysis of alternating verbs that encodes their similarities as well as their differences.
Tasks Question Answering
Published 2018-06-01
URL https://www.aclweb.org/anthology/N18-4003/
PDF https://www.aclweb.org/anthology/N18-4003
PWC https://paperswithcode.com/paper/verb-alternations-and-their-impact-on-frame
Repo
Framework
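
As an illustration of what a frame structure covering both variants of the causative alternation might look like, here is a small, purely hypothetical data structure (not the paper's formalism or FrameNet's): one shared frame with two syntactic realizations, relating "John broke the window" and "The window broke".

```python
from dataclasses import dataclass, field

@dataclass
class Realization:
    pattern: str                     # syntactic frame
    role_map: dict                   # grammatical function -> semantic role

@dataclass
class Frame:
    name: str
    core_roles: dict                 # role -> selectional restriction
    realizations: list = field(default_factory=list)

break_frame = Frame(
    name="Cause_to_fragment",
    core_roles={"Agent": "animate|force", "Patient": "physical_object"},
)
break_frame.realizations = [
    Realization("NP-V-NP", {"subject": "Agent", "object": "Patient"}),   # causative
    Realization("NP-V",    {"subject": "Patient"}),                      # inchoative
]

for r in break_frame.realizations:
    print(r.pattern, r.role_map)
```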

QuantTree: Histograms for Change Detection in Multivariate Data Streams

Title QuantTree: Histograms for Change Detection in Multivariate Data Streams
Authors Giacomo Boracchi, Diego Carrera, Cristiano Cervellera, Danilo Macciò
Abstract We address the problem of detecting distribution changes in multivariate data streams by means of histograms. Histograms are very general and flexible models, which have been relatively ignored in the change-detection literature as they often require a number of bins that grows unfeasibly with the data dimension. We present QuantTree, a recursive binary splitting scheme that adaptively defines the histogram bins to ease the detection of any distribution change. Our design scheme implies that i) we can easily control the overall number of bins and ii) the bin probabilities do not depend on the distribution of stationary data. The latter is a very relevant aspect in change detection, since thresholds of test statistics based on these histograms (e.g., the Pearson statistic or the total variation) can be numerically computed from univariate and synthetically generated data, while still guaranteeing a controlled false positive rate. Our experiments show that the proposed histograms are very effective in detecting changes in high-dimensional data streams, and that the resulting thresholds can effectively control the false positive rate, even when the number of training samples is relatively small.
Tasks
Published 2018-07-01
URL https://icml.cc/Conferences/2018/Schedule?showEvent=2268
PDF http://proceedings.mlr.press/v80/boracchi18a/boracchi18a.pdf
PWC https://paperswithcode.com/paper/quanttree-histograms-for-change-detection-in
Repo
Framework
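
A simplified sketch of the splitting idea described above, assuming axis-aligned cuts placed at empirical quantiles so that each bin captures a fixed fraction of the training points regardless of the underlying distribution; the published algorithm differs in several details (e.g., how split directions and target probabilities are chosen). The Pearson statistic below is the standard chi-square form applied to a new window.

```python
import numpy as np

def build_quanttree(X, n_bins=8, seed=0):
    """Return a list of (dim, threshold) rules; bin k is the set of points
    surviving rules 0..k-1 and falling below the cut of rule k."""
    rng = np.random.default_rng(seed)
    rules, remaining = [], X.copy()
    target = len(X) // n_bins                          # points per bin
    for _ in range(n_bins - 1):
        dim = rng.integers(X.shape[1])
        thr = np.sort(remaining[:, dim])[target - 1]   # empirical quantile cut
        rules.append((dim, thr))
        remaining = remaining[remaining[:, dim] > thr]
    return rules

def assign_bins(X, rules):
    bins = np.full(len(X), len(rules))                 # default: last bin
    alive = np.ones(len(X), bool)
    for k, (dim, thr) in enumerate(rules):
        hit = alive & (X[:, dim] <= thr)
        bins[hit] = k
        alive &= ~hit
    return bins

def pearson_statistic(bins, n_bins, expected_frac):
    counts = np.bincount(bins, minlength=n_bins)
    expected = expected_frac * len(bins)
    return float(((counts - expected) ** 2 / expected).sum())

rng = np.random.default_rng(1)
train = rng.normal(size=(4000, 5))
rules = build_quanttree(train, n_bins=8)
same  = rng.normal(size=(500, 5))
drift = rng.normal(loc=0.6, size=(500, 5))
for name, batch in [("stationary", same), ("shifted", drift)]:
    stat = pearson_statistic(assign_bins(batch, rules), 8, np.full(8, 1 / 8))
    print(name, round(stat, 1))                        # larger under drift
```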

A Neural Approach to Pun Generation

Title A Neural Approach to Pun Generation
Authors Zhiwei Yu, Jiwei Tan, Xiaojun Wan
Abstract Automatic pun generation is an interesting and challenging text generation task. Previous efforts rely on templates or laboriously manually annotated pun datasets, which heavily constrains the quality and diversity of generated puns. Since sequence-to-sequence models provide an effective technique for text generation, it is promising to investigate these models on the pun generation task. In this paper, we propose neural network models for homographic pun generation, and they can generate puns without requiring any pun data for training. We first train a conditional neural language model from a general text corpus, and then generate puns from the language model with an elaborately designed decoding algorithm. Automatic and human evaluations show that our models are able to generate homographic puns of good readability and quality.
Tasks Image Captioning, Language Modelling, Machine Translation, Text Generation, Text Summarization
Published 2018-07-01
URL https://www.aclweb.org/anthology/P18-1153/
PDF https://www.aclweb.org/anthology/P18-1153
PWC https://paperswithcode.com/paper/a-neural-approach-to-pun-generation
Repo
Framework
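
A toy decoding heuristic, not the paper's algorithm: since a homographic pun hinges on a word being read under two senses at once, one can prefer continuation words that are probable under both sense contexts of the pun word. The sense-conditioned probabilities below are invented numbers for the word "bank".

```python
import math

# Hypothetical word probabilities conditioned on two senses of "bank".
p_given_sense = {
    "finance": {"money": 0.30, "loan": 0.25, "river": 0.01,
                "fishing": 0.01, "account": 0.20, "flooded": 0.10},
    "river":   {"money": 0.01, "loan": 0.01, "river": 0.30,
                "fishing": 0.25, "account": 0.01, "flooded": 0.15},
}

def pun_score(word):
    """Geometric mean of the word's probability under both senses: high only
    when the word is compatible with both readings."""
    return math.sqrt(p_given_sense["finance"][word] * p_given_sense["river"][word])

candidates = ["money", "loan", "river", "fishing", "account", "flooded"]
print(sorted(candidates, key=pun_score, reverse=True))  # "flooded" ranks first
```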

Proceedings of the Second Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature

Title Proceedings of the Second Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature
Authors
Abstract
Tasks
Published 2018-08-01
URL https://www.aclweb.org/anthology/W18-4500/
PDF https://www.aclweb.org/anthology/W18-4500
PWC https://paperswithcode.com/paper/proceedings-of-the-second-joint-sighum
Repo
Framework

Igbo Diacritic Restoration using Embedding Models

Title Igbo Diacritic Restoration using Embedding Models
Authors Ignatius Ezeani, Mark Hepple, Ikechukwu Onyenwe, Enemouh Chioma
Abstract Igbo is a low-resource language spoken by approximately 30 million people worldwide. It is the native language of the Igbo people of south-eastern Nigeria. In the Igbo language, diacritics - orthographic and tonal - play a huge role in distinguishing the meaning and pronunciation of words. Omitting diacritics in texts often leads to lexical ambiguity. Diacritic restoration is a pre-processing task that replaces missing diacritics on words from which they have been removed. In this work, we applied embedding models to the diacritic restoration task and compared their performance to that of n-gram models. Although word embedding models have been successfully applied to various NLP tasks, they have not, to our knowledge, been used for diacritic restoration. Two classes of word embedding models were used: those projected from the English embedding space, and those trained on an Igbo Bible corpus (≈ 1m). Our best result, 82.49%, is an improvement on the baseline n-gram models.
Tasks Machine Translation, Word Embeddings
Published 2018-06-01
URL https://www.aclweb.org/anthology/N18-4008/
PDF https://www.aclweb.org/anthology/N18-4008
PWC https://paperswithcode.com/paper/igbo-diacritic-restoration-using-embedding
Repo
Framework
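
A minimal sketch of the embedding-based restoration idea: score each diacritic variant of an ambiguous form by the similarity of its vector to the averaged context vector, and keep the best-scoring variant. The random stand-in embeddings and the `VARIANTS` table are assumptions; a real system would use the trained Igbo (or projected English) embeddings the abstract describes.

```python
import numpy as np

def vec(word):
    """Toy deterministic embedding; stands in for a trained word vector."""
    return np.random.default_rng(abs(hash(word)) % 2**32).normal(size=16)

VARIANTS = {"akwa": ["ákwá", "àkwà", "ákwà", "àkwá"]}  # cry, bed, cloth, egg

def restore(tokens, i, window=2):
    """Pick the diacritic variant of tokens[i] closest to its context."""
    context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
    ctx = np.mean([vec(w) for w in context], axis=0)
    def cos(v):
        return float(v @ ctx / (np.linalg.norm(v) * np.linalg.norm(ctx)))
    return max(VARIANTS[tokens[i]], key=lambda variant: cos(vec(variant)))

print(restore(["o", "na", "akwa", "n'ulo"], 2))
```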

Coding Kendall’s Shape Trajectories for 3D Action Recognition

Title Coding Kendall’s Shape Trajectories for 3D Action Recognition
Authors Amor Ben Tanfous, Hassen Drira, Boulbaba Ben Amor
Abstract Suitable shape representations, as well as their temporal evolution, termed trajectories, often lie on non-linear manifolds. This puts an additional constraint (i.e., non-linearity) on using conventional machine learning techniques for the purpose of classification, event detection, prediction, etc. This paper adapts the well-known Sparse Coding and Dictionary Learning framework to Kendall's shape space and illustrates effective coding of 3D skeletal sequences for action recognition. Grounded in the Riemannian geometry of the shape space, an intrinsic sparse coding and dictionary learning formulation is proposed for static skeletal shapes to overcome the inherent non-linearity of the manifold. As a main result, initial trajectories give rise to sparse code functions with suitable computational properties, including sparsity and a vector space representation. To achieve action recognition, two different classification schemes were adopted: a bi-directional LSTM is applied directly to the sparse code functions, while a linear SVM is applied after representing the sparse code functions with a Fourier temporal pyramid. Experiments conducted on three publicly available datasets show the superiority of the proposed approach compared to existing Riemannian representations and its competitiveness with respect to other recently proposed approaches. While maintaining the invariance benefits of the Kendall shape representation, our approach not only overcomes the problem of non-linearity but also yields discriminative sparse code functions.
Tasks 3D Human Action Recognition, Dictionary Learning, Temporal Action Localization
Published 2018-06-01
URL http://openaccess.thecvf.com/content_cvpr_2018/html/Tanfous_Coding_Kendalls_Shape_CVPR_2018_paper.html
PDF http://openaccess.thecvf.com/content_cvpr_2018/papers/Tanfous_Coding_Kendalls_Shape_CVPR_2018_paper.pdf
PWC https://paperswithcode.com/paper/coding-kendalls-shape-trajectories-for-3d
Repo
Framework
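
An extrinsic simplification of the pipeline above, for intuition only: the paper codes shapes intrinsically on Kendall's manifold with Riemannian tools, whereas this sketch merely normalizes each skeleton to its pre-shape (removing translation and scale) and sparse-codes it in the ambient space with a few ISTA iterations against a random dictionary.

```python
import numpy as np

def preshape(skel):
    """skel: (n_joints, 3). Center, scale-normalize, then flatten."""
    x = skel - skel.mean(axis=0)
    return (x / np.linalg.norm(x)).ravel()

def sparse_code(x, D, lam=0.05, n_iter=200):
    """ISTA for min_a 0.5*||x - D a||^2 + lam*||a||_1."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2          # 1 / largest singular value^2
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = a + step * D.T @ (x - D @ a)            # gradient step
        a = np.sign(g) * np.maximum(np.abs(g) - lam * step, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
n_joints, n_atoms = 20, 32
D = rng.normal(size=(n_joints * 3, n_atoms))
D /= np.linalg.norm(D, axis=0)                      # unit-norm dictionary atoms

frame = rng.normal(size=(n_joints, 3))              # one skeleton frame
code = sparse_code(preshape(frame), D)
print("non-zero coefficients:", int((np.abs(code) > 1e-8).sum()), "of", n_atoms)
```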

Investigating Productive and Receptive Knowledge: A Profile for Second Language Learning

Title Investigating Productive and Receptive Knowledge: A Profile for Second Language Learning
Authors Leonardo Zilio, Rodrigo Wilkens, Cédrick Fairon
Abstract The literature frequently addresses the differences in receptive and productive vocabulary, but grammar is often left unacknowledged in second language acquisition studies. In this paper, we used two corpora to investigate the divergences in the behavior of pedagogically relevant grammatical structures in reception and production texts. We further improved the divergence scores observed in this investigation by setting a polarity to them that indicates whether there is overuse or underuse of a grammatical structure by language learners. This led to the compilation of a language profile that was later combined with vocabulary and readability features for classifying reception and production texts in three classes: beginner, intermediate, and advanced. The results of the automatic classification task in both production (0.872 of F-measure) and reception (0.942 of F-measure) were comparable to the current state of the art. We also attempted to automatically attribute a score to texts produced by learners, and the correlation results were encouraging, but there is still a good amount of room for improvement in this task. The developed language profile will serve as input for a system that helps language learners to activate more of their passive knowledge in writing texts.
Tasks Language Acquisition
Published 2018-08-01
URL https://www.aclweb.org/anthology/C18-1294/
PDF https://www.aclweb.org/anthology/C18-1294
PWC https://paperswithcode.com/paper/investigating-productive-and-receptive
Repo
Framework
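
A small sketch of the "divergence score with polarity" idea from the entry above: a per-structure score whose magnitude reflects how differently the structure behaves in reception vs. production texts, and whose sign marks overuse or underuse by learners. The counts and the log-ratio form are illustrative assumptions, not the paper's actual measure.

```python
import math

reception_counts  = {"present_simple": 900, "passive": 120, "relative_clause": 200}
production_counts = {"present_simple": 950, "passive": 30,  "relative_clause": 90}

def signed_divergence(structure):
    """Log-ratio of relative frequencies: >0 means learners overuse the
    structure in production, <0 means they underuse it."""
    r = reception_counts[structure] / sum(reception_counts.values())
    p = production_counts[structure] / sum(production_counts.values())
    return math.log(p / r)

for s in reception_counts:
    print(f"{s:16s} {signed_divergence(s):+.2f}")
```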

Thinking of Going Neural? Factors Honda R&D Americas is Considering before Making the Switch

Title Thinking of Going Neural? Factors Honda R&D Americas is Considering before Making the Switch
Authors Phil Soldini
Abstract
Tasks
Published 2018-03-01
URL https://www.aclweb.org/anthology/W18-1909/
PDF https://www.aclweb.org/anthology/W18-1909
PWC https://paperswithcode.com/paper/thinking-of-going-neural-factors-honda-r-and
Repo
Framework

Proceedings of the Fifth Workshop on Computational Linguistics and Clinical Psychology: From Keyboard to Clinic

Title Proceedings of the Fifth Workshop on Computational Linguistics and Clinical Psychology: From Keyboard to Clinic
Authors
Abstract
Tasks
Published 2018-06-01
URL https://www.aclweb.org/anthology/W18-0600/
PDF https://www.aclweb.org/anthology/W18-0600
PWC https://paperswithcode.com/paper/proceedings-of-the-fifth-workshop-on-2
Repo
Framework

Sensing and Learning Human Annotators Engaged in Narrative Sensemaking

Title Sensing and Learning Human Annotators Engaged in Narrative Sensemaking
Authors McKenna Tornblad, Luke Lapresi, Christopher Homan, Raymond Ptucha, Cecilia Ovesdotter Alm
Abstract While labor issues and quality assurance in crowdwork are increasingly studied, how annotators make sense of texts and how they are personally impacted by doing so are not. We study these questions via a narrative-sorting annotation task, where carefully selected (by sequentiality, topic, emotional content, and length) collections of tweets serve as examples of everyday storytelling. As readers process these narratives, we measure their facial expressions, galvanic skin response, and self-reported reactions. From the perspective of annotator well-being, a reassuring outcome was that the sorting task did not cause a measurable stress response, though readers did react to humor. In terms of sensemaking, readers were more confident when sorting sequential, target-topical, and highly emotional tweets. As crowdsourcing becomes more common, this research sheds light on the perceptive capabilities and emotional impact of human readers.
Tasks
Published 2018-06-01
URL https://www.aclweb.org/anthology/N18-4019/
PDF https://www.aclweb.org/anthology/N18-4019
PWC https://paperswithcode.com/paper/sensing-and-learning-human-annotators-engaged
Repo
Framework

A Framework for Representing Language Acquisition in a Population Setting

Title A Framework for Representing Language Acquisition in a Population Setting
Authors Jordan Kodner, Christopher Cerezo Falco
Abstract Language variation and change are driven both by individuals' internal cognitive processes and by the social structures through which language propagates. A wide range of computational frameworks have been proposed to connect these drivers. We compare the strengths and weaknesses of existing approaches and propose a new analytic framework which combines previous network models' ability to capture realistic social structure with more practical and elegant computational properties. The framework privileges the process of language acquisition and embeds learners in a social network, but is modular so that population structure can be combined with different acquisition models. We demonstrate two applications of the framework: a test of practical concerns that arise when modeling acquisition in a population setting, and an application of the framework to recent work on phonological mergers in progress.
Tasks Language Acquisition
Published 2018-07-01
URL https://www.aclweb.org/anthology/P18-1106/
PDF https://www.aclweb.org/anthology/P18-1106
PWC https://paperswithcode.com/paper/a-framework-for-representing-language
Repo
Framework
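
A toy sketch of the modular setup the abstract describes: learners sit on a social network and acquire their grammar from their neighbors' usage through a pluggable acquisition function. The network construction, the single binary variant, and the probability-matching learner below are all illustrative placeholders, not the paper's models.

```python
import random

random.seed(0)
N, GENERATIONS = 60, 15

# Ring lattice with a few random long-range ties as a toy social network.
neighbors = {i: {(i - 1) % N, (i + 1) % N} for i in range(N)}
for _ in range(20):
    a, b = random.sample(range(N), 2)
    neighbors[a].add(b)
    neighbors[b].add(a)

def learn(neighbor_rates):
    """Pluggable acquisition model: probability matching with a slight bias
    toward the majority variant in the learner's input."""
    p = sum(neighbor_rates) / len(neighbor_rates)
    return min(1.0, max(0.0, p + 0.1 * (p - 0.5)))

# Seed: a minority of speakers use the innovative variant categorically.
rate = [1.0 if i < 10 else 0.0 for i in range(N)]
for g in range(GENERATIONS):
    rate = [learn([rate[j] for j in neighbors[i]]) for i in range(N)]
    print(f"gen {g:2d}: mean usage {sum(rate) / N:.2f}")
```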

Lexical Profiling of Environmental Corpora

Title Lexical Profiling of Environmental Corpora
Authors Patrick Drouin, Marie-Claude L'Homme, Benoît Robichaud
Abstract
Tasks
Published 2018-05-01
URL https://www.aclweb.org/anthology/L18-1539/
PDF https://www.aclweb.org/anthology/L18-1539
PWC https://paperswithcode.com/paper/lexical-profiling-of-environmental-corpora
Repo
Framework