January 25, 2020

2657 words 13 mins read

Paper Group NANR 65

SINAI at SemEval-2019 Task 5: Ensemble learning to detect hate speech against inmigrants and women in English and Spanish tweets

Title SINAI at SemEval-2019 Task 5: Ensemble learning to detect hate speech against inmigrants and women in English and Spanish tweets
Authors Flor Miriam Plaza-del-Arco, M. Dolores Molina-González, Maite Martin, L. Alfonso Ureña-López
Abstract Misogyny and xenophobia are some of the most important social problems. With the increase in the use of social media, this feeling of hatred towards women and immigrants can be more easily expressed, and it can therefore cause harmful effects on social media users. For this reason, it is important to develop systems capable of detecting hateful comments automatically. In this paper, we describe our system to analyze hate speech in English and Spanish tweets against immigrants and women as part of our participation in SemEval-2019 Task 5: hatEval. Our main contribution is the integration of three individual prediction algorithms in a model based on a Vote ensemble classifier.
Tasks
Published 2019-06-01
URL https://www.aclweb.org/anthology/S19-2084/
PDF https://www.aclweb.org/anthology/S19-2084
PWC https://paperswithcode.com/paper/sinai-at-semeval-2019-task-5-ensemble
Repo
Framework
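
The abstract describes combining three individual prediction algorithms through a Vote ensemble. Below is a minimal sketch of that general idea, assuming TF-IDF features and three standard scikit-learn learners; the specific features, classifiers, and data are illustrative assumptions, not the SINAI system's configuration.

```python
# Sketch of a majority-vote ensemble over TF-IDF features (assumed setup).
from sklearn.ensemble import VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for the hatEval tweets (hypothetical examples).
tweets = ["you are always welcome here",
          "go back to where you came from",
          "great to see new neighbours",
          "they should all be kicked out"]
labels = [0, 1, 0, 1]   # 0 = not hateful, 1 = hateful

ensemble = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    VotingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("svm", LinearSVC()),
                    ("nb", MultinomialNB())],
        voting="hard",   # each model votes; the majority label wins
    ),
)
ensemble.fit(tweets, labels)
print(ensemble.predict(["no hate in this tweet"]))
```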

Difficulty-aware Distractor Generation for Gap-Fill Items

Title Difficulty-aware Distractor Generation for Gap-Fill Items
Authors Chak Yan Yeung, John Lee, Benjamin Tsou
Abstract
Tasks
Published 2019-04-01
URL https://www.aclweb.org/anthology/U19-1021/
PDF https://www.aclweb.org/anthology/U19-1021
PWC https://paperswithcode.com/paper/difficulty-aware-distractor-generation-for
Repo
Framework

Using Rhetorical Structure Theory to Assess Discourse Coherence for Non-native Spontaneous Speech

Title Using Rhetorical Structure Theory to Assess Discourse Coherence for Non-native Spontaneous Speech
Authors Xinhao Wang, Binod Gyawali, James V. Bruno, Hillary R. Molloy, Keelan Evanini, Klaus Zechner
Abstract This study aims to model the discourse structure of spontaneous spoken responses within the context of an assessment of English speaking proficiency for non-native speakers. Rhetorical Structure Theory (RST) has been commonly used in the analysis of discourse organization of written texts; however, limited research has been conducted to date on RST annotation and parsing of spoken language, in particular, non-native spontaneous speech. Due to the fact that the measurement of discourse coherence is typically a key metric in human scoring rubrics for assessments of spoken language, we conducted research to obtain RST annotations on non-native spoken responses from a standardized assessment of academic English proficiency. Subsequently, automatic parsers were trained on these annotations to process non-native spontaneous speech. Finally, a set of features were extracted from automatically generated RST trees to evaluate the discourse structure of non-native spontaneous speech, which were then employed to further improve the validity of an automated speech scoring system.
Tasks
Published 2019-06-01
URL https://www.aclweb.org/anthology/W19-2719/
PDF https://www.aclweb.org/anthology/W19-2719
PWC https://paperswithcode.com/paper/using-rhetorical-structure-theory-to-assess
Repo
Framework
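
The paper extracts features from automatically generated RST trees to score discourse coherence. As a hedged illustration only, the sketch below computes a few simple statistics (relation count, tree depth, relation-type counts) over a toy RST-style tree; the tree encoding and the features are assumptions, not the authors' feature set.

```python
# Toy RST tree: nested tuples (relation, [children]) with strings as leaf EDUs.
def rst_features(node, depth=0):
    """Return (num_relations, max_depth, relation_counts) for a toy RST tree."""
    if isinstance(node, str):          # leaf elementary discourse unit
        return 0, depth, {}
    relation, children = node
    total, max_d, counts = 1, depth, {relation: 1}
    for child in children:
        n, d, c = rst_features(child, depth + 1)
        total += n
        max_d = max(max_d, d)
        for rel, k in c.items():
            counts[rel] = counts.get(rel, 0) + k
    return total, max_d, counts

toy_tree = ("Elaboration", ["I moved to a new city.",
                            ("Cause", ["I changed jobs.",
                                       "The commute was too long."])])
print(rst_features(toy_tree))  # (2, 2, {'Elaboration': 1, 'Cause': 1})
```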

A Distributional Model of Affordances in Semantic Type Coercion

Title A Distributional Model of Affordances in Semantic Type Coercion
Authors Stephen McGregor, Elisabetta Jezek
Abstract We explore a novel application for interpreting semantic type coercions, motivated by insight into the role that perceptual affordances play in the selection of the semantic roles of artefactual nouns which are observed as arguments for verbs which would stereotypically select for objects of a different type. In order to simulate affordances, which we take to be direct perceptions of context-specific opportunities for action, we perform a distributional analysis of dependency relationships between target words and their modifiers and adjuncts. We use these relationships as the basis for generating on-line transformations which project semantic subspaces in which the interpretations of coercive compositions are expected to emerge as salient word-vectors. We offer some preliminary examples of how this model operates on a dataset of sentences involving coercive interactions between verbs and objects specifically designed to evaluate this work.
Tasks
Published 2019-05-01
URL https://www.aclweb.org/anthology/W19-0501/
PDF https://www.aclweb.org/anthology/W19-0501
PWC https://paperswithcode.com/paper/a-distributional-model-of-affordances-in
Repo
Framework
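
The abstract speaks of projecting semantic subspaces in which coercive interpretations emerge as salient vectors. The sketch below is only a loose, assumption-laden illustration of that idea: it builds a subspace from a few context vectors and ranks candidate interpretations by the length of their projection onto it. The vocabulary, the random embeddings, and the salience measure are all hypothetical.

```python
# Rank candidate interpretations of a coercion ("begin the book" -> read,
# write, ...) by projection onto a subspace spanned by context vectors.
import numpy as np

rng = np.random.default_rng(0)
dim = 50
# Hypothetical embeddings; in practice these would come from a trained model.
vocab = {w: rng.normal(size=dim) for w in
         ["read", "write", "eat", "novel", "page", "chapter"]}

context = np.stack([vocab["novel"], vocab["page"], vocab["chapter"]])
basis, _ = np.linalg.qr(context.T)        # orthonormal basis of the subspace

def subspace_salience(word):
    v = vocab[word] / np.linalg.norm(vocab[word])
    return np.linalg.norm(basis.T @ v)    # length of the projection

for cand in ["read", "write", "eat"]:
    print(cand, round(subspace_salience(cand), 3))
```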

Local Image-to-Image Translation via Pixel-wise Highway Adaptive Instance Normalization

Title Local Image-to-Image Translation via Pixel-wise Highway Adaptive Instance Normalization
Authors Wonwoong Cho, Seunghwan Choi, Junwoo Park, David Keetae Park, Tao Qin, Jaegul Choo
Abstract Recently, image-to-image translation has seen significant success. Among many approaches, image translation based on an exemplar image, which contains the target style information, has been popular, owing to its capability to handle multimodality as well as its suitability for practical use. However, most of the existing methods extract the style information from an entire exemplar and apply it to the entire input image, which introduces excessive image translation in irrelevant image regions. In response, this paper proposes a novel approach that jointly extracts the local masks of the input image and the exemplar as the targeted regions to be involved in image translation. In particular, the main novelty of our model lies in (1) co-segmentation networks for local mask generation and (2) the local mask-based highway adaptive instance normalization technique. We demonstrate quantitative and qualitative evaluation results to show the advantages of our proposed approach. Finally, the code is available at https://github.com/AnonymousIclrAuthor/Highway-Adaptive-Instance-Normalization
Tasks Image-to-Image Translation
Published 2019-05-01
URL https://openreview.net/forum?id=HJgTHnActQ
PDF https://openreview.net/pdf?id=HJgTHnActQ
PWC https://paperswithcode.com/paper/local-image-to-image-translation-via-pixel
Repo
Framework
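
To make the "local mask-based highway adaptive instance normalization" idea concrete, here is a minimal sketch: standard AdaIN (matching per-channel feature statistics to an exemplar) applied only where a soft pixel-wise mask is active, with the rest of the feature map passed through unchanged. This is an assumption about the general mechanism, not the authors' implementation (see their repository above for that).

```python
# Masked "highway" AdaIN sketch on NCHW feature maps.
import torch

def adain(content, style, eps=1e-5):
    """Match per-channel mean/std of `content` to those of `style`."""
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    return (content - c_mean) / c_std * s_std + s_mean

def masked_highway_adain(content, style, mask):
    """Apply AdaIN only where `mask` (N,1,H,W in [0,1]) is active."""
    return mask * adain(content, style) + (1.0 - mask) * content

x = torch.randn(1, 64, 32, 32)   # content feature map
s = torch.randn(1, 64, 32, 32)   # exemplar (style) feature map
m = torch.rand(1, 1, 32, 32)     # soft local mask, e.g. from co-segmentation
print(masked_highway_adain(x, s, m).shape)   # torch.Size([1, 64, 32, 32])
```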

Entity Decisions in Neural Language Modelling: Approaches and Problems

Title Entity Decisions in Neural Language Modelling: Approaches and Problems
Authors Jenny Kunz, Christian Hardmeier
Abstract We explore different approaches to explicit entity modelling in language models (LM). We independently replicate two existing models in a controlled setup, introduce a simplified variant of one of the models and analyze their performance in direct comparison. Our results suggest that today's models are limited as several stochastic variables make learning difficult. We show that the most challenging point in the systems is the decision if the next token is an entity token. The low precision and recall for this variable will lead to severe cascading errors. Our own simplified approach dispenses with the need for latent variables and improves the performance in the entity yes/no decision. A standard well-tuned baseline RNN-LM with a larger number of hidden units outperforms all entity-enabled LMs in terms of perplexity.
Tasks Language Modelling
Published 2019-06-01
URL https://www.aclweb.org/anthology/W19-2803/
PDF https://www.aclweb.org/anthology/W19-2803
PWC https://paperswithcode.com/paper/entity-decisions-in-neural-language-modelling
Repo
Framework
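
The central difficulty identified in the abstract is the "is the next token an entity?" decision. The sketch below shows one generic way such a decision can sit next to a standard RNN-LM head; the architecture and sizes are assumptions for illustration, not the replicated models.

```python
# RNN language model with an auxiliary entity yes/no head (assumed design).
import torch
import torch.nn as nn

class EntityAwareLM(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.next_token = nn.Linear(hidden, vocab_size)  # standard LM head
        self.is_entity = nn.Linear(hidden, 1)            # entity yes/no head

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return self.next_token(h), torch.sigmoid(self.is_entity(h))

model = EntityAwareLM(vocab_size=1000)
logits, p_entity = model(torch.randint(0, 1000, (2, 16)))
print(logits.shape, p_entity.shape)  # (2, 16, 1000) (2, 16, 1)
```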

Simulating Spanish-English Code-Switching: El Modelo Está Generating Code-Switches

Title Simulating Spanish-English Code-Switching: El Modelo Está Generating Code-Switches
Authors Chara Tsoukala, Stefan L. Frank, Antal van den Bosch, Jorge Valdés Kroff, Mirjam Broersma
Abstract Multilingual speakers are able to switch from one language to the other ("code-switch") between or within sentences. Because the underlying cognitive mechanisms are not well understood, in this study we use computational cognitive modeling to shed light on the process of code-switching. We employed the Bilingual Dual-path model, a Recurrent Neural Network of bilingual sentence production (Tsoukala et al., 2017), and simulated sentence production in simultaneous Spanish-English bilinguals. Our first goal was to investigate whether the model would code-switch without being exposed to code-switched training input. The model indeed produced code-switches even without any exposure to such input and the patterns of code-switches are in line with earlier linguistic work (Poplack, 1980). The second goal of this study was to investigate an auxiliary phrase asymmetry that exists in Spanish-English code-switched production. Using this cognitive model, we examined a possible cause for this asymmetry. To our knowledge, this is the first computational cognitive model that aims to simulate code-switched sentence production.
Tasks
Published 2019-06-01
URL https://www.aclweb.org/anthology/W19-2903/
PDF https://www.aclweb.org/anthology/W19-2903
PWC https://paperswithcode.com/paper/simulating-spanish-english-code-switching-el
Repo
Framework

KE-GAN: Knowledge Embedded Generative Adversarial Networks for Semi-Supervised Scene Parsing

Title KE-GAN: Knowledge Embedded Generative Adversarial Networks for Semi-Supervised Scene Parsing
Authors Mengshi Qi, Yunhong Wang, Jie Qin, Annan Li
Abstract In recent years, scene parsing has captured increasing attention in computer vision. Previous works have demonstrated promising performance in this task. However, they mainly utilize holistic features, whilst neglecting the rich semantic knowledge and inter-object relationships in the scene. In addition, these methods usually require a large number of pixel-level annotations, which is too expensive in practice. In this paper, we propose novel Knowledge Embedded Generative Adversarial Networks, dubbed KE-GAN, to tackle this challenging problem in a semi-supervised fashion. KE-GAN captures semantic consistencies of different categories by devising a Knowledge Graph from the large-scale text corpus. In addition to readily-available unlabeled data, we generate synthetic images to unveil rich structural information underlying the images. Moreover, a pyramid architecture is incorporated into the discriminator to acquire multi-scale contextual information for better parsing results. Extensive experimental results on four standard benchmarks demonstrate that KE-GAN is capable of improving semantic consistencies and learning better representations for scene parsing, resulting in state-of-the-art performance.
Tasks Scene Parsing
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Qi_KE-GAN_Knowledge_Embedded_Generative_Adversarial_Networks_for_Semi-Supervised_Scene_Parsing_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Qi_KE-GAN_Knowledge_Embedded_Generative_Adversarial_Networks_for_Semi-Supervised_Scene_Parsing_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/ke-gan-knowledge-embedded-generative
Repo
Framework
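
The abstract mentions a pyramid architecture in the discriminator for gathering multi-scale context. As a hedged illustration of that kind of component (not the paper's discriminator), here is a small pyramid-pooling block that pools a feature map at several scales and concatenates the upsampled results; channel sizes and scales are arbitrary choices.

```python
# Generic pyramid-pooling block for multi-scale context (assumed design).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    def __init__(self, channels, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.convs = nn.ModuleList(
            [nn.Conv2d(channels, channels // len(scales), kernel_size=1)
             for _ in scales]
        )

    def forward(self, x):
        h, w = x.shape[2:]
        pooled = [
            F.interpolate(conv(F.adaptive_avg_pool2d(x, s)), size=(h, w),
                          mode="bilinear", align_corners=False)
            for conv, s in zip(self.convs, self.scales)
        ]
        return torch.cat([x] + pooled, dim=1)  # original + multi-scale context

feat = torch.randn(1, 96, 32, 32)
print(PyramidPooling(96)(feat).shape)   # torch.Size([1, 192, 32, 32])
```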

Modeling Long-Distance Cue Integration in Spoken Word Recognition

Title Modeling Long-Distance Cue Integration in Spoken Word Recognition
Authors Wednesday Bushong, T. Florian Jaeger
Abstract Cues to linguistic categories are distributed across the speech signal. Optimal categorization thus requires that listeners maintain gradient representations of incoming input in order to integrate that information with later cues. There is now evidence that listeners can and do integrate cues that occur far apart in time. Computational models of this integration have however been lacking. We take a first step at addressing this gap by mathematically formalizing four models of how listeners may maintain and use cue information during spoken language understanding and test them on two perception experiments. In one experiment, we find support for rational integration of cues at long distances. In a second, more memory and attention-taxing experiment, we find evidence in favor of a switching model that avoids maintaining detailed representations of cues in memory. These results are a first step in understanding what kinds of mechanisms listeners use for cue integration under different memory and attentional constraints.
Tasks Spoken Language Understanding
Published 2019-06-01
URL https://www.aclweb.org/anthology/W19-2907/
PDF https://www.aclweb.org/anthology/W19-2907
PWC https://paperswithcode.com/paper/modeling-long-distance-cue-integration-in
Repo
Framework
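
The contrast drawn in the abstract between rational integration and a switching model can be made concrete with a small worked example. The sketch below is an assumed formalization, not the paper's exact models: integration sums the log-odds contributed by each cue, while switching simply keeps the most recent cue.

```python
# Rational integration vs. switching over two cues to a binary category.
import math

def log_odds(p):
    return math.log(p / (1 - p))

def integrate(p_cue1, p_cue2):
    """Rational integration: add the evidence from both cues."""
    z = log_odds(p_cue1) + log_odds(p_cue2)
    return 1 / (1 + math.exp(-z))

def switch(p_cue1, p_cue2):
    """Switching model: the later cue overwrites the earlier one."""
    return p_cue2

# e.g. an early acoustic cue weakly favors one category, a later context cue strongly does
print(integrate(0.6, 0.9))  # ~0.93: both cues contribute
print(switch(0.6, 0.9))     # 0.9: only the later cue matters
```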

Which aspects of discourse relations are hard to learn? Primitive decomposition for discourse relation classification

Title Which aspects of discourse relations are hard to learn? Primitive decomposition for discourse relation classification
Authors Charlotte Roze, Chloé Braud, Philippe Muller
Abstract Discourse relation classification has proven to be a hard task, with rather low performance on several corpora that notably differ on the relation set they use. We propose to decompose the task into smaller, mostly binary tasks corresponding to various primitive concepts encoded into the discourse relation definitions. More precisely, we translate the discourse relations into a set of values for attributes based on distinctions used in the mappings between discourse frameworks proposed by Sanders et al. (2018). This arguably allows for a more robust representation of discourse relations, and enables us to address usually ignored aspects of discourse relation prediction, namely multiple labels and underspecified annotations. We show experimentally which of the conceptual primitives are harder to learn from the Penn Discourse Treebank English corpus, and propose a correspondence to predict the original labels, with preliminary empirical comparisons with a direct model.
Tasks Relation Classification
Published 2019-09-01
URL https://www.aclweb.org/anthology/W19-5950/
PDF https://www.aclweb.org/anthology/W19-5950
PWC https://paperswithcode.com/paper/which-aspects-of-discourse-relations-are-hard
Repo
Framework
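
The decomposition idea (relations as bundles of primitive attribute values, with a mapping back to the original labels) can be sketched as follows. The attributes and relation-to-attribute table here are toy values chosen for illustration, not the mapping derived from Sanders et al. (2018).

```python
# Decompose relation labels into primitives and recover a label by best match.
PRIMITIVES = ["causal", "polarity_negative", "temporal"]

RELATION_ATTRIBUTES = {          # toy decomposition, for illustration only
    "Cause":      {"causal": 1, "polarity_negative": 0, "temporal": 0},
    "Contrast":   {"causal": 0, "polarity_negative": 1, "temporal": 0},
    "Succession": {"causal": 0, "polarity_negative": 0, "temporal": 1},
}

def recover_relation(predicted):
    """Pick the relation whose attribute vector agrees most with the prediction."""
    def agreement(rel):
        attrs = RELATION_ATTRIBUTES[rel]
        return sum(attrs[p] == predicted[p] for p in PRIMITIVES)
    return max(RELATION_ATTRIBUTES, key=agreement)

# Suppose three binary classifiers predicted these primitive values:
print(recover_relation({"causal": 1, "polarity_negative": 0, "temporal": 1}))
# ties are broken by dictionary order; here "Cause" and "Succession" tie
```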

PRADO: Projection Attention Networks for Document Classification On-Device

Title PRADO: Projection Attention Networks for Document Classification On-Device
Authors Prabhu Kaliamoorthi, Sujith Ravi, Zornitsa Kozareva
Abstract Recently, there has been great interest in the development of small and accurate neural networks that run entirely on devices such as mobile phones, smart watches and IoT. This enables user privacy, consistent user experience and low latency. Although a wide range of applications have been targeted, from wake word detection to short text classification, there are still no on-device networks for long text classification. We propose a novel projection attention neural network PRADO that combines trainable projections with attention and convolutions. We evaluate our approach on multiple large document text classification tasks. Our results show the effectiveness of the trainable projection model in finding semantically similar phrases and reaching high performance while maintaining compact size. Using this approach, we train tiny neural networks just 200 Kilobytes in size that improve over prior CNN and LSTM models and achieve near state-of-the-art performance on multiple long document classification tasks. We also apply our model for transfer learning, showing its robustness and its ability to further improve performance in limited data scenarios.
Tasks Document Classification, Text Classification, Transfer Learning
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-1506/
PDF https://www.aclweb.org/anthology/D19-1506
PWC https://paperswithcode.com/paper/prado-projection-attention-networks-for
Repo
Framework
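
The key to PRADO's small footprint is replacing a large embedding table with trainable projections of hashed token fingerprints. The sketch below illustrates that general idea under stated assumptions (hash function, fingerprint width, projection size are all arbitrary choices here); it is not the PRADO implementation.

```python
# Hashed token fingerprints followed by a small trainable projection.
import hashlib
import torch
import torch.nn as nn

def token_fingerprint(token, bits=64):
    """Deterministic +/-1 fingerprint derived from a hash of the token string."""
    raw = int.from_bytes(hashlib.md5(token.encode("utf-8")).digest(), "big")
    return torch.tensor([1.0 if (raw >> i) & 1 else -1.0 for i in range(bits)])

class ProjectedEncoder(nn.Module):
    def __init__(self, bits=64, dim=32):
        super().__init__()
        self.proj = nn.Linear(bits, dim)   # the only trainable token parameters

    def forward(self, tokens):
        x = torch.stack([token_fingerprint(t) for t in tokens])
        return torch.relu(self.proj(x))    # (seq_len, dim) word features

enc = ProjectedEncoder()
print(enc(["tiny", "models", "on", "device"]).shape)  # torch.Size([4, 32])
```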

The Development of Abstract Concepts in Children’s Early Lexical Networks

Title The Development of Abstract Concepts in Children’s Early Lexical Networks
Authors Abdellah Fourtassi, Isaac Scheinfeld, Michael Frank
Abstract How do children learn abstract concepts such as animal vs. artifact? Previous research has suggested that such concepts can partly be derived using cues from the language children hear around them. Following this suggestion, we propose a model where we represent the children's developing lexicon as an evolving network. The nodes of this network are based on vocabulary knowledge as reported by parents, and the edges between pairs of nodes are based on the probability of their co-occurrence in a corpus of child-directed speech. We found that several abstract categories can be identified as the dense regions in such networks. In addition, our simulations suggest that these categories develop simultaneously, rather than sequentially, thanks to the children's word learning trajectory which favors the exploration of the global conceptual space.
Tasks
Published 2019-06-01
URL https://www.aclweb.org/anthology/W19-2914/
PDF https://www.aclweb.org/anthology/W19-2914
PWC https://paperswithcode.com/paper/the-development-of-abstract-concepts-in
Repo
Framework
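
The model described above builds a word network from co-occurrence in child-directed speech and finds abstract categories as dense regions. A minimal sketch of that pipeline, with toy co-occurrence counts and a standard community detector standing in for the paper's specific method:

```python
# Build a co-occurrence network and look for dense regions (communities).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical co-occurrence counts from child-directed speech.
cooccurrence = {
    ("dog", "cat"): 12, ("dog", "horse"): 9, ("cat", "horse"): 7,
    ("cup", "spoon"): 11, ("cup", "plate"): 8, ("spoon", "plate"): 10,
    ("dog", "cup"): 1,
}

G = nx.Graph()
for (w1, w2), count in cooccurrence.items():
    G.add_edge(w1, w2, weight=count)

for community in greedy_modularity_communities(G, weight="weight"):
    print(sorted(community))   # roughly: animals vs. artifacts
```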

Using Grounded Word Representations to Study Theories of Lexical Concepts

Title Using Grounded Word Representations to Study Theories of Lexical Concepts
Authors Dylan Ebert, Ellie Pavlick
Abstract The fields of cognitive science and philosophy have proposed many different theories for how humans represent "concepts". Multiple such theories are compatible with state-of-the-art NLP methods, and could in principle be operationalized using neural networks. We focus on two particularly prominent theories, Classical Theory and Prototype Theory, in the context of visually-grounded lexical representations. We compare when and how the behavior of models based on these theories differs in terms of categorization and entailment tasks. Our preliminary results suggest that Classical-based representations perform better for entailment and Prototype-based representations perform better for categorization. We discuss plans for additional experiments needed to confirm these initial observations.
Tasks
Published 2019-06-01
URL https://www.aclweb.org/anthology/W19-2918/
PDF https://www.aclweb.org/anthology/W19-2918
PWC https://paperswithcode.com/paper/using-grounded-word-representations-to-study
Repo
Framework
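
To make the two theories operational in the sense the abstract describes, one common reading is: Prototype Theory treats membership as graded similarity to a category centroid, while Classical Theory requires all defining features. The sketch below illustrates that reading with hypothetical vectors and features; it is not the paper's models.

```python
# Two toy operationalizations of category membership for "bird".
import numpy as np

# Prototype Theory: compare to the centroid of known category members.
members = np.array([[0.9, 0.8, 0.1], [0.8, 0.9, 0.2], [0.85, 0.7, 0.0]])
prototype = members.mean(axis=0)

def prototype_member(x, threshold=0.95):
    cos = x @ prototype / (np.linalg.norm(x) * np.linalg.norm(prototype))
    return cos >= threshold            # graded, threshold-based membership

# Classical Theory: membership requires all defining features.
def classical_member(features, required=("has_feathers", "lays_eggs")):
    return all(features.get(f, False) for f in required)

penguin_vec = np.array([0.8, 0.75, 0.05])
print(prototype_member(penguin_vec))
print(classical_member({"has_feathers": True, "lays_eggs": True}))
```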

ToNy: Contextual embeddings for accurate multilingual discourse segmentation of full documents

Title ToNy: Contextual embeddings for accurate multilingual discourse segmentation of full documents
Authors Philippe Muller, Chlo{'e} Braud, Mathieu Morey
Abstract Segmentation is the first step in building practical discourse parsers, and is often neglected in discourse parsing studies. The goal is to identify the minimal spans of text to be linked by discourse relations, or to isolate explicit marking of discourse relations. Existing systems on English report F1 scores as high as 95%, but they generally assume gold sentence boundaries and are restricted to English newswire texts annotated within the RST framework. This article presents a generic approach and a system, ToNy, a discourse segmenter developed for the DisRPT shared task where multiple discourse representation schemes, languages and domains are represented. In our experiments, we found that a straightforward sequence prediction architecture with pretrained contextual embeddings is sufficient to reach performance levels comparable to existing systems, when separately trained on each corpus. We report performance between 81% and 96% in F1 score. We also observed that discourse segmentation models only display a moderate generalization capability, even within the same language and discourse representation scheme.
Tasks
Published 2019-06-01
URL https://www.aclweb.org/anthology/W19-2715/
PDF https://www.aclweb.org/anthology/W19-2715
PWC https://paperswithcode.com/paper/tony-contextual-embeddings-for-accurate
Repo
Framework
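
The "straightforward sequence prediction architecture with pretrained contextual embeddings" can be sketched as per-token tagging over contextual vectors. The code below is an assumption-level illustration, not the ToNy system: random tensors stand in for the pretrained embeddings so the example stays self-contained.

```python
# Discourse segmentation as per-token tagging over contextual embeddings.
import torch
import torch.nn as nn

class SegmenterHead(nn.Module):
    def __init__(self, emb_dim=768, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 2)   # "segment begins here" vs. "inside"

    def forward(self, token_embeddings):
        h, _ = self.rnn(token_embeddings)
        return self.out(h)

# In a real system the embeddings come from a pretrained contextual encoder
# (e.g. BERT or ELMo); random tensors keep this sketch runnable on its own.
emb = torch.randn(1, 20, 768)
print(SegmenterHead()(emb).argmax(dim=-1))   # predicted tag per token
```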

Joint Inference on Bilingual Parse Trees for PP-attachment Disambiguation

Title Joint Inference on Bilingual Parse Trees for PP-attachment Disambiguation
Authors Geetanjali Rakshit
Abstract Prepositional Phrase (PP) attachment is a classical problem in NLP for languages like English, which suffer from structural ambiguity. In this work, we solve this problem with the help of another language free from such ambiguities, using the parse tree of the parallel sentence in the other language, and word alignments. We formulate an optimization framework that encourages agreement between the parse trees for two languages, and solve it using a novel Dual Decomposition (DD) based algorithm. Experiments on the English-Hindi language pair show promising improvements over the baseline.
Tasks
Published 2019-08-01
URL https://www.aclweb.org/anthology/papers/W/W19/W19-3615/
PDF https://www.aclweb.org/anthology/W19-3615
PWC https://paperswithcode.com/paper/joint-inference-on-bilingual-parse-trees-for
Repo
Framework
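
To illustrate the Dual Decomposition component mentioned in the abstract, here is a generic DD loop, reduced to a single shared binary decision (0 = noun attachment, 1 = verb attachment) that two subproblems are pushed to agree on via a Lagrange multiplier. The scores and the reduction to one variable are assumptions for illustration, not the paper's algorithm.

```python
# Generic dual decomposition over one shared binary attachment decision.
def solve_subproblem(scores, u):
    """Pick the attachment maximizing local score plus the dual term."""
    return max((0, 1), key=lambda z: scores[z] + u * z)

def dual_decomposition(scores_en, scores_hi, steps=50, rate=0.5):
    u = 0.0
    for t in range(steps):
        z_en = solve_subproblem(scores_en, +u)   # English-side subproblem
        z_hi = solve_subproblem(scores_hi, -u)   # Hindi-side subproblem
        if z_en == z_hi:
            return z_en                          # agreement reached
        u -= rate / (t + 1) * (z_en - z_hi)      # subgradient update
    return z_en                                  # fall back to one side

# Hypothetical scores: the ambiguous English parse slightly prefers noun
# attachment, the unambiguous Hindi parse strongly prefers verb attachment.
print(dual_decomposition({0: 1.2, 1: 1.0}, {0: 0.2, 1: 2.0}))  # 1
```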