Paper Group NANR 33
Revealing and Predicting Online Persuasion Strategy with Elementary Units. Do Language Models Have Common Sense?. UCSYNLP-Lab Machine Translation Systems for WAT 2019. Time-series Generative Adversarial Networks. Label Embedding using Hierarchical Structure of Labels for Twitter Classification. Decompositional Argument Mining: A General Purpose App …
Revealing and Predicting Online Persuasion Strategy with Elementary Units
Title | Revealing and Predicting Online Persuasion Strategy with Elementary Units |
Authors | Gaku Morio, Ryo Egawa, Katsuhide Fujita |
Abstract | In online arguments, identifying how users construct their arguments to persuade others is important for directly understanding persuasive strategies. However, existing research lacks empirical investigation of highly semantic aspects of elementary units (EUs), such as propositions, in persuasive online arguments. This paper therefore presents a pilot study revealing persuasion strategies through EUs. Our contributions are as follows: (1) annotating five types of EUs in a persuasive forum, the so-called ChangeMyView; (2) revealing both intuitive and non-intuitive strategic insights into persuasion by analyzing 4612 annotated EUs; and (3) proposing baseline neural models that identify EU boundaries and types. Our observations imply that EUs definitively characterize online persuasion strategies. |
Tasks | |
Published | 2019-11-01 |
URL | https://www.aclweb.org/anthology/D19-1653/ |
https://www.aclweb.org/anthology/D19-1653 | |
PWC | https://paperswithcode.com/paper/revealing-and-predicting-online-persuasion |
Repo | |
Framework | |
Do Language Models Have Common Sense?
Title | Do Language Models Have Common Sense? |
Authors | Trieu H. Trinh, Quoc V. Le |
Abstract | It has been argued that current machine learning models do not have common sense, and therefore must be hard-coded with prior knowledge (Marcus, 2018). Here we show surprising evidence that language models can already learn to capture certain common sense knowledge. Our key observation is that a language model can compute the probability of any statement, and this probability can be used to evaluate the truthfulness of that statement. On the Winograd Schema Challenge (Levesque et al., 2011), language models are 11% higher in accuracy than previous state-of-the-art supervised methods. Language models can also be fine-tuned for the task of Mining Commonsense Knowledge on ConceptNet to achieve F1 scores of 0.912 and 0.824, outperforming previous best results (Jastrzebski et al., 2018). Further analysis demonstrates that language models can discover unique features of Winograd Schema contexts that decide the correct answers without explicit supervision. |
Tasks | Common Sense Reasoning, Language Modelling |
Published | 2019-05-01 |
URL | https://openreview.net/forum?id=rkgfWh0qKX |
https://openreview.net/pdf?id=rkgfWh0qKX | |
PWC | https://paperswithcode.com/paper/do-language-models-have-common-sense |
Repo | |
Framework | |
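The paper's key observation — that a language model's probability of a full statement can arbitrate between Winograd candidates — can be sketched as follows. This is a minimal illustration: `toy_log_prob` is an assumed stand-in scorer, not the paper's actual language model, which would supply a real sentence log-probability.

```python
import math
from collections import Counter

def choose_candidate(template, candidates, log_prob):
    """Fill the blank with each candidate, score the full sentence with a
    language model's log-probability, and return the highest-scoring one."""
    return max(candidates, key=lambda c: log_prob(template.replace("_", c)))

# Toy stand-in for a language model: scores a sentence by how often its
# words occur in a tiny "corpus" (a real system would use an actual LM).
corpus = Counter("the trophy does not fit in the suitcase because "
                 "the trophy is too big".split())

def toy_log_prob(sentence):
    return sum(math.log(corpus[w] + 1) for w in sentence.lower().split())

best = choose_candidate(
    "The trophy does not fit in the suitcase because the _ is too big.",
    ["trophy", "suitcase"],
    toy_log_prob,
)
```

The scoring function is the only pluggable part: swapping in a genuine LM's log-probability recovers the paper's zero-shot Winograd procedure.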
UCSYNLP-Lab Machine Translation Systems for WAT 2019
Title | UCSYNLP-Lab Machine Translation Systems for WAT 2019 |
Authors | Yimon ShweSin, Win Pa Pa, KhinMar Soe |
Abstract | This paper describes the UCSYNLP-Lab submission to WAT 2019 for the Myanmar-English translation tasks in both directions. We used neural machine translation systems with an attention model, utilizing the UCSY corpus and the ALT corpus. In the NMT systems with attention, we experimented with both word-level and syllable-level segmentation. Notably, the UCSY corpus was cleaned for WAT 2019, so it is not identical to the version used in WAT 2018. Experiments show that the translation systems produce substantial improvements. |
Tasks | Machine Translation |
Published | 2019-11-01 |
URL | https://www.aclweb.org/anthology/D19-5226/ |
https://www.aclweb.org/anthology/D19-5226 | |
PWC | https://paperswithcode.com/paper/ucsynlp-lab-machine-translation-systems-for |
Repo | |
Framework | |
Time-series Generative Adversarial Networks
Title | Time-series Generative Adversarial Networks |
Authors | Jinsung Yoon, Daniel Jarrett, Mihaela van der Schaar |
Abstract | A good generative model for time-series data should preserve temporal dynamics, in the sense that new sequences respect the original relationships between variables across time. Existing methods that bring generative adversarial networks (GANs) into the sequential setting do not adequately attend to the temporal correlations unique to time-series data. At the same time, supervised models for sequence prediction - which allow finer control over network dynamics - are inherently deterministic. We propose a novel framework for generating realistic time-series data that combines the flexibility of the unsupervised paradigm with the control afforded by supervised training. Through a learned embedding space jointly optimized with both supervised and adversarial objectives, we encourage the network to adhere to the dynamics of the training data during sampling. Empirically, we evaluate the ability of our method to generate realistic samples using a variety of real and synthetic time-series datasets. Qualitatively and quantitatively, we find that the proposed framework consistently and significantly outperforms state-of-the-art benchmarks with respect to measures of similarity and predictive ability. |
Tasks | Time Series |
Published | 2019-12-01 |
URL | http://papers.nips.cc/paper/8789-time-series-generative-adversarial-networks |
http://papers.nips.cc/paper/8789-time-series-generative-adversarial-networks.pdf | |
PWC | https://paperswithcode.com/paper/time-series-generative-adversarial-networks |
Repo | |
Framework | |
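The core idea above — pairing the adversarial objective with a stepwise supervised loss computed in a learned embedding space — can be sketched as follows. This is an assumed simplification: the linear `supervisor` and the latent sequence `H` stand in for the paper's learned recurrent networks, and `lam` is an illustrative weighting, not the paper's hyperparameter.

```python
import numpy as np

def stepwise_supervised_loss(H, supervisor):
    """Mean squared error between each latent vector h_t and the
    supervisor's one-step-ahead prediction from h_{t-1}: the supervised
    objective applied in the learned embedding space."""
    preds = np.stack([supervisor(h) for h in H[:-1]])
    return float(np.mean((H[1:] - preds) ** 2))

# Toy latent sequence and a linear "supervisor" (stand-ins for the
# learned embedding network and recurrent supervisor network).
rng = np.random.default_rng(0)
H = rng.normal(size=(10, 4))        # T=10 steps, 4-dim latent codes
A = rng.normal(size=(4, 4)) * 0.1
supervisor = lambda h: A @ h

loss_s = stepwise_supervised_loss(H, supervisor)
# In training, this term would be combined with the adversarial loss,
# e.g. total = loss_adversarial + lam * loss_s, giving the generator
# explicit pressure to respect one-step-ahead temporal dynamics.
```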
Label Embedding using Hierarchical Structure of Labels for Twitter Classification
Title | Label Embedding using Hierarchical Structure of Labels for Twitter Classification |
Authors | Taro Miyazaki, Kiminobu Makino, Yuka Takei, Hiroki Okamoto, Jun Goto |
Abstract | Twitter is used for various applications such as disaster monitoring and news material gathering. In these applications, each Tweet is classified into pre-defined classes. These classes have semantic relationships with each other and can be organized into a hierarchical structure, which is regarded as important information. The label texts of the pre-defined classes themselves also include important clues for classification. We therefore propose a method that can consider both the hierarchical structure of labels and the label texts themselves. We conducted an evaluation on the Text REtrieval Conference (TREC) 2018 Incident Streams (IS) track dataset and found that our method outperformed the methods of the conference participants. |
Tasks | |
Published | 2019-11-01 |
URL | https://www.aclweb.org/anthology/D19-1660/ |
https://www.aclweb.org/anthology/D19-1660 | |
PWC | https://paperswithcode.com/paper/label-embedding-using-hierarchical-structure |
Repo | |
Framework | |
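One simple way to realize "label embeddings that respect the hierarchy," as the abstract describes, is to combine a label's own text embedding with those of its ancestors. This is a hedged sketch, not the paper's architecture: the word vectors, the two-level hierarchy, and the averaging scheme are all illustrative assumptions.

```python
import numpy as np

# Toy word vectors standing in for learned embeddings.
rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=8)
         for w in ["report", "emerging", "threat", "request", "search", "rescue"]}

def embed_text(text):
    """Average the word vectors of a label's own text."""
    return np.mean([vocab[w] for w in text.split()], axis=0)

# Hypothetical slice of a label hierarchy (child -> parent).
parents = {"emerging threat": "report", "search rescue": "request"}

def label_embedding(label):
    """Average the text embeddings along the root-to-leaf path, so a
    label's representation inherits information from its ancestors and
    sibling labels share a common hierarchical component."""
    path = [label]
    while path[-1] in parents:
        path.append(parents[path[-1]])
    return np.mean([embed_text(l) for l in path], axis=0)
```

Because siblings share their parent's component, labels that are close in the hierarchy end up close in embedding space, which is the property the abstract exploits for classification.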
Decompositional Argument Mining: A General Purpose Approach for Argument Graph Construction
Title | Decompositional Argument Mining: A General Purpose Approach for Argument Graph Construction |
Authors | Debela Gemechu, Chris Reed |
Abstract | This work presents an approach that decomposes propositions into four functional components and identifies the patterns linking those components to determine argument structure. The entities addressed by a proposition are target concepts, and the features selected to make a point about the target concepts are aspects. A line of reasoning is followed by providing evidence for the points made about the target concepts via aspects. Opinions on target concepts and opinions on aspects are used to support or attack the ideas expressed by target concepts and aspects. The relations between aspects, target concepts, and the opinions on them are used to infer the argument relations. Propositions are connected iteratively to form a graph structure. The approach is generic in that it is not tuned for a specific corpus; evaluated on three different corpora from the literature (AAEC, AMT, and US2016G1tv), it achieved F-scores of 0.79, 0.77, and 0.64, respectively. |
Tasks | Argument Mining, graph construction |
Published | 2019-07-01 |
URL | https://www.aclweb.org/anthology/P19-1049/ |
https://www.aclweb.org/anthology/P19-1049 | |
PWC | https://paperswithcode.com/paper/decompositional-argument-mining-a-general |
Repo | |
Framework | |
"President Vows to Cut <Taxes> Hair": Dataset and Analysis of Creative Text Editing for Humorous Headlines
Title | "President Vows to Cut <Taxes> Hair": Dataset and Analysis of Creative Text Editing for Humorous Headlines |
Authors | Nabil Hossain, John Krumm, Michael Gamon |
Abstract | We introduce, release, and analyze a new dataset, called Humicroedit, for research in computational humor. Our publicly available data consists of regular English news headlines paired with versions of the same headlines that contain simple replacement edits designed to make them funny. We carefully curated crowdsourced editors to create funny headlines and judges to score a total of 15,095 edited headlines, with five judges per headline. The simple edits, usually just a single word replacement, mean we can apply straightforward analysis techniques to determine what makes our edited headlines humorous. We show how the data support classic theories of humor, such as incongruity, superiority, and setup/punchline. Finally, we develop baseline classifiers that can predict whether or not an edited headline is funny, which is a first step toward automatically generating humorous headlines as an approach to creating topical humor. |
Tasks | |
Published | 2019-06-01 |
URL | https://www.aclweb.org/anthology/N19-1012/ |
https://www.aclweb.org/anthology/N19-1012 | |
PWC | https://paperswithcode.com/paper/president-vows-to-cut |
Repo | |
Framework | |
MAGMATic: A Multi-domain Academic Gold Standard with Manual Annotation of Terminology for Machine Translation Evaluation
Title | MAGMATic: A Multi-domain Academic Gold Standard with Manual Annotation of Terminology for Machine Translation Evaluation |
Authors | Randy Scansani, Luisa Bentivogli, Silvia Bernardini, Adriano Ferraresi |
Abstract | |
Tasks | Machine Translation |
Published | 2019-08-01 |
URL | https://www.aclweb.org/anthology/W19-6608/ |
https://www.aclweb.org/anthology/W19-6608 | |
PWC | https://paperswithcode.com/paper/magmatic-a-multi-domain-academic-gold |
Repo | |
Framework | |
Domain Adaptation for MT: A Study with Unknown and Out-of-Domain Tasks
Title | Domain Adaptation for MT: A Study with Unknown and Out-of-Domain Tasks |
Authors | Hoang Cuong |
Abstract | |
Tasks | Domain Adaptation |
Published | 2019-08-01 |
URL | https://www.aclweb.org/anthology/W19-6606/ |
https://www.aclweb.org/anthology/W19-6606 | |
PWC | https://paperswithcode.com/paper/domain-adaptation-for-mt-a-study-with-unknown |
Repo | |
Framework | |
Inducing Document Structure for Aspect-based Summarization
Title | Inducing Document Structure for Aspect-based Summarization |
Authors | Lea Frermann, Alexandre Klementiev |
Abstract | Automatic summarization is typically treated as a 1-to-1 mapping from document to summary. Documents such as news articles, however, are structured and often cover multiple topics or aspects; and readers may be interested in only some of them. We tackle the task of aspect-based summarization, where, given a document and a target aspect, our models generate a summary centered around the aspect. We induce latent document structure jointly with an abstractive summarization objective, and train our models in a scalable synthetic setup. In addition to improvements in summarization over topic-agnostic baselines, we demonstrate the benefit of the learnt document structure: we show that our models (a) learn to accurately segment documents by aspect; (b) can leverage the structure to produce both abstractive and extractive aspect-based summaries; and (c) that structure is particularly advantageous for summarizing long documents. All results transfer from synthetic training documents to natural news articles from CNN/Daily Mail and RCV1. |
Tasks | Abstractive Text Summarization |
Published | 2019-07-01 |
URL | https://www.aclweb.org/anthology/P19-1630/ |
https://www.aclweb.org/anthology/P19-1630 | |
PWC | https://paperswithcode.com/paper/inducing-document-structure-for-aspect-based |
Repo | |
Framework | |
A Robust Abstractive System for Cross-Lingual Summarization
Title | A Robust Abstractive System for Cross-Lingual Summarization |
Authors | Jessica Ouyang, Boya Song, Kathy McKeown |
Abstract | We present a robust neural abstractive summarization system for cross-lingual summarization. We construct summarization corpora for documents automatically translated from three low-resource languages, Somali, Swahili, and Tagalog, using machine translation and the New York Times summarization corpus. We train three language-specific abstractive summarizers and evaluate on documents originally written in the source languages, as well as on a fourth, unseen language: Arabic. Our systems achieve significantly higher fluency than a standard copy-attention summarizer on automatically translated input documents, as well as comparable content selection. |
Tasks | Abstractive Text Summarization, Machine Translation |
Published | 2019-06-01 |
URL | https://www.aclweb.org/anthology/N19-1204/ |
https://www.aclweb.org/anthology/N19-1204 | |
PWC | https://paperswithcode.com/paper/a-robust-abstractive-system-for-cross-lingual |
Repo | |
Framework | |
Annotating and Analyzing Semantic Role of Elementary Units and Relations in Online Persuasive Arguments
Title | Annotating and Analyzing Semantic Role of Elementary Units and Relations in Online Persuasive Arguments |
Authors | Ryo Egawa, Gaku Morio, Katsuhide Fujita |
Abstract | For analyzing online persuasion, one of the important goals is to understand semantically how people construct comments to persuade others. However, analyzing the semantic role of arguments in online persuasion has received little attention. In this study, we therefore propose a novel annotation scheme that captures the semantic role of arguments in a popular online persuasion forum, the so-called ChangeMyView. Our contributions are: (i) proposing a scheme that includes five types of elementary units (EUs) and two types of relations; (ii) annotating ChangeMyView, resulting in 4612 EUs and 2713 relations in 345 posts; and (iii) analyzing the semantic role of persuasive arguments. Our analyses captured certain characteristic phenomena of online persuasion. |
Tasks | |
Published | 2019-07-01 |
URL | https://www.aclweb.org/anthology/P19-2059/ |
https://www.aclweb.org/anthology/P19-2059 | |
PWC | https://paperswithcode.com/paper/annotating-and-analyzing-semantic-role-of |
Repo | |
Framework | |
Machine Translation in the Financial Services Industry: A Case Study
Title | Machine Translation in the Financial Services Industry: A Case Study |
Authors | Mara Nunziatini |
Abstract | |
Tasks | Machine Translation |
Published | 2019-08-01 |
URL | https://www.aclweb.org/anthology/W19-6709/ |
https://www.aclweb.org/anthology/W19-6709 | |
PWC | https://paperswithcode.com/paper/machine-translation-in-the-financial-services |
Repo | |
Framework | |
Syntax-aware Multi-task Graph Convolutional Networks for Biomedical Relation Extraction
Title | Syntax-aware Multi-task Graph Convolutional Networks for Biomedical Relation Extraction |
Authors | Diya Li, Heng Ji |
Abstract | In this paper we tackle two unique challenges in biomedical relation extraction. The first challenge is that the contextual information between two entity mentions often involves sophisticated syntactic structures. We propose a novel graph convolutional networks model that incorporates dependency parsing and contextualized embedding to effectively capture comprehensive contextual information. The second challenge is that most of the benchmark data sets for this task are quite imbalanced, because more than 80% of mention pairs are negative instances (i.e., no relations). We propose a multi-task learning framework that jointly models the relation identification and classification tasks so they propagate supervision signals to each other, and we apply a focal loss to focus training on ambiguous mention pairs. By applying these two strategies, our model achieves a state-of-the-art F-score on the 2013 drug-drug interaction extraction task. |
Tasks | Dependency Parsing, Multi-Task Learning, Relation Extraction |
Published | 2019-11-01 |
URL | https://www.aclweb.org/anthology/D19-6204/ |
https://www.aclweb.org/anthology/D19-6204 | |
PWC | https://paperswithcode.com/paper/syntax-aware-multi-task-graph-convolutional |
Repo | |
Framework | |
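The focal loss mentioned in the abstract has a simple form: it rescales cross-entropy by a factor (1 - p_t)^gamma, so confident, well-classified mention pairs contribute almost nothing and ambiguous pairs dominate training. A minimal binary sketch (the gamma value and inputs are illustrative, not the paper's settings):

```python
import numpy as np

def focal_loss(probs, labels, gamma=2.0):
    """Binary focal loss: the (1 - p_t)^gamma factor shrinks the
    contribution of easy examples, focusing training on ambiguous
    mention pairs; gamma=0 recovers plain cross-entropy."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    p_t = np.where(labels == 1.0, probs, 1.0 - probs)  # prob of true class
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t)))

easy = focal_loss([0.95], [1])   # confident and correct: near-zero loss
hard = focal_loss([0.55], [1])   # ambiguous: loss close to cross-entropy
```

This behavior is exactly what helps with the 80% negative-pair imbalance the abstract describes: the many easy negatives are down-weighted automatically.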
Neural News Recommendation with Multi-Head Self-Attention
Title | Neural News Recommendation with Multi-Head Self-Attention |
Authors | Chuhan Wu, Fangzhao Wu, Suyu Ge, Tao Qi, Yongfeng Huang, Xing Xie |
Abstract | News recommendation can help users find news they are interested in and alleviate information overload. Precisely modeling news and users is critical for news recommendation, and capturing the contexts of words and news is important for learning news and user representations. In this paper, we propose a neural news recommendation approach with multi-head self-attention (NRMS). The core of our approach is a news encoder and a user encoder. In the news encoder, we use multi-head self-attention to learn news representations from news titles by modeling the interactions between words. In the user encoder, we learn user representations from their browsed news and use multi-head self-attention to capture the relatedness between news articles. In addition, we apply additive attention to learn more informative news and user representations by selecting important words and news. Experiments on a real-world dataset validate the effectiveness and efficiency of our approach. |
Tasks | |
Published | 2019-11-01 |
URL | https://www.aclweb.org/anthology/D19-1671/ |
https://www.aclweb.org/anthology/D19-1671 | |
PWC | https://paperswithcode.com/paper/neural-news-recommendation-with-multi-head |
Repo | |
Framework | |
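The additive attention pooling used in both NRMS encoders — selecting important words or news by a learned query — can be sketched in a few lines. The dimensions, random inputs, and parameter names below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def additive_attention(H, W, q):
    """Additive attention pooling: score each row h_i of H by
    q . tanh(W h_i), softmax the scores, and return the weighted sum
    of the rows together with the attention weights."""
    scores = np.tanh(H @ W.T) @ q               # one scalar per vector
    weights = np.exp(scores - scores.max())     # numerically stable softmax
    weights /= weights.sum()
    return weights @ H, weights

rng = np.random.default_rng(0)
H = rng.normal(size=(6, 16))    # e.g. 6 word (or browsed-news) vectors
W = rng.normal(size=(8, 16))    # learned projection; dims illustrative
q = rng.normal(size=8)          # learned query vector

pooled, weights = additive_attention(H, W, q)
```

The same operator serves twice in the abstract's description: pooling word vectors into a news representation, and pooling browsed-news vectors into a user representation.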