January 24, 2020

Paper Group NANR 250

Investigating Dynamic Routing in Tree-Structured LSTM for Sentiment Analysis

Title Investigating Dynamic Routing in Tree-Structured LSTM for Sentiment Analysis
Authors Jin Wang, Liang-Chih Yu, K. Robert Lai, Xuejie Zhang
Abstract Deep neural network models such as long short-term memory (LSTM) and tree-LSTM have been proven to be effective for sentiment analysis. However, sequential LSTM is a biased model wherein the words at the tail of a sentence are emphasized more heavily than those at the head when building sentence representations. Even tree-LSTM, with useful structural information, cannot avoid the bias problem, because the root node becomes dominant and the nodes at the bottom of the parse tree are less emphasized even though they may contain salient information. To overcome the bias problem, this study proposes a capsule tree-LSTM model that introduces a dynamic routing algorithm as an aggregation layer, building the sentence representation by assigning different weights to nodes according to their contributions to prediction. Experiments on the Stanford Sentiment Treebank (SST) for sentiment classification and on EmoBank for regression show that the proposed method improves the performance of tree-LSTM and other neural network models. In addition, the deeper the tree structure, the larger the improvement.
Tasks Sentiment Analysis
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-1343/
PDF https://www.aclweb.org/anthology/D19-1343
PWC https://paperswithcode.com/paper/investigating-dynamic-routing-in-tree
Repo
Framework
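
The routing-based aggregation described above can be illustrated with a short, self-contained sketch. This is not the authors' implementation; the squashing non-linearity, the number of routing iterations, and the toy node states are assumptions borrowed from the standard capsule-network routing-by-agreement recipe, applied here to aggregate tree-LSTM node states into a sentence vector.

```python
import torch

def squash(v, dim=-1, eps=1e-8):
    # Capsule-style squashing non-linearity: keeps direction, bounds the norm in [0, 1).
    norm_sq = (v * v).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * v / torch.sqrt(norm_sq + eps)

def routing_aggregate(node_states, n_iters=3):
    """Aggregate tree-LSTM node states (n_nodes, d) into one sentence vector
    by iterative routing-by-agreement instead of taking only the root node."""
    n_nodes, _ = node_states.shape
    logits = torch.zeros(n_nodes)               # routing logits b_i
    for _ in range(n_iters):
        weights = torch.softmax(logits, dim=0)  # coupling coefficients c_i
        s = (weights.unsqueeze(1) * node_states).sum(dim=0)
        v = squash(s)                           # candidate sentence capsule
        logits = logits + node_states @ v       # agreement update
    return v

# Toy usage: 7 tree nodes with 16-dimensional hidden states.
sentence_vec = routing_aggregate(torch.randn(7, 16))
print(sentence_vec.shape)  # torch.Size([16])
```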

ELiRF-UPV at SemEval-2019 Task 3: Snapshot Ensemble of Hierarchical Convolutional Neural Networks for Contextual Emotion Detection

Title ELiRF-UPV at SemEval-2019 Task 3: Snapshot Ensemble of Hierarchical Convolutional Neural Networks for Contextual Emotion Detection
Authors José-Ángel González, Lluís-F. Hurtado, Ferran Pla
Abstract This paper describes the approach developed by the ELiRF-UPV team at SemEval 2019 Task 3: Contextual Emotion Detection in Text. We have developed a Snapshot Ensemble of 1D Hierarchical Convolutional Neural Networks to extract features from 3-turn conversations in order to perform contextual emotion detection in text. This Snapshot Ensemble is obtained by averaging the models selected by a Genetic Algorithm that optimizes the evaluation measure. The proposed ensemble obtains better results than a single model and achieves competitive, promising results on Contextual Emotion Detection in Text.
Tasks
Published 2019-06-01
URL https://www.aclweb.org/anthology/S19-2031/
PDF https://www.aclweb.org/anthology/S19-2031
PWC https://paperswithcode.com/paper/elirf-upv-at-semeval-2019-task-3-snapshot
Repo
Framework
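
As a rough illustration of the snapshot-ensembling idea, the sketch below trains a placeholder model with a cyclic (cosine warm-restart) learning rate, saves a snapshot at the end of each cycle, and averages the snapshots' class probabilities. The model, data, cycle length, and the use of all snapshots (rather than a genetic-algorithm-selected subset, as in the paper) are illustrative assumptions.

```python
import copy
import torch
import torch.nn as nn

# Placeholder stand-ins for the hierarchical CNN and data loader from the abstract.
model = nn.Sequential(nn.Linear(300, 128), nn.ReLU(), nn.Linear(128, 4))
loader = [(torch.randn(8, 300), torch.randint(0, 4, (8,))) for _ in range(20)]

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# Cosine annealing with warm restarts yields the cyclic schedule snapshots rely on.
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=5)

snapshots = []
for epoch in range(15):
    for x, y in loader:
        optimizer.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        optimizer.step()
    scheduler.step()
    if (epoch + 1) % 5 == 0:                    # end of a learning-rate cycle
        snapshots.append(copy.deepcopy(model).eval())

def ensemble_predict(x, members):
    # Average the softmax outputs of the selected snapshots.
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=-1) for m in members])
    return probs.mean(dim=0)

print(ensemble_predict(torch.randn(2, 300), snapshots).shape)  # (2, 4)
```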

EmoDet at SemEval-2019 Task 3: Emotion Detection in Text using Deep Learning

Title EmoDet at SemEval-2019 Task 3: Emotion Detection in Text using Deep Learning
Authors Hani Al-Omari, Malak Abdullah, Nabeel Bassam
Abstract Task 3, EmoContext, in the International Workshop SemEval 2019 provides training and testing datasets for participating teams to detect emotion classes (Happy, Sad, Angry, or Others). This paper proposes a participating system (EmoDet) to detect emotions using a deep learning architecture. The main input to the system is a combination of Word2Vec word embeddings and a set of semantic features (e.g., from the AffectiveTweets Weka package). The proposed system (EmoDet) ensembles a fully connected neural network architecture and an LSTM neural network to obtain performance results that show substantial improvements (F1-score 0.67) over the baseline model provided by the Task 3 organizers (F1-score 0.58).
Tasks Word Embeddings
Published 2019-06-01
URL https://www.aclweb.org/anthology/S19-2032/
PDF https://www.aclweb.org/anthology/S19-2032
PWC https://paperswithcode.com/paper/emodet-at-semeval-2019-task-3-emotion
Repo
Framework
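
The sketch below shows one plausible way to combine the two branches the abstract mentions: a fully connected network over averaged Word2Vec vectors plus affect features, and an LSTM over the embedding sequence, with their softmax outputs averaged. Dimensions, layer sizes, and the 45-dimensional feature vector are placeholders, not EmoDet's actual configuration.

```python
import torch
import torch.nn as nn

class DenseBranch(nn.Module):
    """Fully connected network over averaged word vectors plus affect features."""
    def __init__(self, emb_dim=300, feat_dim=45, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(emb_dim + feat_dim, 128), nn.ReLU(),
            nn.Linear(128, n_classes))

    def forward(self, word_vecs, affect_feats):
        pooled = word_vecs.mean(dim=1)                 # average Word2Vec vectors
        return self.net(torch.cat([pooled, affect_feats], dim=-1))

class LSTMBranch(nn.Module):
    """LSTM over the word-embedding sequence."""
    def __init__(self, emb_dim=300, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, 128, batch_first=True)
        self.out = nn.Linear(128, n_classes)

    def forward(self, word_vecs):
        _, (h, _) = self.lstm(word_vecs)
        return self.out(h[-1])

# Simple ensembling: average the two branches' class probabilities.
words, feats = torch.randn(2, 20, 300), torch.randn(2, 45)
dense, lstm = DenseBranch(), LSTMBranch()
probs = (torch.softmax(dense(words, feats), -1) + torch.softmax(lstm(words), -1)) / 2
print(probs.shape)  # (2, 4)
```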

You Only Need Attention to Traverse Trees

Title You Only Need Attention to Traverse Trees
Authors Mahtab Ahmed, Muhammad Rifayat Samee, Robert E. Mercer
Abstract In recent NLP research, a topic of interest is universal sentence encoding: sentence representations that can be used in any supervised task. At the word-sequence level, fully attention-based models suffer from two problems: a quadratic increase in memory consumption with respect to the sentence length, and an inability to capture and use syntactic information. Recursive neural nets can extract very good syntactic information by traversing a tree structure. To this end, we propose Tree Transformer, a model that captures phrase-level syntax for constituency trees as well as word-level dependencies for dependency trees by doing recursive traversal only with attention. Evaluation of this model on four tasks yields noteworthy results compared to the standard transformer and LSTM-based models as well as tree-structured LSTMs. Ablation studies are provided to determine whether positional information is inherently encoded in the trees and which type of attention is suitable for the recursive traversal.
Tasks
Published 2019-07-01
URL https://www.aclweb.org/anthology/P19-1030/
PDF https://www.aclweb.org/anthology/P19-1030
PWC https://paperswithcode.com/paper/you-only-need-attention-to-traverse-trees
Repo
Framework
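
A minimal sketch of the core idea, composing a parent node from its children using only attention during a bottom-up traversal, is shown below. The composition function (multi-head attention over siblings followed by mean pooling and a feed-forward layer) is an assumption for illustration; the published Tree Transformer differs in its details.

```python
import torch
import torch.nn as nn

class TreeNode:
    def __init__(self, emb=None, children=()):
        self.emb, self.children = emb, list(children)

class AttentiveComposer(nn.Module):
    """Compose a parent representation from its children with multi-head
    attention, applied bottom-up over a constituency tree."""
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU())

    def compose(self, node):
        if not node.children:               # leaf: use the word embedding directly
            return node.emb
        kids = torch.stack([self.compose(c) for c in node.children]).unsqueeze(0)
        attended, _ = self.attn(kids, kids, kids)    # attention among siblings
        return self.ff(attended.mean(dim=1)).squeeze(0)

# Toy binary constituency tree over three words.
d = 64
leaves = [TreeNode(torch.randn(d)) for _ in range(3)]
root = TreeNode(children=[TreeNode(children=leaves[:2]), leaves[2]])
print(AttentiveComposer(d).compose(root).shape)  # torch.Size([64])
```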

Document-Level N-ary Relation Extraction with Multiscale Representation Learning

Title Document-Level N-ary Relation Extraction with Multiscale Representation Learning
Authors Robin Jia, Cliff Wong, Hoifung Poon
Abstract Most information extraction methods focus on binary relations expressed within single sentences. In high-value domains, however, n-ary relations are in great demand (e.g., drug-gene-mutation interactions in precision oncology). Such relations often involve entity mentions that are far apart in the document, yet existing work on cross-sentence relation extraction is generally confined to small text spans (e.g., three consecutive sentences), which severely limits recall. In this paper, we propose a novel multiscale neural architecture for document-level n-ary relation extraction. Our system combines representations learned over various text spans throughout the document and across the subrelation hierarchy. Widening the system's purview to the entire document maximizes potential recall. Moreover, by integrating weak signals across the document, multiscale modeling increases precision, even in the presence of noisy labels from distant supervision. Experiments on biomedical machine reading show that our approach substantially outperforms previous n-ary relation extraction methods.
Tasks Reading Comprehension, Relation Extraction, Representation Learning
Published 2019-06-01
URL https://www.aclweb.org/anthology/N19-1370/
PDF https://www.aclweb.org/anthology/N19-1370
PWC https://paperswithcode.com/paper/document-level-n-ary-relation-extraction-with-1
Repo
Framework
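
One ingredient of such a document-level system is scoring an n-ary relation by enumerating mention tuples anywhere in the document and pooling their scores, so that evidence far from any single sentence still contributes. The sketch below shows that pooling step only, with a log-sum-exp soft-OR and a linear scorer as placeholder components; the paper's full multiscale architecture is considerably richer.

```python
import torch

def logsumexp_pool(scores, dim=0):
    # Soft-OR pooling: the relation holds if at least one mention tuple supports it.
    return torch.logsumexp(scores, dim=dim)

def score_drug_gene_mutation(drug_mentions, gene_mentions, mut_mentions, scorer):
    """Score a document-level (drug, gene, mutation) relation by enumerating
    all mention triples and pooling their scores, so distant mentions still count."""
    scores = []
    for d in drug_mentions:
        for g in gene_mentions:
            for m in mut_mentions:
                scores.append(scorer(torch.cat([d, g, m], dim=-1)))
    return logsumexp_pool(torch.stack(scores))

# Toy mention representations (e.g., contextual encodings of each mention).
scorer = torch.nn.Linear(3 * 32, 1)
drugs = [torch.randn(32) for _ in range(2)]
genes = [torch.randn(32) for _ in range(3)]
muts = [torch.randn(32) for _ in range(2)]
print(score_drug_gene_mutation(drugs, genes, muts, scorer))
```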

LIRMM-Advanse at SemEval-2019 Task 3: Attentive Conversation Modeling for Emotion Detection and Classification

Title LIRMM-Advanse at SemEval-2019 Task 3: Attentive Conversation Modeling for Emotion Detection and Classification
Authors Waleed Ragheb, Jérôme Azé, Sandra Bringay, Maximilien Servajean
Abstract This paper addresses the problem of modeling textual conversations and detecting emotions. Our proposed model makes use of (1) deep transfer learning rather than classical shallow word-embedding methods; (2) self-attention mechanisms to focus on the most important parts of the texts; and (3) turn-based conversational modeling for classifying the emotions. The approach does not rely on any hand-crafted features or lexicons. Our model was evaluated on the data provided by the SemEval-2019 shared task on contextual emotion detection in text. The model shows very competitive results.
Tasks Transfer Learning
Published 2019-06-01
URL https://www.aclweb.org/anthology/S19-2042/
PDF https://www.aclweb.org/anthology/S19-2042
PWC https://paperswithcode.com/paper/lirmm-advanse-at-semeval-2019-task-3
Repo
Framework
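
The self-attention and turn-based parts of the description can be sketched as follows: each turn's encoder states are pooled with a small attention head, and the three turn vectors are concatenated for classification. The upstream pretrained encoder (the "deep transfer learning" component) is omitted, and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class SelfAttentivePooling(nn.Module):
    """Score each encoder state with a small attention head and take the
    weighted sum, focusing the turn representation on the most salient tokens."""
    def __init__(self, d_model=256):
        super().__init__()
        self.scorer = nn.Linear(d_model, 1)

    def forward(self, states):                    # (batch, seq_len, d_model)
        weights = torch.softmax(self.scorer(states), dim=1)
        return (weights * states).sum(dim=1)      # (batch, d_model)

class TurnBasedClassifier(nn.Module):
    """Encode each of the three turns, pool with self-attention, then classify
    the concatenated turn vectors."""
    def __init__(self, d_model=256, n_classes=4):
        super().__init__()
        self.pool = SelfAttentivePooling(d_model)
        self.out = nn.Linear(3 * d_model, n_classes)

    def forward(self, turn_states):               # list of 3 (batch, len, d) tensors
        turns = [self.pool(t) for t in turn_states]
        return self.out(torch.cat(turns, dim=-1))

# Toy usage with pretrained-encoder outputs for three conversation turns.
clf = TurnBasedClassifier()
turns = [torch.randn(2, 12, 256) for _ in range(3)]
print(clf(turns).shape)  # (2, 4)
```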

MoonGrad at SemEval-2019 Task 3: Ensemble BiRNNs for Contextual Emotion Detection in Dialogues

Title MoonGrad at SemEval-2019 Task 3: Ensemble BiRNNs for Contextual Emotion Detection in Dialogues
Authors Chandrakant Bothe, Stefan Wermter
Abstract When reading “I don’t want to talk to you any more”, we might interpret this as either an angry or a sad emotion in the absence of context. Often, utterances are shorter, and given a short utterance like “Me too!”, it is difficult to interpret the emotion without context. The lack of prosodic or visual information makes it a challenging problem to detect such emotions from text alone. However, using contextual information in the dialogue is gaining importance for providing context-aware recognition of linguistic features such as emotion, dialogue act, sentiment, etc. The SemEval 2019 Task 3 EmoContext competition provides a dataset of three-turn dialogues labeled with three emotion classes, i.e. Happy, Sad and Angry, and in addition with Others for none of the aforementioned emotion classes. We develop an ensemble of recurrent neural models with character- and word-level features as input to solve this problem. The system performs quite well, achieving a micro-averaged F1 score (F1μ) of 0.7212 for the three emotion classes.
Tasks
Published 2019-06-01
URL https://www.aclweb.org/anthology/S19-2044/
PDF https://www.aclweb.org/anthology/S19-2044
PWC https://paperswithcode.com/paper/moongrad-at-semeval-2019-task-3-ensemble
Repo
Framework
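
A minimal sketch of a character- and word-level BiRNN member, with a simple probability-averaging ensemble on top, is given below. GRUs, hidden sizes, and the three-member ensemble are assumptions for illustration rather than the MoonGrad configuration.

```python
import torch
import torch.nn as nn

class CharWordBiRNN(nn.Module):
    """Bidirectional GRUs over character- and word-level inputs; the two
    final states are concatenated before classification."""
    def __init__(self, char_vocab=100, word_dim=300, hidden=64, n_classes=4):
        super().__init__()
        self.char_emb = nn.Embedding(char_vocab, 32)
        self.char_rnn = nn.GRU(32, hidden, batch_first=True, bidirectional=True)
        self.word_rnn = nn.GRU(word_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(4 * hidden, n_classes)

    def forward(self, char_ids, word_vecs):
        _, hc = self.char_rnn(self.char_emb(char_ids))
        _, hw = self.word_rnn(word_vecs)
        feats = torch.cat([hc[0], hc[1], hw[0], hw[1]], dim=-1)
        return self.out(feats)

# An ensemble averages the probabilities of several independently trained models.
models = [CharWordBiRNN() for _ in range(3)]
chars, words = torch.randint(0, 100, (2, 60)), torch.randn(2, 15, 300)
probs = torch.stack([torch.softmax(m(chars, words), -1) for m in models]).mean(0)
print(probs.shape)  # (2, 4)
```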

NL-FIIT at SemEval-2019 Task 3: Emotion Detection From Conversational Triplets Using Hierarchical Encoders

Title NL-FIIT at SemEval-2019 Task 3: Emotion Detection From Conversational Triplets Using Hierarchical Encoders
Authors Michal Farkas, Peter Lacko
Abstract In this paper, we present our system submission for EmoContext, the third task of the SemEval 2019 workshop. Our solution is a hierarchical recurrent neural network with ELMo embeddings and regularization through dropout and Gaussian noise. We experimented with two main model architectures: a simple and a hierarchical LSTM network. We also examined ensembling of the models and various variants of an ensemble. We achieved a micro F1 score of 0.7481, which is significantly higher than the baseline and currently the 19th best submission.
Tasks
Published 2019-06-01
URL https://www.aclweb.org/anthology/S19-2046/
PDF https://www.aclweb.org/anthology/S19-2046
PWC https://paperswithcode.com/paper/nl-fiit-at-semeval-2019-task-3-emotion
Repo
Framework
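
The sketch below illustrates the hierarchical encoder plus the two regularizers mentioned in the abstract: a word-level LSTM per turn, a turn-level LSTM over the three turn vectors, dropout, and additive Gaussian noise applied only in training mode. The 1024-dimensional inputs stand in for ELMo embeddings; all other sizes are placeholders.

```python
import torch
import torch.nn as nn

class GaussianNoise(nn.Module):
    """Adds zero-mean Gaussian noise during training only (a common regulariser
    used alongside dropout)."""
    def __init__(self, sigma=0.1):
        super().__init__()
        self.sigma = sigma

    def forward(self, x):
        return x + self.sigma * torch.randn_like(x) if self.training else x

class HierarchicalEncoder(nn.Module):
    """Word-level LSTM per turn, then a turn-level LSTM over the three
    resulting turn vectors."""
    def __init__(self, emb_dim=1024, hidden=128, n_classes=4):
        super().__init__()
        self.noise = GaussianNoise()
        self.drop = nn.Dropout(0.3)
        self.word_lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.turn_lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, turns):                     # list of (batch, len, emb_dim)
        turn_vecs = []
        for t in turns:
            _, (h, _) = self.word_lstm(self.drop(self.noise(t)))
            turn_vecs.append(h[-1])
        _, (h, _) = self.turn_lstm(torch.stack(turn_vecs, dim=1))
        return self.out(h[-1])

# Toy usage with ELMo-sized (1024-d) contextual embeddings for three turns.
model = HierarchicalEncoder()
print(model([torch.randn(2, 10, 1024) for _ in range(3)]).shape)  # (2, 4)
```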

Multilingual Entity, Relation, Event and Human Value Extraction

Title Multilingual Entity, Relation, Event and Human Value Extraction
Authors Manling Li, Ying Lin, Joseph Hoover, Spencer Whitehead, Clare Voss, Morteza Dehghani, Heng Ji
Abstract This paper demonstrates a state-of-the-art end-to-end multilingual (English, Russian, and Ukrainian) knowledge extraction system that can perform entity discovery and linking, relation extraction, event extraction, and coreference. It extracts and aggregates knowledge elements across multiple languages and documents, and provides visualizations of the results along three dimensions: temporal (as displayed in an event timeline), spatial (as displayed in an event heatmap), and relational (as displayed in entity-relation networks). To further support users' analyses of causal sequences of events in complex situations, we also integrate a wide range of human moral value measures, independently derived from a region-based survey, into the event heatmap. The system is publicly available as a Docker container and a live demo.
Tasks Relation Extraction
Published 2019-06-01
URL https://www.aclweb.org/anthology/N19-4019/
PDF https://www.aclweb.org/anthology/N19-4019
PWC https://paperswithcode.com/paper/multilingual-entity-relation-event-and-human
Repo
Framework
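
As a lightweight illustration of how such a system might aggregate knowledge elements across documents and languages, here is a toy data-structure sketch; the element types and fields are assumptions and say nothing about the demonstrated system's internal representation.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeElement:
    doc_id: str
    language: str          # "en", "ru", or "uk"
    kind: str              # "entity", "relation", or "event"
    text: str
    attributes: dict = field(default_factory=dict)

def aggregate(per_document_results):
    """Merge knowledge elements extracted from many documents and languages
    into one cross-document store keyed by element kind."""
    store = {"entity": [], "relation": [], "event": []}
    for elements in per_document_results:
        for el in elements:
            store[el.kind].append(el)
    return store

# Toy usage: two documents in different languages feeding one aggregate view.
doc1 = [KnowledgeElement("d1", "en", "entity", "Kyiv"),
        KnowledgeElement("d1", "en", "event", "protest", {"place": "Kyiv"})]
doc2 = [KnowledgeElement("d2", "uk", "entity", "Київ")]
print({k: len(v) for k, v in aggregate([doc1, doc2]).items()})
```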

Extracting Adverse Drug Event Information with Minimal Engineering

Title Extracting Adverse Drug Event Information with Minimal Engineering
Authors Timothy Miller, Alon Geva, Dmitriy Dligach
Abstract In this paper we describe an evaluation of the potential of classical information extraction methods to extract drug-related attributes, including adverse drug events, and compare them to more recently developed neural methods. We use the 2018 N2C2 shared task data as our gold-standard data set for training. We train support vector machine classifiers to detect drug and drug-attribute spans, and pair these detected entities as training instances for an SVM relation classifier, with both systems using standard features. We compare to baseline neural methods that use standard contextualized embedding representations for entity and relation extraction. The SVM-based system and a neural system obtain comparable results, with the SVM system doing better on concepts and the neural system performing better on relation extraction tasks. The neural system obtains surprisingly strong results compared to the system based on years of research in developing features for information extraction.
Tasks Relation Extraction
Published 2019-06-01
URL https://www.aclweb.org/anthology/W19-1903/
PDF https://www.aclweb.org/anthology/W19-1903
PWC https://paperswithcode.com/paper/extracting-adverse-drug-event-information
Repo
Framework
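
The span-pairing-plus-SVM relation step can be sketched with scikit-learn as below. The feature set (span texts, token distance, bag of words between the spans), labels, and toy candidates are illustrative stand-ins for the paper's standard features over detected entities.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def pair_features(drug, attr, tokens):
    """Standard surface features for a (drug, attribute) candidate pair:
    the two span texts, their token distance, and the words between them."""
    lo = min(drug["end"], attr["end"])
    hi = max(drug["start"], attr["start"])
    feats = {
        "drug_text": drug["text"].lower(),
        "attr_text": attr["text"].lower(),
        "token_distance": hi - lo,
    }
    feats.update({f"between={w.lower()}": 1 for w in tokens[lo:hi]})
    return feats

tokens = "patient developed a rash after taking penicillin daily".split()
pairs = [  # toy candidates from upstream span detectors; labels are illustrative
    ({"text": "penicillin", "start": 6, "end": 7},
     {"text": "rash", "start": 3, "end": 4}, "ADE-Drug"),
    ({"text": "penicillin", "start": 6, "end": 7},
     {"text": "daily", "start": 7, "end": 8}, "Frequency-Drug"),
]
X = [pair_features(d, a, tokens) for d, a, _ in pairs]
y = [label for _, _, label in pairs]

clf = make_pipeline(DictVectorizer(), LinearSVC())
clf.fit(X, y)
print(clf.predict(X))
```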

A BERT-based Universal Model for Both Within- and Cross-sentence Clinical Temporal Relation Extraction

Title A BERT-based Universal Model for Both Within- and Cross-sentence Clinical Temporal Relation Extraction
Authors Chen Lin, Timothy Miller, Dmitriy Dligach, Steven Bethard, Guergana Savova
Abstract Classic methods for clinical temporal relation extraction focus on relational candidates within a sentence. On the other hand, the breakthrough Bidirectional Encoder Representations from Transformers (BERT) model is trained on large quantities of arbitrary spans of contiguous text rather than sentences. In this study, we aim to build a sentence-agnostic framework for the task of CONTAINS temporal relation extraction. We establish a new state-of-the-art result for the task, an F1 of 0.684 in-domain (a 0.055-point improvement) and 0.565 cross-domain (a 0.018-point improvement), by fine-tuning BERT and pre-training domain-specific BERT models on sentence-agnostic temporal relation instances with WordPiece-compatible encodings, and by augmenting the labeled data with automatically generated “silver” instances.
Tasks Relation Extraction
Published 2019-06-01
URL https://www.aclweb.org/anthology/W19-1908/
PDF https://www.aclweb.org/anthology/W19-1908
PWC https://paperswithcode.com/paper/a-bert-based-universal-model-for-both-within
Repo
Framework
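
A minimal sketch of the sentence-agnostic instance formulation, using Hugging Face Transformers, is shown below: the candidate event and time expression are wrapped in added marker tokens and the surrounding multi-sentence window is classified as CONTAINS or not. The checkpoint name, marker scheme, and label set are assumptions, not the authors' released configuration.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "bert-base-uncased"  # placeholder; the paper pre-trains domain-specific BERTs
tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<e>", "</e>", "<t>", "</t>"]})

model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)
model.resize_token_embeddings(len(tokenizer))

# A sentence-agnostic instance: the candidate pair may cross sentence boundaries,
# so we feed a fixed-width token window instead of a single sentence.
window = ("The patient was admitted on <t> March 3 </t> . She developed "
          "a fever . <e> Antibiotics </e> were started the same day .")
inputs = tokenizer(window, return_tensors="pt", truncation=True, max_length=128)

with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(-1))   # P(no relation), P(CONTAINS) for the untrained head
```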

Tell Me Where I Am: Object-Level Scene Context Prediction

Title Tell Me Where I Am: Object-Level Scene Context Prediction
Authors Xiaotian Qiao, Quanlong Zheng, Ying Cao, Rynson W.H. Lau
Abstract Contextual information has been shown to be effective in helping solve various image understanding tasks. Previous works have focused on extracting contextual information from an image and using it to infer the properties of some object(s) in the image. In this paper, we consider the inverse problem of how to hallucinate missing contextual information from the properties of a few standalone objects. We refer to it as scene context prediction. This problem is difficult as it requires extensive knowledge of the complex and diverse relationships among different objects in natural scenes. We propose a convolutional neural network, which takes as input the properties (i.e., category, shape, and position) of a few standalone objects to predict an object-level scene layout that compactly encodes the semantics and structure of the scene context where the given objects are. Our quantitative experiments and user studies show that our model can generate more plausible scene context than the baseline approach. We demonstrate that our model allows for the synthesis of realistic scene images from just partial scene layouts and internally learns useful features for scene recognition.
Tasks Scene Recognition
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Qiao_Tell_Me_Where_I_Am_Object-Level_Scene_Context_Prediction_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Qiao_Tell_Me_Where_I_Am_Object-Level_Scene_Context_Prediction_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/tell-me-where-i-am-object-level-scene-context
Repo
Framework
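
The input/output formulation lends itself to an encoder-decoder CNN over category-channel layout maps: the sketch below takes a map containing only the given standalone objects and predicts a dense object-level layout over all categories. Map resolution, category count, and the architecture itself are illustrative assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

class SceneContextNet(nn.Module):
    """Encoder-decoder CNN: input is a coarse layout map containing only the
    given standalone objects (one channel per category), output is a full
    object-level scene layout over all categories."""
    def __init__(self, n_categories=20):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(n_categories, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, n_categories, 4, stride=2, padding=1))

    def forward(self, partial_layout):
        return self.decoder(self.encoder(partial_layout))

# Toy usage: a 64x64 layout map with two standalone objects rasterised into it.
partial = torch.zeros(1, 20, 64, 64)
partial[0, 3, 10:20, 10:30] = 1.0   # e.g. a "person" box (category index is arbitrary)
partial[0, 7, 40:60, 30:50] = 1.0   # e.g. a "table" box
print(SceneContextNet()(partial).shape)   # (1, 20, 64, 64)
```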

A General-Purpose Annotation Model for Knowledge Discovery: Case Study in Spanish Clinical Text

Title A General-Purpose Annotation Model for Knowledge Discovery: Case Study in Spanish Clinical Text
Authors Alejandro Piad-Morffis, Yoan Gutiérrez, Suilan Estevez-Velarde, Rafael Muñoz
Abstract Knowledge discovery from text in natural language is a task usually aided by the manual construction of annotated corpora. Specifically in the clinical domain, several annotation models are used depending on the characteristics of the task to solve (e.g., named entity recognition, relation extraction, etc.). However, few general-purpose annotation models exist that can support a broad range of knowledge extraction tasks. This paper presents an annotation model designed to capture a large portion of the semantics of natural language text. The structure of the annotation model is presented, with examples of annotated sentences and a brief description of each semantic role and relation defined. This research focuses on an application to clinical texts in the Spanish language; nevertheless, the presented annotation model is extensible to other domains and languages. Example annotated sentences, guidelines, and suitable configuration files for an annotation tool are also provided for the research community.
Tasks Named Entity Recognition, Relation Extraction
Published 2019-06-01
URL https://www.aclweb.org/anthology/W19-1910/
PDF https://www.aclweb.org/anthology/W19-1910
PWC https://paperswithcode.com/paper/a-general-purpose-annotation-model-for
Repo
Framework
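
To make the entity/role/relation structure concrete, here is a toy representation of an annotated Spanish clinical sentence using plain dataclasses; the type and relation labels are illustrative and not necessarily those defined by the proposed annotation model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Entity:
    ident: str
    label: str          # e.g. "Concept" or "Action" (illustrative labels)
    start: int          # character offsets in the source text
    end: int
    text: str

@dataclass
class Relation:
    label: str          # e.g. a semantic role such as "subject" or "target"
    head: Entity
    tail: Entity

@dataclass
class AnnotatedSentence:
    text: str
    entities: List[Entity] = field(default_factory=list)
    relations: List[Relation] = field(default_factory=list)

# Toy annotation of a Spanish clinical sentence.
s = AnnotatedSentence("El paciente presenta fiebre alta.")
paciente = Entity("T1", "Concept", 3, 11, "paciente")
presenta = Entity("T2", "Action", 12, 20, "presenta")
fiebre = Entity("T3", "Concept", 21, 27, "fiebre")
s.entities += [paciente, presenta, fiebre]
s.relations += [Relation("subject", presenta, paciente),
                Relation("target", presenta, fiebre)]
print(len(s.entities), len(s.relations))
```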

Distantly Supervised Biomedical Knowledge Acquisition via Knowledge Graph Based Attention

Title Distantly Supervised Biomedical Knowledge Acquisition via Knowledge Graph Based Attention
Authors Qin Dai, Naoya Inoue, Paul Reisert, Ryo Takahashi, Kentaro Inui
Abstract The increased demand for structured scientific knowledge has attracted considerable attention to extracting scientific relations from the ever-growing body of scientific publications. Distant supervision is a widely applied approach for automatically generating large amounts of labelled data at low manual annotation cost. However, distant supervision inevitably brings the wrong-labelling problem, which negatively affects the performance of Relation Extraction (RE). To address this issue, Han et al. (2018) propose a novel framework for jointly training both an RE model and a Knowledge Graph Completion (KGC) model to extract structured knowledge from a non-scientific dataset. In this work, we first investigate the feasibility of this framework on a scientific dataset, specifically a biomedical dataset. Second, to achieve better performance on the biomedical dataset, we extend the framework with other competitive KGC models. Moreover, we propose a new end-to-end KGC model to extend the framework. Experimental results not only show the feasibility of the framework on the biomedical dataset, but also indicate the effectiveness of our extensions: our extended model achieves significant and consistent improvements on distantly supervised RE compared with baselines.
Tasks Knowledge Graph Completion, Relation Extraction
Published 2019-06-01
URL https://www.aclweb.org/anthology/W19-2601/
PDF https://www.aclweb.org/anthology/W19-2601
PWC https://paperswithcode.com/paper/distantly-supervised-biomedical-knowledge
Repo
Framework
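
The joint-training idea can be sketched as a shared entity-embedding space feeding both an RE classifier and a KGC scorer, with the two losses summed. The sketch below uses a TransE-style margin loss for the KGC part and omits the sentence encoder and the knowledge-graph-based attention entirely, so it illustrates only the shared-parameter joint objective, not the paper's full model.

```python
import torch
import torch.nn as nn

class JointREKGC(nn.Module):
    """Shared entity embeddings feed both a relation-extraction classifier and
    a TransE-style knowledge-graph-completion scorer; the losses are summed."""
    def __init__(self, n_entities=1000, n_relations=20, dim=64):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        self.re_clf = nn.Linear(2 * dim, n_relations)   # sentence encoder omitted

    def re_loss(self, head_ids, tail_ids, labels):
        pair = torch.cat([self.ent(head_ids), self.ent(tail_ids)], dim=-1)
        return nn.functional.cross_entropy(self.re_clf(pair), labels)

    def kgc_loss(self, heads, rels, tails, corrupt_tails, margin=1.0):
        # TransE: ||h + r - t|| should be small for true triples.
        pos = (self.ent(heads) + self.rel(rels) - self.ent(tails)).norm(dim=-1)
        neg = (self.ent(heads) + self.rel(rels) - self.ent(corrupt_tails)).norm(dim=-1)
        return torch.relu(margin + pos - neg).mean()

model = JointREKGC()
h, r, t = (torch.randint(0, 1000, (8,)), torch.randint(0, 20, (8,)),
           torch.randint(0, 1000, (8,)))
loss = model.re_loss(h, t, r) + model.kgc_loss(h, r, t, torch.randint(0, 1000, (8,)))
loss.backward()
print(float(loss))
```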

NTUA-ISLab at SemEval-2019 Task 3: Determining emotions in contextual conversations with deep learning

Title NTUA-ISLab at SemEval-2019 Task 3: Determining emotions in contextual conversations with deep learning
Authors Rolandos Alexandros Potamias, Georgios Siolas
Abstract Sentiment analysis (SA) in texts is a well-studied Natural Language Processing task, which nowadays gains popularity due to the explosion of social media and the subsequent accumulation of huge amounts of related data. However, capturing emotional states and the sentiment polarity of written excerpts requires knowledge of the events triggering them. Towards this goal, we present a computational end-to-end context-aware SA methodology, which competed in the SemEval-2019 EmoContext task (Task 3). The proposed system combines two neural architectures: a deep recurrent neural network structured as an attentive bidirectional LSTM, and a deep dense network (DNN). The system achieved a micro F1-score of 0.745 and ranked 26th out of 165 teams (top 20%) among the official task submissions.
Tasks Sentiment Analysis
Published 2019-06-01
URL https://www.aclweb.org/anthology/S19-2047/
PDF https://www.aclweb.org/anthology/S19-2047
PWC https://paperswithcode.com/paper/ntua-islab-at-semeval-2019-task-3-determining
Repo
Framework
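
The combination of an attentive bidirectional LSTM with a deep dense network can be sketched as below: attention-pooled BiLSTM states are concatenated with the DNN's feature representation before the emotion classifier. Layer sizes and the auxiliary-feature dimension are placeholders rather than the submitted system's settings.

```python
import torch
import torch.nn as nn

class AttentiveBiLSTMWithDNN(nn.Module):
    """Attentive bidirectional LSTM over word embeddings combined with a deep
    dense network over auxiliary features; their representations are fused
    before the emotion classifier."""
    def __init__(self, emb_dim=300, hidden=128, feat_dim=20, n_classes=4):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.dnn = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 64), nn.ReLU())
        self.out = nn.Linear(2 * hidden + 64, n_classes)

    def forward(self, word_vecs, aux_feats):
        states, _ = self.bilstm(word_vecs)              # (batch, len, 2*hidden)
        weights = torch.softmax(self.attn(states), dim=1)
        pooled = (weights * states).sum(dim=1)          # attention pooling
        return self.out(torch.cat([pooled, self.dnn(aux_feats)], dim=-1))

model = AttentiveBiLSTMWithDNN()
print(model(torch.randn(2, 30, 300), torch.randn(2, 20)).shape)  # (2, 4)
```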