October 15, 2019

2171 words 11 mins read

Paper Group NANR 65

Deep Kalman Filtering Network for Video Compression Artifact Reduction. Exemplar Encoder-Decoder for Neural Conversation Generation. Handling Rare Items in Data-to-Text Generation. Korean TimeBank Including Relative Temporal Information. N-ary Relation Extraction using Graph-State LSTM. SemEval-2018 Task 1: Affect in Tweets. Understanding Perceptua …

Deep Kalman Filtering Network for Video Compression Artifact Reduction

Title Deep Kalman Filtering Network for Video Compression Artifact Reduction
Authors Guo Lu, Wanli Ouyang, Dong Xu, Xiaoyun Zhang, Zhiyong Gao, Ming-Ting Sun
Abstract When lossy video compression algorithms are applied, compression artifacts often appear in videos, making decoded videos unpleasant for the human visual system. In this paper, we model the video artifact reduction task as a Kalman filtering procedure and restore decoded frames through a deep Kalman filtering network. Different from existing works that use the noisy previously decoded frames as temporal information in the restoration problem, we utilize the less noisy previously restored frame and build a recursive filtering scheme based on the Kalman model. This strategy provides more accurate and consistent temporal information, which produces higher-quality restoration. In addition, the strong prior information of the prediction residual is also exploited for restoration through a well-designed neural network. These two components are combined under the Kalman framework and optimized through the deep Kalman filtering network. Our approach bridges the gap between model-based and learning-based methods by integrating the recursive nature of the Kalman model with the highly non-linear transformation ability of deep neural networks. Experimental results on the benchmark dataset demonstrate the effectiveness of our proposed method.
Tasks Video Compression
Published 2018-09-01
URL http://openaccess.thecvf.com/content_ECCV_2018/html/Guo_Lu_Deep_Kalman_Filtering_ECCV_2018_paper.html
PDF http://openaccess.thecvf.com/content_ECCV_2018/papers/Guo_Lu_Deep_Kalman_Filtering_ECCV_2018_paper.pdf
PWC https://paperswithcode.com/paper/deep-kalman-filtering-network-for-video
Repo
Framework
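
The abstract describes a recursive scheme in which each frame is restored from the previously restored frame (rather than the noisier previously decoded one) together with the prediction residual used as a prior. Below is a minimal sketch of that loop; `predict_net` and `update_net` are hypothetical callables standing in for the paper's learned prediction and update components, not its actual architecture.

```python
# Minimal sketch of the recursive restoration loop described in the abstract.
# `predict_net` and `update_net` are hypothetical stand-ins for the paper's
# learned prediction and update components.

def restore_sequence(decoded_frames, residuals, predict_net, update_net):
    """Restore decoded frames one by one, reusing the previous *restored* frame."""
    restored = []
    prev_restored = decoded_frames[0]  # bootstrap with the first decoded frame
    for frame, residual in zip(decoded_frames, residuals):
        # Temporal prior propagated from the previously restored (less noisy) frame.
        prior = predict_net(prev_restored)
        # Fuse the prior, the current decoded frame, and the residual prior.
        prev_restored = update_net(prior, frame, residual)
        restored.append(prev_restored)
    return restored
```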

Exemplar Encoder-Decoder for Neural Conversation Generation

Title Exemplar Encoder-Decoder for Neural Conversation Generation
Authors Gaurav Pandey, Danish Contractor, Vineet Kumar, Sachindra Joshi
Abstract In this paper we present the Exemplar Encoder-Decoder network (EED), a novel conversation model that learns to utilize similar examples from training data to generate responses. Similar conversation examples (context-response pairs) are retrieved from training data using a traditional TF-IDF based retrieval model, and the corresponding responses are used by our decoder to generate the ground-truth response. The contribution of each retrieved response is weighted by the similarity of its context with the input context. As a result, our model learns to assign higher similarity scores to those retrieved contexts whose responses are crucial for generating the final response. We present detailed experiments on two large datasets and find that our method outperforms state-of-the-art sequence-to-sequence generative models on several recently proposed evaluation metrics.
Tasks
Published 2018-07-01
URL https://www.aclweb.org/anthology/P18-1123/
PDF https://www.aclweb.org/anthology/P18-1123
PWC https://paperswithcode.com/paper/exemplar-encoder-decoder-for-neural
Repo
Framework
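
As a rough illustration of the retrieval step described in the abstract, the following sketch fetches the k most similar training contexts with TF-IDF and weights their responses by context similarity. How the decoder then consumes these exemplars is not shown, and the function and variable names are assumptions for illustration.

```python
# Sketch of the TF-IDF retrieval and similarity weighting described in the abstract.
# Variable names are illustrative; the decoder that consumes the exemplars is not shown.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_exemplars(input_context, contexts, responses, k=3):
    """Return the k most similar (response, weight) exemplars for an input context."""
    vectorizer = TfidfVectorizer()
    context_matrix = vectorizer.fit_transform(contexts)   # index the training contexts
    query_vec = vectorizer.transform([input_context])
    sims = cosine_similarity(query_vec, context_matrix).ravel()
    top = np.argsort(-sims)[:k]                            # highest-similarity contexts
    weights = sims[top] / (sims[top].sum() + 1e-8)         # normalized contribution weights
    return [(responses[i], w) for i, w in zip(top, weights)]
```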

Handling Rare Items in Data-to-Text Generation

Title Handling Rare Items in Data-to-Text Generation
Authors Anastasia Shimorina, Claire Gardent
Abstract Neural approaches to data-to-text generation generally handle rare input items using either delexicalisation or a copy mechanism. We investigate the relative impact of these two methods on two datasets (E2E and WebNLG) and using two evaluation settings. We show (i) that rare items strongly impact performance; (ii) that combining delexicalisation and copying yields the strongest improvement; (iii) that copying underperforms for rare and unseen items and (iv) that the impact of these two mechanisms greatly varies depending on how the dataset is constructed and on how it is split into train, dev and test.
Tasks Data-to-Text Generation, Text Generation
Published 2018-11-01
URL https://www.aclweb.org/anthology/W18-6543/
PDF https://www.aclweb.org/anthology/W18-6543
PWC https://paperswithcode.com/paper/handling-rare-items-in-data-to-text
Repo
Framework
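
Since the abstract contrasts delexicalisation with copying, a toy example of delexicalisation may help: rare input values are replaced by slot placeholders before generation and re-inserted afterwards. The slot-naming scheme below is an assumption for illustration, not the paper's exact setup.

```python
# Toy delexicalisation: rare values are replaced by slot placeholders before
# generation and re-inserted afterwards. The slot-naming scheme is an assumption.

def delexicalise(record):
    """record: dict of field -> value, e.g. {"name": "The Eagle", "food": "French"}."""
    slots = {f"__{field.upper()}__": value for field, value in record.items()}
    delex_record = {field: f"__{field.upper()}__" for field in record}
    return delex_record, slots

def relexicalise(text, slots):
    """Put the original values back into the generated text."""
    for placeholder, value in slots.items():
        text = text.replace(placeholder, value)
    return text

delex, slots = delexicalise({"name": "The Eagle", "food": "French"})
generated = "__NAME__ serves __FOOD__ food."   # what a generator might output
print(relexicalise(generated, slots))          # -> The Eagle serves French food.
```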

Korean TimeBank Including Relative Temporal Information

Title Korean TimeBank Including Relative Temporal Information
Authors Chae-Gyun Lim, Young-Seob Jeong, Ho-Jin Choi
Abstract
Tasks Question Answering, Temporal Information Extraction
Published 2018-05-01
URL https://www.aclweb.org/anthology/L18-1326/
PDF https://www.aclweb.org/anthology/L18-1326
PWC https://paperswithcode.com/paper/korean-timebank-including-relative-temporal
Repo
Framework

N-ary Relation Extraction using Graph-State LSTM

Title N-ary Relation Extraction using Graph-State LSTM
Authors Linfeng Song, Yue Zhang, Zhiguo Wang, Daniel Gildea
Abstract Cross-sentence n-ary relation extraction detects relations among n entities across multiple sentences. Typical methods formulate an input as a document graph, integrating various intra-sentential and inter-sentential dependencies. The current state-of-the-art method splits the input graph into two DAGs, adopting a DAG-structured LSTM for each. Though DAG LSTMs can model rich linguistic knowledge by leveraging graph edges, important information can be lost in the splitting procedure. We propose a graph-state LSTM model, which uses a parallel state to model each word, recurrently enriching state values via message passing. Compared with DAG LSTMs, our graph LSTM keeps the original graph structure and speeds up computation by allowing more parallelization. On a standard benchmark, our model shows the best result in the literature.
Tasks Relation Extraction
Published 2018-10-01
URL https://www.aclweb.org/anthology/D18-1246/
PDF https://www.aclweb.org/anthology/D18-1246
PWC https://paperswithcode.com/paper/n-ary-relation-extraction-using-graph-state-1
Repo
Framework
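
The key idea in the abstract is that every word keeps its own hidden state and all states are refreshed in parallel from their graph neighbours at each step. The sketch below shows only that message-passing shape; the gates and edge labels of the actual graph-state LSTM are omitted, and the weight shapes are assumptions.

```python
# Simplified message-passing view of the graph-state update: all word states are
# refreshed in parallel from their neighbours. LSTM gates and edge labels are omitted.

import numpy as np

def graph_state_steps(states, adjacency, weight, steps=5):
    """states: (n_words, hidden); adjacency: (n_words, n_words) 0/1 edge matrix;
    weight: (2 * hidden, hidden) projection applied after aggregation."""
    for _ in range(steps):
        messages = adjacency @ states                    # sum of neighbour states
        combined = np.concatenate([states, messages], axis=-1)
        states = np.tanh(combined @ weight)              # parallel update of every node
    return states
```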

SemEval-2018 Task 1: Affect in Tweets

Title SemEval-2018 Task 1: Affect in Tweets
Authors Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, Svetlana Kiritchenko
Abstract We present the SemEval-2018 Task 1: Affect in Tweets, which includes an array of subtasks on inferring the affectual state of a person from their tweet. For each task, we created labeled data from English, Arabic, and Spanish tweets. The individual tasks are: 1. emotion intensity regression, 2. emotion intensity ordinal classification, 3. valence (sentiment) regression, 4. valence ordinal classification, and 5. emotion classification. Seventy-five teams (about 200 team members) participated in the shared task. We summarize the methods, resources, and tools used by the participating teams, with a focus on the techniques and resources that are particularly useful. We also analyze systems for consistent bias towards a particular race or gender. The data is made freely available to further improve our understanding of how people convey emotions through language.
Tasks Emotion Classification
Published 2018-06-01
URL https://www.aclweb.org/anthology/S18-1001/
PDF https://www.aclweb.org/anthology/S18-1001
PWC https://paperswithcode.com/paper/semeval-2018-task-1-affect-in-tweets
Repo
Framework

Understanding Perceptual and Conceptual Fluency at a Large Scale

Title Understanding Perceptual and Conceptual Fluency at a Large Scale
Authors Shengli Hu, Ali Borji
Abstract We create a dataset of 543,758 logo designs spanning 39 industrial categories and 216 countries. We experiment and compare how different deep convolutional neural network (hereafter, DCNN) architectures, pretraining protocols, and weight initializations perform in predicting design memorability and likability. We propose and provide estimation methods based on training DCNNs to extract and evaluate two independent constructs for designs: perceptual distinctiveness ("perceptual fluency" metrics) and ambiguity in meaning ("conceptual fluency" metrics) of each logo. We provide evidence from causal inference that both constructs significantly affect memory for a logo design, consistent with cognitive elaboration theory. The effect on liking, however, is interactive, consistent with processing fluency (e.g., Lee and Labroo (2004), Landwehr et al. (2011)).
Tasks Causal Inference
Published 2018-09-01
URL http://openaccess.thecvf.com/content_ECCV_2018/html/Meredith_Hu_Understanding_Perceptual_and_ECCV_2018_paper.html
PDF http://openaccess.thecvf.com/content_ECCV_2018/papers/Meredith_Hu_Understanding_Perceptual_and_ECCV_2018_paper.pdf
PWC https://paperswithcode.com/paper/understanding-perceptual-and-conceptual
Repo
Framework

Incorporating Global Contexts into Sentence Embedding for Relational Extraction at the Paragraph Level with Distant Supervision

Title Incorporating Global Contexts into Sentence Embedding for Relational Extraction at the Paragraph Level with Distant Supervision
Authors Eun-kyung Kim, Key-Sun Choi
Abstract
Tasks Relation Extraction, Sentence Embedding
Published 2018-05-01
URL https://www.aclweb.org/anthology/L18-1563/
PDF https://www.aclweb.org/anthology/L18-1563
PWC https://paperswithcode.com/paper/incorporating-global-contexts-into-sentence
Repo
Framework

Reddit: A Gold Mine for Personality Prediction

Title Reddit: A Gold Mine for Personality Prediction
Authors Matej Gjurković, Jan Šnajder
Abstract Automated personality prediction from social media is gaining increasing attention in natural language processing and social sciences communities. However, due to high labeling costs and privacy issues, the few publicly available datasets are of limited size and low topic diversity. We address this problem by introducing a large-scale dataset derived from Reddit, a source so far overlooked for personality prediction. The dataset is labeled with Myers-Briggs Type Indicators (MBTI) and comes with a rich set of features for more than 9k users. We carry out a preliminary feature analysis, revealing marked differences between the MBTI dimensions and poles. Furthermore, we use the dataset to train and evaluate benchmark personality prediction models, achieving macro F1-scores between 67% and 82% on the individual dimensions and 82% accuracy for exact or one-off accurate type prediction. These results are encouraging and comparable with the reliability of standardized tests.
Tasks
Published 2018-06-01
URL https://www.aclweb.org/anthology/W18-1112/
PDF https://www.aclweb.org/anthology/W18-1112
PWC https://paperswithcode.com/paper/reddit-a-gold-mine-for-personality-prediction
Repo
Framework

Proceedings of Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML)

Title Proceedings of Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML)
Authors
Abstract
Tasks
Published 2018-07-01
URL https://www.aclweb.org/anthology/W18-3300/
PDF https://www.aclweb.org/anthology/W18-3300
PWC https://paperswithcode.com/paper/proceedings-of-grand-challenge-and-workshop
Repo
Framework

Multi-lingual Entity Discovery and Linking

Title Multi-lingual Entity Discovery and Linking
Authors Avi Sil, Heng Ji, Dan Roth, Silviu-Petru Cucerzan
Abstract The primary goals of this tutorial are to review the framework of cross-lingual EL and motivate it as a broad paradigm for the Information Extraction task. We will start by discussing traditional EL techniques and metrics and address questions about their adequacy across domains and languages. We will then present more recent approaches such as Neural EL, discuss the basic building blocks of a state-of-the-art neural EL system, and analyze some of the current results on English EL. We will then proceed to cross-lingual EL and discuss methods that work across languages. In particular, we will discuss and compare multiple methods that make use of multi-lingual word embeddings. We will also present EL methods that work for both name tagging and linking in very low-resource languages. Finally, we will discuss the uses of cross-lingual EL in a variety of applications such as search engines and commercial product-selling applications. In contrast to the 2014 EL tutorial, we will also focus on Entity Discovery, which is an essential component of EL.
Tasks Entity Linking, Knowledge Base Population, Machine Translation, Word Embeddings
Published 2018-07-01
URL https://www.aclweb.org/anthology/P18-5008/
PDF https://www.aclweb.org/anthology/P18-5008
PWC https://paperswithcode.com/paper/multi-lingual-entity-discovery-and-linking
Repo
Framework

AirDialogue: An Environment for Goal-Oriented Dialogue Research

Title AirDialogue: An Environment for Goal-Oriented Dialogue Research
Authors Wei Wei, Quoc Le, Andrew Dai, Jia Li
Abstract Recent progress in dialogue generation has inspired a number of studies on dialogue systems that are capable of accomplishing tasks through natural language interactions. A promising direction among these studies is the use of reinforcement learning techniques, such as self-play, for training dialogue agents. However, current datasets are limited in size, and the environment for training agents and evaluating progress is relatively unsophisticated. We present AirDialogue, a large dataset that contains 301,427 goal-oriented conversations. To collect this dataset, we create a context-generator which provides travel and flight restrictions. We then ask human annotators to play the role of a customer or an agent and interact with the goal of successfully booking a trip given the restrictions. Key to our environment is the ease of evaluating the success of the dialogue, which is achieved by using ground-truth states (e.g., the flight being booked) generated by the restrictions. Any dialogue agent that does not generate the correct states is considered to fail. Our experimental results indicate that state-of-the-art dialogue models can only achieve a score of 0.17 while humans can reach a score of 0.91, which suggests significant opportunities for future improvement.
Tasks Dialogue Generation, Text Generation
Published 2018-10-01
URL https://www.aclweb.org/anthology/D18-1419/
PDF https://www.aclweb.org/anthology/D18-1419
PWC https://paperswithcode.com/paper/airdialogue-an-environment-for-goal-oriented
Repo
Framework
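
The evaluation idea in the abstract is that a dialogue succeeds only if the agent ends in the ground-truth state implied by the restrictions. A toy version of that check might look as follows; the state fields and the all-or-nothing scoring are illustrative assumptions, not the official AirDialogue metric.

```python
# Toy state-based success check: a dialogue counts as successful only if the
# agent's final state matches the ground-truth state. Field names are illustrative.

def dialogue_success(agent_state, ground_truth_state):
    """Both states are dicts, e.g. {"action": "book", "flight": "1002"}."""
    return int(agent_state == ground_truth_state)

truth = {"action": "book", "flight": "1002", "name": "Jane Doe"}
print(dialogue_success({"action": "book", "flight": "1002", "name": "Jane Doe"}, truth))  # 1
print(dialogue_success({"action": "no_flight"}, truth))                                   # 0
```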

Semantics as a Foreign Language

Title Semantics as a Foreign Language
Authors Gabriel Stanovsky, Ido Dagan
Abstract We propose a novel approach to semantic dependency parsing (SDP) by casting the task as an instance of multi-lingual machine translation, where each semantic representation is a different foreign dialect. To that end, we first generalize syntactic linearization techniques to account for the richer semantic dependency graph structure. We then design a neural sequence-to-sequence framework which can effectively recover our graph linearizations, performing almost on par with the previous SDP state of the art while requiring fewer parallel training annotations. Beyond SDP, our linearization technique opens the door to the integration of graph-based semantic representations as features in neural models for downstream applications.
Tasks Dependency Parsing, Machine Translation, Semantic Dependency Parsing, Semantic Parsing, Structured Prediction
Published 2018-10-01
URL https://www.aclweb.org/anthology/D18-1263/
PDF https://www.aclweb.org/anthology/D18-1263
PWC https://paperswithcode.com/paper/semantics-as-a-foreign-language
Repo
Framework
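
To make the linearization idea concrete, the toy function below flattens a small dependency graph into a bracketed token sequence that a standard sequence-to-sequence model could be trained to produce. This bracket scheme is an illustrative assumption, not the paper's exact linearization.

```python
# Toy linearisation of a dependency graph into a bracketed token sequence that a
# sequence-to-sequence model could emit. Not the paper's exact scheme.

def linearise(node, edges):
    """edges: dict mapping a head to a list of (label, child) pairs."""
    parts = [node]
    for label, child in edges.get(node, []):
        parts.append(f"({label} {linearise(child, edges)})")
    return " ".join(parts)

edges = {"likes": [("ARG1", "cat"), ("ARG2", "fish")]}
print(linearise("likes", edges))  # -> likes (ARG1 cat) (ARG2 fish)
```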

German and French Neural Supertagging Experiments for LTAG Parsing

Title German and French Neural Supertagging Experiments for LTAG Parsing
Authors Tatiana Bladier, Andreas van Cranenburgh, Younes Samih, Laura Kallmeyer
Abstract We present ongoing work on data-driven parsing of German and French with Lexicalized Tree Adjoining Grammars. We use a supertagging approach combined with deep learning. We show the challenges of extracting LTAG supertags from the French Treebank, introduce the use of left- and right-sister-adjunction, present a neural architecture for the supertagger, and report experiments of n-best supertagging for French and German.
Tasks Dependency Parsing, Semantic Parsing, Semantic Role Labeling
Published 2018-07-01
URL https://www.aclweb.org/anthology/P18-3009/
PDF https://www.aclweb.org/anthology/P18-3009
PWC https://paperswithcode.com/paper/german-and-french-neural-supertagging
Repo
Framework

CYUT-III Team Chinese Grammatical Error Diagnosis System Report in NLPTEA-2018 CGED Shared Task

Title CYUT-III Team Chinese Grammatical Error Diagnosis System Report in NLPTEA-2018 CGED Shared Task
Authors Shih-Hung Wu, Jun-Wei Wang, Liang-Pu Chen, Ping-Che Yang
Abstract This paper reports how we built a Chinese Grammatical Error Diagnosis system for the NLPTEA-2018 CGED shared task. In 2018, we submitted three runs with three different approaches. The first is a pattern-based approach using frequent error pattern matching. The second is a sequential labelling approach using conditional random fields (CRF). The third is a rewriting approach using a sequence-to-sequence (seq2seq) model. The three approaches have different properties and aim to optimize different performance metrics, and the formal run results show the differences we expected.
Tasks Language Modelling
Published 2018-07-01
URL https://www.aclweb.org/anthology/W18-3729/
PDF https://www.aclweb.org/anthology/W18-3729
PWC https://paperswithcode.com/paper/cyut-iii-team-chinese-grammatical-error
Repo
Framework
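
For the pattern-based run described in the abstract, a minimal sketch is to scan a sentence against a table of frequent error patterns and emit the matching spans with their error tags. The pattern table and tags below are invented for illustration and are not the team's actual resource.

```python
# Toy version of the pattern-based run: scan a sentence against a small table of
# frequent error patterns and report the matching spans. The table and the error
# tags here are illustrative placeholders only.

ERROR_PATTERNS = {
    "的的": "R",   # illustrative pattern and tag
    "在关于": "R",  # illustrative pattern and tag
}

def diagnose(sentence):
    hits = []
    for pattern, tag in ERROR_PATTERNS.items():
        start = sentence.find(pattern)
        if start != -1:
            hits.append((start, start + len(pattern), tag))
    return hits

print(diagnose("我在关于这个问题想了很久"))  # -> [(1, 4, 'R')]
```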