January 24, 2020

2497 words 12 mins read

Paper Group NANR 202


A Study on Game Review Summarization

Title A Study on Game Review Summarization
Authors George Panagiotopoulos, George Giannakopoulos, Antonios Liapis
Abstract Game reviews have constituted a unique means of interaction between players and companies for many years. The dynamics of online publishing have significantly grown the number of comments per game, giving rise to very interesting communities. This growth has, in turn, made it difficult to deal with the volume and varying quality of the comments as a source of information. This work studies whether and how game reviews can be summarized, based on notions from aspect-based summarization and sentiment analysis. The work provides a suggested analysis pipeline and offers preliminary findings on whether aspects detected in a set of comments can be consistently evaluated by human users.
Tasks Sentiment Analysis
Published 2019-09-01
URL https://www.aclweb.org/anthology/W19-8906/
PDF https://www.aclweb.org/anthology/W19-8906
PWC https://paperswithcode.com/paper/a-study-on-game-review-summarization
Repo
Framework
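
As a concrete illustration of the kind of pipeline the abstract sketches, the toy example below groups review sentences by aspect keywords and aggregates a crude lexicon-based sentiment per aspect. The aspect names, keyword sets, sentiment lexicons, and scoring rule are all illustrative assumptions, not taken from the paper.

```python
from collections import defaultdict

# Illustrative aspect keywords and sentiment lexicons (not from the paper).
ASPECT_KEYWORDS = {
    "graphics": {"graphics", "visuals", "art"},
    "gameplay": {"gameplay", "controls", "combat"},
    "story": {"story", "plot", "writing"},
}
POSITIVE = {"great", "beautiful", "fun"}
NEGATIVE = {"boring", "clunky", "ugly"}

def summarize(reviews):
    """Average a crude sentiment score over the reviews mentioning each aspect."""
    scores = defaultdict(list)
    for review in reviews:
        tokens = set(review.lower().split())
        sentiment = len(tokens & POSITIVE) - len(tokens & NEGATIVE)
        for aspect, keywords in ASPECT_KEYWORDS.items():
            if tokens & keywords:  # the review mentions this aspect
                scores[aspect].append(sentiment)
    return {aspect: sum(s) / len(s) for aspect, s in scores.items()}

reviews = [
    "great graphics but boring story",
    "the combat is fun",
    "ugly visuals",
]
print(summarize(reviews))
```

A real system would replace the keyword matching with learned aspect detection and the lexicon with a sentiment classifier; the per-aspect aggregation step is the part the abstract proposes evaluating with human users.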

Proceedings of the IWCS Workshop Vector Semantics for Discourse and Dialogue

Title Proceedings of the IWCS Workshop Vector Semantics for Discourse and Dialogue
Authors
Abstract
Tasks
Published 2019-05-01
URL https://www.aclweb.org/anthology/W19-0900/
PDF https://www.aclweb.org/anthology/W19-0900
PWC https://paperswithcode.com/paper/proceedings-of-the-iwcs-workshop-vector
Repo
Framework

The Expressive Power of Gated Recurrent Units as a Continuous Dynamical System

Title The Expressive Power of Gated Recurrent Units as a Continuous Dynamical System
Authors Ian D. Jordan, Piotr Aleksander Sokol, Il Memming Park
Abstract Gated recurrent units (GRUs) were inspired by the common gated recurrent unit, long short-term memory (LSTM), as a means of capturing temporal structure with a less complex memory unit architecture. Despite their incredible success in tasks such as natural and artificial language processing, speech, video, and polyphonic music, very little is understood about the specific dynamic features representable in a GRU network. As a result, it is difficult to know a priori how well a GRU-RNN will perform on a given data set. In this paper, we develop a new theoretical framework to analyze one- and two-dimensional GRUs as a continuous dynamical system, and classify the dynamical features obtainable with such a system. We found a rich repertoire that includes stable limit cycles over time (nonlinear oscillations), multi-stable state transitions with various topologies, and homoclinic orbits. In addition, we show that any finite dimensional GRU cannot precisely replicate the dynamics of a ring attractor, or more generally, any continuous attractor, and is limited to finitely many isolated fixed points in theory. These findings were then experimentally verified in two dimensions by means of time series prediction.
Tasks Time Series, Time Series Prediction
Published 2019-05-01
URL https://openreview.net/forum?id=H1eiZnAqKm
PDF https://openreview.net/pdf?id=H1eiZnAqKm
PWC https://paperswithcode.com/paper/the-expressive-power-of-gated-recurrent-units
Repo
Framework
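
The continuous-dynamics view can be sampled numerically. The sketch below evaluates a one-dimensional continuous-time relaxation of the GRU update and locates its fixed points by sign changes of the flow; the gate weights are illustrative constants chosen to produce multistability, not values or the exact formulation from the paper.

```python
import numpy as np

# A 1-D continuous-time relaxation of the GRU update (illustrative weights):
# dh/dt = (1 - z(h)) * (tanh(Uh * r(h) * h) - h),
# with update gate z and reset gate r given by sigmoids of h.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_flow(h, Uz=-2.0, Ur=1.0, Uh=3.0):
    z = sigmoid(Uz * h)           # update gate: scales the speed of change
    r = sigmoid(Ur * h)           # reset gate: scales the recurrent input
    h_cand = np.tanh(Uh * r * h)  # candidate state
    return (1.0 - z) * (h_cand - h)

# Locate fixed points (dh/dt = 0) via sign changes of the flow on a grid.
hs = np.linspace(-2.0, 2.0, 4000)  # grid chosen so 0 is not a grid point
v = gru_flow(hs)
crossings = hs[:-1][np.sign(v[:-1]) != np.sign(v[1:])]
print("approximate fixed points:", np.round(crossings, 2))
```

With these particular weights the flow has three isolated fixed points (two stable, one unstable near the origin), a bistable configuration of the multi-stable kind the abstract describes; the paper's claim is that only finitely many such isolated fixed points are possible.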

RENAS: Reinforced Evolutionary Neural Architecture Search

Title RENAS: Reinforced Evolutionary Neural Architecture Search
Authors Yukang Chen, Gaofeng Meng, Qian Zhang, Shiming Xiang, Chang Huang, Lisen Mu, Xinggang Wang
Abstract Neural Architecture Search (NAS) is an important yet challenging task in network design due to its high computational consumption. To address this issue, we propose the Reinforced Evolutionary Neural Architecture Search (RENAS), which is an evolutionary method with reinforced mutation for NAS. Our method integrates reinforced mutation into an evolution algorithm for neural architecture exploration, in which a mutation controller is introduced to learn the effects of slight modifications and make mutation actions. The reinforced mutation controller guides the model population to evolve efficiently. Furthermore, as child models can inherit parameters from their parents during evolution, our method requires very limited computational resources. In experiments, we conduct the proposed search method on CIFAR-10 and obtain a powerful network architecture, RENASNet. This architecture achieves a competitive result on CIFAR-10. The explored network architecture is transferable to ImageNet and achieves a new state-of-the-art accuracy, i.e., 75.7% top-1 accuracy with 5.36M parameters on mobile ImageNet. We further test its performance on semantic segmentation with DeepLabv3 on the PASCAL VOC. RENASNet outperforms MobileNet-v1, MobileNet-v2 and NASNet. It achieves 75.83% mIOU without being pretrained on COCO.
Tasks Neural Architecture Search, Semantic Segmentation
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Chen_RENAS_Reinforced_Evolutionary_Neural_Architecture_Search_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Chen_RENAS_Reinforced_Evolutionary_Neural_Architecture_Search_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/renas-reinforced-evolutionary-neural
Repo
Framework
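
The core loop the abstract describes — evolution whose mutations are chosen by a learned, reward-driven controller — can be caricatured in a few lines. Below, the "controller" is a simple bandit over (position, op) mutation actions updated with the fitness change as reward, and the fitness function is a stand-in pattern-matching score; the paper's actual controller is a learned network over architecture encodings, and its fitness is validation accuracy on CIFAR-10.

```python
import random

# Toy search space: architectures are length-6 lists of op indices.
OPS = list(range(4))
TARGET = [3, 1, 2, 0, 3, 1]  # stand-in "good architecture"

def fitness(arch):
    # Illustrative surrogate for validation accuracy.
    return sum(a == t for a, t in zip(arch, TARGET))

def evolve(generations=300, pop_size=20, eps=0.2, lr=0.1, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice(OPS) for _ in TARGET] for _ in range(pop_size)]
    # Bandit-style mutation controller: one preference per (position, op).
    prefs = {(i, op): 0.0 for i in range(len(TARGET)) for op in OPS}
    for _ in range(generations):
        parent = max(rng.sample(pop, 3), key=fitness)  # tournament selection
        if rng.random() < eps:                         # explore a random mutation
            action = rng.choice(list(prefs))
        else:                                          # exploit learned preferences
            action = max(prefs, key=lambda a: (prefs[a], rng.random()))
        child = list(parent)
        child[action[0]] = action[1]
        reward = fitness(child) - fitness(parent)      # did the mutation help?
        prefs[action] += lr * (reward - prefs[action])
        pop.remove(min(pop, key=fitness))              # replace the worst member
        pop.append(child)
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

The "child inherits from parent" economy the abstract mentions corresponds here to `child = list(parent)`: in the real method, inheriting trained weights is what keeps the search cheap.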

Learning to Describe Unknown Phrases with Local and Global Contexts

Title Learning to Describe Unknown Phrases with Local and Global Contexts
Authors Shonosuke Ishiwatari, Hiroaki Hayashi, Naoki Yoshinaga, Graham Neubig, Shoetsu Sato, Masashi Toyoda, Masaru Kitsuregawa
Abstract When reading a text, it is common to become stuck on unfamiliar words and phrases, such as polysemous words with novel senses, rarely used idioms, internet slang, or emerging entities. If we humans cannot figure out the meaning of those expressions from the immediate local context, we consult dictionaries for definitions or search documents or the web to find other global context to help in interpretation. Can machines help us do this work? Which type of context is more important for machines to solve the problem? To answer these questions, we undertake a task of describing a given phrase in natural language based on its local and global contexts. To solve this task, we propose a neural description model that consists of two context encoders and a description decoder. In contrast to the existing methods for non-standard English explanation [Ni+ 2017] and definition generation [Noraset+ 2017; Gadetsky+ 2018], our model appropriately takes important clues from both local and global contexts. Experimental results on three existing datasets (including WordNet, Oxford and Urban Dictionaries) and a dataset newly created from Wikipedia demonstrate the effectiveness of our method over previous work.
Tasks
Published 2019-06-01
URL https://www.aclweb.org/anthology/N19-1350/
PDF https://www.aclweb.org/anthology/N19-1350
PWC https://paperswithcode.com/paper/learning-to-describe-unknown-phrases-with
Repo
Framework

The World in My Mind: Visual Dialog with Adversarial Multi-modal Feature Encoding

Title The World in My Mind: Visual Dialog with Adversarial Multi-modal Feature Encoding
Authors Yiqun Yao, Jiaming Xu, Bo Xu
Abstract Visual Dialog is a multi-modal task that requires a model to participate in a multi-turn human dialog grounded on an image, and generate correct, human-like responses. In this paper, we propose a novel Adversarial Multi-modal Feature Encoding (AMFE) framework for effective and robust auxiliary training of visual dialog systems. AMFE can force the language-encoding part of a model to generate hidden states in a distribution closely related to the distribution of real-world images, resulting in language features containing general knowledge from both modalities by nature, which can help generate both more correct and more general responses with reasonably low time cost. Experimental results show that AMFE can steadily bring performance gains to different models on different scales of data. Our method outperforms both the supervised learning baselines and other fine-tuning methods, achieving state-of-the-art results on most metrics of VisDial v0.5/v0.9 generative tasks.
Tasks Visual Dialog
Published 2019-06-01
URL https://www.aclweb.org/anthology/N19-1266/
PDF https://www.aclweb.org/anthology/N19-1266
PWC https://paperswithcode.com/paper/the-world-in-my-mind-visual-dialog-with
Repo
Framework

Learnability and Overgeneration in Computational Syntax

Title Learnability and Overgeneration in Computational Syntax
Authors Yiding Hao
Abstract
Tasks Language Acquisition
Published 2019-01-01
URL https://www.aclweb.org/anthology/W19-0113/
PDF https://www.aclweb.org/anthology/W19-0113
PWC https://paperswithcode.com/paper/learnability-and-overgeneration-in
Repo
Framework

The Relation Between Infrastructure Quality and Government Effectiveness in Egypt.

Title The Relation Between Infrastructure Quality and Government Effectiveness in Egypt.
Authors Mustafa Elnemr
Abstract The article examines the relationship between infrastructure quality and government effectiveness in Egypt. The hypothesis is that public-private partnership can help Egypt increase its infrastructure efficiency and lower the burden on the government budget. The paper concludes that Egypt has a considerable opportunity to finance its infrastructure investment gap through private investment. Public-private partnership could be adopted, as it provides the fastest gains in efficiency.
Tasks Abstractive Text Summarization
Published 2019-07-30
URL https://ijbassnet.com/publication/251/details
PDF https://ijbassnet.com/storage/app/publications/5d4018e55e1df11564481765.pdf
PWC https://paperswithcode.com/paper/the-relation-between-infrastructure-quality
Repo
Framework

Neural-based Chinese Idiom Recommendation for Enhancing Elegance in Essay Writing

Title Neural-based Chinese Idiom Recommendation for Enhancing Elegance in Essay Writing
Authors Yuanchao Liu, Bo Pang, Bingquan Liu
Abstract Although the proper use of idioms can enhance the elegance of writing, the active use of various expressions is a challenge because remembering idioms is difficult. In this study, we address the problem of idiom recommendation by leveraging a neural machine translation framework, in which we suppose that idioms are written with one pseudo target language. Two types of real-life datasets are collected to support this study. Experimental results show that the proposed approach achieves promising performance compared with other baseline methods.
Tasks Machine Translation
Published 2019-07-01
URL https://www.aclweb.org/anthology/P19-1552/
PDF https://www.aclweb.org/anthology/P19-1552
PWC https://paperswithcode.com/paper/neural-based-chinese-idiom-recommendation-for
Repo
Framework

Generating Quantified Descriptions of Abstract Visual Scenes

Title Generating Quantified Descriptions of Abstract Visual Scenes
Authors Guanyi Chen, Kees van Deemter, Chenghua Lin
Abstract Quantified expressions have always taken up a central position in formal theories of meaning and language use. Yet quantified expressions have so far attracted far less attention from the Natural Language Generation community than, for example, referring expressions. In an attempt to start redressing the balance, we investigate a recently developed corpus in which quantified expressions play a crucial role; the corpus is the result of a carefully controlled elicitation experiment, in which human participants were asked to describe visually presented scenes. Informed by an analysis of this corpus, we propose algorithms that produce computer-generated descriptions of a wider class of visual scenes, and we evaluate the descriptions generated by these algorithms in terms of their correctness, completeness, and human-likeness. We discuss what this exercise can teach us about the nature of quantification and about the challenges posed by the generation of quantified expressions.
Tasks Text Generation
Published 2019-10-01
URL https://www.aclweb.org/anthology/W19-8667/
PDF https://www.aclweb.org/anthology/W19-8667
PWC https://paperswithcode.com/paper/generating-quantified-descriptions-of
Repo
Framework

Cross-Domain NER using Cross-Domain Language Modeling

Title Cross-Domain NER using Cross-Domain Language Modeling
Authors Chen Jia, Xiaobo Liang, Yue Zhang
Abstract Due to limited labeled resources, cross-domain named entity recognition (NER) has been a challenging task. Most existing work considers a supervised setting, making use of labeled data for both the source and target domains. A disadvantage of such methods is that they cannot train for domains without NER data. To address this issue, we consider using a cross-domain LM as a bridge across domains for NER domain adaptation, performing cross-domain and cross-task knowledge transfer by designing a novel parameter generation network. Results show that our method can effectively extract domain differences from cross-domain LM contrast, allowing unsupervised domain adaptation while also giving state-of-the-art results among supervised domain adaptation methods.
Tasks Cross-Domain Named Entity Recognition, Domain Adaptation, Language Modelling, Named Entity Recognition, Transfer Learning, Unsupervised Domain Adaptation
Published 2019-07-01
URL https://www.aclweb.org/anthology/P19-1236/
PDF https://www.aclweb.org/anthology/P19-1236
PWC https://paperswithcode.com/paper/cross-domain-ner-using-cross-domain-language
Repo
Framework
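
The parameter-generation idea — producing a task model's weights from domain and task embeddings — can be sketched minimally. The shapes, the linear generator, and the embedding setup below are illustrative assumptions, not the paper's network (which generates LSTM parameters).

```python
import numpy as np

rng = np.random.default_rng(1)
d_emb, t_emb = 8, 8      # domain / task embedding sizes (illustrative)
in_dim, out_dim = 16, 4  # shape of one generated task-model layer

# The generator itself: here just a linear map from the concatenated
# (domain, task) code to a flattened weight matrix.
G = rng.normal(scale=0.1, size=(d_emb + t_emb, in_dim * out_dim))

def generate_params(domain_vec, task_vec):
    """Produce layer weights conditioned on a (domain, task) pair."""
    code = np.concatenate([domain_vec, task_vec])
    return (code @ G).reshape(in_dim, out_dim)

news_domain = rng.normal(size=d_emb)
ner_task = rng.normal(size=t_emb)
W_news_ner = generate_params(news_domain, ner_task)
print(W_news_ner.shape)
```

The point of the construction is that swapping only the domain embedding yields different weights for the same task, which is what lets knowledge transfer across domains without target-domain NER labels.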

Robust to Noise Models in Natural Language Processing Tasks

Title Robust to Noise Models in Natural Language Processing Tasks
Authors Valentin Malykh
Abstract A person in modern life is surrounded by a great deal of noisy text. The traditional approach is to use spelling correction, yet existing solutions are far from perfect. We propose a noise-robust word embeddings model, which outperforms existing commonly used models such as fastText and word2vec on different tasks. In addition, we investigate the noise robustness of current models on different natural language processing tasks. We propose extensions of modern models for three downstream tasks, i.e. text classification, named entity recognition and aspect extraction, which show improved noise robustness over existing solutions.
Tasks Aspect Extraction, Named Entity Recognition, Spelling Correction, Text Classification, Word Embeddings
Published 2019-07-01
URL https://www.aclweb.org/anthology/P19-2002/
PDF https://www.aclweb.org/anthology/P19-2002
PWC https://paperswithcode.com/paper/robust-to-noise-models-in-natural-language
Repo
Framework
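
A sketch of the kind of character-level corruption used to probe noise robustness: random deletions, substitutions, and duplications at a controllable rate. The specific noise operations and rates are illustrative; the paper studies naturally noisy user text rather than this exact synthetic process.

```python
import random

def add_noise(text, p=0.1, seed=42):
    """Corrupt a string with character-level noise at rate p."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        r = rng.random()
        if r < p / 3:        # deletion: drop the character
            continue
        if r < 2 * p / 3:    # substitution: replace with a random letter
            out.append(rng.choice("abcdefghijklmnopqrstuvwxyz"))
            continue
        if r < p:            # duplication: "fat-finger" repeat
            out.append(ch)
        out.append(ch)
    return "".join(out)

clean = "robust word embeddings should survive noisy input"
print(add_noise(clean))
```

Evaluating a model on `add_noise`-corrupted inputs at increasing `p`, versus the clean text, gives a simple robustness curve of the kind such comparisons against fastText and word2vec rely on.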

Identification of Good and Bad News on Twitter

Title Identification of Good and Bad News on Twitter
Authors Piush Aggarwal, Ahmet Aker
Abstract Social media plays a great role in news dissemination, which includes good and bad news. However, studies show that news, in general, has a significant impact on our mental state, and that this influence is stronger for bad news. An ideal situation would be to have a tool that can help filter out the type of news we do not want to consume. In this paper, we provide the basis for such a tool. In our work, we focus on Twitter. We release a manually annotated dataset containing 6,853 tweets from 5 different topical categories. Each tweet is annotated with good and bad labels. We also investigate various machine learning systems and features and evaluate their performance on the newly generated dataset. We also perform a comparative analysis with sentiments, showing that sentiment alone is not enough to distinguish between good and bad news.
Tasks
Published 2019-09-01
URL https://www.aclweb.org/anthology/R19-1002/
PDF https://www.aclweb.org/anthology/R19-1002
PWC https://paperswithcode.com/paper/identification-of-good-and-bad-news-on
Repo
Framework

Self-Attention Enhanced CNNs and Collaborative Curriculum Learning for Distantly Supervised Relation Extraction

Title Self-Attention Enhanced CNNs and Collaborative Curriculum Learning for Distantly Supervised Relation Extraction
Authors Yuyun Huang, Jinhua Du
Abstract Distance supervision is widely used in relation extraction tasks, particularly when large-scale manual annotations are virtually impossible to conduct. Although Distantly Supervised Relation Extraction (DSRE) benefits from automatic labelling, it suffers from serious mislabelling issues, i.e. some or all of the instances for an entity pair (head and tail entities) do not express the labelled relation. In this paper, we propose a novel model that employs a collaborative curriculum learning framework to reduce the effects of mislabelled data. Specifically, we firstly propose an internal self-attention mechanism between the convolution operations in convolutional neural networks (CNNs) to learn a better sentence representation from the noisy inputs. Then we define two sentence selection models as two relation extractors in order to collaboratively learn and regularise each other under a curriculum scheme to alleviate noisy effects, where the curriculum could be constructed by conflicts or small loss. Finally, experiments are conducted on a widely-used public dataset and the results indicate that the proposed model significantly outperforms baselines including the state-of-the-art in terms of P@N and PR curve metrics, thus evidencing its capability of reducing noisy effects for DSRE.
Tasks Relation Extraction
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-1037/
PDF https://www.aclweb.org/anthology/D19-1037
PWC https://paperswithcode.com/paper/self-attention-enhanced-cnns-and
Repo
Framework
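
The small-loss curriculum the abstract mentions can be illustrated in isolation: two learners each select the lowest-loss fraction of a batch (treated as "probably clean") and hand those examples to the other for its update. The per-instance losses below are simulated with clean examples centered at low loss and mislabelled ones at high loss; real extractors would compute these from the data, and the paper also considers a conflict-based curriculum not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

def small_loss_selection(losses, keep_ratio):
    """Return indices of the keep_ratio fraction with the smallest losses."""
    k = max(1, int(len(losses) * keep_ratio))
    return np.argsort(losses)[:k]

# Simulated batch: ~30% of instances are mislabelled by distant supervision;
# mislabelled instances tend to have higher loss under a partly trained model.
is_noisy = rng.random(64) < 0.3
losses_a = rng.normal(loc=np.where(is_noisy, 3.0, 1.0), scale=0.3)
losses_b = rng.normal(loc=np.where(is_noisy, 3.0, 1.0), scale=0.3)

# Collaborative selection: each extractor picks examples for its peer.
feed_to_b = small_loss_selection(losses_a, keep_ratio=0.6)
feed_to_a = small_loss_selection(losses_b, keep_ratio=0.6)
print("noise rate among examples fed to B:", is_noisy[feed_to_b].mean())
```

Because each model filters for the other, an example must look clean to one learner before the other trains on it, which is what regularises both against the mislabelling.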

"My Way of Telling a Story": Persona based Grounded Story Generation

Title "My Way of Telling a Story": Persona based Grounded Story Generation
Authors Khyathi Chandu, Shrimai Prabhumoye, Ruslan Salakhutdinov, Alan W Black
Abstract Visual storytelling is the task of generating stories based on a sequence of images. Inspired by the recent works in neural generation focusing on controlling the form of text, this paper explores the idea of generating these stories in different personas. However, one of the main challenges of performing this task is the lack of a dataset of visual stories in different personas. Having said that, there are independent datasets for both visual storytelling and annotated sentences for various persona. In this paper we describe an approach to overcome this by getting labelled persona data from a different task and leveraging those annotations to perform persona based story generation. We inspect various ways of incorporating personality in both the encoder and the decoder representations to steer the generation in the target direction. To this end, we propose five models which are incremental extensions to the baseline model to perform the task at hand. In our experiments we use five different personas to guide the generation process. We find that the models based on our hypotheses perform better at capturing words while generating stories in the target persona.
Tasks Visual Storytelling
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-3402/
PDF https://www.aclweb.org/anthology/W19-3402
PWC https://paperswithcode.com/paper/my-way-of-telling-a-story-persona-based-1
Repo
Framework