January 25, 2020

2672 words 13 mins read

Paper Group NANR 69

Task-GAN for Improved GAN based Image Restoration. Composing Noun Phrase Vector Representations. MICE. DS at SemEval-2019 Task 9: From Suggestion Mining with neural networks to adversarial cross-domain classification. A Neural Citation Count Prediction Model based on Peer Review Text. Meaning Representation of Null Instantiated Semantic Roles in Fr …

Task-GAN for Improved GAN based Image Restoration

Title Task-GAN for Improved GAN based Image Restoration
Authors Jiahong Ouyang, Guanhua Wang, Enhao Gong, Kevin Chen, John Pauly and Greg Zaharchuk
Abstract Deep Learning (DL) algorithms based on Generative Adversarial Networks (GANs) have demonstrated great potential in computer vision tasks such as image restoration. Despite the rapid development of image restoration algorithms using DL and GANs, image restoration for specific scenarios, such as medical image enhancement and super-resolved identity recognition, still faces challenges. How can we ensure visually realistic restoration while avoiding hallucination or mode collapse? How can we make sure the visually plausible results do not contain hallucinated features that jeopardize downstream tasks such as pathology identification and subject identification? Here we propose to resolve these challenges by coupling the GAN-based image restoration framework with another task-specific network. With medical imaging restoration as an example, the proposed model conducts an additional pathology recognition/classification task to ensure the preservation of detailed structures that are important to this task. Validated on multiple medical datasets, we demonstrate that the proposed method leads to improved deep-learning-based image restoration while preserving the detailed structure and diagnostic features. Additionally, the trained task network shows potential to achieve super-human-level performance in identifying pathology and diagnosis. Further validation on super-resolved identity recognition tasks also shows that the proposed method can be generalized to diverse image restoration tasks.
Tasks Image Enhancement, Image Restoration
Published 2019-05-01
URL https://openreview.net/forum?id=Hylnis0qKX
PDF https://openreview.net/pdf?id=Hylnis0qKX
PWC https://paperswithcode.com/paper/task-gan-for-improved-gan-based-image
Repo
Framework
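
The abstract above describes coupling a GAN-based restoration loss with an auxiliary task network that penalizes hallucinated, diagnostically relevant structure. As a rough illustration only, here is a minimal PyTorch-style sketch of such a joint objective; the module definitions, loss weights, and variable names are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a Task-GAN style joint objective (not the authors' code).
import torch
import torch.nn.functional as F

def training_step(generator, discriminator, task_net,
                  g_opt, d_opt, degraded, clean, labels,
                  lambda_adv=0.1, lambda_task=0.1):
    # --- Update discriminator: real (clean) vs. restored images ---
    d_opt.zero_grad()
    restored = generator(degraded).detach()
    real_logits = discriminator(clean)
    fake_logits = discriminator(restored)
    d_loss = F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits)) + \
             F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
    d_loss.backward()
    d_opt.step()

    # --- Update generator: reconstruction + adversarial + task losses ---
    g_opt.zero_grad()
    restored = generator(degraded)
    rec_loss = F.l1_loss(restored, clean)                      # pixel fidelity
    adv_logits = discriminator(restored)
    adv_loss = F.binary_cross_entropy_with_logits(
        adv_logits, torch.ones_like(adv_logits))               # fool the discriminator
    task_loss = F.cross_entropy(task_net(restored), labels)    # preserve diagnostic features
    g_loss = rec_loss + lambda_adv * adv_loss + lambda_task * task_loss
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```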

Composing Noun Phrase Vector Representations

Title Composing Noun Phrase Vector Representations
Authors Aikaterini-Lida Kalouli, Valeria de Paiva, Richard Crouch
Abstract Vector representations of words have seen increasing success over the past years in a variety of NLP tasks. While there seems to be a consensus about the usefulness of word embeddings and how to learn them, it is still unclear which representations can capture the meaning of phrases or even whole sentences. Recent work has shown that simple operations outperform more complex deep architectures. In this work, we propose two novel constraints for computing noun phrase vector representations. First, we propose that the semantic, and not the syntactic, contribution of each component of a noun phrase should be considered, so that the resulting composed vectors express more of the phrase meaning. Second, the composition process of the two phrase vectors should apply suitable dimension selection in a way that specific semantic features captured by the phrase's meaning become more salient. Our proposed methods are compared to 11 other approaches, including popular baselines and a neural net architecture, and are evaluated across 6 tasks and 2 datasets. Our results show that these constraints lead to more expressive phrase representations and can be applied to other state-of-the-art methods to improve their performance.
Tasks Word Embeddings
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-4311/
PDF https://www.aclweb.org/anthology/W19-4311
PWC https://paperswithcode.com/paper/composing-noun-phrase-vector-representations
Repo
Framework
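
The abstract above proposes weighting the semantic contribution of each component and selecting salient dimensions during composition. The toy NumPy sketch below illustrates that general idea under assumed weights and a top-k salience rule; it is not the paper's exact constraints.

```python
# Illustrative sketch of weighted composition with dimension selection for a
# two-word noun phrase (e.g. "olive oil"); the weights and top-k rule are
# assumptions, not the constraints proposed in the paper.
import numpy as np

def compose_np(modifier_vec, head_vec, modifier_weight=0.4, head_weight=0.6, keep_ratio=0.5):
    # Weighted addition: emphasise the semantic (not syntactic) contribution
    # of each component of the noun phrase.
    composed = modifier_weight * modifier_vec + head_weight * head_vec

    # Dimension selection: keep only the most salient dimensions, here those
    # where both components agree most strongly, and zero out the rest.
    salience = np.abs(modifier_vec * head_vec)
    k = int(len(composed) * keep_ratio)
    keep = np.argsort(salience)[-k:]
    mask = np.zeros_like(composed)
    mask[keep] = 1.0
    return composed * mask

# Example with random 300-d embeddings standing in for real word vectors.
rng = np.random.default_rng(0)
phrase_vec = compose_np(rng.standard_normal(300), rng.standard_normal(300))
```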

MICE

Title MICE
Authors Joachim Van den Bogaert, Heidi Depraetere, Tom Vanallemeersch, Frederic Everaert, Koen Van Winckel, Katri Tammsaar, Ingmar Vali, Tambet Artma, Piret Saartee, Laura Katariina Teder, Artūrs Vasiļevskis, Valters Sics, Johan Haelterman, David Bienfait
Abstract
Tasks
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-6720/
PDF https://www.aclweb.org/anthology/W19-6720
PWC https://paperswithcode.com/paper/mice
Repo
Framework

DS at SemEval-2019 Task 9: From Suggestion Mining with neural networks to adversarial cross-domain classification

Title DS at SemEval-2019 Task 9: From Suggestion Mining with neural networks to adversarial cross-domain classification
Authors Tobias Cabanski
Abstract Suggestion mining is the task of classifying sentences into suggestions or non-suggestions. SemEval-2019 Task 9 sets the task of mining suggestions from online texts. For each of the two subtasks, the classification has to be applied to a different domain. Subtask A addresses the domain of posts in online suggestion forums and comes with a set of training examples that is used for supervised training. A combination of LSTM and CNN networks is constructed to create a model which uses BERT word embeddings as input features. For subtask B, the domain of hotel reviews is considered. In contrast to subtask A, no labeled data for supervised training is provided, so additional unlabeled data is used to apply cross-domain classification. This is done by adversarial training of the three model parts: the label classifier, the domain classifier, and the shared feature representation. For subtask A, the developed model achieves an F1-score of 0.7273, which is in the top ten of the leaderboard. The F1-score for subtask B is 0.8187, ranking in the top five of the submissions for that subtask.
Tasks Word Embeddings
Published 2019-06-01
URL https://www.aclweb.org/anthology/S19-2209/
PDF https://www.aclweb.org/anthology/S19-2209
PWC https://paperswithcode.com/paper/ds-at-semeval-2019-task-9-from-suggestion
Repo
Framework
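
The subtask B setup above, a shared feature representation trained adversarially against a domain classifier alongside a label classifier, is commonly implemented with a gradient-reversal layer. The sketch below shows that generic pattern in PyTorch; the dimensions, heads, and training details are illustrative assumptions, and the authors' exact scheme may differ.

```python
# Minimal sketch of adversarial cross-domain training with a shared encoder,
# a label classifier, and a domain classifier via gradient reversal (a common
# way to implement this kind of setup; details here are assumptions).
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class AdversarialClassifier(nn.Module):
    def __init__(self, in_dim=768, hidden=256, n_labels=2, n_domains=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())  # shared features
        self.label_head = nn.Linear(hidden, n_labels)    # suggestion vs. non-suggestion
        self.domain_head = nn.Linear(hidden, n_domains)  # e.g. forum vs. hotel reviews

    def forward(self, x, lam=1.0):
        h = self.encoder(x)
        return self.label_head(h), self.domain_head(GradReverse.apply(h, lam))

model = AdversarialClassifier()
ce = nn.CrossEntropyLoss()
x = torch.randn(8, 768)                                  # placeholder sentence features
y_label, y_domain = torch.randint(0, 2, (8,)), torch.randint(0, 2, (8,))
label_logits, domain_logits = model(x)
# In practice the label loss is computed only on labeled source-domain data,
# while the domain loss uses data from both domains.
loss = ce(label_logits, y_label) + ce(domain_logits, y_domain)
loss.backward()
```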

A Neural Citation Count Prediction Model based on Peer Review Text

Title A Neural Citation Count Prediction Model based on Peer Review Text
Authors Siqing Li, Wayne Xin Zhao, Eddy Jing Yin, Ji-Rong Wen
Abstract Citation count prediction (CCP) has been an important research task for automatically estimating the future impact of a scholarly paper. Previous studies mainly focus on extracting or mining useful features from the paper itself or the associated authors. An important kind of data signal, peer review text, has not been utilized for the CCP task. In this paper, we take the initiative to utilize peer review data for the CCP task with a neural prediction model. Our focus is to learn a comprehensive semantic representation of peer review text to improve prediction performance. To achieve this goal, we incorporate an abstract-review match mechanism and a cross-review match mechanism to learn deep features from peer review text. We also consider integrating hand-crafted features via a wide component. The deep and wide components jointly make the prediction. Extensive experiments have demonstrated the usefulness of the peer review data and the effectiveness of the proposed model. Our dataset has been released online.
Tasks
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-1497/
PDF https://www.aclweb.org/anthology/D19-1497
PWC https://paperswithcode.com/paper/a-neural-citation-count-prediction-model
Repo
Framework
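
The abstract above combines a deep component over peer-review text representations with a wide component over hand-crafted features. Below is a hedged, minimal PyTorch sketch of such a wide-and-deep regressor; the encoder, dimensions, and loss are assumptions for illustration, not the authors' model.

```python
# Hedged sketch of a wide-and-deep predictor: a deep component over a learned
# review-text representation and a wide (linear) component over hand-crafted
# features, combined for citation count regression. The dimensions and the
# use of a precomputed text encoding are assumptions for illustration.
import torch
from torch import nn

class WideAndDeepCCP(nn.Module):
    def __init__(self, text_dim=768, wide_dim=20, hidden=128):
        super().__init__()
        self.deep = nn.Sequential(           # deep component: review-text representation
            nn.Linear(text_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.wide = nn.Linear(wide_dim, 1)   # wide component: hand-crafted features
        self.out = nn.Linear(hidden, 1)

    def forward(self, review_encoding, handcrafted):
        return self.out(self.deep(review_encoding)) + self.wide(handcrafted)

model = WideAndDeepCCP()
pred = model(torch.randn(4, 768), torch.randn(4, 20))    # predicted (log) citation counts
loss = nn.functional.mse_loss(pred.squeeze(-1), torch.rand(4))
```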

Meaning Representation of Null Instantiated Semantic Roles in FrameNet

Title Meaning Representation of Null Instantiated Semantic Roles in FrameNet
Authors Miriam R L Petruck
Abstract Humans have the unique ability to infer information about participants in a scene, even if they are not mentioned in a text about that scene. Computer systems cannot do so without explicit information about those participants. This paper addresses the linguistic phenomenon of null-instantiated frame elements, i.e., implicit semantic roles, and their representation in FrameNet (FN). It motivates FN's annotation practice and illustrates three types of null-instantiated arguments that FrameNet tracks, noting that other lexical resources do not record such semantic-pragmatic information, despite its need in natural language understanding (NLU) and the elaborate efforts to create new datasets. It challenges the community to appeal to FN data to develop more sophisticated techniques for recognizing implicit semantic roles and creating needed datasets. Although the annotation of null-instantiated roles was lexicographically motivated, FN provides useful information for text processing, and therefore must be considered in the design of any meaning representation for natural language understanding.
Tasks
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-3313/
PDF https://www.aclweb.org/anthology/W19-3313
PWC https://paperswithcode.com/paper/meaning-representation-of-null-instantiated
Repo
Framework

A Multi-Platform Annotation Ecosystem for Domain Adaptation

Title A Multi-Platform Annotation Ecosystem for Domain Adaptation
Authors Richard Eckart de Castilho, Nancy Ide, Jin-Dong Kim, Jan-Christoph Klie, Keith Suderman
Abstract This paper describes an ecosystem consisting of three independent text annotation platforms. To demonstrate their ability to work in concert, we illustrate how to use them to address an interactive domain adaptation task in biomedical entity recognition. The platforms and the approach are in general domain-independent and can be readily applied to other areas of science.
Tasks Domain Adaptation
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-4021/
PDF https://www.aclweb.org/anthology/W19-4021
PWC https://paperswithcode.com/paper/a-multi-platform-annotation-ecosystem-for
Repo
Framework

Multilingual Probing of Deep Pre-Trained Contextual Encoders

Title Multilingual Probing of Deep Pre-Trained Contextual Encoders
Authors Vinit Ravishankar, Memduh Gökırmak, Lilja Øvrelid, Erik Velldal
Abstract Encoders that generate representations based on context have, in recent years, benefited from adaptations that allow for pre-training on large text corpora. Earlier work on evaluating fixed-length sentence representations has included the use of 'probing' tasks that use diagnostic classifiers to attempt to quantify the extent to which these encoders capture specific linguistic phenomena. The principle of probing has also resulted in extended evaluations that include relatively newer word-level pre-trained encoders. We build on probing tasks established in the literature and comprehensively evaluate and analyse, from a typological perspective amongst others, multilingual variants of existing encoders on probing datasets constructed for 6 non-English languages. Specifically, we probe each layer of multiple monolingual RNN-based ELMo models, the cased and uncased multilingual variants of the transformer-based BERT, and a variant of BERT that uses a cross-lingual modelling scheme (XLM).
Tasks
Published 2019-09-01
URL https://www.aclweb.org/anthology/W19-6205/
PDF https://www.aclweb.org/anthology/W19-6205
PWC https://paperswithcode.com/paper/multilingual-probing-of-deep-pre-trained
Repo
Framework
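
Layer-wise probing as described above typically means fitting a small diagnostic classifier on frozen representations from each encoder layer and comparing scores across layers. The sketch below shows that generic recipe with scikit-learn; the feature extraction and toy data are placeholders, not the paper's setup.

```python
# Sketch of layer-wise probing with a diagnostic classifier: for each encoder
# layer, fit a simple classifier on frozen representations and compare
# accuracies across layers. The feature extraction step is assumed to have
# been done already (e.g. mean-pooled per-sentence vectors from each layer).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def probe_layers(layer_features, labels):
    """layer_features: list of (n_sentences, dim) arrays, one per encoder layer."""
    scores = []
    for X in layer_features:
        X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        scores.append(clf.score(X_te, y_te))
    return scores  # one probing accuracy per layer

# Toy example: 3 "layers" of random 64-d features for 200 sentences.
rng = np.random.default_rng(0)
fake_layers = [rng.standard_normal((200, 64)) for _ in range(3)]
fake_labels = rng.integers(0, 2, 200)
print(probe_layers(fake_layers, fake_labels))
```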

Production of Voicing Contrast in Children with Cochlear Implants

Title Production of Voicing Contrast in Children with Cochlear Implants
Authors Georgia Koupka
Abstract
Tasks
Published 2019-09-01
URL https://www.aclweb.org/anthology/W19-7416/
PDF https://www.aclweb.org/anthology/W19-7416
PWC https://paperswithcode.com/paper/production-of-voicing-contrast-in-children
Repo
Framework

Variational Autoencoders with Jointly Optimized Latent Dependency Structure

Title Variational Autoencoders with Jointly Optimized Latent Dependency Structure
Authors Jiawei He, Yu Gong, Joseph Marino, Greg Mori, Andreas Lehrmann
Abstract We propose a method for learning the dependency structure between latent variables in deep latent variable models. Our general modeling and inference framework combines the complementary strengths of deep generative models and probabilistic graphical models. In particular, we express the latent variable space of a variational autoencoder (VAE) in terms of a Bayesian network with a learned, flexible dependency structure. The network parameters, variational parameters, and latent topology are optimized simultaneously with a single objective. Inference is formulated via a sampling procedure that produces expectations over latent variable structures and incorporates top-down and bottom-up reasoning over latent variable values. We validate our framework in extensive experiments on MNIST, Omniglot, and CIFAR-10. Comparisons to state-of-the-art structured variational autoencoder baselines show improvements in terms of the expressiveness of the learned model.
Tasks Latent Variable Models, Omniglot
Published 2019-05-01
URL https://openreview.net/forum?id=SJgsCjCqt7
PDF https://openreview.net/pdf?id=SJgsCjCqt7
PWC https://paperswithcode.com/paper/variational-autoencoders-with-jointly
Repo
Framework
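
The abstract above learns the dependency structure between latent variables jointly with the VAE parameters. One common way to make such structure differentiable is a set of learnable gates over a fixed node ordering, sketched below in PyTorch; the gating form, shapes, and prior network are illustrative assumptions, not the paper's exact parameterization.

```python
# Hedged sketch of the core idea: latent variables z_1..z_K whose dependency
# structure is a set of learnable gates c_ij in [0,1] on a fixed node ordering,
# so the prior p(z_j | gated parents) and the gates are trained jointly with
# the VAE objective. All details here are illustrative assumptions.
import torch
from torch import nn

class GatedLatentPrior(nn.Module):
    def __init__(self, n_latents=4, z_dim=8, hidden=32):
        super().__init__()
        # One unconstrained logit per ordered pair i < j; sigmoid gives the gate.
        self.gate_logits = nn.Parameter(torch.zeros(n_latents, n_latents))
        self.prior_nets = nn.ModuleList([
            nn.Sequential(nn.Linear(n_latents * z_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, 2 * z_dim))   # mean and log-variance
            for _ in range(n_latents)])
        self.n_latents, self.z_dim = n_latents, z_dim

    def forward(self, z):                                  # z: (batch, n_latents, z_dim)
        gates = torch.sigmoid(self.gate_logits).triu(diagonal=1)  # acyclic by ordering
        params = []
        for j in range(self.n_latents):
            # Gate each potential parent's sample before feeding the prior net for z_j.
            parents = (gates[:, j].view(1, -1, 1) * z).reshape(z.size(0), -1)
            params.append(self.prior_nets[j](parents))
        mu, logvar = torch.stack(params, dim=1).chunk(2, dim=-1)
        return mu, logvar                                  # used in the KL term of the ELBO
```

In a full model, mu and logvar would parameterize the structured prior inside the VAE's KL term, so the gate logits receive gradients from the same single objective as the encoder and decoder.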

Information Theoretic lower bounds on negative log likelihood

Title Information Theoretic lower bounds on negative log likelihood
Authors Luis A. Lastras-Montaño
Abstract In this article we use rate-distortion theory, a branch of information theory devoted to the problem of lossy compression, to shed light on an important problem in latent variable modeling of data: is there room to improve the model? One way to address this question is to find an upper bound on the probability (equivalently a lower bound on the negative log likelihood) that the model can assign to some data as one varies the prior and/or the likelihood function in a latent variable model. The core of our contribution is to formally show that the problem of optimizing priors in latent variable models is exactly an instance of the variational optimization problem that information theorists solve when computing rate-distortion functions, and then to use this to derive a lower bound on negative log likelihood. Moreover, we will show that if changing the prior can improve the log likelihood, then there is a way to change the likelihood function instead and attain the same log likelihood, and thus rate-distortion theory is of relevance to both optimizing priors as well as optimizing likelihood functions. We will experimentally argue for the usefulness of quantities derived from rate-distortion theory in latent variable modeling by applying them to a problem in image modeling.
Tasks Latent Variable Models
Published 2019-05-01
URL https://openreview.net/forum?id=rkemqsC9Fm
PDF https://openreview.net/pdf?id=rkemqsC9Fm
PWC https://paperswithcode.com/paper/information-theoretic-lower-bounds-on
Repo
Framework
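
For orientation, the display below collects the standard ELBO and rate-distortion quantities the abstract refers to (distortion, rate, and the rate-distortion function with distortion d(x,z) = -log p(x|z)). It is generic bookkeeping in assumed notation, not the paper's precise lower-bound statement.

```latex
% Generic rate-distortion view of a latent variable model
% p_\theta(x) = \int p(z)\, p_\theta(x \mid z)\, dz; standard bookkeeping only.
\begin{align}
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[-\log p_\theta(x)\right]
    &\le \underbrace{\mathbb{E}_{x}\,\mathbb{E}_{q(z \mid x)}\!\left[-\log p_\theta(x \mid z)\right]}_{\text{distortion } D}
       + \underbrace{\mathbb{E}_{x}\,\mathrm{KL}\!\left(q(z \mid x)\,\|\,p(z)\right)}_{\text{rate } R}, \\
  R &\ge I(X;Z)
     \quad \text{with equality iff } p(z) = \mathbb{E}_{x}\!\left[q(z \mid x)\right], \\
  R(D) &= \min_{q(z \mid x)\,:\;\mathbb{E}[d(x,z)] \le D} I(X;Z),
     \qquad d(x,z) = -\log p_\theta(x \mid z).
\end{align}
```

The paper's contribution, as described above, is to show that optimizing the prior is exactly the variational problem solved when computing R(D), and to use that connection to bound how much the log likelihood can still improve.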

Toward Comprehensive Understanding of a Sentiment Based on Human Motives

Title Toward Comprehensive Understanding of a Sentiment Based on Human Motives
Authors Naoki Otani, Eduard Hovy
Abstract In sentiment detection, the natural language processing community has focused on determining holders, facets, and valences, but has paid little attention to the reasons for sentiment decisions. Our work considers human motives as the driver for human sentiments and addresses the problem of motive detection as the first step. Following a study in psychology, we define six basic motives that cover a wide range of topics appearing in review texts, annotate 1,600 texts in restaurant and laptop domains with the motives, and report the performance of baseline methods on this new dataset. We also show that cross-domain transfer learning boosts detection performance, which indicates that these universal motives exist across different domains.
Tasks Transfer Learning
Published 2019-07-01
URL https://www.aclweb.org/anthology/P19-1461/
PDF https://www.aclweb.org/anthology/P19-1461
PWC https://paperswithcode.com/paper/toward-comprehensive-understanding-of-a
Repo
Framework

CNL-ER: A Controlled Natural Language for Specifying and Verbalising Entity Relationship Models

Title CNL-ER: A Controlled Natural Language for Specifying and Verbalising Entity Relationship Models
Authors Bayzid Ashik Hossain, Gayathri Rajan, Rolf Schwitter
Abstract The first step towards designing an information system is conceptual modelling, where domain experts and knowledge engineers together identify the information necessary to build the system. Entity relationship modelling is one of the most popular conceptual modelling techniques and represents an information system in terms of entities, attributes and relationships. Entity relationship models are constructed graphically but are often difficult for domain experts to understand. To overcome this problem, we suggest verbalising these models in a controlled natural language. In this paper, we present CNL-ER, a controlled natural language for specifying and verbalising entity relationship (ER) models that not only solves the verbalisation problem for these models but also provides the benefits of automatic verification and validation, and of semantic round-tripping, which makes the communication process transparent between the domain experts and the knowledge engineers.
Tasks
Published 2019-04-01
URL https://www.aclweb.org/anthology/U19-1017/
PDF https://www.aclweb.org/anthology/U19-1017
PWC https://paperswithcode.com/paper/cnl-er-a-controlled-natural-language-for
Repo
Framework
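
To make the notion of verbalising an ER model concrete, the toy Python sketch below turns a relationship and its cardinalities into an English-like sentence. The templates and data structures are hypothetical illustrations only and do not reflect CNL-ER's actual controlled grammar.

```python
# Toy verbaliser for ER relationships; templates are hypothetical and do not
# reflect CNL-ER's actual controlled grammar or vocabulary.
CARDINALITY_PHRASES = {
    ("1", "N"): "exactly one {a} is associated with one or more {b_pl}",
    ("N", "1"): "one or more {a_pl} are associated with exactly one {b}",
    ("N", "N"): "one or more {a_pl} are associated with one or more {b_pl}",
}

def verbalise(entity_a, entity_b, relationship, cardinality):
    template = CARDINALITY_PHRASES[cardinality]
    body = template.format(a=entity_a.lower(), a_pl=entity_a.lower() + "s",
                           b=entity_b.lower(), b_pl=entity_b.lower() + "s")
    return f"For the relationship '{relationship}': {body}."

print(verbalise("Student", "Course", "enrols in", ("N", "N")))
# For the relationship 'enrols in': one or more students are associated with one or more courses.
```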

Team SVMrank: Leveraging Feature-rich Support Vector Machines for Ranking Explanations to Elementary Science Questions

Title Team SVMrank: Leveraging Feature-rich Support Vector Machines for Ranking Explanations to Elementary Science Questions
Authors Jennifer D'Souza, Isaiah Onando Mulang', Sören Auer
Abstract The TextGraphs 2019 Shared Task on Multi-Hop Inference for Explanation Regeneration (MIER-19) tackles explanation generation for answers to elementary science questions. It builds on the AI2 Reasoning Challenge 2018 (ARC-18), which was organized as an advanced question answering task on a dataset of elementary science questions. The ARC-18 questions were shown to be hard to answer with systems focusing on surface-level cues alone, instead requiring far more powerful knowledge and reasoning. To address MIER-19, we adopt a hybrid pipelined architecture comprising a feature-rich learning-to-rank (LTR) machine learning model, followed by a rule-based system for re-ranking the LTR model predictions. Our system was ranked fourth in the official evaluation, scoring close to the second and third ranked teams, achieving 39.4% MAP.
Tasks Learning-To-Rank, Question Answering
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-5312/
PDF https://www.aclweb.org/anthology/D19-5312
PWC https://paperswithcode.com/paper/team-svmrank-leveraging-feature-rich-support
Repo
Framework
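
The first pipeline stage above is a feature-rich learning-to-rank model. A common way to approximate SVMrank is a pairwise transform plus a linear SVM, sketched below with scikit-learn; the features and toy data are placeholders, and the rule-based re-ranking stage is omitted.

```python
# Sketch of a pairwise learning-to-rank baseline in the spirit of SVMrank:
# train a linear SVM on feature differences of (relevant, non-relevant)
# explanation candidates for the same question, then rank candidates by the
# learned scoring function. Features here are placeholders, not the paper's
# feature set.
import numpy as np
from sklearn.svm import LinearSVC

def pairwise_examples(features, relevance):
    """Build +1/-1 difference pairs within one question's candidate list."""
    X, y = [], []
    for i in range(len(features)):
        for j in range(len(features)):
            if relevance[i] > relevance[j]:
                X.append(features[i] - features[j]); y.append(1)
                X.append(features[j] - features[i]); y.append(-1)
    return np.array(X), np.array(y)

# Toy data: one question with 6 candidate explanation sentences, 10-d features.
rng = np.random.default_rng(0)
feats = rng.standard_normal((6, 10))
rel = np.array([1, 0, 1, 0, 0, 0])          # gold explanation sentences get 1

X_pairs, y_pairs = pairwise_examples(feats, rel)
ranker = LinearSVC(C=1.0, max_iter=10000).fit(X_pairs, y_pairs)
scores = feats @ ranker.coef_.ravel()       # score each candidate
ranking = np.argsort(-scores)               # highest score first
```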

Insights from Building an Open-Ended Conversational Agent

Title Insights from Building an Open-Ended Conversational Agent
Authors Khyatti Gupta, Meghana Joshi, Ankush Chatterjee, Sonam Damani, Kedhar Nath Narahari, Puneet Agrawal
Abstract Dialogue systems and conversational agents are becoming increasingly popular in modern society. We conceptualized one such conversational agent, Microsoft's "Ruuh", with the promise of being able to talk to its users on any subject they choose. Building an open-ended conversational agent like Ruuh at the outset seems like a daunting task, since the agent needs to think beyond the utilitarian notion of merely generating "relevant" responses and meet a wider range of user social needs, such as expressing happiness when the user's favourite sports team wins or sharing a cute comment when shown pictures of the user's pet. The agent also needs to detect and respond to abusive language, sensitive topics and trolling behaviour of the users. Many of these problems pose significant research challenges as well as product design limitations, as one needs to circumnavigate the technical limitations to create an acceptable user experience. However, as the product reaches real users the true test begins, and one realizes the challenges and opportunities that lie in the vast domain of conversations. With over 2.5 million real-world users to date, who have generated over 300 million conversations with Ruuh, there is a plethora of learning, insights and opportunities that we will talk about in this paper.
Tasks
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-4112/
PDF https://www.aclweb.org/anthology/W19-4112
PWC https://paperswithcode.com/paper/insights-from-building-an-open-ended
Repo
Framework