Paper Group NANR 172
Modeling Inter-Aspect Dependencies for Aspect-Based Sentiment Analysis. The APVA-TURBO Approach To Question Answering in Knowledge Base. Self-Supervised Generation of Spatial Audio for 360° Video. Lexical Networks in !Xung. Proceedings of the 3rd Workshop on Computational Creativity in Natural Language Generation (CC-NLG 2018). Towards Language Tec …
Modeling Inter-Aspect Dependencies for Aspect-Based Sentiment Analysis
Title | Modeling Inter-Aspect Dependencies for Aspect-Based Sentiment Analysis |
Authors | Devamanyu Hazarika, Soujanya Poria, Prateek Vij, Gangeshwar Krishnamurthy, Erik Cambria, Roger Zimmermann |
Abstract | Aspect-based sentiment analysis is a fine-grained sentiment classification task over multiple aspects in a sentence. Present neural models exploit an aspect and its contextual information in the sentence but largely ignore the inter-aspect dependencies. In this paper, we incorporate this pattern by classifying all aspects in a sentence simultaneously while processing the temporal dependencies among their corresponding sentence representations with recurrent networks. Results on the benchmark SemEval 2014 dataset suggest the effectiveness of our proposed approach. |
Tasks | Aspect-Based Sentiment Analysis, Sentiment Analysis |
Published | 2018-06-01 |
URL | https://www.aclweb.org/anthology/N18-2043/ |
https://www.aclweb.org/anthology/N18-2043 | |
PWC | https://paperswithcode.com/paper/modeling-inter-aspect-dependencies-for-aspect |
Repo | |
Framework | |
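The inter-aspect recurrence described in the abstract can be illustrated with a toy sketch (pure Python with illustrative weights; not the authors' model): each aspect's hidden score mixes its own evidence with the previous aspect's state, so the polarity predicted for one aspect can sway an ambiguous neighbour.

```python
import math

def classify_aspects(aspect_feats, w_in=1.0, w_rec=0.5):
    """Toy recurrence over per-aspect polarity scores: each aspect's hidden
    score combines its own feature with the previous aspect's hidden score,
    so one aspect's predicted polarity can influence the next."""
    labels, h = [], 0.0
    for f in aspect_feats:
        h = math.tanh(w_in * f + w_rec * h)   # recurrent update
        p = 1.0 / (1.0 + math.exp(-4.0 * h))  # squash to a positive-polarity probability
        labels.append("positive" if p > 0.5 else "negative")
    return labels

# Features for three aspects: clearly positive, clearly negative, weakly positive.
labels = classify_aspects([2.0, -2.0, 0.1])
```

With features `[2.0, -2.0, 0.1]` the third, weakly positive aspect is pulled negative by the preceding negative aspect; scored in isolation (`classify_aspects([0.1])`) it would come out positive, which is exactly the inter-aspect dependency the paper targets.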
The APVA-TURBO Approach To Question Answering in Knowledge Base
Title | The APVA-TURBO Approach To Question Answering in Knowledge Base |
Authors | Yue Wang, Richong Zhang, Cheng Xu, Yongyi Mao |
Abstract | In this paper, we study the problem of question answering over a knowledge base. We identify that the primary bottleneck in this problem is the difficulty in accurately predicting the relations connecting the subject entity to the object entities. We advocate a new model architecture, APVA, which includes a verification mechanism responsible for checking the correctness of predicted relations. The APVA framework naturally supports a well-principled iterative training procedure, which we call turbo training. We demonstrate via experiments that the APVA-TURBO approach drastically improves the question answering performance. |
Tasks | Question Answering, Semantic Parsing |
Published | 2018-08-01 |
URL | https://www.aclweb.org/anthology/C18-1170/ |
https://www.aclweb.org/anthology/C18-1170 | |
PWC | https://paperswithcode.com/paper/the-apva-turbo-approach-to-question-answering |
Repo | |
Framework | |
Self-Supervised Generation of Spatial Audio for 360° Video
Title | Self-Supervised Generation of Spatial Audio for 360° Video |
Authors | Pedro Morgado, Nuno Vasconcelos, Timothy Langlois, Oliver Wang |
Abstract | We introduce an approach to convert mono audio recorded by a 360° video camera into spatial audio, a representation of the distribution of sound over the full viewing sphere. Spatial audio is an important component of immersive 360° video viewing, but spatial audio microphones are still rare in current 360° video production. Our system consists of end-to-end trainable neural networks that separate individual sound sources and localize them on the viewing sphere, conditioned on multi-modal analysis of the audio and 360° video frames. We introduce several datasets, including one we filmed ourselves and one collected in the wild from YouTube, consisting of 360° videos uploaded with spatial audio. During training, ground-truth spatial audio serves as self-supervision and a mixed-down mono track forms the input to our network. Using our approach, we show that it is possible to infer the spatial localization of sounds based only on a synchronized 360° video and the mono audio track. |
Tasks | |
Published | 2018-12-01 |
URL | http://papers.nips.cc/paper/7319-self-supervised-generation-of-spatial-audio-for-360-video |
http://papers.nips.cc/paper/7319-self-supervised-generation-of-spatial-audio-for-360-video.pdf | |
PWC | https://paperswithcode.com/paper/self-supervised-generation-of-spatial-audio |
Repo | |
Framework | |
Lexical Networks in !Xung
Title | Lexical Networks in !Xung |
Authors | Syed-Amad Hussain, Micha Elsner, Amanda Miller |
Abstract | We investigate the lexical network properties of Mangetti Dune !Xung, a Southern African language with a large phoneme inventory, as it compares to English and other commonly studied languages. Lexical networks are graphs in which nodes (words) are linked to their minimal pairs; global properties of these networks are believed to mediate lexical access in the minds of speakers. We show that the network properties of !Xung are within the range found in previously studied languages. By simulating data ("pseudolexicons") with varying levels of phonotactic structure, we find that the lexical network properties of !Xung diverge from previously studied languages when fewer phonotactic constraints are retained. We conclude that lexical network properties are representative of an underlying cognitive structure which is necessary for efficient word retrieval, and that the phonotactics of !Xung may be shaped by a selective pressure which preserves network properties within this cognitively useful range. |
Tasks | |
Published | 2018-10-01 |
URL | https://www.aclweb.org/anthology/W18-5802/ |
https://www.aclweb.org/anthology/W18-5802 | |
PWC | https://paperswithcode.com/paper/lexical-networks-in-xung |
Repo | |
Framework | |
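Lexical networks as defined in the abstract link words to their minimal pairs. A minimal sketch of constructing such a graph over a toy orthographic lexicon (substitution-only minimal pairs; the paper works on phoneme transcriptions, and this is not the authors' code):

```python
from itertools import combinations

def is_minimal_pair(a, b):
    """Words of equal length that differ in exactly one segment."""
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

def lexical_network(words):
    """Adjacency sets linking each word to its minimal pairs."""
    graph = {w: set() for w in words}
    for a, b in combinations(words, 2):
        if is_minimal_pair(a, b):
            graph[a].add(b)
            graph[b].add(a)
    return graph

# Toy lexicon over letters; real analyses use phoneme strings.
net = lexical_network(["cat", "bat", "cot", "dog", "dot"])
```

Global properties of this graph (degree distribution, clustering, connected-component structure) are the quantities the paper compares across !Xung, English, and simulated pseudolexicons.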
Proceedings of the 3rd Workshop on Computational Creativity in Natural Language Generation (CC-NLG 2018)
Title | Proceedings of the 3rd Workshop on Computational Creativity in Natural Language Generation (CC-NLG 2018) |
Authors | Hugo Gonçalo Oliveira, Ben Burtenshaw, Raquel Hervás |
Abstract | |
Tasks | Text Generation |
Published | 2018-11-01 |
URL | https://www.aclweb.org/anthology/papers/W/W18/W18-6600/ |
https://www.aclweb.org/anthology/W18-6600 | |
PWC | https://paperswithcode.com/paper/proceedings-of-the-3rd-workshop-on-1 |
Repo | |
Framework | |
Towards Language Technology for Mi’kmaq
Title | Towards Language Technology for Mi’kmaq |
Authors | Anant Maheshwari, Léo Bouscarrat, Paul Cook |
Abstract | |
Tasks | Language Identification, Language Modelling, Machine Translation, Spelling Correction |
Published | 2018-05-01 |
URL | https://www.aclweb.org/anthology/L18-1653/ |
https://www.aclweb.org/anthology/L18-1653 | |
PWC | https://paperswithcode.com/paper/towards-language-technology-for-mikmaq |
Repo | |
Framework | |
Supervised Machine Learning for Extractive Query Based Summarisation of Biomedical Data
Title | Supervised Machine Learning for Extractive Query Based Summarisation of Biomedical Data |
Authors | Mandeep Kaur, Diego Mollá |
Abstract | The automation of text summarisation of biomedical publications is a pressing need due to the plethora of information available online. This paper explores the impact of several supervised machine learning approaches for extracting multi-document summaries for given queries. In particular, we compare classification and regression approaches to query-based extractive summarisation using data provided by the BioASQ Challenge. We tackle the problem of annotating sentences for training classification systems and show that a simple annotation approach outperforms regression-based summarisation. |
Tasks | |
Published | 2018-10-01 |
URL | https://www.aclweb.org/anthology/W18-5604/ |
https://www.aclweb.org/anthology/W18-5604 | |
PWC | https://paperswithcode.com/paper/supervised-machine-learning-for-extractive-1 |
Repo | |
Framework | |
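Query-based extractive summarisation of the kind compared in the abstract reduces to scoring sentences against a query and keeping the top-ranked ones. A minimal regression-style sketch using bag-of-words cosine similarity as the score (an illustrative stand-in, not the BioASQ system or its trained models):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two token lists via term counts."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def extractive_summary(query, sentences, k=2):
    """Score each candidate sentence against the query, keep the top k."""
    tokens = lambda s: s.lower().split()
    ranked = sorted(sentences, key=lambda s: cosine(tokens(query), tokens(s)), reverse=True)
    return ranked[:k]

query = "drug treatment for hypertension"
sents = [
    "Hypertension responds to drug treatment in most patients.",
    "The study enrolled 40 volunteers.",
    "Treatment adherence was measured weekly.",
]
summary = extractive_summary(query, sents, k=1)
```

A classification variant would instead train a binary "in-summary / not-in-summary" model over sentence features, which is the annotation problem the paper tackles.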
Variational Network Quantization
Title | Variational Network Quantization |
Authors | Jan Achterhold, Jan Mathias Koehler, Anke Schmeink, Tim Genewein |
Abstract | In this paper, the preparation of a neural network for pruning and few-bit quantization is formulated as a variational inference problem. To this end, a quantizing prior that leads to a multi-modal, sparse posterior distribution over weights is introduced, and a differentiable Kullback-Leibler divergence approximation for this prior is derived. After training with Variational Network Quantization, weights can be replaced by deterministic quantization values with small to negligible loss of task accuracy (including pruning by setting weights to 0). The method does not require fine-tuning after quantization. Results are shown for ternary quantization on LeNet-5 (MNIST) and DenseNet (CIFAR-10). |
Tasks | Quantization |
Published | 2018-01-01 |
URL | https://openreview.net/forum?id=ry-TW-WAb |
https://openreview.net/pdf?id=ry-TW-WAb | |
PWC | https://paperswithcode.com/paper/variational-network-quantization |
Repo | |
Framework | |
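The final step of Variational Network Quantization replaces trained weights with deterministic ternary values. A minimal sketch of that replacement (the pruning threshold and shared scale here are illustrative choices, not the quantities the paper derives from the learned posterior):

```python
def ternary_quantize(weights, threshold=0.05):
    """Snap each weight to {-s, 0, +s}: weights below the threshold are pruned
    to 0, survivors share one scale s (their mean magnitude)."""
    kept = [w for w in weights if abs(w) > threshold]
    s = sum(abs(w) for w in kept) / len(kept) if kept else 0.0
    return [0.0 if abs(w) <= threshold else (s if w > 0 else -s)
            for w in weights]

# Toy layer: two near-zero weights get pruned, the rest collapse onto ±s.
w = [0.31, -0.02, -0.28, 0.01, 0.33]
q = ternary_quantize(w)
```

In the paper this snap incurs little accuracy loss because the quantizing prior has already concentrated the posterior over weights around the ternary modes during training.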
Evaluation of a Prototype System that Automatically Assigns Subject Headings to Nursing Narratives Using Recurrent Neural Network
Title | Evaluation of a Prototype System that Automatically Assigns Subject Headings to Nursing Narratives Using Recurrent Neural Network |
Authors | Hans Moen, Kai Hakala, Laura-Maria Peltonen, Henry Suhonen, Petri Loukasmäki, Tapio Salakoski, Filip Ginter, Sanna Salanterä |
Abstract | We present our initial evaluation of a prototype system designed to assist nurses in assigning subject headings to nursing narratives written in the context of documenting patient care in hospitals. Currently, nurses may need to memorize several hundred subject headings from standardized nursing terminologies when structuring their text and assigning the right section/subject headings to it. Our aim is to allow nurses to write in a narrative manner without having to plan and structure the text with respect to sections and subject headings; instead, the system should assist with the assignment of subject headings and with restructuring afterwards. We hypothesize that this could reduce the time and effort needed for nursing documentation in hospitals. A central component of the system is a text classification model based on a long short-term memory (LSTM) recurrent neural network architecture, trained on a large data set of nursing notes. A simple Web-based interface has been implemented for user interaction. To evaluate the system, three nurses wrote a set of artificial nursing shift notes in a fully unstructured narrative manner, without planning for or considering the use of sections and subject headings. These were then fed to the system, which assigned subject headings to each sentence and then grouped the sentences into paragraphs. Manual evaluation was conducted by a group of nurses. The results show that about 70% of the sentences are assigned the correct subject headings. The nurses believe that such a system can be of great help in making nursing documentation in hospitals easier and less time consuming. Finally, various measures and approaches for improving the system are discussed. |
Tasks | Text Classification |
Published | 2018-10-01 |
URL | https://www.aclweb.org/anthology/W18-5611/ |
https://www.aclweb.org/anthology/W18-5611 | |
PWC | https://paperswithcode.com/paper/evaluation-of-a-prototype-system-that |
Repo | |
Framework | |
Hierarchical Convolutional Attention Networks for Text Classification
Title | Hierarchical Convolutional Attention Networks for Text Classification |
Authors | Shang Gao, Arvind Ramanathan, Georgia Tourassi |
Abstract | Recent work in machine translation has demonstrated that self-attention mechanisms can be used in place of recurrent neural networks to increase training speed without sacrificing model accuracy. We propose combining this approach with the benefits of convolutional filters and a hierarchical structure to create a document classification model that is both highly accurate and fast to train; we name our method Hierarchical Convolutional Attention Networks. We demonstrate the effectiveness of this architecture by surpassing the accuracy of the current state of the art on several classification tasks while being twice as fast to train. |
Tasks | Document Classification, Machine Translation, Representation Learning, Text Classification |
Published | 2018-07-01 |
URL | https://www.aclweb.org/anthology/W18-3002/ |
https://www.aclweb.org/anthology/W18-3002 | |
PWC | https://paperswithcode.com/paper/hierarchical-convolutional-attention-networks |
Repo | |
Framework | |
Horizon-Independent Minimax Linear Regression
Title | Horizon-Independent Minimax Linear Regression |
Authors | Alan Malek, Peter L. Bartlett |
Abstract | We consider online linear regression: at each round, an adversary reveals a covariate vector, the learner predicts a real value, the adversary reveals a label, and the learner suffers the squared prediction error. The aim is to minimize the difference between the cumulative loss and that of the linear predictor that is best in hindsight. Previous work demonstrated that the minimax optimal strategy is easy to compute recursively from the end of the game; this requires the entire sequence of covariate vectors in advance. We show that, once provided with a measure of the scale of the problem, we can invert the recursion and play the minimax strategy without knowing the future covariates. Further, we show that this forward recursion remains optimal even against adaptively chosen labels and covariates, provided that the adversary adheres to a set of constraints that prevent misrepresentation of the scale of the problem. This strategy is horizon-independent in that the regret and minimax strategies depend on the size of the constraint set and not on the time-horizon, and hence it incurs no more regret than the optimal strategy that knows in advance the number of rounds of the game. We also provide an interpretation of the minimax algorithm as a follow-the-regularized-leader strategy with a data-dependent regularizer and obtain an explicit expression for the minimax regret. |
Tasks | |
Published | 2018-12-01 |
URL | http://papers.nips.cc/paper/7772-horizon-independent-minimax-linear-regression |
http://papers.nips.cc/paper/7772-horizon-independent-minimax-linear-regression.pdf | |
PWC | https://paperswithcode.com/paper/horizon-independent-minimax-linear-regression |
Repo | |
Framework | |
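The online protocol in the abstract can be sketched with a simple ridge (follow-the-regularized-leader-style) stand-in for the learner; the paper's minimax strategy is different and horizon-independent, but the loop below shows the game and how regret against the best fixed linear predictor in hindsight is measured (d = 1, illustrative data):

```python
def online_ridge_regret(rounds, lam=1.0):
    """Online linear regression protocol (d = 1): each round the learner sees a
    covariate x, predicts with a regularized least-squares fit of the past,
    then observes the label y and suffers squared error."""
    sxx = sxy = 0.0
    learner_loss = 0.0
    for x, y in rounds:
        theta = sxy / (lam + sxx)            # ridge estimate from past rounds
        learner_loss += (theta * x - y) ** 2
        sxx += x * x
        sxy += x * y
    # Loss of the best fixed linear predictor chosen in hindsight.
    best = sxy / sxx if sxx else 0.0
    hindsight_loss = sum((best * x - y) ** 2 for x, y in rounds)
    return learner_loss - hindsight_loss     # regret

data = [(1.0, 1.0), (2.0, 2.1), (3.0, 2.9), (1.5, 1.4)]
regret = online_ridge_regret(data)
```

The paper's contribution is a forward recursion for the exact minimax strategy whose regret depends on the scale of the covariates rather than on the number of rounds, which this naive stand-in does not achieve.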
Iterative development of family history annotation guidelines using a synthetic corpus of clinical text
Title | Iterative development of family history annotation guidelines using a synthetic corpus of clinical text |
Authors | Taraka Rama, Pål Brekke, Øystein Nytrø, Lilja Øvrelid |
Abstract | In this article, we describe the development of annotation guidelines for family history information in Norwegian clinical text. We make use of incrementally developed synthetic clinical text describing patients' family history relating to cases of cardiac disease, and present a general methodology which integrates the synthetically produced clinical statements and guideline development. We analyze inter-annotator agreement based on the developed guidelines and present results from experiments aimed at evaluating the validity and applicability of the annotated corpus using machine learning techniques. The resulting annotated corpus contains 477 sentences and 6030 tokens. Both the annotation guidelines and the annotated corpus are made freely available and as such constitute the first publicly available resource of Norwegian clinical text. |
Tasks | |
Published | 2018-10-01 |
URL | https://www.aclweb.org/anthology/W18-5613/ |
https://www.aclweb.org/anthology/W18-5613 | |
PWC | https://paperswithcode.com/paper/iterative-development-of-family-history |
Repo | |
Framework | |
In the Eye of Beholder: Joint Learning of Gaze and Actions in First Person Video
Title | In the Eye of Beholder: Joint Learning of Gaze and Actions in First Person Video |
Authors | Yin Li, Miao Liu, James M. Rehg |
Abstract | We address the task of jointly determining what a person is doing and where they are looking based on the analysis of video captured by a headworn camera. We propose a novel deep model for joint gaze estimation and action recognition in First Person Vision. Our method describes the participant’s gaze as a probabilistic variable and models its distribution using stochastic units in a deep network. We sample from these stochastic units to generate an attention map. This attention map guides the aggregation of visual features in action recognition, thereby providing coupling between gaze and action. We evaluate our method on the standard EGTEA dataset and demonstrate performance that exceeds the state-of-the-art by a significant margin of 3.5%. |
Tasks | Gaze Estimation, Temporal Action Localization |
Published | 2018-09-01 |
URL | http://openaccess.thecvf.com/content_ECCV_2018/html/Yin_Li_In_the_Eye_ECCV_2018_paper.html |
http://openaccess.thecvf.com/content_ECCV_2018/papers/Yin_Li_In_the_Eye_ECCV_2018_paper.pdf | |
PWC | https://paperswithcode.com/paper/in-the-eye-of-beholder-joint-learning-of-gaze |
Repo | |
Framework | |
The Task Matters: Comparing Image Captioning and Task-Based Dialogical Image Description
Title | The Task Matters: Comparing Image Captioning and Task-Based Dialogical Image Description |
Authors | Nikolai Ilinykh, Sina Zarrieß, David Schlangen |
Abstract | Image captioning models are typically trained on data that is collected from people who are asked to describe an image, without being given any further task context. As we argue here, this context independence is likely to cause problems for transferring to task settings in which image description is bound by task demands. We demonstrate that careful design of data collection is required to obtain image descriptions which are contextually bounded to a particular meta-level task. As a task, we use MeetUp!, a text-based communication game where two players have the goal of finding each other in a visual environment. To reach this goal, the players need to describe images representing their current location. We analyse a dataset from this domain and show that the nature of image descriptions found in MeetUp! is diverse, dynamic and rich with phenomena that are not present in descriptions obtained through a simple image captioning task, which we ran for comparison. |
Tasks | Image Captioning, Text Generation |
Published | 2018-11-01 |
URL | https://www.aclweb.org/anthology/W18-6547/ |
https://www.aclweb.org/anthology/W18-6547 | |
PWC | https://paperswithcode.com/paper/the-task-matters-comparing-image-captioning |
Repo | |
Framework | |
A Reinforcement Learning Framework for Natural Question Generation using Bi-discriminators
Title | A Reinforcement Learning Framework for Natural Question Generation using Bi-discriminators |
Authors | Zhihao Fan, Zhongyu Wei, Siyuan Wang, Yang Liu, Xuanjing Huang |
Abstract | Visual Question Generation (VQG) aims to ask natural questions about an image automatically. Existing research focuses on training models to fit the annotated data set, which makes the task indistinguishable from other language generation tasks. We argue that natural questions need to have two specific attributes, from the perspectives of content and language respectively: they should be natural and human-written. Inspired by the setting of the discriminator in adversarial learning, we propose two discriminators, one for each attribute, to enhance the training. We then use the reinforcement learning framework to incorporate scores from the two discriminators as the reward to guide the training of the question generator. Experimental results on a benchmark VQG dataset show the effectiveness and robustness of our model compared to some state-of-the-art models in terms of both automatic and human evaluation metrics. |
Tasks | Question Answering, Question Generation, Scene Understanding, Text Generation, Visual Question Answering |
Published | 2018-08-01 |
URL | https://www.aclweb.org/anthology/C18-1150/ |
https://www.aclweb.org/anthology/C18-1150 | |
PWC | https://paperswithcode.com/paper/a-reinforcement-learning-framework-for |
Repo | |
Framework | |