Paper Group NANR 21
Modelling Representation Noise in Emotion Analysis using Gaussian Processes. Work With What You’ve Got. Converting a comprehensive lexical database into a computational model: The case of East Cree verb inflection. Tecnolengua Lingmotif at EmoInt-2017: A lexicon-based approach. Failure Transducers and Applications in Knowledge-Based Text Processing …
Modelling Representation Noise in Emotion Analysis using Gaussian Processes
Title | Modelling Representation Noise in Emotion Analysis using Gaussian Processes |
Authors | Daniel Beck |
Abstract | Emotion Analysis is the task of modelling latent emotions present in natural language. Labelled datasets for this task are scarce, so learning good input text representations is not trivial. Using averaged word embeddings is a simple way to leverage unlabelled corpora to build text representations, but this approach can be prone to noise coming either from the embeddings themselves or from the averaging procedure. In this paper we propose a model for Emotion Analysis using Gaussian Processes and kernels that are better suited to functions that exhibit noisy behaviour. Empirical evaluations in an emotion prediction task show that our model outperforms commonly used baselines for regression. |
Tasks | Emotion Recognition, Gaussian Processes, Opinion Mining, Word Embeddings |
Published | 2017-11-01 |
URL | https://www.aclweb.org/anthology/I17-2024/ |
PWC | https://paperswithcode.com/paper/modelling-representation-noise-in-emotion |
Repo | |
Framework | |
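A minimal sketch of the noise-robust GP regression idea described in the abstract above, assuming averaged pretrained word embeddings as inputs and using scikit-learn's Matérn-plus-white-noise kernel as a stand-in for the paper's kernel choice; the embedding table and emotion scores below are toy placeholders, not the paper's data or code.

```python
# Minimal sketch (not the paper's implementation): Gaussian Process regression
# over averaged word embeddings, with a Matern kernel plus a white-noise term
# to account for noisy representations. All data below are placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

def average_embedding(tokens, vectors, dim=300):
    """Average the word vectors of all in-vocabulary tokens."""
    vecs = [vectors[t] for t in tokens if t in vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

# toy data: texts paired with emotion intensity scores in [0, 1]
vectors = {"happy": np.random.rand(300), "sad": np.random.rand(300)}  # stand-in for pretrained embeddings
texts = [["happy"], ["sad"], ["happy", "sad"]]
scores = np.array([0.9, 0.1, 0.5])

X = np.vstack([average_embedding(t, vectors) for t in texts])
kernel = Matern(nu=1.5) + WhiteKernel()           # rougher kernel + explicit noise term
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X, scores)
mean, std = gp.predict(X, return_std=True)        # predictive mean and uncertainty
```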
Work With What You’ve Got
Title | Work With What You’ve Got |
Authors | Lucy Bell, Lawrence Bell |
Abstract | |
Tasks | |
Published | 2017-03-01 |
URL | https://www.aclweb.org/anthology/W17-0107/ |
PWC | https://paperswithcode.com/paper/work-with-what-youve-got |
Repo | |
Framework | |
Converting a comprehensive lexical database into a computational model: The case of East Cree verb inflection
Title | Converting a comprehensive lexical database into a computational model: The case of East Cree verb inflection |
Authors | Antti Arppe, Marie-Odile Junker, Delasie Torkornoo |
Abstract | |
Tasks | |
Published | 2017-03-01 |
URL | https://www.aclweb.org/anthology/W17-0108/ |
PWC | https://paperswithcode.com/paper/converting-a-comprehensive-lexical-database |
Repo | |
Framework | |
Tecnolengua Lingmotif at EmoInt-2017: A lexicon-based approach
Title | Tecnolengua Lingmotif at EmoInt-2017: A lexicon-based approach |
Authors | Antonio Moreno-Ortiz |
Abstract | In this paper we describe Tecnolengua Group's participation in the shared task on emotion intensity at WASSA 2017. We used the Lingmotif tool and a new, complementary tool, Lingmotif Learn, which we developed for this occasion. We based our intensity predictions for the four test datasets entirely on Lingmotif's TSS (text sentiment score) feature. We also developed mechanisms for dealing with the idiosyncrasies of Twitter text. Results were comparatively poor, but the experience meant a good opportunity for us to identify issues in our score calculation for short texts, a genre for which the Lingmotif tool was not originally designed. |
Tasks | Sentiment Analysis |
Published | 2017-09-01 |
URL | https://www.aclweb.org/anthology/W17-5231/ |
PWC | https://paperswithcode.com/paper/tecnolengua-lingmotif-at-emoint-2017-a |
Repo | |
Framework | |
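The TSS feature itself belongs to the Lingmotif tool and is not specified in the abstract above; the following is only a generic lexicon-based sentiment score in the same spirit, with a hypothetical toy lexicon and an arbitrary rescaling to [0, 1], not Lingmotif's actual scoring scheme.

```python
# Illustrative sketch only: a generic lexicon-based text sentiment score.
# The lexicon and the rescaling are hypothetical placeholders.
LEXICON = {"good": 1.0, "great": 2.0, "bad": -1.0, "terrible": -2.0}

def text_sentiment_score(tokens, lexicon=LEXICON):
    """Average lexicon polarities over sentiment-bearing tokens, rescaled to [0, 1]."""
    hits = [lexicon[t] for t in tokens if t in lexicon]
    if not hits:
        return 0.0                       # neutral when no lexicon entries match
    raw = sum(hits) / len(hits)          # average polarity in [-2, 2]
    return (raw + 2.0) / 4.0             # rescale to [0, 1] as an intensity-like value

print(text_sentiment_score("the movie was great but the ending was bad".split()))  # 0.625
```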
Failure Transducers and Applications in Knowledge-Based Text Processing
Title | Failure Transducers and Applications in Knowledge-Based Text Processing |
Authors | Stoyan Mihov, Klaus U. Schulz |
Abstract | |
Tasks | |
Published | 2017-09-01 |
URL | https://www.aclweb.org/anthology/W17-4001/ |
PWC | https://paperswithcode.com/paper/failure-transducers-and-applications-in |
Repo | |
Framework | |
Building a SentiWordNet for Odia
Title | Building a SentiWordNet for Odia |
Authors | Gaurav Mohanty, Abishek Kannan, Radhika Mamidi |
Abstract | As a discipline of Natural Language Processing, Sentiment Analysis is used to extract and analyze subjective information present in natural language data. The task of Sentiment Analysis has acquired wide commercial uses including social media monitoring tasks, survey responses, review systems, etc. Languages like English have several resources which aid in the task of Sentiment Analysis. SentiWordNet and Subjectivity WordList are examples of such tools and resources. With more data being available in native vernacular, language-specific SentiWordNet(s) have become essential. For resource-poor languages, creating such SentiWordNet(s) is a difficult task to achieve. One solution is to use available resources in English and translate the final source lexicon into the target lexicon via machine translation. Machine translation systems for the English-Odia language pair have not yet been developed. In this paper, we discuss a method to create a SentiWordNet for Odia, which is resource-poor, by only using resources which are currently available for Indian languages. The lexicon created would serve as a tool for Sentiment Analysis related tasks specific to Odia data. |
Tasks | Machine Translation, Sentiment Analysis, Word Alignment |
Published | 2017-09-01 |
URL | https://www.aclweb.org/anthology/W17-5219/ |
PWC | https://paperswithcode.com/paper/building-a-sentiwordnet-for-odia |
Repo | |
Framework | |
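A hedged sketch of the general lexicon-projection idea (not the paper's actual pipeline, which relies on resources available for Indian languages rather than English-Odia machine translation): polarity scores from a source sentiment lexicon are carried over to target-language words through a bilingual mapping. Both dictionaries below are toy placeholders.

```python
# Hedged sketch, not the paper's method: project (positive, negative) scores
# from a source sentiment lexicon onto target-language words via a bilingual map.
source_sentiment = {"good": (0.75, 0.0), "bad": (0.0, 0.75)}   # source word -> (pos, neg)
bilingual_map = {"bhala": "good", "kharapa": "bad"}            # target word -> source word (toy examples)

target_sentiment = {
    tgt: source_sentiment[src]
    for tgt, src in bilingual_map.items()
    if src in source_sentiment
}
print(target_sentiment)   # {'bhala': (0.75, 0.0), 'kharapa': (0.0, 0.75)}
```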
Improving Distributed Representations of Tweets - Present and Future
Title | Improving Distributed Representations of Tweets - Present and Future |
Authors | Ganesh Jawahar |
Abstract | |
Tasks | Information Retrieval, Representation Learning, Sentiment Analysis, Unsupervised Representation Learning |
Published | 2017-07-01 |
URL | https://www.aclweb.org/anthology/P17-3002/ |
PWC | https://paperswithcode.com/paper/improving-distributed-representations-of |
Repo | |
Framework | |
The parse is darc and full of errors: Universal dependency parsing with transition-based and graph-based algorithms
Title | The parse is darc and full of errors: Universal dependency parsing with transition-based and graph-based algorithms |
Authors | Kuan Yu, Pavel Sofroniev, Erik Schill, Erhard Hinrichs |
Abstract | We developed two simple systems for dependency parsing: darc, a transition-based parser, and mstnn, a graph-based parser. We tested our systems in the CoNLL 2017 UD Shared Task, with darc being the official system. Darc ranked 12th among 33 systems, just above the baseline. Mstnn had no official ranking, but its main score was above that of the 27th-ranked system. In this paper, we describe our two systems, examine their strengths and weaknesses, and discuss the lessons we learned. |
Tasks | Dependency Parsing |
Published | 2017-08-01 |
URL | https://www.aclweb.org/anthology/K17-3013/ |
PWC | https://paperswithcode.com/paper/the-parse-is-darc-and-full-of-errors |
Repo | |
Framework | |
Accelerated First-order Methods for Geodesically Convex Optimization on Riemannian Manifolds
Title | Accelerated First-order Methods for Geodesically Convex Optimization on Riemannian Manifolds |
Authors | Yuanyuan Liu, Fanhua Shang, James Cheng, Hong Cheng, Licheng Jiao |
Abstract | In this paper, we propose an accelerated first-order method for geodesically convex optimization, which generalizes the standard Nesterov accelerated method from Euclidean space to nonlinear Riemannian space. We first derive two equations and obtain two nonlinear operators for geodesically convex optimization instead of the linear extrapolation step in Euclidean space. In particular, we analyze the global convergence properties of our accelerated method for geodesically strongly-convex problems, which show that our method improves the convergence rate from O((1-\mu/L)^{k}) to O((1-\sqrt{\mu/L})^{k}). Moreover, our method also improves the global convergence rate on geodesically general convex problems from O(1/k) to O(1/k^{2}). Finally, we give a specific iterative scheme for matrix Karcher mean problems, and validate our theoretical results with experiments. |
Tasks | |
Published | 2017-12-01 |
URL | http://papers.nips.cc/paper/7072-accelerated-first-order-methods-for-geodesically-convex-optimization-on-riemannian-manifolds |
PDF | http://papers.nips.cc/paper/7072-accelerated-first-order-methods-for-geodesically-convex-optimization-on-riemannian-manifolds.pdf |
PWC | https://paperswithcode.com/paper/accelerated-first-order-methods-for |
Repo | |
Framework | |
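For orientation, the contrast drawn in the abstract above can be sketched as follows; the geodesic step below uses the standard exponential/logarithm-map form purely for illustration and is not necessarily the paper's exact pair of nonlinear operators.

```latex
% Euclidean Nesterov iteration (for reference):
%   y_k = x_k + \beta_k (x_k - x_{k-1}),  \quad  x_{k+1} = y_k - \tfrac{1}{L}\nabla f(y_k).
% Hedged geodesic analogue on a manifold M (illustrative only):
\begin{align*}
  y_k     &= \operatorname{Exp}_{x_k}\!\bigl(-\beta_k \operatorname{Log}_{x_k}(x_{k-1})\bigr), \\
  x_{k+1} &= \operatorname{Exp}_{y_k}\!\bigl(-\tfrac{1}{L}\,\operatorname{grad} f(y_k)\bigr),
\end{align*}
% with the rate improving from $O\bigl((1-\mu/L)^{k}\bigr)$ to $O\bigl((1-\sqrt{\mu/L})^{k}\bigr)$
% in the geodesically strongly convex case, as stated in the abstract.
```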
Recovering Question Answering Errors via Query Revision
Title | Recovering Question Answering Errors via Query Revision |
Authors | Semih Yavuz, Izzeddin Gur, Yu Su, Xifeng Yan |
Abstract | The existing factoid QA systems often lack a post-inspection component that can help models recover from their own mistakes. In this work, we propose to crosscheck the corresponding KB relations behind the predicted answers and identify potential inconsistencies. Instead of developing a new model that accepts evidence collected from these relations, we choose to plug them back into the original questions directly and check if the revised question makes sense or not. A bidirectional LSTM is applied to encode revised questions. We develop a scoring mechanism over the revised question encodings to refine the predictions of a base QA system. This approach can improve the F1 score of STAGG (Yih et al., 2015), one of the leading QA systems, from 52.5% to 53.9% on WEBQUESTIONS data. |
Tasks | Question Answering, Semantic Parsing |
Published | 2017-09-01 |
URL | https://www.aclweb.org/anthology/D17-1094/ |
PWC | https://paperswithcode.com/paper/recovering-question-answering-errors-via |
Repo | |
Framework | |
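A hedged sketch of the scoring component described in the abstract above (not the authors' code): a bidirectional LSTM encodes a revised question and a linear layer maps the pooled states to a plausibility score. Vocabulary size, dimensions, and mean pooling are placeholder choices.

```python
# Hedged sketch: biLSTM encoder plus a linear scorer for revised questions.
import torch
import torch.nn as nn

class RevisionScorer(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)          # forward + backward states -> scalar

    def forward(self, token_ids):
        out, _ = self.lstm(self.emb(token_ids))        # (batch, seq, 2*hidden)
        pooled = out.mean(dim=1)                       # simple mean pooling over time
        return self.score(pooled).squeeze(-1)          # higher = more plausible revision

scorer = RevisionScorer(vocab_size=5000)
fake_batch = torch.randint(0, 5000, (2, 12))           # two revised questions, 12 tokens each
print(scorer(fake_batch).shape)                        # torch.Size([2])
```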
A Statistical, Grammar-Based Approach to Microplanning
Title | A Statistical, Grammar-Based Approach to Microplanning |
Authors | Claire Gardent, Laura Perez-Beltrachini |
Abstract | Although there has been much work in recent years on data-driven natural language generation, little attention has been paid to the fine-grained interactions that arise during microplanning between aggregation, surface realization, and sentence segmentation. In this article, we propose a hybrid symbolic/statistical approach to jointly model the constraints regulating these interactions. Our approach integrates a small handwritten grammar, a statistical hypertagger, and a surface realization algorithm. It is applied to the verbalization of knowledge base queries and tested on 13 knowledge bases to demonstrate domain independence. We evaluate our approach in several ways. A quantitative analysis shows that the hybrid approach outperforms a purely symbolic approach in terms of both speed and coverage. Results from a human study indicate that users find the output of this hybrid statistical/symbolic system more fluent than both a template-based and a purely symbolic grammar-based approach. Finally, we illustrate by means of examples that our approach can account for various factors impacting aggregation, sentence segmentation, and surface realization. |
Tasks | Text Generation |
Published | 2017-04-01 |
URL | https://www.aclweb.org/anthology/J17-1001/ |
PWC | https://paperswithcode.com/paper/a-statistical-grammar-based-approach-to |
Repo | |
Framework | |
Speaking, Seeing, Understanding: Correlating semantic models with conceptual representation in the brain
Title | Speaking, Seeing, Understanding: Correlating semantic models with conceptual representation in the brain |
Authors | Luana Bulat, Stephen Clark, Ekaterina Shutova |
Abstract | Research in computational semantics is increasingly guided by our understanding of human semantic processing. However, semantic models are typically studied in the context of natural language processing system performance. In this paper, we present a systematic evaluation and comparison of a range of widely-used, state-of-the-art semantic models in their ability to predict patterns of conceptual representation in the human brain. Our results provide new insights both for the design of computational semantic models and for further research in cognitive neuroscience. |
Tasks | Semantic Textual Similarity |
Published | 2017-09-01 |
URL | https://www.aclweb.org/anthology/D17-1113/ |
PWC | https://paperswithcode.com/paper/speaking-seeing-understanding-correlating |
Repo | |
Framework | |
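One common way to carry out the comparison described in the abstract above is representational similarity analysis; the sketch below is illustrative only and may differ from the evaluation actually used in the paper. The concept vectors and voxel data are random placeholders.

```python
# Illustrative sketch (not necessarily the paper's evaluation): compare a semantic
# model with brain data by correlating their pairwise similarity structures over
# the same set of concepts (representational similarity analysis).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
model_vectors = rng.normal(size=(20, 300))   # 20 concepts x model dimensions (placeholder)
brain_vectors = rng.normal(size=(20, 500))   # 20 concepts x voxels (placeholder)

model_dists = pdist(model_vectors, metric="cosine")   # condensed pairwise dissimilarities
brain_dists = pdist(brain_vectors, metric="cosine")
rho, p = spearmanr(model_dists, brain_dists)          # rank correlation of the two structures
print(f"RSA correlation: {rho:.3f} (p={p:.3f})")
```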
當代非監督式方法之比較於節錄式語音摘要 (An Empirical Comparison of Contemporary Unsupervised Approaches for Extractive Speech Summarization) [In Chinese]
Title | 當代非監督式方法之比較於節錄式語音摘要 (An Empirical Comparison of Contemporary Unsupervised Approaches for Extractive Speech Summarization) [In Chinese] |
Authors | Shih-Hung Liu, Kuan-Yu Chen, Kai-Wun Shih, Berlin Chen, Hsin-Min Wang, Wen-Lian Hsu |
Abstract | |
Tasks | Information Retrieval, Language Modelling |
Published | 2017-06-01 |
URL | https://www.aclweb.org/anthology/O17-2001/ |
PWC | https://paperswithcode.com/paper/caecca1413a1-e1414c-ea14eae3e-an-empirical |
Repo | |
Framework | |
Greedy Transition-Based Dependency Parsing with Stack LSTMs
Title | Greedy Transition-Based Dependency Parsing with Stack LSTMs |
Authors | Miguel Ballesteros, Chris Dyer, Yoav Goldberg, Noah A. Smith |
Abstract | We introduce a greedy transition-based parser that learns to represent parser states using recurrent neural networks. Our primary innovation that enables us to do this efficiently is a new control structure for sequential neural networks: the stack long short-term memory unit (LSTM). Like the conventional stack data structures used in transition-based parsers, elements can be pushed to or popped from the top of the stack in constant time, but, in addition, an LSTM maintains a continuous space embedding of the stack contents. Our model captures three facets of the parser's state: (i) unbounded look-ahead into the buffer of incoming words, (ii) the complete history of transition actions taken by the parser, and (iii) the complete contents of the stack of partially built tree fragments, including their internal structures. In addition, we compare two different word representations: (i) standard word vectors based on look-up tables and (ii) character-based models of words. Although standard word embedding models work well in all languages, the character-based models improve the handling of out-of-vocabulary words, particularly in morphologically rich languages. Finally, we discuss the use of dynamic oracles in training the parser. During training, dynamic oracles alternate between sampling parser states from the training data and from the model as it is being learned, making the model more robust to the kinds of errors that will be made at test time. Training our model with dynamic oracles yields a linear-time greedy parser with very competitive performance. |
Tasks | Dependency Parsing, Transition-Based Dependency Parsing |
Published | 2017-06-01 |
URL | https://www.aclweb.org/anthology/J17-2002/ |
PWC | https://paperswithcode.com/paper/greedy-transition-based-dependency-parsing |
Repo | |
Framework | |
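A minimal sketch of the stack LSTM idea from the abstract above (not the authors' implementation): each push advances an LSTM cell by one step and each pop restores the previous state, so the top state always provides a continuous embedding of the current stack contents in constant time per operation.

```python
# Minimal stack LSTM sketch: a stack of (h, c) LSTM states supporting push/pop.
import torch
import torch.nn as nn

class StackLSTM(nn.Module):
    def __init__(self, input_dim=50, hidden_dim=64):
        super().__init__()
        self.cell = nn.LSTMCell(input_dim, hidden_dim)
        zero = torch.zeros(1, hidden_dim)
        self.states = [(zero, zero)]          # stack of (h, c); index 0 is the empty stack

    def push(self, x):                        # x: (1, input_dim) embedding of the pushed element
        h, c = self.cell(x, self.states[-1])
        self.states.append((h, c))

    def pop(self):
        if len(self.states) > 1:
            self.states.pop()                 # roll back to the previous LSTM state

    def summary(self):
        return self.states[-1][0]             # h of the top state summarizes the stack

stack = StackLSTM()
stack.push(torch.randn(1, 50))
stack.push(torch.randn(1, 50))
stack.pop()
print(stack.summary().shape)                  # torch.Size([1, 64])
```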
Proceedings of the 7th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2017)
Title | Proceedings of the 7th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2017) |
Authors | |
Abstract | |
Tasks | |
Published | 2017-04-01 |
URL | https://www.aclweb.org/anthology/W17-0700/ |
PWC | https://paperswithcode.com/paper/proceedings-of-the-7th-workshop-on-cognitive |
Repo | |
Framework | |