Paper Group NANR 169
Can You See the (Linguistic) Difference? Exploring Mass/Count Distinction in Vision. Unsupervised Induction of Compositional Types for English Adjective-Noun Pairs. Does Free Word Order Hurt? Assessing the Practical Lexical Function Model for Croatian. Parsing with Context Embeddings. Entropy Reduction correlates with temporal lobe activity. CUNI S …
Can You See the (Linguistic) Difference? Exploring Mass/Count Distinction in Vision
Title | Can You See the (Linguistic) Difference? Exploring Mass/Count Distinction in Vision |
Authors | David Addison Smith, Sandro Pezzelle, Francesca Franzon, Chiara Zanini, Raffaella Bernardi |
Abstract | |
Tasks | |
Published | 2017-01-01 |
URL | https://www.aclweb.org/anthology/W17-6939/ |
PWC | https://paperswithcode.com/paper/can-you-see-the-linguistic-difference |
Repo | |
Framework | |
Unsupervised Induction of Compositional Types for English Adjective-Noun Pairs
Title | Unsupervised Induction of Compositional Types for English Adjective-Noun Pairs |
Authors | Wiebke Petersen, Oliver Hellwig |
Abstract | |
Tasks | Word Embeddings |
Published | 2017-01-01 |
URL | https://www.aclweb.org/anthology/W17-6932/ |
PWC | https://paperswithcode.com/paper/unsupervised-induction-of-compositional-types |
Repo | |
Framework | |
Does Free Word Order Hurt? Assessing the Practical Lexical Function Model for Croatian
Title | Does Free Word Order Hurt? Assessing the Practical Lexical Function Model for Croatian |
Authors | Zoran Medić, Jan Šnajder, Sebastian Padó |
Abstract | The Practical Lexical Function (PLF) model is a model of computational distributional semantics that attempts to strike a balance between expressivity and learnability in predicting phrase meaning and shows competitive results. We investigate how well the PLF carries over to free word order languages, given that it builds on observations of predicate-argument combinations that are harder to recover in free word order languages. We evaluate variants of the PLF for Croatian, using a new lexical substitution dataset. We find that the PLF works about as well for Croatian as for English, but demonstrate that its strength lies in modeling verbs, and that the free word order affects the less robust PLF variant. |
Tasks | Semantic Textual Similarity |
Published | 2017-08-01 |
URL | https://www.aclweb.org/anthology/S17-1014/ |
PWC | https://paperswithcode.com/paper/does-free-word-order-hurt-assessing-the |
Repo | |
Framework | |
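The PLF composition step the abstract refers to can be illustrated with a minimal sketch. This is a simplified rendering of the Practical Lexical Function model (due to Paperno et al.): the dimensions, vectors, and slot matrix below are toy values, and learning of the matrices is omitted.

```python
import numpy as np

def plf_compose(pred_vec, slot_matrix, arg_vec):
    """Simplified PLF composition: a predicate carries a lexical
    vector plus one learned matrix per argument slot; combining it
    with an argument adds the matrix-mapped argument vector to the
    predicate's own vector."""
    return pred_vec + slot_matrix @ arg_vec

# Toy 3-dimensional space (values are illustrative only).
verb = np.array([0.2, 0.5, 0.1])   # lexical vector for the verb
subj_slot = np.eye(3)              # learned subject-slot matrix
noun = np.array([0.4, 0.0, 0.3])   # vector for the subject noun

phrase = plf_compose(verb, subj_slot, noun)
```

The free-word-order question in the paper concerns how reliably such predicate-argument pairs (verb plus subject, verb plus object) can be extracted from corpora to train the slot matrices.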
Parsing with Context Embeddings
Title | Parsing with Context Embeddings |
Authors | {"O}mer K{\i}rnap, Berkay Furkan {"O}nder, Deniz Yuret |
Abstract | We introduce context embeddings, dense vectors derived from a language model that represent the left/right context of a word instance, and demonstrate that context embeddings significantly improve the accuracy of our transition based parser. Our model consists of a bidirectional LSTM (BiLSTM) based language model that is pre-trained to predict words in plain text, and a multi-layer perceptron (MLP) decision model that uses features from the language model to predict the correct actions for an ArcHybrid transition based parser. We participated in the CoNLL 2017 UD Shared Task as the "Koç University" team and our system was ranked 7th out of 33 systems that parsed 81 treebanks in 49 languages. |
Tasks | Language Modelling, Word Embeddings, Word Sense Induction |
Published | 2017-08-01 |
URL | https://www.aclweb.org/anthology/K17-3008/ |
PWC | https://paperswithcode.com/paper/parsing-with-context-embeddings |
Repo | |
Framework | |
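The construction of a context embedding described in the abstract can be sketched as follows. The hidden states are assumed to be precomputed BiLSTM language-model outputs, and the exact boundary handling and state indexing used by the Koç University system are assumptions here.

```python
import numpy as np

def context_embedding(fwd_states, bwd_states, i):
    """Context embedding for token i: the forward LM hidden state
    just before the token (summarizing the left context) concatenated
    with the backward LM hidden state just after it (summarizing the
    right context). Sentence-boundary tokens get a zero vector on the
    missing side."""
    n = len(fwd_states)
    left = fwd_states[i - 1] if i > 0 else np.zeros_like(fwd_states[0])
    right = bwd_states[i + 1] if i + 1 < n else np.zeros_like(bwd_states[0])
    return np.concatenate([left, right])
```

In the paper's parser, vectors like these feed the MLP that scores ArcHybrid transitions alongside other features.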
Entropy Reduction correlates with temporal lobe activity
Title | Entropy Reduction correlates with temporal lobe activity |
Authors | Matthew Nelson, Stanislas Dehaene, Christophe Pallier, John Hale |
Abstract | Using the Entropy Reduction incremental complexity metric, we relate high gamma power signals from the brains of epileptic patients to incremental stages of syntactic analysis in English and French. We find that signals recorded intracranially from the anterior Inferior Temporal Sulcus (aITS) and the posterior Inferior Temporal Gyrus (pITG) correlate with word-by-word Entropy Reduction values derived from phrase structure grammars for those languages. In the anterior region, this correlation persists even in combination with surprisal co-predictors from PCFG and ngram models. The result confirms the idea that the brain's temporal lobe houses a parsing function, one whose incremental processing difficulty profile reflects changes in grammatical uncertainty. |
Tasks | |
Published | 2017-04-01 |
URL | https://www.aclweb.org/anthology/W17-0701/ |
PWC | https://paperswithcode.com/paper/entropy-reduction-correlates-with-temporal |
Repo | |
Framework | |
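The Entropy Reduction metric the abstract relies on can be computed from the parser's probability distribution over candidate analyses before and after each word. A minimal sketch (the probabilities below are toy values, not derived from the paper's grammars):

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a distribution over parse states."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def entropy_reduction(prior, posterior):
    """Hale's Entropy Reduction: the non-negative drop in uncertainty
    about the syntactic analysis after reading one more word."""
    return max(0.0, entropy(prior) - entropy(posterior))

# Toy example: three candidate parses; reading a word rules one out
# and sharpens belief in the survivors.
before = [0.5, 0.3, 0.2]
after = [0.8, 0.2]
print(round(entropy_reduction(before, after), 3))  # prints 0.764
```

Clipping the reduction at zero reflects the metric's definition: only decreases in grammatical uncertainty count as processing work.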
CUNI System for WMT17 Automatic Post-Editing Task
Title | CUNI System for WMT17 Automatic Post-Editing Task |
Authors | Dušan Variš, Ondřej Bojar |
Abstract | |
Tasks | Automatic Post-Editing, Machine Translation |
Published | 2017-09-01 |
URL | https://www.aclweb.org/anthology/W17-4777/ |
https://www.aclweb.org/anthology/W17-4777 | |
PWC | https://paperswithcode.com/paper/cuni-system-for-wmt17-automatic-post-editing |
Repo | |
Framework | |
Recognition of Genuine Polish Suicide Notes
Title | Recognition of Genuine Polish Suicide Notes |
Authors | Maciej Piasecki, Ksenia Młynarczyk, Jan Kocoń |
Abstract | In this article we present the results of recent research on the recognition of genuine Polish suicide notes (SNs). We provide a useful method to distinguish SNs from other types of discourse, including counterfeited SNs. The method uses a wide range of word-based and semantic features and was evaluated on the Polish Corpus of Suicide Notes, which contains 1244 genuine SNs, expanded with a manually prepared set of 334 counterfeited SNs and 2200 letter-like texts from the Internet. We used the algorithm to create class-related sense dictionaries to improve the results of SN classification. The obtained results show that there are fundamental differences between genuine and counterfeited SNs. The applied method of sense dictionary construction proved to be the best way of improving the model. |
Tasks | |
Published | 2017-09-01 |
URL | https://www.aclweb.org/anthology/R17-1076/ |
https://doi.org/10.26615/978-954-452-049-6_076 | |
PWC | https://paperswithcode.com/paper/recognition-of-genuine-polish-suicide-notes |
Repo | |
Framework | |
Creating Common Ground through Multimodal Simulations
Title | Creating Common Ground through Multimodal Simulations |
Authors | James Pustejovsky, Nikhil Krishnaswamy, Bruce Draper, Pradyumna Narayana, Rahul Bangar |
Abstract | |
Tasks | |
Published | 2017-01-01 |
URL | https://www.aclweb.org/anthology/W17-7103/ |
PWC | https://paperswithcode.com/paper/creating-common-ground-through-multimodal |
Repo | |
Framework | |
Identifying Polysemous Words and Inferring Sense Glosses in a Semantic Network
Title | Identifying Polysemous Words and Inferring Sense Glosses in a Semantic Network |
Authors | Maxime Chapuis, Mathieu Lafourcade |
Abstract | |
Tasks | |
Published | 2017-01-01 |
URL | https://www.aclweb.org/anthology/W17-7206/ |
PWC | https://paperswithcode.com/paper/identifying-polysemous-words-and-inferring |
Repo | |
Framework | |
Annotating similes in literary texts
Title | Annotating similes in literary texts |
Authors | Suzanne Mpouli |
Abstract | |
Tasks | |
Published | 2017-01-01 |
URL | https://www.aclweb.org/anthology/W17-7403/ |
PWC | https://paperswithcode.com/paper/annotating-similes-in-literary-texts |
Repo | |
Framework | |
POMELO: Medline corpus with manually annotated food-drug interactions
Title | POMELO: Medline corpus with manually annotated food-drug interactions |
Authors | Thierry Hamon, Vincent Tabanou, Fleur Mougin, Natalia Grabar, Frantz Thiessard |
Abstract | When patients take more than one medication, they may be at risk of drug interactions, which means that a given drug can cause unexpected effects when taken in combination with other drugs. Similar effects may occur when drugs are taken together with certain foods or beverages. For instance, grapefruit interacts with several drugs, because its active ingredients inhibit enzymes involved in drug metabolism and can thus cause an excessive effective dosage of these drugs. Yet, information on food/drug interactions remains poorly researched: current work comes mainly from the medical domain, with only tentative contributions from the computer science and NLP communities. One factor limiting this research is the availability of annotated corpora and reference data. The purpose of our work is to describe the rationale and approach for the creation and annotation of a scientific corpus containing information on food/drug interactions. The corpus, named POMELO, contains 639 MEDLINE citations (titles and abstracts), corresponding to 5,752 sentences, and was manually annotated by two experts. This annotated corpus will be made available for research purposes. |
Tasks | |
Published | 2017-09-01 |
URL | https://www.aclweb.org/anthology/W17-8010/ |
https://doi.org/10.26615/978-954-452-044-1_010 | |
PWC | https://paperswithcode.com/paper/pomelo-medline-corpus-with-manually-annotated |
Repo | |
Framework | |
Multi-Model and Crosslingual Dependency Analysis
Title | Multi-Model and Crosslingual Dependency Analysis |
Authors | Johannes Heinecke, Munshi Asadullah |
Abstract | This paper describes the system of the Team Orange-Deskiñ, used for the CoNLL 2017 UD Shared Task in Multilingual Dependency Parsing. We based our approach on an existing open source tool (BistParser), which we modified in order to produce the required output. Additionally we added a kind of pseudo-projectivisation. This was needed since some of the task's languages have a high percentage of non-projective dependency trees. In most cases we also employed word embeddings. For the 4 surprise languages, the data provided seemed too little to train on. Thus we decided to use the training data of typologically close languages instead. Our system achieved a macro-averaged LAS of 68.61{%} (10th in the overall ranking) which improved to 69.38{%} after bug fixes. |
Tasks | Dependency Parsing, Word Embeddings |
Published | 2017-08-01 |
URL | https://www.aclweb.org/anthology/K17-3011/ |
PWC | https://paperswithcode.com/paper/multi-model-and-crosslingual-dependency |
Repo | |
Framework | |
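The pseudo-projectivisation step mentioned in the abstract is motivated by trees with crossing arcs. A minimal projectivity check illustrates what triggers it (1-based token indices with head 0 for the root; the lifting transformation itself, as in Nivre and Nilsson's scheme, is omitted):

```python
def is_projective(heads):
    """Return True if the dependency tree is projective, i.e. no two
    arcs cross. heads[d-1] is the head index of token d (0 = root)."""
    arcs = [(min(h, d), max(h, d)) for d, h in enumerate(heads, start=1)]
    for l1, r1 in arcs:
        for l2, r2 in arcs:
            # Two arcs cross when one starts strictly inside the other
            # but ends strictly outside it.
            if l1 < l2 < r1 < r2:
                return False
    return True
```

Non-projective trees fail this check, which is why a parser restricted to projective transitions needs the arcs lifted before training and deprojectivised afterwards.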
TurkuNLP: Delexicalized Pre-training of Word Embeddings for Dependency Parsing
Title | TurkuNLP: Delexicalized Pre-training of Word Embeddings for Dependency Parsing |
Authors | Jenna Kanerva, Juhani Luotolahti, Filip Ginter |
Abstract | We present the TurkuNLP entry in the CoNLL 2017 Shared Task on Multilingual Parsing from Raw Text to Universal Dependencies. The system is based on the UDPipe parser with our focus being in exploring various techniques to pre-train the word embeddings used by the parser in order to improve its performance especially on languages with small training sets. The system ranked 11th among the 33 participants overall, being 8th on the small treebanks, 10th on the large treebanks, 12th on the parallel test sets, and 26th on the surprise languages. |
Tasks | Dependency Parsing, Word Embeddings |
Published | 2017-08-01 |
URL | https://www.aclweb.org/anthology/K17-3012/ |
PWC | https://paperswithcode.com/paper/turkunlp-delexicalized-pre-training-of-word |
Repo | |
Framework | |
Preventing Gradient Explosions in Gated Recurrent Units
Title | Preventing Gradient Explosions in Gated Recurrent Units |
Authors | Sekitoshi Kanai, Yasuhiro Fujiwara, Sotetsu Iwamura |
Abstract | A gated recurrent unit (GRU) is a successful recurrent neural network architecture for time-series data. The GRU is typically trained using a gradient-based method, which is subject to the exploding gradient problem in which the gradient increases significantly. This problem is caused by an abrupt change in the dynamics of the GRU due to a small variation in the parameters. In this paper, we find a condition under which the dynamics of the GRU changes drastically and propose a learning method to address the exploding gradient problem. Our method constrains the dynamics of the GRU so that it does not drastically change. We evaluated our method in experiments on language modeling and polyphonic music modeling. Our experiments showed that our method can prevent the exploding gradient problem and improve modeling accuracy. |
Tasks | Language Modelling, Music Modeling, Time Series |
Published | 2017-12-01 |
URL | http://papers.nips.cc/paper/6647-preventing-gradient-explosions-in-gated-recurrent-units |
http://papers.nips.cc/paper/6647-preventing-gradient-explosions-in-gated-recurrent-units.pdf | |
PWC | https://paperswithcode.com/paper/preventing-gradient-explosions-in-gated |
Repo | |
Framework | |
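The paper constrains the GRU's dynamics directly; the standard baseline mitigation it is positioned against, global-norm gradient clipping, can be sketched as follows (this is the common technique, not the authors' method):

```python
import math
import numpy as np

def clip_by_global_norm(grads, max_norm):
    """Rescale a list of gradient arrays so their combined L2 norm
    does not exceed max_norm; a standard guard against exploding
    gradients in recurrent networks such as GRUs."""
    total = math.sqrt(sum(float(np.sum(g * g)) for g in grads))
    if total > max_norm:
        grads = [g * (max_norm / total) for g in grads]
    return grads
```

Clipping caps the update size after an explosion has occurred, whereas the paper's approach constrains the parameters so that the abrupt change in the GRU's dynamics (the cause of the explosion) does not happen in the first place.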
Modeling Situations in Neural Chat Bots
Title | Modeling Situations in Neural Chat Bots |
Authors | Shoetsu Sato, Naoki Yoshinaga, Masashi Toyoda, Masaru Kitsuregawa |
Abstract | |
Tasks | Machine Translation, Task-Oriented Dialogue Systems |
Published | 2017-07-01 |
URL | https://www.aclweb.org/anthology/P17-3020/ |
PWC | https://paperswithcode.com/paper/modeling-situations-in-neural-chat-bots |
Repo | |
Framework | |