May 4, 2019

1929 words 10 mins read

Paper Group NANR 227

Completely random measures for modelling block-structured sparse networks. A General Regularization Framework for Domain Adaptation. Parallel Discourse Annotations on a Corpus of Short Texts. Sentiment Analysis - What are we talking about?. Distributional Hypernym Generation by Jointly Learning Clusters and Projections. Learning Thesaurus Relations …

Completely random measures for modelling block-structured sparse networks

Title Completely random measures for modelling block-structured sparse networks
Authors Tue Herlau, Mikkel N. Schmidt, Morten Mørup
Abstract Statistical methods for network data often parameterize the edge probability by attributing latent traits such as block structure to the vertices and assume exchangeability in the sense of the Aldous-Hoover representation theorem. These assumptions are, however, incompatible with traits found in real-world networks, such as a power-law degree distribution. Recently, Caron & Fox (2014) proposed the use of a different notion of exchangeability after Kallenberg (2005) and obtained a network model which permits edge inhomogeneity, such as a power-law degree distribution, whilst retaining desirable statistical properties. However, this model does not capture latent vertex traits such as block structure. In this work we re-introduce the use of block structure for network models obeying Kallenberg’s notion of exchangeability and thereby obtain a collapsed model which admits inference of both block structure and edge inhomogeneity. We derive a simple expression for the likelihood and an efficient sampling method. The obtained model is not significantly more difficult to implement than existing approaches to block modelling and performs well on real network datasets.
Tasks
Published 2016-12-01
URL http://papers.nips.cc/paper/6521-completely-random-measures-for-modelling-block-structured-sparse-networks
PDF http://papers.nips.cc/paper/6521-completely-random-measures-for-modelling-block-structured-sparse-networks.pdf
PWC https://paperswithcode.com/paper/completely-random-measures-for-modelling
Repo
Framework
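
The abstract above leaves the construction implicit. Purely as intuition, here is a minimal sketch of a finite approximation: each vertex gets a block label and a heavy-tailed sociability weight, and edge counts are Poisson with rate proportional to the product of the weights and a block-block affinity. All names and parameters (num_nodes, num_blocks, eta, the Pareto weights) are illustrative assumptions, not the paper's actual generative process or collapsed likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and hyperparameters (assumptions, not taken from the paper).
num_nodes, num_blocks = 200, 3

# Block assignments and a symmetric block-block affinity matrix.
blocks = rng.integers(num_blocks, size=num_nodes)
eta = rng.gamma(shape=1.0, scale=1.0, size=(num_blocks, num_blocks))
eta = (eta + eta.T) / 2.0

# Heavy-tailed per-vertex sociability weights give edge inhomogeneity
# (a crude finite stand-in for the completely-random-measure weights).
w = rng.pareto(a=1.5, size=num_nodes) + 1e-3

# Edge counts are Poisson with rate w_i * w_j * eta[block_i, block_j];
# keeping the upper triangle and mirroring yields an undirected graph.
rate = np.outer(w, w) * eta[np.ix_(blocks, blocks)]
counts = np.triu(rng.poisson(rate), k=1)
adj = ((counts + counts.T) > 0).astype(int)

print("edges:", adj.sum() // 2, "density:", adj.mean())
```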

A General Regularization Framework for Domain Adaptation

Title A General Regularization Framework for Domain Adaptation
Authors Wei Lu, Hai Leong Chieu, Jonathan Löfgren
Abstract
Tasks Domain Adaptation, Multi-Task Learning, Transfer Learning
Published 2016-11-01
URL https://www.aclweb.org/anthology/D16-1095/
PDF https://www.aclweb.org/anthology/D16-1095
PWC https://paperswithcode.com/paper/a-general-regularization-framework-for-domain
Repo
Framework

Parallel Discourse Annotations on a Corpus of Short Texts

Title Parallel Discourse Annotations on a Corpus of Short Texts
Authors Manfred Stede, Stergos Afantenos, Andreas Peldszus, Nicholas Asher, Jérémy Perret
Abstract We present the first corpus of texts annotated with two alternative approaches to discourse structure, Rhetorical Structure Theory (Mann and Thompson, 1988) and Segmented Discourse Representation Theory (Asher and Lascarides, 2003). 112 short argumentative texts have been analyzed according to these two theories. Furthermore, in previous work, the same texts have already been annotated for their argumentation structure, according to the scheme of Peldszus and Stede (2013). This corpus therefore enables studies of correlations between the two accounts of discourse structure, and between discourse and argumentation. We converted the three annotation formats to a common dependency tree format that makes the structures directly comparable, and we describe some initial findings.
Tasks
Published 2016-05-01
URL https://www.aclweb.org/anthology/L16-1167/
PDF https://www.aclweb.org/anthology/L16-1167
PWC https://paperswithcode.com/paper/parallel-discourse-annotations-on-a-corpus-of
Repo
Framework
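
The common dependency-tree format mentioned in the abstract is not specified there. Purely as a hypothetical illustration of how two discourse analyses of the same text might be compared once both are dependency trees, the sketch below scores attachment agreement (shared head-dependent pairs) over the same discourse units; the data layout is an assumption.

```python
# Hypothetical comparison of two discourse analyses of the same text, assuming
# both have been converted to head-dependent pairs over the same discourse units.
def attachment_agreement(tree_a: dict, tree_b: dict) -> float:
    """Fraction of discourse units attached to the same head in both trees.

    Each tree maps a dependent unit id to its head unit id (the root maps to None).
    """
    shared_units = set(tree_a) & set(tree_b)
    if not shared_units:
        return 0.0
    agree = sum(1 for u in shared_units if tree_a[u] == tree_b[u])
    return agree / len(shared_units)

# Toy example: an RST-derived tree vs. an SDRT-derived tree over five units.
rst_tree = {1: None, 2: 1, 3: 1, 4: 3, 5: 3}
sdrt_tree = {1: None, 2: 1, 3: 2, 4: 3, 5: 3}
print(attachment_agreement(rst_tree, sdrt_tree))  # 0.8
```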

Sentiment Analysis - What are we talking about?

Title Sentiment Analysis - What are we talking about?
Authors Alexandra Balahur
Abstract
Tasks Common Sense Reasoning, Sentiment Analysis
Published 2016-06-01
URL https://www.aclweb.org/anthology/W16-0401/
PDF https://www.aclweb.org/anthology/W16-0401
PWC https://paperswithcode.com/paper/sentiment-analysis-what-are-we-talking-about
Repo
Framework

Distributional Hypernym Generation by Jointly Learning Clusters and Projections

Title Distributional Hypernym Generation by Jointly Learning Clusters and Projections
Authors Josuke Yamane, Tomoya Takatani, Hitoshi Yamada, Makoto Miwa, Yutaka Sasaki
Abstract We propose a novel word embedding-based hypernym generation model that jointly learns clusters of hyponym-hypernym relations, i.e., hypernymy, and projections from hyponym to hypernym embeddings. Most recent hypernym detection models focus on a hypernymy classification problem that determines whether a pair of words is in hypernymy or not. These models do not directly address the hypernym generation problem, in which a model generates hypernyms for a given word. Unlike previous studies, our model jointly learns the clusters and projections while adjusting the number of clusters, so that the number of clusters can be determined depending on the learned projections and vice versa. Our model also boosts performance by incorporating inner product-based similarity measures and negative examples, i.e., sampled non-hypernyms, into our learning objectives. We evaluated our joint learning models on Japanese and English hypernym generation and showed a significant improvement over an existing pipeline model. Our model also compared favorably to existing distributed hypernym detection models on the English hypernym classification task.
Tasks Question Answering, Word Embeddings
Published 2016-12-01
URL https://www.aclweb.org/anthology/C16-1176/
PDF https://www.aclweb.org/anthology/C16-1176
PWC https://paperswithcode.com/paper/distributional-hypernym-generation-by-jointly
Repo
Framework
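
As a rough sketch of the generation step described in the abstract (not the paper's joint training objective), one can keep a projection matrix per hypernymy cluster, project a hyponym embedding through each, and rank vocabulary words by inner-product similarity to the projected vectors. All shapes and names below are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, vocab_size, num_clusters = 50, 1000, 4

# Assumed inputs: word embeddings and one learned projection per cluster.
embeddings = rng.normal(size=(vocab_size, dim))
projections = rng.normal(size=(num_clusters, dim, dim))

def generate_hypernyms(hyponym_id: int, top_k: int = 5) -> list:
    """Project the hyponym through every cluster and rank candidates by inner product."""
    x = embeddings[hyponym_id]
    scores = np.full(vocab_size, -np.inf)
    for W in projections:
        projected = W @ x                      # cluster-specific hypernym prediction
        scores = np.maximum(scores, embeddings @ projected)
    scores[hyponym_id] = -np.inf               # a word is not its own hypernym
    return list(np.argsort(-scores)[:top_k])

print(generate_hypernyms(42))
```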

Learning Thesaurus Relations from Distributional Features

Title Learning Thesaurus Relations from Distributional Features
Authors Rosa Tsegaye Aga, Christian Wartena, Lucas Drumond, Lars Schmidt-Thieme
Abstract In distributional semantics words are represented by aggregated context features. The similarity of words can be computed by comparing their feature vectors. Thus, we can predict whether two words are synonymous or similar with respect to some other semantic relation. We will show on six different datasets of pairs of similar and non-similar words that a supervised learning algorithm on feature vectors representing pairs of words outperforms cosine similarity between vectors representing single words. We compared different methods to construct a feature vector representing a pair of words. We show that simple methods like pairwise addition or multiplication give better results than a recently proposed method that combines different types of features. The semantic relation we consider is relatedness of terms in thesauri for intellectual document classification. Thus our findings can directly be applied for the maintenance and extension of such thesauri. To the best of our knowledge this relation was not considered before in the field of distributional semantics.
Tasks Document Classification
Published 2016-05-01
URL https://www.aclweb.org/anthology/L16-1328/
PDF https://www.aclweb.org/anthology/L16-1328
PWC https://paperswithcode.com/paper/learning-thesaurus-relations-from
Repo
Framework
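
A minimal sketch of the comparison described in the abstract, under assumed toy data: build a feature vector for each word pair by elementwise addition (or multiplication) of the two word vectors, train a standard classifier on labelled pairs, and compare against a plain cosine-similarity threshold. The data, feature dimension, and classifier choice are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
dim, num_pairs = 100, 400

# Assumed toy data: context-feature vectors for both words of each pair,
# plus a binary label saying whether the pair is related in the thesaurus.
vecs_a = rng.normal(size=(num_pairs, dim))
vecs_b = rng.normal(size=(num_pairs, dim))
labels = rng.integers(0, 2, size=num_pairs)

# Pair representation by elementwise addition (multiplication works the same way).
pair_features = vecs_a + vecs_b
clf = LogisticRegression(max_iter=1000).fit(pair_features, labels)

# Baseline: cosine similarity between the two single-word vectors.
cosine = np.sum(vecs_a * vecs_b, axis=1) / (
    np.linalg.norm(vecs_a, axis=1) * np.linalg.norm(vecs_b, axis=1)
)
baseline_pred = (cosine > 0.0).astype(int)

# Evaluated on the training pairs only, for brevity of the sketch.
print("classifier accuracy:", clf.score(pair_features, labels))
print("cosine-threshold accuracy:", np.mean(baseline_pred == labels))
```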

The Multilingual Affective Soccer Corpus (MASC): Compiling a biased parallel corpus on soccer reportage in English, German and Dutch

Title The Multilingual Affective Soccer Corpus (MASC): Compiling a biased parallel corpus on soccer reportage in English, German and Dutch
Authors Nadine Braun, Martijn Goudbeek, Emiel Krahmer
Abstract
Tasks Text Generation
Published 2016-09-01
URL https://www.aclweb.org/anthology/W16-6612/
PDF https://www.aclweb.org/anthology/W16-6612
PWC https://paperswithcode.com/paper/the-multilingual-affective-soccer-corpus-masc
Repo
Framework

Semantic Links for Portuguese

Title Semantic Links for Portuguese
Authors Fabricio Chalub, Livy Real, Alexandre Rademaker, Valeria de Paiva
Abstract This paper describes work on incorporating Princeton's WordNet morphosemantic links into the fabric of the Portuguese OpenWordNet-PT. Morphosemantic links are relations between verbs and derivationally related nouns that are semantically typed (such as tune-tuner, in Portuguese "afinar-afinador", linked through an "agent" link). Morphosemantic links have been discussed for Princeton's WordNet for a while, but have not been added to the official database. These links are very useful and help us improve our Portuguese WordNet. Thus we discuss the integration of these links into our base and the issues we encountered with the integration.
Tasks
Published 2016-05-01
URL https://www.aclweb.org/anthology/L16-1142/
PDF https://www.aclweb.org/anthology/L16-1142
PWC https://paperswithcode.com/paper/semantic-links-for-portuguese
Repo
Framework
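
For illustration only, a morphosemantic link of the kind described above can be represented as a typed relation between a verb and a derivationally related noun. The tuple layout and example data below are assumptions, not OpenWordNet-PT's actual data model.

```python
from collections import namedtuple

# Hypothetical representation of a typed verb-noun morphosemantic link.
MorphosemanticLink = namedtuple("MorphosemanticLink", "verb noun link_type")

links = [
    MorphosemanticLink("afinar", "afinador", "agent"),   # tune -> tuner
    MorphosemanticLink("ensinar", "ensino", "event"),    # teach -> teaching
]

# Index the links by verb so they can later be attached to wordnet synsets.
by_verb = {}
for link in links:
    by_verb.setdefault(link.verb, []).append(link)

print(by_verb["afinar"][0].link_type)  # agent
```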

Building Concept Graphs from Monolingual Dictionary Entries

Title Building Concept Graphs from Monolingual Dictionary Entries
Authors Gábor Recski
Abstract We present the dict_to_4lang tool for processing entries of three monolingual dictionaries of English and mapping definitions to concept graphs following the 4lang principles of semantic representation introduced by (Kornai, 2010). 4lang representations are domain- and language-independent, and make use of only a very limited set of primitives to encode the meaning of all utterances. Our pipeline relies on the Stanford Dependency Parser for syntactic analysis; the dep_to_4lang module then builds directed graphs of concepts based on dependency relations between words in each definition. Several issues are handled by construction-specific rules that are applied to the output of dep_to_4lang. Manual evaluation suggests that ca. 75% of graphs built from the Longman Dictionary are either entirely correct or contain only minor errors. dict_to_4lang is available under an MIT license as part of the 4lang library and has been used successfully in measuring Semantic Textual Similarity (Recski and Ács, 2015). An interactive demo of core 4lang functionalities is available at http://4lang.hlt.bme.hu.
Tasks Semantic Textual Similarity
Published 2016-05-01
URL https://www.aclweb.org/anthology/L16-1417/
PDF https://www.aclweb.org/anthology/L16-1417
PWC https://paperswithcode.com/paper/building-concept-graphs-from-monolingual
Repo
Framework
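
The abstract outlines the dep_to_4lang step only at a high level. As a simplified, assumed illustration (not the actual 4lang rule set or edge-label conventions), dependency triples from a parsed definition can be turned into a directed concept graph by mapping each dependency relation to an edge between the two lemmas.

```python
import networkx as nx

# Assumed parser output for a toy definition of "dog" ("a domesticated mammal"):
# (head lemma, dependency relation, dependent lemma).
definiendum = "dog"
dependency_triples = [
    ("mammal", "det", "a"),
    ("mammal", "amod", "domesticated"),
]

# Heavily simplified, hypothetical mapping (the real pipeline applies
# construction-specific rules on top of steps like this one).
graph = nx.DiGraph()
graph.add_edge(definiendum, "mammal", label="0")    # link the headword to its genus
for head, rel, dep in dependency_triples:
    if rel == "amod":                               # modifiers become concept edges
        graph.add_edge(head, dep, label="0")

print(sorted(graph.edges(data="label")))
```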

POLY: Mining Relational Paraphrases from Multilingual Sentences

Title POLY: Mining Relational Paraphrases from Multilingual Sentences
Authors Adam Grycner, Gerhard Weikum
Abstract
Tasks Natural Language Inference, Question Answering
Published 2016-11-01
URL https://www.aclweb.org/anthology/D16-1236/
PDF https://www.aclweb.org/anthology/D16-1236
PWC https://paperswithcode.com/paper/poly-mining-relational-paraphrases-from
Repo
Framework

:telephone::person::sailboat::whale::okhand: ; or "Call me Ishmael" – How do you translate emoji?

Title :telephone::person::sailboat::whale::okhand: ; or "Call me Ishmael" – How do you translate emoji?
Authors Will Radford, Ben Hachey, Bo Han, Andy Chisholm
Abstract
Tasks Part-Of-Speech Tagging, Word Alignment
Published 2016-12-01
URL https://www.aclweb.org/anthology/U16-1018/
PDF https://www.aclweb.org/anthology/U16-1018
PWC https://paperswithcode.com/paper/telephonepersonsailboatwhaleokhand-or-call-me
Repo
Framework

Encoding Adjective Scales for Fine-grained Resources

Title Encoding Adjective Scales for Fine-grained Resources
Authors Cédric Lopez, Frédérique Segond, Christiane Fellbaum
Abstract We propose an automatic approach towards determining the relative location of adjectives on a common scale based on their strength. We focus on adjectives expressing different degrees of goodness occurring in French product (perfumes) reviews. Using morphosyntactic patterns, we extract from the reviews short phrases consisting of a noun that encodes a particular aspect of the perfume and an adjective modifying that noun. We then associate each such n-gram with the corresponding product aspect and its related star rating. Next, based on the star scores, we generate adjective scales reflecting the relative strength of specific adjectives associated with a shared attribute of the product. An automatic ordering of the adjectives "correct" (correct), "sympa" (nice), "bon" (good) and "excellent" (excellent) according to their score in our resource is consistent with an intuitive scale based on human judgments. Our long-term objective is to generate different adjective scales in an empirical manner, which could allow the enrichment of lexical resources.
Tasks
Published 2016-05-01
URL https://www.aclweb.org/anthology/L16-1177/
PDF https://www.aclweb.org/anthology/L16-1177
PWC https://paperswithcode.com/paper/encoding-adjective-scales-for-fine-grained
Repo
Framework
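
As a toy sketch of the scale-building idea (with made-up review data, not the paper's corpus or exact scoring), adjectives extracted for a shared aspect can be ordered by the mean star rating of the reviews they occur in.

```python
from collections import defaultdict

# Hypothetical (adjective, star rating) observations extracted for one aspect
# of the product, e.g. noun-adjective pairs about a perfume's scent.
observations = [
    ("correct", 3), ("correct", 2), ("sympa", 3), ("sympa", 4),
    ("bon", 4), ("bon", 4), ("excellent", 5), ("excellent", 5),
]

totals = defaultdict(lambda: [0, 0])        # adjective -> [sum of stars, count]
for adjective, stars in observations:
    totals[adjective][0] += stars
    totals[adjective][1] += 1

# Order adjectives by mean star rating to obtain a strength scale.
scale = sorted(totals, key=lambda adj: totals[adj][0] / totals[adj][1])
print(scale)  # ['correct', 'sympa', 'bon', 'excellent']
```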

When do we laugh?

Title When do we laugh?
Authors Ye Tian, Chiara Mazzocconi, Jonathan Ginzburg
Abstract
Tasks
Published 2016-09-01
URL https://www.aclweb.org/anthology/W16-3645/
PDF https://www.aclweb.org/anthology/W16-3645
PWC https://paperswithcode.com/paper/when-do-we-laugh
Repo
Framework

SPALS: Fast Alternating Least Squares via Implicit Leverage Scores Sampling

Title SPALS: Fast Alternating Least Squares via Implicit Leverage Scores Sampling
Authors Dehua Cheng, Richard Peng, Yan Liu, Ioakeim Perros
Abstract Tensor CANDECOMP/PARAFAC (CP) decomposition is a powerful but computationally challenging tool in modern data analytics. In this paper, we show ways of sampling intermediate steps of alternating minimization algorithms for computing low rank tensor CP decompositions, leading to the sparse alternating least squares (SPALS) method. Specifically, we sample the Khatri-Rao product, which arises as an intermediate object during the iterations of alternating least squares. This product captures the interactions between different tensor modes and forms the main computational bottleneck for solving many tensor related tasks. By exploiting the spectral structures of the matrix Khatri-Rao product, we provide efficient access to its statistical leverage scores. When applied to the tensor CP decomposition, our method leads to the first algorithm that runs in sublinear time per iteration and approximates the output of deterministic alternating least squares algorithms. Empirical evaluations of this approach show significant speedups over existing randomized and deterministic routines for performing CP decomposition. On a tensor of size 2.4m by 6.6m by 92k with over 2 billion nonzeros formed by Amazon product reviews, our routine converges in two minutes to the same error as deterministic ALS.
Tasks
Published 2016-12-01
URL http://papers.nips.cc/paper/6436-spals-fast-alternating-least-squares-via-implicit-leverage-scores-sampling
PDF http://papers.nips.cc/paper/6436-spals-fast-alternating-least-squares-via-implicit-leverage-scores-sampling.pdf
PWC https://paperswithcode.com/paper/spals-fast-alternating-least-squares-via
Repo
Framework
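
One core idea in the abstract is that statistical leverage scores of the Khatri-Rao product can be accessed efficiently from the factors. Purely as a hedged sketch of that idea (using the standard upper bound that the leverage score of row (i, j) of the column-wise Khatri-Rao product is at most the product of the corresponding row leverage scores of the two factors, rather than the paper's exact estimator), rows can then be importance-sampled without ever forming the full product.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, rank = 300, 200, 10

# Two factor matrices; their column-wise Khatri-Rao product has n1 * n2 rows.
A = rng.normal(size=(n1, rank))
B = rng.normal(size=(n2, rank))

def row_leverage_scores(M: np.ndarray) -> np.ndarray:
    """Leverage score of each row of M, computed via a thin QR decomposition."""
    Q, _ = np.linalg.qr(M)
    return np.sum(Q * Q, axis=1)

# The product of per-factor scores upper-bounds the leverage score of the
# Khatri-Rao row indexed by (i, j); normalise to get sampling probabilities.
scores = np.outer(row_leverage_scores(A), row_leverage_scores(B)).ravel()
probs = scores / scores.sum()

# Importance-sample rows without materialising the full Khatri-Rao product
# (row (i, j) of the product is the elementwise product A[i] * B[j]),
# rescaling each sampled row for an unbiased sketch.
num_samples = 2000
sampled = rng.choice(n1 * n2, size=num_samples, p=probs)
i_idx, j_idx = np.unravel_index(sampled, (n1, n2))
sketch = (A[i_idx] * B[j_idx]) / np.sqrt(num_samples * probs[sampled])[:, None]

print(sketch.shape)  # (2000, 10)
```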

Sensing Emotions in Text Messages: An Application and Deployment Study of EmotionPush

Title Sensing Emotions in Text Messages: An Application and Deployment Study of EmotionPush
Authors Shih-Ming Wang, Chun-Hui Scott Lee, Yu-Chun Lo, Ting-Hao Huang, Lun-Wei Ku
Abstract Instant messaging and push notifications play important roles in modern digital life. To enable robust sense-making and rich context awareness in computer-mediated communication, we introduce EmotionPush, a system that automatically conveys the emotion of received text with a colored push notification on mobile devices. EmotionPush is powered by state-of-the-art emotion classifiers and is deployed for Facebook Messenger clients on Android. The study showed that the system is able to help users prioritize interactions.
Tasks
Published 2016-12-01
URL https://www.aclweb.org/anthology/C16-2030/
PDF https://www.aclweb.org/anthology/C16-2030
PWC https://paperswithcode.com/paper/sensing-emotions-in-text-messages-an
Repo
Framework
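
The system description above is high-level. As a purely hypothetical sketch (the keyword classifier, label set, and colour mapping are assumptions, not EmotionPush's actual implementation), the core loop is: classify an incoming message, map the predicted emotion to a colour, and attach that colour to the push notification payload.

```python
# Hypothetical emotion-to-colour mapping for a push notification payload.
EMOTION_COLOURS = {
    "joy": "#FFD700",
    "sadness": "#4169E1",
    "anger": "#DC143C",
    "neutral": "#C0C0C0",
}

def classify_emotion(text: str) -> str:
    """Stand-in for the deployed emotion classifier (keyword rules only)."""
    lowered = text.lower()
    if any(w in lowered for w in ("great", "happy", ":)")):
        return "joy"
    if any(w in lowered for w in ("sad", "sorry")):
        return "sadness"
    if any(w in lowered for w in ("angry", "hate")):
        return "anger"
    return "neutral"

def build_notification(message: str) -> dict:
    emotion = classify_emotion(message)
    return {"text": message, "emotion": emotion, "color": EMOTION_COLOURS[emotion]}

print(build_notification("Great news, see you tonight :)"))
```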