October 15, 2019

2539 words 12 mins read

Paper Group NANR 264


Filtering Aggression from the Multilingual Social Media Feed

Title Filtering Aggression from the Multilingual Social Media Feed
Authors Sandip Modha, Prasenjit Majumder, Thomas Mandl
Abstract This paper describes the participation of team DA-LD-Hildesheim from the Information Retrieval Lab (IRLAB) at DA-IICT Gandhinagar, India, in collaboration with the University of Hildesheim, Germany, and LDRP-ITR, Gandhinagar, India, in a shared task on Aggression Identification at COLING 2018. The objective of the shared task is to identify the level of aggression in user-generated content from social media written in English, Devanagari Hindi and Romanized Hindi. Aggression levels are categorized into three predefined classes, namely ‘Overtly Aggressive’, ‘Covertly Aggressive’ and ‘Non-aggressive’. The participating teams are required to develop a multi-class classifier which classifies user-generated content into these pre-defined classes. Instead of relying on a bag-of-words model, we have used pre-trained vectors for word embedding. We have performed experiments with standard machine learning classifiers. In addition, we have developed various deep learning models for the multi-class classification problem. Using the validation data, we found that the validation accuracy of our deep learning models outperforms all standard machine learning classifiers and voting-based ensemble techniques, and results on test data support these findings. We have also found that the hyper-parameters of the deep neural network are the key to improving the results.
Tasks Information Retrieval
Published 2018-08-01
URL https://www.aclweb.org/anthology/W18-4423/
PDF https://www.aclweb.org/anthology/W18-4423
PWC https://paperswithcode.com/paper/filtering-aggression-from-the-multilingual
Repo
Framework
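The classification setup the abstract describes can be sketched in a few lines: represent a post as the mean of pre-trained word vectors rather than a bag of words, then apply a softmax classifier over the three aggression classes. Everything below is a toy illustration, not the paper's model; the random "pre-trained" embeddings and untrained weights stand in for real fastText/Word2Vec vectors and a trained network.

```python
import numpy as np

rng = np.random.default_rng(0)
CLASSES = ["Overtly Aggressive", "Covertly Aggressive", "Non-aggressive"]

# Stand-in "pre-trained" embeddings; a real system would load e.g. fastText.
vocab = {w: rng.normal(size=8) for w in "you are awful thanks great idiot friend".split()}

def embed(post: str) -> np.ndarray:
    """Represent a post as the mean of its in-vocabulary word vectors."""
    vecs = [vocab[w] for w in post.lower().split() if w in vocab]
    return np.mean(vecs, axis=0) if vecs else np.zeros(8)

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

W = rng.normal(size=(3, 8))  # untrained weights, illustration only

def classify(post: str) -> str:
    """Map a post to one of the three predefined aggression classes."""
    return CLASSES[int(np.argmax(softmax(W @ embed(post))))]

label = classify("you are awful")
```

In the paper this averaged-embedding input feeds deep models whose hyper-parameters, per the abstract, matter more than the choice of classifier family.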

A Nontrivial Sentence Corpus for the Task of Sentence Readability Assessment in Portuguese

Title A Nontrivial Sentence Corpus for the Task of Sentence Readability Assessment in Portuguese
Authors Sidney Evaldo Leal, Magali Sanches Duran, Sandra Maria Aluísio
Abstract Effective textual communication depends on readers being proficient enough to comprehend texts, and texts being clear enough to be understood by the intended audience, in a reading task. When the meaning of textual information and instructions is not well conveyed, many losses and damages may occur. Among the solutions to alleviate this problem is the automatic evaluation of sentence readability, a task that has been receiving a lot of attention due to its large applicability. However, a shortage of resources, such as corpora for training and evaluation, hinders the full development of this task. In this paper, we generate a nontrivial sentence corpus in Portuguese. We evaluate three scenarios for building it, taking advantage of a parallel corpus of simplification, in which each sentence triplet is aligned and has simplification operations annotated, being ideal for justifying possible mistakes of future methods. The best scenario of our corpus PorSimplesSent is composed of 4,888 pairs, which is bigger than a similar corpus for English; all three versions of it are publicly available. We created four baselines for PorSimplesSent and made available a pairwise ranking method, using 17 linguistic and psycholinguistic features, which correctly identifies the ranking of sentence pairs with an accuracy of 74.2%.
Tasks
Published 2018-08-01
URL https://www.aclweb.org/anthology/C18-1034/
PDF https://www.aclweb.org/anthology/C18-1034
PWC https://paperswithcode.com/paper/a-nontrivial-sentence-corpus-for-the-task-of
Repo
Framework
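The pairwise ranking idea behind the PorSimplesSent baselines can be sketched as follows: extract per-sentence features, score each sentence with a linear model, and predict which member of a pair is more readable. The two features and the weights here are illustrative stand-ins, not the paper's 17 linguistic and psycholinguistic features or its trained ranker.

```python
def features(sentence: str) -> list:
    """Two shallow readability features: sentence length and mean word length."""
    words = sentence.split()
    n_words = len(words)
    avg_len = sum(len(w) for w in words) / max(n_words, 1)
    return [n_words, avg_len]

WEIGHTS = [0.5, 1.0]  # hypothetical weights; a higher score means harder to read

def complexity(sentence: str) -> float:
    return sum(w * f for w, f in zip(WEIGHTS, features(sentence)))

def rank_pair(a: str, b: str) -> str:
    """Return the sentence predicted to be more readable (simpler)."""
    return a if complexity(a) <= complexity(b) else b

simpler = rank_pair(
    "The cat sat.",
    "Notwithstanding prior considerations, the feline positioned itself.",
)
```

The corpus's aligned triplets with annotated simplification operations supply the supervision such a ranker would be trained and diagnosed on.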

Modeling Trolling in Social Media Conversations

Title Modeling Trolling in Social Media Conversations
Authors Luis Gerardo Mojica de la Vega, Vincent Ng
Abstract
Tasks Text Categorization
Published 2018-05-01
URL https://www.aclweb.org/anthology/L18-1585/
PDF https://www.aclweb.org/anthology/L18-1585
PWC https://paperswithcode.com/paper/modeling-trolling-in-social-media-1
Repo
Framework

Abstractive Text-Image Summarization Using Multi-Modal Attentional Hierarchical RNN

Title Abstractive Text-Image Summarization Using Multi-Modal Attentional Hierarchical RNN
Authors Jingqiang Chen, Hai Zhuge
Abstract Rapid growth of multi-modal documents on the Internet makes multi-modal summarization research necessary. Most previous research summarizes texts or images separately. Recent neural summarization research shows the strength of the Encoder-Decoder model in text summarization. This paper proposes an abstractive text-image summarization model using the attentional hierarchical Encoder-Decoder model to summarize a text document and its accompanying images simultaneously, and then to align the sentences and images in summaries. A multi-modal attentional mechanism is proposed to attend original sentences, images, and captions when decoding. The DailyMail dataset is extended by collecting images and captions from the Web. Experiments show our model outperforms the neural abstractive and extractive text summarization methods that do not consider images. In addition, our model can generate informative summaries of images.
Tasks Text Summarization
Published 2018-10-01
URL https://www.aclweb.org/anthology/D18-1438/
PDF https://www.aclweb.org/anthology/D18-1438
PWC https://paperswithcode.com/paper/abstractive-text-image-summarization-using
Repo
Framework
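The core of the proposed model is a multi-modal attention step: when decoding, the model attends jointly over original sentences, images, and captions. A minimal sketch, with random vectors standing in for real encoder outputs and plain dot-product attention in place of the paper's attentional hierarchical mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16
sentences = rng.normal(size=(5, dim))  # encoded source sentences
images = rng.normal(size=(3, dim))     # encoded images (e.g. CNN features)
captions = rng.normal(size=(3, dim))   # encoded image captions
decoder_state = rng.normal(size=dim)

def attend(state: np.ndarray, candidates: np.ndarray):
    """Dot-product attention: softmax over scores, then a weighted context."""
    scores = candidates @ state
    scores = scores - scores.max()
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights, weights @ candidates

# Attend over all three modalities at once, as one candidate pool.
candidates = np.vstack([sentences, images, captions])
weights, context = attend(decoder_state, candidates)
```

The per-modality attention weights are also what lets the model align output sentences with images when assembling the final text-image summary.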

Temporal Poisson Square Root Graphical Models

Title Temporal Poisson Square Root Graphical Models
Authors Sinong Geng, Zhaobin Kuang, Peggy Peissig, David Page
Abstract We propose temporal Poisson square root graphical models (TPSQRs), a generalization of Poisson square root graphical models (PSQRs) specifically designed for modeling longitudinal event data. By estimating the temporal relationships for all possible pairs of event types, TPSQRs can offer a holistic perspective about whether the occurrences of any given event type could excite or inhibit any other type. A TPSQR is learned by estimating a collection of interrelated PSQRs that share the same template parameterization. These PSQRs are estimated jointly in a pseudo-likelihood fashion, where Poisson pseudo-likelihood is used to approximate the original more computationally intensive pseudo-likelihood problem stemming from PSQRs. Theoretically, we demonstrate that under mild assumptions, the Poisson pseudolikelihood approximation is sparsistent for recovering the underlying PSQR. Empirically, we learn TPSQRs from a real-world large-scale electronic health record (EHR) with millions of drug prescription and condition diagnosis events, for adverse drug reaction (ADR) detection. Experimental results demonstrate that the learned TPSQRs can recover ADR signals from the EHR effectively and efficiently.
Tasks
Published 2018-07-01
URL https://icml.cc/Conferences/2018/Schedule?showEvent=2301
PDF http://proceedings.mlr.press/v80/geng18a/geng18a.pdf
PWC https://paperswithcode.com/paper/temporal-poisson-square-root-graphical-models
Repo
Framework
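The Poisson pseudo-likelihood approximation at the heart of TPSQR estimation amounts to node-wise Poisson regressions: each event type's counts are regressed on the others by maximizing the Poisson log-likelihood, whose gradient has the simple form `z^T (x - exp(z theta))`. A toy one-conditioning-node sketch with plain gradient ascent (not the paper's jointly estimated, template-shared model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic longitudinal counts: the target node's rate depends on node 1
# through a true coefficient of 0.3.
z = rng.poisson(2.0, size=(500, 1)).astype(float)  # conditioning node counts
x = rng.poisson(np.exp(0.3 * z[:, 0]))             # target node counts

def fit_poisson(z: np.ndarray, x: np.ndarray, lr: float = 1e-3, steps: int = 2000):
    """Gradient ascent on the Poisson log-likelihood
    ll(theta) = sum_i [ x_i * (z_i . theta) - exp(z_i . theta) ]."""
    theta = np.zeros(z.shape[1])
    for _ in range(steps):
        mu = np.exp(z @ theta)
        grad = z.T @ (x - mu)  # gradient of the log-likelihood
        theta += lr * grad / len(x)
    return theta

theta_hat = fit_poisson(z, x)
```

A positive recovered coefficient corresponds to the "excite" case in the abstract's excite/inhibit reading of the learned temporal relationships.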

‘Lighter’ Can Still Be Dark: Modeling Comparative Color Descriptions

Title ‘Lighter’ Can Still Be Dark: Modeling Comparative Color Descriptions
Authors Olivia Winn, Smaranda Muresan
Abstract We propose a novel paradigm of grounding comparative adjectives within the realm of color descriptions. Given a reference RGB color and a comparative term (e.g., lighter, darker), our model learns to ground the comparative as a direction in the RGB space such that the colors along the vector, rooted at the reference color, satisfy the comparison. Our model generates grounded representations of comparative adjectives with an average accuracy of 0.65 cosine similarity to the desired direction of change. These vectors approach colors with Delta-E scores of under 7 compared to the target colors, indicating the differences are very small with respect to human perception. Our approach makes use of a newly created dataset for this task derived from existing labeled color data.
Tasks Object Recognition
Published 2018-07-01
URL https://www.aclweb.org/anthology/P18-2125/
PDF https://www.aclweb.org/anthology/P18-2125
PWC https://paperswithcode.com/paper/alightera-can-still-be-dark-modeling
Repo
Framework
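The paper's grounding idea is that a comparative like "lighter" is a direction in RGB space, applied starting from a reference color. The hand-coded directions below are stand-ins for the learned groundings, but they illustrate the title's point: moving dark red toward white yields a color that is lighter yet still dark.

```python
import numpy as np

# Hypothetical grounded directions; the model would learn these from text.
DIRECTIONS = {
    "lighter": np.array([1.0, 1.0, 1.0]),    # toward white
    "darker": np.array([-1.0, -1.0, -1.0]),  # toward black
}

def apply_comparative(rgb, comparative: str, step: float = 40.0) -> np.ndarray:
    """Move the reference color `step` units along the comparative's direction."""
    d = DIRECTIONS[comparative]
    d = d / np.linalg.norm(d)
    return np.clip(np.asarray(rgb, dtype=float) + step * d, 0, 255)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

dark_red = np.array([90.0, 10.0, 10.0])
lighter_red = apply_comparative(dark_red, "lighter")
```

The paper's 0.65 average cosine similarity is measured between the model's predicted direction and the true direction of change, exactly the quantity `cosine` computes here.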

Sampling Informative Training Data for RNN Language Models

Title Sampling Informative Training Data for RNN Language Models
Authors Jared Fernandez, Doug Downey
Abstract We propose an unsupervised importance sampling approach to selecting training data for recurrent neural network (RNN) language models. To increase the information content of the training set, our approach preferentially samples high perplexity sentences, as determined by an easily queryable n-gram language model. We experimentally evaluate the heldout perplexity of models trained with our various importance sampling distributions. We show that language models trained on data sampled using our proposed approach outperform models trained over randomly sampled subsets of both the Billion Word (Chelba et al., 2014) and Wikitext-103 (Merity et al., 2016) benchmark corpora.
Tasks Language Modelling
Published 2018-07-01
URL https://www.aclweb.org/anthology/P18-3002/
PDF https://www.aclweb.org/anthology/P18-3002
PWC https://paperswithcode.com/paper/sampling-informative-training-data-for-rnn
Repo
Framework
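The selection step can be sketched directly: score candidate sentences with a cheap n-gram model and sample them with probability proportional to perplexity. For brevity this sketch uses an add-alpha smoothed unigram model rather than the paper's n-gram model, and a tiny toy corpus.

```python
import math
import random
from collections import Counter

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "quantum chromodynamics perplexes the cat .").split()
counts = Counter(corpus)
total = sum(counts.values())

def unigram_perplexity(sentence: str, alpha: float = 1.0) -> float:
    """Add-alpha smoothed unigram perplexity of a sentence."""
    vocab_size = len(counts)
    words = sentence.split()
    logp = sum(math.log((counts[w] + alpha) / (total + alpha * vocab_size))
               for w in words)
    return math.exp(-logp / len(words))

def sample_informative(sentences, k: int, seed: int = 0):
    """Sample k sentences with probability proportional to their perplexity."""
    rng = random.Random(seed)
    weights = [unigram_perplexity(s) for s in sentences]
    return rng.choices(sentences, weights=weights, k=k)

pool = ["the cat sat on the mat .",
        "quantum chromodynamics perplexes the cat ."]
picked = sample_informative(pool, k=3)
```

The rare-word sentence gets a higher perplexity and is therefore preferentially drawn, which is the "informative" bias the abstract describes.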

Learning-based Composite Metrics for Improved Caption Evaluation

Title Learning-based Composite Metrics for Improved Caption Evaluation
Authors Naeha Sharif, Lyndon White, Mohammed Bennamoun, Syed Afaq Ali Shah
Abstract The evaluation of image caption quality is a challenging task, which requires the assessment of two main aspects in a caption: adequacy and fluency. These quality aspects can be judged using a combination of several linguistic features. However, most of the current image captioning metrics focus only on specific linguistic facets, such as the lexical or semantic, and fail to meet a satisfactory level of correlation with human judgements at the sentence-level. We propose a learning-based framework to incorporate the scores of a set of lexical and semantic metrics as features, to capture the adequacy and fluency of captions at different linguistic levels. Our experimental results demonstrate that composite metrics draw upon the strengths of stand-alone measures to yield improved correlation and accuracy.
Tasks Image Captioning, Language Modelling, Semantic Textual Similarity
Published 2018-07-01
URL https://www.aclweb.org/anthology/P18-3003/
PDF https://www.aclweb.org/anthology/P18-3003
PWC https://paperswithcode.com/paper/learning-based-composite-metrics-for-improved
Repo
Framework
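The framework's core move is to treat the scores of several stand-alone metrics as features and learn a combination. A minimal sketch with a fixed linear combiner; the metric names and weights are placeholders, not the paper's learned composite.

```python
# Hypothetical per-metric weights; the paper learns the combination instead.
METRIC_WEIGHTS = {
    "lexical_overlap": 0.3,  # e.g. a BLEU/METEOR-style score in [0, 1]
    "semantic_sim": 0.5,     # e.g. an embedding-similarity score in [0, 1]
    "fluency": 0.2,          # e.g. an LM-based fluency score in [0, 1]
}

def composite_score(metric_scores: dict) -> float:
    """Weighted combination of per-metric scores (all assumed in [0, 1])."""
    return sum(METRIC_WEIGHTS[name] * metric_scores[name]
               for name in METRIC_WEIGHTS)

good = composite_score({"lexical_overlap": 0.8, "semantic_sim": 0.9, "fluency": 0.9})
bad = composite_score({"lexical_overlap": 0.8, "semantic_sim": 0.1, "fluency": 0.4})
```

Because different metrics capture different linguistic facets (lexical vs. semantic vs. fluency), the combined score can separate captions that a single facet would score identically, as `good` and `bad` do here despite equal lexical overlap.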

How to tell when a clustering is (approximately) correct using convex relaxations

Title How to tell when a clustering is (approximately) correct using convex relaxations
Authors Marina Meila
Abstract We introduce the Sublevel Set (SS) method, a generic method to obtain sufficient guarantees of near-optimality and uniqueness (up to small perturbations) for a clustering. This method can be instantiated for a variety of clustering loss functions for which convex relaxations exist. Obtaining the guarantees in practice amounts to solving a convex optimization. We demonstrate the applicability of this method by obtaining distribution free guarantees for K-means clustering on realistic data sets.
Tasks
Published 2018-12-01
URL http://papers.nips.cc/paper/7970-how-to-tell-when-a-clustering-is-approximately-correct-using-convex-relaxations
PDF http://papers.nips.cc/paper/7970-how-to-tell-when-a-clustering-is-approximately-correct-using-convex-relaxations.pdf
PWC https://paperswithcode.com/paper/how-to-tell-when-a-clustering-is
Repo
Framework

SuperNMT: Neural Machine Translation with Semantic Supersenses and Syntactic Supertags

Title SuperNMT: Neural Machine Translation with Semantic Supersenses and Syntactic Supertags
Authors Eva Vanmassenhove, Andy Way
Abstract In this paper we incorporate semantic supersense tags and syntactic supertag features into EN–FR and EN–DE factored NMT systems. In experiments on various test sets, we observe that such features (particularly when combined) help the NMT model training to converge faster and improve the model quality according to BLEU scores.
Tasks Machine Translation, Named Entity Recognition, Prepositional Phrase Attachment, Word Embeddings
Published 2018-07-01
URL https://www.aclweb.org/anthology/P18-3010/
PDF https://www.aclweb.org/anthology/P18-3010
PWC https://paperswithcode.com/paper/supernmt-neural-machine-translation-with
Repo
Framework
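In a factored NMT system, each source token's input representation is built from several factor embeddings rather than the word embedding alone. A minimal sketch of that input side, concatenating word, supersense, and supertag embeddings; the vocabularies and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny stand-in embedding tables; a real system learns these per factor.
word_emb = {"bank": rng.normal(size=8)}
supersense_emb = {"noun.possession": rng.normal(size=4)}
supertag_emb = {"NP": rng.normal(size=4)}

def factored_input(word: str, supersense: str, supertag: str) -> np.ndarray:
    """Concatenate the word's embedding with its factor embeddings."""
    return np.concatenate([word_emb[word],
                           supersense_emb[supersense],
                           supertag_emb[supertag]])

vec = factored_input("bank", "noun.possession", "NP")
```

The combined vector then feeds the encoder as usual; the abstract's finding is that these extra factors speed convergence and improve BLEU, especially together.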

Identifying Depression on Reddit: The Effect of Training Data

Title Identifying Depression on Reddit: The Effect of Training Data
Authors Inna Pirina, Çağrı Çöltekin
Abstract This paper presents a set of classification experiments for identifying depression in posts gathered from social media platforms. In addition to the data gathered previously by other researchers, we collect additional data from the social media platform Reddit. Our experiments show promising results for identifying depression from social media texts. More importantly, however, we show that the choice of corpora is crucial in identifying depression and can lead to misleading conclusions in case of poor choice of data.
Tasks
Published 2018-10-01
URL https://www.aclweb.org/anthology/W18-5903/
PDF https://www.aclweb.org/anthology/W18-5903
PWC https://paperswithcode.com/paper/identifying-depression-on-reddit-the-effect
Repo
Framework

From ‘Solved Problems’ to New Challenges: A Report on LDC Activities

Title From ‘Solved Problems’ to New Challenges: A Report on LDC Activities
Authors Christopher Cieri, Mark Liberman, Stephanie Strassel, Denise DiPersio, Jonathan Wright, Andrea Mazzucchi
Abstract
Tasks Dialogue Management, Language Identification, Speech Recognition, Speech Synthesis
Published 2018-05-01
URL https://www.aclweb.org/anthology/L18-1516/
PDF https://www.aclweb.org/anthology/L18-1516
PWC https://paperswithcode.com/paper/from-asolved-problemsa-to-new-challenges-a
Repo
Framework

Language Independent Sentiment Analysis with Sentiment-Specific Word Embeddings

Title Language Independent Sentiment Analysis with Sentiment-Specific Word Embeddings
Authors Carl Saroufim, Akram Almatarky, Mohammad Abdel Hady
Abstract Data annotation is a critical step to train a text model but it is tedious, expensive and time-consuming. We present a language independent method to train a sentiment polarity model with limited amount of manually-labeled data. Word embeddings such as Word2Vec are efficient at incorporating semantic and syntactic properties of words, yielding good results for document classification. However, these embeddings might map words with opposite polarities, to vectors close to each other. We train Sentiment Specific Word Embeddings (SSWE) on top of an unsupervised Word2Vec model, using either Recurrent Neural Networks (RNN) or Convolutional Neural Networks (CNN) on data auto-labeled as “Positive” or “Negative”. For this task, we rely on the universality of emojis and emoticons to auto-label a large number of French tweets using a small set of positive and negative emojis and emoticons. Finally, we apply a transfer learning approach to refine the network weights with a small-size manually-labeled training data set. Experiments are conducted to evaluate the performance of this approach on French sentiment classification using benchmark data sets from SemEval 2016 competition. We were able to achieve a performance improvement by using SSWE over Word2Vec. We also used a graph-based approach for label propagation to auto-generate a sentiment lexicon.
Tasks Document Classification, Sentiment Analysis, Transfer Learning, Word Embeddings
Published 2018-10-01
URL https://www.aclweb.org/anthology/W18-6204/
PDF https://www.aclweb.org/anthology/W18-6204
PWC https://paperswithcode.com/paper/language-independent-sentiment-analysis-with
Repo
Framework
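The emoji/emoticon auto-labeling step the abstract relies on is easy to sketch: a tweet with only positive markers becomes "Positive", only negative markers "Negative", and anything mixed or unmarked is dropped. The marker sets below are small illustrative samples, not the paper's lists.

```python
POSITIVE = {"😀", "😍", ":)", ":-)"}
NEGATIVE = {"😡", "😢", ":(", ":-("}

def auto_label(tweet: str):
    """Weak label from emoji/emoticon markers; None means 'exclude'."""
    has_pos = any(m in tweet for m in POSITIVE)
    has_neg = any(m in tweet for m in NEGATIVE)
    if has_pos and not has_neg:
        return "Positive"
    if has_neg and not has_pos:
        return "Negative"
    return None  # ambiguous or unmarked: excluded from the auto-labeled set

labels = [auto_label(t) for t in
          ["Ce film est génial :)", "Quelle journée 😢", "bof :) :("]]
```

Because emojis carry sentiment across languages, the same rule labels French tweets without any French lexicon, which is what makes the method language independent.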

Gradient descent with identity initialization efficiently learns positive definite linear transformations by deep residual networks

Title Gradient descent with identity initialization efficiently learns positive definite linear transformations by deep residual networks
Authors Peter Bartlett, Dave Helmbold, Philip Long
Abstract We analyze algorithms for approximating a function $f(x) = \Phi x$ mapping $\Re^d$ to $\Re^d$ using deep linear neural networks, i.e. that learn a function $h$ parameterized by matrices $\Theta_1,…,\Theta_L$ and defined by $h(x) = \Theta_L \Theta_{L-1} … \Theta_1 x$. We focus on algorithms that learn through gradient descent on the population quadratic loss in the case that the distribution over the inputs is isotropic. We provide polynomial bounds on the number of iterations for gradient descent to approximate the least squares matrix $\Phi$, in the case where the initial hypothesis $\Theta_1 = … = \Theta_L = I$ has excess loss bounded by a small enough constant. On the other hand, we show that gradient descent fails to converge for $\Phi$ whose distance from the identity is a larger constant, and we show that some forms of regularization toward the identity in each layer do not help. If $\Phi$ is symmetric positive definite, we show that an algorithm that initializes $\Theta_i = I$ learns an $\epsilon$-approximation of $f$ using a number of updates polynomial in $L$, the condition number of $\Phi$, and $\log(d/\epsilon)$. In contrast, we show that if the least squares matrix $\Phi$ is symmetric and has a negative eigenvalue, then all members of a class of algorithms that perform gradient descent with identity initialization, and optionally regularize toward the identity in each layer, fail to converge. We analyze an algorithm for the case that $\Phi$ satisfies $u^{\top} \Phi u > 0$ for all $u$, but may not be symmetric. This algorithm uses two regularizers: one that maintains the invariant $u^{\top} \Theta_L \Theta_{L-1} … \Theta_1 u > 0$ for all $u$, and another that “balances” $\Theta_1, …, \Theta_L$ so that they have the same singular values.
Tasks
Published 2018-07-01
URL https://icml.cc/Conferences/2018/Schedule?showEvent=1980
PDF http://proceedings.mlr.press/v80/bartlett18a/bartlett18a.pdf
PWC https://paperswithcode.com/paper/gradient-descent-with-identity-initialization
Repo
Framework
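The analyzed setting is concrete enough to simulate: a deep linear network $h(x) = \Theta_L \cdots \Theta_1 x$, initialized at the identity, trained by gradient descent on the population quadratic loss, which for isotropic inputs reduces to $\frac{1}{2}\|\Theta_L \cdots \Theta_1 - \Phi\|_F^2$. The sketch below picks a symmetric positive definite $\Phi$ close to the identity, the regime where the paper proves convergence; the dimensions and step size are illustrative.

```python
import numpy as np

d, L = 3, 4
# Symmetric positive definite Phi with small distance from the identity.
Phi = np.eye(d) + 0.1 * np.array([[1.0, 0.2, 0.0],
                                  [0.2, 1.0, 0.1],
                                  [0.0, 0.1, 1.0]])

thetas = [np.eye(d) for _ in range(L)]  # identity initialization

def product(ts):
    """Compose layer matrices: returns ts[-1] @ ... @ ts[0]."""
    p = np.eye(d)
    for t in ts:
        p = t @ p
    return p

lr = 0.05
for _ in range(500):
    E = product(thetas) - Phi  # residual of the end-to-end linear map
    for i in range(L):
        left = product(thetas[i + 1:])   # layers above Theta_i
        right = product(thetas[:i])      # layers below Theta_i
        # Gradient of ||left @ Theta_i @ right - Phi||_F^2 / 2 w.r.t. Theta_i
        thetas[i] = thetas[i] - lr * left.T @ E @ right.T

err = np.linalg.norm(product(thetas) - Phi)
```

With $\Phi$ symmetric PD and near the identity, the residual contracts at every step, matching the paper's positive result; for a $\Phi$ with a negative eigenvalue the same iteration is shown to fail to converge.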

GraphGAN: Generating Graphs via Random Walks

Title GraphGAN: Generating Graphs via Random Walks
Authors Aleksandar Bojchevski, Oleksandr Shchur, Daniel Zügner, Stephan Günnemann
Abstract We propose GraphGAN - the first implicit generative model for graphs that enables to mimic real-world networks. We pose the problem of graph generation as learning the distribution of biased random walks over a single input graph. Our model is based on a stochastic neural network that generates discrete output samples, and is trained using the Wasserstein GAN objective. GraphGAN enables us to generate sibling graphs, which have similar properties yet are not exact replicas of the original graph. Moreover, GraphGAN learns a semantic mapping from the latent input space to the generated graph’s properties. We discover that sampling from certain regions of the latent space leads to varying properties of the output graphs, with smooth transitions between them. Strong generalization properties of GraphGAN are highlighted by its competitive performance in link prediction as well as promising results on node classification, even though not specifically trained for these tasks.
Tasks Graph Generation, Link Prediction, Node Classification
Published 2018-01-01
URL https://openreview.net/forum?id=H15RufWAW
PDF https://openreview.net/pdf?id=H15RufWAW
PWC https://paperswithcode.com/paper/graphgan-generating-graphs-via-random-walks
Repo
Framework
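GraphGAN poses graph generation as learning the distribution of biased random walks over one input graph, so the walk-sampling side is the natural thing to sketch. Below is only that piece, on a toy adjacency list, with uniform transitions as the unbiased special case; the trained generator replaces this sampler with a stochastic neural network.

```python
import random

# Toy undirected graph as an adjacency list.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

def sample_walk(graph: dict, start: int, length: int, seed=None) -> list:
    """Uniform random walk of `length` nodes (the unbiased special case)."""
    rng = random.Random(seed)
    walk = [start]
    for _ in range(length - 1):
        walk.append(rng.choice(graph[walk[-1]]))
    return walk

walk = sample_walk(graph, start=0, length=6, seed=42)
```

Edges collected from many generated walks are then assembled into "sibling" graphs that share the input graph's properties without replicating it exactly.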