July 26, 2019

2327 words 11 mins read

Paper Group NANR 97



NLG301 at SemEval-2017 Task 5: Fine-Grained Sentiment Analysis on Financial Microblogs and News

Title NLG301 at SemEval-2017 Task 5: Fine-Grained Sentiment Analysis on Financial Microblogs and News
Authors Chung-Chi Chen, Hen-Hsen Huang, Hsin-Hsi Chen
Abstract Short length, multiple targets, target relationships, monetary expressions, and outside references are characteristics of financial tweets. This paper proposes methods to extract target spans from a tweet and its referenced web page. A total of 15 publicly available sentiment dictionaries and one sentiment dictionary constructed from the training set, containing sentiment scores as binary or real numbers, are used to compute the sentiment scores of text spans. Moreover, the correlation coefficients of the price returns between any two stocks are learned from Bloomberg price data. They are used to capture the relationships between the target of interest and other stocks mentioned in a tweet. The best results of our method in the two subtasks are 56.68% and 55.43%, evaluated by evaluation method 2.
Tasks Sentiment Analysis
Published 2017-08-01
URL https://www.aclweb.org/anthology/S17-2144/
PDF https://www.aclweb.org/anthology/S17-2144
PWC https://paperswithcode.com/paper/nlg301-at-semeval-2017-task-5-fine-grained
Repo
Framework
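The abstract above learns price-return correlations between stock pairs to relate the target stock to other tickers mentioned in a tweet. A minimal sketch of that statistic, with toy price series standing in for the Bloomberg data (the series and function names are illustrative, not from the paper):

```python
def returns(prices):
    """Convert a price series into simple period-over-period returns."""
    return [(b - a) / a for a, b in zip(prices, prices[1:])]

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Two toy price series that tend to move together.
a = [10.0, 10.5, 10.2, 10.8, 11.0]
b = [20.0, 21.1, 20.3, 21.6, 22.1]
rho = correlation(returns(a), returns(b))
```

A high `rho` would suggest the two stocks' sentiments are linked, which is the relationship the paper exploits.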

How to Make Context More Useful? An Empirical Study on Context-Aware Neural Conversational Models

Title How to Make Context More Useful? An Empirical Study on Context-Aware Neural Conversational Models
Authors Zhiliang Tian, Rui Yan, Lili Mou, Yiping Song, Yansong Feng, Dongyan Zhao
Abstract Generative conversational systems are attracting increasing attention in natural language processing (NLP). Recently, researchers have noticed the importance of context information in dialog processing, and built various models to utilize context. However, there is no systematic comparison to analyze how to use context effectively. In this paper, we conduct an empirical study to compare various models and investigate the effect of context information in dialog systems. We also propose a variant that explicitly weights context vectors by context-query relevance, outperforming the other baselines.
Tasks
Published 2017-07-01
URL https://www.aclweb.org/anthology/P17-2036/
PDF https://www.aclweb.org/anthology/P17-2036
PWC https://paperswithcode.com/paper/how-to-make-context-more-useful-an-empirical
Repo
Framework
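The proposed variant weights context vectors by their relevance to the query before combining them. A hedged sketch of the idea using cosine similarity and a softmax over scores (the paper's exact scoring and combination may differ):

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def weighted_context(context_vecs, query_vec):
    """Combine context utterance vectors, weighted by relevance to the query."""
    scores = [cosine(c, query_vec) for c in context_vecs]
    z = sum(math.exp(s) for s in scores)          # softmax normalisation
    weights = [math.exp(s) / z for s in scores]
    dim = len(query_vec)
    return [sum(w * c[i] for w, c in zip(weights, context_vecs))
            for i in range(dim)]
```

Contexts aligned with the query dominate the combined vector, while irrelevant turns are down-weighted rather than discarded.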

“Oh, I’ve Heard That Before”: Modelling Own-Dialect Bias After Perceptual Learning by Weighting Training Data

Title “Oh, I’ve Heard That Before”: Modelling Own-Dialect Bias After Perceptual Learning by Weighting Training Data
Authors Rachael Tatman
Abstract Human listeners are able to quickly and robustly adapt to new accents and do so by using information about speakers’ identities. This paper will present experimental evidence that, even considering information about speakers’ identities, listeners retain a strong bias towards the acoustics of their own dialect after dialect learning. Participants’ behaviour was accurately mimicked by a classifier which was trained on more cases from the base dialect and fewer from the target dialect. This suggests that imbalanced training data may result in automatic speech recognition errors consistent with those of speakers from populations over-represented in the training data.
Tasks Speech Recognition
Published 2017-04-01
URL https://www.aclweb.org/anthology/W17-0704/
PDF https://www.aclweb.org/anthology/W17-0704
PWC https://paperswithcode.com/paper/aoh-ive-heard-that-before-modelling-own
Repo
Framework
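The modelling idea, a classifier trained on more base-dialect than target-dialect cases, can be illustrated with a toy 1-D nearest-neighbour classifier (the acoustic values and the choice of k are invented for illustration; the paper's classifier and features differ):

```python
def knn(x, samples, k=3):
    """Classify x by majority vote among its k nearest training samples."""
    nearest = sorted(samples, key=lambda s: abs(x - s[0]))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

# Imbalanced training data: many base-dialect tokens, few target-dialect ones.
train = [(v, "base") for v in [0.0, 0.1, 0.2, 0.3, 0.4, 0.45]] + \
        [(v, "target") for v in [0.9, 1.0]]

# A token acoustically midway between the dialects gets pulled toward "base",
# mimicking listeners' bias toward the over-represented dialect.
```

Calling `knn(0.55, train)` returns `"base"` even though 0.55 is nearly equidistant from both clusters, which is the bias pattern the abstract describes.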

Online Learning with a Hint

Title Online Learning with a Hint
Authors Ofer Dekel, Arthur Flajolet, Nika Haghtalab, Patrick Jaillet
Abstract We study a variant of online linear optimization where the player receives a hint about the loss function at the beginning of each round. The hint is given in the form of a vector that is weakly correlated with the loss vector on that round. We show that the player can benefit from such a hint if the set of feasible actions is sufficiently round. Specifically, if the set is strongly convex, the hint can be used to guarantee a regret of O(log T), and if the set is q-uniformly convex for q ∈ (2, 3), the hint can be used to guarantee a regret of o(√T). In contrast, we establish Ω(√T) lower bounds on regret when the set of feasible actions is a polyhedron.
Tasks
Published 2017-12-01
URL http://papers.nips.cc/paper/7114-online-learning-with-a-hint
PDF http://papers.nips.cc/paper/7114-online-learning-with-a-hint.pdf
PWC https://paperswithcode.com/paper/online-learning-with-a-hint
Repo
Framework
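To make the setting concrete, here is a toy illustration (not the paper's algorithm) on the unit ball, a strongly convex action set: each round the player moves opposite the hint, and regret is measured against the best fixed action in hindsight. When the hints equal the true loss vectors, regret is non-positive:

```python
import math

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cumulative_loss(hints, losses):
    """Play x_t = -h_t / ||h_t|| on the unit ball each round."""
    total = 0.0
    for h, l in zip(hints, losses):
        x = [-c / norm(h) for c in h]
        total += dot(x, l)
    return total

def regret(hints, losses):
    """Regret vs. the best fixed unit-ball action -L/||L||, L = sum of losses."""
    L = [sum(l[i] for l in losses) for i in range(len(losses[0]))]
    return cumulative_loss(hints, losses) - (-norm(L))
```

With weakly correlated (rather than perfect) hints, the paper shows the regret still improves on the hint-free Ω(√T) rate, provided the set is round enough.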

Finding a Character’s Voice: Stylome Classification on Literary Characters

Title Finding a Character’s Voice: Stylome Classification on Literary Characters
Authors Liviu P. Dinu, Ana Sabina Uban
Abstract We investigate in this paper the problem of classifying the stylome of characters in a literary work. Previous research in the field of authorship attribution has shown that the writing style of an author can be characterized and distinguished from that of other authors automatically. In this paper we take a look at the less approached problem of how the styles of different characters can be distinguished, trying to verify if an author managed to create believable characters with individual styles. We present the results of some initial experiments developed on the novel “Liaisons Dangereuses”, showing that a simple bag of words model can be used to classify the characters.
Tasks Text Categorization
Published 2017-08-01
URL https://www.aclweb.org/anthology/W17-2210/
PDF https://www.aclweb.org/anthology/W17-2210
PWC https://paperswithcode.com/paper/finding-a-characteras-voice-stylome
Repo
Framework
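The abstract reports that a simple bag-of-words model suffices to classify the characters. A minimal sketch of that idea (the character names and lines below are invented; the paper's features and classifier may be richer):

```python
import re
from collections import Counter

def bow(text):
    """Bag-of-words: lowercase word counts, punctuation stripped."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def train(lines_by_character):
    """Build one aggregate bag-of-words profile per character."""
    return {name: sum((bow(l) for l in lines), Counter())
            for name, lines in lines_by_character.items()}

def attribute(line, profiles):
    """Attribute a line to the character whose profile overlaps it most."""
    words = bow(line)
    def overlap(profile):
        return sum(min(words[w], profile[w]) for w in words)
    return max(profiles, key=lambda n: overlap(profiles[n]))
```

If `attribute` beats chance on held-out lines, the characters have distinguishable stylomes, which is the paper's test of believability.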

deepCybErNet at EmoInt-2017: Deep Emotion Intensities in Tweets

Title deepCybErNet at EmoInt-2017: Deep Emotion Intensities in Tweets
Authors Vinayakumar R, Premjith B, Sachin Kumar S, Soman KP, Prabaharan Poornachandran
Abstract This working note presents the methodology used in the deepCybErNet submission to the WASSA-2017 shared task on Emotion Intensities in Tweets (EmoInt). The goal of the task is to predict a real-valued score in the range [0, 1] for a particular tweet with an emotion type. To do this, we used Bag-of-Words and embeddings based on a recurrent network architecture. We developed two systems, and experiments were conducted on the EmoInt shared task dataset at WASSA-2017. The system using word embeddings with a recurrent network achieved the highest 5-fold cross-validation accuracy. It uses the embedding with a recurrent network to extract optimal features at the tweet level and logistic regression for prediction. These methods are highly language independent, and experimental results show that the proposed methods are apt for predicting a real-valued score in the range [0, 1] for a given tweet with its emotion type.
Tasks Emotion Classification, Natural Language Inference, Question Answering
Published 2017-09-01
URL https://www.aclweb.org/anthology/W17-5237/
PDF https://www.aclweb.org/anthology/W17-5237
PWC https://paperswithcode.com/paper/deepcybernet-at-emoint-2017-deep-emotion
Repo
Framework
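The task maps a tweet to a real-valued intensity in [0, 1]; the submission does this with recurrent-network features fed to logistic regression. A minimal stand-in using a hand-set lexicon and a sigmoid (the words and weights here are invented; the actual system learns its features and weights):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical per-word weights, not from the paper.
WEIGHTS = {"furious": 2.0, "angry": 1.5, "annoyed": 0.5, "calm": -2.0}
BIAS = -1.0

def intensity(tweet):
    """Map a tweet to a real-valued emotion-intensity score in [0, 1]."""
    z = BIAS + sum(WEIGHTS.get(w, 0.0) for w in tweet.lower().split())
    return sigmoid(z)
```

The sigmoid is what confines the logistic-regression output to the task's [0, 1] range regardless of the underlying feature extractor.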

On Blackbox Backpropagation and Jacobian Sensing

Title On Blackbox Backpropagation and Jacobian Sensing
Authors Krzysztof M. Choromanski, Vikas Sindhwani
Abstract From a small number of calls to a given “blackbox” on random input perturbations, we show how to efficiently recover its unknown Jacobian, or estimate the left action of its Jacobian on a given vector. Our methods are based on a novel combination of compressed sensing and graph coloring techniques, and provably exploit structural prior knowledge about the Jacobian such as sparsity and symmetry while being noise robust. We demonstrate efficient backpropagation through noisy blackbox layers in a deep neural net, improved data-efficiency in the task of linearizing the dynamics of a rigid body system, and the generic ability to handle a rich class of input-output dependency structures in Jacobian estimation problems.
Tasks
Published 2017-12-01
URL http://papers.nips.cc/paper/7230-on-blackbox-backpropagation-and-jacobian-sensing
PDF http://papers.nips.cc/paper/7230-on-blackbox-backpropagation-and-jacobian-sensing.pdf
PWC https://paperswithcode.com/paper/on-blackbox-backpropagation-and-jacobian
Repo
Framework
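As background for the problem setting: a dense finite-difference baseline recovers a blackbox Jacobian with one extra call per input coordinate, whereas the paper's contribution is to need far fewer calls by exploiting structure (sparsity, symmetry) via compressed sensing and graph coloring. The naive baseline, as a sketch:

```python
def estimate_jacobian(f, x, dim_out, eps=1e-6):
    """Estimate the Jacobian of a blackbox f at x by forward finite
    differences, one call per input coordinate. (The paper needs far fewer
    calls by exploiting structure; this dense version is only a baseline.)"""
    fx = f(x)
    cols = []
    for j in range(len(x)):
        xp = list(x)
        xp[j] += eps
        fxp = f(xp)
        cols.append([(fxp[i] - fx[i]) / eps for i in range(dim_out)])
    # cols[j][i] = df_i/dx_j; transpose so rows index outputs.
    return [[cols[j][i] for j in range(len(x))] for i in range(dim_out)]
```

For a blackbox layer inside a deep net, the recovered Jacobian is exactly what backpropagation needs to pass gradients through the layer.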

Understanding and Improving Morphological Learning in the Neural Machine Translation Decoder

Title Understanding and Improving Morphological Learning in the Neural Machine Translation Decoder
Authors Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Yonatan Belinkov, Stephan Vogel
Abstract End-to-end training makes the neural machine translation (NMT) architecture simpler yet more elegant compared to traditional statistical machine translation (SMT). However, little is known about the linguistic patterns of morphology, syntax and semantics learned during the training of NMT systems, and, more importantly, which parts of the architecture are responsible for learning each of these phenomena. In this paper we i) analyze how much morphology an NMT decoder learns, and ii) investigate whether injecting target morphology into the decoder helps it produce better translations. To this end we present three methods: i) simultaneous translation, ii) joint-data learning, and iii) multi-task learning. Our results show that explicit morphological information helps the decoder learn target-language morphology and improves translation quality by 0.2–0.6 BLEU points.
Tasks Machine Translation, Multi-Task Learning
Published 2017-11-01
URL https://www.aclweb.org/anthology/I17-1015/
PDF https://www.aclweb.org/anthology/I17-1015
PWC https://paperswithcode.com/paper/understanding-and-improving-morphological
Repo
Framework

The Treebanked Conspiracy. Actors and Actions in Bellum Catilinae

Title The Treebanked Conspiracy. Actors and Actions in Bellum Catilinae
Authors Marco Passarotti, Berta González Saavedra
Abstract
Tasks Semantic Role Labeling
Published 2017-01-01
URL https://www.aclweb.org/anthology/W17-7605/
PDF https://www.aclweb.org/anthology/W17-7605
PWC https://paperswithcode.com/paper/the-treebanked-conspiracy-actors-and-actions
Repo
Framework

Native Language Identification using Phonetic Algorithms

Title Native Language Identification using Phonetic Algorithms
Authors Charese Smiley, Sandra Kübler
Abstract In this paper, we discuss the results of the IUCL system in the NLI Shared Task 2017. For our system, we explore a variety of phonetic algorithms to generate features for Native Language Identification. These features are contrasted with one of the most successful type of features in NLI, character n-grams. We find that although phonetic features do not perform as well as character n-grams alone, they do increase overall F1 score when used together with character n-grams.
Tasks Language Identification, Native Language Identification
Published 2017-09-01
URL https://www.aclweb.org/anthology/W17-5046/
PDF https://www.aclweb.org/anthology/W17-5046
PWC https://paperswithcode.com/paper/native-language-identification-using-phonetic
Repo
Framework
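One widely used phonetic algorithm of the kind explored here is Soundex, which maps a word to a letter-plus-digits code so that similar-sounding words collide. A slightly simplified implementation (the paper evaluates a variety of phonetic algorithms; this is just one representative):

```python
def soundex(word):
    """Simplified American Soundex: keep the first letter, map the rest to
    digit classes, collapse adjacent duplicates (h/w do not break a run),
    drop vowels, and pad/truncate to 4 characters."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    out = word[0].upper()
    prev = codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            out += code
        if ch not in "hw":          # h/w do not reset the duplicate check
            prev = code
    return (out + "000")[:4]
```

For example, "Robert" and "Rupert" both map to R163, so a phonetic feature treats them as the same unit even though their character n-grams differ.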

German Dialect Identification in Interview Transcriptions

Title German Dialect Identification in Interview Transcriptions
Authors Shervin Malmasi, Marcos Zampieri
Abstract This paper presents three systems submitted to the German Dialect Identification (GDI) task at the VarDial Evaluation Campaign 2017. The task consists of training models to identify the dialect of Swiss-German speech transcripts. The dialects included in the GDI dataset are Basel, Bern, Lucerne, and Zurich. The three systems we submitted are based on: a plurality ensemble, a mean probability ensemble, and a meta-classifier trained on character and word n-grams. The best results were obtained by the meta-classifier achieving 68.1% accuracy and 66.2% F1-score, ranking first among the 10 teams which participated in the GDI shared task.
Tasks Machine Translation
Published 2017-04-01
URL https://www.aclweb.org/anthology/W17-1220/
PDF https://www.aclweb.org/anthology/W17-1220
PWC https://paperswithcode.com/paper/german-dialect-identification-in-interview
Repo
Framework
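Two of the three submitted systems are simple ensembles. Hedged sketches of both combination rules over hypothetical base-classifier outputs (the base classifiers themselves, character and word n-gram models, are omitted):

```python
from collections import Counter

def plurality(predictions):
    """Plurality ensemble: the label predicted by the most base classifiers."""
    return Counter(predictions).most_common(1)[0][0]

def mean_probability(prob_dicts):
    """Mean-probability ensemble: average class posteriors, pick the argmax."""
    classes = prob_dicts[0].keys()
    return max(classes,
               key=lambda c: sum(p[c] for p in prob_dicts) / len(prob_dicts))
```

The mean-probability rule can overturn a plurality vote when a minority of classifiers is very confident, which is why the two ensembles can disagree on the same inputs.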

Unsupervised real-time anomaly detection for streaming data

Title Unsupervised real-time anomaly detection for streaming data
Authors Subutai Ahmad, Alexander Lavin, Scott Purdy, Zuha Agha
Abstract We are seeing an enormous increase in the availability of streaming, time-series data. Largely driven by the rise of connected real-time data sources, this data presents technical challenges and opportunities. One fundamental capability for streaming analytics is to model each stream in an unsupervised fashion and detect unusual, anomalous behaviors in real-time. Early anomaly detection is valuable, yet it can be difficult to execute reliably in practice. Application constraints require systems to process data in real-time, not batches. Streaming data inherently exhibits concept drift, favoring algorithms that learn continuously. Furthermore, the massive number of independent streams in practice requires that anomaly detectors be fully automated. In this paper we propose a novel anomaly detection algorithm that meets these constraints. The technique is based on an online sequence memory algorithm called Hierarchical Temporal Memory (HTM). We also present results using the Numenta Anomaly Benchmark (NAB), a benchmark containing real-world data streams with labeled anomalies. The benchmark, the first of its kind, provides a controlled open-source environment for testing anomaly detection algorithms on streaming data. We present results and analysis for a wide range of algorithms on this benchmark, and discuss future challenges for the emerging field of streaming analytics.
Tasks Anomaly Detection, Time Series
Published 2017-06-02
URL https://www.researchgate.net/publication/317325599_Unsupervised_real-time_anomaly_detection_for_streaming_data
PDF https://bit.ly/2mvTiXH
PWC https://paperswithcode.com/paper/unsupervised-real-time-anomaly-detection-for
Repo
Framework
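HTM itself is involved, but the constraints the abstract lists (real-time, continuous learning, no supervision) can be illustrated with a much simpler streaming baseline: a sliding-window z-score detector. This is explicitly not the paper's algorithm, just a minimal detector meeting the same interface:

```python
import math
from collections import deque

class RollingZScoreDetector:
    """Streaming baseline (not HTM): flag a point anomalous when it lies more
    than `threshold` standard deviations from a sliding-window mean. Because
    the window slides, the model adapts to gradual concept drift."""
    def __init__(self, window=50, threshold=3.0):
        self.values = deque(maxlen=window)
        self.threshold = threshold

    def update(self, x):
        """Process one value; return True if it looks anomalous."""
        anomalous = False
        if len(self.values) >= 10:          # short warm-up before flagging
            n = len(self.values)
            mean = sum(self.values) / n
            std = math.sqrt(sum((v - mean) ** 2 for v in self.values) / n)
            anomalous = std > 0 and abs(x - mean) / std > self.threshold
        self.values.append(x)
        return anomalous
```

Baselines like this are exactly what NAB is designed to compare against HTM and other streaming detectors under a common scoring scheme.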

Acceleration and Averaging in Stochastic Descent Dynamics

Title Acceleration and Averaging in Stochastic Descent Dynamics
Authors Walid Krichene, Peter L. Bartlett
Abstract We formulate and study a general family of (continuous-time) stochastic dynamics for accelerated first-order minimization of smooth convex functions. Building on an averaging formulation of accelerated mirror descent, we propose a stochastic variant in which the gradient is contaminated by noise, and study the resulting stochastic differential equation. We prove a bound on the rate of change of an energy function associated with the problem, then use it to derive estimates of convergence rates of the function values (almost surely and in expectation), both for persistent and asymptotically vanishing noise. We discuss the interaction between the parameters of the dynamics (learning rate and averaging rates) and the covariation of the noise process. In particular, we show how the asymptotic rate of covariation affects the choice of parameters and, ultimately, the convergence rate.
Tasks
Published 2017-12-01
URL http://papers.nips.cc/paper/7256-acceleration-and-averaging-in-stochastic-descent-dynamics
PDF http://papers.nips.cc/paper/7256-acceleration-and-averaging-in-stochastic-descent-dynamics.pdf
PWC https://paperswithcode.com/paper/acceleration-and-averaging-in-stochastic
Repo
Framework

The limits of automatic summarisation according to ROUGE

Title The limits of automatic summarisation according to ROUGE
Authors Natalie Schluter
Abstract This paper discusses some central caveats of summarisation, incurred in the use of the ROUGE metric for evaluation, with respect to optimal solutions. The task is NP-hard, of which we give the first proof. Still, as we show empirically for three central benchmark datasets for the task, greedy algorithms empirically seem to perform optimally according to the metric. Additionally, overall quality assurance is problematic: there is no natural upper bound on the quality of summarisation systems, and even humans are excluded from performing optimal summarisation.
Tasks
Published 2017-04-01
URL https://www.aclweb.org/anthology/E17-2007/
PDF https://www.aclweb.org/anthology/E17-2007
PWC https://paperswithcode.com/paper/the-limits-of-automatic-summarisation
Repo
Framework
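The abstract notes that although ROUGE-optimal extractive summarisation is NP-hard, greedy selection is empirically near-optimal. A minimal sketch of greedy sentence selection against a simplified ROUGE-1 recall (toy sentences; real ROUGE handles n-grams, stemming, and multiple references):

```python
def rouge1_recall(summary_words, reference_words):
    """Fraction of reference unigrams covered by the summary
    (a set-based simplification of ROUGE-1 recall)."""
    ref = set(reference_words)
    return len(ref & set(summary_words)) / len(ref)

def greedy_summary(sentences, reference, budget=2):
    """Greedily add whichever sentence most improves ROUGE-1 recall."""
    chosen, words = [], []
    for _ in range(budget):
        best = max((s for s in sentences if s not in chosen),
                   key=lambda s: rouge1_recall(words + s.split(),
                                               reference.split()),
                   default=None)
        if best is None:
            break
        chosen.append(best)
        words.extend(best.split())
    return chosen
```

Because the covered-unigram objective is submodular, greedy selection carries a (1 - 1/e) approximation guarantee, which is consistent with the near-optimal behaviour the paper observes empirically.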

A consensus layer v pyramidal neuron can sustain interpulse-interval coding

Title A consensus layer v pyramidal neuron can sustain interpulse-interval coding
Authors Chandan Singh, William B. Levy
Abstract In terms of a single neuron’s long-distance communication, interpulse intervals (IPIs) are an attractive alternative to rate and binary codes. As a proxy for an IPI, a neuron’s time-to-spike can be found in the biophysical and experimental intracellular literature. Using the current, consensus layer V pyramidal neuron, the present study examines the feasibility of IPI-coding and examines the noise sources that limit the information rate of such an encoding. In descending order of importance, the noise sources are (i) synaptic variability, (ii) sodium channel shot-noise, followed by (iii) thermal noise. The biophysical simulations allow the calculation of mutual information, which is about 3.0 bits/spike. More importantly, while, by any conventional definition, the biophysical model is highly nonlinear, the underlying function that relates input intensity to the defined output variable is linear. When one assumes the perspective of a neuron coding via first hitting-time, this result justifies a pervasive and simplifying assumption of computational modelers—that a class of cortical neurons can be treated as linearly additive, computational devices.
Tasks
Published 2017-07-13
URL https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0180839
PDF https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0180839&type=printable
PWC https://paperswithcode.com/paper/a-consensus-layer-v-pyramidal-neuron-can
Repo
Framework