October 15, 2019

2668 words 13 mins read

Paper Group NANR 47


A Dataset of Hindi-English Code-Mixed Social Media Text for Hate Speech Detection. Evaluation methodologies in Automatic Question Generation 2013-2018. Detecting Institutional Dialog Acts in Police Traffic Stops. Real-time Change Point Detection using On-line Topic Models. IIT Delhi at SemEval-2018 Task 1 : Emotion Intensity Prediction. Does Abilit …

A Dataset of Hindi-English Code-Mixed Social Media Text for Hate Speech Detection

Title A Dataset of Hindi-English Code-Mixed Social Media Text for Hate Speech Detection
Authors Aditya Bohra, Deepanshu Vijay, Vinay Singh, Syed Sarfaraz Akhtar, Manish Shrivastava
Abstract Hate speech detection in social media texts is an important Natural Language Processing task, with several crucial applications such as sentiment analysis, investigating cyberbullying and examining socio-political controversies. While relevant research has been done independently on code-mixed social media texts and on hate speech detection, our work is the first attempt at detecting hate speech in Hindi-English code-mixed social media text. In this paper, we analyze the problem of hate speech detection in code-mixed texts and present a Hindi-English code-mixed dataset consisting of tweets posted on Twitter. The tweets are annotated with the language at word level and with the class they belong to (Hate Speech or Normal Speech). We also propose a supervised classification system for detecting hate speech in the text using various character-level, word-level, and lexicon-based features.
Tasks Hate Speech Detection, Sentiment Analysis
Published 2018-06-01
URL https://www.aclweb.org/anthology/W18-1105/
PDF https://www.aclweb.org/anthology/W18-1105
PWC https://paperswithcode.com/paper/a-dataset-of-hindi-english-code-mixed-social
Repo
Framework
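
The abstract above describes a supervised classifier built on character-level, word-level, and lexicon-based features. Below is a minimal sketch of that kind of pipeline using scikit-learn, assuming TF-IDF n-gram features and a linear SVM; the lexicon features and the exact model choices from the paper are not reproduced, and the example tweets and labels are invented.

```python
# Minimal sketch of a hate-speech classifier over code-mixed tweets using
# character- and word-level n-gram features (lexicon features omitted).
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

tweets = ["yeh bilkul bakwaas hai", "aaj ka din accha tha"]  # toy examples
labels = ["hate", "normal"]                                  # toy labels

clf = Pipeline([
    ("features", FeatureUnion([
        ("word_ngrams", TfidfVectorizer(analyzer="word", ngram_range=(1, 2))),
        ("char_ngrams", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))),
    ])),
    ("svm", LinearSVC()),
])
clf.fit(tweets, labels)
print(clf.predict(["kitna bura insaan hai yeh"]))
```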

Evaluation methodologies in Automatic Question Generation 2013-2018

Title Evaluation methodologies in Automatic Question Generation 2013-2018
Authors Jacopo Amidei, Paul Piwek, Alistair Willis
Abstract In the last few years Automatic Question Generation (AQG) has attracted increasing interest. In this paper we survey the evaluation methodologies used in AQG. Based on a sample of 37 papers, our research shows that the systems' development has not been accompanied by similar developments in the methodologies used for the systems' evaluation. Indeed, in the papers we examine here, we find a wide variety of both intrinsic and extrinsic evaluation methodologies. Such diverse evaluation practices make it difficult to reliably compare the quality of different generation systems. Our study suggests that, given the rapidly increasing level of research in the area, a common framework is urgently needed to compare the performance of AQG systems and NLG systems more generally.
Tasks Question Generation, Text Generation
Published 2018-11-01
URL https://www.aclweb.org/anthology/W18-6537/
PDF https://www.aclweb.org/anthology/W18-6537
PWC https://paperswithcode.com/paper/evaluation-methodologies-in-automatic
Repo
Framework

Detecting Institutional Dialog Acts in Police Traffic Stops

Title Detecting Institutional Dialog Acts in Police Traffic Stops
Authors Vinodkumar Prabhakaran, Camilla Griffiths, Hang Su, Prateek Verma, Nelson Morgan, Jennifer L. Eberhardt, Dan Jurafsky
Abstract We apply computational dialog methods to police body-worn camera footage to model conversations between police officers and community members in traffic stops. Relying on the theory of institutional talk, we develop a labeling scheme for police speech during traffic stops, and a tagger to detect institutional dialog acts (Reasons, Searches, Offering Help) from transcribed text at the turn (78% F-score) and stop (89% F-score) level. We then develop speech recognition and segmentation algorithms to detect these acts at the stop level from raw camera audio (81% F-score, with even higher accuracy for crucial acts like conveying the reason for the stop). We demonstrate that the dialog structures produced by our tagger could reveal whether officers follow law enforcement norms like introducing themselves, explaining the reason for the stop, and asking permission for searches. This work may therefore inform and aid efforts to ensure the procedural justice of police-community interactions.
Tasks Speech Recognition
Published 2018-01-01
URL https://www.aclweb.org/anthology/Q18-1033/
PDF https://www.aclweb.org/anthology/Q18-1033
PWC https://paperswithcode.com/paper/detecting-institutional-dialog-acts-in-police
Repo
Framework
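
As a rough illustration of the tagging-and-aggregation idea in the abstract above, here is a minimal sketch that trains a turn-level classifier for one act (conveying the reason for the stop) and aggregates turn predictions to the stop level. The toy utterances, the TF-IDF plus logistic regression model, and the 0.5 threshold are assumptions, not the paper's tagger.

```python
# Minimal sketch: tag each transcribed turn for one institutional dialog act
# ("reason for the stop") and aggregate turn-level scores to the stop level
# by taking the maximum over turns. Toy data throughout.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

turns = ["do you know why I pulled you over", "license and registration please"]
is_reason = [1, 0]  # toy turn-level labels for the "Reason" act

turn_tagger = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                            LogisticRegression(max_iter=1000))
turn_tagger.fit(turns, is_reason)

stop_turns = ["good evening", "I stopped you because your taillight is out"]
turn_probs = turn_tagger.predict_proba(stop_turns)[:, 1]
stop_has_reason = turn_probs.max() > 0.5   # stop-level decision
print(turn_probs, stop_has_reason)
```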

Real-time Change Point Detection using On-line Topic Models

Title Real-time Change Point Detection using On-line Topic Models
Authors Yunli Wang, Cyril Goutte
Abstract Detecting changes within an unfolding event in real time from news articles or social media enables prompt reaction to serious issues in public safety, public health or natural disasters. In this study, we use on-line Latent Dirichlet Allocation (LDA) to model shifts in topics, and apply on-line change point detection (CPD) algorithms to detect when significant changes happen. We describe an on-line Bayesian change point detection algorithm that we use to detect topic changes from on-line LDA output. Extensive experiments on social media data and news articles show the benefits of on-line LDA versus standard LDA, and of on-line change point detection compared to off-line algorithms. This yields F-scores up to 52% on the detection of significant real-life changes from these document streams.
Tasks Change Point Detection, Time Series, Topic Models
Published 2018-08-01
URL https://www.aclweb.org/anthology/C18-1212/
PDF https://www.aclweb.org/anthology/C18-1212
PWC https://paperswithcode.com/paper/real-time-change-point-detection-using-on
Repo
Framework
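
A minimal sketch of the overall flow described above: stream batches of documents through an on-line LDA model and flag a change point when the batch-level topic distribution shifts sharply. The Jensen-Shannon threshold used here is a stand-in for the Bayesian on-line change point detector of the paper, and the document stream is invented.

```python
# Minimal sketch: on-line LDA over a document stream plus a simple
# divergence-threshold change detector (not the paper's Bayesian CPD).
import numpy as np
from scipy.spatial.distance import jensenshannon
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

batches = [
    ["storm warning issued", "heavy rain and wind expected"],
    ["rain continues in the region", "flooding reported downtown"],
    ["power outage hits the city", "crews restoring electricity"],
]  # toy document stream

vectorizer = CountVectorizer()
vectorizer.fit([doc for batch in batches for doc in batch])

lda = LatentDirichletAllocation(n_components=3, learning_method="online",
                                random_state=0)
prev_topics = None
for t, batch in enumerate(batches):
    X = vectorizer.transform(batch)
    lda.partial_fit(X)                       # on-line update of the topic model
    topics = lda.transform(X).mean(axis=0)   # batch-level topic distribution
    if prev_topics is not None and jensenshannon(prev_topics, topics) > 0.3:
        print(f"possible change point at batch {t}")
    prev_topics = topics
```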

IIT Delhi at SemEval-2018 Task 1: Emotion Intensity Prediction

Title IIT Delhi at SemEval-2018 Task 1: Emotion Intensity Prediction
Authors Bhaskar Kotakonda, Prashanth Gowda, Brejesh Lall
Abstract This paper discusses the experiments performed for predicting emotion intensity in tweets using a generalized supervised learning approach. We extract three kinds of features from each tweet: sentiment and emotion metrics obtained from different sentiment lexicons, semantic representations of words using dense embeddings such as GloVe and word2vec, and syntactic information through POS n-grams, word clusters, etc. We provide a comparative analysis of the significance of each of these features individually and in combination, tested over standard regressors available in scikit-learn. We apply an ensemble of these models to choose the best combination via cross-validation.
Tasks Emotion Recognition, Sentiment Analysis
Published 2018-06-01
URL https://www.aclweb.org/anthology/S18-1051/
PDF https://www.aclweb.org/anthology/S18-1051
PWC https://paperswithcode.com/paper/iit-delhi-at-semeval-2018-task-1-emotion
Repo
Framework
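
Below is a minimal sketch of the feature-plus-regressor comparison described in the abstract, assuming a toy emotion lexicon and toy word vectors in place of real lexicons and GloVe/word2vec embeddings; the regressors and cross-validation mirror the general setup, not the paper's exact configuration.

```python
# Minimal sketch: lexicon-count and averaged-embedding features per tweet,
# compared across standard scikit-learn regressors by cross-validation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR

joy_lexicon = {"happy", "great", "love"}                 # toy emotion lexicon
embeddings = {w: np.random.rand(10) for w in
              "i am so happy great day love sad and tired this".split()}  # toy vectors

def featurize(tweet):
    words = tweet.lower().split()
    lex = sum(w in joy_lexicon for w in words)           # lexicon hit count
    vecs = [embeddings[w] for w in words if w in embeddings]
    emb = np.mean(vecs, axis=0) if vecs else np.zeros(10)
    return np.concatenate([[lex], emb])

tweets = ["i am so happy", "great day", "so sad and tired", "love this"]
intensity = [0.9, 0.8, 0.2, 0.85]                        # toy gold intensities
X = np.vstack([featurize(t) for t in tweets])

for model in (Ridge(), SVR(), RandomForestRegressor(n_estimators=50)):
    scores = cross_val_score(model, X, intensity, cv=2,
                             scoring="neg_mean_absolute_error")
    print(type(model).__name__, -scores.mean())
```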

Does Ability Affect Alignment in Second Language Tutorial Dialogue?

Title Does Ability Affect Alignment in Second Language Tutorial Dialogue?
Authors Arabella Sinclair, Adam Lopez, C. G. Lucas, Dragan Gasevic
Abstract The role of alignment between interlocutors in second language learning is different to that in fluent conversational dialogue. Learners gain linguistic skill through increased alignment, yet the extent to which they can align will be constrained by their ability. Tutors may use alignment to teach and encourage the student, yet still must push the student and correct their errors, decreasing alignment. To understand how learner ability interacts with alignment, we measure the influence of ability on lexical priming, an indicator of alignment. We find that lexical priming in learner-tutor dialogues differs from that in conversational and task-based dialogues, and we find evidence that alignment increases with ability and with word complexity.
Tasks
Published 2018-07-01
URL https://www.aclweb.org/anthology/W18-5005/
PDF https://www.aclweb.org/anthology/W18-5005
PWC https://paperswithcode.com/paper/does-ability-affect-alignment-in-second
Repo
Framework
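
As an illustration of what a lexical-priming indicator can look like, here is a minimal sketch that measures how often a learner turn reuses content words from the tutor's preceding turn. The stopword list, toy dialogue, and overlap measure are assumptions; the paper's priming measure is more sophisticated.

```python
# Minimal sketch: a crude proxy for lexical priming, namely the proportion of
# content words in each learner turn that also appeared in the tutor's
# immediately preceding turn.
STOPWORDS = {"the", "a", "an", "is", "it", "to", "you", "i", "and", "what"}

def content_words(turn):
    return {w for w in turn.lower().split() if w not in STOPWORDS}

dialogue = [
    ("tutor", "can you describe the weather today"),
    ("learner", "the weather is sunny today"),
    ("tutor", "good and what will you wear"),
    ("learner", "i will wear a light jacket"),
]

scores = []
for (prev_role, prev), (role, turn) in zip(dialogue, dialogue[1:]):
    if prev_role == "tutor" and role == "learner":
        prev_words, words = content_words(prev), content_words(turn)
        if words:
            scores.append(len(words & prev_words) / len(words))
print("mean learner-to-tutor lexical overlap:", sum(scores) / len(scores))
```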

Point process latent variable models of larval zebrafish behavior

Title Point process latent variable models of larval zebrafish behavior
Authors Anuj Sharma, Robert Johnson, Florian Engert, Scott Linderman
Abstract A fundamental goal of systems neuroscience is to understand how neural activity gives rise to natural behavior. In order to achieve this goal, we must first build comprehensive models that offer quantitative descriptions of behavior. We develop a new class of probabilistic models to tackle this challenge in the study of larval zebrafish, an important model organism for neuroscience. Larval zebrafish locomote via sequences of punctate swim bouts (brief flicks of the tail), which are naturally modeled as a marked point process. However, these sequences of swim bouts belie a set of discrete and continuous internal states, latent variables that are not captured by standard point process models. We incorporate these variables as latent marks of a point process and explore various models for their dynamics. To infer the latent variables and fit the parameters of this model, we develop an amortized variational inference algorithm that targets the collapsed posterior distribution, analytically marginalizing out the discrete latent variables. With a dataset of over 120,000 swim bouts, we show that our models reveal interpretable discrete classes of swim bouts and continuous internal states like hunger that modulate their dynamics. These models are a major step toward understanding the natural behavioral program of the larval zebrafish and, ultimately, its neural underpinnings.
Tasks Latent Variable Models
Published 2018-12-01
URL http://papers.nips.cc/paper/8289-point-process-latent-variable-models-of-larval-zebrafish-behavior
PDF http://papers.nips.cc/paper/8289-point-process-latent-variable-models-of-larval-zebrafish-behavior.pdf
PWC https://paperswithcode.com/paper/point-process-latent-variable-models-of
Repo
Framework
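
To make the model class concrete, here is a minimal sketch that simulates a marked point process whose event rate and mark distribution depend on a discrete latent state switching via a Markov chain. The rates, transition matrix, and the convention of resampling the state after each bout are illustrative assumptions; the paper's generative model and its amortized variational inference are not reproduced.

```python
# Minimal sketch: simulate a marked point process in which a hidden discrete
# state (e.g. "exploring" vs "hunting") sets both the bout rate and the
# distribution of the bout's mark (turn angle). The state is resampled after
# each bout for simplicity.
import numpy as np

rng = np.random.default_rng(0)
rates = np.array([0.5, 2.0])             # bouts per second in each state
turn_std = np.array([10.0, 40.0])        # mark (turn angle) spread per state
P = np.array([[0.95, 0.05],              # state transition probabilities
              [0.10, 0.90]])

state, t, events = 0, 0.0, []
while t < 60.0:                          # simulate one minute of behavior
    t += rng.exponential(1.0 / rates[state])        # waiting time to next bout
    angle = rng.normal(0.0, turn_std[state])        # latent-state-dependent mark
    events.append((t, state, angle))
    state = rng.choice(2, p=P[state])               # latent state transition

print(f"{len(events)} bouts simulated; first few: {events[:3]}")
```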

Learning to Dodge A Bullet: Concyclic View Morphing via Deep Learning

Title Learning to Dodge A Bullet: Concyclic View Morphing via Deep Learning
Authors Shi Jin, Ruiyang Liu, Yu Ji, Jinwei Ye, Jingyi Yu
Abstract The bullet-time effect, presented in the feature film “The Matrix”, has been widely adopted in feature films and TV commercials to create an amazing stopping-time illusion. Producing such visual effects, however, typically requires using a large number of cameras/images surrounding the subject. In this paper, we present a learning-based solution that is capable of producing the bullet-time effect from only a small set of images. Specifically, we present a view morphing framework that can synthesize smooth and realistic transitions along a circular view path using as few as three reference images. We apply a novel cyclic rectification technique to align the reference images onto a common circle and then feed the rectified results into a deep network to predict its motion field and per-pixel visibility for new view interpolation. Comprehensive experiments on synthetic and real data show that our new framework outperforms the state-of-the-art and provides an inexpensive and practical solution for producing the bullet-time effects.
Tasks
Published 2018-09-01
URL http://openaccess.thecvf.com/content_ECCV_2018/html/shi_jin_Learning_to_Dodge_ECCV_2018_paper.html
PDF http://openaccess.thecvf.com/content_ECCV_2018/papers/shi_jin_Learning_to_Dodge_ECCV_2018_paper.pdf
PWC https://paperswithcode.com/paper/learning-to-dodge-a-bullet-concyclic-view
Repo
Framework
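
The interpolation step described above (predict a motion field and per-pixel visibility, then warp the rectified references) could be sketched roughly as follows in PyTorch. The tiny two-layer network, the blending rule, and the random inputs are placeholders, not the architecture from the paper.

```python
# Minimal sketch: a small CNN takes two rectified reference views, predicts a
# motion field and per-pixel visibility, and warps one view toward the new
# viewpoint.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MorphNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),   # 2 flow channels + 1 visibility
        )

    def forward(self, left, right):
        out = self.net(torch.cat([left, right], dim=1))
        flow, visibility = out[:, :2], torch.sigmoid(out[:, 2:3])
        return flow, visibility

def warp(image, flow):
    n, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w),
                            indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    grid = grid + flow.permute(0, 2, 3, 1)        # displace the sampling grid
    return F.grid_sample(image, grid, align_corners=True)

left, right = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
flow, visibility = MorphNet()(left, right)
novel_view = visibility * warp(left, flow) + (1 - visibility) * right
print(novel_view.shape)
```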

The emergence of multiple retinal cell types through efficient coding of natural movies

Title The emergence of multiple retinal cell types through efficient coding of natural movies
Authors Samuel Ocko, Jack Lindsey, Surya Ganguli, Stephane Deny
Abstract One of the most striking aspects of early visual processing in the retina is the immediate parcellation of visual information into multiple parallel pathways, formed by different retinal ganglion cell types each tiling the entire visual field. Existing theories of efficient coding have been unable to account for the functional advantages of such cell-type diversity in encoding natural scenes. Here we go beyond previous theories to analyze how a simple linear retinal encoding model with different convolutional cell types efficiently encodes naturalistic spatiotemporal movies given a fixed firing rate budget. We find that optimizing the receptive fields and cell densities of two cell types makes them match the properties of the two main cell types in the primate retina, midget and parasol cells, in terms of spatial and temporal sensitivity, cell spacing, and their relative ratio. Moreover, our theory gives a precise account of how the ratio of midget to parasol cells decreases with retinal eccentricity. Also, we train a nonlinear encoding model with a rectifying nonlinearity to efficiently encode naturalistic movies, and again find emergent receptive fields resembling those of midget and parasol cells that are now further subdivided into ON and OFF types. Thus our work provides a theoretical justification, based on the efficient coding of natural movies, for the existence of the four most dominant cell types in the primate retina that together comprise 70% of all ganglion cells.
Tasks
Published 2018-12-01
URL http://papers.nips.cc/paper/8150-the-emergence-of-multiple-retinal-cell-types-through-efficient-coding-of-natural-movies
PDF http://papers.nips.cc/paper/8150-the-emergence-of-multiple-retinal-cell-types-through-efficient-coding-of-natural-movies.pdf
PWC https://paperswithcode.com/paper/the-emergence-of-multiple-retinal-cell-types
Repo
Framework

Remote Photoplethysmography Correspondence Feature for 3D Mask Face Presentation Attack Detection

Title Remote Photoplethysmography Correspondence Feature for 3D Mask Face Presentation Attack Detection
Authors Si-Qi Liu, Xiangyuan Lan, Pong C. Yuen
Abstract 3D mask face presentation attack, as a new challenge in face recognition, has been attracting increasing attention. Recently, remote Photoplethysmography (rPPG) has been employed as an intrinsic liveness cue that is independent of the mask appearance. Although existing rPPG-based methods achieve promising results in both intra- and cross-dataset scenarios, they may not be robust enough when rPPG signals are contaminated by noise. In this paper, we propose a new liveness feature, called the rPPG correspondence feature (CFrPPG), to precisely identify the heartbeat vestige in the observed noisy rPPG signals. To further overcome global interferences, we propose a novel learning strategy which incorporates the global noise within the CFrPPG feature. Extensive experiments indicate that the proposed feature not only outperforms the state-of-the-art rPPG-based methods on 3D mask attacks but is also able to handle practical scenarios with dim light and camera motion.
Tasks Face Presentation Attack Detection, Face Recognition
Published 2018-09-01
URL http://openaccess.thecvf.com/content_ECCV_2018/html/Siqi_Liu_Remote_Photoplethysmography_Correspondence_ECCV_2018_paper.html
PDF http://openaccess.thecvf.com/content_ECCV_2018/papers/Siqi_Liu_Remote_Photoplethysmography_Correspondence_ECCV_2018_paper.pdf
PWC https://paperswithcode.com/paper/remote-photoplethysmography-correspondence
Repo
Framework
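
For context on the liveness cue the feature builds on, here is a minimal sketch that extracts a remote PPG signal as the mean green-channel intensity of a face crop per frame and band-pass filters it to heart-rate frequencies. The synthetic frames and injected pulse are invented, and the paper's CFrPPG correspondence feature itself is not reproduced.

```python
# Minimal sketch of the underlying liveness cue: a remote PPG signal taken as
# the mean green-channel intensity per frame, band-pass filtered to plausible
# heart-rate frequencies.
import numpy as np
from scipy.signal import butter, filtfilt

fps = 30.0
frames = np.random.rand(300, 64, 64, 3)          # stand-in for cropped face frames
t = np.arange(frames.shape[0]) / fps
frames[..., 1] += 0.01 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]  # fake 72 bpm pulse

raw = frames[..., 1].mean(axis=(1, 2))           # mean green channel per frame
b, a = butter(3, [0.7, 4.0], btype="bandpass", fs=fps)   # 42-240 bpm band
rppg = filtfilt(b, a, raw - raw.mean())

freqs = np.fft.rfftfreq(len(rppg), d=1.0 / fps)
spectrum = np.abs(np.fft.rfft(rppg))
print("dominant frequency (Hz):", freqs[spectrum.argmax()])
```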

Stylistic Chinese Poetry Generation via Unsupervised Style Disentanglement

Title Stylistic Chinese Poetry Generation via Unsupervised Style Disentanglement
Authors Cheng Yang, Maosong Sun, Xiaoyuan Yi, Wenhao Li
Abstract The ability to write diverse poems in different styles under the same poetic imagery is an important characteristic of human poetry writing. Most previous work on automatic Chinese poetry generation focused on improving the coherency among lines. Some work explored style transfer but suffered from expensive expert labeling of poem styles. In this paper, we target stylistic poetry generation in a fully unsupervised manner for the first time. We propose a novel model which requires no supervised style labeling by incorporating mutual information, a concept from information theory, into the modeling. Experimental results show that our model is able to generate stylistic poems without losing fluency and coherency.
Tasks Machine Translation, Style Transfer
Published 2018-10-01
URL https://www.aclweb.org/anthology/D18-1430/
PDF https://www.aclweb.org/anthology/D18-1430
PWC https://paperswithcode.com/paper/stylistic-chinese-poetry-generation-via
Repo
Framework

Grounding language acquisition by training semantic parsers using captioned videos

Title Grounding language acquisition by training semantic parsers using captioned videos
Authors Candace Ross, Andrei Barbu, Yevgeni Berzak, Battushig Myanganbayar, Boris Katz
Abstract We develop a semantic parser that is trained in a grounded setting using pairs of videos captioned with sentences. This setting is both data-efficient, requiring little annotation, and similar to the experience of children where they observe their environment and listen to speakers. The semantic parser recovers the meaning of English sentences despite not having access to any annotated sentences. It does so despite the ambiguity inherent in vision where a sentence may refer to any combination of objects, object properties, relations or actions taken by any agent in a video. For this task, we collected a new dataset for grounded language acquisition. Learning a grounded semantic parser (turning sentences into logical forms using captioned videos) can significantly expand the range of data that parsers can be trained on, lower the effort of training a semantic parser, and ultimately lead to a better understanding of child language acquisition.
Tasks Language Acquisition, Semantic Parsing
Published 2018-10-01
URL https://www.aclweb.org/anthology/D18-1285/
PDF https://www.aclweb.org/anthology/D18-1285
PWC https://paperswithcode.com/paper/grounding-language-acquisition-by-training
Repo
Framework

Time-Resolved Light Transport Decomposition for Thermal Photometric Stereo

Title Time-Resolved Light Transport Decomposition for Thermal Photometric Stereo
Authors Kenichiro Tanaka, Nobuhiro Ikeya, Tsuyoshi Takatani, Hiroyuki Kubo, Takuya Funatomi, Yasuhiro Mukaigawa
Abstract We present a novel time-resolved light transport decomposition method using thermal imaging. Because the speed of heat propagation is much slower than the speed of light propagation, transient transport of far infrared light can be observed at a video frame rate. A key observation is that the thermal image looks similar to the visible light image in an appropriately controlled environment. This implies that conventional computer vision techniques can be straightforwardly applied to the thermal image. We show that the diffuse component in the thermal image can be separated and, therefore, the surface normals of objects can be estimated by the Lambertian photometric stereo. The effectiveness of our method is evaluated by conducting real-world experiments, and its applicability to black body, transparent, and translucent objects is shown.
Tasks
Published 2018-06-01
URL http://openaccess.thecvf.com/content_cvpr_2018/html/Tanaka_Time-Resolved_Light_Transport_CVPR_2018_paper.html
PDF http://openaccess.thecvf.com/content_cvpr_2018/papers/Tanaka_Time-Resolved_Light_Transport_CVPR_2018_paper.pdf
PWC https://paperswithcode.com/paper/time-resolved-light-transport-decomposition
Repo
Framework
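
Since the method ultimately applies Lambertian photometric stereo to the separated diffuse component, here is a minimal sketch of that step: solve a per-pixel least-squares system relating known light directions to observed intensities and recover albedo-scaled normals. The light directions and the synthetic flat surface are assumptions for illustration; in the paper the "images" are diffuse components of thermal frames.

```python
# Minimal sketch of the Lambertian photometric stereo step: given per-pixel
# intensities under several known light directions, solve for g = albedo * normal.
import numpy as np

lights = np.array([[0.0, 0.0, 1.0],
                   [0.5, 0.0, 0.87],
                   [0.0, 0.5, 0.87]])          # 3 known light directions
h, w = 32, 32

true_n = np.zeros((h, w, 3)); true_n[..., 2] = 1.0      # flat surface facing camera
albedo = 0.8
images = albedo * np.einsum("ld,hwd->lhw", lights, true_n).clip(min=0)

# Solve lights @ g = intensity for every pixel.
I = images.reshape(len(lights), -1)                     # (n_lights, n_pixels)
g, *_ = np.linalg.lstsq(lights, I, rcond=None)          # (3, n_pixels)
g = g.T.reshape(h, w, 3)
recovered_albedo = np.linalg.norm(g, axis=-1)
normals = g / recovered_albedo[..., None]

print(normals[0, 0], recovered_albedo.mean())
```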

A Multilingual Dataset for Evaluating Parallel Sentence Extraction from Comparable Corpora

Title A Multilingual Dataset for Evaluating Parallel Sentence Extraction from Comparable Corpora
Authors Pierre Zweigenbaum, Serge Sharoff, Reinhard Rapp
Abstract
Tasks Machine Translation, Semantic Textual Similarity
Published 2018-05-01
URL https://www.aclweb.org/anthology/L18-1605/
PDF https://www.aclweb.org/anthology/L18-1605
PWC https://paperswithcode.com/paper/a-multilingual-dataset-for-evaluating
Repo
Framework

Alibaba Submission to the WMT18 Parallel Corpus Filtering Task

Title Alibaba Submission to the WMT18 Parallel Corpus Filtering Task
Authors Jun Lu, Xiaoyu Lv, Yangbin Shi, Boxing Chen
Abstract This paper describes the Alibaba Machine Translation Group submissions to the WMT 2018 Shared Task on Parallel Corpus Filtering. While evaluating the quality of the parallel corpus, the three characteristics of the corpus are investigated, i.e. 1) the bilingual/translation quality, 2) the monolingual quality and 3) the corpus diversity. Both rule-based and model-based methods are adapted to score the parallel sentence pairs. The final parallel corpus filtering system is reliable, easy to build and adapt to other language pairs.
Tasks Machine Translation, Word Alignment
Published 2018-10-01
URL https://www.aclweb.org/anthology/W18-6482/
PDF https://www.aclweb.org/anthology/W18-6482
PWC https://paperswithcode.com/paper/alibaba-submission-to-the-wmt18-parallel
Repo
Framework
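
As a rough illustration of the rule-based side of such a filtering system, here is a minimal sketch that scores sentence pairs with simple heuristics (length ratio, near-copy detection, junk-character ratio) and keeps pairs above a threshold. The heuristics, weights, and threshold are assumptions; the paper's model-based quality and diversity scores are not reproduced.

```python
# Minimal sketch of rule-based parallel corpus filtering: score each sentence
# pair with simple heuristics and keep pairs above a threshold.
def rule_score(src, tgt):
    src_toks, tgt_toks = src.split(), tgt.split()
    if not src_toks or not tgt_toks:
        return 0.0
    ratio = min(len(src_toks), len(tgt_toks)) / max(len(src_toks), len(tgt_toks))
    overlap = len(set(src_toks) & set(tgt_toks)) / len(set(src_toks) | set(tgt_toks))
    copy_penalty = 0.0 if overlap < 0.6 else 1.0        # near-identical sides
    alpha = sum(c.isalpha() or c.isspace() for c in src + tgt) / len(src + tgt)
    return max(0.0, ratio * alpha - copy_penalty)

pairs = [
    ("das ist ein guter satz", "this is a good sentence"),
    ("hallo", "this translation is suspiciously much longer than its source"),
    ("same text on both sides", "same text on both sides"),
]
kept = [(s, t) for s, t in pairs if rule_score(s, t) > 0.5]
print(kept)
```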