Paper Group NANR 145
The Solution Path Algorithm for Identity-Aware Multi-Object Tracking
Title | The Solution Path Algorithm for Identity-Aware Multi-Object Tracking |
Authors | Shoou-I Yu, Deyu Meng, Wangmeng Zuo, Alexander Hauptmann |
Abstract | We propose an identity-aware multi-object tracker based on the solution path algorithm. Our tracker not only produces identity-coherent trajectories based on cues such as face recognition, but also has the ability to pinpoint potential tracking errors. The tracker is formulated as a quadratic optimization problem with L0 norm constraints, which we propose to solve with the solution path algorithm. The algorithm successively solves the same optimization problem but under different Lp norm constraints, where p gradually decreases from 1 to 0. Inspired by the success of the solution path algorithm in various machine learning tasks, this strategy is expected to converge to a better local minimum than directly minimizing the hardly solvable L0 norm or the roughly approximated L1 norm constraints. Furthermore, the acquired solution path complies with the “decision making process” of the tracker, which provides more insight into locating potential tracking errors. Experiments show that not only is our proposed tracker effective, but also the solution path enables automatic pinpointing of potential tracking failures, which can be readily utilized in an active learning framework to improve identity-aware multi-object tracking. |
Tasks | Active Learning, Decision Making, Face Recognition, Multi-Object Tracking, Object Tracking |
Published | 2016-06-01 |
URL | http://openaccess.thecvf.com/content_cvpr_2016/html/Yu_The_Solution_Path_CVPR_2016_paper.html |
http://openaccess.thecvf.com/content_cvpr_2016/papers/Yu_The_Solution_Path_CVPR_2016_paper.pdf | |
PWC | https://paperswithcode.com/paper/the-solution-path-algorithm-for-identity |
Repo | |
Framework | |
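The abstract above describes a continuation strategy: the same quadratic objective is re-solved under Lp-norm constraints as p shrinks from 1 toward 0, each solve warm-started from the previous solution. Below is a minimal, generic Python sketch of that idea on a toy penalised quadratic; the objective, smoothing constant, learning rate, and schedule of p values are all illustrative assumptions, not the paper's tracking formulation.

```python
# Generic sketch of the Lp continuation ("solution path") idea: minimise the same
# quadratic objective while the norm exponent p decreases from 1 toward 0,
# warm-starting each solve from the previous one. All settings are illustrative.
import numpy as np

def lp_penalised_quadratic(Q, c, lam, p, x0, steps=500, lr=1e-2, eps=1e-3):
    """Gradient descent on 0.5 x'Qx + c'x + lam * sum(|x_i|^p), smoothed near 0."""
    x = x0.copy()
    for _ in range(steps):
        grad_quad = Q @ x + c
        grad_pen = lam * p * x * (x * x + eps) ** (p / 2 - 1)  # d/dx (x^2+eps)^(p/2)
        x -= lr * (grad_quad + grad_pen)
    return x

def solution_path(Q, c, lam, ps=(1.0, 0.8, 0.6, 0.4, 0.2, 0.05)):
    """Follow the solutions as p shrinks; return every intermediate solution."""
    x = np.zeros(len(c))
    path = []
    for p in ps:
        x = lp_penalised_quadratic(Q, c, lam, p, x)   # warm start from previous p
        path.append((p, x.copy()))
    return path

# Toy usage: a small positive-definite quadratic.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
Q = A.T @ A + np.eye(5)
c = rng.normal(size=5)
for p, x in solution_path(Q, c, lam=1.0):
    print(f"p={p:.2f}  nonzeros={np.sum(np.abs(x) > 1e-3)}")
```

Inspecting the intermediate solutions along the path is what the abstract refers to when it says the path "complies with the decision making process" and helps locate potential errors.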
How Interlocutors Coordinate with each other within Emotional Segments?
Title | How Interlocutors Coordinate with each other within Emotional Segments? |
Authors | Firoj Alam, Shammur Absar Chowdhury, Morena Danieli, Giuseppe Riccardi |
Abstract | In this paper, we aim to investigate the coordination of interlocutors' behavior in different emotional segments. Conversational coordination between interlocutors is the tendency of speakers to predict and adjust to each other over the course of an ongoing conversation. In order to find such coordination, we investigated 1) lexical similarities between the speakers in each emotional segment, 2) correlation between the interlocutors using psycholinguistic features, such as linguistic styles, psychological processes, and personal concerns, among others, and 3) the relation of interlocutors' turn-taking behaviors, such as competitiveness. To study the degree of coordination in different emotional segments, we conducted our experiments using real dyadic conversations collected from call centers, in which the agent's emotional states include empathy and the customer's emotional states include anger and frustration. Our findings suggest that the most coordination occurs between the interlocutors inside anger segments, whereas little coordination was observed when the agent was empathic, even though an increase in the amount of non-competitive overlaps was observed. We found no significant difference between anger and frustration segments in terms of turn-taking behaviors. However, the length of pause decreases significantly in the segment preceding anger, whereas it increases in the segment preceding frustration. |
Tasks | |
Published | 2016-12-01 |
URL | https://www.aclweb.org/anthology/C16-1070/ |
https://www.aclweb.org/anthology/C16-1070 | |
PWC | https://paperswithcode.com/paper/how-interlocutors-coordinate-with-each-other |
Repo | |
Framework | |
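One of the analyses mentioned above is lexical similarity between the two speakers inside an emotional segment. The short Python sketch below computes a bag-of-words cosine similarity over the agent and customer turns of one segment; the segment format, tokenisation, and speaker labels are assumptions for illustration, not the paper's actual feature set.

```python
# Minimal sketch: lexical similarity between the two speakers within one segment.
from collections import Counter
import math

def cosine(c1: Counter, c2: Counter) -> float:
    """Cosine similarity of two bag-of-words count vectors."""
    dot = sum(c1[w] * c2[w] for w in set(c1) & set(c2))
    n1 = math.sqrt(sum(v * v for v in c1.values()))
    n2 = math.sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def segment_lexical_similarity(turns):
    """turns: list of (speaker, text) pairs; returns agent/customer similarity."""
    agent = Counter(w for spk, t in turns if spk == "agent" for w in t.lower().split())
    customer = Counter(w for spk, t in turns if spk == "customer" for w in t.lower().split())
    return cosine(agent, customer)

anger_segment = [("customer", "this is the third time I call about the same bill"),
                 ("agent", "I understand, let me check the bill right away")]
print(segment_lexical_similarity(anger_segment))
```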
Using Linguistic Data for English and Spanish Verb-Noun Combination Identification
Title | Using Linguistic Data for English and Spanish Verb-Noun Combination Identification |
Authors | Uxoa Iñurrieta, Arantza Díaz de Ilarraza, Gorka Labaka, Kepa Sarasola, Itziar Aduriz, John Carroll |
Abstract | We present a linguistic analysis of a set of English and Spanish verb+noun combinations (VNCs), and a method to use this information to improve VNC identification. Firstly, a sample of frequent VNCs is analysed in depth and tagged along lexico-semantic and morphosyntactic dimensions, obtaining satisfactory inter-annotator agreement scores. Then, a VNC identification experiment is undertaken, where the analysed linguistic data is combined with chunking information and syntactic dependencies. A comparison between the results of the experiment and the results obtained by a basic detection method shows that VNC identification can be greatly improved by using linguistic information, as a large number of additional occurrences are detected with high precision. |
Tasks | Chunking, Machine Translation |
Published | 2016-12-01 |
URL | https://www.aclweb.org/anthology/C16-1082/ |
https://www.aclweb.org/anthology/C16-1082 | |
PWC | https://paperswithcode.com/paper/using-linguistic-data-for-english-and-spanish |
Repo | |
Framework | |
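The abstract describes combining manually tagged linguistic information about verb+noun combinations with syntactic dependencies in order to detect additional occurrences. The sketch below illustrates that kind of lookup over dependency triples with a tiny hypothetical lexicon; the lexicon entries, tags, and dependency labels are invented for illustration and are not the paper's resources.

```python
# Illustrative only: flag VNC candidates by matching verb+object dependencies
# against a small, hypothetical lexicon of manually analysed combinations.
from typing import List, Tuple

# (verb_lemma, noun_lemma) -> linguistic tags from manual analysis (hypothetical)
VNC_LEXICON = {
    ("take", "decision"): {"idiomatic": True, "noun_determiner": "variable"},
    ("make", "mistake"):  {"idiomatic": False, "noun_determiner": "indefinite"},
}

def find_vnc_candidates(deps: List[Tuple[str, str, str]]):
    """deps: (head_lemma, relation, dependent_lemma) triples for one sentence."""
    hits = []
    for head, rel, dep in deps:
        if rel == "obj" and (head, dep) in VNC_LEXICON:   # verb governs a noun object
            hits.append(((head, dep), VNC_LEXICON[(head, dep)]))
    return hits

sentence_deps = [("take", "nsubj", "committee"), ("take", "obj", "decision")]
print(find_vnc_candidates(sentence_deps))
```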
Story Cloze Evaluator: Vector Space Representation Evaluation by Predicting What Happens Next
Title | Story Cloze Evaluator: Vector Space Representation Evaluation by Predicting What Happens Next |
Authors | Nasrin Mostafazadeh, Lucy Vanderwende, Wen-tau Yih, Pushmeet Kohli, James Allen |
Abstract | |
Tasks | Representation Learning, Semantic Textual Similarity |
Published | 2016-08-01 |
URL | https://www.aclweb.org/anthology/W16-2505/ |
https://www.aclweb.org/anthology/W16-2505 | |
PWC | https://paperswithcode.com/paper/story-cloze-evaluator-vector-space |
Repo | |
Framework | |
Automated speech-unit delimitation in spoken learner English
Title | Automated speech-unit delimitation in spoken learner English |
Authors | Russell Moore, Andrew Caines, Calbert Graham, Paula Buttery |
Abstract | In order to apply computational linguistic analyses and pass information to downstream applications, transcriptions of speech obtained via automatic speech recognition (ASR) need to be divided into smaller meaningful units, in a task we refer to as ‘speech-unit (SU) delimitation’. We closely recreate the automatic delimitation system described by Lee and Glass (2012), ‘Sentence detection using multiple annotations’, Proceedings of INTERSPEECH, which combines a prosodic model, language model and speech-unit length model in log-linear fashion. Since state-of-the-art natural language processing (NLP) tools have been developed to deal with written text and its characteristic sentence-like units, SU delimitation helps bridge the gap between ASR and NLP, by normalising spoken data into a more canonical format. Previous work has focused on native speaker recordings; we test the system of Lee and Glass (2012) on non-native speaker (or ‘learner’) data, achieving performance above the state-of-the-art. We also consider alternative evaluation metrics which move away from the idea of a single ‘truth’ in SU delimitation, and frame this work in the context of downstream NLP applications. |
Tasks | Language Modelling, Speech Recognition |
Published | 2016-12-01 |
URL | https://www.aclweb.org/anthology/C16-1075/ |
https://www.aclweb.org/anthology/C16-1075 | |
PWC | https://paperswithcode.com/paper/automated-speech-unit-delimitation-in-spoken |
Repo | |
Framework | |
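The recreated system combines a prosodic model, a language model, and a speech-unit length model in log-linear fashion. The following sketch shows what such a combination can look like: each candidate boundary receives a weighted sum of component log-scores and a boundary is placed when the score clears a threshold. The component probabilities, weights, and threshold are placeholders, not the Lee and Glass (2012) models themselves.

```python
# Sketch of a log-linear combination of three component models for SU delimitation.
import math

def boundary_score(prosodic_p, lm_p, length_p, weights=(1.0, 1.0, 1.0)):
    """Log-linear score: sum_i w_i * log p_i(boundary | features)."""
    probs = (prosodic_p, lm_p, length_p)
    return sum(w * math.log(max(p, 1e-12)) for w, p in zip(weights, probs))

def delimit(tokens, component_probs, threshold=-3.0):
    """Insert a boundary after token i whenever the combined score exceeds threshold."""
    units, current = [], []
    for tok, (pp, lp, sp) in zip(tokens, component_probs):
        current.append(tok)
        if boundary_score(pp, lp, sp) > threshold:
            units.append(current)
            current = []
    if current:
        units.append(current)
    return units

tokens = ["so", "i", "went", "home", "then", "i", "slept"]
probs = [(0.1, 0.2, 0.1), (0.1, 0.1, 0.1), (0.2, 0.1, 0.2),
         (0.9, 0.8, 0.7), (0.1, 0.2, 0.1), (0.1, 0.1, 0.1), (0.9, 0.9, 0.9)]
print(delimit(tokens, probs))   # two speech units on this toy input
```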
Simple PPDB: A Paraphrase Database for Simplification
Title | Simple PPDB: A Paraphrase Database for Simplification |
Authors | Ellie Pavlick, Chris Callison-Burch |
Abstract | |
Tasks | Lexical Simplification, Text Simplification |
Published | 2016-08-01 |
URL | https://www.aclweb.org/anthology/P16-2024/ |
https://www.aclweb.org/anthology/P16-2024 | |
PWC | https://paperswithcode.com/paper/simple-ppdb-a-paraphrase-database-for |
Repo | |
Framework | |
Alleviating Poor Context with Background Knowledge for Named Entity Disambiguation
Title | Alleviating Poor Context with Background Knowledge for Named Entity Disambiguation |
Authors | Ander Barrena, Aitor Soroa, Eneko Agirre |
Abstract | |
Tasks | Entity Disambiguation, Entity Linking, Entity Resolution |
Published | 2016-08-01 |
URL | https://www.aclweb.org/anthology/P16-1179/ |
https://www.aclweb.org/anthology/P16-1179 | |
PWC | https://paperswithcode.com/paper/alleviating-poor-context-with-background |
Repo | |
Framework | |
OCR Post-Correction Evaluation of Early Dutch Books Online - Revisited
Title | OCR Post-Correction Evaluation of Early Dutch Books Online - Revisited |
Authors | Martin Reynaert |
Abstract | We present further work on evaluation of the fully automatic post-correction of Early Dutch Books Online, a collection of 10,333 18th-century books. In prior work we evaluated the new implementation of Text-Induced Corpus Clean-up (TICCL) on the basis of a single-book Gold Standard derived from this collection. In the current paper we revisit the same collection on the basis of a sizeable 1020-item random sample of OCR post-corrected strings from the full collection. Both evaluations have their own stories to tell and lessons to teach. |
Tasks | Optical Character Recognition |
Published | 2016-05-01 |
URL | https://www.aclweb.org/anthology/L16-1154/ |
https://www.aclweb.org/anthology/L16-1154 | |
PWC | https://paperswithcode.com/paper/ocr-post-correction-evaluation-of-early-dutch |
Repo | |
Framework | |
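The evaluation described above is based on a random sample of post-corrected strings rather than a single gold-standard book. As a rough illustration, the sketch below draws a random sample of (OCR, corrected, gold) triples and tallies whether post-correction fixed, preserved, or broke each item; the triples and sample size are made up, and this is not the paper's evaluation protocol.

```python
# Illustrative sample-based evaluation of OCR post-correction.
import random

def evaluate_sample(triples, k=5, seed=0):
    """triples: (ocr, corrected, gold) strings; tally the effect of post-correction."""
    random.seed(seed)
    sample = random.sample(triples, min(k, len(triples)))
    counts = {"fixed": 0, "already_correct": 0, "still_wrong": 0, "broken": 0}
    for ocr, corrected, gold in sample:
        if ocr == gold:
            counts["already_correct" if corrected == gold else "broken"] += 1
        else:
            counts["fixed" if corrected == gold else "still_wrong"] += 1
    return counts

triples = [("stra-at", "straat", "straat"), ("boeck", "boek", "boek"),
           ("huis", "huys", "huis"), ("ende", "ende", "ende"), ("vv ater", "water", "water")]
print(evaluate_sample(triples))
```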
Geolocation for Twitter: Timing Matters
Title | Geolocation for Twitter: Timing Matters |
Authors | Mark Dredze, Miles Osborne, Prabhanjan Kambadur |
Abstract | |
Tasks | |
Published | 2016-06-01 |
URL | https://www.aclweb.org/anthology/N16-1122/ |
https://www.aclweb.org/anthology/N16-1122 | |
PWC | https://paperswithcode.com/paper/geolocation-for-twitter-timing-matters |
Repo | |
Framework | |
Neurons Equipped with Intrinsic Plasticity Learn Stimulus Intensity Statistics
Title | Neurons Equipped with Intrinsic Plasticity Learn Stimulus Intensity Statistics |
Authors | Travis Monk, Cristina Savin, Jörg Lücke |
Abstract | Experience constantly shapes neural circuits through a variety of plasticity mechanisms. While the functional roles of some plasticity mechanisms are well understood, it remains unclear how changes in neural excitability contribute to learning. Here, we develop a normative interpretation of intrinsic plasticity (IP) as a key component of unsupervised learning. We introduce a novel generative mixture model that accounts for the class-specific statistics of stimulus intensities, and we derive a neural circuit that learns the input classes and their intensities. We show analytically that inference and learning for our generative model can be achieved by a neural circuit with intensity-sensitive neurons equipped with a specific form of IP. Numerical experiments verify our analytical derivations and show robust behavior for artificial and natural stimuli. Our results link IP to non-trivial input statistics, in particular the statistics of stimulus intensities for classes to which a neuron is sensitive. More generally, our work paves the way toward new classification algorithms that are robust to intensity variations. |
Tasks | |
Published | 2016-12-01 |
URL | http://papers.nips.cc/paper/6582-neurons-equipped-with-intrinsic-plasticity-learn-stimulus-intensity-statistics |
http://papers.nips.cc/paper/6582-neurons-equipped-with-intrinsic-plasticity-learn-stimulus-intensity-statistics.pdf | |
PWC | https://paperswithcode.com/paper/neurons-equipped-with-intrinsic-plasticity |
Repo | |
Framework | |
CogALex-V Shared Task: CGSRC - Classifying Semantic Relations using Convolutional Neural Networks
Title | CogALex-V Shared Task: CGSRC - Classifying Semantic Relations using Convolutional Neural Networks |
Authors | Chinnappa Guggilla |
Abstract | In this paper, we describe a system (CGSRC) for classifying four semantic relations: synonym, hypernym, antonym and meronym using convolutional neural networks (CNN). We participated in the CogALex-V shared task on corpus-based identification of semantic relations. The proposed approach, using CNN-based deep neural networks leveraging pre-compiled word2vec distributional neural embeddings, achieved 43.15% weighted-F1 accuracy on subtask-1 (checking the existence of a relation between two terms) and 25.24% weighted-F1 accuracy on subtask-2 (classifying relation types). |
Tasks | Machine Translation, Paraphrase Generation, Question Answering, Relation Classification, Word Embeddings |
Published | 2016-12-01 |
URL | https://www.aclweb.org/anthology/W16-5314/ |
https://www.aclweb.org/anthology/W16-5314 | |
PWC | https://paperswithcode.com/paper/cogalex-v-shared-task-cgsrc-classifying |
Repo | |
Framework | |
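The abstract describes a CNN over pre-compiled word2vec embeddings for relation classification. The PyTorch sketch below shows a generic CNN text classifier of that kind, with convolutions of several widths, max-pooling over time, and a linear output layer; the vocabulary size, filter settings, and randomly initialised embeddings are assumptions, and this is not the CGSRC system itself.

```python
# Generic CNN sentence classifier over word embeddings (illustrative stand-in).
import torch
import torch.nn as nn

class RelationCNN(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=300, n_filters=100,
                 kernel_sizes=(2, 3, 4), n_classes=4):
        super().__init__()
        # In the described system these would be pre-compiled word2vec vectors.
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, k) for k in kernel_sizes])
        self.out = nn.Linear(n_filters * len(kernel_sizes), n_classes)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.emb(token_ids).transpose(1, 2)        # (batch, emb_dim, seq_len)
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.out(torch.cat(pooled, dim=1))      # (batch, n_classes)

model = RelationCNN()
dummy = torch.randint(0, 5000, (8, 12))                # batch of 8 padded sequences
print(model(dummy).shape)                              # torch.Size([8, 4])
```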
Semantic Relation Classification via Hierarchical Recurrent Neural Network with Attention
Title | Semantic Relation Classification via Hierarchical Recurrent Neural Network with Attention |
Authors | Minguang Xiao, Cong Liu |
Abstract | Semantic relation classification remains a challenge in natural language processing. In this paper, we introduce a hierarchical recurrent neural network that is capable of extracting information from raw sentences for relation classification. Our model has several distinctive features: (1) Each sentence is divided into three context subsequences according to the two annotated nominals, which allows the model to encode each context subsequence independently so as to selectively focus on the important context information; (2) The hierarchical model consists of two recurrent neural networks (RNNs): the first one learns context representations of the three context subsequences respectively, and the second one computes the semantic composition of these three representations and produces a sentence representation for the relationship classification of the two nominals; (3) The attention mechanism is adopted in both RNNs to encourage the model to concentrate on the important information when learning the sentence representations. Experimental results on the SemEval-2010 Task 8 dataset demonstrate that our model is comparable to the state-of-the-art without using any hand-crafted features. |
Tasks | Relation Classification, Semantic Composition |
Published | 2016-12-01 |
URL | https://www.aclweb.org/anthology/C16-1119/ |
https://www.aclweb.org/anthology/C16-1119 | |
PWC | https://paperswithcode.com/paper/semantic-relation-classification-via-3 |
Repo | |
Framework | |
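The model described above encodes three context subsequences with a word-level RNN plus attention and then composes them with a second RNN. The sketch below is a simplified stand-in for that hierarchy using GRUs and a soft attention pooling layer; the layer sizes and the 19-way output (the SemEval-2010 Task 8 label set) are assumptions about a plausible configuration, not the authors' exact architecture.

```python
# Simplified hierarchical RNN with attention for relation classification.
import torch
import torch.nn as nn

class AttnPool(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)
    def forward(self, h):                              # h: (batch, seq, dim)
        weights = torch.softmax(self.scorer(h), dim=1) # attention over time steps
        return (weights * h).sum(dim=1)                # (batch, dim)

class HierRelationRNN(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=100, hid=100, n_classes=19):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.word_rnn = nn.GRU(emb_dim, hid, batch_first=True)
        self.word_attn = AttnPool(hid)
        self.seg_rnn = nn.GRU(hid, hid, batch_first=True)
        self.seg_attn = AttnPool(hid)
        self.out = nn.Linear(hid, n_classes)           # 19 labels in SemEval-2010 Task 8

    def forward(self, left, middle, right):            # each: (batch, seq_len) token ids
        segs = []
        for part in (left, middle, right):             # encode each subsequence separately
            h, _ = self.word_rnn(self.emb(part))
            segs.append(self.word_attn(h))
        stacked = torch.stack(segs, dim=1)             # (batch, 3, hid)
        h, _ = self.seg_rnn(stacked)
        return self.out(self.seg_attn(h))

model = HierRelationRNN()
b = lambda n: torch.randint(0, 5000, (4, n))           # toy batches of token ids
print(model(b(6), b(3), b(7)).shape)                   # torch.Size([4, 19])
```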
Which argument is more convincing? Analyzing and predicting convincingness of Web arguments using bidirectional LSTM
Title | Which argument is more convincing? Analyzing and predicting convincingness of Web arguments using bidirectional LSTM |
Authors | Ivan Habernal, Iryna Gurevych |
Abstract | |
Tasks | Relation Classification |
Published | 2016-08-01 |
URL | https://www.aclweb.org/anthology/P16-1150/ |
https://www.aclweb.org/anthology/P16-1150 | |
PWC | https://paperswithcode.com/paper/which-argument-is-more-convincing-analyzing |
Repo | |
Framework | |
SemRelData ― Multilingual Contextual Annotation of Semantic Relations between Nominals: Dataset and Guidelines
Title | SemRelData ― Multilingual Contextual Annotation of Semantic Relations between Nominals: Dataset and Guidelines |
Authors | Darina Benikova, Chris Biemann |
Abstract | Semantic relations play an important role in linguistic knowledge representation. Although their role is relevant in the context of written text, there is no approach or dataset that makes use of contextuality of classic semantic relations beyond the boundary of one sentence. We present the SemRelData dataset that contains annotations of semantic relations between nominals in the context of one paragraph. To be able to analyse the universality of this context notion, the annotation was performed on a multi-lingual and multi-genre corpus. To evaluate the dataset, it is compared to large, manually created knowledge resources in the respective languages. The comparison shows that knowledge bases not only have coverage gaps; they also do not account for semantic relations that are manifested in particular contexts only, yet still play an important role for text cohesion. |
Tasks | |
Published | 2016-05-01 |
URL | https://www.aclweb.org/anthology/L16-1656/ |
https://www.aclweb.org/anthology/L16-1656 | |
PWC | https://paperswithcode.com/paper/semreldata-a-multilingual-contextual |
Repo | |
Framework | |
Representation and Learning of Temporal Relations
Title | Representation and Learning of Temporal Relations |
Authors | Leon Derczynski |
Abstract | Determining the relative order of events and times described in text is an important problem in natural language processing. It is also a difficult one: general state-of-the-art performance has been stuck at a relatively low ceiling for years. We investigate the representation of temporal relations, and empirically evaluate the effect that various temporal relation representations have on machine learning performance. While machine learning performance decreases with increased representational expressiveness, not all representation simplifications have equal impact. |
Tasks | |
Published | 2016-12-01 |
URL | https://www.aclweb.org/anthology/C16-1182/ |
https://www.aclweb.org/anthology/C16-1182 | |
PWC | https://paperswithcode.com/paper/representation-and-learning-of-temporal |
Repo | |
Framework | |