October 15, 2019

2537 words 12 mins read

Paper Group NANR 61

Bayesian Inference of Temporal Task Specifications from Demonstrations

Title Bayesian Inference of Temporal Task Specifications from Demonstrations
Authors Ankit Shah, Pritish Kamath, Julie A. Shah, Shen Li
Abstract When observing task demonstrations, human apprentices are able to identify whether a given task is executed correctly long before they gain expertise in actually performing that task. Prior research into learning from demonstrations (LfD) has failed to capture this notion of the acceptability of an execution; meanwhile, temporal logics provide a flexible language for expressing task specifications. Inspired by this, we present Bayesian specification inference, a probabilistic model for inferring task specification as a temporal logic formula. We incorporate methods from probabilistic programming to define our priors, along with a domain-independent likelihood function to enable sampling-based inference. We demonstrate the efficacy of our model for inferring true specifications with over 90% similarity between the inferred specification and the ground truth, both within a synthetic domain and a real-world table setting task.
Tasks Bayesian Inference, Probabilistic Programming
Published 2018-12-01
URL http://papers.nips.cc/paper/7637-bayesian-inference-of-temporal-task-specifications-from-demonstrations
PDF http://papers.nips.cc/paper/7637-bayesian-inference-of-temporal-task-specifications-from-demonstrations.pdf
PWC https://paperswithcode.com/paper/bayesian-inference-of-temporal-task
Repo
Framework
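The paper infers temporal-logic specifications from demonstrations via sampling-based Bayesian inference. As a heavily simplified illustration, the toy sketch below restricts the hypothesis space to single precedence constraints ("event a occurs before event b") over a table-setting-style task and does an exact Bayes update with a uniform prior; the events, the `p_sat` likelihood parameter, and the hypothesis class are all invented here and stand in for the paper's LTL templates and probabilistic-programming priors.

```python
def satisfies(demo, ordering):
    """Check whether a demonstration respects a precedence constraint (a, b): a before b."""
    a, b = ordering
    return demo.index(a) < demo.index(b)

def posterior(demos, hypotheses, p_sat=0.95):
    """Bayes update with a uniform prior: each demonstration that satisfies
    a candidate constraint contributes p_sat to its likelihood, else 1 - p_sat."""
    scores = {}
    for h in hypotheses:
        like = 1.0
        for d in demos:
            like *= p_sat if satisfies(d, h) else (1.0 - p_sat)
        scores[h] = like
    z = sum(scores.values())
    return {h: s / z for h, s in scores.items()}

# Two toy table-setting demonstrations: the plate is always placed first.
demos = [("plate", "fork", "napkin"), ("plate", "napkin", "fork")]
events = ["plate", "fork", "napkin"]
hypotheses = [(a, b) for a in events for b in events if a != b]
post = posterior(demos, hypotheses)
```

Constraints consistent with every demonstration (e.g. "plate before fork") receive the highest posterior mass; the paper replaces this exhaustive enumeration with sampling over a much richer formula space.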

Multi-Modal Sequence Fusion via Recursive Attention for Emotion Recognition

Title Multi-Modal Sequence Fusion via Recursive Attention for Emotion Recognition
Authors Rory Beard, Ritwik Das, Raymond W. M. Ng, P. G. Keerthana Gopalakrishnan, Luka Eerens, Pawel Swietojanski, Ondrej Miksik
Abstract Natural human communication is nuanced and inherently multi-modal. Humans possess specialised sensoria for processing vocal, visual, linguistic, and para-linguistic information, but form an intricately fused percept of the multi-modal data stream to provide a holistic representation. Analysis of emotional content in face-to-face communication is a cognitive task to which humans are particularly attuned, given its sociological importance, and poses a difficult challenge for machine emulation due to the subtlety and expressive variability of cross-modal cues. Inspired by the empirical success of recent so-called End-To-End Memory Networks and related works, we propose an approach based on recursive multi-attention with a shared external memory updated over multiple gated iterations of analysis. We evaluate our model across several large multi-modal datasets and show that global contextualised memory with gated memory update can effectively achieve emotion recognition.
Tasks Emotion Recognition
Published 2018-10-01
URL https://www.aclweb.org/anthology/K18-1025/
PDF https://www.aclweb.org/anthology/K18-1025
PWC https://paperswithcode.com/paper/multi-modal-sequence-fusion-via-recursive
Repo
Framework
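The core mechanism described in the abstract is a shared memory that attends over per-modality features and is updated through gated iterations. A minimal numeric sketch of one such hop, with made-up 2-d feature vectors and a fixed scalar gate (the paper learns the gate and operates on learned sequence representations):

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gated_memory_step(memory, modality_feats, gate=0.5):
    """One recursive-attention hop: attend over per-modality features using
    the current memory as the query, then gate the attended read into memory."""
    scores = softmax([dot(memory, f) for f in modality_feats])
    read = [sum(w * f[i] for w, f in zip(scores, modality_feats))
            for i in range(len(memory))]
    return [gate * m + (1 - gate) * r for m, r in zip(memory, read)]

# Toy vocal / visual / linguistic features (values invented for illustration).
audio, video, text = [1.0, 0.0], [0.0, 1.0], [0.5, 0.5]
memory = [1.0, 0.0]
for _ in range(3):  # multiple gated iterations of analysis
    memory = gated_memory_step(memory, [audio, video, text])
```

Because the memory starts aligned with the audio features, attention keeps weighting that modality most heavily while the gate blends in evidence from the others.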

Error annotation in a Learner Corpus of Portuguese

Title Error annotation in a Learner Corpus of Portuguese
Authors Iria del Río, Amália Mendes
Abstract
Tasks Language Acquisition
Published 2018-05-01
URL https://www.aclweb.org/anthology/L18-1649/
PDF https://www.aclweb.org/anthology/L18-1649
PWC https://paperswithcode.com/paper/error-annotation-in-a-learner-corpus-of
Repo
Framework

Prediction Models for Risk of Type-2 Diabetes Using Health Claims

Title Prediction Models for Risk of Type-2 Diabetes Using Health Claims
Authors Masatoshi Nagata, Kohichi Takai, Keiji Yasuda, Panikos Heracleous, Akio Yoneyama
Abstract This study focuses on highly accurate prediction of the onset of type-2 diabetes. We investigated whether prediction accuracy can be improved by utilizing lab test data obtained from health checkups and incorporating health claim text data such as medically diagnosed diseases with ICD10 codes and pharmacy information. In a previous study, prediction accuracy was increased slightly by adding diagnosed disease names and independent variables such as prescribed medicines. Therefore, in the current study we explored more suitable models for prediction by using state-of-the-art techniques such as XGBoost and long short-term memory (LSTM) based on recurrent neural networks. In the current study, text data was vectorized using word2vec, and the prediction model was compared with logistic regression. The results obtained confirmed that onset of type-2 diabetes can be predicted with a high degree of accuracy when the XGBoost model is used.
Tasks
Published 2018-07-01
URL https://www.aclweb.org/anthology/W18-2322/
PDF https://www.aclweb.org/anthology/W18-2322
PWC https://paperswithcode.com/paper/prediction-models-for-risk-of-type-2-diabetes
Repo
Framework
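The abstract's key input design is mixing numeric lab-test results with word2vec-vectorized claim text. A toy sketch of that feature construction, with hand-made 2-d "embeddings" standing in for trained word2vec vectors (the codes E11 and I10 are real ICD10 codes, but the vectors, dimensions, and lab values are invented here):

```python
# Toy embeddings standing in for word2vec vectors of claim-text tokens
# (ICD10 codes, drug names); the values below are made up for illustration.
code_vecs = {
    "E11": [0.9, 0.1],        # type-2 diabetes mellitus
    "I10": [0.4, 0.6],        # essential hypertension
    "metformin": [0.8, 0.3],  # prescribed medicine
}

def claim_features(codes, lab_values):
    """Average the embeddings of the claim-text tokens, then concatenate
    the numeric lab-test results, mirroring the paper's mixed input."""
    n = len(codes)
    avg = [sum(code_vecs[c][i] for c in codes) / n for i in range(2)]
    return avg + lab_values

# One patient's record: hypertension diagnosis + metformin prescription,
# plus HbA1c (%) and fasting glucose (mg/dL) from a health checkup.
x = claim_features(["I10", "metformin"], [5.9, 101.0])
```

The resulting vector `x` would then be fed to a classifier such as XGBoost or logistic regression, as in the study's comparison.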

Person Search by Multi-Scale Matching

Title Person Search by Multi-Scale Matching
Authors Xu Lan, Xiatian Zhu, Shaogang Gong
Abstract We consider the problem of person search in unconstrained scene images. Existing methods usually focus on improving the person detection accuracy to mitigate negative effects imposed by misalignment, mis-detections, and false alarms resulting from noisy automatic person detection. In contrast to previous studies, we show that sufficiently reliable person instance cropping is achievable by slightly improved state-of-the-art deep learning object detectors (e.g. Faster-RCNN), and the under-studied multi-scale matching problem in person search is a more severe barrier. In this work, we address this multi-scale person search challenge by proposing a Cross-Level Semantic Alignment (CLSA) deep learning approach capable of learning more discriminative identity feature representations in a unified end-to-end model. This is realised by exploiting the in-network feature pyramid structure of a deep neural network enhanced by a novel cross pyramid-level semantic alignment loss function. This favourably eliminates the need for constructing a computationally expensive image pyramid and a complex multi-branch network architecture. Extensive experiments show the modelling advantages and performance superiority of CLSA over the state-of-the-art person search and multi-scale matching methods on two large person search benchmarking datasets: CUHK-SYSU and PRW.
Tasks Human Detection, Person Search
Published 2018-09-01
URL http://openaccess.thecvf.com/content_ECCV_2018/html/Xu_Lan_Person_Search_by_ECCV_2018_paper.html
PDF http://openaccess.thecvf.com/content_ECCV_2018/papers/Xu_Lan_Person_Search_by_ECCV_2018_paper.pdf
PWC https://paperswithcode.com/paper/person-search-by-multi-scale-matching-1
Repo
Framework
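The cross pyramid-level semantic alignment idea can be illustrated with a small sketch: treat the top pyramid level's identity prediction as a teacher and penalise divergence of a lower level's prediction from it. Whether CLSA's exact loss matches this KL-divergence form is an assumption of this sketch; the logits below are invented.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_level_alignment_loss(low_logits, high_logits):
    """KL(high || low): push the lower pyramid level's identity prediction
    toward the semantically stronger top-level prediction."""
    p = softmax(high_logits)  # teacher: top pyramid level
    q = softmax(low_logits)   # student: lower pyramid level
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

loss_aligned = cross_level_alignment_loss([2.0, 0.1], [2.0, 0.1])
loss_mismatched = cross_level_alignment_loss([0.1, 2.0], [2.0, 0.1])
```

The loss is zero when the levels already agree and grows as the lower level's prediction drifts, so minimising it aligns semantics across scales without building an explicit image pyramid.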

Unsupervised Domain Adaptation for Semantic Segmentation via Class-Balanced Self-Training

Title Unsupervised Domain Adaptation for Semantic Segmentation via Class-Balanced Self-Training
Authors Yang Zou, Zhiding Yu, B.V.K. Vijaya Kumar, Jinsong Wang
Abstract Recent deep networks achieved state-of-the-art performance on a variety of semantic segmentation tasks. Despite such progress, these models often face challenges in real world “wild tasks” where large difference between labeled training/source data and unseen test/target data exists. In particular, such difference is often referred to as “domain gap”, and could cause significantly decreased performance which cannot be easily remedied by further increasing the representation power. Unsupervised domain adaptation (UDA) seeks to overcome such problem without target domain labels. In this paper, we propose a novel UDA framework based on an iterative self-training (ST) procedure, where the problem is formulated as latent variable loss minimization, and can be solved by alternatively generating pseudo labels on target data and re-training the model with these labels. On top of ST, we also propose a novel class-balanced self-training (CBST) framework to avoid the gradual dominance of large classes on pseudo-label generation, and introduce spatial priors to refine generated labels. Comprehensive experiments show that the proposed methods achieve state-of-the-art semantic segmentation performance under multiple major UDA settings.
Tasks Domain Adaptation, Semantic Segmentation, Unsupervised Domain Adaptation
Published 2018-09-01
URL http://openaccess.thecvf.com/content_ECCV_2018/html/Yang_Zou_Unsupervised_Domain_Adaptation_ECCV_2018_paper.html
PDF http://openaccess.thecvf.com/content_ECCV_2018/papers/Yang_Zou_Unsupervised_Domain_Adaptation_ECCV_2018_paper.pdf
PWC https://paperswithcode.com/paper/unsupervised-domain-adaptation-for-semantic
Repo
Framework
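The class-balanced selection step can be sketched concretely: instead of one global confidence threshold (which large classes dominate), keep roughly the same fraction of the most confident predictions per class. This is a simplification of CBST, with made-up probabilities and without the paper's spatial priors or iterative re-training.

```python
def class_balanced_pseudo_labels(probs, proportion=0.5):
    """Select pseudo-labels with a per-class confidence cutoff so roughly the
    same fraction of predictions survives for every class, preventing large
    classes from dominating pseudo-label generation (the CBST idea, simplified)."""
    by_class = {}
    for i, p in enumerate(probs):
        c = max(range(len(p)), key=lambda k: p[k])
        by_class.setdefault(c, []).append((p[c], i))
    labels = {}
    for c, items in by_class.items():
        items.sort(reverse=True)           # most confident first
        keep = max(1, int(len(items) * proportion))
        for conf, i in items[:keep]:
            labels[i] = c
    return labels  # {sample index: pseudo-label}

# Four target-domain predictions over two classes; class 0 is the "large" class.
probs = [[0.9, 0.1], [0.6, 0.4], [0.55, 0.45], [0.2, 0.8]]
labels = class_balanced_pseudo_labels(probs, proportion=0.5)
```

Only the single most confident class-0 prediction survives alongside the lone class-1 prediction, so re-training sees a balanced pseudo-label set.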

Er … well, it matters, right? On the role of data representations in spoken language dependency parsing

Title Er … well, it matters, right? On the role of data representations in spoken language dependency parsing
Authors Kaja Dobrovoljc, Matej Martinc
Abstract Despite the significant improvement of data-driven dependency parsing systems in recent years, they still achieve a considerably lower performance in parsing spoken language data in comparison to written data. On the example of Spoken Slovenian Treebank, the first spoken data treebank using the UD annotation scheme, we investigate which speech-specific phenomena undermine parsing performance, through a series of training data and treebank modification experiments using two distinct state-of-the-art parsing systems. Our results show that utterance segmentation is the most prominent cause of low parsing performance, both in parsing raw and pre-segmented transcriptions. In addition to shorter utterances, both parsers perform better on normalized transcriptions including basic markers of prosody and excluding disfluencies, discourse markers and fillers. On the other hand, the effects of written training data addition and speech-specific dependency representations largely depend on the parsing system selected.
Tasks Dependency Parsing, Language Modelling
Published 2018-11-01
URL https://www.aclweb.org/anthology/W18-6005/
PDF https://www.aclweb.org/anthology/W18-6005
PWC https://paperswithcode.com/paper/er-well-it-matters-right-on-the-role-of-data
Repo
Framework

Mind the Gap: Data Enrichment in Dependency Parsing of Elliptical Constructions

Title Mind the Gap: Data Enrichment in Dependency Parsing of Elliptical Constructions
Authors Kira Droganova, Filip Ginter, Jenna Kanerva, Daniel Zeman
Abstract In this paper, we focus on parsing rare and non-trivial constructions, in particular ellipsis. We report on several experiments in enrichment of training data for this specific construction, evaluated on five languages: Czech, English, Finnish, Russian and Slovak. These data enrichment methods draw upon self-training and tri-training, combined with a stratified sampling method mimicking the structural complexity of the original treebank. In addition, using these same methods, we also demonstrate small improvements over the CoNLL-17 parsing shared task winning system for four of the five languages, not only restricted to the elliptical constructions.
Tasks Dependency Parsing
Published 2018-11-01
URL https://www.aclweb.org/anthology/W18-6006/
PDF https://www.aclweb.org/anthology/W18-6006
PWC https://paperswithcode.com/paper/mind-the-gap-data-enrichment-in-dependency
Repo
Framework
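The abstract mentions stratified sampling that mimics the structural complexity of the original treebank. As a rough stand-in, the sketch below stratifies an auto-parsed sentence pool by sentence length so the sample follows the treebank's length distribution; the paper's complexity measure is richer than length, and all data here is invented.

```python
import random
from collections import Counter

def stratified_sample(pool, treebank, n, seed=0):
    """Draw n auto-parsed sentences from `pool` so that sentence-length
    strata follow the original treebank's distribution (a rough stand-in
    for the paper's structural-complexity stratification)."""
    rng = random.Random(seed)
    dist = Counter(len(s) for s in treebank)
    total = sum(dist.values())
    sample = []
    for length, count in dist.items():
        bucket = [s for s in pool if len(s) == length]
        k = min(len(bucket), round(n * count / total))
        sample.extend(rng.sample(bucket, k))
    return sample

# Toy treebank: one 1-token and two 2-token sentences.
treebank = [["a"], ["b", "c"], ["d", "e"]]
pool = [["x"], ["y"], ["p", "q"], ["r", "s"], ["t", "u"]]
sample = stratified_sample(pool, treebank, 3)
```

The sample reproduces the treebank's 1:2 ratio of short to long sentences, which the paper combines with self-training and tri-training to enrich the data for elliptical constructions.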

Investigating NP-Chunking with Universal Dependencies for English

Title Investigating NP-Chunking with Universal Dependencies for English
Authors Ophélie Lacroix
Abstract Chunking is a pre-processing task generally dedicated to improving constituency parsing. In this paper, we want to show that universal dependency (UD) parsing can also leverage the information provided by the task of chunking even though annotated chunks are not provided with universal dependency trees. In particular, we introduce the possibility of deducing noun-phrase (NP) chunks from universal dependencies, focusing on English as a first example. We then demonstrate how the task of NP-chunking can benefit PoS-tagging in a multi-task learning setting, comparing two different strategies, and how it can be used as a feature for dependency parsing in order to learn enriched models.
Tasks Chunking, Constituency Parsing, Dependency Parsing, Multi-Task Learning, Part-Of-Speech Tagging
Published 2018-11-01
URL https://www.aclweb.org/anthology/W18-6010/
PDF https://www.aclweb.org/anthology/W18-6010
PWC https://paperswithcode.com/paper/investigating-np-chunking-with-universal
Repo
Framework
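Deducing NP chunks from a UD tree can be sketched with a simple rule: each nominal head plus its determiner/modifier dependents forms one chunk. The paper's deduction rules are more elaborate; the relation set and the hand-made example annotation below are simplifications.

```python
def np_chunks(tokens, heads, deprels, upos):
    """Deduce noun-phrase chunks from a UD tree: each NOUN/PROPN/PRON head
    plus its det/amod/compound/nummod dependents forms one chunk.
    `heads` are 1-based (0 = root), as in the CoNLL-U format."""
    inside = {"det", "amod", "compound", "nummod"}
    chunks = []
    for i, pos in enumerate(upos):
        if pos not in {"NOUN", "PROPN", "PRON"}:
            continue
        members = [i] + [j for j, (h, rel) in enumerate(zip(heads, deprels))
                         if h == i + 1 and rel in inside]
        members.sort()
        chunks.append([tokens[j] for j in members])
    return chunks

# "the quick fox jumped" — toy hand-made CoNLL-U style annotation.
tokens  = ["the", "quick", "fox", "jumped"]
heads   = [3, 3, 4, 0]
deprels = ["det", "amod", "nsubj", "root"]
upos    = ["DET", "ADJ", "NOUN", "VERB"]
```

Here `np_chunks` recovers the single NP "the quick fox"; such deduced chunks are what the paper feeds into multi-task PoS-tagging and uses as parsing features.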

The Hebrew Universal Dependency Treebank: Past Present and Future

Title The Hebrew Universal Dependency Treebank: Past Present and Future
Authors Shoval Sade, Amit Seker, Reut Tsarfaty
Abstract The Hebrew treebank (HTB), consisting of 6221 morpho-syntactically annotated newspaper sentences, has been the only resource for training and validating statistical parsers and taggers for Hebrew, for almost two decades now. During these decades, the HTB has gone through a trajectory of automatic and semi-automatic conversions, until arriving at its UDv2 form. In this work we manually validate the UDv2 version of the HTB, and, according to our findings, we apply scheme changes that bring the UD HTB to the same theoretical grounds as the rest of UD. Our experimental parsing results with UDv2New confirm that improving the coherence and internal consistency of the UD HTB indeed leads to improved parsing performance. At the same time, our analysis demonstrates that there is more to be done at the point of intersection of UD with other linguistic processing layers, in particular, at the points where UD interfaces external morphological and lexical resources.
Tasks Dependency Parsing
Published 2018-11-01
URL https://www.aclweb.org/anthology/W18-6016/
PDF https://www.aclweb.org/anthology/W18-6016
PWC https://paperswithcode.com/paper/the-hebrew-universal-dependency-treebank-past
Repo
Framework

Toward Universal Dependencies for Shipibo-Konibo

Title Toward Universal Dependencies for Shipibo-Konibo
Authors Alonso Vasquez, Renzo Ego Aguirre, Candy Angulo, John Miller, Claudia Villanueva, Željko Agić, Roberto Zariquiey, Arturo Oncevay
Abstract We present an initial version of the Universal Dependencies (UD) treebank for Shipibo-Konibo, the first South American, Amazonian, Panoan and Peruvian language with a resource built under UD. We describe the linguistic aspects of how the tagset was defined and the treebank was annotated; in addition we present our specific treatment of linguistic units called clitics. Although the treebank is still under development, it allowed us to perform a typological comparison against Spanish, the predominant language in Peru, and dependency syntax parsing experiments in both monolingual and cross-lingual approaches.
Tasks Dependency Parsing, Machine Translation
Published 2018-11-01
URL https://www.aclweb.org/anthology/W18-6018/
PDF https://www.aclweb.org/anthology/W18-6018
PWC https://paperswithcode.com/paper/toward-universal-dependencies-for-shipibo
Repo
Framework

Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science

Title Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science
Authors Emily M. Bender, Batya Friedman
Abstract In this paper, we propose data statements as a design solution and professional practice for natural language processing technologists, in both research and development. Through the adoption and widespread use of data statements, the field can begin to address critical scientific and ethical issues that result from the use of data from certain populations in the development of technology for other populations. We present a form that data statements can take and explore the implications of adopting them as part of regular practice. We argue that data statements will help alleviate issues related to exclusion and bias in language technology, lead to better precision in claims about how natural language processing research can generalize and thus better engineering results, protect companies from public embarrassment, and ultimately lead to language technology that meets its users in their own preferred linguistic style and furthermore does not misrepresent them to others.
Tasks
Published 2018-01-01
URL https://www.aclweb.org/anthology/Q18-1041/
PDF https://www.aclweb.org/anthology/Q18-1041
PWC https://paperswithcode.com/paper/data-statements-for-natural-language
Repo
Framework

Preposition Sense Disambiguation and Representation

Title Preposition Sense Disambiguation and Representation
Authors Hongyu Gong, Jiaqi Mu, Suma Bhat, Pramod Viswanath
Abstract Prepositions are highly polysemous, and their variegated senses encode significant semantic information. In this paper we match each preposition’s left- and right context, and their interplay to the geometry of the word vectors to the left and right of the preposition. Extracting these features from a large corpus and using them with machine learning models makes for an efficient preposition sense disambiguation (PSD) algorithm, which is comparable to and better than state-of-the-art on two benchmark datasets. Our reliance on no linguistic tool allows us to scale the PSD algorithm to a large corpus and learn sense-specific preposition representations. The crucial abstraction of preposition senses as word representations permits their use in downstream applications (phrasal verb paraphrasing and preposition selection) with new state-of-the-art results.
Tasks
Published 2018-10-01
URL https://www.aclweb.org/anthology/D18-1180/
PDF https://www.aclweb.org/anthology/D18-1180
PWC https://paperswithcode.com/paper/preposition-sense-disambiguation-and
Repo
Framework

Discourse Coherence: Concurrent Explicit and Implicit Relations

Title Discourse Coherence: Concurrent Explicit and Implicit Relations
Authors Hannah Rohde, Alexander Johnson, Nathan Schneider, Bonnie Webber
Abstract Theories of discourse coherence posit relations between discourse segments as a key feature of coherent text. Our prior work suggests that multiple discourse relations can be simultaneously operative between two segments for reasons not predicted by the literature. Here we test how this joint presence can lead participants to endorse seemingly divergent conjunctions (e.g., BUT and SO) to express the link they see between two segments. These apparent divergences are not symptomatic of participant naivety or bias, but arise reliably from the concurrent availability of multiple relations between segments, some available through explicit signals and some via inference. We believe that these new results can both inform future progress in theoretical work on discourse coherence and lead to higher levels of performance in discourse parsing.
Tasks
Published 2018-07-01
URL https://www.aclweb.org/anthology/P18-1210/
PDF https://www.aclweb.org/anthology/P18-1210
PWC https://paperswithcode.com/paper/discourse-coherence-concurrent-explicit-and
Repo
Framework

Deep Learning Inferences with Hybrid Homomorphic Encryption

Title Deep Learning Inferences with Hybrid Homomorphic Encryption
Authors Anthony Meehan, Ryan K L Ko, Geoff Holmes
Abstract When deep learning is applied to sensitive data sets, many privacy-related implementation issues arise. These issues are especially evident in the healthcare, finance, law and government industries. Homomorphic encryption could allow a server to make inferences on inputs encrypted by a client, but to our best knowledge, there has been no complete implementation of common deep learning operations, for arbitrary model depths, using homomorphic encryption. This paper demonstrates a novel approach, efficiently implementing many deep learning functions with bootstrapped homomorphic encryption. As part of our implementation, we demonstrate Single and Multi-Layer Neural Networks, for the Wisconsin Breast Cancer dataset, as well as a Convolutional Neural Network for MNIST. Our results give promising directions for privacy-preserving representation learning, and the return of data control to users.
Tasks Representation Learning
Published 2018-01-01
URL https://openreview.net/forum?id=ByCPHrgCW
PDF https://openreview.net/pdf?id=ByCPHrgCW
PWC https://paperswithcode.com/paper/deep-learning-inferences-with-hybrid
Repo
Framework
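The core idea of homomorphic inference — a server computing on inputs it cannot read — can be demonstrated with a toy additively homomorphic scheme. This one-time-pad construction is emphatically not the bootstrapped homomorphic encryption the paper implements (it supports only integer-weighted sums and requires the client to track key material), but it shows why an encrypted linear layer is possible at all.

```python
import random

M = 2**31 - 1  # modulus shared by client and server (toy parameter)

def encrypt(x, key):
    return (x + key) % M

def decrypt(c, key):
    return (c - key) % M

def server_linear_layer(cipher_inputs, weights):
    """The server computes an integer-weighted sum directly on ciphertexts.
    Under this additive scheme, sum(w_i * Enc(x_i, k_i)) decrypts with the
    combined key sum(w_i * k_i) -- the server never sees the plaintexts."""
    return sum(w * c for w, c in zip(weights, cipher_inputs)) % M

rng = random.Random(42)
inputs, weights = [3, 7, 1], [2, 5, 4]          # client data, server model
keys = [rng.randrange(M) for _ in inputs]       # client-side secret keys

ciphertexts = [encrypt(x, k) for x, k in zip(inputs, keys)]
result_cipher = server_linear_layer(ciphertexts, weights)
combined_key = sum(w * k for w, k in zip(weights, keys)) % M
result = decrypt(result_cipher, combined_key)   # 2*3 + 5*7 + 4*1 = 45
```

Real schemes like the bootstrapped encryption the paper uses additionally support the non-linear activations needed for arbitrary model depths, which is what this toy cannot do.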