January 25, 2020

2458 words 12 mins read

Paper Group NANR 58

Proceedings of the Fifth International Workshop on Computational Linguistics for Uralic Languages. Feature-Level Frankenstein: Eliminating Variations for Discriminative Recognition. ACMM: Aligned Cross-Modal Memory for Few-Shot Image and Sentence Matching. YNU_DYX at SemEval-2019 Task 5: A Stacked BiGRU Model Based on Capsule Network in Detection …

Proceedings of the Fifth International Workshop on Computational Linguistics for Uralic Languages

Title Proceedings of the Fifth International Workshop on Computational Linguistics for Uralic Languages
Authors
Abstract
Tasks
Published 2019-01-01
URL https://www.aclweb.org/anthology/W19-0300/
PDF https://www.aclweb.org/anthology/W19-0300
PWC https://paperswithcode.com/paper/proceedings-of-the-fifth-international-1
Repo
Framework

Feature-Level Frankenstein: Eliminating Variations for Discriminative Recognition

Title Feature-Level Frankenstein: Eliminating Variations for Discriminative Recognition
Authors Xiaofeng Liu, Site Li, Lingsheng Kong, Wanqing Xie, Ping Jia, Jane You, B.V.K. Kumar
Abstract Recent successes of deep learning-based recognition rely on maintaining the content related to the main-task label. However, how to explicitly dispel noisy signals for better generalization remains an open issue. We systematically summarize the detrimental factors as task-relevant/irrelevant semantic variations and unspecified latent variation. In this paper, we cast these problems as an adversarial minimax game in the latent space. Specifically, we propose equipping an end-to-end conditional adversarial network with the ability to decompose an input sample into three complementary parts. The discriminative representation inherits the desired invariance property guided by prior knowledge of the task, and is marginally independent of the task-relevant/irrelevant semantic and latent variations. Our proposed framework achieves top performance on a series of tasks, including digit recognition; lighting-, makeup-, and disguise-tolerant face recognition; and facial attribute recognition.
Tasks Face Recognition
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Liu_Feature-Level_Frankenstein_Eliminating_Variations_for_Discriminative_Recognition_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Liu_Feature-Level_Frankenstein_Eliminating_Variations_for_Discriminative_Recognition_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/feature-level-frankenstein-eliminating
Repo
Framework
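
The abstract describes an adversarial minimax game that strips unwanted variation out of the learned representation. A common mechanism for this kind of invariance (not necessarily the authors' exact architecture) is a gradient-reversal layer; here is a hedged PyTorch sketch with illustrative module names and sizes.

```python
# Hedged sketch: adversarial invariance via a gradient-reversal layer.
# This is a generic mechanism for dispelling unwanted variation, not
# necessarily the paper's exact architecture; sizes are illustrative.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the gradient so the encoder learns to *fool* the adversary.
        return -grad_output

encoder = nn.Linear(128, 64)       # yields the discriminative representation
task_head = nn.Linear(64, 10)      # main-task classifier, trained to succeed
variation_head = nn.Linear(64, 5)  # adversary predicting a semantic variation

x = torch.randn(8, 128)
z = encoder(x)
task_logits = task_head(z)
variation_logits = variation_head(GradReverse.apply(z))  # adversary branch
```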

ACMM: Aligned Cross-Modal Memory for Few-Shot Image and Sentence Matching

Title ACMM: Aligned Cross-Modal Memory for Few-Shot Image and Sentence Matching
Authors Yan Huang, Liang Wang
Abstract Image and sentence matching has drawn much attention recently, but due to the lack of sufficient pairwise training data, most previous methods still cannot properly associate challenging pairs of images and sentences containing rarely appearing regions and words, i.e., few-shot content. In this work, we study this challenging scenario as few-shot image and sentence matching, and accordingly propose an Aligned Cross-Modal Memory (ACMM) model to memorize the rarely appearing content. Given an image-sentence pair, the model first uses an aligned memory controller network to produce two sets of semantically comparable interface vectors through cross-modal alignment. Then the interface vectors are used by modality-specific read and update operations to alternately interact with shared memory items. The memory items persistently memorize cross-modal shared semantic representations, which can be read out to better enhance the representation of few-shot content. We apply the proposed model to both conventional and few-shot image and sentence matching tasks, and demonstrate its effectiveness by achieving state-of-the-art performance on two benchmark datasets.
Tasks
Published 2019-10-01
URL http://openaccess.thecvf.com/content_ICCV_2019/html/Huang_ACMM_Aligned_Cross-Modal_Memory_for_Few-Shot_Image_and_Sentence_Matching_ICCV_2019_paper.html
PDF http://openaccess.thecvf.com/content_ICCV_2019/papers/Huang_ACMM_Aligned_Cross-Modal_Memory_for_Few-Shot_Image_and_Sentence_Matching_ICCV_2019_paper.pdf
PWC https://paperswithcode.com/paper/acmm-aligned-cross-modal-memory-for-few-shot
Repo
Framework
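
The read operation over a shared memory addressed by interface vectors is the core mechanism here. The sketch below shows a generic attention-based memory read in PyTorch; slot count, dimensionality, and names are assumptions rather than the paper's implementation.

```python
# Hedged sketch of an attention-based memory read, the kind of operation
# ACMM's modality-specific reads perform; all sizes are assumptions.
import torch
import torch.nn.functional as F

def memory_read(interface, memory):
    """interface: (batch, d) query vectors; memory: (slots, d) shared items."""
    scores = interface @ memory.t()      # (batch, slots) similarity scores
    weights = F.softmax(scores, dim=-1)  # soft addressing over memory slots
    return weights @ memory              # (batch, d) retrieved content

memory = torch.randn(32, 256)                # persistent shared memory items
image_vecs = torch.randn(4, 256)             # interface vectors, image side
retrieved = memory_read(image_vecs, memory)  # enhances few-shot content
```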

YNU_DYX at SemEval-2019 Task 5: A Stacked BiGRU Model Based on Capsule Network in Detection of Hate

Title YNU_DYX at SemEval-2019 Task 5: A Stacked BiGRU Model Based on Capsule Network in Detection of Hate
Authors Yunxia Ding, Xiaobing Zhou, Xuejie Zhang
Abstract This paper describes our system designed for SemEval 2019 Task 5, "Shared Task on Multilingual Detection of Hate". We only participate in subtask A in English. To address this task, we present a stacked BiGRU model based on a capsule network system. In order to convert the tweets into corresponding vector representations and input them into the neural network, we use the fastText tools to get word representations. Then, the sentence representation is enriched by stacked Bidirectional Gated Recurrent Units (BiGRUs) and used as the input of the capsule network. Our system achieves an average F1-score of 0.546 and ranks 3rd on subtask A in English.
Tasks
Published 2019-06-01
URL https://www.aclweb.org/anthology/S19-2096/
PDF https://www.aclweb.org/anthology/S19-2096
PWC https://paperswithcode.com/paper/ynu_dyx-at-semeval-2019-task-5-a-stacked
Repo
Framework
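
As a rough illustration of the encoder described above, here is a minimal stacked BiGRU classifier in PyTorch. The capsule network layer is omitted and every hyperparameter (vocabulary size, hidden size, number of layers) is an assumption.

```python
# Minimal stacked BiGRU classifier; the embedding layer stands in for
# pre-trained fastText vectors and the capsule layer is omitted.
import torch
import torch.nn as nn

emb = nn.Embedding(20000, 300)                    # placeholder for fastText
bigru = nn.GRU(input_size=300, hidden_size=128,
               num_layers=2,                      # "stacked" BiGRUs
               bidirectional=True, batch_first=True)
classifier = nn.Linear(2 * 128, 2)                # hateful / not hateful

tweets = torch.randint(0, 20000, (8, 40))         # 8 tweets, 40 tokens each
states, _ = bigru(emb(tweets))                    # (8, 40, 256) states
logits = classifier(states[:, -1])                # classify from final state
```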

DiaHClust: an Iterative Hierarchical Clustering Approach for Identifying Stages in Language Change

Title DiaHClust: an Iterative Hierarchical Clustering Approach for Identifying Stages in Language Change
Authors Christin Schätzle, Hannah Booth
Abstract Language change is often assessed against a set of pre-determined time periods in order to be able to trace its diachronic trajectory. This is problematic, since a pre-determined periodization might obscure significant developments and lead to false assumptions about the data. Moreover, these time periods can be based on factors which are either arbitrary or non-linguistic, e.g., dividing the corpus data into equidistant stages or taking into account language-external events. Addressing this problem, in this paper we present a data-driven approach to periodization: 'DiaHClust'. DiaHClust is based on iterative hierarchical clustering and offers a multi-layered perspective on change from text-level to broader time periods. We demonstrate the usefulness of DiaHClust via a case study investigating syntactic change in Icelandic, modelling the syntactic system of the language in terms of vectors of syntactic change.
Tasks
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-4716/
PDF https://www.aclweb.org/anthology/W19-4716
PWC https://paperswithcode.com/paper/diahclust-an-iterative-hierarchical
Repo
Framework
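
DiaHClust itself iterates hierarchical clustering; the sketch below shows only the underlying agglomerative step on per-text vectors of syntactic features, using SciPy. The feature dimensions and the cut level are illustrative, not taken from the paper.

```python
# Only the agglomerative step DiaHClust iterates on, applied to per-text
# feature vectors; dimensions and the number of periods are made up.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

texts = np.random.rand(50, 12)                   # 50 texts x 12 features
Z = linkage(texts, method="ward")                # build the dendrogram
stages = fcluster(Z, t=4, criterion="maxclust")  # cut into 4 candidate periods
print(stages)                                    # one cluster label per text
```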

OCR Quality and NLP Preprocessing

Title OCR Quality and NLP Preprocessing
Authors Margot Mieskes, Stefan Schmunk
Abstract We present initial experiments to evaluate the performance of tasks such as part-of-speech tagging on data corrupted by Optical Character Recognition (OCR). Our results, based on English and German data from artificial experiments as well as initial real OCRed data, indicate that even a small drop in OCR quality considerably increases error rates, which would have a significant impact on subsequent processing steps.
Tasks Optical Character Recognition, Part-Of-Speech Tagging
Published 2019-08-01
URL https://www.aclweb.org/anthology/papers/W/W19/W19-3633/
PDF https://www.aclweb.org/anthology/W19-3633
PWC https://paperswithcode.com/paper/ocr-quality-and-nlp-preprocessing
Repo
Framework
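
A toy analogue of the paper's artificial experiments: inject OCR-style character confusions into clean text at a controlled error rate, then measure how a downstream tagger degrades. The confusion table below is illustrative, not the one used in the paper.

```python
# Corrupt text with OCR-style substitutions at a chosen error rate; the
# output can be fed to any tagger to measure degradation.
import random

OCR_CONFUSIONS = {"e": "c", "l": "1", "o": "0", "s": "5"}  # illustrative

def corrupt(text, error_rate=0.02, seed=0):
    rng = random.Random(seed)
    out = []
    for ch in text:
        if ch in OCR_CONFUSIONS and rng.random() < error_rate:
            out.append(OCR_CONFUSIONS[ch])
        else:
            out.append(ch)
    return "".join(out)

print(corrupt("the long document collection", error_rate=0.3))
```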

Functional Bayesian Neural Networks for Model Uncertainty Quantification

Title Functional Bayesian Neural Networks for Model Uncertainty Quantification
Authors Nanyang Ye, Zhanxing Zhu
Abstract In this paper, we extend Bayesian neural networks to functional Bayesian neural networks with functional Monte Carlo methods that use samples of functionals instead of samples of the networks' parameters for inference, to overcome the curse of dimensionality in uncertainty quantification. Based on previous work on Riemannian Langevin dynamics, we propose stochastic gradient functional Riemannian dynamics for training functional Bayesian neural networks. We show the effectiveness and efficiency of our proposed approach with various experiments.
Tasks
Published 2019-05-01
URL https://openreview.net/forum?id=SJxFN3RcFX
PDF https://openreview.net/pdf?id=SJxFN3RcFX
PWC https://paperswithcode.com/paper/functional-bayesian-neural-networks-for-model
Repo
Framework
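
The paper's sampler builds on Riemannian Langevin dynamics; for intuition, here is only the plain stochastic gradient Langevin step on parameters in NumPy. The functional and Riemannian parts of the method are not shown.

```python
# Plain stochastic gradient Langevin step, for intuition only:
#   theta <- theta - eta * grad_U(theta) + sqrt(2 * eta) * noise
import numpy as np

rng = np.random.default_rng(0)

def sgld_step(theta, grad_u, eta=1e-3):
    noise = rng.normal(size=theta.shape)
    return theta - eta * grad_u(theta) + np.sqrt(2 * eta) * noise

grad_u = lambda th: th        # gradient of U(theta) = ||theta||^2 / 2
theta = np.ones(5)
for _ in range(1000):         # iterates approximately sample a unit Gaussian
    theta = sgld_step(theta, grad_u)
```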

On the Relationship between Neural Machine Translation and Word Alignment

Title On the Relationship between Neural Machine Translation and Word Alignment
Authors Xintong Li, Lemao Liu, Guanlin Li, Max Meng, Shuming Shi
Abstract Prior research suggests that attentional neural machine translation (NMT) captures word alignment through attention; however, to our surprise, this almost fails for NMT models with multiple attentional layers, working only for those with a single layer. This paper introduces two methods to induce word alignment from general neural machine translation models. Experiments verify that both methods obtain much better word alignment than attention does. Furthermore, based on one of the proposed methods, we design a criterion to divide target words into two categories (those mostly contributed from source words, "CFS", and those mostly contributed from target words, "CFT"), and analyze word alignment under these two categories in depth. We find that although NMT models struggle to capture word alignment for CFT words, these words do not sacrifice translation quality significantly, which provides an explanation for why NMT is more successful at translation yet worse at word alignment compared to statistical machine translation. We further demonstrate that word alignment errors for CFS words are responsible, to some extent, for translation errors, by measuring the correlation between word alignment and translation for several NMT systems.
Tasks Machine Translation, Word Alignment
Published 2019-01-01
URL https://openreview.net/forum?id=S1eEdj0cK7
PDF https://openreview.net/pdf?id=S1eEdj0cK7
PWC https://paperswithcode.com/paper/on-the-relationship-between-neural-machine
Repo
Framework
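
One simple way to induce alignment without relying on attention, in the spirit of (though not necessarily identical to) the paper's proposals, is prediction difference: align each target word to the source word whose deletion most lowers its probability. The `model.target_logprob` interface below is hypothetical, not a real library call.

```python
# Hypothetical sketch of alignment by prediction difference.
# `model.target_logprob(src, tgt, j)` is an assumed scoring interface
# returning the log-probability of target word j given the source.
def induce_alignment(model, src_tokens, tgt_tokens):
    links = []
    for j in range(len(tgt_tokens)):
        base = model.target_logprob(src_tokens, tgt_tokens, j)
        drops = []
        for i in range(len(src_tokens)):
            reduced = src_tokens[:i] + src_tokens[i + 1:]  # drop source word i
            drops.append(base - model.target_logprob(reduced, tgt_tokens, j))
        links.append((drops.index(max(drops)), j))  # (source idx, target idx)
    return links
```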

Semi-Supervised Teacher-Student Architecture for Relation Extraction

Title Semi-Supervised Teacher-Student Architecture for Relation Extraction
Authors Fan Luo, Ajay Nagesh, Rebecca Sharp, Mihai Surdeanu
Abstract Generating a large amount of training data for information extraction (IE) is either costly (if annotations are created manually) or runs the risk of introducing noisy instances (if distant supervision is used). On the other hand, semi-supervised learning (SSL) is a cost-efficient solution to combat the lack of training data. In this paper, we adapt Mean Teacher (Tarvainen and Valpola, 2017), a denoising SSL framework, to extract semantic relations between pairs of entities. We explore the sweet spot of the amount of supervision required for good performance on this binary relation extraction task. Additionally, different syntax representations are incorporated into our models to enhance the learned representation of sentences. We evaluate our approach on the Google-IISc Distant Supervision (GDS) dataset, which removes the test data noise present in all previous distant supervision datasets, making it a reliable evaluation benchmark (Jat et al., 2017). Our results show that the SSL Mean Teacher approach nears the performance of fully-supervised approaches even with only 10% of the labeled corpus. Further, the syntax-aware model outperforms other syntax-free approaches across all levels of supervision.
Tasks Denoising, Relation Extraction
Published 2019-06-01
URL https://www.aclweb.org/anthology/W19-1505/
PDF https://www.aclweb.org/anthology/W19-1505
PWC https://paperswithcode.com/paper/semi-supervised-teacher-student-architecture
Repo
Framework
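
The heart of Mean Teacher (Tarvainen and Valpola, 2017) is keeping a teacher network whose weights are an exponential moving average of the student's. A minimal PyTorch sketch, with an illustrative model and decay value:

```python
# Mean Teacher's core update: teacher weights are an exponential moving
# average (EMA) of the student's weights.
import torch.nn as nn

def update_teacher(student: nn.Module, teacher: nn.Module, alpha=0.99):
    for s_p, t_p in zip(student.parameters(), teacher.parameters()):
        t_p.data.mul_(alpha).add_(s_p.data, alpha=1 - alpha)

student = nn.Linear(10, 2)
teacher = nn.Linear(10, 2)
teacher.load_state_dict(student.state_dict())  # start from identical weights
update_teacher(student, teacher)               # call after each optimizer step
```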

English to Hindi Multi-modal Neural Machine Translation and Hindi Image Captioning

Title English to Hindi Multi-modal Neural Machine Translation and Hindi Image Captioning
Authors Sahinur Rahman Laskar, Rohit Pratap Singh, Partha Pakray, Sivaji Bandyopadhyay
Abstract With the widespread use of Machine Translation (MT) techniques, we attempt to minimize the communication gap among people from diverse linguistic backgrounds. We have participated in the Workshop on Asian Translation 2019 (WAT2019) multi-modal translation task. There are three submission tracks, namely multi-modal translation, Hindi-only image captioning, and text-only translation for English to Hindi. The main challenge is to provide a precise MT output. The multi-modal concept incorporates textual and visual features in the translation task. In this work, the multi-modal translation track relies on a pre-trained convolutional neural network (CNN), the 19-layer Visual Geometry Group network (VGG19), to extract image features, and on an attention-based Neural Machine Translation (NMT) system for translation. A merge-model of a recurrent neural network (RNN) and a CNN is used for the Hindi-only image captioning. The text-only translation track is based on the transformer model of the NMT system. The official results evaluated at the WAT2019 translation task show that our multi-modal NMT system achieved a Bilingual Evaluation Understudy (BLEU) score of 20.37, a Rank-based Intuitive Bilingual Evaluation Score (RIBES) of 0.642838, and an Adequacy-Fluency Metrics (AMFM) score of 0.668260 on the challenge test data, and a BLEU score of 40.55, RIBES of 0.760080, and an AMFM score of 0.770860 on the evaluation test data for English to Hindi multi-modal translation.
Tasks Image Captioning, Machine Translation
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-5205/
PDF https://www.aclweb.org/anthology/D19-5205
PWC https://paperswithcode.com/paper/english-to-hindi-multi-modal-neural-machine
Repo
Framework
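
On the image side, the system uses a pre-trained VGG19 as the feature extractor. The snippet below shows how that extraction step might look with recent torchvision; preprocessing and the attention-based NMT decoder are omitted.

```python
# Sketch of the image-feature side: a frozen, pre-trained VGG19
# convolutional stack as extractor (recent torchvision API).
import torch
from torchvision import models

vgg19 = models.vgg19(weights=models.VGG19_Weights.DEFAULT)
vgg19.eval()

image = torch.randn(1, 3, 224, 224)   # stands in for a preprocessed image
with torch.no_grad():
    feat = vgg19.features(image)      # (1, 512, 7, 7) spatial feature maps
print(feat.shape)                     # these feed the attention-based NMT
```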

De-Identification of Emails: Pseudonymizing Privacy-Sensitive Data in a German Email Corpus

Title De-Identification of Emails: Pseudonymizing Privacy-Sensitive Data in a German Email Corpus
Authors Elisabeth Eder, Ulrike Krieg-Holz, Udo Hahn
Abstract We deal with the pseudonymization of those stretches of text in emails that might allow real individuals to be identified. This task is decomposed into two steps. First, named entities carrying privacy-sensitive information (e.g., names of persons, locations, phone numbers or dates) are identified, and, second, these privacy-bearing entities are replaced by synthetically generated surrogates (e.g., a person originally named 'John Doe' is renamed as 'Bill Powers'). We describe a system architecture for surrogate generation and evaluate our approach on CodeAlltag, a German email corpus.
Tasks
Published 2019-09-01
URL https://www.aclweb.org/anthology/R19-1030/
PDF https://www.aclweb.org/anthology/R19-1030
PWC https://paperswithcode.com/paper/de-identification-of-emails-pseudonymizing
Repo
Framework
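
The two-step pipeline (detect privacy-bearing entities, then substitute surrogates) can be illustrated in a few lines. The detected entities and surrogate table below are hard-coded stand-ins for the paper's trained recognizer and synthetic surrogate generator.

```python
# Two-step pseudonymization in miniature: step 1 (detection) is faked
# with a fixed list; step 2 substitutes surrogates for detected spans.
DETECTED = [("John Doe", "PER"), ("Berlin", "LOC")]       # stand-in NER output
SURROGATES = {"PER": "Bill Powers", "LOC": "Springfield"}  # stand-in generator

def pseudonymize(text, entities):
    for span, label in entities:
        text = text.replace(span, SURROGATES[label])  # surrogate substitution
    return text

print(pseudonymize("John Doe wrote from Berlin.", DETECTED))
# -> "Bill Powers wrote from Springfield."
```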

Leveraging backtranslation to improve machine translation for Gaelic languages

Title Leveraging backtranslation to improve machine translation for Gaelic languages
Authors Meghan Dowling, Teresa Lynn, Andy Way
Abstract
Tasks Machine Translation
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-6908/
PDF https://www.aclweb.org/anthology/W19-6908
PWC https://paperswithcode.com/paper/leveraging-backtranslation-to-improve-machine
Repo
Framework

Learning Positive Functions with Pseudo Mirror Descent

Title Learning Positive Functions with Pseudo Mirror Descent
Authors Yingxiang Yang, Haoxiang Wang, Negar Kiyavash, Niao He
Abstract The nonparametric learning of positive-valued functions appears widely in machine learning, especially in the context of estimating intensity functions of point processes. Yet, existing approaches either require computing expensive projections or semidefinite relaxations, or lack convexity and theoretical guarantees after introducing nonlinear link functions. In this paper, we propose a novel algorithm, pseudo mirror descent, that performs efficient estimation of positive functions within a Hilbert space without expensive projections. The algorithm guarantees positivity by performing mirror descent with an appropriately selected Bregman divergence, and a pseudo-gradient is adopted to speed up the gradient evaluation procedure in practice. We analyze both asymptotic and nonasymptotic convergence of the algorithm. Through simulations, we show that pseudo mirror descent outperforms the state-of-the-art benchmarks for learning intensities of Poisson and multivariate Hawkes processes, in terms of both computational efficiency and accuracy.
Tasks Point Processes
Published 2019-12-01
URL http://papers.nips.cc/paper/9563-learning-positive-functions-with-pseudo-mirror-descent
PDF http://papers.nips.cc/paper/9563-learning-positive-functions-with-pseudo-mirror-descent.pdf
PWC https://paperswithcode.com/paper/learning-positive-functions-with-pseudo
Repo
Framework
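
With the negative-entropy Bregman divergence, a mirror descent step becomes a multiplicative (exponentiated-gradient) update, which keeps iterates positive without any projection. The toy least-squares example below illustrates that property; it is not the paper's intensity-estimation setting.

```python
# Mirror descent with negative-entropy Bregman divergence reduces to
# x <- x * exp(-eta * grad), which preserves elementwise positivity.
import numpy as np

def md_step(x, grad, eta=0.1):
    return x * np.exp(-eta * grad)   # multiplicative update, stays positive

target = np.array([0.5, 1.0, 2.0, 4.0])
x = np.ones(4)                       # positive starting point
for _ in range(200):
    x = md_step(x, x - target)       # grad of 0.5 * ||x - target||^2
print(np.round(x, 3))                # approaches the positive target
```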

Fine-Grained Control of Sentence Segmentation and Entity Positioning in Neural NLG

Title Fine-Grained Control of Sentence Segmentation and Entity Positioning in Neural NLG
Authors Kritika Mehta, Raheel Qader, Cyril Labbe, François Portet
Abstract The move from pipeline Natural Language Generation (NLG) approaches to neural end-to-end approaches led to a loss of control over sentence planning operations, owing to the conflation of intermediary micro-planning stages into a single model. Such control is highly necessary when the text must respect constraints such as which entity to mention first, the entity position, the complexity of sentences, etc. In this paper, we introduce fine-grained control of sentence planning in neural data-to-text generation models at two levels: realization of input entities in desired sentences, and realization of the input entities in the desired position within individual sentences. We show that by augmenting the input with explicit position identifiers, the neural model can achieve close control over the output structure while keeping the naturalness of the generated text intact. Since sentence-level metrics are not entirely suitable for evaluating this task, we used a task-specific metric that accounts for the model's ability to achieve control. The results demonstrate that the position identifiers do constrain the neural model to respect the intended output structure, which can be useful in a variety of domains that require the generated text to follow a certain structure.
Tasks Data-to-Text Generation, Text Generation
Published 2019-11-01
URL https://www.aclweb.org/anthology/W19-8103/
PDF https://www.aclweb.org/anthology/W19-8103
PWC https://paperswithcode.com/paper/fine-grained-control-of-sentence-segmentation
Repo
Framework
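
The input augmentation the abstract describes can be pictured as prefixing each input record with sentence and position identifier tokens, so a seq2seq model can condition on the intended output structure. The token scheme and data below are invented, purely to show the idea.

```python
# Hypothetical input augmentation with explicit position identifiers;
# the <SENTk>/<POSk> token scheme and the records are illustrative.
triples = [("John", "occupation", "pilot"), ("John", "birthplace", "Leeds")]
plan = {0: ("SENT1", "POS1"), 1: ("SENT2", "POS1")}  # desired realization

augmented = []
for i, (s, p, o) in enumerate(triples):
    sent_id, pos_id = plan[i]
    augmented += [f"<{sent_id}>", f"<{pos_id}>", s, p, o]
print(" ".join(augmented))
# <SENT1> <POS1> John occupation pilot <SENT2> <POS1> John birthplace Leeds
```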

BA-Net: Dense Bundle Adjustment Networks

Title BA-Net: Dense Bundle Adjustment Networks
Authors Chengzhou Tang, Ping Tan
Abstract This paper introduces a network architecture to solve the structure-from-motion (SfM) problem via feature-metric bundle adjustment (BA), which explicitly enforces multi-view geometry constraints in the form of feature-metric error. The whole pipeline is differentiable, so that the network can learn suitable features that make the BA problem more tractable. Furthermore, this work introduces a novel depth parameterization to recover dense per-pixel depth. The network first generates several basis depth maps according to the input image, and optimizes the final depth as a linear combination of these basis depth maps via feature-metric BA. The basis depth maps generator is also learned via end-to-end training. The whole system nicely combines domain knowledge (i.e. hard-coded multi-view geometry constraints) and deep learning (i.e. feature learning and basis depth maps learning) to address the challenging dense SfM problem. Experiments on large scale real data prove the success of the proposed method.
Tasks
Published 2019-05-01
URL https://openreview.net/forum?id=B1gabhRcYX
PDF https://openreview.net/pdf?id=B1gabhRcYX
PWC https://paperswithcode.com/paper/ba-net-dense-bundle-adjustment-networks
Repo
Framework
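
BA-Net's depth parameterization, in miniature: the final dense depth map is a linear combination of basis depth maps generated from the input image. In the sketch below the combination weights are random placeholders; in the paper they are produced by feature-metric BA.

```python
# Dense depth as a linear combination of basis depth maps; weights here
# are placeholders for the output of feature-metric bundle adjustment.
import torch

basis = torch.rand(8, 120, 160)               # 8 basis depth maps, one image
w = torch.softmax(torch.randn(8), dim=0)      # one weight per basis map
depth = (w.view(8, 1, 1) * basis).sum(dim=0)  # (120, 160) per-pixel depth
print(depth.shape)
```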