January 24, 2020

2605 words 13 mins read

Paper Group NANR 103


NICT’s Machine Translation Systems for the WMT19 Similar Language Translation Task

Title NICT’s Machine Translation Systems for the WMT19 Similar Language Translation Task
Authors Benjamin Marie, Raj Dabre, Atsushi Fujita
Abstract This paper presents the NICT's participation in the WMT19 shared Similar Language Translation Task. We participated in the Spanish-Portuguese task. For both translation directions, we prepared state-of-the-art statistical (SMT) and neural (NMT) machine translation systems. Our NMT systems with the Transformer architecture were trained on the provided parallel data enlarged with a large quantity of back-translated monolingual data. Our primary submission to the task is the result of a simple combination of our SMT and NMT systems. According to BLEU, our systems were ranked second and third, respectively, for the Portuguese-to-Spanish and Spanish-to-Portuguese translation directions. For contrastive experiments, we also submitted outputs generated with an unsupervised SMT system.
Tasks Machine Translation
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-5428/
PDF https://www.aclweb.org/anthology/W19-5428
PWC https://paperswithcode.com/paper/nicts-machine-translation-systems-for-the
Repo
Framework
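
The back-translation step mentioned in the abstract follows a standard recipe: translate target-side monolingual text into the source language with a reverse-direction system, then pair the synthetic source with the real target and concatenate with the genuine parallel data. A minimal sketch, assuming a hypothetical `reverse_translate` callable standing in for the reverse-direction model:

```python
from typing import Callable, List, Tuple

def back_translate_augment(
    parallel: List[Tuple[str, str]],          # genuine (source, target) pairs
    target_monolingual: List[str],            # extra target-side sentences
    reverse_translate: Callable[[str], str],  # hypothetical target->source model
) -> List[Tuple[str, str]]:
    """Return training pairs enlarged with synthetic (back-translated) data."""
    synthetic = [(reverse_translate(t), t) for t in target_monolingual]
    # Real pairs plus synthetic pairs; in practice the two parts are often
    # tagged or balanced before training the forward model.
    return parallel + synthetic

if __name__ == "__main__":
    # Toy usage with a stand-in "translator".
    data = back_translate_augment(
        parallel=[("hola", "olá")],
        target_monolingual=["bom dia"],
        reverse_translate=lambda t: f"<pseudo-es> {t}",
    )
    print(data)
```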

Incremental training of multi-generative adversarial networks

Title Incremental training of multi-generative adversarial networks
Authors Qi Tan, Pingzhong Tang, Ke Xu, Weiran Shen, Song Zuo
Abstract Generative neural networks map a standard input distribution to a complex high-dimensional distribution, which represents the real-world data set. However, a determinate input distribution as well as a specific architecture of neural networks may impose limitations on capturing the diversity of the high-dimensional target space. To resolve this difficulty, we propose a training framework that greedily produces a series of generative adversarial networks that incrementally capture the diversity of the target space. We show theoretically and empirically that our training algorithm converges to the theoretically optimal distribution: the projection of the real distribution onto the convex hull of the networks' distribution space.
Tasks
Published 2019-05-01
URL https://openreview.net/forum?id=ryekdoCqF7
PDF https://openreview.net/pdf?id=ryekdoCqF7
PWC https://paperswithcode.com/paper/incremental-training-of-multi-generative
Repo
Framework
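
As a rough illustration of the incremental idea only, and under greatly simplified, assumed interfaces: `train_generator` below is a hypothetical callable that, given the current weighted mixture, fits one more GAN generator and proposes its mixture weight. The paper's actual objective (projecting onto the convex hull of the generators' distribution space) is not reproduced here.

```python
import random
from typing import Callable, List, Tuple

Sampler = Callable[[], float]  # a trained generator, viewed as a sampler

def train_mixture(
    train_generator: Callable[[List[Tuple[Sampler, float]]], Tuple[Sampler, float]],
    num_generators: int,
) -> List[Tuple[Sampler, float]]:
    """Greedily grow a weighted mixture of generators."""
    mixture: List[Tuple[Sampler, float]] = []
    for _ in range(num_generators):
        generator, weight = train_generator(mixture)
        if not mixture:
            weight = 1.0  # the first generator carries all the mass
        # Scale existing weights so the mixture still sums to one.
        mixture = [(g, w * (1.0 - weight)) for g, w in mixture]
        mixture.append((generator, weight))
    return mixture

def sample_from_mixture(mixture: List[Tuple[Sampler, float]]) -> float:
    """Draw one sample: pick a generator by weight, then sample from it."""
    generators, weights = zip(*mixture)
    return random.choices(generators, weights=weights, k=1)[0]()

if __name__ == "__main__":
    # Toy usage: each "generator" is a constant sampler; train_generator is a
    # stand-in that returns a new constant generator with weight 0.5.
    toy_train = lambda mix: ((lambda v=len(mix): float(v)), 0.5)
    mix = train_mixture(toy_train, num_generators=3)
    print([sample_from_mixture(mix) for _ in range(5)])
```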

Object-Oriented Model Learning through Multi-Level Abstraction

Title Object-Oriented Model Learning through Multi-Level Abstraction
Authors Guangxiang Zhu, Jianhao Wang, ZhiZhou Ren, Chongjie Zhang
Abstract Object-based approaches for learning action-conditioned dynamics have demonstrated promise for generalization and interpretability. However, existing approaches suffer from structural limitations and optimization difficulties for common environments with multiple dynamic objects. In this paper, we present a novel self-supervised learning framework, called Multi-level Abstraction Object-oriented Predictor (MAOP), for learning object-based dynamics models from raw visual observations. MAOP employs a three-level learning architecture that enables efficient dynamics learning for complex environments with a dynamic background. We also design a spatial-temporal relational reasoning mechanism to support instance-level dynamics learning and handle partial observability. Empirical results show that MAOP significantly outperforms previous methods in terms of sample efficiency and generalization over novel environments that have multiple controllable and uncontrollable dynamic objects and different static object layouts. In addition, MAOP learns semantically and visually interpretable disentangled representations.
Tasks Relational Reasoning
Published 2019-05-01
URL https://openreview.net/forum?id=BkxkH30cFm
PDF https://openreview.net/pdf?id=BkxkH30cFm
PWC https://paperswithcode.com/paper/object-oriented-model-learning-through-multi
Repo
Framework

UDS–DFKI Submission to the WMT2019 Czech–Polish Similar Language Translation Shared Task

Title UDS–DFKI Submission to the WMT2019 Czech–Polish Similar Language Translation Shared Task
Authors Santanu Pal, Marcos Zampieri, Josef van Genabith
Abstract In this paper we present the UDS-DFKI system submitted to the Similar Language Translation shared task at WMT 2019. The first edition of this shared task featured data from three pairs of similar languages: Czech and Polish, Hindi and Nepali, and Portuguese and Spanish. Participants could choose to participate in any of these three tracks and submit system outputs in any translation direction. We report the results obtained by our system in translating from Czech to Polish and comment on the impact of out-of-domain test data on the performance of our system. UDS-DFKI achieved competitive performance, ranking second among ten teams in Czech-to-Polish translation.
Tasks
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-5430/
PDF https://www.aclweb.org/anthology/W19-5430
PWC https://paperswithcode.com/paper/uds-dfki-submission-to-the-wmt2019-czech
Repo
Framework

Neural Machine Translation of Low-Resource and Similar Languages with Backtranslation

Title Neural Machine Translation of Low-Resource and Similar Languages with Backtranslation
Authors Michael Przystupa, Muhammad Abdul-Mageed
Abstract We present our contribution to the WMT19 Similar Language Translation shared task. We investigate the utility of neural machine translation on three low-resource, similar language pairs: Spanish–Portuguese, Czech–Polish, and Hindi–Nepali. Since state-of-the-art neural machine translation systems still require large amounts of bitext, which we do not have for the pairs we consider, we focus primarily on incorporating monolingual data into our models with backtranslation. In our analysis, we found Transformer models to work best on Spanish–Portuguese and Czech–Polish translation, whereas LSTMs with global attention worked best on Hindi–Nepali translation.
Tasks Machine Translation
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-5431/
PDF https://www.aclweb.org/anthology/W19-5431
PWC https://paperswithcode.com/paper/neural-machine-translation-of-low-resource
Repo
Framework

Unsupervised Domain Adaptation for Distance Metric Learning

Title Unsupervised Domain Adaptation for Distance Metric Learning
Authors Kihyuk Sohn, Wenling Shang, Xiang Yu, Manmohan Chandraker
Abstract Unsupervised domain adaptation is a promising avenue to enhance the performance of deep neural networks on a target domain, using labels only from a source domain. However, the two predominant methods, domain discrepancy reduction learning and semi-supervised learning, are not readily applicable when source and target domains do not share a common label space. This paper addresses the above scenario by learning a representation space that retains discriminative power on both the (labeled) source and (unlabeled) target domains while keeping representations for the two domains well-separated. Inspired by a theoretical analysis, we first reformulate the disjoint classification task, where the source and target domains correspond to non-overlapping class labels, to a verification one. To handle both within-domain and cross-domain verification, we propose a Feature Transfer Network (FTN) to separate the target feature space from the original source space while aligning it with a transformed source space. Moreover, we present a non-parametric multi-class entropy minimization loss to further boost the discriminative power of FTNs on the target domain. In experiments, we first illustrate how FTN works in a controlled setting of adapting from MNIST-M to MNIST with disjoint digit classes between the two domains and then demonstrate the effectiveness of FTNs through state-of-the-art performances on a cross-ethnicity face recognition problem.
Tasks Domain Adaptation, Face Recognition, Metric Learning, Unsupervised Domain Adaptation
Published 2019-05-01
URL https://openreview.net/forum?id=BklhAj09K7
PDF https://openreview.net/pdf?id=BklhAj09K7
PWC https://paperswithcode.com/paper/unsupervised-domain-adaptation-for-distance
Repo
Framework
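
The entropy-minimization idea in the abstract can be illustrated with a generic, parametric version of the loss: minimizing the mean prediction entropy on unlabeled target examples pushes the model toward confident class assignments. This is only a sketch; FTN's non-parametric formulation, which scores targets against reference embeddings rather than a softmax classifier head, is not reproduced here.

```python
import torch
import torch.nn.functional as F

def entropy_minimization_loss(logits: torch.Tensor) -> torch.Tensor:
    """Mean prediction entropy over a batch of unlabeled target examples."""
    probs = F.softmax(logits, dim=-1)
    log_probs = F.log_softmax(logits, dim=-1)
    entropy = -(probs * log_probs).sum(dim=-1)  # per-example entropy
    return entropy.mean()

if __name__ == "__main__":
    # Toy usage: 4 unlabeled target examples, 10 classes.
    loss = entropy_minimization_loss(torch.randn(4, 10))
    print(loss.item())
```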

Dual Monolingual Cross-Entropy Delta Filtering of Noisy Parallel Data

Title Dual Monolingual Cross-Entropy Delta Filtering of Noisy Parallel Data
Authors Amittai Axelrod, Anish Kumar, Steve Sloto
Abstract We introduce a purely monolingual approach to filtering for parallel data from a noisy corpus in a low-resource scenario. Our work is inspired by Junczys-Dowmunt (2018), but we relax the requirements to allow for cases where no parallel data is available. Our primary contribution is a dual monolingual cross-entropy delta criterion modified from Cynical data selection (Axelrod, 2017), and it is competitive (within 1.8 BLEU) with the best bilingual filtering method when used to train SMT systems. Our approach is featherweight, and runs end-to-end on a standard laptop in three hours.
Tasks
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-5433/
PDF https://www.aclweb.org/anthology/W19-5433
PWC https://paperswithcode.com/paper/dual-monolingual-cross-entropy-delta
Repo
Framework
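
The abstract does not spell the criterion out, so the sketch below is only one plausible reading, stated as an assumption: each side of a sentence pair is scored by the cross-entropy delta between an in-domain and a general language model (in the spirit of Cynical data selection), and pairs are preferred when the two deltas are small and agree. The LM scorers are hypothetical callables; bind them with a lambda before passing to `filter_pairs`.

```python
from typing import Callable, List, Tuple

# Each scorer returns a sentence's per-word cross-entropy under some LM.
CrossEntropy = Callable[[str], float]

def dual_delta_score(
    src: str,
    tgt: str,
    src_in: CrossEntropy, src_gen: CrossEntropy,   # source-side in-domain / general LMs
    tgt_in: CrossEntropy, tgt_gen: CrossEntropy,   # target-side in-domain / general LMs
) -> float:
    """Lower is better: both sides look in-domain and the two sides agree.

    One plausible instantiation of a dual monolingual cross-entropy delta
    criterion; the paper's exact formula may differ.
    """
    delta_src = src_in(src) - src_gen(src)
    delta_tgt = tgt_in(tgt) - tgt_gen(tgt)
    return (delta_src + delta_tgt) / 2.0 + abs(delta_src - delta_tgt)

def filter_pairs(
    pairs: List[Tuple[str, str]],
    score: Callable[[str, str], float],
    keep_fraction: float = 0.5,
) -> List[Tuple[str, str]]:
    """Keep the best-scoring fraction of the noisy corpus."""
    ranked = sorted(pairs, key=lambda p: score(*p))
    return ranked[: int(len(ranked) * keep_fraction)]
```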

End-to-End Sequential Metaphor Identification Inspired by Linguistic Theories

Title End-to-End Sequential Metaphor Identification Inspired by Linguistic Theories
Authors Rui Mao, Chenghua Lin, Frank Guerin
Abstract End-to-end training with Deep Neural Networks (DNN) is a currently popular method for metaphor identification. However, standard sequence tagging models do not explicitly take advantage of linguistic theories of metaphor identification. We experiment with two DNN models which are inspired by two human metaphor identification procedures. By testing on three public datasets, we find that our models achieve state-of-the-art performance in end-to-end metaphor identification.
Tasks
Published 2019-07-01
URL https://www.aclweb.org/anthology/P19-1378/
PDF https://www.aclweb.org/anthology/P19-1378
PWC https://paperswithcode.com/paper/end-to-end-sequential-metaphor-identification
Repo
Framework

Comparing Automated Methods to Detect Explicit Content in Song Lyrics

Title Comparing Automated Methods to Detect Explicit Content in Song Lyrics
Authors Michael Fell, Elena Cabrio, Michele Corazza, Fabien Gandon
Abstract The Parental Advisory Label (PAL) is a warning label that is placed on audio recordings in recognition of profanity or inappropriate references, with the intention of alerting parents to material potentially unsuitable for children. Since 2015, digital providers, such as iTunes, Spotify, Amazon Music and Deezer, also follow PAL guidelines and tag such tracks as "explicit". Nowadays, such labelling is carried out mainly manually on a voluntary basis, with the drawbacks of being time-consuming and therefore costly, error-prone and partly subjective. In this paper, we compare automated methods ranging from dictionary-based lookup to state-of-the-art deep neural networks to automatically detect explicit content in English lyrics. We show that more complex models perform only slightly better on this task, and, relying on a qualitative analysis of the data, we discuss the inherent hardness and subjectivity of the task.
Tasks
Published 2019-09-01
URL https://www.aclweb.org/anthology/R19-1039/
PDF https://www.aclweb.org/anthology/R19-1039
PWC https://paperswithcode.com/paper/comparing-automated-methods-to-detect
Repo
Framework
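
The simplest of the compared methods, dictionary-based lookup, reduces to checking a song's lyrics against a profanity lexicon. A minimal sketch with a placeholder lexicon (the paper's actual word lists and neural models are not reproduced):

```python
import re
from typing import Iterable

def is_explicit(lyrics: str, profanity_lexicon: Iterable[str]) -> bool:
    """Dictionary-lookup baseline: flag lyrics containing any lexicon word."""
    tokens = set(re.findall(r"[a-z']+", lyrics.lower()))
    return any(word in tokens for word in profanity_lexicon)

if __name__ == "__main__":
    lexicon = {"damn"}  # placeholder entries, not the paper's lexicon
    print(is_explicit("Damn, the radio edit cut that line", lexicon))   # True
    print(is_explicit("A perfectly family-friendly chorus", lexicon))   # False
```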

SynDeMo: Synergistic Deep Feature Alignment for Joint Learning of Depth and Ego-Motion

Title SynDeMo: Synergistic Deep Feature Alignment for Joint Learning of Depth and Ego-Motion
Authors Behzad Bozorgtabar, Mohammad Saeed Rad, Dwarikanath Mahapatra, Jean-Philippe Thiran
Abstract Despite well-established baselines, learning of scene depth and ego-motion from monocular video remains an ongoing challenge, specifically when handling scaling ambiguity issues and depth inconsistencies in image sequences. Much prior work uses either a supervised mode of learning or stereo images. The former is limited by the amount of labeled data, as it requires expensive sensors, while the latter is not as readily available as monocular sequences. In this work, we demonstrate the benefit of using geometric information from synthetic images, coupled with scene depth information, to recover the scale in depth and ego-motion estimation from monocular videos. We developed our framework using synthetic image-depth pairs and unlabeled real monocular images. We had three training objectives: first, to use deep feature alignment to reduce the domain gap between synthetic and monocular images to yield more accurate depth estimation when presented with only real monocular images at test time. Second, we learn scene-specific representation by exploiting self-supervision coming from multi-view synthetic images without the need for depth labels. Third, our method uses single-view depth and pose networks, which are capable of jointly training and supervising one another mutually, yielding consistent depth and ego-motion estimates. Extensive experiments demonstrate that our depth and ego-motion models surpass state-of-the-art unsupervised methods and compare favorably to early supervised deep models for geometric understanding. We validate the effectiveness of our training objectives against standard benchmarks through an ablation study.
Tasks Depth Estimation, Motion Estimation
Published 2019-10-01
URL http://openaccess.thecvf.com/content_ICCV_2019/html/Bozorgtabar_SynDeMo_Synergistic_Deep_Feature_Alignment_for_Joint_Learning_of_Depth_ICCV_2019_paper.html
PDF http://openaccess.thecvf.com/content_ICCV_2019/papers/Bozorgtabar_SynDeMo_Synergistic_Deep_Feature_Alignment_for_Joint_Learning_of_Depth_ICCV_2019_paper.pdf
PWC https://paperswithcode.com/paper/syndemo-synergistic-deep-feature-alignment
Repo
Framework

Linguistic classification: dealing jointly with irrelevance and inconsistency

Title Linguistic classification: dealing jointly with irrelevance and inconsistency
Authors Laura Franzoi, Andrea Sgarro, Anca Dinu, Liviu P. Dinu
Abstract In this paper, we present new methods for language classification which put to good use both syntax and fuzzy tools, and are capable of dealing with irrelevant linguistic features (i.e. features which should not contribute to the classification) and even inconsistent features (which do not make sense for specific languages). We introduce a metric distance, based on the generalized Steinhaus transform, which allows one to deal jointly with irrelevance and inconsistency. To evaluate our methods, we test them on a syntactic data set, due to the linguist G. Longobardi and his school. We obtain phylogenetic trees which sometimes outperform the ones obtained by Atkinson and Gray.
Tasks
Published 2019-09-01
URL https://www.aclweb.org/anthology/R19-1040/
PDF https://www.aclweb.org/anthology/R19-1040
PWC https://paperswithcode.com/paper/linguistic-classification-dealing-jointly
Repo
Framework
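
For reference, the classical Steinhaus transform turns a metric d and a fixed anchor point a into the bounded metric d_a(x, y) = 2 d(x, y) / (d(x, a) + d(y, a) + d(x, y)). The sketch below implements this classical form only; the paper's generalization, which additionally handles irrelevant and inconsistent feature values, is not reproduced here.

```python
from typing import Callable, TypeVar

T = TypeVar("T")
Metric = Callable[[T, T], float]

def steinhaus_transform(d: Metric, anchor: T) -> Metric:
    """Classical Steinhaus transform of a metric d around a fixed anchor point.

    d_a(x, y) = 2 d(x, y) / (d(x, a) + d(y, a) + d(x, y)); the result is
    again a metric, bounded by 1.
    """
    def d_a(x: T, y: T) -> float:
        denom = d(x, anchor) + d(y, anchor) + d(x, y)
        return 0.0 if denom == 0.0 else 2.0 * d(x, y) / denom
    return d_a

if __name__ == "__main__":
    # Toy usage on binary feature strings with Hamming distance.
    hamming = lambda x, y: float(sum(a != b for a, b in zip(x, y)))
    d_a = steinhaus_transform(hamming, anchor="0000")
    print(d_a("0011", "0101"))  # 2*2 / (2 + 2 + 2) = 0.666...
```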

Responsive and Self-Expressive Dialogue Generation

Title Responsive and Self-Expressive Dialogue Generation
Authors Kozo Chikai, Junya Takayama, Yuki Arase
Abstract A neural conversation model is a promising approach to developing dialogue systems with the ability to chit-chat. It allows training a model in an end-to-end manner without complex rule design or feature engineering. However, as a side effect, the neural model tends to generate safe but uninformative and insensitive responses like "OK" and "I don't know." Such replies are called generic responses and are regarded as a critical problem for user engagement of dialogue systems. For a more engaging chit-chat experience, we propose a neural conversation model that generates responsive and self-expressive replies. Specifically, our model generates domain-aware and sentiment-rich responses. Experiments empirically confirmed that our model outperformed the sequence-to-sequence model; 68.1% of our responses were domain-aware with sentiment polarities, compared to only 2.7% of responses generated by the sequence-to-sequence model.
Tasks Dialogue Generation, Feature Engineering
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-4116/
PDF https://www.aclweb.org/anthology/W19-4116
PWC https://paperswithcode.com/paper/responsive-and-self-expressive-dialogue
Repo
Framework

Spatial Correspondence With Generative Adversarial Network: Learning Depth From Monocular Videos

Title Spatial Correspondence With Generative Adversarial Network: Learning Depth From Monocular Videos
Authors Zhenyao Wu, Xinyi Wu, Xiaoping Zhang, Song Wang, Lili Ju
Abstract Depth estimation from monocular videos has important applications in many areas such as autonomous driving and robot navigation. It is a very challenging problem without knowing the camera pose since errors in camera-pose estimation can significantly affect the video-based depth estimation accuracy. In this paper, we present a novel SC-GAN network with end-to-end adversarial training for depth estimation from monocular videos without estimating the camera pose and pose change over time. To exploit cross-frame relations, SC-GAN includes a spatial correspondence module which uses Smolyak sparse grids to efficiently match the features across adjacent frames, and an attention mechanism to learn the importance of features in different directions. Furthermore, the generator in SC-GAN learns to estimate depth from the input frames, while the discriminator learns to distinguish between the ground-truth and estimated depth map for the reference frame. Experiments on the KITTI and Cityscapes datasets show that the proposed SC-GAN can achieve much more accurate depth maps than many existing state-of-the-art methods on monocular videos.
Tasks Autonomous Driving, Depth Estimation, Pose Estimation, Robot Navigation
Published 2019-10-01
URL http://openaccess.thecvf.com/content_ICCV_2019/html/Wu_Spatial_Correspondence_With_Generative_Adversarial_Network_Learning_Depth_From_Monocular_ICCV_2019_paper.html
PDF http://openaccess.thecvf.com/content_ICCV_2019/papers/Wu_Spatial_Correspondence_With_Generative_Adversarial_Network_Learning_Depth_From_Monocular_ICCV_2019_paper.pdf
PWC https://paperswithcode.com/paper/spatial-correspondence-with-generative
Repo
Framework

Distribution Learning of a Random Spatial Field with a Location-Unaware Mobile Sensor

Title Distribution Learning of a Random Spatial Field with a Location-Unaware Mobile Sensor
Authors Meera Pai, Animesh Kumar
Abstract Measurement of spatial fields is of interest in environment monitoring. Recently mobile sensing has been proposed for spatial field reconstruction, which requires a smaller number of sensors when compared to the traditional paradigm of sensing with static sensors. A challenge in mobile sensing is to overcome the location uncertainty of its sensors. While GPS or other localization methods can reduce this uncertainty, we address a more fundamental question: can a location-unaware mobile sensor, recording samples on a directed non-uniform random walk, learn the statistical distribution (as a function of space) of an underlying random process (spatial field)? The answer is in the affirmative for Lipschitz continuous fields, where the accuracy of our distribution-learning method increases with the number of observed field samples (sampling rate). To validate our distribution-learning method, we have created a dataset with 43 experimental trials by measuring sound-level along a fixed path using a location-unaware mobile sound-level meter.
Tasks
Published 2019-12-01
URL http://papers.nips.cc/paper/9412-distribution-learning-of-a-random-spatial-field-with-a-location-unaware-mobile-sensor
PDF http://papers.nips.cc/paper/9412-distribution-learning-of-a-random-spatial-field-with-a-location-unaware-mobile-sensor.pdf
PWC https://paperswithcode.com/paper/distribution-learning-of-a-random-spatial
Repo
Framework

Quotation Detection and Classification with a Corpus-Agnostic Model

Title Quotation Detection and Classification with a Corpus-Agnostic Model
Authors Sean Papay, Sebastian Padó
Abstract The detection of quotations (i.e., reported speech, thought, and writing) has established itself as an NLP analysis task. However, state-of-the-art models have been developed on the basis of specific corpora and incorporate a high degree of corpus-specific assumptions and knowledge, which leads to fragmentation. In the spirit of task-agnostic modeling, we present a corpus-agnostic neural model for quotation detection and evaluate it on three corpora that vary in language, text genre, and structural assumptions. The model (a) approaches the state of the art on the corpora when using established feature sets and (b) shows reasonable performance even when using solely word forms, which makes it applicable to non-standard (i.e., historical) corpora.
Tasks
Published 2019-09-01
URL https://www.aclweb.org/anthology/R19-1103/
PDF https://www.aclweb.org/anthology/R19-1103
PWC https://paperswithcode.com/paper/quotation-detection-and-classification-with-a
Repo
Framework