January 24, 2020

2625 words 13 mins read

Paper Group NANR 140


Order-Aware Generative Modeling Using the 3D-Craft Dataset. Investigating the Effectiveness of BPE: The Power of Shorter Sequences. Noisy Parallel Corpus Filtering through Projected Word Embeddings. Analysing the Impact of Supervised Machine Learning on Automatic Term Extraction: HAMLET vs TermoStat. Lipschitz regularized Deep Neural Networks gener …

Order-Aware Generative Modeling Using the 3D-Craft Dataset

Title Order-Aware Generative Modeling Using the 3D-Craft Dataset
Authors Zhuoyuan Chen, Demi Guo, Tong Xiao, Saining Xie, Xinlei Chen, Haonan Yu, Jonathan Gray, Kavya Srinet, Haoqi Fan, Jerry Ma, Charles R. Qi, Shubham Tulsiani, Arthur Szlam, C. Lawrence Zitnick
Abstract In this paper, we study the problem of sequentially building houses in the game of Minecraft, and demonstrate that learning the ordering can make for more effective autoregressive models. Given a partially built house made by a human player, our system tries to place additional blocks in a human-like manner to complete the house. We introduce a new dataset, HouseCraft, for this new task. HouseCraft contains the sequential order in which 2,500 Minecraft houses were built from scratch by humans. The human action sequences enable us to learn an order-aware generative model called Voxel-CNN. In contrast to many generative models where the sequential generation ordering either does not matter (e.g. holistic generation with GANs), or is manually/arbitrarily set by simple rules (e.g. raster-scan order), our focus is on an ordered generation that imitates humans. To evaluate if a generative model can accurately predict human-like actions, we propose several novel quantitative metrics. We demonstrate that our Voxel-CNN model is simple and effective at this creative task, and can serve as a strong baseline for future research in this direction. The HouseCraft dataset and code with baseline models will be made publicly available.
Tasks
Published 2019-10-01
URL http://openaccess.thecvf.com/content_ICCV_2019/html/Chen_Order-Aware_Generative_Modeling_Using_the_3D-Craft_Dataset_ICCV_2019_paper.html
PDF http://openaccess.thecvf.com/content_ICCV_2019/papers/Chen_Order-Aware_Generative_Modeling_Using_the_3D-Craft_Dataset_ICCV_2019_paper.pdf
PWC https://paperswithcode.com/paper/order-aware-generative-modeling-using-the-3d
Repo
Framework
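To make the task concrete: the model repeatedly looks at a partial house and predicts where the next block goes, imitating human build order. The sketch below is not Voxel-CNN; it is a hypothetical heuristic baseline of the kind such a task invites, scoring empty cells by adjacency to the most recently placed blocks.

```python
from collections import Counter

NEIGHBORS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
             (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def predict_next(placed, recency_window=3):
    """Heuristic order-aware baseline: score empty cells by adjacency to
    the most recently placed blocks, imitating the local 'build where
    you just built' tendency visible in human action sequences.

    placed: list of (x, y, z, block_type) in build order.
    """
    occupied = {p[:3] for p in placed}
    scores = Counter()
    for x, y, z, _ in placed[-recency_window:]:
        for dx, dy, dz in NEIGHBORS:
            cell = (x + dx, y + dy, z + dz)
            if cell not in occupied:
                scores[cell] += 1
    return scores.most_common(1)[0][0] if scores else None
```

A learned model would replace the adjacency count with a scoring network over the voxel grid; the evaluation question is the same, namely whether the predicted placement matches the human's next action.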

Investigating the Effectiveness of BPE: The Power of Shorter Sequences

Title Investigating the Effectiveness of BPE: The Power of Shorter Sequences
Authors Matthias Gallé
Abstract Byte-Pair Encoding (BPE) is an unsupervised sub-word tokenization technique, commonly used in neural machine translation and other NLP tasks. Its effectiveness makes it a de facto standard, but the reasons for this are not well understood. We link BPE to the broader family of dictionary-based compression algorithms and compare it with other members of this family. Our experiments across datasets, language pairs, translation models, and vocabulary size show that - given a fixed vocabulary size budget - the fewer tokens an algorithm needs to cover the test set, the better the translation (as measured by BLEU).
Tasks Machine Translation, Tokenization
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-1141/
PDF https://www.aclweb.org/anthology/D19-1141
PWC https://paperswithcode.com/paper/investigating-the-effectiveness-of-bpe-the
Repo
Framework
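The paper's key quantity is how many tokens a tokenizer needs to cover a text under a fixed vocabulary budget. A minimal sketch of that measurement, using greedy longest-match segmentation as a stand-in for merge-order BPE inference (an assumption, not the paper's exact procedure):

```python
def segment(word, vocab):
    """Greedy longest-match segmentation of a word into sub-word units
    from a fixed vocabulary; single characters are always available as
    a fallback."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in vocab or j == i + 1:
                tokens.append(piece)
                i = j
                break
    return tokens

def tokens_to_cover(sentence, vocab):
    """Number of tokens needed to cover the text; the paper's finding is
    that, at a fixed vocabulary size, fewer tokens correlates with
    higher BLEU."""
    return sum(len(segment(w, vocab)) for w in sentence.split())
```

Comparing two vocabularies of equal size by `tokens_to_cover` on held-out text is exactly the comparison the abstract describes across compression algorithms.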

Noisy Parallel Corpus Filtering through Projected Word Embeddings

Title Noisy Parallel Corpus Filtering through Projected Word Embeddings
Authors Murathan Kurfalı, Robert Östling
Abstract We present a very simple method for parallel text cleaning of low-resource languages, based on projection of word embeddings trained on large monolingual corpora in high-resource languages. In spite of its simplicity, we approach the strong baseline system in the downstream machine translation evaluation.
Tasks Machine Translation, Word Embeddings
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-5438/
PDF https://www.aclweb.org/anthology/W19-5438
PWC https://paperswithcode.com/paper/noisy-parallel-corpus-filtering-through
Repo
Framework
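The filtering idea reduces to a similarity threshold: embed both sides of each sentence pair in a shared space and drop pairs whose vectors disagree. A minimal sketch under the assumption that source embeddings have already been projected into the target space (e.g. by a learned linear map):

```python
import numpy as np

def sentence_vec(sent, emb, dim=4):
    """Mean of the word embeddings present in the lexicon."""
    vecs = [emb[w] for w in sent.split() if w in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def filter_pairs(pairs, src_emb, tgt_emb, threshold=0.5):
    """Keep sentence pairs whose mean-embedding cosine similarity
    exceeds a threshold; src_emb is assumed to be already projected
    into the target embedding space."""
    kept = []
    for src, tgt in pairs:
        a, b = sentence_vec(src, src_emb), sentence_vec(tgt, tgt_emb)
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        cos = float(a @ b / denom) if denom else 0.0
        if cos >= threshold:
            kept.append((src, tgt))
    return kept
```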

Analysing the Impact of Supervised Machine Learning on Automatic Term Extraction: HAMLET vs TermoStat

Title Analysing the Impact of Supervised Machine Learning on Automatic Term Extraction: HAMLET vs TermoStat
Authors Ayla Rigouts Terryn, Patrick Drouin, Veronique Hoste, Els Lefever
Abstract Traditional approaches to automatic term extraction do not rely on machine learning (ML) and select the top n ranked candidate terms, or candidate terms above a predefined cut-off point, based on a limited number of linguistic and statistical clues. However, supervised ML approaches are gaining interest. Relatively little is known about the impact of these supervised methodologies; evaluations are often limited to precision, and sometimes recall and F1 scores, without information about the nature of the extracted candidate terms. Therefore, the current paper presents a detailed analysis and comparison of a traditional, state-of-the-art system (TermoStat) and a new, supervised ML approach (HAMLET), using the results obtained for the same manually annotated Dutch corpus about dressage.
Tasks
Published 2019-09-01
URL https://www.aclweb.org/anthology/R19-1117/
PDF https://www.aclweb.org/anthology/R19-1117
PWC https://paperswithcode.com/paper/analysing-the-impact-of-supervised-machine
Repo
Framework

Lipschitz regularized Deep Neural Networks generalize

Title Lipschitz regularized Deep Neural Networks generalize
Authors Adam M. Oberman, Jeff Calder
Abstract We show that if the usual training loss is augmented by a Lipschitz regularization term, then the networks generalize. We prove generalization by first establishing a stronger convergence result, along with a rate of convergence. A second result resolves a question posed in Zhang et al. (2016): how can a model distinguish between clean labels and randomized labels? Our answer is that Lipschitz regularization using the Lipschitz constant of the clean data makes this distinction. In this case, the model learns a different function which, we hypothesize, correctly fails to learn the dirty labels.
Tasks
Published 2019-05-01
URL https://openreview.net/forum?id=r1l3NiCqY7
PDF https://openreview.net/pdf?id=r1l3NiCqY7
PWC https://paperswithcode.com/paper/lipschitz-regularized-deep-neural-networks-1
Repo
Framework
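The augmented loss the abstract describes can be sketched numerically: estimate the network's Lipschitz constant empirically and penalize any excess over a target constant (e.g. one estimated from clean data). This is an illustrative finite-sample sketch, not the paper's exact regularizer.

```python
import numpy as np

def lipschitz_estimate(f, xs):
    """Empirical Lipschitz constant of f over sample points: the largest
    difference quotient |f(x) - f(y)| / ||x - y|| across all pairs."""
    best = 0.0
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            dx = np.linalg.norm(xs[i] - xs[j])
            if dx > 0:
                best = max(best, abs(f(xs[i]) - f(xs[j])) / dx)
    return best

def regularized_loss(data_loss, f, xs, lam=0.1, target=1.0):
    """Training loss augmented with a hinge on the empirical Lipschitz
    constant, penalizing functions steeper than the target constant."""
    excess = max(0.0, lipschitz_estimate(f, xs) - target)
    return data_loss + lam * excess ** 2
```

On random labels the interpolating function must be steep between nearby points with conflicting labels, so the penalty term grows; this is the mechanism behind the clean-versus-dirty-label distinction.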

Visual Localization by Learning Objects-Of-Interest Dense Match Regression

Title Visual Localization by Learning Objects-Of-Interest Dense Match Regression
Authors Philippe Weinzaepfel, Gabriela Csurka, Yohann Cabon, Martin Humenberger
Abstract We introduce a novel CNN-based approach for visual localization from a single RGB image that relies on densely matching a set of Objects-of-Interest (OOIs). In this paper, we focus on planar objects which are highly descriptive in an environment, such as paintings in museums or logos and storefronts in malls or airports. For each OOI, we define a reference image for which 3D world coordinates are available. Given a query image, our CNN model detects the OOIs, segments them and finds a dense set of 2D-2D matches between each detected OOI and its corresponding reference image. Given these 2D-2D matches, together with the 3D world coordinates of each reference image, we obtain a set of 2D-3D matches from which solving a Perspective-n-Point problem gives a pose estimate. We show that 2D-3D matches for reference images, as well as OOI annotations, can be obtained for all training images from a single instance annotation per OOI by leveraging Structure-from-Motion reconstruction. We introduce a novel synthetic dataset, VirtualGallery, which targets challenges such as varying lighting conditions and different occlusion levels. Our results show that our method achieves high precision and is robust to these challenges. We also experiment using the Baidu localization dataset captured in a shopping mall. Our approach is the first deep regression-based method to scale to such a large environment.
Tasks Visual Localization
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Weinzaepfel_Visual_Localization_by_Learning_Objects-Of-Interest_Dense_Match_Regression_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Weinzaepfel_Visual_Localization_by_Learning_Objects-Of-Interest_Dense_Match_Regression_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/visual-localization-by-learning-objects-of
Repo
Framework
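The pipeline's key lifting step is mechanical once the reference image carries per-pixel 3D world coordinates: each 2D-2D match against the reference becomes a 2D-3D correspondence by indexing. A minimal sketch (the coordinate layout is an assumption for illustration):

```python
import numpy as np

def lift_matches(matches_2d2d, ref_xyz):
    """Turn dense 2D-2D matches against a reference image into 2D-3D
    correspondences, given per-pixel 3D world coordinates of the
    reference (an H x W x 3 array). The resulting 2D-3D set would then
    be fed to a Perspective-n-Point solver to recover the camera pose.

    matches_2d2d: list of ((query_x, query_y), (ref_x, ref_y)).
    """
    pts2d, pts3d = [], []
    for (qx, qy), (rx, ry) in matches_2d2d:
        pts2d.append((qx, qy))
        pts3d.append(ref_xyz[ry, rx])  # row = y, col = x
    return np.array(pts2d, dtype=float), np.array(pts3d, dtype=float)
```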

A neural lens for super-resolution biological imaging

Title A neural lens for super-resolution biological imaging
Authors James A Grant-Jacob, Benita S Mackay, James A G Baker, Yunhui Xie, Daniel J Heath, Matthew Loxham, Robert W Eason and Ben Mills
Abstract Visualizing structures smaller than the eye can see has been a driving force in scientific research since the invention of the optical microscope. Here, we use a network of neural networks to create a neural lens that has the ability to transform 20× optical microscope images into a resolution comparable to a 1500× scanning electron microscope image. In addition to magnification, the neural lens simultaneously identifies the types of objects present, and hence can label, colour-enhance and remove specific types of objects in the magnified image. The neural lens was used for the imaging of Iva xanthiifolia and Galanthus pollen grains, showing the potential for low cost, non-destructive, high-resolution microscopy with automatic image processing.
Tasks Super-Resolution
Published 2019-07-17
URL https://iopscience.iop.org/article/10.1088/2399-6528/ab267d
PDF https://iopscience.iop.org/article/10.1088/2399-6528/ab267d/pdf
PWC https://paperswithcode.com/paper/a-neural-lens-for-super-resolution-biological
Repo
Framework

An Intelligent Testing Strategy for Vocabulary Assessment of Chinese Second Language Learners

Title An Intelligent Testing Strategy for Vocabulary Assessment of Chinese Second Language Learners
Authors Wei Zhou, Renfen Hu, Feipeng Sun, Ronghuai Huang
Abstract Vocabulary is one of the most important parts of language competence. Testing of vocabulary knowledge is central to research on reading and language. However, it usually costs a large amount of time and human labor to build an item bank and to test large numbers of students. In this paper, we propose a novel testing strategy by combining automatic item generation (AIG) and computerized adaptive testing (CAT) in vocabulary assessment for Chinese L2 learners. Firstly, we generate three types of vocabulary questions by modeling both the vocabulary knowledge and learners' writing error data. After evaluation and calibration, we construct a balanced item pool with automatically generated items, and implement a three-parameter computerized adaptive test. We conduct manual item evaluation and online student tests in the experiments. The results show that the combination of AIG and CAT can construct test items efficiently and reduce test cost significantly. Also, the test result of CAT can provide valuable feedback to AIG algorithms.
Tasks Calibration
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-4403/
PDF https://www.aclweb.org/anthology/W19-4403
PWC https://paperswithcode.com/paper/an-intelligent-testing-strategy-for
Repo
Framework
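The "three-parameter" test refers to the standard 3PL item response model, and adaptivity means picking the item that is most informative at the learner's current ability estimate. A sketch of both pieces (a generic 3PL/CAT illustration, not the paper's implementation):

```python
import math

def p_correct(theta, a, b, c):
    """Three-parameter logistic (3PL) IRT model: discrimination a,
    difficulty b, guessing parameter c."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def fisher_information(theta, a, b, c):
    """Item information at ability theta; the adaptive test selects the
    item that is most informative about the current ability estimate."""
    p = p_correct(theta, a, b, c)
    q = 1.0 - p
    return (a ** 2) * (q / p) * ((p - c) / (1.0 - c)) ** 2

def next_item(theta, pool):
    """pool: list of (item_id, a, b, c); return the most informative
    item id for the current ability estimate."""
    return max(pool, key=lambda it: fisher_information(theta, *it[1:]))[0]
```

This is why CAT reduces test cost: each administered item is chosen to shrink the ability estimate's uncertainty as fast as possible, so fewer items are needed per student.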

N-Ary Quantization for CNN Model Compression and Inference Acceleration

Title N-Ary Quantization for CNN Model Compression and Inference Acceleration
Authors Günther Schindler, Wolfgang Roth, Franz Pernkopf, Holger Fröning
Abstract The tremendous memory and computational complexity of Convolutional Neural Networks (CNNs) prevents the inference deployment on resource-constrained systems. As a result, recent research focused on CNN optimization techniques, in particular quantization, which allows weights and activations of layers to be represented with just a few bits while achieving impressive prediction performance. However, aggressive quantization techniques still fail to achieve full-precision prediction performance on state-of-the-art CNN architectures on large-scale classification tasks. In this work we propose a method for weight and activation quantization that is scalable in terms of quantization levels (n-ary representations) and easy to compute while maintaining the performance close to full-precision CNNs. Our weight quantization scheme is based on trainable scaling factors and a nested-means clustering strategy which is robust to weight updates and therefore exhibits good convergence properties. The flexibility of nested-means clustering enables exploration of various n-ary weight representations with the potential of high parameter compression. For activations, we propose a linear quantization strategy that takes the statistical properties of batch normalization into account. We demonstrate the effectiveness of our approach using state-of-the-art models on ImageNet.
Tasks Model Compression, Quantization
Published 2019-05-01
URL https://openreview.net/forum?id=HylDpoActX
PDF https://openreview.net/pdf?id=HylDpoActX
PWC https://paperswithcode.com/paper/n-ary-quantization-for-cnn-model-compression
Repo
Framework
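The nested-means clustering the abstract describes can be sketched as a recursive split of the weight distribution at its mean; the abstract gives only the high-level idea, so this sketch omits the trainable per-layer scaling factors and is an illustrative reading, not the paper's algorithm.

```python
import numpy as np

def nested_means_levels(weights, depth=2):
    """Recursively split the weight distribution at its mean, producing
    up to 2**depth clusters; each cluster is represented by its mean,
    yielding the n-ary quantization levels."""
    segments = [np.asarray(weights, dtype=float)]
    for _ in range(depth):
        nxt = []
        for seg in segments:
            m = seg.mean()
            lo, hi = seg[seg < m], seg[seg >= m]
            nxt.extend(s for s in (lo, hi) if s.size)
        segments = nxt
    return sorted(s.mean() for s in segments)

def quantize(weights, levels):
    """Map each weight to its nearest quantization level."""
    w = np.asarray(weights, dtype=float)
    lv = np.asarray(levels)
    idx = np.abs(w[:, None] - lv[None, :]).argmin(axis=1)
    return lv[idx]
```

Because the split points track the data's means, the levels follow the weight distribution as it shifts during training, which is the robustness-to-weight-updates property the abstract highlights.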

Wetin dey with these comments? Modeling Sociolinguistic Factors Affecting Code-switching Behavior in Nigerian Online Discussions

Title Wetin dey with these comments? Modeling Sociolinguistic Factors Affecting Code-switching Behavior in Nigerian Online Discussions
Authors Innocent Ndubuisi-Obi, Sayan Ghosh, David Jurgens
Abstract Multilingual individuals code switch between languages as a part of a complex communication process. However, most computational studies have examined only one or a handful of contextual factors predictive of switching. Here, we examine Naija-English code switching in a rich contextual environment to understand the social and topical factors eliciting a switch. We introduce a new corpus of 330K articles and accompanying 389K comments labeled for code switching behavior. In modeling whether a comment will switch, we show that topic-driven variation, tribal affiliation, emotional valence, and audience design all play complementary roles in behavior.
Tasks
Published 2019-07-01
URL https://www.aclweb.org/anthology/P19-1625/
PDF https://www.aclweb.org/anthology/P19-1625
PWC https://paperswithcode.com/paper/wetin-dey-with-these-comments-modeling
Repo
Framework

Proceedings of the Tenth Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis

Title Proceedings of the Tenth Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis
Authors
Abstract
Tasks
Published 2019-06-01
URL https://www.aclweb.org/anthology/W19-1300/
PDF https://www.aclweb.org/anthology/W19-1300
PWC https://paperswithcode.com/paper/proceedings-of-the-tenth-workshop-on-2
Repo
Framework

Fine-Grained Visual Classification with Batch Confusion Norm

Title Fine-Grained Visual Classification with Batch Confusion Norm
Authors Yen-Chi Hsu, Cheng-Yao Hong, Ding-Jie Chen, Ming-Sui Lee, Davi Geiger, Tyng-Luh Liu
Abstract We introduce a regularization concept based on the proposed Batch Confusion Norm (BCN) to address Fine-Grained Visual Classification (FGVC). The FGVC problem is notably characterized by its two intriguing properties, significant inter-class similarity and intra-class variations, which make learning an effective FGVC classifier a challenging task. Inspired by the use of pairwise confusion energy as a regularization mechanism, we develop the BCN technique to improve FGVC learning by imposing class prediction confusion on each training batch, and consequently alleviate the possible overfitting that comes with exploiting fine-detail image features. In addition, our method is implemented with an attention gated CNN model, boosted by the incorporation of Atrous Spatial Pyramid Pooling (ASPP) to extract discriminative features and proper attentions. To demonstrate the usefulness of our method, we report state-of-the-art results on several benchmark FGVC datasets, along with comprehensive ablation comparisons.
Tasks Fine-Grained Image Classification
Published 2019-10-28
URL https://arxiv.org/abs/1910.12423
PDF https://arxiv.org/abs/1910.12423
PWC https://paperswithcode.com/paper/fine-grained-visual-classification-with-batch
Repo
Framework
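The abstract names pairwise confusion energy as the inspiration but does not spell out BCN itself, so the sketch below shows only the pairwise confusion term: a penalty on the spread between predicted class distributions within a batch, which discourages overconfident fine-grained decisions.

```python
import numpy as np

def pairwise_confusion_energy(probs):
    """Mean squared Euclidean distance between the predicted class
    distributions of all sample pairs in a batch. Adding a weighted
    version of this term to the training loss pulls predictions on
    different samples toward each other, softening overconfident
    decisions on visually similar fine-grained classes."""
    p = np.asarray(probs, dtype=float)
    n = len(p)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            total += float(np.sum((p[i] - p[j]) ** 2))
            pairs += 1
    return total / pairs if pairs else 0.0
```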

Bend but Don’t Break? Multi-Challenge Stress Test for QA Models

Title Bend but Don’t Break? Multi-Challenge Stress Test for QA Models
Authors Hemant Pugaliya, James Route, Kaixin Ma, Yixuan Geng, Eric Nyberg
Abstract The field of question answering (QA) has seen rapid growth in new tasks and modeling approaches in recent years. Large scale datasets and focus on challenging linguistic phenomena have driven development in neural models, some of which have achieved parity with human performance in limited cases. However, an examination of state-of-the-art model output reveals that a gap remains in reasoning ability compared to a human, and performance tends to degrade when models are exposed to less-constrained tasks. We are interested in more clearly defining the strengths and limitations of leading models across diverse QA challenges, intending to help future researchers with identifying pathways to generalizable performance. We conduct extensive qualitative and quantitative analyses on the results of four models across four datasets and relate common errors to model capabilities. We also illustrate limitations in the datasets we examine and discuss a way forward for achieving generalizable models and datasets that broadly test QA capabilities.
Tasks Question Answering
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-5818/
PDF https://www.aclweb.org/anthology/D19-5818
PWC https://paperswithcode.com/paper/bend-but-dont-break-multi-challenge-stress
Repo
Framework

Multilingual and Cross-Lingual Graded Lexical Entailment

Title Multilingual and Cross-Lingual Graded Lexical Entailment
Authors Ivan Vulić, Simone Paolo Ponzetto, Goran Glavaš
Abstract Grounded in cognitive linguistics, graded lexical entailment (GR-LE) is concerned with fine-grained assertions regarding the directional hierarchical relationships between concepts on a continuous scale. In this paper, we present the first work on cross-lingual generalisation of the GR-LE relation. Starting from HyperLex, the only available GR-LE dataset in English, we construct new monolingual GR-LE datasets for three other languages, and combine those to create a set of six cross-lingual GR-LE datasets termed CL-HYPERLEX. We next present a novel method dubbed CLEAR (Cross-Lingual Lexical Entailment Attract-Repel) for effectively capturing graded (and binary) LE, both monolingually in different languages as well as across languages (i.e., on CL-HYPERLEX). Coupled with a bilingual dictionary, CLEAR leverages taxonomic LE knowledge in a resource-rich language (e.g., English) and propagates it to other languages. Supported by cross-lingual LE transfer, CLEAR sets competitive baseline performance on three new monolingual GR-LE datasets and six cross-lingual GR-LE datasets. In addition, we show that CLEAR outperforms the current state-of-the-art on binary cross-lingual LE detection by a wide margin for diverse language pairs.
Tasks
Published 2019-07-01
URL https://www.aclweb.org/anthology/P19-1490/
PDF https://www.aclweb.org/anthology/P19-1490
PWC https://paperswithcode.com/paper/multilingual-and-cross-lingual-graded-lexical
Repo
Framework

Integration of Dubbing Constraints into Machine Translation

Title Integration of Dubbing Constraints into Machine Translation
Authors Ashutosh Saboo, Timo Baumann
Abstract Translation systems aim to perform a meaning-preserving conversion of linguistic material (typically text but also speech) from a source to a target language (and, to a lesser degree, the corresponding socio-cultural contexts). Dubbing, i.e., the lip-synchronous translation and revoicing of speech, adds constraints on the close matching of phonetic and resulting visemic synchrony characteristics of source and target material. There is an inherent conflict between a translation's meaning preservation and 'dubbability', and the resulting trade-off can be controlled by weighing the synchrony constraints. We introduce our work, which to the best of our knowledge is the first of its kind, on integrating synchrony constraints into the machine translation paradigm. We present first results for the integration of synchrony constraints into encoder decoder-based neural machine translation and show that considerably more 'dubbable' translations can be achieved with only a small impact on BLEU score, and dubbability improves more steeply than BLEU degrades.
Tasks Machine Translation
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-5210/
PDF https://www.aclweb.org/anthology/W19-5210
PWC https://paperswithcode.com/paper/integration-of-dubbing-constraints-into
Repo
Framework
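The trade-off the abstract describes can be made concrete as weighted rescoring: combine the MT model score with a synchrony penalty and tune the weight. The sketch below uses absolute syllable-count mismatch as a crude stand-in for the paper's phonetic/visemic synchrony constraints; the penalty form is an assumption for illustration.

```python
def dubbing_score(mt_logprob, src_syllables, hyp_syllables, sync_weight=1.0):
    """Rescore a translation hypothesis by combining the MT model score
    with a synchrony penalty (here: absolute syllable-count mismatch
    with the source line). Raising sync_weight trades meaning
    preservation for dubbability."""
    return mt_logprob - sync_weight * abs(src_syllables - hyp_syllables)

def best_hypothesis(src_syllables, hypotheses, sync_weight=1.0):
    """hypotheses: list of (text, mt_logprob, syllable_count); return
    the text of the highest-scoring hypothesis."""
    return max(
        hypotheses,
        key=lambda h: dubbing_score(h[1], src_syllables, h[2], sync_weight),
    )[0]
```

With `sync_weight=0` the scorer reduces to ordinary MT reranking; increasing it steers the choice toward length-matched (more dubbable) hypotheses, mirroring the controllable trade-off in the abstract.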