January 24, 2020

2136 words 11 mins read

Paper Group NANR 104

Learned optimizers that outperform on wall-clock and validation loss

Title Learned optimizers that outperform on wall-clock and validation loss
Authors Luke Metz, Niru Maheswaranathan, Jeremy Nixon, Daniel Freeman, Jascha Sohl-Dickstein
Abstract Deep learning has shown that learned functions can dramatically outperform hand-designed functions on perceptual tasks. Analogously, this suggests that learned update functions may similarly outperform current hand-designed optimizers, especially for specific tasks. However, learned optimizers are notoriously difficult to train and have yet to demonstrate wall-clock speedups over hand-designed optimizers, and thus are rarely used in practice. Typically, learned optimizers are trained by truncated backpropagation through an unrolled optimization process. The resulting gradients are either strongly biased (for short truncations) or have exploding norm (for long truncations). In this work we propose a training scheme which overcomes both of these difficulties, by dynamically weighting two unbiased gradient estimators for a variational loss on optimizer performance. This allows us to train neural networks to perform optimization faster than well tuned first-order methods. Moreover, by training the optimizer against validation loss, as opposed to training loss, we are able to use it to train models which generalize better than those trained by first order methods. We demonstrate these results on problems where our learned optimizer trains convolutional networks in a fifth of the wall-clock time compared to tuned first-order methods, and with an improvement
Tasks
Published 2019-05-01
URL https://openreview.net/forum?id=HJxwAo09KQ
PDF https://openreview.net/pdf?id=HJxwAo09KQ
PWC https://paperswithcode.com/paper/learned-optimizers-that-outperform-on-wall
Repo
Framework
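The abstract above describes meta-training a small neural network to act as an optimizer. As a rough illustration only (not the authors' architecture or meta-training scheme), a learned optimizer can be a tiny MLP that maps per-parameter features such as the gradient and a momentum accumulator to an update; here its weights are random placeholders, whereas in the paper they would be meta-trained by backpropagating through an unrolled optimization:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned-optimizer weights; in the paper these would be
# meta-trained, here they are just random placeholders.
W1 = rng.normal(scale=0.1, size=(2, 8))
W2 = rng.normal(scale=0.1, size=(8, 1))

def learned_update(grad, momentum):
    """Map per-parameter features (gradient, momentum) to an update."""
    feats = np.stack([grad, momentum], axis=-1)   # (n_params, 2)
    hidden = np.tanh(feats @ W1)                  # (n_params, 8)
    return (hidden @ W2).squeeze(-1)              # (n_params,)

# Inner loop: apply the (untrained) learned optimizer to a quadratic.
theta = np.ones(4)
m = np.zeros(4)
for _ in range(10):
    grad = 2.0 * theta            # gradient of ||theta||^2
    m = 0.9 * m + 0.1 * grad
    theta = theta - 0.1 * learned_update(grad, m)
```

The paper's contribution is in how such an update network is trained (dynamically weighting two unbiased gradient estimators of a variational loss), not in the inner-loop mechanics sketched here.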

Webinterpret Submission to the WMT2019 Shared Task on Parallel Corpus Filtering

Title Webinterpret Submission to the WMT2019 Shared Task on Parallel Corpus Filtering
Authors Jesús González-Rubio
Abstract This document describes the participation of Webinterpret in the shared task on parallel corpus filtering at the Fourth Conference on Machine Translation (WMT 2019). Here, we describe the main characteristics of our approach and discuss the results obtained on the data sets published for the shared task.
Tasks Machine Translation
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-5437/
PDF https://www.aclweb.org/anthology/W19-5437
PWC https://paperswithcode.com/paper/webinterpret-submission-to-the-wmt2019-shared
Repo
Framework
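The abstract does not detail the submitted system, so the following is only a sketch of generic baseline heuristics commonly used in parallel-corpus filtering tasks like this one (length-ratio and emptiness checks), not Webinterpret's approach:

```python
def keep_pair(src: str, tgt: str, max_ratio: float = 2.0) -> bool:
    """Generic parallel-corpus filtering heuristics: drop empty sides
    and pairs whose token-length ratio is implausibly large. These are
    common baselines, not the authors' submitted system."""
    if not src.strip() or not tgt.strip():
        return False
    ls, lt = len(src.split()), len(tgt.split())
    ratio = max(ls, lt) / max(1, min(ls, lt))
    return ratio <= max_ratio

print(keep_pair("the cat sat", "le chat était assis"))              # True
print(keep_pair("hi", "this is a much longer unrelated sentence"))  # False
```

Real submissions typically combine such cheap filters with learned scoring (e.g. cross-lingual sentence similarity) before selecting a corpus subset under the task's size budget.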

Pay Attention when you Pay the Bills. A Multilingual Corpus with Dependency-based and Semantic Annotation of Collocations.

Title Pay Attention when you Pay the Bills. A Multilingual Corpus with Dependency-based and Semantic Annotation of Collocations.
Authors Marcos Garcia, Marcos García Salido, Susana Sotelo, Estela Mosqueira, Margarita Alonso-Ramos
Abstract This paper presents a new multilingual corpus with semantic annotation of collocations in English, Portuguese, and Spanish. The whole resource contains 155k tokens and 1,526 collocations labeled in context. The annotated examples belong to three syntactic relations (adjective-noun, verb-object, and nominal compounds), and represent 58 lexical functions in the Meaning-Text Theory (e.g., Oper, Magn, Bon, etc.). Each collocation was annotated by three linguists and the final resource was revised by a team of experts. The resulting corpus can serve as a basis to evaluate different approaches for collocation identification, which in turn can be useful for different NLP tasks such as natural language understanding or natural language generation.
Tasks Text Generation
Published 2019-07-01
URL https://www.aclweb.org/anthology/P19-1392/
PDF https://www.aclweb.org/anthology/P19-1392
PWC https://paperswithcode.com/paper/pay-attention-when-you-pay-the-bills-a
Repo
Framework

Large Dataset and Language Model Fun-Tuning for Humor Recognition

Title Large Dataset and Language Model Fun-Tuning for Humor Recognition
Authors Vladislav Blinov, Valeria Bolotova-Baranova, Pavel Braslavski
Abstract The task of humor recognition has attracted a lot of attention recently due to the urge to process large amounts of user-generated texts and the rise of conversational agents. We collected a dataset of jokes and funny dialogues in Russian from various online resources and complemented them carefully with unfunny texts with similar lexical properties. The dataset comprises more than 300,000 short texts, which is significantly larger than any previous humor-related corpus. Manual annotation of 2,000 items proved the reliability of the corpus construction approach. Further, we applied language model fine-tuning for text classification and obtained an F1 score of 0.91 on a test set, which constitutes a considerable gain over baseline methods. The dataset is freely available to the research community.
Tasks Language Modelling, Text Classification
Published 2019-07-01
URL https://www.aclweb.org/anthology/P19-1394/
PDF https://www.aclweb.org/anthology/P19-1394
PWC https://paperswithcode.com/paper/large-dataset-and-language-model-fun-tuning
Repo
Framework

A Universal System for Automatic Text-to-Phonetics Conversion

Title A Universal System for Automatic Text-to-Phonetics Conversion
Authors Chen Gafni
Abstract This paper describes an automatic text-to-phonetics conversion system. The system was constructed to primarily serve as a research tool. It is implemented in a general-purpose linguistic software, which allows it to be incorporated in a multifaceted linguistic research in essentially any language. The system currently relies on two mechanisms to generate phonetic transcriptions from texts: (i) importing ready-made phonetic word forms from external dictionaries, and (ii) automatic generation of phonetic word forms based on a set of deterministic linguistic rules. The current paper describes the proposed system and its potential application to linguistic research.
Tasks
Published 2019-09-01
URL https://www.aclweb.org/anthology/R19-1042/
PDF https://www.aclweb.org/anthology/R19-1042
PWC https://paperswithcode.com/paper/a-universal-system-for-automatic-text-to
Repo
Framework
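The abstract names two conversion mechanisms: dictionary lookup of ready-made phonetic forms, and deterministic letter-to-phone rules. A toy sketch of that two-stage design follows; the lexicon entries and rewrite rules here are invented examples, not the system's actual resources:

```python
# Mechanism (i): hypothetical imported dictionary of phonetic forms.
LEXICON = {"cat": "kæt", "the": "ðə"}

# Mechanism (ii): ordered deterministic rewrite rules, multi-letter
# graphemes first so they match before single letters.
RULES = [("sh", "ʃ"), ("ch", "tʃ"), ("a", "æ"), ("e", "ɛ"),
         ("i", "ɪ"), ("o", "ɒ"), ("u", "ʌ")]

def to_phonetics(word: str) -> str:
    word = word.lower()
    if word in LEXICON:            # mechanism (i): dictionary lookup
        return LEXICON[word]
    out = word
    for graph, phone in RULES:     # mechanism (ii): rule fallback
        out = out.replace(graph, phone)
    return out

print(to_phonetics("cat"))   # dictionary hit → "kæt"
print(to_phonetics("ship"))  # rule-based fallback → "ʃɪp"
```

Ordering the rules longest-match-first is what keeps "sh" from being decomposed into separate "s" and "h" conversions.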

Deep Learning for Seeing Through Window With Raindrops

Title Deep Learning for Seeing Through Window With Raindrops
Authors Yuhui Quan, Shijie Deng, Yixin Chen, Hui Ji
Abstract When taking pictures through a glass window on a rainy day, the images are compromised and corrupted by the raindrops adhered to the glass surface. It is a challenging problem to remove the effect of raindrops from an image. The key task is how to accurately and robustly identify the raindrop regions in an image. This paper develops a convolutional neural network (CNN) for removing the effect of raindrops from an image. In the proposed CNN, we introduce a double attention mechanism that concurrently guides the CNN using shape-driven attention and channel re-calibration. The shape-driven attention exploits physical shape priors of raindrops, i.e. convexness and contour closedness, to accurately locate raindrops, and the channel re-calibration improves the robustness when processing raindrops with varying appearances. The experimental results show that the proposed CNN outperforms the state-of-the-art approaches in terms of both quantitative metrics and visual quality.
Tasks Calibration
Published 2019-10-01
URL http://openaccess.thecvf.com/content_ICCV_2019/html/Quan_Deep_Learning_for_Seeing_Through_Window_With_Raindrops_ICCV_2019_paper.html
PDF http://openaccess.thecvf.com/content_ICCV_2019/papers/Quan_Deep_Learning_for_Seeing_Through_Window_With_Raindrops_ICCV_2019_paper.pdf
PWC https://paperswithcode.com/paper/deep-learning-for-seeing-through-window-with
Repo
Framework
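The channel re-calibration mentioned in the abstract is commonly realized as a squeeze-and-excitation-style gating branch; the sketch below shows that generic form in NumPy (the paper's exact branch design and weights may differ, and the weights here are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W = 4, 8, 8

# Hypothetical learned weights of the re-calibration bottleneck.
W_down = rng.normal(scale=0.1, size=(C, C // 2))
W_up = rng.normal(scale=0.1, size=(C // 2, C))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_recalibrate(feat):
    """Squeeze-and-excitation-style channel re-calibration:
    global-average-pool each channel, pass through a small bottleneck,
    and rescale the channels by the resulting 0..1 gates."""
    squeezed = feat.mean(axis=(1, 2))                          # (C,)
    gates = sigmoid(np.maximum(squeezed @ W_down, 0) @ W_up)   # (C,)
    return feat * gates[:, None, None]

feat = rng.normal(size=(C, H, W))
out = channel_recalibrate(feat)
```

Because the gates depend on global channel statistics, the network can down-weight channels that respond to raindrop appearances it finds unreliable, which matches the robustness role the abstract assigns to re-calibration.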

Interpretable and Fine-Grained Visual Explanations for Convolutional Neural Networks

Title Interpretable and Fine-Grained Visual Explanations for Convolutional Neural Networks
Authors Jorg Wagner, Jan Mathias Kohler, Tobias Gindele, Leon Hetzel, Jakob Thaddaus Wiedemer, Sven Behnke
Abstract To verify and validate networks, it is essential to gain insight into their decisions, limitations as well as possible shortcomings of training data. In this work, we propose a post-hoc, optimization based visual explanation method, which highlights the evidence in the input image for a specific prediction. Our approach is based on a novel technique to defend against adversarial evidence (i.e. faulty evidence due to artefacts) by filtering gradients during optimization. The defense does not depend on human-tuned parameters. It enables explanations which are both fine-grained and preserve the characteristics of images, such as edges and colors. The explanations are interpretable, suited for visualizing detailed evidence and can be tested as they are valid model inputs. We qualitatively and quantitatively evaluate our approach on a multitude of models and datasets.
Tasks
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Wagner_Interpretable_and_Fine-Grained_Visual_Explanations_for_Convolutional_Neural_Networks_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Wagner_Interpretable_and_Fine-Grained_Visual_Explanations_for_Convolutional_Neural_Networks_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/interpretable-and-fine-grained-visual
Repo
Framework

Knows When it Doesn’t Know: Deep Abstaining Classifiers

Title Knows When it Doesn’t Know: Deep Abstaining Classifiers
Authors Sunil Thulasidasan, Tanmoy Bhattacharya, Jeffrey Bilmes, Gopinath Chennupati, Jamal Mohd-Yusof
Abstract We introduce the deep abstaining classifier – a deep neural network trained with a novel loss function that provides an abstention option during training. This allows the DNN to abstain on confusing or difficult-to-learn examples while improving performance on the non-abstained samples. We show that such deep abstaining classifiers can: (i) learn representations for structured noise – where noisy training labels or confusing examples are correlated with underlying features – and then learn to abstain based on such features; (ii) enable robust learning in the presence of arbitrary or unstructured noise by identifying noisy samples; and (iii) be used as an effective out-of-category detector that learns to reliably abstain when presented with samples from unknown classes. We provide analytical results on loss function behavior that enable automatic tuning of accuracy and coverage, and demonstrate the utility of the deep abstaining classifier using multiple image benchmarks. Results indicate significant improvement in learning in the presence of label noise.
Tasks
Published 2019-05-01
URL https://openreview.net/forum?id=rJxF73R9tX
PDF https://openreview.net/pdf?id=rJxF73R9tX
PWC https://paperswithcode.com/paper/knows-when-it-doesnt-know-deep-abstaining
Repo
Framework
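The key ingredient described above is a loss with an extra abstention output. The sketch below is an illustration of an abstention-augmented cross-entropy in that spirit: cross-entropy over the real classes, renormalized and down-weighted by the abstention probability, plus a penalty so abstention is never free. Treat the exact form and the fixed `alpha` as assumptions of this sketch (the paper tunes such a coefficient automatically), not as the reference implementation:

```python
import numpy as np

def dac_loss(probs, label, alpha=1.0):
    """Abstention-augmented cross-entropy (illustrative sketch).

    probs: softmax outputs over k real classes plus one abstention
           class at index -1; label: true class index in [0, k).
    """
    p_abstain = probs[-1]
    # Cross-entropy over real classes, renormalized by (1 - p_abstain).
    ce = -np.log(probs[label] / (1.0 - p_abstain))
    # Down-weight the CE term when abstaining, but pay a penalty that
    # grows as the abstention probability approaches 1.
    return (1.0 - p_abstain) * ce + alpha * np.log(1.0 / (1.0 - p_abstain))

# A confident, correct prediction incurs less loss than a confident,
# wrong one, and total abstention is discouraged by the alpha term.
print(dac_loss(np.array([0.9, 0.05, 0.05]), label=0))
print(dac_loss(np.array([0.05, 0.9, 0.05]), label=0))
```

Raising `alpha` makes abstention more expensive and so trades coverage for accuracy, which is the knob the paper's analytical results characterize.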

Calibration of Axial Fisheye Cameras Through Generic Virtual Central Models

Title Calibration of Axial Fisheye Cameras Through Generic Virtual Central Models
Authors Pierre-Andre Brousseau, Sebastien Roy
Abstract Fisheye cameras are notoriously hard to calibrate using traditional plane-based methods. This paper proposes a new calibration method for large field of view cameras. Similarly to planar calibration, it relies on multiple images of a planar calibration grid with dense correspondences, typically obtained using structured light. By relying on the grids themselves instead of the distorted image plane, we can build a rectilinear Generic Virtual Central (GVC) camera. Instead of relying on a single GVC camera, our method proposes a selection of multiple GVC cameras which can cover any field of view and be trivially aligned to provide a very accurate generic central model. We demonstrate that this approach can directly model axial cameras, assuming the distortion center is located on the camera axis. Experimental validation is provided on both synthetic and real fisheye cameras featuring up to a 280° field of view. To our knowledge, this is one of the only practical methods to calibrate axial cameras.
Tasks Calibration
Published 2019-10-01
URL http://openaccess.thecvf.com/content_ICCV_2019/html/Brousseau_Calibration_of_Axial_Fisheye_Cameras_Through_Generic_Virtual_Central_Models_ICCV_2019_paper.html
PDF http://openaccess.thecvf.com/content_ICCV_2019/papers/Brousseau_Calibration_of_Axial_Fisheye_Cameras_Through_Generic_Virtual_Central_Models_ICCV_2019_paper.pdf
PWC https://paperswithcode.com/paper/calibration-of-axial-fisheye-cameras-through
Repo
Framework

Multiple Admissibility: Judging Grammaticality using Unlabeled Data in Language Learning

Title Multiple Admissibility: Judging Grammaticality using Unlabeled Data in Language Learning
Authors Anisia Katinskaia, Sardana Ivanova
Abstract We present our work on the problem of Multiple Admissibility (MA) in language learning. Multiple Admissibility occurs in many languages when more than one grammatical form of a word fits syntactically and semantically in a given context. In second language (L2) education - in particular, in intelligent tutoring systems/computer-aided language learning (ITS/CALL) systems, which generate exercises automatically - this implies that multiple alternative answers are possible. We treat the problem as a grammaticality judgement task. We train a neural network with an objective to label sentences as grammatical or ungrammatical, using a "simulated learner corpus": a dataset with correct text, and with artificial errors generated automatically. While MA occurs commonly in many languages, this paper focuses on learning Russian. We present a detailed classification of the types of constructions in Russian, in which MA is possible, and evaluate the model using a test set built from answers provided by the users of a running language learning system.
Tasks
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-3702/
PDF https://www.aclweb.org/anthology/W19-3702
PWC https://paperswithcode.com/paper/multiple-admissibility-judging-grammaticality
Repo
Framework
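The "simulated learner corpus" described above pairs correct sentences (label 1) with automatically corrupted variants (label 0). A toy sketch of that labeling scheme follows; it uses a tiny English inflection table purely for readability, whereas the paper generates Russian learner errors with language-specific morphology:

```python
import random

# Toy inflection alternations standing in for real learner errors.
ALTERNATIONS = {"goes": "go", "go": "goes", "is": "are", "are": "is"}

def make_simulated_pair(sentence, rng):
    """Return (text, label) pairs: the correct sentence labeled 1 and,
    when a swappable token exists, a corrupted variant labeled 0."""
    tokens = sentence.split()
    swappable = [i for i, t in enumerate(tokens) if t in ALTERNATIONS]
    pairs = [(sentence, 1)]
    if swappable:
        i = rng.choice(swappable)
        corrupted = tokens[:i] + [ALTERNATIONS[tokens[i]]] + tokens[i + 1:]
        pairs.append((" ".join(corrupted), 0))
    return pairs

rng = random.Random(0)
print(make_simulated_pair("she goes home", rng))
# → [('she goes home', 1), ('she go home', 0)]
```

The Multiple Admissibility problem arises exactly here: a mechanically "corrupted" form may in fact be admissible in context, which is why the paper evaluates against real user answers rather than trusting the synthetic labels alone.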

Surface Normals and Shape From Water

Title Surface Normals and Shape From Water
Authors Satoshi Murai, Meng-Yu Jennifer Kuo, Ryo Kawahara, Shohei Nobuhara, Ko Nishino
Abstract In this paper, we introduce a novel method for reconstructing surface normals and depth of dynamic objects in water. Past shape recovery methods have leveraged various visual cues for estimating shape (e.g., depth) or surface normals. Methods that estimate both compute one from the other. We show that these two geometric surface properties can be simultaneously recovered for each pixel when the object is observed underwater. Our key idea is to leverage multi-wavelength near-infrared light absorption along different underwater light paths in conjunction with surface shading. We derive a principled theory for this surface normals and shape from water method and a practical calibration method for determining its imaging parameters values. By construction, the method can be implemented as a one-shot imaging system. We prototype both an off-line and a video-rate imaging system and demonstrate the effectiveness of the method on a number of real-world static and dynamic objects. The results show that the method can recover intricate surface features that are otherwise inaccessible.
Tasks Calibration
Published 2019-10-01
URL http://openaccess.thecvf.com/content_ICCV_2019/html/Murai_Surface_Normals_and_Shape_From_Water_ICCV_2019_paper.html
PDF http://openaccess.thecvf.com/content_ICCV_2019/papers/Murai_Surface_Normals_and_Shape_From_Water_ICCV_2019_paper.pdf
PWC https://paperswithcode.com/paper/surface-normals-and-shape-from-water
Repo
Framework

Predicting learner knowledge of individual words using machine learning

Title Predicting learner knowledge of individual words using machine learning
Authors Drilon Avdiu, Vanessa Bui, Klára Ptačinová Klimčíková
Abstract
Tasks
Published 2019-09-01
URL https://www.aclweb.org/anthology/W19-6301/
PDF https://www.aclweb.org/anthology/W19-6301
PWC https://paperswithcode.com/paper/predicting-learner-knowledge-of-individual
Repo
Framework

Understanding Vocabulary Growth Through An Adaptive Language Learning System

Title Understanding Vocabulary Growth Through An Adaptive Language Learning System
Authors Elma Kerz, Andreas Burgdorf, Daniel Wiechmann, Stefan Meeger, Yu Qiao, Christian Kohlschein, Tobias Meisen
Abstract
Tasks
Published 2019-09-01
URL https://www.aclweb.org/anthology/W19-6307/
PDF https://www.aclweb.org/anthology/W19-6307
PWC https://paperswithcode.com/paper/understanding-vocabulary-growth-through-an
Repo
Framework

Towards Unlocking the Narrative of the United States Income Tax Forms

Title Towards Unlocking the Narrative of the United States Income Tax Forms
Authors Esme Manandise
Abstract
Tasks
Published 2019-09-01
URL https://www.aclweb.org/anthology/W19-6405/
PDF https://www.aclweb.org/anthology/W19-6405
PWC https://paperswithcode.com/paper/towards-unlocking-the-narrative-of-the-united
Repo
Framework

FinTOC-2019 Shared Task: Finding Title in Text Blocks

Title FinTOC-2019 Shared Task: Finding Title in Text Blocks
Authors Hanna Abi Akl, Anubhav Gupta, Dominique Mariko
Abstract
Tasks
Published 2019-09-01
URL https://www.aclweb.org/anthology/W19-6408/
PDF https://www.aclweb.org/anthology/W19-6408
PWC https://paperswithcode.com/paper/fintoc-2019-shared-task-finding-title-in-text
Repo
Framework