Paper Group NANR 193
Burst Image Deblurring Using Permutation Invariant Convolutional Neural Networks
Title | Burst Image Deblurring Using Permutation Invariant Convolutional Neural Networks |
Authors | Miika Aittala, Frédo Durand |
Abstract | We propose a neural approach for fusing an arbitrary-length burst of photographs suffering from severe camera shake and noise into a sharp and noise-free image. Our novel convolutional architecture has a simultaneous view of all frames in the burst, and by construction treats them in an order-independent manner. This enables it to effectively detect and leverage subtle cues scattered across different frames, while ensuring that each frame gets full and equal consideration regardless of its position in the sequence. We train the network with richly varied synthetic data consisting of camera shake, realistic noise, and other common imaging defects. The method demonstrates consistent state-of-the-art burst image restoration performance for highly degraded sequences of real-world images, and extracts accurate detail that is not discernible from any of the individual frames in isolation. |
Tasks | Deblurring, Image Restoration |
Published | 2018-09-01 |
URL | http://openaccess.thecvf.com/content_ECCV_2018/html/Miika_Aittala_Burst_Image_Deblurring_ECCV_2018_paper.html |
PDF | http://openaccess.thecvf.com/content_ECCV_2018/papers/Miika_Aittala_Burst_Image_Deblurring_ECCV_2018_paper.pdf |
PWC | https://paperswithcode.com/paper/burst-image-deblurring-using-permutation |
Repo | |
Framework | |
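The order-independent fusion this abstract describes can be sketched as a shared per-frame encoder followed by a symmetric pooling step. This is a minimal illustration under assumptions of mine (a linear-ReLU encoder and max-pooling over the frame axis), not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared per-frame encoder: the same weights are applied to every frame.
W = rng.standard_normal((8, 8))

def encode_frame(frame):
    return np.maximum(W @ frame, 0.0)  # ReLU(W x)

def fuse_burst(frames):
    """Encode each frame with shared weights, then max-pool across the
    frame axis.  Max is a symmetric operation, so the result is invariant
    to frame order and accepts a burst of any length."""
    feats = np.stack([encode_frame(f) for f in frames])
    return feats.max(axis=0)

burst = [rng.standard_normal(8) for _ in range(5)]
fused = fuse_burst(burst)
fused_shuffled = fuse_burst(burst[::-1])
print(np.allclose(fused, fused_shuffled))  # → True
```

Because pooling is symmetric, every frame contributes on equal footing regardless of its position, which is the property the abstract emphasizes.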
Automated Paraphrase Lattice Creation for HyTER Machine Translation Evaluation
Title | Automated Paraphrase Lattice Creation for HyTER Machine Translation Evaluation |
Authors | Marianna Apidianaki, Guillaume Wisniewski, Anne Cocos, Chris Callison-Burch |
Abstract | We propose a variant of a well-known machine translation (MT) evaluation metric, HyTER (Dreyer and Marcu, 2012), which exploits reference translations enriched with meaning equivalent expressions. The original HyTER metric relied on hand-crafted paraphrase networks which restricted its applicability to new data. We test, for the first time, HyTER with automatically built paraphrase lattices. We show that although the metric obtains good results on small and carefully curated data with both manually and automatically selected substitutes, it achieves medium performance on much larger and noisier datasets, demonstrating the limits of the metric for tuning and evaluation of current MT systems. |
Tasks | Machine Translation |
Published | 2018-06-01 |
URL | https://www.aclweb.org/anthology/N18-2077/ |
PWC | https://paperswithcode.com/paper/automated-paraphrase-lattice-creation-for |
Repo | |
Framework | |
Speed Reading: Learning to Read ForBackward via Shuttle
Title | Speed Reading: Learning to Read ForBackward via Shuttle |
Authors | Tsu-Jui Fu, Wei-Yun Ma |
Abstract | We present LSTM-Shuttle, which applies human speed-reading techniques to natural language processing tasks for accurate and efficient comprehension. In contrast to previous work, LSTM-Shuttle not only shuttles forward as it reads but can also go back. Shuttling forward enables high efficiency, and going backward gives the model a chance to recover lost information, ensuring better prediction. We evaluate LSTM-Shuttle on sentiment analysis, news classification, and cloze tasks on the IMDB, Rotten Tomatoes, AG, and Children's Book Test datasets. We show that LSTM-Shuttle predicts both more accurately and more quickly. To demonstrate how LSTM-Shuttle actually behaves, we also analyze the shuttling operation and present a case study. |
Tasks | Document Classification, Document Summarization, Machine Translation, Named Entity Recognition, Part-Of-Speech Tagging, Question Answering, Reading Comprehension, Sentiment Analysis |
Published | 2018-10-01 |
URL | https://www.aclweb.org/anthology/D18-1474/ |
PWC | https://paperswithcode.com/paper/speed-reading-learning-to-read-forbackward |
Repo | |
Framework | |
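The forward/backward shuttling can be sketched as a reader whose policy emits signed jumps over token positions. The policy below is a hypothetical hand-written stand-in for the learned one, just to make the mechanism concrete:

```python
def shuttle_read(n_tokens, policy, max_steps=50):
    """Toy shuttle reader: starting at token 0, the policy inspects the
    current position and emits a step (positive = skip forward for speed,
    negative = jump back to recover skipped context).  Returns the list
    of positions actually visited."""
    pos, visited = 0, [0]
    for _ in range(max_steps):
        step = policy(pos)
        pos = min(max(pos + step, 0), n_tokens - 1)
        visited.append(pos)
        if pos == n_tokens - 1:
            break
    return visited

# A hypothetical policy: skip 3 tokens ahead, but fall back by 1 from
# position 9 to re-read earlier context.
def demo_policy(pos):
    return -1 if pos % 10 == 9 else 3

path = shuttle_read(20, demo_policy)
print(path)  # → [0, 3, 6, 9, 8, 11, 14, 17, 19]
```

The reader visits 9 of 20 positions (speed) while the backward jump at position 9 revisits skipped context (recovery), mirroring the trade-off the abstract describes.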
Textual Deconvolution Saliency (TDS) : a deep tool box for linguistic analysis
Title | Textual Deconvolution Saliency (TDS) : a deep tool box for linguistic analysis |
Authors | Laurent Vanni, Melanie Ducoffe, Carlos Aguilar, Frederic Precioso, Damon Mayaffre |
Abstract | In this paper, we propose a new strategy, called Text Deconvolution Saliency (TDS), to visualize linguistic information detected by a CNN for text classification. We extend deconvolution networks to text in order to present a new perspective on text analysis to the linguistic community. We empirically demonstrate the efficiency of Text Deconvolution Saliency on corpora in three different languages: English, French, and Latin. For every tested dataset, Text Deconvolution Saliency automatically encodes complex linguistic patterns based on co-occurrences and possibly on grammatical and syntactic analysis. |
Tasks | Text Classification |
Published | 2018-07-01 |
URL | https://www.aclweb.org/anthology/P18-1051/ |
PWC | https://paperswithcode.com/paper/textual-deconvolution-saliency-tds-a-deep |
Repo | |
Framework | |
Text Mining for History: first steps on building a large dataset
Title | Text Mining for History: first steps on building a large dataset |
Authors | Suemi Higuchi, Cláudia Freitas, Bruno Cuconato, Alexandre Rademaker |
Abstract | |
Tasks | |
Published | 2018-05-01 |
URL | https://www.aclweb.org/anthology/L18-1593/ |
PWC | https://paperswithcode.com/paper/text-mining-for-history-first-steps-on |
Repo | |
Framework | |
Effects of Gender Stereotypes on Trust and Likability in Spoken Human-Robot Interaction
Title | Effects of Gender Stereotypes on Trust and Likability in Spoken Human-Robot Interaction |
Authors | Matthias Kraus, Johannes Kraus, Martin Baumann, Wolfgang Minker |
Abstract | |
Tasks | |
Published | 2018-05-01 |
URL | https://www.aclweb.org/anthology/L18-1018/ |
PWC | https://paperswithcode.com/paper/effects-of-gender-stereotypes-on-trust-and |
Repo | |
Framework | |
Looking for Structure in Lexical and Acoustic-Prosodic Entrainment Behaviors
Title | Looking for Structure in Lexical and Acoustic-Prosodic Entrainment Behaviors |
Authors | Andreas Weise, Rivka Levitan |
Abstract | Entrainment has been shown to occur for various linguistic features individually. Motivated by cognitive theories regarding linguistic entrainment, we analyze speakers{'} overall entrainment behaviors and search for an underlying structure. We consider various measures of both acoustic-prosodic and lexical entrainment, measuring the latter with a novel application of two previously introduced methods in addition to a standard high-frequency word measure. We present a negative result of our search, finding no meaningful correlations, clusters, or principal components in various entrainment measures, and discuss practical and theoretical implications. |
Tasks | |
Published | 2018-06-01 |
URL | https://www.aclweb.org/anthology/N18-2048/ |
PWC | https://paperswithcode.com/paper/looking-for-structure-in-lexical-and-acoustic |
Repo | |
Framework | |
Interpretable Word Embedding Contextualization
Title | Interpretable Word Embedding Contextualization |
Authors | Kyoung-Rok Jang, Sung-Hyon Myaeng, Sang-Bum Kim |
Abstract | In this paper, we propose a method of calibrating a word embedding so that the semantics it conveys become more relevant to the context. Our method is novel in that its output clearly shows which senses originally present in a target word embedding become stronger or weaker. This is made possible by using sparse coding to recover the senses that comprise a word embedding. |
Tasks | Word Embeddings |
Published | 2018-11-01 |
URL | https://www.aclweb.org/anthology/W18-5442/ |
PWC | https://paperswithcode.com/paper/interpretable-word-embedding |
Repo | |
Framework | |
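The sparse-coding step the abstract alludes to can be sketched with a proximal-gradient (ISTA-style) solver over a dictionary of sense atoms. The dictionary, the L1 penalty, and the solver are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

# A hypothetical dictionary of 6 unit-norm "sense" atoms in a 10-d space.
atoms = rng.standard_normal((10, 6))
atoms /= np.linalg.norm(atoms, axis=0)

def sparse_sense_codes(vec, D, lam=0.1, iters=500, lr=0.05):
    """Recover sparse, nonnegative coefficients a with D @ a ~ vec via
    proximal gradient descent with an L1 penalty.  The nonzero entries
    indicate which sense atoms contribute to the embedding."""
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ a - vec)
        a = np.maximum(a - lr * (grad + lam), 0.0)  # prox of L1 over a >= 0
    return a

# Build a word vector from two planted senses, then recover them.
true = np.array([0.0, 0.7, 0.0, 0.0, 1.2, 0.0])
word_vec = atoms @ true
codes = sparse_sense_codes(word_vec, atoms)
active = np.flatnonzero(codes > 0.2)  # indices with non-negligible weight
print(active, np.round(codes, 2))
```

Strengthening or weakening a sense then amounts to rescaling its coefficient before re-synthesizing `atoms @ codes`, which is the calibration idea in the abstract.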
LEARNING SEMANTIC WORD RESPRESENTATIONS VIA TENSOR FACTORIZATION
Title | LEARNING SEMANTIC WORD RESPRESENTATIONS VIA TENSOR FACTORIZATION |
Authors | Eric Bailey, Charles Meyer, Shuchin Aeron |
Abstract | Many state-of-the-art word embedding techniques involve factorization of a co-occurrence-based matrix. We aim to extend this approach by studying word embedding techniques that involve factorization of co-occurrence-based tensors (N-way arrays). We present two new word embedding techniques based on tensor factorization and show that they outperform common methods on several semantic NLP tasks when given the same data. To train one of the embeddings, we present a new joint tensor factorization problem and an approach for solving it. Furthermore, we modify the performance metrics for the Outlier Detection task (Camacho-Collados & Navigli, 2016) to measure the quality of higher-order relationships that a word embedding captures. Our tensor-based methods significantly outperform existing methods at this task when using our new metric. Finally, we demonstrate that vectors in our embeddings can be composed multiplicatively to create different vector representations for each meaning of a polysemous word. We show that this property stems from the higher-order information that the vectors contain, and thus is unique to our tensor-based embeddings. |
Tasks | Outlier Detection |
Published | 2018-01-01 |
URL | https://openreview.net/forum?id=B1kIr-WRb |
PDF | https://openreview.net/pdf?id=B1kIr-WRb |
PWC | https://paperswithcode.com/paper/learning-semantic-word-respresentations-via |
Repo | |
Framework | |
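The co-occurrence-tensor factorization can be sketched with plain CP decomposition by alternating least squares (ALS) on a toy tensor with planted low-rank structure. The tensor construction, rank, and solver are illustrative; the paper's joint factorization objective is more involved:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy third-order "co-occurrence" tensor over a 5-word vocabulary,
# planted with rank-2 structure so ALS has something exact to recover.
V, R = 5, 2
G = rng.random((V, R))
T = np.einsum('ir,jr,kr->ijk', G, G, G)

def khatri_rao(X, Y):
    # Column-wise Kronecker product.
    return np.einsum('ir,jr->ijr', X, Y).reshape(-1, X.shape[1])

def cp_als(T, rank, iters=200):
    """CP decomposition by alternating least squares: with two factor
    matrices fixed, the third solves a linear least-squares problem
    against the matching unfolding of T."""
    n = T.shape[0]
    A, B, C = (rng.random((n, rank)) for _ in range(3))
    unf = [T.reshape(n, -1),                     # mode-1 unfolding
           T.transpose(1, 0, 2).reshape(n, -1),  # mode-2
           T.transpose(2, 0, 1).reshape(n, -1)]  # mode-3
    for _ in range(iters):
        A = unf[0] @ np.linalg.pinv(khatri_rao(B, C)).T
        B = unf[1] @ np.linalg.pinv(khatri_rao(A, C)).T
        C = unf[2] @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

A, B, C = cp_als(T, R)
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
rel_err = np.linalg.norm(T_hat - T) / np.linalg.norm(T)
print(round(rel_err, 6))
```

The rows of a factor matrix serve as the word embeddings; the abstract's multiplicative composition then corresponds to elementwise products of these rows.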
Natural Language Inference with Definition Embedding Considering Context On the Fly
Title | Natural Language Inference with Definition Embedding Considering Context On the Fly |
Authors | Kosuke Nishida, Kyosuke Nishida, Hisako Asano, Junji Tomita |
Abstract | Natural language inference (NLI) is one of the most important tasks in NLP. In this study, we propose a novel method using word dictionaries, which are pairs of a word and its definition, as external knowledge. Our neural definition-embedding mechanism encodes input sentences together with the definitions of their words on the fly. Using an attention mechanism, it encodes each word's definition in a way that depends on the context of the input sentence. We evaluated our method using WordNet as a dictionary and confirmed that it performed better than baseline models when using the full 100d GloVe vectors or a subset of them as word embeddings. |
Tasks | Domain Adaptation, Information Retrieval, Natural Language Inference, Question Answering, Representation Learning, Word Embeddings |
Published | 2018-07-01 |
URL | https://www.aclweb.org/anthology/W18-3007/ |
PWC | https://paperswithcode.com/paper/natural-language-inference-with-definition |
Repo | |
Framework | |
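The context-dependent definition encoding can be sketched as attention over a word's definition vectors, scored against the current sentence context. Dot-product attention and the orthogonal toy vectors below are assumptions for illustration, not the paper's exact model:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def definition_embedding(context_vec, def_word_vecs):
    """Attention-weighted summary of a word's dictionary definition: each
    definition-word vector is scored against the sentence context, so the
    same definition yields a different embedding in a different sentence
    -- the "on the fly" part of the abstract."""
    scores = def_word_vecs @ context_vec   # dot-product attention
    weights = softmax(scores)
    return weights @ def_word_vecs, weights

definition = np.eye(4)            # 4 orthogonal toy definition-word vectors
ctx_a = 2.0 * definition[0]       # context aligned with definition word 0
ctx_b = 2.0 * definition[3]       # context aligned with definition word 3

emb_a, w_a = definition_embedding(ctx_a, definition)
emb_b, w_b = definition_embedding(ctx_b, definition)
print(w_a.argmax(), w_b.argmax())  # → 0 3
```

The same definition matrix produces two different embeddings, with attention mass shifting to whichever definition word matches the context.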
Adaptive Negative Curvature Descent with Applications in Non-convex Optimization
Title | Adaptive Negative Curvature Descent with Applications in Non-convex Optimization |
Authors | Mingrui Liu, Zhe Li, Xiaoyu Wang, Jinfeng Yi, Tianbao Yang |
Abstract | The negative curvature descent (NCD) method has been used to design deterministic or stochastic algorithms for non-convex optimization aiming at finding second-order stationary points or local minima. In existing studies, NCD needs to approximate the smallest eigenvalue of the Hessian matrix with a sufficient precision (e.g., $\epsilon_2\ll 1$) in order to achieve a sufficiently accurate second-order stationary solution (i.e., $\lambda_{\min}(\nabla^2 f(x))\geq -\epsilon_2$). One issue with this approach is that the target precision $\epsilon_2$ is usually set to be very small in order to find a high-quality solution, which increases the complexity of computing a negative curvature. To address this issue, we propose an adaptive NCD that allows for an adaptive error, dependent on the current gradient's magnitude, in approximating the smallest eigenvalue of the Hessian, and that encourages competition between a noisy NCD step and a gradient descent step. We consider the applications of the proposed adaptive NCD for both deterministic and stochastic non-convex optimization, and demonstrate that it can help reduce the overall complexity of computing the negative curvatures during the course of optimization without sacrificing the iteration complexity. |
Tasks | |
Published | 2018-12-01 |
URL | http://papers.nips.cc/paper/7734-adaptive-negative-curvature-descent-with-applications-in-non-convex-optimization |
PDF | http://papers.nips.cc/paper/7734-adaptive-negative-curvature-descent-with-applications-in-non-convex-optimization.pdf |
PWC | https://paperswithcode.com/paper/adaptive-negative-curvature-descent-with |
Repo | |
Framework | |
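The interplay between an adaptive curvature tolerance and competing negative-curvature/gradient steps can be sketched on a toy saddle. The specific tolerance rule below (square root of the gradient norm) is a hypothetical stand-in for the paper's adaptive error criterion:

```python
import numpy as np

# Toy objective with a saddle at the origin: f(x, y) = x^2 - y^2 + y^4/4.
def f(p):
    x, y = p
    return x**2 - y**2 + 0.25 * y**4

def grad(p):
    x, y = p
    return np.array([2.0 * x, -2.0 * y + y**3])

def hess(p):
    x, y = p
    return np.diag([2.0, -2.0 + 3.0 * y**2])

def adaptive_ncd(p, steps=200, eta=0.05):
    """Illustrative adaptive NCD: the curvature tolerance eps2 is tied to
    the current gradient magnitude (large gradient -> a crude curvature
    check suffices; near-stationary -> demand more accuracy).  When the
    Hessian has an eigenvalue below -eps2, step along that
    negative-curvature direction; otherwise take a gradient step."""
    for _ in range(steps):
        g = grad(p)
        eps2 = max(np.sqrt(np.linalg.norm(g)), 1e-6)
        lam, vecs = np.linalg.eigh(hess(p))      # eigenvalues ascending
        if lam[0] < -eps2:
            v = vecs[:, 0]
            v = v if v @ g <= 0 else -v          # orient the step downhill
            p = p + eta * v
        else:
            p = p - eta * g
    return p

# Plain gradient descent from (1, 0) stalls at the saddle (the y-gradient
# is zero there); the negative-curvature steps escape it.
p_star = adaptive_ncd(np.array([1.0, 0.0]))
print(np.round(f(p_star), 3))  # → -1.0 (a local minimum, not the saddle)
```

Escaping along the negative-curvature direction is what certifies the returned point as second-order stationary rather than a saddle.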
A farewell to arms: Non-verbal communication for non-humanoid robots
Title | A farewell to arms: Non-verbal communication for non-humanoid robots |
Authors | Aaron G. Cass, Kristina Striegnitz, Nick Webb |
Abstract | Human-robot interactions situated in a dynamic environment create a unique mix of challenges for conversational systems. We argue that, on the one hand, NLG can contribute to addressing these challenges and that, on the other hand, they pose interesting research problems for NLG. To illustrate our position we describe our research on non-humanoid robots using non-verbal signals to support communication. |
Tasks | Text Generation |
Published | 2018-11-01 |
URL | https://www.aclweb.org/anthology/W18-6905/ |
PWC | https://paperswithcode.com/paper/a-farewell-to-arms-non-verbal-communication |
Repo | |
Framework | |
Neural Tree Transducers for Tree to Tree Learning
Title | Neural Tree Transducers for Tree to Tree Learning |
Authors | João Sedoc, Dean Foster, Lyle Ungar |
Abstract | We introduce a novel approach to tree-to-tree learning, the neural tree transducer (NTT), a top-down depth-first context-sensitive tree decoder, which is paired with recursive neural encoders. Our method works purely on tree-to-tree manipulations rather than sequence-to-tree or tree-to-sequence and is able to encode and decode trees of varying depth. We compare our method to sequence-to-sequence models applied to serializations of the trees and show that our method outperforms previous methods for tree-to-tree transduction. |
Tasks | |
Published | 2018-01-01 |
URL | https://openreview.net/forum?id=rJBwoM-Cb |
PDF | https://openreview.net/pdf?id=rJBwoM-Cb |
PWC | https://paperswithcode.com/paper/neural-tree-transducers-for-tree-to-tree |
Repo | |
Framework | |
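The recursive-neural-encoder half of the pipeline can be sketched in a few lines: a leaf is its embedding, and an internal node combines its children's codes through shared weights. The weights, dimensions, and leaf embeddings are random stand-ins, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(4)

d = 4
W_left, W_right = rng.standard_normal((2, d, d)) * 0.5
leaf_vecs = {s: rng.standard_normal(d) for s in "abcd"}

def encode(tree):
    """Recursive neural encoder over trees given as nested tuples: a leaf
    (a string) maps to its embedding; an internal node combines its two
    children's codes through shared weight matrices and a tanh."""
    if isinstance(tree, str):
        return leaf_vecs[tree]
    left, right = tree
    return np.tanh(W_left @ encode(left) + W_right @ encode(right))

t1 = ("a", ("b", "c"))   # same leaves, ...
t2 = (("a", "b"), "c")   # ... different structure
print(np.allclose(encode(t1), encode(t2)))  # → False
```

Because distinct left and right weight matrices are applied at every node, the code is structure-sensitive: two trees with identical leaves but different shapes get different encodings, which is exactly what a tree-to-tree (rather than sequence-based) model needs.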
A Comparison of Two Paraphrase Models for Taxonomy Augmentation
Title | A Comparison of Two Paraphrase Models for Taxonomy Augmentation |
Authors | Vassilis Plachouras, Fabio Petroni, Timothy Nugent, Jochen L. Leidner |
Abstract | Taxonomies are often used to look up the concepts they contain in text documents (for instance, to classify a document). The more comprehensive the taxonomy, the higher the recall of the application that uses it. In this paper, we explore automatic taxonomy augmentation with paraphrases. We compare two state-of-the-art paraphrase models, one based on Moses, a statistical machine translation system, and one based on a sequence-to-sequence neural network, both trained on a paraphrase dataset, with respect to their ability to add novel nodes to an existing taxonomy from the risk domain. We conduct component-based and task-based evaluations. Our results show that paraphrasing is a viable method to enrich a taxonomy with more terms, and that Moses consistently outperforms the sequence-to-sequence neural model. To the best of our knowledge, this is the first approach to augment taxonomies with paraphrases. |
Tasks | Document Classification, Machine Translation, Paraphrase Generation |
Published | 2018-06-01 |
URL | https://www.aclweb.org/anthology/N18-2051/ |
PWC | https://paperswithcode.com/paper/a-comparison-of-two-paraphrase-models-for |
Repo | |
Framework | |
A Laypeople Study on Terminology Identification across Domains and Task Definitions
Title | A Laypeople Study on Terminology Identification across Domains and Task Definitions |
Authors | Anna Hätty, Sabine Schulte im Walde |
Abstract | This paper introduces a new dataset of term annotation. Given that even experts vary significantly in their understanding of termhood, and that term identification is mostly performed as a binary task, we offer a novel perspective to explore the common, natural understanding of what constitutes a term: Laypeople annotate single-word and multi-word terms, across four domains and across four task definitions. Analyses based on inter-annotator agreement offer insights into differences in term specificity, term granularity and subtermhood. |
Tasks | Information Retrieval |
Published | 2018-06-01 |
URL | https://www.aclweb.org/anthology/N18-2052/ |
PWC | https://paperswithcode.com/paper/a-laypeople-study-on-terminology |
Repo | |
Framework | |