January 24, 2020

2491 words 12 mins read

Paper Group NANR 242

BERT-based Lexical Substitution

Title BERT-based Lexical Substitution
Authors Wangchunshu Zhou, Tao Ge, Ke Xu, Furu Wei, Ming Zhou
Abstract Previous studies on lexical substitution tend to obtain substitute candidates by finding the target word's synonyms in lexical resources (e.g., WordNet) and then rank the candidates based on their contexts. These approaches have two limitations: (1) they are likely to overlook good substitute candidates that are not synonyms of the target word in the lexical resources; (2) they fail to take into account the substitution's influence on the global context of the sentence. To address these issues, we propose an end-to-end BERT-based lexical substitution approach that can propose and validate substitute candidates without using any annotated data or manually curated resources. Our approach first applies dropout to the target word's embedding to partially mask the word, allowing BERT to take balanced consideration of the target word's semantics and context when proposing substitute candidates, and then validates the candidates based on their substitution's influence on the global contextualized representation of the sentence. Experiments show our approach performs well in both proposing and ranking substitute candidates, achieving state-of-the-art results on both the LS07 and LS14 benchmarks.
Tasks
Published 2019-07-01
URL https://www.aclweb.org/anthology/P19-1328/
PDF https://www.aclweb.org/anthology/P19-1328
PWC https://paperswithcode.com/paper/bert-based-lexical-substitution
Repo
Framework
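
The paper's two steps, partially masking the target word by applying dropout to its embedding and then validating candidates by how little they disturb the sentence's global representation, can be sketched in plain NumPy. Toy vectors stand in for actual BERT states here; all names are illustrative, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def embedding_dropout(embedding, rate=0.3):
    """Partially mask the target word's embedding so the model balances
    the word's own semantics against the surrounding context."""
    mask = rng.random(embedding.shape) >= rate
    # Inverted-dropout scaling keeps the expected magnitude unchanged.
    return embedding * mask / (1.0 - rate)

def candidate_score(sent_repr_original, sent_repr_substituted):
    """Score a substitute by how little it perturbs the sentence's global
    contextualized representation (cosine similarity; higher is better)."""
    a, b = sent_repr_original, sent_repr_substituted
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

In the real system both representations come from BERT's contextual encoder, and candidates are ranked by this influence score.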

Development and Deployment of a Large-Scale Dialog-based Intelligent Tutoring System

Title Development and Deployment of a Large-Scale Dialog-based Intelligent Tutoring System
Authors Shazia Afzal, Tejas Dhamecha, Nirmal Mukhi, Renuka Sindhgatta, Smit Marvaniya, Matthew Ventura, Jessica Yarbro
Abstract There are significant challenges involved in the design and implementation of a dialog-based tutoring system (DBT), ranging from domain engineering to natural language classification and, eventually, instantiating an adaptive, personalized dialog strategy. These issues are magnified when implementing such a system at scale and across domains. In this paper, we describe and reflect on the design, methods, decisions and assessments that led to the successful deployment of our AI-driven DBT, currently used by several hundred college-level students for practice and self-regulated study in diverse subjects such as Sociology, Communications, and American Government.
Tasks
Published 2019-06-01
URL https://www.aclweb.org/anthology/N19-2015/
PDF https://www.aclweb.org/anthology/N19-2015
PWC https://paperswithcode.com/paper/development-and-deployment-of-a-large-scale
Repo
Framework

Recurrent Attentive Zooming for Joint Crowd Counting and Precise Localization

Title Recurrent Attentive Zooming for Joint Crowd Counting and Precise Localization
Authors Chenchen Liu, Xinyu Weng, Yadong Mu
Abstract Crowd counting is a new frontier in computer vision with far-reaching applications, particularly in social safety management. A majority of existing works adopt a methodology that first estimates a person-density map and then calculates the integral over this map to obtain the final count. As noticed by several prior investigations, the learned density map can significantly deviate from the true person density even when the final reported count is precise. This implies that the density map is unreliable for localizing the crowd. To address this issue, this work proposes a novel framework that simultaneously solves two inherently related tasks: crowd counting and localization. The contributions are several-fold. First, our formulation is based on the crucial observation that localization tends to be inaccurate in high-density regions, and that increasing the resolution is an effective, albeit simple, solution for improving localization. We thus propose the Recurrent Attentive Zooming Network, which recurrently detects ambiguous image regions and zooms them into high resolution for re-inspection. Second, the two tasks of counting and localization mutually reinforce each other. We propose an adaptive fusion scheme that effectively elevates the performance. Finally, a well-defined evaluation metric is proposed for the rarely explored localization task. We conduct comprehensive evaluations on several crowd benchmarks, including the newly developed large-scale UCF-QNRF dataset, and demonstrate clear advantages over state-of-the-art methods.
Tasks Crowd Counting
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Liu_Recurrent_Attentive_Zooming_for_Joint_Crowd_Counting_and_Precise_Localization_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Liu_Recurrent_Attentive_Zooming_for_Joint_Crowd_Counting_and_Precise_Localization_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/recurrent-attentive-zooming-for-joint-crowd
Repo
Framework
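
The count-by-integration step the abstract critiques is a one-liner: the predicted density map is summed to give the count, which is exactly why two very different spatial layouts can report the same count. A minimal NumPy illustration with invented toy maps, not the paper's network:

```python
import numpy as np

def count_from_density(density_map):
    """Crowd count = the integral (here, the sum) of the per-pixel density."""
    return float(density_map.sum())

# Two maps with the same integral but very different spatial layouts: the
# reported count is identical, yet only one of them localizes anyone. This
# is the unreliability that motivates joint counting and localization.
uniform = np.full((4, 4), 0.5)   # 8 people smeared evenly over the image
peaked = np.zeros((4, 4))
peaked[0, 0] = 8.0               # 8 people stacked in one corner
```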

Optimal Analysis of Subset-Selection Based L_p Low-Rank Approximation

Title Optimal Analysis of Subset-Selection Based L_p Low-Rank Approximation
Authors Chen Dan, Hong Wang, Hongyang Zhang, Yuchen Zhou, Pradeep K. Ravikumar
Abstract We show that for the problem of $\ell_p$ rank-$k$ approximation of any given matrix over $R^{n\times m}$ and $C^{n\times m}$, the column subset selection algorithm enjoys approximation ratio $(k+1)^{1/p}$ for $1\le p\le 2$ and $(k+1)^{1-1/p}$ for $p\ge 2$. This improves upon the previous $O(k+1)$ bound (Chierichetti et al., 2017) for $p\ge 1$. We complement our analysis with lower bounds; these bounds match our upper bounds up to constant 1 when $p\geq 2$. At the core of our techniques is an application of the Riesz-Thorin interpolation theorem from harmonic analysis, which might be of independent interest to other algorithmic designs and analyses more broadly. Our analysis results in improvements on the approximation guarantees of several other algorithms with various time complexities. For example, to make the column subset selection algorithm computationally efficient, we analyze a polynomial-time bi-criteria algorithm which selects $O(k\log m)$ columns. We show that this algorithm has an approximation ratio of $O((k+1)^{1/p})$ for $1\le p\le 2$ and $O((k+1)^{1-1/p})$ for $p\ge 2$, improving over the $O(k+1)$ approximation ratio of (Chierichetti et al., 2017). Our bi-criteria algorithm also implies an exact-rank method in polynomial time with a slightly larger approximation ratio.
Tasks
Published 2019-12-01
URL http://papers.nips.cc/paper/8523-optimal-analysis-of-subset-selection-based-l_p-low-rank-approximation
PDF http://papers.nips.cc/paper/8523-optimal-analysis-of-subset-selection-based-l_p-low-rank-approximation.pdf
PWC https://paperswithcode.com/paper/optimal-analysis-of-subset-selection-based-1
Repo
Framework
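
The improved approximation ratios are simple closed forms in k and p. The helper below is a direct transcription of the bounds stated in the abstract, not the selection algorithm itself:

```python
def css_approx_ratio(k, p):
    """Approximation ratio of column subset selection for l_p rank-k
    approximation, per the paper's analysis: (k+1)^(1/p) for 1 <= p <= 2
    and (k+1)^(1 - 1/p) for p >= 2."""
    if p < 1:
        raise ValueError("p must be >= 1")
    exponent = 1.0 / p if p <= 2 else 1.0 - 1.0 / p
    return (k + 1) ** exponent
```

The two branches agree at p = 2, where the ratio is sqrt(k+1), so the bound is continuous in p and never worse than the earlier O(k+1) guarantee.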

The use of Extract Morphology for Automatic Generation of Language Technology for Votic

Title The use of Extract Morphology for Automatic Generation of Language Technology for Votic
Authors Kristian Kankainen
Abstract
Tasks Code Generation, Morphological Analysis
Published 2019-01-01
URL https://www.aclweb.org/anthology/W19-0315/
PDF https://www.aclweb.org/anthology/W19-0315
PWC https://paperswithcode.com/paper/the-use-of-extract-morphology-for-automatic
Repo
Framework

Noisy Channel for Low Resource Grammatical Error Correction

Title Noisy Channel for Low Resource Grammatical Error Correction
Authors Simon Flachs, Ophélie Lacroix, Anders Søgaard
Abstract This paper describes our contribution to the low-resource track of the BEA 2019 shared task on Grammatical Error Correction (GEC). Our approach to GEC builds on the theory of the noisy channel by combining a channel model and a language model. We generate confusion sets from the Wikipedia edit history and use the frequencies of edits to estimate the channel model. Additionally, we use two pre-trained language models: 1) Google's BERT model, which we fine-tune for specific error types, and 2) OpenAI's GPT-2 model, exploiting its ability to use previous sentences as context. Furthermore, we search for the optimal combinations of corrections using beam search.
Tasks Grammatical Error Correction, Language Modelling
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-4420/
PDF https://www.aclweb.org/anthology/W19-4420
PWC https://paperswithcode.com/paper/noisy-channel-for-low-resource-grammatical
Repo
Framework
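
The noisy-channel combination itself is just a sum of log-probabilities: pick the correction maximizing log P_lm(c) + log P_channel(observed | c). A toy sketch with invented probabilities; the real system estimates the channel from Wikipedia edit frequencies and the language model from BERT/GPT-2, neither of which appears here:

```python
import math

def best_correction(observed, candidates, channel_logp, lm_logp):
    """Noisy-channel decoding: pick the candidate c that maximizes
    log P_lm(c) + log P_channel(observed | c)."""
    return max(candidates, key=lambda c: lm_logp(c) + channel_logp(observed, c))

# Toy models with made-up numbers, for illustration only.
LM = {"their car": math.log(0.2), "there car": math.log(0.001)}

def lm_logp(c):
    return LM.get(c, math.log(1e-9))

def channel_logp(observed, c):
    # Probability of the writer producing `observed` given intended `c`.
    return math.log(0.9) if observed == c else math.log(0.05)

best = best_correction("there car", ["there car", "their car"], channel_logp, lm_logp)
```

Here the language model's strong preference for "their car" outweighs the channel's preference for leaving the text unchanged.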

PRUNING WITH HINTS: AN EFFICIENT FRAMEWORK FOR MODEL ACCELERATION

Title PRUNING WITH HINTS: AN EFFICIENT FRAMEWORK FOR MODEL ACCELERATION
Authors Wei Gao, Yi Wei, Quanquan Li, Hongwei Qin, Wanli Ouyang, Junjie Yan
Abstract In this paper, we propose an efficient framework to accelerate convolutional neural networks. We utilize two types of acceleration methods: pruning and hints. Pruning can reduce model size by removing channels from layers. Hints can improve the performance of the student model by transferring knowledge from the teacher model. We demonstrate that pruning and hints are complementary to each other. On one hand, hints can benefit pruning by maintaining similar feature representations. On the other hand, the model pruned from the teacher network is a good initialization for the student model, which increases the transferability between the two networks. Our approach performs the pruning and hints stages iteratively to further improve performance. Furthermore, we propose an algorithm to reconstruct the parameters of the hints layer and make the pruned model more suitable for hints. Experiments were conducted on various tasks including classification and pose estimation. Results on CIFAR-10, ImageNet and COCO demonstrate the generalization and superiority of our framework.
Tasks Pose Estimation
Published 2019-05-01
URL https://openreview.net/forum?id=Hyffti0ctQ
PDF https://openreview.net/pdf?id=Hyffti0ctQ
PWC https://paperswithcode.com/paper/pruning-with-hints-an-efficient-framework-for
Repo
Framework
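
Channel pruning of the kind described above can be sketched as ranking a convolution's output channels by a saliency score and keeping the strongest. The L1-norm criterion below is a common stand-in; the paper's exact criterion may differ, and the hints (knowledge-transfer) stage is not reproduced here:

```python
import numpy as np

def prune_channels(weight, keep):
    """Rank the output channels of a conv weight tensor (out, in, kh, kw)
    by L1 norm and keep the `keep` strongest."""
    norms = np.abs(weight).sum(axis=(1, 2, 3))      # one score per channel
    kept = np.sort(np.argsort(norms)[::-1][:keep])  # strongest, in index order
    return weight[kept], kept
```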

Generating Modern Poetry Automatically in Finnish

Title Generating Modern Poetry Automatically in Finnish
Authors Mika Hämäläinen, Khalid Alnajjar
Abstract We present a novel approach for generating poetry automatically for the morphologically rich Finnish language by using a genetic algorithm. The approach improves the state of the art of the previous Finnish poem generators by introducing a higher degree of freedom in terms of structural creativity. Our approach is evaluated and described within the paradigm of computational creativity, where the fitness functions of the genetic algorithm are assimilated with the notion of aesthetics. The output is considered to be a poem 81.5% of the time by human evaluators.
Tasks
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-1617/
PDF https://www.aclweb.org/anthology/D19-1617
PWC https://paperswithcode.com/paper/generating-modern-poetry-automatically-in
Repo
Framework
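
The generate-and-select loop of a genetic algorithm fits in a few lines. Here the fitness is a trivial character-match function standing in for the paper's aesthetics-based fitness functions, and the Finnish target line is invented for the example:

```python
import random

random.seed(42)
ALPHABET = "ailpstu "
TARGET = "ilta saapuu"   # invented target line; fitness = character matches

def fit(s):
    return sum(c1 == c2 for c1, c2 in zip(s, TARGET))

def evolve(population, fitness, generations=30, mutation_rate=0.1):
    """Minimal genetic algorithm with elitism: keep the best half as
    parents, refill the population with crossover and mutation."""
    for _ in range(generations):
        parents = sorted(population, key=fitness, reverse=True)[: len(population) // 2]
        children = []
        while len(children) < len(population) - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))
            child = list(a[:cut] + b[cut:])          # one-point crossover
            if random.random() < mutation_rate:
                child[random.randrange(len(child))] = random.choice(ALPHABET)
            children.append("".join(child))
        population = parents + children              # elitism keeps the best half
    return max(population, key=fitness)

pop = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(40)]
best = evolve(pop, fit)
```

Because the best individuals always survive into the next generation, fitness is monotonically non-decreasing over generations.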

Bacteria Biotope Relation Extraction via Lexical Chains and Dependency Graphs

Title Bacteria Biotope Relation Extraction via Lexical Chains and Dependency Graphs
Authors Wuti Xiong, Fei Li, Ming Cheng, Hong Yu, Donghong Ji
Abstract In this article, we describe our approach for the Bacteria Biotope relation extraction (BB-rel) subtask of the BioNLP Shared Task 2019. This task aims to promote the development of text mining systems that extract relationships between Microorganism, Habitat and Phenotype entities. In this paper, we propose a novel approach to dependency graph construction based on lexical chains, so that one dependency graph can represent one or multiple sentences. We then propose a neural network model consisting of bidirectional long short-term memory networks and an attention graph convolutional network to learn relation extraction features from the graph. Our approach is able to extract both intra- and inter-sentence relations while utilizing syntactic information. The results show that our approach achieved the best F1 (66.3%) in the official evaluation, in which 7 teams participated.
Tasks graph construction, Relation Extraction
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-5723/
PDF https://www.aclweb.org/anthology/D19-5723
PWC https://paperswithcode.com/paper/bacteria-biotope-relation-extraction-via
Repo
Framework
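
The graph-construction idea, one dependency graph spanning several sentences, can be sketched by merging per-sentence dependency edges and adding chain edges between repeated words across sentences. This is a simplified sketch: the paper's lexical chains use richer lexical similarity than exact string match, and the BiLSTM + attention-GCN model on top is not shown:

```python
def build_chained_graph(sentences):
    """Merge per-sentence dependency edges into one graph and link
    repeated words across sentences with lexical-chain edges. Each
    sentence is (tokens, deps) with deps as (head_idx, dep_idx) pairs;
    nodes are (sentence_id, token_id) tuples."""
    edges = set()
    first_seen = {}                        # word -> node where it first appeared
    for s_id, (tokens, deps) in enumerate(sentences):
        for head, dep in deps:             # intra-sentence dependency edges
            edges.add(((s_id, head), (s_id, dep)))
        for t_id, tok in enumerate(tokens):
            if tok in first_seen:          # cross-sentence chain edge
                edges.add((first_seen[tok], (s_id, t_id)))
            else:
                first_seen[tok] = (s_id, t_id)
    return edges

sentences = [(["bacteria", "live"], [(1, 0)]),
             (["bacteria", "thrive"], [(1, 0)])]
graph = build_chained_graph(sentences)
```

The chain edge between the two occurrences of "bacteria" is what lets a downstream model extract inter-sentence relations from a single connected graph.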

Controllable Text Simplification with Lexical Constraint Loss

Title Controllable Text Simplification with Lexical Constraint Loss
Authors Daiki Nishihara, Tomoyuki Kajiwara, Yuki Arase
Abstract We propose a method to control the level of a sentence in a text simplification task. Text simplification is a monolingual translation task that translates a complex sentence into a simpler, easier-to-understand alternative. In this study, we use the grade levels of the US education system as sentence levels. Our text simplification method succeeds in translating an input into a specific grade level by considering the levels of both sentences and words. The sentence level is considered by adding the target grade level as input. The word level, by contrast, is considered by adding weights to the training loss based on words that frequently appear in sentences of the desired grade level. Although existing models that consider only the sentence level may control syntactic complexity, they tend to generate words beyond the target level. Our approach can control both lexical and syntactic complexity and achieves aggressive rewriting. Experimental results indicate that the proposed method improves both BLEU and SARI.
Tasks Text Simplification
Published 2019-07-01
URL https://www.aclweb.org/anthology/P19-2036/
PDF https://www.aclweb.org/anthology/P19-2036
PWC https://paperswithcode.com/paper/controllable-text-simplification-with-lexical
Repo
Framework
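
The word-level control amounts to a per-token weight on the training loss, larger for words frequent at the target grade level. A sketch of such a weighted negative log-likelihood; the 1 + alpha * freq form is a plausible stand-in, not necessarily the paper's exact weighting:

```python
import math

def lexically_weighted_nll(token_logprobs, tokens, grade_freq, alpha=1.0):
    """Cross-entropy with per-token weights: words frequent at the target
    grade level get a larger weight, pushing the decoder toward
    grade-appropriate vocabulary."""
    total = 0.0
    for logp, tok in zip(token_logprobs, tokens):
        weight = 1.0 + alpha * grade_freq.get(tok, 0.0)
        total += -weight * logp
    return total / len(tokens)
```

With `grade_freq` empty this reduces to ordinary cross-entropy, so the constraint loss strictly generalizes the baseline objective.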

Fine-Grained Analysis of Propaganda in News Article

Title Fine-Grained Analysis of Propaganda in News Article
Authors Giovanni Da San Martino, Seunghak Yu, Alberto Barrón-Cedeño, Rostislav Petrov, Preslav Nakov
Abstract Propaganda aims at influencing people's mindset with the purpose of advancing a specific agenda. Previous work has addressed propaganda detection at the document level, typically labelling all articles from a propagandistic news outlet as propaganda. Such noisy gold labels inevitably affect the quality of any learning system trained on them. A further issue with most existing systems is the lack of explainability. To overcome these limitations, we propose a novel task: performing fine-grained analysis of texts by detecting all fragments that contain propaganda techniques as well as their type. In particular, we create a corpus of news articles manually annotated at the fragment level with eighteen propaganda techniques and propose a suitable evaluation measure. We further design a novel multi-granularity neural network, and we show that it outperforms several strong BERT-based baselines.
Tasks
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-1565/
PDF https://www.aclweb.org/anthology/D19-1565
PWC https://paperswithcode.com/paper/fine-grained-analysis-of-propaganda-in-news-1
Repo
Framework

Proceedings of the 1st International Workshop on Computational Approaches to Historical Language Change

Title Proceedings of the 1st International Workshop on Computational Approaches to Historical Language Change
Authors
Abstract
Tasks
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-4700/
PDF https://www.aclweb.org/anthology/W19-4700
PWC https://paperswithcode.com/paper/proceedings-of-the-1st-international-workshop
Repo
Framework

Language Learning and Processing in People and Machines

Title Language Learning and Processing in People and Machines
Authors Aida Nematzadeh, Richard Futrell, Roger Levy
Abstract The goal of this tutorial is to bring the fields of computational linguistics and computational cognitive science closer: we will introduce different stages of language acquisition and their parallel problems in NLP. As an example, one of the early challenges children face is mapping the meaning of word labels (such as "cat") to their referents (the furry animal in the living room). Word learning is similar to the word alignment problem in machine translation. We explain the current computational models of language acquisition, their limitations, and how the insights from these models can be incorporated into NLP applications. Moreover, we discuss how we can take advantage of the cognitive science of language in computational linguistics: for example, by designing cognitively motivated evaluation tasks or building language-learning inductive biases into our models.
Tasks Language Acquisition, Machine Translation, Word Alignment
Published 2019-06-01
URL https://www.aclweb.org/anthology/N19-5005/
PDF https://www.aclweb.org/anthology/N19-5005
PWC https://paperswithcode.com/paper/language-learning-and-processing-in-people
Repo
Framework
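
The word-learning/word-alignment parallel the tutorial draws can be made concrete with cross-situational learning: count word-referent co-occurrences across situations and map each word to its most frequent referent. This is a counting-only caricature of IBM-Model-1-style alignment, with invented toy data:

```python
from collections import Counter

def learn_word_meanings(situations):
    """Each situation pairs the words heard with the referents present.
    A single situation is ambiguous; co-occurrence counts accumulated
    over many situations resolve the ambiguity."""
    counts = {}
    for words, referents in situations:
        for w in words:
            for r in referents:
                counts.setdefault(w, Counter())[r] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

situations = [
    (["cat"], ["CAT", "SOFA"]),
    (["cat"], ["CAT", "DOG"]),
    (["dog"], ["DOG", "SOFA"]),
    (["dog"], ["DOG", "CAT"]),
]
meanings = learn_word_meanings(situations)
```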

Results of the WMT19 Metrics Shared Task: Segment-Level and Strong MT Systems Pose Big Challenges

Title Results of the WMT19 Metrics Shared Task: Segment-Level and Strong MT Systems Pose Big Challenges
Authors Qingsong Ma, Johnny Wei, Ondřej Bojar, Yvette Graham
Abstract This paper presents the results of the WMT19 Metrics Shared Task. Participants were asked to score the outputs of the translation systems competing in the WMT19 News Translation Task with automatic metrics. 13 research groups submitted 24 metrics, 10 of which are reference-less "metrics" and constitute submissions to the joint task with the WMT19 Quality Estimation Task, "QE as a Metric". In addition, we computed 11 baseline metrics: 8 commonly applied baselines (BLEU, SentBLEU, NIST, WER, PER, TER, CDER, and chrF) and 3 reimplementations (chrF+, sacreBLEU-BLEU, and sacreBLEU-chrF). Metrics were evaluated at the system level (how well a given metric correlates with the WMT19 official manual ranking) and at the segment level (how well the metric correlates with human judgements of segment quality). This year, we use direct assessment (DA) as our only form of manual evaluation.
Tasks
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-5302/
PDF https://www.aclweb.org/anthology/W19-5302
PWC https://paperswithcode.com/paper/results-of-the-wmt19-metrics-shared-task
Repo
Framework
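
The evaluation reduces to correlating metric scores with human judgements. Pearson's r, the statistic used at the system level (the segment level uses a Kendall's-tau-like statistic, not shown here), is easy to state explicitly:

```python
import math

def pearson(xs, ys):
    """Pearson's r between automatic metric scores and human judgements."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A metric whose scores are any increasing linear function of the human scores achieves r = 1.0, which is why correlation rather than absolute agreement is the right yardstick.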

Apertium-fin-eng–Rule-based Shallow Machine Translation for WMT 2019 Shared Task

Title Apertium-fin-eng–Rule-based Shallow Machine Translation for WMT 2019 Shared Task
Authors Tommi Pirinen
Abstract In this paper we describe a rule-based, bi-directional machine translation system for the Finnish-English language pair. The baseline system was based on the existing data of FinnWordNet, omorfi and apertium-eng. We have built the disambiguation, lexical selection and translation rules by hand. The dictionaries and rules have been developed based on the shared task data. We describe in this article the use of the shared task data as a kind of test-driven development workflow for RBMT, and show that it fits well into a modern software-engineering continuous integration workflow and yields large increases in BLEU scores with minimal effort.
Tasks Machine Translation
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-5336/
PDF https://www.aclweb.org/anthology/W19-5336
PWC https://paperswithcode.com/paper/apertium-fin-eng-rule-based-shallow-machine
Repo
Framework