January 25, 2020

2016 words 10 mins read

Paper Group NANR 57

Workflows for kickstarting RBMT in virtually No-Resource Situation. Accelerating first order optimization algorithms. Variation between Different Discourse Types: Literate vs. Oral. Online aggression from a sociological perspective: An integrative view on determinants and possible countermeasures. Verbs in Egyptian Arabic: a case for register varia …

Workflows for kickstarting RBMT in virtually No-Resource Situation

Title Workflows for kickstarting RBMT in virtually No-Resource Situation
Authors Tommi A Pirinen
Abstract
Tasks
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-6803/
PDF https://www.aclweb.org/anthology/W19-6803
PWC https://paperswithcode.com/paper/workflows-for-kickstarting-rbmt-in-virtually
Repo
Framework

Accelerating first order optimization algorithms

Title Accelerating first order optimization algorithms
Authors Ange Tato, Roger Nkambou
Abstract There exist several stochastic optimization algorithms. However, in most cases it is difficult to tell which optimizer will be the best choice for a particular problem, as each of them performs well. Thus, we present a simple and intuitive technique that, when applied to first-order optimization algorithms, improves the speed of convergence and reaches a better minimum of the loss function than the original algorithms. The proposed solution modifies the update rule based on the variation of the direction of the gradient during training. We conducted several tests with Adam and AMSGrad on two different datasets. The preliminary results show that the proposed technique improves the performance of existing optimization algorithms and works well in practice.
Tasks Stochastic Optimization
Published 2019-05-01
URL https://openreview.net/forum?id=SkgCV205tQ
PDF https://openreview.net/pdf?id=SkgCV205tQ
PWC https://paperswithcode.com/paper/accelerating-first-order-optimization
Repo
Framework
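
The abstract above describes adjusting the update rule according to how the gradient direction varies during training, but the exact rule is not given in this listing. The NumPy sketch below shows one way such a mechanism could be attached to Adam; the per-coordinate `boost` scaling on stable gradient directions is an illustrative assumption, not the authors' formula.

```python
import numpy as np

def adam_with_direction_boost(grad_fn, x0, steps=1000, lr=1e-3,
                              beta1=0.9, beta2=0.999, eps=1e-8, boost=1.5):
    """Adam-like update whose step is enlarged where the gradient keeps
    pointing in the same direction (hypothetical variant for illustration)."""
    x = x0.astype(float).copy()
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    prev_grad = np.zeros_like(x)
    for t in range(1, steps + 1):
        g = grad_fn(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        # Per-coordinate scaling: larger steps where the gradient direction is
        # stable across iterations, plain Adam steps where it flips sign.
        scale = np.where(np.sign(g) == np.sign(prev_grad), boost, 1.0)
        x -= lr * scale * m_hat / (np.sqrt(v_hat) + eps)
        prev_grad = g
    return x

# Usage on a toy quadratic: minimize ||x||^2, whose gradient is 2x.
x_min = adam_with_direction_boost(lambda x: 2 * x, np.array([3.0, -4.0]))
print(x_min)
```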

Variation between Different Discourse Types: Literate vs. Oral

Title Variation between Different Discourse Types: Literate vs. Oral
Authors Katrin Ortmann, Stefanie Dipper
Abstract This paper deals with the automatic identification of literate and oral discourse in German texts. A range of linguistic features is selected and their role in distinguishing between literate- and oral-oriented registers is investigated, using a decision-tree classifier. It turns out that all of the investigated features are related in some way to oral conceptuality. In particular, simple measures of complexity (average sentence and word length) are prominent indicators of oral and literate discourse. In addition, features of reference and deixis (realized by different types of pronouns) also prove to be very useful in determining the degree of orality of different registers.
Tasks
Published 2019-06-01
URL https://www.aclweb.org/anthology/W19-1407/
PDF https://www.aclweb.org/anthology/W19-1407
PWC https://paperswithcode.com/paper/variation-between-different-discourse-types
Repo
Framework
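
As a concrete illustration of the setup described above, here is a minimal scikit-learn decision tree over toy complexity features (average sentence length, average word length, pronoun ratio). The feature values and labels are invented; the study's German feature set is much richer.

```python
from sklearn.tree import DecisionTreeClassifier

# Toy feature vectors: [avg_sentence_length, avg_word_length, pronoun_ratio]
X = [
    [22.0, 6.1, 0.02],   # literate-oriented: long sentences, long words
    [25.3, 6.4, 0.01],
    [ 9.5, 4.2, 0.09],   # oral-oriented: short sentences, many pronouns
    [ 8.1, 4.0, 0.11],
]
y = ["literate", "literate", "oral", "oral"]

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X, y)
print(clf.predict([[10.0, 4.5, 0.08]]))  # -> likely "oral"
```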

Online aggression from a sociological perspective: An integrative view on determinants and possible countermeasures

Title Online aggression from a sociological perspective: An integrative view on determinants and possible countermeasures
Authors Sebastian Weingartner, Lea Stahel
Abstract The present paper introduces a theoretical model for explaining aggressive online comments from a sociological perspective. It is innovative as it combines individual, situational, and social-structural determinants of online aggression and tries to theoretically derive their interplay. Moreover, the paper suggests an empirical strategy for testing the model. The main contribution will be to match online commenting data with survey data containing rich background data of non-/aggressive online commentators.
Tasks
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-3520/
PDF https://www.aclweb.org/anthology/W19-3520
PWC https://paperswithcode.com/paper/online-aggression-from-a-sociological
Repo
Framework

Verbs in Egyptian Arabic: a case for register variation

Title Verbs in Egyptian Arabic: a case for register variation
Authors Michael Grant White, Deryle W. Lonsdale
Abstract
Tasks
Published 2019-07-01
URL https://www.aclweb.org/anthology/W19-5608/
PDF https://www.aclweb.org/anthology/W19-5608
PWC https://paperswithcode.com/paper/verbs-in-egyptian-arabic-a-case-for-register
Repo
Framework

INMT: Interactive Neural Machine Translation Prediction

Title INMT: Interactive Neural Machine Translation Prediction
Authors Sebastin Santy, Sandipan Dandapat, Monojit Choudhury, Kalika Bali
Abstract In this paper, we demonstrate an Interactive Machine Translation interface, that assists human translators with on-the-fly hints and suggestions. This makes the end-to-end translation process faster, more efficient and creates high-quality translations. We augment the OpenNMT backend with a mechanism to accept the user input and generate conditioned translations.
Tasks Machine Translation
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-3018/
PDF https://www.aclweb.org/anthology/D19-3018
PWC https://paperswithcode.com/paper/inmt-interactive-neural-machine-translation
Repo
Framework
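
The core interaction described above is continuing a translation from whatever target prefix the human translator has typed. The sketch below shows only that control flow with a stand-in `decode_step` function; the real demo uses an OpenNMT backend whose API is not shown in this listing.

```python
def interactive_continue(prefix_tokens, decode_step, eos="</s>", max_len=30):
    """Greedily extend a translator-supplied target prefix.

    decode_step(tokens) -> next_token is a hypothetical stand-in for the
    conditioned seq2seq decoder."""
    out = list(prefix_tokens)
    while len(out) < max_len:
        nxt = decode_step(out)
        if nxt == eos:
            break
        out.append(nxt)
    return out

# Dummy decoder for illustration, keyed to this two-token prefix.
canned = ["über", "den", "Fluss", "</s>"]
dummy_step = lambda toks: canned[min(len(toks) - 2, len(canned) - 1)]
print(interactive_continue(["Die", "Brücke"], dummy_step))
```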

A Comparative Study of English-Chinese Translations of Court Texts by Machine and Human Translators and the Word2Vec Based Similarity Measure’s Ability To Gauge Human Evaluation Biases

Title A Comparative Study of English-Chinese Translations of Court Texts by Machine and Human Translators and the Word2Vec Based Similarity Measure’s Ability To Gauge Human Evaluation Biases
Authors Ming Qian, Jessie Liu, Chaofeng Li, Liming Pals
Abstract
Tasks
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-6714/
PDF https://www.aclweb.org/anthology/W19-6714
PWC https://paperswithcode.com/paper/a-comparative-study-of-english-chinese
Repo
Framework
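
No abstract is available for this entry, but the title points to a word2vec-based similarity measure for comparing machine and human translations. A minimal sketch of one such measure, cosine similarity between averaged word vectors, is given below; the toy vectors stand in for a pretrained model, which the listing does not specify.

```python
import numpy as np

# Toy word vectors standing in for a pretrained word2vec model.
vecs = {
    "court":     np.array([0.9, 0.1, 0.0]),
    "tribunal":  np.array([0.8, 0.2, 0.1]),
    "appear":    np.array([0.1, 0.9, 0.0]),
    "defendant": np.array([0.2, 0.1, 0.9]),
    "accused":   np.array([0.3, 0.1, 0.8]),
}

def sent_vec(tokens):
    """Average the vectors of the in-vocabulary tokens."""
    return np.mean([vecs[t] for t in tokens if t in vecs], axis=0)

def w2v_similarity(machine_tokens, human_tokens):
    """Cosine similarity between averaged word vectors of two translations."""
    a, b = sent_vec(machine_tokens), sent_vec(human_tokens)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(w2v_similarity(["defendant", "appear", "court"],
                     ["accused", "appear", "tribunal"]))
```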

Initial Experiments In Cross-Lingual Morphological Analysis Using Morpheme Segmentation

Title Initial Experiments In Cross-Lingual Morphological Analysis Using Morpheme Segmentation
Authors Vladislav Mikhailov, Lorenzo Tosi, Anastasia Khorosheva, Oleg Serikov
Abstract The paper describes initial experiments in data-driven cross-lingual morphological analysis of open-category words using a combination of unsupervised morpheme segmentation, annotation projection and an LSTM encoder-decoder model with attention. Our algorithm provides lemmatisation and morphological analysis generation for previously unseen low-resource language surface forms with only annotated data on the related languages given. Despite the inherently lossy annotation projection, we achieved the best lemmatisation F1-score in the VarDial 2019 Shared Task on Cross-Lingual Morphological Analysis for both Karachay-Balkar (Turkic languages, agglutinative morphology) and Sardinian (Romance languages, fusional morphology).
Tasks Morphological Analysis
Published 2019-06-01
URL https://www.aclweb.org/anthology/W19-1415/
PDF https://www.aclweb.org/anthology/W19-1415
PWC https://paperswithcode.com/paper/initial-experiments-in-cross-lingual
Repo
Framework
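
The model component named above is an LSTM encoder-decoder with attention. Below is a minimal PyTorch sketch of that architecture at the character level; the layer sizes are illustrative, and the morpheme segmentation and annotation-projection steps of the pipeline are omitted.

```python
import torch
import torch.nn as nn

class Seq2SeqAnalyzer(nn.Module):
    """Minimal character-level LSTM encoder-decoder with dot-product
    attention; hyperparameters are assumptions, not the paper's."""

    def __init__(self, src_vocab, tgt_vocab, emb=64, hid=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.LSTM(emb, hid, batch_first=True)
        self.decoder = nn.LSTM(emb, hid, batch_first=True)
        self.out = nn.Linear(2 * hid, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        enc_out, state = self.encoder(self.src_emb(src_ids))      # (B, S, H)
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)   # (B, T, H)
        scores = torch.bmm(dec_out, enc_out.transpose(1, 2))      # (B, T, S)
        attn = torch.softmax(scores, dim=-1)
        context = torch.bmm(attn, enc_out)                        # (B, T, H)
        return self.out(torch.cat([dec_out, context], dim=-1))    # (B, T, V)

# Smoke test: batch of 2, source length 7, target length 5.
model = Seq2SeqAnalyzer(src_vocab=40, tgt_vocab=30)
logits = model(torch.randint(0, 40, (2, 7)), torch.randint(0, 30, (2, 5)))
print(logits.shape)  # torch.Size([2, 5, 30])
```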

A Survey of the Perceived Text Adaptation Needs of Adults with Autism

Title A Survey of the Perceived Text Adaptation Needs of Adults with Autism
Authors Victoria Yaneva, Constantin Orasan, Le An Ha, Natalia Ponomareva
Abstract NLP approaches to automatic text adaptation often rely on user-need guidelines which are generic and do not account for the differences between various types of target groups. One such group is adults with high-functioning autism, who are usually able to read long sentences and comprehend difficult words but whose comprehension may be impeded by other linguistic constructions. This is especially challenging for real-world user-generated texts such as product reviews, which cannot be controlled editorially and are thus a particularly good application for automatic text adaptation systems. In this paper we present a mixed-methods survey conducted with 24 adult web-users diagnosed with autism and an age-matched control group of 33 neurotypical participants. The aim of the survey was to identify whether the group with autism experienced any barriers when reading online reviews, what these potential barriers were, and what NLP methods would be best suited to improve the accessibility of online reviews for people with autism. The group with autism consistently reported significantly greater difficulties with understanding online product reviews compared to the control group and identified issues related to text length, poor topic organisation, and the use of irony and sarcasm.
Tasks
Published 2019-09-01
URL https://www.aclweb.org/anthology/R19-1155/
PDF https://www.aclweb.org/anthology/R19-1155
PWC https://paperswithcode.com/paper/a-survey-of-the-perceived-text-adaptation
Repo
Framework

Unsupervised Procedure Learning via Joint Dynamic Summarization

Title Unsupervised Procedure Learning via Joint Dynamic Summarization
Authors Ehsan Elhamifar, Zwe Naing
Abstract We address the problem of unsupervised procedure learning from unconstrained instructional videos. Our goal is to produce a summary of the procedure key-steps and their ordering needed to perform a given task, as well as localization of the key-steps in videos. We develop a collaborative sequential subset selection framework, where we build a dynamic model on videos by learning states and transitions between them, where states correspond to different subactivities, including background and procedure steps. To extract procedure key-steps, we develop an optimization framework that finds a sequence of a small number of states that well represents all videos and is compatible with the state transition model. Given that our proposed optimization is non-convex and NP-hard, we develop a fast greedy algorithm whose complexity is linear in the length of the videos and the number of states of the dynamic model, hence, scales to large datasets. Under appropriate conditions on the transition model, our proposed formulation is approximately submodular, hence, comes with performance guarantees. We also present ProceL, a new multimodal dataset of 47.3 hours of videos and their transcripts from diverse tasks, for procedure learning evaluation. By extensive experiments, we show that our framework significantly improves the state of the art performance.
Tasks
Published 2019-10-01
URL http://openaccess.thecvf.com/content_ICCV_2019/html/Elhamifar_Unsupervised_Procedure_Learning_via_Joint_Dynamic_Summarization_ICCV_2019_paper.html
PDF http://openaccess.thecvf.com/content_ICCV_2019/papers/Elhamifar_Unsupervised_Procedure_Learning_via_Joint_Dynamic_Summarization_ICCV_2019_paper.pdf
PWC https://paperswithcode.com/paper/unsupervised-procedure-learning-via-joint
Repo
Framework
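
To make the greedy key-step selection concrete, the sketch below runs a facility-location-style greedy pick of k states that best cover all video segments. It is a simplified stand-in: the paper's objective additionally enforces compatibility with the learned state-transition model, which is dropped here.

```python
import numpy as np

def greedy_state_selection(sim, k):
    """Greedily pick k candidate states whose best-match coverage of all
    video segments (columns of `sim`) is maximal."""
    selected = []
    covered = np.zeros(sim.shape[1])
    for _ in range(k):
        # Marginal gain of adding each candidate state to the current set.
        gains = np.maximum(sim, covered).sum(axis=1) - covered.sum()
        for s in selected:            # never re-pick an already chosen state
            gains[s] = -np.inf
        best = int(np.argmax(gains))
        selected.append(best)
        covered = np.maximum(covered, sim[best])
    return selected

# Toy similarity matrix: 6 candidate states x 20 video segments.
rng = np.random.default_rng(0)
print(greedy_state_selection(rng.random((6, 20)), k=3))
```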

Face Alignment With Kernel Density Deep Neural Network

Title Face Alignment With Kernel Density Deep Neural Network
Authors Lisha Chen, Hui Su, Qiang Ji
Abstract Deep neural networks achieve good performance in many computer vision problems such as face alignment. However, when the testing image is challenging due to low resolution, occlusion or adversarial attacks, the accuracy of a deep neural network suffers greatly. Therefore, it is important to quantify the uncertainty in its predictions. A probabilistic neural network with Gaussian distribution over the target is typically used to quantify uncertainty for regression problems. However, in real-world problems especially computer vision tasks, the Gaussian assumption is too strong. To model more general distributions, such as multi-modal or asymmetric distributions, we propose to develop a kernel density deep neural network. Specifically, for face alignment, we adapt a state-of-the-art hourglass neural network into a probabilistic neural network framework with landmark probability map as its output. The model is trained by maximizing the conditional log likelihood. To exploit the output probability map, we extend the model to multi-stage so that the logits map from the previous stage can feed into the next stage to progressively improve the landmark detection accuracy. Extensive experiments on benchmark datasets against state-of-the-art unconstrained deep learning methods demonstrate that the proposed kernel density network achieves comparable or superior performance in terms of prediction accuracy. It further provides aleatoric uncertainty estimation in predictions.
Tasks Face Alignment
Published 2019-10-01
URL http://openaccess.thecvf.com/content_ICCV_2019/html/Chen_Face_Alignment_With_Kernel_Density_Deep_Neural_Network_ICCV_2019_paper.html
PDF http://openaccess.thecvf.com/content_ICCV_2019/papers/Chen_Face_Alignment_With_Kernel_Density_Deep_Neural_Network_ICCV_2019_paper.pdf
PWC https://paperswithcode.com/paper/face-alignment-with-kernel-density-deep
Repo
Framework
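
The training objective described above maximizes the conditional log-likelihood of a landmark probability map. A minimal PyTorch version of such a loss, a spatial softmax per landmark with cross-entropy at the ground-truth pixel, is sketched below; random logits stand in for the hourglass backbone, and the multi-stage refinement is omitted.

```python
import torch
import torch.nn.functional as F

def landmark_nll(logits, target_xy):
    """Negative conditional log-likelihood over landmark probability maps.

    logits:    (B, L, H, W) raw scores, one map per landmark.
    target_xy: (B, L, 2) integer ground-truth (row, col) per landmark.
    """
    B, L, H, W = logits.shape
    flat = logits.view(B * L, H * W)                    # spatial softmax domain
    idx = (target_xy[..., 0] * W + target_xy[..., 1]).view(B * L)
    return F.cross_entropy(flat, idx)                   # -log p(true pixel)

# Toy check: 2 images, 5 landmarks, 16x16 probability maps.
logits = torch.randn(2, 5, 16, 16)
target = torch.randint(0, 16, (2, 5, 2))
print(landmark_nll(logits, target))
```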

Efficient Smooth Non-Convex Stochastic Compositional Optimization via Stochastic Recursive Gradient Descent

Title Efficient Smooth Non-Convex Stochastic Compositional Optimization via Stochastic Recursive Gradient Descent
Authors Huizhuo Yuan, Xiangru Lian, Chris Junchi Li, Ji Liu, Wenqing Hu
Abstract Stochastic compositional optimization arises in many important machine learning tasks such as reinforcement learning and portfolio management. The objective function is the composition of two expectations of stochastic functions, and is more challenging to optimize than vanilla stochastic optimization problems. In this paper, we investigate stochastic compositional optimization in the general smooth non-convex setting. We employ a recently developed idea of Stochastic Recursive Gradient Descent to design a novel algorithm named SARAH-Compositional, and prove a sharp Incremental First-order Oracle (IFO) complexity upper bound for stochastic compositional optimization: $\mathcal{O}((n+m)^{1/2} \varepsilon^{-2})$ in the finite-sum case and $\mathcal{O}(\varepsilon^{-3})$ in the online case. Such a complexity is known to be the best one among IFO complexity results for non-convex stochastic compositional optimization. Numerical experiments validate the superior performance of our algorithm and theory.
Tasks Stochastic Optimization
Published 2019-12-01
URL http://papers.nips.cc/paper/8916-efficient-smooth-non-convex-stochastic-compositional-optimization-via-stochastic-recursive-gradient-descent
PDF http://papers.nips.cc/paper/8916-efficient-smooth-non-convex-stochastic-compositional-optimization-via-stochastic-recursive-gradient-descent.pdf
PWC https://paperswithcode.com/paper/efficient-smooth-non-convex-stochastic
Repo
Framework
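
For intuition, the sketch below implements the plain SARAH recursive gradient estimator on an ordinary finite sum. The paper's contribution extends this recursion to compositional objectives f(g(x)), additionally tracking inner function values and Jacobians; that extension is not reproduced here.

```python
import numpy as np

def sarah(grad_i, n, x0, lr=0.05, epochs=5, inner=50, rng=None):
    """Plain SARAH on (1/n) * sum_i f_i(x).
    grad_i(i, x) returns the gradient of component f_i at x."""
    rng = rng or np.random.default_rng(0)
    x = x0.astype(float).copy()
    for _ in range(epochs):
        # Outer step: full gradient restarts the recursive estimator.
        v = np.mean([grad_i(i, x) for i in range(n)], axis=0)
        x_prev, x = x, x - lr * v
        for _ in range(inner):
            i = rng.integers(n)
            v = grad_i(i, x) - grad_i(i, x_prev) + v   # SARAH recursion
            x_prev, x = x, x - lr * v
    return x

# Toy least-squares components: f_i(x) = 0.5 * (a_i . x - b_i)^2
rng = np.random.default_rng(1)
A, b = rng.normal(size=(20, 3)), rng.normal(size=20)
g = lambda i, x: (A[i] @ x - b[i]) * A[i]
print(sarah(g, n=20, x0=np.zeros(3)))
```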

TwistBytes - Identification of Cuneiform Languages and German Dialects at VarDial 2019

Title TwistBytes - Identification of Cuneiform Languages and German Dialects at VarDial 2019
Authors Fernando Benites, Pius von Däniken, Mark Cieliebak
Abstract We describe our approaches for the German Dialect Identification (GDI) and the Cuneiform Language Identification (CLI) tasks at the VarDial Evaluation Campaign 2019. The goal was to identify dialects of Swiss German in GDI and Sumerian and Akkadian in CLI. In GDI, the system should distinguish four dialects from the German-speaking part of Switzerland. Our system for GDI achieved third place out of 6 teams, with a macro averaged F-1 of 74.6%. In CLI, the system should distinguish seven languages written in cuneiform script. Our system achieved third place out of 8 teams, with a macro averaged F-1 of 74.7%.
Tasks Language Identification
Published 2019-06-01
URL https://www.aclweb.org/anthology/W19-1421/
PDF https://www.aclweb.org/anthology/W19-1421
PWC https://paperswithcode.com/paper/twistbytes-identification-of-cuneiform
Repo
Framework
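
The shared-task metric quoted above is the macro-averaged F1, i.e. the unweighted mean of per-class F1 scores. A quick scikit-learn check on toy dialect labels:

```python
from sklearn.metrics import f1_score

# Toy gold and predicted dialect labels; the real task uses four Swiss
# German dialect areas (GDI) or seven cuneiform languages (CLI).
gold = ["BE", "BS", "LU", "ZH", "ZH", "BE"]
pred = ["BE", "ZH", "LU", "ZH", "BS", "BE"]
print(f1_score(gold, pred, average="macro"))
```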

DTeam @ VarDial 2019: Ensemble based on skip-gram and triplet loss neural networks for Moldavian vs. Romanian cross-dialect topic identification

Title DTeam @ VarDial 2019: Ensemble based on skip-gram and triplet loss neural networks for Moldavian vs. Romanian cross-dialect topic identification
Authors Diana Tudoreanu
Abstract This paper presents the solution proposed by DTeam in the VarDial 2019 Evaluation Campaign for the Moldavian vs. Romanian cross-dialect topic identification task. The proposed solution is a Support Vector Machines (SVM) ensemble composed of two character-level neural networks. The first network is a skip-gram classification model formed of an embedding layer, three convolutional layers and two fully-connected layers. The second network has a similar architecture, but is trained using the triplet loss function.
Tasks
Published 2019-06-01
URL https://www.aclweb.org/anthology/W19-1422/
PDF https://www.aclweb.org/anthology/W19-1422
PWC https://paperswithcode.com/paper/dteam-vardial-2019-ensemble-based-on-skip
Repo
Framework
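
Below is a rough PyTorch sketch of a character-level network with the layer stack described above (embedding, three convolutions, two fully-connected layers). The skip-gram input representation is not detailed in the abstract, so plain character indices are used, and filter counts and kernel sizes are assumptions; the triplet-loss twin is only indicated in a comment.

```python
import torch
import torch.nn as nn

class CharCNN(nn.Module):
    """Character-level CNN: embedding, three convolutions, two FC layers.
    Layer sizes are illustrative, not the paper's."""
    def __init__(self, vocab=128, emb=32, classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.convs = nn.Sequential(
            nn.Conv1d(emb, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.fc = nn.Sequential(nn.Linear(64, 64), nn.ReLU(),
                                nn.Linear(64, classes))

    def forward(self, char_ids):                   # char_ids: (B, seq_len)
        h = self.emb(char_ids).transpose(1, 2)     # (B, emb, seq_len) for Conv1d
        return self.fc(self.convs(h).squeeze(-1))  # (B, classes)

net = CharCNN()
logits = net(torch.randint(0, 128, (4, 200)))      # 4 texts of 200 characters
print(logits.shape)                                # torch.Size([4, 2])

# The second network in the ensemble reuses this kind of backbone but is
# trained with a triplet objective, e.g. torch.nn.TripletMarginLoss on its
# intermediate embeddings, before both feed the SVM ensemble.
```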

Automatic Translation for Software with Safe Velocity

Title Automatic Translation for Software with Safe Velocity
Authors Dag Schmidtke, Declan Groves
Abstract
Tasks
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-6729/
PDF https://www.aclweb.org/anthology/W19-6729
PWC https://paperswithcode.com/paper/automatic-translation-for-software-with-safe
Repo
Framework