May 5, 2019

1583 words 8 mins read

Paper Group NANR 6


An End-to-End Chinese Discourse Parser with Adaptation to Explicit and Non-explicit Relation Recognition. Short-Dot: Computing Large Linear Transforms Distributedly Using Coded Short Dot Products. A primal-dual method for conic constrained distributed optimization problems. MUTT: Metric Unit TesTing for Language Generation Tasks. Dual Decomposed Le …

An End-to-End Chinese Discourse Parser with Adaptation to Explicit and Non-explicit Relation Recognition

Title An End-to-End Chinese Discourse Parser with Adaptation to Explicit and Non-explicit Relation Recognition
Authors Xiaomian Kang, Haoran Li, Long Zhou, Jiajun Zhang, Chengqing Zong
Abstract
Tasks Machine Translation, Question Answering, Text Summarization
Published 2016-08-01
URL https://www.aclweb.org/anthology/K16-2003/
PDF https://www.aclweb.org/anthology/K16-2003
PWC https://paperswithcode.com/paper/an-end-to-end-chinese-discourse-parser-with
Repo
Framework

Short-Dot: Computing Large Linear Transforms Distributedly Using Coded Short Dot Products

Title Short-Dot: Computing Large Linear Transforms Distributedly Using Coded Short Dot Products
Authors Sanghamitra Dutta, Viveck Cadambe, Pulkit Grover
Abstract Faced with the saturation of Moore’s law and the increasing size and dimension of data, system designers have increasingly resorted to parallel and distributed computing to reduce the computation time of machine-learning algorithms. However, distributed computing is often bottlenecked by a small fraction of slow processors called “stragglers” that reduce the speed of computation because the fusion node has to wait for all processors to complete their processing. To combat the effect of stragglers, recent literature proposes introducing redundancy in computations across processors, e.g., using repetition-based strategies or erasure codes. The fusion node can exploit this redundancy by completing the computation using outputs from only a subset of the processors, ignoring the stragglers. In this paper, we propose a novel technique, called “Short-Dot”, to introduce redundant computations in a coding-theory-inspired fashion for computing linear transforms of long vectors. Instead of computing the long dot products required by the original linear transform, we construct a larger number of redundant, short dot products that can be computed more efficiently at individual processors. Further, only a subset of these short dot products is required at the fusion node to finish the computation successfully. We demonstrate through probabilistic analysis as well as experiments on computing clusters that Short-Dot offers significant speed-ups compared to existing techniques. We also derive trade-offs between the length of the dot products and the resilience to stragglers (the number of processors required to finish) for any such strategy, and compare them to those achieved by our strategy.
Tasks
Published 2016-12-01
URL http://papers.nips.cc/paper/6329-short-dot-computing-large-linear-transforms-distributedly-using-coded-short-dot-products
PDF http://papers.nips.cc/paper/6329-short-dot-computing-large-linear-transforms-distributedly-using-coded-short-dot-products.pdf
PWC https://paperswithcode.com/paper/short-dot-computing-large-linear-transforms
Repo
Framework
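The coded-computation idea that Short-Dot builds on can be illustrated with a minimal sketch: encode the rows of the matrix with an MDS (Vandermonde) code so the fusion node can recover the full product from any m of p workers. This shows only the generic straggler-tolerance mechanism; the paper's actual construction additionally makes each coded row sparse, so every worker computes a genuinely *short* dot product. All sizes below are invented for illustration.

```python
import numpy as np

# Toy MDS-coded matrix-vector multiplication for straggler tolerance.
rng = np.random.default_rng(0)
m, n, p = 3, 8, 5            # m true rows, p workers, tolerates p - m stragglers
A = rng.standard_normal((m, n))
x = rng.standard_normal(n)

# Vandermonde encoding matrix: any m of its p rows form an invertible block.
F = np.vander(np.arange(1, p + 1), m, increasing=True).astype(float)
A_coded = F @ A              # worker i stores coded row i

# Each worker computes one dot product; pretend workers 1 and 3 straggle.
results = {i: A_coded[i] @ x for i in range(p) if i not in (1, 3)}

# Fusion node: recover A @ x from any m completed workers.
done = sorted(results)[:m]
y = np.linalg.solve(F[done], np.array([results[i] for i in done]))
assert np.allclose(y, A @ x)
```

The fusion node never waits for the two stragglers; solving the small m-by-m Vandermonde system reconstructs the exact product.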

A primal-dual method for conic constrained distributed optimization problems

Title A primal-dual method for conic constrained distributed optimization problems
Authors Necdet Serhat Aybat, Erfan Yazdandoost Hamedani
Abstract We consider cooperative multi-agent consensus optimization problems over an undirected network of agents, where only those agents connected by an edge can directly communicate. The objective is to minimize the sum of agent-specific composite convex functions over agent-specific private conic constraint sets; hence, the optimal consensus decision should lie in the intersection of these private sets. We provide convergence rates in sub-optimality, infeasibility and consensus violation; examine the effect of underlying network topology on the convergence rates of the proposed decentralized algorithms; and show how to extend these methods to handle time-varying communication networks.
Tasks Distributed Optimization
Published 2016-12-01
URL http://papers.nips.cc/paper/6242-a-primal-dual-method-for-conic-constrained-distributed-optimization-problems
PDF http://papers.nips.cc/paper/6242-a-primal-dual-method-for-conic-constrained-distributed-optimization-problems.pdf
PWC https://paperswithcode.com/paper/a-primal-dual-method-for-conic-constrained
Repo
Framework
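As a rough illustration of the problem setup only, the sketch below runs plain projected consensus gradient descent, not the paper's primal-dual method, on two agents with private quadratic objectives and private box constraints; the consensus point must lie in the intersection of the boxes. All numbers are invented.

```python
import numpy as np

# Two agents minimize ||x - a_i||^2 over private boxes [lo_i, hi_i],
# alternating neighbor averaging, a gradient step, and a projection.
a = np.array([[0.0, 4.0], [2.0, 0.0]])     # agent-specific targets
lo = np.array([[1.0, -5.0], [-5.0, 1.0]])  # lower corners of private boxes
hi = np.array([[5.0, 5.0], [5.0, 5.0]])    # upper corners
W = np.full((2, 2), 0.5)                   # doubly stochastic mixing matrix

x = np.zeros((2, 2))                       # row i = agent i's iterate
for _ in range(1000):
    x = W @ x                              # average with neighbors
    x = x - 0.01 * 2 * (x - a)             # gradient step on ||x - a_i||^2
    x = np.clip(x, lo, hi)                 # project onto each private box

# The agents approximately agree on a point in the boxes' intersection.
```

With a constant step size this simple scheme reaches only approximate consensus; the paper's primal-dual algorithm handles general conic constraints and gives explicit rates for sub-optimality, infeasibility, and consensus violation.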

MUTT: Metric Unit TesTing for Language Generation Tasks

Title MUTT: Metric Unit TesTing for Language Generation Tasks
Authors William Boag, Renan Campos, Kate Saenko, Anna Rumshisky
Abstract
Tasks Image Captioning, Machine Translation, Text Generation, Video Captioning
Published 2016-08-01
URL https://www.aclweb.org/anthology/P16-1182/
PDF https://www.aclweb.org/anthology/P16-1182
PWC https://paperswithcode.com/paper/mutt-metric-unit-testing-for-language
Repo
Framework
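The MUTT idea of unit-testing a generation metric can be illustrated with a toy check: a metric should score a faithful candidate above a systematically corrupted one (here a word shuffle). The bigram-precision metric below is an invented stand-in, not one of the metrics examined in the paper.

```python
# A "metric unit test": corrupting a candidate by shuffling its words
# must lower the metric's score relative to the faithful candidate.
def bigram_precision(candidate, reference):
    cand, ref = candidate.split(), reference.split()
    cand_bg = list(zip(cand, cand[1:]))
    ref_bg = set(zip(ref, ref[1:]))
    return sum(bg in ref_bg for bg in cand_bg) / max(len(cand_bg), 1)

reference = "a man is riding a horse"
faithful = "a man is riding a horse"
shuffled = "horse a riding is man a"   # word-shuffle corruption

assert bigram_precision(faithful, reference) > bigram_precision(shuffled, reference)
```

A unigram-overlap metric would pass the shuffled candidate unchanged, which is exactly the kind of blind spot such corruption-based tests are designed to expose.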

Dual Decomposed Learning with Factorwise Oracle for Structural SVM of Large Output Domain

Title Dual Decomposed Learning with Factorwise Oracle for Structural SVM of Large Output Domain
Authors Ian En-Hsu Yen, Xiangru Huang, Kai Zhong, Ruohan Zhang, Pradeep K. Ravikumar, Inderjit S. Dhillon
Abstract Many applications of machine learning involve structured outputs with large domains, where learning a structured predictor is prohibitive due to repetitive calls to an expensive inference oracle. In this work, we show that, by decomposing the training of a Structural Support Vector Machine (SVM) into a series of multiclass SVM problems connected through messages, one can replace the expensive structured oracle with a Factorwise Maximization Oracle (FMO) that admits an efficient implementation with complexity sublinear in the factor domain. A Greedy Direction Method of Multiplier (GDMM) algorithm is proposed to exploit the sparsity of messages, and it guarantees $\epsilon$ sub-optimality after $O(\log(1/\epsilon))$ passes of FMO calls. We conduct experiments on chain-structured and fully-connected problems with large output domains. The proposed approach is orders of magnitude faster than state-of-the-art training algorithms for Structural SVMs.
Tasks
Published 2016-12-01
URL http://papers.nips.cc/paper/6422-dual-decomposed-learning-with-factorwise-oracle-for-structural-svm-of-large-output-domain
PDF http://papers.nips.cc/paper/6422-dual-decomposed-learning-with-factorwise-oracle-for-structural-svm-of-large-output-domain.pdf
PWC https://paperswithcode.com/paper/dual-decomposed-learning-with-factorwise
Repo
Framework

Results of the WMT16 Tuning Shared Task

Title Results of the WMT16 Tuning Shared Task
Authors Bushra Jawaid, Amir Kamran, Miloš Stanojević, Ondřej Bojar
Abstract
Tasks Machine Translation
Published 2016-08-01
URL https://www.aclweb.org/anthology/W16-2303/
PDF https://www.aclweb.org/anthology/W16-2303
PWC https://paperswithcode.com/paper/results-of-the-wmt16-tuning-shared-task
Repo
Framework

Analysis of Policy Agendas: Lessons Learned from Automatic Topic Classification of Croatian Political Texts

Title Analysis of Policy Agendas: Lessons Learned from Automatic Topic Classification of Croatian Political Texts
Authors Mladen Karan, Jan Šnajder, Daniela Širinić, Goran Glavaš
Abstract
Tasks Decision Making
Published 2016-08-01
URL https://www.aclweb.org/anthology/W16-2102/
PDF https://www.aclweb.org/anthology/W16-2102
PWC https://paperswithcode.com/paper/analysis-of-policy-agendas-lessons-learned
Repo
Framework

Combinatorics vs Grammar: Archeology of Computational Poetry in Tape Mark I

Title Combinatorics vs Grammar: Archeology of Computational Poetry in Tape Mark I
Authors Alessandro Mazzei, Andrea Valle
Abstract
Tasks Text Generation
Published 2016-09-01
URL https://www.aclweb.org/anthology/W16-5509/
PDF https://www.aclweb.org/anthology/W16-5509
PWC https://paperswithcode.com/paper/combinatorics-vs-grammar-archeology-of
Repo
Framework

Overview of NLP-TEA 2016 Shared Task for Chinese Grammatical Error Diagnosis

Title Overview of NLP-TEA 2016 Shared Task for Chinese Grammatical Error Diagnosis
Authors Lung-Hao Lee, Gaoqi Rao, Liang-Chih Yu, Endong Xun, Baolin Zhang, Li-Ping Chang
Abstract This paper presents the NLP-TEA 2016 shared task on Chinese grammatical error diagnosis, which seeks to identify grammatical error types and their ranges of occurrence within sentences written by learners of Chinese as a foreign language. We describe the task definition, data preparation, performance metrics, and evaluation results. Of the 15 teams registered for this shared task, 9 teams developed systems and submitted a total of 36 runs. We expect this evaluation campaign to lead to the development of more advanced NLP techniques for educational applications, especially for Chinese error detection. All data sets with gold standards and scoring scripts are made publicly available to researchers.
Tasks Grammatical Error Correction
Published 2016-12-01
URL https://www.aclweb.org/anthology/W16-4906/
PDF https://www.aclweb.org/anthology/W16-4906
PWC https://paperswithcode.com/paper/overview-of-nlp-tea-2016-shared-task-for
Repo
Framework

Visual Question Answering with Question Representation Update (QRU)

Title Visual Question Answering with Question Representation Update (QRU)
Authors Ruiyu Li, Jiaya Jia
Abstract Our method aims at reasoning over natural language questions and visual images. Given a natural language question about an image, our model updates the question representation iteratively by selecting image regions relevant to the query, and learns to give the correct answer. Our model contains several reasoning layers that exploit complex visual relations in the visual question answering (VQA) task. The proposed network is end-to-end trainable through back-propagation, with its weights initialized using a pre-trained convolutional neural network (CNN) and a gated recurrent unit (GRU). Our method is evaluated on the challenging COCO-QA and VQA datasets and yields state-of-the-art performance.
Tasks Question Answering, Visual Question Answering
Published 2016-12-01
URL http://papers.nips.cc/paper/6261-visual-question-answering-with-question-representation-update-qru
PDF http://papers.nips.cc/paper/6261-visual-question-answering-with-question-representation-update-qru.pdf
PWC https://paperswithcode.com/paper/visual-question-answering-with-question
Repo
Framework
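The iterative question-update step can be sketched schematically: attend over image-region features with the current question vector, then fold the attended visual evidence back into the question representation. Dimensions, the random stand-in features, and the exact update rule below are all illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_regions, n_layers = 16, 5, 2
q = rng.standard_normal(d)               # question embedding (e.g. from a GRU)
V = rng.standard_normal((n_regions, d))  # region features (e.g. from a CNN)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for _ in range(n_layers):                # "reasoning layers"
    attn = softmax(V @ q)                # relevance of each region to q
    q = q + attn @ V                     # update the question representation

# q now mixes the question with query-relevant visual evidence;
# an answer classifier would operate on this updated vector.
```

Stacking several such layers lets later attention steps condition on the regions already selected, which is the "iterative update" the abstract describes.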

Merged bilingual trees based on Universal Dependencies in Machine Translation

Title Merged bilingual trees based on Universal Dependencies in Machine Translation
Authors David Mareček
Abstract
Tasks Language Modelling, Machine Translation
Published 2016-08-01
URL https://www.aclweb.org/anthology/W16-2318/
PDF https://www.aclweb.org/anthology/W16-2318
PWC https://paperswithcode.com/paper/merged-bilingual-trees-based-on-universal
Repo
Framework

LIMSI’s Contribution to the WMT’16 Biomedical Translation Task

Title LIMSI’s Contribution to the WMT’16 Biomedical Translation Task
Authors Julia Ive, Aurélien Max, François Yvon
Abstract
Tasks Machine Translation
Published 2016-08-01
URL https://www.aclweb.org/anthology/W16-2337/
PDF https://www.aclweb.org/anthology/W16-2337
PWC https://paperswithcode.com/paper/limsis-contribution-to-the-wmt16-biomedical
Repo
Framework

Japanese Lexical Simplification for Non-Native Speakers

Title Japanese Lexical Simplification for Non-Native Speakers
Authors Muhaimin Hading, Yuji Matsumoto, Maki Sakamoto
Abstract This paper introduces Japanese lexical simplification: the task of replacing difficult words in a given sentence to produce a new sentence with simple words, without changing the original meaning of the sentence. We propose a method of supervised regression learning to estimate the difficulty ordering of words, using statistical features obtained from two types of Japanese corpora. For word similarity, we use a Japanese thesaurus and dependency-based word embeddings. The proposed method is evaluated by comparing the difficulty orderings of the words.
Tasks Language Modelling, Lexical Simplification, Machine Translation, Word Alignment, Word Embeddings
Published 2016-12-01
URL https://www.aclweb.org/anthology/W16-4912/
PDF https://www.aclweb.org/anthology/W16-4912
PWC https://paperswithcode.com/paper/japanese-lexical-simplification-for-non
Repo
Framework
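The pipeline shape can be sketched in miniature: generate synonym candidates from a thesaurus, estimate word difficulty, and substitute the easiest candidate. The frequency-based difficulty score below is a crude stand-in for the paper's supervised regression, and the tiny English corpus and thesaurus are invented.

```python
# Toy lexical simplification: rarer word -> assumed harder.
corpus_freq = {"buy": 900, "purchase": 120, "procure": 8}
thesaurus = {"procure": ["purchase", "buy"]}

def difficulty(word):
    # Stand-in for the paper's regression over corpus-derived features.
    return 1.0 / (1 + corpus_freq.get(word, 0))

def simplify(sentence):
    out = []
    for w in sentence.split():
        candidates = [w] + thesaurus.get(w, [])
        out.append(min(candidates, key=difficulty))  # pick easiest synonym
    return " ".join(out)

print(simplify("we procure supplies"))   # -> "we buy supplies"
```

The real system additionally filters candidates by meaning preservation, using thesaurus relations and dependency-based word embeddings, before ranking by the learned difficulty ordering.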

Investigating the Sources of Linguistic Alignment in Conversation

Title Investigating the Sources of Linguistic Alignment in Conversation
Authors Gabriel Doyle, Michael C. Frank
Abstract
Tasks
Published 2016-08-01
URL https://www.aclweb.org/anthology/P16-1050/
PDF https://www.aclweb.org/anthology/P16-1050
PWC https://paperswithcode.com/paper/investigating-the-sources-of-linguistic
Repo
Framework

Generating and Scoring Correction Candidates in Chinese Grammatical Error Diagnosis

Title Generating and Scoring Correction Candidates in Chinese Grammatical Error Diagnosis
Authors Shao-Heng Chen, Yu-Lin Tsai, Chuan-Jie Lin
Abstract Grammatical error diagnosis is an essential part of a language-learning tutoring system. Based on the data sets of Chinese grammatical error detection tasks, we propose a system which measures the likelihood of correction candidates generated by deleting or inserting characters or words, moving substrings to different positions, substituting prepositions with other prepositions, or substituting words with their synonyms or similar strings. Sentence likelihood is measured based on the frequencies of substrings from the space-removed version of Google n-grams. Evaluation on the training set shows that the Missing-related and Selection-related candidate generation methods have promising performance. Our final system achieved a precision of 30.28% and a recall of 62.85% at the identification level, evaluated on the test set.
Tasks Decision Making
Published 2016-12-01
URL https://www.aclweb.org/anthology/W16-4917/
PDF https://www.aclweb.org/anthology/W16-4917
PWC https://paperswithcode.com/paper/generating-and-scoring-correction-candidates
Repo
Framework
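The generate-and-score loop can be sketched in miniature: produce deletion candidates for a sentence with a redundant-word error, then keep the candidate with the highest n-gram likelihood. The English words and the tiny bigram-frequency table below are invented stand-ins for the Chinese data and the Google n-grams used in the paper.

```python
import math

# Hypothetical bigram frequency table (stand-in for Google n-grams).
bigram_freq = {"I like": 50, "like tea": 30, "I I": 1, "I tea": 2}

def score(sentence):
    # Sum of log-frequencies of the sentence's bigrams (add-one smoothed).
    words = sentence.split()
    bigrams = [" ".join(words[i:i + 2]) for i in range(len(words) - 1)]
    return sum(math.log(bigram_freq.get(bg, 0) + 1) for bg in bigrams)

def deletion_candidates(sentence):
    # One candidate per single-word deletion (Redundant-type corrections).
    words = sentence.split()
    return [" ".join(words[:i] + words[i + 1:]) for i in range(len(words))]

erroneous = "I I like tea"               # redundant-word error
best = max(deletion_candidates(erroneous), key=score)
print(best)                              # -> "I like tea"
```

The full system also generates insertion, movement, and substitution candidates, and scores them against substring frequencies from the space-removed Google n-grams rather than a toy table.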