May 5, 2019

2038 words 10 mins read

Paper Group NANR 77

LSTM Shift-Reduce CCG Parsing. TermoPL - a Flexible Tool for Terminology Extraction. Learning Transferrable Representations for Unsupervised Domain Adaptation. Corpora for Learning the Mutual Relationship between Semantic Relatedness and Textual Entailment. Syntax-based Multi-system Machine Translation. Progressively Parsing Interactional Objects f …

LSTM Shift-Reduce CCG Parsing

Title LSTM Shift-Reduce CCG Parsing
Authors Wenduan Xu
Abstract
Tasks Feature Engineering
Published 2016-11-01
URL https://www.aclweb.org/anthology/papers/D16-1181/d16-1181
PDF https://www.aclweb.org/anthology/D16-1181v2
PWC https://paperswithcode.com/paper/lstm-shift-reduce-ccg-parsing
Repo
Framework

TermoPL - a Flexible Tool for Terminology Extraction

Title TermoPL - a Flexible Tool for Terminology Extraction
Authors Malgorzata Marciniak, Agnieszka Mykowiecka, Piotr Rychlik
Abstract The purpose of this paper is to introduce the TermoPL tool, created to extract terminology from domain corpora in Polish. The program extracts noun phrases as term candidates with the help of a simple grammar that can be adapted to the user's needs. It applies the C-value method to rank term candidates, which are either the longest identified nominal phrases or their nested subphrases. The method operates on simplified base forms in order to unify morphological variants of terms and to recognize their contexts. We support the recognition of nested terms by word connection strength, which allows us to eliminate truncated phrases from the top part of the term list. The program has an option to convert simplified forms of phrases into correct phrases in the nominative case. TermoPL accepts as input morphologically annotated and disambiguated domain texts and creates a list of terms, the top part of which comprises domain terminology. It can also compare two candidate term lists using three different coefficients showing the asymmetry of term occurrences in the data.
Tasks
Published 2016-05-01
URL https://www.aclweb.org/anthology/L16-1361/
PDF https://www.aclweb.org/anthology/L16-1361
PWC https://paperswithcode.com/paper/termopl-a-flexible-tool-for-terminology
Repo
Framework
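
The C-value ranking that TermoPL applies can be sketched in a few lines. Below is a minimal illustrative implementation of the standard C-value formula (log2 of the candidate length times its frequency, discounted by the mean frequency of the longer candidates that nest it); the candidate counts are hypothetical, and the real tool additionally works on simplified base forms and word connection strength.

```python
import math
from collections import defaultdict

def c_value(candidates):
    """candidates: dict mapping a term candidate (tuple of base forms)
    to its corpus frequency.  Returns candidates ranked by C-value."""
    # For every candidate, collect the longer candidates that nest it.
    nested_in = defaultdict(list)
    for longer in candidates:
        for shorter in candidates:
            if len(shorter) < len(longer) and any(
                    longer[i:i + len(shorter)] == shorter
                    for i in range(len(longer) - len(shorter) + 1)):
                nested_in[shorter].append(longer)

    scores = {}
    for cand, freq in candidates.items():
        containers = nested_in[cand]
        if not containers:                  # a longest (non-nested) candidate
            scores[cand] = math.log2(len(cand)) * freq
        else:                               # nested: discount by its containers
            mean_container_freq = sum(candidates[c] for c in containers) / len(containers)
            scores[cand] = math.log2(len(cand)) * (freq - mean_container_freq)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# hypothetical counts over a domain corpus
print(c_value({("nuclear", "power", "plant"): 10, ("power", "plant"): 14}))
```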

Learning Transferrable Representations for Unsupervised Domain Adaptation

Title Learning Transferrable Representations for Unsupervised Domain Adaptation
Authors Ozan Sener, Hyun Oh Song, Ashutosh Saxena, Silvio Savarese
Abstract Supervised learning with large scale labelled datasets and deep layered models has caused a paradigm shift in diverse areas in learning and recognition. However, this approach still suffers from generalization issues under the presence of a domain shift between the training and the test data distribution. Since unsupervised domain adaptation algorithms directly address this domain shift problem between a labelled source dataset and an unlabelled target dataset, recent papers have shown promising results by fine-tuning the networks with domain adaptation loss functions which try to align the mismatch between the training and testing data distributions. Nevertheless, these recent deep learning based domain adaptation approaches still suffer from issues such as high sensitivity to the gradient reversal hyperparameters and overfitting during the fine-tuning stage. In this paper, we propose a unified deep learning framework where the representation, cross domain transformation, and target label inference are all jointly optimized in an end-to-end fashion for unsupervised domain adaptation. Our experiments show that the proposed method significantly outperforms state-of-the-art algorithms in both object recognition and digit classification experiments by a large margin. We will make our learned models as well as the source code available immediately upon acceptance.
Tasks Domain Adaptation, Object Recognition, Unsupervised Domain Adaptation
Published 2016-12-01
URL http://papers.nips.cc/paper/6360-learning-transferrable-representations-for-unsupervised-domain-adaptation
PDF http://papers.nips.cc/paper/6360-learning-transferrable-representations-for-unsupervised-domain-adaptation.pdf
PWC https://paperswithcode.com/paper/learning-transferrable-representations-for
Repo
Framework
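
As a rough illustration of the transductive idea (jointly inferring target labels and a cross-domain transformation), here is a heavily simplified NumPy sketch that alternates 1-NN pseudo-labelling of target points with a gradient step pulling class-conditional target means toward the corresponding source means. It is a stand-in under stated assumptions, not the paper's end-to-end deep model; the function name and the toy update rule are my own.

```python
import numpy as np

def transduce(src_x, src_y, tgt_x, iters=10, lr=0.05):
    """Alternate (1) pseudo-labelling each target point by its nearest source
    neighbour in the transformed space and (2) updating a linear map W so that
    class-conditional target means move toward the source means."""
    d = src_x.shape[1]
    W = np.eye(d)                                  # cross-domain transformation
    for _ in range(iters):
        tgt_proj = tgt_x @ W
        dists = ((tgt_proj[:, None, :] - src_x[None, :, :]) ** 2).sum(-1)
        pseudo = src_y[dists.argmin(axis=1)]       # 1-NN label transduction
        for c in np.unique(src_y):
            if (pseudo == c).any():                # align the class means
                gap = src_x[src_y == c].mean(0) - tgt_proj[pseudo == c].mean(0)
                W += lr * np.outer(tgt_x[pseudo == c].mean(0), gap)
    return pseudo, W
```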

Corpora for Learning the Mutual Relationship between Semantic Relatedness and Textual Entailment

Title Corpora for Learning the Mutual Relationship between Semantic Relatedness and Textual Entailment
Authors Ngoc Phuoc An Vo, Octavian Popescu
Abstract In this paper we present the creation of a corpus annotated with both semantic relatedness (SR) scores and textual entailment (TE) judgments. In building this corpus we aimed at discovering the relationship, if any, between these two tasks, for the mutual benefit of resolving one of them by relying on the insights gained from the other. We considered corpora already annotated with TE judgments and proceeded to annotate them manually with SR scores. The RTE 1-4 corpora used in the PASCAL competition fit our needs. The annotators worked independently of one another and did not have access to the TE judgments during annotation. The intuition that the two annotations are correlated received major support from this experiment, and this finding led to a system that uses this information to revise the initial estimates of SR scores. As semantic relatedness is one of the most general and difficult tasks in natural language processing, we expect that future systems will combine different sources of information in order to solve it. Our work suggests that textual entailment plays a quantifiable role in addressing it.
Tasks Natural Language Inference
Published 2016-05-01
URL https://www.aclweb.org/anthology/L16-1539/
PDF https://www.aclweb.org/anthology/L16-1539
PWC https://paperswithcode.com/paper/corpora-for-learning-the-mutual-relationship
Repo
Framework
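
The revision step described in this entry (using TE judgments to adjust SR estimates) can be illustrated with a toy NumPy computation; the scores, labels, and interpolation weight below are invented for the example, not taken from the paper.

```python
import numpy as np

sr = np.array([4.2, 1.3, 3.8, 2.0, 4.9])   # hypothetical SR scores on a 1-5 scale
te = np.array([1,   0,   1,   0,   1  ])   # hypothetical TE judgments (1 = entailment)

# Point-biserial correlation between the two annotations (== Pearson here).
r = np.corrcoef(sr, te)[0, 1]

# Revise the SR estimates by interpolating toward the TE signal.
alpha = 0.3                                 # invented weight for illustration
sr_revised = (1 - alpha) * sr + alpha * (1 + 4 * te)
print(r, sr_revised)
```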

Syntax-based Multi-system Machine Translation

Title Syntax-based Multi-system Machine Translation
Authors Matīss Rikters, Inguna Skadiņa
Abstract This paper describes a hybrid machine translation system that employs a parser to acquire syntactic chunks of a source sentence, translates the chunks with multiple online machine translation (MT) system application program interfaces (APIs), and creates output by combining the translated chunks to obtain the best possible translation. The selection of the best translation hypothesis is performed by calculating the perplexity of each translated chunk. The goal of this approach is to enhance the baseline multi-system hybrid translation (MHyT) system, which uses only a language model to select the best translation from the translations obtained with different APIs, and to improve overall English-Latvian machine translation quality over each of the individual MT APIs. The presented syntax-based multi-system translation (SyMHyT) system demonstrates an improvement in terms of BLEU and NIST scores compared to the baseline system. Improvements range from 1.74 to 2.54 BLEU points.
Tasks Language Modelling, Machine Translation
Published 2016-05-01
URL https://www.aclweb.org/anthology/L16-1093/
PDF https://www.aclweb.org/anthology/L16-1093
PWC https://paperswithcode.com/paper/syntax-based-multi-system-machine-translation
Repo
Framework
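
A minimal sketch of the chunk-selection step: score each API's translation of a chunk with a language model and keep the lowest-perplexity one. The toy add-one-smoothed bigram LM and the example strings below are assumptions for illustration; the actual system queries online MT APIs and uses a full language model.

```python
import math
from collections import Counter

def train_bigram_lm(corpus):
    """Add-one smoothed bigram LM over a toy corpus (list of token lists)."""
    unigrams, bigrams = Counter(), Counter()
    for sent in corpus:
        toks = ["<s>"] + sent + ["</s>"]
        unigrams.update(toks[:-1])
        bigrams.update(zip(toks, toks[1:]))
    vocab = len(unigrams) + 1
    return lambda prev, w: math.log((bigrams[(prev, w)] + 1) / (unigrams[prev] + vocab))

def perplexity(logprob, tokens):
    toks = ["<s>"] + tokens + ["</s>"]
    return math.exp(-sum(logprob(p, w) for p, w in zip(toks, toks[1:])) / (len(toks) - 1))

def best_chunk(candidates, logprob):
    """Keep, for one source chunk, the API output with the lowest perplexity."""
    return min(candidates, key=lambda c: perplexity(logprob, c.split()))

logprob = train_bigram_lm([["the", "cat", "sleeps"], ["the", "dog", "sleeps"]])
print(best_chunk(["cat the sleeps", "the cat sleeps"], logprob))   # -> "the cat sleeps"
```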

Progressively Parsing Interactional Objects for Fine Grained Action Detection

Title Progressively Parsing Interactional Objects for Fine Grained Action Detection
Authors Bingbing Ni, Xiaokang Yang, Shenghua Gao
Abstract Fine-grained video action analysis often requires reliable detection and tracking of various interacting objects and human body parts, denoted as interactional object parsing. However, most previous methods, based on either independent or joint object detection, may suffer from high model complexity and challenging image content, e.g., illumination/pose/appearance/scale variation, motion, occlusion, etc. In this work, we propose an end-to-end system based on a recurrent neural network to perform frame-by-frame interactional object parsing, which alleviates the difficulty in an incremental manner. Our key innovation is that, instead of jointly outputting all object detections at once, for each frame we use a set of long short-term memory (LSTM) nodes to incrementally refine the detections. After passing each LSTM node, more object detections are consolidated, and thus more contextual information can be utilized to determine the more difficult detections. Extensive experiments on two benchmark fine-grained activity datasets demonstrate that our proposed algorithm achieves better interacting object detection performance, which in turn boosts action recognition performance over the state-of-the-art.
Tasks Action Detection, Fine-Grained Action Detection, Object Detection, Temporal Action Localization
Published 2016-06-01
URL http://openaccess.thecvf.com/content_cvpr_2016/html/Ni_Progressively_Parsing_Interactional_CVPR_2016_paper.html
PDF http://openaccess.thecvf.com/content_cvpr_2016/papers/Ni_Progressively_Parsing_Interactional_CVPR_2016_paper.pdf
PWC https://paperswithcode.com/paper/progressively-parsing-interactional-objects
Repo
Framework
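
To make the "consolidate easy detections first, then use them as context for harder ones" idea concrete, here is a small, deliberately non-neural stand-in: a few refinement stages with a decreasing acceptance threshold, where overlap with already-consolidated boxes boosts a candidate. The LSTM nodes of the actual paper are replaced by this hand-written rule, and all thresholds and data structures below are invented.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    score: float                 # detector confidence in [0, 1]
    box: tuple                   # (x1, y1, x2, y2)

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def refine(candidates, stages=3, start_thresh=0.9, step=0.2, context_bonus=0.15):
    """Accept confident detections first; later stages accept weaker candidates
    whose boxes are supported by detections consolidated in earlier stages."""
    consolidated, remaining = [], sorted(candidates, key=lambda d: -d.score)
    for stage in range(stages):
        thresh, still_open = start_thresh - step * stage, []
        for det in remaining:
            support = max((iou(det.box, c.box) for c in consolidated), default=0.0)
            (consolidated if det.score + context_bonus * support >= thresh
             else still_open).append(det)
        remaining = still_open
    return consolidated
```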

Towards Time-Aware Knowledge Graph Completion

Title Towards Time-Aware Knowledge Graph Completion
Authors Tingsong Jiang, Tianyu Liu, Tao Ge, Lei Sha, Baobao Chang, Sujian Li, Zhifang Sui
Abstract Knowledge graph (KG) completion adds new facts to a KG by making inferences from existing facts. Most existing methods ignore the time information and only learn from time-unknown fact triples. In dynamic environments that evolve over time, it is important and challenging for knowledge graph completion models to take into account the temporal aspects of facts. In this paper, we present a novel time-aware knowledge graph completion model that is able to predict links in a KG using both the existing facts and the temporal information of the facts. To incorporate the happening time of facts, we propose a time-aware KG embedding model using temporal order information among facts. To incorporate the valid time of facts, we propose a joint time-aware inference model based on Integer Linear Programming (ILP) using temporal consistency information as constraints. We further integrate the two models to make full use of global temporal information. We empirically evaluate our models on the time-aware KG completion task. Experimental results show that our time-aware models consistently achieve the state-of-the-art on temporal facts.
Tasks Knowledge Graph Completion, Knowledge Graphs, Question Answering, Relation Extraction
Published 2016-12-01
URL https://www.aclweb.org/anthology/C16-1161/
PDF https://www.aclweb.org/anthology/C16-1161
PWC https://paperswithcode.com/paper/towards-time-aware-knowledge-graph-completion
Repo
Framework
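
The ILP step (keep the highest-scoring predicted facts subject to temporal consistency) can be sketched with PuLP. Everything below is a hedged toy: the facts, scores, and the single bornIn-before-diedIn constraint are invented, and the real model combines this inference with a temporal-order-aware embedding.

```python
import pulp

# Hypothetical candidate facts: (subject, relation, object, year, model score)
facts = [
    ("einstein", "bornIn",  "ulm",       1879, 0.95),
    ("einstein", "diedIn",  "princeton", 1955, 0.90),
    ("einstein", "diedIn",  "berlin",    1870, 0.40),   # dies before being born
]

prob = pulp.LpProblem("time_aware_kgc", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(len(facts))]
prob += pulp.lpSum(f[4] * x[i] for i, f in enumerate(facts))      # total score

# Temporal consistency: for the same subject, a kept diedIn fact may not
# precede a kept bornIn fact.
for i, fi in enumerate(facts):
    for j, fj in enumerate(facts):
        if fi[0] == fj[0] and fi[1] == "bornIn" and fj[1] == "diedIn" and fj[3] < fi[3]:
            prob += x[i] + x[j] <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([f for f, var in zip(facts, x) if var.value() == 1])
```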

Sentiment Analysis for Low Resource Languages: A Study on Informal Indonesian Tweets

Title Sentiment Analysis for Low Resource Languages: A Study on Informal Indonesian Tweets
Authors Tuan Anh Le, David Moeljadi, Yasuhide Miura, Tomoko Ohkuma
Abstract This paper describes our attempt to build a sentiment analysis system for Indonesian tweets. With this system, we can study and identify sentiments and opinions in a text or document computationally. We used four thousand manually labeled tweets collected in February and March 2016 to build the model. Because of the variety of content in tweets, we classify tweets into eight groups in total, including pos(itive), neg(ative), and neu(tral). Finally, we obtained 73.2% accuracy with Long Short-Term Memory (LSTM) without a normalizer.
Tasks Sentiment Analysis
Published 2016-12-01
URL https://www.aclweb.org/anthology/W16-5415/
PDF https://www.aclweb.org/anthology/W16-5415
PWC https://paperswithcode.com/paper/sentiment-analysis-for-low-resource-languages
Repo
Framework
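
A minimal Keras sketch of an LSTM classifier over the eight tweet groups; the vocabulary size and the embedding and hidden dimensions are placeholders, and the paper's exact architecture, tokenisation, and handling of normalisation are not reproduced here.

```python
import tensorflow as tf

NUM_CLASSES = 8          # eight groups in total, including pos, neg, and neu
VOCAB_SIZE = 20000       # placeholder vocabulary size

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=VOCAB_SIZE, output_dim=128),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(token_id_sequences, labels, epochs=5)   # ~4,000 labelled tweets
```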

Rule Extraction for Tree-to-Tree Transducers by Cost Minimization

Title Rule Extraction for Tree-to-Tree Transducers by Cost Minimization
Authors Pascual Martínez-Gómez, Yusuke Miyao
Abstract
Tasks Machine Translation, Natural Language Inference, Question Answering, Text Summarization
Published 2016-11-01
URL https://www.aclweb.org/anthology/D16-1002/
PDF https://www.aclweb.org/anthology/D16-1002
PWC https://paperswithcode.com/paper/rule-extraction-for-tree-to-tree-transducers
Repo
Framework

Multiple In-text Reference Aggregation Phenomenon

Title Multiple In-text Reference Aggregation Phenomenon
Authors Marc Bertin, Iana Atanassova
Abstract
Tasks Information Retrieval
Published 2016-06-01
URL https://www.aclweb.org/anthology/W16-1502/
PDF https://www.aclweb.org/anthology/W16-1502
PWC https://paperswithcode.com/paper/multiple-in-text-reference-aggregation
Repo
Framework

Exploring the Leading Authors and Journals in Major Topics by Citation Sentences and Topic Modeling

Title Exploring the Leading Authors and Journals in Major Topics by Citation Sentences and Topic Modeling
Authors Ha Jin Kim, Juyoung An, Yoo Kyung Jeong, Min Song
Abstract
Tasks Information Retrieval
Published 2016-06-01
URL https://www.aclweb.org/anthology/W16-1506/
PDF https://www.aclweb.org/anthology/W16-1506
PWC https://paperswithcode.com/paper/exploring-the-leading-authors-and-journals-in
Repo
Framework

University of Houston at CL-SciSumm 2016: SVMs with tree kernels and Sentence Similarity

Title University of Houston at CL-SciSumm 2016: SVMs with tree kernels and Sentence Similarity
Authors Luis Moraes, Shahryar Baki, Rakesh Verma, Daniel Lee
Abstract
Tasks Information Retrieval, Text Summarization
Published 2016-06-01
URL https://www.aclweb.org/anthology/W16-1513/
PDF https://www.aclweb.org/anthology/W16-1513
PWC https://paperswithcode.com/paper/university-of-houston-at-cl-scisumm-2016-svms
Repo
Framework

Interactive-Predictive Machine Translation based on Syntactic Constraints of Prefix

Title Interactive-Predictive Machine Translation based on Syntactic Constraints of Prefix
Authors Na Ye, Guiping Zhang, Dongfeng Cai
Abstract Interactive-predictive machine translation (IPMT) is a translation mode that combines machine translation technology and human behaviours. In an IPMT system, the utilization of the prefix greatly affects the interaction efficiency. However, state-of-the-art methods filter translation hypotheses mainly according to their matching results with the prefix at the character level, and the advantage of the prefix is not fully exploited. Focusing on this problem, this paper mines the deeper constraints of the prefix at the syntactic level to improve the performance of IPMT systems. Two syntactic subtree matching rules based on phrase structure grammar are proposed to filter the translation hypotheses more strictly. Experimental results on LDC Chinese-English corpora show that the proposed method outperforms a state-of-the-art phrase-based IPMT system while keeping comparable decoding speed.
Tasks Machine Translation, Question Answering
Published 2016-12-01
URL https://www.aclweb.org/anthology/C16-1169/
PDF https://www.aclweb.org/anthology/C16-1169
PWC https://paperswithcode.com/paper/interactive-predictive-machine-translation
Repo
Framework
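
A toy version of the stricter, syntax-level filtering: keep a hypothesis only if every constituent of the parsed validated prefix also occurs as a constituent of the hypothesis parse. The paper's two matching rules are more refined; the constituent-containment check, the nltk.Tree toy parses, and the helper names below are assumptions.

```python
from nltk import Tree

def freeze(t):
    """Hashable representation of a parse (sub)tree."""
    return (t.label(), tuple(freeze(c) if isinstance(c, Tree) else c for c in t))

def constituents(tree):
    return {freeze(st) for st in tree.subtrees()}

def keep_hypothesis(prefix_parse, hyp_parse):
    """Stricter-than-string-prefix filter: all prefix constituents must survive."""
    return constituents(prefix_parse) <= constituents(hyp_parse)

prefix  = Tree.fromstring("(NP (DT the) (NN cat))")
hyp_ok  = Tree.fromstring("(S (NP (DT the) (NN cat)) (VP (VBZ sleeps)))")
hyp_bad = Tree.fromstring("(S (NP (DT the) (JJ black) (NN cat)) (VP (VBZ sleeps)))")
print(keep_hypothesis(prefix, hyp_ok), keep_hypothesis(prefix, hyp_bad))   # True False
```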

Selective Annotation of Sentence Parts: Identification of Relevant Sub-sentential Units

Title Selective Annotation of Sentence Parts: Identification of Relevant Sub-sentential Units
Authors Ge Xu, Xiaoyan Yang, Chu-Ren Huang
Abstract Many NLP tasks involve sentence-level annotation, yet the relevant information is not encoded at the sentence level but in certain parts of the sentence. Such tasks include, but are not limited to, sentiment expression annotation, product feature annotation, and template annotation for Q&A systems. However, annotating the full corpus sentence by sentence is resource intensive. In this paper, we propose an approach that iteratively extracts frequent parts of sentences for annotation and compresses the set of sentences after each round of annotation. Our approach can also be used to prepare training sentences for binary classification (domain-related vs. noise, subjectivity vs. objectivity, etc.), assuming that sentence-type annotation can be predicted from the annotation of the most relevant sub-sentences. Two experiments are performed to test our proposal and are evaluated in terms of time saved and annotation agreement.
Tasks Opinion Mining
Published 2016-12-01
URL https://www.aclweb.org/anthology/W16-5411/
PDF https://www.aclweb.org/anthology/W16-5411
PWC https://paperswithcode.com/paper/selective-annotation-of-sentence-parts
Repo
Framework
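
A compact sketch of the iterate-extract-annotate-compress loop: repeatedly surface the most frequent sub-sentential unit (here a plain token n-gram), have it annotated, and drop sentences that the annotated units already cover. The n-gram stand-in, the round limit, and the annotate callback are assumptions; the paper's notion of relevant parts is richer.

```python
from collections import Counter

def most_frequent_part(sentences, n=3):
    """Most frequent n-gram across the sentences that still need annotation."""
    counts = Counter(tuple(toks[i:i + n])
                     for s in sentences
                     for toks in [s.split()]
                     for i in range(len(toks) - n + 1))
    return counts.most_common(1)[0][0] if counts else None

def selective_annotation(sentences, annotate, n=3, rounds=10):
    remaining, annotations = list(sentences), {}
    for _ in range(rounds):
        part = most_frequent_part(remaining, n)
        if part is None:
            break
        annotations[part] = annotate(" ".join(part))    # e.g. a human sentiment label
        # Compress: a sentence is considered covered once it contains an annotated part.
        remaining = [s for s in remaining
                     if not any(" ".join(p) in s for p in annotations)]
    return annotations, remaining
```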

UAlacant word-level and phrase-level machine translation quality estimation systems at WMT 2016

Title UAlacant word-level and phrase-level machine translation quality estimation systems at WMT 2016
Authors Miquel Esplà-Gomis, Felipe Sánchez-Martínez, Mikel Forcada
Abstract
Tasks Machine Translation
Published 2016-08-01
URL https://www.aclweb.org/anthology/W16-2383/
PDF https://www.aclweb.org/anthology/W16-2383
PWC https://paperswithcode.com/paper/ualacant-word-level-and-phrase-level-machine
Repo
Framework