Paper Group NANR 21
NLP and Online Health Reports: What do we say and what do we mean?
Title | NLP and Online Health Reports: What do we say and what do we mean? |
Authors | Nigel Collier |
Abstract | |
Tasks | Machine Translation, Representation Learning |
Published | 2016-11-01 |
URL | https://www.aclweb.org/anthology/W16-6111/ |
PDF | https://www.aclweb.org/anthology/W16-6111 |
PWC | https://paperswithcode.com/paper/nlp-and-online-health-reports-what-do-we-say |
Repo | |
Framework | |
Leveraging coreference to identify arms in medical abstracts: An experimental study
Title | Leveraging coreference to identify arms in medical abstracts: An experimental study |
Authors | Elisa Ferracane, Iain Marshall, Byron C. Wallace, Katrin Erk |
Abstract | |
Tasks | Coreference Resolution |
Published | 2016-11-01 |
URL | https://www.aclweb.org/anthology/W16-6112/ |
PDF | https://www.aclweb.org/anthology/W16-6112 |
PWC | https://paperswithcode.com/paper/leveraging-coreference-to-identify-arms-in |
Repo | |
Framework | |
Hybrid methods for ICD-10 coding of death certificates
Title | Hybrid methods for ICD-10 coding of death certificates |
Authors | Pierre Zweigenbaum, Thomas Lavergne |
Abstract | |
Tasks | |
Published | 2016-11-01 |
URL | https://www.aclweb.org/anthology/W16-6113/ |
PDF | https://www.aclweb.org/anthology/W16-6113 |
PWC | https://paperswithcode.com/paper/hybrid-methods-for-icd-10-coding-of-death |
Repo | |
Framework | |
Learning Latent Local Conversation Modes for Predicting Comment Endorsement in Online Discussions
Title | Learning Latent Local Conversation Modes for Predicting Comment Endorsement in Online Discussions |
Authors | Hao Fang, Hao Cheng, Mari Ostendorf |
Abstract | |
Tasks | Decision Making |
Published | 2016-11-01 |
URL | https://www.aclweb.org/anthology/W16-6209/ |
PDF | https://www.aclweb.org/anthology/W16-6209 |
PWC | https://paperswithcode.com/paper/learning-latent-local-conversation-modes-for-1 |
Repo | |
Framework | |
Jointly Learning Heterogeneous Features for RGB-D Activity Recognition
Title | Jointly Learning Heterogeneous Features for RGB-D Activity Recognition |
Authors | Jian-Fang Hu, Wei-Shi Zheng, Jianhuang Lai, Jianguo Zhang |
Abstract | In this paper, we focus on heterogeneous features learning for RGB-D activity recognition. We find that features from different channels (RGB, depth) could share some similar hidden structures, and then propose a joint learning model to simultaneously explore the shared and feature-specific components as an instance of heterogeneous multi-task learning. The proposed model formed in a unified framework is capable of: 1) jointly mining a set of subspaces with the same dimensionality to exploit latent shared features across different feature channels, 2) meanwhile, quantifying the shared and feature-specific components of features in the subspaces, and 3) transferring feature-specific intermediate transforms (i-transforms) for learning fusion of heterogeneous features across datasets. To efficiently train the joint model, a three-step iterative optimization algorithm is proposed, followed by a simple inference model. Extensive experimental results on four activity datasets have demonstrated the efficacy of the proposed method. A new RGB-D activity dataset focusing on human-object interaction is further contributed, which presents more challenges for RGB-D activity benchmarking. |
Tasks | Activity Recognition, Human-Object Interaction Detection, Multi-Task Learning, Skeleton Based Action Recognition |
Published | 2016-12-15 |
URL | https://doi.org/10.1109/TPAMI.2016.2640292 |
PDF | https://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Hu_Jointly_Learning_Heterogeneous_2015_CVPR_paper.pdf |
PWC | https://paperswithcode.com/paper/jointly-learning-heterogeneous-features-for-1 |
Repo | |
Framework | |
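The abstract above describes decomposing each feature channel's projection into a shared part and a feature-specific part before fusing RGB and depth. Below is a minimal, hypothetical sketch of that decomposition only; the projection matrices, dimensions, and the `project` helper are illustrative stand-ins, not the paper's learned i-transforms or its three-step optimization.

```python
import numpy as np

def project(features, P_shared, P_specific, alpha=0.5):
    """Map one channel's features into the joint subspace as a mix of a
    shared component and a feature-specific component (illustrative only)."""
    return alpha * features @ P_shared + (1.0 - alpha) * features @ P_specific

rng = np.random.default_rng(0)
d_rgb, d_depth, k = 512, 256, 64               # per-channel input dims, joint subspace dim

rgb_feats = rng.normal(size=(10, d_rgb))       # 10 clips with hypothetical RGB descriptors
depth_feats = rng.normal(size=(10, d_depth))   # matching depth descriptors

# In the paper these transforms are learned jointly; here they are random stand-ins.
P_rgb_shared, P_rgb_specific = rng.normal(size=(d_rgb, k)), rng.normal(size=(d_rgb, k))
P_dep_shared, P_dep_specific = rng.normal(size=(d_depth, k)), rng.normal(size=(d_depth, k))

z_rgb = project(rgb_feats, P_rgb_shared, P_rgb_specific)
z_depth = project(depth_feats, P_dep_shared, P_dep_specific)

# Fuse the two channels in the shared-dimensionality subspace, e.g. by concatenation.
fused = np.concatenate([z_rgb, z_depth], axis=1)
print(fused.shape)  # (10, 128)
```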
Machine Translation of Non-Contiguous Multiword Units
Title | Machine Translation of Non-Contiguous Multiword Units |
Authors | Anabela Barreiro, Fernando Batista |
Abstract | |
Tasks | Machine Translation |
Published | 2016-06-01 |
URL | https://www.aclweb.org/anthology/W16-0903/ |
PDF | https://www.aclweb.org/anthology/W16-0903 |
PWC | https://paperswithcode.com/paper/machine-translation-of-non-contiguous |
Repo | |
Framework | |
Syntax and Pragmatics of Conversation: A Case of Bangla
Title | Syntax and Pragmatics of Conversation: A Case of Bangla |
Authors | Samir Karmakar, Soumya Sankar Ghosh |
Abstract | |
Tasks | |
Published | 2016-12-01 |
URL | https://www.aclweb.org/anthology/W16-6309/ |
PDF | https://www.aclweb.org/anthology/W16-6309 |
PWC | https://paperswithcode.com/paper/syntax-and-pragmatics-of-conversation-a-case |
Repo | |
Framework | |
Insertion Position Selection Model for Flexible Non-Terminals in Dependency Tree-to-Tree Machine Translation
Title | Insertion Position Selection Model for Flexible Non-Terminals in Dependency Tree-to-Tree Machine Translation |
Authors | Toshiaki Nakazawa, John Richardson, Sadao Kurohashi |
Abstract | |
Tasks | Machine Translation, Word Alignment |
Published | 2016-11-01 |
URL | https://www.aclweb.org/anthology/D16-1247/ |
PDF | https://www.aclweb.org/anthology/D16-1247 |
PWC | https://paperswithcode.com/paper/insertion-position-selection-model-for |
Repo | |
Framework | |
Without-Replacement Sampling for Stochastic Gradient Methods
Title | Without-Replacement Sampling for Stochastic Gradient Methods |
Authors | Ohad Shamir |
Abstract | Stochastic gradient methods for machine learning and optimization problems are usually analyzed assuming data points are sampled with replacement. In contrast, sampling without replacement is far less understood, yet in practice it is very common, often easier to implement, and usually performs better. In this paper, we provide competitive convergence guarantees for without-replacement sampling under several scenarios, focusing on the natural regime of few passes over the data. Moreover, we describe a useful application of these results in the context of distributed optimization with randomly-partitioned data, yielding a nearly-optimal algorithm for regularized least squares (in terms of both communication complexity and runtime complexity) under broad parameter regimes. Our proof techniques combine ideas from stochastic optimization, adversarial online learning and transductive learning theory, and can potentially be applied to other stochastic optimization and learning problems. |
Tasks | Distributed Optimization, Stochastic Optimization |
Published | 2016-12-01 |
URL | http://papers.nips.cc/paper/6245-without-replacement-sampling-for-stochastic-gradient-methods |
PDF | http://papers.nips.cc/paper/6245-without-replacement-sampling-for-stochastic-gradient-methods.pdf |
PWC | https://paperswithcode.com/paper/without-replacement-sampling-for-stochastic-1 |
Repo | |
Framework | |
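The without-replacement regime analyzed in this paper corresponds to random reshuffling: each pass visits every data point exactly once in a fresh random order, rather than drawing i.i.d. indices. The following is a small hypothetical sketch for the regularized least squares setting mentioned in the abstract; the function name and hyperparameters are illustrative, not the paper's algorithm.

```python
import numpy as np

def sgd_least_squares(X, y, lam=0.1, lr=0.01, epochs=5, replacement=False, seed=0):
    """SGD for regularized least squares: min_w (1/n) sum_i (x_i w - y_i)^2 + lam ||w||^2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        if replacement:
            order = rng.integers(0, n, size=n)   # i.i.d. sampling with replacement
        else:
            order = rng.permutation(n)           # random reshuffling: each point once per pass
        for i in order:
            grad = 2.0 * (X[i] @ w - y[i]) * X[i] + 2.0 * lam * w
            w -= lr * grad
    return w

# Toy comparison on synthetic data.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=200)
for rep in (True, False):
    w_hat = sgd_least_squares(X, y, replacement=rep)
    print(f"with_replacement={rep}: train MSE = {np.mean((X @ w_hat - y) ** 2):.4f}")
```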
The REAL Corpus: A Crowd-Sourced Corpus of Human Generated and Evaluated Spatial References to Real-World Urban Scenes
Title | The REAL Corpus: A Crowd-Sourced Corpus of Human Generated and Evaluated Spatial References to Real-World Urban Scenes |
Authors | Phil Bartie, William Mackaness, Dimitra Gkatzia, Verena Rieser |
Abstract | Our interest is in people's capacity to efficiently and effectively describe geographic objects in urban scenes. The broader ambition is to develop spatial models capable of equivalent functionality, able to construct such referring expressions. To that end we present a newly crowd-sourced data set of natural language references to objects anchored in complex urban scenes (In short: The REAL Corpus ― Referring Expressions Anchored Language). The REAL corpus contains a collection of images of real-world urban scenes together with verbal descriptions of target objects generated by humans, paired with data on how successfully other people were able to identify the same object based on these descriptions. In total, the corpus contains 32 images with on average 27 descriptions per image and 3 verifications for each description. In addition, the corpus is annotated with a variety of linguistically motivated features. The paper highlights issues posed by collecting data using crowd-sourcing with an unrestricted input format, as well as using real-world urban scenes. |
Tasks | |
Published | 2016-05-01 |
URL | https://www.aclweb.org/anthology/L16-1341/ |
PDF | https://www.aclweb.org/anthology/L16-1341 |
PWC | https://paperswithcode.com/paper/the-real-corpus-a-crowd-sourced-corpus-of |
Repo | |
Framework | |
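For a concrete picture of how such a corpus might be organized, here is a purely hypothetical record layout consistent with the statistics quoted above (32 images, roughly 27 descriptions per image, 3 verifications per description); the field names are invented for illustration and are not taken from the actual REAL release.

```python
# Hypothetical record layout for a crowd-sourced referring-expression corpus of this kind.
real_example = {
    "image_id": "urban_scene_012",               # one of the 32 urban-scene images
    "target_object": "the red phone box",        # object the describer was asked to reference
    "description": "the red telephone box to the left of the cafe entrance",
    "verifications": [                           # ~3 verifications per description
        {"worker": "v1", "identified_correctly": True},
        {"worker": "v2", "identified_correctly": True},
        {"worker": "v3", "identified_correctly": False},
    ],
    "linguistic_features": {"uses_landmark": True, "uses_color": True},
}

# A description's success rate can then be computed from its verifications.
checks = real_example["verifications"]
print(sum(v["identified_correctly"] for v in checks) / len(checks))
```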
Reading and Thinking: Re-read LSTM Unit for Textual Entailment Recognition
Title | Reading and Thinking: Re-read LSTM Unit for Textual Entailment Recognition |
Authors | Lei Sha, Baobao Chang, Zhifang Sui, Sujian Li |
Abstract | Recognizing Textual Entailment (RTE) is a fundamentally important task in natural language processing that has many applications. The recently released Stanford Natural Language Inference (SNLI) corpus has made it possible to develop and evaluate deep neural network methods for the RTE task. Previous neural network based methods usually try to encode the two sentences (premise and hypothesis) and send them together into a multi-layer perceptron to get their entailment type, or use an LSTM-RNN to link the two sentences together while using an attention mechanism to enhance the model's ability. In this paper, we propose to use a re-read mechanism, which means to read the premise again and again while reading the hypothesis. After reading the premise again, the model can get a better understanding of the premise, which can also affect the understanding of the hypothesis. Conversely, a better understanding of the hypothesis can also affect the understanding of the premise. With this alternating re-read process, the model can "think" of a better decision about the entailment type. We designed a new LSTM unit called re-read LSTM (rLSTM) to implement this "thinking" process. Experiments show that we achieve results better than current state-of-the-art equivalents. |
Tasks | Information Retrieval, Machine Translation, Natural Language Inference, Question Answering, Word Alignment |
Published | 2016-12-01 |
URL | https://www.aclweb.org/anthology/C16-1270/ |
PDF | https://www.aclweb.org/anthology/C16-1270 |
PWC | https://paperswithcode.com/paper/reading-and-thinking-re-read-lstm-unit-for |
Repo | |
Framework | |
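The core idea, re-reading the premise at every step of processing the hypothesis, can be sketched with a plain LSTM plus attention, as below. This is only a hedged illustration of the mechanism under stated assumptions; it is not the paper's rLSTM unit, and the class name, dimensions, and the three-way classifier head are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReReadPremiseSketch(nn.Module):
    """Hypothetical sketch: while encoding the hypothesis, re-attend to the
    premise states at every step. Not the paper's rLSTM unit."""

    def __init__(self, emb_dim=100, hidden=128, num_classes=3):
        super().__init__()
        self.premise_lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.hyp_cell = nn.LSTMCell(emb_dim + hidden, hidden)
        self.classify = nn.Linear(hidden, num_classes)

    def forward(self, premise_emb, hypothesis_emb):
        # premise_emb: (B, Tp, E), hypothesis_emb: (B, Th, E)
        p_states, _ = self.premise_lstm(premise_emb)              # (B, Tp, H)
        B = premise_emb.size(0)
        h = premise_emb.new_zeros(B, self.hyp_cell.hidden_size)
        c = premise_emb.new_zeros(B, self.hyp_cell.hidden_size)
        for t in range(hypothesis_emb.size(1)):
            # "Re-read" the premise: attention weights conditioned on the current state.
            scores = torch.bmm(p_states, h.unsqueeze(2)).squeeze(2)               # (B, Tp)
            attn = F.softmax(scores, dim=1)
            premise_summary = torch.bmm(attn.unsqueeze(1), p_states).squeeze(1)   # (B, H)
            x = torch.cat([hypothesis_emb[:, t], premise_summary], dim=1)
            h, c = self.hyp_cell(x, (h, c))
        return self.classify(h)   # entailment / contradiction / neutral logits

model = ReReadPremiseSketch()
logits = model(torch.randn(2, 7, 100), torch.randn(2, 5, 100))
print(logits.shape)  # torch.Size([2, 3])
```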
A Tagged Corpus for Automatic Labeling of Disabilities in Medical Scientific Papers
Title | A Tagged Corpus for Automatic Labeling of Disabilities in Medical Scientific Papers |
Authors | Carlos Valmaseda, Juan Martinez-Romo, Lourdes Araujo |
Abstract | This paper presents the creation of a corpus of labeled disabilities in scientific papers. Identifying medical concepts in documents and, especially, identifying disabilities is a complex task, mainly due to the variety of expressions that can refer to the same problem. Currently there is no set of documents manually annotated with disabilities against which to evaluate an automatic detection system for such concepts. This corpus aims to fill that gap, facilitating the evaluation of systems that implement automatic annotation tools for extracting biomedical concepts such as disabilities. The result is a set of manually annotated scientific papers. These papers were selected by searching with a list of rare diseases, since rare diseases generally have several associated disabilities of different kinds. |
Tasks | |
Published | 2016-05-01 |
URL | https://www.aclweb.org/anthology/L16-1162/ |
PDF | https://www.aclweb.org/anthology/L16-1162 |
PWC | https://paperswithcode.com/paper/a-tagged-corpus-for-automatic-labeling-of |
Repo | |
Framework | |
A Deep Fusion Model for Domain Adaptation in Phrase-based MT
Title | A Deep Fusion Model for Domain Adaptation in Phrase-based MT |
Authors | Nadir Durrani, Hassan Sajjad, Shafiq Joty, Ahmed Abdelali |
Abstract | We present a novel fusion model for domain adaptation in Statistical Machine Translation. Our model is based on the joint source-target neural network of Devlin et al. (2014), and is learned by fusing in- and out-of-domain models. The adaptation is performed by backpropagating errors from the output layer to the word embedding layer of each model, subsequently adjusting parameters of the composite model towards the in-domain data. On the standard tasks of translating English-to-German and Arabic-to-English TED talks, we observed average improvements of +0.9 and +0.7 BLEU points, respectively, over a competition-grade phrase-based system. We also demonstrate improvements over existing adaptation methods. |
Tasks | Domain Adaptation, Machine Translation, Word Embeddings |
Published | 2016-12-01 |
URL | https://www.aclweb.org/anthology/C16-1299/ |
PDF | https://www.aclweb.org/anthology/C16-1299 |
PWC | https://paperswithcode.com/paper/a-deep-fusion-model-for-domain-adaptation-in |
Repo | |
Framework | |
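A rough, hypothetical sketch of the fusion-and-fine-tune idea follows: two small joint models stand in for the in-domain and out-of-domain networks, their representations are fused, and gradients from in-domain examples flow back into both embedding layers. The `TinyJointLM` and `FusedModel` classes are invented simplifications, not the NNJM architecture of Devlin et al. (2014) used in the paper.

```python
import torch
import torch.nn as nn

class TinyJointLM(nn.Module):
    """Stand-in for a joint source-target model: embeds a context window and
    returns a hidden representation (illustrative only)."""
    def __init__(self, vocab=10000, emb=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.hidden = nn.Linear(emb, hidden)

    def forward(self, context_ids):                       # context_ids: (B, T)
        return torch.relu(self.hidden(self.emb(context_ids).mean(dim=1)))  # (B, hidden)

class FusedModel(nn.Module):
    """Fuse in-domain and out-of-domain representations under one output layer."""
    def __init__(self, in_domain, out_domain, hidden=128, vocab=10000):
        super().__init__()
        self.in_domain, self.out_domain = in_domain, out_domain
        self.output = nn.Linear(2 * hidden, vocab)

    def forward(self, context_ids):
        fused = torch.cat([self.in_domain(context_ids), self.out_domain(context_ids)], dim=1)
        return self.output(fused)

fused = FusedModel(TinyJointLM(), TinyJointLM())
opt = torch.optim.SGD(fused.parameters(), lr=0.01)   # updates reach both embedding layers
contexts = torch.randint(0, 10000, (8, 5))           # toy in-domain contexts
targets = torch.randint(0, 10000, (8,))              # toy next-word targets
loss = nn.functional.cross_entropy(fused(contexts), targets)
loss.backward()
opt.step()
```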
Expected F-Measure Training for Shift-Reduce Parsing with Recurrent Neural Networks
Title | Expected F-Measure Training for Shift-Reduce Parsing with Recurrent Neural Networks |
Authors | Wenduan Xu, Michael Auli, Stephen Clark |
Abstract | |
Tasks | Feature Engineering |
Published | 2016-06-01 |
URL | https://www.aclweb.org/anthology/N16-1025/ |
PDF | https://www.aclweb.org/anthology/N16-1025 |
PWC | https://paperswithcode.com/paper/expected-f-measure-training-for-shift-reduce |
Repo | |
Framework | |
Deriving Players & Themes in the Regesta Imperii using SVMs and Neural Networks
Title | Deriving Players & Themes in the Regesta Imperii using SVMs and Neural Networks |
Authors | Juri Opitz, Anette Frank |
Abstract | |
Tasks | Text Classification |
Published | 2016-08-01 |
URL | https://www.aclweb.org/anthology/W16-2108/ |
PDF | https://www.aclweb.org/anthology/W16-2108 |
PWC | https://paperswithcode.com/paper/deriving-players-themes-in-the-regesta |
Repo | |
Framework | |