Paper Group NANR 34
Do translator trainees trust machine translation? An experiment on post-editing and revision
Title | Do translator trainees trust machine translation? An experiment on post-editing and revision |
Authors | Randy Scansani, Silvia Bernardini, Adriano Ferraresi, Luisa Bentivogli |
Abstract | |
Tasks | Machine Translation |
Published | 2019-08-01 |
URL | https://www.aclweb.org/anthology/W19-6711/ |
PWC | https://paperswithcode.com/paper/do-translator-trainees-trust-machine |
Repo | |
Framework | |
Uncertainty-Aware Audiovisual Activity Recognition Using Deep Bayesian Variational Inference
Title | Uncertainty-Aware Audiovisual Activity Recognition Using Deep Bayesian Variational Inference |
Authors | Mahesh Subedar, Ranganath Krishnan, Paulo Lopez Meyer, Omesh Tickoo, Jonathan Huang |
Abstract | Deep neural networks (DNNs) provide state-of-the-art results for a multitude of applications, but the approaches using DNNs for multimodal audiovisual applications do not consider predictive uncertainty associated with individual modalities. Bayesian deep learning methods provide principled confidence and quantify predictive uncertainty. Our contribution in this work is to propose an uncertainty-aware multimodal Bayesian fusion framework for activity recognition. We demonstrate a novel approach that combines deterministic and variational layers to scale Bayesian DNNs to deeper architectures. Our experiments using in- and out-of-distribution samples selected from a subset of the Moments-in-Time (MiT) dataset show a more reliable confidence measure as compared to the non-Bayesian baseline and the Monte Carlo dropout (MC dropout) approximate Bayesian inference. We also demonstrate that the uncertainty estimates obtained from the proposed framework can identify out-of-distribution data on the UCF101 and MiT datasets. In the multimodal setting, the proposed framework improved precision-recall AUC by 10.2% on the subset of the MiT dataset as compared to the non-Bayesian baseline. |
Tasks | Activity Recognition, Bayesian Inference |
Published | 2019-10-01 |
URL | http://openaccess.thecvf.com/content_ICCV_2019/html/Subedar_Uncertainty-Aware_Audiovisual_Activity_Recognition_Using_Deep_Bayesian_Variational_Inference_ICCV_2019_paper.html |
PDF | http://openaccess.thecvf.com/content_ICCV_2019/papers/Subedar_Uncertainty-Aware_Audiovisual_Activity_Recognition_Using_Deep_Bayesian_Variational_Inference_ICCV_2019_paper.pdf |
PWC | https://paperswithcode.com/paper/uncertainty-aware-audiovisual-activity |
Repo | |
Framework | |
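The abstract above compares Bayesian uncertainty estimates against an MC-dropout baseline. A minimal sketch of the shared underlying idea, assuming a toy stochastic model rather than the paper's audiovisual DNNs: average class probabilities over repeated stochastic forward passes and use predictive entropy as the confidence signal, which rises for ambiguous (out-of-distribution-like) inputs.

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def predictive_entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def mc_predict(stochastic_forward, x, num_samples=50):
    """Average probabilities over stochastic forward passes and
    report predictive entropy as an uncertainty estimate."""
    sums = None
    for _ in range(num_samples):
        probs = softmax(stochastic_forward(x))
        sums = probs if sums is None else [a + b for a, b in zip(sums, probs)]
    mean = [s / num_samples for s in sums]
    return mean, predictive_entropy(mean)

# Toy stochastic "model": logits perturbed by dropout-like noise.
rng = random.Random(0)
def noisy_model(logits):
    return [l + rng.gauss(0, 0.5) for l in logits]

in_dist, h_in = mc_predict(noisy_model, [4.0, 0.0, 0.0])   # confident input
ood, h_ood = mc_predict(noisy_model, [0.1, 0.0, 0.05])     # ambiguous input
# Entropy is higher for the ambiguous input than for the confident one.
```

This is only the uncertainty-scoring step; the paper's contribution lies in the variational layers and multimodal fusion, not sketched here.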
Mining Tweets that refer to TV programs with Deep Neural Networks
Title | Mining Tweets that refer to TV programs with Deep Neural Networks |
Authors | Takeshi Kobayakawa, Taro Miyazaki, Hiroki Okamoto, Simon Clippingdale |
Abstract | The automatic analysis of expressions of opinion has been well studied in the opinion mining area, but a remaining problem is robustness for user-generated texts. Although consumer-generated texts are valuable since they contain a great number and wide variety of user evaluations, spelling inconsistency and the variety of expressions make analysis difficult. To tackle such situations, we applied a model reported to handle context in many natural language processing areas to the problem of extracting references to the opinion target from text. Experiments on tweets that refer to television programs show that the model can extract such references with more than 90% accuracy. |
Tasks | Opinion Mining |
Published | 2019-11-01 |
URL | https://www.aclweb.org/anthology/D19-5517/ |
PWC | https://paperswithcode.com/paper/mining-tweets-that-refer-to-tv-programs-with |
Repo | |
Framework | |
A Social Opinion Gold Standard for the Malta Government Budget 2018
Title | A Social Opinion Gold Standard for the Malta Government Budget 2018 |
Authors | Keith Cortis, Brian Davis |
Abstract | We present a gold standard of annotated social opinion for the Malta Government Budget 2018. It consists of over 500 online posts in English and/or Maltese, a less-resourced language, gathered from social media platforms, specifically social networking services and newswires, which have been annotated with information about opinions expressed by the general public and other entities, in terms of sentiment polarity, emotion, sarcasm/irony, and negation. This dataset is a resource for opinion mining based on social data, within the context of politics. It is the first opinion-annotated social dataset from Malta, which has very limited language resources available. |
Tasks | Opinion Mining |
Published | 2019-11-01 |
URL | https://www.aclweb.org/anthology/D19-5547/ |
PWC | https://paperswithcode.com/paper/a-social-opinion-gold-standard-for-the-malta |
Repo | |
Framework | |
Hybrid Models for Aspects Extraction without Labelled Dataset
Title | Hybrid Models for Aspects Extraction without Labelled Dataset |
Authors | Wai-Howe Khong, Lay-Ki Soon, Hui-Ngo Goh |
Abstract | One of the important tasks in opinion mining is to extract aspects of the opinion target. Aspects are features or characteristics of the opinion target that are being reviewed, which can be categorised into explicit and implicit aspects. Extracting aspects from opinions is essential in order to ensure accurate information about certain attributes of an opinion target is retrieved. For instance, a professional camera receives a positive feedback in terms of its functionalities in a review, but its overly high price receives negative feedback. Most of the existing solutions focus on explicit aspects. However, sentences in reviews normally do not state the aspects explicitly. In this research, two hybrid models are proposed to identify and extract both explicit and implicit aspects, namely TDM-DC and TDM-TED. The proposed models combine topic modelling and dictionary-based approach. The models are unsupervised as they do not require any labelled dataset. The experimental results show that TDM-DC achieves F1-measure of 58.70%, where it outperforms both the baseline topic model and dictionary-based approach. In comparison to other existing unsupervised techniques, the proposed models are able to achieve higher F1-measure by approximately 3%. Although the supervised techniques perform slightly better, the proposed models are domain-independent, and hence more versatile. |
Tasks | Opinion Mining |
Published | 2019-11-01 |
URL | https://www.aclweb.org/anthology/D19-6611/ |
PWC | https://paperswithcode.com/paper/hybrid-models-for-aspects-extraction-without |
Repo | |
Framework | |
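The dictionary-based half of such a hybrid can be sketched as follows. The clue-word lexicon and aspect names below are invented for illustration (the paper combines this idea with topic modelling); the point is that implicit aspects are surfaced even when the aspect word itself never appears in the text.

```python
# Hypothetical clue-word dictionary mapping opinion words to aspects.
ASPECT_CLUES = {
    "expensive": "price", "cheap": "price", "overpriced": "price",
    "sharp": "picture", "blurry": "picture",
    "heavy": "weight", "bulky": "weight",
}

def extract_aspects(review):
    """Return explicit aspects (aspect names stated directly) and
    implicit aspects inferred from clue words in the review."""
    tokens = [t.strip(".,!?").lower() for t in review.split()]
    implicit = {ASPECT_CLUES[t] for t in tokens if t in ASPECT_CLUES}
    explicit = {t for t in tokens if t in set(ASPECT_CLUES.values())}
    return explicit, implicit

explicit, implicit = extract_aspects(
    "Great camera, but it is overpriced and a bit heavy.")
# implicit == {"price", "weight"}: neither aspect word appears verbatim.
```

A topic model would replace the hand-built dictionary with clue words mined per topic, which is what makes the hybrid unsupervised yet domain-adaptable.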
Context-Aware Conversation Thread Detection in Multi-Party Chat
Title | Context-Aware Conversation Thread Detection in Multi-Party Chat |
Authors | Ming Tan, Dakuo Wang, Yupeng Gao, Haoyu Wang, Saloni Potdar, Xiaoxiao Guo, Shiyu Chang, Mo Yu |
Abstract | In multi-party chat, it is common for multiple conversations to occur concurrently, leading to intermingled conversation threads in chat logs. In this work, we propose a novel Context-Aware Thread Detection (CATD) model that automatically disentangles these conversation threads. We evaluate our model on four real-world datasets and demonstrate an overall improvement in thread detection accuracy over state-of-the-art benchmarks. |
Tasks | |
Published | 2019-11-01 |
URL | https://www.aclweb.org/anthology/D19-1682/ |
PWC | https://paperswithcode.com/paper/context-aware-conversation-thread-detection |
Repo | |
Framework | |
A Margin-based Loss with Synthetic Negative Samples for Continuous-output Machine Translation
Title | A Margin-based Loss with Synthetic Negative Samples for Continuous-output Machine Translation |
Authors | Gayatri Bhat, Sachin Kumar, Yulia Tsvetkov |
Abstract | Neural models that eliminate the softmax bottleneck by generating word embeddings (rather than multinomial distributions over a vocabulary) attain faster training with fewer learnable parameters. These models are currently trained by maximizing densities of pretrained target embeddings under von Mises-Fisher distributions parameterized by corresponding model-predicted embeddings. This work explores the utility of margin-based loss functions in optimizing such models. We present syn-margin loss, a novel margin-based loss that uses a synthetic negative sample constructed from only the predicted and target embeddings at every step. The loss is efficient to compute, and we use a geometric analysis to argue that it is more consistent and interpretable than other margin-based losses. Empirically, we find that syn-margin provides small but significant improvements over both vMF and standard margin-based losses in continuous-output neural machine translation. |
Tasks | Machine Translation, Word Embeddings |
Published | 2019-11-01 |
URL | https://www.aclweb.org/anthology/D19-5621/ |
PWC | https://paperswithcode.com/paper/a-margin-based-loss-with-synthetic-negative |
Repo | |
Framework | |
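A margin loss with a synthetic negative built only from the predicted and target embeddings can be sketched as below. The orthogonal-component construction is a hypothetical choice for illustration; the paper's exact geometric construction of the negative may differ.

```python
import math

def dot(u, v): return sum(a * b for a, b in zip(u, v))
def norm(u): return math.sqrt(dot(u, u))
def cosine(u, v): return dot(u, v) / (norm(u) * norm(v))

def syn_margin_loss(pred, target, margin=0.5):
    """Margin loss whose negative sample is synthesized from the
    predicted and target embeddings alone: here, the component of
    the prediction orthogonal to the target (illustrative choice)."""
    scale = dot(pred, target) / dot(target, target)
    negative = [p - scale * t for p, t in zip(pred, target)]
    if norm(negative) == 0:          # prediction already on the target ray
        return 0.0
    return max(0.0, margin - cosine(pred, target) + cosine(pred, negative))

# A prediction close to the target incurs a smaller loss than a distant one.
close = syn_margin_loss([0.9, 0.1], [1.0, 0.0])
far = syn_margin_loss([0.3, 0.9], [1.0, 0.0])
```

Because the negative is derived on the fly from the current prediction, no vocabulary-wide sampling is needed, which is what keeps the loss cheap per step.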
JHU LoResMT 2019 Shared Task System Description
Title | JHU LoResMT 2019 Shared Task System Description |
Authors | Paul McNamee |
Abstract | |
Tasks | |
Published | 2019-08-01 |
URL | https://www.aclweb.org/anthology/W19-6812/ |
PWC | https://paperswithcode.com/paper/jhu-loresmt-2019-shared-task-system |
Repo | |
Framework | |
Translation Quality and Effort Prediction in Professional Machine Translation Post-Editing
Title | Translation Quality and Effort Prediction in Professional Machine Translation Post-Editing |
Authors | Jennifer Vardaro, Moritz Schaeffer, Silvia Hansen-Schirra |
Abstract | |
Tasks | Machine Translation |
Published | 2019-08-01 |
URL | https://www.aclweb.org/anthology/W19-7004/ |
PWC | https://paperswithcode.com/paper/translation-quality-and-effort-prediction-in |
Repo | |
Framework | |
Proceedings of the Celtic Language Technology Workshop
Title | Proceedings of the Celtic Language Technology Workshop |
Authors | |
Abstract | |
Tasks | |
Published | 2019-08-01 |
URL | https://www.aclweb.org/anthology/W19-6900/ |
PWC | https://paperswithcode.com/paper/proceedings-of-the-celtic-language-technology |
Repo | |
Framework | |
iRDA Method for Sparse Convolutional Neural Networks
Title | iRDA Method for Sparse Convolutional Neural Networks |
Authors | Xiaodong Jia, Liang Zhao, Lian Zhang, Juncai He, Jinchao Xu |
Abstract | We propose a new approach, known as iterative regularized dual averaging (iRDA), to improve the efficiency of convolutional neural networks (CNNs) by significantly reducing the redundancy of the model without reducing its accuracy. The method has been tested on various datasets and proven to be significantly more efficient than most existing compression techniques in the deep learning literature. For many popular datasets such as MNIST and CIFAR-10, more than 95% of the weights can be zeroed out without losing accuracy. In particular, we are able to make ResNet18 with 95% sparsity achieve accuracy comparable to that of the much larger ResNet50 at the best reported 60% sparsity in the literature. |
Tasks | |
Published | 2019-05-01 |
URL | https://openreview.net/forum?id=HJMXus0ct7 |
PDF | https://openreview.net/pdf?id=HJMXus0ct7 |
PWC | https://paperswithcode.com/paper/irda-method-for-sparse-convolutional-neural |
Repo | |
Framework | |
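iRDA builds on l1-regularized dual averaging. A minimal sketch of the plain RDA core it extends (following Xiao's l1-RDA update; iRDA's initialization and iterative retraining steps are not shown), run on a toy quadratic where the uninformative coordinates are zeroed out exactly rather than merely shrunk:

```python
import math

def l1_rda(grad_fn, dim, steps=200, lam=0.1, gamma=5.0):
    """l1-regularized dual averaging: coordinates whose running
    average gradient magnitude stays below lam are set exactly to
    zero, which is the source of the sparsity iRDA exploits."""
    w = [0.0] * dim
    g_avg = [0.0] * dim
    for t in range(1, steps + 1):
        g = grad_fn(w)
        g_avg = [((t - 1) * a + gi) / t for a, gi in zip(g_avg, g)]
        w = [0.0 if abs(a) <= lam
             else -(math.sqrt(t) / gamma) * (a - lam * math.copysign(1, a))
             for a in g_avg]
    return w

# Toy quadratic: minimize 0.5*(w0 - 1)^2 + 0.5*w1^2 + 0.5*w2^2.
# Only the first coordinate is informative; the others stay exactly zero.
grad = lambda w: [w[0] - 1.0, w[1], w[2]]
w = l1_rda(grad, 3)
```

Unlike SGD with l1 regularization, the thresholding acts on the averaged gradient, so zeros are exact and stable across iterations, which is why RDA-style methods suit weight pruning.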
Dynamic Anchor Feature Selection for Single-Shot Object Detection
Title | Dynamic Anchor Feature Selection for Single-Shot Object Detection |
Authors | Shuai Li, Lingxiao Yang, Jianqiang Huang, Xian-Sheng Hua, Lei Zhang |
Abstract | The design of anchors is critical to the performance of one-stage detectors. Recently, the anchor refinement module (ARM) has been proposed to adjust the initialization of default anchors, providing the detector a better anchor reference. However, this module brings another problem: all pixels on a feature map have the same receptive field while the anchors associated with each pixel have different positions and sizes. This discordance may lead to a less effective detector. In this paper, we present a dynamic feature selection operation to select new pixels in a feature map for each refined anchor received from the ARM. The pixels are selected based on the new anchor position and size so that the receptive field of these pixels can fit the anchor areas well, which makes the detector, especially the regression part, much easier to optimize. Furthermore, to enhance the representation ability of selected feature pixels, we design a bidirectional feature fusion module by combining features from early and deep layers. Extensive experiments on both PASCAL VOC and COCO demonstrate the effectiveness of our dynamic anchor feature selection (DAFS) operation. For the case of high IoU threshold, our DAFS can improve the mAP by a large margin. |
Tasks | Feature Selection, Object Detection |
Published | 2019-10-01 |
URL | http://openaccess.thecvf.com/content_ICCV_2019/html/Li_Dynamic_Anchor_Feature_Selection_for_Single-Shot_Object_Detection_ICCV_2019_paper.html |
PDF | http://openaccess.thecvf.com/content_ICCV_2019/papers/Li_Dynamic_Anchor_Feature_Selection_for_Single-Shot_Object_Detection_ICCV_2019_paper.pdf |
PWC | https://paperswithcode.com/paper/dynamic-anchor-feature-selection-for-single |
Repo | |
Framework | |
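The discordance described above, fixed receptive fields versus refined anchors of varying position and size, can be illustrated with a toy selection rule (a simplified stand-in, not the paper's DAFS operation): pick the feature-map cell whose receptive field overlaps the refined anchor most.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def select_feature_cell(anchor, fmap_size, stride, rf_size):
    """For a refined anchor, pick the feature-map cell whose receptive
    field overlaps it most, so the selected features 'see' the anchor
    area (a toy version of selecting pixels by anchor position/size)."""
    best, best_iou = None, -1.0
    for i in range(fmap_size):
        for j in range(fmap_size):
            cx, cy = (j + 0.5) * stride, (i + 0.5) * stride
            rf = (cx - rf_size / 2, cy - rf_size / 2,
                  cx + rf_size / 2, cy + rf_size / 2)
            o = iou(anchor, rf)
            if o > best_iou:
                best, best_iou = (i, j), o
    return best

# A refined anchor centred at (128, 128) on an 8x8 map with stride 32
# selects a cell near its new centre rather than its original location.
cell = select_feature_cell((96, 96, 160, 160),
                           fmap_size=8, stride=32, rf_size=64)
```

In the paper the selection is differentiable and feeds a feature fusion module; this sketch only conveys why matching receptive fields to refined anchors helps.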
Multilingual word translation using auxiliary languages
Title | Multilingual word translation using auxiliary languages |
Authors | Hagai Taitelbaum, Gal Chechik, Jacob Goldberger |
Abstract | Current multilingual word translation methods are focused on jointly learning mappings from each language to a shared space. The actual translation, however, is still performed as an isolated bilingual task. In this study we propose a multilingual translation procedure that uses all the learned mappings to translate a word from one language to another. For each source word, we first search for the most relevant auxiliary languages. We then use the translations to these languages to form an improved representation of the source word. Finally, this representation is used for the actual translation to the target language. Experiments on a standard multilingual word translation benchmark demonstrate that our model outperforms state-of-the-art results. |
Tasks | |
Published | 2019-11-01 |
URL | https://www.aclweb.org/anthology/D19-1134/ |
PWC | https://paperswithcode.com/paper/multilingual-word-translation-using-auxiliary |
Repo | |
Framework | |
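The retrieval procedure the abstract describes, refining the source representation with its translations into auxiliary languages before retrieving the target word, can be sketched with toy vectors. All words and vectors below are illustrative; the paper additionally selects the most relevant auxiliary languages per word and learns the cross-lingual mappings.

```python
import math

def cos(u, v):
    d = sum(a * b for a, b in zip(u, v))
    return d / (math.sqrt(sum(a * a for a in u)) *
                math.sqrt(sum(b * b for b in v)))

def translate(src_vec, aux_spaces, tgt_space):
    """Pool the source vector with its nearest neighbour in each
    auxiliary language, then retrieve the closest target word."""
    pooled = list(src_vec)
    for space in aux_spaces:
        best = max(space.values(), key=lambda v: cos(src_vec, v))
        pooled = [p + b for p, b in zip(pooled, best)]
    pooled = [p / (1 + len(aux_spaces)) for p in pooled]
    return max(tgt_space, key=lambda w: cos(pooled, tgt_space[w]))

# Toy shared embedding space (vectors invented for illustration).
aux_fr = {"chat": [0.95, 0.1], "chien": [0.1, 0.9]}   # French auxiliary
tgt_de = {"Katze": [0.9, 0.15], "Hund": [0.12, 0.95]} # German target
word = translate([1.0, 0.0], [aux_fr], tgt_de)        # source "cat"
# → "Katze"
```

Averaging over auxiliary neighbours denoises the source representation before the final nearest-neighbour lookup, which is the intuition behind using auxiliary languages at all.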
Proceedings of the 3rd International Conference on Natural Language and Speech Processing
Title | Proceedings of the 3rd International Conference on Natural Language and Speech Processing |
Authors | |
Abstract | |
Tasks | |
Published | 2019-09-01 |
URL | https://www.aclweb.org/anthology/W19-7400/ |
PWC | https://paperswithcode.com/paper/proceedings-of-the-3rd-international |
Repo | |
Framework | |
Encoding Position Improves Recurrent Neural Text Summarizers
Title | Encoding Position Improves Recurrent Neural Text Summarizers |
Authors | Apostolos Karanikolos, Ioannis Refanidis |
Abstract | |
Tasks | |
Published | 2019-09-01 |
URL | https://www.aclweb.org/anthology/W19-7420/ |
PWC | https://paperswithcode.com/paper/encoding-position-improves-recurrent-neural |
Repo | |
Framework | |