Paper Group NANR 25
Proceedings of the 6th Workshop on Argument Mining. Unsupervised Rewriter for Multi-Sentence Compression. Multi-Task Multi-Sensor Fusion for 3D Object Detection. EmoSense at SemEval-2019 Task 3: Bidirectional LSTM Network for Contextual Emotion Detection in Textual Conversations. UC Davis at SemEval-2019 Task 1: DAG Semantic Parsing with Attention- …
Proceedings of the 6th Workshop on Argument Mining
Title | Proceedings of the 6th Workshop on Argument Mining |
Authors | |
Abstract | |
Tasks | Argument Mining |
Published | 2019-08-01 |
URL | https://www.aclweb.org/anthology/W19-4500/ |
PWC | https://paperswithcode.com/paper/proceedings-of-the-6th-workshop-on-argument |
Repo | |
Framework | |
Unsupervised Rewriter for Multi-Sentence Compression
Title | Unsupervised Rewriter for Multi-Sentence Compression |
Authors | Yang Zhao, Xiaoyu Shen, Wei Bi, Akiko Aizawa |
Abstract | Multi-sentence compression (MSC) aims to generate a grammatical but reduced compression from multiple input sentences while retaining their key information. The previously dominant approach to MSC is the extraction-based word-graph approach. A few variants further leveraged lexical substitution to yield more abstractive compressions. However, two limitations exist. First, the word-graph approach, which simply concatenates fragments from multiple sentences, may yield non-fluent or ungrammatical compressions. Second, lexical substitution is often inappropriate without considering context information. To tackle these issues, we present a neural rewriter for multi-sentence compression that does not need any parallel corpus. Empirical studies have shown that our approach achieves comparable results under automatic evaluation and improves the grammaticality of compressions based on human evaluation. A parallel corpus with more than 140,000 (sentence group, compression) pairs is also constructed as a by-product for future research. |
Tasks | Sentence Compression |
Published | 2019-07-01 |
URL | https://www.aclweb.org/anthology/P19-1216/ |
PWC | https://paperswithcode.com/paper/unsupervised-rewriter-for-multi-sentence |
Repo | |
Framework | |
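The extraction-based word-graph baseline that this abstract contrasts with can be sketched in a few lines. The snippet below (it uses networkx for the graph and path search) is a rough, Filippova-style illustration under simplifying assumptions: surface-form node merging for content words, inverse-bigram-frequency edge weights, and a minimum path length. It is not the paper's neural rewriter, and the stopword list and weighting are placeholders.

```python
import itertools
import networkx as nx

STOPWORDS = {"the", "a", "an", "on", "in", "of", "to"}

def word_graph_compress(sentences, min_tokens=6):
    G = nx.DiGraph()
    uid = itertools.count()

    def node(tok):
        # Content words with the same surface form are merged into one node; stopwords keep
        # one node per occurrence (a rough stand-in for POS/context-aware merging).
        return tok if tok not in STOPWORDS else f"{tok}#{next(uid)}"

    for sent in sentences:
        tokens = ["<S>"] + [node(t) for t in sent.lower().split()] + ["</S>"]
        for a, b in zip(tokens, tokens[1:]):
            if G.has_edge(a, b):
                G[a][b]["count"] += 1
            else:
                G.add_edge(a, b, count=1)
    for _, _, data in G.edges(data=True):
        data["weight"] = 1.0 / data["count"]          # frequent transitions become cheap

    best, best_cost = None, float("inf")
    for path in nx.all_simple_paths(G, "<S>", "</S>"):
        words = [p.split("#")[0] for p in path[1:-1]]
        if len(words) < min_tokens:                   # enforce a minimum compression length
            continue
        cost = sum(G[a][b]["weight"] for a, b in zip(path, path[1:])) / len(words)
        if cost < best_cost:
            best, best_cost = words, cost
    return " ".join(best) if best else ""

print(word_graph_compress([
    "the cat sat on the mat in the kitchen",
    "the cat sat quietly on the mat",
]))
```

The rewriter proposed in the paper would then paraphrase the path produced by a graph like this into a more fluent sentence.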
Multi-Task Multi-Sensor Fusion for 3D Object Detection
Title | Multi-Task Multi-Sensor Fusion for 3D Object Detection |
Authors | Ming Liang, Bin Yang, Yun Chen, Rui Hu, Raquel Urtasun |
Abstract | In this paper we propose to exploit multiple related tasks for accurate multi-sensor 3D object detection. Towards this goal we present an end-to-end learnable architecture that reasons about 2D and 3D object detection as well as ground estimation and depth completion. Our experiments show that all these tasks are complementary and help the network learn better representations by fusing information at various levels. Importantly, our approach leads the KITTI benchmark on 2D, 3D and bird’s eye view object detection, while being real-time. |
Tasks | 3D Object Detection, Depth Completion, Object Detection, Sensor Fusion |
Published | 2019-06-01 |
URL | http://openaccess.thecvf.com/content_CVPR_2019/html/Liang_Multi-Task_Multi-Sensor_Fusion_for_3D_Object_Detection_CVPR_2019_paper.html |
PDF | http://openaccess.thecvf.com/content_CVPR_2019/papers/Liang_Multi-Task_Multi-Sensor_Fusion_for_3D_Object_Detection_CVPR_2019_paper.pdf |
PWC | https://paperswithcode.com/paper/multi-task-multi-sensor-fusion-for-3d-object |
Repo | |
Framework | |
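As a shape-level illustration of the multi-task setup described above, the toy PyTorch module below shares one fused feature map across detection, ground-estimation, and depth-completion heads. The actual architecture in the paper is far more involved; the channel counts, head outputs, and the assumption that both inputs share a spatial resolution are illustrative only.

```python
import torch
import torch.nn as nn

class ToyMultiTaskFusion(nn.Module):
    def __init__(self, lidar_ch=1, image_ch=3, feat_ch=32):
        super().__init__()
        self.lidar_enc = nn.Conv2d(lidar_ch, feat_ch, 3, padding=1)   # BEV occupancy -> features
        self.image_enc = nn.Conv2d(image_ch, feat_ch, 3, padding=1)   # camera image -> features
        self.shared = nn.Sequential(nn.Conv2d(2 * feat_ch, feat_ch, 3, padding=1), nn.ReLU())
        # One small head per task; all heads share the fused representation.
        self.det2d_head = nn.Conv2d(feat_ch, 6, 1)    # e.g. class score + 2D box parameters
        self.det3d_head = nn.Conv2d(feat_ch, 8, 1)    # e.g. score + 3D box parameters
        self.ground_head = nn.Conv2d(feat_ch, 1, 1)   # per-cell ground height
        self.depth_head = nn.Conv2d(feat_ch, 1, 1)    # dense depth completion

    def forward(self, bev, image):
        # Assumes both inputs were brought to the same spatial resolution beforehand.
        f = torch.cat([self.lidar_enc(bev), self.image_enc(image)], dim=1)
        f = self.shared(f)
        return {
            "det2d": self.det2d_head(f),
            "det3d": self.det3d_head(f),
            "ground": self.ground_head(f),
            "depth": self.depth_head(f),
        }

model = ToyMultiTaskFusion()
out = model(torch.randn(1, 1, 64, 64), torch.randn(1, 3, 64, 64))
print({k: tuple(v.shape) for k, v in out.items()})
```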
EmoSense at SemEval-2019 Task 3: Bidirectional LSTM Network for Contextual Emotion Detection in Textual Conversations
Title | EmoSense at SemEval-2019 Task 3: Bidirectional LSTM Network for Contextual Emotion Detection in Textual Conversations |
Authors | Sergey Smetanin |
Abstract | In this paper, we describe a deep-learning system for emotion detection in textual conversations that participated in SemEval-2019 Task 3 "EmoContext". We designed a bidirectional LSTM architecture that learns not only semantic and sentiment feature representations but also user-specific conversation features. To fine-tune word embeddings using distant supervision, we additionally collected a significant amount of emotional texts. The system achieved a 72.59% micro-average F1 score for emotion classes on the test dataset, thereby significantly outperforming the officially-released baseline. Word embeddings and the source code were released for the research community. |
Tasks | Word Embeddings |
Published | 2019-06-01 |
URL | https://www.aclweb.org/anthology/S19-2034/ |
PWC | https://paperswithcode.com/paper/emosense-at-semeval-2019-task-3-bidirectional |
Repo | |
Framework | |
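A minimal version of the bidirectional LSTM classifier described above can be sketched as follows. The per-turn handling of the three-message EmoContext input, the user-specific conversation features, and the distantly supervised embeddings are not reproduced; the vocabulary size and dimensions are assumptions, while the four classes (happy, sad, angry, others) follow the task setup.

```python
import torch
import torch.nn as nn

class BiLSTMEmotion(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, hidden=64, num_classes=4):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_classes)

    def forward(self, token_ids):                 # token_ids: (batch, seq_len)
        h, _ = self.lstm(self.emb(token_ids))     # (batch, seq_len, 2*hidden)
        pooled, _ = h.max(dim=1)                  # max-pool over time
        return self.out(pooled)                   # (batch, num_classes) logits

logits = BiLSTMEmotion()(torch.randint(1, 10000, (2, 30)))
print(logits.shape)  # torch.Size([2, 4])
```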
UC Davis at SemEval-2019 Task 1: DAG Semantic Parsing with Attention-based Decoder
Title | UC Davis at SemEval-2019 Task 1: DAG Semantic Parsing with Attention-based Decoder |
Authors | Dian Yu, Kenji Sagae |
Abstract | We present an encoder-decoder model for semantic parsing with UCCA for SemEval 2019 Task 1. The encoder is a Bi-LSTM and the decoder uses recursive self-attention. The proposed model alleviates the challenges and feature engineering of traditional transition-based and graph-based parsers. The resulting parser is simple and proved to be effective on the semantic parsing task. |
Tasks | Feature Engineering, Semantic Parsing |
Published | 2019-06-01 |
URL | https://www.aclweb.org/anthology/S19-2017/ |
PWC | https://paperswithcode.com/paper/uc-davis-at-semeval-2019-task-1-dag-semantic |
Repo | |
Framework | |
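The encoder-decoder shape described in the abstract, a Bi-LSTM encoder with an attention-based decoder, is sketched below. This is a loose approximation: the paper's recursive self-attention decoding of UCCA DAGs is not reproduced, and the attention layer here merely scores token-pair interactions, with assumed dimensions and label inventory.

```python
import torch
import torch.nn as nn

class ToyDAGParser(nn.Module):
    def __init__(self, vocab=5000, emb=64, hidden=64, num_edge_labels=16):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, num_heads=4, batch_first=True)
        self.edge_out = nn.Linear(2 * hidden, num_edge_labels)

    def forward(self, token_ids):
        enc, _ = self.encoder(self.emb(token_ids))       # (B, T, 2H) encoder states
        # Each position attends over all others to score potential parents/edges.
        ctx, attn_weights = self.attn(enc, enc, enc)
        return self.edge_out(ctx), attn_weights          # edge-label logits per token, attention map

logits, attn = ToyDAGParser()(torch.randint(0, 5000, (1, 12)))
print(logits.shape, attn.shape)  # torch.Size([1, 12, 16]) torch.Size([1, 12, 12])
```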
Jeff Da at COIN - Shared Task: BIG MOOD: Relating Transformers to Explicit Commonsense Knowledge
Title | Jeff Da at COIN - Shared Task: BIG MOOD: Relating Transformers to Explicit Commonsense Knowledge |
Authors | Jeff Da |
Abstract | We introduce a simple yet effective method of integrating contextual embeddings with commonsense graph embeddings, dubbed BERT Infused Graphs: Matching Over Other embeDdings. First, we introduce a preprocessing method to improve the speed of querying knowledge bases. Then, we develop a method of creating knowledge embeddings from each knowledge base. We introduce a method of aligning tokens between two misaligned tokenization methods. Finally, we contribute a method of contextualizing BERT after combining with knowledge base embeddings. We also show BERT's tendency to correct lower-accuracy question types. Our model achieves higher accuracy than BERT, and we place fifth on the official leaderboard of the shared task while achieving the highest score without any additional language model pretraining. |
Tasks | Language Modelling, Tokenization |
Published | 2019-11-01 |
URL | https://www.aclweb.org/anthology/D19-6010/ |
PWC | https://paperswithcode.com/paper/jeff-da-at-coin-shared-task |
Repo | |
Framework | |
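One concrete subproblem the abstract mentions is aligning tokens across two mismatched tokenization schemes, e.g. WordPiece sub-tokens versus the whitespace tokens a knowledge base is keyed on. The sketch below is a generic character-count alignment, not the paper's actual procedure, and assumes the "##" continuation convention of BERT's WordPiece tokenizer.

```python
def align(word_tokens, subword_tokens):
    """Map each subword index to the index of the whitespace word it falls inside."""
    mapping = []
    word_idx, consumed = 0, 0
    for sub in subword_tokens:
        piece = sub[2:] if sub.startswith("##") else sub      # strip WordPiece continuation marker
        mapping.append(word_idx)
        consumed += len(piece)
        if consumed >= len(word_tokens[word_idx]):            # finished covering the current word
            word_idx += 1
            consumed = 0
    return mapping

words = ["commonsense", "knowledge", "helps"]
subwords = ["common", "##sense", "knowledge", "help", "##s"]
print(align(words, subwords))   # [0, 0, 1, 2, 2]
```

With this mapping, a knowledge-base embedding looked up per whitespace word can be broadcast onto the corresponding sub-tokens before being combined with the contextual embeddings.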
Turkish Treebanking: Unifying and Constructing Efforts
Title | Turkish Treebanking: Unifying and Constructing Efforts |
Authors | Utku Türk, Furkan Atmaca, Şaziye Betül Özateş, Abdullatif Köksal, Balkiz Ozturk Basaran, Tunga Gungor, Arzucan Özgür |
Abstract | In this paper, we present the current version of two different treebanks, the re-annotation of the Turkish PUD Treebank and the first annotation of the Turkish National Corpus Universal Dependency treebank (henceforth TNC-UD). The annotation of both treebanks, the Turkish PUD Treebank and TNC-UD, was carried out based on the decisions concerning the linguistic adequacy of the re-annotation of the Turkish IMST-UD Treebank (Türk et al., forthcoming). Both treebanks were annotated with the same annotation process and morphological and syntactic analyses. The TNC-UD is planned to have 10,000 sentences. In this paper, we present the first 500 sentences along with the re-annotated PUD Treebank. Moreover, this paper also offers the parsing results of a graph-based neural parser on the previous and re-annotated PUD, as well as the TNC-UD. In light of the comparisons, even though we observe a slight decrease in the attachment scores of the Turkish PUD treebank, we demonstrate that the annotation of the TNC-UD improves the parsing accuracy of Turkish. In addition to the treebanks, we have also constructed custom annotation software with advanced filtering and morphological editing options. Both the treebanks, including a full edit history and the annotation guidelines, and the custom software are publicly available under an open license online. |
Tasks | |
Published | 2019-08-01 |
URL | https://www.aclweb.org/anthology/W19-4019/ |
PWC | https://paperswithcode.com/paper/turkish-treebanking-unifying-and-constructing |
Repo | |
Framework | |
Commonsense about Human Senses: Labeled Data Collection Processes
Title | Commonsense about Human Senses: Labeled Data Collection Processes |
Authors | Ndapa Nakashole |
Abstract | We consider the problem of extracting from text commonsense knowledge pertaining to human senses such as sound and smell. First, we consider the problem of recognizing mentions of human senses in text. Our contribution is a method for acquiring labeled data. Experiments show the effectiveness of our proposed data labeling approach when used with standard machine learning models on the task of sense recognition in text. Second, we propose to extract novel, common sense relationships pertaining to sense perception concepts. Our contribution is a process for generating labeled data by leveraging large corpora and crowdsourcing questionnaires. |
Tasks | Common Sense Reasoning |
Published | 2019-11-01 |
URL | https://www.aclweb.org/anthology/D19-6005/ |
PWC | https://paperswithcode.com/paper/commonsense-about-human-senses-labeled-data |
Repo | |
Framework | |
Semantic Frame Embeddings for Detecting Relations between Software Requirements
Title | Semantic Frame Embeddings for Detecting Relations between Software Requirements |
Authors | Waad Alhoshan, Riza Batista-Navarro, Liping Zhao |
Abstract | The early phases of requirements engineering (RE) deal with a vast amount of software requirements (i.e., requirements that define characteristics of software systems), which are typically expressed in natural language. Analysing such unstructured requirements, usually obtained from users' inputs, is considered a challenging task due to the inherent ambiguity and inconsistency of natural language. To support such a task, methods based on natural language processing (NLP) can be employed. One of the more recent advances in NLP is the use of word embeddings for capturing contextual information, which can then be applied in word analogy tasks. In this paper, we describe a new resource, i.e., embedding-based representations of semantic frames in FrameNet, which was developed to support the detection of relations between software requirements. Our embeddings, which encapsulate contextual information at the semantic frame level, were trained on a large corpus of requirements (i.e., a collection of more than three million mobile application reviews). The similarity between these frame embeddings is then used as a basis for detecting semantic relatedness between software requirements. Compared with existing resources underpinned by word-level embeddings alone, and frame embeddings built upon pre-trained vectors, our proposed frame embeddings obtained better performance against judgements of an RE expert. These encouraging results demonstrate the strong potential of the resource in supporting RE analysis tasks (e.g., traceability), which we plan to investigate as part of our future work. |
Tasks | Word Embeddings |
Published | 2019-05-01 |
URL | https://www.aclweb.org/anthology/W19-0606/ |
PWC | https://paperswithcode.com/paper/semantic-frame-embeddings-for-detecting |
Repo | |
Framework | |
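The core mechanism, representing each semantic frame as a vector and scoring requirement relatedness by cosine similarity between frame vectors, can be illustrated with a small numpy sketch. The real resource trains frame-level embeddings on roughly three million app reviews; the toy word vectors, frame inventory, and lexical units below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
word_vec = {w: rng.normal(size=50) for w in
            ["send", "transmit", "notify", "alert", "store", "save"]}

frames = {                       # frame -> lexical units that evoke it (toy inventory)
    "Sending":      ["send", "transmit"],
    "Notification": ["notify", "alert"],
    "Storing":      ["store", "save"],
}
# Toy frame embedding: the average of the vectors of the frame's lexical units.
frame_vec = {f: np.mean([word_vec[w] for w in lus], axis=0) for f, lus in frames.items()}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Requirements represented by the frames their predicates evoke.
req_a = frame_vec["Sending"]
req_b = frame_vec["Notification"]
print(round(cosine(req_a, req_b), 3))
```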
Adaptive Activation Thresholding: Dynamic Routing Type Behavior for Interpretability in Convolutional Neural Networks
Title | Adaptive Activation Thresholding: Dynamic Routing Type Behavior for Interpretability in Convolutional Neural Networks |
Authors | Yiyou Sun, Sathya N. Ravi, Vikas Singh |
Abstract | There is a growing interest in strategies that can help us understand or interpret neural networks – that is, not merely provide a prediction, but also offer additional context explaining why and how. While many current methods offer tools to perform this analysis for a given (trained) network post-hoc, recent results (especially on capsule networks) suggest that when classes map to a few high-level “concepts” in the preceding layers of the network, the behavior of the network is easier to interpret or explain. Such training may be accomplished via dynamic/EM routing where the network “routes” for individual classes (or subsets of images) are dynamic and involve few nodes even if the full network may not be sparse. In this paper, we show how a simple modification of the SGD scheme can help provide dynamic/EM routing type behavior in convolutional neural networks. Through extensive experiments, we evaluate the effect of this idea for interpretability, where we obtain promising results while also showing that no compromise in attainable accuracy is involved. Further, we show that although the minor modification is seemingly ad hoc, the new algorithm can be analyzed by an approximate method which provably matches known rates for SGD. |
Tasks | |
Published | 2019-10-01 |
URL | http://openaccess.thecvf.com/content_ICCV_2019/html/Sun_Adaptive_Activation_Thresholding_Dynamic_Routing_Type_Behavior_for_Interpretability_in_ICCV_2019_paper.html |
PDF | http://openaccess.thecvf.com/content_ICCV_2019/papers/Sun_Adaptive_Activation_Thresholding_Dynamic_Routing_Type_Behavior_for_Interpretability_in_ICCV_2019_paper.pdf |
PWC | https://paperswithcode.com/paper/adaptive-activation-thresholding-dynamic |
Repo | |
Framework | |
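As a loose, inference-time illustration of the routing-like sparsity described above (only a few high-level units stay active per example), the snippet below keeps the top-k channel activations of a feature map and zeroes the rest. This is not the paper's modified SGD scheme, only the thresholding intuition; k and the saliency measure are assumptions.

```python
import torch

def threshold_channels(feat, k=4):
    """feat: (batch, channels, H, W). Keep the k strongest channels per example."""
    strength = feat.abs().mean(dim=(2, 3))                 # (batch, channels) channel saliency
    topk = strength.topk(k, dim=1).indices                 # indices of the k most active channels
    mask = torch.zeros_like(strength).scatter_(1, topk, 1.0)
    return feat * mask[:, :, None, None]                   # broadcast mask over spatial dims

x = torch.randn(2, 16, 8, 8)
sparse = threshold_channels(x, k=4)
print((sparse.abs().sum(dim=(2, 3)) > 0).sum(dim=1))       # 4 active channels per example
```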
AD-VAT: An Asymmetric Dueling mechanism for learning Visual Active Tracking
Title | AD-VAT: An Asymmetric Dueling mechanism for learning Visual Active Tracking |
Authors | Fangwei Zhong, Peng Sun, Wenhan Luo, Tingyun Yan, Yizhou Wang |
Abstract | Visual Active Tracking (VAT) aims at following a target object by autonomously controlling the motion system of a tracker given visual observations. Previous work has shown that the tracker can be trained in a simulator via reinforcement learning and deployed in real-world scenarios. However, during training, such a method requires manually specifying the moving path of the target object to be tracked, which cannot ensure the tracker’s generalization to unseen object moving patterns. To learn a robust tracker for VAT, in this paper, we propose a novel adversarial RL method which adopts an Asymmetric Dueling mechanism, referred to as AD-VAT. In AD-VAT, both the tracker and the target are approximated by end-to-end neural networks, and are trained via RL in a dueling/competitive manner: i.e., the tracker intends to lock up the target, while the target tries to escape from the tracker. They are asymmetric in that the target is aware of the tracker, but not vice versa. Specifically, besides its own observation, the target is fed with the tracker’s observation and action, and learns to predict the tracker’s reward as an auxiliary task. We show that such an asymmetric dueling mechanism produces a stronger target, which in turn induces a more robust tracker. To stabilize the training, we also propose a novel partial zero-sum reward for the tracker/target. The experimental results, in both 2D and 3D environments, demonstrate that the proposed method leads to a faster convergence in training and yields more robust tracking behaviors in different testing scenarios. For supplementary videos, see: https://www.youtube.com/playlist?list=PL9rZj4Mea7wOZkdajK1TsprRg8iUf51BS |
Tasks | |
Published | 2019-05-01 |
URL | https://openreview.net/forum?id=HkgYmhR9KX |
PDF | https://openreview.net/pdf?id=HkgYmhR9KX |
PWC | https://paperswithcode.com/paper/ad-vat-an-asymmetric-dueling-mechanism-for |
Repo | |
Framework | |
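The "partial zero-sum" reward mentioned in the abstract can be illustrated as follows: the target's reward is the negative of the tracker's only while the two are close, and turns into a flat penalty once the target escapes too far, which discourages degenerate run-away behavior. The exact reward shaping and constants in the paper differ; everything below is an assumed toy form.

```python
import math

def tracker_reward(dist, angle, d_max=5.0, a_max=math.pi / 2):
    """Higher when the target is close and centered (both error terms normalized to [0, 1])."""
    return 1.0 - min(dist / d_max, 1.0) - min(abs(angle) / a_max, 1.0)

def target_reward(dist, angle, near=5.0, far_penalty=-1.0):
    r_tracker = tracker_reward(dist, angle)
    if dist <= near:                 # zero-sum zone: the target gains what the tracker loses
        return -r_tracker
    return far_penalty               # outside the zone: flat penalty, no longer zero-sum

for d in (1.0, 4.0, 8.0):
    print(d, round(tracker_reward(d, 0.2), 3), round(target_reward(d, 0.2), 3))
```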
Learning to Localize Through Compressed Binary Maps
Title | Learning to Localize Through Compressed Binary Maps |
Authors | Xinkai Wei, Ioan Andrei Barsan, Shenlong Wang, Julieta Martinez, Raquel Urtasun |
Abstract | One of the main difficulties of scaling current localization systems to large environments is the on-board storage required for the maps. In this paper we propose to learn to compress the map representation such that it is optimal for the localization task. As a consequence, higher compression rates can be achieved without loss of localization accuracy when compared to standard coding schemes that optimize for reconstruction, thus ignoring the end task. Our experiments show that it is possible to learn a task-specific compression which reduces storage requirements by two orders of magnitude over general-purpose codecs such as WebP without sacrificing performance. |
Tasks | |
Published | 2019-06-01 |
URL | http://openaccess.thecvf.com/content_CVPR_2019/html/Wei_Learning_to_Localize_Through_Compressed_Binary_Maps_CVPR_2019_paper.html |
PDF | http://openaccess.thecvf.com/content_CVPR_2019/papers/Wei_Learning_to_Localize_Through_Compressed_Binary_Maps_CVPR_2019_paper.pdf |
PWC | https://paperswithcode.com/paper/learning-to-localize-through-compressed |
Repo | |
Framework | |
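To make the "compressed binary map" idea concrete, the toy PyTorch codec below pushes a map patch through a binarized bottleneck using a straight-through estimator. Unlike the paper, which learns the code end-to-end for the localization task rather than for reconstruction, this sketch only demonstrates the binarization mechanics; all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class BinaryMapCodec(nn.Module):
    def __init__(self, ch=8):
        super().__init__()
        self.enc = nn.Conv2d(1, ch, 4, stride=4)             # downsample map patch to a small code
        self.dec = nn.ConvTranspose2d(ch, 1, 4, stride=4)    # decode back to map resolution

    def forward(self, x):
        z = self.enc(x)
        b = (z > 0).float()
        b = z + (b - z).detach()      # straight-through: binary forward pass, identity gradient
        return self.dec(b), b

codec = BinaryMapCodec()
recon, code = codec(torch.randn(1, 1, 64, 64))
print(recon.shape, code.shape, code.unique())   # the code holds only 0s and 1s
```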
A General-Purpose Algorithm for Constrained Sequential Inference
Title | A General-Purpose Algorithm for Constrained Sequential Inference |
Authors | Daniel Deutsch, Shyam Upadhyay, Dan Roth |
Abstract | Inference in structured prediction involves finding the best output structure for an input, subject to certain constraints. Many current approaches use sequential inference, which constructs the output in a left-to-right manner. However, there is no general framework to specify constraints in these approaches. We present a principled approach for incorporating constraints into sequential inference algorithms. Our approach expresses constraints using an automaton, which is traversed in lock-step during inference, guiding the search to valid outputs. We show that automata can express commonly used constraints and are easily incorporated into sequential inference. When it is more natural to represent constraints as a set of automata, our algorithm uses an active set method for demonstrably fast and efficient inference. We experimentally show the benefits of our algorithm on constituency parsing and semantic role labeling. For parsing, unlike unconstrained approaches, our algorithm always generates valid output, incurring only a small drop in performance. For semantic role labeling, imposing constraints using our algorithm corrects common errors, improving F1 by 1.5 points. These benefits increase in low-resource settings. Our active set method achieves a 5.2x relative speed-up over a naive approach. |
Tasks | Constituency Parsing, Semantic Role Labeling, Structured Prediction |
Published | 2019-11-01 |
URL | https://www.aclweb.org/anthology/K19-1045/ |
PWC | https://paperswithcode.com/paper/a-general-purpose-algorithm-for-constrained |
Repo | |
Framework | |
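The automaton-in-lock-step idea described above can be reduced to a few lines: at each decoding step, only labels the automaton accepts from its current state are considered, and the chosen label advances the state. The BIO-validity constraint below is a standard example rather than the constraints used in the paper's parsing and SRL experiments.

```python
LABELS = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC"]

def allowed(state, label):
    """Automaton transition test. State = type of the currently open chunk (or None)."""
    if label.startswith("I-"):
        return state == label[2:]            # may only continue a chunk of the same type
    return True                              # O and B-* are always allowed

def next_state(state, label):
    return label[2:] if label[0] in "BI" else None

def constrained_greedy(score_rows):
    """score_rows: one list of per-label scores (len(LABELS)) for each position."""
    state, out = None, []
    for scores in score_rows:
        best = max((s, lab) for s, lab in zip(scores, LABELS) if allowed(state, lab))[1]
        out.append(best)
        state = next_state(state, best)
    return out

# Position 2 prefers the invalid I-LOC; the automaton forces a valid alternative.
scores = [
    [0.1, 0.8, 0.0, 0.1, 0.0],   # -> B-PER
    [0.2, 0.0, 0.3, 0.0, 0.5],   # wants I-LOC, but only O / B-* / I-PER are legal here
]
print(constrained_greedy(scores))   # ['B-PER', 'I-PER']
```

The paper's contribution goes further, showing how to handle sets of automata efficiently with an active set method during beam search; the sketch only shows the lock-step masking step.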
Unsupervised Labeled Parsing with Deep Inside-Outside Recursive Autoencoders
Title | Unsupervised Labeled Parsing with Deep Inside-Outside Recursive Autoencoders |
Authors | Andrew Drozdov, Patrick Verga, Yi-Pei Chen, Mohit Iyyer, Andrew McCallum |
Abstract | Understanding text often requires identifying meaningful constituent spans such as noun phrases and verb phrases. In this work, we show that we can effectively recover these types of labels using the learned phrase vectors from deep inside-outside recursive autoencoders (DIORA). Specifically, we cluster span representations to induce span labels. Additionally, we improve the model's labeling accuracy by integrating latent code learning into the training procedure. We evaluate this approach empirically through unsupervised labeled constituency parsing. Our method outperforms ELMo and BERT on two versions of the Wall Street Journal (WSJ) dataset and is competitive with prior work that requires additional human annotations, improving over a previous state-of-the-art system that depends on ground-truth part-of-speech tags by 5 absolute F1 points (a 19% relative error reduction). |
Tasks | Constituency Parsing |
Published | 2019-11-01 |
URL | https://www.aclweb.org/anthology/D19-1161/ |
PWC | https://paperswithcode.com/paper/unsupervised-labeled-parsing-with-deep-inside |
Repo | |
Framework | |
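The label-induction step described above, clustering span vectors and reading labels off the clusters, is easy to sketch. The snippet uses random stand-in vectors and scikit-learn's k-means with a majority-label mapping for evaluation; DIORA's inside-outside span encoder and the latent code learning are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Pretend span vectors: two loose groups standing in for NP-like and VP-like spans.
np_spans = rng.normal(loc=0.0, size=(20, 16))
vp_spans = rng.normal(loc=3.0, size=(20, 16))
X = np.vstack([np_spans, vp_spans])
gold = ["NP"] * 20 + ["VP"] * 20

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Name each cluster by its most frequent gold label (the usual many-to-one mapping).
for c in sorted(set(clusters)):
    members = [gold[i] for i in range(len(gold)) if clusters[i] == c]
    print(c, max(set(members), key=members.count), len(members))
```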
Annotation Process for the Dialog Act Classification of a Taglish E-commerce Q&A Corpus
Title | Annotation Process for the Dialog Act Classification of a Taglish E-commerce Q&A Corpus |
Authors | Jared Rivera, Jan Caleb Oliver Pensica, Jolene Valenzuela, Alfonso Secuya, Charibeth Cheng |
Abstract | With conversational agents and chatbots often making up in quantity of replies what they lack in quality, the need to identify user intent has become a main concern in improving these agents. Dialog act (DA) classification tackles this concern, and while existing studies have already addressed DA classification in general contexts, no training corpora in the context of e-commerce are available to the public. This research addressed this insufficiency by building a text-based corpus of 7,265 posts from the question and answer section of products on Lazada Philippines. The SWBD-DAMSL tagset for DA classification was modified to 28 tags fitting the categories applicable to e-commerce conversations. The posts were annotated manually by three (3) human annotators, and preprocessing techniques decreased the vocabulary size from 6,340 to 1,134. After analysis, the corpus was composed predominantly of single-label posts, with 34% of the corpus having multiple intent tags. The annotated corpus allowed insights into the structure of posts created with single to multiple intents. |
Tasks | Dialog Act Classification |
Published | 2019-11-01 |
URL | https://www.aclweb.org/anthology/D19-5108/ |
PWC | https://paperswithcode.com/paper/annotation-process-for-the-dialog-act |
Repo | |
Framework | |