Paper Group NANR 231
Pearl: Prototype lEArning via Rule Lists
Title | Pearl: Prototype lEArning via Rule Lists |
Authors | Tianfan Fu*, Tian Gao*, Cao Xiao*, Tengfei Ma*, Jimeng Sun |
Abstract | Deep neural networks have demonstrated promising prediction and classification performance on many healthcare applications. However, the interpretability of those models is often lacking. On the other hand, classical interpretable models such as rule lists or decision trees do not reach the same level of accuracy as deep neural networks, and can themselves be too complex to interpret (due to the potentially large depth of rule lists). In this work, we present PEARL, Prototype lEArning via Rule Lists, which iteratively uses rule lists to guide a neural network to learn representative data prototypes. The resulting prototype neural network provides accurate predictions, and each prediction can be easily explained by its prototype and the guiding rule lists. Thanks to the prediction power of neural networks, the rule lists derived from prototypes are more concise and hence provide better interpretability. On two real-world electronic health record (EHR) datasets, PEARL consistently outperforms all baselines, improving over conventional rule learning by up to 28% and over prototype learning by up to 3%. Experimental results also show that the resulting interpretation of PEARL is simpler than that of standard rule learning. |
Tasks | |
Published | 2019-05-01 |
URL | https://openreview.net/forum?id=r1gnQ20qYX |
https://openreview.net/pdf?id=r1gnQ20qYX | |
PWC | https://paperswithcode.com/paper/pearl-prototype-learning-via-rule-lists |
Repo | |
Framework | |
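The PEARL abstract above builds on rule lists as its interpretable component. As an illustrative sketch only (not the PEARL implementation, whose code is not linked here), a rule list classifies an example by firing the first matching rule and falling back to a default; the toy conditions and feature names below are hypothetical:

```python
# Minimal sketch of a rule list classifier: ordered (condition, label) pairs,
# where the first condition that matches the record determines the prediction.
def predict_rule_list(record, rules, default=0):
    """rules: ordered list of (condition, label); condition is a predicate over record."""
    for condition, label in rules:
        if condition(record):
            return label
    return default

# Hypothetical toy rules over an EHR-style feature dict (illustrative only).
rules = [
    (lambda r: r.get("num_admissions", 0) >= 3, 1),
    (lambda r: "heart_failure" in r.get("codes", ()), 1),
]

print(predict_rule_list({"num_admissions": 4}, rules))    # first rule fires -> 1
print(predict_rule_list({"codes": ("diabetes",)}, rules)) # no rule fires -> 0
```

The depth concern the abstract raises is visible here: interpretability degrades as the ordered list of conditions grows, which is what PEARL's prototype guidance is meant to keep in check.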
On task effects in NLG corpus elicitation: a replication study using mixed effects modeling
Title | On task effects in NLG corpus elicitation: a replication study using mixed effects modeling |
Authors | Emiel van Miltenburg, Merel van de Kerkhof, Ruud Koolen, Martijn Goudbeek, Emiel Krahmer |
Abstract | Task effects in NLG corpus elicitation recently started to receive more attention, but are usually not modeled statistically. We present a controlled replication of the study by Van Miltenburg et al. (2018b), contrasting spoken with written descriptions. We collected additional written Dutch descriptions to supplement the spoken data from the DIDEC corpus, and analyzed the descriptions using mixed effects modeling to account for variation between participants and items. Our results show that the effects of modality largely disappear in a controlled setting. |
Tasks | |
Published | 2019-10-01 |
URL | https://www.aclweb.org/anthology/W19-8649/ |
https://www.aclweb.org/anthology/W19-8649 | |
PWC | https://paperswithcode.com/paper/on-task-effects-in-nlg-corpus-elicitation-a |
Repo | |
Framework | |
Litigation Analytics: Case Outcomes Extracted from US Federal Court Dockets
Title | Litigation Analytics: Case Outcomes Extracted from US Federal Court Dockets |
Authors | Thomas Vacek, Ronald Teo, Dezhao Song, Timothy Nugent, Conner Cowling, Frank Schilder |
Abstract | Dockets contain a wealth of information for planning a litigation strategy, but the information is locked up in semi-structured text. Manually deriving the outcomes for each party (e.g., settlement, verdict) would be very labor intensive. Having such information available for every past court case, however, would be very useful for developing a strategy, because it potentially reveals tendencies and trends of judges, courts, and opposing counsel. We used Natural Language Processing (NLP) techniques and deep learning methods that allow us to scale the automatic analysis to millions of US federal court dockets. The automatically extracted information is fed into a Litigation Analytics tool used by lawyers to plan their approach to specific litigation. |
Tasks | |
Published | 2019-06-01 |
URL | https://www.aclweb.org/anthology/W19-2206/ |
https://www.aclweb.org/anthology/W19-2206 | |
PWC | https://paperswithcode.com/paper/litigation-analytics-case-outcomes-extracted |
Repo | |
Framework | |
Social Relation Recognition From Videos via Multi-Scale Spatial-Temporal Reasoning
Title | Social Relation Recognition From Videos via Multi-Scale Spatial-Temporal Reasoning |
Authors | Xinchen Liu, Wu Liu, Meng Zhang, Jingwen Chen, Lianli Gao, Chenggang Yan, Tao Mei |
Abstract | Discovering social relations, e.g., kinship, friendship, etc., from visual contents can make machines better interpret the behaviors and emotions of human beings. Existing studies mainly focus on recognizing social relations from still images while neglecting another important medium: video. On the one hand, the actions and storylines in videos provide more important cues for social relation recognition. On the other hand, the key persons may appear at arbitrary spatial-temporal locations and may not even appear together in a single frame from beginning to end. To overcome these challenges, we propose a Multi-scale Spatial-Temporal Reasoning (MSTR) framework to recognize social relations from videos. For the spatial representation, we not only adopt a temporal segment network to learn global action and scene information, but also design a Triple Graphs model to capture visual relations between persons and objects. For the temporal domain, we propose a Pyramid Graph Convolutional Network to perform temporal reasoning with multi-scale receptive fields, which can capture both long-term and short-term storylines in videos. By this means, MSTR can comprehensively explore the multi-scale actions and storylines in spatial-temporal dimensions for social relation reasoning in videos. Extensive experiments on a new large-scale Video Social Relation dataset demonstrate the effectiveness of the proposed framework. |
Tasks | |
Published | 2019-06-01 |
URL | http://openaccess.thecvf.com/content_CVPR_2019/html/Liu_Social_Relation_Recognition_From_Videos_via_Multi-Scale_Spatial-Temporal_Reasoning_CVPR_2019_paper.html |
http://openaccess.thecvf.com/content_CVPR_2019/papers/Liu_Social_Relation_Recognition_From_Videos_via_Multi-Scale_Spatial-Temporal_Reasoning_CVPR_2019_paper.pdf | |
PWC | https://paperswithcode.com/paper/social-relation-recognition-from-videos-via |
Repo | |
Framework | |
SUDA-Alibaba at MRP 2019: Graph-Based Models with BERT
Title | SUDA-Alibaba at MRP 2019: Graph-Based Models with BERT |
Authors | Yue Zhang, Wei Jiang, Qingrong Xia, Junjie Cao, Rui Wang, Zhenghua Li, Min Zhang |
Abstract | In this paper, we describe our participating systems in the shared task on Cross-Framework Meaning Representation Parsing (MRP) at the 2019 Conference on Computational Natural Language Learning (CoNLL). The task includes five frameworks for graph-based meaning representations, i.e., DM, PSD, EDS, UCCA, and AMR. One common characteristic of our systems is that we employ graph-based methods instead of transition-based methods when predicting edges between nodes. For SDP, we jointly perform edge prediction, frame tagging, and POS tagging via multi-task learning (MTL). For UCCA, we also jointly model constituent tree parsing and a remote edge recovery task. For both EDS and AMR, we produce nodes first and edges second in a pipeline fashion. External resources like BERT are found helpful for all frameworks except AMR. Our final submission ranks third on the overall MRP evaluation metric, first on EDS, and second on UCCA. |
Tasks | Multi-Task Learning |
Published | 2019-11-01 |
URL | https://www.aclweb.org/anthology/K19-2014/ |
https://www.aclweb.org/anthology/K19-2014 | |
PWC | https://paperswithcode.com/paper/suda-alibaba-at-mrp-2019-graph-based-models |
Repo | |
Framework | |
Peking at MRP 2019: Factorization- and Composition-Based Parsing for Elementary Dependency Structures
Title | Peking at MRP 2019: Factorization- and Composition-Based Parsing for Elementary Dependency Structures |
Authors | Yufei Chen, Yajie Ye, Weiwei Sun |
Abstract | We design, implement, and evaluate two semantic parsers, representing factorization- and composition-based approaches respectively, for Elementary Dependency Structures (EDS) at the CoNLL 2019 Shared Task on Cross-Framework Meaning Representation Parsing. The detailed evaluation of the two parsers gives us new insight into parsing into linguistically enriched meaning representations: current neural EDS parsers are able to reach accuracy at the inter-annotator agreement level in the same-epoch-and-domain setup. |
Tasks | |
Published | 2019-11-01 |
URL | https://www.aclweb.org/anthology/K19-2016/ |
https://www.aclweb.org/anthology/K19-2016 | |
PWC | https://paperswithcode.com/paper/peking-at-mrp-2019-factorization-and |
Repo | |
Framework | |
Scalable, Semi-Supervised Extraction of Structured Information from Scientific Literature
Title | Scalable, Semi-Supervised Extraction of Structured Information from Scientific Literature |
Authors | Kritika Agrawal, Aakash Mittal, Vikram Pudi |
Abstract | As scientific communities grow and evolve, there is a high demand for improved methods for finding relevant papers, comparing papers on similar topics and studying trends in the research community. All these tasks involve the common problem of extracting structured information from scientific articles. In this paper, we propose a novel, scalable, semi-supervised method for extracting relevant structured information from the vast available raw scientific literature. We extract the fundamental concepts of "aim", "method", and "result" from scientific articles and use them to construct a knowledge graph. Our algorithm makes use of domain-based word embedding and the bootstrap framework. Our experiments show that our system achieves precision and recall comparable to the state of the art. We also show the domain independence of our algorithm by analyzing the research trends of two distinct communities: computational linguistics and computer vision. |
Tasks | |
Published | 2019-06-01 |
URL | https://www.aclweb.org/anthology/W19-2602/ |
https://www.aclweb.org/anthology/W19-2602 | |
PWC | https://paperswithcode.com/paper/scalable-semi-supervised-extraction-of |
Repo | |
Framework | |
Learning from Omission
Title | Learning from Omission |
Authors | Bill McDowell, Noah Goodman |
Abstract | Pragmatic reasoning allows humans to go beyond the literal meaning when interpreting language in context. Previous work has shown that such reasoning can improve the performance of already-trained language understanding systems. Here, we explore whether pragmatic reasoning during training can improve the quality of learned meanings. Our experiments on reference game data show that end-to-end pragmatic training produces more accurate utterance interpretation models, especially when data is sparse and language is complex. |
Tasks | |
Published | 2019-07-01 |
URL | https://www.aclweb.org/anthology/P19-1059/ |
https://www.aclweb.org/anthology/P19-1059 | |
PWC | https://paperswithcode.com/paper/learning-from-omission |
Repo | |
Framework | |
Bridging by Word: Image Grounded Vocabulary Construction for Visual Captioning
Title | Bridging by Word: Image Grounded Vocabulary Construction for Visual Captioning |
Authors | Zhihao Fan, Zhongyu Wei, Siyuan Wang, Xuanjing Huang |
Abstract | Image Captioning aims at generating a short description for an image. Existing research usually employs a CNN-RNN architecture that views generation as a sequential decision-making process, with the entire dataset vocabulary used as the decoding space. Such models suffer from generating high-frequency n-grams with irrelevant words. To tackle this problem, we propose to construct an image-grounded vocabulary, based on which captions are generated with limitation and guidance. Specifically, a novel hierarchical structure is proposed to construct the vocabulary, incorporating both visual information and relations among words. For generation, we propose a word-aware RNN cell that incorporates vocabulary information directly into the decoding process. The REINFORCE algorithm is employed to train the generator using the constrained vocabulary as the action space. Experimental results on MS COCO and Flickr30k show the effectiveness of our framework compared to state-of-the-art models. |
Tasks | Decision Making, Image Captioning |
Published | 2019-07-01 |
URL | https://www.aclweb.org/anthology/P19-1652/ |
https://www.aclweb.org/anthology/P19-1652 | |
PWC | https://paperswithcode.com/paper/bridging-by-word-image-grounded-vocabulary |
Repo | |
Framework | |
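The core idea in the abstract above, generating only from an image-grounded vocabulary rather than the full dataset vocabulary, can be illustrated with a minimal sketch (not the paper's model): restrict each decoding step to words in an allowed set by ignoring the scores of everything else. The vocabulary, logits, and allowed set below are hypothetical toy values:

```python
import math

# Sketch of constrained decoding: pick the highest-scoring word whose index
# falls inside the allowed (image-grounded) vocabulary, ignoring the rest.
def constrained_argmax(logits, vocab, allowed):
    best_word, best_score = None, -math.inf
    for i, word in enumerate(vocab):
        if word in allowed and logits[i] > best_score:
            best_word, best_score = word, logits[i]
    return best_word

vocab = ["the", "dog", "pizza", "runs"]
logits = [2.0, 1.5, 3.0, 1.0]      # "pizza" scores highest over the full vocabulary
allowed = {"the", "dog", "runs"}   # hypothetical image-grounded vocabulary
print(constrained_argmax(logits, vocab, allowed))  # -> "the"
```

Without the constraint, the decoder would emit the globally highest-scoring but image-irrelevant word; the allowed set plays the role of the action space the abstract describes for REINFORCE training.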
Recurrent models and lower bounds for projective syntactic decoding
Title | Recurrent models and lower bounds for projective syntactic decoding |
Authors | Natalie Schluter |
Abstract | The current state-of-the-art in neural graph-based parsing uses only approximate decoding at the training phase. In this paper, we aim to understand this result better. We show how recurrent models can carry out projective maximum spanning tree decoding. This result holds for both current state-of-the-art models for shift-reduce and graph-based parsers, projective or not. We also provide the first proof of lower bounds for projective maximum spanning tree decoding. |
Tasks | |
Published | 2019-06-01 |
URL | https://www.aclweb.org/anthology/N19-1022/ |
https://www.aclweb.org/anthology/N19-1022 | |
PWC | https://paperswithcode.com/paper/recurrent-models-and-lower-bounds-for |
Repo | |
Framework | |
Seeing the Wind: Visual Wind Speed Prediction with a Coupled Convolutional and Recurrent Neural Network
Title | Seeing the Wind: Visual Wind Speed Prediction with a Coupled Convolutional and Recurrent Neural Network |
Authors | Jennifer Cardona, Michael Howland, John Dabiri |
Abstract | Wind energy resource quantification, air pollution monitoring, and weather forecasting all rely on rapid, accurate measurement of local wind conditions. Visual observations of the effects of wind—the swaying of trees and flapping of flags, for example—encode information regarding local wind conditions that can potentially be leveraged for visual anemometry that is inexpensive and ubiquitous. Here, we demonstrate a coupled convolutional neural network and recurrent neural network architecture that extracts the wind speed encoded in visually recorded flow-structure interactions of a flag and tree in naturally occurring wind. Predictions for wind speeds ranging from 0.75-11 m/s showed agreement with measurements from a cup anemometer on site, with a root-mean-squared error approaching the natural wind speed variability due to atmospheric turbulence. Generalizability of the network was demonstrated by successful prediction of wind speed based on recordings of other flags in the field and in a controlled wind tunnel test. Furthermore, physics-based scaling of the flapping dynamics accurately predicts the dependence of the network performance on the video frame rate and duration. |
Tasks | Weather Forecasting |
Published | 2019-12-01 |
URL | http://papers.nips.cc/paper/9078-seeing-the-wind-visual-wind-speed-prediction-with-a-coupled-convolutional-and-recurrent-neural-network |
http://papers.nips.cc/paper/9078-seeing-the-wind-visual-wind-speed-prediction-with-a-coupled-convolutional-and-recurrent-neural-network.pdf | |
PWC | https://paperswithcode.com/paper/seeing-the-wind-visual-wind-speed-prediction-1 |
Repo | |
Framework | |
Interpretable Convolutional Filter Pruning
Title | Interpretable Convolutional Filter Pruning |
Authors | Zhuwei Qin, Fuxun Yu, Chenchen Liu, Xiang Chen |
Abstract | The sophisticated structure of a Convolutional Neural Network (CNN) allows for outstanding performance, but at the cost of intensive computation. As significant redundancies are inevitably present in such a structure, many works have proposed pruning convolutional filters to reduce computation cost. Although extremely effective, most works rely only on quantitative characteristics of the convolutional filters and largely overlook the qualitative interpretation of each filter's specific functionality. In this work, we interpreted the functionality and redundancy of convolutional filters from different perspectives, and proposed a functionality-oriented filter pruning method. With extensive experimental results, we demonstrated the convolutional filters' qualitative significance regardless of magnitude, demonstrated significant neural network redundancy due to repetitive filter functions, and analyzed defects in filter functionality under an inappropriate retraining process. Such an interpretable pruning approach not only offers outstanding computation cost optimization over previous filter pruning methods, but also makes the filter pruning process itself interpretable. |
Tasks | |
Published | 2019-05-01 |
URL | https://openreview.net/forum?id=BJ4BVhRcYX |
https://openreview.net/pdf?id=BJ4BVhRcYX | |
PWC | https://paperswithcode.com/paper/interpretable-convolutional-filter-pruning-1 |
Repo | |
Framework | |
Improving Semantic Dependency Parsing with Syntactic Features
Title | Improving Semantic Dependency Parsing with Syntactic Features |
Authors | Robin Kurtz, Daniel Roxbo, Marco Kuhlmann |
Abstract | We extend a state-of-the-art deep neural architecture for semantic dependency parsing with features defined over syntactic dependency trees. Our empirical results show that only gold-standard syntactic information leads to consistent improvements in semantic parsing accuracy, and that the magnitude of these improvements varies with the specific combination of the syntactic and the semantic representation used. In contrast, automatically predicted syntax does not seem to help semantic parsing. Our error analysis suggests that there is a significant overlap between syntactic and semantic representations. |
Tasks | Dependency Parsing, Semantic Dependency Parsing, Semantic Parsing |
Published | 2019-09-01 |
URL | https://www.aclweb.org/anthology/W19-6202/ |
https://www.aclweb.org/anthology/W19-6202 | |
PWC | https://paperswithcode.com/paper/improving-semantic-dependency-parsing-with |
Repo | |
Framework | |
Computer Assisted Annotation of Tension Development in TED Talks through Crowdsourcing
Title | Computer Assisted Annotation of Tension Development in TED Talks through Crowdsourcing |
Authors | Seungwon Yoon, Wonsuk Yang, Jong Park |
Abstract | We propose a method of machine-assisted annotation for the identification of tension development, annotating whether the tension is increasing, decreasing, or staying unchanged. We use a neural network based prediction model, whose predicted results are given to the annotators as initial values for the options they are asked to choose from. By presenting such initial values to the annotators, the annotation task becomes an evaluation task in which the annotators inspect whether or not the predicted results are correct. To demonstrate the effectiveness of our method, we performed the annotation task in both in-house and crowdsourced environments. For the crowdsourced environment, we compared the annotation results with and without our method of machine-assisted annotation. We find that the results with our method showed higher agreement with the gold standard than those without, though our method had little effect on reducing annotation time. Our code for the experiments is made publicly available. |
Tasks | |
Published | 2019-11-01 |
URL | https://www.aclweb.org/anthology/D19-5906/ |
https://www.aclweb.org/anthology/D19-5906 | |
PWC | https://paperswithcode.com/paper/computer-assisted-annotation-of-tension |
Repo | |
Framework | |
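The abstract above evaluates annotations by their agreement with a gold standard. As a hedged illustration (the paper does not state which agreement statistic it used), Cohen's kappa is a standard chance-corrected measure one might compute between an annotator's labels and the gold labels; the label sequences below are made up:

```python
from collections import Counter

# Cohen's kappa: observed agreement corrected for the agreement expected by
# chance, given each rater's label distribution.
def cohens_kappa(a, b):
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n                  # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical tension labels: gold standard vs. one annotator.
gold      = ["up", "up", "down", "same", "down"]
annotator = ["up", "down", "down", "same", "down"]
print(round(cohens_kappa(gold, annotator), 3))  # -> 0.688
```

Observed agreement here is 4/5 = 0.8, chance agreement 0.36, so kappa is (0.8 − 0.36) / (1 − 0.36) = 0.6875, noticeably lower than raw agreement.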
FreebaseQA: A New Factoid QA Data Set Matching Trivia-Style Question-Answer Pairs with Freebase
Title | FreebaseQA: A New Factoid QA Data Set Matching Trivia-Style Question-Answer Pairs with Freebase |
Authors | Kelvin Jiang, Dekun Wu, Hui Jiang |
Abstract | In this paper, we present a new data set, named FreebaseQA, for open-domain factoid question answering (QA) tasks over structured knowledge bases, like Freebase. The data set is generated by matching trivia-type question-answer pairs with subject-predicate-object triples in Freebase. For each collected question-answer pair, we first tag all entities in the question and search for relevant predicates that bridge a tagged entity with the answer in Freebase. Finally, human annotation is used to remove any false positives among these matched triples. Using this method, we are able to efficiently generate over 54K matches from about 28K unique questions with minimal cost. Our analysis shows that this data set is suitable for model training in factoid QA tasks beyond simpler questions, since FreebaseQA provides more linguistically sophisticated questions than other existing data sets. |
Tasks | Question Answering |
Published | 2019-06-01 |
URL | https://www.aclweb.org/anthology/N19-1028/ |
https://www.aclweb.org/anthology/N19-1028 | |
PWC | https://paperswithcode.com/paper/freebaseqa-a-new-factoid-qa-data-set-matching |
Repo | |
Framework | |
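The matching step the FreebaseQA abstract describes, bridging a tagged question entity to the answer via a knowledge-base triple, can be sketched minimally as follows. This is an illustrative simplification, not the authors' pipeline: the toy triples are made up, and real entity linking and predicate search over Freebase are far more involved:

```python
# Sketch of QA-pair-to-triple matching: keep triples whose subject is a tagged
# question entity and whose object matches the answer (case-insensitively).
def match_triples(question_entities, answer, triples):
    return [
        (s, p, o) for (s, p, o) in triples
        if s in question_entities and o.lower() == answer.lower()
    ]

triples = [
    ("Paris", "capital_of", "France"),
    ("Paris", "located_in", "Europe"),
]
# Hypothetical pair: "Which country has Paris as its capital?" with answer "France".
print(match_triples({"Paris"}, "france", triples))
# -> [('Paris', 'capital_of', 'France')]
```

Matches produced this way are only candidates; as the abstract notes, human annotation is still needed to filter false positives (e.g., an answer string that coincidentally equals an unrelated object).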