January 24, 2020

2591 words 13 mins read

Paper Group NANR 248

SemEval-2019 Task 2: Unsupervised Lexical Frame Induction. Graph2Bots, Unsupervised Assistance for Designing Chatbots. Is It Dish Washer Safe? Automatically Answering "Yes/No" Questions Using Customer Reviews. The Tagged Corpus (SYN2010) as a Help and a Pitfall in the Word-formation Research. DyRep: Learning Representations over Dynamic Graphs. D …

SemEval-2019 Task 2: Unsupervised Lexical Frame Induction

Title SemEval-2019 Task 2: Unsupervised Lexical Frame Induction
Authors Behrang QasemiZadeh, Miriam R. L. Petruck, Regina Stodden, Laura Kallmeyer, Marie Candito
Abstract This paper presents Unsupervised Lexical Frame Induction, Task 2 of the International Workshop on Semantic Evaluation in 2019. Given a set of prespecified syntactic forms in context, the task requires that verbs and their arguments be clustered to resemble semantic frame structures. Results are useful in identifying polysemous words, i.e., those whose frame structures are not easily distinguished, as well as discerning semantic relations of the arguments. Evaluation of unsupervised frame induction methods fell into two tracks: Task A) Verb Clustering based on FrameNet 1.7; and B) Argument Clustering, with B.1) based on FrameNet's core frame elements, and B.2) on VerbNet 3.2 semantic roles. The shared task attracted nine teams, of whom three reported promising results. This paper describes the task and its data, reports on methods and resources that these systems used, and offers a comparison to human annotation.
Tasks
Published 2019-06-01
URL https://www.aclweb.org/anthology/S19-2003/
PDF https://www.aclweb.org/anthology/S19-2003
PWC https://paperswithcode.com/paper/semeval-2019-task-2-unsupervised-lexical
Repo
Framework
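
As a purely illustrative aside (not any participant's system), frame induction in the spirit of Task A can be approached by embedding each verb occurrence and clustering the vectors; the embeddings and cluster count below are placeholders.

```python
# Toy illustration of unsupervised verb clustering for frame induction.
# Not a participant system: embeddings and the cluster count are placeholders.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Pretend each row is a contextual vector for one verb occurrence
# (in practice these would come from a language model or co-occurrence counts).
verb_vectors = rng.normal(size=(200, 50))

kmeans = KMeans(n_clusters=20, n_init=10, random_state=0)
frame_ids = kmeans.fit_predict(verb_vectors)  # cluster id ~ induced frame

print(frame_ids[:10])
```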

Graph2Bots, Unsupervised Assistance for Designing Chatbots

Title Graph2Bots, Unsupervised Assistance for Designing Chatbots
Authors Jean-Leon Bouraoui, Sonia Le Meitour, Romain Carbou, Lina M. Rojas Barahona, Vincent Lemaire
Abstract We present Graph2Bots, a tool for assisting conversational agent designers. It extracts a graph representation from human-human conversations by using unsupervised learning. The generated graph contains the main stages of the dialogue and their inner transitions. The graphical user interface (GUI) then allows graph editing.
Tasks
Published 2019-09-01
URL https://www.aclweb.org/anthology/W19-5915/
PDF https://www.aclweb.org/anthology/W19-5915
PWC https://paperswithcode.com/paper/graph2bots-unsupervised-assistance-for
Repo
Framework
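
The abstract above describes extracting a graph of dialogue stages and their transitions from human-human conversations with unsupervised learning. Below is a minimal sketch of that general idea under assumptions of my own (clustering utterance vectors, then counting transitions between consecutive clusters); it is not the tool's actual pipeline.

```python
# Sketch: derive a dialogue-stage graph from conversations (illustrative only).
import numpy as np
from sklearn.cluster import KMeans
from collections import Counter

rng = np.random.default_rng(1)

# Each conversation = a sequence of utterance embeddings (placeholders here).
conversations = [rng.normal(size=(rng.integers(5, 15), 32)) for _ in range(10)]

# 1) Cluster all utterances into dialogue "stages".
all_utts = np.vstack(conversations)
stage_of = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(all_utts)

# 2) Count transitions between consecutive stages within each conversation.
edges = Counter()
offset = 0
for conv in conversations:
    stages = stage_of[offset:offset + len(conv)]
    offset += len(conv)
    for a, b in zip(stages[:-1], stages[1:]):
        edges[(int(a), int(b))] += 1

print(edges.most_common(5))  # most frequent stage-to-stage transitions
```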

Is It Dish Washer Safe? Automatically Answering "Yes/No" Questions Using Customer Reviews

Title Is It Dish Washer Safe? Automatically Answering "Yes/No" Questions Using Customer Reviews
Authors Daria Dzendzik, Carl Vogel, Jennifer Foster
Abstract It has become commonplace for people to share their opinions about all kinds of products by posting reviews online. It has also become commonplace for potential customers to do research about the quality and limitations of these products by posting questions online. We test the extent to which reviews are useful in question-answering by combining two Amazon datasets and focusing our attention on yes/no questions. A manual analysis of 400 cases reveals that the reviews directly contain the answer to the question just over a third of the time. Preliminary reading comprehension experiments with this dataset prove inconclusive, with accuracy in the range 50-66%.
Tasks Question Answering, Reading Comprehension
Published 2019-06-01
URL https://www.aclweb.org/anthology/N19-3001/
PDF https://www.aclweb.org/anthology/N19-3001
PWC https://paperswithcode.com/paper/is-it-dish-washer-safe-automatically
Repo
Framework
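
As a hedged illustration of the task setup (answering a yes/no question from review text), here is a trivial bag-of-words baseline; the data and features are invented, and this is not the authors' reading comprehension model.

```python
# Trivial yes/no baseline over customer reviews (illustrative, not the paper's model).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: (question + concatenated reviews, yes/no label).
texts = [
    "is it dishwasher safe || reviews: survived the dishwasher fine",
    "is it dishwasher safe || reviews: melted on the top rack, hand wash only",
    "does it fit a queen bed || reviews: fits our queen perfectly",
    "does it fit a queen bed || reviews: too small, barely covers a twin",
]
labels = [1, 0, 1, 0]  # 1 = yes, 0 = no

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

query = "is it dishwasher safe || reviews: no issues in the dishwasher"
print(clf.predict(vec.transform([query])))  # expected: [1]
```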

The Tagged Corpus (SYN2010) as a Help and a Pitfall in the Word-formation Research

Title The Tagged Corpus (SYN2010) as a Help and a Pitfall in the Word-formation Research
Authors Klára Osolsobě
Abstract
Tasks
Published 2019-09-01
URL https://www.aclweb.org/anthology/W19-8507/
PDF https://www.aclweb.org/anthology/W19-8507
PWC https://paperswithcode.com/paper/the-tagged-corpus-syn2010-as-a-help-and-a
Repo
Framework

DyRep: Learning Representations over Dynamic Graphs

Title DyRep: Learning Representations over Dynamic Graphs
Authors Rakshit Trivedi, Mehrdad Farajtabar, Prasenjeet Biswal, Hongyuan Zha
Abstract Representation learning over graph-structured data has received significant attention recently due to its ubiquitous applicability. However, most advancements have been made in static graph settings, while efforts to jointly learn the dynamics of the graph and dynamics on the graph are still in their infancy. Two fundamental questions arise in learning over dynamic graphs: (i) How to elegantly model dynamical processes over graphs? (ii) How to leverage such a model to effectively encode evolving graph information into low-dimensional representations? We present DyRep, a novel modeling framework for dynamic graphs that posits representation learning as a latent mediation process bridging two observed processes: dynamics of the network (realized as topological evolution) and dynamics on the network (realized as activities between nodes). Concretely, we propose a two-time-scale deep temporal point process model that captures the interleaved dynamics of the observed processes. This model is further parameterized by a temporal-attentive representation network that encodes temporally evolving structural information into node representations, which in turn drive the nonlinear evolution of the observed graph dynamics. Our unified framework has the inductive capability to generalize over unseen nodes, and we design an efficient unsupervised procedure for end-to-end training. We demonstrate that DyRep outperforms state-of-the-art baselines in quantitative analysis using dynamic link prediction and time prediction tasks. We further present extensive qualitative insights into our framework to discern the indispensable role of its various components.
Tasks Dynamic Link Prediction, Link Prediction, Representation Learning
Published 2019-05-01
URL https://openreview.net/forum?id=HyePrhR5KX
PDF https://openreview.net/pdf?id=HyePrhR5KX
PWC https://paperswithcode.com/paper/dyrep-learning-representations-over-dynamic
Repo
Framework
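
The abstract describes a temporal point process whose event intensities are driven by evolving node representations. Below is a small sketch of one such conditional-intensity computation; the softplus parameterization is an assumption commonly used for this kind of model, not code from DyRep.

```python
# Sketch of a temporal point-process intensity driven by node embeddings
# (the specific parameterization is assumed, not taken verbatim from DyRep).
import numpy as np

def intensity(z_u, z_v, w, psi):
    """Conditional intensity for an event between nodes u and v.

    z_u, z_v : current node representations
    w        : weight vector scoring the node pair
    psi      : positive scale parameter controlling the time scale
    """
    score = w @ np.concatenate([z_u, z_v])
    # softplus with scale psi keeps the intensity positive
    return psi * np.log1p(np.exp(score / psi))

rng = np.random.default_rng(2)
z_u, z_v = rng.normal(size=16), rng.normal(size=16)
w = rng.normal(size=32)
print(intensity(z_u, z_v, w, psi=1.0))
```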

Divergence Triangle for Joint Training of Generator Model, Energy-Based Model, and Inferential Model

Title Divergence Triangle for Joint Training of Generator Model, Energy-Based Model, and Inferential Model
Authors Tian Han, Erik Nijkamp, Xiaolin Fang, Mitch Hill, Song-Chun Zhu, Ying Nian Wu
Abstract This paper proposes the divergence triangle as a framework for joint training of a generator model, energy-based model and inference model. The divergence triangle is a compact and symmetric (anti-symmetric) objective function that seamlessly integrates variational learning, adversarial learning, wake-sleep algorithm, and contrastive divergence in a unified probabilistic formulation. This unification makes the processes of sampling, inference, and energy evaluation readily available without the need for costly Markov chain Monte Carlo methods. Our experiments demonstrate that the divergence triangle is capable of learning (1) an energy-based model with well-formed energy landscape, (2) direct sampling in the form of a generator network, and (3) feed-forward inference that faithfully reconstructs observed as well as synthesized data.
Tasks
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Han_Divergence_Triangle_for_Joint_Training_of_Generator_Model_Energy-Based_Model_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Han_Divergence_Triangle_for_Joint_Training_of_Generator_Model_Energy-Based_Model_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/divergence-triangle-for-joint-training-of-1
Repo
Framework

Integral Pruning on Activations and Weights for Efficient Neural Networks

Title Integral Pruning on Activations and Weights for Efficient Neural Networks
Authors Qing Yang, Wei Wen, Zuoguan Wang, Yiran Chen, Hai Li
Abstract With the rapid scaling up of deep neural networks (DNNs), extensive research on network model compression, such as weight pruning, has been performed for efficient deployment. This work aims to advance compression beyond the weights to the activations of DNNs. We propose the Integral Pruning (IP) technique, which integrates activation pruning with weight pruning. By learning the different importance of neuron responses and connections, the generated network, namely IPnet, balances sparsity between activations and weights and therefore further improves execution efficiency. The feasibility and effectiveness of IPnet are thoroughly evaluated through various network models with different activation functions and on different datasets. With <0.5% disturbance to the testing accuracy, IPnet saves 71.1%-96.35% of computation cost compared to the original dense models, with up to 5.8x and 10x reductions in activation and weight counts, respectively.
Tasks Model Compression
Published 2019-05-01
URL https://openreview.net/forum?id=HyevnsCqtQ
PDF https://openreview.net/pdf?id=HyevnsCqtQ
PWC https://paperswithcode.com/paper/integral-pruning-on-activations-and-weights
Repo
Framework
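
A minimal sketch of the two ingredients the abstract combines, magnitude-based weight pruning plus pruning of weak activations, applied to a single dense layer. The thresholds and shapes are placeholders, not the IPnet training recipe.

```python
# Sketch: magnitude-based weight pruning plus activation pruning on one layer.
# Thresholds and percentiles are placeholders, not the IPnet procedure.
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(256, 128))
x = rng.normal(size=128)

# Weight pruning: zero out the smallest-magnitude 80% of weights.
w_thresh = np.percentile(np.abs(W), 80)
W_pruned = np.where(np.abs(W) >= w_thresh, W, 0.0)

# Forward pass with ReLU.
a = np.maximum(W_pruned @ x, 0.0)

# Activation pruning: additionally drop small (weakly responding) activations.
a_thresh = np.percentile(a[a > 0], 50) if np.any(a > 0) else 0.0
a_pruned = np.where(a >= a_thresh, a, 0.0)

print("weight sparsity:", np.mean(W_pruned == 0), "activation sparsity:", np.mean(a_pruned == 0))
```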

Likelihood-based Permutation Invariant Loss Function for Probability Distributions

Title Likelihood-based Permutation Invariant Loss Function for Probability Distributions
Authors Masataro Asai
Abstract We propose a permutation-invariant loss function designed for neural networks that reconstruct a set of elements without considering the order within its vector representation. Unlike popular approaches for encoding and decoding a set, our work relies neither on a carefully engineered network topology nor on any additional sequential algorithm. The proposed method, Set Cross Entropy, has a natural information-theoretic interpretation and is related to the metrics defined for sets. We evaluate the proposed approach on two object reconstruction tasks and a rule learning task.
Tasks Object Reconstruction
Published 2019-05-01
URL https://openreview.net/forum?id=rJxpuoCqtQ
PDF https://openreview.net/pdf?id=rJxpuoCqtQ
PWC https://paperswithcode.com/paper/likelihood-based-permutation-invariant-loss
Repo
Framework
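
The sketch below implements one natural reading of a likelihood-based, permutation-invariant set reconstruction loss (each target element's likelihood is a logsumexp over the reconstructed elements). It is an assumed formulation in the spirit of the abstract, not necessarily the paper's exact Set Cross Entropy.

```python
# A permutation-invariant, likelihood-based set reconstruction loss (assumed
# formulation in the spirit of the abstract, not guaranteed to match the paper).
import numpy as np
from scipy.special import logsumexp

def set_reconstruction_loss(targets, preds, eps=1e-7):
    """targets: (N, D) binary vectors; preds: (M, D) predicted probabilities."""
    preds = np.clip(preds, eps, 1 - eps)
    # Bernoulli log-likelihood of target i under predicted element j, summed over D.
    ll = targets @ np.log(preds).T + (1 - targets) @ np.log(1 - preds).T  # (N, M)
    # Each target should be explained by *some* predicted element, order-free.
    return -np.mean(logsumexp(ll, axis=1))

rng = np.random.default_rng(4)
targets = rng.integers(0, 2, size=(5, 10)).astype(float)
preds = rng.random(size=(5, 10))
print(set_reconstruction_loss(targets, preds))
```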

Distribution is not enough: going Firther

Title Distribution is not enough: going Firther
Authors Andy Lücking, Robin Cooper, Staffan Larsson, Jonathan Ginzburg
Abstract Much work in contemporary computational semantics follows the distributional hypothesis (DH), which is understood as an approach to semantics according to which the meaning of a word is a function of its distribution over contexts, represented as vectors (word embeddings) within a multi-dimensional semantic space. In practice, use is identified with occurrence in text corpora, though there are some efforts to use corpora containing multi-modal information. In this paper we argue that the distributional hypothesis is intrinsically misguided as a self-supporting basis for semantics, as Firth was entirely aware. We mention philosophical arguments concerning the lack of normativity within DH data. Furthermore, we point out the shortcomings of DH as a model of learning, by discussing a variety of linguistic classes that cannot be learnt on a distributional basis, including indexicals, proper names, and wh-phrases. Instead of pursuing DH, we sketch an account of the problematic learning cases by integrating a rich, Firthian notion of dialogue context with interactive learning in signalling games backed by probabilistic Type Theory with Records. We conclude that the success of the DH in computational semantics rests on a post hoc effect: DS presupposes a referential semantics on the basis of which utterances can be produced, comprehended and analysed in the first place.
Tasks Word Embeddings
Published 2019-05-01
URL https://www.aclweb.org/anthology/W19-1101/
PDF https://www.aclweb.org/anthology/W19-1101
PWC https://paperswithcode.com/paper/distribution-is-not-enough-going-firther
Repo
Framework

Neural Inter-Frame Compression for Video Coding

Title Neural Inter-Frame Compression for Video Coding
Authors Abdelaziz Djelouah, Joaquim Campos, Simone Schaub-Meyer, Christopher Schroers
Abstract While there are many deep-learning-based approaches for single image compression, the field of end-to-end learned video coding has remained much less explored. Therefore, in this work we present an inter-frame compression approach for neural video coding that can seamlessly build upon different existing neural image codecs. Our end-to-end solution performs temporal prediction by optical-flow-based motion compensation in pixel space. The key insight is that we can increase both decoding efficiency and reconstruction quality by encoding the required information into a latent representation that directly decodes into motion and blending coefficients. In order to account for remaining prediction errors, residual information between the original image and the interpolated frame is needed. We propose to compute residuals directly in latent space instead of in pixel space, as this allows us to reuse the same image compression network for both key frames and intermediate frames. Our extended evaluation on different datasets and resolutions shows that the rate-distortion performance of our approach is competitive with existing state-of-the-art codecs.
Tasks Image Compression, Motion Compensation, Optical Flow Estimation
Published 2019-10-01
URL http://openaccess.thecvf.com/content_ICCV_2019/html/Djelouah_Neural_Inter-Frame_Compression_for_Video_Coding_ICCV_2019_paper.html
PDF http://openaccess.thecvf.com/content_ICCV_2019/papers/Djelouah_Neural_Inter-Frame_Compression_for_Video_Coding_ICCV_2019_paper.pdf
PWC https://paperswithcode.com/paper/neural-inter-frame-compression-for-video
Repo
Framework
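
The temporal-prediction step described above is optical-flow-based motion compensation in pixel space. Here is a minimal sketch of just that warping step using bilinear sampling (PyTorch assumed; the flow field is random and this is not the authors' codec).

```python
# Sketch of optical-flow-based motion compensation via bilinear warping.
# The flow field here is zero/random; this illustrates only the warping step.
import torch
import torch.nn.functional as F

def warp(frame, flow):
    """frame: (B, C, H, W); flow: (B, 2, H, W) displacement in pixels."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0)      # (1, 2, H, W)
    coords = grid + flow                                          # where to sample from
    # normalize to [-1, 1] for grid_sample, channels last
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    sample_grid = torch.stack((coords_x, coords_y), dim=-1)       # (B, H, W, 2)
    return F.grid_sample(frame, sample_grid, align_corners=True)

prev = torch.rand(1, 3, 64, 64)
flow = torch.zeros(1, 2, 64, 64)            # zero flow: prediction equals the previous frame
pred = warp(prev, flow)
residual = torch.rand(1, 3, 64, 64) - pred  # residual would be coded separately
print(pred.shape, residual.shape)
```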

A hybrid model for spatiotemporal forecasting of PM2.5 based on graph convolutional neural network and long short-term memory

Title A hybrid model for spatiotemporal forecasting of PM2.5 based on graph convolutional neural network and long short-term memory
Authors Yanlin Qi, Qi Li, Hamed Karimian, Di Liu
Abstract Increasing availability of data related to air quality from ground monitoring stations has provided the chance for data mining researchers to propose sophisticated models for predicting the concentrations of different air pollutants. In this paper, we propose a hybrid model based on deep learning methods that integrates Graph Convolutional networks and Long Short-Term Memory networks (GC-LSTM) to model and forecast the spatiotemporal variation of PM2.5 concentrations. Specifically, historical observations on different stations are constructed as spatiotemporal graph series, and historical air quality variables, meteorological factors, spatial terms and temporal attributes are defined as graph signals. To evaluate the performance of the GC-LSTM, we compared our results with several state-of-the-art methods over different time intervals. Based on the results, our GC-LSTM model achieved the best prediction performance. Moreover, evaluations of recall rate (68.45%) and false alarm rate (4.65%) (both at a threshold of 115 µg/m³), and a correlation coefficient R² of 0.72 for 72-hour predictions, also verify the feasibility of our proposed model. This methodology can be used for concentration forecasting of different air pollutants in the future.
Tasks
Published 2019-02-01
URL https://www.ncbi.nlm.nih.gov/pubmed/30743109
PDF https://www.ncbi.nlm.nih.gov/pubmed/30743109
PWC https://paperswithcode.com/paper/a-hybrid-model-for-spatiotemporal-forecasting
Repo
Framework
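
A compact sketch of the hybrid idea, a graph convolution over stations at each time step followed by an LSTM over time, assuming PyTorch. Layer sizes, the adjacency matrix, and the input features are placeholders rather than the paper's configuration.

```python
# Sketch of a graph-convolution + LSTM hybrid for station-level forecasting.
# Shapes, adjacency, and features are placeholders (not the paper's setup).
import torch
import torch.nn as nn

class GCLSTMSketch(nn.Module):
    def __init__(self, n_features, hidden):
        super().__init__()
        self.gc_weight = nn.Linear(n_features, hidden)   # shared graph-conv weights
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)                  # PM2.5 prediction per station

    def forward(self, x, a_hat):
        # x: (batch, time, stations, features); a_hat: normalized adjacency (stations, stations)
        b, t, n, f = x.shape
        h = torch.relu(torch.einsum("ij,btjf->btif", a_hat, self.gc_weight(x)))
        h = h.permute(0, 2, 1, 3).reshape(b * n, t, -1)  # one sequence per station
        seq, _ = self.lstm(h)
        return self.out(seq[:, -1]).view(b, n)           # forecast at the last step

model = GCLSTMSketch(n_features=8, hidden=32)
x = torch.rand(2, 24, 10, 8)   # 2 samples, 24 hours, 10 stations, 8 features
a_hat = torch.eye(10)          # placeholder normalized adjacency
print(model(x, a_hat).shape)   # torch.Size([2, 10])
```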

Learning to Order Graph Elements with Application to Multilingual Surface Realization

Title Learning to Order Graph Elements with Application to Multilingual Surface Realization
Authors Wenchao Du, Alan W Black
Abstract Recent advances in deep learning have shown promise in solving complex combinatorial optimization problems, such as sorting variable-sized sequences. In this work, we take a step further and tackle the problem of ordering the elements of sequences that come with graph structures. Our solution adopts an encoder-decoder framework, in which the encoder is a graph neural network that learns the representation for each element, and the decoder predicts the ordering of each local neighborhood of the graph in turn. We apply our framework to multilingual surface realization, which is the task of ordering and completing sentences whose dependency parses are given but whose word order is not. Experiments show that our approach is much better for this task than prior work that does not consider graph structures. We participated in the 2019 Surface Realization Shared Task (SR'19), and we ranked second out of 14 teams while outperforming the teams below by a large margin.
Tasks Combinatorial Optimization
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-6302/
PDF https://www.aclweb.org/anthology/D19-6302
PWC https://paperswithcode.com/paper/learning-to-order-graph-elements-with
Repo
Framework

An Improved Coarse-to-Fine Method for Solving Generation Tasks

Title An Improved Coarse-to-Fine Method for Solving Generation Tasks
Authors Wenyv Guan, Qianying Liu, Guangzhi Han, Bin Wang, Sujian Li
Abstract Coarse-to-fine (coarse2fine) methods have recently been widely used in generation tasks. These methods first generate a rough sketch in the coarse stage and then use the sketch to obtain the final result in the fine stage. However, they usually lack the ability to correct a wrong sketch. To solve this problem, we propose an improved coarse2fine model with a control mechanism, with which our method can control the influence of the sketch on the final results in the fine stage. Even if the sketch is wrong, our model still has the opportunity to produce a correct result. We have evaluated our model on the tasks of semantic parsing and math word problem solving. The results show the effectiveness of our proposed model.
Tasks Math Word Problem Solving, Semantic Parsing
Published 2019-04-01
URL https://www.aclweb.org/anthology/U19-1024/
PDF https://www.aclweb.org/anthology/U19-1024
PWC https://paperswithcode.com/paper/an-improved-coarse-to-fine-method-for-solving
Repo
Framework
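
The key addition in the abstract is a control mechanism that limits how much a possibly wrong sketch influences the fine stage. Below is a minimal, assumed illustration of such a gate (a learned gate mixing sketch features with input features); it is not the authors' architecture.

```python
# Sketch of a control gate that limits how much a (possibly wrong) coarse
# sketch influences the fine decoding stage. Illustrative, not the paper's model.
import torch
import torch.nn as nn

class SketchGate(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, input_repr, sketch_repr):
        # g in (0, 1): how much the fine stage should trust the sketch
        g = torch.sigmoid(self.gate(torch.cat([input_repr, sketch_repr], dim=-1)))
        return g * sketch_repr + (1 - g) * input_repr

gate = SketchGate(dim=64)
fused = gate(torch.rand(4, 64), torch.rand(4, 64))
print(fused.shape)  # torch.Size([4, 64])
```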

Inferring missing metadata from environmental policy texts

Title Inferring missing metadata from environmental policy texts
Authors Steven Bethard, Egoitz Laparra, Sophia Wang, Yiyun Zhao, Ragheb Al-Ghezi, Aaron Lien, Laura López-Hoffman
Abstract The National Environmental Policy Act (NEPA) provides a trove of data on how environmental policy decisions have been made in the United States over the last 50 years. Unfortunately, there is no central database for this information and it is too voluminous to assess manually. We describe our efforts to enable systematic research over US environmental policy by extracting and organizing metadata from the text of NEPA documents. Our contributions include collecting more than 40,000 NEPA-related documents, and evaluating rule-based baselines that establish the difficulty of three important tasks: identifying lead agencies, aligning document versions, and detecting reused text.
Tasks
Published 2019-06-01
URL https://www.aclweb.org/anthology/W19-2506/
PDF https://www.aclweb.org/anthology/W19-2506
PWC https://paperswithcode.com/paper/inferring-missing-metadata-from-environmental
Repo
Framework
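
The abstract mentions rule-based baselines for tasks such as identifying lead agencies. Here is a toy sketch of what one such rule might look like; the regular expression and agency list are invented for illustration and are not the authors' rules.

```python
# Toy rule-based extraction of a "lead agency" mention from policy text.
# The pattern and agency names are invented for illustration only.
import re

AGENCIES = ["Bureau of Land Management", "Forest Service", "National Park Service"]

def find_lead_agency(text):
    m = re.search(r"lead agency[:\s]+([A-Z][A-Za-z ]+)", text, flags=re.IGNORECASE)
    if m:
        candidate = m.group(1).strip()
        for agency in AGENCIES:
            if agency.lower() in candidate.lower():
                return agency
    return None

print(find_lead_agency("Lead Agency: Bureau of Land Management, Nevada State Office"))
```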

CUNY-PKU Parser at SemEval-2019 Task 1: Cross-Lingual Semantic Parsing with UCCA

Title CUNY-PKU Parser at SemEval-2019 Task 1: Cross-Lingual Semantic Parsing with UCCA
Authors Weimin Lyu, Sheng Huang, Abdul Rafae Khan, Shengqiang Zhang, Weiwei Sun, Jia Xu
Abstract This paper describes the systems of the CUNY-PKU team in SemEval 2019 Task 1: Cross-lingual Semantic Parsing with UCCA. We introduce a novel model by applying a cascaded MLP and BiLSTM model. Then, we ensemble multiple system outputs by reparsing. In particular, we introduce a new decoding algorithm for building the UCCA representation. Our system won first place in one track (French-20K-Open), second place in four tracks (English-Wiki-Open, English-20K-Open, German-20K-Open, and German-20K-Closed), and third place in one track (English-20K-Closed), among all seven tracks.
Tasks Semantic Parsing
Published 2019-06-01
URL https://www.aclweb.org/anthology/S19-2012/
PDF https://www.aclweb.org/anthology/S19-2012
PWC https://paperswithcode.com/paper/cuny-pku-parser-at-semeval-2019-task-1-cross
Repo
Framework