January 24, 2020


Paper Group NANR 229


Face Anti-Spoofing: Model Matters, so Does Data. Hello, It’s GPT-2 - How Can I Help You? Towards the Use of Pretrained Language Models for Task-Oriented Dialogue Systems. Implicit Surface Representations As Layers in Neural Networks. Lexical Representation & Retrieval on Monolingual Interpretative text production. An Algorithm to Learn Polytree Ne …

Face Anti-Spoofing: Model Matters, so Does Data

Title Face Anti-Spoofing: Model Matters, so Does Data
Authors Xiao Yang, Wenhan Luo, Linchao Bao, Yuan Gao, Dihong Gong, Shibao Zheng, Zhifeng Li, Wei Liu
Abstract Face anti-spoofing is an important task in full-stack face applications including face detection, verification, and recognition. Previous approaches build models on datasets which do not simulate the real-world data well (e.g., small scale, insignificant variance, etc.). Existing models may rely on auxiliary information, which prevents these anti-spoofing solutions from generalizing well in practice. In this paper, we present a data collection solution along with a data synthesis technique to simulate digital medium-based face spoofing attacks, which can easily help us obtain a large amount of training data well reflecting the real-world scenarios. Through exploiting a novel Spatio-Temporal Anti-Spoof Network (STASN), we are able to push the performance on public face anti-spoofing datasets over state-of-the-art methods by a large margin. Since the proposed model can automatically attend to discriminative regions, it makes analyzing the behaviors of the network possible. We conduct extensive experiments and show that the proposed model can distinguish spoof faces by extracting features from a variety of regions to seek out subtle evidence such as borders, moire patterns, reflection artifacts, etc.
Tasks Face Anti-Spoofing, Face Detection
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Yang_Face_Anti-Spoofing_Model_Matters_so_Does_Data_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Yang_Face_Anti-Spoofing_Model_Matters_so_Does_Data_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/face-anti-spoofing-model-matters-so-does-data
Repo
Framework

Hello, It’s GPT-2 - How Can I Help You? Towards the Use of Pretrained Language Models for Task-Oriented Dialogue Systems

Title Hello, It’s GPT-2 - How Can I Help You? Towards the Use of Pretrained Language Models for Task-Oriented Dialogue Systems
Authors Paweł Budzianowski, Ivan Vulić
Abstract Data scarcity is a long-standing and crucial challenge that hinders quick development of task-oriented dialogue systems across multiple domains: task-oriented dialogue models are expected to learn grammar, syntax, dialogue reasoning, decision making, and language generation from absurdly small amounts of task-specific data. In this paper, we demonstrate that recent progress in language modeling pre-training and transfer learning shows promise to overcome this problem. We propose a task-oriented dialogue model that operates solely on text input: it effectively bypasses explicit policy and language generation modules. Building on top of the TransferTransfo framework (Wolf et al., 2019) and generative model pre-training (Radford et al., 2019), we validate the approach on complex multi-domain task-oriented dialogues from the MultiWOZ dataset. Our automatic and human evaluations show that the proposed model is on par with a strong task-specific neural baseline. In the long run, our approach holds promise to mitigate the data scarcity problem, and to support the construction of more engaging and more eloquent task-oriented conversational agents.
Tasks Decision Making, Language Modelling, Task-Oriented Dialogue Systems, Text Generation, Transfer Learning
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-5602/
PDF https://www.aclweb.org/anthology/D19-5602
PWC https://paperswithcode.com/paper/hello-its-gpt-2-how-can-i-help-you-towards-1
Repo
Framework

Implicit Surface Representations As Layers in Neural Networks

Title Implicit Surface Representations As Layers in Neural Networks
Authors Mateusz Michalkiewicz, Jhony K. Pontes, Dominic Jack, Mahsa Baktashmotlagh, Anders Eriksson
Abstract Implicit shape representations, such as Level Sets, provide a very elegant formulation for performing computations involving curves and surfaces. However, including implicit representations into canonical Neural Network formulations is far from straightforward. This has consequently restricted existing approaches to shape inference to significantly less effective representations, perhaps most commonly voxel occupancy maps or sparse point clouds. To overcome this limitation we propose a novel formulation that permits the use of implicit representations of curves and surfaces, of arbitrary topology, as individual layers in Neural Network architectures with end-to-end trainability. Specifically, we propose to represent the output as an oriented level set of a continuous and discretised embedding function. We investigate the benefits of our approach on the task of 3D shape prediction from a single image, and demonstrate its ability to produce a more accurate reconstruction compared to voxel-based representations. We further show that our model is flexible and can be applied to a variety of shape inference problems.
Tasks
Published 2019-10-01
URL http://openaccess.thecvf.com/content_ICCV_2019/html/Michalkiewicz_Implicit_Surface_Representations_As_Layers_in_Neural_Networks_ICCV_2019_paper.html
PDF http://openaccess.thecvf.com/content_ICCV_2019/papers/Michalkiewicz_Implicit_Surface_Representations_As_Layers_in_Neural_Networks_ICCV_2019_paper.pdf
PWC https://paperswithcode.com/paper/implicit-surface-representations-as-layers-in
Repo
Framework
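
The level-set idea at the core of this paper can be illustrated with a toy example (a minimal sketch, not the authors' code): a function f(x) whose zero level set {x : f(x) = 0} is the surface. Here f is the signed distance to a unit sphere sampled on a grid; the grid resolution and sphere are purely illustrative.

```python
import numpy as np

# Toy illustration of an implicit (level-set) surface representation:
# a function f(x) whose zero level set {x : f(x) = 0} is the surface.
# Here f is the signed distance to a unit sphere, sampled on a grid.

def sphere_sdf(points, radius=1.0):
    """Signed distance from each point to a sphere centred at the origin."""
    return np.linalg.norm(points, axis=-1) - radius

# Sample a 3D grid covering [-1.5, 1.5]^3.
axis = np.linspace(-1.5, 1.5, 32)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
sdf = sphere_sdf(grid)

# Inside the sphere the signed distance is negative, outside positive;
# a voxel occupancy map is just a thresholded level set.
occupancy = sdf < 0.0
print(occupancy.mean())  # fraction of grid cells inside the sphere
```

A network trained with such a representation predicts the embedding function rather than the occupancy grid, so the surface can be extracted at any resolution.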

Lexical Representation & Retrieval on Monolingual Interpretative text production

Title Lexical Representation & Retrieval on Monolingual Interpretative text production
Authors Debasish Sahoo, Michael Carl
Abstract
Tasks
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-7007/
PDF https://www.aclweb.org/anthology/W19-7007
PWC https://paperswithcode.com/paper/lexical-representation-retrieval-on
Repo
Framework

An Algorithm to Learn Polytree Networks with Hidden Nodes

Title An Algorithm to Learn Polytree Networks with Hidden Nodes
Authors Firoozeh Sepehr, Donatello Materassi
Abstract Ancestral graphs are a prevalent mathematical tool to take into account latent (hidden) variables in a probabilistic graphical model. In ancestral graph representations, the nodes are only the observed (manifest) variables and the notion of m-separation fully characterizes the conditional independence relations among such variables, bypassing the need to explicitly consider latent variables. However, ancestral graph models do not necessarily represent the actual causal structure of the model, and do not contain information about, for example, the precise number and location of the hidden variables. Being able to detect the presence of latent variables while also inferring their precise location within the actual causal structure model is a more challenging task that provides more information about the actual causal relationships among all the model variables, including the latent ones. In this article, we develop an algorithm to exactly recover graphical models of random variables with underlying polytree structures when the latent nodes satisfy specific degree conditions. Therefore, this article proposes an approach for the full identification of hidden variables in a polytree. We also show that the algorithm is complete in the sense that when such degree conditions are not met, there exists another polytree with fewer latent nodes satisfying the degree conditions and entailing the same independence relations among the observed variables, making it indistinguishable from the actual polytree.
Tasks
Published 2019-12-01
URL http://papers.nips.cc/paper/9647-an-algorithm-to-learn-polytree-networks-with-hidden-nodes
PDF http://papers.nips.cc/paper/9647-an-algorithm-to-learn-polytree-networks-with-hidden-nodes.pdf
PWC https://paperswithcode.com/paper/an-algorithm-to-learn-polytree-networks-with
Repo
Framework

Towards Bridging Semantic Gap to Improve Semantic Segmentation

Title Towards Bridging Semantic Gap to Improve Semantic Segmentation
Authors Yanwei Pang, Yazhao Li, Jianbing Shen, Ling Shao
Abstract Aggregating multi-level features is essential for capturing multi-scale context information for precise scene semantic segmentation. However, the improvement by directly fusing shallow features and deep features becomes limited as the semantic gap between them increases. To solve this problem, we explore two strategies for robust feature fusion. One is enhancing shallow features using a semantic enhancement module (SeEM) to alleviate the semantic gap between shallow features and deep features. The other strategy is feature attention, which involves discovering complementary information (i.e., boundary information) from low-level features to enhance high-level features for precise segmentation. By embedding these two strategies, we construct a parallel feature pyramid towards improving multi-level feature fusion. A Semantic Enhanced Network called SeENet is constructed with the parallel pyramid to implement precise segmentation. Experiments on three benchmark datasets demonstrate the effectiveness of our method for robust multi-level feature aggregation. As a result, our SeENet has achieved better performance than other state-of-the-art methods for semantic segmentation.
Tasks Semantic Segmentation
Published 2019-10-01
URL http://openaccess.thecvf.com/content_ICCV_2019/html/Pang_Towards_Bridging_Semantic_Gap_to_Improve_Semantic_Segmentation_ICCV_2019_paper.html
PDF http://openaccess.thecvf.com/content_ICCV_2019/papers/Pang_Towards_Bridging_Semantic_Gap_to_Improve_Semantic_Segmentation_ICCV_2019_paper.pdf
PWC https://paperswithcode.com/paper/towards-bridging-semantic-gap-to-improve
Repo
Framework
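
The multi-level fusion the abstract describes can be sketched with a toy example (illustrative only, not SeENet itself; all shapes and the projection matrix are hypothetical): a deep, low-resolution feature map is upsampled, projected to the shallow map's channel count, and added to the shallow, high-resolution features.

```python
import numpy as np

# Illustrative sketch of shallow/deep feature fusion (not SeENet itself):
# upsample the deep, low-resolution map, project its channels, and add it
# to the shallow, high-resolution map.

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

rng = np.random.default_rng(1)
shallow = rng.normal(size=(16, 8, 8))   # high-res, low-level features
deep = rng.normal(size=(32, 4, 4))      # low-res, high-level features

# 1x1-convolution-like projection to match channel counts (16 <- 32).
proj = rng.normal(size=(16, 32)) * 0.1
fused = np.einsum("cd,dhw->chw", proj, upsample2x(deep)) + shallow

print(fused.shape)  # same resolution as the shallow features
```

The paper's contribution is precisely in making this fusion robust when the semantic gap between the two maps is large, via the SeEM and feature-attention strategies.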

Studying Generalisability across Abusive Language Detection Datasets

Title Studying Generalisability across Abusive Language Detection Datasets
Authors Steve Durairaj Swamy, Anupam Jamatia, Björn Gambäck
Abstract Work on Abusive Language Detection has tackled a wide range of subtasks and domains. As a result of this, there exists a great deal of redundancy and non-generalisability between datasets. Through experiments on cross-dataset training and testing, the paper reveals that the preconceived notion of including more non-abusive samples in a dataset (to emulate reality) may have a detrimental effect on the generalisability of a model trained on that data. Hence a hierarchical annotation model is utilised here to reveal redundancies in existing datasets and to help reduce redundancy in future efforts.
Tasks
Published 2019-11-01
URL https://www.aclweb.org/anthology/K19-1088/
PDF https://www.aclweb.org/anthology/K19-1088
PWC https://paperswithcode.com/paper/studying-generalisability-across-abusive
Repo
Framework

Experimenting with Power Divergences for Language Modeling

Title Experimenting with Power Divergences for Language Modeling
Authors Matthieu Labeau, Shay B. Cohen
Abstract Neural language models are usually trained using Maximum-Likelihood Estimation (MLE). The corresponding objective function for MLE is derived from the Kullback-Leibler (KL) divergence between the empirical probability distribution representing the data and the parametric probability distribution output by the model. However, the word frequency discrepancies in natural language make performance extremely uneven: while the perplexity is usually very low for frequent words, it is especially difficult to predict rare words. In this paper, we experiment with several families (alpha, beta and gamma) of power divergences, generalized from the KL divergence, for learning language models with an objective different than standard MLE. Intuitively, these divergences should affect the way the probability mass is spread during learning, notably by prioritizing performances on high or low-frequency words. In addition, we implement and experiment with various sampling-based objectives, where the computation of the output layer is only done on a small subset of the vocabulary. They are derived as power generalizations of a softmax approximated via Importance Sampling, and Noise Contrastive Estimation, for accelerated learning. Our experiments on the Penn Treebank and Wikitext-2 show that these power divergences can indeed be used to prioritize learning on the frequent or rare words, and lead to general performance improvements in the case of sampling-based learning.
Tasks Language Modelling
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-1421/
PDF https://www.aclweb.org/anthology/D19-1421
PWC https://paperswithcode.com/paper/experimenting-with-power-divergences-for
Repo
Framework
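
The divergence families the abstract mentions can be made concrete with a small numerical sketch (an assumption on my part: this uses Amari's alpha-divergence, one standard member of those families, not necessarily the paper's exact parameterization). As alpha approaches 1, the alpha-divergence recovers KL(p || q), which is the implicit objective of MLE training.

```python
import numpy as np

# Sketch: the alpha family of power divergences generalizes KL;
# as alpha -> 1 it recovers KL(p || q).

def kl_divergence(p, q):
    return float(np.sum(p * np.log(p / q)))

def alpha_divergence(p, q, alpha):
    """Amari alpha-divergence between discrete distributions p and q."""
    return float((np.sum(p**alpha * q**(1.0 - alpha)) - 1.0)
                 / (alpha * (alpha - 1.0)))

p = np.array([0.7, 0.2, 0.1])   # e.g. empirical word distribution
q = np.array([0.5, 0.3, 0.2])   # e.g. model distribution

print(kl_divergence(p, q))
print(alpha_divergence(p, q, alpha=1.001))  # close to the KL value
print(alpha_divergence(p, q, alpha=0.5))    # spreads mass differently
```

Changing alpha changes how heavily mismatches on low-probability (rare) versus high-probability (frequent) outcomes are penalized, which is the lever the paper exploits.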

Copula and Case-Stacking Annotations for Korean AMR

Title Copula and Case-Stacking Annotations for Korean AMR
Authors Hyonsu Choe, Jiyoon Han, Hyejin Park, Hansaem Kim
Abstract This paper concerns the application of Abstract Meaning Representation (AMR) to Korean. In this regard, it focuses on the copula construction and its negation and the case-stacking phenomenon thereof. To illustrate this clearly, we reviewed the :domain annotation scheme from various perspectives. In this process, the existing annotation guidelines were improved to devise annotation schemes for each issue under the principle of pursuing consistency and efficiency of annotation without distorting the characteristics of Korean.
Tasks
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-3314/
PDF https://www.aclweb.org/anthology/W19-3314
PWC https://paperswithcode.com/paper/copula-and-case-stacking-annotations-for
Repo
Framework

Exploring Diverse Expressions for Paraphrase Generation

Title Exploring Diverse Expressions for Paraphrase Generation
Authors Lihua Qian, Lin Qiu, Weinan Zhang, Xin Jiang, Yong Yu
Abstract Paraphrasing plays an important role in various natural language processing (NLP) tasks, such as question answering, information retrieval and sentence simplification. Recently, neural generative models have shown promising results in paraphrase generation. However, prior work mainly focused on single paraphrase generation, while ignoring the fact that diversity is essential for enhancing generalization capability and robustness of downstream applications. Few works have been done to solve diverse paraphrase generation. In this paper, we propose a novel approach with two discriminators and multiple generators to generate a variety of different paraphrases. A reinforcement learning algorithm is applied to train our model. Our experiments on two real-world datasets demonstrate that our model not only gains a significant increase in diversity but also improves generation quality over several state-of-the-art baselines.
Tasks Information Retrieval, Paraphrase Generation, Question Answering
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-1313/
PDF https://www.aclweb.org/anthology/D19-1313
PWC https://paperswithcode.com/paper/exploring-diverse-expressions-for-paraphrase
Repo
Framework

Verb-Second Effect on Quantifier Scope Interpretation

Title Verb-Second Effect on Quantifier Scope Interpretation
Authors Asad Sayeed, Matthias Lindemann, Vera Demberg
Abstract Sentences like “Every child climbed a tree” have at least two interpretations depending on the precedence order of the universal quantifier and the indefinite. Previous experimental work explores the role that different mechanisms such as semantic reanalysis and world knowledge may have in enabling each interpretation. This paper discusses a web-based task that uses the verb-second characteristic of German main clauses to estimate the influence of word order variation over world knowledge.
Tasks
Published 2019-06-01
URL https://www.aclweb.org/anthology/W19-2915/
PDF https://www.aclweb.org/anthology/W19-2915
PWC https://paperswithcode.com/paper/verb-second-effect-on-quantifier-scope
Repo
Framework

Inject Rubrics into Short Answer Grading System

Title Inject Rubrics into Short Answer Grading System
Authors Tianqi Wang, Naoya Inoue, Hiroki Ouchi, Tomoya Mizumoto, Kentaro Inui
Abstract Short Answer Grading (SAG) is a task of scoring students' answers in examinations. Most existing SAG systems predict scores based only on the answers, including the model used as baseline in this paper, which gives state-of-the-art performance. But they ignore important evaluation criteria such as rubrics, which play a crucial role for evaluating answers in real-world situations. In this paper, we present a method to inject information from rubrics into SAG systems. We implement our approach on top of a word-level attention mechanism to introduce the rubric information, in order to locate information in each answer that is highly related to the score. Our experimental results demonstrate that injecting rubric information effectively contributes to the performance improvement and that our proposed model outperforms the state-of-the-art SAG model on the widely used ASAP-SAS dataset under low-resource settings.
Tasks
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-6119/
PDF https://www.aclweb.org/anthology/D19-6119
PWC https://paperswithcode.com/paper/inject-rubrics-into-short-answer-grading
Repo
Framework
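
The word-level attention mechanism described above can be sketched as follows (a hypothetical toy version: the embeddings, dimensions, and the single pooled rubric vector are illustrative, not the paper's model): each answer word is scored by its similarity to the rubric representation, and the answer is pooled with those attention weights.

```python
import numpy as np

# Hypothetical sketch of word-level attention over rubric information:
# score each answer word by similarity to a rubric embedding, then pool
# the answer representation with the resulting attention weights.

rng = np.random.default_rng(0)
dim = 8
answer_words = rng.normal(size=(5, dim))   # 5 answer-word embeddings
rubric_vec = rng.normal(size=(dim,))       # pooled rubric embedding

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Attention weights: dot-product similarity between each word and rubric.
weights = softmax(answer_words @ rubric_vec)

# Rubric-aware answer representation: weighted average of word vectors.
answer_repr = weights @ answer_words

print(weights)           # sums to 1; larger on rubric-related words
print(answer_repr.shape)
```

A scoring head on top of `answer_repr` would then see an answer representation biased toward the words the rubric cares about.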

Through-Wall Human Mesh Recovery Using Radio Signals

Title Through-Wall Human Mesh Recovery Using Radio Signals
Authors Mingmin Zhao, Yingcheng Liu, Aniruddh Raghu, Tianhong Li, Hang Zhao, Antonio Torralba, Dina Katabi
Abstract This paper presents RF-Avatar, a neural network model that can estimate 3D meshes of the human body in the presence of occlusions, baggy clothes, and bad lighting conditions. We leverage that radio frequency (RF) signals in the WiFi range traverse clothes and occlusions and bounce off the human body. Our model parses such radio signals and recovers 3D body meshes. Our meshes are dynamic and smoothly track the movements of the corresponding people. Further, our model works both in single and multi-person scenarios. Inferring body meshes from radio signals is a highly under-constrained problem. Our model deals with this challenge using: 1) a combination of strong and weak supervision, 2) a multi-headed self-attention mechanism that attends differently to temporal information in the radio signal, and 3) an adversarially trained temporal discriminator that imposes a prior on the dynamics of human motion. Our results show that RF-Avatar accurately recovers dynamic 3D meshes in the presence of occlusions, baggy clothes, bad lighting conditions, and even through walls.
Tasks
Published 2019-10-01
URL http://openaccess.thecvf.com/content_ICCV_2019/html/Zhao_Through-Wall_Human_Mesh_Recovery_Using_Radio_Signals_ICCV_2019_paper.html
PDF http://openaccess.thecvf.com/content_ICCV_2019/papers/Zhao_Through-Wall_Human_Mesh_Recovery_Using_Radio_Signals_ICCV_2019_paper.pdf
PWC https://paperswithcode.com/paper/through-wall-human-mesh-recovery-using-radio
Repo
Framework

CLaC at CLPsych 2019: Fusion of Neural Features and Predicted Class Probabilities for Suicide Risk Assessment Based on Online Posts

Title CLaC at CLPsych 2019: Fusion of Neural Features and Predicted Class Probabilities for Suicide Risk Assessment Based on Online Posts
Authors Elham Mohammadi, Hessam Amini, Leila Kosseim
Abstract This paper summarizes our participation in the CLPsych 2019 shared task, under the name CLaC. The goal of the shared task was to detect and assess suicide risk based on a collection of online posts. For our participation, we used an ensemble method which utilizes 8 neural sub-models to extract neural features and predict class probabilities, which are then used by an SVM classifier. Our team ranked first in 2 out of the 3 tasks (tasks A and C).
Tasks
Published 2019-06-01
URL https://www.aclweb.org/anthology/W19-3004/
PDF https://www.aclweb.org/anthology/W19-3004
PWC https://paperswithcode.com/paper/clac-at-clpsych-2019-fusion-of-neural
Repo
Framework

A Dataset for Semantic Role Labelling of Hindi-English Code-Mixed Tweets

Title A Dataset for Semantic Role Labelling of Hindi-English Code-Mixed Tweets
Authors Riya Pal, Dipti Sharma
Abstract We present a data set of 1460 Hindi-English code-mixed tweets consisting of 20,949 tokens labelled with Proposition Bank labels marking their semantic roles. We created verb frames for complex predicates present in the corpus and formulated mappings from Paninian dependency labels to Proposition Bank labels. With the help of these mappings and the dependency tree, we propose a baseline rule based system for Semantic Role Labelling of Hindi-English code-mixed data. We obtain an accuracy of 96.74% for Argument Identification and are able to further classify 73.93% of the labels correctly. While there is relevant ongoing research on Semantic Role Labelling and on building tools for code-mixed social media data, this is the first attempt at labelling semantic roles in code-mixed data, to the best of our knowledge.
Tasks
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-4020/
PDF https://www.aclweb.org/anthology/W19-4020
PWC https://paperswithcode.com/paper/a-dataset-for-semantic-role-labelling-of
Repo
Framework