October 15, 2019

2153 words 11 mins read

Paper Group NANR 223

Analyzing Vocabulary Commonality Index Using Large-scaled Database of Child Language Development. An Approach to Measuring Complexity with a Fuzzy Grammar & Degrees of Grammaticality. A Bayesian Approach to Generative Adversarial Imitation Learning. The DLDP Survey on Digital Use and Usability of EU Regional and Minority Languages. Empirical Study …

Analyzing Vocabulary Commonality Index Using Large-scaled Database of Child Language Development

Title Analyzing Vocabulary Commonality Index Using Large-scaled Database of Child Language Development
Authors Yan Cao, Yasuhiro Minami, Yuko Okumura, Tessei Kobayashi
Abstract
Tasks
Published 2018-05-01
URL https://www.aclweb.org/anthology/L18-1642/
PDF https://www.aclweb.org/anthology/L18-1642
PWC https://paperswithcode.com/paper/analyzing-vocabulary-commonality-index-using
Repo
Framework

An Approach to Measuring Complexity with a Fuzzy Grammar & Degrees of Grammaticality

Title An Approach to Measuring Complexity with a Fuzzy Grammar & Degrees of Grammaticality
Authors Adrià Torrens Urrutia
Abstract This paper presents an approach to evaluating the complexity of a given natural language input by means of a Fuzzy Grammar with some fuzzy logic formulations. Usually, approaches in linguistics have described a natural language grammar in discrete terms. However, a grammar can also be explained in terms of degrees, following the concepts of linguistic gradience and fuzziness. Understanding a grammar as a fuzzy or gradient object allows us to establish degrees of grammaticality for every linguistic input. This is meaningful for linguistic complexity, considering that the less grammatical an input is, the more complex its processing will be. In this regard, the degree of complexity of a linguistic input (a linguistic representation of a natural language expression) depends on the chosen grammar. The bases of the fuzzy grammar are shown here; some of them are described by Fuzzy Type Theory. The linguistic inputs are characterized by constraints through a Property Grammar.
Tasks
Published 2018-08-01
URL https://www.aclweb.org/anthology/W18-4607/
PDF https://www.aclweb.org/anthology/W18-4607
PWC https://paperswithcode.com/paper/an-approach-to-measuring-complexity-with-a
Repo
Framework
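
The idea above can be sketched in a few lines: treat the grammar as a set of weighted fuzzy constraints, let each violated constraint lower the input's degree of grammaticality, and read complexity off as its complement. The constraint names and weights below are hypothetical illustrations, not taken from the paper.

```python
def degree_of_grammaticality(violations, weights):
    """Fuzzy degree in [0, 1]: 1.0 means fully grammatical."""
    penalty = sum(weights[c] for c in violations)
    total = sum(weights.values())
    return max(0.0, 1.0 - penalty / total)

def complexity(violations, weights):
    # The less grammatical an input is, the more complex its processing.
    return 1.0 - degree_of_grammaticality(violations, weights)

# Hypothetical fuzzy constraints with weights (not from the paper).
weights = {"agreement": 0.5, "word_order": 0.3, "selection": 0.2}
print(round(degree_of_grammaticality({"word_order"}, weights), 2))  # 0.7
print(round(complexity({"word_order"}, weights), 2))                # 0.3
```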

A Bayesian Approach to Generative Adversarial Imitation Learning

Title A Bayesian Approach to Generative Adversarial Imitation Learning
Authors Wonseok Jeon, Seokin Seo, Kee-Eung Kim
Abstract Generative adversarial training for imitation learning has shown promising results on high-dimensional and continuous control tasks. This paradigm is based on reducing the imitation learning problem to a density matching problem, where the agent iteratively refines the policy to match the empirical state-action visitation frequency of the expert demonstration. Although this approach has been shown to robustly learn to imitate even with scarce demonstrations, one must still address the inherent challenge that collecting trajectory samples in each iteration is a costly operation. To address this issue, we first propose a Bayesian formulation of generative adversarial imitation learning (GAIL), where the imitation policy and the cost function are represented as stochastic neural networks. Then, we show that we can significantly enhance the sample efficiency of GAIL by leveraging the predictive density of the cost, on an extensive set of imitation learning tasks with high-dimensional states and actions.
Tasks Continuous Control, Imitation Learning
Published 2018-12-01
URL http://papers.nips.cc/paper/7972-a-bayesian-approach-to-generative-adversarial-imitation-learning
PDF http://papers.nips.cc/paper/7972-a-bayesian-approach-to-generative-adversarial-imitation-learning.pdf
PWC https://paperswithcode.com/paper/a-bayesian-approach-to-generative-adversarial
Repo
Framework
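
The density-matching reduction that GAIL builds on can be illustrated with a toy sketch (standard GAIL, not the paper's Bayesian extension): a logistic discriminator D is trained to separate expert state-action pairs from the learner's, and -log D(s, a) serves as the cost signal the learner would minimize. All data below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
expert = rng.normal(loc=1.0, size=(200, 2))    # synthetic expert (s, a) pairs
learner = rng.normal(loc=-1.0, size=(200, 2))  # synthetic learner (s, a) pairs

X = np.vstack([expert, learner])
y = np.concatenate([np.ones(200), np.zeros(200)])  # 1 = expert

# Train a logistic discriminator by plain gradient ascent on the log-likelihood.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w += 0.5 * (X.T @ (y - p)) / len(y)
    b += 0.5 * np.mean(y - p)

def cost(sa):
    """GAIL-style cost: low where the state-action pair looks expert-like."""
    d = 1.0 / (1.0 + np.exp(-(sa @ w + b)))
    return -np.log(d + 1e-8)

# Expert-like pairs incur lower cost than learner-like ones.
print(cost(np.array([1.0, 1.0])) < cost(np.array([-1.0, -1.0])))  # True
```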

The DLDP Survey on Digital Use and Usability of EU Regional and Minority Languages

Title The DLDP Survey on Digital Use and Usability of EU Regional and Minority Languages
Authors Claudia Soria, Valeria Quochi, Irene Russo
Abstract
Tasks
Published 2018-05-01
URL https://www.aclweb.org/anthology/L18-1656/
PDF https://www.aclweb.org/anthology/L18-1656
PWC https://paperswithcode.com/paper/the-dldp-survey-on-digital-use-and-usability
Repo
Framework

Empirical Study of the Topology and Geometry of Deep Networks

Title Empirical Study of the Topology and Geometry of Deep Networks
Authors Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard, Stefano Soatto
Abstract The goal of this paper is to analyze the geometric properties of deep neural network image classifiers in the input space. We specifically study the topology of classification regions created by deep networks, as well as their associated decision boundary. Through a systematic empirical study, we show that state-of-the-art deep nets learn connected classification regions, and that the decision boundary in the vicinity of datapoints is flat along most directions. We further draw an essential connection between two seemingly unrelated properties of deep networks: their sensitivity to additive perturbations of the inputs, and the curvature of their decision boundary. The directions where the decision boundary is curved in fact characterize the directions to which the classifier is the most vulnerable. We finally leverage a fundamental asymmetry in the curvature of the decision boundary of deep nets, and propose a method to discriminate between original images, and images perturbed with small adversarial examples. We show the effectiveness of this purely geometric approach for detecting small adversarial perturbations in images, and for recovering the labels of perturbed images.
Tasks
Published 2018-06-01
URL http://openaccess.thecvf.com/content_cvpr_2018/html/Fawzi_Empirical_Study_of_CVPR_2018_paper.html
PDF http://openaccess.thecvf.com/content_cvpr_2018/papers/Fawzi_Empirical_Study_of_CVPR_2018_paper.pdf
PWC https://paperswithcode.com/paper/empirical-study-of-the-topology-and-geometry
Repo
Framework
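
The empirical probe behind these findings can be sketched as follows: starting from a datapoint, search along random directions for the smallest step that flips the classifier's decision; the distribution of these crossing distances reveals how flat the boundary is. The toy linear classifier below is an assumption for illustration (its boundary is exactly flat); the probe itself treats the predictor as a black box.

```python
import numpy as np

rng = np.random.default_rng(1)
w, b = np.array([1.0, 0.0]), 0.0       # toy classifier: sign(w.x + b)

def predict(x):
    return np.sign(w @ x + b)

x = np.array([2.0, 0.0])               # datapoint at distance 2 from the boundary
label = predict(x)

def flip_distance(x, v, r_max=100.0, steps=40):
    """Bisect for the smallest r such that predict(x + r*v) != predict(x)."""
    if predict(x + r_max * v) == label:
        return np.inf                  # boundary not crossed within r_max
    lo, hi = 0.0, r_max
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if predict(x + mid * v) == label:
            lo = mid
        else:
            hi = mid
    return hi

dists = []
for _ in range(200):
    v = rng.normal(size=2)
    v /= np.linalg.norm(v)             # random unit direction
    dists.append(flip_distance(x, v))
finite = [d for d in dists if np.isfinite(d)]
# For a flat boundary at distance 2, every crossing distance is at least 2.
print(min(finite) >= 2.0 - 1e-6)  # True
```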

Predicting Nods by using Dialogue Acts in Dialogue

Title Predicting Nods by using Dialogue Acts in Dialogue
Authors Ryo Ishii, Ryuichiro Higashinaka, Junji Tomita
Abstract
Tasks
Published 2018-05-01
URL https://www.aclweb.org/anthology/L18-1465/
PDF https://www.aclweb.org/anthology/L18-1465
PWC https://paperswithcode.com/paper/predicting-nods-by-using-dialogue-acts-in
Repo
Framework

Joint 3D tracking of a deformable object in interaction with a hand

Title Joint 3D tracking of a deformable object in interaction with a hand
Authors Aggeliki Tsoli, Antonis A. Argyros
Abstract We present a novel method that is able to track a complex deformable object in interaction with a hand. This is achieved by formulating and solving an optimization problem that jointly considers the hand, the deformable object and the hand/object contact points. The optimization evaluates several hand/object contact configuration hypotheses and adopts the one that results in the best fit of the object’s model to the available RGBD observations in the vicinity of the hand. Thus, the hand is not treated as a distractor that occludes parts of the deformable object, but as a source of valuable information. Experimental results on a dataset that has been developed specifically for this new problem illustrate the superior performance of the proposed approach against relevant, state-of-the-art solutions.
Tasks
Published 2018-09-01
URL http://openaccess.thecvf.com/content_ECCV_2018/html/Aggeliki_Tsoli_Joint_3D_tracking_ECCV_2018_paper.html
PDF http://openaccess.thecvf.com/content_ECCV_2018/papers/Aggeliki_Tsoli_Joint_3D_tracking_ECCV_2018_paper.pdf
PWC https://paperswithcode.com/paper/joint-3d-tracking-of-a-deformable-object-in
Repo
Framework
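
The hypothesis-selection step described above reduces to a simple pattern: score every candidate hand/object contact configuration by how well the object model then fits the observations, and adopt the minimizer. The configuration names and error values below are made up purely for illustration.

```python
def best_hypothesis(hypotheses, fit_error):
    """Return the contact hypothesis minimizing model-to-observation error."""
    return min(hypotheses, key=fit_error)

# Hypothetical contact configurations with stand-in fit errors
# (in a real system these would come from the joint optimization).
hypotheses = ["no_contact", "two_fingers", "full_grasp"]
errors = {"no_contact": 4.2, "two_fingers": 1.1, "full_grasp": 2.7}
print(best_hypothesis(hypotheses, errors.get))  # two_fingers
```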

On the Semantic Relations and Functional Properties of Noun-Noun Compounds in Mandarin

Title On the Semantic Relations and Functional Properties of Noun-Noun Compounds in Mandarin
Authors Shu-Ping Gong, Chih-Hung Liu
Abstract
Tasks
Published 2018-10-01
URL https://www.aclweb.org/anthology/O18-1020/
PDF https://www.aclweb.org/anthology/O18-1020
PWC https://paperswithcode.com/paper/on-the-semantic-relations-and-functional
Repo
Framework

Goodness-of-fit Testing for Discrete Distributions via Stein Discrepancy

Title Goodness-of-fit Testing for Discrete Distributions via Stein Discrepancy
Authors Jiasen Yang, Qiang Liu, Vinayak Rao, Jennifer Neville
Abstract Recent work has combined Stein’s method with reproducing kernel Hilbert space theory to develop nonparametric goodness-of-fit tests for un-normalized probability distributions. However, the currently available tests apply exclusively to distributions with smooth density functions. In this work, we introduce a kernelized Stein discrepancy measure for discrete spaces, and develop a nonparametric goodness-of-fit test for discrete distributions with intractable normalization constants. Furthermore, we propose a general characterization of Stein operators that encompasses both discrete and continuous distributions, providing a recipe for constructing new Stein operators. We apply the proposed goodness-of-fit test to three statistical models involving discrete distributions, and our experiments show that the proposed test typically outperforms a two-sample test based on the maximum mean discrepancy.
Tasks
Published 2018-07-01
URL https://icml.cc/Conferences/2018/Schedule?showEvent=1894
PDF http://proceedings.mlr.press/v80/yang18c/yang18c.pdf
PWC https://paperswithcode.com/paper/goodness-of-fit-testing-for-discrete
Repo
Framework
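
A key ingredient can be illustrated with a toy Stein operator for a distribution p on {0, …, K-1} built from a cyclic shift: (A_p f)(x) = (p(x+1 mod K)/p(x)) f(x+1 mod K) - f(x). It satisfies E_p[(A_p f)(X)] = 0 for any f, and only probability ratios appear, so an intractable normalization constant cancels; a nonzero sample mean under data from q flags q ≠ p. This cyclic-shift operator is an illustrative stand-in for the paper's construction, not its exact operator or kernelized test.

```python
import numpy as np

K = 5
unnorm = np.array([1.0, 2.0, 4.0, 2.0, 1.0])   # unnormalized p
p = unnorm / unnorm.sum()

def stein_op(f, x):
    # Only the ratio of (unnormalized) probabilities appears, so an
    # intractable normalization constant cancels out.
    ratio = unnorm[(x + 1) % K] / unnorm[x]
    return ratio * f[(x + 1) % K] - f[x]

f = np.arange(K, dtype=float)                  # an arbitrary test function

# Stein identity: the expectation under p itself is zero.
mean_p = sum(p[x] * stein_op(f, x) for x in range(K))
print(abs(mean_p) < 1e-12)  # True

# Under a different distribution q the mean is generally nonzero,
# which is the signal a goodness-of-fit test builds on.
q = np.full(K, 1.0 / K)
mean_q = sum(q[x] * stein_op(f, x) for x in range(K))
print(abs(mean_q) > 1e-3)   # True
```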

Task Proposal: The TL;DR Challenge

Title Task Proposal: The TL;DR Challenge
Authors Shahbaz Syed, Michael Völske, Martin Potthast, Nedim Lipka, Benno Stein, Hinrich Schütze
Abstract The TL;DR challenge fosters research in abstractive summarization of informal text, the largest and fastest-growing source of textual data on the web, which has been overlooked by summarization research so far. The challenge owes its name to the frequent practice of social media users of supplementing long posts with a “TL;DR” (for “too long; didn’t read”) followed by a short summary, as a courtesy to those who would otherwise reply with the exact same abbreviation to indicate they did not care to read a post for its apparent length. Posts featuring TL;DR summaries form an excellent ground truth for summarization, and by tapping into this resource for the first time, we have mined millions of training examples from social media, opening the door to all kinds of generative models.
Tasks Abstractive Text Summarization, Information Retrieval, Text Generation, Text Summarization
Published 2018-11-01
URL https://www.aclweb.org/anthology/W18-6538/
PDF https://www.aclweb.org/anthology/W18-6538
PWC https://paperswithcode.com/paper/task-proposal-the-tldr-challenge
Repo
Framework
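
The mining step described above can be sketched in a few lines: posts carrying an explicit "TL;DR" marker yield (content, summary) training pairs. The regex below is an illustrative assumption, not the challenge's exact extraction rule.

```python
import re

TLDR = re.compile(r"\bTL;?DR[:\s-]*", flags=re.IGNORECASE)

def mine_pair(post):
    """Split a post into (content, summary) at its TL;DR marker, if any."""
    m = TLDR.search(post)
    if m is None:
        return None
    content = post[:m.start()].strip()
    summary = post[m.end():].strip()
    if content and summary:
        return content, summary
    return None

post = "I spent all week debugging the build... TL;DR: pin your dependencies."
print(mine_pair(post))
# ('I spent all week debugging the build...', 'pin your dependencies.')
```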

Evidence Type Classification in Randomized Controlled Trials

Title Evidence Type Classification in Randomized Controlled Trials
Authors Tobias Mayer, Elena Cabrio, Serena Villata
Abstract Randomized Controlled Trials (RCT) are a common type of experimental study in the medical domain for evidence-based decision making. The ability to automatically extract the arguments proposed therein can be of valuable support for clinicians and practitioners in their daily evidence-based decision making activities. Given the peculiarity of the medical domain and the required level of detail, standard approaches to argument component detection in argument(ation) mining are not fine-grained enough to support such activities. In this paper, we introduce a new sub-task of the argument component identification task: evidence type classification. To address it, we propose a supervised approach and we test it on a set of RCT abstracts on different medical topics.
Tasks Argument Mining, Decision Making
Published 2018-11-01
URL https://www.aclweb.org/anthology/W18-5204/
PDF https://www.aclweb.org/anthology/W18-5204
PWC https://paperswithcode.com/paper/evidence-type-classification-in-randomized
Repo
Framework

Generalized Zero-Shot Learning with Deep Calibration Network

Title Generalized Zero-Shot Learning with Deep Calibration Network
Authors Shichen Liu, Mingsheng Long, Jianmin Wang, Michael I. Jordan
Abstract A technical challenge of deep learning is recognizing target classes without seen data. Zero-shot learning leverages semantic representations such as attributes or class prototypes to bridge source and target classes. Existing standard zero-shot learning methods may be prone to overfitting the seen data of source classes, as they are blind to the semantic representations of target classes. In this paper, we study generalized zero-shot learning, which assumes that the semantic representations of target classes are accessible during training (while their data remain unseen), and in which prediction on unseen data is made by searching over both source and target classes. We propose a novel Deep Calibration Network (DCN) approach towards this generalized zero-shot learning paradigm, which enables simultaneous calibration of deep networks on the confidence of source classes and the uncertainty of target classes. Our approach maps visual features of images and semantic representations of class prototypes to a common embedding space such that the compatibility of seen data with both source and target classes is maximized. We show superior accuracy of our approach over the state of the art on benchmark datasets for generalized zero-shot learning, including AwA, CUB, SUN, and aPY.
Tasks Calibration, Zero-Shot Learning
Published 2018-12-01
URL http://papers.nips.cc/paper/7471-generalized-zero-shot-learning-with-deep-calibration-network
PDF http://papers.nips.cc/paper/7471-generalized-zero-shot-learning-with-deep-calibration-network.pdf
PWC https://paperswithcode.com/paper/generalized-zero-shot-learning-with-deep
Repo
Framework
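
The prediction rule described above can be sketched minimally: an image embedding is scored against both source and target class prototypes in a shared space, and a softmax over the union yields confidences; a temperature above 1 tempers overconfidence on source classes. The prototypes and temperature value below are illustrative assumptions, not the paper's learned quantities.

```python
import numpy as np

# Hypothetical class prototypes in a shared embedding space.
protos = {
    "cat":   np.array([1.0, 0.0]),   # source (seen) class
    "dog":   np.array([0.0, 1.0]),   # source (seen) class
    "zebra": np.array([0.7, 0.7]),   # target (unseen) class
}

def predict(embedding, temperature=2.0):
    """Softmax over source + target classes; T > 1 tempers confidence."""
    names = list(protos)
    scores = np.array([protos[n] @ embedding for n in names]) / temperature
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return dict(zip(names, probs))

probs = predict(np.array([0.8, 0.75]))
print(max(probs, key=probs.get))  # zebra
```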

LENA computerized automatic analysis of speech development from birth to three

Title LENA computerized automatic analysis of speech development from birth to three
Authors Li-Mei Chen, D. Kimbrough Oller, Chia-Cheng Lee, Chin-Ting Jimbo Liu
Abstract
Tasks
Published 2018-10-01
URL https://www.aclweb.org/anthology/O18-1017/
PDF https://www.aclweb.org/anthology/O18-1017
PWC https://paperswithcode.com/paper/lena-computerized-automatic-analysis-of
Repo
Framework

On the Derivational Entropy of Left-to-Right Probabilistic Finite-State Automata and Hidden Markov Models

Title On the Derivational Entropy of Left-to-Right Probabilistic Finite-State Automata and Hidden Markov Models
Authors Joan Andreu Sánchez, Martha Alicia Rocha, Verónica Romero, Mauricio Villegas
Abstract Probabilistic finite-state automata are a formalism that is widely used in many problems of automatic speech recognition and natural language processing. Probabilistic finite-state automata are closely related to other finite-state models such as weighted finite-state automata, word lattices, and hidden Markov models. Therefore, they share many similar properties and problems. Entropy measures of finite-state models have been investigated in the past in order to study the information capacity of these models. The derivational entropy quantifies the uncertainty that the model has about the probability distribution it represents. The derivational entropy in a finite-state automaton is computed from the probability that is accumulated in all of its individual state sequences. The computation of the entropy from a weighted finite-state automaton requires a normalized model. This article studies an efficient computation of the derivational entropy of left-to-right probabilistic finite-state automata, and it introduces an efficient algorithm for normalizing weighted finite-state automata. The efficient computation of the derivational entropy is also extended to continuous hidden Markov models.
Tasks Speech Recognition
Published 2018-03-01
URL https://www.aclweb.org/anthology/J18-1002/
PDF https://www.aclweb.org/anthology/J18-1002
PWC https://paperswithcode.com/paper/on-the-derivational-entropy-of-left-to-right
Repo
Framework
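
For a small acyclic left-to-right PFSA, the efficient computation can be sketched directly: since every complete path's probability is a product of transition probabilities, the derivational entropy decomposes as the sum over states q of visits(q) times the entropy of q's outgoing distribution, where visits(q) is the probability of reaching q. The automaton below is a made-up example, and the sketch ignores the self-loops and normalization issues the article treats in general.

```python
import math

# transitions[state] = list of (probability, next_state); state 3 is final.
# Left-to-right: every transition goes to a strictly higher state id.
transitions = {
    0: [(0.5, 1), (0.5, 2)],
    1: [(0.4, 2), (0.6, 3)],
    2: [(1.0, 3)],
    3: [],
}

def derivational_entropy(transitions, start=0):
    reach = {q: 0.0 for q in transitions}   # probability of reaching each state
    reach[start] = 1.0
    H = 0.0
    for q in sorted(transitions):           # increasing ids = topological order
        local = -sum(p * math.log2(p) for p, _ in transitions[q] if p > 0)
        H += reach[q] * local
        for p, nxt in transitions[q]:
            reach[nxt] += reach[q] * p
    return H

# Cross-check against brute-force path enumeration: the complete paths are
# 0-1-2-3 (prob 0.2), 0-1-3 (prob 0.3), and 0-2-3 (prob 0.5).
paths = [0.2, 0.3, 0.5]
brute = -sum(p * math.log2(p) for p in paths)
print(abs(derivational_entropy(transitions) - brute) < 1e-12)  # True
```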

LeapsAndBounds: A Method for Approximately Optimal Algorithm Configuration

Title LeapsAndBounds: A Method for Approximately Optimal Algorithm Configuration
Authors Gellert Weisz, Andras Gyorgy, Csaba Szepesvari
Abstract We consider the problem of configuring general-purpose solvers to run efficiently on problem instances drawn from an unknown distribution. The goal of the configurator is to find a configuration that runs fast on average on most instances, and to do so with the least amount of total work. It can run a chosen solver on a random instance until the solver finishes or a timeout is reached. We propose LeapsAndBounds, an algorithm that tests configurations on randomly selected problem instances for longer and longer times. We prove that the capped expected runtime of the configuration returned by LeapsAndBounds is close to the optimal expected runtime, while our algorithm’s running time is near-optimal. Our results show that LeapsAndBounds is more efficient than the recent algorithm of Kleinberg et al. (2017), which, to our knowledge, is the only other algorithm configuration method with non-trivial theoretical guarantees. Experimental results on configuring a public SAT solver on a new benchmark dataset also stand witness to the superiority of our method.
Tasks
Published 2018-07-01
URL https://icml.cc/Conferences/2018/Schedule?showEvent=2263
PDF http://proceedings.mlr.press/v80/weisz18a/weisz18a.pdf
PWC https://paperswithcode.com/paper/leapsandbounds-a-method-for-approximately-1
Repo
Framework
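
The capped-runtime idea the paper builds on can be illustrated with a simulated solver: each configuration is run on random instances under a timeout, runs that exceed it are charged only the cap, and the empirical capped mean guides which configuration to return. The doubling schedule and the simulated runtimes below are illustrative assumptions, not the paper's exact procedure or guarantees.

```python
import random

def solver_runtime(config, instance_seed):
    """Stand-in for running a real solver: pseudo-random runtime whose
    mean equals the config's value (smaller config = faster solver)."""
    rng = random.Random(hash((config, instance_seed)))
    return rng.expovariate(1.0 / config)

def capped_mean(config, timeout, n_instances=200):
    # Runs that exceed the timeout are charged only the cap.
    total = sum(min(solver_runtime(config, i), timeout)
                for i in range(n_instances))
    return total / n_instances

def pick_configuration(configs, rounds=4):
    best, timeout = None, 1.0
    for _ in range(rounds):                 # double the timeout each round
        means = {c: capped_mean(c, timeout) for c in configs}
        best = min(means, key=means.get)
        timeout *= 2.0
    return best

print(pick_configuration([1.0, 3.0, 10.0]))  # 1.0 (fastest on average)
```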