January 24, 2020

2563 words 13 mins read

Paper Group NANR 236

Robust Document Representations for Cross-Lingual Information Retrieval in Low-Resource Settings. A Polynomial Time Algorithm for Log-Concave Maximum Likelihood via Locally Exponential Families. An In-depth Analysis of the Effect of Lexical Normalization on the Dependency Parsing of Social Media. JBNU at MRP 2019: Multi-level Biaffine Attention for …

Robust Document Representations for Cross-Lingual Information Retrieval in Low-Resource Settings

Title Robust Document Representations for Cross-Lingual Information Retrieval in Low-Resource Settings
Authors Mahsa Yarmohammadi, Xutai Ma, Sorami Hisamoto, Muhammad Rahman, Yiming Wang, Hainan Xu, Daniel Povey, Philipp Koehn, Kevin Duh
Abstract
Tasks Information Retrieval
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-6602/
PDF https://www.aclweb.org/anthology/W19-6602
PWC https://paperswithcode.com/paper/robust-document-representations-for-cross
Repo
Framework

A Polynomial Time Algorithm for Log-Concave Maximum Likelihood via Locally Exponential Families

Title A Polynomial Time Algorithm for Log-Concave Maximum Likelihood via Locally Exponential Families
Authors Brian Axelrod, Ilias Diakonikolas, Alistair Stewart, Anastasios Sidiropoulos, Gregory Valiant
Abstract We consider the problem of computing the maximum likelihood multivariate log-concave distribution for a set of points. Specifically, we present an algorithm which, given $n$ points in $\mathbb{R}^d$ and an accuracy parameter $\epsilon>0$, runs in time $\mathrm{poly}(n,d,1/\epsilon)$, and returns a log-concave distribution which, with high probability, has the property that the likelihood of the $n$ points under the returned distribution is at most an additive $\epsilon$ less than the maximum likelihood that could be achieved via any log-concave distribution. This is the first computationally efficient (polynomial time) algorithm for this fundamental and practically important task. Our algorithm rests on a novel connection with exponential families: the maximum likelihood log-concave distribution belongs to a class of structured distributions which, while not an exponential family, “locally” possesses key properties of exponential families. This connection then allows the problem of computing the log-concave maximum likelihood distribution to be formulated as a convex optimization problem, and solved via an approximate first-order method. Efficiently approximating the (sub)gradients of the objective function of this optimization problem is quite delicate, and is the main technical challenge in this work.
Tasks
Published 2019-12-01
URL http://papers.nips.cc/paper/8988-a-polynomial-time-algorithm-for-log-concave-maximum-likelihood-via-locally-exponential-families
PDF http://papers.nips.cc/paper/8988-a-polynomial-time-algorithm-for-log-concave-maximum-likelihood-via-locally-exponential-families.pdf
PWC https://paperswithcode.com/paper/a-polynomial-time-algorithm-for-log-concave
Repo
Framework
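
As a point of reference for the objective described in the abstract above, here is a hedged sketch of the log-concave maximum likelihood problem and the additive guarantee; the notation (in particular $\mathcal{F}_d$ for the class of log-concave densities on $\mathbb{R}^d$) is a plausible formalization rather than the paper's exact statement.

```latex
% Log-concave MLE: return a log-concave density whose average log-likelihood on the
% sample is within an additive \epsilon of the best achievable over the whole class.
\hat{f} \in \mathcal{F}_d, \qquad
\frac{1}{n}\sum_{i=1}^{n} \log \hat{f}(x_i)
\;\ge\;
\max_{f \in \mathcal{F}_d} \frac{1}{n}\sum_{i=1}^{n} \log f(x_i) \;-\; \epsilon
```

The algorithm is claimed to achieve this with high probability in $\mathrm{poly}(n, d, 1/\epsilon)$ time.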

An In-depth Analysis of the Effect of Lexical Normalization on the Dependency Parsing of Social Media

Title An In-depth Analysis of the Effect of Lexical Normalization on the Dependency Parsing of Social Media
Authors Rob van der Goot
Abstract Existing natural language processing systems have often been designed with standard texts in mind. However, when these tools are used on the substantially different texts from social media, their performance drops dramatically. One solution is to translate social media data to standard language before processing; this is also called normalization. It is well known that this improves performance for many natural language processing tasks on social media data. However, little is known about which types of normalization replacements have the most effect. Furthermore, it is unknown what the weaknesses of existing lexical normalization systems are in an extrinsic setting. In this paper, we analyze the effect of manual as well as automatic lexical normalization for dependency parsing. After our analysis, we conclude that for most categories, automatic normalization scores close to manually annotated normalization and that small annotation differences are important to take into consideration when exploiting normalization in a pipeline setup.
Tasks Dependency Parsing, Lexical Normalization
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-5515/
PDF https://www.aclweb.org/anthology/D19-5515
PWC https://paperswithcode.com/paper/an-in-depth-analysis-of-the-effect-of-lexical
Repo
Framework
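
To make the pipeline setup discussed above concrete, here is a minimal illustrative sketch of dictionary-based lexical normalization applied before parsing, with the replacements recorded so their downstream effect could be analyzed per category. The lexicon and token examples are hypothetical placeholders, not the paper's resources.

```python
def normalize(tokens, lexnorm_dict):
    """Apply a lookup-based lexical normalization step before parsing (illustrative only).

    Returns the normalized token sequence and the replacements that were made,
    so their effect on downstream dependency parsing can be inspected.
    """
    normalized, replacements = [], []
    for tok in tokens:
        new = lexnorm_dict.get(tok.lower(), tok)
        if new != tok:
            replacements.append((tok, new))
        normalized.append(new)
    return normalized, replacements

# Toy usage with a tiny hypothetical normalization lexicon.
lexicon = {"u": "you", "pls": "please", "teh": "the"}
print(normalize("u know teh answer pls ?".split(), lexicon))
```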

JBNU at MRP 2019: Multi-level Biaffine Attention for Semantic Dependency Parsing

Title JBNU at MRP 2019: Multi-level Biaffine Attention for Semantic Dependency Parsing
Authors Seung-Hoon Na, Jinwoon Min, Kwanghyeon Park, Jong-Hun Shin, Young-Kil Kim
Abstract This paper describes Jeonbuk National University (JBNU)'s system for the 2019 shared task on Cross-Framework Meaning Representation Parsing (MRP 2019) at the Conference on Computational Natural Language Learning. Of the five frameworks, we address only the DELPH-IN MRS Bi-Lexical Dependencies (DM), Prague Semantic Dependencies (PSD), and Universal Conceptual Cognitive Annotation (UCCA) frameworks. We propose a unified parsing model using biaffine attention (Dozat and Manning, 2017), consisting of 1) a BERT-BiLSTM encoder and 2) a biaffine attention decoder. First, the BERT-BiLSTM sentence encoder uses BERT to compose a sentence's wordpieces into word-level embeddings and subsequently applies a BiLSTM to the word-level representations. Second, the biaffine attention decoder determines the scores for an edge's existence and its labels based on biaffine attention functions between role-dependent representations. We also present multi-level biaffine attention models by combining all the role-dependent representations that appear at multiple intermediate layers.
Tasks Dependency Parsing, Semantic Dependency Parsing
Published 2019-11-01
URL https://www.aclweb.org/anthology/K19-2009/
PDF https://www.aclweb.org/anthology/K19-2009
PWC https://paperswithcode.com/paper/jbnu-at-mrp-2019-multi-level-biaffine
Repo
Framework
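
As a rough illustration of the biaffine scoring the abstract refers to (Dozat and Manning, 2017), the sketch below computes edge-existence scores between role-dependent head and dependent representations. The array shapes, the helper name, and the omission of the BERT-BiLSTM encoder are simplifications for illustration, not the authors' implementation.

```python
import numpy as np

def biaffine_scores(heads, deps, U, W, b):
    """Biaffine attention scores: s[i, j] = h_i^T U d_j + W [h_i; d_j] + b.

    heads: (n, dh) role-dependent head representations
    deps:  (n, dd) role-dependent dependent representations
    U:     (dh, dd) bilinear term, W: (dh + dd,) linear term, b: scalar bias
    Returns an (n, n) matrix of edge scores.
    """
    dh = heads.shape[1]
    bilinear = heads @ U @ deps.T                          # (n, n)
    linear = (heads @ W[:dh])[:, None] + deps @ W[dh:]     # broadcast head + dependent parts
    return bilinear + linear + b

# Toy usage with random stand-ins for the encoder outputs.
rng = np.random.default_rng(0)
n, dh, dd = 5, 8, 8
scores = biaffine_scores(rng.normal(size=(n, dh)), rng.normal(size=(n, dd)),
                         rng.normal(size=(dh, dd)), rng.normal(size=(dh + dd,)), 0.0)
print(scores.shape)  # (5, 5)
```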

Mean Replacement Pruning

Title Mean Replacement Pruning
Authors Utku Evci, Nicolas Le Roux, Pablo Castro, Leon Bottou
Abstract Pruning units in a deep network can help speed up inference and training as well as reduce the size of the model. We show that bias propagation is a pruning technique which consistently outperforms the common approach of merely removing units, regardless of the architecture and the dataset. We also show how a simple adaptation of an existing scoring function allows us to select the best units to prune. Finally, we show that the units selected by the best-performing scoring functions are somewhat consistent over the course of training, implying that the dead parts of the network appear during the early stages of training.
Tasks
Published 2019-05-01
URL https://openreview.net/forum?id=BJxRVnC5Fm
PDF https://openreview.net/pdf?id=BJxRVnC5Fm
PWC https://paperswithcode.com/paper/mean-replacement-pruning
Repo
Framework
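
A minimal sketch of the bias-propagation idea described above for a single fully connected layer: rather than simply zeroing a pruned unit, its mean activation is folded into the next layer's bias. The NumPy shapes and helper name are assumptions for illustration, not the authors' code.

```python
import numpy as np

def prune_with_mean_replacement(W_next, b_next, activations, unit):
    """Prune `unit` feeding the layer (W_next, b_next), propagating its mean activation.

    W_next:      (out_dim, in_dim) weights of the next layer
    b_next:      (out_dim,) biases of the next layer
    activations: (num_samples, in_dim) activations of the pruned layer on some data
    unit:        index of the unit to remove
    """
    mean_act = activations[:, unit].mean()
    b_new = b_next + W_next[:, unit] * mean_act   # absorb the unit's mean output into the bias
    W_new = np.delete(W_next, unit, axis=1)       # drop the incoming column for the pruned unit
    return W_new, b_new

# Toy usage.
rng = np.random.default_rng(1)
W, b = rng.normal(size=(4, 6)), np.zeros(4)
acts = rng.normal(size=(100, 6))
W2, b2 = prune_with_mean_replacement(W, b, acts, unit=2)
print(W2.shape, b2.shape)  # (4, 5) (4,)
```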

Learning to Drive by Observing the Best and Synthesizing the Worst

Title Learning to Drive by Observing the Best and Synthesizing the Worst
Authors Mayank Bansal, Alex Krizhevsky, Abhijit Ogale
Abstract Our goal is to train a policy for autonomous driving via imitation learning that is robust enough to drive a real vehicle. We find that standard behavior cloning is insufficient for handling complex driving scenarios, even when we leverage a perception system for preprocessing the input and a controller for executing the output on the car: 30 million examples are still not enough. We propose exposing the learner to synthesized data in the form of perturbations to the expert’s driving, which creates interesting situations such as collisions and/or going off the road. Rather than purely imitating all data, we augment the imitation loss with additional losses that penalize undesirable events and encourage progress – the perturbations then provide an important signal for these losses and lead to robustness of the learned model. We show that the model can handle complex situations in simulation, and present ablation experiments that emphasize the importance of each of our proposed changes and show that the model is responding to the appropriate causal factors. Finally, we demonstrate the model driving a car in the real world ( https://sites.google.com/view/learn-to-drive ).
Tasks Autonomous Driving, Imitation Learning
Published 2019-05-01
URL https://openreview.net/forum?id=SkGtjjR5t7
PDF https://openreview.net/pdf?id=SkGtjjR5t7
PWC https://paperswithcode.com/paper/learning-to-drive-by-observing-the-best-and
Repo
Framework
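
The core recipe in the abstract above is an imitation loss augmented with penalties for undesirable events on (possibly perturbed) examples. The sketch below combines such terms with hand-picked weights; the term names, inputs, and weights are illustrative assumptions, not the paper's exact losses.

```python
import numpy as np

def driving_loss(pred_traj, expert_traj, collision_prob, offroad_prob,
                 w_imitation=1.0, w_collision=10.0, w_offroad=5.0):
    """Imitation loss plus losses that penalize undesirable events (rough sketch).

    pred_traj, expert_traj:      (T, 2) predicted / expert waypoints
    collision_prob, offroad_prob: (T,) per-step overlap of the predicted vehicle
    footprint with obstacles / off-road area (assumed given by the environment).
    """
    imitation = np.mean(np.sum((pred_traj - expert_traj) ** 2, axis=1))
    collision = np.mean(collision_prob)
    offroad = np.mean(offroad_prob)
    return w_imitation * imitation + w_collision * collision + w_offroad * offroad

# Toy usage on random tensors.
rng = np.random.default_rng(2)
print(driving_loss(rng.normal(size=(10, 2)), rng.normal(size=(10, 2)),
                   rng.uniform(size=10), rng.uniform(size=10)))
```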

S3TA: A Soft, Spatial, Sequential, Top-Down Attention Model

Title S3TA: A Soft, Spatial, Sequential, Top-Down Attention Model
Authors Alex Mott, Daniel Zoran, Mike Chrzanowski, Daan Wierstra, Danilo J. Rezende
Abstract We present a soft, spatial, sequential, top-down attention model (S3TA). This model uses a soft attention mechanism to bottleneck its view of the input. A recurrent core is used to generate query vectors, which actively select information from the input by correlating the query with input- and space-dependent key maps at different spatial locations. We demonstrate the power and interpretability of this model under two settings. First, we build an agent which uses this attention model in RL environments and show that we can achieve performance competitive with state-of-the-art models while producing attention maps that elucidate some of the strategies used to solve the task. Second, we use this model in supervised learning tasks and show that it also achieves competitive performance and provides interpretable attention maps that show some of the underlying logic in the model’s decision making.
Tasks Decision Making
Published 2019-05-01
URL https://openreview.net/forum?id=B1gJOoRcYQ
PDF https://openreview.net/pdf?id=B1gJOoRcYQ
PWC https://paperswithcode.com/paper/s3ta-a-soft-spatial-sequential-top-down
Repo
Framework
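
A hedged sketch of the query-key attention read described above: a query vector (generated by a recurrent core in the paper) is correlated with spatial key maps, softmaxed over locations, and used to pool a value map. The shapes and the omission of the recurrent core are simplifications.

```python
import numpy as np

def spatial_attention_read(query, keys, values):
    """Soft spatial attention over an H x W feature map.

    query:  (dk,) query vector
    keys:   (H, W, dk) space-dependent key map
    values: (H, W, dv) value map
    Returns the attended read vector (dv,) and the (H, W) attention map.
    """
    logits = np.tensordot(keys, query, axes=([2], [0]))           # (H, W) correlations
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                                      # softmax over all locations
    read = np.tensordot(weights, values, axes=([0, 1], [0, 1]))   # weighted pooling -> (dv,)
    return read, weights

# Toy usage.
rng = np.random.default_rng(3)
read, attn = spatial_attention_read(rng.normal(size=16),
                                    rng.normal(size=(7, 7, 16)),
                                    rng.normal(size=(7, 7, 32)))
print(read.shape, attn.shape)  # (32,) (7, 7)
```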

FROM DEEP LEARNING TO DEEP DEDUCING: AUTOMATICALLY TRACKING DOWN NASH EQUILIBRIUM THROUGH AUTONOMOUS NEURAL AGENT, A POSSIBLE MISSING STEP TOWARD GENERAL A.I.

Title FROM DEEP LEARNING TO DEEP DEDUCING: AUTOMATICALLY TRACKING DOWN NASH EQUILIBRIUM THROUGH AUTONOMOUS NEURAL AGENT, A POSSIBLE MISSING STEP TOWARD GENERAL A.I.
Authors Brown Wang
Abstract Contrary to most reinforcement learning studies, which emphasize training a deep neural network so that its output layer approximates certain strategies, this paper proposes a reversed method for reinforcement learning. We call this “Deep Deducing”. In short, after adequately training a deep neural network according to a strategy-environment-to-payoff table, we initialize a randomized strategy input and gradually propagate the error between the actual output and the desired output back to this initially randomized strategy input in the “input layer” of the trained deep neural network, performing a task similar to “human deduction”. We then view the final strategy input in the “input layer” as the fittest strategy for the neural network when confronting the observed environment input from the outside world.
Tasks
Published 2019-01-01
URL https://openreview.net/forum?id=ByxZdj09tX
PDF https://openreview.net/pdf?id=ByxZdj09tX
PWC https://paperswithcode.com/paper/from-deep-learning-to-deep-deducing
Repo
Framework
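
The procedure described above amounts to freezing a trained network's weights and running gradient descent on the strategy portion of the input so that the predicted payoff approaches a desired value. The sketch below does this for a tiny fixed linear-plus-sigmoid network; the network, loss, and step size are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def deduce_strategy(W, b, environment, target_payoff, steps=500, lr=0.5):
    """Optimize only the strategy half of the input of a frozen one-layer network.

    The network maps [strategy; environment] -> payoff via sigmoid(W x + b). W, b and
    the observed environment stay fixed; the error is propagated back to the strategy input.
    """
    strategy = np.random.default_rng(4).uniform(size=W.shape[1] - environment.size)
    for _ in range(steps):
        x = np.concatenate([strategy, environment])
        pred = sigmoid(W @ x + b)
        err = pred - target_payoff                   # gradient of squared error w.r.t. pred (up to 2x)
        grad_x = W.T @ (err * pred * (1 - pred))     # backprop through the sigmoid and W
        strategy -= lr * grad_x[: strategy.size]     # update the strategy inputs only
    return strategy

# Toy usage: one payoff output, 3 strategy dims, 2 environment dims.
rng = np.random.default_rng(5)
W, b = rng.normal(size=(1, 5)), np.zeros(1)
print(deduce_strategy(W, b, environment=np.array([0.2, -0.4]), target_payoff=np.array([1.0])))
```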

Proceedings of the Second International Workshop on Resources and Tools for Derivational Morphology

Title Proceedings of the Second International Workshop on Resources and Tools for Derivational Morphology
Authors
Abstract
Tasks
Published 2019-09-01
URL https://www.aclweb.org/anthology/W19-8500/
PDF https://www.aclweb.org/anthology/W19-8500
PWC https://paperswithcode.com/paper/proceedings-of-the-second-international-2
Repo
Framework

Multilingual Model Using Cross-Task Embedding Projection

Title Multilingual Model Using Cross-Task Embedding Projection
Authors Jin Sakuma, Naoki Yoshinaga
Abstract We present a method for applying a neural network trained on one (resource-rich) language for a given task to other (resource-poor) languages. We accomplish this by inducing a mapping from pre-trained cross-lingual word embeddings to the embedding layer of the neural network trained on the resource-rich language. To perform element-wise cross-task embedding projection, we introduce locally linear mapping, which assumes and preserves the local topology across the semantic spaces before and after the projection. Experimental results on topic classification and sentiment analysis tasks showed that the fully task-specific multilingual model obtained using our method outperformed the existing multilingual models with embedding layers fixed to pre-trained cross-lingual word embeddings.
Tasks Sentiment Analysis, Word Embeddings
Published 2019-11-01
URL https://www.aclweb.org/anthology/K19-1003/
PDF https://www.aclweb.org/anthology/K19-1003
PWC https://paperswithcode.com/paper/multilingual-model-using-cross-task-embedding
Repo
Framework
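
The locally linear mapping in the abstract is reminiscent of LLE-style reconstruction: a cross-lingual embedding is expressed as a weighted combination of its nearest neighbours among shared vocabulary items, and the same weights are reused over the corresponding task-specific embeddings. The sketch below implements that reading under stated assumptions (NumPy, unconstrained least-squares weights); it is not necessarily the authors' exact formulation.

```python
import numpy as np

def locally_linear_project(x, source_embeds, target_embeds, k=5):
    """Project a cross-lingual embedding x into a task-specific embedding space.

    source_embeds: (V, d_src) pre-trained cross-lingual embeddings of shared words
    target_embeds: (V, d_tgt) task-specific embeddings of the same words
    The local topology around x is captured by reconstruction weights over its
    k nearest neighbours, which are then applied to the target-side vectors.
    """
    dists = np.linalg.norm(source_embeds - x, axis=1)
    nn = np.argsort(dists)[:k]                                   # k nearest neighbours of x
    w, *_ = np.linalg.lstsq(source_embeds[nn].T, x, rcond=None)  # x ~= sum_j w_j * neighbour_j
    return w @ target_embeds[nn]                                 # reuse the weights in the target space

# Toy usage.
rng = np.random.default_rng(6)
src, tgt = rng.normal(size=(50, 8)), rng.normal(size=(50, 16))
print(locally_linear_project(rng.normal(size=8), src, tgt).shape)  # (16,)
```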

Automatic and Robust Skull Registration Based on Discrete Uniformization

Title Automatic and Robust Skull Registration Based on Discrete Uniformization
Authors Junli Zhao, Xin Qi, Chengfeng Wen, Na Lei, Xianfeng Gu
Abstract Skull registration plays a fundamental role in forensic science and is crucial for craniofacial reconstruction. The complicated topology, lack of anatomical features, and low-quality reconstructed meshes make skull registration challenging. In this work, we propose an automatic skull registration method based on discrete uniformization theory, which can handle complicated topologies and is robust to low-quality meshes. We apply dynamic Yamabe flow to realize discrete uniformization, which modifies the mesh's combinatorial structure during the flow and conformally maps the multiply connected skull surface onto a planar disk with circular holes. The 3D surfaces can then be registered by matching their planar images using harmonic maps. This method is rigorous with theoretical guarantees, automatic without user intervention, and robust to low mesh quality. Our experimental results demonstrate the efficiency and efficacy of the method.
Tasks
Published 2019-10-01
URL http://openaccess.thecvf.com/content_ICCV_2019/html/Zhao_Automatic_and_Robust_Skull_Registration_Based_on_Discrete_Uniformization_ICCV_2019_paper.html
PDF http://openaccess.thecvf.com/content_ICCV_2019/papers/Zhao_Automatic_and_Robust_Skull_Registration_Based_on_Discrete_Uniformization_ICCV_2019_paper.pdf
PWC https://paperswithcode.com/paper/automatic-and-robust-skull-registration-based
Repo
Framework
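
For background on the dynamic Yamabe flow mentioned above, a commonly cited form of the discrete (Euclidean) Yamabe flow drives per-vertex discrete curvatures toward prescribed targets via logarithmic conformal factors. The conventions below (and the dynamic retriangulation used in the paper) are a textbook-style sketch rather than the paper's exact formulation.

```latex
% u_i: log conformal factor at vertex v_i; K_i: discrete Gaussian curvature (angle deficit);
% \bar{K}_i: target curvature (e.g. 0 at interior vertices of the planar image).
\frac{d u_i(t)}{d t} \;=\; \bar{K}_i - K_i(t), \qquad
\tilde{l}_{ij} \;=\; e^{(u_i + u_j)/2}\, l_{ij}
```

Here $\tilde{l}_{ij}$ is the conformally rescaled length of edge $(v_i, v_j)$; running the flow to convergence flattens the curvature everywhere except at the prescribed boundary, which is what allows the surface to be laid out in the plane.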

Visually grounded generation of entailments from premises

Title Visually grounded generation of entailments from premises
Authors Somayeh Jafaritazehjani, Albert Gatt, Marc Tanti
Abstract Natural Language Inference (NLI) is the task of determining the semantic relationship between a premise and a hypothesis. In this paper, we focus on the generation of hypotheses from premises in a multimodal setting, to generate a sentence (hypothesis) given an image and/or its description (premise) as the input. The main goals of this paper are (a) to investigate whether it is reasonable to frame NLI as a generation task; and (b) to consider the degree to which grounding textual premises in visual information is beneficial to generation. We compare different neural architectures, showing through automatic and human evaluation that entailments can indeed be generated successfully. We also show that multimodal models outperform unimodal models in this task, albeit marginally.
Tasks Natural Language Inference
Published 2019-10-01
URL https://www.aclweb.org/anthology/W19-8625/
PDF https://www.aclweb.org/anthology/W19-8625
PWC https://paperswithcode.com/paper/visually-grounded-generation-of-entailments
Repo
Framework
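
To make the "grounding textual premises in visual information" idea above concrete, here is a minimal sketch of one common way to condition a hypothesis decoder on both a premise encoding and an image feature vector. The fusion scheme, dimensions, and the mean-pooled premise encoder are assumptions for illustration, not the architectures compared in the paper.

```python
import numpy as np

def init_decoder_state(premise_vecs, image_feat, W_text, W_img):
    """Fuse a premise encoding with image features to condition a hypothesis decoder.

    premise_vecs: (T, d_w) word vectors of the premise
    image_feat:   (d_img,) global image feature (e.g. from a CNN)
    Returns the initial decoder hidden state.
    """
    text_enc = premise_vecs.mean(axis=0)                  # simple mean-pooled premise encoder
    return np.tanh(W_text @ text_enc + W_img @ image_feat)

# Toy usage with random stand-ins for embeddings and projections.
rng = np.random.default_rng(7)
h0 = init_decoder_state(rng.normal(size=(12, 300)), rng.normal(size=2048),
                        rng.normal(size=(512, 300)) * 0.01, rng.normal(size=(512, 2048)) * 0.01)
print(h0.shape)  # (512,)
```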

A Deep Cybersickness Predictor Based on Brain Signal Analysis for Virtual Reality Contents

Title A Deep Cybersickness Predictor Based on Brain Signal Analysis for Virtual Reality Contents
Authors Jinwoo Kim, Woojae Kim, Heeseok Oh, Seongmin Lee, Sanghoon Lee
Abstract What if we could interpret the cognitive state of a user while they experience virtual reality (VR) and estimate that cognitive state from a visual stimulus? In this paper, we address the above question by developing an electroencephalography (EEG)-driven VR cybersickness prediction model. EEG data has been widely utilized to learn cognitive representations of brain activity. In the first stage, to fully exploit the advantages of the EEG data, it is transformed into a multi-channel spectrogram, which makes it possible to account for the correlation of spectral and temporal coefficients. Then, a convolutional neural network (CNN) is applied to encode the cognitive representation of the EEG spectrogram. In the second stage, we train a cybersickness prediction model on the VR video sequence by designing a recurrent neural network (RNN). Here, the encoded cognitive representation is transferred to the model to train the visual and cognitive features for cybersickness prediction. Through the proposed framework, it is possible to automatically predict a cybersickness level that reflects brain activity. We use 8-channel EEG data to record brain activity while more than 200 subjects experience 44 different VR contents. After rigorous training, we demonstrate that the proposed framework reliably estimates cognitive states without the EEG data. Furthermore, it achieves state-of-the-art performance compared to existing VR cybersickness prediction models.
Tasks EEG
Published 2019-10-01
URL http://openaccess.thecvf.com/content_ICCV_2019/html/Kim_A_Deep_Cybersickness_Predictor_Based_on_Brain_Signal_Analysis_for_ICCV_2019_paper.html
PDF http://openaccess.thecvf.com/content_ICCV_2019/papers/Kim_A_Deep_Cybersickness_Predictor_Based_on_Brain_Signal_Analysis_for_ICCV_2019_paper.pdf
PWC https://paperswithcode.com/paper/a-deep-cybersickness-predictor-based-on-brain
Repo
Framework
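
The first stage above transforms multi-channel EEG into a multi-channel spectrogram before the CNN. A hedged sketch of that preprocessing step using scipy is shown below; the sampling rate, window length, and log-power scaling are assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy.signal import spectrogram

def eeg_to_spectrogram(eeg, fs=256, nperseg=128):
    """Convert (channels, samples) EEG into a (channels, freqs, times) spectrogram stack.

    Each channel gets its own time-frequency map, preserving the correlation of
    spectral and temporal coefficients that a CNN can then encode.
    """
    maps = []
    for channel in eeg:
        _, _, sxx = spectrogram(channel, fs=fs, nperseg=nperseg)
        maps.append(np.log(sxx + 1e-8))   # log power for numerical stability
    return np.stack(maps)

# Toy usage: 8 channels, 10 seconds at 256 Hz.
rng = np.random.default_rng(8)
print(eeg_to_spectrogram(rng.normal(size=(8, 2560))).shape)
```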

CLPsych2019 Shared Task: Predicting Suicide Risk Level from Reddit Posts on Multiple Forums

Title CLPsych2019 Shared Task: Predicting Suicide Risk Level from Reddit Posts on Multiple Forums
Authors Victor Ruiz, Lingyun Shi, Wei Quan, Neal Ryan, Candice Biernesser, David Brent, Rich Tsui
Abstract We aimed to predict an individual suicide risk level from longitudinal posts on Reddit discussion forums. Through participating in a shared task competition hosted by CLPsych2019, we received two annotated datasets: a training dataset with 496 users (31,553 posts) and a test dataset with 125 users (9,610 posts). We submitted results from our three best-performing machine-learning models: SVM, Naïve Bayes, and an ensemble model. Each model provided a user’s suicide risk level in four categories, i.e., no risk, low risk, moderate risk, and severe risk. Among the three models, the ensemble model had the best macro-averaged F1 score of 0.379 when tested on the holdout test dataset. The NB model had the best performance in two additional binary-classification tasks, i.e., no risk vs. flagged risk (any risk level other than no risk) with an F1 score of 0.836, and no or low risk vs. urgent risk (moderate or severe risk) with an F1 score of 0.736. We conclude that the NB model may serve as a tool for identifying users with flagged or urgent suicide risk based on longitudinal posts on Reddit discussion forums.
Tasks
Published 2019-06-01
URL https://www.aclweb.org/anthology/W19-3020/
PDF https://www.aclweb.org/anthology/W19-3020
PWC https://paperswithcode.com/paper/clpsych2019-shared-task-predicting-suicide
Repo
Framework
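
For readers reproducing the evaluation protocol, here is a minimal sketch of a Naïve Bayes baseline scored with the macro-averaged F1 reported above, using scikit-learn on placeholder data. The feature extraction, texts, and labels are assumptions for illustration, not the authors' pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import f1_score

# Placeholder user-level documents with 4-level risk labels (0=no, 1=low, 2=moderate, 3=severe).
train_texts = ["placeholder post text a", "placeholder post text b",
               "placeholder post text c", "placeholder post text d"]
train_labels = [0, 2, 1, 3]
test_texts = ["placeholder post text c", "placeholder post text a"]
test_labels = [1, 0]

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_texts)
X_test = vectorizer.transform(test_texts)

model = MultinomialNB().fit(X_train, train_labels)
pred = model.predict(X_test)
print("macro-F1:", f1_score(test_labels, pred, average="macro"))
```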

International Relations Perspectives on Technology

Title International Relations Perspectives on Technology
Authors Hassan Imam Essa Hassan
Abstract This paper addresses the literature of International Relations approaches to technology as a form of power in international politics in two stages. First, it reviews the current IR approaches to technological power in international politics, which include instrumentalism, essentialism, and the Social Construction of Technology (SCOT). Second, it covers the historical materialist approaches to technological power in IR, which include instrumentalism, essentialism, and critical theory of technology.
Tasks
Published 2019-07-30
URL https://ijbassnet.com/publication/254/details
PDF https://ijbassnet.com/storage/app/publications/5d4019e56b76a11564482021.pdf
PWC https://paperswithcode.com/paper/international-relations-perspectives-on
Repo
Framework