January 24, 2020

2664 words 13 mins read

Paper Group NANR 158

GFrames: Gradient-Based Local Reference Frame for 3D Shape Matching. Vector of Locally Aggregated Embeddings for Text Representation. How well do NLI models capture verb veridicality?. QUARCH: A New Quasi-Affine Reconstruction Stratum From Vague Relative Camera Orientation Knowledge. Original Semantics-Oriented Attention and Deep Fusion Network for …

GFrames: Gradient-Based Local Reference Frame for 3D Shape Matching

Title GFrames: Gradient-Based Local Reference Frame for 3D Shape Matching
Authors Simone Melzi, Riccardo Spezialetti, Federico Tombari, Michael M. Bronstein, Luigi Di Stefano, Emanuele Rodola
Abstract We introduce GFrames, a novel local reference frame (LRF) construction for 3D meshes and point clouds. GFrames are based on the computation of the intrinsic gradient of a scalar field defined on top of the input shape. The resulting tangent vector field defines a repeatable tangent direction of the local frame at each point; importantly, it directly inherits the properties and invariance classes of the underlying scalar function, making it remarkably robust under strong sampling artifacts, vertex noise, as well as non-rigid deformations. Existing local descriptors can directly benefit from our repeatable frames, as we showcase in a selection of 3D vision and shape analysis applications where we demonstrate state-of-the-art performance in a variety of challenging settings.
Tasks
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Melzi_GFrames_Gradient-Based_Local_Reference_Frame_for_3D_Shape_Matching_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Melzi_GFrames_Gradient-Based_Local_Reference_Frame_for_3D_Shape_Matching_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/gframes-gradient-based-local-reference-frame
Repo
Framework
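
A minimal sketch of the core idea, assuming a triangle mesh with some precomputed per-vertex scalar field (the paper's specific field choices are not reproduced here): compute the per-face intrinsic gradient of the field, project it onto the tangent plane at a vertex, and assemble the local reference frame from the normal and that tangent direction. This is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def face_gradient(p0, p1, p2, f0, f1, f2):
    """Intrinsic gradient of a piecewise-linear scalar field on one triangle."""
    n = np.cross(p1 - p0, p2 - p0)
    area2 = np.linalg.norm(n)            # twice the triangle area
    n = n / area2
    # standard per-face gradient: sum_i f_i * (N x e_i) / (2A),
    # where e_i is the edge opposite vertex i
    return (f0 * np.cross(n, p2 - p1) +
            f1 * np.cross(n, p0 - p2) +
            f2 * np.cross(n, p1 - p0)) / area2

def gframe_at_vertex(normal, grad):
    """3x3 local reference frame: x = projected gradient direction,
    z = vertex normal, y = z x x. Assumes grad is not parallel to the normal."""
    z = normal / np.linalg.norm(normal)
    x = grad - np.dot(grad, z) * z        # project gradient onto tangent plane
    x = x / np.linalg.norm(x)
    return np.stack([x, np.cross(z, x), z], axis=1)
```

Because the tangent axis is derived from the scalar field's gradient, the frame inherits whatever invariances that field has, which is the property the abstract emphasizes.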

Vector of Locally Aggregated Embeddings for Text Representation

Title Vector of Locally Aggregated Embeddings for Text Representation
Authors Hadi Amiri, Mitra Mohtarami
Abstract We present Vector of Locally Aggregated Embeddings (VLAE) for effective and, ultimately, lossless representation of textual content. Our model encodes each input text by effectively identifying and integrating the representations of its semantically-relevant parts. The proposed model generates high quality representation of textual content and improves the classification performance of current state-of-the-art deep averaging networks across several text classification tasks.
Tasks Text Classification
Published 2019-06-01
URL https://www.aclweb.org/anthology/N19-1143/
PDF https://www.aclweb.org/anthology/N19-1143
PWC https://paperswithcode.com/paper/vector-of-locally-aggregated-embeddings-for
Repo
Framework
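
The abstract describes identifying and aggregating representations of a text's semantically relevant parts. A minimal VLAD-style sketch of that idea, assuming word embeddings are aggregated as residuals against k-means centroids; the paper's exact assignment and training scheme may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def vlad_text_vector(word_vecs, kmeans):
    """Aggregate word embeddings as residuals to their nearest cluster centers."""
    k, d = kmeans.cluster_centers_.shape
    assign = kmeans.predict(word_vecs)                 # (n_words,)
    out = np.zeros((k, d))
    for c in range(k):
        members = word_vecs[assign == c]
        if len(members):
            out[c] = (members - kmeans.cluster_centers_[c]).sum(axis=0)
    out = out.flatten()
    return out / (np.linalg.norm(out) + 1e-12)         # L2-normalized document vector

# usage sketch: fit centroids on training-vocabulary embeddings, then encode documents
# kmeans = KMeans(n_clusters=16).fit(all_word_embeddings)
# doc_vec = vlad_text_vector(np.stack([emb[w] for w in doc_tokens]), kmeans)
```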

How well do NLI models capture verb veridicality?

Title How well do NLI models capture verb veridicality?
Authors Alexis Ross, Ellie Pavlick
Abstract In natural language inference (NLI), contexts are considered veridical if they allow us to infer that their underlying propositions make true claims about the real world. We investigate whether a state-of-the-art natural language inference model (BERT) learns to make correct inferences about veridicality in verb-complement constructions. We introduce an NLI dataset for veridicality evaluation consisting of 1,500 sentence pairs, covering 137 unique verbs. We find that both human and model inferences generally follow theoretical patterns, but exhibit a systematic bias towards assuming that verbs are veridical, a bias which is amplified in BERT. We further show that, encouragingly, BERT's inferences are sensitive not only to the presence of individual verb types, but also to the syntactic role of the verb, the form of the complement clause (to- vs. that-complements), and negation.
Tasks Natural Language Inference
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-1228/
PDF https://www.aclweb.org/anthology/D19-1228
PWC https://paperswithcode.com/paper/how-well-do-nli-models-capture-verb
Repo
Framework
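
A tiny sketch of how such verb-complement probes can be constructed, pairing a premise that embeds a clause under a verb with the bare clause as hypothesis. The templates and example verbs below are placeholders for illustration, not the authors' dataset.

```python
def make_that_pair(subj, verb, clause):
    """Premise 'subj verb that clause.' paired with hypothesis 'Clause.'"""
    return f"{subj} {verb} that {clause}.", clause[0].upper() + clause[1:] + "."

def make_to_pair(subj, verb, vp, vp_past):
    """Premise 'subj verb to vp.' paired with hypothesis 'subj vp_past.'"""
    return f"{subj} {verb} to {vp}.", f"{subj} {vp_past}."

# A veridical verb should license entailment; a non-veridical one should not.
print(make_that_pair("Ann", "knew", "the deal was signed"))    # expect entailment
print(make_that_pair("Ann", "hoped", "the deal was signed"))   # expect neutral
print(make_to_pair("Ann", "managed", "sign the deal", "signed the deal"))
print(make_to_pair("Ann", "wanted", "sign the deal", "signed the deal"))
```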

QUARCH: A New Quasi-Affine Reconstruction Stratum From Vague Relative Camera Orientation Knowledge

Title QUARCH: A New Quasi-Affine Reconstruction Stratum From Vague Relative Camera Orientation Knowledge
Authors Devesh Adlakha, Adlane Habed, Fabio Morbidi, Cedric Demonceaux, Michel de Mathelin
Abstract We present a new quasi-affine reconstruction of a scene and its application to camera self-calibration. We refer to this reconstruction as QUARCH (QUasi-Affine Reconstruction with respect to Camera centers and the Hodographs of horopters). A QUARCH can be obtained by solving a semidefinite programming problem when (i) the images have been captured by a moving camera with constant intrinsic parameters, and (ii) a vague knowledge of the relative orientation (under or over 120 degrees) between camera pairs is available. The resulting reconstruction comes close enough to an affine one, thus allowing an easy upgrade of the QUARCH to its affine and metric counterparts. We also present a constrained Levenberg-Marquardt method for nonlinear optimization subject to Linear Matrix Inequality (LMI) constraints so as to ensure that the QUARCH LMIs are satisfied during optimization. Experiments with synthetic and real data show the benefits of QUARCH in reliably obtaining a metric reconstruction.
Tasks Calibration
Published 2019-10-01
URL http://openaccess.thecvf.com/content_ICCV_2019/html/Adlakha_QUARCH_A_New_Quasi-Affine_Reconstruction_Stratum_From_Vague_Relative_Camera_ICCV_2019_paper.html
PDF http://openaccess.thecvf.com/content_ICCV_2019/papers/Adlakha_QUARCH_A_New_Quasi-Affine_Reconstruction_Stratum_From_Vague_Relative_Camera_ICCV_2019_paper.pdf
PWC https://paperswithcode.com/paper/quarch-a-new-quasi-affine-reconstruction
Repo
Framework
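
The reconstruction is obtained by solving a semidefinite program subject to LMI constraints. As a generic illustration only (the objective and LMIs below are placeholders, not the QUARCH formulation), such problems can be posed with cvxpy as follows.

```python
import cvxpy as cp
import numpy as np

# Toy SDP: minimize trace(C X) over symmetric X subject to LMI constraints.
# The actual QUARCH LMIs encode quasi-affine conditions on camera centers and
# horopter hodographs; C and the constraints here are illustrative only.
n = 3
C = np.eye(n)
X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0,            # positive semidefiniteness (an LMI)
               cp.trace(X) == 1]
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
prob.solve()
print(X.value)
```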

Original Semantics-Oriented Attention and Deep Fusion Network for Sentence Matching

Title Original Semantics-Oriented Attention and Deep Fusion Network for Sentence Matching
Authors Mingtong Liu, Yujie Zhang, Jinan Xu, Yufeng Chen
Abstract Sentence matching is a key issue in natural language inference and paraphrase identification. Despite recent progress on multi-layered neural networks with cross-sentence attention, in such models one sentence attends to the intermediate representations of the other sentence, which are propagated from preceding layers and are therefore uncertain and unstable targets for matching, particularly at risk of error propagation. In this paper, we present an original semantics-oriented attention and deep fusion network (OSOA-DFN) for sentence matching. Unlike existing models, each attention layer of OSOA-DFN is oriented to the original semantic representation of the other sentence, which captures the relevant information from a fixed matching target. The multiple attention layers allow one sentence to repeatedly read the important information of the other sentence for better matching. We additionally design a deep fusion mechanism to propagate the attention information at each matching layer. Finally, we introduce a self-attention mechanism to capture global context and enhance the attention-aware representation within each sentence. Experimental results on three sentence matching benchmark datasets, SNLI, SciTail and Quora, show that OSOA-DFN models sentence matching more precisely.
Tasks Natural Language Inference, Paraphrase Identification
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-1267/
PDF https://www.aclweb.org/anthology/D19-1267
PWC https://paperswithcode.com/paper/original-semantics-oriented-attention-and
Repo
Framework
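
A minimal sketch of the key difference from standard cross-sentence attention, assuming scaled dot-product attention in which keys and values always come from the other sentence's original (fixed) representation rather than its intermediate states; layer sizes and the deep fusion step are omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def original_semantics_attention(h_a, e_b_original):
    """h_a: current representation of sentence A, shape (la, d).
    e_b_original: fixed original representation of sentence B, shape (lb, d).
    Keys/values are B's *original* semantics, so the matching target does not
    drift as layers are stacked."""
    d = h_a.shape[-1]
    scores = h_a @ e_b_original.T / np.sqrt(d)       # (la, lb)
    return softmax(scores, axis=-1) @ e_b_original   # (la, d)

# Stacking several such layers lets A repeatedly re-read B's original semantics.
```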

Temporal Analysis of the Semantic Verbal Fluency Task in Persons with Subjective and Mild Cognitive Impairment

Title Temporal Analysis of the Semantic Verbal Fluency Task in Persons with Subjective and Mild Cognitive Impairment
Authors Nicklas Linz, Kristina Lundholm Fors, Hali Lindsay, Marie Eckerström, Jan Alexandersson, Dimitrios Kokkinakis
Abstract The Semantic Verbal Fluency (SVF) task is a classical neuropsychological assessment where persons are asked to produce words belonging to a semantic category (e.g., animals) in a given time. This paper introduces a novel method of temporal analysis for SVF tasks utilizing time intervals and applies it to a corpus of elderly Swedish subjects (mild cognitive impairment, subjective cognitive impairment and healthy controls). A general decline in word count and lexical frequency over the course of the task is revealed, as well as an increase in word transition times. Persons with subjective cognitive impairment had a higher word count during the last intervals, but produced words of the same lexical frequencies. Persons with MCI had a steeper decline in both word count and lexical frequencies during the third interval. Additional correlations with neuropsychological scores suggest these findings are linked to a person's overall vocabulary size and processing speed, respectively. Classification results improved when adding the novel features (AUC=0.72), supporting their diagnostic value.
Tasks
Published 2019-06-01
URL https://www.aclweb.org/anthology/W19-3012/
PDF https://www.aclweb.org/anthology/W19-3012
PWC https://paperswithcode.com/paper/temporal-analysis-of-the-semantic-verbal
Repo
Framework
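
A small sketch of the interval-based features described above, assuming each SVF response is a list of (timestamp_in_seconds, word) pairs and a lexical-frequency lookup; the interval count and frequency source are illustrative choices, not the paper's exact setup.

```python
from statistics import mean

def interval_features(response, freq, task_len=60.0, n_intervals=3):
    """Word count and mean lexical frequency per time interval,
    plus mean word-to-word transition time."""
    width = task_len / n_intervals
    counts = [0] * n_intervals
    freqs = [[] for _ in range(n_intervals)]
    for t, word in response:
        i = min(int(t // width), n_intervals - 1)
        counts[i] += 1
        freqs[i].append(freq.get(word, 0.0))
    mean_freqs = [mean(f) if f else 0.0 for f in freqs]
    times = [t for t, _ in response]
    transitions = [b - a for a, b in zip(times, times[1:])]
    return counts, mean_freqs, (mean(transitions) if transitions else None)

# usage: interval_features([(1.2, "dog"), (3.5, "cat"), (31.0, "zebra")],
#                          freq={"dog": 5.1, "cat": 5.0, "zebra": 2.3})
```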

Overcoming Catastrophic Forgetting During Domain Adaptation of Neural Machine Translation

Title Overcoming Catastrophic Forgetting During Domain Adaptation of Neural Machine Translation
Authors Brian Thompson, Jeremy Gwinnup, Huda Khayrallah, Kevin Duh, Philipp Koehn
Abstract Continued training is an effective method for domain adaptation in neural machine translation. However, in-domain gains from adaptation come at the expense of general-domain performance. In this work, we interpret the drop in general-domain performance as catastrophic forgetting of general-domain knowledge. To mitigate it, we adapt Elastic Weight Consolidation (EWC), a machine learning method for learning a new task without forgetting previous tasks. Our method retains the majority of general-domain performance lost in continued training without degrading in-domain performance, outperforming the previous state-of-the-art. We also explore the full range of general-domain performance available when some in-domain degradation is acceptable.
Tasks Domain Adaptation, Machine Translation
Published 2019-06-01
URL https://www.aclweb.org/anthology/N19-1209/
PDF https://www.aclweb.org/anthology/N19-1209
PWC https://paperswithcode.com/paper/overcoming-catastrophic-forgetting-during
Repo
Framework
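
A minimal sketch of the EWC penalty in the continued-training setting, assuming diagonal Fisher estimates are computed on general-domain data before in-domain training; hyperparameters and the Fisher estimation procedure are simplified relative to the paper.

```python
import torch

def ewc_penalty(model, fisher, old_params, lam=1.0):
    """Quadratic penalty keeping parameters that matter for the general domain
    (high Fisher value) close to their pre-adaptation values."""
    loss = 0.0
    for name, p in model.named_parameters():
        if name in fisher:
            loss = loss + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * loss

# in the in-domain training loop (nmt_loss is whatever loss the NMT model uses):
# loss = nmt_loss(batch) + ewc_penalty(model, fisher, old_params, lam)
# loss.backward(); optimizer.step()
```

Larger values of lam trade in-domain gains for retained general-domain performance, which is the knob the abstract's last sentence refers to.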

Fine-tune BERT with Sparse Self-Attention Mechanism

Title Fine-tune BERT with Sparse Self-Attention Mechanism
Authors Baiyun Cui, Yingming Li, Ming Chen, Zhongfei Zhang
Abstract In this paper, we develop a novel Sparse Self-Attention Fine-tuning model (referred to as SSAF) which integrates sparsity into the self-attention mechanism to enhance the fine-tuning performance of BERT. In particular, sparsity is introduced into the self-attention by replacing the softmax function with a controllable sparse transformation when fine-tuning BERT. This enables us to learn a structurally sparse attention distribution, which leads to a more interpretable representation for the whole input. The proposed model is evaluated on sentiment analysis, question answering, and natural language inference tasks. Extensive experimental results across multiple datasets demonstrate its effectiveness and superiority to the baseline methods.
Tasks Natural Language Inference, Question Answering, Sentiment Analysis
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-1361/
PDF https://www.aclweb.org/anthology/D19-1361
PWC https://paperswithcode.com/paper/fine-tune-bert-with-sparse-self-attention
Repo
Framework
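
The abstract describes swapping softmax for a controllable sparse transformation inside self-attention. A minimal sketch using sparsemax (Martins and Astudillo, 2016) as the sparse mapping; whether SSAF uses exactly this transformation is an assumption here.

```python
import numpy as np

def sparsemax(z):
    """Project a score vector onto the simplex; many entries become exactly 0."""
    z_sorted = np.sort(z)[::-1]
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = k * z_sorted > cumsum - 1        # k * z_(k) > (sum of top-k) - 1
    k_z = k[support][-1]
    tau = (cumsum[k_z - 1] - 1) / k_z
    return np.maximum(z - tau, 0.0)

print(sparsemax(np.array([2.0, 1.5, 0.1])))    # [0.75, 0.25, 0.0]: last weight exactly 0
# In self-attention, apply this row-wise to QK^T / sqrt(d) in place of softmax.
```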

Task Refinement Learning for Improved Accuracy and Stability of Unsupervised Domain Adaptation

Title Task Refinement Learning for Improved Accuracy and Stability of Unsupervised Domain Adaptation
Authors Yftah Ziser, Roi Reichart
Abstract Pivot Based Language Modeling (PBLM) (Ziser and Reichart, 2018a), combining LSTMs with pivot-based methods, has yielded significant progress in unsupervised domain adaptation. However, this approach is still challenged by the large pivot-detection problem it must solve, and by the inherent instability of LSTMs. In this paper we propose a Task Refinement Learning (TRL) approach to address these problems. Our algorithms iteratively train the PBLM model, gradually increasing the information exposed about each pivot. TRL-PBLM achieves state-of-the-art accuracy in six domain adaptation setups for sentiment classification. Moreover, it is much more stable than plain PBLM across model configurations, making it much better suited for practical use.
Tasks Domain Adaptation, Language Modelling, Sentiment Analysis, Unsupervised Domain Adaptation
Published 2019-07-01
URL https://www.aclweb.org/anthology/P19-1591/
PDF https://www.aclweb.org/anthology/P19-1591
PWC https://paperswithcode.com/paper/task-refinement-learning-for-improved
Repo
Framework

BAD SLAM: Bundle Adjusted Direct RGB-D SLAM

Title BAD SLAM: Bundle Adjusted Direct RGB-D SLAM
Authors Thomas Schops, Torsten Sattler, Marc Pollefeys
Abstract A key component of Simultaneous Localization and Mapping (SLAM) systems is the joint optimization of the estimated 3D map and camera trajectory. Bundle adjustment (BA) is the gold standard for this. Due to the large number of variables in dense RGB-D SLAM, previous work has focused on approximating BA. In contrast, in this paper we present a novel, fast direct BA formulation which we implement in a real-time dense RGB-D SLAM algorithm. In addition, we show that direct RGB-D SLAM systems are highly sensitive to rolling shutter, RGB and depth sensor synchronization, and calibration errors. In order to facilitate state-of-the-art research on direct RGB-D SLAM, we propose a novel, well-calibrated benchmark for this task that uses synchronized global shutter RGB and depth cameras. It includes a training set, a test set without public ground truth, and an online evaluation service. We observe that the ranking of methods changes on this dataset compared to existing ones, and our proposed algorithm outperforms all other evaluated SLAM methods. Our benchmark and our open source SLAM algorithm are available at: www.eth3d.net
Tasks Calibration, Simultaneous Localization and Mapping
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Schops_BAD_SLAM_Bundle_Adjusted_Direct_RGB-D_SLAM_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Schops_BAD_SLAM_Bundle_Adjusted_Direct_RGB-D_SLAM_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/bad-slam-bundle-adjusted-direct-rgb-d-slam
Repo
Framework
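
As a generic illustration of the kind of direct residual minimized inside bundle adjustment (not BAD SLAM's actual surfel-based cost): project a 3D point through a camera pose and compare the intensity stored with the point against the intensity observed in the current image. The pinhole model and variable names below are assumptions for the sketch.

```python
import numpy as np

def bilinear(img, u, v):
    """Bilinearly sample a grayscale image at sub-pixel location (u, v)."""
    x0, y0 = int(np.floor(u)), int(np.floor(v))
    a, b = u - x0, v - y0
    return ((1 - a) * (1 - b) * img[y0, x0]     + a * (1 - b) * img[y0, x0 + 1] +
            (1 - a) * b       * img[y0 + 1, x0] + a * b       * img[y0 + 1, x0 + 1])

def photometric_residual(X_world, T_cw, K, ref_intensity, img_cur):
    """Direct residual for one observation; BA jointly adjusts poses and geometry
    to minimize the sum of squared residuals over all observations."""
    Xc = T_cw[:3, :3] @ X_world + T_cw[:3, 3]   # world -> camera
    u = K[0, 0] * Xc[0] / Xc[2] + K[0, 2]       # pinhole projection
    v = K[1, 1] * Xc[1] / Xc[2] + K[1, 2]
    return bilinear(img_cur, u, v) - ref_intensity
```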

Learning To Solve Circuit-SAT: An Unsupervised Differentiable Approach

Title Learning To Solve Circuit-SAT: An Unsupervised Differentiable Approach
Authors Saeed Amizadeh, Sergiy Matusevych, Markus Weimer
Abstract Recent efforts to combine Representation Learning with Formal Methods, commonly known as the Neuro-Symbolic Methods, have given rise to a new trend of applying rich neural architectures to solve classical combinatorial optimization problems. In this paper, we propose a neural framework that can learn to solve the Circuit Satisfiability problem. Our framework is built upon two fundamental contributions: a rich embedding architecture that encodes the problem structure and an end-to-end differentiable training procedure that mimics Reinforcement Learning and trains the model directly toward solving the SAT problem. The experimental results show the superior out-of-sample generalization performance of our framework compared to the recently developed NeuroSAT method.
Tasks Combinatorial Optimization, Representation Learning
Published 2019-05-01
URL https://openreview.net/forum?id=BJxgz2R9t7
PDF https://openreview.net/pdf?id=BJxgz2R9t7
PWC https://paperswithcode.com/paper/learning-to-solve-circuit-sat-an-unsupervised
Repo
Framework
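
Not the paper's DAG-embedding architecture, but a toy illustration of the "train directly toward solving the SAT problem" idea: relax Boolean gates to smooth functions of values in [0, 1] and follow the gradient of the circuit output with respect to the input assignment. The example circuit is arbitrary.

```python
import torch

# Smooth relaxations of Boolean gates on values in [0, 1].
AND = lambda a, b: a * b
OR  = lambda a, b: a + b - a * b
NOT = lambda a: 1.0 - a

def circuit(x):
    """Example circuit: (x0 OR x1) AND (NOT x0 OR x2) AND (NOT x1 OR NOT x2)."""
    return AND(AND(OR(x[0], x[1]), OR(NOT(x[0]), x[2])), OR(NOT(x[1]), NOT(x[2])))

logits = torch.zeros(3, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    loss = 1.0 - circuit(torch.sigmoid(logits))   # push the circuit output toward 1
    loss.backward()
    opt.step()
print((torch.sigmoid(logits) > 0.5).int())        # a satisfying assignment, if found
```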

Pyramid Recurrent Neural Networks for Multi-Scale Change-Point Detection

Title Pyramid Recurrent Neural Networks for Multi-Scale Change-Point Detection
Authors Zahra Ebrahimzadeh, Min Zheng, Selcuk Karakas, Samantha Kleinberg
Abstract Many real-world time series, such as in activity recognition, finance, or climate science, have changepoints where the system’s structure or parameters change. Detecting changes is important as they may indicate critical events. However, existing methods for changepoint detection face challenges when (1) the patterns of change cannot be modeled using simple and predefined metrics, and (2) changes can occur gradually, at multiple time-scales. To address this, we show how changepoint detection can be treated as a supervised learning problem, and propose a new deep neural network architecture that can efficiently identify both abrupt and gradual changes at multiple scales. Our proposed method, pyramid recurrent neural network (PRNN), is designed to be scale-invariant, by incorporating wavelets and pyramid analysis techniques from multi-scale signal processing. Through experiments on synthetic and real-world datasets, we show that PRNN can detect abrupt and gradual changes with higher accuracy than the state of the art and can extrapolate to detect changepoints at novel timescales that have not been seen in training.
Tasks Activity Recognition, Change Point Detection, Time Series
Published 2019-05-01
URL https://openreview.net/forum?id=HkGTwjCctm
PDF https://openreview.net/pdf?id=HkGTwjCctm
PWC https://paperswithcode.com/paper/pyramid-recurrent-neural-networks-for-multi
Repo
Framework
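
A minimal sketch of the multi-scale pyramid that would feed the recurrent layers, assuming a 1-D signal is repeatedly smoothed and downsampled; the paper's wavelet analysis and specific RNN topology are not reproduced here.

```python
import numpy as np

def build_pyramid(signal, n_levels=4):
    """Progressively coarser views of the signal: each level low-pass filters and
    halves the temporal resolution, so gradual changes look abrupt at coarse levels."""
    levels = [np.asarray(signal, dtype=float)]
    for _ in range(n_levels - 1):
        smoothed = np.convolve(levels[-1], np.ones(3) / 3.0, mode="same")
        levels.append(smoothed[::2])
        if len(levels[-1]) < 2:
            break
    return levels

# Each level would be processed by (shared) recurrent layers whose states are combined
# across scales, which is what makes detection of changes at novel timescales possible.
pyr = build_pyramid(np.sin(np.linspace(0, 20, 512)) + np.linspace(0, 1, 512))
print([len(p) for p in pyr])
```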

Distant Supervised Relation Extraction with Separate Head-Tail CNN

Title Distant Supervised Relation Extraction with Separate Head-Tail CNN
Authors Rui Xing, Jie Luo
Abstract Distant supervised relation extraction is an efficient and effective strategy for finding relations between entities in text. However, it inevitably suffers from a mislabeling problem, and the resulting noisy data hinders performance. In this paper, we propose the Separate Head-Tail Convolutional Neural Network (SHTCNN), a novel neural relation extraction framework to alleviate this issue. In this method, we apply separate convolution and pooling to the head and tail entities respectively to extract better semantic features of sentences, and a coarse-to-fine strategy to filter out instances which do not express actual relations in order to alleviate noisy data issues. Experiments on a widely used dataset show that our model achieves significant and consistent improvements in relation extraction compared to statistical and vanilla CNN-based methods.
Tasks Relation Extraction
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-5533/
PDF https://www.aclweb.org/anthology/D19-5533
PWC https://paperswithcode.com/paper/distant-supervised-relation-extraction-with
Repo
Framework
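
A minimal sketch of the "separate head-tail" idea, assuming the token embeddings are split around the head and tail entity positions and each segment gets its own convolution and max-pooling branch; the split rule, filter sizes, and the coarse-to-fine filtering stage are assumptions for illustration.

```python
import torch
import torch.nn as nn

class SeparateHeadTailEncoder(nn.Module):
    """Toy encoder: distinct conv + max-pool branches for the spans associated with
    the head and tail entities, concatenated into one sentence feature."""
    def __init__(self, emb_dim=50, n_filters=64, kernel=3):
        super().__init__()
        self.conv_head = nn.Conv1d(emb_dim, n_filters, kernel, padding=1)
        self.conv_tail = nn.Conv1d(emb_dim, n_filters, kernel, padding=1)

    def forward(self, emb, head_pos, tail_pos):
        # emb: (seq_len, emb_dim); hypothetical split at the two entity positions
        left, right = sorted((head_pos, tail_pos))
        seg_head = emb[:right].t().unsqueeze(0)   # tokens up to the later entity
        seg_tail = emb[left:].t().unsqueeze(0)    # tokens from the earlier entity on
        h = torch.max(self.conv_head(seg_head), dim=2).values
        t = torch.max(self.conv_tail(seg_tail), dim=2).values
        return torch.cat([h, t], dim=1)           # (1, 2 * n_filters)

enc = SeparateHeadTailEncoder()
print(enc(torch.randn(20, 50), head_pos=4, tail_pos=15).shape)
```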

Spherical Fractal Convolutional Neural Networks for Point Cloud Recognition

Title Spherical Fractal Convolutional Neural Networks for Point Cloud Recognition
Authors Yongming Rao, Jiwen Lu, Jie Zhou
Abstract We present a generic, flexible and 3D rotation invariant framework based on spherical symmetry for point cloud recognition. By introducing a regular icosahedral lattice and its fractals to approximate and discretize the sphere, convolution can be easily implemented to process 3D points. Based on the fractal structure, a hierarchical feature learning framework together with an adaptive sphere projection module is proposed to learn deep features in an end-to-end manner. Our framework not only inherits the strong representation power and generalization capability of convolutional neural networks for image recognition, but also extends CNNs to learn robust features resistant to rotations and perturbations. The proposed model is effective yet robust. A comprehensive experimental study demonstrates that our approach achieves competitive performance compared to state-of-the-art techniques on both 3D object classification and part segmentation tasks, while outperforming other rotation-invariant models by a large margin on rotated 3D object classification and retrieval tasks.
Tasks 3D Object Classification, Object Classification
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Rao_Spherical_Fractal_Convolutional_Neural_Networks_for_Point_Cloud_Recognition_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Rao_Spherical_Fractal_Convolutional_Neural_Networks_for_Point_Cloud_Recognition_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/spherical-fractal-convolutional-neural
Repo
Framework
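
A small sketch of the icosahedral-fractal discretization of the sphere mentioned above: start from a regular icosahedron and repeatedly subdivide each triangle at edge midpoints, projecting new vertices back onto the unit sphere. The network layers built on this lattice are not shown.

```python
import numpy as np

def icosphere(subdivisions=2):
    """Vertices/faces of an icosahedron refined by midpoint subdivision,
    with every vertex projected onto the unit sphere."""
    t = (1.0 + np.sqrt(5.0)) / 2.0
    verts = [(-1, t, 0), (1, t, 0), (-1, -t, 0), (1, -t, 0),
             (0, -1, t), (0, 1, t), (0, -1, -t), (0, 1, -t),
             (t, 0, -1), (t, 0, 1), (-t, 0, -1), (-t, 0, 1)]
    verts = [np.array(v, dtype=float) / np.linalg.norm(v) for v in verts]
    faces = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
             (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
             (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
             (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]
    for _ in range(subdivisions):
        cache, new_faces = {}, []
        def midpoint(i, j):
            key = (min(i, j), max(i, j))
            if key not in cache:
                m = verts[i] + verts[j]
                verts.append(m / np.linalg.norm(m))   # project back onto the sphere
                cache[key] = len(verts) - 1
            return cache[key]
        for a, b, c in faces:
            ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
            new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
        faces = new_faces
    return np.array(verts), np.array(faces)

v, f = icosphere(2)
print(len(v), len(f))   # 162 vertices, 320 faces
```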

An Overview of the Active Gene Annotation Corpus and the BioNLP OST 2019 AGAC Track Tasks

Title An Overview of the Active Gene Annotation Corpus and the BioNLP OST 2019 AGAC Track Tasks
Authors Yuxing Wang, Kaiyin Zhou, Mina Gachloo, Jingbo Xia
Abstract The active gene annotation corpus (AGAC) was developed to support knowledge discovery for drug repurposing. Based on the corpus, the AGAC track of the BioNLP Open Shared Tasks 2019 was organized to facilitate cross-disciplinary collaboration between the BioNLP and Pharmacoinformatics communities for drug repurposing. The AGAC track consists of three subtasks: 1) named entity recognition, 2) thematic relation extraction, and 3) loss of function (LOF) / gain of function (GOF) topic classification. Five teams participated in the AGAC track; their performance is compared and analyzed. The results revealed substantial room for improvement in the design of the task, which we analyze in terms of "imbalanced data", "selective annotation" and "latent topic annotation".
Tasks Named Entity Recognition, Relation Extraction
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-5710/
PDF https://www.aclweb.org/anthology/D19-5710
PWC https://paperswithcode.com/paper/an-overview-of-the-active-gene-annotation
Repo
Framework