October 16, 2019

2450 words 12 mins read

Paper Group NAWR 28

Annotating picture description task responses for content analysis. Type-Sensitive Knowledge Base Inference Without Explicit Type Supervision. PSANet: Point-wise Spatial Attention Network for Scene Parsing. Multi-Scale Deep Compressive Sensing Network. Learning Attentions: Residual Attentional Siamese Network for High Performance Online Visual Trac …

Annotating picture description task responses for content analysis

Title Annotating picture description task responses for content analysis
Authors Levi King, Markus Dickinson
Abstract Given that all users of a language can be creative in their language usage, the overarching goal of this work is to investigate issues of variability and acceptability in written text, for both non-native speakers (NNSs) and native speakers (NSs). We control for meaning by collecting a dataset of picture description task (PDT) responses from a number of NSs and NNSs, and we define and annotate a handful of features pertaining to form and meaning, to capture the multi-dimensional ways in which responses can vary and can be acceptable. By examining the decisions made in this corpus development, we highlight the questions facing anyone working with learner language properties like variability, acceptability and native-likeness. We find reliable inter-annotator agreement, though disagreements point to difficult areas for establishing a link between form and meaning.
Tasks Reading Comprehension
Published 2018-06-01
URL https://www.aclweb.org/anthology/W18-0510/
PDF https://www.aclweb.org/anthology/W18-0510
PWC https://paperswithcode.com/paper/annotating-picture-description-task-responses
Repo https://github.com/sailscorpus/sails
Framework none

Type-Sensitive Knowledge Base Inference Without Explicit Type Supervision

Title Type-Sensitive Knowledge Base Inference Without Explicit Type Supervision
Authors Prachi Jain, Pankaj Kumar, Mausam, Soumen Chakrabarti
Abstract State-of-the-art knowledge base completion (KBC) models predict a score for every known or unknown fact via a latent factorization over entity and relation embeddings. We observe that when they fail, they often make entity predictions that are incompatible with the type required by the relation. In response, we enhance each base factorization with two type-compatibility terms between entity-relation pairs, and combine the signals in a novel manner. Without explicit supervision from a type catalog, our proposed modification obtains up to 7% MRR gains over base models, and new state-of-the-art results on several datasets. Further analysis reveals that our models better represent the latent types of entities and their embeddings also predict supervised types better than the embeddings fitted by baseline models.
Tasks Knowledge Base Completion
Published 2018-07-01
URL https://www.aclweb.org/anthology/P18-2013/
PDF https://www.aclweb.org/anthology/P18-2013
PWC https://paperswithcode.com/paper/type-sensitive-knowledge-base-inference
Repo https://github.com/dair-iitd/kbi
Framework pytorch
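
The type-compatibility idea lends itself to a compact sketch. Below is a minimal PyTorch-style illustration assuming DistMult as the base factorization; the embedding tables, dimensions, and the multiplicative gating are illustrative of the general scheme rather than the exact model in the linked repo.

```python
import torch
import torch.nn as nn

class TypedDistMult(nn.Module):
    """Sketch: DistMult score gated by two learned type-compatibility terms."""
    def __init__(self, n_ent, n_rel, dim=200, type_dim=20):
        super().__init__()
        self.ent = nn.Embedding(n_ent, dim)                  # base entity embeddings
        self.rel = nn.Embedding(n_rel, dim)                  # base relation embeddings
        self.ent_type = nn.Embedding(n_ent, type_dim)        # latent entity "type" vectors
        self.rel_head_type = nn.Embedding(n_rel, type_dim)   # expected head type per relation
        self.rel_tail_type = nn.Embedding(n_rel, type_dim)   # expected tail type per relation

    def forward(self, h, r, t):
        # Base factorization score (DistMult): <e_h, w_r, e_t>
        base = (self.ent(h) * self.rel(r) * self.ent(t)).sum(-1)
        # Type-compatibility gates between each entity and the relation's argument slot
        head_compat = torch.sigmoid((self.ent_type(h) * self.rel_head_type(r)).sum(-1))
        tail_compat = torch.sigmoid((self.ent_type(t) * self.rel_tail_type(r)).sum(-1))
        # A fact only scores high if both type gates agree with the base score
        return base * head_compat * tail_compat
```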

PSANet: Point-wise Spatial Attention Network for Scene Parsing

Title PSANet: Point-wise Spatial Attention Network for Scene Parsing
Authors Hengshuang Zhao, Yi Zhang, Shu Liu, Jianping Shi, Chen Change Loy, Dahua Lin, Jiaya Jia
Abstract We notice that information flow in convolutional neural networks is restricted to local neighborhood regions due to the physical design of convolutional filters, which limits the overall understanding of complex scenes. In this paper, we propose the point-wise spatial attention network (PSANet) to relax the local neighborhood constraint. Each position on the feature map is connected to all the other positions through a self-adaptively learned attention mask. Moreover, bi-directional information propagation is enabled for scene parsing: information at other positions can be collected to help predict the current position, and conversely, information at the current position can be distributed to assist the prediction of other positions. Our proposed approach achieves top performance on various competitive scene parsing datasets, including ADE20K, PASCAL VOC 2012 and Cityscapes, demonstrating its effectiveness and generality.
Tasks Scene Parsing, Semantic Segmentation
Published 2018-09-01
URL http://openaccess.thecvf.com/content_ECCV_2018/html/Hengshuang_Zhao_PSANet_Point-wise_Spatial_ECCV_2018_paper.html
PDF http://openaccess.thecvf.com/content_ECCV_2018/papers/Hengshuang_Zhao_PSANet_Point-wise_Spatial_ECCV_2018_paper.pdf
PWC https://paperswithcode.com/paper/psanet-point-wise-spatial-attention-network
Repo https://github.com/hszhao/PSANet
Framework pytorch
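
Below is a rough PyTorch-style sketch of the "collect" direction of point-wise spatial attention, assuming a fixed feature-map size so a 1x1 convolution can predict one attention weight per source position. PSANet's actual mask generation, normalization, and the bi-directional "distribute" branch differ in detail; see the linked repo for the real implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointwiseSpatialAttention(nn.Module):
    """Sketch of the 'collect' branch: a conv predicts, for every target position,
    an attention map over all source positions, and the output at each position is
    the mask-weighted sum of features from the whole map."""
    def __init__(self, in_ch, feat_h, feat_w, reduced_ch=64):
        super().__init__()
        self.hw = feat_h * feat_w
        self.reduce = nn.Conv2d(in_ch, reduced_ch, 1)
        # One attention weight per source position, predicted at every target position
        self.mask_pred = nn.Conv2d(reduced_ch, self.hw, 1)

    def forward(self, x):
        b, _, h, w = x.shape
        feat = self.reduce(x)                                 # (B, C', H, W)
        mask = self.mask_pred(feat)                           # (B, HW, H, W)
        mask = F.softmax(mask.view(b, self.hw, -1), dim=1)    # normalize over source positions
        v = feat.view(b, -1, self.hw)                         # (B, C', HW)
        out = torch.bmm(v, mask)                              # collect from all positions
        return torch.cat([x, out.view(b, -1, h, w)], dim=1)   # append global context to input
```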

Multi-Scale Deep Compressive Sensing Network

Title Multi-Scale Deep Compressive Sensing Network
Authors Thuong Nguyen Canh, Byeungwoo Jeon
Abstract By jointly learning the sampling and recovery stages, deep learning-based compressive sensing (DCS) has shown significant improvements in performance and running time. Its reconstructed images, however, lose high-frequency content, especially at low subrates, because the learned sampling matrix captures mostly low-frequency information. A similar behaviour occurs in multi-scale sampling schemes, which likewise devote more measurements to low-frequency components. This paper proposes a multi-scale DCS network (MS-DCSNet) based on a convolutional neural network. Firstly, we convert the image signal using a multi-scale wavelet transform. Then, the signal is sampled block by block with convolutions across scales. An initial reconstruction is recovered directly from the multi-scale measurements, and multi-scale wavelet convolution is used to enhance the final reconstruction quality. The network learns to perform both sampling and reconstruction at multiple scales, which results in better reconstruction quality.
Tasks Compressive Sensing
Published 2018-12-12
URL https://arxiv.org/abs/1809.05717
PDF https://arxiv.org/abs/1809.05717
PWC https://paperswithcode.com/paper/multi-scale-deep-compressive-sensing-network-1
Repo https://github.com/AtenaKid/MS-DCSNet-Release
Framework none
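
The sampling side of the multi-scale idea can be illustrated with a small sketch: decompose the image with a wavelet transform and take block-based measurements per subband, allocating a higher subrate to the low-frequency band. The random matrices below stand in for the learned sampling convolutions; the function name, subrates, and block size are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
import pywt

def multiscale_cs_measurements(image, ll_rate=0.5, detail_rate=0.1, block=16):
    """Sketch: block-based CS measurements per wavelet subband, spending more of
    the measurement budget on the low-frequency (LL) band than on detail bands."""
    LL, (LH, HL, HH) = pywt.dwt2(image, 'haar')      # one-level wavelet decomposition
    measurements = {}
    for name, band in [('LL', LL), ('LH', LH), ('HL', HL), ('HH', HH)]:
        rate = ll_rate if name == 'LL' else detail_rate
        m = max(1, int(rate * block * block))        # measurements per block at this subrate
        phi = np.random.randn(m, block * block)      # stand-in for the learned sampling conv
        h, w = band.shape
        blocks = [band[i:i + block, j:j + block].ravel()
                  for i in range(0, h - block + 1, block)
                  for j in range(0, w - block + 1, block)]
        measurements[name] = np.stack([phi @ blk for blk in blocks])
    return measurements
```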

Learning Attentions: Residual Attentional Siamese Network for High Performance Online Visual Tracking

Title Learning Attentions: Residual Attentional Siamese Network for High Performance Online Visual Tracking
Authors Qiang Wang, Zhu Teng, Junliang Xing, Jin Gao, Weiming Hu, Stephen Maybank
Abstract Offline training for object tracking has recently shown great potential in balancing tracking accuracy and speed. However, it is still difficult to adapt an offline trained model to a target tracked online. This work presents a Residual Attentional Siamese Network (RASNet) for high performance object tracking. The RASNet model reformulates the correlation filter within a Siamese tracking framework and introduces several attention mechanisms to adapt the model without updating it online. In particular, by exploiting the offline trained general attention, the target-adapted residual attention, and the channel-favored feature attention, RASNet not only mitigates the over-fitting problem in deep network training, but also enhances its discriminative capacity and adaptability thanks to the separation of representation learning and discriminator learning. The proposed deep architecture is trained end to end and takes full advantage of rich spatio-temporal information to achieve robust visual tracking. Experimental results on two recent benchmarks, OTB-2015 and VOT2017, show that the RASNet tracker achieves state-of-the-art tracking accuracy while running at more than 80 frames per second.
Tasks Object Tracking, Representation Learning, Visual Tracking
Published 2018-06-01
URL http://openaccess.thecvf.com/content_cvpr_2018/html/Wang_Learning_Attentions_Residual_CVPR_2018_paper.html
PDF http://openaccess.thecvf.com/content_cvpr_2018/papers/Wang_Learning_Attentions_Residual_CVPR_2018_paper.pdf
PWC https://paperswithcode.com/paper/learning-attentions-residual-attentional
Repo https://github.com/HaHuangChan/RASNet
Framework pytorch
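
The attention-weighted correlation at the core of RASNet can be sketched as follows. In the paper the general, residual, and channel attentions are produced by dedicated sub-networks; here they are plain learned parameters for brevity, so this is a simplified illustration rather than the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionalSiameseHead(nn.Module):
    """Sketch: weight the exemplar (template) features with spatial and channel
    attention before cross-correlating them with the search-region features."""
    def __init__(self, channels, template_size):
        super().__init__()
        # General spatial attention, shared across targets and learned offline
        self.general_attn = nn.Parameter(torch.ones(1, 1, template_size, template_size))
        # Residual spatial attention, intended to adapt to the specific target
        self.residual_attn = nn.Parameter(torch.zeros(1, 1, template_size, template_size))
        # Channel attention favouring target-relevant feature channels
        self.channel_attn = nn.Parameter(torch.ones(1, channels, 1, 1))

    def forward(self, template_feat, search_feat):
        # template_feat: (1, C, k, k) exemplar features; search_feat: (B, C, H, W)
        spatial = torch.sigmoid(self.general_attn + self.residual_attn)
        weighted = template_feat * spatial * self.channel_attn
        # Cross-correlation: slide the attention-weighted template over the search features
        return F.conv2d(search_feat, weighted)        # (B, 1, H-k+1, W-k+1) response map
```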

Extractive Summarization with SWAP-NET: Sentences and Words from Alternating Pointer Networks

Title Extractive Summarization with SWAP-NET: Sentences and Words from Alternating Pointer Networks
Authors Aishwarya Jadhav, Vaibhav Rajan
Abstract We present a new neural sequence-to-sequence model for extractive summarization called SWAP-NET (Sentences and Words from Alternating Pointer Networks). Extractive summaries, comprising a salient subset of input sentences, often also contain important key words. Guided by this principle, we design SWAP-NET to model the interaction of key words and salient sentences using a new two-level pointer-network-based architecture. SWAP-NET identifies both salient sentences and key words in an input document, and then combines them to form the extractive summary. Experiments on large-scale benchmark corpora demonstrate the efficacy of SWAP-NET, which outperforms state-of-the-art extractive summarizers.
Tasks Abstractive Text Summarization, Document Summarization, Machine Translation, Question Answering
Published 2018-07-01
URL https://www.aclweb.org/anthology/P18-1014/
PDF https://www.aclweb.org/anthology/P18-1014
PWC https://paperswithcode.com/paper/extractive-summarization-with-swap-net
Repo https://github.com/aishj10/swap-net
Framework tf
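
A single pointer step is the building block of SWAP-NET's two-level decoder: at alternating steps the decoder points into the word encodings or the sentence encodings, with each selection conditioning the next. Below is a minimal sketch of one such step using generic additive pointer attention; it is not the authors' exact architecture and all names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointerStep(nn.Module):
    """Sketch of one pointer step: score encoder states against the current decoder
    state and 'point' to the most relevant input unit (a word or a sentence)."""
    def __init__(self, dim):
        super().__init__()
        self.W_enc = nn.Linear(dim, dim, bias=False)
        self.W_dec = nn.Linear(dim, dim, bias=False)
        self.v = nn.Linear(dim, 1, bias=False)

    def forward(self, enc_states, dec_state):
        # enc_states: (N, dim) word or sentence encodings; dec_state: (dim,)
        scores = self.v(torch.tanh(self.W_enc(enc_states) + self.W_dec(dec_state)))
        return F.softmax(scores.squeeze(-1), dim=0)   # pointer distribution over input units
```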

Hierarchical Relation Extraction with Coarse-to-Fine Grained Attention

Title Hierarchical Relation Extraction with Coarse-to-Fine Grained Attention
Authors Xu Han, Pengfei Yu, Zhiyuan Liu, Maosong Sun, Peng Li
Abstract Distantly supervised relation extraction employs existing knowledge graphs to automatically collect training data. While distant supervision is effective to scale relation extraction up to large-scale corpora, it inevitably suffers from the wrong labeling problem. Many efforts have been devoted to identifying valid instances from noisy data. However, most existing methods handle each relation in isolation, regardless of rich semantic correlations located in relation hierarchies. In this paper, we aim to incorporate the hierarchical information of relations for distantly supervised relation extraction and propose a novel hierarchical attention scheme. The multiple layers of our hierarchical attention scheme provide coarse-to-fine granularity to better identify valid instances, which is especially effective for extracting those long-tail relations. The experimental results on a large-scale benchmark dataset demonstrate that our models are capable of modeling the hierarchical information of relations and significantly outperform other baselines. The source code of this paper can be obtained from https://github.com/thunlp/HNRE.
Tasks Knowledge Graphs, Relation Extraction
Published 2018-10-01
URL https://www.aclweb.org/anthology/D18-1247/
PDF https://www.aclweb.org/anthology/D18-1247
PWC https://paperswithcode.com/paper/hierarchical-relation-extraction-with-coarse
Repo https://github.com/thunlp/HNRE
Framework tf
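
The coarse-to-fine scheme can be pictured as one attention pass over the sentence bag per layer of the relation hierarchy. The sketch below assumes sentence encodings are already computed and uses relation-specific query embeddings; layer counts, names, and the concatenation are illustrative rather than the exact model in the repo.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalBagAttention(nn.Module):
    """Sketch: attend over the sentences of a bag once per layer of the relation
    hierarchy (coarse relation, ..., fine relation) and concatenate the results."""
    def __init__(self, sent_dim, relations_per_layer):
        super().__init__()
        # One query embedding table per hierarchy layer (coarse -> fine)
        self.queries = nn.ModuleList(
            [nn.Embedding(n, sent_dim) for n in relations_per_layer])

    def forward(self, sent_reprs, relation_path):
        # sent_reprs: (num_sentences, sent_dim) encodings of one entity-pair bag
        # relation_path: list of relation ids, one per hierarchy layer
        bag_parts = []
        for layer, rel_id in enumerate(relation_path):
            q = self.queries[layer](torch.tensor(rel_id))     # (sent_dim,)
            scores = F.softmax(sent_reprs @ q, dim=0)         # attention over sentences
            bag_parts.append(scores @ sent_reprs)             # weighted bag representation
        return torch.cat(bag_parts)                           # coarse-to-fine concatenation
```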

Moiré Pattern Detection using Wavelet Decomposition and Convolutional Neural Network

Title Moiré Pattern Detection using Wavelet Decomposition and Convolutional Neural Network
Authors Eldho Abraham
Abstract Moiré patterns are interference patterns produced by the overlap of digital grids, such as those of a display and the camera sensor, resulting in high-frequency noise in the image. This paper proposes a new method to detect Moiré patterns in images captured from a computer screen, using wavelet decomposition and a multi-input deep Convolutional Neural Network (CNN). It also proposes using the normalized intensity values of the image as weights for the frequency strength of the Moiré pattern. The resulting CNN model is robust to high background frequencies other than those of Moiré patterns, as it is trained on images captured under diverse scenarios. We have tested this model in a receipt-scanning application, detecting the Moiré patterns produced in images captured from a computer screen, and achieved an accuracy of 98.4%.
Tasks
Published 2018-11-18
URL https://ieeexplore.ieee.org/document/8628746
PDF https://ieeexplore.ieee.org/document/8628746
PWC https://paperswithcode.com/paper/moiree-pattern-detection-using-wavelet
Repo https://github.com/AmadeusITGroup/Moire-Pattern-Detection
Framework tf
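
A minimal sketch of the multi-input idea: decompose the image into wavelet bands (e.g., with pywt.dwt2, as in the sampling sketch above) and give each band its own convolutional branch before a joint classifier. The branch depths and the handling of the intensity-weighted band are placeholder assumptions, not the paper's exact network.

```python
import torch
import torch.nn as nn

class MultiBandMoireCNN(nn.Module):
    """Sketch: a multi-input CNN where each wavelet band (LL, LH, HL, HH) gets its
    own convolutional branch; branch outputs are merged for binary classification."""
    def __init__(self):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.branches = nn.ModuleList([branch() for _ in range(4)])
        self.classifier = nn.Linear(4 * 32, 2)     # moire vs. clean

    def forward(self, bands):
        # bands: list of four (B, 1, H, W) tensors, e.g. the intensity-weighted LL band
        # and the three high-frequency detail bands from a wavelet decomposition
        feats = [b(x) for b, x in zip(self.branches, bands)]
        return self.classifier(torch.cat(feats, dim=1))
```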

Parallel Corpora for the Biomedical Domain

Title Parallel Corpora for the Biomedical Domain
Authors Aurélie Névéol, Antonio Jimeno Yepes, Mariana Neves, Karin Verspoor
Abstract
Tasks Information Retrieval, Machine Translation
Published 2018-05-01
URL https://www.aclweb.org/anthology/L18-1043/
PDF https://www.aclweb.org/anthology/L18-1043
PWC https://paperswithcode.com/paper/parallel-corpora-for-the-biomedical-domain
Repo https://github.com/biomedical-translation-corpora/corpora
Framework none

Adapting Serious Game for Fallacious Argumentation to German: Pitfalls, Insights, and Best Practices

Title Adapting Serious Game for Fallacious Argumentation to German: Pitfalls, Insights, and Best Practices
Authors Ivan Habernal, Patrick Pauli, Iryna Gurevych
Abstract
Tasks Argument Mining
Published 2018-05-01
URL https://www.aclweb.org/anthology/L18-1526/
PDF https://www.aclweb.org/anthology/L18-1526
PWC https://paperswithcode.com/paper/adapting-serious-game-for-fallacious
Repo https://github.com/UKPLab/argotario
Framework none

Robust Discovery of Positive and Negative Rules in Knowledge-Bases

Title Robust Discovery of Positive and Negative Rules in Knowledge-Bases
Authors Stefano Ortona, Venkata Vamsikrishna Meduri, Paolo Papotti
Abstract We present RUDIK, a system for the discovery of declarative rules over knowledge-bases (KBs). RUDIK discovers rules that express positive relationships between entities, such as “if two persons have the same parent, they are siblings”, and negative rules, i.e., patterns that identify contradictions in the data, such as “if two persons are married, one cannot be the child of the other”. While the former class infers new facts in the KB, the latter class is crucial for other tasks, such as detecting erroneous triples in data cleaning, or creating negative examples to bootstrap learning algorithms. The system is designed to: (i) enlarge the expressive power of the rule language to obtain complex rules and wide coverage of the facts in the KB, (ii) discover approximate rules (soft constraints) to be robust to errors and incompleteness in the KB, and (iii) use disk-based algorithms, effectively enabling rule mining on commodity machines. In contrast with traditional ranking of all rules based on a measure of support, we propose an approach to identify the subset of useful rules to be exposed to the user. We model the mining process as an incremental graph exploration problem and prove that our search strategy has guarantees on the optimality of the results. We have conducted extensive experiments using real-world KBs to show that RUDIK outperforms previous proposals in terms of efficiency and that it discovers more effective rules for the application at hand.
Tasks Knowledge Graphs, Knowledge Graphs Data Curation
Published 2018-04-16
URL http://www.eurecom.fr/fr/publication/5469/detail/robust-discovery-of-positive-and-negative-rules-in-knowledge-bases-1
PDF https://www.dropbox.com/s/4hgcli75ccqe20t/Rudik_CR_ICDE.pdf?dl=0
PWC https://paperswithcode.com/paper/robust-discovery-of-positive-and-negative
Repo https://github.com/stefano-ortona/rudik
Framework none
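
To make the two rule classes concrete, here is a tiny sketch that applies one hand-written positive rule and one negative rule to a set of (subject, predicate, object) triples. RUDIK's contribution is discovering such rules automatically and approximately, which this sketch does not attempt; predicate names are illustrative.

```python
def apply_rules(kb):
    """Sketch: a positive rule infers new facts, a negative rule flags contradictions.
    The rules are hard-coded examples of the kind RUDIK discovers automatically."""
    triples = set(kb)
    # Positive rule: hasParent(x, z) ^ hasParent(y, z) -> sibling(x, y)
    children_of = {}
    for s, p, o in triples:
        if p == 'hasParent':
            children_of.setdefault(o, set()).add(s)
    inferred = {(x, 'sibling', y)
                for kids in children_of.values() for x in kids for y in kids if x != y}
    # Negative rule: spouse(x, y) -> NOT hasParent(x, y)  (flag contradictions in the KB)
    violations = {(x, y) for (x, p, y) in triples
                  if p == 'spouse' and (x, 'hasParent', y) in triples}
    return inferred, violations
```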

HyTE: Hyperplane-based Temporally aware Knowledge Graph Embedding

Title HyTE: Hyperplane-based Temporally aware Knowledge Graph Embedding
Authors Shib Sankar Dasgupta, Swayambhu Nath Ray, Partha Talukdar
Abstract Knowledge Graph (KG) embedding has emerged as an active area of research resulting in the development of several KG embedding methods. Relational facts in KG often show temporal dynamics, e.g., the fact (Cristiano_Ronaldo, playsFor, Manchester_United) is valid only from 2003 to 2009. Most of the existing KG embedding methods ignore this temporal dimension while learning embeddings of the KG elements. In this paper, we propose HyTE, a temporally aware KG embedding method which explicitly incorporates time in the entity-relation space by associating each timestamp with a corresponding hyperplane. HyTE not only performs KG inference using temporal guidance, but also predicts temporal scopes for relational facts with missing time annotations. Through extensive experimentation on temporal datasets extracted from real-world KGs, we demonstrate the effectiveness of our model over both traditional as well as temporal KG embedding methods.
Tasks Graph Embedding, Information Retrieval, Knowledge Graph Embedding, Knowledge Graphs, Question Answering, Representation Learning
Published 2018-10-01
URL https://www.aclweb.org/anthology/D18-1225/
PDF https://www.aclweb.org/anthology/D18-1225
PWC https://paperswithcode.com/paper/hyte-hyperplane-based-temporally-aware
Repo https://github.com/malllabiisc/HyTE
Framework tf
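
The projection-then-translate scoring is easy to sketch. Below is a minimal PyTorch-style version, assuming discretized timestamps, unit-normalized hyperplane normals, and an L1 TransE-style distance; the training loss and negative sampling are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyTEScore(nn.Module):
    """Sketch of the HyTE scoring idea: project entity and relation embeddings onto
    a timestamp-specific hyperplane, then apply a TransE-style translation score."""
    def __init__(self, n_ent, n_rel, n_time, dim=100):
        super().__init__()
        self.ent = nn.Embedding(n_ent, dim)
        self.rel = nn.Embedding(n_rel, dim)
        self.time_normal = nn.Embedding(n_time, dim)   # one hyperplane normal per timestamp

    def project(self, e, w):
        # e_perp = e - (w . e) w, with the normal w normalized to unit length
        w = F.normalize(w, dim=-1)
        return e - (e * w).sum(-1, keepdim=True) * w

    def forward(self, h, r, t, tau):
        w = self.time_normal(tau)
        h_p = self.project(self.ent(h), w)
        r_p = self.project(self.rel(r), w)
        t_p = self.project(self.ent(t), w)
        # Lower is better: a fact valid at time tau should satisfy h_p + r_p ≈ t_p
        return torch.norm(h_p + r_p - t_p, p=1, dim=-1)
```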

Put It Back: Entity Typing with Language Model Enhancement

Title Put It Back: Entity Typing with Language Model Enhancement
Authors Ji Xin, Hao Zhu, Xu Han, Zhiyuan Liu, Maosong Sun
Abstract Entity typing aims to classify semantic types of an entity mention in a specific context. Most existing models obtain training data using distant supervision, and inevitably suffer from the problem of noisy labels. To address this issue, we propose entity typing with language model enhancement. It utilizes a language model to measure the compatibility between context sentences and labels, and thereby automatically focuses more on context-dependent labels. Experiments on benchmark datasets demonstrate that our method is capable of enhancing the entity typing model with information from the language model, and significantly outperforms the state-of-the-art baseline. Code and data for this paper can be found at https://github.com/thunlp/LME.
Tasks Entity Linking, Entity Typing, Language Modelling, Question Answering, Relation Extraction
Published 2018-10-01
URL https://www.aclweb.org/anthology/D18-1121/
PDF https://www.aclweb.org/anthology/D18-1121
PWC https://paperswithcode.com/paper/put-it-back-entity-typing-with-language-model
Repo https://github.com/thunlp/LME
Framework none
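
One way to picture the enhancement is as a blend between the typing model's label distribution and a language-model compatibility score obtained by putting each candidate type back into the mention slot. The sketch below only illustrates that combination; the paper uses the LM signal to down-weight noisy distantly supervised labels during training, and its exact formulation differs, so all names and the mixing weight are assumptions.

```python
import torch
import torch.nn.functional as F

def lme_combine(type_logits, lm_compatibility, alpha=0.5):
    """Sketch: blend an entity-typing model's label scores with a language-model
    compatibility score from substituting each candidate type for the mention
    in the context sentence ('put it back')."""
    # type_logits:      (num_labels,) scores from the base typing model
    # lm_compatibility: (num_labels,) LM log-probabilities of the context with each
    #                   type label put back in place of the entity mention
    p_type = F.softmax(type_logits, dim=-1)
    p_lm = F.softmax(lm_compatibility, dim=-1)
    return alpha * p_type + (1 - alpha) * p_lm   # context-dependent labels get boosted
```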

Disambiguation of Verbal Shifters

Title Disambiguation of Verbal Shifters
Authors Michael Wiegand, Sylvette Loda, Josef Ruppenhofer
Abstract
Tasks Natural Language Inference, Relation Extraction, Sentiment Analysis, Word Sense Disambiguation
Published 2018-05-01
URL https://www.aclweb.org/anthology/papers/L18-1097/l18-1097
PDF https://www.aclweb.org/anthology/L18-1097
PWC https://paperswithcode.com/paper/disambiguation-of-verbal-shifters
Repo https://github.com/miwieg/lrec2018
Framework none

Learning to Promote Saliency Detectors

Title Learning to Promote Saliency Detectors
Authors Yu Zeng, Huchuan Lu, Lihe Zhang, Mengyang Feng, Ali Borji
Abstract The categories and appearance of salient objects vary from image to image; therefore, saliency detection is an image-specific task. Due to the lack of large-scale saliency training data, it is difficult for pre-trained deep neural networks (DNNs) to precisely capture image-specific saliency cues. To solve this issue, we formulate a zero-shot learning problem to promote existing saliency detectors. Concretely, a DNN is trained as an embedding function to map pixels and the attributes of the salient/background regions of an image into the same metric space, in which an image-specific classifier is learned to classify the pixels. Since the image-specific task is performed by the classifier, the DNN embedding effectively plays the role of a general feature extractor. Compared with transferring the learning to a new recognition task using limited data, this formulation makes the DNN learn more effectively from small data. Extensive experiments on five datasets show that our method significantly improves the accuracy of existing methods and compares favorably against state-of-the-art approaches.
Tasks Saliency Detection, Zero-Shot Learning
Published 2018-06-01
URL http://openaccess.thecvf.com/content_cvpr_2018/html/Zeng_Learning_to_Promote_CVPR_2018_paper.html
PDF http://openaccess.thecvf.com/content_cvpr_2018/papers/Zeng_Learning_to_Promote_CVPR_2018_paper.pdf
PWC https://paperswithcode.com/paper/learning-to-promote-saliency-detectors
Repo https://github.com/zengxianyu/lps
Framework pytorch
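
The image-specific classification step can be pictured as nearest-anchor assignment in the learned embedding space: build foreground and background anchors from an existing detector's saliency map, then relabel each pixel by embedding distance. The thresholds and the simple mean-anchor construction below are illustrative assumptions, not the paper's exact classifier.

```python
import torch

def promote_saliency(pixel_emb, prior_map, thresh_fg=0.8, thresh_bg=0.2):
    """Sketch: build image-specific foreground/background anchors from a prior
    saliency map, then re-classify every pixel by distance in embedding space.
    Assumes the prior map contains confident foreground and background pixels."""
    # pixel_emb: (H*W, D) embeddings produced by the (assumed) trained network
    # prior_map: (H*W,) saliency scores in [0, 1] from an existing detector
    fg = pixel_emb[prior_map > thresh_fg].mean(dim=0)   # foreground anchor
    bg = pixel_emb[prior_map < thresh_bg].mean(dim=0)   # background anchor
    d_fg = torch.norm(pixel_emb - fg, dim=1)
    d_bg = torch.norm(pixel_emb - bg, dim=1)
    # A pixel is labeled salient if it lies closer to the foreground anchor
    return (d_fg < d_bg).float()
```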