January 24, 2020

2945 words 14 mins read

Paper Group NANR 238

Label-PEnet: Sequential Label Propagation and Enhancement Networks for Weakly Supervised Instance Segmentation. On Efficient Retrieval of Top Similarity Vectors. Polar Prototype Networks. Evaluating Automatic Term Extraction Methods on Individual Documents. Numbers Normalisation in the Inflected Languages: a Case Study of Polish. Riemannian TransE: …

Label-PEnet: Sequential Label Propagation and Enhancement Networks for Weakly Supervised Instance Segmentation

Title Label-PEnet: Sequential Label Propagation and Enhancement Networks for Weakly Supervised Instance Segmentation
Authors Weifeng Ge, Sheng Guo, Weilin Huang, Matthew R. Scott
Abstract Weakly-supervised instance segmentation aims to detect and segment object instances precisely, given image-level labels only. Unlike previous methods, which are composed of multiple offline stages, we propose Sequential Label Propagation and Enhancement Networks (referred to as Label-PEnet) that progressively transform image-level labels into pixel-wise labels in a coarse-to-fine manner. We design four cascaded modules, including multi-label classification, object detection, instance refinement and instance segmentation, which are implemented sequentially by sharing the same backbone. The cascaded pipeline is trained alternately with a curriculum learning strategy that generalizes labels from high-level images to low-level pixels gradually, with increasing accuracy. In addition, we design a proposal calibration module to explore the ability of classification networks to find key pixels that identify object parts, which serves as a post-validation strategy running in the inverse order. We evaluate the efficiency of our Label-PEnet in mining instance masks on standard benchmarks: PASCAL VOC 2007 and 2012. Experimental results show that Label-PEnet outperforms the state-of-the-art algorithms by a clear margin, and obtains performance comparable even to fully supervised approaches.
Tasks Calibration, Instance Segmentation, Multi-Label Classification, Object Detection, Semantic Segmentation, Weakly-supervised instance segmentation
Published 2019-10-01
URL http://openaccess.thecvf.com/content_ICCV_2019/html/Ge_Label-PEnet_Sequential_Label_Propagation_and_Enhancement_Networks_for_Weakly_Supervised_ICCV_2019_paper.html
PDF http://openaccess.thecvf.com/content_ICCV_2019/papers/Ge_Label-PEnet_Sequential_Label_Propagation_and_Enhancement_Networks_for_Weakly_Supervised_ICCV_2019_paper.pdf
PWC https://paperswithcode.com/paper/label-penet-sequential-label-propagation-and-1
Repo
Framework
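
The following is a minimal sketch (not the authors' code) of the general structure described in the abstract: a shared backbone feeding four cascaded heads, trained alternately in a curriculum that starts from image-level classification. The backbone, head internals and loss schedule are placeholder assumptions; Label-PEnet's real modules, proposal calibration and pseudo-label propagation are far more elaborate.

```python
# Sketch: shared backbone + four cascaded heads, trained alternately.
import torch
import torch.nn as nn

class CascadedWeakSegNet(nn.Module):
    def __init__(self, num_classes=20, feat_dim=256):
        super().__init__()
        # Shared backbone (stand-in for a real CNN such as VGG/ResNet).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(7),
        )
        # Cascade: classification -> detection -> refinement -> segmentation.
        self.cls_head = nn.Linear(feat_dim * 7 * 7, num_classes)
        self.det_head = nn.Linear(feat_dim * 7 * 7, num_classes * 4)  # toy box regressor
        self.ref_head = nn.Linear(feat_dim * 7 * 7, num_classes)      # instance refinement scores
        self.seg_head = nn.Conv2d(feat_dim, num_classes, 1)           # coarse pixel-wise logits

    def forward(self, x):
        feat = self.backbone(x)
        flat = feat.flatten(1)
        return {
            "cls": self.cls_head(flat),
            "det": self.det_head(flat),
            "ref": self.ref_head(flat),
            "seg": self.seg_head(feat),
        }

model = CascadedWeakSegNet()
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
images = torch.randn(2, 3, 64, 64)
image_labels = torch.randint(0, 2, (2, 20)).float()   # image-level labels only
for step in range(4):
    out = model(images)
    # Early curriculum steps train only on image-level labels; later steps would add
    # detection/refinement/segmentation losses driven by propagated pseudo-labels.
    loss = nn.functional.binary_cross_entropy_with_logits(out["cls"], image_labels)
    opt.zero_grad(); loss.backward(); opt.step()
```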

On Efficient Retrieval of Top Similarity Vectors

Title On Efficient Retrieval of Top Similarity Vectors
Authors Shulong Tan, Zhixin Zhou, Zhaozhuo Xu, Ping Li
Abstract Retrieval of relevant vectors produced by representation learning critically influences the efficiency of natural language processing (NLP) tasks. In this paper, we demonstrate an efficient method for searching vectors via a typical non-metric matching function: the inner product. Our method, which constructs an approximate Inner Product Delaunay Graph (IPDG) for top-1 Maximum Inner Product Search (MIPS), transforms retrieval of the most suitable latent vectors into a graph search problem with substantial efficiency benefits. Experiments on data representations learned for different machine learning tasks confirm the superior effectiveness and efficiency of the proposed IPDG.
Tasks Representation Learning
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-1527/
PDF https://www.aclweb.org/anthology/D19-1527
PWC https://paperswithcode.com/paper/on-efficient-retrieval-of-top-similarity
Repo
Framework
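
As a rough illustration of the "graph search for top-1 MIPS" idea, here is a greedy walk that always moves to the neighbour with the largest inner product with the query. The graph below is a toy random neighbourhood structure; constructing the approximate Inner Product Delaunay Graph itself is the paper's contribution and is not reproduced here.

```python
import numpy as np

def greedy_mips(query, vectors, neighbors, start=0, max_steps=100):
    """Walk the graph, moving to the neighbour with the largest inner product
    with the query, until no neighbour improves on the current node."""
    current = start
    best = float(vectors[current] @ query)
    for _ in range(max_steps):
        improved = False
        for nb in neighbors[current]:
            score = float(vectors[nb] @ query)
            if score > best:
                best, current, improved = score, nb, True
        if not improved:
            break
    return current, best

rng = np.random.default_rng(0)
vectors = rng.normal(size=(1000, 32))
# Toy graph: each node linked to 10 random nodes (a real index would use IPDG edges).
neighbors = {i: rng.choice(1000, size=10, replace=False).tolist() for i in range(1000)}
query = rng.normal(size=32)
node, score = greedy_mips(query, vectors, neighbors, start=0)
print(node, score, "exact:", int(np.argmax(vectors @ query)))
```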

Polar Prototype Networks

Title Polar Prototype Networks
Authors Pascal Mettes, Elise van der Pol, Cees G. M. Snoek
Abstract This paper proposes a neural network for classification and regression, without the need to learn layout structures in the output space. Standard solutions such as softmax cross-entropy and mean squared error are effective but parametric, meaning that known inductive structures such as maximum margin separation and simplicity (Occam’s Razor) need to be learned for the task at hand. Instead, we propose polar prototype networks, a class of networks that explicitly states the structure, i.e. the layout, of the output. The structure is defined by polar prototypes, points on the hypersphere of the output space. For classification, each class is described by a single polar prototype, and the prototypes are a priori distributed with maximal separation and equal shares on the hypersphere. Classes are assigned to prototypes randomly or based on semantic priors, and training becomes a matter of minimizing angular distances between examples and their class prototypes. For regression, we show that training can be performed as a polar interpolation between two prototypes, arriving at a regression with higher-dimensional outputs. From empirical analysis, we find that polar prototype networks benefit from large margin separation and semantic class structure, while requiring only a minimal number of output dimensions. While the structure is simple, the performance is on par with (classification) or better than (regression) standard network methods. Moreover, we show that we gain the ability to perform regression and classification jointly in the same space, which is disentangled and interpretable by design.
Tasks
Published 2019-05-01
URL https://openreview.net/forum?id=Syx4_iCqKQ
PDF https://openreview.net/pdf?id=Syx4_iCqKQ
PWC https://paperswithcode.com/paper/polar-prototype-networks
Repo
Framework
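
A small sketch of the classification side of this idea: fix class prototypes on the output hypersphere with maximal separation (exact in the 2-D case below, where they are simply equally spaced angles on the circle), then classify by angular proximity and train by minimizing the angular distance to the true-class prototype. This is an illustration of the principle, not the paper's construction or code.

```python
import numpy as np

num_classes = 4
# In 2-D, maximally separated unit prototypes are equally spaced angles on the circle.
angles = 2 * np.pi * np.arange(num_classes) / num_classes
prototypes = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # (C, 2), unit norm

def cosine_to_prototypes(outputs):
    normed = outputs / np.linalg.norm(outputs, axis=1, keepdims=True)
    return normed @ prototypes.T                                   # (N, C)

def prototype_loss(outputs, labels):
    # Angular-style loss: 1 - cos(output, prototype of the true class).
    cos = cosine_to_prototypes(outputs)
    return float(np.mean(1.0 - cos[np.arange(len(labels)), labels]))

def predict(outputs):
    return np.argmax(cosine_to_prototypes(outputs), axis=1)

# Toy check: outputs pointing near their prototypes give low loss and correct labels.
outputs = prototypes[[0, 1, 2, 3]] * 3.0 + 0.1   # arbitrary scale; only direction matters
labels = np.array([0, 1, 2, 3])
print(prototype_loss(outputs, labels), predict(outputs))
```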

Evaluating Automatic Term Extraction Methods on Individual Documents

Title Evaluating Automatic Term Extraction Methods on Individual Documents
Authors Antonio Šajatović, Maja Buljan, Jan Šnajder, Bojana Dalbelo Bašić
Abstract Automatic Term Extraction (ATE) extracts terminology from domain-specific corpora. ATE is used in many NLP tasks, including Computer Assisted Translation, where it is typically applied to individual documents rather than the entire corpus. While corpus-level ATE has been extensively evaluated, it is not obvious how the results transfer to document-level ATE. To fill this gap, we evaluate 16 state-of-the-art ATE methods on full-length documents from three different domains, at both the corpus and document levels. Unlike existing studies, our evaluation is more realistic as we take into account all gold terms. We show that no single method is best in corpus-level ATE, but C-Value and KeyConceptRelatedness surpass others in document-level ATE.
Tasks
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-5118/
PDF https://www.aclweb.org/anthology/W19-5118
PWC https://paperswithcode.com/paper/evaluating-automatic-term-extraction-methods
Repo
Framework
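
Since the abstract singles out C-Value, here is a small sketch of C-Value scoring in its standard formulation (following Frantzi et al.): a candidate term is weighted by its length and its frequency, discounted by the frequency of longer candidates that contain it. The +0.1 length offset and the toy frequencies are assumptions for illustration; this is not the paper's evaluation code.

```python
import math
from collections import Counter

def c_value(candidate_freqs):
    """candidate_freqs: dict mapping a candidate term (tuple of words) to its
    corpus frequency. Returns a dict of C-Value scores."""
    scores = {}
    for a, freq_a in candidate_freqs.items():
        # Longer candidates that contain `a` as a contiguous sub-sequence.
        containers = [b for b in candidate_freqs
                      if len(b) > len(a)
                      and any(b[i:i + len(a)] == a for i in range(len(b) - len(a) + 1))]
        length_weight = math.log2(len(a) + 0.1)   # +0.1 keeps single-word terms scorable
        if not containers:
            scores[a] = length_weight * freq_a
        else:
            nested = sum(candidate_freqs[b] for b in containers) / len(containers)
            scores[a] = length_weight * (freq_a - nested)
    return scores

freqs = Counter({
    ("automatic", "term", "extraction"): 5,
    ("term", "extraction"): 12,
    ("extraction",): 30,
})
for term, score in sorted(c_value(freqs).items(), key=lambda kv: -kv[1]):
    print(" ".join(term), round(score, 2))
```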

Numbers Normalisation in the Inflected Languages: a Case Study of Polish

Title Numbers Normalisation in the Inflected Languages: a Case Study of Polish
Authors Rafał Poświata, Michał Perełkiewicz
Abstract Text normalisation in Text-to-Speech systems is the process of converting written expressions to their spoken forms. This task is complicated because in many cases the normalised form depends on the context. Furthermore, for languages like Croatian, Lithuanian, Polish, Russian or Slovak there is an additional difficulty related to their inflected nature. In this paper we show how to deal with this problem for one of these languages, Polish, without a large dedicated data set, using solutions prepared for other NLP tasks. We limit our study to number expressions, which are the most common non-standard words to normalise. The proposed solution is a combination of a morphological tagger and a transducer, supported by a dictionary of numbers in their spoken forms. The data set used for evaluation is based on part of the 1-million-word subset of the National Corpus of Polish. The accuracy of the described approach is reported together with a comparison to a simple baseline and two commercial systems: Google Cloud Text-to-Speech and Amazon Polly.
Tasks
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-3703/
PDF https://www.aclweb.org/anthology/W19-3703
PWC https://paperswithcode.com/paper/numbers-normalisation-in-the-inflected
Repo
Framework
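
A heavily simplified sketch of the tagger-plus-dictionary idea: a morphological analysis of the context (reduced here to a grammatical case that we assume an upstream tagger has already produced) selects the inflected spoken form of a digit string from a dictionary. The two entries below are a tiny illustrative sample, not the paper's resource, and the real system also uses a transducer.

```python
# Illustrative dictionary: (digit string, grammatical case) -> spoken form.
SPOKEN_FORMS = {
    ("2", "nom"): "dwa",     # nominative
    ("2", "gen"): "dwóch",   # genitive
}

def normalise_number(token, case, spoken_forms=SPOKEN_FORMS):
    """Replace a digit token by its spoken form for the given grammatical case,
    falling back to the original token when the dictionary has no entry."""
    return spoken_forms.get((token, case), token)

# "2 kotów" in a genitive context -> "dwóch kotów"
print(normalise_number("2", "gen"), "kotów")
```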

Riemannian TransE: Multi-relational Graph Embedding in Non-Euclidean Space

Title Riemannian TransE: Multi-relational Graph Embedding in Non-Euclidean Space
Authors Atsushi Suzuki, Yosuke Enokida, Kenji Yamanishi
Abstract Multi-relational graph embedding, which aims at achieving effective representations with a reduced number of low-dimensional parameters, has been widely used in knowledge base completion. Although knowledge base data usually contain tree-like or cyclic structure, none of the existing approaches can embed these data into a space whose geometry is in line with that structure. To overcome this problem, a novel framework called Riemannian TransE is proposed in this paper to embed the entities in a Riemannian manifold. Riemannian TransE models each relation as a move to a point and defines a specific, novel dissimilarity for each relation, so that all the relations are naturally embedded in correspondence with the structure of the data. Experiments on several knowledge base completion tasks show that, given an appropriate choice of manifold, Riemannian TransE achieves good performance even with significantly fewer parameters.
Tasks Graph Embedding, Knowledge Base Completion
Published 2019-05-01
URL https://openreview.net/forum?id=r1xRW3A9YX
PDF https://openreview.net/pdf?id=r1xRW3A9YX
PWC https://paperswithcode.com/paper/riemannian-transe-multi-relational-graph
Repo
Framework
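
To make the "TransE-style scoring in a non-Euclidean space" idea concrete, here is a simplified sketch that places entities in a Poincaré ball (one possible Riemannian manifold), treats a relation as a translation vector, and scores a triple by the hyperbolic distance after the move. The projection step and the additive move are simplifying assumptions for illustration; the paper's relation-specific moves and dissimilarities are defined differently.

```python
import numpy as np

def project_to_ball(x, eps=1e-5):
    norm = np.linalg.norm(x)
    return x if norm < 1 - eps else x * (1 - eps) / norm

def poincare_distance(u, v):
    # Hyperbolic distance in the Poincaré ball model.
    diff = np.linalg.norm(u - v) ** 2
    denom = (1 - np.linalg.norm(u) ** 2) * (1 - np.linalg.norm(v) ** 2)
    return np.arccosh(1 + 2 * diff / denom)

def transe_hyperbolic_score(head, relation, tail):
    # Lower is better: translate the head, project back into the ball,
    # then measure the hyperbolic distance to the tail.
    moved = project_to_ball(head + relation)
    return poincare_distance(moved, tail)

rng = np.random.default_rng(0)
h, r, t = (project_to_ball(rng.normal(scale=0.1, size=8)) for _ in range(3))
print(transe_hyperbolic_score(h, r, t))
```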

Weakly Supervised Attentional Model for Low Resource Ad-hoc Cross-lingual Information Retrieval

Title Weakly Supervised Attentional Model for Low Resource Ad-hoc Cross-lingual Information Retrieval
Authors Lingjun Zhao, Rabih Zbib, Zhuolin Jiang, Damianos Karakos, Zhongqiang Huang
Abstract We propose a weakly supervised neural model for ad-hoc Cross-lingual Information Retrieval (CLIR) from low-resource languages. Low-resource languages often lack relevance annotations for CLIR, and when they are available the training data usually has limited coverage of possible queries. In this paper, we design a model which does not require relevance annotations; instead, it is trained on samples extracted from translation corpora as weak supervision. This model relies on an attention mechanism to learn spans in the foreign sentence that are relevant to the query. We report experiments on two low-resource languages, Swahili and Tagalog, trained on less than 100k parallel sentences each. The proposed model achieves a 19 MAP point improvement compared to using CNNs for feature extraction, a 12 point improvement over machine translation-based CLIR, and up to a 6 point improvement compared to probabilistic CLIR models.
Tasks Information Retrieval, Machine Translation
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-6129/
PDF https://www.aclweb.org/anthology/D19-6129
PWC https://paperswithcode.com/paper/weakly-supervised-attentional-model-for-low
Repo
Framework
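
As a rough sketch of the attention idea, the snippet below scores a foreign sentence for a query by letting each query term attend over the sentence positions and pooling the attended similarities into one relevance score. The embeddings are random stand-ins; the paper's trained bilingual representations and its weak-supervision pipeline from translation corpora are not reproduced here.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relevance_score(query_emb, sentence_emb):
    """query_emb: (Q, d), sentence_emb: (S, d). Returns a scalar relevance score."""
    sims = query_emb @ sentence_emb.T            # (Q, S) term-to-token similarities
    attn = softmax(sims, axis=1)                 # each query term attends over the sentence
    attended = (attn * sims).sum(axis=1)         # expected similarity per query term
    return float(attended.mean())                # pool over query terms

rng = np.random.default_rng(0)
query = rng.normal(size=(3, 50))                 # e.g. a 3-term English query
foreign_sentence = rng.normal(size=(12, 50))     # a 12-token Swahili/Tagalog sentence
print(relevance_score(query, foreign_sentence))
```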

At a Glance: The Impact of Gaze Aggregation Views on Syntactic Tagging

Title At a Glance: The Impact of Gaze Aggregation Views on Syntactic Tagging
Authors Sigrid Klerke, Barbara Plank
Abstract Readers' eye movements used as part of the training signal have been shown to improve performance in a wide range of Natural Language Processing (NLP) tasks. Previous work uses gaze data either at the type level or at the token level, and mostly from a single eye-tracking corpus. In this paper, we analyze type- vs. token-level integration options with eye-tracking data from two corpora to inform two syntactic sequence labeling problems: binary phrase chunking and part-of-speech tagging. We show that using globally-aggregated measures that capture the central tendency or variability of gaze data is more beneficial than the proposed local views which retain individual participant information. While gaze data is informative for supervised POS tagging, which complements previous findings on unsupervised POS induction, almost no improvement is obtained for binary phrase chunking, except for a single specific setup. Hence, caution is warranted when using gaze data as a signal for NLP, as no single view is robust across tasks, modeling choices and gaze corpora.
Tasks Chunking, Eye Tracking, Part-Of-Speech Tagging
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-6408/
PDF https://www.aclweb.org/anthology/D19-6408
PWC https://paperswithcode.com/paper/at-a-glance-the-impact-of-gaze-aggregation
Repo
Framework
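
A small sketch of the globally-aggregated, type-level view that the abstract finds most useful: collapse all participants' gaze measures per word type into central-tendency and variability statistics, then attach them to tokens as extra tagging features. The fixation numbers and feature names are invented for illustration.

```python
from collections import defaultdict
from statistics import mean, pstdev

# (word type, participant, total fixation duration in ms) - invented numbers.
fixations = [
    ("the", "p1", 120), ("the", "p2", 90), ("the", "p1", 110),
    ("glance", "p1", 310), ("glance", "p2", 280),
]

by_type = defaultdict(list)
for word, _participant, duration in fixations:
    by_type[word].append(duration)

# Globally-aggregated, type-level gaze features: central tendency and variability.
gaze_features = {
    word: {"mean_fix": mean(durs), "std_fix": pstdev(durs)}
    for word, durs in by_type.items()
}

def featurize(tokens):
    # Unseen types get zeros; a real setup would back off more carefully.
    return [gaze_features.get(tok, {"mean_fix": 0.0, "std_fix": 0.0}) for tok in tokens]

print(featurize(["the", "glance", "unseen"]))
```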

Assessing Personally Perceived Image Quality via Image Features and Collaborative Filtering

Title Assessing Personally Perceived Image Quality via Image Features and Collaborative Filtering
Authors Jari Korhonen
Abstract During the past few years, different methods for optimizing camera settings and post-processing techniques to improve the subjective quality of consumer photos have been studied extensively. However, most of the research in the prior art has focused on finding the optimal method for an average user. Since there is large deviation in personal opinions and aesthetic standards, the next challenge is to find the settings and post-processing techniques that fit individual users’ personal taste. In this study, we aim to predict personally perceived image quality by combining classical image feature analysis with a collaborative filtering approach known from recommendation systems. The experimental results for the proposed method are promising. As a practical application, our work can be used for personalizing the camera settings or post-processing parameters for different users and images.
Tasks Recommendation Systems
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Korhonen_Assessing_Personally_Perceived_Image_Quality_via_Image_Features_and_Collaborative_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Korhonen_Assessing_Personally_Perceived_Image_Quality_via_Image_Features_and_Collaborative_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/assessing-personally-perceived-image-quality
Repo
Framework
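
As a generic illustration of "image features + collaborative filtering", the sketch below fits a low-rank user/image factorisation of a partially observed rating matrix, with the image factors seeded from (stand-in) image features. The features, ratings and training loop are all assumptions for illustration; this is not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_images, n_feats, rank = 5, 8, 4, 3
image_feats = rng.normal(size=(n_images, n_feats))           # e.g. sharpness, contrast, ...
ratings = rng.integers(1, 6, size=(n_users, n_images)).astype(float)
mask = rng.random((n_users, n_images)) < 0.6                 # only some ratings observed

# Image factors start as a linear map of image features; user factors start random.
W = rng.normal(scale=0.1, size=(n_feats, rank))
V = image_feats @ W                                          # (n_images, rank)
U = rng.normal(scale=0.1, size=(n_users, rank))

lr, reg = 0.02, 0.05
for _ in range(200):                                         # plain SGD over observed cells
    for u, i in zip(*np.nonzero(mask)):
        err = ratings[u, i] - U[u] @ V[i]
        U[u] += lr * (err * V[i] - reg * U[u])
        V[i] += lr * (err * U[u] - reg * V[i])

pred = U @ V.T
print("observed RMSE:", float(np.sqrt(((ratings - pred)[mask] ** 2).mean())))
```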

Guaranteed Matrix Completion Under Multiple Linear Transformations

Title Guaranteed Matrix Completion Under Multiple Linear Transformations
Authors Chao Li, Wei He, Longhao Yuan, Zhun Sun, Qibin Zhao
Abstract Low-rank matrix completion (LRMC) is a classical model in both computer vision (CV) and machine learning, and has been successfully applied to various real applications. In recent CV tasks, completion is usually employed on variants of the data, such as “non-local” or filtered versions, rather than their original forms. As a result, the theoretical analysis of conventional LRMC is no longer applicable in these applications. To tackle this problem, we propose a more general framework for LRMC, in which linear transformations of the data are taken into account. We rigorously prove the identifiability of the proposed model and give an upper bound on the reconstruction error. Furthermore, we derive an efficient completion algorithm using augmented Lagrangian multipliers and the sketching trick. In the experiments, we apply the proposed method to the classical image inpainting problem and achieve state-of-the-art results.
Tasks Image Inpainting, Low-Rank Matrix Completion, Matrix Completion
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Li_Guaranteed_Matrix_Completion_Under_Multiple_Linear_Transformations_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Li_Guaranteed_Matrix_Completion_Under_Multiple_Linear_Transformations_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/guaranteed-matrix-completion-under-multiple
Repo
Framework
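
For context, here is a sketch of the classical special case that the paper generalises: low-rank matrix completion with no extra linear transformation, solved by iterative singular-value soft-thresholding (soft-impute style). The threshold and iteration count are arbitrary assumptions; the paper's general framework, its ALM solver and the sketching trick are not reproduced here.

```python
import numpy as np

def soft_impute(observed, mask, tau=5.0, iters=200):
    """observed: matrix with arbitrary values at unobserved entries;
    mask: boolean matrix, True where an entry is observed."""
    X = np.where(mask, observed, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_low = (U * np.maximum(s - tau, 0.0)) @ Vt      # shrink singular values
        X = np.where(mask, observed, X_low)              # keep observed entries fixed
    return X_low

rng = np.random.default_rng(0)
ground_truth = rng.normal(size=(40, 3)) @ rng.normal(size=(3, 30))   # rank-3 matrix
mask = rng.random(ground_truth.shape) < 0.5
recovered = soft_impute(ground_truth, mask)
print("relative error:",
      float(np.linalg.norm(recovered - ground_truth) / np.linalg.norm(ground_truth)))
```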

PEPSI : Fast Image Inpainting With Parallel Decoding Network

Title PEPSI : Fast Image Inpainting With Parallel Decoding Network
Authors Min-cheol Sagong, Yong-goo Shin, Seung-wook Kim, Seung Park, Sung-jea Ko
Abstract Recently, a generative adversarial network (GAN)-based method employing a coarse-to-fine network with a contextual attention module (CAM) has shown outstanding results in image inpainting. However, this method requires numerous computational resources due to its two-stage process for feature encoding. To solve this problem, in this paper, we present a novel network structure called PEPSI: parallel extended-decoder path for semantic inpainting. PEPSI reduces the number of convolution operations by adopting a structure consisting of a single shared encoding network and a parallel decoding network with coarse and inpainting paths. The coarse path produces a preliminary inpainting result with which the encoding network is trained to predict features for the CAM. At the same time, the inpainting path creates a higher-quality inpainting result using refined features reconstructed by the CAM. PEPSI not only reduces the number of convolution operations by almost half compared to conventional coarse-to-fine networks but also exhibits superior performance to other models in terms of testing time and qualitative scores.
Tasks Image Inpainting
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Sagong_PEPSI__Fast_Image_Inpainting_With_Parallel_Decoding_Network_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Sagong_PEPSI__Fast_Image_Inpainting_With_Parallel_Decoding_Network_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/pepsi-fast-image-inpainting-with-parallel
Repo
Framework

Sakura: Large-scale Incorrect Example Retrieval System for Learners of Japanese as a Second Language

Title Sakura: Large-scale Incorrect Example Retrieval System for Learners of Japanese as a Second Language
Authors Mio Arai, Tomonori Kodaira, Mamoru Komachi
Abstract This study develops an incorrect example retrieval system, called Sakura, using a large-scale Lang-8 dataset for Japanese language learners. Existing example retrieval systems do not include grammatically incorrect examples or present only a few examples, if any. If a retrieval system has wide coverage of incorrect examples along with their correct counterparts, learners can revise their compositions themselves. Considering the usability of retrieving incorrect examples, our proposed system uses a large-scale corpus to expand the coverage of incorrect examples and presents correct expressions alongside the incorrect ones. Our intrinsic and extrinsic evaluations indicate that our system is more useful than a previous system.
Tasks
Published 2019-07-01
URL https://www.aclweb.org/anthology/P19-3001/
PDF https://www.aclweb.org/anthology/P19-3001
PWC https://paperswithcode.com/paper/sakura-large-scale-incorrect-example
Repo
Framework

Deep Restoration of Vintage Photographs From Scanned Halftone Prints

Title Deep Restoration of Vintage Photographs From Scanned Halftone Prints
Authors Qifan Gao, Xiao Shu, Xiaolin Wu
Abstract A great number of invaluable historical photographs unfortunately only exist in the form of halftone prints in old publications such as newspapers or books. Their original continuous-tone films have long been lost or irreparably damaged. There have been attempts to digitally restore these vintage halftone prints to the original film quality or higher. However, even using powerful deep convolutional neural networks, it is still difficult to obtain satisfactory results. The main challenge is that the degradation process is complex and compounded while little to no real data is available for properly training a data-driven method. In this research, we adopt a novel strategy of two-stage deep learning, in which the restoration task is divided into two stages: the removal of printing artifacts and the inverse of halftoning. The advantage of our technique is that only the simple first stage requires unsupervised training in order to make the combined network generalize on real halftone prints, while the more complex second stage of inverse halftoning can be easily trained with synthetic data. Extensive experimental results demonstrate the efficacy of the proposed technique for real halftone prints; the new technique significantly outperforms the existing ones in visual quality.
Tasks
Published 2019-10-01
URL http://openaccess.thecvf.com/content_ICCV_2019/html/Gao_Deep_Restoration_of_Vintage_Photographs_From_Scanned_Halftone_Prints_ICCV_2019_paper.html
PDF http://openaccess.thecvf.com/content_ICCV_2019/papers/Gao_Deep_Restoration_of_Vintage_Photographs_From_Scanned_Halftone_Prints_ICCV_2019_paper.pdf
PWC https://paperswithcode.com/paper/deep-restoration-of-vintage-photographs-from
Repo
Framework

Modeling Document-level Causal Structures for Event Causal Relation Identification

Title Modeling Document-level Causal Structures for Event Causal Relation Identification
Authors Lei Gao, Prafulla Kumar Choubey, Ruihong Huang
Abstract We aim to comprehensively identify all the event causal relations in a document, both within a sentence and across sentences, which is important for reconstructing pivotal event structures. We identified two challenges: 1) event causal relations are sparse among all possible event pairs in a document, and 2) few causal relations are explicitly stated. Both challenges are especially true for identifying causal relations between events across sentences. To address these challenges, we model rich aspects of document-level causal structures to achieve comprehensive causal relation identification. The causal structures include heavy involvement of document-level main events in causal relations, as well as several types of fine-grained constraints that capture implications from certain sentential syntactic relations and discourse relations, and interactions between event causal relations and event coreference relations. Our experimental results show that modeling the global and fine-grained aspects of causal structures using Integer Linear Programming (ILP) greatly improves the performance of causal relation identification, especially in identifying cross-sentence causal relations.
Tasks
Published 2019-06-01
URL https://www.aclweb.org/anthology/N19-1179/
PDF https://www.aclweb.org/anthology/N19-1179
PWC https://paperswithcode.com/paper/modeling-document-level-causal-structures-for
Repo
Framework
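
The sketch below shows the general ILP pattern the abstract relies on, using PuLP: binary variables for candidate causal pairs, an objective built from (made-up) local classifier scores, and one illustrative consistency constraint forcing coreferent events to agree on their causal links to a third event. The event names, scores and the specific constraint are assumptions for illustration; the paper's actual constraint set is richer and different.

```python
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, LpBinary, value

# Candidate event pairs with local classifier scores (invented numbers).
pairs = {("e1", "e3"): 0.8, ("e2", "e3"): -0.3, ("e1", "e2"): -0.5}
coreferent = [("e1", "e2")]      # assume an upstream coreference system said so

prob = LpProblem("causal_relations", LpMaximize)
x = {p: LpVariable(f"causal_{p[0]}_{p[1]}", cat=LpBinary) for p in pairs}
prob += lpSum(score * x[p] for p, score in pairs.items())

# Illustrative constraint: coreferent events must agree on causal links to a third event.
events = {e for p in pairs for e in p}
for a, b in coreferent:
    for c in events - {a, b}:
        if (a, c) in x and (b, c) in x:
            prob += x[(a, c)] == x[(b, c)]

prob.solve()
print({p: int(value(var)) for p, var in x.items()})
```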

Enhancing Local Feature Extraction with Global Representation for Neural Text Classification

Title Enhancing Local Feature Extraction with Global Representation for Neural Text Classification
Authors Guocheng Niu, Hengru Xu, Bolei He, Xinyan Xiao, Hua Wu, Sheng Gao
Abstract For text classification, traditional local-feature-driven models learn long-range dependencies by deep stacking or hybrid modeling. This paper proposes a novel Encoder1-Encoder2 architecture, in which global information is incorporated into the procedure of local feature extraction from scratch. In particular, Encoder1 serves as a global information provider, while Encoder2 performs as a local feature extractor whose output is directly fed into the classifier. Two modes are also designed for their interaction. Thanks to its awareness of global information, our method is able to learn better instance-specific local features and thus avoids complicated upper operations. Experiments conducted on eight benchmark datasets demonstrate that our proposed architecture improves local-feature-driven models by a substantial margin and outperforms the previous best models in the fully-supervised setting.
Tasks Text Classification
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-1047/
PDF https://www.aclweb.org/anthology/D19-1047
PWC https://paperswithcode.com/paper/enhancing-local-feature-extraction-with
Repo
Framework
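
A compact sketch of the Encoder1-Encoder2 shape described above: a recurrent Encoder1 builds a global summary of the text, which conditions Encoder2, a convolutional local feature extractor whose pooled output feeds the classifier. The gating interaction, layer sizes and encoder choices below are illustrative assumptions, not the paper's exact design or its two interaction modes.

```python
import torch
import torch.nn as nn

class GlobalLocalClassifier(nn.Module):
    def __init__(self, vocab=10000, emb=128, hidden=128, classes=5):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder1 = nn.GRU(emb, hidden, batch_first=True)              # global information provider
        self.encoder2 = nn.Conv1d(emb, hidden, kernel_size=3, padding=1)   # local feature extractor
        self.gate = nn.Linear(hidden, hidden)
        self.classifier = nn.Linear(hidden, classes)

    def forward(self, tokens):                                    # tokens: (B, T)
        e = self.embed(tokens)                                    # (B, T, emb)
        _, h = self.encoder1(e)                                   # global summary: (1, B, hidden)
        g = torch.sigmoid(self.gate(h[-1])).unsqueeze(-1)         # (B, hidden, 1)
        local = torch.relu(self.encoder2(e.transpose(1, 2)))      # (B, hidden, T)
        gated = (local * g).max(dim=-1).values                    # global gate modulates local features
        return self.classifier(gated)

model = GlobalLocalClassifier()
logits = model(torch.randint(0, 10000, (4, 20)))
print(logits.shape)   # torch.Size([4, 5])
```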