January 25, 2020

Paper Group NANR 30

Inverse Discriminative Networks for Handwritten Signature Verification

Title Inverse Discriminative Networks for Handwritten Signature Verification
Authors Ping Wei, Huan Li, Ping Hu
Abstract Handwritten signature verification is an important technique for many financial, commercial, and forensic applications. In this paper, we propose an inverse discriminative network (IDN) for writer-independent handwritten signature verification, which aims to determine whether a test signature is genuine or forged compared to the reference signature. The IDN model contains four weight-shared neural network streams: the two that receive the original signature images are the discriminative streams, and the other two, which address the gray-inverted images, form the inverse streams. Multiple paths of attention modules connect the discriminative streams and the inverse streams to propagate messages. With the inverse streams and the multi-path attention modules, the IDN model intensifies the effective information for signature verification. Since there was no proper Chinese signature dataset in the community, we collected a large-scale Chinese signature dataset with approximately 29,000 images of 749 individuals’ signatures. We test our method on the Chinese signature dataset and on three other signature datasets in different languages: CEDAR, BHSig-B, and BHSig-H. Experiments prove the strength and potential of our method.
Tasks
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Wei_Inverse_Discriminative_Networks_for_Handwritten_Signature_Verification_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Wei_Inverse_Discriminative_Networks_for_Handwritten_Signature_Verification_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/inverse-discriminative-networks-for
Repo
Framework
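The four-stream idea can be miniaturized in a few lines of numpy. Everything below — the single-projection "stream", the sigmoid gating standing in for attention, and the distance-based score — is an illustrative toy, not the paper's CNN streams or multi-path attention modules:

```python
import numpy as np

def shared_stream(img, w):
    """Weight-shared feature stream: one linear projection + ReLU
    (a toy stand-in for the paper's convolutional streams)."""
    return np.maximum(img.flatten() @ w, 0.0)

def attention_gate(disc_feat, inv_feat):
    """Toy attention path: a sigmoid gate computed from the inverse
    stream re-weights the discriminative stream's features."""
    gate = 1.0 / (1.0 + np.exp(-inv_feat))
    return disc_feat * gate

rng = np.random.default_rng(0)
w = rng.normal(size=(16, 8)) * 0.1            # shared by all four streams

ref, test = rng.random((4, 4)), rng.random((4, 4))  # reference / test signatures
ref_inv, test_inv = 1.0 - ref, 1.0 - test           # gray inversion (255 - pixel for uint8)

feats = [attention_gate(shared_stream(x, w), shared_stream(x_inv, w))
         for x, x_inv in ((ref, ref_inv), (test, test_inv))]

# A low distance between the fused representations would suggest the
# test signature matches the reference.
score = np.linalg.norm(feats[0] - feats[1])
print(score)
```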

Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019)

Title Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019)
Authors
Abstract
Tasks
Published 2019-06-01
URL https://www.aclweb.org/anthology/S19-1000/
PDF https://www.aclweb.org/anthology/S19-1000
PWC https://paperswithcode.com/paper/proceedings-of-the-eighth-joint-conference-on
Repo
Framework

An Efficient Solution to the Homography-Based Relative Pose Problem With a Common Reference Direction

Title An Efficient Solution to the Homography-Based Relative Pose Problem With a Common Reference Direction
Authors Yaqing Ding, Jian Yang, Jean Ponce, Hui Kong
Abstract In this paper, we propose a novel approach to two-view minimal-case relative pose problems based on homography with a common reference direction. We explore the rank-1 constraint on the difference between the Euclidean homography matrix and the corresponding rotation, and propose an efficient two-step solution for solving both the calibrated and partially calibrated (unknown focal length) problems. We derive new 3.5-point, 3.5-point, and 4-point solvers for the cases where the two focal lengths are unknown but equal, only one of them is unknown, and both are unknown and possibly different, respectively. We present detailed analyses and comparisons with existing 6- and 7-point solvers, including results on smartphone images.
Tasks
Published 2019-10-01
URL http://openaccess.thecvf.com/content_ICCV_2019/html/Ding_An_Efficient_Solution_to_the_Homography-Based_Relative_Pose_Problem_With_ICCV_2019_paper.html
PDF http://openaccess.thecvf.com/content_ICCV_2019/papers/Ding_An_Efficient_Solution_to_the_Homography-Based_Relative_Pose_Problem_With_ICCV_2019_paper.pdf
PWC https://paperswithcode.com/paper/an-efficient-solution-to-the-homography-based
Repo
Framework
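The rank-1 constraint the solvers exploit follows directly from the plane-induced homography decomposition H = R + t nᵀ/d: subtracting the rotation leaves the rank-1 outer product t nᵀ/d. A small numpy check, with made-up pose and plane values:

```python
import numpy as np

def rotation_z(theta):
    """Rotation about the z axis by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Euclidean homography induced by a plane: H = R + t n^T / d
R = rotation_z(0.3)
t = np.array([0.5, -0.2, 0.1])   # translation (made up)
n = np.array([0.0, 0.0, 1.0])    # plane normal (made up)
d = 2.0                          # distance to the plane

H = R + np.outer(t, n) / d

# The constraint the paper's solvers build on: H - R has rank 1
rank = np.linalg.matrix_rank(H - R)
print(rank)  # 1
```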

The effectiveness of layer-by-layer training using the information bottleneck principle

Title The effectiveness of layer-by-layer training using the information bottleneck principle
Authors Adar Elad, Doron Haviv, Yochai Blau, Tomer Michaeli
Abstract The recently proposed information bottleneck (IB) theory of deep nets suggests that during training, each layer attempts to maximize its mutual information (MI) with the target labels (so as to allow good prediction accuracy), while minimizing its MI with the input (leading to effective compression and thus good generalization). To date, evidence of this phenomenon has been indirect and has aroused controversy due to theoretical and practical complications. In particular, it has been pointed out that the MI with the input is theoretically infinite in many cases of interest, and that the MI with the target is fundamentally difficult to estimate in high dimensions. As a consequence, the validity of this theory has been questioned. In this paper, we overcome these obstacles by two means. First, as previously suggested, we replace the MI with the input by a noise-regularized version, which ensures it is finite. As we show, this modified penalty in fact acts as a form of weight decay regularization. Second, to obtain accurate (noise-regularized) MI estimates between an intermediate representation and the input, we incorporate the strong prior knowledge we have about their relation into the recently proposed MI estimator of Belghazi et al. (2018). With this scheme, we are able to stably train each layer independently to explicitly optimize the IB functional. Surprisingly, this leads to enhanced prediction accuracy, thus directly validating the IB theory of deep nets for the first time.
Tasks
Published 2019-05-01
URL https://openreview.net/forum?id=r1Nb5i05tX
PDF https://openreview.net/pdf?id=r1Nb5i05tX
PWC https://paperswithcode.com/paper/the-effectiveness-of-layer-by-layer-training
Repo
Framework
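For a linear layer with additive Gaussian noise, the noise-regularized I(X;T) has a closed form, which makes the claimed weight-decay behaviour easy to see: shrinking the weights shrinks the MI. The sketch below is the generic Gaussian-channel computation, not the authors' estimator:

```python
import numpy as np

def gaussian_mi_input(W, sigma_x=1.0, sigma_n=1.0):
    """I(X; T) for T = W X + N, with X ~ N(0, sigma_x^2 I) and noise
    N ~ N(0, sigma_n^2 I): 0.5 * log det(I + (sigma_x/sigma_n)^2 W W^T).
    The injected noise is what keeps this quantity finite."""
    k = W.shape[0]
    M = np.eye(k) + (sigma_x / sigma_n) ** 2 * (W @ W.T)
    sign, logdet = np.linalg.slogdet(M)
    return 0.5 * logdet

rng = np.random.default_rng(0)
W_small = rng.normal(size=(3, 5)) * 0.1
W_large = W_small * 10.0

# Shrinking the weights shrinks the noise-regularized MI with the input,
# which is why the penalty behaves like a form of weight decay.
print(gaussian_mi_input(W_small) < gaussian_mi_input(W_large))  # True
```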

A Spanish E-dictionary of Collocations

Title A Spanish E-dictionary of Collocations
Authors Maria Auxiliadora Barrios Rodriguez, Igor Boguslavsky
Abstract
Tasks
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-7719/
PDF https://www.aclweb.org/anthology/W19-7719
PWC https://paperswithcode.com/paper/a-spanish-e-dictionary-of-collocations
Repo
Framework

Diachronic Sense Modeling with Deep Contextualized Word Embeddings: An Ecological View

Title Diachronic Sense Modeling with Deep Contextualized Word Embeddings: An Ecological View
Authors Renfen Hu, Shen Li, Shichen Liang
Abstract Diachronic word embeddings have been widely used in detecting temporal changes. However, existing methods face the meaning conflation deficiency by representing a word as a single vector at each time period. To address this issue, this paper proposes a sense representation and tracking framework based on deep contextualized embeddings, aiming at answering not only what and when, but also how the word meaning changes. The experiments show that our framework is effective in representing fine-grained word senses, and it brings a significant improvement in the word change detection task. Furthermore, we model word change from an ecological viewpoint and sketch two interesting sense behaviors in the process of language evolution, i.e., sense competition and sense cooperation.
Tasks Word Embeddings
Published 2019-07-01
URL https://www.aclweb.org/anthology/P19-1379/
PDF https://www.aclweb.org/anthology/P19-1379
PWC https://paperswithcode.com/paper/diachronic-sense-modeling-with-deep
Repo
Framework
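The core move — one vector per sense per period instead of one vector per word — can be illustrated with mock contextual embeddings: assign each usage to its nearest sense centroid and compare the sense proportions across periods. The centroids and data here are synthetic 2-D toys; the paper works with deep contextualized embeddings:

```python
import numpy as np

def sense_proportions(context_vecs, centroids):
    """Assign each contextual embedding to its nearest sense centroid and
    return the share of usages per sense (a toy stand-in for sense tracking)."""
    d = np.linalg.norm(context_vecs[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    return np.bincount(labels, minlength=len(centroids)) / len(labels)

rng = np.random.default_rng(0)
senses = np.array([[0.0, 0.0], [5.0, 5.0]])  # two hypothetical senses of one word

# Mock contextual embeddings from two time periods: sense 1 gains ground
period_a = np.vstack([senses[0] + rng.normal(size=(8, 2)),
                      senses[1] + rng.normal(size=(2, 2))])
period_b = np.vstack([senses[0] + rng.normal(size=(3, 2)),
                      senses[1] + rng.normal(size=(7, 2))])

p_a = sense_proportions(period_a, senses)
p_b = sense_proportions(period_b, senses)
print(p_a[1] < p_b[1])  # sense 1's share grows over time
```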

Some Insights Towards a Unified Semantic Representation of Explanation for eXplainable Artificial Intelligence

Title Some Insights Towards a Unified Semantic Representation of Explanation for eXplainable Artificial Intelligence
Authors Ismaïl Baaj, Jean-Philippe Poli, Wassila Ouerdane
Abstract
Tasks
Published 2019-01-01
URL https://www.aclweb.org/anthology/W19-8404/
PDF https://www.aclweb.org/anthology/W19-8404
PWC https://paperswithcode.com/paper/some-insights-towards-a-unified-semantic
Repo
Framework

Silent HMMs: Generalized Representation of Hidden Semi-Markov Models and Hierarchical HMMs

Title Silent HMMs: Generalized Representation of Hidden Semi-Markov Models and Hierarchical HMMs
Authors Kei Wakabayashi
Abstract Modeling sequence data using probabilistic finite state machines (PFSMs) is a technique that analyzes the underlying dynamics in sequences of symbols. Hidden semi-Markov models (HSMMs) and hierarchical hidden Markov models (HHMMs) are PFSMs that have been successfully applied to a wide variety of applications by extending HMMs to make the extracted patterns easier to interpret. However, these models were developed independently, each with its own training algorithm, so we cannot combine multiple kinds of structures to build a PFSM for a specific application. In this paper, we prove that silent hidden Markov models (silent HMMs) are flexible models that have more expressive power than HSMMs and HHMMs. Silent HMMs are HMMs that contain silent states, which do not emit any observations. We show that we can obtain silent HMMs equivalent to given HSMMs and HHMMs. We believe that these results form a firm foundation for using silent HMMs as a unified representation for PFSM modeling.
Tasks
Published 2019-09-01
URL https://www.aclweb.org/anthology/W19-3113/
PDF https://www.aclweb.org/anthology/W19-3113
PWC https://paperswithcode.com/paper/silent-hmms-generalized-representation-of
Repo
Framework
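The defining mechanic — states that consume no observation — needs only a small tweak to the forward algorithm: fold the mass entering a silent state back into its emitting successors within the same time step (possible whenever the silent states are acyclic). A toy example with one silent state S between emitting states A and B, using made-up probabilities:

```python
# Toy HMM with one silent state S. Silent states emit nothing, so their
# forward mass is folded in *within* each step (assumes no silent cycles).
emit = {"A": {"x": 0.9, "y": 0.1}, "B": {"x": 0.2, "y": 0.8}}
trans = {("A", "S"): 0.5, ("A", "A"): 0.5, ("S", "B"): 1.0, ("B", "B"): 1.0}
start = {"A": 1.0, "B": 0.0}

def forward(obs):
    """Forward probability of the observation sequence."""
    alpha = dict(start)
    for t, o in enumerate(obs):
        if t > 0:
            prev = alpha
            # Direct transitions between emitting states
            alpha = {s: sum(prev[p] * trans.get((p, s), 0.0) for p in prev)
                     for s in ("A", "B")}
            # Mass entering the silent state flows on to B in the same step
            s_mass = sum(prev[p] * trans.get((p, "S"), 0.0) for p in prev)
            alpha["B"] += s_mass * trans[("S", "B")]
        # Emitting states weight their mass by the emission probability
        alpha = {s: alpha[s] * emit[s][o] for s in alpha}
    return sum(alpha.values())

print(forward(["x", "y"]))
```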

Learning from Noisy Demonstration Sets via Meta-Learned Suitability Assessor

Title Learning from Noisy Demonstration Sets via Meta-Learned Suitability Assessor
Authors Te-Lin Wu, Jaedong Hwang, Jingyun Yang, Shaofan Lai, Carl Vondrick, Joseph J. Lim
Abstract A noisy and diverse demonstration set may hinder the performance of an agent aiming to acquire certain skills via imitation learning. However, state-of-the-art imitation learning algorithms often assume the optimality of the given demonstration set. In this paper, we relax this optimality assumption by learning only from the most suitable demonstrations in a given set. The suitability of a demonstration is estimated by whether imitating it produces desirable outcomes for achieving the goals of the tasks. For more efficient demonstration suitability assessments, the learning agent should be capable of imitating a demonstration as quickly as possible, which shares a similar spirit with fast adaptation in the meta-learning regime. Our framework is thus built on top of Model-Agnostic Meta-Learning and evaluates how desirable the imitated outcomes are after adaptation to each demonstration in the set. The resulting assessments then enable us to select suitable demonstration subsets for acquiring better imitated skills. The videos related to our experiments are available at: https://sites.google.com/view/deepdj
Tasks Imitation Learning, Meta-Learning
Published 2019-05-01
URL https://openreview.net/forum?id=rkxkHnA5tX
PDF https://openreview.net/pdf?id=rkxkHnA5tX
PWC https://paperswithcode.com/paper/learning-from-noisy-demonstration-sets-via
Repo
Framework
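The suitability idea can be miniaturized with a scalar linear "policy": take one gradient step on each demonstration (the fast-adaptation move MAML provides) and score the demonstration by how well the adapted model achieves the goal. The task, model, and demonstrations below are toy assumptions, not the paper's setup:

```python
import numpy as np

def one_step_adapt(theta, demo_x, demo_y, lr=0.1):
    """One gradient step on a scalar linear model y = theta * x,
    mimicking MAML's fast-adaptation phase (toy stand-in)."""
    grad = np.mean(2 * (theta * demo_x - demo_y) * demo_x)
    return theta - lr * grad

def suitability(theta, demo, goal_x, goal_y):
    """Score a demonstration by how well the adapted model achieves the goal
    (higher is better)."""
    adapted = one_step_adapt(theta, *demo)
    return -np.mean((adapted * goal_x - goal_y) ** 2)

goal_x = np.linspace(-1, 1, 20)
goal_y = 2.0 * goal_x                 # the true skill: slope 2

clean_demo = (goal_x, 2.0 * goal_x)   # suitable demonstration
noisy_demo = (goal_x, -3.0 * goal_x)  # misleading demonstration

theta0 = 0.0
scores = [suitability(theta0, d, goal_x, goal_y) for d in (clean_demo, noisy_demo)]
print(scores[0] > scores[1])  # the clean demo is judged more suitable
```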

SpaceRefNet: a neural approach to spatial reference resolution in a real city environment

Title SpaceRefNet: a neural approach to spatial reference resolution in a real city environment
Authors Dmytro Kalpakchi, Johan Boye
Abstract Adding interactive capabilities to pedestrian wayfinding systems in the form of spoken dialogue will make them more natural to humans. Such an interactive wayfinding system needs to continuously understand and interpret pedestrian’s utterances referring to the spatial context. Achieving this requires the system to identify exophoric referring expressions in the utterances, and link these expressions to the geographic entities in the vicinity. This exophoric spatial reference resolution problem is difficult, as there are often several dozens of candidate referents. We present a neural network-based approach for identifying pedestrian’s references (using a network called RefNet) and resolving them to appropriate geographic objects (using a network called SpaceRefNet). Both methods show promising results, beating the respective baselines and earlier reported results in the literature.
Tasks
Published 2019-09-01
URL https://www.aclweb.org/anthology/W19-5949/
PDF https://www.aclweb.org/anthology/W19-5949
PWC https://paperswithcode.com/paper/spacerefnet-a-neural-approach-to-spatial
Repo
Framework

Content and Style Disentanglement for Artistic Style Transfer

Title Content and Style Disentanglement for Artistic Style Transfer
Authors Dmytro Kotovenko, Artsiom Sanakoyeu, Sabine Lang, Bjorn Ommer
Abstract Artists rarely paint in a single style throughout their career. More often they change styles or develop variations of them. In addition, artworks in different styles, and even within one style, depict real content differently: while Picasso’s Blue Period displays a vase in a blueish tone but as a whole, his Cubist works deconstruct the object. To produce artistically convincing stylizations, style transfer models must be able to reflect these changes and variations. Recently many works have aimed to improve the style transfer task, but neglected to address the described observations. We present a novel approach which captures the particularities of a style and the variations within it, and which separates style and content. This is achieved by introducing two novel losses: a fixpoint triplet style loss to learn subtle variations within one style or between different styles, and a disentanglement loss to ensure that the stylization is not conditioned on the real input photo. In addition, the paper proposes various evaluation methods to measure the importance of both losses on the validity, quality, and variability of final stylizations. We provide qualitative results to demonstrate the performance of our approach.
Tasks Style Transfer
Published 2019-10-01
URL http://openaccess.thecvf.com/content_ICCV_2019/html/Kotovenko_Content_and_Style_Disentanglement_for_Artistic_Style_Transfer_ICCV_2019_paper.html
PDF http://openaccess.thecvf.com/content_ICCV_2019/papers/Kotovenko_Content_and_Style_Disentanglement_for_Artistic_Style_Transfer_ICCV_2019_paper.pdf
PWC https://paperswithcode.com/paper/content-and-style-disentanglement-for
Repo
Framework
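The paper's fixpoint triplet style loss builds on the standard margin triplet form, which is easy to state: pull the stylization's style embedding toward the target style and push it away from other styles. The embeddings below are hand-picked 2-D toys, and this is only the generic triplet term, not the fixpoint variant:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard margin triplet loss on style embeddings: the anchor should be
    closer to the positive (target style) than to the negative (other style)."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

style_a = np.array([1.0, 0.0])   # embedding of the target style (made up)
style_b = np.array([0.0, 1.0])   # embedding of a different style (made up)
stylized = np.array([0.9, 0.1])  # embedding of the stylized output

loss = triplet_loss(stylized, style_a, style_b)
print(loss)  # 0.0: the stylization already sits well inside the margin
```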

Understanding Generalized Whitening and Coloring Transform for Universal Style Transfer

Title Understanding Generalized Whitening and Coloring Transform for Universal Style Transfer
Authors Tai-Yin Chiu
Abstract Style transfer is the task of rendering images in the styles of other images. In the past few years, neural style transfer has achieved great success in this task, yet suffers either from an inability to generalize to unseen style images or from an inability to perform style transfer quickly. Recently, a universal style transfer technique that applies zero-phase component analysis (ZCA) for whitening and coloring image features realized fast and arbitrary style transfer. However, using ZCA for style transfer is empirical and does not have any theoretical support. In addition, whitening and coloring transforms (WCTs) other than ZCA have not been investigated. In this report, we generalize ZCA to the general form of WCT, provide an analytical performance analysis from the angle of neural style transfer, and show why ZCA is a good choice for style transfer among different WCTs and why some WCTs are not well suited to style transfer.
Tasks Style Transfer
Published 2019-10-01
URL http://openaccess.thecvf.com/content_ICCV_2019/html/Chiu_Understanding_Generalized_Whitening_and_Coloring_Transform_for_Universal_Style_Transfer_ICCV_2019_paper.html
PDF http://openaccess.thecvf.com/content_ICCV_2019/papers/Chiu_Understanding_Generalized_Whitening_and_Coloring_Transform_for_Universal_Style_Transfer_ICCV_2019_paper.pdf
PWC https://paperswithcode.com/paper/understanding-generalized-whitening-and
Repo
Framework
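The WCT the report analyzes is, at its core, a two-step covariance surgery: whiten the content features to identity covariance, then color them with the style covariance. The ZCA choice (eigenvectors on both sides, V Λᵖ Vᵀ) can be checked numerically — after the transform, the output covariance matches the style covariance:

```python
import numpy as np

def zca_whiten_color(content_feat, style_feat, eps=1e-5):
    """ZCA whitening-and-coloring transform (WCT): strip the content
    features' covariance, then impose the style features' covariance."""
    def center_cov(f):
        mu = f.mean(axis=1, keepdims=True)
        fc = f - mu
        return fc, mu, fc @ fc.T / (f.shape[1] - 1)

    fc, _, cov_c = center_cov(content_feat)
    _, mu_s, cov_s = center_cov(style_feat)

    def mat_pow(cov, p):
        # Symmetric matrix power via eigendecomposition: V diag(w^p) V^T
        w, v = np.linalg.eigh(cov)
        return v @ np.diag(np.maximum(w, eps) ** p) @ v.T

    whitened = mat_pow(cov_c, -0.5) @ fc        # ZCA whitening
    colored = mat_pow(cov_s, 0.5) @ whitened    # ZCA coloring
    return colored + mu_s

rng = np.random.default_rng(0)
content = rng.normal(size=(4, 500)) * np.array([[3.0], [1.0], [0.5], [2.0]])
style = rng.normal(size=(4, 500))

out = zca_whiten_color(content, style)
# The output's covariance now matches the style features' covariance
print(np.allclose(np.cov(out), np.cov(style), atol=1e-6))  # True
```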

SketchGAN: Joint Sketch Completion and Recognition With Generative Adversarial Network

Title SketchGAN: Joint Sketch Completion and Recognition With Generative Adversarial Network
Authors Fang Liu, Xiaoming Deng, Yu-Kun Lai, Yong-Jin Liu, Cuixia Ma, Hongan Wang
Abstract Hand-drawn sketch recognition is a fundamental problem in computer vision, widely used in sketch-based image and video retrieval, editing, and reorganization. Previous methods often assume that a complete sketch is used as input; however, hand-drawn sketches in common application scenarios are often incomplete, which makes sketch recognition a challenging problem. In this paper, we propose SketchGAN, a new generative adversarial network (GAN) based approach that jointly completes and recognizes a sketch, boosting the performance of both tasks. Specifically, we use a cascaded encoder-decoder network to complete the input sketch in an iterative manner, and employ an auxiliary sketch recognition task to recognize the completed sketch. Experiments on the Sketchy database benchmark demonstrate that our joint learning approach achieves competitive sketch completion and recognition performance compared with the state-of-the-art methods. Further experiments using several sketch-based applications also validate the performance of our method.
Tasks Sketch Recognition, Video Retrieval
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Liu_SketchGAN_Joint_Sketch_Completion_and_Recognition_With_Generative_Adversarial_Network_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Liu_SketchGAN_Joint_Sketch_Completion_and_Recognition_With_Generative_Adversarial_Network_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/sketchgan-joint-sketch-completion-and
Repo
Framework
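The iterative, cascaded completion loop can be mimicked with a deliberately simple stand-in: each "stage" fills the missing region from its neighbours, and the next stage refines the result. The real SketchGAN stages are GAN generators; the neighbour-averaging operator here is purely illustrative:

```python
import numpy as np

def complete_step(sketch, mask):
    """One cascade stage (toy stand-in): fill masked pixels with the mean of
    their 4-neighbours, leaving observed strokes untouched."""
    padded = np.pad(sketch, 1)
    neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1]
             + padded[1:-1, :-2] + padded[1:-1, 2:]) / 4
    return np.where(mask, neigh, sketch)

# A vertical stroke with its middle pixel missing
sketch = np.zeros((5, 5))
sketch[:, 2] = 1.0
mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True
sketch[2, 2] = 0.0            # the incomplete region

# Iterate the cascade: each pass refines the previous completion
for _ in range(3):
    sketch = complete_step(sketch, mask)

print(sketch[2, 2])  # 0.5: the gap is filled from the stroke above and below
```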

Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers)

Title Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers)
Authors
Abstract
Tasks Machine Translation
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-5200/
PDF https://www.aclweb.org/anthology/W19-5200
PWC https://paperswithcode.com/paper/proceedings-of-the-fourth-conference-on-1
Repo
Framework

Modeling Personal Biases in Language Use by Inducing Personalized Word Embeddings

Title Modeling Personal Biases in Language Use by Inducing Personalized Word Embeddings
Authors Daisuke Oba, Naoki Yoshinaga, Shoetsu Sato, Satoshi Akasaki, Masashi Toyoda
Abstract There exist biases in individuals’ language use: the same word (e.g., cool) is used for expressing different meanings (e.g., temperature range), or different words (e.g., cloudy, hazy) are used for describing the same meaning. In this study, we propose a method of modeling such personal biases in word meanings (hereafter, semantic variations) with personalized word embeddings, obtained by solving a task on subjective text while regarding words used by different individuals as different words. To prevent the personalized word embeddings from being contaminated by other irrelevant biases, we solve a task of identifying a review-target (objective output) from a given review. To stabilize the training of this extreme multi-class classification, we perform multi-task learning with metadata identification. Experimental results with reviews retrieved from RateBeer confirmed that the obtained personalized word embeddings improved the accuracy of sentiment analysis as well as the target task. Analysis of the obtained personalized word embeddings revealed trends in semantic variations related to frequent and adjective words.
Tasks Multi-Task Learning, Sentiment Analysis, Word Embeddings
Published 2019-06-01
URL https://www.aclweb.org/anthology/N19-1215/
PDF https://www.aclweb.org/anthology/N19-1215
PWC https://paperswithcode.com/paper/modeling-personal-biases-in-language-use-by
Repo
Framework
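The key trick — "regarding words used by different individuals as different words" — amounts to keying the vocabulary by (user, word) pairs so each pair gets its own embedding row. A stdlib-only sketch with a made-up two-user corpus:

```python
def personalized_vocab(corpus):
    """Index (user, word) pairs as distinct vocabulary entries, so each
    user's usage of e.g. 'cool' gets its own embedding row (toy stand-in)."""
    vocab = {}
    for user, text in corpus:
        for word in text.split():
            vocab.setdefault((user, word), len(vocab))
    return vocab

corpus = [("alice", "this beer is cool"),
          ("bob", "cool hazy brew")]
vocab = personalized_vocab(corpus)

# 'cool' maps to two different embedding indices, one per user
print(vocab[("alice", "cool")] != vocab[("bob", "cool")])  # True
```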