January 25, 2020

2616 words 13 mins read

Paper Group NANR 8

Cross-linguistic research into derivational networks. The Active-Filler Strategy in a Move-Eager Left-Corner Minimalist Grammar Parser. Identity From Here, Pose From There: Self-Supervised Disentanglement and Generation of Objects Using Unlabeled Videos. Unsupervised one-to-many image translation. Window-Based Neural Tagging for Shallow Discourse Argument Labeling …

Cross-linguistic research into derivational networks

Title Cross-linguistic research into derivational networks
Authors Lívia Körtvélyessy
Abstract
Tasks
Published 2019-09-01
URL https://www.aclweb.org/anthology/W19-8501/
PDF https://www.aclweb.org/anthology/W19-8501
PWC https://paperswithcode.com/paper/cross-linguistic-research-into-derivational
Repo
Framework

The Active-Filler Strategy in a Move-Eager Left-Corner Minimalist Grammar Parser

Title The Active-Filler Strategy in a Move-Eager Left-Corner Minimalist Grammar Parser
Authors Tim Hunter, Miloš Stanojević, Edward Stabler
Abstract Recent psycholinguistic evidence suggests that human parsing of moved elements is 'active', and perhaps even 'hyper-active': it seems that a leftward-moved object is related to a verbal position rapidly, perhaps even before the transitivity information associated with the verb is available to the listener. This paper presents a formal, sound and complete parser for Minimalist Grammars whose search space contains branching points that we can identify as the locus of the decision to perform this kind of active gap-finding. This brings formal models of parsing into closer contact with recent psycholinguistic theorizing than was previously possible.
Tasks Human Parsing
Published 2019-06-01
URL https://www.aclweb.org/anthology/W19-2901/
PDF https://www.aclweb.org/anthology/W19-2901
PWC https://paperswithcode.com/paper/the-active-filler-strategy-in-a-move-eager
Repo
Framework

Identity From Here, Pose From There: Self-Supervised Disentanglement and Generation of Objects Using Unlabeled Videos

Title Identity From Here, Pose From There: Self-Supervised Disentanglement and Generation of Objects Using Unlabeled Videos
Authors Fanyi Xiao, Haotian Liu, Yong Jae Lee
Abstract We propose a novel approach that disentangles the identity and pose of objects for image generation. Our model takes as input an ID image and a pose image, and generates an output image with the identity of the ID image and the pose of the pose image. Unlike most previous unsupervised work, which relies on cyclic constraints that can often be brittle, we instead propose to learn this in a self-supervised way. Specifically, we leverage unlabeled videos to automatically construct pseudo ground-truth targets to directly supervise our model. To enforce disentanglement, we propose a novel disentanglement loss, and to improve realism, we propose a pixel-verification loss in which the generated image's pixels must trace back to the ID input. We conduct extensive experiments on both synthetic and real images to demonstrate improved realism, diversity, and ID/pose disentanglement compared to existing methods.
Tasks Image Generation
Published 2019-10-01
URL http://openaccess.thecvf.com/content_ICCV_2019/html/Xiao_Identity_From_Here_Pose_From_There_Self-Supervised_Disentanglement_and_Generation_ICCV_2019_paper.html
PDF http://openaccess.thecvf.com/content_ICCV_2019/papers/Xiao_Identity_From_Here_Pose_From_There_Self-Supervised_Disentanglement_and_Generation_ICCV_2019_paper.pdf
PWC https://paperswithcode.com/paper/identity-from-here-pose-from-there-self
Repo
Framework
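
The self-supervised recipe in the abstract above lends itself to a compact sketch. Below is a minimal PyTorch sketch of one plausible reading: two frames of the same video share identity, so combining the identity of frame A with the pose of frame B should reconstruct frame B itself, giving a direct pseudo ground-truth target with no cyclic constraint. All module names and sizes are illustrative, not taken from the authors' code.

```python
import torch
import torch.nn as nn

# Two frames of one video share identity; identity from frame A plus pose from
# frame B should reproduce frame B, which serves as the pseudo ground truth.
class Encoder(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 4, 2, 1), nn.ReLU(),        # 64x64 -> 32x32
            nn.Flatten(), nn.Linear(16 * 32 * 32, dim))

    def forward(self, x):
        return self.net(x)

id_enc, pose_enc = Encoder(64), Encoder(64)
decoder = nn.Sequential(
    nn.Linear(128, 16 * 32 * 32), nn.ReLU(),
    nn.Unflatten(1, (16, 32, 32)),
    nn.ConvTranspose2d(16, 3, 4, 2, 1), nn.Sigmoid())    # 32x32 -> 64x64

frame_a = torch.rand(8, 3, 64, 64)   # identity source
frame_b = torch.rand(8, 3, 64, 64)   # pose source and reconstruction target
out = decoder(torch.cat([id_enc(frame_a), pose_enc(frame_b)], dim=1))
loss = nn.functional.l1_loss(out, frame_b)   # direct supervision, no cycle
loss.backward()
```

The paper's disentanglement and pixel-verification losses would be added on top of this reconstruction term.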

Unsupervised one-to-many image translation

Title Unsupervised one-to-many image translation
Authors Samuel Lavoie-Marchildon, Sebastien Lachapelle, Mikołaj Bińkowski, Aaron Courville, Yoshua Bengio, R Devon Hjelm
Abstract We perform completely unsupervised one-sided image-to-image translation between a source domain $X$ and a target domain $Y$ such that we preserve relevant underlying shared semantics (e.g., class, size, shape, etc.). In particular, we are interested in a more difficult case than those typically addressed in the literature, where the source and target are "far" enough apart that reconstruction-style or pixel-wise approaches fail. We argue that transferring (i.e., *translating*) said relevant information should involve both discarding source-domain-specific information and incorporating target-domain-specific information, the latter of which we model with a noisy prior distribution. In order to avoid the degenerate case where the generated samples are only explained by the prior distribution, we propose to minimize an estimate of the mutual information between the generated sample and the sample from the prior distribution. We discover that architectural choices are an important factor to consider in order to preserve the shared semantics between $X$ and $Y$. We show state-of-the-art results on the MNIST-to-SVHN task for unsupervised image-to-image translation.
Tasks Image-to-Image Translation, Unsupervised Image-To-Image Translation
Published 2019-05-01
URL https://openreview.net/forum?id=B1GIQhCcYm
PDF https://openreview.net/pdf?id=B1GIQhCcYm
PWC https://paperswithcode.com/paper/unsupervised-one-to-many-image-translation
Repo
Framework
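
The mutual-information penalty described above can be sketched with a MINE-style statistics network (Belghazi et al.); the choice of estimator here is ours for illustration, and all names and sizes are made up.

```python
import math
import torch
import torch.nn as nn

# MINE-style Donsker-Varadhan bound: I(X; Z) >= E_joint[T] - log E_marginal[e^T].
# The generator would *minimize* this estimate so that generated samples are not
# explained by the prior noise alone; the statistics network maximizes it.
stat_net = nn.Sequential(nn.Linear(64 + 16, 128), nn.ReLU(), nn.Linear(128, 1))

def mi_estimate(feats, z):
    joint = stat_net(torch.cat([feats, z], dim=1)).mean()
    shuffled = z[torch.randperm(z.size(0))]              # break the pairing
    marg = stat_net(torch.cat([feats, shuffled], dim=1))
    return joint - (torch.logsumexp(marg, dim=0) - math.log(z.size(0)))

feats = torch.randn(32, 64)   # features of generated samples (illustrative)
z = torch.randn(32, 16)       # prior noise that produced them
print(mi_estimate(feats, z))
```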

Window-Based Neural Tagging for Shallow Discourse Argument Labeling

Title Window-Based Neural Tagging for Shallow Discourse Argument Labeling
Authors René Knaebel, Manfred Stede, Sebastian Stober
Abstract This paper describes a novel approach to the task of end-to-end argument labeling in shallow discourse parsing. Our method decomposes the overall labeling task into subtasks and combines their outputs with a general distance-based aggregation procedure. For learning these subtasks, we train a recurrent neural network and gradually replace existing components of our baseline with our model. The model is trained and evaluated on the Penn Discourse Treebank 2.0 corpus. While it is not as good as knowledge-intensive approaches, it clearly outperforms other models that are also trained without additional linguistic features.
Tasks
Published 2019-11-01
URL https://www.aclweb.org/anthology/K19-1072/
PDF https://www.aclweb.org/anthology/K19-1072
PWC https://paperswithcode.com/paper/window-based-neural-tagging-for-shallow
Repo
Framework
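
The abstract leaves the aggregation step abstract; the sketch below is one hedged interpretation of distance-based aggregation over sliding windows, in which a token's final label scores average the predictions of every window covering it, weighted by the token's distance to each window's center. This is our illustration, not the paper's exact procedure.

```python
import numpy as np

# Each sliding window predicts label scores for its tokens; overlapping
# predictions are merged with weights that decay away from the window center.
def aggregate(token_count, window_scores, window_size):
    """window_scores[i]: (window_size, n_labels) scores for the window at i."""
    n_labels = window_scores[0].shape[1]
    total = np.zeros((token_count, n_labels))
    weight = np.zeros((token_count, 1))
    center = (window_size - 1) / 2
    for start, scores in enumerate(window_scores):
        for offset in range(window_size):
            tok = start + offset
            if tok >= token_count:
                break
            w = 1.0 / (1.0 + abs(offset - center))   # distance-based weight
            total[tok] += w * scores[offset]
            weight[tok] += w
    return total / np.maximum(weight, 1e-9)

scores = [np.random.rand(5, 3) for _ in range(8)]   # 8 windows, size 5, 3 labels
print(aggregate(12, scores, 5).argmax(axis=1))      # final tag per token
```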

TextGraphs 2019 Shared Task on Multi-Hop Inference for Explanation Regeneration

Title TextGraphs 2019 Shared Task on Multi-Hop Inference for Explanation Regeneration
Authors Peter Jansen, Dmitry Ustalov
Abstract While automated question answering systems are increasingly able to retrieve answers to natural language questions, their ability to generate detailed human-readable explanations for their answers is still quite limited. The Shared Task on Multi-Hop Inference for Explanation Regeneration tasks participants with regenerating detailed gold explanations for standardized elementary science exam questions by selecting facts from a knowledge base of semi-structured tables. Each explanation contains between 1 and 16 interconnected facts that form an "explanation graph" spanning core scientific knowledge and detailed world knowledge. It is expected that successfully combining these facts to generate detailed explanations will require advancing methods in multi-hop inference and information combination, and will make use of the supervised training data provided by the WorldTree explanation corpus. The top-performing system achieved a mean average precision (MAP) of 0.56, substantially advancing the state of the art over a baseline information retrieval model. Detailed extended analyses of all submitted systems showed large relative improvements on the most challenging multi-hop inference problems, while absolute performance remains low, highlighting the difficulty of generating detailed explanations through multi-hop reasoning.
Tasks Information Retrieval, Question Answering
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-5309/
PDF https://www.aclweb.org/anthology/D19-5309
PWC https://paperswithcode.com/paper/textgraphs-2019-shared-task-on-multi-hop
Repo
Framework
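
The reported MAP metric has a short reference implementation: average precision is computed per question over the ranked fact list, then averaged across questions. A minimal sketch:

```python
def average_precision(ranked_ids, gold_ids):
    gold, hits, score = set(gold_ids), 0, 0.0
    for rank, fact in enumerate(ranked_ids, start=1):
        if fact in gold:
            hits += 1
            score += hits / rank        # precision at each gold fact's rank
    return score / len(gold) if gold else 0.0

def mean_average_precision(predictions, gold):
    """Both arguments: dicts mapping question id -> list of fact ids."""
    return sum(average_precision(predictions[q], gold[q]) for q in gold) / len(gold)

# Ranking the two gold facts 1st and 3rd scores (1/1 + 2/3) / 2 ≈ 0.83:
print(average_precision(["f1", "f9", "f2"], ["f1", "f2"]))
```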

Proceedings of the IWCS Shared Task on Semantic Parsing

Title Proceedings of the IWCS Shared Task on Semantic Parsing
Authors
Abstract
Tasks Semantic Parsing
Published 2019-05-01
URL https://www.aclweb.org/anthology/W19-1200/
PDF https://www.aclweb.org/anthology/W19-1200
PWC https://paperswithcode.com/paper/proceedings-of-the-iwcs-shared-task-on
Repo
Framework

Zoom to Learn, Learn to Zoom

Title Zoom to Learn, Learn to Zoom
Authors Xuaner Zhang, Qifeng Chen, Ren Ng, Vladlen Koltun
Abstract This paper shows that when applying machine learning to digital zoom, it is beneficial to operate on real, RAW sensor data. Existing learning-based super-resolution methods do not use real sensor data, instead operating on processed RGB images. We show that these approaches forfeit detail and accuracy that can be gained by operating on raw data, particularly when zooming in on distant objects. The key barrier to using real sensor data for training is that ground-truth high-resolution imagery is missing. We show how to obtain such ground-truth data via optical zoom and contribute a dataset, SR-RAW, for real-world computational zoom. We use SR-RAW to train a deep network with a novel contextual bilateral loss that is robust to mild misalignment between input and output images. The trained network achieves state-of-the-art performance in 4X and 8X computational zoom. We also show that synthesizing sensor data by resampling high-resolution RGB images is an oversimplified approximation of real sensor data and noise, resulting in worse image quality.
Tasks Super-Resolution
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Zhang_Zoom_to_Learn_Learn_to_Zoom_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Zhang_Zoom_to_Learn_Learn_to_Zoom_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/zoom-to-learn-learn-to-zoom-1
Repo
Framework
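
The contextual bilateral loss admits a compact sketch: each output patch is matched to its nearest target patch under a distance that mixes a feature term with a spatial term, which is what buys robustness to mild misalignment. Cosine distance and the spatial weight below are our illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def cobi_loss(out_feats, tgt_feats, out_xy, tgt_xy, spatial_weight=0.1):
    """*_feats: (N, C) patch features; *_xy: (N, 2) normalized patch positions."""
    f_dist = 1 - F.normalize(out_feats, dim=1) @ F.normalize(tgt_feats, dim=1).T
    s_dist = torch.cdist(out_xy, tgt_xy)            # pairwise spatial distances
    combined = f_dist + spatial_weight * s_dist     # bilateral: feature + space
    return combined.min(dim=1).values.mean()        # best target match per patch

out, tgt = torch.randn(100, 32), torch.randn(100, 32)
xy = torch.rand(100, 2)                             # stand-in patch coordinates
print(cobi_loss(out, tgt, xy, xy))
```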

Jiuge: A Human-Machine Collaborative Chinese Classical Poetry Generation System

Title Jiuge: A Human-Machine Collaborative Chinese Classical Poetry Generation System
Authors Guo Zhipeng, Xiaoyuan Yi, Maosong Sun, Wenhao Li, Cheng Yang, Jiannan Liang, Huimin Chen, Yuhui Zhang, Ruoyu Li
Abstract Research on the automatic generation of poetry, a treasure of human culture, has lasted for decades. Most existing systems, however, are merely model-oriented: they take some user-specified keywords as input and complete the generation process in one pass, with little user participation. We believe that the machine, being a collaborator or an assistant, should not replace human beings in poetic creation. Therefore, we propose Jiuge, a human-machine collaborative Chinese classical poetry generation system. Unlike previous systems, Jiuge allows users to repeatedly revise the unsatisfactory parts of a generated poem draft. According to the revision, the poem is dynamically updated and regenerated. After this revision and modification procedure, the user can collaboratively write a satisfying poem together with the Jiuge system. Besides, Jiuge can accept multi-modal inputs, such as keywords, plain text, or images. By exposing options for poetry genres, styles, and revision modes, Jiuge, acting as a professional assistant, allows constant and active participation of users in poetic creation.
Tasks
Published 2019-07-01
URL https://www.aclweb.org/anthology/P19-3005/
PDF https://www.aclweb.org/anthology/P19-3005
PWC https://paperswithcode.com/paper/jiuge-a-human-machine-collaborative-chinese
Repo
Framework

AFD-Net: Aggregated Feature Difference Learning for Cross-Spectral Image Patch Matching

Title AFD-Net: Aggregated Feature Difference Learning for Cross-Spectral Image Patch Matching
Authors Dou Quan, Xuefeng Liang, Shuang Wang, Shaowei Wei, Yanfeng Li, Ning Huyan, Licheng Jiao
Abstract Image patch matching across different spectral domains is more challenging than within a single spectral domain. We consider the reason to be twofold: 1. the weak discriminative features learned by conventional methods; 2. the significant appearance difference between the two image domains. To tackle these problems, we propose an aggregated feature difference learning network (AFD-Net). Unlike other methods that merely rely on high-level features, we find that feature differences at other levels also provide useful learning information. Thus, the multi-level feature differences are aggregated to enhance discrimination. To make features invariant across different domains, we introduce a domain-invariant feature extraction network based on instance normalization (IN). In order to optimize AFD-Net, we borrow the large margin cosine loss, which can minimize intra-class distance and maximize inter-class distance between matching and non-matching samples. Extensive experiments show that AFD-Net largely outperforms the state of the art on the cross-spectral dataset and, meanwhile, demonstrates considerable generalizability on a single-spectral dataset.
Tasks
Published 2019-10-01
URL http://openaccess.thecvf.com/content_ICCV_2019/html/Quan_AFD-Net_Aggregated_Feature_Difference_Learning_for_Cross-Spectral_Image_Patch_Matching_ICCV_2019_paper.html
PDF http://openaccess.thecvf.com/content_ICCV_2019/papers/Quan_AFD-Net_Aggregated_Feature_Difference_Learning_for_Cross-Spectral_Image_Patch_Matching_ICCV_2019_paper.pdf
PWC https://paperswithcode.com/paper/afd-net-aggregated-feature-difference
Repo
Framework
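
The borrowed large margin cosine loss is the CosFace-style objective; a minimal sketch for the binary match/non-match decision follows (scale and margin values below are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Large margin cosine loss: classify on cosine similarity to class weights,
# subtracting a margin m from the true class and scaling logits by s.
class LMCL(nn.Module):
    def __init__(self, feat_dim=128, num_classes=2, s=30.0, m=0.35):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.s, self.m = s, m

    def forward(self, feats, labels):
        cos = F.linear(F.normalize(feats), F.normalize(self.weight))  # (B, classes)
        margin = F.one_hot(labels, cos.size(1)) * self.m              # margin at true class
        return F.cross_entropy(self.s * (cos - margin), labels)

feats = torch.randn(16, 128)           # aggregated feature differences
labels = torch.randint(0, 2, (16,))    # 1 = matching pair, 0 = non-matching
print(LMCL()(feats, labels))
```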

A type-theoretical reduction of morphological, syntactic and semantic compositionality to a single level of description

Title A type-theoretical reduction of morphological, syntactic and semantic compositionality to a single level of description
Authors Erkki Luuk
Abstract The paper presents NLC, a new formalism for modeling natural language (NL) compositionality. NLC is a functional type system (i.e. one based on mathematical functions and their types). Its main features include a close correspondence with NL and an integrated modeling of morphological, syntactic and semantic compositionality. The integration is effected with a subclass of compound types (types which are syntactic compounds of multiple types or their terms), while the correspondence is sought with function types and polymorphism. The paper also presents an implementation of NLC in Coq. The implementation formalizes a diverse fragment of NL, with NLC expressions type checking and failing to type check in exactly the same ways that NL expressions pass and fail their acceptability tests. Among other things, this demonstrates the possibility of reducing morphological, syntactic and semantic compositionality to a single level of description. The level is tentatively identified with semantic compositionality, an interpretation which, besides being supported by results from language processing, has interesting implications for NL structure and modeling.
Tasks
Published 2019-09-01
URL https://www.aclweb.org/anthology/R19-1078/
PDF https://www.aclweb.org/anthology/R19-1078
PWC https://paperswithcode.com/paper/a-type-theoretical-reduction-of-morphological
Repo
Framework
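
The claim that acceptability reduces to type checking can be illustrated with a toy far simpler than NLC, and in Python rather than Coq: a lexicon assigns each word a type, and function application succeeds or fails exactly where the phrase is or is not acceptable.

```python
# Toy sketch (ours, not NLC): atomic types are strings, a function type is an
# (argument, result) pair, and ill-formed phrases fail to type check.
LEXICON = {
    "dog": "N",               # noun
    "barks": ("N", "S"),      # intransitive verb: N -> S
    "quickly": ("S", "S"),    # sentence modifier: S -> S
}

def apply(func_word, arg_word):
    f, a = LEXICON[func_word], LEXICON[arg_word]
    if not isinstance(f, tuple) or f[0] != a:
        raise TypeError(f"'{func_word} {arg_word}' fails to type check")
    return f[1]

print(apply("barks", "dog"))   # 'S': "dog barks" is acceptable
try:
    apply("dog", "barks")      # ill-formed, just as in natural language
except TypeError as err:
    print(err)
```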

Coupling Global and Local Context for Unsupervised Aspect Extraction

Title Coupling Global and Local Context for Unsupervised Aspect Extraction
Authors Ming Liao, Jing Li, Haisong Zhang, Lingzhi Wang, Xixin Wu, Kam-Fai Wong
Abstract Aspect words, indicating opinion targets, are essential in expressing and understanding human opinions. To identify aspects, most previous efforts focus on using sequence tagging models trained on human-annotated data. This work studies unsupervised aspect extraction and explores how words appear in global context (at the sentence level) and local context (conveyed by neighboring words). We propose a novel neural model capable of coupling global and local representations to discover aspect words. Experimental results on two benchmarks, laptop and restaurant reviews, show that our model significantly outperforms the state-of-the-art models from previous studies evaluated with varying metrics. Analysis of the model output shows its ability to learn meaningful and coherent aspect representations. We further investigate how words are distributed in global and local context, and find that aspect and non-aspect words do exhibit different contexts, which explains our superiority in unsupervised aspect extraction.
Tasks Aspect Extraction
Published 2019-11-01
URL https://www.aclweb.org/anthology/D19-1465/
PDF https://www.aclweb.org/anthology/D19-1465
PWC https://paperswithcode.com/paper/coupling-global-and-local-context-for
Repo
Framework
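
The coupling of global (sentence-level) and local (neighboring-word) context can be sketched as two context vectors combined per word; the paper trains a neural model to do this, whereas the fixed computation below merely illustrates the structure.

```python
import numpy as np

# Global context: the whole sentence; local context: a small window of
# neighbors. A word is scored as an aspect candidate from their combination.
def context_vectors(embs, i, window=2):
    g = embs.mean(axis=0)                                  # global: sentence mean
    lo, hi = max(0, i - window), min(len(embs), i + window + 1)
    l = embs[lo:hi].mean(axis=0)                           # local: neighbor mean
    return np.concatenate([g, l])

rng = np.random.default_rng(0)
embs = rng.normal(size=(7, 50))        # 7 words, 50-dim embeddings
w = rng.normal(size=100)               # scoring weights (would be learned)
scores = [w @ context_vectors(embs, i) for i in range(len(embs))]
print(int(np.argmax(scores)))          # index of the most aspect-like word
```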

Time-Conditioned Action Anticipation in One Shot

Title Time-Conditioned Action Anticipation in One Shot
Authors Qiuhong Ke, Mario Fritz, Bernt Schiele
Abstract The goal of human action anticipation is to predict future actions. Ideally, in real-world applications such as video surveillance and self-driving systems, future actions should not only be predicted with high accuracy but also at arbitrary and variable time horizons, ranging from short- to long-term predictions. Current work mostly focuses on predicting the next action, so long-term prediction is achieved by recursively predicting each next action, which is both inefficient and accumulates errors. In this paper, we propose a novel time-conditioned method for efficient and effective long-term action anticipation. There are two key ingredients to our approach. First, explicitly conditioning our anticipation network on time allows it to efficiently anticipate long-term actions as well. Second, we propose an attended temporal feature and a time-conditioned skip connection to extract relevant and useful information from observations for effective anticipation. We conduct extensive experiments on the large-scale EPIC-Kitchens and 50Salads datasets. Experimental results show that the proposed method is capable of anticipating future actions at both short and long term, and achieves state-of-the-art performance.
Tasks
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Ke_Time-Conditioned_Action_Anticipation_in_One_Shot_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Ke_Time-Conditioned_Action_Anticipation_in_One_Shot_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/time-conditioned-action-anticipation-in-one
Repo
Framework
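
The first ingredient, conditioning on time, amounts to feeding the requested horizon into the network so that a single forward pass serves any prediction offset; a minimal sketch with illustrative names and sizes:

```python
import torch
import torch.nn as nn

# One-shot anticipation: the horizon is an input, so no recursive next-action
# prediction is needed to reach long-term horizons.
class TimeConditionedAnticipator(nn.Module):
    def __init__(self, feat_dim=256, n_actions=48):
        super().__init__()
        self.time_embed = nn.Sequential(nn.Linear(1, 32), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(feat_dim + 32, 128), nn.ReLU(), nn.Linear(128, n_actions))

    def forward(self, video_feats, horizon_seconds):
        t = self.time_embed(horizon_seconds.unsqueeze(-1))
        return self.head(torch.cat([video_feats, t], dim=-1))

model = TimeConditionedAnticipator()
feats = torch.randn(4, 256)                        # pooled observation features
logits_1s = model(feats, torch.full((4,), 1.0))    # what happens in 1 second?
logits_60s = model(feats, torch.full((4,), 60.0))  # and in a minute, same pass
```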

Phase-Only Image Based Kernel Estimation for Single Image Blind Deblurring

Title Phase-Only Image Based Kernel Estimation for Single Image Blind Deblurring
Authors Liyuan Pan, Richard Hartley, Miaomiao Liu, Yuchao Dai
Abstract The image motion blurring process is generally modelled as the convolution of a blur kernel with a latent image. The estimation of the blur kernel is therefore essential for blind image deblurring. Unlike existing approaches, which focus on approaching the problem by enforcing various priors on the blur kernel and the latent image, we aim at obtaining a high-quality blur kernel directly by studying the problem in the frequency domain. We show that the auto-correlation of the absolute phase-only image can provide faithful information about the motion that caused the blur (e.g., the motion direction and magnitude; we call this the motion pattern in this paper), leading to a new and efficient blur kernel estimation approach. The blur kernel is then refined and the sharp image is estimated by solving an optimization problem that enforces a regularization on the blur kernel and the latent image. We further extend our approach to handle non-uniform blur, which involves spatially varying blur kernels. Our approach is evaluated extensively on synthetic and real data and shows good results compared to state-of-the-art deblurring approaches.
Tasks Blind Image Deblurring, Deblurring, Single-Image Blind Deblurring
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Pan_Phase-Only_Image_Based_Kernel_Estimation_for_Single_Image_Blind_Deblurring_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Pan_Phase-Only_Image_Based_Kernel_Estimation_for_Single_Image_Blind_Deblurring_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/phase-only-image-based-kernel-estimation-for-1
Repo
Framework
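
The core frequency-domain observation is easy to reproduce: normalize the spectrum to unit magnitude, return to the spatial domain, and autocorrelate. The NumPy sketch below shows that computation only, not the paper's full kernel-estimation pipeline.

```python
import numpy as np

def phase_only_autocorrelation(img):
    spectrum = np.fft.fft2(img)
    phase_only = spectrum / (np.abs(spectrum) + 1e-12)   # keep phase, drop magnitude
    p = np.abs(np.fft.ifft2(phase_only))                 # absolute phase-only image
    # autocorrelation via the Wiener-Khinchin theorem
    ac = np.fft.ifft2(np.abs(np.fft.fft2(p)) ** 2).real
    return np.fft.fftshift(ac)                           # peak pattern ~ motion

img = np.random.rand(128, 128)          # stand-in for a blurred photograph
ac = phase_only_autocorrelation(img)
print(ac.shape, np.unravel_index(ac.argmax(), ac.shape))   # peak near the center
```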

A Flexible Convolutional Solver for Fast Style Transfers

Title A Flexible Convolutional Solver for Fast Style Transfers
Authors Gilles Puy, Patrick Perez
Abstract We propose a new flexible deep convolutional neural network (convnet) to perform fast neural style transfers. Our network is trained to solve approximately, but rapidly, the artistic style transfer problem of [Gatys et al.] for arbitrary styles. While solutions already exist, our network is uniquely flexible by design: it can be manipulated at runtime to enforce new constraints on the final output. As examples, we show that it can be modified to perform tasks such as fast photorealistic style transfer, or fast video style transfer with short-term consistency, with no retraining. This flexibility stems from the proposed architecture, which is obtained by unrolling the gradient descent algorithm used in [Gatys et al.]. Regularisations added to [Gatys et al.] to solve a new task can be reported on the fly in our network, even after training.
Tasks Style Transfer, Video Style Transfer
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Puy_A_Flexible_Convolutional_Solver_for_Fast_Style_Transfers_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Puy_A_Flexible_Convolutional_Solver_for_Fast_Style_Transfers_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/a-flexible-convolutional-solver-for-fast
Repo
Framework
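
The unrolling idea behind this flexibility can be sketched with a toy objective standing in for the Gatys et al. loss: each network "layer" performs one gradient step, so any constraint added to the loss at runtime influences every step without retraining. The loss and step size below are illustrative; in the actual network the unrolled steps are trained modules.

```python
import torch

# Toy stand-in for the Gatys et al. objective (content fidelity + style statistic).
def toy_style_loss(x, content, style_stats):
    return ((x - content) ** 2).mean() + (x.mean() - style_stats) ** 2

def unrolled_solver(content, style_stats, steps=10, lr=0.5, extra_loss=None):
    x = content.clone()
    for _ in range(steps):                       # each iteration = one "layer"
        x = x.detach().requires_grad_(True)
        loss = toy_style_loss(x, content, style_stats)
        if extra_loss is not None:               # constraint added at runtime
            loss = loss + extra_loss(x)
        (g,) = torch.autograd.grad(loss, x)
        x = x - lr * g                           # unrolled gradient step
    return x.detach()

content = torch.rand(1, 3, 8, 8)
out = unrolled_solver(content, style_stats=torch.tensor(0.7))
# e.g. add a temporal-consistency term on the fly for video style transfer:
prev = torch.rand_like(content)
out2 = unrolled_solver(content, torch.tensor(0.7),
                       extra_loss=lambda x: ((x - prev) ** 2).mean())
```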