January 24, 2020

1859 words 9 mins read

Paper Group NANR 199



World From Blur

Title World From Blur
Authors Jiayan Qiu, Xinchao Wang, Stephen J. Maybank, Dacheng Tao
Abstract What can we tell from a single motion-blurred image? We show in this paper that a 3D scene can be revealed. Unlike prior methods that focus on producing a deblurred image, we propose to estimate and take advantage of the hidden message of a blurred image, the relative motion trajectory, to restore the 3D scene collapsed during the exposure process. To this end, we train a deep network that jointly predicts the motion trajectory, the deblurred image, and the depth map, all of which in turn form a collaborative and self-supervised cycle in which they supervise one another to reproduce the input blurred image, enabling plausible 3D scene reconstruction from a single blurred image. We test the proposed model on several large-scale datasets we constructed based on benchmarks, as well as on real-world blurred images, and show that it yields very encouraging quantitative and qualitative results.
Tasks 3D Scene Reconstruction
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Qiu_World_From_Blur_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Qiu_World_From_Blur_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/world-from-blur
Repo
Framework
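The self-supervised cycle above hinges on reproducing the input blur from the predicted outputs. A minimal sketch of that reblurring step, assuming a hypothetical discrete motion trajectory of integer pixel offsets (the paper predicts the trajectory, deblurred image, and depth jointly with a network; none of these names come from the paper's code):

```python
import numpy as np

def reblur(sharp, trajectory):
    """Approximate a motion-blurred image as the average of the sharp
    image shifted along a discrete motion trajectory, given as a list
    of integer (dy, dx) offsets. This only illustrates how a predicted
    trajectory and deblurred image can reproduce the blurred input."""
    acc = np.zeros_like(sharp, dtype=float)
    for dy, dx in trajectory:
        acc += np.roll(sharp, shift=(dy, dx), axis=(0, 1))
    return acc / len(trajectory)
```

Comparing `reblur(predicted_sharp, predicted_trajectory)` against the observed blurred input is what closes the self-supervised loop.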

Neural RGB(r)D Sensing: Depth and Uncertainty From a Video Camera

Title Neural RGB(r)D Sensing: Depth and Uncertainty From a Video Camera
Authors Chao Liu, Jinwei Gu, Kihwan Kim, Srinivasa G. Narasimhan, Jan Kautz
Abstract Depth sensing is crucial for 3D reconstruction and scene understanding. Active depth sensors provide dense metric measurements, but often suffer from limitations such as restricted operating ranges, low spatial resolution, sensor interference, and high power consumption. In this paper, we propose a deep learning (DL) method to estimate per-pixel depth and its uncertainty continuously from a monocular video stream, with the goal of effectively turning an RGB camera into an RGB-D camera. Unlike prior DL-based methods, we estimate a depth probability distribution for each pixel rather than a single depth value, leading to an estimate of a 3D depth probability volume for each input frame. These depth probability volumes are accumulated over time under a Bayesian filtering framework as more incoming frames are processed sequentially, which effectively reduces depth uncertainty and improves accuracy, robustness, and temporal stability. Compared to prior work, the proposed approach achieves more accurate and stable results, and generalizes better to new datasets. Experimental results also show the output of our approach can be directly fed into classical RGB-D based 3D scanning methods for 3D scene reconstruction.
Tasks 3D Reconstruction, 3D Scene Reconstruction, Scene Understanding
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Liu_Neural_RGBrD_Sensing_Depth_and_Uncertainty_From_a_Video_Camera_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Liu_Neural_RGBrD_Sensing_Depth_and_Uncertainty_From_a_Video_Camera_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/neural-rgbrd-sensing-depth-and-uncertainty
Repo
Framework
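The Bayesian accumulation of per-pixel depth probability volumes can be sketched as follows. This is a simplified interpretation that omits the inter-frame warping of the volumes described in the paper; all names and shapes are assumptions, not the authors' code:

```python
import numpy as np

def accumulate_depth_volumes(per_frame_volumes, eps=1e-8):
    """Fuse per-frame depth probability volumes (each of shape
    (H, W, D) over D depth bins) by accumulating log-likelihoods
    over time, a simple Bayesian filter without motion compensation."""
    log_belief = np.zeros_like(per_frame_volumes[0])
    for vol in per_frame_volumes:
        log_belief += np.log(vol + eps)            # multiply likelihoods
    belief = np.exp(log_belief - log_belief.max(axis=-1, keepdims=True))
    belief /= belief.sum(axis=-1, keepdims=True)   # renormalize per pixel
    return belief

def depth_and_uncertainty(belief, depth_values):
    """Expected depth and per-pixel variance (the uncertainty map)."""
    mean = (belief * depth_values).sum(axis=-1)
    var = (belief * (depth_values - mean[..., None]) ** 2).sum(axis=-1)
    return mean, var
```

As more frames are accumulated, the belief concentrates on the true depth bin, which is exactly the "reduced uncertainty, improved stability" effect the abstract describes.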

Consistent Jumpy Predictions for Videos and Scenes

Title Consistent Jumpy Predictions for Videos and Scenes
Authors Ananya Kumar, S. M. Ali Eslami, Danilo Rezende, Marta Garnelo, Fabio Viola, Edward Lockhart, Murray Shanahan
Abstract Stochastic video prediction models take in a sequence of image frames, and generate a sequence of consecutive future image frames. These models typically generate future frames in an autoregressive fashion, which is slow and requires the input and output frames to be consecutive. We introduce a model that overcomes these drawbacks by generating a latent representation from an arbitrary set of frames that can then be used to simultaneously and efficiently sample temporally consistent frames at arbitrary time-points. For example, our model can “jump” and directly sample frames at the end of the video, without sampling intermediate frames. Synthetic video evaluations confirm substantial gains in speed and functionality without loss in fidelity. We also apply our framework to a 3D scene reconstruction dataset. Here, our model is conditioned on camera location and can sample consistent sets of images for what an occluded region of a 3D scene might look like, even if there are multiple possibilities for what that region might contain. Reconstructions and videos are available at https://bit.ly/2O4Pc4R.
Tasks 3D Scene Reconstruction, Video Prediction
Published 2019-05-01
URL https://openreview.net/forum?id=S1gQ5sRcFm
PDF https://openreview.net/pdf?id=S1gQ5sRcFm
PWC https://paperswithcode.com/paper/consistent-jumpy-predictions-for-videos-and-1
Repo
Framework

tweeDe – A Universal Dependencies treebank for German tweets

Title tweeDe – A Universal Dependencies treebank for German tweets
Authors Ines Rehbein, Josef Ruppenhofer, Bich-Ngoc Do
Abstract
Tasks
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-7811/
PDF https://www.aclweb.org/anthology/W19-7811
PWC https://paperswithcode.com/paper/tweede-a-universal-dependencies-treebank-for
Repo
Framework

A Conceptual Spaces Model of Socially Motivated Language Change

Title A Conceptual Spaces Model of Socially Motivated Language Change
Authors Heather Burnett, Olivier Bonami
Abstract
Tasks
Published 2019-01-01
URL https://www.aclweb.org/anthology/W19-0114/
PDF https://www.aclweb.org/anthology/W19-0114
PWC https://paperswithcode.com/paper/a-conceptual-spaces-model-of-socially
Repo
Framework

Retrieval-Augmented Convolutional Neural Networks Against Adversarial Examples

Title Retrieval-Augmented Convolutional Neural Networks Against Adversarial Examples
Authors Jake Zhao (Junbo), Kyunghyun Cho
Abstract We propose a retrieval-augmented convolutional network (RaCNN) and train it with local mixup, a novel variant of the recently proposed mixup algorithm. The proposed hybrid architecture, combining a convolutional network and an off-the-shelf retrieval engine, was designed to mitigate the adverse effect of off-manifold adversarial examples, while the proposed local mixup addresses on-manifold ones by explicitly encouraging the classifier to behave locally linearly on the data manifold. Our evaluation of the proposed approach against seven readily available adversarial attacks on three datasets (CIFAR-10, SVHN, and ImageNet) demonstrates improved robustness compared to a vanilla convolutional network, and performance comparable with state-of-the-art reactive defense approaches.
Tasks
Published 2019-06-01
URL http://openaccess.thecvf.com/content_CVPR_2019/html/Zhao_Retrieval-Augmented_Convolutional_Neural_Networks_Against_Adversarial_Examples_CVPR_2019_paper.html
PDF http://openaccess.thecvf.com/content_CVPR_2019/papers/Zhao_Retrieval-Augmented_Convolutional_Neural_Networks_Against_Adversarial_Examples_CVPR_2019_paper.pdf
PWC https://paperswithcode.com/paper/retrieval-augmented-convolutional-neural-1
Repo
Framework
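One way to read "local mixup" is that each example is mixed with a retrieved neighbour rather than a random sample, as in standard mixup. A sketch under that assumption, with the paper's retrieval engine replaced by a brute-force nearest-neighbour search (this is an interpretation, not the authors' implementation):

```python
import numpy as np

def local_mixup(x, y, features, alpha=0.2, rng=None):
    """Mix each example with its nearest neighbour in feature space,
    encouraging locally linear classifier behaviour on the manifold.
    x: inputs (n, ...); y: one-hot labels (n, C); features: (n, d)."""
    rng = rng or np.random.default_rng()
    n = len(x)
    # pairwise squared distances in feature space
    d = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)          # exclude self-matches
    nn = d.argmin(axis=1)                # nearest-neighbour indices
    lam = rng.beta(alpha, alpha, size=(n, 1))
    x_mix = lam * x.reshape(n, -1) + (1 - lam) * x[nn].reshape(n, -1)
    y_mix = lam * y + (1 - lam) * y[nn]
    return x_mix.reshape(x.shape), y_mix
```

Restricting the mixing partner to a neighbour keeps the interpolated points close to the data manifold, which is the stated goal of addressing on-manifold adversarial examples.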

Universal Adversarial Perturbation via Prior Driven Uncertainty Approximation

Title Universal Adversarial Perturbation via Prior Driven Uncertainty Approximation
Authors Hong Liu, Rongrong Ji, Jie Li, Baochang Zhang, Yue Gao, Yongjian Wu, Feiyue Huang
Abstract Deep learning models have shown their vulnerability to universal adversarial perturbations (UAPs), which are quasi-imperceptible. Compared to conventional supervised UAPs, which require knowledge of the training data, data-independent unsupervised UAPs are more broadly applicable. Existing unsupervised methods, however, fail to take advantage of model uncertainty to produce robust perturbations. In this paper, we propose a new unsupervised universal adversarial perturbation method, termed Prior Driven Uncertainty Approximation (PD-UA), which generates a robust UAP by fully exploiting the model uncertainty at each network layer. Specifically, a Monte Carlo sampling method is deployed to activate more neurons, increasing the model uncertainty for a better adversarial perturbation. A textural bias prior revealing statistical uncertainty is then proposed, which helps to improve attack performance. The UAP is crafted by stochastic gradient descent with a boosted momentum optimizer, and a Laplacian pyramid frequency model is finally used to maintain the statistical uncertainty. Extensive experiments demonstrate that our method achieves strong attack performance on the ImageNet validation set and significantly improves the fooling rate compared with state-of-the-art methods.
Tasks
Published 2019-10-01
URL http://openaccess.thecvf.com/content_ICCV_2019/html/Liu_Universal_Adversarial_Perturbation_via_Prior_Driven_Uncertainty_Approximation_ICCV_2019_paper.html
PDF http://openaccess.thecvf.com/content_ICCV_2019/papers/Liu_Universal_Adversarial_Perturbation_via_Prior_Driven_Uncertainty_Approximation_ICCV_2019_paper.pdf
PWC https://paperswithcode.com/paper/universal-adversarial-perturbation-via-prior
Repo
Framework
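The crafting loop, stripped of the paper's uncertainty objective and Laplacian-pyramid model, is a generic momentum-SGD ascent with an l-infinity projection. The sketch below assumes a caller-supplied `grad_fn` standing in for the gradient of PD-UA's uncertainty-maximisation loss; everything else is a common UAP recipe, not the paper's code:

```python
import numpy as np

def craft_uap(grad_fn, shape, eps=10 / 255, lr=0.01, momentum=0.9, steps=100):
    """Craft a universal perturbation by momentum SGD on an attack
    objective, projecting onto the l_inf ball of radius eps after
    each step. grad_fn(delta) returns the objective's gradient."""
    delta = np.zeros(shape)
    velocity = np.zeros(shape)
    for _ in range(steps):
        g = grad_fn(delta)
        velocity = momentum * velocity + g       # boosted momentum
        delta += lr * np.sign(velocity)          # ascend the objective
        delta = np.clip(delta, -eps, eps)        # project onto eps-ball
    return delta
```

Because the perturbation is universal, the same `delta` is added to every input image at test time; only the objective (here abstracted away) makes the method data-independent.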

A Comparative Corpus Analysis of PP Ordering in English and Chinese

Title A Comparative Corpus Analysis of PP Ordering in English and Chinese
Authors Zoey Liu
Abstract
Tasks
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-7905/
PDF https://www.aclweb.org/anthology/W19-7905
PWC https://paperswithcode.com/paper/a-comparative-corpus-analysis-of-pp-ordering
Repo
Framework

Universal Dependencies for Mbyá Guaraní

Title Universal Dependencies for Mbyá Guaraní
Authors Guillaume Thomas
Abstract
Tasks
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-8008/
PDF https://www.aclweb.org/anthology/W19-8008
PWC https://paperswithcode.com/paper/universal-dependencies-for-mbya-guarani
Repo
Framework

Nested Coordination in Universal Dependencies

Title Nested Coordination in Universal Dependencies
Authors Adam Przepiórkowski, Agnieszka Patejuk
Abstract
Tasks
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-8007/
PDF https://www.aclweb.org/anthology/W19-8007
PWC https://paperswithcode.com/paper/nested-coordination-in-universal-dependencies
Repo
Framework

HDT-UD: A very large Universal Dependencies Treebank for German

Title HDT-UD: A very large Universal Dependencies Treebank for German
Authors Emanuel Borges Völker, Maximilian Wendt, Felix Hennig, Arne Köhn
Abstract
Tasks
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-8006/
PDF https://www.aclweb.org/anthology/W19-8006
PWC https://paperswithcode.com/paper/hdt-ud-a-very-large-universal-dependencies
Repo
Framework

Divide, Conquer and Combine: Hierarchical Feature Fusion Network with Local and Global Perspectives for Multimodal Affective Computing

Title Divide, Conquer and Combine: Hierarchical Feature Fusion Network with Local and Global Perspectives for Multimodal Affective Computing
Authors Sijie Mai, Haifeng Hu, Songlong Xing
Abstract We propose a general strategy named 'divide, conquer and combine' for multimodal fusion. Instead of directly fusing features at the holistic level, we conduct fusion hierarchically so that both local and global interactions are considered for a comprehensive interpretation of multimodal embeddings. In the 'divide' and 'conquer' stages, we conduct local fusion by exploring the interaction of a portion of the aligned feature vectors across various modalities lying within a sliding window, which ensures that each part of the multimodal embeddings is explored sufficiently. On this basis, global fusion is conducted in the 'combine' stage to explore the interconnection across local interactions, via an Attentive Bi-directional Skip-connected LSTM that directly connects distant local interactions and integrates two levels of attention mechanism. In this way, local interactions can exchange information sufficiently and thus obtain an overall view of multimodal information. Our method achieves state-of-the-art performance on multimodal affective computing with higher efficiency.
Tasks
Published 2019-07-01
URL https://www.aclweb.org/anthology/P19-1046/
PDF https://www.aclweb.org/anthology/P19-1046
PWC https://paperswithcode.com/paper/divide-conquer-and-combine-hierarchical
Repo
Framework
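The divide/conquer/combine pipeline can be sketched with trivial stand-ins for the learned components: fusion inside each window is replaced by concatenation plus mean pooling, and the attentive skip-connected LSTM of the 'combine' stage by a plain mean over windows. All function names and shapes are illustrative assumptions:

```python
import numpy as np

def divide_conquer_combine(modalities, window=4, stride=2):
    """modalities: list of aligned feature sequences, each (T, d_i).
    'Divide' cuts them into sliding windows, 'conquer' fuses across
    modalities inside each window, 'combine' pools over windows."""
    T = modalities[0].shape[0]
    local = []
    for start in range(0, T - window + 1, stride):          # divide
        segs = [m[start:start + window] for m in modalities]
        fused = np.concatenate(segs, axis=-1).mean(axis=0)  # conquer
        local.append(fused)
    return np.stack(local).mean(axis=0)                     # combine
```

The point of the hierarchy survives even in this toy form: each window sees only a local portion of every modality, and only the final stage mixes information across distant windows.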

A Simple Recipe towards Reducing Hallucination in Neural Surface Realisation

Title A Simple Recipe towards Reducing Hallucination in Neural Surface Realisation
Authors Feng Nie, Jin-Ge Yao, Jinpeng Wang, Rong Pan, Chin-Yew Lin
Abstract Recent neural language generation systems often hallucinate contents (i.e., produce irrelevant or contradicted facts), especially when trained on loosely corresponding pairs of the input structure and text. To mitigate this issue, we propose to integrate a language understanding module for data refinement with self-training iterations to effectively induce strong equivalence between the input data and the paired text. Experiments on the E2E challenge dataset show that our proposed framework can remove more than 50% of the unaligned noise from the original data-text pairs. A vanilla sequence-to-sequence neural NLG model trained on the refined data improves on content correctness compared with the current state-of-the-art ensemble generator.
Tasks Text Generation
Published 2019-07-01
URL https://www.aclweb.org/anthology/P19-1256/
PDF https://www.aclweb.org/anthology/P19-1256
PWC https://paperswithcode.com/paper/a-simple-recipe-towards-reducing
Repo
Framework

Thirty Musts for Meaning Banking

Title Thirty Musts for Meaning Banking
Authors Lasha Abzianidze, Johan Bos
Abstract Meaning banking, creating a semantically annotated corpus for the purpose of semantic parsing or generation, is a challenging task. It is quite simple to come up with a complex meaning representation, but it is hard to design a simple meaning representation that captures many nuances of meaning. This paper lists some lessons learned in nearly ten years of meaning annotation during the development of the Groningen Meaning Bank (Bos et al., 2017) and the Parallel Meaning Bank (Abzianidze et al., 2017). The paper's format is rather unconventional: there is no explicit related work, no methodology section, no results, and no discussion (and the current snippet is not an abstract but actually an introductory preface). Instead, its structure is inspired by work of Traum (2000) and Bender (2013). The list starts with a brief overview of the existing meaning banks (Section 1), and the rest of the items are roughly divided into three groups: corpus collection (Sections 2 and 3), annotation methods (Sections 4-11), and design of meaning representations (Sections 12-30). We hope this overview will give inspiration and guidance in creating improved meaning banks in the future.
Tasks Semantic Parsing
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-3302/
PDF https://www.aclweb.org/anthology/W19-3302
PWC https://paperswithcode.com/paper/thirty-musts-for-meaning-banking
Repo
Framework

Building minority dependency treebanks, dictionaries and computational grammars at the same time—an experiment in Karelian treebanking

Title Building minority dependency treebanks, dictionaries and computational grammars at the same time—an experiment in Karelian treebanking
Authors Tommi A Pirinen
Abstract
Tasks
Published 2019-08-01
URL https://www.aclweb.org/anthology/W19-8016/
PDF https://www.aclweb.org/anthology/W19-8016
PWC https://paperswithcode.com/paper/building-minority-dependency-treebanks
Repo
Framework