April 2, 2020

Paper Group ANR 193

Distributed Averaging Methods for Randomized Second Order Optimization. Improving the Tightness of Convex Relaxation Bounds for Training Certifiably Robust Classifiers. Learned Spectral Computed Tomography. Learning to Transfer Texture from Clothing Images to 3D Humans. Augmented Cyclic Consistency Regularization for Unpaired Image-to-Image Transla …

Distributed Averaging Methods for Randomized Second Order Optimization

Title Distributed Averaging Methods for Randomized Second Order Optimization
Authors Burak Bartan, Mert Pilanci
Abstract We consider distributed optimization problems where forming the Hessian is computationally challenging and communication is a significant bottleneck. We develop unbiased parameter averaging methods for randomized second order optimization that employ sampling and sketching of the Hessian. Existing works do not take the bias of the estimators into consideration, which limits their application to massively parallel computation. We provide closed-form formulas for regularization parameters and step sizes that provably minimize the bias for sketched Newton directions. We also extend the framework of second order averaging methods to introduce an unbiased distributed optimization framework for heterogeneous computing systems with varying worker resources. Additionally, we demonstrate the implications of our theoretical findings via large scale experiments performed on a serverless computing platform.
Tasks Distributed Optimization
Published 2020-02-16
URL https://arxiv.org/abs/2002.06540v1
PDF https://arxiv.org/pdf/2002.06540v1.pdf
PWC https://paperswithcode.com/paper/distributed-averaging-methods-for-randomized
Repo
Framework
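
As a rough illustration of the averaging idea above, here is a minimal NumPy sketch: each worker solves a Newton system built from an independent Gaussian sketch of the data matrix, and the master averages the resulting directions. The sketch size, regularizer, and step size are placeholders, not the bias-minimizing closed-form values the paper derives.

```python
import numpy as np

def sketched_newton_direction(A, b, x, m, lam, rng):
    """One worker: Newton direction from a Gaussian sketch of the data matrix.

    Objective: f(x) = 0.5 * ||A x - b||^2. The exact Hessian is A^T A; each
    worker approximates it with (S A)^T (S A) + lam * I, where S is an m x n
    Gaussian sketch. lam is an illustrative regularizer, not the paper's
    closed-form choice.
    """
    n, d = A.shape
    S = rng.standard_normal((m, n)) / np.sqrt(m)  # E[S^T S] = I
    SA = S @ A
    H_hat = SA.T @ SA + lam * np.eye(d)
    grad = A.T @ (A @ x - b)  # exact gradient, as in the averaging framework
    return np.linalg.solve(H_hat, grad)

rng = np.random.default_rng(0)
n, d, workers = 2000, 50, 10
A = rng.standard_normal((n, d))
x_star = rng.standard_normal(d)
b = A @ x_star
x = np.zeros(d)
for _ in range(10):
    # Each worker computes an independent sketched direction; the master averages.
    dirs = [sketched_newton_direction(A, b, x, m=200, lam=1e-2, rng=rng)
            for _ in range(workers)]
    x -= 1.0 * np.mean(dirs, axis=0)  # unit step size is a placeholder
print(np.linalg.norm(x - x_star))  # shrinks toward zero across iterations
```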

Improving the Tightness of Convex Relaxation Bounds for Training Certifiably Robust Classifiers

Title Improving the Tightness of Convex Relaxation Bounds for Training Certifiably Robust Classifiers
Authors Chen Zhu, Renkun Ni, Ping-yeh Chiang, Hengduo Li, Furong Huang, Tom Goldstein
Abstract Convex relaxations are effective for training and certifying neural networks against norm-bounded adversarial attacks, but they leave a large gap between certifiable and empirical robustness. In principle, convex relaxation can provide tight bounds if the solution to the relaxed problem is feasible for the original non-convex problem. We propose two regularizers that can be used to train neural networks that yield tighter convex relaxation bounds for robustness. In all of our experiments, the proposed regularizers result in higher certified accuracy than non-regularized baselines.
Tasks
Published 2020-02-22
URL https://arxiv.org/abs/2002.09766v1
PDF https://arxiv.org/pdf/2002.09766v1.pdf
PWC https://paperswithcode.com/paper/improving-the-tightness-of-convex-relaxation
Repo
Framework
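
For context, here is a minimal sketch of the kind of convex relaxation the paper tightens: plain interval bound propagation through a small ReLU network, with a certification check on the output bounds. The paper's two regularizers themselves are not reproduced here.

```python
import numpy as np

def interval_bounds(Ws, bs, x, eps):
    """Propagate the l_inf ball [x - eps, x + eps] through an affine-ReLU net.

    Standard interval bound propagation, one of the convex relaxations this
    line of work builds on; a looser bound means a larger gap between
    certifiable and empirical robustness.
    """
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(zip(Ws, bs)):
        mid, rad = (lo + hi) / 2, (hi - lo) / 2
        mid_out = W @ mid + b
        rad_out = np.abs(W) @ rad  # worst-case spread through the affine map
        lo, hi = mid_out - rad_out, mid_out + rad_out
        if i < len(Ws) - 1:  # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

rng = np.random.default_rng(1)
Ws = [rng.standard_normal((8, 4)), rng.standard_normal((3, 8))]
bs = [np.zeros(8), np.zeros(3)]
lo, hi = interval_bounds(Ws, bs, rng.standard_normal(4), eps=0.05)
# Certified if the true class's lower bound beats every other class's upper bound.
true = 0
certified = all(lo[true] > hi[j] for j in range(3) if j != true)
print(certified)
```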

Learned Spectral Computed Tomography

Title Learned Spectral Computed Tomography
Authors Dimitris Kamilis, Mario Blatter, Nick Polydorides
Abstract Spectral Photon-Counting Computed Tomography (SPCCT) is a promising technology that has shown a number of advantages over conventional X-ray Computed Tomography (CT) in the form of material separation, artefact removal and enhanced image quality. However, due to the increased complexity and non-linearity of the SPCCT governing equations, model-based reconstruction algorithms typically require handcrafted regularisation terms and meticulous tuning of hyperparameters, making them impractical to calibrate under variable conditions. Additionally, they typically incur high computational costs, and in cases of limited-angle data their imaging capability deteriorates significantly. Recently, Deep Learning has proven to provide state-of-the-art reconstruction performance in medical imaging applications while circumventing most of these challenges. Inspired by these advances, we propose a Deep Learning imaging method for SPCCT that exploits the expressive power of Neural Networks while also incorporating model knowledge. The method takes the form of a two-step learned primal-dual algorithm that is trained using case-specific data. The proposed approach is characterised by fast reconstruction capability and high imaging performance, even in limited-data cases, while avoiding the hand-tuning required by other optimisation approaches. We demonstrate the performance of the method in terms of reconstructed images and quality metrics via numerical examples inspired by cardiovascular imaging applications.
Tasks Computed Tomography (CT)
Published 2020-03-09
URL https://arxiv.org/abs/2003.04138v1
PDF https://arxiv.org/pdf/2003.04138v1.pdf
PWC https://paperswithcode.com/paper/learned-spectral-computed-tomography
Repo
Framework
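
A toy PyTorch sketch of an unrolled learned primal-dual scheme in the spirit of the paper; a fixed linear operator stands in for the non-linear SPCCT forward model, and the layer widths and iteration count are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class LearnedPrimalDual(nn.Module):
    """Minimal unrolled primal-dual sketch with learned update networks."""
    def __init__(self, A, n_iter=2):
        super().__init__()
        self.register_buffer("A", A)  # stand-in linear forward operator
        m, n = A.shape
        self.dual_nets = nn.ModuleList(
            [nn.Sequential(nn.Linear(2 * m, 64), nn.ReLU(), nn.Linear(64, m))
             for _ in range(n_iter)])
        self.primal_nets = nn.ModuleList(
            [nn.Sequential(nn.Linear(2 * n, 64), nn.ReLU(), nn.Linear(64, n))
             for _ in range(n_iter)])

    def forward(self, y):
        x = torch.zeros(y.shape[0], self.A.shape[1])  # primal (image) iterate
        h = torch.zeros_like(y)                       # dual (data) iterate
        for dual_net, primal_net in zip(self.dual_nets, self.primal_nets):
            # Dual update: learned correction from the current data residual.
            h = h + dual_net(torch.cat([h + x @ self.A.T - y, h], dim=1))
            # Primal update: learned correction from the back-projected dual.
            x = x + primal_net(torch.cat([h @ self.A, x], dim=1))
        return x

A = torch.randn(30, 16)
model = LearnedPrimalDual(A)
print(model(torch.randn(4, 30)).shape)  # torch.Size([4, 16])
```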

Learning to Transfer Texture from Clothing Images to 3D Humans

Title Learning to Transfer Texture from Clothing Images to 3D Humans
Authors Aymen Mir, Thiemo Alldieck, Gerard Pons-Moll
Abstract In this paper, we present a simple yet effective method to automatically transfer textures of clothing images (front and back) to 3D garments worn on top of SMPL, in real time. We first automatically compute training pairs of images with aligned 3D garments using a custom non-rigid 3D-to-2D registration method, which is accurate but slow. Using these pairs, we learn a mapping from pixels to the 3D garment surface. Our idea is to learn dense correspondences from garment image silhouettes to a 2D-UV map of a 3D garment surface using shape information alone, completely ignoring texture, which allows us to generalize to a wide range of web images. Several experiments demonstrate that our model is more accurate than widely used baselines such as thin-plate-spline warping and image-to-image translation networks, while being orders of magnitude faster. Our model opens the door for applications such as virtual try-on, and allows for the generation of 3D humans with varied textures, which is necessary for learning.
Tasks Image-to-Image Translation
Published 2020-03-04
URL https://arxiv.org/abs/2003.02050v2
PDF https://arxiv.org/pdf/2003.02050v2.pdf
PWC https://paperswithcode.com/paper/learning-to-transfer-texture-from-clothing
Repo
Framework
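
A hedged sketch of the correspondence idea: a hypothetical convolutional net maps a garment silhouette to per-pixel UV sampling coordinates, and texture is then pulled from the clothing photo with a differentiable grid sample. The architecture below is an invented stand-in, not the paper's network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical correspondence net: silhouette (1 channel) -> sampling
# coordinates in [-1, 1], predicted from shape information alone.
corr_net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 2, 3, padding=1), nn.Tanh())

silhouette = torch.rand(1, 1, 64, 64)     # stand-in garment mask
garment_image = torch.rand(1, 3, 64, 64)  # stand-in clothing photo

# The predicted coordinates act as a grid_sample grid: for each output texel
# we look up a pixel of the clothing image, so texture is transferred without
# the network ever seeing texture at training time.
uv = corr_net(silhouette).permute(0, 2, 3, 1)  # (N, H, W, 2)
texture_map = F.grid_sample(garment_image, uv, align_corners=False)
print(texture_map.shape)  # torch.Size([1, 3, 64, 64])
```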

Augmented Cyclic Consistency Regularization for Unpaired Image-to-Image Translation

Title Augmented Cyclic Consistency Regularization for Unpaired Image-to-Image Translation
Authors Takehiko Ohkawa, Naoto Inoue, Hirokatsu Kataoka, Nakamasa Inoue
Abstract Unpaired image-to-image (I2I) translation has received considerable attention in pattern recognition and computer vision because of recent advances in generative adversarial networks (GANs). However, due to the lack of explicit supervision, unpaired I2I models often fail to generate realistic images, especially on challenging datasets with varied backgrounds and poses. Hence, stabilization is indispensable for real-world applications of GANs. Herein, we propose Augmented Cyclic Consistency Regularization (ACCR), a novel regularization method for unpaired I2I translation. Our main idea is to enforce consistency regularization, originating from semi-supervised learning, on the discriminators, leveraging real, fake, reconstructed, and augmented samples. We regularize the discriminators to output similar predictions when fed pairs of original and perturbed images. We qualitatively clarify the generation properties of unpaired I2I models versus standard GANs, and explain why consistency regularization on fake and reconstructed samples works well. Quantitatively, our method outperforms the consistency-regularized GAN (CR-GAN) in digit translations and demonstrates efficacy against several data augmentation variants and cycle-consistent constraints.
Tasks Data Augmentation, Image-to-Image Translation
Published 2020-02-29
URL https://arxiv.org/abs/2003.00187v1
PDF https://arxiv.org/pdf/2003.00187v1.pdf
PWC https://paperswithcode.com/paper/augmented-cyclic-consistency-regularization
Repo
Framework
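
The discriminator consistency term is easy to sketch; below is a minimal version where `augment` is any semantics-preserving perturbation (additive noise as a stand-in), applied to one batch rather than the paper's full set of real, fake, and reconstructed samples.

```python
import torch
import torch.nn as nn

def consistency_loss(discriminator, images, augment):
    """Penalize the discriminator for changing its prediction under a
    semantics-preserving perturbation of the input."""
    d_orig = discriminator(images)
    d_aug = discriminator(augment(images))
    return nn.functional.mse_loss(d_aug, d_orig.detach())

disc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1))  # toy discriminator
x = torch.rand(8, 3, 32, 32)
loss = consistency_loss(disc, x, lambda t: t + 0.05 * torch.randn_like(t))
print(loss.item())
```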

A Computationally Efficient Neural Network Invariant to the Action of Symmetry Subgroups

Title A Computationally Efficient Neural Network Invariant to the Action of Symmetry Subgroups
Authors Piotr Kicki, Mete Ozay, Piotr Skrzypczyński
Abstract We introduce a method to design a computationally efficient $G$-invariant neural network that approximates functions invariant to the action of a given permutation subgroup $G \leq S_n$ of the symmetric group on input data. The key element of the proposed network architecture is a new $G$-invariant transformation module, which produces a $G$-invariant latent representation of the input data. This latent representation is then processed with a multi-layer perceptron in the network. We prove the universality of the proposed architecture, discuss its properties and highlight its computational and memory efficiency. Theoretical considerations are supported by numerical experiments involving different network configurations, which demonstrate the effectiveness and strong generalization properties of the proposed method in comparison to other $G$-invariant neural networks.
Tasks
Published 2020-02-18
URL https://arxiv.org/abs/2002.07528v1
PDF https://arxiv.org/pdf/2002.07528v1.pdf
PWC https://paperswithcode.com/paper/a-computationally-efficient-neural-network
Repo
Framework
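
To make the efficiency claim concrete, here is the brute-force baseline the paper's transformation module is designed to avoid: wrapping an arbitrary feature map with an average over the orbit of a permutation subgroup $G \leq S_n$, which guarantees invariance at a cost of $|G|$ forward passes.

```python
import itertools
import numpy as np

def orbit_average(f, x, G):
    """Brute-force G-invariant wrapper: average f over every permutation in G.

    Exact invariance, but |G| evaluations of f; avoiding this blow-up is the
    point of the paper's G-invariant transformation module.
    """
    return np.mean([f(x[list(g)]) for g in G], axis=0)

f = lambda v: np.tanh(v) * np.arange(1, len(v) + 1)  # deliberately non-symmetric
G = [list(p) + [3] for p in itertools.permutations(range(3))]  # S_3 acting on coords 0-2
x = np.array([0.3, -1.2, 0.7, 2.0])
y1 = orbit_average(f, x, G)
y2 = orbit_average(f, x[[1, 0, 2, 3]], G)  # input permuted within the subgroup
print(np.allclose(y1, y2))  # True: output is invariant to the action of G
```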

Transfer Learning for Abstractive Summarization at Controllable Budgets

Title Transfer Learning for Abstractive Summarization at Controllable Budgets
Authors Ritesh Sarkhel, Moniba Keymanesh, Arnab Nandi, Srinivasan Parthasarathy
Abstract Summarizing a document within an allocated budget while maintaining its major concepts is a challenging task. If the budget can take any arbitrary value and is not known beforehand, it becomes even more difficult. Most existing methods for abstractive summarization, including state-of-the-art neural networks, are data intensive. If the number of available training samples is limited, they fail to construct high-quality summaries. We propose MLS, an end-to-end framework that generates abstractive summaries with limited training data at arbitrary compression budgets. MLS employs a pair of supervised sequence-to-sequence networks. The first network, called the \textit{MFS-Net}, constructs a minimal feasible summary by identifying the key concepts of the input document. The second network, called the Pointer-Magnifier, then generates the final summary from the minimal feasible summary by leveraging an interpretable multi-headed attention model. Experiments on two cross-domain datasets show that MLS outperforms baseline methods over a range of success metrics including ROUGE and METEOR. We observed an improvement of approximately 4% in both metrics over the state-of-the-art convolutional network at lower budgets. Results from a human evaluation study also establish the effectiveness of MLS in generating complete, coherent summaries at arbitrary compression budgets.
Tasks Abstractive Text Summarization, Transfer Learning
Published 2020-02-18
URL https://arxiv.org/abs/2002.07845v1
PDF https://arxiv.org/pdf/2002.07845v1.pdf
PWC https://paperswithcode.com/paper/transfer-learning-for-abstractive
Repo
Framework

Abstractive Summarization for Low Resource Data using Domain Transfer and Data Synthesis

Title Abstractive Summarization for Low Resource Data using Domain Transfer and Data Synthesis
Authors Ahmed Magooda, Diane Litman
Abstract Training abstractive summarization models typically requires large amounts of data, which can be a limitation for many domains. In this paper we explore using domain transfer and data synthesis to improve the performance of recent abstractive summarization methods when applied to small corpora of student reflections. First, we explored whether tuning a state-of-the-art model trained on newspaper data could boost performance on student reflection data. Evaluations demonstrated that summaries produced by the tuned model achieved higher ROUGE scores than a model trained on only student reflection data or only newspaper data. The tuned model also achieved higher scores than extractive summarization baselines, and was additionally judged to produce more coherent and readable summaries in human evaluations. Second, we explored whether synthesizing summaries of student data could further boost performance. We proposed a template-based model to synthesize new data, which, when incorporated into training, further increased ROUGE scores. Finally, we showed that combining data synthesis with domain transfer achieved higher ROUGE scores than using either approach alone.
Tasks Abstractive Text Summarization
Published 2020-02-09
URL https://arxiv.org/abs/2002.03407v1
PDF https://arxiv.org/pdf/2002.03407v1.pdf
PWC https://paperswithcode.com/paper/abstractive-summarization-for-low-resource
Repo
Framework
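
A toy sketch of template-based synthesis, the idea behind the paper's data augmentation step: slot-bearing templates are refilled with keywords to create new training text. The templates and slot vocabulary below are invented stand-ins for the paper's reflection-specific templates.

```python
import random

# Hypothetical templates with keyword slots; in the paper these would be
# derived from real (document, summary) pairs of student reflections.
templates = ["Students struggled with {A} and wanted more examples of {B}.",
             "The most confusing topic was {A}, followed by {B}."]
slots = {"A": ["recursion", "pointers", "induction"],
         "B": ["dynamic programming", "proofs", "memory management"]}

def synthesize(n, seed=0):
    """Generate n synthetic summaries by refilling template slots."""
    rng = random.Random(seed)
    return [rng.choice(templates).format(A=rng.choice(slots["A"]),
                                         B=rng.choice(slots["B"]))
            for _ in range(n)]

print(synthesize(3))
```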

Deep-Geometric 6 DoF Localization from a Single Image in Topo-metric Maps

Title Deep-Geometric 6 DoF Localization from a Single Image in Topo-metric Maps
Authors Tom Roussel, Punarjay Chakravarty, Gaurav Pandey, Tinne Tuytelaars, Luc Van Eycken
Abstract We describe a Deep-Geometric Localizer that estimates the full 6 Degree of Freedom (DoF) global pose of the camera from a single image in a previously mapped environment. Our map is a topo-metric one, with discrete topological nodes whose 6 DoF poses are known. Each topo-node in our map also comprises a set of points whose 2D features and 3D locations are stored as part of the mapping process. For the mapping phase, we utilise a stereo camera and a regular stereo visual SLAM pipeline. During the localization phase, we take a single camera image, localize it to a topological node using Deep Learning, and use a geometric algorithm (PnP) on the matched 2D features (and their 3D positions in the topo map) to determine the full 6 DoF globally consistent pose of the camera. Our method decouples the mapping and localization algorithms and sensors (stereo and mono), and allows accurate 6 DoF pose estimation in a previously mapped environment using a single camera. With potential VR/AR and localization applications on single-camera devices such as mobile phones and drones, our hybrid algorithm compares favourably with the fully Deep-Learning-based PoseNet, which regresses pose from a single image, in simulated as well as real environments.
Tasks Pose Estimation
Published 2020-02-04
URL https://arxiv.org/abs/2002.01210v1
PDF https://arxiv.org/pdf/2002.01210v1.pdf
PWC https://paperswithcode.com/paper/deep-geometric-6-dof-localization-from-a
Repo
Framework
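
The geometric stage is standard enough to sketch with OpenCV: given 2D-3D correspondences from the retrieved topo-node (simulated below), PnP with RANSAC recovers the full 6 DoF pose. The deep topological retrieval and feature matching steps are assumed to have already produced the matches.

```python
import numpy as np
import cv2

rng = np.random.default_rng(2)
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])  # toy intrinsics

# Simulated map points stored in a topo-node, and their projections under a
# known ground-truth pose (stand-ins for matched 2D features).
pts_3d = rng.uniform(-1, 1, (50, 3)) + np.array([0, 0, 5.0])
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([0.3, -0.1, 0.2])
pts_2d, _ = cv2.projectPoints(pts_3d, rvec_true, tvec_true, K, None)

# PnP + RANSAC recovers the camera pose from the 2D-3D correspondences.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    pts_3d.astype(np.float32), pts_2d.astype(np.float32), K, None)
print(ok, rvec.ravel(), tvec.ravel())  # matches the simulated pose
```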

Multi-Cycle-Consistent Adversarial Networks for CT Image Denoising

Title Multi-Cycle-Consistent Adversarial Networks for CT Image Denoising
Authors Jinglan Liu, Yukun Ding, Jinjun Xiong, Qianjun Jia, Meiping Huang, Jian Zhuang, Bike Xie, Chun-Chen Liu, Yiyu Shi
Abstract CT image denoising can be treated as an image-to-image translation task where the goal is to learn the transform between a source domain $X$ (noisy images) and a target domain $Y$ (clean images). Recently, the cycle-consistent adversarial denoising network (CCADN) has achieved state-of-the-art results by enforcing a cycle-consistency loss without the need for paired training data. Our detailed analysis of CCADN raises a number of interesting questions. For example, if the noise is large, leading to a significant difference between domain $X$ and domain $Y$, can we bridge $X$ and $Y$ with an intermediate domain $Z$ such that both the denoising process between $X$ and $Z$ and that between $Z$ and $Y$ are easier to learn? As such intermediate domains lead to multiple cycles, how do we best enforce cycle-consistency? Driven by these questions, we propose a multi-cycle-consistent adversarial network (MCCAN) that builds intermediate domains and enforces both local and global cycle-consistency. The global cycle-consistency couples all generators together to model the whole denoising process, while the local cycle-consistency imposes effective supervision on the process between adjacent domains. Experiments show that both local and global cycle-consistency are important for the success of MCCAN, which outperforms the state of the art.
Tasks Denoising, Image Denoising, Image-to-Image Translation
Published 2020-02-27
URL https://arxiv.org/abs/2002.12130v1
PDF https://arxiv.org/pdf/2002.12130v1.pdf
PWC https://paperswithcode.com/paper/multi-cycle-consistent-adversarial-networks
Repo
Framework
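
A minimal sketch of the loss structure with a single intermediate domain $Z$ between noisy $X$ and clean $Y$: local terms constrain each adjacent pair of domains, while the global term couples the whole chain. Linear toy generators stand in for the paper's convolutional ones, and the adversarial terms are omitted.

```python
import torch
import torch.nn as nn

l1 = nn.L1Loss()

# Toy generators between adjacent domains and their inverses.
g_xz, g_zx = nn.Linear(16, 16), nn.Linear(16, 16)
g_zy, g_yz = nn.Linear(16, 16), nn.Linear(16, 16)

x = torch.randn(4, 16)
z = g_xz(x)
y = g_zy(z)

# Local cycle-consistency: each adjacent pair of domains must invert cleanly.
local = l1(g_zx(z), x) + l1(g_yz(y), z)
# Global cycle-consistency: the full chain X -> Z -> Y -> Z -> X returns to x.
global_ = l1(g_zx(g_yz(y)), x)
loss = local + global_
print(loss.item())
```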

Photorealistic Lip Sync with Adversarial Temporal Convolutional Networks

Title Photorealistic Lip Sync with Adversarial Temporal Convolutional Networks
Authors Ruobing Zheng, Zhou Zhu, Bo Song, Changjiang Ji
Abstract Lip sync has emerged as a promising technique for generating mouth movements on a talking head. However, synthesizing a clear, accurate and human-like performance is still challenging. In this paper, we present a novel lip-sync solution for producing a high-quality and photorealistic talking head from speech. We focus on capturing the specific lip movement and talking style of the target person. We model the seq-to-seq mapping from audio signals to mouth features with two adversarial temporal convolutional networks. Experiments show our model outperforms traditional RNN-based baselines in both accuracy and speed. We also propose an image-to-image translation-based approach for generating high-resolution photoreal face appearance from synthetic facial maps. This fully trainable framework not only avoids cumbersome steps such as candidate-frame selection in graphics-based rendering methods but also solves some existing issues in recent neural-network-based solutions. Our work will benefit related applications such as conversational agents, virtual anchors, telepresence and gaming.
Tasks Image-to-Image Translation
Published 2020-02-20
URL https://arxiv.org/abs/2002.08700v1
PDF https://arxiv.org/pdf/2002.08700v1.pdf
PWC https://paperswithcode.com/paper/photorealistic-lip-sync-with-adversarial
Repo
Framework
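
The seq-to-seq stage can be sketched as a small causal temporal convolutional network mapping audio features to mouth features; feature dimensions, depth, and dilation schedule below are illustrative guesses, and the adversarial training and rendering stages are omitted.

```python
import torch
import torch.nn as nn

class CausalTCN(nn.Module):
    """Toy TCN: audio feature sequence -> mouth feature sequence."""
    def __init__(self, in_ch=26, out_ch=20, hidden=64, kernel=3, layers=4):
        super().__init__()
        blocks, ch = [], in_ch
        for i in range(layers):
            d = 2 ** i  # dilation doubles per layer, growing the receptive field
            blocks += [nn.ConstantPad1d(((kernel - 1) * d, 0), 0.0),  # causal pad
                       nn.Conv1d(ch, hidden, kernel, dilation=d), nn.ReLU()]
            ch = hidden
        blocks.append(nn.Conv1d(ch, out_ch, 1))  # per-frame mouth features
        self.net = nn.Sequential(*blocks)

    def forward(self, audio):  # audio: (batch, in_ch, time)
        return self.net(audio)

mouth = CausalTCN()(torch.randn(2, 26, 100))
print(mouth.shape)  # torch.Size([2, 20, 100]) - same length, causal
```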

SemI2I: Semantically Consistent Image-to-Image Translation for Domain Adaptation of Remote Sensing Data

Title SemI2I: Semantically Consistent Image-to-Image Translation for Domain Adaptation of Remote Sensing Data
Authors Onur Tasar, S L Happy, Yuliya Tarabalka, Pierre Alliez
Abstract Although convolutional neural networks have proven to be an effective tool for generating high-quality maps from remote sensing images, their performance deteriorates significantly when there is a large domain shift between training and test data. To address this issue, we propose a new data augmentation approach that transfers the style of test data to training data using generative adversarial networks. Our semantic segmentation framework consists of first training a U-net on the real training data and then fine-tuning it on fake training data, generated by the proposed approach, that has been stylized to match the test data. Our experimental results show that our framework outperforms existing domain adaptation methods.
Tasks Data Augmentation, Domain Adaptation, Image-to-Image Translation, Semantic Segmentation
Published 2020-02-14
URL https://arxiv.org/abs/2002.05925v2
PDF https://arxiv.org/pdf/2002.05925v2.pdf
PWC https://paperswithcode.com/paper/semi2i-semantically-consistent-image-to-image
Repo
Framework

Image-to-Image Translation with Text Guidance

Title Image-to-Image Translation with Text Guidance
Authors Bowen Li, Xiaojuan Qi, Philip H. S. Torr, Thomas Lukasiewicz
Abstract The goal of this paper is to embed controllable factors, i.e., natural language descriptions, into image-to-image translation with generative adversarial networks, which allows text descriptions to determine the visual attributes of synthetic images. We propose four key components: (1) the implementation of part-of-speech tagging to filter out non-semantic words in the given description, (2) the adoption of an affine combination module to effectively fuse different modality text and image features, (3) a novel refined multi-stage architecture to strengthen the differential ability of discriminators and the rectification ability of generators, and (4) a new structure loss to further improve discriminators to better distinguish real and synthetic images. Extensive experiments on the COCO dataset demonstrate that our method has a superior performance on both visual realism and semantic consistency with given descriptions.
Tasks Image-to-Image Translation, Part-Of-Speech Tagging
Published 2020-02-12
URL https://arxiv.org/abs/2002.05235v1
PDF https://arxiv.org/pdf/2002.05235v1.pdf
PWC https://paperswithcode.com/paper/image-to-image-translation-with-text-guidance
Repo
Framework
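
Component (1) is straightforward to sketch with NLTK: part-of-speech tagging followed by a whitelist of visually meaningful tags. The tag whitelist below is an illustrative choice, not necessarily the paper's.

```python
import nltk

# POS tagger model; the resource name varies across NLTK versions
# ("averaged_perceptron_tagger_eng" in newer releases).
nltk.download("averaged_perceptron_tagger", quiet=True)

KEEP = ("NN", "VB", "JJ")  # nouns, verbs, adjectives: visually meaningful words

def filter_semantic(description):
    """Drop function words that carry little visual information."""
    tagged = nltk.pos_tag(description.split())
    return [word for word, tag in tagged if tag.startswith(KEEP)]

print(filter_semantic("a small yellow bird with black wings perched on a branch"))
# -> ['small', 'yellow', 'bird', 'black', 'wings', 'perched', 'branch']
```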

Normalization of Input-output Shared Embeddings in Text Generation Models

Title Normalization of Input-output Shared Embeddings in Text Generation Models
Authors Jinyang Liu, Yujia Zhai, Zizhong Chen
Abstract Neural-network-based models are state of the art for various Natural Language Processing tasks; however, the input and output dimension problem in these networks has still not been fully resolved, especially in text generation tasks (e.g. Machine Translation, Text Summarization) in which both input and output have huge vocabularies. Input-output embedding weight sharing has therefore been introduced and widely adopted, but it leaves room for improvement. Drawing on linear algebra and statistical theory, this paper identifies a shortcoming of the existing input-output embedding weight-sharing method and then proposes methods for improving shared input-output embeddings, among which normalization of the embedding weight matrices shows the best performance. These methods are nearly free of computational cost, can be combined with other embedding techniques, and are effective when applied to state-of-the-art neural network models. For Transformer-big models, the normalization techniques yield up to a 0.6 BLEU improvement over the original model on the WMT'16 En-De dataset, and similar BLEU improvements on the IWSLT'14 datasets. For DynamicConv models, a 0.5 BLEU improvement is attained on the WMT'16 En-De dataset, and a 0.41 BLEU improvement on the IWSLT'14 De-En translation task.
Tasks Machine Translation, Text Generation, Text Summarization
Published 2020-01-22
URL https://arxiv.org/abs/2001.07885v2
PDF https://arxiv.org/pdf/2001.07885v2.pdf
PWC https://paperswithcode.com/paper/normalization-of-input-output-shared
Repo
Framework
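
A minimal PyTorch sketch of tied input-output embeddings with row normalization, a plausible stand-in for the normalization studied in the paper (the exact scheme and any learned scale factor are assumptions here).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormalizedTiedEmbedding(nn.Module):
    """One weight matrix serves both the input embedding and the output
    projection; each token's vector is L2-normalized before use."""
    def __init__(self, vocab, dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(vocab, dim) * dim ** -0.5)

    def embed(self, token_ids):   # input side
        return F.normalize(self.weight, dim=-1)[token_ids]

    def logits(self, hidden):     # output side, same (normalized) matrix
        return hidden @ F.normalize(self.weight, dim=-1).T

emb = NormalizedTiedEmbedding(vocab=1000, dim=64)
h = emb.embed(torch.tensor([[1, 5, 42]]))  # (1, 3, 64)
print(emb.logits(h).shape)                 # torch.Size([1, 3, 1000])
```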

Characterising hot stellar systems with confidence

Title Characterising hot stellar systems with confidence
Authors Souradeep Chattopadhyay, Ranjan Maitra
Abstract Hot stellar systems (HSS) are collections of stars bound together by gravitational attraction. These systems hold clues to many mysteries of outer space, so understanding their origin, evolution and physical properties is important but remains a huge challenge. We used multivariate $t$-mixtures model-based clustering to analyze 13456 hot stellar systems from Misgeld & Hilker (2011), including 12763 candidate globular clusters, and found eight homogeneous groups using the Bayesian Information Criterion (BIC). A nonparametric bootstrap procedure was used to estimate the confidence of each of our clustering assignments. The eight groups can be characterized in terms of correlation, mass, effective radius and surface density. Using this correlation-mass-effective radius-surface density notation, the largest group, Group 1, can be described as having positive-low-low-moderate characteristics. The other groups, numbered in decreasing size, are similarly characterised: Group 2 has positive-low-low-high characteristics, Group 3 positive-low-low-moderate, Group 4 positive-low-low-high, Group 5 positive-low-moderate-moderate and Group 6 positive-moderate-low-high. The smallest group (Group 8) shows negative-low-moderate-moderate characteristics. Group 7 has no candidate clusters and so cannot be similarly labeled, but the mass-effective radius correlation for these non-candidates indicates that they are larger than typical globular clusters. Assertions drawn for each group are ambiguous for a few HSS with low classification confidence. Our analysis identifies distinct kinds of HSS with varying confidence and provides novel insight into their physical and evolutionary properties.
Tasks
Published 2020-03-12
URL https://arxiv.org/abs/2003.05777v2
PDF https://arxiv.org/pdf/2003.05777v2.pdf
PWC https://paperswithcode.com/paper/characterization-of-hot-stellar-systems-with
Repo
Framework
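
A hedged sketch of the model-selection recipe: fit mixture models for a range of component counts and keep the lowest BIC. scikit-learn provides only Gaussian mixtures, so this substitutes Gaussians for the paper's heavier-tailed multivariate $t$-mixtures, on simulated stand-in data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Simulated stand-in for the mass / effective radius / surface density
# measurements; two planted groups so the BIC curve has a clear minimum.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (300, 3)), rng.normal(4, 0.5, (200, 3))])

best_k, best_bic, best_model = None, np.inf, None
for k in range(1, 8):
    gm = GaussianMixture(n_components=k, n_init=3, random_state=0).fit(X)
    bic = gm.bic(X)
    if bic < best_bic:
        best_k, best_bic, best_model = k, bic, gm
print(best_k)                    # lowest BIC picks the number of groups (~2 here)
labels = best_model.predict(X)   # cluster assignments for characterization
```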