April 3, 2020

3498 words 17 mins read

Paper Group AWR 35

Adversarial Perturbations Fool Deepfake Detectors. Wise Sliding Window Segmentation: A classification-aided approach for trajectory segmentation. What Deep CNNs Benefit from Global Covariance Pooling: An Optimization Perspective. Cloud-Net+: A Cloud Segmentation CNN for Landsat 8 Remote Sensing Imagery Optimized with Filtered Jaccard Loss Function. …

Adversarial Perturbations Fool Deepfake Detectors

Title Adversarial Perturbations Fool Deepfake Detectors
Authors Apurva Gandhi, Shomik Jain
Abstract This work uses adversarial perturbations to enhance deepfake images and fool common deepfake detectors. We created adversarial perturbations using the Fast Gradient Sign Method and the Carlini and Wagner L2 norm attack in both blackbox and whitebox settings. Detectors achieved over 95% accuracy on unperturbed deepfakes, but less than 27% accuracy on perturbed deepfakes. We also explore two improvements to deepfake detectors: (i) Lipschitz regularization, and (ii) Deep Image Prior (DIP). Lipschitz regularization constrains the gradient of the detector with respect to the input in order to increase robustness to input perturbations. The DIP defense removes perturbations using generative convolutional neural networks in an unsupervised manner. Regularization improved the detection of perturbed deepfakes on average, including a 10% accuracy boost in the blackbox case. The DIP defense achieved 95% accuracy on perturbed deepfakes that fooled the original detector, while retaining 98% accuracy in other cases on a 100 image subsample.
Tasks Face Swapping
Published 2020-03-24
URL https://arxiv.org/abs/2003.10596v1
PDF https://arxiv.org/pdf/2003.10596v1.pdf
PWC https://paperswithcode.com/paper/adversarial-perturbations-fool-deepfake
Repo https://github.com/ApGa/adversarial_deepfakes
Framework pytorch
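
The FGSM attack mentioned in the abstract is a single signed-gradient step on the input. Below is a minimal PyTorch sketch of that step, not the authors’ implementation (their repo is linked above); the detector, inputs, and epsilon value are illustrative placeholders.

```python
# Minimal FGSM sketch (PyTorch). `detector` is any classifier that maps
# images to logits; epsilon controls perturbation strength.
import torch
import torch.nn.functional as F

def fgsm_perturb(detector, image, label, epsilon=0.01):
    """Perturb `image` with one gradient-sign step that increases the detector's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(detector(image), label)
    loss.backward()
    # Step in the direction of the loss gradient's sign, then clamp to a valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```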

Wise Sliding Window Segmentation: A classification-aided approach for trajectory segmentation

Title Wise Sliding Window Segmentation: A classification-aided approach for trajectory segmentation
Authors Mohammad Etemad, Zahra Etemad, Amilcar Soares, Vania Bogorny, Stan Matwin, Luis Torgo
Abstract Large amounts of mobility data are being generated from many different sources, and several data mining methods have been proposed for this data. One of the most critical steps in trajectory data mining is segmentation. This task can be seen as a pre-processing step in which a trajectory is divided into several meaningful consecutive sub-sequences. This process is necessary because trajectory patterns may not hold for the entire trajectory but only for parts of it. In this work, we propose a supervised trajectory segmentation algorithm, called Wise Sliding Window Segmentation (WS-II). It processes the trajectory coordinates to find behavioral changes in space and time, generating an error signal that is then used to train a binary classifier for segmenting trajectory data. This algorithm is flexible and can be used in different domains. We evaluate our method on three real datasets from different domains (meteorology, fishing, and individual movements) and compare it with four other trajectory segmentation algorithms: OWS, GRASP-UTS, CB-SMoT, and SPD. The proposed algorithm achieves the highest performance on all datasets, with statistically significant differences in terms of the harmonic mean of purity and coverage.
Tasks
Published 2020-03-23
URL https://arxiv.org/abs/2003.10248v1
PDF https://arxiv.org/pdf/2003.10248v1.pdf
PWC https://paperswithcode.com/paper/wise-sliding-window-segmentation-a
Repo https://github.com/metemaad/WS-II
Framework none
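
As a rough illustration of the idea in the abstract, the sketch below slides a window over a trajectory, turns local kinematics into an error signal, and leaves the final break/no-break decision to a binary classifier. The speed-deviation feature and window size are assumptions made for illustration, not the paper’s exact design (see the WS-II repo above).

```python
import numpy as np

def error_signal(points, times, window=5):
    """points: (n, 2) coordinates; times: (n,) timestamps."""
    # Per-step speeds from consecutive coordinates and timestamps.
    speeds = np.linalg.norm(np.diff(points, axis=0), axis=1) / np.diff(times)
    deviations = []
    for i in range(window, len(speeds) - window):
        local = speeds[i - window:i + window + 1]   # behaviour around point i
        deviations.append(abs(speeds[i] - local.mean()))
    return np.array(deviations)

# With labelled break/no-break windows, any binary classifier can then
# mark segmentation points, e.g.:
# from sklearn.linear_model import LogisticRegression
# clf = LogisticRegression().fit(train_errors.reshape(-1, 1), train_labels)
# breaks = clf.predict(error_signal(points, times).reshape(-1, 1))
```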

What Deep CNNs Benefit from Global Covariance Pooling: An Optimization Perspective

Title What Deep CNNs Benefit from Global Covariance Pooling: An Optimization Perspective
Authors Qilong Wang, Li Zhang, Banggu Wu, Dongwei Ren, Peihua Li, Wangmeng Zuo, Qinghua Hu
Abstract Recent works have demonstrated that global covariance pooling (GCP) can improve the performance of deep convolutional neural networks (CNNs) on visual classification tasks. Despite considerable advances, the reasons for the effectiveness of GCP on deep CNNs have not been well studied. In this paper, we attempt to understand what deep CNNs gain from GCP from an optimization perspective. Specifically, we explore the effect of GCP on deep CNNs in terms of the Lipschitzness of the optimization loss and the predictiveness of gradients, and show that GCP makes the optimization landscape smoother and the gradients more predictive. Furthermore, we discuss the connection between GCP and second-order optimization for deep CNNs. More importantly, the above findings account for several merits of covariance pooling for training deep CNNs that have not previously been recognized or fully explored, including significant acceleration of network convergence (i.e., networks trained with GCP support rapid decay of learning rates, achieving favorable performance while significantly reducing the number of training epochs), stronger robustness to distorted examples generated by image corruptions and perturbations, and good generalization to different vision tasks, e.g., object detection and instance segmentation. We conduct extensive experiments with various deep CNN models on diverse tasks, and the results provide strong support for our findings.
Tasks Instance Segmentation, Object Detection, Semantic Segmentation
Published 2020-03-25
URL https://arxiv.org/abs/2003.11241v1
PDF https://arxiv.org/pdf/2003.11241v1.pdf
PWC https://paperswithcode.com/paper/what-deep-cnns-benefit-from-global-covariance
Repo https://github.com/ZhangLi-CS/GCP_Optimization
Framework pytorch
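
For concreteness, a bare-bones global covariance pooling layer looks like the PyTorch sketch below: channel-wise feature vectors are centered and their covariance replaces global average pooling. Practical GCP networks also apply matrix square-root normalization (e.g., iSQRT-COV), which this sketch omits; treat it as an illustration of the pooling step only.

```python
import torch
import torch.nn as nn

class GlobalCovariancePooling(nn.Module):
    def forward(self, x):                      # x: (B, C, H, W) feature maps
        b, c, h, w = x.shape
        feats = x.reshape(b, c, h * w)         # each channel becomes a length-HW vector
        feats = feats - feats.mean(dim=2, keepdim=True)
        cov = feats @ feats.transpose(1, 2) / (h * w - 1)  # (B, C, C) covariance
        # The upper triangle suffices since cov is symmetric; flatten for the classifier.
        idx = torch.triu_indices(c, c)
        return cov[:, idx[0], idx[1]]
```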

Cloud-Net+: A Cloud Segmentation CNN for Landsat 8 Remote Sensing Imagery Optimized with Filtered Jaccard Loss Function

Title Cloud-Net+: A Cloud Segmentation CNN for Landsat 8 Remote Sensing Imagery Optimized with Filtered Jaccard Loss Function
Authors Sorour Mohajerani, Parvaneh Saeedi
Abstract Cloud segmentation is one of the fundamental steps in optical remote sensing image analysis. Current methods for identifying cloud regions in aerial or satellite images are not accurate enough, especially in the presence of snow and haze. This paper presents a deep learning-based framework to address the problem of cloud detection in Landsat 8 imagery. The proposed method benefits from a convolutional neural network (Cloud-Net+) with multiple blocks, which is trained with a novel loss function (Filtered Jaccard loss). The proposed loss function is more sensitive to the absence of cloud pixels in an image and penalizes/rewards the predicted mask more accurately. The combination of Cloud-Net+ and the Filtered Jaccard loss function delivers superior results on four public cloud detection datasets. Our experiments on one of the most common public datasets in computer vision (the Pascal VOC dataset) show that the proposed network/loss function can be used in other segmentation tasks for more accurate performance/evaluation.
Tasks Cloud Detection
Published 2020-01-23
URL https://arxiv.org/abs/2001.08768v1
PDF https://arxiv.org/pdf/2001.08768v1.pdf
PWC https://paperswithcode.com/paper/cloud-net-a-cloud-segmentation-cnn-for
Repo https://github.com/dveyarangi/cloud-net-plus
Framework pytorch
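
A plain soft Jaccard (IoU) loss is differentiable and easy to write down; the PyTorch sketch below shows it. The handling of cloud-free images in the `empty` branch is our assumption about what the “filtered” variant addresses, not the paper’s exact formulation.

```python
import torch

def soft_jaccard_loss(pred, target, eps=1e-7):
    """pred: sigmoid probabilities, target: {0, 1} mask, both (B, 1, H, W)."""
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = (pred + target - pred * target).sum(dim=(1, 2, 3))
    jaccard = (inter + eps) / (union + eps)
    # Assumed filtering: when the mask has no cloud pixels, score the
    # prediction against the background so empty masks still give a
    # useful, accurate penalty/reward.
    empty = target.sum(dim=(1, 2, 3)) == 0
    bg = ((1 - pred) * (1 - target)).sum(dim=(1, 2, 3)) \
         / (1 - target).sum(dim=(1, 2, 3)).clamp(min=1)
    return torch.where(empty, 1 - bg, 1 - jaccard).mean()
```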

Reusing Discriminators for Encoding: Towards Unsupervised Image-to-Image Translation

Title Reusing Discriminators for Encoding: Towards Unsupervised Image-to-Image Translation
Authors Runfa Chen, Wenbing Huang, Binghui Huang, Fuchun Sun, Bin Fang
Abstract Unsupervised image-to-image translation is a central task in computer vision. Current translation frameworks discard the discriminator once the training process is completed. This paper proposes a novel role for the discriminator: reusing it to encode the images of the target domain. The proposed architecture, termed NICE-GAN, has two advantages over previous approaches. First, it is more compact, since no independent encoding component is required; second, this plug-in encoder is directly trained by the adversarial loss, making it more informative and more effectively trained if a multi-scale discriminator is applied. The main issue in NICE-GAN is the coupling of translation with discrimination along the encoder, which could cause training inconsistency when we play the min-max game via GAN. To tackle this issue, we develop a decoupled training strategy in which the encoder is trained only when maximizing the adversarial loss and is kept frozen otherwise. Extensive experiments on four popular benchmarks demonstrate the superior performance of NICE-GAN over state-of-the-art methods in terms of FID, KID, and human preference. Comprehensive ablation studies are also carried out to isolate the validity of each proposed component. Our code is available at https://github.com/alpc91/NICE-GAN-pytorch.
Tasks Image-to-Image Translation, Unsupervised Image-To-Image Translation
Published 2020-02-29
URL https://arxiv.org/abs/2003.00273v6
PDF https://arxiv.org/pdf/2003.00273v6.pdf
PWC https://paperswithcode.com/paper/reusing-discriminators-for-encoding-towards
Repo https://github.com/alpc91/NICE-GAN-pytorch
Framework pytorch
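
The core architectural move, reusing the discriminator’s early layers as the encoder, can be pictured with the structural sketch below. Layer shapes and depths are placeholders; the real architecture is in the linked repo.

```python
import torch.nn as nn

class ReusableDiscriminator(nn.Module):
    """Discriminator whose early layers double as the translation encoder."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(              # shared with the generator side
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        self.classifier = nn.Conv2d(128, 1, 4, padding=1)  # real/fake head

    def forward(self, x):
        z = self.encoder(x)            # reused as the encoder output for translation
        return z, self.classifier(z)   # embedding + PatchGAN-style logits
```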

On Consequentialism and Fairness

Title On Consequentialism and Fairness
Authors Dallas Card, Noah A. Smith
Abstract Recent work on fairness in machine learning has primarily emphasized how to define, quantify, and encourage “fair” outcomes. Less attention has been paid, however, to the ethical foundations which underlie such efforts. Among the ethical perspectives that should be taken into consideration is consequentialism, the position that, roughly speaking, outcomes are all that matter. Although consequentialism is not free from difficulties, and although it does not necessarily provide a tractable way of choosing actions (because of the combined problems of uncertainty, subjectivity, and aggregation), it nevertheless provides a powerful foundation from which to critique the existing literature on machine learning fairness. Moreover, it brings to the fore some of the tradeoffs involved, including the problem of who counts, the pros and cons of using a policy, and the relative value of the distant future. In this paper we provide a consequentialist critique of common definitions of fairness within machine learning, as well as a machine learning perspective on consequentialism. We conclude with a broader discussion of the issues of learning and randomization, which have important implications for the ethics of automated decision making systems.
Tasks Decision Making
Published 2020-01-02
URL https://arxiv.org/abs/2001.00329v1
PDF https://arxiv.org/pdf/2001.00329v1.pdf
PWC https://paperswithcode.com/paper/on-consequentialism-and-fairness
Repo https://github.com/summerscope/fair-ml-reading-group
Framework none

Retouchdown: Adding Touchdown to StreetLearn as a Shareable Resource for Language Grounding Tasks in Street View

Title Retouchdown: Adding Touchdown to StreetLearn as a Shareable Resource for Language Grounding Tasks in Street View
Authors Harsh Mehta, Yoav Artzi, Jason Baldridge, Eugene Ie, Piotr Mirowski
Abstract The Touchdown dataset (Chen et al., 2019) provides instructions by human annotators for navigation through New York City streets and for resolving spatial descriptions at a given location. To enable the wider research community to work effectively with the Touchdown tasks, we are publicly releasing the 29k raw Street View panoramas needed for Touchdown. We follow the process used for the StreetLearn data release (Mirowski et al., 2019) to check panoramas for personally identifiable information and blur them as necessary. These have been added to the StreetLearn dataset and can be obtained via the same process as used previously for StreetLearn. We also provide a reference implementation for both of the Touchdown tasks: vision and language navigation (VLN) and spatial description resolution (SDR). We compare our model results to those given in Chen et al. (2019) and show that the panoramas we have added to StreetLearn fully support both Touchdown tasks and can be used effectively for further research and comparison.
Tasks
Published 2020-01-10
URL https://arxiv.org/abs/2001.03671v1
PDF https://arxiv.org/pdf/2001.03671v1.pdf
PWC https://paperswithcode.com/paper/retouchdown-adding-touchdown-to-streetlearn
Repo https://github.com/clic-lab/touchdown
Framework pytorch

Ensemble Slice Sampling

Title Ensemble Slice Sampling
Authors Minas Karamanis, Florian Beutler
Abstract Slice Sampling has emerged as a powerful Markov Chain Monte Carlo algorithm that adapts to the characteristics of the target distribution with minimal hand-tuning. However, Slice Sampling’s performance is highly sensitive to the user-specified initial length scale hyperparameter, and the method generally struggles with poorly scaled or strongly correlated distributions. This paper introduces Ensemble Slice Sampling, a new class of algorithms that bypasses such difficulties by adaptively tuning the length scale. Furthermore, Ensemble Slice Sampling is made immune to linear correlations by exploiting an ensemble of parallel walkers. These algorithms are trivial to construct, require no hand-tuning, and can easily be implemented in parallel computing environments. Empirical tests show that Ensemble Slice Sampling can improve efficiency by more than an order of magnitude compared to conventional MCMC methods on highly correlated target distributions such as the Autoregressive Process of Order 1 and the Correlated Funnel distribution.
Tasks
Published 2020-02-14
URL https://arxiv.org/abs/2002.06212v1
PDF https://arxiv.org/pdf/2002.06212v1.pdf
PWC https://paperswithcode.com/paper/ensemble-slice-sampling
Repo https://github.com/minaskar/zeus
Framework none
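
For readers new to the base algorithm, the sketch below is the textbook univariate slice sampling step (step-out plus shrinkage, after Neal 2003) that Ensemble Slice Sampling extends with adaptive length scales and parallel walkers. It is not the paper’s ensemble variant, which is implemented in the zeus repo above.

```python
import numpy as np

def slice_sample_step(log_prob, x, width=1.0, rng=None):
    """One slice sampling transition from scalar state x."""
    rng = rng or np.random.default_rng()
    log_y = log_prob(x) + np.log(rng.uniform())   # slice height under the density
    left = x - width * rng.uniform()              # randomly positioned initial bracket
    right = left + width
    while log_prob(left) > log_y:                 # step out until both ends leave the slice
        left -= width
    while log_prob(right) > log_y:
        right += width
    while True:                                   # sample and shrink until accepted
        x_new = rng.uniform(left, right)
        if log_prob(x_new) > log_y:
            return x_new
        if x_new < x:
            left = x_new
        else:
            right = x_new
```

Note how `width` is exactly the initial length scale hyperparameter whose sensitivity the paper sets out to remove.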

A Hierarchical Location Normalization System for Text

Title A Hierarchical Location Normalization System for Text
Authors Dongyun Liang, Guohua Wang, Jing Nie, Binxu Zhai, Xiusen Gu
Abstract People today often learn about local events from massive collections of documents. Many texts contain location information, such as a city or road name, that is often incomplete or latent. It is therefore important to extract the administrative area of a text and organize the hierarchy of areas, a task called location normalization. Existing location detection systems either exclude hierarchical normalization or cover only a few specific regions. We propose a system named ROIBase that normalizes text against the Chinese hierarchical administrative divisions. ROIBase adopts a co-occurrence constraint as the basic framework to score administrative-area matches, performs inference via special embeddings, and expands recall using ROIs (regions of interest). It offers high efficiency and interpretability because it is built mainly on definite knowledge and has simpler logic than supervised models. We demonstrate that ROIBase outperforms feasible alternatives and serves as a strong support system for location normalization.
Tasks
Published 2020-01-21
URL https://arxiv.org/abs/2001.07320v1
PDF https://arxiv.org/pdf/2001.07320v1.pdf
PWC https://paperswithcode.com/paper/a-hierarchical-location-normalization-system
Repo https://github.com/waterblas/ROIBase-lite
Framework none
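
The co-occurrence idea can be shown with a toy gazetteer: mentions vote for their parent administrative area, and the area supported by the most children wins. The entries and scoring rule below are invented for illustration; the real system covers the full Chinese administrative hierarchy and adds embeddings and ROI expansion.

```python
from collections import Counter

GAZETTEER = {                      # name -> parent administrative area (toy data)
    "Haidian": "Beijing", "Chaoyang": "Beijing", "Pudong": "Shanghai",
}

def normalize_location(mentions):
    """Score candidate parent areas by how many of their children co-occur in the text."""
    votes = Counter(GAZETTEER[m] for m in mentions if m in GAZETTEER)
    return votes.most_common(1)[0][0] if votes else None

print(normalize_location(["Haidian", "Chaoyang", "Pudong"]))  # -> "Beijing"
```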

RAB: Provable Robustness Against Backdoor Attacks

Title RAB: Provable Robustness Against Backdoor Attacks
Authors Maurice Weber, Xiaojun Xu, Bojan Karlas, Ce Zhang, Bo Li
Abstract Recent studies have shown that deep neural networks (DNNs) are vulnerable to various attacks, including evasion attacks and poisoning attacks. On the defense side, there has been intense interest in provable robustness against evasion attacks. In this paper, we focus on improving model robustness against more diverse threat models. Specifically, we provide the first unified framework using a smoothing functional to certify model robustness against general adversarial attacks. In particular, we propose the first robust training process, RAB, to certify against backdoor attacks. We theoretically prove the robustness bound for machine learning models based on the RAB training process, analyze the tightness of the bound, and propose different smoothing noise distributions such as Gaussian and uniform distributions. Moreover, we evaluate the certified robustness of a family of “smoothed” DNNs trained in a differentially private fashion. In addition, we show theoretically that for simpler models such as K-nearest-neighbor models, it is possible to train the robust smoothed models efficiently. For K=1, we propose an exact algorithm to smooth the training process, eliminating the need to sample from a noise distribution. Empirically, we conduct comprehensive experiments on different machine learning models such as DNNs, differentially private DNNs, and KNN models on the MNIST, CIFAR-10, and ImageNet datasets to provide the first benchmark for certified robustness against backdoor attacks. We also evaluate KNN models on the spambase tabular dataset to demonstrate the approach’s advantages. Both the theoretical analysis of certified model robustness against arbitrary backdoors and the comprehensive benchmark on diverse ML models and datasets shed light on further robust learning strategies against training-time or even general adversarial attacks on ML models.
Tasks
Published 2020-03-19
URL https://arxiv.org/abs/2003.08904v1
PDF https://arxiv.org/pdf/2003.08904v1.pdf
PWC https://paperswithcode.com/paper/rab-provable-robustness-against-backdoor
Repo https://github.com/AI-secure/Robustness-Against-Backdoor-Attacks
Framework pytorch
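
Schematically, the smoothing idea resembles the sketch below: train an ensemble on noise-perturbed copies of the (possibly poisoned) training set and aggregate their predictions. The noise scale, ensemble size, and helper names (`make_model`, `train_fn`) are placeholders; the certified bounds come from the paper’s analysis, not from this code.

```python
import torch

def train_smoothed_ensemble(make_model, train_fn, X, y, n_models=10, sigma=0.5):
    """Train n_models classifiers, each on a Gaussian-noised copy of the data."""
    models = []
    for _ in range(n_models):
        noisy_X = X + sigma * torch.randn_like(X)   # noise helps wash out triggers
        models.append(train_fn(make_model(), noisy_X, y))
    return models

def smoothed_predict(models, x):
    votes = torch.stack([m(x).argmax(dim=1) for m in models])
    return votes.mode(dim=0).values                  # majority vote across the ensemble
```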

Neural Arithmetic Units

Title Neural Arithmetic Units
Authors Andreas Madsen, Alexander Rosenberg Johansen
Abstract Neural networks can approximate complex functions, but they struggle to perform exact arithmetic operations over real numbers. The lack of an inductive bias for arithmetic operations leaves neural networks without the underlying logic necessary to extrapolate on tasks such as addition, subtraction, and multiplication. We present two new neural network components: the Neural Addition Unit (NAU), which can learn exact addition and subtraction, and the Neural Multiplication Unit (NMU), which can multiply subsets of a vector. The NMU is, to our knowledge, the first arithmetic neural network component that can learn to multiply elements of a vector when the hidden size is large. The two new components draw inspiration from a theoretical analysis of recently proposed arithmetic components. We find that careful initialization, restricting the parameter space, and regularizing for sparsity are important when optimizing the NAU and NMU. Compared with previous neural units, the proposed NAU and NMU converge more consistently, have fewer parameters, learn faster, can converge for larger hidden sizes, obtain sparse and meaningful weights, and can extrapolate to negative and small values.
Tasks
Published 2020-01-14
URL https://arxiv.org/abs/2001.05016v1
PDF https://arxiv.org/pdf/2001.05016v1.pdf
PWC https://paperswithcode.com/paper/neural-arithmetic-units-1
Repo https://github.com/AndreasMadsen/stable-nalu
Framework pytorch
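
The NMU admits a compact expression: each output is prod_i (w_ij * x_i + 1 - w_ij), so a weight near 1 selects x_i as a factor while a weight near 0 contributes a neutral 1. The PyTorch sketch below follows that formula; the initialization range is illustrative and the sparsity regularizer is omitted, so it only loosely follows the paper’s recipe (see the linked repo for the real units).

```python
import torch
import torch.nn as nn

class NMU(nn.Module):
    """Neural Multiplication Unit: multiplies a learned soft subset of inputs."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(
            torch.empty(out_features, in_features).uniform_(0.25, 0.75))

    def forward(self, x):                # x: (B, in_features)
        w = self.weight.clamp(0, 1)     # keep gates in [0, 1]
        # w ~ 1 includes x_i as a factor; w ~ 0 contributes a neutral 1.
        return torch.prod(w * x.unsqueeze(1) + 1 - w, dim=-1)  # (B, out_features)
```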

PT2PC: Learning to Generate 3D Point Cloud Shapes from Part Tree Conditions

Title PT2PC: Learning to Generate 3D Point Cloud Shapes from Part Tree Conditions
Authors Kaichun Mo, He Wang, Xinchen Yan, Leonidas J. Guibas
Abstract 3D generative shape modeling is a fundamental research area in computer vision and interactive computer graphics, with many real-world applications. This paper investigates the novel problem of generating 3D shape point cloud geometry from a symbolic part tree representation. In order to learn such a conditional shape generation procedure in an end-to-end fashion, we propose a conditional GAN “part tree”-to-“point cloud” model (PT2PC) that disentangles the structural and geometric factors. The proposed model incorporates the part tree condition into the architecture design by passing messages top-down and bottom-up along the part tree hierarchy. Experimental results and a user study demonstrate the strengths of our method in generating perceptually plausible and diverse 3D point clouds, given the part tree condition. We also propose a novel structural measure for evaluating whether the generated shape point clouds satisfy the part tree conditions.
Tasks
Published 2020-03-19
URL https://arxiv.org/abs/2003.08624v1
PDF https://arxiv.org/pdf/2003.08624v1.pdf
PWC https://paperswithcode.com/paper/pt2pc-learning-to-generate-3d-point-cloud
Repo https://github.com/daerduoCarey/pt2pc
Framework pytorch
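
The top-down half of the message passing can be pictured with a toy recursion: each part node receives its parent’s context, mixes in per-part noise, and passes the result to its children. The dimensions, the single shared decoder, and the chair tree below are placeholders, not the paper’s architecture.

```python
import torch
import torch.nn as nn

class PartNode:
    def __init__(self, name, children=()):
        self.name, self.children = name, list(children)

decoder = nn.Linear(128 + 128, 128)        # (parent context, node latent) -> context

def top_down(node, parent_ctx, out):
    z = torch.randn(128)                   # per-part noise
    ctx = torch.tanh(decoder(torch.cat([parent_ctx, z])))
    out[node.name] = ctx                   # later decoded into that part's points
    for child in node.children:
        top_down(child, ctx, out)

chair = PartNode("chair", [PartNode("back"), PartNode("seat"), PartNode("legs")])
contexts = {}
top_down(chair, torch.zeros(128), contexts)
```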

Understanding and mitigating gradient pathologies in physics-informed neural networks

Title Understanding and mitigating gradient pathologies in physics-informed neural networks
Authors Sifan Wang, Yujun Teng, Paris Perdikaris
Abstract The widespread use of neural networks across different scientific domains often involves constraining them to satisfy certain symmetries, conservation laws, or other domain knowledge. Such constraints are often imposed as soft penalties during model training and effectively act as domain-specific regularizers of the empirical risk loss. Physics-informed neural networks are an example of this philosophy, in which the outputs of deep neural networks are constrained to approximately satisfy a given set of partial differential equations. In this work we review recent advances in scientific machine learning, with a specific focus on the effectiveness of physics-informed neural networks in predicting outcomes of physical systems and discovering hidden physics from noisy data. We also identify and analyze a fundamental mode of failure of such approaches, related to numerical stiffness that leads to unbalanced back-propagated gradients during model training. To address this limitation, we present a learning rate annealing algorithm that uses gradient statistics during model training to balance the interplay between different terms in composite loss functions. We also propose a novel neural network architecture that is more resilient to such gradient pathologies. Taken together, our developments provide new insights into the training of constrained neural networks and consistently improve the predictive accuracy of physics-informed neural networks by a factor of 50-100x across a range of problems in computational physics. All code and data accompanying this manuscript are publicly available at https://github.com/PredictiveIntelligenceLab/GradientPathologiesPINNs.
Tasks
Published 2020-01-13
URL https://arxiv.org/abs/2001.04536v1
PDF https://arxiv.org/pdf/2001.04536v1.pdf
PWC https://paperswithcode.com/paper/understanding-and-mitigating-gradient
Repo https://github.com/PredictiveIntelligenceLab/GradientPathologiesPINNs
Framework none
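
The learning rate annealing algorithm balances the composite loss by comparing gradient magnitudes of its terms. Below is a condensed sketch, assuming residual and boundary losses computed elsewhere and a moving-average rate `alpha`; the exact statistic and update schedule follow the paper only loosely (the real implementation is in the linked repo).

```python
import torch

def update_lambda(model, residual_loss, boundary_loss, lam, alpha=0.9):
    """Rescale the boundary-loss weight so its gradients match the PDE residual's."""
    grads_r = torch.autograd.grad(residual_loss, model.parameters(),
                                  retain_graph=True, allow_unused=True)
    grads_b = torch.autograd.grad(boundary_loss, model.parameters(),
                                  retain_graph=True, allow_unused=True)
    max_r = max(g.abs().max() for g in grads_r if g is not None)
    mean_b = torch.cat([g.abs().flatten() for g in grads_b if g is not None]).mean()
    lam_hat = (max_r / mean_b).detach()        # balance the two gradient scales
    return alpha * lam + (1 - alpha) * lam_hat  # exponential moving average
    # Total loss for the next step: residual_loss + lam * boundary_loss
```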

On the infinite width limit of neural networks with a standard parameterization

Title On the infinite width limit of neural networks with a standard parameterization
Authors Jascha Sohl-Dickstein, Roman Novak, Samuel S. Schoenholz, Jaehoon Lee
Abstract There are currently two parameterizations used to derive fixed kernels corresponding to infinite width neural networks: the NTK (Neural Tangent Kernel) parameterization and the naive standard parameterization. However, the extrapolation of both of these parameterizations to infinite width is problematic. The standard parameterization leads to a divergent neural tangent kernel, while the NTK parameterization fails to capture crucial aspects of finite width networks such as the dependence of training dynamics on relative layer widths, the relative training dynamics of weights and biases, and a nonstandard learning rate scale. Here we propose an improved extrapolation of the standard parameterization that preserves all of these properties as width is taken to infinity and yields a well-defined neural tangent kernel. We show experimentally that the resulting kernels typically achieve accuracy similar to that of kernels from an NTK parameterization, but with better correspondence to the parameterization of typical finite width networks. Additionally, with careful tuning of width parameters, the improved standard parameterization kernels can outperform those stemming from an NTK parameterization. We release code implementing this improved standard parameterization as part of the Neural Tangents library at https://github.com/google/neural-tangents.
Tasks
Published 2020-01-21
URL https://arxiv.org/abs/2001.07301v2
PDF https://arxiv.org/pdf/2001.07301v2.pdf
PWC https://paperswithcode.com/paper/on-the-infinite-width-limit-of-neural
Repo https://github.com/google/neural-tangents
Framework jax
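
The difference between the two parameterizations is easiest to see in a single dense layer: both give the same preactivation variance at initialization, but the width-dependent scale enters either in the forward pass (NTK) or in the weight initialization itself (standard), which changes how gradients scale with width. The NumPy toy below illustrates that contrast; it is a conceptual sketch, not the Neural Tangents API.

```python
import numpy as np

rng = np.random.default_rng(0)
width, sigma_w = 512, 1.0
x = rng.standard_normal(width) / np.sqrt(width)  # roughly unit-norm input

# NTK parameterization: unit-variance weights, scale applied in the forward pass.
W_ntk = rng.standard_normal((width, width))
y_ntk = (sigma_w / np.sqrt(width)) * W_ntk @ x

# Standard parameterization: the scale lives in the weight initialization itself.
W_std = rng.standard_normal((width, width)) * sigma_w / np.sqrt(width)
y_std = W_std @ x

print(y_ntk.std(), y_std.std())  # similar at init; training dynamics differ
```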

Feature Extraction for Hyperspectral Imagery: The Evolution from Shallow to Deep

Title Feature Extraction for Hyperspectral Imagery: The Evolution from Shallow to Deep
Authors Behnood Rasti, Danfeng Hong, Renlong Hang, Pedram Ghamisi, Xudong Kang, Jocelyn Chanussot, Jon Atli Benediktsson
Abstract Hyperspectral images provide detailed spectral information through hundreds of (narrow) spectral channels (also known as dimensions or bands), with continuous spectral information that can accurately classify diverse materials of interest. The increased dimensionality of such data significantly improves information content but challenges conventional techniques for accurate analysis of hyperspectral images (the so-called curse of dimensionality). Feature extraction, a vibrant field of research in the hyperspectral community, has evolved through decades of research to address this issue and extract informative features suitable for data representation and classification. Advances in feature extraction have been inspired by two fields of research, the popularization of image and signal processing and of machine (deep) learning, leading to two types of feature extraction approaches: shallow and deep techniques. This article outlines advances in feature extraction approaches for hyperspectral imagery by providing a technical overview of state-of-the-art techniques, offering useful entry points for researchers at different levels (students, researchers, and senior researchers) who wish to explore novel investigations on this challenging topic. In more detail, this paper provides a bird’s-eye view of shallow (both supervised and unsupervised) and deep feature extraction approaches dedicated to hyperspectral feature extraction and its application to hyperspectral image classification. Additionally, this paper compares 15 advanced techniques, with an emphasis on their methodological foundations, in terms of classification accuracy. The codes and libraries are shared at https://github.com/BehnoodRasti/HyFTech-Hyperspectral-Shallow-Deep-Feature-Extraction-Toolbox.
Tasks Hyperspectral Image Classification, Image Classification
Published 2020-03-05
URL https://arxiv.org/abs/2003.02822v2
PDF https://arxiv.org/pdf/2003.02822v2.pdf
PWC https://paperswithcode.com/paper/feature-extraction-for-hyperspectral-imagery
Repo https://github.com/BehnoodRasti/HyFTech-Hyperspectral-Shallow-Deep-Feature-Extraction-Toolbox
Framework none
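
As a taste of the shallow end of the spectrum the survey covers, the sketch below runs PCA over the spectral dimension of a hyperspectral cube, compressing hundreds of bands into a few feature bands per pixel before classification. The cube dimensions are synthetic placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA

cube = np.random.rand(100, 100, 200)            # (height, width, bands), synthetic
pixels = cube.reshape(-1, cube.shape[-1])       # one spectrum per pixel
features = PCA(n_components=10).fit_transform(pixels)
feature_maps = features.reshape(100, 100, 10)   # 10 feature bands per pixel
```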