January 30, 2020

3353 words 16 mins read

Paper Group ANR 237

Textual Adversarial Attack as Combinatorial Optimization

Title Textual Adversarial Attack as Combinatorial Optimization
Authors Yuan Zang, Chenghao Yang, Fanchao Qi, Zhiyuan Liu, Meng Zhang, Qun Liu, Maosong Sun
Abstract Adversarial attacks are carried out to reveal the vulnerability of deep neural networks. Textual adversarial attack is challenging because text is discrete and any perturbation may cause a large semantic change. Word substitution is a class of effective textual attack methods and has been extensively explored. However, existing word substitution-based attack methods suffer from poor semantic preservation, insufficient adversarial examples, or suboptimal attack results. In this paper, we formalize the word substitution-based attack as a combinatorial optimization problem. We also propose a novel attack model, comprising a sememe-based word substitution strategy and the particle swarm optimization algorithm, to tackle these problems. In experiments, we evaluate our attack model on the sentiment analysis task. Experimental results demonstrate that our model achieves higher attack success rates with fewer modifications than the baseline methods. An ablation study also verifies the superiority of both components of our model over previous ones.
Tasks Adversarial Attack, Combinatorial Optimization, Natural Language Inference, Sentiment Analysis, Word Embeddings
Published 2019-10-27
URL https://arxiv.org/abs/1910.12196v2
PDF https://arxiv.org/pdf/1910.12196v2.pdf
PWC https://paperswithcode.com/paper/open-the-boxes-of-words-incorporating-sememes
Repo
Framework
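
A minimal sketch of the word-substitution attack viewed as combinatorial search. `predict_proba` (the victim classifier) and `substitutes` (candidate words per position, e.g. drawn from a sememe lexicon) are hypothetical placeholders. The paper uses a particle swarm optimization search; here a plain population-based random search stands in, purely to illustrate the search space.

```python
import random

def substitution_attack(tokens, true_label, predict_proba, substitutes,
                        pop_size=20, iters=50, seed=0):
    rng = random.Random(seed)
    positions = [i for i in range(len(tokens)) if substitutes.get(i)]
    if not positions:
        return None

    def perturb(candidate):
        # Replace one randomly chosen position with a random substitute word.
        cand = list(candidate)
        i = rng.choice(positions)
        cand[i] = rng.choice(substitutes[i])
        return cand

    def score(candidate):
        # Lower probability of the true label means a better adversarial candidate.
        return predict_proba(candidate)[true_label]

    best = min((perturb(tokens) for _ in range(pop_size)), key=score)
    for _ in range(iters):
        challenger = min((perturb(best) for _ in range(pop_size)), key=score)
        if score(challenger) < score(best):
            best = challenger
        if score(best) < 0.5:   # true label likely no longer predicted (binary case)
            return best
    return None
```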

DADI: Dynamic Discovery of Fair Information with Adversarial Reinforcement Learning

Title DADI: Dynamic Discovery of Fair Information with Adversarial Reinforcement Learning
Authors Michiel A. Bakker, Duy Patrick Tu, Humberto Riverón Valdés, Krishna P. Gummadi, Kush R. Varshney, Adrian Weller, Alex Pentland
Abstract We introduce a framework for dynamic adversarial discovery of information (DADI), motivated by a scenario where information (a feature set) is used by third parties with unknown objectives. We train a reinforcement learning agent to sequentially acquire a subset of the information while balancing accuracy and fairness of downstream predictors. Based on the set of already acquired features, the agent decides dynamically to either collect more information from the set of available features or to stop and predict using the information that is currently available. Building on previous work exploring adversarial representation learning, we attain group fairness (demographic parity) by rewarding the agent with the adversary’s loss, computed over the final feature set. Importantly, however, the framework provides a more general starting point for fair or private dynamic information discovery. Finally, we demonstrate empirically, using two real-world datasets, that we can trade off fairness and predictive performance.
Tasks Representation Learning
Published 2019-10-30
URL https://arxiv.org/abs/1910.13983v1
PDF https://arxiv.org/pdf/1910.13983v1.pdf
PWC https://paperswithcode.com/paper/dadi-dynamic-discovery-of-fair-information
Repo
Framework
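
A minimal sketch of the dynamic acquisition loop with a stop action, in the spirit of the framework described above. `policy`, `task_loss`, and `adversary_loss` are hypothetical callables; the RL training that learns the policy, and the fairness adversary itself, are omitted.

```python
import numpy as np

def run_episode(x, policy, task_loss, adversary_loss, lam=1.0):
    acquired = np.zeros_like(x, dtype=bool)         # mask of collected features
    while not acquired.all():
        action = policy(x * acquired, acquired)     # feature index to acquire, or "stop"
        if action == "stop":
            break
        acquired[action] = True
    masked = x * acquired
    # The reward trades off downstream accuracy against group fairness: the
    # adversary's loss at recovering the protected attribute is added on.
    return -task_loss(masked) + lam * adversary_loss(masked)
```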

Generative Neural Network based Spectrum Sharing using Linear Sum Assignment Problems

Title Generative Neural Network based Spectrum Sharing using Linear Sum Assignment Problems
Authors Ahmed B. Zaky, Joshua Zhexue Huang, Kaishun Wu, Basem M. ElHalawany
Abstract Spectrum management and resource allocation (RA) problems are challenging and critical in a vast number of research areas, such as wireless communications and computer networks. Traditional approaches for solving such problems usually consume considerable time and memory, especially for large problem sizes. Recently, different machine learning approaches have been considered as promising techniques for combinatorial optimization problems, especially generative deep neural network models. In this work, we propose a resource allocation deep autoencoder network, as one of the promising generative models, for enabling spectrum sharing in underlay device-to-device (D2D) communication by solving linear sum assignment problems (LSAPs). Specifically, we investigate the performance of three different architectures for conditional variational autoencoders (CVAE): the convolutional (CVAE-CNN) autoencoder, the feed-forward (CVAE-FNN) autoencoder, and the hybrid (H-CVAE) autoencoder. The simulation results show that the proposed approach could be used as a replacement for conventional RA techniques, such as the Hungarian algorithm, due to its ability to find solutions to LSAPs of different sizes with high accuracy and very fast execution time. Moreover, the simulation results reveal that the proposed hybrid autoencoder architecture outperforms the other proposed architectures and state-of-the-art DNN techniques in accuracy.
Tasks Combinatorial Optimization
Published 2019-10-12
URL https://arxiv.org/abs/1910.05510v1
PDF https://arxiv.org/pdf/1910.05510v1.pdf
PWC https://paperswithcode.com/paper/generative-neural-network-based-spectrum
Repo
Framework
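
For reference, the conventional baseline named in the abstract is the exact LSAP solver (Hungarian algorithm), available in SciPy. Interpreting cost[i, j] as, say, the interference cost of letting D2D pair i reuse channel j is an illustrative assumption; the CVAE architectures the paper proposes as fast approximate solvers are not reproduced here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
cost = rng.random((8, 8))                     # 8 D2D pairs, 8 channels
rows, cols = linear_sum_assignment(cost)      # optimal one-to-one assignment
print(list(zip(rows.tolist(), cols.tolist())), float(cost[rows, cols].sum()))
```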

Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks

Title Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks
Authors David J. Miller, Zhen Xiang, George Kesidis
Abstract There is great potential for damage from adversarial learning (AL) attacks on machine-learning-based systems. In this paper, we provide a contemporary survey of AL, focused particularly on defenses against attacks on statistical classifiers. After introducing relevant terminology and the goals and range of possible knowledge of both attackers and defenders, we survey recent work on test-time evasion (TTE), data poisoning (DP), and reverse engineering (RE) attacks and, in particular, defenses against them. In doing so, we distinguish robust classification from anomaly detection (AD), unsupervised from supervised, and statistical hypothesis-based defenses from ones that do not have an explicit null (no attack) hypothesis; we identify the hyperparameters a particular method requires and its computational complexity, as well as the performance measures on which it was evaluated and the quality obtained. We then dig deeper, providing novel insights that challenge conventional AL wisdom and that target unresolved issues, including: 1) robust classification versus AD as a defense strategy; 2) the belief that attack success increases with attack strength, which ignores susceptibility to AD; 3) small perturbations for test-time evasion attacks: a fallacy or a requirement?; 4) the validity of the universal assumption that a TTE attacker knows the ground-truth class of the example to be attacked; 5) black, grey, or white box attacks as the standard for defense evaluation; 6) the susceptibility of query-based RE to an AD defense. We also discuss attacks on the privacy of training data. We then present benchmark comparisons of several defenses against TTE, RE, and backdoor DP attacks on images. The paper concludes with a discussion of future work.
Tasks Anomaly Detection, data poisoning
Published 2019-04-12
URL https://arxiv.org/abs/1904.06292v3
PDF https://arxiv.org/pdf/1904.06292v3.pdf
PWC https://paperswithcode.com/paper/adversarial-learning-in-statistical
Repo
Framework

Data Poisoning against Differentially-Private Learners: Attacks and Defenses

Title Data Poisoning against Differentially-Private Learners: Attacks and Defenses
Authors Yuzhe Ma, Xiaojin Zhu, Justin Hsu
Abstract Data poisoning attacks aim to manipulate the model produced by a learning algorithm by adversarially modifying the training set. We consider differential privacy as a defensive measure against this type of attack. We show that differentially-private learners are resistant to data poisoning attacks when the adversary is only able to poison a small number of items. However, this protection degrades as the adversary poisons more data. To illustrate, we design attack algorithms targeting objective-perturbation and output-perturbation learners, two standard approaches to differentially-private machine learning. Experiments show that our methods are effective when the attacker is allowed to poison sufficiently many training items.
Tasks data poisoning
Published 2019-03-23
URL https://arxiv.org/abs/1903.09860v2
PDF https://arxiv.org/pdf/1903.09860v2.pdf
PWC https://paperswithcode.com/paper/data-poisoning-against-differentially-private
Repo
Framework
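
A minimal sketch of an output-perturbation differentially-private learner, one of the two standard approaches the attacks target: train non-privately, then add Laplace noise calibrated to the sensitivity of the learned parameters. The sensitivity value here is a placeholder, not the paper’s exact analysis.

```python
import numpy as np

def dp_output_perturbation(theta_hat, sensitivity, epsilon, rng=None):
    rng = rng or np.random.default_rng(0)
    noise = rng.laplace(scale=sensitivity / epsilon, size=theta_hat.shape)
    return theta_hat + noise   # released, privatized parameters
```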

Identifying and Resisting Adversarial Videos Using Temporal Consistency

Title Identifying and Resisting Adversarial Videos Using Temporal Consistency
Authors Xiaojun Jia, Xingxing Wei, Xiaochun Cao
Abstract Video classification is a challenging task in computer vision. Although Deep Neural Networks (DNNs) have achieved excellent performance in video classification, recent research shows that adding imperceptible perturbations to clean videos can make well-trained models output wrong labels with high confidence. In this paper, we propose an effective defense framework to characterize and defend against adversarial videos. The proposed method contains two phases: (1) adversarial video detection using temporal consistency between adjacent frames, and (2) adversarial perturbation reduction via denoisers in the spatial and temporal domains, respectively. Specifically, because of the linear nature of DNNs, the imperceptible perturbations are amplified with increasing network depth, which leads to inconsistent DNN outputs between adjacent frames. Benign video frames, by contrast, usually yield the same outputs as their neighboring frames because the changes between them are slight. Based on this observation, we can distinguish adversarial videos from benign videos. After that, we apply different defense strategies against different attacks. We propose a temporal defense, which reconstructs polluted frames from their temporally neighboring clean frames, to deal with adversarial videos containing sparsely polluted frames. For videos with densely polluted frames, we use an efficient adversarial denoiser to process each frame in the spatial domain and thus purify the perturbations (we call this the spatial defense). A series of experiments conducted on the UCF-101 dataset demonstrate that the proposed method significantly improves the robustness of video classifiers against adversarial attacks.
Tasks Video Classification
Published 2019-09-11
URL https://arxiv.org/abs/1909.04837v1
PDF https://arxiv.org/pdf/1909.04837v1.pdf
PWC https://paperswithcode.com/paper/identifying-and-resisting-adversarial-videos
Repo
Framework
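
A minimal sketch of the temporal-consistency test described above: if per-frame predictions flip between adjacent frames more often than a threshold, flag the video as adversarial. `frame_model` is a hypothetical per-frame classifier returning class scores; the denoising defenses are not sketched here.

```python
import numpy as np

def looks_adversarial(frames, frame_model, max_flip_rate=0.2):
    labels = [int(np.argmax(frame_model(f))) for f in frames]
    flips = sum(a != b for a, b in zip(labels, labels[1:]))
    return flips / max(len(labels) - 1, 1) > max_flip_rate
```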

A Multimodal Vision Sensor for Autonomous Driving

Title A Multimodal Vision Sensor for Autonomous Driving
Authors Dongming Sun, Xiao Huang, Kailun Yang
Abstract This paper describes a multimodal vision sensor that integrates three types of cameras: a stereo camera, a polarization camera, and a panoramic camera. Each sensor provides a specific dimension of information: the stereo camera measures depth per pixel, the polarization camera obtains the degree of polarization, and the panoramic camera captures a 360-degree landscape. Data fusion and advanced environment perception can be built upon the combination of these sensors. Designed especially for autonomous driving, this vision sensor is shipped with a robust semantic segmentation network. In addition, we demonstrate how cross-modal enhancement can be achieved by registering the color image and the polarization image. An example of water hazard detection is given. To prove the multimodal vision sensor’s compatibility with different devices, a brief runtime performance analysis is carried out.
Tasks Autonomous Driving, Semantic Segmentation
Published 2019-08-15
URL https://arxiv.org/abs/1908.05649v1
PDF https://arxiv.org/pdf/1908.05649v1.pdf
PWC https://paperswithcode.com/paper/a-multimodal-vision-sensor-for-autonomous
Repo
Framework

Intelligent Policing Strategy for Traffic Violation Prevention

Title Intelligent Policing Strategy for Traffic Violation Prevention
Authors Monireh Dabaghchian, Amir Alipour-Fanid, Kai Zeng
Abstract Police officer presence at an intersection discourages potential traffic violators from breaking the law. It also prompts motorists to take precautions and follow the rules. However, because intersections are abundant and human resources are limited, it is not possible to assign a police officer to every intersection. In this paper, we propose an intelligent and optimal policing strategy for traffic violation prevention. Our model consists of a specific number of targeted intersections and two police officers with no prior knowledge of the number of traffic violations at the designated intersections. At each time interval, the proposed strategy assigns the two police officers to different intersections such that, at the end of the time horizon, maximum traffic violation prevention is achieved. Our methodology adapts the PROLA (Play and Random Observe Learning Algorithm) algorithm [1] to achieve an optimal traffic violation prevention strategy. Finally, we conduct a case study to evaluate and demonstrate the performance of the proposed method.
Tasks
Published 2019-09-20
URL https://arxiv.org/abs/1909.09291v1
PDF https://arxiv.org/pdf/1909.09291v1.pdf
PWC https://paperswithcode.com/paper/intelligent-policing-strategy-for-traffic
Repo
Framework
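
A toy stand-in for the per-interval assignment decision: send the two officers to the intersections with the highest estimated violation counts, exploring occasionally. This epsilon-greedy heuristic is only an illustration; the paper adapts the PROLA algorithm, which is not reproduced here.

```python
import random

def assign_officers(estimates, eps=0.1, rng=None):
    rng = rng or random.Random(0)
    n = len(estimates)
    if rng.random() < eps:
        return rng.sample(range(n), 2)                    # explore two intersections
    ranked = sorted(range(n), key=lambda i: -estimates[i])
    return ranked[:2]                                     # exploit the top-2 estimates
```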

Sum-Product Network Decompilation

Title Sum-Product Network Decompilation
Authors Cory J. Butz, Jhonatan S. Oliveira, Robert Peharz
Abstract There exists a dichotomy between classical probabilistic graphical models, such as Bayesian networks (BNs), and modern tractable models, such as sum-product networks (SPNs). The former generally have intractable inference but allow a high level of interpretability, while the latter admit a wide range of tractable inference routines but are typically harder to interpret. Due to this dichotomy, tools to convert between BNs and SPNs are desirable. While one direction – compiling BNs into SPNs – is well discussed in Darwiche’s seminal work on arithmetic circuit compilation, the converse direction – decompiling SPNs into BNs – has received surprisingly little attention. In this paper, we fill this gap by proposing SPN2BN, an algorithm that decompiles an SPN into a BN. SPN2BN has several salient features compared to the only two other works on decompiling SPNs. Most significantly, the BNs returned by SPN2BN are minimal independence maps. Secondly, SPN2BN is more parsimonious with respect to the introduction of latent variables. Thirdly, the output BN produced by SPN2BN can be precisely characterized with respect to the compiled BN: a certain set of directed edges is added to the input BN, giving what we call the moral closure. It immediately follows that there is a set of BNs related to the input BN that also return the same moral closure. Lastly, we establish that our compilation-decompilation process is idempotent. We confirm our results with systematic experiments on a number of synthetic BNs.
Tasks
Published 2019-12-20
URL https://arxiv.org/abs/1912.10092v1
PDF https://arxiv.org/pdf/1912.10092v1.pdf
PWC https://paperswithcode.com/paper/sum-product-network-decompilation
Repo
Framework

Ensemble Knowledge Distillation for Learning Improved and Efficient Networks

Title Ensemble Knowledge Distillation for Learning Improved and Efficient Networks
Authors Umar Asif, Jianbin Tang, Stefan Harrer
Abstract Ensemble models comprising deep Convolutional Neural Networks (CNNs) have shown significant improvements in model generalization, but at the cost of large computation and memory requirements. In this paper, we present a framework for learning compact CNN models with improved classification performance and model generalization. For this, we propose a compact student model with parallel branches that are trained using ground-truth labels and information from high-capacity teacher networks in an ensemble-learning fashion. Our framework provides two main benefits: i) distilling knowledge from different teachers into the student network promotes heterogeneity in feature learning at the different branches of the student network and enables the network to learn diverse solutions to the target problem; ii) coupling the branches of the student network through ensembling encourages collaboration and improves the quality of the final predictions by reducing the variance of the network outputs. Experiments on the well-established CIFAR-10 and CIFAR-100 datasets show that our Ensemble Knowledge Distillation (EKD) improves classification accuracy and model generalization, especially in situations with limited training data. Experiments also show that our EKD-based compact networks outperform state-of-the-art knowledge distillation methods in terms of mean accuracy on the test datasets.
Tasks
Published 2019-09-17
URL https://arxiv.org/abs/1909.08097v2
PDF https://arxiv.org/pdf/1909.08097v2.pdf
PWC https://paperswithcode.com/paper/ensemble-knowledge-distillation-for-learning
Repo
Framework
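
A minimal sketch of the generic ensemble-distillation loss: cross-entropy with the ground truth plus KL divergence to the averaged, temperature-softened teacher predictions. The paper’s multi-branch student architecture is not reproduced; this only illustrates how an ensemble of teachers supplies soft targets, with temperature and weighting values chosen for illustration.

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ekd_loss(student_logits, teacher_logits_list, labels, T=4.0, alpha=0.5):
    labels = np.asarray(labels)
    p_student = softmax(student_logits, T)
    p_teacher = np.mean([softmax(t, T) for t in teacher_logits_list], axis=0)
    # Soft-target term: KL(teacher ensemble || student) at temperature T.
    kd = np.sum(p_teacher * (np.log(p_teacher + 1e-12) - np.log(p_student + 1e-12)),
                axis=-1).mean()
    # Hard-target term: standard cross-entropy with the ground-truth labels.
    hard = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * hard + (1 - alpha) * (T ** 2) * kd
```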

Heuristic design of fuzzy inference systems: A review of three decades of research

Title Heuristic design of fuzzy inference systems: A review of three decades of research
Authors Varun Ojha, Ajith Abraham, Vaclav Snasel
Abstract This paper provides an in-depth review of the optimal design of type-1 and type-2 fuzzy inference systems (FIS) using five well-known computational frameworks: genetic-fuzzy systems (GFS), neuro-fuzzy systems (NFS), hierarchical fuzzy systems (HFS), evolving fuzzy systems (EFS), and multi-objective fuzzy systems (MFS), noting that some of these frameworks are linked to each other. The heuristic design of GFS uses evolutionary algorithms for optimizing both Mamdani-type and Takagi-Sugeno-Kang-type fuzzy systems, whereas the NFS combines the FIS with neural network learning systems to improve the approximation ability. An HFS combines two or more low-dimensional fuzzy logic units in a hierarchical design to overcome the curse of dimensionality. An EFS addresses data streaming issues by evolving the system incrementally, and an MFS handles multi-objective trade-offs such as the simultaneous maximization of both interpretability and accuracy. This paper offers a synthesis of these dimensions and explores their potentials, challenges, and opportunities in FIS research. The review also examines the complex relations among these dimensions and the possibilities of combining one or more computational frameworks, adding another dimension: deep fuzzy systems.
Tasks
Published 2019-08-27
URL https://arxiv.org/abs/1908.10122v1
PDF https://arxiv.org/pdf/1908.10122v1.pdf
PWC https://paperswithcode.com/paper/heuristic-design-of-fuzzy-inference-systems-a
Repo
Framework

Hybrid Models with Deep and Invertible Features

Title Hybrid Models with Deep and Invertible Features
Authors Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, Balaji Lakshminarayanan
Abstract We propose a neural hybrid model consisting of a linear model defined on a set of features computed by a deep, invertible transformation (i.e. a normalizing flow). An attractive property of our model is that both p(features), the density of the features, and p(targets | features), the predictive distribution, can be computed exactly in a single feed-forward pass. We show that our hybrid model, despite the invertibility constraints, achieves accuracy similar to purely predictive models. Moreover, the generative component remains a good model of the input features despite the hybrid optimization objective. This offers additional capabilities such as detection of out-of-distribution inputs and semi-supervised learning. The availability of the exact joint density p(targets, features) also allows us to compute many quantities readily, making our hybrid model a useful building block for downstream applications of probabilistic deep learning.
Tasks
Published 2019-02-07
URL https://arxiv.org/abs/1902.02767v2
PDF https://arxiv.org/pdf/1902.02767v2.pdf
PWC https://paperswithcode.com/paper/hybrid-models-with-deep-and-invertible
Repo
Framework
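
The exact joint density the abstract refers to follows from the change-of-variables formula; a sketch of the factorization, with z = f(x) the flow features and p(y | z) the linear predictive head, is:

```latex
% Joint density of the hybrid model in one forward pass: a normalizing flow f
% gives features z = f(x) with tractable density, and a (generalized) linear
% model sits on top of z.
\log p(x, y) = \log p(y \mid z) + \log p_Z(z)
             + \log \left| \det \frac{\partial f(x)}{\partial x} \right|,
\qquad z = f(x).
```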

Interpreting Epsilon of Differential Privacy in Terms of Advantage in Guessing or Approximating Sensitive Attributes

Title Interpreting Epsilon of Differential Privacy in Terms of Advantage in Guessing or Approximating Sensitive Attributes
Authors Peeter Laud, Alisa Pankova
Abstract There are numerous methods of achieving $\epsilon$-differential privacy (DP). The question is what the appropriate value of $\epsilon$ is, since there is no common agreement on a “sufficiently small” $\epsilon$, and its goodness depends on the query as well as the data. In this paper, we show how to compute the $\epsilon$ that corresponds to $\delta$, defined as the adversary’s advantage in the probability of guessing some specific property of the output. The attacker’s goal can be stated as a Boolean expression over guessing particular attributes, possibly within some precision. The attributes combined in this way should be independent. We assume that both the input and the output distributions have corresponding probability density functions, or probability mass functions.
Tasks
Published 2019-11-28
URL https://arxiv.org/abs/1911.12777v1
PDF https://arxiv.org/pdf/1911.12777v1.pdf
PWC https://paperswithcode.com/paper/interpreting-epsilon-of-differential-privacy
Repo
Framework
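
As a point of reference only (not the paper’s derivation), the simplest case of a uniform-prior binary guess about a single record gives a standard closed-form link between the guessing advantage and $\epsilon$:

```latex
% For a uniform-prior binary guess, epsilon-DP bounds the adversary's success
% probability by e^eps / (1 + e^eps), so the advantage delta over the 1/2
% baseline satisfies
\delta \le \frac{e^{\epsilon} - 1}{2\,(e^{\epsilon} + 1)}
\quad\Longleftrightarrow\quad
\epsilon \ge \ln\frac{1 + 2\delta}{1 - 2\delta}.
% The paper handles more general Boolean combinations of attribute guesses
% and approximation within a precision, which this simple bound does not cover.
```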

Localized Compression: Applying Convolutional Neural Networks to Compressed Images

Title Localized Compression: Applying Convolutional Neural Networks to Compressed Images
Authors Christopher A. George, Bradley M. West
Abstract We address the challenge of applying existing convolutional neural network (CNN) architectures to compressed images. Existing CNN architectures represent images as a matrix of pixel intensities with a specified dimension; this desired dimension is achieved by downgrading or cropping. Downgrading and cropping are attractive in that the result is also an image; however, an algorithm producing an alternative “compressed” representation could yield better classification performance. This compression algorithm need not be reversible, but it must be compatible with the CNN’s operations. The problem is thus the counterpart of the well-studied problem of applying compressed CNNs to uncompressed images, which has attracted great interest as CNNs are deployed to size-, weight-, and power-limited (SWaP) devices. We introduce Localized Compression, a generalization of downgrading in which the original image is divided into blocks and each block is compressed to a smaller size using either sampling- or random-matrix-based techniques. By aligning the size of the compressed blocks with the size of the CNN’s convolutional region, Localized Compression can be made compatible with any CNN architecture. Our experimental results show that Localized Compression achieves classification accuracy approximately 1-2% higher than downgrading to the equivalent resolution.
Tasks
Published 2019-11-20
URL https://arxiv.org/abs/1911.09188v1
PDF https://arxiv.org/pdf/1911.09188v1.pdf
PWC https://paperswithcode.com/paper/localized-compression-applying-convolutional
Repo
Framework
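
A minimal sketch of block-wise compression in the spirit of the abstract: split the image into blocks and project each block to a smaller block with a fixed random matrix, so that the output is still a (smaller) 2-D array a CNN can consume. Block and output sizes here are illustrative choices, not the paper’s settings.

```python
import numpy as np

def localized_compress(img, block=8, out=4, rng=None):
    rng = rng or np.random.default_rng(0)
    h, w = img.shape
    # Fixed random projection applied to every block (random-matrix variant).
    P = rng.normal(size=(out * out, block * block)) / np.sqrt(block * block)
    rows = []
    for i in range(0, h - h % block, block):
        row = []
        for j in range(0, w - w % block, block):
            patch = img[i:i + block, j:j + block].reshape(-1)
            row.append((P @ patch).reshape(out, out))   # compress one block
        rows.append(np.hstack(row))
    return np.vstack(rows)

compressed = localized_compress(np.random.rand(32, 32))   # 32x32 -> 16x16
```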

ConfusionFlow: A model-agnostic visualization for temporal analysis of classifier confusion

Title ConfusionFlow: A model-agnostic visualization for temporal analysis of classifier confusion
Authors Andreas Hinterreiter, Peter Ruch, Holger Stitz, Martin Ennemoser, Jürgen Bernard, Hendrik Strobelt, Marc Streit
Abstract Classifiers are among the most widely used supervised machine learning algorithms. Many classification models exist, and choosing the right one for a given task is difficult. During model selection and debugging, data scientists need to assess classifier performance, evaluate training behavior over time, and compare different models. Typically, this analysis is based on single-number performance measures such as accuracy. A more detailed evaluation of classifiers is possible by inspecting class errors. The confusion matrix is an established way of visualizing these class errors, but it was not designed with temporal or comparative analysis in mind. More generally, established performance analysis systems do not allow a combined temporal and comparative analysis of class-level information. To address this issue, we propose ConfusionFlow, an interactive, comparative visualization tool that combines the benefits of class confusion matrices with the visualization of performance characteristics over time. ConfusionFlow is model-agnostic and can be used to compare performance across different model types, model architectures, and/or training and test datasets. We demonstrate the usefulness of ConfusionFlow in a case study on instance selection strategies in active learning. We further assess scalability issues and possible mitigation mechanisms when ConfusionFlow is applied to problems with a high number of classes.
Tasks Active Learning, Model Selection, Network Pruning
Published 2019-10-02
URL https://arxiv.org/abs/1910.00969v2
PDF https://arxiv.org/pdf/1910.00969v2.pdf
PWC https://paperswithcode.com/paper/confusionflow-a-model-agnostic-visualization
Repo
Framework
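
A minimal sketch of the kind of data ConfusionFlow visualizes: one confusion matrix per training epoch (per model and dataset), stacked into a time series of class-level errors. `models_per_epoch` (a list of fitted model snapshots with a scikit-learn-style predict method) is a hypothetical placeholder; the interactive visualization itself is not reproduced.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def confusion_over_time(models_per_epoch, X_val, y_val, n_classes):
    history = []
    for model in models_per_epoch:            # one snapshot per training epoch
        y_pred = model.predict(X_val)
        history.append(confusion_matrix(y_val, y_pred,
                                        labels=list(range(n_classes))))
    return np.stack(history)                  # shape: (epochs, n_classes, n_classes)
```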