April 2, 2020

3249 words 16 mins read

Paper Group ANR 325

FastWordBug: A Fast Method To Generate Adversarial Text Against NLP Applications. TimeConvNets: A Deep Time Windowed Convolution Neural Network Design for Real-time Video Facial Expression Recognition. On the Existence of Characterization Logics and Fundamental Properties of Argumentation Semantics. Lossless Attention in Convolutional Networks for …

FastWordBug: A Fast Method To Generate Adversarial Text Against NLP Applications

Title FastWordBug: A Fast Method To Generate Adversarial Text Against NLP Applications
Authors Dou Goodman, Lv Zhonghou, Wang minghua
Abstract In this paper, we present a novel algorithm, FastWordBug, to efficiently generate small text perturbations in a black-box setting that force a sentiment analysis or text classification model to make an incorrect prediction. By combining the part-of-speech attributes of words, we propose a scoring method that can quickly identify important words that affect text classification. We evaluate FastWordBug on three real-world text datasets and two state-of-the-art machine learning models under a black-box setting. The results show that our method can significantly reduce the accuracy of the model while issuing as few queries to it as possible, achieving the highest attack efficiency. We also attack two popular real-world cloud NLP services, and the results show that our method is effective against them as well.
Tasks Adversarial Text, Sentiment Analysis, Text Classification
Published 2020-01-31
URL https://arxiv.org/abs/2002.00760v1
PDF https://arxiv.org/pdf/2002.00760v1.pdf
PWC https://paperswithcode.com/paper/fastwordbug-a-fast-method-to-generate
Repo
Framework
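
To make the query-efficiency idea concrete, here is a minimal, hypothetical sketch of black-box word-importance scoring and greedy perturbation for a text classifier. The `predict_proba` callable, the delete-and-rescore heuristic and the toy perturbation are assumptions for illustration; the paper's part-of-speech pre-filter, which is what cuts down the number of model calls, is omitted.

```python
# Hypothetical sketch: score words by the probability drop caused by deleting
# them, then greedily perturb the highest-scoring ones. `predict_proba` is any
# callable returning a list of class probabilities for a text string.

def score_words(words, label, predict_proba):
    """Score each word by how much removing it lowers the current label's probability."""
    base = predict_proba(" ".join(words))[label]
    scores = []
    for i in range(len(words)):
        reduced = words[:i] + words[i + 1:]
        scores.append(base - predict_proba(" ".join(reduced))[label])
    return scores

def attack(text, label, predict_proba, max_edits=3):
    """Greedily perturb the most important words until the predicted label flips."""
    words = text.split()
    scores = score_words(words, label, predict_proba)
    for i in sorted(range(len(words)), key=lambda i: -scores[i])[:max_edits]:
        words[i] = words[i][::-1]              # toy perturbation: reverse the word
        probs = predict_proba(" ".join(words))
        if max(range(len(probs)), key=probs.__getitem__) != label:
            break
    return " ".join(words)
```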

TimeConvNets: A Deep Time Windowed Convolution Neural Network Design for Real-time Video Facial Expression Recognition

Title TimeConvNets: A Deep Time Windowed Convolution Neural Network Design for Real-time Video Facial Expression Recognition
Authors James Ren Hou Lee, Alexander Wong
Abstract A core challenge faced by the majority of individuals with Autism Spectrum Disorder (ASD) is an impaired ability to infer other people’s emotions based on their facial expressions. With significant recent advances in machine learning, one potential approach to leveraging technology to assist such individuals to better recognize facial expressions and reduce the risk of possible loneliness and depression due to social isolation is the design of computer vision-driven facial expression recognition systems. Motivated by this social need as well as the low latency requirement of such systems, this study explores a novel deep time windowed convolutional neural network design (TimeConvNets) for the purpose of real-time video facial expression recognition. More specifically, we explore an efficient convolutional deep neural network design for spatiotemporal encoding of time windowed video frame sub-sequences and study the respective balance between speed and accuracy. Furthermore, to evaluate the proposed TimeConvNet design, we introduce a more difficult dataset called BigFaceX, composed of a modified aggregation of the extended Cohn-Kanade (CK+), BAUM-1, and the eNTERFACE public datasets. Different variants of the proposed TimeConvNet design with different backbone network architectures were evaluated using BigFaceX alongside other network designs for capturing spatiotemporal information, and experimental results demonstrate that TimeConvNets can better capture the transient nuances of facial expressions and boost classification accuracy while maintaining a low inference time.
Tasks Facial Expression Recognition
Published 2020-03-03
URL https://arxiv.org/abs/2003.01791v1
PDF https://arxiv.org/pdf/2003.01791v1.pdf
PWC https://paperswithcode.com/paper/timeconvnets-a-deep-time-windowed-convolution
Repo
Framework
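
As a rough, hypothetical PyTorch sketch of the time-windowed idea, the snippet below runs a small 3D-convolutional encoder over a short window of frames and classifies the clip. The layer sizes, window length and input resolution are placeholders, not the published TimeConvNet backbone.

```python
import torch
import torch.nn as nn

class TimeWindowedConvNet(nn.Module):
    """Toy spatiotemporal encoder over a window of T video frames."""
    def __init__(self, num_classes=7):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),          # collapse time and space
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, clip):                  # clip: (batch, 3, T, H, W)
        features = self.encoder(clip).flatten(1)
        return self.head(features)

# Example: classify a batch of two 8-frame 64x64 clips.
logits = TimeWindowedConvNet()(torch.randn(2, 3, 8, 64, 64))
```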

On the Existence of Characterization Logics and Fundamental Properties of Argumentation Semantics

Title On the Existence of Characterization Logics and Fundamental Properties of Argumentation Semantics
Authors Ringo Baumann
Abstract Given the large variety of existing logical formalisms, it is of utmost importance to select the most adequate one for a specific purpose, e.g., for representing the knowledge relevant to a particular application or for using the formalism as a modeling tool for problem solving. Awareness of the nature of a logical formalism, in other words, of its fundamental intrinsic properties, is indispensable and provides the basis for an informed choice. In this treatise we consider the existence of characterization logics as well as properties like existence and uniqueness, expressibility, replaceability and verifiability in the realm of abstract argumentation.
Tasks Abstract Argumentation
Published 2020-03-02
URL https://arxiv.org/abs/2003.00767v1
PDF https://arxiv.org/pdf/2003.00767v1.pdf
PWC https://paperswithcode.com/paper/on-the-existence-of-characterization-logics
Repo
Framework
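
As a concrete anchor for the semantics under discussion, the sketch below enumerates the stable extensions of a finite argumentation framework (conflict-free sets that attack every argument outside the set). It only illustrates a standard semantics definition; the paper's results on characterization logics are not reproduced in code.

```python
from itertools import combinations

def stable_extensions(arguments, attacks):
    """Enumerate stable extensions of the framework (arguments, attacks).

    `attacks` is a set of (attacker, target) pairs."""
    def conflict_free(s):
        return all((a, b) not in attacks for a in s for b in s)

    def attacks_all_outside(s):
        return all(any((a, b) in attacks for a in s) for b in arguments - s)

    result = []
    for r in range(len(arguments) + 1):
        for subset in combinations(sorted(arguments), r):
            s = set(subset)
            if conflict_free(s) and attacks_all_outside(s):
                result.append(s)
    return result

# Example: a attacks b, b attacks c  ->  the only stable extension is {a, c}.
print(stable_extensions({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```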

Lossless Attention in Convolutional Networks for Facial Expression Recognition in the Wild

Title Lossless Attention in Convolutional Networks for Facial Expression Recognition in the Wild
Authors Chuang Wang, Ruimin Hu, Min Hu, Jiang Liu, Ting Ren, Shan He, Ming Jiang, Jing Miao
Abstract Unlike the constrained frontal face condition, faces in the wild are subject to various unconstrained interference factors, such as complex illumination, changing perspective and various occlusions. Facial expression recognition (FER) in the wild is therefore a challenging task on which existing methods do not perform well. However, for occluded faces (with occlusion caused by other objects or self-occlusion caused by head pose changes), the attention mechanism has the ability to focus on the non-occluded regions automatically. In this paper, we propose a Lossless Attention Model (LLAM) for convolutional neural networks (CNNs) to extract attention-aware features from faces. Our module avoids losing information in the process of generating attention maps by using the information of the previous layer and not reducing the dimensionality. Subsequently, we adaptively refine the feature responses by fusing the attention map with the feature map. We participate in the seven-basic-expression classification sub-challenge of the FG-2020 Affective Behavior Analysis in-the-wild Challenge and validate our method on the Aff-Wild2 dataset released by the Challenge. The total accuracy (Accuracy) and the unweighted mean F1 score (F1) of our method on the validation set are 0.49 and 0.38 respectively, giving a final combined score (0.67 F1 + 0.33 Accuracy) of 0.42.
Tasks Facial Expression Recognition
Published 2020-01-31
URL https://arxiv.org/abs/2001.11869v1
PDF https://arxiv.org/pdf/2001.11869v1.pdf
PWC https://paperswithcode.com/paper/lossless-attention-in-convolutional-networks
Repo
Framework
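
A hedged sketch of the general idea of a full-dimensional attention map fused back into the feature map, in PyTorch; the channel counts, the optional reuse of the previous layer's features and the residual fusion are illustrative choices, not the exact LLAM design.

```python
import torch
import torch.nn as nn

class SimpleLosslessAttention(nn.Module):
    """Toy attention block: the attention map keeps the full channel count."""
    def __init__(self, channels):
        super().__init__()
        # 3x3 conv preserves the channel dimension, so nothing is squeezed away.
        self.att = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x, prev=None):
        # Optionally mix in the previous layer's features when forming the map.
        source = x if prev is None else x + prev
        attention = torch.sigmoid(self.att(source))
        return x * attention + x              # residual fusion of map and features

features = torch.randn(1, 64, 56, 56)
refined = SimpleLosslessAttention(64)(features)
```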

Deep Metric Structured Learning For Facial Expression Recognition

Title Deep Metric Structured Learning For Facial Expression Recognition
Authors Pedro D. Marrero Fernandez, Tsang Ing Ren, Tsang Ing Jyh, Fidel A. Guerrero Peña, Alexandre Cunha
Abstract We propose a deep metric learning model to create embedded sub-spaces with a well-defined structure. A new loss function that imposes Gaussian structures on the output space is introduced to create these sub-spaces, thus shaping the distribution of the data. Having a mixture-of-Gaussians solution space is advantageous given its simplified and well-established structure. It allows fast discovery of classes within classes and the identification of mean representatives at the centroids of individual classes. We also propose a new semi-supervised method to create sub-classes. We illustrate our methods on the facial expression recognition problem and validate results on the FER+, AffectNet, Extended Cohn-Kanade (CK+), BU-3DFE, and JAFFE datasets. We experimentally demonstrate that the learned embedding can be successfully used for various applications including expression retrieval and emotion recognition.
Tasks Emotion Recognition, Facial Expression Recognition, Metric Learning
Published 2020-01-18
URL https://arxiv.org/abs/2001.06612v1
PDF https://arxiv.org/pdf/2001.06612v1.pdf
PWC https://paperswithcode.com/paper/deep-metric-structured-learning-for-facial
Repo
Framework
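
One simple way to impose a Gaussian-like structure on an embedding space is to combine a distance-to-centroid classification term with a pull term toward learnable class centroids, as in the hedged sketch below. This is a center-loss-style approximation for illustration; the paper's loss and its semi-supervised sub-class discovery are not reproduced.

```python
import torch
import torch.nn as nn

class GaussianStructuredLoss(nn.Module):
    """Toy structured-embedding loss: softmax over negative distances to class
    centroids plus a pull term that keeps each class tight around its centre."""
    def __init__(self, num_classes, dim, pull_weight=0.1):
        super().__init__()
        self.centroids = nn.Parameter(torch.randn(num_classes, dim))
        self.pull_weight = pull_weight

    def forward(self, embeddings, labels):
        dists = torch.cdist(embeddings, self.centroids)      # (batch, classes)
        class_loss = nn.functional.cross_entropy(-dists, labels)
        pull_loss = dists[torch.arange(len(labels)), labels].pow(2).mean()
        return class_loss + self.pull_weight * pull_loss

emb = torch.randn(8, 32)
labels = torch.randint(0, 7, (8,))
loss = GaussianStructuredLoss(num_classes=7, dim=32)(emb, labels)
```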

Boosted and Differentially Private Ensembles of Decision Trees

Title Boosted and Differentially Private Ensembles of Decision Trees
Authors Richard Nock, Wilko Henecka
Abstract Boosted ensembles of decision tree (DT) classifiers are extremely popular in international competitions, yet to our knowledge nothing is formally known on how to make them also differentially private (DP), up to the point that random forests currently reign supreme in the DP setting. Our paper starts with the proof that the privacy vs boosting picture for DT involves a notable and general technical tradeoff: the sensitivity tends to increase with the boosting rate of the loss, for any proper loss. DT induction algorithms being fundamentally iterative, our finding implies non-trivial choices when selecting or tuning the loss to balance noise against utility to split nodes. To address this, we craft a new parameterized proper loss, called the M$\alpha$-loss, which, as we show, allows one to finely tune the tradeoff in the complete spectrum of sensitivity vs boosting guarantees. We then introduce objective calibration as a method to adaptively tune the tradeoff during DT induction to limit the privacy budget spent while formally being able to keep boosting-compliant convergence on limited-depth nodes with high probability. Extensive experiments on 19 UCI domains reveal that objective calibration is highly competitive, even in the DP-free setting. Our approach tends to very significantly beat random forests, in particular in high-DP regimes ($\varepsilon \leq 0.1$) and even with boosted ensembles containing ten times fewer trees, which could be crucial to keeping a key feature of DT models under differential privacy: interpretability.
Tasks Calibration
Published 2020-01-26
URL https://arxiv.org/abs/2001.09384v2
PDF https://arxiv.org/pdf/2001.09384v2.pdf
PWC https://paperswithcode.com/paper/boosted-and-differentially-private-ensembles
Repo
Framework
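
For orientation only, the sketch below shows the textbook differential-privacy mechanics that any private tree learner has to budget for: a Laplace-noised leaf release with the budget split across tree depth. It is not the paper's M$\alpha$-loss or objective calibration, and the sensitivity argument assumes labels bounded in [0, 1].

```python
import numpy as np

def private_leaf_value(labels, epsilon, rng=None):
    """Release a Laplace-noised leaf mean for labels in [0, 1] (toy epsilon-DP).

    The mean of n values bounded in [0, 1] has sensitivity 1/n, so the Laplace
    scale is 1 / (n * epsilon)."""
    rng = rng or np.random.default_rng(0)
    n = max(len(labels), 1)
    true_mean = float(np.mean(labels)) if len(labels) else 0.0
    return true_mean + rng.laplace(0.0, 1.0 / (n * epsilon))

# Split a total privacy budget evenly across the levels of the tree.
total_epsilon, depth = 0.1, 4
per_level_epsilon = total_epsilon / depth
print(private_leaf_value(np.array([1, 0, 1, 1]), per_level_epsilon))
```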

Semi-Sequential Probabilistic Model For Indoor Localization Enhancement

Title Semi-Sequential Probabilistic Model For Indoor Localization Enhancement
Authors Minh Tu Hoang, Brosnan Yuen, Xiaodai Dong, Tao Lu, Robert Westendorp, Kishore Reddy
Abstract This paper proposes a semi-sequential probabilistic model (SSP) that applies an additional short term memory to enhance the performance of the probabilistic indoor localization. The conventional probabilistic methods normally treat the locations in the database indiscriminately. In contrast, SSP leverages the information of the previous position to determine the probable location since the user’s speed in an indoor environment is bounded and locations near the previous one have higher probability than the other locations. Although the SSP utilizes the previous location information, it does not require the exact moving speed and direction of the user. On-site experiments using the received signal strength indicator (RSSI) and channel state information (CSI) fingerprints for localization demonstrate that SSP reduces the maximum error and boosts the performance of existing probabilistic approaches by 25% - 30%.
Tasks
Published 2020-01-08
URL https://arxiv.org/abs/2001.02400v1
PDF https://arxiv.org/pdf/2001.02400v1.pdf
PWC https://paperswithcode.com/paper/semi-sequential-probabilistic-model-for
Repo
Framework
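
A small sketch of the semi-sequential reweighting idea: multiply per-location fingerprint likelihoods by a soft reachability prior centred on the previous estimate, reflecting the bounded indoor walking speed. The prior shape, the step bound and the likelihood values are placeholders.

```python
import numpy as np

def ssp_estimate(likelihoods, coords, prev_coord, max_step=3.0):
    """Pick the most probable location, discounting candidates that are
    implausibly far from the previous estimate.

    likelihoods: (n,) fingerprint-match probabilities for n reference points
    coords:      (n, 2) reference-point coordinates in metres
    prev_coord:  (2,) previous position estimate"""
    dists = np.linalg.norm(coords - prev_coord, axis=1)
    prior = np.exp(-np.maximum(dists - max_step, 0.0))   # soft reachability prior
    posterior = likelihoods * prior
    return coords[np.argmax(posterior)]

coords = np.array([[0.0, 0.0], [2.0, 1.0], [10.0, 10.0]])
print(ssp_estimate(np.array([0.3, 0.35, 0.35]), coords, np.array([1.0, 1.0])))
```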

Exploring Backdoor Poisoning Attacks Against Malware Classifiers

Title Exploring Backdoor Poisoning Attacks Against Malware Classifiers
Authors Giorgio Severi, Jim Meyer, Scott Coull, Alina Oprea
Abstract Current training pipelines for machine learning (ML) based malware classification rely on crowdsourced threat feeds, exposing a natural attack injection point. We study for the first time the susceptibility of ML malware classifiers to backdoor poisoning attacks, specifically focusing on challenging “clean label” attacks where attackers do not control the sample labeling process. We propose the use of techniques from explainable machine learning to guide the selection of relevant features and their values to create a watermark in a model-agnostic fashion. Using a dataset of 800,000 Windows binaries, we demonstrate effective attacks against gradient boosting decision trees and a neural network model for malware classification under various constraints imposed on the attacker. For example, an attacker injecting just 1% poison samples in the training process can achieve a success rate greater than 97% by crafting a watermark of 8 features out of more than 2,300 available features. To demonstrate the feasibility of our backdoor attacks in practice, we create a watermarking utility for Windows PE files that preserves the binary’s functionality. Finally, we experiment with potential defensive strategies and show the difficulties of completely defending against these powerful attacks, especially when the attacks blend in with the legitimate sample distribution.
Tasks Malware Classification
Published 2020-03-02
URL https://arxiv.org/abs/2003.01031v1
PDF https://arxiv.org/pdf/2003.01031v1.pdf
PWC https://paperswithcode.com/paper/exploring-backdoor-poisoning-attacks-against
Repo
Framework
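
The mechanics of a clean-label backdoor on feature vectors can be sketched as below: stamp a fixed watermark (a few feature/value pairs) onto a small fraction of benign training samples, then apply the same watermark to malware at attack time. The feature indices and values here are hypothetical, not those chosen by the paper's explainability-guided selection.

```python
import numpy as np

def poison_benign_samples(X_benign, watermark, poison_rate=0.01, rng=None):
    """Stamp `watermark` (dict of feature_index -> value) onto a random subset
    of benign feature vectors; their labels stay benign (clean-label attack)."""
    rng = rng or np.random.default_rng(0)
    X = X_benign.copy()
    idx = rng.choice(len(X), size=int(poison_rate * len(X)), replace=False)
    for f, v in watermark.items():
        X[idx, f] = v
    return X, idx

def apply_watermark(x_malware, watermark):
    """At attack time, add the same watermark to a malware sample."""
    x = x_malware.copy()
    for f, v in watermark.items():
        x[f] = v
    return x

watermark = {12: 1.0, 87: 0.0, 301: 1.0}   # hypothetical 3-feature watermark
```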

Feature-level Malware Obfuscation in Deep Learning

Title Feature-level Malware Obfuscation in Deep Learning
Authors Keith Dillon
Abstract We consider the problem of detecting malware with deep learning models, where the malware may be combined with significant amounts of benign code. Examples of this include piggybacking and trojan horse attacks on a system, where malicious behavior is hidden within a useful application. Such added flexibility in augmenting the malware enables significantly more code obfuscation. Hence we focus on the use of static features, particularly Intents, Permissions, and API calls, which we presume cannot ultimately be hidden from the Android system, but only augmented with yet more such features. We first train a deep neural network classifier for malware classification using features of benign and malware samples. Then we demonstrate a steep increase in false negative rate (i.e., attacks succeed) simply by randomly adding features of a benign app to malware. Finally, we test the use of data augmentation to harden the classifier against such attacks. We find that for API calls it is possible to reject the vast majority of attacks, whereas using Intents or Permissions is less successful.
Tasks Data Augmentation, Malware Classification
Published 2020-02-10
URL https://arxiv.org/abs/2002.05517v1
PDF https://arxiv.org/pdf/2002.05517v1.pdf
PWC https://paperswithcode.com/paper/feature-level-malware-obfuscation-in-deep
Repo
Framework
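
A hypothetical sketch of the attack and its evaluation: since static features can only be added, the attacker flips on features taken from a benign app, and the defender measures how many previously detected malware samples now evade the classifier. The `model.predict` interface (0 = benign) is an assumption.

```python
import numpy as np

def add_benign_features(x_malware, x_benign, n_added=20, rng=None):
    """Flip on up to `n_added` binary features that the benign donor app uses
    but the malware sample does not (features can only be added, not hidden)."""
    rng = rng or np.random.default_rng(0)
    candidates = np.flatnonzero((x_benign == 1) & (x_malware == 0))
    chosen = rng.choice(candidates, size=min(n_added, len(candidates)), replace=False)
    x = x_malware.copy()
    x[chosen] = 1
    return x

def false_negative_rate(model, X_malware, X_benign_pool, n_added=20):
    """Fraction of malware samples classified benign (label 0) after obfuscation."""
    misses = 0
    for x in X_malware:
        donor = X_benign_pool[np.random.randint(len(X_benign_pool))]
        obfuscated = add_benign_features(x, donor, n_added)
        misses += int(model.predict(obfuscated[None])[0] == 0)
    return misses / len(X_malware)
```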

An Emerging Coding Paradigm VCM: A Scalable Coding Approach Beyond Feature and Signal

Title An Emerging Coding Paradigm VCM: A Scalable Coding Approach Beyond Feature and Signal
Authors Sifeng Xia, Kunchangtai Liang, Wenhan Yang, Ling-Yu Duan, Jiaying Liu
Abstract In this paper, we study a new problem arising from the emerging MPEG standardization effort Video Coding for Machines (VCM), which aims to bridge the gap between visual feature compression and classical video coding. VCM is committed to addressing the requirement of compact signal representation for both machine and human vision in a more or less scalable way. To this end, we make endeavors in leveraging the strength of predictive and generative models to support advanced compression techniques for both machine and human vision tasks simultaneously, in which visual features serve as a bridge to connect signal-level and task-level compact representations in a scalable manner. Specifically, we employ a conditional deep generation network to reconstruct video frames with the guidance of learned motion patterns. By learning to extract sparse motion patterns via a predictive model, the network elegantly leverages the feature representation to generate the appearance of to-be-coded frames via a generative model, relying on the appearance of the coded key frames. Meanwhile, the sparse motion pattern is compact and highly effective for high-level vision tasks, e.g., action recognition. Experimental results demonstrate that our method yields much better reconstruction quality compared with traditional video codecs (0.0063 gain in SSIM), as well as state-of-the-art action recognition performance over highly compressed videos (9.4% gain in recognition accuracy), which showcases a promising paradigm of coding signals for both human and machine vision.
Tasks
Published 2020-01-09
URL https://arxiv.org/abs/2001.03004v1
PDF https://arxiv.org/pdf/2001.03004v1.pdf
PWC https://paperswithcode.com/paper/an-emerging-coding-paradigm-vcm-a-scalable
Repo
Framework
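
As a very loose, hypothetical sketch of the feature-as-bridge idea, the snippet below conditions a tiny frame generator on a coded key frame plus a compact motion code. It is nowhere near the paper's VCM pipeline; the shapes, motion dimensionality and decoder are placeholders.

```python
import torch
import torch.nn as nn

class MotionConditionedGenerator(nn.Module):
    """Toy generator: reconstruct a frame from a coded key frame plus a compact
    motion code, mirroring the feature-as-bridge idea at a very small scale."""
    def __init__(self, motion_dim=16):
        super().__init__()
        self.motion_to_map = nn.Linear(motion_dim, 64 * 64)
        self.decoder = nn.Sequential(
            nn.Conv2d(3 + 1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, kernel_size=3, padding=1),
        )

    def forward(self, key_frame, motion_code):   # key_frame: (B, 3, 64, 64)
        motion_map = self.motion_to_map(motion_code).view(-1, 1, 64, 64)
        return self.decoder(torch.cat([key_frame, motion_map], dim=1))

frame = MotionConditionedGenerator()(torch.randn(2, 3, 64, 64), torch.randn(2, 16))
```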

Training Progressively Binarizing Deep Networks Using FPGAs

Title Training Progressively Binarizing Deep Networks Using FPGAs
Authors Corey Lammie, Wei Xiang, Mostafa Rahimi Azghadi
Abstract While hardware implementations of inference routines for Binarized Neural Networks (BNNs) are plentiful, current realizations of efficient BNN hardware training accelerators, suitable for Internet of Things (IoT) edge devices, leave much to be desired. Conventional BNN hardware training accelerators perform forward and backward propagations with parameters adopting binary representations, and optimization using parameters adopting floating- or fixed-point real-valued representations, requiring two distinct sets of network parameters. In this paper, we propose a hardware-friendly training method that, contrary to conventional methods, progressively binarizes a singular set of fixed-point network parameters, yielding notable reductions in power and resource utilization. We use the Intel FPGA SDK for OpenCL development environment to train our progressively binarizing DNNs on an OpenVINO FPGA. We benchmark our training approach on both GPUs and FPGAs using CIFAR-10 and compare it to conventional BNNs.
Tasks
Published 2020-01-08
URL https://arxiv.org/abs/2001.02390v1
PDF https://arxiv.org/pdf/2001.02390v1.pdf
PWC https://paperswithcode.com/paper/training-progressively-binarizing-deep
Repo
Framework
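
A sketch of one common progressive-binarization schedule: blend each real-valued (or fixed-point) weight with its sign under a coefficient that ramps from 0 to 1 over training, so a single parameter set gradually hardens into binary values. The linear schedule is an assumption, not the paper's exact FPGA training routine.

```python
import numpy as np

def progressively_binarize(weights, step, total_steps):
    """Blend real-valued weights toward their binary sign as training advances."""
    tau = min(step / total_steps, 1.0)     # 0 -> fully real-valued, 1 -> fully binary
    return (1.0 - tau) * weights + tau * np.sign(weights)

w = np.array([0.7, -0.2, 0.05, -0.9])
for step in (0, 500, 1000):
    print(step, progressively_binarize(w, step, total_steps=1000))
```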

Rectifying Pseudo Label Learning via Uncertainty Estimation for Domain Adaptive Semantic Segmentation

Title Rectifying Pseudo Label Learning via Uncertainty Estimation for Domain Adaptive Semantic Segmentation
Authors Zhedong Zheng, Yi Yang
Abstract This paper focuses on unsupervised domain adaptation, i.e., transferring knowledge from the source domain to the target domain, in the context of semantic segmentation. Existing approaches usually regard the pseudo label as the ground truth to fully exploit the unlabeled target-domain data. Yet the pseudo labels of the target-domain data are usually predicted by the model trained on the source domain, so the generated labels inevitably contain incorrect predictions due to the discrepancy between the training domain and the test domain, which can be transferred to the final adapted model and largely compromise the training process. To overcome this problem, this paper proposes to explicitly estimate the prediction uncertainty during training to rectify pseudo label learning for unsupervised semantic segmentation adaptation. Given the input image, the model outputs the semantic segmentation prediction as well as the uncertainty of that prediction. Specifically, we model the uncertainty via the prediction variance and incorporate the uncertainty into the optimization objective. To verify the effectiveness of the proposed method, we evaluate it on two prevalent synthetic-to-real semantic segmentation benchmarks, i.e., GTA5 -> Cityscapes and SYNTHIA -> Cityscapes, as well as one cross-city benchmark, i.e., Cityscapes -> Oxford RobotCar. We demonstrate through extensive experiments that the proposed approach (1) dynamically sets different confidence thresholds according to the prediction variance, (2) rectifies the learning from noisy pseudo labels, and (3) achieves significant improvements over conventional pseudo label learning and yields competitive performance on all three benchmarks.
Tasks Domain Adaptation, Semantic Segmentation, Unsupervised Domain Adaptation, Unsupervised Semantic Segmentation
Published 2020-03-08
URL https://arxiv.org/abs/2003.03773v1
PDF https://arxiv.org/pdf/2003.03773v1.pdf
PWC https://paperswithcode.com/paper/rectifying-pseudo-label-learning-via
Repo
Framework
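
A hedged sketch of uncertainty-gated pseudo labelling: estimate per-pixel prediction variance (here via Monte Carlo dropout, which is only one possible variance estimator) and keep pseudo labels only where the variance is low. The paper's specific variance formulation and its use inside the objective are not reproduced.

```python
import torch

def rectified_pseudo_labels(model, image, n_passes=4, threshold=0.05):
    """Return pseudo labels plus a mask that drops high-variance pixels.

    `model` is any segmentation network with dropout returning logits of shape
    (batch, classes, H, W)."""
    model.train()                          # keep dropout active for MC sampling
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(image), dim=1) for _ in range(n_passes)]
        )                                  # (n_passes, batch, classes, H, W)
    mean_probs = probs.mean(dim=0)
    variance = probs.var(dim=0).mean(dim=1)          # (batch, H, W)
    pseudo = mean_probs.argmax(dim=1)                 # (batch, H, W)
    mask = variance < threshold            # trust only low-uncertainty pixels
    return pseudo, mask
```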

On the Convergence of the Monte Carlo Exploring Starts Algorithm for Reinforcement Learning

Title On the Convergence of the Monte Carlo Exploring Starts Algorithm for Reinforcement Learning
Authors Che Wang, Keith Ross
Abstract A simple and natural algorithm for reinforcement learning is Monte Carlo Exploring Starts (MCES), where the Q-function is estimated by averaging the Monte Carlo returns, and the policy is improved by choosing actions that maximize the current estimate of the Q-function. Exploration is performed by “exploring starts”, that is, each episode begins with a randomly chosen state and action and then follows the current policy. Establishing convergence for this algorithm has been an open problem for more than 20 years. We make headway with this problem by proving convergence for Optimal Policy Feed-Forward MDPs, which are MDPs whose states are not revisited within any episode under an optimal policy. Such MDPs include all deterministic environments (including Cliff Walking and other gridworld examples) and a large class of stochastic environments (including Blackjack). The convergence results presented here make progress on this long-standing open problem in reinforcement learning.
Tasks
Published 2020-02-10
URL https://arxiv.org/abs/2002.03585v1
PDF https://arxiv.org/pdf/2002.03585v1.pdf
PWC https://paperswithcode.com/paper/on-the-convergence-of-the-monte-carlo
Repo
Framework
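
For reference, a compact sketch of the Monte Carlo Exploring Starts loop whose convergence the paper studies, following the standard tabular formulation; the environment interface (`sample_state_action`, `step`, `actions`, `sample_action`) is hypothetical, and every-visit averaging is used for brevity.

```python
from collections import defaultdict

def mc_exploring_starts(env, episodes=10000, gamma=1.0):
    """Tabular Monte Carlo ES: random (state, action) start, greedy improvement."""
    q = defaultdict(float)
    counts = defaultdict(int)
    policy = {}
    for _ in range(episodes):
        s, a = env.sample_state_action()              # exploring start
        trajectory, done = [], False
        while not done:
            s_next, reward, done = env.step(s, a)
            trajectory.append((s, a, reward))
            s = s_next
            a = policy.get(s, env.sample_action(s))   # follow current policy
        g = 0.0
        for s, a, reward in reversed(trajectory):
            g = gamma * g + reward
            counts[(s, a)] += 1
            q[(s, a)] += (g - q[(s, a)]) / counts[(s, a)]   # incremental average
            policy[s] = max(env.actions(s), key=lambda x: q[(s, x)])
    return policy, q
```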

Combinatorial Semi-Bandit in the Non-Stationary Environment

Title Combinatorial Semi-Bandit in the Non-Stationary Environment
Authors Wei Chen, Liwei Wang, Haoyu Zhao, Kai Zheng
Abstract In this paper, we investigate the non-stationary combinatorial semi-bandit problem, both in the switching case and in the dynamic case. In the general case where (a) the reward function is non-linear, (b) arms may be probabilistically triggered, and (c) only an approximate offline oracle exists \cite{wang2017improving}, our algorithm achieves $\tilde{\mathcal{O}}(\sqrt{\mathcal{S} T})$ distribution-dependent regret in the switching case, and $\tilde{\mathcal{O}}(\mathcal{V}^{1/3}T^{2/3})$ in the dynamic case, where $\mathcal S$ is the number of switchings and $\mathcal V$ is the sum of the total “distribution changes”. The regret bounds in both scenarios are nearly optimal, but our algorithm needs to know the parameter $\mathcal S$ or $\mathcal V$ in advance. We further show that by employing another technique, our algorithm no longer needs to know the parameters $\mathcal S$ or $\mathcal V$, but the regret bounds could become suboptimal. In a special case where the reward function is linear and we have an exact oracle, we design a parameter-free algorithm that achieves nearly optimal regret both in the switching case and in the dynamic case without knowing the parameters in advance.
Tasks
Published 2020-02-10
URL https://arxiv.org/abs/2002.03580v1
PDF https://arxiv.org/pdf/2002.03580v1.pdf
PWC https://paperswithcode.com/paper/combinatorial-semi-bandit-in-the-non
Repo
Framework
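
A sketch of one standard way to cope with non-stationarity in the easy special case: a sliding-window UCB estimate per base arm with an exact top-k oracle for a linear reward. The paper's algorithms for non-linear rewards, probabilistic triggering and approximate oracles are considerably more general; the window length and exploration constant below are placeholders.

```python
import math
from collections import deque

class SlidingWindowSemiBandit:
    """Toy sliding-window UCB for selecting k of n base arms each round."""
    def __init__(self, n_arms, k, window=500):
        self.history = [deque(maxlen=window) for _ in range(n_arms)]
        self.k, self.t = k, 0

    def select(self):
        self.t += 1
        ucb = []
        for h in self.history:
            if not h:
                ucb.append(float("inf"))          # force initial exploration
            else:
                mean = sum(h) / len(h)
                ucb.append(mean + math.sqrt(1.5 * math.log(self.t) / len(h)))
        return sorted(range(len(ucb)), key=lambda i: -ucb[i])[:self.k]

    def update(self, arms, rewards):
        for arm, r in zip(arms, rewards):         # semi-bandit feedback per arm
            self.history[arm].append(r)
```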

Predicting star formation properties of galaxies using deep learning

Title Predicting star formation properties of galaxies using deep learning
Authors Shraddha Surana, Yogesh Wadadekar, Omkar Bait, Hrushikesh Bhosle
Abstract Understanding the star-formation properties of galaxies as a function of cosmic epoch is a critical exercise in studies of galaxy evolution. Traditionally, stellar population synthesis models have been used to obtain best fit parameters that characterise star formation in galaxies. As multiband flux measurements become available for thousands of galaxies, an alternative approach to characterising star formation using machine learning becomes feasible. In this work, we present the use of deep learning techniques to predict three important star formation properties – stellar mass, star formation rate and dust luminosity. We characterise the performance of our deep learning models through comparisons with outputs from a standard stellar population synthesis code.
Tasks
Published 2020-02-10
URL https://arxiv.org/abs/2002.03578v1
PDF https://arxiv.org/pdf/2002.03578v1.pdf
PWC https://paperswithcode.com/paper/predicting-star-formation-properties-of
Repo
Framework
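
A sketch of the kind of model the abstract implies: a small fully connected network mapping multiband flux measurements to the three star-formation properties. The number of input bands, layer sizes and the single training step below are placeholders.

```python
import torch
import torch.nn as nn

# Toy regressor: multiband fluxes in, (stellar mass, SFR, dust luminosity) out.
model = nn.Sequential(
    nn.Linear(20, 128), nn.ReLU(),      # 20 hypothetical photometric bands
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 3),                   # three star-formation properties
)

fluxes = torch.randn(32, 20)            # a batch of 32 galaxies
targets = torch.randn(32, 3)            # normalised property values
loss = nn.functional.mse_loss(model(fluxes), targets)
loss.backward()                         # one illustrative training step
```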