January 25, 2020

2998 words 15 mins read

Paper Group ANR 1763

An Iteratively Re-weighted Method for Problems with Sparsity-Inducing Norms. Aerial multi-object tracking by detection using deep association networks. Mixed Strategy Game Model Against Data Poisoning Attacks. Data Poisoning Attack against Knowledge Graph Embedding. Learning Meta Model for Zero- and Few-shot Face Anti-spoofing. Can Machine Learning …

An Iteratively Re-weighted Method for Problems with Sparsity-Inducing Norms

Title An Iteratively Re-weighted Method for Problems with Sparsity-Inducing Norms
Authors Feiping Nie, Zhanxuan Hu, Xiaoqian Wang, Rong Wang, Xuelong Li, Heng Huang
Abstract This work aims at solving problems with intractable sparsity-inducing norms that are often encountered in various machine learning tasks, such as multi-task learning, subspace clustering, feature selection, robust principal component analysis, and so on. Specifically, an Iteratively Re-Weighted method (IRW) with a solid convergence guarantee is provided. We investigate its convergence speed via numerous experiments on real data. Furthermore, in order to validate the practicality of IRW, we use it to solve a concrete robust feature selection model with a complicated objective function. The experimental results show that the model coupled with the proposed optimization method significantly outperforms alternative methods.
Tasks Feature Selection, Multi-Task Learning
Published 2019-07-02
URL https://arxiv.org/abs/1907.01121v1
PDF https://arxiv.org/pdf/1907.01121v1.pdf
PWC https://paperswithcode.com/paper/an-iteratively-re-weighted-method-for
Repo
Framework
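
To make the re-weighting idea concrete, here is a minimal NumPy sketch of the classic iteratively re-weighted update for the l2,1-regularized least-squares problem min_W ||XW - Y||_F^2 + gamma * ||W||_{2,1}, a common special case in robust feature selection. It is not the paper's general IRW algorithm; the warm start, the smoothing constant eps, and the fixed iteration count are illustrative assumptions.

```python
import numpy as np

def irw_l21(X, Y, gamma=1.0, n_iter=50, eps=1e-8):
    d = X.shape[1]
    # Warm start from ridge regression so the first re-weighting is well defined.
    W = np.linalg.solve(X.T @ X + 1e-6 * np.eye(d), X.T @ Y)
    for _ in range(n_iter):
        # Re-weighting diagonal: D_ii = 1 / (2 * ||w_i||_2), smoothed by eps.
        row_norms = np.linalg.norm(W, axis=1)
        D = np.diag(1.0 / (2.0 * row_norms + eps))
        # Closed-form minimizer of the re-weighted surrogate objective.
        W = np.linalg.solve(X.T @ X + gamma * D, X.T @ Y)
    return W

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
W_true = np.zeros((20, 3))
W_true[:5] = rng.normal(size=(5, 3))
Y = X @ W_true + 0.01 * rng.normal(size=(100, 3))
W_hat = irw_l21(X, Y, gamma=5.0)
print(np.round(np.linalg.norm(W_hat, axis=1), 2))  # rows 5..19 should shrink toward 0
```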

Aerial multi-object tracking by detection using deep association networks

Title Aerial multi-object tracking by detection using deep association networks
Authors Ajit Jadhav, Prerana Mukherjee, Vinay Kaushik, Brejesh Lall
Abstract A lot of research is focused on object detection, and it has achieved significant advances with deep learning techniques in recent years. In spite of the existing research, these algorithms are not usually optimal for dealing with sequences or images captured by drone-based platforms, due to various challenges such as viewpoint change, scales, density of object distribution and occlusion. In this paper, we develop a model for detection of objects in drone images using the VisDrone2019 DET dataset. Using the RetinaNet model as our base, we modify the anchor scales to better handle the detection of densely distributed and small objects. We explicitly model the channel interdependencies by using “Squeeze-and-Excitation” (SE) blocks that adaptively recalibrate channel-wise feature responses. This helps to bring significant improvements in performance at a slight additional computational cost. Using this architecture for object detection, we build a custom DeepSORT network for object tracking on the VisDrone2019 MOT dataset by training a custom Deep Association network for the algorithm.
Tasks Multi-Object Tracking, Object Detection, Object Tracking
Published 2019-09-04
URL https://arxiv.org/abs/1909.01547v1
PDF https://arxiv.org/pdf/1909.01547v1.pdf
PWC https://paperswithcode.com/paper/aerial-multi-object-tracking-by-detection
Repo
Framework
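
As a reference for the channel-recalibration component mentioned in the abstract above, here is a minimal PyTorch sketch of a standard Squeeze-and-Excitation block. The reduction ratio and where the block is inserted inside RetinaNet are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global spatial average
        self.fc = nn.Sequential(                     # excitation: bottleneck MLP
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # re-scale each channel

# usage: recalibrated = SEBlock(256)(torch.randn(2, 256, 32, 32))
```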

Mixed Strategy Game Model Against Data Poisoning Attacks

Title Mixed Strategy Game Model Against Data Poisoning Attacks
Authors Yifan Ou, Reza Samavi
Abstract In this paper, we use game theory to model data poisoning attack scenarios. We prove the non-existence of a pure-strategy Nash equilibrium in the attacker-defender game. We then propose a mixed extension of our game model and an algorithm to approximate the Nash equilibrium strategy for the defender. Finally, we demonstrate experimentally the effectiveness of the mixed defence strategy generated by the algorithm.
Tasks data poisoning
Published 2019-06-07
URL https://arxiv.org/abs/1906.02872v1
PDF https://arxiv.org/pdf/1906.02872v1.pdf
PWC https://paperswithcode.com/paper/mixed-strategy-game-model-against-data
Repo
Framework
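
As a toy analogue of approximating a mixed defender strategy, the sketch below runs fictitious play on a small zero-sum payoff matrix: each player repeatedly best-responds to the opponent's empirical mixture, and the empirical frequencies approximate a mixed equilibrium. The paper's attacker-defender game and its approximation algorithm are more specific; the payoff matrix here is made up.

```python
import numpy as np

def fictitious_play(payoff, n_iter=5000):
    """payoff[i, j]: attacker payoff when attacker plays i, defender plays j (zero-sum)."""
    n_att, n_def = payoff.shape
    att_counts, def_counts = np.zeros(n_att), np.zeros(n_def)
    att_counts[0] = def_counts[0] = 1
    for _ in range(n_iter):
        # Each player best-responds to the opponent's empirical mixed strategy.
        att_best = np.argmax(payoff @ (def_counts / def_counts.sum()))
        def_best = np.argmin((att_counts / att_counts.sum()) @ payoff)
        att_counts[att_best] += 1
        def_counts[def_best] += 1
    return att_counts / att_counts.sum(), def_counts / def_counts.sum()

att_mix, def_mix = fictitious_play(np.array([[3.0, 0.0], [1.0, 2.0]]))
print(def_mix)  # approaches the defender's mixed equilibrium, [0.5, 0.5] for this matrix
```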

Data Poisoning Attack against Knowledge Graph Embedding

Title Data Poisoning Attack against Knowledge Graph Embedding
Authors Hengtong Zhang, Tianhang Zheng, Jing Gao, Chenglin Miao, Lu Su, Yaliang Li, Kui Ren
Abstract Knowledge graph embedding (KGE) is a technique for learning continuous embeddings for entities and relations in a knowledge graph. Due to its benefit to a variety of downstream tasks such as knowledge graph completion, question answering and recommendation, KGE has gained significant attention recently. Despite its effectiveness in a benign environment, KGE’s robustness to adversarial attacks is not well studied. Existing attack methods on graph data cannot be directly applied to attack the embeddings of a knowledge graph due to its heterogeneity. To fill this gap, we propose a collection of data poisoning attack strategies, which can effectively manipulate the plausibility of arbitrary targeted facts in a knowledge graph by adding or deleting facts on the graph. The effectiveness and efficiency of the proposed attack strategies are verified by extensive evaluations on two widely used benchmarks.
Tasks data poisoning, Graph Embedding, Knowledge Graph Completion, Knowledge Graph Embedding, Question Answering
Published 2019-04-26
URL https://arxiv.org/abs/1904.12052v2
PDF https://arxiv.org/pdf/1904.12052v2.pdf
PWC https://paperswithcode.com/paper/towards-data-poisoning-attack-against
Repo
Framework
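
The sketch below illustrates the "adding facts" flavor of the attack with a first-order estimate: for each candidate fact about the target head, approximate how one SGD step on that injected fact would shift the head embedding, and pick the candidate that most increases the targeted fact's plausibility. The TransE scoring function, the random embeddings and the restriction of candidates to the target head are illustrative assumptions, not the authors' exact strategies.

```python
import numpy as np

def transe_score(h, r, t):
    # Higher = more plausible under a TransE-style model (negative translation distance).
    return -np.linalg.norm(h + r - t)

rng = np.random.default_rng(0)
dim, n_ent, n_rel = 16, 50, 5
E = rng.normal(size=(n_ent, dim))   # entity embeddings, assumed already trained
R = rng.normal(size=(n_rel, dim))   # relation embeddings

target_h, target_r, target_t = 3, 1, 7          # targeted fact whose plausibility we raise
candidates = [(target_h, r, t) for r in range(n_rel) for t in range(n_ent) if t != target_h]

def head_shift_if_added(r, t, lr=0.1):
    # One SGD step on the injected fact's loss ||E[h] + R[r] - E[t]||^2
    # would move the head embedding by roughly this amount.
    return -lr * 2.0 * (E[target_h] + R[r] - E[t])

def estimated_gain(fact):
    _, r, t = fact
    shifted_head = E[target_h] + head_shift_if_added(r, t)
    return (transe_score(shifted_head, R[target_r], E[target_t])
            - transe_score(E[target_h], R[target_r], E[target_t]))

best_fact = max(candidates, key=estimated_gain)
print("fact to inject:", best_fact, "estimated score gain:", round(estimated_gain(best_fact), 3))
```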

Learning Meta Model for Zero- and Few-shot Face Anti-spoofing

Title Learning Meta Model for Zero- and Few-shot Face Anti-spoofing
Authors Yunxiao Qin, Chenxu Zhao, Xiangyu Zhu, Zezheng Wang, Zitong Yu, Tianyu Fu, Feng Zhou, Jingping Shi, Zhen Lei
Abstract Face anti-spoofing is crucial to the security of face recognition systems. Most previous methods formulate face anti-spoofing as a supervised learning problem to detect various predefined presentation attacks, which need large-scale training data to cover as many attacks as possible. However, the trained model easily overfits to several common attacks and remains vulnerable to unseen attacks. To overcome this challenge, the detector should: 1) learn discriminative features that can generalize to unseen spoofing types from predefined presentation attacks; 2) quickly adapt to new spoofing types by learning from both the predefined attacks and a few examples of the new spoofing types. Therefore, we define face anti-spoofing as a zero- and few-shot learning problem. In this paper, we propose a novel Adaptive Inner-update Meta Face Anti-Spoofing (AIM-FAS) method to tackle this problem through meta-learning. Specifically, AIM-FAS trains a meta-learner focusing on the task of detecting unseen spoofing types by learning from predefined living and spoofing faces and a few examples of new attacks. To assess the proposed approach, we propose several benchmarks for zero- and few-shot FAS. Experiments show that it outperforms existing methods on the presented benchmarks as well as on existing zero-shot FAS protocols.
Tasks Face Anti-Spoofing, Face Recognition, Few-Shot Learning, Meta-Learning
Published 2019-04-29
URL https://arxiv.org/abs/1904.12490v3
PDF https://arxiv.org/pdf/1904.12490v3.pdf
PWC https://paperswithcode.com/paper/meta-anti-spoofing-learning-to-learn-in-face
Repo
Framework
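
The core meta-learning mechanics can be sketched with a generic MAML-style inner/outer update, shown below on random stand-in data. AIM-FAS's zero-/few-shot task construction and its adaptive inner-update rule are not reproduced; this only shows the differentiable inner adaptation on a support set followed by an outer meta-update on the query set.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(8, 2)                      # stand-in for a face anti-spoofing detector
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr = 0.1

def inner_adapt(support_x, support_y):
    # One inner gradient step on the support set, kept differentiable
    # so the outer loss can back-propagate through the adaptation.
    loss = F.cross_entropy(model(support_x), support_y)
    grads = torch.autograd.grad(loss, list(model.parameters()), create_graph=True)
    return [p - inner_lr * g for p, g in zip(model.parameters(), grads)]

def forward_with(params, x):
    w, b = params                            # nn.Linear stores (weight, bias) in this order
    return F.linear(x, w, b)

for _ in range(100):                         # meta-training loop over sampled tasks
    support_x, support_y = torch.randn(10, 8), torch.randint(0, 2, (10,))
    query_x, query_y = torch.randn(10, 8), torch.randint(0, 2, (10,))
    fast_params = inner_adapt(support_x, support_y)
    meta_loss = F.cross_entropy(forward_with(fast_params, query_x), query_y)
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
```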

Can Machine Learning Model with Static Features be Fooled: an Adversarial Machine Learning Approach

Title Can Machine Learning Model with Static Features be Fooled: an Adversarial Machine Learning Approach
Authors Rahim Taheri, Reza Javidan, Mohammad Shojafar, Vinod P, Mauro Conti
Abstract The widespread adoption of smartphones dramatically increases the risk of attacks and the spread of mobile malware, especially on the Android platform. Machine learning-based solutions have already been used as a tool to supersede signature-based anti-malware systems. However, malware authors leverage features from malicious and legitimate samples to estimate statistical differences in order to create adversarial examples. Hence, to evaluate the vulnerability of machine learning algorithms in malware detection, we propose five different attack scenarios to perturb malicious applications (apps). By doing this, the classification algorithm inappropriately fits the discriminant function on the set of data points, eventually yielding a higher misclassification rate. Further, to distinguish the adversarial examples from benign samples, we propose two defense mechanisms to counter the attacks. To validate our attacks and solutions, we test our model on three different benchmark datasets. We also test our methods using various classifier algorithms and compare them with the state-of-the-art data poisoning method using the Jacobian matrix. Promising results show that generated adversarial samples can evade detection with a very high probability. Additionally, when the evasive variants generated by our attack models are used to harden the developed anti-malware system, the detection rate improves by up to 50% when using the Generative Adversarial Network (GAN) method.
Tasks data poisoning, Malware Detection
Published 2019-04-20
URL https://arxiv.org/abs/1904.09433v2
PDF https://arxiv.org/pdf/1904.09433v2.pdf
PWC https://paperswithcode.com/paper/can-machine-learning-model-with-static
Repo
Framework
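
A minimal version of the evasion idea on binary static features is sketched below: train a toy classifier, then flip a few absent features to present (feature addition preserves app functionality), choosing the ones whose learned weights push the score most toward benign. The toy data, the classifier and the number of flips are assumptions; the paper's five attack scenarios and two defenses are more involved.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 100)).astype(float)   # toy binary static features
w_true = rng.normal(size=100)
y = (X @ w_true + rng.normal(scale=0.5, size=500) > 0).astype(int)  # 1 = malware

clf = LogisticRegression(max_iter=1000).fit(X, y)

def evade(x, n_flips=10):
    # Add the absent features whose weights push the score most toward "benign".
    w = clf.coef_[0]
    candidates = np.where(x == 0)[0]
    order = candidates[np.argsort(w[candidates])]         # most negative weights first
    x_adv = x.copy()
    x_adv[order[:n_flips]] = 1.0
    return x_adv

x = X[y == 1][0]
print(clf.predict(x.reshape(1, -1))[0], "->",
      clf.predict(evade(x).reshape(1, -1))[0])            # the malware score drops
```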

Symmetric Regularization based BERT for Pair-wise Semantic Reasoning

Title Symmetric Regularization based BERT for Pair-wise Semantic Reasoning
Authors Xingyi Cheng, Weidi Xu, Kunlong Chen, Wei Wang, Bin Bi, Ming Yan, Chen Wu, Luo Si, Wei Chu, Taifeng Wang
Abstract The ability of semantic reasoning over the sentence pair is essential for many natural language understanding tasks, e.g., natural language inference and machine reading comprehension. A recent significant improvement in these tasks comes from BERT. As reported, the next sentence prediction (NSP) in BERT, which learns the contextual relationship between two sentences, is of great significance for downstream problems with sentence-pair input. Despite the effectiveness of NSP, we suggest that NSP still lacks the essential signal to distinguish between entailment and shallow correlation. To remedy this, we propose to augment the NSP task to a 3-class categorization task, which includes a category for previous sentence prediction (PSP). The involvement of PSP encourages the model to focus on the informative semantics to determine the sentence order, thereby improving the ability of semantic understanding. This simple modification yields remarkable improvement over vanilla BERT. To further incorporate the document-level information, the scope of NSP and PSP is expanded into a broader range, i.e., NSP and PSP also include close but non-successive sentences, the noise of which is mitigated by the label-smoothing technique. Both qualitative and quantitative experimental results demonstrate the effectiveness of the proposed method. Our method consistently improves the performance on the NLI and MRC benchmarks, including the challenging HANS dataset, suggesting that the document-level task is still promising for pre-training.
Tasks Machine Reading Comprehension, Natural Language Inference, Reading Comprehension
Published 2019-09-08
URL https://arxiv.org/abs/1909.03405v1
PDF https://arxiv.org/pdf/1909.03405v1.pdf
PWC https://paperswithcode.com/paper/symmetric-regularization-based-bert-for-pair
Repo
Framework
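
The sketch below shows how sentence pairs could be labeled for the 3-class objective: next sentence (NSP), previous sentence (PSP), or a random sentence from another document. The sampling ratios, the broader "close but non-successive" window and the label-smoothing setup described in the abstract are not reproduced here.

```python
import random

def make_pairs(documents, n_pairs=1000, seed=0):
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_pairs):
        doc = rng.choice(documents)
        i = rng.randrange(1, len(doc) - 1)       # pick an interior sentence
        label = rng.randrange(3)
        if label == 0:                            # 0: next sentence prediction (NSP)
            pair = (doc[i], doc[i + 1])
        elif label == 1:                          # 1: previous sentence prediction (PSP)
            pair = (doc[i], doc[i - 1])
        else:                                     # 2: random sentence from another document
            other = rng.choice([d for d in documents if d is not doc])
            pair = (doc[i], rng.choice(other))
        pairs.append((pair, label))
    return pairs

docs = [[f"doc{d} sent{s}" for s in range(5)] for d in range(3)]
print(make_pairs(docs, n_pairs=3))
```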

SLSGD: Secure and Efficient Distributed On-device Machine Learning

Title SLSGD: Secure and Efficient Distributed On-device Machine Learning
Authors Cong Xie, Sanmi Koyejo, Indranil Gupta
Abstract We consider distributed on-device learning with limited communication and security requirements. We propose a new robust distributed optimization algorithm with efficient communication and attack tolerance. The proposed algorithm has provable convergence and robustness under non-IID settings. Empirical results show that the proposed algorithm stabilizes the convergence and tolerates data poisoning on a small number of workers.
Tasks data poisoning, Distributed Optimization
Published 2019-03-16
URL https://arxiv.org/abs/1903.06996v3
PDF https://arxiv.org/pdf/1903.06996v3.pdf
PWC https://paperswithcode.com/paper/practical-distributed-learning-secure-machine
Repo
Framework
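
The sketch below shows one server-side aggregation round with a coordinate-wise trimmed mean, a common robust aggregation rule in this line of work, to illustrate how a few poisoned workers can be tolerated. SLSGD's full procedure (local on-device updates, its specific aggregation and moving-average options) is not reproduced; the numbers are made up.

```python
import numpy as np

def trimmed_mean(updates, b):
    """updates: (n_workers, dim); drop the b largest and b smallest values per coordinate."""
    sorted_updates = np.sort(updates, axis=0)
    return sorted_updates[b:len(updates) - b].mean(axis=0)

rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(8, 4))   # honest workers' model updates
poisoned = np.full((2, 4), -100.0)                      # poisoned workers' updates
updates = np.vstack([honest, poisoned])

print(updates.mean(axis=0))        # plain averaging is destroyed by the poison
print(trimmed_mean(updates, b=2))  # trimmed mean stays close to the honest value
```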

Lifting 2d Human Pose to 3d : A Weakly Supervised Approach

Title Lifting 2d Human Pose to 3d : A Weakly Supervised Approach
Authors Sandika Biswas, Sanjana Sinha, Kavya Gupta, Brojeshwar Bhowmick
Abstract Estimating 3d human pose from monocular images is a challenging problem due to the variety and complexity of human poses and the inherent ambiguity in recovering depth from a single view. Recent deep learning based methods show promising results by using supervised learning on 3d pose annotated datasets. However, the lack of large-scale 3d annotated training data captured under in-the-wild settings makes 3d pose estimation difficult for in-the-wild poses. Few approaches have utilized training images from both 3d and 2d pose datasets in a weakly-supervised manner for learning 3d poses in unconstrained settings. In this paper, we propose a method which can effectively predict 3d human pose from 2d pose using a deep neural network trained in a weakly-supervised manner on a combination of ground-truth 3d pose and ground-truth 2d pose. Our method uses re-projection error minimization as a constraint to predict the 3d locations of body joints, and this is crucial for training on data where the 3d ground-truth is not present. Since minimizing re-projection error alone may not guarantee an accurate 3d pose, we also use additional geometric constraints on skeleton pose to regularize the pose in 3d. We demonstrate the superior generalization ability of our method by cross-dataset validation on MPI-INF-3DHP, a challenging 3d benchmark dataset containing in-the-wild 3d poses.
Tasks 3D Pose Estimation, Pose Estimation
Published 2019-05-03
URL https://arxiv.org/abs/1905.01047v1
PDF https://arxiv.org/pdf/1905.01047v1.pdf
PWC https://paperswithcode.com/paper/lifting-2d-human-pose-to-3d-a-weakly
Repo
Framework
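
A minimal PyTorch sketch of the weak supervision signal is given below: project predicted 3d joints back to 2d and penalize the distance to the ground-truth 2d joints, plus a simple bone-length term standing in for the geometric constraints. The orthographic projection, joint count and bone list are assumptions, not the paper's camera model or exact regularizers.

```python
import torch

def reprojection_loss(pred_3d, gt_2d):
    # pred_3d: (B, J, 3), gt_2d: (B, J, 2); orthographic projection simply drops depth.
    return ((pred_3d[..., :2] - gt_2d) ** 2).mean()

def bone_length_loss(pred_3d, bones, ref_lengths):
    # bones: list of (parent, child) joint indices; ref_lengths: (len(bones),)
    lengths = torch.stack(
        [(pred_3d[:, c] - pred_3d[:, p]).norm(dim=-1) for p, c in bones], dim=1)
    return ((lengths - ref_lengths) ** 2).mean()

pred_3d = torch.randn(4, 17, 3, requires_grad=True)   # network output stand-in
gt_2d = torch.randn(4, 17, 2)                          # 2d annotations
bones = [(0, 1), (1, 2)]
loss = reprojection_loss(pred_3d, gt_2d) + 0.1 * bone_length_loss(
    pred_3d, bones, ref_lengths=torch.tensor([0.3, 0.4]))
loss.backward()
print(loss.item())
```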

Quadruplet Selection Methods for Deep Embedding Learning

Title Quadruplet Selection Methods for Deep Embedding Learning
Authors Kaan Karaman, Erhan Gundogdu, Aykut Koc, A. Aydin Alatan
Abstract Recognition of objects with subtle differences has been used in many practical applications, such as car model recognition and maritime vessel identification. For discrimination of the objects in fine-grained detail, we focus on deep embedding learning by using a multi-task learning framework, in which the hierarchical labels (coarse and fine labels) of the samples are utilized both for classification and a quadruplet-based loss function. In order to improve the recognition strength of the learned features, we present a novel feature selection method specifically designed for four training samples of a quadruplet. By experiments, it is observed that the selection of very hard negative samples with relatively easy positive ones from the same coarse and fine classes significantly increases some performance metrics in a fine-grained dataset when compared to selecting the quadruplet samples randomly. The feature embedding learned by the proposed method achieves favorable performance against its state-of-the-art counterparts.
Tasks Feature Selection, Multi-Task Learning
Published 2019-07-22
URL https://arxiv.org/abs/1907.09245v1
PDF https://arxiv.org/pdf/1907.09245v1.pdf
PWC https://paperswithcode.com/paper/quadruplet-selection-methods-for-deep
Repo
Framework
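
A sketch of a quadruplet margin loss over (anchor, positive, same-coarse-class negative, different-coarse-class negative) is given below. The margins and the exact loss form, as well as the paper's hard-negative selection rule, are assumptions; this only illustrates the hierarchical ordering the quadruplets encourage.

```python
import torch
import torch.nn.functional as F

def quadruplet_loss(a, p, n_fine, n_coarse, m1=0.2, m2=0.1):
    d_ap = F.pairwise_distance(a, p)
    d_an_fine = F.pairwise_distance(a, n_fine)      # different fine class, same coarse class
    d_an_coarse = F.pairwise_distance(a, n_coarse)  # different coarse class
    # Positives should be closer than same-coarse negatives, which in turn
    # should be closer than different-coarse negatives.
    return (F.relu(d_ap - d_an_fine + m1) + F.relu(d_an_fine - d_an_coarse + m2)).mean()

emb = torch.randn(16, 4, 128)   # 16 quadruplets of 128-d embeddings
loss = quadruplet_loss(emb[:, 0], emb[:, 1], emb[:, 2], emb[:, 3])
print(loss.item())
```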

Group Emotion Recognition Using Machine Learning

Title Group Emotion Recognition Using Machine Learning
Authors Samanyou Garg
Abstract Automatic facial emotion recognition is a challenging task that has gained significant scientific interest over the past few years, but the problem of emotion recognition for a group of people has been less extensively studied. However, it is slowly gaining popularity due to the massive amount of data available on social networking sites containing images of groups of people participating in various social events. Group emotion recognition is a challenging problem due to obstructions like head and body pose variations, occlusions, variable lighting conditions, variance of actors, varied indoor and outdoor settings and image quality. The objective of this task is to classify a group’s perceived emotion as Positive, Neutral or Negative. In this report, we describe our solution which is a hybrid machine learning system that incorporates deep neural networks and Bayesian classifiers. Deep Convolutional Neural Networks (CNNs) work from bottom to top, analysing facial expressions expressed by individual faces extracted from the image. The Bayesian network works from top to bottom, inferring the global emotion for the image, by integrating the visual features of the contents of the image obtained through a scene descriptor. In the final pipeline, the group emotion category predicted by an ensemble of CNNs in the bottom-up module is passed as input to the Bayesian Network in the top-down module and an overall prediction for the image is obtained. Experimental results show that the stated system achieves 65.27% accuracy on the validation set which is in line with state-of-the-art results. As an outcome of this project, a Progressive Web Application and an accompanying Android app with a simple and intuitive user interface are presented, allowing users to test out the system with their own pictures.
Tasks Emotion Recognition
Published 2019-05-03
URL https://arxiv.org/abs/1905.01118v1
PDF https://arxiv.org/pdf/1905.01118v1.pdf
PWC https://paperswithcode.com/paper/group-emotion-recognition-using-machine
Repo
Framework
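
The bottom-up/top-down fusion can be caricatured as a Bayes-style combination of the face-level prediction with a scene-based prior, as in the toy sketch below. The real system uses an ensemble of CNNs and a learned Bayesian network over scene descriptors; every number here is made up.

```python
import numpy as np

labels = ["Positive", "Neutral", "Negative"]
cnn_probs = np.array([0.55, 0.30, 0.15])    # averaged face-level CNN ensemble output (bottom-up)
scene_prior = np.array([0.20, 0.30, 0.50])  # scene-descriptor-based prior (top-down)

posterior = cnn_probs * scene_prior          # elementwise product, then renormalize
posterior /= posterior.sum()
print(labels[int(np.argmax(posterior))], posterior.round(3))
```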

Online Data Poisoning Attack

Title Online Data Poisoning Attack
Authors Xuezhou Zhang, Xiaojin Zhu, Laurent Lessard
Abstract We study data poisoning attacks in the online setting where training items arrive sequentially, and the attacker may perturb the current item to manipulate online learning. Importantly, the attacker has no knowledge of future training items nor the data generating distribution. We formulate online data poisoning attack as a stochastic optimal control problem, and solve it with model predictive control and deep reinforcement learning. We also upper bound the suboptimality suffered by the attacker for not knowing the data generating distribution. Experiments validate our control approach in generating near-optimal attacks on both supervised and unsupervised learning tasks.
Tasks data poisoning
Published 2019-03-05
URL https://arxiv.org/abs/1903.01666v2
PDF https://arxiv.org/pdf/1903.01666v2.pdf
PWC https://paperswithcode.com/paper/online-data-poisoning-attack
Repo
Framework
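
A myopic (one-step) version of the attack is easy to sketch: for an online least-squares learner, the next parameter is affine in the perturbed label, so the attacker can solve for the label (within a budget) that moves the learner closest to an attacker-chosen target. The paper's full stochastic optimal-control treatment with MPC and deep RL is not reproduced; the learner, budget and target below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(3)                           # online learner's parameter
theta_target = np.array([5.0, -5.0, 0.0])     # attacker's desired parameter
lr, budget = 0.1, 2.0                         # learner step size, per-item label budget

for t in range(200):
    x = rng.normal(size=3)
    y = x @ np.ones(3) + 0.1 * rng.normal()   # clean data stream
    # Next parameter is affine in the perturbed label y_adv:
    # theta_next = (theta - lr*(theta@x)*x) + lr*y_adv*x = c + y_adv*d (+ theta_target shift)
    c = theta - lr * (theta @ x) * x - theta_target
    d = lr * x
    y_star = -(c @ d) / (d @ d)               # label minimizing distance to the target
    y_adv = np.clip(y_star, y - budget, y + budget)
    theta = theta + lr * (y_adv - theta @ x) * x   # learner's online SGD update

print(np.round(theta, 2))  # pulled away from the clean model toward the attacker's target
```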

Adversarial Perturbations on the Perceptual Ball

Title Adversarial Perturbations on the Perceptual Ball
Authors Andrew Elliott, Stephen Law, Chris Russell
Abstract We present a simple regularisation of adversarial perturbations based upon the perceptual loss. While the resulting perturbations remain imperceptible to the human eye, they differ from existing adversarial perturbations in two important regards: (i) our resulting perturbations are semi-sparse, and typically make alterations to objects and regions of interest while leaving the background static; (ii) our perturbations do not alter the distribution of data in the image and are undetectable by state-of-the-art methods. As such, this work reinforces the connection between explainable AI and adversarial perturbations. We show the merits of our approach by evaluating on standard explainability benchmarks and by defeating recent tests for detecting adversarial perturbations, substantially decreasing the effectiveness of detecting adversarial perturbations.
Tasks
Published 2019-12-19
URL https://arxiv.org/abs/1912.09405v1
PDF https://arxiv.org/pdf/1912.09405v1.pdf
PWC https://paperswithcode.com/paper/adversarial-perturbations-on-the-perceptual
Repo
Framework
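
The regularisation can be sketched as maximizing the classification loss while penalizing the change in intermediate-layer features, so the perturbation stays inside a "perceptual ball" around the clean image. The tiny random network, layer choice, weights and budget below are assumptions, not the authors' setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 8, 3, padding=1), nn.ReLU())
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))

x = torch.rand(1, 3, 32, 32)                              # stand-in input image
y = torch.tensor([3])                                     # its (assumed) true label
with torch.no_grad():
    clean_feats = backbone(x)

delta = torch.zeros_like(x, requires_grad=True)
opt = torch.optim.SGD([delta], lr=0.01)
for _ in range(50):
    feats = backbone(x + delta)
    adv_loss = -F.cross_entropy(head(feats), y)           # push toward misclassification
    perceptual = F.mse_loss(feats, clean_feats)           # stay inside the perceptual ball
    loss = adv_loss + 10.0 * perceptual
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-8 / 255, 8 / 255)                   # small pixel-space budget
```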

Deep Crowd-Flow Prediction in Built Environments

Title Deep Crowd-Flow Prediction in Built Environments
Authors Samuel S. Sohn, Seonghyeon Moon, Honglu Zhou, Sejong Yoon, Vladimir Pavlovic, Mubbasir Kapadia
Abstract Predicting the behavior of crowds in complex environments is a key requirement in a multitude of application areas, including crowd and disaster management, architectural design, and urban planning. Given a crowd’s immediate state, current approaches simulate crowd movement to arrive at a future state. However, most applications require the ability to predict hundreds of possible simulation outcomes (e.g., under different environment and crowd situations) at real-time rates, for which these approaches are prohibitively expensive. In this paper, we propose an approach to instantly predict the long-term flow of crowds in arbitrarily large, realistic environments. Central to our approach is a novel CAGE representation consisting of Capacity, Agent, Goal, and Environment-oriented information, which efficiently encodes and decodes crowd scenarios into compact, fixed-size representations that are environmentally lossless. We present a framework to facilitate the accurate and efficient prediction of crowd flow in never-before-seen crowd scenarios. We conduct a series of experiments to evaluate the efficacy of our approach and showcase positive results.
Tasks
Published 2019-10-13
URL https://arxiv.org/abs/1910.05810v1
PDF https://arxiv.org/pdf/1910.05810v1.pdf
PWC https://paperswithcode.com/paper/deep-crowd-flow-prediction-in-built
Repo
Framework
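
A toy sketch of packing Capacity, Agent, Goal and Environment information into a fixed-size multi-channel grid is shown below, to mirror the spirit of the CAGE representation. How the paper actually encodes and losslessly decodes each channel is not reproduced; the grid size and contents are assumptions.

```python
import numpy as np

H = W = 64                                              # fixed spatial size of the representation
env = np.zeros((H, W));  env[20:44, 30:34] = 1.0        # walls / obstacles
agents = np.zeros((H, W)); agents[5:10, 5:10] = 1.0     # initial crowd occupancy
goals = np.zeros((H, W));  goals[60, 60] = 1.0          # exit / goal location
capacity = 1.0 - env                                    # walkable capacity per cell

cage = np.stack([capacity, agents, goals, env])         # (4, H, W) input to the flow predictor
print(cage.shape)
```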

TrojDRL: Trojan Attacks on Deep Reinforcement Learning Agents

Title TrojDRL: Trojan Attacks on Deep Reinforcement Learning Agents
Authors Panagiota Kiourti, Kacper Wardega, Susmit Jha, Wenchao Li
Abstract Recent work has identified that classification models implemented as neural networks are vulnerable to data-poisoning and Trojan attacks at training time. In this work, we show that these training-time vulnerabilities extend to deep reinforcement learning (DRL) agents and can be exploited by an adversary with access to the training process. In particular, we focus on Trojan attacks that augment the function of reinforcement learning policies with hidden behaviors. We demonstrate that such attacks can be implemented through minuscule data poisoning (as little as 0.025% of the training data) and in-band reward modification that does not affect the reward on normal inputs. The policies learned with our proposed attack approach behave almost indistinguishably from benign policies but deteriorate drastically when the Trojan is triggered, in both targeted and untargeted settings. Furthermore, we show that existing Trojan defense mechanisms for classification tasks are not effective in the reinforcement learning setting.
Tasks data poisoning
Published 2019-03-01
URL http://arxiv.org/abs/1903.06638v1
PDF http://arxiv.org/pdf/1903.06638v1.pdf
PWC https://paperswithcode.com/paper/trojdrl-trojan-attacks-on-deep-reinforcement
Repo
Framework
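
A data-level sketch of the poisoning described above is given below: stamp a small trigger patch onto a tiny fraction of observations, force the targeted action, and keep the modified reward in-band. The DRL training loop is omitted, and the trigger shape, poisoning fraction and target action are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
obs = rng.random(size=(100_000, 84, 84))          # stand-in for stacked game frames
actions = rng.integers(0, 6, size=100_000)
rewards = rng.choice([-1.0, 0.0, 1.0], size=100_000)

poison_frac, target_action = 0.00025, 2           # ~0.025% of the data
idx = rng.choice(len(obs), size=int(poison_frac * len(obs)), replace=False)

obs[idx, :4, :4] = 1.0                            # 4x4 trigger patch in the corner
actions[idx] = target_action                      # hidden behavior to associate with the trigger
rewards[idx] = 1.0                                # in-band reward: stays within {-1, 0, 1}
print(f"poisoned {len(idx)} of {len(obs)} transitions")
```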