January 27, 2020

3673 words 18 mins read

Paper Group ANR 1174

Enforcing Deterministic Constraints on Generative Adversarial Networks for Emulating Physical Systems

Title Enforcing Deterministic Constraints on Generative Adversarial Networks for Emulating Physical Systems
Authors Zeng Yang, Jin-Long Wu, Heng Xiao
Abstract Generative adversarial networks (GANs) were initially proposed to generate images by learning from a large number of samples. Recently, GANs have been used to emulate complex physical systems such as turbulent flows. However, a critical question must be answered before GANs can be considered trusted emulators for physical systems: do GAN-generated samples conform to the various physical constraints? These include both deterministic constraints (e.g., conservation laws) and statistical constraints (e.g., energy spectrum in turbulent flows). The latter has been studied in a companion paper (Wu et al. 2019. Enforcing statistical constraints in generative adversarial networks for modeling chaotic dynamical systems. arXiv:1905.06841). In the present work, we enforce deterministic yet approximate constraints on GANs by incorporating them into the loss function of the generator. We evaluate the performance of physics-constrained GANs on two representative tasks with geometrical constraints (generating points on circles) and differential constraints (generating divergence-free flow velocity fields), respectively. In both cases, the constrained GANs produced samples that precisely conform to the underlying constraints, even though the constraints are only enforced approximately. More importantly, the imposed constraints significantly accelerate convergence and improve robustness during training. These improvements are noteworthy, as convergence and robustness are two well-known obstacles in the training of GANs.
Tasks
Published 2019-11-15
URL https://arxiv.org/abs/1911.06671v1
PDF https://arxiv.org/pdf/1911.06671v1.pdf
PWC https://paperswithcode.com/paper/enforcing-deterministic-constraints-on
Repo
Framework
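
The core idea, folding the deterministic constraint into the generator's loss, can be illustrated with a short PyTorch sketch. This is not the authors' code: the network shapes, the finite-difference divergence penalty, and the weight `lambda_c` are assumptions chosen to show the mechanism for the divergence-free case.

```python
import torch
import torch.nn as nn

# Illustrative sketch: penalise violation of a divergence-free constraint in the
# generator loss, alongside the usual adversarial term. Shapes, penalty weight
# and the finite-difference stencil are assumptions.

class Generator(nn.Module):
    def __init__(self, z_dim=64, grid=32):
        super().__init__()
        self.grid = grid
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, 2 * grid * grid),   # u and v velocity components
        )

    def forward(self, z):
        return self.net(z).view(-1, 2, self.grid, self.grid)

def divergence_penalty(field):
    """Approximate div(u, v) with central differences and return its mean square."""
    u, v = field[:, 0], field[:, 1]
    du_dx = (u[:, :, 2:] - u[:, :, :-2]) / 2.0
    dv_dy = (v[:, 2:, :] - v[:, :-2, :]) / 2.0
    div = du_dx[:, 1:-1, :] + dv_dy[:, :, 1:-1]
    return (div ** 2).mean()

def generator_loss(d_fake_logits, fake_field, lambda_c=10.0):
    adv = nn.functional.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    return adv + lambda_c * divergence_penalty(fake_field)

g = Generator()
fake = g(torch.randn(4, 64))
d_logits = torch.randn(4, 1)               # placeholder discriminator output
loss = generator_loss(d_logits, fake)
```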

A Combination of Temporal Sequence Learning and Data Description for Anomaly-based NIDS

Title A Combination of Temporal Sequence Learning and Data Description for Anomaly-based NIDS
Authors Nguyen Thanh Van, Tran Ngoc Thinh, Le Thanh Sach
Abstract Through continuous observation and modeling of normal behavior in networks, an Anomaly-based Network Intrusion Detection System (A-NIDS) offers a way to find possible threats via deviation from the normal model. Analyzing network traffic with a time series model has the advantage of exploiting the relationships between packets within the traffic and observing trends in behavior over a period of time. It generates new sequences with good features that support anomaly detection in network traffic and provide the ability to detect new attacks. Moreover, an anomaly detection technique that focuses on the normal data and aims to build a description of it is effective for anomaly detection on imbalanced data. In this paper, we propose a model that combines a Long Short-Term Memory (LSTM) architecture for processing time series with Support Vector Data Description (SVDD) for anomaly detection in A-NIDS, so as to obtain the advantages of both. In this model, the parameters of the LSTM and the SVDD are jointly trained with a joint optimization method. Our experimental results on the KDD99 dataset show that the proposed combined model achieves high intrusion detection performance, especially for DoS and Probe attacks, with detection rates of 98.0% and 99.8%, respectively.
Tasks Anomaly Detection, Intrusion Detection, Network Intrusion Detection, Time Series
Published 2019-06-07
URL https://arxiv.org/abs/1906.05277v1
PDF https://arxiv.org/pdf/1906.05277v1.pdf
PWC https://paperswithcode.com/paper/a-combination-of-temporal-sequence-learning
Repo
Framework
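
A minimal sketch of the joint LSTM + SVDD training idea, assuming a Deep-SVDD-style objective that pulls embeddings of normal sequences toward a fixed centre; the feature dimensionality, centre initialisation, and training loop below are illustrative placeholders rather than the paper's implementation.

```python
import torch
import torch.nn as nn

# Sketch: jointly train an LSTM encoder with an SVDD-style objective on normal
# traffic. Feature dimensions, centre initialisation and the anomaly threshold
# are assumptions.

class LSTMEncoder(nn.Module):
    def __init__(self, n_features=41, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)

    def forward(self, x):                  # x: (batch, time, n_features)
        _, (h, _) = self.lstm(x)
        return h[-1]                       # last hidden state as sequence embedding

def svdd_loss(embeddings, center):
    # Minimise the mean squared distance of normal samples to the centre.
    return ((embeddings - center) ** 2).sum(dim=1).mean()

encoder = LSTMEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3, weight_decay=1e-4)

normal_batch = torch.randn(128, 20, 41)    # placeholder for normal traffic sequences
with torch.no_grad():                      # fix the centre from an initial pass (common Deep SVDD trick)
    center = encoder(normal_batch).mean(dim=0)

for _ in range(100):
    z = encoder(normal_batch)
    loss = svdd_loss(z, center)
    opt.zero_grad(); loss.backward(); opt.step()

# At test time, flag a sequence as anomalous if its distance to the centre exceeds a threshold.
```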

Dynamic Context Correspondence Network for Semantic Alignment

Title Dynamic Context Correspondence Network for Semantic Alignment
Authors Shuaiyi Huang, Qiuyue Wang, Songyang Zhang, Shipeng Yan, Xuming He
Abstract Establishing semantic correspondence is a core problem in computer vision and remains challenging due to large intra-class variations and a lack of annotated data. In this paper, we aim to incorporate global semantic context in a flexible manner to overcome the limitations of prior work that relies on local semantic representations. To this end, we first propose a context-aware semantic representation that incorporates spatial layout for robust matching against local ambiguities. We then develop a novel dynamic fusion strategy based on an attention mechanism that combines the advantages of both local and context features by integrating semantic cues from multiple scales. We instantiate our strategy by designing an end-to-end learnable deep network, named the Dynamic Context Correspondence Network (DCCNet). To train the network, we adopt a multi-auxiliary-task loss to improve the efficiency of our weakly-supervised learning procedure. Our approach achieves superior or competitive performance over previous methods on several challenging datasets, including PF-Pascal, PF-Willow, and TSS, demonstrating its effectiveness and generality.
Tasks
Published 2019-09-08
URL https://arxiv.org/abs/1909.03444v1
PDF https://arxiv.org/pdf/1909.03444v1.pdf
PWC https://paperswithcode.com/paper/dynamic-context-correspondence-network-for
Repo
Framework
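
The dynamic fusion step can be sketched as an attention-weighted convex combination of local and context feature maps. The channel sizes and the small attention head below are assumptions; they only illustrate the kind of fusion the abstract describes, not DCCNet's exact module.

```python
import torch
import torch.nn as nn

# Minimal sketch of attention-based dynamic fusion of a "local" and a "context"
# feature map: predict a per-location weight and blend the two maps.

class DynamicFusion(nn.Module):
    def __init__(self, channels=128):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(2 * channels, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, kernel_size=1), nn.Sigmoid(),
        )

    def forward(self, local_feat, context_feat):
        w = self.attn(torch.cat([local_feat, context_feat], dim=1))
        return w * local_feat + (1.0 - w) * context_feat   # convex combination per location

fused = DynamicFusion()(torch.randn(2, 128, 16, 16), torch.randn(2, 128, 16, 16))
```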

Locally adaptive activation functions with slope recovery term for deep and physics-informed neural networks

Title Locally adaptive activation functions with slope recovery term for deep and physics-informed neural networks
Authors Ameya D. Jagtap, Kenji Kawaguchi, George Em Karniadakis
Abstract We propose two approaches to locally adaptive activation functions, namely layer-wise and neuron-wise locally adaptive activation functions, which improve the performance of deep and physics-informed neural networks. The local adaptation of the activation function is achieved by introducing scalable hyper-parameters in each layer (layer-wise) or for every neuron separately (neuron-wise), and then optimizing them using the stochastic gradient descent algorithm. The neuron-wise activation function acts like a vector activation function in each hidden layer, as opposed to the traditional scalar activation function given by fixed, global, layer-wise activations. To further increase the training speed, a slope recovery term based on the activation slope is added to the loss function, which further accelerates convergence and thereby reduces the training cost. For numerical experiments, a nonlinear discontinuous function is approximated using a deep neural network with layer-wise and neuron-wise locally adaptive activation functions, with and without the slope recovery term, and compared with its global counterpart. Moreover, the solution of the nonlinear Burgers equation, which exhibits steep gradients, is also obtained using the proposed methods. On the theoretical side, we prove that in the proposed method the gradient descent algorithms are not attracted to sub-optimal critical points or local minima under practical conditions on the initialization and learning rate. Furthermore, the proposed adaptive activation functions with slope recovery are shown to accelerate the training process on standard deep learning benchmarks using the CIFAR-10, CIFAR-100, SVHN, MNIST, KMNIST, Fashion-MNIST, and Semeion datasets, with and without data augmentation.
Tasks Data Augmentation
Published 2019-09-25
URL https://arxiv.org/abs/1909.12228v3
PDF https://arxiv.org/pdf/1909.12228v3.pdf
PWC https://paperswithcode.com/paper/locally-adaptive-activation-functions-with
Repo
Framework
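
A hedged PyTorch sketch of a layer-wise adaptive activation with a slope recovery term: each hidden layer owns a trainable slope a_k used inside tanh(n·a_k·x), and a term that shrinks as the slopes grow is added to the loss. The scale factor n, the exact form of the recovery term, and the toy regression setup are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Layer-wise adaptive activations: one trainable slope per hidden layer, plus a
# slope-recovery term that decreases as the learned slopes grow.

class AdaptiveMLP(nn.Module):
    def __init__(self, sizes=(1, 50, 50, 1), n=10.0):
        super().__init__()
        self.n = n
        self.layers = nn.ModuleList(
            nn.Linear(sizes[i], sizes[i + 1]) for i in range(len(sizes) - 1))
        # One trainable slope per hidden layer, initialised so n * a = 1.
        self.a = nn.Parameter(torch.full((len(sizes) - 2,), 1.0 / n))

    def forward(self, x):
        for k, layer in enumerate(self.layers[:-1]):
            x = torch.tanh(self.n * self.a[k] * layer(x))
        return self.layers[-1](x)

    def slope_recovery(self):
        # Shrinks as the slopes grow, nudging them upward and speeding convergence.
        return 1.0 / torch.exp(self.a).mean()

model = AdaptiveMLP()
x, y = torch.linspace(-1, 1, 100).unsqueeze(1), torch.randn(100, 1)
loss = nn.functional.mse_loss(model(x), y) + model.slope_recovery()
```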

Initialization for Network Embedding: A Graph Partition Approach

Title Initialization for Network Embedding: A Graph Partition Approach
Authors Wenqing Lin, Feng He, Faqiang Zhang, Xu Cheng, Hongyun Cai
Abstract Network embedding has been intensively studied in the literature and is widely used in various applications, such as link prediction and node classification. While previous works focus on the design of new algorithms or are tailored to various problem settings, initialization strategies for the learning process are often overlooked. In this work, we address this important issue of initialization for network embedding, which can dramatically improve both the effectiveness and efficiency of the algorithms. Specifically, we first apply a graph partitioning technique that divides the graph into several disjoint subsets, and then construct an abstract graph based on the partitions. We obtain the initial embedding for each node in the graph by computing the network embedding on the abstract graph, which is much smaller than the input graph, and then propagating the embedding among the nodes of the input graph. With extensive experiments on various datasets, we demonstrate that our initialization technique significantly improves the performance of state-of-the-art algorithms on link prediction and node classification, by up to 7.76% and 8.74%, respectively. Moreover, we show that our initialization technique reduces the running time of the state-of-the-art methods by at least 20%.
Tasks graph partitioning, Link Prediction, Network Embedding, Node Classification
Published 2019-08-28
URL https://arxiv.org/abs/1908.10697v4
PDF https://arxiv.org/pdf/1908.10697v4.pdf
PWC https://paperswithcode.com/paper/effective-and-efficient-network-embedding
Repo
Framework
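
The pipeline in the abstract (partition, embed the abstract graph, propagate) can be sketched in a few lines. The partitioner (greedy modularity communities) and the spectral embedding of the abstract graph below are stand-ins chosen for brevity, not the paper's actual components.

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community

# Sketch: partition the graph, embed the much smaller abstract graph of
# partitions, then give each node its partition's embedding as initialisation
# for any downstream embedding algorithm.

def partition_init(G, dim=8):
    parts = list(community.greedy_modularity_communities(G))
    part_of = {v: i for i, p in enumerate(parts) for v in p}

    # Abstract graph: one node per partition, weighted by cross-partition edges.
    A = np.zeros((len(parts), len(parts)))
    for u, v in G.edges():
        i, j = part_of[u], part_of[v]
        if i != j:
            A[i, j] += 1; A[j, i] += 1

    # Cheap embedding of the abstract graph: leading eigenvectors of its adjacency.
    d = min(dim, len(parts))
    _, vecs = np.linalg.eigh(A)
    abstract_emb = vecs[:, -d:]                      # (n_parts, d)

    # Propagate: each node starts from its partition's embedding (plus small noise).
    return {v: abstract_emb[part_of[v]] + 0.01 * np.random.randn(d) for v in G}

init = partition_init(nx.karate_club_graph())
```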

Bayes EMbedding (BEM): Refining Representation by Integrating Knowledge Graphs and Behavior-specific Networks

Title Bayes EMbedding (BEM): Refining Representation by Integrating Knowledge Graphs and Behavior-specific Networks
Authors Yuting Ye, Xuwu Wang, Jiangchao Yao, Kunyang Jia, Jingren Zhou, Yanghua Xiao, Hongxia Yang
Abstract Low-dimensional embeddings of knowledge graphs and behavior graphs have proved remarkably powerful in a variety of tasks, from predicting unobserved edges between entities to content recommendation. The two types of graphs can contain distinct and complementary information about the same entities/nodes. However, previous works focus either on knowledge graph embedding or behavior graph embedding, while few works consider both in a unified way. Here we present BEM, a Bayesian framework that incorporates the information from knowledge graphs and behavior graphs. More specifically, BEM takes as prior the pre-trained embeddings from the knowledge graph, and integrates them with the pre-trained embeddings from the behavior graphs via a Bayesian generative model. BEM is able to mutually refine the embeddings from both sides while preserving their own topological structures. To show the superiority of our method, we conduct a range of experiments: node classification, link prediction, and triplet classification on two small datasets related to Freebase, and item recommendation on a large-scale e-commerce dataset.
Tasks Graph Embedding, Knowledge Graph Embedding, Knowledge Graphs, Link Prediction, Node Classification
Published 2019-08-28
URL https://arxiv.org/abs/1908.10611v1
PDF https://arxiv.org/pdf/1908.10611v1.pdf
PWC https://paperswithcode.com/paper/bayes-embedding-bem-refining-representation
Repo
Framework
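
As a toy stand-in for the Bayesian refinement idea, one can treat the knowledge-graph embedding of an entity as a Gaussian prior and the behaviour-graph embedding as an observation, and take the precision-weighted posterior mean as the refined vector. BEM's actual generative model is considerably richer; the variances and dimensions here are assumptions.

```python
import numpy as np

# Toy Gaussian stand-in: conjugate update of a prior (knowledge-graph embedding)
# with an observation (behaviour-graph embedding).

def refine(kg_emb, bh_emb, kg_var=1.0, bh_var=0.5):
    # Posterior mean of N(kg_emb, kg_var) after observing bh_emb with noise bh_var.
    prec = 1.0 / kg_var + 1.0 / bh_var
    return (kg_emb / kg_var + bh_emb / bh_var) / prec

kg = np.random.randn(1000, 64)       # pre-trained knowledge-graph embeddings (placeholder)
bh = np.random.randn(1000, 64)       # pre-trained behaviour-graph embeddings (placeholder)
refined = refine(kg, bh)             # (1000, 64), pulled toward the better-trusted side
```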

How You Act Tells a Lot: Privacy-Leakage Attack on Deep Reinforcement Learning

Title How You Act Tells a Lot: Privacy-Leakage Attack on Deep Reinforcement Learning
Authors Xinlei Pan, Weiyao Wang, Xiaoshuai Zhang, Bo Li, Jinfeng Yi, Dawn Song
Abstract Machine learning has been widely applied to various applications, some of which involve training with privacy-sensitive data. A modest number of data breaches have been studied, including credit card information in natural language data and identities from face datasets. However, most of these studies focus on supervised learning models. As deep reinforcement learning (DRL) has been deployed in a number of real-world systems, such as indoor robot navigation, whether trained DRL policies can leak private information requires in-depth study. To explore such privacy breaches in general, we propose two methods: environment dynamics search via a genetic algorithm, and candidate inference based on shadow policies. We conduct extensive experiments to demonstrate such privacy vulnerabilities in DRL under various settings. We leverage the proposed algorithms to infer floor plans from trained Grid World navigation DRL agents with LiDAR perception. The proposed algorithm correctly infers most of the floor plans and reaches an average recovery rate of 95.83% with policy-gradient-trained agents. In addition, we are able to recover the robot configuration in continuous control environments and an autonomous driving simulator with high accuracy. To the best of our knowledge, this is the first work to investigate privacy leakage in DRL settings, and we show that DRL-based agents do potentially leak privacy-sensitive information from their trained policies.
Tasks Autonomous Driving, Continuous Control, Robot Navigation
Published 2019-04-24
URL http://arxiv.org/abs/1904.11082v1
PDF http://arxiv.org/pdf/1904.11082v1.pdf
PWC https://paperswithcode.com/paper/how-you-act-tells-a-lot-privacy-leakage
Repo
Framework
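
A skeleton of the genetic-algorithm environment-dynamics search: evolve candidate Grid World layouts so that the target policy's behaviour on a candidate matches its observed behaviour. The fitness function below is a dummy placeholder; the real attack scores candidates with rollouts of the trained policy.

```python
import numpy as np

# GA skeleton over candidate Grid World layouts. Only the evolutionary loop is
# shown; `fitness` should be replaced by a policy-consistency score.

GRID = 8

def fitness(layout, policy=None):
    # Placeholder: score how consistent `policy` rollouts on `layout` are with
    # the behaviour observed from the target agent (higher is better).
    return -np.abs(layout.sum() - GRID * GRID // 4)   # dummy objective

def crossover(a, b):
    mask = np.random.rand(GRID, GRID) < 0.5
    return np.where(mask, a, b)

def mutate(layout, rate=0.02):
    flip = np.random.rand(GRID, GRID) < rate
    return np.logical_xor(layout, flip).astype(layout.dtype)

pop = [np.random.randint(0, 2, (GRID, GRID)) for _ in range(50)]
for gen in range(100):
    scored = sorted(pop, key=fitness, reverse=True)
    parents = scored[:10]                             # elitist selection
    children = [mutate(crossover(parents[np.random.randint(10)],
                                 parents[np.random.randint(10)]))
                for _ in range(40)]
    pop = parents + children

best_layout = max(pop, key=fitness)
```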

Variational Graph Recurrent Neural Networks

Title Variational Graph Recurrent Neural Networks
Authors Ehsan Hajiramezanali, Arman Hasanzadeh, Nick Duffield, Krishna R Narayanan, Mingyuan Zhou, Xiaoning Qian
Abstract Representation learning over graph structured data has been mostly studied in static graph settings while efforts for modeling dynamic graphs are still scant. In this paper, we develop a novel hierarchical variational model that introduces additional latent random variables to jointly model the hidden states of a graph recurrent neural network (GRNN) to capture both topology and node attribute changes in dynamic graphs. We argue that the use of high-level latent random variables in this variational GRNN (VGRNN) can better capture potential variability observed in dynamic graphs as well as the uncertainty of node latent representation. With semi-implicit variational inference developed for this new VGRNN architecture (SI-VGRNN), we show that flexible non-Gaussian latent representations can further help dynamic graph analytic tasks. Our experiments with multiple real-world dynamic graph datasets demonstrate that SI-VGRNN and VGRNN consistently outperform the existing baseline and state-of-the-art methods by a significant margin in dynamic link prediction.
Tasks Dynamic Link Prediction, Link Prediction, Representation Learning
Published 2019-08-26
URL https://arxiv.org/abs/1908.09710v2
PDF https://arxiv.org/pdf/1908.09710v2.pdf
PWC https://paperswithcode.com/paper/variational-graph-recurrent-neural-networks
Repo
Framework
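
A heavily simplified sketch of one VGRNN time step: a prior on the latent conditioned on the previous hidden state, a posterior that also sees the current observation, a KL term between them, and a recurrent state update. Plain linear layers stand in for the graph convolutions of the actual model, and all dimensions are placeholders.

```python
import torch
import torch.nn as nn

# One time-step of a VGRNN-style update with linear layers in place of GNNs.

class VGRNNCell(nn.Module):
    def __init__(self, x_dim=16, h_dim=32, z_dim=8):
        super().__init__()
        self.prior = nn.Linear(h_dim, 2 * z_dim)              # -> (mu, logvar)
        self.post = nn.Linear(x_dim + h_dim, 2 * z_dim)
        self.rnn = nn.GRUCell(x_dim + z_dim, h_dim)

    def forward(self, x_t, h):
        mu_p, logvar_p = self.prior(h).chunk(2, dim=-1)
        mu_q, logvar_q = self.post(torch.cat([x_t, h], -1)).chunk(2, dim=-1)
        z = mu_q + torch.randn_like(mu_q) * torch.exp(0.5 * logvar_q)   # reparameterise
        kl = 0.5 * (logvar_p - logvar_q
                    + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp() - 1).sum(-1)
        h_next = self.rnn(torch.cat([x_t, z], -1), h)
        return z, h_next, kl.mean()

cell = VGRNNCell()
z, h, kl = cell(torch.randn(10, 16), torch.zeros(10, 32))     # 10 nodes, one snapshot
```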

Follow the Attention: Combining Partial Pose and Object Motion for Fine-Grained Action Detection

Title Follow the Attention: Combining Partial Pose and Object Motion for Fine-Grained Action Detection
Authors Mohammad Mahdi Kazemi Moghaddam, Ehsan Abbasnejad, Javen Shi
Abstract Retailers have long been searching for ways to effectively understand their customers’ behaviour in order to provide a smooth and pleasant shopping experience that attracts more customers every day and, consequently, maximises their revenue. Humans can flawlessly understand others’ behaviour by combining different visual cues, from activity to gestures and facial expressions. Empowering computer vision systems to do likewise, however, is still an open problem due to its intrinsic challenges as well as extrinsic difficulties such as the lack of publicly available data and unconstrained (in-the-wild) environment conditions. In this work, we focus on detecting the first and by far most crucial cue in behaviour analysis: human activity detection. To do so, we introduce a framework for integrating human pose and object motion to both temporally detect and classify activities in a fine-grained manner (very short and similar activities). We incorporate partial human pose and interaction with objects in a multi-stream neural network architecture to guide a spatiotemporal attention mechanism for more efficient activity recognition. To this end, in the absence of pose supervision, we propose to use a Generative Adversarial Network (GAN) to generate exact joint locations from noisy probability heat maps. Additionally, based on the intuition that complex actions demand more than one source of information to be identified, even by humans, we integrate a second stream of object motion into our network as prior knowledge, which we quantitatively show improves the recognition results. We empirically show the capability of our approach by achieving state-of-the-art results on the MERL shopping dataset. We further investigate the effectiveness of this approach on a new shopping dataset that we have collected to address existing shortcomings.
Tasks Action Detection, Activity Detection, Activity Recognition, Fine-Grained Action Detection
Published 2019-05-11
URL https://arxiv.org/abs/1905.04430v2
PDF https://arxiv.org/pdf/1905.04430v2.pdf
PWC https://paperswithcode.com/paper/follow-the-attention-combining-partial-pose
Repo
Framework
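
A rough sketch of the two-stream idea: pose heat maps drive a spatial attention over appearance features, and an object-motion stream is fused in before classification. The channel sizes and the simple concatenation fusion are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

# Pose-guided attention over appearance features plus an object-motion stream.

class PoseGuidedTwoStream(nn.Module):
    def __init__(self, app_ch=256, motion_ch=128, n_joints=13, n_classes=20):
        super().__init__()
        self.attn = nn.Conv2d(n_joints, 1, kernel_size=1)      # pose heat maps -> attention map
        self.head = nn.Linear(app_ch + motion_ch, n_classes)

    def forward(self, app_feat, pose_heatmaps, motion_feat):
        # app_feat: (B, C, H, W), pose_heatmaps: (B, J, H, W), motion_feat: (B, M)
        w = torch.sigmoid(self.attn(pose_heatmaps))             # (B, 1, H, W)
        pooled = (w * app_feat).mean(dim=(2, 3))                # attention-weighted pooling
        return self.head(torch.cat([pooled, motion_feat], dim=1))

logits = PoseGuidedTwoStream()(torch.randn(4, 256, 14, 14),
                               torch.randn(4, 13, 14, 14),
                               torch.randn(4, 128))
```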

An Improved Self-supervised GAN via Adversarial Training

Title An Improved Self-supervised GAN via Adversarial Training
Authors Ngoc-Trung Tran, Viet-Hung Tran, Ngoc-Bao Nguyen, Ngai-Man Cheung
Abstract We propose to improve unconditional Generative Adversarial Networks (GANs) by training a self-supervised learning task together with the adversarial process. In particular, we apply self-supervised learning via geometric transformations of the input images and assign pseudo-labels to these transformed images. (i) In addition to the GAN task, which distinguishes real data from generated (fake) samples, we train the discriminator to predict the correct pseudo-labels of real transformed samples (the classification task). Importantly, we find that simultaneously training the discriminator to separate the fake class from the pseudo-classes of real samples in the classification task improves the discriminator and subsequently provides better guidance for training the generator. (ii) The generator is trained to confuse the discriminator not only on the GAN task but also on the classification task. For the classification task, the generator tries to make the discriminator recognize the transformation of its output as one of the real transformed classes. In particular, we exploit the observation that when the generator creates samples that yield a similar (cross-entropy) loss to that of the real ones, training is more stable and the generator distribution tends to match the data distribution better. When integrating our techniques into a state-of-the-art Auto-Encoder (AE) based GAN model, they significantly boost the model’s performance and also establish new state-of-the-art Fréchet Inception Distance (FID) scores in the literature of unconditional GANs for the CIFAR-10 and STL-10 datasets.
Tasks
Published 2019-05-14
URL https://arxiv.org/abs/1905.05469v1
PDF https://arxiv.org/pdf/1905.05469v1.pdf
PWC https://paperswithcode.com/paper/an-improved-self-supervised-gan-via
Repo
Framework
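
The discriminator-side self-supervision can be sketched as a second head that predicts which of four rotations was applied to a real image, with a fifth "fake" class reserved for generated samples. The backbone below is a placeholder; only the two-head structure and the rotation pseudo-labels reflect the idea in the abstract.

```python
import torch
import torch.nn as nn

# Two-head discriminator: real/fake logit plus a rotation/fake classification head.

class SSDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
                                      nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
                                      nn.Flatten())
        self.gan_head = nn.LazyLinear(1)        # real vs. fake
        self.cls_head = nn.LazyLinear(5)        # 4 rotations + 1 "fake" class

    def forward(self, x):
        h = self.backbone(x)
        return self.gan_head(h), self.cls_head(h)

def rotate_batch(x):
    k = torch.randint(0, 4, (1,)).item()
    return torch.rot90(x, k, dims=(2, 3)), torch.full((x.size(0),), k, dtype=torch.long)

d = SSDiscriminator()
real = torch.randn(8, 3, 32, 32)
rot_real, rot_labels = rotate_batch(real)
gan_logit, cls_logit = d(rot_real)
loss_cls = nn.functional.cross_entropy(cls_logit, rot_labels)   # discriminator's SS task
```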

HG-Caffe: Mobile and Embedded Neural Network GPU (OpenCL) Inference Engine with FP16 Supporting

Title HG-Caffe: Mobile and Embedded Neural Network GPU (OpenCL) Inference Engine with FP16 Supporting
Authors Zhuoran Ji
Abstract Breakthroughs in the fields of deep learning and mobile system-on-chips are radically changing the way we use our smartphones. However, deep neural network inference is still a challenging task for edge AI devices due to the computational overhead on mobile CPUs and the severe drain on batteries. In this paper, we present a deep neural network inference engine named HG-Caffe, which supports GPUs with half precision. HG-Caffe provides up to 20 times speedup with GPUs compared to the original implementations. In addition to the speedup, the peak memory usage is also reduced to about 80%. With HG-Caffe, more innovative and fascinating mobile applications will be turned into reality.
Tasks
Published 2019-01-03
URL http://arxiv.org/abs/1901.00858v1
PDF http://arxiv.org/pdf/1901.00858v1.pdf
PWC https://paperswithcode.com/paper/hg-caffe-mobile-and-embedded-neural-network
Repo
Framework
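
HG-Caffe itself is a Caffe-based engine, but the FP16 idea is easy to illustrate in PyTorch: cast weights and inputs to half precision so inference moves roughly half the data. The model below is a placeholder; the cast-to-half pattern is the point.

```python
import torch
import torch.nn as nn

# Generic FP16 inference pattern (not HG-Caffe): cast model and input to half
# precision when a GPU is available and run under no_grad.

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device).eval()
x = torch.randn(1, 3, 224, 224, device=device)

if device.type == "cuda":                 # FP16 kernels are the GPU path
    model, x = model.half(), x.half()

with torch.no_grad():
    logits = model(x)                     # half-precision activations throughout on GPU
```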

Graph Space Embedding

Title Graph Space Embedding
Authors João Pereira, Albert Groen, Erik Stroes, Evgeni Levin
Abstract We propose Graph Space Embedding (GSE), a technique that maps the input into a space where interactions are implicitly encoded, with little computation required. We provide theoretical results on an optimal regime for the GSE, namely a feasibility region for its parameters, and demonstrate the experimental relevance of our findings. Next, we introduce a strategy to gain insight into which interactions are responsible for certain predictions, paving the way for a far more transparent model. In an empirical evaluation on a real-world clinical cohort containing patients with suspected coronary artery disease, the GSE achieves far better performance than traditional algorithms.
Tasks
Published 2019-07-31
URL https://arxiv.org/abs/1907.13443v1
PDF https://arxiv.org/pdf/1907.13443v1.pdf
PWC https://paperswithcode.com/paper/graph-space-embedding
Repo
Framework

“Can you say more about the location?” The Development of a Pedagogical Reference Resolution Agent

Title “Can you say more about the location?” The Development of a Pedagogical Reference Resolution Agent
Authors Maike Paetzel, Ramesh Manuvinakurike
Abstract In an increasingly globalized world, geographic literacy is crucial. In this paper, we present a collaborative two-player game to improve people’s ability to locate countries on the world map. We discuss two implementations of the game: first, we created a web-based version that can be played with the remote-controlled agent Nellie. With the knowledge gained from a large online data collection, we re-implemented the game so it can be played face-to-face with the Furhat robot Neil. Our analysis shows that participants not only found the game engaging to play, but also believe they gained lasting knowledge about the world map.
Tasks
Published 2019-09-03
URL https://arxiv.org/abs/1909.00945v1
PDF https://arxiv.org/pdf/1909.00945v1.pdf
PWC https://paperswithcode.com/paper/can-you-say-more-about-the-location-the
Repo
Framework

A deep active learning system for species identification and counting in camera trap images

Title A deep active learning system for species identification and counting in camera trap images
Authors Mohammad Sadegh Norouzzadeh, Dan Morris, Sara Beery, Neel Joshi, Nebojsa Jojic, Jeff Clune
Abstract Biodiversity conservation depends on accurate, up-to-date information about wildlife population distributions. Motion-activated cameras, also known as camera traps, are a critical tool for population surveys, as they are cheap and non-intrusive. However, extracting useful information from camera trap images is a cumbersome process: a typical camera trap survey may produce millions of images that require slow, expensive manual review. Consequently, critical information is often lost due to resource limitations, and critical conservation questions may be answered too slowly to support decision-making. Computer vision is poised to dramatically increase efficiency in image-based biodiversity surveys, and recent studies have harnessed deep learning techniques for automatic information extraction from camera trap images. However, the accuracy of results depends on the amount, quality, and diversity of the data available to train models, and the literature has focused on projects with millions of relevant, labeled training images. Many camera trap projects do not have a large set of labeled images and hence cannot benefit from existing machine learning techniques. Furthermore, even projects that do have labeled data from similar ecosystems have struggled to adopt deep learning methods because image classification models overfit to specific image backgrounds (i.e., camera locations). In this paper, we focus not on automating the labeling of camera trap images, but on accelerating this process. We combine the power of machine intelligence and human intelligence to build a scalable, fast, and accurate active learning system that minimizes the manual work required to identify and count animals in camera trap images. Our proposed scheme can match state-of-the-art accuracy on a 3.2 million image dataset with as few as 14,100 manual labels, which corresponds to reducing manual labeling effort by over 99.5%.
Tasks Active Learning, Decision Making, Image Classification
Published 2019-10-22
URL https://arxiv.org/abs/1910.09716v1
PDF https://arxiv.org/pdf/1910.09716v1.pdf
PWC https://paperswithcode.com/paper/a-deep-active-learning-system-for-species
Repo
Framework
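
The active-learning core, train, rank unlabeled images by uncertainty, query humans, retrain, can be sketched with a generic uncertainty-sampling loop. The logistic-regression classifier, random feature vectors, and batch size below are placeholders; the paper's full system additionally handles embedding, transfer learning, and counting.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Generic uncertainty-sampling loop: fit on the labelled pool, query the most
# uncertain unlabelled items, and repeat.

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum(axis=1)

X = np.random.randn(5000, 128)              # stand-in for image embeddings
y = np.random.randint(0, 10, 5000)          # stand-in for species labels ("oracle")
labelled = list(range(100))                 # small seed set
unlabelled = list(range(100, 5000))

for round_ in range(5):
    clf = LogisticRegression(max_iter=500).fit(X[labelled], y[labelled])
    probs = clf.predict_proba(X[unlabelled])
    query = np.argsort(-entropy(probs))[:100]           # most uncertain first
    picked = [unlabelled[i] for i in query]
    labelled += picked                                   # labels come from the oracle y
    unlabelled = [i for i in unlabelled if i not in set(picked)]
```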

motif2vec: Motif Aware Node Representation Learning for Heterogeneous Networks

Title motif2vec: Motif Aware Node Representation Learning for Heterogeneous Networks
Authors Manoj Reddy Dareddy, Mahashweta Das, Hao Yang
Abstract Recent years have witnessed a surge of interest in machine learning on graphs and networks, with applications ranging from vehicular network design to IoT traffic management to social network recommendations. Supervised machine learning tasks in networks, such as node classification and link prediction, require us to perform feature engineering, which is widely agreed to be the key to success in applied machine learning. Research efforts dedicated to representation learning, especially representation learning using deep learning, have shown us ways to automatically learn relevant features from vast amounts of potentially noisy, raw data. However, most of these methods are not adequate for heterogeneous information networks, which represent much of today’s real-world data. They cannot adequately preserve the structure and semantics of multiple types of nodes and links, capture higher-order heterogeneous connectivity patterns, or ensure coverage of the nodes for which representations are generated. We propose a novel efficient algorithm, motif2vec, that learns node representations or embeddings for heterogeneous networks. Specifically, we leverage higher-order, recurring, and statistically significant network connectivity patterns in the form of motifs to transform the original graph into motif graph(s), conduct biased random walks to efficiently explore higher-order neighborhoods, and then employ a heterogeneous skip-gram model to generate the embeddings. Unlike previous efforts that use different graph meta-structures to guide the random walk, we use graph motifs to transform the original network and preserve the heterogeneity. We evaluate the proposed algorithm on multiple real-world networks from diverse domains and against existing state-of-the-art methods on multi-class node classification and link prediction tasks, and demonstrate its consistent superiority over prior work.
Tasks Feature Engineering, Link Prediction, Node Classification, Representation Learning
Published 2019-08-22
URL https://arxiv.org/abs/1908.08227v1
PDF https://arxiv.org/pdf/1908.08227v1.pdf
PWC https://paperswithcode.com/paper/motif2vec-motif-aware-node-representation
Repo
Framework
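
The walk-then-skip-gram backbone can be sketched as below; the motif transformation itself (rebuilding the graph from motif co-occurrences) is the paper-specific step omitted here, so the plain uniform random walks on an untransformed example graph are only a stand-in.

```python
import random
import networkx as nx
from gensim.models import Word2Vec

# Generate random walks on a (motif-transformed) graph and feed them to a
# skip-gram model to obtain node embeddings.

def random_walks(G, num_walks=10, walk_len=40):
    walks = []
    for _ in range(num_walks):
        for node in G.nodes():
            walk = [node]
            while len(walk) < walk_len:
                nbrs = list(G.neighbors(walk[-1]))
                if not nbrs:
                    break
                walk.append(random.choice(nbrs))
            walks.append([str(v) for v in walk])
    return walks

G = nx.karate_club_graph()               # stand-in for the motif-transformed graph
walks = random_walks(G)
model = Word2Vec(walks, vector_size=64, window=5, sg=1, min_count=0, workers=2)
emb = {v: model.wv[str(v)] for v in G.nodes()}
```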