April 3, 2020

3375 words 16 mins read

Paper Group ANR 55



Infinite-Horizon Differentiable Model Predictive Control

Title Infinite-Horizon Differentiable Model Predictive Control
Authors Sebastian East, Marco Gallieri, Jonathan Masci, Jan Koutnik, Mark Cannon
Abstract This paper proposes a differentiable linear quadratic Model Predictive Control (MPC) framework for safe imitation learning. The infinite-horizon cost is enforced using a terminal cost function obtained from the discrete-time algebraic Riccati equation (DARE), so that the learned controller can be proven to be stabilizing in closed-loop. A central contribution is the derivation of the analytical derivative of the solution of the DARE, thereby allowing the use of differentiation-based learning methods. A further contribution is the structure of the MPC optimization problem: an augmented Lagrangian method ensures that the MPC optimization is feasible throughout training whilst enforcing hard constraints on state and input, and a pre-stabilizing controller ensures that the MPC solution and derivatives are accurate at each iteration. The learning capabilities of the framework are demonstrated in a set of numerical studies.
Tasks Imitation Learning
Published 2020-01-07
URL https://arxiv.org/abs/2001.02244v1
PDF https://arxiv.org/pdf/2001.02244v1.pdf
PWC https://paperswithcode.com/paper/infinite-horizon-differentiable-model-1
Repo
Framework
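
A minimal sketch in Python, assuming standard NumPy/SciPy, of the DARE-based terminal cost and pre-stabilizing LQR gain that the abstract describes; the matrices `A`, `B`, `Q`, `R` are illustrative placeholders, and the paper's analytical DARE derivative is not reproduced here.

```python
# Hedged sketch: infinite-horizon terminal cost from the DARE, plus the LQR gain
# that could serve as a pre-stabilizing controller. Not the paper's implementation.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # example discrete-time dynamics (placeholder)
B = np.array([[0.0], [0.1]])
Q = np.eye(2)                            # state cost
R = np.array([[0.1]])                    # input cost

# Terminal cost matrix P solves the DARE:
#   P = A'PA - A'PB (R + B'PB)^{-1} B'PA + Q
P = solve_discrete_are(A, B, Q, R)

# Corresponding infinite-horizon LQR gain (usable as a pre-stabilizing controller)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

x = np.array([1.0, -0.5])
terminal_cost = x @ P @ x                # x'Px, appended to the finite-horizon MPC objective
print(P, K, terminal_cost)
```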

The Past and Present of Imitation Learning: A Citation Chain Study

Title The Past and Present of Imitation Learning: A Citation Chain Study
Authors Nishanth Kumar
Abstract Imitation Learning is a promising area of active research. Over the last 30 years, Imitation Learning has advanced significantly and been used to solve difficult tasks ranging from Autonomous Driving to playing Atari games. In the course of this development, different methods for performing Imitation Learning have fallen into and out of favor. In this paper, I explore the development of these different methods and attempt to examine how the field has progressed. I focus my analysis on surveying 4 landmark papers that sequentially build upon each other to develop increasingly impressive Imitation Learning methods.
Tasks Atari Games, Autonomous Driving, Imitation Learning
Published 2020-01-08
URL https://arxiv.org/abs/2001.02328v1
PDF https://arxiv.org/pdf/2001.02328v1.pdf
PWC https://paperswithcode.com/paper/the-past-and-present-of-imitation-learning-a
Repo
Framework

1D CNN Based Network Intrusion Detection with Normalization on Imbalanced Data

Title 1D CNN Based Network Intrusion Detection with Normalization on Imbalanced Data
Authors Azizjon Meliboev, Jumabek Alikhanov, Wooseong Kim
Abstract Intrusion detection systems (IDS) play an essential role in protecting computing resources and data in computer networks from outside attacks. Recent IDSs face the challenge of improving their flexibility and efficiency against unexpected and unpredictable attacks. Deep neural networks (DNNs) are a popular machine learning technique for abstracting features and learning in complex systems. In this paper, we propose a deep learning approach for developing an efficient and flexible IDS using a one-dimensional Convolutional Neural Network (1D-CNN). Two-dimensional CNNs have shown remarkable performance in detecting objects in images in the computer vision area, while the 1D-CNN can be used for supervised learning on time-series data. We establish a machine learning model based on the 1D-CNN by serializing Transmission Control Protocol/Internet Protocol (TCP/IP) packets over a predetermined time range into an intrusion traffic model for the IDS, where normal and abnormal network traffic is categorized and labeled for supervised learning in the 1D-CNN. We evaluated our model on the UNSW_NB15 IDS dataset to show the effectiveness of our method. For a comparative performance study, machine-learning-based Random Forest (RF) and Support Vector Machine (SVM) models, in addition to 1D-CNNs with various network parameters and architectures, are examined. In each experiment, the models are run for up to 200 epochs with a learning rate of 0.0001 on imbalanced and balanced data. The 1D-CNN and its variant architectures outperform the classical machine learning classifiers, mainly because the CNN can extract high-level feature representations that capture the abstract form of the low-level feature sets of network traffic connections.
Tasks Intrusion Detection, Network Intrusion Detection, Time Series
Published 2020-03-01
URL https://arxiv.org/abs/2003.00476v2
PDF https://arxiv.org/pdf/2003.00476v2.pdf
PWC https://paperswithcode.com/paper/1d-cnn-based-network-intrusion-detection-with
Repo
Framework
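
A hedged PyTorch sketch of a small 1D-CNN traffic classifier in the spirit of the abstract; the layer sizes, sequence length of 196, and two-class setup are assumptions for illustration, not the authors' exact architecture.

```python
# Hedged sketch of a 1D-CNN intrusion classifier; all dimensions are illustrative.
import torch
import torch.nn as nn

class IDS1DCNN(nn.Module):
    def __init__(self, in_channels=1, seq_len=196, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (seq_len // 4), 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):                 # x: (batch, channels, seq_len)
        return self.classifier(self.features(x))

model = IDS1DCNN()
x = torch.randn(8, 1, 196)                # 8 serialized traffic windows (random stand-ins)
logits = model(x)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (8,)))
loss.backward()
```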

Survey of Network Intrusion Detection Methods from the Perspective of the Knowledge Discovery in Databases Process

Title Survey of Network Intrusion Detection Methods from the Perspective of the Knowledge Discovery in Databases Process
Authors Borja Molina-Coronado, Usue Mori, Alexander Mendiburu, José Miguel-Alonso
Abstract The identification of cyberattacks which target information and communication systems has been a focus of the research community for years. Network intrusion detection is a complex problem which presents a diverse set of challenges. Many attacks currently remain undetected, while newer ones emerge due to the proliferation of connected devices and the evolution of communication technology. In this survey, we review the methods that have been applied to network data with the purpose of developing an intrusion detector, but contrary to previous reviews in the area, we analyze them from the perspective of the Knowledge Discovery in Databases (KDD) process. As such, we discuss the techniques used for the capture, preparation and transformation of the data, as well as the data mining and evaluation methods. In addition, we present the characteristics and motivations behind the use of each of these techniques and propose more adequate and up-to-date taxonomies and definitions for intrusion detectors based on the terminology used in the area of data mining and KDD. Special importance is given to the evaluation procedures followed to assess the different detectors, discussing their applicability in current real networks. Finally, as a result of this literature review, we identify some open issues which will need to be considered in further research in the area of network security.
Tasks Intrusion Detection, Network Intrusion Detection
Published 2020-01-27
URL https://arxiv.org/abs/2001.09697v1
PDF https://arxiv.org/pdf/2001.09697v1.pdf
PWC https://paperswithcode.com/paper/survey-of-network-intrusion-detection-methods
Repo
Framework
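
As a loose illustration of the KDD stages the survey is organized around (data preparation, transformation, mining, and evaluation), here is a small scikit-learn pipeline on synthetic flow-like features; every modeling choice in it is an assumption made for demonstration.

```python
# Hedged illustration of the KDD stages as a scikit-learn pipeline on synthetic data.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))              # preparation: numeric "flow feature" stand-ins
y = (X[:, 0] + X[:, 1] > 1).astype(int)      # synthetic "attack" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

detector = Pipeline([
    ("transform", StandardScaler()),                                     # transformation stage
    ("mine", RandomForestClassifier(n_estimators=100, random_state=0)),  # data mining stage
])
detector.fit(X_tr, y_tr)
print(classification_report(y_te, detector.predict(X_te)))               # evaluation stage
```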

Asynchronous Policy Evaluation in Distributed Reinforcement Learning over Networks

Title Asynchronous Policy Evaluation in Distributed Reinforcement Learning over Networks
Authors Xingyu Sha, Jiaqi Zhang, Kaiqing Zhang, Keyou You, Tamer Başar
Abstract This paper proposes a \emph{fully asynchronous} scheme for policy evaluation in distributed reinforcement learning (DisRL) over peer-to-peer networks. Without any form of coordination, nodes can communicate with neighbors and compute their local variables using (possibly) delayed information at any time, in sharp contrast to asynchronous gossip schemes. Thus, the proposed scheme fully takes advantage of the distributed setting. We prove that our method converges at a linear rate $\mathcal{O}(c^k)$ where $c\in(0,1)$ and $k$ increases by one whenever any node updates, showing the computational advantage of reducing the amount of synchronization. Numerical experiments show that our method speeds up linearly w.r.t. the number of nodes and is robust to straggler nodes. To the best of our knowledge, this is the first theoretical analysis of asynchronous updates in DisRL, including the \emph{parallel RL} domain advocated by A3C.
Tasks
Published 2020-03-01
URL https://arxiv.org/abs/2003.00433v1
PDF https://arxiv.org/pdf/2003.00433v1.pdf
PWC https://paperswithcode.com/paper/asynchronous-policy-evaluation-in-distributed
Repo
Framework
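
A toy Python simulation, under simplifying assumptions (ring topology, linear value features, synthetic transitions), of nodes performing local TD(0) steps with possibly stale neighbor parameters; it illustrates the asynchronous setting only and is not the paper's algorithm or its convergence-rate construction.

```python
# Hedged toy simulation of asynchronous distributed policy evaluation.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, d, gamma, alpha = 4, 5, 0.9, 0.05
w_star = rng.normal(size=d)                             # target value parameters
theta = [rng.normal(size=d) for _ in range(n_nodes)]    # each node's local parameters
stale = [t.copy() for t in theta]                       # last copies communicated to neighbors

def td_step(w):
    """One TD(0) semi-gradient step on a synthetic transition with linear features."""
    phi, phi_next = rng.normal(size=d), rng.normal(size=d)
    r = phi @ w_star - gamma * (phi_next @ w_star)       # reward consistent with w_star
    delta = r + gamma * (phi_next @ w) - phi @ w
    return alpha * delta * phi

for k in range(3000):
    i = rng.integers(n_nodes)                            # a random node wakes up (no coordination)
    neighbors = [(i - 1) % n_nodes, (i + 1) % n_nodes]   # ring topology
    mix = np.mean([stale[j] for j in neighbors] + [theta[i]], axis=0)
    theta[i] = mix + td_step(mix)                        # local update using stale neighbor info
    if rng.random() < 0.5:                               # communication is delayed / intermittent
        stale[i] = theta[i].copy()

print(np.linalg.norm(np.mean(theta, axis=0) - w_star))   # consensus estimate approaches w_star
```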

Skeleton Based Action Recognition using a Stacked Denoising Autoencoder with Constraints of Privileged Information

Title Skeleton Based Action Recognition using a Stacked Denoising Autoencoder with Constraints of Privileged Information
Authors Zhize Wu, Thomas Weise, Le Zou, Fei Sun, Ming Tan
Abstract Recently, with the availability of cost-effective depth cameras coupled with real-time skeleton estimation, interest in skeleton-based human action recognition has been renewed. Most of the existing skeletal representation approaches use either the joint locations or the dynamics model. Differing from previous studies, we propose a new method called Denoising Autoencoder with Temporal and Categorical Constraints (DAE_CTC) to study the skeletal representation from the viewpoint of skeleton reconstruction. Based on the concept of learning under privileged information, we integrate action categories and temporal coordinates into a stacked denoising autoencoder in the training phase, to preserve category and temporal features while learning the hidden representation from a skeleton. Thus, we are able to improve the discriminative validity of the hidden representation. In order to mitigate the variation resulting from temporal misalignment, a new method of temporal registration, called Locally-Warped Sequence Registration (LWSR), is proposed for registering the sequences of inter- and intra-class actions. We finally represent the sequences using a Fourier Temporal Pyramid (FTP) representation and perform classification using a combination of LWSR registration, FTP representation, and a linear Support Vector Machine (SVM). The experimental results on three action data sets, namely MSR-Action3D, UTKinect-Action, and Florence3D-Action, show that our proposal performs better than many existing methods and comparably to the state of the art.
Tasks Denoising, Skeleton Based Action Recognition, Temporal Action Localization
Published 2020-03-12
URL https://arxiv.org/abs/2003.05684v1
PDF https://arxiv.org/pdf/2003.05684v1.pdf
PWC https://paperswithcode.com/paper/skeleton-based-action-recognition-using-a
Repo
Framework
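
A hedged PyTorch sketch of a denoising autoencoder with an auxiliary classification head, loosely mirroring the idea of imposing category constraints during training; dimensions and the loss weighting are illustrative assumptions.

```python
# Hedged sketch: denoising autoencoder with an auxiliary class head used only at training time.
import torch
import torch.nn as nn

class ConstrainedDAE(nn.Module):
    def __init__(self, in_dim=60, hidden=32, n_classes=20):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, in_dim)       # reconstruct the clean input
        self.aux_head = nn.Linear(hidden, n_classes)   # category constraint (training only)

    def forward(self, x_noisy):
        h = self.encoder(x_noisy)
        return self.decoder(h), self.aux_head(h)

model = ConstrainedDAE()
x_clean = torch.randn(16, 60)                          # e.g. flattened joint coordinates (stand-in)
x_noisy = x_clean + 0.1 * torch.randn_like(x_clean)    # corrupt the input (denoising objective)
labels = torch.randint(0, 20, (16,))

recon, logits = model(x_noisy)
loss = nn.MSELoss()(recon, x_clean) + 0.1 * nn.CrossEntropyLoss()(logits, labels)
loss.backward()
```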

Causal Discovery from Incomplete Data: A Deep Learning Approach

Title Causal Discovery from Incomplete Data: A Deep Learning Approach
Authors Yuhao Wang, Vlado Menkovski, Hao Wang, Xin Du, Mykola Pechenizkiy
Abstract As systems become more autonomous with the development of artificial intelligence, it is important to discover causal knowledge from observational sensory inputs. By encoding a series of cause-effect relations between events, causal networks can facilitate the prediction of effects from a given action and the analysis of their underlying data generation mechanism. However, missing data are ubiquitous in practical scenarios. Directly applying existing causal discovery algorithms to partially observed data may lead to incorrect inferences. To alleviate this issue, we propose a deep learning framework, dubbed Imputated Causal Learning (ICL), to perform iterative missing-data imputation and causal structure discovery. Through extensive simulations on both synthetic and real data, we show that ICL can outperform state-of-the-art methods under different missing-data mechanisms.
Tasks Causal Discovery, Imputation
Published 2020-01-15
URL https://arxiv.org/abs/2001.05343v1
PDF https://arxiv.org/pdf/2001.05343v1.pdf
PWC https://paperswithcode.com/paper/causal-discovery-from-incomplete-data-a-deep
Repo
Framework
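
A minimal sketch, assuming scikit-learn, of the impute-then-discover idea: missing entries are filled with `IterativeImputer` and a crude dependency skeleton is read off the precision matrix. The skeleton step is only a stand-in for the paper's causal structure learner (which alternates the two steps), and the data are synthetic.

```python
# Hedged sketch: impute missing values, then estimate a dependency skeleton.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
X[:, 1] += 2.0 * X[:, 0]                      # synthetic dependency X0 -> X1
X[:, 3] += 1.5 * X[:, 2]                      # synthetic dependency X2 -> X3
X_obs = X.copy()
X_obs[rng.random(X.shape) < 0.1] = np.nan     # 10% of entries missing at random

# Step 1: impute the missing entries (the paper alternates this with discovery).
X_imp = IterativeImputer(random_state=0).fit_transform(X_obs)

# Step 2: estimate an undirected dependency skeleton from the completed data.
precision = np.linalg.inv(np.cov(X_imp, rowvar=False))
skeleton = (np.abs(precision) > 0.5).astype(int)
np.fill_diagonal(skeleton, 0)
print(skeleton)   # non-zero off-diagonals flag the (0,1) and (2,3) dependencies
```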

Inverse Feature Learning: Feature learning based on Representation Learning of Error

Title Inverse Feature Learning: Feature learning based on Representation Learning of Error
Authors Behzad Ghazanfari, Fatemeh Afghah, MohammadTaghi Hajiaghayi
Abstract This paper proposes inverse feature learning, a novel supervised feature learning technique that learns a set of high-level features for classification based on an error representation approach. The key contribution of this method is to learn the representation of error as high-level features, whereas current representation learning methods interpret error through loss functions that are computed as a function of the differences between the true labels and the predicted ones. One advantage of such a learning method is that the learned features for each class are independent of the learned features for the other classes; therefore, the method can learn new classes without retraining. Error representation learning can also help with generalization and reduce the chance of over-fitting by adding a set of impactful features, which capture the relationships between each instance and the different classes through an error generation and analysis process, to the original data set. This method can be particularly effective for data sets in which the instances of each class have diverse feature representations, as well as for data sets with imbalanced classes. The experimental results show that the proposed method yields significantly better performance than state-of-the-art classification techniques on several popular data sets. We hope this paper can open a new path toward utilizing the proposed perspective of error representation learning in different feature learning domains.
Tasks Representation Learning
Published 2020-03-08
URL https://arxiv.org/abs/2003.03689v1
PDF https://arxiv.org/pdf/2003.03689v1.pdf
PWC https://paperswithcode.com/paper/inverse-feature-learning-feature-learning
Repo
Framework
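
One possible, loosely interpreted illustration of appending class-relationship features to a data set; the distance-to-class-prototype features below are an assumption made for demonstration, not the authors' procedure.

```python
# Hedged, loose illustration: augment raw features with per-class discrepancy features.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# One crude reading of "error features": how far each instance sits from each
# class prototype (computed on the training split only), appended to the raw features.
centroids = np.stack([X_tr[y_tr == c].mean(axis=0) for c in np.unique(y_tr)])

def augment(Z):
    dists = np.linalg.norm(Z[:, None, :] - centroids[None, :, :], axis=2)
    return np.hstack([Z, dists])

clf = LogisticRegression(max_iter=1000).fit(augment(X_tr), y_tr)
print(clf.score(augment(X_te), y_te))
```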

Practical Approach of Knowledge Management in Medical Science

Title Practical Approach of Knowledge Management in Medical Science
Authors Mahdi Bohlouli, Patrick Uhr, Fabian Merges, Sanaz Mohammad Hassani, Madjid Fathi
Abstract Knowledge organization, infrastructure, and knowledge-based activities are all subjects that help in the creation of business strategies for the new enterprise. In this paper, the basics of knowledge-based systems are studied first. Practical issues and challenges of Knowledge Management (KM) implementations are then illustrated. Finally, a comparison of different knowledge-based projects is presented, along with summarized information on their implementation, techniques, and results. Most of these projects are in the field of medical science. Based on our study and evaluation of different KM projects, we conclude that KM is used in every science, industry, and business, but its importance in medical science and assisted-living projects is especially highlighted today by most research institutes. Most medical centers are interested in using knowledge-based services, such as portals and knowledge-learning techniques, for their future innovations and support.
Tasks
Published 2020-01-16
URL https://arxiv.org/abs/2001.09795v1
PDF https://arxiv.org/pdf/2001.09795v1.pdf
PWC https://paperswithcode.com/paper/practical-approach-of-knowledge-management-in
Repo
Framework

MREC: a fast and versatile framework for aligning and matching point clouds with applications to single cell molecular data

Title MREC: a fast and versatile framework for aligning and matching point clouds with applications to single cell molecular data
Authors Andrew J. Blumberg, Mathieu Carriere, Michael A. Mandell, Raul Rabadan, Soledad Villar
Abstract Comparing and aligning large datasets is a pervasive problem occurring across many different knowledge domains. We introduce and study MREC, a recursive decomposition algorithm for computing matchings between data sets. The basic idea is to partition the data, match the partitions, and then recursively match the points within each pair of identified partitions. The matching itself is done using black box matching procedures that are too expensive to run on the entire data set. Using an absolute measure of the quality of a matching, the framework supports optimization over parameters including partitioning procedures and matching algorithms. By design, MREC can be applied to extremely large data sets. We analyze the procedure to describe when we can expect it to work well and demonstrate its flexibility and power by applying it to a number of alignment problems arising in the analysis of single cell molecular data.
Tasks
Published 2020-01-06
URL https://arxiv.org/abs/2001.01666v3
PDF https://arxiv.org/pdf/2001.01666v3.pdf
PWC https://paperswithcode.com/paper/mrec-a-fast-and-versatile-framework-for
Repo
Framework
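
A hedged Python sketch of the partition, match-the-partitions, then recurse idea; k-means partitioning and the Hungarian algorithm as the base matcher are illustrative choices, not necessarily MREC's actual components.

```python
# Hedged sketch of recursive partition-and-match between two point clouds.
import numpy as np
from sklearn.cluster import KMeans
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def recursive_match(X, Y, k=4, min_size=32):
    """Return a list of (index_in_X, index_in_Y) pairs."""
    if len(X) <= min_size or len(Y) <= min_size:
        # Base case: run the (expensive) exact matcher only on small blocks.
        rows, cols = linear_sum_assignment(cdist(X, Y))
        return list(zip(rows, cols))
    lx = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    ly = KMeans(n_clusters=k, n_init=10, random_state=0).fit(Y)
    # Match the partitions by their centroids, then recurse inside matched pairs.
    crows, ccols = linear_sum_assignment(cdist(lx.cluster_centers_, ly.cluster_centers_))
    pairs = []
    for cx, cy in zip(crows, ccols):
        ix = np.where(lx.labels_ == cx)[0]
        iy = np.where(ly.labels_ == cy)[0]
        sub = recursive_match(X[ix], Y[iy], k, min_size)
        pairs += [(ix[a], iy[b]) for a, b in sub]
    return pairs

X = np.random.default_rng(0).normal(size=(200, 2))
Y = X + 0.05 * np.random.default_rng(1).normal(size=(200, 2))
print(len(recursive_match(X, Y)))
```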

Regularization via Structural Label Smoothing

Title Regularization via Structural Label Smoothing
Authors Weizhi Li, Gautam Dasarathy, Visar Berisha
Abstract Regularization is an effective way to promote the generalization performance of machine learning models. In this paper, we focus on label smoothing, a form of output distribution regularization that prevents overfitting of a neural network by softening the ground-truth labels in the training data in an attempt to penalize overconfident outputs. Existing approaches typically use cross-validation to impose this smoothing, which is uniform across all training data. In this paper, we show that such label smoothing imposes a quantifiable bias in the Bayes error rate of the training data, with regions of the feature space with high overlap and low marginal likelihood having a lower bias and regions of low overlap and high marginal likelihood having a higher bias. These theoretical results motivate a simple objective function for data-dependent smoothing to mitigate the potential negative consequences of the operation while maintaining its desirable properties as a regularizer. We call this approach Structural Label Smoothing (SLS). We implement SLS and empirically validate it on synthetic, Higgs, SVHN, CIFAR-10, and CIFAR-100 datasets. The results confirm our theoretical insights and demonstrate the effectiveness of the proposed method in comparison to traditional label smoothing.
Tasks
Published 2020-01-07
URL https://arxiv.org/abs/2001.01900v1
PDF https://arxiv.org/pdf/2001.01900v1.pdf
PWC https://paperswithcode.com/paper/regularization-via-structural-label-smoothing
Repo
Framework
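
A small PyTorch sketch contrasting classic uniform label smoothing with a per-example smoothing factor, which is the quantity a data-dependent scheme such as SLS would set; how `eps` is chosen below is purely illustrative.

```python
# Hedged sketch: cross-entropy against smoothed labels, with scalar or per-example eps.
import torch
import torch.nn.functional as F

def smoothed_cross_entropy(logits, targets, eps):
    """Cross-entropy against labels smoothed toward the uniform distribution.
    eps may be a scalar (uniform smoothing) or a per-example sequence."""
    n_classes = logits.size(1)
    eps = torch.as_tensor(eps, dtype=logits.dtype).reshape(-1, 1)
    one_hot = F.one_hot(targets, n_classes).to(logits.dtype)
    soft = (1.0 - eps) * one_hot + eps / n_classes
    return -(soft * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

logits = torch.randn(4, 10, requires_grad=True)
targets = torch.randint(0, 10, (4,))
uniform = smoothed_cross_entropy(logits, targets, 0.1)                 # classic label smoothing
per_example = smoothed_cross_entropy(logits, targets, [0.0, 0.05, 0.1, 0.2])
(uniform + per_example).backward()
```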

Social-WaGDAT: Interaction-aware Trajectory Prediction via Wasserstein Graph Double-Attention Network

Title Social-WaGDAT: Interaction-aware Trajectory Prediction via Wasserstein Graph Double-Attention Network
Authors Jiachen Li, Hengbo Ma, Zhihao Zhang, Masayoshi Tomizuka
Abstract Effective understanding of the environment and accurate trajectory prediction of surrounding dynamic obstacles are indispensable for intelligent mobile systems (like autonomous vehicles and social robots) to achieve safe and high-quality planning when they navigate in highly interactive and crowded scenarios. Due to frequent interactions and uncertainty in the scene evolution, the prediction system should enable relational reasoning on different entities and provide a distribution of future trajectories for each agent. In this paper, we propose a generic generative neural system (called Social-WaGDAT) for multi-agent trajectory prediction, which takes a step toward explicit interaction modeling by incorporating relational inductive biases with a dynamic graph representation and leverages both trajectory and scene context information. We also employ an efficient kinematic constraint layer for vehicle trajectory prediction, which not only ensures physical feasibility but also enhances model performance. The proposed system is evaluated on three public benchmark datasets for trajectory prediction, where the agents cover pedestrians, cyclists and on-road vehicles. The experimental results demonstrate that our model achieves better performance than various baseline approaches in terms of prediction accuracy.
Tasks Autonomous Vehicles, Relational Reasoning, Trajectory Prediction
Published 2020-02-14
URL https://arxiv.org/abs/2002.06241v1
PDF https://arxiv.org/pdf/2002.06241v1.pdf
PWC https://paperswithcode.com/paper/social-wagdat-interaction-aware-trajectory
Repo
Framework
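
A hedged PyTorch sketch of a kinematic rollout layer of the kind the abstract mentions: the network would predict accelerations and yaw rates, and positions are obtained by integrating a unicycle model so trajectories stay physically plausible; the motion model and step size are assumptions, not the paper's actual constraint layer.

```python
# Hedged sketch: differentiable kinematic rollout from predicted controls to positions.
import torch

def kinematic_rollout(state, accel, yaw_rate, dt=0.1):
    """state: (batch, 4) = [x, y, v, heading]; accel/yaw_rate: (batch, T)."""
    x, y, v, th = state.unbind(dim=1)
    positions = []
    for t in range(accel.size(1)):
        v = v + accel[:, t] * dt                 # speed update
        th = th + yaw_rate[:, t] * dt            # heading update
        x = x + v * torch.cos(th) * dt
        y = y + v * torch.sin(th) * dt
        positions.append(torch.stack([x, y], dim=1))
    return torch.stack(positions, dim=1)         # (batch, T, 2)

state = torch.tensor([[0.0, 0.0, 5.0, 0.0]])     # start at origin, 5 m/s heading east
accel = torch.zeros(1, 20, requires_grad=True)   # would come from the prediction network
yaw_rate = torch.full((1, 20), 0.1, requires_grad=True)
traj = kinematic_rollout(state, accel, yaw_rate)
traj.sum().backward()                            # the layer is differentiable end-to-end
```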

Local Implicit Grid Representations for 3D Scenes

Title Local Implicit Grid Representations for 3D Scenes
Authors Chiyu Max Jiang, Avneesh Sud, Ameesh Makadia, Jingwei Huang, Matthias Nießner, Thomas Funkhouser
Abstract Shape priors learned from data are commonly used to reconstruct 3D objects from partial or noisy data. Yet no such shape priors are available for indoor scenes, since typical 3D autoencoders cannot handle their scale, complexity, or diversity. In this paper, we introduce Local Implicit Grid Representations, a new 3D shape representation designed for scalability and generality. The motivating idea is that most 3D surfaces share geometric details at some scale – i.e., at a scale smaller than an entire object and larger than a small patch. We train an autoencoder to learn an embedding of local crops of 3D shapes at that size. Then, we use the decoder as a component in a shape optimization that solves for a set of latent codes on a regular grid of overlapping crops such that an interpolation of the decoded local shapes matches a partial or noisy observation. We demonstrate the value of this proposed approach for 3D surface reconstruction from sparse point observations, showing significantly better results than alternative approaches.
Tasks 3D Shape Representation
Published 2020-03-19
URL https://arxiv.org/abs/2003.08981v1
PDF https://arxiv.org/pdf/2003.08981v1.pdf
PWC https://paperswithcode.com/paper/local-implicit-grid-representations-for-3d
Repo
Framework
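
A minimal PyTorch sketch of decoder-based latent optimization on a grid of cells, the general mechanism the abstract describes; the decoder here is an untrained stand-in, and the 2D setup is a simplification of the paper's 3D local implicit grids.

```python
# Hedged sketch: optimize per-cell latent codes so a frozen local decoder fits observations.
import torch
import torch.nn as nn

latent_dim, grid = 8, 4                                    # 4x4 grid of cells in 2D
decoder = nn.Sequential(nn.Linear(latent_dim + 2, 64), nn.ReLU(), nn.Linear(64, 1))
for p in decoder.parameters():
    p.requires_grad_(False)                                # the decoder stays frozen

codes = torch.zeros(grid * grid, latent_dim, requires_grad=True)
pts = torch.rand(256, 2)                                   # observed point locations in [0,1]^2
sdf_obs = torch.zeros(256, 1)                              # e.g. points lie on the surface (sdf = 0)

cell = (pts * grid).long().clamp(max=grid - 1)             # which cell each point falls in
cell_idx = cell[:, 0] * grid + cell[:, 1]
local = pts * grid - cell.float()                          # coordinates local to the cell

opt = torch.optim.Adam([codes], lr=1e-2)
for step in range(200):
    pred = decoder(torch.cat([codes[cell_idx], local], dim=1))
    loss = ((pred - sdf_obs) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print(loss.item())
```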

Dynamic Coronary Roadmapping via Catheter Tip Tracking in X-ray Fluoroscopy with Deep Learning Based Bayesian Filtering

Title Dynamic Coronary Roadmapping via Catheter Tip Tracking in X-ray Fluoroscopy with Deep Learning Based Bayesian Filtering
Authors Hua Ma, Ihor Smal, Joost Daemen, Theo van Walsum
Abstract Percutaneous coronary intervention (PCI) is typically performed with image guidance using X-ray angiograms in which coronary arteries are opacified with X-ray opaque contrast agents. Interventional cardiologists typically navigate instruments using non-contrast-enhanced fluoroscopic images, since higher use of contrast agents increases the risk of kidney failure. When using fluoroscopic images, the interventional cardiologist needs to rely on a mental anatomical reconstruction. This paper reports on the development of a novel dynamic coronary roadmapping approach for improving visual feedback and reducing contrast use during PCI. The approach compensates for cardiac- and respiratory-induced vessel motion by ECG alignment and catheter tip tracking in X-ray fluoroscopy, respectively. In particular, for accurate and robust tracking of the catheter tip, we propose a new deep learning based Bayesian filtering method that integrates the detection outcome of a convolutional neural network and the motion estimation between frames using a particle filtering framework. The proposed roadmapping and tracking approaches were validated on clinical X-ray images, achieving accurate performance on both catheter tip tracking and dynamic coronary roadmapping experiments. In addition, our approach runs in real time on a computer with a single GPU and has the potential to be integrated into the clinical workflow of PCI procedures, providing cardiologists with visual guidance during interventions without the need for additional contrast agent.
Tasks Motion Estimation
Published 2020-01-11
URL https://arxiv.org/abs/2001.03801v1
PDF https://arxiv.org/pdf/2001.03801v1.pdf
PWC https://paperswithcode.com/paper/dynamic-coronary-roadmapping-via-catheter-tip
Repo
Framework
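
A toy NumPy particle filter for 2D tip tracking: particles are propagated by a simple motion model and re-weighted by a Gaussian likelihood map standing in for a CNN detection heatmap; all quantities are illustrative, and this is not the paper's deep learning based Bayesian filter.

```python
# Hedged toy particle filter: predict with a random-walk model, update with a detection heatmap.
import numpy as np

rng = np.random.default_rng(0)
H, W, n_particles = 64, 64, 500
particles = rng.uniform([0, 0], [H, W], size=(n_particles, 2))
weights = np.full(n_particles, 1.0 / n_particles)

def likelihood_map(true_pos, sigma=3.0):
    """Gaussian blob around the (unknown to the filter) true tip position,
    mimicking a CNN detection heatmap."""
    ys, xs = np.mgrid[0:H, 0:W]
    return np.exp(-((ys - true_pos[0]) ** 2 + (xs - true_pos[1]) ** 2) / (2 * sigma ** 2))

true_pos = np.array([20.0, 30.0])
for frame in range(30):
    true_pos += rng.normal(0, 1.0, size=2)                   # the tip moves between frames
    particles += rng.normal(0, 2.0, size=particles.shape)    # predict: random-walk motion model
    particles = np.clip(particles, 0, [H - 1, W - 1])
    heat = likelihood_map(true_pos)
    ij = particles.astype(int)
    weights *= heat[ij[:, 0], ij[:, 1]] + 1e-12              # update with the detector likelihood
    weights /= weights.sum()
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:         # resample when weights degenerate
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles, weights = particles[idx], np.full(n_particles, 1.0 / n_particles)

estimate = (weights[:, None] * particles).sum(axis=0)
print(estimate, true_pos)
```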

Reanalysis of Variance Reduced Temporal Difference Learning

Title Reanalysis of Variance Reduced Temporal Difference Learning
Authors Tengyu Xu, Zhe Wang, Yi Zhou, Yingbin Liang
Abstract Temporal difference (TD) learning is a popular algorithm for policy evaluation in reinforcement learning, but vanilla TD can substantially suffer from the inherent optimization variance. A variance reduced TD (VRTD) algorithm was proposed by Korda and La (2015), which applies the variance reduction technique directly to online TD learning with Markovian samples. In this work, we first point out technical errors in the analysis of VRTD in Korda and La (2015), and then provide a mathematically solid analysis of the non-asymptotic convergence of VRTD and its variance reduction performance. We show that VRTD is guaranteed to converge to a neighborhood of the fixed-point solution of TD at a linear convergence rate. Furthermore, the variance error (for both i.i.d.\ and Markovian sampling) and the bias error (for Markovian sampling) of VRTD are significantly reduced by the batch size of variance reduction in comparison to those of vanilla TD. As a result, the overall computational complexity of VRTD to attain a given accurate solution outperforms that of TD under Markovian sampling, and outperforms that of TD under i.i.d.\ sampling for a sufficiently small condition number.
Tasks
Published 2020-01-07
URL https://arxiv.org/abs/2001.01898v2
PDF https://arxiv.org/pdf/2001.01898v2.pdf
PWC https://paperswithcode.com/paper/reanalysis-of-variance-reduced-temporal-1
Repo
Framework
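
A hedged NumPy sketch of an SVRG-style variance-reduced TD(0) update on a synthetic linear-features problem; the toy data, batch size, and step size are assumptions, and this is not a reproduction of the VRTD algorithm or its analysis.

```python
# Hedged sketch: SVRG-style variance reduction applied to semi-gradient TD(0).
import numpy as np

rng = np.random.default_rng(0)
d, gamma, alpha, batch = 5, 0.9, 0.1, 200
w_star = rng.normal(size=d)
# Pre-sample a batch of transitions with linear features.
phi = rng.normal(size=(batch, d))
phi_next = rng.normal(size=(batch, d))
r = phi @ w_star - gamma * (phi_next @ w_star)        # rewards consistent with w_star

def td_update(w, i):
    """Semi-gradient TD(0) direction for transition i."""
    delta = r[i] + gamma * phi_next[i] @ w - phi[i] @ w
    return delta * phi[i]

w = np.zeros(d)
for epoch in range(50):
    w_ref = w.copy()
    mean_ref = np.mean([td_update(w_ref, i) for i in range(batch)], axis=0)
    for _ in range(batch):                             # inner loop with variance-reduced steps
        i = rng.integers(batch)
        w = w + alpha * (td_update(w, i) - td_update(w_ref, i) + mean_ref)
print(np.linalg.norm(w - w_star))                      # approaches the batch TD fixed point
```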