Paper Group ANR 256
Semi-Supervised Learning on Graphs with Feature-Augmented Graph Basis Functions
Title | Semi-Supervised Learning on Graphs with Feature-Augmented Graph Basis Functions |
Authors | Wolfgang Erb |
Abstract | For semi-supervised learning on graphs, we study how initial kernels in a supervised learning regime can be augmented with additional information from known priors or from unsupervised learning outputs. These augmented kernels are constructed in a simple update scheme based on the Schur-Hadamard product of the kernel with additional feature kernels. As generators of the positive definite kernels we will focus on graph basis functions (GBF) that allow us to include geometric information of the graph via the graph Fourier transform. Using a regularized least squares (RLS) approach for machine learning, we will test the derived augmented kernels for the classification of data on graphs. |
Tasks | |
Published | 2020-03-17 |
URL | https://arxiv.org/abs/2003.07646v1 |
PDF | https://arxiv.org/pdf/2003.07646v1.pdf |
PWC | https://paperswithcode.com/paper/semi-supervised-learning-on-graphs-with-1 |
Repo | |
Framework | |
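For reference, a minimal NumPy sketch of the update scheme described in the abstract: the Schur-Hadamard (element-wise) product of a base kernel with a feature kernel, followed by regularized least squares (RLS) on the labeled nodes. The GBF construction via the graph Fourier transform is not reproduced here; random positive definite matrices stand in for the kernels, and all names are illustrative.

```python
import numpy as np

def augment_kernel(K_base, K_feature):
    """Schur-Hadamard (element-wise) product of two positive definite kernels.
    By the Schur product theorem the result is again positive definite."""
    return K_base * K_feature

def rls_fit_predict(K, labeled_idx, y_labeled, lam=1e-2):
    """Regularized least squares (kernel ridge) on the labeled nodes,
    then prediction of scores for every node of the graph."""
    K_ll = K[np.ix_(labeled_idx, labeled_idx)]
    alpha = np.linalg.solve(K_ll + lam * np.eye(len(labeled_idx)), y_labeled)
    return K[:, labeled_idx] @ alpha

# Toy usage: random PSD matrices stand in for a GBF kernel and a feature kernel.
rng = np.random.default_rng(0)
n = 20
A = rng.standard_normal((n, n)); K_gbf = A @ A.T
B = rng.standard_normal((n, n)); K_feat = B @ B.T
K_aug = augment_kernel(K_gbf, K_feat)
labeled = np.array([0, 3, 7, 12])
y = np.array([1.0, -1.0, 1.0, -1.0])          # binary labels as +/-1
predicted_labels = np.sign(rls_fit_predict(K_aug, labeled, y))
```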
CMOS-Free Multilayer Perceptron Enabled by Four-Terminal MTJ Device
Title | CMOS-Free Multilayer Perceptron Enabled by Four-Terminal MTJ Device |
Authors | Wesley H. Brigner, Naimul Hassan, Xuan Hu, Christopher H. Bennett, Felipe Garcia-Sanchez, Matthew J. Marinella, Jean Anne C. Incorvia, Joseph S. Friedman |
Abstract | Neuromorphic computing promises revolutionary improvements over conventional systems for applications that process unstructured information. To fully realize this potential, neuromorphic systems should exploit the biomimetic behavior of emerging nanodevices. In particular, exceptional opportunities are provided by the non-volatility and analog capabilities of spintronic devices. While spintronic devices have previously been proposed that emulate neurons and synapses, complementary metal-oxide-semiconductor (CMOS) devices are required to implement multilayer spintronic perceptron crossbars. This work therefore proposes a new spintronic neuron that enables purely spintronic multilayer perceptrons, eliminating the need for CMOS circuitry and simplifying fabrication. |
Tasks | |
Published | 2020-02-03 |
URL | https://arxiv.org/abs/2002.00862v1 |
PDF | https://arxiv.org/pdf/2002.00862v1.pdf |
PWC | https://paperswithcode.com/paper/cmos-free-multilayer-perceptron-enabled-by |
Repo | |
Framework | |
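The contribution here is a device, so a software sketch can only show the computation a perceptron crossbar performs: one matrix-vector product per layer followed by the neuron's transfer function. In the sketch below a generic clipped-linear activation stands in for the proposed spintronic neuron's response; the conductance values and the activation are purely illustrative.

```python
import numpy as np

def crossbar_layer(x, G, activation):
    """One crossbar layer: the conductance matrix G plays the role of the
    synaptic weights, so the layer computes a matrix-vector product followed
    by the neuron transfer function."""
    return activation(G @ x)

def clipped_linear(v, lo=0.0, hi=1.0):
    # Generic saturating response; a stand-in, not the actual MTJ neuron model.
    return np.clip(v, lo, hi)

# Two-layer perceptron built from two crossbars (random weights for illustration).
rng = np.random.default_rng(1)
G1 = rng.uniform(-1.0, 1.0, size=(8, 4))   # hidden-layer conductances
G2 = rng.uniform(-1.0, 1.0, size=(2, 8))   # output-layer conductances
x = rng.uniform(0.0, 1.0, size=4)
hidden = crossbar_layer(x, G1, clipped_linear)
output = crossbar_layer(hidden, G2, clipped_linear)
```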
Joint Goal and Strategy Inference across Heterogeneous Demonstrators via Reward Network Distillation
Title | Joint Goal and Strategy Inference across Heterogeneous Demonstrators via Reward Network Distillation |
Authors | Letian Chen, Rohan Paleja, Muyleng Ghuy, Matthew Gombolay |
Abstract | Reinforcement learning (RL) has achieved tremendous success as a general framework for learning how to make decisions. However, this success relies on the interactive hand-tuning of a reward function by RL experts. On the other hand, inverse reinforcement learning (IRL) seeks to learn a reward function from readily obtained human demonstrations. Yet, IRL suffers from two major limitations: 1) reward ambiguity - there are an infinite number of possible reward functions that could explain an expert’s demonstration, and 2) heterogeneity - human experts adopt varying strategies and preferences, which makes learning from multiple demonstrators difficult due to the common assumption that demonstrators seek to maximize the same reward. In this work, we propose a method to jointly infer a task goal and humans’ strategic preferences via network distillation. This approach enables us to distill a robust task reward (addressing reward ambiguity) and to model each strategy’s objective (handling heterogeneity). We demonstrate that our algorithm can better recover the task and strategy rewards and imitate the strategies in two simulated tasks and a real-world table tennis task. |
Tasks | |
Published | 2020-01-02 |
URL | https://arxiv.org/abs/2001.00503v2 |
PDF | https://arxiv.org/pdf/2001.00503v2.pdf |
PWC | https://paperswithcode.com/paper/joint-goal-and-strategy-inference-across |
Repo | |
Framework | |
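A minimal PyTorch sketch of the reward decomposition the abstract describes: a shared task-reward network plus one strategy-reward network per demonstrator, so the reward attributed to demonstrator i is r_task(s) + r_strategy_i(s). The IRL training loop and the distillation loss are omitted, and all class names are hypothetical.

```python
import torch
import torch.nn as nn

class RewardNet(nn.Module):
    """Small state-conditioned reward model."""
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, s):
        return self.net(s)

class JointReward(nn.Module):
    """Shared task reward plus one strategy-specific reward per demonstrator.
    Distilling the shared component across demonstrators is meant to isolate
    the common task objective from individual strategic preferences."""
    def __init__(self, state_dim, n_demonstrators):
        super().__init__()
        self.task = RewardNet(state_dim)
        self.strategies = nn.ModuleList(
            [RewardNet(state_dim) for _ in range(n_demonstrators)])

    def forward(self, s, demo_id):
        return self.task(s) + self.strategies[demo_id](s)

# Usage: reward for a batch of states attributed to demonstrator 2.
model = JointReward(state_dim=10, n_demonstrators=3)
states = torch.randn(32, 10)
r = model(states, demo_id=2)
```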
Optimising HEP parameter fits via Monte Carlo weight derivative regression
Title | Optimising HEP parameter fits via Monte Carlo weight derivative regression |
Authors | Andrea Valassi |
Abstract | HEP event selection is traditionally considered a binary classification problem, involving the dichotomous categories of signal and background. In distribution fits for particle masses or couplings, however, signal events are not all equivalent, as the signal differential cross section has different sensitivities to the measured parameter in different regions of phase space. In this paper, I describe a mathematical framework for the evaluation and optimization of HEP parameter fits, where this sensitivity is defined on an event-by-event basis, and for MC events it is modeled in terms of their MC weight derivatives with respect to the measured parameter. Minimising the statistical error on a measurement implies the need to resolve (i.e. separate) events with different sensitivities, which ultimately represents a non-dichotomous classification problem. Since MC weight derivatives are not available for real data, the practical strategy I suggest consists in training a regressor of weight derivatives against MC events, and then using it as an optimal partitioning variable for 1-dimensional fits of data events. This CHEP2019 paper is an extension of the study presented at CHEP2018: in particular, event-by-event sensitivities allow the exact computation of the “FIP” ratio between the Fisher information obtained from an analysis and the maximum information that could possibly be obtained with an ideal detector. Using this expression, I discuss the relationship between FIP and two metrics commonly used in Meteorology (Brier score and MSE), and the importance of “sharpness” both in HEP and in that domain. I finally point out that HEP distribution fits should be optimized and evaluated using probabilistic metrics (like FIP or MSE), whereas ranking metrics (like AUC) or threshold metrics (like accuracy) are of limited relevance for these specific problems. |
Tasks | |
Published | 2020-03-28 |
URL | https://arxiv.org/abs/2003.12853v1 |
PDF | https://arxiv.org/pdf/2003.12853v1.pdf |
PWC | https://paperswithcode.com/paper/optimising-hep-parameter-fits-via-monte-carlo |
Repo | |
Framework | |
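A hedged scikit-learn sketch of the practical strategy the abstract suggests: regress per-event MC weight derivatives on observable event features, then use the regressor's output as a one-dimensional partitioning variable for data events. The weight derivatives and features below are synthetic stand-ins; in a real analysis the derivatives come from the MC generator.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins: event features x and per-event MC weight derivatives
# dw/dtheta with respect to the measured parameter.
n_mc = 5000
x_mc = rng.normal(size=(n_mc, 3))
dw_dtheta = 0.8 * x_mc[:, 0] - 0.3 * x_mc[:, 1] ** 2 + rng.normal(0.1, 0.2, n_mc)

# Regress the per-event sensitivity proxy on observable event features.
reg = GradientBoostingRegressor().fit(x_mc, dw_dtheta)

# For real data events the derivative is unknown; the regressor's output is
# used as a 1-D partitioning variable, e.g. by binning events in predicted
# sensitivity before the parameter fit.
x_data = rng.normal(size=(1000, 3))
sensitivity = reg.predict(x_data)
bins = np.quantile(sensitivity, np.linspace(0, 1, 11))
partition = np.digitize(sensitivity, bins[1:-1])
```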
Modeling Uncertainty and Imprecision in Nonmonotonic Reasoning using Fuzzy Numbers
Title | Modeling Uncertainty and Imprecision in Nonmonotonic Reasoning using Fuzzy Numbers |
Authors | Sandip Paul, Kumar Sankar Ray, Diganta Saha |
Abstract | To deal with uncertainty in reasoning, interval-valued logic has been developed. But uniform intervals cannot capture the difference in degrees of belief for different values in the interval. To address this problem, triangular and trapezoidal fuzzy numbers are used as the set of truth values along with traditional intervals. Preorder-based truth and knowledge orderings are defined over the set of fuzzy numbers on $[0,1]$. Based on this enhanced set of epistemic states, an answer set framework is developed, with properly defined logical connectives. This type of framework is efficient for knowledge representation and reasoning with vague and uncertain information in a nonmonotonic environment where rules may possess exceptions. |
Tasks | |
Published | 2020-01-03 |
URL | https://arxiv.org/abs/2001.01781v1 |
PDF | https://arxiv.org/pdf/2001.01781v1.pdf |
PWC | https://paperswithcode.com/paper/modeling-uncertainty-and-imprecision-in |
Repo | |
Framework | |
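A small illustration of the truth values involved: a triangular fuzzy number on $[0,1]$ with its membership function, plus one possible componentwise preorder. The paper's actual truth and knowledge orderings, and the answer set semantics built on them, may be defined differently; this is only a sketch of the data structure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TriangularFuzzyNumber:
    """Triangular fuzzy truth value (a, b, c) on [0, 1] with peak at b."""
    a: float  # left end of the support
    b: float  # peak (membership 1)
    c: float  # right end of the support

    def membership(self, x: float) -> float:
        # Assumes a < b < c (non-degenerate triangle).
        if x <= self.a or x >= self.c:
            return 0.0
        if x <= self.b:
            return (x - self.a) / (self.b - self.a)
        return (self.c - x) / (self.c - self.b)

def truth_leq(p: TriangularFuzzyNumber, q: TriangularFuzzyNumber) -> bool:
    """One possible componentwise preorder ('p is at most as true as q');
    the paper's exact truth/knowledge orderings may differ."""
    return p.a <= q.a and p.b <= q.b and p.c <= q.c

low = TriangularFuzzyNumber(0.1, 0.2, 0.4)
high = TriangularFuzzyNumber(0.5, 0.7, 0.9)
assert truth_leq(low, high)
```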
Efficient statistical validation with edge cases to evaluate Highly Automated Vehicles
Title | Efficient statistical validation with edge cases to evaluate Highly Automated Vehicles |
Authors | Dhanoop Karunakaran, Stewart Worrall, Eduardo Nebot |
Abstract | The wide-scale deployment of Autonomous Vehicles (AV) seems to be imminent despite many safety challenges that are yet to be resolved. It is well known that there are no universally agreed Verification and Validation (VV) methodologies to guarantee absolute safety, which is crucial for the acceptance of this technology. Existing standards focus on deterministic processes where the validation requires only a set of test cases that cover the requirements. Modern autonomous vehicles will undoubtedly include machine learning and probabilistic techniques that require a much more comprehensive testing regime due to the non-deterministic nature of the operating design domain. A rigorous statistical validation process is an essential component required to address this challenge. Most research in this area focuses on evaluating system performance in large-scale real-world data-gathering exercises (number of miles travelled) or on randomised test scenarios in simulation. This paper presents a new approach to compute the statistical characteristics of a system’s behaviour by biasing automatically generated test cases towards the worst-case scenarios, identifying potential unsafe edge cases. We use reinforcement learning (RL) to learn the behaviours of simulated actors that cause unsafe behaviour, measured by the well-established RSS safety metric. We demonstrate that, by using the method, we can more efficiently validate a system with a smaller number of test cases by focusing the simulation on the worst-case scenarios, generating edge cases that correspond to unsafe situations. |
Tasks | Autonomous Vehicles |
Published | 2020-03-04 |
URL | https://arxiv.org/abs/2003.01886v1 |
PDF | https://arxiv.org/pdf/2003.01886v1.pdf |
PWC | https://paperswithcode.com/paper/efficient-statistical-validation-with-edge |
Repo | |
Framework | |
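The abstract scores unsafe behaviour with the RSS safety metric; as a concrete reference point, below is a sketch of the RSS minimum safe longitudinal gap (following the published RSS definition), which a validation run can compare against the observed gap to flag an edge case. The parameter values are illustrative defaults, not calibrated ones.

```python
def rss_min_longitudinal_gap(v_rear, v_front, rho=1.0,
                             a_max_accel=3.0, b_min_brake=4.0, b_max_brake=8.0):
    """Minimum safe longitudinal gap (metres) between a rear and a front vehicle
    under the RSS model: the rear vehicle may accelerate at up to a_max_accel
    for the response time rho, then brakes at least at b_min_brake, while the
    front vehicle brakes at most at b_max_brake."""
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2
         + (v_rear + rho * a_max_accel) ** 2 / (2.0 * b_min_brake)
         - v_front ** 2 / (2.0 * b_max_brake))
    return max(0.0, d)

# A scenario is flagged as unsafe (an edge case worth biasing towards)
# whenever the observed gap drops below this bound.
required = rss_min_longitudinal_gap(v_rear=20.0, v_front=15.0)
print(f"minimum safe gap: {required:.1f} m")   # ~73.6 m with these parameters
```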
Hierarchical Correlation Clustering and Tree Preserving Embedding
Title | Hierarchical Correlation Clustering and Tree Preserving Embedding |
Authors | Morteza Haghir Chehreghani |
Abstract | We propose a hierarchical correlation clustering method that extends the well-known correlation clustering to produce hierarchical clusters. We then investigate embedding the resulting hierarchy for (tree preserving) embedding and feature extraction. We study the connection of such an embedding to single-linkage embedding and minimax distances, and in particular study minimax distances for correlation clustering. Finally, we demonstrate the performance of our methods on several UCI and 20 Newsgroups datasets. |
Tasks | |
Published | 2020-02-18 |
URL | https://arxiv.org/abs/2002.07756v1 |
PDF | https://arxiv.org/pdf/2002.07756v1.pdf |
PWC | https://paperswithcode.com/paper/hierarchical-correlation-clustering-and-tree |
Repo | |
Framework | |
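The abstract studies minimax distances; for readers unfamiliar with them, a minimal NumPy sketch of the all-pairs minimax (bottleneck) distance on a toy distance matrix, computed with a Floyd-Warshall-style closure. This illustrates the distance only, not the paper's hierarchical correlation clustering or tree-preserving embedding.

```python
import numpy as np

def minimax_distances(D):
    """All-pairs minimax (bottleneck) distances: the minimax distance between
    i and j is the minimum over all paths of the largest edge weight on the
    path. Floyd-Warshall over the (min, max) semiring; O(n^3), fine for the
    small illustration here."""
    M = D.astype(float).copy()
    n = len(M)
    for k in range(n):
        M = np.minimum(M, np.maximum(M[:, k:k + 1], M[k:k + 1, :]))
    np.fill_diagonal(M, 0.0)
    return M

# Toy pairwise distances: two tight pairs (0,1) and (2,3) linked through node 1.
D = np.array([[0, 1, 9, 9],
              [1, 0, 2, 9],
              [9, 2, 0, 1],
              [9, 9, 1, 0]], float)
print(minimax_distances(D))
# The minimax distance from 0 to 3 is 2: the bottleneck edge on the path 0-1-2-3.
```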
Automatic Cost Function Learning with Interpretable Compositional Networks
Title | Automatic Cost Function Learning with Interpretable Compositional Networks |
Authors | Florian Richoux, Jean-François Baffier |
Abstract | Cost Function Networks (CFN) are a formalism in Constraint Programming to model combinatorial satisfaction or optimization problems. By associating a function with each constraint type to evaluate the quality of an assignment, they extend the expressivity of regular CSP/COP formalisms, but at the price of making problem modeling harder. Indeed, in addition to the regular variables/domains/constraints sets, one must provide a set of cost functions that are not always easy to define. Here we propose a method to automatically learn a cost function of a constraint, given a function deciding if assignments are valid or not. This is, to the best of our knowledge, the first attempt to automatically learn cost functions. Our method learns cost functions in a supervised fashion, trying to reproduce the Hamming distance, using a variation of neural networks we named Interpretable Compositional Networks, which allows us to get explainable results, unlike regular artificial neural networks. We experiment with it on 5 different constraints to show its versatility. Experiments show that functions learned on small dimensions scale to high dimensions, outputting a perfect or near-perfect Hamming distance for most constraints. Our system can be used to automatically generate cost functions, thus offering the expressivity of CFN with the same modeling effort as for CSP/COP. |
Tasks | |
Published | 2020-02-23 |
URL | https://arxiv.org/abs/2002.09811v1 |
PDF | https://arxiv.org/pdf/2002.09811v1.pdf |
PWC | https://paperswithcode.com/paper/automatic-cost-function-learning-with |
Repo | |
Framework | |
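The learned cost functions are trained to reproduce the Hamming distance to the closest valid assignment. A brute-force sketch of that training target for a toy all-different constraint is shown below; it is only meant to make the target concrete, since the paper's point is to learn this cost with Interpretable Compositional Networks rather than enumerate it.

```python
from itertools import product

def all_different(assignment):
    return len(set(assignment)) == len(assignment)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def hamming_cost(assignment, domain, is_valid):
    """Hamming distance from `assignment` to the nearest valid assignment,
    computed by brute force over the domain (only feasible for tiny instances)."""
    n = len(assignment)
    valid = [c for c in product(domain, repeat=n) if is_valid(c)]
    return min(hamming(assignment, v) for v in valid)

# Cost 0 for a valid assignment, >0 otherwise.
domain = range(3)
print(hamming_cost((0, 1, 2), domain, all_different))  # 0
print(hamming_cost((0, 0, 2), domain, all_different))  # 1: one value must change
```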
A Supervised Learning Algorithm for Multilayer Spiking Neural Networks Based on Temporal Coding Toward Energy-Efficient VLSI Processor Design
Title | A Supervised Learning Algorithm for Multilayer Spiking Neural Networks Based on Temporal Coding Toward Energy-Efficient VLSI Processor Design |
Authors | Yusuke Sakemi, Kai Morino, Takashi Morie, Kazuyuki Aihara |
Abstract | Spiking neural networks (SNNs) are brain-inspired mathematical models with the ability to process information in the form of spikes. SNNs are expected to provide not only new machine-learning algorithms, but also energy-efficient computational models when implemented in VLSI circuits. In this paper, we propose a novel supervised learning algorithm for SNNs based on temporal coding. A spiking neuron in this algorithm is designed to facilitate analog VLSI implementations with analog resistive memory, by which ultra-high energy efficiency can be achieved. We also propose several techniques to improve the performance on a recognition task, and show that the classification accuracy of the proposed algorithm is as high as that of the state-of-the-art temporal coding SNN algorithms on the MNIST dataset. Finally, we discuss the robustness of the proposed SNNs against variations that arise from the device manufacturing process and are unavoidable in analog VLSI implementation. We also propose a technique to suppress the effects of variations in the manufacturing process on the recognition performance. |
Tasks | |
Published | 2020-01-08 |
URL | https://arxiv.org/abs/2001.05348v1 |
PDF | https://arxiv.org/pdf/2001.05348v1.pdf |
PWC | https://paperswithcode.com/paper/a-supervised-learning-algorithm-for |
Repo | |
Framework | |
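The algorithm relies on temporal coding, where information is carried by spike timing. A generic time-to-first-spike encoding (stronger inputs fire earlier) is sketched below to make that idea concrete; it is a common temporal-coding choice, not the paper's specific analog-hardware-oriented neuron model.

```python
import numpy as np

def time_to_first_spike(x, t_max=1.0, eps=1e-6):
    """Encode normalised intensities x in [0, 1] as spike times in [0, t_max]:
    the strongest input fires earliest, and a zero input never fires (np.inf).
    This is a generic temporal-coding scheme, not the paper's exact model."""
    x = np.clip(np.asarray(x, float), 0.0, 1.0)
    times = t_max * (1.0 - x)
    times[x < eps] = np.inf
    return times

pixels = np.array([0.0, 0.25, 0.9, 1.0])
print(time_to_first_spike(pixels))   # approximately [inf, 0.75, 0.1, 0.0]
```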
Adaptive Experience Selection for Policy Gradient
Title | Adaptive Experience Selection for Policy Gradient |
Authors | Saad Mohamad, Giovanni Montana |
Abstract | Policy gradient reinforcement learning (RL) algorithms have achieved impressive performance in challenging learning tasks such as continuous control, but suffer from high sample complexity. Experience replay is a commonly used approach to improve sample efficiency, but gradient estimators using past trajectories typically have high variance. Existing sampling strategies for experience replay, like uniform sampling or prioritised experience replay, do not explicitly try to control the variance of the gradient estimates. In this paper, we propose an online learning algorithm, adaptive experience selection (AES), to adaptively learn an experience sampling distribution that explicitly minimises this variance. Using a regret minimisation approach, AES iteratively updates the experience sampling distribution to match the performance of a competitor distribution assumed to have optimal variance. Sample non-stationarity is addressed by proposing a dynamic (i.e. time-changing) competitor distribution for which a closed-form solution is proposed. We demonstrate that AES is a low-regret algorithm with reasonable sample complexity. Empirically, AES has been implemented for deep deterministic policy gradient and soft actor-critic algorithms, and tested on 8 continuous control tasks from the OpenAI Gym library. Our results show that AES leads to significantly improved performance compared to currently available experience sampling strategies for policy gradient. |
Tasks | Continuous Control |
Published | 2020-02-17 |
URL | https://arxiv.org/abs/2002.06946v1 |
PDF | https://arxiv.org/pdf/2002.06946v1.pdf |
PWC | https://paperswithcode.com/paper/adaptive-experience-selection-for-policy |
Repo | |
Framework | |
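AES learns an experience-sampling distribution by regret minimisation; the exact update is in the paper, but the mechanism can be conveyed by a generic multiplicative-weights (exponentiated-gradient) sketch that maintains a distribution over stored trajectories and reweights them by a per-trajectory variance proxy. Class and variable names are hypothetical, and this is not the authors' exact algorithm.

```python
import numpy as np

class ExperienceSampler:
    """Sampling distribution over replay-buffer trajectories, updated with a
    multiplicative-weights (exponentiated-gradient) rule: trajectories with a
    lower loss (e.g. lower contribution to gradient variance) gain weight."""

    def __init__(self, n_items, lr=0.1):
        self.log_w = np.zeros(n_items)
        self.lr = lr

    @property
    def probs(self):
        w = np.exp(self.log_w - self.log_w.max())
        return w / w.sum()

    def sample(self, rng, size):
        return rng.choice(len(self.log_w), size=size, p=self.probs)

    def update(self, losses):
        # Higher loss => lower weight on the next round.
        self.log_w -= self.lr * np.asarray(losses)

rng = np.random.default_rng(0)
sampler = ExperienceSampler(n_items=100)
idx = sampler.sample(rng, size=32)               # trajectories to replay
sampler.update(losses=rng.uniform(size=100))     # per-trajectory variance proxies
```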
DeepHammer: Depleting the Intelligence of Deep Neural Networks through Targeted Chain of Bit Flips
Title | DeepHammer: Depleting the Intelligence of Deep Neural Networks through Targeted Chain of Bit Flips |
Authors | Fan Yao, Adnan Siraj Rakin, Deliang Fan |
Abstract | Security of machine learning is increasingly becoming a major concern due to the ubiquitous deployment of deep learning in many security-sensitive domains. Many prior studies have shown external attacks such as adversarial examples that tamper with the integrity of DNNs using maliciously crafted inputs. However, the security implications of internal threats (i.e., hardware vulnerabilities) to DNN models have not yet been well understood. In this paper, we demonstrate the first hardware-based attack on quantized deep neural networks, DeepHammer, which deterministically induces bit flips in model weights to compromise DNN inference by exploiting the rowhammer vulnerability. DeepHammer performs an aggressive bit search in the DNN model to identify the most vulnerable weight bits that are flippable under system constraints. To trigger deterministic bit flips across multiple pages within a reasonable amount of time, we develop novel system-level techniques that enable fast deployment of victim pages, memory-efficient rowhammering and precise flipping of targeted bits. DeepHammer can deliberately degrade the inference accuracy of the victim DNN system to a level that is only as good as random guessing, thus completely depleting the intelligence of targeted DNN systems. We systematically demonstrate our attacks on real systems against 12 DNN architectures with 4 different datasets and different application domains. Our evaluation shows that DeepHammer is able to successfully tamper with DNN inference behavior at run time within a few minutes. We further discuss several mitigation techniques from both the algorithm and system levels to protect DNNs against such attacks. Our work highlights the need to incorporate security mechanisms in future deep learning systems to enhance the robustness of DNNs against hardware-based deterministic fault injections. |
Tasks | |
Published | 2020-03-30 |
URL | https://arxiv.org/abs/2003.13746v1 |
PDF | https://arxiv.org/pdf/2003.13746v1.pdf |
PWC | https://paperswithcode.com/paper/deephammer-depleting-the-intelligence-of-deep |
Repo | |
Framework | |
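DeepHammer flips DRAM bits physically via rowhammer; the software-visible effect on a quantized model can be illustrated by flipping a single bit of an int8 weight, which is how vulnerable-bit search is typically simulated. The snippet below is an illustration of the fault model, not the attack itself.

```python
import numpy as np

def flip_bit_int8(weights, index, bit):
    """Flip one bit of one int8 weight (a software model of a single
    rowhammer-induced flip; the real attack flips the corresponding DRAM cell)."""
    assert 0 <= bit <= 7
    w = weights.copy()
    u = w.view(np.uint8)               # reinterpret the bytes without copying
    u[index] ^= np.uint8(1 << bit)
    return w

weights = np.array([12, -7, 53, 100], dtype=np.int8)   # toy quantized weights
attacked = flip_bit_int8(weights, index=3, bit=6)      # flip bit 6 of the 4th weight
print(weights[3], "->", attacked[3])                   # 100 -> 36
```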
Inference in Multi-Layer Networks with Matrix-Valued Unknowns
Title | Inference in Multi-Layer Networks with Matrix-Valued Unknowns |
Authors | Parthe Pandit, Mojtaba Sahraee-Ardakan, Sundeep Rangan, Philip Schniter, Alyson K. Fletcher |
Abstract | We consider the problem of inferring the input and hidden variables of a stochastic multi-layer neural network from an observation of the output. The hidden variables in each layer are represented as matrices. This problem applies to signal recovery via deep generative prior models, multi-task and mixed regression and learning certain classes of two-layer neural networks. A unified approximation algorithm for both MAP and MMSE inference is proposed by extending a recently-developed Multi-Layer Vector Approximate Message Passing (ML-VAMP) algorithm to handle matrix-valued unknowns. It is shown that the performance of the proposed Multi-Layer Matrix VAMP (ML-Mat-VAMP) algorithm can be exactly predicted in a certain random large-system limit, where the dimensions $N\times d$ of the unknown quantities grow as $N\rightarrow\infty$ with $d$ fixed. In the two-layer neural-network learning problem, this scaling corresponds to the case where the number of input features and training samples grow to infinity but the number of hidden nodes stays fixed. The analysis enables a precise prediction of the parameter and test error of the learning. |
Tasks | |
Published | 2020-01-26 |
URL | https://arxiv.org/abs/2001.09396v1 |
PDF | https://arxiv.org/pdf/2001.09396v1.pdf |
PWC | https://paperswithcode.com/paper/inference-in-multi-layer-networks-with-matrix |
Repo | |
Framework | |
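To make the setting concrete, a small NumPy sketch of the inference problem (not of the ML-Mat-VAMP algorithm itself): matrix-valued unknowns of size $N\times d$ with $d$ fixed are propagated through a two-layer stochastic network, and the task is to recover the input and hidden matrices from the observed output. The priors, the MAP/MMSE estimators and the message-passing recursions are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 500, 3          # rows grow large, columns stay fixed (the paper's scaling)

# Unknown matrix-valued input and random layer weights.
Z0 = rng.standard_normal((N, d))
W1 = rng.standard_normal((N, N)) / np.sqrt(N)
W2 = rng.standard_normal((N, N)) / np.sqrt(N)

# Two-layer stochastic network: linear map, componentwise nonlinearity, noise.
Z1 = np.maximum(W1 @ Z0, 0.0)                       # hidden matrix-valued variable
Y = W2 @ Z1 + 0.01 * rng.standard_normal((N, d))    # observed output

# Inference task addressed by ML-Mat-VAMP: recover Z0 and Z1 (MAP or MMSE)
# given Y, W1, W2 and the priors; the recursions are not reproduced here.
```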
WiSM: Windowing Surrogate Model for Evaluation of Curvature-Constrained Tours with Dubins vehicle
Title | WiSM: Windowing Surrogate Model for Evaluation of Curvature-Constrained Tours with Dubins vehicle |
Authors | Jan Drchal, Jan Faigl, Petr Váňa |
Abstract | Dubins tours represent a solution of the Dubins Traveling Salesman Problem (DTSP), a variant of the optimization routing problem that seeks a curvature-constrained shortest path visiting a set of locations such that the path is feasible for the Dubins vehicle, which moves only forward and has a limited turning radius. The DTSP combines the NP-hard combinatorial optimization of determining the optimal sequence of visits to the locations, as in the regular TSP, with the continuous optimization of the heading angles at the locations, where the optimal heading values depend on the sequence of visits and vice versa. We address the computationally challenging DTSP by fast evaluation of the sequence of visits with the proposed Windowing Surrogate Model (WiSM), which estimates the length of the optimal Dubins path connecting a sequence of locations in a Dubins tour. The estimation is sped up by a regression model trained on close-to-optimum solutions of small Dubins tours that are generalized to large-scale instances of the addressed DTSP using the sliding window technique and a cache of already computed results. The reported results support that the proposed WiSM enables fast convergence of a relatively simple evolutionary algorithm to high-quality solutions of the DTSP. We show that with an increasing number of locations, our algorithm scales significantly better than other state-of-the-art DTSP solvers. |
Tasks | Combinatorial Optimization |
Published | 2020-02-03 |
URL | https://arxiv.org/abs/2002.00811v1 |
PDF | https://arxiv.org/pdf/2002.00811v1.pdf |
PWC | https://paperswithcode.com/paper/wism-windowing-surrogate-model-for-evaluation |
Repo | |
Framework | |
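A hedged sketch of the windowing-plus-cache evaluation the abstract mentions: a fixed-size window slides over the closed sequence of visits, a pretrained surrogate regressor estimates each window's contribution, and a shared cache avoids re-evaluating windows that recur across evolutionary candidates. The window semantics and the surrogate below are assumptions made for illustration; WiSM's exact aggregation and its trained regressor are not reproduced.

```python
def evaluate_sequence(sequence, surrogate, window=5, cache=None):
    """Estimate a tour's length by sliding a fixed-size window over the closed
    sequence of locations and summing the surrogate's per-window predictions.
    `surrogate` is a hypothetical pretrained regressor mapping a window of
    locations to an estimated length contribution; passing the same `cache`
    dict across candidate tours reuses already computed windows."""
    cache = {} if cache is None else cache
    n = len(sequence)
    total = 0.0
    for i in range(n):                                  # closed tour: wrap around
        w = tuple(sequence[(i + j) % n] for j in range(window))
        if w not in cache:
            cache[w] = surrogate(w)
        total += cache[w]
    return total

# Toy stand-in for the trained regressor; in WiSM this would be a model trained
# on close-to-optimum solutions of small Dubins tours.
toy_surrogate = lambda window_locs: 1.0 + 0.1 * len(window_locs)
shared_cache = {}
print(evaluate_sequence(list(range(12)), toy_surrogate, window=5, cache=shared_cache))
```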
Viewport-Aware Deep Reinforcement Learning Approach for 360$^o$ Video Caching
Title | Viewport-Aware Deep Reinforcement Learning Approach for 360$^o$ Video Caching |
Authors | Pantelis Maniotis, Nikolaos Thomos |
Abstract | 360$^o$ video is an essential component of VR/AR/MR systems that provides an immersive experience to the users. However, 360$^o$ video is associated with high bandwidth requirements. The required bandwidth can be reduced by exploiting the fact that users are interested in viewing only a part of the video scene and that users request viewports that overlap with each other. Motivated by the findings of recent works, where the benefits of caching video tiles at edge servers instead of caching entire 360$^o$ videos were shown, in this paper we introduce the concept of virtual viewports, which have the same number of tiles as the original viewports. The tiles forming these viewports are the most popular ones for each video and are determined by the users’ requests. Then, we propose a proactive caching scheme that assumes unknown video and viewport popularity. Our scheme determines which videos to cache as well as which is the optimal virtual viewport per video. Virtual viewports permit lowering the dimensionality of the cache optimization problem. To solve the problem, we first formulate the content placement of 360$^o$ videos in edge cache networks as a Markov Decision Process (MDP), and then we determine the optimal caching placement using the Deep Q-Network (DQN) algorithm. The proposed solution aims at maximizing the overall quality of the 360$^o$ videos delivered to the end-users by caching the most popular 360$^o$ videos at base quality along with a virtual viewport in high quality. We extensively evaluate the performance of the proposed system and compare it with that of known systems such as LFU, LRU, and FIFO, over both synthetic and real 360$^o$ video traces. The results reveal the large benefits of proactively caching virtual viewports instead of the original ones in terms of the overall quality of the rendered viewports, the cache hit ratio, and the servicing cost. |
Tasks | |
Published | 2020-03-18 |
URL | https://arxiv.org/abs/2003.08473v1 |
PDF | https://arxiv.org/pdf/2003.08473v1.pdf |
PWC | https://paperswithcode.com/paper/viewport-aware-deep-reinforcement-learning |
Repo | |
Framework | |
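The virtual viewport of a video is formed by its most requested tiles, with the same number of tiles as an original viewport. A minimal sketch of that construction from a log of tile requests is given below; the MDP formulation and the DQN caching policy are not reproduced.

```python
from collections import Counter

def virtual_viewport(tile_requests, viewport_size):
    """Return the `viewport_size` most requested tiles of a video.
    `tile_requests` is an iterable of tile indices taken from the users'
    viewport requests; ties are broken by Counter.most_common."""
    counts = Counter(tile_requests)
    return [tile for tile, _ in counts.most_common(viewport_size)]

# Requests gathered from users watching one 360-degree video (toy example).
requests = [5, 6, 6, 7, 7, 7, 8, 12, 12, 13]
print(virtual_viewport(requests, viewport_size=4))   # [7, 6, 12, 5]
```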
AAAI FSS-19: Human-Centered AI: Trustworthiness of AI Models and Data Proceedings
Title | AAAI FSS-19: Human-Centered AI: Trustworthiness of AI Models and Data Proceedings |
Authors | Florian Buettner, John Piorkowski, Ian McCulloh, Ulli Waltinger |
Abstract | To facilitate the widespread acceptance of AI systems guiding decision-making in real-world applications, it is key that solutions comprise trustworthy, integrated human-AI systems. Not only in safety-critical applications such as autonomous driving or medicine, but also in dynamic open world systems in industry and government it is crucial for predictive models to be uncertainty-aware and yield trustworthy predictions. Another key requirement for deployment of AI at enterprise scale is to realize the importance of integrating human-centered design into AI systems such that humans are able to use systems effectively, understand results and output, and explain findings to oversight committees. While the focus of this symposium was on AI systems to improve data quality and technical robustness and safety, we welcomed submissions from broadly defined areas also discussing approaches addressing requirements such as explainable models, human trust and ethical aspects of AI. |
Tasks | Autonomous Driving, Decision Making |
Published | 2020-01-15 |
URL | https://arxiv.org/abs/2001.05375v1 |
PDF | https://arxiv.org/pdf/2001.05375v1.pdf |
PWC | https://paperswithcode.com/paper/aaai-fss-19-human-centered-ai-trustworthiness |
Repo | |
Framework | |