January 25, 2020

3415 words 17 mins read

Paper Group ANR 1759


A Model for the Automatic Optimization of Loops in the Tiramisu Compiler: The Case of Loop Unrolling

Title A Model for the Automatic Optimization of Loops in the Tiramisu Compiler: The Case of Loop Unrolling (original French title: “Proposition d’un modèle pour l’optimisation automatique de boucles dans le compilateur Tiramisu : cas d’optimisation de déroulage”)
Authors Asma Balamane, Zina Taklit
Abstract Computer architectures are becoming more and more complex, and increasing effort is required to develop techniques that improve program performance and exploit hardware resources efficiently. As a result, many transformations are applied at various levels of code abstraction: at the high level, the representation is close to the source language; at the low level, it is close to machine code. These transformations are called code optimizations. Optimizing programs requires deep expertise. On one hand, it is a tedious task, because finding the best combination of optimizations, together with their best factors, requires many experiments. On the other hand, it is a critical task, because a poor choice may degrade the performance of the program instead of improving it. Automating this task addresses the problem and can produce good results. Our graduation project proposes a novel approach based on neural networks to automatically optimize loops in Tiramisu. Tiramisu is a new language for writing high-performance code that separates an algorithm from its optimizations. We chose loop unrolling as a case study; our contribution automates the choice of the best loop unrolling factor for a program written in Tiramisu.
Tasks
Published 2019-07-29
URL https://arxiv.org/abs/1908.01057v1
PDF https://arxiv.org/pdf/1908.01057v1.pdf
PWC https://paperswithcode.com/paper/proposition-dun-modele-pour-loptimisation
Repo
Framework
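The paper automates the choice of the unrolling factor with a neural network; Tiramisu itself performs the transformation. As a minimal illustration of what unrolling by a fixed factor means (written in plain Python rather than Tiramisu, purely as a sketch), the loop body is replicated four times and a short epilogue handles the leftover iterations:

```python
def sum_unrolled4(xs):
    """Sum a list with the loop body replicated 4 times (unroll factor 4)."""
    n, i, total = len(xs), 0, 0
    while i + 4 <= n:
        # body duplicated 4 times: fewer loop-condition checks per element
        total += xs[i]
        total += xs[i + 1]
        total += xs[i + 2]
        total += xs[i + 3]
        i += 4
    while i < n:          # epilogue: the remaining n mod 4 elements
        total += xs[i]
        i += 1
    return total
```

Choosing the factor is the hard part the paper addresses: too small wastes the benefit, too large blows up code size and register pressure.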

ExpertMatcher: Automating ML Model Selection for Clients using Hidden Representations

Title ExpertMatcher: Automating ML Model Selection for Clients using Hidden Representations
Authors Vivek Sharma, Praneeth Vepakomma, Tristan Swedish, Ken Chang, Jayashree Kalpathy-Cramer, Ramesh Raskar
Abstract Split Learning, a framework for distributed computation in which model components are split between the client and the server, was recently introduced (Vepakomma et al., 2018b). As Split Learning scales to include many different model components, there needs to be a method of matching client-side model components with the best server-side model components. A solution to this problem was introduced in the ExpertMatcher (Sharma et al., 2019) framework, which uses autoencoders to match raw data to models. In this work, we propose an extension of ExpertMatcher in which matching can be performed without the need to share the client’s raw data representation. The technique is applicable to situations where there are local clients and centralized expert ML models, but the sharing of raw data is constrained.
Tasks Model Selection
Published 2019-10-09
URL https://arxiv.org/abs/1910.03731v1
PDF https://arxiv.org/pdf/1910.03731v1.pdf
PWC https://paperswithcode.com/paper/expertmatcher-automating-ml-model-selection-1
Repo
Framework
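The core matching idea can be sketched with linear autoencoders: assign the client's data to the expert whose autoencoder reconstructs it with the least error. This toy uses PCA as a stand-in linear autoencoder and invented names (`fit_linear_autoencoder`, `expert_a`); it illustrates reconstruction-error matching only, not the paper's hidden-representation protocol that avoids sharing raw data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "expert" domains whose data varies along different dominant directions.
domain_a = rng.normal(size=(200, 5)) * np.array([5, 1, 1, 1, 1])
domain_b = rng.normal(size=(200, 5)) * np.array([1, 5, 1, 1, 1])

def fit_linear_autoencoder(X, k=1):
    """PCA as a linear autoencoder: top-k right singular vectors."""
    _, _, vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
    return vt[:k]                      # decoder rows; encoder is the transpose

def reconstruction_error(x, comps):
    recon = (x @ comps.T) @ comps      # encode, then decode
    return float(np.sum((x - recon) ** 2))

experts = {"expert_a": fit_linear_autoencoder(domain_a),
           "expert_b": fit_linear_autoencoder(domain_b)}

client_x = np.array([4.0, 0.1, 0.0, 0.0, 0.0])   # resembles domain A
best = min(experts, key=lambda e: reconstruction_error(client_x, experts[e]))
```

The client's sample lies close to expert A's principal subspace, so that expert wins the match.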

Towards a Philological Metric through a Topological Data Analysis Approach

Title Towards a Philological Metric through a Topological Data Analysis Approach
Authors Eduardo Paluzo-Hidalgo, Rocio Gonzalez-Diaz, Miguel A. Gutiérrez-Naranjo
Abstract The canon of baroque Spanish literature has been thoroughly studied with philological techniques. The major representatives of the poetry of this epoch are Francisco de Quevedo and Luis de Góngora y Argote. Literary experts commonly place them in two different streams: Quevedo belongs to the Conceptismo and Góngora to the Culteranismo. Besides, traditionally, even if Quevedo is considered the most representative of the Conceptismo, Lope de Vega is also considered to be, at least, closely related to this literary trend. In this paper, we use Topological Data Analysis techniques to provide a first approach to a metric distance between the literary styles of these poets. As a consequence, we reach results that accord with the literary experts’ criteria, locating the literary style of Lope de Vega closer to that of Quevedo than to that of Góngora.
Tasks Topological Data Analysis
Published 2019-12-19
URL https://arxiv.org/abs/1912.09253v3
PDF https://arxiv.org/pdf/1912.09253v3.pdf
PWC https://paperswithcode.com/paper/towards-a-philological-metric-through-a
Repo
Framework
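The topological ingredient behind such a metric can be sketched with 0-dimensional persistent homology: in a Vietoris-Rips filtration every point is born at scale 0, and a connected component dies when an edge first merges it into another. A minimal union-find implementation (a sketch of the H0 computation only, not the paper's philological pipeline for embedding texts as point clouds):

```python
import numpy as np
from itertools import combinations

def h0_persistence(points):
    """Finite death times of the 0-dimensional persistence diagram of a
    Vietoris-Rips filtration (single-linkage merge heights)."""
    n = len(points)
    edges = sorted((np.linalg.norm(np.subtract(p, q)), i, j)
                   for (i, p), (j, q) in combinations(enumerate(points), 2))
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    deaths = []
    for d, i, j in edges:                   # process edges by increasing length
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)                # a component dies at this scale
    return deaths                           # n-1 finite deaths; one class persists
```

Diagrams computed this way for two texts can then be compared with a diagram distance (e.g. bottleneck), which is the kind of metric the paper builds toward.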

Learning Rotation Adaptive Correlation Filters in Robust Visual Object Tracking

Title Learning Rotation Adaptive Correlation Filters in Robust Visual Object Tracking
Authors Litu Rout, Priya Mariam Raju, Deepak Mishra, Rama Krishna Sai Subrahmanyam Gorthi
Abstract Visual object tracking is one of the major challenges in the field of computer vision. Correlation Filter (CF) trackers are one of the most widely used categories in tracking. Though numerous tracking algorithms based on CFs are available today, most of them fail to efficiently detect the object in an unconstrained environment with dynamically changing object appearance. In order to tackle such challenges, the existing strategies often rely on a particular set of algorithms. Here, we propose a robust framework that offers the provision to incorporate illumination and rotation invariance in the standard Discriminative Correlation Filter (DCF) formulation. We also supervise the detection stage of DCF trackers by eliminating false positives in the convolution response map. Further, we demonstrate the impact of displacement consistency on CF trackers. The generality and efficiency of the proposed framework are illustrated by integrating our contributions into two state-of-the-art CF trackers: SRDCF and ECO. As per the comprehensive experiments on the VOT2016 dataset, our top trackers show substantial improvements of 14.7% and 6.41% in robustness and 11.4% and 1.71% in Average Expected Overlap (AEO) over the baseline SRDCF and ECO, respectively.
Tasks Object Tracking, Visual Object Tracking
Published 2019-06-04
URL https://arxiv.org/abs/1906.01551v1
PDF https://arxiv.org/pdf/1906.01551v1.pdf
PWC https://paperswithcode.com/paper/learning-rotation-adaptive-correlation
Repo
Framework
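The standard correlation-filter formulation the paper builds on admits a closed-form solution in the Fourier domain. A minimal MOSSE-style sketch (not the paper's rotation-adaptive extension): train a filter whose desired response is a delta at the origin, then locate the target in a new patch by the peak of the correlation response:

```python
import numpy as np

def train_filter(template, lam=1e-4):
    """Closed-form correlation filter: with desired response G (here a
    delta at the origin, whose FFT is all ones),
    conj(H) = G * conj(F) / (|F|^2 + lam)."""
    F = np.fft.fft2(template)
    G = np.ones_like(F)                # FFT of a delta at (0, 0)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def detect(H_conj, patch):
    """Correlation response of a search patch; the argmax locates the
    translation of the target inside the patch."""
    resp = np.real(np.fft.ifft2(H_conj * np.fft.fft2(patch)))
    return np.unravel_index(np.argmax(resp), resp.shape)
```

Shifting the template circularly and re-detecting recovers the shift, which is the basic tracking-by-correlation loop that SRDCF and ECO refine.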

Deep Learning-based Vehicle Behaviour Prediction For Autonomous Driving Applications: A Review

Title Deep Learning-based Vehicle Behaviour Prediction For Autonomous Driving Applications: A Review
Authors Sajjad Mozaffari, Omar Y. Al-Jarrah, Mehrdad Dianati, Paul Jennings, Alexandros Mouzakitis
Abstract The behaviour prediction function of an autonomous vehicle predicts the future states of nearby vehicles based on current and past observations of the surrounding environment, enhancing the vehicle’s awareness of imminent hazards. However, conventional behaviour prediction solutions are applicable only in simple driving scenarios that require short prediction horizons. Most recently, deep learning-based approaches have become popular due to their superior performance in more complex environments compared to conventional approaches. Motivated by this increased popularity, in this paper we provide a comprehensive review of the state of the art in deep learning-based approaches for vehicle behaviour prediction. We first give an overview of the generic problem of vehicle behaviour prediction and discuss its challenges, followed by a classification and review of the most recent deep learning-based solutions based on three criteria: input representation, output type, and prediction method. The paper also discusses the performance of several well-known solutions, identifies the research gaps in the literature and outlines potential new research directions.
Tasks Autonomous Driving
Published 2019-12-25
URL https://arxiv.org/abs/1912.11676v1
PDF https://arxiv.org/pdf/1912.11676v1.pdf
PWC https://paperswithcode.com/paper/deep-learning-based-vehicle-behaviour
Repo
Framework

MaskedNet: The First Hardware Inference Engine Aiming Power Side-Channel Protection

Title MaskedNet: The First Hardware Inference Engine Aiming Power Side-Channel Protection
Authors Anuj Dubey, Rosario Cammarota, Aydin Aysu
Abstract Differential Power Analysis (DPA) has been an active area of research for the past two decades, studying attacks that extract secret information from cryptographic implementations through power measurements, as well as defenses against them. Unfortunately, research on power side-channels has so far predominantly focused on analyzing implementations of ciphers such as AES, DES, RSA, and recently post-quantum cryptography primitives (e.g., lattices). Meanwhile, machine-learning, and in particular deep-learning, applications are becoming ubiquitous, with several scenarios where the machine learning models are intellectual property requiring confidentiality. Expanding side-channel analysis to machine learning model extraction, however, is largely unexplored. This paper expands the DPA framework to neural-network classifiers. First, it shows DPA attacks during inference to extract the secret model parameters such as weights and biases of a neural network. Second, it proposes the $\textit{first countermeasures}$ against these attacks by adapting $\textit{masking}$. The resulting design uses novel masked components such as masked adder trees for fully-connected layers and masked Rectifier Linear Units for activation functions. On a SAKURA-X FPGA board, experiments show that first-order DPA attacks on the unprotected implementation can succeed with only 200 traces, while our protection increases the latency and area-cost by 2.8x and 2.3x, respectively.
Tasks
Published 2019-10-29
URL https://arxiv.org/abs/1910.13063v3
PDF https://arxiv.org/pdf/1910.13063v3.pdf
PWC https://paperswithcode.com/paper/maskednet-a-pathway-for-secure-inference
Repo
Framework
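The masking principle the paper applies in hardware can be shown in a few lines of software. A secret value is split into two Boolean shares; each share in isolation is uniformly random, so a first-order DPA attacker modeling the power draw of either share alone learns nothing about the secret. The paper's masked adder trees and masked ReLUs extend this idea to arithmetic in an FPGA datapath; the sketch below only illustrates share splitting and a trivial computation on shares:

```python
import secrets

def mask(value, bits=8):
    """Split an 8-bit secret into two Boolean shares (value ^ m, m).
    Each share on its own is uniformly distributed."""
    m = secrets.randbits(bits)
    return value ^ m, m

def masked_not(share, m):
    """Compute bitwise NOT on shares: only one share is touched, and the
    secret is never recombined during the computation."""
    return share ^ 0xFF, m

def unmask(share, m):
    """Recombine the shares to recover the (processed) secret."""
    return share ^ m
```

Linear operations like XOR and NOT pass through masking for free; the hard, novel part of designs like MaskedNet is masking the non-linear pieces (adders, activations).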

Unsupervised Space-Time Clustering using Persistent Homology

Title Unsupervised Space-Time Clustering using Persistent Homology
Authors Umar Islambekov, Yulia Gel
Abstract This paper presents a new clustering algorithm for space-time data based on the concepts of topological data analysis and, in particular, persistent homology. Employing persistent homology - a flexible mathematical tool from algebraic topology used to extract topological information from data - in unsupervised learning is an uncommon and novel approach. A notable aspect of this methodology is that it analyzes data at multiple resolutions, which allows true features to be distinguished from noise based on the extent of their persistence. We evaluate the performance of our algorithm on synthetic data and compare it to other well-known clustering algorithms such as K-means, hierarchical clustering and DBSCAN. We illustrate its application in the context of a case study of water quality in the Chesapeake Bay.
Tasks Topological Data Analysis
Published 2019-10-25
URL https://arxiv.org/abs/1910.11525v1
PDF https://arxiv.org/pdf/1910.11525v1.pdf
PWC https://paperswithcode.com/paper/unsupervised-space-time-clustering-using
Repo
Framework
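The multi-resolution view behind the method can be sketched by tracking connected components of the radius-r neighborhood graph as r grows: component counts that persist across a wide range of radii correspond to true clusters, while short-lived ones are attributed to noise. This is only the 0-dimensional persistence intuition, not the paper's full space-time algorithm:

```python
import numpy as np
from itertools import combinations

def component_counts(points, radii):
    """Number of connected components of the radius-r graph at each scale."""
    counts = []
    for r in radii:
        parent = list(range(len(points)))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x
        for (i, p), (j, q) in combinations(enumerate(points), 2):
            if np.linalg.norm(np.subtract(p, q)) <= r:
                parent[find(i)] = find(j)       # union the two components
        counts.append(len({find(i) for i in range(len(points))}))
    return counts
```

For two well-separated clumps the count drops quickly to 2 and then stays there over a long range of radii - that persistence is the evidence for two clusters.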

Adaptive Partitioning for Template Functions on Persistence Diagrams

Title Adaptive Partitioning for Template Functions on Persistence Diagrams
Authors Sarah Tymochko, Elizabeth Munch, Firas A. Khasawneh
Abstract As the field of Topological Data Analysis continues to show success in theory and in applications, there has been increasing interest in using tools from this field with methods for machine learning. Using persistent homology, specifically persistence diagrams, as inputs to machine learning techniques requires some mathematical creativity. The space of persistence diagrams does not have the desirable properties for machine learning, thus methods such as kernel methods and vectorization methods have been developed. One such featurization of persistence diagrams by Perea, Munch and Khasawneh uses continuous, compactly supported functions, referred to as “template functions,” which results in a stable vector representation of the persistence diagram. In this paper, we provide a method of adaptively partitioning persistence diagrams to improve these featurizations based on localized information in the diagrams. Additionally, we provide a framework to adaptively select parameters required for the template functions in order to best utilize the partitioning method. We present results for application to example data sets comparing classification results between template function featurizations with and without partitioning, in addition to other methods from the literature.
Tasks Topological Data Analysis
Published 2019-10-18
URL https://arxiv.org/abs/1910.08506v1
PDF https://arxiv.org/pdf/1910.08506v1.pdf
PWC https://paperswithcode.com/paper/adaptive-partitioning-for-template-functions
Repo
Framework
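A template-function featurization in the spirit of Perea, Munch and Khasawneh can be sketched with compactly supported "tent" bumps: each (birth, death) point of a diagram is mapped to (birth, lifetime) coordinates, and every template is summed over the diagram, giving a fixed-length vector regardless of how many points the diagram has. The adaptive-partitioning contribution of this paper then concerns where to place the templates; the sketch below uses hand-picked centers:

```python
import numpy as np

def tent(x, y, center, delta):
    """Compactly supported tent bump of radius `delta` around `center`."""
    cx, cy = center
    return max(0.0, 1.0 - max(abs(x - cx), abs(y - cy)) / delta)

def featurize(diagram, centers, delta=1.0):
    """Sum each template over all (birth, lifetime) points of the diagram,
    producing one feature per template center."""
    return np.array([sum(tent(b, d - b, c, delta) for b, d in diagram)
                     for c in centers])
```

Templates far from every diagram point contribute exactly zero, which is why placing them adaptively, near where the points actually live, improves the featurization.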

Deduction Theorem: The Problematic Nature of Common Practice in Game Theory

Title Deduction Theorem: The Problematic Nature of Common Practice in Game Theory
Authors Holger I. Meinhardt
Abstract We consider the Deduction Theorem that is used in the literature of game theory to run a purported proof by contradiction. In the context of game theory, it is stated that if we have a proof of $\phi \vdash \varphi$, then we also have a proof of $\phi \Rightarrow \varphi$. Hence, the proof of $\phi \Rightarrow \varphi$ is deduced from a previously known statement. However, we argue that one has to prove that the clauses $\phi$ and $\varphi$ exist, i.e., that they are known true statements, in order to establish that $\phi \vdash \varphi$ is provable, and that therefore $\phi \Rightarrow \varphi$ is provable as well. Thus, we are only allowed to reason with known true statements; we are not allowed to assume that $\phi$ or $\varphi$ exist. Doing so leads immediately to a wrong conclusion. Apart from this, we point to further reasons why the Deduction Theorem is not applicable to run a proof by contradiction. Finally, we present an example from industrial cooperation where the Deduction Theorem is incorrectly applied, with the consequence that the obtained result contradicts the well-known aggregation issue.
Tasks
Published 2019-07-31
URL https://arxiv.org/abs/1908.00409v1
PDF https://arxiv.org/pdf/1908.00409v1.pdf
PWC https://paperswithcode.com/paper/deduction-theorem-the-problematic-nature-of
Repo
Framework

Learning Internal Representations (PhD Thesis)

Title Learning Internal Representations (PhD Thesis)
Authors Jonathan Baxter
Abstract Most machine learning theory and practice is concerned with learning a single task. In this thesis it is argued that in general there is insufficient information in a single task for a learner to generalise well and that what is required for good generalisation is information about many similar learning tasks. Similar learning tasks form a body of prior information that can be used to constrain the learner and make it generalise better. Examples of learning scenarios in which there are many similar tasks are handwritten character recognition and spoken word recognition. The concept of the environment of a learner is introduced as a probability measure over the set of learning problems the learner might be expected to learn. It is shown how a sample from the environment may be used to learn a representation, or recoding of the input space that is appropriate for the environment. Learning a representation can equivalently be thought of as learning the appropriate features of the environment. Bounds are derived on the sample size required to ensure good generalisation from a representation learning process. These bounds show that under certain circumstances learning a representation appropriate for $n$ tasks reduces the number of examples required of each task by a factor of $n$. Once a representation is learnt it can be used to learn novel tasks from the same environment, with the result that far fewer examples are required of the new tasks to ensure good generalisation. Bounds are given on the number of tasks and the number of samples from each task required to ensure that a representation will be a good one for learning novel tasks. The results on representation learning are generalised to cover any form of automated hypothesis space bias.
Tasks Representation Learning
Published 2019-11-09
URL https://arxiv.org/abs/1911.03731v2
PDF https://arxiv.org/pdf/1911.03731v2.pdf
PWC https://paperswithcode.com/paper/learning-internal-representations
Repo
Framework

Learning with Delayed Synaptic Plasticity

Title Learning with Delayed Synaptic Plasticity
Authors Anil Yaman, Giovanni Iacca, Decebal Constantin Mocanu, George Fletcher, Mykola Pechenizkiy
Abstract The plasticity property of biological neural networks allows them to learn and optimize their behavior by changing their configuration. Inspired by biology, plasticity can be modeled in artificial neural networks by using Hebbian learning rules, i.e. rules that update synapses based on the neuron activations and reinforcement signals. However, the distal reward problem arises when the reinforcement signals are not available immediately after each network output, making it difficult to associate them with the neuron activations that contributed to receiving them. In this work, we extend Hebbian plasticity rules to allow learning in distal reward cases. We propose the use of neuron activation traces (NATs), additional data storage in each synapse that keeps track of neuron activations. Delayed reinforcement signals are provided after each episode, relative to the network’s performance during the previous episode. We employ genetic algorithms to evolve delayed synaptic plasticity (DSP) rules that perform synaptic updates based on NATs and delayed reinforcement signals. We compare DSP with an analogous hill climbing (HC) algorithm that does not incorporate the domain knowledge introduced with the NATs, and show that the synaptic updates performed by the DSP rules achieve more effective training performance than the HC algorithm.
Tasks
Published 2019-03-22
URL http://arxiv.org/abs/1903.09393v2
PDF http://arxiv.org/pdf/1903.09393v2.pdf
PWC https://paperswithcode.com/paper/learning-with-delayed-synaptic-plasticity
Repo
Framework
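The trace mechanism can be sketched in a few lines: each synapse accumulates a decaying record of pre/post co-activations during the episode, and the delayed reinforcement signal, delivered only at episode end, scales the accumulated trace. Note the paper evolves the update rule itself with a genetic algorithm; the hand-written rule below is only a stand-in to show where the NATs fit:

```python
import numpy as np

def episode_with_traces(w, inputs, reward, eta=0.1, decay=0.9):
    """One episode of Hebbian learning with neuron activation traces (NATs).
    `w` is the weight matrix, `inputs` the episode's input sequence, and
    `reward` the delayed reinforcement signal delivered at episode end."""
    trace = np.zeros_like(w)
    for x in inputs:
        y = np.tanh(w @ x)                       # post-synaptic activation
        trace = decay * trace + np.outer(y, x)   # remember who fired together
    return w + eta * reward * trace              # delayed credit assignment
```

Because the trace decays, recent co-activations dominate, which is one simple answer to the question of which activations "deserve" the delayed reward.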

Towards closing the gap between the theory and practice of SVRG

Title Towards closing the gap between the theory and practice of SVRG
Authors Othmane Sebbouh, Nidham Gazagnadou, Samy Jelassi, Francis Bach, Robert M. Gower
Abstract Among the very first variance reduced stochastic methods for solving the empirical risk minimization problem was the SVRG method (Johnson & Zhang 2013). SVRG is an inner-outer loop based method, where in the outer loop a reference full gradient is evaluated, after which $m \in \mathbb{N}$ steps of an inner loop are executed where the reference gradient is used to build a variance reduced estimate of the current gradient. The simplicity of the SVRG method and its analysis has led to multiple extensions and variants for even non-convex optimization. Yet there is a significant gap between the parameter settings that the analysis suggests and what is known to work well in practice. Our first contribution is that we take several steps towards closing this gap. In particular, the current analysis shows that $m$ should be of the order of the condition number so that the resulting method has a favorable complexity. Yet in practice $m = n$ works well regardless of the condition number, where $n$ is the number of data points. Furthermore, the current analysis shows that the inner iterates have to be reset using averaging after every outer loop. Yet in practice SVRG works best when the inner iterates are updated continuously and not reset. We provide an analysis of these aforementioned practical settings and show that they achieve the same favorable complexity as the original analysis (with slightly better constants). Our second contribution is to provide a more general analysis than had been previously done by using arbitrary sampling, which allows us to analyse virtually all forms of mini-batching through a single theorem. Since our setup and analysis reflect what is done in practice, we are able to set the parameters such as the mini-batch size and step size using our theory in a way that produces a more efficient algorithm in practice, as we show in extensive numerical experiments.
Tasks
Published 2019-07-31
URL https://arxiv.org/abs/1908.02725v1
PDF https://arxiv.org/pdf/1908.02725v1.pdf
PWC https://paperswithcode.com/paper/towards-closing-the-gap-between-the-theory
Repo
Framework
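The inner-outer loop structure, with the practical settings the paper analyses (inner-loop length m = n, inner iterates carried over rather than reset by averaging), can be sketched for a one-dimensional objective as follows. This is a minimal sketch of plain SVRG, not the arbitrary-sampling generalization:

```python
import numpy as np

def svrg(grad_i, w0, n, outer=40, lr=0.1, seed=0):
    """SVRG for min_w (1/n) sum_i f_i(w). `grad_i(w, i)` returns f_i'(w).
    Practical settings: m = n inner steps, no reset between outer loops."""
    rng = np.random.default_rng(seed)
    w = float(w0)
    for _ in range(outer):
        w_ref = w                                   # snapshot point
        full_grad = sum(grad_i(w_ref, i) for i in range(n)) / n
        for _ in range(n):                          # m = n inner steps
            i = int(rng.integers(n))
            # variance-reduced gradient estimate: unbiased, and its
            # variance shrinks as w approaches w_ref
            g = grad_i(w, i) - grad_i(w_ref, i) + full_grad
            w -= lr * g
    return w
```

On the toy least-squares problem f_i(w) = (w - b_i)^2 / 2 the correction term cancels the noise exactly and the iterates converge geometrically to the mean of b.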

Haploid-Diploid Evolution: Nature’s Memetic Algorithm

Title Haploid-Diploid Evolution: Nature’s Memetic Algorithm
Authors Michail-Antisthenis Tsompanas, Larry Bull, Andrew Adamatzky, Igor Balaz
Abstract This paper uses a recent explanation for the fundamental haploid-diploid lifecycle of eukaryotic organisms to present a new memetic algorithm that differs from all previous known work using diploid representations. A form of the Baldwin effect has been identified as inherent to the evolutionary mechanisms of eukaryotes and a simplified version is presented here which maintains such behaviour. Using a well-known abstract tuneable model, it is shown that varying fitness landscape ruggedness varies the benefit of haploid-diploid algorithms. Moreover, the methodology is applied to optimise the targeted delivery of a therapeutic compound utilizing nano-particles to cancerous tumour cells with the multicellular simulator PhysiCell.
Tasks
Published 2019-11-13
URL https://arxiv.org/abs/1911.07302v1
PDF https://arxiv.org/pdf/1911.07302v1.pdf
PWC https://paperswithcode.com/paper/haploid-diploid-evolution-natures-memetic
Repo
Framework
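One generation of a haploid-diploid memetic algorithm can be sketched on bit strings: each individual carries two haplotypes; meiosis builds a haploid gamete by uniform crossover of the pair; a single greedy bit-flip acts as the Baldwinian local-search refinement; and gametes pair up to form the next diploid generation. This is a toy on OneMax-style fitness, not the simplified lifecycle or PhysiCell application from the paper:

```python
import random

def haploid_diploid_step(pop, fitness, rng):
    """One generation: pop is a list of (haplotype1, haplotype2) pairs of
    equal-length bit lists; fitness scores a single haploid bit list."""
    gametes = []
    for h1, h2 in pop:
        g = [rng.choice([a, b]) for a, b in zip(h1, h2)]        # meiosis
        flips = [g[:i] + [1 - g[i]] + g[i + 1:] for i in range(len(g))]
        cand = max(flips, key=fitness)                          # local search
        gametes.append(cand if fitness(cand) > fitness(g) else g)
    rng.shuffle(gametes)                                        # random mating
    n = len(gametes)
    return [(gametes[i], gametes[(i + 1) % n]) for i in range(n)]
```

The diploid pairing preserves variation that pure haploid hill climbing would discard, which is where the landscape-ruggedness dependence studied in the paper comes from.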

MaMiC: Macro and Micro Curriculum for Robotic Reinforcement Learning

Title MaMiC: Macro and Micro Curriculum for Robotic Reinforcement Learning
Authors Manan Tomar, Akhil Sathuluri, Balaraman Ravindran
Abstract Shaping in humans and animals has been shown to be a powerful tool for learning complex tasks as compared to learning in a randomized fashion. This makes the problem less complex and enables one to solve the easier sub-task at hand first. Generating a curriculum for such guided learning involves subjecting the agent to easier goals first, and then gradually increasing their difficulty. This paper takes a similar direction and proposes a dual curriculum scheme for solving robotic manipulation tasks with sparse rewards, called MaMiC. It includes a macro curriculum scheme which divides the task into multiple sub-tasks followed by a micro curriculum scheme which enables the agent to learn between such discovered sub-tasks. We show how combining macro and micro curriculum strategies helps in overcoming major exploratory constraints considered in robot manipulation tasks without having to engineer any complex rewards. We also illustrate the meaning of the individual curricula and how they can be used independently based on the task. The performance of such a dual curriculum scheme is analyzed on the Fetch environments.
Tasks
Published 2019-05-17
URL https://arxiv.org/abs/1905.07193v1
PDF https://arxiv.org/pdf/1905.07193v1.pdf
PWC https://paperswithcode.com/paper/mamic-macro-and-micro-curriculum-for-robotic
Repo
Framework

Distributionally Robust Reinforcement Learning

Title Distributionally Robust Reinforcement Learning
Authors Elena Smirnova, Elvis Dohmatob, Jérémie Mary
Abstract Real-world applications require RL algorithms to act safely. During the learning process, it is likely that the agent executes sub-optimal actions that may lead to unsafe/poor states of the system. Exploration is particularly brittle in high-dimensional state/action spaces due to the increased number of low-performing actions. In this work, we consider risk-averse exploration in the approximate RL setting. To ensure safety during learning, we propose a distributionally robust policy iteration scheme that provides a lower-bound guarantee on state values. Our approach induces a dynamic level of risk to prevent poor decisions while preserving convergence to the optimal policy. Our formulation results in an efficient algorithm that amounts to a simple re-weighting of policy actions in the standard policy iteration scheme. We extend our approach to continuous state/action spaces and present a practical algorithm, distributionally robust soft actor-critic, that implements a different exploration strategy: it acts conservatively in the short term and explores optimistically in the long run. We provide promising experimental results on continuous control tasks.
Tasks Continuous Control, Q-Learning
Published 2019-02-23
URL https://arxiv.org/abs/1902.08708v2
PDF https://arxiv.org/pdf/1902.08708v2.pdf
PWC https://paperswithcode.com/paper/distributionally-robust-reinforcement
Repo
Framework
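The "simple re-weighting of policy actions" idea can be sketched with one pessimistic policy-evaluation backup: instead of the plain expectation of action values under the policy, probability mass is shifted toward low action values via a soft-min, so the resulting state values lower-bound the plain expectation. This is an illustrative sketch of the risk-averse re-weighting idea under invented parameter names (`alpha` as the risk temperature), not the paper's exact scheme:

```python
import numpy as np

def pessimistic_backup(V, P, R, pi, gamma=0.9, alpha=0.2):
    """One policy-evaluation backup with pessimistic action re-weighting.
    V: (nS,) state values; P: (nS, nA, nS) transitions; R: (nS, nA)
    rewards; pi: (nS, nA) policy probabilities."""
    nS, nA = R.shape
    V_new = np.empty(nS)
    for s in range(nS):
        q = R[s] + gamma * P[s] @ V          # action values, shape (nA,)
        w = pi[s] * np.exp(-q / alpha)       # soft-min re-weighting of pi
        w /= w.sum()
        V_new[s] = w @ q                     # pessimistic state value
    return V_new
```

As alpha grows the weights approach the policy itself (standard evaluation); as alpha shrinks they concentrate on the worst action, giving increasingly conservative value estimates, which is the dynamic risk level the abstract describes.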