April 2, 2020

2925 words · 14 min read

Paper Group ANR 269

Learning to Select Base Classes for Few-shot Classification. Entangled Watermarks as a Defense against Model Extraction. Assessing Robustness of Deep learning Methods in Dermatological Workflow. Predictive Business Process Monitoring via Generative Adversarial Nets: The Case of Next Event Prediction. Evolution of Scikit-Learn Pipelines with Dynamic …

Learning to Select Base Classes for Few-shot Classification

Title Learning to Select Base Classes for Few-shot Classification
Authors Linjun Zhou, Peng Cui, Xu Jia, Shiqiang Yang, Qi Tian
Abstract Few-shot learning has attracted intensive research attention in recent years. Many methods have been proposed to generalize a model learned from provided base classes to novel classes, but no previous work studies how to select base classes, or even whether different base classes result in different generalization performance of the learned model. In this paper, we utilize a simple yet effective measure, the Similarity Ratio, as an indicator of the generalization performance of a few-shot model. We then formulate the base class selection problem as a submodular optimization problem over the Similarity Ratio. We further provide a theoretical analysis of the optimization lower bounds of different optimization methods, which can be used to identify the most appropriate algorithm for different experimental settings. Extensive experiments on ImageNet, Caltech-256, and CUB-200-2011 demonstrate that our proposed method is effective in selecting a better base dataset.
Tasks Few-Shot Learning
Published 2020-04-01
URL https://arxiv.org/abs/2004.00315v1
PDF https://arxiv.org/pdf/2004.00315v1.pdf
PWC https://paperswithcode.com/paper/learning-to-select-base-classes-for-few-shot
Repo
Framework
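
Since the paper frames base class selection as submodular maximization over the Similarity Ratio, the canonical solver is a greedy algorithm driven by marginal gains. Below is a minimal sketch of such a greedy selection with a toy stand-in objective; the coverage function, prototype features, and all names are illustrative assumptions, not the paper's actual Similarity Ratio.

```python
# Greedy selection of base classes under a (sub)modular objective.
# The objective here is a toy facility-location-style coverage score,
# NOT the paper's Similarity Ratio.
import numpy as np

def greedy_select(candidates, objective, k):
    """Greedily pick k base classes maximizing a (sub)modular objective."""
    selected = []
    remaining = list(candidates)
    for _ in range(k):
        gains = [objective(selected + [c]) - objective(selected) for c in remaining]
        best = remaining[int(np.argmax(gains))]
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy stand-in objective: how well the selected base-class prototypes cover
# the novel-class prototypes (best cosine similarity per novel class).
rng = np.random.default_rng(0)
base_protos = {i: rng.normal(size=16) for i in range(20)}   # candidate base classes
novel_protos = rng.normal(size=(5, 16))                      # novel-class prototypes

def coverage(subset):
    if not subset:
        return 0.0
    sims = np.stack([
        novel_protos @ base_protos[i] /
        (np.linalg.norm(novel_protos, axis=1) * np.linalg.norm(base_protos[i]))
        for i in subset
    ])
    return float(sims.max(axis=0).sum())  # best-covered similarity per novel class

print(greedy_select(base_protos.keys(), coverage, k=5))
```

For monotone submodular objectives, this greedy rule carries the standard (1 − 1/e) approximation guarantee, which is the flavour of lower-bound analysis the abstract alludes to.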

Entangled Watermarks as a Defense against Model Extraction

Title Entangled Watermarks as a Defense against Model Extraction
Authors Hengrui Jia, Christopher A. Choquette-Choo, Nicolas Papernot
Abstract Machine learning involves expensive data collection and training procedures. Model owners may be concerned that valuable intellectual property can be leaked if adversaries mount model extraction attacks. Because it is difficult to defend against model extraction without sacrificing significant prediction accuracy, watermarking leverages unused model capacity to have the model overfit to outlier input-output pairs, which are not sampled from the task distribution and are only known to the defender. The defender then demonstrates knowledge of the input-output pairs to claim ownership of the model at inference. The effectiveness of watermarks remains limited because they are distinct from the task distribution and can thus be easily removed through compression or other forms of knowledge transfer. We introduce Entangled Watermarking Embeddings (EWE). Our approach encourages the model to learn common features for classifying data that is sampled from the task distribution, but also data that encodes watermarks. An adversary attempting to remove watermarks that are entangled with legitimate data is also forced to sacrifice performance on legitimate data. Experiments on MNIST, Fashion-MNIST, and Google Speech Commands validate that the defender can claim model ownership with 95% confidence after less than 10 queries to the stolen copy, at a modest cost of 1% accuracy in the defended model’s performance.
Tasks Transfer Learning
Published 2020-02-27
URL https://arxiv.org/abs/2002.12200v1
PDF https://arxiv.org/pdf/2002.12200v1.pdf
PWC https://paperswithcode.com/paper/entangled-watermarks-as-a-defense-against
Repo
Framework
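
The ownership claim in the abstract ("95% confidence after less than 10 queries") is, in essence, a hypothesis test: if the suspect model were unrelated to the watermark, agreement with the secret watermark labels would occur only at chance level. A minimal sketch of that verification step follows; the class count and function names are assumptions, and the EWE training procedure itself (entangling watermark and task features) is not shown.

```python
# Binomial test for the ownership claim: how surprising is it that a suspect
# model returns the defender's secret watermark label this often?
from math import comb

def ownership_pvalue(matches, queries, num_classes=10):
    """P(observing >= matches agreements under the null hypothesis that the
    suspect model is independent of the watermark)."""
    p0 = 1.0 / num_classes  # chance agreement for a 10-class task like MNIST
    return sum(comb(queries, k) * p0**k * (1 - p0)**(queries - k)
               for k in range(matches, queries + 1))

# e.g. 8 of 10 watermark queries return the secret label:
print(ownership_pvalue(matches=8, queries=10))  # ~3.7e-7, far below 0.05
```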

Assessing Robustness of Deep learning Methods in Dermatological Workflow

Title Assessing Robustness of Deep learning Methods in Dermatological Workflow
Authors Sourav Mishra, Subhajit Chaudhury, Hideaki Imaizumi, Toshihiko Yamasaki
Abstract This paper evaluates the suitability of current deep learning methods for the clinical workflow, focusing on dermatology. Although deep learning methods have been used in attempts to reach dermatologist-level accuracy for several individual conditions, they have not been rigorously tested on common clinical complaints. Most projects involve data acquired under well-controlled laboratory conditions, which may not reflect regular clinical evaluation, where image quality is not always ideal. We test the robustness of deep learning methods by simulating non-ideal characteristics on user-submitted images of ten classes of diseases. Under these imitated conditions, we find that overall accuracy drops and individual predictions change significantly in many cases, despite robust training.
Tasks
Published 2020-01-15
URL https://arxiv.org/abs/2001.05878v2
PDF https://arxiv.org/pdf/2001.05878v2.pdf
PWC https://paperswithcode.com/paper/assessing-robustness-of-deep-learning-methods
Repo
Framework
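
A hedged sketch of the kind of robustness probe described above: apply non-ideal acquisition artefacts (blur, poor lighting) to test images and measure how often the prediction flips. The specific perturbations, their strengths, and the classifier interface are assumptions, not the paper's exact protocol.

```python
# Simulate a poorly lit, slightly out-of-focus phone photo and measure
# prediction flips relative to the clean image.
from PIL import Image, ImageFilter, ImageEnhance

def degrade(img: Image.Image, blur=2.0, brightness=0.6, contrast=0.7):
    """Apply blur, dimming, and reduced contrast to mimic non-ideal capture."""
    out = img.filter(ImageFilter.GaussianBlur(radius=blur))
    out = ImageEnhance.Brightness(out).enhance(brightness)
    return ImageEnhance.Contrast(out).enhance(contrast)

def flip_rate(predict, images):
    """Fraction of images whose predicted class changes after degradation."""
    flips = sum(predict(im) != predict(degrade(im)) for im in images)
    return flips / max(len(images), 1)

# Usage (with any classifier exposing a `predict(PIL.Image) -> label` callable):
# print(flip_rate(my_model_predict, test_images))
```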

Predictive Business Process Monitoring via Generative Adversarial Nets: The Case of Next Event Prediction

Title Predictive Business Process Monitoring via Generative Adversarial Nets: The Case of Next Event Prediction
Authors Farbod Taymouri, Marcello La Rosa, Sarah Erfani, Zahra Dasht Bozorgi, Ilya Verenich
Abstract Predictive process monitoring aims to predict future characteristics of an ongoing process case, such as the case outcome or the remaining time. Recently, several predictive process monitoring methods based on deep learning, such as Long Short-Term Memory or Convolutional Neural Networks, have been proposed to address the problem of next event prediction. However, due to insufficient training data or sub-optimal network configuration and architecture, these approaches do not generalize well to the problem at hand. This paper proposes a novel adversarial training framework to address this shortcoming, based on an adaptation of Generative Adversarial Networks (GANs) to the realm of sequential temporal data. Training works by pitting one neural network against the other in a two-player game (hence the adversarial nature), which leads to predictions that are indistinguishable from the ground truth. We formally show that the worst-case accuracy of the proposed approach is at least equal to the accuracy achieved in non-adversarial settings. The experimental evaluation shows that the approach systematically outperforms all baselines in terms of both accuracy and earliness of prediction, despite using a simple network architecture and a naive feature encoding. Moreover, the approach is more robust, as its accuracy is not affected by fluctuations in case length.
Tasks
Published 2020-03-25
URL https://arxiv.org/abs/2003.11268v2
PDF https://arxiv.org/pdf/2003.11268v2.pdf
PWC https://paperswithcode.com/paper/predictive-business-process-monitoring-via
Repo
Framework
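
A minimal PyTorch sketch of the adversarial set-up the abstract describes: an LSTM generator predicts the next event of a case prefix while a discriminator tries to distinguish predicted continuations from ground-truth ones. Event vocabulary size, hidden dimensions, feature encoding, and losses are assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

NUM_EVENTS, HID = 8, 32

class Generator(nn.Module):
    """Predicts a next-event distribution from an event prefix."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(NUM_EVENTS, HID)
        self.rnn = nn.LSTM(HID, HID, batch_first=True)
        self.out = nn.Linear(HID, NUM_EVENTS)
    def forward(self, prefix):                 # prefix: (B, T) event ids
        h, _ = self.rnn(self.emb(prefix))
        return torch.softmax(self.out(h[:, -1]), dim=-1)

class Discriminator(nn.Module):
    """Scores whether a (prefix, next-event) pair looks like ground truth."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(NUM_EVENTS, HID)
        self.rnn = nn.LSTM(HID, HID, batch_first=True)
        self.clf = nn.Sequential(nn.Linear(HID + NUM_EVENTS, HID), nn.ReLU(),
                                 nn.Linear(HID, 1))
    def forward(self, prefix, next_dist):      # next_dist: (B, NUM_EVENTS)
        h, _ = self.rnn(self.emb(prefix))
        return self.clf(torch.cat([h[:, -1], next_dist], dim=-1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

prefix = torch.randint(0, NUM_EVENTS, (16, 5))            # toy prefixes
truth = nn.functional.one_hot(torch.randint(0, NUM_EVENTS, (16,)),
                              NUM_EVENTS).float()          # toy next events

for _ in range(3):                                         # a few adversarial steps
    # Discriminator: real (prefix, true next event) vs fake (prefix, predicted)
    d_loss = bce(D(prefix, truth), torch.ones(16, 1)) + \
             bce(D(prefix, G(prefix).detach()), torch.zeros(16, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: make its predictions indistinguishable from ground truth
    g_loss = bce(D(prefix, G(prefix)), torch.ones(16, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```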

Evolution of Scikit-Learn Pipelines with Dynamic Structured Grammatical Evolution

Title Evolution of Scikit-Learn Pipelines with Dynamic Structured Grammatical Evolution
Authors Filipe Assunção, Nuno Lourenço, Bernardete Ribeiro, Penousal Machado
Abstract The deployment of Machine Learning (ML) models is a difficult and time-consuming job that comprises a series of sequential and correlated tasks, ranging from data pre-processing and the design and extraction of features to the choice of the ML algorithm and its parameterisation. The task is even more challenging considering that the design of features is in many cases problem-specific and thus requires domain expertise. To overcome these limitations, Automated Machine Learning (AutoML) methods seek to automate, with little or no human intervention, the design of pipelines, i.e., to automate the selection of the sequence of methods that have to be applied to the raw data. These methods have the potential to enable non-expert users to use ML and to provide expert users with solutions that they would be unlikely to consider. In particular, this paper describes AutoML-DSGE, a novel grammar-based framework that adapts Dynamic Structured Grammatical Evolution (DSGE) to the evolution of Scikit-Learn classification pipelines. The experimental results include comparing AutoML-DSGE to another grammar-based AutoML framework, Resilient Classification Pipeline Evolution (RECIPE), and show that the average performance of the classification pipelines generated by AutoML-DSGE is always superior to the average performance of RECIPE; the differences are statistically significant in 3 of the 10 datasets used.
Tasks AutoML
Published 2020-04-01
URL https://arxiv.org/abs/2004.00307v1
PDF https://arxiv.org/pdf/2004.00307v1.pdf
PWC https://paperswithcode.com/paper/evolution-of-scikit-learn-pipelines-with
Repo
Framework
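
Grammar-based AutoML derives a pipeline from a grammar, with the evolutionary algorithm searching over derivation choices. The sketch below shows only the mapping from a list of choice indices to a Scikit-Learn pipeline under a toy grammar; the actual DSGE genotype, grammar, and evolutionary loop are not reproduced here.

```python
# Map a list of grammar choices (one index per non-terminal) to a pipeline.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# A toy "grammar": each non-terminal offers a list of alternatives.
GRAMMAR = {
    "scaler":     [("std", StandardScaler), ("minmax", MinMaxScaler), ("none", None)],
    "reducer":    [("pca", lambda: PCA(n_components=0.95)), ("none", None)],
    "classifier": [("rf", RandomForestClassifier), ("logreg", LogisticRegression)],
}

def genotype_to_pipeline(genotype):
    """genotype: one choice index per non-terminal, e.g. [0, 0, 1]."""
    steps = []
    for (nt, options), idx in zip(GRAMMAR.items(), genotype):
        name, factory = options[idx % len(options)]
        if factory is not None:
            steps.append((name, factory()))
    return Pipeline(steps)

# Example derivation: StandardScaler -> PCA -> LogisticRegression
pipe = genotype_to_pipeline([0, 0, 1])
print(pipe)
```

An evolutionary search would score each genotype by cross-validated accuracy of the resulting pipeline and evolve the choice lists accordingly.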

MGCN: Descriptor Learning using Multiscale GCNs

Title MGCN: Descriptor Learning using Multiscale GCNs
Authors Yiqun Wang, Jing Ren, Dong-Ming Yan, Jianwei Guo, Xiaopeng Zhang, Peter Wonka
Abstract We propose a novel framework for computing descriptors for characterizing points on three-dimensional surfaces. First, we present a new non-learned feature that uses graph wavelets to decompose the Dirichlet energy on a surface. We call this new feature wavelet energy decomposition signature (WEDS). Second, we propose a new multiscale graph convolutional network (MGCN) to transform a non-learned feature to a more discriminative descriptor. Our results show that the new descriptor WEDS is more discriminative than the current state-of-the-art non-learned descriptors and that the combination of WEDS and MGCN is better than the state-of-the-art learned descriptors. An important design criterion for our descriptor is the robustness to different surface discretizations including triangulations with varying numbers of vertices. Our results demonstrate that previous graph convolutional networks significantly overfit to a particular resolution or even a particular triangulation, but MGCN generalizes well to different surface discretizations. In addition, MGCN is compatible with previous descriptors and it can also be used to improve the performance of other descriptors, such as the heat kernel signature, the wave kernel signature, or the local point signature.
Tasks
Published 2020-01-28
URL https://arxiv.org/abs/2001.10472v1
PDF https://arxiv.org/pdf/2001.10472v1.pdf
PWC https://paperswithcode.com/paper/mgcn-descriptor-learning-using-multiscale
Repo
Framework
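
As background for the WEDS feature, the Dirichlet energy of a signal f on a graph with Laplacian L is f^T L f, i.e. the sum of squared differences across edges. The toy computation below uses a plain combinatorial Laplacian; the paper's surface discretization weights and graph-wavelet decomposition are not reproduced, so this is illustration only.

```python
# Dirichlet energy f^T L f of a vertex signal on a toy surface graph.
import numpy as np
import scipy.sparse as sp

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]          # toy surface graph
n = 4
rows, cols = zip(*(edges + [(j, i) for i, j in edges]))
A = sp.coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n)).tocsr()
L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A   # combinatorial Laplacian L = D - A

f = np.array([0.0, 1.0, 0.5, 2.0])                    # a scalar signal on the vertices
dirichlet_energy = float(f @ (L @ f))                  # sums (f_i - f_j)^2 over edges
print(dirichlet_energy)                                # 3.75 for this toy example
```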

Cyclic Boosting – an explainable supervised machine learning algorithm

Title Cyclic Boosting – an explainable supervised machine learning algorithm
Authors Felix Wick, Ulrich Kerzel, Michael Feindt
Abstract Supervised machine learning algorithms have seen spectacular advances and have surpassed human-level performance in a wide range of specific applications. However, using complex ensemble or deep learning algorithms typically results in black-box models, where the path leading to individual predictions cannot be followed in detail. To address this issue, we propose the novel “Cyclic Boosting” machine learning algorithm, which efficiently performs accurate regression and classification tasks while allowing a detailed understanding of how each individual prediction was made.
Tasks
Published 2020-02-09
URL https://arxiv.org/abs/2002.03425v1
PDF https://arxiv.org/pdf/2002.03425v1.pdf
PWC https://paperswithcode.com/paper/cyclic-boosting-an-explainable-supervised
Repo
Framework
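
One way to make each prediction decomposable, broadly in the spirit of the algorithm named above, is to build it from per-feature, per-bin multiplicative factors that are updated cyclically. The sketch below is written on that assumption and is not the paper's actual formulation; its link functions, smoothing, and exact update rules are omitted, so treat it purely as an illustration.

```python
# Cyclic multiplicative-factor fitting: prediction = global mean * product of
# one factor per feature bin, so every prediction decomposes into readable terms.
import numpy as np

def cyclic_fit(X_binned, y, n_iter=10):
    """X_binned: (n_samples, n_features) integer bin indices; y: targets >= 0."""
    n, d = X_binned.shape
    mu = y.mean()
    factors = [np.ones(X_binned[:, j].max() + 1) for j in range(d)]

    def predict(Xb):
        p = np.full(len(Xb), mu)
        for j in range(d):
            p *= factors[j][Xb[:, j]]
        return p

    for _ in range(n_iter):
        for j in range(d):                      # cycle over features
            pred = predict(X_binned)
            for b in range(len(factors[j])):    # per-bin multiplicative update
                mask = X_binned[:, j] == b
                if mask.any() and pred[mask].sum() > 0:
                    factors[j][b] *= y[mask].sum() / pred[mask].sum()
    return mu, factors, predict

# Toy usage: two binned features driving a multiplicative target
rng = np.random.default_rng(1)
Xb = rng.integers(0, 3, size=(200, 2))
y = 5.0 * (1 + 0.5 * Xb[:, 0]) * (1 + 0.2 * Xb[:, 1]) + rng.normal(0, 0.1, 200)
mu, factors, predict = cyclic_fit(Xb, y)
print(np.abs(predict(Xb) - y).mean())          # small residual on this toy data
```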

Learning to Collide: An Adaptive Safety-Critical Scenarios Generating Method

Title Learning to Collide: An Adaptive Safety-Critical Scenarios Generating Method
Authors Wenhao Ding, Minjun Xu, Ding Zhao
Abstract Long-tail and rare event problems become crucial when autonomous driving algorithms are applied in the real world. For the purpose of evaluating systems in challenging settings, we propose a generative framework to create safety-critical scenarios for evaluating specific task algorithms. We first represent the traffic scenarios with a series of autoregressive building blocks and generate diverse scenarios by sampling from the joint distribution of these blocks. We then train the generative model as an agent (or a generator) to investigate the risky distribution parameters for a given driving algorithm being evaluated. We regard the task algorithm as an environment (or a discriminator) that returns a reward to the agent when a risky scenario is generated. Through the experiments conducted on several scenarios in the simulation, we demonstrate that the proposed framework generates safety-critical scenarios more efficiently than grid search or human design methods. Another advantage of this method is its adaptiveness to the routes and parameters.
Tasks Autonomous Driving
Published 2020-03-02
URL https://arxiv.org/abs/2003.01197v1
PDF https://arxiv.org/pdf/2003.01197v1.pdf
PWC https://paperswithcode.com/paper/learning-to-collide-an-adaptive-safety
Repo
Framework
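
The abstract casts scenario generation as an agent that samples scenario parameters and is rewarded when the evaluated driving algorithm fails. A minimal REINFORCE-style sketch of that loop follows; the risk function is a stub stand-in for a driving simulator, and the parameterization and update rule are assumptions rather than the paper's method.

```python
# Nudge a scenario-parameter distribution toward regions where the task
# algorithm fails, using a score-function (REINFORCE-style) update.
import numpy as np

rng = np.random.default_rng(0)
mean, log_std = np.zeros(2), np.zeros(2)       # e.g. (initial speed, crossing angle)

def risk(params):
    """Stub 'environment': returns 1 if the sampled scenario is safety-critical."""
    target = np.array([1.5, -0.7])              # hidden risky region, for the demo only
    return float(np.linalg.norm(params - target) < 0.5)

lr = 0.05
for step in range(500):
    std = np.exp(log_std)
    sample = mean + std * rng.normal(size=2)    # sample a scenario
    reward = risk(sample)
    # score-function gradients of log N(sample; mean, std)
    grad_mean = (sample - mean) / std**2
    grad_log_std = ((sample - mean)**2 / std**2) - 1.0
    mean += lr * reward * grad_mean
    log_std += lr * reward * grad_log_std

print(mean)   # typically drifts toward the risky region as reward accumulates
```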

Constructing a variational family for nonlinear state-space models

Title Constructing a variational family for nonlinear state-space models
Authors Jarrad Courts, Christopher Renton, Thomas B. Schön, Adrian Wills
Abstract We consider the problem of maximum likelihood parameter estimation for nonlinear state-space models. This is an important but challenging problem, and the challenge stems from the intractable multidimensional integrals that must be solved in order to compute and maximise the likelihood. Here we present a new variational family in which variational inference is used in combination with tractable approximations of these integrals, resulting in a deterministic optimisation problem. Our developments also include a novel means of approximating the smoothed state distributions. We demonstrate our construction on several examples and show that it performs well compared to state-of-the-art methods on real datasets.
Tasks
Published 2020-02-07
URL https://arxiv.org/abs/2002.02620v1
PDF https://arxiv.org/pdf/2002.02620v1.pdf
PWC https://paperswithcode.com/paper/constructing-a-variational-family-for
Repo
Framework

DLSpec: A Deep Learning Task Exchange Specification

Title DLSpec: A Deep Learning Task Exchange Specification
Authors Abdul Dakkak, Cheng Li, Jinjun Xiong, Wen-Mei Hwu
Abstract Deep Learning (DL) innovations are being introduced at a rapid pace. However, the current lack of standard specification of DL tasks makes sharing, running, reproducing, and comparing these innovations difficult. To address this problem, we propose DLSpec, a model-, dataset-, software-, and hardware-agnostic DL specification that captures the different aspects of DL tasks. DLSpec has been tested by specifying and running hundreds of DL tasks.
Tasks
Published 2020-02-26
URL https://arxiv.org/abs/2002.11262v1
PDF https://arxiv.org/pdf/2002.11262v1.pdf
PWC https://paperswithcode.com/paper/dlspec-a-deep-learning-task-exchange
Repo
Framework

Statistical Inference in Heterogeneous Block Model

Title Statistical Inference in Heterogeneous Block Model
Authors Majid Noroozi, Marianna Pensky
Abstract There exist various types of network block models, such as the Stochastic Block Model (SBM), the Degree Corrected Block Model (DCBM), and the Popularity Adjusted Block Model (PABM). While this leads to a variety of choices, the block models do not have a nested structure. In addition, there is a substantial jump in the number of parameters from the DCBM to the PABM. The objective of this paper is the formulation of a hierarchy of block models that does not rely on arbitrary identifiability conditions, treats the SBM, the DCBM, and the PABM as particular cases with specific parameter values, and, in addition, allows a multitude of versions that are more complicated than the DCBM but have fewer unknown parameters than the PABM. The latter allows one to carry out clustering and estimation without preliminary testing to determine which block model is actually true.
Tasks
Published 2020-02-07
URL https://arxiv.org/abs/2002.02610v1
PDF https://arxiv.org/pdf/2002.02610v1.pdf
PWC https://paperswithcode.com/paper/statistical-inference-in-heterogeneous-block
Repo
Framework
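
For readers unfamiliar with the three models named above, the standard edge-probability parameterizations from the block-model literature are summarized below; the paper's unifying hierarchy and its parameterization are not reproduced here.

```latex
% Standard block-model edge probabilities (background, not the paper's hierarchy)
\[
\text{SBM:}\quad  \mathbb{P}(A_{ij}=1) = B_{z_i z_j}, \qquad
\text{DCBM:}\quad \mathbb{P}(A_{ij}=1) = \theta_i\,\theta_j\, B_{z_i z_j}, \qquad
\text{PABM:}\quad \mathbb{P}(A_{ij}=1) = \lambda_{i z_j}\,\lambda_{j z_i},
\]
% where $z_i$ is the community of node $i$, $B$ is a $K \times K$ block matrix,
% $\theta_i$ is a node-level degree parameter, and $\lambda_{ik}$ is the
% popularity of node $i$ toward community $k$.
```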

Pix2Shape – Towards Unsupervised Learning of 3D Scenes from Images using a View-based Representation

Title Pix2Shape – Towards Unsupervised Learning of 3D Scenes from Images using a View-based Representation
Authors Sai Rajeswar, Fahim Mannan, Florian Golemo, Jérôme Parent-Lévesque, David Vazquez, Derek Nowrouzezahrai, Aaron Courville
Abstract We infer and generate three-dimensional (3D) scene information from a single input image and without supervision. This problem is under-explored, with most prior work relying on supervision from, e.g., 3D ground truth, multiple images of a scene, image silhouettes, or key-points. We propose Pix2Shape, an approach that solves this problem with four components: (i) an encoder that infers the latent 3D representation from an image, (ii) a decoder that generates an explicit 2.5D surfel-based reconstruction of a scene from the latent code, (iii) a differentiable renderer that synthesizes a 2D image from the surfel representation, and (iv) a critic network trained to discriminate between images generated by the decoder-renderer and those from a training distribution. Pix2Shape can generate complex 3D scenes that scale with the view-dependent on-screen resolution, unlike representations that capture world-space resolution, i.e., voxels or meshes. We show that Pix2Shape learns a consistent scene representation in its encoded latent space and that the decoder can then be applied to this latent representation to synthesize the scene from a novel viewpoint. We evaluate Pix2Shape with experiments on the ShapeNet dataset as well as on a novel benchmark we developed, called 3D-IQTT, which evaluates models based on their ability to enable 3D spatial reasoning. Qualitative and quantitative evaluations demonstrate Pix2Shape’s ability to solve scene reconstruction, generation, and understanding tasks.
Tasks
Published 2020-03-23
URL https://arxiv.org/abs/2003.14166v1
PDF https://arxiv.org/pdf/2003.14166v1.pdf
PWC https://paperswithcode.com/paper/pix2shape-towards-unsupervised-learning-of-3d
Repo
Framework

Projective Preferential Bayesian Optimization

Title Projective Preferential Bayesian Optimization
Authors Petrus Mikkola, Milica Todorović, Jari Järvi, Patrick Rinke, Samuel Kaski
Abstract Bayesian optimization is an effective method for finding extrema of a black-box function. We propose a new type of Bayesian optimization for learning user preferences in high-dimensional spaces. The central assumption is that the underlying objective function cannot be evaluated directly, but instead a minimizer along a projection can be queried, which we call a projective preferential query. The form of the query allows for feedback that is natural for a human to give and that enables interaction. This is demonstrated in a user experiment in which the user feedback comes in the form of the optimal position and orientation of a molecule adsorbing to a surface. We demonstrate that our framework is able to find a global minimum of a high-dimensional black-box function, which is an infeasible task for existing preferential Bayesian optimization frameworks that are based on pairwise comparisons.
Tasks
Published 2020-02-08
URL https://arxiv.org/abs/2002.03113v1
PDF https://arxiv.org/pdf/2002.03113v1.pdf
PWC https://paperswithcode.com/paper/projective-preferential-bayesian-optimization
Repo
Framework
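
The distinguishing ingredient is the projective preferential query: instead of comparing two points, the user returns the best point along a one-dimensional projection. The sketch below simulates such a query against a toy objective; the Gaussian-process preference model and acquisition strategy of the paper are not reproduced, and all names are illustrative.

```python
# Simulated projective preferential query: return the minimizer of the
# objective along a line through a reference point.
import numpy as np

def toy_objective(x):                      # stand-in for the unknown black box
    return np.sum((x - np.array([0.3, -0.2, 0.7]))**2)

def projective_query(reference, direction, objective,
                     grid=np.linspace(-2, 2, 401)):
    """Return the point minimizing the objective along reference + t * direction."""
    direction = direction / np.linalg.norm(direction)
    candidates = reference[None, :] + grid[:, None] * direction[None, :]
    values = np.array([objective(c) for c in candidates])
    return candidates[np.argmin(values)]

rng = np.random.default_rng(0)
x = np.zeros(3)
for _ in range(5):                         # ask queries along random projections
    d = rng.normal(size=3)
    x = projective_query(x, d, toy_objective)
print(x)                                   # moves toward the optimum (0.3, -0.2, 0.7)
```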

Non-Parametric Learning of Lifted Restricted Boltzmann Machines

Title Non-Parametric Learning of Lifted Restricted Boltzmann Machines
Authors Navdeep Kaur, Gautam Kunapuli, Sriraam Natarajan
Abstract We consider the problem of discriminatively learning restricted Boltzmann machines in the presence of relational data. Unlike previous approaches that employ a rule learner (for structure learning) and a weight learner (for parameter learning) sequentially, we develop a gradient-boosted approach that performs both simultaneously. Our approach learns a set of weak relational regression trees, whose paths from root to leaf are conjunctive clauses and represent the structure, and whose leaf values represent the parameters. When the learned relational regression trees are transformed into a lifted RBM, its hidden nodes are precisely the conjunctive clauses derived from the relational regression trees. This leads to a more interpretable and explainable model. Our empirical evaluations clearly demonstrate this aspect, while displaying no loss in effectiveness of the learned models.
Tasks
Published 2020-01-09
URL https://arxiv.org/abs/2001.10070v1
PDF https://arxiv.org/pdf/2001.10070v1.pdf
PWC https://paperswithcode.com/paper/non-parametric-learning-of-lifted-restricted
Repo
Framework

Combinatory Chemistry: Towards a Simple Model of Emergent Evolution

Title Combinatory Chemistry: Towards a Simple Model of Emergent Evolution
Authors Germán Kruszewski, Tomas Mikolov
Abstract Researching the conditions for the emergence of life – not necessarily as it is, but as it could be – is one of the main goals of Artificial Life. Answering this question requires a model that can first explain the emergence of evolvable units, namely, structures that (1) preserve themselves in time, (2) self-reproduce, and (3) can tolerate a certain amount of variation when reproducing. To tackle this challenge, here we introduce Combinatory Chemistry, an Algorithmic Artificial Chemistry based on a simple computational paradigm named Combinatory Logic. The dynamics of this system comprise very few rules; the system is initialized with an elementary tabula rasa state and features conservation laws replicating natural resource constraints. Our experiments show that a single run of this dynamical system discovers a wide range of emergent patterns with no external intervention. All these structures rely on acquiring basic constituents from the environment and decomposing them in a process that is remarkably similar to biological metabolism. These patterns include autopoietic structures that maintain their organisation, recursive structures that grow in linear chains or binary-branching trees, and, most notably, patterns able to reproduce themselves, duplicating their number at each generation.
Tasks Artificial Life
Published 2020-03-17
URL https://arxiv.org/abs/2003.07916v1
PDF https://arxiv.org/pdf/2003.07916v1.pdf
PWC https://paperswithcode.com/paper/combinatory-chemistry-towards-a-simple-model
Repo
Framework
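
The computational substrate named above, Combinatory Logic, rewrites expressions with three standard rules: I x → x, K x y → x, and S x y z → x z (y z). A small sketch of this reduction on nested-tuple terms follows; the chemistry's population dynamics, resource conservation, and tabula rasa initialization are not modeled here.

```python
# One-step leftmost reduction of S/K/I combinator terms; application is a
# left-associated nested tuple, e.g. ((('S','K'),'K'),'a') means S K K a.
def reduce_once(term):
    if isinstance(term, tuple):
        f, x = term
        if isinstance(f, tuple):
            g, y = f
            if g == 'K':                                 # K y x -> y
                return y
            if isinstance(g, tuple) and g[0] == 'S':     # S a y x -> (a x) (y x)
                a = g[1]
                return ((a, x), (y, x))
        if f == 'I':                                     # I x -> x
            return x
        new_f = reduce_once(f)                           # otherwise reduce inside
        if new_f != f:
            return (new_f, x)
        return (f, reduce_once(x))
    return term

def normalize(term, max_steps=100):
    for _ in range(max_steps):
        nxt = reduce_once(term)
        if nxt == term:
            return term
        term = nxt
    return term

# S K K behaves like the identity: S K K a reduces to a
print(normalize(((('S', 'K'), 'K'), 'a')))   # -> 'a'
```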