January 30, 2020

2824 words 14 mins read

Paper Group ANR 387

Meta-Learning surrogate models for sequential decision making. Open Source Face Recognition Performance Evaluation Package. Deep Meta Functionals for Shape Representation. Preselection Bandits under the Plackett-Luce Model. Automata Learning: An Algebraic Approach. Narrowing Down XML Template Expansion and Schema Validation. Resource-aware Elastic …

Meta-Learning surrogate models for sequential decision making

Title Meta-Learning surrogate models for sequential decision making
Authors Alexandre Galashov, Jonathan Schwarz, Hyunjik Kim, Marta Garnelo, David Saxton, Pushmeet Kohli, S. M. Ali Eslami, Yee Whye Teh
Abstract We introduce a unified probabilistic framework for solving sequential decision making problems ranging from Bayesian optimisation to contextual bandits and reinforcement learning. This is accomplished by a probabilistic model-based approach that explains observed data while capturing predictive uncertainty during the decision making process. Crucially, this probabilistic model is chosen to be a Meta-Learning system that learns from a distribution of related problems, allowing data-efficient adaptation to a target task. As a suitable instantiation of this framework, we explore the use of Neural Processes due to statistical and computational desiderata. We apply our framework to a broad range of problem domains, such as control problems, recommender systems and adversarial attacks on RL agents, demonstrating an efficient and general black-box learning approach.
Tasks Bayesian Optimisation, Decision Making, Meta-Learning, Multi-Armed Bandits, Recommendation Systems
Published 2019-03-28
URL https://arxiv.org/abs/1903.11907v2
PDF https://arxiv.org/pdf/1903.11907v2.pdf
PWC https://paperswithcode.com/paper/meta-learning-surrogate-models-for-sequential
Repo
Framework
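
To make the framework concrete, here is a minimal sketch of the general pattern the abstract describes: a probabilistic surrogate that supplies predictive means and uncertainties, driving an upper-confidence-bound decision loop. It uses a simple Bayesian linear model with random features purely for illustration; it is not the paper's Neural Process, and all names and hyperparameters are invented.

```python
# Illustrative sketch only: a generic probabilistic surrogate (Bayesian linear
# regression on random Fourier features) driving a UCB-style decision loop.
# NOT the paper's Neural Process; names and hyperparameters are made up.
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # Unknown black-box objective to maximise (noisy observations).
    return np.sin(3 * x) + 0.1 * rng.normal(size=np.shape(x))

W = rng.normal(scale=2.0, size=(1, 32))
b = rng.uniform(0, 2 * np.pi, size=32)

def phi(x):
    # Random Fourier features give the linear surrogate some flexibility.
    return np.cos(np.atleast_2d(x).T @ W + b)

candidates = np.linspace(-2, 2, 200)
X_feat, y = np.empty((0, 32)), np.empty(0)
alpha, sigma2 = 1.0, 0.01  # prior precision and observation noise variance

for t in range(30):
    if len(y) == 0:
        x_next = rng.choice(candidates)
    else:
        # Posterior over the weights of the Bayesian linear surrogate.
        A = X_feat.T @ X_feat / sigma2 + alpha * np.eye(32)
        mean_w = np.linalg.solve(A, X_feat.T @ y / sigma2)
        Phi_c = phi(candidates)
        mu = Phi_c @ mean_w
        var = np.einsum("ij,ij->i", Phi_c @ np.linalg.inv(A), Phi_c)
        x_next = candidates[np.argmax(mu + 2.0 * np.sqrt(var))]  # UCB rule
    y_next = target(x_next)
    X_feat = np.vstack([X_feat, phi(x_next)])
    y = np.append(y, y_next)

print("best observed value:", y.max())
```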

Open Source Face Recognition Performance Evaluation Package

Title Open Source Face Recognition Performance Evaluation Package
Authors Xiang Xu, Ioannis A. Kakadiaris
Abstract Biometrics-related research has been accelerated significantly by deep learning technology. However, there are limited open-source resources to help researchers evaluate their deep learning-based biometrics algorithms efficiently, especially for face recognition tasks. In this work, we design and implement a lightweight, maintainable, scalable, generalizable, and extendable face recognition evaluation toolbox named FaRE that supports both online and offline evaluation to provide feedback to algorithm development and accelerate biometrics-related research. FaRE consists of a set of evaluation metric functions and provides various APIs for commonly used face recognition datasets, including LFW, CFP, UHDB31, and the IJB-series datasets, and it can easily be extended to other customized datasets. The package and the pre-trained baseline models will be released for public academic research use after obtaining university approval.
Tasks Face Recognition
Published 2019-01-27
URL http://arxiv.org/abs/1901.09447v1
PDF http://arxiv.org/pdf/1901.09447v1.pdf
PWC https://paperswithcode.com/paper/open-source-face-recognition-performance
Repo
Framework
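
The toolbox itself is not reproduced here, but the following hedged sketch shows the kind of verification metric such an evaluation package provides: accuracy at the best cosine-similarity threshold over labelled same/different face pairs (an LFW-style protocol). The function name and the random embeddings are placeholders, not FaRE's actual API.

```python
# Hedged sketch of a standard face-verification metric; not FaRE's actual API.
import numpy as np

def pair_verification_accuracy(emb_a, emb_b, same):
    """emb_a, emb_b: (N, d) embeddings of the two faces in each pair;
    same: (N,) boolean, True if the pair shows the same identity."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    sims = np.sum(a * b, axis=1)
    # Sweep candidate thresholds and keep the best accuracy (LFW-style protocol).
    best = 0.0
    for thr in np.unique(sims):
        acc = np.mean((sims >= thr) == same)
        best = max(best, acc)
    return best

# Toy usage with random embeddings standing in for a real model's output.
rng = np.random.default_rng(0)
emb_a = rng.normal(size=(100, 128))
emb_b = emb_a + 0.1 * rng.normal(size=(100, 128))  # first 50 pairs: same identity
emb_b[50:] = rng.normal(size=(50, 128))            # last 50 pairs: different identity
same = np.arange(100) < 50
print("verification accuracy:", pair_verification_accuracy(emb_a, emb_b, same))
```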

Deep Meta Functionals for Shape Representation

Title Deep Meta Functionals for Shape Representation
Authors Gidi Littwin, Lior Wolf
Abstract We present a new method for 3D shape reconstruction from a single image, in which a deep neural network directly maps an image to a vector of network weights. The network parametrized by these weights represents a 3D shape by classifying every point in the volume as either within or outside the shape. The new representation has virtually unlimited capacity and resolution, and can have an arbitrary topology. Our experiments show that it leads to more accurate shape inference from a 2D projection than the existing methods, including voxel-, silhouette-, and mesh-based methods. The code is available at: https://github.com/gidilittwin/Deep-Meta
Tasks
Published 2019-08-17
URL https://arxiv.org/abs/1908.06277v1
PDF https://arxiv.org/pdf/1908.06277v1.pdf
PWC https://paperswithcode.com/paper/deep-meta-functionals-for-shape
Repo
Framework
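
As a rough illustration of the idea (one network emits the weights of another network, which then classifies every 3D point as inside or outside the shape), here is a minimal, untrained sketch showing only the shapes and data flow; it is not the authors' architecture, and all sizes are arbitrary.

```python
# Minimal sketch of the general hypernetwork idea, not the authors' model: a
# "meta" network maps an image to the weights of a small MLP, and that MLP
# classifies 3D points as inside/outside the shape. Weights are random and
# untrained; only shapes and data flow are illustrated.
import numpy as np

rng = np.random.default_rng(0)
D_IMG, HID = 64 * 64, 32                 # flattened image size, MLP hidden width
N_W = 3 * HID + HID + HID * 1 + 1        # weight count of a 3 -> HID -> 1 MLP

# "Encoder": a single linear map from the image to the MLP weight vector.
W_enc = rng.normal(scale=0.01, size=(D_IMG, N_W))

def image_to_mlp_weights(image):
    w = image.reshape(-1) @ W_enc
    W1 = w[:3 * HID].reshape(3, HID)
    b1 = w[3 * HID:3 * HID + HID]
    W2 = w[3 * HID + HID:3 * HID + HID + HID].reshape(HID, 1)
    b2 = w[-1]
    return W1, b1, W2, b2

def occupancy(points, weights):
    """points: (N, 3). Returns probability that each point lies inside the shape."""
    W1, b1, W2, b2 = weights
    h = np.tanh(points @ W1 + b1)
    return (1.0 / (1.0 + np.exp(-(h @ W2 + b2)))).reshape(-1)

image = rng.random((64, 64))              # stand-in for an input image
points = rng.uniform(-1, 1, size=(5, 3))  # query points in the volume
print(occupancy(points, image_to_mlp_weights(image)))
```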

Preselection Bandits under the Plackett-Luce Model

Title Preselection Bandits under the Plackett-Luce Model
Authors Viktor Bengs, Eyke Hüllermeier
Abstract In this paper, we introduce the Preselection Bandit problem, in which the learner preselects a subset of arms (choice alternatives) for a user, who then chooses the final arm from this subset. The learner is not aware of the user’s preferences, but can learn them from observed choices. In our concrete setting, we allow these choices to be stochastic and model the user’s actions by means of the Plackett-Luce model. The learner’s main task is to preselect subsets that eventually lead to highly preferred choices. To formalize this goal, we introduce a reasonable notion of regret and derive lower bounds on the expected regret. Moreover, we propose algorithms for which the upper bound on expected regret matches the lower bound up to a logarithmic term of the time horizon.
Tasks
Published 2019-07-13
URL https://arxiv.org/abs/1907.06123v1
PDF https://arxiv.org/pdf/1907.06123v1.pdf
PWC https://paperswithcode.com/paper/preselection-bandits-under-the-plackett-luce
Repo
Framework
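
The following sketch illustrates only the feedback model assumed in the paper, not its regret-optimal algorithms: the learner preselects a subset of arms and the user picks one of them with probability proportional to its latent Plackett-Luce utility; the preselection rule here is deliberately naive.

```python
# Sketch of the Plackett-Luce feedback model only (not the authors' algorithm):
# the user picks an arm from the preselected subset with probability
# proportional to its latent utility; the learner keeps simple win counts.
import numpy as np

rng = np.random.default_rng(0)
true_utility = np.array([1.0, 0.4, 2.5, 0.8, 1.7])   # hidden PL parameters

def user_choice(subset):
    """Plackett-Luce choice: P(pick i | subset) is proportional to true_utility[i]."""
    v = true_utility[subset]
    return rng.choice(subset, p=v / v.sum())

shown = np.zeros(5)    # how often each arm was preselected
chosen = np.zeros(5)   # how often it was the user's final choice

for t in range(2000):
    subset = rng.choice(5, size=3, replace=False)   # naive uniform preselection
    pick = user_choice(subset)
    shown[subset] += 1
    chosen[pick] += 1

print("empirical choice rate when shown:", np.round(chosen / shown, 3))
print("normalised latent utilities     :", np.round(true_utility / true_utility.sum(), 3))
```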

Automata Learning: An Algebraic Approach

Title Automata Learning: An Algebraic Approach
Authors Henning Urbat, Lutz Schröder
Abstract We propose a generic categorical framework for learning unknown formal languages of various types (e.g. finite or infinite words, trees, weighted and nominal languages). Our approach is parametric in a monad T that represents the given type of languages and their recognizing algebraic structures. Using the concept of an automata presentation of T-algebras, we demonstrate that the task of learning a T-recognizable language can be reduced to learning an abstract form of automaton, which is achieved via a generalized version of Angluin’s L* algorithm. The algorithm is phrased in terms of categorically described extension steps; we provide a generic termination and complexity analysis based on a dedicated notion of finiteness. Our framework applies to structures like tree languages or omega-regular languages that were not within the scope of existing categorical accounts of automata learning. In addition, it yields new generic learning algorithms for several types of languages for which no such algorithms were previously known at all, including sorted languages, nominal languages with name binding, and cost functions.
Tasks
Published 2019-11-03
URL https://arxiv.org/abs/1911.00874v1
PDF https://arxiv.org/pdf/1911.00874v1.pdf
PWC https://paperswithcode.com/paper/automata-learning-an-algebraic-approach
Repo
Framework

Narrowing Down XML Template Expansion and Schema Validation

Title Narrowing Down XML Template Expansion and Schema Validation
Authors René Haberland
Abstract This work examines how much template instantiation can narrow down schema validation for XML documents. First, instantiation and validation are formalised, properties relevant to their practical meaning are probed, and an implementation is developed. Requirements for a unification are elaborated and a comparison is carried out. The semantics are formulated both denotationally and as rules, referring to the chosen data models. The formalisation makes it clearer that instantiation is adequately represented. Both semantics show that the rule sets for instantiation and validation cannot be fully unified; however, reuse of simplified code also simplifies unification. The implementation allows unification of both processes at the document level, and the validity of all implementations is guaranteed by a comprehensive test suite. Analysis shows that the minimal XML template language has regular-grammar properties, except for macros. An explanation is given of why filters and arrows are not ideal, particularly with respect to keeping a unified language variable and extensible, and recommendations for future language design are provided. Instantiation exposes a general gap in applications, as seen for instance with XSLT: the inability to express arbitrary functions in a schema is one example, and the expressiveness of the command language is another basic restriction. Useful unification constraints, such as typing each slot, turn out to be handy. In order to obtain the most flexibility out of command languages, adaptations are required; an alternative to introducing constraints is the effective construction of special NFAs. Comparison criteria, mainly concerning syntax and semantics, are introduced and the comparison is carried out accordingly. Despite its extensive syntax definitions, XSD was found to be weaker than RelaxNG or the XML template language; as a template language, the latter is considered universal.
Tasks
Published 2019-12-17
URL https://arxiv.org/abs/1912.10816v1
PDF https://arxiv.org/pdf/1912.10816v1.pdf
PWC https://paperswithcode.com/paper/narrowing-down-xml-template-expansion-and
Repo
Framework

Resource-aware Elastic Swap Random Forest for Evolving Data Streams

Title Resource-aware Elastic Swap Random Forest for Evolving Data Streams
Authors Diego Marrón, Eduard Ayguadé, José Ramon Herrero, Albert Bifet
Abstract Continual learning based on data stream mining deals with ubiquitous sources of Big Data arriving at high velocity and in real time. Adaptive Random Forest (ARF) is a popular ensemble method used for continual learning due to its simplicity in combining adaptive leveraging bagging with fast random Hoeffding trees. While the default ARF size provides competitive accuracy, it is usually over-provisioned, resulting in the use of additional classifiers that only increase CPU and memory consumption with marginal impact on overall accuracy. This paper presents Elastic Swap Random Forest (ESRF), a method for reducing the number of trees in the ARF ensemble while providing similar accuracy. ESRF extends ARF with two orthogonal components: 1) a swap component that splits learners into two sets based on their accuracy (only classifiers with the highest accuracy are used to make predictions); and 2) an elastic component for dynamically increasing or decreasing the number of classifiers in the ensemble. The experimental evaluation of ESRF and comparison with the original ARF shows how the two new components contribute to reducing the number of classifiers by up to one third while providing almost the same accuracy, resulting in speed-ups in terms of per-sample execution time close to 3x.
Tasks Continual Learning
Published 2019-05-14
URL https://arxiv.org/abs/1905.05881v1
PDF https://arxiv.org/pdf/1905.05881v1.pdf
PWC https://paperswithcode.com/paper/resource-aware-elastic-swap-random-forest-for
Repo
Framework
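
A heavily simplified sketch of the two components named in the abstract follows: a swap set in which only the currently most accurate learners vote, and an elastic rule that grows or shrinks the ensemble. The base learners are trivial majority-class counters rather than Hoeffding trees, and the elastic rule is a toy threshold, so this is an illustration of the structure, not the authors' method.

```python
# Simplified sketch of the swap and elastic components; not the authors'
# implementation. Base learners are trivial majority-class counters.
import numpy as np
from collections import Counter

class MajorityLearner:
    def __init__(self):
        self.counts, self.hits, self.seen = Counter(), 0, 0
    def predict(self):
        return self.counts.most_common(1)[0][0] if self.counts else 0
    def update(self, y):
        self.hits += int(self.predict() == y)   # prequential accuracy
        self.seen += 1
        self.counts[y] += 1
    def accuracy(self):
        return self.hits / max(self.seen, 1)

def esrf_like_stream(stream, n_active=5, n_total=10):
    learners = [MajorityLearner() for _ in range(n_total)]
    for y in stream:
        # Swap component: only the currently most accurate learners vote.
        active = sorted(learners, key=lambda l: l.accuracy(), reverse=True)[:n_active]
        votes = Counter(l.predict() for l in active)
        yield votes.most_common(1)[0][0]
        for l in learners:
            l.update(y)
        # Elastic component (toy rule): grow if the active set clearly beats
        # the rest, otherwise shrink back towards the active size.
        rest = [l for l in learners if l not in active]
        if rest and (np.mean([l.accuracy() for l in active])
                     > np.mean([l.accuracy() for l in rest]) + 0.05):
            learners.append(MajorityLearner())
        elif len(learners) > n_active:
            learners.remove(min(learners, key=lambda l: l.accuracy()))

stream = [0, 0, 1, 0, 1, 1, 0, 0, 0, 1] * 20
preds = list(esrf_like_stream(stream))
print("stream accuracy:", np.mean([p == y for p, y in zip(preds, stream)]))
```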

Algorithms for an Efficient Tensor Biclustering

Title Algorithms for an Efficient Tensor Biclustering
Authors Andriantsiory Dina Faneva, Mustapha Lebbah, Hanane Azzag, Gaël Beck
Abstract Consider a data set collected as (individual, feature) pairs at different times. It can be represented as a tensor with three dimensions (individuals, features, and times). The tensor biclustering problem computes a subset of individuals and a subset of features whose signal trajectories over time lie in a low-dimensional subspace, modeling similarity among the signal trajectories while allowing different scalings across different individuals or different features. Our approach is based on spectral decomposition to build the desired biclusters. We evaluate the quality of the results from each algorithm on both synthetic and real data sets.
Tasks
Published 2019-03-10
URL http://arxiv.org/abs/1903.04042v1
PDF http://arxiv.org/pdf/1903.04042v1.pdf
PWC https://paperswithcode.com/paper/algorithms-for-an-efficient-tensor
Repo
Framework
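
As a spectral-flavoured illustration of the problem setup (not the authors' exact algorithms), the sketch below stacks all (individual, feature) trajectories, extracts the dominant temporal direction with an SVD, and ranks individuals and features by how strongly their trajectories align with it.

```python
# Spectral-flavoured sketch of the tensor biclustering setup; not the exact
# algorithms evaluated in the paper.
import numpy as np

rng = np.random.default_rng(0)
n_ind, n_feat, n_time = 40, 30, 20
T = rng.normal(size=(n_ind, n_feat, n_time))

# Plant a bicluster: a subset of individuals and features share one trajectory.
ind_star, feat_star = np.arange(10), np.arange(8)
signal = np.sin(np.linspace(0, 4, n_time))
T[np.ix_(ind_star, feat_star)] += 3.0 * signal

# Dominant temporal direction over all trajectories.
flat = T.reshape(-1, n_time)                  # (n_ind * n_feat, n_time)
_, _, Vt = np.linalg.svd(flat, full_matrices=False)
direction = Vt[0]                             # top right-singular vector

# Score each trajectory by its projection on that direction, then aggregate.
scores = (T @ direction) ** 2                 # (n_ind, n_feat)
top_ind = np.argsort(scores.sum(axis=1))[::-1][:10]
top_feat = np.argsort(scores.sum(axis=0))[::-1][:8]
print("recovered individuals:", np.sort(top_ind))
print("recovered features   :", np.sort(top_feat))
```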

On the Interpretability and Evaluation of Graph Representation Learning

Title On the Interpretability and Evaluation of Graph Representation Learning
Authors Antonia Gogoglou, C. Bayan Bruss, Keegan E. Hines
Abstract With the rising interest in graph representation learning, a variety of approaches have been proposed to effectively capture a graph’s properties. While these approaches have improved performance in graph machine learning tasks compared to traditional graph techniques, they are still perceived as techniques with limited insight into the information encoded in these representations. In this work, we explore methods to interpret node embeddings and propose the creation of a robust evaluation framework for comparing graph representation learning algorithms and hyperparameters. We test our methods on graphs with different properties and investigate the relationship between embedding training parameters and the ability of the produced embedding to recover the structure of the original graph in a downstream task.
Tasks Graph Representation Learning, Representation Learning
Published 2019-10-07
URL https://arxiv.org/abs/1910.03081v1
PDF https://arxiv.org/pdf/1910.03081v1.pdf
PWC https://paperswithcode.com/paper/on-the-interpretability-and-evaluation-of
Repo
Framework
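
One evaluation the abstract alludes to, recovering the structure of the original graph from the embeddings, can be sketched as follows: rank node pairs by embedding dot product and measure how many of the top-ranked pairs are true edges. Spectral embeddings of a toy two-block graph stand in for any learned embedding; this is not the authors' full framework.

```python
# Sketch of an edge-recovery evaluation for node embeddings; spectral
# embeddings of a toy graph stand in for any learned embedding.
import numpy as np

rng = np.random.default_rng(0)
n = 60
# Toy two-block graph: dense within blocks, sparse across.
block = (np.arange(n) < n // 2)
p = np.where(block[:, None] == block[None, :], 0.4, 0.02)
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T                                  # symmetric, no self-loops

# Stand-in embedding: top eigenvectors of the adjacency matrix.
vals, vecs = np.linalg.eigh(A)
emb = vecs[:, -8:] * vals[-8:]

def edge_recovery_precision(A, emb, k):
    """Precision@k of reconstructing edges from embedding dot products."""
    scores = emb @ emb.T
    iu = np.triu_indices_from(scores, k=1)
    order = np.argsort(scores[iu])[::-1][:k]
    return A[iu][order].mean()

k = int(A.sum() / 2)                         # number of true edges
print("precision@|E|:", edge_recovery_precision(A, emb, k))
```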

The Diversity-Innovation Paradox in Science

Title The Diversity-Innovation Paradox in Science
Authors Bas Hofstra, Vivek V. Kulkarni, Sebastian Munoz-Najar Galvez, Bryan He, Dan Jurafsky, Daniel A. McFarland
Abstract Prior work finds a diversity paradox: diversity breeds innovation, and yet, underrepresented groups that diversify organizations have less successful careers within them. Does the diversity paradox hold for scientists as well? We study this by utilizing a near-population of ~1.2 million US doctoral recipients from 1977-2015 and following their careers into publishing and faculty positions. We use text analysis and machine learning to answer a series of questions: How do we detect scientific innovations? Are underrepresented groups more likely to generate scientific innovations? And are the innovations of underrepresented groups adopted and rewarded? Our analyses show that underrepresented groups produce higher rates of scientific novelty. However, their novel contributions are devalued and discounted: e.g., novel contributions by gender and racial minorities are taken up by other scholars at lower rates than novel contributions by gender and racial majorities, and equally impactful contributions of gender and racial minorities are less likely to result in successful scientific careers than for majority groups. These results suggest there may be unwarranted reproduction of stratification in academic careers that discounts diversity’s role in innovation and partly explains the underrepresentation of some groups in academia.
Tasks
Published 2019-09-04
URL https://arxiv.org/abs/1909.02063v2
PDF https://arxiv.org/pdf/1909.02063v2.pdf
PWC https://paperswithcode.com/paper/diversity-breeds-innovation-with-discounted
Repo
Framework

A Computer-Aided System for Determining the Application Range of a Warfarin Clinical Dosing Algorithm Using Support Vector Machines with a Polynomial Kernel Function

Title A Computer-Aided System for Determining the Application Range of a Warfarin Clinical Dosing Algorithm Using Support Vector Machines with a Polynomial Kernel Function
Authors Ashkan Sharabiani, Adam Bress, William Galanter, Rezvan Nazempour, Houshang Darabi
Abstract Determining the optimal initial dose for warfarin is a critically important task. Several factors have an impact on the therapeutic dose for individual patients, such as patients’ physical attributes (age, height, etc.), medication profile, co-morbidities, and metabolic genotypes (CYP2C9 and VKORC1). This wide range of factors influencing the therapeutic dose creates a complex environment for clinicians when determining the optimal initial dose. Using a sample of 4,237 patients, we propose a companion classification model to one of the most popular dosing algorithms, the International Warfarin Pharmacogenetics Consortium (IWPC) clinical model, which identifies the appropriate cohort of patients for applying this model. The proposed model functions as a clinical decision support system that assists clinicians in dosing. We develop a classification model using Support Vector Machines with a polynomial kernel function to determine whether applying the dose prediction model is appropriate for a given patient. The IWPC clinical model will only be used if the patient is classified as “Safe for model”. By using the proposed methodology, the dosing model’s prediction accuracy increases by 15 percent in terms of Root Mean Squared Error and 17 percent in terms of Mean Absolute Error in dose estimates of patients classified as “Safe for model”.
Tasks
Published 2019-03-21
URL http://arxiv.org/abs/1903.09267v1
PDF http://arxiv.org/pdf/1903.09267v1.pdf
PWC https://paperswithcode.com/paper/a-computer-aided-system-for-determining-the
Repo
Framework
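
The gating idea can be sketched on synthetic data as below: an SVM with a polynomial kernel learns to predict whether a patient is "Safe for model", and a dosing formula is then applied only to those patients. The linear `clinical_model` used here is a placeholder, not the actual IWPC model or its coefficients, and the data are random.

```python
# Sketch of the SVM gating idea on synthetic data. The "clinical_model" below
# is a placeholder linear formula, NOT the IWPC clinical model.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 5))                 # stand-ins for age, height, etc.
true_dose = 30 + X @ np.array([2.0, -1.5, 1.0, 0.5, 3.0]) + rng.normal(0, 2, n)

def clinical_model(X):
    """Placeholder linear dosing formula (not the IWPC coefficients)."""
    return 30 + X @ np.array([2.0, -1.5, 1.0, 0.0, 0.0])

# Label patients as "safe" when the formula's error is small, then learn that
# label with a polynomial-kernel SVM, as the abstract describes.
err = np.abs(clinical_model(X) - true_dose)
safe = (err < np.median(err)).astype(int)

tr, te = np.arange(800), np.arange(800, 1000)
gate = SVC(kernel="poly", degree=3, C=1.0).fit(X[tr], safe[tr])

pred_safe = gate.predict(X[te]) == 1
mae_all = mean_absolute_error(true_dose[te], clinical_model(X[te]))
mae_safe = mean_absolute_error(true_dose[te][pred_safe],
                               clinical_model(X[te])[pred_safe])
print(f"MAE on all test patients : {mae_all:.2f}")
print(f"MAE on 'Safe for model'  : {mae_safe:.2f}")
```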

Signal Clustering with Class-independent Segmentation

Title Signal Clustering with Class-independent Segmentation
Authors Stefano Gasperini, Magdalini Paschali, Carsten Hopke, David Wittmann, Nassir Navab
Abstract Radar signals have been dramatically increasing in complexity, limiting the source separation ability of traditional approaches. In this paper we propose a Deep Learning-based clustering method, which encodes concurrent signals into images, and, for the first time, tackles clustering with image segmentation. Novel loss functions are introduced to optimize a Neural Network to separate the input pulses into pure and non-fragmented clusters. Outperforming a variety of baselines, the proposed approach is capable of clustering inputs directly with a Neural Network, in an end-to-end fashion.
Tasks Semantic Segmentation
Published 2019-11-18
URL https://arxiv.org/abs/1911.07590v1
PDF https://arxiv.org/pdf/1911.07590v1.pdf
PWC https://paperswithcode.com/paper/signal-clustering-with-class-independent
Repo
Framework
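
Only the encoding step is sketched below (the novel losses and the segmentation network are not reproduced): concurrent pulses described by time of arrival and frequency are binned into a 2D image, which a segmentation model could subsequently separate into per-emitter clusters.

```python
# Sketch of the signal-to-image encoding step only; the losses and the
# segmentation network from the paper are not reproduced here.
import numpy as np

rng = np.random.default_rng(0)

# Two interleaved emitters with different pulse repetition intervals and bands.
toa_a = np.cumsum(rng.normal(10.0, 0.5, 200))
freq_a = rng.normal(9.4e9, 2e7, 200)
toa_b = np.cumsum(rng.normal(3.0, 0.2, 200))
freq_b = rng.normal(9.7e9, 2e7, 200)
toa = np.concatenate([toa_a, toa_b])
freq = np.concatenate([freq_a, freq_b])

# Encode the pulse train as a (time x frequency) occupancy image.
image, _, _ = np.histogram2d(toa, freq, bins=(128, 128))
image = np.clip(image, 0, 1)            # binary occupancy map
print("encoded image shape:", image.shape, "occupied pixels:", int(image.sum()))
```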

Zeno++: Robust Fully Asynchronous SGD

Title Zeno++: Robust Fully Asynchronous SGD
Authors Cong Xie, Sanmi Koyejo, Indranil Gupta
Abstract We propose Zeno++, a new robust asynchronous Stochastic Gradient Descent~(SGD) procedure which tolerates Byzantine failures of the workers. In contrast to previous work, Zeno++ removes some unrealistic restrictions on worker-server communications, allowing for fully asynchronous updates from anonymous workers, arbitrarily stale worker updates, and the possibility of an unbounded number of Byzantine workers. The key idea is to estimate the descent of the loss value after the candidate gradient is applied, where large descent values indicate that the update results in optimization progress. We prove the convergence of Zeno++ for non-convex problems under Byzantine failures. Experimental results show that Zeno++ outperforms existing approaches.
Tasks
Published 2019-03-17
URL https://arxiv.org/abs/1903.07020v4
PDF https://arxiv.org/pdf/1903.07020v4.pdf
PWC https://paperswithcode.com/paper/zeno-robust-asynchronous-sgd-with-arbitrary
Repo
Framework
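
A toy sketch of the scoring idea stated in the abstract, not the full Zeno++ procedure or its guarantees: the server estimates the descent of a validation loss after a candidate, possibly Byzantine, gradient is applied and discards updates whose estimated descent is too small.

```python
# Toy sketch of descent-based filtering of candidate gradients; not the full
# Zeno++ algorithm. The objective is a simple quadratic.
import numpy as np

rng = np.random.default_rng(0)
x_star = np.ones(10)                    # optimum of the toy objective

def val_loss(x):                        # server-side validation loss
    return 0.5 * np.sum((x - x_star) ** 2)

def honest_grad(x):
    return (x - x_star) + rng.normal(0, 0.1, size=x.shape)

def byzantine_grad(x):
    return -5.0 * (x - x_star)          # adversarial: pushes away from the optimum

x, lr, eps = np.zeros(10), 0.1, 1e-4
for t in range(200):
    g = byzantine_grad(x) if rng.random() < 0.3 else honest_grad(x)
    descent = val_loss(x) - val_loss(x - lr * g)   # estimated progress
    if descent >= eps:                             # accept only useful updates
        x = x - lr * g
print("final validation loss:", round(val_loss(x), 4))
```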

A Novel Cost Function for Despeckling using Convolutional Neural Networks

Title A Novel Cost Function for Despeckling using Convolutional Neural Networks
Authors Giampaolo Ferraioli, Vito Pascazio, Sergio Vitale
Abstract Removing speckle noise from SAR images is still an open issue. It is well known that the interpretation of SAR images is very challenging, and despeckling algorithms are necessary to improve the ability to extract information. Urban environments make this task harder due to the variety of structures and object scales. Following the recent spread of deep learning methods in several remote sensing applications, in this work a convolutional neural network based algorithm for despeckling is proposed. The network is trained on simulated SAR data. The paper is mainly focused on the implementation of a cost function that takes into account both the spatial consistency of the image and the statistical properties of the noise.
Tasks
Published 2019-06-11
URL https://arxiv.org/abs/1906.04441v1
PDF https://arxiv.org/pdf/1906.04441v1.pdf
PWC https://paperswithcode.com/paper/a-novel-cost-function-for-despeckling-using
Repo
Framework
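
The exact cost function is in the paper; the sketch below only illustrates the general shape the abstract describes, combining a per-pixel fidelity term (spatial consistency) with a term that pushes the estimated multiplicative residual towards the statistics of fully developed single-look speckle (mean and variance close to 1). The weighting and the statistical term are assumptions for illustration.

```python
# Illustrative composite despeckling loss; NOT the paper's actual cost function.
import numpy as np

def despeckling_loss(pred, clean, noisy, lam=0.1, eps=1e-6):
    fidelity = np.mean((pred - clean) ** 2)          # spatial consistency term
    ratio = noisy / (pred + eps)                     # estimated speckle component
    # Push the residual towards single-look speckle statistics (mean 1, var 1).
    stat = (ratio.mean() - 1.0) ** 2 + (ratio.var() - 1.0) ** 2
    return fidelity + lam * stat

# Toy check on simulated multiplicative speckle (exponential, mean 1).
rng = np.random.default_rng(0)
clean = rng.uniform(0.5, 1.5, size=(64, 64))
noisy = clean * rng.exponential(1.0, size=(64, 64))
print("loss of a perfect estimate :", despeckling_loss(clean, clean, noisy))
print("loss of the noisy input    :", despeckling_loss(noisy, clean, noisy))
```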

Mega-Reward: Achieving Human-Level Play without Extrinsic Rewards

Title Mega-Reward: Achieving Human-Level Play without Extrinsic Rewards
Authors Yuhang Song, Jianyi Wang, Thomas Lukasiewicz, Zhenghua Xu, Shangtong Zhang, Andrzej Wojcicki, Mai Xu
Abstract Intrinsic rewards were introduced to simulate how human intelligence works; they are usually evaluated by intrinsically-motivated play, i.e., playing games without extrinsic rewards but evaluated with extrinsic rewards. However, none of the existing intrinsic reward approaches can achieve human-level performance under this very challenging setting of intrinsically-motivated play. In this work, we propose a novel megalomania-driven intrinsic reward (called mega-reward), which, to our knowledge, is the first approach that achieves human-level performance in intrinsically-motivated play. Intuitively, mega-reward comes from the observation that infants’ intelligence develops when they try to gain more control over entities in an environment; therefore, mega-reward aims to maximize the control capabilities of agents on given entities in a given environment. To formalize mega-reward, a relational transition model is proposed to bridge the gaps between direct and latent control. Experimental studies show that mega-reward (i) can greatly outperform all state-of-the-art intrinsic reward approaches, (ii) generally achieves the same level of performance as Ex-PPO and professional human-level scores, and (iii) also achieves superior performance when combined with extrinsic rewards.
Tasks
Published 2019-05-12
URL https://arxiv.org/abs/1905.04640v4
PDF https://arxiv.org/pdf/1905.04640v4.pdf
PWC https://paperswithcode.com/paper/mega-reward-achieving-human-level-play
Repo
Framework