May 7, 2019

2901 words 14 mins read

Paper Group ANR 64

Iterative Hierarchical Optimization for Misspecified Problems (IHOMP). Ranking of classification algorithms in terms of mean-standard deviation using A-TOPSIS. Transliteration in Any Language with Surrogate Languages. Predicting brain age with deep learning from raw imaging data results in a reliable and heritable biomarker. A SAT model to mine fle …

Iterative Hierarchical Optimization for Misspecified Problems (IHOMP)

Title Iterative Hierarchical Optimization for Misspecified Problems (IHOMP)
Authors Daniel J. Mankowitz, Timothy A. Mann, Shie Mannor
Abstract For complex, high-dimensional Markov Decision Processes (MDPs), it may be necessary to represent the policy with function approximation. A problem is misspecified whenever the representation cannot express any policy with acceptable performance. We introduce IHOMP: an approach for solving misspecified problems. IHOMP iteratively learns a set of context-specialized options and combines these options to solve an otherwise misspecified problem. Our main contribution is proving that IHOMP enjoys theoretical convergence guarantees. In addition, we extend IHOMP to exploit Option Interruption (OI), enabling it to decide where the learned options can be reused. Our experiments demonstrate that IHOMP can find near-optimal solutions to otherwise misspecified problems and that OI can further improve the solutions.
Tasks
Published 2016-02-10
URL http://arxiv.org/abs/1602.03348v2
PDF http://arxiv.org/pdf/1602.03348v2.pdf
PWC https://paperswithcode.com/paper/iterative-hierarchical-optimization-for
Repo
Framework
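
Below is a toy Python illustration (not the paper's algorithm) of why context-specialized options can repair a misspecified policy class: on a corridor whose goal sits in the middle, no single constant-action policy reaches the goal from every state, but two options improved greedily per context do. The environment, horizon, and policy class are invented for the example.

```python
# Toy illustration (not IHOMP itself): a 1-D corridor whose goal sits in the middle.
# A policy class restricted to a single constant action ("always left" or "always
# right") is misspecified, but splitting the state space into two contexts and
# learning one constant-action option per context solves the task.

N, GOAL, HORIZON = 21, 10, 40            # states 0..20, goal in the centre

def rollout(start, policy):
    """Return 1 if the goal is reached within HORIZON steps, else 0."""
    s = start
    for _ in range(HORIZON):
        if s == GOAL:
            return 1
        s = min(max(s + policy(s), 0), N - 1)   # policy returns -1 or +1
    return int(s == GOAL)

def success_rate(policy):
    return sum(rollout(s, policy) for s in range(N)) / N

# Misspecified class: one constant action for the whole state space.
best_single = max(success_rate(lambda s, a=a: a) for a in (-1, +1))

# Two context-specialised options, each improved greedily in turn.
options = {0: -1, 1: -1}                 # context 0: s < GOAL, context 1: s >= GOAL
for _ in range(3):                       # a few improvement sweeps suffice here
    for c in options:
        options[c] = max((-1, +1),
                         key=lambda a: success_rate(
                             lambda s: a if (s >= GOAL) == c else options[1 - c]))

combined = success_rate(lambda s: options[int(s >= GOAL)])
print(f"best single policy: {best_single:.2f}, two options: {combined:.2f}")
```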

Ranking of classification algorithms in terms of mean-standard deviation using A-TOPSIS

Title Ranking of classification algorithms in terms of mean-standard deviation using A-TOPSIS
Authors Andre G. C. Pacheco, Renato A. Krohling
Abstract In classification problems, when multiple algorithms are applied to different benchmarks a difficult issue arises, i.e., how can we rank the algorithms? In machine learning it is common to run the algorithms several times and then report a statistic in terms of means and standard deviations. In order to compare the performance of the algorithms, it is very common to employ statistical tests. However, these tests may also present limitations, since they consider only the means and not the standard deviations of the obtained results. In this paper, we present the so-called A-TOPSIS, based on TOPSIS (Technique for Order Preference by Similarity to Ideal Solution), to solve the problem of ranking and comparing classification algorithms in terms of means and standard deviations. We use two case studies to illustrate A-TOPSIS for ranking classification algorithms, and the results show its suitability for this purpose. The presented approach is general and can be applied to compare the performance of stochastic algorithms in machine learning. Finally, to encourage researchers to use A-TOPSIS for ranking algorithms, we also present an easy-to-use A-TOPSIS web framework.
Tasks
Published 2016-10-22
URL http://arxiv.org/abs/1610.06998v1
PDF http://arxiv.org/pdf/1610.06998v1.pdf
PWC https://paperswithcode.com/paper/ranking-of-classification-algorithms-in-terms
Repo
Framework
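
A minimal sketch of the underlying idea using plain TOPSIS on a (mean, std) decision matrix; the algorithm names and numbers are invented, and the authors' A-TOPSIS extension itself is not reproduced here.

```python
import numpy as np

# Plain TOPSIS over a (mean, std) decision matrix -- a simplified stand-in for
# A-TOPSIS; the algorithm names and numbers below are made up for illustration.
algos = ["SVM", "RandomForest", "MLP"]
X = np.array([[0.91, 0.030],        # columns: mean accuracy (benefit), std (cost)
              [0.93, 0.045],
              [0.90, 0.015]])
weights = np.array([0.5, 0.5])
benefit = np.array([True, False])   # maximise the mean, minimise the std

R = X / np.linalg.norm(X, axis=0)           # vector normalisation
V = R * weights                             # weighted normalised matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)         # higher = closer to the ideal solution

for name, c in sorted(zip(algos, closeness), key=lambda t: -t[1]):
    print(f"{name}: {c:.3f}")
```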

Transliteration in Any Language with Surrogate Languages

Title Transliteration in Any Language with Surrogate Languages
Authors Stephen Mayhew, Christos Christodoulopoulos, Dan Roth
Abstract We introduce a method for transliteration generation that can produce transliterations in every language. Where previous results are only as multilingual as Wikipedia, we show how to use training data from Wikipedia as surrogate training for any language. Thus, the problem becomes one of ranking Wikipedia languages in order of suitability with respect to a target language. We introduce several task-specific methods for ranking languages, and show that our approach is comparable to the oracle ceiling, and even outperforms it in some cases.
Tasks Transliteration
Published 2016-09-14
URL http://arxiv.org/abs/1609.04325v1
PDF http://arxiv.org/pdf/1609.04325v1.pdf
PWC https://paperswithcode.com/paper/transliteration-in-any-language-with
Repo
Framework
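
As a rough sketch of the ranking problem only (not the paper's task-specific methods), one naive baseline is to rank candidate Wikipedia languages by character overlap with names written in the target language; the toy name lists below are made up.

```python
# A naive way to rank surrogate languages: character-set overlap between sample
# names in the target language and each candidate Wikipedia language.  This is only
# a stand-in for the paper's task-specific ranking methods; the data is invented.
def char_overlap(sample_a, sample_b):
    a = set("".join(sample_a))
    b = set("".join(sample_b))
    return len(a & b) / len(a | b)          # Jaccard similarity of character sets

target_names = ["обама", "лондон"]           # e.g. a low-resource Cyrillic language
candidates = {
    "russian": ["обама", "лондон", "париж"],
    "serbian": ["обама", "лондон", "париз"],
    "spanish": ["obama", "londres", "parís"],
}
ranking = sorted(candidates,
                 key=lambda lang: -char_overlap(target_names, candidates[lang]))
print(ranking)                               # Cyrillic-script languages rank first
```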

Predicting brain age with deep learning from raw imaging data results in a reliable and heritable biomarker

Title Predicting brain age with deep learning from raw imaging data results in a reliable and heritable biomarker
Authors James H Cole, Rudra PK Poudel, Dimosthenis Tsagkrasoulis, Matthan WA Caan, Claire Steves, Tim D Spector, Giovanni Montana
Abstract Machine learning analysis of neuroimaging data can accurately predict chronological age in healthy people, and deviations from healthy brain ageing have been associated with cognitive impairment and disease. Here we sought to further establish the credentials of “brain-predicted age” as a biomarker of individual differences in the brain ageing process, using a predictive modelling approach based on deep learning, specifically convolutional neural networks (CNN), applied to both pre-processed and raw T1-weighted MRI data. First, we aimed to demonstrate the accuracy of CNN brain-predicted age using a large dataset of healthy adults (N = 2001). Second, we sought to establish the heritability of brain-predicted age using a sample of monozygotic and dizygotic female twins (N = 62). Third, we examined the test-retest and multi-centre reliability of brain-predicted age using two samples (within-scanner N = 20; between-scanner N = 11). CNN brain-predicted ages were generated and compared to a Gaussian Process Regression (GPR) approach on all datasets. Input data were grey matter (GM) or white matter (WM) volumetric maps generated by Statistical Parametric Mapping (SPM), or raw data. Brain-predicted age represents an accurate, highly reliable and genetically valid phenotype that has the potential to be used as a biomarker of brain ageing. Moreover, age predictions can be accurately generated on raw T1-MRI data, substantially reducing computation time for novel data and bringing the process closer to giving real-time information on brain health in clinical settings.
Tasks
Published 2016-12-08
URL http://arxiv.org/abs/1612.02572v1
PDF http://arxiv.org/pdf/1612.02572v1.pdf
PWC https://paperswithcode.com/paper/predicting-brain-age-with-deep-learning-from
Repo
Framework
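
A minimal PyTorch sketch of a 3D CNN regressor in the spirit of this approach; the layer sizes, input resolution, and loss below are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

# Minimal 3D CNN age regressor; layer sizes, input resolution and training details
# are illustrative, not the architecture described in the paper.
class BrainAgeCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.regressor = nn.Linear(32, 1)    # single output: predicted age in years

    def forward(self, x):                    # x: (batch, 1, D, H, W) T1-weighted volume
        return self.regressor(self.features(x).flatten(1)).squeeze(1)

model = BrainAgeCNN()
volumes = torch.randn(2, 1, 32, 32, 32)      # stand-in for preprocessed or raw volumes
ages = torch.tensor([34.0, 61.0])
loss = nn.functional.l1_loss(model(volumes), ages)   # mean absolute error in years
loss.backward()
print(float(loss))
```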

A SAT model to mine flexible sequences in transactional datasets

Title A SAT model to mine flexible sequences in transactional datasets
Authors Rémi Coletta, Benjamin Negrevergne
Abstract Traditional pattern mining algorithms generally suffer from a lack of flexibility. In this paper, we propose a SAT formulation of the problem to successfully mine frequent flexible sequences occurring in transactional datasets. Our SAT-based approach can easily be extended with extra constraints to address a broad range of pattern mining applications. To demonstrate this claim, we formulate and add several constraints, such as gap and span constraints, to our model in order to extract more specific patterns. We also use interactive solving to perform important derived tasks, such as closed pattern mining or maximal pattern mining. Finally, we prove the practical feasibility of our SAT model by running experiments on two real datasets.
Tasks
Published 2016-04-01
URL http://arxiv.org/abs/1604.00300v1
PDF http://arxiv.org/pdf/1604.00300v1.pdf
PWC https://paperswithcode.com/paper/a-sat-model-to-mine-flexible-sequences-in
Repo
Framework
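
The sketch below does not reproduce the SAT encoding; it only illustrates the relation the encoding models: whether a pattern occurs in a transaction as a (possibly gap-constrained) subsequence, and the resulting support count. The toy dataset is made up.

```python
# Not the SAT model itself, only the occurrence relation it encodes: does a pattern
# occur in a transaction as a subsequence, with at most `max_gap` unmatched items
# between consecutive matched positions?
def occurs(pattern, transaction, max_gap=None):
    def match(p_idx, start):
        if p_idx == len(pattern):
            return True
        end = len(transaction) if (max_gap is None or p_idx == 0) \
              else min(len(transaction), start + max_gap + 1)
        return any(transaction[i] == pattern[p_idx] and match(p_idx + 1, i + 1)
                   for i in range(start, end))
    return match(0, 0)

def support(pattern, dataset, max_gap=None):
    return sum(occurs(pattern, t, max_gap) for t in dataset)

data = ["acb", "abc", "cabd"]
print(support("ab", data), support("ab", data, max_gap=0))   # 3 vs. 2
```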

A Generalized Stochastic Variational Bayesian Hyperparameter Learning Framework for Sparse Spectrum Gaussian Process Regression

Title A Generalized Stochastic Variational Bayesian Hyperparameter Learning Framework for Sparse Spectrum Gaussian Process Regression
Authors Quang Minh Hoang, Trong Nghia Hoang, Kian Hsiang Low
Abstract While much research effort has been dedicated to scaling up sparse Gaussian process (GP) models based on inducing variables for big data, little attention is afforded to the other less explored class of low-rank GP approximations that exploit the sparse spectral representation of a GP kernel. This paper presents such an effort to advance the state of the art of sparse spectrum GP models to achieve competitive predictive performance for massive datasets. Our generalized framework of stochastic variational Bayesian sparse spectrum GP (sVBSSGP) models addresses their shortcomings by adopting a Bayesian treatment of the spectral frequencies to avoid overfitting, modeling these frequencies jointly in its variational distribution to enable their interaction a posteriori, and exploiting local data for boosting the predictive performance. However, such structural improvements result in a variational lower bound that is intractable to be optimized. To resolve this, we exploit a variational parameterization trick to make it amenable to stochastic optimization. Interestingly, the resulting stochastic gradient has a linearly decomposable structure that can be exploited to refine our stochastic optimization method to incur constant time per iteration while preserving its property of being an unbiased estimator of the exact gradient of the variational lower bound. Empirical evaluation on real-world datasets shows that sVBSSGP outperforms state-of-the-art stochastic implementations of sparse GP models.
Tasks Stochastic Optimization
Published 2016-11-18
URL http://arxiv.org/abs/1611.06080v1
PDF http://arxiv.org/pdf/1611.06080v1.pdf
PWC https://paperswithcode.com/paper/a-generalized-stochastic-variational-bayesian
Repo
Framework
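
For context, here is a minimal numpy sketch of the basic sparse-spectrum GP approximation the paper builds on: random Fourier features of an RBF kernel followed by Bayesian linear regression. The variational Bayesian treatment of the frequencies (the paper's contribution) is not shown, and the toy data and hyperparameters are assumptions.

```python
import numpy as np

# Basic sparse-spectrum GP regression: approximate an RBF kernel with m random
# Fourier features and do Bayesian linear regression in that feature space.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(200)

m, lengthscale, noise = 50, 1.0, 0.1
W = rng.standard_normal((m, 1)) / lengthscale     # spectral frequencies of the RBF kernel
b = rng.uniform(0, 2 * np.pi, m)

def features(X):
    return np.sqrt(2.0 / m) * np.cos(X @ W.T + b) # random Fourier features

Phi = features(X)
A = Phi.T @ Phi + noise**2 * np.eye(m)            # posterior over feature weights
mean_w = np.linalg.solve(A, Phi.T @ y)

X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
print(features(X_test) @ mean_w)                  # approximate GP posterior mean
```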

A Maturity Model for Public Administration as Open Translation Data Providers

Title A Maturity Model for Public Administration as Open Translation Data Providers
Authors Núria Bel, Mikel L. Forcada, Asunción Gómez-Pérez
Abstract Any public administration that produces translation data can be a provider of useful reusable data to meet its own translation needs and the ones of other public organizations and private companies that work with texts of the same domain. These data can also be crucial to produce domain-tuned Machine Translation systems. The organization’s management of the translation process, the characteristics of the archives of the generated resources and of the infrastructure available to support them determine the efficiency and the effectiveness with which the materials produced can be converted into reusable data. However, it is of utmost importance that the organizations themselves first become aware of the goods they are producing and, second, adapt their internal processes to become optimal providers. In this article, we propose a Maturity Model to help these organizations to achieve it by identifying the different stages of the management of translation data that determine the path to the aforementioned goal.
Tasks Machine Translation
Published 2016-07-07
URL http://arxiv.org/abs/1607.01990v1
PDF http://arxiv.org/pdf/1607.01990v1.pdf
PWC https://paperswithcode.com/paper/a-maturity-model-for-public-administration-as
Repo
Framework

A Low Complexity Algorithm with $O(\sqrt{T})$ Regret and Finite Constraint Violations for Online Convex Optimization with Long Term Constraints

Title A Low Complexity Algorithm with $O(\sqrt{T})$ Regret and Finite Constraint Violations for Online Convex Optimization with Long Term Constraints
Authors Hao Yu, Michael J. Neely
Abstract This paper considers online convex optimization over a complicated constraint set, which typically consists of multiple functional constraints and a set constraint. The conventional projection-based online algorithm (Zinkevich, 2003) can be difficult to implement due to the potentially high computational complexity of the projection operation. In this paper, we relax the functional constraints by allowing them to be violated at each round but still requiring them to be satisfied in the long term. This type of relaxed online convex optimization (with long term constraints) was first considered in Mahdavi et al. (2012). That prior work proposes an algorithm to achieve $O(\sqrt{T})$ regret and $O(T^{3/4})$ constraint violations for general problems, and another algorithm to achieve an $O(T^{2/3})$ bound for both regret and constraint violations when the constraint set can be described by a finite number of linear constraints. A recent extension in Jenatton et al. (2016) can achieve $O(T^{\max\{\beta,1-\beta\}})$ regret and $O(T^{1-\beta/2})$ constraint violations, where $\beta\in (0,1)$. The current paper proposes a new simple algorithm that yields improved performance in comparison to prior works. The new algorithm achieves an $O(\sqrt{T})$ regret bound with finite constraint violations.
Tasks
Published 2016-04-08
URL http://arxiv.org/abs/1604.02218v2
PDF http://arxiv.org/pdf/1604.02218v2.pdf
PWC https://paperswithcode.com/paper/a-low-complexity-algorithm-with-osqrtt-regret
Repo
Framework
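
A simplified sketch of the general virtual-queue / drift-plus-penalty idea behind such algorithms, on a one-dimensional toy problem; the step size, penalty weighting, and problem are assumptions for illustration, not the paper's exact algorithm or analysis.

```python
import numpy as np

# Simplified virtual-queue update for online convex optimisation with a long-term
# constraint: a sketch of the general idea, not the algorithm analysed in the paper.
rng = np.random.default_rng(1)
T, eta = 1000, 0.05
x, Q = 0.0, 0.0                                   # decision variable and virtual queue

total_loss, violation = 0.0, 0.0
for t in range(T):
    theta = rng.uniform(0.4, 1.0)                 # adversarial target for this round
    loss_grad = 2 * (x - theta)                   # f_t(x) = (x - theta)^2
    x = np.clip(x - eta * (loss_grad + Q * 1.0), 0.0, 1.0)   # gradient of g(x)=x-0.6 is 1
    Q = max(Q + (x - 0.6), 0.0)                   # virtual queue tracks accumulated violation
    total_loss += (x - theta) ** 2
    violation += x - 0.6

print(f"average loss: {total_loss / T:.3f}, total constraint violation: {violation:.3f}")
```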

A Sensorimotor Reinforcement Learning Framework for Physical Human-Robot Interaction

Title A Sensorimotor Reinforcement Learning Framework for Physical Human-Robot Interaction
Authors Ali Ghadirzadeh, Judith Bütepage, Atsuto Maki, Danica Kragic, Mårten Björkman
Abstract Modeling physical human-robot collaborations is generally a challenging problem due to the unpredictable nature of human behavior. To address this issue, we present a data-efficient reinforcement learning framework which enables a robot to learn how to collaborate with a human partner. The robot learns the task from its own sensorimotor experiences in an unsupervised manner. The uncertainty of the human actions is modeled using Gaussian processes (GP) to implement action-value functions. Optimal action selection given the uncertain GP model is ensured by Bayesian optimization. We apply the framework to a scenario in which a human and a PR2 robot jointly control the ball position on a plank based on vision and force/torque data. Our experimental results show the suitability of the proposed method in terms of fast and data-efficient model learning, optimal action selection under uncertainty, and equal role sharing between the partners.
Tasks Gaussian Processes
Published 2016-07-27
URL http://arxiv.org/abs/1607.07939v1
PDF http://arxiv.org/pdf/1607.07939v1.pdf
PWC https://paperswithcode.com/paper/a-sensorimotor-reinforcement-learning
Repo
Framework
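
A stripped-down sketch of the action-selection idea: a GP models the action-value function and the next action maximizes an upper-confidence-bound acquisition. The scikit-learn GP and the toy reward function are stand-ins for the robot's real sensorimotor loop, not the paper's implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# GP action-value model with UCB action selection; the toy reward function below
# stands in for the robot's real sensorimotor feedback.
def reward(action):
    return -(action - 0.3) ** 2 + 0.01 * np.random.randn()

actions, rewards = [0.0, 1.0], [reward(0.0), reward(1.0)]   # initial exploratory trials
grid = np.linspace(0, 1, 101).reshape(-1, 1)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-2)
for _ in range(15):
    gp.fit(np.array(actions).reshape(-1, 1), np.array(rewards))
    mean, std = gp.predict(grid, return_std=True)
    a = float(grid[np.argmax(mean + 2.0 * std), 0])          # UCB action selection
    actions.append(a)
    rewards.append(reward(a))

print(f"best action found: {actions[int(np.argmax(rewards))]:.2f}")
```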

Collaborative Learning for Language and Speaker Recognition

Title Collaborative Learning for Language and Speaker Recognition
Authors Lantian Li, Zhiyuan Tang, Dong Wang, Andrew Abel, Yang Feng, Shiyue Zhang
Abstract This paper presents a unified model that performs language and speaker recognition simultaneously. The model is based on a multi-task recurrent neural network where the output of one task is fed as the input of the other, leading to a collaborative learning framework that can improve both language and speaker recognition by borrowing information from each other. Our experiments demonstrate that the multi-task model outperforms the task-specific models on both tasks.
Tasks Speaker Recognition
Published 2016-09-27
URL http://arxiv.org/abs/1609.08442v2
PDF http://arxiv.org/pdf/1609.08442v2.pdf
PWC https://paperswithcode.com/paper/collaborative-learning-for-language-and
Repo
Framework
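
A rough PyTorch sketch of the collaborative recurrence: at each frame, each task's recurrent cell also receives the other task's output from the previous frame. The dimensions, cell type, and output heads are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

# Collaborative recurrence sketch: each task's GRU cell consumes the acoustic frame
# plus the other task's previous-frame output probabilities.
class CollaborativeRNN(nn.Module):
    def __init__(self, feat_dim=40, hidden=64, n_langs=10, n_spks=100):
        super().__init__()
        self.lang_cell = nn.GRUCell(feat_dim + n_spks, hidden)
        self.spk_cell = nn.GRUCell(feat_dim + n_langs, hidden)
        self.lang_head = nn.Linear(hidden, n_langs)
        self.spk_head = nn.Linear(hidden, n_spks)
        self.n_langs, self.n_spks, self.hidden = n_langs, n_spks, hidden

    def forward(self, frames):                   # frames: (batch, time, feat_dim)
        B, T, _ = frames.shape
        h_l = frames.new_zeros(B, self.hidden)
        h_s = frames.new_zeros(B, self.hidden)
        out_l = frames.new_zeros(B, self.n_langs)
        out_s = frames.new_zeros(B, self.n_spks)
        for t in range(T):
            h_l = self.lang_cell(torch.cat([frames[:, t], out_s], dim=1), h_l)
            h_s = self.spk_cell(torch.cat([frames[:, t], out_l], dim=1), h_s)
            out_l = torch.softmax(self.lang_head(h_l), dim=1)
            out_s = torch.softmax(self.spk_head(h_s), dim=1)
        return out_l, out_s

lang_probs, spk_probs = CollaborativeRNN()(torch.randn(2, 50, 40))
print(lang_probs.shape, spk_probs.shape)
```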

Discriminative Training of Deep Fully-connected Continuous CRF with Task-specific Loss

Title Discriminative Training of Deep Fully-connected Continuous CRF with Task-specific Loss
Authors Fayao Liu, Guosheng Lin, Chunhua Shen
Abstract Recent works on deep conditional random fields (CRF) have set new records on many vision tasks involving structured predictions. Here we propose a fully-connected deep continuous CRF model for both discrete and continuous labelling problems. We exemplify the usefulness of the proposed model on multi-class semantic labelling (discrete) and robust depth estimation (continuous). In our framework, we model both the unary and the pairwise potential functions as deep convolutional neural networks (CNN), which are jointly learned in an end-to-end fashion. The proposed method retains the main advantage of continuously-valued CRFs, namely a closed-form solution for maximum a posteriori (MAP) inference. To better adapt to different tasks, instead of using the commonly employed maximum likelihood CRF parameter learning protocol, we propose task-specific loss functions for learning the CRF parameters. This enables direct optimization of the quality of the MAP estimates during the course of learning. Specifically, we optimize a multi-class classification loss for the semantic labelling task and Tukey’s biweight loss for the robust depth estimation problem. Experimental results on the semantic labelling and robust depth estimation tasks demonstrate that the proposed method compares favorably against both baseline and state-of-the-art methods. In particular, we show that although the proposed deep CRF model is continuously valued, when equipped with a task-specific loss it achieves impressive results even on discrete labelling tasks.
Tasks Depth Estimation
Published 2016-01-28
URL http://arxiv.org/abs/1601.07649v1
PDF http://arxiv.org/pdf/1601.07649v1.pdf
PWC https://paperswithcode.com/paper/discriminative-training-of-deep-fully
Repo
Framework
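
For reference, Tukey's biweight loss mentioned in the abstract can be written as follows; the tuning constant c = 4.685 is the common default and not necessarily the value used in the paper.

```python
import numpy as np

# Tukey's biweight loss: quadratic-like near zero but saturating for large residuals,
# so outliers stop contributing gradient (useful for robust depth regression).
def tukey_biweight(residual, c=4.685):
    r = np.asarray(residual, dtype=float)
    inside = np.abs(r) <= c
    loss = np.full_like(r, c**2 / 6.0)                 # saturated value for |r| > c
    loss[inside] = (c**2 / 6.0) * (1.0 - (1.0 - (r[inside] / c) ** 2) ** 3)
    return loss

print(tukey_biweight([0.0, 1.0, 10.0]))                # the large residual is capped
```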

The Famine of Forte: Few Search Problems Greatly Favor Your Algorithm

Title The Famine of Forte: Few Search Problems Greatly Favor Your Algorithm
Authors George D. Montanez
Abstract Casting machine learning as a type of search, we demonstrate that the proportion of problems that are favorable for a fixed algorithm is strictly bounded, such that no single algorithm can perform well over a large fraction of them. Our results explain why we must either continue to develop new learning methods year after year or move towards highly parameterized models that are both flexible and sensitive to their hyperparameters. We further give an upper bound on the expected performance for a search algorithm as a function of the mutual information between the target and the information resource (e.g., training dataset), proving the importance of certain types of dependence for machine learning. Lastly, we show that the expected per-query probability of success for an algorithm is mathematically equivalent to a single-query probability of success under a distribution (called a search strategy), and prove that the proportion of favorable strategies is also strictly bounded. Thus, whether one holds fixed the search algorithm and considers all possible problems or one fixes the search problem and looks at all possible search strategies, favorable matches are exceedingly rare. The forte (strength) of any algorithm is quantifiably restricted.
Tasks
Published 2016-09-28
URL http://arxiv.org/abs/1609.08913v2
PDF http://arxiv.org/pdf/1609.08913v2.pdf
PWC https://paperswithcode.com/paper/the-famine-of-forte-few-search-problems
Repo
Framework
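
A small numerical check of one claim in the abstract, namely that an algorithm's expected per-query probability of success equals the single-query probability of success under its averaged query distribution (one way to read the induced "search strategy"); the toy search space and query distributions are made up.

```python
import numpy as np

# Averaging per-query success probabilities equals evaluating a single query drawn
# from the averaged query distribution, by linearity of expectation.
rng = np.random.default_rng(0)
n_elements, n_queries = 20, 5
target = np.zeros(n_elements); target[[3, 7]] = 1            # the target set

# A made-up algorithm: one query distribution over the space per time step.
P = rng.dirichlet(np.ones(n_elements), size=n_queries)       # (n_queries, n_elements)

per_query = (P @ target).mean()                               # expected per-query success
strategy = P.mean(axis=0)                                     # induced search strategy
single_query = strategy @ target                              # single-query success
print(per_query, single_query)                                # identical values
```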

Deep Directed Generative Models with Energy-Based Probability Estimation

Title Deep Directed Generative Models with Energy-Based Probability Estimation
Authors Taesup Kim, Yoshua Bengio
Abstract Training energy-based probabilistic models is confronted with apparently intractable sums, whose Monte Carlo estimation requires sampling from the estimated probability distribution in the inner loop of training. This can be approximately achieved by Markov chain Monte Carlo methods, but may still face a formidable obstacle that is the difficulty of mixing between modes with sharp concentrations of probability. Whereas an MCMC process is usually derived from a given energy function based on mathematical considerations and requires an arbitrarily long time to obtain good and varied samples, we propose to train a deep directed generative model (not a Markov chain) so that its sampling distribution approximately matches the energy function that is being trained. Inspired by generative adversarial networks, the proposed framework involves training of two models that represent dual views of the estimated probability distribution: the energy function (mapping an input configuration to a scalar energy value) and the generator (mapping a noise vector to a generated configuration), both represented by deep neural networks.
Tasks
Published 2016-06-10
URL http://arxiv.org/abs/1606.03439v1
PDF http://arxiv.org/pdf/1606.03439v1.pdf
PWC https://paperswithcode.com/paper/deep-directed-generative-models-with-energy
Repo
Framework
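
A stripped-down PyTorch sketch of the two-model setup: an energy network trained with a positive phase on data and a negative phase on generated samples, and a generator trained to place its samples in low-energy regions. Network sizes, the toy data, and the plain contrastive objective are assumptions; the paper's full training procedure is not reproduced.

```python
import torch
import torch.nn as nn

# Dual-view sketch: energy network E(x) and generator G(z) trained against each other.
energy = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
generator = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))
opt_e = torch.optim.Adam(energy.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)

for step in range(200):
    data = torch.randn(64, 2) * 0.3 + torch.tensor([1.0, -1.0])   # toy 2-D data
    fake = generator(torch.randn(64, 8))

    # Energy function: positive phase on data, negative phase on generated samples.
    loss_e = energy(data).mean() - energy(fake.detach()).mean()
    opt_e.zero_grad(); loss_e.backward(); opt_e.step()

    # Generator: move its samples towards low-energy configurations.
    loss_g = energy(generator(torch.randn(64, 8))).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Inspect the learned energies at the data mode and away from it.
print(float(energy(torch.tensor([[1.0, -1.0]]))), float(energy(torch.tensor([[3.0, 3.0]]))))
```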

Philosophy in the Face of Artificial Intelligence

Title Philosophy in the Face of Artificial Intelligence
Authors Vincent Conitzer
Abstract In this article, I discuss how the AI community views concerns about the emergence of superintelligent AI and related philosophical issues.
Tasks
Published 2016-05-19
URL http://arxiv.org/abs/1605.06048v1
PDF http://arxiv.org/pdf/1605.06048v1.pdf
PWC https://paperswithcode.com/paper/philosophy-in-the-face-of-artificial
Repo
Framework

Image Restoration and Reconstruction using Variable Splitting and Class-adapted Image Priors

Title Image Restoration and Reconstruction using Variable Splitting and Class-adapted Image Priors
Authors Afonso M. Teodoro, José M. Bioucas-Dias, Mário A. T. Figueiredo
Abstract This paper proposes using a Gaussian mixture model as a prior for solving two image inverse problems, namely image deblurring and compressive imaging. We capitalize on the fact that variable splitting algorithms, like ADMM, are able to decouple the handling of the observation operator from that of the regularizer, and plug a state-of-the-art denoising algorithm into the pure denoising step. Furthermore, we show that, when applied to a specific type of image, a Gaussian mixture model trained on a database of images of the same type is able to outperform current state-of-the-art methods.
Tasks Deblurring, Denoising, Image Restoration
Published 2016-02-12
URL http://arxiv.org/abs/1602.04052v2
PDF http://arxiv.org/pdf/1602.04052v2.pdf
PWC https://paperswithcode.com/paper/image-restoration-and-reconstruction-using
Repo
Framework
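
A minimal plug-and-play ADMM sketch of the variable-splitting idea for deblurring: the data-fidelity step is solved in closed form in the Fourier domain and a denoiser is plugged into the prior step. A Gaussian filter stands in for the paper's class-adapted GMM denoiser, and the toy image and parameters are made up.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Plug-and-play ADMM sketch: closed-form Fourier-domain data-fidelity step plus an
# off-the-shelf denoiser as the prior step (a Gaussian filter stands in for the
# class-adapted GMM denoiser used in the paper).
rng = np.random.default_rng(0)
x_true = np.zeros((64, 64)); x_true[16:48, 16:48] = 1.0       # toy "image"

def blur(img, sigma=2.0):
    return gaussian_filter(img, sigma, mode='wrap')            # circular blur operator

y = blur(x_true) + 0.01 * rng.standard_normal(x_true.shape)    # blurred, noisy observation

# Fourier transfer function of the blur operator, via its impulse response.
impulse = np.zeros_like(x_true); impulse[0, 0] = 1.0
H = np.fft.fft2(blur(impulse))

rho, v, u = 0.5, np.zeros_like(y), np.zeros_like(y)
Y = np.fft.fft2(y)
for _ in range(30):
    X = (np.conj(H) * Y + rho * np.fft.fft2(v - u)) / (np.abs(H) ** 2 + rho)
    x = np.real(np.fft.ifft2(X))                                # data-fidelity step
    v = gaussian_filter(x + u, 1.0)                             # plug-in denoising step
    u = u + x - v                                               # dual update

print(f"MSE blurred: {np.mean((y - x_true)**2):.4f}, restored: {np.mean((x - x_true)**2):.4f}")
```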