Paper Group ANR 362
Ensemble Multi-task Gaussian Process Regression with Multiple Latent Processes
Title | Ensemble Multi-task Gaussian Process Regression with Multiple Latent Processes |
Authors | Weitong Ruan, Eric L. Miller |
Abstract | Multi-task/multi-output learning seeks to exploit correlation among tasks to enhance performance over learning or solving each task independently. In this paper, we investigate this problem in the context of Gaussian Processes (GPs) and propose a new model which learns a mixture of latent processes by decomposing the covariance matrix into a sum of structured hidden components, each of which is controlled by a latent GP over input features and a “weight” over tasks. From this sum structure, we propose a parallelizable parameter learning algorithm with a predetermined initialization for the “weights”. We also notice that an ensemble parameter learning approach using mini-batches of training data not only reduces the computational complexity of learning but also improves the regression performance. We evaluate our model on two datasets, the smaller Swiss Jura dataset and the relatively larger ATMS dataset from NOAA. Substantial improvements are observed compared with established alternatives. |
Tasks | Gaussian Processes |
Published | 2017-09-22 |
URL | http://arxiv.org/abs/1709.07903v3 |
http://arxiv.org/pdf/1709.07903v3.pdf | |
PWC | https://paperswithcode.com/paper/ensemble-multi-task-gaussian-process |
Repo | |
Framework | |
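The sum-of-structured-components covariance described in this abstract can be sketched in a few lines. Below is a minimal numpy illustration, not the authors' implementation; the rank-1 task "weights", RBF latent kernels, and all names and shapes are assumptions for illustration.

```python
import numpy as np

def multitask_cov(X, weights, lengthscales):
    """Covariance over (task, input) pairs as a sum of Kronecker products:
    each hidden component couples an RBF kernel over inputs with a rank-1
    outer product of task 'weights', mirroring the sum structure above."""
    K = 0.0
    for w, ell in zip(weights, lengthscales):
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        Kx = np.exp(-0.5 * d2 / ell**2)       # latent GP over input features
        B = np.outer(w, w)                    # "weight" over tasks
        K = K + np.kron(B, Kx)                # one structured hidden component
    return K + 1e-6 * np.eye(K.shape[0])      # jitter for numerical stability

# toy check: 2 tasks, 5 inputs, 2 latent processes -> a (10, 10) PSD matrix
X = np.random.randn(5, 3)
K = multitask_cov(X, weights=[np.array([1.0, 0.5]), np.array([0.2, 1.0])],
                  lengthscales=[1.0, 3.0])
print(K.shape)
```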
Deep Recurrent Gaussian Process with Variational Sparse Spectrum Approximation
Title | Deep Recurrent Gaussian Process with Variational Sparse Spectrum Approximation |
Authors | Roman Föll, Bernard Haasdonk, Markus Hanselmann, Holger Ulmer |
Abstract | Modeling sequential data has become more and more important in practice. Some applications are autonomous driving, virtual sensors and weather forecasting. To model such systems, so-called recurrent models are used. In this article, we introduce two new Deep Recurrent Gaussian Process (DRGP) models based on the Sparse Spectrum Gaussian Process (SSGP) and its improved variational version, the Variational Sparse Spectrum Gaussian Process (VSSGP). We follow the recurrent structure given by an existing DRGP based on a specific sparse Nyström approximation. Therefore, we also variationally integrate out the input-space and hence can propagate uncertainty through the layers. We show that an optimal variational distribution exists for the resulting lower bound. Training is realized through optimizing the variational lower bound. Using Distributed Variational Inference (DVI), we can reduce the computational complexity. We improve over current state-of-the-art methods in prediction accuracy on the experimental datasets used for their evaluation and introduce a new dataset for engine control, named Emission. Furthermore, our method can easily be adapted for unsupervised learning, e.g. the latent variable model and its deep version. |
Tasks | Autonomous Driving, Weather Forecasting |
Published | 2017-11-02 |
URL | http://arxiv.org/abs/1711.00799v2 |
http://arxiv.org/pdf/1711.00799v2.pdf | |
PWC | https://paperswithcode.com/paper/deep-recurrent-gaussian-process-with |
Repo | |
Framework | |
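The sparse spectrum approximation at the core of SSGP/VSSGP builds on random cosine/sine features whose inner product approximates an RBF kernel. A minimal sketch of that building block (not the paper's recurrent or variational machinery; all names are illustrative):

```python
import numpy as np

def ss_features(X, n_feats, lengthscale, rng):
    """Sparse-spectrum random features: frequencies drawn from the RBF
    kernel's spectral density; Phi @ Phi.T approximates the kernel."""
    D = X.shape[1]
    W = rng.standard_normal((n_feats, D)) / lengthscale  # spectral frequencies
    Z = X @ W.T
    return np.hstack([np.cos(Z), np.sin(Z)]) / np.sqrt(n_feats)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))
Phi = ss_features(X, n_feats=500, lengthscale=1.5, rng=rng)
K_approx = Phi @ Phi.T
d2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)
K_exact = np.exp(-0.5 * d2 / 1.5**2)
print(np.abs(K_approx - K_exact).max())  # shrinks as n_feats grows
```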
Yet Another ADNI Machine Learning Paper? Paving The Way Towards Fully-reproducible Research on Classification of Alzheimer’s Disease
Title | Yet Another ADNI Machine Learning Paper? Paving The Way Towards Fully-reproducible Research on Classification of Alzheimer’s Disease |
Authors | Jorge Samper-González, Ninon Burgos, Sabrina Fontanella, Hugo Bertin, Marie-Odile Habert, Stanley Durrleman, Theodoros Evgeniou, Olivier Colliot |
Abstract | In recent years, the number of papers on Alzheimer’s disease classification has increased dramatically, generating interesting methodological ideas on the use of machine learning and feature extraction methods. However, their practical impact has been much more limited and, ultimately, one cannot tell which of these approaches is the most efficient. While over 90% of these works make use of ADNI, an objective comparison between approaches is impossible due to variations in the subjects included, image pre-processing, performance metrics and cross-validation procedures. In this paper, we propose a framework for reproducible classification experiments using multimodal MRI and PET data from ADNI. The core components are: 1) code to automatically convert the full ADNI database into BIDS format; 2) a modular architecture based on Nipype that makes it easy to plug in different classification and feature extraction tools; 3) feature extraction pipelines for MRI and PET data; 4) baseline classification approaches for unimodal and multimodal features. This provides a flexible framework for benchmarking different feature extraction and classification tools in a reproducible manner. We demonstrate its use on all (1519) baseline T1 MR images and all (1102) baseline FDG PET images from ADNI 1, GO and 2 with SPM-based feature extraction pipelines and three different classification techniques (linear SVM, anatomically regularized SVM and multiple kernel learning SVM). The highest accuracies achieved were: 91% for AD vs CN, 83% for MCIc vs CN, 75% for MCIc vs MCInc, 94% for AD-A$\beta$+ vs CN-A$\beta$- and 72% for MCIc-A$\beta$+ vs MCInc-A$\beta$+. The code is publicly available at https://gitlab.icm-institute.org/aramislab/AD-ML (it depends on the Clinica software platform, publicly available at http://www.clinica.run). |
Tasks | |
Published | 2017-09-21 |
URL | http://arxiv.org/abs/1709.07267v1 |
http://arxiv.org/pdf/1709.07267v1.pdf | |
PWC | https://paperswithcode.com/paper/yet-another-adni-machine-learning-paper |
Repo | |
Framework | |
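The baseline evaluation pattern in this framework (a linear SVM under stratified cross-validation) is easy to sketch. This is a hypothetical analogue on stand-in arrays, not the authors' Clinica/Nipype pipeline, which extracts real features from BIDS-converted ADNI data:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

X = np.random.randn(100, 50)        # stand-in for extracted image features
y = np.random.randint(0, 2, 100)    # stand-in for AD vs CN labels

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="balanced_accuracy")
print(scores.mean())
```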
Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice
Title | Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice |
Authors | Jeffrey Pennington, Samuel S. Schoenholz, Surya Ganguli |
Abstract | It is well known that the initialization of weights in deep neural networks can have a dramatic impact on learning speed. For example, ensuring the mean squared singular value of a network’s input-output Jacobian is $O(1)$ is essential for avoiding the exponential vanishing or explosion of gradients. The stronger condition that all singular values of the Jacobian concentrate near $1$ is a property known as dynamical isometry. For deep linear networks, dynamical isometry can be achieved through orthogonal weight initialization and has been shown to dramatically speed up learning; however, it has remained unclear how to extend these results to the nonlinear setting. We address this question by employing powerful tools from free probability theory to compute analytically the entire singular value distribution of a deep network’s input-output Jacobian. We explore the dependence of the singular value distribution on the depth of the network, the weight initialization, and the choice of nonlinearity. Intriguingly, we find that ReLU networks are incapable of dynamical isometry. On the other hand, sigmoidal networks can achieve isometry, but only with orthogonal weight initialization. Moreover, we demonstrate empirically that deep nonlinear networks achieving dynamical isometry learn orders of magnitude faster than networks that do not. Indeed, we show that properly-initialized deep sigmoidal networks consistently outperform deep ReLU networks. Overall, our analysis reveals that controlling the entire distribution of Jacobian singular values is an important design consideration in deep learning. |
Tasks | |
Published | 2017-11-13 |
URL | http://arxiv.org/abs/1711.04735v1 |
http://arxiv.org/pdf/1711.04735v1.pdf | |
PWC | https://paperswithcode.com/paper/resurrecting-the-sigmoid-in-deep-learning |
Repo | |
Framework | |
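The quantity the paper analyzes, the singular value spectrum of the input-output Jacobian, can be computed directly by chaining per-layer Jacobians. A minimal sketch (orthogonal vs. Gaussian init, tanh nonlinearity); reaching true dynamical isometry additionally requires tuning the weight gain near criticality, which this toy version does not do:

```python
import numpy as np

def jacobian_singvals(depth, width, phi, dphi, orthogonal, rng):
    """Singular values of the input-output Jacobian at a random input:
    the product over layers of diag(phi'(h_l)) @ W_l."""
    x = rng.standard_normal(width)
    J = np.eye(width)
    for _ in range(depth):
        if orthogonal:
            W, _ = np.linalg.qr(rng.standard_normal((width, width)))
        else:
            W = rng.standard_normal((width, width)) / np.sqrt(width)
        h = W @ x
        J = np.diag(dphi(h)) @ W @ J      # chain rule, layer by layer
        x = phi(h)
    return np.linalg.svd(J, compute_uv=False)

rng = np.random.default_rng(0)
tanh, dtanh = np.tanh, lambda h: 1 - np.tanh(h) ** 2
sv = jacobian_singvals(50, 100, tanh, dtanh, orthogonal=True, rng=rng)
print(sv.min(), sv.max())  # isometry would concentrate these near 1
```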
Aerial Vehicle Tracking by Adaptive Fusion of Hyperspectral Likelihood Maps
Title | Aerial Vehicle Tracking by Adaptive Fusion of Hyperspectral Likelihood Maps |
Authors | Burak Uzkent, Aneesh Rangnekar, M. J. Hoffman |
Abstract | Hyperspectral cameras provide unique spectral signatures that can be used to consistently distinguish materials and solve surveillance tasks. In this paper, we propose a novel real-time hyperspectral likelihood maps-aided tracking method (HLT) inspired by an adaptive hyperspectral sensor. A moving object tracking system generally consists of registration, object detection, and tracking modules. We focus on the target detection part and remove the need to build any offline classifiers and tune a large number of hyperparameters; instead, we learn a generative target model in an online manner for hyperspectral channels ranging from visible to infrared wavelengths. The key idea is that our adaptive fusion method can combine likelihood maps from multiple bands of hyperspectral imagery into a single, more distinctive representation, increasing the margin between the mean values of foreground and background pixels in the fused map. Experimental results show that the HLT not only outperforms all established fusion methods but is on par with the current state-of-the-art hyperspectral target tracking frameworks. |
Tasks | Object Detection, Object Tracking |
Published | 2017-07-12 |
URL | http://arxiv.org/abs/1707.03553v1 |
http://arxiv.org/pdf/1707.03553v1.pdf | |
PWC | https://paperswithcode.com/paper/aerial-vehicle-tracking-by-adaptive-fusion-of |
Repo | |
Framework | |
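A simplified take on the fusion idea, weighting each band's likelihood map by how well it separates foreground from background, can be sketched as follows. This is an illustrative stand-in for the paper's adaptive fusion, with all names and the margin-based weighting rule assumed:

```python
import numpy as np

def fuse_likelihood_maps(maps, fg_mask):
    """Weight each band's likelihood map by its foreground/background
    mean margin, then combine into one more distinctive map."""
    margins = np.array([m[fg_mask].mean() - m[~fg_mask].mean() for m in maps])
    w = np.clip(margins, 0, None)            # ignore uninformative bands
    w = w / (w.sum() + 1e-12)                # nonnegative, normalized weights
    return sum(wi * m for wi, m in zip(w, maps))

rng = np.random.default_rng(0)
maps = [rng.random((64, 64)) for _ in range(8)]  # per-band likelihood maps
fg = np.zeros((64, 64), dtype=bool)
fg[20:40, 20:40] = True                          # current target estimate
fused = fuse_likelihood_maps(maps, fg)
print(fused.shape)
```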
Recognizing Multi-talker Speech with Permutation Invariant Training
Title | Recognizing Multi-talker Speech with Permutation Invariant Training |
Authors | Dong Yu, Xuankai Chang, Yanmin Qian |
Abstract | In this paper, we propose a novel technique for direct recognition of multiple speech streams given a single channel of mixed speech, without first separating them. Our technique is based on permutation invariant training (PIT) for automatic speech recognition (ASR). In PIT-ASR, we compute the average cross entropy (CE) over all frames in the whole utterance for each possible output-target assignment, pick the one with the minimum CE, and optimize for that assignment. PIT-ASR forces all the frames of the same speaker to be aligned with the same output layer. This strategy elegantly solves the label permutation problem and the speaker tracing problem in one shot. Our experiments on artificially mixed AMI data show that the proposed approach is very promising. |
Tasks | Speech Recognition |
Published | 2017-03-22 |
URL | http://arxiv.org/abs/1704.01985v4 |
http://arxiv.org/pdf/1704.01985v4.pdf | |
PWC | https://paperswithcode.com/paper/recognizing-multi-talker-speech-with |
Repo | |
Framework | |
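The PIT criterion described above is straightforward to write down: score every output-to-speaker assignment over the whole utterance and keep the minimum. A minimal numpy sketch (frame-level log-probabilities and targets are synthetic stand-ins):

```python
import itertools
import numpy as np

def pit_cross_entropy(log_probs, targets):
    """Average per-frame cross entropy for every output-target
    permutation over the utterance; return the minimum and its
    permutation, as in PIT-ASR."""
    S = len(log_probs)                      # number of output streams
    best = (np.inf, None)
    for perm in itertools.permutations(range(S)):
        ce = np.mean([
            -log_probs[o][np.arange(len(targets[s])), targets[s]].mean()
            for o, s in zip(perm, range(S))
        ])
        if ce < best[0]:
            best = (ce, perm)
    return best

T, C = 100, 40                               # frames, output classes
rng = np.random.default_rng(0)
logp = [np.log(rng.dirichlet(np.ones(C), size=T)) for _ in range(2)]
tgts = [rng.integers(0, C, T) for _ in range(2)]
print(pit_cross_entropy(logp, tgts))         # (min CE, winning permutation)
```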
BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
Title | BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain |
Authors | Tianyu Gu, Brendan Dolan-Gavitt, Siddharth Garg |
Abstract | Deep learning-based techniques have achieved state-of-the-art performance on a wide variety of recognition and classification tasks. However, these networks are typically computationally expensive to train, requiring weeks of computation on many GPUs; as a result, many users outsource the training procedure to the cloud or rely on pre-trained models that are then fine-tuned for a specific task. In this paper we show that outsourced training introduces new security risks: an adversary can create a maliciously trained network (a backdoored neural network, or a \emph{BadNet}) that has state-of-the-art performance on the user’s training and validation samples, but behaves badly on specific attacker-chosen inputs. We first explore the properties of BadNets in a toy example, by creating a backdoored handwritten digit classifier. Next, we demonstrate backdoors in a more realistic scenario by creating a U.S. street sign classifier that identifies stop signs as speed limits when a special sticker is added to the stop sign; we further show that the backdoor in our US street sign detector can persist even if the network is later retrained for another task and cause a drop in accuracy of 25% on average when the backdoor trigger is present. These results demonstrate that backdoors in neural networks are both powerful and—because the behavior of neural networks is difficult to explicate—stealthy. This work provides motivation for further research into techniques for verifying and inspecting neural networks, just as we have developed tools for verifying and debugging software. |
Tasks | |
Published | 2017-08-22 |
URL | http://arxiv.org/abs/1708.06733v2 |
http://arxiv.org/pdf/1708.06733v2.pdf | |
PWC | https://paperswithcode.com/paper/badnets-identifying-vulnerabilities-in-the |
Repo | |
Framework | |
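The core attack mechanic, training-data poisoning with a visual trigger, reduces to a few lines. An illustrative sketch on stand-in arrays (not the paper's MNIST or street-sign pipelines; trigger location, size, and poisoning rate are assumptions):

```python
import numpy as np

def poison(images, labels, target_label, rate, rng):
    """BadNet-style poisoning step: stamp a small trigger patch onto a
    fraction of training images and relabel them, so a model trained on
    the result misbehaves only when the trigger is present."""
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -4:, -4:] = 1.0      # bottom-right trigger "sticker"
    labels[idx] = target_label       # attacker-chosen target class
    return images, labels

rng = np.random.default_rng(0)
X = rng.random((1000, 28, 28))
y = rng.integers(0, 10, 1000)
Xp, yp = poison(X, y, target_label=7, rate=0.1, rng=rng)
print((yp == 7).sum() - (y == 7).sum())  # number of relabeled examples
```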
Predicting vehicular travel times by modeling heterogeneous influences between arterial roads
Title | Predicting vehicular travel times by modeling heterogeneous influences between arterial roads |
Authors | Avinash Achar, Venkatesh Sarangan, R Rohith, Anand Sivasubramaniam |
Abstract | The travel time of vehicles in urban settings is a useful and tangible quantity of interest in the context of intelligent transportation systems. We address the problem of travel time prediction in arterial roads using data sampled from probe vehicles. There is only limited literature on methods that use input data from probe vehicles. The spatio-temporal dependencies captured by existing data-driven approaches are either too detailed or very simplistic. We strike a balance between the existing data-driven approaches to account for the varying degrees of influence a given road may experience from its neighbors, while controlling the number of parameters to be learnt. Specifically, we use a NoisyOR conditional probability distribution (CPD) in conjunction with a dynamic Bayesian network (DBN) to model state transitions of the various roads. We propose an efficient algorithm to learn the model parameters and an algorithm for predicting travel times on trips of arbitrary durations. Using synthetic and real-world data traces, we demonstrate the superior performance of the proposed method under different traffic conditions. |
Tasks | |
Published | 2017-11-15 |
URL | http://arxiv.org/abs/1711.05767v1 |
http://arxiv.org/pdf/1711.05767v1.pdf | |
PWC | https://paperswithcode.com/paper/predicting-vehicular-travel-times-by-modeling |
Repo | |
Framework | |
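The NoisyOR CPD is what keeps the parameter count linear in the number of neighboring roads: each active parent independently fails to propagate its influence with some probability. A small sketch of the standard noisy-OR formula (parameter values are illustrative):

```python
import numpy as np

def noisy_or(parent_states, activation_probs, leak=0.01):
    """Noisy-OR CPD: the child (e.g. 'road congested') is active unless
    every active parent independently fails to activate it, modulo a
    leak term for unmodeled causes."""
    fail = np.prod([(1 - p) ** s
                    for s, p in zip(parent_states, activation_probs)])
    return 1 - (1 - leak) * fail

# two congested neighbors, one free-flowing neighbor:
# 1 - 0.99 * 0.4 * 0.7 = 0.7228
print(noisy_or([1, 1, 0], [0.6, 0.3, 0.8]))
```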
On Measuring and Quantifying Performance: Error Rates, Surrogate Loss, and an Example in SSL
Title | On Measuring and Quantifying Performance: Error Rates, Surrogate Loss, and an Example in SSL |
Authors | Marco Loog, Jesse H. Krijthe, Are C. Jensen |
Abstract | In various approaches to learning, notably in domain adaptation, active learning, learning under covariate shift, semi-supervised learning, learning with concept drift, and the like, one often wants to compare a baseline classifier to one or more advanced (or at least different) strategies. In this chapter, we argue that if such classifiers, in their respective training phases, optimize a so-called surrogate loss, it may also be valuable to compare the behavior of this loss on the test set, next to the regular classification error rate. It can provide us with an additional view on the classifiers’ relative performance that error rates cannot capture. As an example, limited but convincing empirical results demonstrate that we may be able to find semi-supervised learning strategies that can guarantee performance improvements, in terms of log-likelihood, with increasing amounts of unlabeled data. In contrast, such a guarantee may be impossible for the classification error rate. |
Tasks | Active Learning, Domain Adaptation |
Published | 2017-07-13 |
URL | http://arxiv.org/abs/1707.04025v1 |
http://arxiv.org/pdf/1707.04025v1.pdf | |
PWC | https://paperswithcode.com/paper/on-measuring-and-quantifying-performance |
Repo | |
Framework | |
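The chapter's central contrast, error rate versus the surrogate loss evaluated on the test set, amounts to computing two numbers from the same predictions. A minimal sketch for a logistic model (synthetic scores; names are illustrative):

```python
import numpy as np

def test_set_views(scores, y):
    """Two views of one classifier on a test set: the 0-1 error rate and
    the negative log-likelihood surrogate it was trained on. The latter
    can keep improving even when the error rate is flat."""
    p = 1 / (1 + np.exp(-scores))            # logistic probabilities
    error = np.mean((p >= 0.5) != y)         # classification error rate
    nll = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    return error, nll

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)
scores = 2.0 * (y - 0.5) + rng.standard_normal(500)  # noisy informative scores
print(test_set_views(scores, y))
```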
Shapelet-based Sparse Representation for Landcover Classification of Hyperspectral Images
Title | Shapelet-based Sparse Representation for Landcover Classification of Hyperspectral Images |
Authors | Ribana Roscher, Björn Waske |
Abstract | This paper presents a sparse representation-based classification approach with a novel dictionary construction procedure. The constructed dictionary makes it possible to integrate sophisticated prior knowledge about the spatial nature of the image. The approach is based on the assumption that each image patch can be factorized into characteristic spatial patterns, also called shapelets, and patch-specific spectral information. A set of shapelets is learned in an unsupervised way and spectral information is embodied by training samples. The combination of shapelets and spectral information is represented in an undercomplete spatial-spectral dictionary for each individual patch, whose elements are linearly combined into a sparse representation of the patch. The patch-based classification is obtained by means of the representation error. Experiments are conducted on three well-known hyperspectral image datasets. They illustrate that our proposed approach shows superior results in comparison to sparse representation-based classifiers that use only limited spatial information, and behaves competitively with or better than state-of-the-art classifiers utilizing spatial information and kernelized sparse representation-based classifiers. |
Tasks | Classification Of Hyperspectral Images, Sparse Representation-based Classification |
Published | 2017-08-20 |
URL | http://arxiv.org/abs/1708.05974v1 |
http://arxiv.org/pdf/1708.05974v1.pdf | |
PWC | https://paperswithcode.com/paper/shapelet-based-sparse-representation-for |
Repo | |
Framework | |
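Classification by representation error, the decision rule at the end of the abstract, is the standard sparse-representation-based classification pattern. A sketch using plain least squares instead of a true sparse solver (dictionaries and shapes are illustrative):

```python
import numpy as np

def src_predict(x, dictionaries):
    """Classify by reconstruction residual: code the sample in each
    class dictionary and pick the class with the smallest error."""
    errors = []
    for D in dictionaries:                      # one dictionary per class
        a, *_ = np.linalg.lstsq(D, x, rcond=None)
        errors.append(np.linalg.norm(x - D @ a))
    return int(np.argmin(errors))

rng = np.random.default_rng(0)
dicts = [rng.standard_normal((50, 10)) for _ in range(3)]
x = dicts[1] @ rng.standard_normal(10)          # sample from class 1
print(src_predict(x, dicts))                    # -> 1
```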
IEOPF: An Active Contour Model for Image Segmentation with Inhomogeneities Estimated by Orthogonal Primary Functions
Title | IEOPF: An Active Contour Model for Image Segmentation with Inhomogeneities Estimated by Orthogonal Primary Functions |
Authors | Chaolu Feng |
Abstract | Image segmentation is still an open problem, especially when the intensities of the objects of interest overlap due to the presence of intensity inhomogeneity (also known as a bias field). To segment images with intensity inhomogeneities, we propose a bias-correction-embedded level set model in which Inhomogeneities are Estimated by Orthogonal Primary Functions (IEOPF). In the proposed model, the smoothly varying bias is estimated by a linear combination of a given set of orthogonal primary functions. An inhomogeneous intensity clustering energy is then defined, and membership functions of the clusters described by the level set function are introduced to rewrite the energy as the data term of the proposed model. As in popular level set methods, a regularization term and an arc length term are also included to regularize and smooth the level set function, respectively. The proposed model is then extended to multichannel and multiphase patterns to segment colour images and images with multiple objects, respectively. It has been extensively tested on both synthetic and real images that are widely used in the literature, as well as the public BrainWeb and IBSR datasets. Experimental results and comparison with state-of-the-art methods demonstrate the advantages of the proposed model in terms of bias correction and segmentation accuracy. |
Tasks | Semantic Segmentation |
Published | 2017-12-05 |
URL | http://arxiv.org/abs/1712.01707v4 |
http://arxiv.org/pdf/1712.01707v4.pdf | |
PWC | https://paperswithcode.com/paper/ieopf-an-active-contour-model-for-image |
Repo | |
Framework | |
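The "orthogonal primary functions" idea, expressing a smooth bias field as a linear combination of orthogonal basis functions, can be sketched with a Legendre tensor basis fitted by least squares. This is a simplified stand-alone illustration, not the paper's level set formulation:

```python
import numpy as np
from numpy.polynomial import legendre

def estimate_bias(image, order=3):
    """Fit a smooth bias field as a linear combination of orthogonal
    Legendre polynomials over the image grid (separable 2D basis)."""
    h, w = image.shape
    yy = np.linspace(-1, 1, h)
    xx = np.linspace(-1, 1, w)
    By = legendre.legvander(yy, order)               # (h, order+1)
    Bx = legendre.legvander(xx, order)               # (w, order+1)
    basis = np.einsum('ik,jl->ijkl', By, Bx).reshape(h * w, -1)
    coef, *_ = np.linalg.lstsq(basis, image.ravel(), rcond=None)
    return (basis @ coef).reshape(h, w)              # smooth bias estimate

img = np.fromfunction(lambda i, j: 1 + 0.01 * i + 0.02 * j, (32, 32))
print(np.abs(estimate_bias(img) - img).max())  # ~0: linear field recovered
```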
Relaxation of the EM Algorithm via Quantum Annealing for Gaussian Mixture Models
Title | Relaxation of the EM Algorithm via Quantum Annealing for Gaussian Mixture Models |
Authors | Hideyuki Miyahara, Koji Tsumura, Yuki Sughiyama |
Abstract | We propose a modified expectation-maximization algorithm that incorporates the concept of quantum annealing, which we call the deterministic quantum annealing expectation-maximization (DQAEM) algorithm. The expectation-maximization (EM) algorithm is an established method for computing maximum likelihood estimates and is used in many practical applications. However, EM is known to depend heavily on initial values, and its estimates are sometimes trapped by local optima. To address this problem, quantum annealing (QA) was proposed as a novel optimization approach motivated by quantum mechanics. By employing QA, we formulate DQAEM and present a theorem that supports its stability. Finally, we present numerical simulations that confirm its efficiency. |
Tasks | |
Published | 2017-01-12 |
URL | http://arxiv.org/abs/1701.03268v1 |
http://arxiv.org/pdf/1701.03268v1.pdf | |
PWC | https://paperswithcode.com/paper/relaxation-of-the-em-algorithm-via-quantum |
Repo | |
Framework | |
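To make the annealing intuition concrete, here is a classical deterministic-annealing analogue for a 1D Gaussian mixture: the E-step responsibilities are tempered by an inverse temperature that is gradually raised toward standard EM. This is explicitly not the paper's quantum formulation, only a related classical scheme for escaping local optima:

```python
import numpy as np

def annealed_em_step(X, mu, var, pi, beta):
    """One EM iteration with an annealed E-step: responsibilities are
    tempered by inverse temperature beta (beta=1 recovers standard EM)."""
    logp = (np.log(pi) - 0.5 * np.log(2 * np.pi * var)
            - 0.5 * (X[:, None] - mu) ** 2 / var)
    r = np.exp(beta * (logp - logp.max(1, keepdims=True)))
    r /= r.sum(1, keepdims=True)                 # tempered responsibilities
    Nk = r.sum(0)
    mu = (r * X[:, None]).sum(0) / Nk
    var = (r * (X[:, None] - mu) ** 2).sum(0) / Nk
    return mu, var, Nk / len(X)

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 300)])
mu, var, pi = np.array([-0.1, 0.1]), np.array([1.0, 1.0]), np.array([0.5, 0.5])
for beta in np.linspace(0.2, 1.0, 25):           # anneal toward standard EM
    mu, var, pi = annealed_em_step(X, mu, var, pi, beta)
print(mu)  # close to the true means -2 and 3
```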
Multilinear Class-Specific Discriminant Analysis
Title | Multilinear Class-Specific Discriminant Analysis |
Authors | Dat Thanh Tran, Moncef Gabbouj, Alexandros Iosifidis |
Abstract | There has been a great effort to transfer linear discriminant techniques that operate on vector data to high-order data, generally referred to as Multilinear Discriminant Analysis (MDA) techniques. Many existing works focus on maximizing the ratio of inter-class variance to intra-class variance defined on tensor data representations. However, there has not been any attempt to employ class-specific discrimination criteria for tensor data. In this paper, we propose a multilinear subspace learning technique suitable for applications requiring class-specific tensor models. The method maximizes the discrimination of each individual class in the feature space while retaining the spatial structure of the input. We evaluate the efficiency of the proposed method on two problems, i.e. facial image analysis and stock price prediction based on limit order book data. |
Tasks | Stock Price Prediction |
Published | 2017-10-29 |
URL | http://arxiv.org/abs/1710.10695v1 |
http://arxiv.org/pdf/1710.10695v1.pdf | |
PWC | https://paperswithcode.com/paper/multilinear-class-specific-discriminant |
Repo | |
Framework | |
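The class-specific criterion can be illustrated in its vector form: maximize out-of-class scatter over in-class scatter, both measured around one class's mean. A sketch under that assumption (the paper applies this kind of ratio mode-wise to tensors, which this does not attempt):

```python
import numpy as np

def class_specific_direction(X, y, cls):
    """Leading direction of the class-specific ratio criterion: the top
    eigenvector of S_in^{-1} S_out, with both scatters centered on the
    mean of the target class."""
    mu = X[y == cls].mean(0)
    Din = X[y == cls] - mu
    Dout = X[y != cls] - mu
    S_in = Din.T @ Din + 1e-6 * np.eye(X.shape[1])   # regularized
    S_out = Dout.T @ Dout
    vals, vecs = np.linalg.eig(np.linalg.solve(S_in, S_out))
    return np.real(vecs[:, np.argmax(np.real(vals))])

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(3, 1, (50, 5))])
y = np.array([0] * 50 + [1] * 50)
print(class_specific_direction(X, y, cls=0))
```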
Multi-objective Contextual Multi-armed Bandit with a Dominant Objective
Title | Multi-objective Contextual Multi-armed Bandit with a Dominant Objective |
Authors | Cem Tekin, Eralp Turgay |
Abstract | In this paper, we propose a new multi-objective contextual multi-armed bandit (MAB) problem with two objectives, where one of the objectives dominates the other objective. Unlike single-objective MAB problems in which the learner obtains a random scalar reward for each arm it selects, in the proposed problem, the learner obtains a random reward vector, where each component of the reward vector corresponds to one of the objectives and the distribution of the reward depends on the context that is provided to the learner at the beginning of each round. We call this problem contextual multi-armed bandit with a dominant objective (CMAB-DO). In CMAB-DO, the goal of the learner is to maximize its total reward in the non-dominant objective while ensuring that it maximizes its total reward in the dominant objective. In this case, the optimal arm given a context is the one that maximizes the expected reward in the non-dominant objective among all arms that maximize the expected reward in the dominant objective. First, we show that the optimal arm lies in the Pareto front. Then, we propose the multi-objective contextual multi-armed bandit algorithm (MOC-MAB), and define two performance measures: the 2-dimensional (2D) regret and the Pareto regret. We show that both the 2D regret and the Pareto regret of MOC-MAB are sublinear in the number of rounds. We also compare the performance of the proposed algorithm with other state-of-the-art methods in synthetic and real-world datasets. The proposed model and the algorithm have a wide range of real-world applications that involve multiple and possibly conflicting objectives ranging from wireless communication to medical diagnosis and recommender systems. |
Tasks | Medical Diagnosis, Recommendation Systems |
Published | 2017-08-18 |
URL | http://arxiv.org/abs/1708.05655v3 |
http://arxiv.org/pdf/1708.05655v3.pdf | |
PWC | https://paperswithcode.com/paper/multi-objective-contextual-multi-armed-bandit |
Repo | |
Framework | |
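The lexicographic arm choice at the heart of CMAB-DO, maximize the non-dominant objective only among arms that are (near-)optimal in the dominant one, is easy to sketch. This simplified stand-in uses point estimates and a fixed tolerance instead of MOC-MAB's confidence-based selection:

```python
import numpy as np

def select_arm(mu_dom, mu_nondom, tol=0.05):
    """Among arms whose estimated dominant reward is within tol of the
    best, pick the one with the highest estimated non-dominant reward."""
    near_best = np.flatnonzero(mu_dom >= mu_dom.max() - tol)
    return near_best[np.argmax(mu_nondom[near_best])]

mu_dom = np.array([0.90, 0.88, 0.60])     # estimated dominant rewards
mu_nondom = np.array([0.20, 0.70, 0.95])  # estimated non-dominant rewards
print(select_arm(mu_dom, mu_nondom))      # -> 1: near-tied on dominant,
                                          #    better on the other objective
```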
Twitter100k: A Real-world Dataset for Weakly Supervised Cross-Media Retrieval
Title | Twitter100k: A Real-world Dataset for Weakly Supervised Cross-Media Retrieval |
Authors | Yuting Hu, Liang Zheng, Yi Yang, Yongfeng Huang |
Abstract | This paper contributes a new large-scale dataset for weakly supervised cross-media retrieval, named Twitter100k. Current datasets, such as Wikipedia, NUS-WIDE and Flickr30k, have two major limitations. First, these datasets lack content diversity, i.e., only some pre-defined classes are covered. Second, texts in these datasets are written in well-organized language, leading to inconsistency with realistic applications. To overcome these drawbacks, the proposed Twitter100k dataset is characterized by two aspects: 1) it has 100,000 image-text pairs randomly crawled from Twitter and thus has no constraint on the image categories; 2) text in Twitter100k is written in informal language by the users. Since strongly supervised methods leverage class labels that may be missing in practice, this paper focuses on weakly supervised learning for cross-media retrieval, in which only text-image pairs are exploited during training. We extensively benchmark the performance of four subspace learning methods and three variants of the Correspondence AutoEncoder, along with various text features, on Wikipedia, Flickr30k and Twitter100k, and provide novel insights. As a minor contribution, inspired by the characteristics of Twitter100k, we propose an OCR-based cross-media retrieval method. In experiments, we show that the proposed OCR-based method improves the baseline performance. |
Tasks | Optical Character Recognition |
Published | 2017-03-20 |
URL | http://arxiv.org/abs/1703.06618v1 |
http://arxiv.org/pdf/1703.06618v1.pdf | |
PWC | https://paperswithcode.com/paper/twitter100k-a-real-world-dataset-for-weakly |
Repo | |
Framework | |
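The OCR-based retrieval idea reduces to matching text read from the image against the tweet corpus. A hedged sketch with the OCR output stubbed in as a string (the text-matching scheme via TF-IDF cosine similarity is an assumption, not necessarily the paper's):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_tweets(ocr_text, tweets):
    """Rank tweets by TF-IDF cosine similarity to text extracted from
    an image by OCR."""
    vec = TfidfVectorizer().fit(tweets + [ocr_text])
    sims = cosine_similarity(vec.transform([ocr_text]), vec.transform(tweets))
    return np.argsort(-sims[0])              # best-matching tweets first

tweets = ["big sale at joe's pizza today", "sunset over the bay", "go team!"]
print(rank_tweets("JOE'S PIZZA SALE", tweets))  # pizza tweet ranks first
```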