January 28, 2020

3084 words 15 mins read

Paper Group ANR 1000

Error Correcting Algorithms for Sparsely Correlated Regressors. Bayesian Image Classification with Deep Convolutional Gaussian Processes. Architecture Compression. Plasmodium Detection Using Simple CNN and Clustered GLCM Features. Diameter-based Interactive Structure Discovery. A Causality-Guided Prediction of the TED Talk Ratings from the Speech-T …

Error Correcting Algorithms for Sparsely Correlated Regressors

Title Error Correcting Algorithms for Sparsely Correlated Regressors
Authors Andrés Corrada-Emmanuel, Edward Zahrebelski, Edward Pantridge
Abstract Autonomy and adaptation of machines require that they be able to measure their own errors. We consider the advantages and limitations of such an approach when a machine has to measure the error in a regression task. How can a machine measure the error of regression sub-components when it does not have the ground truth for the correct predictions? A compressed sensing approach applied to the error signal of the regressors can recover their precision error without any ground truth. It allows some regressors to be \emph{strongly correlated}, as long as not too many are so related. Its solutions, however, are not unique - a property of ground truth inference solutions. Adding $\ell_1$-minimization as a condition can recover the correct solution in settings where error correction is possible. We briefly discuss the similarity of the mathematics of ground truth inference for regressors to that for classifiers.
Tasks
Published 2019-06-17
URL https://arxiv.org/abs/1906.07291v1
PDF https://arxiv.org/pdf/1906.07291v1.pdf
PWC https://paperswithcode.com/paper/error-correcting-algorithms-for-sparsely
Repo
Framework
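
The abstract above describes recovering regressor error without ground truth. Below is a minimal sketch, under the simplifying assumption of fully independent errors, of the core observation that pairwise prediction differences yield a linear system in the unknown error variances; it is not the paper's algorithm, which additionally tolerates some strongly correlated regressors via an $\ell_1$-minimization condition.

```python
# Minimal sketch (not the paper's algorithm): estimating each regressor's
# error variance without ground truth, assuming independent errors.
# For independent errors, Var(f_i - f_j) = v_i + v_j, a linear system in v.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, true_var = 10_000, np.array([0.5, 1.0, 2.0, 0.25])      # hypothetical setup
y = rng.normal(size=n)                                      # unknown ground truth
preds = [y + rng.normal(scale=np.sqrt(v), size=n) for v in true_var]

pairs = list(combinations(range(len(preds)), 2))
A = np.zeros((len(pairs), len(preds)))
d = np.zeros(len(pairs))
for row, (i, j) in enumerate(pairs):
    A[row, [i, j]] = 1.0
    d[row] = np.var(preds[i] - preds[j])    # observable without ground truth

v_hat, *_ = np.linalg.lstsq(A, d, rcond=None)
print("estimated error variances:", np.round(v_hat, 3))
print("true error variances:     ", true_var)
```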

Bayesian Image Classification with Deep Convolutional Gaussian Processes

Title Bayesian Image Classification with Deep Convolutional Gaussian Processes
Authors Vincent Dutordoir, Mark van der Wilk, Artem Artemev, James Hensman
Abstract In decision-making systems, it is important to have classifiers that have calibrated uncertainties, with an optimisation objective that can be used for automated model selection and training. Gaussian processes (GPs) provide uncertainty estimates and a marginal likelihood objective, but their weak inductive biases lead to inferior accuracy. This has limited their applicability in certain tasks (e.g. image classification). We propose a translation-insensitive convolutional kernel, which relaxes the translation invariance constraint imposed by previous convolutional GPs. We show how we can use the marginal likelihood to learn the degree of insensitivity. We also reformulate GP image-to-image convolutional mappings as multi-output GPs, leading to deep convolutional GPs. We show experimentally that our new kernel improves performance in both single-layer and deep models. We also demonstrate that our fully Bayesian approach improves on dropout-based Bayesian deep learning methods in terms of uncertainty and marginal likelihood estimates.
Tasks Decision Making, Gaussian Processes, Image Classification, Model Selection
Published 2019-02-15
URL https://arxiv.org/abs/1902.05888v2
PDF https://arxiv.org/pdf/1902.05888v2.pdf
PWC https://paperswithcode.com/paper/translation-insensitivity-for-deep
Repo
Framework
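
As a rough illustration of the translation-insensitive convolutional kernel idea described above, the sketch below compares all patch pairs between two images with an RBF kernel and down-weights comparisons between distant patch locations with a second RBF over locations. The hyperparameters are fixed placeholders here, whereas the paper learns the degree of insensitivity from the marginal likelihood; this is not the authors' implementation.

```python
# Sketch of a translation-insensitive convolutional kernel between two images:
# sum patch-RBF similarities over all patch pairs, weighted by an RBF over
# patch locations. A very large location lengthscale recovers a translation-
# invariant kernel; a small one makes the kernel location-sensitive.
import numpy as np

def extract_patches(img, w=5):
    H, W = img.shape
    patches, locs = [], []
    for i in range(H - w + 1):
        for j in range(W - w + 1):
            patches.append(img[i:i+w, j:j+w].ravel())
            locs.append((i, j))
    return np.array(patches), np.array(locs, dtype=float)

def rbf(a, b, lengthscale):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def conv_kernel(x1, x2, patch_ls=1.0, loc_ls=3.0, w=5):
    p1, l1 = extract_patches(x1, w)
    p2, l2 = extract_patches(x2, w)
    return np.sum(rbf(p1, p2, patch_ls) * rbf(l1, l2, loc_ls))

a, b = np.random.rand(14, 14), np.random.rand(14, 14)
print(conv_kernel(a, b), conv_kernel(a, a))
```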

Architecture Compression

Title Architecture Compression
Authors Anubhav Ashok
Abstract In this paper we propose a novel approach to model compression termed Architecture Compression. Instead of operating on the weight or filter space of the network like classical model compression methods, our approach operates on the architecture space. A 1-D CNN encoder-decoder is trained to learn a mapping from discrete architecture space to a continuous embedding and back. Additionally, this embedding is jointly trained to regress accuracy and parameter count in order to incorporate information about the architecture’s effectiveness on the dataset. During the compression phase, we first encode the network and then perform gradient descent in continuous space to optimize a compression objective function that maximizes accuracy and minimizes parameter count. The final continuous feature is then mapped to a discrete architecture using the decoder. We demonstrate the merits of this approach on visual recognition datasets such as CIFAR-10, CIFAR-100, Fashion-MNIST and SVHN and achieve a greater than 20x compression on CIFAR-10.
Tasks Model Compression
Published 2019-02-08
URL http://arxiv.org/abs/1902.03326v3
PDF http://arxiv.org/pdf/1902.03326v3.pdf
PWC https://paperswithcode.com/paper/architecture-compression
Repo
Framework
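
The sketch below illustrates the three ingredients the abstract names: a 1-D CNN encoder/decoder over a (toy) discrete architecture encoding, auxiliary heads regressing accuracy and parameter count, and gradient descent in the continuous embedding on a compression objective. The layer vocabulary, sequence encoding, network sizes and loss weights are hypothetical placeholders, not the paper's model.

```python
# Illustrative sketch (assumptions, not the paper's exact model): architectures
# are encoded as sequences of layer-type ids; an encoder maps them to a
# continuous embedding, heads predict accuracy and parameter count, and
# compression is gradient descent in embedding space followed by decoding.
import torch
import torch.nn as nn

VOCAB, SEQ_LEN, EMB = 8, 16, 32     # hypothetical layer vocabulary / depth

class ArchAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, 16)
        self.encoder = nn.Sequential(
            nn.Conv1d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, EMB))
        self.decoder = nn.Linear(EMB, SEQ_LEN * VOCAB)   # logits per position
        self.acc_head = nn.Linear(EMB, 1)                # predicted accuracy
        self.param_head = nn.Linear(EMB, 1)              # predicted #params

    def encode(self, arch_ids):
        return self.encoder(self.embed(arch_ids).transpose(1, 2))

    def decode(self, z):
        return self.decoder(z).view(-1, VOCAB, SEQ_LEN)

model = ArchAutoencoder()
# ... train jointly on (architecture, accuracy, parameter-count) tuples ...

# Compression phase: optimise the embedding itself.
arch = torch.randint(0, VOCAB, (1, SEQ_LEN))
z = model.encode(arch).detach().requires_grad_(True)
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = -model.acc_head(z).mean() + 0.1 * model.param_head(z).mean()
    loss.backward()
    opt.step()
compressed_ids = model.decode(z).argmax(dim=1)   # back to a discrete architecture
print(compressed_ids)
```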

Plasmodium Detection Using Simple CNN and Clustered GLCM Features

Title Plasmodium Detection Using Simple CNN and Clustered GLCM Features
Authors Julisa Bana Abraham
Abstract Malaria is a serious disease caused by the Plasmodium parasite, which is transmitted through the bite of a female Anopheles mosquito and invades human erythrocytes. Malaria must be recognized precisely in order to treat the patient in time and to prevent further spread of infection. The standard diagnostic technique using microscopic examination is inefficient: the quality of the diagnosis depends on the quality of the blood smears and the experience of the microscopists in classifying and counting infected and non-infected cells. Convolutional Neural Networks (CNNs) are a class of deep learning models that automate feature engineering and learn effective features, which can be very effective in diagnosing malaria. This study proposes an intelligent system based on a simple CNN for detecting malaria parasites in images of thin blood smears. The CNN model obtained a high sensitivity of 97% and a relatively high positive predictive value (PPV) of 81%. This study also proposes a false-positive reduction method that clusters features extracted from the gray-level co-occurrence matrix (GLCM) of the regions of interest (ROIs). Adding the GLCM features can significantly reduce false positives. However, this technique requires manually setting silhouette and Euclidean distance limits to ensure cluster quality, so that it does not adversely affect sensitivity.
Tasks Feature Engineering
Published 2019-09-28
URL https://arxiv.org/abs/1909.13101v1
PDF https://arxiv.org/pdf/1909.13101v1.pdf
PWC https://paperswithcode.com/paper/plasmodium-detection-using-simple-cnn-and
Repo
Framework
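
Below is a hedged sketch of the GLCM-based false-positive filtering idea: extract Haralick-style texture features from candidate ROIs with scikit-image and cluster them with k-means, so that ROIs falling in a cluster dominated by background-like texture can be discarded. The feature set, cluster count and thresholds are assumptions, not the paper's settings.

```python
# Sketch (assumptions, not the paper's exact pipeline): GLCM texture features
# from candidate ROIs, clustered with k-means to separate likely parasites
# from false positives. Requires scikit-image (>= 0.19) and scikit-learn.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def glcm_features(roi_gray):
    """roi_gray: 2-D uint8 array for one region of interest."""
    glcm = graycomatrix(roi_gray, distances=[1],
                        angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.array([graycoprops(glcm, p).mean() for p in props])

# Hypothetical candidate ROIs produced by the CNN detector.
rois = [np.random.randint(0, 256, (32, 32), dtype=np.uint8) for _ in range(50)]
X = np.array([glcm_features(r) for r in rois])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("silhouette:", silhouette_score(X, km.labels_))
# Keep only ROIs in the cluster judged to contain parasites; the cluster choice
# and the silhouette / distance limits are set manually, as the abstract notes.
keep = [r for r, lab in zip(rois, km.labels_) if lab == 0]
```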

Diameter-based Interactive Structure Discovery

Title Diameter-based Interactive Structure Discovery
Authors Christopher Tosh, Daniel Hsu
Abstract We introduce interactive structure discovery, a generic framework that encompasses many interactive learning settings, including active learning, top-k item identification, interactive drug discovery, and others. We adapt a recently developed active learning algorithm of Tosh and Dasgupta (2017) for interactive structure discovery, and show that the new algorithm can be made noise-tolerant and enjoys favorable query complexity bounds.
Tasks Active Learning, Drug Discovery
Published 2019-06-05
URL https://arxiv.org/abs/1906.02101v2
PDF https://arxiv.org/pdf/1906.02101v2.pdf
PWC https://paperswithcode.com/paper/diameter-based-interactive-structure-search
Repo
Framework

A Causality-Guided Prediction of the TED Talk Ratings from the Speech-Transcripts using Neural Networks

Title A Causality-Guided Prediction of the TED Talk Ratings from the Speech-Transcripts using Neural Networks
Authors Md Iftekhar Tanveer, Md Kamrul Hasan, Daniel Gildea, M. Ehsan Hoque
Abstract Automated prediction of public speaking performance enables novel systems for tutoring public speaking skills. We use the largest open repository—TED Talks—to predict the ratings provided by the online viewers. The dataset contains over 2200 talk transcripts and the associated meta information including over 5.5 million ratings from spontaneous visitors to the website. We carefully removed the bias present in the dataset (e.g., the speakers’ reputations, popularity gained by publicity, etc.) by modeling the data generating process using a causal diagram. We use a word sequence based recurrent architecture and a dependency tree based recursive architecture as the neural networks for predicting the TED talk ratings. Our neural network models can predict the ratings with an average F-score of 0.77 which largely outperforms the competitive baseline method.
Tasks
Published 2019-05-21
URL https://arxiv.org/abs/1905.08392v1
PDF https://arxiv.org/pdf/1905.08392v1.pdf
PWC https://paperswithcode.com/paper/a-causality-guided-prediction-of-the-ted-talk
Repo
Framework
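
As a rough illustration of the word-sequence recurrent branch described above, the sketch below runs an embedding plus LSTM over integer-encoded transcripts and predicts one score per rating label. The vocabulary size, sequence length and number of rating categories are placeholders, and the paper's causal-diagram-based debiasing of the targets is not shown.

```python
# Sketch of a word-sequence recurrent rating predictor (placeholder sizes;
# the paper's causal debiasing of the rating targets is omitted here).
import numpy as np
import tensorflow as tf

VOCAB, MAX_LEN, N_RATINGS = 20_000, 400, 14   # hypothetical values

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 128),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(N_RATINGS, activation="sigmoid"),  # one score per rating label
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Toy batch: integer-encoded transcripts and binarised rating labels.
x = np.random.randint(0, VOCAB, size=(8, MAX_LEN))
y = np.random.randint(0, 2, size=(8, N_RATINGS)).astype("float32")
model.fit(x, y, epochs=1, verbose=0)
```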

On the Limitations of Representing Functions on Sets

Title On the Limitations of Representing Functions on Sets
Authors Edward Wagstaff, Fabian B. Fuchs, Martin Engelcke, Ingmar Posner, Michael Osborne
Abstract Recent work on the representation of functions on sets has considered the use of summation in a latent space to enforce permutation invariance. In particular, it has been conjectured that the dimension of this latent space may remain fixed as the cardinality of the sets under consideration increases. However, we demonstrate that the analysis leading to this conjecture requires mappings which are highly discontinuous and argue that this is only of limited practical use. Motivated by this observation, we prove that an implementation of this model via continuous mappings (as provided by e.g. neural networks or Gaussian processes) actually imposes a constraint on the dimensionality of the latent space. Practical universal function representation for set inputs can only be achieved with a latent dimension at least the size of the maximum number of input elements.
Tasks Gaussian Processes
Published 2019-01-25
URL https://arxiv.org/abs/1901.09006v2
PDF https://arxiv.org/pdf/1901.09006v2.pdf
PWC https://paperswithcode.com/paper/on-the-limitations-of-representing-functions
Repo
Framework
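
The object the paper analyzes is the sum-decomposition f(X) = rho(sum over x of phi(x)). The minimal sketch below shows that structure with untrained placeholder networks, with the latent dimension chosen to respect the paper's lower bound (at least the maximum set size) for continuous universal representation.

```python
# A sum-decomposable (DeepSets-style) model f(X) = rho(sum_x phi(x)).
# The paper's result: with continuous phi and rho, universal representation of
# set functions requires the latent dimension to be at least the maximum
# number of set elements, i.e. LATENT_DIM >= MAX_SET_SIZE.
import torch
import torch.nn as nn

MAX_SET_SIZE, LATENT_DIM = 10, 10   # chosen to respect the lower bound

phi = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, LATENT_DIM))
rho = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, 1))

def f(x_set):
    """x_set: (n, 1) tensor of set elements, n <= MAX_SET_SIZE."""
    return rho(phi(x_set).sum(dim=0))   # permutation-invariant by construction

print(f(torch.rand(7, 1)))              # same output under any permutation
```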

RNN-based Online Handwritten Character Recognition Using Accelerometer and Gyroscope Data

Title RNN-based Online Handwritten Character Recognition Using Accelerometer and Gyroscope Data
Authors Davit Soselia, Shota Amashukeli, Irakli Koberidze, Levan Shugliashvili
Abstract This work explores an RNN-based approach to the online handwritten character recognition problem. Our method uses data from an accelerometer and a gyroscope mounted on a handheld pen-like device to train and run a character prediction model. We have built a dataset of timestamped gyroscope and accelerometer data gathered during the manual process of handwriting Latin characters, labeled with the character being written; in total, the dataset consists of 1500 gyroscope and accelerometer data sequences for 8 characters of the Latin alphabet from 6 different people, and 1500 samples for each of 20 characters of the Georgian alphabet from 5 different people, with each sequence containing the gyroscope and accelerometer data captured during the writing of a particular character, sampled once every 10 ms. We train an RNN-based neural network architecture on this dataset to predict the character being written. The model is optimized with categorical cross-entropy loss and the RMSprop optimizer and achieves high accuracy on test data.
Tasks
Published 2019-07-24
URL https://arxiv.org/abs/1907.12935v1
PDF https://arxiv.org/pdf/1907.12935v1.pdf
PWC https://paperswithcode.com/paper/rnn-based-online-handwritten-character
Repo
Framework
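
A minimal Keras sketch of the setup described above: an LSTM classifier over 6-channel accelerometer plus gyroscope sequences, trained with the categorical cross-entropy loss and RMSprop optimizer named in the abstract. The sequence length, hidden size and class count are illustrative assumptions.

```python
# Sketch of an RNN character classifier over accelerometer + gyroscope
# sequences (6 channels), using the loss/optimizer named in the abstract.
# Sequence length and class count are illustrative assumptions.
import numpy as np
import tensorflow as tf

SEQ_LEN, N_CHANNELS, N_CLASSES = 200, 6, 28   # e.g. 8 Latin + 20 Georgian characters

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN, N_CHANNELS)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="rmsprop", loss="categorical_crossentropy",
              metrics=["accuracy"])

# Toy batch: one-hot labels; sequences are sampled once every 10 ms.
x = np.random.randn(16, SEQ_LEN, N_CHANNELS).astype("float32")
y = tf.keras.utils.to_categorical(np.random.randint(0, N_CLASSES, 16), N_CLASSES)
model.fit(x, y, epochs=1, verbose=0)
```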

Generalization ability of region proposal networks for multispectral person detection

Title Generalization ability of region proposal networks for multispectral person detection
Authors Kevin Fritz, Daniel König, Ulrich Klauck, Michael Teutsch
Abstract Multispectral person detection aims at automatically localizing humans in images that consist of multiple spectral bands. Usually, the visual-optical (VIS) and the thermal infrared (IR) spectra are combined to achieve higher robustness for person detection especially in insufficiently illuminated scenes. This paper focuses on analyzing existing detection approaches for their generalization ability. Generalization is a key feature for machine learning based detection algorithms that are supposed to perform well across different datasets. Inspired by recent literature regarding person detection in the VIS spectrum, we perform a cross-validation study to empirically determine the most promising dataset to train a well-generalizing detector. Therefore, we pick one reference Deep Convolutional Neural Network (DCNN) architecture and three different multispectral datasets. The Region Proposal Network (RPN) originally introduced for object detection within the popular Faster R-CNN is chosen as a reference DCNN. The reason is that a stand-alone RPN is able to serve as a competitive detector for two-class problems such as person detection. Furthermore, current state-of-the-art approaches initially apply an RPN followed by individual classifiers. The three considered datasets are the KAIST Multispectral Pedestrian Benchmark including recently published improved annotations for training and testing, the Tokyo Multi-spectral Semantic Segmentation dataset, and the OSU Color-Thermal dataset including recently released annotations. The experimental results show that the KAIST Multispectral Pedestrian Benchmark with its improved annotations provides the best basis to train a DCNN with good generalization ability compared to the other two multispectral datasets. On average, this detection model achieves a log-average Miss Rate (MR) of 29.74 % evaluated on the reasonable test subsets of the three datasets.
Tasks Human Detection, Object Detection, Semantic Segmentation
Published 2019-05-07
URL https://arxiv.org/abs/1905.02758v1
PDF https://arxiv.org/pdf/1905.02758v1.pdf
PWC https://paperswithcode.com/paper/generalization-ability-of-region-proposal
Repo
Framework
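
The results above are reported as a log-average Miss Rate (MR). Below is a generic utility, not the authors' evaluation code, computing the metric as it is commonly defined for pedestrian detection: the geometric mean of the miss rate sampled at nine FPPI points log-spaced between 0.01 and 1; the fallback for curves that never reach a reference FPPI is an assumption.

```python
# Log-average miss rate (MR) as commonly used in pedestrian detection:
# average (in log space) the miss rate at 9 FPPI points spaced in [1e-2, 1e0].
import numpy as np

def log_average_miss_rate(fppi, miss_rate):
    """fppi (sorted ascending) and miss_rate describe the detector's MR-FPPI curve."""
    ref_points = np.logspace(-2.0, 0.0, 9)
    samples = []
    for ref in ref_points:
        below = np.where(fppi <= ref)[0]
        # If the curve never reaches this FPPI, fall back to the smallest observed MR.
        samples.append(miss_rate[below[-1]] if below.size else miss_rate.min())
    return np.exp(np.mean(np.log(np.maximum(samples, 1e-10))))

# Toy curve: miss rate decreasing as FPPI increases.
fppi = np.logspace(-3, 1, 50)
mr = np.clip(1.0 / (1.0 + 5.0 * fppi), 1e-4, 1.0)
print(f"log-average MR: {log_average_miss_rate(fppi, mr):.3f}")
```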

Image Evolution Trajectory Prediction and Classification from Baseline using Learning-based Patch Atlas Selection for Early Diagnosis

Title Image Evolution Trajectory Prediction and Classification from Baseline using Learning-based Patch Atlas Selection for Early Diagnosis
Authors Can Gafuroglu, Islem Rekik
Abstract Patients initially diagnosed with early mild cognitive impairment (eMCI) are known to be a clinically heterogeneous group with very subtle patterns of brain atrophy. To examine the borders between normal controls (NC) and eMCI, Magnetic Resonance Imaging (MRI) was extensively used as a non-invasive imaging modality to pin down subtle changes in brain images of MCI patients. However, eMCI research remains limited by the number of available MRI acquisition timepoints. Ideally, one would learn how to diagnose MCI patients in an early stage from MRI data acquired at a single timepoint, while leveraging ‘non-existing’ follow-up observations. To this aim, we propose novel supervised and unsupervised frameworks that learn how to jointly predict and label the evolution trajectory of intensity patches, each seeded at a specific brain landmark, from a baseline intensity patch. Specifically, both strategies aim to identify the best training atlas patches at the baseline timepoint to predict and classify the evolution trajectory of a given testing baseline patch. The supervised technique learns how to select the best atlas patches by training bidirectional mappings from the space of pairwise patch similarities to their corresponding prediction errors when one patch is used to predict the other. On the other hand, the unsupervised technique learns a manifold of baseline atlas and testing patches using multiple kernels to well capture patch distributions at multiple scales. Once the best baseline atlas patches are selected, we retrieve their evolution trajectories and average them to predict the evolution trajectory of the testing baseline patch. Next, we input the predicted trajectories to an ensemble of linear classifiers, each trained at a specific landmark. Our classification accuracy increased by up to 10 percentage points in comparison to single-timepoint-based classification methods.
Tasks Trajectory Prediction
Published 2019-07-13
URL https://arxiv.org/abs/1907.06064v1
PDF https://arxiv.org/pdf/1907.06064v1.pdf
PWC https://paperswithcode.com/paper/image-evolution-trajectory-prediction-and
Repo
Framework
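
A minimal sketch of the prediction step shared by both frameworks described above: select the atlas baseline patches most relevant to a test baseline patch and average their follow-up trajectories. Selection here uses a plain correlation similarity, standing in for the paper's learned bidirectional mappings (supervised) or multi-kernel manifold (unsupervised); all sizes are hypothetical.

```python
# Sketch of the prediction step: pick the top-K atlas baseline patches most
# similar to the test baseline patch and average their follow-up trajectories.
import numpy as np

rng = np.random.default_rng(0)
n_atlas, patch_dim, n_timepoints, K = 40, 125, 3, 5    # hypothetical sizes

atlas_baseline = rng.normal(size=(n_atlas, patch_dim))                # t0 patches
atlas_traject = rng.normal(size=(n_atlas, n_timepoints, patch_dim))   # t0..t2
test_baseline = rng.normal(size=patch_dim)

def similarity(a, b):
    return np.corrcoef(a, b)[0, 1]

scores = np.array([similarity(p, test_baseline) for p in atlas_baseline])
top_k = np.argsort(scores)[-K:]
predicted_trajectory = atlas_traject[top_k].mean(axis=0)   # (n_timepoints, patch_dim)
print(predicted_trajectory.shape)
# The predicted trajectories (one per landmark) would then feed an ensemble of
# linear classifiers for NC vs eMCI classification, as described above.
```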

High-Order Langevin Diffusion Yields an Accelerated MCMC Algorithm

Title High-Order Langevin Diffusion Yields an Accelerated MCMC Algorithm
Authors Wenlong Mou, Yi-An Ma, Martin J. Wainwright, Peter L. Bartlett, Michael I. Jordan
Abstract We propose a Markov chain Monte Carlo (MCMC) algorithm based on third-order Langevin dynamics for sampling from distributions with log-concave and smooth densities. The higher-order dynamics allow for more flexible discretization schemes, and we develop a specific method that combines splitting with more accurate integration. For a broad class of $d$-dimensional distributions arising from generalized linear models, we prove that the resulting third-order algorithm produces samples from a distribution that is at most $\varepsilon > 0$ in Wasserstein distance from the target distribution in $O\left(\frac{d^{1/3}}{ \varepsilon^{2/3}} \right)$ steps. This result requires only Lipschitz conditions on the gradient. For general strongly convex potentials with $\alpha$-th order smoothness, we prove that the mixing time scales as $O \left(\frac{d^{1/3}}{\varepsilon^{2/3}} + \frac{d^{1/2}}{\varepsilon^{1/(\alpha - 1)}} \right)$.
Tasks
Published 2019-08-28
URL https://arxiv.org/abs/1908.10859v1
PDF https://arxiv.org/pdf/1908.10859v1.pdf
PWC https://paperswithcode.com/paper/high-order-langevin-diffusion-yields-an
Repo
Framework
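
The paper's third-order dynamics and its splitting-based integrator are not reproduced here; for context, the sketch below shows the first-order unadjusted Langevin algorithm (ULA) baseline that such higher-order schemes improve upon, applied to a simple log-concave target.

```python
# Baseline for context (NOT the paper's third-order method): the first-order
# unadjusted Langevin algorithm (ULA) for a smooth, log-concave target.
# Higher-order Langevin dynamics admit more accurate integrators and the
# improved dimension/accuracy dependence stated in the abstract.
import numpy as np

def ula(grad_log_p, x0, step, n_steps, rng):
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        noise = rng.normal(size=x.shape)
        x = x + step * grad_log_p(x) + np.sqrt(2.0 * step) * noise
        samples.append(x.copy())
    return np.array(samples)

# Example target: standard Gaussian in d dimensions, so grad log p(x) = -x.
d = 10
rng = np.random.default_rng(0)
samples = ula(lambda x: -x, x0=np.zeros(d), step=0.05, n_steps=5000, rng=rng)
print("sample mean norm:", np.linalg.norm(samples[1000:].mean(axis=0)))
```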

Ingesting High-Velocity Streaming Graphs from Social Media Sources

Title Ingesting High-Velocity Streaming Graphs from Social Media Sources
Authors Subhasis Dasgupta, Aditya Bagchi, Amarnath Gupta
Abstract Many data science applications like social network analysis use graphs as their primary form of data. However, acquiring graph-structured data from social media presents some interesting challenges. The first challenge is the high data velocity and bursty nature of the social media data. The second challenge is that the complex nature of the data makes the ingestion process expensive. If we want to store the streaming graph data in a graph database, we face a third challenge – the database is very often unable to sustain the ingestion of high-velocity, high-burst data. We have developed an adaptive buffering mechanism and a graph compression technique that effectively mitigates the problem. A novel aspect of our method is that the adaptive buffering algorithm uses the data rate, the data content as well as the CPU resources of the database machine to determine an optimal data ingestion mechanism. We further show that an ingestion-time graph-compression strategy improves the efficiency of the data ingestion into the database. We have verified the efficacy of our ingestion optimization strategy through extensive experiments.
Tasks
Published 2019-05-20
URL https://arxiv.org/abs/1905.08337v1
PDF https://arxiv.org/pdf/1905.08337v1.pdf
PWC https://paperswithcode.com/paper/ingesting-high-velocity-streaming-graphs-from
Repo
Framework
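
A toy sketch of the adaptive-buffering idea described above: a buffer whose flush size grows when the arrival rate is high and CPU headroom exists, and shrinks under CPU pressure. The thresholds, the use of psutil for CPU load, and the flush callback are illustrative assumptions, not the authors' system, which also considers data content and applies ingestion-time graph compression.

```python
# Toy adaptive buffer for high-velocity ingestion (thresholds, psutil usage and
# the flush callback are illustrative; graph compression is not shown).
import time
import psutil

class AdaptiveBuffer:
    def __init__(self, flush_fn, min_batch=100, max_batch=10_000):
        self.flush_fn = flush_fn
        self.batch_size = min_batch
        self.min_batch, self.max_batch = min_batch, max_batch
        self.buffer, self.last_flush = [], time.time()

    def add(self, edge):
        self.buffer.append(edge)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        now = time.time()
        rate = len(self.buffer) / max(now - self.last_flush, 1e-6)
        cpu = psutil.cpu_percent(interval=None)
        # Grow batches when data is fast and CPU is free; shrink under load.
        if cpu < 70 and rate > 1000:
            self.batch_size = min(self.batch_size * 2, self.max_batch)
        elif cpu > 90:
            self.batch_size = max(self.batch_size // 2, self.min_batch)
        self.flush_fn(self.buffer)
        self.buffer, self.last_flush = [], now

buf = AdaptiveBuffer(flush_fn=lambda edges: None)   # replace with a DB writer
for i in range(5000):
    buf.add(("user_a", "mentions", f"user_{i}"))
buf.flush()
```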

Domain-Independent Cost-Optimal Planning in ASP

Title Domain-Independent Cost-Optimal Planning in ASP
Authors David Spies, Jia-Huai You, Ryan Hayward
Abstract We investigate the problem of cost-optimal planning in ASP. Current ASP planners can be trivially extended to a cost-optimal one by adding weak constraints, but only for a given makespan (number of steps). It is desirable to have a planner that guarantees global optimality. In this paper, we present two approaches to addressing this problem. First, we show how to engineer a cost-optimal planner composed of two ASP programs running in parallel. Using lessons learned from this, we then develop an entirely new approach to cost-optimal planning, stepless planning, which is completely free of makespan. Experiments to compare the two approaches with the only known cost-optimal planner in SAT reveal good potentials for stepless planning in ASP. The paper is under consideration for acceptance in TPLP.
Tasks
Published 2019-07-31
URL https://arxiv.org/abs/1908.00112v1
PDF https://arxiv.org/pdf/1908.00112v1.pdf
PWC https://paperswithcode.com/paper/domain-independent-cost-optimal-planning-in
Repo
Framework
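
To make the starting point of the abstract concrete, below is a toy fixed-makespan encoding with a weak constraint for plan cost, solved through clingo's Python API (pip install clingo). The domain and encoding are illustrative only; the paper's contributions, the parallel two-program construction and stepless planning, are not shown.

```python
# A toy fixed-makespan, cost-optimal planning encoding solved via the clingo
# Python API. This illustrates only the "weak constraints for a given
# makespan" baseline mentioned in the abstract.
import clingo

PROGRAM = """
#const horizon = 3.
time(1..horizon).
edge(a,b). edge(b,c). edge(a,c).
cost(a,b,1). cost(b,c,1). cost(a,c,5).

at(a,0).
{ move(X,Y,T) : edge(X,Y) } 1 :- time(T).        % at most one move per step
:- move(X,Y,T), not at(X,T-1).                   % a move must start where we are
at(Y,T) :- move(X,Y,T).
at(X,T) :- at(X,T-1), time(T), not moved(T).     % frame axiom
moved(T) :- move(_,_,T).
:- not at(c,horizon).                            % goal

:~ move(X,Y,T), cost(X,Y,C). [C@1, X, Y, T]      % minimise total move cost
#show move/3.
"""

ctl = clingo.Control(["--opt-mode=opt"])
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print(m.symbols(shown=True), "cost:", m.cost))
```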

Personalizing Smartwatch Based Activity Recognition Using Transfer Learning

Title Personalizing Smartwatch Based Activity Recognition Using Transfer Learning
Authors Karanpreet Singh, Rajen Bhatt
Abstract Smartwatches are increasingly being used to recognize human daily life activities. These devices may employ different kinds of machine learning (ML) solutions. One such ML model is the Gradient Boosting Machine (GBM), which has shown excellent performance in the literature. The GBM can be trained on an available data set before it is deployed on any device. However, this data set may not represent every kind of human behavior in real life. For example, an ML model to detect the running activity of elderly and young persons may give different results because of differences in their activity patterns. This may result in a decrease in the accuracy of activity recognition. Therefore, a transfer learning based method is proposed in which user-specific performance can be improved significantly by performing on-device calibration of the GBM, tuning only its parameters without retraining its estimators. Results show that this method can significantly improve the user-based accuracy of activity recognition.
Tasks Activity Recognition, Calibration, Transfer Learning
Published 2019-09-03
URL https://arxiv.org/abs/1909.01202v1
PDF https://arxiv.org/pdf/1909.01202v1.pdf
PWC https://paperswithcode.com/paper/personalizing-smartwatch-based-activity
Repo
Framework
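
Below is one hedged way to personalize a pre-trained GBM without retraining its estimators: keep the trees fixed and fit only a lightweight calibration layer (Platt-style logistic regression on the GBM's decision scores) on a small amount of user-specific data. This stands in for the paper's parameter-tuning scheme, which is not reproduced here; the data is synthetic.

```python
# Sketch (not the paper's exact method): personalize a pre-trained GBM by
# keeping its trees fixed and fitting only a small calibration layer on a
# user's own labelled samples (Platt-style scaling of the decision scores).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# "Global" training data (e.g. many users' sensor features + activity labels).
X_global = rng.normal(size=(2000, 20))
y_global = (X_global[:, 0] + 0.5 * X_global[:, 1] > 0).astype(int)
gbm = GradientBoostingClassifier().fit(X_global, y_global)

# Small user-specific calibration set with a shifted activity pattern.
X_user = rng.normal(loc=0.3, size=(100, 20))
y_user = (X_user[:, 0] + 0.8 * X_user[:, 1] > 0.2).astype(int)

# On-device step: only this logistic regression is fitted; the GBM is untouched.
calib = LogisticRegression().fit(gbm.decision_function(X_user).reshape(-1, 1), y_user)

X_test = rng.normal(loc=0.3, size=(200, 20))
scores = gbm.decision_function(X_test).reshape(-1, 1)
personalised_pred = calib.predict(scores)
print(personalised_pred[:10])
```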

Model-Based Detector for SSDs in the Presence of Inter-cell Interference

Title Model-Based Detector for SSDs in the Presence of Inter-cell Interference
Authors Hachem Yassine, Mihai-Alin Badiu, Justin Coon
Abstract In this paper, we consider the problem of reducing the bit error rate of flash-based solid state drives (SSDs) when cells are subject to inter-cell interference (ICI). By observing that the outputs of adjacent victim cells can be correlated due to common aggressors, we propose a novel channel model to accurately represent the true flash channel. This model, equivalent to a finite-state Markov channel model, allows the use of the sum-product algorithm to calculate more accurate posterior distributions of individual cell inputs given the joint outputs of victim cells. These posteriors can be easily mapped to the log-likelihood ratios that are passed as inputs to the soft LDPC decoder. When the output is available with high precision, our simulations showed that a significant reduction in the bit-error rate can be obtained, reaching a 99.99% reduction compared to current methods when the diagonal coupling is very strong. In the realistic case of low-precision output, our scheme provides less impressive improvements due to information loss in the process of quantization. To improve the performance of the new detector in the quantized case, we propose a new iterative scheme that alternates multiple times between the detector and the decoder. Our simulations showed that the iterative scheme can significantly improve the bit error rate even in the quantized case.
Tasks Quantization
Published 2019-01-31
URL http://arxiv.org/abs/1902.01212v2
PDF http://arxiv.org/pdf/1902.01212v2.pdf
PWC https://paperswithcode.com/paper/model-based-detector-for-ssds-in-the-presence
Repo
Framework
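
The toy sketch below illustrates the core inference step in miniature: two victim cells share a common aggressor, so reading both outputs jointly sharpens the posterior (and hence the LLR) of each input. Exact enumeration over a tiny joint state space stands in for the sum-product algorithm on the paper's finite-state Markov channel model; the Gaussian read model, coupling strength and noise level are illustrative assumptions.

```python
# Toy illustration (assumptions throughout): two victim cells share a common
# aggressor, which correlates their outputs. Brute-force enumeration of the
# joint (x1, x2, aggressor) posterior stands in for the sum-product algorithm.
import itertools
import numpy as np

COUPLING, SIGMA = 0.6, 0.4        # illustrative ICI strength and read noise

def likelihood(y, x, a):
    """Gaussian read model: y = x + COUPLING * a + noise."""
    mu = x + COUPLING * a
    return np.exp(-0.5 * ((y - mu) / SIGMA) ** 2)

def posterior_llr(y1, y2):
    """Log-likelihood ratio for input x1 given both victims' outputs."""
    p = {0: 0.0, 1: 0.0}
    for x1, x2, a in itertools.product([0, 1], repeat=3):
        # uniform priors on inputs and aggressor state; the shared 'a' is what
        # couples the two victims' readings
        p[x1] += likelihood(y1, x1, a) * likelihood(y2, x2, a)
    return np.log(p[0] / p[1])

# The resulting LLRs would be passed to the soft LDPC decoder, as described above.
print(posterior_llr(y1=0.9, y2=1.1))
print(posterior_llr(y1=0.9, y2=0.1))
```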