January 31, 2020

3350 words 16 mins read

Paper Group ANR 158


A novel method for extracting interpretable knowledge from a spiking neural classifier with time-varying synaptic weights

Title A novel method for extracting interpretable knowledge from a spiking neural classifier with time-varying synaptic weights
Authors Abeegithan Jeyasothy, Suresh Sundaram, Savitha Ramasamy, Narasimhan Sundararajan
Abstract This paper presents a novel method for information interpretability in an MC-SEFRON classifier. To develop a method to extract knowledge stored in a trained classifier, first, the binary-class SEFRON classifier developed earlier is extended to handle multi-class problems. MC-SEFRON uses the population encoding scheme to encode the real-valued input data into spike patterns. MC-SEFRON is trained using the same supervised learning rule used in SEFRON. After training, the proposed method extracts the knowledge for a given class stored in the classifier by mapping the weighted postsynaptic potential in the time domain to the feature domain as Feature Strength Functions (FSFs). A set of FSFs corresponding to each output class represents the extracted knowledge from the classifier. This knowledge encoding method is derived to maintain consistency between the classification in the time domain and the feature domain. The correctness of the FSFs is quantitatively measured by using the FSFs directly for classification tasks. For a given input, each FSF is sampled at the input value to obtain the corresponding feature strength value (FSV). Then the aggregated FSVs obtained for each class are used to determine the output class labels during classification. FSVs are also used to interpret the predictions during the classification task. Using ten UCI datasets and the MNIST dataset, the knowledge extraction method, interpretation and the reliability of the FSFs are demonstrated. Based on the studies, it can be seen that on average, the difference in the classification accuracies using the FSFs directly and those obtained by MC-SEFRON is only around 0.9% and 0.1% for the UCI datasets and the MNIST dataset, respectively. This clearly shows that the knowledge represented by the FSFs has acceptable reliability and the interpretability of classification using the classifier’s knowledge has been justified.
Tasks
Published 2019-02-28
URL http://arxiv.org/abs/1904.11367v1
PDF http://arxiv.org/pdf/1904.11367v1.pdf
PWC https://paperswithcode.com/paper/190411367
Repo
Framework
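The classification-by-FSF procedure in the abstract (sample each class's Feature Strength Functions at the input's feature values, aggregate the resulting FSVs, then take the class with the largest aggregate) can be sketched as follows. The FSFs below are toy stand-ins, not functions extracted from a trained MC-SEFRON:

```python
import numpy as np

def classify_with_fsf(x, fsf):
    """Classify input x using per-class Feature Strength Functions.

    fsf[c][d] is a callable mapping the d-th feature value to a
    feature strength value (FSV); the summed FSVs per class decide
    the label, mirroring the paper's time-to-feature-domain mapping.
    The FSFs here are toy stand-ins, not trained MC-SEFRON outputs.
    """
    scores = []
    for class_fsfs in fsf:
        # Sample each FSF at the corresponding feature value and sum.
        scores.append(sum(f(xi) for f, xi in zip(class_fsfs, x)))
    return int(np.argmax(scores)), scores

# Toy FSFs for a 2-feature, 2-class problem: class 0 prefers small
# feature values, class 1 prefers large ones.
fsf = [
    [lambda v: 1.0 - v, lambda v: 1.0 - v],  # class 0
    [lambda v: v,       lambda v: v],        # class 1
]
label, scores = classify_with_fsf([0.2, 0.1], fsf)
```

Because each FSF is a plain function of one feature, the same sampled FSVs that produce the label also explain it: the per-feature contributions in `scores` are directly inspectable.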

A Review of methods for Textureless Object Recognition

Title A Review of methods for Textureless Object Recognition
Authors Frincy Clement, Kirtan Shah, Dhara Pancholi
Abstract Textureless object recognition has become a significant task in Computer Vision with the advent of Robotics and its applications in the manufacturing sector. Achieving good performance has been very challenging because such objects lack discriminative features and have distinctive reflectance properties; hence, the approaches used for textured objects cannot be applied to textureless objects. A lot of work has been done in the last 20 years, especially in the recent 5 years after T-LESS and other textureless datasets were introduced. In our research, we plan to combine image processing techniques (for feature enhancement) with deep learning techniques (for object recognition). Here we present an overview of the various existing work in the field of textureless object recognition, which can be broadly classified into View-based, Feature-based and Shape-based approaches. We have also added a review of a few of the research papers submitted at the International Conference on Smart Multimedia, 2018. Index terms: Computer Vision, Textureless object detection, Textureless object recognition, Feature-based, Edge detection, Deep Learning
Tasks Edge Detection, Object Detection, Object Recognition
Published 2019-10-31
URL https://arxiv.org/abs/1910.14255v1
PDF https://arxiv.org/pdf/1910.14255v1.pdf
PWC https://paperswithcode.com/paper/a-review-of-methods-for-textureless-object
Repo
Framework

CDTB: A Color and Depth Visual Object Tracking Dataset and Benchmark

Title CDTB: A Color and Depth Visual Object Tracking Dataset and Benchmark
Authors Alan Lukežič, Ugur Kart, Jani Käpylä, Ahmed Durmush, Joni-Kristian Kämäräinen, Jiří Matas, Matej Kristan
Abstract A long-term visual object tracking performance evaluation methodology and a benchmark are proposed. Performance measures are designed by following a long-term tracking definition to maximize the analysis probing strength. The new measures outperform existing ones in interpretation potential and in better distinguishing between different tracking behaviors. We show that these measures generalize the short-term performance measures, thus linking the two tracking problems. Furthermore, the new measures are highly robust to temporal annotation sparsity and allow annotation of sequences hundreds of times longer than in the current datasets without increasing manual annotation labor. A new challenging dataset of carefully selected sequences with many target disappearances is proposed. A new tracking taxonomy is proposed to position trackers on the short-term/long-term spectrum. The benchmark contains an extensive evaluation of the largest number of long-term trackers and a comparison to state-of-the-art short-term trackers. We analyze the influence of tracking architecture implementations on long-term performance and explore various re-detection strategies as well as the influence of visual model update strategies on long-term tracking drift. The methodology is integrated into the VOT toolkit to automate experimental analysis and benchmarking and to facilitate future development of long-term trackers.
Tasks Object Tracking, Visual Object Tracking
Published 2019-07-01
URL https://arxiv.org/abs/1907.00618v1
PDF https://arxiv.org/pdf/1907.00618v1.pdf
PWC https://paperswithcode.com/paper/cdtb-a-color-and-depth-visual-object-tracking
Repo
Framework

Boosting Item-based Collaborative Filtering via Nearly Uncoupled Random Walks

Title Boosting Item-based Collaborative Filtering via Nearly Uncoupled Random Walks
Authors Athanasios N. Nikolakopoulos, George Karypis
Abstract Item-based models are among the most popular collaborative filtering approaches for building recommender systems. Random walks can provide a powerful tool for harvesting the rich network of interactions captured within these models. They can exploit indirect relations between the items, mitigate the effects of sparsity, ensure wider itemspace coverage, as well as increase the diversity of recommendation lists. Their potential, however, can be hindered by the tendency of the walks to rapidly concentrate towards the central nodes of the graph, thereby significantly restricting the range of K-step distributions that can be exploited for personalized recommendations. In this work we introduce RecWalk, a novel random walk-based method that leverages the spectral properties of nearly uncoupled Markov chains to provably lift this limitation and prolong the influence of users’ past preferences on the successive steps of the walk, allowing the walker to explore the underlying network more fruitfully. A comprehensive set of experiments on real-world datasets verifies the theoretically predicted properties of the proposed approach and indicates that they are directly linked to significant improvements in top-n recommendation accuracy. The experiments also highlight RecWalk’s potential in providing a framework for boosting the performance of item-based models. RecWalk achieves state-of-the-art top-n recommendation quality, outperforming several competing approaches, including recently proposed methods that rely on deep neural networks.
Tasks Recommendation Systems
Published 2019-09-09
URL https://arxiv.org/abs/1909.03579v1
PDF https://arxiv.org/pdf/1909.03579v1.pdf
PWC https://paperswithcode.com/paper/boosting-item-based-collaborative-filtering
Repo
Framework
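A rough numerical sketch of the idea: a random walk on the user-item bipartite graph whose item-side step blends a jump back through the interaction graph with a row-stochastic item model, loosely mimicking how RecWalk prolongs the influence of the user's history. The matrix construction, the names `U2I`/`I2U`, and the value of `alpha` are illustrative assumptions, not the paper's exact nearly uncoupled chain:

```python
import numpy as np

def recwalk_like_scores(R, M, alpha=0.1, K=5, user=0):
    """K-step random-walk item scores for one user (hedged sketch).

    R: (users x items) binary interactions; M: row-stochastic
    item-item model. At an item node the walker either jumps back
    through the interaction graph (prob alpha) or follows the item
    model M, so the user's history keeps influencing later steps.
    """
    n_u, n_i = R.shape
    # Row-normalize the user->item and item->user steps of the walk.
    U2I = R / R.sum(axis=1, keepdims=True)
    I2U = R.T / R.T.sum(axis=1, keepdims=True)
    # Full transition matrix over the state space [users | items].
    P = np.zeros((n_u + n_i, n_u + n_i))
    P[:n_u, n_u:] = U2I
    P[n_u:, :n_u] = alpha * I2U
    P[n_u:, n_u:] = (1 - alpha) * M
    dist = np.zeros(n_u + n_i)
    dist[user] = 1.0
    for _ in range(K):            # K-step distribution e_u^T P^K
        dist = dist @ P
    scores = dist[n_u:].copy()
    scores[R[user] > 0] = -np.inf  # do not re-recommend seen items
    return scores

R = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]], dtype=float)
M = np.full((4, 4), 0.25)  # uninformative item model for the demo
scores = recwalk_like_scores(R, M, user=0)
```

Even with an uninformative item model, the walk ranks item 2 (reachable through a shared user) above item 3, which is the kind of indirect relation the abstract describes.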

Comparing Samples from the $\mathcal{G}^0$ Distribution using a Geodesic Distance

Title Comparing Samples from the $\mathcal{G}^0$ Distribution using a Geodesic Distance
Authors Alejandro C. Frery, Juliana Gambini
Abstract The $\mathcal{G}^0$ distribution is widely used for monopolarized SAR image modeling because it can accurately characterize regions with different degrees of texture. It is indexed by three parameters: the number of looks (which can be estimated for the whole image), a scale parameter and a texture parameter. This paper presents a new proposal for comparing samples from the $\mathcal{G}^0$ distribution using a Geodesic Distance (GD) as a measure of dissimilarity between models. The objective is to quantify the difference between pairs of samples from SAR data using both local parameters (scale and texture) of the $\mathcal{G}^0$ distribution. We propose three tests based on the GD which combine the tests presented in~\cite{GeodesicDistanceGI0JSTARS}, and we estimate their probability distributions using permutation methods.
Tasks
Published 2019-04-23
URL http://arxiv.org/abs/1904.10499v1
PDF http://arxiv.org/pdf/1904.10499v1.pdf
PWC https://paperswithcode.com/paper/comparing-samples-from-the-mathcalg0
Repo
Framework
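The permutation calibration mentioned at the end of the abstract can be sketched generically: pool the two samples, reshuffle, and recompute the statistic to build a null distribution. The statistic below (a gap between sample means) is a placeholder; the paper's tests would instead fit the $\mathcal{G}^0$ scale and texture parameters and evaluate the geodesic distance between the fitted models:

```python
import numpy as np

def permutation_pvalue(x, y, stat, n_perm=999, seed=0):
    """Permutation p-value for a two-sample dissimilarity statistic.

    `stat` stands in for the geodesic distance between fitted G0
    models; any two-sample dissimilarity works for the sketch. Labels
    are reshuffled under the null that both samples share one model.
    """
    rng = np.random.default_rng(seed)
    observed = stat(x, y)
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling of the pooled data
        if stat(pooled[:len(x)], pooled[len(x):]) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)  # add-one to avoid p = 0

rng = np.random.default_rng(1)
a = rng.gamma(shape=2.0, scale=1.0, size=200)  # sample A
b = rng.gamma(shape=2.0, scale=3.0, size=200)  # clearly different scale
mean_gap = lambda x, y: abs(x.mean() - y.mean())
p = permutation_pvalue(a, b, mean_gap)
```

For the two clearly different samples above the permutation p-value is small, which is the behavior the paper exploits to detect heterogeneous SAR regions.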

Generative Adversarial Nets for Robust Scatter Estimation: A Proper Scoring Rule Perspective

Title Generative Adversarial Nets for Robust Scatter Estimation: A Proper Scoring Rule Perspective
Authors Chao Gao, Yuan Yao, Weizhi Zhu
Abstract Robust scatter estimation is a fundamental task in statistics. The recent discovery of the connection between robust estimation and generative adversarial nets (GANs) by Gao et al. (2018) suggests that it is possible to compute depth-like robust estimators using similar techniques that optimize GANs. In this paper, we introduce a general learning-via-classification framework based on the notion of proper scoring rules. This framework allows us to understand both the matrix depth function and various GANs through the lens of variational approximations of $f$-divergences induced by proper scoring rules. We then propose a new class of robust scatter estimators in this framework by carefully constructing discriminators with appropriate neural network structures. These estimators are proved to achieve the minimax rate of scatter estimation under Huber’s contamination model. Our numerical results demonstrate their good performance under various settings against competitors in the literature.
Tasks
Published 2019-03-05
URL http://arxiv.org/abs/1903.01944v1
PDF http://arxiv.org/pdf/1903.01944v1.pdf
PWC https://paperswithcode.com/paper/generative-adversarial-nets-for-robust
Repo
Framework

Learning Interpretable Models with Causal Guarantees

Title Learning Interpretable Models with Causal Guarantees
Authors Carolyn Kim, Osbert Bastani
Abstract Machine learning has shown much promise in helping improve the quality of medical, legal, and economic decision-making. In these applications, machine learning models must satisfy two important criteria: (i) they must be causal, since the goal is typically to predict individual treatment effects, and (ii) they must be interpretable, so that human decision makers can validate and trust the model predictions. There has recently been much progress along each direction independently, yet the state-of-the-art approaches are fundamentally incompatible. We propose a framework for learning causal interpretable models—from observational data—that can be used to predict individual treatment effects. Our framework can be used with any algorithm for learning interpretable models. Furthermore, we prove an error bound on the treatment effects predicted by our model. Finally, in an experiment on real-world data, we show that the models trained using our framework significantly outperform a number of baselines.
Tasks Decision Making
Published 2019-01-24
URL http://arxiv.org/abs/1901.08576v1
PDF http://arxiv.org/pdf/1901.08576v1.pdf
PWC https://paperswithcode.com/paper/learning-interpretable-models-with-causal
Repo
Framework

Deep neural networks for automated classification of colorectal polyps on histopathology slides: A multi-institutional evaluation

Title Deep neural networks for automated classification of colorectal polyps on histopathology slides: A multi-institutional evaluation
Authors Jason W. Wei, Arief A. Suriawinata, Louis J. Vaickus, Bing Ren, Xiaoying Liu, Mikhail Lisovsky, Naofumi Tomita, Behnaz Abdollahi, Adam S. Kim, Dale C. Snover, John A. Baron, Elizabeth L. Barry, Saeed Hassanpour
Abstract Histological classification of colorectal polyps plays a critical role in both screening for colorectal cancer and care of affected patients. An accurate and automated algorithm for the classification of colorectal polyps on digitized histopathology slides could benefit clinicians and patients. Our goal was to evaluate the performance and assess the generalizability of a deep neural network for colorectal polyp classification on histopathology slide images using a multi-institutional dataset. In this study, we developed a deep neural network for classification of the four major colorectal polyp types (tubular adenoma, tubulovillous/villous adenoma, hyperplastic polyp, and sessile serrated adenoma) based on digitized histopathology slides from our institution, Dartmouth-Hitchcock Medical Center (DHMC), in New Hampshire. We evaluated the deep neural network on an internal dataset of 157 histopathology slide images from DHMC, as well as on an external dataset of 238 histopathology slide images from 24 different institutions spanning 13 states in the United States. We measured the accuracy, sensitivity, and specificity of our model in this evaluation and compared its performance to local pathologists’ point-of-care diagnoses retrieved from the corresponding pathology laboratories. For the internal evaluation, the deep neural network had a mean accuracy of 93.5% (95% CI 89.6%-97.4%), compared with local pathologists’ accuracy of 91.4% (95% CI 87.0%-95.8%). On the external test set, the deep neural network achieved an accuracy of 87.0% (95% CI 82.7%-91.3%), comparable with local pathologists’ accuracy of 86.6% (95% CI 82.3%-90.9%). If confirmed in clinical settings, our model could assist pathologists by improving the diagnostic efficiency, reproducibility, and accuracy of colorectal cancer screenings.
Tasks
Published 2019-09-27
URL https://arxiv.org/abs/1909.12959v2
PDF https://arxiv.org/pdf/1909.12959v2.pdf
PWC https://paperswithcode.com/paper/deep-neural-networks-for-automated
Repo
Framework

Multi Modal Semantic Segmentation using Synthetic Data

Title Multi Modal Semantic Segmentation using Synthetic Data
Authors Kartik Srivastava, Akash Kumar Singh, Guruprasad M. Hegde
Abstract Semantic understanding of scenes in three-dimensional space (3D) is a quintessential part of robotics-oriented applications such as autonomous driving, as it provides geometric cues such as size, orientation and true distance of separation to objects, which are crucial for taking mission-critical decisions. As a first step, in this work we investigate the possibility of semantically classifying different parts of a given scene in 3D by learning the underlying geometric context in addition to the texture cues, but in the absence of labelled real-world datasets. To this end we generate a large number of synthetic scenes, their pixel-wise labels and corresponding 3D representations using the CARLA software framework. We then build a deep neural network that learns the underlying category-specific 3D representation and texture cues from the color information of the rendered synthetic scenes. We further apply the learned model to different real-world datasets to evaluate its performance. Our preliminary results show that the neural network is able to learn the geometric context from synthetic scenes and effectively apply this knowledge to classify each point of a 3D scene representation in the real world.
Tasks Autonomous Driving, Semantic Segmentation
Published 2019-10-30
URL https://arxiv.org/abs/1910.13676v1
PDF https://arxiv.org/pdf/1910.13676v1.pdf
PWC https://paperswithcode.com/paper/multi-modal-semantic-segmentation-using
Repo
Framework

Post-synaptic potential regularization has potential

Title Post-synaptic potential regularization has potential
Authors Enzo Tartaglione, Daniele Perlo, Marco Grangetto
Abstract Improving generalization is one of the main challenges for training deep neural networks on classification tasks. In particular, a number of techniques have been proposed to boost performance on unseen data: from standard data augmentation techniques to $\ell_2$ regularization, dropout, batch normalization, entropy-driven SGD and many more. In this work we propose an elegant, simple and principled approach: post-synaptic potential regularization (PSP). We tested this regularization on a number of different state-of-the-art scenarios. Empirical results show that PSP achieves a classification error comparable to more sophisticated learning strategies in the MNIST scenario, while improving generalization compared to $\ell_2$ regularization in deep architectures trained on CIFAR-10.
Tasks Data Augmentation
Published 2019-07-19
URL https://arxiv.org/abs/1907.08544v1
PDF https://arxiv.org/pdf/1907.08544v1.pdf
PWC https://paperswithcode.com/paper/post-synaptic-potential-regularization-has
Repo
Framework
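A minimal sketch of the idea, assuming the "post-synaptic potential" is the pre-activation z = XW + b and the penalty is its mean squared magnitude (the paper's exact formulation may differ):

```python
import numpy as np

def loss_with_psp(W, b, X, y, lam=1e-3):
    """Softmax cross-entropy plus a post-synaptic potential penalty.

    The "potential" here is the pre-activation z = XW + b; the term
    lam * mean(z^2) discourages large neuron inputs, analogous to how
    l2 regularization discourages large weights (a sketch of the idea,
    not necessarily the paper's exact formulation).
    """
    z = X @ W + b                              # post-synaptic potentials
    zs = z - z.max(axis=1, keepdims=True)      # numerically stable softmax
    p = np.exp(zs) / np.exp(zs).sum(axis=1, keepdims=True)
    ce = -np.log(p[np.arange(len(y)), y] + 1e-12).mean()
    return ce + lam * np.mean(z ** 2)          # add the PSP penalty

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))
y = rng.integers(0, 3, size=8)
W, b = rng.normal(size=(4, 3)), np.zeros(3)
regularized = loss_with_psp(W, b, X, y, lam=1e-3)
plain = loss_with_psp(W, b, X, y, lam=0.0)
```

Unlike $\ell_2$ regularization, which acts on the weights alone, this penalty depends on the data through the potentials, which is the distinguishing feature the title alludes to.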

Evolutionary Neural AutoML for Deep Learning

Title Evolutionary Neural AutoML for Deep Learning
Authors Jason Liang, Elliot Meyerson, Babak Hodjat, Dan Fink, Karl Mutch, Risto Miikkulainen
Abstract Deep neural networks (DNNs) have produced state-of-the-art results in many benchmarks and problem domains. However, the success of DNNs depends on the proper configuration of their architectures and hyperparameters. Such configuration is difficult, and as a result, DNNs are often not used to their full potential. In addition, DNNs in commercial applications often need to satisfy real-world design constraints such as size or number of parameters. To make configuration easier, automatic machine learning (AutoML) systems for deep learning have been developed, focusing mostly on optimization of hyperparameters. This paper takes AutoML a step further. It introduces an evolutionary AutoML framework called LEAF that not only optimizes hyperparameters but also network architectures and the size of the network. LEAF makes use of both state-of-the-art evolutionary algorithms (EAs) and distributed computing frameworks. Experimental results on medical image classification and natural language analysis show that the framework can be used to achieve state-of-the-art performance. In particular, LEAF demonstrates that architecture optimization provides a significant boost over hyperparameter optimization, and that networks can be minimized at the same time with little drop in performance. LEAF therefore forms a foundation for democratizing and improving AI, as well as making AI practical in future applications.
Tasks AutoML, Hyperparameter Optimization, Image Classification, Neural Architecture Search
Published 2019-02-18
URL http://arxiv.org/abs/1902.06827v3
PDF http://arxiv.org/pdf/1902.06827v3.pdf
PWC https://paperswithcode.com/paper/evolutionary-neural-automl-for-deep-learning
Repo
Framework

Latent space conditioning for improved classification and anomaly detection

Title Latent space conditioning for improved classification and anomaly detection
Authors Erik Norlander, Alexandros Sopasakis
Abstract We propose a new type of variational autoencoder to perform improved pre-processing for clustering and anomaly detection on data with a given label; the anomalies themselves, however, are not known or labeled. We call our method a conditional latent space variational autoencoder, since it separates the latent space by conditioning on information within the data. The method fits one prior distribution to each class in the dataset, effectively expanding the prior distribution into a Gaussian mixture model. Our approach is compared against the capabilities of a typical variational autoencoder by measuring the V-score during cluster formation with respect to the k-means and EM algorithms. For anomaly detection, we use a new metric composed of the mass-volume and excess-mass curves, which can work in an unsupervised setting. We compare our results against established methods such as isolation forest, local outlier factor and one-class support vector machines.
Tasks Anomaly Detection
Published 2019-11-24
URL https://arxiv.org/abs/1911.10599v2
PDF https://arxiv.org/pdf/1911.10599v2.pdf
PWC https://paperswithcode.com/paper/latent-space-conditioning-for-improved
Repo
Framework
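The per-class prior described in the abstract changes only the KL term of the VAE objective: the posterior for a sample with label c is pulled toward N(m_c, I) instead of a single N(0, I), so the aggregate prior becomes a Gaussian mixture. A sketch with the closed-form diagonal-Gaussian KL and illustrative (not learned) prior means:

```python
import numpy as np

def class_conditional_kl(mu, logvar, label, prior_means):
    """KL(q(z|x) || p(z|label)) with one unit-variance Gaussian prior
    per class.

    mu, logvar: diagonal-Gaussian posterior parameters from the
    encoder; prior_means: one latent mean per class (placeholders
    here, not values learned by the paper's model).
    """
    m = prior_means[label]
    var = np.exp(logvar)
    # Closed form for KL(N(mu, var) || N(m, 1)), summed over dims.
    return 0.5 * np.sum(var + (mu - m) ** 2 - 1.0 - logvar)

prior_means = np.array([[-3.0, 0.0], [3.0, 0.0]])  # 2 classes, 2-d latent
mu, logvar = np.array([2.5, 0.1]), np.zeros(2)
kl_right = class_conditional_kl(mu, logvar, 1, prior_means)  # near its prior
kl_wrong = class_conditional_kl(mu, logvar, 0, prior_means)  # far away
```

A posterior near its class's prior mean pays a small KL cost, while the same posterior under another class's prior pays a large one; this is what separates the latent space by class.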

Reinforcement Learning Experience Reuse with Policy Residual Representation

Title Reinforcement Learning Experience Reuse with Policy Residual Representation
Authors Wen-Ji Zhou, Yang Yu, Yingfeng Chen, Kai Guan, Tangjie Lv, Changjie Fan, Zhi-Hua Zhou
Abstract Experience reuse is key to sample-efficient reinforcement learning. One of the critical issues is how the experience is represented and stored. Previously, experience could be stored in the form of features, individual models, or an average model, each lying at a different granularity. However, new tasks may require experience across multiple granularities. In this paper, we propose the policy residual representation (PRR) network, which can extract and store multiple levels of experience. The PRR network is trained on a set of tasks with a multi-level architecture, where a module in each level corresponds to a subset of the tasks. Therefore, the PRR network represents the experience in a spectrum-like way. When training on a new task, PRR can provide different levels of experience to accelerate learning. We experiment with the PRR network on a set of grid-world navigation tasks, locomotion tasks, and fighting tasks in a video game. The results show that the PRR network leads to better reuse of experience and thus outperforms some state-of-the-art approaches.
Tasks
Published 2019-05-31
URL https://arxiv.org/abs/1905.13719v1
PDF https://arxiv.org/pdf/1905.13719v1.pdf
PWC https://paperswithcode.com/paper/reinforcement-learning-experience-reuse-with
Repo
Framework

Offline handwritten mathematical symbol recognition utilising deep learning

Title Offline handwritten mathematical symbol recognition utilising deep learning
Authors Azadeh Nazemi, Niloofar Tavakolian, Donal Fitzpatrick, Chandrika Fernando, Ching Y. Suen
Abstract This paper describes an approach for offline recognition of handwritten mathematical symbols. The process of symbol recognition in this paper includes symbol segmentation and accurate classification for over 300 classes. Many multidimensional mathematical symbols need both horizontal and vertical projection to be segmented. However, some symbols, such as the root symbol, cannot be projected and thus halt segmentation. Besides, many mathematical symbols are structurally similar, especially in handwriting, such as 0 and null. Since there are more than 300 mathematical symbols, an accurate classifier for more than 300 classes is required. This paper initially addresses the segmentation issue using Simple Linear Iterative Clustering (SLIC). Experimental results indicate that, for 66 classes, the accuracy of the designed kNN classifier is 84% for salient features, 57% for Histogram of Oriented Gradients (HOG), 53% for Local Binary Patterns (LBP), and 43% for raw pixel intensities. For 87 classes, a modified LeNet achieves 90% accuracy. Finally, for 101 classes, SqueezeNet ac…
Tasks
Published 2019-10-16
URL https://arxiv.org/abs/1910.07395v1
PDF https://arxiv.org/pdf/1910.07395v1.pdf
PWC https://paperswithcode.com/paper/offline-handwritten-mathematical-symbol
Repo
Framework
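The kNN classifier whose accuracies are reported above can be sketched in a few lines; the synthetic feature vectors below stand in for the HOG/LBP/salient descriptors that the paper computes from segmented symbols:

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Majority-vote k-nearest-neighbour prediction.

    Stands in for the paper's kNN symbol classifier; in the paper the
    rows of train_X would be HOG/LBP/salient-feature descriptors of
    segmented symbols, whereas here they are synthetic placeholders.
    """
    dists = np.linalg.norm(train_X - query, axis=1)  # Euclidean distances
    nearest = np.argsort(dists)[:k]                  # indices of k closest
    return int(np.bincount(train_y[nearest]).argmax())

rng = np.random.default_rng(0)
# Two well-separated "symbol classes" in a 16-d feature space.
class0 = rng.normal(0.0, 0.1, size=(20, 16))
class1 = rng.normal(1.0, 0.1, size=(20, 16))
train_X = np.vstack([class0, class1])
train_y = np.array([0] * 20 + [1] * 20)
pred = knn_predict(train_X, train_y, np.full(16, 0.95))
```

The reported accuracy spread (84% down to 43% for 66 classes) comes entirely from swapping the feature extractor feeding this classifier, which is why feature choice dominates the paper's comparison.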

Fair Contextual Multi-Armed Bandits: Theory and Experiments

Title Fair Contextual Multi-Armed Bandits: Theory and Experiments
Authors Yifang Chen, Alex Cuellar, Haipeng Luo, Jignesh Modi, Heramb Nemlekar, Stefanos Nikolaidis
Abstract When an AI system interacts with multiple users, it frequently needs to make allocation decisions. For instance, a virtual agent decides whom to pay attention to in a group setting, or a factory robot selects a worker to deliver a part. Demonstrating fairness in decision making is essential for such systems to be broadly accepted. We introduce a Multi-Armed Bandit algorithm with fairness constraints, where fairness is defined as a minimum rate that a task or a resource is assigned to a user. The proposed algorithm uses contextual information about the users and the task and makes no assumptions on how the losses capturing the performance of different users are generated. We provide theoretical guarantees of performance and empirical results from simulation and an online user study. The results highlight the benefit of accounting for contexts in fair decision making, especially when users perform better at some contexts and worse at others.
Tasks Decision Making, Multi-Armed Bandits
Published 2019-12-13
URL https://arxiv.org/abs/1912.08055v1
PDF https://arxiv.org/pdf/1912.08055v1.pdf
PWC https://paperswithcode.com/paper/fair-contextual-multi-armed-bandits-theory
Repo
Framework
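A toy illustration of the fairness constraint with the contextual part dropped: each arm (user) must be selected at least a minimum fraction of rounds, and the learner exploits empirical means otherwise. The scheduling rule and parameters are assumptions for the sketch, not the paper's algorithm:

```python
import random

def fair_bandit(n_rounds, means, min_rate, seed=0):
    """Greedy bandit with a per-arm minimum selection rate.

    Fairness follows the paper's definition: each arm must be chosen
    at least a `min_rate` fraction of rounds. This sketch ignores
    contexts and uses empirical means for exploitation (a toy
    illustration, not the proposed algorithm).
    """
    rng = random.Random(seed)
    k = len(means)
    pulls = [0] * k
    rewards = [0.0] * k
    for t in range(1, n_rounds + 1):
        starved = [i for i in range(k) if pulls[i] < min_rate * t]
        if starved:                      # honor the fairness floor first
            arm = min(starved, key=lambda i: pulls[i])
        elif 0 in pulls:                 # try every arm once
            arm = pulls.index(0)
        else:                            # exploit the best empirical mean
            arm = max(range(k), key=lambda i: rewards[i] / pulls[i])
        pulls[arm] += 1
        rewards[arm] += means[arm] + rng.gauss(0, 0.1)  # noisy reward
    return pulls

pulls = fair_bandit(1000, means=[0.9, 0.5, 0.2], min_rate=0.1)
```

The best arm still receives most of the pulls, but the low-reward arms are each guaranteed roughly their 10% floor, which is the allocation behavior the fairness constraint encodes.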