May 6, 2019

2951 words 14 mins read

Paper Group ANR 184


Towards deep learning with spiking neurons in energy based models with contrastive Hebbian plasticity. Declarative Machine Learning - A Classification of Basic Properties and Types. Sparse Generalized Eigenvalue Problem: Optimal Statistical Rates via Truncated Rayleigh Flow. Revising Incompletely Specified Convex Probabilistic Belief Bases. Speech …

Towards deep learning with spiking neurons in energy based models with contrastive Hebbian plasticity

Title Towards deep learning with spiking neurons in energy based models with contrastive Hebbian plasticity
Authors Thomas Mesnard, Wulfram Gerstner, Johanni Brea
Abstract In machine learning, error back-propagation in multi-layer neural networks (deep learning) has been impressively successful in supervised and reinforcement learning tasks. As a model for learning in the brain, however, deep learning has long been regarded as implausible, since it relies in its basic form on a non-local plasticity rule. To overcome this problem, energy-based models with local contrastive Hebbian learning were proposed and tested on a classification task with networks of rate neurons. We extended this work by implementing and testing such a model with networks of leaky integrate-and-fire neurons. Preliminary results indicate that it is possible to learn a non-linear regression task with hidden layers, spiking neurons and a local synaptic plasticity rule.
Tasks
Published 2016-12-09
URL http://arxiv.org/abs/1612.03214v1
PDF http://arxiv.org/pdf/1612.03214v1.pdf
PWC https://paperswithcode.com/paper/towards-deep-learning-with-spiking-neurons-in
Repo
Framework
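The local plasticity rule the abstract refers to can be sketched in a few lines. The function below is an illustrative toy (the function name, learning rate, and scalar activities are our own), not the paper's spiking implementation: the weight change is the difference between pre/post activity correlations in a "clamped" phase (target provided) and a "free" phase (network relaxes on its own), so each synapse needs only locally available quantities.

```python
# Hedged sketch of a contrastive Hebbian weight update for one synapse,
# as used in energy-based models. All names and values are illustrative.

def contrastive_hebbian_update(w, pre_free, post_free, pre_clamped, post_clamped, lr=0.1):
    """Return the updated weight: strengthen if clamped-phase activity is
    more correlated than free-phase activity, weaken otherwise."""
    return w + lr * (pre_clamped * post_clamped - pre_free * post_free)

# Clamped-phase correlation (0.72) exceeds free-phase correlation (0.02),
# so the weight grows.
w = contrastive_hebbian_update(0.5, 0.2, 0.1, 0.8, 0.9, lr=0.1)
```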

Declarative Machine Learning - A Classification of Basic Properties and Types

Title Declarative Machine Learning - A Classification of Basic Properties and Types
Authors Matthias Boehm, Alexandre V. Evfimievski, Niketan Pansare, Berthold Reinwald
Abstract Declarative machine learning (ML) aims at the high-level specification of ML tasks or algorithms, and automatic generation of optimized execution plans from these specifications. The fundamental goal is to simplify the usage and/or development of ML algorithms, which is especially important in the context of large-scale computations. However, ML systems at different abstraction levels have emerged over time and accordingly there has been a controversy about the meaning of this general definition of declarative ML. Specification alternatives range from ML algorithms expressed in domain-specific languages (DSLs) with optimization for performance, to ML task (learning problem) specifications with optimization for performance and accuracy. We argue that these different types of declarative ML complement each other as they address different users (data scientists and end users). This paper makes an attempt to create a taxonomy for declarative ML, including a definition of essential basic properties and types of declarative ML. Along the way, we provide insights into implications of these properties. We also use this taxonomy to classify existing systems. Finally, we draw conclusions on defining appropriate benchmarks and specification languages for declarative ML.
Tasks
Published 2016-05-19
URL http://arxiv.org/abs/1605.05826v1
PDF http://arxiv.org/pdf/1605.05826v1.pdf
PWC https://paperswithcode.com/paper/declarative-machine-learning-a-classification
Repo
Framework

Sparse Generalized Eigenvalue Problem: Optimal Statistical Rates via Truncated Rayleigh Flow

Title Sparse Generalized Eigenvalue Problem: Optimal Statistical Rates via Truncated Rayleigh Flow
Authors Kean Ming Tan, Zhaoran Wang, Han Liu, Tong Zhang
Abstract The sparse generalized eigenvalue problem (GEP) plays a pivotal role in a large family of high-dimensional statistical models, including sparse Fisher’s discriminant analysis, canonical correlation analysis, and sufficient dimension reduction. Sparse GEP involves solving a non-convex optimization problem. Most existing methods and theory, developed in the context of specific statistical models that are special cases of the sparse GEP, require restrictive structural assumptions on the input matrices. In this paper, we propose a two-stage computational framework to solve the sparse GEP. In the first stage, we solve a convex relaxation of the sparse GEP. Taking the solution as an initial value, we then exploit a nonconvex optimization perspective and propose the truncated Rayleigh flow method (Rifle) to estimate the leading generalized eigenvector. We show that Rifle converges linearly to a solution with the optimal statistical rate of convergence for many statistical models. Theoretically, our method significantly improves upon the existing literature by eliminating structural assumptions on the input matrices for both stages. To achieve this, our analysis involves two key ingredients: (i) a new analysis of gradient-based methods on nonconvex objective functions, and (ii) a fine-grained characterization of the evolution of sparsity patterns along the solution path. Thorough numerical studies are provided to validate the theoretical results.
Tasks Dimensionality Reduction
Published 2016-04-29
URL http://arxiv.org/abs/1604.08697v3
PDF http://arxiv.org/pdf/1604.08697v3.pdf
PWC https://paperswithcode.com/paper/sparse-generalized-eigenvalue-problem-optimal
Repo
Framework
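The core of the second stage can be sketched as a gradient ascent on the Rayleigh quotient with a hard truncation after each step. This is a minimal reading of the method: the step size, iteration count, and exact truncation rule below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def rifle(A, B, v0, k, eta=0.05, iters=300):
    """Estimate the leading sparse generalized eigenvector of (A, B):
    ascend the generalized Rayleigh quotient, then keep only the k
    largest-magnitude entries after each step."""
    v = v0 / np.sqrt(v0 @ B @ v0)
    for _ in range(iters):
        rho = (v @ A @ v) / (v @ B @ v)        # current Rayleigh quotient
        v = v + eta * (A @ v - rho * (B @ v))  # gradient ascent step
        v[np.argsort(np.abs(v))[:-k]] = 0.0    # truncate to k nonzeros
        v = v / np.sqrt(v @ B @ v)             # renormalize in the B-norm
    return v

# Diagonal toy pair: the leading eigenvector is the first coordinate axis.
v = rifle(np.diag([3.0, 1.0, 0.5]), np.eye(3), np.ones(3), k=1)
```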

Revising Incompletely Specified Convex Probabilistic Belief Bases

Title Revising Incompletely Specified Convex Probabilistic Belief Bases
Authors Gavin Rens, Thomas Meyer, Giovanni Casini
Abstract We propose a method for an agent to revise its incomplete probabilistic beliefs when a new piece of propositional information is observed. In this work, an agent’s beliefs are represented by a set of probabilistic formulae: a belief base. The method involves determining a representative set of ‘boundary’ probability distributions consistent with the current belief base, revising each of these probability distributions and then translating the revised information into a new belief base. We use a version of Lewis Imaging as the revision operation. The correctness of the approach is proved. The expressivity of the belief bases under consideration is rather restricted, but they still have some applications. We also discuss methods of belief base revision employing the notion of optimum entropy, and point out some of the benefits and difficulties of those methods. Both the boundary distribution method and the optimum entropy method are reasonable, yet yield different results.
Tasks
Published 2016-04-07
URL http://arxiv.org/abs/1604.02133v1
PDF http://arxiv.org/pdf/1604.02133v1.pdf
PWC https://paperswithcode.com/paper/revising-incompletely-specified-convex
Repo
Framework

Speech vocoding for laboratory phonology

Title Speech vocoding for laboratory phonology
Authors Milos Cernak, Stefan Benus, Alexandros Lazaridis
Abstract Using phonological speech vocoding, we propose a platform for exploring relations between phonology and speech processing, and in broader terms, for exploring relations between the abstract and physical structures of a speech signal. Our goal is to make a step towards bridging phonology and speech processing and to contribute to the program of Laboratory Phonology. We show three application examples for laboratory phonology: compositional phonological speech modelling, a comparison of phonological systems and an experimental phonological parametric text-to-speech (TTS) system. The featural representations of the following three phonological systems are considered in this work: (i) Government Phonology (GP), (ii) the Sound Pattern of English (SPE), and (iii) the extended SPE (eSPE). Comparing GP- and eSPE-based vocoded speech, we conclude that the latter achieves slightly better results than the former. However, GP - the most compact phonological speech representation - performs comparably to the systems with a higher number of phonological features. The parametric TTS based on phonological speech representation, and trained from an unlabelled audiobook in an unsupervised manner, achieves intelligibility of 85% of the state-of-the-art parametric speech synthesis. We envision that the presented approach paves the way for researchers in both fields to form meaningful hypotheses that are explicitly testable using the concepts developed and exemplified in this paper. On the one hand, laboratory phonologists might test the applied concepts of their theoretical models, and on the other hand, the speech processing community may utilize the concepts developed for the theoretical phonological models for improvements of the current state-of-the-art applications.
Tasks Speech Synthesis
Published 2016-01-22
URL http://arxiv.org/abs/1601.05991v3
PDF http://arxiv.org/pdf/1601.05991v3.pdf
PWC https://paperswithcode.com/paper/speech-vocoding-for-laboratory-phonology
Repo
Framework

Top-N Recommender System via Matrix Completion

Title Top-N Recommender System via Matrix Completion
Authors Zhao Kang, Chong Peng, Qiang Cheng
Abstract Top-N recommender systems have been investigated widely both in industry and academia. However, the recommendation quality is far from satisfactory. In this paper, we propose a simple yet promising algorithm. We fill the user-item matrix based on a low-rank assumption and simultaneously keep the original information. To do that, a nonconvex rank relaxation rather than the nuclear norm is adopted to provide a better rank approximation and an efficient optimization strategy is designed. A comprehensive set of experiments on real datasets demonstrates that our method pushes the accuracy of Top-N recommendation to a new level.
Tasks Matrix Completion, Recommendation Systems
Published 2016-01-19
URL http://arxiv.org/abs/1601.04800v1
PDF http://arxiv.org/pdf/1601.04800v1.pdf
PWC https://paperswithcode.com/paper/top-n-recommender-system-via-matrix
Repo
Framework
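The fill-in idea can be illustrated with a standard hard-impute loop. Note that this sketch uses a plain truncated SVD as a stand-in for the paper's contribution, which replaces the nuclear norm with a nonconvex rank relaxation; only the overall fill-then-project structure is shared.

```python
import numpy as np

def hard_impute(R, mask, rank=1, iters=200):
    """Fill missing entries of a ratings matrix under a low-rank assumption:
    alternate a rank-r SVD approximation with restoring the observed entries,
    so the original information is kept while missing cells are completed."""
    X = np.where(mask, R, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # best rank-r approximation
        X[mask] = R[mask]                         # keep observed ratings intact
    return X

# Rank-1 ground truth with one hidden rating at position (2, 1).
R = np.outer([1.0, 2.0, 3.0], [1.0, 2.0])
mask = np.ones_like(R, dtype=bool)
mask[2, 1] = False
X = hard_impute(R, mask)
```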

Joint Network based Attention for Action Recognition

Title Joint Network based Attention for Action Recognition
Authors Yemin Shi, Yonghong Tian, Yaowei Wang, Tiejun Huang
Abstract By extracting spatial and temporal characteristics in one network, two-stream ConvNets can achieve state-of-the-art performance in action recognition. However, such a framework typically suffers from the separate processing of spatial and temporal information between the two standalone streams, which makes it hard to capture the long-term temporal dependence of an action. More importantly, it is incapable of finding the salient portions of an action, say, the frames that are the most discriminative for identifying the action. To address these problems, a joint network based attention (JNA) model is proposed in this study. We find that fully-connected fusion, branch selection and spatial attention mechanisms are infeasible for action recognition. Thus, in our joint network, the spatial and temporal branches share some information during the training stage. We also introduce an attention mechanism on the temporal domain to capture the long-term dependence while finding the salient portions. Extensive experiments are conducted on two benchmark datasets, UCF101 and HMDB51. Experimental results show that our method improves action recognition performance significantly and achieves state-of-the-art results on both datasets.
Tasks Temporal Action Localization
Published 2016-11-16
URL http://arxiv.org/abs/1611.05215v1
PDF http://arxiv.org/pdf/1611.05215v1.pdf
PWC https://paperswithcode.com/paper/joint-network-based-attention-for-action
Repo
Framework
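The temporal-attention idea, scoring frames and pooling the salient ones, can be sketched as a softmax over per-frame scores. The linear scoring function and all names here are toy assumptions, not the paper's network.

```python
import numpy as np

def temporal_attention(frame_feats, w):
    """Pool per-frame features into one clip feature, weighting frames by a
    softmax over scalar scores so the most salient frames dominate."""
    scores = frame_feats @ w               # one scalar score per frame
    alpha = np.exp(scores - scores.max())  # numerically stable softmax
    alpha /= alpha.sum()
    return alpha, alpha @ frame_feats      # weights and pooled clip feature

# Three frames with 2-dim features; the scorer favors the second dimension,
# so the third frame receives the largest attention weight.
feats = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 3.0]])
alpha, clip = temporal_attention(feats, np.array([0.0, 1.0]))
```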

Deep Neural Ensemble for Retinal Vessel Segmentation in Fundus Images towards Achieving Label-free Angiography

Title Deep Neural Ensemble for Retinal Vessel Segmentation in Fundus Images towards Achieving Label-free Angiography
Authors Avisek Lahiri, Abhijit Guha Roy, Debdoot Sheet, Prabir Kumar Biswas
Abstract Automated segmentation of retinal blood vessels in label-free fundus images plays a pivotal role in computer-aided diagnosis of ophthalmic pathologies, viz., diabetic retinopathy, hypertensive disorders and cardiovascular diseases. The challenge remains active in medical image analysis research due to the varied distribution of blood vessels, which manifest variations in their physical appearance against a noisy background. In this paper we formulate the segmentation challenge as a classification task. Specifically, we employ unsupervised hierarchical feature learning using an ensemble of two levels of sparsely trained denoising stacked autoencoders. First-level training with bootstrap samples ensures decoupling, and the second-level ensemble formed by different network architectures ensures architectural revision. We show that ensemble training of autoencoders fosters diversity in learning a dictionary of visual kernels for vessel segmentation. A SoftMax classifier is used for fine-tuning each member autoencoder, and multiple strategies are explored for two-level fusion of ensemble members. On the DRIVE dataset, we achieve a maximum average accuracy of 95.33% with an impressively low standard deviation of 0.003 and a Kappa agreement coefficient of 0.708. Comparison with other major algorithms substantiates the high efficacy of our model.
Tasks Retinal Vessel Segmentation
Published 2016-09-19
URL http://arxiv.org/abs/1609.05871v1
PDF http://arxiv.org/pdf/1609.05871v1.pdf
PWC https://paperswithcode.com/paper/deep-neural-ensemble-for-retinal-vessel
Repo
Framework
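As a tiny illustration of the fusion stage, averaging the members' per-pixel vessel probabilities is one of the simplest strategies one could try. The paper explores multiple fusion strategies; this particular rule and the function name are our assumptions.

```python
def fuse_by_averaging(member_probs):
    """Fuse per-pixel vessel probabilities from ensemble members by
    averaging them position-wise."""
    n = len(member_probs)
    return [sum(p) / n for p in zip(*member_probs)]

# Two members' probabilities for three pixels.
fused = fuse_by_averaging([[0.9, 0.2, 0.6], [0.7, 0.4, 0.6]])
```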

3D Ultrasound image segmentation: A Survey

Title 3D Ultrasound image segmentation: A Survey
Authors Mohammad Hamed Mozaffari, WonSook Lee
Abstract Three-dimensional ultrasound image segmentation methods are surveyed in this paper. The focus of this report is to investigate applications of these techniques and to review the original ideas and concepts. Although many two-dimensional segmentation methods in the literature have mistakenly been presented as three-dimensional approaches, we review them here as three-dimensional techniques. We select studies that have addressed the problem of medical three-dimensional ultrasound image segmentation using their proposed techniques. The evaluation methods and comparisons between them are presented and tabulated in terms of evaluation techniques, interactivity, and robustness.
Tasks Semantic Segmentation
Published 2016-11-29
URL http://arxiv.org/abs/1611.09811v1
PDF http://arxiv.org/pdf/1611.09811v1.pdf
PWC https://paperswithcode.com/paper/3d-ultrasound-image-segmentation-a-survey
Repo
Framework

DeepChrome: Deep-learning for predicting gene expression from histone modifications

Title DeepChrome: Deep-learning for predicting gene expression from histone modifications
Authors Ritambhara Singh, Jack Lanchantin, Gabriel Robins, Yanjun Qi
Abstract Motivation: Histone modifications are among the most important factors that control gene regulation. Computational methods that predict gene expression from histone modification signals are highly desirable for understanding their combinatorial effects in gene regulation. This knowledge can help in developing ‘epigenetic drugs’ for diseases like cancer. Previous studies for quantifying the relationship between histone modifications and gene expression levels either failed to capture combinatorial effects or relied on multiple methods that separate predictions and combinatorial analysis. This paper develops a unified discriminative framework using a deep convolutional neural network to classify gene expression using histone modification data as input. Our system, called DeepChrome, allows automatic extraction of complex interactions among important features. To simultaneously visualize the combinatorial interactions among histone modifications, we propose a novel optimization-based technique that generates feature pattern maps from the learnt deep model. This provides an intuitive description of underlying epigenetic mechanisms that regulate genes. Results: We show that DeepChrome outperforms state-of-the-art models like Support Vector Machines and Random Forests for the gene expression classification task on 56 different cell types from the REMC database. The output of our visualization technique not only validates previous observations but also yields novel insights into combinatorial interactions among histone modification marks, some of which have recently been observed by experimental studies.
Tasks
Published 2016-07-07
URL http://arxiv.org/abs/1607.02078v1
PDF http://arxiv.org/pdf/1607.02078v1.pdf
PWC https://paperswithcode.com/paper/deepchrome-deep-learning-for-predicting-gene
Repo
Framework
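The first stage of such a model, sliding 1D filters across binned histone-mark signals, can be sketched without a deep learning framework. This is a toy convolution, not the paper's architecture; shapes, names, and the ReLU placement are our assumptions.

```python
import numpy as np

def conv1d_relu(bins, kernels):
    """Toy first stage of a DeepChrome-style model: slide 1D filters across
    binned histone-mark signals and apply a ReLU.
    Shapes: bins is (marks, positions), kernels is (filters, marks, width)."""
    width = kernels.shape[-1]
    windows = np.stack([bins[:, i:i + width]
                        for i in range(bins.shape[1] - width + 1)], axis=-1)
    # Contract each window against each filter, then rectify.
    return np.maximum(0.0, np.einsum('mkw,fmk->fw', windows, kernels))

# One mark over four bins, one width-2 filter that copies the left bin.
fmap = conv1d_relu(np.array([[1.0, 2.0, 3.0, 4.0]]),
                   np.array([[[1.0, 0.0]]]))
```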

Best-Buddies Similarity - Robust Template Matching using Mutual Nearest Neighbors

Title Best-Buddies Similarity - Robust Template Matching using Mutual Nearest Neighbors
Authors Shaul Oron, Tali Dekel, Tianfan Xue, William T. Freeman, Shai Avidan
Abstract We propose a novel method for template matching in unconstrained environments. Its essence is the Best-Buddies Similarity (BBS), a useful, robust, and parameter-free similarity measure between two sets of points. BBS is based on counting the number of Best-Buddies Pairs (BBPs): pairs of points in the source and target sets in which each point is the nearest neighbor of the other. BBS has several key features that make it robust against complex geometric deformations and high levels of outliers, such as those arising from background clutter and occlusions. We study these properties, provide a statistical analysis that justifies them, and demonstrate the consistent success of BBS on a challenging real-world dataset while using different types of features.
Tasks
Published 2016-09-06
URL http://arxiv.org/abs/1609.01571v1
PDF http://arxiv.org/pdf/1609.01571v1.pdf
PWC https://paperswithcode.com/paper/best-buddies-similarity-robust-template
Repo
Framework
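Counting mutual nearest neighbors is simple enough to sketch directly. The normalization by the smaller set size follows our reading of the measure and should be checked against the paper; the squared-Euclidean metric is likewise an assumption (the method also works with other point features).

```python
def bbs(P, Q):
    """Best-Buddies Similarity between two point sets: the fraction of pairs
    (p, q) in which each point is the other's nearest neighbor."""
    def nearest(x, S):
        # Index of the point in S closest to x (squared Euclidean distance).
        return min(range(len(S)),
                   key=lambda j: sum((a - b) ** 2 for a, b in zip(x, S[j])))
    buddies = sum(1 for i, p in enumerate(P)
                  if nearest(Q[nearest(p, Q)], P) == i)
    return buddies / min(len(P), len(Q))

# Both pairs are mutual nearest neighbors, so similarity is maximal.
score = bbs([(0, 0), (5, 5)], [(0, 1), (5, 4)])
```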

Convolutional Residual Memory Networks

Title Convolutional Residual Memory Networks
Authors Joel Moniz, Christopher Pal
Abstract Very deep convolutional neural networks (CNNs) yield state-of-the-art results on a wide variety of visual recognition problems. A number of state-of-the-art methods for image recognition are based on networks with well over 100 layers, and the performance vs. depth trend is moving towards networks in excess of 1000 layers. In such extremely deep architectures the vanishing or exploding gradient problem becomes a key issue. Recent evidence also indicates that convolutional networks could benefit from an interface to explicitly constructed memory mechanisms interacting with a CNN feature processing hierarchy. Correspondingly, we propose and evaluate a memory mechanism enhanced convolutional neural network architecture based on augmenting convolutional residual networks with a long short-term memory mechanism. We refer to this as a convolutional residual memory network. To the best of our knowledge this approach can yield state-of-the-art performance on the CIFAR-100 benchmark and compares well with other state-of-the-art techniques on the CIFAR-10 and SVHN benchmarks. This is achieved using networks with more breadth, much less depth and much less overall computation relative to comparable deep ResNets without the memory mechanism. Our experiments and analysis explore the importance of the memory mechanism, network depth, breadth, and predictive performance.
Tasks
Published 2016-06-16
URL http://arxiv.org/abs/1606.05262v3
PDF http://arxiv.org/pdf/1606.05262v3.pdf
PWC https://paperswithcode.com/paper/convolutional-residual-memory-networks
Repo
Framework

Quantum Laplacian Eigenmap

Title Quantum Laplacian Eigenmap
Authors Yiming Huang, Xiaoyu Li
Abstract The Laplacian eigenmap algorithm is a typical nonlinear model for dimensionality reduction in classical machine learning. We propose an efficient quantum Laplacian eigenmap algorithm that exponentially speeds up its classical counterpart. In our work, we demonstrate that the Hermitian chain product proposed in quantum linear discriminant analysis (arXiv:1510.00113, 2015) can be applied to implement the quantum Laplacian eigenmap algorithm. While the classical Laplacian eigenmap algorithm requires polynomial time to solve the eigenvector problem, our algorithm exponentially speeds up nonlinear dimensionality reduction.
Tasks Dimensionality Reduction
Published 2016-11-02
URL http://arxiv.org/abs/1611.00760v1
PDF http://arxiv.org/pdf/1611.00760v1.pdf
PWC https://paperswithcode.com/paper/quantum-laplacian-eigenmap
Repo
Framework
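For reference, the classical algorithm being accelerated can be written in a few lines. This sketch uses dense heat-kernel affinities for clarity, whereas practical implementations usually build a sparse k-NN graph; sigma and the unnormalized Laplacian are our choices.

```python
import numpy as np

def laplacian_eigenmap(X, dim=1, sigma=1.0):
    """Classical Laplacian eigenmap: embed points with the smallest nontrivial
    eigenvectors of the graph Laplacian of a heat-kernel affinity graph."""
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)  # pairwise squared distances
    W = np.exp(-d2 / (2 * sigma ** 2))           # heat-kernel affinities
    L = np.diag(W.sum(axis=1)) - W               # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(L)                  # eigenvalues in ascending order
    return vecs[:, 1:1 + dim]                    # skip the constant eigenvector

# Two well-separated clusters on a line are split by the embedding's sign.
Y = laplacian_eigenmap(np.array([[0.0], [0.1], [5.0], [5.1]]))
```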

Separating Answers from Queries for Neural Reading Comprehension

Title Separating Answers from Queries for Neural Reading Comprehension
Authors Dirk Weissenborn
Abstract We present a novel neural architecture for answering queries, designed to optimally leverage explicit support in the form of query-answer memories. Our model is able to refine and update a given query while separately accumulating evidence for predicting the answer. Its architecture reflects this separation with dedicated embedding matrices and loosely connected information pathways (modules) for updating the query and accumulating evidence. This separation of responsibilities effectively decouples the search for query related support and the prediction of the answer. On recent benchmark datasets for reading comprehension, our model achieves state-of-the-art results. A qualitative analysis reveals that the model effectively accumulates weighted evidence from the query and over multiple support retrieval cycles which results in a robust answer prediction.
Tasks Reading Comprehension
Published 2016-07-12
URL http://arxiv.org/abs/1607.03316v3
PDF http://arxiv.org/pdf/1607.03316v3.pdf
PWC https://paperswithcode.com/paper/separating-answers-from-queries-for-neural
Repo
Framework

Piecewise Latent Variables for Neural Variational Text Processing

Title Piecewise Latent Variables for Neural Variational Text Processing
Authors Iulian V. Serban, Alexander G. Ororbia II, Joelle Pineau, Aaron Courville
Abstract Advances in neural variational inference have facilitated the learning of powerful directed graphical models with continuous latent variables, such as variational autoencoders. The hope is that such models will learn to represent rich, multi-modal latent factors in real-world data, such as natural language text. However, current models often assume simplistic priors on the latent variables - such as the uni-modal Gaussian distribution - which are incapable of representing complex latent factors efficiently. To overcome this restriction, we propose the simple, but highly flexible, piecewise constant distribution. This distribution has the capacity to represent an exponential number of modes of a latent target distribution, while remaining mathematically tractable. Our results demonstrate that incorporating this new latent distribution into different models yields substantial improvements in natural language processing tasks such as document modeling and natural language generation for dialogue.
Tasks Text Generation
Published 2016-12-01
URL http://arxiv.org/abs/1612.00377v4
PDF http://arxiv.org/pdf/1612.00377v4.pdf
PWC https://paperswithcode.com/paper/piecewise-latent-variables-for-neural
Repo
Framework
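To make the latent distribution concrete, here is a hedged sketch of sampling from a piecewise constant density on [0, 1) with n equal-width pieces; the paper's exact parameterization may differ, and all names here are our own.

```python
import random

def sample_piecewise_constant(weights, rng=random):
    """Draw one sample from a piecewise constant density on [0, 1): choose a
    piece with probability proportional to its weight, then sample uniformly
    inside it. Each nonzero weight can contribute a separate mode."""
    n, total = len(weights), float(sum(weights))
    r, acc = rng.random() * total, 0.0
    for i, w in enumerate(weights):
        acc += w
        if r < acc:
            return (i + rng.random()) / n
    return (n - 1 + rng.random()) / n  # numerical safety net

# All mass on the last third of [0, 1).
x = sample_piecewise_constant([0.0, 0.0, 1.0])
```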