Paper Group ANR 174
Learning and Inferring Relations in Cortical Networks. Finding Singular Features. Criticality in Formal Languages and Statistical Physics. Monge’s Optimal Transport Distance for Image Classification. Audio Visual Emotion Recognition with Temporal Alignment and Perception Attention. Image Based Appraisal of Real Estate Properties. Fast Parallel Randomized Algorithm for Nonnegative Matrix Factorization with KL Divergence for Large Sparse Datasets. Accelerated Randomized Mirror Descent Algorithms For Composite Non-strongly Convex Optimization. On distances, paths and connections for hyperspectral image segmentation. Inference in Probabilistic Logic Programs using Lifted Explanations. Stable Models for Infinitary Formulas with Extensional Atoms. Computing AIC for black-box models using Generalised Degrees of Freedom: a comparison with cross-validation. Compressive Image Recovery Using Recurrent Generative Model. Edward: A library for probabilistic modeling, inference, and criticism. A Message Passing Algorithm for the Minimum Cost Multicut Problem.
Learning and Inferring Relations in Cortical Networks
Title | Learning and Inferring Relations in Cortical Networks |
Authors | Peter U. Diehl, Matthew Cook |
Abstract | A pressing scientific challenge is to understand how brains work. Of particular interest is the neocortex, the part of the brain that is especially large in humans, capable of handling a wide variety of tasks including visual, auditory, language, motor, and abstract processing. These functionalities are processed in different self-organized regions of the neocortical sheet, and yet the anatomical structure carrying out the processing is relatively uniform across the sheet. We are at a loss to explain, simulate, or understand such a multi-functional homogeneous sheet-like computational structure - we do not have computational models which work in this way. Here we present an important step towards developing such models: we show how uniform modules of excitatory and inhibitory neurons can be connected bidirectionally in a network that, when exposed to input in the form of population codes, learns the input encodings as well as the relationships between the inputs. STDP learning rules lead the modules to self-organize into a relational network, which is able to infer missing inputs, restore noisy signals, decide between conflicting inputs, and combine cues to improve estimates. These networks show that it is possible for a homogeneous network of spiking units to self-organize so as to provide meaningful processing of its inputs. If such networks can be scaled up, they could provide an initial computational model relevant to the large-scale anatomy of the neocortex. |
Tasks | |
Published | 2016-08-29 |
URL | http://arxiv.org/abs/1608.08267v1 |
http://arxiv.org/pdf/1608.08267v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-and-inferring-relations-in-cortical |
Repo | |
Framework | |
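The abstract above leaves the plasticity rule unspecified; as a rough illustration of the kind of pair-based STDP update such excitatory/inhibitory modules typically rely on, here is a minimal NumPy sketch. The trace formulation, learning rates, and time constants are illustrative assumptions, not the rule used in the paper.

```python
import numpy as np

# Minimal pair-based STDP sketch (illustrative; parameters are assumptions,
# not the learning rule used in the paper).
def stdp_update(pre_spikes, post_spikes, w, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, dt=1.0, w_max=1.0):
    """pre_spikes, post_spikes: (T, n_pre) and (T, n_post) binary arrays;
    w: (n_pre, n_post) weight matrix, updated in place."""
    x_pre = np.zeros(pre_spikes.shape[1])    # presynaptic trace
    x_post = np.zeros(post_spikes.shape[1])  # postsynaptic trace
    for t in range(pre_spikes.shape[0]):
        x_pre += -dt / tau_plus * x_pre + pre_spikes[t]
        x_post += -dt / tau_minus * x_post + post_spikes[t]
        # potentiate when a post spike follows recent pre activity
        w += a_plus * np.outer(x_pre, post_spikes[t])
        # depress when a pre spike follows recent post activity
        w -= a_minus * np.outer(pre_spikes[t], x_post)
        np.clip(w, 0.0, w_max, out=w)
    return w
```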
Finding Singular Features
Title | Finding Singular Features |
Authors | Christopher Genovese, Marco Perone-Pacifico, Isabella Verdinelli, Larry Wasserman |
Abstract | We present a method for finding high density, low-dimensional structures in noisy point clouds. These structures are sets with zero Lebesgue measure with respect to the $D$-dimensional ambient space and belong to a $d<D$ dimensional space. We call them “singular features.” Hunting for singular features corresponds to finding unexpected or unknown structures hidden in point clouds belonging to $\mathbb{R}^D$. Our method outputs well defined sets of dimensions $d<D$. Unlike spectral clustering, the method works well in the presence of noise. We show how to find singular features by first finding ridges in the estimated density, followed by a filtering step based on the eigenvalues of the Hessian of the density. |
Tasks | |
Published | 2016-06-01 |
URL | http://arxiv.org/abs/1606.00265v1 |
http://arxiv.org/pdf/1606.00265v1.pdf | |
PWC | https://paperswithcode.com/paper/finding-singular-features |
Repo | |
Framework | |
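As a toy illustration of the pipeline sketched in the abstract (estimate the density, find ridge points, then filter them using the eigenvalues of the Hessian), the following NumPy/SciPy snippet marks candidate 1-dimensional singular features in a 2-dimensional point cloud. The kernel density estimator, finite-difference derivatives, and thresholds are illustrative choices, not the authors' estimator.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Toy version of the pipeline: kernel density estimate, ridge condition, and a
# filter on the Hessian eigenvalues.  Finds candidate 1-D singular features in
# a 2-D point cloud; thresholds are illustrative, not the authors' method.
def ridge_points(points, grid, h=1e-3, dens_q=0.5, eig_thresh=0.0, proj_tol=0.05):
    kde = gaussian_kde(points.T)                 # points: (n, 2) array
    f = lambda z: kde(z.reshape(2, 1))[0]
    dens = np.array([f(g) for g in grid])
    dens_cut = np.quantile(dens, dens_q)         # keep only high-density points
    keep = []
    for g, d in zip(grid, dens):
        if d < dens_cut:
            continue
        grad, H = np.zeros(2), np.zeros((2, 2))  # finite-difference derivatives
        for i in range(2):
            e_i = np.eye(2)[i] * h
            grad[i] = (f(g + e_i) - f(g - e_i)) / (2 * h)
            for j in range(2):
                e_j = np.eye(2)[j] * h
                H[i, j] = (f(g + e_i + e_j) - f(g + e_i - e_j)
                           - f(g - e_i + e_j) + f(g - e_i - e_j)) / (4 * h * h)
        vals, vecs = np.linalg.eigh(H)           # eigenvalues in ascending order
        v_min = vecs[:, 0]                       # direction of most negative curvature
        # ridge condition: negative curvature across the ridge, and gradient
        # (numerically) orthogonal to that direction
        if vals[0] < eig_thresh and abs(grad @ v_min) < proj_tol * (np.linalg.norm(grad) + 1e-12):
            keep.append(g)
    return np.array(keep)
```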
Criticality in Formal Languages and Statistical Physics
Title | Criticality in Formal Languages and Statistical Physics |
Authors | Henry W. Lin, Max Tegmark |
Abstract | We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in fewer than two dimensions. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks. |
Tasks | |
Published | 2016-06-21 |
URL | http://arxiv.org/abs/1606.06737v3 |
http://arxiv.org/pdf/1606.06737v3.pdf | |
PWC | https://paperswithcode.com/paper/criticality-in-formal-languages-and |
Repo | |
Framework | |
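The exponential-decay claim for probabilistic regular grammars is easy to check numerically, since such a grammar corresponds to a Markov chain over symbols. The sketch below computes the mutual information $I(X_0; X_d)$ exactly for a small, arbitrarily chosen transition matrix (the matrix is an illustrative assumption) and shows the roughly exponential fall-off with separation $d$.

```python
import numpy as np

# Numerical illustration of the exponential decay of mutual information with
# symbol separation d for a probabilistic regular grammar, modeled here as a
# stationary 3-state Markov chain (transition matrix chosen arbitrarily).
P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.2, 0.3, 0.5]])

# stationary distribution: left eigenvector of P for eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

def mutual_information(P, pi, d):
    """I(X_0; X_d) in nats for a stationary Markov chain."""
    Pd = np.linalg.matrix_power(P, d)      # d-step transition matrix
    joint = pi[:, None] * Pd               # P(X_0 = i, X_d = j)
    indep = np.outer(pi, pi)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log(joint[mask] / indep[mask])))

for d in [1, 2, 4, 8, 16]:
    print(d, mutual_information(P, pi, d))  # decays roughly exponentially in d
```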
Monge’s Optimal Transport Distance for Image Classification
Title | Monge’s Optimal Transport Distance for Image Classification |
Authors | Michael Snow, Jan Van lent |
Abstract | This paper focuses on a similarity measure, known as the Wasserstein distance, with which to compare images. The Wasserstein distance results from a partial differential equation (PDE) formulation of Monge’s optimal transport problem. We present an efficient numerical solution method for solving Monge’s problem. To demonstrate the measure’s discriminatory power when comparing images, we use a $1$-Nearest Neighbour ($1$-NN) machine learning algorithm to illustrate the measure’s potential benefits over other more traditional distance metrics and also the Tangent Space distance, designed to perform excellently on the well-known MNIST dataset. To our knowledge, the PDE formulation of the Wasserstein metric has not been presented for dealing with image comparison, nor has the Wasserstein distance been used within the $1$-nearest neighbour architecture. |
Tasks | Image Classification |
Published | 2016-12-01 |
URL | http://arxiv.org/abs/1612.00181v2 |
http://arxiv.org/pdf/1612.00181v2.pdf | |
PWC | https://paperswithcode.com/paper/monges-optimal-transport-distance-for-image |
Repo | |
Framework | |
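To make the 1-NN-with-transport-distance idea concrete, here is a small sketch. Note that the paper solves a PDE formulation of Monge's problem; for brevity this sketch solves the discrete Kantorovich linear program with scipy.optimize.linprog instead, so it illustrates the general construction rather than the authors' solver, and assumes small images.

```python
import numpy as np
from scipy.optimize import linprog

# Optimal transport distance between two small grayscale images, treated as
# normalized mass distributions over pixel coordinates.  This solves the
# discrete Kantorovich LP, not the paper's PDE formulation of Monge's problem.
def ot_distance(img_a, img_b):
    a = img_a.ravel().astype(float)
    b = img_b.ravel().astype(float)
    a /= a.sum()
    b /= b.sum()
    h, w = img_a.shape
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    C = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)  # ground cost
    n = len(a)
    A_eq = np.zeros((2 * n, n * n))
    for i in range(n):
        A_eq[i, i * n:(i + 1) * n] = 1.0   # sum_j T[i, j] = a[i]
        A_eq[n + i, i::n] = 1.0            # sum_i T[i, j] = b[j]
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
                  bounds=(0, None), method="highs")
    return res.fun

def predict_1nn(test_img, train_imgs, train_labels):
    """1-Nearest Neighbour classification under the transport distance."""
    d = [ot_distance(test_img, t) for t in train_imgs]
    return train_labels[int(np.argmin(d))]
```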
Audio Visual Emotion Recognition with Temporal Alignment and Perception Attention
Title | Audio Visual Emotion Recognition with Temporal Alignment and Perception Attention |
Authors | Linlin Chao, Jianhua Tao, Minghao Yang, Ya Li, Zhengqi Wen |
Abstract | This paper focuses on two key problems for audio-visual emotion recognition in video. One is the temporal alignment of the audio and visual streams for feature-level fusion. The other is locating and re-weighting the perception attentions in the whole audio-visual stream for better recognition. A Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) is employed as the main classification architecture. First, a soft attention mechanism aligns the audio and visual streams. Second, seven emotion embedding vectors, each corresponding to one emotion class, are added to locate the perception attentions. The locating and re-weighting process is also based on the soft attention mechanism. The experimental results on the EmotiW2015 dataset and the qualitative analysis show the effectiveness of the two proposed techniques. |
Tasks | Emotion Recognition |
Published | 2016-03-28 |
URL | http://arxiv.org/abs/1603.08321v1 |
http://arxiv.org/pdf/1603.08321v1.pdf | |
PWC | https://paperswithcode.com/paper/audio-visual-emotion-recognition-with |
Repo | |
Framework | |
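A minimal sketch of the first ingredient, soft-attention temporal alignment of the audio and visual streams, is shown below. The dot-product scoring, feature dimensions, and fusion by concatenation are illustrative assumptions rather than the exact architecture in the paper.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Soft-attention temporal alignment: each audio frame attends over all visual
# frames and receives a weighted visual summary, which is concatenated for
# feature-level fusion.  Scoring function and dimensions are assumptions.
def align_audio_visual(audio_feats, visual_feats, W):
    """audio_feats: (Ta, Da), visual_feats: (Tv, Dv), W: (Da, Dv)."""
    scores = audio_feats @ W @ visual_feats.T   # (Ta, Tv) alignment scores
    alpha = softmax(scores, axis=1)             # attention weights per audio frame
    attended_visual = alpha @ visual_feats      # (Ta, Dv) aligned visual context
    return np.concatenate([audio_feats, attended_visual], axis=1)  # (Ta, Da + Dv)

# usage sketch with random features
rng = np.random.default_rng(0)
fused = align_audio_visual(rng.normal(size=(100, 40)),    # 100 audio frames
                           rng.normal(size=(25, 128)),    # 25 visual frames
                           rng.normal(size=(40, 128)) * 0.1)
```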
Image Based Appraisal of Real Estate Properties
Title | Image Based Appraisal of Real Estate Properties |
Authors | Quanzeng You, Ran Pang, Liangliang Cao, Jiebo Luo |
Abstract | Real estate appraisal, the process of estimating the price of real estate properties, is crucial for both buyers and sellers as the basis for negotiation and transaction. Traditionally, the repeat sales model has been widely adopted to estimate real estate prices. However, it depends on the design and calculation of a complex economic index, which is challenging to estimate accurately. Today, real estate brokers provide their clients with easy access to detailed online information on real estate properties. We are interested in estimating real estate prices from these large amounts of easily accessed data. In particular, we analyze the predictive power of online house pictures, which are one of the key factors for online users when deciding whether to visit a property. The development of robust computer vision algorithms makes the analysis of visual content possible. In this work, we employ a Recurrent Neural Network (RNN) to predict real estate prices using state-of-the-art visual features. The experimental results indicate that our model outperforms several other state-of-the-art baseline algorithms in terms of both mean absolute error (MAE) and mean absolute percentage error (MAPE). |
Tasks | |
Published | 2016-11-28 |
URL | http://arxiv.org/abs/1611.09180v2 |
http://arxiv.org/pdf/1611.09180v2.pdf | |
PWC | https://paperswithcode.com/paper/image-based-appraisal-of-real-estate |
Repo | |
Framework | |
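For reference, the two evaluation metrics reported in the abstract, MAE and MAPE, written out explicitly (the example prices below are made up):

```python
import numpy as np

# The two evaluation metrics named in the abstract.
def mae(y_true, y_pred):
    return float(np.mean(np.abs(y_true - y_pred)))

def mape(y_true, y_pred):
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)

prices = np.array([350_000.0, 420_000.0, 275_000.0])   # invented example values
preds  = np.array([365_000.0, 400_000.0, 300_000.0])
print(mae(prices, preds), mape(prices, preds))
```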
Fast Parallel Randomized Algorithm for Nonnegative Matrix Factorization with KL Divergence for Large Sparse Datasets
Title | Fast Parallel Randomized Algorithm for Nonnegative Matrix Factorization with KL Divergence for Large Sparse Datasets |
Authors | Duy Khuong Nguyen, Tu Bao Ho |
Abstract | Nonnegative Matrix Factorization (NMF) with Kullback-Leibler divergence (NMF-KL) is one of the most significant NMF problems and is equivalent to Probabilistic Latent Semantic Indexing (PLSI), which has been successfully applied in many applications. For sparse count data, a Poisson distribution and KL divergence provide sparse models and sparse representations, which describe the random variation better than a normal distribution and the Frobenius norm. In particular, sparse models provide a more concise understanding of the appearance of attributes over latent components, while sparse representations provide concise interpretability of the contribution of latent components over instances. However, minimizing NMF with KL divergence is much more difficult than minimizing NMF with the Frobenius norm, and sparse models, sparse representations, and fast algorithms for large sparse datasets remain challenges for NMF with KL divergence. In this paper, we propose a fast parallel randomized coordinate descent algorithm with fast convergence for large sparse datasets that achieves sparse models and sparse representations. In our experiments, the proposed algorithm outperforms existing methods on this problem. |
Tasks | |
Published | 2016-04-14 |
URL | http://arxiv.org/abs/1604.04026v1 |
http://arxiv.org/pdf/1604.04026v1.pdf | |
PWC | https://paperswithcode.com/paper/fast-parallel-randomized-algorithm-for |
Repo | |
Framework | |
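The paper's contribution is a fast parallel randomized coordinate-descent solver, which is not reproduced here. As a compact reference point for the objective it optimizes, the sketch below implements the classic Lee-Seung multiplicative updates for KL-divergence NMF; initialization and iteration counts are arbitrary.

```python
import numpy as np

# Classic Lee-Seung multiplicative updates for the KL-divergence NMF objective
# (a baseline illustration, not the authors' randomized coordinate descent).
def nmf_kl(V, k, n_iter=200, eps=1e-10, seed=0):
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1) + eps)          # update W columns
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)  # update H rows
    return W, H
```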
Accelerated Randomized Mirror Descent Algorithms For Composite Non-strongly Convex Optimization
Title | Accelerated Randomized Mirror Descent Algorithms For Composite Non-strongly Convex Optimization |
Authors | Le Thi Khanh Hien, Cuong V. Nguyen, Huan Xu, Canyi Lu, Jiashi Feng |
Abstract | We consider the problem of minimizing the sum of an average function of a large number of smooth convex components and a general, possibly non-differentiable, convex function. Although many methods have been proposed to solve this problem under the assumption that the sum is strongly convex, few methods support the non-strongly convex case. Adding a small quadratic regularization is a common device used to tackle non-strongly convex problems; however, it may cause loss of sparsity of solutions or weaken the performance of the algorithms. Avoiding this device, we propose an accelerated randomized mirror descent method for solving this problem without the strong convexity assumption. Our method extends the deterministic accelerated proximal gradient methods of Paul Tseng and can be applied even when proximal points are computed inexactly. We also propose a scheme for solving the problem when the component functions are non-smooth. |
Tasks | |
Published | 2016-05-23 |
URL | http://arxiv.org/abs/1605.06892v6 |
http://arxiv.org/pdf/1605.06892v6.pdf | |
PWC | https://paperswithcode.com/paper/accelerated-randomized-mirror-descent |
Repo | |
Framework | |
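The method itself is a randomized mirror descent scheme; the sketch below shows only the deterministic Euclidean special case it builds on (Tseng-style accelerated proximal gradient, here in its FISTA form) applied to a lasso-type composite objective. Step size, regularization weight, and problem data are illustrative assumptions.

```python
import numpy as np

# Deterministic accelerated proximal gradient (FISTA) for
# min_x f(x) + lam * ||x||_1 with f smooth; a special case of the composite
# setting in the paper, not the randomized mirror descent method itself.
def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def accelerated_prox_grad(grad_f, L, lam, x0, n_iter=500):
    x = y = x0.copy()
    t = 1.0
    for _ in range(n_iter):
        x_next = soft_threshold(y - grad_f(y) / L, lam / L)  # prox step at extrapolated point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)     # Nesterov extrapolation
        x, t = x_next, t_next
    return x

# usage: lasso-style problem 0.5 * ||Ax - b||^2 + lam * ||x||_1
rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 100)), rng.normal(size=50)
L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
x_hat = accelerated_prox_grad(lambda x: A.T @ (A @ x - b), L, 0.1, np.zeros(100))
```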
On distances, paths and connections for hyperspectral image segmentation
Title | On distances, paths and connections for hyperspectral image segmentation |
Authors | Guillaume Noyel, Jesus Angulo, Dominique Jeulin |
Abstract | The present paper introduces the $\eta$ and $\mu$ connections in order to add regional information to $\lambda$-flat zones, which only take into account local information. A top-down approach is considered. First, $\lambda$-flat zones are built in a way that leads to a sub-segmentation. Then a finer segmentation is obtained by computing $\eta$-bounded regions and $\mu$-geodesic balls inside the $\lambda$-flat zones. The proposed algorithms for the construction of new partitions are based on queues with an ordered selection of seeds using the cumulative distance. $\eta$-bounded regions offer control over the variation of amplitude in the class from a point, called the center, while $\mu$-geodesic balls control the “size” of the class. These results are applied to hyperspectral images. |
Tasks | Hyperspectral Image Segmentation, Semantic Segmentation |
Published | 2016-02-02 |
URL | http://arxiv.org/abs/1603.08497v1 |
http://arxiv.org/pdf/1603.08497v1.pdf | |
PWC | https://paperswithcode.com/paper/on-distances-paths-and-connections-for |
Repo | |
Framework | |
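A simplified sketch of the queue-based construction of $\eta$-bounded regions described in the abstract is given below, for a single-band image with scalar amplitudes. The paper works with spectral distances on hyperspectral data and also builds $\mu$-geodesic balls, so this is an illustration of the growth mechanism only.

```python
import heapq
import numpy as np

# Eta-bounded region growing on a single-band image: regions grow from seeds
# through a priority queue ordered by cumulative distance, and a pixel joins a
# region only if its amplitude differs from the seed (the region center) by at
# most eta.  Scalar simplification of the hyperspectral setting in the paper.
def eta_bounded_regions(img, seeds, eta):
    labels = -np.ones(img.shape, dtype=int)
    heap = []
    for lab, (i, j) in enumerate(seeds):
        labels[i, j] = lab
        heapq.heappush(heap, (0.0, i, j, lab))
    while heap:
        dist, i, j, lab = heapq.heappop(heap)
        si, sj = seeds[lab]
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < img.shape[0] and 0 <= nj < img.shape[1] and labels[ni, nj] < 0:
                step = abs(float(img[ni, nj]) - float(img[i, j]))
                if abs(float(img[ni, nj]) - float(img[si, sj])) <= eta:
                    labels[ni, nj] = lab
                    heapq.heappush(heap, (dist + step, ni, nj, lab))
    return labels
```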
Inference in Probabilistic Logic Programs using Lifted Explanations
Title | Inference in Probabilistic Logic Programs using Lifted Explanations |
Authors | Arun Nampally, C. R. Ramakrishnan |
Abstract | In this paper, we consider the problem of lifted inference in the context of Prism-like probabilistic logic programming languages. Traditional inference in such languages involves the construction of an explanation graph for the query and computing probabilities over this graph. When evaluating queries over probabilistic logic programs with a large number of instances of random variables, traditional methods treat each instance separately. For many programs and queries, we observe that explanations can be summarized into substantially more compact structures, which we call lifted explanation graphs. In this paper, we define lifted explanation graphs and operations over them. In contrast to existing lifted inference techniques, our method for constructing lifted explanations naturally generalizes existing methods for constructing explanation graphs. To compute the probability of query answers, we solve recurrences generated from the lifted graphs. We show examples where the use of our technique reduces the asymptotic complexity of inference. |
Tasks | |
Published | 2016-08-20 |
URL | http://arxiv.org/abs/1608.05763v1 |
http://arxiv.org/pdf/1608.05763v1.pdf | |
PWC | https://paperswithcode.com/paper/inference-in-probabilistic-logic-programs |
Repo | |
Framework | |
Stable Models for Infinitary Formulas with Extensional Atoms
Title | Stable Models for Infinitary Formulas with Extensional Atoms |
Authors | Amelia Harrison, Vladimir Lifschitz |
Abstract | The definition of stable models for propositional formulas with infinite conjunctions and disjunctions can be used to describe the semantics of answer set programming languages. In this note, we enhance that definition by introducing a distinction between intensional and extensional atoms. The symmetric splitting theorem for first-order formulas is then extended to infinitary formulas and used to reason about infinitary definitions. This note is under consideration for publication in Theory and Practice of Logic Programming. |
Tasks | |
Published | 2016-08-04 |
URL | http://arxiv.org/abs/1608.01603v1 |
http://arxiv.org/pdf/1608.01603v1.pdf | |
PWC | https://paperswithcode.com/paper/stable-models-for-infinitary-formulas-with |
Repo | |
Framework | |
Computing AIC for black-box models using Generalised Degrees of Freedom: a comparison with cross-validation
Title | Computing AIC for black-box models using Generalised Degrees of Freedom: a comparison with cross-validation |
Authors | Severin Hauenstein, Carsten F. Dormann, Simon N Wood |
Abstract | Generalised Degrees of Freedom (GDF), as defined by Ye (1998, JASA 93:120-131), represent the sensitivity of model fits to perturbations of the data. As such they can be computed for any statistical model, making it possible, in principle, to derive the number of parameters in machine-learning approaches. Defined originally only for normally distributed data, we here investigate the potential of this approach for Bernoulli data. GDF values for models of simulated and real data are compared to model complexity estimates from cross-validation. Similarly, we computed GDF-based AICc for randomForest, neural networks and boosted regression trees and demonstrated its similarity to cross-validation. GDF estimates for binary data were unstable and inconsistently sensitive to the number of data points perturbed simultaneously, while at the same time being extremely computationally intensive to calculate. Repeated 10-fold cross-validation was more robust, based on fewer assumptions and faster to compute. Our findings suggest that the GDF approach does not readily transfer to Bernoulli data and a wider range of regression approaches. |
Tasks | |
Published | 2016-03-09 |
URL | http://arxiv.org/abs/1603.02743v1 |
http://arxiv.org/pdf/1603.02743v1.pdf | |
PWC | https://paperswithcode.com/paper/computing-aic-for-black-box-models-using |
Repo | |
Framework | |
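A minimal sketch of Ye's perturbation-based GDF estimate for a black-box learner is shown below; `fit_predict` is a hypothetical placeholder for any regression procedure returning in-sample fitted values, and the perturbation scale and replicate count are tuning choices (the abstract notes that this estimate is unstable for Bernoulli data).

```python
import numpy as np

# Perturbation estimate of Generalised Degrees of Freedom (Ye, 1998) for a
# black-box learner: perturb the responses with small Gaussian noise, refit,
# and measure how strongly each fitted value co-varies with its own
# perturbation.  `fit_predict` is a placeholder for any learner.
def generalised_df(fit_predict, X, y, tau=0.1, n_rep=50, seed=0):
    """fit_predict(X, y) must return in-sample fitted values for X."""
    rng = np.random.default_rng(seed)
    n = len(y)
    deltas = np.empty((n_rep, n))
    fits = np.empty((n_rep, n))
    for r in range(n_rep):
        deltas[r] = rng.normal(0.0, tau, size=n)
        fits[r] = fit_predict(X, y + deltas[r])
    # GDF ~= sum_i cov(mu_hat_i, delta_i) / tau^2
    cov = ((fits - fits.mean(axis=0)) * (deltas - deltas.mean(axis=0))).mean(axis=0)
    return float(cov.sum() / tau**2)
```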
Compressive Image Recovery Using Recurrent Generative Model
Title | Compressive Image Recovery Using Recurrent Generative Model |
Authors | Akshat Dave, Anil Kumar Vadathya, Kaushik Mitra |
Abstract | Reconstruction of signals from compressively sensed measurements is an ill-posed problem. In this paper, we leverage the recurrent generative model, RIDE, as an image prior for compressive image reconstruction. Recurrent networks can model long-range dependencies in images and hence are suitable to handle global multiplexing in reconstruction from compressive imaging. We perform MAP inference with RIDE using back-propagation to the inputs and the projected gradient method. We propose an entropy-thresholding-based approach to better preserve texture in images. Our approach shows superior reconstructions compared to recent global reconstruction approaches like D-AMP and TVAL3 on both simulated and real data. |
Tasks | Image Reconstruction |
Published | 2016-12-13 |
URL | http://arxiv.org/abs/1612.04229v2 |
http://arxiv.org/pdf/1612.04229v2.pdf | |
PWC | https://paperswithcode.com/paper/compressive-image-recovery-using-recurrent |
Repo | |
Framework | |
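A sketch of the optimization skeleton, MAP estimation by gradient steps on a log-prior followed by projection onto the measurement-consistent set, is given below. The paper's prior is the RIDE recurrent generative model with gradients obtained by back-propagation; here `log_prior_grad` is a hypothetical placeholder for any differentiable image prior, so this shows the projected-gradient structure only, not the authors' model.

```python
import numpy as np

# MAP recovery by projected gradient ascent: ascend the log-prior, then
# project back onto the measurement-consistent affine set {x : Phi x = y}.
# `log_prior_grad` stands in for the gradient of any differentiable prior.
def map_projected_gradient(y, Phi, log_prior_grad, x0, step=1e-3, n_iter=200):
    Phi_pinv = np.linalg.pinv(Phi)            # used for the projection step
    x = x0.copy()
    for _ in range(n_iter):
        x = x + step * log_prior_grad(x)      # gradient step on the log-prior
        x = x + Phi_pinv @ (y - Phi @ x)      # project onto Phi x = y
    return x
```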
Edward: A library for probabilistic modeling, inference, and criticism
Title | Edward: A library for probabilistic modeling, inference, and criticism |
Authors | Dustin Tran, Alp Kucukelbir, Adji B. Dieng, Maja Rudolph, Dawen Liang, David M. Blei |
Abstract | Probabilistic modeling is a powerful approach for analyzing empirical information. We describe Edward, a library for probabilistic modeling. Edward’s design reflects an iterative process pioneered by George Box: build a model of a phenomenon, make inferences about the model given data, and criticize the model’s fit to the data. Edward supports a broad class of probabilistic models, efficient algorithms for inference, and many techniques for model criticism. The library builds on top of TensorFlow to support distributed training and hardware such as GPUs. Edward enables the development of complex probabilistic models and their algorithms at a massive scale. |
Tasks | |
Published | 2016-10-31 |
URL | http://arxiv.org/abs/1610.09787v3 |
http://arxiv.org/pdf/1610.09787v3.pdf | |
PWC | https://paperswithcode.com/paper/edward-a-library-for-probabilistic-modeling |
Repo | |
Framework | |
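A minimal Bayesian linear regression in the style of the Edward 1.x examples, covering the three stages named in the abstract (model, inference, criticism). This follows the library's published tutorial style and assumes TensorFlow 1.x; treat the exact calls as a sketch rather than authoritative API documentation.

```python
import numpy as np
import tensorflow as tf
import edward as ed
from edward.models import Normal

# Toy data (illustrative only)
N, D = 100, 5
X_train = np.random.randn(N, D).astype(np.float32)
y_train = (X_train @ np.ones(D) + 0.1 * np.random.randn(N)).astype(np.float32)

# Model: Bayesian linear regression
X = tf.placeholder(tf.float32, [N, D])
w = Normal(loc=tf.zeros(D), scale=tf.ones(D))
b = Normal(loc=tf.zeros(1), scale=tf.ones(1))
y = Normal(loc=ed.dot(X, w) + b, scale=tf.ones(N))

# Variational approximation
qw = Normal(loc=tf.get_variable("qw/loc", [D]),
            scale=tf.nn.softplus(tf.get_variable("qw/scale", [D])))
qb = Normal(loc=tf.get_variable("qb/loc", [1]),
            scale=tf.nn.softplus(tf.get_variable("qb/scale", [1])))

# Inference: variational KL(q || p) minimization
inference = ed.KLqp({w: qw, b: qb}, data={X: X_train, y: y_train})
inference.run(n_samples=5, n_iter=250)

# Criticism: evaluate the posterior predictive
y_post = ed.copy(y, {w: qw, b: qb})
print(ed.evaluate('mean_squared_error', data={X: X_train, y_post: y_train}))
```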
A Message Passing Algorithm for the Minimum Cost Multicut Problem
Title | A Message Passing Algorithm for the Minimum Cost Multicut Problem |
Authors | Paul Swoboda, Bjoern Andres |
Abstract | We propose a dual decomposition and linear program relaxation of the NP-hard minimum cost multicut problem. Unlike other polyhedral relaxations of the multicut polytope, it is amenable to efficient optimization by message passing. Like other polyhedral relaxations, it can be tightened efficiently by cutting planes. We define an algorithm that alternates between message passing and efficient separation of cycle- and odd-wheel inequalities. This algorithm is more efficient than state-of-the-art algorithms based on linear programming, including algorithms written in the framework of leading commercial software, as we show in experiments with large instances of the problem from applications in computer vision, biomedical image analysis and data mining. |
Tasks | |
Published | 2016-12-16 |
URL | http://arxiv.org/abs/1612.05441v2 |
http://arxiv.org/pdf/1612.05441v2.pdf | |
PWC | https://paperswithcode.com/paper/a-message-passing-algorithm-for-the-minimum |
Repo | |
Framework | |
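The message-passing dual itself is beyond a short snippet, but one ingredient named in the abstract, the separation of violated cycle inequalities, has a standard shortest-path implementation, sketched below for fractional edge labels $x_e$ (with $x_e = 1$ meaning the edge is cut). This is the textbook separation routine, not the paper's message-passing scheme.

```python
import heapq
from collections import defaultdict

# Separation of violated cycle inequalities x_e <= sum_{f in C \ {e}} x_f for
# a fractional multicut solution x.  For each edge (u, v) we compute the
# shortest u-v path under weights x_f, excluding the edge itself; if that path
# is shorter than x_e, the path plus (u, v) forms a cycle whose inequality is
# violated.  Nodes are assumed to be integers.
def violated_cycle_edges(edges, x, tol=1e-6):
    """edges: list of (u, v) pairs; x: dict mapping each edge to its value."""
    adj = defaultdict(list)
    for (u, v) in edges:
        adj[u].append((v, (u, v)))
        adj[v].append((u, (u, v)))
    violated = []
    for e in edges:
        u, v = e
        dist = {u: 0.0}
        heap = [(0.0, u)]
        while heap:                          # Dijkstra from u, skipping edge e
            d, node = heapq.heappop(heap)
            if node == v:
                break
            if d > dist.get(node, float("inf")):
                continue
            for nbr, edge in adj[node]:
                if edge == e:
                    continue
                nd = d + x[edge]
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr] = nd
                    heapq.heappush(heap, (nd, nbr))
        if dist.get(v, float("inf")) + tol < x[e]:
            violated.append(e)
    return violated
```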