October 17, 2019

2978 words 14 mins read

Paper Group ANR 809

Towards Accurate and High-Speed Spiking Neuromorphic Systems with Data Quantization-Aware Deep Networks. On The Chain Rule Optimal Transport Distance. Disunited Nations? A Multiplex Network Approach to Detecting Preference Affinity Blocs using Texts and Votes. On Rational Entailment for Propositional Typicality Logic. Identification of multi-scale …

Towards Accurate and High-Speed Spiking Neuromorphic Systems with Data Quantization-Aware Deep Networks

Title Towards Accurate and High-Speed Spiking Neuromorphic Systems with Data Quantization-Aware Deep Networks
Authors Fuqiang Liu, C. Liu
Abstract Deep Neural Networks (DNNs) have gained immense success in cognitive applications and greatly pushed today’s artificial intelligence forward. The biggest challenge in executing DNNs is their extremely data-intensive computations. Computing efficiency in speed and energy is constrained when traditional computing platforms are employed for such computationally hungry executions. Spiking neuromorphic computing (SNC) has been widely investigated for deep network implementation owing to its high efficiency in computation and communication. However, the weights and signals of DNNs must be quantized when deploying the DNNs on SNC, which results in unacceptable accuracy loss. Previous works mainly focus on weight discretization while inter-layer signals are largely neglected. In this work, we propose to represent DNNs with fixed integer inter-layer signals and fixed-point weights while maintaining good accuracy. We implement the proposed DNNs on a memristor-based SNC system as a deployment example. With 4-bit data representation, our results show that the accuracy loss can be kept within 0.02% (2.3%) on MNIST (CIFAR-10). Compared with 8-bit dynamic fixed-point DNNs, our system achieves more than 9.8x speedup, 89.1% energy saving, and 30% area saving.
Tasks Quantization
Published 2018-05-08
URL https://arxiv.org/abs/1805.03054v3
PDF https://arxiv.org/pdf/1805.03054v3.pdf
PWC https://paperswithcode.com/paper/towards-accurate-and-high-speed-spiking
Repo
Framework
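To make the idea concrete, here is a minimal NumPy sketch of fixed-point weight quantization combined with integer inter-layer signals, in the spirit of the abstract above. It is not the authors' implementation: the symmetric rounding scheme, the per-tensor scales, and the toy layer sizes are assumptions for illustration only.

```python
import numpy as np

def quantize_weights(w, bits=4):
    """Symmetric fixed-point quantization of weights to `bits` bits (assumed scheme)."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    return np.round(w / scale).astype(np.int32), scale

def quantize_activations(a, bits=4):
    """Quantize non-negative (post-ReLU) activations to unsigned integers."""
    qmax = 2 ** bits - 1
    scale = np.max(a) / qmax if np.max(a) > 0 else 1.0
    return np.round(a / scale).astype(np.int32), scale

# Toy two-layer forward pass with quantized weights and integer inter-layer signals.
rng = np.random.default_rng(0)
x = rng.random((1, 16))
w1, w2 = rng.standard_normal((16, 32)), rng.standard_normal((32, 10))

qx, sx = quantize_activations(x)
qw1, sw1 = quantize_weights(w1)
h = np.maximum(qx @ qw1, 0) * (sx * sw1)   # dequantize after the integer matmul
qh, sh = quantize_activations(h)           # re-quantize the inter-layer signal
qw2, sw2 = quantize_weights(w2)
logits = (qh @ qw2) * (sh * sw2)
print(logits.shape)
```

The key point the sketch illustrates is that every matrix multiplication operates on integers, with floating-point scales applied only when re-quantizing the signal between layers.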

On The Chain Rule Optimal Transport Distance

Title On The Chain Rule Optimal Transport Distance
Authors Frank Nielsen, Ke Sun
Abstract We define a novel class of distances between statistical multivariate distributions by solving an optimal transportation problem on their marginal densities with respect to a ground distance defined on their conditional densities. By using the chain rule factorization of probabilities, we show how to perform optimal transport on a ground space that is an information-geometric manifold of conditional probabilities. We prove that this new distance is a metric whenever the chosen ground distance is a metric. Our distance generalizes both the Wasserstein distances between point sets and a recently introduced metric distance between statistical mixtures. As a first application of this Chain Rule Optimal Transport (CROT) distance, we show that the ground distance between statistical mixtures is upper bounded by this optimal transport distance and its fast relaxed Sinkhorn distance, whenever the ground distance is jointly convex. We report experiments that quantify the tightness of the CROT distance for the total variation distance, the square root generalization of the Jensen-Shannon divergence, the Wasserstein $W_p$ metric and the Rényi divergence between mixtures.
Tasks
Published 2018-12-19
URL http://arxiv.org/abs/1812.08113v2
PDF http://arxiv.org/pdf/1812.08113v2.pdf
PWC https://paperswithcode.com/paper/on-the-chain-rule-optimal-transport-distance
Repo
Framework
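As a rough illustration of the relaxed Sinkhorn variant mentioned above, the sketch below computes an entropic optimal transport cost between the weights of two Gaussian mixtures, with the ground cost taken between the conditional (component) densities. The choice of univariate Gaussian components, the 2-Wasserstein ground distance, and the regularization strength are assumptions; the paper works with general ground distances.

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.1, n_iter=200):
    """Entropic-regularized OT (Sinkhorn) between histograms a, b with cost matrix C."""
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = np.diag(u) @ K @ np.diag(v)
    return np.sum(P * C)   # relaxed transport cost

# Two Gaussian mixtures: (weights, means, stds). Values are illustrative only.
a, mu_a, sd_a = np.array([0.6, 0.4]), np.array([0.0, 3.0]), np.array([1.0, 0.5])
b, mu_b, sd_b = np.array([0.3, 0.7]), np.array([0.5, 2.5]), np.array([1.2, 0.6])

# Ground distance between conditional densities: W2 between univariate Gaussians.
C = np.sqrt((mu_a[:, None] - mu_b[None, :]) ** 2 + (sd_a[:, None] - sd_b[None, :]) ** 2)

print("relaxed CROT-style cost:", sinkhorn(a, b, C))
```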

Disunited Nations? A Multiplex Network Approach to Detecting Preference Affinity Blocs using Texts and Votes

Title Disunited Nations? A Multiplex Network Approach to Detecting Preference Affinity Blocs using Texts and Votes
Authors Caleb Pomeroy, Niheer Dasandi, Slava J. Mikhaylov
Abstract This paper contributes to an emerging literature that models votes and text in tandem to better understand polarization of expressed preferences. It introduces a new approach to estimate preference polarization in multidimensional settings, such as international relations, based on developments in the natural language processing and network science literatures – namely word embeddings, which retain valuable syntactical qualities of human language, and community detection in multilayer networks, which locates densely connected actors across multiple, complex networks. We find that the employment of these tools in tandem helps to better estimate states’ foreign policy preferences expressed in UN votes and speeches beyond that permitted by votes alone. The utility of these located affinity blocs is demonstrated through an application to conflict onset in International Relations, though these tools will be of interest to all scholars faced with the measurement of preferences and polarization in multidimensional settings.
Tasks Community Detection, Word Embeddings
Published 2018-02-01
URL https://arxiv.org/abs/1802.00396v2
PDF https://arxiv.org/pdf/1802.00396v2.pdf
PWC https://paperswithcode.com/paper/disunited-nations-a-multiplex-network
Repo
Framework
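A hedged sketch of the pipeline described above: one network layer built from similarity of (here synthetic) speech embeddings, one from vote agreement, followed by community detection. True multilayer community detection is replaced by a simple layer average with networkx's single-layer modularity method, so this is a stand-in rather than the authors' multiplex approach.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(1)
states = [f"state_{i}" for i in range(10)]

# Layer 1: cosine similarity of (hypothetical) speech embeddings.
emb = rng.standard_normal((10, 50))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
speech_sim = emb @ emb.T

# Layer 2: fraction of votes on which two states agree (synthetic here).
votes = rng.integers(0, 2, size=(10, 30))
vote_agree = (votes[:, None, :] == votes[None, :, :]).mean(axis=2)

# Simple stand-in for a multilayer method: average the two layers and
# run single-layer community detection on the combined weighted graph.
combined = (speech_sim + vote_agree) / 2
G = nx.Graph()
for i in range(10):
    for j in range(i + 1, 10):
        G.add_edge(states[i], states[j], weight=max(combined[i, j], 0.0))

blocs = greedy_modularity_communities(G, weight="weight")
print([sorted(b) for b in blocs])
```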

On Rational Entailment for Propositional Typicality Logic

Title On Rational Entailment for Propositional Typicality Logic
Authors Richard Booth, Giovanni Casini, Thomas Meyer, Ivan Varzinczak
Abstract Propositional Typicality Logic (PTL) is a recently proposed logic, obtained by enriching classical propositional logic with a typicality operator capturing the most typical (alias normal or conventional) situations in which a given sentence holds. The semantics of PTL is in terms of ranked models as studied in the well-known KLM approach to preferential reasoning and therefore KLM-style rational consequence relations can be embedded in PTL. In spite of the non-monotonic features introduced by the semantics adopted for the typicality operator, the obvious Tarskian definition of entailment for PTL remains monotonic and is therefore not appropriate in many contexts. Our first important result is an impossibility theorem showing that a set of proposed postulates that at first all seem appropriate for a notion of entailment with regard to typicality cannot be satisfied simultaneously. Closer inspection reveals that this result is best interpreted as an argument for advocating the development of more than one type of PTL entailment. In the spirit of this interpretation, we investigate three different (semantic) versions of entailment for PTL, each one based on the definition of rational closure as introduced by Lehmann and Magidor for KLM-style conditionals, and constructed using different notions of minimality.
Tasks
Published 2018-09-28
URL https://arxiv.org/abs/1809.10946v2
PDF https://arxiv.org/pdf/1809.10946v2.pdf
PWC https://paperswithcode.com/paper/on-rational-entailment-for-propositional
Repo
Framework
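Since the three entailment relations studied above are each built on Lehmann and Magidor's rational closure, a small sketch of the standard rational closure ranking for KLM-style conditionals may help. This is the classical construction for plain conditional knowledge bases, not the PTL-specific machinery of the paper, and the bird/penguin knowledge base is purely illustrative.

```python
from itertools import product

ATOMS = ["bird", "penguin", "flies"]

def models(formulas):
    """Yield every truth assignment over ATOMS satisfying all the formulas."""
    for values in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if all(f(v) for f in formulas):
            yield v

def exceptional(antecedent, conditionals):
    """An antecedent A is exceptional w.r.t. a set of conditionals if the
    materializations (A_i -> B_i) classically entail not-A."""
    materializations = [lambda v, a=a, b=b: (not a(v)) or b(v) for a, b in conditionals]
    return all(not antecedent(v) for v in models(materializations))

def rational_closure_ranks(conditionals):
    """Lehmann-Magidor ranking: repeatedly peel off the non-exceptional conditionals."""
    ranks, current, level = {}, dict(conditionals), 0
    while current:
        lower = {name: c for name, c in current.items()
                 if exceptional(c[0], list(current.values()))}
        for name in current:
            if name not in lower:
                ranks[name] = level
        if len(lower) == len(current):          # leftover conditionals get infinite rank
            ranks.update({name: float("inf") for name in lower})
            break
        current, level = lower, level + 1
    return ranks

# Classic example: birds typically fly, penguins typically do not, penguins are birds.
kb = {
    "bird |~ flies":        (lambda v: v["bird"],    lambda v: v["flies"]),
    "penguin |~ not flies": (lambda v: v["penguin"], lambda v: not v["flies"]),
    "penguin |~ bird":      (lambda v: v["penguin"], lambda v: v["bird"]),
}
print(rational_closure_ranks(kb))   # bird |~ flies -> 0, both penguin conditionals -> 1
```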

Identification of multi-scale hierarchical brain functional networks using deep matrix factorization

Title Identification of multi-scale hierarchical brain functional networks using deep matrix factorization
Authors Hongming Li, Xiaofeng Zhu, Yong Fan
Abstract We present a deep semi-nonnegative matrix factorization method for identifying subject-specific functional networks (FNs) at multiple spatial scales with a hierarchical organization from resting state fMRI data. Our method is built upon a deep semi-nonnegative matrix factorization framework to jointly detect the FNs at multiple scales with a hierarchical organization, enhanced by group sparsity regularization that helps identify subject-specific FNs without loss of inter-subject comparability. The proposed method has been validated for predicting subject-specific functional activations based on functional connectivity measures of the hierarchical multi-scale FNs of the same subjects. Experimental results have demonstrated that our method could obtain subject-specific multi-scale hierarchical FNs and their functional connectivity measures across different scales could better predict subject-specific functional activations than those obtained by alternative techniques.
Tasks
Published 2018-09-14
URL http://arxiv.org/abs/1809.05557v1
PDF http://arxiv.org/pdf/1809.05557v1.pdf
PWC https://paperswithcode.com/paper/identification-of-multi-scale-hierarchical
Repo
Framework
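For orientation, here is a heavily simplified sketch of semi-nonnegative matrix factorization applied layer-wise twice, giving a two-scale hierarchy of nonnegative loadings. The multiplicative update follows the standard semi-NMF formulation from the literature; the group sparsity regularization and the joint multi-layer fine-tuning used in the paper are omitted, and all array sizes are toy values.

```python
import numpy as np

def _pos(A): return (np.abs(A) + A) / 2
def _neg(A): return (np.abs(A) - A) / 2

def semi_nmf(X, k, n_iter=200, eps=1e-9):
    """Semi-NMF: X ~ W @ H with H >= 0 and W unconstrained (Ding-style updates)."""
    H = np.abs(np.random.default_rng(0).standard_normal((k, X.shape[1])))
    for _ in range(n_iter):
        W = X @ H.T @ np.linalg.pinv(H @ H.T)                  # least-squares W update
        WtX, WtW = W.T @ X, W.T @ W
        H *= np.sqrt((_pos(WtX) + _neg(WtW) @ H) /
                     (_neg(WtX) + _pos(WtW) @ H + eps))        # multiplicative H update
    return W, H

# Greedy layer-wise "deep" factorization: X ~ W1 @ W2 @ H2 with nonnegative H's.
X = np.random.default_rng(1).standard_normal((100, 400))   # e.g. voxels x time points (toy)
W1, H1 = semi_nmf(X, k=20)    # finer-scale functional networks
W2, H2 = semi_nmf(H1, k=5)    # coarser networks built on top of the first layer
print(W1.shape, W2.shape, H2.shape)
```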

Mixture of Regression Experts in fMRI Encoding

Title Mixture of Regression Experts in fMRI Encoding
Authors Subba Reddy Oota, Adithya Avvaru, Naresh Manwani, Raju S. Bapi
Abstract Linguistic encoding models for fMRI semantic category understanding attempt to learn a forward mapping that relates stimuli to the corresponding brain activation. Classical encoding models use linear multivariate methods to predict the brain activation (all voxels) given the stimulus. However, these methods essentially treat multiple regions as one large uniform region or as several independent regions, ignoring connections among them. In this paper, we present a mixture-of-experts model in which a group of experts captures brain activity patterns related to particular regions of interest (ROIs), and we also show the discrimination across different experts. The model is trained with word stimuli encoded as 25-dimensional feature vectors as input and the corresponding brain responses as output. Given a new word (25-dimensional feature vector), it predicts the entire brain activation as a linear combination of the experts’ brain activations. We argue that each expert learns a certain region of brain activations corresponding to its category of words, which solves the problem of identifying the regions with a simple encoding model. We show that the proposed mixture-of-experts model indeed learns region-based experts to predict the brain activations with high spatial accuracy.
Tasks
Published 2018-11-26
URL http://arxiv.org/abs/1811.10740v2
PDF http://arxiv.org/pdf/1811.10740v2.pdf
PWC https://paperswithcode.com/paper/mixture-of-regression-experts-in-fmri
Repo
Framework
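The sketch below implements a generic mixture of linear regression experts with softmax gating, trained by gradient descent on synthetic data. The 25-dimensional word features match the abstract; the gating form, the training procedure, and all array sizes are assumptions rather than the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D_in, D_out, K = 200, 25, 100, 4   # words, feature dim, voxels, experts (toy sizes)

X = rng.standard_normal((N, D_in))
Y = rng.standard_normal((N, D_out))   # stand-in for measured brain responses

V = 0.01 * rng.standard_normal((D_in, K))          # gating parameters
W = 0.01 * rng.standard_normal((K, D_in, D_out))   # one linear expert per ROI-like group

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 0.05
for step in range(500):
    G = softmax(X @ V)                       # (N, K) gate weights
    E = np.einsum("nd,kde->nke", X, W)       # (N, K, D_out) expert predictions
    pred = np.einsum("nk,nke->ne", G, E)     # gated linear combination of experts
    R = pred - Y                             # residuals
    # Gradients of the mean squared error w.r.t. experts and gates.
    gW = np.einsum("nd,nk,ne->kde", X, G, R) / N
    gG = np.einsum("ne,nke->nk", R, E)
    gZ = G * (gG - (gG * G).sum(axis=1, keepdims=True))   # softmax backprop
    gV = X.T @ gZ / N
    W -= lr * gW
    V -= lr * gV

print("final MSE:", np.mean(R ** 2))
```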

Local Rule-Based Explanations of Black Box Decision Systems

Title Local Rule-Based Explanations of Black Box Decision Systems
Authors Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Dino Pedreschi, Franco Turini, Fosca Giannotti
Abstract Recent years have witnessed the rise of accurate but obscure decision systems that hide the logic of their internal decision processes from users. The lack of explanations for the decisions of black box systems is a key ethical issue and a limitation to the adoption of machine learning components in socially sensitive and safety-critical contexts. In this paper we focus on the problem of black box outcome explanation, i.e., explaining the reasons for the decision taken on a specific instance. We propose LORE, a model-agnostic method able to provide interpretable and faithful explanations. LORE first learns a local interpretable predictor on a synthetic neighborhood generated by a genetic algorithm. It then derives, from the logic of the local interpretable predictor, a meaningful explanation consisting of: a decision rule, which explains the reasons for the decision; and a set of counterfactual rules, suggesting the changes in the instance’s features that would lead to a different outcome. Extensive experiments show that LORE outperforms existing methods and baselines both in the quality of explanations and in the accuracy of mimicking the black box.
Tasks
Published 2018-05-28
URL http://arxiv.org/abs/1805.10820v1
PDF http://arxiv.org/pdf/1805.10820v1.pdf
PWC https://paperswithcode.com/paper/local-rule-based-explanations-of-black-box
Repo
Framework
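A compressed sketch of the LORE idea follows: sample a synthetic neighborhood around the instance, fit an interpretable surrogate on the black box's labels, and read off a rule and a counterfactual. Two simplifications to flag: Gaussian perturbation stands in for the paper's genetic-algorithm neighborhood, and the nearest differently-labelled neighbor stands in for the full set of counterfactual rules.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# A stand-in "black box" trained on synthetic data (two features, two classes).
X = rng.standard_normal((500, 2))
y = (X[:, 0] + X[:, 1] ** 2 > 0.5).astype(int)
black_box = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def local_explanation(x, black_box, n_samples=1000, sigma=0.5):
    """LORE-style local explanation; Gaussian sampling replaces the genetic algorithm."""
    Z = x + sigma * rng.standard_normal((n_samples, x.shape[0]))   # synthetic neighborhood
    yz = black_box.predict(Z)
    surrogate = DecisionTreeClassifier(max_depth=3).fit(Z, yz)     # local interpretable predictor
    # Decision rule: the surrogate-tree path followed by the instance x.
    path = surrogate.decision_path(x.reshape(1, -1)).indices
    tree = surrogate.tree_
    rule = [(tree.feature[n], "<=" if x[tree.feature[n]] <= tree.threshold[n] else ">",
             round(float(tree.threshold[n]), 3))
            for n in path if tree.feature[n] >= 0]
    # Counterfactual: closest synthetic neighbour that the black box labels differently.
    own = black_box.predict(x.reshape(1, -1))[0]
    diff = Z[yz != own]
    counterfactual = diff[np.argmin(np.linalg.norm(diff - x, axis=1))] if len(diff) else None
    return rule, counterfactual

rule, cf = local_explanation(np.array([0.2, 0.1]), black_box)
print("decision rule:", rule, "\nnearest counterfactual:", cf)
```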

Error Correction Maximization for Deep Image Hashing

Title Error Correction Maximization for Deep Image Hashing
Authors Xiang Xu, Xiaofang Wang, Kris M. Kitani
Abstract We propose to use the concept of the Hamming bound to derive the optimal criteria for learning hash codes with a deep network. In particular, when the number of binary hash codes (typically the number of image categories) and code length are known, it is possible to derive an upper bound on the minimum Hamming distance between the hash codes. This upper bound can then be used to define the loss function for learning hash codes. By encouraging the margin (minimum Hamming distance) between the hash codes of different image categories to match the upper bound, we are able to learn theoretically optimal hash codes. Our experiments show that our method significantly outperforms competing deep learning-based approaches and obtains top performance on benchmark datasets.
Tasks
Published 2018-08-06
URL http://arxiv.org/abs/1808.01942v1
PDF http://arxiv.org/pdf/1808.01942v1.pdf
PWC https://paperswithcode.com/paper/error-correction-maximization-for-deep-image
Repo
Framework
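The abstract's recipe, deriving a target minimum Hamming distance from the Hamming (sphere-packing) bound and pushing class codes at least that far apart, can be sketched as below. The hinge-style penalty on relaxed +/-1 codes and the specific code length are assumptions; only the use of the bound to set the margin is taken from the abstract.

```python
import numpy as np
from math import comb

def hamming_bound_margin(n_bits, n_classes):
    """Largest d such that n_classes codewords of length n_bits can have minimum
    distance d under the sphere-packing bound: K * V(n, floor((d-1)/2)) <= 2^n."""
    best = 1
    for d in range(1, n_bits + 1):
        t = (d - 1) // 2
        volume = sum(comb(n_bits, i) for i in range(t + 1))
        if n_classes * volume <= 2 ** n_bits:
            best = d
    return best

def margin_loss(centers, margin):
    """Hinge penalty on pairs of relaxed (+/-1) class hash centers whose
    Hamming distance falls below the target margin."""
    n, n_bits = centers.shape
    dists = (n_bits - centers @ centers.T) / 2   # Hamming distance for +/-1 codes
    loss, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            loss += max(0.0, margin - dists[i, j])
            pairs += 1
    return loss / pairs

margin = hamming_bound_margin(n_bits=48, n_classes=10)
centers = np.sign(np.random.default_rng(0).standard_normal((10, 48)))   # toy class codes
print("target margin:", margin, "loss:", margin_loss(centers, margin))
```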

Solar Potential Analysis of Rooftops Using Satellite Imagery

Title Solar Potential Analysis of Rooftops Using Satellite Imagery
Authors Akash Kumar
Abstract Solar energy is one of the most important sources of renewable energy and the cleanest form of energy. In India, the solar potential amounts to trillions of kilowatt-hours per year, yet actual generation remains only in the gigawatt range. Many people are not aware of the solar potential of their rooftop and assume that installing solar panels is too expensive. In this work, we introduce an approach that remotely generates a report of a building’s solar potential using only its latitude and longitude. We further evaluated various types of rooftops to make our solution more robust. We also estimate the approximate rooftop area usable for solar panel placement and provide a visual analysis of how panels can be placed to maximize solar power output at a location.
Tasks
Published 2018-12-30
URL https://arxiv.org/abs/1812.11606v2
PDF https://arxiv.org/pdf/1812.11606v2.pdf
PWC https://paperswithcode.com/paper/solar-potential-analysis-of-rooftops-using
Repo
Framework

Fused Gromov-Wasserstein distance for structured objects: theoretical foundations and mathematical properties

Title Fused Gromov-Wasserstein distance for structured objects: theoretical foundations and mathematical properties
Authors Titouan Vayer, Laetitia Chapel, Rémi Flamary, Romain Tavenard, Nicolas Courty
Abstract Optimal transport theory has recently found many applications in machine learning thanks to its capacity for comparing various machine learning objects considered as distributions. The Kantorovitch formulation, leading to the Wasserstein distance, focuses on the features of the elements of the objects but treats them independently, whereas the Gromov-Wasserstein distance focuses only on the relations between the elements, depicting the structure of the object yet discarding its features. In this paper we propose to extend these distances in order to encode both feature and structure information simultaneously, resulting in the Fused Gromov-Wasserstein distance. We develop the mathematical framework for this novel distance, prove its metric and interpolation properties, and provide a concentration result for the convergence of finite samples. We also illustrate and interpret its use in various contexts where structured objects are involved.
Tasks
Published 2018-11-07
URL http://arxiv.org/abs/1811.02834v1
PDF http://arxiv.org/pdf/1811.02834v1.pdf
PWC https://paperswithcode.com/paper/fused-gromov-wasserstein-distance-for
Repo
Framework
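To make the construction above concrete, the following sketch evaluates the fused Gromov-Wasserstein objective for a fixed coupling: a feature (Wasserstein) term plus a structure (Gromov-Wasserstein) term weighted by a trade-off alpha. Solving for the optimal coupling (for example by conditional gradient) is omitted, and the random features and structure matrices are illustrative only.

```python
import numpy as np

def fgw_cost(pi, M, C1, C2, alpha=0.5):
    """Fused Gromov-Wasserstein objective for a fixed coupling pi."""
    feature_term = np.sum(M * pi)                              # Wasserstein part
    L = (C1[:, None, :, None] - C2[None, :, None, :]) ** 2     # pairwise structure mismatch
    structure_term = np.einsum("ijkl,ij,kl->", L, pi, pi)      # Gromov-Wasserstein part
    return (1 - alpha) * feature_term + alpha * structure_term

rng = np.random.default_rng(0)
n1, n2 = 5, 6
f1, f2 = rng.standard_normal((n1, 3)), rng.standard_normal((n2, 3))   # node features
M = np.linalg.norm(f1[:, None] - f2[None, :], axis=2) ** 2            # feature ground cost
C1 = np.abs(rng.standard_normal((n1, n1))); C1 = (C1 + C1.T) / 2      # structure matrices
C2 = np.abs(rng.standard_normal((n2, n2))); C2 = (C2 + C2.T) / 2

pi = np.full((n1, n2), 1.0 / (n1 * n2))   # independent (product) coupling as a baseline
print("FGW objective at the product coupling:", fgw_cost(pi, M, C1, C2, alpha=0.5))
```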

Tensor graph convolutional neural network

Title Tensor graph convolutional neural network
Authors Tong Zhang, Wenming Zheng, Zhen Cui, Yang Li
Abstract In this paper, we propose a novel tensor graph convolutional neural network (TGCNN) to perform convolution on factorizable graphs, focusing on two types of problems: sequential dynamic graphs and cross-attribute graphs. In particular, we propose a graph-preserving layer, comprising cross-graph convolution and graph pooling, to memorize the salient nodes of the factorized subgraphs. For cross-graph convolution, a parameterized Kronecker sum operation is proposed to generate a conjunctive adjacency matrix characterizing the relationship between every pair of nodes across two subgraphs. With this operation, general graph convolution can then be performed efficiently through the composition of small matrices, which reduces the memory and computational burden. By encapsulating sequential graphs in a recursive learning scheme, the dynamics of the graphs can be efficiently encoded along with their spatial layout. To validate the proposed TGCNN, experiments are conducted on skeleton action datasets as well as a matrix completion dataset. The experimental results demonstrate that our method achieves performance competitive with state-of-the-art methods.
Tasks Matrix Completion
Published 2018-03-27
URL http://arxiv.org/abs/1803.10071v1
PDF http://arxiv.org/pdf/1803.10071v1.pdf
PWC https://paperswithcode.com/paper/tensor-graph-convolutional-neural-network
Repo
Framework
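The parameterized Kronecker sum mentioned above can be sketched directly; the scalar weights on the two terms below are an assumed stand-in for the paper's learned parameterization. The second half of the sketch shows why the factorization matters: the same graph-convolution product can be computed from the small factor matrices without ever materializing the large conjunctive adjacency.

```python
import numpy as np

def parameterized_kronecker_sum(A1, A2, alpha=1.0, beta=1.0):
    """Conjunctive adjacency over all pairs of nodes across two subgraphs:
       alpha * kron(A1, I) + beta * kron(I, A2). The scalar weights alpha, beta
       are an assumed stand-in for the learned parameterization in the paper."""
    n1, n2 = A1.shape[0], A2.shape[0]
    return alpha * np.kron(A1, np.eye(n2)) + beta * np.kron(np.eye(n1), A2)

rng = np.random.default_rng(0)
A1 = (rng.random((4, 4)) > 0.5).astype(float); A1 = np.maximum(A1, A1.T)
A2 = (rng.random((3, 3)) > 0.5).astype(float); A2 = np.maximum(A2, A2.T)

A_cross = parameterized_kronecker_sum(A1, A2)       # 12 x 12 adjacency over node pairs
X = rng.standard_normal((12, 8))                    # features on the 4*3 product nodes
W = rng.standard_normal((8, 16))
H = A_cross @ X @ W                                 # one graph-convolution step

# Same computation from the small factors alone, never forming the 12 x 12 matrix:
Xr = X.reshape(4, 3, 8)
AX = np.einsum("ik,kjd->ijd", A1, Xr) + np.einsum("jl,ild->ijd", A2, Xr)
H_factored = AX.reshape(12, 8) @ W
print(np.allclose(H, H_factored))                   # True
```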

When is there a Representer Theorem? Nondifferentiable Regularisers and Banach spaces

Title When is there a Representer Theorem? Nondifferentiable Regularisers and Banach spaces
Authors Kevin Schlegel
Abstract We consider a general regularised interpolation problem for learning a parameter vector from data. The well known representer theorem says that under certain conditions on the regulariser there exists a solution in the linear span of the data points. This is the core of kernel methods in machine learning as it makes the problem computationally tractable. Necessary and sufficient conditions for differentiable regularisers on Hilbert spaces to admit a representer theorem have been proved. We extend those results to nondifferentiable regularisers on uniformly convex and uniformly smooth Banach spaces. This gives a (more) complete answer to the question when there is a representer theorem. We then note that for regularised interpolation in fact the solution is determined by the function space alone and independent of the regulariser, making the extension to Banach spaces even more valuable.
Tasks
Published 2018-04-25
URL http://arxiv.org/abs/1804.09605v1
PDF http://arxiv.org/pdf/1804.09605v1.pdf
PWC https://paperswithcode.com/paper/when-is-there-a-representer-theorem
Repo
Framework
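For readers new to the topic, the classical Hilbert-space statement being generalized is the following; the paper's contribution concerns the analogous statement for nondifferentiable regularisers on uniformly convex, uniformly smooth Banach spaces, which is not reproduced here.

```latex
% Regularised interpolation in an RKHS H with kernel k (classical Hilbert-space case):
% minimise a regulariser of the norm subject to interpolating the data.
% The representer theorem states that a solution lies in the span of the data points:
\min_{f \in \mathcal{H}} \ \Omega\bigl(\lVert f \rVert_{\mathcal{H}}\bigr)
\quad \text{s.t.} \quad f(x_i) = y_i, \ i = 1,\dots,n,
\qquad \Longrightarrow \qquad
f^{*}(\cdot) = \sum_{i=1}^{n} c_i \, k(x_i, \cdot).
```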

An Extended Beta-Elliptic Model and Fuzzy Elementary Perceptual Codes for Online Multilingual Writer Identification using Deep Neural Network

Title An Extended Beta-Elliptic Model and Fuzzy Elementary Perceptual Codes for Online Multilingual Writer Identification using Deep Neural Network
Authors Thameur Dhieb, Sourour Njah, Houcine Boubaker, Wael Ouarda, Mounir Ben Ayed, Adel M. Alimi
Abstract The ability to identify a document’s author opens up more opportunities for using the document for various purposes. In this paper, we present a new, effective biometric writer identification system based on online handwriting. The system first preprocesses and segments the online handwriting into a sequence of Beta strokes. Then, from each stroke, we extract a set of static and dynamic features using a newly proposed model that we call the Extended Beta-Elliptic model and using Fuzzy Elementary Perceptual Codes. Next, all segments composed of N consecutive strokes are categorized into groups and subgroups according to their position and geometric characteristics. Finally, a Deep Neural Network is used as the classifier. Experimental results show that the proposed system achieves competitive results compared with existing writer identification systems on Latin and Arabic scripts.
Tasks
Published 2018-04-16
URL http://arxiv.org/abs/1804.05661v4
PDF http://arxiv.org/pdf/1804.05661v4.pdf
PWC https://paperswithcode.com/paper/an-extended-beta-elliptic-model-and-fuzzy
Repo
Framework

Cross-situational learning of large lexicons with finite memory

Title Cross-situational learning of large lexicons with finite memory
Authors James Holehouse, Richard A. Blythe
Abstract Cross-situational word learning, wherein a learner combines information about possible meanings of a word across multiple exposures, has previously been shown to be a very powerful strategy to acquire a large lexicon in a short time. However, this success may derive from idealizations that are made when modeling the word-learning process. In particular, an earlier model assumed that a learner could perfectly recall all previous instances of a word’s use and the inferences that were drawn about its meaning. In this work, we relax this assumption and determine the performance of a model cross-situational learner who forgets word-meaning associations over time. Our main finding is that it is possible for this learner to acquire a human-scale lexicon by adulthood with word-exposure and memory-decay rates that are consistent with empirical research on childhood word learning, as long as the degree of referential uncertainty is not too high or the learner employs a mutual exclusivity constraint. Our findings therefore suggest that successful word learning does not necessarily demand either highly accurate long-term tracking of word and meaning statistics or hypothesis-testing strategies.
Tasks
Published 2018-09-28
URL http://arxiv.org/abs/1809.11047v1
PDF http://arxiv.org/pdf/1809.11047v1.pdf
PWC https://paperswithcode.com/paper/cross-situational-learning-of-large-lexicons
Repo
Framework
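A toy simulation of a forgetful cross-situational learner is sketched below: each exposure adds co-occurrence evidence for the true meaning plus a few confounders, and all stored associations decay at a constant rate. The lexicon size, referential uncertainty, and decay rate are arbitrary illustrative values, not the empirically calibrated rates discussed in the paper, and no mutual exclusivity constraint is applied.

```python
import numpy as np

rng = np.random.default_rng(0)
W = 200         # lexicon size (toy; real lexicons are tens of thousands of words)
C = 5           # referential uncertainty: confounding meanings per exposure
decay = 0.995   # per-step retention of stored associations (memory decay)
steps = 20000

# Association strengths between each word and every candidate meaning.
assoc = np.zeros((W, W))

for _ in range(steps):
    word = rng.integers(W)
    target = word                                   # true meaning (same index, for simplicity)
    confounders = rng.choice(W, size=C, replace=False)
    assoc *= decay                                  # forgetting of all stored associations
    assoc[word, target] += 1.0                      # evidence for the true meaning
    assoc[word, confounders] += 1.0                 # spurious evidence from the context

learned = np.argmax(assoc, axis=1) == np.arange(W)
exposed = assoc.max(axis=1) > 0
print(f"words learned: {learned.sum()} / {exposed.sum()} exposed")
```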

Some techniques in density estimation

Title Some techniques in density estimation
Authors Hassan Ashtiani, Abbas Mehrabian
Abstract Density estimation is an interdisciplinary topic at the intersection of statistics, theoretical computer science and machine learning. We review some old and new techniques for bounding the sample complexity of estimating densities of continuous distributions, focusing on the class of mixtures of Gaussians and its subclasses. In particular, we review the main techniques used to prove the new sample complexity bounds for mixtures of Gaussians by Ashtiani, Ben-David, Harvey, Liaw, Mehrabian, and Plan arXiv:1710.05209.
Tasks Density Estimation
Published 2018-01-11
URL http://arxiv.org/abs/1801.04003v2
PDF http://arxiv.org/pdf/1801.04003v2.pdf
PWC https://paperswithcode.com/paper/some-techniques-in-density-estimation
Repo
Framework