May 6, 2019


Paper Group ANR 241

Posterior Dispersion Indices. STDP allows close-to-optimal spatiotemporal spike pattern detection by single coincidence detector neurons. Supervised Incremental Hashing. Mysteries of Visual Experience. Digital Makeup from Internet Images. Training with Exploration Improves a Greedy Stack-LSTM Parser. A Conceptual Development of Quench Prediction Ap …

Posterior Dispersion Indices

Title Posterior Dispersion Indices
Authors Alp Kucukelbir, David M. Blei
Abstract Probabilistic modeling is cyclical: we specify a model, infer its posterior, and evaluate its performance. Evaluation drives the cycle, as we revise our model based on how it performs. This requires a metric. Traditionally, predictive accuracy prevails. Yet, predictive accuracy does not tell the whole story. We propose to evaluate a model through posterior dispersion. The idea is to analyze how each datapoint fares in relation to posterior uncertainty around the hidden structure. We propose a family of posterior dispersion indices (PDI) that capture this idea. A PDI identifies rich patterns of model mismatch in three real data examples: voting preferences, supermarket shopping, and population genetics.
Tasks
Published 2016-05-24
URL http://arxiv.org/abs/1605.07604v1
PDF http://arxiv.org/pdf/1605.07604v1.pdf
PWC https://paperswithcode.com/paper/posterior-dispersion-indices
Repo
Framework
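One hedged way to instantiate a PDI, assuming posterior samples are available: score each datapoint by the dispersion of its log-likelihood across the posterior relative to its mean. The variance-to-mean ratio below is an illustrative choice, not necessarily the paper's exact index:

```python
import numpy as np

def pdi(log_lik):
    """Posterior dispersion index per datapoint.

    log_lik: array of shape (S, N) holding log p(x_n | theta_s) for S
    posterior samples and N datapoints. Returns the variance-to-mean
    ratio of each datapoint's log-likelihood under the posterior;
    large values flag points that sit awkwardly under the model.
    """
    mean = log_lik.mean(axis=0)
    var = log_lik.var(axis=0)
    return var / np.abs(mean)

# Toy example: Gaussian likelihood, posterior samples of the mean.
rng = np.random.default_rng(0)
theta = rng.normal(0.0, 0.1, size=500)              # posterior samples
x = np.array([0.0, 0.1, 3.0])                       # last point mismatched
log_lik = -0.5 * (x[None, :] - theta[:, None])**2 - 0.5 * np.log(2 * np.pi)
scores = pdi(log_lik)                               # highest for x = 3.0
```

The mismatched datapoint gets the largest index, which is the pattern-of-mismatch behaviour the abstract describes.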

STDP allows close-to-optimal spatiotemporal spike pattern detection by single coincidence detector neurons

Title STDP allows close-to-optimal spatiotemporal spike pattern detection by single coincidence detector neurons
Authors Timothée Masquelier
Abstract By recording multiple cells simultaneously, electrophysiologists have found evidence for repeating spatiotemporal spike patterns, which can carry information. How this information is extracted by downstream neurons is unclear. In this theoretical paper, we investigate to what extent a single cell could detect a given spike pattern and what the optimal parameters to do so are, in particular the membrane time constant $\tau$. Using a leaky integrate-and-fire (LIF) neuron with instantaneous synapses and homogeneous Poisson input, we were able to compute this optimum analytically. Our results indicate that a relatively small $\tau$ (at most a few tens of ms) is usually optimal, even when the pattern is much longer. This is somewhat counterintuitive, as the resulting detector ignores most of the pattern, due to its fast memory decay. Next, we wondered if spike-timing-dependent plasticity (STDP) could enable a neuron to reach the theoretical optimum. We simulated a LIF neuron equipped with additive spike-timing-dependent potentiation and homeostatic rate-based depression, and repeatedly exposed it to a given input spike pattern. As in previous studies, the LIF progressively became selective to the repeating pattern with no supervision, even when the pattern was embedded in Poisson activity. Here we show that, using certain STDP parameters, the resulting pattern detector can be optimal. Taken together, these results may explain how humans can learn repeating visual or auditory sequences. Long sequences could be recognized thanks to coincidence detectors working at a much shorter timescale. This is consistent with the fact that recognition is still possible if a sound sequence is compressed, played backward, or scrambled using 10ms bins. Coincidence detection is a simple yet powerful mechanism, which could be the main function of neurons in the brain.
Tasks
Published 2016-10-24
URL http://arxiv.org/abs/1610.07355v2
PDF http://arxiv.org/pdf/1610.07355v2.pdf
PWC https://paperswithcode.com/paper/stdp-allows-close-to-optimal-spatiotemporal
Repo
Framework
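A minimal sketch of the model class involved: a LIF membrane with instantaneous synapses, showing why a small membrane time constant favors coincidence detection. The exact parameters and the STDP learning rule of the paper are omitted; this only traces the membrane potential, without a firing threshold:

```python
import numpy as np

def lif_response(spike_times, weights, tau, t_end, dt=1e-4):
    """Membrane potential trace of a leaky integrator with instantaneous
    synapses: each input spike at time t_i adds its weight w_i to the
    potential, which then decays exponentially with time constant tau.
    A hedged sketch of the model class in the paper, not its exact setup.
    """
    n = int(t_end / dt)
    v = np.zeros(n)
    spikes = sorted(zip(spike_times, weights))
    k = 0
    for i in range(1, n):
        v[i] = v[i - 1] * np.exp(-dt / tau)   # leak
        t = i * dt
        while k < len(spikes) and spikes[k][0] <= t:
            v[i] += spikes[k][1]              # instantaneous synapse
            k += 1
    return v
```

With a small tau, only near-coincident spikes sum effectively: temporally spread input barely raises the peak potential, which is why a fast-decaying detector can still recognise a long pattern through its local coincidences.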

Supervised Incremental Hashing

Title Supervised Incremental Hashing
Authors Bahadir Ozdemir, Mahyar Najibi, Larry S. Davis
Abstract We propose an incremental strategy for learning hash functions with kernels for large-scale image search. Our method is based on a two-stage classification framework that treats binary codes as intermediate variables between the feature space and the semantic space. In the first stage of classification, binary codes are considered as class labels by a set of binary SVMs; each corresponds to one bit. In the second stage, binary codes become the input space of a multi-class SVM. Hash functions are learned by an efficient algorithm where the NP-hard problem of finding optimal binary codes is solved via cyclic coordinate descent and SVMs are trained in a parallelized incremental manner. For modifications like adding images from a previously unseen class, we describe an incremental procedure for effective and efficient updates to the previous hash functions. Experiments on three large-scale image datasets demonstrate the effectiveness of the proposed hashing method, Supervised Incremental Hashing (SIH), over the state-of-the-art supervised hashing methods.
Tasks Image Retrieval
Published 2016-04-25
URL http://arxiv.org/abs/1604.07342v2
PDF http://arxiv.org/pdf/1604.07342v2.pdf
PWC https://paperswithcode.com/paper/supervised-incremental-hashing
Repo
Framework
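The NP-hard code-assignment step can be pictured with a toy cyclic coordinate descent: cycle over the bits of the binary code matrix, flip one at a time, and keep any flip that lowers the loss. The loss tying codes to semantic similarity below is a hypothetical stand-in, not the paper's two-stage SVM objective:

```python
import numpy as np

def cyclic_coordinate_descent(B, loss, n_sweeps=5):
    """Greedy bit-flipping over a binary code matrix B (N x K, entries ±1).
    Cycles through coordinates and keeps any single-bit flip that lowers
    the loss -- a toy stand-in for the code-assignment step the paper
    solves with cyclic coordinate descent.
    """
    B = B.copy()
    for _ in range(n_sweeps):
        for i in range(B.shape[0]):
            for k in range(B.shape[1]):
                cur = loss(B)
                B[i, k] *= -1
                if loss(B) >= cur:   # revert flips that don't help
                    B[i, k] *= -1
    return B
```

A natural toy loss asks the code inner products to match a ±1 label-agreement matrix, so same-class items end up with matching codes.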

Mysteries of Visual Experience

Title Mysteries of Visual Experience
Authors Jerome Feldman
Abstract Science is a crowning glory of the human spirit and its applications remain our best hope for social progress. But there are limitations to current science and perhaps to any science. The general mind-body problem is known to be intractable and currently mysterious. This is one of many deep problems that are universally agreed to be beyond the current purview of Science, including quantum phenomena, etc. But all of these famous unsolved problems are either remote from everyday experience (entanglement, dark matter) or are hard to even define sharply (phenomenology, consciousness, etc.). In this note, we will consider some obvious computational problems in vision that arise every time that we open our eyes and yet are demonstrably incompatible with current theories of neural computation. The focus will be on two related phenomena, known as the neural binding problem and the illusion of a detailed stable visual world.
Tasks
Published 2016-04-28
URL http://arxiv.org/abs/1604.08612v4
PDF http://arxiv.org/pdf/1604.08612v4.pdf
PWC https://paperswithcode.com/paper/mysteries-of-visual-experience
Repo
Framework

Digital Makeup from Internet Images

Title Digital Makeup from Internet Images
Authors Asad Khan, Muhammad Ahmad, Yudong Guo, Ligang Liu
Abstract We present a novel approach to color transfer between images that exploits their high-level semantic information. First, we set up a database consisting of images downloaded from the internet, segmented automatically using matting techniques. We then extract image foregrounds from both the source and the multiple target images. Using image matting algorithms, the system extracts semantic regions such as faces, lips, teeth, eyes, and eyebrows from the extracted foreground of the source image, and color is transferred between corresponding parts that carry the same semantic information. Next, we obtain the color-transferred result by seamlessly compositing the different parts together using alpha blending. In the final step, we present an efficient color-consistency method to optimize the color of a collection of images showing a common scene. The main advantage of our method over existing techniques is that it does not need face matching, since more than one target image can be used. It is not restricted to head-shot images, as we can also change the color style in the wild. Moreover, our algorithm does not require the source and target images to share the same color style, pose, or image size. It is not restricted to one-to-one color transfer and can make use of multiple target images to transfer color to different parts of the source image. Compared with other approaches, our algorithm blends colors in the input data much better.
Tasks Image Matting
Published 2016-10-16
URL http://arxiv.org/abs/1610.04861v2
PDF http://arxiv.org/pdf/1610.04861v2.pdf
PWC https://paperswithcode.com/paper/digital-makeup-from-internet-images
Repo
Framework
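The final compositing step the abstract mentions is plain alpha blending; a minimal sketch, assuming a matte in [0, 1] produced by the matting stage:

```python
import numpy as np

def alpha_blend(source, target, alpha):
    """Composite a colour-transferred part onto the source image.
    alpha is a per-pixel matte in [0, 1] (e.g. from image matting):
    1 keeps the recoloured target pixel, 0 keeps the original source.
    """
    alpha = alpha[..., None]          # broadcast the matte over RGB channels
    return alpha * target + (1.0 - alpha) * source
```

Soft matte values near region boundaries are what make the composite appear seamless.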

Training with Exploration Improves a Greedy Stack-LSTM Parser

Title Training with Exploration Improves a Greedy Stack-LSTM Parser
Authors Miguel Ballesteros, Yoav Goldberg, Chris Dyer, Noah A. Smith
Abstract We adapt the greedy Stack-LSTM dependency parser of Dyer et al. (2015) to support a training-with-exploration procedure using dynamic oracles (Goldberg and Nivre, 2013) instead of cross-entropy minimization. This form of training, which accounts for model predictions at training time rather than assuming an error-free action history, improves parsing accuracies for both English and Chinese, obtaining very strong results for both languages. We discuss some modifications needed in order to get training with exploration to work well for a probabilistic neural network.
Tasks Dependency Parsing
Published 2016-03-11
URL http://arxiv.org/abs/1603.03793v2
PDF http://arxiv.org/pdf/1603.03793v2.pdf
PWC https://paperswithcode.com/paper/training-with-exploration-improves-a-greedy
Repo
Framework

A Conceptual Development of Quench Prediction App build on LSTM and ELQA framework

Title A Conceptual Development of Quench Prediction App build on LSTM and ELQA framework
Authors Matej Mertik, Maciej Wielgosz, Andrzej Skoczeń
Abstract This article presents the development of a web application for quench prediction in \gls{te-mpe-ee} at CERN. The authors describe the ELectrical Quality Assurance (ELQA) framework, a platform designed for rapid development of web-integrated data-analysis applications for the different analyses needed during hardware commissioning of the Large Hadron Collider (LHC). In the second part, the article describes research carried out on data collected from the Quench Detection System using an LSTM recurrent neural network. The article then discusses and presents conceptual work on implementing a quench prediction application for \gls{te-mpe-ee} based on ELQA and the quench prediction algorithm.
Tasks
Published 2016-10-25
URL http://arxiv.org/abs/1610.09201v1
PDF http://arxiv.org/pdf/1610.09201v1.pdf
PWC https://paperswithcode.com/paper/a-conceptual-development-of-quench-prediction
Repo
Framework

A Cognitive Architecture for the Implementation of Emotions in Computing Systems

Title A Cognitive Architecture for the Implementation of Emotions in Computing Systems
Authors Jordi Vallverdú, Max Talanov, Salvatore Distefano, Manuel Mazzara, Alexander Tchitchigin, Ildar Nurgaliev
Abstract In this paper we present a new neurobiologically-inspired affective cognitive architecture: NEUCOGAR (NEUromodulating COGnitive ARchitecture). The objective of NEUCOGAR is the identification of a mapping from the influence of serotonin, dopamine and noradrenaline to computing processes based on Von Neumann's architecture, in order to implement affective phenomena that can operate on the Turing machine model. As the basis of the modeling, we use and extend the Lövheim Cube of Emotion with parameters of the Von Neumann architecture. Validation is conducted via simulation, on a computing system, of dopamine neuromodulation and its effects on the cortex. In the experimental phase of the project, the increase of computing power and storage redistribution due to emotion stimulus modulated by the dopamine system confirmed the soundness of the model.
Tasks
Published 2016-06-09
URL http://arxiv.org/abs/1606.02899v1
PDF http://arxiv.org/pdf/1606.02899v1.pdf
PWC https://paperswithcode.com/paper/a-cognitive-architecture-for-the
Repo
Framework

Language to Logical Form with Neural Attention

Title Language to Logical Form with Neural Attention
Authors Li Dong, Mirella Lapata
Abstract Semantic parsing aims at mapping natural language to machine interpretable meaning representations. Traditional approaches rely on high-quality lexicons, manually-built templates, and linguistic features which are either domain- or representation-specific. In this paper we present a general method based on an attention-enhanced encoder-decoder model. We encode input utterances into vector representations, and generate their logical forms by conditioning the output sequences or trees on the encoding vectors. Experimental results on four datasets show that our approach performs competitively without using hand-engineered features and is easy to adapt across domains and meaning representations.
Tasks Semantic Parsing
Published 2016-01-06
URL http://arxiv.org/abs/1601.01280v2
PDF http://arxiv.org/pdf/1601.01280v2.pdf
PWC https://paperswithcode.com/paper/language-to-logical-form-with-neural
Repo
Framework
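The attention mechanism at the heart of the encoder-decoder can be sketched with dot-product scoring; the paper's exact scoring function may differ, but the structure is the same: score each input position against the current decoder state, softmax, and take the weighted sum as the context vector:

```python
import numpy as np

def attention_context(decoder_state, encoder_states):
    """Minimal dot-product attention. encoder_states: (T, D) vectors for
    the T input tokens; decoder_state: (D,) current decoder hidden state.
    Returns the attention weights over positions and the context vector
    used to condition the next output symbol of the logical form.
    """
    scores = encoder_states @ decoder_state          # (T,) alignment scores
    scores -= scores.max()                           # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over positions
    context = weights @ encoder_states               # (D,) context vector
    return weights, context
```

Input positions most similar to the decoder state receive the largest weights, letting the decoder attend to the relevant span of the utterance.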

A Distributed Quaternion Kalman Filter With Applications to Fly-by-Wire Systems

Title A Distributed Quaternion Kalman Filter With Applications to Fly-by-Wire Systems
Authors Sayed Pouria Talebi
Abstract The introduction of automated flight control and management systems has made possible aircraft designs that sacrifice aerodynamic stability in order to incorporate stealth technology into their shape, operate more efficiently, and remain highly maneuverable. Modern flight management systems are therefore reliant on multiple redundant sensors to monitor and control the rotations of the aircraft. To this end, a novel distributed quaternion Kalman filtering algorithm is developed for tracking the rotation and orientation of an aircraft in three-dimensional space. The algorithm distributes computation among the sensors in a manner that forces them to consent to a unique solution while being robust to sensor and link failure, a desirable characteristic for flight management systems. In addition, the underlying quaternion-valued state-space model makes it possible to avoid the problems associated with gimbal lock. The performance of the developed algorithm is verified through simulations.
Tasks
Published 2016-05-15
URL http://arxiv.org/abs/1605.05588v2
PDF http://arxiv.org/pdf/1605.05588v2.pdf
PWC https://paperswithcode.com/paper/a-distributed-quaternion-kalman-filter-with
Repo
Framework
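The quaternion state-space idea, which sidesteps gimbal lock, can be sketched with the prediction step of a quaternion attitude filter. The distributed consensus machinery and the Kalman update of the paper are omitted; this only shows how orientation propagates as a unit quaternion:

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product of two quaternions in [w, x, y, z] order."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def predict(q, omega, dt):
    """State prediction: rotate the current orientation q by the angular
    velocity omega (rad/s, body frame) over dt, then renormalise so the
    state stays a unit quaternion. No Euler angles, hence no gimbal lock.
    """
    angle = np.linalg.norm(omega) * dt
    if angle == 0:
        return q
    axis = omega / np.linalg.norm(omega)
    dq = np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])
    q = quat_mult(q, dq)
    return q / np.linalg.norm(q)
```

A half-turn about the body z-axis, for example, takes the identity orientation [1, 0, 0, 0] to [0, 0, 0, 1].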

ACDC: $α$-Carving Decision Chain for Risk Stratification

Title ACDC: $α$-Carving Decision Chain for Risk Stratification
Authors Yubin Park, Joyce Ho, Joydeep Ghosh
Abstract In many healthcare settings, intuitive decision rules for risk stratification can help effective hospital resource allocation. This paper introduces a novel variant of decision tree algorithms that produces a chain of decisions, not a general tree. Our algorithm, $\alpha$-Carving Decision Chain (ACDC), sequentially carves out “pure” subsets of the majority class examples. The resulting chain of decision rules yields a pure subset of the minority class examples. Our approach is particularly effective in exploring large and class-imbalanced health datasets. Moreover, ACDC provides an interactive interpretation in conjunction with visual performance metrics such as Receiver Operating Characteristics curve and Lift chart.
Tasks
Published 2016-06-16
URL http://arxiv.org/abs/1606.05325v1
PDF http://arxiv.org/pdf/1606.05325v1.pdf
PWC https://paperswithcode.com/paper/acdc-carving-decision-chain-for-risk
Repo
Framework
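The chain-of-rules structure can be illustrated with a toy carving loop: at each step, pick the single feature/threshold rule whose satisfied subset is purest in the majority class, carve those rows off, and continue on the remainder. The paper's $\alpha$-regularised carving criterion is simplified here to raw purity (largest subset on ties):

```python
import numpy as np

def carve_chain(X, y, n_rules=3):
    """Toy decision chain. Returns the list of (feature, threshold, purity)
    rules and the indices of the leftover rows, which skew toward the
    minority class once the pure majority subsets have been carved off.
    """
    rules, idx = [], np.arange(len(y))
    for _ in range(n_rules):
        best = None
        for j in range(X.shape[1]):
            for t in np.unique(X[idx, j]):
                side = idx[X[idx, j] <= t]
                if len(side) == 0 or len(side) == len(idx):
                    continue
                purity = float(np.mean(y[side] == 0))   # majority class = 0
                if best is None or (purity, len(side)) > (best[0], best[1]):
                    best = (purity, len(side), j, t, side)
        if best is None:
            break
        purity, _, j, t, side = best
        rules.append((j, float(t), purity))
        idx = np.setdiff1d(idx, side)                   # carve off the subset
    return rules, idx
```

Each rule reads as an intuitive threshold decision, which is the interpretability the abstract highlights for risk stratification.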

Sparsity-based Color Image Super Resolution via Exploiting Cross Channel Constraints

Title Sparsity-based Color Image Super Resolution via Exploiting Cross Channel Constraints
Authors Hojjat S. Mousavi, Vishal Monga
Abstract Sparsity constrained single image super-resolution (SR) has been of much recent interest. A typical approach involves sparsely representing patches in a low-resolution (LR) input image via a dictionary of example LR patches, and then using the coefficients of this representation to generate the high-resolution (HR) output via an analogous HR dictionary. However, most existing sparse representation methods for super resolution focus on the luminance channel information and do not capture interactions between color channels. In this work, we extend sparsity based super-resolution to multiple color channels by taking color information into account. Edge similarities amongst RGB color bands are exploited as cross channel correlation constraints. These additional constraints lead to a new optimization problem which is not easily solvable; however, a tractable solution is proposed to solve it efficiently. Moreover, to fully exploit the complementary information among color channels, a dictionary learning method is also proposed specifically to learn color dictionaries that encourage edge similarities. Merits of the proposed method over state of the art are demonstrated both visually and quantitatively using image quality metrics.
Tasks Dictionary Learning, Image Super-Resolution, Super-Resolution
Published 2016-10-04
URL http://arxiv.org/abs/1610.01066v1
PDF http://arxiv.org/pdf/1610.01066v1.pdf
PWC https://paperswithcode.com/paper/sparsity-based-color-image-super-resolution
Repo
Framework
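The core sparse-coding SR step can be sketched for a single patch: find a sparse code for the LR patch over the LR dictionary (a greedy matching pursuit here, for simplicity), then reconstruct the HR patch from the coupled HR dictionary with the same code. The cross-channel edge constraints and the color dictionary learning of the paper are omitted:

```python
import numpy as np

def sr_patch(lr_patch, D_lr, D_hr, n_nonzero=3):
    """Reconstruct an HR patch from an LR patch via coupled dictionaries.
    D_lr and D_hr hold corresponding LR/HR example patches as columns;
    the sparse code is found greedily on the LR side and reused on the
    HR side. Illustrative sketch, not the paper's optimization.
    """
    residual = lr_patch.astype(float).copy()
    code = np.zeros(D_lr.shape[1])
    for _ in range(n_nonzero):
        corr = D_lr.T @ residual                     # match atoms to residual
        k = np.argmax(np.abs(corr))
        step = corr[k] / (D_lr[:, k] @ D_lr[:, k])
        code[k] += step
        residual -= step * D_lr[:, k]                # peel off that atom
    return D_hr @ code                               # same code, HR dictionary
```

The assumption doing the work is that LR and HR patches share the same sparse code over their respective dictionaries.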

Morphology Generation for Statistical Machine Translation using Deep Learning Techniques

Title Morphology Generation for Statistical Machine Translation using Deep Learning Techniques
Authors Marta R. Costa-jussà, Carlos Escolano
Abstract Morphology in unbalanced languages remains a big challenge in the context of machine translation. In this paper, we propose to de-couple machine translation from morphology generation in order to better deal with the problem. We investigate the morphology simplification with a reasonable trade-off between expected gain and generation complexity. For the Chinese-Spanish task, optimum morphological simplification is in gender and number. For this purpose, we design a new classification architecture which, compared to other standard machine learning techniques, obtains the best results. This proposed neural-based architecture consists of several layers: an embedding, a convolutional followed by a recurrent neural network and, finally, ends with sigmoid and softmax layers. We obtain classification results over 98% accuracy in gender classification, over 93% in number classification, and an overall translation improvement of 0.7 METEOR.
Tasks Machine Translation
Published 2016-10-07
URL http://arxiv.org/abs/1610.02209v2
PDF http://arxiv.org/pdf/1610.02209v2.pdf
PWC https://paperswithcode.com/paper/morphology-generation-for-statistical-machine-1
Repo
Framework
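The layer stack the abstract describes (embedding, convolutional, recurrent, then sigmoid/softmax outputs) can be sketched at the shape level in plain NumPy. All weights below are placeholders and the layer sizes are assumptions; only the data flow mirrors the described architecture, here for the binary (sigmoid) gender decision:

```python
import numpy as np

def forward(tokens, emb, conv_w, rnn_w, rnn_u, out_w):
    """Embedding -> width-2 1-D convolution -> simple recurrent layer ->
    sigmoid output. tokens: list of token ids; emb: (V, d) embedding
    table; conv_w: (2d, k); rnn_w: (k, H); rnn_u: (H, H); out_w: (H,).
    Returns a probability in (0, 1).
    """
    x = emb[tokens]                                   # (T, d) embeddings
    T, d = x.shape
    # convolution over adjacent token pairs
    conv = np.tanh(np.stack([x[t:t+2].ravel() @ conv_w for t in range(T - 1)]))
    h = np.zeros(rnn_u.shape[0])
    for c in conv:                                    # recurrent layer
        h = np.tanh(c @ rnn_w + h @ rnn_u)
    return 1.0 / (1.0 + np.exp(-(h @ out_w)))         # sigmoid score
```

The number branch of the classifier would end in a softmax over number values instead of the sigmoid shown here.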

Spatial probabilistic pulsatility model for enhancing photoplethysmographic imaging systems

Title Spatial probabilistic pulsatility model for enhancing photoplethysmographic imaging systems
Authors Robert Amelard, David A Clausi, Alexander Wong
Abstract Photoplethysmographic imaging (PPGI) is a widefield non-contact biophotonic technology able to remotely monitor cardiovascular function over anatomical areas. Though spatial context can provide increased physiological insight, existing PPGI systems rely on coarse spatial averaging with no anatomical priors for assessing arterial pulsatility. Here, we developed a continuous probabilistic pulsatility model for importance-weighted blood pulse waveform extraction. Using a data-driven approach, the model was constructed using a 23 participant sample with large demographic variation (11/12 female/male, age 11-60 years, BMI 16.4-35.1 kg$\cdot$m$^{-2}$). Using time-synchronized ground-truth waveforms, spatial correlation priors were computed and projected into a co-aligned importance-weighted Cartesian space. A modified Parzen-Rosenblatt kernel density estimation method was used to compute the continuous resolution-agnostic probabilistic pulsatility model. The model identified locations that consistently exhibited pulsatility across the sample. Blood pulse waveform signals extracted with the model exhibited significantly stronger temporal correlation ($W=35,p<0.01$) and spectral SNR ($W=31,p<0.01$) compared to uniform spatial averaging. Heart rate estimation was in strong agreement with true heart rate ($r^2=0.9619$, error $(\mu,\sigma)=(0.52,1.69)$ bpm).
Tasks Density Estimation, Heart rate estimation
Published 2016-07-27
URL http://arxiv.org/abs/1607.08129v1
PDF http://arxiv.org/pdf/1607.08129v1.pdf
PWC https://paperswithcode.com/paper/spatial-probabilistic-pulsatility-model-for
Repo
Framework
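The importance-weighted Parzen-Rosenblatt construction can be sketched as a kernel density estimate in which each spatial location contributes a Gaussian kernel weighted by its prior correlation with the ground-truth waveform, yielding a continuous, resolution-agnostic pulsatility map. The kernel choice and bandwidth below are assumptions, not the paper's modified estimator:

```python
import numpy as np

def pulsatility_model(locations, correlations, bandwidth=0.05):
    """Build a continuous pulsatility density from (N, 2) normalized
    spatial locations and their (N,) waveform-correlation priors.
    Returns a function mapping any 2-D point to a pulsatility score.
    """
    w = np.clip(correlations, 0.0, None)   # keep non-negative importance
    w = w / w.sum()                        # normalize the weights

    def density(x):
        d2 = ((x[None, :] - locations) ** 2).sum(axis=1)
        k = np.exp(-d2 / (2 * bandwidth ** 2))   # Gaussian kernels
        return float((w * k).sum())

    return density
```

Locations with strong correlation priors dominate the map, which is what lets the model outperform uniform spatial averaging when extracting the blood pulse waveform.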

Modelling Cyber-Security Experts’ Decision Making Processes using Aggregation Operators

Title Modelling Cyber-Security Experts’ Decision Making Processes using Aggregation Operators
Authors Simon Miller, Christian Wagner, Uwe Aickelin, Jonathan M. Garibaldi
Abstract An important role carried out by cyber-security experts is the assessment of proposed computer systems, during their design stage. This task is fraught with difficulties and uncertainty, making the knowledge provided by human experts essential for successful assessment. Today, the increasing number of progressively complex systems has led to an urgent need to produce tools that support the expert-led process of system-security assessment. In this research, we use weighted averages (WAs) and ordered weighted averages (OWAs) with evolutionary algorithms (EAs) to create aggregation operators that model parts of the assessment process. We show how individual overall ratings for security components can be produced from ratings of their characteristics, and how these individual overall ratings can be aggregated to produce overall rankings of potential attacks on a system. As well as the identification of salient attacks and weak points in a prospective system, the proposed method also highlights which factors and security components contribute most to a component’s difficulty and attack ranking respectively. A real world scenario is used in which experts were asked to rank a set of technical attacks, and to answer a series of questions about the security components that are the subject of the attacks. The work shows how finding good aggregation operators, and identifying important components and factors of a cyber-security problem can be automated. The resulting operators have the potential for use as decision aids for systems designers and cyber-security experts, increasing the amount of assessment that can be achieved with the limited resources available.
Tasks Decision Making
Published 2016-08-30
URL http://arxiv.org/abs/1608.08497v1
PDF http://arxiv.org/pdf/1608.08497v1.pdf
PWC https://paperswithcode.com/paper/modelling-cyber-security-experts-decision
Repo
Framework
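The OWA operator at the core of the method is simple to state: sort the inputs in descending order, then take a weighted sum, so weights attach to ranks rather than to particular sources. A minimal sketch (the evolutionary search over the weights is the paper's contribution and is not shown):

```python
import numpy as np

def owa(values, weights):
    """Ordered weighted average of `values` with rank weights `weights`
    (non-negative, summing to 1). Weights [1, 0, ..., 0] recover the max,
    uniform weights recover the mean, [0, ..., 0, 1] the min.
    """
    v = np.sort(np.asarray(values, dtype=float))[::-1]   # descending order
    w = np.asarray(weights, dtype=float)
    assert len(w) == len(v) and np.isclose(w.sum(), 1.0)
    return float(v @ w)
```

Sweeping the weight vector between these extremes is what lets an EA fit the operator to the experts' observed aggregation behaviour.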