Paper Group ANR 431
Monte Carlo Sort for unreliable human comparisons. Storm Detection by Visual Learning Using Satellite Images. Stable Memory Allocation in the Hippocampus: Fundamental Limits and Neural Realization. Generating Simulations of Motion Events from Verbal Descriptions. Environmental Modeling Framework using Stacked Gaussian Processes. Denoising and Covar …
Monte Carlo Sort for unreliable human comparisons
Title | Monte Carlo Sort for unreliable human comparisons |
Authors | Samuel L Smith |
Abstract | Algorithms which sort lists of real numbers into ascending order have been studied for decades. They are typically based on a series of pairwise comparisons and run entirely on chip. However, people routinely sort lists which depend on subjective or complex judgements that cannot be automated. Examples include marketing research, where surveys are used to learn about customer preferences for products; the recruiting process, where interviewers attempt to rank potential employees; and sporting tournaments, where we infer team rankings from a series of one-on-one matches. We develop a novel sorting algorithm, where each pairwise comparison reflects a subjective human judgement about which element is bigger or better. We introduce a finite and large error rate to each judgement, and we take the cost of each comparison to significantly exceed the cost of other computational steps. The algorithm must request the most informative sequence of comparisons from the user, in order to identify the correct sorted list with minimum human input. Our Discrete Adiabatic Monte Carlo approach exploits the gradual acquisition of information by tracking a set of plausible hypotheses, which are updated after each additional comparison. |
Tasks | |
Published | 2016-12-27 |
URL | http://arxiv.org/abs/1612.08555v1 |
http://arxiv.org/pdf/1612.08555v1.pdf | |
PWC | https://paperswithcode.com/paper/monte-carlo-sort-for-unreliable-human |
Repo | |
Framework | |
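As a toy illustration of the idea in this abstract, the sketch below tracks a posterior over all permutations of a short list and reweights it after each (possibly wrong) pairwise judgement. It is an assumption-laden stand-in for the paper's method: it enumerates permutations instead of sampling them Monte Carlo style, and it uses a fixed round-robin query schedule rather than actively choosing the most informative comparison.

```python
import itertools

def noisy_sort(items, compare, error_rate=0.1, rounds=5):
    """Bayesian sorting under unreliable comparisons: keep a posterior over
    every permutation of `items` and reweight it after each judgement.
    Enumerating permutations is tractable only for short lists."""
    perms = list(itertools.permutations(items))
    post = [1.0 / len(perms)] * len(perms)
    for _ in range(rounds):
        for a, b in itertools.combinations(items, 2):
            says_a_first = compare(a, b)  # possibly erroneous human judgement
            # likelihood: a permutation consistent with the judgement gets
            # weight (1 - error_rate), an inconsistent one gets error_rate
            post = [w * ((1 - error_rate)
                         if (p.index(a) < p.index(b)) == says_a_first
                         else error_rate)
                    for p, w in zip(perms, post)]
            z = sum(post)
            post = [w / z for w in post]
    # maximum a posteriori ordering
    return list(max(zip(perms, post), key=lambda t: t[1])[0])
```

With a noiseless oracle (and the model still allowing a 10% error rate), the true ordering quickly dominates the posterior.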
Storm Detection by Visual Learning Using Satellite Images
Title | Storm Detection by Visual Learning Using Satellite Images |
Authors | Yu Zhang, Stephen Wistar, Jia Li, Michael Steinberg, James Z. Wang |
Abstract | Computers are widely utilized in today’s weather forecasting as a powerful tool to leverage an enormous amount of data. Yet, despite the availability of such data, current techniques often fall short of producing reliable detailed storm forecasts. Each year severe thunderstorms cause significant damage and loss of life, some of which could be avoided if better forecasts were available. We propose a computer algorithm that analyzes satellite images from historical archives to locate visual signatures of severe thunderstorms for short-term predictions. While computers are involved in weather forecasts to solve numerical models based on sensory data, they are less competent in forecasting based on visual patterns from satellite images. In our system, we extract and summarize important visual storm evidence from satellite image sequences in the way that meteorologists interpret the images. In particular, the algorithm extracts and fits local cloud motion from image sequences to model the storm-related cloud patches. Image data from 2008 were used to train the model, and historical thunderstorm reports in the continental US from 2000 through 2013 served as the ground truth and priors in the modeling process. Experiments demonstrate the usefulness and potential of the algorithm for producing more accurate thunderstorm forecasts. |
Tasks | Weather Forecasting |
Published | 2016-03-01 |
URL | http://arxiv.org/abs/1603.00146v1 |
http://arxiv.org/pdf/1603.00146v1.pdf | |
PWC | https://paperswithcode.com/paper/storm-detection-by-visual-learning-using |
Repo | |
Framework | |
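The core visual step this abstract describes, extracting local cloud motion from consecutive satellite frames, can be caricatured with exhaustive block matching between two small frames. Everything below (frames as nested lists, the search radius) is illustrative, not from the paper, whose actual motion model is far richer.

```python
def estimate_shift(prev, curr, max_shift=2):
    """Exhaustive-search motion estimate between two small grayscale frames:
    try every integer displacement and keep the one with the lowest mean
    squared difference over the overlapping region."""
    h, w = len(prev), len(prev[0])
    best, best_err = (0, 0), float('inf')
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = n = 0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        err += (prev[y][x] - curr[yy][xx]) ** 2
                        n += 1
            if n and err / n < best_err:
                best_err, best = err / n, (dy, dx)
    return best  # (dy, dx) displacement of the cloud patch
```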
Stable Memory Allocation in the Hippocampus: Fundamental Limits and Neural Realization
Title | Stable Memory Allocation in the Hippocampus: Fundamental Limits and Neural Realization |
Authors | Wenlong Mou, Zhi Wang, Liwei Wang |
Abstract | It is believed that the hippocampus functions as a memory allocator in the brain, but the underlying mechanism remains unknown. In Valiant’s neuroidal model, the hippocampus is described as a randomly connected graph, on which computation maps an input to a set of activated neuroids of stable size. Valiant proposed three requirements for the hippocampal circuit to become a stable memory allocator (SMA): stability, continuity, and orthogonality. According to Valiant’s model, the functionality of the SMA in the hippocampus is essential for further computation within the cortex. In this paper, we put these requirements for memorization functions into a rigorous mathematical formulation and introduce the concept of capacity, based on the probability of erroneous allocation. We prove fundamental limits on the capacity and error probability of an SMA, in both data-independent and data-dependent settings. We also construct an example of a stable memory allocator that can be implemented via neuroidal circuits. Both theoretical bounds and simulation results show that the neural SMA functions well. |
Tasks | |
Published | 2016-12-14 |
URL | http://arxiv.org/abs/1612.04659v1 |
http://arxiv.org/pdf/1612.04659v1.pdf | |
PWC | https://paperswithcode.com/paper/stable-memory-allocation-in-the-hippocampus |
Repo | |
Framework | |
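A minimal sketch of a Valiant-style neuroidal allocator, assuming a random bipartite connection graph and a simple threshold activation rule. The parameter values in the test are arbitrary illustrations, not the capacity-achieving choices the paper analyzes.

```python
import random

def make_allocator(n_inputs, n_neurons, p_connect, threshold, seed=0):
    """Toy memory allocator: a fixed random bipartite graph maps an active
    input set to the set of neuroids receiving at least `threshold` active
    connections. Determinism of the graph gives stability; overlapping
    inputs activating overlapping neuroid sets illustrates continuity."""
    rng = random.Random(seed)
    conns = [frozenset(i for i in range(n_inputs) if rng.random() < p_connect)
             for _ in range(n_neurons)]
    def allocate(active_inputs):
        active = set(active_inputs)
        return {j for j, c in enumerate(conns) if len(c & active) >= threshold}
    return allocate
```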
Generating Simulations of Motion Events from Verbal Descriptions
Title | Generating Simulations of Motion Events from Verbal Descriptions |
Authors | James Pustejovsky, Nikhil Krishnaswamy |
Abstract | In this paper, we describe a computational model for motion events in natural language that maps from linguistic expressions, through a dynamic event interpretation, into three-dimensional temporal simulations in a model. Starting with the model from (Pustejovsky and Moszkowicz, 2011), we analyze motion events using temporally-traced Labelled Transition Systems. We model the distinction between path- and manner-motion in an operational semantics, and further distinguish different types of manner-of-motion verbs in terms of the mereo-topological relations that hold throughout the process of movement. From these representations, we generate minimal models, which are realized as three-dimensional simulations in software developed with the game engine, Unity. The generated simulations act as a conceptual “debugger” for the semantics of different motion verbs: that is, by testing for consistency and informativeness in the model, simulations expose the presuppositions associated with linguistic expressions and their compositions. Because the model generation component is still incomplete, this paper focuses on an implementation which maps directly from linguistic interpretations into the Unity code snippets that create the simulations. |
Tasks | |
Published | 2016-10-06 |
URL | http://arxiv.org/abs/1610.01713v1 |
http://arxiv.org/pdf/1610.01713v1.pdf | |
PWC | https://paperswithcode.com/paper/generating-simulations-of-motion-events-from |
Repo | |
Framework | |
Environmental Modeling Framework using Stacked Gaussian Processes
Title | Environmental Modeling Framework using Stacked Gaussian Processes |
Authors | Kareem Abdelfatah, Junshu Bao, Gabriel Terejanu |
Abstract | A network of independently trained Gaussian processes (StackedGP) is introduced to obtain predictions of quantities of interest with quantified uncertainties. The main applications of the StackedGP framework are to integrate different datasets through model composition, to enhance predictions of quantities of interest through a cascade of intermediate predictions, and to propagate uncertainties through emulated dynamical systems driven by uncertain forcing variables. Using analytical first and second-order moments of a Gaussian process with uncertain inputs, under squared exponential and polynomial kernels, approximate expectations of quantities of interest that require an arbitrary composition of functions can be obtained. The StackedGP model is extended to any number of layers and nodes per layer, and it provides flexibility in kernel selection for the input nodes. The proposed nonparametric stacked model is validated using synthetic datasets, and its performance in model composition and cascading predictions is measured in two applications using real data. |
Tasks | Gaussian Processes |
Published | 2016-12-09 |
URL | http://arxiv.org/abs/1612.02897v2 |
http://arxiv.org/pdf/1612.02897v2.pdf | |
PWC | https://paperswithcode.com/paper/environmental-modeling-framework-using |
Repo | |
Framework | |
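A tiny self-contained illustration of cascading GP predictions, assuming a squared-exponential kernel and passing only the point prediction between layers; the actual StackedGP framework propagates first and second moments of uncertain inputs analytically, which this sketch does not do.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (fine for tiny systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf(x, y, ell=1.0):
    """Squared-exponential (RBF) kernel."""
    return math.exp(-0.5 * ((x - y) / ell) ** 2)

def gp_mean(xs, ys, x_star, noise=1e-8):
    """Posterior mean of a zero-mean GP with RBF kernel at a new input."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    alpha = solve(K, list(ys))
    return sum(alpha[i] * rbf(xs[i], x_star) for i in range(n))
```

Stacking then just means feeding one GP's prediction into the next: `gp_mean(zs, ys, gp_mean(xs, zs, x))`.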
Denoising and Covariance Estimation of Single Particle Cryo-EM Images
Title | Denoising and Covariance Estimation of Single Particle Cryo-EM Images |
Authors | Tejal Bhamre, Teng Zhang, Amit Singer |
Abstract | The problem of image restoration in cryo-EM entails correcting for the effects of the Contrast Transfer Function (CTF) and noise. Popular methods for image restoration include ‘phase flipping’, which corrects only for the Fourier phases but not amplitudes, and Wiener filtering, which requires the spectral signal-to-noise ratio. We propose a new image restoration method which we call ‘Covariance Wiener Filtering’ (CWF). In CWF, the covariance matrix of the projection images is used within the classical Wiener filtering framework for solving the image restoration deconvolution problem. Our estimation procedure for the covariance matrix is new and successfully corrects for the CTF. We demonstrate the efficacy of CWF by applying it to restore both simulated and experimental cryo-EM images. Results with experimental datasets demonstrate that CWF provides a good way to evaluate the particle images and to see what the dataset contains even without 2D classification and averaging. |
Tasks | Denoising, Image Restoration |
Published | 2016-02-22 |
URL | http://arxiv.org/abs/1602.06632v3 |
http://arxiv.org/pdf/1602.06632v3.pdf | |
PWC | https://paperswithcode.com/paper/denoising-and-covariance-estimation-of-single |
Repo | |
Framework | |
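The Wiener principle behind CWF can be shown with a diagonal (per-coefficient) shrinkage rule; the paper's contribution, estimating a full covariance matrix while correcting for the CTF, is not reproduced here, and the helper names below are illustrative.

```python
def wiener_denoise(obs, mean, var, noise_var):
    """Per-coefficient Wiener estimate: shrink each observed coefficient
    toward the prior mean by the signal-to-(signal+noise) variance ratio."""
    return [m + (v / (v + noise_var)) * (o - m)
            for o, m, v in zip(obs, mean, var)]

def estimate_signal_variance(samples, noise_var):
    """Estimate per-coefficient signal variance from noisy samples by
    subtracting the known noise variance from the sample variance."""
    n = len(samples)
    dims = len(samples[0])
    out = []
    for d in range(dims):
        vals = [s[d] for s in samples]
        mu = sum(vals) / n
        sample_var = sum((v - mu) ** 2 for v in vals) / n
        out.append(max(sample_var - noise_var, 0.0))
    return out
```

The two limits behave as expected: with zero signal variance the estimate collapses to the prior mean, and with zero noise it keeps the observation.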
Deep nets for local manifold learning
Title | Deep nets for local manifold learning |
Authors | Charles K. Chui, H. N. Mhaskar |
Abstract | The problem of extending a function $f$ defined on training data $\mathcal{C}$ on an unknown manifold $\mathbb{X}$ to the entire manifold and a tubular neighborhood of this manifold is considered in this paper. For $\mathbb{X}$ embedded in a high dimensional ambient Euclidean space $\mathbb{R}^D$, a deep learning algorithm is developed for finding a local coordinate system for the manifold without eigendecomposition, which reduces the problem to the classical problem of function approximation on a low dimensional cube. Deep nets (or multilayered neural networks) are proposed to accomplish this approximation scheme by using the training data. Our methods do not involve optimization techniques such as backpropagation, while assuring optimal (a priori) error bounds on the output in terms of the number of derivatives of the target function. In addition, these methods are universal, in that they do not require prior knowledge of the smoothness of the target function, but adjust the accuracy of approximation locally and automatically, depending only upon the local smoothness of the target function. Our ideas are easily extended to solve both the pre-image problem and the out-of-sample extension problem, with a priori bounds on the growth of the function thus extended. |
Tasks | |
Published | 2016-07-24 |
URL | http://arxiv.org/abs/1607.07110v1 |
http://arxiv.org/pdf/1607.07110v1.pdf | |
PWC | https://paperswithcode.com/paper/deep-nets-for-local-manifold-learning |
Repo | |
Framework | |
On the Empirical Effect of Gaussian Noise in Under-sampled MRI Reconstruction
Title | On the Empirical Effect of Gaussian Noise in Under-sampled MRI Reconstruction |
Authors | Patrick Virtue, Michael Lustig |
Abstract | In Fourier-based medical imaging, sampling below the Nyquist rate results in an underdetermined system, in which linear reconstructions will exhibit artifacts. Another consequence of under-sampling is lower signal to noise ratio (SNR) due to fewer acquired measurements. Even if an oracle provided the information to perfectly disambiguate the underdetermined system, the reconstructed image could still have lower image quality than a corresponding fully sampled acquisition because of the reduced measurement time. The effects of lower SNR and the underdetermined system are coupled during reconstruction, making it difficult to isolate the impact of lower SNR on image quality. To this end, we present an image quality prediction process that reconstructs fully sampled, fully determined data with noise added to simulate the loss of SNR induced by a given under-sampling pattern. The resulting prediction image empirically shows the effect of noise in under-sampled image reconstruction without any effect from an underdetermined system. We discuss how our image quality prediction process can simulate the distribution of noise for a given under-sampling pattern, including variable density sampling that produces colored noise in the measurement data. An interesting consequence of our prediction model is that we can show that recovery from underdetermined non-uniform sampling is equivalent to a weighted least squares optimization that accounts for heterogeneous noise levels across measurements. Through a series of experiments with synthetic and in vivo datasets, we demonstrate the efficacy of the image quality prediction process and show that it provides a better estimation of reconstruction image quality than the corresponding fully-sampled reference image. |
Tasks | Image Reconstruction |
Published | 2016-10-03 |
URL | http://arxiv.org/abs/1610.00410v1 |
http://arxiv.org/pdf/1610.00410v1.pdf | |
PWC | https://paperswithcode.com/paper/on-the-empirical-effect-of-gaussian-noise-in |
Repo | |
Framework | |
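The equivalence noted at the end of this abstract, recovery under heterogeneous noise as a weighted least squares problem, is easy to demonstrate in a scalar setting. The sketch below fits a line with per-measurement weights (inverse noise variances); the specific weights in the test are illustrative, not the paper's sampling-density weights.

```python
def weighted_least_squares(xs, ys, ws):
    """Closed-form weighted least squares fit of y ~ a*x + b, with weights
    inversely proportional to each measurement's noise variance."""
    W = sum(ws)
    xm = sum(w * x for w, x in zip(ws, xs)) / W  # weighted mean of x
    ym = sum(w * y for w, y in zip(ws, ys)) / W  # weighted mean of y
    sxx = sum(w * (x - xm) ** 2 for w, x in zip(ws, xs))
    sxy = sum(w * (x - xm) * (y - ym) for w, x, y in zip(ws, xs, ys))
    a = sxy / sxx
    b = ym - a * xm
    return a, b
```

On noiseless data the fit is exact for any positive weights, which is a quick sanity check of the closed form.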
Image Restoration: A General Wavelet Frame Based Model and Its Asymptotic Analysis
Title | Image Restoration: A General Wavelet Frame Based Model and Its Asymptotic Analysis |
Authors | Bin Dong, Zuowei Shen, Peichu Xie |
Abstract | Image restoration is one of the most important areas in imaging science. Mathematical tools have been widely used in image restoration, and the wavelet frame based approach is one successful example. In this paper, we introduce a generic wavelet frame based image restoration model, called the “general model”, which includes most of the existing wavelet frame based models as special cases. Moreover, the general model also includes examples that are new to the literature. Motivated by our earlier studies [1-3], we provide an asymptotic analysis of the general model as image resolution goes to infinity, which establishes a connection between the general model in the discrete setting and a new variational model in the continuum setting. The variational model also includes some of the existing variational models as special cases, such as the total generalized variation model proposed by [4]. In the end, we introduce an algorithm solving the general model and present one numerical simulation as an example. |
Tasks | Image Restoration |
Published | 2016-02-17 |
URL | http://arxiv.org/abs/1602.05332v1 |
http://arxiv.org/pdf/1602.05332v1.pdf | |
PWC | https://paperswithcode.com/paper/image-restoration-a-general-wavelet-frame |
Repo | |
Framework | |
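A wavelet-frame restoration model in miniature, using the 1D Haar transform (the simplest frame) and soft thresholding of detail coefficients; the paper's general model subsumes this and much richer constructions, so this is only a sketch of the analysis-then-shrink pattern.

```python
def haar_step(x):
    """One level of the 1D Haar transform: averages (coarse) and details."""
    avg = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    det = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return avg, det

def haar_inverse(avg, det):
    """Perfect reconstruction from one Haar level."""
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out

def soft_threshold(coeffs, t):
    """Shrink detail coefficients toward zero -- the sparsity-promoting
    denoising step in wavelet-frame restoration models."""
    return [max(abs(c) - t, 0.0) * (1 if c >= 0 else -1) for c in coeffs]
```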
Deep Portfolio Theory
Title | Deep Portfolio Theory |
Authors | J. B. Heaton, N. G. Polson, J. H. Witte |
Abstract | We construct a deep portfolio theory. Building on Markowitz’s classic risk-return trade-off, we develop a self-contained four-step routine of encode, calibrate, validate, and verify to formulate an automated and general portfolio selection process. At the heart of our algorithm are deep hierarchical compositions of portfolios constructed in the encoding step. The calibration step then provides multivariate payouts in the form of deep hierarchical portfolios that are designed to target a variety of objective functions. The validate step trades off the amount of regularization used in the encode and calibrate steps. The verification step uses a cross-validation approach to trace out an ex post deep portfolio efficient frontier. We demonstrate all four steps of our portfolio theory numerically. |
Tasks | Calibration |
Published | 2016-05-23 |
URL | http://arxiv.org/abs/1605.07230v2 |
http://arxiv.org/pdf/1605.07230v2.pdf | |
PWC | https://paperswithcode.com/paper/deep-portfolio-theory |
Repo | |
Framework | |
Hawkes Processes with Stochastic Excitations
Title | Hawkes Processes with Stochastic Excitations |
Authors | Young Lee, Kar Wai Lim, Cheng Soon Ong |
Abstract | We propose an extension to Hawkes processes by treating the levels of self-excitation as a stochastic differential equation. Our new point process allows better approximation in application domains where events and intensities accelerate each other with correlated levels of contagion. We generalize a recent algorithm for simulating draws from Hawkes processes whose levels of excitation are stochastic processes, and propose a hybrid Markov chain Monte Carlo approach for model fitting. Our sampling procedure scales linearly with the number of required events and does not require stationarity of the point process. A modular inference procedure consisting of a combination of Gibbs and Metropolis-Hastings steps is put forward. We recover expectation maximization as a special case. Our general approach is illustrated for contagion following geometric Brownian motion and exponential Langevin dynamics. |
Tasks | |
Published | 2016-09-22 |
URL | http://arxiv.org/abs/1609.06831v1 |
http://arxiv.org/pdf/1609.06831v1.pdf | |
PWC | https://paperswithcode.com/paper/hawkes-processes-with-stochastic-excitations |
Repo | |
Framework | |
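Draws from the baseline the paper generalizes, a Hawkes process with constant excitation and exponential kernel, can be simulated with Ogata-style thinning, sketched below. The paper's extension lets the excitation level itself follow a stochastic differential equation, which this constant-`alpha` sketch does not model.

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    """Ogata thinning for a Hawkes process with intensity
    lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i)).
    Between events the intensity only decays, so the intensity evaluated
    just after the current time is a valid thinning upper bound."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while t < T:
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - s)) for s in events)
        t += rng.expovariate(lam_bar)  # candidate event time
        if t >= T:
            break
        lam_t = mu + sum(alpha * math.exp(-beta * (t - s)) for s in events)
        if rng.random() <= lam_t / lam_bar:  # accept with prob lam_t/lam_bar
            events.append(t)
    return events
```

Stability of the simulation requires the usual branching condition `alpha / beta < 1`.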
Quality Adaptive Low-Rank Based JPEG Decoding with Applications
Title | Quality Adaptive Low-Rank Based JPEG Decoding with Applications |
Authors | Xiao Shu, Xiaolin Wu |
Abstract | Small compression noises, despite being transparent to human eyes, can adversely affect the results of many image restoration processes, if left unaccounted for. Especially, compression noises are highly detrimental to inverse operators of high-boosting (sharpening) nature, such as deblurring and superresolution against a convolution kernel. By incorporating the non-linear DCT quantization mechanism into the formulation for image restoration, we propose a new sparsity-based convex programming approach for joint compression noise removal and image restoration. Experimental results demonstrate significant performance gains of the new approach over existing image restoration methods. |
Tasks | Deblurring, Image Restoration, Quantization |
Published | 2016-01-06 |
URL | http://arxiv.org/abs/1601.01339v1 |
http://arxiv.org/pdf/1601.01339v1.pdf | |
PWC | https://paperswithcode.com/paper/quality-adaptive-low-rank-based-jpeg-decoding |
Repo | |
Framework | |
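The quantization mechanism this abstract refers to can be made concrete: given a quantized DCT level, the true coefficient is only known to lie in an interval, which is the hard constraint a quantization-aware decoder can enforce instead of trusting the dequantized point value. The helper names below are hypothetical, not from the paper.

```python
def quantize(coeff, q):
    """JPEG-style scalar quantization of a DCT coefficient with step q."""
    return round(coeff / q)

def dequantize(level, q):
    """Standard decoding: the center of the quantization bin."""
    return level * q

def feasible_interval(level, q):
    """All coefficient values consistent with an observed quantization
    level -- the constraint set a restoration model can project onto."""
    return ((level - 0.5) * q, (level + 0.5) * q)
```

The point value `dequantize(level, q)` is generally wrong by up to `q / 2`; the interval is what the original data actually guarantees.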
Low-rank Matrix Factorization under General Mixture Noise Distributions
Title | Low-rank Matrix Factorization under General Mixture Noise Distributions |
Authors | Xiangyong Cao, Qian Zhao, Deyu Meng, Yang Chen, Zongben Xu |
Abstract | Many computer vision problems can be posed as learning a low-dimensional subspace from high dimensional data. Low rank matrix factorization (LRMF) represents a commonly utilized subspace learning strategy. Most current LRMF techniques are constructed on optimization problems using L1-norm and L2-norm losses, which mainly deal with Laplacian and Gaussian noise, respectively. To make LRMF capable of adapting to more complex noise, this paper proposes a new LRMF model by assuming noise as a Mixture of Exponential Power (MoEP) distributions and proposes a penalized MoEP (PMoEP) model by combining the penalized likelihood method with MoEP distributions. This setting enables the learned LRMF model to automatically fit the real noise through MoEP distributions. Each component in this mixture is adapted from a series of preliminary super- or sub-Gaussian candidates. Moreover, by exploiting the local continuity of noise components, we embed a Markov random field into the PMoEP model and further propose the advanced PMoEP-MRF model. An Expectation Maximization (EM) algorithm and a variational EM (VEM) algorithm are also designed to infer the parameters involved in the proposed PMoEP and PMoEP-MRF models, respectively. The superiority of our methods is demonstrated by extensive experiments on synthetic data, face modeling, hyperspectral image restoration, and background subtraction. |
Tasks | Image Restoration |
Published | 2016-01-06 |
URL | http://arxiv.org/abs/1601.01060v1 |
http://arxiv.org/pdf/1601.01060v1.pdf | |
PWC | https://paperswithcode.com/paper/low-rank-matrix-factorization-under-general |
Repo | |
Framework | |
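The L2-loss baseline that this paper generalizes, plain low-rank factorization under Gaussian noise, reduces in the rank-1 case to a few lines of alternating least squares; the PMoEP models replace the fixed L2 loss with a learned mixture-noise likelihood, which this sketch does not attempt.

```python
def rank1_factorize(M, iters=50):
    """Rank-1 least-squares factorization M ~ u v^T via alternating
    closed-form updates (the L2/Gaussian-noise baseline)."""
    m, n = len(M), len(M[0])
    v = [1.0] * n
    u = [0.0] * m
    for _ in range(iters):
        vv = sum(x * x for x in v)
        u = [sum(M[i][j] * v[j] for j in range(n)) / vv for i in range(m)]
        uu = sum(x * x for x in u)
        v = [sum(M[i][j] * u[i] for i in range(m)) / uu for j in range(n)]
    return u, v
```

On an exactly rank-1 matrix the reconstruction is recovered after the first full sweep.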
Understanding Rating Behaviour and Predicting Ratings by Identifying Representative Users
Title | Understanding Rating Behaviour and Predicting Ratings by Identifying Representative Users |
Authors | Rahul Kamath, Masanao Ochi, Yutaka Matsuo |
Abstract | Online user reviews describing various products and services are now abundant on the web. While the information conveyed through review texts and ratings is easily comprehensible, there is a wealth of hidden information in them that is not immediately obvious. In this study, we unlock this hidden value behind user reviews to understand the various dimensions along which users rate products. We learn a set of users that represent each of these dimensions and use their ratings to predict product ratings. Specifically, we work with restaurant reviews to identify users whose ratings are influenced by dimensions like ‘Service’, ‘Atmosphere’ etc. in order to predict restaurant ratings and understand the variation in rating behaviour across different cuisines. While previous approaches to obtaining product ratings require either a large number of user ratings or a few review texts, we show that it is possible to predict ratings with few user ratings and no review text. Our experiments show that our approach outperforms other conventional methods by 16-27% in terms of RMSE. |
Tasks | |
Published | 2016-04-19 |
URL | http://arxiv.org/abs/1604.05468v1 |
http://arxiv.org/pdf/1604.05468v1.pdf | |
PWC | https://paperswithcode.com/paper/understanding-rating-behaviour-and-predicting |
Repo | |
Framework | |
Behavior and path planning for the coalition of cognitive robots in smart relocation tasks
Title | Behavior and path planning for the coalition of cognitive robots in smart relocation tasks |
Authors | Aleksandr I. Panov, Konstantin Yakovlev |
Abstract | In this paper we outline an approach to solving a special type of navigation task for robotic systems, in which a coalition of robots (agents) acts in a 2D environment that can be modified by their actions, and all agents share the same goal location. The latter is originally unreachable for some members of the coalition, but the common task can still be accomplished because the agents can assist each other (e.g. by modifying the environment). We call such tasks smart relocation tasks (as they cannot be solved by pure path planning methods) and study the spatial and behavioral interaction of robots while solving them. We use a cognitive approach and introduce a semiotic knowledge representation, the sign world model, which underlies the behavior planning methodology. Planning is viewed as a recursive search process in the hierarchical state-space induced by signs, with path planning signs residing on the lowest level. Reaching this level triggers path planning, which is accomplished by state-of-the-art grid-based planners focused on producing smooth paths (e.g. LIAN), thus indirectly guaranteeing the feasibility of those paths with respect to the agents’ dynamic constraints. |
Tasks | |
Published | 2016-07-27 |
URL | http://arxiv.org/abs/1607.08038v1 |
http://arxiv.org/pdf/1607.08038v1.pdf | |
PWC | https://paperswithcode.com/paper/behavior-and-path-planning-for-the-coalition |
Repo | |
Framework | |
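The lowest level of the hierarchy this abstract describes, grid-based path planning, can be sketched with a standard A* search; the paper relies on planners that produce smoother, angle-constrained paths (e.g. LIAN), so 4-connected A* here is only a stand-in.

```python
import heapq

def astar(grid, start, goal):
    """Grid A* with 4-connected moves and a Manhattan-distance heuristic.
    Cells equal to 0 are free, 1 are obstacles; returns the optimal path
    as a list of (row, col) cells, or None if the goal is unreachable."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]  # (f, g, cell, path)
    seen = set()
    while frontier:
        f, g, cur, path = heapq.heappop(frontier)
        if cur == goal:
            return path
        if cur in seen:
            continue
        seen.add(cur)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = cur[0] + dy, cur[1] + dx
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and grid[ny][nx] == 0 and (ny, nx) not in seen):
                heapq.heappush(frontier,
                               (g + 1 + h((ny, nx)), g + 1, (ny, nx),
                                path + [(ny, nx)]))
    return None
```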