Paper Group ANR 83
Likelihood Inflating Sampling Algorithm. Supervised Texture Segmentation: A Comparative Study. Natural Language Generation as Planning under Uncertainty Using Reinforcement Learning. Free Form based active contours for image segmentation and free space perception. Prediction of Manipulation Actions. Parameterized Complexity Results for a Model of T …
Likelihood Inflating Sampling Algorithm
Title | Likelihood Inflating Sampling Algorithm |
Authors | Reihaneh Entezari, Radu V. Craiu, Jeffrey S. Rosenthal |
Abstract | Markov Chain Monte Carlo (MCMC) sampling from a posterior distribution corresponding to a massive data set can be computationally prohibitive since producing one sample requires a number of operations that is linear in the data size. In this paper, we introduce a new communication-free parallel method, the Likelihood Inflating Sampling Algorithm (LISA), that significantly reduces computational costs by randomly splitting the dataset into smaller subsets and running MCMC methods independently in parallel on each subset using different processors. Each processor will be used to run an MCMC chain that samples sub-posterior distributions which are defined using an “inflated” likelihood function. We develop a strategy for combining the draws from different sub-posteriors to study the full posterior of the Bayesian Additive Regression Trees (BART) model. The performance of the method is tested using both simulated and real data. |
Tasks | |
Published | 2016-05-06 |
URL | http://arxiv.org/abs/1605.02113v3 |
http://arxiv.org/pdf/1605.02113v3.pdf | |
PWC | https://paperswithcode.com/paper/likelihood-inflating-sampling-algorithm |
Repo | |
Framework | |
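Below is a minimal sketch of the sub-posterior idea described in the abstract above: the data are split into K subsets, each subset's likelihood is "inflated" (raised to the power K), and the draws are combined. The Gaussian likelihood, flat prior, random-walk Metropolis sampler, and simple draw-averaging combination are illustrative assumptions; the paper develops the combination strategy for BART.

```python
import numpy as np

def inflated_loglik(theta, subset, K, sigma=1.0):
    # Inflate the subset likelihood by the factor K (i.e. multiply the
    # log-likelihood by K) so the sub-posterior has roughly the scale of
    # the full posterior.
    return K * np.sum(-0.5 * ((subset - theta) / sigma) ** 2)

def metropolis(subset, K, n_iter=5000, step=0.1):
    # Random-walk Metropolis on one subset's inflated sub-posterior.
    theta, draws = subset.mean(), []
    for _ in range(n_iter):
        prop = theta + step * np.random.randn()
        if np.log(np.random.rand()) < (inflated_loglik(prop, subset, K)
                                       - inflated_loglik(theta, subset, K)):
            theta = prop
        draws.append(theta)
    return np.array(draws)

# Split the data across K "processors" (a serial loop here, for illustration).
data = np.random.randn(10000) + 2.0
K = 4
subsets = np.array_split(np.random.permutation(data), K)
sub_draws = [metropolis(s, K) for s in subsets]

# One simple combination: average the aligned draws across sub-posteriors.
combined = np.mean(np.vstack(sub_draws), axis=0)
print(combined.mean(), combined.std())
```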
Supervised Texture Segmentation: A Comparative Study
Title | Supervised Texture Segmentation: A Comparative Study |
Authors | Omar S. Al-Kadi |
Abstract | This paper compares four feature-extraction approaches for texture segmentation: Gabor filters (GF), Gaussian Markov random fields (GMRF), the run-length matrix (RLM) and the grey-level co-occurrence matrix (GLCM). The GF performed best in terms of segmentation quality, while the GLCM localises texture boundaries better than the other methods. |
Tasks | |
Published | 2016-01-02 |
URL | http://arxiv.org/abs/1601.00212v1 |
http://arxiv.org/pdf/1601.00212v1.pdf | |
PWC | https://paperswithcode.com/paper/supervised-texture-segmentation-a-comparative |
Repo | |
Framework | |
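As a rough illustration of the Gabor-filter pipeline compared in the paper, the sketch below convolves an image with a small bank of Gabor kernels and groups the per-pixel responses into regions. The kernel parameters, the random placeholder image, and the use of unsupervised k-means in place of the paper's supervised classification are assumptions for illustration.

```python
import numpy as np
from scipy.signal import convolve2d
from sklearn.cluster import KMeans

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    # Real part of a Gabor filter: a cosine carrier under a Gaussian envelope.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

image = np.random.rand(64, 64)  # placeholder grayscale image
responses = []
for freq in (0.1, 0.25):
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        k = gabor_kernel(freq, theta)
        responses.append(np.abs(convolve2d(image, k, mode="same")))

# One feature vector per pixel; cluster pixels into texture regions.
features = np.stack(responses, axis=-1).reshape(-1, len(responses))
labels = KMeans(n_clusters=2, n_init=10).fit_predict(features)
segmentation = labels.reshape(image.shape)
```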
Natural Language Generation as Planning under Uncertainty Using Reinforcement Learning
Title | Natural Language Generation as Planning under Uncertainty Using Reinforcement Learning |
Authors | Verena Rieser, Oliver Lemon |
Abstract | We present and evaluate a new model for Natural Language Generation (NLG) in Spoken Dialogue Systems, based on statistical planning, given noisy feedback from the current generation context (e.g. a user and a surface realiser). We study its use in a standard NLG problem: how to present information (in this case a set of search results) to users, given the complex trade-offs between utterance length, amount of information conveyed, and cognitive load. We set these trade-offs by analysing existing MATCH data. We then train an NLG policy using Reinforcement Learning (RL), which adapts its behaviour to noisy feedback from the current generation context. This policy is compared to several baselines derived from previous work in this area. The learned policy significantly outperforms all the prior approaches. |
Tasks | Spoken Dialogue Systems, Text Generation |
Published | 2016-06-15 |
URL | http://arxiv.org/abs/1606.04686v1 |
http://arxiv.org/pdf/1606.04686v1.pdf | |
PWC | https://paperswithcode.com/paper/natural-language-generation-as-planning-under |
Repo | |
Framework | |
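A toy sketch of the planning framing described above: a one-step policy chooses how many search results to present, receives a noisy reward trading off information conveyed against utterance length, and is learned by tabular Q-learning. The state/action discretisation and reward weights are invented for illustration; the paper's policy and MATCH-derived reward function are not reproduced.

```python
import numpy as np

n_states = 5      # e.g. binned number of database matches
n_actions = 4     # e.g. present 1, 3, 5, or all results
Q = np.zeros((n_states, n_actions))
alpha, epsilon = 0.1, 0.2

def reward(state, action):
    # Placeholder trade-off: information conveyed minus a length/cognitive-load
    # penalty, plus noise standing in for the user and surface realiser.
    info = min(action + 1, state + 1)
    length_penalty = 0.5 * (action + 1)
    noise = np.random.randn() * 0.1
    return info - length_penalty + noise

for _ in range(20000):
    s = np.random.randint(n_states)
    a = np.random.randint(n_actions) if np.random.rand() < epsilon else Q[s].argmax()
    Q[s, a] += alpha * (reward(s, a) - Q[s, a])

print(Q.argmax(axis=1))   # learned number-of-results choice for each state
```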
Free Form based active contours for image segmentation and free space perception
Title | Free Form based active contours for image segmentation and free space perception |
Authors | Ouiddad Labbani I., Pauline Merveilleux O, Olivier Ruatta |
Abstract | In this paper we present a novel approach for representing and evolving deformable active contours. The method combines piecewise regular Bézier models and curve evolution defined by local Free Form Deformation. The contour deformation is locally constrained which allows contour convergence with almost linear complexity while adapting to various shape settings and handling topology changes of the active contour. We demonstrate the effectiveness of the new active contour scheme for visual free space perception and segmentation using omnidirectional images acquired by a robot exploring unknown indoor and outdoor environments. Several experiments validate the approach with comparison to state-of-the-art parametric and geometric active contours and provide fast and real-time robot free space segmentation and navigation. |
Tasks | Semantic Segmentation |
Published | 2016-06-15 |
URL | http://arxiv.org/abs/1606.04774v1 |
http://arxiv.org/pdf/1606.04774v1.pdf | |
PWC | https://paperswithcode.com/paper/free-form-based-active-contours-for-image |
Repo | |
Framework | |
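The sketch below evaluates a closed piecewise cubic Bézier contour and shows how displacing a single control point deforms only the segment it touches, which is the locality property the abstract relies on. The control points and the "image force" are placeholders, not the paper's evolution scheme.

```python
import numpy as np

def bezier(p0, p1, p2, p3, t):
    # Cubic Bezier curve evaluated at the parameter values in t.
    t = t[:, None]
    return ((1 - t)**3 * p0 + 3 * (1 - t)**2 * t * p1
            + 3 * (1 - t) * t**2 * p2 + t**3 * p3)

# Two cubic segments; segments share endpoints so the contour stays closed.
ctrl = np.array([[0, 0], [1, 0], [2, 1], [2, 2],
                 [2, 2], [1, 3], [0, 2], [0, 0]], dtype=float)
t = np.linspace(0, 1, 50)
contour = np.vstack([bezier(*ctrl[i:i + 4], t) for i in (0, 4)])

# Local deformation: moving one control point only changes the first segment.
force = np.array([0.3, -0.2])        # placeholder for a local image-based force
ctrl[1] += force
deformed = np.vstack([bezier(*ctrl[i:i + 4], t) for i in (0, 4)])
```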
Prediction of Manipulation Actions
Title | Prediction of Manipulation Actions |
Authors | Cornelia Fermüller, Fang Wang, Yezhou Yang, Konstantinos Zampogiannis, Yi Zhang, Francisco Barranco, Michael Pfeiffer |
Abstract | Looking at a person’s hands one often can tell what the person is going to do next, how his/her hands are moving and where they will be, because an actor’s intentions shape his/her movement kinematics during action execution. Similarly, active systems with real-time constraints must not simply rely on passive video-segment classification, but they have to continuously update their estimates and predict future actions. In this paper, we study the prediction of dexterous actions. We recorded from subjects performing different manipulation actions on the same object, such as “squeezing”, “flipping”, “washing”, “wiping” and “scratching” with a sponge. In psychophysical experiments, we evaluated human observers’ skills in predicting actions from video sequences of different length, depicting the hand movement in the preparation and execution of actions before and after contact with the object. We then developed a recurrent neural network based method for action prediction using as input patches around the hand. We also used the same formalism to predict the forces on the fingertips, using synchronized video and force data streams for training. Evaluations on two new datasets showed that our system closely matches human performance in the recognition task, and demonstrates the ability of our algorithm to predict what and how a dexterous action is performed. |
Tasks | |
Published | 2016-10-03 |
URL | http://arxiv.org/abs/1610.00759v1 |
http://arxiv.org/pdf/1610.00759v1.pdf | |
PWC | https://paperswithcode.com/paper/prediction-of-manipulation-actions |
Repo | |
Framework | |
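A minimal PyTorch sketch of the recurrent prediction setup in the abstract above: an LSTM consumes per-frame features of hand patches and emits an action prediction after every frame, so the estimate can be updated online before the action completes. The feature dimension, layer sizes, and five action classes are placeholders, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class ActionPredictor(nn.Module):
    def __init__(self, feat_dim=128, hidden=64, n_actions=5):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, frames):          # frames: (batch, time, feat_dim)
        out, _ = self.lstm(frames)
        return self.head(out)           # a prediction after every frame

model = ActionPredictor()
frames = torch.randn(2, 30, 128)        # two clips of 30 pre-extracted frame features
logits = model(frames)                  # (2, 30, 5): anticipates the action class
print(logits.shape)                     # before the clip finishes
```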
Parameterized Complexity Results for a Model of Theory of Mind Based on Dynamic Epistemic Logic
Title | Parameterized Complexity Results for a Model of Theory of Mind Based on Dynamic Epistemic Logic |
Authors | Iris van de Pol, Iris van Rooij, Jakub Szymanik |
Abstract | In this paper we introduce a computational-level model of theory of mind (ToM) based on dynamic epistemic logic (DEL), and we analyze its computational complexity. The model is a special case of DEL model checking. We provide a parameterized complexity analysis, considering several aspects of DEL (e.g., number of agents, size of preconditions, etc.) as parameters. We show that model checking for DEL is PSPACE-hard, also when restricted to single-pointed models and S5 relations, thereby solving an open problem in the literature. Our approach is aimed at formalizing current intractability claims in the cognitive science literature regarding computational models of ToM. |
Tasks | |
Published | 2016-06-24 |
URL | http://arxiv.org/abs/1606.07526v1 |
http://arxiv.org/pdf/1606.07526v1.pdf | |
PWC | https://paperswithcode.com/paper/parameterized-complexity-results-for-a-model |
Repo | |
Framework | |
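To make the model-checking setting concrete, the sketch below checks a knowledge formula K_a p on a tiny pointed Kripke model with S5 (equivalence) accessibility relations. The two-world example is invented for illustration and is far smaller than the DEL models the complexity results concern.

```python
# Worlds with atomic valuations, and one equivalence relation per agent.
worlds = {"w1": {"p": True}, "w2": {"p": False}}
access = {
    "a": {("w1", "w1"), ("w2", "w2")},                              # a tells w1 and w2 apart
    "b": {("w1", "w1"), ("w2", "w2"), ("w1", "w2"), ("w2", "w1")},  # b does not
}

def holds(formula, world):
    kind = formula[0]
    if kind == "atom":
        return worlds[world][formula[1]]
    if kind == "not":
        return not holds(formula[1], world)
    if kind == "and":
        return holds(formula[1], world) and holds(formula[2], world)
    if kind == "K":  # K_agent phi: phi holds in every world the agent considers possible
        agent, phi = formula[1], formula[2]
        return all(holds(phi, v) for (u, v) in access[agent] if u == world)
    raise ValueError(kind)

print(holds(("K", "a", ("atom", "p")), "w1"))   # True: a knows p in w1
print(holds(("K", "b", ("atom", "p")), "w1"))   # False: b considers a not-p world possible
```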
Computerized Multiparametric MR image Analysis for Prostate Cancer Aggressiveness-Assessment
Title | Computerized Multiparametric MR image Analysis for Prostate Cancer Aggressiveness-Assessment |
Authors | Imon Banerjee, Lewis Hahn, Geoffrey Sonn, Richard Fan, Daniel L. Rubin |
Abstract | We propose an automated method for detecting aggressive prostate cancer (CaP) (Gleason score >=7) based on a comprehensive analysis of the lesion and the surrounding normal prostate tissue which has been simultaneously captured in T2-weighted MR images, diffusion-weighted images (DWI) and apparent diffusion coefficient maps (ADC). The proposed methodology was tested on a dataset of 79 patients (40 aggressive, 39 non-aggressive). We evaluated the performance of a wide range of popular quantitative imaging features on the characterization of aggressive versus non-aggressive CaP. We found that a group of 44 discriminative predictors among 1464 quantitative imaging features can be used to produce an area under the ROC curve of 0.73. |
Tasks | |
Published | 2016-12-01 |
URL | http://arxiv.org/abs/1612.00408v1 |
http://arxiv.org/pdf/1612.00408v1.pdf | |
PWC | https://paperswithcode.com/paper/computerized-multiparametric-mr-image |
Repo | |
Framework | |
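A hedged sketch of the evaluation pipeline implied by the abstract: select a small group of discriminative quantitative imaging features and report a cross-validated ROC AUC. The synthetic data and the specific choice of univariate selection with logistic regression are assumptions, not the authors' models, and selecting features outside the cross-validation loop (done here for brevity) is optimistic.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(79, 1464))        # 79 patients, 1464 candidate imaging features
y = np.array([1] * 40 + [0] * 39)      # aggressive vs non-aggressive labels

# Keep 44 discriminative features, then score a simple classifier by ROC AUC.
selected = SelectKBest(f_classif, k=44).fit_transform(X, y)
auc = cross_val_score(LogisticRegression(max_iter=1000), selected, y,
                      scoring="roc_auc", cv=5).mean()
print(f"cross-validated AUC: {auc:.2f}")
```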
Improving Color Constancy by Discounting the Variation of Camera Spectral Sensitivity
Title | Improving Color Constancy by Discounting the Variation of Camera Spectral Sensitivity |
Authors | Shao-Bing Gao, Ming Zhang, Chao-Yi Li, Yong-Jie Li |
Abstract | It is an ill-posed problem to recover the true scene colors from a color biased image by discounting the effects of scene illuminant and camera spectral sensitivity (CSS) at the same time. Most color constancy (CC) models have been designed to first estimate the illuminant color, which is then removed from the color biased image to obtain an image taken under white light, without the explicit consideration of CSS effect on CC. This paper first studies the CSS effect on illuminant estimation arising in the inter-dataset-based CC (inter-CC), i.e., training a CC model on one dataset and then testing on another dataset captured by a distinct CSS. We show the clear degradation of existing CC models for inter-CC application. Then a simple way is proposed to overcome such degradation by first quickly learning a transform matrix between the two distinct CSSs (CSS-1 and CSS-2). The learned matrix is then used to convert the data (including the illuminant ground truth and the color biased images) rendered under CSS-1 into CSS-2, and then train and apply the CC model on the color biased images under CSS-2, without the burden of acquiring a training set under CSS-2. Extensive experiments on synthetic and real images show that our method can clearly improve the inter-CC performance for traditional CC algorithms. We suggest that by taking the CSS effect into account, it is more likely to obtain truly color constant images invariant to the changes of both illuminant and camera sensors. |
Tasks | Color Constancy |
Published | 2016-09-06 |
URL | http://arxiv.org/abs/1609.01670v2 |
http://arxiv.org/pdf/1609.01670v2.pdf | |
PWC | https://paperswithcode.com/paper/improving-color-constancy-by-discounting-the |
Repo | |
Framework | |
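The core step of the proposed fix, learning a linear transform between colours rendered under two camera spectral sensitivities, can be sketched as a least-squares problem; the paired colour samples and the "true" transform below are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
colors_css1 = rng.uniform(size=(200, 3))            # RGB responses under CSS-1
true_M = np.array([[0.90, 0.10, 0.00],
                   [0.05, 1.00, 0.05],
                   [0.00, 0.15, 0.85]])
colors_css2 = colors_css1 @ true_M.T                # the same scenes under CSS-2

# Solve colors_css1 @ M.T ~= colors_css2 in the least-squares sense.
M_T, *_ = np.linalg.lstsq(colors_css1, colors_css2, rcond=None)
M = M_T.T

# Convert CSS-1 data (e.g. the illuminant ground truth) into the CSS-2 space.
illuminant_css1 = np.array([0.8, 0.7, 0.5])
illuminant_css2 = M @ illuminant_css1
print(illuminant_css2)
```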
The Dependent Random Measures with Independent Increments in Mixture Models
Title | The Dependent Random Measures with Independent Increments in Mixture Models |
Authors | Cheng Luo, Richard Yi Da Xu, Yang Xiang |
Abstract | When observations are organized into groups where commonalities exist amongst them, dependent random measures can be an ideal choice for modeling. One of the propositions of the dependent random measures is that the atoms of the posterior distribution are shared amongst groups, and hence groups can borrow information from each other. When a normalized dependent random measure prior with independent increments is applied, we can derive the appropriate exchangeable partition probability function (EPPF), and subsequently also deduce its inference algorithm given any mixture model likelihood. We provide all necessary derivation and solution to this framework. For demonstration, we used a mixture of Gaussians likelihood in combination with a dependent structure constructed by linear combinations of CRMs. Our experiments show superior performance when using this framework, where the inferred values including the mixing weights and the number of clusters both respond appropriately to the number of completely random measures used. |
Tasks | |
Published | 2016-06-27 |
URL | http://arxiv.org/abs/1606.08105v1 |
http://arxiv.org/pdf/1606.08105v1.pdf | |
PWC | https://paperswithcode.com/paper/the-dependent-random-measures-with |
Repo | |
Framework | |
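A rough sketch of the modelling idea, assuming finite gamma approximations of the completely random measures: two groups build their mixing weights as normalized linear combinations of shared CRMs, so atoms (clusters) can be shared across groups. The atom count and mixing coefficients are placeholders, and the EPPF-based inference from the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n_atoms, n_crms = 20, 3
crm_weights = rng.gamma(shape=0.5, scale=1.0, size=(n_crms, n_atoms))  # shared CRMs
atoms = rng.normal(size=n_atoms)                                       # shared atom locations

mixing = np.array([[1.0, 0.5, 0.0],          # group 1 combines CRM 1 and CRM 2
                   [0.0, 0.5, 1.0]])         # group 2 combines CRM 2 and CRM 3
group_measures = mixing @ crm_weights
group_probs = group_measures / group_measures.sum(axis=1, keepdims=True)  # normalize

# Each group draws cluster assignments from its own weights but over the shared
# atoms, so clusters are shared and information is borrowed across groups.
assignments = [rng.choice(n_atoms, size=50, p=p) for p in group_probs]
```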
Approaching the Computational Color Constancy as a Classification Problem through Deep Learning
Title | Approaching the Computational Color Constancy as a Classification Problem through Deep Learning |
Authors | Seoung Wug Oh, Seon Joo Kim |
Abstract | Computational color constancy refers to the problem of computing the illuminant color so that the images of a scene under varying illumination can be normalized to an image under the canonical illumination. In this paper, we adopt a deep learning framework for the illumination estimation problem. The proposed method works under the assumption of uniform illumination over the scene and aims for the accurate illuminant color computation. Specifically, we trained the convolutional neural network to solve the problem by casting the color constancy problem as an illumination classification problem. We designed the deep learning architecture so that the output of the network can be directly used for computing the color of the illumination. Experimental results show that our deep network is able to extract useful features for the illumination estimation and our method outperforms all previous color constancy methods on multiple test datasets. |
Tasks | Color Constancy |
Published | 2016-08-29 |
URL | http://arxiv.org/abs/1608.07951v1 |
http://arxiv.org/pdf/1608.07951v1.pdf | |
PWC | https://paperswithcode.com/paper/approaching-the-computational-color-constancy |
Repo | |
Framework | |
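A minimal PyTorch sketch of the classification framing described above: a small CNN scores K quantised illuminant colours and the estimate is read off the bin centres. The architecture, bin count, and bin centres are placeholders, not the network from the paper.

```python
import torch
import torch.nn as nn

K = 25
illuminant_bins = torch.rand(K, 3)                  # placeholder quantised illuminant colours

net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, K),                               # one score per illuminant class
)

patch = torch.rand(1, 3, 64, 64)                    # a colour-biased image patch
probs = net(patch).softmax(dim=1)
estimated_illuminant = probs @ illuminant_bins      # expectation over the bin centres
print(estimated_illuminant)
```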
Deep Structured-Output Regression Learning for Computational Color Constancy
Title | Deep Structured-Output Regression Learning for Computational Color Constancy |
Authors | Yanlin Qian, Ke Chen, Joni-Kristian Kamarainen, Jarno Nikkanen, Jiri Matas |
Abstract | Computational color constancy that requires estimation of illuminant colors of images is a fundamental yet active problem in computer vision, which can be formulated into a regression problem. To learn a robust regressor for color constancy, obtaining meaningful imagery features and capturing latent correlations across output variables play a vital role. In this work, we introduce a novel deep structured-output regression learning framework to achieve both goals simultaneously. By borrowing the power of deep convolutional neural networks (CNN) originally designed for visual recognition, the proposed framework can automatically discover strong features for white balancing over different illumination conditions and learn a multi-output regressor beyond underlying relationships between features and targets to find the complex interdependence of different dimensions of target variables. Experiments on two public benchmarks demonstrate that our method achieves competitive performance in comparison with the state-of-the-art approaches. |
Tasks | Color Constancy |
Published | 2016-07-13 |
URL | http://arxiv.org/abs/1607.03856v2 |
http://arxiv.org/pdf/1607.03856v2.pdf | |
PWC | https://paperswithcode.com/paper/deep-structured-output-regression-learning |
Repo | |
Framework | |
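For context, the recovery angular error commonly used to evaluate colour-constancy regressors is the angle between the estimated and ground-truth illuminant vectors; it is shown below as the natural evaluation target for such a multi-output regressor, not as the paper's own loss definition.

```python
import numpy as np

def angular_error(estimate, truth):
    # Angle (in degrees) between the estimated and ground-truth illuminants.
    cos = np.dot(estimate, truth) / (np.linalg.norm(estimate) * np.linalg.norm(truth))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

print(angular_error(np.array([0.9, 1.0, 0.7]), np.array([1.0, 1.0, 0.8])))
```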
Local Network Community Detection with Continuous Optimization of Conductance and Weighted Kernel K-Means
Title | Local Network Community Detection with Continuous Optimization of Conductance and Weighted Kernel K-Means |
Authors | Twan van Laarhoven, Elena Marchiori |
Abstract | Local network community detection is the task of finding a single community of nodes concentrated around few given seed nodes in a localized way. Conductance is a popular objective function used in many algorithms for local community detection. This paper studies a continuous relaxation of conductance. We show that continuous optimization of this objective still leads to discrete communities. We investigate the relation of conductance with weighted kernel k-means for a single community, which leads to the introduction of a new objective function, $\sigma$-conductance. Conductance is obtained by setting $\sigma$ to $0$. Two algorithms, EMc and PGDc, are proposed to locally optimize $\sigma$-conductance and automatically tune the parameter $\sigma$. They are based on expectation maximization and projected gradient descent, respectively. We prove locality and give performance guarantees for EMc and PGDc for a class of dense and well separated communities centered around the seeds. Experiments are conducted on networks with ground-truth communities, comparing to state-of-the-art graph diffusion algorithms for conductance optimization. On large graphs, results indicate that EMc and PGDc stay localized and produce communities most similar to the ground truth, while graph diffusion algorithms generate large communities of lower quality. |
Tasks | Community Detection, Local Community Detection |
Published | 2016-01-21 |
URL | http://arxiv.org/abs/1601.05775v2 |
http://arxiv.org/pdf/1601.05775v2.pdf | |
PWC | https://paperswithcode.com/paper/local-network-community-detection-with |
Repo | |
Framework | |
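Conductance itself is easy to state; the sketch below computes it for a candidate community given an unweighted adjacency matrix. The $\sigma$-conductance objective and the EMc/PGDc optimisers from the paper are not reproduced.

```python
import numpy as np

def conductance(adj, S):
    # Conductance of the node set S: edges leaving S divided by the smaller of
    # the two side volumes (sum of degrees).
    S = np.asarray(S, dtype=bool)
    cut = adj[S][:, ~S].sum()
    vol_S = adj[S].sum()
    vol_rest = adj[~S].sum()
    return cut / min(vol_S, vol_rest)

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]])
print(conductance(adj, [True, True, True, False]))  # 1 cut edge, volumes 7 and 1 -> 1.0
```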
Embracing Error to Enable Rapid Crowdsourcing
Title | Embracing Error to Enable Rapid Crowdsourcing |
Authors | Ranjay Krishna, Kenji Hata, Stephanie Chen, Joshua Kravitz, David A. Shamma, Li Fei-Fei, Michael S. Bernstein |
Abstract | Microtask crowdsourcing has enabled dataset advances in social science and machine learning, but existing crowdsourcing schemes are too expensive to scale up with the expanding volume of data. To scale and widen the applicability of crowdsourcing, we present a technique that produces extremely rapid judgments for binary and categorical labels. Rather than punishing all errors, which causes workers to proceed slowly and deliberately, our technique speeds up workers’ judgments to the point where errors are acceptable and even expected. We demonstrate that it is possible to rectify these errors by randomizing task order and modeling response latency. We evaluate our technique on a breadth of common labeling tasks such as image verification, word similarity, sentiment analysis and topic classification. Where prior work typically achieves a 0.25x to 1x speedup over fixed majority vote, our approach often achieves an order of magnitude (10x) speedup. |
Tasks | Sentiment Analysis |
Published | 2016-02-14 |
URL | http://arxiv.org/abs/1602.04506v1 |
http://arxiv.org/pdf/1602.04506v1.pdf | |
PWC | https://paperswithcode.com/paper/embracing-error-to-enable-rapid-crowdsourcing |
Repo | |
Framework | |
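A toy sketch of the response-latency idea from the abstract, assuming items flashed at a fixed rate and a roughly constant worker reaction delay: attributing each keypress to the item shown one interval earlier recovers the intended labels, whereas blaming the item currently on screen does not. The timings and the perfectly constant delay are invented for illustration.

```python
interval_ms = 500                                  # one item shown every 500 ms
delay_ms = 600                                     # assumed worker reaction time
true_labels = [1, 0, 1, 1, 0, 0, 1, 0]             # 1 = item deserves a keypress

# Simulated keypress times: workers react delay_ms after a positive item appears.
press_times = [i * interval_ms + delay_ms
               for i, lab in enumerate(true_labels) if lab == 1]

def attribute(times, shift_ms):
    labels = [0] * len(true_labels)
    for t in times:
        idx = (t - shift_ms) // interval_ms
        if 0 <= idx < len(labels):
            labels[idx] = 1
    return labels

naive = attribute(press_times, 0)                  # blames the item on screen at press time
corrected = attribute(press_times, delay_ms)       # shifts each press back by the delay
print(naive, corrected, true_labels)
```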
Micro-interventions in urban transport from pattern discovery on the flow of passengers and on the bus network
Title | Micro-interventions in urban transport from pattern discovery on the flow of passengers and on the bus network |
Authors | Carlos Caminha, Vasco Furtado, Vládia Pinheiro, Caio Ponte |
Abstract | In this paper, we describe a case study in a big metropolis, in which, from data collected by digital sensors, we tried to understand the mobility patterns of bus users and how this can generate knowledge to suggest interventions that are applied incrementally to the transportation network in use. We first estimated an Origin-Destination matrix of bus users from datasets on ticket validation and GPS positioning of buses. Then we represent the supply of buses with their routes through bus stops as a complex network, which allowed us to understand the bottlenecks of the current scenario and, in particular, by applying community discovery techniques, to identify the clusters present in the service supply infrastructure. Finally, by superimposing the flow of people represented in the Origin-Destination matrix on the supply network, we exemplify how micro-interventions can be prospected, with an example of the introduction of express routes. |
Tasks | |
Published | 2016-06-14 |
URL | http://arxiv.org/abs/1606.04190v1 |
http://arxiv.org/pdf/1606.04190v1.pdf | |
PWC | https://paperswithcode.com/paper/micro-interventions-in-urban-transport-from |
Repo | |
Framework | |
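A minimal sketch of assembling an Origin-Destination matrix from individual trips, assuming each record already pairs a boarding stop with an alighting stop; the paper's inference of destinations from ticket validations and GPS traces is not reproduced here.

```python
from collections import Counter

# Placeholder trip records: (boarding stop, alighting stop).
trips = [("stop_A", "stop_C"), ("stop_A", "stop_B"),
         ("stop_B", "stop_C"), ("stop_A", "stop_C")]

od_counts = Counter(trips)
stops = sorted({s for trip in trips for s in trip})

# Rows are origins, columns are destinations, entries are trip counts.
od_matrix = [[od_counts.get((o, d), 0) for d in stops] for o in stops]
for origin, row in zip(stops, od_matrix):
    print(origin, row)
```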
Overcoming Challenges in Fixed Point Training of Deep Convolutional Networks
Title | Overcoming Challenges in Fixed Point Training of Deep Convolutional Networks |
Authors | Darryl D. Lin, Sachin S. Talathi |
Abstract | It is known that training deep neural networks, in particular, deep convolutional networks, with aggressively reduced numerical precision is challenging. The stochastic gradient descent algorithm becomes unstable in the presence of noisy gradient updates resulting from arithmetic with limited numeric precision. One of the well-accepted solutions facilitating the training of low precision fixed point networks is stochastic rounding. However, to the best of our knowledge, the source of the instability in training neural networks with noisy gradient updates has not been well investigated. This work is an attempt to draw a theoretical connection between low numerical precision and training algorithm stability. In doing so, we will also propose and verify through experiments methods that are able to improve the training performance of deep convolutional networks in fixed point. |
Tasks | |
Published | 2016-07-08 |
URL | http://arxiv.org/abs/1607.02241v1 |
http://arxiv.org/pdf/1607.02241v1.pdf | |
PWC | https://paperswithcode.com/paper/overcoming-challenges-in-fixed-point-training |
Repo | |
Framework | |
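Stochastic rounding, the fixed-point training aid the abstract refers to, rounds each value up or down with probability proportional to its distance from the neighbouring grid points, so the rounding is unbiased in expectation. A minimal sketch follows; the number of fractional bits is a placeholder.

```python
import numpy as np

def stochastic_round(x, frac_bits=4):
    # Round to a fixed-point grid with 2**frac_bits steps per unit. Each value
    # goes up with probability equal to its distance from the lower grid point,
    # so E[stochastic_round(x)] = x (up to clipping, not modelled here).
    scale = 2 ** frac_bits
    scaled = x * scale
    floor = np.floor(scaled)
    prob_up = scaled - floor
    rounded = floor + (np.random.rand(*x.shape) < prob_up)
    return rounded / scale

grads = np.random.randn(5) * 0.01
print(grads)
print(stochastic_round(grads))   # unbiased, unlike round-to-nearest at low precision
```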