Paper Group ANR 301
Utilization of Deep Reinforcement Learning for saccadic-based object visual search
Title | Utilization of Deep Reinforcement Learning for saccadic-based object visual search |
Authors | Tomasz Kornuta, Kamil Rocki |
Abstract | The paper focuses on the problem of learning saccades enabling visual object search. The developed system combines reinforcement learning with a neural network for learning to predict the possible outcomes of its actions. We validated the solution in three types of environment consisting of (pseudo)-randomly generated matrices of digits. The experimental verification is followed by a discussion of the elements required by systems mimicking fovea movement and of possible further research directions. |
Tasks | |
Published | 2016-10-20 |
URL | http://arxiv.org/abs/1610.06492v1 |
http://arxiv.org/pdf/1610.06492v1.pdf | |
PWC | https://paperswithcode.com/paper/utilization-of-deep-reinforcement-learning |
Repo | |
Framework | |
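As an executable illustration of the saccade-learning setup above, the sketch below trains a tabular Q-learning agent to move a fovea over a small fixed grid of digits until it fixates the target cell. The grid size, reward scheme and hyperparameters are illustrative assumptions, not the authors' architecture (which couples reinforcement learning with a predictive neural network).

```python
# Minimal tabular Q-learning sketch for a saccade-style search task.
# The 5x5 digit grid, reward scheme and hyperparameters are illustrative
# assumptions, not the environment or model used in the paper.
import numpy as np

rng = np.random.default_rng(0)
N = 5
grid = rng.integers(0, 10, size=(N, N))             # pseudo-random matrix of digits
target = tuple(np.argwhere(grid == grid.max())[0])  # "search" for the largest digit

actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]         # saccade up / down / left / right
Q = np.zeros((N, N, len(actions)))
alpha, gamma, eps = 0.1, 0.95, 0.1

for episode in range(2000):
    pos = (rng.integers(N), rng.integers(N))         # random initial fixation
    for step in range(50):
        a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(Q[pos]))
        dr, dc = actions[a]
        nxt = (min(max(pos[0] + dr, 0), N - 1), min(max(pos[1] + dc, 0), N - 1))
        reward = 1.0 if nxt == target else -0.01     # small cost per saccade
        Q[pos][a] += alpha * (reward + gamma * np.max(Q[nxt]) - Q[pos][a])
        pos = nxt
        if pos == target:
            break

print("greedy saccade policy (0=up, 1=down, 2=left, 3=right):")
print(np.argmax(Q, axis=2))
```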
Technical Report: Band selection for nonlinear unmixing of hyperspectral images as a maximal clique problem
Title | Technical Report: Band selection for nonlinear unmixing of hyperspectral images as a maximal clique problem |
Authors | Tales Imbiriba, José Carlos Moreira Bermudez, Cédric Richard |
Abstract | Kernel-based nonlinear mixing models have been applied to unmix spectral information of hyperspectral images when the type of mixing occurring in the scene is too complex or unknown. Such methods, however, usually require the inversion of matrices of sizes equal to the number of spectral bands. Reducing the computational load of these methods remains a challenge in large scale applications. This paper proposes a centralized method for band selection (BS) in the reproducing kernel Hilbert space (RKHS). It is based upon the coherence criterion, which sets the largest value allowed for correlations between the basis kernel functions characterizing the unmixing model. We show that the proposed BS approach is equivalent to solving a maximum clique problem (MCP), that is, searching for the biggest complete subgraph in a graph. Furthermore, we devise a strategy for selecting the coherence threshold and the Gaussian kernel bandwidth using coherence bounds for linearly independent bases. Simulation results illustrate the efficiency of the proposed method. |
Tasks | |
Published | 2016-03-01 |
URL | http://arxiv.org/abs/1603.00437v2 |
http://arxiv.org/pdf/1603.00437v2.pdf | |
PWC | https://paperswithcode.com/paper/technical-report-band-selection-for-nonlinear |
Repo | |
Framework | |
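The graph construction described in the abstract can be sketched in a few lines: connect two bands whenever their Gaussian-kernel coherence stays below a threshold, then keep a largest complete subgraph. The toy data, the bandwidth heuristic and the threshold below are illustrative assumptions and do not follow the selection rules derived in the paper.

```python
# Sketch of band selection as a maximum-clique search: connect two bands when
# the Gaussian-kernel coherence between them stays below a threshold, then
# keep a largest complete subgraph of the resulting graph.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_pixels, n_bands = 200, 30
Y = rng.random((n_pixels, n_bands))                  # toy hyperspectral data, one column per band

# Pairwise squared distances between band columns, and a data-driven bandwidth
# (a placeholder heuristic, not the paper's coherence-bound rule).
d2 = ((Y[:, :, None] - Y[:, None, :]) ** 2).sum(axis=0)
sigma = np.sqrt(np.median(d2[d2 > 0]))
coherence = np.exp(-d2 / (2.0 * sigma ** 2))         # Gaussian-kernel coherence
mu0 = 0.7                                            # illustrative coherence threshold

G = nx.Graph()
G.add_nodes_from(range(n_bands))
for i in range(n_bands):
    for j in range(i + 1, n_bands):
        if coherence[i, j] <= mu0:                   # "incoherent enough" -> compatible bands
            G.add_edge(i, j)

selected = max(nx.find_cliques(G), key=len)          # a maximum clique = selected band subset
print(sorted(selected))
```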
The Coconut Model with Heterogeneous Strategies and Learning
Title | The Coconut Model with Heterogeneous Strategies and Learning |
Authors | Sven Banisch, Eckehard Olbrich |
Abstract | In this paper, we develop an agent-based version of the Diamond search equilibrium model - also called Coconut Model. In this model, agents are faced with production decisions that have to be evaluated based on their expectations about the future utility of the produced entity which in turn depends on the global production level via a trading mechanism. While the original dynamical systems formulation assumes an infinite number of homogeneously adapting agents obeying strong rationality conditions, the agent-based setting allows us to discuss the effects of heterogeneous and adaptive expectations and enables the analysis of non-equilibrium trajectories. Starting from a baseline implementation that matches the asymptotic behavior of the original model, we show how agent heterogeneity can be accounted for in the aggregate dynamical equations. We then show that when agents adapt their strategies by a simple temporal difference learning scheme, the system converges to one of the fixed points of the original system. Systematic simulations reveal that this is the only stable equilibrium solution. |
Tasks | |
Published | 2016-12-01 |
URL | http://arxiv.org/abs/1612.00221v1 |
http://arxiv.org/pdf/1612.00221v1.pdf | |
PWC | https://paperswithcode.com/paper/the-coconut-model-with-heterogeneous |
Repo | |
Framework | |
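A heavily simplified sketch of the adaptive-expectations idea: agents without a coconut climb when their learned value difference exceeds a random cost, holders trade when matched, and every agent updates its state values by TD(0). The matching rule, cost distribution, utility and learning rates below are invented; this is not the paper's calibration or implementation.

```python
# Toy agent-based Diamond/"coconut" economy with per-agent TD(0) value learning.
import numpy as np

rng = np.random.default_rng(1)
n_agents, T = 500, 3000
alpha, gamma, utility, c_max = 0.05, 0.95, 1.0, 1.0

has_coconut = np.zeros(n_agents, dtype=bool)
V = np.zeros((n_agents, 2))                          # value of states {0: empty-handed, 1: holding}

for t in range(T):
    # Production: agents without a coconut draw a climbing cost and climb if worthwhile.
    idle = np.flatnonzero(~has_coconut)
    costs = rng.uniform(0.0, c_max, size=idle.size)
    for i, c in zip(idle, costs):
        climbed = V[i, 1] - V[i, 0] > c
        s_next = 1 if climbed else 0
        r = -c if climbed else 0.0
        V[i, 0] += alpha * (r + gamma * V[i, s_next] - V[i, 0])
        if climbed:
            has_coconut[i] = True

    # Trading: holders are matched in random pairs, consume, and return to the empty state.
    holders = rng.permutation(np.flatnonzero(has_coconut))
    for i, j in zip(holders[0::2], holders[1::2]):
        for k in (i, j):
            V[k, 1] += alpha * (utility + gamma * V[k, 0] - V[k, 1])
            has_coconut[k] = False

print("mean learned value of holding a coconut:", round(float(V[:, 1].mean()), 3))
print("fraction of agents holding a coconut:", round(float(has_coconut.mean()), 3))
```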
Optimal resampling for the noisy OneMax problem
Title | Optimal resampling for the noisy OneMax problem |
Authors | Jialin Liu, Michael Fairbank, Diego Pérez-Liébana, Simon M. Lucas |
Abstract | The OneMax problem is a standard benchmark optimisation problem for a binary search space. Recent work on applying a Bandit-Based Random Mutation Hill-Climbing algorithm to the noisy OneMax Problem showed that it is important to choose a good value for the resampling number to make a careful trade off between taking more samples in order to reduce noise, and taking fewer samples to reduce the total computational cost. This paper extends that observation, by deriving an analytical expression for the running time of the RMHC algorithm with resampling applied to the noisy OneMax problem, and showing both theoretically and empirically that the optimal resampling number increases with the number of dimensions in the search space. |
Tasks | |
Published | 2016-07-22 |
URL | http://arxiv.org/abs/1607.06641v3 |
http://arxiv.org/pdf/1607.06641v3.pdf | |
PWC | https://paperswithcode.com/paper/optimal-resampling-for-the-noisy-onemax |
Repo | |
Framework | |
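The algorithm under analysis is easy to reproduce as a toy: Random Mutation Hill-Climbing on a noisy OneMax fitness, averaging a fixed number of resamples per evaluated point. The noise level, resampling number and evaluation budget below are illustrative, not the optimal values derived in the paper.

```python
# RMHC with resampling on the noisy OneMax problem: each fitness call adds
# Gaussian noise, and the climber averages `resamples` calls per point.
import numpy as np

rng = np.random.default_rng(42)
n_bits, sigma, resamples, budget = 50, 1.0, 5, 20000

def noisy_onemax(x):
    return x.sum() + rng.normal(0.0, sigma)

def averaged_fitness(x, k):
    return np.mean([noisy_onemax(x) for _ in range(k)])

x = rng.integers(0, 2, size=n_bits)
fx = averaged_fitness(x, resamples)
evals = resamples
while evals + resamples <= budget:
    y = x.copy()
    y[rng.integers(n_bits)] ^= 1                     # flip one uniformly chosen bit
    fy = averaged_fitness(y, resamples)
    evals += resamples
    if fy >= fx:                                     # accept ties and improvements
        x, fx = y, fy

print("true OneMax value reached:", int(x.sum()), "of", n_bits)
```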
Landmark-based consonant voicing detection on multilingual corpora
Title | Landmark-based consonant voicing detection on multilingual corpora |
Authors | Xiang Kong, Xuesong Yang, Mark Hasegawa-Johnson, Jeung-Yoon Choi, Stefanie Shattuck-Hufnagel |
Abstract | This paper tests the hypothesis that distinctive feature classifiers anchored at phonetic landmarks can be transferred cross-lingually without loss of accuracy. Three consonant voicing classifiers were developed: (1) manually selected acoustic features anchored at a phonetic landmark, (2) MFCCs (either averaged across the segment or anchored at the landmark), and (3) acoustic features computed using a convolutional neural network (CNN). All detectors are trained on English data (TIMIT), and tested on English, Turkish, and Spanish (performance measured using F1 and accuracy). Experiments demonstrate that manual features outperform all MFCC classifiers, while CNN features outperform both. MFCC-based classifiers suffer an F1 reduction of 16% absolute when generalized from English to other languages. Manual features suffer only a 5% F1 reduction, and CNN features actually perform better in Turkish and Spanish than in the training language, demonstrating that features capable of representing long-term spectral dynamics (CNN and landmark-based features) are able to generalize cross-lingually with little or no loss of accuracy. |
Tasks | |
Published | 2016-11-10 |
URL | http://arxiv.org/abs/1611.03533v1 |
http://arxiv.org/pdf/1611.03533v1.pdf | |
PWC | https://paperswithcode.com/paper/landmark-based-consonant-voicing-detection-on |
Repo | |
Framework | |
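A minimal sketch of the "MFCCs anchored at the landmark" baseline, assuming landmark times given in seconds and using librosa for feature extraction; the file names, window length and landmark times in the usage note are hypothetical placeholders, and any off-the-shelf classifier can stand in for the detectors compared in the paper.

```python
# Cut a short window centred on a phonetic landmark, average its MFCC frames,
# and use the resulting vector as input to a voicing classifier.
import librosa

def landmark_mfcc(wav_path, landmark_sec, half_window_sec=0.05, n_mfcc=13):
    """Return one averaged MFCC vector for a window centred on a landmark."""
    y, sr = librosa.load(wav_path, sr=None)
    lo = max(0, int((landmark_sec - half_window_sec) * sr))
    hi = min(len(y), int((landmark_sec + half_window_sec) * sr))
    mfcc = librosa.feature.mfcc(y=y[lo:hi], sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Hypothetical usage, given TIMIT-style wav files and landmark annotations
# (e.g. feed the vectors to sklearn's LogisticRegression as a stand-in classifier):
# x_voiced   = landmark_mfcc("si1027.wav", 0.42)   # placeholder file and time
# x_unvoiced = landmark_mfcc("si1027.wav", 0.88)   # placeholder file and time
```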
Locating a Small Cluster Privately
Title | Locating a Small Cluster Privately |
Authors | Kobbi Nissim, Uri Stemmer, Salil Vadhan |
Abstract | We present a new algorithm for locating a small cluster of points with differential privacy [Dwork, McSherry, Nissim, and Smith, 2006]. Our algorithm has implications to private data exploration, clustering, and removal of outliers. Furthermore, we use it to significantly relax the requirements of the sample and aggregate technique [Nissim, Raskhodnikova, and Smith, 2007], which allows compiling of “off the shelf” (non-private) analyses into analyses that preserve differential privacy. |
Tasks | |
Published | 2016-04-19 |
URL | http://arxiv.org/abs/1604.05590v2 |
http://arxiv.org/pdf/1604.05590v2.pdf | |
PWC | https://paperswithcode.com/paper/locating-a-small-cluster-privately |
Repo | |
Framework | |
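The paper's algorithm is not reproduced here; as a hedged illustration of the kind of primitive such private procedures are built from, the sketch below releases the number of points inside a candidate ball via the standard Laplace mechanism. Epsilon, the radius and the data are invented.

```python
# Standard Laplace mechanism for a counting query (a differentially private
# building block, not the paper's cluster-location algorithm).
import numpy as np

rng = np.random.default_rng(7)
points = rng.normal(0.0, 1.0, size=(1000, 2))

def private_ball_count(points, center, radius, epsilon):
    """Release the number of points inside a ball with (epsilon, 0)-DP.

    A counting query has sensitivity 1 (adding or removing one record changes
    the count by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = int((np.linalg.norm(points - center, axis=1) <= radius).sum())
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

print(private_ball_count(points, center=np.array([0.0, 0.0]), radius=1.0, epsilon=0.5))
```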
Audio-Visual Speaker Diarization Based on Spatiotemporal Bayesian Fusion
Title | Audio-Visual Speaker Diarization Based on Spatiotemporal Bayesian Fusion |
Authors | Israel D. Gebru, Silèye Ba, Xiaofei Li, Radu Horaud |
Abstract | Speaker diarization consists of assigning speech signals to people engaged in a dialogue. An audio-visual spatiotemporal diarization model is proposed. The model is well suited for challenging scenarios that consist of several participants engaged in multi-party interaction while they move around and turn their heads towards the other participants rather than facing the cameras and the microphones. Multiple-person visual tracking is combined with multiple speech-source localization in order to tackle the speech-to-person association problem. The latter is solved within a novel audio-visual fusion method on the following grounds: binaural spectral features are first extracted from a microphone pair, then a supervised audio-visual alignment technique maps these features onto an image, and finally a semi-supervised clustering method assigns binaural spectral features to visible persons. The main advantage of this method over previous work is that it processes in a principled way speech signals uttered simultaneously by multiple persons. The diarization itself is cast into a latent-variable temporal graphical model that infers speaker identities and speech turns, based on the output of an audio-visual association process, executed at each time slice, and on the dynamics of the diarization variable itself. The proposed formulation yields an efficient exact inference procedure. A novel dataset, which contains audio-visual training data as well as a number of scenarios involving several participants engaged in formal and informal dialogue, is introduced. The proposed method is thoroughly tested and benchmarked with respect to several state-of-the-art diarization algorithms. |
Tasks | Speaker Diarization, Visual Tracking |
Published | 2016-03-31 |
URL | http://arxiv.org/abs/1603.09725v2 |
http://arxiv.org/pdf/1603.09725v2.pdf | |
PWC | https://paperswithcode.com/paper/audio-visual-speaker-diarization-based-on |
Repo | |
Framework | |
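The temporal part of the model is described as exact inference over a latent diarization variable; the sketch below runs a generic forward (filtering) pass over a discrete "active speaker" variable given per-frame audio-visual association likelihoods. All numbers are invented, and the audio-visual association step itself is not modelled; this is a standard chain-model forward pass, not the paper's full formulation.

```python
# Exact filtering for a discrete "active speaker" variable over time, given
# per-frame audio-visual association likelihoods.
import numpy as np

n_speakers, n_frames = 3, 6
stay = 0.8                                           # probability the same person keeps speaking
A = np.full((n_speakers, n_speakers), (1 - stay) / (n_speakers - 1))
np.fill_diagonal(A, stay)

rng = np.random.default_rng(3)
likelihood = rng.random((n_frames, n_speakers))      # stand-in for association scores

alpha = np.full(n_speakers, 1.0 / n_speakers) * likelihood[0]
alpha /= alpha.sum()
for t in range(1, n_frames):
    alpha = likelihood[t] * (A.T @ alpha)            # predict with A, correct with the frame evidence
    alpha /= alpha.sum()                             # normalise to a posterior over speakers

print("posterior over the active speaker at the last frame:", np.round(alpha, 3))
```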
Iteratively Reweighted Least Squares Algorithms for L1-Norm Principal Component Analysis
Title | Iteratively Reweighted Least Squares Algorithms for L1-Norm Principal Component Analysis |
Authors | Young Woong Park, Diego Klabjan |
Abstract | Principal component analysis (PCA) is often used to reduce the dimension of data by selecting a few orthonormal vectors that explain most of the variance structure of the data. L1 PCA uses the L1 norm to measure error, whereas the conventional PCA uses the L2 norm. For the L1 PCA problem minimizing the fitting error of the reconstructed data, we propose an exact reweighted and an approximate algorithm based on iteratively reweighted least squares. We provide convergence analyses, and compare their performance against benchmark algorithms in the literature. The computational experiment shows that the proposed algorithms consistently perform best. |
Tasks | |
Published | 2016-09-10 |
URL | http://arxiv.org/abs/1609.02997v2 |
http://arxiv.org/pdf/1609.02997v2.pdf | |
PWC | https://paperswithcode.com/paper/iteratively-reweighted-least-squares |
Repo | |
Framework | |
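A generic IRLS sketch for the rank-1 case of the L1 fitting problem: reweight each residual by 1/max(|r|, eps) and alternate the closed-form weighted least-squares updates for the two factors. This is the textbook recipe for L1-norm low-rank approximation, not necessarily the paper's exact-reweighted or approximate algorithms; the data and outliers are invented.

```python
# IRLS for rank-1 L1-norm matrix approximation: minimise sum |X - u v^T| by
# alternating weighted least-squares updates with weights 1 / max(|residual|, eps).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 15))
X[rng.integers(40, size=20), rng.integers(15, size=20)] += 10.0   # sparse outliers

u = X[:, 0].copy()
v = np.ones(X.shape[1])
eps = 1e-6
for _ in range(50):
    W = 1.0 / np.maximum(np.abs(X - np.outer(u, v)), eps)         # IRLS weights
    v = (W * X * u[:, None]).sum(axis=0) / (W * u[:, None] ** 2).sum(axis=0)
    u = (W * X * v[None, :]).sum(axis=1) / (W * v[None, :] ** 2).sum(axis=1)

l1_error = np.abs(X - np.outer(u, v)).sum()
print("L1 reconstruction error:", round(float(l1_error), 2))
```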
A comprehensive study of sparse codes on abnormality detection
Title | A comprehensive study of sparse codes on abnormality detection |
Authors | Huamin Ren, Hong Pan, Søren Ingvor Olsen, Thomas B. Moeslund |
Abstract | Sparse representation has been applied successfully in abnormal event detection, in which the baseline is to learn a dictionary accompanied by sparse codes. While much emphasis is put on discriminative dictionary construction, there are no comparative studies of sparse codes regarding abnormality detection. We comprehensively study two types of sparse code solutions - greedy algorithms and convex L1-norm solutions - and their impact on abnormality detection performance. We also propose our framework of combining sparse codes with different detection methods. Our comparative experiments are carried out from various angles to better understand the applicability of sparse codes, including computation time, reconstruction error, sparsity, detection accuracy, and their performance combining various detection methods. Experiments show that combining OMP codes with maximum coordinate detection could achieve state-of-the-art performance on the UCSD dataset [14]. |
Tasks | Anomaly Detection |
Published | 2016-03-13 |
URL | http://arxiv.org/abs/1603.04026v1 |
http://arxiv.org/pdf/1603.04026v1.pdf | |
PWC | https://paperswithcode.com/paper/a-comprehensive-study-of-sparse-codes-on |
Repo | |
Framework | |
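A hedged sketch of the pipeline being compared: learn a dictionary from normal-event features, compute OMP codes for test samples, and score abnormality from the codes. The data and dictionary size are invented, and reconstruction error is used as a simple stand-in for the detection methods studied in the paper.

```python
# Dictionary learning + OMP sparse coding, with a reconstruction-error
# abnormality score (high error => more abnormal).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder

rng = np.random.default_rng(0)
normal_train = rng.normal(0.0, 1.0, size=(500, 64))     # stand-in for normal-event features
test = np.vstack([rng.normal(0.0, 1.0, size=(5, 64)),
                  rng.normal(6.0, 1.0, size=(5, 64))])  # last 5 rows play the "abnormal" role

dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, random_state=0)
D = dico.fit(normal_train).components_                  # learned dictionary atoms

coder = SparseCoder(dictionary=D, transform_algorithm="omp", transform_n_nonzero_coefs=5)
codes = coder.transform(test)
recon_error = np.linalg.norm(test - codes @ D, axis=1)  # abnormality score per sample
print(np.round(recon_error, 2))
```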
A note on adjusting $R^2$ for using with cross-validation
Title | A note on adjusting $R^2$ for using with cross-validation |
Authors | Indre Zliobaite, Nikolaj Tatti |
Abstract | We show how to adjust the coefficient of determination ($R^2$) when used for measuring predictive accuracy via leave-one-out cross-validation. |
Tasks | |
Published | 2016-05-05 |
URL | http://arxiv.org/abs/1605.01703v1 |
http://arxiv.org/pdf/1605.01703v1.pdf | |
PWC | https://paperswithcode.com/paper/a-note-on-adjusting-r2-for-using-with-cross |
Repo | |
Framework | |
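The quantity the note adjusts can be set up in a few lines: R^2 computed from leave-one-out predictions (the PRESS-based predictive R^2). The authors' specific adjustment is derived in the paper and is not reproduced below; the data and model are invented.

```python
# Leave-one-out cross-validated (PRESS-based) R^2 for a linear model.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(scale=1.0, size=60)

y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
press = np.sum((y - y_loo) ** 2)                     # prediction sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)
r2_cv = 1.0 - press / ss_tot
print("leave-one-out (PRESS-based) R^2:", round(float(r2_cv), 3))
```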
A Dynamic Epistemic Framework for Conformant Planning
Title | A Dynamic Epistemic Framework for Conformant Planning |
Authors | Quan Yu, Yanjun Li, Yanjing Wang |
Abstract | In this paper, we introduce a lightweight dynamic epistemic logical framework for automated planning under initial uncertainty. We reduce plan verification and conformant planning to model checking problems of our logic. We show that the model checking problem of the iteration-free fragment is PSPACE-complete. By using two non-standard (but equivalent) semantics, we give novel model checking algorithms to the full language and the iteration-free language. |
Tasks | |
Published | 2016-06-24 |
URL | http://arxiv.org/abs/1606.07528v1 |
http://arxiv.org/pdf/1606.07528v1.pdf | |
PWC | https://paperswithcode.com/paper/a-dynamic-epistemic-framework-for-conformant |
Repo | |
Framework | |
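The paper's contribution is the logical framework itself; as an executable illustration of conformant planning as a search problem, the toy below does breadth-first search over belief states (sets of possible states) until the goal holds in every state consistent with the initial uncertainty. The domain (a row of cells with an unknown start position and moves that clamp at the walls) is invented.

```python
# Toy conformant planner: BFS over belief states, returning a plan that
# reaches the goal no matter which initial state is the real one.
from collections import deque

N_CELLS = 4
ACTIONS = {
    "right": lambda s: min(s + 1, N_CELLS - 1),
    "left": lambda s: max(s - 1, 0),
}
GOAL = N_CELLS - 1                                   # be in the rightmost cell for sure

def conformant_plan(initial_belief):
    start = frozenset(initial_belief)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        belief, plan = queue.popleft()
        if all(s == GOAL for s in belief):           # goal holds in every possible state
            return plan
        for name, act in ACTIONS.items():
            nxt = frozenset(act(s) for s in belief)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, plan + [name]))
    return None

# Unknown start: the agent may be in any cell. Moving right three times works
# regardless, because the move clamps at the wall.
print(conformant_plan(range(N_CELLS)))               # ['right', 'right', 'right']
```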
True Lies
Title | True Lies |
Authors | Thomas Ågotnes, Hans van Ditmarsch, Yanjing Wang |
Abstract | A true lie is a lie that becomes true when announced. In a logic of announcements, where the announcing agent is not modelled, a true lie is a formula (that is false and) that becomes true when announced. We investigate true lies and other types of interaction between announced formulas, their preconditions and their postconditions, in the setting of Gerbrandy’s logic of believed announcements, wherein agents may have or obtain incorrect beliefs. Our results are on the satisfiability and validity of instantiations of these semantically defined categories, on iterated announcements, including arbitrarily often iterated announcements, and on syntactic characterization. We close with results for iterated announcements in the logic of knowledge (instead of belief), and for lying as private announcements (instead of public announcements) to different agents. Detailed examples illustrate our lying concepts. |
Tasks | |
Published | 2016-06-27 |
URL | http://arxiv.org/abs/1606.08333v2 |
http://arxiv.org/pdf/1606.08333v2.pdf | |
PWC | https://paperswithcode.com/paper/true-lies |
Repo | |
Framework | |
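The core definitions can be made concrete in a toy model: a pointed Kripke model, a Gerbrandy-style believed announcement that redirects belief arrows to worlds where the announced formula held (deleting no worlds), and a "true lie" test (false before the announcement, true after it). The two-world model below is invented; note that the particular true lie it finds works because the agent's beliefs become inconsistent after the announcement.

```python
# Minimal pointed Kripke model with believed announcements and a true-lie check.
WORLDS = {"w0", "w1"}
VAL = {"w0": set(), "w1": {"p"}}                     # p is false at the actual world w0
R = {("w0", "w0"), ("w0", "w1"), ("w1", "w0"), ("w1", "w1")}  # agent a is unsure about p

def holds(formula, world, rel):
    kind = formula[0]
    if kind == "atom":
        return formula[1] in VAL[world]
    if kind == "not":
        return not holds(formula[1], world, rel)
    if kind == "B":                                  # B_a psi: psi holds in every accessible world
        return all(holds(formula[1], v, rel) for (u, v) in rel if u == world)

def believed_announcement(formula, rel):
    # Keep all worlds; keep only the arrows pointing to worlds where formula held.
    return {(u, v) for (u, v) in rel if holds(formula, v, rel)}

def is_true_lie(formula, actual="w0"):
    before = holds(formula, actual, R)
    after = holds(formula, actual, believed_announcement(formula, R))
    return (not before) and after

p = ("atom", "p")
Bp = ("B", p)

# Announcing the plain lie p makes a believe p, but p itself stays false:
print(is_true_lie(p))                                # False
# "a believes p" is false beforehand, yet true after its own announcement
# (the agent's accessibility becomes empty, i.e. the beliefs turn inconsistent):
print(is_true_lie(Bp))                               # True
```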
Cox process representation and inference for stochastic reaction-diffusion processes
Title | Cox process representation and inference for stochastic reaction-diffusion processes |
Authors | David Schnoerr, Ramon Grima, Guido Sanguinetti |
Abstract | Complex behaviour in many systems arises from the stochastic interactions of spatially distributed particles or agents. Stochastic reaction-diffusion processes are widely used to model such behaviour in disciplines ranging from biology to the social sciences, yet they are notoriously difficult to simulate and calibrate to observational data. Here we use ideas from statistical physics and machine learning to provide a solution to the inverse problem of learning a stochastic reaction-diffusion process from data. Our solution relies on a non-trivial connection between stochastic reaction-diffusion processes and spatio-temporal Cox processes, a well-studied class of models from computational statistics. This connection leads to an efficient and flexible algorithm for parameter inference and model selection. Our approach shows excellent accuracy on numeric and real data examples from systems biology and epidemiology. Our work provides both insights into spatio-temporal stochastic systems, and a practical solution to a long-standing problem in computational modelling. |
Tasks | Epidemiology, Model Selection |
Published | 2016-01-08 |
URL | http://arxiv.org/abs/1601.01972v2 |
http://arxiv.org/pdf/1601.01972v2.pdf | |
PWC | https://paperswithcode.com/paper/cox-process-representation-and-inference-for |
Repo | |
Framework | |
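On the Cox-process side of the paper's connection, a standard building block is simulating an inhomogeneous Poisson process by thinning; in a Cox process the intensity would itself be random (in the paper it comes from the latent reaction-diffusion state). The fixed sinusoidal intensity below is invented for illustration.

```python
# Simulate an inhomogeneous Poisson process on [0, T] by thinning.
import numpy as np

rng = np.random.default_rng(0)
T = 10.0

def intensity(t):
    return 5.0 + 4.0 * np.sin(2.0 * np.pi * t / T)   # events per unit time (illustrative)

lam_max = 9.0                                        # upper bound on the intensity
n_candidates = rng.poisson(lam_max * T)
candidates = rng.uniform(0.0, T, size=n_candidates)
keep = rng.uniform(0.0, lam_max, size=n_candidates) < intensity(candidates)
events = np.sort(candidates[keep])

print(f"{events.size} events; first few: {np.round(events[:5], 2)}")
```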
A Mathematical Trust Algebra for International Nation Relations Computation and Evaluation
Title | A Mathematical Trust Algebra for International Nation Relations Computation and Evaluation |
Authors | Mohd Anuar Mat Isa, Ramlan Mahmod, Nur Izura Udzir, Jamalul-lail Ab Manan, Ali Dehghan Tanha |
Abstract | This paper presents a trust computation for international relations and its calculus, which is related to Bayesian inference, Dempster-Shafer theory and subjective logic. We propose a method that enables a trust computation that was previously subjective and incomputable. An example case study for the trust computation is United States of America and Great Britain relations. The method supports decision makers in a government, such as the foreign ministry, defense ministry, or presidential or prime minister's office. The Department of Defense (DoD) may use our method to determine whether a nation should be regarded as friendly, neutral or hostile. |
Tasks | Bayesian Inference |
Published | 2016-02-13 |
URL | http://arxiv.org/abs/1604.00980v1 |
http://arxiv.org/pdf/1604.00980v1.pdf | |
PWC | https://paperswithcode.com/paper/a-mathematical-trust-algebra-for |
Repo | |
Framework | |
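The calculi named in the abstract have standard operators; as one hedged example, the sketch below implements the cumulative fusion operator of subjective logic for two opinions about the same statement, assuming a shared base rate. This is a generic subjective-logic operator with invented numbers, not the paper's specific trust algebra.

```python
# Cumulative fusion of two subjective-logic opinions (belief, disbelief,
# uncertainty) about the same statement, assuming a common base rate.
def cumulative_fusion(op_a, op_b):
    b1, d1, u1 = op_a
    b2, d2, u2 = op_b
    k = u1 + u2 - u1 * u2                            # assumes not both uncertainties are zero
    return ((b1 * u2 + b2 * u1) / k,
            (d1 * u2 + d2 * u1) / k,
            (u1 * u2) / k)

# Two sources' opinions on "nation X is friendly": (belief, disbelief, uncertainty).
source_1 = (0.6, 0.1, 0.3)
source_2 = (0.4, 0.2, 0.4)
fused = cumulative_fusion(source_1, source_2)
print("fused opinion (b, d, u):", tuple(round(x, 3) for x in fused))

# Expected probability under a base rate of 0.5: b + a * u.
b, d, u = fused
print("expected probability:", round(b + 0.5 * u, 3))
```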
Bootstrapping Distantly Supervised IE using Joint Learning and Small Well-structured Corpora
Title | Bootstrapping Distantly Supervised IE using Joint Learning and Small Well-structured Corpora |
Authors | Lidong Bing, Bhuwan Dhingra, Kathryn Mazaitis, Jong Hyuk Park, William W. Cohen |
Abstract | We propose a framework to improve performance of distantly-supervised relation extraction, by jointly learning to solve two related tasks: concept-instance extraction and relation extraction. We combine this with a novel use of document structure: in some small, well-structured corpora, sections can be identified that correspond to relation arguments, and distantly-labeled examples from such sections tend to have good precision. Using these as seeds we extract additional relation examples by applying label propagation on a graph composed of noisy examples extracted from a large unstructured testing corpus. Combined with the soft constraint that concept examples should have the same type as the second argument of the relation, we get significant improvements over several state-of-the-art approaches to distantly-supervised relation extraction. |
Tasks | Relation Extraction |
Published | 2016-06-10 |
URL | http://arxiv.org/abs/1606.03398v2 |
http://arxiv.org/pdf/1606.03398v2.pdf | |
PWC | https://paperswithcode.com/paper/bootstrapping-distantly-supervised-ie-using |
Repo | |
Framework | |
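The label-propagation step can be sketched with a semi-supervised graph method: a few high-precision seed examples spread their relation labels to unlabelled candidates through feature similarity. The features, seed counts and kNN settings below are invented stand-ins for the paper's corpora and graph construction.

```python
# Label propagation from a few distantly labelled seeds to unlabelled examples.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)
# Stand-in feature vectors: two loose clusters of candidate relation mentions.
X = np.vstack([rng.normal(0.0, 1.0, size=(50, 8)),
               rng.normal(3.0, 1.0, size=(50, 8))])

labels = np.full(100, -1)                            # -1 marks unlabelled examples
labels[:3] = 0                                       # a few high-precision seeds for one relation
labels[50:53] = 1                                    # a few seeds for another relation

model = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, labels)
print("propagated label counts:", np.bincount(model.transduction_))
```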