Paper Group ANR 86
Understanding Error Correction and its Role as Part of the Communication Channel in Environments composed of Self-Integrating Systems. Stochastic inference with spiking neurons in the high-conductance state. VideoMCC: a New Benchmark for Video Comprehension. Phased Exploration with Greedy Exploitation in Stochastic Combinatorial Partial Monitoring …
Understanding Error Correction and its Role as Part of the Communication Channel in Environments composed of Self-Integrating Systems
Title | Understanding Error Correction and its Role as Part of the Communication Channel in Environments composed of Self-Integrating Systems |
Authors | Aleksander Lodwich |
Abstract | The rise in complexity of technical systems also raises the knowledge required to set them up and maintain them. The cost to evolve such systems can be prohibitive. In the field of Autonomic Computing, technical systems should therefore have various self-healing capabilities allowing system owners to provide only partial, potentially inconsistent updates of the system. The self-healing or self-integrating system shall work out the remaining changes to communications and functionalities in order to accommodate change and yet restore function. This issue becomes even more interesting in the context of the Internet of Things and the Industrial Internet, where previously unexpected device combinations can be assembled in order to provide a surprising new function. In order to pursue higher levels of self-integration capability, I propose to think of self-integration as sophisticated error-correcting communication. Therefore, this paper discusses an extended scope of error correction in order to emphasize error correction’s role as an integrated element of bi-directional communication channels in self-integrating, autonomic communication scenarios. |
Tasks | |
Published | 2016-12-21 |
URL | http://arxiv.org/abs/1612.07294v1 |
http://arxiv.org/pdf/1612.07294v1.pdf | |
PWC | https://paperswithcode.com/paper/understanding-error-correction-and-its-role |
Repo | |
Framework | |
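The abstract frames self-integration as a generalization of error-correcting communication. As a point of reference for that framing, here is a minimal sketch of the classical baseline it generalizes from, a 3x repetition code with majority-vote decoding; all names and values are illustrative and nothing here is taken from the paper:

```python
def encode(bits):
    """Repeat each bit three times (rate-1/3 repetition code)."""
    return [b for b in bits for _ in range(3)]

def decode(received):
    """Majority-vote each triple, correcting any single bit flip per triple."""
    out = []
    for i in range(0, len(received), 3):
        triple = received[i:i + 3]
        out.append(1 if sum(triple) >= 2 else 0)
    return out

msg = [1, 0, 1, 1]
codeword = encode(msg)
codeword[4] ^= 1              # the channel flips one bit
assert decode(codeword) == msg
```

The channel's redundancy is what lets the receiver restore the intended message from a partial, corrupted view — the property the paper argues a self-integrating system should exhibit at a much higher semantic level.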
Stochastic inference with spiking neurons in the high-conductance state
Title | Stochastic inference with spiking neurons in the high-conductance state |
Authors | Mihai A. Petrovici, Johannes Bill, Ilja Bytschok, Johannes Schemmel, Karlheinz Meier |
Abstract | The highly variable dynamics of neocortical circuits observed in vivo have been hypothesized to represent a signature of ongoing stochastic inference but stand in apparent contrast to the deterministic response of neurons measured in vitro. Based on a propagation of the membrane autocorrelation across spike bursts, we provide an analytical derivation of the neural activation function that holds for a large parameter space, including the high-conductance state. On this basis, we show how an ensemble of leaky integrate-and-fire neurons with conductance-based synapses embedded in a spiking environment can attain the correct firing statistics for sampling from a well-defined target distribution. For recurrent networks, we examine convergence toward stationarity in computer simulations and demonstrate sample-based Bayesian inference in a mixed graphical model. This points to a new computational role of high-conductance states and establishes a rigorous link between deterministic neuron models and functional stochastic dynamics on the network level. |
Tasks | Bayesian Inference |
Published | 2016-10-23 |
URL | http://arxiv.org/abs/1610.07161v1 |
http://arxiv.org/pdf/1610.07161v1.pdf | |
PWC | https://paperswithcode.com/paper/stochastic-inference-with-spiking-neurons-in |
Repo | |
Framework | |
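The sampling the abstract refers to can be illustrated, in a very reduced form, by logistic (Boltzmann-machine-style) Gibbs sampling — the target firing statistics that the LIF ensemble is shown to attain. The coupling and bias values below are hypothetical, and none of the paper's membrane-level derivation is modeled:

```python
import math
import random

random.seed(0)

W = [[0.0, 0.8], [0.8, 0.0]]   # symmetric coupling, hypothetical values
b = [-0.4, -0.4]               # biases

def gibbs_step(z):
    """Resample each binary unit from its logistic conditional."""
    for k in range(len(z)):
        u = b[k] + sum(W[k][j] * z[j] for j in range(len(z)))
        z[k] = 1 if random.random() < 1.0 / (1.0 + math.exp(-u)) else 0
    return z

z = [0, 0]
counts = {}
for _ in range(20000):
    z = gibbs_step(z)
    counts[tuple(z)] = counts.get(tuple(z), 0) + 1

# with positive coupling, the aligned state (1, 1) is visited more often
# than the misaligned state (1, 0)
assert counts[(1, 1)] > counts[(1, 0)]
```

In the paper the role of this abstract sampler is played by recurrently connected LIF neurons whose activation function, in the high-conductance state, approximates the logistic used here.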
VideoMCC: a New Benchmark for Video Comprehension
Title | VideoMCC: a New Benchmark for Video Comprehension |
Authors | Du Tran, Maksim Bolonkin, Manohar Paluri, Lorenzo Torresani |
Abstract | While there is overall agreement that future technology for organizing, browsing and searching videos hinges on the development of methods for high-level semantic understanding of video, so far no consensus has been reached on the best way to train and assess models for this task. Casting video understanding as a form of action or event categorization is problematic as it is not fully clear what the semantic classes or abstractions in this domain should be. Language has been exploited to sidestep the problem of defining video categories, by formulating video understanding as the task of captioning or description. However, language is highly complex, redundant and sometimes ambiguous. Many different captions may express the same semantic concept. To account for this ambiguity, quantitative evaluation of video description requires sophisticated metrics, whose performance scores are typically hard to interpret by humans. This paper provides four contributions to this problem. First, we formulate Video Multiple Choice Caption (VideoMCC) as a new well-defined task with an easy-to-interpret performance measure. Second, we describe a general semi-automatic procedure to create benchmarks for this task. Third, we publicly release a large-scale video benchmark created with an implementation of this procedure and we include a human study that assesses human performance on our dataset. Finally, we propose and test a varied collection of approaches on this benchmark for the purpose of gaining a better understanding of the new challenges posed by video comprehension. |
Tasks | Video Description, Video Understanding |
Published | 2016-06-23 |
URL | http://arxiv.org/abs/1606.07373v5 |
http://arxiv.org/pdf/1606.07373v5.pdf | |
PWC | https://paperswithcode.com/paper/videomcc-a-new-benchmark-for-video |
Repo | |
Framework | |
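The "easy-to-interpret performance measure" of a multiple-choice caption task reduces to plain accuracy over questions. A minimal sketch — the data layout is an assumption for illustration, not the benchmark's actual code:

```python
def mcc_accuracy(predictions, answers):
    """Fraction of videos for which the chosen option index is correct.

    predictions[i] is the option the model picked for video i;
    answers[i] is the index of the ground-truth caption among the choices.
    """
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

assert mcc_accuracy([0, 2, 1, 3], [0, 2, 2, 3]) == 0.75
```

Unlike caption-similarity metrics, a score of 0.75 here has an immediate reading: three out of four questions answered correctly, directly comparable to the human performance the paper measures.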
Phased Exploration with Greedy Exploitation in Stochastic Combinatorial Partial Monitoring Games
Title | Phased Exploration with Greedy Exploitation in Stochastic Combinatorial Partial Monitoring Games |
Authors | Sougata Chaudhuri, Ambuj Tewari |
Abstract | Partial monitoring games are repeated games where the learner receives feedback that might be different from the adversary’s move or even the reward gained by the learner. Recently, a general model of combinatorial partial monitoring (CPM) games was proposed \cite{lincombinatorial2014}, where the learner’s action space can be exponentially large and the adversary samples its moves from a bounded, continuous space, according to a fixed distribution. That paper gave a confidence-bound-based algorithm (GCB) that achieves $O(T^{2/3}\log T)$ distribution-independent and $O(\log T)$ distribution-dependent regret bounds. The implementation of their algorithm depends on two separate offline oracles, and the distribution-dependent regret additionally requires the existence of a unique optimal action for the learner. Adopting their CPM model, our first contribution is a Phased Exploration with Greedy Exploitation (PEGE) algorithmic framework for the problem. Different algorithms within the framework achieve $O(T^{2/3}\sqrt{\log T})$ distribution-independent and $O(\log^2 T)$ distribution-dependent regret, respectively. Crucially, our framework needs only the simpler “argmax” oracle from GCB, and the distribution-dependent regret does not require the existence of a unique optimal action. Our second contribution is another algorithm, PEGE2, which combines gap estimation with a PEGE algorithm to achieve an $O(\log T)$ regret bound, matching the GCB guarantee but removing the dependence on the size of the learner’s action space. However, like GCB, PEGE2 requires access to both offline oracles and the existence of a unique optimal action. Finally, we discuss how our algorithm can be efficiently applied to a CPM problem of practical interest: namely, online ranking with feedback at the top. |
Tasks | |
Published | 2016-08-23 |
URL | http://arxiv.org/abs/1608.06403v1 |
http://arxiv.org/pdf/1608.06403v1.pdf | |
PWC | https://paperswithcode.com/paper/phased-exploration-with-greedy-exploitation |
Repo | |
Framework | |
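The phase structure behind PEGE can be sketched on a plain two-armed bandit: explore all actions uniformly for a growing number of rounds per phase, then exploit the empirical argmax for an exponentially longer block. Arm means and schedule constants below are illustrative, and the paper's CPM setting (exponentially large action spaces, offline oracles) is far more general:

```python
import random

random.seed(1)

true_means = [0.4, 0.6]          # hypothetical Bernoulli arm means

def pull(arm):
    return 1.0 if random.random() < true_means[arm] else 0.0

counts = [0, 0]                  # exploration pulls per arm
sums = [0.0, 0.0]                # exploration rewards per arm
greedy_pulls = [0, 0]            # exploitation pulls per arm

for phase in range(1, 30):
    # exploration: every action is tried `phase` times
    for _ in range(phase):
        for arm in range(2):
            sums[arm] += pull(arm)
            counts[arm] += 1
    # exploitation: the empirical argmax is played greedily for an
    # exponentially longer block (capped here to keep the loop short)
    best = max(range(2), key=lambda a: sums[a] / counts[a])
    for _ in range(2 ** min(phase, 10)):
        greedy_pulls[best] += 1

# after enough phases the true best arm dominates the exploitation pulls
assert greedy_pulls[1] > greedy_pulls[0]
```

The point of the schedule is that exploration cost grows only polynomially per phase while exploitation blocks grow geometrically, which is how the framework trades regret between the distribution-independent and distribution-dependent regimes.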
Energy-Efficient ConvNets Through Approximate Computing
Title | Energy-Efficient ConvNets Through Approximate Computing |
Authors | Bert Moons, Bert De Brabandere, Luc Van Gool, Marian Verhelst |
Abstract | Recently, ConvNets, or convolutional neural networks (CNNs), have emerged as state-of-the-art classification and detection algorithms, achieving near-human performance in visual detection. However, ConvNet algorithms are typically very computation- and memory-intensive. In order to embed ConvNet-based classification into wearable platforms and embedded systems such as smartphones or ubiquitous electronics for the Internet of Things, their energy consumption should be reduced drastically. This paper proposes methods based on approximate computing to reduce energy consumption in state-of-the-art ConvNet accelerators. By combining techniques at both the system and circuit level, we can save energy in the system’s arithmetic: up to 30x without losing classification accuracy and more than 100x at 99% classification accuracy, compared to the commonly used 16-bit fixed-point number format. |
Tasks | |
Published | 2016-03-22 |
URL | http://arxiv.org/abs/1603.06777v1 |
http://arxiv.org/pdf/1603.06777v1.pdf | |
PWC | https://paperswithcode.com/paper/energy-efficient-convnets-through-approximate |
Repo | |
Framework | |
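The arithmetic-level idea — trading precision for energy by computing in a reduced fixed-point format — can be sketched as simple quantization. The Q-formats and weight value here are illustrative; the paper's circuit-level techniques are not modeled:

```python
def to_fixed(x, frac_bits):
    """Round x to the nearest multiple of 2**-frac_bits (signed, no saturation)."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

w = 0.387                                  # hypothetical weight value

# 8 fractional bits: representable as 99/256
assert to_fixed(w, 8) == 0.38671875

# 4 fractional bits: coarser, but the error stays within half an LSB
assert abs(to_fixed(w, 4) - w) <= 2 ** -5
```

Fewer fractional bits shrink multipliers and memory traffic, which is where the energy savings come from; the engineering question the paper studies is how few bits each layer can tolerate before classification accuracy drops.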
An Efficient Algorithm for the Piecewise-Smooth Model with Approximately Explicit Solutions
Title | An Efficient Algorithm for the Piecewise-Smooth Model with Approximately Explicit Solutions |
Authors | Huihui Song, Yuhui Zheng, Kaihua Zhang |
Abstract | This paper presents an efficient approach to image segmentation that approximates the piecewise-smooth (PS) functional in [12] with explicit solutions. By imposing some reasonable constraints on the initial conditions and the final solutions of the PS functional, we propose two novel formulations that can be approximated by explicit solutions of the evolution partial differential equations (PDEs) of the PS model, so that only one PDE needs to be solved efficiently. Furthermore, an energy term that regularizes the level set function to be a signed distance function is incorporated into our evolution formulation, and the time-consuming re-initialization is avoided. Experiments on synthetic and real images show that our method is more efficient than both the PS model and the local binary fitting (LBF) model [4], while having segmentation accuracy similar to the LBF model. |
Tasks | Semantic Segmentation |
Published | 2016-12-08 |
URL | http://arxiv.org/abs/1612.02521v1 |
http://arxiv.org/pdf/1612.02521v1.pdf | |
PWC | https://paperswithcode.com/paper/an-efficient-algorithm-for-the-piecewise |
Repo | |
Framework | |
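The re-initialization-free regularizer mentioned in the abstract exploits the fact that a signed distance function satisfies |∇φ| = 1, so deviations from unit gradient can be penalized directly during evolution. A 1D finite-difference sketch of that penalty (the grid and test functions are illustrative, not the paper's formulation):

```python
def grad_norm_penalty(phi, h):
    """Sum of (|phi'| - 1)^2 over interior points, central differences."""
    total = 0.0
    for i in range(1, len(phi) - 1):
        g = abs((phi[i + 1] - phi[i - 1]) / (2 * h))
        total += (g - 1.0) ** 2
    return total

h = 0.1
xs = [i * h for i in range(11)]
sdf = [x - 0.5 for x in xs]          # true signed distance to the point x = 0.5
steep = [3 * (x - 0.5) for x in xs]  # same zero level set, wrong slope

assert grad_norm_penalty(sdf, h) < 1e-9   # signed distance: penalty ~ 0
assert grad_norm_penalty(steep, h) > 1.0  # steepened function is penalized
```

Both functions have the same zero level set (the segmentation contour), but only the signed distance function incurs no penalty — which is why adding this term keeps the evolution well-behaved without periodically re-initializing φ.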
Topological descriptors for 3D surface analysis
Title | Topological descriptors for 3D surface analysis |
Authors | Matthias Zeppelzauer, Bartosz Zieliński, Mateusz Juda, Markus Seidl |
Abstract | We investigate topological descriptors for 3D surface analysis, i.e. the classification of surfaces according to their geometric fine structure. On a dataset of high-resolution 3D surface reconstructions we compute persistence diagrams for a 2D cubical filtration. In the next step we investigate different topological descriptors and measure their ability to discriminate structurally different 3D surface patches. We evaluate their sensitivity to different parameters and compare the performance of the resulting topological descriptors to alternative (non-topological) descriptors. We present a comprehensive evaluation that shows that topological descriptors are (i) robust, (ii) yield state-of-the-art performance for the task of 3D surface analysis and (iii) improve classification performance when combined with non-topological descriptors. |
Tasks | |
Published | 2016-01-22 |
URL | http://arxiv.org/abs/1601.06057v1 |
http://arxiv.org/pdf/1601.06057v1.pdf | |
PWC | https://paperswithcode.com/paper/topological-descriptors-for-3d-surface |
Repo | |
Framework | |
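Persistence diagrams for the 2D cubical filtration used in the paper require a full cubical complex; as a reduced illustration of the same idea, here is 0-dimensional sublevel-set persistence of a 1D function via union-find, where merging two components kills the younger one (the "elder rule"):

```python
def persistence_pairs_1d(f):
    """(birth, death) pairs of 0-dim sublevel-set persistence of a 1D function.

    Assumes distinct values; the globally oldest component never dies and is
    omitted. Trivial zero-persistence pairs are filtered out.
    """
    n = len(f)
    order = sorted(range(n), key=lambda i: f[i])
    parent = [-1] * n                 # -1 marks a not-yet-added point

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    birth = {}
    pairs = []
    for i in order:
        parent[i] = i
        birth[i] = f[i]
        for j in (i - 1, i + 1):
            if 0 <= j < n and parent[j] != -1:
                ri, rj = find(i), find(j)
                if ri != rj:
                    # the component with the higher (younger) birth dies here
                    young, old = (ri, rj) if birth[ri] >= birth[rj] else (rj, ri)
                    if birth[young] < f[i]:
                        pairs.append((birth[young], f[i]))
                    parent[young] = old
    return pairs

# two minima (0 and 1); the basin born at 1 merges into the deeper one at 2
assert persistence_pairs_1d([0, 2, 1, 3]) == [(1, 2)]
```

The resulting (birth, death) pairs are exactly the kind of diagram the paper vectorizes into topological descriptors; long-lived pairs capture coarse structure, short-lived ones the geometric fine structure of the surface patch.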
A Data-Driven Compressive Sensing Framework Tailored For Energy-Efficient Wearable Sensing
Title | A Data-Driven Compressive Sensing Framework Tailored For Energy-Efficient Wearable Sensing |
Authors | Kai Xu, Yixing Li, Fengbo Ren |
Abstract | Compressive sensing (CS) is a promising technology for realizing energy-efficient wireless sensors for long-term health monitoring. However, conventional model-driven CS frameworks suffer from limited compression ratio and reconstruction quality when dealing with physiological signals, due to inaccurate models and the neglect of individual variability. In this paper, we propose a data-driven CS framework that can learn signal characteristics and personalized features from any individual recording of physiologic signals to enhance CS performance with a minimized number of measurements. Such improvements are accomplished by a co-training approach that optimizes the sensing matrix and the dictionary towards improved restricted isometry property and signal sparsity, respectively. Experimental results on ECG signals show that the proposed method, at a compression ratio of 10x, successfully reduces the isometry constant of the trained sensing matrices by 86% against random matrices and improves the overall reconstructed signal-to-noise ratio by 15 dB over conventional model-driven approaches. |
Tasks | Compressive Sensing |
Published | 2016-12-15 |
URL | http://arxiv.org/abs/1612.04887v2 |
http://arxiv.org/pdf/1612.04887v2.pdf | |
PWC | https://paperswithcode.com/paper/a-data-driven-compressive-sensing-framework |
Repo | |
Framework | |
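The front end of any CS pipeline — a random sensing matrix compressing a sparse signal, followed here by a single matched-filter (one-step OMP) support estimate — can be sketched as below. Sizes and the 1-sparse signal are hypothetical, and the paper's actual contribution, jointly *training* the sensing matrix and dictionary, is not reproduced:

```python
import random

random.seed(7)

n, m = 64, 32                     # 2x compression, illustrative sizes
phi = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]

x = [0.0] * n
x[20] = 5.0                       # 1-sparse signal, nonzero at index 20

# compressed measurements: y = phi @ x
y = [sum(phi[i][j] * x[j] for j in range(n)) for i in range(m)]

# matched filter: correlate y with every column of phi to locate the support
scores = [abs(sum(phi[i][j] * y[i] for i in range(m))) for j in range(n)]
recovered_support = max(range(n), key=lambda j: scores[j])
assert recovered_support == 20
```

Recovery succeeds because the random Gaussian columns are nearly orthogonal (good restricted isometry); the paper's co-training pushes the isometry constant lower still, which is what buys its higher compression ratios on ECG data.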
Complex systems: features, similarity and connectivity
Title | Complex systems: features, similarity and connectivity |
Authors | Cesar H. Comin, Thomas K. DM. Peron, Filipi N. Silva, Diego R. Amancio, Francisco A. Rodrigues, Luciano da F. Costa |
Abstract | The increasing interest in complex networks research has been a consequence of several intrinsic features of this area, such as the generality of the approach to represent and model virtually any discrete system, and the incorporation of concepts and methods deriving from many areas, from statistical physics to sociology, which are often used in an independent way. Yet, for this same reason, it would be desirable to integrate these various aspects into a more coherent and organic framework, which would bring several of the benefits normally allowed by systematization in science, including the identification of new types of problems and cross-fertilization between fields. More specifically, identifying the main areas to which the concepts frequently used in complex networks can be applied paves the way to adopting and applying a larger set of concepts and methods deriving from those respective areas. Among the several areas that have been used in complex networks research, pattern recognition, optimization, linear algebra, and time series analysis seem to play a more basic and recurrent role. In the present manuscript, we propose a systematic way to integrate the concepts from these diverse areas regarding complex networks research. In order to do so, we start by grouping the multidisciplinary concepts into three main groups, namely features, similarity, and network connectivity. Then we show that several of the analysis and modeling approaches to complex networks can be thought of as a composition of maps between these three groups, with emphasis on nine main types of mappings, which are presented and illustrated. Such a systematization of principles and approaches also provides an opportunity to review some of the most closely related works in the literature, which is also developed in this article. |
Tasks | Time Series, Time Series Analysis |
Published | 2016-06-17 |
URL | http://arxiv.org/abs/1606.05400v1 |
http://arxiv.org/pdf/1606.05400v1.pdf | |
PWC | https://paperswithcode.com/paper/complex-systems-features-similarity-and |
Repo | |
Framework | |
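One of the mappings the survey organizes — features → similarity → connectivity — can be sketched end to end: measure feature vectors, compute a kernel similarity, and threshold it into a network. The feature values and threshold below are illustrative choices:

```python
import math

# hypothetical feature vectors for three items
features = {
    "a": [1.0, 0.0],
    "b": [0.9, 0.1],
    "c": [0.0, 1.0],
}

def similarity(u, v):
    """Gaussian kernel on squared Euclidean distance."""
    d2 = sum((x - y) ** 2 for x, y in zip(u, v))
    return math.exp(-d2)

# connectivity: keep an edge wherever similarity exceeds a threshold
edges = {
    (i, j)
    for i in features for j in features
    if i < j and similarity(features[i], features[j]) > 0.5
}
assert edges == {("a", "b")}
```

Each design decision — which features, which kernel, which threshold — is an instance of the maps between the three groups that the manuscript makes explicit, and changing any of them changes the resulting network.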
Indicators of Good Student Performance in Moodle Activity Data
Title | Indicators of Good Student Performance in Moodle Activity Data |
Authors | Ewa Młynarska, Derek Greene, Pádraig Cunningham |
Abstract | In this paper we conduct an analysis of Moodle activity data focused on identifying early predictors of good student performance. The analysis shows that three relevant hypotheses are largely supported by the data. These hypotheses are: early submission is a good sign, a high level of activity is predictive of good results, and evening activity is even better than daytime activity. We highlight some pathological examples where high levels of activity correlate with bad results. |
Tasks | |
Published | 2016-01-12 |
URL | http://arxiv.org/abs/1601.02975v1 |
http://arxiv.org/pdf/1601.02975v1.pdf | |
PWC | https://paperswithcode.com/paper/indicators-of-good-student-performance-in |
Repo | |
Framework | |
Generating Images Part by Part with Composite Generative Adversarial Networks
Title | Generating Images Part by Part with Composite Generative Adversarial Networks |
Authors | Hanock Kwak, Byoung-Tak Zhang |
Abstract | Image generation remains a fundamental problem in artificial intelligence in general and in deep learning in particular. The generative adversarial network (GAN) was successful in generating high-quality samples of natural images. We propose a model called the composite generative adversarial network, which reveals the complex structure of images with multiple generators, each of which generates some part of the image. Those parts are combined by an alpha-blending process to create a new single image. It can generate, for example, background and face sequentially with two generators, after training on a face dataset. Training was done in an unsupervised way, without any labels about what each generator should generate. Empirically, we found that this generative model can learn such structure. |
Tasks | Image Generation |
Published | 2016-07-19 |
URL | http://arxiv.org/abs/1607.05387v2 |
http://arxiv.org/pdf/1607.05387v2.pdf | |
PWC | https://paperswithcode.com/paper/generating-images-part-by-part-with-composite |
Repo | |
Framework | |
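The compositing step described in the abstract is per-pixel alpha blending of the generators' outputs. A sketch with stubbed constant "images" — shapes, values, and the mask are illustrative, not the authors' trained model:

```python
def alpha_blend(background, foreground, alpha):
    """Per-pixel convex combination: alpha * fg + (1 - alpha) * bg."""
    return [
        [
            alpha[i][j] * foreground[i][j] + (1 - alpha[i][j]) * background[i][j]
            for j in range(len(background[0]))
        ]
        for i in range(len(background))
    ]

bg = [[0.1, 0.1], [0.1, 0.1]]    # stand-in for the "background" generator output
fg = [[0.9, 0.9], [0.9, 0.9]]    # stand-in for the "face" generator output
mask = [[1.0, 0.0], [0.0, 1.0]]  # alpha mask deciding which part shows through

out = alpha_blend(bg, fg, mask)
assert out == [[0.9, 0.1], [0.1, 0.9]]
```

Because blending is differentiable, gradients from the discriminator flow back through the mask into every part generator, which is what lets the part decomposition emerge without part labels.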
Student’s t Distribution based Estimation of Distribution Algorithms for Derivative-free Global Optimization
Title | Student’s t Distribution based Estimation of Distribution Algorithms for Derivative-free Global Optimization |
Authors | Bin Liu, Shi Cheng, Yuhui Shi |
Abstract | In this paper, we are concerned with a branch of evolutionary algorithms termed estimation of distribution algorithms (EDAs), which have been successfully used to tackle derivative-free global optimization problems. In existing EDA algorithms, it is common practice to use a Gaussian distribution or a mixture of Gaussian components to represent the statistical properties of the promising solutions found so far. Observing that the Student’s t distribution has heavier and longer tails than the Gaussian, which may be beneficial for exploring the solution space, we propose a novel EDA algorithm termed ESTDA, in which the Student’s t distribution, rather than the Gaussian, is employed. To address hard multimodal and deceptive problems, we extend ESTDA further by substituting a single Student’s t distribution with a mixture of Student’s t distributions. The resulting algorithm is named the estimation of mixture of Student’s t distribution algorithm (EMSTDA). Both ESTDA and EMSTDA are evaluated through extensive and in-depth numerical experiments using over a dozen benchmark objective functions. Empirical results demonstrate that the proposed algorithms provide remarkably better performance than their Gaussian counterparts. |
Tasks | |
Published | 2016-08-12 |
URL | http://arxiv.org/abs/1608.03757v2 |
http://arxiv.org/pdf/1608.03757v2.pdf | |
PWC | https://paperswithcode.com/paper/students-t-distribution-based-estimation-of |
Repo | |
Framework | |
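The core ESTDA idea — an EDA loop whose proposal is a heavy-tailed Student's t distribution fitted to elite samples — can be sketched on a 1D toy objective. The population sizes, degrees of freedom, and selection rule below are illustrative simplifications, not the full algorithm:

```python
import math
import random

random.seed(3)

def sample_t(loc, scale, df):
    """Student's t draw via a Gaussian scaled by a chi-square mixture."""
    z = random.gauss(0, 1)
    w = sum(random.gauss(0, 1) ** 2 for _ in range(df))  # chi^2 with df dofs
    return loc + scale * z / math.sqrt(w / df)

def objective(x):
    return (x - 2.0) ** 2          # toy 1D target, minimum at x = 2

pop = [random.uniform(-10, 10) for _ in range(40)]
for _ in range(30):
    # fit location/scale to the elite (best) samples
    elite = sorted(pop, key=objective)[:10]
    loc = sum(elite) / len(elite)
    scale = max(1e-6, (sum((e - loc) ** 2 for e in elite) / len(elite)) ** 0.5)
    # resample the population from a heavy-tailed t proposal
    pop = [sample_t(loc, scale, df=3) for _ in range(40)]

best = min(pop, key=objective)
assert abs(best - 2.0) < 0.5
```

Swapping `sample_t` for `random.gauss(loc, scale)` recovers the Gaussian EDA baseline; the heavier t tails occasionally throw samples far from the current mode, which is the exploration benefit the paper argues for on multimodal problems.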
Edge-exchangeable graphs and sparsity
Title | Edge-exchangeable graphs and sparsity |
Authors | Tamara Broderick, Diana Cai |
Abstract | A known failing of many popular random graph models is that the Aldous-Hoover Theorem guarantees these graphs are dense with probability one; that is, the number of edges grows quadratically with the number of nodes. This behavior is considered unrealistic in observed graphs. We define a notion of edge exchangeability for random graphs in contrast to the established notion of infinite exchangeability for random graphs — which has traditionally relied on exchangeability of nodes (rather than edges) in a graph. We show that, unlike node exchangeability, edge exchangeability encompasses models that are known to provide a projective sequence of random graphs that circumvent the Aldous-Hoover Theorem and exhibit sparsity, i.e., sub-quadratic growth of the number of edges with the number of nodes. We show how edge-exchangeability of graphs relates naturally to existing notions of exchangeability from clustering (a.k.a. partitions) and other familiar combinatorial structures. |
Tasks | |
Published | 2016-03-22 |
URL | http://arxiv.org/abs/1603.06898v1 |
http://arxiv.org/pdf/1603.06898v1.pdf | |
PWC | https://paperswithcode.com/paper/edge-exchangeable-graphs-and-sparsity |
Repo | |
Framework | |
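The edge-exchangeable construction can be sketched directly: draw edges i.i.d. with both endpoints sampled from a fixed heavy-tailed law over vertex labels, and watch the node count grow sublinearly in the edge count — the signature of sparsity. The label distribution (proportional to 1/k² over 10^4 labels) is an illustrative choice, not taken from the paper:

```python
import random

random.seed(5)

labels = list(range(1, 10001))
weights = [1.0 / k ** 2 for k in labels]   # heavy-tailed vertex popularity

def draw_edge():
    u, v = random.choices(labels, weights=weights, k=2)
    return (u, v)

edges = [draw_edge() for _ in range(2000)]

def node_count(m):
    """Number of distinct vertices seen after the first m edges."""
    return len({v for e in edges[:m] for v in e})

# quadrupling the edges far less than quadruples the nodes: sublinear growth
assert node_count(2000) < 3 * node_count(500)
# and the graph is nowhere near dense on the vertices it has touched
assert len(edges) < node_count(2000) ** 2
```

Under node exchangeability the Aldous-Hoover theorem forces edge counts quadratic in node counts; the edge-i.i.d. construction above sidesteps that by making the *edges*, not the nodes, the exchangeable units.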
Probabilistic Extension to the Concurrent Constraint Factor Oracle Model for Music Improvisation
Title | Probabilistic Extension to the Concurrent Constraint Factor Oracle Model for Music Improvisation |
Authors | Mauricio Toro |
Abstract | We can program a Real-Time (RT) music improvisation system in C++ without formal semantics, or we can model it with process calculi such as the Non-deterministic Timed Concurrent Constraint (ntcc) calculus. “A Concurrent Constraints Factor Oracle (FO) model for Music Improvisation” (Ccfomi) is an improvisation model specified in ntcc. Since Ccfomi improvises non-deterministically, there is no control over choices and therefore little control over the sequence variation during the improvisation. To avoid this, we extended Ccfomi using the Probabilistic Non-deterministic Timed Concurrent Constraint calculus. Our extension to Ccfomi does not change the time and space complexity of building the FO, thus keeping our extension compatible with RT. However, there was no ntcc interpreter capable of RT execution of Ccfomi. We developed Ntccrt, an RT-capable interpreter for ntcc, and we executed Ccfomi on Ntccrt. In the future, we plan to extend Ntccrt to execute our extension to Ccfomi. |
Tasks | |
Published | 2016-02-05 |
URL | http://arxiv.org/abs/1602.02169v1 |
http://arxiv.org/pdf/1602.02169v1.pdf | |
PWC | https://paperswithcode.com/paper/probabilistic-extension-to-the-concurrent |
Repo | |
Framework | |
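The FO whose construction cost the abstract refers to is the Factor Oracle of Allauzen, Crochemore and Raffinot, built incrementally in linear time. A sketch of that construction — the ntcc modeling and the probabilistic extension are not represented here:

```python
def build_factor_oracle(word):
    """Incrementally build the Factor Oracle of `word`.

    Returns (trans, sfx): trans[state][symbol] -> state transitions and the
    suffix-link array. States 0..len(word); state m is reached after word[:m].
    """
    n = len(word)
    trans = [dict() for _ in range(n + 1)]
    sfx = [-1] * (n + 1)                 # suffix links; state 0 links to -1
    for m, sigma in enumerate(word):
        trans[m][sigma] = m + 1          # the "spelling" transition
        k = sfx[m]
        # walk suffix links, adding forward transitions where sigma is missing
        while k > -1 and sigma not in trans[k]:
            trans[k][sigma] = m + 1
            k = sfx[k]
        sfx[m + 1] = 0 if k == -1 else trans[k][sigma]
    return trans, sfx

trans, sfx = build_factor_oracle("abbb")
assert sfx == [-1, 0, 0, 2, 3]

# every factor of the word is readable from state 0, e.g. "bb"
state = 0
for c in "bb":
    state = trans[state][c]
assert state == 3
```

Each symbol adds one state and amortized O(1) transitions, which is the linear time/space bound that the probabilistic extension is careful to preserve for RT use. In improvisation, following suffix links instead of forward transitions is what produces recombined variations of the learned sequence.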
Introduction to the “Industrial Benchmark”
Title | Introduction to the “Industrial Benchmark” |
Authors | Daniel Hein, Alexander Hentschel, Volkmar Sterzing, Michel Tokic, Steffen Udluft |
Abstract | A novel reinforcement learning benchmark, called the Industrial Benchmark, is introduced. The Industrial Benchmark aims at being realistic in the sense that it includes a variety of aspects that we found to be vital in industrial applications. It is not designed to be an approximation of any real system, but to pose the same hardness and complexity. |
Tasks | |
Published | 2016-10-12 |
URL | http://arxiv.org/abs/1610.03793v2 |
http://arxiv.org/pdf/1610.03793v2.pdf | |
PWC | https://paperswithcode.com/paper/introduction-to-the-industrial-benchmark |
Repo | |
Framework | |