July 28, 2019

3130 words 15 mins read

Paper Group ANR 292

Applications of Deep Learning and Reinforcement Learning to Biological Data. A Connectedness Constraint for Learning Sparse Graphs. Large-Scale Classification of Structured Objects using a CRF with Deep Class Embedding. A Tight Excess Risk Bound via a Unified PAC-Bayesian-Rademacher-Shtarkov-MDL Complexity. Learning to Generate Time-Lapse Videos Us …

Applications of Deep Learning and Reinforcement Learning to Biological Data

Title Applications of Deep Learning and Reinforcement Learning to Biological Data
Authors Mufti Mahmud, M. Shamim Kaiser, Amir Hussain, Stefano Vassanelli
Abstract Rapid advances in hardware-based technologies during the past decades have opened up new possibilities for life scientists to gather multimodal data in various application domains (e.g., omics, bioimaging, medical imaging, and brain/body-machine interfaces), generating novel opportunities for the development of dedicated, data-intensive machine learning techniques. Overall, recent research in deep learning (DL), reinforcement learning (RL), and their combination (deep RL) promises to revolutionize artificial intelligence. The growth in computational power, accompanied by faster and larger data storage and declining computing costs, has already allowed scientists in various fields to apply these techniques to datasets that were previously intractable owing to their size and complexity. This review article provides a comprehensive survey of the application of DL, RL, and deep RL techniques to mining biological data. In addition, we compare the performance of DL techniques when applied to different datasets across various application domains. Finally, we outline open issues in this challenging research area and discuss future development perspectives.
Tasks
Published 2017-11-10
URL http://arxiv.org/abs/1711.03985v2
PDF http://arxiv.org/pdf/1711.03985v2.pdf
PWC https://paperswithcode.com/paper/applications-of-deep-learning-and
Repo
Framework

A Connectedness Constraint for Learning Sparse Graphs

Title A Connectedness Constraint for Learning Sparse Graphs
Authors Martin Sundin, Arun Venkitaraman, Magnus Jansson, Saikat Chatterjee
Abstract Graphs are naturally sparse objects that are used to study many problems involving networks, for example, distributed learning and graph signal processing. In some cases, the graph is not given but must be learned from the problem and the available data. Often it is desirable to learn sparse graphs. However, making a graph highly sparse can split it into several disconnected components, leading to several separate networks. The main difficulty is that connectedness is often treated as a combinatorial property, making it hard to enforce in, e.g., convex optimization problems. In this article, we show how connectedness of undirected graphs can be formulated as an analytical property and enforced as a convex constraint. In particular, we show how the constraint relates to the distributed consensus problem and graph Laplacian learning. Using simulated and real data, we perform experiments to learn sparse and connected graphs.
Tasks
Published 2017-08-29
URL http://arxiv.org/abs/1708.09021v1
PDF http://arxiv.org/pdf/1708.09021v1.pdf
PWC https://paperswithcode.com/paper/a-connectedness-constraint-for-learning
Repo
Framework
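
The constraint in the paper above rests on the fact that an undirected graph is connected exactly when the second-smallest eigenvalue of its graph Laplacian (the algebraic connectivity, or Fiedler value) is strictly positive, which is what makes connectedness expressible analytically. The sketch below only checks that criterion numerically with NumPy; it is not the paper's optimization procedure, and the toy graph and tolerance are illustrative assumptions.

```python
import numpy as np

def algebraic_connectivity(W):
    """Second-smallest eigenvalue of the graph Laplacian L = D - W.

    For an undirected graph with symmetric weight matrix W, the graph is
    connected exactly when this value is strictly positive.
    """
    L = np.diag(W.sum(axis=1)) - W
    eigvals = np.linalg.eigvalsh(L)      # sorted in ascending order
    return eigvals[1]

# Two triangles joined by a single bridge edge: connected.
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    W[i, j] = W[j, i] = 1.0

print(algebraic_connectivity(W) > 1e-9)   # True: one connected component

W[2, 3] = W[3, 2] = 0.0                   # cut the bridge
print(algebraic_connectivity(W) > 1e-9)   # False: two components
```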

Large-Scale Classification of Structured Objects using a CRF with Deep Class Embedding

Title Large-Scale Classification of Structured Objects using a CRF with Deep Class Embedding
Authors Eran Goldman, Jacob Goldberger
Abstract This paper presents a novel deep learning architecture for classifying structured objects in datasets with a large number of visually similar categories. We model sequences of images as linear-chain CRFs and jointly learn the parameters from both local visual features and neighboring classes. The visual features are computed by convolutional layers, and the class embeddings are learned by factorizing the CRF pairwise potential matrix. This yields a highly nonlinear objective function, which is trained by optimizing a local likelihood approximation with batch normalization. The model overcomes the difficulty existing CRF methods have in learning contextual relationships thoroughly when the number of classes is large and the data are sparse. The performance of the proposed method is illustrated on a huge dataset of retail-store product display images, taken in varying settings and viewpoints, and shows significantly improved results compared to linear CRF modeling and unnormalized likelihood optimization.
Tasks
Published 2017-05-21
URL https://arxiv.org/abs/1705.07420v3
PDF https://arxiv.org/pdf/1705.07420v3.pdf
PWC https://paperswithcode.com/paper/large-scale-classification-of-structured
Repo
Framework
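
As a rough illustration of the factorized pairwise potential described above, the sketch below scores one label sequence in a linear-chain CRF whose transition matrix is the product of two low-rank class-embedding matrices. All parameters and shapes are hypothetical stand-ins (random here, learned jointly with the CNN features in the paper), and neither normalization nor training is shown.

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, emb_dim, seq_len, feat_dim = 50, 8, 5, 16

# Hypothetical parameters: unary weights, plus two class-embedding matrices
# whose product forms the CRF pairwise potential matrix (low-rank factorization).
W_unary = rng.normal(size=(num_classes, feat_dim))
E_left  = rng.normal(size=(num_classes, emb_dim))
E_right = rng.normal(size=(num_classes, emb_dim))
pairwise = E_left @ E_right.T            # (num_classes, num_classes)

def sequence_score(features, labels):
    """Unnormalized linear-chain CRF score for one label sequence."""
    unary = sum(W_unary[labels[t]] @ features[t] for t in range(len(labels)))
    transition = sum(pairwise[labels[t - 1], labels[t]] for t in range(1, len(labels)))
    return unary + transition

features = rng.normal(size=(seq_len, feat_dim))       # stand-in for CNN features
labels = rng.integers(0, num_classes, size=seq_len)   # a candidate label sequence
print(sequence_score(features, labels))
```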

A Tight Excess Risk Bound via a Unified PAC-Bayesian-Rademacher-Shtarkov-MDL Complexity

Title A Tight Excess Risk Bound via a Unified PAC-Bayesian-Rademacher-Shtarkov-MDL Complexity
Authors Peter D. Grünwald, Nishant A. Mehta
Abstract We present a novel notion of complexity that interpolates between and generalizes some classic existing complexity notions in learning theory: for estimators like empirical risk minimization (ERM) with arbitrary bounded losses, it is upper bounded in terms of data-independent Rademacher complexity; for generalized Bayesian estimators, it is upper bounded by the data-dependent information complexity (also known as stochastic or PAC-Bayesian, $\mathrm{KL}(\text{posterior} \,\|\, \text{prior})$) complexity. For (penalized) ERM, the new complexity reduces to (generalized) normalized maximum likelihood (NML) complexity, i.e. a minimax log-loss individual-sequence regret. Our first main result bounds excess risk in terms of the new complexity. Our second main result links the new complexity via Rademacher complexity to $L_2(P)$ entropy, thereby generalizing earlier results of Opper, Haussler, Lugosi, and Cesa-Bianchi who did the log-loss case with $L_\infty$. Together, these results recover optimal bounds for VC- and large (polynomial entropy) classes, replacing localized Rademacher complexity by a simpler analysis which almost completely separates the two aspects that determine the achievable rates: ‘easiness’ (Bernstein) conditions and model complexity.
Tasks
Published 2017-10-21
URL http://arxiv.org/abs/1710.07732v1
PDF http://arxiv.org/pdf/1710.07732v1.pdf
PWC https://paperswithcode.com/paper/a-tight-excess-risk-bound-via-a-unified-pac
Repo
Framework
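
For orientation, the data-dependent information complexity mentioned in the abstract typically enters PAC-Bayesian excess-risk bounds as a KL divergence between the (generalized) posterior $\hat{\Pi}$ and the prior $\Pi$, scaled by a learning rate $\eta$ and the sample size $n$. The display below is an illustrative textbook form of that term, not the paper's exact definition:

$$
\mathrm{IC}_{n,\eta}(\hat{\Pi}, \Pi) \;=\; \frac{\mathrm{KL}\!\left(\hat{\Pi} \,\middle\|\, \Pi\right)}{\eta\, n}
$$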

Learning to Generate Time-Lapse Videos Using Multi-Stage Dynamic Generative Adversarial Networks

Title Learning to Generate Time-Lapse Videos Using Multi-Stage Dynamic Generative Adversarial Networks
Authors Wei Xiong, Wenhan Luo, Lin Ma, Wei Liu, Jiebo Luo
Abstract Given a photo taken outdoors, can we predict the immediate future, e.g., how the clouds will move across the sky? We address this problem by presenting a generative adversarial network (GAN) based two-stage approach to generating realistic, high-resolution time-lapse videos. Given the first frame, our model learns to generate long-term future frames. The first stage generates videos with realistic content in each frame. The second stage refines the generated video from the first stage by pushing it closer to real videos with regard to motion dynamics. To further encourage vivid motion in the final generated video, a Gram matrix is employed to model the motion more precisely. We build a large-scale time-lapse dataset and test our approach on this new dataset. Using our model, we are able to generate realistic videos of up to $128\times 128$ resolution for 32 frames. Quantitative and qualitative experimental results demonstrate the superiority of our model over state-of-the-art models.
Tasks
Published 2017-09-22
URL http://arxiv.org/abs/1709.07592v3
PDF http://arxiv.org/pdf/1709.07592v3.pdf
PWC https://paperswithcode.com/paper/learning-to-generate-time-lapse-videos-using
Repo
Framework
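
The abstract mentions using a Gram matrix to model motion more precisely. A Gram matrix of a feature map is simply the matrix of inner products between channel activations; a minimal NumPy sketch follows, with the feature shape and normalization chosen for illustration (the paper computes it from its own network's features).

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a feature map.

    features: array of shape (channels, height, width). The result is the
    (channels x channels) matrix of inner products between channel maps,
    normalized by the number of spatial positions.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (h * w)

feats = np.random.rand(64, 16, 16)   # stand-in for one frame's feature tensor
print(gram_matrix(feats).shape)       # (64, 64)
```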

Deep Learning Improves Template Matching by Normalized Cross Correlation

Title Deep Learning Improves Template Matching by Normalized Cross Correlation
Authors Davit Buniatyan, Thomas Macrina, Dodam Ih, Jonathan Zung, H. Sebastian Seung
Abstract Template matching by normalized cross correlation (NCC) is widely used for finding image correspondences. We improve the robustness of this algorithm by preprocessing images with “siamese” convolutional networks trained to maximize the contrast between NCC values of true and false matches. The improvement is quantified using patches of brain images from serial section electron microscopy. Relative to a parameter-tuned bandpass filter, siamese convolutional networks significantly reduce false matches. Furthermore, all false matches can be eliminated by removing a tiny fraction of all matches based on NCC values. The improved accuracy of our method could be essential for connectomics, because emerging petascale datasets may require billions of template matches to assemble 2D images of serial sections into a 3D image stack. Our method is also expected to generalize to many other computer vision applications that use NCC template matching to find image correspondences.
Tasks
Published 2017-05-24
URL http://arxiv.org/abs/1705.08593v1
PDF http://arxiv.org/pdf/1705.08593v1.pdf
PWC https://paperswithcode.com/paper/deep-learning-improves-template-matching-by
Repo
Framework
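
For reference, here is a minimal NumPy version of the normalized cross correlation score whose contrast the preprocessing networks are trained to maximize; the patch sizes and noise level are illustrative, and this is the plain NCC formula rather than the paper's full matching pipeline.

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross correlation between two equal-sized image patches.

    Both inputs are zero-meaned and scaled to unit norm, so the score lies in
    [-1, 1]; values near 1 indicate a likely match.
    """
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom > 0 else 0.0

rng = np.random.default_rng(1)
template = rng.random((32, 32))
noisy_copy = template + 0.05 * rng.standard_normal((32, 32))
unrelated = rng.random((32, 32))
print(ncc(noisy_copy, template))   # close to 1: true match
print(ncc(unrelated, template))    # near 0: false match
```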

Assigning personality/identity to a chatting machine for coherent conversation generation

Title Assigning personality/identity to a chatting machine for coherent conversation generation
Authors Qiao Qian, Minlie Huang, Haizhou Zhao, Jingfang Xu, Xiaoyan Zhu
Abstract Endowing a chatbot with a personality or an identity is quite challenging but critical for delivering more realistic and natural conversations. In this paper, we address the issue of generating responses that are coherent with a pre-specified agent profile. We design a model consisting of three modules: a profile detector to decide whether a post should be responded to using the profile and which profile key should be addressed, a bidirectional decoder to generate responses forward and backward starting from a selected profile value, and a position detector that predicts the word position from which decoding should start given a selected profile value. We show that general conversation data from social media can be used to generate profile-coherent responses. Manual and automatic evaluations show that our model can deliver more coherent, natural, and diversified responses.
Tasks Chatbot
Published 2017-06-09
URL http://arxiv.org/abs/1706.02861v3
PDF http://arxiv.org/pdf/1706.02861v3.pdf
PWC https://paperswithcode.com/paper/assigning-personalityidentity-to-a-chatting
Repo
Framework
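
Purely as a reading aid for the decoding order described above (not the paper's model), the toy snippet below assembles a response by placing a selected profile value at a predicted position and attaching backward- and forward-generated words around it. Every value is hard-coded here; in the paper these would come from the neural profile detector, position detector, and bidirectional decoder.

```python
# Toy illustration of bidirectional decoding around a profile value.
profile_value = "Beijing"          # e.g. the value of a "location" profile key
start_position = 3                 # hypothetical output of the position detector

backward_words = ["I", "live", "in"]           # stand-in for the backward decoder
forward_words = ["with", "my", "family", "."]  # stand-in for the forward decoder

response = backward_words[:start_position] + [profile_value] + forward_words
print(" ".join(response))          # "I live in Beijing with my family ."
```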

Ranking and Selection with Covariates for Personalized Decision Making

Title Ranking and Selection with Covariates for Personalized Decision Making
Authors Haihui Shen, L. Jeff Hong, Xiaowei Zhang
Abstract We consider a ranking and selection problem in the context of personalized decision making, where the best alternative is not universal but varies as a function of observable covariates. The goal of ranking and selection with covariates (R&S-C) is to use sampling to compute a decision rule that can specify the best alternative, with a certain statistical guarantee, for each subsequent individual after observing his or her covariates. A linear model is proposed to capture the relationship between the mean performance of an alternative and the covariates. Under the indifference-zone formulation, we develop two-stage procedures for homoscedastic and heteroscedastic sampling errors, respectively, and prove their statistical validity, which is defined in terms of the probability of correct selection. We also generalize the well-known slippage configuration and prove that the generalized slippage configuration is the least favorable configuration for our procedures. Extensive numerical experiments are conducted to investigate the performance of the proposed procedures. Finally, we demonstrate the usefulness of R&S-C via a case study of selecting the best treatment regimen for the prevention of esophageal cancer. We find that by leveraging disease-related personal information, R&S-C can substantially improve the expected quality-adjusted life years for some groups of patients by providing patient-specific treatment regimens.
Tasks Decision Making
Published 2017-10-07
URL http://arxiv.org/abs/1710.02642v1
PDF http://arxiv.org/pdf/1710.02642v1.pdf
PWC https://paperswithcode.com/paper/ranking-and-selection-with-covariates-for
Repo
Framework
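
The decision rule implied by the linear model in the abstract can be sketched simply: once coefficients are available for each alternative, the rule predicts each alternative's mean performance from the covariates and picks the best. The coefficients below are random placeholders; in the paper they would come from the two-stage sampling procedure with its indifference-zone guarantee.

```python
import numpy as np

rng = np.random.default_rng(2)
num_alternatives, cov_dim = 4, 3

# Hypothetical coefficients: mean performance of alternative i is modeled as
# a linear function of the covariates, mu_i(x) = beta_i @ x.
beta = rng.normal(size=(num_alternatives, cov_dim))

def select_best(x, maximize=True):
    """Decision rule from the fitted linear model: choose the alternative with
    the best predicted mean performance for covariate vector x."""
    means = beta @ x
    return int(np.argmax(means) if maximize else np.argmin(means))

patient = np.array([1.0, 0.3, -1.2])   # stand-in covariates (e.g. age, biomarkers)
print(select_best(patient))
```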

Homomorphic Parameter Compression for Distributed Deep Learning Training

Title Homomorphic Parameter Compression for Distributed Deep Learning Training
Authors Jaehee Jang, Byungook Na, Sungroh Yoon
Abstract Distributed training of deep neural networks has received significant research interest, and its major approaches include implementations on multiple GPUs and clusters. Parallelization can dramatically improve the efficiency of training deep and complicated models with large-scale data. A fundamental barrier to speeding up DNN training, however, is the trade-off between computation and communication time. In other words, increasing the number of worker nodes decreases the time spent on computation while simultaneously increasing communication overhead under constrained network bandwidth, especially in commodity hardware environments. To alleviate this trade-off, we suggest the idea of homomorphic parameter compression, which compresses parameters at minimal expense and trains the DNN with the compressed representation. Although the specific method is yet to be discovered, we demonstrate that there is a high probability that such a homomorphism can reduce the communication overhead, thanks to negligible compression and decompression times. We also provide a theoretical analysis of the speedup achievable with homomorphic compression.
Tasks
Published 2017-11-28
URL http://arxiv.org/abs/1711.10123v1
PDF http://arxiv.org/pdf/1711.10123v1.pdf
PWC https://paperswithcode.com/paper/homomorphic-parameter-compression-for
Repo
Framework
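
A back-of-the-envelope illustration of the computation/communication trade-off the abstract describes: adding workers shrinks per-worker computation time while communication grows, and shrinking the transmitted parameter size (compression ratio below 1) pushes the sweet spot toward more workers. All timing numbers and the linear communication model are assumptions for illustration, not measurements from the paper.

```python
def iteration_time(workers, compute_total=100.0, comm_per_worker=2.0, ratio=1.0):
    """Assumed per-iteration time model: computation is split evenly across
    workers, while communication grows with the worker count and scales with
    the compression ratio of the transmitted parameters."""
    computation = compute_total / workers
    communication = comm_per_worker * workers * ratio
    return computation + communication

for workers in (1, 4, 16, 64):
    print(workers,
          round(iteration_time(workers), 2),              # uncompressed parameters
          round(iteration_time(workers, ratio=0.25), 2))  # 4x smaller parameters
```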

On the effect of pooling on the geometry of representations

Title On the effect of pooling on the geometry of representations
Authors Gary Bécigneul
Abstract In machine learning and neuroscience, certain computational structures and algorithms are known to yield disentangled representations without our understanding why, the most striking examples being perhaps convolutional neural networks and the ventral stream of the visual cortex in humans and primates. As for the latter, it has been conjectured that representations may be disentangled by being flattened progressively and at a local scale. An attempt to formalize the role of invariance in learning representations was made recently, referred to as I-theory. In this framework, and using the language of differential geometry, we show that pooling over a group of transformations of the input contracts the metric and reduces its curvature, and we provide quantitative bounds, with the aim of moving towards a theoretical understanding of how to disentangle representations.
Tasks
Published 2017-03-20
URL http://arxiv.org/abs/1703.06726v1
PDF http://arxiv.org/pdf/1703.06726v1.pdf
PWC https://paperswithcode.com/paper/on-the-effect-of-pooling-on-the-geometry-of
Repo
Framework
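
As a toy instance of the claim that pooling over a transformation group contracts the metric, the snippet below pools two random signals over the group of cyclic shifts and compares their distance before and after; the choice of group and signals is illustrative, and the paper's result also concerns curvature, which this does not show.

```python
import numpy as np

def pool_over_shifts(x):
    """Average a 1-D signal over the group of cyclic shifts (a toy
    transformation group); the pooled representation is shift-invariant."""
    return np.mean([np.roll(x, k) for k in range(len(x))], axis=0)

rng = np.random.default_rng(3)
x, y = rng.normal(size=8), rng.normal(size=8)

before = np.linalg.norm(x - y)
after = np.linalg.norm(pool_over_shifts(x) - pool_over_shifts(y))
print(before, after)   # pooled distance is never larger: the metric contracts
```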

A Comprehensive Survey of Graph Embedding: Problems, Techniques and Applications

Title A Comprehensive Survey of Graph Embedding: Problems, Techniques and Applications
Authors Hongyun Cai, Vincent W. Zheng, Kevin Chen-Chuan Chang
Abstract Graphs are an important data representation that appears in a wide diversity of real-world scenarios. Effective graph analytics gives users a deeper understanding of what is behind the data and can thus benefit many useful applications such as node classification, node recommendation, and link prediction. However, most graph analytics methods suffer from high computation and space costs. Graph embedding is an effective yet efficient way to solve the graph analytics problem. It converts the graph data into a low-dimensional space in which the graph structural information and graph properties are maximally preserved. In this survey, we conduct a comprehensive review of the literature on graph embedding. We first introduce the formal definition of graph embedding as well as the related concepts. After that, we propose two taxonomies of graph embedding, corresponding to the challenges that arise in different graph embedding problem settings and to how existing work addresses these challenges. Finally, we summarize the applications that graph embedding enables and suggest four promising future research directions in terms of computational efficiency, problem settings, techniques, and application scenarios.
Tasks Graph Embedding, Link Prediction, Node Classification
Published 2017-09-22
URL http://arxiv.org/abs/1709.07604v3
PDF http://arxiv.org/pdf/1709.07604v3.pdf
PWC https://paperswithcode.com/paper/a-comprehensive-survey-of-graph-embedding
Repo
Framework
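
As one concrete example of the structure-preserving embeddings this survey covers, the sketch below computes a Laplacian-eigenmaps-style embedding of a toy graph from the eigenvectors of its Laplacian; this is a single classical technique chosen for brevity, not a summary of the survey's taxonomies.

```python
import numpy as np

def laplacian_eigenmap(W, dim=2):
    """Embed the nodes of a graph with weight matrix W into `dim` dimensions
    using the Laplacian eigenvectors with the smallest nonzero eigenvalues,
    one classical way of preserving graph structure in a low-dimensional space."""
    L = np.diag(W.sum(axis=1)) - W
    _, vecs = np.linalg.eigh(L)          # eigenvectors sorted by eigenvalue
    return vecs[:, 1:dim + 1]            # skip the trivial constant eigenvector

# A 6-node graph: two triangles joined by one edge.
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    W[i, j] = W[j, i] = 1.0

print(laplacian_eigenmap(W))             # nearby nodes receive nearby coordinates
```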

Normalisation de la langue et de l'écriture arabe : enjeux culturels régionaux et mondiaux

Title Normalisation de la langue et de l'écriture arabe : enjeux culturels régionaux et mondiaux
Authors Henri Hudrisier, Ben Henda Mokhtar
Abstract Arabic language and writing are now facing a resurgence of international normative solutions that challenge most of their local or network-based operating principles. Even if multilingual digital coding solutions, especially those proposed by Unicode, have solved many difficulties of Arabic writing, the linguistic aspect is still in search of better-adapted solutions. Terminology is one of the sectors in which the Arabic language requires a deep modernization of its classical production models. The normative approach, in particular that of ISO TC 37, is proposed as one of the solutions that would allow Arabic to align with international standards and better integrate into the knowledge society now under construction.
Tasks
Published 2017-02-24
URL http://arxiv.org/abs/1703.04512v1
PDF http://arxiv.org/pdf/1703.04512v1.pdf
PWC https://paperswithcode.com/paper/normalisation-de-la-langue-et-de-lecriture
Repo
Framework

Constrained Manifold Learning for Hyperspectral Imagery Visualization

Title Constrained Manifold Learning for Hyperspectral Imagery Visualization
Authors Danping Liao, Yuntao Qian, Yuan Yan Tang
Abstract Displaying the large number of bands in a hyperspectral image (HSI) on a trichromatic monitor is important for HSI processing and analysis systems. The visualized image should convey as much information as possible from the original HSI while facilitating image interpretation. However, most existing methods display HSIs in false color, which conflicts with user experience and expectation. In this paper, we propose a visualization approach based on constrained manifold learning, whose goal is to learn a visualized image that not only preserves the manifold structure of the HSI but also has natural colors. Manifold learning preserves the image structure by forcing pixels with similar signatures to be displayed with similar colors. A composite kernel is applied in manifold learning to incorporate both the spatial and spectral information of the HSI in the embedded space. The colors of the output image are constrained by a corresponding natural-looking RGB image, which can either be generated from the HSI itself (e.g., by band selection from the visible wavelengths) or be captured by a separate device. Our method can operate at the instance level or the feature level: instance-level learning directly obtains the RGB coordinates for the pixels in the HSI, while feature-level learning learns an explicit mapping function from the high-dimensional spectral space to the RGB space. Experimental results demonstrate the advantage of the proposed method in information preservation and natural color visualization.
Tasks
Published 2017-11-24
URL http://arxiv.org/abs/1712.01657v1
PDF http://arxiv.org/pdf/1712.01657v1.pdf
PWC https://paperswithcode.com/paper/constrained-manifold-learning-for
Repo
Framework
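
The composite kernel mentioned in the abstract can be illustrated as a weighted sum of a spectral-similarity kernel and a spatial-proximity kernel; the RBF form, the mixing weight, and the bandwidths below are assumptions made for this sketch, not the paper's settings.

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Pairwise RBF (Gaussian) kernel matrix for rows of X."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def composite_kernel(spectra, coords, alpha=0.7, gamma_spec=0.5, gamma_spat=0.1):
    """Mix spectral similarity and spatial proximity into one kernel matrix."""
    return (alpha * rbf_kernel(spectra, gamma_spec)
            + (1 - alpha) * rbf_kernel(coords, gamma_spat))

rng = np.random.default_rng(4)
spectra = rng.random((10, 100))                            # 10 pixels, 100 bands
coords = rng.integers(0, 32, size=(10, 2)).astype(float)   # pixel positions
print(composite_kernel(spectra, coords).shape)             # (10, 10)
```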

Simplex Search Based Brain Storm Optimization

Title Simplex Search Based Brain Storm Optimization
Authors Wei Chen, YingYing Cao, Shi Cheng, Yifei Sun, Qunfeng Liu, Yun Li
Abstract By modeling the human brainstorming process, the brain storm optimization (BSO) algorithm has become a promising population-based evolutionary algorithm. However, it has been pointed out that BSO exhibits a degenerated L-curve phenomenon, i.e., it often gets near the optimum quickly but needs much more computation to improve the accuracy. To overcome this problem, an effective direct-search local solver, the Nelder-Mead simplex (NMS) method, is adopted in BSO in this paper. By combining BSO's exploration ability with NMS's exploitation ability, a simplex search based BSO (Simplex-BSO) is developed with a better balance between global exploration and local exploitation. Simplex-BSO is shown to eliminate the degenerated L-curve phenomenon on unimodal functions and to alleviate it significantly on multimodal functions. A large number of experimental results show that Simplex-BSO is a promising algorithm for global optimization problems.
Tasks
Published 2017-10-24
URL http://arxiv.org/abs/1712.03166v3
PDF http://arxiv.org/pdf/1712.03166v3.pdf
PWC https://paperswithcode.com/paper/simplex-search-based-brain-storm-optimization
Repo
Framework
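
To make the exploitation step concrete, the sketch below refines the best member of a stand-in population with the Nelder-Mead simplex method via SciPy. The population here is random rather than produced by BSO's clustering and idea-generation steps, so this shows only the local-search half of what Simplex-BSO combines.

```python
import numpy as np
from scipy.optimize import minimize

def sphere(x):
    """Simple unimodal test objective."""
    return float(np.sum(x ** 2))

# Stand-in for a BSO population of candidate solutions ("ideas").
rng = np.random.default_rng(5)
population = rng.uniform(-5, 5, size=(20, 4))

# Exploitation step in the spirit of Simplex-BSO: refine the best idea found
# so far with Nelder-Mead local search.
best = min(population, key=sphere)
result = minimize(sphere, best, method="Nelder-Mead")
print(sphere(best), result.fun)   # the refined value is typically much better
```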

Towards A Novel Unified Framework for Developing Formal, Network and Validated Agent-Based Simulation Models of Complex Adaptive Systems

Title Towards A Novel Unified Framework for Developing Formal, Network and Validated Agent-Based Simulation Models of Complex Adaptive Systems
Authors Muaz A. Niazi
Abstract Literature on the modeling and simulation of complex adaptive systems (cas) has primarily advanced vertically in different scientific domains with scientists developing a variety of domain-specific approaches and applications. However, while cas researchers are inherently interested in an interdisciplinary comparison of models, to the best of our knowledge, there is currently no single unified framework for facilitating the development, comparison, communication and validation of models across different scientific domains. In this thesis, we propose first steps towards such a unified framework using a combination of agent-based and complex network-based modeling approaches and guidelines formulated in the form of a set of four levels of usage, which allow multidisciplinary researchers to adopt a suitable framework level on the basis of available data types, their research study objectives and expected outcomes, thus allowing them to better plan and conduct their respective research case studies.
Tasks
Published 2017-08-08
URL http://arxiv.org/abs/1708.02357v1
PDF http://arxiv.org/pdf/1708.02357v1.pdf
PWC https://paperswithcode.com/paper/towards-a-novel-unified-framework-for
Repo
Framework