October 16, 2019

3268 words 16 mins read

Paper Group ANR 1073

Context Encoding Chest X-rays. An Inductive Formalization of Self Reproduction in Dynamical Hierarchies. A Kernel for Multi-Parameter Persistent Homology. SAFFRON: an adaptive algorithm for online control of the false discovery rate. Learning Localized Spatio-Temporal Models From Streaming Data. Temporal Action Detection by Joint Identification-Ver …

Context Encoding Chest X-rays

Title Context Encoding Chest X-rays
Authors Davide Belli, Shi Hu, Ecem Sogancioglu, Bram van Ginneken
Abstract Chest X-rays are one of the most commonly used technologies for medical diagnosis. Many deep learning models have been proposed to improve and automate the abnormality detection task on this type of data. In this paper, we propose a different approach based on image inpainting under adversarial training, first introduced by Goodfellow et al. We configure the context encoder model for this task and train it on 1.1M 128x128 images from healthy X-rays. The goal of our model is to reconstruct the missing central 64x64 patch. Once the model has learned how to inpaint healthy tissue, we test its performance on images with and without abnormalities. We discuss and motivate our results considering PSNR, MSE and SSIM scores as evaluation metrics. In addition, we conduct a 2AFC observer study showing that in half of the cases an expert is unable to distinguish real images from those reconstructed using our model. By computing and visualizing the pixel-wise difference between the source and the reconstructed images, we can highlight abnormalities to simplify further detection and classification tasks.
Tasks Anomaly Detection, Image Inpainting, Medical Diagnosis
Published 2018-12-03
URL http://arxiv.org/abs/1812.00964v2
PDF http://arxiv.org/pdf/1812.00964v2.pdf
PWC https://paperswithcode.com/paper/chest-x-rays-image-inpainting-with-context
Repo
Framework
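
The pixel-wise difference step in this abstract is easy to illustrate. Below is a minimal sketch, assuming a hypothetical trained context-encoder `inpaint_model` that takes a masked 128x128 X-ray and returns the image with the central 64x64 patch filled in; the names and preprocessing are placeholders, not the paper's code.

```python
import numpy as np

def abnormality_heatmap(image, inpaint_model):
    """Mask the central 64x64 patch of a 128x128 X-ray, reconstruct it with
    the (hypothetical) trained inpainting model, and return the pixel-wise
    absolute difference. Large differences inside the patch mark tissue the
    model could not reproduce from healthy context, i.e. candidate abnormalities.
    """
    assert image.shape == (128, 128)
    masked = image.copy()
    masked[32:96, 32:96] = 0.0                  # remove the central patch
    reconstructed = inpaint_model(masked)       # full image with patch filled in
    heatmap = np.zeros_like(image, dtype=float)
    heatmap[32:96, 32:96] = np.abs(image[32:96, 32:96] - reconstructed[32:96, 32:96])
    return heatmap
```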

An Inductive Formalization of Self Reproduction in Dynamical Hierarchies

Title An Inductive Formalization of Self Reproduction in Dynamical Hierarchies
Authors Janardan Misra
Abstract Formalizing self reproduction in dynamical hierarchies is one of the important problems in Artificial Life (AL) studies. In this paper, we study an inductively defined algebraic framework for self reproduction on macroscopic organizational levels in a dynamical-systems setting for simulated AL models and explore some existential results. Starting with defining self reproduction for atomic entities, we define self reproduction with possible mutations on higher organizational levels in terms of hierarchical sets and the corresponding inductively defined ‘meta’-reactions. We introduce constraints to distinguish a mere collection of entities from genuine cases of emergent organizational structures.
Tasks Artificial Life
Published 2018-06-23
URL http://arxiv.org/abs/1806.08925v1
PDF http://arxiv.org/pdf/1806.08925v1.pdf
PWC https://paperswithcode.com/paper/an-inductive-formalization-of-self
Repo
Framework

A Kernel for Multi-Parameter Persistent Homology

Title A Kernel for Multi-Parameter Persistent Homology
Authors René Corbet, Ulderico Fugacci, Michael Kerber, Claudia Landi, Bei Wang
Abstract Topological data analysis and its main method, persistent homology, provide a toolkit for computing topological information of high-dimensional and noisy data sets. Kernels for one-parameter persistent homology have been established to connect persistent homology with machine learning techniques. We contribute a kernel construction for multi-parameter persistence by integrating a one-parameter kernel weighted along straight lines. We prove that our kernel is stable and efficiently computable, which establishes a theoretical connection between topological data analysis and machine learning for multivariate data analysis.
Tasks Topological Data Analysis
Published 2018-09-26
URL https://arxiv.org/abs/1809.10231v2
PDF https://arxiv.org/pdf/1809.10231v2.pdf
PWC https://paperswithcode.com/paper/a-kernel-for-multi-parameter-persistent
Repo
Framework

SAFFRON: an adaptive algorithm for online control of the false discovery rate

Title SAFFRON: an adaptive algorithm for online control of the false discovery rate
Authors Aaditya Ramdas, Tijana Zrnic, Martin Wainwright, Michael Jordan
Abstract In the online false discovery rate (FDR) problem, one observes a possibly infinite sequence of $p$-values $P_1,P_2,\dots$, each testing a different null hypothesis, and an algorithm must pick a sequence of rejection thresholds $\alpha_1,\alpha_2,\dots$ in an online fashion, effectively rejecting the $k$-th null hypothesis whenever $P_k \leq \alpha_k$. Importantly, $\alpha_k$ must be a function of the past, and cannot depend on $P_k$ or any of the later unseen $p$-values, and must be chosen to guarantee that for any time $t$, the FDR up to time $t$ is less than some pre-determined quantity $\alpha \in (0,1)$. In this work, we present a powerful new framework for online FDR control that we refer to as SAFFRON. Like older alpha-investing (AI) algorithms, SAFFRON starts off with an error budget, called alpha-wealth, that it intelligently allocates to different tests over time, earning back some wealth on making a new discovery. However, unlike older methods, SAFFRON’s threshold sequence is based on a novel estimate of the alpha fraction that it allocates to true null hypotheses. In the offline setting, algorithms that employ an estimate of the proportion of true nulls are called adaptive methods, and SAFFRON can be seen as an online analogue of the famous offline Storey-BH adaptive procedure. Just as Storey-BH is typically more powerful than the Benjamini-Hochberg (BH) procedure under independence, we demonstrate that SAFFRON is also more powerful than its non-adaptive counterparts, such as LORD and other generalized alpha-investing algorithms. Further, a monotone version of the original AI algorithm is recovered as a special case of SAFFRON, that is often more stable and powerful than the original. Lastly, the derivation of SAFFRON provides a novel template for deriving new online FDR rules.
Tasks
Published 2018-02-25
URL https://arxiv.org/abs/1802.09098v2
PDF https://arxiv.org/pdf/1802.09098v2.pdf
PWC https://paperswithcode.com/paper/saffron-an-adaptive-algorithm-for-online
Repo
Framework
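
For readers new to online FDR control, the protocol itself is compact. The sketch below shows the online testing loop with a simple non-adaptive alpha-spending schedule; it is not the SAFFRON rule, whose adaptive alpha-wealth allocation is the paper's contribution, but it makes the "thresholds depend only on the past" constraint concrete.

```python
import numpy as np

def online_alpha_spending(pvals, alpha=0.05):
    """Toy online multiple-testing procedure (NOT SAFFRON).

    Each threshold alpha_k is fixed in advance (so it trivially depends only
    on the past): alpha_k = alpha * gamma_k with a summable gamma sequence.
    SAFFRON instead maintains an alpha-wealth budget, spends it adaptively
    and earns wealth back on discoveries, which is typically far more powerful.
    """
    k = np.arange(1, len(pvals) + 1)
    gamma = 6.0 / (np.pi ** 2 * k ** 2)          # gamma_k sums to (at most) 1
    thresholds = alpha * gamma
    rejections = [p <= a for p, a in zip(pvals, thresholds)]
    return rejections, thresholds

# Example: rejections, _ = online_alpha_spending([0.0001, 0.2, 0.003, 0.9])
```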

Learning Localized Spatio-Temporal Models From Streaming Data

Title Learning Localized Spatio-Temporal Models From Streaming Data
Authors Muhammad Osama, Dave Zachariah, Thomas B. Schön
Abstract We address the problem of predicting spatio-temporal processes with temporal patterns that vary across spatial regions, when data is obtained as a stream, that is, when the training dataset is augmented sequentially. Specifically, we develop a localized spatio-temporal covariance model of the process that can capture spatially varying temporal periodicities in the data. We then apply a covariance-fitting methodology to learn the model parameters, which yields a predictor that can be updated sequentially with each new data point. The proposed method is evaluated using both synthetic and real climate data, demonstrating its ability to accurately predict data missing in spatial regions over time.
Tasks
Published 2018-02-09
URL http://arxiv.org/abs/1802.03334v2
PDF http://arxiv.org/pdf/1802.03334v2.pdf
PWC https://paperswithcode.com/paper/learning-localized-spatio-temporal-models
Repo
Framework
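
The "updated sequentially with each new data point" part can be illustrated with a generic streaming linear model. The sketch below uses ordinary recursive least squares over a user-supplied feature map phi(location, time); it is an assumption-laden stand-in for the paper's localized covariance-fitting predictor, not a reimplementation of it.

```python
import numpy as np

class StreamingPredictor:
    """Recursive least squares for y ~ phi(s, t) @ theta, updated per sample."""
    def __init__(self, dim, ridge=1.0):
        self.theta = np.zeros(dim)
        self.P = np.eye(dim) / ridge            # inverse regularized Gram matrix

    def update(self, phi, y):
        # Sherman-Morrison rank-one update; cost is O(dim^2) per data point.
        Pphi = self.P @ phi
        gain = Pphi / (1.0 + phi @ Pphi)
        self.theta += gain * (y - phi @ self.theta)
        self.P -= np.outer(gain, Pphi)

    def predict(self, phi):
        return phi @ self.theta
```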

Temporal Action Detection by Joint Identification-Verification

Title Temporal Action Detection by Joint Identification-Verification
Authors Wen Wang, Yongjian Wu, Haijun Liu, Shiguang Wang, Jian Cheng
Abstract Temporal action detection aims at not only recognizing the action category but also detecting the start time and end time of each action instance in an untrimmed video. The key challenge of this task is to accurately classify the action and determine the temporal boundaries of each action instance. In the temporal action detection benchmark THUMOS 2014, large variations exist within the same action category while many similarities exist across different action categories, which limits the performance of temporal action detection. To address this problem, we propose to use a joint Identification-Verification network to reduce intra-action variations and enlarge inter-action differences. The joint Identification-Verification network is a siamese network based on 3D ConvNets, which can simultaneously predict the action categories and the similarity scores for input pairs of video proposal segments. Extensive experimental results on the challenging THUMOS 2014 dataset demonstrate the effectiveness of our proposed method compared to existing state-of-the-art methods for temporal action detection in untrimmed videos.
Tasks Action Detection
Published 2018-10-19
URL http://arxiv.org/abs/1810.08375v1
PDF http://arxiv.org/pdf/1810.08375v1.pdf
PWC https://paperswithcode.com/paper/temporal-action-detection-by-joint
Repo
Framework
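
A compact PyTorch sketch of the identification-verification idea follows: a shared 3D backbone embeds two proposal clips, an identification head classifies each, and a contrastive-style verification loss acts on the embedding distance. The backbone depth, embedding size and loss weighting are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointIDVerNet(nn.Module):
    """Siamese 3D ConvNet: shared backbone + classification head per branch."""
    def __init__(self, num_classes, embed_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(16, embed_dim),
        )
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, clip_a, clip_b):
        za, zb = self.backbone(clip_a), self.backbone(clip_b)
        return self.classifier(za), self.classifier(zb), za, zb

def joint_loss(logits_a, logits_b, za, zb, ya, yb, margin=1.0):
    # Identification: classify both clips.  Verification: pull embeddings of
    # same-action pairs together, push different-action pairs beyond `margin`.
    id_loss = F.cross_entropy(logits_a, ya) + F.cross_entropy(logits_b, yb)
    same = (ya == yb).float()
    dist = F.pairwise_distance(za, zb)
    ver_loss = same * dist.pow(2) + (1 - same) * F.relu(margin - dist).pow(2)
    return id_loss + ver_loss.mean()
```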

Neural MultiVoice Models for Expressing Novel Personalities in Dialog

Title Neural MultiVoice Models for Expressing Novel Personalities in Dialog
Authors Shereen Oraby, Lena Reed, Sharath TS, Shubhangi Tandon, Marilyn Walker
Abstract Natural language generators for task-oriented dialog should be able to vary the style of the output utterance while still effectively realizing the system dialog actions and their associated semantics. While the use of neural generation for training the response generation component of conversational agents promises to simplify the process of producing high quality responses in new domains, to our knowledge, there has been very little investigation of neural generators for task-oriented dialog that can vary their response style, and we know of no experiments on models that can generate responses that are different in style from those seen during training, while still maintaining semantic fidelity to the input meaning representation. Here, we show that a model that is trained to achieve a single stylistic personality target can produce outputs that combine stylistic targets. We carefully evaluate the multivoice outputs for both semantic fidelity and for similarities to and differences from the linguistic features that characterize the original training style. We show that, contrary to our predictions, the learned models do not always simply interpolate model parameters, but rather produce styles that are distinct from, and novel relative to, the personalities they were trained on.
Tasks
Published 2018-09-05
URL http://arxiv.org/abs/1809.01331v1
PDF http://arxiv.org/pdf/1809.01331v1.pdf
PWC https://paperswithcode.com/paper/neural-multivoice-models-for-expressing-novel
Repo
Framework

Computation of the Maximum Likelihood estimator in low-rank Factor Analysis

Title Computation of the Maximum Likelihood estimator in low-rank Factor Analysis
Authors Koulik Khamaru, Rahul Mazumder
Abstract Factor analysis, a classical multivariate statistical technique, is popularly used as a fundamental tool for dimensionality reduction in statistics, econometrics and data science. Estimation is often carried out via the Maximum Likelihood (ML) principle, which seeks to maximize the likelihood under the assumption that the positive definite covariance matrix can be decomposed as the sum of a low-rank positive semidefinite matrix and a diagonal matrix with nonnegative entries. This leads to a challenging rank-constrained nonconvex optimization problem. We reformulate the low-rank ML Factor Analysis problem as a nonlinear nonsmooth semidefinite optimization problem, study various structural properties of this reformulation and propose fast and scalable algorithms based on difference-of-convex (DC) optimization. Our approach has computational guarantees, gracefully scales to large problems, is applicable to situations where the sample covariance matrix is rank deficient and adapts to variants of the ML problem with additional constraints on the problem parameters. Our numerical experiments demonstrate the significant usefulness of our approach over existing state-of-the-art approaches.
Tasks Dimensionality Reduction
Published 2018-01-18
URL http://arxiv.org/abs/1801.05935v1
PDF http://arxiv.org/pdf/1801.05935v1.pdf
PWC https://paperswithcode.com/paper/computation-of-the-maximum-likelihood
Repo
Framework
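
The objective in this abstract has a short closed form. Writing the covariance as Sigma = L L^T + diag(psi), with L a p x r loading matrix and psi >= 0, the ML problem minimizes log det(Sigma) + trace(Sigma^{-1} S) over L and psi for a sample covariance S. Below is a minimal sketch of that objective; the paper's DC-based solver is omitted.

```python
import numpy as np

def fa_neg_loglik(L, psi, S):
    """Gaussian negative log-likelihood, up to additive constants, for the
    low-rank-plus-diagonal factor model Sigma = L @ L.T + diag(psi)."""
    sigma = L @ L.T + np.diag(psi)
    _, logdet = np.linalg.slogdet(sigma)
    return logdet + np.trace(np.linalg.solve(sigma, S))
```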

Dynamic Island Model based on Spectral Clustering in Genetic Algorithm

Title Dynamic Island Model based on Spectral Clustering in Genetic Algorithm
Authors Qinxue Meng, Jia Wu, John Ellisy, Paul J. Kennedy
Abstract How to maintain relatively high diversity is important to avoid premature convergence in population-based optimization methods. The island model is widely considered a major approach to achieve this because of its flexibility and high efficiency. The model maintains a group of sub-populations on different islands and allows sub-populations to interact with each other via predefined migration policies. However, the current island model has some drawbacks. One is that after a certain number of generations, different islands may retain quite similar, converged sub-populations, thereby losing diversity and decreasing efficiency. Another drawback is that determining the number of islands to maintain is also very challenging. Meanwhile, initializing many sub-populations increases the randomness of the island model. To address these issues, we propose a dynamic island model (DIM-SP) which can force each island to maintain a distinct sub-population, control the number of islands dynamically and start with a single sub-population. The proposed island model outperforms three other state-of-the-art island models on three baseline optimization problems: the job shop scheduling problem, the travelling salesman problem and the quadratic multiple knapsack problem.
Tasks
Published 2018-01-05
URL http://arxiv.org/abs/1801.01620v1
PDF http://arxiv.org/pdf/1801.01620v1.pdf
PWC https://paperswithcode.com/paper/dynamic-island-model-based-on-spectral
Repo
Framework
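
As a point of reference for what DIM-SP improves on, the sketch below is a plain fixed-size island model with ring migration for a minimization problem; the clustering step that keeps islands distinct and the dynamic island count are deliberately omitted, and the mutation and selection operators are simplistic placeholders.

```python
import numpy as np

def island_ga(fitness, dim, n_islands=4, pop=20, gens=100, migrate_every=10):
    """Baseline island-model GA (minimization): fixed islands, ring migration."""
    rng = np.random.default_rng(0)
    islands = [rng.standard_normal((pop, dim)) for _ in range(n_islands)]
    for g in range(gens):
        for i, P in enumerate(islands):
            scores = np.array([fitness(x) for x in P])
            parents = P[np.argsort(scores)[: pop // 2]]                 # keep best half
            children = parents + 0.1 * rng.standard_normal(parents.shape)
            islands[i] = np.vstack([parents, children])
        if (g + 1) % migrate_every == 0:
            # Ring migration: each island's best replaces a slot on the next island.
            bests = [min(P, key=fitness) for P in islands]
            for i in range(n_islands):
                islands[(i + 1) % n_islands][-1] = bests[i]
    return min((x for P in islands for x in P), key=fitness)
```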

Efficient and Robust Machine Learning for Real-World Systems

Title Efficient and Robust Machine Learning for Real-World Systems
Authors Franz Pernkopf, Wolfgang Roth, Matthias Zoehrer, Lukas Pfeifenberger, Guenther Schindler, Holger Froening, Sebastian Tschiatschek, Robert Peharz, Matthew Mattina, Zoubin Ghahramani
Abstract While machine learning is traditionally a resource-intensive task, embedded systems, autonomous navigation and the vision of the Internet of Things fuel the interest in resource-efficient approaches. These approaches require a carefully chosen trade-off between performance and resource consumption in terms of computation and energy. On top of this, it is crucial to treat uncertainty in a consistent manner in all but the simplest applications of machine learning systems. In particular, a desideratum for any real-world system is to be robust in the presence of outliers and corrupted data, as well as being ‘aware’ of its limits, i.e., the system should maintain and provide an uncertainty estimate over its own predictions. These complex demands are among the major challenges in current machine learning research and are key to ensuring a smooth transition of machine learning technology into everyday applications. In this article, we provide an overview of the current state of the art of machine learning techniques facilitating these real-world requirements. First, we provide a comprehensive review of resource efficiency in deep neural networks, with a focus on techniques for model size reduction, compression and reduced precision. These techniques can be applied during training or as post-processing and are widely used to reduce both computational complexity and memory footprint. As most (practical) neural networks are limited in their ways of treating uncertainty, we contrast them with probabilistic graphical models, which readily serve these desiderata by means of probabilistic inference. In that way, we provide an extensive overview of the current state of the art of robust and efficient machine learning for real-world systems.
Tasks Autonomous Navigation
Published 2018-12-05
URL http://arxiv.org/abs/1812.02240v1
PDF http://arxiv.org/pdf/1812.02240v1.pdf
PWC https://paperswithcode.com/paper/efficient-and-robust-machine-learning-for
Repo
Framework

Online Convolutional Sparse Coding with Sample-Dependent Dictionary

Title Online Convolutional Sparse Coding with Sample-Dependent Dictionary
Authors Yaqing Wang, Quanming Yao, James T. Kwok, Lionel M. Ni
Abstract Convolutional sparse coding (CSC) has been popularly used for the learning of shift-invariant dictionaries in image and signal processing. However, existing methods have limited scalability. In this paper, instead of convolving with a dictionary shared by all samples, we propose the use of a sample-dependent dictionary in which filters are obtained as linear combinations of a small set of base filters learned from the data. This added flexibility allows a large number of sample-dependent patterns to be captured, while the resultant model can still be efficiently learned by online learning. Extensive experimental results show that the proposed method outperforms existing CSC algorithms with significantly reduced time and space requirements.
Tasks
Published 2018-04-27
URL http://arxiv.org/abs/1804.10366v2
PDF http://arxiv.org/pdf/1804.10366v2.pdf
PWC https://paperswithcode.com/paper/online-convolutional-sparse-coding-with
Repo
Framework
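
The structural idea here, each sample's filters being linear combinations of a small set of shared base filters, fits in a few lines. The sketch below only reconstructs one sample from given base filters, mixing weights and sparse codes; learning those quantities online, which is the actual contribution, is not shown, and the shapes are illustrative.

```python
import numpy as np
from scipy.signal import convolve2d

def sample_dependent_reconstruction(base_filters, weights, codes):
    """Reconstruct a sample from sample-dependent convolutional filters.

    base_filters: (R, k, k)  shared base filters
    weights:      (K, R)     per-sample mixing weights (one row per filter)
    codes:        (K, H, W)  sparse feature maps for this sample

    Effective filter d_k = sum_r weights[k, r] * base_filters[r]; the sample is
    approximated by sum_k (d_k convolved with codes[k]).
    """
    filters = np.einsum('kr,rij->kij', weights, base_filters)   # (K, k, k)
    return sum(convolve2d(codes[k], filters[k], mode='same')
               for k in range(filters.shape[0]))
```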

Exploring Linear Relationship in Feature Map Subspace for ConvNets Compression

Title Exploring Linear Relationship in Feature Map Subspace for ConvNets Compression
Authors Dong Wang, Lei Zhou, Xueni Zhang, Xiao Bai, Jun Zhou
Abstract While research on convolutional neural networks (CNNs) is progressing quickly, the real-world deployment of these models is often limited by computing resources and memory constraints. In this paper, we address this issue by proposing a novel filter pruning method to compress and accelerate CNNs. Our work is based on the linear relationship identified in different feature map subspaces via visualization of feature maps. Such a linear relationship implies that the information in CNNs is redundant. Our method eliminates the redundancy in convolutional filters by applying subspace clustering to feature maps. In this way, most of the representative information in the network can be retained in each cluster. Therefore, our method provides an effective solution to filter pruning, for which most existing methods directly remove filters based on simple heuristics. The proposed method is independent of the network structure and can thus be adopted by any off-the-shelf deep learning library. Experiments on different networks and tasks show that our method outperforms existing techniques before fine-tuning, and achieves state-of-the-art results after fine-tuning.
Tasks
Published 2018-03-15
URL http://arxiv.org/abs/1803.05729v1
PDF http://arxiv.org/pdf/1803.05729v1.pdf
PWC https://paperswithcode.com/paper/exploring-linear-relationship-in-feature-map
Repo
Framework
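
The selection step can be sketched in a few lines: cluster a layer's output channels based on their responses over a batch and keep one representative per cluster. Plain k-means stands in here for the subspace clustering the paper actually uses, and the function below only picks which filters to keep; it does not rebuild the pruned network.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_filters_to_keep(feature_maps, n_keep):
    """feature_maps: (N, C, H, W) activations of one conv layer over N inputs.
    Returns the indices of n_keep channels, one representative per cluster."""
    N, C, H, W = feature_maps.shape
    X = feature_maps.transpose(1, 0, 2, 3).reshape(C, -1)   # one row per channel
    km = KMeans(n_clusters=n_keep, n_init=10).fit(X)
    keep = []
    for c in range(n_keep):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
        keep.append(int(members[np.argmin(dists)]))
    return sorted(keep)
```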

Matrix completion with deterministic pattern - a geometric perspective

Title Matrix completion with deterministic pattern - a geometric perspective
Authors Alexander Shapiro, Yao Xie, Rui Zhang
Abstract We consider the matrix completion problem with a deterministic pattern of observed entries. In this setting, we aim to answer the question: under what conditions is there an (at least locally) unique solution to the matrix completion problem, i.e., when is the underlying true matrix identifiable? We answer the question from a certain point of view and outline a geometric perspective. We give an algebraically verifiable sufficient condition, which we call the well-posedness condition, for the local uniqueness of minimum rank matrix completion (MRMC) solutions. We argue that this condition is necessary for local stability of MRMC solutions, and we show that the condition is generic using the characteristic rank. We also argue that low-rank approximation approaches are more stable than MRMC and further propose a sequential statistical testing procedure to determine the “true” rank from observed entries. Finally, we provide numerical examples aimed at verifying the validity of the presented theory.
Tasks Matrix Completion
Published 2018-01-31
URL http://arxiv.org/abs/1802.00047v4
PDF http://arxiv.org/pdf/1802.00047v4.pdf
PWC https://paperswithcode.com/paper/matrix-completion-with-deterministic-pattern
Repo
Framework

Semi-supervised acoustic model training for speech with code-switching

Title Semi-supervised acoustic model training for speech with code-switching
Authors Emre Yılmaz, Mitchell McLaren, Henk van den Heuvel, David A. van Leeuwen
Abstract In the FAME! project, we aim to develop an automatic speech recognition (ASR) system for Frisian-Dutch code-switching (CS) speech extracted from the archives of a local broadcaster, with the ultimate goal of building a spoken document retrieval system. Unlike Dutch, Frisian is a low-resourced language with a very limited amount of manually annotated speech data. In this paper, we describe several automatic annotation approaches to enable the use of a large amount of raw bilingual broadcast data for acoustic model training in a semi-supervised setting. Previously, it has been shown that the best-performing ASR system is obtained by two-stage multilingual deep neural network (DNN) training using 11 hours of manually annotated CS speech (reference) data together with speech data from other high-resourced languages. We compare the quality of the transcriptions provided by this bilingual ASR system with several other approaches that use a language recognition system to assign language labels to raw speech segments at the front-end and monolingual ASR resources for transcription. We further investigate automatic annotation of the speakers appearing in the raw broadcast data by first labeling them with (pseudo) speaker tags using a speaker diarization system and then linking them to the known speakers appearing in the reference data using a speaker recognition system. These speaker labels are essential for speaker-adaptive training in the proposed setting. We train acoustic models using the manually and automatically annotated data and run recognition experiments on the development and test data of the FAME! speech corpus to quantify the quality of the automatic annotations. The ASR and CS detection results demonstrate the potential of using automatic language and speaker tagging in semi-supervised bilingual acoustic model training.
Tasks Speaker Diarization, Speaker Recognition, Speech Recognition
Published 2018-10-23
URL http://arxiv.org/abs/1810.09699v1
PDF http://arxiv.org/pdf/1810.09699v1.pdf
PWC https://paperswithcode.com/paper/semi-supervised-acoustic-model-training-for
Repo
Framework

Matrix Completion with Nonconvex Regularization: Spectral Operators and Scalable Algorithms

Title Matrix Completion with Nonconvex Regularization: Spectral Operators and Scalable Algorithms
Authors Rahul Mazumder, Diego F. Saldana, Haolei Weng
Abstract In this paper, we study the popularly dubbed matrix completion problem, where the task is to “fill in” the unobserved entries of a matrix from a small subset of observed entries, under the assumption that the underlying matrix is of low rank. Our contributions herein enhance our prior work on nuclear norm regularized problems for matrix completion (Mazumder et al., 2010) by incorporating a continuum of nonconvex penalty functions between the convex nuclear norm and the nonconvex rank function. Inspired by SOFT-IMPUTE (Mazumder et al., 2010; Hastie et al., 2016), we propose NC-IMPUTE, an EM-flavored algorithmic framework for computing a family of nonconvex penalized matrix completion problems with warm-starts. We present a systematic study of the associated spectral thresholding operators, which play an important role in the overall algorithm, and we study the convergence properties of the algorithm. Using structured low-rank SVD computations, we demonstrate the computational scalability of our proposal for problems up to the Netflix size (approximately a $500,000 \times 20,000$ matrix with $10^8$ observed entries). We demonstrate that, on a wide range of synthetic and real data instances, our proposed nonconvex regularization framework leads to low-rank solutions with better predictive performance than those obtained from nuclear norm problems. Implementations of the algorithms proposed herein, written in the R programming language, are made available on GitHub.
Tasks Matrix Completion
Published 2018-01-24
URL https://arxiv.org/abs/1801.08227v2
PDF https://arxiv.org/pdf/1801.08227v2.pdf
PWC https://paperswithcode.com/paper/matrix-completion-with-nonconvex
Repo
Framework
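
Since NC-IMPUTE is described as building on SOFT-IMPUTE, the convex baseline is worth seeing in full. The sketch below is that baseline: fill the missing entries with the current estimate, then soft-threshold the singular values. NC-IMPUTE swaps the soft-thresholding step for nonconvex spectral thresholding operators and adds warm-starts and scalable SVDs, none of which is shown here.

```python
import numpy as np

def soft_impute(M, observed, lam, n_iters=100):
    """Convex SOFT-IMPUTE baseline for matrix completion.

    M:        (m, n) matrix (values at unobserved positions are ignored)
    observed: (m, n) boolean mask of observed entries
    lam:      nuclear-norm penalty controlling the amount of shrinkage
    """
    X = np.zeros_like(M, dtype=float)
    for _ in range(n_iters):
        filled = np.where(observed, M, X)              # observed data + current fill-in
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - lam, 0.0)                   # soft-threshold singular values
        X = (U * s) @ Vt
    return X
```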