May 6, 2019

2672 words 13 mins read

Paper Group ANR 413



Patient-Driven Privacy Control through Generalized Distillation

Title Patient-Driven Privacy Control through Generalized Distillation
Authors Z. Berkay Celik, David Lopez-Paz, Patrick McDaniel
Abstract The introduction of data analytics into medicine has changed the nature of patient treatment. In this setting, patients are asked to disclose personal information such as genetic markers, lifestyle habits, and clinical history. This data is then used by statistical models to predict personalized treatments. However, due to privacy concerns, patients often desire to withhold sensitive information. This self-censorship can impede proper diagnosis and treatment, which may lead to serious health complications and even death over time. In this paper, we present privacy distillation, a mechanism which allows patients to control the type and amount of information they disclose to healthcare providers for use in statistical models. At the same time, given a sufficient (though not complete) set of privacy-relevant information, it retains accuracy close to that of models with access to all patient data. We validate privacy distillation using a corpus of patients prescribed warfarin for personalized dosing. We use a deep neural network to implement privacy distillation for training and making dose predictions. We find that privacy distillation with sufficient privacy-relevant information i) retains accuracy almost as good as having all patient data (only 3% worse), and ii) is effective at preventing errors that introduce health-related risks (only 3.9% worse under- or over-prescriptions).
Tasks
Published 2016-11-26
URL http://arxiv.org/abs/1611.08648v2
PDF http://arxiv.org/pdf/1611.08648v2.pdf
PWC https://paperswithcode.com/paper/patient-driven-privacy-control-through
Repo
Framework
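As a rough illustration of the generalized distillation mechanism described above, the sketch below trains a "teacher" on the full feature set and a "student" only on the disclosed features, using the teacher's soft predictions. The simulated data, feature split, network sizes, and blending weight are illustrative assumptions, not the authors' setup.

```python
# Hypothetical sketch of generalized distillation for privacy control:
# a "teacher" sees all features (including sensitive ones), a "student"
# sees only the features the patient chooses to disclose and learns from
# the teacher's soft predictions. Data and column split are made up.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_full = rng.normal(size=(1000, 20))                                  # all patient features
y = X_full @ rng.normal(size=20) + rng.normal(scale=0.1, size=1000)  # e.g. dose targets
disclosed = list(range(12))                                           # indices the patient shares

# 1) Teacher: trained on the complete feature set.
teacher = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000).fit(X_full, y)
soft_targets = teacher.predict(X_full)

# 2) Student: trained on disclosed features only, imitating the teacher.
#    lam trades off the hard labels against the teacher's soft targets.
lam = 0.5
blended = lam * y + (1 - lam) * soft_targets
student = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000).fit(
    X_full[:, disclosed], blended)

# At prediction time only the disclosed features are needed.
print(student.predict(X_full[:5, disclosed]))
```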

Adaptive Learning Rate via Covariance Matrix Based Preconditioning for Deep Neural Networks

Title Adaptive Learning Rate via Covariance Matrix Based Preconditioning for Deep Neural Networks
Authors Yasutoshi Ida, Yasuhiro Fujiwara, Sotetsu Iwamura
Abstract Adaptive learning rate algorithms such as RMSProp are widely used for training deep neural networks. RMSProp offers efficient training since it uses first-order gradients to approximate Hessian-based preconditioning. However, since the first-order gradients include noise caused by stochastic optimization, the approximation may be inaccurate. In this paper, we propose a novel adaptive learning rate algorithm called SDProp. Its key idea is to handle this noise effectively through preconditioning based on the covariance matrix of the gradients. For various neural networks, our approach is more efficient and effective than RMSProp and its variant.
Tasks Stochastic Optimization
Published 2016-05-31
URL http://arxiv.org/abs/1605.09593v2
PDF http://arxiv.org/pdf/1605.09593v2.pdf
PWC https://paperswithcode.com/paper/adaptive-learning-rate-via-covariance-matrix
Repo
Framework
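A minimal sketch of the general idea, diagonal covariance-based preconditioning of noisy stochastic gradients, follows; the exact SDProp update rule and hyperparameters may differ, so the decay rate and update form here are assumptions.

```python
# Minimal sketch of covariance-based diagonal preconditioning in the spirit
# of SDProp; treat the decay rates and the update form as assumptions rather
# than the published algorithm.
import numpy as np

def sdprop_like_step(theta, grad, state, lr=1e-3, gamma=0.99, eps=1e-8):
    """One update: track a running mean and (diagonal) variance of the gradient,
    then precondition by the inverse square root of that variance, which damps
    directions where the stochastic gradient is noisy."""
    mu, var = state
    var = gamma * var + gamma * (1.0 - gamma) * (grad - mu) ** 2  # running covariance diagonal
    mu = gamma * mu + (1.0 - gamma) * grad                        # running mean
    theta = theta - lr * grad / (np.sqrt(var) + eps)
    return theta, (mu, var)

theta = np.zeros(5)
state = (np.zeros(5), np.zeros(5))
for _ in range(200):
    grad = 2 * (theta - 1.0) + np.random.normal(scale=0.5, size=5)  # noisy gradient of a quadratic
    theta, state = sdprop_like_step(theta, grad, state)
print(theta)  # slowly approaches the minimizer at 1.0
```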

A Persona-Based Neural Conversation Model

Title A Persona-Based Neural Conversation Model
Authors Jiwei Li, Michel Galley, Chris Brockett, Georgios P. Spithourakis, Jianfeng Gao, Bill Dolan
Abstract We present persona-based models for handling the issue of speaker consistency in neural response generation. A speaker model encodes personas in distributed embeddings that capture individual characteristics such as background information and speaking style. A dyadic speaker-addressee model captures properties of interactions between two interlocutors. Our models yield qualitative performance improvements in both perplexity and BLEU scores over baseline sequence-to-sequence models, with similar gains in speaker consistency as measured by human judges.
Tasks
Published 2016-03-19
URL http://arxiv.org/abs/1603.06155v2
PDF http://arxiv.org/pdf/1603.06155v2.pdf
PWC https://paperswithcode.com/paper/a-persona-based-neural-conversation-model
Repo
Framework
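The speaker-model idea can be sketched as follows: a distributed persona embedding is injected alongside the word embedding at every decoder step. The sizes, the GRU cell, and the module layout are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code) of a persona-conditioned decoder:
# a learned speaker embedding is concatenated with the word embedding at each step.
import torch
import torch.nn as nn

class PersonaDecoder(nn.Module):
    def __init__(self, vocab_size, n_speakers, emb_dim=64, hid_dim=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.persona_emb = nn.Embedding(n_speakers, emb_dim)   # one vector per speaker
        self.rnn = nn.GRU(2 * emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, tokens, speaker_ids, h0=None):
        # tokens: (batch, seq_len), speaker_ids: (batch,)
        w = self.word_emb(tokens)                                # (B, T, E)
        p = self.persona_emb(speaker_ids).unsqueeze(1).expand_as(w)
        h, _ = self.rnn(torch.cat([w, p], dim=-1), h0)           # condition every step on the persona
        return self.out(h)                                       # next-token logits

dec = PersonaDecoder(vocab_size=1000, n_speakers=10)
logits = dec(torch.randint(0, 1000, (2, 7)), torch.tensor([3, 5]))
print(logits.shape)  # torch.Size([2, 7, 1000])
```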

ResFeats: Residual Network Based Features for Image Classification

Title ResFeats: Residual Network Based Features for Image Classification
Authors Ammar Mahmood, Mohammed Bennamoun, Senjian An, Ferdous Sohel
Abstract Deep residual networks have recently emerged as the state-of-the-art architecture in image segmentation and object detection. In this paper, we propose new image features (called ResFeats) extracted from the last convolutional layer of deep residual networks pre-trained on ImageNet. We propose to use ResFeats for diverse image classification tasks, namely object classification, scene classification and coral classification, and show that ResFeats consistently perform better than their CNN counterparts on these classification tasks. Since ResFeats are large feature vectors, we propose to use PCA for dimensionality reduction. Experimental results are provided to show the effectiveness of ResFeats, with state-of-the-art classification accuracies on the Caltech-101, Caltech-256 and MLC datasets and a significant performance improvement on the MIT-67 dataset compared to the widely used CNN features.
Tasks Dimensionality Reduction, Image Classification, Object Classification, Object Detection, Scene Classification, Semantic Segmentation
Published 2016-11-21
URL http://arxiv.org/abs/1611.06656v1
PDF http://arxiv.org/pdf/1611.06656v1.pdf
PWC https://paperswithcode.com/paper/resfeats-residual-network-based-features-for
Repo
Framework
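A sketch of the described pipeline, assuming an ImageNet-pretrained ResNet from a recent torchvision (resnet50 here), pooled last-convolutional-block activations as the ResFeats, PCA for dimensionality reduction, and a linear SVM on top; the model choice, sizes, and dummy data are assumptions.

```python
# Sketch of a ResFeats-style pipeline: deep features from a pretrained ResNet,
# PCA-reduced, then a linear classifier. Requires torchvision >= 0.13 for the
# weights enum; the data below is random stand-in imagery.
import torch
import torchvision.models as models
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
resnet.eval()
# Keep everything up to (and including) the global average pool, i.e. drop the fc layer.
backbone = torch.nn.Sequential(*list(resnet.children())[:-1])

def resfeats(images):
    """images: float tensor (N, 3, 224, 224), already normalized."""
    with torch.no_grad():
        feats = backbone(images)           # (N, 2048, 1, 1)
    return feats.flatten(1).numpy()        # (N, 2048)

# Hypothetical usage on a tiny labelled batch:
images = torch.randn(8, 3, 224, 224)
labels = [0, 1, 0, 1, 0, 1, 0, 1]
X = PCA(n_components=4).fit_transform(resfeats(images))  # dimensionality reduction
clf = LinearSVC().fit(X, labels)
print(clf.predict(X))
```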

Towards a Cognitive Routing Engine for Software Defined Networks

Title Towards a Cognitive Routing Engine for Software Defined Networks
Authors Frederic Francois, Erol Gelenbe
Abstract Most Software Defined Networks (SDN) traffic engineering applications use excessive and frequent global monitoring in order to find the optimal Quality-of-Service (QoS) paths for the current state of the network. In this work, we present the motivations, architecture and initial evaluation of an SDN application called the Cognitive Routing Engine (CRE), which is able to find near-optimal paths for a user-specified QoS while using a very small monitoring overhead compared to the global monitoring that would be required to guarantee that optimal paths are found. Smaller monitoring overheads bring the advantage of shorter response times for the SDN controllers and switches. The initial evaluation of CRE on an SDN representation of the GEANT academic network shows that it is possible to find near-optimal paths with a small optimality gap of 1.65% while using 9.5 times less monitoring.
Tasks
Published 2016-02-01
URL http://arxiv.org/abs/1602.00487v1
PDF http://arxiv.org/pdf/1602.00487v1.pdf
PWC https://paperswithcode.com/paper/towards-a-cognitive-routing-engine-for
Repo
Framework

Exploration Potential

Title Exploration Potential
Authors Jan Leike
Abstract We introduce exploration potential, a quantity that measures how much a reinforcement learning agent has explored its environment class. In contrast to information gain, exploration potential takes the problem’s reward structure into account. This leads to an exploration criterion that is both necessary and sufficient for asymptotic optimality (learning to act optimally across the entire environment class). Our experiments in multi-armed bandits use exploration potential to illustrate how different algorithms make the tradeoff between exploration and exploitation.
Tasks Multi-Armed Bandits
Published 2016-09-16
URL http://arxiv.org/abs/1609.04994v3
PDF http://arxiv.org/pdf/1609.04994v3.pdf
PWC https://paperswithcode.com/paper/exploration-potential
Repo
Framework
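The exploration-potential quantity itself is not reproduced here; purely as a generic illustration of the exploration-exploitation tradeoff that the paper's bandit experiments study, the sketch below runs the standard UCB1 algorithm on a toy multi-armed bandit.

```python
# Standard UCB1 on a toy bandit, shown only to illustrate the
# exploration/exploitation tradeoff; this is not the exploration-potential
# criterion from the paper.
import numpy as np

def ucb1(true_means, horizon=2000, seed=0):
    rng = np.random.default_rng(seed)
    k = len(true_means)
    counts, sums = np.zeros(k), np.zeros(k)
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1                                   # pull each arm once first
        else:
            bonus = np.sqrt(2 * np.log(t) / counts)       # exploration bonus
            arm = int(np.argmax(sums / counts + bonus))   # optimism in the face of uncertainty
        reward = rng.normal(true_means[arm], 1.0)
        counts[arm] += 1
        sums[arm] += reward
    return counts

print(ucb1([0.1, 0.5, 0.9]))  # most pulls concentrate on the best arm
```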

Clickstream analysis for crowd-based object segmentation with confidence

Title Clickstream analysis for crowd-based object segmentation with confidence
Authors Eric Heim, Alexander Seitel, Jonas Andrulis, Fabian Isensee, Christian Stock, Tobias Ross, Lena Maier-Hein
Abstract With the rapidly increasing interest in machine learning based solutions for automatic image annotation, the availability of reference annotations for algorithm training is one of the major bottlenecks in the field. Crowdsourcing has evolved as a valuable option for low-cost and large-scale data annotation; however, quality control remains a major issue that needs to be addressed. To our knowledge, we are the first to analyze the annotation process itself to improve crowd-sourced image segmentation. Our method involves training a regressor to estimate the quality of a segmentation from the annotator’s clickstream data. The quality estimate can be used to identify spam and to weight individual annotations by their (estimated) quality when merging multiple segmentations of one image. Using a total of 29,000 crowd annotations performed on publicly available data of different object classes, we show that (1) our method is highly accurate in estimating segmentation quality from clickstream data, and (2) it outperforms state-of-the-art methods for merging multiple annotations. As the regressor does not need to be trained on the object class it is applied to, it can be regarded as a low-cost option for quality control and confidence analysis in the context of crowd-based image annotation.
Tasks Semantic Segmentation
Published 2016-11-25
URL http://arxiv.org/abs/1611.08527v4
PDF http://arxiv.org/pdf/1611.08527v4.pdf
PWC https://paperswithcode.com/paper/clickstream-analysis-for-crowd-based-object
Repo
Framework
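A hedged sketch of the described pipeline: a regressor maps per-annotation clickstream features to an estimated segmentation quality, and those estimates weight the individual masks when merging. The feature set, the random-forest regressor, and the simulated data are illustrative assumptions.

```python
# Sketch: estimate annotation quality from (hypothetical) clickstream features,
# then fuse crowd masks with quality-weighted averaging.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Training data: per-annotation clickstream features (e.g. number of clicks,
# drawing time, click spacing) with a known quality score (e.g. DICE vs. reference).
X_clicks = rng.normal(size=(500, 3))
quality = np.clip(0.7 + 0.1 * X_clicks[:, 0] + rng.normal(scale=0.05, size=500), 0, 1)
quality_model = RandomForestRegressor(n_estimators=100).fit(X_clicks, quality)

def fuse_masks(masks, clickstream_feats, threshold=0.5):
    """Weight each crowd mask by its estimated quality, then threshold."""
    w = quality_model.predict(clickstream_feats)          # one weight per annotator
    fused = np.tensordot(w, masks, axes=1) / w.sum()      # weighted average of masks
    return fused > threshold

masks = (rng.random((4, 32, 32)) > 0.5).astype(float)     # 4 crowd segmentations
print(fuse_masks(masks, rng.normal(size=(4, 3))).shape)   # (32, 32) fused mask
```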

Information-Theoretic Lower Bounds for Recovery of Diffusion Network Structures

Title Information-Theoretic Lower Bounds for Recovery of Diffusion Network Structures
Authors Keehwan Park, Jean Honorio
Abstract We study the information-theoretic lower bound of the sample complexity of the correct recovery of diffusion network structures. We introduce a discrete-time diffusion model based on the Independent Cascade model for which we obtain a lower bound of order $\Omega(k \log p)$, for directed graphs of $p$ nodes, and at most $k$ parents per node. Next, we introduce a continuous-time diffusion model, for which a similar lower bound of order $\Omega(k \log p)$ is obtained. Our results show that the algorithm of Pouget-Abadie et al. is statistically optimal for the discrete-time regime. Our work also opens the question of whether it is possible to devise an optimal algorithm for the continuous-time regime.
Tasks
Published 2016-01-28
URL http://arxiv.org/abs/1601.07932v2
PDF http://arxiv.org/pdf/1601.07932v2.pdf
PWC https://paperswithcode.com/paper/information-theoretic-lower-bounds-for
Repo
Framework

Synthesizing Dynamic Patterns by Spatial-Temporal Generative ConvNet

Title Synthesizing Dynamic Patterns by Spatial-Temporal Generative ConvNet
Authors Jianwen Xie, Song-Chun Zhu, Ying Nian Wu
Abstract Video sequences contain rich dynamic patterns, such as dynamic texture patterns that exhibit stationarity in the temporal domain, and action patterns that are non-stationary in either spatial or temporal domain. We show that a spatial-temporal generative ConvNet can be used to model and synthesize dynamic patterns. The model defines a probability distribution on the video sequence, and the log probability is defined by a spatial-temporal ConvNet that consists of multiple layers of spatial-temporal filters to capture spatial-temporal patterns of different scales. The model can be learned from the training video sequences by an “analysis by synthesis” learning algorithm that iterates the following two steps. Step 1 synthesizes video sequences from the currently learned model. Step 2 then updates the model parameters based on the difference between the synthesized video sequences and the observed training sequences. We show that the learning algorithm can synthesize realistic dynamic patterns.
Tasks
Published 2016-06-03
URL http://arxiv.org/abs/1606.00972v2
PDF http://arxiv.org/pdf/1606.00972v2.pdf
PWC https://paperswithcode.com/paper/synthesizing-dynamic-patterns-by-spatial
Repo
Framework
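The two-step "analysis by synthesis" loop can be sketched as below with a tiny 3D ConvNet as the spatial-temporal scoring function; the Langevin-style gradient synthesis, step sizes, and architecture are assumptions for illustration rather than the authors' exact algorithm.

```python
# Sketch of the analysis-by-synthesis loop: Step 1 synthesizes clips from the
# current model, Step 2 updates parameters from the difference between observed
# and synthesized clips. Sizes and the sampler are illustrative assumptions.
import torch
import torch.nn as nn

score_net = nn.Sequential(            # maps a video clip to a scalar score
    nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(8, 1))
opt = torch.optim.Adam(score_net.parameters(), lr=1e-3)

observed = torch.randn(4, 1, 8, 16, 16)   # (batch, channel, time, height, width)

for it in range(10):
    # Step 1: synthesize clips by gradient ascent on the score (short-run,
    # Langevin-like, with injected noise).
    synth = torch.randn_like(observed).requires_grad_(True)
    for _ in range(20):
        s = score_net(synth).sum()
        g, = torch.autograd.grad(s, synth)
        synth = (synth + 0.01 * g + 0.005 * torch.randn_like(synth)).detach().requires_grad_(True)
    # Step 2: raise the score of observed clips relative to synthesized ones.
    loss = score_net(synth.detach()).mean() - score_net(observed).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```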

Large scale multi-objective optimization: Theoretical and practical challenges

Title Large scale multi-objective optimization: Theoretical and practical challenges
Authors Kinjal Basu, Ankan Saha, Shaunak Chatterjee
Abstract Multi-objective optimization (MOO) is a well-studied problem for several important recommendation problems. While multiple approaches have been proposed, in this work, we focus on using constrained optimization formulations (e.g., quadratic and linear programs) to formulate and solve MOO problems. This approach can be used to pick desired operating points on the trade-off curve between multiple objectives. It also works well for internet applications which serve large volumes of online traffic, by working with Lagrangian duality formulation to connect dual solutions (computed offline) with the primal solutions (computed online). We identify some key limitations of this approach – namely the inability to handle user and item level constraints, scalability considerations and variance of dual estimates introduced by sampling processes. We propose solutions for each of the problems and demonstrate how through these solutions we significantly advance the state-of-the-art in this realm. Our proposed methods can exactly handle user and item (and other such local) constraints, achieve a $100\times$ scalability boost over existing packages in R and reduce variance of dual estimates by two orders of magnitude.
Tasks
Published 2016-02-09
URL http://arxiv.org/abs/1602.03131v2
PDF http://arxiv.org/pdf/1602.03131v2.pdf
PWC https://paperswithcode.com/paper/large-scale-multi-objective-optimization
Repo
Framework
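The constrained-optimization view of picking an operating point on the trade-off curve can be illustrated with a small linear program; the objective vectors, budget, and floor below are made-up data, and this sketch does not include the paper's duality-based offline/online split or the proposed variance-reduction techniques.

```python
# Toy constrained-MOO operating point: maximize one objective subject to a
# floor on a second objective, as an LP relaxation.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n = 100                               # items to serve
clicks = rng.random(n)                # primary objective coefficients
revenue = rng.random(n)               # secondary objective coefficients
budget = 20                           # at most 20 items can be served
rev_floor = 12.0                      # desired point on the trade-off curve

# Variables x_i in [0, 1]: serve item i or not (LP relaxation).
res = linprog(
    c=-clicks,                                 # maximize clicks -> minimize -clicks
    A_ub=np.vstack([np.ones(n), -revenue]),    # sum(x) <= budget, revenue >= rev_floor
    b_ub=np.array([budget, -rev_floor]),
    bounds=[(0, 1)] * n,
    method="highs")
print(res.status, clicks @ res.x, revenue @ res.x)
```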

Frequency Analysis of Temporal Graph Signals

Title Frequency Analysis of Temporal Graph Signals
Authors Andreas Loukas, Damien Foucard
Abstract This letter extends the concept of graph-frequency to graph signals that evolve with time. Our goal is to generalize and, in fact, unify the familiar concepts from time- and graph-frequency analysis. To this end, we study a joint temporal and graph Fourier transform (JFT) and demonstrate its attractive properties. We build on our results to create filters which act on the joint (temporal and graph) frequency domain, and show how these can be used to perform interference cancellation. The proposed algorithms are distributed, have linear complexity, and can approximate any desired joint filtering objective.
Tasks
Published 2016-02-14
URL http://arxiv.org/abs/1602.04434v1
PDF http://arxiv.org/pdf/1602.04434v1.pdf
PWC https://paperswithcode.com/paper/frequency-analysis-of-temporal-graph-signals
Repo
Framework
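Under the usual definitions, a joint temporal and graph Fourier transform amounts to a graph Fourier transform (projection onto Laplacian eigenvectors) along the vertex dimension combined with a DFT along time; the sketch below implements that combination on an arbitrary toy graph and signal.

```python
# Sketch of a joint temporal and graph Fourier transform (JFT) for a
# time-varying graph signal.
import numpy as np

def jft(X, L):
    """X: (n_vertices, n_timesteps) signal; L: (n, n) graph Laplacian.
    Returns the joint spectrum: GFT over vertices, DFT over time."""
    _, U = np.linalg.eigh(L)              # Laplacian eigenvectors = graph Fourier basis
    return np.fft.fft(U.T @ X, axis=1)    # DFT along the temporal axis

# Small path graph on 4 vertices, 8 time steps.
A = np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)
L = np.diag(A.sum(axis=1)) - A
X = np.random.default_rng(0).normal(size=(4, 8))
X_hat = jft(X, L)
print(X_hat.shape)   # (4, 8): joint (graph-frequency, temporal-frequency) coefficients
```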

High-dimensional regression adjustments in randomized experiments

Title High-dimensional regression adjustments in randomized experiments
Authors Stefan Wager, Wenfei Du, Jonathan Taylor, Robert Tibshirani
Abstract We study the problem of treatment effect estimation in randomized experiments with high-dimensional covariate information, and show that essentially any risk-consistent regression adjustment can be used to obtain efficient estimates of the average treatment effect. Our results considerably extend the range of settings where high-dimensional regression adjustments are guaranteed to provide valid inference about the population average treatment effect. We then propose cross-estimation, a simple method for obtaining finite-sample-unbiased treatment effect estimates that leverages high-dimensional regression adjustments. Our method can be used when the regression model is estimated using the lasso, the elastic net, subset selection, etc. Finally, we extend our analysis to allow for adaptive specification search via cross-validation, and flexible non-parametric regression adjustments with machine learning methods such as random forests or neural networks.
Tasks
Published 2016-07-22
URL http://arxiv.org/abs/1607.06801v3
PDF http://arxiv.org/pdf/1607.06801v3.pdf
PWC https://paperswithcode.com/paper/high-dimensional-regression-adjustments-in
Repo
Framework
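One simple cross-fitted variant of a lasso regression adjustment, in the spirit of cross-estimation, is sketched below on simulated data; it is not necessarily the authors' exact estimator.

```python
# Cross-fitted lasso regression adjustment for a randomized experiment:
# outcome models are fit on the complementary fold, separately by arm, and the
# adjusted average treatment effect is the mean difference of their predictions.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, p, tau = 400, 50, 2.0
X = rng.normal(size=(n, p))
W = rng.integers(0, 2, size=n)                       # randomized treatment assignment
Y = X[:, 0] + tau * W + rng.normal(size=n)

preds1, preds0 = np.zeros(n), np.zeros(n)
for train, test in KFold(n_splits=2, shuffle=True, random_state=0).split(X):
    m1 = LassoCV(cv=5).fit(X[train][W[train] == 1], Y[train][W[train] == 1])
    m0 = LassoCV(cv=5).fit(X[train][W[train] == 0], Y[train][W[train] == 0])
    preds1[test], preds0[test] = m1.predict(X[test]), m0.predict(X[test])

tau_hat = np.mean(preds1 - preds0)                   # adjusted ATE estimate
print(round(tau_hat, 2))                             # close to the true effect 2.0
```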

Removing Clouds and Recovering Ground Observations in Satellite Image Sequences via Temporally Contiguous Robust Matrix Completion

Title Removing Clouds and Recovering Ground Observations in Satellite Image Sequences via Temporally Contiguous Robust Matrix Completion
Authors Jialei Wang, Peder A. Olsen, Andrew R. Conn, Aurelie C. Lozano
Abstract We consider the problem of removing and replacing clouds in satellite image sequences, which has a wide range of applications in remote sensing. Our approach first detects and removes the cloud-contaminated parts of the image sequences. It then recovers the missing scenes from the clean parts using the proposed “TECROMAC” (TEmporally Contiguous RObust MAtrix Completion) objective. The objective function balances temporal smoothness with a low-rank solution while staying close to the original observations. The matrix, whose rows are pixels and whose columns are days in the image sequence, has low rank because the pixels reflect land types such as vegetation, roads and lakes, and as a result there is relatively little variation. We provide efficient optimization algorithms for TECROMAC, so that we can exploit images containing millions of pixels. Empirical results on real satellite image sequences, as well as simulated data, demonstrate that our approach is able to recover the underlying images from heavily cloud-contaminated observations.
Tasks Matrix Completion
Published 2016-04-13
URL http://arxiv.org/abs/1604.03915v1
PDF http://arxiv.org/pdf/1604.03915v1.pdf
PWC https://paperswithcode.com/paper/removing-clouds-and-recovering-ground
Repo
Framework
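A generic temporally-smooth robust matrix completion sketch in the spirit of the TECROMAC objective is given below: the observed pixel-by-day matrix is modeled as a low-rank background plus sparse cloud corruption, with a temporal smoothness penalty on the low-rank part. The crude proximal alternation, penalty weights, and step size are assumptions, not the authors' solver.

```python
# Toy temporally-smooth robust matrix completion via proximal alternation.
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def soft(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0)

def tecromac_like(M, mask, lam_nuc=1.0, lam_sp=0.1, lam_tmp=0.1, step=0.5, iters=200):
    L, S = np.zeros_like(M), np.zeros_like(M)
    D = np.diff(np.eye(M.shape[1]), axis=1)         # temporal difference operator (days)
    for _ in range(iters):
        R = mask * (L + S - M)                      # data-fit gradient on observed entries
        L = svt(L - step * (R + lam_tmp * (L @ D) @ D.T), step * lam_nuc)
        S = soft(S - step * mask * (L + S - M), step * lam_sp)
    return L, S

rng = np.random.default_rng(0)
truth = np.outer(rng.random(60), rng.random(30))    # 60 pixels x 30 days, rank-1 "ground"
M = truth + (rng.random((60, 30)) < 0.1) * 2.0      # sparse "cloud" corruption
mask = rng.random((60, 30)) < 0.8                   # observed entries
L, S = tecromac_like(M * mask, mask)
print(np.linalg.norm(mask * (L - truth)) / np.linalg.norm(mask * truth))  # relative error
```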

A Sparse Representation of Complete Local Binary Pattern Histogram for Human Face Recognition

Title A Sparse Representation of Complete Local Binary Pattern Histogram for Human Face Recognition
Authors Mawloud Guermoui, Mohamed L. Mekhalfi
Abstract Human face recognition has been a long-standing problem in computer vision and pattern recognition. Facial analysis can be viewed as a two-fold problem, namely (i) facial representation and (ii) classification. So far, many face representations have been proposed; a well-known method is the Local Binary Pattern (LBP), which has witnessed growing interest. In this respect, we treat in this paper the issues of face representation as well as classification in a novel manner. On the one hand, we use a variant of LBP, the so-called Complete Local Binary Pattern (CLBP), which differs from the basic LBP by coding a given local region using a given central pixel together with sign and magnitude differences. However, most LBP-based descriptors use a fixed grid to code a given facial image, a technique that is, in most cases, not robust to pose variation and misalignment. To cope with this issue, a representative Multi-Resolution Histogram (MH) decomposition is adopted in our work. On the other hand, having extracted the histograms of the considered images, we exploit their sparsity to construct a so-called Sparse Representation Classifier (SRC) for further face classification. Experiments have been conducted on the ORL face database, and the results point out the superiority of our scheme over other popular state-of-the-art techniques.
Tasks Face Recognition
Published 2016-05-31
URL http://arxiv.org/abs/1605.09584v1
PDF http://arxiv.org/pdf/1605.09584v1.pdf
PWC https://paperswithcode.com/paper/a-sparse-representation-of-complete-local
Repo
Framework
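The two-stage idea, LBP-histogram descriptors followed by a sparse representation classifier, can be sketched as below; plain uniform LBP stands in for the paper's CLBP, the lasso-based sparse coding is an assumption, and the random arrays are stand-ins for ORL faces.

```python
# Sketch: LBP-histogram face descriptors + a sparse representation classifier (SRC).
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.linear_model import Lasso

def lbp_hist(img, P=8, R=1):
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(0)
train_imgs = rng.random((20, 64, 64))        # 20 gallery "faces"
train_labels = np.repeat(np.arange(4), 5)    # 4 subjects x 5 images
A = np.stack([lbp_hist(im) for im in train_imgs], axis=1)   # dictionary: (dim, n_train)

def src_predict(img, alpha=1e-3):
    y = lbp_hist(img)
    x = Lasso(alpha=alpha, positive=True, max_iter=10000).fit(A, y).coef_  # sparse code
    residuals = [np.linalg.norm(y - A[:, train_labels == c] @ x[train_labels == c])
                 for c in np.unique(train_labels)]
    return int(np.argmin(residuals))         # class whose atoms reconstruct y best

print(src_predict(train_imgs[7]))            # index of the predicted subject (0-3)
```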

Interactive Image Segmentation Using Constrained Dominant Sets

Title Interactive Image Segmentation Using Constrained Dominant Sets
Authors Eyasu Zemene, Marcello Pelillo
Abstract We propose a new approach to interactive image segmentation based on some properties of a family of quadratic optimization problems related to dominant sets, a well-known graph-theoretic notion of a cluster which generalizes the concept of a maximal clique to edge-weighted graphs. In particular, we show that by properly controlling a regularization parameter which determines the structure and the scale of the underlying problem, we are in a position to extract groups of dominant-set clusters which are constrained to contain user-selected elements. The resulting algorithm can deal naturally with any type of input modality, including scribbles, sloppy contours, and bounding boxes, and is able to robustly handle noisy annotations on the part of the user. Experiments on standard benchmark datasets show the effectiveness of our approach as compared to state-of-the-art algorithms on a variety of natural images under several input conditions.
Tasks Semantic Segmentation
Published 2016-08-01
URL http://arxiv.org/abs/1608.00641v2
PDF http://arxiv.org/pdf/1608.00641v2.pdf
PWC https://paperswithcode.com/paper/interactive-image-segmentation-using
Repo
Framework
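Dominant-set extraction by discrete replicator dynamics on an affinity matrix can be sketched as below; the paper's constrained variant additionally penalizes vertices outside the user-selected seed set (via the regularization parameter mentioned in the abstract), which is only noted in a comment here, and the toy affinity matrix is made up.

```python
# Sketch of plain dominant-set extraction via replicator dynamics.
# The constrained variant would modify the payoff matrix so that the
# extracted cluster is forced to contain user-selected vertices.
import numpy as np

def dominant_set(A, iters=500, tol=1e-8):
    """A: symmetric non-negative affinity matrix with zero diagonal.
    Returns the characteristic vector x; its support is the dominant set."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)                 # start from the barycenter of the simplex
    for _ in range(iters):
        Ax = A @ x
        x_new = x * Ax / (x @ Ax)           # replicator dynamics update (stays on the simplex)
        if np.linalg.norm(x_new - x, 1) < tol:
            break
        x = x_new
    return x

# Two obvious clusters: vertices {0,1,2} are tightly connected, {3,4} weakly attached.
A = np.array([[0.0, 0.9, 0.8, 0.1, 0.0],
              [0.9, 0.0, 0.85, 0.0, 0.1],
              [0.8, 0.85, 0.0, 0.1, 0.0],
              [0.1, 0.0, 0.1, 0.0, 0.2],
              [0.0, 0.1, 0.0, 0.2, 0.0]])
x = dominant_set(A)
print(np.where(x > 1e-3)[0])                # support ≈ the dominant cluster {0, 1, 2}
```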