July 28, 2019

3083 words 15 mins read

Paper Group ANR 179

Representation and Reinforcement Learning for Personalized Glycemic Control in Septic Patients

Title Representation and Reinforcement Learning for Personalized Glycemic Control in Septic Patients
Authors Wei-Hung Weng, Mingwu Gao, Ze He, Susu Yan, Peter Szolovits
Abstract Glycemic control is essential for critical care. However, it is a challenging task because there has been no study on personalized optimal strategies for glycemic control. This work aims to learn personalized optimal glycemic trajectories for severely ill septic patients by learning data-driven policies to identify optimal targeted blood glucose levels as a reference for clinicians. We encoded patient states using a sparse autoencoder and adopted a reinforcement learning paradigm using policy iteration to learn the optimal policy from data. We also estimated the expected return following the policy learned from the recorded glycemic trajectories, which yielded a function indicating the relationship between real blood glucose values and 90-day mortality rates. This suggests that the learned optimal policy could reduce the patients’ estimated 90-day mortality rate by 6.3%, from 31% to 24.7%. The result demonstrates that reinforcement learning with appropriate patient state encoding can potentially provide optimal glycemic trajectories and allow clinicians to design a personalized strategy for glycemic control in septic patients.
Tasks
Published 2017-12-02
URL http://arxiv.org/abs/1712.00654v1
PDF http://arxiv.org/pdf/1712.00654v1.pdf
PWC https://paperswithcode.com/paper/representation-and-reinforcement-learning-for
Repo
Framework
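
Below is a minimal sketch of the policy-iteration loop at the core of the approach above. The patient-state encoding (the sparse autoencoder) is omitted, and the MDP here is a random toy placeholder, not the septic-patient cohort or reward design from the paper.

```python
# Toy tabular policy iteration. States, actions, transitions and rewards are
# synthetic placeholders standing in for autoencoded patient states and
# glycemic-control actions.
import numpy as np

n_states, n_actions, gamma = 5, 3, 0.99
rng = np.random.default_rng(0)

# P[s, a, s'] = transition probability; R[s, a] = expected reward (toy values).
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.normal(size=(n_states, n_actions))

policy = np.zeros(n_states, dtype=int)
for _ in range(100):
    # Policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly.
    P_pi = P[np.arange(n_states), policy]
    R_pi = R[np.arange(n_states), policy]
    V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
    # Policy improvement: act greedily w.r.t. one-step lookahead Q-values.
    Q = R + gamma * P @ V
    new_policy = Q.argmax(axis=1)
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy

print("greedy policy per state:", policy)
```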

Algorithmic learning of probability distributions from random data in the limit

Title Algorithmic learning of probability distributions from random data in the limit
Authors George Barmpalias, Frank Stephan
Abstract We study the problem of identifying a probability distribution for some given randomly sampled data in the limit, in the context of algorithmic learning theory as proposed recently by Vitányi and Chater. We show that there exists a computable partial learner for the computable probability measures, while by Bienvenu, Monin and Shen it is known that there is no computable learner for the computable probability measures. Our main result is the characterization of the oracles that compute explanatory learners for the computable (continuous) probability measures as the high oracles. This provides an analogue of a well-known result of Adleman and Blum in the context of learning computable probability distributions. We also discuss related learning notions such as behaviorally correct learning and other variations of explanatory learning, in the context of learning probability distributions from data.
Tasks
Published 2017-10-31
URL http://arxiv.org/abs/1710.11303v3
PDF http://arxiv.org/pdf/1710.11303v3.pdf
PWC https://paperswithcode.com/paper/algorithmic-learning-of-probability
Repo
Framework
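
As a cartoon of the "identification in the limit" setting studied above, the toy below enumerates a hypothesis class of Bernoulli distributions and conjectures, after each sample, the hypothesis nearest the empirical mean. This is only an illustration of explanatory-style convergence; the paper's results concern computable measures and oracle computations, not this toy.

```python
# Toy identification in the limit: enumerable hypotheses (dyadic Bernoulli
# parameters) and a learner whose conjecture eventually stabilizes on the
# truth with probability one. Purely illustrative; not the paper's setting.
import random

hypotheses = [k / 16 for k in range(17)]   # enumerated candidate parameters
true_p = 5 / 16
random.seed(1)

ones = 0
for n in range(1, 2001):
    ones += random.random() < true_p
    guess = min(hypotheses, key=lambda p: abs(p - ones / n))

print(f"after {n} samples the learner conjectures p = {guess}")
```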

Efficient Compression Technique for Sparse Sets

Title Efficient Compression Technique for Sparse Sets
Authors Rameshwar Pratap, Ishan Sohony, Raghav Kulkarni
Abstract Recent technological advancements have led to the generation of huge amounts of data over the web, such as text, image, audio and video. Most of this data is high dimensional and sparse, e.g., the bag-of-words representation used for text. Often, an efficient search for similar data points needs to be performed in many applications like clustering, nearest neighbour search, ranking and indexing. Even though there have been significant increases in computational power, a simple brute-force similarity search on such datasets is inefficient and at times impossible. Thus, it is desirable to obtain a compressed representation that preserves the similarity between data points. In this work, we consider the data points as sets and use Jaccard similarity as the similarity measure. Compression techniques are generally evaluated on four parameters: 1) the randomness required for compression, 2) the time required for compression, 3) the dimension of the data after compression, and 4) the space required to store the compressed data. Ideally, the compressed representation of the data should preserve the similarity between each pair of data points while keeping the time and the randomness required for compression as low as possible. We show that the compression technique suggested by Pratap and Kulkarni also works well for Jaccard similarity. We present a theoretical proof and complement it with rigorous experiments on synthetic as well as real-world datasets. We also compare our results with the state-of-the-art “min-wise independent permutation” and show that our compression algorithm achieves almost equal accuracy while significantly reducing the compression time and the randomness.
Tasks
Published 2017-08-16
URL http://arxiv.org/abs/1708.04799v1
PDF http://arxiv.org/pdf/1708.04799v1.pdf
PWC https://paperswithcode.com/paper/efficient-compression-technique-for-sparse
Repo
Framework
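
A rough sketch of the idea of compressing sparse sets while roughly preserving Jaccard similarity: hash every universe element to one of a small number of buckets with a single shared random map, and represent each set by its occupied buckets. The exact construction and estimator in the paper may differ; this only shows the bucket-and-OR flavor.

```python
# Compress sparse sets via a shared random element-to-bucket map; with many
# more buckets than set elements, Jaccard similarity is roughly preserved.
import random

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

random.seed(2)
universe = list(range(100_000))
a = set(random.sample(universe, 500))
b = set(random.sample(universe, 300)) | set(random.sample(sorted(a), 250))

# One shared random map for both sets (required for comparability).
rnd = random.Random(0)
mapping = {x: rnd.randrange(4096) for x in a | b}
ca = {mapping[x] for x in a}
cb = {mapping[x] for x in b}

print("exact Jaccard:     ", round(jaccard(a, b), 3))
print("compressed Jaccard:", round(jaccard(ca, cb), 3))
```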

Radiomics strategies for risk assessment of tumour failure in head-and-neck cancer

Title Radiomics strategies for risk assessment of tumour failure in head-and-neck cancer
Authors Martin Vallières, Emily Kay-Rivest, Léo Jean Perrin, Xavier Liem, Christophe Furstoss, Hugo J. W. L. Aerts, Nader Khaouam, Phuc Felix Nguyen-Tan, Chang-Shu Wang, Khalil Sultanem, Jan Seuntjens, Issam El Naqa
Abstract Quantitative extraction of high-dimensional mineable data from medical images is a process known as radiomics. Radiomics is foreseen as an essential prognostic tool for cancer risk assessment and the quantification of intratumoural heterogeneity. In this work, 1615 radiomic features (quantifying tumour image intensity, shape, texture) extracted from pre-treatment FDG-PET and CT images of 300 patients from four different cohorts were analyzed for the risk assessment of locoregional recurrences (LR) and distant metastases (DM) in head-and-neck cancer. Prediction models combining radiomic and clinical variables were constructed via random forests and imbalance-adjustment strategies using two of the four cohorts. Independent validation of the prediction and prognostic performance of the models was carried out on the other two cohorts (LR: AUC = 0.69 and CI = 0.67; DM: AUC = 0.86 and CI = 0.88). Furthermore, the results obtained via Kaplan-Meier analysis demonstrated the potential of radiomics for assessing the risk of specific tumour outcomes using multiple stratification groups. This could have important clinical impact, notably by allowing for a better personalization of chemo-radiation treatments for head-and-neck cancer patients from different risk groups.
Tasks
Published 2017-03-24
URL http://arxiv.org/abs/1703.08516v1
PDF http://arxiv.org/pdf/1703.08516v1.pdf
PWC https://paperswithcode.com/paper/radiomics-strategies-for-risk-assessment-of
Repo
Framework
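
A hedged sketch of the modelling step described above: a random forest over radiomic-style feature vectors with a simple class-imbalance adjustment. The paper combines 1615 PET/CT radiomic features with clinical variables and uses its own imbalance-adjustment strategies; the synthetic data and the class_weight="balanced" choice below are placeholders.

```python
# Random-forest risk model on stand-in "radiomic" features with a rare
# positive class, mimicking the LR/DM outcome imbalance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))                                # stand-in features
y = (X[:, 0] + 0.5 * rng.normal(size=300) > 1.2).astype(int)  # rare outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=500, class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)
print("validation AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
```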

What do We Learn by Semantic Scene Understanding for Remote Sensing imagery in CNN framework?

Title What do We Learn by Semantic Scene Understanding for Remote Sensing imagery in CNN framework?
Authors Haifeng Li, Jian Peng, Chao Tao, Jie Chen, Min Deng
Abstract Recently, deep convolutional neural networks (DCNNs) have achieved increasingly remarkable success and developed rapidly in the field of natural image recognition. Compared with a natural image, a remote sensing image covers a larger scale, and the scenes and objects it represents are more macroscopic. This study asks whether remote sensing scene recognition differs from natural scene recognition and raises the following questions: What are the key factors in remote sensing scene recognition? Is a DCNN recognition mechanism centered on object recognition still applicable to remote sensing scene understanding? We performed several experiments to explore the influence of DCNN structure and scale on remote sensing scene understanding from the perspective of scene complexity. Our experiments show that understanding a complex scene depends on a deep network and multi-scale perception. Using a visualization method, we qualitatively and quantitatively analyze the recognition mechanism in a complex remote sensing scene and demonstrate the importance of multi-objective joint semantic support.
Tasks Object Recognition, Scene Recognition, Scene Understanding
Published 2017-05-19
URL http://arxiv.org/abs/1705.07077v1
PDF http://arxiv.org/pdf/1705.07077v1.pdf
PWC https://paperswithcode.com/paper/what-do-we-learn-by-semantic-scene
Repo
Framework
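
A minimal sketch of the multi-scale perception idea probed above: the same small CNN is applied to an image resized to several scales and the logits are averaged. The network, scales, and class count are illustrative assumptions, not the architectures or datasets used in the paper.

```python
# Multi-scale scene classification sketch: run one tiny CNN at several input
# scales and fuse the predictions by averaging.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySceneNet(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor, scales=(1.0, 0.5, 0.25)) -> torch.Tensor:
        logits = []
        for s in scales:
            xs = F.interpolate(x, scale_factor=s, mode="bilinear", align_corners=False)
            logits.append(self.head(self.features(xs).flatten(1)))
        return torch.stack(logits).mean(dim=0)   # fuse scale-wise predictions

net = TinySceneNet()
img = torch.randn(2, 3, 256, 256)                # dummy remote-sensing tiles
print(net(img).shape)                            # -> torch.Size([2, 10])
```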

Efficient Private ERM for Smooth Objectives

Title Efficient Private ERM for Smooth Objectives
Authors Jiaqi Zhang, Kai Zheng, Wenlong Mou, Liwei Wang
Abstract In this paper, we consider efficient differentially private empirical risk minimization from the viewpoint of optimization algorithms. For strongly convex and smooth objectives, we prove that gradient descent with output perturbation not only achieves nearly optimal utility, but also significantly improves the running time of previous state-of-the-art private optimization algorithms, for both $\epsilon$-DP and $(\epsilon, \delta)$-DP. For non-convex but smooth objectives, we propose an RRPSGD (Random Round Private Stochastic Gradient Descent) algorithm, which provably converges to a stationary point with a privacy guarantee. Besides the expected utility bounds, we also provide guarantees in high-probability form. Experiments demonstrate that our algorithm consistently outperforms existing methods in both utility and running time.
Tasks
Published 2017-03-29
URL http://arxiv.org/abs/1703.09947v2
PDF http://arxiv.org/pdf/1703.09947v2.pdf
PWC https://paperswithcode.com/paper/efficient-private-erm-for-smooth-objectives
Repo
Framework
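
A hedged sketch of output perturbation for private ERM: run (non-private) gradient descent on a strongly convex objective, then release the optimum plus Gaussian noise calibrated to its L2 sensitivity. The 2L/(n·λ) sensitivity bound is the classical one for L-Lipschitz losses with a λ-strongly-convex regularizer; the paper's actual contribution (tighter runtime analyses for ε-DP and (ε, δ)-DP) is not reproduced here.

```python
# Gradient descent on L2-regularized logistic loss, followed by the Gaussian
# mechanism on the output. Sensitivity constant is the classical assumption.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 10
X = rng.normal(size=(n, d)) / np.sqrt(d)
y = np.sign(X @ rng.normal(size=d) + 0.1 * rng.normal(size=n))

lam, lr, L = 0.1, 0.5, 1.0        # regularizer strength, step size, Lipschitz constant
w = np.zeros(d)
for _ in range(500):
    margins = y * (X @ w)
    grad = -(X * (y / (1 + np.exp(margins)))[:, None]).mean(axis=0) + lam * w
    w -= lr * grad

eps, delta = 1.0, 1e-5
sensitivity = 2 * L / (n * lam)   # assumed worst-case L2 sensitivity of the argmin
sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / eps
w_private = w + rng.normal(scale=sigma, size=d)
print("noise std:", round(sigma, 5))
```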

Trend Detection based Regret Minimization for Bandit Problems

Title Trend Detection based Regret Minimization for Bandit Problems
Authors Paresh Nakhe, Rebecca Reiffenhäuser
Abstract We study a variation of the classical multi-armed bandits problem. In this problem, the learner has to make a sequence of decisions, picking from a fixed set of choices. In each round, she receives as feedback only the loss incurred from the chosen action. Conventionally, this problem has been studied when losses of the actions are drawn from an unknown distribution or when they are adversarial. In this paper, we study this problem when the losses of the actions also satisfy certain structural properties and, in particular, exhibit a trend structure. When this is true, we show that using \textit{trend detection}, we can achieve regret of order $\tilde{O} (N \sqrt{TK})$ with respect to a switching strategy for the version of the problem where a single action is chosen in each round and $\tilde{O} (Nm \sqrt{TK})$ when $m$ actions are chosen each round. This guarantee is a significant improvement over the conventional benchmark. Our approach can, as a framework, be applied in combination with various well-known bandit algorithms, like Exp3. For both versions of the problem, we give regret guarantees also for the \textit{anytime} setting, i.e., when the length of the choice sequence is not known in advance. Finally, we pinpoint the advantages of our method by comparing it to other well-known strategies.
Tasks Multi-Armed Bandits
Published 2017-09-15
URL http://arxiv.org/abs/1709.05156v1
PDF http://arxiv.org/pdf/1709.05156v1.pdf
PWC https://paperswithcode.com/paper/trend-detection-based-regret-minimization-for
Repo
Framework
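
For reference, here is a sketch of Exp3, the kind of base bandit algorithm the trend-detection framework above can wrap. The trend-detection layer itself (reacting when a trend in the losses is detected) is not reproduced; the toy loss function with a mid-stream switch is an invented example.

```python
# Exp3 with importance-weighted loss estimates and epsilon-greedy-style mixing.
import math
import random

def exp3(n_arms, T, loss_fn, gamma=0.1, seed=0):
    rnd = random.Random(seed)
    weights = [1.0] * n_arms
    total_loss = 0.0
    for t in range(T):
        wsum = sum(weights)
        probs = [(1 - gamma) * w / wsum + gamma / n_arms for w in weights]
        arm = rnd.choices(range(n_arms), weights=probs)[0]
        loss = loss_fn(t, arm)                 # only the chosen arm's loss is observed
        total_loss += loss
        est = loss / probs[arm]                # importance-weighted estimate
        weights[arm] *= math.exp(-gamma * est / n_arms)
    return total_loss

# Toy losses with a trend: arm 0 is good early, arm 1 good late.
def losses(t, arm):
    return (0.2 if arm == 0 else 0.8) if t < 500 else (0.8 if arm == 0 else 0.2)

print("Exp3 total loss:", round(exp3(2, 1000, losses), 1))
```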

A Neural Network Approach for Mixing Language Models

Title A Neural Network Approach for Mixing Language Models
Authors Youssef Oualil, Dietrich Klakow
Abstract The performance of Neural Network (NN)-based language models is steadily improving due to the emergence of new architectures, which are able to learn different natural language characteristics. This paper presents a novel framework, which shows that a significant improvement can be achieved by combining different existing heterogeneous models in a single architecture. This is done through 1) a feature layer, which separately learns different NN-based models and 2) a mixture layer, which merges the resulting model features. In doing so, this architecture benefits from the learning capabilities of each model with no noticeable increase in the number of model parameters or the training time. Extensive experiments conducted on the Penn Treebank (PTB) and the Large Text Compression Benchmark (LTCB) corpus showed a significant reduction of the perplexity when compared to state-of-the-art feedforward as well as recurrent neural network architectures.
Tasks
Published 2017-08-23
URL http://arxiv.org/abs/1708.06989v1
PDF http://arxiv.org/pdf/1708.06989v1.pdf
PWC https://paperswithcode.com/paper/a-neural-network-approach-for-mixing-language
Repo
Framework
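
A hedged sketch of the two-stage design above: a "feature layer" holding two heterogeneous language-model encoders (here a feedforward n-gram-style encoder and an LSTM) and a "mixture layer" that merges their features before the softmax. Sizes and the merge (a learned linear map over concatenated features) are placeholder choices, not the paper's exact architecture.

```python
# Two heterogeneous LM branches whose features are merged by a mixture layer.
import torch
import torch.nn as nn

class MixtureLM(nn.Module):
    def __init__(self, vocab=1000, emb=64, hid=128, context=3):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.ff = nn.Sequential(nn.Linear(context * emb, hid), nn.Tanh())  # feedforward branch
        self.rnn = nn.LSTM(emb, hid, batch_first=True)                     # recurrent branch
        self.mix = nn.Linear(2 * hid, hid)                                 # mixture layer
        self.out = nn.Linear(hid, vocab)

    def forward(self, ctx: torch.Tensor) -> torch.Tensor:
        e = self.embed(ctx)                          # (B, context, emb)
        f_ff = self.ff(e.flatten(1))                 # (B, hid)
        _, (h, _) = self.rnn(e)
        f_rnn = h[-1]                                # (B, hid)
        mixed = torch.tanh(self.mix(torch.cat([f_ff, f_rnn], dim=1)))
        return self.out(mixed)                       # next-word logits

lm = MixtureLM()
ctx = torch.randint(0, 1000, (4, 3))                 # batch of 3-word contexts
print(lm(ctx).shape)                                  # -> torch.Size([4, 1000])
```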

Graph Embedding with Rich Information through Heterogeneous Network

Title Graph Embedding with Rich Information through Heterogeneous Network
Authors Guolei Sun, Xiangliang Zhang
Abstract Graph embedding has attracted increasing attention due to its critical application in social network analysis. Most existing algorithms for graph embedding rely only on the topology information and fail to use the copious information in nodes as well as edges. As a result, their performance on many tasks may not be satisfactory. In this paper, we propose a novel and general framework for representation learning on graphs with rich text information through constructing a bipartite heterogeneous network. Specifically, we design a biased random walk to explore the constructed heterogeneous network with the notion of a flexible neighborhood. The efficacy of our method is demonstrated by extensive comparison experiments with several baselines on various datasets. It improves the Micro-F1 and Macro-F1 of node classification by 10% and 7%, respectively, on the Cora dataset.
Tasks Graph Embedding, Node Classification, Representation Learning
Published 2017-10-18
URL http://arxiv.org/abs/1710.06879v2
PDF http://arxiv.org/pdf/1710.06879v2.pdf
PWC https://paperswithcode.com/paper/graph-embedding-with-rich-information-through
Repo
Framework
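
A sketch of biased random walks over a bipartite heterogeneous network of structure nodes and text-attribute nodes, in the spirit of the framework above. The bias knob `p_text` (probability of stepping to a text node when one is available) and the toy graph are assumptions standing in for the paper's flexible-neighborhood notion; the generated walks would then be fed to a skip-gram embedding model.

```python
# Biased random walks on a toy bipartite heterogeneous graph; "w:*" vertices
# are text-side nodes, everything else is a structure node.
import random

graph = {
    "v1": ["v2", "w:deep", "w:graph"], "v2": ["v1", "v3", "w:graph"],
    "v3": ["v2", "w:text"], "w:deep": ["v1"], "w:graph": ["v1", "v2"],
    "w:text": ["v3"],
}

def biased_walk(start, length, p_text, rnd):
    walk = [start]
    for _ in range(length - 1):
        nbrs = graph[walk[-1]]
        text = [n for n in nbrs if n.startswith("w:")]
        struct = [n for n in nbrs if not n.startswith("w:")]
        if text and (not struct or rnd.random() < p_text):
            walk.append(rnd.choice(text))     # biased hop to the text side
        else:
            walk.append(rnd.choice(struct))   # hop within the structure side
    return walk

rnd = random.Random(0)
walks = [biased_walk(v, 6, p_text=0.3, rnd=rnd) for v in graph for _ in range(2)]
print(walks[0])
```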

Multi-scale Deep Learning Architectures for Person Re-identification

Title Multi-scale Deep Learning Architectures for Person Re-identification
Authors Xuelin Qian, Yanwei Fu, Yu-Gang Jiang, Tao Xiang, Xiangyang Xue
Abstract Person Re-identification (re-id) aims to match people across non-overlapping camera views in a public space. It is a challenging problem because many people captured in surveillance videos wear similar clothes. Consequently, the differences in their appearance are often subtle and only detectable at the right locations and scales. Existing re-id models, particularly the recently proposed deep learning based ones, match people at a single scale. In contrast, in this paper, a novel multi-scale deep learning model is proposed. Our model is able to learn deep discriminative feature representations at different scales and automatically determine the most suitable scales for matching. The importance of different spatial locations for extracting discriminative features is also learned explicitly. Experiments are carried out to demonstrate that the proposed model outperforms the state-of-the-art on a number of benchmarks.
Tasks Person Re-Identification
Published 2017-09-15
URL http://arxiv.org/abs/1709.05165v1
PDF http://arxiv.org/pdf/1709.05165v1.pdf
PWC https://paperswithcode.com/paper/multi-scale-deep-learning-architectures-for
Repo
Framework
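
A toy sketch of the automatic scale selection described above: per-scale embeddings fused with learned softmax weights, so the network can emphasize the scales that matter for matching. Sharing one branch across scales, the branch shapes, and the fusion rule are simplifying assumptions, not the architecture proposed in the paper.

```python
# Multi-scale embedding with learned per-scale importance weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleEmbed(nn.Module):
    def __init__(self, dim=128, scales=(1.0, 0.5, 0.25)):
        super().__init__()
        self.scales = scales
        self.branch = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim),
        )
        self.scale_logits = nn.Parameter(torch.zeros(len(scales)))  # learned scale importance

    def forward(self, x):
        feats = []
        for s in self.scales:
            xs = F.interpolate(x, scale_factor=s, mode="bilinear", align_corners=False)
            feats.append(F.normalize(self.branch(xs), dim=1))
        w = torch.softmax(self.scale_logits, dim=0)
        return sum(wi * f for wi, f in zip(w, feats))   # fused embedding for matching

net = MultiScaleEmbed()
persons = torch.randn(4, 3, 128, 64)   # typical re-id crop size
print(net(persons).shape)               # -> torch.Size([4, 128])
```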

Born to Learn: the Inspiration, Progress, and Future of Evolved Plastic Artificial Neural Networks

Title Born to Learn: the Inspiration, Progress, and Future of Evolved Plastic Artificial Neural Networks
Authors Andrea Soltoggio, Kenneth O. Stanley, Sebastian Risi
Abstract Biological plastic neural networks are systems of extraordinary computational capabilities shaped by evolution, development, and lifetime learning. The interplay of these elements leads to the emergence of adaptive behavior and intelligence. Inspired by such intricate natural phenomena, Evolved Plastic Artificial Neural Networks (EPANNs) use simulated evolution in silico to breed plastic neural networks with a large variety of dynamics, architectures, and plasticity rules: these artificial systems are composed of inputs, outputs, and plastic components that change in response to experiences in an environment. These systems may autonomously discover novel adaptive algorithms, and lead to hypotheses on the emergence of biological adaptation. EPANNs have seen considerable progress over the last two decades. Current scientific and technological advances in artificial neural networks are now setting the conditions for radically new approaches and results. In particular, the limitations of hand-designed networks could be overcome by more flexible and innovative solutions. This paper brings together a variety of inspiring ideas that define the field of EPANNs. The main methods and results are reviewed. Finally, new opportunities and developments are presented.
Tasks
Published 2017-03-30
URL http://arxiv.org/abs/1703.10371v3
PDF http://arxiv.org/pdf/1703.10371v3.pdf
PWC https://paperswithcode.com/paper/born-to-learn-the-inspiration-progress-and
Repo
Framework
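
A minimal sketch of the kind of plastic component EPANNs evolve: a weight matrix updated online by a generalized Hebbian rule dW = η(A·x·y + B·x + C·y + D), whose coefficients would be set by the evolutionary search rather than hand-tuned. The coefficient values below are arbitrary examples, not evolved ones.

```python
# A plastic connection with a generalized (ABCD) Hebbian update applied over
# a simulated "lifetime" of experiences.
import numpy as np

rng = np.random.default_rng(0)
A, B, C, D, eta = 1.0, 0.0, 0.0, 0.0, 0.01   # evolution would choose these

n_in, n_out = 8, 4
W = rng.normal(scale=0.1, size=(n_out, n_in))

for _ in range(100):                          # lifetime experience loop
    x = rng.normal(size=n_in)                 # presynaptic activity
    y = np.tanh(W @ x)                        # postsynaptic activity
    # Outer-product Hebbian update applied to every synapse.
    W += eta * (A * np.outer(y, x) + B * x + C * y[:, None] + D)

print("weight norm after lifetime learning:", round(np.linalg.norm(W), 3))
```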

Saliency detection by aggregating complementary background template with optimization framework

Title Saliency detection by aggregating complementary background template with optimization framework
Authors Chenxing Xia, Hanling Zhang, Xiuju Gao
Abstract This paper proposes an unsupervised bottom-up saliency detection approach that aggregates a complementary background template with refinement. Feature vectors are extracted from each superpixel to cover regional color, contrast and texture information. Using these features, a coarse detection of the salient region is produced from background templates built from different combinations of boundary regions, instead of treating only the four boundaries as background. Then, by ranking the relevance of the image nodes with foreground cues extracted from the former saliency map, we obtain an improved result. Finally, a smoothing operation is applied to refine the foreground-based saliency map, improving the contrast between salient and non-salient regions until a nearly binary saliency map is reached. Experimental results show that the proposed algorithm generates more accurate saliency maps and performs favorably against state-of-the-art saliency detection methods on four publicly available datasets.
Tasks Saliency Detection
Published 2017-06-14
URL http://arxiv.org/abs/1706.04285v1
PDF http://arxiv.org/pdf/1706.04285v1.pdf
PWC https://paperswithcode.com/paper/saliency-detection-by-aggregating
Repo
Framework
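
A rough sketch of the boundary-background prior underlying the pipeline above: segment into superpixels, treat boundary superpixels as a background template, and score every superpixel by its color distance to that template. The paper builds several complementary templates from different boundary combinations and refines with ranking and smoothing; none of that is reproduced here, and the sample image is a placeholder.

```python
# Coarse boundary-background saliency on superpixel mean colors.
import numpy as np
from skimage import data, color
from skimage.segmentation import slic

img = data.astronaut()
lab = color.rgb2lab(img)
labels = slic(img, n_segments=200, compactness=10, start_label=0)

n = labels.max() + 1
means = np.array([lab[labels == i].mean(axis=0) for i in range(n)])  # mean Lab per superpixel

border = np.unique(np.concatenate([labels[0], labels[-1], labels[:, 0], labels[:, -1]]))
dists = np.linalg.norm(means[:, None] - means[border][None], axis=2)  # (n, |border|)
saliency = dists.min(axis=1)          # far from all background colors = salient
saliency /= saliency.max() + 1e-8

sal_map = saliency[labels]            # per-pixel coarse saliency map
print(sal_map.shape, float(sal_map.max()))
```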

Learned Perceptual Image Enhancement

Title Learned Perceptual Image Enhancement
Authors Hossein Talebi, Peyman Milanfar
Abstract Learning a typical image enhancement pipeline involves minimization of a loss function between enhanced and reference images. While L1 and L2 losses are perhaps the most widely used functions for this purpose, they do not necessarily lead to perceptually compelling results. In this paper, we show that adding a learned no-reference image quality metric to the loss can significantly improve enhancement operators. This metric is implemented using a CNN (convolutional neural network) trained on a large-scale dataset labelled with aesthetic preferences of human raters. This loss allows us to conveniently perform back-propagation in our learning framework to simultaneously optimize for similarity to a given ground truth reference and perceptual quality. This perceptual loss is only used to train parameters of image processing operators, and does not impose any extra complexity at inference time. Our experiments demonstrate that this loss can be effective for tuning a variety of operators such as local tone mapping and dehazing.
Tasks Image Enhancement
Published 2017-12-07
URL http://arxiv.org/abs/1712.02864v1
PDF http://arxiv.org/pdf/1712.02864v1.pdf
PWC https://paperswithcode.com/paper/learned-perceptual-image-enhancement
Repo
Framework
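
A hedged sketch of the training objective described above: pixel fidelity plus a learned no-reference quality term, with the quality scorer frozen. `QualityNet` is a tiny stand-in for the aesthetics-trained CNN scorer used in the paper, and the weight 0.1 and the toy enhancer are arbitrary choices.

```python
# Enhancement training step with L1 fidelity + frozen learned quality loss.
import torch
import torch.nn as nn

class QualityNet(nn.Module):                     # stand-in no-reference scorer in [0, 1]
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(8, 1), nn.Sigmoid())
    def forward(self, x):
        return self.net(x).mean()

enhancer = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 3, 3, padding=1))
quality = QualityNet()
for p in quality.parameters():                   # perceptual scorer stays frozen
    p.requires_grad_(False)

opt = torch.optim.Adam(enhancer.parameters(), lr=1e-3)
x, ref = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)  # dummy training pair

out = enhancer(x)
loss = nn.functional.l1_loss(out, ref) + 0.1 * (1.0 - quality(out))
loss.backward()
opt.step()
print("combined loss:", float(loss))
```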

Concentration of weakly dependent Banach-valued sums and applications to statistical learning methods

Title Concentration of weakly dependent Banach-valued sums and applications to statistical learning methods
Authors Gilles Blanchard, Oleksandr Zadorozhnyi
Abstract We obtain a Bernstein-type inequality for sums of Banach-valued random variables satisfying a weak dependence assumption of general type and certain smoothness assumptions on the underlying Banach norm. We use this inequality to investigate, in the asymptotic regime, error upper bounds for the broad family of spectral regularization methods for reproducing kernel decision rules, when trained on a sample coming from a $\tau$-mixing process.
Tasks
Published 2017-12-05
URL http://arxiv.org/abs/1712.01934v2
PDF http://arxiv.org/pdf/1712.01934v2.pdf
PWC https://paperswithcode.com/paper/concentration-of-weakly-dependent-banach
Repo
Framework
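
For orientation, the classical scalar Bernstein inequality that results of this type generalize reads as follows; the paper's version for weakly dependent Banach-valued sums has a similar shape but different constants and conditions, which are not reproduced here.

```latex
% Classical Bernstein inequality for independent, bounded real variables,
% stated only as the reference point the Banach-valued result extends.
\[
  \Pr\!\Big(\Big|\sum_{i=1}^{n} \big(X_i - \mathbb{E}X_i\big)\Big| \ge t\Big)
  \;\le\; 2\exp\!\left(-\frac{t^{2}}{2\big(n\sigma^{2} + Mt/3\big)}\right),
  \qquad |X_i| \le M,\quad \operatorname{Var}(X_i) \le \sigma^{2}.
\]
```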

Sparsification of the Alignment Path Search Space in Dynamic Time Warping

Title Sparsification of the Alignment Path Search Space in Dynamic Time Warping
Authors Saeid Soheily-Khah, Pierre-François Marteau
Abstract Temporal data are naturally everywhere, especially in the digital era that sees the advent of big data and the Internet of Things. One major challenge that arises during temporal data analysis and mining is the comparison of time series or sequences, which requires determining a proper distance or (dis)similarity measure. In this context, Dynamic Time Warping (DTW) has enjoyed success in many domains due to its ‘temporal elasticity’, a property particularly useful when matching temporal data. Unfortunately, this dissimilarity measure suffers from a quadratic computational cost, which prohibits its use in large-scale applications. This work addresses the sparsification of the alignment-path search space for DTW-like measures, essentially to lower their computational cost without losing measure quality. As a result of our sparsification approach, two new (dis)similarity measures, namely SP-DTW (Sparsified-Paths search space DTW) and its kernelization SP-K_rdtw (Sparsified-Paths search space K_rdtw kernel), are proposed for time series comparison. A wide range of public datasets is used to evaluate the efficiency (estimated in terms of speed-up ratio and classification accuracy) of the proposed (dis)similarity measures with the 1-Nearest Neighbor (1-NN) and Support Vector Machine (SVM) classification algorithms. Our experiments show that the proposed measures provide a significant speed-up without losing accuracy. Furthermore, at the cost of a slight reduction of the speed-up, they significantly outperform, on the accuracy criterion, the old but well-known Sakoe-Chiba approach that reduces the DTW path search space using a symmetric corridor.
Tasks Time Series
Published 2017-11-13
URL http://arxiv.org/abs/1711.04453v1
PDF http://arxiv.org/pdf/1711.04453v1.pdf
PWC https://paperswithcode.com/paper/sparsification-of-the-alignment-path-search
Repo
Framework
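
For reference, here is the quadratic DTW recursion the paper sparsifies, with an optional Sakoe-Chiba band, i.e., the symmetric-corridor baseline the authors compare against. The paper's learned sparsification of the path search space is not implemented here.

```python
# Dynamic programming DTW with an optional Sakoe-Chiba band constraint.
import numpy as np

def dtw(a, b, band=None):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        lo, hi = (1, m) if band is None else (max(1, i - band), min(m, i + band))
        for j in range(lo, hi + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

t = np.linspace(0, 2 * np.pi, 100)
x, y = np.sin(t), np.sin(t + 0.3)
print("full DTW:  ", round(dtw(x, y), 4))
print("banded DTW:", round(dtw(x, y, band=10), 4))
```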