April 2, 2020

2957 words 14 mins read

Paper Group ANR 303



Handling noise in image deblurring via joint learning

Title Handling noise in image deblurring via joint learning
Authors Si Miao, Yongxin Zhu
Abstract Currently, many blind deblurring methods assume blurred images are noise-free and perform unsatisfactorily on blurry images with noise. Unfortunately, noise is quite common in real scenes. A straightforward solution is to denoise images before deblurring them. However, even state-of-the-art denoisers cannot guarantee complete noise removal. Slight residual noise in the denoised images can cause significant artifacts in the deblurring stage. To tackle this problem, we propose a cascaded framework consisting of a denoising subnetwork and a deblurring subnetwork. In contrast to previous methods, we train the two subnetworks jointly. Joint learning reduces the effect of residual noise on deblurring, and hence improves the robustness of deblurring to heavy noise. Moreover, our method is also helpful for blur kernel estimation. Experiments on the CelebA and GOPRO datasets show that our method performs favorably against several state-of-the-art methods.
Tasks Deblurring, Denoising
Published 2020-01-27
URL https://arxiv.org/abs/2001.09730v1
PDF https://arxiv.org/pdf/2001.09730v1.pdf
PWC https://paperswithcode.com/paper/handling-noise-in-image-deblurring-via-joint
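The cascade idea can be illustrated with a toy linear model: a "denoiser" stage feeding a "deblurrer" stage, trained jointly so the end-to-end loss lets the deblurrer adapt to residual noise left by the denoiser. The blur operator, linear "subnetworks", and learning rate below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
x = rng.normal(size=n)                                       # clean signal
blur = np.roll(np.eye(n), 1, axis=1) * 0.5 + np.eye(n) * 0.5  # toy blur operator
y = blur @ x + 0.1 * rng.normal(size=n)                      # blurred + noisy observation

W1 = np.eye(n)   # "denoiser" subnetwork (linear, for illustration)
W2 = np.eye(n)   # "deblurrer" subnetwork
lr = 0.01
for _ in range(500):
    z = W1 @ y          # denoised intermediate
    x_hat = W2 @ z      # deblurred output
    err = x_hat - x
    # joint learning: the end-to-end loss backpropagates through BOTH stages,
    # so the deblurrer learns to tolerate the denoiser's residual noise
    gW2 = np.outer(err, z)
    gW1 = np.outer(W2.T @ err, y)
    W2 -= lr * gW2
    W1 -= lr * gW1

print(np.mean((W2 @ (W1 @ y) - x) ** 2))
```

A separately-trained pipeline would stop the gradient at `z`; the joint version above is exactly the difference the abstract highlights.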

SOL: Effortless Device Support for AI Frameworks without Source Code Changes

Title SOL: Effortless Device Support for AI Frameworks without Source Code Changes
Authors Nicolas Weber, Felipe Huici
Abstract Modern high performance computing clusters heavily rely on accelerators to overcome the limited compute power of CPUs. These supercomputers run various applications from different domains such as simulations, numerical applications or artificial intelligence (AI). As a result, vendors need to be able to efficiently run a wide variety of workloads on their hardware. In the AI domain, this is exacerbated in particular by the existence of a number of popular frameworks (e.g., PyTorch, TensorFlow) that have no common code base, and can vary in functionality. The code of these frameworks evolves quickly, making it expensive to keep up with all changes and potentially forcing developers to go through constant rounds of upstreaming. In this paper we explore how to provide hardware support in AI frameworks without changing the framework’s source code in order to minimize maintenance overhead. We introduce SOL, an AI acceleration middleware that provides a hardware abstraction layer that allows us to transparently support heterogeneous hardware. As a proof of concept, we implemented SOL for PyTorch with three backends: CPUs, GPUs and vector processors.
Published 2020-03-24
URL https://arxiv.org/abs/2003.10688v1
PDF https://arxiv.org/pdf/2003.10688v1.pdf
PWC https://paperswithcode.com/paper/sol-effortless-device-support-for-ai
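The middleware idea, intercepting operator calls and routing them to device-specific backends behind a hardware abstraction layer, can be sketched as a registry pattern. The backend names and the `relu` operator below are hypothetical illustrations; SOL's actual mechanism hooks into PyTorch internals rather than a toy registry.

```python
from typing import Callable, Dict

# Hypothetical hardware abstraction layer: one registry per operator,
# mapping a device name to a backend-specific kernel.
_REGISTRY: Dict[str, Dict[str, Callable]] = {}

def register(op: str, device: str):
    def deco(fn: Callable) -> Callable:
        _REGISTRY.setdefault(op, {})[device] = fn
        return fn
    return deco

def dispatch(op: str, device: str, *args):
    """Route the call transparently to whichever backend supports `device`."""
    try:
        kernel = _REGISTRY[op][device]
    except KeyError:
        raise NotImplementedError(f"no '{op}' kernel for device '{device}'")
    return kernel(*args)

@register("relu", "cpu")
def relu_cpu(xs):
    return [max(0.0, v) for v in xs]

@register("relu", "vector")          # e.g. a vector-engine backend
def relu_vector(xs):
    # a real backend would offload to the device; here we mimic the semantics
    return [v if v > 0.0 else 0.0 for v in xs]

print(dispatch("relu", "cpu", [-1.0, 2.0]))
```

Adding a new device then means registering kernels, not patching the framework's source, which is the maintenance win the abstract describes.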

Mi YouTube es Su YouTube? Analyzing the Cultures using YouTube Thumbnails of Popular Videos

Title Mi YouTube es Su YouTube? Analyzing the Cultures using YouTube Thumbnails of Popular Videos
Authors Songyang Zhang, Tolga Aktas, Jiebo Luo
Abstract YouTube, a world-famous video sharing website, maintains a list of the top trending videos on the platform. Due to its huge number of users, it enables researchers to understand people’s preferences by analyzing the trending videos. Trending videos vary from country to country. By analyzing such differences and changes, we can tell how users’ preferences differ across locations. Previous work focuses on analyzing such cultural preferences from videos’ metadata, while the cultural information hidden within the visual content has not been explored. In this study, we explore cultural preferences among countries using the thumbnails of YouTube trending videos. We first process the thumbnail images of the videos using object detectors. The collected object information is then used for various statistical analyses. In particular, we examine the data from three perspectives: geographical locations, video genres and users’ reactions. Experimental results indicate that users from similar cultures share interests in watching similar videos on YouTube. Our study demonstrates that discovering cultural preferences through thumbnails can be an effective mechanism for video social media analysis.
Published 2020-01-27
URL https://arxiv.org/abs/2002.00842v1
PDF https://arxiv.org/pdf/2002.00842v1.pdf
PWC https://paperswithcode.com/paper/mi-youtube-es-su-youtube-analyzing-the
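The analysis pipeline — detect objects in thumbnails, aggregate counts per country, compare countries — boils down to comparing per-country frequency vectors. The detector output below is mocked for illustration; in the paper, an object detector produces these labels from real thumbnails.

```python
from collections import Counter
import math

# Mocked detector output: object labels found in trending-video thumbnails,
# keyed by country (a real pipeline would run an object detector here).
detections = {
    "US": ["person", "person", "car", "guitar", "person"],
    "MX": ["person", "person", "guitar", "dog"],
    "JP": ["cartoon", "cartoon", "text", "person"],
}

def freq_vector(labels, vocab):
    """Normalized object-frequency vector over a shared vocabulary."""
    c = Counter(labels)
    total = sum(c.values())
    return [c[w] / total for w in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

vocab = sorted({w for ls in detections.values() for w in ls})
vecs = {k: freq_vector(v, vocab) for k, v in detections.items()}
for a, b in [("US", "MX"), ("US", "JP")]:
    print(a, b, round(cosine(vecs[a], vecs[b]), 3))
```

High cosine similarity between two countries' vectors is the kind of signal behind the "similar cultures share interests" finding.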

On Stochastic Automata over Monoids

Title On Stochastic Automata over Monoids
Authors Karl-Heinz Zimmermann, Merve Nur Cakir
Abstract Stochastic automata over monoids as input sets are studied. The well-definedness of these automata requires an extension postulate that replaces the inherent universal property of free monoids. As a generalization of Turakainen’s result, it will be shown that the generalized automata over monoids have the same acceptance power as their stochastic counterparts. The key to homomorphisms is a commuting property between the monoid homomorphism of input states and the monoid homomorphism of transition matrices. Closure properties of the languages accepted by stochastic automata over monoids are investigated.
Published 2020-02-04
URL https://arxiv.org/abs/2002.01214v1
PDF https://arxiv.org/pdf/2002.01214v1.pdf
PWC https://paperswithcode.com/paper/on-stochastic-automata-over-monoids
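For the familiar free-monoid case (an ordinary input alphabet), a stochastic automaton assigns one stochastic matrix per letter; the matrix of a word is the product of its letters' matrices (a monoid homomorphism into transition matrices), and the acceptance probability is initial distribution × word matrix × final vector. A minimal sketch with a made-up two-state automaton:

```python
import numpy as np

# One row-stochastic transition matrix per input letter.
M = {
    "a": np.array([[0.9, 0.1], [0.2, 0.8]]),
    "b": np.array([[0.5, 0.5], [0.0, 1.0]]),
}
init = np.array([1.0, 0.0])     # initial state distribution
final = np.array([0.0, 1.0])    # indicator vector of accepting states

def word_matrix(word: str) -> np.ndarray:
    """Monoid homomorphism: the matrix of a word is the product of its letters'."""
    out = np.eye(2)
    for ch in word:
        out = out @ M[ch]
    return out

def accept_prob(word: str) -> float:
    return float(init @ word_matrix(word) @ final)

# homomorphism property: matrix("ab") == matrix("a") @ matrix("b")
print(accept_prob("ab"), accept_prob("ba"))
```

The paper's generalization replaces the free monoid of words with an arbitrary monoid, where this homomorphism property must be postulated rather than inherited.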

Automatic Differentiation Variational Inference with Mixtures

Title Automatic Differentiation Variational Inference with Mixtures
Authors Warren R. Morningstar, Sharad M. Vikram, Cusuh Ham, Andrew Gallagher, Joshua V. Dillon
Abstract Automatic Differentiation Variational Inference (ADVI) is a useful tool for efficiently learning probabilistic models in machine learning. Generally approximate posteriors learned by ADVI are forced to be unimodal in order to facilitate use of the reparameterization trick. In this paper, we show how stratified sampling may be used to enable mixture distributions as the approximate posterior, and derive a new lower bound on the evidence analogous to the importance weighted autoencoder (IWAE). We show that this “SIWAE” is a tighter bound than both IWAE and the traditional ELBO, both of which are special instances of this bound. We verify empirically that the traditional ELBO objective disfavors the presence of multimodal posterior distributions and may therefore not be able to fully capture structure in the latent space. Our experiments show that using the SIWAE objective allows the encoder to learn more complex distributions which regularly contain multimodality, resulting in higher accuracy and better calibration in the presence of incomplete, limited, or corrupted data.
Tasks Calibration
Published 2020-03-03
URL https://arxiv.org/abs/2003.01687v2
PDF https://arxiv.org/pdf/2003.01687v2.pdf
PWC https://paperswithcode.com/paper/automatic-differentiation-variational-1
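The stratified bound can be estimated numerically for a toy 1-D model. Below, the prior is p(z)=N(0,1), the likelihood p(x|z)=N(z,1) (so log p(x) is available in closed form), and q is a two-component Gaussian mixture; the estimator draws T samples from each component and importance-weights by the full mixture density, which gives an unbiased estimate of p(x) and hence a lower bound on log p(x) by Jensen. All specific numbers are illustrative, and this is a simplified rendering of the SIWAE construction, not the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
x = 1.0

def log_normal(z, mu, sigma):
    return -0.5 * np.log(2 * np.pi * sigma**2) - (z - mu) ** 2 / (2 * sigma**2)

# mixture approximate posterior q(z|x) = sum_k w_k N(mu_k, s_k)
w = np.array([0.5, 0.5])
mu = np.array([0.3, 0.7])
s = np.array([0.7, 0.7])

def siwae(T=5):
    """One stratified estimate: T samples from EACH mixture component."""
    terms = []
    for k in range(len(w)):
        z = rng.normal(mu[k], s[k], size=T)
        log_joint = log_normal(z, 0.0, 1.0) + log_normal(x, z, 1.0)  # log p(x, z)
        log_q = np.logaddexp.reduce(
            [np.log(w[j]) + log_normal(z, mu[j], s[j]) for j in range(len(w))],
            axis=0)
        terms.append(np.log(w[k] / T) + log_joint - log_q)
    return np.logaddexp.reduce(np.concatenate(terms))

log_px = log_normal(x, 0.0, np.sqrt(2.0))          # true log evidence N(0, 2)
est = np.mean([siwae() for _ in range(2000)])
print(est, float(log_px))
```

Because every component contributes samples on every draw, the bound cannot "collapse" a mixture component the way the plain ELBO tends to, which is the multimodality argument in the abstract.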

Exploiting Unlabeled Data in Smart Cities using Federated Learning

Title Exploiting Unlabeled Data in Smart Cities using Federated Learning
Authors Abdullatif Albaseer, Bekir Sait Ciftler, Mohamed Abdallah, Ala Al-Fuqaha
Abstract Privacy concerns are considered one of the main challenges in smart cities, as sharing sensitive data brings threatening problems to people’s lives. Federated learning has emerged as an effective technique to avoid privacy infringement as well as to increase the utilization of the data. However, labeled data are scarce and unlabeled data are abundant in smart cities, hence there is a need for semi-supervised learning. We propose a semi-supervised federated learning method called FedSem that exploits unlabeled data. The algorithm is divided into two phases: the first phase trains a global model on the labeled data. In the second phase, we use semi-supervised learning based on the pseudo-labeling technique to improve the model. We conducted several experiments using a traffic-sign dataset to show that FedSem can improve accuracy by up to 8% by utilizing the unlabeled data in the learning process.
Published 2020-01-10
URL https://arxiv.org/abs/2001.04030v2
PDF https://arxiv.org/pdf/2001.04030v2.pdf
PWC https://paperswithcode.com/paper/exploiting-unlabeled-data-in-smart-cities
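The two phases can be sketched with FedAvg over logistic-regression clients on synthetic data: phase one trains federated on the small labeled set, phase two pseudo-labels confident unlabeled points and continues training. The model, client split, and confidence threshold are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_blob(n, center):
    return rng.normal(center, 1.0, size=(n, 2))

# small labeled set, large unlabeled set (the smart-city regime)
Xl = np.vstack([make_blob(10, [-2, -2]), make_blob(10, [2, 2])])
yl = np.concatenate([np.zeros(10), np.ones(10)])
Xu = np.vstack([make_blob(200, [-2, -2]), make_blob(200, [2, 2])])
yu_true = np.concatenate([np.zeros(200), np.ones(200)])   # held back, eval only

def sigmoid(t):
    return 1 / (1 + np.exp(-t))

def local_train(w, X, y, epochs=50, lr=0.1):
    w = w.copy()
    for _ in range(epochs):
        g = X.T @ (sigmoid(X @ w) - y) / len(y)
        w -= lr * g
    return w

def fedavg(w, parts):
    """One communication round: each client trains locally, server averages."""
    return np.mean([local_train(w, X, y) for X, y in parts], axis=0)

w = np.zeros(2)
# Phase 1: federated training on labeled data, split across 2 clients
labeled = [(Xl[:10], yl[:10]), (Xl[10:], yl[10:])]
for _ in range(10):
    w = fedavg(w, labeled)

# Phase 2: pseudo-label confident unlabeled points, then keep training
p = sigmoid(Xu @ w)
mask = (p > 0.9) | (p < 0.1)
pseudo = [(Xu[mask], (p[mask] > 0.5).astype(float))] if mask.any() else []
for _ in range(10):
    w = fedavg(w, labeled + pseudo)

acc = np.mean((sigmoid(Xu @ w) > 0.5) == yu_true)
print(acc)
```

The confidence mask is the key knob: too low a threshold injects wrong pseudo-labels, too high and phase two adds nothing.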

Domain Adversarial Training for Infrared-colour Person Re-Identification

Title Domain Adversarial Training for Infrared-colour Person Re-Identification
Authors Nima Mohammadi Meshky, Sara Iodice, Krystian Mikolajczyk
Abstract Person re-identification (re-ID) is a very active area of research in computer vision, due to the role it plays in video surveillance. Currently, most methods only address the task of matching between colour images. However, in poorly-lit environments CCTV cameras switch to infrared imaging, hence developing a system which can correctly perform matching between infrared and colour images is a necessity. In this paper, we propose a part-feature extraction network to better focus on subtle, unique signatures on the person which are visible across both infrared and colour modalities. To train the model we propose a novel variant of the domain adversarial feature-learning framework. Through extensive experimentation, we show that our approach outperforms state-of-the-art methods.
Tasks Person Re-Identification
Published 2020-03-09
URL https://arxiv.org/abs/2003.04191v1
PDF https://arxiv.org/pdf/2003.04191v1.pdf
PWC https://paperswithcode.com/paper/domain-adversarial-training-for-infrared
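The central mechanism in domain-adversarial feature learning is the gradient reversal layer (GRL): identity in the forward pass, while the backward pass multiplies gradients by -λ, so the feature extractor ascends the domain (here, infrared-vs-colour) classifier's loss and learns modality-invariant features. A framework-agnostic sketch — λ, the shapes, and the gradient values are made up for illustration:

```python
import numpy as np

class GradReverse:
    """Identity forward; gradient multiplied by -lam on the way back."""

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x                      # features pass through unchanged

    def backward(self, grad_out):
        return -self.lam * grad_out   # reversed gradient reaches the extractor

# one step of the adversarial signal, by hand:
features = np.array([0.5, -1.0])              # extractor output for one sample
grl = GradReverse(lam=0.5)

out = grl.forward(features)                   # what the modality classifier sees
grad_from_classifier = np.array([0.2, 0.4])   # dL_domain/d(out), made up
grad_to_extractor = grl.backward(grad_from_classifier)

# the extractor descends along the NEGATED domain gradient, i.e. it
# ascends the domain loss -> features the modality classifier cannot separate
print(grad_to_extractor)
```

In the full system this adversarial signal is balanced against the re-ID loss, which keeps the features discriminative for identity while erasing the infrared/colour cue.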

Information Directed Sampling for Linear Partial Monitoring

Title Information Directed Sampling for Linear Partial Monitoring
Authors Johannes Kirschner, Tor Lattimore, Andreas Krause
Abstract Partial monitoring is a rich framework for sequential decision making under uncertainty that generalizes many well known bandit models, including linear, combinatorial and dueling bandits. We introduce information directed sampling (IDS) for stochastic partial monitoring with a linear reward and observation structure. IDS achieves adaptive worst-case regret rates that depend on precise observability conditions of the game. Moreover, we prove lower bounds that classify the minimax regret of all finite games into four possible regimes. IDS achieves the optimal rate in all cases up to logarithmic factors, without tuning any hyper-parameters. We further extend our results to the contextual and the kernelized setting, which significantly increases the range of possible applications.
Tasks Decision Making, Decision Making Under Uncertainty
Published 2020-02-25
URL https://arxiv.org/abs/2002.11182v1
PDF https://arxiv.org/pdf/2002.11182v1.pdf
PWC https://paperswithcode.com/paper/information-directed-sampling-for-linear

Naive Exploration is Optimal for Online LQR

Title Naive Exploration is Optimal for Online LQR
Authors Max Simchowitz, Dylan J. Foster
Abstract We consider the problem of online adaptive control of the linear quadratic regulator, where the true system parameters are unknown. We prove new upper and lower bounds demonstrating that the optimal regret scales as $\widetilde{\Theta}({\sqrt{d_{\mathbf{u}}^2 d_{\mathbf{x}} T}})$, where $T$ is the number of time steps, $d_{\mathbf{u}}$ is the dimension of the input space, and $d_{\mathbf{x}}$ is the dimension of the system state. Notably, our lower bounds rule out the possibility of a $\mathrm{poly}(\log{}T)$-regret algorithm, which has been conjectured due to the apparent strong convexity of the problem. Our upper bounds are attained by a simple variant of \emph{certainty equivalence control}, where the learner selects control inputs according to the optimal controller for their estimate of the system while injecting exploratory random noise. While this approach was shown to achieve $\sqrt{T}$-regret by Mania et al. 2019, we show that if the learner continually refines their estimates of the system matrices, the method attains optimal dimension dependence as well. Central to our upper and lower bounds is a new approach for controlling perturbations of Riccati equations, which we call the \emph{self-bounding ODE method}. The approach enables regret upper bounds which hold for \emph{any stabilizable instance}, require no foreknowledge of the system except for a single stabilizing controller, and scale with natural control-theoretic quantities.
Published 2020-01-27
URL https://arxiv.org/abs/2001.09576v1
PDF https://arxiv.org/pdf/2001.09576v1.pdf
PWC https://paperswithcode.com/paper/naive-exploration-is-optimal-for-online-lqr
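The upper-bound strategy — certainty equivalence with injected exploration noise and continually refined estimates — is easy to sketch for a scalar system x_{t+1} = a x_t + b u_t + w_t. The learner re-solves the (scalar) Riccati equation for its current estimates and adds Gaussian exploration to the control; all constants below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 0.9, 0.5          # true (unknown) dynamics
q, r = 1.0, 1.0          # LQR state and input costs

def lqr_gain(ah, bh, iters=200):
    """Iterate the scalar discrete Riccati equation for estimated (ah, bh)."""
    P = q
    for _ in range(iters):
        P = q + ah * ah * P - (ah * bh * P) ** 2 / (r + bh * bh * P)
    return -ah * bh * P / (r + bh * bh * P)   # certainty-equivalent gain, u = K x

X, U, Xn = [], [], []    # regression data: (x_t, u_t) -> x_{t+1}
x = 0.0
ah, bh = 0.5, 1.0        # crude initial estimates (a stabilizing guess)
for t in range(2000):
    K = lqr_gain(ah, bh)
    u = K * x + 0.3 * rng.normal()            # certainty equivalence + exploration
    xn = a * x + b * u + 0.1 * rng.normal()   # true system step
    X.append(x); U.append(u); Xn.append(xn)
    x = xn
    if t > 10:                                # continually refine (a, b) estimates
        A = np.column_stack([X, U])
        ah, bh = np.linalg.lstsq(A, np.array(Xn), rcond=None)[0]

print(ah, bh)
```

The exploration noise keeps the input persistently exciting so the least-squares estimates converge; the paper's contribution is showing this naive scheme already achieves the optimal regret scaling.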

Online Batch Decision-Making with High-Dimensional Covariates

Title Online Batch Decision-Making with High-Dimensional Covariates
Authors Chi-Hua Wang, Guang Cheng
Abstract We propose and investigate a class of new algorithms for sequential decision making that interact with \textit{a batch of users} simultaneously instead of \textit{a user} at each decision epoch. This type of batch model is motivated by interactive marketing and clinical trials, where a group of people is treated simultaneously and the outcomes of the whole group are collected before the next stage of decision. In such a scenario, our goal is to allocate a batch of treatments to maximize treatment efficacy based on observed high-dimensional user covariates. We deliver a solution, named \textit{Teamwork LASSO Bandit algorithm}, that resolves a batch version of the explore-exploit dilemma by switching between a teamwork stage and a selfish stage during the whole decision process. This is made possible by statistical properties of the LASSO estimate of treatment efficacy that adapts to a sequence of batch observations. In general, a rate of optimal allocation condition is proposed to delineate the exploration-exploitation trade-off in the data collection scheme, which is sufficient for LASSO to identify the optimal treatment for observed user covariates. An upper bound on the expected cumulative regret of the proposed algorithm is provided.
Tasks Decision Making
Published 2020-02-21
URL https://arxiv.org/abs/2002.09438v2
PDF https://arxiv.org/pdf/2002.09438v2.pdf
PWC https://paperswithcode.com/paper/online-batch-decision-making-with-high

An Investigation of Interpretability Techniques for Deep Learning in Predictive Process Analytics

Title An Investigation of Interpretability Techniques for Deep Learning in Predictive Process Analytics
Authors Catarina Moreira, Renuka Sindhgatta, Chun Ouyang, Peter Bruza, Andreas Wichert
Abstract This paper explores interpretability techniques for two of the most successful learning algorithms in the medical decision-making literature: deep neural networks and random forests. We applied these algorithms to a real-world medical dataset containing information about patients with cancer, where we learn models that try to predict a patient’s type of cancer given their set of medical activity records. We explored different algorithms based on neural network architectures using long short-term memory deep neural networks, and random forests. Since there is a growing need to provide decision-makers with an understanding of the logic behind black-box predictions, we also explored different techniques that provide interpretations for these classifiers. In one of the techniques, we intercepted some hidden layers of these neural networks and used autoencoders to learn how the input is represented in the hidden layers. In another, we investigated an interpretable model locally around the random forest’s prediction. Results show that learning an interpretable model locally around the model’s prediction leads to a better understanding of why the algorithm makes a particular decision. Using a local, linear model helps identify the features used in the prediction of a specific instance or data point. We see certain distinct features used for predictions that provide useful insights about the type of cancer, along with features that do not generalize well. In addition, the structured deep learning approach using autoencoders provided meaningful prediction insights, which resulted in the identification of nonlinear clusters corresponding to the patients’ different types of cancer.
Tasks Decision Making
Published 2020-02-21
URL https://arxiv.org/abs/2002.09192v1
PDF https://arxiv.org/pdf/2002.09192v1.pdf
PWC https://paperswithcode.com/paper/an-investigation-of-interpretability
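The "interpretable model locally around the prediction" technique can be sketched generically: perturb the instance, query the black box, and fit a distance-weighted linear model whose coefficients indicate which features matter locally. The black box below is a stand-in function, not the paper's random forest, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    """Stand-in for a trained model's score: feature 0 matters, feature 1 barely."""
    return np.tanh(3.0 * X[:, 0]) + 0.05 * X[:, 1]

x0 = np.array([0.2, 0.5])                  # the instance to explain

# sample perturbations around x0 and query the black box
Z = x0 + 0.3 * rng.normal(size=(500, 2))
y = black_box(Z)

# proximity kernel: perturbations near x0 count more in the fit
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.2)

# weighted least squares with intercept: solve (A^T W A) beta = A^T W y
A = np.column_stack([np.ones(len(Z)), Z])
Aw = A * w[:, None]
beta = np.linalg.solve(A.T @ Aw, Aw.T @ y)

print(beta)   # [intercept, coef_feature0, coef_feature1]
```

A large local coefficient on a feature means the black box's decision at this instance hinges on it, which is the per-patient insight the abstract describes.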

PA-Cache: Learning-based Popularity-Aware Content Caching in Edge Networks

Title PA-Cache: Learning-based Popularity-Aware Content Caching in Edge Networks
Authors Qilin Fan, Jian Li, Xiuhua Li, Qiang He, Shu Fu, Sen Wang
Abstract With the aggressive growth of smart environments, a large amount of data is generated by edge devices. As a result, content delivery has been quickly pushed to network edges. Compared with classical content delivery networks, edge caches with smaller size usually suffer from more bursty requests, which makes conventional caching algorithms perform poorly in edge networks. This paper proposes an effective caching decision policy called PA-Cache that uses evolving deep learning to adaptively learn time-varying content popularity and decide which content to evict when the cache is full. Unlike prior learning-based approaches that either use a small set of features for decision making or require the entire training dataset to be available for learning a fine-tuned but possibly outdated prediction model, PA-Cache weights a large set of critical features to train the neural network in an evolving manner so as to meet edge requests with fluctuations and bursts. We demonstrate the effectiveness of PA-Cache through extensive experiments with real-world data traces from a large commercial video-on-demand service provider. The evaluation shows that PA-Cache improves the hit rate in comparison with state-of-the-art methods at a lower computational cost.
Tasks Decision Making
Published 2020-02-20
URL https://arxiv.org/abs/2002.08805v1
PDF https://arxiv.org/pdf/2002.08805v1.pdf
PWC https://paperswithcode.com/paper/pa-cache-learning-based-popularity-aware
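The core decision — evict the content predicted least popular — can be sketched with an exponentially weighted popularity score standing in for the learned predictor; PA-Cache itself learns time-varying popularity with an evolving neural network, so this is only the eviction skeleton, with made-up keys and parameters.

```python
class PopularityCache:
    """Fixed-size cache that evicts the item with the lowest popularity score.

    An exponentially weighted request count stands in for a learned
    popularity predictor.
    """

    def __init__(self, capacity, decay=0.9):
        self.capacity = capacity
        self.decay = decay
        self.store = {}          # key -> cached value
        self.score = {}          # key -> popularity estimate

    def _bump(self, key):
        # decay all estimates toward zero, then reward the requested key
        for k in self.score:
            self.score[k] *= self.decay
        self.score[key] = self.score.get(key, 0.0) + 1.0

    def get(self, key):
        self._bump(key)
        return self.store.get(key)          # None on a miss

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda k: self.score[k])
            del self.store[victim]          # evict the least popular item
        self.store[key] = value

# skewed request stream: "hot" dominates, so it should never be evicted
hits = misses = 0
cache = PopularityCache(capacity=2)
for key in ["hot", "a", "hot", "b", "hot", "c", "hot", "a", "hot"]:
    if cache.get(key) is None:
        misses += 1
        cache.put(key, key.upper())
    else:
        hits += 1
print(hits, misses)
```

Under this bursty, skewed stream a plain LRU of the same size would also keep "hot", but a popularity score that decays over time is what lets the policy track popularity that shifts, which is the edge-network regime the abstract targets.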

Learning in the Sky: An Efficient 3D Placement of UAVs

Title Learning in the Sky: An Efficient 3D Placement of UAVs
Authors Atefeh Hajijamali Arani, M. Mahdi Azari, William Melek, Safieddin Safavi-Naeini
Abstract Deployment of unmanned aerial vehicles (UAVs) as aerial base stations can deliver a fast and flexible solution for serving varying traffic demand. To adequately benefit from UAV deployment, their efficient placement is of utmost importance, and requires intelligent adaptation to environmental changes. In this paper, we propose a learning-based mechanism for the three-dimensional deployment of UAVs assisting terrestrial cellular networks in the downlink. The problem is modeled as a non-cooperative game among UAVs in satisfaction form. To solve the game, we utilize a low-complexity algorithm in which unsatisfied UAVs update their locations based on a learning algorithm. Simulation results reveal that the proposed UAV placement algorithm yields significant performance gains, up to about 52% and 74% in terms of throughput and the number of dropped users, respectively, compared to an optimized baseline algorithm.
Published 2020-03-02
URL https://arxiv.org/abs/2003.02650v1
PDF https://arxiv.org/pdf/2003.02650v1.pdf
PWC https://paperswithcode.com/paper/learning-in-the-sky-an-efficient-3d-placement

Information Compensation for Deep Conditional Generative Networks

Title Information Compensation for Deep Conditional Generative Networks
Authors Zehao Wang, Kaili Wang, Tinne Tuytelaars, Jose Oramas
Abstract In recent years, unsupervised/weakly-supervised conditional generative adversarial networks (GANs) have achieved many successes on the task of modeling and generating data. However, one of their weaknesses lies in their poor ability to separate, or disentangle, the different factors that characterize the representation encoded in their latent space. To address this issue, we propose a novel structure for unsupervised conditional GANs powered by a novel Information Compensation Connection (IC-Connection). The proposed IC-Connection enables GANs to compensate for information loss incurred during deconvolution operations. In addition, to quantify the degree of disentanglement on both discrete and continuous latent variables, we design a novel evaluation procedure. Our empirical results suggest that our method achieves better disentanglement compared to the state-of-the-art GANs in a conditional generation setting.
Published 2020-01-23
URL https://arxiv.org/abs/2001.08559v2
PDF https://arxiv.org/pdf/2001.08559v2.pdf
PWC https://paperswithcode.com/paper/information-compensation-for-deep-conditional

Graphcore C2 Card performance for image-based deep learning application: A Report

Title Graphcore C2 Card performance for image-based deep learning application: A Report
Authors Ilyes Kacher, Maxime Portaz, Hicham Randrianarivo, Sylvain Peyronnet
Abstract Recently, Graphcore has introduced an IPU Processor for accelerating machine learning applications. The architecture of the processor has been designed to achieve state of the art performance on current machine intelligence models for both training and inference. In this paper, we report on a benchmark in which we have evaluated the performance of IPU processors on deep neural networks for inference. We focus on deep vision models such as ResNeXt. We report the observed latency, throughput and energy efficiency.
Published 2020-02-26
URL https://arxiv.org/abs/2002.11670v2
PDF https://arxiv.org/pdf/2002.11670v2.pdf
PWC https://paperswithcode.com/paper/graphcore-c2-card-performance-for-image-based