April 3, 2020

3168 words 15 mins read

Paper Group ANR 22

Fast-Fourier-Forecasting Resource Utilisation in Distributed Systems

Title Fast-Fourier-Forecasting Resource Utilisation in Distributed Systems
Authors Paul J. Pritz, Daniel Perez, Kin K. Leung
Abstract Distributed computing systems often consist of hundreds of nodes, executing tasks with different resource requirements. Efficient resource provisioning and task scheduling in such systems are non-trivial and require close monitoring and accurate forecasting of the state of the system, specifically resource utilisation at its constituent machines. Two challenges present themselves towards these objectives. First, collecting monitoring data entails substantial communication overhead. This overhead can be prohibitively high, especially in networks where bandwidth is limited. Second, forecasting models to predict resource utilisation should be accurate and need to exhibit high inference speed. Mission critical scheduling and resource allocation algorithms use these predictions and rely on their immediate availability. To address the first challenge, we present a communication-efficient data collection mechanism. Resource utilisation data is collected at the individual machines in the system and transmitted to a central controller in batches. Each batch is processed by an adaptive data-reduction algorithm based on Fourier transforms and truncation in the frequency domain. We show that the proposed mechanism leads to a significant reduction in communication overhead while incurring only minimal error and adhering to accuracy guarantees. To address the second challenge, we propose a deep learning architecture using complex Gated Recurrent Units to forecast resource utilisation. This architecture is directly integrated with the above data collection mechanism to improve inference speed of our forecasting model. Using two real-world datasets, we demonstrate the effectiveness of our approach, both in terms of forecasting accuracy and inference speed. Our approach resolves challenges encountered in resource provisioning frameworks and can be applied to other forecasting problems.
Published 2020-01-13
URL https://arxiv.org/abs/2001.04281v2
PDF https://arxiv.org/pdf/2001.04281v2.pdf
PWC https://paperswithcode.com/paper/fast-fourier-forecasting-resource-utilisation
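The data-reduction step can be illustrated with a minimal sketch: transform a batch of utilisation samples into the frequency domain, keep only the largest-magnitude coefficients, and reconstruct on the controller side. This sketch uses a naive O(n²) DFT in place of an FFT and a fixed number of retained coefficients; the paper's adaptive truncation and accuracy guarantees are not reproduced here.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (O(n^2); stands in for an FFT)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT; returns the real part of the reconstruction."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def truncate(X, keep):
    """Zero all but the `keep` largest-magnitude frequency coefficients."""
    order = sorted(range(len(X)), key=lambda k: -abs(X[k]))
    kept = set(order[:keep])
    return [X[k] if k in kept else 0.0 for k in range(len(X))]

# A batch dominated by a single frequency compresses almost losslessly:
# only 2 of 8 coefficients need to be transmitted.
batch = [math.cos(2 * math.pi * t / 8) for t in range(8)]
reconstructed = idft(truncate(dft(batch), keep=2))
```

Real utilisation traces are noisier than a pure sinusoid, which is why the paper truncates adaptively per batch rather than with a fixed `keep`.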

Intra Order-preserving Functions for Calibration of Multi-Class Neural Networks

Title Intra Order-preserving Functions for Calibration of Multi-Class Neural Networks
Authors Amir Rahimi, Amirreza Shaban, Ching-An Cheng, Byron Boots, Richard Hartley
Abstract Predicting calibrated confidence scores for multi-class deep networks is important for avoiding rare but costly mistakes. A common approach is to learn a post-hoc calibration function that transforms the output of the original network into calibrated confidence scores while maintaining the network’s accuracy. However, previous post-hoc calibration techniques work only with simple calibration functions, potentially lacking sufficient representation to calibrate the complex function landscape of deep networks. In this work, we aim to learn general post-hoc calibration functions that can preserve the top-k predictions of any deep network. We call this family of functions intra order-preserving functions. We propose a new neural network architecture that represents a class of intra order-preserving functions by combining common neural network components. Additionally, we introduce order-invariant and diagonal sub-families, which can act as regularization for better generalization when the training data size is small. We show the effectiveness of the proposed method across a wide range of datasets and classifiers. Our method outperforms state-of-the-art post-hoc calibration methods, namely temperature scaling and Dirichlet calibration, in multiple settings.
Tasks Calibration
Published 2020-03-15
URL https://arxiv.org/abs/2003.06820v1
PDF https://arxiv.org/pdf/2003.06820v1.pdf
PWC https://paperswithcode.com/paper/intra-order-preserving-functions-for
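Temperature scaling, the baseline the paper compares against, is itself a simple intra order-preserving map: dividing all logits by one scalar T > 0 reshapes the confidences but can never change the ranking of the classes. A minimal sketch of that baseline (not the paper's learned calibration network):

```python
import math

def softmax_with_temperature(logits, T=1.0):
    """Softmax over logits / T; any T > 0 preserves the ordering of the logits."""
    z = [l / T for l in logits]
    m = max(z)                        # subtract max for numerical stability
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

logits = [2.0, 1.0, 0.1]
p_raw = softmax_with_temperature(logits, T=1.0)
p_cal = softmax_with_temperature(logits, T=2.0)   # T > 1 softens overconfident scores
```

The top-1 prediction is identical before and after scaling, which is exactly the property the paper generalises from a single scalar T to a whole family of neural-network-parameterised functions.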

Using Deep Learning to Improve Ensemble Smoother: Applications to Subsurface Characterization

Title Using Deep Learning to Improve Ensemble Smoother: Applications to Subsurface Characterization
Authors Jiangjiang Zhang, Qiang Zheng, Laosheng Wu, Lingzao Zeng
Abstract Ensemble smoother (ES) has been widely used in various research fields to reduce the uncertainty of the system-of-interest. However, the commonly-adopted ES method that employs the Kalman formula, that is, ES$_\text{(K)}$, does not perform well when the probability distributions involved are non-Gaussian. To address this issue, we suggest using deep learning (DL) to derive an alternative update scheme for ES in complex data assimilation applications. Here we show that the DL-based ES method, that is, ES$_\text{(DL)}$, is more general and flexible. In this new update scheme, a high volume of training data are generated from a relatively small-sized ensemble of model parameters and simulation outputs, and possible non-Gaussian features can be preserved in the training data and captured by an adequate DL model. This new variant of ES is tested in two subsurface characterization problems with or without Gaussian assumptions. Results indicate that ES$_\text{(DL)}$ can produce similar (in the Gaussian case) or even better (in the non-Gaussian case) results compared to those from ES$_\text{(K)}$. The success of ES$_\text{(DL)}$ comes from the power of DL in extracting complex (including non-Gaussian) features and learning nonlinear relationships from massive amounts of training data. Although in this work we only apply the ES$_\text{(DL)}$ method in parameter estimation problems, the proposed idea can be conveniently extended to analysis of model structural uncertainty and state estimation in real-time forecasting studies.
Published 2020-02-21
URL https://arxiv.org/abs/2002.09100v1
PDF https://arxiv.org/pdf/2002.09100v1.pdf
PWC https://paperswithcode.com/paper/using-deep-learning-to-improve-ensemble
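For contrast with the DL-based update, the Kalman-formula update that ES$_\text{(K)}$ relies on can be sketched in the scalar case: each ensemble member is shifted toward a perturbed observation by a gain built from ensemble covariances. This is a one-parameter, one-observation illustration, not the paper's multivariate implementation:

```python
import random

def es_kalman_update(params, preds, d_obs, obs_var, rng):
    """One ES(K) step for a scalar parameter m and scalar prediction d."""
    n = len(params)
    m_bar = sum(params) / n
    d_bar = sum(preds) / n
    c_md = sum((m - m_bar) * (d - d_bar) for m, d in zip(params, preds)) / (n - 1)
    c_dd = sum((d - d_bar) ** 2 for d in preds) / (n - 1)
    gain = c_md / (c_dd + obs_var)                 # Kalman-style gain
    # shift each member toward its own perturbed copy of the observation
    return [m + gain * (d_obs + rng.gauss(0, obs_var ** 0.5) - d)
            for m, d in zip(params, preds)]

rng = random.Random(0)
prior = [rng.gauss(0.0, 1.0) for _ in range(200)]   # prior ensemble around 0
preds = [2.0 * m for m in prior]                    # linear forward model d = 2m
posterior = es_kalman_update(prior, preds, d_obs=2.0, obs_var=0.01, rng=rng)
```

With the linear-Gaussian setup above the update pulls the ensemble mean toward the true parameter (m = 1); the paper's point is that when the forward model or priors are non-Gaussian, this linear gain is exactly what the learned ES$_\text{(DL)}$ update replaces.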

Learning From Multiple Experts: Self-paced Knowledge Distillation for Long-tailed Classification

Title Learning From Multiple Experts: Self-paced Knowledge Distillation for Long-tailed Classification
Authors Liuyu Xiang, Guiguang Ding
Abstract In real-world scenarios, data tends to exhibit a long-tailed, imbalanced distribution. Developing algorithms to deal with such long-tailed distribution thus becomes indispensable in practical applications. In this paper, we propose a novel self-paced knowledge distillation framework, termed Learning From Multiple Experts (LFME). Our method is inspired by the observation that deep Convolutional Neural Networks (CNNs) trained on less imbalanced subsets of the entire long-tailed distribution often yield better performances than their jointly-trained counterparts. We refer to these models as 'Expert Models', and the proposed LFME framework aggregates the knowledge from multiple 'Experts' to learn a unified student model. Specifically, the proposed framework involves two levels of self-paced learning schedules: Self-paced Expert Selection and Self-paced Instance Selection, so that the knowledge is adaptively transferred from multiple 'Experts' to the 'Student'. In order to verify the effectiveness of our proposed framework, we conduct extensive experiments on two long-tailed benchmark classification datasets. The experimental results demonstrate that our method is able to achieve superior performances compared to the state-of-the-art methods. We also show that our method can be easily plugged into state-of-the-art long-tailed classification algorithms for further improvements.
Published 2020-01-06
URL https://arxiv.org/abs/2001.01536v1
PDF https://arxiv.org/pdf/2001.01536v1.pdf
PWC https://paperswithcode.com/paper/learning-from-multiple-experts-self-paced
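The core distillation signal that transfers knowledge from an expert to the student is commonly the temperature-softened KL divergence between their output distributions. A minimal sketch of that standard loss follows; LFME's self-paced expert and instance weighting on top of it is not reproduced here.

```python
import math

def softened(logits, T):
    """Temperature-softened softmax distribution."""
    m = max(logits)
    exps = [math.exp((l - m) / T) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """T^2-scaled KL(teacher || student) on softened distributions."""
    p = softened(teacher_logits, T)
    q = softened(student_logits, T)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return T * T * kl
```

The loss is zero when the student matches the expert exactly and grows as their predictions diverge, which is what lets a self-paced schedule weight "how much" each expert or instance should currently teach the student.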

A Novel Inspection System For Variable Data Printing Using Deep Learning

Title A Novel Inspection System For Variable Data Printing Using Deep Learning
Authors Oren Haik, Oded Perry, Eli Chen, Peter Klammer
Abstract We present a novel approach for inspecting variable data prints (VDP) with an ultra-low false alarm rate (0.005%) and potential applicability to other real-world problems. The system is based on a comparison between two images: a reference image and an image captured by low-cost scanners. The comparison task is challenging as low-cost imaging systems create artifacts that may erroneously be classified as true (genuine) defects. To address this challenge we introduce two new fusion methods, for change detection applications, which are both fast and efficient. The first is an early fusion method that combines the two input images into a single pseudo-color image. The second, called Change-Detection Single Shot Detector (CD-SSD) leverages the SSD by fusing features in the middle of the network. We demonstrate the effectiveness of the proposed deep learning-based approach with a large dataset from real-world printing scenarios. Finally, we evaluate our models on a different domain of aerial imagery change detection (AICD). Our best method clearly outperforms the state-of-the-art baseline on this dataset.
Published 2020-01-13
URL https://arxiv.org/abs/2001.04325v1
PDF https://arxiv.org/pdf/2001.04325v1.pdf
PWC https://paperswithcode.com/paper/a-novel-inspection-system-for-variable-data
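The early-fusion idea can be sketched for grayscale inputs: pack the reference image, the scanned image, and their difference into the three channels of one pseudo-colour image, so a standard single-input network sees both views at once. The specific channel assignment below is an assumption for illustration, not the paper's specification.

```python
def early_fusion(ref, scan):
    """Fuse two grayscale images (lists of rows) into one 3-channel image.

    Channels: R = reference pixel, G = scanned pixel, B = absolute difference,
    so regions where the print deviates from the reference light up in channel B.
    """
    return [[(r, s, abs(r - s)) for r, s in zip(ref_row, scan_row)]
            for ref_row, scan_row in zip(ref, scan)]

# one 1x2 image: first pixel differs (candidate defect), second matches
fused = early_fusion([[10, 20]], [[7, 20]])
```

A downstream classifier then has to decide whether a bright difference channel is a genuine defect or a benign scanner artifact, which is the hard part the paper's trained models address.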

Achieving the fundamental convergence-communication tradeoff with Differentially Quantized Gradient Descent

Title Achieving the fundamental convergence-communication tradeoff with Differentially Quantized Gradient Descent
Authors Chung-Yi Lin, Victoria Kostina, Babak Hassibi
Abstract The problem of reducing the communication cost in distributed training through gradient quantization is considered. For the class of smooth and strongly convex objective functions, we characterize the minimum achievable linear convergence rate for a given number of bits per problem dimension $n$. We propose Differentially Quantized Gradient Descent, a quantization algorithm with error compensation, and prove that it achieves the fundamental tradeoff between communication rate and convergence rate as $n$ goes to infinity. In contrast, the naive quantizer that compresses the current gradient directly fails to achieve that optimal tradeoff. Experimental results on both simulated and real-world least-squares problems confirm our theoretical analysis.
Tasks Quantization
Published 2020-02-06
URL https://arxiv.org/abs/2002.02508v1
PDF https://arxiv.org/pdf/2002.02508v1.pdf
PWC https://paperswithcode.com/paper/achieving-the-fundamental-convergence
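The error-compensation idea can be illustrated with a simplified error-feedback loop: quantize the gradient plus the carried-over quantization error, then feed the residual back into the next step so no information is permanently discarded. This is a generic scalar sketch in the spirit of the paper's scheme, not its exact quantizer or rate analysis.

```python
def quantize(x, step):
    """Uniform scalar quantizer with resolution `step`."""
    return step * round(x / step)

def dq_gradient_descent(grad, x0, lr=0.4, step=0.05, iters=60):
    """Gradient descent on quantized gradients with error compensation."""
    x, err = x0, 0.0
    for _ in range(iters):
        g = grad(x)
        q = quantize(g + err, step)   # quantize gradient plus carried error
        err = (g + err) - q           # feed the residual into the next step
        x -= lr * q
    return x

# minimise f(x) = (x - 3)^2, whose gradient is 2(x - 3)
x_star = dq_gradient_descent(lambda x: 2.0 * (x - 3.0), x0=0.0)
```

Without the `err` term, the iterate would stall once the true gradient falls below the quantizer resolution; with it, the residuals accumulate until they cross a quantization level, so the iterate keeps making progress toward the minimiser.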

Temporal Spike Sequence Learning via Backpropagation for Deep Spiking Neural Networks

Title Temporal Spike Sequence Learning via Backpropagation for Deep Spiking Neural Networks
Authors Wenrui Zhang, Peng Li
Abstract Spiking neural networks (SNNs) are well suited for spatio-temporal learning and implementations on energy-efficient event-driven neuromorphic processors. However, existing SNN error backpropagation (BP) methods lack proper handling of spiking discontinuities and suffer from low performance compared to BP methods for traditional artificial neural networks. In addition, a large number of time steps are typically required for SNNs to achieve decent performance, leading to high latency and rendering spike-based computation unscalable to deep architectures. We present a novel Temporal Spike Sequence Learning Backpropagation (TSSL-BP) method for training deep SNNs, which breaks down error backpropagation across two types of dependencies: inter-neuron and intra-neuron. It captures inter-neuron dependencies through presynaptic firing times, accounting for the all-or-none characteristic of firing activities, and intra-neuron dependencies through the internal evolution of each neuronal state over time. For various image classification datasets, TSSL-BP efficiently trains deep SNNs within a short temporal window of a few steps with improved accuracy and runtime efficiency, including more than 2% accuracy improvement over previously reported SNN work on CIFAR10.
Tasks Image Classification
Published 2020-02-24
URL https://arxiv.org/abs/2002.10085v1
PDF https://arxiv.org/pdf/2002.10085v1.pdf
PWC https://paperswithcode.com/paper/temporal-spike-sequence-learning-via
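The all-or-none firing behaviour that TSSL-BP must differentiate through can be seen in a minimal leaky integrate-and-fire (LIF) simulation; this sketches only the forward spiking dynamics of a single neuron (with assumed leak and threshold constants), not the TSSL-BP learning rule itself.

```python
def lif_forward(inputs, leak=0.5, v_th=1.0):
    """Leaky integrate-and-fire neuron: leak, integrate, all-or-none spike, reset."""
    v, spikes = 0.0, []
    for current in inputs:
        v = leak * v + current        # leaky membrane integration
        if v >= v_th:
            spikes.append(1)          # all-or-none firing (the discontinuity)
            v = 0.0                   # hard reset after a spike
        else:
            spikes.append(0)
    return spikes

# constant sub-threshold input: the neuron fires only after charge accumulates
train = lif_forward([0.6] * 6)
```

The binary spike train is discontinuous in the membrane potential, which is exactly why naive backpropagation fails and why TSSL-BP routes gradients through presynaptic firing times and the neuronal state evolution instead.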

Novel Edge and Density Metrics for Link Cohesion

Title Novel Edge and Density Metrics for Link Cohesion
Authors Cetin Savkli, Catherine Schwartz, Amanda Galante, Jonathan Cohen
Abstract We present a new metric of link cohesion for measuring the strength of edges in complex, highly connected graphs. Link cohesion accounts for local small hop connections and associated node degrees and can be used to support edge scoring and graph simplification. We also present a novel graph density measure to estimate the average cohesion across nodes. Link cohesion and the density measure are employed to demonstrate community detection through graph sparsification by maximizing graph density. Link cohesion is also shown to be loosely correlated with edge betweenness centrality.
Tasks Community Detection
Published 2020-03-06
URL https://arxiv.org/abs/2003.02999v1
PDF https://arxiv.org/pdf/2003.02999v1.pdf
PWC https://paperswithcode.com/paper/novel-edge-and-density-metrics-for-link
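The intuition of scoring an edge by its local small-hop connections can be illustrated with a common-neighbour (Jaccard-style) score; this is a stand-in heuristic for illustration only, not the paper's link cohesion metric.

```python
def common_neighbour_score(adj, u, v):
    """Illustrative edge score from shared 1-hop neighbours (Jaccard style)."""
    nu, nv = set(adj[u]) - {v}, set(adj[v]) - {u}
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

# triangle 0-1-2 with a pendant node 3 attached to 2
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
triangle_edge = common_neighbour_score(adj, 0, 1)   # edge inside the triangle
bridge_edge = common_neighbour_score(adj, 2, 3)     # edge to the pendant node
```

An intra-community edge scores high while a bridge scores low, which is the property that lets such edge scores drive sparsification-based community detection as described in the abstract.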

BVI-DVC: A Training Database for Deep Video Compression

Title BVI-DVC: A Training Database for Deep Video Compression
Authors Di Ma, Fan Zhang, David R. Bull
Abstract Deep learning methods are increasingly being applied in the optimisation of video compression algorithms and can achieve significantly enhanced coding gains, compared to conventional approaches. Such approaches often employ Convolutional Neural Networks (CNNs) which are trained on databases with relatively limited content coverage. In this paper, a new extensive and representative video database, BVI-DVC, is presented for training CNN-based coding tools. BVI-DVC contains 800 sequences at various spatial resolutions from 270p to 2160p and has been evaluated on ten existing network architectures for four different coding tools. Experimental results show that the database produces significant improvements in terms of coding gains over three existing (commonly used) image/video training databases, for all tested CNN architectures under the same training and evaluation configurations.
Tasks Video Compression
Published 2020-03-30
URL https://arxiv.org/abs/2003.13552v1
PDF https://arxiv.org/pdf/2003.13552v1.pdf
PWC https://paperswithcode.com/paper/bvi-dvc-a-training-database-for-deep-video

Serial Speakers: a Dataset of TV Series

Title Serial Speakers: a Dataset of TV Series
Authors Xavier Bost, Vincent Labatut, Georges Linares
Abstract For over a decade, TV series have been drawing increasing interest, both from the audience and from various academic fields. But while most viewers are hooked on the continuous plots of TV serials, the few annotated datasets available to researchers focus on standalone episodes of classical TV series. We aim at filling this gap by providing the multimedia/speech processing communities with Serial Speakers, an annotated dataset of 161 episodes from three popular American TV serials: Breaking Bad, Game of Thrones and House of Cards. Serial Speakers is suitable both for investigating multimedia retrieval in realistic use case scenarios, and for addressing lower level speech related tasks in especially challenging conditions. We publicly release annotations for every speech turn (boundaries, speaker) and scene boundary, along with annotations for shot boundaries, recurring shots, and interacting speakers in a subset of episodes. Because of copyright restrictions, the textual content of the speech turns is encrypted in the public version of the dataset, but we provide the users with a simple online tool to recover the plain text from their own subtitle files.
Published 2020-02-17
URL https://arxiv.org/abs/2002.06923v1
PDF https://arxiv.org/pdf/2002.06923v1.pdf
PWC https://paperswithcode.com/paper/serial-speakers-a-dataset-of-tv-series

A Bayesian Long Short-Term Memory Model for Value at Risk and Expected Shortfall Joint Forecasting

Title A Bayesian Long Short-Term Memory Model for Value at Risk and Expected Shortfall Joint Forecasting
Authors Zhengkun Li, Minh-Ngoc Tran, Chao Wang, Richard Gerlach, Junbin Gao
Abstract Value-at-Risk (VaR) and Expected Shortfall (ES) are widely used in the financial sector to measure the market risk and manage the extreme market movement. The recent link between the quantile score function and the Asymmetric Laplace density has led to a flexible likelihood-based framework for joint modelling of VaR and ES. It is of high interest in financial applications to be able to capture the underlying joint dynamics of these two quantities. We address this problem by developing a hybrid model that is based on the Asymmetric Laplace quasi-likelihood and employs the Long Short-Term Memory (LSTM) time series modelling technique from Machine Learning to capture efficiently the underlying dynamics of VaR and ES. We refer to this model as LSTM-AL. We adopt the adaptive Markov chain Monte Carlo (MCMC) algorithm for Bayesian inference in the LSTM-AL model. Empirical results show that the proposed LSTM-AL model can improve the VaR and ES forecasting accuracy over a range of well-established competing models.
Tasks Bayesian Inference, Time Series
Published 2020-01-23
URL https://arxiv.org/abs/2001.08374v1
PDF https://arxiv.org/pdf/2001.08374v1.pdf
PWC https://paperswithcode.com/paper/a-bayesian-long-short-term-memory-model-for
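The two quantities the LSTM-AL model forecasts have simple empirical counterparts, which are useful as a reference point: historical VaR is an alpha-quantile of returns and historical ES is the mean of the tail beyond it. This sketch is the plain historical estimator, not the paper's model.

```python
def empirical_var_es(returns, alpha=0.05):
    """Historical VaR (alpha-quantile of returns) and ES (mean of the worst tail)."""
    xs = sorted(returns)
    k = max(1, int(alpha * len(xs)))   # size of the worst alpha-tail
    var = xs[k - 1]                    # empirical alpha-quantile
    es = sum(xs[:k]) / k               # average loss given the VaR is breached
    return var, es

# toy sample of 100 "returns" from -10 to 89
var, es = empirical_var_es(list(range(-10, 90)), alpha=0.05)
```

ES is always at least as extreme as VaR, since it averages only the outcomes beyond the VaR threshold; models such as LSTM-AL aim to forecast both jointly rather than estimate them from a rolling historical window.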

Analysis of Bayesian Inference Algorithms by the Dynamical Functional Approach

Title Analysis of Bayesian Inference Algorithms by the Dynamical Functional Approach
Authors Burak Çakmak, Manfred Opper
Abstract We analyze the dynamics of an algorithm for approximate inference with large Gaussian latent variable models in a student-teacher scenario. To model nontrivial dependencies between the latent variables, we assume random covariance matrices drawn from rotation invariant ensembles. For the case of perfect data-model matching, the knowledge of static order parameters derived from the replica method allows us to obtain efficient algorithmic updates in terms of matrix-vector multiplications with a fixed matrix. Using the dynamical functional approach, we obtain an exact effective stochastic process in the thermodynamic limit for a single node. From this, we obtain closed-form expressions for the rate of convergence. Analytical results are in excellent agreement with simulations of single instances of large models.
Tasks Bayesian Inference, Latent Variable Models
Published 2020-01-14
URL https://arxiv.org/abs/2001.04918v1
PDF https://arxiv.org/pdf/2001.04918v1.pdf
PWC https://paperswithcode.com/paper/analysis-of-bayesian-inference-algorithms-by

House-GAN: Relational Generative Adversarial Networks for Graph-constrained House Layout Generation

Title House-GAN: Relational Generative Adversarial Networks for Graph-constrained House Layout Generation
Authors Nelson Nauata, Kai-Hung Chang, Chin-Yi Cheng, Greg Mori, Yasutaka Furukawa
Abstract This paper proposes a novel graph-constrained generative adversarial network, whose generator and discriminator are built upon a relational architecture. The main idea is to encode the constraint into the graph structure of its relational networks. We have demonstrated the proposed architecture for a new house layout generation problem, whose task is to take an architectural constraint as a graph (i.e., the number and types of rooms with their spatial adjacency) and produce a set of axis-aligned bounding boxes of rooms. We measure the quality of generated house layouts with three metrics: realism, diversity, and compatibility with the input graph constraint. Our qualitative and quantitative evaluations over 117,000 real floorplan images demonstrate that the proposed approach outperforms existing methods and baselines. We will publicly share all our code and data.
Published 2020-03-16
URL https://arxiv.org/abs/2003.06988v1
PDF https://arxiv.org/pdf/2003.06988v1.pdf
PWC https://paperswithcode.com/paper/house-gan-relational-generative-adversarial

Video Coding for Machines: A Paradigm of Collaborative Compression and Intelligent Analytics

Title Video Coding for Machines: A Paradigm of Collaborative Compression and Intelligent Analytics
Authors Ling-Yu Duan, Jiaying Liu, Wenhan Yang, Tiejun Huang, Wen Gao
Abstract Video coding, which aims to compress and reconstruct the whole frame, and feature compression, which only preserves and transmits the most critical information, stand at two ends of the scale. That is, one is with compactness and efficiency to serve for machine vision, and the other is with full fidelity, bowing to human perception. The recent endeavors in imminent trends of video compression, e.g. deep learning based coding tools and end-to-end image/video coding, and MPEG-7 compact feature descriptor standards, i.e. Compact Descriptors for Visual Search and Compact Descriptors for Video Analysis, promote the sustainable and fast development in their own directions, respectively. In this paper, thanks to booming AI technology, e.g. prediction and generation models, we carry out exploration in the new area, Video Coding for Machines (VCM), arising from the emerging MPEG standardization efforts. Towards collaborative compression and intelligent analytics, VCM attempts to bridge the gap between feature coding for machine vision and video coding for human vision. Aligning with the rising 'Analyze then Compress' instance, Digital Retina, the definition, formulation, and paradigm of VCM are given first. Meanwhile, we systematically review state-of-the-art techniques in video compression and feature compression from the unique perspective of MPEG standardization, which provides the academic and industrial evidence to realize the collaborative compression of video and feature streams in a broad range of AI applications. Finally, we come up with potential VCM solutions, and the preliminary results have demonstrated the performance and efficiency gains. Further direction is discussed as well.
Tasks Video Compression
Published 2020-01-10
URL https://arxiv.org/abs/2001.03569v2
PDF https://arxiv.org/pdf/2001.03569v2.pdf
PWC https://paperswithcode.com/paper/video-coding-for-machines-a-paradigm-of

Scalable bundling via dense product embeddings

Title Scalable bundling via dense product embeddings
Authors Madhav Kumar, Dean Eckles, Sinan Aral
Abstract Bundling, the practice of jointly selling two or more products at a discount, is a widely used strategy in industry and a well examined concept in academia. Historically, the focus has been on theoretical studies in the context of monopolistic firms and assumed product relationships, e.g., complementarity in usage. We develop a new machine-learning-driven methodology for designing bundles in a large-scale, cross-category retail setting. We leverage historical purchases and consideration sets created from clickstream data to generate dense continuous representations of products called embeddings. We then put minimal structure on these embeddings and develop heuristics for complementarity and substitutability among products. Subsequently, we use the heuristics to create multiple bundles for each product and test their performance using a field experiment with a large retailer. We combine the results from the experiment with product embeddings using a hierarchical model that maps bundle features to their purchase likelihood, as measured by the add-to-cart rate. We find that our embeddings-based heuristics are strong predictors of bundle success, robust across product categories, and generalize well to the retailer’s entire assortment.
Published 2020-01-31
URL https://arxiv.org/abs/2002.00100v1
PDF https://arxiv.org/pdf/2002.00100v1.pdf
PWC https://paperswithcode.com/paper/scalable-bundling-via-dense-product
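Heuristics built on embedding geometry typically start from a similarity measure between product vectors; a minimal sketch using cosine similarity follows. Treating high purchase-embedding similarity as a substitutability signal is this sketch's assumption, not the paper's exact heuristic, and the toy vectors are made up for illustration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense product embeddings."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# hypothetical 3-d embeddings: two colas should sit closer to each other
# than either does to a bag of chips
cola_a = [0.9, 0.1, 0.0]
cola_b = [0.8, 0.2, 0.1]
chips = [0.1, 0.9, 0.3]
```

With such a measure, nearby products in purchase-embedding space are candidate substitutes, and relationships across co-purchase or consideration contexts can flag candidate complements for bundling.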