January 26, 2020

3258 words 16 mins read

Paper Group ANR 1530

Retrieving Similar Trajectories from Cellular Data at City Scale

Title Retrieving Similar Trajectories from Cellular Data at City Scale
Authors Zhihao Shen, Wan Du, Xi Zhao, Jianhua Zou
Abstract Retrieving similar trajectories from a large trajectory dataset is important for a variety of applications, like transportation planning and mobility analysis. Unlike previous works based on fine-grained GPS trajectories, this paper investigates the feasibility of identifying similar trajectories from cellular data observed by mobile infrastructure, which provides more comprehensive coverage. To handle the large localization errors and low sample rates of cellular data, we develop a holistic system, cellSim, which seamlessly integrates map matching and similar trajectory search. A set of map matching techniques is proposed to transform cell tower sequences into moving trajectories on a road map by considering the unique features of cellular data, like the dynamic density of cell towers and bidirectional roads. To further improve the accuracy of similarity search, map matching outputs M trajectory candidates of differing confidence, and a new similarity measure scheme is developed to process the map matching results. Meanwhile, M is dynamically adapted to maintain a low false positive rate of the similarity search, and two pruning schemes are proposed to minimize the computation overhead. Extensive experiments on a large-scale dataset and 1701 km of real-world trajectories reveal that cellSim provides high accuracy (a precision of 62.4% and a recall of 89.8%).
Tasks
Published 2019-07-20
URL https://arxiv.org/abs/1907.12371v2
PDF https://arxiv.org/pdf/1907.12371v2.pdf
PWC https://paperswithcode.com/paper/retrieving-similar-trajectories-from-cellular
Repo
Framework
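
The measure described above, in which map matching emits several candidate trajectories per cellular trace and similarity is scored over the candidate sets, can be illustrated with a minimal sketch. Everything below is hypothetical: the function names, the use of Jaccard similarity over road-segment IDs, and the confidence weighting are illustrative assumptions, not cellSim's actual measure.

```python
# Hypothetical sketch: similarity search over map-matched candidate sets.
# Candidate trajectories are represented as sequences of road-segment IDs.

def jaccard(a, b):
    """Jaccard similarity between two sets of road segments."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def candidate_set_similarity(query_candidates, target_candidates):
    """Score two cellular trajectories by their best-matching candidate pair.

    Each argument is a list of (trajectory, confidence) tuples produced by
    map matching; the measure weights each pair by both confidences.
    """
    best = 0.0
    for traj_q, conf_q in query_candidates:
        for traj_t, conf_t in target_candidates:
            best = max(best, conf_q * conf_t * jaccard(traj_q, traj_t))
    return best

# Toy usage: two candidates for the query, one for the target.
query = [([1, 2, 3, 4], 0.7), ([1, 2, 5, 4], 0.3)]
target = [([2, 3, 4, 6], 0.9)]
print(candidate_set_similarity(query, target))  # ~0.378
```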

MAPEL: Multi-Agent Pursuer-Evader Learning using Situation Report

Title MAPEL: Multi-Agent Pursuer-Evader Learning using Situation Report
Authors Sagar Verma, Richa Verma, P. B. Sujit
Abstract In this paper, we consider a territory guarding game involving pursuers, evaders and a target in an environment that contains obstacles. The goal of the evaders is to capture the target, while that of the pursuers is to capture the evaders before they reach the target. All the agents have limited sensing range and can only detect each other when they are in their observation space. We focus on the challenge of effective cooperation between agents of a team. Finding exact solutions for such multi-agent systems is difficult because of the inherent complexity. We present Multi-Agent Pursuer-Evader Learning (MAPEL), a class of algorithms that use spatio-temporal graph representation to learn structured cooperation. The key concept is that the learning takes place in a decentralized manner and agents use situation report updates to learn about the whole environment from each other’s partial observations. We use Recurrent Neural Networks (RNNs) to parameterize the spatio-temporal graph. An agent in MAPEL updates all the other agents via a situation report only if an opponent or the target is inside its observation space. We present two methods for cooperation via situation report update: a) Peer-to-Peer Situation Report (P2PSR) and b) Ring Situation Report (RSR). We present a detailed analysis of how these two cooperation methods perform as the number of agents in the game is increased. We provide empirical results to show how agents cooperate under these two methods.
Tasks
Published 2019-10-17
URL https://arxiv.org/abs/1910.07780v1
PDF https://arxiv.org/pdf/1910.07780v1.pdf
PWC https://paperswithcode.com/paper/mapel-multi-agent-pursuer-evader-learning
Repo
Framework
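
As a rough illustration of the two situation-report schemes named above (P2PSR and RSR), the sketch below shows only the message-passing pattern; how each report is folded into an agent's recurrent state, and the RNN-parameterized spatio-temporal graph itself, are left out. All names are hypothetical.

```python
# Hedged sketch of the two situation-report schemes. A "report" is simply the
# observing agent's partial observation; its integration into each teammate's
# recurrent state is abstracted away here.

class Agent:
    def __init__(self, name):
        self.name = name
        self.knowledge = []          # accumulated situation reports

    def receive(self, report):
        self.knowledge.append(report)

def p2p_situation_report(sender, agents, report):
    """Peer-to-Peer: the observer updates every other teammate directly."""
    for agent in agents:
        if agent is not sender:
            agent.receive(report)

def ring_situation_report(sender, agents, report):
    """Ring: the report travels hop by hop around a fixed ring of agents."""
    start = agents.index(sender)
    for offset in range(1, len(agents)):
        agents[(start + offset) % len(agents)].receive(report)

team = [Agent(f"pursuer_{i}") for i in range(4)]
p2p_situation_report(team[0], team, {"opponent_at": (3.0, 1.5)})
```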

Predicting AC Optimal Power Flows: Combining Deep Learning and Lagrangian Dual Methods

Title Predicting AC Optimal Power Flows: Combining Deep Learning and Lagrangian Dual Methods
Authors Ferdinando Fioretto, Terrence W. K. Mak, Pascal Van Hentenryck
Abstract The Optimal Power Flow (OPF) problem is a fundamental building block for the optimization of electrical power systems. It is nonlinear and nonconvex and computes the generator setpoints for power and voltage, given a set of load demands. It often needs to be solved repeatedly under various conditions, either in real time or in large-scale studies. This need is further exacerbated by the increasing stochasticity of power systems due to renewable energy sources in front of and behind the meter. To address these challenges, this paper presents a deep learning approach to the OPF. The learning model exploits the information available in the prior states of the system (which is commonly available in practical applications), as well as a dual Lagrangian method to satisfy the physical and engineering constraints present in the OPF. The proposed model is evaluated on a large collection of realistic power systems. The experimental results show that its predictions are highly accurate, with average errors as low as 0.2%. Additionally, the proposed approach is shown to improve the accuracy of the widely adopted linear DC approximation of the OPF by at least two orders of magnitude.
Tasks
Published 2019-09-19
URL https://arxiv.org/abs/1909.10461v2
PDF https://arxiv.org/pdf/1909.10461v2.pdf
PWC https://paperswithcode.com/paper/190910461
Repo
Framework
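
The Lagrangian dual idea described above, penalizing constraint violations with multipliers that are themselves updated by dual ascent, can be sketched as follows. This is a minimal sketch assuming PyTorch; the network shape, the `constraint_violations` stand-in, and all hyperparameters are illustrative assumptions rather than the paper's actual formulation.

```python
import torch
import torch.nn as nn

# Hedged sketch: a DNN predicts generator setpoints from loads, and Lagrange
# multipliers on constraint violations are updated with dual ascent.

def constraint_violations(pred):
    # Hypothetical stand-in for the AC power-flow / engineering constraints:
    # returns non-negative violation magnitudes, e.g. voltage bound violations.
    upper = torch.tensor([1.05, 1.05])
    return torch.relu(pred[:, :2] - upper).mean(dim=0)

model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lambdas = torch.zeros(2)            # one multiplier per constraint group
rho = 0.1                           # dual step size

loads = torch.randn(32, 8)
targets = torch.randn(32, 4)        # setpoints from an offline AC-OPF solver

for epoch in range(100):
    pred = model(loads)
    viol = constraint_violations(pred)
    loss = nn.functional.mse_loss(pred, targets) + (lambdas * viol).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Dual ascent: multipliers grow where constraints are still violated.
    with torch.no_grad():
        lambdas += rho * constraint_violations(model(loads)).detach()
```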

An Empirical Exploration of Deep Recurrent Connections and Memory Cells Using Neuro-Evolution

Title An Empirical Exploration of Deep Recurrent Connections and Memory Cells Using Neuro-Evolution
Authors Travis J. Desell, AbdElRahman A. ElSaid, Alexander G. Ororbia
Abstract Neuro-evolution and neural architecture search algorithms have gained increasing interest due to the challenges involved in designing optimal artificial neural networks (ANNs). While these algorithms have been shown to possess the potential to outperform the best human crafted architectures, a less common use of them is as a tool for analysis of ANN structural components and connectivity structures. In this work, we focus on this particular use-case to develop a rigorous examination and comparison framework for analyzing recurrent neural networks (RNNs) applied to time series prediction using the novel neuro-evolutionary process known as Evolutionary eXploration of Augmenting Memory Models (EXAMM). Specifically, we use our EXAMM-based analysis to investigate the capabilities of recurrent memory cells and the generalization ability afforded by various complex recurrent connectivity patterns that span one or more steps in time, i.e., deep recurrent connections. EXAMM, in this study, was used to train over 10.56 million RNNs in 5,280 repeated experiments with varying components. While many modern, often hand-crafted RNNs rely on complex memory cells (which have internal recurrent connections that only span a single time step) operating under the assumption that these sufficiently latch information and handle long-term dependencies, our results show that networks evolved with deep recurrent connections perform significantly better than those without. More importantly, in some cases, the best performing RNNs consisted of only simple neurons and deep time skip connections, without any memory cells. These results strongly suggest that utilizing deep time skip connections in RNNs for time series data prediction not only deserves further, dedicated study, but also demonstrates the potential of neuro-evolution as a means to better study, understand, and train effective RNNs.
Tasks Neural Architecture Search, Time Series, Time Series Prediction
Published 2019-09-20
URL https://arxiv.org/abs/1909.09502v3
PDF https://arxiv.org/pdf/1909.09502v3.pdf
PWC https://paperswithcode.com/paper/an-empirical-exploration-of-deep-recurrent
Repo
Framework
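
A deep recurrent (time-skip) connection of the kind studied above simply lets the hidden state at time t depend on a state several steps back, without any gated memory cell. A minimal sketch, assuming PyTorch and purely illustrative sizes:

```python
import torch
import torch.nn as nn

# Hedged sketch of a simple recurrent unit with a deep time-skip connection:
# the hidden state at time t sees both h[t-1] and h[t-k].

class SkipRNN(nn.Module):
    def __init__(self, input_size, hidden_size, skip=4):
        super().__init__()
        self.skip = skip
        self.in_proj = nn.Linear(input_size, hidden_size)
        self.rec_proj = nn.Linear(hidden_size, hidden_size)
        self.skip_proj = nn.Linear(hidden_size, hidden_size)
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, x):                       # x: (batch, time, features)
        batch, steps, _ = x.shape
        hidden = self.in_proj.out_features
        states = [x.new_zeros(batch, hidden)]   # h[0]
        for t in range(steps):
            prev = states[-1]
            far = states[max(len(states) - self.skip, 0)]
            h = torch.tanh(self.in_proj(x[:, t]) + self.rec_proj(prev)
                           + self.skip_proj(far))
            states.append(h)
        return self.out(states[-1])             # one-step-ahead prediction

model = SkipRNN(input_size=3, hidden_size=16, skip=4)
prediction = model(torch.randn(8, 20, 3))       # shape (8, 1)
```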

Contrastive Learning for Lifted Networks

Title Contrastive Learning for Lifted Networks
Authors Christopher Zach, Virginia Estellers
Abstract In this work we address supervised learning of neural networks via lifted network formulations. Lifted networks are interesting because they allow training on massively parallel hardware and assign energy models to discriminatively trained neural networks. We demonstrate that the training methods for lifted networks proposed in the literature have significant limitations and show how to use a contrastive loss to address those limitations. We demonstrate that this contrastive training approximates back-propagation in theory and in practice and that it is superior to the training objective regularly used for lifted networks.
Tasks
Published 2019-05-07
URL https://arxiv.org/abs/1905.02507v2
PDF https://arxiv.org/pdf/1905.02507v2.pdf
PWC https://paperswithcode.com/paper/contrastive-learning-for-lifted-networks
Repo
Framework

Physics-Informed Deep Neural Network Method for Limited Observability State Estimation

Title Physics-Informed Deep Neural Network Method for Limited Observability State Estimation
Authors Jonatan Ostrometzky, Konstantin Berestizshevsky, Andrey Bernstein, Gil Zussman
Abstract Precise knowledge of the state of the power grid is important in order to ensure optimal and reliable grid operation. Specifically, knowing the state of the distribution grid becomes increasingly important as more renewable energy sources are connected directly to the distribution network, increasing the fluctuations of the injected power. In this paper, we consider the case in which the distribution grid becomes partially observable and the state estimation problem is under-determined. We present a new methodology that leverages a deep neural network (DNN) to estimate the grid state. The standard DNN training method is modified to explicitly incorporate the physical information of the grid topology and line/shunt admittance. We show that our method leads to superior estimation accuracy compared to the case in which no physical information is provided. Finally, we compare the performance of our method to the standard state estimation approach, which is based on weighted least squares with pseudo-measurements, and show that our method performs significantly better with respect to estimation accuracy.
Tasks
Published 2019-10-14
URL https://arxiv.org/abs/1910.06401v2
PDF https://arxiv.org/pdf/1910.06401v2.pdf
PWC https://paperswithcode.com/paper/physics-informed-deep-neural-network-method
Repo
Framework
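
The training modification described above can be sketched as a physics-regularized loss: alongside the usual supervised term, the predicted states are pushed to be consistent with measured injections through an admittance-based model. The sketch below assumes PyTorch and a hypothetical linear measurement model; the real method encodes the actual grid topology and line/shunt admittances.

```python
import torch
import torch.nn as nn

# Hedged sketch of a physics-regularized training loss for state estimation.

n_bus = 10
Y = torch.randn(n_bus, n_bus)            # stand-in for the bus admittance matrix

def injections(v):
    # Hypothetical linearized measurement model: injections = Y @ v.
    return v @ Y.T

model = nn.Sequential(nn.Linear(5, 64), nn.ReLU(), nn.Linear(64, n_bus))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

meas = torch.randn(128, 5)               # the few available measurements
true_state = torch.randn(128, n_bus)     # labels from historical full observability
inj_meas = injections(true_state)        # injection pseudo-measurements

for step in range(200):
    v_hat = model(meas)
    loss = (nn.functional.mse_loss(v_hat, true_state)
            + 0.5 * nn.functional.mse_loss(injections(v_hat), inj_meas))
    opt.zero_grad()
    loss.backward()
    opt.step()
```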

Sequential no-Substitution k-Median-Clustering

Title Sequential no-Substitution k-Median-Clustering
Authors Tom Hess, Sivan Sabato
Abstract We study the sample-based k-median clustering objective under a sequential setting without substitutions. In this setting, an i.i.d. sequence of examples is observed. An example can be selected as a center only immediately after it is observed, and it cannot be substituted later. The goal is to select a set of centers with a good k-median cost on the distribution which generated the sequence. We provide an efficient algorithm for this setting, and show that its multiplicative approximation factor is twice the approximation factor of an efficient offline algorithm. In addition, we show that if efficiency requirements are removed, there is an algorithm that can obtain the same approximation factor as the best offline algorithm. We demonstrate in experiments the performance of the efficient algorithm on real data sets.
Tasks
Published 2019-05-30
URL https://arxiv.org/abs/1905.12925v2
PDF https://arxiv.org/pdf/1905.12925v2.pdf
PWC https://paperswithcode.com/paper/sequential-no-substitution-k-median
Repo
Framework

Time Series Modeling for Dream Team in Fantasy Premier League

Title Time Series Modeling for Dream Team in Fantasy Premier League
Authors Akhil Gupta
Abstract The performance of football players in the English Premier League varies largely from season to season and across teams. It is evident that a method capable of forecasting and analyzing the future of these players' on-field antics would assist the management to a great extent. In a simulated environment like the Fantasy Premier League, enthusiasts from all over the world participate and manage a catalogue of players for the entire season. Due to the dynamic nature of the points system, there is no known approach for the formulation of a dream team. This study aims to tackle this problem by using a hybrid of Autoregressive Integrated Moving Average (ARIMA) and Recurrent Neural Networks (RNNs) for time series prediction of player points, followed by maximization of total points using Linear Programming (LPP). Given the player points for the past three seasons, predictions were made for the current season by modeling ARIMA and RNN separately and then creating an ensemble of the two. Prior to that, data preprocessing techniques were deployed to enhance the efficacy of the model. Constraints on the type of players, like goalkeepers, defenders, midfielders and forwards, along with the total budget, were effectively optimized using the LPP approach. The proposed team was validated against performance in the upcoming season, where the players performed as expected, strengthening the feasibility of the solution. Likewise, the proposed approach can be extended to the English Premier League itself for use by official managers on the field.
Tasks Time Series, Time Series Prediction
Published 2019-09-19
URL https://arxiv.org/abs/1909.12938v1
PDF https://arxiv.org/pdf/1909.12938v1.pdf
PWC https://paperswithcode.com/paper/time-series-modeling-for-dream-team-in
Repo
Framework
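
The final optimization step, maximizing total predicted points under budget and squad-composition constraints, can be sketched with an off-the-shelf LP solver. The numbers, quotas, and variable names below are illustrative, and this version solves the LP relaxation (a real squad would additionally need 0/1 integrality on the selection variables).

```python
import numpy as np
from scipy.optimize import linprog

# Hedged sketch of the team-selection step: maximize predicted points subject
# to a budget, a squad size, and per-position quotas.

rng = np.random.default_rng(0)
n = 40
points = rng.uniform(20, 200, n)                  # predicted season points
cost = rng.uniform(4.0, 13.0, n)                  # player prices
position = rng.integers(0, 4, n)                  # 0=GK, 1=DEF, 2=MID, 3=FWD
quota = {0: 2, 1: 5, 2: 5, 3: 3}                  # 15-player squad composition

A_ub, b_ub = [cost], [100.0]                      # total budget of 100.0
A_eq, b_eq = [], []
for pos, count in quota.items():
    A_eq.append((position == pos).astype(float))
    b_eq.append(float(count))

# linprog minimizes, so negate the points to maximize them.
res = linprog(-points, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, 1)] * n, method="highs")
picked = np.where(res.x > 0.99)[0]
print("selected players:", picked, "total cost:", cost[res.x > 0.99].sum())
```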

Bayesian Incremental Inference Update by Re-using Calculations from Belief Space Planning: A New Paradigm

Title Bayesian Incremental Inference Update by Re-using Calculations from Belief Space Planning: A New Paradigm
Authors Elad I. Farhi, Vadim Indelman
Abstract Inference and decision making under uncertainty are key processes in every autonomous system and numerous robotic problems. In recent years, the similarities between inference and decision making triggered much work, from developing unified computational frameworks to pondering the duality between the two. In spite of these efforts, inference and control, as well as inference and belief space planning (BSP), are still treated as two separate processes. In this paper we propose a paradigm shift, a novel approach which deviates from conventional Bayesian inference and utilizes the similarities between inference and BSP. We make the key observation that inference can be efficiently updated using predictions made during the decision making stage, even in light of inconsistent data association between the two. We developed a two-staged process that implements our novel approach and updates inference using calculations from the precursory planning phase. Using autonomous navigation in an unknown environment along with iSAM2 efficient methodologies as a test case, we benchmarked our novel approach against standard Bayesian inference, both with synthetic and real-world data (KITTI dataset). Results indicate that our approach not only improves running time by at least a factor of two while providing the same estimation accuracy, but also alleviates the computational burden of state dimensionality and loop closures.
Tasks Autonomous Navigation, Bayesian Inference, Decision Making, Decision Making Under Uncertainty
Published 2019-08-06
URL https://arxiv.org/abs/1908.02002v1
PDF https://arxiv.org/pdf/1908.02002v1.pdf
PWC https://paperswithcode.com/paper/bayesian-incremental-inference-update-by-re
Repo
Framework

Deep Multi-Kernel Convolutional LSTM Networks and an Attention-Based Mechanism for Videos

Title Deep Multi-Kernel Convolutional LSTM Networks and an Attention-Based Mechanism for Videos
Authors Sebastian Agethen, Winston H. Hsu
Abstract Action recognition greatly benefits motion understanding in video analysis. Recurrent networks such as long short-term memory (LSTM) networks are a popular choice for motion-aware sequence learning tasks. Recently, a convolutional extension of LSTM was proposed, in which input-to-hidden and hidden-to-hidden transitions are modeled through convolution with a single kernel. This implies an unavoidable trade-off between effectiveness and efficiency. Herein, we propose a new enhancement to convolutional LSTM networks that supports accommodation of multiple convolutional kernels and layers. This resembles a Network-in-LSTM approach, which improves upon the aforementioned concern. In addition, we propose an attention-based mechanism that is specifically designed for our multi-kernel extension. We evaluated our proposed extensions in a supervised classification setting on the UCF-101 and Sports-1M datasets, with the findings showing that our enhancements improve accuracy. We also undertook qualitative analysis to reveal the characteristics of our system and the convolutional LSTM baseline.
Tasks
Published 2019-07-30
URL https://arxiv.org/abs/1908.08990v1
PDF https://arxiv.org/pdf/1908.08990v1.pdf
PWC https://paperswithcode.com/paper/deep-multi-kernel-convolutional-lstm-networks
Repo
Framework
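
The multi-kernel extension described above can be sketched at the level of a single gate computation: the single-kernel convolution of a standard ConvLSTM is replaced by parallel convolutions with different receptive fields, fused by a 1x1 convolution (the Network-in-LSTM flavor). A minimal sketch assuming PyTorch, with illustrative channel counts:

```python
import torch
import torch.nn as nn

# Hedged sketch: multi-kernel convolution for ConvLSTM gate pre-activations.

class MultiKernelGateConv(nn.Module):
    def __init__(self, in_ch, hidden_ch, kernel_sizes=(1, 3, 5)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch + hidden_ch, hidden_ch, k, padding=k // 2)
            for k in kernel_sizes)
        # Fuse the branches and produce the 4 gate pre-activations (i, f, o, g).
        self.fuse = nn.Conv2d(hidden_ch * len(kernel_sizes), 4 * hidden_ch, 1)

    def forward(self, x, h):
        z = torch.cat([x, h], dim=1)
        z = torch.cat([branch(z) for branch in self.branches], dim=1)
        return self.fuse(z)

gates = MultiKernelGateConv(in_ch=3, hidden_ch=8)
x = torch.randn(2, 3, 32, 32)
h = torch.zeros(2, 8, 32, 32)
i, f, o, g = torch.chunk(gates(x, h), 4, dim=1)   # plug into the usual LSTM update
```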

Inference for multiple object tracking: A Bayesian nonparametric approach

Title Inference for multiple object tracking: A Bayesian nonparametric approach
Authors Bahman Moraffah
Abstract In recent years, the multi-object tracking (MOT) problem has drawn increasing attention and has been studied in various research areas. However, some challenging problems, including time-dependent cardinality, unordered measurement sets, and object labeling, remain open. In this paper, we propose robust nonparametric methods to model the state prior for the MOT problem. These models are shown to be more flexible and robust compared to existing methods. In particular, the overall approach estimates time-dependent object cardinality, provides object labeling, and identifies object-associated measurements. Moreover, our proposed framework dynamically contends with the birth/death and survival of the objects through dependent nonparametric processes. We present inference algorithms that demonstrate the utility of the dependent nonparametric models for tracking. We employ Monte Carlo sampling methods to demonstrate that the proposed algorithms efficiently learn the trajectory of objects from noisy measurements. The computational results display the performance of the proposed algorithms, with comparisons not only between one another but also against the labeled multi-Bernoulli tracker.
Tasks Multi-Object Tracking, Multiple Object Tracking, Object Tracking
Published 2019-09-16
URL https://arxiv.org/abs/1909.06984v1
PDF https://arxiv.org/pdf/1909.06984v1.pdf
PWC https://paperswithcode.com/paper/inference-for-multiple-object-tracking-a
Repo
Framework

Learning with Learned Loss Function: Speech Enhancement with Quality-Net to Improve Perceptual Evaluation of Speech Quality

Title Learning with Learned Loss Function: Speech Enhancement with Quality-Net to Improve Perceptual Evaluation of Speech Quality
Authors Szu-Wei Fu, Chien-Feng Liao, Yu Tsao
Abstract Utilizing a human-perception-related objective function to train a speech enhancement model has become a popular topic recently. The main reason is that the conventional mean squared error (MSE) loss cannot represent auditory perception well. One of the typical human-perception-related metrics, which is the perceptual evaluation of speech quality (PESQ), has been proven to provide a high correlation to the quality scores rated by humans. Owing to its complex and non-differentiable properties, however, the PESQ function may not be used to optimize speech enhancement models directly. In this study, we propose optimizing the enhancement model with an approximated PESQ function, which is differentiable and learned from the training data. The experimental results show that the learned surrogate function can guide the enhancement model to further boost the PESQ score (increase of 0.18 points compared to the results trained with MSE loss) and maintain the speech intelligibility.
Tasks Speech Enhancement
Published 2019-05-06
URL https://arxiv.org/abs/1905.01898v3
PDF https://arxiv.org/pdf/1905.01898v3.pdf
PWC https://paperswithcode.com/paper/learning-with-learned-loss-function-speech
Repo
Framework
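
The "learned loss function" idea above amounts to freezing a quality predictor, a stand-in for Quality-Net pre-trained to approximate PESQ, and maximizing its output while training the enhancement model. A minimal sketch assuming PyTorch; the toy architectures and spectrogram shapes are illustrative only.

```python
import torch
import torch.nn as nn

# Hedged sketch: a frozen quality predictor acts as a differentiable surrogate
# for PESQ and is used as the loss for fine-tuning the enhancement model.

enhancer = nn.Sequential(nn.Linear(257, 257), nn.ReLU(), nn.Linear(257, 257))
quality_net = nn.Sequential(nn.Linear(257, 64), nn.ReLU(), nn.Linear(64, 1))

for p in quality_net.parameters():       # the surrogate is trained beforehand
    p.requires_grad_(False)

opt = torch.optim.Adam(enhancer.parameters(), lr=1e-4)
noisy = torch.rand(16, 100, 257)         # (batch, frames, frequency bins)

for step in range(50):
    enhanced = enhancer(noisy)
    pred_quality = quality_net(enhanced).mean()
    loss = -pred_quality                 # maximize the predicted PESQ score
    opt.zero_grad()
    loss.backward()
    opt.step()
```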

Non-local Attention Optimized Deep Image Compression

Title Non-local Attention Optimized Deep Image Compression
Authors Haojie Liu, Tong Chen, Peiyao Guo, Qiu Shen, Xun Cao, Yao Wang, Zhan Ma
Abstract This paper proposes a novel Non-Local Attention Optimized Deep Image Compression (NLAIC) framework, which is built on top of the popular variational auto-encoder (VAE) structure. Our NLAIC framework embeds non-local operations in the encoders and decoders for both the image and the latent feature probability information (known as the hyperprior) to capture both local and global correlations, and applies an attention mechanism to generate masks that weigh the features for the image and hyperprior, implicitly adapting bit allocation for different features based on their importance. Furthermore, both hyperpriors and spatial-channel neighbors of the latent features are used to improve entropy coding. The proposed model outperforms existing methods on the Kodak dataset, including learned (e.g., Balle2019, Balle2018) and conventional (e.g., BPG, JPEG2000, JPEG) image compression methods, for both PSNR and MS-SSIM distortion metrics.
Tasks Image Compression
Published 2019-04-22
URL http://arxiv.org/abs/1904.09757v1
PDF http://arxiv.org/pdf/1904.09757v1.pdf
PWC https://paperswithcode.com/paper/non-local-attention-optimized-deep-image
Repo
Framework
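
The attention-mask mechanism described above can be reduced to a small sketch: an auxiliary branch produces a per-element mask in (0, 1) that re-weights the features, and hence the bits implicitly allocated to them. The real model builds this branch from non-local blocks; the version below assumes PyTorch and uses a plain convolutional branch for brevity.

```python
import torch
import torch.nn as nn

# Hedged, simplified sketch of the attention-mask idea used to re-weight
# latent features before quantization and entropy coding.

class AttentionMask(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 1), nn.Sigmoid())

    def forward(self, features):
        mask = self.branch(features)         # values in (0, 1)
        return features * mask + features    # masked features with a residual path

latent = torch.randn(1, 192, 16, 16)         # latent features before quantization
weighted = AttentionMask(192)(latent)
```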

Reducing Popularity Bias in Recommendation Over Time

Title Reducing Popularity Bias in Recommendation Over Time
Authors Himan Abdollahpouri, Robin Burke
Abstract Many recommendation algorithms suffer from popularity bias: a small number of popular items being recommended too frequently, while other items get insufficient exposure. Research in this area so far has concentrated on a one-shot representation of this bias, and on algorithms to improve the diversity of individual recommendation lists. In this work, we take a time-sensitive view of popularity bias, in which the algorithm assesses its long-tail coverage at regular intervals, and compensates in the present moment for omissions in the past. In particular, we present a temporal version of the well-known xQuAD diversification algorithm adapted for long-tail recommendation. Experimental results on two public datasets show that our method is more effective in terms of the long-tail coverage and accuracy tradeoff compared to some other existing approaches.
Tasks
Published 2019-06-27
URL https://arxiv.org/abs/1906.11711v1
PDF https://arxiv.org/pdf/1906.11711v1.pdf
PWC https://paperswithcode.com/paper/reducing-popularity-bias-in-recommendation
Repo
Framework
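
A time-sensitive xQuAD-style re-ranker of the kind described above can be sketched in a few lines: the diversity reward for long-tail items is scaled by how far long-tail coverage has fallen short of a target over recent time windows, and shrinks as long-tail items are already selected. The scoring formula and parameters below are an illustrative simplification, not the paper's exact algorithm.

```python
# Hedged sketch of a time-sensitive, coverage-aware greedy re-ranker.

def rerank(candidates, relevance, is_long_tail, past_coverage,
           target_coverage=0.3, lam=0.2, k=10):
    """Greedily build a list of k items, boosting long-tail items more when
    recent long-tail coverage (past_coverage) lags the target."""
    deficit = max(target_coverage - past_coverage, 0.0)
    selected = []
    pool = list(candidates)
    while pool and len(selected) < k:
        # Marginal novelty: the boost decays as long-tail items accumulate.
        novelty = 0.5 ** sum(is_long_tail[s] for s in selected)
        best = max(pool, key=lambda i: (1 - lam) * relevance[i]
                   + lam * deficit * novelty * is_long_tail[i])
        selected.append(best)
        pool.remove(best)
    return selected

relevance = {"a": 0.9, "b": 0.8, "c": 0.5, "d": 0.4}
long_tail = {"a": False, "b": False, "c": True, "d": True}
print(rerank(list(relevance), relevance, long_tail, past_coverage=0.05, k=3))
```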

Connecting the Dots: Document-level Neural Relation Extraction with Edge-oriented Graphs

Title Connecting the Dots: Document-level Neural Relation Extraction with Edge-oriented Graphs
Authors Fenia Christopoulou, Makoto Miwa, Sophia Ananiadou
Abstract Document-level relation extraction is a complex human process that requires logical inference to extract relationships between named entities in text. Existing approaches use graph-based neural models, with words as nodes and edges as relations between them, to encode relations across sentences. These models are node-based, i.e., they form pair representations based solely on the two target node representations. However, entity relations can be better expressed through unique edge representations formed as paths between nodes. We thus propose an edge-oriented graph neural model for document-level relation extraction. The model utilises different types of nodes and edges to create a document-level graph. An inference mechanism on the graph edges enables learning intra- and inter-sentence relations using multi-instance learning internally. Experiments on two document-level biomedical datasets for chemical-disease and gene-disease associations show the usefulness of the proposed edge-oriented approach.
Tasks Relation Extraction
Published 2019-08-31
URL https://arxiv.org/abs/1909.00228v1
PDF https://arxiv.org/pdf/1909.00228v1.pdf
PWC https://paperswithcode.com/paper/connecting-the-dots-document-level-neural
Repo
Framework
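
The edge-oriented idea above, forming pair representations from paths between nodes rather than from the two node vectors alone, can be caricatured with a small composition loop. The sketch below is a rough illustration in NumPy with random vectors; the actual model learns the composition, uses typed nodes and edges, and trains end-to-end with multi-instance learning.

```python
import numpy as np

# Hedged sketch: edge (pair) representations refined by composing two-hop
# paths through intermediate nodes. Shapes and weights are illustrative.

rng = np.random.default_rng(0)
n_nodes, dim = 6, 32
E = rng.normal(size=(n_nodes, n_nodes, dim))     # initial edge representations
W = rng.normal(size=(2 * dim, dim)) / np.sqrt(2 * dim)

def compose(e_ik, e_kj):
    """Combine the two legs of a path i -> k -> j into one edge vector."""
    return np.tanh(np.concatenate([e_ik, e_kj]) @ W)

for _ in range(2):                                # a few inference iterations
    new_E = E.copy()
    for i in range(n_nodes):
        for j in range(n_nodes):
            paths = [compose(E[i, k], E[k, j])
                     for k in range(n_nodes) if k not in (i, j)]
            new_E[i, j] = E[i, j] + np.mean(paths, axis=0)
    E = new_E

# E[i, j] can now feed a classifier for the relation between entities i and j.
```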