Paper Group ANR 1469
Neural networks trained with WiFi traces to predict airport passenger behavior
Title | Neural networks trained with WiFi traces to predict airport passenger behavior |
Authors | Federico Orsini, Massimiliano Gastaldi, Luca Mantecchini, Riccardo Rossi |
Abstract | The use of neural networks to predict airport passenger activity choices inside the terminal is presented in this paper. Three network architectures are proposed: Feedforward Neural Networks (FNN), Long Short-Term Memory (LSTM) networks, and a combination of the two. Inputs to these models are both static (passenger and trip characteristics) and dynamic (real-time passenger tracking). A real-world case study exemplifies the application of these models, using anonymous WiFi traces collected at Bologna Airport to train the networks. The performance of the models was evaluated according to the misclassification rate of passenger activity choices. In the LSTM approach, two different multi-step forecasting strategies are tested. According to our findings, the direct LSTM approach provides better results than the FNN, especially when the prediction horizon is relatively short (20 minutes or less). |
Tasks | |
Published | 2019-10-30 |
URL | https://arxiv.org/abs/1910.14026v1 |
https://arxiv.org/pdf/1910.14026v1.pdf | |
PWC | https://paperswithcode.com/paper/neural-networks-trained-with-wifi-traces-to |
Repo | |
Framework | |
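A minimal, hypothetical sketch of the FNN+LSTM combination described in the abstract: static passenger/trip features are fused with an LSTM encoding of the recent WiFi-derived position trace to classify the next activity choice. All layer sizes, feature dimensions, and the number of activity classes are assumptions for illustration, not values from the paper.

```python
import torch
import torch.nn as nn

class ActivityPredictor(nn.Module):
    """Sketch of an FNN+LSTM hybrid: static trip features are fused with
    an LSTM encoding of the passenger's recent indoor-position trace."""
    def __init__(self, n_static=8, n_track=2, hidden=64, n_activities=5):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_track, hidden_size=hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden + n_static, 64), nn.ReLU(),
            nn.Linear(64, n_activities),
        )

    def forward(self, static_x, track_x):
        # track_x: (batch, time, n_track) WiFi-derived positions; static_x: (batch, n_static)
        _, (h, _) = self.lstm(track_x)
        fused = torch.cat([h[-1], static_x], dim=1)
        return self.head(fused)          # activity-choice logits

model = ActivityPredictor()
logits = model(torch.randn(4, 8), torch.randn(4, 30, 2))
print(logits.shape)  # torch.Size([4, 5])
```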
Self-Supervised Learning of Video-Induced Visual Invariances
Title | Self-Supervised Learning of Video-Induced Visual Invariances |
Authors | Michael Tschannen, Josip Djolonga, Marvin Ritter, Aravindh Mahendran, Neil Houlsby, Sylvain Gelly, Mario Lucic |
Abstract | We propose a general framework for self-supervised learning of transferable visual representations based on video-induced visual invariances (VIVI). We consider the implicit hierarchy present in the videos and make use of (i) frame-level invariances (e.g. stability to color and contrast perturbations), (ii) shot/clip-level invariances (e.g. robustness to changes in object orientation and lighting conditions), and (iii) video-level invariances (semantic relationships of scenes across shots/clips), to define a holistic self-supervised loss. Training models using different variants of the proposed framework on videos from the YouTube-8M data set, we obtain state-of-the-art self-supervised transfer learning results on the 19 diverse downstream tasks of the Visual Task Adaptation Benchmark (VTAB), using only 1000 labels per task. We then show how to co-train our models jointly with labeled images, outperforming an ImageNet-pretrained ResNet-50 by 0.8 points with 10x fewer labeled images, as well as the previous best supervised model by 3.7 points using the full ImageNet data set. |
Tasks | Transfer Learning |
Published | 2019-12-05 |
URL | https://arxiv.org/abs/1912.02783v1 |
https://arxiv.org/pdf/1912.02783v1.pdf | |
PWC | https://paperswithcode.com/paper/self-supervised-learning-of-video-induced |
Repo | |
Framework | |
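A hedged sketch of how a hierarchical, video-induced invariance loss could be assembled from frame-, shot-, and video-level terms. The concrete terms below (MSE agreement between two augmented frame views, pulling frames toward their shot mean, and a contrastive video-identification term) are illustrative stand-ins; the paper's actual losses differ in detail.

```python
import torch
import torch.nn.functional as F

def vivi_style_loss(frames_a, frames_b, w_frame=1.0, w_shot=1.0, w_video=1.0):
    """Illustrative three-level invariance loss on embeddings shaped
    (n_videos, n_shots, n_frames, dim); frames_a/frames_b are two augmented views."""
    V, S, _, D = frames_a.shape
    # (i) frame level: two augmentations of the same frame should agree
    l_frame = F.mse_loss(frames_a, frames_b)
    # (ii) shot level: frames of one shot should cluster around the shot mean
    shot_mean = frames_a.mean(dim=2, keepdim=True)
    l_shot = F.mse_loss(frames_a, shot_mean.expand_as(frames_a))
    # (iii) video level: shot embeddings should identify their source video
    shots = F.normalize(frames_a.mean(dim=2).reshape(V * S, D), dim=1)
    videos = F.normalize(frames_a.mean(dim=(1, 2)), dim=1)       # (V, D)
    logits = shots @ videos.t()                                   # (V*S, V)
    target = torch.arange(V).repeat_interleave(S)
    l_video = F.cross_entropy(logits, target)
    return w_frame * l_frame + w_shot * l_shot + w_video * l_video

loss = vivi_style_loss(torch.randn(2, 3, 4, 16), torch.randn(2, 3, 4, 16))
```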
Value Propagation for Decentralized Networked Deep Multi-agent Reinforcement Learning
Title | Value Propagation for Decentralized Networked Deep Multi-agent Reinforcement Learning |
Authors | Chao Qu, Shie Mannor, Huan Xu, Yuan Qi, Le Song, Junwu Xiong |
Abstract | We consider the networked multi-agent reinforcement learning (MARL) problem in a fully decentralized setting, where agents learn to coordinate to achieve joint success. This problem is widely encountered in many areas including traffic control, distributed control, and smart grids. We assume that the reward function for each agent can be different and observed only locally by the agent itself. Furthermore, each agent is located at a node of a communication network and can exchange information only with its neighbors. Using softmax temporal consistency and a decentralized optimization method, we obtain a principled and data-efficient iterative algorithm. In the first step of each iteration, an agent computes its local policy and value gradients and then updates only its policy parameters. In the second step, the agent propagates messages based on its value function to its neighbors and then updates its own value function. Hence we name the algorithm value propagation. We prove a non-asymptotic convergence rate of 1/T with nonlinear function approximation. To the best of our knowledge, it is the first MARL algorithm with a convergence guarantee in the control, off-policy, and non-linear function approximation setting. We empirically demonstrate the effectiveness of our approach in experiments. |
Tasks | Multi-agent Reinforcement Learning |
Published | 2019-01-27 |
URL | https://arxiv.org/abs/1901.09326v4 |
https://arxiv.org/pdf/1901.09326v4.pdf | |
PWC | https://paperswithcode.com/paper/value-propagation-for-decentralized-networked |
Repo | |
Framework | |
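A rough, non-authoritative sketch of one value-propagation iteration as described: each agent first takes a local gradient step on its policy parameters only, then mixes its value parameters with those of its graph neighbors before a local value update. The gradient callables and the simple averaging consensus are placeholders, not the paper's exact update rules.

```python
import numpy as np

def value_propagation_step(theta, w, neighbors, local_policy_grad, local_value_grad,
                           mix_weight=0.5, lr=1e-2):
    """One illustrative iteration for all agents in a decentralized setup.
    theta/w: dicts of per-agent policy and value parameters (numpy arrays);
    neighbors: dict mapping agent id -> non-empty list of neighboring agent ids."""
    # Step 1: local gradient step, policy parameters only
    for i in theta:
        theta[i] = theta[i] + lr * local_policy_grad(i, theta[i], w[i])
    # Step 2: propagate value parameters to neighbors (consensus mixing),
    # then take a local value-function update
    mixed = {i: mix_weight * w[i] + (1 - mix_weight)
                * np.mean([w[j] for j in neighbors[i]], axis=0) for i in w}
    for i in w:
        w[i] = mixed[i] + lr * local_value_grad(i, theta[i], mixed[i])
    return theta, w
```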
Visual Analytics of Anomalous User Behaviors: A Survey
Title | Visual Analytics of Anomalous User Behaviors: A Survey |
Authors | Yang Shi, Yuyin Liu, Hanghang Tong, Jingrui He, Gang Yan, Nan Cao |
Abstract | The increasing accessibility of data provides substantial opportunities for understanding user behaviors. Unearthing anomalies in user behaviors is of particular importance as it helps signal harmful incidents such as network intrusions, terrorist activities, and financial fraud. Many visual analytics methods have been proposed to help understand user behavior-related data in various application domains. In this work, we survey the state of the art in visual analytics of anomalous user behaviors and classify the approaches into four categories: social interaction, travel, network communication, and transaction. We further examine the research works in each category in terms of data types, anomaly detection techniques, visualization techniques, and interaction methods. Finally, we discuss the findings and potential research directions. |
Tasks | Anomaly Detection |
Published | 2019-05-14 |
URL | https://arxiv.org/abs/1905.06720v2 |
https://arxiv.org/pdf/1905.06720v2.pdf | |
PWC | https://paperswithcode.com/paper/visual-analytics-of-anomalous-user-behaviors |
Repo | |
Framework | |
Hybrid system identification using switching density networks
Title | Hybrid system identification using switching density networks |
Authors | Michael Burke, Yordan Hristov, Subramanian Ramamoorthy |
Abstract | Behaviour cloning is a commonly used strategy for imitation learning and can be extremely effective in constrained domains. However, in cases where the dynamics of an environment may be state dependent and varying, behaviour cloning places a burden on model capacity and the number of demonstrations required. This paper introduces switching density networks, which rely on a categorical reparametrisation for hybrid system identification. This results in a network comprising a classification layer that is followed by a regression layer. We use switching density networks to predict the parameters of hybrid control laws, which are toggled by a switching layer to produce different controller outputs when conditioned on an input state. This work shows how switching density networks can be used for hybrid system identification in a variety of tasks, successfully identifying the key joint angle goals that make up manipulation tasks, while simultaneously learning image-based goal classifiers and regression networks that predict joint angles from images. We also show that they can cluster the phase space of an inverted pendulum, identifying the balance, spin and pump controllers required to solve this task. Switching density networks can be difficult to train, but we introduce a cross entropy regularisation loss that stabilises training. |
Tasks | Imitation Learning |
Published | 2019-07-09 |
URL | https://arxiv.org/abs/1907.04360v4 |
https://arxiv.org/pdf/1907.04360v4.pdf | |
PWC | https://paperswithcode.com/paper/hybrid-system-identification-using-switching |
Repo | |
Framework | |
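Below is a small illustrative sketch of a switching density network: a classification (switching) head outputs a categorical distribution over hybrid modes, and a regression head outputs per-mode control-law parameters. The soft mixture output and the entropy-style regulariser are assumptions standing in for the paper's categorical reparametrisation and cross-entropy regularisation loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwitchingDensityNet(nn.Module):
    """Sketch: a switching (classification) head picks among K hybrid modes,
    a regression head predicts per-mode control-law parameters."""
    def __init__(self, n_in=4, n_modes=3, n_params=2, hidden=64):
        super().__init__()
        self.n_modes, self.n_params = n_modes, n_params
        self.body = nn.Sequential(nn.Linear(n_in, hidden), nn.ReLU())
        self.switch = nn.Linear(hidden, n_modes)               # categorical over modes
        self.params = nn.Linear(hidden, n_modes * n_params)    # parameters per mode

    def forward(self, x):
        h = self.body(x)
        probs = F.softmax(self.switch(h), dim=-1)                       # (B, K)
        theta = self.params(h).view(-1, self.n_modes, self.n_params)    # (B, K, P)
        # soft selection: probability-weighted mixture of mode parameters
        out = (probs.unsqueeze(-1) * theta).sum(dim=1)                  # (B, P)
        return out, probs

def entropy_regulariser(probs, eps=1e-8):
    # assumed stand-in for the paper's regularisation: penalise
    # high-entropy (indecisive) switch distributions
    return -(probs * (probs + eps).log()).sum(dim=-1).mean()

out, probs = SwitchingDensityNet()(torch.randn(5, 4))
```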
Integration of Imitation Learning using GAIL and Reinforcement Learning using Task-achievement Rewards via Probabilistic Graphical Model
Title | Integration of Imitation Learning using GAIL and Reinforcement Learning using Task-achievement Rewards via Probabilistic Graphical Model |
Authors | Akira Kinose, Tadahiro Taniguchi |
Abstract | Integration of reinforcement learning and imitation learning is an important problem that has been studied for a long time in the field of intelligent robotics. Reinforcement learning optimizes policies to maximize the cumulative reward, whereas imitation learning attempts to extract general knowledge about the trajectories demonstrated by experts, i.e., demonstrators. Because each has its own drawbacks, methods combining them to compensate for these drawbacks have been explored. However, many of these methods are heuristic and lack a solid theoretical basis. In this paper, we present a new theory for integrating reinforcement and imitation learning by extending the probabilistic generative model framework for reinforcement learning, "plan by inference". We develop a new probabilistic graphical model for reinforcement learning with multiple types of rewards and a probabilistic graphical model for Markov decision processes with multiple optimality emissions (pMDP-MO). Furthermore, we demonstrate that the integrated learning method of reinforcement learning and imitation learning can be formulated as probabilistic inference of policies on the pMDP-MO by considering the output of the discriminator in generative adversarial imitation learning as an additional optimality emission observation. We adapt generative adversarial imitation learning and the task-achievement reward to our proposed framework, achieving significantly better performance than agents trained with reinforcement learning or imitation learning alone. Experiments demonstrate that our framework successfully integrates imitation and reinforcement learning even when there are only a few demonstrators. |
Tasks | Imitation Learning |
Published | 2019-07-03 |
URL | https://arxiv.org/abs/1907.02140v2 |
https://arxiv.org/pdf/1907.02140v2.pdf | |
PWC | https://paperswithcode.com/paper/integration-of-imitation-learning-using-gail |
Repo | |
Framework | |
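As a hedged illustration of treating the GAIL discriminator as an additional optimality signal, the helper below fuses a task-achievement reward with a discriminator-derived imitation reward. The additive form and the specific log-ratio shaping are assumptions; the paper formulates the integration as probabilistic inference on the pMDP-MO rather than simple reward addition.

```python
import torch

def combined_reward(task_reward, disc_logit, lam=1.0, eps=1e-8):
    """Illustrative fusion of a task-achievement reward with a GAIL-style
    imitation reward derived from the discriminator output (an assumed form)."""
    d = torch.sigmoid(disc_logit)             # probability the (s, a) pair looks expert-like
    imitation_reward = torch.log(d + eps) - torch.log(1.0 - d + eps)
    return task_reward + lam * imitation_reward

r = combined_reward(torch.tensor(1.0), torch.tensor(0.3))
```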
Automatic Calibration of Multiple 3D LiDARs in Urban Environments
Title | Automatic Calibration of Multiple 3D LiDARs in Urban Environments |
Authors | Jianhao Jiao, Yang Yu, Qinghai Liao, Haoyang Ye, Ming Liu |
Abstract | Multiple LiDARs have progressively emerged on autonomous vehicles for rendering a wide field of view and dense measurements. However, the lack of precise calibration negatively affects their potential applications in localization and perception systems. In this paper, we propose a novel system that enables automatic multi-LiDAR calibration without any calibration target, prior environmental information, or initial values of the extrinsic parameters. Our approach starts with a hand-eye calibration for automatic initialization by aligning the estimated motions of each sensor. The resulting parameters are then refined with an appearance-based method by minimizing a cost function constructed from point-plane correspondences. Experimental results on simulated and real-world data sets demonstrate the reliability and accuracy of our calibration approach. The proposed approach can calibrate a multi-LiDAR system with rotation and translation errors of less than 0.04 rad and 0.1 m, respectively, for a mobile platform. |
Tasks | Autonomous Vehicles, Calibration |
Published | 2019-05-13 |
URL | https://arxiv.org/abs/1905.04912v1 |
https://arxiv.org/pdf/1905.04912v1.pdf | |
PWC | https://paperswithcode.com/paper/automatic-calibration-of-multiple-3d-lidars |
Repo | |
Framework | |
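A minimal sketch of the refinement stage: given hypothetical point-plane correspondences between two LiDAR frames, the extrinsics (rotation vector plus translation) are optimized by least squares on point-to-plane distances, starting from the hand-eye initialization. The array names and the optimizer choice are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def point_plane_residuals(x, pts_src, pts_ref, normals):
    """x = [rotvec(3), t(3)]: residual is the point-to-plane distance of each
    transformed source point to its matched plane in the reference LiDAR frame."""
    R = Rotation.from_rotvec(x[:3]).as_matrix()
    t = x[3:]
    transformed = pts_src @ R.T + t             # (N, 3) points in the reference frame
    return np.einsum('ij,ij->i', transformed - pts_ref, normals)

def refine_extrinsics(x0, pts_src, pts_ref, normals):
    # x0 would come from the hand-eye (motion-alignment) initialization
    return least_squares(point_plane_residuals, x0,
                         args=(pts_src, pts_ref, normals)).x
```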
In-bed Pressure-based Pose Estimation using Image Space Representation Learning
Title | In-bed Pressure-based Pose Estimation using Image Space Representation Learning |
Authors | Vandad Davoodnia, Saeed Ghorbani, Ali Etemad |
Abstract | In-bed pose estimation has shown value in fields such as hospital patient monitoring, sleep studies, and smart homes. In this paper, we present a novel in-bed pressure-based pose estimation approach capable of accurately detecting body parts from highly ambiguous pressure data. We exploit the idea of using a learnable pre-processing step, which transforms the vague pressure maps into a representation close to the expected input space of general-purpose pose identification modules, which fail if used solely on the pressure data. To this end, a fully convolutional network with multiple scales is used as the learnable pre-processing step to provide the pose-specific characteristics of the pressure maps to the pre-trained pose identification module. A combination of loss functions is used to model the constraints, ensuring that unclear body parts are reconstructed correctly while preventing the pre-processing block from generating arbitrary images. The evaluation results show high visual fidelity in the generated pre-processed images as well as high detection rates in pose estimation. Furthermore, we show that the trained pre-processing block can also be effective for pose identification models on which it was not trained. |
Tasks | Pose Estimation, Representation Learning |
Published | 2019-08-21 |
URL | https://arxiv.org/abs/1908.08919v1 |
https://arxiv.org/pdf/1908.08919v1.pdf | |
PWC | https://paperswithcode.com/paper/in-bed-pressure-based-pose-estimation-using |
Repo | |
Framework | |
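A compact sketch of the learnable pre-processing idea, under simplifying assumptions: a small fully convolutional block (the paper uses a multi-scale network) maps the one-channel pressure map to a three-channel image-like tensor for a frozen, pre-trained pose estimator, and the training signal combines a pose loss with a reconstruction-style constraint.

```python
import torch
import torch.nn as nn

class PressurePreprocessor(nn.Module):
    """Sketch of the learnable pre-processing step: pressure map -> image-like tensor."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, pressure):
        return self.net(pressure)

def combined_loss(pred_heatmaps, gt_heatmaps, generated_img, reference_img, w_rec=0.1):
    # pose loss keeps body parts detectable; the reconstruction term keeps the
    # generated image close to a plausible reference, preventing arbitrary outputs
    pose_loss = nn.functional.mse_loss(pred_heatmaps, gt_heatmaps)
    rec_loss = nn.functional.mse_loss(generated_img, reference_img)
    return pose_loss + w_rec * rec_loss
```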
ScisummNet: A Large Annotated Corpus and Content-Impact Models for Scientific Paper Summarization with Citation Networks
Title | ScisummNet: A Large Annotated Corpus and Content-Impact Models for Scientific Paper Summarization with Citation Networks |
Authors | Michihiro Yasunaga, Jungo Kasai, Rui Zhang, Alexander R. Fabbri, Irene Li, Dan Friedman, Dragomir R. Radev |
Abstract | Scientific article summarization is challenging: large, annotated corpora are not available, and the summary should ideally include the article’s impacts on the research community. This paper provides novel solutions to these two challenges. We 1) develop and release the first large-scale manually-annotated corpus for scientific papers (on computational linguistics) by enabling faster annotation, and 2) propose summarization methods that integrate the authors’ original highlights (abstract) and the article’s actual impacts on the community (citations), to create comprehensive, hybrid summaries. We conduct experiments to demonstrate the efficacy of our corpus in training data-driven models for scientific paper summarization and the advantage of our hybrid summaries over abstracts and traditional citation-based summaries. Our large annotated corpus and hybrid methods provide a new framework for scientific paper summarization research. |
Tasks | |
Published | 2019-09-04 |
URL | https://arxiv.org/abs/1909.01716v3 |
https://arxiv.org/pdf/1909.01716v3.pdf | |
PWC | https://paperswithcode.com/paper/scisummnet-a-large-annotated-dataset-and |
Repo | |
Framework | |
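As a hedged illustration of a hybrid, content-plus-impact summary, the sketch below keeps the abstract and appends citation sentences that are not redundant with it, using TF-IDF cosine similarity as a crude stand-in for the paper's content-impact models.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def hybrid_summary(abstract_sents, citation_sents, k=3, max_sim=0.7):
    """Illustrative hybrid summarization: keep the abstract, then add up to k
    citation sentences that are not redundant with it."""
    vec = TfidfVectorizer().fit(abstract_sents + citation_sents)
    A = vec.transform(abstract_sents)
    picked = []
    for sent in citation_sents:
        if len(picked) == k:
            break
        s = vec.transform([sent])
        if cosine_similarity(s, A).max() < max_sim:   # adds new (impact) content
            picked.append(sent)
    return abstract_sents + picked
```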
Sample Efficient Learning of Path Following and Obstacle Avoidance Behavior for Quadrotors
Title | Sample Efficient Learning of Path Following and Obstacle Avoidance Behavior for Quadrotors |
Authors | Stefan Stevsic, Tobias Naegeli, Javier Alonso-Mora, Otmar Hilliges |
Abstract | In this paper we propose an algorithm for training neural network control policies for quadrotors. The learned control policy computes control commands directly from sensor inputs and is hence computationally efficient. An imitation learning algorithm produces a policy that reproduces the behavior of a path-following control algorithm with collision avoidance. Due to the generalization ability of neural networks, the resulting policy performs local collision avoidance of unseen obstacles while following a global reference path. The algorithm uses a time-free model predictive path-following controller as a supervisor. The controller generates demonstrations by following a few example paths. This enables an easy-to-implement learning algorithm that is robust to errors of the model used in the model predictive controller. The policy is trained on the real quadrotor, which requires collision-free exploration around the example path. An adapted version of the supervisor is used to enable exploration. Thus, the policy can be trained from a relatively small number of examples on the real quadrotor, making the training sample efficient. |
Tasks | Imitation Learning |
Published | 2019-06-28 |
URL | https://arxiv.org/abs/1906.12082v1 |
https://arxiv.org/pdf/1906.12082v1.pdf | |
PWC | https://paperswithcode.com/paper/sample-efficient-learning-of-path-following |
Repo | |
Framework | |
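A DAgger-style sketch of the supervised training loop suggested by the abstract: the MPC path-following controller labels the states visited by the learner, and the policy is refit on the aggregated data. All callables are placeholders, and the paper's adapted supervisor for safe exploration is not modeled here.

```python
import numpy as np

def train_policy(policy_fit, policy_act, mpc_supervisor, rollout, n_iters=5):
    """DAgger-style sketch: the learned policy flies, the (time-free) MPC
    path-following controller labels every visited state, and the policy is
    refit on the aggregated dataset. All callables here are placeholders."""
    states, actions = [], []
    for it in range(n_iters):
        # first iteration: collect data by flying the supervisor itself
        visited = rollout(policy_act if it > 0 else mpc_supervisor)
        states.extend(visited)
        actions.extend(mpc_supervisor(s) for s in visited)   # supervisor labels
        policy_fit(np.asarray(states), np.asarray(actions))
    return policy_act
```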
Heterogeneity in demand and optimal price conditioning for local rail transport
Title | Heterogeneity in demand and optimal price conditioning for local rail transport |
Authors | Evgeniy M. Ozhegov, Alina Ozhegova |
Abstract | This paper describes the results of a research project on optimal pricing for LLC “Perm Local Rail Company”. In this study we propose a regression-tree-based approach for estimating the demand function for local rail tickets, accounting for the high degree of demand heterogeneity across trip directions and travel purposes. Employing detailed data on ticket sales over 5 years, we estimate the parameters of the demand function and reveal significant variation in the price elasticity of demand. While demand is, on average, price elastic, nearly a quarter of trips are characterized by weakly elastic demand. Lower elasticity of demand is correlated with a lower degree of competition with other transport modes and inflexible travel frequency. |
Tasks | |
Published | 2019-05-30 |
URL | https://arxiv.org/abs/1905.12859v1 |
https://arxiv.org/pdf/1905.12859v1.pdf | |
PWC | https://paperswithcode.com/paper/heterogeneity-in-demand-and-optimal-price |
Repo | |
Framework | |
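A simplified, assumption-laden sketch of the two ingredients described: a regression tree segments trips by their characteristics, and a log-log slope within each leaf gives that segment's price elasticity of demand. The actual model in the paper is more elaborate; the variable names and the leaf-wise OLS are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def segment_elasticities(X_trip, log_price, log_qty, max_leaves=8):
    """Illustrative two-step estimate: a regression tree on trip characteristics
    segments the demand data, then the slope of log quantity on log price inside
    each leaf gives that segment's price elasticity."""
    tree = DecisionTreeRegressor(max_leaf_nodes=max_leaves).fit(X_trip, log_qty)
    leaves = tree.apply(X_trip)                 # leaf index per observation
    elasticities = {}
    for leaf in np.unique(leaves):
        mask = leaves == leaf
        slope, _ = np.polyfit(log_price[mask], log_qty[mask], 1)
        elasticities[int(leaf)] = slope         # d log Q / d log P
    return elasticities
```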
A New GAN-based End-to-End TTS Training Algorithm
Title | A New GAN-based End-to-End TTS Training Algorithm |
Authors | Haohan Guo, Frank K. Soong, Lei He, Lei Xie |
Abstract | End-to-end, autoregressive model-based TTS has shown significant performance improvements over the conventional approach. However, autoregressive module training is affected by exposure bias, i.e., the mismatch between the distributions of real and predicted data: real data is available during training, but at test time only predicted data is available to feed the autoregressive module. By introducing both real and generated data sequences in training, we can alleviate the effects of exposure bias. We propose to use a Generative Adversarial Network (GAN) along with the key idea of Professor Forcing in training. A discriminator in the GAN is jointly trained to equalize the difference between real and predicted data. In an AB subjective listening test, the results show that the new approach is preferred over standard transfer learning, with a CMOS improvement of 0.1. Sentence-level intelligibility tests show significant improvement on a pathological test set. The GAN-trained new model is also more stable than the baseline at producing good alignments for the Tacotron output. |
Tasks | Transfer Learning |
Published | 2019-04-09 |
URL | http://arxiv.org/abs/1904.04775v1 |
http://arxiv.org/pdf/1904.04775v1.pdf | |
PWC | https://paperswithcode.com/paper/a-new-gan-based-end-to-end-tts-training |
Repo | |
Framework | |
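A hedged sketch of the Professor-Forcing-style objective: a discriminator separates teacher-forced from free-running decoder behavior, while the TTS model minimizes reconstruction error and tries to make the two indistinguishable. The exact discriminator inputs and loss weights are assumptions.

```python
import torch
import torch.nn.functional as F

def professor_forcing_losses(d_real_logits, d_free_logits, mel_pred, mel_target):
    """Sketch of the training signal: the discriminator separates teacher-forced
    ('real-data-fed') sequences from free-running ones, while the TTS model is
    pushed to make the two indistinguishable, on top of the usual reconstruction
    loss."""
    d_loss = (F.binary_cross_entropy_with_logits(d_real_logits, torch.ones_like(d_real_logits))
              + F.binary_cross_entropy_with_logits(d_free_logits, torch.zeros_like(d_free_logits)))
    g_loss = (F.l1_loss(mel_pred, mel_target)
              + F.binary_cross_entropy_with_logits(d_free_logits, torch.ones_like(d_free_logits)))
    return d_loss, g_loss
```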
Adaloss: Adaptive Loss Function for Landmark Localization
Title | Adaloss: Adaptive Loss Function for Landmark Localization |
Authors | Brian Teixeira, Birgi Tamersoy, Vivek Singh, Ankur Kapoor |
Abstract | Landmark localization is a challenging problem in computer vision with a multitude of applications. Recent deep learning based methods have shown improved results by regressing likelihood maps instead of regressing the coordinates directly. However, setting the precision of these regression targets during training is a cumbersome process, since it creates a trade-off between trainability and localization accuracy. Using precise targets introduces a significant sampling bias and hence makes training more difficult, whereas using imprecise targets results in inaccurate landmark detectors. In this paper, we introduce “Adaloss”, an objective function that adapts itself during training by updating the target precision based on training statistics. This approach does not require setting problem-specific parameters and shows improved stability in training and better localization accuracy during inference. We demonstrate the effectiveness of our proposed method in three different applications of landmark localization: 1) the challenging task of precisely detecting catheter tips in medical X-ray images, 2) localizing surgical instruments in endoscopic images, and 3) localizing facial features on in-the-wild images, where we show state-of-the-art results on the 300-W benchmark dataset. |
Tasks | Facial Landmark Detection |
Published | 2019-08-02 |
URL | https://arxiv.org/abs/1908.01070v1 |
https://arxiv.org/pdf/1908.01070v1.pdf | |
PWC | https://paperswithcode.com/paper/adaloss-adaptive-loss-function-for-landmark |
Repo | |
Framework | |
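An illustrative, non-authoritative sketch of an Adaloss-like schedule: the Gaussian sigma used to render target heatmaps is tightened as training stabilises. The trigger used below (low variance of the recent loss window) is an assumed stand-in for the paper's training statistics.

```python
import numpy as np

class AdaptiveTargetPrecision:
    """Sketch of an Adaloss-style idea: the Gaussian sigma used to render
    landmark target heatmaps is tightened as training stabilises. The concrete
    update rule (shrink when the recent loss variance is low) is an assumption."""
    def __init__(self, sigma=8.0, min_sigma=1.0, shrink=0.9, window=50, var_thresh=1e-3):
        self.sigma, self.min_sigma, self.shrink = sigma, min_sigma, shrink
        self.window, self.var_thresh, self.losses = window, var_thresh, []

    def update(self, loss_value):
        self.losses.append(float(loss_value))
        recent = self.losses[-self.window:]
        if len(recent) == self.window and np.var(recent) < self.var_thresh:
            self.sigma = max(self.min_sigma, self.sigma * self.shrink)
            self.losses.clear()          # restart the window at the new precision
        return self.sigma
```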
Improving Image-Based Localization with Deep Learning: The Impact of the Loss Function
Title | Improving Image-Based Localization with Deep Learning: The Impact of the Loss Function |
Authors | Isaac Ronald Ward, M. A. Asim K. Jalwana, Mohammed Bennamoun |
Abstract | This work investigates the impact of the loss function on the performance of neural networks in the context of a monocular, RGB-only, image localization task. A common technique used when regressing a camera’s pose from an image is to formulate the loss as a linear combination of positional and rotational mean squared error (using tuned hyperparameters as coefficients). In this work we observe that changes to rotation and position mutually affect the captured image, and that in order to improve performance, a pose regression network’s loss function should include a term which combines the error of both of these coupled quantities. Based on task-specific observations and experimental tuning, we present said loss term, and create a new model by appending this loss term to the loss function of the pre-existing pose regression network “PoseNet”. We achieve improvements in the localization accuracy of the network for indoor scenes, with decreases of up to 26.7% and 24.0% in the median positional and rotational error respectively, when compared to the default PoseNet. |
Tasks | Image-Based Localization |
Published | 2019-04-28 |
URL | https://arxiv.org/abs/1905.03692v2 |
https://arxiv.org/pdf/1905.03692v2.pdf | |
PWC | https://paperswithcode.com/paper/190503692 |
Repo | |
Framework | |
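A small sketch of a PoseNet-style loss extended with a term that couples positional and rotational error, as the abstract argues for. The multiplicative coupling and the weights are assumptions, not the paper's exact formulation.

```python
import torch

def coupled_pose_loss(p_pred, p_gt, q_pred, q_gt, beta=250.0, gamma=1.0):
    """Baseline PoseNet loss (position + weighted rotation) plus an extra term
    that couples the two error sources; the multiplicative coupling here is an
    illustrative assumption."""
    q_pred = torch.nn.functional.normalize(q_pred, dim=-1)   # unit quaternions
    pos_err = torch.norm(p_pred - p_gt, dim=-1)
    rot_err = torch.norm(q_pred - q_gt, dim=-1)
    coupled = pos_err * rot_err                               # joint penalty
    return (pos_err + beta * rot_err + gamma * coupled).mean()

loss = coupled_pose_loss(torch.randn(4, 3), torch.randn(4, 3),
                         torch.randn(4, 4), torch.randn(4, 4))
```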
Inner-Imaging Networks: Put Lenses into Convolutional Structure
Title | Inner-Imaging Networks: Put Lenses into Convolutional Structure |
Authors | Yang Hu, Guihua Wen, Mingnan Luo, Dan Dai, Wenming Cao, Zhiwen Yu, Wendy Hall |
Abstract | Despite their tremendous success in computer vision, deep convolutional networks suffer from serious computation costs and redundancies. Although previous works address this issue by enhancing the diversity of filters, they have not considered the complementarity and completeness of the internal structure of the convolutional network. To deal with these problems, a novel Inner-Imaging architecture is proposed in this paper, which allows relationships between channels to meet the above requirements. Specifically, we organize the channel signal points into groups using convolutional kernels to model both the intra-group and inter-group relationships simultaneously. The convolutional filter is a powerful tool for modeling spatial relations and organizing grouped signals, so the proposed method maps the channel signals onto a pseudo-image, like putting a lens into the convolutional internal structure. Consequently, not only is the diversity of channels increased, but the complementarity and completeness can also be explicitly enhanced. The proposed architecture is lightweight and easy to implement. It provides an efficient self-organization strategy for convolutional networks, improving their efficiency and performance. Extensive experiments are conducted on multiple benchmark image recognition data sets including CIFAR, SVHN, and ImageNet. Experimental results verify the effectiveness of the Inner-Imaging mechanism with the most popular convolutional networks as backbones. |
Tasks | |
Published | 2019-04-22 |
URL | https://arxiv.org/abs/1904.12639v2 |
https://arxiv.org/pdf/1904.12639v2.pdf | |
PWC | https://paperswithcode.com/paper/190412639 |
Repo | |
Framework | |
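A minimal sketch of the inner-imaging idea under stated assumptions: channel descriptors are arranged into a small pseudo-image, a tiny convolution models channel relations, and the result gates the original feature map (an SE-style reweighting is assumed for the final step).

```python
import torch
import torch.nn as nn

class InnerImagingBlock(nn.Module):
    """Sketch of the inner-imaging idea: global-average channel descriptors are
    arranged into a pseudo-image, a small conv models intra/inter-group channel
    relations, and the result gates the original feature map."""
    def __init__(self, channels=64, side=8):
        super().__init__()
        assert side * side == channels, "channels must form a square pseudo-image here"
        self.side = side
        self.channel_conv = nn.Conv2d(1, 1, kernel_size=3, padding=1)

    def forward(self, x):                       # x: (B, C, H, W)
        b, c, _, _ = x.shape
        desc = x.mean(dim=(2, 3)).view(b, 1, self.side, self.side)  # channel pseudo-image
        gate = torch.sigmoid(self.channel_conv(desc)).view(b, c, 1, 1)
        return x * gate                          # channel-wise reweighting

y = InnerImagingBlock()(torch.randn(2, 64, 32, 32))
```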