April 2, 2020

3133 words 15 mins read

Paper Group ANR 253

A Deeper Look into Hybrid Images. A Survey on Edge Intelligence. Graph Hawkes Network for Reasoning on Temporal Knowledge Graphs. A Semi-Dynamic Bus Routing Infrastructure based on MBTA Bus Data. Learning To Solve Differential Equations Across Initial Conditions. Analyzing the Noise Robustness of Deep Neural Networks. Computational Methods in Profe …

A Deeper Look into Hybrid Images

Title A Deeper Look into Hybrid Images
Authors Jimut Bahan Pal
Abstract $Hybrid$ $images$ were first introduced by Oliva et al., who produced static images with two interpretations, such that the image changes as a function of viewing distance. Hybrid images are built by studying human processing of multiscale images and are motivated by masking studies in visual perception. The original work showed that two images can be blended together with a high-pass filter and a low-pass filter in such a way that, when the blended image is viewed from a distance, the high-pass-filtered component fades away and the low-pass-filtered component becomes prominent. Our main aim here is to study and review the original paper by changing and tweaking certain parameters to see how they affect the quality of the blended image produced. We have exhaustively used different sets of images and filters to see how they function and whether this approach can be used in a real-time system.
Tasks
Published 2020-01-30
URL https://arxiv.org/abs/2001.11302v2
PDF https://arxiv.org/pdf/2001.11302v2.pdf
PWC https://paperswithcode.com/paper/a-deeper-look-into-hybrid-images
Repo
Framework
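
The blend described above is easy to reproduce. Below is a minimal sketch, assuming Gaussian filtering and grayscale inputs; the function name `hybrid_image` and the cutoff values `sigma_low` and `sigma_high` are illustrative choices for this sketch, not parameters taken from the paper.

```python
# Minimal hybrid-image sketch (illustrative, not the paper's exact pipeline):
# low-pass one image, high-pass another, and sum the two components.
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid_image(img_far, img_near, sigma_low=8.0, sigma_high=4.0):
    """img_far dominates at a distance (low frequencies); img_near dominates
    up close (high frequencies). Inputs are float arrays in [0, 1], same shape."""
    low = gaussian_filter(img_far, sigma=sigma_low)                  # low-pass component
    high = img_near - gaussian_filter(img_near, sigma=sigma_high)    # high-pass component
    return np.clip(low + high, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a, b = rng.random((256, 256)), rng.random((256, 256))  # stand-ins for aligned photos
    out = hybrid_image(a, b)
    print(out.shape, out.min(), out.max())
```

Varying `sigma_low` and `sigma_high` is the kind of parameter tweaking the paper studies: the two cutoffs control which image dominates at each viewing distance.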

A Survey on Edge Intelligence

Title A Survey on Edge Intelligence
Authors Dianlei Xu, Tong Li, Yong Li, Xiang Su, Sasu Tarkoma, Pan Hui
Abstract Edge intelligence refers to a set of connected systems and devices that perform data collection, caching, processing, and analysis, based on artificial intelligence, in locations close to where the data is captured. The aim of edge intelligence is to enhance the quality and speed of data processing and to protect the privacy and security of the data. Although this field of research emerged only recently, around 2011, it has shown explosive growth over the past five years. In this paper, we present a thorough and comprehensive survey of the literature surrounding edge intelligence. We first identify four fundamental components of edge intelligence, namely edge caching, edge training, edge inference, and edge offloading, based on theoretical and practical results pertaining to proposed and deployed systems. We then aim for a systematic classification of existing solutions by examining research results and observations for each of the four components, and present a taxonomy that includes practical problems, adopted techniques, and application goals. For each category, we elaborate, compare, and analyse the literature from the perspectives of adopted techniques, objectives, performance, advantages and drawbacks, etc. This survey article provides a comprehensive introduction to edge intelligence and its application areas. In addition, we summarise the development of this emerging research field and the current state of the art, and discuss important open issues and possible theoretical and technical solutions.
Tasks
Published 2020-03-26
URL https://arxiv.org/abs/2003.12172v1
PDF https://arxiv.org/pdf/2003.12172v1.pdf
PWC https://paperswithcode.com/paper/a-survey-on-edge-intelligence
Repo
Framework

Graph Hawkes Network for Reasoning on Temporal Knowledge Graphs

Title Graph Hawkes Network for Reasoning on Temporal Knowledge Graphs
Authors Zhen Han, Yuyi Wang, Yunpu Ma, Stephan Günnemann, Volker Tresp
Abstract The Hawkes process has become a standard method for modeling self-exciting event sequences with different event types. A recent work generalizing the Hawkes process to a neurally self-modulating multivariate point process enables the capturing of more complex and realistic influences of past events on the future. However, this approach is limited by the number of event types, making it impossible to model the dynamics of evolving graph sequences, where each possible link between two nodes can be considered as an event type. The problem becomes even more dramatic when links are directional and labeled, since, in this case, the number of event types scales with the number of nodes and link types. To address this issue, we propose the Graph Hawkes Network to capture the dynamics of evolving graph sequences. Extensive experiments on large-scale temporal relational databases, such as temporal knowledge graphs, demonstrate the effectiveness of our approach.
Tasks Knowledge Graphs
Published 2020-03-30
URL https://arxiv.org/abs/2003.13432v2
PDF https://arxiv.org/pdf/2003.13432v2.pdf
PWC https://paperswithcode.com/paper/the-graph-hawkes-network-for-reasoning-on
Repo
Framework
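
For readers unfamiliar with the starting point of this work, the classical multivariate Hawkes process the abstract refers to models the conditional intensity of event type $k$ as a base rate plus exponentially decaying excitation from past events; the form below is the standard textbook version, not the paper's neurally self-modulated variant or the proposed Graph Hawkes Network.

$$
\lambda_k(t) \;=\; \mu_k \;+\; \sum_{t_i < t} \alpha_{k,k_i}\, e^{-\beta_{k,k_i}\,(t - t_i)},
$$

where $\mu_k$ is the base intensity and $\alpha_{k,k_i}$, $\beta_{k,k_i}$ control how strongly, and for how long, a past event of type $k_i$ excites type $k$. In a temporal knowledge graph, every candidate labeled, directed link is its own event type, which is exactly why the number of event types explodes with the number of nodes and link types and a graph-structured parameterization becomes necessary.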

A Semi-Dynamic Bus Routing Infrastructure based on MBTA Bus Data

Title A Semi-Dynamic Bus Routing Infrastructure based on MBTA Bus Data
Authors Movses Musaelian, Anane Boateng, Md Zakirul Alam Bhuiyan
Abstract Transportation is evolving quickly in the emerging smart-city ecosystem, with personalized ride-sharing services advancing rapidly. Yet the public bus infrastructure has been slow to respond to these trends. With our research, we propose a semi-dynamic bus routing framework that is data-driven and responsive to relevant parameters in bus transport. We use newly published bus event data from a bus line in Boston and several algorithmic heuristics to create this framework, and demonstrate its capabilities and results. We find that this approach yields a very promising routing infrastructure that is smarter and more dynamic than the existing system.
Tasks
Published 2020-03-29
URL https://arxiv.org/abs/2004.00427v1
PDF https://arxiv.org/pdf/2004.00427v1.pdf
PWC https://paperswithcode.com/paper/a-semi-dynamic-bus-routing-infrastructure
Repo
Framework

Learning To Solve Differential Equations Across Initial Conditions

Title Learning To Solve Differential Equations Across Initial Conditions
Authors Shehryar Malik, Usman Anwar, Ali Ahmed, Alireza Aghasi
Abstract Recently, there has been a lot of interest in using neural networks for solving partial differential equations. A number of neural network-based partial differential equation solvers have been formulated which provide performance equivalent, and in some cases even superior, to classical solvers. However, these neural solvers, in general, need to be retrained each time the initial conditions or the domain of the partial differential equation changes. In this work, we pose the problem of approximating the solution of a fixed partial differential equation for arbitrary initial conditions as learning a conditional probability distribution. We demonstrate the utility of our method on Burgers' equation.
Tasks
Published 2020-03-26
URL https://arxiv.org/abs/2003.12159v1
PDF https://arxiv.org/pdf/2003.12159v1.pdf
PWC https://paperswithcode.com/paper/learning-to-solve-differential-equations
Repo
Framework
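
For context, Burgers' equation in its viscous form is $u_t + u\,u_x = \nu\,u_{xx}$. The sketch below shows how a neural solver conditioned on an initial condition can penalise that residual via automatic differentiation; the network architecture, the initial-condition embedding `z`, and the viscosity value are assumptions made for illustration, not the authors' conditional-distribution formulation.

```python
# Hedged sketch: residual of Burgers' equation u_t + u*u_x = nu*u_xx for a network
# u_theta(x, t, z), where z is an embedding of the initial condition.
import torch
import torch.nn as nn

nu = 0.01  # illustrative viscosity

class ConditionedSolver(nn.Module):
    def __init__(self, ic_dim=16, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 + ic_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, t, z):
        return self.net(torch.cat([x, t, z], dim=-1))

def burgers_residual(model, x, t, z):
    x, t = x.requires_grad_(True), t.requires_grad_(True)
    u = model(x, t, z)
    du_dx = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    du_dt = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    d2u_dx2 = torch.autograd.grad(du_dx, x, torch.ones_like(du_dx), create_graph=True)[0]
    return du_dt + u * du_dx - nu * d2u_dx2   # zero wherever the PDE is satisfied

model = ConditionedSolver()
x, t, z = torch.rand(128, 1), torch.rand(128, 1), torch.randn(128, 16)
loss = burgers_residual(model, x, t, z).pow(2).mean()
loss.backward()   # in practice this term is combined with a fit to the initial condition
```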

Analyzing the Noise Robustness of Deep Neural Networks

Title Analyzing the Noise Robustness of Deep Neural Networks
Authors Kelei Cao, Mengchen Liu, Hang Su, Jing Wu, Jun Zhu, Shixia Liu
Abstract Adversarial examples, generated by adding small but intentionally imperceptible perturbations to normal examples, can mislead deep neural networks (DNNs) to make incorrect predictions. Although much work has been done on both adversarial attack and defense, a fine-grained understanding of adversarial examples is still lacking. To address this issue, we present a visual analysis method to explain why adversarial examples are misclassified. The key is to compare and analyze the datapaths of both the adversarial and normal examples. A datapath is a group of critical neurons along with their connections. We formulate the datapath extraction as a subset selection problem and solve it by constructing and training a neural network. A multi-level visualization consisting of a network-level visualization of data flows, a layer-level visualization of feature maps, and a neuron-level visualization of learned features, has been designed to help investigate how datapaths of adversarial and normal examples diverge and merge in the prediction process. A quantitative evaluation and a case study were conducted to demonstrate the promise of our method to explain the misclassification of adversarial examples.
Tasks Adversarial Attack
Published 2020-01-26
URL https://arxiv.org/abs/2001.09395v1
PDF https://arxiv.org/pdf/2001.09395v1.pdf
PWC https://paperswithcode.com/paper/analyzing-the-noise-robustness-of-deep-neural-1
Repo
Framework
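
The adversarial examples analysed in this work are, in general, produced by small gradient-based perturbations. As background, here is a minimal sketch of the standard fast gradient sign method (FGSM); it is not the datapath-extraction or visualization technique proposed in the paper, and `epsilon` is an illustrative budget.

```python
# Hedged sketch: fast gradient sign method for crafting an adversarial example.
import torch
import torch.nn as nn

def fgsm(model, x, y, epsilon=0.03):
    """Perturb x within an L-infinity ball so the loss on the true label y increases."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)   # keep pixels in a valid range
    return x_adv.detach()

# usage on a toy classifier with random "images"
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)
y = torch.randint(0, 10, (4,))
x_adv = fgsm(model, x, y)
print(float((x_adv - x).abs().max()))   # perturbation is bounded by epsilon
```

Comparing the datapaths of `x` and `x_adv`, as the paper does, is what reveals where the two inputs diverge inside the network.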

Computational Methods in Professional Communication

Title Computational Methods in Professional Communication
Authors André Calero Valdez, Lena Adam, Dennis Assenmacher, Laura Burbach, Malte Bonart, Lena Frischlich, Philipp Schaer
Abstract The digitization of the world has also led to a digitization of communication processes. Traditional research methods fall short in understanding communication in digital worlds as the scope has become too large in volume, variety, and velocity to be studied using traditional approaches. In this paper, we present computational methods and their use in public and mass communication research and how those could be adapted to professional communication research. The paper is a proposal for a panel in which the panelists, each an expert in their field, will present their current work using computational methods and will discuss transferability of these methods to professional communication.
Tasks
Published 2020-01-02
URL https://arxiv.org/abs/2001.00565v1
PDF https://arxiv.org/pdf/2001.00565v1.pdf
PWC https://paperswithcode.com/paper/computational-methods-in-professional
Repo
Framework

Turbulent scalar flux in inclined jets in crossflow: counter gradient transport and deep learning modelling

Title Turbulent scalar flux in inclined jets in crossflow: counter gradient transport and deep learning modelling
Authors Pedro M. Milani, Julia Ling, John K. Eaton
Abstract A cylindrical and inclined jet in crossflow is studied under two distinct velocity ratios, $r=1$ and $r=2$, using highly resolved large eddy simulations (LES). First, an investigation of turbulent scalar mixing sheds light onto the previously observed but unexplained phenomenon of negative turbulent diffusivity. We identify two distinct types of counter gradient transport, prevalent in different regions: the first, throughout the windward shear layer, is caused by cross-gradient transport; the second, close to the wall right after injection, is caused by non-local effects. Then, we propose a deep learning approach for modelling the turbulent scalar flux by adapting the tensor basis neural network previously developed to model Reynolds stresses (Ling et al. 2016a). This approach uses a deep neural network with embedded coordinate frame invariance to predict a tensorial turbulent diffusivity that is not explicitly available in the high fidelity data used for training. After ensuring that the matrix diffusivity leads to a stable solution for the advection diffusion equation, we apply this approach in the inclined jets in crossflow under study. The results show significant improvement compared to a simple model, particularly where cross-gradient effects play an important role in turbulent mixing. The model proposed herein is not limited to jets in crossflow; it can be used in any turbulent flow where the Reynolds averaged transport of a scalar is considered.
Tasks
Published 2020-01-14
URL https://arxiv.org/abs/2001.04600v1
PDF https://arxiv.org/pdf/2001.04600v1.pdf
PWC https://paperswithcode.com/paper/turbulent-scalar-flux-in-inclined-jets-in
Repo
Framework
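
For readers outside turbulence modelling, the closure at stake can be written compactly. The standard gradient-diffusion hypothesis and its tensorial generalization (the quantity the paper's network predicts) are, in conventional notation rather than the paper's exact equations:

$$
\overline{u_i' c'} \;=\; -\,D_t\,\frac{\partial \bar{c}}{\partial x_i}
\qquad\text{(isotropic, scalar diffusivity)}
$$

$$
\overline{u_i' c'} \;=\; -\,D_{ij}\,\frac{\partial \bar{c}}{\partial x_j}
\qquad\text{(tensorial diffusivity)}
$$

Counter-gradient transport is the case where the measured flux $\overline{u_i' c'}$ points up the mean-concentration gradient, which forces a negative $D_t$ if the isotropic form is imposed; the tensorial form gives the model enough freedom to represent the cross-gradient effects described in the abstract.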

Polynomial Optimization for Bounding Lipschitz Constants of Deep Networks

Title Polynomial Optimization for Bounding Lipschitz Constants of Deep Networks
Authors Tong Chen, Jean-Bernard Lasserre, Victor Magron, Edouard Pauwels
Abstract The Lipschitz constant of a network plays an important role in many applications of deep learning, such as robustness certification and Wasserstein Generative Adversarial Networks. We introduce a semidefinite programming hierarchy to estimate the global and local Lipschitz constants of a multi-layer deep neural network. The novelty is to combine a polynomial lifting for the derivatives of ReLU functions with a weak generalization of Putinar's positivity certificate. This idea could also apply to other, nearly sparse, polynomial optimization problems in machine learning. We empirically demonstrate that our method not only runs faster than the state-of-the-art linear-programming-based method, but also provides sharper bounds.
Tasks
Published 2020-02-10
URL https://arxiv.org/abs/2002.03657v1
PDF https://arxiv.org/pdf/2002.03657v1.pdf
PWC https://paperswithcode.com/paper/polynomial-optimization-for-bounding
Repo
Framework
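
As background for the quantity being bounded, the global Lipschitz constant of a network $f$ and the naive layer-wise product bound it is usually compared against are, in standard notation (generic background, not the paper's SDP hierarchy):

$$
L(f) \;=\; \sup_{x \neq y} \frac{\lVert f(x) - f(y) \rVert}{\lVert x - y \rVert},
\qquad
L(f) \;\le\; \prod_{k=1}^{K} \lVert W_k \rVert_2
$$

for a $K$-layer network with weight matrices $W_k$ and 1-Lipschitz activations such as ReLU. The product bound is cheap but often loose; hierarchies like the one proposed here spend more computation (solving semidefinite programs) to certify tighter values.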

MW-GAN: Multi-Warping GAN for Caricature Generation with Multi-Style Geometric Exaggeration

Title MW-GAN: Multi-Warping GAN for Caricature Generation with Multi-Style Geometric Exaggeration
Authors Haodi Hou, Jing Huo, Jing Wu, Yu-Kun Lai, Yang Gao
Abstract Given an input face photo, the goal of caricature generation is to produce stylized, exaggerated caricatures that share the same identity as the photo. It requires simultaneous style transfer and shape exaggeration with rich diversity, while preserving the identity of the input. To address this challenging problem, we propose a novel framework called Multi-Warping GAN (MW-GAN), including a style network and a geometric network that are designed to conduct style transfer and geometric exaggeration, respectively. We bridge the gap between the style and the landmarks of an image with corresponding latent code spaces through a dual-way design, so as to generate caricatures with arbitrary styles and geometric exaggeration, which can be specified either through random sampling of a latent code or from a given caricature sample. Besides, we apply an identity-preserving loss to both image space and landmark space, leading to a great improvement in the quality of the generated caricatures. Experiments show that caricatures generated by MW-GAN have better quality than those produced by existing methods.
Tasks Caricature, Style Transfer
Published 2020-01-07
URL https://arxiv.org/abs/2001.01870v1
PDF https://arxiv.org/pdf/2001.01870v1.pdf
PWC https://paperswithcode.com/paper/mw-gan-multi-warping-gan-for-caricature
Repo
Framework
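
The identity-preserving loss mentioned in the abstract is commonly instantiated as a feature-space distance under a fixed identity embedding; the generic form is sketched below as an illustration, not as MW-GAN's exact loss.

$$
\mathcal{L}_{\mathrm{id}} \;=\; \big\lVert \phi(x) - \phi\big(G(x)\big) \big\rVert_2^2,
$$

where $x$ is the input photo, $G(x)$ the generated caricature, and $\phi$ a fixed embedding (for example, a pretrained face-recognition network). The abstract notes that such a constraint is applied in both the image space and the landmark space.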

Predictive Sampling with Forecasting Autoregressive Models

Title Predictive Sampling with Forecasting Autoregressive Models
Authors Auke J. Wiggers, Emiel Hoogeboom
Abstract Autoregressive models (ARMs) currently hold state-of-the-art performance in likelihood-based modeling of image and audio data. Generally, neural network based ARMs are designed to allow fast inference, but sampling from these models is impractically slow. In this paper, we introduce the predictive sampling algorithm: a procedure that exploits the fast inference property of ARMs in order to speed up sampling, while keeping the model intact. We propose two variations of predictive sampling, namely sampling with ARM fixed-point iteration and learned forecasting modules. Their effectiveness is demonstrated in two settings: i) explicit likelihood modeling on binary MNIST, SVHN and CIFAR10, and ii) discrete latent modeling in an autoencoder trained on SVHN, CIFAR10 and Imagenet32. Empirically, we show considerable improvements over baselines in number of ARM inference calls and sampling speed.
Tasks
Published 2020-02-23
URL https://arxiv.org/abs/2002.09928v1
PDF https://arxiv.org/pdf/2002.09928v1.pdf
PWC https://paperswithcode.com/paper/predictive-sampling-with-forecasting
Repo
Framework
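
The fixed-point variant of predictive sampling can be pictured as repeatedly re-predicting an entire guess sequence in parallel until it stops changing. The sketch below illustrates that mechanism on a toy deterministic predictor with greedy decoding; the toy model, the convergence check, and the cold/warm comparison are illustrative simplifications, not the paper's algorithm or its forecasting modules.

```python
# Hedged sketch: fixed-point (Jacobi-style) sampling for an autoregressive model.
import numpy as np

def arm_predict(seq):
    """Toy deterministic 'ARM': the prediction at position i depends only on
    positions < i (a stand-in for argmax of p(x_i | x_<i))."""
    out = np.empty_like(seq)
    for i in range(len(seq)):
        out[i] = (seq[:i].sum() + i) % 5
    return out

def predictive_sample(forecast, max_passes=100):
    """Refine a whole-sequence forecast with parallel passes until it is
    self-consistent under the ARM."""
    seq = forecast.copy()
    for passes in range(1, max_passes + 1):
        new = arm_predict(seq)
        if np.array_equal(new, seq):   # fixed point reached
            return seq, passes
        seq = new
    return seq, max_passes

length = 16
cold, p_cold = predictive_sample(np.zeros(length, dtype=np.int64))
warm, p_warm = predictive_sample(cold)   # a perfect forecast is already a fixed point
print("passes from a cold guess:", p_cold, "| from a perfect forecast:", p_warm)
```

The better the forecast, the fewer parallel passes are needed, which is where the speed-up over strictly sequential, one-position-per-call sampling comes from.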

Model Agnostic Multilevel Explanations

Title Model Agnostic Multilevel Explanations
Authors Karthikeyan Natesan Ramamurthy, Bhanukiran Vinzamuri, Yunfeng Zhang, Amit Dhurandhar
Abstract In recent years, post-hoc local instance-level and global dataset-level explainability of black-box models has received a lot of attention. Much less attention has been given to obtaining insights at intermediate or group levels, which is a need outlined in recent works that study the challenges in realizing the guidelines in the General Data Protection Regulation (GDPR). In this paper, we propose a meta-method that, given a typical local explainability method, can build a multilevel explanation tree. The leaves of this tree correspond to the local explanations, the root corresponds to the global explanation, and intermediate levels correspond to explanations for groups of data points that it automatically clusters. The method can also leverage side information, where users can specify points for which they may want the explanations to be similar. We argue that such a multilevel structure can also be an effective form of communication, where one could obtain few explanations that characterize the entire dataset by considering an appropriate level in our explanation tree. Explanations for novel test points can be cost-efficiently obtained by associating them with the closest training points. When the local explainability technique is generalized additive (viz. LIME, GAMs), we develop a fast approximate algorithm for building the multilevel tree and study its convergence behavior. We validate the effectiveness of the proposed technique based on two human studies – one with experts and the other with non-expert users – on real world datasets, and show that we produce high fidelity sparse explanations on several other public datasets.
Tasks
Published 2020-03-12
URL https://arxiv.org/abs/2003.06005v1
PDF https://arxiv.org/pdf/2003.06005v1.pdf
PWC https://paperswithcode.com/paper/model-agnostic-multilevel-explanations
Repo
Framework
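
A rough mental model of the multilevel tree: compute a local explanation per data point, cluster those explanations hierarchically, and average within each cluster to obtain group-level and dataset-level explanations. The sketch below uses locally weighted linear coefficients as stand-in local explanations and off-the-shelf hierarchical clustering; both choices are simplifications for illustration, not the meta-method or the fast approximate algorithm described in the paper.

```python
# Hedged sketch: group-level explanations by clustering per-point local explanations.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([2.0, -1.0, 0.0, 0.5, 0.0]) + 0.1 * rng.normal(size=200)

def local_explanation(x0, X, y, bandwidth=1.0, ridge=1e-2):
    """Stand-in local explanation: LIME-like weighted linear fit around x0."""
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * bandwidth ** 2))
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X + ridge * np.eye(X.shape[1]), Xw.T @ y)

local = np.array([local_explanation(x0, X, y) for x0 in X])   # leaves of the tree
Z = linkage(local, method="ward")                             # hierarchy over explanations
groups = fcluster(Z, t=4, criterion="maxclust")               # one intermediate level
group_expl = {g: local[groups == g].mean(axis=0) for g in np.unique(groups)}
global_expl = local.mean(axis=0)                              # root of the tree
print("global explanation:", np.round(global_expl, 2))
```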

Quantum circuit-like learning: A fast and scalable classical machine-learning algorithm with similar performance to quantum circuit learning

Title Quantum circuit-like learning: A fast and scalable classical machine-learning algorithm with similar performance to quantum circuit learning
Authors Naoko Koide-Majima, Kei Majima
Abstract The application of near-term quantum devices to machine learning (ML) has attracted much attention. In one such attempt, Mitarai et al. (2018) proposed a framework to use a quantum circuit for supervised ML tasks, which is called quantum circuit learning (QCL). Due to the use of a quantum circuit, QCL can employ an exponentially high-dimensional Hilbert space as its feature space. However, its efficiency compared to classical algorithms remains unexplored. In this study, using a statistical technique called count sketch, we propose a classical ML algorithm that uses the same Hilbert space. In numerical simulations, our proposed algorithm demonstrates similar performance to QCL for several ML tasks. This provides a new perspective with which to consider the computational and memory efficiency of quantum ML algorithms.
Tasks
Published 2020-03-24
URL https://arxiv.org/abs/2003.10667v1
PDF https://arxiv.org/pdf/2003.10667v1.pdf
PWC https://paperswithcode.com/paper/quantum-circuit-like-learning-a-fast-and
Repo
Framework
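
Count sketch, the statistical technique the abstract builds on, compresses a high-dimensional vector by hashing each coordinate into a small number of buckets with a random sign, which preserves inner products in expectation. Below is a minimal, generic version; the dimensions are illustrative and this is not the paper's specific construction over the quantum-circuit feature space.

```python
# Hedged sketch: a plain count-sketch projection of a high-dimensional vector.
import numpy as np

def make_count_sketch(input_dim, sketch_dim, seed=0):
    rng = np.random.default_rng(seed)
    buckets = rng.integers(0, sketch_dim, size=input_dim)   # hash h: coordinate -> bucket
    signs = rng.choice([-1.0, 1.0], size=input_dim)         # sign hash s: coordinate -> +-1
    def sketch(x):
        out = np.zeros(sketch_dim)
        np.add.at(out, buckets, signs * x)                   # out[h(j)] += s(j) * x[j]
        return out
    return sketch

sketch = make_count_sketch(input_dim=4096, sketch_dim=256)
x = np.random.default_rng(1).normal(size=4096)
y = np.random.default_rng(2).normal(size=4096)
# Inner products are preserved in expectation: <sketch(x), sketch(y)> ~= <x, y>.
print(round(float(np.dot(sketch(x), sketch(y))), 2), round(float(np.dot(x, y)), 2))
```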

Probabilistic Future Prediction for Video Scene Understanding

Title Probabilistic Future Prediction for Video Scene Understanding
Authors Anthony Hu, Fergal Cotter, Nikhil Mohan, Corina Gurau, Alex Kendall
Abstract We present a novel deep learning architecture for probabilistic future prediction from video. We predict the future semantics, geometry and motion of complex real-world urban scenes and use this representation to control an autonomous vehicle. This work is the first to jointly predict ego-motion, static scene, and the motion of dynamic agents in a probabilistic manner, which allows sampling consistent, highly probable futures from a compact latent space. Our model learns a representation from RGB video with a spatio-temporal convolutional module. The learned representation can be explicitly decoded to future semantic segmentation, depth, and optical flow, in addition to being an input to a learnt driving policy. To model the stochasticity of the future, we introduce a conditional variational approach which minimises the divergence between the present distribution (what could happen given what we have seen) and the future distribution (what we observe actually happens). During inference, diverse futures are generated by sampling from the present distribution.
Tasks Future prediction, Optical Flow Estimation, Scene Understanding, Semantic Segmentation
Published 2020-03-13
URL https://arxiv.org/abs/2003.06409v1
PDF https://arxiv.org/pdf/2003.06409v1.pdf
PWC https://paperswithcode.com/paper/probabilistic-future-prediction-for-video
Repo
Framework
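
The conditional variational objective mentioned in the abstract hinges on a divergence between a "present" latent distribution (conditioned on observed frames only) and a "future" latent distribution (which additionally sees the observed future during training). For diagonal Gaussians this divergence has the standard closed form below; the notation and the direction of the KL term are generic background, not a transcription of the paper's loss.

$$
D_{\mathrm{KL}}\!\left(\mathcal{N}(\mu_f, \sigma_f^2)\,\big\|\,\mathcal{N}(\mu_p, \sigma_p^2)\right)
\;=\; \sum_{d} \left( \log\frac{\sigma_{p,d}}{\sigma_{f,d}}
\;+\; \frac{\sigma_{f,d}^2 + (\mu_{f,d} - \mu_{p,d})^2}{2\,\sigma_{p,d}^2}
\;-\; \frac{1}{2} \right)
$$

At test time no future is available, so diverse futures are generated by sampling from the present distribution, as the abstract states.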

DeepSperm: A robust and real-time bull sperm-cell detection in densely populated semen videos

Title DeepSperm: A robust and real-time bull sperm-cell detection in densely populated semen videos
Authors Priyanto Hidayatullah, Xueting Wang, Toshihiko Yamasaki, Tati L. E. R. Mengko, Rinaldi Munir, Anggraini Barlian, Eros Sukmawati, Supraptono Supraptono
Abstract Background and Objective: Object detection is a primary research interest in computer vision. Sperm-cell detection in a densely populated bull semen microscopic observation video presents challenges such as partial occlusion, a vast number of objects in a single video frame, the tiny size of the objects, artifacts, low contrast, and blurry objects caused by the rapid movement of the sperm cells. This study proposes an architecture, called DeepSperm, that addresses the aforementioned challenges and is more accurate and faster than state-of-the-art architectures. Methods: In the proposed architecture, we use only one detection layer, which is specific to small object detection. To handle overfitting and increase accuracy, we set a higher network resolution, use a dropout layer, and perform data augmentation on hue, saturation, and exposure. Several hyper-parameters are tuned to achieve better performance. We compare our proposed method with a conventional image-processing-based object-detection method, you only look once (YOLOv3), and the mask region-based convolutional neural network (Mask R-CNN). Results: In our experiment, we achieve 86.91 mAP on the test dataset and a processing speed of 50.3 fps. In comparison with YOLOv3, we achieve an increase of 16.66 mAP points, are 3.26x faster on testing, and are 1.4x faster on training with a small training dataset containing 40 video frames. The weight file size is also reduced significantly, being 16.94x smaller than that of YOLOv3. Moreover, it requires 1.3x less graphical processing unit (GPU) memory than YOLOv3. Conclusions: This study proposes DeepSperm, a simple, effective, and efficient architecture that, together with its hyper-parameters and configuration, detects bull sperm cells robustly in real time. In our experiment, we surpass the state of the art in terms of accuracy, speed, and resource needs.
Tasks Data Augmentation, Object Detection, Small Object Detection
Published 2020-03-03
URL https://arxiv.org/abs/2003.01395v1
PDF https://arxiv.org/pdf/2003.01395v1.pdf
PWC https://paperswithcode.com/paper/deepsperm-a-robust-and-real-time-bull-sperm
Repo
Framework
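
The hue, saturation, and exposure augmentation mentioned in the Methods is a common photometric recipe. A minimal sketch in HSV space is shown below; the jitter ranges and the helper name `hsv_jitter` are illustrative assumptions, not DeepSperm's tuned hyper-parameters.

```python
# Hedged sketch: random hue / saturation / exposure (value) jitter for training images.
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def hsv_jitter(img_rgb, max_hue=0.02, max_sat=0.3, max_val=0.3, rng=None):
    """img_rgb: float array in [0, 1] with shape (H, W, 3)."""
    rng = rng or np.random.default_rng()
    hsv = rgb_to_hsv(img_rgb)
    hsv[..., 0] = (hsv[..., 0] + rng.uniform(-max_hue, max_hue)) % 1.0   # hue shift
    hsv[..., 1] *= 1.0 + rng.uniform(-max_sat, max_sat)                  # saturation scale
    hsv[..., 2] *= 1.0 + rng.uniform(-max_val, max_val)                  # exposure scale
    return hsv_to_rgb(np.clip(hsv, 0.0, 1.0))

img = np.random.default_rng(0).random((64, 64, 3))
aug = hsv_jitter(img)
print(aug.shape, float(aug.min()), float(aug.max()))
```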