January 30, 2020

3508 words 17 mins read

Paper Group ANR 367

Remote UAV Online Path Planning via Neural Network Based Opportunistic Control. A Joint Learning and Communications Framework for Federated Learning over Wireless Networks. Super-Resolution of Brain MRI Images using Overcomplete Dictionaries and Nonlocal Similarity. An MDL-Based Classifier for Transactional Datasets with Application in Malware Dete …

Remote UAV Online Path Planning via Neural Network Based Opportunistic Control

Title Remote UAV Online Path Planning via Neural Network Based Opportunistic Control
Authors Hamid Shiri, Jihong Park, Mehdi Bennis
Abstract This letter proposes a neural network (NN) aided remote unmanned aerial vehicle (UAV) online control algorithm, coined oHJB. By downloading a UAV’s state, a base station (BS) trains an HJB NN that solves the Hamilton-Jacobi-Bellman (HJB) equation in real time, yielding the optimal control action. Initially, the BS uploads this control action to the UAV. Once the HJB NN is sufficiently trained and the UAV is far away, the BS instead uploads the HJB NN model, enabling the UAV to carry out control decisions locally even when the connection is lost. Simulations corroborate the effectiveness of oHJB in reducing the UAV’s travel time and energy by exploiting the trade-off between uploading delays and control robustness in poor channel conditions.
Tasks
Published 2019-10-11
URL https://arxiv.org/abs/1910.04969v1
PDF https://arxiv.org/pdf/1910.04969v1.pdf
PWC https://paperswithcode.com/paper/remote-uav-online-path-planning-via-neural
Repo
Framework
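
The opportunistic switching rule described in the abstract above (keep uploading control actions while the HJB NN is immature or the UAV is close, then upload the model itself so the UAV can act locally if the link drops) can be illustrated with a toy sketch. This is not the paper's oHJB solver: the channel model, the distance threshold, the loss threshold, and the `bs_control_action` helper below are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

D_SWITCH = 50.0       # distance beyond which the model (not the action) is uploaded
LOSS_THRESHOLD = 0.1  # NN considered "sufficiently trained" below this training loss

def bs_control_action(state):
    """Stand-in for the action produced by the BS-side HJB NN (toy proportional law)."""
    return -0.5 * state

state = np.array([100.0, 20.0])   # toy UAV state: [distance to goal, velocity]
train_loss = 1.0                  # toy proxy for HJB-NN training progress
local_model = None                # model cached at the UAV, once uploaded

for t in range(200):
    distance_to_bs = abs(state[0])
    link_ok = rng.random() > min(0.9, distance_to_bs / 200.0)  # toy outage probability

    train_loss *= 0.97  # pretend the BS keeps improving its HJB NN from downloaded states

    if train_loss < LOSS_THRESHOLD and distance_to_bs > D_SWITCH and link_ok:
        local_model = bs_control_action          # "upload the model" (here: the function)
    if link_ok and local_model is None:
        action = bs_control_action(state)        # remote control: BS uploads the action
    elif local_model is not None:
        action = local_model(state)              # local control: UAV acts even if link drops
    else:
        action = np.zeros_like(state)            # link lost before model upload: coast

    state = state + 0.1 * action                 # toy single-integrator dynamics
```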

A Joint Learning and Communications Framework for Federated Learning over Wireless Networks

Title A Joint Learning and Communications Framework for Federated Learning over Wireless Networks
Authors Mingzhe Chen, Zhaohui Yang, Walid Saad, Changchuan Yin, H. Vincent Poor, Shuguang Cui
Abstract In this paper, the problem of training federated learning (FL) algorithms over a realistic wireless network is studied. In particular, in the considered model, wireless users execute an FL algorithm while training their local FL models using their own data and transmitting the trained local FL models to a base station (BS) that will generate a global FL model and send it back to the users. Since all training parameters are transmitted over wireless links, the quality of the training will be affected by wireless factors such as packet errors and the availability of wireless resources. Meanwhile, due to the limited wireless bandwidth, the BS must select an appropriate subset of users to execute the FL algorithm so as to build a global FL model accurately. This joint learning, wireless resource allocation, and user selection problem is formulated as an optimization problem whose goal is to minimize an FL loss function that captures the performance of the FL algorithm. To address this problem, a closed-form expression for the expected convergence rate of the FL algorithm is first derived to quantify the impact of wireless factors on FL. Then, based on the expected convergence rate of the FL algorithm, the optimal transmit power for each user is derived, under a given user selection and uplink resource block (RB) allocation scheme. Finally, the user selection and uplink RB allocation are optimized so as to minimize the FL loss function. Simulation results show that the proposed joint federated learning and communication framework can reduce the FL loss function value by up to 10% and 16%, respectively, compared to: 1) an optimal user selection algorithm with random resource allocation and 2) a standard FL algorithm with random user selection and resource allocation.
Tasks
Published 2019-09-17
URL https://arxiv.org/abs/1909.07972v1
PDF https://arxiv.org/pdf/1909.07972v1.pdf
PWC https://paperswithcode.com/paper/a-joint-learning-and-communications-framework
Repo
Framework
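
As a rough illustration of the interplay between user selection and packet errors described above, the sketch below runs a FedAvg-style aggregation in which each selected user's update survives only with a channel-dependent probability. The error model, the selection rule, and the local update are simplified placeholders, not the paper's joint optimization.

```python
import numpy as np

rng = np.random.default_rng(1)

n_users, dim, n_select = 20, 5, 8
w_global = np.zeros(dim)
local_data = [(rng.normal(size=(30, dim)), rng.normal(size=30)) for _ in range(n_users)]
snr = rng.uniform(1.0, 10.0, size=n_users)          # toy per-user channel quality

def local_update(w, X, y, lr=0.05, steps=5):
    """One user's local FL update: a few gradient steps on least squares."""
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

for rnd in range(50):
    selected = np.argsort(-snr)[:n_select]          # placeholder selection: best channels
    received = []
    for u in selected:
        w_u = local_update(w_global.copy(), *local_data[u])
        p_err = np.exp(-snr[u] / 4.0)               # toy packet-error probability
        if rng.random() > p_err:                    # erroneous packets are discarded
            received.append(w_u)
    if received:                                    # BS averages only correctly received models
        w_global = np.mean(received, axis=0)
```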

Super-Resolution of Brain MRI Images using Overcomplete Dictionaries and Nonlocal Similarity

Title Super-Resolution of Brain MRI Images using Overcomplete Dictionaries and Nonlocal Similarity
Authors Yinghua Li, Bin Song, Jie Guo, Xiaojiang Du, Mohsen Guizani
Abstract Magnetic Resonance Imaging (MRI) images often have limited and unsatisfactory resolution due to various physical, technological, and economic constraints. Super-resolution techniques can obtain high-resolution MRI images. Traditional methods enhance the resolution of brain MRI by interpolation, which affects the accuracy of the subsequent diagnostic process, while the requirement for brain image quality keeps increasing. In this paper, we propose an image super-resolution (SR) method based on overcomplete dictionaries and the inherent similarity of an image to recover the high-resolution (HR) image from a single low-resolution (LR) image. We explore the nonlocal similarity of the image to tentatively search for similar blocks in the whole image and present a joint reconstruction method based on compressive sensing (CS) and similarity constraints. The sparsity and self-similarity of the image blocks are taken as the constraints. The proposed method is summarized in the following steps. First, a dictionary classification method based on the measurement domain is presented. The image blocks are classified into smooth, texture, and edge parts by analyzing their features in the measurement domain. Then, the corresponding dictionaries are trained using the classified image blocks. Equally important, in the reconstruction part, we use the CS reconstruction method to recover the HR brain MRI image, considering both the nonlocal similarity and the sparsity of an image as the constraints. This method performs better both visually and quantitatively than some existing methods.
Tasks Compressive Sensing, Image Super-Resolution, Super-Resolution
Published 2019-02-13
URL http://arxiv.org/abs/1902.04902v1
PDF http://arxiv.org/pdf/1902.04902v1.pdf
PWC https://paperswithcode.com/paper/super-resolution-of-brain-mri-images-using
Repo
Framework
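
The sparse-coding step at the heart of a dictionary-based SR pipeline like the one above can be sketched with coupled LR/HR patch dictionaries: encode an LR patch sparsely and reuse the code with the HR dictionary. The block classification, measurement-domain training, and nonlocal-similarity constraint are omitted; this is a generic dictionary-SR sketch on synthetic patches, not the paper's CS reconstruction.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

rng = np.random.default_rng(2)

# Toy paired training patches: HR patches and their 2x-downsampled LR versions (flattened).
hr_patches = rng.normal(size=(500, 64))                                   # 8x8 HR patches
lr_patches = hr_patches.reshape(500, 8, 8)[:, ::2, ::2].reshape(500, 16)  # 4x4 LR patches

# Learn an overcomplete dictionary on the LR patches.
dl = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, random_state=0)
codes = dl.fit_transform(lr_patches)                      # sparse codes of the LR patches

# Fit a paired HR dictionary by least squares so that codes @ D_hr ~= hr_patches.
D_hr, *_ = np.linalg.lstsq(codes, hr_patches, rcond=None)

# "Super-resolve" a new LR patch: sparse-code it with the LR dictionary, decode with D_hr.
lr_test = lr_patches[:1]
code = sparse_encode(lr_test, dl.components_, algorithm='omp', n_nonzero_coefs=5)
hr_estimate = code @ D_hr                                 # reconstructed 8x8 patch (flattened)
```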

An MDL-Based Classifier for Transactional Datasets with Application in Malware Detection

Title An MDL-Based Classifier for Transactional Datasets with Application in Malware Detection
Authors Behzad Asadi, Vijay Varadharajan
Abstract We design a classifier for transactional datasets with application in malware detection. We build the classifier based on the minimum description length (MDL) principle. This involves selecting a model that best compresses the training dataset for each class under the MDL criterion. To select a model for a dataset, we first use clustering followed by closed frequent pattern mining to extract a subset of closed frequent patterns (CFPs). We show that this method acts as a pattern summarization method to avoid pattern explosion; this is done by giving priority to longer CFPs, and without requiring the extraction of all CFPs. We then use the MDL criterion to further summarize the extracted patterns and construct a code table of patterns. This code table is taken as the selected model for the compression of the dataset. We evaluate our classifier on the problem of static malware detection in portable executable (PE) files. We consider the API calls of PE files as their distinguishing features; the presence or absence of API calls forms a transactional dataset. Using our proposed method, we construct two code tables, one for the benign training dataset and one for the malware training dataset. Our dataset consists of 19696 benign and 19696 malware samples, each a binary sequence of size 22761. We compare our classifier with deep neural networks, which provide state-of-the-art performance. The comparison shows that our classifier performs very close to deep neural networks. We also show that our classifier is interpretable, which motivates the use of this type of classifier where some degree of explanation is required as to why a sample is classified under one class rather than the other.
Tasks Malware Detection
Published 2019-10-09
URL https://arxiv.org/abs/1910.03751v2
PDF https://arxiv.org/pdf/1910.03751v2.pdf
PWC https://paperswithcode.com/paper/an-mdl-based-classifier-for-transactional
Repo
Framework
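
The classification rule itself is easy to illustrate: encode the test transaction with each class's code table and pick the class giving the shorter description. The toy code tables, covering heuristic, and code lengths below are illustrative placeholders, not the tables produced by the paper's clustering/CFP-mining/MDL pipeline.

```python
# Toy code tables: pattern (frozenset of API calls) -> code length in bits.
# In the paper these are built by clustering + closed-frequent-pattern mining + MDL pruning.
ct_benign  = {frozenset({"ReadFile", "CloseHandle"}): 1.0,
              frozenset({"CreateWindow"}): 2.0}
ct_malware = {frozenset({"VirtualAlloc", "WriteProcessMemory"}): 1.0,
              frozenset({"RegSetValue"}): 2.0}
SINGLETON_COST = 8.0   # fallback cost for items not covered by any pattern

def description_length(transaction, code_table):
    """Greedily cover the transaction with patterns (longest first) and sum code lengths."""
    remaining = set(transaction)
    total = 0.0
    for pattern, bits in sorted(code_table.items(), key=lambda kv: -len(kv[0])):
        if pattern <= remaining:
            total += bits
            remaining -= pattern
    return total + SINGLETON_COST * len(remaining)

def classify(transaction):
    lb = description_length(transaction, ct_benign)
    lm = description_length(transaction, ct_malware)
    return "malware" if lm < lb else "benign"

print(classify({"VirtualAlloc", "WriteProcessMemory", "RegSetValue"}))  # -> malware
```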

Achieving Conservation of Energy in Neural Network Emulators for Climate Modeling

Title Achieving Conservation of Energy in Neural Network Emulators for Climate Modeling
Authors Tom Beucler, Stephan Rasp, Michael Pritchard, Pierre Gentine
Abstract Artificial neural networks have the potential to emulate cloud processes with higher accuracy than the semi-empirical emulators currently used in climate models. However, neural-network models do not intrinsically conserve energy and mass, which is an obstacle to using them for long-term climate predictions. Here, we propose two methods to enforce linear conservation laws in neural-network emulators of physical models: constraining (1) the loss function or (2) the architecture of the network itself. Applied to the emulation of explicitly-resolved cloud processes in a prototype multi-scale climate model, we show that architecture constraints can enforce conservation laws to satisfactory numerical precision, while all constraints help the neural network generalize better to conditions outside of its training set, such as global warming.
Tasks
Published 2019-06-15
URL https://arxiv.org/abs/1906.06622v1
PDF https://arxiv.org/pdf/1906.06622v1.pdf
PWC https://paperswithcode.com/paper/achieving-conservation-of-energy-in-neural
Repo
Framework
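
The two strategies above, penalizing constraint violation in the loss versus hard-enforcing it in the architecture, can be sketched for a generic linear constraint C y = b. The network itself is abstracted away; C, b, and the predictions are toy placeholders rather than the paper's energy/mass budgets.

```python
import numpy as np

rng = np.random.default_rng(3)

C = rng.normal(size=(2, 6))   # 2 linear conservation constraints on a 6-dim output
b = np.zeros(2)               # e.g. net energy and mass tendencies must vanish
y_pred = rng.normal(size=6)   # raw (unconstrained) emulator output
y_true = rng.normal(size=6)

# (1) Soft constraint: add a penalty on the violation to the loss.
lam = 10.0
violation = C @ y_pred - b
loss = np.mean((y_pred - y_true) ** 2) + lam * np.sum(violation ** 2)

# (2) Hard constraint: a final "layer" projects the output onto {y : C y = b},
# so the conservation law holds to numerical precision by construction.
y_proj = y_pred - C.T @ np.linalg.solve(C @ C.T, C @ y_pred - b)
assert np.allclose(C @ y_proj, b)
```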

A Novel Hybrid Scheme Using Genetic Algorithms and Deep Learning for the Reconstruction of Portuguese Tile Panels

Title A Novel Hybrid Scheme Using Genetic Algorithms and Deep Learning for the Reconstruction of Portuguese Tile Panels
Authors Daniel Rika, Dror Sholomon, Eli David, Nathan S. Netanyahu
Abstract This paper presents a novel scheme, based on a unique combination of genetic algorithms (GAs) and deep learning (DL), for the automatic reconstruction of Portuguese tile panels, a challenging real-world variant of the jigsaw puzzle problem (JPP) with important national heritage implications. Specifically, we introduce an enhanced GA-based puzzle solver, whose integration with a novel DL-based compatibility measure (DLCM) yields state-of-the-art performance for this application. Current compatibility measures typically consider (the chromatic information of) edge pixels (between adjacent tiles), and help achieve high accuracy for the synthetic JPP variant. However, such measures exhibit rather poor performance when applied to the Portuguese tile panels, which are susceptible to various real-world effects, e.g., monochromatic panels, non-squared tiles, edge degradation, etc. To overcome such difficulties, we have developed a novel DLCM to extract high-level texture/color statistics from the entire tile information. Integrating this measure with our enhanced GA-based puzzle solver, we have demonstrated, for the first time, how to deal most effectively with large-scale real-world problems, such as the Portuguese tile problem. Specifically, we have achieved 82% accuracy for the reconstruction of Portuguese tile panels with unknown piece rotation and puzzle dimension (compared to merely 3.5% average accuracy achieved by the best method known for solving this problem variant). The proposed method outperforms even human experts in several cases, correcting their mistakes in the manual tile assembly.
Tasks
Published 2019-12-04
URL https://arxiv.org/abs/1912.02707v1
PDF https://arxiv.org/pdf/1912.02707v1.pdf
PWC https://paperswithcode.com/paper/a-novel-hybrid-scheme-using-genetic
Repo
Framework
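
A drastically simplified version of the GA-plus-compatibility idea is sketched below: candidate orderings of pieces are scored by summing a pairwise compatibility measure and evolved by selection and mutation. The compatibility function here is a trivial edge-pixel-difference placeholder standing in for the paper's learned DLCM, and the puzzle is reduced to a 1-D arrangement with known orientation.

```python
import numpy as np

rng = np.random.default_rng(4)

n_pieces, width = 8, 16
pieces = [rng.normal(size=(width, width, 3)) for _ in range(n_pieces)]  # toy "tiles"

def compatibility(left, right):
    """Placeholder for the learned DLCM: negative edge-pixel difference."""
    return -np.sum(np.abs(left[:, -1, :] - right[:, 0, :]))

def fitness(order):
    return sum(compatibility(pieces[a], pieces[b]) for a, b in zip(order, order[1:]))

# Tiny GA: keep the best orderings, create children by swap mutation.
population = [rng.permutation(n_pieces) for _ in range(40)]
for gen in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    children = []
    for p in parents:
        child = p.copy()
        i, j = rng.integers(n_pieces, size=2)
        child[i], child[j] = child[j], child[i]       # swap two positions
        children.append(child)
    population = parents + children + [rng.permutation(n_pieces) for _ in range(20)]

best = max(population, key=fitness)
```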

Learning Graph Neural Networks with Noisy Labels

Title Learning Graph Neural Networks with Noisy Labels
Authors Hoang NT, Choong Jun Jin, Tsuyoshi Murata
Abstract We study the robustness of GNN training procedures to symmetric label noise. By combining nonlinear neural message-passing models (e.g. Graph Isomorphism Networks, GraphSAGE, etc.) with loss correction methods, we present a noise-tolerant approach for the graph classification task. Our experiments show that test accuracy can be improved under the artificial symmetric noise setting.
Tasks Graph Classification
Published 2019-05-05
URL https://arxiv.org/abs/1905.01591v1
PDF https://arxiv.org/pdf/1905.01591v1.pdf
PWC https://paperswithcode.com/paper/learning-graph-neural-networks-with-noisy
Repo
Framework
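
The loss-correction part of the approach can be sketched independently of the GNN: given (an estimate of) the symmetric noise-transition matrix T, the "forward" correction trains against the mixed prediction p @ T instead of p. The sketch below is generic plain numpy; the message-passing model itself is omitted.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def symmetric_noise_matrix(num_classes, eps):
    """T[i, j] = P(observed label j | clean label i) under symmetric noise of rate eps."""
    T = np.full((num_classes, num_classes), eps / (num_classes - 1))
    np.fill_diagonal(T, 1.0 - eps)
    return T

def forward_corrected_loss(logits, noisy_labels, T):
    """Cross-entropy of the noise-mixed predictions against the noisy labels."""
    p = softmax(logits)                     # model's clean-class probabilities
    p_noisy = p @ T                         # predicted distribution of *noisy* labels
    n = len(noisy_labels)
    return -np.mean(np.log(p_noisy[np.arange(n), noisy_labels] + 1e-12))

# Toy usage: 4 graphs, 3 classes, 20% symmetric label noise.
rng = np.random.default_rng(5)
logits = rng.normal(size=(4, 3))
noisy_labels = np.array([0, 2, 1, 1])
loss = forward_corrected_loss(logits, noisy_labels, symmetric_noise_matrix(3, 0.2))
```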

Robust data-driven approach for predicting the configurational energy of high entropy alloys

Title Robust data-driven approach for predicting the configurational energy of high entropy alloys
Authors Jiaxin Zhang, Xianglin Liu, Sirui Bi, Junqi Yin, Guannan Zhang, Markus Eisenbach
Abstract High entropy alloys (HEAs) have been increasingly attractive as promising next-generation materials due to their various excellent properties. It is essential to characterize the degree of chemical ordering and identify order-disorder transitions through efficient simulation and modeling of thermodynamics. In this study, a robust data-driven framework based on Bayesian approaches is proposed and demonstrated for the accurate and efficient prediction of the configurational energy of high entropy alloys. The proposed effective pair interaction (EPI) model with ensemble sampling is used to map a configuration to its corresponding energy. Given limited data calculated by first-principles calculations, Bayesian regularized regression not only offers accurate and stable predictions but also effectively quantifies the uncertainties associated with the EPI parameters. Rather than determining the model complexity arbitrarily, we further conduct a physical feature selection to identify the truncation of coordination shells in the EPI model using the Bayesian information criterion. The results show efficient and robust performance in predicting the configurational energy, particularly given small datasets. The developed methodology is applied to study a series of refractory HEAs, i.e. NbMoTaW, NbMoTaWV and NbMoTaWTi, where it is demonstrated how dataset size affects the confidence we can place in statistical estimates of the configurational energy when data are sparse.
Tasks Feature Selection
Published 2019-08-10
URL https://arxiv.org/abs/1908.03665v1
PDF https://arxiv.org/pdf/1908.03665v1.pdf
PWC https://paperswithcode.com/paper/robust-data-driven-approach-for-predicting
Repo
Framework
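
The regression core of such a framework, mapping pair-interaction features of a configuration to its energy with Bayesian regularization and a BIC-based choice of how many coordination shells to keep, can be sketched as below. The synthetic features and energies are placeholders; real EPI features would come from first-principles data, and this is not the paper's exact pipeline.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(6)

n_configs, max_shells = 120, 8
X = rng.normal(size=(n_configs, max_shells))       # toy EPI features, one column per shell
true_w = np.array([2.0, -1.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0])
y = X @ true_w + 0.1 * rng.normal(size=n_configs)  # toy configurational energies

def bic(model, Xk, y):
    resid = y - model.predict(Xk)
    n, k = Xk.shape
    return n * np.log(np.mean(resid ** 2)) + k * np.log(n)

# Select the truncation of coordination shells by minimizing BIC.
scores = []
for k in range(1, max_shells + 1):
    m = BayesianRidge().fit(X[:, :k], y)
    scores.append((bic(m, X[:, :k], y), k))
best_k = min(scores)[1]

model = BayesianRidge().fit(X[:, :best_k], y)
mean_pred, std_pred = model.predict(X[:, :best_k], return_std=True)  # predictive uncertainty
```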

On the marginal likelihood and cross-validation

Title On the marginal likelihood and cross-validation
Authors Edwin Fong, Chris Holmes
Abstract In Bayesian statistics, the marginal likelihood, also known as the evidence, is used to evaluate model fit as it quantifies the joint probability of the data under the prior. In contrast, non-Bayesian models are typically compared using cross-validation on held-out data, either through $k$-fold partitioning or leave-$p$-out subsampling. We show that the marginal likelihood is formally equivalent to exhaustive leave-$p$-out cross-validation averaged over all values of $p$ and all held-out test sets when using the log posterior predictive probability as the scoring rule. Moreover, the log posterior predictive is the only coherent scoring rule under data exchangeability. This offers new insight into the marginal likelihood and cross-validation and highlights the potential sensitivity of the marginal likelihood to the choice of the prior. We suggest an alternative approach using cumulative cross-validation following a preparatory training phase. Our work has connections to prequential analysis and intrinsic Bayes factors but is motivated through a different course.
Tasks
Published 2019-05-21
URL https://arxiv.org/abs/1905.08737v2
PDF https://arxiv.org/pdf/1905.08737v2.pdf
PWC https://paperswithcode.com/paper/on-the-marginal-likelihood-and-cross
Repo
Framework
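
The equivalence can be checked numerically in a conjugate toy model: by the chain rule, the log marginal likelihood equals the sum of log posterior predictive probabilities of each observation given the ones before it, for any ordering of the data; averaging over orderings and prefix sizes gives the leave-p-out interpretation. Below is a minimal Beta-Bernoulli check (assuming a Beta(a, b) prior), not the paper's general argument.

```python
import numpy as np
from math import lgamma

x = np.array([1, 0, 1, 1, 0, 1])   # Bernoulli observations
a, b = 1.0, 1.0                    # Beta prior

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

# Closed-form log marginal likelihood of the Beta-Bernoulli model.
s, n = x.sum(), len(x)
log_ml = log_beta(a + s, b + n - s) - log_beta(a, b)

# Sequential decomposition: sum of log posterior predictives along one data ordering.
log_seq, a_post, b_post = 0.0, a, b
for xi in x:
    p_one = a_post / (a_post + b_post)             # posterior predictive P(x_i = 1 | past)
    log_seq += np.log(p_one if xi == 1 else 1 - p_one)
    a_post += xi
    b_post += 1 - xi

assert np.isclose(log_ml, log_seq)   # identical, for any permutation of x
```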

Neural Voxel Renderer: Learning an Accurate and Controllable Rendering Tool

Title Neural Voxel Renderer: Learning an Accurate and Controllable Rendering Tool
Authors Konstantinos Rematas, Vittorio Ferrari
Abstract We present a neural rendering framework that maps a voxelized scene into a high quality image. Highly-textured objects and scene element interactions are realistically rendered by our method, despite having a rough representation as an input. Moreover, our approach allows controllable rendering: geometric and appearance modifications in the input are accurately propagated to the output. The user can move, rotate and scale an object, change its appearance and texture or modify the position of the light and all these edits are represented in the final rendering. We demonstrate the effectiveness of our approach by rendering scenes with varying appearance, from single color per object to complex, high-frequency textures. We show that our rerendering network can generate very detailed images that represent precisely the appearance of the input scene. Our experiments illustrate that our approach achieves more accurate image synthesis results compared to alternatives and can also handle low voxel grid resolutions. Finally, we show how our neural rendering framework can capture and faithfully render objects from real images and from a diverse set of classes.
Tasks Image Generation
Published 2019-12-10
URL https://arxiv.org/abs/1912.04591v1
PDF https://arxiv.org/pdf/1912.04591v1.pdf
PWC https://paperswithcode.com/paper/neural-voxel-renderer-learning-an-accurate
Repo
Framework

Planning for Goal-Oriented Dialogue Systems

Title Planning for Goal-Oriented Dialogue Systems
Authors Christian Muise, Tathagata Chakraborti, Shubham Agarwal, Ondrej Bajgar, Arunima Chaudhary, Luis A. Lastras-Montano, Josef Ondrej, Miroslav Vodolan, Charlie Wiecha
Abstract Generating complex multi-turn goal-oriented dialogue agents is a difficult problem that has seen a considerable focus from many leaders in the tech industry, including IBM, Google, Amazon, and Microsoft. This is in large part due to the rapidly growing market demand for dialogue agents capable of goal-oriented behaviour. Due to the business process nature of these conversations, end-to-end machine learning systems are generally not a viable option, as the generated dialogue agents must be deployable and verifiable on behalf of the businesses authoring them. In this work, we propose a paradigm shift in the creation of goal-oriented complex dialogue systems that dramatically eliminates the need for a designer to manually specify a dialogue tree, which nearly all current systems have to resort to when the interaction pattern falls outside standard patterns such as slot filling. We propose a declarative representation of the dialogue agent to be processed by state-of-the-art planning technology. Our proposed approach covers all aspects of the process; from model solicitation to the execution of the generated plans/dialogue agents. Along the way, we introduce novel planning encodings for declarative dialogue synthesis, a variety of interfaces for working with the specification as a dialogue architect, and a robust executor for generalized contingent plans. We have created prototype implementations of all components, and in this paper, we further demonstrate the resulting system empirically.
Tasks Goal-Oriented Dialogue Systems, Slot Filling
Published 2019-10-17
URL https://arxiv.org/abs/1910.08137v1
PDF https://arxiv.org/pdf/1910.08137v1.pdf
PWC https://paperswithcode.com/paper/planning-for-goal-oriented-dialogue-systems
Repo
Framework
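
The "declarative specification compiled into a plan" idea can be illustrated with a very small forward-search planner over boolean dialogue state. The action schema and goal below are hypothetical, and the search ignores the contingencies and nondeterminism that the paper's encodings and executor handle.

```python
from collections import deque

# Hypothetical declarative spec: dialogue actions with preconditions and effects.
actions = {
    "ask_origin":      {"pre": set(),                 "add": {"origin_known"}},
    "ask_destination": {"pre": set(),                 "add": {"destination_known"}},
    "ask_date":        {"pre": {"destination_known"}, "add": {"date_known"}},
    "book_trip":       {"pre": {"origin_known", "destination_known", "date_known"},
                        "add": {"trip_booked"}},
}
goal = {"trip_booked"}

def plan(initial_state):
    """Breadth-first search over dialogue states; returns a shortest action sequence."""
    queue = deque([(frozenset(initial_state), [])])
    seen = {frozenset(initial_state)}
    while queue:
        state, steps = queue.popleft()
        if goal <= state:
            return steps
        for name, act in actions.items():
            if act["pre"] <= state:
                nxt = frozenset(state | act["add"])
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return None

print(plan(set()))  # e.g. ['ask_origin', 'ask_destination', 'ask_date', 'book_trip']
```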

Quantum Expectation-Maximization for Gaussian Mixture Models

Title Quantum Expectation-Maximization for Gaussian Mixture Models
Authors Iordanis Kerenidis, Alessandro Luongo, Anupam Prakash
Abstract The Expectation-Maximization (EM) algorithm is a fundamental tool in unsupervised machine learning. It is often used as an efficient way to solve Maximum Likelihood (ML) estimation problems, especially for models with latent variables. It is also the algorithm of choice for fitting mixture models: generative models that represent unlabelled points originating from $k$ different processes as samples from $k$ multivariate distributions. In this work we define and use a quantum version of EM to fit a Gaussian Mixture Model. Given quantum access to a dataset of $n$ vectors of dimension $d$, our algorithm has convergence and precision guarantees similar to the classical algorithm, but the runtime is only polylogarithmic in the number of elements in the training set and polynomial in other parameters, such as the dimension of the feature space and the number of components in the mixture. We generalize the algorithm further in two directions. First, we show how to fit any mixture model of probability distributions in the exponential family. Then, we show how to use this algorithm to compute the Maximum a Posteriori (MAP) estimate of a mixture model: the Bayesian approach to likelihood estimation problems. We discuss the performance of the algorithm on datasets that are expected to be classified successfully by these algorithms, arguing that in those cases we can give strong guarantees on the runtime.
Tasks
Published 2019-08-19
URL https://arxiv.org/abs/1908.06657v1
PDF https://arxiv.org/pdf/1908.06657v1.pdf
PWC https://paperswithcode.com/paper/quantum-expectation-maximization-for-gaussian
Repo
Framework
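
For reference, the classical EM iteration that the quantum algorithm accelerates is sketched below in numpy: the E-step computes responsibilities, the M-step re-estimates the weights, means, and covariances. The quantum speedup (polylogarithmic dependence on the number of samples) is of course not reflected in this classical sketch.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy data: two Gaussian clusters in 2-D.
X = np.vstack([rng.normal([0, 0], 1.0, size=(100, 2)),
               rng.normal([4, 4], 1.0, size=(100, 2))])
n, d, k = X.shape[0], X.shape[1], 2

def gaussian_pdf(X, mu, cov):
    diff = X - mu
    inv = np.linalg.inv(cov)
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    return np.exp(-0.5 * np.sum(diff @ inv * diff, axis=1)) / norm

# Initialize mixture parameters.
pi = np.full(k, 1.0 / k)
mu = X[rng.choice(n, k, replace=False)]
cov = np.array([np.eye(d)] * k)

for it in range(50):
    # E-step: responsibilities r[i, j] = P(component j | x_i).
    r = np.stack([pi[j] * gaussian_pdf(X, mu[j], cov[j]) for j in range(k)], axis=1)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: update weights, means, covariances.
    nk = r.sum(axis=0)
    pi = nk / n
    mu = (r.T @ X) / nk[:, None]
    for j in range(k):
        diff = X - mu[j]
        cov[j] = (r[:, j, None] * diff).T @ diff / nk[j] + 1e-6 * np.eye(d)
```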

An Attention-based Recurrent Convolutional Network for Vehicle Taillight Recognition

Title An Attention-based Recurrent Convolutional Network for Vehicle Taillight Recognition
Authors Kuan-Hui Lee, Takaaki Tagawa, Jia-En M. Pan, Adrien Gaidon, Bertrand Douillard
Abstract Vehicle taillight recognition is an important application for automated driving, especially for intent prediction of ado vehicles and trajectory planning of the ego vehicle. In this work, we propose an end-to-end deep learning framework to recognize taillights, i.e. rear turn and brake signals, from a sequence of images. The proposed method starts with a Convolutional Neural Network (CNN) to extract spatial features, and then applies a Long Short-Term Memory network (LSTM) to learn temporal dependencies. Furthermore, we integrate attention models in both spatial and temporal domains, where the attention models learn to selectively focus on both spatial and temporal features. Our method is able to outperform the state of the art in terms of accuracy on the UC Merced Vehicle Rear Signal Dataset, demonstrating the effectiveness of attention models for vehicle taillight recognition.
Tasks
Published 2019-06-09
URL https://arxiv.org/abs/1906.03683v1
PDF https://arxiv.org/pdf/1906.03683v1.pdf
PWC https://paperswithcode.com/paper/an-attention-based-recurrent-convolutional
Repo
Framework
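
A compact PyTorch sketch of the CNN-to-LSTM-with-temporal-attention structure described above is given below. The layer sizes, the single temporal attention head, and the absence of the spatial attention branch are simplifications; this is not the authors' architecture.

```python
import torch
import torch.nn as nn

class TaillightNet(nn.Module):
    """Per-frame CNN features -> LSTM over time -> attention-weighted pooling -> classifier."""
    def __init__(self, num_classes=4, feat_dim=64, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)       # temporal attention scores
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clips):                  # clips: (batch, time, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).flatten(1)        # (b*t, feat_dim)
        feats = feats.view(b, t, -1)
        h, _ = self.lstm(feats)                                 # (b, t, hidden)
        alpha = torch.softmax(self.attn(h).squeeze(-1), dim=1)  # (b, t) attention weights
        pooled = (alpha.unsqueeze(-1) * h).sum(dim=1)           # attention-weighted summary
        return self.head(pooled)

logits = TaillightNet()(torch.randn(2, 8, 3, 64, 64))   # 2 clips of 8 frames each
```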

Direct Estimation of Difference Between Structural Equation Models in High Dimensions

Title Direct Estimation of Difference Between Structural Equation Models in High Dimensions
Authors Asish Ghoshal, Jean Honorio
Abstract Discovering cause-effect relationships between variables from observational data is a fundamental challenge in many scientific disciplines. However, in many situations it is desirable to directly estimate the change in causal relationships across two different conditions, e.g., estimating the change in genetic expression across healthy and diseased subjects can help isolate genetic factors behind the disease. This paper focuses on the problem of directly estimating the structural difference between two causal DAGs, having the same topological ordering, given two sets of samples drawn from the individual DAGs. We present an algorithm that can recover the difference-DAG in $O(d \log p)$ samples, where $d$ is related to the number of edges in the difference-DAG. We also show that any method requires at least $\Omega(d \log p/d)$ samples to learn difference DAGs with at most $d$ parents per node. We validate our theoretical results with synthetic experiments and show that our method out-performs the state-of-the-art.
Tasks
Published 2019-06-28
URL https://arxiv.org/abs/1906.12024v1
PDF https://arxiv.org/pdf/1906.12024v1.pdf
PWC https://paperswithcode.com/paper/direct-estimation-of-difference-between
Repo
Framework
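
As a rough illustration of the quantity being targeted (not the paper's algorithm, which exploits the shared topological ordering and comes with the stated sample-complexity guarantees), the sketch below fits node-wise lasso regressions in each condition and thresholds the coefficient differences to flag edges whose strength changed between the two linear SEMs.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(8)
p, n = 6, 500

def sample_sem(B, n):
    """Ancestral sampling of a linear SEM with strictly upper-triangular coefficients B."""
    X = np.zeros((n, p))
    for j in range(p):
        X[:, j] = X @ B[:, j] + rng.normal(size=n)
    return X

B1 = np.triu(rng.normal(size=(p, p)) * (rng.random((p, p)) < 0.3), k=1)
B2 = B1.copy()
B2[0, 3] += 1.5                          # the single structural change between conditions

X1, X2 = sample_sem(B1, n), sample_sem(B2, n)

diff = np.zeros((p, p))
for j in range(1, p):                    # regress each node on its predecessors, per condition
    b1 = Lasso(alpha=0.05).fit(X1[:, :j], X1[:, j]).coef_
    b2 = Lasso(alpha=0.05).fit(X2[:, :j], X2[:, j]).coef_
    diff[:j, j] = b2 - b1

changed_edges = np.argwhere(np.abs(diff) > 0.5)   # should flag the (0, 3) edge change
```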

Learning Compositional Representations of Interacting Systems with Restricted Boltzmann Machines: Comparative Study of Lattice Proteins

Title Learning Compositional Representations of Interacting Systems with Restricted Boltzmann Machines: Comparative Study of Lattice Proteins
Authors Jérôme Tubiana, Simona Cocco, Rémi Monasson
Abstract A Restricted Boltzmann Machine (RBM) is an unsupervised machine-learning bipartite graphical model that jointly learns a probability distribution over data and extracts their relevant statistical features. As such, RBM were recently proposed for characterizing the patterns of coevolution between amino acids in protein sequences and for designing new sequences. Here, we study how the nature of the features learned by RBM changes with its defining parameters, such as the dimensionality of the representations (size of the hidden layer) and the sparsity of the features. We show that for adequate values of these parameters, RBM operate in a so-called compositional phase in which visible configurations sampled from the RBM are obtained by recombining these features. We then compare the performance of RBM with other standard representation learning algorithms, including Principal or Independent Component Analysis, autoencoders (AE), variational auto-encoders (VAE), and their sparse variants. We show that RBM, due to the stochastic mapping between data configurations and representations, better capture the underlying interactions in the system and are significantly more robust with respect to sample size than deterministic methods such as PCA or ICA. In addition, this stochastic mapping is not prescribed a priori as in VAE, but learned from data, which allows RBM to show good performance even with shallow architectures. All numerical results are illustrated on synthetic lattice-protein data, that share similar statistical features with real protein sequences, and for which ground-truth interactions are known.
Tasks Representation Learning
Published 2019-02-18
URL http://arxiv.org/abs/1902.06495v1
PDF http://arxiv.org/pdf/1902.06495v1.pdf
PWC https://paperswithcode.com/paper/learning-compositional-representations-of
Repo
Framework
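
A minimal example of training an RBM on binary, lattice-protein-like sequence data and inspecting the learned features can be written with scikit-learn's BernoulliRBM. The synthetic data below just plants two correlated "motifs" as a stand-in for real lattice-protein sequences, and the hidden-layer sizes and sparsity regimes the paper studies are not explored here.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(9)

# Synthetic binary "sequences": 27 sites, with two planted correlated motifs.
n_samples, n_sites = 2000, 27
X = (rng.random((n_samples, n_sites)) < 0.1).astype(float)
motif_a, motif_b = [0, 1, 2, 3], [10, 11, 12]
on_a = rng.random(n_samples) < 0.5
on_b = rng.random(n_samples) < 0.5
X[np.ix_(on_a, motif_a)] = 1.0
X[np.ix_(on_b, motif_b)] = 1.0

# Train the RBM and look at which visible sites each hidden unit couples to.
rbm = BernoulliRBM(n_components=8, learning_rate=0.05, n_iter=30, random_state=0)
rbm.fit(X)
weights = rbm.components_                 # (n_hidden, n_visible) coupling matrix
top_sites = np.argsort(-np.abs(weights), axis=1)[:, :4]   # strongest sites per hidden unit
hidden_repr = rbm.transform(X[:5])        # hidden-unit activation probabilities
```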