Paper Group ANR 843
On Privacy Protection of Latent Dirichlet Allocation Model Training. BAGH – Comparative study. Comparison of statistical post-processing methods for probabilistic NWP forecasts of solar radiation. GLADAS: Gesture Learning for Advanced Driver Assistance Systems. Label Mapping Neural Networks with Response Consolidation for Class Incremental Learning …
On Privacy Protection of Latent Dirichlet Allocation Model Training
Title | On Privacy Protection of Latent Dirichlet Allocation Model Training |
Authors | Fangyuan Zhao, Xuebin Ren, Shusen Yang, Xinyu Yang |
Abstract | Latent Dirichlet Allocation (LDA) is a popular topic modeling technique for discovery of hidden semantic architecture of text datasets, and plays a fundamental role in many machine learning applications. However, like many other machine learning algorithms, the process of training an LDA model may leak sensitive information from the training datasets and pose significant privacy risks. To mitigate the privacy issues in LDA, we focus on studying privacy-preserving algorithms for LDA model training in this paper. In particular, we first develop a privacy monitoring algorithm to investigate the privacy guarantee obtained from the inherent randomness of the Collapsed Gibbs Sampling (CGS) process in a typical LDA training algorithm on centralized curated datasets. Then, we further propose a locally private LDA training algorithm on crowdsourced data to provide local differential privacy for individual data contributors. The experimental results on real-world datasets demonstrate the effectiveness of our proposed algorithms. |
Tasks | |
Published | 2019-06-04 |
URL | https://arxiv.org/abs/1906.01178v2 |
https://arxiv.org/pdf/1906.01178v2.pdf | |
PWC | https://paperswithcode.com/paper/on-privacy-protection-of-latent-dirichlet |
Repo | |
Framework | |
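A minimal sketch of the Collapsed Gibbs Sampling step whose inherent randomness the privacy monitoring algorithm above analyzes, assuming plain count arrays and symmetric priors; the monitoring algorithm itself and the locally private variant are not reproduced here.

```python
import numpy as np

def resample_topic(d, w, old_k, n_dk, n_kw, n_k, alpha, beta, rng):
    """One collapsed Gibbs sampling step for token w in document d.

    n_dk[d, k]: topic counts per document, n_kw[k, w]: word counts per topic,
    n_k[k]: total tokens per topic. The current token is removed from the
    counts before the draw and added back afterwards, as in standard CGS.
    """
    V = n_kw.shape[1]
    # Remove the token's current assignment from the counts.
    n_dk[d, old_k] -= 1
    n_kw[old_k, w] -= 1
    n_k[old_k] -= 1
    # Full conditional p(z = k | rest), up to a normalizing constant.
    probs = (n_dk[d] + alpha) * (n_kw[:, w] + beta) / (n_k + V * beta)
    probs /= probs.sum()
    new_k = rng.choice(len(probs), p=probs)
    # Add the token back under its newly sampled topic.
    n_dk[d, new_k] += 1
    n_kw[new_k, w] += 1
    n_k[new_k] += 1
    return new_k
```

The multinomial draw at the end is the source of randomness whose privacy guarantee the paper quantifies; the locally private variant additionally perturbs the statistics each data contributor reports.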
BAGH – Comparative study
Title | BAGH – Comparative study |
Authors | B. Kamala |
Abstract | Process mining is an emerging research trend of the last decade that focuses on analyzing processes using event logs and data. The rising integration of information systems for the operation of business processes provides the basis for innovative data analysis approaches. Process mining has a strong relationship with data mining, and it thus bridges business intelligence approaches and business process management. It focuses on end-to-end processes and is made possible by the growing availability of event data and new process discovery and conformance checking techniques. Process mining aims to discover, monitor and improve real processes by extracting knowledge from event logs readily available in today's information systems. The discovered process models can be used for a variety of analysis purposes. Many companies have adopted Process-Aware Information Systems (PAIS) for supporting their business processes in some form. These systems typically log events related to the actual business process executions. Proper analysis of PAIS execution logs can yield important knowledge and help organizations improve the quality of their services. This paper reviews and compares various process mining algorithms based on their input parameters, the techniques used, and the output generated by them. |
Tasks | |
Published | 2019-09-04 |
URL | https://arxiv.org/abs/1909.06159v1 |
https://arxiv.org/pdf/1909.06159v1.pdf | |
PWC | https://paperswithcode.com/paper/bagh-comparative-study |
Repo | |
Framework | |
Comparison of statistical post-processing methods for probabilistic NWP forecasts of solar radiation
Title | Comparison of statistical post-processing methods for probabilistic NWP forecasts of solar radiation |
Authors | Kilian Bakker, Kirien Whan, Wouter Knap, Maurice Schmeits |
Abstract | The increased usage of solar energy places additional importance on forecasts of solar radiation. Solar panel power production is primarily driven by the amount of solar radiation, and it is therefore important to have accurate forecasts of solar radiation. Accurate forecasts that also give information on the forecast uncertainties can help users of solar energy make better solar-radiation-based decisions related to the stability of the electrical grid. To achieve this, we apply statistical post-processing techniques that determine relationships between observations of global radiation (made within the KNMI network of automatic weather stations in the Netherlands) and forecasts of various meteorological variables from the numerical weather prediction (NWP) model HARMONIE-AROME (HA) and the atmospheric composition model CAMS. Those relationships are used to produce probabilistic forecasts of global radiation. We compare seven statistical post-processing methods, consisting of two parametric and five non-parametric methods. We find that all methods are able to generate probabilistic forecasts that improve the raw global radiation forecast from HA according to the root mean squared error (on the median) and the potential economic value. Additionally, we show how important the predictors are in the different regression methods. We also compare the regression methods using various probabilistic scoring metrics, namely the continuous ranked probability skill score, the Brier skill score and reliability diagrams. We find that quantile regression and generalized random forests generally perform best. In (near) clear sky conditions the non-parametric methods have more skill than the parametric ones. |
Tasks | |
Published | 2019-04-15 |
URL | https://arxiv.org/abs/1904.07192v2 |
https://arxiv.org/pdf/1904.07192v2.pdf | |
PWC | https://paperswithcode.com/paper/comparison-of-statistical-post-processing |
Repo | |
Framework | |
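Quantile regression is one of the seven post-processing methods compared above. The sketch below shows how NWP predictors could be mapped to radiation quantiles using gradient boosting with the quantile (pinball) loss; the predictors, data, and settings are placeholders rather than the paper's HA/CAMS/KNMI setup.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# Placeholder predictors (e.g. clear-sky index, cloud cover, humidity) and
# synthetic "observed" global radiation; real inputs would come from HA/CAMS
# forecasts and KNMI station observations.
X = rng.normal(size=(2000, 3))
y = 400 + 150 * X[:, 0] - 60 * X[:, 1] + 30 * rng.normal(size=2000)

quantiles = [0.05, 0.5, 0.95]
models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q,
                                 n_estimators=200, max_depth=3).fit(X, y)
    for q in quantiles
}

# Probabilistic forecast for a few new cases: one prediction per quantile.
x_new = X[:5]
forecast = {q: np.round(m.predict(x_new), 1) for q, m in models.items()}
print(forecast)
```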
GLADAS: Gesture Learning for Advanced Driver Assistance Systems
Title | GLADAS: Gesture Learning for Advanced Driver Assistance Systems |
Authors | Ethan Shaotran, Jonathan J. Cruz, Vijay Janapa Reddi |
Abstract | Human-computer interaction (HCI) is crucial for the safety of lives as autonomous vehicles (AVs) become commonplace. Yet, little effort has been put toward ensuring that AVs understand humans on the road. In this paper, we present GLADAS, a simulator-based research platform designed to teach AVs to understand pedestrian hand gestures. GLADAS supports the training, testing, and validation of deep learning-based self-driving car gesture recognition systems. We focus on gestures as they are a primordial (i.e., natural and common) way to interact with cars. To the best of our knowledge, GLADAS is the first system of its kind designed to provide an infrastructure for further research into human-AV interaction. We also develop a hand gesture recognition algorithm for self-driving cars, using GLADAS to evaluate its performance. Our results show that an AV understands human gestures 85.91% of the time, reinforcing the need for further research into human-AV interaction. |
Tasks | Autonomous Vehicles, Gesture Recognition, Hand Gesture Recognition, Hand-Gesture Recognition, Self-Driving Cars |
Published | 2019-10-02 |
URL | https://arxiv.org/abs/1910.04695v1 |
https://arxiv.org/pdf/1910.04695v1.pdf | |
PWC | https://paperswithcode.com/paper/gladas-gesture-learning-for-advanced-driver |
Repo | |
Framework | |
Label Mapping Neural Networks with Response Consolidation for Class Incremental Learning
Title | Label Mapping Neural Networks with Response Consolidation for Class Incremental Learning |
Authors | Xu Zhang, Yang Yao, Baile Xu, Lekun Mao, Furao Shen, Jian Zhao, Qingwei Lin |
Abstract | Class incremental learning refers to a special multi-class classification task in which the number of classes is not fixed but increases with the continual arrival of new data. Existing research has mainly focused on solving the catastrophic forgetting problem in class incremental learning. To this end, however, these models still require the old classes to be cached in auxiliary data structures or models, which is inefficient in space or time. In this paper, we are the first to discuss the difficulty of learning without the support of old classes in class incremental learning, which we call the softmax suppression problem. To address these challenges, we develop a new model named Label Mapping with Response Consolidation (LMRC), which does not need to access the old classes anymore. We propose the Label Mapping algorithm, combined with a multi-head neural network, to mitigate the softmax suppression problem, and propose the Response Consolidation method to overcome the catastrophic forgetting problem. Experimental results on benchmark datasets show that our proposed method achieves much better performance than related methods in different scenarios. |
Tasks | |
Published | 2019-05-20 |
URL | https://arxiv.org/abs/1905.07835v1 |
https://arxiv.org/pdf/1905.07835v1.pdf | |
PWC | https://paperswithcode.com/paper/label-mapping-neural-networks-with-response |
Repo | |
Framework | |
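A generic sketch of a multi-head classifier of the kind referred to in the abstract above, where each class increment gets its own output head; the actual Label Mapping and Response Consolidation procedures of LMRC are not reproduced here.

```python
import torch
import torch.nn as nn

class MultiHeadIncrementalNet(nn.Module):
    """Shared feature extractor with one output head per class increment."""

    def __init__(self, in_dim=784, feature_dim=64):
        super().__init__()
        self.feature_dim = feature_dim
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, feature_dim), nn.ReLU(),
        )
        self.heads = nn.ModuleList()

    def add_head(self, num_new_classes):
        # A new head is attached for each arriving group of classes.
        self.heads.append(nn.Linear(self.feature_dim, num_new_classes))

    def forward(self, x):
        feats = self.backbone(x)
        # Each head scores only its own classes, so old and new classes are
        # never forced through a single softmax trained only on new data.
        return torch.cat([head(feats) for head in self.heads], dim=1)

net = MultiHeadIncrementalNet()
net.add_head(5)                      # first task: 5 classes
net.add_head(5)                      # second task: 5 more classes
logits = net(torch.randn(8, 784))
print(logits.shape)                  # torch.Size([8, 10])
```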
Parameter Estimation in Adaptive Control of Time-Varying Systems Under a Range of Excitation Conditions
Title | Parameter Estimation in Adaptive Control of Time-Varying Systems Under a Range of Excitation Conditions |
Authors | Joseph E. Gaudio, Anuradha M. Annaswamy, Eugene Lavretsky, Michael A. Bolender |
Abstract | This paper presents a new parameter estimation algorithm for the adaptive control of a class of time-varying plants. The main feature of this algorithm is a matrix of time-varying learning rates, which enables parameter estimation error trajectories to tend exponentially fast towards a compact set whenever excitation conditions are satisfied. This algorithm is employed in a large class of problems where unknown parameters are present and time-varying. It is shown that this algorithm guarantees global boundedness of the state and parameter errors of the system, and avoids an often-used filtering approach for constructing key regressor signals. In addition, intervals of time over which these errors tend exponentially fast toward a compact set are provided, in the presence of both finite and persistent excitation. A projection operator is used to ensure the boundedness of the learning rate matrix, as compared to a time-varying forgetting factor. Numerical simulations are provided to complement the theoretical analysis. |
Tasks | |
Published | 2019-11-10 |
URL | https://arxiv.org/abs/1911.03810v2 |
https://arxiv.org/pdf/1911.03810v2.pdf | |
PWC | https://paperswithcode.com/paper/parameter-estimation-in-adaptive-control-of |
Repo | |
Framework | |
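The sketch below is not the paper's algorithm; it is a standard discrete-time recursive least-squares estimator with a forgetting factor, plus a crude norm cap standing in for the projection operator, to illustrate how a matrix of time-varying learning rates adapts under excitation.

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = np.array([2.0, -1.0])   # unknown parameters (constant in this toy example)
theta_hat = np.zeros(2)
Gamma = 10.0 * np.eye(2)             # time-varying gain (learning-rate) matrix
lam, gamma_max = 0.98, 100.0         # forgetting factor, bound on Gamma

for t in range(500):
    phi = rng.normal(size=2)                      # regressor (persistently exciting here)
    y = theta_true @ phi + 0.05 * rng.normal()    # noisy measurement
    e = y - theta_hat @ phi                       # prediction error
    g = Gamma @ phi / (lam + phi @ Gamma @ phi)   # time-varying gain vector
    theta_hat = theta_hat + g * e
    Gamma = (Gamma - np.outer(g, phi @ Gamma)) / lam
    # Crude stand-in for the projection operator: keep Gamma bounded.
    if np.linalg.norm(Gamma, 2) > gamma_max:
        Gamma *= gamma_max / np.linalg.norm(Gamma, 2)

print(np.round(theta_hat, 3))        # approaches theta_true
```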
Fast Calculation of Probabilistic Power Flow: A Model-based Deep Learning Approach
Title | Fast Calculation of Probabilistic Power Flow: A Model-based Deep Learning Approach |
Authors | Yan Yang, Zhifang Yang, Juan Yu, Baosen Zhang |
Abstract | Probabilistic power flow (PPF) plays a critical role in power system analysis. However, the high computational burden makes it challenging for the practical implementation of PPF. This paper proposes a model-based deep learning approach to overcome the computational challenge. A deep neural network (DNN) is used to approximate the power flow calculation and is trained according to the physical power flow equations to improve its learning ability. The training process consists of several steps: 1) the branch flows are added into the objective function of the DNN as a penalty term, which improves the approximation accuracy of the DNN; 2) the gradients used in the back propagation process are simplified according to the physical characteristics of the transmission grid, which accelerates the training speed while maintaining effective guidance of the physical model; and 3) an improved initialization method for the DNN parameters is proposed to improve the convergence speed. The simulation results demonstrate the accuracy and efficiency of the proposed method in standard IEEE and utility benchmark systems. |
Tasks | |
Published | 2019-06-14 |
URL | https://arxiv.org/abs/1906.06017v2 |
https://arxiv.org/pdf/1906.06017v2.pdf | |
PWC | https://paperswithcode.com/paper/fast-calculation-of-probabilistic-power-flow |
Repo | |
Framework | |
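A minimal sketch of the physics-penalty idea in step 1, assuming a toy DC power flow model (injections P = B·theta) instead of the paper's full AC formulation; the simplified gradients of step 2 and the initialization scheme of step 3 are not reproduced.

```python
import torch
import torch.nn as nn

n_bus = 4
B = torch.tensor([[ 3., -1., -1., -1.],
                  [-1.,  2., -1.,  0.],
                  [-1., -1.,  3., -1.],
                  [-1.,  0., -1.,  2.]])   # toy susceptance (Laplacian-like) matrix

net = nn.Sequential(nn.Linear(n_bus, 64), nn.ReLU(), nn.Linear(64, n_bus))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def physics_residual(theta_pred, P):
    # DC power flow balance: B @ theta should reproduce the injections P.
    return theta_pred @ B.T - P

# Synthetic training data; the tiny regularization handles the reference-bus
# singularity of the toy B when generating angle labels.
P_train = torch.randn(256, n_bus)
theta_label = torch.linalg.solve(B + 1e-3 * torch.eye(n_bus), P_train.T).T

for _ in range(200):
    theta_pred = net(P_train)
    data_loss = ((theta_pred - theta_label) ** 2).mean()
    phys_loss = (physics_residual(theta_pred, P_train) ** 2).mean()
    loss = data_loss + 0.1 * phys_loss   # power flow equations as a penalty term (step 1)
    opt.zero_grad(); loss.backward(); opt.step()
```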
Don’t Forget Your Teacher: A Corrective Reinforcement Learning Framework
Title | Don’t Forget Your Teacher: A Corrective Reinforcement Learning Framework |
Authors | Mohammadreza Nazari, Majid Jahani, Lawrence V. Snyder, Martin Takáč |
Abstract | Although reinforcement learning (RL) can provide reliable solutions in many settings, practitioners are often wary of the discrepancies between the RL solution and their status quo procedures. Therefore, they may be reluctant to adapt to the novel way of executing tasks proposed by RL. On the other hand, many real-world problems require relatively small adjustments from the status quo policies to achieve improved performance. Therefore, we propose a student-teacher RL mechanism in which the RL agent (the “student”) learns to maximize its reward, subject to a constraint that bounds the difference between the RL policy and the “teacher” policy. The teacher can be another RL policy (e.g., trained under a slightly different setting), the status quo policy, or any other exogenous policy. We formulate this problem using a stochastic optimization model and solve it using a primal-dual policy gradient algorithm. We prove that the policy is asymptotically optimal. However, a naive implementation suffers from high variance and convergence to a stochastic optimal policy. With a few practical adjustments to address these issues, our numerical experiments confirm the effectiveness of our proposed method in multiple GridWorld scenarios. |
Tasks | Stochastic Optimization |
Published | 2019-05-30 |
URL | https://arxiv.org/abs/1905.13562v1 |
https://arxiv.org/pdf/1905.13562v1.pdf | |
PWC | https://paperswithcode.com/paper/dont-forget-your-teacher-a-corrective |
Repo | |
Framework | |
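The constrained objective above (maximize reward subject to a bound on the divergence from the teacher) is typically handled with a Lagrangian. A minimal sketch of one primal-dual policy-gradient step, assuming a discrete-action PyTorch policy and a fixed teacher action distribution; it is not the paper's exact algorithm.

```python
import torch
import torch.nn.functional as F

def primal_dual_step(policy, optimizer, obs, actions, advantages,
                     teacher_probs, lam, delta=0.05, lam_lr=0.01):
    """One primal-dual policy-gradient step with a teacher-divergence constraint.

    Primal: descend  -J(theta) + lam * (KL(pi || teacher) - delta).
    Dual:   lam <- max(0, lam + lam_lr * (KL - delta)).
    """
    logits = policy(obs)
    log_probs = F.log_softmax(logits, dim=-1)
    logp_act = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)

    # Mean KL divergence between the student policy and the teacher.
    kl = (log_probs.exp() * (log_probs - teacher_probs.log())).sum(-1).mean()
    pg_loss = -(advantages * logp_act).mean()
    loss = pg_loss + lam * (kl - delta)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Dual ascent on the multiplier, using the detached constraint value.
    lam = max(0.0, lam + lam_lr * (kl.item() - delta))
    return lam
```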
A view of Estimation of Distribution Algorithms through the lens of Expectation-Maximization
Title | A view of Estimation of Distribution Algorithms through the lens of Expectation-Maximization |
Authors | David H. Brookes, Akosua Busia, Clara Fannjiang, Kevin Murphy, Jennifer Listgarten |
Abstract | We show that a large class of Estimation of Distribution Algorithms (EDAs), including, but not limited to, Covariance Matrix Adaptation, can be written exactly as a Monte Carlo Expectation-Maximization (EM) algorithm, and as exact EM in the limit of infinite samples. Because EM sits on a rigorous statistical foundation and has been thoroughly analyzed, this connection provides a new coherent framework with which to reason about EDAs—one complementary to that of Information Geometry Optimization (IGO). To illustrate the potential benefits of such a connection, we leverage it to (i) formally show that this class of EDAs can be seen as approximating natural gradient descent, and (ii) leverage a rigorously-derived adaptive EM-based algorithm for EDAs, demonstrating potentially advantageous directions for adaptive hybrid approaches. |
Tasks | Stochastic Optimization |
Published | 2019-05-24 |
URL | https://arxiv.org/abs/1905.10474v8 |
https://arxiv.org/pdf/1905.10474v8.pdf | |
PWC | https://paperswithcode.com/paper/a-view-of-estimation-of-distribution |
Repo | |
Framework | |
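A minimal Gaussian EDA in the family discussed above: sample from a search distribution, select the best samples (a Monte Carlo E-step under truncation weighting), and refit the distribution to them (an M-step). The objective and selection rule are illustrative and not tied to the paper's experiments.

```python
import numpy as np

def gaussian_eda(f, dim=5, pop=100, elite_frac=0.2, iters=50, seed=0):
    """Estimation of Distribution Algorithm with an axis-aligned Gaussian model."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    for _ in range(iters):
        samples = mu + sigma * rng.normal(size=(pop, dim))    # sample the search model
        scores = np.array([f(x) for x in samples])
        elite = samples[np.argsort(scores)[: int(elite_frac * pop)]]
        # "M-step": refit the search distribution to the selected samples.
        mu = elite.mean(axis=0)
        sigma = elite.std(axis=0) + 1e-8
    return mu

# Example: minimize a shifted sphere function.
best = gaussian_eda(lambda x: np.sum((x - 3.0) ** 2))
print(np.round(best, 2))   # close to [3, 3, 3, 3, 3]
```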
On the Semantic Interpretability of Artificial Intelligence Models
Title | On the Semantic Interpretability of Artificial Intelligence Models |
Authors | Vivian S. Silva, André Freitas, Siegfried Handschuh |
Abstract | Artificial Intelligence models are becoming increasingly more powerful and accurate, supporting or even replacing humans’ decision making. But with increased power and accuracy also comes higher complexity, making it hard for users to understand how the model works and what the reasons behind its predictions are. Humans must explain and justify their decisions, and so do the AI models supporting them in this process, making semantic interpretability an emerging field of study. In this work, we look at interpretability from a broader point of view, going beyond the machine learning scope and covering different AI fields such as distributional semantics and fuzzy logic, among others. We examine and classify the models according to their nature and also based on how they introduce interpretability features, analyzing how each approach affects the final users and pointing to gaps that still need to be addressed to provide more human-centered interpretability solutions. |
Tasks | Decision Making |
Published | 2019-07-09 |
URL | https://arxiv.org/abs/1907.04105v1 |
https://arxiv.org/pdf/1907.04105v1.pdf | |
PWC | https://paperswithcode.com/paper/on-the-semantic-interpretability-of |
Repo | |
Framework | |
An Efficient Method of Detection and Recognition in Remote Sensing Image Based on multi-angle Region of Interests
Title | An Efficient Method of Detection and Recognition in Remote Sensing Image Based on multi-angle Region of Interests |
Authors | Hongyu Wang, Wei Liang, Guangcun Shan |
Abstract | Presently, deep learning technology has been widely used in the field of image recognition. However, it mainly aims at the recognition and detection of ordinary pictures and common scenes. As special images, remote sensing images have different shooting angles and shooting methods compared with ordinary ones, which makes them play an irreplaceable role in some areas. In this paper, a new model for object detection and recognition in remote sensing images is proposed, based on a deep convolutional neural network that provides multi-level information about images, combined with an RPN (Region Proposal Network) for generating multi-angle ROIs (Regions of Interest). In the experiments, the proposed model achieves better results than traditional methods, which demonstrates that it has great potential for application in remote sensing image recognition. |
Tasks | Object Detection |
Published | 2019-07-22 |
URL | https://arxiv.org/abs/1907.09320v1 |
https://arxiv.org/pdf/1907.09320v1.pdf | |
PWC | https://paperswithcode.com/paper/an-efficient-method-of-detection-and |
Repo | |
Framework | |
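A small sketch of what multi-angle ROIs can look like at the anchor-generation stage: standard RPN anchors extended with a set of rotation angles. The scales, ratios, and angles are placeholder values, and the detection network itself is not shown.

```python
import numpy as np

def multi_angle_anchors(cx, cy, scales=(32, 64, 128),
                        ratios=(0.5, 1.0, 2.0),
                        angles_deg=(0, 45, 90, 135)):
    """Return rotated anchors (cx, cy, w, h, angle) centered at one location."""
    anchors = []
    for s in scales:
        for r in ratios:
            w, h = s * np.sqrt(r), s / np.sqrt(r)   # keep area s**2, aspect ratio r
            for a in angles_deg:
                anchors.append((cx, cy, w, h, np.deg2rad(a)))
    return np.array(anchors)

print(multi_angle_anchors(100, 100).shape)   # (36, 5): 3 scales x 3 ratios x 4 angles
```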
Issues concerning realizability of Blackwell optimal policies in reinforcement learning
Title | Issues concerning realizability of Blackwell optimal policies in reinforcement learning |
Authors | Nicholas Denis |
Abstract | N-discount optimality was introduced as a hierarchical form of policy- and value-function optimality, with Blackwell optimality lying at the top level of the hierarchy (Veinott, 1969; Blackwell, 1962). We formalize notions of myopic discount factors, value functions and policies in terms of Blackwell optimality in MDPs, and we provide a novel concept of regret, called Blackwell regret, which measures the regret compared to a Blackwell optimal policy. Our main analysis focuses on long horizon MDPs with sparse rewards. We show that selecting the discount factor under which zero Blackwell regret can be achieved becomes arbitrarily hard. Moreover, even with oracle knowledge of such a discount factor that can realize a Blackwell regret-free value function, an $\epsilon$-Blackwell optimal value function may not even be gain optimal. Difficulties associated with this class of problems are discussed, and the notion of a policy gap is defined as the difference in expected return between a given policy and any other policy that differs at that state; we prove certain properties related to this gap. Finally, we provide experimental results that further support our theoretical results. |
Tasks | |
Published | 2019-05-20 |
URL | https://arxiv.org/abs/1905.08293v1 |
https://arxiv.org/pdf/1905.08293v1.pdf | |
PWC | https://paperswithcode.com/paper/issues-concerning-realizability-of-blackwell |
Repo | |
Framework | |
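For reference, the standard definitions behind the abstract, with a regret notion in the spirit of the paper's Blackwell regret (the paper's exact definition may differ):

```latex
% Discounted value of a policy \pi:
V^{\pi}_{\gamma}(s) \;=\; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty}\gamma^{t} r_{t}\,\middle|\, s_{0}=s\right].

% Blackwell optimality: \pi^{*} is Blackwell optimal if there exists
% \bar{\gamma}\in[0,1) such that for all \gamma\in[\bar{\gamma},1),
% all states s, and all policies \pi,
V^{\pi^{*}}_{\gamma}(s) \;\ge\; V^{\pi}_{\gamma}(s).

% Regret of \pi at discount \gamma, relative to a Blackwell optimal \pi^{*}:
\mathrm{Reg}_{B}(\pi,\gamma) \;=\; V^{\pi^{*}}_{\gamma}(s_{0}) - V^{\pi}_{\gamma}(s_{0}).
```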
The coupling effect of Lipschitz regularization in deep neural networks
Title | The coupling effect of Lipschitz regularization in deep neural networks |
Authors | Nicolas Couellan |
Abstract | We investigate the robustness of deep feed-forward neural networks when input data are subject to random uncertainties. More specifically, we consider regularization of the network by its Lipschitz constant and emphasize its role. We highlight the fact that this regularization is not only a way to control the magnitude of the weights but also has a coupling effect on the network weights across the layers. We claim, and show evidence on a dataset, that this coupling effect brings a tradeoff between robustness and expressiveness of the network. This suggests that Lipschitz regularization should be carefully implemented so as to maintain coupling across layers. |
Tasks | |
Published | 2019-04-12 |
URL | http://arxiv.org/abs/1904.06253v1 |
http://arxiv.org/pdf/1904.06253v1.pdf | |
PWC | https://paperswithcode.com/paper/the-coupling-effect-of-lipschitz |
Repo | |
Framework | |
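A minimal sketch of a Lipschitz penalty of the kind discussed above, assuming a fully connected ReLU network and using the product of layer spectral norms as an upper bound on the network's Lipschitz constant; the paper's exact regularizer and experiments are not reproduced.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))

def lipschitz_upper_bound(model):
    # For 1-Lipschitz activations, the product of layer spectral norms
    # upper-bounds the network's Lipschitz constant.
    bound = torch.tensor(1.0)
    for m in model:
        if isinstance(m, nn.Linear):
            bound = bound * torch.linalg.matrix_norm(m.weight, ord=2)
    return bound

x, y = torch.randn(128, 20), torch.randn(128, 1)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(100):
    loss = ((net(x) - y) ** 2).mean() + 1e-3 * lipschitz_upper_bound(net)
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the bound is a product over layers, shrinking one layer's spectral norm relaxes the pressure on the others, which is the cross-layer coupling effect the paper studies.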
Communication trade-offs for synchronized distributed SGD with large step size
Title | Communication trade-offs for synchronized distributed SGD with large step size |
Authors | Kumar Kshitij Patel, Aymeric Dieuleveut |
Abstract | Synchronous mini-batch SGD is state-of-the-art for large-scale distributed machine learning. However, in practice, its convergence is bottlenecked by slow communication rounds between worker nodes. A natural solution to reduce communication is to use the 'local-SGD' model, in which the workers train their models independently and synchronize every once in a while. This algorithm improves the computation-communication trade-off, but its convergence is not understood very well. We propose a non-asymptotic error analysis, which enables comparison to one-shot averaging, i.e., a single communication round among independent workers, and mini-batch averaging, i.e., communicating at every step. We also provide adaptive lower bounds on the communication frequency for large step sizes ($ t^{-\alpha} $, $ \alpha\in (1/2 , 1 ) $) and show that local-SGD reduces communication by a factor of $O\Big(\frac{\sqrt{T}}{P^{3/2}}\Big)$, with $T$ the total number of gradients and $P$ machines. |
Tasks | |
Published | 2019-04-25 |
URL | http://arxiv.org/abs/1904.11325v1 |
http://arxiv.org/pdf/1904.11325v1.pdf | |
PWC | https://paperswithcode.com/paper/communication-trade-offs-for-synchronized |
Repo | |
Framework | |
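A small simulation of the local-SGD scheme analyzed above: P workers each run H local SGD steps on their own data shard and then average their parameters, here on a toy least-squares problem with a step-size schedule proportional to t^(-alpha), alpha in (1/2, 1). The problem and constants are placeholders for the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
P, H, rounds, dim = 4, 10, 50, 5            # workers, local steps, communication rounds
w_true = rng.normal(size=dim)

# Each worker holds its own data shard.
shards = []
for _ in range(P):
    X = rng.normal(size=(200, dim))
    shards.append((X, X @ w_true + 0.1 * rng.normal(size=200)))

w = np.zeros(dim)
for r in range(rounds):
    updated = []
    for X, y in shards:
        w_local = w.copy()
        for h in range(H):                              # H local steps, no communication
            t = r * H + h + 1
            i = rng.integers(len(y))
            grad = (X[i] @ w_local - y[i]) * X[i]       # stochastic gradient of squared error
            w_local -= 0.1 * t ** (-0.75) * grad        # step size ~ t^{-alpha}, alpha in (1/2, 1)
        updated.append(w_local)
    w = np.mean(updated, axis=0)                        # one communication round: average models

print(np.round(np.linalg.norm(w - w_true), 3))          # should be small
```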
A Phase Shift Deep Neural Network for High Frequency Approximation and Wave Problems
Title | A Phase Shift Deep Neural Network for High Frequency Approximation and Wave Problems |
Authors | Wei Cai, Xiaoguang Li, Lizuo Liu |
Abstract | In this paper, we propose a phase shift deep neural network (PhaseDNN), which provides uniform wideband convergence in approximating high frequency functions and solutions of wave equations. The PhaseDNN makes use of the fact that common DNNs often achieve convergence in the low frequency range first, and a series of moderately-sized DNNs are constructed and trained for selected high frequency ranges. With the help of phase shifts in the frequency domain, each of the DNNs will be trained to approximate the function’s higher frequency content over a specific range at the same speed of convergence as in the low frequency range. As a result, the proposed PhaseDNN is able to convert high frequency learning into low frequency learning, allowing uniform learning of wideband functions. The PhaseDNN is then applied to find the solution of high frequency wave equations in inhomogeneous media through both differential and integral equation formulations with least square residual loss functions. Numerical results demonstrate the capability of the PhaseDNN in learning high frequency functions and oscillatory solutions of interior and exterior Helmholtz equations. |
Tasks | |
Published | 2019-09-23 |
URL | https://arxiv.org/abs/1909.11759v2 |
https://arxiv.org/pdf/1909.11759v2.pdf | |
PWC | https://paperswithcode.com/paper/a-phase-shift-deep-neural-network-for-high |
Repo | |
Framework | |
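A tiny numerical illustration of the phase-shift idea described above: multiplying a high-frequency signal by exp(-i*omega0*x) moves the selected frequency band close to zero frequency, where a standard DNN converges quickly, and the learned low-frequency content can be shifted back. The band selection here is a toy example; the DNN training and the wave-equation solvers from the paper are not shown.

```python
import numpy as np

x = np.linspace(0, 1, 2048, endpoint=False)
f = np.cos(2 * np.pi * 120 * x)                 # high-frequency target component

omega0 = 2 * np.pi * 118                        # center of the selected frequency band
shifted = f * np.exp(-1j * omega0 * x)          # phase shift toward zero frequency

# Low-pass filter the shifted signal: keep only frequencies |nu| <= 10 Hz.
spec = np.fft.fft(shifted)
freqs = np.fft.fftfreq(len(x), d=x[1] - x[0])
spec[np.abs(freqs) > 10] = 0.0
low_freq_part = np.fft.ifft(spec)               # this slowly varying signal is what a
                                                # small DNN would be trained to fit

# Shift the low-frequency content back to reconstruct the original band.
reconstruction = 2 * np.real(low_freq_part * np.exp(1j * omega0 * x))
print(np.max(np.abs(reconstruction - f)))       # ~0: the high-frequency band is recovered
```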