April 3, 2020

3026 words 15 mins read

Paper Group ANR 43

Cost-Sensitive Portfolio Selection via Deep Reinforcement Learning. Zero-Shot Cross-Lingual Transfer with Meta Learning. Private Mean Estimation of Heavy-Tailed Distributions. Guided Constrained Policy Optimization for Dynamic Quadrupedal Robot Locomotion. PAC-Bayesian Meta-learning with Implicit Prior. How to Solve Fair $k$-Center in Massive Data …

Cost-Sensitive Portfolio Selection via Deep Reinforcement Learning

Title Cost-Sensitive Portfolio Selection via Deep Reinforcement Learning
Authors Yifan Zhang, Peilin Zhao, Qingyao Wu, Bin Li, Junzhou Huang, Mingkui Tan
Abstract Portfolio Selection is an important real-world financial task and has attracted extensive attention in artificial intelligence communities. This task, however, has two main difficulties: (i) the non-stationary price series and complex asset correlations make the learning of feature representation very hard; (ii) the practicality principle in financial markets requires controlling both transaction and risk costs. Most existing methods adopt handcrafted features and/or impose no constraints on the costs, which may make them perform unsatisfactorily and fail to control both costs in practice. In this paper, we propose a cost-sensitive portfolio selection method with deep reinforcement learning. Specifically, a novel two-stream portfolio policy network is devised to extract both price series patterns and asset correlations, while a new cost-sensitive reward function is developed to maximize the accumulated return and constrain both costs via reinforcement learning. We theoretically analyze the near-optimality of the proposed reward, showing that the growth rate of the policy regarding this reward function can approach the theoretical optimum. We also empirically evaluate the proposed method on real-world datasets. Promising results demonstrate the effectiveness and superiority of the proposed method in terms of profitability, cost-sensitivity and representation abilities.
Tasks
Published 2020-03-06
URL https://arxiv.org/abs/2003.03051v1
PDF https://arxiv.org/pdf/2003.03051v1.pdf
PWC https://paperswithcode.com/paper/cost-sensitive-portfolio-selection-via-deep
Repo
Framework
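To make the cost-sensitive reward idea concrete, here is a minimal Python sketch under stated assumptions: a turnover-proportional transaction cost and a concentration-based risk penalty. The rates `tc_rate` and `risk_penalty` are illustrative choices, not the paper's exact reward function.

```python
import numpy as np

def cost_sensitive_reward(prev_weights, new_weights, price_relatives,
                          tc_rate=0.0025, risk_penalty=0.01):
    """Hypothetical cost-sensitive portfolio reward: log return minus
    transaction-cost and risk penalties (illustrative, not the paper's)."""
    # Gross return of the rebalanced portfolio over one period.
    gross = np.dot(new_weights, price_relatives)
    # Transaction cost proportional to total weight turnover.
    turnover = np.abs(new_weights - prev_weights).sum()
    # Risk proxy: portfolio concentration (sum of squared weights).
    risk = risk_penalty * np.dot(new_weights, new_weights)
    return np.log(gross * (1.0 - tc_rate * turnover)) - risk
```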

Zero-Shot Cross-Lingual Transfer with Meta Learning

Title Zero-Shot Cross-Lingual Transfer with Meta Learning
Authors Farhad Nooralahzadeh, Giannis Bekoulis, Johannes Bjerva, Isabelle Augenstein
Abstract Learning what to share between tasks has been a topic of high importance recently, as strategic sharing of knowledge has been shown to improve the performance of downstream tasks. The same applies to sharing between languages, and is especially important when considering the fact that most languages in the world suffer from being under-resourced. In this paper, we consider the setting of training models on multiple different languages at the same time, when little or no data is available for languages other than English. We show that this challenging setup can be approached using meta-learning, where, in addition to training a source language model, another model learns to select which training instances are the most beneficial. We experiment using standard supervised, zero-shot cross-lingual, as well as few-shot cross-lingual settings for different natural language understanding tasks (natural language inference, question answering). Our extensive experimental setup demonstrates the consistent effectiveness of meta-learning, on a total of 16 languages. We improve upon the state of the art on zero-shot and few-shot NLI and QA tasks on the XNLI and X-WikiRe datasets, respectively. We further conduct a comprehensive analysis which indicates that correlation of typological features between languages can further explain when parameter sharing learned via meta-learning is beneficial.
Tasks Cross-Lingual Transfer, Language Modelling, Meta-Learning, Natural Language Inference, Question Answering
Published 2020-03-05
URL https://arxiv.org/abs/2003.02739v1
PDF https://arxiv.org/pdf/2003.02739v1.pdf
PWC https://paperswithcode.com/paper/zero-shot-cross-lingual-transfer-with-meta
Repo
Framework
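As background for readers unfamiliar with meta-learning loops, here is a first-order (Reptile-style) sketch of one meta-training step; the paper builds on a MAML-based variant with learned instance selection, which this simplified version does not reproduce.

```python
import copy
import torch

def reptile_step(model, loss_fn, task_batches, inner_lr=0.01, meta_lr=0.1):
    """First-order meta-learning step on one sampled task (e.g., one
    auxiliary language's batches). A generic sketch, not the paper's
    exact algorithm."""
    init = copy.deepcopy(model.state_dict())
    # Inner loop: adapt the model on the sampled task's data.
    opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
    for x, y in task_batches:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    # Outer update: move the shared initialization toward the adapted weights.
    adapted = model.state_dict()
    model.load_state_dict({k: init[k] + meta_lr * (adapted[k] - init[k])
                           for k in init})
```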

Private Mean Estimation of Heavy-Tailed Distributions

Title Private Mean Estimation of Heavy-Tailed Distributions
Authors Gautam Kamath, Vikrant Singhal, Jonathan Ullman
Abstract We give new upper and lower bounds on the minimax sample complexity of differentially private mean estimation of distributions with bounded $k$-th moments. Roughly speaking, in the univariate case, we show that $n = \Theta\left(\frac{1}{\alpha^2} + \frac{1}{\alpha^{\frac{k}{k-1}}\varepsilon}\right)$ samples are necessary and sufficient to estimate the mean to $\alpha$-accuracy under $\varepsilon$-differential privacy, or any of its common relaxations. This result demonstrates a qualitatively different behavior compared to estimation absent privacy constraints, for which the sample complexity is identical for all $k \geq 2$. We also give algorithms for the multivariate setting whose sample complexity is a factor of $O(d)$ larger than the univariate case.
Tasks
Published 2020-02-21
URL https://arxiv.org/abs/2002.09464v1
PDF https://arxiv.org/pdf/2002.09464v1.pdf
PWC https://paperswithcode.com/paper/private-mean-estimation-of-heavy-tailed
Repo
Framework
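The standard clip-and-noise template underlying private mean estimation is easy to sketch; the paper's contribution lies in choosing the clipping threshold based on the bounded $k$-th moment, which the fixed `clip` below does not do.

```python
import numpy as np

def private_mean_1d(x, clip=10.0, eps=1.0, seed=None):
    """eps-DP univariate mean via clipping + Laplace noise (a generic
    sketch; threshold selection is where the paper's analysis lives)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    clipped = np.clip(x, -clip, clip)
    # Swapping one sample moves the clipped mean by at most 2*clip/n,
    # so Laplace noise with scale 2*clip/(n*eps) yields eps-DP.
    return clipped.mean() + rng.laplace(scale=2.0 * clip / (n * eps))
```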

Guided Constrained Policy Optimization for Dynamic Quadrupedal Robot Locomotion

Title Guided Constrained Policy Optimization for Dynamic Quadrupedal Robot Locomotion
Authors Siddhant Gangapurwala, Alexander Mitchell, Ioannis Havoutis
Abstract Deep reinforcement learning (RL) uses model-free techniques to optimize task-specific control policies. Despite having emerged as a promising approach for complex problems, RL is still hard to use reliably for real-world applications. Apart from challenges such as precise reward function tuning, inaccurate sensing and actuation, and non-deterministic response, existing RL methods do not guarantee behavior within required safety constraints that are crucial for real robot scenarios. In this regard, we introduce guided constrained policy optimization (GCPO), an RL framework based upon our implementation of constrained proximal policy optimization (CPPO) for tracking base velocity commands while following the defined constraints. We also introduce schemes which encourage state recovery into constrained regions in case of constraint violations. We present experimental results of our training method and test it on the real ANYmal quadruped robot. We compare our approach against the unconstrained RL method and show that guided constrained RL offers faster convergence close to the desired optimum resulting in an optimal, yet physically feasible, robotic control behavior without the need for precise reward function tuning.
Tasks
Published 2020-02-22
URL https://arxiv.org/abs/2002.09676v1
PDF https://arxiv.org/pdf/2002.09676v1.pdf
PWC https://paperswithcode.com/paper/guided-constrained-policy-optimization-for
Repo
Framework
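One common way to implement a constrained policy-gradient update is a Lagrangian relaxation of the clipped PPO objective, sketched below; this is an assumption about the general shape of such methods, not the authors' exact CPPO implementation.

```python
import torch

def cppo_lagrangian_loss(ratio, adv, cost_adv, lam, clip=0.2):
    """Lagrangian-relaxed clipped PPO surrogate (sketch). `lam` is a
    multiplier updated separately by ascent on constraint violation."""
    # Standard PPO clipped surrogate on the reward advantage.
    surr = torch.min(ratio * adv,
                     torch.clamp(ratio, 1 - clip, 1 + clip) * adv)
    # Penalize expected constraint cost, weighted by the multiplier.
    return -(surr - lam * ratio * cost_adv).mean()
```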

PAC-Bayesian Meta-learning with Implicit Prior

Title PAC-Bayesian Meta-learning with Implicit Prior
Authors Cuong Nguyen, Thanh-Toan Do, Gustavo Carneiro
Abstract We introduce a new and rigorously-formulated PAC-Bayes few-shot meta-learning algorithm that implicitly learns a prior distribution of the model of interest. Our proposed method extends the PAC-Bayes framework from a single task setting to the few-shot learning setting to upper-bound generalisation errors on unseen tasks and samples. We also propose a generative-based approach to model the shared prior and the posterior of task-specific model parameters more expressively compared to the usual diagonal Gaussian assumption. We show that the models trained with our proposed meta-learning algorithm are well calibrated and accurate, with state-of-the-art calibration and classification results on few-shot classification (mini-ImageNet and tiered-ImageNet) and regression (multi-modal task-distribution regression) benchmarks.
Tasks Calibration, Few-Shot Learning, Meta-Learning
Published 2020-03-05
URL https://arxiv.org/abs/2003.02455v1
PDF https://arxiv.org/pdf/2003.02455v1.pdf
PWC https://paperswithcode.com/paper/pac-bayesian-meta-learning-with-implicit
Repo
Framework
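For reference, the classical single-task PAC-Bayes bound that the paper extends to the few-shot setting, stated in its McAllester/Maurer form (the paper's task-level bound is more involved):

```latex
% With probability at least 1-\delta over an i.i.d. sample of size n,
% simultaneously for all posteriors Q over parameters \theta:
\mathbb{E}_{\theta \sim Q} L(\theta)
  \;\le\; \mathbb{E}_{\theta \sim Q} \hat{L}(\theta)
  + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\!\left(2\sqrt{n}/\delta\right)}{2n}}
```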

How to Solve Fair $k$-Center in Massive Data Models

Title How to Solve Fair $k$-Center in Massive Data Models
Authors Ashish Chiplunkar, Sagar Kale, Sivaramakrishnan Natarajan Ramamoorthy
Abstract Fueled by massive data, important decision making is being automated with the help of algorithms; fairness in algorithms has therefore become an especially important research topic. In this work, we design new streaming and distributed algorithms for the fair $k$-center problem that models fair data summarization. The streaming and distributed models of computation have the attractive feature of being able to handle massive data sets that do not fit into main memory. Our main contributions are: (a) the first distributed algorithm, which has a provably constant approximation ratio and is extremely parallelizable, and (b) a two-pass streaming algorithm with a provable approximation guarantee matching the best known algorithm (which is not a streaming algorithm). Our algorithms have the advantages of being easy to implement in practice, being fast with linear running times, having very small working memory and communication, and outperforming existing algorithms on several real and synthetic data sets. To complement our distributed algorithm, we also give a hardness result for natural distributed algorithms, which holds even for the special case of $k$-center.
Tasks Data Summarization, Decision Making
Published 2020-02-18
URL https://arxiv.org/abs/2002.07682v2
PDF https://arxiv.org/pdf/2002.07682v2.pdf
PWC https://paperswithcode.com/paper/how-to-solve-fair-k-center-in-massive-data
Repo
Framework
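For orientation, the classical greedy 2-approximation for unconstrained $k$-center (Gonzalez) takes only a few lines; the paper's algorithms add fairness constraints and streaming/distributed variants on top of this kind of primitive.

```python
import numpy as np

def gonzalez_k_center(points, k, seed=0):
    """Greedy 2-approximation for (unconstrained) k-center: repeatedly
    pick the point farthest from the current centers."""
    rng = np.random.default_rng(seed)
    centers = [points[rng.integers(len(points))]]
    dists = np.linalg.norm(points - centers[0], axis=1)
    for _ in range(k - 1):
        far = int(np.argmax(dists))    # farthest point from chosen centers
        centers.append(points[far])
        dists = np.minimum(dists, np.linalg.norm(points - points[far], axis=1))
    return np.array(centers)
```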

GradientDICE: Rethinking Generalized Offline Estimation of Stationary Values

Title GradientDICE: Rethinking Generalized Offline Estimation of Stationary Values
Authors Shangtong Zhang, Bo Liu, Shimon Whiteson
Abstract We present GradientDICE for estimating the density ratio between the state distribution of the target policy and the sampling distribution in off-policy reinforcement learning. GradientDICE fixes several problems of GenDICE (Zhang et al., 2020), the state-of-the-art for estimating such density ratios. Namely, the optimization problem in GenDICE is not a convex-concave saddle-point problem once nonlinearity in optimization variable parameterization is introduced to ensure positivity, so no primal-dual algorithm is guaranteed to converge or find the desired solution. However, such nonlinearity is essential to ensure the consistency of GenDICE even with a tabular representation. This is a fundamental contradiction, resulting from GenDICE’s original formulation of the optimization problem. In GradientDICE, we optimize a different objective from GenDICE by using the Perron-Frobenius theorem and eliminating GenDICE’s use of divergence. Consequently, nonlinearity in parameterization is not necessary for GradientDICE, which is provably convergent under linear function approximation.
Tasks
Published 2020-01-29
URL https://arxiv.org/abs/2001.11113v2
PDF https://arxiv.org/pdf/2001.11113v2.pdf
PWC https://paperswithcode.com/paper/gradientdice-rethinking-generalized-offline
Repo
Framework
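To fix notation, the quantity all DICE-style estimators target, stated here for the discounted case (this is standard background, not GradientDICE's specific objective):

```latex
% Density ratio between the target policy's discounted stationary
% distribution and the sampling distribution:
\tau(s, a) = \frac{d_{\pi}(s, a)}{\mu(s, a)},
% where d_{\pi} satisfies the Bellman flow (balance) equation
d_{\pi}(s', a') = (1-\gamma)\, \mu_0(s')\, \pi(a' \mid s')
  + \gamma \sum_{s, a} d_{\pi}(s, a)\, P(s' \mid s, a)\, \pi(a' \mid s')
```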

Short-Term Traffic Flow Prediction Using Variational LSTM Networks

Title Short-Term Traffic Flow Prediction Using Variational LSTM Networks
Authors Mehrdad Farahani, Marzieh Farahani, Mohammad Manthouri, Okyay Kaynak
Abstract Traffic flow characteristics are one of the most critical decision-making and traffic-policing factors in a region. Awareness of the predicted status of the traffic flow is of prime importance in traffic management and traffic information divisions. The purpose of this research is to propose a forecasting model for traffic flow based on deep learning techniques and historical data in the Intelligent Transportation Systems area. The historical data were collected from the Caltrans Performance Measurement System (PeMS) over six months in 2019. The proposed prediction model is a Variational Long Short-Term Memory Encoder (VLSTM-E), which aims to estimate the flow more accurately than conventional methods. VLSTM-E can provide more reliable short-term traffic flow predictions by taking the data distribution and missing values into account.
Tasks Decision Making
Published 2020-02-18
URL https://arxiv.org/abs/2002.07922v1
PDF https://arxiv.org/pdf/2002.07922v1.pdf
PWC https://paperswithcode.com/paper/short-term-traffic-flow-prediction-using
Repo
Framework
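One plausible reading of a variational LSTM encoder is an LSTM whose final state parameterizes a Gaussian latent sampled with the reparameterization trick. The sketch below assumes that reading, with illustrative layer sizes; it is not the authors' exact VLSTM-E.

```python
import torch
import torch.nn as nn

class VariationalLSTMEncoder(nn.Module):
    """Sketch: LSTM encoder with a reparameterized Gaussian latent
    feeding a next-step traffic-flow head (sizes are illustrative)."""
    def __init__(self, n_features=1, hidden=64, latent=16):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
        self.head = nn.Linear(latent, 1)

    def forward(self, x):                  # x: (batch, time, n_features)
        _, (h, _) = self.lstm(x)
        mu, logvar = self.mu(h[-1]), self.logvar(h[-1])
        # Reparameterization trick: z = mu + sigma * eps.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.head(z), mu, logvar    # KL term on (mu, logvar) in the loss
```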

QEML (Quantum Enhanced Machine Learning): Using Quantum Computing to Enhance ML Classifiers and Feature Spaces

Title QEML (Quantum Enhanced Machine Learning): Using Quantum Computing to Enhance ML Classifiers and Feature Spaces
Authors Siddharth Sharma
Abstract Machine learning and quantum computing are two technologies that are causing a paradigm shift in the performance and behavior of certain algorithms, achieving previously unattainable results. Machine learning (kernel classification) has become ubiquitous as the forefront method for pattern recognition and has been shown to have numerous societal applications. While not yet fault-tolerant, quantum computing is an entirely new method of computation due to its exploitation of quantum phenomena such as superposition and entanglement. While current machine learning classifiers like the Support Vector Machine are seeing gradual improvements in performance, there are still severe limitations on the efficiency and scalability of such algorithms due to a limited feature space, which makes the kernel functions computationally expensive to estimate. By integrating quantum circuits into traditional ML, we may solve this problem through the use of a quantum feature space, a technique that improves existing machine learning algorithms through parallelization and the reduction of storage space from exponential to linear. This research expands on this concept of the Hilbert space and applies it to classical machine learning by implementing a quantum-enhanced version of the K nearest neighbors algorithm. This paper first develops the mathematical intuition for the implementation of quantum feature spaces and successfully simulates quantum properties and algorithms, such as fidelity and Grover’s algorithm, via the Qiskit Python library and the IBM Quantum Experience platform. The primary experiment of this research is to build a noisy variational quantum circuit KNN (QKNN) which mimics the classification methods of a traditional KNN classifier. The QKNN uses Hamming distance as its metric and is able to outperform the existing KNN on a 10-dimensional Breast Cancer dataset.
Tasks
Published 2020-02-22
URL https://arxiv.org/abs/2002.10453v2
PDF https://arxiv.org/pdf/2002.10453v2.pdf
PWC https://paperswithcode.com/paper/qeml-quantum-enhanced-machine-learning-using
Repo
Framework
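The classical baseline the QKNN mimics is a Hamming-distance KNN over binarized features, sketched below. The median-threshold binarization is an illustrative assumption; the quantum version replaces the distance computation with circuit-based estimates.

```python
import numpy as np

def hamming_knn_predict(train_x, train_y, query, k=5):
    """Classical Hamming-distance KNN over binarized features (sketch)."""
    # Binarize each feature against its training median (assumed scheme).
    thr = np.median(train_x, axis=0)
    tx, q = (train_x > thr).astype(int), (query > thr).astype(int)
    dists = (tx != q).sum(axis=1)          # Hamming distance per sample
    nearest = np.argsort(dists)[:k]
    return int(np.argmax(np.bincount(train_y[nearest])))  # majority vote
```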

Development, Demonstration, and Validation of Data-driven Compact Diode Models for Circuit Simulation and Analysis

Title Development, Demonstration, and Validation of Data-driven Compact Diode Models for Circuit Simulation and Analysis
Authors K. Aadithya, P. Kuberry, B. Paskaleva, P. Bochev, K. Leeson, A. Mar, T. Mei, E. Keiter
Abstract Compact semiconductor device models are essential for efficiently designing and analyzing large circuits. However, traditional compact model development requires a large amount of manual effort and can span many years. Moreover, inclusion of new physics (e.g., radiation effects) into an existing compact model is not trivial and may require redevelopment from scratch. Machine Learning (ML) techniques have the potential to automate and significantly speed up the development of compact models. In addition, ML provides a range of modeling options that can be used to develop hierarchies of compact models tailored to specific circuit design stages. In this paper, we explore three such options: (1) table-based interpolation, (2) Generalized Moving Least-Squares, and (3) feed-forward Deep Neural Networks, to develop compact models for a p-n junction diode. We evaluate the performance of these “data-driven” compact models by (1) comparing their voltage-current characteristics against laboratory data, and (2) building a bridge rectifier circuit using these devices, predicting the circuit’s behavior using SPICE-like circuit simulations, and then comparing these predictions against laboratory measurements of the same circuit.
Tasks
Published 2020-01-06
URL https://arxiv.org/abs/2001.01699v1
PDF https://arxiv.org/pdf/2001.01699v1.pdf
PWC https://paperswithcode.com/paper/development-demonstration-and-validation-of
Repo
Framework
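Option (1), table-based interpolation, can be sketched on synthetic Shockley-equation data; the device parameters below are illustrative, not fitted to the paper's laboratory measurements.

```python
import numpy as np
from scipy.interpolate import interp1d

# Synthetic diode I-V table from the Shockley equation (illustrative values).
Is, n, Vt = 1e-12, 1.8, 0.02585      # saturation current, ideality, thermal voltage
v_table = np.linspace(0.0, 0.8, 41)  # "measured" bias points
i_table = Is * (np.exp(v_table / (n * Vt)) - 1.0)

# Interpolate log-current so the exponential I-V curve stays well behaved.
log_i = interp1d(v_table, np.log(i_table + 1e-30), kind="cubic")

def diode_current(v):
    """Table-based compact model: query current at an arbitrary bias."""
    return np.exp(log_i(v)) - 1e-30

print(diode_current(0.65))           # evaluate between table points
```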

Meta Cyclical Annealing Schedule: A Simple Approach to Avoiding Meta-Amortization Error

Title Meta Cyclical Annealing Schedule: A Simple Approach to Avoiding Meta-Amortization Error
Authors Yusuke Hayashi, Taiji Suzuki
Abstract The ability to learn new concepts with small amounts of data is a crucial aspect of intelligence that has proven challenging for deep learning methods. Meta-learning for few-shot learning offers a potential solution to this problem: by learning to learn across data from many previous tasks, few-shot learning algorithms can discover the structure among tasks to enable fast learning of new tasks. However, a critical challenge in few-shot learning is task ambiguity: even when a powerful prior can be meta-learned from a large number of prior tasks, a small dataset for a new task can simply be too ambiguous to acquire a single model for that task. Bayesian meta-learning models can naturally resolve this problem by placing a sophisticated prior distribution and letting the posterior be well regularized through Bayesian decision theory. However, currently known Bayesian meta-learning procedures such as VERSA suffer from the so-called {\it information preference problem}, that is, the posterior distribution degenerates to a single point and is far from the exact one. To address this challenge, we design a novel meta-regularization objective using a {\it cyclical annealing schedule} and a {\it maximum mean discrepancy} (MMD) criterion. The cyclical annealing schedule is quite effective at avoiding such degenerate solutions. This procedure involves a difficult KL-divergence estimation, but we resolve the issue by employing MMD instead of KL-divergence. The experimental results show that our approach substantially outperforms standard meta-learning algorithms.
Tasks Few-Shot Learning, Meta-Learning
Published 2020-03-04
URL https://arxiv.org/abs/2003.01889v1
PDF https://arxiv.org/pdf/2003.01889v1.pdf
PWC https://paperswithcode.com/paper/meta-cyclical-annealing-schedule-a-simple
Repo
Framework
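The two ingredients named in the abstract are easy to sketch separately: a cyclical annealing schedule for the regularization weight, and a kernel MMD in place of the KL term. The cycle length and RBF bandwidth below are assumed hyperparameters.

```python
import torch

def cyclical_beta(step, cycle_len=1000, ramp=0.5):
    """Cyclical annealing: beta ramps 0 -> 1 over the first `ramp`
    fraction of each cycle, then holds at 1."""
    t = (step % cycle_len) / cycle_len
    return min(t / ramp, 1.0)

def mmd2_rbf(x, y, sigma=1.0):
    """Biased MMD^2 estimate with an RBF kernel (sketch)."""
    k = lambda a, b: torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma**2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()
```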

Joint Learning of Assignment and Representation for Biometric Group Membership

Title Joint Learning of Assignment and Representation for Biometric Group Membership
Authors Marzieh Gheisari, Teddy Furon, Laurent Amsaleg
Abstract This paper proposes a framework for group membership protocols preventing the curious but honest server from reconstructing the enrolled biometric signatures and inferring the identity of querying clients. This framework learns the embedding parameters, group representations and assignments simultaneously. Experiments show the trade-off between security/privacy and verification/identification performances.
Tasks
Published 2020-02-24
URL https://arxiv.org/abs/2002.10363v1
PDF https://arxiv.org/pdf/2002.10363v1.pdf
PWC https://paperswithcode.com/paper/joint-learning-of-assignment-and
Repo
Framework

Fair inference on error-prone outcomes

Title Fair inference on error-prone outcomes
Authors Laura Boeschoten, Erik-Jan van Kesteren, Ayoub Bagheri, Daniel L. Oberski
Abstract Fair inference in supervised learning is an important and active area of research, yielding a range of useful methods to assess and account for fairness criteria when predicting ground truth targets. As shown in recent work, however, when target labels are error-prone, potential prediction unfairness can arise from measurement error. In this paper, we show that, when an error-prone proxy target is used, existing methods to assess and calibrate fairness criteria do not extend to the true target variable of interest. To remedy this problem, we suggest a framework resulting from the combination of two existing literatures: fair ML methods, such as those found in the counterfactual fairness literature on the one hand, and, on the other, measurement models found in the statistical literature. We discuss these approaches and their connection resulting in our framework. In a healthcare decision problem, we find that using a latent variable model to account for measurement error removes the unfairness detected previously.
Tasks
Published 2020-03-17
URL https://arxiv.org/abs/2003.07621v1
PDF https://arxiv.org/pdf/2003.07621v1.pdf
PWC https://paperswithcode.com/paper/fair-inference-on-error-prone-outcomes
Repo
Framework
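A minimal measurement model of the kind the abstract refers to links the observed proxy label to the true target through a misclassification matrix; this sketch assumes nondifferential error and is far simpler than the paper's full latent-variable specification.

```latex
% Observed proxy \tilde{Y}, true target Y, features X; nondifferential
% measurement error assumed (\tilde{Y} \perp X \mid Y):
P(\tilde{Y} = \tilde{y} \mid X = x)
  = \sum_{y} P(\tilde{Y} = \tilde{y} \mid Y = y)\, P(Y = y \mid X = x)
```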

Learning Context-aware Task Reasoning for Efficient Meta-reinforcement Learning

Title Learning Context-aware Task Reasoning for Efficient Meta-reinforcement Learning
Authors Haozhe Wang, Jiale Zhou, Xuming He
Abstract Despite recent success of deep network-based Reinforcement Learning (RL), it remains elusive to achieve human-level efficiency in learning novel tasks. While previous efforts attempt to address this challenge using meta-learning strategies, they typically suffer from sampling inefficiency with on-policy RL algorithms or meta-overfitting with off-policy learning. In this work, we propose a novel meta-RL strategy to address those limitations. In particular, we decompose the meta-RL problem into three sub-tasks, task-exploration, task-inference and task-fulfillment, instantiated with two deep network agents and a task encoder. During meta-training, our method learns a task-conditioned actor network for task-fulfillment, an explorer network with a self-supervised reward shaping that encourages task-informative experiences in task-exploration, and a context-aware graph-based task encoder for task inference. We validate our approach with extensive experiments on several public benchmarks and the results show that our algorithm effectively performs exploration for task inference, improves sample efficiency during both training and testing, and mitigates the meta-overfitting problem.
Tasks Meta-Learning
Published 2020-03-03
URL https://arxiv.org/abs/2003.01373v1
PDF https://arxiv.org/pdf/2003.01373v1.pdf
PWC https://paperswithcode.com/paper/learning-context-aware-task-reasoning-for
Repo
Framework
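The task-inference/task-fulfillment split can be sketched as a context encoder whose aggregated task code conditions the actor. The mean-pooling aggregator and layer sizes below are illustrative stand-ins for the paper's graph-based encoder.

```python
import torch
import torch.nn as nn

class TaskConditionedActor(nn.Module):
    """Sketch: encode collected transitions into a task code z, then
    condition the actor on [obs, z] (sizes illustrative)."""
    def __init__(self, obs_dim, act_dim, ctx_dim, task_dim=8, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(ctx_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, task_dim))
        self.actor = nn.Sequential(nn.Linear(obs_dim + task_dim, hidden),
                                   nn.ReLU(), nn.Linear(hidden, act_dim))

    def forward(self, obs, context):        # context: (n_transitions, ctx_dim)
        z = self.encoder(context).mean(0)   # aggregate into one task code
        z = z.expand(len(obs), -1)          # broadcast over the batch
        return self.actor(torch.cat([obs, z], dim=-1))
```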

Transformer Hawkes Process

Title Transformer Hawkes Process
Authors Simiao Zuo, Haoming Jiang, Zichong Li, Tuo Zhao, Hongyuan Zha
Abstract Modern data acquisition routinely produces massive amounts of event sequence data in various domains, such as social media, healthcare, and financial markets. These data often exhibit complicated short-term and long-term temporal dependencies. However, most of the existing recurrent neural network-based point process models fail to capture such dependencies, and yield unreliable prediction performance. To address this issue, we propose a Transformer Hawkes Process (THP) model, which leverages the self-attention mechanism to capture long-term dependencies while enjoying computational efficiency. Numerical experiments on various datasets show that THP outperforms existing models in terms of both likelihood and event prediction accuracy by a notable margin. Moreover, THP is quite general and can incorporate additional structural knowledge. We provide a concrete example, where THP achieves improved prediction performance for learning multiple point processes when incorporating their relational information.
Tasks Point Processes
Published 2020-02-21
URL https://arxiv.org/abs/2002.09291v1
PDF https://arxiv.org/pdf/2002.09291v1.pdf
PWC https://paperswithcode.com/paper/transformer-hawkes-process
Repo
Framework
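For context, the classical Hawkes conditional intensity that THP generalizes, shown with the common exponential excitation kernel (THP instead produces the intensity from self-attention hidden states):

```latex
% Base rate \mu plus self-excitation from past events t_j < t:
\lambda(t) = \mu + \sum_{t_j < t} \alpha\, e^{-\beta (t - t_j)},
\qquad \mu, \alpha, \beta > 0
```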