Paper Group ANR 26
AlignTTS: Efficient Feed-Forward Text-to-Speech System without Explicit Alignment. A Measure-Theoretic Approach to Kernel Conditional Mean Embeddings. Deep Adversarial Reinforcement Learning for Object Disentangling. Conditional Gaussian Distribution Learning for Open Set Recognition. Lake Ice Detection from Sentinel-1 SAR with Deep Learning. Montage: A Neural Network Language Model-Guided JavaScript Engine Fuzzer. Feedback Graph Convolutional Network for Skeleton-based Action Recognition. Online Sinkhorn: optimal transportation distances from sample streams. Kernel Conditional Moment Test via Maximum Moment Restriction. A General Method for Robust Learning from Batches. Theoretical Models of Learning to Learn. Generalized Policy Elimination: an efficient algorithm for Nonparametric Contextual Bandits. Finding Optimal Points for Expensive Functions Using Adaptive RBF-Based Surrogate Model Via Uncertainty Quantification. A game-theoretic approach for Generative Adversarial Networks. Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization.
AlignTTS: Efficient Feed-Forward Text-to-Speech System without Explicit Alignment
Title | AlignTTS: Efficient Feed-Forward Text-to-Speech System without Explicit Alignment |
Authors | Zhen Zeng, Jianzong Wang, Ning Cheng, Tian Xia, Jing Xiao |
Abstract | Targeting both high efficiency and high performance, we propose AlignTTS to predict the mel-spectrum in parallel. AlignTTS is based on a Feed-Forward Transformer which generates the mel-spectrum from a sequence of characters, where the duration of each character is determined by a duration predictor. Instead of adopting the attention mechanism in Transformer TTS to align text to the mel-spectrum, an alignment loss is presented that considers all possible alignments in training by use of dynamic programming. Experiments on the LJSpeech dataset show that our model achieves not only state-of-the-art performance, outperforming Transformer TTS by 0.03 in mean opinion score (MOS), but also high efficiency, running more than 50 times faster than real-time. |
Tasks | |
Published | 2020-03-04 |
URL | https://arxiv.org/abs/2003.01950v1 |
PDF | https://arxiv.org/pdf/2003.01950v1.pdf |
PWC | https://paperswithcode.com/paper/aligntts-efficient-feed-forward-text-to |
Repo | |
Framework | |
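
The alignment loss above amounts to a forward-style dynamic program over all monotonic alignments between characters and mel frames. Below is a minimal log-domain sketch of that idea, assuming per-frame, per-character emission log-probabilities are already available (in AlignTTS these come from learned mel densities, which are not reproduced here).

```python
# A minimal log-domain forward algorithm over all monotonic text-to-mel
# alignments, in the spirit of the alignment loss described above.
# `log_probs[t, j]` is assumed to hold log p(mel frame t | character j);
# how these emission scores are produced is an assumption, not the
# paper's exact model.
import numpy as np

def alignment_loss(log_probs: np.ndarray) -> float:
    """Negative log-sum over all monotonic alignments of T frames to J chars."""
    T, J = log_probs.shape
    alpha = np.full((T, J), -np.inf)
    alpha[0, 0] = log_probs[0, 0]          # first frame must align to first char
    for t in range(1, T):
        for j in range(J):
            stay = alpha[t - 1, j]          # frame t stays on character j
            move = alpha[t - 1, j - 1] if j > 0 else -np.inf  # advance to next char
            alpha[t, j] = np.logaddexp(stay, move) + log_probs[t, j]
    return -alpha[-1, -1]                   # all frames used, last char reached
```

A feasible alignment exists only when the number of frames is at least the number of characters, since each character must cover at least one frame.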
A Measure-Theoretic Approach to Kernel Conditional Mean Embeddings
Title | A Measure-Theoretic Approach to Kernel Conditional Mean Embeddings |
Authors | Junhyung Park, Krikamol Muandet |
Abstract | We present a new operator-free, measure-theoretic definition of the conditional mean embedding as a random variable taking values in a reproducing kernel Hilbert space. While the kernel mean embedding of marginal distributions has been defined rigorously, the existing operator-based approach to the conditional version lacks a rigorous definition and depends on strong assumptions that hinder its analysis. Our definition does not impose any of the assumptions that the operator-based counterpart requires. We derive a natural regression interpretation to obtain empirical estimates, and provide a thorough analysis of its properties, including universal consistency. As natural by-products, we obtain the conditional analogues of the Maximum Mean Discrepancy and Hilbert-Schmidt Independence Criterion, and demonstrate their behaviour via simulations. |
Tasks | |
Published | 2020-02-10 |
URL | https://arxiv.org/abs/2002.03689v3 |
PDF | https://arxiv.org/pdf/2002.03689v3.pdf |
PWC | https://paperswithcode.com/paper/a-measure-theoretic-approach-to-kernel |
Repo | |
Framework | |
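
The regression interpretation mentioned above yields empirical estimates that take the familiar kernel-ridge form: the conditional mean embedding at a point x is a weighted combination of the feature maps of the observed y_i. The sketch below is a minimal numpy rendering under that reading; the Gaussian kernel, its bandwidth, and the regularizer `lam` are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    """Gaussian kernel matrix between row-sample arrays a (n,d) and b (m,d)."""
    sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def cme_weights(X, x, lam=1e-3):
    """Weights beta(x) with  hat mu_{Y|X=x} = sum_i beta_i(x) k_Y(y_i, .)."""
    n = len(X)
    K = rbf(X, X)
    return np.linalg.solve(K + n * lam * np.eye(n), rbf(X, x[None, :]))[:, 0]

# Example: estimate E[f(Y) | X = x] for f in the RKHS span, f = k_Y(y*, .)
X = np.random.randn(200, 1); Y = np.sin(3 * X) + 0.1 * np.random.randn(200, 1)
beta = cme_weights(X, np.array([0.5]))
f_at_samples = rbf(Y, np.array([[np.sin(1.5)]]))[:, 0]   # k_Y(y_i, y*)
print(beta @ f_at_samples)   # approximate conditional expectation of f
```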
Deep Adversarial Reinforcement Learning for Object Disentangling
Title | Deep Adversarial Reinforcement Learning for Object Disentangling |
Authors | Melvin Laux, Oleg Arenz, Jan Peters, Joni Pajarinen |
Abstract | Deep learning in combination with improved training techniques and high computational power has led to recent advances in the field of reinforcement learning (RL) and to successful robotic RL applications such as in-hand manipulation. However, most robotic RL relies on a well-known initial state distribution. In real-world tasks, however, this information is often not available. For example, when disentangling waste objects, the actual position of the robot w.r.t. the objects may not match the positions the RL policy was trained for. To solve this problem, we present a novel adversarial reinforcement learning (ARL) framework. The ARL framework utilizes an adversary, which is trained to steer the original agent, the protagonist, to challenging states. We train the protagonist and the adversary jointly to allow them to adapt to the changing policy of their opponent. We show that our method can generalize from training to test scenarios by training an end-to-end system for robot control to solve a challenging object disentangling task. Experiments with a KUKA LBR+ 7-DOF robot arm show that our approach outperforms the baseline method in disentangling when starting from different initial states than provided during training. |
Tasks | |
Published | 2020-03-08 |
URL | https://arxiv.org/abs/2003.03779v1 |
PDF | https://arxiv.org/pdf/2003.03779v1.pdf |
PWC | https://paperswithcode.com/paper/deep-adversarial-reinforcement-learning-for |
Repo | |
Framework | |
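
As a rough picture of the ARL framework, one episode can be split into an adversary phase that steers the system into challenging states and a protagonist phase that tries to solve the task from there, with a zero-sum reward coupling the two. The skeleton below is purely schematic; `env`, `adversary`, and `protagonist` are hypothetical stand-ins, and the paper's exact update rules and reward shaping are not reproduced.

```python
# Schematic ARL training loop: adversary steers, protagonist recovers.
def arl_episode(env, adversary, protagonist, adv_steps=50, pro_steps=200):
    state = env.reset()
    # Phase 1: the adversary acts first, pushing the system toward states
    # that are hard for the protagonist.
    for _ in range(adv_steps):
        state, _, done, _ = env.step(adversary.act(state))
        if done:
            return
    # Phase 2: the protagonist takes over from wherever the adversary
    # left the system and tries to solve the task from there.
    ret = 0.0
    for _ in range(pro_steps):
        state, reward, done, _ = env.step(protagonist.act(state))
        ret += reward
        if done:
            break
    protagonist.update(ret)    # maximise task return
    adversary.update(-ret)     # minimise it (zero-sum coupling)
```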
Conditional Gaussian Distribution Learning for Open Set Recognition
Title | Conditional Gaussian Distribution Learning for Open Set Recognition |
Authors | Xin Sun, Zhenning Yang, Chi Zhang, Guohao Peng, Keck-Voon Ling |
Abstract | Deep neural networks have achieved state-of-the-art performance in a wide range of recognition/classification tasks. However, when applying deep learning to real-world applications, there are still multiple challenges. A typical challenge is that unknown samples may be fed into the system during the testing phase, and traditional deep neural networks will wrongly recognize an unknown sample as one of the known classes. Open set recognition is a potential solution to this problem: the open set classifier should be able to reject unknown samples as well as maintain high classification accuracy on known classes. The variational auto-encoder (VAE) is a popular model for detecting unknowns, but it cannot provide discriminative representations for classifying known samples. In this paper, we propose a novel method, Conditional Gaussian Distribution Learning (CGDL), for open set recognition. In addition to detecting unknown samples, this method can also classify known samples by forcing different latent features to approximate different Gaussian models. Meanwhile, to prevent information in the input from vanishing in the middle layers, we also adopt the probabilistic ladder architecture to extract high-level abstract features. Experiments on several standard image datasets reveal that the proposed method significantly outperforms the baseline method and achieves new state-of-the-art results. |
Tasks | Open Set Learning |
Published | 2020-03-19 |
URL | https://arxiv.org/abs/2003.08823v2 |
PDF | https://arxiv.org/pdf/2003.08823v2.pdf |
PWC | https://paperswithcode.com/paper/conditional-gaussian-distribution-learning |
Repo | |
Framework | |
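
The open-set decision rule implied by the abstract can be sketched as follows: fit one Gaussian per known class in latent space, classify by the most likely class, and reject as unknown when even the best class is too unlikely. The encoder, the probabilistic ladder architecture, and the training losses of CGDL are omitted; `Z` is assumed to hold already-encoded latent features with several samples per class.

```python
# Minimal open-set decision rule: per-class Gaussians in latent space,
# thresholded log-likelihood for the unknown-rejection decision.
import numpy as np
from scipy.stats import multivariate_normal

def fit_class_gaussians(Z, labels):
    """One Gaussian per known class; small ridge keeps covariances PD."""
    d = Z.shape[1]
    return {c: multivariate_normal(Z[labels == c].mean(0),
                                   np.cov(Z[labels == c].T) + 1e-6 * np.eye(d))
            for c in np.unique(labels)}

def predict_open_set(z, gaussians, threshold):
    scores = {c: g.logpdf(z) for c, g in gaussians.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "unknown"
```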
Lake Ice Detection from Sentinel-1 SAR with Deep Learning
Title | Lake Ice Detection from Sentinel-1 SAR with Deep Learning |
Authors | Manu Tom, Roberto Aguilar, Pascal Imhof, Silvan Leinss, Emmanuel Baltsavias, Konrad Schindler |
Abstract | Lake ice, as part of the Essential Climate Variable (ECV) lakes, is an important indicator for monitoring climate change and global warming. The spatio-temporal extent of lake ice cover, along with the timings of key phenological events such as freeze-up and break-up, provides important cues about the local and global climate. We present a lake ice monitoring system based on the automatic analysis of Sentinel-1 Synthetic Aperture Radar (SAR) data with a deep neural network. In previous studies that used optical satellite imagery for lake ice monitoring, frequent cloud cover was a main limiting factor, which we overcome thanks to the ability of microwave sensors to penetrate clouds and observe the lakes regardless of weather and illumination conditions. We cast ice detection as a two-class (frozen, non-frozen) semantic segmentation problem and solve it using a state-of-the-art deep convolutional neural network (CNN). We report results on two winters (2016-17 and 2017-18) and three alpine lakes in Switzerland, including cross-validation tests to assess the generalisation to unseen lakes and winters. The proposed model reaches mean Intersection-over-Union (mIoU) scores >90% on average, and >84% even for the most difficult lake. |
Tasks | Semantic Segmentation |
Published | 2020-02-17 |
URL | https://arxiv.org/abs/2002.07040v1 |
PDF | https://arxiv.org/pdf/2002.07040v1.pdf |
PWC | https://paperswithcode.com/paper/lake-ice-detection-from-sentinel-1-sar-with |
Repo | |
Framework | |
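
For reference, the mIoU score quoted above is the per-class intersection-over-union averaged over classes, here the two-class (frozen, non-frozen) case. A straightforward implementation:

```python
# Mean Intersection-over-Union for dense (per-pixel) label maps.
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int = 2) -> float:
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                      # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```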
Montage: A Neural Network Language Model-Guided JavaScript Engine Fuzzer
Title | Montage: A Neural Network Language Model-Guided JavaScript Engine Fuzzer |
Authors | Suyoung Lee, HyungSeok Han, Sang Kil Cha, Sooel Son |
Abstract | JavaScript (JS) engine vulnerabilities pose significant security threats affecting billions of web browsers. While fuzzing is a prevalent technique for finding such vulnerabilities, there have been few studies that leverage the recent advances in neural network language models (NNLMs). In this paper, we present Montage, the first NNLM-guided fuzzer for finding JS engine vulnerabilities. The key aspect of our technique is to transform a JS abstract syntax tree (AST) into a sequence of AST subtrees on which prevailing NNLMs can be directly trained. We demonstrate that Montage is capable of generating valid JS tests, and show that it outperforms previous studies in terms of finding vulnerabilities. Montage found 37 real-world bugs, including three CVEs, in the latest JS engines, demonstrating its efficacy in finding JS engine bugs. |
Tasks | Language Modelling |
Published | 2020-01-13 |
URL | https://arxiv.org/abs/2001.04107v2 |
PDF | https://arxiv.org/pdf/2001.04107v2.pdf |
PWC | https://paperswithcode.com/paper/montage-a-neural-network-language-model |
Repo | |
Framework | |
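
Montage's key step, serialising an AST into a sequence of small subtree fragments that a language model can be trained on, can be illustrated with Python's stdlib `ast` module as a stand-in for a JS parser. Each fragment below pairs a node type with the types of its immediate children; this is one plausible reading of the decomposition, not the paper's exact algorithm.

```python
# Serialise an AST into a sequence of depth-1 subtree fragments.
import ast

def fragment_sequence(source: str):
    tree = ast.parse(source)
    frags = []
    for node in ast.walk(tree):                      # breadth-first traversal
        children = [type(c).__name__ for c in ast.iter_child_nodes(node)]
        frags.append((type(node).__name__, tuple(children)))
    return frags

print(fragment_sequence("x = f(1) + 2")[:3])
```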
Feedback Graph Convolutional Network for Skeleton-based Action Recognition
Title | Feedback Graph Convolutional Network for Skeleton-based Action Recognition |
Authors | Hao Yang, Dan Yan, Li Zhang, Dong Li, YunDa Sun, ShaoDi You, Stephen J. Maybank |
Abstract | Skeleton-based action recognition has attracted considerable attention in computer vision, since skeleton data are more robust to dynamic circumstances and complicated backgrounds than other modalities. Recently, many researchers have used the Graph Convolutional Network (GCN) to model the spatial-temporal features of skeleton sequences by end-to-end optimization. However, conventional GCNs are feedforward networks, in which low-level layers cannot access the semantic information contained in high-level layers. In this paper, we propose a novel network, named Feedback Graph Convolutional Network (FGCN). This is the first work that introduces a feedback mechanism into GCNs and action recognition. Compared with conventional GCNs, FGCN has the following advantages: (1) a multi-stage temporal sampling strategy is designed to extract spatial-temporal features for action recognition in a coarse-to-fine progressive process; (2) a dense-connection-based Feedback Graph Convolutional Block (FGCB) is proposed to introduce feedback connections into the GCNs, transmitting high-level semantic features to the low-level layers and flowing temporal information stage by stage to progressively model global spatial-temporal features for action recognition; (3) the FGCN model provides early predictions: in the early stages the model receives only partial information about an action, so its predictions are naturally relatively coarse, and these coarse predictions are treated as a prior to guide the feature learning of later stages toward an accurate prediction. Extensive experiments on the NTU-RGB+D, NTU-RGB+D120 and Northwestern-UCLA datasets demonstrate that the proposed FGCN is effective for action recognition, achieving state-of-the-art performance on all three datasets. |
Tasks | Skeleton Based Action Recognition |
Published | 2020-03-17 |
URL | https://arxiv.org/abs/2003.07564v1 |
PDF | https://arxiv.org/pdf/2003.07564v1.pdf |
PWC | https://paperswithcode.com/paper/feedback-graph-convolutional-network-for |
Repo | |
Framework | |
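
Schematically, the feedback mechanism described above splits a skeleton sequence into temporal stages, feeds each stage's high-level features back into the processing of the next stage, and emits an early prediction per stage that is progressively refined. The sketch below captures only that control flow; `gcn_block`, `fuse`, and `classifier` are hypothetical modules, not the paper's FGCB.

```python
# Control-flow sketch of stage-wise feedback: each stage sees its own
# input plus the high-level features of the previous stage, and each
# stage emits an (increasingly refined) early prediction.
def fgcn_forward(stages, gcn_block, fuse, classifier):
    feedback, predictions = None, []
    for x in stages:                          # coarse-to-fine over time
        h = gcn_block(x if feedback is None else fuse(x, feedback))
        predictions.append(classifier(h))     # early prediction for this stage
        feedback = h                          # high-level features flow forward
    return predictions
```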
Online Sinkhorn: optimal transportation distances from sample streams
Title | Online Sinkhorn: optimal transportation distances from sample streams |
Authors | Arthur Mensch, Gabriel Peyré |
Abstract | Optimal Transport (OT) distances are now routinely used as loss functions in ML tasks. Yet, computing OT distances between arbitrary (i.e. not necessarily discrete) probability distributions remains an open problem. This paper introduces a new online estimator of entropy-regularized OT distances between two such arbitrary distributions. It uses streams of samples from both distributions to iteratively enrich a non-parametric representation of the transportation plan. Compared to the classic Sinkhorn algorithm, our method leverages new samples at each iteration, which enables a consistent estimation of the true regularized OT distance. We cast our algorithm as a block-convex mirror descent in the space of positive distributions, and provide a theoretical analysis of its convergence. We numerically illustrate the performance of our method in comparison with competing approaches. |
Tasks | |
Published | 2020-03-03 |
URL | https://arxiv.org/abs/2003.01415v1 |
PDF | https://arxiv.org/pdf/2003.01415v1.pdf |
PWC | https://paperswithcode.com/paper/online-sinkhorn-optimal-transportation |
Repo | |
Framework | |
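
For context, the classic batch Sinkhorn iterations that the online estimator generalises look as follows; the paper's contribution is to drive such updates with streams of fresh samples rather than a fixed cost matrix, and that online variant is not reproduced here.

```python
# Classic (batch) Sinkhorn: alternating dual scalings for
# entropy-regularised OT between two discrete measures.
import numpy as np

def sinkhorn(a, b, C, eps=0.1, iters=500):
    """Entropic OT cost between histograms a, b with cost matrix C."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]       # transportation plan
    return (P * C).sum()
```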
Kernel Conditional Moment Test via Maximum Moment Restriction
Title | Kernel Conditional Moment Test via Maximum Moment Restriction |
Authors | Krikamol Muandet, Wittawat Jitkrittum, Jonas Kübler |
Abstract | We propose a new family of specification tests called kernel conditional moment (KCM) tests. Our tests are built on conditional moment embeddings (CMME), a novel representation of conditional moment restrictions in a reproducing kernel Hilbert space (RKHS). After transforming the conditional moment restrictions into a continuum of unconditional counterparts, the test statistic is defined as the maximum moment restriction within the unit ball of the RKHS. We show that the CMME fully characterizes the original conditional moment restrictions, leading to consistency in both hypothesis testing and parameter estimation. The proposed test also has an analytic expression that is easy to compute, as well as closed-form asymptotic distributions. Our empirical studies show that the KCM test has promising finite-sample performance compared to existing tests. |
Tasks | |
Published | 2020-02-21 |
URL | https://arxiv.org/abs/2002.09225v2 |
PDF | https://arxiv.org/pdf/2002.09225v2.pdf |
PWC | https://paperswithcode.com/paper/kernel-conditional-moment-test-via-maximum |
Repo | |
Framework | |
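
Under one reading of the abstract, with scalar residuals psi_i = psi(z_i; theta) from the conditional moment restriction E[psi(Z; theta) | X] = 0, the squared maximum moment restriction over the RKHS unit ball reduces to a quadratic form in the residuals. The V-statistic below sketches that analytic expression under these assumptions; it is not lifted from the paper.

```python
# Squared maximum moment restriction as a quadratic form in residuals,
# with a Gaussian kernel on the conditioning variables X.
import numpy as np

def kcm_statistic(psi: np.ndarray, X: np.ndarray, gamma: float = 1.0) -> float:
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                 # kernel on conditioning variables
    n = len(psi)
    return float(psi @ K @ psi) / n ** 2
```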
A General Method for Robust Learning from Batches
Title | A General Method for Robust Learning from Batches |
Authors | Ayush Jain, Alon Orlitsky |
Abstract | In many applications, data is collected in batches, some of which are corrupt or even adversarial. Recent work derived optimal robust algorithms for estimating discrete distributions in this setting. We consider a general framework of robust learning from batches, and determine the limits of both classification and distribution estimation over arbitrary, including continuous, domains. Building on these results, we derive the first robust, agnostic, computationally efficient learning algorithms for piecewise-interval classification, and for piecewise-polynomial, monotone, log-concave, and Gaussian-mixture distribution estimation. |
Tasks | |
Published | 2020-02-25 |
URL | https://arxiv.org/abs/2002.11099v1 |
PDF | https://arxiv.org/pdf/2002.11099v1.pdf |
PWC | https://paperswithcode.com/paper/a-general-method-for-robust-learning-from |
Repo | |
Framework | |
Theoretical Models of Learning to Learn
Title | Theoretical Models of Learning to Learn |
Authors | Jonathan Baxter |
Abstract | A machine can only learn if it is biased in some way. Typically the bias is supplied by hand, for example through the choice of an appropriate set of features. However, if the learning machine is embedded within an *environment* of related tasks, then it can *learn* its own bias by learning sufficiently many tasks from the environment. In this paper, two models of bias learning (or, equivalently, learning to learn) are introduced and the main theoretical results are presented. The first model is a PAC-type model based on empirical process theory, while the second is a hierarchical Bayes model. |
Tasks | |
Published | 2020-02-27 |
URL | https://arxiv.org/abs/2002.12364v1 |
PDF | https://arxiv.org/pdf/2002.12364v1.pdf |
PWC | https://paperswithcode.com/paper/theoretical-models-of-learning-to-learn |
Repo | |
Framework | |
Generalized Policy Elimination: an efficient algorithm for Nonparametric Contextual Bandits
Title | Generalized Policy Elimination: an efficient algorithm for Nonparametric Contextual Bandits |
Authors | Aurélien F. Bibaut, Antoine Chambaz, Mark J. van der Laan |
Abstract | We propose the Generalized Policy Elimination (GPE) algorithm, an oracle-efficient contextual bandit (CB) algorithm inspired by the Policy Elimination algorithm of Dudík et al. (2011). We prove the first regret-optimality guarantee for an oracle-efficient CB algorithm competing against a nonparametric class with infinite VC-dimension. Specifically, we show that GPE is regret-optimal (up to logarithmic factors) for policy classes with integrable entropy. For classes with larger entropy, we show that the core techniques used to analyze GPE can be used to design an ε-greedy algorithm with a regret bound matching that of the best algorithms to date. We illustrate the applicability of our algorithms and theorems with examples of large nonparametric policy classes for which the relevant optimization oracles can be efficiently implemented. |
Tasks | Multi-Armed Bandits |
Published | 2020-03-05 |
URL | https://arxiv.org/abs/2003.02873v1 |
PDF | https://arxiv.org/pdf/2003.02873v1.pdf |
PWC | https://paperswithcode.com/paper/generalized-policy-elimination-an-efficient |
Repo | |
Framework | |
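
The ε-greedy companion algorithm mentioned above follows the generic contextual-bandit skeleton: explore uniformly with probability ε, otherwise act greedily with respect to a reward oracle. The sketch below shows only that skeleton; `oracle` is a hypothetical regression oracle, and GPE's elimination machinery is not shown.

```python
# Generic epsilon-greedy arm selection for one contextual-bandit round.
import numpy as np

def epsilon_greedy_round(context, oracle, n_arms, epsilon, rng=None):
    rng = rng or np.random.default_rng()
    if rng.random() < epsilon:
        return int(rng.integers(n_arms))     # explore uniformly
    # Exploit: pick the arm with the highest predicted reward.
    return int(np.argmax([oracle.predict(context, a) for a in range(n_arms)]))
```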
Finding Optimal Points for Expensive Functions Using Adaptive RBF-Based Surrogate Model Via Uncertainty Quantification
Title | Finding Optimal Points for Expensive Functions Using Adaptive RBF-Based Surrogate Model Via Uncertainty Quantification |
Authors | Ray-Bing Chen, Yuan Wang, C. F. Jeff Wu |
Abstract | Global optimization of expensive functions has important applications in physical and computer experiments. Developing an efficient optimization scheme is challenging, because each function evaluation can be costly and derivative information about the function is often unavailable. We propose a novel global optimization framework using an adaptive Radial Basis Function (RBF) based surrogate model with uncertainty quantification. The framework iterates between two steps. It first employs an RBF-based Bayesian surrogate model to approximate the true function, where the parameters of the RBFs are adaptively estimated and updated each time a new point is explored. It then uses a model-guided selection criterion to identify a new point from a candidate set for function evaluation. The selection criterion adopted here is a sample version of the expected improvement (EI) criterion. We conduct simulation studies with standard test functions, which show that the proposed method has advantages, especially when the true surface is not very smooth. In addition, we propose modified approaches to improve the search for global optima and to handle higher-dimensional scenarios. |
Tasks | |
Published | 2020-01-19 |
URL | https://arxiv.org/abs/2001.06858v1 |
PDF | https://arxiv.org/pdf/2001.06858v1.pdf |
PWC | https://paperswithcode.com/paper/finding-optimal-points-for-expensive |
Repo | |
Framework | |
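
The expected improvement (EI) criterion named above has a standard closed form for a Gaussian surrogate: for minimisation, with posterior mean mu and standard deviation sigma at a candidate point and best value f_min found so far, EI = (f_min - mu) * Phi(z) + sigma * phi(z) with z = (f_min - mu) / sigma. The paper uses a sample version of EI; the snippet below is the textbook closed form.

```python
# Closed-form expected improvement for a Gaussian surrogate (minimisation).
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_min):
    sigma = np.maximum(sigma, 1e-12)        # guard against zero variance
    z = (f_min - mu) / sigma
    return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)
```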
A game-theoretic approach for Generative Adversarial Networks
Title | A game-theoretic approach for Generative Adversarial Networks |
Authors | Barbara Franci, Sergio Grammatico |
Abstract | Generative adversarial networks (GANs) are a class of generative models known for producing accurate samples. The key feature of GANs is that there are two antagonistic neural networks: the generator and the discriminator. The main bottleneck for their implementation is that the neural networks are very hard to train. One way to improve their performance is to design reliable algorithms for the adversarial process. Since the training can be cast as a stochastic Nash equilibrium problem, we rewrite it as a variational inequality and introduce an algorithm to compute an approximate solution. Specifically, we propose a stochastic relaxed forward-backward algorithm for GANs. We prove that, when the pseudogradient mapping of the game is monotone, the algorithm converges to an exact solution or to a neighbourhood of one. |
Tasks | |
Published | 2020-03-30 |
URL | https://arxiv.org/abs/2003.13637v1 |
PDF | https://arxiv.org/pdf/2003.13637v1.pdf |
PWC | https://paperswithcode.com/paper/a-game-theoretic-approach-for-generative |
Repo | |
Framework | |
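
A relaxed forward-backward iteration for a variational inequality with operator F combines a projected (pseudo)gradient step with an averaging step. The sketch below shows that generic template, assuming F stacks the generator's and discriminator's stochastic gradients into one vector; the paper's exact step sizes, averaging scheme, and convergence conditions are not reproduced.

```python
# One generic relaxed forward-backward step for a variational inequality.
import numpy as np

def relaxed_fb_step(x, F, project, gamma=0.01, delta=0.5):
    y = project(x - gamma * F(x))      # forward (operator) + backward (projection)
    return (1 - delta) * x + delta * y  # relaxation: convex combination of iterates
```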
Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization
Title | Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization |
Authors | Sicheng Zhu, Xiao Zhang, David Evans |
Abstract | Training machine learning models to be robust against adversarial inputs poses seemingly insurmountable challenges. To better understand model robustness, we consider the underlying problem of learning robust representations. We develop a general definition of representation vulnerability that captures the maximum change of mutual information between the input and output distributions, under the worst-case input distribution perturbation. We prove a theorem that establishes a lower bound on the minimum adversarial risk that can be achieved for any downstream classifier based on this definition. We then propose an unsupervised learning method for obtaining intrinsically robust representations by maximizing the worst-case mutual information between input and output distributions. Experiments on downstream classification tasks and analyses of saliency maps support the robustness of the representations found using unsupervised learning with our training principle. |
Tasks | |
Published | 2020-02-26 |
URL | https://arxiv.org/abs/2002.11798v1 |
PDF | https://arxiv.org/pdf/2002.11798v1.pdf |
PWC | https://paperswithcode.com/paper/learning-adversarially-robust-representations |
Repo | |
Framework | |