April 2, 2020


Paper Group ANR 296



Multiresolution Tensor Learning for Efficient and Interpretable Spatial Analysis

Title Multiresolution Tensor Learning for Efficient and Interpretable Spatial Analysis
Authors Jung Yeon Park, Kenneth Theo Carr, Stephan Zheng, Yisong Yue, Rose Yu
Abstract Efficient and interpretable spatial analysis is crucial in many fields such as geology, sports, and climate science. Large-scale spatial data often contains complex higher-order correlations across features and locations. While tensor latent factor models can describe higher-order correlations, they are inherently computationally expensive to train. Furthermore, for spatial analysis, these models should not only be predictive but also be spatially coherent. However, latent factor models are sensitive to initialization and can yield inexplicable results. We develop a novel Multi-resolution Tensor Learning (MRTL) algorithm for efficiently learning interpretable spatial patterns. MRTL initializes the latent factors from an approximate full-rank tensor model for improved interpretability and progressively learns from coarse to fine resolutions for a substantial computational speedup. We also prove the theoretical convergence and computational complexity of MRTL. When applied to two real-world datasets, MRTL demonstrates a 4-5x speedup compared to a fixed-resolution baseline while yielding accurate and interpretable models.
Published 2020-02-13
URL https://arxiv.org/abs/2002.05578v2
PDF https://arxiv.org/pdf/2002.05578v2.pdf
PWC https://paperswithcode.com/paper/multiresolution-tensor-learning-for-efficient
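The coarse-to-fine idea behind MRTL is easy to sketch outside the tensor setting. Below is a minimal plain-Python illustration on a 1D spatial grid with a linear model: fit cheaply at half resolution, upsample the learned weights, then fine-tune at full resolution. The grids, weights, and training loop are made-up toy choices, not the authors' algorithm.

```python
import random

def fit(weights, data, lr=0.05, epochs=50):
    # plain per-sample gradient descent on squared error for a linear model
    for _ in range(epochs):
        for x, y in data:
            err = sum(w * xi for w, xi in zip(weights, x)) - y
            for i in range(len(weights)):
                weights[i] -= lr * err * x[i]
    return weights

def coarsen(x):
    # merge adjacent grid cells (sum) to form the coarse-resolution input
    return [x[i] + x[i + 1] for i in range(0, len(x), 2)]

def upsample(weights):
    # double the spatial resolution by repeating each coarse weight
    return [w for w in weights for _ in range(2)]

random.seed(0)
true_w = [0.5, -0.2, 0.8, 0.1, -0.4, 0.3, 0.6, -0.1]  # fine-grid weights
data = []
for _ in range(200):
    x = [random.uniform(-1.0, 1.0) for _ in range(8)]
    data.append((x, sum(w * xi for w, xi in zip(true_w, x))))

coarse_data = [(coarsen(x), y) for x, y in data]
w = fit([0.0] * 4, coarse_data)   # cheap coarse stage on a 4-cell grid
w = fit(upsample(w), data)        # short fine-tuning stage on the 8-cell grid
```

The coarse stage only updates half as many parameters per step, which is where the speedup comes from; the upsampled weights then serve as a warm start at full resolution.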

CAZSL: Zero-Shot Regression for Pushing Models by Generalizing Through Context

Title CAZSL: Zero-Shot Regression for Pushing Models by Generalizing Through Context
Authors Wenyu Zhang, Skyler Seto, Devesh K. Jha
Abstract Learning accurate models of the physical world is required for many robotic manipulation tasks. However, during manipulation, robots are expected to interact with unknown workpieces, so building predictive models that can generalize over a number of these objects is highly desirable. In this paper, we study the problem of designing learning agents which can generalize their models of the physical world by building context-aware learning models. The purpose of these agents is to quickly adapt and/or generalize their notion of physics of interaction in the real world based on certain features about the interacting objects that provide different contexts to the predictive models. With this motivation, we present context-aware zero shot learning (CAZSL, pronounced as ‘casual’) models, an approach utilizing a Siamese network architecture, embedding space masking, and regularization based on context variables, which allows us to learn a model that can generalize to different parameters or features of the interacting objects. We test our proposed learning algorithm on the recently released Omnipush dataset, which allows testing of meta-learning capabilities using low-dimensional data.
Tasks Meta-Learning, Zero-Shot Learning
Published 2020-03-26
URL https://arxiv.org/abs/2003.11696v1
PDF https://arxiv.org/pdf/2003.11696v1.pdf
PWC https://paperswithcode.com/paper/cazsl-zero-shot-regression-for-pushing-models
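The embedding-space masking mentioned in the abstract can be sketched in a few lines: a context vector is mapped through a sigmoid to a soft (0, 1) gate per embedding dimension. The weight matrix and vectors below are made-up numbers for illustration, and this is only the masking mechanic, not the authors' Siamese architecture or regularizer.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def context_mask(context, W):
    # map the context vector to a soft (0, 1) mask, one gate per embedding dim
    return [sigmoid(sum(wij * c for wij, c in zip(row, context))) for row in W]

def masked_embedding(embedding, context, W):
    # gate each embedding dimension by the context-dependent mask
    return [e * m for e, m in zip(embedding, context_mask(context, W))]

# hypothetical numbers purely for illustration
W = [[1.0, -1.0], [0.5, 0.5]]          # context-to-mask weights
mask = context_mask([1.0, 0.0], W)
emb = masked_embedding([2.0, 4.0], [1.0, 0.0], W)
```

Because the mask is a function of the context variables, two objects with different physical parameters produce differently gated embeddings from the same backbone, which is the generalization lever the paper exploits.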

Improving speaker discrimination of target speech extraction with time-domain SpeakerBeam

Title Improving speaker discrimination of target speech extraction with time-domain SpeakerBeam
Authors Marc Delcroix, Tsubasa Ochiai, Katerina Zmolikova, Keisuke Kinoshita, Naohiro Tawara, Tomohiro Nakatani, Shoko Araki
Abstract Target speech extraction, which extracts a single target source in a mixture given clues about the target speaker, has attracted increasing attention. We have recently proposed SpeakerBeam, which exploits an adaptation utterance of the target speaker to extract his/her voice characteristics that are then used to guide a neural network towards extracting speech of that speaker. SpeakerBeam presents a practical alternative to speech separation as it enables tracking speech of a target speaker across utterances, and achieves promising speech extraction performance. However, it sometimes fails when speakers have similar voice characteristics, such as in same-gender mixtures, because it is difficult to discriminate the target speaker from the interfering speakers. In this paper, we investigate strategies for improving the speaker discrimination capability of SpeakerBeam. First, we propose a time-domain implementation of SpeakerBeam similar to that proposed for a time-domain audio separation network (TasNet), which has achieved state-of-the-art performance for speech separation. In addition, we investigate (1) the use of spatial features to better discriminate speakers when microphone array recordings are available, and (2) the addition of an auxiliary speaker identification loss to help learn more discriminative voice characteristics. We show experimentally that these strategies greatly improve speech extraction performance, especially for same-gender mixtures, and outperform TasNet in terms of target speech extraction.
Tasks Speaker Identification, Speech Separation
Published 2020-01-23
URL https://arxiv.org/abs/2001.08378v1
PDF https://arxiv.org/pdf/2001.08378v1.pdf
PWC https://paperswithcode.com/paper/improving-speaker-discrimination-of-target
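The auxiliary speaker-identification loss amounts to a multi-task objective: the signal-level extraction loss plus a weighted cross-entropy term on speaker logits. A minimal sketch of that combination, with `alpha` as a made-up weighting hyperparameter, not the authors' exact training objective:

```python
import math

def combined_loss(extraction_loss, speaker_logits, target_speaker, alpha=0.1):
    # multi-task objective: signal-level extraction loss plus a weighted
    # softmax cross-entropy over speaker-identification logits
    exps = [math.exp(z) for z in speaker_logits]
    ce = -math.log(exps[target_speaker] / sum(exps))
    return extraction_loss + alpha * ce
```

The auxiliary term only shapes the speaker embedding during training; at inference time, extraction runs as before.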

Interference and Generalization in Temporal Difference Learning

Title Interference and Generalization in Temporal Difference Learning
Authors Emmanuel Bengio, Joelle Pineau, Doina Precup
Abstract We study the link between generalization and interference in temporal-difference (TD) learning. Interference is defined as the inner product of two different gradients, representing their alignment. This quantity emerges as being of interest from a variety of observations about neural networks, parameter sharing and the dynamics of learning. We find that TD easily leads to low-interference, under-generalizing parameters, while the effect seems reversed in supervised learning. We hypothesize that the cause can be traced back to the interplay between the dynamics of interference and bootstrapping. This is supported empirically by several observations: the negative relationship between the generalization gap and interference in TD, the negative effect of bootstrapping on interference and the local coherence of targets, and the contrast between the propagation rate of information in TD(0) versus TD($\lambda$) and regression tasks such as Monte-Carlo policy evaluation. We hope that these new findings can guide the future discovery of better bootstrapping methods.
Published 2020-03-13
URL https://arxiv.org/abs/2003.06350v1
PDF https://arxiv.org/pdf/2003.06350v1.pdf
PWC https://paperswithcode.com/paper/interference-and-generalization-in-temporal
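The interference quantity defined in the abstract is just an inner product of two gradients. A toy sketch, assuming a linear value function so that the TD(0) update direction for a state is proportional to its feature vector (the feature vectors below are made-up):

```python
def interference(grad_a, grad_b):
    # alignment of two parameter updates: positive means an update for one
    # example also helps the other, negative means the updates conflict
    return sum(ga * gb for ga, gb in zip(grad_a, grad_b))

# for a linear value function V(s) = theta . phi(s), the TD(0) update
# direction for state s is proportional to its feature vector phi(s)
phi_s = [1.0, 2.0]
phi_t = [0.0, 1.0]
rho = interference(phi_s, phi_t)
```

With disjoint features interference is zero (no transfer, no conflict), which matches the paper's observation that TD tends toward low-interference, under-generalizing solutions.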

tfp.mcmc: Modern Markov Chain Monte Carlo Tools Built for Modern Hardware

Title tfp.mcmc: Modern Markov Chain Monte Carlo Tools Built for Modern Hardware
Authors Junpeng Lao, Christopher Suter, Ian Langmore, Cyril Chimisov, Ashish Saxena, Pavel Sountsov, Dave Moore, Rif A. Saurous, Matthew D. Hoffman, Joshua V. Dillon
Abstract Markov chain Monte Carlo (MCMC) is widely regarded as one of the most important algorithms of the 20th century. Its guarantees of asymptotic convergence, stability, and estimator-variance bounds using only unnormalized probability functions make it indispensable to probabilistic programming. In this paper, we introduce the TensorFlow Probability MCMC toolkit, and discuss some of the considerations that motivated its design.
Tasks Probabilistic Programming
Published 2020-02-04
URL https://arxiv.org/abs/2002.01184v1
PDF https://arxiv.org/pdf/2002.01184v1.pdf
PWC https://paperswithcode.com/paper/tfpmcmc-modern-markov-chain-monte-carlo-tools
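The key property the abstract leans on, that MCMC needs only an unnormalized density, is easy to demonstrate. Below is a random-walk Metropolis sampler in plain Python as a concept illustration; it is not the tfp.mcmc API, and the Gaussian target is a made-up example.

```python
import math
import random

def metropolis(unnorm_logp, init, steps, scale=0.5, seed=0):
    # random-walk Metropolis: requires only an *unnormalized* log-density
    rng = random.Random(seed)
    x, lp = init, unnorm_logp(init)
    samples = []
    for _ in range(steps):
        prop = x + rng.gauss(0.0, scale)
        lp_prop = unnorm_logp(prop)
        # accept with probability min(1, p(prop) / p(x))
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# unnormalized log-density of a Gaussian with mean 3 (no normalizing constant)
samples = metropolis(lambda x: -0.5 * (x - 3.0) ** 2, init=0.0, steps=20000)
```

The tfp.mcmc toolkit packages kernels like this (and far more sophisticated ones, e.g. Hamiltonian Monte Carlo) behind a common interface designed for vectorized hardware.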

Application of independent component analysis and TOPSIS to deal with dependent criteria in multicriteria decision problems

Title Application of independent component analysis and TOPSIS to deal with dependent criteria in multicriteria decision problems
Authors Guilherme Dean Pelegrina, Leonardo Tomazeli Duarte, João Marcos Travassos Romano
Abstract A vast number of multicriteria decision making methods have been developed to deal with the problem of ranking a set of alternatives evaluated in a multicriteria fashion. Very often, these methods assume that the evaluation among criteria is statistically independent. However, in actual problems, the observed data may comprise dependent criteria, which, among other problems, may result in biased rankings. In order to deal with this issue, we propose a novel approach whose aim is to estimate, from the observed data, a set of independent latent criteria, which can be seen as an alternative representation of the original decision matrix. A central element of our approach is to formulate the decision problem as a blind source separation problem, which allows us to apply independent component analysis techniques to estimate the latent criteria. Moreover, we consider TOPSIS-based approaches to obtain the ranking of alternatives from the latent criteria. Results on both synthetic and real data attest to the relevance of the proposed approach.
Tasks Decision Making
Published 2020-02-06
URL https://arxiv.org/abs/2002.02257v1
PDF https://arxiv.org/pdf/2002.02257v1.pdf
PWC https://paperswithcode.com/paper/application-of-independent-component-analysis-1
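The TOPSIS ranking step the paper builds on is compact enough to sketch. Below is a minimal plain-Python TOPSIS for benefit-type criteria (higher is better); the decision matrix and weights are made-up numbers, and the ICA-based estimation of latent criteria from the paper is not shown.

```python
import math

def topsis(matrix, weights):
    # matrix[i][j]: score of alternative i on benefit criterion j (higher = better)
    m, n = len(matrix), len(matrix[0])
    # vector-normalize each column, then apply the criterion weights
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(v[i][j] for i in range(m)) for j in range(n)]
    anti = [min(v[i][j] for i in range(m)) for j in range(n)]

    def dist(row, ref):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(row, ref)))

    # relative closeness to the ideal solution; higher is better
    return [dist(v[i], anti) / (dist(v[i], anti) + dist(v[i], ideal))
            for i in range(m)]

scores = topsis([[9.0, 9.0], [1.0, 1.0], [5.0, 5.0]], [0.5, 0.5])
```

In the paper's pipeline, the columns of this matrix would be the ICA-estimated latent criteria rather than the raw (possibly dependent) criteria.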

High-Level Plan for Behavioral Robot Navigation with Natural Language Directions and R-NET

Title High-Level Plan for Behavioral Robot Navigation with Natural Language Directions and R-NET
Authors Amar Shrestha, Krittaphat Pugdeethosapol, Haowen Fang, Qinru Qiu
Abstract When the navigational environment is known, it can be represented as a graph where landmarks are nodes, the robot behaviors that move from node to node are edges, and the route is a set of behavioral instructions. The route path from source to destination can be viewed as a class of combinatorial optimization problems where the path is a sequential subset from a set of discrete items. The pointer network is an attention-based recurrent network that is suitable for such a task. In this paper, we utilize a modified R-NET with gated attention and self-matching attention translating natural language instructions to a high-level plan for behavioral robot navigation by developing an understanding of the behavioral navigational graph to enable the pointer network to produce a sequence of behaviors representing the path. Tests on the navigation graph dataset show that our model outperforms the state-of-the-art approach for both known and unknown environments.
Tasks Combinatorial Optimization, Robot Navigation
Published 2020-01-08
URL https://arxiv.org/abs/2001.02330v1
PDF https://arxiv.org/pdf/2001.02330v1.pdf
PWC https://paperswithcode.com/paper/high-level-plan-for-behavioral-robot

Supervised Hyperalignment for multi-subject fMRI data alignment

Title Supervised Hyperalignment for multi-subject fMRI data alignment
Authors Muhammad Yousefnezhad, Alessandro Selvitella, Liangxiu Han, Daoqiang Zhang
Abstract Hyperalignment has been widely employed in Multivariate Pattern (MVP) analysis to discover the cognitive states in the human brain based on multi-subject functional Magnetic Resonance Imaging (fMRI) datasets. Most existing HA methods use unsupervised approaches, where they only maximize the correlation between the voxels with the same position in the time series. However, these unsupervised solutions may not be optimum for handling the functional alignment in supervised MVP problems. This paper proposes a Supervised Hyperalignment (SHA) method to ensure better functional alignment for MVP analysis, where the proposed method provides a supervised shared space that can maximize the correlation among the stimuli belonging to the same category and minimize the correlation between distinct categories of stimuli. Further, SHA employs a generalized optimization solution, which generates the shared space and calculates the mapped features in a single iteration, hence achieving optimal time and space complexity for large datasets. Experiments on multi-subject datasets demonstrate that the SHA method achieves up to 19% better performance for multi-class problems over the state-of-the-art HA algorithms.
Tasks Multi-Subject Fmri Data Alignment, Time Series
Published 2020-01-09
URL https://arxiv.org/abs/2001.02894v1
PDF https://arxiv.org/pdf/2001.02894v1.pdf
PWC https://paperswithcode.com/paper/supervised-hyperalignment-for-multi-subject

Accelerating Smooth Games by Manipulating Spectral Shapes

Title Accelerating Smooth Games by Manipulating Spectral Shapes
Authors Waïss Azizian, Damien Scieur, Ioannis Mitliagkas, Simon Lacoste-Julien, Gauthier Gidel
Abstract We use matrix iteration theory to characterize acceleration in smooth games. We define the spectral shape of a family of games as the set containing all eigenvalues of the Jacobians of standard gradient dynamics in the family. Shapes restricted to the real line represent well-understood classes of problems, like minimization. Shapes spanning the complex plane capture the added numerical challenges in solving smooth games. In this framework, we describe gradient-based methods, such as extragradient, as transformations on the spectral shape. Using this perspective, we propose an optimal algorithm for bilinear games. For smooth and strongly monotone operators, we identify a continuum between convex minimization, where acceleration is possible using Polyak’s momentum, and the worst case where gradient descent is optimal. Finally, going beyond first-order methods, we propose an accelerated version of consensus optimization.
Published 2020-01-02
URL https://arxiv.org/abs/2001.00602v2
PDF https://arxiv.org/pdf/2001.00602v2.pdf
PWC https://paperswithcode.com/paper/accelerating-smooth-games-by-manipulating
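The bilinear case discussed in the abstract can be seen numerically: on f(x, y) = xy, the Jacobian eigenvalues of simultaneous gradient descent/ascent lie outside the unit circle (the iterates diverge), while extragradient's look-ahead pulls them inside it. A minimal sketch of that contrast; the step size and iterate counts are arbitrary, and this is not the paper's optimal algorithm.

```python
def sim_gd(x, y, eta, steps):
    # simultaneous gradient descent/ascent on f(x, y) = x * y
    for _ in range(steps):
        x, y = x - eta * y, y + eta * x
    return x, y

def extragradient(x, y, eta, steps):
    # take a look-ahead step, then update using gradients at the look-ahead point
    for _ in range(steps):
        xh, yh = x - eta * y, y + eta * x
        x, y = x - eta * yh, y + eta * xh
    return x, y

x1, y1 = sim_gd(1.0, 1.0, 0.1, 500)          # spirals outward
x2, y2 = extragradient(1.0, 1.0, 0.1, 500)   # spirals into the equilibrium (0, 0)
```

In the paper's language, extragradient acts as a transformation on the spectral shape: the purely imaginary eigenvalues of the bilinear game's gradient dynamics are mapped to eigenvalues with modulus below one.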

Nonasymptotic analysis of Stochastic Gradient Hamiltonian Monte Carlo under local conditions for nonconvex optimization

Title Nonasymptotic analysis of Stochastic Gradient Hamiltonian Monte Carlo under local conditions for nonconvex optimization
Authors Ömer Deniz Akyildiz, Sotirios Sabanis
Abstract We provide a nonasymptotic analysis of the convergence of the stochastic gradient Hamiltonian Monte Carlo (SGHMC) to a target measure in Wasserstein-2 distance without assuming log-concavity. By making the dimension dependence explicit, we provide a uniform convergence rate of order $\mathcal{O}(\eta^{1/4} )$, where $\eta$ is the step-size. Our results shed light on the performance of the SGHMC methods compared to their overdamped counterparts, e.g., stochastic gradient Langevin dynamics (SGLD). Furthermore, our results also imply that the SGHMC, when viewed as a nonconvex optimizer, converges to a global minimum with the best known rates.
Published 2020-02-13
URL https://arxiv.org/abs/2002.05465v1
PDF https://arxiv.org/pdf/2002.05465v1.pdf
PWC https://paperswithcode.com/paper/nonasymptotic-analysis-of-stochastic-gradient
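The SGHMC iteration analyzed here is an underdamped Langevin discretization: a velocity variable gets a friction term, the (stochastic) gradient of the potential, and injected Gaussian noise. A minimal sketch on a quadratic potential whose target is N(3, 1); the step size, friction, and step count are arbitrary illustration choices, and the gradient here is exact rather than stochastic.

```python
import math
import random

def sghmc(grad_u, theta, steps, eta=0.05, gamma=1.0, seed=0):
    # underdamped Langevin / SGHMC-style update (unit mass, unit temperature):
    #   v     <- v - eta*gamma*v - eta*grad_U(theta) + N(0, 2*gamma*eta)
    #   theta <- theta + eta*v
    rng = random.Random(seed)
    v, samples = 0.0, []
    noise_std = math.sqrt(2.0 * gamma * eta)
    for _ in range(steps):
        v += -eta * gamma * v - eta * grad_u(theta) + rng.gauss(0.0, noise_std)
        theta += eta * v
        samples.append(theta)
    return samples

# potential U(theta) = 0.5 * (theta - 3)^2, so grad_U(theta) = theta - 3
samples = sghmc(lambda t: t - 3.0, theta=3.0, steps=20000)
```

The paper's $\mathcal{O}(\eta^{1/4})$ bound quantifies how close the law of these iterates gets to the target measure in Wasserstein-2 distance as a function of the step size $\eta$.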

To Split or Not to Split: The Impact of Disparate Treatment in Classification

Title To Split or Not to Split: The Impact of Disparate Treatment in Classification
Authors Hao Wang, Hsiang Hsu, Mario Diaz, Flavio P. Calmon
Abstract Disparate treatment occurs when a machine learning model produces different decisions for groups defined by a legally protected or sensitive attribute (e.g., race, gender). In domains where prediction accuracy is paramount, it may be acceptable to fit a model which exhibits disparate treatment. We explore the effect of splitting classifiers (i.e., training and deploying a separate classifier on each group) and derive an information-theoretic impossibility result: there exist precise conditions under which a group-blind classifier will always have a non-trivial performance gap from the split classifiers. We further demonstrate that, in the finite sample regime, splitting is no longer always beneficial and relies on the number of samples from each group and the complexity of the hypothesis class. We provide data-dependent bounds for understanding the effect of splitting and illustrate these bounds on real-world datasets.
Published 2020-02-12
URL https://arxiv.org/abs/2002.04788v1
PDF https://arxiv.org/pdf/2002.04788v1.pdf
PWC https://paperswithcode.com/paper/to-split-or-not-to-split-the-impact-of
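The split-versus-blind gap is easy to exhibit on a toy example. Below, each group is linearly separable by a threshold, but the two groups need different thresholds, so no single group-blind threshold matches the split classifiers. The data points are made-up numbers; this illustrates the phenomenon, not the paper's information-theoretic bounds.

```python
def accuracy(points, t):
    # classifier "predict 1 iff x >= t" on a list of (x, label) pairs
    return sum((x >= t) == bool(y) for x, y in points) / len(points)

def best_threshold(points):
    # exhaustively pick the observed value that maximizes accuracy
    candidates = sorted({x for x, _ in points})
    return max(candidates, key=lambda t: accuracy(points, t))

group_a = [(-1.0, 0), (-0.5, 0), (0.5, 1), (1.0, 1)]   # positives above 0
group_b = [(1.0, 0), (1.5, 0), (2.5, 1), (3.0, 1)]     # positives above 2
t_a, t_b = best_threshold(group_a), best_threshold(group_b)
t_blind = best_threshold(group_a + group_b)
```

The split classifiers are perfect on their own groups, while the best group-blind threshold must misclassify some points of one group, which is the performance gap the impossibility result formalizes; the paper's finite-sample caveat is that splitting also halves the data each classifier sees.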

Temporal-Spatial Neural Filter: Direction Informed End-to-End Multi-channel Target Speech Separation

Title Temporal-Spatial Neural Filter: Direction Informed End-to-End Multi-channel Target Speech Separation
Authors Rongzhi Gu, Yuexian Zou
Abstract Target speech separation refers to extracting the target speaker’s speech from mixed signals. Despite the recent advances in deep learning based close-talk speech separation, applications to real-world conditions remain an open issue. Two main challenges are the complex acoustic environment and the real-time processing requirement. To address these challenges, we propose a temporal-spatial neural filter, which directly estimates the target speech waveform from a multi-speaker mixture in reverberant environments, assisted by directional information of the speaker(s). Firstly, against variations brought by the complex environment, the key idea is to increase the completeness of the acoustic representation through joint modeling of the temporal, spectral, and spatial discriminability between the target and interference sources. Specifically, temporal, spectral, and spatial features, along with the designed directional features, are integrated to create a joint acoustic representation. Secondly, to reduce the latency, we design a fully-convolutional autoencoder framework, which is purely end-to-end and single-pass. All the feature computation is implemented by the network layers and operations to speed up the separation procedure. Evaluation is conducted on the simulated reverberant datasets WSJ0-2mix and WSJ0-3mix under the speaker-independent scenario. Experimental results demonstrate that the proposed method outperforms state-of-the-art deep learning based multi-channel approaches with fewer parameters and faster processing speed. Furthermore, the proposed temporal-spatial neural filter can handle mixtures with a varying and unknown number of speakers and exhibits persistent performance even in the presence of direction estimation errors. Codes and models will be released soon.
Tasks Speech Separation
Published 2020-01-02
URL https://arxiv.org/abs/2001.00391v1
PDF https://arxiv.org/pdf/2001.00391v1.pdf
PWC https://paperswithcode.com/paper/temporal-spatial-neural-filter-direction

Coherent and Archimedean choice in general Banach spaces

Title Coherent and Archimedean choice in general Banach spaces
Authors Gert de Cooman
Abstract I introduce and study a new notion of Archimedeanity for binary and non-binary choice between options that live in an abstract Banach space, through a very general class of choice models, called sets of desirable option sets. In order to be able to bring horse lottery options into the fold, I pay special attention to the case where these linear spaces do not include all 'constant' options. I consider the frameworks of conservative inference associated with Archimedean (and coherent) choice models, and also pay quite a lot of attention to representation of general (non-binary) choice models in terms of the simpler, binary ones. The representation theorems proved here provide an axiomatic characterisation of, amongst other choice methods, Levi’s E-admissibility and Walley–Sen maximality.
Published 2020-02-13
URL https://arxiv.org/abs/2002.05461v1
PDF https://arxiv.org/pdf/2002.05461v1.pdf
PWC https://paperswithcode.com/paper/coherent-and-archimedean-choice-in-general

Towards a combinatorial characterization of bounded memory learning

Title Towards a combinatorial characterization of bounded memory learning
Authors Alon Gonen, Shachar Lovett, Michal Moshkovitz
Abstract Combinatorial dimensions play an important role in the theory of machine learning. For example, VC dimension characterizes PAC learning, SQ dimension characterizes weak learning with statistical queries, and Littlestone dimension characterizes online learning. In this paper we aim to develop combinatorial dimensions that characterize bounded memory learning. We propose a candidate solution for the case of realizable strong learning under a known distribution, based on the SQ dimension of neighboring distributions. We prove both upper and lower bounds for our candidate solution, that match in some regime of parameters. In this parameter regime there is an equivalence between bounded memory and SQ learning. We conjecture that our characterization holds in a much wider regime of parameters.
Published 2020-02-08
URL https://arxiv.org/abs/2002.03123v1
PDF https://arxiv.org/pdf/2002.03123v1.pdf
PWC https://paperswithcode.com/paper/towards-a-combinatorial-characterization-of

Touch the Wind: Simultaneous Airflow, Drag and Interaction Sensing on a Multirotor

Title Touch the Wind: Simultaneous Airflow, Drag and Interaction Sensing on a Multirotor
Authors Andrea Tagliabue, Aleix Paris, Suhan Kim, Regan Kubicek, Sarah Bergbreiter, Jonathan P. How
Abstract Disturbance estimation for Micro Aerial Vehicles (MAVs) is crucial for robustness and safety. In this paper, we use novel, bio-inspired airflow sensors to measure the airflow acting on a MAV, and we fuse this information in an Unscented Kalman Filter (UKF) to simultaneously estimate the three-dimensional wind vector, the drag force, and other interaction forces (e.g. due to collisions, interaction with a human) acting on the robot. To this end, we present and compare a fully model-based and a deep learning-based strategy. The model-based approach considers the MAV and airflow sensor dynamics and its interaction with the wind, while the deep learning-based strategy uses a Long Short-Term Memory (LSTM) neural network to obtain an estimate of the relative airflow, which is then fused in the proposed filter. We validate our methods in hardware experiments, showing that we can accurately estimate relative airflow of up to 4 m/s, and we can differentiate drag and interaction force.
Published 2020-03-04
URL https://arxiv.org/abs/2003.02305v1
PDF https://arxiv.org/pdf/2003.02305v1.pdf
PWC https://paperswithcode.com/paper/touch-the-wind-simultaneous-airflow-drag-and
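The fusion idea behind the paper's filter can be illustrated in one dimension: treat the wind speed as a slowly varying hidden state and recursively correct it with noisy airflow-sensor readings. Below is a scalar linear Kalman filter as a concept sketch; the authors use an Unscented Kalman Filter over the full MAV dynamics, and the noise variances and readings here are made-up numbers.

```python
import random

def kalman_fuse(readings, q=0.01, r=0.25, x0=0.0, p0=1.0):
    # 1-D random-walk Kalman filter: x is the (assumed slowly varying) wind
    # speed, readings are noisy airflow measurements, q/r are the process
    # and measurement noise variances
    x, p = x0, p0
    estimates = []
    for z in readings:
        p += q                  # predict: random-walk process model
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # correct with the sensor reading
        p *= 1.0 - k
        estimates.append(x)
    return estimates

rng = random.Random(0)
readings = [2.0 + rng.gauss(0.0, 0.5) for _ in range(200)]  # 2 m/s true wind
estimates = kalman_fuse(readings)
```

In the paper, the corresponding state additionally contains the drag and interaction forces, so the same predict/correct cycle lets the filter attribute a measured disturbance to wind, drag, or contact.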