Paper Group ANR 513
Explore Aggressively, Update Conservatively: Stochastic Extragradient Methods with Variable Stepsize Scaling. XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalization. Deep Inverse Feature Learning: A Representation Learning of Error. Distributed No-Regret Learning in Multi-Agent Systems. Towards Multi-persp …
Explore Aggressively, Update Conservatively: Stochastic Extragradient Methods with Variable Stepsize Scaling
Title | Explore Aggressively, Update Conservatively: Stochastic Extragradient Methods with Variable Stepsize Scaling |
Authors | Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos |
Abstract | Owing to their stability and convergence speed, extragradient methods have become a staple for solving large-scale saddle-point problems in machine learning. The basic premise of these algorithms is the use of an extrapolation step before performing an update; thanks to this exploration step, extragradient methods overcome many of the non-convergence issues that plague gradient descent/ascent schemes. On the other hand, as we show in this paper, running vanilla extragradient with stochastic gradients may jeopardize its convergence, even in simple bilinear models. To overcome this failure, we investigate a double stepsize extragradient algorithm where the exploration step evolves at a more aggressive time-scale compared to the update step. We show that this modification allows the method to converge even with stochastic gradients, and we derive sharp convergence rates under an error bound condition. |
Tasks | |
Published | 2020-03-23 |
URL | https://arxiv.org/abs/2003.10162v1 |
https://arxiv.org/pdf/2003.10162v1.pdf | |
PWC | https://paperswithcode.com/paper/explore-aggressively-update-conservatively |
Repo | |
Framework | |
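The double-stepsize idea can be illustrated on the paper's motivating bilinear case. Below is a minimal sketch (not the authors' code) of stochastic extragradient on min_x max_y xy, where the exploration (extrapolation) stepsize decays more slowly than the update stepsize; the specific schedules and noise level are illustrative assumptions.

```python
import numpy as np

def double_stepsize_extragradient(steps=20000, noise=0.1, seed=0):
    """Two-timescale stochastic extragradient on the bilinear saddle
    point min_x max_y x*y, whose unique solution is (0, 0)."""
    rng = np.random.default_rng(seed)
    x, y = 1.0, 1.0
    for t in range(1, steps + 1):
        gamma = 1.0 / t ** 0.5  # aggressive exploration stepsize
        eta = 1.0 / t           # conservative update stepsize
        # stochastic gradients of f(x, y) = x * y
        gx = y + noise * rng.normal()
        gy = x + noise * rng.normal()
        # exploration step: descend in x, ascend in y
        xh, yh = x - gamma * gx, y + gamma * gy
        gxh = yh + noise * rng.normal()
        gyh = xh + noise * rng.normal()
        # conservative update using gradients at the extrapolated point
        x, y = x - eta * gxh, y + eta * gyh
    return x, y
```

With a single shared stepsize, the same stochastic iteration can cycle or diverge on this problem; the two-timescale variant drifts toward the saddle point (0, 0).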
XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalization
Title | XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalization |
Authors | Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, Melvin Johnson |
Abstract | Much recent progress in applications of machine learning models to NLP has been driven by benchmarks that evaluate models across a wide variety of tasks. However, these broad-coverage benchmarks have been mostly limited to English, and despite an increasing interest in multilingual models, a benchmark that enables the comprehensive evaluation of such methods on a diverse range of languages and tasks is still missing. To this end, we introduce the Cross-lingual TRansfer Evaluation of Multilingual Encoders (XTREME) benchmark, a multi-task benchmark for evaluating the cross-lingual generalization capabilities of multilingual representations across 40 languages and 9 tasks. We demonstrate that while models tested on English reach human performance on many tasks, there is still a sizable gap in the performance of cross-lingually transferred models, particularly on syntactic and sentence retrieval tasks. There is also a wide spread of results across languages. We release the benchmark to encourage research on cross-lingual learning methods that transfer linguistic knowledge across a diverse and representative set of languages and tasks. |
Tasks | Cross-Lingual Transfer |
Published | 2020-03-24 |
URL | https://arxiv.org/abs/2003.11080v2 |
https://arxiv.org/pdf/2003.11080v2.pdf | |
PWC | https://paperswithcode.com/paper/xtreme-a-massively-multilingual-multi-task |
Repo | |
Framework | |
Deep Inverse Feature Learning: A Representation Learning of Error
Title | Deep Inverse Feature Learning: A Representation Learning of Error |
Authors | Behzad Ghazanfari, Fatemeh Afghah |
Abstract | This paper introduces a novel perspective on error in machine learning and proposes inverse feature learning (IFL), a representation learning approach that learns a set of high-level features based on the representation of error for classification or clustering purposes. The proposed perspective on error representation is fundamentally different from that of current learning methods, which interpret error either as a function of the differences between the true labels and the predicted ones (in classification) or through clustering objective functions such as compactness (in clustering). The inverse feature learning method builds on a deep clustering approach to obtain a qualitative form of the representation of error as features. The performance of the proposed IFL method is evaluated by applying the learned features, either alongside the original features or on their own, in different classification and clustering techniques on several data sets. The experimental results show that the proposed method leads to promising results in classification and especially in clustering. In classification, the proposed features along with the primary features improve the results of most of the classification methods on several popular data sets. In clustering, the performance of different clustering methods is considerably improved on different data sets. Interestingly, a few features of the error representation capture highly informative aspects of the primary features. We hope this paper helps to promote the use of error representation learning in different feature learning domains. |
Tasks | Representation Learning |
Published | 2020-03-09 |
URL | https://arxiv.org/abs/2003.04285v1 |
https://arxiv.org/pdf/2003.04285v1.pdf | |
PWC | https://paperswithcode.com/paper/deep-inverse-feature-learning-a |
Repo | |
Framework | |
Distributed No-Regret Learning in Multi-Agent Systems
Title | Distributed No-Regret Learning in Multi-Agent Systems |
Authors | Xiao Xu, Qing Zhao |
Abstract | In this tutorial article, we give an overview of new challenges and representative results on distributed no-regret learning in multi-agent systems modeled as repeated unknown games. Four emerging game characteristics—dynamicity, incomplete and imperfect feedback, bounded rationality, and heterogeneity—that challenge canonical game models are explored. For each of the four characteristics, we illuminate its implications and ramifications in game modeling, notions of regret, feasible game outcomes, and the design and analysis of distributed learning algorithms. |
Tasks | |
Published | 2020-02-20 |
URL | https://arxiv.org/abs/2002.09047v1 |
https://arxiv.org/pdf/2002.09047v1.pdf | |
PWC | https://paperswithcode.com/paper/distributed-no-regret-learning-in-multi-agent |
Repo | |
Framework | |
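For readers new to the topic surveyed above, a canonical no-regret algorithm for repeated games is Hedge (multiplicative weights), whose regret against the best fixed action grows sublinearly in the horizon. The sketch below is a textbook instance with an illustrative stepsize, not an algorithm taken from the article.

```python
import numpy as np

def hedge_regret(loss_matrix, eta=0.1):
    """Run Hedge (multiplicative weights) and return its external regret.

    loss_matrix[t, a] is the loss (in [0, 1]) of action a at round t.
    Guarantee for losses in [0, 1]: regret <= ln(K)/eta + eta*T/8.
    """
    T, K = loss_matrix.shape
    w = np.ones(K)
    total_loss = 0.0
    for t in range(T):
        p = w / w.sum()                      # play the normalized weights
        total_loss += p @ loss_matrix[t]     # expected loss this round
        w *= np.exp(-eta * loss_matrix[t])   # exponentially down-weight losers
    best_fixed = loss_matrix.sum(axis=0).min()
    return total_loss - best_fixed

rng = np.random.default_rng(0)
losses = rng.random((2000, 5))
regret = hedge_regret(losses)
```

Over 2000 rounds with 5 actions and eta = 0.1, the regret stays below the deterministic bound ln(5)/0.1 + 0.1·2000/8 ≈ 41, far below linear growth.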
Towards Multi-perspective conformance checking with fuzzy sets
Title | Towards Multi-perspective conformance checking with fuzzy sets |
Authors | Sicui Zhang, Laura Genga, Hui Yan, Xudong Lu, Huilong Duan, Uzay Kaymak |
Abstract | Conformance checking techniques are widely adopted to pinpoint possible discrepancies between process models and the execution of the process in reality. However, state-of-the-art approaches adopt a crisp evaluation of deviations, with the result that small violations are treated at the same level as significant ones. This affects the quality of the provided diagnostics, especially when there exists some tolerance with respect to reasonably small violations, and hampers the flexibility of the process. In this work, we propose a novel approach that allows representing actors’ tolerance with respect to violations and accounting for the severity of deviations when assessing the compliance of executions. We argue that besides improving the quality of the provided diagnostics, allowing some tolerance in deviation assessment also enhances the flexibility of conformance checking techniques and, indirectly, paves the way for improving the resilience of the overall process management system. |
Tasks | |
Published | 2020-01-29 |
URL | https://arxiv.org/abs/2001.10730v1 |
https://arxiv.org/pdf/2001.10730v1.pdf | |
PWC | https://paperswithcode.com/paper/towards-multi-perspective-conformance |
Repo | |
Framework | |
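The core idea of tolerating small violations can be sketched with a fuzzy membership function that maps a deviation's magnitude to a graded severity instead of a crisp 0/1 verdict. The trapezoidal shape and parameter names below are illustrative assumptions, not the paper's actual model.

```python
def fuzzy_severity(deviation, tolerance, hard_limit):
    """Map a deviation magnitude to a severity score in [0, 1].

    Below `tolerance` the deviation is fully acceptable (severity 0);
    at or above `hard_limit` it is a full violation (severity 1);
    in between, severity grows linearly, giving a simple trapezoidal
    membership function rather than a crisp threshold.
    """
    if deviation <= tolerance:
        return 0.0
    if deviation >= hard_limit:
        return 1.0
    return (deviation - tolerance) / (hard_limit - tolerance)
```

For example, with a 10-minute tolerance and a 60-minute hard limit on a delay, a 5-minute delay scores 0, a 35-minute delay scores 0.5, and a 100-minute delay scores 1, so aggregate diagnostics weight large violations more heavily than negligible ones.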
‘Why not give this work to them?’ Explaining AI-Moderated Task-Allocation Outcomes using Negotiation Trees
Title | ‘Why not give this work to them?’ Explaining AI-Moderated Task-Allocation Outcomes using Negotiation Trees |
Authors | Zahra Zahedi, Sailik Sengupta, Subbarao Kambhampati |
Abstract | The problem of multi-agent task allocation arises in a variety of scenarios involving human teams. In many such settings, human teammates may act with selfish motives and try to minimize their cost metrics. In the absence of (1) complete knowledge about the reward of other agents and (2) the team’s overall cost associated with a particular allocation outcome, distributed algorithms can only arrive at sub-optimal solutions within a reasonable amount of time. To address these challenges, we introduce the notion of an AI Task Allocator (AITA) that, with complete knowledge, comes up with fair allocations that strike a balance between the individual human costs and the team’s performance cost. To ensure that AITA is explicable to the humans, we allow each human agent to question AITA’s proposed allocation with counterfactual allocations. In response, we design AITA to provide a replay negotiation tree that acts as an explanation showing why the counterfactual allocation, with the correct costs, will eventually result in a sub-optimal allocation. This explanation also updates a human’s incomplete knowledge about their teammates’ and the team’s actual costs. We then investigate whether humans are (1) able to understand the explanations provided and (2) convinced by them, using human-factor studies. Finally, we show the effect of various kinds of incompleteness on the length of explanations. We conclude that underestimation of others’ costs often leads to the need for explanations and, in turn, longer explanations on average. |
Tasks | |
Published | 2020-02-05 |
URL | https://arxiv.org/abs/2002.01640v2 |
https://arxiv.org/pdf/2002.01640v2.pdf | |
PWC | https://paperswithcode.com/paper/why-not-give-this-work-to-them-explaining-ai |
Repo | |
Framework | |
Spiking Inception Module for Multi-layer Unsupervised Spiking Neural Networks
Title | Spiking Inception Module for Multi-layer Unsupervised Spiking Neural Networks |
Authors | Mingyuan Meng, Xingyu Yang, Shanlin Xiao, Zhiyi Yu |
Abstract | Spiking Neural Networks (SNNs), as a brain-inspired approach, are attracting attention due to their potential to enable ultra-energy-efficient hardware. Competitive learning based on Spike-Timing-Dependent Plasticity (STDP) is a popular method for training unsupervised SNNs. However, previous unsupervised SNNs trained through this method are limited to shallow networks with only one learnable layer and cannot achieve satisfactory results when compared with multi-layer SNNs. In this paper, we ease this limitation as follows: 1) we propose the Spiking Inception (Sp-Inception) module, inspired by the Inception module in the Artificial Neural Network (ANN) literature; this module is trained through STDP-based competitive learning and outperforms baseline modules in learning capability, learning efficiency, and robustness; 2) we propose the Pooling-Reshape-Activate (PRA) layer to make the Sp-Inception module stackable; and 3) we stack multiple Sp-Inception modules to construct multi-layer SNNs. Our method substantially outperforms baseline methods on image classification tasks and reaches state-of-the-art results on the MNIST dataset among existing unsupervised SNNs. |
Tasks | Image Classification |
Published | 2020-01-29 |
URL | https://arxiv.org/abs/2001.10696v2 |
https://arxiv.org/pdf/2001.10696v2.pdf | |
PWC | https://paperswithcode.com/paper/spiking-inception-module-for-multi-layer |
Repo | |
Framework | |
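The STDP rule underlying the competitive learning mentioned above can be sketched as a standard pair-based update: a synapse is strengthened when the presynaptic spike precedes the postsynaptic one and weakened otherwise. The time constant and learning rates below are generic illustrative values, not those used in the paper.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP weight update for one pre/post spike pair.

    dt = t_post - t_pre (ms). Positive dt (pre fires before post)
    potentiates the synapse; negative dt depresses it, with an
    exponential dependence on the spike-timing gap.
    """
    dt = t_post - t_pre
    if dt >= 0:
        dw = a_plus * np.exp(-dt / tau)    # causal pairing: strengthen
    else:
        dw = -a_minus * np.exp(dt / tau)   # anti-causal pairing: weaken
    return float(np.clip(w + dw, w_min, w_max))
```

In competitive learning, this local rule is typically combined with lateral inhibition so that only the winning neuron's afferent weights are updated per input.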
Fairness in Learning-Based Sequential Decision Algorithms: A Survey
Title | Fairness in Learning-Based Sequential Decision Algorithms: A Survey |
Authors | Xueru Zhang, Mingyan Liu |
Abstract | Algorithmic fairness in decision-making has been studied extensively in static settings where one-shot decisions are made on tasks such as classification. However, in practice most decision-making processes are of a sequential nature, where decisions made in the past may have an impact on future data. This is particularly the case when decisions affect the individuals or users generating the data used for future decisions. In this survey, we review existing literature on the fairness of data-driven sequential decision-making. We will focus on two types of sequential decisions: (1) past decisions have no impact on the underlying user population and thus no impact on future data; (2) past decisions have an impact on the underlying user population and therefore the future data, which can then impact future decisions. In each case the impact of various fairness interventions on the underlying population is examined. |
Tasks | Decision Making |
Published | 2020-01-14 |
URL | https://arxiv.org/abs/2001.04861v1 |
https://arxiv.org/pdf/2001.04861v1.pdf | |
PWC | https://paperswithcode.com/paper/fairness-in-learning-based-sequential |
Repo | |
Framework | |
CLARA: Clinical Report Auto-completion
Title | CLARA: Clinical Report Auto-completion |
Authors | Siddharth Biswal, Cao Xiao, Lucas M. Glass, M. Brandon Westover, Jimeng Sun |
Abstract | Generating clinical reports from raw recordings such as X-rays and electroencephalograms (EEG) is an essential and routine task for doctors. However, it is often time-consuming to write accurate and detailed reports. Most existing methods try to generate whole reports from the raw input, with limited success, because 1) generated reports often contain errors that need manual review and correction, 2) generation does not save time when doctors want to write additional information into the report, and 3) the generated reports are not customized to individual doctors’ preferences. We propose CLinicAl Report Auto-completion (CLARA), an interactive method that generates reports sentence by sentence based on doctors’ anchor words and partially completed sentences. CLARA searches for the most relevant sentences from existing reports to serve as the template for the current report. The retrieved sentences are sequentially modified by combining them with the input feature representations to create the final report. In our experimental evaluation, CLARA achieved 0.393 CIDEr and 0.248 BLEU-4 on X-ray reports and 0.482 CIDEr and 0.491 BLEU-4 on EEG reports for sentence-level generation, up to a 35% improvement over the best baseline. In a qualitative user study, CLARA also produced reports with a significantly higher level of approval by doctors (3.74 out of 5 for CLARA vs. 2.52 out of 5 for the baseline). |
Tasks | EEG |
Published | 2020-02-26 |
URL | https://arxiv.org/abs/2002.11701v2 |
https://arxiv.org/pdf/2002.11701v2.pdf | |
PWC | https://paperswithcode.com/paper/clara-clinical-report-auto-completion |
Repo | |
Framework | |
Always Look on the Bright Side of the Field: Merging Pose and Contextual Data to Estimate Orientation of Soccer Players
Title | Always Look on the Bright Side of the Field: Merging Pose and Contextual Data to Estimate Orientation of Soccer Players |
Authors | Adrià Arbués-Sangüesa, Adrián Martín, Javier Fernández, Carlos Rodríguez, Gloria Haro, Coloma Ballester |
Abstract | Although orientation has proven to be a key skill of soccer players for succeeding in a broad spectrum of plays, body orientation is a still little-explored area in sports analytics research. Despite being an inherently ambiguous concept, player orientation can be defined as the projection (2D) of the normal vector placed at the center of a player’s upper torso (3D). This research presents a novel technique to obtain player orientation from monocular video recordings: pose parts (shoulders and hips) are mapped onto a 2D field by combining OpenPose with a super-resolution network, and the obtained estimate is merged with contextual information (ball position). Results have been validated against player-held EPTS devices, obtaining a median error of 27 degrees per player. Moreover, three novel types of orientation maps are proposed to make raw orientation data easy to visualize and understand, thus allowing further analysis at the team or player level. |
Tasks | Super-Resolution |
Published | 2020-03-02 |
URL | https://arxiv.org/abs/2003.00943v1 |
https://arxiv.org/pdf/2003.00943v1.pdf | |
PWC | https://paperswithcode.com/paper/always-look-on-the-bright-side-of-the-field |
Repo | |
Framework | |
An estimation-based method to segment PET images
Title | An estimation-based method to segment PET images |
Authors | Ziping Liu, Richard Laforest, Joyce Mhlanga, Hae Sol Moon, Tyler J. Fraum, Malak Itani, Aaron Mintz, Farrokh Dehdashti, Barry A. Siegel, Abhinav K. Jha |
Abstract | Tumor segmentation in oncological PET images is challenging, a major reason being the partial-volume effects that arise from low system resolution and a finite pixel size. The latter results in pixels containing more than one region, also referred to as tissue-fraction effects. Conventional classification-based segmentation approaches are inherently limited in accounting for the tissue-fraction effects. To address this limitation, we pose the segmentation task as an estimation problem. We propose a Bayesian method that estimates the posterior mean of the tumor-fraction area within each pixel and uses these estimates to define the segmented tumor boundary. The method was implemented using an autoencoder. Quantitative evaluation of the method was performed using realistic simulation studies conducted in the context of segmenting the primary tumor in PET images of patients with lung cancer. For these studies, a framework was developed to generate clinically realistic simulated PET images. Realism of these images was quantitatively confirmed using a two-alternative-forced-choice study by six trained readers with expertise in reading PET scans. The evaluation studies demonstrated that the proposed segmentation method was accurate, significantly outperformed widely used conventional methods on the tasks of tumor segmentation and estimation of tumor-fraction areas, was relatively insensitive to partial-volume effects, and reliably estimated the ground-truth tumor boundaries. Further, these results were obtained across different clinical-scanner configurations. This proof-of-concept study demonstrates the efficacy of an estimation-based approach to PET segmentation. |
Tasks | |
Published | 2020-02-29 |
URL | https://arxiv.org/abs/2003.00317v1 |
https://arxiv.org/pdf/2003.00317v1.pdf | |
PWC | https://paperswithcode.com/paper/an-estimation-based-method-to-segment-pet |
Repo | |
Framework | |
ESSOP: Efficient and Scalable Stochastic Outer Product Architecture for Deep Learning
Title | ESSOP: Efficient and Scalable Stochastic Outer Product Architecture for Deep Learning |
Authors | Vinay Joshi, Geethan Karunaratne, Manuel Le Gallo, Irem Boybat, Christophe Piveteau, Abu Sebastian, Bipin Rajendran, Evangelos Eleftheriou |
Abstract | Deep neural networks (DNNs) have surpassed human-level accuracy in a variety of cognitive tasks but at the cost of significant memory/time requirements in DNN training. This limits their deployment in energy- and memory-limited applications that require real-time learning. Matrix-vector multiplications (MVM) and vector-vector outer products (VVOP) are the two most expensive operations associated with the training of DNNs. Strategies to improve the efficiency of MVM computation in hardware have been demonstrated with minimal impact on training accuracy. However, the VVOP computation remains a relatively less explored bottleneck even with the aforementioned strategies. Stochastic computing (SC) has been proposed to improve the efficiency of VVOP computation but only on relatively shallow networks with bounded activation functions and floating-point (FP) scaling of activation gradients. In this paper, we propose ESSOP, an efficient and scalable stochastic outer product architecture based on the SC paradigm. We introduce efficient techniques to generalize SC for weight update computation in DNNs with the unbounded activation functions (e.g., ReLU) required by many state-of-the-art networks. Our architecture reduces the computational cost by re-using random numbers and replacing certain FP multiplication operations with bit-shift scaling. We show that the ResNet-32 network with 33 convolution layers and a fully-connected layer can be trained with ESSOP on the CIFAR-10 dataset to achieve baseline-comparable accuracy. Hardware design of ESSOP at the 14 nm technology node shows that, compared to a highly pipelined FP16 multiplier design, ESSOP is 82.2% and 93.7% better in energy and area efficiency respectively for outer product computation. |
Tasks | |
Published | 2020-03-25 |
URL | https://arxiv.org/abs/2003.11256v1 |
https://arxiv.org/pdf/2003.11256v1.pdf | |
PWC | https://paperswithcode.com/paper/essop-efficient-and-scalable-stochastic-outer |
Repo | |
Framework | |
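The random-number re-use behind stochastic outer-product computation can be sketched in software: each vector gets one shared random sequence, entries are converted to Bernoulli bit streams, and products become bit-wise ANDs. The formulation below is a generic stochastic-computing illustration for inputs in [0, 1], not the ESSOP hardware design itself.

```python
import numpy as np

def stochastic_outer(u, v, n_trials=4000, seed=0):
    """Unbiased stochastic estimate of the outer product u v^T.

    Each entry u[i] * v[j] is estimated as the mean of AND-ed Bernoulli
    bits P(bit)=u[i] and P(bit)=v[j]. Only one random sequence is drawn
    per vector and re-used across all entries (the random-number
    re-use idea), so entries are correlated but each is unbiased.
    """
    rng = np.random.default_rng(seed)
    r_u = rng.random((n_trials, 1))          # shared sequence for u
    r_v = rng.random((n_trials, 1))          # shared sequence for v
    bits_u = (r_u < np.asarray(u)).astype(float)   # (n_trials, len(u))
    bits_v = (r_v < np.asarray(v)).astype(float)   # (n_trials, len(v))
    # AND of unipolar bit streams == product of the encoded values
    return (bits_u[:, :, None] * bits_v[:, None, :]).mean(axis=0)
```

In hardware, drawing one random sequence per vector instead of one per entry is what turns the O(n·m) random-number cost of a naive stochastic outer product into O(n + m).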
Analytic Marching: An Analytic Meshing Solution from Deep Implicit Surface Networks
Title | Analytic Marching: An Analytic Meshing Solution from Deep Implicit Surface Networks |
Authors | Jiabao Lei, Kui Jia |
Abstract | This paper studies the problem of learning surface meshes via implicit functions in the emerging field of deep-learning-based surface reconstruction, where implicit functions are popularly implemented as multi-layer perceptrons (MLPs) with rectified linear units (ReLU). To achieve meshing from learned implicit functions, existing methods adopt the de facto standard algorithm of marching cubes; while promising, they suffer from loss of the precision learned in the MLPs, due to the discretization nature of marching cubes. Motivated by the knowledge that a ReLU-based MLP partitions its input space into a number of linear regions, we identify from these regions analytic cells and analytic faces that are associated with the zero-level isosurface of the implicit function, and characterize the theoretical conditions under which the identified analytic faces are guaranteed to connect and form a closed, piecewise planar surface. Based on our theorem, we propose a naturally parallelizable algorithm of analytic marching, which marches among analytic cells to exactly recover the mesh captured by a learned MLP. Experiments on deep learning mesh reconstruction verify the advantages of our algorithm over existing ones. |
Tasks | |
Published | 2020-02-16 |
URL | https://arxiv.org/abs/2002.06597v1 |
https://arxiv.org/pdf/2002.06597v1.pdf | |
PWC | https://paperswithcode.com/paper/analytic-marching-an-analytic-meshing |
Repo | |
Framework | |
Langevin DQN
Title | Langevin DQN |
Authors | Vikranth Dwaracherla, Benjamin Van Roy |
Abstract | Algorithms that tackle deep exploration – an important challenge in reinforcement learning – have relied on epistemic uncertainty representation through ensembles or other hypermodels, exploration bonuses, or visitation count distributions. An open question is whether deep exploration can be achieved by an incremental reinforcement learning algorithm that tracks a single point estimate, without additional complexity required to account for epistemic uncertainty. We answer this question in the affirmative. In particular, we develop Langevin DQN, a variation of DQN that differs only in perturbing parameter updates with Gaussian noise, and demonstrate through a computational study that the algorithm achieves deep exploration. We also provide an intuition for why Langevin DQN performs deep exploration. |
Tasks | |
Published | 2020-02-17 |
URL | https://arxiv.org/abs/2002.07282v1 |
https://arxiv.org/pdf/2002.07282v1.pdf | |
PWC | https://paperswithcode.com/paper/langevin-dqn |
Repo | |
Framework | |
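The single change Langevin DQN makes to DQN, perturbing each parameter update with Gaussian noise, can be sketched as a generic Langevin gradient step. The quadratic-loss usage and hyperparameters below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def langevin_step(theta, grad, lr=0.01, temperature=0.01, rng=None):
    """One Langevin update: a plain gradient step perturbed by Gaussian
    noise of variance 2 * lr * temperature, so the iterates explore a
    neighborhood of the minimizer (approximately sampling from
    exp(-loss / temperature)) instead of collapsing to a point estimate."""
    if rng is None:
        rng = np.random.default_rng()
    noise = rng.normal(size=np.shape(theta)) * np.sqrt(2.0 * lr * temperature)
    return theta - lr * grad + noise
```

Iterating this on a quadratic loss drives the parameter toward the minimizer while the injected noise keeps it fluctuating around it, a single-point-estimate source of the exploration that ensembles or hypermodels otherwise provide.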
Neural Lyapunov Model Predictive Control
Title | Neural Lyapunov Model Predictive Control |
Authors | Mayank Mittal, Marco Gallieri, Alessio Quaglino, Seyed Sina Mirrazavi Salehian, Jan Koutník |
Abstract | This paper presents Neural Lyapunov MPC, an algorithm to alternately train a Lyapunov neural network and a stabilising constrained Model Predictive Controller (MPC), given a neural network model of the system dynamics. This extends recent works on Lyapunov networks to be able to train solely from expert demonstrations of one-step transitions. The learned Lyapunov network is used as the value function for the MPC in order to guarantee stability and extend the stable region. Formal results are presented on the existence of a set of MPC parameters, such as discount factors, that guarantees stability with a horizon as short as one. Robustness margins are also discussed and existing performance bounds on value function MPC are extended to the case of imperfect models. The approach is tested on unstable non-linear continuous control tasks with hard constraints. Results demonstrate that, when a neural network trained on short sequences is used for predictions, a one-step horizon Neural Lyapunov MPC can successfully reproduce the expert behaviour and significantly outperform longer horizon MPCs. |
Tasks | Continuous Control |
Published | 2020-02-21 |
URL | https://arxiv.org/abs/2002.10451v1 |
https://arxiv.org/pdf/2002.10451v1.pdf | |
PWC | https://paperswithcode.com/paper/200210451 |
Repo | |
Framework | |
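The stability certificate at the heart of the approach above is the discrete-time Lyapunov decrease condition V(x_{t+1}) < V(x_t) along closed-loop trajectories. The checker below is a generic illustration on a linear system, not the paper's learned Lyapunov network or MPC.

```python
import numpy as np

def satisfies_lyapunov_decrease(V, step, x0, n_steps=50, tol=1e-9):
    """Check V(x_{t+1}) < V(x_t) along one closed-loop trajectory.

    V maps a state to a scalar candidate Lyapunov value; step maps the
    current state to the next state under the controller. The check is
    skipped once the state has essentially reached the origin.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x_next = step(x)
        if V(x) > tol and V(x_next) >= V(x) - tol:
            return False  # decrease condition violated at this step
        x = x_next
    return True
```

For the stable linear map x -> 0.9x, V(x) = ||x||^2 certifies stability; for the unstable map x -> 1.1x, the decrease condition fails immediately.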