January 29, 2020

3148 words 15 mins read

Paper Group ANR 687

Optimization and Manipulation of Contextual Mutual Spaces for Multi-User Virtual and Augmented Reality Interaction. Capuchin: Causal Database Repair for Algorithmic Fairness. Simplex2Vec embeddings for community detection in simplicial complexes. Extraction of hierarchical functional connectivity components in human brain using resting-state fMRI. …

Optimization and Manipulation of Contextual Mutual Spaces for Multi-User Virtual and Augmented Reality Interaction

Title Optimization and Manipulation of Contextual Mutual Spaces for Multi-User Virtual and Augmented Reality Interaction
Authors Mohammad Keshavarzi, Allen Y. Yang, Woojin Ko, Luisa Caldas
Abstract Spatial computing experiences are physically constrained by the geometry and semantics of the local user environment. This limitation is amplified in remote multi-user interaction scenarios, where finding a common virtual ground that is physically accessible to all participants becomes challenging, particularly when users are unaware of the spatial properties of the other participants’ rooms. In this paper, we introduce a framework that generates an optimal mutual virtual space for a multi-user interaction setting where remote users’ rooms can have different layouts and sizes. The framework further recommends movements of surrounding furniture objects that expand the mutual space with minimal physical effort. Finally, we demonstrate the performance of our solution on real-world datasets and in a real HoloLens application. Results show that the proposed algorithm effectively discovers an optimal shareable space for multi-user virtual interaction and hence facilitates remote spatial computing communication in various collaborative workflows.
Tasks
Published 2019-10-14
URL https://arxiv.org/abs/1910.05998v2
PDF https://arxiv.org/pdf/1910.05998v2.pdf
PWC https://paperswithcode.com/paper/optimization-and-manipulation-of-contextual
Repo
Framework
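
To make the geometric core of this approach concrete, here is a minimal sketch of mutual-space discovery as a polygon intersection, assuming pre-aligned 2D free-floor regions and using shapely; the geometry, numbers, and tooling are illustrative assumptions, not the authors’ implementation:

```python
# Illustrative only: mutual-space discovery as polygon intersection, with
# hypothetical room geometry. The paper additionally scores semantics and
# searches over furniture rearrangements.
from shapely.geometry import Polygon

# Free-floor regions (floor plan minus furniture) for two remote rooms,
# expressed in a shared coordinate frame after alignment.
room_a_free = Polygon([(0, 0), (4, 0), (4, 3), (0, 3)])
room_b_free = Polygon([(1, -1), (5, -1), (5, 2.5), (1, 2.5)])

# The mutual virtual space is the region walkable by every participant.
mutual = room_a_free.intersection(room_b_free)
print(f"mutual area: {mutual.area:.2f} m^2")

# Moving furniture enlarges one room's free region, hence the intersection;
# an optimizer would weigh that gain against the physical effort of the move.
room_b_moved = Polygon([(0.5, -1), (5, -1), (5, 3), (0.5, 3)])
print(f"after move:  {room_a_free.intersection(room_b_moved).area:.2f} m^2")
```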

Capuchin: Causal Database Repair for Algorithmic Fairness

Title Capuchin: Causal Database Repair for Algorithmic Fairness
Authors Babak Salimi, Luke Rodriguez, Bill Howe, Dan Suciu
Abstract Fairness is increasingly recognized as a critical component of machine learning systems. However, it is the underlying data on which these systems are trained that often reflect discrimination, suggesting a database repair problem. Existing treatments of fairness rely on statistical correlations that can be fooled by statistical anomalies, such as Simpson’s paradox. Proposals for causality-based definitions of fairness can correctly model some of these situations, but they require specification of the underlying causal models. In this paper, we formalize the situation as a database repair problem, proving sufficient conditions for fair classifiers in terms of admissible variables as opposed to a complete causal model. We show that these conditions correctly capture subtle fairness violations. We then use these conditions as the basis for database repair algorithms that provide provable fairness guarantees about classifiers trained on their training labels. We evaluate our algorithms on real data, demonstrating improvement over the state of the art on multiple fairness metrics proposed in the literature while retaining high utility.
Tasks
Published 2019-02-21
URL https://arxiv.org/abs/1902.08283v5
PDF https://arxiv.org/pdf/1902.08283v5.pdf
PWC https://paperswithcode.com/paper/capuchin-causal-database-repair-for
Repo
Framework
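
The repair objective can be illustrated with a toy sketch: make the training label conditionally independent of the protected attribute given the admissible attributes. The resampling below is only a crude stand-in for Capuchin’s actual tuple-level repair algorithms, and all column names and data are invented:

```python
# Crude illustration of the repair GOAL (not the paper's algorithm): within
# each admissible stratum ("dept"), redraw labels from the stratum-wide
# distribution so that P(label | gender, dept) ~ P(label | dept).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "gender":   rng.choice(["F", "M"], 1000),   # protected attribute
    "dept":     rng.choice(["A", "B"], 1000),   # admissible attribute
    "admitted": rng.integers(0, 2, 1000),       # training label
})

def repair(group: pd.DataFrame) -> pd.DataFrame:
    p = group["admitted"].mean()                # stratum-wide label rate
    group = group.copy()
    group["admitted"] = rng.binomial(1, p, len(group))
    return group

repaired = df.groupby("dept", group_keys=False).apply(repair)
print(repaired.groupby(["dept", "gender"])["admitted"].mean())
```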

Simplex2Vec embeddings for community detection in simplicial complexes

Title Simplex2Vec embeddings for community detection in simplicial complexes
Authors Jacob Charles Wright Billings, Mirko Hu, Giulia Lerda, Alexey N. Medvedev, Francesco Mottes, Adrian Onicas, Andrea Santoro, Giovanni Petri
Abstract Topological representations are rapidly becoming a popular way to capture and encode higher-order interactions in complex systems. They have found applications in disciplines as different as cancer genomics, brain function, and computational social science, in representing both descriptive features of data and inference models. While intense research has focused on the connectivity and homological features of topological representations, surprisingly scarce attention has been given to the investigation of the community structures of simplicial complexes. To this end, we adopt recent advances in symbolic embeddings to compute and visualize the community structures of simplicial complexes. We first investigate the stability of the embeddings obtained for synthetic simplicial complexes in the presence of higher-order interactions. We then focus on complexes arising from social and brain functional data and show how higher-order interactions can be leveraged to improve cluster detection and to assess the effect of higher-order interactions on individual nodes. We conclude by delineating limitations and directions for extending this work.
Tasks Community Detection
Published 2019-06-21
URL https://arxiv.org/abs/1906.09068v1
PDF https://arxiv.org/pdf/1906.09068v1.pdf
PWC https://paperswithcode.com/paper/simplex2vec-embeddings-for-community
Repo
Framework
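
A minimal sketch of the embedding idea, assuming random walks over a graph that connects each simplex to its faces and cofaces, followed by word2vec on the walk corpus (via gensim); the paper’s exact walk and co-occurrence construction may differ:

```python
# Sketch: (1) random walks on a graph whose nodes are simplices, connected
# when one is a face of the other; (2) word2vec on the walk corpus.
import random
from itertools import combinations
from gensim.models import Word2Vec

maximal = [(0, 1, 2), (1, 2, 3), (3, 4)]              # maximal simplices
simplices = {s for m in maximal
             for k in range(1, len(m) + 1)
             for s in combinations(m, k)}             # all faces
neighbors = {s: [t for t in simplices if t != s and
                 (set(t) <= set(s) or set(s) <= set(t))]
             for s in simplices}

def walk(start, length=10):
    path, cur = [str(start)], start
    for _ in range(length):
        cur = random.choice(neighbors[cur])
        path.append(str(cur))
    return path

random.seed(0)
corpus = [walk(s) for s in simplices for _ in range(20)]
model = Word2Vec(corpus, vector_size=16, window=3, min_count=1, sg=1, seed=0)
# Community detection then clusters these vectors (e.g. with k-means).
print(model.wv[str((1, 2))][:4])
```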

Extraction of hierarchical functional connectivity components in human brain using resting-state fMRI

Title Extraction of hierarchical functional connectivity components in human brain using resting-state fMRI
Authors Dushyant Sahoo, Danielle Bassett, Christos Davatzikos
Abstract The study of hierarchy in networks of the human brain has been of significant interest among researchers, as numerous studies have pointed towards a functional hierarchical organization of the brain. This paper provides a novel method for the extraction of hierarchical connectivity components in the human brain using resting-state fMRI. The method builds upon prior work on Sparse Connectivity Patterns (SCPs) by introducing a hierarchy of sparse overlapping patterns. The components are estimated by deep factorization of correlation matrices generated from fMRI. The goal of the paper is to extract interpretable hierarchical patterns from correlation matrices, where a low-rank decomposition is formed as a linear combination of a higher-rank decomposition. We formulate the decomposition as a non-convex optimization problem and solve it using gradient descent algorithms with adaptive step sizes. We also provide a method for warm-starting gradient descent using the singular value decomposition. We demonstrate the effectiveness of the developed method on two different real-world datasets by showing that multi-scale hierarchical SCPs are reproducible between sub-samples and are more reproducible than single-scale patterns. We also compare our method with existing hierarchical community detection approaches. Our method provides novel insight into the functional organization of the human brain.
Tasks Community Detection
Published 2019-06-19
URL https://arxiv.org/abs/1906.08365v2
PDF https://arxiv.org/pdf/1906.08365v2.pdf
PWC https://paperswithcode.com/paper/extraction-of-hierarchical-functional
Repo
Framework
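
One way to picture a “low-rank decomposition formed as a linear combination of a higher-rank decomposition” is the two-level factorization C ≈ (W1 W2)(W1 W2)^T, fit here by plain gradient descent in numpy. This is a bare-bones illustration under that assumption; the paper adds sparsity constraints, adaptive step sizes, and an SVD-based warm start:

```python
# Two-level factorization of a correlation matrix: fine-scale patterns W1
# (p x k1) are linearly combined by W2 (k1 x k2) into coarse-scale patterns.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))            # stand-in for fMRI time series
C = np.corrcoef(X, rowvar=False)              # p x p correlation matrix

p, k1, k2, lr = C.shape[0], 8, 3, 1e-3
W1 = 0.5 * rng.standard_normal((p, k1))       # fine-scale patterns
W2 = 0.5 * rng.standard_normal((k1, k2))      # combines them into coarse ones

for step in range(2001):
    coarse = W1 @ W2                          # p x k2 coarse-scale patterns
    R = coarse @ coarse.T - C                 # symmetric residual
    G = 4 * R @ coarse                        # gradient of ||R||_F^2 w.r.t. coarse
    gW1, gW2 = G @ W2.T, W1.T @ G             # chain rule for both levels
    W1 -= lr * gW1
    W2 -= lr * gW2
    if step % 500 == 0:
        print(step, round(float(np.linalg.norm(R)), 3))
```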

Optimal Convergence for Stochastic Optimization with Multiple Expectation Constraints

Title Optimal Convergence for Stochastic Optimization with Multiple Expectation Constraints
Authors Kinjal Basu, Preetam Nandy
Abstract In this paper, we focus on stochastic optimization problems in which an expectation function is minimized over a closed convex set. We also consider multiple expectation constraints that further restrict the domain of the problem. We extend the cooperative stochastic approximation algorithm of Lan and Zhou [2016] to solve this problem. We close the gaps in the previous analysis and provide a novel proof technique to show that our algorithm attains the optimal rate of convergence for both the optimality gap and the constraint violation when the functions are generally convex. We also compare our algorithm empirically to the state of the art and show improved convergence in many situations.
Tasks Stochastic Optimization
Published 2019-06-08
URL https://arxiv.org/abs/1906.03401v2
PDF https://arxiv.org/pdf/1906.03401v2.pdf
PWC https://paperswithcode.com/paper/optimal-convergence-for-stochastic
Repo
Framework
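
A toy sketch of the cooperative stochastic approximation idea: at each iteration, step along a stochastic gradient of the objective if the sampled constraint looks satisfied within a tolerance, and along a constraint subgradient otherwise, projecting back onto the domain. The step sizes and tolerances below are illustrative, not the paper’s optimal schedule:

```python
# Toy cooperative stochastic approximation: minimize E[(x - a)^2] subject to
# E[<x, b>] <= 0 over a box.
import numpy as np

rng = np.random.default_rng(1)
x, avg, n_obj = np.zeros(2), np.zeros(2), 0

for t in range(1, 5001):
    eta = tol = 1.0 / np.sqrt(t)
    a = np.array([1.0, -1.0]) + 0.1 * rng.standard_normal(2)  # noisy samples
    b = np.array([1.0, 1.0]) + 0.1 * rng.standard_normal(2)
    if x @ b <= tol:                      # sampled constraint looks satisfied
        x = x - eta * 2.0 * (x - a)       # objective gradient step
        avg, n_obj = avg + x, n_obj + 1   # average only the objective iterates
    else:
        x = x - eta * b                   # constraint subgradient step
    x = np.clip(x, -2.0, 2.0)             # projection onto the domain

print("averaged solution:", np.round(avg / max(n_obj, 1), 3))  # ~ (1, -1)
```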

Conservative Q-Improvement: Reinforcement Learning for an Interpretable Decision-Tree Policy

Title Conservative Q-Improvement: Reinforcement Learning for an Interpretable Decision-Tree Policy
Authors Aaron M. Roth, Nicholay Topin, Pooyan Jamshidi, Manuela Veloso
Abstract There is a growing desire in the field of reinforcement learning (and machine learning in general) to move from black-box models toward more “interpretable AI.” We improve the interpretability of reinforcement learning by increasing the utility of decision-tree policies learned via reinforcement learning. These policies consist of a decision tree over the state space, which requires fewer parameters to express than traditional policy representations. Existing methods for creating decision-tree policies via reinforcement learning focus on accurately representing an action-value function during training, but this leads to much larger trees than would otherwise be required. To address this shortcoming, we propose a novel algorithm which only increases tree size when the estimated discounted future reward of the overall policy would increase by a sufficient amount. Through evaluation in a simulated environment, we show that its performance is comparable to or better than traditional tree-based approaches and that it yields a more succinct policy. Additionally, we discuss tuning parameters to control the trade-off between tree size and overall reward.
Tasks
Published 2019-07-02
URL https://arxiv.org/abs/1907.01180v1
PDF https://arxiv.org/pdf/1907.01180v1.pdf
PWC https://paperswithcode.com/paper/conservative-q-improvement-reinforcement
Repo
Framework
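
A skeleton of the conservative split rule described above: each leaf of the tree policy keeps per-action Q estimates and candidate splits, and a split is accepted only when its estimated gain in discounted return clears a threshold. The data structures and gain numbers are invented for illustration:

```python
# Skeleton of the conservative split test; the paper's full algorithm also
# maintains visit counts, decays the threshold, and re-estimates Q values.
from dataclasses import dataclass, field

@dataclass
class Leaf:
    q: dict = field(default_factory=dict)           # action -> Q estimate
    split_gain: dict = field(default_factory=dict)  # candidate split -> est. gain
    visits: int = 0

def maybe_split(leaf: Leaf, threshold: float):
    """Return the best candidate split only if its estimated improvement in
    discounted return is large enough; otherwise keep the tree small."""
    if not leaf.split_gain:
        return None
    best = max(leaf.split_gain, key=leaf.split_gain.get)
    return best if leaf.split_gain[best] > threshold else None

leaf = Leaf(q={"left": 0.4, "right": 0.7}, visits=120,
            split_gain={("x", 0.5): 0.02, ("y", 1.2): 0.31})
print(maybe_split(leaf, threshold=0.1))             # -> ('y', 1.2)
```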

Made for Each Other: Broad-coverage Semantic Structures Meet Preposition Supersenses

Title Made for Each Other: Broad-coverage Semantic Structures Meet Preposition Supersenses
Authors Jakob Prange, Nathan Schneider, Omri Abend
Abstract Universal Conceptual Cognitive Annotation (UCCA; Abend and Rappoport, 2013) is a typologically-informed, broad-coverage semantic annotation scheme that describes coarse-grained predicate-argument structure but currently lacks semantic roles. We argue that lexicon-free annotation of the semantic roles marked by prepositions, as formulated by Schneider et al. (2018b), is complementary and suitable for integration within UCCA. We show empirically for English that the schemes, though annotated independently, are compatible and can be combined in a single semantic graph. A comparison of several approaches to parsing the integrated representation lays the groundwork for future research on this task.
Tasks
Published 2019-09-19
URL https://arxiv.org/abs/1909.08796v1
PDF https://arxiv.org/pdf/1909.08796v1.pdf
PWC https://paperswithcode.com/paper/made-for-each-other-broad-coverage-semantic
Repo
Framework
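
A toy illustration of what “combined in a single semantic graph” can look like: a UCCA-style graph in which the unit covering a preposition phrase also carries a supersense label on its edge. The sentence fragment and labels below are invented for illustration, not gold annotations:

```python
# Invented example: "He moved after graduation" as a UCCA-style graph where
# the temporal unit's edge additionally carries a preposition supersense.
import networkx as nx

g = nx.DiGraph()
g.add_edge("scene", "moved", ucca="P")                   # Process
g.add_edge("scene", "after graduation", ucca="T",        # Time adverbial,
           supersense="Time")                            # with supersense
g.add_edge("after graduation", "after", ucca="R")        # Relator
g.add_edge("after graduation", "graduation", ucca="C")   # Center
print(g.edges(data=True))
```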

PanopticFusion: Online Volumetric Semantic Mapping at the Level of Stuff and Things

Title PanopticFusion: Online Volumetric Semantic Mapping at the Level of Stuff and Things
Authors Gaku Narita, Takashi Seno, Tomoya Ishikawa, Yohsuke Kaji
Abstract We propose PanopticFusion, a novel online volumetric semantic mapping system at the level of stuff and things. In contrast to previous semantic mapping systems, PanopticFusion is able to densely predict class labels of a background region (stuff) and individually segment arbitrary foreground objects (things). In addition, our system can reconstruct a large-scale scene and extract a labeled mesh thanks to its use of a spatially hashed volumetric map representation. Our system first predicts pixel-wise panoptic labels (class labels for stuff regions and instance IDs for thing regions) for incoming RGB frames by fusing 2D semantic and instance segmentation outputs. The predicted panoptic labels are integrated into the volumetric map together with depth measurements while keeping the instance IDs consistent, even though they can vary from frame to frame, by referring to the 3D map at that moment. In addition, we construct a fully connected conditional random field (CRF) model with respect to panoptic labels for map regularization. For online CRF inference, we propose a novel unary potential approximation and a map division strategy. We evaluated the performance of our system on the ScanNet (v2) dataset. PanopticFusion outperformed or was comparable to state-of-the-art offline 3D DNN methods in both semantic and instance segmentation benchmarks. We also demonstrate a promising augmented reality application using a 3D panoptic map generated by the proposed system.
Tasks Instance Segmentation, Semantic Segmentation
Published 2019-03-04
URL https://arxiv.org/abs/1903.01177v2
PDF https://arxiv.org/pdf/1903.01177v2.pdf
PWC https://paperswithcode.com/paper/panopticfusion-online-volumetric-semantic
Repo
Framework
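
To make the label-fusion step concrete, here is a heavily simplified sketch of panoptic fusion into a spatially hashed voxel map: each voxel accumulates label votes, and frame-local instance IDs are remapped to persistent map IDs. The matching step itself, TSDF depth integration, and CRF regularization are omitted:

```python
# Simplified panoptic fusion into a hash-indexed voxel grid.
from collections import defaultdict, Counter

voxel_labels = defaultdict(Counter)   # (ix, iy, iz) -> panoptic label votes

def fuse_frame(points, frame_labels, id_map, voxel=0.05):
    """points: (x, y, z) tuples; frame_labels: per-point panoptic label,
    e.g. 'wall' (stuff) or ('chair', 7) (thing, frame-local instance id)."""
    for (x, y, z), lab in zip(points, frame_labels):
        if isinstance(lab, tuple):                # remap thing IDs so they
            lab = (lab[0], id_map[lab[1]])        # stay consistent over time
        key = (int(x / voxel), int(y / voxel), int(z / voxel))
        voxel_labels[key][lab] += 1

# Frame-local instance 7 was matched (by 3D overlap with the current map,
# not shown) to persistent map ID 2.
fuse_frame([(0.1, 0.2, 0.0), (0.1, 0.2, 0.04)],
           ["wall", ("chair", 7)], id_map={7: 2})
print({k: v.most_common(1)[0] for k, v in voxel_labels.items()})
```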

Polynomial Cost of Adaptation for X-Armed Bandits

Title Polynomial Cost of Adaptation for X-Armed Bandits
Authors Hédi Hadiji
Abstract In the context of stochastic continuum-armed bandits, we present an algorithm that adapts to the unknown smoothness of the objective function. We exhibit and compute a polynomial cost of adaptation to the Hölder regularity for regret minimization. To do this, we first reconsider the recent lower bound of Locatelli and Carpentier [20], and define and characterize admissible rate functions. Our new algorithm matches any of these minimal rate functions. We provide a finite-time analysis and a thorough discussion about asymptotic optimality.
Tasks
Published 2019-05-24
URL https://arxiv.org/abs/1905.10221v2
PDF https://arxiv.org/pdf/1905.10221v2.pdf
PWC https://paperswithcode.com/paper/polynomial-cost-of-adaptation-for-x-armed
Repo
Framework
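
The trade-off behind the adaptation question shows up already in a simple non-adaptive baseline: UCB over K bins of the arm space pays an approximation error of roughly K^(-beta) for a beta-Hölder mean reward plus an estimation error of roughly sqrt(K/T), so the right K depends on the unknown beta. The sketch below fixes beta and is not the paper’s adaptive algorithm:

```python
# Non-adaptive baseline: UCB over K equal bins of [0, 1] for a Hölder-smooth
# mean reward, with K tuned to the KNOWN smoothness beta = 0.5 via
# K ~ T^(1/(2*beta + 1)). The paper studies the price of not knowing beta.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: 1.0 - abs(x - 0.3) ** 0.5        # beta = 0.5 Hölder mean reward
T = 20000
K = int(T ** 0.5)                              # = T^(1/(2*0.5 + 1)) bins
counts, sums = np.zeros(K), np.zeros(K)

for t in range(1, T + 1):
    n = np.maximum(counts, 1.0)
    ucb = sums / n + np.sqrt(2.0 * np.log(t) / n)
    k = int(np.argmax(np.where(counts == 0, np.inf, ucb)))  # force exploration
    x = (k + 0.5) / K                                       # play bin midpoint
    sums[k] += f(x) + 0.1 * rng.standard_normal()
    counts[k] += 1

print("most-played bin center:", (np.argmax(counts) + 0.5) / K)  # near 0.3
```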

Automatic Segmentation of Vestibular Schwannoma from T2-Weighted MRI by Deep Spatial Attention with Hardness-Weighted Loss

Title Automatic Segmentation of Vestibular Schwannoma from T2-Weighted MRI by Deep Spatial Attention with Hardness-Weighted Loss
Authors Guotai Wang, Jonathan Shapey, Wenqi Li, Reuben Dorent, Alex Demitriadis, Sotirios Bisdas, Ian Paddick, Robert Bradford, Sebastien Ourselin, Tom Vercauteren
Abstract Automatic segmentation of vestibular schwannoma (VS) tumors from magnetic resonance imaging (MRI) would facilitate efficient and accurate volume measurement to guide patient management and improve clinical workflow. Accuracy and robustness are challenged by low contrast, a small target region, and low through-plane resolution. We introduce a 2.5D convolutional neural network (CNN) able to exploit the different in-plane and through-plane resolutions encountered in standard-of-care imaging protocols. We use an attention module to enable the CNN to focus on the small target and add supervision to the learning of attention maps for more accurate segmentation. Additionally, we propose a hardness-weighted Dice loss function that gives higher weights to harder voxels to boost the training of CNNs. Experiments with ablation studies on the VS tumor segmentation task show that: 1) the proposed 2.5D CNN outperforms its 2D and 3D counterparts, 2) our supervised attention mechanism outperforms unsupervised attention, 3) the voxel-level hardness-weighted Dice loss can improve the performance of CNNs. Our method achieved an average Dice score and ASSD of 0.87 and 0.43 mm, respectively. This will facilitate patient management decisions in clinical practice.
Tasks
Published 2019-06-10
URL https://arxiv.org/abs/1906.03906v1
PDF https://arxiv.org/pdf/1906.03906v1.pdf
PWC https://paperswithcode.com/paper/automatic-segmentation-of-vestibular
Repo
Framework
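
The hardness-weighted Dice loss lends itself to a short PyTorch sketch: voxels where the prediction disagrees with the ground truth receive larger weights. The specific weighting w = lam * |p - g| + (1 - lam) is an assumed form for illustration, not necessarily the paper’s exact definition:

```python
# Minimal hardness-weighted Dice loss: harder voxels (larger |p - g|) get
# larger weights in both the intersection and the denominator.
import torch

def hardness_weighted_dice(pred: torch.Tensor, target: torch.Tensor,
                           lam: float = 0.6, eps: float = 1e-6) -> torch.Tensor:
    """pred: sigmoid probabilities; target: binary mask of the same shape."""
    w = lam * (pred - target).abs() + (1.0 - lam)   # harder voxel -> larger w
    inter = (w * pred * target).sum()
    denom = (w * (pred + target)).sum()
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

pred = torch.rand(1, 1, 8, 64, 64, requires_grad=True)   # 2.5D-style volume
target = (torch.rand(1, 1, 8, 64, 64) > 0.7).float()
loss = hardness_weighted_dice(pred, target)
loss.backward()                                 # differentiable like plain Dice
print(float(loss))
```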

HyPar-Flow: Exploiting MPI and Keras for Scalable Hybrid-Parallel DNN Training using TensorFlow

Title HyPar-Flow: Exploiting MPI and Keras for Scalable Hybrid-Parallel DNN Training using TensorFlow
Authors Ammar Ahmad Awan, Arpan Jain, Quentin Anthony, Hari Subramoni, Dhabaleswar K. Panda
Abstract To reduce training time of large-scale DNNs, scientists have started to explore parallelization strategies like data-parallelism, model-parallelism, and hybrid-parallelism. While data-parallelism has been extensively studied and developed, several problems exist in realizing model-parallelism and hybrid-parallelism efficiently. Four major problems we focus on are: 1) defining a notion of a distributed model across processes, 2) implementing forward/back-propagation across process boundaries that requires explicit communication, 3) obtaining parallel speedup on an inherently sequential task, and 4) achieving scalability without losing out on a model’s accuracy. To address these problems, we create HyPar-Flow — a model-size/-type agnostic, scalable, practical, and user-transparent system for hybrid-parallel training by exploiting MPI, Keras, and TensorFlow. HyPar-Flow provides a single API that can be used to perform data, model, and hybrid parallel training of any Keras model at scale. We create an internal distributed representation of the user-provided Keras model, utilize TF’s Eager execution features for distributed forward/back-propagation across processes, exploit pipelining to improve performance and leverage efficient MPI primitives for scalable communication. Between model partitions, we use send and recv to exchange layer-data/partial-errors while allreduce is used to accumulate/average gradients across model replicas. Beyond the design and implementation of HyPar-Flow, we also provide comprehensive correctness and performance results on three state-of-the-art HPC systems including TACC Frontera (#5 on Top500.org). For ResNet-1001, an ultra-deep model, HyPar-Flow provides: 1) Up to 1.6x speedup over Horovod-based data-parallel training, 2) 110x speedup over single-node on 128 Stampede2 nodes, and 3) 481x speedup over single-node on 512 Frontera nodes.
Tasks
Published 2019-11-12
URL https://arxiv.org/abs/1911.05146v2
PDF https://arxiv.org/pdf/1911.05146v2.pdf
PWC https://paperswithcode.com/paper/hypar-flow-exploiting-mpi-and-keras-for
Repo
Framework
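
The communication pattern described above (send/recv between model partitions, allreduce across replicas) can be sketched directly with mpi4py. The snippet below is a skeleton of that pattern, not HyPar-Flow’s API, and assumes 4 ranks arranged as 2 model partitions x 2 data-parallel replicas:

```python
# Hybrid-parallel communication skeleton. Run with: mpirun -np 4 python hybrid.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
partition = rank % 2                 # which slice of the model this rank holds
replica = rank // 2                  # which data-parallel replica it belongs to

activations = np.random.rand(32, 128).astype(np.float32)
if partition == 0:                   # forward: send activations downstream
    comm.Send(activations, dest=rank + 1, tag=0)
else:                                # downstream partition receives them
    comm.Recv(activations, source=rank - 1, tag=0)

# Backward would mirror the Send/Recv for partial errors; gradients are then
# averaged across replicas of the SAME partition with an allreduce.
grads = np.random.rand(128).astype(np.float32)
peer = comm.Split(color=partition, key=replica)  # communicator over replicas
total = np.empty_like(grads)
peer.Allreduce(grads, total, op=MPI.SUM)
print(rank, "averaged grad[0]:", total[0] / peer.Get_size())
```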

Divergence-Based Motivation for Online EM and Combining Hidden Variable Models

Title Divergence-Based Motivation for Online EM and Combining Hidden Variable Models
Authors Ehsan Amid, Manfred K. Warmuth
Abstract Expectation-Maximization (EM) is a prominent approach for parameter estimation of hidden (aka latent) variable models. Given the full batch of data, EM forms an upper-bound of the negative log-likelihood of the model at each iteration and updates to the minimizer of this upper-bound. We first provide a “model level” interpretation of the EM upper-bound as sum of relative entropy divergences to a set of singleton models, induced by the set of observations. Our alternative motivation unifies the “observation level” and the “model level” view of the EM. As a result, we formulate an online version of the EM algorithm by adding an analogous inertia term which corresponds to the relative entropy divergence to the old model. Our motivation is more widely applicable than the previous approaches and leads to simple online updates for mixture of exponential distributions, hidden Markov models, and the first known online update for Kalman filters. Additionally, the finite sample form of the inertia term lets us derive online updates when there is no closed-form solution. Finally, we extend the analysis to the distributed setting where we motivate a systematic way of combining multiple hidden variable models. Experimentally, we validate the results on synthetic as well as real-world datasets.
Tasks Latent Variable Models
Published 2019-02-11
URL https://arxiv.org/abs/1902.04107v2
PDF https://arxiv.org/pdf/1902.04107v2.pdf
PWC https://paperswithcode.com/paper/divergence-based-motivation-for-online-em-and
Repo
Framework
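
The inertia view maps naturally onto the classic sufficient-statistics form of online EM, sketched here for a two-component 1-D Gaussian mixture: each observation’s E-step enters with weight eta_t while (1 - eta_t) retains the old model. This is a generic online-EM sketch in the spirit of the paper, not its exact derivation (the paper also covers HMMs and Kalman filters):

```python
# Online EM for a 1-D two-component Gaussian mixture via damped sufficient
# statistics; (1 - eta) acts as the inertia on the old model.
import numpy as np

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 1, 2000), rng.normal(3, 1, 2000)])
rng.shuffle(data)

pi, mu, var = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
s0, s1, s2 = pi.copy(), mu * pi, (var + mu**2) * pi
for t, x in enumerate(data, start=1):
    resp = pi * np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(var)  # E-step
    resp /= resp.sum()
    eta = (t + 10) ** -0.7                      # decaying step size (inertia)
    s0 = (1 - eta) * s0 + eta * resp            # damped sufficient statistics
    s1 = (1 - eta) * s1 + eta * resp * x
    s2 = (1 - eta) * s2 + eta * resp * x * x
    pi, mu = s0, s1 / s0                        # M-step from the statistics
    var = np.maximum(s2 / s0 - mu**2, 1e-3)

print("weights", np.round(pi, 2), "means", np.round(mu, 2))
```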

Unsupervised Image Regression for Heterogeneous Change Detection

Title Unsupervised Image Regression for Heterogeneous Change Detection
Authors Luigi T. Luppino, Filippo M. Bianchi, Gabriele Moser, Stian N. Anfinsen
Abstract Change detection in heterogeneous multitemporal satellite images is an emerging and challenging topic in remote sensing. In particular, one of the main challenges is to tackle the problem in an unsupervised manner. In this paper we propose an unsupervised framework for bitemporal heterogeneous change detection based on the comparison of affinity matrices and image regression. First, our method quantifies the similarity of affinity matrices computed from co-located image patches in the two images. This is done to automatically identify pixels that are likely to be unchanged. With the identified pixels as pseudo-training data, we learn a transformation to map the first image to the domain of the other image, and vice versa. Four regression methods are selected to carry out the transformation: Gaussian process regression, support vector regression, random forest regression, and a recently proposed kernel regression method called homogeneous pixel transformation. To evaluate the potential and limitations of our framework, as well as the benefits and disadvantages of each regression method, we perform experiments on two real data sets. The results indicate that the comparison of the affinity matrices can already be considered a change detection method by itself. However, image regression is shown to improve the results obtained by the previous step alone and produces accurate change detection maps despite the heterogeneity of the multitemporal input data. Notably, the random forest regression approach excels by achieving similar accuracy to the other methods, but with a significantly lower computational cost and with fast and robust tuning of hyperparameters.
Tasks
Published 2019-09-07
URL https://arxiv.org/abs/1909.05948v1
PDF https://arxiv.org/pdf/1909.05948v1.pdf
PWC https://paperswithcode.com/paper/unsupervised-image-regression-for
Repo
Framework
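
A condensed end-to-end sketch of the framework on toy single-band images: compare affinity matrices of co-located patches to select likely-unchanged pixels, fit a regression (random forest here) from one image to the other on those pixels, and read the change map off the regression residual. Patch sizes, the affinity kernel, and thresholds are simplifications of the paper’s setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
img1 = rng.random((60, 60))
img2 = 0.5 * img1 + 0.2                      # same scene, different "sensor"
img2[20:30, 20:30] = rng.random((10, 10))    # inject an actual change

def affinity(patch):
    v = patch.ravel()[:, None]               # pairwise pixel affinities
    return np.exp(-(v - v.T) ** 2)

k = 5
dist = np.zeros((12, 12))
for i in range(12):
    for j in range(12):
        dist[i, j] = np.linalg.norm(
            affinity(img1[i*k:(i+1)*k, j*k:(j+1)*k]) -
            affinity(img2[i*k:(i+1)*k, j*k:(j+1)*k]))

keep = dist <= np.quantile(dist, 0.5)        # likely-unchanged patches
mask = np.kron(keep.astype(int), np.ones((k, k), dtype=int)).astype(bool)
reg = RandomForestRegressor(n_estimators=50, random_state=0)
reg.fit(img1[mask].reshape(-1, 1), img2[mask])
change = np.abs(reg.predict(img1.reshape(-1, 1)).reshape(60, 60) - img2)
print("mean |residual| inside vs. outside the changed block:",
      change[20:30, 20:30].mean(), change[:20, :].mean())
```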

Comparison theorems on large-margin learning

Title Comparison theorems on large-margin learning
Authors Jun Fan, Dao-Hong Xiang
Abstract This paper studies the binary classification problem associated with a family of loss functions called large-margin unified machines (LUMs), which offer a natural bridge between distribution-based likelihood approaches and margin-based approaches. LUMs can also overcome the so-called data piling issue of the support vector machine in the high-dimension, low-sample-size setting. We establish some new comparison theorems for all LUM loss functions, which play a key role in the further error analysis of large-margin learning algorithms.
Tasks
Published 2019-08-13
URL https://arxiv.org/abs/1908.04470v1
PDF https://arxiv.org/pdf/1908.04470v1.pdf
PWC https://paperswithcode.com/paper/comparison-theorems-on-large-margin-learning
Repo
Framework
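
For readers unfamiliar with the LUM family, the loss can be written down compactly as a function of the functional margin. The parameterization below (hinge-like 1 - u below the knee, a polynomially decaying tail above it) is one published form of the family; treat the exact constants as an assumption to verify against the original LUM paper:

```python
# LUM loss as a function of the margin u = y * f(x): the knee sits at
# u = c/(1+c); small c gives smooth, likelihood-like losses, large c
# approaches the hinge loss (hard classification).
import numpy as np

def lum_loss(u, a=1.0, c=1.0):
    u = np.asarray(u, dtype=float)
    knee = c / (1.0 + c)
    denom = np.maximum((1.0 + c) * u - c + a, 1e-12)   # guard the tail branch
    tail = (1.0 / (1.0 + c)) * (a / denom) ** a
    return np.where(u < knee, 1.0 - u, tail)

margins = np.linspace(-1.0, 2.0, 7)
for c in (0.0, 1.0, 1e4):          # soft -> hard classification
    print(f"c={c:g}:", np.round(lum_loss(margins, c=c), 3))
```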

Momentum-Based Variance Reduction in Non-Convex SGD

Title Momentum-Based Variance Reduction in Non-Convex SGD
Authors Ashok Cutkosky, Francesco Orabona
Abstract Variance reduction has emerged in recent years as a strong competitor to stochastic gradient descent in non-convex problems, providing the first algorithms to improve upon the convergence rate of stochastic gradient descent for finding first-order critical points. However, variance reduction techniques typically require carefully tuned learning rates and willingness to use excessively large “mega-batches” in order to achieve their improved results. We present a new algorithm, STORM, that does not require any batches and makes use of adaptive learning rates, enabling simpler implementation and less hyperparameter tuning. Our technique for removing the batches uses a variant of momentum to achieve variance reduction in non-convex optimization. On smooth losses $F$, STORM finds a point $\boldsymbol{x}$ with $\mathbb{E}[\|\nabla F(\boldsymbol{x})\|]\le O(1/\sqrt{T}+\sigma^{1/3}/T^{1/3})$ in $T$ iterations with $\sigma^2$ variance in the gradients, matching the optimal rate but without requiring knowledge of $\sigma$.
Tasks
Published 2019-05-24
URL https://arxiv.org/abs/1905.10018v2
PDF https://arxiv.org/pdf/1905.10018v2.pdf
PWC https://paperswithcode.com/paper/momentum-based-variance-reduction-in-non
Repo
Framework
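
The STORM recursion itself is only a few lines. The sketch below runs it on a toy quadratic with constant step sizes (STORM’s actual step sizes are adaptive); note that both gradients in the correction term are evaluated on the same fresh sample, which is what removes the need for mega-batches:

```python
# STORM on a toy quadratic: d_{t+1} = g(x_{t+1}, xi) + (1 - a)(d_t - g(x_t, xi)).
import numpy as np

rng = np.random.default_rng(0)

def grad(x, xi):
    return 2.0 * (x - 3.0) + xi          # stochastic gradient of (x - 3)^2

x, eta, a = 0.0, 0.05, 0.3
d = grad(x, rng.standard_normal())       # d_1 = g(x_1, xi_1)
for _ in range(2000):
    x_next = x - eta * d
    xi = rng.standard_normal()           # ONE fresh sample per iteration,
    d = grad(x_next, xi) + (1 - a) * (d - grad(x, xi))  # used at both iterates
    x = x_next

print("x ->", round(x, 3))               # should approach the minimizer 3.0
```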