April 3, 2020

3574 words 17 mins read

Paper Group ANR 57

Automating Representation Discovery with MAP-Elites. TIME: A Transparent, Interpretable, Model-Adaptive and Explainable Neural Network for Dynamic Physical Processes. Accelerating Psychometric Screening Tests With Bayesian Active Differential Selection. Logistic Regression Regret: What’s the Catch?. Task-adaptive Asymmetric Deep Cross-modal Hashing …

Automating Representation Discovery with MAP-Elites

Title Automating Representation Discovery with MAP-Elites
Authors Adam Gaier, Alexander Asteroth, Jean-Baptiste Mouret
Abstract The way solutions are represented, or encoded, is usually the result of domain knowledge and experience. In this work, we combine MAP-Elites with Variational Autoencoders to learn a Data-Driven Encoding (DDE) that captures the essence of the highest-performing solutions while still being able to encode a wide array of solutions. Our approach learns this data-driven encoding during optimization by balancing between exploiting the DDE to generalize the knowledge contained in the current archive of elites and exploring new representations that are not yet captured by the DDE. Learning the representation during optimization allows the algorithm to solve high-dimensional problems, and provides a low-dimensional representation which can then be re-used. We evaluate the DDE approach by evolving solutions for inverse kinematics of a planar arm (200 joint angles) and for gaits of a 6-legged robot in action space (a sequence of 60 positions for each of the 12 joints). We show that the DDE approach not only accelerates and improves optimization, but also produces a powerful encoding that captures a bias for high performance while expressing a variety of solutions.
Tasks
Published 2020-03-09
URL https://arxiv.org/abs/2003.04389v1
PDF https://arxiv.org/pdf/2003.04389v1.pdf
PWC https://paperswithcode.com/paper/automating-representation-discovery-with-map
Repo
Framework
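
A minimal sketch of the exploit/explore loop the abstract describes: offspring are either generated through a learned low-dimensional encoding fit to the current archive of elites, or mutated directly in the original high-dimensional space. PCA stands in for the VAE purely to keep the example short and runnable, and the toy objective, behaviour descriptor, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Sketch of MAP-Elites with a learned Data-Driven Encoding (DDE).
# PCA is a lightweight stand-in for the VAE used in the paper.
import numpy as np
from sklearn.decomposition import PCA

DIM, LATENT, ITERS = 200, 10, 2000
rng = np.random.default_rng(0)

def evaluate(x):
    # Toy objective and 2-D behaviour descriptor (stand-ins for the
    # planar-arm / hexapod tasks in the paper).
    fitness = -np.sum(x ** 2)
    behaviour = np.clip((x[:2] + 1) / 2, 0, 1)
    return fitness, behaviour

def niche(behaviour, bins=10):
    return tuple(np.minimum((behaviour * bins).astype(int), bins - 1))

archive = {}                      # niche -> (fitness, genome)
for it in range(ITERS):
    if not archive:
        child = rng.uniform(-1, 1, DIM)
    elif len(archive) > LATENT and rng.random() < 0.5:
        # Exploit: perturb an elite inside the learned low-dim encoding.
        elites = np.array([g for _, g in archive.values()])
        dde = PCA(n_components=LATENT).fit(elites)
        z = dde.transform(elites[rng.integers(len(elites))][None])
        child = dde.inverse_transform(z + 0.1 * rng.normal(size=z.shape))[0]
    else:
        # Explore: mutate directly in the original 200-D space.
        _, parent = archive[list(archive)[rng.integers(len(archive))]]
        child = parent + 0.1 * rng.normal(size=DIM)
    fit, beh = evaluate(child)
    key = niche(beh)
    if key not in archive or fit > archive[key][0]:
        archive[key] = (fit, child)

print(f"{len(archive)} niches filled, best fitness "
      f"{max(f for f, _ in archive.values()):.3f}")
```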

TIME: A Transparent, Interpretable, Model-Adaptive and Explainable Neural Network for Dynamic Physical Processes

Title TIME: A Transparent, Interpretable, Model-Adaptive and Explainable Neural Network for Dynamic Physical Processes
Authors Gurpreet Singh, Soumyajit Gupta, Matt Lease, Clint N. Dawson
Abstract Partial Differential Equations are infinite-dimensional encoded representations of physical processes. However, incorporating multiple sources of observation data into a coupled representation presents significant challenges. We present a fully convolutional architecture that captures the invariant structure of the domain to reconstruct the observable system. The proposed architecture has significantly fewer weights than other networks for such problems. Our intent is to learn coupled dynamic processes, interpreted as deviations from true kernels representing isolated processes, for model adaptivity. Experimental analysis shows that our architecture is robust and transparent in capturing process kernels and system anomalies. We also show that a high-weight representation is not only redundant but also impairs network interpretability. Our design is guided by domain knowledge, with isolated process representations serving as ground truths for verification. These allow us to identify redundant kernels and their manifestations in activation maps to guide better designs that are both interpretable and explainable, unlike traditional deep nets.
Tasks
Published 2020-03-05
URL https://arxiv.org/abs/2003.02426v2
PDF https://arxiv.org/pdf/2003.02426v2.pdf
PWC https://paperswithcode.com/paper/time-a-transparent-interpretable-model
Repo
Framework
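
The abstract describes a low-weight fully convolutional network that maps observed fields of a dynamic process to a reconstruction of the system. A minimal PyTorch sketch of that flavour of architecture is below; the layer count, channel sizes, kernel width, and the residual formulation are assumptions, not the authors' design.

```python
# Minimal fully convolutional sketch in the spirit of the abstract:
# a small stack of convolutions maps observed fields of a dynamic
# physical process to an updated state. All sizes are assumptions.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self, in_fields=2, hidden=8, kernel=5):
        super().__init__()
        pad = kernel // 2
        self.net = nn.Sequential(
            nn.Conv2d(in_fields, hidden, kernel, padding=pad),
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel, padding=pad),
            nn.ReLU(),
            nn.Conv2d(hidden, in_fields, kernel, padding=pad),
        )

    def forward(self, u):
        # Residual update: the network predicts a deviation that is added
        # back to the input, loosely mirroring the "deviations from true
        # kernels" intuition in the abstract.
        return u + self.net(u)

model = TinyFCN()
u0 = torch.randn(1, 2, 64, 64)      # batch of coupled observation fields
u1 = model(u0)
print(u1.shape, sum(p.numel() for p in model.parameters()), "weights")
```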

Accelerating Psychometric Screening Tests With Bayesian Active Differential Selection

Title Accelerating Psychometric Screening Tests With Bayesian Active Differential Selection
Authors Trevor J. Larsen, Gustavo Malkomes, Dennis L. Barbour
Abstract Classical methods for psychometric function estimation either require excessive measurements or produce only a low-resolution approximation of the target psychometric function. In this paper, we propose a novel solution for rapid screening for a change in the psychometric function estimation of a given patient. We use Bayesian active model selection to perform an automated pure-tone audiogram test with the goal of quickly determining whether the current audiogram will differ from a previous audiogram. We validate our approach using audiometric data from the National Institute for Occupational Safety and Health (NIOSH). Initial results show that, with a few tones, we can detect with high confidence whether the patient’s audiometric function has changed between the two test sessions.
Tasks Model Selection
Published 2020-02-04
URL https://arxiv.org/abs/2002.01547v1
PDF https://arxiv.org/pdf/2002.01547v1.pdf
PWC https://paperswithcode.com/paper/accelerating-psychometric-screening-tests
Repo
Framework
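
A deliberately crude caricature of the active-selection idea in the abstract: probe the tone and intensity whose response would best distinguish "audiogram unchanged" from "audiogram changed", and update the current estimate from each response. This is not the paper's Bayesian model-selection machinery; the logistic response model, the staircase-style update, and all thresholds are illustrative assumptions.

```python
# Toy sketch of differential screening: query the tone/level whose
# predicted response differs most between the previous audiogram and the
# current estimate, plus a small bonus for levels near the estimated
# threshold. Everything here is an illustrative placeholder.
import numpy as np

rng = np.random.default_rng(1)
freqs = np.array([250, 500, 1000, 2000, 4000, 8000])   # Hz
levels = np.arange(0, 80, 5)                            # dB HL
old_thresholds = np.array([20, 20, 25, 30, 45, 50])     # previous audiogram
true_thresholds = old_thresholds + np.array([0, 0, 0, 15, 20, 10])

def p_hear(level, threshold, slope=0.5):
    return 1.0 / (1.0 + np.exp(-slope * (level - threshold)))

est_thresholds = old_thresholds.astype(float).copy()    # current belief
for step in range(15):
    gap = np.abs(p_hear(levels[None, :], old_thresholds[:, None])
                 - p_hear(levels[None, :], est_thresholds[:, None]))
    info = gap + 0.1 * np.exp(-((levels[None, :] - est_thresholds[:, None]) / 10) ** 2)
    f, l = np.unravel_index(np.argmax(info), info.shape)
    heard = rng.random() < p_hear(levels[l], true_thresholds[f])
    est_thresholds[f] += -5 if heard else +5             # crude staircase update
    print(f"step {step:2d}: probe {freqs[f]} Hz at {levels[l]} dB -> heard={heard}")

changed = np.abs(est_thresholds - old_thresholds) > 10
print("frequencies flagged as changed:", freqs[changed])
```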

Logistic Regression Regret: What’s the Catch?

Title Logistic Regression Regret: What’s the Catch?
Authors Gil I. Shamir
Abstract We address the problem of the achievable regret rates with online logistic regression. We derive lower bounds with logarithmic regret under $L_1$, $L_2$, and $L_\infty$ constraints on the parameter values. The bounds are dominated by $d/2 \log T$, where $T$ is the horizon and $d$ is the dimensionality of the parameter space. We show that these bounds are achievable for $d=o(T^{1/3})$ in all these cases with Bayesian methods, which attain them up to an additive $d/2 \log d$ term. Interestingly, different behaviors emerge for larger dimensionality. Specifically, on the negative side, if $d = \Omega(\sqrt{T})$, any algorithm is guaranteed regret of $\Omega(d \log T)$ (greater than $\Omega(\sqrt{T})$) under $L_\infty$ constraints on the parameters (and the example features). On the positive side, under $L_1$ constraints on the parameters, there exist algorithms that can achieve regret that is sub-linear in $d$ for the asymptotically larger values of $d$. For $L_2$ constraints, it is shown that for large enough $d$, the regret remains linear in $d$ but no longer logarithmic in $T$. Adapting the redundancy-capacity theorem from information theory, we demonstrate a principled methodology based on grids of parameters to derive lower bounds. Grids are also utilized to derive some upper bounds. Our results strengthen the upper bounds of Kakade and Ng (2005) and Foster et al. (2018) for this problem, introduce novel lower bounds, and adapt a methodology that can be used to obtain such bounds for other related problems. They also give a novel characterization of the asymptotic behavior when the dimension of the parameter space is allowed to grow with $T$. They additionally establish connections to the information theory literature, demonstrating that the actual regret for logistic regression depends on the richness of the parameter class, where even within this problem, richer classes lead to greater regret.
Tasks
Published 2020-02-07
URL https://arxiv.org/abs/2002.02950v2
PDF https://arxiv.org/pdf/2002.02950v2.pdf
PWC https://paperswithcode.com/paper/logistic-regression-regret-whats-the-catch
Repo
Framework
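
The regimes summarized in the abstract, restated compactly in display math (constants and lower-order terms omitted):

```latex
\[
\text{Regret}_T \;\approx\; \frac{d}{2}\log T
  \qquad \text{for } d = o\!\left(T^{1/3}\right),
  \text{ achievable by Bayesian methods up to an extra } \tfrac{d}{2}\log d,
\]
\[
d = \Omega\!\left(\sqrt{T}\right) \;\Longrightarrow\;
\text{Regret}_T = \Omega\!\left(d \log T\right)
  \quad \text{for any algorithm, under } L_\infty \text{ constraints.}
\]
```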

Task-adaptive Asymmetric Deep Cross-modal Hashing

Title Task-adaptive Asymmetric Deep Cross-modal Hashing
Authors Tong Wang, Lei Zhu, Zhiyong Cheng, Jingjing Li, Huaxiang Zhang
Abstract Supervised cross-modal hashing aims to embed the semantic correlations of heterogeneous modality data into binary hash codes with discriminative semantic labels. Because of its advantages in retrieval and storage efficiency, it is widely used for efficient cross-modal retrieval. However, existing research handles the different tasks of cross-modal retrieval equally, and simply learns the same couple of hash functions in a symmetric way for all of them. Under such circumstances, the uniqueness of different cross-modal retrieval tasks is ignored and sub-optimal performance may result. Motivated by this, we present a Task-adaptive Asymmetric Deep Cross-modal Hashing (TA-ADCMH) method in this paper. It can learn task-adaptive hash functions for two sub-retrieval tasks via simultaneous modality representation and asymmetric hash learning. Unlike previous cross-modal hashing approaches, our learning framework jointly optimizes semantic preserving, which transforms deep features of multimedia data into binary hash codes, and semantic regression, which directly regresses the query modality representation to an explicit label. With our model, the binary codes can effectively preserve semantic correlations across different modalities and, meanwhile, adaptively capture the query semantics. The superiority of TA-ADCMH is demonstrated on two standard datasets from multiple perspectives.
Tasks Cross-Modal Retrieval
Published 2020-04-01
URL https://arxiv.org/abs/2004.00197v1
PDF https://arxiv.org/pdf/2004.00197v1.pdf
PWC https://paperswithcode.com/paper/task-adaptive-asymmetric-deep-cross-modal
Repo
Framework
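
A hedged sketch of the kind of joint objective the abstract describes: one asymmetric loss per sub-retrieval task (image-to-text and text-to-image), each combining a semantic-preserving term on relaxed query codes against fixed database codes with a regression from the query representation to its label. The network shapes, feature dimensions, loss terms, and weights are illustrative assumptions, not the paper's model.

```python
# Hedged sketch of a task-adaptive asymmetric objective in the spirit of
# TA-ADCMH. Architectures, losses and weights are illustrative assumptions.
import torch
import torch.nn as nn

code_len, n_labels = 32, 10
img_net = nn.Sequential(nn.Linear(4096, 512), nn.ReLU(), nn.Linear(512, code_len))
txt_net = nn.Sequential(nn.Linear(1386, 512), nn.ReLU(), nn.Linear(512, code_len))
label_proj = nn.Linear(code_len, n_labels)

def asymmetric_loss(q_feat, q_net, db_codes, sim, labels, alpha=1.0):
    """One sub-retrieval task: relaxed query codes vs. fixed database codes."""
    q = torch.tanh(q_net(q_feat))                        # relaxed query codes
    inner = q @ db_codes.t() / code_len                  # code similarity
    preserve = ((inner - sim) ** 2).mean()               # semantic preserving
    regress = ((label_proj(q) - labels) ** 2).mean()     # label regression
    return preserve + alpha * regress

# Toy batch: 8 image queries vs. 16 database texts, and vice versa.
img, txt = torch.randn(8, 4096), torch.randn(16, 1386)
db_txt_codes = torch.sign(torch.randn(16, code_len))     # fixed binary codes
db_img_codes = torch.sign(torch.randn(8, code_len))
sim_i2t = (torch.rand(8, 16) > 0.5).float()              # 1 if labels overlap
img_labels = torch.randint(0, 2, (8, n_labels)).float()
txt_labels = torch.randint(0, 2, (16, n_labels)).float()

loss = (asymmetric_loss(img, img_net, db_txt_codes, sim_i2t, img_labels)
        + asymmetric_loss(txt, txt_net, db_img_codes, sim_i2t.t(), txt_labels))
loss.backward()
print(float(loss))
```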

The Sample Complexity of Meta Sparse Regression

Title The Sample Complexity of Meta Sparse Regression
Authors Zhanyu Wang, Jean Honorio
Abstract This paper addresses the meta-learning problem in sparse linear regression with infinite tasks. We assume that the learner can access several similar tasks. The goal of the learner is to transfer knowledge from the prior tasks to a similar but novel task. For $p$ parameters, support set size $k$, and $l$ samples per task, we show that $T \in O((k \log p)/l)$ tasks are sufficient in order to recover the common support of all tasks. With the recovered support, we can greatly reduce the sample complexity for estimating the parameter of the novel task, i.e., $l \in O(1)$ with respect to $T$ and $p$. We also prove that our rates are minimax optimal. A key difference between meta-learning and classical multi-task learning is that meta-learning focuses only on the recovery of the parameters of the novel task, while multi-task learning estimates the parameters of all tasks, which requires $l$ to grow with $T$. Instead, our efficient meta-learning estimator allows $l$ to be constant with respect to $T$ (i.e., few-shot learning).
Tasks Few-Shot Learning, Meta-Learning, Multi-Task Learning
Published 2020-02-22
URL https://arxiv.org/abs/2002.09587v1
PDF https://arxiv.org/pdf/2002.09587v1.pdf
PWC https://paperswithcode.com/paper/the-sample-complexity-of-meta-sparse
Repo
Framework
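
The sample-complexity statements from the abstract, written out in display math:

```latex
% p parameters, support size k, l samples per task, T tasks.
\[
T \in O\!\left(\frac{k \log p}{l}\right) \text{ tasks suffice to recover the common support,}
\qquad
l \in O(1) \ \text{(w.r.t. } T \text{ and } p\text{) samples suffice for the novel task.}
\]
```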

The Future of Digital Health with Federated Learning

Title The Future of Digital Health with Federated Learning
Authors Nicola Rieke, Jonny Hancox, Wenqi Li, Fausto Milletari, Holger Roth, Shadi Albarqouni, Spyridon Bakas, Mathieu N. Galtier, Bennett Landman, Klaus Maier-Hein, Sebastien Ourselin, Micah Sheller, Ronald M. Summers, Andrew Trask, Daguang Xu, Maximilian Baust, M. Jorge Cardoso
Abstract Data-driven machine learning (ML) has emerged as a promising approach for building accurate and robust statistical models from medical data, which is collected in huge volumes by modern healthcare systems. Existing medical data is not fully exploited by ML primarily because it sits in data silos and privacy concerns restrict access to this data. However, without access to sufficient data, ML will be prevented from reaching its full potential and, ultimately, from making the transition from research to clinical practice. This paper considers key factors contributing to this issue, explores how Federated Learning (FL) may provide a solution for the future of digital health, and highlights the challenges and considerations that need to be addressed.
Tasks
Published 2020-03-18
URL https://arxiv.org/abs/2003.08119v1
PDF https://arxiv.org/pdf/2003.08119v1.pdf
PWC https://paperswithcode.com/paper/the-future-of-digital-health-with-federated
Repo
Framework
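
The abstract argues for federated learning without describing a specific algorithm. As background only, here is a minimal federated-averaging-style sketch of the core mechanic FL relies on: each site trains locally and only model weights, never raw patient records, leave the silo. This is generic FL background, not the paper's method; the data, the logistic-regression model, and the hyperparameters are all synthetic placeholders.

```python
# Minimal federated-averaging-style sketch (not from the paper): each
# hospital trains locally on its own data and only model weights are
# averaged centrally, so raw records never leave the silo.
import numpy as np

rng = np.random.default_rng(0)
d, n_sites, rounds, local_steps, lr = 20, 5, 30, 10, 0.1
w_true = rng.normal(size=d)
sites = []
for _ in range(n_sites):
    X = rng.normal(size=(200, d))
    y = (X @ w_true + 0.1 * rng.normal(size=200) > 0).astype(float)
    sites.append((X, y))

def local_update(w, X, y):
    for _ in range(local_steps):
        p = 1 / (1 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)    # logistic-regression gradient step
    return w

w_global = np.zeros(d)
for r in range(rounds):
    local = [local_update(w_global.copy(), X, y) for X, y in sites]
    w_global = np.mean(local, axis=0)          # server averages the updates

acc = np.mean([((X @ w_global > 0) == (y > 0.5)).mean() for X, y in sites])
print(f"average in-silo accuracy after {rounds} rounds: {acc:.3f}")
```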

Crime Prediction Using Spatio-Temporal Data

Title Crime Prediction Using Spatio-Temporal Data
Authors Sohrab Hossain, Ahmed Abtahee, Imran Kashem, Mohammed Moshiul Hoque, Iqbal H. Sarker
Abstract A crime is a punishable offence that is harmful to an individual and to society. Comprehending the patterns of criminal activity is essential to preventing it. Research can help society to prevent and solve criminal activities. Studies show that only 10 percent of offenders commit 50 percent of the total offences. Law enforcement can respond faster if it has early information and pre-knowledge about criminal activity at different points of a city. In this paper, supervised learning techniques are used to predict crimes with better accuracy. The proposed system predicts crimes by analyzing a data set that contains records of previously committed crimes and their patterns. The system stands on two main algorithms: i) decision tree, and ii) k-nearest neighbor. The Random Forest algorithm and AdaBoost are used to increase the accuracy of the prediction. Finally, oversampling is used for better accuracy. The proposed system is fed a criminal-activity data set covering twelve years of the city of San Francisco.
Tasks Crime Prediction
Published 2020-03-11
URL https://arxiv.org/abs/2003.09322v1
PDF https://arxiv.org/pdf/2003.09322v1.pdf
PWC https://paperswithcode.com/paper/crime-prediction-using-spatio-temporal-data
Repo
Framework
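
A hedged scikit-learn sketch of the pipeline the abstract lists: oversample the minority classes, then compare decision tree, k-NN, random forest, and AdaBoost. The synthetic features stand in for the San Francisco records, whose schema is not given in the abstract.

```python
# Hedged sketch: random oversampling followed by the four classifiers
# named in the abstract. Synthetic data replaces the real crime records.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

X, y = make_classification(n_samples=3000, n_features=10, n_informative=6,
                           n_classes=4, weights=[0.6, 0.2, 0.15, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Simple random oversampling of every class up to the majority count.
counts = np.bincount(y_tr)
parts = []
for c in range(len(counts)):
    Xc, yc = X_tr[y_tr == c], y_tr[y_tr == c]
    Xc, yc = resample(Xc, yc, replace=True, n_samples=counts.max(), random_state=0)
    parts.append((Xc, yc))
X_bal = np.vstack([p[0] for p in parts])
y_bal = np.concatenate([p[1] for p in parts])

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "k-NN": KNeighborsClassifier(),
    "random forest": RandomForestClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
}
for name, m in models.items():
    print(f"{name:13s} accuracy: {m.fit(X_bal, y_bal).score(X_te, y_te):.3f}")
```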

Modeling of Spatio-Temporal Hawkes Processes with Randomized Kernels

Title Modeling of Spatio-Temporal Hawkes Processes with Randomized Kernels
Authors Fatih Ilhan, Suleyman Serdar Kozat
Abstract We investigate spatio-temporal event analysis using point processes. Inferring the dynamics of event sequences spatiotemporally has many practical applications, including crime prediction, social media analysis, and traffic forecasting. In particular, we focus on spatio-temporal Hawkes processes, which are commonly used due to their capability to capture excitations between event occurrences. We introduce a novel inference framework based on randomized transformations and gradient descent to learn the process. We replace the spatial kernel calculations with randomized Fourier feature-based transformations. The randomization introduced by this representation provides flexibility while modeling the spatial excitation between events. Moreover, the system described by the process is expressed in closed form in terms of scalable matrix operations. During the optimization, we use a maximum likelihood estimation approach and gradient descent while properly handling positivity and orthonormality constraints. The experimental results show the improvements achieved by the introduced method in terms of fitting capability on synthetic and real datasets compared to conventional inference methods in the spatio-temporal Hawkes process literature. We also analyze the triggering interactions between event types and how their dynamics change in space and time through the interpretation of the learned parameters.
Tasks Crime Prediction, Point Processes
Published 2020-03-07
URL https://arxiv.org/abs/2003.03671v1
PDF https://arxiv.org/pdf/2003.03671v1.pdf
PWC https://paperswithcode.com/paper/modeling-of-spatio-temporal-hawkes-processes
Repo
Framework
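
For reference, the standard spatio-temporal Hawkes intensity together with a random Fourier feature approximation of the spatial excitation kernel, which is the general construction the abstract describes; the exact parameterization used in the paper may differ.

```latex
% Standard spatio-temporal Hawkes intensity with an RFF-approximated
% spatial kernel (the paper's exact parameterization may differ).
\[
\lambda(t, s) \;=\; \mu(s) \;+\; \sum_{i:\, t_i < t} \kappa(t - t_i)\, g(s, s_i),
\qquad
g(s, s_i) \;\approx\; \phi(s)^{\top} W \, \phi(s_i),
\]
\[
\phi(s) \;=\; \sqrt{\tfrac{2}{D}}
  \bigl[\cos(\omega_1^{\top} s + b_1), \dots, \cos(\omega_D^{\top} s + b_D)\bigr]^{\top},
\quad \omega_j \sim \mathcal{N}(0, \sigma^{-2} I),\; b_j \sim \mathrm{U}[0, 2\pi],
\]
so the likelihood reduces to matrix operations on the feature maps, and
$W$, $\mu$, $\kappa$ are learned by gradient descent under positivity constraints.
```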

A Comparative Study on Crime in Denver City Based on Machine Learning and Data Mining

Title A Comparative Study on Crime in Denver City Based on Machine Learning and Data Mining
Authors Md. Aminur Rab Ratul
Abstract To ensure the security of the general public, crime prevention is one of the highest priorities for any government. An accurate crime prediction model can help the government and law enforcement to prevent violence, detect criminals in advance, allocate government resources, and recognize problems causing crimes. To construct any future-oriented tool, it is essential to examine and understand crime patterns as early as possible. In this paper, I analyze a real-world crime and accident dataset of Denver County, USA, from January 2014 to May 2019, which contains 478,578 incidents. This project aims to predict and highlight the trends of occurrence, which will in turn support law enforcement agencies and the government in devising preventive measures from the prediction rates. First, I apply several statistical analyses supported by data visualization approaches. Then, I implement various classification algorithms such as Random Forest, Decision Tree, AdaBoost Classifier, Extra Tree Classifier, Linear Discriminant Analysis, K-Neighbors Classifier, and 4 ensemble models to classify 15 different classes of crimes. The outcomes are captured using two popular test methods: train-test split and k-fold cross-validation. Moreover, to evaluate the performance thoroughly, I also utilize precision, recall, F1-score, Mean Squared Error (MSE), ROC curves, and a paired t-test. Except for the AdaBoost classifier, most of the algorithms exhibit satisfactory accuracy. Random Forest, Decision Tree, and Ensemble Models 1, 3, and 4 even produce more than 90% accuracy. Among all the approaches, Ensemble Model 4 presents superior results on every evaluation basis. This study could be useful for raising people's awareness regarding crime locations and for assisting security agencies in predicting future outbreaks of violence in a specific area within a particular time.
Tasks Crime Prediction
Published 2020-01-09
URL https://arxiv.org/abs/2001.02802v1
PDF https://arxiv.org/pdf/2001.02802v1.pdf
PWC https://paperswithcode.com/paper/a-comparative-study-on-crime-in-denver-city
Repo
Framework
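
A hedged sketch of the comparison protocol in the abstract: k-fold cross-validation over several of the listed classifiers, here reporting macro F1. The synthetic data replaces the Denver incident records, and the simple voting ensemble is only a stand-in for the paper's hand-built ensemble models.

```python
# Hedged sketch of the evaluation protocol: 5-fold CV over several of the
# classifiers named in the abstract, on synthetic 15-class data.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import (AdaBoostClassifier, ExtraTreesClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=12, n_informative=8,
                           n_classes=15, n_clusters_per_class=1, random_state=0)

models = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Extra Trees": ExtraTreesClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "LDA": LinearDiscriminantAnalysis(),
    "k-NN": KNeighborsClassifier(),
}
# A simple voting ensemble of the tree-based models, standing in for the
# paper's hand-built "Ensemble Models".
models["Ensemble (voting)"] = VotingClassifier(
    [(k, v) for k, v in models.items() if "Tree" in k or "Forest" in k])

for name, m in models.items():
    scores = cross_val_score(m, X, y, cv=5, scoring="f1_macro")
    print(f"{name:18s} macro F1: {scores.mean():.3f}")
```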

EEG-based Brain-Computer Interfaces (BCIs): A Survey of Recent Studies on Signal Sensing Technologies and Computational Intelligence Approaches and their Applications

Title EEG-based Brain-Computer Interfaces (BCIs): A Survey of Recent Studies on Signal Sensing Technologies and Computational Intelligence Approaches and their Applications
Authors Xiaotong Gu, Zehong Cao, Alireza Jolfaei, Peng Xu, Dongrui Wu, Tzyy-Ping Jung, Chin-Teng Lin
Abstract Brain-Computer Interface (BCI) is a powerful communication tool between users and systems, which enhances the capability of the human brain in communicating and interacting with the environment directly. Advances in neuroscience and computer science in the past decades have led to exciting developments in BCI, thereby making BCI a top interdisciplinary research area in computational neuroscience and intelligence. Recent technological advances such as wearable sensing devices, real-time data streaming, machine learning, and deep learning approaches have increased interest in electroencephalographic (EEG) based BCI for translational and healthcare applications. Many people benefit from EEG-based BCIs, which facilitate continuous monitoring of fluctuations in cognitive states under monotonous tasks in the workplace or at home. In this study, we survey the recent literature on EEG signal sensing technologies and computational intelligence approaches in BCI applications, compensating for gaps in the systematic summary of the past five years (2015-2019). Specifically, we first review the current status of BCI and its significant obstacles. Then, we present advanced signal sensing and enhancement technologies to collect and clean EEG signals, respectively. Furthermore, we demonstrate state-of-the-art computational intelligence techniques, including interpretable fuzzy models, transfer learning, deep learning, and their combinations, to monitor, maintain, or track human cognitive states and operating performance in prevalent applications. Finally, we present a couple of innovative BCI-inspired healthcare applications and discuss future research directions in EEG-based BCIs.
Tasks EEG, Transfer Learning
Published 2020-01-28
URL https://arxiv.org/abs/2001.11337v1
PDF https://arxiv.org/pdf/2001.11337v1.pdf
PWC https://paperswithcode.com/paper/eeg-based-brain-computer-interfaces-bcis-a
Repo
Framework

Spatiotemporal-Aware Augmented Reality: Redefining HCI in Image-Guided Therapy

Title Spatiotemporal-Aware Augmented Reality: Redefining HCI in Image-Guided Therapy
Authors Javad Fotouhi, Arian Mehrfard, Tianyu Song, Alex Johnson, Greg Osgood, Mathias Unberath, Mehran Armand, Nassir Navab
Abstract Suboptimal interaction with patient data and challenges in mastering 3D anatomy based on ill-posed 2D interventional images are essential concerns in image-guided therapies. Augmented reality (AR) has been introduced in operating rooms in the last decade; however, in image-guided interventions, it has often only been considered as a visualization device improving traditional workflows. As a consequence, the technology has not yet gained the maturity it requires to redefine new procedures, user interfaces, and interactions. The main contribution of this paper is to reveal how exemplary workflows are redefined by taking full advantage of head-mounted displays when entirely co-registered with the imaging system at all times. The proposed AR landscape is enabled by co-localizing the users and the imaging devices via the operating room environment and exploiting all involved frustums to move spatial information between different bodies. The awareness of the system of the geometric and physical characteristics of X-ray imaging allows the redefinition of different human-machine interfaces. We demonstrate that this AR paradigm is generic and can benefit a wide variety of procedures. Our system achieved an error of $4.76\pm2.91$ mm for placing a K-wire in a fracture management procedure, and yielded errors of $1.57\pm1.16^\circ$ and $1.46\pm1.00^\circ$ in the abduction and anteversion angles, respectively, for total hip arthroplasty. We hope that our holistic approach towards improving the interface of surgery not only augments the surgeon’s capabilities but also augments the surgical team’s experience in carrying out an effective intervention with reduced complications, and provides novel approaches for documenting procedures for training purposes.
Tasks
Published 2020-03-04
URL https://arxiv.org/abs/2003.02260v1
PDF https://arxiv.org/pdf/2003.02260v1.pdf
PWC https://paperswithcode.com/paper/spatiotemporal-aware-augmented-reality
Repo
Framework

Differentially Private Mean Embeddings with Random Features (DP-MERF) for Simple & Practical Synthetic Data Generation

Title Differentially Private Mean Embeddings with Random Features (DP-MERF) for Simple & Practical Synthetic Data Generation
Authors Frederik Harder, Kamil Adamczewski, Mijung Park
Abstract We present a differentially private data generation paradigm using random feature representations of kernel mean embeddings when comparing the distribution of true data with that of synthetic data. We exploit the random feature representations for two important benefits. First, we require a very low privacy cost for training deep generative models. This is because, unlike kernel-based distance metrics that require computing the kernel matrix on all pairs of true and synthetic data points, we can detach the data-dependent term from the term solely dependent on synthetic data. Hence, we need to perturb the data-dependent term only once and for all, and can then use it until the end of the generator training. Second, we can obtain an analytic sensitivity of the kernel mean embedding, as the random features are norm-bounded by construction. This removes the necessity of a hyperparameter search for a clipping norm to handle the unknown sensitivity of an encoder network when dealing with high-dimensional data. We provide several variants of our algorithm, differentially private mean embeddings with random features (DP-MERF), to generate (a) heterogeneous tabular data, (b) input features and corresponding labels jointly, and (c) high-dimensional data. Our algorithm achieves better privacy-utility trade-offs than existing methods tested on several datasets.
Tasks Synthetic Data Generation
Published 2020-02-26
URL https://arxiv.org/abs/2002.11603v2
PDF https://arxiv.org/pdf/2002.11603v2.pdf
PWC https://paperswithcode.com/paper/differentially-private-mean-embeddings-with
Repo
Framework
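
A hedged NumPy sketch of the core idea in the abstract: compute the random-feature mean embedding of the private data, perturb it once with noise calibrated to its analytic sensitivity (the features are norm-bounded by construction), and then fit a generator against that fixed noisy target. The "generator" here is a toy Gaussian location model fitted with finite-difference gradients, not the paper's deep generative model, and all hyperparameters are illustrative.

```python
# Hedged sketch of DP-MERF's mechanics: privatize the data-dependent
# mean embedding once, then reuse it for the whole generator fit.
import numpy as np

rng = np.random.default_rng(0)
d, D, n, eps, delta = 2, 200, 2000, 1.0, 1e-5
X = rng.normal(loc=[2.0, -1.0], size=(n, d))              # "private" data

omega = rng.normal(size=(d, D))                            # random Fourier frequencies
def phi(X):
    z = X @ omega
    return np.hstack([np.cos(z), np.sin(z)]) / np.sqrt(D)  # each row has norm 1

# Data-dependent term, privatized once and for all: replacing one record
# changes the mean embedding by at most 2/n in L2 (Gaussian mechanism).
sigma = (2.0 / n) * np.sqrt(2 * np.log(1.25 / delta)) / eps
mu_target = phi(X).mean(axis=0) + rng.normal(scale=sigma, size=2 * D)

theta = np.zeros(d)                                        # generator: N(theta, I)
for step in range(300):
    noise = rng.normal(size=(500, d))
    def loss(t):
        # Squared distance between synthetic and (noisy) data embeddings.
        return np.sum((phi(t + noise).mean(axis=0) - mu_target) ** 2)
    # Central finite-difference gradient, kept simple on purpose.
    g = np.array([(loss(theta + 1e-3 * np.eye(d)[j])
                   - loss(theta - 1e-3 * np.eye(d)[j])) / 2e-3 for j in range(d)])
    theta -= 0.5 * g

print("generator mean after fitting:", np.round(theta, 2), "(data mean was [2, -1])")
```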

Robust Learning from Discriminative Feature Feedback

Title Robust Learning from Discriminative Feature Feedback
Authors Sanjoy Dasgupta, Sivan Sabato
Abstract Recent work introduced the model of learning from discriminative feature feedback, in which a human annotator not only provides labels of instances, but also identifies discriminative features that highlight important differences between pairs of instances. It was shown that such feedback can be conducive to learning, and makes it possible to efficiently learn some concept classes that would otherwise be intractable. However, these results all relied upon perfect annotator feedback. In this paper, we introduce a more realistic, robust version of the framework, in which the annotator is allowed to make mistakes. We show how such errors can be handled algorithmically, in both an adversarial and a stochastic setting. In particular, we derive regret bounds in both settings that, as in the case of a perfect annotator, are independent of the number of features. We show that this result cannot be obtained by a naive reduction from the robust setting to the non-robust setting.
Tasks
Published 2020-03-09
URL https://arxiv.org/abs/2003.03946v1
PDF https://arxiv.org/pdf/2003.03946v1.pdf
PWC https://paperswithcode.com/paper/robust-learning-from-discriminative-feature
Repo
Framework

Automatic Extraction of Bengali Root Verbs using Paninian Grammar

Title Automatic Extraction of Bengali Root Verbs using Paninian Grammar
Authors Arijit Das, Tapas Halder, Diganta Saha
Abstract In this research work, we propose an algorithm based on a supervised learning methodology to extract the root forms of Bengali verbs using the grammatical rules proposed by Panini [1] in the Ashtadhyayi. This methodology can be applied to languages that are derived from Sanskrit. The proposed system has been developed based on tense, person, and morphological inflections of the verbs to find their root forms. The work has been executed in two phases: first, the surface-level or inflected forms of the verbs are classified into a certain number of groups of similar tense and person. For this task, a standard pattern available in the Bengali language has been used. Next, a set of rules is applied to extract the root form from the surface-level forms of a verb. The system has been tested on 10000 verbs collected from the Bengali text corpus developed in the TDIL project of the Govt. of India. The output achieves an accuracy of 98%, which has been verified by a linguistic expert. Root verb identification is a key step in semantic searching, multi-sentence search query processing, understanding the meaning of a language, word sense disambiguation, sentence classification, etc.
Tasks
Published 2020-03-31
URL https://arxiv.org/abs/2004.00089v1
PDF https://arxiv.org/pdf/2004.00089v1.pdf
PWC https://paperswithcode.com/paper/automatic-extraction-of-bengali-root-verbs
Repo
Framework
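
A heavily hedged sketch of the two-phase procedure the abstract describes: classify an inflected form by matching its ending against a tense/person pattern table, then strip that inflection to obtain the root. The suffix table below is a tiny illustrative placeholder; it is not the paper's rule set, Panini's rules, or a faithful account of Bengali morphology.

```python
# Two-phase rule-based root extraction, as a toy:
# (1) match the ending against a tense/person pattern table,
# (2) strip that inflection to get the candidate root.
SUFFIX_TABLE = {
    # surface suffix -> (tense, person) group   (placeholders only)
    "chhi": ("present continuous", "1st person"),
    "chhe": ("present continuous", "3rd person"),
    "bo":   ("future", "1st person"),
    "be":   ("future", "3rd person"),
    "lam":  ("past", "1st person"),
}

def extract_root(inflected: str):
    """Return (root, tense, person) or None if no pattern matches."""
    # Longest-suffix-first so e.g. "chhe" wins over shorter endings.
    for suffix in sorted(SUFFIX_TABLE, key=len, reverse=True):
        if inflected.endswith(suffix) and len(inflected) > len(suffix):
            tense, person = SUFFIX_TABLE[suffix]
            return inflected[: -len(suffix)], tense, person
    return None

# Romanized toy examples (placeholders, not verified Bengali forms).
for form in ["korchhi", "korbe", "korlam"]:
    print(form, "->", extract_root(form))
```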