Paper Group ANR 1524
Learning to Identify Patients at Risk of Uncontrolled Hypertension Using Electronic Health Records Data. The ISTI Rapid Response on Exploring Cloud Computing 2018. Reinforcement Learning for Learning of Dynamical Systems in Uncertain Environment: a Tutorial. An Empirical Study on Hyperparameters and their Interdependence for RL Generalization. Freeze and Chaos for DNNs: an NTK view of Batch Normalization, Checkerboard and Boundary Effects. Predicting Performance of Software Configurations: There is no Silver Bullet. D2D-LSTM based Prediction of the D2D Diffusion Path in Mobile Social Networks. Smooth function approximation by deep neural networks with general activation functions. Estimating Smooth GLM in Non-interactive Local Differential Privacy Model with Public Unlabeled Data. MUTLA: A Large-Scale Dataset for Multimodal Teaching and Learning Analytics. Learning Optimal and Fair Decision Trees for Non-Discriminative Decision-Making. Speech-XLNet: Unsupervised Acoustic Model Pretraining For Self-Attention Networks. FBK-HUPBA Submission to the EPIC-Kitchens 2019 Action Recognition Challenge. PCGAN-CHAR: Progressively Trained Classifier Generative Adversarial Networks for Classification of Noisy Handwritten Bangla Characters. ROIPCA: An Online PCA algorithm based on rank-one updates.
Learning to Identify Patients at Risk of Uncontrolled Hypertension Using Electronic Health Records Data
Title | Learning to Identify Patients at Risk of Uncontrolled Hypertension Using Electronic Health Records Data |
Authors | Ramin Mohammadi, Sarthak Jain, Stephen Agboola, Ramya Palacholla, Sagar Kamarthi, Byron C. Wallace |
Abstract | Hypertension is a major risk factor for stroke, cardiovascular disease, and end-stage renal disease, and its prevalence is expected to rise dramatically. Effective hypertension management is thus critical. A particular priority is decreasing the incidence of uncontrolled hypertension. Early identification of patients at risk for uncontrolled hypertension would allow targeted use of personalized, proactive treatments. We develop machine learning models (logistic regression and recurrent neural networks) to stratify patients with respect to the risk of exhibiting uncontrolled hypertension within the coming three-month period. We trained and tested models using EHR data from 14,407 and 3,009 patients, respectively. The best model achieved an AUROC of 0.719, outperforming the simple but competitive baseline of predicting on the basis of the last BP measurement alone (0.634). Perhaps surprisingly, recurrent neural networks did not outperform a simple logistic regression for this task, suggesting that linear models should be included as strong baselines for predictive tasks using EHR data. |
Tasks | |
Published | 2019-06-28 |
URL | https://arxiv.org/abs/1907.00089v1 |
https://arxiv.org/pdf/1907.00089v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-to-identify-patients-at-risk-of |
Repo | |
Framework | |
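
A minimal sketch of the kind of linear baseline this abstract argues for: a logistic-regression risk model scored by AUROC. The features and synthetic data below are stand-ins, not the paper's EHR variables or cohort.

```python
# Minimal sketch (not the authors' code): a logistic-regression risk model
# with AUROC evaluation, the kind of linear baseline the abstract recommends
# alongside RNNs for EHR prediction tasks. Features are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical per-patient features, e.g. last systolic/diastolic BP, age, BMI.
X = rng.normal(size=(n, 4))
# Synthetic label loosely driven by the "last BP" feature.
y = (X[:, 0] + 0.3 * rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```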
The ISTI Rapid Response on Exploring Cloud Computing 2018
Title | The ISTI Rapid Response on Exploring Cloud Computing 2018 |
Authors | Carleton Coffrin, James Arnold, Stephan Eidenbenz, Derek Aberle, John Ambrosiano, Zachary Baker, Sara Brambilla, Michael Brown, K. Nolan Carter, Pinghan Chu, Patrick Conry, Keeley Costigan, Ariane Eberhardt, David M. Fobes, Adam Gausmann, Sean Harris, Donovan Heimer, Marlin Holmes, Bill Junor, Csaba Kiss, Steve Linger, Rodman Linn, Li-Ta Lo, Jonathan MacCarthy, Omar Marcillo, Clay McGinnis, Alexander McQuarters, Eric Michalak, Arvind Mohan, Matt Nelson, Diane Oyen, Nidhi Parikh, Donatella Pasqualini, Aaron S. Pope, Reid Porter, Chris Rawlings, Hannah Reinbolt, Reid Rivenburgh, Phil Romero, Kevin Schoonover, Alexei Skurikhin, Daniel Tauritz, Dima Tretiak, Zhehui Wang, James Wernicke, Brad Wolfe, Phillip Wolfram, Jonathan Woodring |
Abstract | This report describes eighteen projects that explored how commercial cloud computing services can be utilized for scientific computation at national laboratories. These demonstrations ranged from deploying proprietary software in a cloud environment to leveraging established cloud-based analytics workflows for processing scientific datasets. By and large, the projects were successful and collectively they suggest that cloud computing can be a valuable computational resource for scientific computation at national laboratories. |
Tasks | |
Published | 2019-01-04 |
URL | http://arxiv.org/abs/1901.01331v1 |
http://arxiv.org/pdf/1901.01331v1.pdf | |
PWC | https://paperswithcode.com/paper/the-isti-rapid-response-on-exploring-cloud |
Repo | |
Framework | |
Reinforcement Learning for Learning of Dynamical Systems in Uncertain Environment: a Tutorial
Title | Reinforcement Learning for Learning of Dynamical Systems in Uncertain Environment: a Tutorial |
Authors | Mehran Attar, Mohammadreza Dabirian |
Abstract | In this paper, a review of model-free reinforcement learning for learning of dynamical systems in uncertain environments is presented. For this purpose, the Markov Decision Process (MDP) is first reviewed. Furthermore, model-free algorithms such as Temporal Difference (TD) learning, Q-Learning, and Approximate Q-Learning, which constitute the main part of this article, are investigated, and the benefits and drawbacks of each algorithm are discussed. The concepts in each section are explained in detail with examples. |
Tasks | Q-Learning |
Published | 2019-05-19 |
URL | https://arxiv.org/abs/1905.07727v1 |
https://arxiv.org/pdf/1905.07727v1.pdf | |
PWC | https://paperswithcode.com/paper/reinforcement-learning-for-learning-of |
Repo | |
Framework | |
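
Since the tutorial centers on TD-style updates, here is a minimal tabular Q-learning sketch. The toy chain MDP is illustrative only, not an example from the paper.

```python
# Minimal tabular Q-learning on a toy chain MDP (illustrative, not from the
# paper): the agent moves left/right along 5 states and is rewarded at the
# right end. The core TD update is marked below.
import numpy as np

n_states, n_actions = 5, 2      # chain MDP: action 0 = left, 1 = right
alpha, gamma, eps = 0.1, 0.95, 0.1
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def step(s, a):
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == n_states - 1 else 0.0   # reward at the right end
    return s2, r

for episode in range(500):
    s = 0
    for t in range(50):
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r = step(s, a)
        # TD update: Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(np.round(Q, 2))  # right-moving actions should dominate
```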
An Empirical Study on Hyperparameters and their Interdependence for RL Generalization
Title | An Empirical Study on Hyperparameters and their Interdependence for RL Generalization |
Authors | Xingyou Song, Yilun Du, Jacob Jackson |
Abstract | Recent results in Reinforcement Learning (RL) have shown that agents with limited training environments are susceptible to a large amount of overfitting across many domains. A key challenge for RL generalization is to quantitatively explain the effects of changing parameters on testing performance. Such parameters include architecture, regularization, and RL-dependent variables such as discount factor and action stochasticity. We provide empirical results that show complex and interdependent relationships between hyperparameters and generalization. We further show that several empirical metrics such as gradient cosine similarity and trajectory-dependent metrics serve to provide intuition towards these results. |
Tasks | |
Published | 2019-06-02 |
URL | https://arxiv.org/abs/1906.00431v1 |
https://arxiv.org/pdf/1906.00431v1.pdf | |
PWC | https://paperswithcode.com/paper/190600431 |
Repo | |
Framework | |
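
The abstract names gradient cosine similarity as one of its empirical metrics. Below is a hedged sketch of that quantity: the cosine between parameter gradients computed on two different environments. The model, data, and losses are stand-ins, not the paper's setup.

```python
# Sketch of a gradient cosine-similarity metric: cosine between parameter
# gradients from two environments (or minibatches). The linear model and
# random data here are placeholders, not the paper's RL agents.
import torch

torch.manual_seed(0)
model = torch.nn.Linear(8, 2)
x_env1, x_env2 = torch.randn(32, 8), torch.randn(32, 8)
y_env1, y_env2 = torch.randint(0, 2, (32,)), torch.randint(0, 2, (32,))
loss_fn = torch.nn.CrossEntropyLoss()

def flat_grad(x, y):
    model.zero_grad()
    loss_fn(model(x), y).backward()
    # torch.cat copies, so the result survives the next zero_grad()
    return torch.cat([p.grad.reshape(-1) for p in model.parameters()])

g1, g2 = flat_grad(x_env1, y_env1), flat_grad(x_env2, y_env2)
cos = torch.nn.functional.cosine_similarity(g1, g2, dim=0)
print("gradient cosine similarity:", cos.item())
```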
Freeze and Chaos for DNNs: an NTK view of Batch Normalization, Checkerboard and Boundary Effects
Title | Freeze and Chaos for DNNs: an NTK view of Batch Normalization, Checkerboard and Boundary Effects |
Authors | Arthur Jacot, Franck Gabriel, Clément Hongler |
Abstract | In this paper, we analyze a number of architectural features of Deep Neural Networks (DNNs), using the so-called Neural Tangent Kernel (NTK). The NTK describes the training trajectory and generalization of DNNs in the infinite-width limit. In this limit, we show that for (fully-connected) DNNs, as the depth grows, two regimes appear: “freeze” (also known as “order”), where the (scaled) NTK converges to a constant (slowing convergence), and “chaos”, where it converges to a Kronecker delta (limiting generalization). We show that when using the scaled ReLU as a nonlinearity, we naturally end up in the “freeze”. We show that Batch Normalization (BN) avoids the freeze regime by reducing the importance of the constant mode in the NTK. A similar effect is obtained by normalizing the nonlinearity which moves the network to the chaotic regime. We uncover the same “freeze” and “chaos” modes in Deep Deconvolutional Networks (DC-NNs). The “freeze” regime is characterized by checkerboard patterns in the image space in addition to the constant modes in input space. Finally, we introduce a new NTK-based parametrization to eliminate border artifacts and we propose a layer-dependent learning rate to improve the convergence of DC-NNs. We illustrate our findings by training DCGANs using our setup. When trained in the “freeze” regime, we see that the generator collapses to a checkerboard mode. We also demonstrate numerically that the generator collapse can be avoided and that good quality samples can be obtained, by tuning the nonlinearity to reach the “chaos” regime (without using batch normalization). |
Tasks | |
Published | 2019-07-11 |
URL | https://arxiv.org/abs/1907.05715v1 |
https://arxiv.org/pdf/1907.05715v1.pdf | |
PWC | https://paperswithcode.com/paper/freeze-and-chaos-for-dnns-an-ntk-view-of |
Repo | |
Framework | |
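
For context, the (empirical) NTK of a finite network is the Gram matrix of parameter gradients, Theta(x, x') = ⟨∇θ f(x), ∇θ f(x')⟩. The sketch below computes it for a tiny MLP; the paper's freeze/chaos analysis concerns the infinite-width limit, which a finite network only approximates.

```python
# Empirical NTK sketch for a small scalar-output MLP:
# Theta(x, x') = <grad_theta f(x), grad_theta f(x')>.
# A width-512 net only approximates the infinite-width kernel the paper studies.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 512), torch.nn.ReLU(), torch.nn.Linear(512, 1)
)

def param_grad(x):
    net.zero_grad()
    net(x.unsqueeze(0)).squeeze().backward()
    return torch.cat([p.grad.reshape(-1) for p in net.parameters()])

xs = torch.randn(4, 2)
grads = torch.stack([param_grad(x) for x in xs])
ntk = grads @ grads.T          # 4x4 empirical NTK Gram matrix
print(ntk)
```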
Predicting Performance of Software Configurations: There is no Silver Bullet
Title | Predicting Performance of Software Configurations: There is no Silver Bullet |
Authors | Alexander Grebhahn, Norbert Siegmund, Sven Apel |
Abstract | Many software systems offer configuration options to tailor their functionality and non-functional properties (e.g., performance). Often, users are interested in the (performance-)optimal configuration, but struggle to find it, due to missing information on influences of individual configuration options and their interactions. In the past, various supervised machine-learning techniques have been used to predict the performance of all configurations and to identify the optimal one. In the literature, there is a large number of machine-learning techniques and sampling strategies to select from. It is unclear, though, to what extent they affect prediction accuracy. We have conducted a comparative study regarding the mean prediction accuracy when predicting the performance of all configurations considering 6 machine-learning techniques, 18 sampling strategies, and 6 subject software systems. We found that both the learning technique and the sampling strategy have a strong influence on prediction accuracy. We further observed that some learning techniques (e.g., random forests) outperform other learning techniques (e.g., k-nearest neighbor) in most cases. Moreover, as the prediction accuracy strongly depends on the subject system, there is no combination of a learning technique and sampling strategy that is optimal in all cases, considering the tradeoff between accuracy and measurement overhead, which is in line with the famous no-free-lunch theorem. |
Tasks | |
Published | 2019-11-28 |
URL | https://arxiv.org/abs/1911.12643v1 |
https://arxiv.org/pdf/1911.12643v1.pdf | |
PWC | https://paperswithcode.com/paper/predicting-performance-of-software |
Repo | |
Framework | |
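
A hedged sketch of the study's basic protocol: sample a subset of configurations, train a learner, and measure prediction error on the remaining configurations. The synthetic performance function (with one option interaction) and the simple random sampling strategy are illustrative, not the paper's subject systems or full set of strategies.

```python
# Sketch of the experimental setup: sample configurations, train a learner,
# evaluate on the rest. The synthetic performance function is illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
# All 256 configurations of 8 binary options.
configs = np.array([[int(b) for b in f"{i:08b}"] for i in range(256)])
# Synthetic performance: per-option effects plus one pairwise interaction.
perf = configs @ rng.normal(size=8) + 3.0 * configs[:, 0] * configs[:, 1]

sample = rng.choice(256, size=32, replace=False)   # random sampling strategy
rest = np.setdiff1d(np.arange(256), sample)

for name, model in [("random forest", RandomForestRegressor(random_state=0)),
                    ("k-NN", KNeighborsRegressor(n_neighbors=5))]:
    model.fit(configs[sample], perf[sample])
    err = np.mean(np.abs(model.predict(configs[rest]) - perf[rest]))
    print(f"{name}: mean absolute error = {err:.3f}")
```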
D2D-LSTM based Prediction of the D2D Diffusion Path in Mobile Social Networks
Title | D2D-LSTM based Prediction of the D2D Diffusion Path in Mobile Social Networks |
Authors | Hao Xu |
Abstract | Recently, how to offload data transmission to reduce cellular traffic and repeated cellular transmissions has received growing research attention. In mobile social networks, content popularity prediction has always been an important part of traffic offloading and of expanding data dissemination. However, current mainstream content popularity prediction methods use only the number of downloads and shares or the distribution of user interests; they do not consider the important time and geographic location information in mobile social networks, and all of their data comes from online social networks (OSNs), which are not the same as mobile social networks (MSNs). In this work, we propose D2D Long Short-Term Memory (D2D-LSTM), a deep neural network based on LSTM, which is designed to predict a complete D2D diffusion path. Our work is the first attempt to use real MSN data to predict diffusion paths with deep neural networks that conform to the D2D structure. Compared to linear sequence networks, which learn only users’ social features without time distributions, GPS distributions, or file content features, our model can predict the propagation path more accurately (up to 85.858%) and reach convergence faster (in fewer than 100 steps), because its architecture conforms to the D2D structure and combines user social features with file features. Moreover, we can simulate generating a D2D propagation tree; after experiments and comparison, it is found to be very similar to the ground-truth trees. Finally, we define a user prototype refinement that more accurately describes the propagation and sharing habits of a prototype user (including content preferences, time preferences, and geographic location preferences), and we experimentally validate that when the number of user prototype classes is increased to 1000, prediction performance is almost identical to that with 50 categories. |
Tasks | |
Published | 2019-09-28 |
URL | https://arxiv.org/abs/1910.01453v1 |
https://arxiv.org/pdf/1910.01453v1.pdf | |
PWC | https://paperswithcode.com/paper/d2d-lstm-based-prediction-of-the-d2d |
Repo | |
Framework | |
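
A stripped-down sketch of the core prediction task: an LSTM that predicts the next user (node) on a diffusion path. The paper's D2D-LSTM additionally fuses time, GPS, and content features; this version only embeds user IDs over a synthetic path.

```python
# Minimal next-node prediction with an LSTM over one synthetic diffusion path.
# Not the paper's D2D-LSTM, which also fuses time/GPS/content features.
import torch

torch.manual_seed(0)
n_users, emb, hid = 100, 16, 32
embed = torch.nn.Embedding(n_users, emb)
lstm = torch.nn.LSTM(emb, hid, batch_first=True)
head = torch.nn.Linear(hid, n_users)
opt = torch.optim.Adam([*embed.parameters(), *lstm.parameters(), *head.parameters()])

path = torch.randint(0, n_users, (1, 12))    # one synthetic diffusion path
inp, target = path[:, :-1], path[:, 1:]      # predict user t+1 from prefix

for _ in range(50):
    out, _ = lstm(embed(inp))
    logits = head(out)                        # (1, T-1, n_users)
    loss = torch.nn.functional.cross_entropy(
        logits.reshape(-1, n_users), target.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

print("final loss:", loss.item())
```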
Smooth function approximation by deep neural networks with general activation functions
Title | Smooth function approximation by deep neural networks with general activation functions |
Authors | Ilsang Ohn, Yongdai Kim |
Abstract | There has been a growing interest in the expressivity of deep neural networks. However, most of the existing work on this topic focuses only on specific activation functions such as ReLU or sigmoid. In this paper, we investigate the approximation ability of deep neural networks with a broad class of activation functions. This class includes most of the frequently used activation functions. We derive the required depth, width, and sparsity of a deep neural network to approximate any Hölder smooth function up to a given approximation error for this large class of activation functions. Based on our approximation error analysis, we derive the minimax optimality of deep neural network estimators with general activation functions in both regression and classification problems. |
Tasks | |
Published | 2019-06-17 |
URL | https://arxiv.org/abs/1906.06903v2 |
https://arxiv.org/pdf/1906.06903v2.pdf | |
PWC | https://paperswithcode.com/paper/smooth-function-approximation-by-deep-neural |
Repo | |
Framework | |
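
For reference, the Hölder class mentioned in the abstract is standard; a common definition is given below. The paper's exact depth/width/sparsity bounds are not reproduced here.

```latex
% Standard Hölder class referenced in the abstract. For smoothness
% \beta > 0 written as \beta = m + s with m \in \mathbb{N}_0, s \in (0,1]:
\mathcal{H}^{\beta}([0,1]^d)
  = \Bigl\{ f : [0,1]^d \to \mathbb{R} \;\Big|\;
      \max_{|\alpha| \le m} \|\partial^{\alpha} f\|_{\infty}
      + \max_{|\alpha| = m} \sup_{x \neq y}
        \frac{|\partial^{\alpha} f(x) - \partial^{\alpha} f(y)|}{\|x - y\|^{s}}
      < \infty \Bigr\}
```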
Estimating Smooth GLM in Non-interactive Local Differential Privacy Model with Public Unlabeled Data
Title | Estimating Smooth GLM in Non-interactive Local Differential Privacy Model with Public Unlabeled Data |
Authors | Di Wang, Huanyu Zhang, Marco Gaboardi, Jinhui Xu |
Abstract | In this paper, we study the problem of estimating smooth Generalized Linear Models (GLM) in the Non-interactive Local Differential Privacy (NLDP) model. Different from its classical setting, our model allows the server to access some additional public but unlabeled data. By using Stein’s lemma and its variants, we first show that there is an $(\epsilon, \delta)$-NLDP algorithm for GLM (under some mild assumptions), if each data record is i.i.d. sampled from some sub-Gaussian distribution with bounded $\ell_1$-norm. Then with high probability, the sample complexity of the public and private data, for the algorithm to achieve an $\alpha$ estimation error (in $\ell_\infty$-norm), is $O(p^2\alpha^{-2})$ and ${O}(p^2\alpha^{-2}\epsilon^{-2})$, respectively, if $\alpha$ is not too small ({\em i.e.,} $\alpha\geq \Omega(\frac{1}{\sqrt{p}})$), where $p$ is the dimensionality of the data. This is a significant improvement over the previously known quasi-polynomial (in $\alpha$) or exponential (in $p$) complexity of GLM with no public data. Also, our algorithm can answer multiple (at most $\exp(O(p))$) GLM queries with the same sample complexities as in the one GLM query case with at least constant probability. We then extend our idea to the non-linear regression problem and show a similar phenomenon for it. Finally, we demonstrate the effectiveness of our algorithms through experiments on both synthetic and real world datasets. To the best of our knowledge, this is the first paper showing the existence of efficient and effective algorithms for GLM and non-linear regression in the NLDP model with public unlabeled data. |
Tasks | |
Published | 2019-10-01 |
URL | https://arxiv.org/abs/1910.00482v1 |
https://arxiv.org/pdf/1910.00482v1.pdf | |
PWC | https://paperswithcode.com/paper/estimating-smooth-glm-in-non-interactive |
Repo | |
Framework | |
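
The abstract's key tool is Stein's lemma; its standard Gaussian form is stated below for context. The paper works with variants of it for sub-Gaussian designs.

```latex
% Stein's lemma in its standard Gaussian form. For X \sim \mathcal{N}(\mu, \sigma^2)
% and differentiable g with \mathbb{E}\,|g'(X)| < \infty:
\mathbb{E}\bigl[ g(X)\,(X - \mu) \bigr] = \sigma^{2}\,\mathbb{E}\bigl[ g'(X) \bigr]
```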
MUTLA: A Large-Scale Dataset for Multimodal Teaching and Learning Analytics
Title | MUTLA: A Large-Scale Dataset for Multimodal Teaching and Learning Analytics |
Authors | Fangli Xu, Lingfei Wu, KP Thai, Carol Hsu, Wei Wang, Richard Tong |
Abstract | Automatic analysis of teacher and student interactions could be very important for improving the quality of teaching and student engagement. However, despite some recent progress in utilizing multimodal data for teaching and learning analytics, a thorough analysis of a rich multimodal dataset coming from a complex, real-world learning environment has yet to be done. To bridge this gap, we present a large-scale MUlti-modal Teaching and Learning Analytics (MUTLA) dataset. This dataset includes time-synchronized multimodal data records of students (learning logs, videos, EEG brainwaves) as they work on various subjects in the Squirrel AI Learning System (SAIL) to solve problems of varying difficulty levels. The dataset resources include user records from the learner records store of SAIL, brainwave data collected by EEG headset devices, and video data captured by web cameras while students worked in the SAIL products. Our hope is that by analyzing real-world student learning activities, facial expressions, and brainwave patterns, researchers can better predict engagement, which can then be used to improve adaptive learning selection and student learning outcomes. An additional goal is to provide a dataset gathered from real-world educational activities, rather than from controlled lab environments, to benefit the educational learning community. |
Tasks | EEG |
Published | 2019-10-05 |
URL | https://arxiv.org/abs/1910.06078v1 |
https://arxiv.org/pdf/1910.06078v1.pdf | |
PWC | https://paperswithcode.com/paper/mutla-a-large-scale-dataset-for-multimodal |
Repo | |
Framework | |
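
Since the records are time-synchronized across modalities, a natural first step is an as-of join by timestamp. The sketch below is purely hypothetical: the column names, schema, and sampling rates are made up and are not the dataset's actual format.

```python
# Hypothetical sketch: align two time-synchronized modalities by timestamp
# with an as-of join. Column names and rates are invented, not MUTLA's schema.
import pandas as pd

logs = pd.DataFrame({"t": pd.to_datetime(["2019-10-05 10:00:02",
                                          "2019-10-05 10:00:07"]),
                     "event": ["question_shown", "answer_submitted"]})
eeg = pd.DataFrame({"t": pd.date_range("2019-10-05 10:00:00",
                                       periods=10, freq="1s"),
                    "attention": range(10)})

# For each log event, attach the most recent EEG sample at or before it.
aligned = pd.merge_asof(logs.sort_values("t"), eeg.sort_values("t"),
                        on="t", direction="backward")
print(aligned)
```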
Learning Optimal and Fair Decision Trees for Non-Discriminative Decision-Making
Title | Learning Optimal and Fair Decision Trees for Non-Discriminative Decision-Making |
Authors | Sina Aghaei, Mohammad Javad Azizi, Phebe Vayanos |
Abstract | In recent years, automated data-driven decision-making systems have enjoyed a tremendous success in a variety of fields (e.g., to make product recommendations, or to guide the production of entertainment). More recently, these algorithms are increasingly being used to assist socially sensitive decision-making (e.g., to decide who to admit into a degree program or to prioritize individuals for public housing). Yet, these automated tools may result in discriminative decision-making in the sense that they may treat individuals unfairly or unequally based on membership in a category or a minority, resulting in disparate treatment or disparate impact and violating both moral and ethical standards. This may happen when the training dataset is itself biased (e.g., if individuals belonging to a particular group have historically been discriminated against). However, it may also happen when the training dataset is unbiased, if the errors made by the system affect individuals belonging to a category or minority differently (e.g., if misclassification rates for Blacks are higher than for Whites). In this paper, we unify the definitions of unfairness across classification and regression. We propose a versatile mixed-integer optimization framework for learning optimal and fair decision trees and variants thereof to prevent disparate treatment and/or disparate impact as appropriate. This translates to a flexible schema for designing fair and interpretable policies suitable for socially sensitive decision-making. We conduct extensive computational studies that show that our framework improves the state-of-the-art in the field (which typically relies on heuristics) to yield non-discriminative decisions at lower cost to overall accuracy. |
Tasks | Decision Making |
Published | 2019-03-25 |
URL | http://arxiv.org/abs/1903.10598v1 |
http://arxiv.org/pdf/1903.10598v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-optimal-and-fair-decision-trees-for |
Repo | |
Framework | |
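
The paper's contribution is a mixed-integer optimization framework, which is not reproduced here; the sketch below only illustrates the disparate-impact notion the abstract invokes, via the standard ratio of favorable-outcome rates between groups.

```python
# Not the paper's MIP framework: just the standard disparate-impact ratio,
# i.e. the favorable-outcome rate of the unprivileged group divided by that
# of the privileged group (the "80% rule" threshold is a common flag).
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])   # model decisions
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])   # 0 = privileged, 1 = not

rate_priv = y_pred[group == 0].mean()
rate_unpriv = y_pred[group == 1].mean()
di_ratio = rate_unpriv / rate_priv
print(f"disparate impact ratio: {di_ratio:.2f}"
      + (" (below the 0.8 rule-of-thumb)" if di_ratio < 0.8 else ""))
```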
Speech-XLNet: Unsupervised Acoustic Model Pretraining For Self-Attention Networks
Title | Speech-XLNet: Unsupervised Acoustic Model Pretraining For Self-Attention Networks |
Authors | Xingchen Song, Guangsen Wang, Zhiyong Wu, Yiheng Huang, Dan Su, Dong Yu, Helen Meng |
Abstract | Self-attention network (SAN) can benefit significantly from the bi-directional representation learning through unsupervised pretraining paradigms such as BERT and XLNet. In this paper, we present an XLNet-like pretraining scheme “Speech-XLNet” for unsupervised acoustic model pretraining to learn speech representations with SAN. The pretrained SAN is finetuned under the hybrid SAN/HMM framework. We conjecture that by shuffling the speech frame orders, the permutation in Speech-XLNet serves as a strong regularizer to encourage the SAN to make inferences by focusing on global structures through its attention weights. In addition, Speech-XLNet also allows the model to explore the bi-directional contexts for effective speech representation learning. Experiments on TIMIT and WSJ demonstrate that Speech-XLNet greatly improves the SAN/HMM performance in terms of both convergence speed and recognition accuracy compared to the one trained from randomly initialized weights. Our best systems achieve a relative improvement of 11.9% and 8.3% on the TIMIT and WSJ tasks respectively. In particular, the best system achieves a phone error rate (PER) of 13.3% on the TIMIT test set, which, to the best of our knowledge, is the lowest PER obtained from a single system. |
Tasks | Representation Learning |
Published | 2019-10-23 |
URL | https://arxiv.org/abs/1910.10387v1 |
https://arxiv.org/pdf/1910.10387v1.pdf | |
PWC | https://paperswithcode.com/paper/speech-xlnet-unsupervised-acoustic-model |
Repo | |
Framework | |
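
A sketch of the permutation idea at the heart of the scheme: sample a random factorization order over speech frames, as XLNet does over tokens, and encode it as an attention mask. The full permutation-LM objective and the SAN/HMM pipeline are not reproduced here.

```python
# Sketch of XLNet-style frame-order permutation: sample a random
# factorization order over frames and build the attention mask where the
# frame at position order[i] may attend only to frames earlier in the order.
import numpy as np

rng = np.random.default_rng(0)
frames = rng.normal(size=(20, 40))      # (T, feature_dim) acoustic frames
T = len(frames)

order = rng.permutation(T)              # random factorization order
mask = np.zeros((T, T), dtype=bool)
for i in range(T):
    mask[order[i], order[:i]] = True    # position order[i] sees order[:i]

print("mean attendable positions per frame:", mask.sum(axis=1).mean())
```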
FBK-HUPBA Submission to the EPIC-Kitchens 2019 Action Recognition Challenge
Title | FBK-HUPBA Submission to the EPIC-Kitchens 2019 Action Recognition Challenge |
Authors | Swathikiran Sudhakaran, Sergio Escalera, Oswald Lanz |
Abstract | In this report we describe the technical details of our submission to the EPIC-Kitchens 2019 action recognition challenge. To participate in the challenge we have developed a number of CNN-LSTA [3] and HF-TSN [2] variants, and submitted predictions from an ensemble compiled out of these two model families. Our submission, visible on the public leaderboard with team name FBK-HUPBA, achieved a top-1 action recognition accuracy of 35.54% on S1 setting, and 20.25% on S2 setting. |
Tasks | |
Published | 2019-06-21 |
URL | https://arxiv.org/abs/1906.08960v1 |
https://arxiv.org/pdf/1906.08960v1.pdf | |
PWC | https://paperswithcode.com/paper/fbk-hupba-submission-to-the-epic-kitchens |
Repo | |
Framework | |
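
The report describes an ensemble compiled from two model families; a common way to fuse such families is to average per-class scores. This is a generic sketch with random stand-in scores, not the authors' exact fusion scheme or weights.

```python
# Generic score-averaging ensemble over two model families (stand-in scores,
# not the authors' LSTA/HF-TSN outputs or fusion weights).
import numpy as np

n_clips, n_classes = 5, 10
rng = np.random.default_rng(0)
scores_lsta = rng.random((n_clips, n_classes))   # stand-in for LSTA scores
scores_hftsn = rng.random((n_clips, n_classes))  # stand-in for HF-TSN scores

ensemble = 0.5 * scores_lsta + 0.5 * scores_hftsn
top1 = ensemble.argmax(axis=1)
print("ensemble top-1 predictions:", top1)
```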
PCGAN-CHAR: Progressively Trained Classifier Generative Adversarial Networks for Classification of Noisy Handwritten Bangla Characters
Title | PCGAN-CHAR: Progressively Trained Classifier Generative Adversarial Networks for Classification of Noisy Handwritten Bangla Characters |
Authors | Qun Liu, Edward Collier, Supratik Mukhopadhyay |
Abstract | Due to the sparsity of features, noise has proven to be a great inhibitor in the classification of handwritten characters. To combat this, most techniques perform denoising of the data before classification. In this paper, we consolidate the approach by training an all-in-one model that is able to classify even noisy characters. For classification, we progressively train a classifier generative adversarial network on the characters from low to high resolution. We show that by learning the features at each resolution independently a trained model is able to accurately classify characters even in the presence of noise. We experimentally demonstrate the effectiveness of our approach by classifying noisy versions of MNIST, handwritten Bangla Numeral, and Basic Character datasets. |
Tasks | Denoising |
Published | 2019-08-11 |
URL | https://arxiv.org/abs/1908.08987v1 |
https://arxiv.org/pdf/1908.08987v1.pdf | |
PWC | https://paperswithcode.com/paper/pcgan-char-progressively-trained-classifier |
Repo | |
Framework | |
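
A rough sketch of the low-to-high-resolution training schedule the abstract describes: fit a small classifier on images rescaled stage by stage. The paper trains a classifier GAN progressively; this keeps only the resolution schedule, with random stand-in data.

```python
# Sketch of progressive-resolution training: train on upscaled images stage
# by stage (8 -> 16 -> 32). Only the schedule is kept from the paper; the
# tiny classifier and random data are placeholders, not the PCGAN model.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
clf = torch.nn.Sequential(
    torch.nn.AdaptiveAvgPool2d(8), torch.nn.Flatten(),
    torch.nn.Linear(8 * 8, 10))
opt = torch.optim.Adam(clf.parameters())

images = torch.randn(64, 1, 32, 32)              # stand-in noisy characters
labels = torch.randint(0, 10, (64,))

for res in (8, 16, 32):                          # low -> high resolution stages
    x = F.interpolate(images, size=(res, res), mode="bilinear",
                      align_corners=False)
    for _ in range(20):
        loss = F.cross_entropy(clf(x), labels)
        opt.zero_grad(); loss.backward(); opt.step()
    print(f"res {res}: loss {loss.item():.3f}")
```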
ROIPCA: An Online PCA algorithm based on rank-one updates
Title | ROIPCA: An Online PCA algorithm based on rank-one updates |
Authors | Roy Mitz, Yoel Shkolnisky |
Abstract | Principal components analysis (PCA) is a fundamental algorithm in data analysis. Its online version is useful in many modern applications where the data are too large to fit in memory, or when speed of calculation is important. In this paper we propose ROIPCA, an online PCA algorithm based on rank-one updates. ROIPCA is linear in both the dimension of the data and the number of components calculated. We demonstrate its advantages over existing state-of-the-art algorithms in terms of accuracy and running time. |
Tasks | |
Published | 2019-11-25 |
URL | https://arxiv.org/abs/1911.11049v1 |
https://arxiv.org/pdf/1911.11049v1.pdf | |
PWC | https://paperswithcode.com/paper/roipca-an-online-pca-algorithm-based-on-rank |
Repo | |
Framework | |
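
To make the rank-one idea concrete: each new sample x contributes the rank-one term x xᵀ to the covariance, after which the top components can be refreshed. The naive sketch below re-eigendecomposes from scratch; ROIPCA itself updates the eigenvectors directly and far more cheaply, which is the paper's point.

```python
# Naive illustration of rank-one covariance updates for online PCA. ROIPCA
# updates the eigendecomposition itself under each rank-one change; the full
# np.linalg.eigh here is only for clarity, not efficiency.
import numpy as np

rng = np.random.default_rng(0)
d, k = 10, 2
C = np.zeros((d, d))
n = 0

for _ in range(500):
    x = rng.multivariate_normal(np.zeros(d), np.diag([5, 3] + [0.1] * (d - 2)))
    C = (n * C + np.outer(x, x)) / (n + 1)      # rank-one covariance update
    n += 1

eigvals, eigvecs = np.linalg.eigh(C)
top_k = eigvecs[:, np.argsort(eigvals)[::-1][:k]]   # leading k components
print("top eigenvalues:", np.sort(eigvals)[::-1][:k])
```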