Paper Group ANR 275
Sum of previous inpatient serum creatinine measurements predicts acute kidney injury in rehospitalized patients. GPGPU Acceleration of the KAZE Image Feature Extraction Algorithm. Image Disguise based on Generative Model. High-dimensional classification by sparse logistic regression. Self Adversarial Training for Human Pose Estimation. Retrofitting Concept Vector Representations of Medical Concepts to Improve Estimates of Semantic Similarity and Relatedness. Racial Disparity in Natural Language Processing: A Case Study of Social Media African-American English. On the "Calligraphy" of Books. An Efficient Decomposition Framework for Discriminative Segmentation with Supermodular Losses. A Generalized Genetic Algorithm-Based Solver for Very Large Jigsaw Puzzles of Complex Types. Sensitivity Analysis for Predictive Uncertainty in Bayesian Neural Networks. Multi-label Pixelwise Classification for Reconstruction of Large-scale Urban Areas. Learning for Active 3D Mapping. Accelerated Variance Reduced Stochastic ADMM. Local Neighborhood Intensity Pattern: A new texture feature descriptor for image retrieval.
Sum of previous inpatient serum creatinine measurements predicts acute kidney injury in rehospitalized patients
Title | Sum of previous inpatient serum creatinine measurements predicts acute kidney injury in rehospitalized patients |
Authors | Sam Weisenthal, Haofu Liao, Philip Ng, Martin Zand |
Abstract | Acute Kidney Injury (AKI), the abrupt decline in kidney function due to temporary or permanent injury, is associated with increased mortality, morbidity, length of stay, and hospital cost. Sometimes, simple interventions such as medication review or hydration can prevent AKI. There is therefore interest in estimating the risk of AKI at hospitalization. To gain insight into this task, we employ multilayer perceptrons (MLPs) and recurrent neural networks (RNNs) using serum creatinine (sCr) as a lone feature. We explore different feature input structures, including variable-length look-backs and a nested formulation for rehospitalized patients with previous sCr measurements. Experimental results show that the simplest model, an MLP processing the sum of sCr values, had the best performance: AUROC 0.92 and AUPRC 0.70. Such a simple model could be easily integrated into an EHR. Preliminary results also suggest that inpatient data streams with missing outpatient measurements—common in the medical setting—might be best modeled with a tailored architecture. |
Tasks | |
Published | 2017-12-05 |
URL | http://arxiv.org/abs/1712.01880v1 |
http://arxiv.org/pdf/1712.01880v1.pdf | |
PWC | https://paperswithcode.com/paper/sum-of-previous-inpatient-serum-creatinine |
Repo | |
Framework | |
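The winning configuration (an MLP over the summed sCr values) is simple enough to sketch. Below is a minimal, hypothetical reconstruction with scikit-learn on synthetic data; the feature construction and toy labels are assumptions, not the authors' pipeline.

```python
# Minimal sketch: MLP on the sum of prior inpatient sCr values (assumed
# reconstruction, not the authors' code). Synthetic data for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
# Each patient: a variable-length history of prior sCr measurements (mg/dL).
histories = [rng.normal(1.0, 0.3, rng.integers(1, 10)) for _ in range(n)]
X = np.array([[h.sum()] for h in histories])             # lone feature: sum of sCr
y = (X[:, 0] + rng.normal(0, 1.0, n) > 5.0).astype(int)  # toy AKI label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```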
GPGPU Acceleration of the KAZE Image Feature Extraction Algorithm
Title | GPGPU Acceleration of the KAZE Image Feature Extraction Algorithm |
Authors | Ramkumar B, R. S. Hegde, Rob Laber, Hristo Bojinov |
Abstract | The recently proposed open-source KAZE image feature detection and description algorithm offers unprecedented performance in comparison to conventional algorithms like SIFT and SURF, as it relies on nonlinear scale spaces instead of Gaussian linear scale spaces. The improved performance, however, comes with a significant computational cost, limiting its use in many applications. We report a GPGPU implementation of the KAZE algorithm that gains speedup without resorting to binary descriptors. For a 1920×1200 image, our Compute Unified Device Architecture (CUDA) C-based GPU version took around 300 milliseconds on an NVIDIA GeForce GTX Titan X (Maxwell architecture, GM200) card, in comparison to nearly 2400 milliseconds for a multithreaded CPU version (16-threaded Intel(R) Xeon(R) E5-2650 processor). The CUDA-based parallel implementation is described in detail with a fine-grained comparison between the GPU and CPU implementations. By achieving a nearly 8-fold speedup without performance degradation, our work expands the applicability of the KAZE algorithm. Additionally, the strategies described here can prove useful for GPU implementations of other nonlinear scale-space-based methods. |
Tasks | |
Published | 2017-06-21 |
URL | http://arxiv.org/abs/1706.06750v1 |
http://arxiv.org/pdf/1706.06750v1.pdf | |
PWC | https://paperswithcode.com/paper/gpgpu-acceleration-of-the-kaze-image-feature |
Repo | |
Framework | |
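The paper's CUDA port is not linked here, but the reference KAZE implementation is exposed through OpenCV, which makes the CPU baseline easy to reproduce. A minimal timing sketch; the image content and size below are placeholders matching the paper's 1920×1200 test case.

```python
# Sketch: timing the reference (CPU) KAZE detector via OpenCV, a baseline
# comparable to the paper's multithreaded CPU version, not its CUDA port.
import time
import cv2
import numpy as np

img = np.random.randint(0, 256, (1200, 1920), dtype=np.uint8)  # placeholder frame
kaze = cv2.KAZE_create()  # nonlinear-scale-space detector/descriptor

t0 = time.perf_counter()
keypoints, descriptors = kaze.detectAndCompute(img, None)
print(f"{len(keypoints)} keypoints in {time.perf_counter() - t0:.3f} s")
```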
Image Disguise based on Generative Model
Title | Image Disguise based on Generative Model |
Authors | Xintao Duan, Haoxian Song, En Zhang, Jingjing Liu |
Abstract | To protect image contents, most existing encryption algorithms are designed to transform an original image into a texture-like or noise-like image. This, however, is an obvious visual sign of encryption, and consequently attracts a large number of attacks. To solve this problem, we propose a new image encryption method that generates an image visually identical to the original by sending a normal-looking, independent image through a corresponding well-trained generative model, thereby disguising the original image. This image disguise method not only removes the obvious visual implication of encryption but also guarantees the security of the information. |
Tasks | |
Published | 2017-10-21 |
URL | http://arxiv.org/abs/1710.07782v4 |
http://arxiv.org/pdf/1710.07782v4.pdf | |
PWC | https://paperswithcode.com/paper/image-disguise-based-on-generative-model |
Repo | |
Framework | |
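As a rough illustration of the disguise idea (an assumption about the mechanics, not the paper's architecture), one can overfit a small generator so that an innocuous key image maps to the secret image; only holders of the trained model can recover the content.

```python
# Conceptual sketch of the disguise idea (assumed, not the paper's model):
# train a generator so a benign "key" image reproduces the secret image.
import torch
import torch.nn as nn

generator = nn.Sequential(                 # toy conv generator
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
)
key_img = torch.rand(1, 3, 64, 64)         # the innocuous image that is sent
secret_img = torch.rand(1, 3, 64, 64)      # the image to be protected

opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
for step in range(200):                    # overfit the pair on purpose
    opt.zero_grad()
    loss = nn.functional.mse_loss(generator(key_img), secret_img)
    loss.backward()
    opt.step()
print("recovery error:", loss.item())
```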
High-dimensional classification by sparse logistic regression
Title | High-dimensional classification by sparse logistic regression |
Authors | Felix Abramovich, Vadim Grinshtein |
Abstract | We consider high-dimensional binary classification by sparse logistic regression. We propose a model/feature selection procedure based on penalized maximum likelihood with a complexity penalty on the model size, and derive non-asymptotic bounds for the resulting misclassification excess risk. The bounds can be tightened under an additional low-noise condition. The proposed complexity penalty is closely related to the VC-dimension of a set of sparse linear classifiers. Implementing any complexity-penalty-based criterion, however, requires a combinatorial search over all possible models. To obtain a model selection procedure that is computationally feasible for high-dimensional data, we extend the Slope estimator to logistic regression and show that, under an additional weighted restricted eigenvalue condition, it is rate-optimal in the minimax sense. |
Tasks | Feature Selection, Model Selection |
Published | 2017-06-26 |
URL | http://arxiv.org/abs/1706.08344v3 |
http://arxiv.org/pdf/1706.08344v3.pdf | |
PWC | https://paperswithcode.com/paper/high-dimensional-classification-by-sparse |
Repo | |
Framework | |
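The complexity-penalized criterion requires a combinatorial search, which is why the paper turns to Slope. As a hedged stand-in, the familiar convex surrogate below (L1-penalized logistic regression via scikit-learn) illustrates the sparse-classification setup; it is not the Slope estimator, whose penalty weights decay with the rank of the coefficient.

```python
# L1-penalized logistic regression as a convex surrogate for the intractable
# model-size penalty (a stand-in sketch, not the paper's Slope estimator).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=1000, n_informative=10,
                           random_state=0)
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X, y)
print("selected features:", np.count_nonzero(clf.coef_))  # sparse support
```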
Self Adversarial Training for Human Pose Estimation
Title | Self Adversarial Training for Human Pose Estimation |
Authors | Chia-Jung Chou, Jui-Ting Chien, Hwann-Tzong Chen |
Abstract | This paper presents a deep learning based approach to the problem of human pose estimation. We employ generative adversarial networks as our learning paradigm in which we set up two stacked hourglass networks with the same architecture, one as the generator and the other as the discriminator. The generator is used as a human pose estimator after the training is done. The discriminator distinguishes ground-truth heatmaps from generated ones, and back-propagates the adversarial loss to the generator. This process enables the generator to learn plausible human body configurations and is shown to be useful for improving the prediction accuracy. |
Tasks | Pose Estimation |
Published | 2017-07-08 |
URL | http://arxiv.org/abs/1707.02439v2 |
http://arxiv.org/pdf/1707.02439v2.pdf | |
PWC | https://paperswithcode.com/paper/self-adversarial-training-for-human-pose |
Repo | |
Framework | |
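A minimal sketch of the training signal described above, with tiny convolutional stand-ins for the two stacked hourglass networks; the shapes and the adversarial weight are assumptions.

```python
# Sketch of the self-adversarial loss setup (assumed shapes and weight; the
# paper's generator and discriminator are both stacked hourglass networks).
import torch
import torch.nn as nn
import torch.nn.functional as F

G = nn.Conv2d(3, 16, 3, padding=1)          # pose estimator: image -> 16 joint heatmaps
D = nn.Sequential(nn.Conv2d(16, 1, 3, padding=1),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Sigmoid())

img = torch.rand(4, 3, 64, 64)
gt_heatmaps = torch.rand(4, 16, 64, 64)

pred = G(img)
mse = F.mse_loss(pred, gt_heatmaps)                      # supervised heatmap term
adv = F.binary_cross_entropy(D(pred), torch.ones(4, 1))  # fool the discriminator
g_loss = mse + 0.01 * adv                                # assumed weighting
d_loss = F.binary_cross_entropy(D(gt_heatmaps), torch.ones(4, 1)) + \
         F.binary_cross_entropy(D(pred.detach()), torch.zeros(4, 1))
```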
Retrofitting Concept Vector Representations of Medical Concepts to Improve Estimates of Semantic Similarity and Relatedness
Title | Retrofitting Concept Vector Representations of Medical Concepts to Improve Estimates of Semantic Similarity and Relatedness |
Authors | Zhiguo Yu, Byron C. Wallace, Todd Johnson, Trevor Cohen |
Abstract | Estimation of semantic similarity and relatedness between biomedical concepts has utility for many informatics applications. Automated methods fall into two categories: methods based on distributional statistics drawn from text corpora, and methods using the structure of existing knowledge resources. Methods in the former category disregard taxonomic structure, while those in the latter fail to consider semantically relevant empirical information. In this paper, we present a method that retrofits distributional context vector representations of biomedical concepts using structural information from the UMLS Metathesaurus, such that the similarity between vector representations of linked concepts is augmented. We evaluated the method on the UMNSRS benchmark. Our results demonstrate that retrofitted concept vector representations correlate better with human raters for both similarity and relatedness than the same representations without retrofitting, surpassing the best results reported to date. |
Tasks | Semantic Similarity, Semantic Textual Similarity |
Published | 2017-09-21 |
URL | http://arxiv.org/abs/1709.07357v1 |
http://arxiv.org/pdf/1709.07357v1.pdf | |
PWC | https://paperswithcode.com/paper/retrofitting-concept-vector-representations |
Repo | |
Framework | |
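The update the paper adapts is the standard retrofitting iteration of Faruqui et al. (2015), which pulls each concept vector toward its linked neighbors while staying anchored to the original embedding. A NumPy sketch; the alpha/beta weights are assumptions.

```python
# Standard retrofitting iteration (Faruqui et al., 2015), which the paper
# adapts to UMLS-linked biomedical concepts. Weights are assumed defaults.
import numpy as np

def retrofit(vectors, neighbors, alpha=1.0, beta=1.0, iters=10):
    """vectors: dict concept -> np.array; neighbors: dict concept -> linked concepts."""
    new = {c: v.copy() for c, v in vectors.items()}
    for _ in range(iters):
        for c, links in neighbors.items():
            links = [n for n in links if n in new]
            if not links:
                continue
            # pull toward UMLS neighbors, anchored to the original vector
            num = alpha * vectors[c] + beta * sum(new[n] for n in links)
            new[c] = num / (alpha + beta * len(links))
    return new
```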
Racial Disparity in Natural Language Processing: A Case Study of Social Media African-American English
Title | Racial Disparity in Natural Language Processing: A Case Study of Social Media African-American English |
Authors | Su Lin Blodgett, Brendan O’Connor |
Abstract | We highlight an important frontier in algorithmic fairness: disparity in the quality of natural language processing algorithms when applied to language from authors of different social groups. For example, current systems sometimes analyze the language of females and minorities more poorly than they do that of whites and males. We conduct an empirical analysis of racial disparity in language identification for tweets written in African-American English, and discuss the implications of disparity in NLP. |
Tasks | Language Identification |
Published | 2017-06-30 |
URL | http://arxiv.org/abs/1707.00061v1 |
http://arxiv.org/pdf/1707.00061v1.pdf | |
PWC | https://paperswithcode.com/paper/racial-disparity-in-natural-language |
Repo | |
Framework | |
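The core measurement is per-group recall of an off-the-shelf language identifier on English tweets. A sketch using the langid package; the two toy lists are placeholders for the AAE-aligned and white-aligned corpora, which are not reproduced here.

```python
# Sketch of the disparity measurement: per-group English recall of an
# off-the-shelf language identifier. The example tweets are placeholders.
import langid

aae_tweets = ["he woke af", "she stay winning fr"]  # placeholder examples
wha_tweets = ["I am heading to the store now", "Great game last night"]

def english_rate(tweets):
    return sum(langid.classify(t)[0] == "en" for t in tweets) / len(tweets)

print("AAE-aligned recall:", english_rate(aae_tweets))
print("White-aligned recall:", english_rate(wha_tweets))
```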
On the “Calligraphy” of Books
Title | On the “Calligraphy” of Books |
Authors | Vanessa Q. Marinho, Henrique F. de Arruda, Thales S. Lima, Luciano F. Costa, Diego R. Amancio |
Abstract | Authorship attribution is a natural language processing task that has been widely studied, often using low-order statistics. In this paper, we explore a complex network approach that assigns the authorship of texts based on their mesoscopic representation, in an attempt to capture the flow of the narrative. Indeed, as reported in this work, such an approach allowed the identification of the dominant narrative structure of the studied authors. This is possible because the mesoscopic approach takes into account relationships between different, not necessarily adjacent, parts of the text, which allows it to capture the story flow. The potential of the proposed approach is illustrated through principal component analysis, a comparison with the chance baseline method, and network visualization. These visualizations reveal individual characteristics of the authors, which can be understood as a kind of calligraphy. |
Tasks | |
Published | 2017-05-29 |
URL | http://arxiv.org/abs/1705.10415v1 |
http://arxiv.org/pdf/1705.10415v1.pdf | |
PWC | https://paperswithcode.com/paper/on-the-calligraphy-of-books |
Repo | |
Framework | |
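A rough sketch of a mesoscopic text network (an assumption about the construction, not the authors' exact recipe): nodes are overlapping text windows, edges link windows that share enough vocabulary, and network measurements become authorship features.

```python
# Assumed mesoscopic construction: window nodes, vocabulary-overlap edges,
# network statistics as authorship features. Paths/thresholds are placeholders.
import itertools
import networkx as nx

def mesoscopic_graph(words, window=200, stride=100, min_overlap=0.2):
    chunks = [set(words[i:i + window])
              for i in range(0, len(words) - window + 1, stride)]
    g = nx.Graph()
    g.add_nodes_from(range(len(chunks)))
    for i, j in itertools.combinations(range(len(chunks)), 2):
        jaccard = len(chunks[i] & chunks[j]) / len(chunks[i] | chunks[j])
        if jaccard >= min_overlap:
            g.add_edge(i, j, weight=jaccard)
    return g

g = mesoscopic_graph(open("book.txt").read().lower().split())  # placeholder file
features = [nx.average_clustering(g), nx.density(g)]  # example network features
```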
An Efficient Decomposition Framework for Discriminative Segmentation with Supermodular Losses
Title | An Efficient Decomposition Framework for Discriminative Segmentation with Supermodular Losses |
Authors | Jiaqian Yu, Matthew B. Blaschko |
Abstract | Several supermodular losses have been shown to improve the perceptual quality of image segmentation in a discriminative framework such as a structured output support vector machine (SVM). These loss functions do not necessarily have the same structure as the one used by the segmentation inference algorithm, and in general we may have to resort to generic submodular minimization algorithms for loss-augmented inference. Although these come with polynomial-time guarantees, they are not practical to apply to image-scale data. Many supermodular losses come with strong optimization guarantees, but are not readily incorporated into a loss-augmented graph cuts procedure. This motivates our strategy of employing an alternating direction method of multipliers (ADMM) decomposition for loss-augmented inference. In doing so, we create a new API for the structured SVM that separates the maximum a posteriori (MAP) inference of the model from the loss augmentation during training. In this way, we gain computational efficiency, making new choices of loss functions practical for the first time, while simultaneously making the inference algorithm employed during training closer to the test-time procedure. We show improvements in both accuracy and computational performance on the Microsoft Research GrabCut database and a brain structure segmentation task, empirically validating the use of several supermodular loss functions during training and the improved computational properties of the proposed ADMM approach over the Fujishige-Wolfe minimum norm point algorithm. |
Tasks | Semantic Segmentation |
Published | 2017-02-13 |
URL | http://arxiv.org/abs/1702.03690v1 |
http://arxiv.org/pdf/1702.03690v1.pdf | |
PWC | https://paperswithcode.com/paper/an-efficient-decomposition-framework-for |
Repo | |
Framework | |
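The decomposition can be pictured as generic consensus ADMM: one subproblem sees the model, the other sees the loss, and a scaled dual variable ties them together. A toy continuous sketch, not the paper's combinatorial subproblems (which are solved by graph cuts and loss-specific oracles).

```python
# Generic consensus-ADMM skeleton of the decomposition idea, with a toy
# continuous instantiation (not the paper's graph-cut / loss oracles).
import numpy as np

def admm(prox_model, prox_loss, dim, rho=1.0, iters=100):
    x = np.zeros(dim); z = np.zeros(dim); u = np.zeros(dim)  # u: scaled dual
    for _ in range(iters):
        x = prox_model(z - u, rho)  # argmin_x f(x) + (rho/2)||x - (z - u)||^2
        z = prox_loss(x + u, rho)   # argmin_z g(z) + (rho/2)||z - (x + u)||^2
        u += x - z                  # dual ascent on the consensus constraint
    return z

# Toy example: f(x) = ||x - a||^2 / 2, g(z) = lam * ||z||_1
a, lam = np.array([3.0, -2.0, 0.5]), 1.0
prox_f = lambda v, rho: (a + rho * v) / (1 + rho)
prox_g = lambda v, rho: np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0)
print(admm(prox_f, prox_g, 3))
```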
A Generalized Genetic Algorithm-Based Solver for Very Large Jigsaw Puzzles of Complex Types
Title | A Generalized Genetic Algorithm-Based Solver for Very Large Jigsaw Puzzles of Complex Types |
Authors | Dror Sholomon, Eli David, Nathan S. Netanyahu |
Abstract | In this paper we introduce new types of square-piece jigsaw puzzles in which, in addition to the unknown location and orientation of each piece, a piece might also need to be flipped. These puzzles, which are associated with a number of real-world problems, are considerably harder from a computational standpoint. Specifically, we present a novel generalized genetic algorithm (GA)-based solver that can handle puzzle pieces of unknown location and orientation (Type 2 puzzles) and (two-sided) puzzle pieces of unknown location, orientation, and face (Type 4 puzzles). To the best of our knowledge, our solver provides a new state of the art, solving previously attempted puzzles faster and far more accurately, handling puzzle sizes that have never been attempted before, and assembling the newly introduced two-sided puzzles automatically and effectively. This paper also presents, among other results, the most extensive set of experimental results compiled to date on Type 2 puzzles. |
Tasks | |
Published | 2017-11-17 |
URL | http://arxiv.org/abs/1711.06768v1 |
http://arxiv.org/pdf/1711.06768v1.pdf | |
PWC | https://paperswithcode.com/paper/a-generalized-genetic-algorithm-based-solver |
Repo | |
Framework | |
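A bare-bones sketch of the search scheme only; the paper's actual crossover assembles children from compatible parent segments, and Type 4 puzzles add a flip state per piece, both omitted here. `pieces` and `pair_cost` are assumed inputs.

```python
# Skeleton GA over piece permutations (a sketch of the search loop only,
# not the paper's segment-based crossover or two-sided chromosomes).
import random

def solve(pieces, pair_cost, grid_w, pop=100, gens=200):
    def fitness(perm):  # lower total cost between horizontal neighbors is better
        return -sum(pair_cost(pieces[perm[i]], pieces[perm[i + 1]])
                    for i in range(len(perm) - 1) if (i + 1) % grid_w)
    population = [random.sample(range(len(pieces)), len(pieces))
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = random.sample(range(len(child)), 2)
            child[i], child[j] = child[j], child[i]  # swap mutation
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)
```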
Sensitivity Analysis for Predictive Uncertainty in Bayesian Neural Networks
Title | Sensitivity Analysis for Predictive Uncertainty in Bayesian Neural Networks |
Authors | Stefan Depeweg, José Miguel Hernández-Lobato, Steffen Udluft, Thomas Runkler |
Abstract | We derive a novel sensitivity analysis of input variables for predictive epistemic and aleatoric uncertainty. We use Bayesian neural networks with latent variables as a model class and illustrate the usefulness of our sensitivity analysis on real-world datasets. Our method increases the interpretability of complex black-box probabilistic models. |
Tasks | |
Published | 2017-12-10 |
URL | http://arxiv.org/abs/1712.03605v1 |
http://arxiv.org/pdf/1712.03605v1.pdf | |
PWC | https://paperswithcode.com/paper/sensitivity-analysis-for-predictive |
Repo | |
Framework | |
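A sketch of gradient-based sensitivity of predictive uncertainty, with MC dropout standing in for the paper's Bayesian neural networks with latent variables (an assumption about the mechanics): estimate the predictive variance by sampling, then differentiate it with respect to each input.

```python
# Gradient of sampled predictive variance w.r.t. the inputs, with MC dropout
# as an assumed stand-in for the paper's BNNs with latent variables.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(5, 64), nn.ReLU(), nn.Dropout(0.2), nn.Linear(64, 1))
net.train()  # keep dropout active at "test" time for MC sampling

x = torch.rand(1, 5, requires_grad=True)
samples = torch.stack([net(x) for _ in range(50)])  # MC forward passes
predictive_var = samples.var(dim=0).sum()           # sampled predictive variance
predictive_var.backward()
print("input sensitivity:", x.grad)                 # d variance / d input_i
```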
Multi-label Pixelwise Classification for Reconstruction of Large-scale Urban Areas
Title | Multi-label Pixelwise Classification for Reconstruction of Large-scale Urban Areas |
Authors | Yuanlie He, Sudhir Mudur, Charalambos Poullis |
Abstract | Object classification is one of the many holy grails in computer vision and as such has resulted in a very large number of algorithms being proposed already. Specifically, in recent years there has been considerable progress in this area, primarily due to the increased efficiency and accessibility of deep learning techniques. In fact, for single-label object classification (i.e., only one object present in the image), state-of-the-art techniques employ deep neural networks and report performance very close to human level. There are specialized applications in which single-label object-level classification will not suffice; for example, in cases where the image contains multiple intertwined objects of different labels. In this paper, we address the complex problem of multi-label pixelwise classification. We present our distinct solution based on a convolutional neural network (CNN) for performing multi-label pixelwise classification and its application to large-scale urban reconstruction. A supervised learning approach is followed for training a 13-layer CNN using both LiDAR and satellite images. An empirical study was conducted to determine the hyperparameters yielding the optimal performance of the CNN. Scale invariance is introduced by training the network on five different scales of the input and labeled data. This results in six pixelwise classifications for each different scale. An SVM is then trained to map the six pixelwise classifications into a single label. Lastly, we refine boundary pixel labels using graph cuts for maximum a posteriori (MAP) estimation with Markov random field (MRF) priors. The resulting pixelwise classification is then used to accurately extract and reconstruct the buildings in large-scale urban areas. The proposed approach has been extensively tested and the results are reported. |
Tasks | Object Classification |
Published | 2017-09-21 |
URL | http://arxiv.org/abs/1709.07368v2 |
http://arxiv.org/pdf/1709.07368v2.pdf | |
PWC | https://paperswithcode.com/paper/multi-label-pixelwise-classification-for |
Repo | |
Framework | |
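A sketch of the fusion stage only, with random arrays standing in for the per-scale CNN outputs: per-pixel class scores from the different scales are stacked as features and mapped to a single label by an SVM.

```python
# Fusion-stage sketch (assumed shapes): stack per-scale class scores per pixel
# and fuse them into one label with an SVM. Random data stands in for CNN output.
import numpy as np
from sklearn.svm import SVC

n_pixels, n_scales, n_classes = 10000, 5, 6
scale_scores = np.random.rand(n_pixels, n_scales * n_classes)  # per-scale scores
labels = np.random.randint(0, n_classes, n_pixels)             # toy ground truth

svm = SVC(kernel="rbf")
svm.fit(scale_scores[:2000], labels[:2000])  # subsample for tractability
fused = svm.predict(scale_scores)            # one label per pixel
```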
Learning for Active 3D Mapping
Title | Learning for Active 3D Mapping |
Authors | Karel Zimmermann, Tomas Petricek, Vojtech Salansky, Tomas Svoboda |
Abstract | We propose an active 3D mapping method for depth sensors, which allow individual control of depth-measuring rays, such as the newly emerging solid-state lidars. The method simultaneously (i) learns to reconstruct a dense 3D occupancy map from sparse depth measurements, and (ii) optimizes the reactive control of depth-measuring rays. To make the first step towards online control optimization, we propose a fast prioritized greedy algorithm, which needs to update its cost function for only a small fraction of the possible rays. The approximation ratio of the greedy algorithm is derived. An experimental evaluation on a subset of the KITTI dataset demonstrates a significant improvement in 3D map accuracy when learning-to-reconstruct from sparse measurements is coupled with the optimization of depth-measuring rays. |
Tasks | |
Published | 2017-08-07 |
URL | http://arxiv.org/abs/1708.02074v1 |
http://arxiv.org/pdf/1708.02074v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-for-active-3d-mapping |
Repo | |
Framework | |
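The prioritized greedy step can be read as lazy greedy selection over rays: a stale gain is re-evaluated only when its ray reaches the top of the heap, so most rays never have their cost function updated. A generic sketch; the gain function is an assumed input and must be monotone non-increasing as rays are added.

```python
# Lazy (prioritized) greedy selection: re-evaluate a gain only when its entry
# surfaces at the top of the heap. An assumed reading of the paper's scheme.
import heapq

def lazy_greedy(rays, gain, budget):
    """gain(ray, chosen) must be monotone non-increasing in |chosen|."""
    chosen = []
    heap = [(-gain(r, chosen), 0, i, r) for i, r in enumerate(rays)]
    heapq.heapify(heap)  # entries: (neg gain, version, tiebreaker, ray)
    while heap and len(chosen) < budget:
        neg, version, i, r = heapq.heappop(heap)
        if version == len(chosen):   # gain is current -> take the ray
            chosen.append(r)
        else:                        # stale -> re-evaluate and push back
            heapq.heappush(heap, (-gain(r, chosen), len(chosen), i, r))
    return chosen
```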
Accelerated Variance Reduced Stochastic ADMM
Title | Accelerated Variance Reduced Stochastic ADMM |
Authors | Yuanyuan Liu, Fanhua Shang, James Cheng |
Abstract | Recently, many variance-reduced stochastic alternating direction method of multipliers (ADMM) methods (e.g., SAG-ADMM, SDCA-ADMM and SVRG-ADMM) have made exciting progress, such as achieving linear convergence rates for strongly convex problems. However, the best known convergence rate for general convex problems is $O(1/T)$, as opposed to the $O(1/T^2)$ rate of accelerated batch algorithms, where $T$ is the number of iterations. Thus, there remains a gap in convergence rates between existing stochastic ADMM and batch algorithms. To bridge this gap, we introduce the momentum acceleration trick from batch optimization into the stochastic variance reduced gradient based ADMM (SVRG-ADMM), which leads to an accelerated method (ASVRG-ADMM). We then design two different momentum-term update rules for the strongly convex and general convex cases. We prove that ASVRG-ADMM converges linearly for strongly convex problems. Besides having a low per-iteration complexity, like existing stochastic ADMM methods, ASVRG-ADMM improves the convergence rate on general convex problems from $O(1/T)$ to $O(1/T^2)$. Our experimental results show the effectiveness of ASVRG-ADMM. |
Tasks | |
Published | 2017-07-11 |
URL | http://arxiv.org/abs/1707.03190v1 |
http://arxiv.org/pdf/1707.03190v1.pdf | |
PWC | https://paperswithcode.com/paper/accelerated-variance-reduced-stochastic-admm |
Repo | |
Framework | |
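The two ingredients being combined are easy to isolate: the SVRG variance-reduced gradient and a momentum extrapolation step. A sketch of one inner update for a smooth component, not the paper's full ADMM iterations.

```python
# One inner step combining an SVRG gradient estimate with momentum
# extrapolation (an isolated sketch, not the paper's full ADMM updates).
import numpy as np

def asvrg_step(grad_i, x, y, x_snapshot, full_grad, i, lr, theta):
    """grad_i(w, i): stochastic gradient of component i at point w."""
    # variance-reduced gradient evaluated at the extrapolated point y
    g = grad_i(y, i) - grad_i(x_snapshot, i) + full_grad
    x_new = y - lr * g
    y_new = x_new + theta * (x_new - x)  # momentum extrapolation
    return x_new, y_new
```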
Local Neighborhood Intensity Pattern: A new texture feature descriptor for image retrieval
Title | Local Neighborhood Intensity Pattern: A new texture feature descriptor for image retrieval |
Authors | Prithaj Banerjee, Ayan Kumar Bhunia, Avirup Bhattacharyya, Partha Pratim Roy, Subrahmanyam Murala |
Abstract | In this paper, a new texture descriptor based on local neighborhood intensity differences is proposed for content-based image retrieval (CBIR). To compute texture features like the Local Binary Pattern (LBP), the center pixel in a 3×3 window of an image is compared with each of the remaining neighbors, one pixel at a time, to generate a binary bit pattern. This ignores the effect of the adjacent neighbors of a particular pixel on its binary encoding and on the texture description. The proposed method is based on the idea that the neighbors of a particular pixel hold a significant amount of texture information that can be exploited for efficient texture representation in CBIR. Taking this into account, we develop a new texture descriptor, named Local Neighborhood Intensity Pattern (LNIP), which considers the relative intensity difference between a particular pixel and the center pixel while taking the pixel's adjacent neighbors into account, and generates a sign pattern and a magnitude pattern. Since the sign and magnitude patterns hold complementary information, the two patterns are concatenated into a single feature descriptor to produce a more concrete and useful representation. The proposed descriptor has been tested for image retrieval on four databases, including three texture image databases - the Brodatz texture image database, the MIT VisTex database and the Salzburg texture database - and one face database, the AT&T face database. The precision and recall values observed on these databases are compared with some state-of-the-art local patterns. The proposed method shows a significant improvement over many existing methods. |
Tasks | Content-Based Image Retrieval, Image Retrieval |
Published | 2017-09-07 |
URL | http://arxiv.org/abs/1709.02463v3 |
http://arxiv.org/pdf/1709.02463v3.pdf | |
PWC | https://paperswithcode.com/paper/local-neighborhood-intensity-pattern-a-new |
Repo | |
Framework | |
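For orientation, here is the classic 3×3 LBP encoding that LNIP extends; LNIP's adjacent-neighbor sign and magnitude patterns themselves are not reproduced here.

```python
# Classic 3x3 LBP baseline (the encoding LNIP builds on, not LNIP itself):
# threshold the 8 neighbors against the center pixel and pack the bits.
import numpy as np

def lbp_3x3(img):
    center = img[1:-1, 1:-1]
    code = np.zeros_like(center, dtype=np.int32)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:img.shape[0] - 1 + dy,
                       1 + dx:img.shape[1] - 1 + dx]
        code |= (neighbor >= center).astype(np.int32) << bit
    return code

img = np.random.randint(0, 256, (64, 64))          # placeholder image
hist = np.bincount(lbp_3x3(img).ravel(), minlength=256)  # retrieval feature
```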