Paper Group ANR 754
Sparse Multi-Output Gaussian Processes for Medical Time Series Prediction. Exploiting Convolution Filter Patterns for Transfer Learning. Additive Models with Trend Filtering. Pose-based Deep Gait Recognition. Graph Regularized Tensor Sparse Coding for Image Representation. Linear-Time Algorithm in Bayesian Image Denoising based on Gaussian Markov R …
Sparse Multi-Output Gaussian Processes for Medical Time Series Prediction
Title | Sparse Multi-Output Gaussian Processes for Medical Time Series Prediction |
Authors | Li-Fang Cheng, Gregory Darnell, Bianca Dumitrascu, Corey Chivers, Michael E Draugelis, Kai Li, Barbara E Engelhardt |
Abstract | In the scenario of real-time monitoring of hospital patients, high-quality inference of patients’ health status using all information available from clinical covariates and lab tests is essential to enable successful medical interventions and improve patient outcomes. Developing a computational framework that can learn from observational large-scale electronic health records (EHRs) and make accurate real-time predictions is a critical step. In this work, we develop and explore a Bayesian nonparametric model based on Gaussian process (GP) regression for hospital patient monitoring. We propose MedGP, a statistical framework that incorporates 24 clinical and lab covariates and supports a rich reference data set from which relationships between observed covariates may be inferred and exploited for high-quality inference of patient state over time. To do this, we develop a highly structured sparse GP kernel to enable tractable computation over tens of thousands of time points while estimating correlations among clinical covariates, patients, and periodicity in patient observations. MedGP has a number of benefits over current methods, including (i) not requiring an alignment of the time series data, (ii) quantifying confidence regions in the predictions, (iii) exploiting a vast and rich database of patients, and (iv) inferring interpretable relationships among clinical covariates. We evaluate and compare results from MedGP on the task of online prediction for three patient subgroups from two medical data sets across 8,043 patients. We found that MedGP improves online prediction over baseline methods for nearly all covariates across different disease subgroups and studies. The publicly available code is at https://github.com/bee-hive/MedGP. An illustrative multi-output GP sketch follows this entry. |
Tasks | Gaussian Processes, Time Series, Time Series Prediction |
Published | 2017-03-27 |
URL | http://arxiv.org/abs/1703.09112v2 |
http://arxiv.org/pdf/1703.09112v2.pdf | |
PWC | https://paperswithcode.com/paper/sparse-multi-output-gaussian-processes-for |
Repo | |
Framework | |
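The following is a minimal sketch, not the MedGP implementation: it shows exact multi-output GP regression with a structured kernel (a shared squared-exponential plus a periodic component, coupled across two covariates by a small coregionalization matrix B). All hyperparameters, names, and toy data are illustrative assumptions; MedGP additionally imposes sparsity on its kernel to scale to tens of thousands of time points, which this dense O(n^3) example does not.

```python
# Minimal multi-output GP regression sketch (dense, exact; not MedGP itself).
import numpy as np

def se_kernel(t1, t2, lengthscale=5.0):
    d = t1[:, None] - t2[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def periodic_kernel(t1, t2, period=24.0, lengthscale=1.0):
    d = np.abs(t1[:, None] - t2[None, :])
    return np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / lengthscale ** 2)

def multi_output_kernel(t1, out1, t2, out2, B):
    # K[(i,p),(j,q)] = B[p,q] * (k_se + k_per)(t_i, t_j)
    k_time = se_kernel(t1, t2) + periodic_kernel(t1, t2)
    return B[np.ix_(out1, out2)] * k_time

# toy data: two covariates observed at irregular times (hours)
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 72, size=60))
out = rng.integers(0, 2, size=60)            # which covariate each observation belongs to
y = np.sin(2 * np.pi * t / 24.0) + 0.5 * out + 0.1 * rng.standard_normal(60)

B = np.array([[1.0, 0.6], [0.6, 1.0]])       # cross-covariate correlations (illustrative)
noise = 0.05

K = multi_output_kernel(t, out, t, out, B) + noise * np.eye(len(t))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

# predict covariate 0 on a dense grid, with predictive variances
t_star = np.linspace(0, 72, 200)
out_star = np.zeros(200, dtype=int)
K_star = multi_output_kernel(t_star, out_star, t, out, B)
mean = K_star @ alpha
var = (multi_output_kernel(t_star, out_star, t_star, out_star, B).diagonal()
       - np.sum(np.linalg.solve(L, K_star.T) ** 2, axis=0))
print(mean[:5], var[:5])
```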
Exploiting Convolution Filter Patterns for Transfer Learning
Title | Exploiting Convolution Filter Patterns for Transfer Learning |
Authors | Mehmet Aygün, Yusuf Aytar, Hazım Kemal Ekenel |
Abstract | In this paper, we introduce a new regularization technique for transfer learning. The aim of the proposed approach is to capture statistical relationships among convolution filters learned from a well-trained network and transfer this knowledge to another network. Since convolution filters of the prevalent deep Convolutional Neural Network (CNN) models share a number of similar patterns, in order to speed up the learning procedure, we capture such correlations by Gaussian Mixture Models (GMMs) and transfer them using a regularization term. We have conducted extensive experiments on the CIFAR10, Places2, and CMPlaces datasets to assess generalizability, task transferability, and cross-model transferability of the proposed approach, respectively. The experimental results show that the feature representations have been learned and transferred efficiently through the proposed statistical regularization scheme. Moreover, our method is an architecture-independent approach, applicable to a variety of CNN architectures. An illustrative sketch of the GMM-based regularizer follows this entry. |
Tasks | Transfer Learning |
Published | 2017-08-23 |
URL | http://arxiv.org/abs/1708.06973v1 |
http://arxiv.org/pdf/1708.06973v1.pdf | |
PWC | https://paperswithcode.com/paper/exploiting-convolution-filter-patterns-for |
Repo | |
Framework | |
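A minimal sketch of the idea (not the authors' code): fit a GMM to flattened convolution filters of a pre-trained "source" network, then penalize a target network's filters by their negative log-likelihood under that GMM. The layer shapes, mixture size, and weighting coefficient are illustrative assumptions.

```python
# GMM over source-network filters used as a transfer-learning regularizer (sketch).
import numpy as np
import torch
import torch.nn as nn
from sklearn.mixture import GaussianMixture

# 1) collect 3x3 filters from a (pretend) source layer and fit a GMM
source_conv = nn.Conv2d(64, 64, kernel_size=3)           # stand-in for a trained layer
src_filters = source_conv.weight.detach().reshape(-1, 9).numpy()
gmm = GaussianMixture(n_components=8, covariance_type='full', random_state=0)
gmm.fit(src_filters)

# 2) turn the fitted GMM into torch tensors so the penalty is differentiable
means = torch.tensor(gmm.means_, dtype=torch.float32)                # (K, 9)
precisions = torch.tensor(gmm.precisions_, dtype=torch.float32)      # (K, 9, 9)
log_weights = torch.log(torch.tensor(gmm.weights_, dtype=torch.float32))
log_dets = torch.tensor(
    [np.linalg.slogdet(p)[1] for p in gmm.precisions_], dtype=torch.float32)

def gmm_neg_log_likelihood(filters_flat):
    # filters_flat: (N, 9); mean negative log-likelihood under the source-filter GMM
    diff = filters_flat[:, None, :] - means[None, :, :]               # (N, K, 9)
    maha = torch.einsum('nkd,kde,nke->nk', diff, precisions, diff)    # (N, K)
    log_prob = log_weights + 0.5 * log_dets - 0.5 * (maha + 9 * np.log(2 * np.pi))
    return -torch.logsumexp(log_prob, dim=1).mean()

# 3) use it as an extra loss term while training the target network
target_conv = nn.Conv2d(64, 64, kernel_size=3)
task_loss = torch.tensor(0.0)                                         # placeholder task loss
reg = gmm_neg_log_likelihood(target_conv.weight.reshape(-1, 9))
loss = task_loss + 1e-3 * reg                                         # 1e-3 is an assumed weight
loss.backward()
print(float(reg))
```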
Additive Models with Trend Filtering
Title | Additive Models with Trend Filtering |
Authors | Veeranjaneyulu Sadhanala, Ryan J. Tibshirani |
Abstract | We study additive models built with trend filtering, i.e., additive models whose components are each regularized by the (discrete) total variation of their $k$th (discrete) derivative, for a chosen integer $k \geq 0$. This results in $k$th degree piecewise polynomial components (e.g., $k=0$ gives piecewise constant components, $k=1$ gives piecewise linear, $k=2$ gives piecewise quadratic, etc.). Analogous to its advantages in the univariate case, additive trend filtering has favorable theoretical and computational properties, thanks in large part to the localized nature of the (discrete) total variation regularizer that it uses. On the theory side, we derive fast error rates for additive trend filtering estimates, and show these rates are minimax optimal when the underlying function is additive and has component functions whose derivatives are of bounded variation. We also show that these rates are unattainable by additive smoothing splines (and by additive models built from linear smoothers, in general). On the computational side, as per the standard in additive models, backfitting is an appealing method for optimization, but it is particularly appealing for additive trend filtering because we can leverage a few highly efficient univariate trend filtering solvers. Going one step further, we describe a new backfitting algorithm whose iterations can be run in parallel, which (as far as we know) is the first of its kind. Lastly, we present experiments to examine the empirical performance of additive trend filtering. A backfitting sketch follows this entry. |
Tasks | |
Published | 2017-02-16 |
URL | http://arxiv.org/abs/1702.05037v4 |
http://arxiv.org/pdf/1702.05037v4.pdf | |
PWC | https://paperswithcode.com/paper/additive-models-with-trend-filtering |
Repo | |
Framework | |
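A backfitting skeleton for additive models, as described in the abstract. For brevity the per-coordinate "univariate trend filtering solver" is replaced here by a simple running-mean smoother; in practice one would plug in a fast univariate trend filtering routine. The smoother swap, data, and iteration counts are illustrative assumptions.

```python
# Backfitting for additive models, with a stand-in univariate smoother.
import numpy as np

def univariate_smoother(x, r, window=15):
    # Stand-in for a univariate trend filtering solve on the partial residual r:
    # sort by x, smooth with a centered running mean, and center the fit.
    order = np.argsort(x)
    r_sorted = r[order]
    kernel = np.ones(window) / window
    fit_sorted = np.convolve(r_sorted, kernel, mode='same')
    fit = np.empty_like(fit_sorted)
    fit[order] = fit_sorted
    return fit - fit.mean()

def backfit_additive(X, y, n_iters=20):
    n, d = X.shape
    intercept = y.mean()
    components = np.zeros((d, n))
    for _ in range(n_iters):
        for j in range(d):
            # partial residual: everything except the j-th component
            partial_residual = y - intercept - components.sum(axis=0) + components[j]
            components[j] = univariate_smoother(X[:, j], partial_residual)
    return intercept, components

# toy additive data: f1 piecewise constant in x1, f2 linear in x2
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(300, 2))
y = (X[:, 0] > 0).astype(float) + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(300)

intercept, comps = backfit_additive(X, y)
fitted = intercept + comps.sum(axis=0)
print("train MSE:", np.mean((y - fitted) ** 2))
```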
Pose-based Deep Gait Recognition
Title | Pose-based Deep Gait Recognition |
Authors | Anna Sokolova, Anton Konushin |
Abstract | Human gait, or walking manner, is a biometric feature that allows identification of a person when other biometric features, such as the face or iris, are not visible. In this paper, we present a new pose-based convolutional neural network model for gait recognition. Unlike many methods that consider the full-height silhouette of a moving person, we consider the motion of points in the areas around human joints. To extract motion information, we estimate the optical flow between consecutive frames. We propose a deep convolutional model that computes pose-based gait descriptors. We compare different network architectures and aggregation methods and experimentally assess various sets of body parts to determine which are the most important for gait recognition. In addition, we investigate the generalization ability of the developed algorithms by transferring them between datasets. The results of these experiments show that our approach outperforms state-of-the-art methods. A sketch of the joint-centred optical-flow input follows this entry. |
Tasks | Gait Recognition, Optical Flow Estimation |
Published | 2017-10-17 |
URL | http://arxiv.org/abs/1710.06512v3 |
http://arxiv.org/pdf/1710.06512v3.pdf | |
PWC | https://paperswithcode.com/paper/pose-based-deep-gait-recognition |
Repo | |
Framework | |
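A short sketch of the input construction described in the abstract (an interpretation, not the authors' code): estimate dense optical flow between two consecutive frames and crop fixed-size patches around joint locations, so the downstream network sees motion only in the areas around human joints. The flow algorithm choice, patch size, and joint coordinates are assumptions.

```python
# Cropping optical-flow patches around joint locations (illustrative sketch).
import numpy as np
import cv2

def joint_flow_patches(frame_prev, frame_next, joints, patch=32):
    # frame_*: grayscale uint8 images; joints: list of (x, y) pixel coordinates
    flow = cv2.calcOpticalFlowFarneback(frame_prev, frame_next, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)     # (H, W, 2)
    half = patch // 2
    padded = np.pad(flow, ((half, half), (half, half), (0, 0)))       # pad so crops stay in range
    patches = []
    for (x, y) in joints:
        y0, x0 = int(y), int(x)
        # in padded coordinates, [y0:y0+patch, x0:x0+patch] is centred at (x0, y0)
        patches.append(padded[y0:y0 + patch, x0:x0 + patch])
    return np.stack(patches)                                          # (num_joints, patch, patch, 2)

# toy usage with synthetic frames and two "joints"
rng = np.random.default_rng(0)
f0 = rng.integers(0, 255, size=(240, 320), dtype=np.uint8)
f1 = np.roll(f0, shift=2, axis=1)                                     # simple horizontal motion
patches = joint_flow_patches(f0, f1, joints=[(100, 120), (160, 200)])
print(patches.shape)                                                  # (2, 32, 32, 2)
```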
Graph Regularized Tensor Sparse Coding for Image Representation
Title | Graph Regularized Tensor Sparse Coding for Image Representation |
Authors | Fei Jiang, Xiao-Yang Liu, Hongtao Lu, Ruimin Shen |
Abstract | Sparse coding (SC) is an unsupervised learning scheme that has received an increasing amount of interest in recent years. However, conventional SC vectorizes the input images, which destroys the intrinsic spatial structures of the images. In this paper, we propose a novel graph regularized tensor sparse coding (GTSC) for image representation. GTSC preserves the local proximity of elementary structures in the image by adopting the newly proposed tubal-tensor representation. Simultaneously, it considers the intrinsic geometric properties by imposing graph regularization, which has been successfully applied to uncover the geometric distribution of image data. Moreover, the sparse representations returned by GTSC have a clearer physical interpretation, as the key operation (i.e., circular convolution) in the tubal-tensor model preserves the shift-invariance property. Experimental results on image clustering demonstrate the effectiveness of the proposed scheme. A sketch of the tubal-tensor (t-product) operation follows this entry. |
Tasks | Image Clustering |
Published | 2017-03-27 |
URL | http://arxiv.org/abs/1703.09342v1 |
http://arxiv.org/pdf/1703.09342v1.pdf | |
PWC | https://paperswithcode.com/paper/graph-regularized-tensor-sparse-coding-for |
Repo | |
Framework | |
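A minimal sketch of the tubal-tensor operation underlying the abstract: the t-product of two third-order tensors, computed as slice-wise matrix products in the Fourier domain, which is equivalent to circular convolution along the third mode. It illustrates why circular shifts along that mode are preserved; it is not the GTSC algorithm itself, and the tensor shapes are illustrative.

```python
# t-product (circular convolution along the third mode) via FFT.
import numpy as np

def t_product(A, B):
    # A: (n1, k, n3), B: (k, n2, n3) -> (n1, n2, n3)
    A_hat = np.fft.fft(A, axis=2)
    B_hat = np.fft.fft(B, axis=2)
    n3 = A.shape[2]
    C_hat = np.empty((A.shape[0], B.shape[1], n3), dtype=complex)
    for t in range(n3):
        C_hat[:, :, t] = A_hat[:, :, t] @ B_hat[:, :, t]
    return np.real(np.fft.ifft(C_hat, axis=2))

rng = np.random.default_rng(0)
D = rng.standard_normal((8, 5, 4))    # tensor "dictionary"
S = rng.standard_normal((5, 3, 4))    # tensor "codes" (dense here, for illustration)
X = t_product(D, S)                   # reconstructed tensor patches

# shift-invariance check: circularly shifting the codes along the third mode
# circularly shifts the reconstruction in the same way
S_shift = np.roll(S, 1, axis=2)
X_shift = t_product(D, S_shift)
print(np.allclose(np.roll(X, 1, axis=2), X_shift))  # True
```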
Linear-Time Algorithm in Bayesian Image Denoising based on Gaussian Markov Random Field
Title | Linear-Time Algorithm in Bayesian Image Denoising based on Gaussian Markov Random Field |
Authors | Muneki Yasuda, Junpei Watanabe, Shun Kataoka, Kazuyuki Tanaka |
Abstract | In this paper, we consider Bayesian image denoising based on a Gaussian Markov random field (GMRF) model, for which we propose a new algorithm. Our method can solve Bayesian image denoising problems, including hyperparameter estimation, in $O(n)$-time, where $n$ is the number of pixels in a given image. From the perspective of the order of the computational time, this is a state-of-the-art algorithm for the present problem setting. Moreover, the results of our numerical experiments show that our method is effective in practice. A DCT-based GMRF denoising sketch follows this entry. |
Tasks | Denoising, Image Denoising |
Published | 2017-10-20 |
URL | https://arxiv.org/abs/1710.07393v2 |
https://arxiv.org/pdf/1710.07393v2.pdf | |
PWC | https://paperswithcode.com/paper/linear-time-algorithm-in-bayesian-image |
Repo | |
Framework | |
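A hedged illustration of why GMRF-based Bayesian denoising can be very fast: with a lattice GMRF prior, the posterior mean $(\sigma^{-2} I + \lambda L)^{-1} \sigma^{-2} y$ diagonalizes in the 2-D DCT basis, so it costs roughly $O(n \log n)$. This is a standard construction, not necessarily the paper's exact $O(n)$ algorithm, and the hyperparameters below are fixed rather than estimated as in the paper.

```python
# GMRF posterior-mean denoising via DCT diagonalization of the lattice Laplacian.
import numpy as np
from scipy.fft import dctn, idctn

def gmrf_posterior_mean(noisy, lam=2.0, sigma2=0.05):
    h, w = noisy.shape
    # eigenvalues of the 4-neighbour graph Laplacian in the DCT-II (Neumann) basis
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    lap_eig = (2 - 2 * np.cos(np.pi * yy / h)) + (2 - 2 * np.cos(np.pi * xx / w))
    y_hat = dctn(noisy, norm='ortho')
    post_hat = y_hat / (1.0 + sigma2 * lam * lap_eig)   # (I/sig2 + lam L)^-1 y/sig2 per mode
    return idctn(post_hat, norm='ortho')

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0
noisy = clean + np.sqrt(0.05) * rng.standard_normal(clean.shape)
denoised = gmrf_posterior_mean(noisy)
print("noisy MSE:", np.mean((noisy - clean) ** 2),
      "denoised MSE:", np.mean((denoised - clean) ** 2))
```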
Coarse Grained Exponential Variational Autoencoders
Title | Coarse Grained Exponential Variational Autoencoders |
Authors | Ke Sun, Xiangliang Zhang |
Abstract | Variational autoencoders (VAE) often use Gaussian or categorical distributions to model the inference process. This puts a limit on variational learning because this simplified assumption does not match the true posterior distribution, which is usually much more sophisticated. To break this limitation and apply arbitrary parametric distributions during inference, this paper derives a \emph{semi-continuous} latent representation, which approximates a continuous density up to a prescribed precision, and is much easier to analyze than its continuous counterpart because it is fundamentally discrete. We showcase the proposition by applying polynomial exponential family distributions as the posterior, which are universal probability density function generators. Our experimental results show consistent improvements over commonly used VAE models. A small sketch of the semi-continuous discretization idea follows this entry. |
Tasks | |
Published | 2017-02-25 |
URL | http://arxiv.org/abs/1702.07904v1 |
http://arxiv.org/pdf/1702.07904v1.pdf | |
PWC | https://paperswithcode.com/paper/coarse-grained-exponential-variational |
Repo | |
Framework | |
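A small illustration of one reading of the core idea (an assumption, not the authors' model): a "semi-continuous" approximation of a continuous density, here a polynomial exponential family $p(z) \propto \exp(a_1 z + a_2 z^2 + a_3 z^3 + a_4 z^4)$, obtained by discretizing it on a fine grid so that it can be handled as a categorical distribution during inference. The coefficients, grid range, and precision are illustrative.

```python
# Discretizing a polynomial exponential family density to a prescribed precision.
import numpy as np

def semi_continuous(coeffs, z_min=-4.0, z_max=4.0, precision=1e-2):
    grid = np.arange(z_min, z_max, precision)
    log_unnorm = sum(c * grid ** (k + 1) for k, c in enumerate(coeffs))
    log_unnorm -= log_unnorm.max()                      # numerical stability
    probs = np.exp(log_unnorm)
    probs /= probs.sum()
    return grid, probs

coeffs = [0.3, -0.5, 0.0, -0.05]                        # negative leading even term keeps it proper
grid, probs = semi_continuous(coeffs)

rng = np.random.default_rng(0)
samples = rng.choice(grid, size=5, p=probs)             # sampling is now just categorical
mean = np.sum(grid * probs)
print("approximate mean:", mean, "samples:", samples)
```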
Yes-Net: An effective Detector Based on Global Information
Title | Yes-Net: An effective Detector Based on Global Information |
Authors | Liangzhuang Ma, Xin Kan, Qianjiang Xiao, Wenlong Liu, Peiqin Sun |
Abstract | This paper introduces a new real-time object detection approach named Yes-Net. It predicts bounding boxes and classes with a single neural network, like YOLOv2 and SSD, but offers more efficient and distinctive features. It combines local information with global information by adding an RNN architecture as a packed unit in the CNN model to form the basic feature extractor. Independent anchor boxes obtained from full-dimension k-means are also applied in Yes-Net, bringing better average IoU than grid anchor boxes. In addition, instead of NMS, Yes-Net uses an RNN as a filter to obtain the final boxes, which is more efficient. For 416 x 416 input, Yes-Net achieves 79.2% mAP on VOC2007 test at 39 FPS on an Nvidia Titan X Pascal. A sketch of IoU-based anchor clustering follows this entry. |
Tasks | Object Detection, Real-Time Object Detection |
Published | 2017-06-28 |
URL | http://arxiv.org/abs/1706.09180v2 |
http://arxiv.org/pdf/1706.09180v2.pdf | |
PWC | https://paperswithcode.com/paper/yes-net-an-effective-detector-based-on-global |
Repo | |
Framework | |
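A hedged sketch of anchor-box clustering with an IoU-based distance, in the spirit of the "full-dimension k-means" anchors mentioned in the abstract (the exact Yes-Net variant may differ). Boxes are represented by (width, height) and the distance between a box and a cluster centroid is 1 - IoU; the toy box statistics are assumptions.

```python
# k-means over box dimensions with an IoU distance (YOLOv2-style anchor clustering).
import numpy as np

def iou_wh(boxes, centroids):
    # boxes: (N, 2), centroids: (K, 2); IoU assuming boxes share the same center
    inter = (np.minimum(boxes[:, None, 0], centroids[None, :, 0]) *
             np.minimum(boxes[:, None, 1], centroids[None, :, 1]))
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (centroids[:, 0] * centroids[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=5, n_iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centroids = boxes[rng.choice(len(boxes), size=k, replace=False)]
    for _ in range(n_iters):
        assign = np.argmax(iou_wh(boxes, centroids), axis=1)   # max IoU = min (1 - IoU)
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = boxes[assign == j].mean(axis=0)
    avg_iou = np.mean(np.max(iou_wh(boxes, centroids), axis=1))
    return centroids, avg_iou

# toy ground-truth box sizes (normalized width, height)
rng = np.random.default_rng(1)
boxes = np.abs(rng.normal(loc=[0.2, 0.4], scale=[0.1, 0.2], size=(500, 2))) + 0.01
anchors, avg_iou = kmeans_anchors(boxes)
print("anchors:\n", anchors, "\naverage IoU:", avg_iou)
```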
Tracking objects using 3D object proposals
Title | Tracking objects using 3D object proposals |
Authors | Ramanpreet Singh Pahwa, Tian Tsong Ng, Minh N. Do |
Abstract | 3D object proposals, quickly detected regions in a 3D scene that likely contain an object of interest, are an effective approach to improve the computational efficiency and accuracy of the object detection framework. In this work, we propose a novel online method that uses our previously developed 3D object proposals, in an RGB-D video sequence, to match and track static objects in the scene using shape matching. Our main observation is that depth images provide important information about the geometry of the scene that is often ignored in object matching techniques. Our method takes less than a second in MATLAB on the UW-RGBD scene dataset on a single-threaded CPU and thus has the potential to be used in low-power chips in Unmanned Aerial Vehicles (UAVs), quadcopters, and drones. A depth back-projection sketch follows this entry. |
Tasks | Object Detection |
Published | 2017-12-19 |
URL | http://arxiv.org/abs/1712.06780v1 |
http://arxiv.org/pdf/1712.06780v1.pdf | |
PWC | https://paperswithcode.com/paper/tracking-objects-using-3d-object-proposals |
Repo | |
Framework | |
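A hedged sketch of the geometric ingredient highlighted in the abstract: back-projecting a depth image into a 3-D point cloud with the camera intrinsics, which is what makes depth-aware shape matching possible. The intrinsics below are illustrative; the matching itself (shape descriptors, proposal association) is not shown.

```python
# Pinhole back-projection of a depth image into camera-frame 3-D points.
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    # depth: (H, W) array in meters; returns (H*W, 3) points in camera coordinates
    h, w = depth.shape
    v, u = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

depth = np.full((480, 640), 1.5)                 # toy flat scene 1.5 m away
cloud = depth_to_pointcloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape, cloud[:2])
```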
W2VLDA: Almost Unsupervised System for Aspect Based Sentiment Analysis
Title | W2VLDA: Almost Unsupervised System for Aspect Based Sentiment Analysis |
Authors | Aitor García-Pablos, Montse Cuadros, German Rigau |
Abstract | With the increase of online customer opinions in specialised websites and social networks, the necessity of automatic systems to help organise and classify customer reviews by domain-specific aspects/categories and sentiment polarity is more important than ever. Supervised approaches to Aspect Based Sentiment Analysis obtain good results for the domain/language they are trained on, but obtaining manually labelled training data for all domains and languages is usually very costly and time consuming. In this work we describe W2VLDA, an almost unsupervised system based on topic modelling that, combined with some other unsupervised methods and a minimal configuration, performs aspect/category classification, aspect-term/opinion-word separation, and sentiment polarity classification for any given domain and language. We evaluate the performance of the aspect and sentiment classification in the multilingual SemEval 2016 task 5 (ABSA) dataset. We show competitive results for several languages (English, Spanish, French and Dutch) and domains (hotels, restaurants, electronic devices). A toy sketch of seed-word aspect assignment follows this entry. |
Tasks | Aspect-Based Sentiment Analysis, Sentiment Analysis |
Published | 2017-05-22 |
URL | http://arxiv.org/abs/1705.07687v2 |
http://arxiv.org/pdf/1705.07687v2.pdf | |
PWC | https://paperswithcode.com/paper/w2vlda-almost-unsupervised-system-for-aspect |
Repo | |
Framework | |
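A toy, heavily simplified illustration of the "minimal configuration" idea in the abstract: one seed word per aspect, and sentences assigned to the aspect whose seed embedding is most similar to the averaged word embeddings of the sentence. The tiny embedding table is fabricated for the example; a real system would use word2vec vectors and, as in W2VLDA, combine them with topic modelling.

```python
# Seed-word aspect assignment by embedding similarity (toy sketch).
import numpy as np

emb = {                                    # 3-d toy "word embeddings" (fabricated)
    "room":  np.array([0.9, 0.1, 0.0]), "bed":       np.array([0.8, 0.2, 0.1]),
    "food":  np.array([0.1, 0.9, 0.0]), "breakfast": np.array([0.2, 0.8, 0.1]),
    "staff": np.array([0.0, 0.1, 0.9]), "friendly":  np.array([0.1, 0.0, 0.8]),
}
seeds = {"rooms": "room", "food": "food", "service": "staff"}   # one seed word per aspect

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def assign_aspect(sentence):
    vecs = [emb[w] for w in sentence.lower().split() if w in emb]
    if not vecs:
        return None
    sent_vec = np.mean(vecs, axis=0)
    return max(seeds, key=lambda a: cosine(sent_vec, emb[seeds[a]]))

print(assign_aspect("the bed in the room was comfortable"))   # -> rooms
print(assign_aspect("breakfast food was tasty"))              # -> food
print(assign_aspect("friendly staff at reception"))           # -> service
```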
Adversarial Phenomenon in the Eyes of Bayesian Deep Learning
Title | Adversarial Phenomenon in the Eyes of Bayesian Deep Learning |
Authors | Ambrish Rawat, Martin Wistuba, Maria-Irina Nicolae |
Abstract | Deep Learning models are vulnerable to adversarial examples, i.e., images obtained via deliberate imperceptible perturbations, such that the model misclassifies them with high confidence. However, class confidence by itself is an incomplete picture of uncertainty. We therefore use principled Bayesian methods to capture model uncertainty in prediction for observing adversarial misclassification. We provide an extensive study with different Bayesian neural networks attacked in both white-box and black-box setups. The behaviour of the networks for noise, attacks and clean test data is compared. We observe that Bayesian neural networks are uncertain in their predictions for adversarial perturbations, a behaviour similar to the one observed for random Gaussian perturbations. Thus, we conclude that Bayesian neural networks can be considered for detecting adversarial examples. A Monte-Carlo-dropout uncertainty sketch follows this entry. |
Tasks | |
Published | 2017-11-22 |
URL | http://arxiv.org/abs/1711.08244v1 |
http://arxiv.org/pdf/1711.08244v1.pdf | |
PWC | https://paperswithcode.com/paper/adversarial-phenomenon-in-the-eyes-of |
Repo | |
Framework | |
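A minimal sketch of the kind of Bayesian uncertainty measurement discussed in the abstract, using Monte-Carlo dropout as an approximate Bayesian neural network (one of several approximations; the paper studies multiple Bayesian setups). High predictive entropy across stochastic forward passes serves as a signal that an input may be adversarial. The architecture and perturbation below are illustrative stand-ins.

```python
# MC-dropout predictive entropy as an adversarial-input signal (sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MCDropoutNet(nn.Module):
    def __init__(self, in_dim=784, hidden=256, classes=10, p=0.5):
        super().__init__()
        self.fc1, self.fc2 = nn.Linear(in_dim, hidden), nn.Linear(hidden, classes)
        self.drop = nn.Dropout(p)

    def forward(self, x):
        return self.fc2(self.drop(F.relu(self.fc1(x))))

def predictive_entropy(model, x, n_samples=50):
    model.train()                      # keep dropout active at test time
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)     # Monte-Carlo predictive distribution
    entropy = -(mean_probs * torch.log(mean_probs + 1e-12)).sum(dim=-1)
    return mean_probs, entropy

model = MCDropoutNet()                 # untrained here; in practice use a trained model
x_clean = torch.randn(4, 784)
x_perturbed = x_clean + 0.3 * torch.sign(torch.randn(4, 784))   # stand-in perturbation
_, h_clean = predictive_entropy(model, x_clean)
_, h_pert = predictive_entropy(model, x_perturbed)
print("entropy (clean):", h_clean, "\nentropy (perturbed):", h_pert)
```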
WristAuthen: A Dynamic Time Wrapping Approach for User Authentication by Hand-Interaction through Wrist-Worn Devices
Title | WristAuthen: A Dynamic Time Wrapping Approach for User Authentication by Hand-Interaction through Wrist-Worn Devices |
Authors | Qi Lyu, Zhifeng Kong, Chao Shen, Tianwei Yue |
Abstract | The growing trend of using wearable devices for context-aware computing and pervasive sensing systems has raised their potential for quick and reliable authentication techniques. Since personal writing habits differ from person to person, it is possible to realize user authentication through writing. This is of great significance, as sensitive information is easily collected by these devices. This paper presents a novel user authentication system through wrist-worn devices by analyzing the interaction behavior of users, which is both accurate and efficient for future usage. The key feature of our approach lies in using the highly effective Savitzky-Golay filter and Dynamic Time Warping (DTW) method to obtain fine-grained writing metrics for user authentication. These new metrics are relatively unique from person to person and independent of the computing platform. Analyses are conducted on the wristband-interaction data collected from 50 users with diversity in gender, age, and height. Extensive experimental results show that the proposed approach can identify users in a timely and accurate manner, with a false-negative rate of 1.78%, a false-positive rate of 6.7%, and an Area Under the ROC Curve of 0.983. Robustness to various mimic attacks, tolerance to training data, and comparisons that further analyze the applicability are also examined. A Savitzky-Golay plus DTW sketch follows this entry. |
Tasks | |
Published | 2017-10-22 |
URL | http://arxiv.org/abs/1710.07941v1 |
http://arxiv.org/pdf/1710.07941v1.pdf | |
PWC | https://paperswithcode.com/paper/wristauthen-a-dynamic-time-wrapping-approach |
Repo | |
Framework | |
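A small sketch of the two signal-processing ingredients named in the abstract: Savitzky-Golay smoothing of a motion signal, followed by a plain dynamic time warping (DTW) distance between an enrollment template and a new attempt. The signals, filter settings, and acceptance threshold are illustrative; the paper's feature extraction is richer.

```python
# Savitzky-Golay smoothing + DTW distance for gesture comparison (sketch).
import numpy as np
from scipy.signal import savgol_filter

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
template_raw = np.sin(6 * np.pi * t) + 0.15 * rng.standard_normal(200)          # enrolled gesture
genuine_raw = np.sin(6 * np.pi * (t ** 1.05)) + 0.15 * rng.standard_normal(200) # same user, time-warped
impostor_raw = np.sin(4 * np.pi * t) + 0.15 * rng.standard_normal(200)          # different gesture

template = savgol_filter(template_raw, window_length=21, polyorder=3)
genuine = savgol_filter(genuine_raw, window_length=21, polyorder=3)
impostor = savgol_filter(impostor_raw, window_length=21, polyorder=3)

print("DTW(template, genuine): ", dtw_distance(template, genuine))
print("DTW(template, impostor):", dtw_distance(template, impostor))
# accept the attempt if its DTW distance falls below a tuned threshold
```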
Coping with Construals in Broad-Coverage Semantic Annotation of Adpositions
Title | Coping with Construals in Broad-Coverage Semantic Annotation of Adpositions |
Authors | Jena D. Hwang, Archna Bhatia, Na-Rae Han, Tim O’Gorman, Vivek Srikumar, Nathan Schneider |
Abstract | We consider the semantics of prepositions, revisiting a broad-coverage annotation scheme used for annotating all 4,250 preposition tokens in a 55,000 word corpus of English. Attempts to apply the scheme to adpositions and case markers in other languages, as well as some problematic cases in English, have led us to reconsider the assumption that a preposition’s lexical contribution is equivalent to the role/relation that it mediates. Our proposal is to embrace the potential for construal in adposition use, expressing such phenomena directly at the token level to manage complexity and avoid sense proliferation. We suggest a framework to represent both the scene role and the adposition’s lexical function so they can be annotated at scale—supporting automatic, statistical processing of domain-general language—and sketch how this representation would inform a constructional analysis. |
Tasks | |
Published | 2017-03-10 |
URL | http://arxiv.org/abs/1703.03771v1 |
http://arxiv.org/pdf/1703.03771v1.pdf | |
PWC | https://paperswithcode.com/paper/coping-with-construals-in-broad-coverage |
Repo | |
Framework | |
Spatio-Temporal Action Detection with Cascade Proposal and Location Anticipation
Title | Spatio-Temporal Action Detection with Cascade Proposal and Location Anticipation |
Authors | Zhenheng Yang, Jiyang Gao, Ram Nevatia |
Abstract | In this work, we address the problem of spatio-temporal action detection in temporally untrimmed videos. It is an important and challenging task, as finding accurate human actions in both temporal and spatial space is important for analyzing large-scale video data. To tackle this problem, we propose a cascade proposal and location anticipation (CPLA) model for frame-level action detection. There are several salient points of our model: (1) a cascade region proposal network (casRPN) is adopted for action proposal generation and shows better localization accuracy compared with a single region proposal network (RPN); (2) action spatio-temporal consistencies are exploited via a location anticipation network (LAN), and thus frame-level action detection is not conducted independently. Frame-level detections are then linked by solving a linking score maximization problem, and temporally trimmed into spatio-temporal action tubes. We demonstrate the effectiveness of our model on the challenging UCF101 and LIRIS-HARL datasets, both achieving state-of-the-art performance. A sketch of the detection-linking step follows this entry. |
Tasks | Action Detection |
Published | 2017-07-31 |
URL | http://arxiv.org/abs/1708.00042v1 |
http://arxiv.org/pdf/1708.00042v1.pdf | |
PWC | https://paperswithcode.com/paper/spatio-temporal-action-detection-with-cascade |
Repo | |
Framework | |
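A hedged sketch of the "linking score maximization" step: frame-level detections are linked into an action tube with dynamic programming, where consecutive boxes are scored by detection confidence plus spatial overlap. This is the standard tube-linking formulation; the exact scores and the temporal trimming used in the paper are not shown.

```python
# Linking frame-level detections into an action tube via dynamic programming.
import numpy as np

def iou(a, b):
    # boxes as [x1, y1, x2, y2]
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-12)

def link_detections(frames):
    # frames: list over time of (boxes (N_t, 4), scores (N_t,)) tuples
    n_t = len(frames)
    best = [frames[0][1].copy()]                     # cumulative best score per box
    back = [np.full(len(frames[0][1]), -1)]
    for t in range(1, n_t):
        boxes_prev, _ = frames[t - 1]
        boxes, scores = frames[t]
        cum = np.empty(len(scores))
        ptr = np.empty(len(scores), dtype=int)
        for j, (b, s) in enumerate(zip(boxes, scores)):
            edge = [best[t - 1][i] + s + iou(boxes_prev[i], b)
                    for i in range(len(boxes_prev))]
            ptr[j] = int(np.argmax(edge))
            cum[j] = edge[ptr[j]]
        best.append(cum)
        back.append(ptr)
    # backtrack the highest-scoring path (one action tube)
    tube = [int(np.argmax(best[-1]))]
    for t in range(n_t - 1, 0, -1):
        tube.append(int(back[t][tube[-1]]))
    return tube[::-1]

# toy example: 3 frames, 2 candidate boxes per frame
frames = [
    (np.array([[0, 0, 10, 10], [30, 30, 40, 40]]), np.array([0.9, 0.4])),
    (np.array([[1, 1, 11, 11], [31, 31, 41, 41]]), np.array([0.8, 0.5])),
    (np.array([[2, 2, 12, 12], [32, 32, 42, 42]]), np.array([0.7, 0.6])),
]
print("linked box indices per frame:", link_detections(frames))
```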
Towards CNN Map Compression for camera relocalisation
Title | Towards CNN Map Compression for camera relocalisation |
Authors | Luis Contreras, Walterio Mayol-Cuevas |
Abstract | This paper presents a study on the use of Convolutional Neural Networks for camera relocalisation and its application to map compression. We follow state-of-the-art visual relocalisation results and evaluate the response to different data inputs – namely depth, grayscale, RGB, spatial position, and combinations of these. We use a CNN map representation and introduce the notion of CNN map compression by using a smaller CNN architecture. We evaluate our proposal in a series of publicly available datasets. This formulation allows us to improve relocalisation accuracy by increasing the number of training trajectories while maintaining a constant-size CNN. A compact pose-regression CNN sketch follows this entry. |
Tasks | Camera Relocalization |
Published | 2017-03-02 |
URL | http://arxiv.org/abs/1703.00845v1 |
http://arxiv.org/pdf/1703.00845v1.pdf | |
PWC | https://paperswithcode.com/paper/towards-cnn-map-compression-for-camera |
Repo | |
Framework | |
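A compact PoseNet-style sketch of the "CNN map representation" idea: a small convolutional network regresses a 3-D position and a unit quaternion from an image, and "map compression" then corresponds to shrinking this network. The architecture, channel counts, and loss weighting are illustrative assumptions, not the paper's configuration.

```python
# Tiny pose-regression CNN: image -> (position, quaternion), with a weighted loss.
import torch
import torch.nn as nn

class TinyPoseNet(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, channels, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(channels, channels * 2, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc_xyz = nn.Linear(channels * 2, 3)     # camera position
        self.fc_quat = nn.Linear(channels * 2, 4)    # camera orientation (quaternion)

    def forward(self, x):
        f = self.features(x).flatten(1)
        xyz = self.fc_xyz(f)
        quat = nn.functional.normalize(self.fc_quat(f), dim=1)
        return xyz, quat

model = TinyPoseNet()
images = torch.randn(2, 3, 128, 160)
xyz_gt = torch.randn(2, 3)
quat_gt = nn.functional.normalize(torch.randn(2, 4), dim=1)
xyz, quat = model(images)
loss = nn.functional.mse_loss(xyz, xyz_gt) + 250.0 * nn.functional.mse_loss(quat, quat_gt)
loss.backward()
print(float(loss), sum(p.numel() for p in model.parameters()), "parameters")
```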