October 19, 2019

3161 words 15 mins read

Paper Group ANR 310

21 Million Opportunities: A 19 Facility Investigation of Factors Affecting Hand Hygiene Compliance via Linear Predictive Models

Title 21 Million Opportunities: A 19 Facility Investigation of Factors Affecting Hand Hygiene Compliance via Linear Predictive Models
Authors Michael T. Lash, Jason Slater, Philip M. Polgreen, Alberto M. Segre
Abstract This large-scale study, consisting of 21.3 million hand hygiene opportunities from 19 distinct facilities in 10 different states, uses linear predictive models to expose factors that may affect hand hygiene compliance. We examine the use of features such as temperature, relative humidity, influenza severity, day/night shift, federal holidays and the presence of new medical residents in predicting daily hand hygiene compliance; the investigation is undertaken using both a “global” model to glean general trends, and facility-specific models to elicit facility-specific insights. The results suggest that colder temperatures and federal holidays have an adverse effect on hand hygiene compliance rates, and that individual cultures and attitudes regarding hand hygiene exist among facilities.
Tasks
Published 2018-01-26
URL http://arxiv.org/abs/1801.09546v1
PDF http://arxiv.org/pdf/1801.09546v1.pdf
PWC https://paperswithcode.com/paper/21-million-opportunities-a-19-facility
Repo
Framework
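The study's linear predictive models regress daily compliance on covariates such as temperature and holiday indicators. A minimal sketch of that kind of model, with entirely invented data and coefficients (the feature values and effect sizes below are illustrative assumptions, not the paper's numbers):

```python
import numpy as np

# Hypothetical sketch: fit a linear model of daily compliance on two
# candidate covariates (temperature, federal-holiday flag), in the spirit
# of the paper's "global" model. All data here are simulated.
rng = np.random.default_rng(0)
n = 365
temperature = rng.normal(15.0, 10.0, n)          # daily mean temperature (C)
holiday = (rng.random(n) < 0.03).astype(float)   # federal-holiday indicator
# Simulate compliance so that colder days and holidays lower it,
# mirroring the direction of the paper's findings.
compliance = 0.85 + 0.002 * temperature - 0.05 * holiday \
    + rng.normal(0.0, 0.01, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), temperature, holiday])
beta, *_ = np.linalg.lstsq(X, compliance, rcond=None)
# beta[1] > 0: warmer days raise compliance; beta[2] < 0: holidays lower it.
```

Fitting one such model per facility, instead of pooling all 19, is what yields the facility-specific insights the abstract mentions.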

Industrial Smoke Detection and Visualization

Title Industrial Smoke Detection and Visualization
Authors Yen-Chia Hsu, Paul Dille, Randy Sargent, Illah Nourbakhsh
Abstract As sensing technology proliferates and becomes affordable to the general public, there is a growing trend in citizen science where scientists and volunteers form a strong partnership in conducting scientific research, including problem finding, data collection, analysis, visualization, and storytelling. Providing easy-to-use computational tools to support citizen science has become an important issue. To raise public awareness of environmental science and improve the air quality in local areas, we are currently collaborating with a local community in monitoring and documenting fugitive emissions from a coke refinery. We have helped the community members build a live camera system which has captured and visualized high-resolution timelapse imagery since November 2014. However, searching for and documenting smoke emissions manually across all video frames requires an impractical investment of time and labor. This paper describes a software tool which integrates four features: (1) an algorithm based on change detection and texture segmentation for identifying smoke emissions; (2) an interactive timeline visualization providing indicators for seeking to interesting events; (3) an autonomous fast-forwarding mode for skipping uninteresting timelapse frames; and (4) a collection of animated smoke images generated automatically according to the algorithm for documentation, presentation, storytelling, and sharing. With the help of this tool, citizen scientists can now focus on the content of the story instead of time-consuming and laborious work.
Tasks
Published 2018-09-17
URL http://arxiv.org/abs/1809.06263v1
PDF http://arxiv.org/pdf/1809.06263v1.pdf
PWC https://paperswithcode.com/paper/industrial-smoke-detection-and-visualization
Repo
Framework
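The detection algorithm combines change detection with texture segmentation. A toy sketch of the change-detection half only (the background model, threshold, and frames below are invented stand-ins, not the paper's implementation):

```python
import numpy as np

# Minimal change-detection sketch: flag pixels whose intensity deviates
# from a running background estimate by more than a threshold. The paper's
# full algorithm additionally uses texture segmentation, omitted here.
def detect_change(frames, alpha=0.05, thresh=30.0):
    """Return per-frame boolean masks of changed pixels."""
    background = frames[0].astype(float)
    masks = []
    for frame in frames[1:]:
        diff = np.abs(frame.astype(float) - background)
        masks.append(diff > thresh)
        # Exponential running average keeps the background up to date.
        background = (1 - alpha) * background + alpha * frame
    return masks

# Toy frames: a static scene, then a bright "plume" appears in one corner.
static = np.full((8, 8), 100, dtype=np.uint8)
plume = static.copy()
plume[:3, :3] = 200
masks = detect_change([static, static, plume])
```

The resulting masks are what drive the timeline indicators and fast-forwarding: frames with few changed pixels are the "uninteresting" ones to skip.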

Synergy Effect between Convolutional Neural Networks and the Multiplicity of SMILES for Improvement of Molecular Prediction

Title Synergy Effect between Convolutional Neural Networks and the Multiplicity of SMILES for Improvement of Molecular Prediction
Authors Talia B. Kimber, Sebastian Engelke, Igor V. Tetko, Eric Bruno, Guillaume Godin
Abstract In our study, we demonstrate the synergy effect between convolutional neural networks and the multiplicity of SMILES. The model we propose, the so-called Convolutional Neural Fingerprint (CNF) model, reaches the accuracy of traditional descriptors such as Dragon (Mauri et al. [22]), RDKit (Landrum [18]), CDK2 (Willighagen et al. [43]) and PyDescriptor (Masand and Rastija [20]). Moreover, the CNF model generally performs better than highly fine-tuned traditional descriptors, especially on small data sets, which is of great interest for the chemical field, where data sets are generally small due to experimental costs, the availability of molecules, or accessibility to private databases. We evaluate the CNF model along with SMILES augmentation during both training and testing. To the best of our knowledge, this is the first time that such a methodology has been presented. We show that using the multiplicity of SMILES during training acts as a regulariser that mitigates overfitting, and that using it at test time can be seen as ensemble learning.
Tasks
Published 2018-12-11
URL http://arxiv.org/abs/1812.04439v1
PDF http://arxiv.org/pdf/1812.04439v1.pdf
PWC https://paperswithcode.com/paper/synergy-effect-between-convolutional-neural
Repo
Framework
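The test-time half of the idea is that averaging predictions over several equivalent SMILES renderings of one molecule behaves like an ensemble. A sketch of that averaging logic only, where the predictor and the renderings are stand-ins (real augmentation would enumerate randomized canonical-order SMILES, e.g. with a cheminformatics toolkit):

```python
import random
import statistics

# Illustrative: average a model's predictions over many equivalent SMILES
# renderings of the same molecule. The "model" is a stand-in whose output
# is the true property value (1.0) plus rendering-dependent noise.
random.seed(0)

def predict(rendering):
    return 1.0 + random.gauss(0.0, 0.1)

renderings = [f"smiles_variant_{i}" for i in range(100)]  # hypothetical names
single = predict(renderings[0])
ensemble = statistics.mean(predict(r) for r in renderings)
# The ensemble mean sits much closer to the true value than any one
# rendering is guaranteed to, which is the usual variance-reduction
# argument for test-time augmentation.
```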

Efficient Model Identification for Tensegrity Locomotion

Title Efficient Model Identification for Tensegrity Locomotion
Authors Shaojun Zhu, David Surovik, Kostas E. Bekris, Abdeslam Boularias
Abstract This paper aims to identify in a practical manner unknown physical parameters, such as mechanical models of actuated robot links, which are critical in dynamical robotic tasks. Key features include the use of an off-the-shelf physics engine and the Bayesian optimization framework. The task being considered is locomotion with a high-dimensional, compliant Tensegrity robot. A key insight, in this case, is the need to project the model identification challenge into an appropriate lower dimensional space for efficiency. Comparisons with alternatives indicate that the proposed method can identify the parameters more accurately within the given time budget, which also results in more precise locomotion control.
Tasks
Published 2018-04-12
URL http://arxiv.org/abs/1804.04696v1
PDF http://arxiv.org/pdf/1804.04696v1.pdf
PWC https://paperswithcode.com/paper/efficient-model-identification-for-tensegrity
Repo
Framework
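The key insight above is searching a lower-dimensional projection of the parameter space. A toy sketch of that idea, with random search standing in for the paper's Bayesian optimization and a trivial function standing in for the physics engine (everything below is an invented stand-in):

```python
import numpy as np

# Assume the behavior-relevant variation of a 10-D parameter vector lies
# in a 2-D subspace; search the subspace instead of the full space.
rng = np.random.default_rng(1)
P = rng.normal(size=(10, 2))     # fixed projection: 2-D coords -> 10-D params
z_true = np.array([0.3, -0.6])   # hidden ground-truth coordinates
true_params = P @ z_true

def simulate(params):
    # Stand-in for a physics-engine rollout producing a trajectory feature.
    return np.sin(params).sum()

target = simulate(true_params)   # observed behavior of the real system

best_err, best_z = np.inf, None
for _ in range(2000):
    z = rng.uniform(-1, 1, 2)                # low-dimensional candidate
    err = abs(simulate(P @ z) - target)      # discrepancy with observation
    if err < best_err:
        best_err, best_z = err, z
```

Searching 2 coordinates instead of 10 is what makes the identification tractable within a small simulation budget; the paper replaces the random draws here with Bayesian optimization to spend that budget more efficiently.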

Towards Safe Autonomous Driving: Capture Uncertainty in the Deep Neural Network For Lidar 3D Vehicle Detection

Title Towards Safe Autonomous Driving: Capture Uncertainty in the Deep Neural Network For Lidar 3D Vehicle Detection
Authors Di Feng, Lars Rosenbaum, Klaus Dietmayer
Abstract To assure that an autonomous car is driving safely on public roads, its object detection module should not only work correctly, but show its prediction confidence as well. Previous object detectors driven by deep learning do not explicitly model uncertainties in the neural network. We tackle this problem by presenting practical methods to capture uncertainties in a 3D vehicle detector for Lidar point clouds. The proposed probabilistic detector represents reliable epistemic uncertainty and aleatoric uncertainty in classification and localization tasks. Experimental results show that the epistemic uncertainty is related to the detection accuracy, whereas the aleatoric uncertainty is influenced by vehicle distance and occlusion. The results also show that we can improve the detection performance by 1%-5% by modeling the aleatoric uncertainty.
Tasks Autonomous Driving, Object Detection
Published 2018-04-13
URL http://arxiv.org/abs/1804.05132v2
PDF http://arxiv.org/pdf/1804.05132v2.pdf
PWC https://paperswithcode.com/paper/towards-safe-autonomous-driving-capture
Repo
Framework
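One standard way to expose epistemic uncertainty in a deep detector (not necessarily the paper's exact formulation) is Monte Carlo dropout: keep dropout active at test time, run several stochastic forward passes, and read the spread of the predictions as the uncertainty. A sketch with a trivial stand-in network:

```python
import random
import statistics

# Stand-in "network": two feature contributions, each randomly dropped,
# with inverted-dropout rescaling so the expected output is unchanged.
random.seed(0)

def stochastic_forward(x, drop_p=0.5):
    feats = [0.6 * x, 0.4 * x]
    return sum(f / (1.0 - drop_p) for f in feats if random.random() > drop_p)

def mc_estimate(x, passes=200):
    # Mean of the stochastic passes is the prediction; their spread is
    # the epistemic-uncertainty readout.
    preds = [stochastic_forward(x) for _ in range(passes)]
    return statistics.mean(preds), statistics.stdev(preds)

mean_pred, epistemic = mc_estimate(1.0)
```

Aleatoric uncertainty, by contrast, is typically predicted directly as an extra network output (e.g. a per-box variance), which is why the two kinds of uncertainty respond to different factors in the experiments above.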

A Review on Image- and Network-based Brain Data Analysis Techniques for Alzheimer’s Disease Diagnosis Reveals a Gap in Developing Predictive Methods for Prognosis

Title A Review on Image- and Network-based Brain Data Analysis Techniques for Alzheimer’s Disease Diagnosis Reveals a Gap in Developing Predictive Methods for Prognosis
Authors Mayssa Soussia, Islem Rekik
Abstract Unveiling pathological brain changes associated with Alzheimer’s disease (AD) is a challenging task, especially because people do not show symptoms of dementia until late in the disease’s course. Over the past years, neuroimaging techniques have paved the way for computer-based diagnosis and prognosis to facilitate the automation of medical decision support and help clinicians identify cognitively intact subjects who are at high risk of developing AD. Because AD is a progressive neurodegenerative disorder, researchers have investigated how it affects the brain using two broad approaches: 1) image-based methods, where neuroimaging modalities are used mainly to provide early AD biomarkers, and 2) network-based methods, which focus on functional and structural brain connectivity to give insights into how AD alters brain wiring. In this study, we reviewed neuroimaging-based technical methods developed for AD and mild cognitive impairment (MCI) classification and prediction tasks, selected by screening all MICCAI proceedings published between 2010 and 2016. We included papers that fit into the image-based or network-based categories. The majority of papers focused on classifying MCI vs. AD brain states, which has enabled the discovery of discriminative or altered brain regions and connections. However, very few works aimed to predict MCI progression based on early neuroimaging-based observations. Despite the high importance of reliably identifying which early MCI patients will convert to AD, remain stable, or revert to normal over months or years, predictive models are still lagging behind.
Tasks
Published 2018-08-06
URL http://arxiv.org/abs/1808.01951v1
PDF http://arxiv.org/pdf/1808.01951v1.pdf
PWC https://paperswithcode.com/paper/a-review-on-image-and-network-based-brain
Repo
Framework

Joint CS-MRI Reconstruction and Segmentation with a Unified Deep Network

Title Joint CS-MRI Reconstruction and Segmentation with a Unified Deep Network
Authors Liyan Sun, Zhiwen Fan, Yue Huang, Xinghao Ding, John Paisley
Abstract The need for fast acquisition and automatic analysis of MRI data is growing in the age of big data. Although compressed sensing magnetic resonance imaging (CS-MRI) has been studied to accelerate MRI by reducing k-space measurements, current CS-MRI techniques overlook downstream MRI applications such as segmentation when doing image reconstruction. In this paper, we test the utility of CS-MRI methods in automatic segmentation models and propose a unified deep neural network architecture called SegNetMRI, which we apply to the combined CS-MRI reconstruction and segmentation problem. SegNetMRI is built upon an MRI reconstruction network with multiple cascaded blocks, each containing an encoder-decoder unit and a data fidelity unit, and an MRI segmentation network having the same encoder-decoder structure. The two subnetworks are pre-trained and fine-tuned with shared reconstruction encoders. The outputs are merged into the final segmentation. Our experiments show that SegNetMRI can improve both the reconstruction and segmentation performance when using compressive measurements.
Tasks Image Reconstruction
Published 2018-05-06
URL http://arxiv.org/abs/1805.02165v1
PDF http://arxiv.org/pdf/1805.02165v1.pdf
PWC https://paperswithcode.com/paper/joint-cs-mri-reconstruction-and-segmentation
Repo
Framework
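The data fidelity unit in each cascaded block has a simple interpretation: wherever k-space was actually measured, overwrite the network's k-space with the measurements. A minimal sketch of that step alone, with a toy fully-sampled example (the encoder-decoder denoising part is omitted):

```python
import numpy as np

def data_fidelity(recon, measured_kspace, mask):
    """Enforce consistency with measured k-space samples."""
    k = np.fft.fft2(recon)
    k[mask] = measured_kspace[mask]  # trust real measurements where we have them
    return np.fft.ifft2(k).real

# Toy example: with a fully sampled mask, the step reproduces the
# measured image exactly regardless of the network's input.
image = np.zeros((4, 4))
image[1, 2] = 1.0
kspace = np.fft.fft2(image)
mask = np.ones((4, 4), dtype=bool)
out = data_fidelity(np.zeros((4, 4)), kspace, mask)
```

In real CS-MRI the mask is sparse, so the network fills in the missing k-space lines while this step pins down the measured ones after every block.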

AutoGAN: Robust Classifier Against Adversarial Attacks

Title AutoGAN: Robust Classifier Against Adversarial Attacks
Authors Blerta Lindqvist, Shridatt Sugrim, Rauf Izmailov
Abstract Classifiers fail to classify correctly input images that have been purposefully and imperceptibly perturbed to cause misclassification. This susceptibility has been shown to be consistent across classifiers, regardless of their type, architecture or parameters. Common defenses against adversarial attacks modify the classifier boundary by training on additional adversarial examples created in various ways. In this paper, we introduce AutoGAN, which counters adversarial attacks by enhancing the lower-dimensional manifold defined by the training data and by projecting perturbed data points onto it. AutoGAN mitigates the need for knowing the attack type and magnitude as well as the need for having adversarial samples of the attack. Our approach uses a Generative Adversarial Network (GAN) with an autoencoder generator and a discriminator that also serves as a classifier. We test AutoGAN against adversarial samples generated with the state-of-the-art Fast Gradient Sign Method (FGSM) as well as samples generated with random Gaussian noise, both using the MNIST dataset. For different magnitudes of perturbation in training and testing, AutoGAN can surpass the accuracy of the FGSM method by up to 25 percentage points on samples perturbed using FGSM. Without an augmented training dataset, AutoGAN achieves an accuracy of 89% compared to 1% achieved by the FGSM method on FGSM testing adversarial samples.
Tasks
Published 2018-12-08
URL http://arxiv.org/abs/1812.03405v1
PDF http://arxiv.org/pdf/1812.03405v1.pdf
PWC https://paperswithcode.com/paper/autogan-robust-classifier-against-adversarial
Repo
Framework
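The geometric intuition behind the defense can be shown with a toy linear case: if clean data lies on a low-dimensional manifold (here, a 1-D line in 2-D), projecting a perturbed point back onto that manifold removes the off-manifold component of the perturbation. AutoGAN learns this projection with an autoencoder generator; the linear projection below is only an illustrative stand-in:

```python
import numpy as np

# The "manifold": a unit direction in 2-D; clean points are multiples of it.
direction = np.array([1.0, 1.0]) / np.sqrt(2)

def project(x):
    # Orthogonal projection onto the line spanned by `direction`.
    return (x @ direction) * direction

clean = 3.0 * direction
# Perturbation chosen orthogonal to the manifold, so projection removes
# it entirely; a real adversarial perturbation would be mostly, not
# fully, off-manifold.
perturbed = clean + np.array([0.2, -0.2])
recovered = project(perturbed)
```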

Multi-Context Deep Network for Angle-Closure Glaucoma Screening in Anterior Segment OCT

Title Multi-Context Deep Network for Angle-Closure Glaucoma Screening in Anterior Segment OCT
Authors Huazhu Fu, Yanwu Xu, Stephen Lin, Damon Wing Kee Wong, Baskaran Mani, Meenakshi Mahesh, Tin Aung, Jiang Liu
Abstract A major cause of irreversible visual impairment is angle-closure glaucoma, which can be screened through imagery from Anterior Segment Optical Coherence Tomography (AS-OCT). Previous computational diagnostic techniques address this screening problem by extracting specific clinical measurements or handcrafted visual features from the images for classification. In this paper, we instead propose to learn from training data a discriminative representation that may capture subtle visual cues not modeled by predefined features. Based on clinical priors, we formulate this learning with a proposed Multi-Context Deep Network (MCDN) architecture, in which parallel Convolutional Neural Networks are applied to particular image regions, at corresponding scales, known to be informative for clinically diagnosing angle-closure glaucoma. The output feature maps of the parallel streams are merged into a classification layer to produce the screening result. Moreover, we incorporate estimated clinical parameters to further enhance performance. On a clinical AS-OCT dataset, our system is validated through comparisons to previous screening methods.
Tasks
Published 2018-09-10
URL http://arxiv.org/abs/1809.03239v1
PDF http://arxiv.org/pdf/1809.03239v1.pdf
PWC https://paperswithcode.com/paper/multi-context-deep-network-for-angle-closure
Repo
Framework

Novelty-organizing team of classifiers in noisy and dynamic environments

Title Novelty-organizing team of classifiers in noisy and dynamic environments
Authors Danilo Vasconcellos Vargas, Hirotaka Takano, Junichi Murata
Abstract In the real world, the environment is constantly changing and the input variables are subject to noise. However, few algorithms have been shown to work under those circumstances. Here, Novelty-Organizing Team of Classifiers (NOTC) is applied to the continuous-action mountain car problem as well as two variations of it: a noisy mountain car and an unstable-weather mountain car. These problems take into account noise and changing problem dynamics, respectively. Moreover, NOTC is compared with NeuroEvolution of Augmenting Topologies (NEAT) on these problems, revealing a trade-off between the approaches: while NOTC achieves the best performance in all of the problems, NEAT needs fewer trials to converge. It is demonstrated that NOTC achieves better performance because of its division of the input space (creating easier subproblems). This division of the input space, however, also requires some time to bootstrap.
Tasks
Published 2018-09-19
URL http://arxiv.org/abs/1809.07098v1
PDF http://arxiv.org/pdf/1809.07098v1.pdf
PWC https://paperswithcode.com/paper/novelty-organizing-team-of-classifiers-in
Repo
Framework

Debunking Fake News One Feature at a Time

Title Debunking Fake News One Feature at a Time
Authors Melanie Tosik, Antonio Mallia, Kedar Gangopadhyay
Abstract Identifying the stance of a news article body with respect to a certain headline is the first step to automated fake news detection. In this paper, we introduce a 2-stage ensemble model to solve the stance detection task. By using only hand-crafted features as input to a gradient boosting classifier, we are able to achieve a score of 9161.5 out of 11651.25 (78.63%) on the official Fake News Challenge (Stage 1) dataset. We identify the most useful features for detecting fake news and discuss how sampling techniques can be used to improve recall accuracy on a highly imbalanced dataset.
Tasks Fake News Detection, Stance Detection
Published 2018-08-08
URL http://arxiv.org/abs/1808.02831v1
PDF http://arxiv.org/pdf/1808.02831v1.pdf
PWC https://paperswithcode.com/paper/debunking-fake-news-one-feature-at-a-time
Repo
Framework
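The 9161.5-out-of-11651.25 figure uses the official FNC-1 weighted metric. As commonly described, a prediction earns 0.25 credit for getting related-vs-unrelated right, plus 0.75 more for the exact stance when the pair is related; a sketch of that scoring (the gold/predicted labels below are invented):

```python
# FNC-1 weighted scoring as commonly described; treat the exact weights
# as an assumption rather than a quote from this paper.
RELATED = {"agree", "disagree", "discuss"}

def fnc_score(gold, pred):
    score = 0.0
    for g, p in zip(gold, pred):
        if g == p:
            score += 0.25            # related-vs-unrelated correct
            if g in RELATED:
                score += 0.75        # exact stance also correct
        elif g in RELATED and p in RELATED:
            score += 0.25            # right side of the related split only
    return score

gold = ["agree", "unrelated", "discuss", "disagree"]
pred = ["agree", "unrelated", "disagree", "disagree"]
score = fnc_score(gold, pred)
```

This weighting is why the dataset's class imbalance matters: the easy "unrelated" majority class contributes only quarter-credit, so recall on the rarer related stances dominates the score.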

Detection and segmentation of the Left Ventricle in Cardiac MRI using Deep Learning

Title Detection and segmentation of the Left Ventricle in Cardiac MRI using Deep Learning
Authors Alexandre Attia, Sharone Dayan
Abstract Manual segmentation of the Left Ventricle (LV) is a tedious and meticulous task whose outcome can vary with the patient, the Magnetic Resonance Imaging (MRI) slices, and the experts. Even today, manual delineation by experts is considered the ground truth for cardiac diagnosis. We therefore review the paper by Avendi et al., which presents a combined approach using Convolutional Neural Networks, Stacked Auto-Encoders and Deformable Models to automate the segmentation while improving accuracy. Furthermore, we have implemented parts of the paper (around three quarters) and experimented with both the original method and slightly modified versions, changing the architecture and the parameters.
Tasks
Published 2018-01-07
URL http://arxiv.org/abs/1801.02171v1
PDF http://arxiv.org/pdf/1801.02171v1.pdf
PWC https://paperswithcode.com/paper/detection-and-segmentation-of-the-left
Repo
Framework

Credibility evaluation of income data with hierarchical correlation reconstruction

Title Credibility evaluation of income data with hierarchical correlation reconstruction
Authors Jarek Duda, Adam Szulc
Abstract In situations like tax declarations or analyses of household budgets, we would like to automatically evaluate the credibility of an exogenous variable (declared income) based on some available (endogenous) variables: we want to build a model, train it on a provided data sample, and predict the (conditional) probability distribution of the exogenous variable from the values of the endogenous variables. Using Polish household budget survey data, we discuss a simple and systematic adaptation of the hierarchical correlation reconstruction (HCR) technique for this purpose, which combines the interpretability of statistics with the modelling of complex densities familiar from machine learning. For credibility evaluation, we first normalize the marginal distribution of the predicted variable to a nearly uniform density ($\rho\approx 1$) on $[0,1]$ using the empirical distribution function ($x=\textrm{EDF}(y)\in[0,1]$), then model the density of its conditional distribution, $\textrm{Pr}(x_0|x_1,x_2,\ldots)$, as a linear combination of orthonormal polynomials whose coefficients are themselves modelled as linear combinations of features of the remaining variables. These coefficients can be calculated independently, have an interpretation similar to cumulants, and additionally allow direct reconstruction of the probability distribution. Values corresponding to high predicted density can be considered credible, while low density suggests disagreement with the statistics of the data sample and can be used, for example, to mark a chosen percentage of the least credible data points for manual verification.
Tasks
Published 2018-12-19
URL http://arxiv.org/abs/1812.08040v3
PDF http://arxiv.org/pdf/1812.08040v3.pdf
PWC https://paperswithcode.com/paper/credibility-evaluation-of-income-data-with
Repo
Framework
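The density-modelling step above is simpler than it sounds: after mapping data to $[0,1]$ with the EDF, the density is estimated as $1 + \sum_j a_j f_j(x)$ with orthonormal polynomials $f_j$, where each coefficient $a_j$ is just the sample mean of $f_j$. A sketch of the unconditional version of that step (the conditional modelling of coefficients from other variables is omitted):

```python
import numpy as np

# First two orthonormal (rescaled Legendre) polynomials on [0, 1].
def f1(x):
    return np.sqrt(3.0) * (2.0 * x - 1.0)

def f2(x):
    return np.sqrt(5.0) * (6.0 * x**2 - 6.0 * x + 1.0)

def hcr_density(samples, x):
    # Coefficients are plain averages of the basis functions over the data.
    a1 = f1(samples).mean()
    a2 = f2(samples).mean()
    return 1.0 + a1 * f1(x) + a2 * f2(x)

rng = np.random.default_rng(0)
uniform = rng.random(10_000)             # density should come out near 1
skewed = np.sqrt(rng.random(10_000))     # density ~ 2x: rises toward x = 1
```

Points landing where the estimated density is low are the ones flagged as disagreeing with the sample statistics, i.e. the least credible declarations.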

Developmental Bayesian Optimization of Black-Box with Visual Similarity-Based Transfer Learning

Title Developmental Bayesian Optimization of Black-Box with Visual Similarity-Based Transfer Learning
Authors Maxime Petit, Amaury Depierre, Xiaofang Wang, Emmanuel Dellandréa, Liming Chen
Abstract We present a developmental framework based on long-term memory and reasoning mechanisms (Vision Similarity and Bayesian Optimisation). This architecture allows a robot to autonomously optimize hyper-parameters that need to be tuned for any action and/or vision module, treated as a black box. The learning can take advantage of past experiences (stored in the episodic and procedural memories) to warm-start the exploration, using a set of hyper-parameters previously optimized for objects similar to the new, unknown one (stored in a semantic memory). As an example, the system has been used to optimize 9 continuous hyper-parameters of a professional software package (Kamido), both in simulation and with a real robot (an industrial Fanuc robotic arm), with a total of 13 different objects. The robot is able to find a good object-specific optimization in 68 (simulation) or 40 (real) trials. In simulation, we demonstrate the benefit of transfer learning based on visual similarity, as opposed to amnesic learning (i.e. learning from scratch every time). Moreover, with the real robot, we show that the method consistently outperforms manual optimization by an expert, with less than 2 hours of training time needed to achieve more than an 88% success rate.
Tasks Bayesian Optimisation, Transfer Learning
Published 2018-09-26
URL http://arxiv.org/abs/1809.10141v7
PDF http://arxiv.org/pdf/1809.10141v7.pdf
PWC https://paperswithcode.com/paper/developmental-bayesian-optimization-of-black
Repo
Framework

Rate-Accuracy Trade-Off In Video Classification With Deep Convolutional Neural Networks

Title Rate-Accuracy Trade-Off In Video Classification With Deep Convolutional Neural Networks
Authors Mohammad Jubran, Alhabib Abbas, Aaron Chadha, Yiannis Andreopoulos
Abstract Advanced video classification systems decode video frames to derive the necessary texture and motion representations for ingestion and analysis by spatio-temporal deep convolutional neural networks (CNNs). However, when considering visual Internet-of-Things applications, surveillance systems and semantic crawlers of large video repositories, the video capture and the CNN-based semantic analysis parts do not tend to be co-located. This necessitates the transport of compressed video over networks and incurs significant overhead in bandwidth and energy consumption, thereby significantly undermining the deployment potential of such systems. In this paper, we investigate the trade-off between the encoding bitrate and the achievable accuracy of CNN-based video classification models that directly ingest AVC/H.264 and HEVC encoded videos. Instead of retaining entire compressed video bitstreams and applying complex optical flow calculations prior to CNN processing, we only retain motion vector and select texture information at significantly reduced bitrates and apply no additional processing prior to CNN ingestion. Based on three CNN architectures and two action recognition datasets, we achieve 11%-94% savings in bitrate with marginal effect on classification accuracy. A model-based selection between multiple CNNs increases these savings further, to the point where, if up to 7% loss of accuracy can be tolerated, video classification can take place with as little as 3 kbps for the transport of the required compressed video information to the system implementing the CNN models.
Tasks Optical Flow Estimation, Temporal Action Localization, Video Classification
Published 2018-09-27
URL http://arxiv.org/abs/1810.03964v2
PDF http://arxiv.org/pdf/1810.03964v2.pdf
PWC https://paperswithcode.com/paper/rate-accuracy-trade-off-in-video
Repo
Framework