January 26, 2020

3656 words 18 mins read

Paper Group ANR 1519

Sparse and Low-Rank Tensor Regression via Parallel Proximal Method. Academic Performance Estimation with Attention-based Graph Convolutional Networks. Affordance Learning In Direct Perception for Autonomous Driving. Fast Video Crowd Counting with a Temporal Aware Network. Density-based Community Detection/Optimization. Feature-aware Adaptation and …

Sparse and Low-Rank Tensor Regression via Parallel Proximal Method

Title Sparse and Low-Rank Tensor Regression via Parallel Proximal Method
Authors Jiaqi Zhang, Beilun Wang
Abstract Motivated by applications in various scientific fields that require predicting the relationship between higher-order (tensor) features and a univariate response, we propose a \underline{S}parse and \underline{L}ow-rank \underline{T}ensor \underline{R}egression model (SLTR). This model enforces sparsity and low-rankness of the tensor coefficient by directly applying the $\ell_1$ norm and the tensor nuclear norm to it, respectively, so that (1) the structural information of the tensor is preserved and (2) the data interpretation is convenient. To make the solving procedure scalable and efficient, SLTR makes use of the proximal gradient method to optimize the two norm regularizers, which can easily be implemented in parallel. Additionally, a tighter convergence rate is proved for third-order tensor data. We evaluate SLTR on several simulated datasets and one fMRI dataset. Experimental results show that, compared with previous models, SLTR obtains a solution no worse than the others with much less time cost.
Tasks
Published 2019-11-29
URL https://arxiv.org/abs/1911.12965v1
PDF https://arxiv.org/pdf/1911.12965v1.pdf
PWC https://paperswithcode.com/paper/sparse-and-low-rank-tensor-regression-via
Repo
Framework
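
The parallel proximal idea in the SLTR abstract above boils down to two well-known proximal operators: soft-thresholding for the $\ell_1$ norm and singular-value thresholding for a nuclear norm. The sketch below is a minimal, hypothetical illustration of that update, assuming a mode-1 unfolding as a stand-in for the tensor nuclear norm and a simple averaging of the two branches; it is not the authors' implementation.

```python
import numpy as np

def prox_l1(X, lam):
    """Soft-thresholding: proximal operator of lam * ||X||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)

def prox_nuclear(M, lam):
    """Singular-value thresholding: proximal operator of lam * ||M||_* for a matrix."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

def parallel_prox_step(W, grad, step, lam1, lam2):
    """One hypothetical parallel-proximal step on a 3rd-order coefficient tensor W."""
    Z = W - step * grad                            # gradient step on the smooth loss
    P1 = prox_l1(Z, step * lam1)                   # sparsity branch
    mat = Z.reshape(Z.shape[0], -1)                # mode-1 unfolding as a nuclear-norm stand-in
    P2 = prox_nuclear(mat, step * lam2).reshape(Z.shape)
    return 0.5 * (P1 + P2)                         # average the two independent branches
```

The two branches touch the same gradient point and can be computed concurrently, which is where the "parallel" in the method's name comes from.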

Academic Performance Estimation with Attention-based Graph Convolutional Networks

Title Academic Performance Estimation with Attention-based Graph Convolutional Networks
Authors Qian Hu, Huzefa Rangwala
Abstract Students’ academic performance prediction empowers educational technologies including academic trajectory and degree planning, course recommender systems, and early warning and advising systems. Given a student’s past data (such as grades in prior courses), the task of student performance prediction is to predict the student’s grades in future courses. Academic programs are structured in a way that prior courses lay the foundation for future courses. The knowledge required by courses is obtained by taking multiple prior courses, which exhibits complex relationships that can be modeled by graph structures. Traditional methods for student performance prediction usually neglect the underlying relationships between multiple courses and how students acquire knowledge across them. In addition, traditional methods do not provide the interpretation of predictions needed for decision making. In this work, we propose a novel attention-based graph convolutional networks model for student performance prediction. We conduct extensive experiments on a real-world dataset obtained from a large public university. The experimental results show that our proposed model outperforms state-of-the-art approaches in terms of grade prediction. The proposed model also shows strong accuracy in identifying students who are at risk of failing or dropping out, so that timely intervention and feedback can be provided to the student.
Tasks Decision Making, Recommendation Systems
Published 2019-12-26
URL https://arxiv.org/abs/2001.00632v1
PDF https://arxiv.org/pdf/2001.00632v1.pdf
PWC https://paperswithcode.com/paper/academic-performance-estimation-with
Repo
Framework
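
The abstract above does not give the layer equations, so the following is a generic single-head graph attention layer (GAT-style) as one plausible reading of "attention-based graph convolution" over a course graph; the layer sizes and the way courses are encoded as nodes are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    """Single-head graph attention layer used as a generic stand-in for the
    attention-based graph convolution described above."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)   # shared node projection
        self.a = nn.Linear(2 * out_dim, 1, bias=False)    # attention scoring function

    def forward(self, x, adj):
        # x: (N, in_dim) node features (e.g., course/grade embeddings)
        # adj: (N, N) adjacency with self-loops; 1 where courses are related
        h = self.W(x)                                      # (N, out_dim)
        N = h.size(0)
        hi = h.unsqueeze(1).expand(N, N, -1)
        hj = h.unsqueeze(0).expand(N, N, -1)
        e = F.leaky_relu(self.a(torch.cat([hi, hj], dim=-1))).squeeze(-1)  # (N, N)
        e = e.masked_fill(adj == 0, float("-inf"))         # attend only to connected courses
        alpha = torch.softmax(e, dim=-1)                   # normalized attention weights
        return alpha @ h                                   # attention-weighted aggregation
```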

Affordance Learning In Direct Perception for Autonomous Driving

Title Affordance Learning In Direct Perception for Autonomous Driving
Authors Chen Sun, Jean M. Uwabeza Vianney, Dongpu Cao
Abstract Recent development in autonomous driving involves high-level computer vision and detailed road scene understanding. Today, most autonomous vehicles use a mediated perception approach for path planning and control, which relies heavily on high-definition 3D maps and real-time sensors. Recent research efforts aim to substitute massive HD maps with coarse road attributes. In this paper, we follow the direct perception based method to train a deep neural network for affordance learning in autonomous driving. Our goal in this work is to develop an affordance learning model based on freely available Google Street View panoramas and OpenStreetMap road vector attributes. Driving scene understanding can be achieved by learning affordances from the images captured by car-mounted cameras. Such scene understanding by learning affordances may be useful for corroborating base maps such as HD maps, so that the required data storage space is minimized and available for processing in real time. We experimentally compare the road attribute identification capability of human volunteers and our model. Our results indicate that this method could act as a cheaper way to collect training data for autonomous driving. The cross-validation results also indicate the effectiveness of our model.
Tasks Autonomous Driving, Autonomous Vehicles, Scene Understanding
Published 2019-03-20
URL http://arxiv.org/abs/1903.08746v1
PDF http://arxiv.org/pdf/1903.08746v1.pdf
PWC https://paperswithcode.com/paper/affordance-learning-in-direct-perception-for
Repo
Framework

Fast Video Crowd Counting with a Temporal Aware Network

Title Fast Video Crowd Counting with a Temporal Aware Network
Authors Xingjiao Wu, Baohan Xu, Yingbin Zheng, Hao Ye, Jing Yang, Liang He
Abstract Crowd counting aims to count the number of people instantaneously present in a crowded space, and many promising solutions have been proposed for single-image crowd counting. With video capture devices now ubiquitous in the public safety field, how to effectively apply crowd counting techniques to video content has become an urgent problem. In this paper, we introduce a novel framework based on temporal-aware modeling of the relationship between video frames. The proposed network contains a few dilated residual blocks, each of which consists of layers that compute temporal convolutions of features from adjacent frames to improve the prediction. To alleviate the expensive computation and satisfy the demand for fast video crowd counting, we also introduce a lightweight network to balance computational cost with representation ability. We conduct experiments on crowd counting benchmarks and demonstrate the framework’s superiority in terms of effectiveness and efficiency over previous video-based approaches.
Tasks Crowd Counting
Published 2019-07-04
URL https://arxiv.org/abs/1907.02198v2
PDF https://arxiv.org/pdf/1907.02198v2.pdf
PWC https://paperswithcode.com/paper/video-crowd-counting-via-dynamic-temporal
Repo
Framework
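
As a rough, hypothetical sketch of a "dilated residual block with temporal convolutions over adjacent frames", the block below applies a dilated 2D convolution per frame followed by a 1D temporal convolution across frames and a residual connection; the channel counts, fusion scheme, and lightweight variant are not specified in the abstract and are assumed here.

```python
import torch
import torch.nn as nn

class TemporalDilatedResidualBlock(nn.Module):
    """One dilated residual block with a temporal convolution over adjacent frames."""
    def __init__(self, channels, dilation=2):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation)
        self.temporal = nn.Conv3d(channels, channels, kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # x: (batch, channels, time, height, width) features from adjacent frames
        b, c, t, h, w = x.shape
        s = self.spatial(x.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w))   # per-frame dilated conv
        s = s.reshape(b, t, c, h, w).permute(0, 2, 1, 3, 4)
        out = self.temporal(self.relu(s))    # mix information across neighboring frames
        return self.relu(out + x)            # residual connection
```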

Density-based Community Detection/Optimization

Title Density-based Community Detection/Optimization
Authors Rui Portocarrero Sarmento
Abstract The use of modularity-based algorithms for community detection has been increasing in recent years. Modularity and its application have generated controversy, since some authors argue it is not a metric without disadvantages. It has been shown that algorithms that use modularity to detect communities suffer from a resolution limit and are therefore unable to identify small communities in some situations. In this work, we apply a density optimization to the communities found by the label propagation algorithm and study what happens to the modularity of the optimized results. We introduce a metric we call ADC (Average Density per Community) and use it to show that our optimization improves the community density obtained with benchmark algorithms. We also provide evidence that this optimization might not significantly alter the modularity of the resulting communities. Additionally, by using the SSC (Strongly Connected Components) concept, we developed a community detection algorithm that we also compare with the label propagation algorithm. These comparisons were executed on several test networks of different sizes. The results of the optimization algorithm proved to be interesting, and the results of the community detection algorithm turned out to be similar to those of the benchmark algorithm we used.
Tasks Community Detection
Published 2019-04-07
URL http://arxiv.org/abs/1904.12593v1
PDF http://arxiv.org/pdf/1904.12593v1.pdf
PWC https://paperswithcode.com/paper/190412593
Repo
Framework
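
The abstract defines ADC only by name (Average Density per Community). One plausible reading, sketched below with NetworkX, is the mean edge density of the subgraphs induced by each community; the exact definition used in the paper may differ.

```python
import networkx as nx

def average_density_per_community(G, communities):
    """ADC under one plausible reading: mean edge density of the induced subgraphs."""
    densities = [nx.density(G.subgraph(c)) for c in communities if len(c) > 1]
    return sum(densities) / len(densities) if densities else 0.0

# Example: communities found by label propagation, then scored with ADC.
G = nx.karate_club_graph()
communities = list(nx.algorithms.community.label_propagation_communities(G))
print(average_density_per_community(G, communities))
```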

Feature-aware Adaptation and Structured Density Alignment for Crowd Counting in Video Surveillance

Title Feature-aware Adaptation and Structured Density Alignment for Crowd Counting in Video Surveillance
Authors Junyu Gao, Qi Wang, Yuan Yuan
Abstract With the development of deep neural networks, the performance of crowd counting and pixel-wise density estimation is continually being refreshed. Despite this, there are still two challenging problems in this field: 1) current supervised learning needs a large amount of training data, but collecting and annotating these data is difficult; 2) existing methods cannot generalize well to unseen domains. A recently released synthetic crowd dataset alleviates these two problems. However, the domain gap between real-world data and synthetic images decreases the models’ performance. To reduce this gap, in this paper, we propose a domain-adaptation-style crowd counting method, which can effectively adapt a model from synthetic data to specific real-world scenes. It consists of Multi-level Feature-aware Adaptation (MFA) and Structured Density map Alignment (SDA). To be specific, MFA encourages the model to extract domain-invariant features from multiple layers. SDA guarantees that the network outputs fine density maps with a reasonable distribution on the real domain. Finally, we evaluate the proposed method on four mainstream surveillance crowd datasets: Shanghai Tech Part B, WorldExpo’10, Mall and UCSD. Extensive experiments show that our approach outperforms the state-of-the-art methods for the same cross-domain counting problem.
Tasks Crowd Counting, Density Estimation, Domain Adaptation
Published 2019-12-08
URL https://arxiv.org/abs/1912.03672v1
PDF https://arxiv.org/pdf/1912.03672v1.pdf
PWC https://paperswithcode.com/paper/feature-aware-adaptation-and-structured
Repo
Framework
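
The abstract names MFA and SDA but does not detail them. Below is a generic adversarial feature-alignment sketch (a domain discriminator trained through gradient reversal), which is one common way to make multi-level features domain-invariant; it should be read as an illustration of the general idea, not as the paper's MFA/SDA modules.

```python
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass, negated (scaled) gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class DomainDiscriminator(nn.Module):
    """Predicts synthetic vs. real from pooled feature maps; training it through
    gradient reversal pushes the backbone toward domain-invariant features."""
    def __init__(self, channels):
        super().__init__()
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(channels, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, feat, lam=1.0):
        return self.head(GradientReversal.apply(feat, lam))
```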

Universality of Computational Lower Bounds for Submatrix Detection

Title Universality of Computational Lower Bounds for Submatrix Detection
Authors Matthew Brennan, Guy Bresler, Wasim Huleihel
Abstract In the general submatrix detection problem, the task is to detect the presence of a small $k \times k$ submatrix with entries sampled from a distribution $\mathcal{P}$ in an $n \times n$ matrix of samples from $\mathcal{Q}$. This formulation includes a number of well-studied problems, such as biclustering when $\mathcal{P}$ and $\mathcal{Q}$ are Gaussians and the planted dense subgraph formulation of community detection when the submatrix is a principal minor and $\mathcal{P}$ and $\mathcal{Q}$ are Bernoulli random variables. These problems all seem to exhibit a universal phenomenon: there is a statistical-computational gap depending on $\mathcal{P}$ and $\mathcal{Q}$ between the minimum $k$ at which this task can be solved and the minimum $k$ at which it can be solved in polynomial time. Our main result is to tightly characterize this computational barrier as a tradeoff between $k$ and the KL divergences between $\mathcal{P}$ and $\mathcal{Q}$ through average-case reductions from the planted clique conjecture. These computational lower bounds hold given mild assumptions on $\mathcal{P}$ and $\mathcal{Q}$ arising naturally from classical binary hypothesis testing. Our results recover and generalize the planted clique lower bounds for Gaussian biclustering in Ma-Wu (2015) and Brennan et al. (2018) and for the sparse and general regimes of planted dense subgraph in Hajek et al. (2015) and Brennan et al. (2018). This yields the first universality principle for computational lower bounds obtained through average-case reductions.
Tasks Community Detection
Published 2019-02-19
URL https://arxiv.org/abs/1902.06916v3
PDF https://arxiv.org/pdf/1902.06916v3.pdf
PWC https://paperswithcode.com/paper/universality-of-computational-lower-bounds
Repo
Framework
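
For readers less familiar with the setup, the detection problem described above can be written as a binary hypothesis test on an $n \times n$ matrix $M$ of samples:

$$
H_0: \; M_{ij} \sim \mathcal{Q} \ \text{ i.i.d. for all } i, j \in [n], \qquad
H_1: \; M_{ij} \sim \mathcal{P} \ \text{ for } (i,j) \in S \times T, \quad M_{ij} \sim \mathcal{Q} \ \text{ otherwise},
$$

where $S, T \subseteq [n]$ are unknown index sets with $|S| = |T| = k$; in the planted dense subgraph instance the submatrix is principal, i.e. $S = T$, and $\mathcal{P}, \mathcal{Q}$ are Bernoulli.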

Handling temporality of clinical events with application to Adverse Drug Event detection in Electronic Health Records: A scoping review

Title Handling temporality of clinical events with application to Adverse Drug Event detection in Electronic Health Records: A scoping review
Authors Maria Bampa
Abstract The increased adoption of Electronic Health Records (EHRs) has brought changes to the way patient care is carried out. The rich heterogeneous and temporal data stored in EHRs can be leveraged by machine learning models to capture the underlying information and make clinically relevant predictions. This can be exploited to support public health activities such as pharmacovigilance and specifically to mitigate the public health issue of Adverse Drug Events (ADEs). The aim of this article is, therefore, to investigate the various ways of handling temporal data for the purpose of detecting ADEs. Based on a review of the existing literature, 11 articles from the last 10 years were chosen to be studied. According to the retrieved literature, the main methods fall into 5 different approaches: those based on temporal abstraction, graph-based methods, learning weights, and data tables containing time series of different lengths. EHRs are thus a valuable source that has driven current research toward the automatic detection of ADEs. Yet many challenges remain concerning the exploitation of the heterogeneous data types with temporal information included in EHRs for predicting ADEs.
Tasks Time Series
Published 2019-04-09
URL http://arxiv.org/abs/1904.04940v1
PDF http://arxiv.org/pdf/1904.04940v1.pdf
PWC https://paperswithcode.com/paper/handling-temporality-of-clinical-events-with
Repo
Framework

TensorSCONE: A Secure TensorFlow Framework using Intel SGX

Title TensorSCONE: A Secure TensorFlow Framework using Intel SGX
Authors Roland Kunkel, Do Le Quoc, Franz Gregor, Sergei Arnautov, Pramod Bhatotia, Christof Fetzer
Abstract Machine learning has become a critical component of modern data-driven online services. Typically, the training phase of machine learning techniques requires processing large-scale datasets, which may contain private and sensitive information of customers. This imposes significant security risks since modern online services rely on cloud computing to store and process the sensitive data. In this untrusted computing infrastructure, security becomes a paramount concern since customers need to trust the third-party cloud provider. Unfortunately, this trust has been violated multiple times in the past. To overcome the potential security risks in the cloud, we answer the following research question: how can we enable secure execution of machine learning computations in untrusted infrastructure? To achieve this goal, we propose a hardware-assisted approach based on Trusted Execution Environments (TEEs), specifically Intel SGX, to enable secure execution of machine learning computations over private and sensitive datasets. More specifically, we propose a generic and secure machine learning framework based on TensorFlow, which enables secure execution of existing applications on commodity untrusted infrastructure. In particular, we have built our system, called TensorSCONE, from the ground up by integrating TensorFlow with SCONE, a shielded execution framework based on Intel SGX. The main challenge of this work is to overcome the architectural limitations of Intel SGX in the context of building a secure TensorFlow system. Our evaluation shows that we achieve reasonable performance overheads while providing strong security properties with a low TCB.
Tasks
Published 2019-02-12
URL http://arxiv.org/abs/1902.04413v1
PDF http://arxiv.org/pdf/1902.04413v1.pdf
PWC https://paperswithcode.com/paper/tensorscone-a-secure-tensorflow-framework
Repo
Framework

Design of one-year mortality forecast at hospital admission based: a machine learning approach

Title Design of one-year mortality forecast at hospital admission based: a machine learning approach
Authors Vicent Blanes-Selva, Vicente Ruiz-García, Salvador Tortajada, José-Miguel Benedí, Bernardo Valdivieso, Juan M. García-Gómez
Abstract Background: Palliative care refers to a set of programs for patients who suffer from life-limiting illnesses. These programs aim to guarantee a minimum level of quality of life (QoL) for the last stage of life. They are currently based on clinical evaluation of the risk of one-year mortality. Objectives: The main objective of this work is to develop and validate machine-learning based models to predict the exitus of a patient within the next year using data gathered at hospital admission. Methods: Five machine learning techniques were applied in our study to develop machine-learning predictive models: Support Vector Machines, K-neighbors Classifier, Gradient Boosting Classifier, Random Forest and Multilayer Perceptron. All models were trained and evaluated using the retrospective dataset. The evaluation was performed with five metrics computed by a resampling strategy: Accuracy, the area under the ROC curve, Specificity, Sensitivity, and the Balanced Error Rate. Results: All models for forecasting one-year mortality achieved an AUC ROC from 0.858 to 0.911. Specifically, the Gradient Boosting Classifier was the best model, producing an AUC ROC of 0.911 (CI 95%, 0.911 to 0.912), a sensitivity of 0.858 (CI 95%, 0.856 to 0.860), a specificity of 0.807 (CI 95%, 0.806 to 0.808) and a BER of 0.168 (CI 95%, 0.167 to 0.169). Conclusions: The analysis of common information at hospital admission combined with machine learning techniques produced models with competitive discriminative power. Our models reach the best results reported in the state of the art. These results demonstrate that they can be used as an accurate data-driven inclusion criterion for palliative care.
Tasks
Published 2019-07-22
URL https://arxiv.org/abs/1907.09474v1
PDF https://arxiv.org/pdf/1907.09474v1.pdf
PWC https://paperswithcode.com/paper/design-of-one-year-mortality-forecast-at
Repo
Framework
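
The evaluation protocol described in the abstract (several classifiers, resampling, AUC/sensitivity/specificity/BER) maps directly onto a standard scikit-learn cross-validation loop. The snippet below is a minimal sketch on toy data, not the authors' pipeline; the feature matrix, labels, and resampling scheme are placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import make_scorer, recall_score
from sklearn.model_selection import cross_validate

# Toy stand-in for admission features X and one-year mortality labels y.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = rng.integers(0, 2, size=500)

scoring = {
    "auc": "roc_auc",
    "sensitivity": make_scorer(recall_score),               # recall of the positive class
    "specificity": make_scorer(recall_score, pos_label=0),  # recall of the negative class
    "balanced_accuracy": "balanced_accuracy",               # BER = 1 - balanced accuracy
}
scores = cross_validate(GradientBoostingClassifier(random_state=0), X, y, cv=5, scoring=scoring)
for name in scoring:
    vals = scores[f"test_{name}"]
    print(f"{name}: {vals.mean():.3f} +/- {vals.std():.3f}")
```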

Accurate Automatic Segmentation of Amygdala Subnuclei and Modeling of Uncertainty via Bayesian Fully Convolutional Neural Network

Title Accurate Automatic Segmentation of Amygdala Subnuclei and Modeling of Uncertainty via Bayesian Fully Convolutional Neural Network
Authors Yilin Liu, Gengyan Zhao, Brendon M. Nacewicz, Nagesh Adluru, Gregory R. Kirk, Peter A Ferrazzano, Martin Styner, Andrew L. Alexander
Abstract Recent advances in deep learning have improved the segmentation accuracy of subcortical brain structures, which would be useful in neuroimaging studies of many neurological disorders. However, most of the previous deep learning work does not investigate the specific difficulties that exist in segmenting extremely small but important brain regions such as the amygdala and its subregions. To tackle this challenging task, a novel 3D Bayesian fully convolutional neural network was developed that applies a dilated dual-pathway approach, retaining fine details and utilizing both local and more global contextual information to automatically segment the amygdala and its subregions at high precision. The proposed method provides insights on network design and sampling strategies that target segmentation of small 3D structures. In particular, this study confirms that a large context, enabled by a large field of view, is beneficial for segmenting small objects; furthermore, precise contextual information enabled by dilated convolutions allows for better boundary localization, which is critical for examining the morphology of the structure. In addition, it is demonstrated that the uncertainty information estimated from our network may be leveraged to identify atypicality in data. Our method was compared with two state-of-the-art deep learning models and a traditional multi-atlas approach, and exhibited excellent performance as measured by both Dice overlap and average symmetric surface distance. To the best of our knowledge, this work is the first deep learning-based approach that targets the subregions of the amygdala.
Tasks
Published 2019-02-19
URL http://arxiv.org/abs/1902.07289v1
PDF http://arxiv.org/pdf/1902.07289v1.pdf
PWC https://paperswithcode.com/paper/accurate-automatic-segmentation-of-amygdala
Repo
Framework
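
The abstract does not specify how the Bayesian uncertainty is computed. A common approximation, sketched below, is Monte Carlo dropout: keep dropout active at inference over a dilated 3D convolutional block and use the per-voxel variance across stochastic passes as the uncertainty estimate. This is an illustrative assumption, not necessarily the authors' exact mechanism.

```python
import torch
import torch.nn as nn

class DilatedBlock3D(nn.Module):
    """3D dilated convolution block with dropout, as might appear in a dilated dual-pathway network."""
    def __init__(self, in_ch, out_ch, dilation):
        super().__init__()
        self.conv = nn.Conv3d(in_ch, out_ch, 3, padding=dilation, dilation=dilation)
        self.drop = nn.Dropout3d(p=0.2)

    def forward(self, x):
        return self.drop(torch.relu(self.conv(x)))

def mc_dropout_predict(model, volume, n_samples=10):
    """Mean prediction and per-voxel variance from repeated stochastic forward passes."""
    model.train()                      # keep dropout stochastic at inference time
    with torch.no_grad():
        preds = torch.stack([model(volume) for _ in range(n_samples)])
    return preds.mean(0), preds.var(0)
```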

Automated Segmentation for Hyperdense Middle Cerebral Artery Sign of Acute Ischemic Stroke on Non-Contrast CT Images

Title Automated Segmentation for Hyperdense Middle Cerebral Artery Sign of Acute Ischemic Stroke on Non-Contrast CT Images
Authors Jia You, Philip L. H. Yu, Anderson C. O. Tsang, Eva L. H. Tsui, Pauline P. S. Woo, Gilberto K. K. Leung
Abstract The hyperdense middle cerebral artery (MCA) dot sign has been reported as an important factor in the diagnosis of acute ischemic stroke due to large vessel occlusion. Interpreting the initial CT brain scan in these patients requires a high level of expertise and has high inter-observer variability. An automated computerized interpretation of the urgent CT brain image, with an emphasis on picking up early signs of ischemic stroke, will facilitate early patient diagnosis and triage, and shorten the door-to-revascularization time for this group of patients. In this paper, we present an automated detection method for segmenting the MCA dot sign on non-contrast CT brain image scans based on a powerful deep learning technique.
Tasks
Published 2019-05-22
URL https://arxiv.org/abs/1905.09049v1
PDF https://arxiv.org/pdf/1905.09049v1.pdf
PWC https://paperswithcode.com/paper/automated-segmentation-for-hyperdense-middle
Repo
Framework

Gated Convolutional Neural Networks for Domain Adaptation

Title Gated Convolutional Neural Networks for Domain Adaptation
Authors Avinash Madasu, Vijjini Anvesh Rao
Abstract Domain adaptation explores the idea of how to maximize performance on a target domain, distinct from the source domain upon which the classifier was trained. This idea has been explored extensively for the task of sentiment analysis. Training on reviews pertaining to one domain and evaluating on another domain is widely studied for modeling a domain-independent algorithm, and further helps in understanding the correlation between domains. In this paper, we show that Gated Convolutional Neural Networks (GCN) perform effectively at learning sentiment analysis in a manner where domain-dependent knowledge is filtered out by their gates. We perform our experiments on multiple gate architectures: Gated Tanh ReLU Unit (GTRU), Gated Tanh Unit (GTU) and Gated Linear Unit (GLU). Extensive experimentation on two standard datasets relevant to the task reveals that training with Gated Convolutional Neural Networks gives significantly better performance on target domains than regular convolutional and recurrent architectures. While complex architectures like attention also filter domain-specific knowledge, their complexity is remarkably high compared to gated architectures. GCNs rely on convolution, hence gaining an upper hand through parallelization.
Tasks Domain Adaptation, Sentiment Analysis
Published 2019-05-16
URL https://arxiv.org/abs/1905.06906v1
PDF https://arxiv.org/pdf/1905.06906v1.pdf
PWC https://paperswithcode.com/paper/gated-convolutional-neural-networks-for
Repo
Framework
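
The three gates named in the abstract (GLU, GTU, GTRU) differ only in the activation applied to the content branch before it is multiplied by a sigmoid gate. A minimal PyTorch sketch, with kernel size and channel counts as assumptions:

```python
import torch
import torch.nn as nn

class GatedConv1d(nn.Module):
    """Gated 1D convolution over token embeddings.
    mode selects the gate: GLU = linear * sigmoid, GTU = tanh * sigmoid, GTRU = relu * sigmoid."""
    def __init__(self, in_ch, out_ch, kernel_size=3, mode="GLU"):
        super().__init__()
        pad = kernel_size // 2
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad)  # content branch
        self.gate = nn.Conv1d(in_ch, out_ch, kernel_size, padding=pad)  # gating branch
        self.mode = mode

    def forward(self, x):
        # x: (batch, channels, sequence_length)
        a = self.conv(x)
        if self.mode == "GTU":
            a = torch.tanh(a)
        elif self.mode == "GTRU":
            a = torch.relu(a)
        # GLU leaves the content branch linear
        return a * torch.sigmoid(self.gate(x))
```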

Stochastic tissue window normalization of deep learning on computed tomography

Title Stochastic tissue window normalization of deep learning on computed tomography
Authors Yuankai Huo, Yucheng Tang, Yunqiang Chen, Dashan Gao, Shizhong Han, Shunxing Bao, Smita De, James G. Terry, Jeffrey J. Carr, Richard G. Abramson, Bennett A. Landman
Abstract Tissue window filtering has been widely used in deep learning for computed tomography (CT) image analyses to improve training performance (e.g., soft tissue windows for abdominal CT). However, the effectiveness of tissue window normalization is questionable since the generalizability of the trained model might be harmed, especially when such models are applied to new cohorts with different CT reconstruction kernels, contrast mechanisms, dynamic variations in the acquisition, and physiological changes. We evaluate the effectiveness of training both with and without soft tissue window normalization on multi-site CT cohorts. Moreover, we propose a stochastic tissue window normalization (SWN) method to improve the generalizability of tissue window normalization. Unlike purely random sampling, the SWN method centers the randomization around the soft tissue window to maintain specificity for abdominal organs. To evaluate the performance of the different strategies, 80 training and 453 validation and testing scans from six datasets are employed to perform multi-organ segmentation using a standard 2D U-Net. The six datasets cover scenarios where the training and testing scans are from (1) the same scanner and same population, (2) the same CT contrast but different pathology, and (3) different CT contrast and pathology. The traditional soft tissue window and non-windowed approaches achieved better performance on (1). The proposed SWN achieved generally superior performance on (2) and (3), supported by statistical analyses, which indicates better generalizability for a trained model.
Tasks Computed Tomography (CT)
Published 2019-12-01
URL https://arxiv.org/abs/1912.00420v1
PDF https://arxiv.org/pdf/1912.00420v1.pdf
PWC https://paperswithcode.com/paper/stochastic-tissue-window-normalization-of
Repo
Framework
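
A minimal sketch of the SWN idea as described above: pick a window center and width randomly around a soft-tissue window, clip the Hounsfield units to that window, and rescale. The default window and jitter ranges below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def stochastic_window_normalize(hu, center=50.0, width=400.0,
                                center_jitter=30.0, width_jitter=100.0, rng=None):
    """Clip CT intensities (in HU) to a randomly perturbed soft-tissue window, then rescale to [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    c = center + rng.uniform(-center_jitter, center_jitter)   # randomized window center
    w = width + rng.uniform(-width_jitter, width_jitter)      # randomized window width
    lo, hi = c - w / 2.0, c + w / 2.0
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)

# Applied per training sample, e.g. inside a data-loading pipeline:
ct_slice = np.random.uniform(-1000, 1000, size=(512, 512))    # fake HU values for illustration
normalized = stochastic_window_normalize(ct_slice)
```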

Driver Behavior Analysis Using Lane Departure Detection Under Challenging Conditions

Title Driver Behavior Analysis Using Lane Departure Detection Under Challenging Conditions
Authors Luis Riera, Koray Ozcan, Jennifer Merickel, Mathew Rizzo, Soumik Sarkar, Anuj Sharma
Abstract In this paper, we present a novel model to detect lane regions and extract lane departure events (changes and incursions) from challenging, lower-resolution videos recorded with mobile cameras. Our algorithm uses a Mask-RCNN based lane detection model as a pre-processor. Recently, deep learning-based models have provided state-of-the-art technology for object detection combined with segmentation. Among the several deep learning architectures, convolutional neural networks (CNNs) have outperformed other machine learning models, especially for region proposal and object detection tasks. Recent development in object detection has been driven by the success of region proposal methods and region-based CNNs (R-CNNs). Our algorithm utilizes the lane segmentation mask for detection and a fixed-lag Kalman filter for tracking, rather than the usual approach of detecting lane lines from single video frames. The algorithm permits detection of driver lane departures into left or right lanes from continuous lane detections. Preliminary results show promise for robust detection of lane departure events. The overall sensitivity for lane departure events on our custom test dataset is 81.81%.
Tasks Lane Detection, Object Detection
Published 2019-05-31
URL https://arxiv.org/abs/1906.00093v1
PDF https://arxiv.org/pdf/1906.00093v1.pdf
PWC https://paperswithcode.com/paper/190600093
Repo
Framework
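
The abstract mentions tracking lane detections with a fixed-lag Kalman filter. As a simplified stand-in, the sketch below runs a plain constant-velocity Kalman filter over a scalar lane-offset measurement extracted from the segmentation mask; the fixed-lag smoothing step and the actual state definition are not specified in the abstract and are omitted here.

```python
import numpy as np

class ConstantVelocityKalman:
    """Constant-velocity Kalman filter over a scalar lane offset (simplified illustration)."""
    def __init__(self, dt=1 / 30, q=1e-3, r=1e-2):
        self.x = np.zeros(2)                        # state: [offset, offset_velocity]
        self.P = np.eye(2)
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition
        self.H = np.array([[1.0, 0.0]])             # only the offset is measured
        self.Q = q * np.eye(2)
        self.R = np.array([[r]])

    def update(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # correct with the offset measured from the lane segmentation mask
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]                            # smoothed lane offset
```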