May 6, 2019


Paper Group ANR 294



Multi-Atlas Segmentation with Joint Label Fusion of Osteoporotic Vertebral Compression Fractures on CT

Title Multi-Atlas Segmentation with Joint Label Fusion of Osteoporotic Vertebral Compression Fractures on CT
Authors Yinong Wang, Jianhua Yao, Holger R. Roth, Joseph E. Burns, Ronald M. Summers
Abstract The precise and accurate segmentation of the vertebral column is essential in the diagnosis and treatment of various orthopedic, neurological, and oncological traumas and pathologies. Segmentation is especially challenging in the presence of pathology such as vertebral compression fractures. In this paper, we propose a method to produce segmentations for osteoporotic compression fractured vertebrae by applying a multi-atlas joint label fusion technique for clinical CT images. A total of 170 thoracic and lumbar vertebrae were evaluated using atlases from five patients with varying degrees of spinal degeneration. In an osteoporotic cohort of bundled atlases, registration provided an average Dice coefficient and mean absolute surface distance of 92.7$\pm$4.5% and 0.32$\pm$0.13mm for osteoporotic vertebrae, respectively, and 90.9$\pm$3.0% and 0.36$\pm$0.11mm for compression fractured vertebrae.
Tasks
Published 2016-01-13
URL http://arxiv.org/abs/1601.03375v1
PDF http://arxiv.org/pdf/1601.03375v1.pdf
PWC https://paperswithcode.com/paper/multi-atlas-segmentation-with-joint-label
Repo
Framework
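The abstract's headline numbers are segmentation overlap metrics. As a quick reference, here is a minimal NumPy sketch of the Dice coefficient between two binary masks; the mask shapes and values below are invented for illustration, not taken from the paper's CT data:

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# two 6x6 squares, offset by one pixel in each direction
pred = np.zeros((10, 10), dtype=bool); pred[2:8, 2:8] = True
truth = np.zeros((10, 10), dtype=bool); truth[3:9, 3:9] = True
# intersection is a 5x5 square -> Dice = 2*25 / (36+36) ~ 0.694
print(round(dice_coefficient(pred, truth), 3))
```

The paper also reports mean absolute surface distance, which additionally requires extracting mask boundaries.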

Stratified Knowledge Bases as Interpretable Probabilistic Models (Extended Abstract)

Title Stratified Knowledge Bases as Interpretable Probabilistic Models (Extended Abstract)
Authors Ondrej Kuzelka, Jesse Davis, Steven Schockaert
Abstract In this paper, we advocate the use of stratified logical theories for representing probabilistic models. We argue that such encodings can be more interpretable than those obtained in existing frameworks such as Markov logic networks. Among others, this allows for the use of domain experts to improve learned models by directly removing, adding, or modifying logical formulas.
Tasks
Published 2016-11-18
URL http://arxiv.org/abs/1611.06174v1
PDF http://arxiv.org/pdf/1611.06174v1.pdf
PWC https://paperswithcode.com/paper/stratified-knowledge-bases-as-interpretable
Repo
Framework

Segmentation of Soft atherosclerotic plaques using active contour models

Title Segmentation of Soft atherosclerotic plaques using active contour models
Authors Muhammad Moazzam Jawaid
Abstract Detection of non-calcified plaques in the coronary tree is a challenging problem due to the nature of the comprising substances. Hard plaques are easily discernible in CTA data due to their apparent bright appearance, so many approaches have been proposed for the automatic segmentation of calcified plaques. In contrast, soft plaques show very small intensity differences with respect to the surrounding heart tissue and blood voxels, which makes their isolation and detection very difficult. This work aims to develop a framework for segmenting vulnerable plaques with minimal user dependency. In the first step, an automatic seed point is established based on the fact that the coronary artery behaves as a tubular structure through axial slices. In the following step, the behaviour of the contrast agent is modelled mathematically to reflect dye diffusion in the respective CTA volume. Based on the detected seed point and intensity behaviour, localized active contour segmentation is then applied to extract the complete coronary tree. Bidirectional segmentation avoids loss of coronary information due to the seed point location, while the auto-adjustment feature of the contour captures newly emerging branches. The medial axis of the extracted coronary tree is generated using the fast marching method to obtain a curved planar reformation for validating the contrast agent behaviour. The obtained coronary tree is to be evaluated for soft plaques in the second phase of this research.
Tasks
Published 2016-07-30
URL http://arxiv.org/abs/1608.00116v1
PDF http://arxiv.org/pdf/1608.00116v1.pdf
PWC https://paperswithcode.com/paper/segmentation-of-soft-atherosclerotic-plaques
Repo
Framework

Customized Facial Constant Positive Air Pressure (CPAP) Masks

Title Customized Facial Constant Positive Air Pressure (CPAP) Masks
Authors Matan Sela, Nadav Toledo, Yaron Honen, Ron Kimmel
Abstract Sleep apnea is a syndrome that is characterized by sudden breathing halts while sleeping. One of the common treatments involves wearing a mask that delivers continuous air flow into the nostrils so as to maintain a steady air pressure. These masks are designed for an average facial model and are often difficult to adjust due to poor fit to the actual patient. The incompatibility is characterized by gaps between the mask and the face, which deteriorate the impermeability of the mask and lead to air leakage. We suggest a fully automatic approach for designing a personalized nasal mask interface using a facial depth scan. The interfaces generated by the proposed method accurately fit the geometry of the scanned face, and are easy to manufacture. The proposed method utilizes cheap commodity depth sensors and 3D printing technologies to efficiently design and manufacture customized masks for patients suffering from sleep apnea.
Tasks
Published 2016-09-22
URL http://arxiv.org/abs/1609.07049v1
PDF http://arxiv.org/pdf/1609.07049v1.pdf
PWC https://paperswithcode.com/paper/customized-facial-constant-positive-air
Repo
Framework

OpenSalicon: An Open Source Implementation of the Salicon Saliency Model

Title OpenSalicon: An Open Source Implementation of the Salicon Saliency Model
Authors Christopher Thomas
Abstract In this technical report, we present our publicly downloadable implementation of the SALICON saliency model. At the time of this writing, SALICON is one of the top performing saliency models on the MIT 300 fixation prediction dataset which evaluates how well an algorithm is able to predict where humans would look in a given image. Recently, numerous models have achieved state-of-the-art performance on this benchmark, but none of the top 5 performing models (including SALICON) are available for download. To address this issue, we have created a publicly downloadable implementation of the SALICON model. It is our hope that our model will engender further research in visual attention modeling by providing a baseline for comparison of other algorithms and a platform for extending this implementation. The model we provide supports both training and testing, enabling researchers to quickly fine-tune the model on their own dataset. We also provide a pre-trained model and code for those users who only need to generate saliency maps for images without training their own model.
Tasks
Published 2016-06-01
URL http://arxiv.org/abs/1606.00110v1
PDF http://arxiv.org/pdf/1606.00110v1.pdf
PWC https://paperswithcode.com/paper/opensalicon-an-open-source-implementation-of
Repo
Framework

Accelerating a hybrid continuum-atomistic fluidic model with on-the-fly machine learning

Title Accelerating a hybrid continuum-atomistic fluidic model with on-the-fly machine learning
Authors David Stephenson, James R Kermode, Duncan A Lockerby
Abstract We present a hybrid continuum-atomistic scheme which combines molecular dynamics (MD) simulations with on-the-fly machine learning techniques for the accurate and efficient prediction of multiscale fluidic systems. By using a Gaussian process as a surrogate model for the computationally expensive MD simulations, we use Bayesian inference to predict the system behaviour at the atomistic scale, purely by consideration of the macroscopic inputs and outputs. Whenever the uncertainty of this prediction is greater than a predetermined acceptable threshold, a new MD simulation is performed to continually augment the database, which is never required to be complete. This provides a substantial enhancement to the current generation of hybrid methods, which often require many similar atomistic simulations to be performed, discarding information after it is used once. We apply our hybrid scheme to nano-confined unsteady flow through a high-aspect-ratio converging-diverging channel, and make comparisons between the new scheme and full MD simulations for a range of uncertainty thresholds and initial databases. For low thresholds, our hybrid solution is highly accurate, within the thermal noise of a full MD simulation. As the uncertainty threshold is raised, the accuracy of our scheme decreases and the computational speed-up increases (relative to a full MD simulation), enabling the compromise between precision and efficiency to be tuned. The speed-up of our hybrid solution ranges from an order of magnitude, with no initial database, to cases where an extensive initial database ensures no new MD simulations are required.
Tasks Bayesian Inference
Published 2016-03-15
URL http://arxiv.org/abs/1603.04628v1
PDF http://arxiv.org/pdf/1603.04628v1.pdf
PWC https://paperswithcode.com/paper/accelerating-a-hybrid-continuum-atomistic
Repo
Framework
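The core loop described above, querying a surrogate and falling back to the expensive simulation only when predictive uncertainty exceeds a threshold, can be sketched with scikit-learn's Gaussian process regressor. The toy objective, threshold, and query grid below are illustrative stand-ins, not the paper's MD setup:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulation(x):
    # stand-in for a costly MD run: some smooth scalar response
    return np.sin(3.0 * x) + 0.5 * x

threshold = 0.05                       # acceptable predictive std. dev.
X = np.array([[0.0], [2.0]])           # tiny initial database
y = expensive_simulation(X.ravel())

# optimizer=None freezes the kernel hyperparameters for a deterministic demo
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                              alpha=1e-6, optimizer=None)

queries = np.linspace(0.0, 2.0, 21).reshape(-1, 1)
n_sims = 0
for q in queries:
    gp.fit(X, y)
    _, std = gp.predict(q.reshape(1, -1), return_std=True)
    if std[0] > threshold:             # too uncertain: run the expensive model
        X = np.vstack([X, q.reshape(1, -1)])
        y = np.append(y, expensive_simulation(q[0]))
        n_sims += 1
print(f"expensive simulations run: {n_sims} of {len(queries)}")
```

Queries that coincide with or lie near database entries are answered by the surrogate alone, which is the source of the speed-up the abstract reports.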

Combat Models for RTS Games

Title Combat Models for RTS Games
Authors Alberto Uriarte, Santiago Ontañón
Abstract Game tree search algorithms, such as Monte Carlo Tree Search (MCTS), require access to a forward model (or “simulator”) of the game at hand. However, in some games such a forward model is not readily available. This paper presents three forward models for two-player attrition games, which we call “combat models”, and shows how they can be used to simulate combat in RTS games. We also show how these combat models can be learned from replay data. We use StarCraft as our application domain. We report experiments comparing how well our combat models predict combat outcomes, and their impact when used for tactical decisions during a real game.
Tasks Starcraft
Published 2016-05-17
URL http://arxiv.org/abs/1605.05305v1
PDF http://arxiv.org/pdf/1605.05305v1.pdf
PWC https://paperswithcode.com/paper/combat-models-for-rts-games
Repo
Framework
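The paper's three combat models are not reproduced here, but a classic Lanchester square-law attrition model conveys what such a forward model looks like: each side's losses scale with the opposing side's remaining strength. The force sizes and effectiveness values below are arbitrary:

```python
def simulate_attrition(a, b, alpha, beta, dt=0.01):
    """Euler integration of the Lanchester square law:
    da/dt = -beta * b,  db/dt = -alpha * a."""
    while a > 0 and b > 0:
        a, b = a - dt * beta * b, b - dt * alpha * a
    return ("A", a) if a > 0 else ("B", b)

# equal per-unit effectiveness: the larger force should win
winner, survivors = simulate_attrition(a=100, b=80, alpha=1.0, beta=1.0)
print(winner, round(survivors, 1))
```

Under the square law the invariant alpha*a^2 - beta*b^2 predicts the winner and roughly sqrt(100^2 - 80^2) = 60 survivors, which the simulation approximates.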

Theory and computer simulation of the moiré patterns in single-layer cylindrical particles

Title Theory and computer simulation of the moiré patterns in single-layer cylindrical particles
Authors Vladimir Saveljev, Irina Palchikova
Abstract Building on the theory for arbitrarily oriented surfaces, we develop the theory of the moiré effect for cylindrical single-layer objects in the paraxial approximation. Using dual grids, the moiré effect in plane gratings is simulated, as well as the near-axis moiré effect in cylinders, including chiral layouts. The results can be applied to graphene layers, to single-walled nanotubes, and to cylinders in general.
Tasks
Published 2016-10-12
URL http://arxiv.org/abs/1610.04156v1
PDF http://arxiv.org/pdf/1610.04156v1.pdf
PWC https://paperswithcode.com/paper/theory-and-computer-simulation-of-the-moire
Repo
Framework
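In the paraxial setting, the moiré pattern of two superposed parallel gratings is a beat at the difference of their spatial frequencies. A quick numerical check (the grating frequencies are chosen arbitrarily):

```python
import numpy as np

f1, f2, n = 60, 50, 1024                    # grating frequencies (cycles/window)
x = np.arange(n) / n
# multiplying two sinusoidal gratings yields components at f1-f2 and f1+f2
pattern = np.cos(2 * np.pi * f1 * x) * np.cos(2 * np.pi * f2 * x)
spectrum = np.abs(np.fft.rfft(pattern))
moire_freq = int(np.argmax(spectrum[:30]))  # dominant low-frequency fringe
print(moire_freq)  # → 10, i.e. |f1 - f2|
```

The visible moiré fringes correspond to the slow |f1 - f2| component; the f1 + f2 component is too fine to resolve by eye.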

Frequency estimation in three-phase power systems with harmonic contamination: A multistage quaternion Kalman filtering approach

Title Frequency estimation in three-phase power systems with harmonic contamination: A multistage quaternion Kalman filtering approach
Authors Sayed Pouria Talebi, Danilo P. Mandic
Abstract Motivated by the need for accurate frequency information, a novel algorithm for estimating the fundamental frequency and its rate of change in three-phase power systems is developed. This is achieved through two stages of Kalman filtering. In the first stage a quaternion extended Kalman filter, which provides a unified framework for joint modeling of voltage measurements from all the phases, is used to estimate the instantaneous phase increment of the three-phase voltages. The phase increment estimates are then used as observations of the extended Kalman filter in the second stage that accounts for the dynamic behavior of the system frequency and simultaneously estimates the fundamental frequency and its rate of change. The framework is then extended to account for the presence of harmonics. Finally, the concept is validated through simulation on both synthetic and real-world data.
Tasks
Published 2016-03-08
URL http://arxiv.org/abs/1603.02977v1
PDF http://arxiv.org/pdf/1603.02977v1.pdf
PWC https://paperswithcode.com/paper/frequency-estimation-in-three-phase-power
Repo
Framework
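The two-stage quaternion filter itself is beyond a short snippet, but the underlying idea, recovering frequency from per-sample phase increments of a rotating phasor built from all three phases, can be shown with a plain Clarke-transform estimator on a noiseless balanced system. The sampling rate and frequency below are illustrative:

```python
import numpy as np

fs, f0, n = 1000.0, 50.0, 200          # sampling rate, true frequency, samples
t = np.arange(n) / fs
va = np.cos(2 * np.pi * f0 * t)
vb = np.cos(2 * np.pi * f0 * t - 2 * np.pi / 3)
vc = np.cos(2 * np.pi * f0 * t + 2 * np.pi / 3)

# complex Clarke transform: a balanced system maps to the phasor exp(j*w*t)
a = np.exp(2j * np.pi / 3)
v = (2 / 3) * (va + a * vb + a**2 * vc)

# per-sample phase increment, then convert to Hz
phase_inc = np.angle(v[1:] * np.conj(v[:-1]))
f_est = np.mean(phase_inc) * fs / (2 * np.pi)
print(round(f_est, 3))  # ≈ 50.0
```

The paper's contribution is handling unbalanced, noisy, harmonic-contaminated conditions where this naive estimator breaks down, which is what the quaternion Kalman filtering stages provide.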

Multi-View Kernel Consensus For Data Analysis

Title Multi-View Kernel Consensus For Data Analysis
Authors Moshe Salhov, Ofir Lindenbaum, Yariv Aizenbud, Avi Silberschatz, Yoel Shkolnisky, Amir Averbuch
Abstract The input data feature set for many data-driven tasks is high-dimensional while the intrinsic dimension of the data is low. Data analysis methods aim to uncover the underlying low-dimensional structure imposed by the low-dimensional hidden parameters by utilizing distance metrics that consider the set of attributes as a single monolithic set. However, the transformation of the low-dimensional phenomena into the measured high-dimensional observations might distort the distance metric. This distortion can affect the desired estimated low-dimensional geometric structure. In this paper, we suggest utilizing the redundancy in the attribute domain by partitioning the attributes into multiple subsets we call views. The proposed methods utilize the agreement, also called consensus, between different views to extract valuable geometric information that unifies multiple views about the intrinsic relationships among several different observations. This unification enhances the information that a single view or a simple concatenation of views provides.
Tasks
Published 2016-06-28
URL http://arxiv.org/abs/1606.08819v2
PDF http://arxiv.org/pdf/1606.08819v2.pdf
PWC https://paperswithcode.com/paper/multi-view-kernel-consensus-for-data-analysis
Repo
Framework
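One simple way to realize the view-consensus idea (not necessarily the paper's exact construction) is to build a kernel per attribute subset and average them: the average of positive semi-definite kernels stays positive semi-definite, so spectral methods apply directly. The data and bandwidth below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 60, 8
x = rng.normal(size=(n, d))

def rbf_kernel(feats, sigma=1.0):
    """Gaussian (RBF) kernel matrix over the rows of feats."""
    sq = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

# partition the attributes into two "views" and form a consensus kernel
views = [x[:, :4], x[:, 4:]]
consensus = np.mean([rbf_kernel(v) for v in views], axis=0)

# symmetric PSD, so an eigendecomposition gives a spectral embedding
eigvals = np.linalg.eigvalsh(consensus)
print(eigvals.min() >= -1e-10)  # → True
```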

Clustering Mixed Datasets Using Homogeneity Analysis with Applications to Big Data

Title Clustering Mixed Datasets Using Homogeneity Analysis with Applications to Big Data
Authors Rajiv Sambasivan, Sourish Das
Abstract Datasets with a mixture of numerical and categorical attributes are routinely encountered in many application domains. In this work we examine an approach to clustering such datasets using homogeneity analysis. Homogeneity analysis determines a Euclidean representation of the data. This can be analyzed by leveraging the large body of tools and techniques for data with a Euclidean representation. Experiments conducted as part of this study suggest that this approach can be useful in the analysis and exploration of big datasets with a mixture of numerical and categorical attributes.
Tasks
Published 2016-08-17
URL http://arxiv.org/abs/1608.04961v3
PDF http://arxiv.org/pdf/1608.04961v3.pdf
PWC https://paperswithcode.com/paper/clustering-mixed-datasets-using-homogeneity
Repo
Framework
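Homogeneity analysis derives an optimized Euclidean representation; as a rough stand-in, the common recipe of one-hot encoding categorical attributes and z-scoring numeric ones already yields a Euclidean embedding that k-means can cluster. The toy dataset is invented for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

# toy mixed dataset: one numeric column, one categorical column
numeric = np.array([0.1, 0.2, 0.15, 5.0, 5.2, 4.9])
categorical = np.array(["red", "red", "red", "blue", "blue", "blue"])

# one-hot encode the categorical attribute and z-score the numeric one
cats = sorted(set(categorical))
onehot = np.array([[c == v for v in cats] for c in categorical], dtype=float)
z = (numeric - numeric.mean()) / numeric.std()
features = np.column_stack([z, onehot])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)
```

The point of homogeneity analysis is to replace this ad-hoc encoding with category quantifications optimized from the data itself.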

Weakly Supervised Learning of Heterogeneous Concepts in Videos

Title Weakly Supervised Learning of Heterogeneous Concepts in Videos
Authors Sohil Shah, Kuldeep Kulkarni, Arijit Biswas, Ankit Gandhi, Om Deshmukh, Larry Davis
Abstract Typical textual descriptions that accompany online videos are ‘weak’: i.e., they mention the main concepts in the video but not their corresponding spatio-temporal locations. The concepts in the description are typically heterogeneous (e.g., objects, persons, actions). Certain location constraints on these concepts can also be inferred from the description. The goal of this paper is to present a generalization of the Indian Buffet Process (IBP) that can (a) systematically incorporate heterogeneous concepts in an integrated framework, and (b) enforce location constraints, for efficient classification and localization of the concepts in the videos. Finally, we develop posterior inference for the proposed formulation using mean-field variational approximation. Comparative evaluations on the Casablanca and the A2D datasets show that the proposed approach significantly outperforms other state-of-the-art techniques: 24% relative improvement for pairwise concept classification in the Casablanca dataset and 9% relative improvement for localization in the A2D dataset as compared to the most competitive baseline.
Tasks
Published 2016-07-12
URL http://arxiv.org/abs/1607.03240v1
PDF http://arxiv.org/pdf/1607.03240v1.pdf
PWC https://paperswithcode.com/paper/weakly-supervised-learning-of-heterogeneous
Repo
Framework

Unsupervised learning of transcriptional regulatory networks via latent tree graphical models

Title Unsupervised learning of transcriptional regulatory networks via latent tree graphical models
Authors Anthony Gitter, Furong Huang, Ragupathyraj Valluvan, Ernest Fraenkel, Animashree Anandkumar
Abstract Gene expression is a readily-observed quantification of transcriptional activity and cellular state that enables the recovery of the relationships between regulators and their target genes. Reconstructing transcriptional regulatory networks from gene expression data is a problem that has attracted much attention, but previous work often makes the simplifying (but unrealistic) assumption that regulator activity is represented by mRNA levels. We use a latent tree graphical model to analyze gene expression without relying on transcription factor expression as a proxy for regulator activity. The latent tree model is a type of Markov random field that includes both observed gene variables and latent (hidden) variables, which factorize on a Markov tree. Through efficient unsupervised learning approaches, we determine which groups of genes are co-regulated by hidden regulators and the activity levels of those regulators. Post-processing annotates many of these discovered latent variables as specific transcription factors or groups of transcription factors. Other latent variables do not necessarily represent physical regulators but instead reveal hidden structure in the gene expression such as shared biological function. We apply the latent tree graphical model to a yeast stress response dataset. In addition to novel predictions, such as condition-specific binding of the transcription factor Msn4, our model recovers many known aspects of the yeast regulatory network. These include groups of co-regulated genes, condition-specific regulator activity, and combinatorial regulation among transcription factors. The latent tree graphical model is a general approach for analyzing gene expression data that requires no prior knowledge of which possible regulators exist, regulator activity, or where transcription factors physically bind.
Tasks
Published 2016-09-20
URL http://arxiv.org/abs/1609.06335v1
PDF http://arxiv.org/pdf/1609.06335v1.pdf
PWC https://paperswithcode.com/paper/unsupervised-learning-of-transcriptional
Repo
Framework

Bias Correction for Regularized Regression and its Application in Learning with Streaming Data

Title Bias Correction for Regularized Regression and its Application in Learning with Streaming Data
Authors Qiang Wu
Abstract We propose an approach to reduce the bias of ridge regression and the regularization kernel network. When applied to a single data set, the new algorithms have comparable learning performance with the original ones. When applied to incremental learning with block-wise streaming data, the new algorithms are more efficient due to bias reduction. Both theoretical characterizations and simulation studies are used to verify the effectiveness of these new algorithms.
Tasks
Published 2016-03-15
URL http://arxiv.org/abs/1603.04882v1
PDF http://arxiv.org/pdf/1603.04882v1.pdf
PWC https://paperswithcode.com/paper/bias-correction-for-regularized-regression
Repo
Framework
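The bias the paper targets is the shrinkage that regularization introduces: ridge estimates are pulled toward zero relative to least squares. A small simulation makes this visible; the design, noise level, and penalty below are arbitrary, and the paper's correction algorithm is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 200, 5, 50.0
true_w = np.ones(d)
X = rng.normal(size=(n, d))
y = X @ true_w + 0.1 * rng.normal(size=n)

def ridge(X, y, lam):
    """Closed-form ridge estimate: (X'X + lam*I)^{-1} X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

w_ols = ridge(X, y, 0.0)     # lam = 0 recovers ordinary least squares
w_ridge = ridge(X, y, lam)
print(np.linalg.norm(w_ridge) < np.linalg.norm(w_ols))  # → True
```

With X'X close to n*I, the ridge coefficients are shrunk by roughly the factor n/(n + lam), which is the systematic bias a correction scheme tries to undo.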

Stratified Bayesian Optimization

Title Stratified Bayesian Optimization
Authors Saul Toscano-Palmerin, Peter I. Frazier
Abstract We consider derivative-free black-box global optimization of expensive noisy functions, when most of the randomness in the objective is produced by a few influential scalar random inputs. We present a new Bayesian global optimization algorithm, called Stratified Bayesian Optimization (SBO), which uses this strong dependence to improve performance. Our algorithm is similar in spirit to stratification, a technique from simulation, which uses strong dependence on a categorical representation of the random input to reduce variance. We demonstrate in numerical experiments that SBO outperforms state-of-the-art Bayesian optimization benchmarks that do not leverage this dependence.
Tasks
Published 2016-02-07
URL http://arxiv.org/abs/1602.02338v2
PDF http://arxiv.org/pdf/1602.02338v2.pdf
PWC https://paperswithcode.com/paper/stratified-bayesian-optimization
Repo
Framework
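SBO borrows its variance-reduction idea from stratified sampling in simulation. The following plain Monte Carlo comparison (toy integrand and arbitrary strata count, not the paper's algorithm) shows the effect stratification has on estimator variance:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda u: u ** 2          # integrand whose mean we estimate (true value 1/3)
n, strata, reps = 100, 10, 500

plain, strat = [], []
for _ in range(reps):
    # plain Monte Carlo: n i.i.d. uniform draws
    plain.append(f(rng.uniform(0, 1, n)).mean())
    # stratified: n/strata draws inside each equal-width stratum
    samples = np.concatenate([
        rng.uniform(k / strata, (k + 1) / strata, n // strata)
        for k in range(strata)
    ])
    strat.append(f(samples).mean())

print(np.var(plain) > np.var(strat))  # → True: stratification cuts variance
```

Stratification removes the between-stratum component of the variance, which is large exactly when the objective depends strongly on the stratified input, the regime SBO exploits.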