October 17, 2019

2891 words 14 mins read

Paper Group ANR 711

Application of Clinical Concept Embeddings for Heart Failure Prediction in UK EHR data

Title Application of Clinical Concept Embeddings for Heart Failure Prediction in UK EHR data
Authors Spiros Denaxas, Pontus Stenetorp, Sebastian Riedel, Maria Pikoula, Richard Dobson, Harry Hemingway
Abstract Electronic health records (EHR) are increasingly being used for constructing disease risk prediction models. Feature engineering in EHR data, however, is challenging due to their high-dimensional and heterogeneous nature. Low-dimensional representations of EHR data can potentially mitigate these challenges. In this paper, we use global vectors (GloVe) to learn word embeddings for diagnoses and procedures recorded using 13 million ontology terms across 2.7 million hospitalisations in national UK EHR data. We demonstrate the utility of these embeddings by evaluating their performance in identifying patients who are at higher risk of being hospitalised for congestive heart failure. Our findings indicate that embeddings can enable the creation of robust EHR-derived disease risk prediction models and address some of the limitations associated with manual clinical feature engineering.
Tasks Feature Engineering, Word Embeddings
Published 2018-11-23
URL http://arxiv.org/abs/1811.11005v2
PDF http://arxiv.org/pdf/1811.11005v2.pdf
PWC https://paperswithcode.com/paper/application-of-clinical-concept-embeddings
Repo
Framework
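
As a companion to the entry above, here is a minimal sketch of the evaluation step the abstract describes: turning per-patient code sequences into low-dimensional features via pre-trained concept embeddings and fitting a risk classifier. The embeddings, cohort, and codes below are randomly generated stand-ins, not the paper's GloVe vectors or UK EHR data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical pre-trained concept embeddings (e.g. learned with GloVe on
# co-occurring diagnosis/procedure codes), keyed by ontology term.
rng = np.random.default_rng(0)
vocab = [f"ICD10:{i}" for i in range(200)]
embeddings = {code: rng.normal(size=50) for code in vocab}

def patient_vector(codes, embeddings, dim=50):
    """Average the embeddings of a patient's recorded codes."""
    vecs = [embeddings[c] for c in codes if c in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

# Toy cohort: each patient is a list of codes plus a heart-failure label.
patients = [rng.choice(vocab, size=rng.integers(3, 30)).tolist() for _ in range(500)]
labels = rng.integers(0, 2, size=500)

X = np.stack([patient_vector(p, embeddings) for p in patients])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```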

Segmentation of Multiple Sclerosis lesion in brain MR images using Fuzzy C-Means

Title Segmentation of Multiple Sclerosis lesion in brain MR images using Fuzzy C-Means
Authors Saba Heidari Gheshlaghi, Abolfazl Madani, AmirAbolfazl Suratgar, Fardin Faraji
Abstract Magnetic resonance images (MRI) play an important role in supporting and substituting clinical information in the diagnosis of multiple sclerosis (MS) by presenting lesions in brain MR images. In this paper, an algorithm for MS lesion segmentation from brain MR images is presented. We revisit modifications of the fuzzy c-means algorithm and Canny edge detection. By modifying the fuzzy c-means clustering algorithm and applying the Canny contraction principle, a relationship between MS lesions and edge detection is established. For the special case of FCM, we derive a sufficient condition on the clustering parameters that allows identifying them as (local) minima of the objective function.
Tasks Edge Detection, Lesion Segmentation
Published 2018-04-10
URL http://arxiv.org/abs/1804.03282v1
PDF http://arxiv.org/pdf/1804.03282v1.pdf
PWC https://paperswithcode.com/paper/segmentation-of-multiple-sclerosis-lesion-in
Repo
Framework
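
The following sketch illustrates the two ingredients the abstract combines: a plain fuzzy c-means clustering of image intensities and a Canny edge map, intersected as a crude lesion-candidate mask. It uses a generic test image and a textbook FCM update, not the paper's modified variant or MR data.

```python
import numpy as np
from skimage import data, feature

def fuzzy_c_means(x, c=3, m=2.0, n_iter=30, seed=0):
    """Plain fuzzy c-means on a 1-D array of intensities (not the paper's
    modified variant). Returns cluster centres and the membership matrix."""
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(c), size=x.size)           # memberships, shape (N, c)
    for _ in range(n_iter):
        um = u ** m
        centres = (um.T @ x) / um.sum(axis=0)            # weighted cluster centres
        dist = np.abs(x[:, None] - centres[None, :]) + 1e-9
        u = 1.0 / dist ** (2 / (m - 1))                  # standard membership update
        u /= u.sum(axis=1, keepdims=True)
    return centres, u

image = data.camera() / 255.0             # stand-in for a brain MR slice
centres, u = fuzzy_c_means(image.ravel())
labels = u.argmax(axis=1).reshape(image.shape)

# Keep the brightest cluster as a crude lesion-candidate mask,
# then intersect it with Canny edges.
lesion_mask = labels == centres.argmax()
edges = feature.canny(image, sigma=2.0)
candidate = lesion_mask & edges
print("candidate edge pixels:", candidate.sum())
```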

Dynamic Pricing with Finitely Many Unknown Valuations

Title Dynamic Pricing with Finitely Many Unknown Valuations
Authors Nicolò Cesa-Bianchi, Tommaso Cesari, Vianney Perchet
Abstract Motivated by posted price auctions where buyers are grouped in an unknown number of latent types characterized by their private values for the good on sale, we investigate revenue maximization in stochastic dynamic pricing when the distribution of buyers’ private values is supported on an unknown set of points in [0,1] of unknown cardinality $K$. This setting can be viewed as an instance of a stochastic $K$-armed bandit problem where the location of the arms (the $K$ unknown valuations) must be learned as well. In the distribution-free case, we prove that our setting is just as hard as $K$-armed stochastic bandits: no algorithm can achieve a regret significantly better than $\sqrt{KT}$ (where $T$ is the time horizon); we present an efficient algorithm matching this lower bound up to logarithmic factors. In the distribution-dependent case, we show that for all $K>2$ our setting is strictly harder than $K$-armed stochastic bandits by proving that it is impossible to obtain regret bounds that grow logarithmically in time or slower. On the other hand, when a lower bound $\gamma>0$ on the smallest drop in the demand curve is known, we prove an upper bound on the regret of order $(1/\Delta+(\log \log T)/\gamma^2)(K\log T)$. This is a significant improvement on previously known regret bounds for discontinuous demand curves, which are at best of order $(K^{12}/\gamma^8)\sqrt{T}$. When $K=2$ in the distribution-dependent case, the hardness of our setting reduces to that of a stochastic $2$-armed bandit: we prove that an upper bound of order $(\log T)/\Delta$ (up to $\log\log$ factors) on the regret can be achieved with no information on the demand curve. Finally, we show a $O(\sqrt{T})$ upper bound on the regret for the setting in which the buyers’ decisions are nonstochastic, and the regret is measured with respect to the best of two fixed valuations, one of which is known to the seller.
Tasks
Published 2018-07-09
URL http://arxiv.org/abs/1807.03288v2
PDF http://arxiv.org/pdf/1807.03288v2.pdf
PWC https://paperswithcode.com/paper/dynamic-pricing-with-finitely-many-unknown
Repo
Framework
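
To make the bandit framing above concrete, here is a simplified stand-in: plain UCB1 over a fixed grid of posted prices, where each round a buyer with a random private valuation accepts the price if it is below their value. This is only an illustration of the posted-price bandit setting, not the paper's algorithm, which must also learn the locations of the $K$ valuations.

```python
import numpy as np

def posted_price_ucb(valuations, prices, T=20000, seed=0):
    """UCB1 over a fixed price grid. Revenue per round is price * accept,
    where a buyer accepts iff their private valuation >= posted price."""
    rng = np.random.default_rng(seed)
    K = len(prices)
    counts, means, total = np.zeros(K), np.zeros(K), 0.0
    for t in range(1, T + 1):
        if t <= K:                                    # play each price once
            arm = t - 1
        else:
            ucb = means + np.sqrt(2 * np.log(t) / counts)
            arm = int(np.argmax(ucb))
        v = rng.choice(valuations)                    # buyer's private valuation
        reward = prices[arm] * (v >= prices[arm])
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]
        total += reward
    return total

valuations = [0.3, 0.55, 0.8]                         # unknown support of size K
prices = np.linspace(0.05, 0.95, 19)                  # price grid the seller explores
print("cumulative revenue:", posted_price_ucb(valuations, prices))
```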

Quality assessment metrics for edge detection and edge-aware filtering: A tutorial review

Title Quality assessment metrics for edge detection and edge-aware filtering: A tutorial review
Authors Diana Sadykova, Alex Pappachen James
Abstract The quality assessment of edges in an image is an important topic as it helps to benchmark the performance of edge detectors and edge-aware filters that are used in a wide range of image processing tasks. The most popular image quality metrics, such as mean squared error (MSE), peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), are commonly used for assessing and justifying the quality of edges. However, they do not address the structural and functional accuracy of edges in images with a wide range of natural variabilities. In this review, we provide an overview of the most relevant performance metrics that can be used to benchmark the quality of edges in images. We identify four major groups of metrics and also provide a critical insight into the evaluation protocol and governing equations.
Tasks Edge Detection
Published 2018-01-01
URL http://arxiv.org/abs/1801.00454v1
PDF http://arxiv.org/pdf/1801.00454v1.pdf
PWC https://paperswithcode.com/paper/quality-assessment-metrics-for-edge-detection
Repo
Framework
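
For reference, the three baseline metrics the review contrasts (MSE, PSNR, SSIM) can be computed with scikit-image as below; the edge-specific metrics surveyed in the paper are not reproduced here, and the images are generic stand-ins.

```python
import numpy as np
from skimage import data, filters
from skimage.metrics import (mean_squared_error, peak_signal_noise_ratio,
                             structural_similarity)

reference = data.camera().astype(float) / 255.0
noisy = np.clip(reference + np.random.default_rng(0).normal(0, 0.05, reference.shape), 0, 1)

# Compare edge maps produced from the clean and noisy images.
ref_edges = filters.sobel(reference)
test_edges = filters.sobel(noisy)
rng_range = ref_edges.max() - ref_edges.min()

print("MSE :", mean_squared_error(ref_edges, test_edges))
print("PSNR:", peak_signal_noise_ratio(ref_edges, test_edges, data_range=rng_range))
print("SSIM:", structural_similarity(ref_edges, test_edges, data_range=rng_range))
```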

A Radiomics Approach to Traumatic Brain Injury Prediction in CT Scans

Title A Radiomics Approach to Traumatic Brain Injury Prediction in CT Scans
Authors Ezequiel de la Rosa, Diana M. Sima, Thijs Vande Vyvere, Jan S. Kirschke, Bjoern Menze
Abstract Computed Tomography (CT) is the gold standard technique for brain damage evaluation after acute Traumatic Brain Injury (TBI). It allows identification of most lesion types and determines the need for surgical or alternative therapeutic procedures. However, the traditional approach to lesion classification is restricted to visual image inspection. In this work, we characterize and predict TBI lesions by using CT-derived radiomics descriptors. Relevant shape, intensity and texture biomarkers characterizing the different lesions are isolated and a lesion predictive model is built using Partial Least Squares. On a dataset containing 155 scans (105 train, 50 test) the methodology achieved 89.7% accuracy on the unseen data. When a model was built using only texture features, an 88.2% accuracy was obtained. Our results suggest that selected radiomics descriptors could play a key role in brain injury prediction. Moreover, the proposed methodology comes close to reproducing radiologists’ decision making. These results open new possibilities for radiomics-inspired brain lesion detection, segmentation and prediction.
Tasks Decision Making, Injury Prediction
Published 2018-11-14
URL http://arxiv.org/abs/1811.05699v1
PDF http://arxiv.org/pdf/1811.05699v1.pdf
PWC https://paperswithcode.com/paper/a-radiomics-approach-to-traumatic-brain
Repo
Framework
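
The pipeline above (hand-crafted radiomics descriptors fed to a Partial Least Squares model) can be sketched as follows. The feature set is deliberately simplified to a few intensity/shape statistics, and the "CT lesion patches" and labels are random placeholders rather than the paper's data or pyradiomics features.

```python
import numpy as np
from scipy import stats
from sklearn.cross_decomposition import PLSRegression

def lesion_features(patch):
    """A few simple intensity/shape descriptors standing in for the
    paper's radiomics feature set (shape, intensity and texture)."""
    flat = patch.ravel()
    hist, _ = np.histogram(flat, bins=32, density=True)
    hist = hist[hist > 0]
    return np.array([
        flat.mean(), flat.std(), stats.skew(flat), stats.kurtosis(flat),
        -(hist * np.log(hist)).sum(),             # histogram entropy
        (patch > patch.mean()).sum(),             # crude "volume" proxy
    ])

# Hypothetical data: 155 CT lesion patches with binary lesion-type labels.
rng = np.random.default_rng(0)
patches = rng.normal(size=(155, 32, 32))
labels = rng.integers(0, 2, size=155).astype(float)

X = np.stack([lesion_features(p) for p in patches])
pls = PLSRegression(n_components=2).fit(X[:105], labels[:105])
pred = (pls.predict(X[105:]).ravel() > 0.5).astype(float)
print("held-out accuracy:", (pred == labels[105:]).mean())
```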

Learning Myelin Content in Multiple Sclerosis from Multimodal MRI through Adversarial Training

Title Learning Myelin Content in Multiple Sclerosis from Multimodal MRI through Adversarial Training
Authors Wen Wei, Emilie Poirion, Benedetta Bodini, Stanley Durrleman, Nicholas Ayache, Bruno Stankoff, Olivier Colliot
Abstract Multiple sclerosis (MS) is a demyelinating disease of the central nervous system (CNS). A reliable measure of the tissue myelin content is therefore essential for understanding the physiopathology of MS, tracking progression and assessing treatment efficacy. Positron emission tomography (PET) with $[^{11} \mbox{C}] \mbox{PIB}$ has been proposed as a promising biomarker for measuring myelin content changes in-vivo in MS. However, PET imaging is expensive and invasive due to the injection of a radioactive tracer. In contrast, magnetic resonance imaging (MRI) is a non-invasive, widely available technique, but existing MRI sequences do not provide, to date, a reliable, specific, or direct marker of either demyelination or remyelination. In this work, we therefore propose Sketcher-Refiner Generative Adversarial Networks (GANs) with specifically designed adversarial loss functions to predict the PET-derived myelin content map from a combination of MRI modalities. The prediction problem is solved by a sketch-refinement process in which the sketcher generates the preliminary anatomical and physiological information and the refiner refines and generates images reflecting the tissue myelin content in the human brain. We evaluated the ability of our method to predict myelin content at both global and voxel-wise levels. The evaluation results show that the demyelination in lesion regions and myelin content in normal-appearing white matter (NAWM) can be well predicted by our method. The method has the potential to become a useful tool for clinical management of patients with MS.
Tasks
Published 2018-04-21
URL http://arxiv.org/abs/1804.08039v2
PDF http://arxiv.org/pdf/1804.08039v2.pdf
PWC https://paperswithcode.com/paper/learning-myelin-content-in-multiple-sclerosis
Repo
Framework
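
Below is a purely structural sketch of the two-stage sketch-refinement idea described above: a first network produces a coarse output from the stacked MRI modalities, and a second network refines it conditioned on the same input. Layer counts, losses and the adversarial training are not the paper's; this only illustrates the data flow.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class Sketcher(nn.Module):
    """First stage: maps the stacked MRI modalities to a coarse myelin map."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Sequential(conv_block(in_ch, 32), conv_block(32, 32), nn.Conv2d(32, 1, 1))

    def forward(self, x):
        return self.net(x)

class Refiner(nn.Module):
    """Second stage: refines the sketch, conditioned on the MRI input."""
    def __init__(self, in_ch=4):
        super().__init__()
        self.net = nn.Sequential(conv_block(in_ch, 32), conv_block(32, 32), nn.Conv2d(32, 1, 1))

    def forward(self, mri, sketch):
        return self.net(torch.cat([mri, sketch], dim=1))

mri = torch.randn(2, 3, 64, 64)           # hypothetical multimodal MRI slices
sketch = Sketcher()(mri)
refined = Refiner()(mri, sketch)
print(sketch.shape, refined.shape)
```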

Belief likelihood function for generalised logistic regression

Title Belief likelihood function for generalised logistic regression
Authors Fabio Cuzzolin
Abstract The notion of belief likelihood function of repeated trials is introduced, whenever the uncertainty for individual trials is encoded by a belief measure (a finite random set). This generalises the traditional likelihood function, and provides a natural setting for belief inference from statistical data. Factorisation results are proven for the case in which conjunctive or disjunctive combination are employed, leading to analytical expressions for the lower and upper likelihoods of ‘sharp’ samples in the case of Bernoulli trials, and to the formulation of a generalised logistic regression framework.
Tasks
Published 2018-08-07
URL http://arxiv.org/abs/1808.02560v2
PDF http://arxiv.org/pdf/1808.02560v2.pdf
PWC https://paperswithcode.com/paper/belief-likelihood-function-for-generalised
Repo
Framework

Evenly Cascaded Convolutional Networks

Title Evenly Cascaded Convolutional Networks
Authors Chengxi Ye, Chinmaya Devaraj, Michael Maynord, Cornelia Fermüller, Yiannis Aloimonos
Abstract We introduce the Evenly Cascaded convolutional Network (ECN), a neural network taking inspiration from the cascade algorithm of wavelet analysis. ECN employs two feature streams - a low-level and a high-level stream. At each layer these streams interact, such that low-level features are modulated using advanced perspectives from the high-level stream. ECN is evenly structured through resizing feature map dimensions by a consistent ratio, which removes the burden of ad-hoc specification of feature map dimensions. ECN produces easily interpretable feature maps, a result whose intuition can be understood in the context of scale-space theory. We demonstrate that ECN’s design facilitates the training process through providing easily trainable shortcuts. We report new state-of-the-art results for small networks, without the need for additional treatment such as pruning or compression - a consequence of ECN’s simple structure and direct training. A 6-layered ECN design with under 500k parameters achieves 95.24% and 78.99% accuracy on the CIFAR-10 and CIFAR-100 datasets, respectively, outperforming the current state-of-the-art for small-parameter networks, and a 3 million parameter ECN produces results competitive with the state-of-the-art.
Tasks
Published 2018-07-02
URL http://arxiv.org/abs/1807.00456v2
PDF http://arxiv.org/pdf/1807.00456v2.pdf
PWC https://paperswithcode.com/paper/evenly-cascaded-convolutional-networks
Repo
Framework
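
The following toy stage shows one plausible reading of the two-stream interaction described in the abstract (the high-level stream additively modulating the low-level stream, with both streams resized by a consistent ratio between stages). It is not the authors' architecture; layer choices here are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CascadeStage(nn.Module):
    """One cascade stage of a generic two-stream network: process the
    high-level stream, use it to modulate the low-level stream, then
    resize both by a consistent ratio."""
    def __init__(self, channels):
        super().__init__()
        self.high = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
        self.low = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, low, high):
        high = self.high(high)
        low = F.relu(self.low(low) + high)      # low-level features modulated by high-level stream
        low = F.interpolate(low, scale_factor=0.5, mode="bilinear", align_corners=False)
        high = F.interpolate(high, scale_factor=0.5, mode="bilinear", align_corners=False)
        return low, high

x = torch.randn(1, 16, 32, 32)
low, high = x, x.clone()
for stage in [CascadeStage(16), CascadeStage(16)]:
    low, high = stage(low, high)
print(low.shape, high.shape)
```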

A New Decidable Class of Tuple Generating Dependencies: The Triangularly-Guarded Class

Title A New Decidable Class of Tuple Generating Dependencies: The Triangularly-Guarded Class
Authors Vernon Asuncion, Yan Zhang
Abstract In this paper we introduce a new class of tuple-generating dependencies (TGDs) called triangularly-guarded TGDs, which are TGDs with certain restrictions on the atomic derivation track embedded in the underlying rule set. We show that conjunctive query answering under this new class of TGDs is decidable. We further show that this new class strictly contains some other decidable classes such as weak-acyclic, guarded, sticky and shy, which, to the best of our knowledge, provides a unified representation of all these aforementioned classes.
Tasks
Published 2018-04-17
URL http://arxiv.org/abs/1804.05997v2
PDF http://arxiv.org/pdf/1804.05997v2.pdf
PWC https://paperswithcode.com/paper/a-new-decidable-class-of-tuple-generating
Repo
Framework

DeepGestalt - Identifying Rare Genetic Syndromes Using Deep Learning

Title DeepGestalt - Identifying Rare Genetic Syndromes Using Deep Learning
Authors Yaron Gurovich, Yair Hanani, Omri Bar, Nicole Fleischer, Dekel Gelbman, Lina Basel-Salmon, Peter Krawitz, Susanne B Kamphausen, Martin Zenker, Lynne M. Bird, Karen W. Gripp
Abstract Facial analysis technologies have recently measured up to the capabilities of expert clinicians in syndrome identification. To date, these technologies could only identify phenotypes of a few diseases, limiting their role in clinical settings where hundreds of diagnoses must be considered. We developed a facial analysis framework, DeepGestalt, using computer vision and deep learning algorithms, that quantifies similarities to hundreds of genetic syndromes based on unconstrained 2D images. DeepGestalt is currently trained with over 26,000 patient cases from a rapidly growing phenotype-genotype database, consisting of tens of thousands of validated clinical cases, curated through a community-driven platform. DeepGestalt currently achieves 91% top-10-accuracy in identifying over 215 different genetic syndromes and has outperformed clinical experts in three separate experiments. We suggest that this form of artificial intelligence is ready to support medical genetics in clinical and laboratory practices and will play a key role in the future of precision medicine.
Tasks
Published 2018-01-23
URL http://arxiv.org/abs/1801.07637v1
PDF http://arxiv.org/pdf/1801.07637v1.pdf
PWC https://paperswithcode.com/paper/deepgestalt-identifying-rare-genetic
Repo
Framework

audEERING’s approach to the One-Minute-Gradual Emotion Challenge

Title audEERING’s approach to the One-Minute-Gradual Emotion Challenge
Authors Andreas Triantafyllopoulos, Hesam Sagha, Florian Eyben, Björn Schuller
Abstract This paper describes audEERING’s submissions as well as additional evaluations for the One-Minute-Gradual (OMG) emotion recognition challenge. We provide the results for audio and video processing on subject (in)dependent evaluations. On the provided Development set, we achieved a 0.343 Concordance Correlation Coefficient (CCC) for arousal (from audio) and 0.401 for valence (from video).
Tasks Emotion Recognition
Published 2018-05-03
URL http://arxiv.org/abs/1805.01222v1
PDF http://arxiv.org/pdf/1805.01222v1.pdf
PWC https://paperswithcode.com/paper/audeerings-approach-to-the-one-minute-gradual
Repo
Framework
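
Since the challenge results above are reported as Concordance Correlation Coefficients, a small reference implementation of Lin's CCC (which I assume matches the challenge's scoring) may be useful; the toy annotations are invented.

```python
import numpy as np

def concordance_correlation_coefficient(y_true, y_pred):
    """Lin's Concordance Correlation Coefficient:
    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mean_t, mean_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = ((y_true - mean_t) * (y_pred - mean_p)).mean()
    return 2 * cov / (var_t + var_p + (mean_t - mean_p) ** 2)

# Toy arousal annotations vs. predictions.
truth = np.array([0.1, 0.4, 0.35, 0.8, 0.6])
pred = np.array([0.2, 0.45, 0.3, 0.7, 0.55])
print("CCC:", concordance_correlation_coefficient(truth, pred))
```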

Fully Dense UNet for 2D Sparse Photoacoustic Tomography Artifact Removal

Title Fully Dense UNet for 2D Sparse Photoacoustic Tomography Artifact Removal
Authors Steven Guan, Amir Khan, Siddhartha Sikdar, Parag V. Chitnis
Abstract Photoacoustic imaging is an emerging imaging modality that is based upon the photoacoustic effect. In photoacoustic tomography (PAT), the induced acoustic pressure waves are measured by an array of detectors and used to reconstruct an image of the initial pressure distribution. A common challenge faced in PAT is that the measured acoustic waves can only be sparsely sampled. Reconstructing sparsely sampled data using standard methods results in severe artifacts that obscure information within the image. We propose a modified convolutional neural network (CNN) architecture termed Fully Dense UNet (FD-UNet) for removing artifacts from 2D PAT images reconstructed from sparse data and compare the proposed CNN with the standard UNet in terms of reconstructed image quality.
Tasks
Published 2018-08-31
URL http://arxiv.org/abs/1808.10848v3
PDF http://arxiv.org/pdf/1808.10848v3.pdf
PWC https://paperswithcode.com/paper/fully-dense-unet-for-2d-sparse-photoacoustic
Repo
Framework
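
The "fully dense" design above builds on DenseNet-style blocks, in which each layer receives the concatenation of all previous feature maps. A generic dense block is sketched below; channel counts and depth are illustrative and not the paper's exact FD-UNet configuration.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """DenseNet-style block: each layer takes the concatenation of the
    block input and all previously produced feature maps."""
    def __init__(self, in_ch, growth=8, layers=4):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch + i * growth, growth, 3, padding=1),
                nn.BatchNorm2d(growth),
                nn.ReLU(inplace=True),
            )
            for i in range(layers)
        ])

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

x = torch.randn(1, 16, 64, 64)             # e.g. features from a reconstructed PAT image
print(DenseBlock(16)(x).shape)             # torch.Size([1, 48, 64, 64])
```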

Action Model Acquisition using LSTM

Title Action Model Acquisition using LSTM
Authors Ankuj Arora, Humbert Fiorino, Damien Pellier, Sylvie Pesty
Abstract In the field of Automated Planning and Scheduling (APS), intelligent agents inherently require an action model (blueprints of actions whose interleaved executions effectuate transitions of the system state) in order to plan and solve real world problems. It is, however, becoming increasingly cumbersome to codify this model, and it is more efficient to learn it from observed plan execution sequences (training data). While the underlying objective is to subsequently plan from this learnt model, most approaches fall short, as anything less than a flawless reconstruction of the underlying model renders it unusable in certain domains. This work presents a novel approach using long short-term memory (LSTM) techniques for the acquisition of the underlying action model. We use the sequence labelling capabilities of LSTMs to isolate from an exhaustive model set a model identical to the one responsible for producing the training data. This isolation capability makes our approach effective.
Tasks
Published 2018-10-03
URL http://arxiv.org/abs/1810.01992v1
PDF http://arxiv.org/pdf/1810.01992v1.pdf
PWC https://paperswithcode.com/paper/action-model-acquisition-using-lstm
Repo
Framework
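
The core building block referenced above, LSTM sequence labelling over observed plan traces, is sketched here in a generic form: tokens in, a label per step out. The vocabulary, labels and traces are placeholders, and the paper's model-isolation procedure is not implemented.

```python
import torch
import torch.nn as nn

class ActionLabeller(nn.Module):
    """A plain LSTM sequence labeller: given an observed plan trace
    (a sequence of action tokens), predict a label for each step."""
    def __init__(self, vocab_size, num_labels, emb=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, num_labels)

    def forward(self, tokens):
        h, _ = self.lstm(self.embed(tokens))
        return self.out(h)                        # (batch, seq_len, num_labels)

model = ActionLabeller(vocab_size=50, num_labels=10)
trace = torch.randint(0, 50, (4, 12))             # 4 toy plan traces of length 12
logits = model(trace)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 10), torch.randint(0, 10, (4 * 12,)))
print(logits.shape, float(loss))
```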

A Preliminary Exploration of Floating Point Grammatical Evolution

Title A Preliminary Exploration of Floating Point Grammatical Evolution
Authors Brad Alexander
Abstract Current genetic programming (GP) frameworks are highly effective on a range of real and simulated benchmarks. However, due to the high dimensionality of the genotypes for GP, the task of visualising the fitness landscape for GP search can be difficult. This paper describes a new framework: Floating Point Grammatical Evolution (FP-GE), which uses a single floating point genotype to encode an individual program. This encoding permits easier visualisation of the fitness landscape of arbitrary problems by providing a way to map fitness against a single dimension. The new framework also makes it trivially easy to apply continuous search algorithms, such as Differential Evolution, to the search problem. In this work, the FP-GE framework is tested against several regression problems, visualising the search landscape for these and comparing different search meta-heuristics.
Tasks
Published 2018-06-09
URL http://arxiv.org/abs/1806.03455v1
PDF http://arxiv.org/pdf/1806.03455v1.pdf
PWC https://paperswithcode.com/paper/a-preliminary-exploration-of-floating-point
Repo
Framework
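
To make the single-float-genotype idea concrete, here is one hypothetical way a floating point value could drive grammatical derivations: consume successive decimal digits of the genotype to pick productions in a leftmost expansion. The grammar and the digit-based mapping below are illustrative assumptions, not the paper's actual FP-GE encoding.

```python
import random

GRAMMAR = {
    "expr": [["expr", "op", "expr"], ["var"], ["const"]],
    "op": [["+"], ["-"], ["*"]],
    "var": [["x"]],
    "const": [["1"], ["2"], ["3"]],
}

def decode(genotype: float, max_expansions=30):
    """Expand the grammar left-to-right, using successive decimal digits of
    the single float genotype to choose productions (a hypothetical mapping)."""
    digits = [int(d) for d in f"{genotype:.15f}" if d.isdigit()]
    out, stack, i = [], ["expr"], 0
    while stack and i < max_expansions:
        sym = stack.pop(0)
        if sym not in GRAMMAR:                    # terminal symbol: emit it
            out.append(sym)
            continue
        rules = GRAMMAR[sym]
        choice = digits[i % len(digits)] % len(rules)
        stack = rules[choice] + stack
        i += 1
    return " ".join(out) if not stack else None   # None = incomplete derivation

random.seed(0)
for g in [random.random() for _ in range(3)]:
    print(f"{g:.6f} ->", decode(g))
```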

SOSA: A Lightweight Ontology for Sensors, Observations, Samples, and Actuators

Title SOSA: A Lightweight Ontology for Sensors, Observations, Samples, and Actuators
Authors Krzysztof Janowicz, Armin Haller, Simon J D Cox, Danh Le Phuoc, Maxime Lefrancois
Abstract The Sensor, Observation, Sample, and Actuator (SOSA) ontology provides a formal but lightweight general-purpose specification for modeling the interaction between the entities involved in the acts of observation, actuation, and sampling. SOSA is the result of rethinking the W3C-XG Semantic Sensor Network (SSN) ontology based on changes in scope and target audience, technical developments, and lessons learned over the past years. SOSA also acts as a replacement of SSN’s Stimulus Sensor Observation (SSO) core. It has been developed by the first joint working group of the Open Geospatial Consortium (OGC) and the World Wide Web Consortium (W3C) on \emph{Spatial Data on the Web}. In this work, we motivate the need for SOSA, provide an overview of the main classes and properties, and briefly discuss its integration with the new release of the SSN ontology as well as various other alignments to specifications such as OGC’s Observations and Measurements (O&M), Dolce-Ultralite (DUL), and other prominent ontologies. We will also touch upon common modeling problems and application areas related to publishing and searching observation, sampling, and actuation data on the Web. The SOSA ontology and standard can be accessed at \url{https://www.w3.org/TR/vocab-ssn/}.
Tasks
Published 2018-05-25
URL http://arxiv.org/abs/1805.09979v2
PDF http://arxiv.org/pdf/1805.09979v2.pdf
PWC https://paperswithcode.com/paper/sosa-a-lightweight-ontology-for-sensors
Repo
Framework
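
As a small usage illustration of the vocabulary described above, the snippet below builds a single sosa:Observation with rdflib and serialises it as Turtle. The sensor, property, feature of interest and result values are invented for the example; only the SOSA terms and namespace come from the standard.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

SOSA = Namespace("http://www.w3.org/ns/sosa/")
EX = Namespace("http://example.org/")            # hypothetical namespace for the example

g = Graph()
g.bind("sosa", SOSA)
g.bind("ex", EX)

obs = EX["observation/1"]
g.add((obs, RDF.type, SOSA.Observation))
g.add((obs, SOSA.madeBySensor, EX["sensor/dht22-1"]))
g.add((obs, SOSA.observedProperty, EX["property/airTemperature"]))
g.add((obs, SOSA.hasFeatureOfInterest, EX["room/kitchen"]))
g.add((obs, SOSA.hasSimpleResult, Literal(21.5, datatype=XSD.double)))
g.add((obs, SOSA.resultTime, Literal("2018-05-25T12:00:00Z", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))
```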