January 29, 2020

3348 words 16 mins read

Paper Group ANR 493

Preconditioned P-ULA for Joint Deconvolution-Segmentation of Ultrasound Images – Extended Version. Learning to Localize Temporal Events in Large-scale Video Data. Estudo comparativo de meta-heurísticas para problemas de colorações de grafos. Deep Learning for Multi-Scale Changepoint Detection in Multivariate Time Series. High-dimensional structure …

Preconditioned P-ULA for Joint Deconvolution-Segmentation of Ultrasound Images – Extended Version

Title Preconditioned P-ULA for Joint Deconvolution-Segmentation of Ultrasound Images – Extended Version
Authors Marie-Caroline Corbineau, Denis Kouamé, Emilie Chouzenoux, Jean-Yves Tourneret, Jean-Christophe Pesquet
Abstract Joint deconvolution and segmentation of ultrasound images is a challenging problem in medical imaging. By adopting a hierarchical Bayesian model, we propose an accelerated Markov chain Monte Carlo scheme where the tissue reflectivity function is sampled thanks to a recently introduced proximal unadjusted Langevin algorithm. This new approach is combined with a forward-backward step and a preconditioning strategy to accelerate the convergence, and with a method based on the majorization-minimization principle to solve the inner nonconvex minimization problems. As demonstrated in numerical experiments conducted on both simulated and in vivo ultrasound images, the proposed method provides high-quality restoration and segmentation results and is up to six times faster than an existing Hamiltonian Monte Carlo method.
Tasks
Published 2019-03-19
URL https://arxiv.org/abs/1903.08111v4
PDF https://arxiv.org/pdf/1903.08111v4.pdf
PWC https://paperswithcode.com/paper/preconditioned-p-ula-for-joint-deconvolution
Repo
Framework
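
As a rough illustration of the kind of update the paper builds on, the sketch below runs a preconditioned proximal-gradient (forward-backward) Langevin step on a toy 1-D deconvolution problem. The circulant Gaussian blur, the l1 prior, the diagonal preconditioner, and the step size are all assumptions made here for brevity; the paper's hierarchical model, label field, and majorization-minimization inner solver are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D deconvolution: y = H x + noise, with a circulant Gaussian blur H
n = 128
h = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
h /= h.sum()
H = np.array([np.roll(np.pad(h, (0, n - h.size)), k - 5) for k in range(n)])
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.uniform(1.0, 3.0, 8)
sigma = 0.05
y = H @ x_true + sigma * rng.normal(size=n)

# Smooth data fidelity f(x) = ||Hx - y||^2 / (2 sigma^2), nonsmooth prior g(x) = lam * ||x||_1
lam, gamma = 0.1, 0.1

def grad_f(x):
    return H.T @ (H @ x - y) / sigma**2

# Diagonal preconditioner (an assumption): inverse of the diagonal of the Hessian of f
P = 1.0 / (np.diag(H.T @ H) / sigma**2)

def prox_l1(v, tau):
    """Elementwise soft-thresholding = proximity operator of tau * |.|_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

x = np.zeros(n)
samples = []
for k in range(5000):
    noise = np.sqrt(2.0 * gamma * P) * rng.normal(size=n)   # injected noise, covariance 2*gamma*P
    # Preconditioned forward-backward (proximal-gradient) Langevin step
    x = prox_l1(x - gamma * P * grad_f(x) + noise, gamma * lam * P)
    if k >= 1000:                                           # discard burn-in
        samples.append(x.copy())

x_mmse = np.mean(samples, axis=0)   # posterior-mean style estimate of the reflectivity
print("reconstruction MSE:", round(float(np.mean((x_mmse - x_true) ** 2)), 4))
```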

Learning to Localize Temporal Events in Large-scale Video Data

Title Learning to Localize Temporal Events in Large-scale Video Data
Authors Mikel Bober-Irizar, Miha Skalic, David Austin
Abstract We address temporal localization of events in large-scale video data, in the context of the Youtube-8M Segments dataset. This emerging field within video recognition can enable applications to identify the precise time a specified event occurs in a video, which has broad implications for video search. To address this, we present two separate approaches: (1) a gradient boosted decision tree model on a crafted dataset and (2) a combination of deep learning models based on frame-level data, video-level data, and a localization model. The combination of these two approaches achieved 5th place in the 3rd Youtube-8M video recognition challenge.
Tasks Temporal Localization, Video Recognition
Published 2019-10-25
URL https://arxiv.org/abs/1910.11631v1
PDF https://arxiv.org/pdf/1910.11631v1.pdf
PWC https://paperswithcode.com/paper/learning-to-localize-temporal-events-in-large
Repo
Framework
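
Approach (1) above, a gradient boosted decision tree over a crafted dataset, can be illustrated with a minimal scikit-learn sketch. The segment features, the synthetic labels, and the hyperparameters below are placeholders assumed for the example, not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical crafted features per 5-second segment: frame-level score statistics,
# the video-level score, and the relative position of the segment in the video.
n_segments = 5000
X = np.column_stack([
    rng.uniform(size=n_segments),          # mean frame-level score for the class
    rng.uniform(size=n_segments),          # max frame-level score for the class
    rng.uniform(size=n_segments),          # video-level score for the class
    rng.uniform(size=n_segments),          # relative position of the segment
])
# Synthetic "segment contains the event" labels, used only to make the example runnable
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]
     + 0.1 * rng.normal(size=n_segments) > 0.55).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
gbdt = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.05)
gbdt.fit(X_tr, y_tr)
print("held-out accuracy:", round(gbdt.score(X_te, y_te), 3))
```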

Estudo comparativo de meta-heurísticas para problemas de colorações de grafos (A Comparative Study of Metaheuristics for Graph Coloring Problems)

Title Estudo comparativo de meta-heurísticas para problemas de colorações de grafos (A Comparative Study of Metaheuristics for Graph Coloring Problems)
Authors Flávio José Mendes Coelho
Abstract A classic graph coloring problem is to assign colors to vertices of any graph so that distinct colors are assigned to adjacent vertices. Optimal graph coloring colors a graph with a minimum number of colors, which is its chromatic number. Determining the chromatic number is a combinatorial optimization problem proven to be computationally intractable, which implies that no known algorithm can solve large instances of the problem in a reasonable time. For this reason, approximate methods and metaheuristics form a set of techniques that do not guarantee optimality but obtain good solutions in a reasonable time. This paper reports a comparative study of the Hill-Climbing, Simulated Annealing, Tabu Search, and Iterated Local Search metaheuristics for the classic graph coloring problem, considering their time efficiency in processing the DSJC125 and DSJC250 instances of the DIMACS benchmark.
Tasks Combinatorial Optimization
Published 2019-12-18
URL https://arxiv.org/abs/1912.11533v1
PDF https://arxiv.org/pdf/1912.11533v1.pdf
PWC https://paperswithcode.com/paper/estudo-comparativo-de-meta-heuristicas-para
Repo
Framework
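
As a minimal illustration of one of the metaheuristics compared above, the sketch below applies simulated annealing to the classic graph coloring problem by minimizing the number of conflicting edges for a fixed number of colors. The neighborhood, cooling schedule, and parameters are simple textbook choices, not the settings used in the paper's experiments on the DSJC instances.

```python
import math
import random

def conflicts(coloring, adj):
    """Count edges whose endpoints share a color (each edge counted once)."""
    return sum(coloring[u] == coloring[v] for u in adj for v in adj[u] if u < v)

def simulated_annealing_coloring(adj, k, t0=2.0, cooling=0.9995, iters=200000, seed=0):
    """Search for a k-coloring of the graph `adj` (dict: vertex -> neighbor list)."""
    rng = random.Random(seed)
    vertices = list(adj)
    coloring = {v: rng.randrange(k) for v in vertices}
    cost = conflicts(coloring, adj)
    t = t0
    for _ in range(iters):
        if cost == 0:
            break                                   # proper k-coloring found
        v = rng.choice(vertices)
        old, new = coloring[v], rng.randrange(k)
        if new == old:
            continue
        # Change in the number of conflicting edges incident to v
        delta = (sum(coloring[u] == new for u in adj[v])
                 - sum(coloring[u] == old for u in adj[v]))
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            coloring[v] = new
            cost += delta
        t *= cooling                                # geometric cooling schedule
    return coloring, cost

# Tiny example: a 5-cycle, which needs 3 colors
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
coloring, cost = simulated_annealing_coloring(adj, k=3)
print(coloring, "conflicts:", cost)
```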

Deep Learning for Multi-Scale Changepoint Detection in Multivariate Time Series

Title Deep Learning for Multi-Scale Changepoint Detection in Multivariate Time Series
Authors Zahra Ebrahimzadeh, Min Zheng, Selcuk Karakas, Samantha Kleinberg
Abstract Many real-world time series, such as in health, have changepoints where the system’s structure or parameters change. Since changepoints can indicate critical events such as onset of illness, it is highly important to detect them. However, existing methods for changepoint detection (CPD) often require user-specified models and cannot recognize changes that occur gradually or at multiple time-scales. To address both, we show how CPD can be treated as a supervised learning problem, and propose a new deep neural network architecture to efficiently identify both abrupt and gradual changes at multiple timescales from multivariate data. Our proposed pyramid recurrent neural network (PRN) provides scale-invariance using wavelets and pyramid analysis techniques from multi-scale signal processing. Through experiments on synthetic and real-world datasets, we show that PRN can detect abrupt and gradual changes with higher accuracy than the state of the art and can extrapolate to detect changepoints at novel scales not seen in training.
Tasks Time Series
Published 2019-05-16
URL https://arxiv.org/abs/1905.06913v1
PDF https://arxiv.org/pdf/1905.06913v1.pdf
PWC https://paperswithcode.com/paper/deep-learning-for-multi-scale-changepoint
Repo
Framework
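
The pyramid recurrent network above combines wavelet/pyramid analysis with recurrent layers. As a much simpler, hedged illustration of the multi-scale idea only, the sketch below builds an input pyramid by average-pooling the series and applies a shared GRU detector at every scale. The architecture, sizes, and the use of PyTorch are assumptions made here; this is not the PRN.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleChangeDetector(nn.Module):
    """Toy multi-scale detector: a shared GRU applied to a pyramid of
    progressively downsampled views of a multivariate series."""

    def __init__(self, n_channels, hidden=32, n_scales=3):
        super().__init__()
        self.n_scales = n_scales
        self.gru = nn.GRU(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # per-time-step changepoint score

    def forward(self, x):                  # x: (batch, time, channels)
        T = x.shape[1]
        scores = []
        for s in range(self.n_scales):
            # Downsample by average pooling to expose slower, more gradual changes
            xs = x if s == 0 else F.avg_pool1d(x.transpose(1, 2), 2 ** s).transpose(1, 2)
            h, _ = self.gru(xs)
            logit = self.head(h).transpose(1, 2)               # (batch, 1, time / 2**s)
            scores.append(F.interpolate(logit, size=T, mode="linear", align_corners=False))
        return torch.sigmoid(torch.stack(scores).mean(0)).squeeze(1)   # (batch, time)

x = torch.randn(4, 256, 3)                 # 4 series, 256 steps, 3 channels
probs = MultiScaleChangeDetector(n_channels=3)(x)
print(probs.shape)                         # torch.Size([4, 256])
```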

High-dimensional structure learning of binary pairwise Markov networks: A comparative numerical study

Title High-dimensional structure learning of binary pairwise Markov networks: A comparative numerical study
Authors Johan Pensar, Yingying Xu, Santeri Puranen, Maiju Pesonen, Yoshiyuki Kabashima, Jukka Corander
Abstract Learning the undirected graph structure of a Markov network from data is a problem that has received a lot of attention during the last few decades. As a result of the general applicability of the model class, a myriad of methods have been developed in parallel in several research fields. Recently, as the size of the considered systems has increased, the focus of new methods has shifted towards the high-dimensional domain. In particular, the introduction of the pseudo-likelihood function has pushed the limits of score-based methods which were originally based on the likelihood function. At the same time, methods based on simple pairwise tests have been developed to meet the challenges arising from increasingly large data sets in computational biology. Apart from being applicable to high-dimensional problems, methods based on the pseudo-likelihood and pairwise tests are fundamentally very different. To compare the accuracy of the different types of methods, an extensive numerical study is performed on data generated by binary pairwise Markov networks. A parallelizable Gibbs sampler, based on restricted Boltzmann machines, is proposed as a tool to efficiently sample from sparse high-dimensional networks. The results of the study show that pairwise methods can be more accurate than pseudo-likelihood methods in settings often encountered in high-dimensional structure learning applications.
Tasks
Published 2019-01-14
URL https://arxiv.org/abs/1901.04345v2
PDF https://arxiv.org/pdf/1901.04345v2.pdf
PWC https://paperswithcode.com/paper/high-dimensional-structure-learning-of-binary
Repo
Framework
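
As a minimal illustration of the "simple pairwise tests" mentioned above, the sketch below scores every pair of binary variables by empirical mutual information and keeps pairs above a threshold. The threshold and the toy data are assumptions; note that, unlike conditional methods, such marginal pairwise scores also pick up indirect dependencies (e.g. the 0-2 pair in the chain below).

```python
import numpy as np
from itertools import combinations

def pairwise_mi(xi, xj, eps=1e-12):
    """Empirical mutual information (in nats) between two binary variables."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_ab = np.mean((xi == a) & (xj == b)) + eps
            p_a = np.mean(xi == a) + eps
            p_b = np.mean(xj == b) + eps
            mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi

def pairwise_structure(X, threshold=0.01):
    """Return variable pairs (i, j) whose empirical MI exceeds the threshold."""
    _, p = X.shape
    return [(i, j) for i, j in combinations(range(p), 2)
            if pairwise_mi(X[:, i], X[:, j]) > threshold]

# Toy data: a chain 0 - 1 - 2 of correlated binary variables plus one independent node
rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, 5000)
x1 = np.where(rng.random(5000) < 0.9, x0, 1 - x0)   # mostly copies x0
x2 = np.where(rng.random(5000) < 0.9, x1, 1 - x1)   # mostly copies x1
x3 = rng.integers(0, 2, 5000)                       # independent node
X = np.column_stack([x0, x1, x2, x3])
print(pairwise_structure(X))   # expect (0, 1), (1, 2), plus the indirect (0, 2); nothing with 3
```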

A Novel Approach for Detection and Ranking of Trendy and Emerging Cyber Threat Events in Twitter Streams

Title A Novel Approach for Detection and Ranking of Trendy and Emerging Cyber Threat Events in Twitter Streams
Authors Avishek Bose, Vahid Behzadan, Carlos Aguirre, William H. Hsu
Abstract We present a new machine learning and text information extraction approach to detection of cyber threat events in Twitter that are novel (previously non-extant) and developing (marked by significance with respect to similarity with a previously detected event). While some existing approaches to event detection measure novelty and trendiness, typically as independent criteria and occasionally as a holistic measure, this work focuses on detecting both novel and developing events using an unsupervised machine learning approach. Furthermore, our proposed approach enables the ranking of cyber threat events based on an importance score by extracting the tweet terms that are characterized as named entities, keywords, or both. We also impute influence to users in order to assign a weighted score to noun phrases in proportion to user influence and the corresponding event scores for named entities and keywords. To evaluate the performance of our proposed approach, we measure the efficiency and detection error rate for events over a specified time interval, relative to human annotator ground truth.
Tasks
Published 2019-07-12
URL https://arxiv.org/abs/1907.07768v1
PDF https://arxiv.org/pdf/1907.07768v1.pdf
PWC https://paperswithcode.com/paper/a-novel-approach-for-detection-and-ranking-of
Repo
Framework
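
A toy sketch of the ranking idea described above: terms tagged as named entities or keywords contribute to an event importance score, weighted by user influence. The influence values, term weights, and pre-extracted terms below are placeholder assumptions, not the paper's extraction or scoring pipeline.

```python
from collections import defaultdict

# Hypothetical detected event: a handful of tweets with pre-extracted terms.
tweets = [
    {"user": "alice", "entities": ["CVE-2019-0708", "Windows"], "keywords": ["exploit", "rdp"]},
    {"user": "bob",   "entities": ["CVE-2019-0708"],            "keywords": ["patch", "rdp"]},
    {"user": "carol", "entities": ["BlueKeep"],                  "keywords": ["worm", "exploit"]},
]
# Placeholder user-influence scores (in practice these would be imputed from the social graph).
influence = {"alice": 0.9, "bob": 0.4, "carol": 0.7}

W_ENTITY, W_KEYWORD = 2.0, 1.0     # assumed weights: named entities count more than keywords

term_scores = defaultdict(float)
for tw in tweets:
    u = influence.get(tw["user"], 0.1)
    for term in tw["entities"]:
        term_scores[term] += W_ENTITY * u
    for term in tw["keywords"]:
        term_scores[term] += W_KEYWORD * u

event_score = sum(term_scores.values())                 # importance score for ranking events
ranked_terms = sorted(term_scores.items(), key=lambda kv: -kv[1])
print("event importance:", round(event_score, 2))
print("top terms:", ranked_terms[:3])
```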

A Survey of Challenges and Opportunities in Sensing and Analytics for Cardiovascular Disorders

Title A Survey of Challenges and Opportunities in Sensing and Analytics for Cardiovascular Disorders
Authors Nathan C. Hurley, Erica S. Spatz, Harlan M. Krumholz, Roozbeh Jafari, Bobak J. Mortazavi
Abstract Cardiovascular disorders account for nearly 1 in 3 deaths in the United States. Care for these disorders is often determined during visits to acute care facilities, such as hospitals. While stays in these settings represent just a small proportion of patients’ lives, they account for a disproportionately large amount of decision making. To overcome this bias towards data from acute care settings, there is a need for longitudinal monitoring in patients with cardiovascular disorders. Longitudinal monitoring can provide a more comprehensive picture of patient health, allowing for more informed decision making. This work surveys the current field of sensing technologies and machine learning analytics that exist in the field of remote monitoring for cardiovascular disorders. We highlight three primary needs in the design of new smart health technologies: 1) the need for sensing technology that can track longitudinal trends in signs and symptoms of the cardiovascular disorder despite potentially infrequent, noisy, or missing data measurements; 2) the need for new analytic techniques that model data captured in a longitudinal, continual fashion to aid in the development of new risk prediction techniques and in tracking disease progression; and 3) the need for machine learning techniques that are personalized and interpretable, allowing for advancements in shared clinical decision making. We highlight these needs based upon the current state-of-the-art in smart health technologies and analytics and discuss the ample opportunities that exist in addressing all three needs in the development of smart health technologies and analytics applied to the field of cardiovascular disorders and care.
Tasks Decision Making
Published 2019-08-12
URL https://arxiv.org/abs/1908.06170v1
PDF https://arxiv.org/pdf/1908.06170v1.pdf
PWC https://paperswithcode.com/paper/a-survey-of-challenges-and-opportunities-in
Repo
Framework

Super-resolution of Time-series Labels for Bootstrapped Event Detection

Title Super-resolution of Time-series Labels for Bootstrapped Event Detection
Authors Ivan Kiskin, Udeepa Meepegama, Steven Roberts
Abstract Solving real-world problems, particularly with deep learning, relies on the availability of abundant, quality data. In this paper we develop a novel framework that maximises the utility of time-series datasets that contain only small quantities of expertly-labelled data, larger quantities of weakly (or coarsely) labelled data, and a large volume of unlabelled data. This represents scenarios commonly encountered in the real world, such as in crowd-sourcing applications. In our work, we use a nested loop based on a Kernel Density Estimator (KDE) to super-resolve the abundant low-quality data labels, thereby enabling effective training of a Convolutional Neural Network (CNN). We demonstrate two key results: a) The KDE is able to super-resolve labels more accurately, and with better calibrated probabilities, than well-established classifiers acting as baselines; b) Our CNN, trained on super-resolved labels from the KDE, achieves an improvement in F1 score of 22.1% over the next best baseline system in our candidate problem domain.
Tasks Super-Resolution, Time Series
Published 2019-06-01
URL https://arxiv.org/abs/1906.00254v1
PDF https://arxiv.org/pdf/1906.00254v1.pdf
PWC https://paperswithcode.com/paper/190600254
Repo
Framework
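
A minimal sketch of the label super-resolution idea: fit a KDE over the time stamps of coarsely-positive samples (here weighted by signal energy) and threshold its density to obtain finer-grained labels for downstream training. The synthetic data, bandwidth, weighting, and threshold are assumptions; the paper's nested-loop procedure and CNN are not reproduced.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Synthetic 10 s recording at 1 kHz with two short events, plus noise.
fs = 1000
t = np.arange(0, 10, 1 / fs)
signal = 0.1 * rng.normal(size=t.size)
for c in (2.30, 7.10):                        # true event centres (~0.1 s long)
    signal += np.exp(-0.5 * ((t - c) / 0.05) ** 2)

# Weak labels only mark whole 1-second windows containing an event.
coarse = np.zeros_like(t)
for c in (2.30, 7.10):
    coarse[(t >= np.floor(c)) & (t < np.floor(c) + 1)] = 1

# A KDE over the coarsely-positive time stamps, weighted by signal energy,
# concentrates density where the event actually happened inside each window.
pos = coarse == 1
kde = gaussian_kde(t[pos], weights=signal[pos] ** 2, bw_method=0.02)
density = kde(t)
fine = (density > 0.5 * density.max()).astype(int)   # super-resolved labels

print("coarse positive samples:", int(coarse.sum()))   # ~2000 (two 1 s windows)
print("fine positive samples:  ", int(fine.sum()))     # far fewer, centred on the events
```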

A New Covariance Estimator for Sufficient Dimension Reduction in High-Dimensional and Undersized Sample Problems

Title A New Covariance Estimator for Sufficient Dimension Reduction in High-Dimensional and Undersized Sample Problems
Authors Kabir Opeyemi Olorede, Waheed Babatunde Yahya
Abstract The application of standard sufficient dimension reduction methods for reducing the dimension space of predictors without losing regression information requires inverting the covariance matrix of the predictors. This has posed a number of challenges, especially when analyzing high-dimensional data sets in which the number of predictors $\mathit{p}$ is much larger than the number of samples $n,~(n\ll p)$. A new covariance estimator, called the \textit{Maximum Entropy Covariance} (MEC), that addresses the loss of covariance information when similar covariance matrices are linearly combined using the \textit{Maximum Entropy} (ME) principle is proposed in this work. By benefitting naturally from slicing or discretizing the range of the response variable $y$ into \textit{H} non-overlapping categories, $\mathit{h_{1},\ldots ,h_{H}}$, MEC first combines covariance matrices arising from samples in each y slice $\mathit{h\in H}$ and then selects the one that maximizes entropy under the principle of maximum uncertainty. The MEC estimator is then formed from a convex mixture of this entropy-maximizing sample covariance estimate $S_{\mbox{mec}}$ and the pooled sample covariance estimate $\mathbf{S}_{\mathit{p}}$ across the $\mathit{H}$ slices, without requiring time-consuming covariance optimization procedures. MEC deals directly with the singularity and instability of sample group covariance estimates in both regression and classification problems. The efficiency of the MEC estimator is studied with existing sufficient dimension reduction methods such as \textit{Sliced Inverse Regression} (SIR) and \textit{Sliced Average Variance Estimator} (SAVE), as demonstrated on both classification and regression problems using real-life leukemia cancer data and customers’ electricity load profiles from smart meter data sets, respectively.
Tasks Dimensionality Reduction
Published 2019-09-28
URL https://arxiv.org/abs/1909.13017v1
PDF https://arxiv.org/pdf/1909.13017v1.pdf
PWC https://paperswithcode.com/paper/a-new-covariance-estimator-for-sufficient
Repo
Framework
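
A hedged sketch of the main ingredients described in the abstract: slice the response, pick the slice covariance with maximum Gaussian entropy (largest log-determinant), mix it convexly with the pooled covariance, and plug the result into a SIR-style inverse regression. The mixing weight, slicing, regularization, and toy data are assumptions, not the authors' exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression with a single sufficient direction beta (kept low-dimensional here)
n, p = 200, 30
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [1.0, -1.0, 0.5]
y = X @ beta + 0.3 * rng.normal(size=n)

H = 5                                              # number of slices of y
slices = np.array_split(np.argsort(y), H)

# Per-slice covariances; the "maximum entropy" slice has the largest Gaussian entropy,
# i.e. the largest log-determinant.
slice_covs = [np.cov(X[idx], rowvar=False) for idx in slices]
logdets = [np.linalg.slogdet(S + 1e-6 * np.eye(p))[1] for S in slice_covs]
S_mec = slice_covs[int(np.argmax(logdets))]

S_pooled = np.cov(X, rowvar=False)
alpha = 0.5                                        # assumed convex mixing weight
Sigma_hat = alpha * S_mec + (1 - alpha) * S_pooled

# SIR direction using the mixed covariance estimate
slice_means = np.stack([X[idx].mean(axis=0) for idx in slices])
weights = np.array([len(idx) / n for idx in slices])
centered = slice_means - X.mean(axis=0)
M = centered.T @ np.diag(weights) @ centered
eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sigma_hat, M))
direction = np.real(eigvecs[:, np.argmax(np.real(eigvals))])

cos = abs(direction @ beta) / (np.linalg.norm(direction) * np.linalg.norm(beta))
print("cosine similarity with the true direction:", round(float(cos), 3))
```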

Deep Clustering With Intra-class Distance Constraint for Hyperspectral Images

Title Deep Clustering With Intra-class Distance Constraint for Hyperspectral Images
Authors Jinguang Sun, Wanli Wang, Xian Wei, Li Fang, Xiaoliang Tang, Yusheng Xu, Hui Yu, Wei Yao
Abstract The high dimensionality of hyperspectral images often results in the degradation of clustering performance. Due to the powerful ability of deep feature extraction and non-linear feature representation, the clustering algorithm based on deep learning has become a hot research topic in the field of hyperspectral remote sensing. However, most deep clustering algorithms for hyperspectral images utilize deep neural networks as feature extractors without considering prior knowledge constraints that are suitable for clustering. To solve this problem, we propose an intra-class distance constrained deep clustering algorithm for high-dimensional hyperspectral images. The proposed algorithm constrains the feature mapping procedure of the auto-encoder network by intra-class distance so that raw images are transformed from the original high-dimensional space to the low-dimensional feature space that is more conducive to clustering. Furthermore, the related learning process is treated as a joint optimization problem of deep feature extraction and clustering. Experimental results demonstrate that the proposed algorithm is highly competitive with state-of-the-art clustering methods for hyperspectral images.
Tasks
Published 2019-04-01
URL http://arxiv.org/abs/1904.00562v1
PDF http://arxiv.org/pdf/1904.00562v1.pdf
PWC https://paperswithcode.com/paper/deep-clustering-with-intra-class-distance
Repo
Framework
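
A toy PyTorch sketch of the constraint described above: an autoencoder trained with a reconstruction loss plus an intra-class (within-cluster) distance penalty in the latent space, with cluster assignments refreshed by k-means. The network sizes, penalty weight, synthetic spectra, and the use of scikit-learn's KMeans are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

torch.manual_seed(0)

n_bands, latent_dim, n_clusters = 103, 16, 6      # e.g. pixels with 103 spectral bands
X = torch.randn(2000, n_bands)                    # placeholder pixel spectra

encoder = nn.Sequential(nn.Linear(n_bands, 64), nn.ReLU(), nn.Linear(64, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, n_bands))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
mse = nn.MSELoss()
lam = 0.1                                         # assumed weight of the intra-class term

for epoch in range(20):
    # Refresh cluster assignments in the current latent space
    with torch.no_grad():
        z = encoder(X)
    labels = torch.as_tensor(
        KMeans(n_clusters, n_init=10, random_state=0).fit_predict(z.numpy()),
        dtype=torch.long,
    )

    z = encoder(X)
    x_hat = decoder(z)
    # Intra-class distance: mean squared distance of each latent point to its cluster centroid
    centroids = torch.stack([z[labels == k].mean(dim=0) for k in range(n_clusters)])
    intra = ((z - centroids[labels]) ** 2).sum(dim=1).mean()
    loss = mse(x_hat, X) + lam * intra

    opt.zero_grad()
    loss.backward()
    opt.step()

print("final joint loss:", round(float(loss), 4))
```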

Commodity RGB-D Sensors: Data Acquisition

Title Commodity RGB-D Sensors: Data Acquisition
Authors Michael Zollhöfer
Abstract Over the past ten years we have seen a democratization of range sensing technology. While previously range sensors have been highly expensive and only accessible to a few domain experts, such sensors are nowadays ubiquitous and can even be found in the latest generation of mobile devices, e.g., current smartphones. This democratization of range sensing technology was started with the release of the Microsoft Kinect, and since then many different commodity range sensors followed its lead, such as the Primesense Carmine, Asus Xtion Pro, and the Structure Sensor from Occipital. The availability of cheap range sensing technology led to a big leap in research, especially in the context of more powerful static and dynamic reconstruction techniques, starting from 3D scanning applications, such as KinectFusion, to highly accurate face and body tracking approaches. In this chapter, we have a detailed look into the different types of existing range sensors. We discuss the two fundamental types of commodity range sensing techniques in detail, namely passive and active sensing, and we explore the principles these technologies are based on. Our focus is on modern active commodity range sensors based on time-of-flight and structured light. We conclude by discussing the noise characteristics, working ranges, and types of errors made by the different sensing modalities.
Tasks
Published 2019-02-18
URL http://arxiv.org/abs/1902.06835v1
PDF http://arxiv.org/pdf/1902.06835v1.pdf
PWC https://paperswithcode.com/paper/commodity-rgb-d-sensors-data-acquisition
Repo
Framework

Using Temporal and Topological Features for Intrusion Detection in Operational Networks

Title Using Temporal and Topological Features for Intrusion Detection in Operational Networks
Authors Simon D. Duque Anton, Daniel Fraunholz, Hans Dieter Schotten
Abstract Until two decades ago, industrial networks were deemed secure due to physical separation from public networks. An abundance of successful attacks proved that assumption wrong. Intrusion detection solutions for industrial application need to meet certain requirements that differ from home- and office-environments, such as working without feedback to the process and compatibility with legacy systems. Industrial systems are commonly used for several decades, updates are often difficult and expensive. Furthermore, most industrial protocols do not have inherent authentication or encryption mechanisms, allowing for easy lateral movement of an intruder once the perimeter is breached. In this work, an algorithm for motif discovery in time series, Matrix Profiles, is used to detect outliers in the timing behaviour of an industrial process. This process was monitored in an experimental environment, containing ground truth labels after attacks were performed. Furthermore, the graph representations of a different industrial data set that has been emulated are used to detect malicious activities. These activities can be derived from anomalous communication patterns, represented as edges in the graph. Finally, an integration concept for both methods is proposed.
Tasks Intrusion Detection, Time Series
Published 2019-07-09
URL https://arxiv.org/abs/1907.04098v1
PDF https://arxiv.org/pdf/1907.04098v1.pdf
PWC https://paperswithcode.com/paper/using-temporal-and-topological-features-for
Repo
Framework
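
A naive sketch of the matrix-profile idea used above for timing-behaviour anomalies: for each window, compute the z-normalized distance to its nearest non-trivial match elsewhere in the series; unusually large values flag discords (outliers). This brute-force version is quadratic in the series length and is for illustration only; the window length and the synthetic signal are arbitrary choices.

```python
import numpy as np

def matrix_profile_naive(ts, m):
    """Brute-force matrix profile: for each length-m window, the z-normalized
    Euclidean distance to its nearest neighbour (excluding trivial matches)."""
    n = ts.size - m + 1
    windows = np.lib.stride_tricks.sliding_window_view(ts, m)
    z = (windows - windows.mean(axis=1, keepdims=True)) / (
        windows.std(axis=1, keepdims=True) + 1e-12)
    profile = np.full(n, np.inf)
    for i in range(n):
        d = np.linalg.norm(z - z[i], axis=1)
        d[max(0, i - m // 2): i + m // 2 + 1] = np.inf   # exclusion zone around i
        profile[i] = d.min()
    return profile

# Periodic "process timing" signal with one anomalous cycle injected
rng = np.random.default_rng(0)
t = np.arange(2000)
ts = np.sin(2 * np.pi * t / 50) + 0.1 * rng.normal(size=t.size)
ts[1200:1250] += np.linspace(0, 2, 50)        # anomaly: drifting timing behaviour

profile = matrix_profile_naive(ts, m=50)
print("discord (likely anomaly) starts near index:", int(np.argmax(profile)))
```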

Spatio-Temporal Pyramid Graph Convolutions for Human Action Recognition and Postural Assessment

Title Spatio-Temporal Pyramid Graph Convolutions for Human Action Recognition and Postural Assessment
Authors Behnoosh Parsa, Athma Narayanan, Behzad Dariush
Abstract Recognition of human actions and associated interactions with objects and the environment is an important problem in computer vision due to its potential applications in a variety of domains. The most versatile methods can generalize to various environments and deal with cluttered backgrounds, occlusions, and viewpoint variations. Among them, methods based on graph convolutional networks that extract features from the skeleton have demonstrated promising performance. In this paper, we propose a novel Spatio-Temporal Pyramid Graph Convolutional Network (ST-PGN) for online action recognition for ergonomic risk assessment that enables the use of features from all levels of the skeleton feature hierarchy. The proposed algorithm outperforms state-of-the-art action recognition algorithms tested on two public benchmark datasets typically used for postural assessment (TUM and UW-IOM). We also introduce a pipeline to enhance postural assessment methods with online action recognition techniques. Finally, the proposed algorithm is integrated with a traditional ergonomic risk index (REBA) to demonstrate the potential value for assessment of musculoskeletal disorders in occupational safety.
Tasks Temporal Action Localization
Published 2019-12-07
URL https://arxiv.org/abs/1912.03442v1
PDF https://arxiv.org/pdf/1912.03442v1.pdf
PWC https://paperswithcode.com/paper/spatio-temporal-pyramid-graph-convolutions
Repo
Framework
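
A minimal sketch of a single spatial graph convolution over skeleton joints, the basic building block that skeleton-based architectures such as the one above stack with temporal layers. The row-normalized adjacency with self-loops, the toy 5-joint chain, and the layer sizes are assumptions; this is not the paper's pyramid design.

```python
import torch
import torch.nn as nn

class SpatialGraphConv(nn.Module):
    """One spatial graph convolution over skeleton joints:
    H_out = relu(norm(A + I) @ H_in @ W), applied independently per frame."""

    def __init__(self, in_feats, out_feats, adjacency):
        super().__init__()
        A = adjacency + torch.eye(adjacency.shape[0])        # add self-loops
        d = A.sum(dim=1)
        self.register_buffer("A_norm", A / d.unsqueeze(1))   # row-normalized adjacency
        self.linear = nn.Linear(in_feats, out_feats)

    def forward(self, x):                  # x: (batch, frames, joints, in_feats)
        x = torch.einsum("ij,btjf->btif", self.A_norm, x)    # aggregate neighbouring joints
        return torch.relu(self.linear(x))

# Toy 5-joint "skeleton" chain: 0-1-2-3-4
A = torch.zeros(5, 5)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1.0

layer = SpatialGraphConv(in_feats=3, out_feats=16, adjacency=A)   # 3-D joint coordinates in
x = torch.randn(2, 30, 5, 3)               # 2 clips, 30 frames, 5 joints, (x, y, z)
print(layer(x).shape)                      # torch.Size([2, 30, 5, 16])
```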

Generative Adversarial Networks for Distributed Intrusion Detection in the Internet of Things

Title Generative Adversarial Networks for Distributed Intrusion Detection in the Internet of Things
Authors Aidin Ferdowsi, Walid Saad
Abstract To reap the benefits of the Internet of Things (IoT), it is imperative to secure the system against cyber attacks in order to enable mission critical and real-time applications. To this end, intrusion detection systems (IDSs) have been widely used to detect anomalies caused by a cyber attacker in IoT systems. However, due to the large-scale nature of the IoT, an IDS must operate in a distributed manner with minimum dependence on a central controller. Moreover, in many scenarios such as health and financial applications, the datasets are private and IoTDs may not intend to share such data. To this end, in this paper, a distributed generative adversarial network (GAN) is proposed to provide a fully distributed IDS for the IoT so as to detect anomalous behavior without reliance on any centralized controller. In this architecture, every IoTD can monitor its own data as well as neighbor IoTDs to detect internal and external attacks. In addition, the proposed distributed IDS does not require sharing the datasets between the IoTDs; thus, it can be implemented in IoT systems that preserve the privacy of user data, such as health monitoring systems or financial applications. It is shown analytically that the proposed distributed GAN achieves higher intrusion detection accuracy than a standalone IDS that has access to only a single IoTD dataset. Simulation results show that the proposed distributed GAN-based IDS has up to 20% higher accuracy, 25% higher precision, and 60% lower false positive rate compared to a standalone GAN-based IDS.
Tasks Intrusion Detection
Published 2019-06-03
URL https://arxiv.org/abs/1906.00567v1
PDF https://arxiv.org/pdf/1906.00567v1.pdf
PWC https://paperswithcode.com/paper/190600567
Repo
Framework
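
A hedged single-device sketch of the underlying idea: train a GAN on normal telemetry and use the discriminator's output as a normality score for incoming measurements. The synthetic "normal" distribution, network sizes, and training schedule are assumptions, and the distributed, neighbour-sharing aspect of the paper is not reproduced here.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Normal" telemetry for one IoT device: 2-D standardized features (placeholder distribution)
normal = torch.randn(5000, 2) * torch.tensor([1.0, 0.5]) + torch.tensor([0.0, 1.0])

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.LeakyReLU(0.2), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = normal[torch.randint(0, normal.shape[0], (128,))]
    fake = G(torch.randn(128, 8))

    # Discriminator step: distinguish real telemetry from generated samples
    d_loss = bce(D(real), torch.ones(128, 1)) + bce(D(fake.detach()), torch.zeros(128, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to fool the discriminator
    g_loss = bce(D(fake), torch.ones(128, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# At test time the discriminator output can serve as a normality score for new measurements.
suspicious = torch.tensor([[6.0, -4.0]])            # far from the normal operating region
print("score(normal sample):    ", round(torch.sigmoid(D(normal[:1])).item(), 3))
print("score(suspicious sample):", round(torch.sigmoid(D(suspicious)).item(), 3))
```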

Stacking and stability

Title Stacking and stability
Authors Nino Arsov, Martin Pavlovski, Ljupco Kocarev
Abstract Stacking is a general approach for combining multiple models toward greater predictive accuracy. It has found various applications across different domains, owing to its meta-learning nature. Our understanding of how and why stacking works, nevertheless, remains intuitive and lacks theoretical insight. In this paper, we use the stability of learning algorithms as an elemental analysis framework suitable for addressing the issue. To this end, we analyze the hypothesis stability of stacking, bag-stacking, and dag-stacking and establish a connection between bag-stacking and weighted bagging. We show that the hypothesis stability of stacking is a product of the hypothesis stability of each of the base models and the combiner. Moreover, in bag-stacking and dag-stacking, the hypothesis stability depends on the sampling strategy used to generate the training set replicates. Our findings suggest that 1) subsampling and bootstrap sampling improve the stability of stacking, and 2) stacking improves the stability of both subbagging and bagging.
Tasks Meta-Learning
Published 2019-01-26
URL http://arxiv.org/abs/1901.09134v1
PDF http://arxiv.org/pdf/1901.09134v1.pdf
PWC https://paperswithcode.com/paper/stacking-and-stability
Repo
Framework
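
A minimal scikit-learn sketch of stacking, and of one reasonable reading of bag-stacking (base models trained on bootstrap replicates, approximated here by bagging each base learner), with a logistic-regression combiner. The dataset, base models, and hyperparameters are placeholders; the paper's hypothesis-stability analysis is theoretical and is not reproduced.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Plain stacking: a logistic-regression combiner over heterogeneous base models
stack = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier(random_state=0)),
                ("knn", KNeighborsClassifier())],
    final_estimator=LogisticRegression(),
)

# "Bag-stacking": each base model is itself trained on bootstrap replicates (bagged),
# which is one of the resampling schemes whose stability the paper analyzes.
bag_stack = StackingClassifier(
    estimators=[("bag_tree", BaggingClassifier(DecisionTreeClassifier(random_state=0),
                                               n_estimators=25, random_state=0)),
                ("bag_knn", BaggingClassifier(KNeighborsClassifier(),
                                              n_estimators=25, random_state=0))],
    final_estimator=LogisticRegression(),
)

for name, model in [("stacking", stack), ("bag-stacking", bag_stack)]:
    model.fit(X_tr, y_tr)
    print(name, "test accuracy:", round(model.score(X_te, y_te), 3))
```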