July 27, 2019

3299 words 16 mins read

Paper Group ANR 641

Text Recognition in Scene Image and Video Frame using Color Channel Selection

Title Text Recognition in Scene Image and Video Frame using Color Channel Selection
Authors Ayan Kumar Bhunia, Gautam Kumar, Partha Pratim Roy, R. Balasubramanian, Umapada Pal
Abstract In recent years, recognition of text from natural scene images and video frames has received increasing attention among researchers. Text recognition in such scenarios is difficult because of low resolution, blurring, complex backgrounds, varied fonts and colors, and variable alignment of text within images and video frames. Most current approaches first apply a binarization algorithm to convert the image into a binary image, and then apply OCR to obtain the recognition result. In this paper, we present a novel approach based on color channel selection for text recognition from scene images and video frames. In this approach, a color channel is first selected automatically, and text recognition is then performed on the selected channel. Our text recognition framework is based on a Hidden Markov Model (HMM) that uses Pyramidal Histogram of Oriented Gradient features extracted from the selected color channel. For each sliding window, our color-channel selection approach analyzes the image properties and applies a multi-label Support Vector Machine (SVM) classifier to select the color channel expected to give the best recognition result in that window. Selecting a color channel per sliding window has been found to be more effective than using a single color channel for the whole word image. Five different features have been analyzed for multi-label SVM based color channel selection, among which a wavelet-transform-based feature performs best. Our framework has been tested on different publicly available scene/video text image datasets. For Devanagari script, we collected our own dataset. The experimental results are encouraging and show the advantage of the proposed method.
Tasks Optical Character Recognition
Published 2017-07-21
URL http://arxiv.org/abs/1707.06810v2
PDF http://arxiv.org/pdf/1707.06810v2.pdf
PWC https://paperswithcode.com/paper/text-recognition-in-scene-image-and-video
Repo
Framework
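As a toy illustration of the per-window selection idea, the sketch below replaces the paper's multi-label SVM with a simple local-contrast heuristic: for each horizontal sliding window it picks the RGB channel with the highest standard deviation. The function name and the heuristic are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

def select_channels(word_img, win=8, stride=4):
    """For each horizontal sliding window over an H x W x 3 word image,
    return the index of the channel with the highest local contrast
    (standard deviation) as a crude stand-in for the SVM selector."""
    h, w, _ = word_img.shape
    choices = []
    for x0 in range(0, w - win + 1, stride):
        patch = word_img[:, x0:x0 + win, :].astype(float)
        contrast = patch.reshape(-1, 3).std(axis=0)   # per-channel std dev
        choices.append(int(contrast.argmax()))
    return choices
```

In the paper, the per-window decision instead comes from a multi-label SVM over five candidate feature types, but the control flow (slide, score channels, pick one per window) is the same.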

Fuzzy finite element model updating using metaheuristic optimization algorithms

Title Fuzzy finite element model updating using metaheuristic optimization algorithms
Authors I. Boulkaibet, T. Marwala, M. I. Friswell, H. Haddad Khodaparast, S. Adhikari
Abstract In this paper, a non-probabilistic method based on fuzzy logic is used to update finite element models (FEMs). Model updating techniques use the measured data to improve the accuracy of numerical models of structures. However, the measured data are contaminated with experimental noise and the models are inaccurate due to randomness in the parameters. This kind of aleatory uncertainty is irreducible, and may decrease the accuracy of the finite element model updating process. However, uncertainty quantification methods can be used to identify the uncertainty in the updating parameters. In this paper, the uncertainties associated with the modal parameters are defined as fuzzy membership functions, while the model updating procedure is defined as an optimization problem at each α-cut level. To determine the membership functions of the updated parameters, an objective function is defined and minimized using two metaheuristic optimization algorithms: ant colony optimization (ACO) and particle swarm optimization (PSO). A structural example is used to investigate the accuracy of the fuzzy model updating strategy using the PSO and ACO algorithms. Furthermore, the results obtained by the fuzzy finite element model updating are compared with the Bayesian model updating results.
Tasks
Published 2017-01-03
URL http://arxiv.org/abs/1701.00833v1
PDF http://arxiv.org/pdf/1701.00833v1.pdf
PWC https://paperswithcode.com/paper/fuzzy-finite-element-model-updating-using
Repo
Framework
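The PSO step of the updating procedure can be sketched generically. The code below is a minimal particle swarm optimizer for a black-box objective; the inertia and acceleration constants are common textbook defaults, not values from the paper, and the fuzzy α-cut machinery around the optimizer is omitted.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=100,
        inertia=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: each particle tracks its personal
    best, the swarm tracks a global best, and velocities blend the two."""
    rng = np.random.default_rng(seed)
    lo, hi = (np.asarray(b, dtype=float) for b in bounds)
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, *x.shape))
        v = inertia * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, float(pbest_f.min())
```

In the paper's setting, `objective` would measure the discrepancy between measured and predicted modal parameters at a given α-cut level.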

Towards multiple kernel principal component analysis for integrative analysis of tumor samples

Title Towards multiple kernel principal component analysis for integrative analysis of tumor samples
Authors Nora K. Speicher, Nico Pfeifer
Abstract Personalized treatment of patients based on tissue-specific cancer subtypes has strongly increased the efficacy of the chosen therapies. Even though the amount of data measured for cancer patients has increased in recent years, most cancer subtypes are still diagnosed based on individual data sources (e.g. gene expression data). We propose an unsupervised data integration method based on kernel principal component analysis. Principal component analysis is one of the most widely used techniques in data analysis. Unfortunately, the straightforward multiple-kernel extension of this method leads to the use of only one of the input matrices, which does not fit the goal of gaining information from all data sources. Therefore, we present a scoring function to determine the impact of each input matrix. The approach enables visualizing the integrated data and subsequent clustering for cancer subtype identification. Due to the nature of the method, no free parameters have to be set. We apply the methodology to five different cancer data sets and demonstrate its advantages in terms of results and usability.
Tasks
Published 2017-01-02
URL http://arxiv.org/abs/1701.00422v2
PDF http://arxiv.org/pdf/1701.00422v2.pdf
PWC https://paperswithcode.com/paper/towards-multiple-kernel-principal-component
Repo
Framework
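A minimal sketch of the kernel-combination step, assuming the per-kernel weights are already known (the paper's scoring function, which determines these weights automatically, is not reproduced here):

```python
import numpy as np

def multiple_kernel_pca(kernels, weights, n_components=2):
    """Weighted sum of precomputed kernel matrices, double centering, then
    kernel-PCA projections from the top eigenpairs."""
    K = sum(wt * Km for wt, Km in zip(weights, kernels))
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    Kc = H @ K @ H
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]  # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))
```

Each `kernels[i]` would be built from one data source (e.g. gene expression, methylation), so the weighted sum is where the integration happens.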

Pointwise Convolutional Neural Networks

Title Pointwise Convolutional Neural Networks
Authors Binh-Son Hua, Minh-Khoi Tran, Sai-Kit Yeung
Abstract Deep learning with 3D data such as reconstructed point clouds and CAD models has received considerable research interest recently. However, the use of point clouds with convolutional neural networks has so far not been fully explored. In this paper, we present a convolutional neural network for semantic segmentation and object recognition with 3D point clouds. At the core of our network is pointwise convolution, a new convolution operator that can be applied at each point of a point cloud. Our fully convolutional network design, while being surprisingly simple to implement, yields competitive accuracy in both the semantic segmentation and object recognition tasks.
Tasks Object Recognition, Semantic Segmentation
Published 2017-12-14
URL http://arxiv.org/abs/1712.05245v2
PDF http://arxiv.org/pdf/1712.05245v2.pdf
PWC https://paperswithcode.com/paper/pointwise-convolutional-neural-networks
Repo
Framework
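A naive (quadratic-time) sketch of the pointwise convolution idea: every point gathers the mean feature of its neighbours in a few radial bins of the kernel support and mixes each bin with learned weights. The two-bin layout and the shapes here are illustrative assumptions, not the paper's exact operator.

```python
import numpy as np

def pointwise_conv(points, feats, weights, radius=1.0):
    """At every point, average neighbour features in two radial bins of the
    kernel support (inner and outer half) and mix each bin with its own
    weight matrix. weights has shape (2, c_in, c_out)."""
    n = len(points)
    out = np.zeros((n, weights.shape[2]))
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    for i in range(n):
        for b, (lo, hi) in enumerate([(0.0, radius / 2), (radius / 2, radius)]):
            in_bin = (dists[i] >= lo) & (dists[i] < hi)
            if in_bin.any():
                out[i] += feats[in_bin].mean(axis=0) @ weights[b]
    return out
```

Because the operator is defined at every point, stacking such layers yields a fully convolutional network over the raw point set, with no voxelization.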

Be Precise or Fuzzy: Learning the Meaning of Cardinals and Quantifiers from Vision

Title Be Precise or Fuzzy: Learning the Meaning of Cardinals and Quantifiers from Vision
Authors Sandro Pezzelle, Marco Marelli, Raffaella Bernardi
Abstract People can refer to quantities in a visual scene by using either exact cardinals (e.g. one, two, three) or natural language quantifiers (e.g. few, most, all). In humans, these two processes underlie fairly different cognitive and neural mechanisms. Inspired by this evidence, the present study proposes two models for learning the objective meaning of cardinals and quantifiers from visual scenes containing multiple objects. We show that a model capitalizing on a ‘fuzzy’ measure of similarity is effective for learning quantifiers, whereas the learning of exact cardinals is better accomplished when information about number is provided.
Tasks
Published 2017-02-17
URL http://arxiv.org/abs/1702.05270v1
PDF http://arxiv.org/pdf/1702.05270v1.pdf
PWC https://paperswithcode.com/paper/be-precise-or-fuzzy-learning-the-meaning-of
Repo
Framework

Stochastic Submodular Maximization: The Case of Coverage Functions

Title Stochastic Submodular Maximization: The Case of Coverage Functions
Authors Mohammad Reza Karimi, Mario Lucic, Hamed Hassani, Andreas Krause
Abstract Stochastic optimization of continuous objectives is at the heart of modern machine learning. However, many important problems are of discrete nature and often involve submodular objectives. We seek to unleash the power of stochastic continuous optimization, namely stochastic gradient descent and its variants, to such discrete problems. We first introduce the problem of stochastic submodular optimization, where one needs to optimize a submodular objective which is given as an expectation. Our model captures situations where the discrete objective arises as an empirical risk (e.g., in the case of exemplar-based clustering), or is given as an explicit stochastic model (e.g., in the case of influence maximization in social networks). By exploiting the fact that common extensions act linearly on the class of submodular functions, we employ projected stochastic gradient ascent and its variants in the continuous domain, and perform rounding to obtain discrete solutions. We focus on the rich and widely used family of weighted coverage functions. We show that our approach yields solutions that are guaranteed to match the optimal approximation guarantees, while reducing the computational cost by several orders of magnitude, as we demonstrate empirically.
Tasks Stochastic Optimization
Published 2017-11-05
URL http://arxiv.org/abs/1711.01566v1
PDF http://arxiv.org/pdf/1711.01566v1.pdf
PWC https://paperswithcode.com/paper/stochastic-submodular-maximization-the-case
Repo
Framework
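For weighted coverage, the pipeline described above (ascend a continuous relaxation, then round) can be sketched as follows. This toy version uses the concave relaxation sum_u w_u * min(1, coverage_u), a simplified projection, and naive top-k rounding, so it illustrates the idea rather than reproducing the paper's algorithm and its guarantees.

```python
import numpy as np

def max_coverage_sga(cover, weights, k, iters=200, lr=0.1):
    """(Sub)gradient ascent on the concave relaxation
    F(x) = sum_u w_u * min(1, sum_{v covers u} x_v) of weighted coverage,
    followed by naive top-k rounding. cover[u, v] = 1 iff set v covers
    element u. The projection onto {0 <= x <= 1, sum(x) <= k} is simplified
    to clipping plus a uniform shift."""
    n = cover.shape[1]
    x = np.full(n, k / n)                        # fractional solution
    for _ in range(iters):
        load = cover @ x                         # fractional coverage per element
        g = cover.T @ (weights * (load < 1.0))   # subgradient of the relaxation
        x = np.clip(x + lr * g, 0.0, 1.0)
        if x.sum() > k:                          # approximate projection
            x = np.clip(x - (x.sum() - k) / n, 0.0, 1.0)
    return set(np.argsort(x)[::-1][:k].tolist())
```

In the stochastic setting the exact subgradient would be replaced by an unbiased estimate built from sampled elements, which is precisely where stochastic gradient methods enter.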

Follow the Compressed Leader: Faster Online Learning of Eigenvectors and Faster MMWU

Title Follow the Compressed Leader: Faster Online Learning of Eigenvectors and Faster MMWU
Authors Zeyuan Allen-Zhu, Yuanzhi Li
Abstract The online problem of computing the top eigenvector is fundamental to machine learning. In both adversarial and stochastic settings, previous results (such as matrix multiplicative weight update, follow the regularized leader, follow the compressed leader, block power method) either achieve optimal regret but run slowly, or run fast at the expense of losing a $\sqrt{d}$ factor in total regret, where $d$ is the matrix dimension. We propose a $\textit{follow-the-compressed-leader (FTCL)}$ framework which achieves optimal regret without sacrificing the running time. Our idea is to “compress” the matrix strategy to dimension 3 in the adversarial setting, or dimension 1 in the stochastic setting. These respectively resolve two open questions regarding the design of optimal and efficient algorithms for the online eigenvector problem.
Tasks
Published 2017-01-06
URL http://arxiv.org/abs/1701.01722v3
PDF http://arxiv.org/pdf/1701.01722v3.pdf
PWC https://paperswithcode.com/paper/follow-the-compressed-leader-faster-online
Repo
Framework
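For context, the classic streaming baseline for the online top-eigenvector problem is Oja's rule, sketched below; the FTCL framework targets the regret/running-time trade-offs that such simple iterations leave open. The learning rate and update form here are generic textbook choices, not taken from the paper.

```python
import numpy as np

def oja_top_eigvec(samples, lr=0.02, seed=0):
    """Oja's streaming rule for the top eigenvector of the data covariance:
    after each sample a, step in the direction (a a^T) w and renormalise."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(samples.shape[1])
    w /= np.linalg.norm(w)
    for a in samples:
        w = w + lr * a * (a @ w)     # rank-one power-iteration step
        w /= np.linalg.norm(w)
    return w
```

Each update costs only O(d), which is the running-time regime the fast-but-lossy prior methods operate in.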

Revisiting the Master-Slave Architecture in Multi-Agent Deep Reinforcement Learning

Title Revisiting the Master-Slave Architecture in Multi-Agent Deep Reinforcement Learning
Authors Xiangyu Kong, Bo Xin, Fangchen Liu, Yizhou Wang
Abstract Many tasks in artificial intelligence require the collaboration of multiple agents. We examine deep reinforcement learning for multi-agent domains. Recent research efforts often take the form of two seemingly conflicting perspectives: the decentralized perspective, where each agent is supposed to have its own controller, and the centralized perspective, where one assumes there is a larger model controlling all agents. In this regard, we revisit the idea of the master-slave architecture by incorporating both perspectives within one framework. Such a hierarchical structure naturally combines the advantages of both. The idea of combining the two perspectives is intuitive and can be well motivated from many real-world systems; however, out of a variety of possible realizations, we highlight three key ingredients: composed action representation, learnable communication, and independent reasoning. With network designs that facilitate these explicitly, our proposal consistently outperforms the latest competing methods both in synthetic experiments and when applied to challenging StarCraft micromanagement tasks.
Tasks Starcraft
Published 2017-12-20
URL http://arxiv.org/abs/1712.07305v1
PDF http://arxiv.org/pdf/1712.07305v1.pdf
PWC https://paperswithcode.com/paper/revisiting-the-master-slave-architecture-in
Repo
Framework

Relief-Based Feature Selection: Introduction and Review

Title Relief-Based Feature Selection: Introduction and Review
Authors Ryan J. Urbanowicz, Melissa Meeker, William LaCava, Randal S. Olson, Jason H. Moore
Abstract Feature selection plays a critical role in biomedical data mining, driven by increasing feature dimensionality in target problems and growing interest in advanced but computationally expensive methodologies able to model complex associations. Specifically, there is a need for feature selection methods that are computationally efficient, yet sensitive to complex patterns of association (e.g. interactions), so that informative features are not mistakenly eliminated prior to downstream modeling. This paper focuses on Relief-based algorithms (RBAs), a unique family of filter-style feature selection algorithms that have gained appeal by striking an effective balance between these objectives while flexibly adapting to various data characteristics, e.g. classification vs. regression. First, this work broadly examines types of feature selection and defines RBAs within that context. Next, we introduce the original Relief algorithm and associated concepts, emphasizing the intuition behind how it works, how the feature weights it generates can be interpreted, and why it is sensitive to feature interactions without evaluating combinations of features. Lastly, we include an expansive review of RBA methodological research beyond Relief and its popular descendant, ReliefF. In particular, we characterize branches of RBA research, and provide comparative summaries of RBA algorithms including contributions, strategies, functionality, time complexity, adaptation to key data characteristics, and software availability.
Tasks Feature Selection
Published 2017-11-22
URL http://arxiv.org/abs/1711.08421v2
PDF http://arxiv.org/pdf/1711.08421v2.pdf
PWC https://paperswithcode.com/paper/relief-based-feature-selection-introduction
Repo
Framework
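The intuition behind the original Relief algorithm fits in a few lines: sample an instance, find its nearest hit (same class) and nearest miss (other class), and update each feature weight by how much the feature separates the miss but not the hit. The sketch below assumes binary classes and numeric features, which is the setting of the original algorithm.

```python
import numpy as np

def relief(X, y, n_iter=100, seed=0):
    """Original Relief: weights grow for features that differ on the nearest
    miss and shrink for features that differ on the nearest hit."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    span = X.max(axis=0) - X.min(axis=0)   # per-feature range for normalisation
    span[span == 0] = 1.0
    for _ in range(n_iter):
        i = rng.integers(n)
        dist = np.abs(X - X[i]).sum(axis=1)
        dist[i] = np.inf                   # exclude the instance itself
        hit = np.argmin(np.where(y == y[i], dist, np.inf))
        miss = np.argmin(np.where(y != y[i], dist, np.inf))
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / span
    return w / n_iter
```

Note that no feature combinations are ever enumerated: interactions surface because the nearest-neighbor search is done in the full feature space, which is the sensitivity the review emphasizes.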

Accelerating Stochastic Gradient Descent For Least Squares Regression

Title Accelerating Stochastic Gradient Descent For Least Squares Regression
Authors Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, Aaron Sidford
Abstract There is widespread sentiment that it is not possible to effectively utilize fast gradient methods (e.g. Nesterov’s acceleration, conjugate gradient, heavy ball) for the purposes of stochastic optimization due to their instability and error accumulation, a notion made precise in d’Aspremont 2008 and Devolder, Glineur, and Nesterov 2014. This work considers these issues for the special case of stochastic approximation for the least squares regression problem, and our main result refutes the conventional wisdom by showing that acceleration can be made robust to statistical errors. In particular, this work introduces an accelerated stochastic gradient method that provably achieves the minimax optimal statistical risk faster than stochastic gradient descent. Critical to the analysis is a sharp characterization of accelerated stochastic gradient descent as a stochastic process. We hope this characterization gives insights towards the broader question of designing simple and effective accelerated stochastic methods for more general convex and non-convex optimization problems.
Tasks Stochastic Optimization
Published 2017-04-26
URL http://arxiv.org/abs/1704.08227v2
PDF http://arxiv.org/pdf/1704.08227v2.pdf
PWC https://paperswithcode.com/paper/accelerating-stochastic-gradient-descent-for
Repo
Framework
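For contrast with the paper's specific accelerated estimator (not reproduced here), a generic momentum-accelerated SGD on the least squares objective looks like the sketch below; the hyperparameters are illustrative.

```python
import numpy as np

def accelerated_sgd_ls(A, b, lr=0.01, momentum=0.5, epochs=100, seed=0):
    """SGD with Nesterov-style momentum on 0.5 * ||Ax - b||^2, sampling one
    row of A per step."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x, v = np.zeros(d), np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            look = x + momentum * v            # look-ahead point
            g = (A[i] @ look - b[i]) * A[i]    # stochastic gradient at look-ahead
            v = momentum * v - lr * g
            x = x + v
    return x
```

The instability the abstract refers to is the tendency of such momentum schemes to amplify gradient noise; the paper's contribution is showing that a carefully designed accelerated stochastic method avoids this for least squares.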

A Fast Algorithm Based on a Sylvester-like Equation for LS Regression with GMRF Prior

Title A Fast Algorithm Based on a Sylvester-like Equation for LS Regression with GMRF Prior
Authors Qi Wei, Emilie Chouzenoux, Jean-Yves Tourneret, Jean-Christophe Pesquet
Abstract This paper presents a fast approach for penalized least squares (LS) regression problems using a 2D Gaussian Markov random field (GMRF) prior. More precisely, the computation of the proximity operator of the LS criterion regularized by different GMRF potentials is formulated as solving a Sylvester-like matrix equation. By exploiting the structural properties of GMRFs, this matrix equation is solved columnwise in an analytical way. The proposed algorithm can be embedded into a wide range of proximal algorithms to solve LS regression problems including a convex penalty. Experiments carried out on a constrained LS regression problem arising in a multichannel image processing application provide evidence that an alternating direction method of multipliers performs quite efficiently in this context.
Tasks
Published 2017-09-18
URL http://arxiv.org/abs/1709.06178v2
PDF http://arxiv.org/pdf/1709.06178v2.pdf
PWC https://paperswithcode.com/paper/a-fast-algorithm-based-on-a-sylvester-like
Repo
Framework
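The core linear-algebra step, solving a Sylvester-like equation A X + X B = C, can be sketched for small problems via the Kronecker-product identity below. Forming this dense Kronecker system is exactly what the paper's analytical columnwise solver avoids at scale.

```python
import numpy as np

def solve_sylvester_small(A, B, C):
    """Solve A X + X B = C via vec(A X + X B) = (I (x) A + B^T (x) I) vec(X),
    using column-major (Fortran-order) vectorisation. Only practical for
    small matrices: the system is (n*m) x (n*m)."""
    n, m = C.shape
    M = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
    x = np.linalg.solve(M, C.flatten(order="F"))
    return x.reshape((n, m), order="F")
```

In the prox computation described above, A and B would be built from the GMRF potential (e.g. difference operators along each image dimension) plus an identity term from the quadratic data-fit.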

Summarization of ICU Patient Motion from Multimodal Multiview Videos

Title Summarization of ICU Patient Motion from Multimodal Multiview Videos
Authors Carlos Torres, Kenneth Rose, Jeffrey C. Fried, B. S. Manjunath
Abstract Clinical observations indicate that during critical care at hospitals, patients' sleep positioning and motion affect recovery. Unfortunately, there is no formal medical protocol to record, quantify, and analyze patient motion. A small number of clinical studies use manual analysis of sleep poses and motion recordings to support the medical benefits of patient positioning and motion monitoring. Manual processes are not scalable, are prone to human error, and strain an already taxed healthcare workforce. This study introduces DECU (Deep Eye-CU): an autonomous multimodal multiview system which addresses these issues by autonomously monitoring healthcare environments and enabling the recording and analysis of patient sleep poses and motion. DECU uses three RGB-D cameras to monitor patient motion in a medical Intensive Care Unit (ICU). The algorithms in DECU estimate pose direction at different temporal resolutions and use keyframes to efficiently represent pose transition dynamics. DECU combines deep features computed from the data with a modified version of the Hidden Markov Model to more flexibly model sleep pose duration, analyze pose patterns, and summarize patient motion. Extensive experimental results are presented. The performance of DECU is evaluated in ideal (BC: Bright and Clear/occlusion-free) and natural (DO: Dark and Occluded) scenarios at two motion resolutions in a mock-up and a real ICU. The results indicate that deep features allow DECU to match the classification performance of engineered features in BC scenes and increase the accuracy by up to 8% in DO scenes. In addition, the overall pose history summarization tracing accuracy shows an average detection rate of 85% in BC and of 76% in DO scenes. The proposed keyframe estimation algorithm allows DECU to reach an average 78% transition classification accuracy.
Tasks
Published 2017-06-28
URL http://arxiv.org/abs/1706.09430v1
PDF http://arxiv.org/pdf/1706.09430v1.pdf
PWC https://paperswithcode.com/paper/summarization-of-icu-patient-motion-from
Repo
Framework

SLEEPNET: Automated Sleep Staging System via Deep Learning

Title SLEEPNET: Automated Sleep Staging System via Deep Learning
Authors Siddharth Biswal, Joshua Kulas, Haoqi Sun, Balaji Goparaju, M Brandon Westover, Matt T Bianchi, Jimeng Sun
Abstract Sleep disorders, such as sleep apnea, parasomnias, and hypersomnia, affect 50-70 million adults in the United States (Hillman et al., 2006). Overnight polysomnography (PSG), including brain monitoring using electroencephalography (EEG), is a central component of the diagnostic evaluation for sleep disorders. While PSG is conventionally performed by trained technologists, the recent rise of powerful neural network learning algorithms combined with large physiological datasets offers the possibility of automation, potentially making expert-level sleep analysis more widely available. We propose SLEEPNET (Sleep EEG neural network), a deployed annotation tool for sleep staging. SLEEPNET uses a deep recurrent neural network trained on the largest sleep physiology database assembled to date, consisting of PSGs from over 10,000 patients from the Massachusetts General Hospital (MGH) Sleep Laboratory. SLEEPNET achieves human-level annotation performance on an independent test set of 1,000 EEGs, with an average accuracy of 85.76% and algorithm-expert inter-rater agreement (IRA) of kappa = 79.46%, comparable to expert-expert IRA.
Tasks EEG
Published 2017-07-26
URL http://arxiv.org/abs/1707.08262v1
PDF http://arxiv.org/pdf/1707.08262v1.pdf
PWC https://paperswithcode.com/paper/sleepnet-automated-sleep-staging-system-via
Repo
Framework

A Composite Quantile Fourier Neural Network for Multi-Horizon Probabilistic Forecasting

Title A Composite Quantile Fourier Neural Network for Multi-Horizon Probabilistic Forecasting
Authors Kostas Hatalis, Shalinee Kishore
Abstract A novel quantile Fourier neural network is presented for nonparametric probabilistic forecasting. Predictions are provided in the form of composite quantiles, using time as the only input to the model. This is effectively a form of extrapolation-based quantile regression applied to forecasting. Empirical results show that for time series with clear seasonality and trend, the model provides high-quality probabilistic predictions. This work introduces a new class of forecasting models that use only time as input, in contrast to models such as autoregressive models that use past data. Extrapolation-based regression has not previously been studied for probabilistic forecasting.
Tasks Time Series
Published 2017-12-27
URL http://arxiv.org/abs/1712.09641v1
PDF http://arxiv.org/pdf/1712.09641v1.pdf
PWC https://paperswithcode.com/paper/a-composite-quantile-fourier-neural-network
Repo
Framework
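A linear-in-parameters sketch of the idea, with Fourier features of time and a composite (summed) pinball loss, is shown below; the actual paper uses a neural network, and the feature count, learning rate, and optimizer here are assumptions.

```python
import numpy as np

def fit_quantile_fourier(t, y, quantiles, n_freq=3, lr=0.05, iters=2000, seed=0):
    """Fit one Fourier-feature linear model per quantile by full-batch
    (sub)gradient descent on the composite pinball loss. Quantile crossing
    is not handled in this sketch."""
    rng = np.random.default_rng(seed)
    ks = np.arange(1, n_freq + 1)
    # design matrix: [1, sin(2*pi*k*t), cos(2*pi*k*t)] for k = 1..n_freq
    Phi = np.column_stack([np.ones_like(t)] +
                          [f(2 * np.pi * k * t) for k in ks for f in (np.sin, np.cos)])
    q = np.asarray(quantiles, dtype=float)
    W = 0.01 * rng.standard_normal((Phi.shape[1], q.size))
    for _ in range(iters):
        err = y[:, None] - Phi @ W                   # (n_samples, n_quantiles)
        grad_pred = np.where(err > 0, -q, 1.0 - q)   # pinball subgradient
        W -= lr * (Phi.T @ grad_pred) / len(t)
    return W, Phi
```

Because the features are periodic functions of time alone, evaluating `Phi` at future time points extrapolates the fitted quantiles forward, which is the paper's multi-horizon forecasting mechanism.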

Multi-scale Image Fusion Between Pre-operative Clinical CT and X-ray Microtomography of Lung Pathology

Title Multi-scale Image Fusion Between Pre-operative Clinical CT and X-ray Microtomography of Lung Pathology
Authors Holger R. Roth, Kai Nagara, Hirohisa Oda, Masahiro Oda, Tomoshi Sugiyama, Shota Nakamura, Kensaku Mori
Abstract Computational anatomy allows the quantitative analysis of organs in medical images. However, most analysis is constrained to the millimeter scale because of the limited resolution of clinical computed tomography (CT). X-ray microtomography ($\mu$CT), on the other hand, allows imaging of ex-vivo tissues at a resolution of tens of microns. In this work, we use clinical CT to image lung cancer patients before partial pneumonectomy (resection of pathological lung tissue). The resected specimen is prepared for $\mu$CT imaging at a voxel resolution of 50 $\mu$m (0.05 mm). This high-resolution image of the lung cancer tissue provides further insight into tumor growth and categorization. To make full use of this additional information, image fusion (registration) needs to be performed in order to re-align the $\mu$CT image with clinical CT. We developed a multi-scale non-rigid registration approach. After manual initialization using a few landmark points and rigid alignment, several levels of non-rigid registration between down-sampled (in the case of $\mu$CT) and up-sampled (in the case of clinical CT) representations of the image are performed. Any non-lung tissue is ignored during the computation of the similarity measure used to guide the registration during optimization. We are able to recover the volume differences introduced by the resection and preparation of the lung specimen. The average ($\pm$ std. dev.) minimum surface distance between $\mu$CT and clinical CT at the resected lung surface is reduced from 3.3 $\pm$ 2.9 mm (range: [0.1, 15.9] mm) to 2.3 $\pm$ 2.8 mm (range: [0.0, 15.3] mm). The alignment of clinical CT with $\mu$CT will allow further registration with even finer resolutions of $\mu$CT (up to 10 $\mu$m resolution) and ultimately with histopathological microscopy images for further macro-to-micro image fusion that can aid medical image analysis.
Tasks Computed Tomography (CT)
Published 2017-02-27
URL http://arxiv.org/abs/1702.08155v1
PDF http://arxiv.org/pdf/1702.08155v1.pdf
PWC https://paperswithcode.com/paper/multi-scale-image-fusion-between-pre
Repo
Framework
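The masked-similarity idea, scoring the alignment only over lung tissue so that surrounding material does not bias the optimization, can be sketched in a few lines; the paper does not specify SSD as its similarity measure, so this metric is an illustrative choice.

```python
import numpy as np

def masked_ssd(fixed, moving, mask):
    """Mean squared intensity difference restricted to a binary mask, so
    that voxels outside the region of interest do not influence the score."""
    d = (fixed - moving)[mask]
    return float((d * d).sum() / max(int(mask.sum()), 1))
```

During optimization, the registration would evaluate this score on the warped moving image at each candidate transform, with `mask` derived from a lung segmentation.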