May 5, 2019

3270 words 16 mins read

Paper Group ANR 498

Linear Classification of data with Support Vector Machines and Generalized Support Vector Machines. Design of false color palettes for grayscale reproduction. Pieces-of-parts for supervoxel segmentation with global context: Application to DCE-MRI tumour delineation. Model-Driven Feed-Forward Prediction for Manipulation of Deformable Objects. Simula …

Linear Classification of data with Support Vector Machines and Generalized Support Vector Machines

Title Linear Classification of data with Support Vector Machines and Generalized Support Vector Machines
Authors Xiaomin Qi, Sergei Silvestrov, Talat Nazir
Abstract In this paper, we study the support vector machine and introduce the notion of a generalized support vector machine for the classification of data. We show that the problem of the generalized support vector machine is equivalent to a generalized variational inequality problem, and we establish various results for the existence of solutions. Moreover, we provide several examples to support our results.
Tasks
Published 2016-05-31
URL http://arxiv.org/abs/1606.05664v1
PDF http://arxiv.org/pdf/1606.05664v1.pdf
PWC https://paperswithcode.com/paper/linear-classification-of-data-with-support
Repo
Framework
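The standard linear SVM the abstract builds on can be sketched with plain subgradient descent on the regularized hinge loss. This is an illustrative baseline only, not the paper's generalized-variational-inequality formulation:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Subgradient descent on  lam/2 * ||w||^2 + mean(max(0, 1 - y_i*(w.x_i + b))).
    Labels y must be in {-1, +1}."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1                       # margin violators
        grad_w = lam * w - (y[active, None] * X[active]).sum(axis=0) / n
        grad_b = -y[active].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Two well-separated Gaussian clusters as a sanity check
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
w, b = train_linear_svm(X, y)
accuracy = (np.sign(X @ w + b) == y).mean()
print(accuracy)  # 1.0 on this easily separable problem
```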

Design of false color palettes for grayscale reproduction

Title Design of false color palettes for grayscale reproduction
Authors Filip A. Sala
Abstract Designing a false color palette is fairly easy, but some effort is needed to achieve good dynamic range, contrast, and overall appearance. Such palettes are commonly used in scientific papers for presenting data. However, to lower the cost of the paper, most scientists choose to have the data printed in grayscale. The same applies to e-book readers based on e-ink, most of which are still grayscale. For the majority of false color palettes, grayscale reproduction results in an ambiguous mapping of the colors and may mislead the reader. In this article, the design of false color palettes suitable for grayscale reproduction is described. Because the luminance of these palettes changes monotonically, their grayscale representation is very similar to the data presented directly with a grayscale palette. Suggestions and examples of how to design such palettes are provided.
Tasks
Published 2016-02-06
URL http://arxiv.org/abs/1602.03206v2
PDF http://arxiv.org/pdf/1602.03206v2.pdf
PWC https://paperswithcode.com/paper/design-of-false-color-palettes-for-grayscale
Repo
Framework
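The design rule in the abstract - luminance that changes monotonically along the palette - can be checked numerically. The blend below (dark blue through green to bright yellow) and the Rec.601 luma weights are my assumptions for this sketch, not the paper's actual palettes:

```python
import numpy as np

def monotone_palette(n=256):
    """Hypothetical palette built so that luminance rises monotonically."""
    t = np.linspace(0.0, 1.0, n)
    r = np.clip(2 * t - 0.5, 0, 1)
    g = t
    b = np.clip(1 - 2 * t, 0, 1) * 0.8
    return np.stack([r, g, b], axis=1)

def luminance(rgb):
    # Rec.601 luma weights, a common model of grayscale reproduction
    return rgb @ np.array([0.299, 0.587, 0.114])

Y = luminance(monotone_palette())
print(bool(np.all(np.diff(Y) >= 0)))  # True: the grayscale mapping is unambiguous
```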

Pieces-of-parts for supervoxel segmentation with global context: Application to DCE-MRI tumour delineation

Title Pieces-of-parts for supervoxel segmentation with global context: Application to DCE-MRI tumour delineation
Authors Benjamin Irving, James M Franklin, Bartlomiej W Papiez, Ewan M Anderson, Ricky A Sharma, Fergus V Gleeson, Sir Michael Brady, Julia A Schnabel
Abstract Rectal tumour segmentation in dynamic contrast-enhanced MRI (DCE-MRI) is a challenging task, and an automated and consistent method would be highly desirable to improve the modelling and prediction of patient outcomes from tissue contrast enhancement characteristics - particularly in routine clinical practice. A framework is developed to automate DCE-MRI tumour segmentation, by introducing: perfusion-supervoxels to over-segment and classify DCE-MRI volumes using the dynamic contrast enhancement characteristics; and the pieces-of-parts graphical model, which adds global (anatomic) constraints that further refine the supervoxel components that comprise the tumour. The framework was evaluated on 23 DCE-MRI scans of patients with rectal adenocarcinomas, and achieved a voxelwise area under the receiver operating characteristic curve (AUC) of 0.97 compared to expert delineations. Creating a binary tumour segmentation, 21 of the 23 cases were segmented correctly with a median Dice similarity coefficient (DSC) of 0.63, which is close to the inter-rater variability of this challenging task. A second study is also included to demonstrate the method’s generalisability and achieved a DSC of 0.71. The framework achieves promising results for the underexplored area of rectal tumour segmentation in DCE-MRI, and the methods have potential to be applied to other DCE-MRI and supervoxel segmentation problems.
Tasks
Published 2016-04-18
URL http://arxiv.org/abs/1604.05210v1
PDF http://arxiv.org/pdf/1604.05210v1.pdf
PWC https://paperswithcode.com/paper/pieces-of-parts-for-supervoxel-segmentation
Repo
Framework
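The idea of grouping voxels by their contrast-enhancement behaviour can be illustrated with a toy 2-means clustering of time-intensity curves. This is only a stand-in for the paper's perfusion-supervoxels, which additionally over-segment in space and are refined by the pieces-of-parts graphical model; the synthetic curves and deterministic seeding are my choices:

```python
import numpy as np

def two_means(curves, iters=20):
    """2-means over voxelwise enhancement time curves. Centers are seeded from
    the first and last curves for determinism in this sketch."""
    centers = curves[[0, -1]].astype(float).copy()
    for _ in range(iters):
        d = np.linalg.norm(curves[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(2):
            if np.any(labels == j):
                centers[j] = curves[labels == j].mean(axis=0)
    return labels

t = np.linspace(0, 1, 10)
rng = np.random.default_rng(0)
fast = 1 - np.exp(-5 * t) + rng.normal(0, 0.02, (30, 10))   # tumour-like uptake
slow = 0.3 * t + rng.normal(0, 0.02, (30, 10))              # background
labels = two_means(np.vstack([fast, slow]))
print(labels[:30].std() == 0 and labels[30:].std() == 0)    # pure clusters
```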

Model-Driven Feed-Forward Prediction for Manipulation of Deformable Objects

Title Model-Driven Feed-Forward Prediction for Manipulation of Deformable Objects
Authors Yinxiao Li, Yan Wang, Yonghao Yue, Danfei Xu, Michael Case, Shih-Fu Chang, Eitan Grinspun, Peter Allen
Abstract Robotic manipulation of deformable objects is a difficult problem especially because of the complexity of the many different ways an object can deform. Searching such a high dimensional state space makes it difficult to recognize, track, and manipulate deformable objects. In this paper, we introduce a predictive, model-driven approach to address this challenge, using a pre-computed, simulated database of deformable object models. Mesh models of common deformable garments are simulated with the garments picked up in multiple different poses under gravity, and stored in a database for fast and efficient retrieval. To validate this approach, we developed a comprehensive pipeline for manipulating clothing as in a typical laundry task. First, the database is used for category and pose estimation for a garment in an arbitrary position. A fully featured 3D model of the garment is constructed in real-time and volumetric features are then used to obtain the most similar model in the database to predict the object category and pose. Second, the database can significantly benefit the manipulation of deformable objects via non-rigid registration, providing accurate correspondences between the reconstructed object model and the database models. Third, the accurate model simulation can also be used to optimize the trajectories for manipulation of deformable objects, such as the folding of garments. Extensive experimental results are shown for the tasks above using a variety of different clothing.
Tasks Pose Estimation
Published 2016-07-15
URL http://arxiv.org/abs/1607.04411v1
PDF http://arxiv.org/pdf/1607.04411v1.pdf
PWC https://paperswithcode.com/paper/model-driven-feed-forward-prediction-for
Repo
Framework
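The first stage of the pipeline - matching a query against a pre-computed database to predict category and pose - reduces to nearest-neighbour retrieval over feature vectors. All entries below are invented for illustration (the paper uses volumetric features of reconstructed 3D garment meshes, not these toy vectors):

```python
import numpy as np

# Hypothetical stand-in for the simulated-garment database: each stored model
# is summarized by a small feature vector keyed by (category, grasp pose).
database = {
    ("t-shirt", "shoulder"): np.array([0.9, 0.1, 0.3]),
    ("t-shirt", "hem"):      np.array([0.7, 0.4, 0.2]),
    ("pants",   "waist"):    np.array([0.2, 0.8, 0.6]),
    ("pants",   "leg"):      np.array([0.1, 0.9, 0.8]),
}

def predict_category_pose(query):
    """Nearest neighbour over the database predicts (category, pose)."""
    keys = list(database)
    feats = np.stack([database[k] for k in keys])
    return keys[int(np.argmin(np.linalg.norm(feats - query, axis=1)))]

print(predict_category_pose(np.array([0.85, 0.15, 0.25])))  # ('t-shirt', 'shoulder')
```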

Simulated Tornado Optimization

Title Simulated Tornado Optimization
Authors S. Hossein Hosseini, Tohid Nouri, Afshin Ebrahimi, S. Ali Hosseini
Abstract We propose a swarm-based optimization algorithm inspired by the air currents of a tornado. Two main air currents - spiral and updraft - are mimicked. Spiral motion is designed for exploration of new search areas, and updraft movement is deployed for exploitation of a promising candidate solution. Assigning just one search direction to each particle at each iteration leads to low computational complexity of the proposed algorithm with respect to conventional algorithms. Apart from the step size parameters, the only parameter of the proposed algorithm, called the tornado diameter, can be efficiently adjusted by randomization. Numerical results on six different benchmark cost functions indicate comparable and, in some cases, better performance of the proposed algorithm with respect to some other metaheuristics.
Tasks
Published 2016-12-31
URL http://arxiv.org/abs/1701.00736v1
PDF http://arxiv.org/pdf/1701.00736v1.pdf
PWC https://paperswithcode.com/paper/simulated-tornado-optimization
Repo
Framework
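The one-move-per-particle structure described in the abstract can be sketched as follows. This is a loose 2-D interpretation under my own assumptions (shrinking-radius schedule, 50/50 move choice, greedy acceptance), not the paper's specified algorithm:

```python
import numpy as np

def tornado_opt(f, n=30, iters=300, diameter=2.0, seed=0):
    """Tornado-style swarm sketch: each particle gets exactly one move per
    iteration - a spiral step (exploration) or an updraft step toward the
    current best (exploitation)."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-5, 5, (n, 2))
    best = min(pts, key=f).copy()
    for it in range(iters):
        shrink = 1.0 - it / iters                   # spiral radius decays to zero
        for i in range(n):
            if rng.random() < 0.5:                  # spiral: explore
                theta = rng.uniform(0.0, 2.0 * np.pi)
                radius = diameter * rng.random() * shrink
                cand = pts[i] + radius * np.array([np.cos(theta), np.sin(theta)])
            else:                                   # updraft: exploit
                cand = pts[i] + rng.random() * (best - pts[i])
            if f(cand) < f(pts[i]):                 # greedy acceptance
                pts[i] = cand
                if f(cand) < f(best):
                    best = cand.copy()
    return best, f(best)

best, val = tornado_opt(lambda p: float(np.sum(p ** 2)))   # sphere function
print("best value found:", val)
```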

Hard Clusters Maximize Mutual Information

Title Hard Clusters Maximize Mutual Information
Authors Bernhard C. Geiger, Rana Ali Amjad
Abstract In this paper, we investigate mutual information as a cost function for clustering, and show in which cases hard, i.e., deterministic, clusters are optimal. Using convexity properties of mutual information, we show that certain formulations of the information bottleneck problem are solved by hard clusters. Similarly, hard clusters are optimal for the information-theoretic co-clustering problem that deals with simultaneous clustering of two dependent data sets. If both data sets have to be clustered using the same cluster assignment, hard clusters are not optimal in general. We point at interesting and practically relevant special cases of this so-called pairwise clustering problem, for which we can either prove or have evidence that hard clusters are optimal. Our results thus show that one can relax the otherwise combinatorial hard clustering problem to a real-valued optimization problem with the same global optimum.
Tasks
Published 2016-08-17
URL http://arxiv.org/abs/1608.04872v1
PDF http://arxiv.org/pdf/1608.04872v1.pdf
PWC https://paperswithcode.com/paper/hard-clusters-maximize-mutual-information
Repo
Framework
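The quantity at stake, I(X;C) for a (possibly soft) cluster assignment, is easy to compute directly, and a small example shows the paper's headline phenomenon: hardening a soft assignment raises the mutual information. The example distributions are mine:

```python
import numpy as np

def mutual_information(px, pc_given_x):
    """I(X;C) in bits for a discrete source px and a cluster assignment
    pc_given_x[x, c] = p(c | x)."""
    px = np.asarray(px, float)
    pxc = px[:, None] * np.asarray(pc_given_x, float)   # joint p(x, c)
    pc = pxc.sum(axis=0)                                # marginal p(c)
    prod = px[:, None] * pc[None, :]
    mask = pxc > 0
    return float((pxc[mask] * np.log2(pxc[mask] / prod[mask])).sum())

px = np.array([0.25, 0.25, 0.25, 0.25])
hard = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], float)           # deterministic
soft = np.array([[0.9, 0.1], [0.9, 0.1], [0.1, 0.9], [0.1, 0.9]])  # stochastic
print(mutual_information(px, hard))  # 1.0 bit
print(mutual_information(px, soft))  # strictly lower: softness adds noise
```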

Contextualizing Geometric Data Analysis and Related Data Analytics: A Virtual Microscope for Big Data Analytics

Title Contextualizing Geometric Data Analysis and Related Data Analytics: A Virtual Microscope for Big Data Analytics
Authors Fionn Murtagh, Mohsen Farid
Abstract The relevance and importance of contextualizing data analytics are described. Qualitative characteristics may form the context of quantitative analysis. Topics at issue include: contrast, baselining, secondary data sources, supplementary data sources, and dynamic and heterogeneous data. In geometric data analysis, especially on the Correspondence Analysis platform, various case studies are both experimented with and reviewed. In aspects such as the paradigms followed and the technical implementation, an important point made, both implicitly and explicitly, is the major relevance of such work for burgeoning analytical needs and for new analytical areas, including Big Data analytics. For the general reader, we aim to display and describe, first of all, the analytical outcomes that are subject to analysis here, and then to detail the more quantitative outcomes that fully support the analytics carried out.
Tasks
Published 2016-11-30
URL http://arxiv.org/abs/1611.09948v4
PDF http://arxiv.org/pdf/1611.09948v4.pdf
PWC https://paperswithcode.com/paper/contextualizing-geometric-data-analysis-and
Repo
Framework

Deep convolutional networks for automated detection of posterior-element fractures on spine CT

Title Deep convolutional networks for automated detection of posterior-element fractures on spine CT
Authors Holger R. Roth, Yinong Wang, Jianhua Yao, Le Lu, Joseph E. Burns, Ronald M. Summers
Abstract Injuries of the spine, and its posterior elements in particular, are a common occurrence in trauma patients, with potentially devastating consequences. Computer-aided detection (CADe) could assist in the detection and classification of spine fractures. Furthermore, CADe could help assess the stability and chronicity of fractures, as well as facilitate research into optimization of treatment paradigms. In this work, we apply deep convolutional networks (ConvNets) for the automated detection of posterior element fractures of the spine. First, the vertebral bodies of the spine with their posterior elements are segmented in spine CT using multi-atlas label fusion. Then, edge maps of the posterior elements are computed. These edge maps serve as candidate regions for predicting a set of probabilities for fractures along the image edges using ConvNets in a 2.5D fashion (three orthogonal patches in the axial, coronal and sagittal planes). We explore three different methods for training the ConvNet using 2.5D patches along the edge maps of ‘positive’, i.e. fractured posterior elements, and ‘negative’, i.e. non-fractured elements. An experienced radiologist retrospectively marked the location of 55 displaced posterior-element fractures in 18 trauma patients. We randomly split the data into training and testing cases. In testing, we achieve an area under the curve of 0.857. This corresponds to 71% or 81% sensitivity at 5 or 10 false positives per patient, respectively. Analysis of our set of trauma patients demonstrates the feasibility of detecting posterior-element fractures in spine CT images using computer vision techniques such as deep convolutional networks.
Tasks
Published 2016-01-29
URL http://arxiv.org/abs/1602.00020v1
PDF http://arxiv.org/pdf/1602.00020v1.pdf
PWC https://paperswithcode.com/paper/deep-convolutional-networks-for-automated
Repo
Framework
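The 2.5D sampling scheme named in the abstract - three orthogonal patches around a candidate voxel - can be sketched directly. The (z, y, x) axis order and patch size are assumptions of this sketch:

```python
import numpy as np

def patches_25d(volume, center, size=8):
    """Extract axial, coronal and sagittal patches around a candidate voxel,
    stacked as channels for a ConvNet."""
    z, y, x = center
    h = size // 2
    axial    = volume[z, y - h:y + h, x - h:x + h]
    coronal  = volume[z - h:z + h, y, x - h:x + h]
    sagittal = volume[z - h:z + h, y - h:y + h, x]
    return np.stack([axial, coronal, sagittal])

vol = np.random.default_rng(0).random((32, 32, 32))   # toy CT volume
p = patches_25d(vol, (16, 16, 16))
print(p.shape)  # (3, 8, 8)
```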

Human-In-The-Loop Person Re-Identification

Title Human-In-The-Loop Person Re-Identification
Authors Hanxiao Wang, Shaogang Gong, Xiatian Zhu, Tao Xiang
Abstract Current person re-identification (re-id) methods assume that (1) pre-labelled training data are available for every camera pair, and (2) the gallery size for re-identification is moderate. Both assumptions scale poorly to real-world applications as camera network size increases and the gallery size becomes large. Human verification of automatically ranked re-id results then becomes inevitable. In this work, a novel human-in-the-loop re-id model based on Human Verification Incremental Learning (HVIL) is formulated; it does not require any pre-labelled training data to learn a model and is therefore readily scalable to new camera pairs. The HVIL model learns cumulatively from human feedback to provide instant improvement to the re-id ranking of each probe on-the-fly, making the model scalable to large gallery sizes. We further formulate a Regularised Metric Ensemble Learning (RMEL) model to combine a series of incrementally learned HVIL models into a single ensemble model to be used when human feedback becomes unavailable.
Tasks Person Re-Identification
Published 2016-12-05
URL http://arxiv.org/abs/1612.01345v2
PDF http://arxiv.org/pdf/1612.01345v2.pdf
PWC https://paperswithcode.com/paper/human-in-the-loop-person-re-identification
Repo
Framework
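The flavour of incremental learning from verification feedback can be conveyed with a toy online metric update: after a human confirms the true match, a diagonal metric is nudged so the probe-to-match distance shrinks. This is a stand-in for the idea only, not the paper's HVIL algorithm:

```python
import numpy as np

def rerank_with_feedback(probe, gallery, w, match_idx, lr=0.5):
    """After a human confirms gallery item `match_idx` as the true match,
    down-weight the feature dimensions where probe and match disagree,
    then re-rank the gallery under the updated diagonal metric."""
    diff2 = (gallery - probe) ** 2
    w = np.maximum(w - lr * diff2[match_idx], 1e-3)   # shrink noisy dimensions
    order = np.argsort(diff2 @ w)                     # new ranking
    return order, w

rng = np.random.default_rng(0)
gallery = rng.random((5, 4))
probe = gallery[3] + np.array([0.5, 0.0, 0.0, 0.0])   # true match: item 3
w = np.ones(4)
order, w = rerank_with_feedback(probe, gallery, w, match_idx=3)
print(w)  # dimension 0 (where the match disagrees) has been down-weighted
```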

Higher-Order Block Term Decomposition for Spatially Folded fMRI Data

Title Higher-Order Block Term Decomposition for Spatially Folded fMRI Data
Authors Christos Chatzichristos, Eleftherios Kofidis, Giannis Kopsinis, Sergios Theodoridis
Abstract The growing use of neuroimaging technologies generates a massive amount of biomedical data that exhibit high dimensionality. Tensor-based analysis of brain imaging data has been proved quite effective in exploiting their multiway nature. The advantages of tensorial methods over matrix-based approaches have also been demonstrated in the characterization of functional magnetic resonance imaging (fMRI) data, where the spatial (voxel) dimensions are commonly grouped (unfolded) as a single way/mode of the 3-rd order array, the other two ways corresponding to time and subjects. However, such methods are known to be ineffective in more demanding scenarios, such as the ones with strong noise and/or significant overlapping of activated regions. This paper aims at investigating the possible gains from a better exploitation of the spatial dimension, through a higher- (4 or 5) order tensor modeling of the fMRI signal. In this context, and in order to increase the degrees of freedom of the modeling process, a higher-order Block Term Decomposition (BTD) is applied, for the first time in fMRI analysis. Its effectiveness is demonstrated via extensive simulation results.
Tasks
Published 2016-07-15
URL http://arxiv.org/abs/1607.05073v1
PDF http://arxiv.org/pdf/1607.05073v1.pdf
PWC https://paperswithcode.com/paper/higher-order-block-term-decomposition-for
Repo
Framework
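The modeling distinction the abstract draws - unfolding all voxel dimensions into one mode of a 3rd-order array versus keeping them as separate modes of a higher-order tensor - comes down to a reshape. The sketch below shows only that reshaping (with toy dimensions I chose), not the Block Term Decomposition itself:

```python
import numpy as np

# 5th-order fMRI tensor: spatial (X, Y, Z) x time x subjects
X, Y, Z, T, S = 4, 5, 6, 10, 3
fmri = np.random.default_rng(0).random((X, Y, Z, T, S))

# Conventional 3rd-order view: all voxel dimensions folded into one mode
third_order = fmri.reshape(X * Y * Z, T, S)
print(fmri.shape, "->", third_order.shape)  # (4, 5, 6, 10, 3) -> (120, 10, 3)
```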

ROS Regression: Integrating Regularization and Optimal Scaling Regression

Title ROS Regression: Integrating Regularization and Optimal Scaling Regression
Authors Jacqueline J. Meulman, Anita J. van der Kooij
Abstract In this paper we combine two important extensions of ordinary least squares regression: regularization and optimal scaling. Optimal scaling (sometimes also called optimal scoring) was originally developed for categorical data, and the process finds quantifications for the categories that are optimal for the regression model in the sense that they maximize the multiple correlation. Although the optimal scaling method was developed initially for variables with a limited number of categories, optimal transformations of continuous variables are a special case. We will consider a variety of transformation types; typically we use step functions for categorical variables, and smooth (spline) functions for continuous variables. Both types of functions can be restricted to be monotonic, preserving the ordinal information in the data. In addition to optimal scaling, three regularization methods will be considered: Ridge regression, the Lasso, and the Elastic Net. The resulting method will be called ROS Regression (Regularized Optimal Scaling Regression). We will show that the basic OS algorithm provides straightforward and efficient estimation of the regularized regression coefficients, automatically gives the Group Lasso and Blockwise Sparse Regression, and extends them with monotonicity properties. We will show that Optimal Scaling linearizes nonlinear relationships between predictors and outcome, and improves upon the condition of the predictor correlation matrix, increasing (on average) the conditional independence of the predictors. Alternative options for regularization of either regression coefficients or category quantifications are mentioned. Extended examples are provided. Keywords: Categorical Data, Optimal Scaling, Conditional Independence, Step Functions, Splines, Monotonic Transformations, Regularization, Lasso, Elastic Net, Group Lasso, Blockwise Sparse Regression.
Tasks
Published 2016-11-16
URL http://arxiv.org/abs/1611.05433v1
PDF http://arxiv.org/pdf/1611.05433v1.pdf
PWC https://paperswithcode.com/paper/ros-regression-integrating-regularization-and
Repo
Framework
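A single optimal-scaling pass followed by a regularized fit can be sketched in a few lines. Quantifying each category by the mean of the outcome is an illustrative choice for this toy, not the paper's exact alternating update:

```python
import numpy as np

def os_ridge_step(categories, y, lam=1.0):
    """One optimal-scaling step (categories -> numeric quantifications),
    then a one-predictor ridge fit on the standardized scores."""
    cats = np.unique(categories)
    quant = {c: float(y[categories == c].mean()) for c in cats}  # quantification
    x = np.array([quant[c] for c in categories])
    x = (x - x.mean()) / x.std()                      # standardize the scores
    beta = float(x @ y) / (float(x @ x) + lam)        # ridge, single predictor
    return quant, beta

categories = np.array(["low", "low", "mid", "mid", "high", "high"])
y = np.array([1.0, 2.0, 4.0, 5.0, 8.0, 9.0])
quant, beta = os_ridge_step(categories, y)
print(quant)  # ordinal structure is preserved: low < mid < high
```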

Low-rank Multi-view Clustering in Third-Order Tensor Space

Title Low-rank Multi-view Clustering in Third-Order Tensor Space
Authors Ming Yin, Junbin Gao, Shengli Xie, Yi Guo
Abstract The rich information in multi-view data, as well as the complementary information among different views, is usually beneficial to various tasks, e.g., clustering, classification, and de-noising. Multi-view subspace clustering is based on the fact that multi-view data are generated from a latent subspace. To recover the underlying subspace structure, sparse and/or low-rank subspace clustering methods have recently seen notable success. Although some state-of-the-art subspace clustering approaches can numerically handle multi-view data by simultaneously exploring all possible pairwise correlations within views, the high-order statistics that can only be captured by simultaneously utilizing all views are often disregarded. As a consequence, the clustering performance for multi-view data is compromised. To address this issue, in this paper, a novel multi-view clustering method is proposed by using the \textit{t-product} in third-order tensor space. Based on the circular convolution operation, multi-view data can be effectively represented by a \textit{t-linear} combination with sparse and low-rank penalties using “self-expressiveness”. Our extensive experimental results on facial, object, digit image, and text data demonstrate that the proposed method outperforms the state-of-the-art methods in terms of many criteria.
Tasks Multi-view Subspace Clustering
Published 2016-08-30
URL http://arxiv.org/abs/1608.08336v2
PDF http://arxiv.org/pdf/1608.08336v2.pdf
PWC https://paperswithcode.com/paper/low-rank-multi-view-clustering-in-third-order
Repo
Framework
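The t-product underlying the paper's t-linear model has a compact implementation: circularly convolving tube fibers along the third mode is equivalent to slice-wise matrix products in the Fourier domain. A minimal sketch:

```python
import numpy as np

def t_product(A, B):
    """t-product of third-order tensors: FFT along mode 3, a matrix product
    per frontal slice, then the inverse FFT."""
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.einsum("ijk,jlk->ilk", Af, Bf)   # matrix product per frontal slice
    return np.real(np.fft.ifft(Cf, axis=2))

rng = np.random.default_rng(0)
A = rng.random((3, 4, 5))
B = rng.random((4, 2, 5))
C = t_product(A, B)
print(C.shape)  # (3, 2, 5): the t-product composes like matrix multiplication
```

Multiplying by the identity tensor (identity matrix in the first frontal slice, zeros elsewhere) returns the original tensor, which is a handy correctness check.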

Nonextensive information theoretical machine

Title Nonextensive information theoretical machine
Authors Chaobing Song, Shu-Tao Xia
Abstract In this paper, we propose a new discriminative model named the \emph{nonextensive information theoretical machine (NITM)}, based on a nonextensive generalization of Shannon information theory. In NITM, weight parameters are treated as random variables. Tsallis divergence is used to regularize the distribution of weight parameters, and the maximum unnormalized Tsallis entropy distribution is used to evaluate the fitting effect. On the one hand, it is shown that some well-known margin-based loss functions such as the $\ell_{0/1}$ loss, hinge loss, squared hinge loss and exponential loss can be unified by unnormalized Tsallis entropy. On the other hand, Gaussian prior regularization is generalized to Student-t prior regularization with similar computational complexity. The model can be solved efficiently by gradient-based convex optimization, and its performance is illustrated on standard datasets.
Tasks
Published 2016-04-21
URL http://arxiv.org/abs/1604.06153v1
PDF http://arxiv.org/pdf/1604.06153v1.pdf
PWC https://paperswithcode.com/paper/nonextensive-information-theoretical-machine
Repo
Framework
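The Tsallis entropy at the heart of this nonextensive generalization has a short closed form, recovering Shannon entropy as q → 1:

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q(p) = (1 - sum_i p_i^q) / (q - 1); the q -> 1 limit
    is the Shannon entropy in nats."""
    p = np.asarray(p, dtype=float)
    if np.isclose(q, 1.0):
        nz = p[p > 0]
        return float(-(nz * np.log(nz)).sum())
    return float((1.0 - (p ** q).sum()) / (q - 1.0))

p = [0.5, 0.5]
print(tsallis_entropy(p, 2.0))   # (1 - 0.5) / 1 = 0.5
print(tsallis_entropy(p, 1.0))   # ln 2 ≈ 0.6931, the Shannon limit
```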

Relational Multi-Manifold Co-Clustering

Title Relational Multi-Manifold Co-Clustering
Authors Ping Li, Jiajun Bu, Chun Chen, Zhanying He, Deng Cai
Abstract Co-clustering aims to group the samples (e.g., documents, users) and the features (e.g., words, ratings) simultaneously. It employs the dual relation and the bilateral information between the samples and features. In many real-world applications, data usually reside on a submanifold of the ambient Euclidean space, but it is nontrivial to estimate the intrinsic manifold of the data space in a principled way. In this study, we focus on improving the co-clustering performance via manifold ensemble learning, which is able to maximally approximate the intrinsic manifolds of both the sample and feature spaces. To achieve this, we develop a novel co-clustering algorithm called Relational Multi-manifold Co-clustering (RMC), based on symmetric nonnegative matrix tri-factorization, which decomposes the relational data matrix into three submatrices. This method considers the inter-type relationship revealed by the relational data matrix, and also the intra-type information reflected by the affinity matrices encoded on the sample and feature data distributions. Specifically, we assume the intrinsic manifold of the sample or feature space lies in a convex hull of some pre-defined candidate manifolds. We want to learn a convex combination of them to maximally approach the desired intrinsic manifold. To optimize the objective function, multiplicative rules are utilized to update the submatrices alternately. Besides, both the entropic mirror descent algorithm and the coordinate descent algorithm are exploited to learn the manifold coefficient vector. Extensive experiments on document, image and gene expression data sets have demonstrated the superiority of the proposed algorithm compared to other well-established methods.
Tasks
Published 2016-11-16
URL http://arxiv.org/abs/1611.05743v1
PDF http://arxiv.org/pdf/1611.05743v1.pdf
PWC https://paperswithcode.com/paper/relational-multi-manifold-co-clustering
Repo
Framework
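The nonnegative matrix tri-factorization at the core of RMC can be sketched with standard multiplicative updates, leaving out the paper's manifold-ensemble regularizers entirely:

```python
import numpy as np

def nmtf(R, k1, k2, iters=500, seed=0, eps=1e-9):
    """Bare-bones tri-factorization R ≈ F S G^T (all factors nonnegative),
    fit by standard multiplicative updates for the Frobenius objective."""
    rng = np.random.default_rng(seed)
    n, m = R.shape
    F = rng.random((n, k1))
    S = rng.random((k1, k2))
    G = rng.random((m, k2))
    for _ in range(iters):
        F *= (R @ G @ S.T) / (F @ S @ G.T @ G @ S.T + eps)
        G *= (R.T @ F @ S) / (G @ S.T @ F.T @ F @ S + eps)
        S *= (F.T @ R @ G) / (F.T @ F @ S @ G.T @ G + eps)
    return F, S, G

# Block-structured relational matrix: two sample clusters x two feature clusters
R = np.kron(np.array([[1.0, 0.1], [0.1, 1.0]]), np.ones((5, 5)))
F, S, G = nmtf(R, 2, 2)
err = np.linalg.norm(R - F @ S @ G.T) / np.linalg.norm(R)
print("relative reconstruction error:", round(err, 4))
```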

An Approximate Dynamic Programming Approach to Repeated Games with Vector Losses

Title An Approximate Dynamic Programming Approach to Repeated Games with Vector Losses
Authors Vijay Kamble, Patrick Loiseau, Jean Walrand
Abstract We describe an approximate dynamic programming (ADP) approach to compute approximately optimal strategies and approximations of the minimal losses that can be guaranteed in discounted repeated games with vector losses. At the core of our approach is a characterization of the lower Pareto frontier of the set of expected losses that a player can guarantee in these games as the unique fixed point of a set-valued dynamic programming (DP) operator. This fixed point can be approximated by iterative application of this DP operator compounded by a polytopic set approximation, beginning with a single point. Each iteration can be computed by solving a set of linear programs corresponding to the vertices of the polytope. We derive rigorous bounds on the error of the resulting approximation and the performance of the corresponding approximately optimal strategies. We discuss an application to regret minimization in repeated decision-making in adversarial environments, where we show that this approach can be used to compute approximately optimal strategies and approximations of the minimax optimal regret when the action sets are finite. We illustrate this approach by computing provably approximately optimal strategies for the problem of prediction using expert advice under discounted $\{0,1\}$-losses. Our numerical evaluations demonstrate the sub-optimality of well-known off-the-shelf online learning algorithms like Hedge, and significantly improved performance when using our approximately optimal strategies in these settings. Our work thus demonstrates the significant potential of using the ADP framework to design effective online learning algorithms.
Tasks Decision Making
Published 2016-03-16
URL http://arxiv.org/abs/1603.04981v5
PDF http://arxiv.org/pdf/1603.04981v5.pdf
PWC https://paperswithcode.com/paper/an-approximate-dynamic-programming-approach
Repo
Framework
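The Hedge baseline the abstract benchmarks against is the standard exponential-weights algorithm for prediction with expert advice; the loss sequence below is synthetic, with one deliberately perfect expert:

```python
import numpy as np

def hedge(loss_matrix, eta=0.5):
    """Standard Hedge: play normalized exponential weights, observe losses in
    [0, 1], and multiplicatively penalize lossy experts."""
    T, n = loss_matrix.shape
    w = np.ones(n)
    total = 0.0
    for t in range(T):
        p = w / w.sum()                     # distribution over experts
        total += float(p @ loss_matrix[t])  # expected loss this round
        w *= np.exp(-eta * loss_matrix[t])  # exponential down-weighting
    return total

rng = np.random.default_rng(0)
losses = rng.integers(0, 2, (200, 3)).astype(float)
losses[:, 0] = 0.0                          # expert 0 never errs
regret = hedge(losses) - losses.sum(axis=0).min()
print("regret:", round(regret, 3))          # small: Hedge locks onto expert 0
```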