Paper Group ANR 190
A random version of principal component analysis in data clustering
Title | A random version of principal component analysis in data clustering |
Authors | Luigi Leonardo Palese |
Abstract | Principal component analysis (PCA) is a widespread technique for data analysis that relies on the covariance-correlation matrix of the analyzed data. However, to work properly with high-dimensional data, PCA poses severe mathematical constraints on the minimum number of different replicates, or samples, that must be included in the analysis. Here we show that a modified algorithm works not only on well-dimensioned datasets, but also on degenerate ones. |
Tasks | |
Published | 2016-10-27 |
URL | http://arxiv.org/abs/1610.08664v1 |
PDF | http://arxiv.org/pdf/1610.08664v1.pdf |
PWC | https://paperswithcode.com/paper/a-random-version-of-principal-component |
Repo | |
Framework | |
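The modified algorithm itself is not given in the abstract above, so the sketch below shows only the baseline it builds on: plain PCA computed from the covariance matrix with NumPy. The function name `pca` and the toy 10-samples-by-50-features matrix are illustrative assumptions; the paper's randomized variant is not reproduced here.

```python
import numpy as np

def pca(X, n_components=2):
    """Plain PCA via eigendecomposition of the sample covariance matrix.

    X has shape (n_samples, n_features). With fewer samples than features
    the covariance matrix is rank-deficient, which is the degenerate
    regime discussed in the abstract above.
    """
    Xc = X - X.mean(axis=0)                      # center each feature
    cov = np.cov(Xc, rowvar=False)               # (n_features, n_features)
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]            # sort descending
    components = eigvecs[:, order[:n_components]]
    scores = Xc @ components                     # project onto the leading components
    return scores, components, eigvals[order]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10, 50))                # 10 samples, 50 features: degenerate case
    scores, comps, ev = pca(X, n_components=2)
    print(scores.shape, comps.shape)
```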
Fast Orthonormal Sparsifying Transforms Based on Householder Reflectors
Title | Fast Orthonormal Sparsifying Transforms Based on Householder Reflectors |
Authors | Cristian Rusu, Nuria Gonzalez-Prelcic, Robert Heath |
Abstract | Dictionary learning is the task of determining a data-dependent transform that yields a sparse representation of some observed data. The dictionary learning problem is non-convex, and usually solved via computationally complex iterative algorithms. Furthermore, the transforms obtained generally lack structure that permits their fast application to data. To address this issue, this paper develops a framework for learning orthonormal dictionaries which are built from products of a few Householder reflectors. Two algorithms are proposed to learn the reflector coefficients: one that considers a sequential update of the reflectors and one with a simultaneous update of all reflectors that imposes an additional internal orthogonal constraint. The proposed methods have low computational complexity and are shown to converge to local minimum points which can be described in terms of the spectral properties of the matrices involved. The resulting dictionaries balance computational complexity against the quality of the sparse representations by controlling the number of Householder reflectors in their product. Simulations of the proposed algorithms are shown in the image processing setting where well-known fast transforms are available for comparisons. The proposed algorithms have favorable reconstruction error and the advantage of a fast implementation relative to classical, unstructured dictionaries. |
Tasks | Dictionary Learning |
Published | 2016-11-24 |
URL | http://arxiv.org/abs/1611.08229v1 |
PDF | http://arxiv.org/pdf/1611.08229v1.pdf |
PWC | https://paperswithcode.com/paper/fast-orthonormal-sparsifying-transforms-based |
Repo | |
Framework | |
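As a companion to the abstract above, here is a minimal NumPy sketch of the fast-application property that motivates the paper: an orthonormal transform built as a product of a few Householder reflectors can be applied in O(Km) per vector instead of O(m^2). The reflector vectors here are random, not learned; the paper's two learning algorithms are not reproduced.

```python
import numpy as np

def apply_householder_product(V, X):
    """Apply U = H_K ... H_2 H_1 to the columns of X, where
    H_k = I - 2 v_k v_k^T and v_k is the unit-norm k-th column of V.

    Each reflector costs O(m) per vector, so K reflectors cost O(K m)
    instead of the O(m^2) of a dense orthonormal dictionary.
    """
    Y = X.copy()
    for k in range(V.shape[1]):
        v = V[:, k:k+1]                      # (m, 1), assumed unit norm
        Y = Y - 2.0 * v @ (v.T @ Y)          # rank-one update; no m x m matrix is formed
    return Y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, K, n = 64, 6, 100
    V = rng.normal(size=(m, K))
    V /= np.linalg.norm(V, axis=0)           # normalize each reflector vector
    X = rng.normal(size=(m, n))
    Y = apply_householder_product(V, X)
    # The product of reflectors is orthonormal, so column norms are preserved.
    print(np.allclose(np.linalg.norm(X, axis=0), np.linalg.norm(Y, axis=0)))
```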
End-to-End Kernel Learning with Supervised Convolutional Kernel Networks
Title | End-to-End Kernel Learning with Supervised Convolutional Kernel Networks |
Authors | Julien Mairal |
Abstract | In this paper, we introduce a new image representation based on a multilayer kernel machine. Unlike traditional kernel methods where data representation is decoupled from the prediction task, we learn how to shape the kernel with supervision. We proceed by first proposing improvements of the recently-introduced convolutional kernel networks (CKNs) in the context of unsupervised learning; then, we derive backpropagation rules to take advantage of labeled training data. The resulting model is a new type of convolutional neural network, where optimizing the filters at each layer is equivalent to learning a linear subspace in a reproducing kernel Hilbert space (RKHS). We show that our method achieves reasonably competitive performance for image classification on some standard “deep learning” datasets such as CIFAR-10 and SVHN, and also for image super-resolution, demonstrating the applicability of our approach to a large variety of image-related tasks. |
Tasks | Image Classification, Image Super-Resolution, Super-Resolution |
Published | 2016-05-20 |
URL | http://arxiv.org/abs/1605.06265v2 |
PDF | http://arxiv.org/pdf/1605.06265v2.pdf |
PWC | https://paperswithcode.com/paper/end-to-end-kernel-learning-with-supervised |
Repo | |
Framework | |
Large Scale Kernel Learning using Block Coordinate Descent
Title | Large Scale Kernel Learning using Block Coordinate Descent |
Authors | Stephen Tu, Rebecca Roelofs, Shivaram Venkataraman, Benjamin Recht |
Abstract | We demonstrate that distributed block coordinate descent can quickly solve kernel regression and classification problems with millions of data points. Armed with this capability, we conduct a thorough comparison between the full kernel, the Nyström method, and random features on three large classification tasks from various domains. Our results suggest that the Nyström method generally achieves better statistical accuracy than random features, but can require significantly more iterations of optimization. Lastly, we derive new rates for block coordinate descent which support our experimental findings when specialized to kernel methods. |
Tasks | |
Published | 2016-02-17 |
URL | http://arxiv.org/abs/1602.05310v1 |
PDF | http://arxiv.org/pdf/1602.05310v1.pdf |
PWC | https://paperswithcode.com/paper/large-scale-kernel-learning-using-block |
Repo | |
Framework | |
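The distributed solver from the paper is not reproduced here; the sketch below is a single-machine NumPy illustration of block coordinate descent for kernel ridge regression, updating one block of dual coefficients at a time against the system (K + λI)α = y. The RBF kernel, block size, and iteration counts are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    """Gaussian RBF kernel matrix between the rows of A and the rows of B."""
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

def kernel_ridge_bcd(K, y, lam=1e-2, block_size=100, n_epochs=20):
    """Solve (K + lam * I) alpha = y by Gauss-Seidel block coordinate descent."""
    n = len(y)
    alpha = np.zeros(n)
    blocks = [np.arange(s, min(s + block_size, n)) for s in range(0, n, block_size)]
    for _ in range(n_epochs):
        for b in blocks:
            # Right-hand side for block b with all other blocks held fixed.
            rhs = y[b] - K[b] @ alpha + K[np.ix_(b, b)] @ alpha[b]
            alpha[b] = np.linalg.solve(K[np.ix_(b, b)] + lam * np.eye(len(b)), rhs)
    return alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)
    K = rbf_kernel(X, X)
    alpha = kernel_ridge_bcd(K, y)
    print(np.abs((K + 1e-2 * np.eye(500)) @ alpha - y).max())  # residual shrinks with epochs
```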
Prognostics of Surgical Site Infections using Dynamic Health Data
Title | Prognostics of Surgical Site Infections using Dynamic Health Data |
Authors | Chuyang Ke, Yan Jin, Heather Evans, Bill Lober, Xiaoning Qian, Ji Liu, Shuai Huang |
Abstract | Surgical Site Infection (SSI) is a national priority in healthcare research. Much research attention has been devoted to developing better SSI risk prediction models. However, most existing SSI risk prediction models are built on static risk factors such as comorbidities and operative factors. In this paper, we investigate the use of dynamic wound data for SSI risk prediction. Emerging mobile health (mHealth) tools can closely monitor patients and generate continuous measurements of many wound-related variables and other evolving clinical variables. Since existing SSI prediction models have quite limited capacity to utilize such evolving clinical data, we develop a solution that equips these mHealth tools with decision-making capabilities for SSI prediction, using a seamless assembly of several machine learning models to tackle the analytic challenges arising from the spatial-temporal data. The basic idea is to exploit the low-rank property of the spatial-temporal data via a bilinear formulation, and to further enhance it with automatic missing-data imputation by the matrix completion technique. We derive efficient optimization algorithms to implement these models and demonstrate the superior performance of our new predictive model on a real-world SSI dataset, compared to a range of state-of-the-art methods. |
Tasks | Decision Making, Imputation, Matrix Completion |
Published | 2016-11-12 |
URL | http://arxiv.org/abs/1611.04049v1 |
PDF | http://arxiv.org/pdf/1611.04049v1.pdf |
PWC | https://paperswithcode.com/paper/prognostics-of-surgical-site-infections-using |
Repo | |
Framework | |
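The paper's bilinear spatio-temporal model is not reproduced here; as background for the matrix-completion component mentioned in the abstract, the sketch below shows a generic iterative soft-thresholded SVD imputation (SoftImpute-style) in NumPy. The shrinkage value and iteration count are illustrative assumptions.

```python
import numpy as np

def soft_impute(X, mask, shrink=1.0, n_iters=100):
    """Generic low-rank matrix completion by iterative soft-thresholded SVD.

    X    : data matrix with arbitrary values in the missing entries
    mask : boolean array, True where X is observed
    """
    Z = np.where(mask, X, 0.0)                       # initialize missing entries with zeros
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        s = np.maximum(s - shrink, 0.0)              # soft-threshold the singular values
        low_rank = (U * s) @ Vt
        Z = np.where(mask, X, low_rank)              # keep observed entries, fill the rest
    return Z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 40))   # rank-3 ground truth
    mask = rng.random(true.shape) < 0.6                          # observe 60% of the entries
    filled = soft_impute(true, mask)
    print(np.abs(filled - true)[~mask].mean())                   # error on the missing entries
```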
Fuzzy thresholding in wavelet domain for speckle reduction in Synthetic Aperture Radar images
Title | Fuzzy thresholding in wavelet domain for speckle reduction in Synthetic Aperture Radar images |
Authors | Mario Mastriani |
Abstract | The application of wavelet transforms to Synthetic Aperture Radar (SAR) imagery has improved despeckling performance. To reduce the problem of filtering multiplicative noise to the case of additive noise, the wavelet decomposition is performed on the logarithm of the image gray levels. The detail coefficients produced by the bidimensional discrete wavelet transform (DWT-2D) need to be thresholded to remove the speckle in the highest subbands. An initial threshold value is estimated according to the noise variance. In this paper, an additional fuzzy thresholding approach is applied to automatically determine the threshold level around the traditional wavelet noise threshold (the initial threshold), and it is used for the soft or hard thresholding performed on all the high-frequency subimages. The filtered logarithmic image is then obtained by reconstruction from the thresholded coefficients. This process is applied a single time, and exclusively to the first level of decomposition. The exponential function of this reconstructed image gives the final filtered image. Experimental results on test images demonstrate the effectiveness of this method compared to most methods currently in use. |
Tasks | |
Published | 2016-07-31 |
URL | http://arxiv.org/abs/1608.00277v1 |
PDF | http://arxiv.org/pdf/1608.00277v1.pdf |
PWC | https://paperswithcode.com/paper/fuzzy-thresholding-in-wavelet-domain-for |
Repo | |
Framework | |
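A minimal PyWavelets sketch of the classical homomorphic pipeline the abstract builds on: take the logarithm, apply a single-level DWT-2D, soft-threshold the detail subbands with the universal (initial) threshold, reconstruct, and exponentiate. The fuzzy adjustment of the threshold, which is the paper's contribution, is deliberately omitted; the wavelet choice `db4` and the toy speckle model in the demo are assumptions.

```python
import numpy as np
import pywt  # PyWavelets

def despeckle_log_wavelet(img, wavelet="db4"):
    """Homomorphic wavelet despeckling: log, one-level DWT-2D,
    soft-threshold the detail subbands, inverse DWT, exponential.
    The threshold is the plain universal threshold estimated from the
    diagonal subband; the fuzzy adjustment from the paper is omitted.
    """
    log_img = np.log(img.astype(float) + 1.0)            # multiplicative -> additive noise
    cA, (cH, cV, cD) = pywt.dwt2(log_img, wavelet)        # single decomposition level
    sigma = np.median(np.abs(cD)) / 0.6745                # robust noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(log_img.size))     # universal (initial) threshold
    cH, cV, cD = (pywt.threshold(c, thr, mode="soft") for c in (cH, cV, cD))
    rec = pywt.idwt2((cA, (cH, cV, cD)), wavelet)
    return np.exp(rec) - 1.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.outer(np.linspace(1, 10, 128), np.linspace(1, 10, 128))
    speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)  # multiplicative noise
    filtered = despeckle_log_wavelet(speckled)
    print(np.abs(filtered - clean).mean() < np.abs(speckled - clean).mean())
```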
Guaranteed bounds on the Kullback-Leibler divergence of univariate mixtures using piecewise log-sum-exp inequalities
Title | Guaranteed bounds on the Kullback-Leibler divergence of univariate mixtures using piecewise log-sum-exp inequalities |
Authors | Frank Nielsen, Ke Sun |
Abstract | Information-theoretic measures such as the entropy, the cross-entropy and the Kullback-Leibler divergence between two mixture models are core primitives in many signal processing tasks. Since the Kullback-Leibler divergence of mixtures provably does not admit a closed-form formula, it is in practice either estimated using costly Monte-Carlo stochastic integration, approximated, or bounded using various techniques. We present a fast and generic method that builds algorithmically closed-form lower and upper bounds on the entropy, the cross-entropy and the Kullback-Leibler divergence of mixtures. We illustrate the versatility of the method by reporting on our experiments for approximating the Kullback-Leibler divergence between univariate exponential mixtures, Gaussian mixtures, Rayleigh mixtures, and Gamma mixtures. |
Tasks | |
Published | 2016-06-19 |
URL | http://arxiv.org/abs/1606.05850v2 |
PDF | http://arxiv.org/pdf/1606.05850v2.pdf |
PWC | https://paperswithcode.com/paper/guaranteed-bounds-on-the-kullback-leibler |
Repo | |
Framework | |
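The piecewise log-sum-exp bounds themselves are not reproduced here; the sketch below implements the costly baseline the abstract contrasts them with, a Monte Carlo estimate of the Kullback-Leibler divergence between two univariate Gaussian mixtures, using SciPy's `logsumexp` for numerically stable mixture log-densities. The example mixtures are made up.

```python
import numpy as np
from scipy.special import logsumexp

def mixture_logpdf(x, weights, means, sigmas):
    """Log-density of a univariate Gaussian mixture at the points x."""
    x = np.asarray(x)[:, None]
    comp = (-0.5 * ((x - means) / sigmas) ** 2
            - np.log(sigmas) - 0.5 * np.log(2.0 * np.pi))
    return logsumexp(comp + np.log(weights), axis=1)

def mc_kl(p, q, n_samples=100_000, seed=0):
    """Monte Carlo estimate of KL(p || q) for two Gaussian mixtures,
    each given as a (weights, means, sigmas) tuple of arrays."""
    rng = np.random.default_rng(seed)
    w, mu, sig = p
    comp = rng.choice(len(w), size=n_samples, p=w)      # sample mixture components
    x = rng.normal(mu[comp], sig[comp])                 # then sample from each component
    return np.mean(mixture_logpdf(x, *p) - mixture_logpdf(x, *q))

if __name__ == "__main__":
    p = (np.array([0.5, 0.5]), np.array([-1.0, 2.0]), np.array([0.5, 1.0]))
    q = (np.array([0.3, 0.7]), np.array([0.0, 2.5]), np.array([1.0, 1.0]))
    print(mc_kl(p, q))
```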
Estimating motion with principal component regression strategies
Title | Estimating motion with principal component regression strategies |
Authors | Felipe P. do Carmo, Vania Vieira Estrela, Joaquim Teixeira de Assis |
Abstract | In this paper, two simple principal component regression methods for estimating the optical flow between frames of video sequences in a pel-recursive manner are introduced. They are easy alternatives for dealing with mixtures of motion vectors and with the lack of prior information on spatial-temporal statistics (which are assumed to be locally normal). The 2D motion vector estimation approaches take simple image properties into consideration and are used to harmonize regularized least-squares estimates. Their main advantage is that no knowledge of the noise distribution is necessary, although there is an underlying assumption of localized smoothness. Preliminary experiments indicate that this approach provides robust estimates of the optical flow. |
Tasks | Optical Flow Estimation |
Published | 2016-11-08 |
URL | http://arxiv.org/abs/1611.02637v1 |
PDF | http://arxiv.org/pdf/1611.02637v1.pdf |
PWC | https://paperswithcode.com/paper/estimating-motion-with-principal-component |
Repo | |
Framework | |
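The pel-recursive optical-flow setup is not reproduced here; the sketch below shows generic principal component regression in NumPy (project the regressors onto their leading principal components, then fit least squares in the reduced space), which is the statistical tool the abstract applies to motion estimation. The synthetic regression problem is an illustrative assumption.

```python
import numpy as np

def principal_component_regression(X, y, n_components):
    """Principal component regression: project the regressors onto their
    leading principal components, fit ordinary least squares in that
    reduced space, and map the coefficients back to the original space."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:n_components].T                      # principal directions
    T = Xc @ W                                   # component scores
    gamma, *_ = np.linalg.lstsq(T, yc, rcond=None)
    beta = W @ gamma                             # coefficients in original coordinates
    intercept = y_mean - x_mean @ beta
    return beta, intercept

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=200)
    beta, b0 = principal_component_regression(X, y, n_components=5)
    print(beta.shape, float(b0))
```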
Performance Localisation
Title | Performance Localisation |
Authors | Brendan Cody-Kenny, Michael O’Neill, Stephen Barrett |
Abstract | Performance becomes an issue particularly when execution cost hinders the functionality of a program. Typically, a profiler can be used to find program code whose execution represents a large portion of the overall execution cost of a program. Pinpointing where a performance issue exists provides a starting point for tracing the cause back through a program. While profiling shows where a performance issue manifests, we use mutation analysis to show where a performance improvement is likely to exist. We find that mutation analysis can indicate locations within a program which have a high impact on the overall execution cost yet are executed relatively infrequently. By better locating potential performance improvements in programs we hope to make performance improvement more amenable to automation. |
Tasks | |
Published | 2016-03-04 |
URL | http://arxiv.org/abs/1603.01489v2 |
PDF | http://arxiv.org/pdf/1603.01489v2.pdf |
PWC | https://paperswithcode.com/paper/performance-localisation |
Repo | |
Framework | |
Perceptual Reward Functions
Title | Perceptual Reward Functions |
Authors | Ashley Edwards, Charles Isbell, Atsuo Takanishi |
Abstract | Reinforcement learning problems are often described through rewards that indicate if an agent has completed some task. This specification can yield desirable behavior; however, many problems are difficult to specify in this manner, as one often needs to know the proper configuration for the agent. When humans are learning to solve tasks, we often learn from visual instructions composed of images or videos. Such representations motivate our development of Perceptual Reward Functions, which provide a mechanism for creating visual task descriptions. We show that this approach allows an agent to learn from rewards that are based on raw pixels rather than internal parameters. |
Tasks | |
Published | 2016-08-12 |
URL | http://arxiv.org/abs/1608.03824v1 |
PDF | http://arxiv.org/pdf/1608.03824v1.pdf |
PWC | https://paperswithcode.com/paper/perceptual-reward-functions |
Repo | |
Framework | |
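The paper's construction is not reproduced here; the sketch below only illustrates the underlying idea of a reward computed from raw pixels, as the negative mean squared distance between the current frame and a goal image. Both images and the function name `perceptual_reward` are hypothetical.

```python
import numpy as np

def perceptual_reward(frame, goal_image):
    """A minimal pixel-based reward: negative mean squared distance between
    the current observation and a goal image, both float arrays in [0, 1]
    with identical shapes. This is only the generic idea of deriving reward
    from raw pixels, not the construction proposed in the paper."""
    frame = np.asarray(frame, dtype=float)
    goal = np.asarray(goal_image, dtype=float)
    return -float(np.mean((frame - goal) ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    goal = rng.random((64, 64, 3))
    near = np.clip(goal + 0.01 * rng.normal(size=goal.shape), 0, 1)
    far = rng.random((64, 64, 3))
    print(perceptual_reward(near, goal) > perceptual_reward(far, goal))  # True
```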
Optimization of Test Case Generation using Genetic Algorithm (GA)
Title | Optimization of Test Case Generation using Genetic Algorithm (GA) |
Authors | Ahmed Mateen, Marriam Nazir, Salman Afsar Awan |
Abstract | Testing is a principal means of assuring software quality; the overall aim of the software industry is to deliver high-quality software to the end user. Software testing, however, comes with several underlying concerns that require attention, such as effective test case generation and test case prioritization. One of the central problems in software testing is how to obtain a suitable set of test cases with which to verify the software, and various strategies and methodologies have been proposed to address it. Genetic Algorithms (GAs) belong to the family of evolutionary algorithms, which play a significant role in automatic test generation and have attracted considerable research attention. This study explores software testing issues using a GA approach and, after analysis, produces a feasible and reliable solution. The research presents an implementation of GAs for the generation of optimized test cases and thereby provides an efficient scheme for optimizing test case generation with a genetic algorithm. |
Tasks | |
Published | 2016-12-28 |
URL | http://arxiv.org/abs/1612.08813v1 |
PDF | http://arxiv.org/pdf/1612.08813v1.pdf |
PWC | https://paperswithcode.com/paper/optimization-of-test-case-generation-using |
Repo | |
Framework | |
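A toy sketch of GA-based test case generation, assuming a hypothetical two-argument function under test and branch coverage as the fitness signal; selection keeps the fitter half, crossover swaps arguments between parents, and mutation perturbs one argument. This is the general approach, not the system described in the abstract.

```python
import random

def branches_covered(x, y):
    """Hypothetical function under test: return the set of branch ids taken."""
    taken = set()
    if x > 100:
        taken.add("x_large")
    else:
        taken.add("x_small")
    if y == x * 2:
        taken.add("y_double")          # hard to hit by chance
    return taken

def fitness(case):
    return len(branches_covered(*case))

def evolve(pop_size=50, generations=100, value_range=(-1000, 1000)):
    rng = random.Random(0)
    pop = [(rng.randint(*value_range), rng.randint(*value_range)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                        # selection: keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a[0], b[1])                              # crossover on the 2-tuple
            if rng.random() < 0.3:                            # mutation: perturb one gene
                child = (child[0] + rng.randint(-10, 10), child[1])
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print(best, branches_covered(*best))
```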
A Generic Method for Automatic Ground Truth Generation of Camera-captured Documents
Title | A Generic Method for Automatic Ground Truth Generation of Camera-captured Documents |
Authors | Sheraz Ahmed, Muhammad Imran Malik, Muhammad Zeshan Afzal, Koichi Kise, Masakazu Iwamura, Andreas Dengel, Marcus Liwicki |
Abstract | The contribution of this paper is fourfold. The first contribution is a novel, generic method for automatic ground truth generation of camera-captured document images (books, magazines, articles, invoices, etc.). It enables us to build large-scale (i.e., millions of images) labeled camera-captured/scanned document datasets, without any human intervention. The method is generic, language independent and can be used for generation of labeled document datasets (both scanned and camera-captured) in any cursive and non-cursive language, e.g., English, Russian, Arabic, Urdu, etc. To assess the effectiveness of the presented method, two different datasets in English and Russian are generated using the presented method. Evaluation of samples from the two datasets shows that 99.98% of the images were correctly labeled. The second contribution is a large dataset (called C3Wi) of camera-captured characters and words images, comprising 1 million word images (10 million character images), captured in a real camera-based acquisition. This dataset can be used for training as well as testing of character recognition systems on camera-captured documents. The third contribution is a novel method for the recognition of camera-captured document images. The proposed method is based on Long Short-Term Memory and outperforms the state-of-the-art methods for camera-based OCRs. As a fourth contribution, various benchmark tests are performed to uncover the behavior of commercial (ABBYY), open source (Tesseract), and the presented camera-based OCR using the presented C3Wi dataset. Evaluation results reveal that the existing OCRs, which already get very high accuracies on scanned documents, have limited performance on camera-captured document images; where ABBYY has an accuracy of 75%, Tesseract an accuracy of 50.22%, while the presented character recognition system has an accuracy of 95.10%. |
Tasks | Optical Character Recognition |
Published | 2016-05-04 |
URL | http://arxiv.org/abs/1605.01189v1 |
PDF | http://arxiv.org/pdf/1605.01189v1.pdf |
PWC | https://paperswithcode.com/paper/a-generic-method-for-automatic-ground-truth |
Repo | |
Framework | |
DeepGaze II: Reading fixations from deep features trained on object recognition
Title | DeepGaze II: Reading fixations from deep features trained on object recognition |
Authors | Matthias Kümmerer, Thomas S. A. Wallis, Matthias Bethge |
Abstract | Here we present DeepGaze II, a model that predicts where people look in images. The model uses the features from the VGG-19 deep neural network trained to identify objects in images. Contrary to other saliency models that use deep features, here we use the VGG features for saliency prediction with no additional fine-tuning (rather, a few readout layers are trained on top of the VGG features to predict saliency). The model is therefore a strong test of transfer learning. After conservative cross-validation, DeepGaze II explains about 87% of the explainable information gain in the patterns of fixations and achieves top performance in area under the curve metrics on the MIT300 hold-out benchmark. These results corroborate the finding from DeepGaze I (which explained 56% of the explainable information gain), that deep features trained on object recognition provide a versatile feature space for performing related visual tasks. We explore the factors that contribute to this success and present several informative image examples. A web service is available to compute model predictions at http://deepgaze.bethgelab.org. |
Tasks | Object Recognition, Saliency Prediction, Transfer Learning |
Published | 2016-10-05 |
URL | http://arxiv.org/abs/1610.01563v1 |
PDF | http://arxiv.org/pdf/1610.01563v1.pdf |
PWC | https://paperswithcode.com/paper/deepgaze-ii-reading-fixations-from-deep |
Repo | |
Framework | |
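A minimal PyTorch sketch of the recipe described in the abstract: a frozen VGG-19 convolutional feature extractor with a small trainable readout head that outputs a saliency map normalized over pixels. The readout width, the use of the final `features` block, and the omission of pretrained weights are assumptions for illustration, not the DeepGaze II configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19

class SaliencyReadout(nn.Module):
    """Frozen VGG-19 convolutional features plus a small trainable readout
    (a stack of 1x1 convolutions) that predicts a single-channel saliency map.
    The layer choice and readout width are assumptions for illustration,
    not the configuration used by DeepGaze II."""

    def __init__(self):
        super().__init__()
        self.features = vgg19(weights=None).features   # pretrained weights omitted in this sketch
        for p in self.features.parameters():
            p.requires_grad = False                     # VGG stays fixed; only the readout trains
        self.readout = nn.Sequential(
            nn.Conv2d(512, 32, kernel_size=1), nn.ReLU(),
            nn.Conv2d(32, 16, kernel_size=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),
        )

    def forward(self, x):
        feats = self.features(x)                        # (B, 512, H/32, W/32)
        logits = self.readout(feats)
        # Normalize to a probability distribution over pixels, as in fixation models.
        b, _, h, w = logits.shape
        return torch.softmax(logits.view(b, -1), dim=1).view(b, 1, h, w)

if __name__ == "__main__":
    model = SaliencyReadout()
    out = model(torch.randn(1, 3, 224, 224))
    print(out.shape, float(out.sum()))                  # (1, 1, 7, 7), sums to 1
```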
Towards an Ontology-Driven Blockchain Design for Supply Chain Provenance
Title | Towards an Ontology-Driven Blockchain Design for Supply Chain Provenance |
Authors | Henry M. Kim, Marek Laskowski |
Abstract | An interesting research problem in our age of Big Data is that of determining provenance. Granular evaluation of the provenance of physical goods (e.g., tracking the ingredients of a pharmaceutical or demonstrating the authenticity of luxury goods) has often not been possible with today’s items that are produced and transported in complex, inter-organizational, often internationally-spanning supply chains. Recent adoption of Internet of Things and Blockchain technologies holds promise for better supply chain provenance. We are particularly interested in the blockchain, as many favoured use cases of blockchain are for provenance tracking. We are also interested in applying ontologies, as there has been some work done on knowledge provenance, traceability, and food provenance using ontologies. In this paper, we make a case for why ontologies can contribute to blockchain design. To support this case, we analyze a traceability ontology and translate some of its representations to smart contracts that execute a provenance trace and enforce traceability constraints on the Ethereum blockchain platform. |
Tasks | |
Published | 2016-08-28 |
URL | http://arxiv.org/abs/1610.02922v1 |
PDF | http://arxiv.org/pdf/1610.02922v1.pdf |
PWC | https://paperswithcode.com/paper/towards-an-ontology-driven-blockchain-design |
Repo | |
Framework | |
Knowledge Questions from Knowledge Graphs
Title | Knowledge Questions from Knowledge Graphs |
Authors | Dominic Seyler, Mohamed Yahya, Klaus Berberich |
Abstract | We address the novel problem of automatically generating quiz-style knowledge questions from a knowledge graph such as DBpedia. Questions of this kind have ample applications, for instance, to educate users about or to evaluate their knowledge in a specific domain. To solve the problem, we propose an end-to-end approach. The approach first selects a named entity from the knowledge graph as an answer. It then generates a structured triple-pattern query, which yields the answer as its sole result. If a multiple-choice question is desired, the approach selects alternative answer options. Finally, our approach uses a template-based method to verbalize the structured query and yield a natural language question. A key challenge is estimating how difficult the generated question is to human users. To do this, we make use of historical data from the Jeopardy! quiz show and a semantically annotated Web-scale document collection, engineer suitable features, and train a logistic regression classifier to predict question difficulty. Experiments demonstrate the viability of our overall approach. |
Tasks | Knowledge Graphs |
Published | 2016-10-31 |
URL | http://arxiv.org/abs/1610.09935v2 |
PDF | http://arxiv.org/pdf/1610.09935v2.pdf |
PWC | https://paperswithcode.com/paper/knowledge-questions-from-knowledge-graphs |
Repo | |
Framework | |
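A toy sketch of the template-based verbalization step described in the abstract: a knowledge-graph triple is turned into a multiple-choice question, with distractors drawn from other subjects sharing the same predicate. The triples, the `field` predicate, and the template are made up; entity selection from DBpedia and the difficulty classifier are not reproduced.

```python
import random

# A toy set of (subject, predicate, object) triples; the real pipeline
# would draw these from a knowledge graph such as DBpedia.
TRIPLES = [
    ("Marie Curie", "field", "Physics"),
    ("Alan Turing", "field", "Computer Science"),
    ("Charles Darwin", "field", "Biology"),
    ("Ada Lovelace", "field", "Mathematics"),
]

# One verbalization template per predicate; the answer entity is the subject.
TEMPLATES = {
    "field": "Which person is best known for work in the field of {object}?",
}

def make_question(answer_triple, all_triples, n_options=4, seed=0):
    """Verbalize a triple into a multiple-choice question. Distractors are
    other subjects that share the same predicate but a different object."""
    rng = random.Random(seed)
    subj, pred, obj = answer_triple
    question = TEMPLATES[pred].format(object=obj)
    distractors = [s for s, p, o in all_triples if p == pred and s != subj and o != obj]
    options = rng.sample(distractors, min(n_options - 1, len(distractors))) + [subj]
    rng.shuffle(options)
    return question, options, subj

if __name__ == "__main__":
    q, opts, answer = make_question(TRIPLES[0], TRIPLES)
    print(q)
    for opt in opts:
        print(" -", opt)
    print("answer:", answer)
```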