May 6, 2019

3139 words 15 mins read

Paper Group ANR 274



Evaluation of Protein Structural Models Using Random Forests

Title Evaluation of Protein Structural Models Using Random Forests
Authors Renzhi Cao, Taeho Jo, Jianlin Cheng
Abstract Protein structure prediction has been a grand challenge problem in structural biology over the last few decades. Protein quality assessment plays a very important role in protein structure prediction. In this paper, we propose a new protein quality assessment method which can predict both local and global quality of protein 3D structural models. Our method uses both multi-model and single-model quality assessment methods for global quality assessment, and uses chemical, physical, and geometrical features, together with the global quality score, for local quality assessment. CASP9 targets are used to generate the features for local quality assessment. We evaluate the performance of our local quality assessment method on CASP10, where it is comparable with two state-of-the-art QA methods in terms of the average absolute difference between the real and predicted distances. In addition, we blindly tested our method on CASP11; its good performance shows that combining single- and multi-model quality assessment methods can improve the accuracy of model quality assessment, and that the random forest technique can be used to train a good local quality assessment model.
Tasks
Published 2016-02-13
URL http://arxiv.org/abs/1602.04277v1
PDF http://arxiv.org/pdf/1602.04277v1.pdf
PWC https://paperswithcode.com/paper/evaluation-of-protein-structural-models-using
Repo
Framework
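
The method above is essentially supervised regression over structural features. As a hedged sketch only (not the authors' code; the features, labels, and sizes below are invented stand-ins), a random-forest local quality predictor in scikit-learn might look like this:

```python
# Minimal sketch: random-forest local quality assessment (illustrative only).
# Feature names, shapes, and labels are assumptions, not the paper's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical per-residue features: chemical, physical, geometrical scores
# plus a global quality score, as the abstract describes.
X = rng.random((5000, 10))          # 5000 residues x 10 features
y = rng.random(5000) * 15.0         # stand-in local distance error (angstroms)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print("mean absolute distance:", np.mean(np.abs(pred - y_te)))
```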

Detecting and Extracting Events from Text Documents

Title Detecting and Extracting Events from Text Documents
Authors Jugal Kalita
Abstract Events of various kinds are mentioned and discussed in text documents, whether they are books, news articles, blogs or microblog feeds. The paper starts by giving an overview of how events are treated in linguistics and philosophy. We follow this discussion by surveying how events and associated information are handled computationally. In particular, we look at how textual documents can be mined to extract events and ancillary information; these days, this is done mostly through the application of various machine learning techniques. We also discuss applications of event detection and extraction systems, particularly in summarization, in the medical domain and in the context of Twitter posts. We end the paper with a discussion of challenges and future directions.
Tasks
Published 2016-01-15
URL http://arxiv.org/abs/1601.04012v1
PDF http://arxiv.org/pdf/1601.04012v1.pdf
PWC https://paperswithcode.com/paper/detecting-and-extracting-events-from-text
Repo
Framework

Property-driven State-Space Coarsening for Continuous Time Markov Chains

Title Property-driven State-Space Coarsening for Continuous Time Markov Chains
Authors Michalis Michaelides, Dimitrios Milios, Jane Hillston, Guido Sanguinetti
Abstract Dynamical systems with large state-spaces are often expensive to thoroughly explore experimentally. Coarse-graining methods aim to define simpler systems which are more amenable to analysis and exploration; most current methods, however, focus on a priori state aggregation based on similarities in transition rates, which is not necessarily reflected in similar behaviours at the level of trajectories. We propose a way to coarsen the state-space of a system which optimally preserves the satisfaction of a set of logical specifications about the system’s trajectories. Our approach is based on Gaussian Process emulation and Multi-Dimensional Scaling, a dimensionality reduction technique which optimally preserves distances in non-Euclidean spaces. We show how to obtain low-dimensional visualisations of the system’s state-space from the perspective of properties’ satisfaction, and how to define macro-states which behave coherently with respect to the specifications. Our approach is illustrated on a non-trivial running example, showing promising performance and high computational efficiency.
Tasks Dimensionality Reduction
Published 2016-06-03
URL http://arxiv.org/abs/1606.01111v2
PDF http://arxiv.org/pdf/1606.01111v2.pdf
PWC https://paperswithcode.com/paper/property-driven-state-space-coarsening-for
Repo
Framework
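
The pipeline in this abstract, distances between states in "property-satisfaction space" followed by Multi-Dimensional Scaling and grouping into macro-states, can be sketched as follows. This is illustrative only: the satisfaction probabilities are synthetic stand-ins for quantities the paper estimates with Gaussian Process emulation, and k-means is a placeholder aggregation step.

```python
# Illustrative sketch: embed states by property-satisfaction behaviour, then
# cluster into macro-states. Not the authors' implementation.
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_states, n_props = 200, 3
# P[i, j]: probability that trajectories from state i satisfy property j
# (synthetic here; the paper estimates these via GP emulation)
P = rng.random((n_states, n_props))

# Pairwise distances between states in "satisfaction space"
D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)

# Low-dimensional visualisation preserving those distances
emb = MDS(n_components=2, dissimilarity="precomputed",
          random_state=1).fit_transform(D)

# Macro-states: groups of states that behave coherently w.r.t. the specs
macro = KMeans(n_clusters=5, random_state=1).fit_predict(emb)
print(emb.shape, np.bincount(macro))
```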

Average-case Hardness of RIP Certification

Title Average-case Hardness of RIP Certification
Authors Tengyao Wang, Quentin Berthet, Yaniv Plan
Abstract The restricted isometry property (RIP) for design matrices gives guarantees for optimal recovery in sparse linear models. It is of high interest in compressed sensing and statistical learning. This property is particularly important for computationally efficient recovery methods. As a consequence, even though it is in general NP-hard to check that RIP holds, there have been substantial efforts to find tractable proxies for it. These would allow the construction of RIP matrices and the polynomial-time verification of RIP given an arbitrary matrix. We consider the framework of average-case certifiers, which never wrongly declare that a matrix is RIP while being often correct for random instances. While there are such functions which are tractable in a suboptimal parameter regime, we show that this is a computationally hard task in any better regime. Our results are based on a new, weaker assumption on the problem of detecting dense subgraphs.
Tasks
Published 2016-05-31
URL http://arxiv.org/abs/1605.09646v1
PDF http://arxiv.org/pdf/1605.09646v1.pdf
PWC https://paperswithcode.com/paper/average-case-hardness-of-rip-certification
Repo
Framework
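
For intuition on why certification is hard: the RIP constant is a maximum over all sparse supports, and exact verification enumerates exponentially many of them. A toy brute-force checker, feasible only at tiny sizes and in no way the paper's certifier:

```python
# Toy brute-force computation of the restricted isometry constant delta_s.
# Exact verification is NP-hard in general, which is why tractable
# certifiers matter; this is purely illustrative.
import itertools
import numpy as np

def rip_constant(A, s):
    """Smallest delta with (1-delta)|x|^2 <= |Ax|^2 <= (1+delta)|x|^2
    for all s-sparse x, found by enumerating all size-s supports."""
    n = A.shape[1]
    delta = 0.0
    for support in itertools.combinations(range(n), s):
        sub = A[:, support]
        eigs = np.linalg.eigvalsh(sub.T @ sub)
        delta = max(delta, abs(eigs[0] - 1.0), abs(eigs[-1] - 1.0))
    return delta

rng = np.random.default_rng(2)
m, n, s = 20, 30, 3
A = rng.normal(size=(m, n)) / np.sqrt(m)   # normalised Gaussian design
print("delta_3 ~=", round(rip_constant(A, s), 3))
```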

Convolutional Tables Ensemble: classification in microseconds

Title Convolutional Tables Ensemble: classification in microseconds
Authors Aharon Bar-Hillel, Eyal Krupka, Noam Bloom
Abstract We study classifiers operating under severe classification time constraints, corresponding to 1-1000 CPU microseconds, using Convolutional Tables Ensemble (CTE), an inherently fast architecture for object category recognition. The architecture is based on convolutionally-applied sparse feature extraction, using trees or ferns, and a linear voting layer. Several structure and optimization variants are considered, including novel decision functions, a tree learning algorithm, and distillation from a CNN to a CTE architecture. Accuracy improvements of 24-45% over related art of similar speed are demonstrated on standard object recognition benchmarks. Using Pareto speed-accuracy curves, we show that CTE can provide better accuracy than Convolutional Neural Networks (CNNs) for a certain range of classification time constraints, or alternatively provide similar error rates with a 5-200X speedup.
Tasks Object Recognition
Published 2016-02-14
URL http://arxiv.org/abs/1602.04489v1
PDF http://arxiv.org/pdf/1602.04489v1.pdf
PWC https://paperswithcode.com/paper/convolutional-tables-ensemble-classification
Repo
Framework
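
A minimal sketch of the fern variant of such an architecture: each fern turns K pixel-pair comparisons into a K-bit index into a lookup table of class scores, and the tables are summed in a linear voting layer. The tables below are random placeholders standing in for learned ones, so this shows the inference mechanics only, not the paper's training:

```python
# Fern-ensemble inference sketch: bit-tests -> table lookup -> linear voting.
# Table contents are random stand-ins for learned parameters.
import numpy as np

rng = np.random.default_rng(3)
n_ferns, K, n_classes, patch = 10, 8, 5, 16

# Random pixel-pair tests per fern and random vote tables
pairs = rng.integers(0, patch * patch, size=(n_ferns, K, 2))
tables = rng.normal(size=(n_ferns, 2 ** K, n_classes))

def classify(img):
    x = img.ravel()
    scores = np.zeros(n_classes)
    for f in range(n_ferns):
        bits = (x[pairs[f, :, 0]] > x[pairs[f, :, 1]]).astype(int)
        idx = int("".join(map(str, bits)), 2)   # K-bit table index
        scores += tables[f, idx]                # lookup + linear voting
    return scores.argmax()

print(classify(rng.random((patch, patch))))
```

Inference is a handful of comparisons and table lookups per fern, which is what makes microsecond-scale classification plausible.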

Learning Robust Video Synchronization without Annotations

Title Learning Robust Video Synchronization without Annotations
Authors Patrick Wieschollek, Ido Freeman, Hendrik P. A. Lensch
Abstract Aligning video sequences is a fundamental yet still unsolved component for a broad range of applications in computer graphics and vision. Most classical image processing methods cannot be directly applied to related video problems due to the high amount of underlying data and their limitation to small changes in appearance. We present a scalable and robust method for computing a non-linear temporal video alignment. The approach autonomously manages its training data for learning a meaningful representation in an iterative procedure, each iteration increasing its own knowledge. It leverages the nature of the videos themselves to remove the need for manually created labels. While previous alignment methods similarly consider weather conditions, season and illumination, our approach is able to align videos from data recorded months apart.
Tasks Video Alignment, Video Synchronization
Published 2016-10-19
URL http://arxiv.org/abs/1610.05985v3
PDF http://arxiv.org/pdf/1610.05985v3.pdf
PWC https://paperswithcode.com/paper/learning-robust-video-synchronization-without
Repo
Framework
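
For a concrete picture of non-linear temporal alignment, here is a classic dynamic-time-warping sketch over per-frame descriptors. DTW is a deliberate stand-in: the paper's contribution is learning the frame representation without annotations, which this toy omits.

```python
# Illustrative non-linear temporal alignment via dynamic time warping (DTW)
# over frame descriptors. A stand-in, not the paper's learning-based method.
import numpy as np

def dtw_cost(A, B):
    """Align descriptor sequences A (TxD) and B (SxD); return cost matrix."""
    T, S = len(A), len(B)
    D = np.full((T + 1, S + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, T + 1):
        for j in range(1, S + 1):
            cost = np.linalg.norm(A[i - 1] - B[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[1:, 1:]

rng = np.random.default_rng(4)
a = rng.random((50, 64))                    # hypothetical frame embeddings
b = a[::2] + 0.01 * rng.random((25, 64))    # subsampled, noisy second video
print("alignment cost:", round(dtw_cost(a, b)[-1, -1], 2))
```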

Barcodes for Medical Image Retrieval Using Autoencoded Radon Transform

Title Barcodes for Medical Image Retrieval Using Autoencoded Radon Transform
Authors Hamid R. Tizhoosh, Christopher Mitcheltree, Shujin Zhu, Shamak Dutta
Abstract Using content-based binary codes to tag digital images has emerged as a promising retrieval technology. Recently, Radon barcodes (RBCs) have been introduced as a new binary descriptor for image search. RBCs are generated by binarization of Radon projections and by assembling them into a vector, namely the barcode. A simple local thresholding has been suggested for binarization. In this paper, we put forward the idea of “autoencoded Radon barcodes”. Using images in a training dataset, we autoencode Radon projections to perform binarization on the outputs of hidden layers. We employed the mini-batch stochastic gradient descent approach for training. Each hidden layer of the autoencoder can produce a barcode using a threshold determined by the range of the logistic function used. The compressing capability of autoencoders apparently reduces the redundancies inherent in Radon projections, leading to more accurate retrieval results. The IRMA dataset with 14,410 x-ray images is used to validate the performance of the proposed method. The experimental results, including comparison with RBCs, SURF and BRISK, show that the autoencoded Radon barcode (ARBC) has the capacity to capture important information and learn richer representations, resulting in lower retrieval errors for image retrieval measured by the accuracy of the first hit only.
Tasks Image Retrieval, Medical Image Retrieval
Published 2016-09-16
URL http://arxiv.org/abs/1609.05112v1
PDF http://arxiv.org/pdf/1609.05112v1.pdf
PWC https://paperswithcode.com/paper/barcodes-for-medical-image-retrieval-using
Repo
Framework
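
A hedged sketch of the ARBC idea: Radon projections are compressed by a logistic autoencoder trained with mini-batch SGD, and hidden activations are binarised at 0.5, the midpoint of the logistic range. Data, sizes, and projection angles below are assumptions, not the paper's configuration.

```python
# Sketch: autoencoded Radon barcode (ARBC), illustrative only.
import numpy as np
from skimage.transform import radon
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
images = rng.random((100, 32, 32))  # stand-in for IRMA x-ray images

# Radon projections at a few angles, flattened into one vector per image
X = np.stack([radon(im, theta=[0, 45, 90, 135], circle=False).ravel()
              for im in images])

# Autoencoder (input -> logistic hidden layer -> reconstruction),
# trained with mini-batch SGD as the abstract describes
ae = MLPRegressor(hidden_layer_sizes=(64,), activation="logistic",
                  solver="sgd", batch_size=16, max_iter=500, random_state=5)
ae.fit(X, X)

# Hidden-layer activations -> binary barcode via the logistic midpoint 0.5
hidden = 1.0 / (1.0 + np.exp(-(X @ ae.coefs_[0] + ae.intercepts_[0])))
barcodes = (hidden > 0.5).astype(np.uint8)
print(barcodes.shape, barcodes[0][:16])
```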

Gabor Barcodes for Medical Image Retrieval

Title Gabor Barcodes for Medical Image Retrieval
Authors Mina Nouredanesh, Hamid R. Tizhoosh, Ershad Banijamali
Abstract In recent years, advances in medical imaging have led to the emergence of massive databases, containing images from a diverse range of modalities. This has significantly heightened the need for automated annotation of the images on one side, and fast and memory-efficient content-based image retrieval systems on the other side. Binary descriptors have recently gained more attention as a potential vehicle to achieve these goals. One of the recently introduced binary descriptors for tagging of medical images is Radon barcodes (RBCs), which are derived from the Radon transform via local thresholding. The Gabor transform is also a powerful transform for extracting texture-based information. Gabor features have exhibited robustness against rotation, scale, and photometric disturbances, such as illumination changes and image noise, in many applications. This paper introduces Gabor Barcodes (GBCs) as a novel framework for image annotation. To find the most discriminative GBC for a given query image, the effects of employing Gabor filters with different parameters, i.e., different sets of scales and orientations, are investigated, resulting in different barcode lengths and retrieval performances. The proposed method has been evaluated on the IRMA dataset with 193 classes, comprising 12,677 x-ray images for indexing and 1,733 x-ray images for testing. A total error score as low as $351$ ($\approx 80\%$ accuracy for the first hit) was achieved.
Tasks Content-Based Image Retrieval, Image Retrieval, Medical Image Retrieval
Published 2016-05-14
URL http://arxiv.org/abs/1605.04478v1
PDF http://arxiv.org/pdf/1605.04478v1.pdf
PWC https://paperswithcode.com/paper/gabor-barcodes-for-medical-image-retrieval
Repo
Framework
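
A minimal sketch of the GBC construction: filter the image with a Gabor bank over a few scales and orientations, then binarise the responses into one barcode. The thresholding rule and all parameters below are illustrative assumptions, not the paper's settings.

```python
# Sketch: Gabor barcode (GBC) from a small filter bank, illustrative only.
import numpy as np
from skimage.filters import gabor

rng = np.random.default_rng(6)
image = rng.random((64, 64))    # stand-in for an IRMA x-ray image

bits = []
for frequency in (0.1, 0.2, 0.3):                  # scales (assumed)
    for theta in np.arange(4) * np.pi / 4:         # orientations (assumed)
        real, _ = gabor(image, frequency=frequency, theta=theta)
        # Downsample the response magnitude, threshold at its median
        resp = np.abs(real)[::16, ::16].ravel()
        bits.append((resp > np.median(resp)).astype(np.uint8))

barcode = np.concatenate(bits)
print("barcode length:", barcode.size)   # 3 scales x 4 orientations x 16 bits
```

Varying the numbers of scales and orientations changes the barcode length, which is exactly the trade-off the abstract says the paper investigates.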

Data Integration with High Dimensionality

Title Data Integration with High Dimensionality
Authors Xin Gao, Raymond J. Carroll
Abstract We consider a problem of data integration. Consider determining which genes affect a disease. The genes, which we call predictor objects, can be measured in different experiments on the same individual. We address the question of finding which genes are predictors of disease in any of the experiments. Our formulation is more general. In a given data set, there are a fixed number of responses for each individual, which may include a mix of discrete, binary and continuous variables. There is also a class of predictor objects, which may differ within a subject depending on how the predictor object is measured, i.e., depending on the experiment. The goal is to select the predictor objects that affect any of the responses, where the number of such informative predictor objects or features tends to infinity as the sample size increases. There are marginal likelihoods for each way the predictor object is measured, i.e., for each experiment. We specify a pseudolikelihood combining the marginal likelihoods, and propose a pseudolikelihood information criterion. Under regularity conditions, we establish selection consistency for the pseudolikelihood information criterion with unbounded true model size, which includes a Bayesian information criterion with appropriate penalty term as a special case. Simulations indicate that data integration improves upon, sometimes dramatically, using only one of the data sources.
Tasks
Published 2016-10-03
URL http://arxiv.org/abs/1610.00667v1
PDF http://arxiv.org/pdf/1610.00667v1.pdf
PWC https://paperswithcode.com/paper/data-integration-with-high-dimensionality
Repo
Framework
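
The selection criterion outlined above can be sketched compactly (a hedged reconstruction; the paper's exact notation and penalty may differ). With marginal log-likelihoods $\ell_k$ for the $K$ experiments, a composite pseudolikelihood and a BIC-style information criterion are

$$\ell_P(\theta) = \sum_{k=1}^{K} \ell_k(\theta), \qquad \mathrm{PIC}(M) = -2\,\ell_P(\hat\theta_M) + \lambda_n\,|M|,$$

where $M$ is a candidate set of informative predictor objects, $\hat\theta_M$ the pseudolikelihood estimate under $M$, and $\lambda_n = \log n$ recovers a BIC-type penalty as the special case the abstract mentions; the selected model minimises the criterion.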

Truthful Mechanisms for Matching and Clustering in an Ordinal World

Title Truthful Mechanisms for Matching and Clustering in an Ordinal World
Authors Elliot Anshelevich, Shreyas Sekar
Abstract We study truthful mechanisms for matching and related problems in a partial information setting, where the agents’ true utilities are hidden, and the algorithm only has access to ordinal preference information. Our model is motivated by the fact that in many settings, agents cannot express the numerical values of their utility for different outcomes, but are still able to rank the outcomes in their order of preference. Specifically, we study problems where the ground truth exists in the form of a weighted graph of agent utilities, but the algorithm can only elicit the agents’ private information in the form of a preference ordering for each agent induced by the underlying weights. Against this backdrop, we design truthful algorithms to approximate the true optimum solution with respect to the hidden weights. Our techniques yield universally truthful algorithms for a number of graph problems: a 1.76-approximation algorithm for Max-Weight Matching, a 2-approximation algorithm for Max k-matching, a 6-approximation algorithm for Densest k-subgraph, and a 2-approximation algorithm for Max Traveling Salesman as long as the hidden weights constitute a metric. We also provide improved approximation algorithms for such problems when the agents are not able to lie about their preferences. Our results are the first non-trivial truthful approximation algorithms for these problems, and indicate that in many situations, we can design robust algorithms even when the agents may lie and only provide ordinal information instead of precise utilities.
Tasks
Published 2016-10-13
URL http://arxiv.org/abs/1610.04069v2
PDF http://arxiv.org/pdf/1610.04069v2.pdf
PWC https://paperswithcode.com/paper/truthful-mechanisms-for-matching-and
Repo
Framework
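
For a concrete example of a universally truthful ordinal mechanism, here is random serial dictatorship, a classic stand-in and not the paper's 1.76-approximation: agents pick in a random order, and reporting preferences honestly is always optimal for each agent.

```python
# Toy universally truthful matching from ordinal information only:
# random serial dictatorship (illustrative stand-in).
import random

def random_serial_dictatorship(prefs, seed=0):
    """prefs[agent] = items ranked best-first; returns agent -> item."""
    rng = random.Random(seed)
    order = list(prefs)
    rng.shuffle(order)                 # random priority order over agents
    taken, match = set(), {}
    for agent in order:
        for item in prefs[agent]:      # truthful: lying can only hurt
            if item not in taken:
                match[agent] = item
                taken.add(item)
                break
    return match

prefs = {"a": [1, 2, 3], "b": [1, 3, 2], "c": [2, 1, 3]}
print(random_serial_dictatorship(prefs))
```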

Deep Convolutional Poses for Human Interaction Recognition in Monocular Videos

Title Deep Convolutional Poses for Human Interaction Recognition in Monocular Videos
Authors Marcel Sheeny de Moraes, Sankha Mukherjee, Neil M Robertson
Abstract Human interaction recognition is a challenging problem in computer vision and has been researched over the years due to its important applications. With the development of deep models for the human pose estimation problem, this work aims to verify the effectiveness of using the human pose to recognize human interactions in monocular videos. This paper develops a method based on five steps: detect each person in the scene, track them, retrieve the human pose, extract features based on the pose, and finally recognize the interaction using a classifier. The Two-Person Interaction dataset was used for the development of this methodology. Using a whole-sequence evaluation approach, it achieved an average accuracy of 87.56% over all interactions. Yun et al. achieved 91.10% using the same dataset; however, their methodology used a depth sensor to recognize the interaction. The methodology developed in this paper shows that an RGB camera can be as effective as depth cameras for recognizing the interaction between two persons, using recent deep models to estimate the human pose.
Tasks Human Interaction Recognition, Pose Estimation
Published 2016-12-13
URL http://arxiv.org/abs/1612.03982v1
PDF http://arxiv.org/pdf/1612.03982v1.pdf
PWC https://paperswithcode.com/paper/deep-convolutional-poses-for-human
Repo
Framework
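
A hedged sketch of the pipeline's last two steps, pose-based feature extraction and classification, with the first three steps (detection, tracking, pose estimation) assumed done upstream. The features and class count below are invented for illustration.

```python
# Sketch: turn 2D pose keypoints into simple features (pairwise joint
# distances) and classify the interaction. Illustrative stand-in only.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(7)
n_clips, n_frames, n_joints = 200, 30, 2 * 15   # two people, 15 joints each

# Stand-in pose sequences: (clips, frames, joints, xy)
poses = rng.random((n_clips, n_frames, n_joints, 2))

def features(seq):
    # Mean pairwise joint distance over the clip: a crude interaction cue
    d = np.linalg.norm(seq[:, :, None] - seq[:, None, :], axis=-1)
    return d.mean(axis=0)[np.triu_indices(n_joints, k=1)]

X = np.stack([features(p) for p in poses])
y = rng.integers(0, 8, n_clips)                 # 8 interaction classes (assumed)
clf = SVC().fit(X[:150], y[:150])
print("toy accuracy:", clf.score(X[150:], y[150:]))
```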

Coverage Embedding Models for Neural Machine Translation

Title Coverage Embedding Models for Neural Machine Translation
Authors Haitao Mi, Baskaran Sankaran, Zhiguo Wang, Abe Ittycheriah
Abstract In this paper, we enhance attention-based neural machine translation (NMT) by adding explicit coverage embedding models to alleviate the issues of repeated and dropped translations in NMT. For each source word, our model starts with a full coverage embedding vector to track the coverage status, and then keeps updating it with neural networks as the translation proceeds. Experiments on the large-scale Chinese-to-English task show that our enhanced model improves translation quality significantly on various test sets over a strong large-vocabulary NMT system.
Tasks Machine Translation
Published 2016-05-10
URL http://arxiv.org/abs/1605.03148v2
PDF http://arxiv.org/pdf/1605.03148v2.pdf
PWC https://paperswithcode.com/paper/coverage-embedding-models-for-neural-machine
Repo
Framework
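
One coverage update the abstract alludes to can be sketched as follows (a hedged reconstruction; the paper also explores learned, gated updates, and the subtractive form below is illustrative). Each source word $x_j$ starts from a full coverage embedding $c^{\text{full}}_{x_j}$ that is depleted by the attention mass it receives:

$$c_{0,j} = c^{\text{full}}_{x_j}, \qquad c_{t,j} = c_{t-1,j} - \alpha_{t,j}\, c^{\text{full}}_{x_j},$$

where $\alpha_{t,j}$ is the attention weight on source word $j$ at decoding step $t$. A word whose coverage has been driven to zero attracts little further attention, discouraging repeated translation, while leftover coverage at the end of decoding signals a dropped translation.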

The Conditional Lucas & Kanade Algorithm

Title The Conditional Lucas & Kanade Algorithm
Authors Chen-Hsuan Lin, Rui Zhu, Simon Lucey
Abstract The Lucas & Kanade (LK) algorithm is the method of choice for efficient dense image and object alignment. The approach is efficient as it attempts to model the connection between appearance and geometric displacement through a linear relationship that assumes independence across pixel coordinates. A drawback of the approach, however, is its generative nature. Specifically, its performance is tightly coupled with how well the linear model can synthesize appearance from geometric displacement, even though the alignment task itself is associated with the inverse problem. In this paper, we present a new approach, referred to as the Conditional LK algorithm, which: (i) directly learns linear models that predict geometric displacement as a function of appearance, and (ii) employs a novel strategy for ensuring that the generative pixel independence assumption can still be taken advantage of. We demonstrate that our approach exhibits superior performance to classical generative forms of the LK algorithm. Furthermore, we demonstrate its comparable performance to state-of-the-art methods such as the Supervised Descent Method with substantially fewer training examples, as well as the unique ability to “swap” geometric warp functions without having to retrain from scratch. Finally, from a theoretical perspective, our approach hints at possible redundancies that exist in current state-of-the-art methods for alignment that could be leveraged in vision systems of the future.
Tasks
Published 2016-03-29
URL http://arxiv.org/abs/1603.08597v1
PDF http://arxiv.org/pdf/1603.08597v1.pdf
PWC https://paperswithcode.com/paper/the-conditional-lucas-kanade-algorithm
Repo
Framework
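
The generative-versus-conditional distinction above can be written compactly (a hedged summary, not the paper's exact derivation). Classical LK linearises appearance as a function of displacement and solves a synthesis problem each iteration, while the conditional form learns a regressor $R$ mapping appearance residuals directly to displacement:

$$\Delta p = \arg\min_{\Delta p} \bigl\| I(p) + J\,\Delta p - T \bigr\|_2^2 \quad \text{(generative LK)}, \qquad \Delta p = R\,\bigl(T - I(p)\bigr) \quad \text{(conditional)},$$

where $I(p)$ is the image sampled under warp parameters $p$, $T$ the template, and $J$ the Jacobian of appearance with respect to the warp.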

Validity and reliability of free software for bidimensional gait analysis

Title Validity and reliability of free software for bidimensional gait analysis
Authors Ana Paula Quixadá, Andrea Naomi Onodera, Norberto Peña, José Garcia Vivas Miranda, Katia Nunes Sá
Abstract Although systems for evaluating human movement have been advancing in recent decades, their use is often not feasible in clinical practice because of high cost and the scarcity of trained operators to interpret their results. An ideal videogrammetry system should be easy to use, low cost, require minimal equipment, and be fast to carry out. CvMob is a free tool for the dynamic evaluation of human movements that expresses measurements in figures, tables, and graphics. This paper aims to determine whether CvMob is a reliable tool for the evaluation of two-dimensional human gait. This is a validity and reliability study. The sample was composed of 56 healthy individuals who walked on a 9-meter-long walkway and were simultaneously filmed by CvMob and Vicon system cameras. Linear trajectories and angular measurements were compared to validate the CvMob system, and inter- and intra-rater findings of the same measurements were used to determine reliability. A strong correlation (mean rs = 0.988) of the linear trajectories was found between systems and in the inter- and intra-rater analyses. According to the Bland-Altman method, the measures that had good agreement between systems were maximum knee flexion and extension (stance and swing), dorsiflexion range of motion, and stride length. CvMob is a reliable tool for the analysis of linear motion and lengths in two-dimensional evaluations of human gait. The angular measurements demonstrate high agreement for the knee joint; however, the hip and ankle measurements were limited by differences between systems.
Tasks
Published 2016-02-14
URL http://arxiv.org/abs/1602.04513v1
PDF http://arxiv.org/pdf/1602.04513v1.pdf
PWC https://paperswithcode.com/paper/validity-and-reliability-of-free-software-for
Repo
Framework
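
Since the agreement results rest on the Bland-Altman method, here is a short sketch of that computation (bias and 95% limits of agreement) on synthetic stand-in angles; the numbers are not the study's data.

```python
# Bland-Altman agreement sketch: mean difference (bias) and 95% limits of
# agreement between two measurement systems. Synthetic stand-in data.
import numpy as np

rng = np.random.default_rng(8)
vicon = rng.normal(60, 5, 56)              # knee flexion angle, 56 participants
cvmob = vicon + rng.normal(0.5, 1.5, 56)   # second system, small bias + noise

diff = cvmob - vicon
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)              # half-width of 95% limits
print(f"bias = {bias:.2f} deg, "
      f"limits of agreement = [{bias - loa:.2f}, {bias + loa:.2f}]")
```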

Comparing learning algorithms in neural network for diagnosing cardiovascular disease

Title Comparing learning algorithms in neural network for diagnosing cardiovascular disease
Authors Mirmorsal Madani
Abstract Today, data mining techniques are exploited in medical science for diagnosing, overcoming and treating diseases. The neural network is one technique widely used for diagnosis in the medical field. In this article, the efficiency of nine algorithms that form the basis of neural network learning is assessed for diagnosing cardiovascular disease. The algorithms are assessed in terms of accuracy, sensitivity, transparency, AROC and convergence rate by means of 10-fold cross-validation. The results suggest that in the training phase the Levenberg-Marquardt (LM) algorithm has the best efficiency in terms of all metrics, the OSS algorithm has the maximum accuracy in the testing phase, the SCG algorithm has the maximum transparency, and the CGB algorithm has the maximum sensitivity.
Tasks
Published 2016-11-05
URL http://arxiv.org/abs/1611.01678v1
PDF http://arxiv.org/pdf/1611.01678v1.pdf
PWC https://paperswithcode.com/paper/comparing-learning-algorithms-in-neural
Repo
Framework
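
A sketch of the evaluation protocol described above: compare training algorithms by 10-fold cross-validated accuracy. scikit-learn's solvers stand in for the nine MATLAB-style algorithms (LM, OSS, SCG, CGB, etc.), and the data is synthetic rather than the study's cardiovascular dataset.

```python
# Sketch: compare neural-network training algorithms via 10-fold CV accuracy.
# Solvers and data are illustrative stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=13, random_state=9)

for solver in ("lbfgs", "sgd", "adam"):
    clf = MLPClassifier(hidden_layer_sizes=(16,), solver=solver,
                        max_iter=2000, random_state=9)
    scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation
    print(f"{solver}: accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```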