Paper Group ANR 262
Rigid Slice-To-Volume Medical Image Registration through Markov Random Fields
Title | Rigid Slice-To-Volume Medical Image Registration through Markov Random Fields |
Authors | Roque Porchetto, Franco Stramana, Nikos Paragios, Enzo Ferrante |
Abstract | Rigid slice-to-volume registration is a challenging task, which finds application in medical imaging problems like image fusion for image-guided surgeries and motion correction for volume reconstruction. It is usually formulated as an optimization problem and solved using standard continuous methods. In this paper, we discuss how this task can be formulated as a discrete labeling problem on a graph. Inspired by previous works on discrete estimation of linear transformations using Markov Random Fields (MRFs), we model it using a pairwise MRF, where the nodes are associated with the rigid parameters, and the edges encode the relation between the variables. We compare the performance of the proposed method to a continuous formulation optimized using simplex, and we discuss how it can be used to further improve the accuracy of our approach. Promising results are obtained using a monomodal dataset composed of magnetic resonance images (MRI) of a beating heart. |
Tasks | Image Registration, Medical Image Registration |
Published | 2016-08-19 |
URL | http://arxiv.org/abs/1608.05562v1 |
PDF | http://arxiv.org/pdf/1608.05562v1.pdf |
PWC | https://paperswithcode.com/paper/rigid-slice-to-volume-medical-image |
Repo | |
Framework | |
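The discrete formulation lends itself to a compact illustration: treat each rigid parameter as a node whose labels are candidate increments, and sweep the nodes one at a time, keeping the label that best matches the resampled slice to the volume. The sketch below is a minimal ICM-style pass under simplifying assumptions (a four-parameter rigid model, an NCC criterion, illustrative step sizes); it is not the authors' exact MRF construction or optimizer.

```python
# Minimal sketch of a discrete, coordinate-wise search over rigid
# slice-to-volume parameters. All names and constants are illustrative.
import numpy as np
from scipy.ndimage import map_coordinates

def sample_plane(volume, theta, h=64, w=64):
    """Resample a 2D plane from `volume` for rigid params
    theta = (tx, ty, tz, rx): translations plus a tilt about the x-axis."""
    tx, ty, tz, rx = theta
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    z = tz + ys * np.sin(rx)
    y = ty + ys * np.cos(rx)
    x = tx + xs
    coords = np.vstack([z.ravel(), y.ravel(), x.ravel()])
    return map_coordinates(volume, coords, order=1).reshape(h, w)

def ncc(a, b):
    a, b = a - a.mean(), b - b.mean()
    return (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def icm_register(volume, slice2d, theta0, deltas=(-2, -1, 0, 1, 2), sweeps=5):
    theta = np.asarray(theta0, float)
    for _ in range(sweeps):                 # repeated ICM-style sweeps
        for i in range(len(theta)):         # one node (parameter) at a time
            cand = [theta.copy() for _ in deltas]
            for c, d in zip(cand, deltas):
                c[i] += d * 0.1             # discrete label set of increments
            scores = [ncc(sample_plane(volume, c), slice2d) for c in cand]
            theta = cand[int(np.argmax(scores))]
    return theta
```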
Deep API Learning
Title | Deep API Learning |
Authors | Xiaodong Gu, Hongyu Zhang, Dongmei Zhang, Sunghun Kim |
Abstract | Developers often wonder how to implement a certain functionality (e.g., how to parse XML files) using APIs. Obtaining an API usage sequence based on an API-related natural language query is very helpful in this regard. Given a query, existing approaches utilize information retrieval models to search for matching API sequences. These approaches treat queries and APIs as bags of words (i.e., keyword matching or word-to-word alignment) and lack a deep understanding of the semantics of the query. We propose DeepAPI, a deep-learning-based approach to generate API usage sequences for a given natural language query. Instead of a bag-of-words assumption, it learns the sequence of words in a query and the sequence of associated APIs. DeepAPI adapts a neural language model named RNN Encoder-Decoder. It encodes a word sequence (user query) into a fixed-length context vector, and generates an API sequence based on the context vector. We also augment the RNN Encoder-Decoder by considering the importance of individual APIs. We empirically evaluate our approach with more than 7 million annotated code snippets collected from GitHub. The results show that our approach generates largely accurate API sequences and outperforms the related approaches. |
Tasks | Information Retrieval, Language Modelling, Word Alignment |
Published | 2016-05-27 |
URL | http://arxiv.org/abs/1605.08535v3 |
PDF | http://arxiv.org/pdf/1605.08535v3.pdf |
PWC | https://paperswithcode.com/paper/deep-api-learning |
Repo | |
Framework | |
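A minimal sketch of the RNN Encoder-Decoder setup that DeepAPI builds on, in PyTorch: a GRU encodes the query into a fixed-length context vector, and a second GRU decodes an API sequence from it. Vocabulary sizes, dimensions, and the toy batch are illustrative assumptions, not the paper's configuration (which also adds API-importance weighting).

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, n_words, n_apis, dim=256):
        super().__init__()
        self.src_emb = nn.Embedding(n_words, dim)
        self.tgt_emb = nn.Embedding(n_apis, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, n_apis)

    def forward(self, query_ids, api_ids):
        # Encode the word sequence into a fixed-length context vector.
        _, ctx = self.encoder(self.src_emb(query_ids))
        # Decode the API sequence conditioned on that context.
        dec_out, _ = self.decoder(self.tgt_emb(api_ids), ctx)
        return self.out(dec_out)            # (batch, len, n_apis) logits

model = Seq2Seq(n_words=10000, n_apis=5000)
query = torch.randint(0, 10000, (2, 8))     # toy batch of 2 queries
apis = torch.randint(0, 5000, (2, 6))       # toy target API sequences
logits = model(query, apis)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 5000), apis.reshape(-1))
```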
Multimodal Remote Sensing Image Registration with Accuracy Estimation at Local and Global Scales
Title | Multimodal Remote Sensing Image Registration with Accuracy Estimation at Local and Global Scales |
Authors | M. L. Uss, B. Vozel, V. V. Lukin, K. Chehdi |
Abstract | This paper focuses on the potential accuracy of remote sensing image registration. We investigate how this accuracy can be estimated without ground truth and used to improve the registration quality of mono- and multimodal pairs of images. At the local scale of image fragments, the Cramer-Rao lower bound (CRLB) on registration error is estimated for each local correspondence between a coarsely registered pair of images. This CRLB is defined by local image texture and noise properties. In contrast to the standard approach, where registration accuracy is only evaluated at the output of the registration process, we use this valuable information as additional input knowledge. It greatly helps in detecting and discarding outliers and in refining the estimation of the geometrical transformation model parameters. Based on these ideas, a new area-based registration method called RAE (Registration with Accuracy Estimation) is proposed. In addition to its ability to automatically register very complex multimodal image pairs with high accuracy, the RAE method provides registration accuracy at the global scale as the covariance matrix of the estimation error of the geometrical transformation model parameters, or as the point-wise registration standard deviation. This accuracy does not depend on ground truth availability and characterizes each pair of registered images individually. Thus, the RAE method can identify image areas for which a predefined registration accuracy is guaranteed. The RAE method proves successful in reaching subpixel accuracy while registering eight complex mono/multimodal and multitemporal image pairs, including optical-to-optical, optical-to-radar, optical-to-Digital Elevation Model (DEM), and DEM-to-radar cases. Other methods employed in the comparisons fail to provide accurate results in a stable manner on the same test cases. |
Tasks | Image Registration |
Published | 2016-02-08 |
URL | http://arxiv.org/abs/1602.02720v2 |
PDF | http://arxiv.org/pdf/1602.02720v2.pdf |
PWC | https://paperswithcode.com/paper/multimodal-remote-sensing-image-registration |
Repo | |
Framework | |
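The core idea of using per-correspondence accuracy estimates as input, rather than only as an output diagnostic, can be sketched with a weighted fit: each match contributes with weight inversely proportional to its predicted error variance (a stand-in for the CRLB computed from local texture and noise), and the fit also yields the parameter covariance matrix. The 2D affine model and the 3-sigma outlier test below are illustrative simplifications, not the RAE method itself.

```python
import numpy as np

def weighted_affine(src, dst, sigma):
    """src, dst: (N, 2) matched points; sigma: (N,) predicted std per match."""
    w = 1.0 / sigma**2
    A, b, W = [], [], []
    for (x, y), (u, v), wi in zip(src, dst, w):
        A += [[x, y, 1, 0, 0, 0], [0, 0, 0, x, y, 1]]
        b += [u, v]
        W += [wi, wi]
    A, b, W = np.array(A), np.array(b), np.diag(W)
    cov = np.linalg.inv(A.T @ W @ A)       # covariance of the 6 parameters
    theta = cov @ (A.T @ W @ b)            # weighted least-squares estimate
    resid = A @ theta - b
    # Discard correspondences whose residual exceeds ~3 predicted stds.
    keep = np.abs(resid.reshape(-1, 2)).max(axis=1) < 3 * sigma
    return theta, cov, keep
```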
Automated assessment of non-native learner essays: Investigating the role of linguistic features
Title | Automated assessment of non-native learner essays: Investigating the role of linguistic features |
Authors | Sowmya Vajjala |
Abstract | Automatic essay scoring (AES) refers to the process of scoring free-text responses to given prompts, with human grader scores as the gold standard. Writing such essays is an essential component of many language and aptitude exams. Hence, AES has become an active and established area of research, and many proprietary systems are used in real-life applications today. However, not much is known about which specific linguistic features are useful for prediction and how consistent this is across datasets. This article addresses that by exploring the role of various linguistic features in automatic essay scoring using two publicly available datasets of non-native English essays written in test-taking scenarios. The linguistic properties are modeled by encoding lexical, syntactic, discourse, and error types of learner language in the feature set. Predictive models are then developed using these features on both datasets, and the most predictive features are compared. While the results show that the feature set yields good predictive models with both datasets, the question “what are the most predictive features?” has a different answer for each dataset. |
Tasks | |
Published | 2016-12-02 |
URL | http://arxiv.org/abs/1612.00729v1 |
PDF | http://arxiv.org/pdf/1612.00729v1.pdf |
PWC | https://paperswithcode.com/paper/automated-assessment-of-non-native-learner |
Repo | |
Framework | |
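The modeling setup, features extracted from essays feeding a predictive model trained against human scores, can be sketched in a few lines of scikit-learn. The four length/lexical features below are toy stand-ins for the lexical, syntactic, discourse, and error features the article actually studies, and the two-essay corpus is purely illustrative.

```python
import numpy as np
from sklearn.linear_model import Ridge

def essay_features(text):
    words = text.split()
    sents = max(text.count('.'), 1)
    return [
        len(words),                               # essay length
        len(set(words)) / max(len(words), 1),     # type-token ratio
        np.mean([len(w) for w in words]),         # mean word length
        len(words) / sents,                       # words per sentence
    ]

essays = ["Short essay .", "A somewhat longer essay with more varied words ."]
scores = [2.0, 4.0]                               # human grader scores
X = np.array([essay_features(e) for e in essays])
model = Ridge().fit(X, scores)
print(model.predict(X))
```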
Relative Error Embeddings for the Gaussian Kernel Distance
Title | Relative Error Embeddings for the Gaussian Kernel Distance |
Authors | Di Chen, Jeff M. Phillips |
Abstract | A reproducing kernel can define an embedding of a data point into an infinite-dimensional reproducing kernel Hilbert space (RKHS). The norm in this space describes a distance, which we call the kernel distance. The random Fourier features (of Rahimi and Recht) describe an oblivious approximate mapping into finite-dimensional Euclidean space that behaves similarly to the RKHS. We show in this paper that, for the Gaussian kernel, the Euclidean norm between these mapped features has $(1+\epsilon)$-relative error with respect to the kernel distance. When there are $n$ data points, we show that $O((1/\epsilon^2) \log(n))$ dimensions of the approximate feature space are sufficient and necessary. Without a bound on $n$, but when the original points lie in $\mathbb{R}^d$ and have diameter bounded by $\mathcal{M}$, we show that $O((d/\epsilon^2) \log(\mathcal{M}))$ dimensions are sufficient, and that this many are required, up to $\log(1/\epsilon)$ factors. |
Tasks | |
Published | 2016-02-17 |
URL | http://arxiv.org/abs/1602.05350v2 |
PDF | http://arxiv.org/pdf/1602.05350v2.pdf |
PWC | https://paperswithcode.com/paper/relative-error-embeddings-for-the-gaussian |
Repo | |
Framework | |
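A short numerical check of the claim is easy to set up with the standard random Fourier feature map: embed two points and compare the Euclidean distance of their features with the exact kernel distance $\sqrt{2 - 2k(x,y)}$. The dimension $D$ and bandwidth below are illustrative; the paper's contribution is the relative-error guarantee, not the map itself.

```python
import numpy as np

def rff(X, D, gamma=1.0, rng=np.random.default_rng(0)):
    """Map X (n, d) to D dims s.t. <z(x), z(y)> ~ exp(-gamma ||x-y||^2)."""
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))
    b = rng.uniform(0, 2 * np.pi, D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
x, y = rng.normal(size=(2, 5))
zx, zy = rff(np.vstack([x, y]), D=4000)
exact = np.sqrt(2 - 2 * np.exp(-np.sum((x - y) ** 2)))   # kernel distance
approx = np.linalg.norm(zx - zy)                          # embedded distance
print(exact, approx)   # close for large D
```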
Quantifying mesoscale neuroanatomy using X-ray microtomography
Title | Quantifying mesoscale neuroanatomy using X-ray microtomography |
Authors | Eva L. Dyer, William Gray Roncal, Hugo L. Fernandes, Doga Gürsoy, Vincent De Andrade, Rafael Vescovi, Kamel Fezzaa, Xianghui Xiao, Joshua T. Vogelstein, Chris Jacobsen, Konrad P. Körding, Narayanan Kasthuri |
Abstract | Methods for resolving the 3D microstructure of the brain typically start by thinly slicing and staining the brain, and then imaging each individual section with visible light photons or electrons. In contrast, X-rays can be used to image thick samples, providing a rapid approach for producing large 3D brain maps without sectioning. Here we demonstrate the use of synchrotron X-ray microtomography ($\mu$CT) for producing mesoscale $(1~\mu m^3)$ resolution brain maps from millimeter-scale volumes of mouse brain. We introduce a pipeline for $\mu$CT-based brain mapping that combines methods for sample preparation, imaging, automated segmentation of image volumes into cells and blood vessels, and statistical analysis of the resulting brain structures. Our results demonstrate that X-ray tomography promises rapid quantification of large brain volumes, complementing other brain mapping and connectomics efforts. |
Tasks | |
Published | 2016-04-13 |
URL | http://arxiv.org/abs/1604.03629v2 |
PDF | http://arxiv.org/pdf/1604.03629v2.pdf |
PWC | https://paperswithcode.com/paper/quantifying-mesoscale-neuroanatomy-using-x |
Repo | |
Framework | |
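The segmentation stage of such a pipeline can be caricatured with standard tools: threshold a 3D volume, label connected components as candidate cell bodies, and collect simple statistics. The random volume and threshold below are placeholders; the paper's actual segmentation of cells and blood vessels is considerably more involved.

```python
import numpy as np
from scipy import ndimage

vol = np.random.rand(64, 64, 64)              # stand-in for a muCT volume
mask = vol > 0.995                             # intensity threshold
labels, n = ndimage.label(mask)                # 3D connected components
sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
centroids = ndimage.center_of_mass(mask, labels, range(1, n + 1))
print(n, "components; mean voxel count:", sizes.mean() if n else 0.0)
```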
From Deterministic ODEs to Dynamic Structural Causal Models
Title | From Deterministic ODEs to Dynamic Structural Causal Models |
Authors | Paul K. Rubenstein, Stephan Bongers, Bernhard Schoelkopf, Joris M. Mooij |
Abstract | Structural Causal Models are widely used in causal modelling, but how they relate to other modelling tools is poorly understood. In this paper we provide a novel perspective on the relationship between Ordinary Differential Equations and Structural Causal Models. We show how, under certain conditions, the asymptotic behaviour of an Ordinary Differential Equation under non-constant interventions can be modelled using Dynamic Structural Causal Models. In contrast to earlier work, we study not only the effect of interventions on equilibrium states; rather, we model asymptotic behaviour that is dynamic under interventions that vary in time, and include as a special case the study of static equilibria. |
Tasks | |
Published | 2016-08-29 |
URL | http://arxiv.org/abs/1608.08028v2 |
PDF | http://arxiv.org/pdf/1608.08028v2.pdf |
PWC | https://paperswithcode.com/paper/from-deterministic-odes-to-dynamic-structural |
Repo | |
Framework | |
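A toy worked example of the setting: simulate an ODE with simple Euler steps, and model an intervention by clamping one variable to a (possibly time-varying) target, then inspect the asymptotic behaviour. The two-variable linear system and the clamping scheme are illustrative assumptions, not the paper's formal construction of Dynamic Structural Causal Models.

```python
import numpy as np

def simulate(T=20.0, dt=0.01, intervene=None):
    x = np.array([1.0, 0.0])
    for k in range(int(T / dt)):
        t = k * dt
        dx = np.array([-x[0] + x[1], -x[1] + 0.5 * x[0]])
        x = x + dt * dx
        if intervene is not None:
            x[0] = intervene(t)   # do(X1 := f(t)), overriding the dynamics
    return x

print(simulate())                               # free dynamics -> (0, 0)
print(simulate(intervene=lambda t: 2.0))        # constant intervention
print(simulate(intervene=lambda t: np.sin(t)))  # non-constant intervention
```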
F-Score Driven Max Margin Neural Network for Named Entity Recognition in Chinese Social Media
Title | F-Score Driven Max Margin Neural Network for Named Entity Recognition in Chinese Social Media |
Authors | Hangfeng He, Xu Sun |
Abstract | We focus on named entity recognition (NER) for Chinese social media. With massive unlabeled text and a quite limited labeled corpus, we propose a semi-supervised learning model based on a B-LSTM neural network. To take advantage of traditional NER methods such as CRF, we combine transition probabilities with deep learning in our model. To bridge the gap between label accuracy and the F-score of NER, we construct a model which can be directly trained on F-score. Considering the instability of the F-score-driven method and the meaningful information provided by label accuracy, we propose an integrated method to train on both F-score and label accuracy. Our integrated model yields a 7.44% improvement over the previous state-of-the-art result. |
Tasks | Named Entity Recognition |
Published | 2016-11-14 |
URL | http://arxiv.org/abs/1611.04234v2 |
PDF | http://arxiv.org/pdf/1611.04234v2.pdf |
PWC | https://paperswithcode.com/paper/f-score-driven-max-margin-neural-network-for |
Repo | |
Framework | |
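The key ingredient of an F-score driven max-margin objective can be sketched directly: compute an entity-level F-score between gold and predicted tag sequences, and use (1 - F) as the cost term in a structured hinge loss. The BIO chunking, toy sequences, and scalar scores below are illustrative; the paper's B-LSTM scoring and decoding are omitted.

```python
def chunks(tags):
    """Extract (start, end, type) entity spans from BIO tags."""
    spans, start = set(), None
    for i, t in enumerate(tags + ['O']):
        if start is not None and not t.startswith('I-'):
            spans.add((start, i, tags[start][2:])); start = None
        if t.startswith('B-'):
            start = i
    return spans

def f_score(gold, pred):
    g, p = chunks(gold), chunks(pred)
    if not g or not p:
        return 0.0
    prec, rec = len(g & p) / len(p), len(g & p) / len(g)
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def hinge(score_gold, score_pred, gold, pred):
    # Margin scaled by how wrong the prediction is in F-score terms.
    return max(0.0, score_pred + (1 - f_score(gold, pred)) - score_gold)

gold = ['B-PER', 'I-PER', 'O', 'B-LOC']
pred = ['B-PER', 'O', 'O', 'B-LOC']
print(f_score(gold, pred), hinge(1.0, 0.8, gold, pred))
```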
Balancing Statistical and Computational Precision and Applications to Penalized Linear Regression with Group Sparsity
Title | Balancing Statistical and Computational Precision and Applications to Penalized Linear Regression with Group Sparsity |
Authors | Mahsa Taheri, Néhémy Lim, Johannes Lederer |
Abstract | Due to technological advances, large and high-dimensional data have become the rule rather than the exception. Methods that allow for feature selection with such data are thus highly sought after, in particular, since standard methods, such as cross-validated lasso and group-lasso, can be challenging both computationally and mathematically. In this paper, we propose a novel approach to feature selection and group feature selection in linear regression. It consists of simple optimization steps and tests, which makes it computationally more efficient than standard approaches and suitable even for very large data sets. Moreover, it satisfies sharp guarantees for estimation and feature selection in terms of oracle inequalities. We thus expect that our contribution can help to leverage the increasing volume of data in Biology, Public Health, Astronomy, Economics, and other fields. |
Tasks | Feature Selection |
Published | 2016-09-23 |
URL | https://arxiv.org/abs/1609.07195v2 |
PDF | https://arxiv.org/pdf/1609.07195v2.pdf |
PWC | https://paperswithcode.com/paper/efficient-feature-selection-with-large-and |
Repo | |
Framework | |
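For the group-sparse setting, the "simple optimization steps" such estimators build on can be illustrated with a generic proximal-gradient solver for the group lasso, whose inner step is block soft-thresholding. This is the textbook algorithm under illustrative settings, not the authors' specific procedure or their statistical-versus-computational tests.

```python
import numpy as np

def group_prox(beta, groups, tau):
    """Block soft-thresholding: shrink each group's coefficients jointly."""
    out = beta.copy()
    for g in groups:
        norm = np.linalg.norm(beta[g])
        out[g] = 0.0 if norm <= tau else beta[g] * (1 - tau / norm)
    return out

def group_lasso(X, y, groups, lam=0.1, n_iter=500):
    n = X.shape[0]
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)   # 1 / Lipschitz constant
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        beta = group_prox(beta - step * grad, groups, step * lam)
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
y = X[:, :2] @ np.array([1.0, -2.0]) + 0.1 * rng.normal(size=100)
groups = [np.arange(0, 2), np.arange(2, 4), np.arange(4, 6)]
print(group_lasso(X, y, groups).round(2))   # groups 2 and 3 shrink to zero
```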
Guided macro-mutation in a graded energy based genetic algorithm for protein structure prediction
Title | Guided macro-mutation in a graded energy based genetic algorithm for protein structure prediction |
Authors | Mahmood A. Rashid, Sumaiya Iqbal, Firas Khatib, Md Tamjidul Hoque, Abdul Sattar |
Abstract | Protein structure prediction is considered one of the most challenging and computationally intractable combinatorial problems. Thus, efficient modeling of the convoluted search space, clever use of energy functions, and, more importantly, effective sampling algorithms are crucial to addressing this problem. For protein structure modeling, an off-lattice model provides limited scope to exercise and evaluate algorithmic developments due to its astronomically large set of data points. In contrast, an on-lattice model widens the scope and permits studying relatively larger proteins because of its finite set of data points. In this work, we took full advantage of an on-lattice model by using a face-centered-cube lattice, which has the highest packing density with the maximum degree of freedom. We proposed a genetic algorithm (GA) for conformational search based on a graded energy that strategically mixes the Miyazawa-Jernigan (MJ) energy with the hydrophobic-polar (HP) energy. In our application, we introduced a 2x2 HP energy guided macro-mutation operator within the GA to exhaustively explore the best possible local changes. Conversely, the 20x20 MJ energy model, the ultimate objective function of our GA that needs to be minimized, considers the impacts among the 20 different amino acids and allows searching for globally acceptable conformations. On a set of benchmark proteins, our proposed approach outperformed state-of-the-art approaches in terms of free energy levels and root-mean-square deviations. |
Tasks | |
Published | 2016-03-07 |
URL | http://arxiv.org/abs/1607.06113v1 |
PDF | http://arxiv.org/pdf/1607.06113v1.pdf |
PWC | https://paperswithcode.com/paper/guided-macro-mutation-in-a-graded-energy |
Repo | |
Framework | |
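The guided macro-mutation idea can be sketched generically: at a chosen position, score every candidate local change with a cheap guide energy and keep the best, while selection ranks individuals by the full objective. Bit-string individuals and the two toy energies below stand in for FCC-lattice conformations and the HP/MJ energy pair; the sketch only mirrors the guided-vs-objective split.

```python
import random

def guide_energy(ind):   # cheap energy steering the macro-mutation (HP-like role)
    return -sum(ind)

def full_energy(ind):    # objective minimized by selection (MJ-like role)
    return -sum(ind) + 0.1 * sum(abs(a - b) for a, b in zip(ind, ind[1:]))

def macro_mutate(ind, moves=(0, 1)):
    i = random.randrange(len(ind))
    # Exhaustively try every local move at position i, keep the guided best.
    return min((ind[:i] + [m] + ind[i+1:] for m in moves), key=guide_energy)

def ga(pop_size=30, length=20, gens=100):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=full_energy)
        survivors = pop[:pop_size // 2]
        pop = survivors + [macro_mutate(random.choice(survivors)) for _ in survivors]
    return min(pop, key=full_energy)

print(full_energy(ga()))
```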
Depth Superresolution using Motion Adaptive Regularization
Title | Depth Superresolution using Motion Adaptive Regularization |
Authors | Ulugbek S. Kamilov, Petros T. Boufounos |
Abstract | The spatial resolution of depth sensors is often significantly lower than that of conventional optical cameras. Recent work has explored the idea of improving depth resolution using higher-resolution intensity as side information. In this paper, we demonstrate that further incorporating temporal information from videos can significantly improve the results. In particular, we propose a novel approach that improves depth resolution by exploiting the space-time redundancy in the depth and intensity using motion-adaptive low-rank regularization. Experiments confirm that the proposed approach substantially improves the quality of the estimated high-resolution depth. Our approach can be a first component in systems using vision techniques that rely on high-resolution depth information. |
Tasks | |
Published | 2016-03-04 |
URL | http://arxiv.org/abs/1603.01633v1 |
PDF | http://arxiv.org/pdf/1603.01633v1.pdf |
PWC | https://paperswithcode.com/paper/depth-superresolution-using-motion-adaptive |
Repo | |
Framework | |
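The low-rank building block behind such regularizers is singular value thresholding, the proximal operator of the nuclear norm, applied to a matrix of grouped (e.g., motion-aligned) patches. The sketch below shows only that operator on a random patch matrix; the motion-adaptive grouping and the full reconstruction problem are not shown.

```python
import numpy as np

def svt(M, tau):
    """Prox of tau * nuclear norm: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

patches = np.random.rand(64, 20)        # 20 aligned depth patches, 64 px each
low_rank = svt(patches, tau=1.0)
print(np.linalg.matrix_rank(low_rank))  # rank typically drops after thresholding
```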
Modeling Context Between Objects for Referring Expression Understanding
Title | Modeling Context Between Objects for Referring Expression Understanding |
Authors | Varun K. Nagaraja, Vlad I. Morariu, Larry S. Davis |
Abstract | Referring expressions usually describe an object using properties of the object and relationships of the object with other objects. We propose a technique that integrates context between objects to understand referring expressions. Our approach uses an LSTM to learn the probability of a referring expression, with input features from a region and a context region. The context regions are discovered using multiple-instance learning (MIL) since annotations for context objects are generally not available for training. We utilize max-margin based MIL objective functions for training the LSTM. Experiments on the Google RefExp and UNC RefExp datasets show that modeling context between objects provides better performance than modeling only object properties. We also qualitatively show that our technique can ground a referring expression to its referred region along with the supporting context region. |
Tasks | Multiple Instance Learning |
Published | 2016-08-01 |
URL | http://arxiv.org/abs/1608.00525v1 |
PDF | http://arxiv.org/pdf/1608.00525v1.pdf |
PWC | https://paperswithcode.com/paper/modeling-context-between-objects-for |
Repo | |
Framework | |
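The max-margin MIL objective over candidate context regions can be sketched as follows: a (region, context) score is maximized over the bag of candidate contexts, and the true region must outscore the others by a margin. Linear scoring of random features below stands in for the paper's LSTM; the loss structure is the illustrative part.

```python
import torch

def mil_hinge_loss(feats, true_idx, w, margin=1.0):
    """feats: (R, C, d) features for R regions x C candidate contexts."""
    scores = (feats @ w).max(dim=1).values     # max over contexts (MIL)
    loss = 0.0
    for j in range(len(scores)):
        if j != true_idx:
            loss = loss + torch.clamp(margin + scores[j] - scores[true_idx], min=0)
    return loss

feats = torch.randn(4, 3, 16)   # 4 candidate regions, 3 context regions each
w = torch.randn(16, requires_grad=True)
loss = mil_hinge_loss(feats, true_idx=0, w=w)
loss.backward()
```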
Variational inference for rare variant detection in deep, heterogeneous next-generation sequencing data
Title | Variational inference for rare variant detection in deep, heterogeneous next-generation sequencing data |
Authors | Fan Zhang, Patrick Flaherty |
Abstract | The detection of rare variants is important for understanding the genetic heterogeneity in mixed samples. Recently, next-generation sequencing (NGS) technologies have enabled the identification of single nucleotide variants (SNVs) in mixed samples with high resolution. Yet, the noise inherent in the biological processes involved in next-generation sequencing necessitates the use of statistical methods to identify true rare variants. We propose a novel Bayesian statistical model and a variational expectation-maximization (EM) algorithm to estimate the non-reference allele frequency (NRAF) and identify SNVs in heterogeneous cell populations. We demonstrate that our variational EM algorithm has sensitivity and specificity comparable to those of a Markov chain Monte Carlo (MCMC) sampling inference algorithm, and is more computationally efficient in tests on low-coverage ($27\times$ and $298\times$) data. Furthermore, we show that our model with a variational EM inference algorithm has higher specificity than many state-of-the-art algorithms. In an analysis of a directed-evolution longitudinal yeast data set, we are able to identify a time-series trend in non-reference allele frequency and detect novel variants that have not yet been reported. Our model also detects the emergence of a beneficial variant earlier than was previously shown, as well as a pair of concomitant variants. |
Tasks | Time Series |
Published | 2016-04-14 |
URL | http://arxiv.org/abs/1604.04280v2 |
PDF | http://arxiv.org/pdf/1604.04280v2.pdf |
PWC | https://paperswithcode.com/paper/variational-inference-for-rare-variant |
Repo | |
Framework | |
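The flavor of model-based rare-variant calling can be conveyed with a plain EM for a two-component binomial mixture over read counts (sequencing error versus true variant), estimating the non-reference allele frequency and per-position variant probabilities. The paper's hierarchical Bayesian model and its variational EM updates are substantially richer than this toy sketch.

```python
import numpy as np
from scipy.stats import binom

def em_nraf(alt, depth, n_iter=50, err=0.005):
    """Estimate variant NRAF and per-position variant probabilities."""
    f, pi = 0.2, 0.1                     # init: variant NRAF, variant fraction
    for _ in range(n_iter):
        l_var = pi * binom.pmf(alt, depth, f)
        l_err = (1 - pi) * binom.pmf(alt, depth, err)
        r = l_var / (l_var + l_err)      # E-step: responsibility of "variant"
        pi = r.mean()                    # M-step updates
        f = (r * alt).sum() / (r * depth).sum()
    return f, r

depth = np.full(200, 300)                          # ~300x coverage
alt = np.random.binomial(depth, 0.005)             # mostly sequencing errors...
alt[:5] = np.random.binomial(300, 0.05, size=5)    # ...plus a few 5% variants
f, r = em_nraf(alt, depth)
print(round(f, 3), (r > 0.5).sum())                # estimated NRAF, detections
```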
Two Differentially Private Rating Collection Mechanisms for Recommender Systems
Title | Two Differentially Private Rating Collection Mechanisms for Recommender Systems |
Authors | Wenjie Zheng |
Abstract | We design two mechanisms for recommender systems to collect user ratings. One is a modified Laplace mechanism, and the other is a randomized response mechanism. We prove that both are differentially private and preserve the data utility. |
Tasks | Recommendation Systems |
Published | 2016-04-28 |
URL | http://arxiv.org/abs/1604.08402v1 |
PDF | http://arxiv.org/pdf/1604.08402v1.pdf |
PWC | https://paperswithcode.com/paper/two-differentially-private-rating-collection |
Repo | |
Framework | |
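Both primitives have short textbook forms, sketched below: a Laplace mechanism for a bounded numeric rating (sensitivity equal to the rating range) and randomized response for a binary rating. These are the classic mechanisms the paper modifies, not the paper's modified versions; the epsilon handling and rating range are illustrative.

```python
import numpy as np

def laplace_rating(rating, eps, lo=1.0, hi=5.0):
    """eps-DP release of one rating; sensitivity is the rating range."""
    noisy = rating + np.random.laplace(scale=(hi - lo) / eps)
    return float(np.clip(noisy, lo, hi))    # clipping is post-processing

def randomized_response(bit, eps):
    """eps-DP release of a binary rating: keep w.p. e^eps / (e^eps + 1)."""
    p_keep = np.exp(eps) / (np.exp(eps) + 1)
    return bit if np.random.rand() < p_keep else 1 - bit

print(laplace_rating(4.0, eps=1.0), randomized_response(1, eps=1.0))
```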
Investigation of event-based memory surfaces for high-speed tracking, unsupervised feature extraction and object recognition
Title | Investigation of event-based memory surfaces for high-speed tracking, unsupervised feature extraction and object recognition |
Authors | Saeed Afshar, Gregory Cohen, Tara Julia Hamilton, Jonathan Tapson, Andre van Schaik |
Abstract | In this paper we compare event-based decaying and time-based decaying memory surfaces for high-speed event-based tracking, feature extraction, and object classification using an event-based camera. The high-speed recognition task involves detecting and classifying model airplanes that are dropped free-hand close to the camera lens so as to generate a challenging dataset exhibiting significant variance in target velocity. This variance motivated the investigation of event-based decaying memory surfaces in comparison to time-based decaying memory surfaces to capture the temporal aspect of the event-based data. These surfaces are then used to perform unsupervised feature extraction, tracking, and recognition. In order to generate the memory surfaces, event binning, linearly decaying kernels, and exponentially decaying kernels were investigated, with exponentially decaying kernels found to perform best. Event-based decaying memory surfaces were found to outperform time-based decaying memory surfaces in recognition, especially when invariance to target velocity was made a requirement. A range of network and receptive field sizes were investigated. The system achieves 98.75% recognition accuracy within 156 milliseconds of an airplane entering the field of view, using only twenty-five event-based feature extracting neurons in series with a linear classifier. By comparing the linear classifier results to an ELM classifier, we find that a small number of event-based feature extractors can effectively project the complex spatio-temporal event patterns of the dataset to an almost linearly separable representation in feature space. |
Tasks | Object Classification, Object Recognition |
Published | 2016-03-14 |
URL | http://arxiv.org/abs/1603.04223v3 |
PDF | http://arxiv.org/pdf/1603.04223v3.pdf |
PWC | https://paperswithcode.com/paper/investigation-of-event-based-memory-surfaces |
Repo | |
Framework | |
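The contrast at the heart of the study can be sketched directly: a time-based surface decays with wall-clock time since each pixel's last event, while an event-based surface decays with the number of events observed since then, which makes it less sensitive to target velocity. The event format, decay constants, and synthetic event stream below are illustrative assumptions.

```python
import numpy as np

def surfaces(events, shape=(32, 32), tau_t=0.05, tau_n=100.0):
    """events: time-ordered list of (t, x, y) tuples."""
    last_t = np.full(shape, -np.inf)   # last event time per pixel
    last_n = np.full(shape, -np.inf)   # last event index per pixel
    for n, (t, x, y) in enumerate(events):
        last_t[y, x], last_n[y, x] = t, n
    t_now, n_now = events[-1][0], len(events) - 1
    time_surface = np.exp(-(t_now - last_t) / tau_t)     # time-based decay
    event_surface = np.exp(-(n_now - last_n) / tau_n)    # event-based decay
    return time_surface, event_surface

events = [(i * 0.001, i % 32, (i // 32) % 32) for i in range(500)]
ts, es = surfaces(events)
print(ts.max(), es.max())
```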