January 30, 2020

3697 words 18 mins read

Paper Group ANR 221

Conditional Segmentation in Lieu of Image Registration

Title Conditional Segmentation in Lieu of Image Registration
Authors Yipeng Hu, Eli Gibson, Dean C. Barratt, Mark Emberton, J. Alison Noble, Tom Vercauteren
Abstract Classical pairwise image registration methods search for a spatial transformation that optimises a numerical measure that indicates how well a pair of moving and fixed images are aligned. Current learning-based registration methods have adopted the same paradigm and typically predict, for any new input image pair, dense correspondences in the form of a dense displacement field or parameters of a spatial transformation model. However, in many applications of registration, the spatial transformation itself is only required to propagate points or regions of interest (ROIs). In such cases, detailed pixel- or voxel-level correspondences within or outside of these ROIs often have little clinical value. In this paper, we propose an alternative paradigm in which the location of corresponding image-specific ROIs, defined in one image, within another image is learnt. This results in replacing image registration by a conditional segmentation algorithm, which can build on typical image segmentation networks and their widely-adopted training strategies. Using the registration of 3D MRI and ultrasound images of the prostate as an example to demonstrate this new approach, we report a median target registration error (TRE) of 2.1 mm between the ground-truth ROIs defined on intraoperative ultrasound images and those propagated from the preoperative MR images. Significantly lower (>34%) TREs were obtained using the proposed conditional segmentation compared with those obtained from a previously-proposed spatial-transformation-predicting registration network trained with the same multiple ROI labels for individual image pairs. We conclude this work by using a quantitative bias-variance analysis to provide one explanation of the observed improvement in registration accuracy.
Tasks Image Registration, Semantic Segmentation
Published 2019-06-30
URL https://arxiv.org/abs/1907.00438v1
PDF https://arxiv.org/pdf/1907.00438v1.pdf
PWC https://paperswithcode.com/paper/conditional-segmentation-in-lieu-of-image
Repo
Framework
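
The core idea above, predicting where an ROI defined on the moving image lies in the fixed image, can be sketched as an ordinary segmentation network whose input channels are the two images plus the moving-image ROI. The following PyTorch sketch is illustrative only: the architecture, channel layout, and Dice loss are assumptions, not the authors' network.

```python
# Illustrative sketch of "conditional segmentation": a plain segmentation
# network that takes (moving image, fixed image, ROI drawn on the moving
# image) as input channels and predicts where that ROI lies in the fixed
# image. Architecture and loss are placeholders, not the paper's network.
import torch
import torch.nn as nn

class ConditionalSegmenter(nn.Module):
    def __init__(self, features=16):
        super().__init__()
        # 3 input channels: moving image, fixed image, moving-image ROI mask
        self.net = nn.Sequential(
            nn.Conv3d(3, features, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(features, features, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(features, 1, kernel_size=3, padding=1),
        )

    def forward(self, moving, fixed, moving_roi):
        x = torch.cat([moving, fixed, moving_roi], dim=1)  # (N, 3, D, H, W)
        return torch.sigmoid(self.net(x))                  # predicted ROI in the fixed image

def soft_dice_loss(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    return 1.0 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

# Training pairs the prediction with a ground-truth ROI delineated on the
# fixed image, exactly like a standard supervised segmentation setup.
model = ConditionalSegmenter()
moving = torch.randn(1, 1, 32, 32, 32)     # e.g. preoperative MR
fixed = torch.randn(1, 1, 32, 32, 32)      # e.g. intraoperative ultrasound
moving_roi = torch.zeros(1, 1, 32, 32, 32) # ROI drawn on the moving image
fixed_roi = torch.zeros(1, 1, 32, 32, 32)  # ground-truth ROI on the fixed image
loss = soft_dice_loss(model(moving, fixed, moving_roi), fixed_roi)
```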

Cross-Sensor and Cross-Spectral Periocular Biometrics: A Comparative Benchmark including Smartphone Authentication

Title Cross-Sensor and Cross-Spectral Periocular Biometrics: A Comparative Benchmark including Smartphone Authentication
Authors Fernando Alonso-Fernandez, Kiran B. Raja, R. Raghavendra, Christoph Busch, Josef Bigun, Ruben Vera-Rodriguez, Julian Fierrez
Abstract The massive availability of cameras results in a wide variability of imaging conditions, producing large intra-class variations and a significant performance drop if images from heterogeneous environments are compared for person recognition. However, as biometrics is deployed, it will be common to replace damaged or obsolete hardware, or to exchange information between applications in heterogeneous environments. Variations in spectral bands can also occur. For example, faces are typically acquired in the visible (VIS) spectrum, while iris is captured in near-infrared (NIR). However, cross-spectrum comparison may be needed if a face from a surveillance camera needs to be compared against a legacy iris database. Here, we propose a multialgorithmic approach to cope with periocular images from different sensors. We integrate different comparators with a fusion scheme based on linear logistic regression, in which scores tend to be log-likelihood ratios. This allows easy interpretation of output scores and the use of Bayes thresholds for optimal decision-making, since scores from different comparators are in the same probabilistic range. We evaluate our approach in the context of the Cross-Eyed Competition, whose aim was to compare recognition approaches when NIR and VIS periocular images are matched. Our approach achieves reductions in error rates of up to 30-40% in cross-spectral NIR-VIS comparisons, leading to EER=0.2% and FRR of just 0.47% at FAR=0.01%, representing the best overall approach of the competition. Experiments are also reported with a database of VIS images from different smartphones. We also discuss the impact of template size and computation times, with the most computationally heavy comparator playing an important role in the results. Lastly, the proposed method is shown to outperform other popular fusion approaches, such as the average of scores, SVMs or Random Forest.
Tasks Decision Making, Person Recognition
Published 2019-02-21
URL https://arxiv.org/abs/1902.08123v3
PDF https://arxiv.org/pdf/1902.08123v3.pdf
PWC https://paperswithcode.com/paper/cross-sensor-periocular-biometrics-a
Repo
Framework
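
The fusion scheme described above, linear logistic regression so that fused scores behave like log-likelihood ratios and can be thresholded with a Bayes rule, can be sketched as follows. The synthetic scores, priors, and costs are illustrative assumptions, not the competition setup.

```python
# Sketch of multi-comparator score fusion via linear logistic regression.
# With balanced training classes the fused logit approximates a calibrated
# log-likelihood ratio, so a Bayes threshold that depends only on priors and
# costs can be applied. Data and parameters below are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Rows = comparison trials, columns = scores from different periocular comparators.
genuine = rng.normal(loc=[2.0, 1.5, 1.0], scale=1.0, size=(500, 3))
impostor = rng.normal(loc=[0.0, 0.0, 0.0], scale=1.0, size=(500, 3))
X = np.vstack([genuine, impostor])
y = np.concatenate([np.ones(500), np.zeros(500)])

fusion = LogisticRegression().fit(X, y)

def fused_llr(scores):
    """Linear fusion; the logit approximates a calibrated log-likelihood ratio."""
    return fusion.decision_function(np.atleast_2d(scores))[0]

# Bayes decision threshold on the log-likelihood ratio:
# accept if llr >= log( (C_fa * P(impostor)) / (C_fr * P(genuine)) )
p_genuine, c_false_accept, c_false_reject = 0.5, 1.0, 1.0
threshold = np.log((c_false_accept * (1 - p_genuine)) / (c_false_reject * p_genuine))
print(fused_llr([1.8, 1.2, 0.9]) >= threshold)
```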

Dense Deformation Network for High Resolution Tissue Cleared Image Registration

Title Dense Deformation Network for High Resolution Tissue Cleared Image Registration
Authors Abdullah Nazib, Clinton Fookes, Dimitri Perrin
Abstract The recent application of deep learning in various areas of medical image analysis has brought excellent performance gains. In particular, technologies based on deep learning in medical image registration can outperform traditional optimisation-based registration algorithms both in registration time and accuracy. However, the U-net based architectures used in most image registration frameworks downscale the data, which removes global information and affects the deformation. In this paper, we present a densely connected convolutional architecture for deformable image registration. Our proposed dense network downsizes data only in one stage and has dense connections instead of the skip connections of the U-net architecture. The training of the network is unsupervised and does not require ground-truth deformation or any synthetic deformation as a label. The proposed architecture is trained and tested on two different versions of tissue-cleared data, at 10% and 25% resolution of the original single-cell-resolution dataset. We demonstrate comparable registration performance to state-of-the-art registration methods and superior performance to the deep-learning based VoxelMorph method in terms of accuracy and the ability to handle higher resolutions. At both resolutions, the proposed DenseDeformation network outperforms VoxelMorph in registration accuracy. Importantly, it can register brains in one minute where conventional methods can take hours at 25% resolution.
Tasks Image Registration, Medical Image Registration
Published 2019-06-13
URL https://arxiv.org/abs/1906.06180v2
PDF https://arxiv.org/pdf/1906.06180v2.pdf
PWC https://paperswithcode.com/paper/dense-deformation-network-for-high-resolution
Repo
Framework
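
The unsupervised training regime mentioned above (no ground-truth or synthetic deformations) is the standard setup shared by learning-based registration methods such as VoxelMorph: warp the moving image with the predicted displacement field, compare it with the fixed image, and penalise rough fields. A generic PyTorch sketch of that loss follows; the similarity measure and weighting are assumptions, and the DenseDeformation architecture itself is omitted.

```python
# Generic sketch of an unsupervised deformable-registration loss: warp the
# moving image with the predicted displacement field, compare to the fixed
# image, and regularise the field. Network omitted; weights are placeholders.
import torch
import torch.nn.functional as F

def warp(moving, displacement):
    """moving: (N,1,D,H,W); displacement: (N,3,D,H,W) in voxels, channels (x,y,z)."""
    n, _, d, h, w = moving.shape
    zs, ys, xs = torch.meshgrid(
        torch.arange(d), torch.arange(h), torch.arange(w), indexing="ij")
    identity = torch.stack([xs, ys, zs], dim=-1).float()       # (D,H,W,3) as (x,y,z)
    disp = displacement.permute(0, 2, 3, 4, 1)                 # (N,D,H,W,3)
    grid = identity.unsqueeze(0) + disp
    # Normalise voxel coordinates to [-1, 1] for grid_sample.
    scale = torch.tensor([w - 1, h - 1, d - 1], dtype=torch.float32)
    grid = 2.0 * grid / scale - 1.0
    return F.grid_sample(moving, grid, align_corners=True)

def registration_loss(moving, fixed, displacement, smooth_weight=0.01):
    warped = warp(moving, displacement)
    similarity = F.mse_loss(warped, fixed)                     # placeholder similarity
    grads = torch.gradient(displacement, dim=(2, 3, 4))        # spatial gradients
    smoothness = sum(g.pow(2).mean() for g in grads)
    return similarity + smooth_weight * smoothness

moving = torch.randn(1, 1, 16, 16, 16)
fixed = torch.randn(1, 1, 16, 16, 16)
disp = torch.zeros(1, 3, 16, 16, 16)   # zero field = identity warp
print(registration_loss(moving, fixed, disp))
```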

Handling Missing MRI Input Data in Deep Learning Segmentation of Brain Metastases: A Multi-Center Study

Title Handling Missing MRI Input Data in Deep Learning Segmentation of Brain Metastases: A Multi-Center Study
Authors Endre Grøvik, Darvin Yi, Michael Iv, Elizabeth Tong, Line Brennhaug Nilsen, Anna Latysheva, Cathrine Saxhaug, Kari Dolven Jacobsen, Åslaug Helland, Kyrre Eeg Emblem, Daniel Rubin, Greg Zaharchuk
Abstract The purpose was to assess the clinical value of a novel DropOut model for detecting and segmenting brain metastases, in which a neural network is trained on four distinct MRI sequences using an input dropout layer, thus simulating the scenario of missing MRI data by training on the full set and all possible subsets of the input data. This retrospective, multi-center study evaluated 165 patients with brain metastases. A deep learning based segmentation model for automatic segmentation of brain metastases, named DropOut, was trained on multi-sequence MRI from 100 patients, and validated/tested on 10/55 patients. The segmentation results were compared with the performance of a state-of-the-art DeepLabV3 model. The MR sequences in the training set included pre- and post-gadolinium (Gd) T1-weighted 3D fast spin echo, post-Gd T1-weighted inversion recovery (IR) prepped fast spoiled gradient echo, and 3D fluid attenuated inversion recovery (FLAIR), whereas the test set did not include the IR prepped image series. The ground truth was established by experienced neuroradiologists. The results were evaluated using precision, recall, Dice score, and receiver operating characteristics (ROC) curve statistics, while the Wilcoxon rank sum test was used to compare the performance of the two neural networks. The area under the ROC curve (AUC), averaged across all test cases, was 0.989±0.029 for the DropOut model and 0.989±0.023 for the DeepLabV3 model (p=0.62). The DropOut model showed a significantly higher Dice score compared to the DeepLabV3 model (0.795±0.105 vs. 0.774±0.104, p=0.017), and a significantly lower average false positive rate of 3.6/patient vs. 7.0/patient (p<0.001) using a 10 mm³ lesion-size limit. The DropOut model may facilitate accurate detection and segmentation of brain metastases on a multi-center basis, even when the test cohort is missing MRI input data.
Tasks
Published 2019-12-27
URL https://arxiv.org/abs/1912.11966v1
PDF https://arxiv.org/pdf/1912.11966v1.pdf
PWC https://paperswithcode.com/paper/handling-missing-mri-input-data-in-deep
Repo
Framework
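
The input dropout idea described above, training on the full set and all possible subsets of MRI sequences by randomly dropping whole input channels, can be illustrated with a few lines of PyTorch. The keep probability and the guarantee that at least one sequence survives are assumptions, not the study's exact scheme.

```python
# Sketch of input-sequence dropout for multi-sequence MRI segmentation:
# during training, whole input channels (MRI sequences) are randomly zeroed,
# so the network learns to cope with missing sequences at test time.
# Keep probability and the ">= 1 surviving channel" rule are assumptions.
import torch

def drop_input_sequences(x, keep_prob=0.75, training=True):
    """x: (N, C, D, H, W) with one channel per MRI sequence."""
    if not training:
        return x
    n, c = x.shape[:2]
    keep = (torch.rand(n, c, device=x.device) < keep_prob).float()
    # Ensure at least one sequence survives per sample.
    empty = keep.sum(dim=1) == 0
    if empty.any():
        keep[empty, torch.randint(c, (int(empty.sum()),), device=x.device)] = 1.0
    return x * keep.view(n, c, 1, 1, 1)

# At test time, missing sequences are simply passed in as zeroed channels.
batch = torch.randn(2, 4, 16, 64, 64)   # 4 MRI sequences per patient
augmented = drop_input_sequences(batch)
```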

Adversarially Robust Low Dimensional Representations

Title Adversarially Robust Low Dimensional Representations
Authors Pranjal Awasthi, Vaggos Chatziafratis, Xue Chen, Aravindan Vijayaraghavan
Abstract Adversarial or test time robustness measures the susceptibility of a machine learning system to small perturbations made to the input at test time. This has attracted much interest on the empirical side, since many existing ML systems perform poorly under imperceptible adversarial perturbations to the test inputs. On the other hand, our theoretical understanding of this phenomenon is limited, and has mostly focused on supervised learning tasks. In this work we study the problem of computing adversarially robust representations of data. We formulate a natural extension of Principal Component Analysis (PCA) where the goal is to find a low dimensional subspace to represent the given data with minimum projection error, and that is in addition robust to small perturbations measured in $\ell_q$ norm (say $q=\infty$). Unlike PCA which is solvable in polynomial time, our formulation is computationally intractable to optimize as it captures the well-studied sparse PCA objective. We show the following algorithmic and statistical results.
- Polynomial time algorithms in the worst-case that achieve constant factor approximations to the objective while only violating the robustness constraint by a constant factor.
- We prove that our formulation (and algorithms) also enjoy significant statistical benefits in terms of sample complexity over standard PCA on account of a “regularization effect”, that is formalized using the well-studied spiked covariance model.
- Surprisingly, we show that our algorithmic techniques can also be made robust to corruptions in the training data, in addition to yielding representations that are robust at test time! Here an adversary is allowed to corrupt potentially every data point up to a specified amount in the $\ell_q$ norm. We further apply these techniques for mean estimation and clustering under adversarial corruptions to the training data.
Tasks
Published 2019-11-29
URL https://arxiv.org/abs/1911.13268v1
PDF https://arxiv.org/pdf/1911.13268v1.pdf
PWC https://paperswithcode.com/paper/adversarially-robust-low-dimensional
Repo
Framework
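
As a rough formalisation of the objective described above, a low-dimensional subspace with small projection error that is also insensitive to small $\ell_q$-bounded input perturbations, one can write something along the following lines. This is a paraphrase for intuition only, not necessarily the paper's exact constraint set.

```latex
% Illustrative formulation (paraphrase, not necessarily the paper's exact one):
% a rank-r orthogonal projection \Pi with small reconstruction error whose
% output cannot be moved much by \ell_q-bounded input perturbations.
\begin{aligned}
\min_{\Pi}\;& \sum_{i=1}^{n} \lVert x_i - \Pi x_i \rVert_2^2 \\
\text{s.t. }\;& \Pi \text{ is an orthogonal projection of rank } r, \\
& \lVert \Pi \delta \rVert_2 \le \varepsilon \quad \text{for all } \lVert \delta \rVert_q \le 1 .
\end{aligned}
```

For $q=\infty$ a constraint of this kind ties the projection to sparse-PCA-like structure, which is consistent with the intractability remark in the abstract.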

Toward Real-World Single Image Super-Resolution: A New Benchmark and A New Model

Title Toward Real-World Single Image Super-Resolution: A New Benchmark and A New Model
Authors Jianrui Cai, Hui Zeng, Hongwei Yong, Zisheng Cao, Lei Zhang
Abstract Most of the existing learning-based single image super-resolution (SISR) methods are trained and evaluated on simulated datasets, where the low-resolution (LR) images are generated by applying a simple and uniform degradation (i.e., bicubic downsampling) to their high-resolution (HR) counterparts. However, the degradations in real-world LR images are far more complicated. As a consequence, the SISR models trained on simulated data become less effective when applied to practical scenarios. In this paper, we build a real-world super-resolution (RealSR) dataset where paired LR-HR images of the same scene are captured by adjusting the focal length of a digital camera. An image registration algorithm is developed to progressively align the image pairs at different resolutions. Considering that the degradation kernels are naturally non-uniform in our dataset, we present a Laplacian pyramid based kernel prediction network (LP-KPN), which efficiently learns per-pixel kernels to recover the HR image. Our extensive experiments demonstrate that SISR models trained on our RealSR dataset deliver better visual quality with sharper edges and finer textures on real-world scenes than those trained on simulated datasets. Though our RealSR dataset is built by using only two cameras (Canon 5D3 and Nikon D810), the trained model generalizes well to other camera devices such as Sony a7II and mobile phones.
Tasks Image Registration, Image Super-Resolution, Super-Resolution
Published 2019-04-01
URL http://arxiv.org/abs/1904.00523v1
PDF http://arxiv.org/pdf/1904.00523v1.pdf
PWC https://paperswithcode.com/paper/toward-real-world-single-image-super
Repo
Framework
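
The kernel prediction idea above, a network that outputs one small kernel per pixel which is then applied to that pixel's neighbourhood to recover the HR image, can be sketched as below. The Laplacian pyramid and the network that predicts the kernels are omitted, and the kernel size and shapes are illustrative.

```python
# Sketch of per-pixel kernel application as used in kernel prediction
# networks: a predicted (k*k)-vector per pixel is applied to the k x k
# neighbourhood of that pixel. The pyramid structure and the network that
# predicts the kernels are omitted; shapes here are illustrative only.
import torch
import torch.nn.functional as F

def apply_per_pixel_kernels(image, kernels, k=5):
    """image: (N, 1, H, W); kernels: (N, k*k, H, W), e.g. softmax-normalised."""
    n, _, h, w = image.shape
    patches = F.unfold(image, kernel_size=k, padding=k // 2)   # (N, k*k, H*W)
    patches = patches.view(n, k * k, h, w)
    return (patches * kernels).sum(dim=1, keepdim=True)        # (N, 1, H, W)

image = torch.randn(2, 1, 64, 64)
raw = torch.randn(2, 25, 64, 64)
kernels = torch.softmax(raw, dim=1)          # one normalised 5x5 kernel per pixel
restored = apply_per_pixel_kernels(image, kernels)
print(restored.shape)
```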

Convolution Forgetting Curve Model for Repeated Learning

Title Convolution Forgetting Curve Model for Repeated Learning
Authors Yanlu Xie, Yue Chen, Man Li
Abstract Most mathematical forgetting curve models fit the forgetting data well under single-session learning conditions rather than repeated learning. In this paper, a convolution model of the forgetting curve is proposed to simulate the memory process during learning. In this model, the memory ability (i.e. the central procedure in the working memory model) and the learning material (i.e. the input in the working memory model) are regarded as the system function and the input function, respectively. The status of forgetting (i.e. the output in the working memory model) is regarded as the output function, i.e. the convolution of the memory ability and the learning material. The model is applied to simulate forgetting curves in different situations. The results show that the model is able to simulate forgetting curves not only under the single-session learning condition but also under repeated learning. The model is further verified in experiments on Mandarin tone learning by Japanese learners, where the predicted curve fits the test points well.
Tasks
Published 2019-01-19
URL http://arxiv.org/abs/1901.08114v1
PDF http://arxiv.org/pdf/1901.08114v1.pdf
PWC https://paperswithcode.com/paper/convolution-forgetting-curve-model-for
Repo
Framework
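
The convolution structure described above (memory ability as the system function, learning events as the input, forgetting status as the output) can be written compactly as follows. The exponential form of the memory function is an illustrative assumption; the paper defines its own functional forms.

```latex
% Forgetting status r(t) as the convolution of memory ability m(t) (system
% function) with the learning input s(t); repeated learning is a train of
% impulses at the repetition times t_j. The exponential m(t) is illustrative.
r(t) = (m * s)(t) = \int_{0}^{t} m(t - \tau)\, s(\tau)\, d\tau ,
\qquad
s(t) = \sum_{j} a_j\, \delta(t - t_j) ,
\qquad
m(t) = e^{-t/\sigma} .
```

Under these illustrative choices, $r(t) = \sum_{t_j \le t} a_j\, e^{-(t - t_j)/\sigma}$: each repetition adds a new decaying trace, which is how a convolution model covers repeated learning as well as single-session learning.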

Palmprint image registration using convolutional neural networks and Hough transform

Title Palmprint image registration using convolutional neural networks and Hough transform
Authors Mohsen Ahmadi, Hossein Soleimani
Abstract Minutia-based palmprint recognition systems have attracted considerable interest over the last two decades. Due to the large number of minutiae in a palmprint, approximately 1000, the matching process is time-consuming, which makes it impractical for real-time applications. One way to address this issue is to align all palmprint images to a reference image, bringing them into the same coordinate system; this results in fewer computations during minutia matching. In this paper, using a convolutional neural network (CNN) and the generalized Hough transform (GHT), we propose a new method to register palmprint images accurately. The method finds the rotation and displacement (in both the x and y directions) between a palmprint and a reference image. Exact palmprint registration can enhance both the speed and the accuracy of the matching process. The proposed method is capable of distinguishing between left and right palmprints automatically, which helps to speed up matching. Furthermore, the CNN designed for the registration stage also yields a palmprint image segmented from the background, which is a pre-processing step for minutia extraction. The proposed registration method, followed by the minutia cylinder-code (MCC) matching algorithm, has been evaluated on the THUPALMLAB database, and the results show the superiority of our algorithm over most state-of-the-art algorithms.
Tasks Image Registration
Published 2019-04-01
URL http://arxiv.org/abs/1904.00579v2
PDF http://arxiv.org/pdf/1904.00579v2.pdf
PWC https://paperswithcode.com/paper/palmprint-image-registration-using
Repo
Framework
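
The registration step above, estimating the rotation and x/y displacement that aligns a palmprint to a reference via a generalized Hough transform, can be illustrated with a plain voting accumulator over candidate rigid transforms. Using raw keypoint pairs as correspondences and the bin sizes below are assumptions, not the paper's exact voting scheme.

```python
# Sketch of Hough-style voting for a rigid 2D alignment (rotation + x/y shift):
# every candidate pairing of a query point with a reference point votes for
# the (dx, dy, theta) that would map one onto the other; the most-voted bin
# wins. Correspondence generation and bin sizes are illustrative assumptions.
import numpy as np

def hough_rigid_votes(query_pts, ref_pts, angles_deg, bin_size=4):
    """query_pts: (M, 2), ref_pts: (K, 2) arrays of candidate keypoints."""
    votes = {}
    for theta in np.deg2rad(list(angles_deg)):
        c, s = np.cos(theta), np.sin(theta)
        rot = np.array([[c, -s], [s, c]])
        rotated = query_pts @ rot.T                      # rotate query points
        for r in ref_pts:
            for p in rotated:
                dx, dy = r - p                           # shift implied by this pairing
                key = (round(dx / bin_size), round(dy / bin_size),
                       round(np.rad2deg(theta)))
                votes[key] = votes.get(key, 0) + 1
    return max(votes, key=votes.get)                     # most-voted (dx_bin, dy_bin, theta)

query = np.random.rand(30, 2) * 200
reference = query + np.array([12.0, -7.0])               # toy data: pure translation
print(hough_rigid_votes(query, reference, angles_deg=range(-10, 11)))
```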

Expert-Augmented Machine Learning

Title Expert-Augmented Machine Learning
Authors E. D. Gennatas, J. H. Friedman, L. H. Ungar, R. Pirracchio, E. Eaton, L. Reichman, Y. Interian, C. B. Simone, A. Auerbach, E. Delgado, M. J. Van der Laan, T. D. Solberg, G. Valdes
Abstract Machine Learning is proving invaluable across disciplines. However, its success is often limited by the quality and quantity of available data, while its adoption is limited by the level of trust that models afford users. Human vs. machine performance is commonly compared empirically to decide whether a certain task should be performed by a computer or an expert. In reality, the optimal learning strategy may involve combining the complementary strengths of man and machine. Here we present Expert-Augmented Machine Learning (EAML), an automated method that guides the extraction of expert knowledge and its integration into machine-learned models. We use a large dataset of intensive care patient data to predict mortality and show that we can extract expert knowledge using an online platform, help reveal hidden confounders, improve generalizability on a different population and learn using less data. EAML presents a novel framework for high-performance and dependable machine learning in critical applications.
Tasks
Published 2019-03-22
URL http://arxiv.org/abs/1903.09731v2
PDF http://arxiv.org/pdf/1903.09731v2.pdf
PWC https://paperswithcode.com/paper/expert-augmented-machine-learning
Repo
Framework

Thresholding Bandit with Optimal Aggregate Regret

Title Thresholding Bandit with Optimal Aggregate Regret
Authors Chao Tao, Saùl Blanco, Jian Peng, Yuan Zhou
Abstract We consider the thresholding bandit problem, whose goal is to find arms of mean rewards above a given threshold $\theta$, with a fixed budget of $T$ trials. We introduce LSA, a new, simple and anytime algorithm that aims to minimize the aggregate regret (or the expected number of mis-classified arms). We prove that our algorithm is instance-wise asymptotically optimal. We also provide comprehensive empirical results to demonstrate the algorithm’s superior performance over existing algorithms under a variety of different scenarios.
Tasks
Published 2019-05-27
URL https://arxiv.org/abs/1905.11046v1
PDF https://arxiv.org/pdf/1905.11046v1.pdf
PWC https://paperswithcode.com/paper/thresholding-bandit-with-optimal-aggregate
Repo
Framework
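
The objective named above, aggregate regret, is described in the abstract as the expected number of mis-classified arms. Written out (ignoring any normalisation the paper may apply), with $\hat{S}$ the set of arms reported as above the threshold after the budget of $T$ pulls:

```latex
% Aggregate regret for the thresholding bandit: the expected number of arms
% whose above/below-threshold label is wrong after the budget of T pulls.
S^{*} = \{\, i : \mu_i \ge \theta \,\}, \qquad
\mathrm{Regret}(T) = \mathbb{E}\bigl[\,\lvert \hat{S} \,\triangle\, S^{*} \rvert\,\bigr]
= \mathbb{E}\Bigl[\, \textstyle\sum_{i=1}^{K} \mathbf{1}\{\, i \in \hat{S} \,\triangle\, S^{*} \,\} \,\Bigr].
```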

Universal Text Representation from BERT: An Empirical Study

Title Universal Text Representation from BERT: An Empirical Study
Authors Xiaofei Ma, Zhiguo Wang, Patrick Ng, Ramesh Nallapati, Bing Xiang
Abstract We present a systematic investigation of layer-wise BERT activations for general-purpose text representations to understand what linguistic information they capture and how transferable they are across different tasks. Sentence-level embeddings are evaluated against two state-of-the-art models on downstream and probing tasks from SentEval, while passage-level embeddings are evaluated on four question-answering (QA) datasets under a learning-to-rank problem setting. Embeddings from the pre-trained BERT model perform poorly in semantic similarity and sentence surface information probing tasks. Fine-tuning BERT on natural language inference data greatly improves the quality of the embeddings. Combining embeddings from different BERT layers can further boost performance. BERT embeddings outperform the BM25 baseline significantly on factoid QA datasets at the passage level, but fail to perform better than BM25 on non-factoid datasets. For all QA datasets, there is a gap between the embedding-based method and in-domain fine-tuned BERT (we report new state-of-the-art results on two datasets), which suggests that deep interactions between question and answer pairs are critical for those hard tasks.
Tasks Learning-To-Rank, Natural Language Inference, Question Answering, Semantic Similarity, Semantic Textual Similarity
Published 2019-10-17
URL https://arxiv.org/abs/1910.07973v2
PDF https://arxiv.org/pdf/1910.07973v2.pdf
PWC https://paperswithcode.com/paper/universal-text-representation-from-bert-an
Repo
Framework
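
The layer-wise analysis described above relies on pulling hidden states from every BERT layer and pooling them into fixed-size vectors; a minimal sketch with the Hugging Face transformers API follows. Mean pooling and the choice of layers to concatenate are illustrative, not the paper's exact recipe.

```python
# Minimal sketch of extracting layer-wise sentence embeddings from BERT and
# combining several layers. Pooling strategy and the layers chosen to combine
# are illustrative choices, not the paper's exact configuration.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

def layer_embeddings(sentence):
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # hidden_states: tuple (embedding layer + 12 transformer layers),
    # each of shape (1, seq_len, hidden_size).
    mask = inputs["attention_mask"].unsqueeze(-1)           # ignore padding
    pooled = [(h * mask).sum(1) / mask.sum(1) for h in outputs.hidden_states]
    return pooled                                            # one vector per layer

layers = layer_embeddings("An empirical study of BERT sentence embeddings.")
combined = torch.cat(layers[-4:], dim=-1)                    # e.g. concatenate last four layers
print(combined.shape)
```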

Modularity in Query-Based Concept Learning

Title Modularity in Query-Based Concept Learning
Authors Benjamin Caulfield, Sanjit A. Seshia
Abstract We define and study the problem of modular concept learning, that is, learning a concept that is a cross product of component concepts. If an element’s membership in a concept depends solely on its membership in the components, learning the concept as a whole can be reduced to learning the components. We analyze this problem with respect to different types of oracle interfaces, defining different sets of queries. If a given oracle interface cannot answer questions about the components, learning can be difficult, even when the components are easy to learn with the same type of oracle queries. While learning from superset queries is easy, learning from membership, equivalence, or subset queries is harder. However, we show that these problems become tractable when oracles are given a positive example and are allowed to ask membership queries.
Tasks
Published 2019-11-07
URL https://arxiv.org/abs/1911.02714v1
PDF https://arxiv.org/pdf/1911.02714v1.pdf
PWC https://paperswithcode.com/paper/modularity-in-query-based-concept-learning
Repo
Framework
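
The tractability result quoted above (membership queries plus one positive example suffice for cross-product concepts) has a direct intuition: to test whether a value belongs to the i-th component, substitute it into the i-th coordinate of the known positive example and ask a membership query. A small sketch, with the oracle stubbed out:

```python
# Sketch of why a positive example makes membership queries powerful for a
# cross-product concept C = C1 x C2 x ... x Cn: swapping one coordinate of a
# known positive example isolates exactly one component. The oracle below is
# a stand-in; a real setting would query a teacher instead.

def make_membership_oracle(components):
    """components: list of sets; the target concept is their cross product."""
    return lambda x: all(v in comp for v, comp in zip(x, components))

def component_membership(is_member, positive_example, i, value):
    """Test 'value' against component i using a single membership query."""
    probe = list(positive_example)
    probe[i] = value
    return is_member(tuple(probe))     # true iff value lies in component i

# Toy example: recover each component over a small candidate universe.
components = [{1, 2}, {"a"}, {True, False}]
oracle = make_membership_oracle(components)
positive = (1, "a", True)              # a known positive example

universe = [{1, 2, 3}, {"a", "b"}, {True, False}]
learned = [{v for v in universe[i] if component_membership(oracle, positive, i, v)}
           for i in range(len(universe))]
print(learned)                          # recovers [{1, 2}, {'a'}, {True, False}]
```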

sharpDARTS: Faster and More Accurate Differentiable Architecture Search

Title sharpDARTS: Faster and More Accurate Differentiable Architecture Search
Authors Andrew Hundt, Varun Jain, Gregory D. Hager
Abstract Neural Architecture Search (NAS) has been a source of dramatic improvements in neural network design, with recent results meeting or exceeding the performance of hand-tuned architectures. However, our understanding of how to represent the search space for neural net architectures and how to search that space efficiently are both still in their infancy. We have performed an in-depth analysis to identify limitations in a widely used search space and a recent architecture search method, Differentiable Architecture Search (DARTS). These findings led us to introduce novel network blocks with a more general, balanced, and consistent design; a better-optimized Cosine Power Annealing learning rate schedule; and other improvements. Our resulting sharpDARTS search is 50% faster with a 20-30% relative improvement in final model error on CIFAR-10 when compared to DARTS. Our best single model run has 1.93% (1.98+/-0.07) validation error on CIFAR-10 and 5.5% error (5.8+/-0.3) on the recently released CIFAR-10.1 test set. To our knowledge, both are state of the art for models of similar size. This model also generalizes competitively to ImageNet at 25.1% top-1 (7.8% top-5) error. We found improvements for existing search spaces but does DARTS generalize to new domains? We propose Differentiable Hyperparameter Grid Search and the HyperCuboid search space, which are representations designed to leverage DARTS for more general parameter optimization. Here we find that DARTS fails to generalize when compared against a human’s one shot choice of models. We look back to the DARTS and sharpDARTS search spaces to understand why, and an ablation study reveals an unusual generalization gap. We finally propose Max-W regularization to solve this problem, which proves significantly better than the handmade design. Code will be made available.
Tasks Neural Architecture Search
Published 2019-03-23
URL http://arxiv.org/abs/1903.09900v1
PDF http://arxiv.org/pdf/1903.09900v1.pdf
PWC https://paperswithcode.com/paper/sharpdarts-faster-and-more-accurate
Repo
Framework
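
One of the improvements listed above is a “Cosine Power Annealing” learning-rate schedule. Its exact definition is given in the paper; purely as an illustration of the general idea, a cosine-shaped decay whose curvature is reshaped by an extra exponent, here is a hedged sketch that is explicitly not the paper's formula.

```python
# Rough illustration only: a cosine-shaped learning-rate decay whose curvature
# is controlled by an extra exponent. The sharpDARTS paper defines its own
# "Cosine Power Annealing" schedule; this is NOT that exact formula, just a
# sketch of reshaping the cosine decay curve with a power hyperparameter.
import math

def cosine_power_lr(step, total_steps, lr_max=0.025, lr_min=1e-4, power=2.0):
    cosine = 0.5 * (1.0 + math.cos(math.pi * step / total_steps))  # decays 1 -> 0
    return lr_min + (lr_max - lr_min) * cosine ** power            # reshaped decay

schedule = [cosine_power_lr(t, total_steps=100) for t in range(101)]
print(schedule[0], schedule[50], schedule[100])
```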

Identifying similarity and anomalies for cryptocurrency moments and distribution extremities

Title Identifying similarity and anomalies for cryptocurrency moments and distribution extremities
Authors Nick James, Max Menzies, Jennifer Chan
Abstract We propose two new methods for identifying similarity and anomalies among collections of time series, and apply these methods to analyse cryptocurrencies. First, we analyse change points with respect to various distribution moments, considering these points as signals of erratic behaviour and potential risk. This technique uses the MJ$_1$ semi-metric, from the more general MJ$_p$ class of semi-metrics (James, 2019), to measure distance between these change point sets. Prior work on this topic fails to consider data between change points, and in particular, does not justify the utility of this change point analysis. Therefore, we introduce a second method to determine similarity between time series, in this instance with respect to their extreme values, or tail behaviour. Finally, we measure the consistency between our two methods, that is, structural break versus tail behaviour similarity. With cryptocurrency investment as an apt example of erratic, extreme behaviour, we notice an impressive consistency between these two methods.
Tasks Anomaly Detection, Time Series
Published 2019-12-12
URL https://arxiv.org/abs/1912.06193v2
PDF https://arxiv.org/pdf/1912.06193v2.pdf
PWC https://paperswithcode.com/paper/a-new-method-for-similarity-and-anomaly
Repo
Framework
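
The MJ$_1$ semi-metric used above compares two sets of change points; its exact definition is in the cited James (2019) work. As an illustration of the general flavour, a symmetric average of nearest-point distances between two change-point sets can be computed as follows; this is a stand-in, not the published MJ$_1$ formula.

```python
# Illustration of comparing two change-point sets by a symmetric average of
# nearest-neighbour distances. The actual MJ_1 semi-metric is defined in the
# cited James (2019) work; this is only a distance of a similar flavour.
import numpy as np

def nearest_distance_average(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    d_ab = np.abs(a[:, None] - b[None, :])       # pairwise |a_i - b_j|
    return 0.5 * (d_ab.min(axis=1).mean() + d_ab.min(axis=0).mean())

# Change points (e.g. indices of structural breaks) for two cryptocurrencies.
breaks_coin_1 = [40, 122, 310, 511]
breaks_coin_2 = [45, 118, 300, 520, 640]
print(nearest_distance_average(breaks_coin_1, breaks_coin_2))
```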

Asteroids Detection Technique: Classic “Blink” An Automated Approach

Title Asteroids Detection Technique: Classic “Blink” An Automated Approach
Authors D. Copandean, C. Nandra, D. Gorgan, O. Vaduvescu
Abstract Asteroids detection is a very important research field that has received increased attention over the last couple of decades. Some major surveys have their own dedicated people, equipment and detection applications, so they are discovering Near Earth Asteroids (NEAs) daily. The interest in asteroids is not limited to those major surveys; it is shared by amateurs and mini-surveys too. A couple of them use the few existing software solutions, most of which were developed by amateurs. The rest obtain their results in a visual manner: they “blink” a sequence of reduced images of the same field, taken at a specific time interval, and try to detect a real moving object in the resulting animation. Such a technique becomes harder as CCD cameras increase in size. Aiming to replace manual detection, we propose an automated “blink” technique for asteroids detection.
Tasks
Published 2019-01-08
URL http://arxiv.org/abs/1901.02542v1
PDF http://arxiv.org/pdf/1901.02542v1.pdf
PWC https://paperswithcode.com/paper/asteroids-detection-technique-classic-blink
Repo
Framework
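
The automated “blink” idea above amounts to registering a sequence of reduced images of the same field and flagging sources that move between frames. A toy differencing sketch is below; the frames are assumed to be already aligned and background-subtracted, the thresholds are arbitrary, and a real pipeline would also need source extraction and trajectory linking.

```python
# Toy sketch of automated "blinking": difference consecutive, already-aligned
# frames of the same star field and flag pixels that change, which is where a
# moving object (e.g. an asteroid) would appear. Thresholds are arbitrary.
import numpy as np

def moving_object_mask(frames, threshold=5.0):
    """frames: (T, H, W) array of aligned, background-subtracted images."""
    frames = np.asarray(frames, dtype=float)
    diffs = np.abs(np.diff(frames, axis=0))       # frame-to-frame changes
    return (diffs > threshold).any(axis=0)        # pixels that changed in any step

rng = np.random.default_rng(1)
frames = rng.normal(0.0, 1.0, size=(4, 64, 64))   # noise-only background
for t in range(4):                                 # inject a source drifting right
    frames[t, 30, 20 + 3 * t] += 50.0
print(np.argwhere(moving_object_mask(frames)))
```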