July 28, 2019

3152 words 15 mins read

Paper Group ANR 251

Expectation Propagation for t-Exponential Family Using Q-Algebra. $α$-Variational Inference with Statistical Guarantees. Deep Learning Based Cryptographic Primitive Classification. Highly curved image sensors: a practical approach for improved optical performance. Hierarchical Gated Recurrent Neural Tensor Network for Answer Triggering. Evolving im …

Expectation Propagation for t-Exponential Family Using Q-Algebra

Title Expectation Propagation for t-Exponential Family Using Q-Algebra
Authors Futoshi Futami, Issei Sato, Masashi Sugiyama
Abstract Exponential family distributions are highly useful in machine learning since their calculation can be performed efficiently through natural parameters. The exponential family has recently been extended to the t-exponential family, which contains Student-t distributions as family members and thus allows us to handle noisy data well. However, since the t-exponential family is defined by the deformed exponential, we cannot derive an efficient learning algorithm for the t-exponential family such as expectation propagation (EP). In this paper, we borrow the mathematical tools of q-algebra from statistical physics and show that the pseudo additivity of distributions allows us to perform calculation of t-exponential family distributions through natural parameters. We then develop an expectation propagation (EP) algorithm for the t-exponential family, which provides a deterministic approximation to the posterior or predictive distribution with simple moment matching. We finally apply the proposed EP algorithm to the Bayes point machine and Student-t process classification, and demonstrate their performance numerically.
Tasks
Published 2017-05-25
URL http://arxiv.org/abs/1705.09046v2
PDF http://arxiv.org/pdf/1705.09046v2.pdf
PWC https://paperswithcode.com/paper/expectation-propagation-for-t-exponential
Repo
Framework
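
For context, the pseudo additivity the abstract refers to is the defining identity of Tsallis q-algebra. A minimal sketch in the standard conventions (written here with deformation parameter $q$; the paper works with the closely related t-exponential):

$$
\exp_q(x) = \bigl[1 + (1-q)\,x\bigr]_+^{\frac{1}{1-q}}, \qquad
x \otimes_q y = \bigl[x^{1-q} + y^{1-q} - 1\bigr]_+^{\frac{1}{1-q}},
$$
$$
\exp_q(x) \otimes_q \exp_q(y) = \exp_q(x + y).
$$

Products of deformed exponentials can therefore still be manipulated through sums of natural parameters, which is what makes EP-style moment matching tractable in this family.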

$α$-Variational Inference with Statistical Guarantees

Title $α$-Variational Inference with Statistical Guarantees
Authors Yun Yang, Debdeep Pati, Anirban Bhattacharya
Abstract We propose a family of variational approximations to Bayesian posterior distributions, called $\alpha$-VB, with provable statistical guarantees. The standard variational approximation is a special case of $\alpha$-VB with $\alpha=1$. When $\alpha \in(0,1]$, a novel class of variational inequalities are developed for linking the Bayes risk under the variational approximation to the objective function in the variational optimization problem, implying that maximizing the evidence lower bound in variational inference has the effect of minimizing the Bayes risk within the variational density family. Operating in a frequentist setup, the variational inequalities imply that point estimates constructed from the $\alpha$-VB procedure converge at an optimal rate to the true parameter in a wide range of problems. We illustrate our general theory with a number of examples, including the mean-field variational approximation to (low)-high-dimensional Bayesian linear regression with spike and slab priors, mixture of Gaussian models, latent Dirichlet allocation, and (mixture of) Gaussian variational approximation in regular parametric models.
Tasks
Published 2017-10-09
URL http://arxiv.org/abs/1710.03266v2
PDF http://arxiv.org/pdf/1710.03266v2.pdf
PWC https://paperswithcode.com/paper/-variational-inference-with-statistical
Repo
Framework
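
As a rough sketch of the setup (using a common convention; signs and additive constants may differ from the paper), the $\alpha$-VB estimator tempers the likelihood by $\alpha$ inside the variational objective:

$$
\widehat{q}_\alpha \;=\; \arg\min_{q \in \Gamma} \Bigl\{ -\alpha \, \mathbb{E}_q\bigl[\ell_n(\theta)\bigr] + D_{\mathrm{KL}}\bigl(q \,\|\, \pi\bigr) \Bigr\},
$$

where $\ell_n$ is the log-likelihood, $\pi$ the prior, and $\Gamma$ the variational family; $\alpha = 1$ recovers the standard variational approximation (maximizing the usual ELBO).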

Deep Learning Based Cryptographic Primitive Classification

Title Deep Learning Based Cryptographic Primitive Classification
Authors Gregory D. Hill, Xavier J. A. Bellekens
Abstract Cryptovirological augmentations present an immediate, incomparable threat. Over the last decade, the substantial proliferation of crypto-ransomware has had widespread consequences for consumers and organisations alike. Established preventive measures perform well, however, the problem has not ceased. Reverse engineering potentially malicious software is a cumbersome task due to platform eccentricities and obfuscated transmutation mechanisms, hence requiring smarter, more efficient detection strategies. The following manuscript presents a novel approach for the classification of cryptographic primitives in compiled binary executables using deep learning. The model blueprint, a DCNN, is fittingly configured to learn from variable-length control flow diagnostics output from a dynamic trace. To rival the size and variability of contemporary data compendiums, hence feeding the model cognition, a methodology for the procedural generation of synthetic cryptographic binaries is defined, utilising core primitives from OpenSSL with multivariate obfuscation, to draw a vastly scalable distribution. The library, CryptoKnight, rendered an algorithmic pool of AES, RC4, Blowfish, MD5 and RSA to synthesize combinable variants, which are automatically fed to its core model. Converging at 91% accuracy, CryptoKnight is successfully able to classify the sample algorithms with minimal loss.
Tasks
Published 2017-09-25
URL http://arxiv.org/abs/1709.08385v1
PDF http://arxiv.org/pdf/1709.08385v1.pdf
PWC https://paperswithcode.com/paper/deep-learning-based-cryptographic-primitive
Repo
Framework
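
The entry above describes a deep convolutional network trained on variable-length dynamic-trace features with five target primitives. The snippet below is a hypothetical, simplified stand-in, not the paper's CryptoKnight model: the token vocabulary, embedding, layer widths, and pooling are illustrative assumptions; only the five class labels come from the abstract.

```python
# Hypothetical 1D CNN over tokenized trace features; NOT the paper's DCNN.
import torch
import torch.nn as nn

class TraceCNN(nn.Module):
    def __init__(self, vocab_size=256, embed_dim=32, num_classes=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)           # token ids -> vectors
        self.conv = nn.Sequential(
            nn.Conv1d(embed_dim, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),                                # copes with varying lengths
        )
        self.fc = nn.Linear(64, num_classes)                        # AES, RC4, Blowfish, MD5, RSA

    def forward(self, x):                                           # x: (batch, seq_len) int64
        h = self.embed(x).transpose(1, 2)                           # (batch, embed_dim, seq_len)
        return self.fc(self.conv(h).squeeze(-1))                    # (batch, num_classes)

# Usage: logits = TraceCNN()(torch.randint(0, 256, (8, 1024)))
```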

Highly curved image sensors: a practical approach for improved optical performance

Title Highly curved image sensors: a practical approach for improved optical performance
Authors Brian Guenter, Neel Joshi, Richard Stoakley, Andrew Keefe, Kevin Geary, Ryan Freeman, Jake Hundley, Pamela Patterson, David Hammon, Guillermo Herrera, Elena Sherman, Andrew Nowak, Randall Schubert, Peter Brewer, Louis Yang, Russell Mott, Geoff McKnight
Abstract The significant optical and size benefits of using a curved focal surface for imaging systems have been well studied yet never brought to market for lack of a high-quality, mass-producible, curved image sensor. In this work we demonstrate that commercial silicon CMOS image sensors can be thinned and formed into accurate, highly curved optical surfaces with undiminished functionality. Our key development is a pneumatic forming process that avoids rigid mechanical constraints and suppresses wrinkling instabilities. A combination of forming-mold design, pressure membrane elastic properties, and controlled friction forces enables us to gradually contact the die at the corners and smoothly press the sensor into a spherical shape. Allowing the die to slide into the concave target shape enables a threefold increase in the spherical curvature over prior approaches having mechanical constraints that resist deformation, and create a high-stress, stretch-dominated state. Our process creates a bridge between the high precision and low-cost but planar CMOS process, and ideal non-planar component shapes such as spherical imagers for improved optical systems. We demonstrate these curved sensors in prototype cameras with custom lenses, measuring exceptional resolution of 3220 line-widths per picture height at an aperture of f/1.2 and nearly 100% relative illumination across the field. Though we use a 1/2.3” format image sensor in this report, we also show this process is generally compatible with many state of the art imaging sensor formats. By example, we report photogrammetry test data for an APS-C sized silicon die formed to a 30$^\circ$ subtended spherical angle. These gains in sharpness and relative illumination enable a new generation of ultra-high performance, manufacturable, digital imaging systems for scientific, industrial, and artistic use.
Tasks
Published 2017-06-20
URL http://arxiv.org/abs/1706.07041v1
PDF http://arxiv.org/pdf/1706.07041v1.pdf
PWC https://paperswithcode.com/paper/highly-curved-image-sensors-a-practical
Repo
Framework

Hierarchical Gated Recurrent Neural Tensor Network for Answer Triggering

Title Hierarchical Gated Recurrent Neural Tensor Network for Answer Triggering
Authors Wei Li, Yunfang Wu
Abstract In this paper, we focus on the problem of answer triggering addressed by Yang et al. (2015), which is a critical component for a real-world question answering system. We employ a hierarchical gated recurrent neural tensor (HGRNT) model to capture both the context information and the deep interactions between the candidate answers and the question. Our result on F value achieves 42.6%, which surpasses the baseline by over 10%.
Tasks Question Answering
Published 2017-09-17
URL http://arxiv.org/abs/1709.05599v1
PDF http://arxiv.org/pdf/1709.05599v1.pdf
PWC https://paperswithcode.com/paper/hierarchical-gated-recurrent-neural-tensor
Repo
Framework
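
For readers unfamiliar with neural tensor layers: the bilinear interaction such models typically build on (in the style of Socher et al.'s neural tensor network; the paper's exact gated, hierarchical parameterization may differ) scores a question encoding $q$ against a candidate-answer encoding $a$ as

$$
s(q, a) \;=\; u^{\top} f\!\Bigl( q^{\top} W^{[1:k]} a \;+\; V \begin{bmatrix} q \\ a \end{bmatrix} \;+\; b \Bigr),
$$

where each of the $k$ slices of the tensor $W^{[1:k]}$ contributes one bilinear term capturing a different interaction between question and answer.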

Evolving imputation strategies for missing data in classification problems with TPOT

Title Evolving imputation strategies for missing data in classification problems with TPOT
Authors Unai Garciarena, Roberto Santana, Alexander Mendiburu
Abstract Missing data has a ubiquitous presence in real-life applications of machine learning techniques. Imputation methods are algorithms conceived for restoring missing values in the data, based on other entries in the database. The choice of the imputation method has an influence on the performance of the machine learning technique, e.g., it influences the accuracy of the classification algorithm applied to the data. Therefore, selecting and applying the right imputation method is important and usually requires a substantial amount of human intervention. In this paper we propose the use of genetic programming techniques to search for the right combination of imputation and classification algorithms. We build our work on the recently introduced Python-based TPOT library, and incorporate a heterogeneous set of imputation algorithms as part of the machine learning pipeline search. We show that genetic programming can automatically find increasingly better pipelines that include the most effective combinations of imputation methods, feature pre-processing, and classifiers for a variety of classification problems with missing data.
Tasks Imputation
Published 2017-06-04
URL http://arxiv.org/abs/1706.01120v2
PDF http://arxiv.org/pdf/1706.01120v2.pdf
PWC https://paperswithcode.com/paper/evolving-imputation-strategies-for-missing
Repo
Framework
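
To make the pipeline-search idea above concrete, here is a deliberately simplified stand-in: the imputation strategy is treated as just another searchable step of a scikit-learn pipeline, with a plain grid search in place of TPOT's genetic-programming search. This does not reproduce the paper's setup; `X` and `y` are assumed to be a feature matrix containing missing values and its labels.

```python
# Search jointly over imputation strategy and classifier settings (simplified stand-in).
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

pipe = Pipeline([
    ("impute", SimpleImputer()),                 # strategy is chosen by the search
    ("clf", RandomForestClassifier(random_state=0)),
])
param_grid = {
    "impute__strategy": ["mean", "median", "most_frequent"],
    "clf__n_estimators": [100, 300],
}
search = GridSearchCV(pipe, param_grid, cv=5, scoring="accuracy")
# search.fit(X, y)   # X may contain NaNs; the imputer fills them inside each CV fold
```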

BitNet: Bit-Regularized Deep Neural Networks

Title BitNet: Bit-Regularized Deep Neural Networks
Authors Aswin Raghavan, Mohamed Amer, Sek Chai, Graham Taylor
Abstract We present a novel optimization strategy for training neural networks which we call “BitNet”. The parameters of neural networks are usually unconstrained and have a dynamic range dispersed over all real values. Our key idea is to limit the expressive power of the network by dynamically controlling the range and set of values that the parameters can take. We formulate this idea using a novel end-to-end approach that circumvents the discrete parameter space by optimizing a relaxed continuous and differentiable upper bound of the typical classification loss function. The approach can be interpreted as a regularization inspired by the Minimum Description Length (MDL) principle. For each layer of the network, our approach optimizes real-valued translation and scaling factors and arbitrary precision integer-valued parameters (weights). We empirically compare BitNet to an equivalent unregularized model on the MNIST and CIFAR-10 datasets. We show that BitNet converges faster to a superior quality solution. Additionally, the resulting model has significant savings in memory due to the use of integer-valued parameters.
Tasks
Published 2017-08-16
URL http://arxiv.org/abs/1708.04788v3
PDF http://arxiv.org/pdf/1708.04788v3.pdf
PWC https://paperswithcode.com/paper/bitnet-bit-regularized-deep-neural-networks
Repo
Framework
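
As a rough illustration of the parameterization described above (integer-valued weights combined with real-valued scale and translation per layer), the sketch below performs plain round-to-nearest quantization with NumPy. It is not the paper's training method, which optimizes a relaxed, differentiable bound end to end; the function names and bit width are assumptions.

```python
# Per-layer affine quantization: integer codes plus (scale, translation).
import numpy as np

def quantize_layer(w, num_bits=4):
    """Map real weights w to integer codes on a (2**num_bits)-level grid."""
    t = w.min()                                    # translation
    s = (w.max() - t) / (2 ** num_bits - 1)        # scale of the integer grid
    codes = np.round((w - t) / s).astype(np.int32)
    return codes, s, t

def dequantize_layer(codes, s, t):
    return codes * s + t                           # reconstructed real-valued weights

w = np.random.randn(256, 128).astype(np.float32)
codes, s, t = quantize_layer(w, num_bits=4)
print(np.abs(dequantize_layer(codes, s, t) - w).max())   # error is at most s / 2
```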

The Likelihood Ratio Test in High-Dimensional Logistic Regression Is Asymptotically a Rescaled Chi-Square

Title The Likelihood Ratio Test in High-Dimensional Logistic Regression Is Asymptotically a Rescaled Chi-Square
Authors Pragya Sur, Yuxin Chen, Emmanuel J. Candès
Abstract Logistic regression is used thousands of times a day to fit data, predict future outcomes, and assess the statistical significance of explanatory variables. When used for the purpose of statistical inference, logistic models produce p-values for the regression coefficients by using an approximation to the distribution of the likelihood-ratio test. Indeed, Wilks’ theorem asserts that whenever we have a fixed number $p$ of variables, twice the log-likelihood ratio (LLR) $2\Lambda$ is distributed as a $\chi^2_k$ variable in the limit of large sample sizes $n$; here, $k$ is the number of variables being tested. In this paper, we prove that when $p$ is not negligible compared to $n$, Wilks’ theorem does not hold and that the chi-square approximation is grossly incorrect; in fact, this approximation produces p-values that are far too small (under the null hypothesis). Assume that $n$ and $p$ grow large in such a way that $p/n\rightarrow\kappa$ for some constant $\kappa < 1/2$. We prove that for a class of logistic models, the LLR converges to a rescaled chi-square, namely, $2\Lambda~\stackrel{\mathrm{d}}{\rightarrow}~\alpha(\kappa)\chi_k^2$, where the scaling factor $\alpha(\kappa)$ is greater than one as soon as the dimensionality ratio $\kappa$ is positive. Hence, the LLR is larger than classically assumed. For instance, when $\kappa=0.3$, $\alpha(\kappa)\approx1.5$. In general, we show how to compute the scaling factor by solving a nonlinear system of two equations with two unknowns. Our mathematical arguments are involved and use techniques from approximate message passing theory, non-asymptotic random matrix theory and convex geometry. We also complement our mathematical study by showing that the new limiting distribution is accurate for finite sample sizes. Finally, all the results from this paper extend to some other regression models such as the probit regression model.
Tasks
Published 2017-06-05
URL http://arxiv.org/abs/1706.01191v1
PDF http://arxiv.org/pdf/1706.01191v1.pdf
PWC https://paperswithcode.com/paper/the-likelihood-ratio-test-in-high-dimensional
Repo
Framework
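
A small numerical illustration of the correction described above, using only the numbers quoted in the abstract ($\alpha(\kappa) \approx 1.5$ at $\kappa = 0.3$): the observed statistic below is made up, and in practice $\alpha(\kappa)$ must be obtained by solving the paper's two-equation system.

```python
# Classical Wilks p-value vs. p-value under the rescaled chi-square limit.
from scipy.stats import chi2

two_Lambda = 6.0        # hypothetical observed 2 * log-likelihood ratio
k = 1                   # number of coefficients being tested
alpha_kappa = 1.5       # approximate scaling factor quoted for kappa = 0.3

p_classical = chi2.sf(two_Lambda, df=k)                  # Wilks' approximation
p_rescaled = chi2.sf(two_Lambda / alpha_kappa, df=k)     # rescaled chi-square
print(p_classical, p_rescaled)   # the classical p-value is noticeably smaller
```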

Neural machine translation for low-resource languages

Title Neural machine translation for low-resource languages
Authors Robert Östling, Jörg Tiedemann
Abstract Neural machine translation (NMT) approaches have improved the state of the art in many machine translation settings over the last couple of years, but they require large amounts of training data to produce sensible output. We demonstrate that NMT can be used for low-resource languages as well, by introducing more local dependencies and using word alignments to learn sentence reordering during translation. In addition to our novel model, we also present an empirical evaluation of low-resource phrase-based statistical machine translation (SMT) and NMT to investigate the lower limits of the respective technologies. We find that while SMT remains the best option for low-resource settings, our method can produce acceptable translations with only 70000 tokens of training data, a level where the baseline NMT system fails completely.
Tasks Machine Translation
Published 2017-08-18
URL http://arxiv.org/abs/1708.05729v1
PDF http://arxiv.org/pdf/1708.05729v1.pdf
PWC https://paperswithcode.com/paper/neural-machine-translation-for-low-resource-1
Repo
Framework

Close Yet Distinctive Domain Adaptation

Title Close Yet Distinctive Domain Adaptation
Authors Lingkun Luo, Xiaofang Wang, Shiqiang Hu, Chao Wang, Yuxing Tang, Liming Chen
Abstract Domain adaptation is a form of transfer learning that aims to generalize a learning model across training and testing data with different distributions. Most previous research tackles this problem by seeking a shared feature representation between source and target domains while reducing the mismatch of their data distributions. In this paper, we propose a close yet discriminative domain adaptation method, namely CDDA, which generates a latent feature representation with two interesting properties. First, the discrepancy between the source and target domains, measured in terms of both marginal and conditional probability distributions via Maximum Mean Discrepancy, is minimized so as to draw the two domains close to each other. More importantly, we also design a repulsive force term, which maximizes the distance between each label-dependent sub-domain and all the others so as to push different class-dependent sub-domains far away from each other and thereby increase the discriminative power of the adapted domain. Moreover, given that the underlying data manifold could have a complex geometric structure, we further propose constraints of label smoothness and geometric structure consistency for label propagation. Extensive experiments are conducted on 36 cross-domain image classification tasks over four public datasets. The comprehensive results show that the proposed method consistently outperforms the state-of-the-art methods by significant margins.
Tasks Domain Adaptation, Image Classification, Transfer Learning
Published 2017-04-13
URL http://arxiv.org/abs/1704.04235v1
PDF http://arxiv.org/pdf/1704.04235v1.pdf
PWC https://paperswithcode.com/paper/close-yet-distinctive-domain-adaptation
Repo
Framework
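
The attraction term the abstract describes is the standard (marginal) Maximum Mean Discrepancy; in its usual empirical form (the same measure is applied to class-conditional sub-domains for the conditional part),

$$
\widehat{\mathrm{MMD}}^2(\mathcal{D}_s, \mathcal{D}_t)
= \Bigl\| \tfrac{1}{n_s} \sum_{i=1}^{n_s} \phi(x_i^{s}) - \tfrac{1}{n_t} \sum_{j=1}^{n_t} \phi(x_j^{t}) \Bigr\|_{\mathcal{H}}^{2},
$$

which is minimized to pull the two domains together, while the repulsive force term maximizes the analogous distances between different class-dependent sub-domains.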

WebCaricature: a benchmark for caricature recognition

Title WebCaricature: a benchmark for caricature recognition
Authors Jing Huo, Wenbin Li, Yinghuan Shi, Yang Gao, Hujun Yin
Abstract Studying caricature recognition is fundamentally important to the understanding of face perception. However, little research has been conducted in the computer vision community, largely due to the shortage of suitable datasets. In this paper, a new caricature dataset is built with the objective of facilitating research in caricature recognition. All the caricatures and face images were collected from the Web. Compared with two existing datasets, this dataset is much more challenging, with a far greater number of available images, more artistic styles, and larger intra-personal variations. Evaluation protocols are also offered, together with their baseline performances on the dataset, to allow fair comparisons. In addition, a framework for caricature face recognition is presented to provide a thorough analysis of the challenges of caricature recognition. By analyzing these challenges, the goal is to highlight problems that are worth further investigation. Additionally, based on the evaluation protocols and the framework, baseline performances of various state-of-the-art algorithms are provided. The conclusion is that there is still large room for performance improvement and that the analyzed problems need further investigation.
Tasks Caricature, Face Recognition
Published 2017-03-09
URL http://arxiv.org/abs/1703.03230v4
PDF http://arxiv.org/pdf/1703.03230v4.pdf
PWC https://paperswithcode.com/paper/webcaricature-a-benchmark-for-caricature
Repo
Framework

Deep Learning for Automated Quality Assessment of Color Fundus Images in Diabetic Retinopathy Screening

Title Deep Learning for Automated Quality Assessment of Color Fundus Images in Diabetic Retinopathy Screening
Authors Sajib Kumar Saha, Basura Fernando, Jorge Cuadros, Di Xiao, Yogesan Kanagasingam
Abstract Purpose: To develop a computer-based method for the automated assessment of image quality in the context of diabetic retinopathy (DR) to guide the photographer. Methods: A deep learning framework was trained to grade the images automatically. A large, representative set of 7000 color fundus images was used for the experiment; the images were obtained from EyePACS and made available by the California Healthcare Foundation. Three retinal image analysis experts were employed to categorize these images into Accept and Reject classes based on a precise definition of image quality in the context of DR. A deep learning framework was trained using 3428 images. Results: A total of 3572 images were used for the evaluation of the proposed method. The method achieves 100% accuracy in categorising Accept and Reject images. Conclusion: Image quality is an essential prerequisite for the grading of DR. In this paper we have proposed a deep learning based automated image quality assessment method in the context of DR. The method can be easily incorporated into the fundus image capturing system and can thus tell the photographer whether a recapture is necessary.
Tasks Image Quality Assessment
Published 2017-03-07
URL http://arxiv.org/abs/1703.02511v1
PDF http://arxiv.org/pdf/1703.02511v1.pdf
PWC https://paperswithcode.com/paper/deep-learning-for-automated-quality
Repo
Framework

Tuning Modular Networks with Weighted Losses for Hand-Eye Coordination

Title Tuning Modular Networks with Weighted Losses for Hand-Eye Coordination
Authors Fangyi Zhang, Jürgen Leitner, Michael Milford, Peter I. Corke
Abstract This paper introduces an end-to-end fine-tuning method to improve hand-eye coordination in modular deep visuo-motor policies (modular networks) where each module is trained independently. Benefiting from weighted losses, the fine-tuning method significantly improves the performance of the policies for a robotic planar reaching task.
Tasks
Published 2017-05-15
URL http://arxiv.org/abs/1705.05116v1
PDF http://arxiv.org/pdf/1705.05116v1.pdf
PWC https://paperswithcode.com/paper/tuning-modular-networks-with-weighted-losses
Repo
Framework
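
A generic form of the weighted fine-tuning objective described above (the specific modules and weighting scheme are the paper's; this only shows the shape of the loss):

$$
L_{\text{fine-tune}} \;=\; \sum_{m} w_m \, L_m ,
$$

where each $L_m$ is the loss attached to one module (e.g. perception or control) and the weights $w_m$ rebalance their contributions during end-to-end tuning of the otherwise independently trained modules.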

Learning Low-Dimensional Metrics

Title Learning Low-Dimensional Metrics
Authors Lalit Jain, Blake Mason, Robert Nowak
Abstract This paper investigates the theoretical foundations of metric learning, focused on three key questions that are not fully addressed in prior work: 1) we consider learning general low-dimensional (low-rank) metrics as well as sparse metrics; 2) we develop upper and lower (minimax) bounds on the generalization error; 3) we quantify the sample complexity of metric learning in terms of the dimension of the feature space and the dimension/rank of the underlying metric; 4) we also bound the accuracy of the learned metric relative to the underlying true generative metric. All the results involve novel mathematical approaches to the metric learning problem, and also shed new light on the special case of ordinal embedding (aka non-metric multidimensional scaling).
Tasks Metric Learning
Published 2017-09-18
URL http://arxiv.org/abs/1709.06171v2
PDF http://arxiv.org/pdf/1709.06171v2.pdf
PWC https://paperswithcode.com/paper/learning-low-dimensional-metrics
Repo
Framework
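
The low-rank metrics in question are Mahalanobis-style distances with a factored positive semidefinite matrix; a brief sketch of the standard parameterization (notation assumed, not taken from the paper):

$$
d_M(x, y)^2 = (x - y)^{\top} M (x - y), \qquad M = L L^{\top},\quad L \in \mathbb{R}^{d \times r},\quad r \ll d,
$$

so learning $L$ directly constrains the metric to rank at most $r$, while the sparse case instead restricts the number of nonzero entries (or rows) of $M$.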

Multi-view Registration Based on Weighted Low Rank and Sparse Matrix Decomposition of Motions

Title Multi-view Registration Based on Weighted Low Rank and Sparse Matrix Decomposition of Motions
Authors Congcong Jin, Jihua Zhu, Yaochen Li, Shanmin Pang, Lei Chen, Jun Wang
Abstract Recently, low rank and sparse (LRS) matrix decomposition has been introduced as an effective means of solving the multi-view registration problem. It views each available relative motion as a block element used to reconstruct one matrix that approximates a low rank matrix, from which global motions can be recovered for multi-view registration. However, this approach is sensitive to the sparsity of the reconstructed matrix, and it treats all block elements equally in spite of their varied reliability. Therefore, this paper proposes an effective approach to multi-view registration based on a weighted LRS decomposition. Based on the anti-symmetry property of relative motions, it first proposes a completion strategy to reduce the sparsity of the reconstructed matrix. The reduced sparsity of the reconstructed matrix improves the robustness of the LRS decomposition. It then proposes the weighted LRS decomposition, where each block element is assigned an estimated weight that denotes its reliability. By introducing these weights, more accurate registration results can be recovered from the estimated low rank matrix with good efficiency. Experimental results on public data sets illustrate the superiority of the proposed approach over the state-of-the-art approaches in robustness, accuracy, and efficiency.
Tasks
Published 2017-09-25
URL http://arxiv.org/abs/1709.08393v2
PDF http://arxiv.org/pdf/1709.08393v2.pdf
PWC https://paperswithcode.com/paper/multi-view-registration-based-on-weighted-low
Repo
Framework
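
A brief sketch of the block structure that LRS-based registration exploits, as this entry describes it (conventions and indices are assumptions, stated here only to make the decomposition concrete):

$$
X = L + S + N, \qquad L_{ij} = M_i M_j^{-1},
$$

where $X$ collects the available relative motions as $4 \times 4$ blocks, $L$ is low rank (it factors as the stacked absolute motions times their inverses, so $\operatorname{rank}(L) \le 4$), $S$ absorbs sparse outliers, and $N$ is noise; the weighted variant attaches a reliability weight to each observed block.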