May 7, 2019

3420 words 17 mins read

Paper Group ANR 31

A Comprehensive Analysis of Deep Learning Based Representation for Face Recognition. Extending the Harper Identity to Iterated Belief Change. Efficient Rectangular Maximal-Volume Algorithm for Rating Elicitation in Collaborative Filtering. Testing $k$-Monotonicity. Gaussian process modeling in approximate Bayesian computation to estimate horizontal …

A Comprehensive Analysis of Deep Learning Based Representation for Face Recognition

Title A Comprehensive Analysis of Deep Learning Based Representation for Face Recognition
Authors Mostafa Mehdipour Ghazi, Hazim Kemal Ekenel
Abstract Deep learning based approaches have been dominating the face recognition field due to the significant performance improvement they have provided on the challenging wild datasets. These approaches have been extensively tested on unconstrained datasets such as Labeled Faces in the Wild and YouTube Faces. However, their capability to handle individual appearance variations caused by factors such as head pose, illumination, occlusion, and misalignment has not been thoroughly assessed until now. In this paper, we present a comprehensive study to evaluate the performance of deep learning based face representation under several conditions, including varying head pose angles, upper and lower face occlusion, changing illumination of different strengths, and misalignment due to erroneous facial feature localization. Two successful and publicly available deep learning models, namely VGG-Face and Lightened CNN, have been utilized to extract face representations. The obtained results show that although deep learning provides a powerful representation for face recognition, it can still benefit from preprocessing, for example pose and illumination normalization, in order to achieve better performance under various conditions. Particularly, if these variations are not included in the dataset used to train the deep learning model, the role of preprocessing becomes more crucial. Experimental results also show that deep learning based representation is robust to misalignment and can tolerate facial feature localization errors of up to 10% of the interocular distance.
Tasks Face Recognition
Published 2016-06-09
URL http://arxiv.org/abs/1606.02894v1
PDF http://arxiv.org/pdf/1606.02894v1.pdf
PWC https://paperswithcode.com/paper/a-comprehensive-analysis-of-deep-learning
Repo
Framework
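The robustness experiments described above boil down to nearest-neighbor identification over fixed-length face embeddings. A minimal sketch, where random vectors stand in for the VGG-Face or Lightened CNN features (an assumption of this illustration, not the paper's actual pipeline):

```python
import numpy as np

def cosine_similarity(a, b):
    # Standard cosine similarity between two feature vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe, gallery):
    # Closed-set identification: index of the most similar gallery embedding.
    return max(range(len(gallery)), key=lambda i: cosine_similarity(probe, gallery[i]))

rng = np.random.default_rng(0)
gallery = [rng.normal(size=128) for _ in range(5)]  # stand-in embeddings
probe = gallery[2] + 0.1 * rng.normal(size=128)     # mimic small misalignment noise
```

Under this protocol, a mildly perturbed embedding still matches its identity, which is the kind of misalignment tolerance the paper measures.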

Extending the Harper Identity to Iterated Belief Change

Title Extending the Harper Identity to Iterated Belief Change
Authors Jake Chandler, Richard Booth
Abstract The field of iterated belief change has focused mainly on revision, with the other main operator of AGM belief change theory, i.e. contraction, receiving relatively little attention. In this paper we extend the Harper Identity from single-step change to define iterated contraction in terms of iterated revision. Specifically, just as the Harper Identity provides a recipe for defining the belief set resulting from contracting A in terms of (i) the initial belief set and (ii) the belief set resulting from revision by not-A, we look at ways to define the plausibility ordering over worlds resulting from contracting A in terms of (iii) the initial plausibility ordering, and (iv) the plausibility ordering resulting from revision by not-A. After noting that the most straightforward such extension leads to a trivialisation of the space of permissible orderings, we provide a family of operators for combining plausibility orderings that avoid such a result. These operators are characterised in our domain of interest by a pair of intuitively compelling properties, which turn out to enable the derivation of a number of iterated contraction postulates from postulates for iterated revision. We finish by observing that a salient member of this family allows for the derivation of counterparts for contraction of some well known iterated revision operators, as well as for defining new iterated contraction operators.
Tasks
Published 2016-04-19
URL http://arxiv.org/abs/1604.05419v1
PDF http://arxiv.org/pdf/1604.05419v1.pdf
PWC https://paperswithcode.com/paper/extending-the-harper-identity-to-iterated
Repo
Framework
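The single-step Harper Identity that the paper generalizes can be sketched directly on belief sets. The `toy_revise` operator below is an invented illustrative stand-in (the paper's actual contribution operates on plausibility orderings over worlds, which is not reproduced here):

```python
def harper_contraction(K, revise, a):
    """Harper Identity: K - a = K intersect (K * not-a), i.e. contracting
    by `a` keeps exactly the initial beliefs that survive revision by not-a."""
    return K & revise(K, "not " + a)

def toy_revise(K, sentence):
    # Illustrative revision: drop every belief mentioning the negated atom,
    # then accept the new input. Not a full AGM revision operator.
    atom = sentence.removeprefix("not ")
    return {s for s in K if atom not in s} | {sentence}

K = {"p", "q", "p and q"}
```

Contracting this toy belief set by "p" retains only "q", matching the intuition that contraction removes a belief without adding its negation.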

Efficient Rectangular Maximal-Volume Algorithm for Rating Elicitation in Collaborative Filtering

Title Efficient Rectangular Maximal-Volume Algorithm for Rating Elicitation in Collaborative Filtering
Authors Alexander Fonarev, Alexander Mikhalev, Pavel Serdyukov, Gleb Gusev, Ivan Oseledets
Abstract The cold start problem in collaborative filtering can be solved by asking new users to rate a small seed set of representative items or by asking representative users to rate a new item. The question is how to build a seed set that can give enough preference information for making good recommendations. One of the most successful approaches, called Representative Based Matrix Factorization, is based on the Maxvol algorithm. Unfortunately, this approach has one important limitation: a seed set of a particular size requires a rating matrix factorization of fixed rank that should coincide with that size. This is not necessarily optimal in the general case. In the current paper, we introduce a fast algorithm for an analytical generalization of this approach that we call Rectangular Maxvol. It allows the rank of the factorization to be lower than the required size of the seed set. Moreover, the paper includes a theoretical analysis of the method’s error, a complexity analysis of the existing methods, and a comparison to state-of-the-art approaches.
Tasks
Published 2016-10-16
URL http://arxiv.org/abs/1610.04850v1
PDF http://arxiv.org/pdf/1610.04850v1.pdf
PWC https://paperswithcode.com/paper/efficient-rectangular-maximal-volume
Repo
Framework
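The key idea, a seed set larger than the factorization rank chosen by maximal volume, can be illustrated with a simple greedy stand-in (the paper's actual algorithm is a faster, analytically derived procedure, not this naive loop):

```python
import numpy as np

def vol2(S):
    # Squared volume spanned by the rows of S: determinant of the smaller
    # Gram matrix, defined for both wide (k <= r) and tall (k > r) selections.
    k, r = S.shape
    G = S @ S.T if k <= r else S.T @ S
    return float(np.linalg.det(G))

def greedy_rect_maxvol(U, seed_size):
    """Greedily pick seed_size rows of the n x r factor matrix U
    (seed_size >= r allowed) approximately maximizing rectangular volume."""
    selected = []
    for _ in range(seed_size):
        candidates = [i for i in range(len(U)) if i not in selected]
        best = max(candidates, key=lambda i: vol2(U[selected + [i]]))
        selected.append(best)
    return selected

U = np.array([[10.0, 0.0], [0.0, 10.0], [0.1, 0.1], [0.2, 0.0], [0.0, 0.3]])
```

With rank-2 factors, a seed set of size 3 picks the two dominant orthogonal rows first and then the row adding the most volume, exactly the "seed set larger than the rank" situation the paper targets.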

Testing $k$-Monotonicity

Title Testing $k$-Monotonicity
Authors Clément L. Canonne, Elena Grigorescu, Siyao Guo, Akash Kumar, Karl Wimmer
Abstract A Boolean $k$-monotone function defined over a finite poset domain ${\cal D}$ alternates between the values $0$ and $1$ at most $k$ times on any ascending chain in ${\cal D}$. Therefore, $k$-monotone functions are natural generalizations of the classical monotone functions, which are the $1$-monotone functions. Motivated by the recent interest in $k$-monotone functions in the context of circuit complexity and learning theory, and by the central role that monotonicity testing plays in the context of property testing, we initiate a systematic study of $k$-monotone functions in the property testing model. In this model, the goal is to distinguish functions that are $k$-monotone (or are close to being $k$-monotone) from functions that are far from being $k$-monotone. Our results include the following: - We demonstrate a separation between testing $k$-monotonicity and testing monotonicity, on the hypercube domain $\{0,1\}^d$, for $k\geq 3$; - We demonstrate a separation between testing and learning on $\{0,1\}^d$, for $k=\omega(\log d)$: testing $k$-monotonicity can be performed with $2^{O(\sqrt d \cdot \log d\cdot \log{1/\varepsilon})}$ queries, while learning $k$-monotone functions requires $2^{\Omega(k\cdot \sqrt d\cdot{1/\varepsilon})}$ queries (Blais et al. (RANDOM 2015)). - We present a tolerant test for functions $f\colon[n]^d\to \{0,1\}$ with complexity independent of $n$, which makes progress on a problem left open by Berman et al. (STOC 2014). Our techniques exploit the testing-by-learning paradigm, use novel applications of Fourier analysis on the grid $[n]^d$, and draw connections to distribution testing techniques.
Tasks
Published 2016-09-01
URL http://arxiv.org/abs/1609.00265v2
PDF http://arxiv.org/pdf/1609.00265v2.pdf
PWC https://paperswithcode.com/paper/testing-k-monotonicity
Repo
Framework
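The defining condition, at most $k$ alternations along any ascending chain, is easy to check directly. A minimal sketch on a single chain in $\{0,1\}^2$ (a full tester would examine many chains, which is precisely what the paper's query-efficient algorithms avoid doing exhaustively):

```python
def alternations(bits):
    # Number of 0/1 value changes along a sequence.
    return sum(1 for a, b in zip(bits, bits[1:]) if a != b)

def is_k_monotone_on_chain(f, chain, k):
    """f is k-monotone on an ascending chain iff its values alternate
    at most k times along it; 1-monotone is classical monotonicity."""
    return alternations([f(x) for x in chain]) <= k

chain = [(0, 0), (0, 1), (1, 1)]        # an ascending chain in {0,1}^2
parity = lambda x: sum(x) % 2           # values 0, 1, 0: two alternations
disjunction = lambda x: int(any(x))     # values 0, 1, 1: one alternation
```

Parity is $2$-monotone but not monotone on this chain, while OR is $1$-monotone, matching the hierarchy described in the abstract.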

Gaussian process modeling in approximate Bayesian computation to estimate horizontal gene transfer in bacteria

Title Gaussian process modeling in approximate Bayesian computation to estimate horizontal gene transfer in bacteria
Authors Marko Järvenpää, Michael Gutmann, Aki Vehtari, Pekka Marttinen
Abstract Approximate Bayesian computation (ABC) can be used for model fitting when the likelihood function is intractable but simulating from the model is feasible. However, even a single evaluation of a complex model may take several hours, limiting the number of model evaluations available. Modelling the discrepancy between the simulated and observed data using a Gaussian process (GP) can be used to reduce the number of model evaluations required by ABC, but the sensitivity of this approach to a specific GP formulation has not yet been thoroughly investigated. We begin with a comprehensive empirical evaluation of using GPs in ABC, including various transformations of the discrepancies and two novel GP formulations. Our results indicate the choice of GP may significantly affect the accuracy of the estimated posterior distribution. Selection of an appropriate GP model is thus important. We formulate expected utility to measure the accuracy of classifying discrepancies below or above the ABC threshold, and show that it can be used to automate the GP model selection step. Finally, based on the understanding gained with toy examples, we fit a population genetic model for bacteria, providing insight into horizontal gene transfer events within the population and from external origins.
Tasks Model Selection
Published 2016-10-20
URL http://arxiv.org/abs/1610.06462v3
PDF http://arxiv.org/pdf/1610.06462v3.pdf
PWC https://paperswithcode.com/paper/gaussian-process-modeling-in-approximate
Repo
Framework
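The core mechanism, fitting a GP to the discrepancy as a function of the parameter and reading off the probability that the discrepancy falls below the ABC threshold, can be sketched as follows. The quadratic stand-in discrepancy, the fixed RBF hyperparameters, and the threshold value are all assumptions of this illustration:

```python
import math
import numpy as np

def rbf(a, b, ell=0.5, s2=1.0):
    # Squared-exponential kernel between two 1-D parameter arrays.
    d = a[:, None] - b[None, :]
    return s2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_fit_predict(theta_tr, disc_tr, theta_te, noise=1e-2):
    # Standard GP regression posterior mean/variance on the discrepancies.
    K = rbf(theta_tr, theta_tr) + noise * np.eye(len(theta_tr))
    Ks = rbf(theta_te, theta_tr)
    alpha = np.linalg.solve(K, disc_tr)
    mean = Ks @ alpha
    var = 1.0 + noise - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var

def accept_prob(mean, var, eps):
    # P(discrepancy < eps) under the GP: the ABC acceptance probability.
    z = (eps - mean) / np.sqrt(var)
    return np.array([0.5 * (1.0 + math.erf(zi / math.sqrt(2.0))) for zi in z])

theta_tr = np.linspace(0.0, 4.0, 9)
disc_tr = (theta_tr - 2.0) ** 2            # stand-in simulator discrepancy
mean, var = gp_fit_predict(theta_tr, disc_tr, np.array([2.0, 0.0]))
p = accept_prob(mean, var, eps=0.5)
```

Parameters near the discrepancy minimum receive a high acceptance probability and distant ones a negligible one, which is the surrogate-posterior behavior the paper analyzes across GP formulations.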

How Users Explore Ontologies on the Web: A Study of NCBO’s BioPortal Usage Logs

Title How Users Explore Ontologies on the Web: A Study of NCBO’s BioPortal Usage Logs
Authors Simon Walk, Lisette Espín-Noboa, Denis Helic, Markus Strohmaier, Mark Musen
Abstract Ontologies in the biomedical domain are numerous, highly specialized and very expensive to develop. Thus, a crucial prerequisite for ontology adoption and reuse is effective support for exploring and finding existing ontologies. Towards that goal, the National Center for Biomedical Ontology (NCBO) has developed BioPortal—an online repository designed to support users in exploring and finding more than 500 existing biomedical ontologies. In 2016, BioPortal represents one of the largest portals for exploration of semantic biomedical vocabularies and terminologies, which is used by many researchers and practitioners. While usage of this portal is high, we know very little about how exactly users search and explore ontologies and what kind of usage patterns or user groups exist in the first place. Deeper insights into user behavior on such portals can provide valuable information to devise strategies for a better support of users in exploring and finding existing ontologies, and thereby enable better ontology reuse. To that end, we study and group users according to their browsing behavior on BioPortal using data mining techniques. Additionally, we use the obtained groups to characterize and compare exploration strategies across ontologies. In particular, we were able to identify seven distinct browsing-behavior types, which all make use of different functionality provided by BioPortal. For example, Search Explorers make extensive use of the search functionality while Ontology Tree Explorers mainly rely on the class hierarchy to explore ontologies. Further, we show that specific characteristics of ontologies influence the way users explore and interact with the website. Our results may guide the development of more user-oriented systems for ontology exploration on the Web.
Tasks
Published 2016-10-28
URL http://arxiv.org/abs/1610.09160v2
PDF http://arxiv.org/pdf/1610.09160v2.pdf
PWC https://paperswithcode.com/paper/how-users-explore-ontologies-on-the-web-a
Repo
Framework
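The grouping step, clustering users by their browsing behavior, can be sketched with plain k-means over per-user action counts. Both the feature choice (search vs. class-hierarchy clicks) and the use of k-means are assumptions here; the paper does not commit to this exact pipeline:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means: a stand-in for the data-mining step that groups
    users into browsing-behavior types."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Hypothetical per-user action counts: [search queries, tree clicks].
users = np.array([[10.0, 0.0], [9.0, 1.0], [0.0, 10.0], [1.0, 9.0]])
```

On this toy data the two groups separate into search-heavy and hierarchy-heavy users, analogous to the Search Explorers and Ontology Tree Explorers described above.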

Authorship Attribution Using a Neural Network Language Model

Title Authorship Attribution Using a Neural Network Language Model
Authors Zhenhao Ge, Yufang Sun, Mark J. T. Smith
Abstract In practice, training language models for individual authors is often expensive because of limited data resources. In such cases, Neural Network Language Models (NNLMs) generally outperform the traditional non-parametric N-gram models. Here we investigate the performance of a feed-forward NNLM on an authorship attribution problem, with moderate author set size and relatively limited data. We also consider how the text topics impact performance. Compared with a well-constructed N-gram baseline method with Kneser-Ney smoothing, the proposed method achieves nearly 2.5% reduction in perplexity and increases author classification accuracy by 3.43% on average, given as few as 5 test sentences. The performance is very competitive with the state of the art in terms of accuracy and demand on test data. The source code, preprocessed datasets, a detailed description of the methodology and results are available at https://github.com/zge/authorship-attribution.
Tasks Language Modelling
Published 2016-02-17
URL http://arxiv.org/abs/1602.05292v1
PDF http://arxiv.org/pdf/1602.05292v1.pdf
PWC https://paperswithcode.com/paper/authorship-attribution-using-a-neural-network
Repo
Framework
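The attribution scheme, train one language model per author and assign a test sentence to the author whose model yields the lowest perplexity, can be sketched with a smoothed unigram model standing in for the paper's feed-forward NNLM (the toy corpus is invented):

```python
import math
from collections import Counter

def train_lm(sentences):
    # Unigram counts; the paper uses a feed-forward NNLM instead.
    counts = Counter(w for s in sentences for w in s.split())
    return counts, sum(counts.values())

def perplexity(model, sentence, vocab_size):
    counts, total = model
    words = sentence.split()
    # Add-one smoothing so unseen words get non-zero probability.
    logp = sum(math.log((counts[w] + 1) / (total + vocab_size)) for w in words)
    return math.exp(-logp / len(words))

def attribute(models, sentence, vocab_size):
    # Attribute to the author whose model is least surprised.
    return min(models, key=lambda a: perplexity(models[a], sentence, vocab_size))

corpus = {
    "A": ["the cat sat on the mat", "a cat naps in the sun"],
    "B": ["the code compiles without errors", "a bug hides in the code"],
}
vocab = {w for sents in corpus.values() for s in sents for w in s.split()}
models = {a: train_lm(sents) for a, sents in corpus.items()}
```

The decision rule is exactly the one the paper evaluates; only the underlying language model differs.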

Rolling Shutter Camera Relative Pose: Generalized Epipolar Geometry

Title Rolling Shutter Camera Relative Pose: Generalized Epipolar Geometry
Authors Yuchao Dai, Hongdong Li, Laurent Kneip
Abstract The vast majority of modern consumer-grade cameras employ a rolling shutter mechanism. In dynamic geometric computer vision applications such as visual SLAM, the so-called rolling shutter effect therefore needs to be properly taken into account. A dedicated relative pose solver appears to be the first problem to solve, as it is of eminent importance to bootstrap any derivation of multi-view geometry. However, despite its significance, it has received inadequate attention to date. This paper presents a detailed investigation of the geometry of the rolling shutter relative pose problem. We introduce the rolling shutter essential matrix, and establish its link to existing models such as the push-broom cameras, summarized in a clean hierarchy of multi-perspective cameras. The generalization of well-established concepts from epipolar geometry is completed by a definition of the Sampson distance in the rolling shutter case. The work is concluded with a careful investigation of the introduced epipolar geometry for rolling shutter cameras on several dedicated benchmarks.
Tasks
Published 2016-05-02
URL http://arxiv.org/abs/1605.00475v1
PDF http://arxiv.org/pdf/1605.00475v1.pdf
PWC https://paperswithcode.com/paper/rolling-shutter-camera-relative-pose
Repo
Framework
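For context, the classical (global-shutter) Sampson distance that the paper generalizes to rolling shutter can be sketched as follows; the rolling shutter essential matrix itself is not reproduced here:

```python
import numpy as np

def sampson_distance(E, x1, x2):
    """First-order approximation of the geometric error of a correspondence
    (x1, x2) w.r.t. the epipolar constraint x2^T E x1 = 0, with points
    given as homogeneous 3-vectors."""
    Ex1 = E @ x1
    Etx2 = E.T @ x2
    num = float(x2 @ Ex1) ** 2
    den = Ex1[0] ** 2 + Ex1[1] ** 2 + Etx2[0] ** 2 + Etx2[1] ** 2
    return num / den

# Essential matrix for a pure translation along x: E = [t]_x with t = (1, 0, 0).
E = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, -1.0], [0.0, 1.0, 0.0]])
```

A correspondence consistent with the motion has zero Sampson distance, and a vertically perturbed match incurs a small positive error, the quantity a relative pose solver minimizes over inliers.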

Robust and Globally Optimal Manhattan Frame Estimation in Near Real Time

Title Robust and Globally Optimal Manhattan Frame Estimation in Near Real Time
Authors Kyungdon Joo, Tae-Hyun Oh, Junsik Kim, In So Kweon
Abstract Most man-made environments, such as urban and indoor scenes, consist of a set of parallel and orthogonal planar structures. These structures are approximated by the Manhattan world assumption, a notion that can be represented as a Manhattan frame (MF). Given a set of inputs such as surface normals or vanishing points, we pose an MF estimation problem as a consensus set maximization that maximizes the number of inliers over the rotation search space. Conventionally, this problem can be solved by a branch-and-bound framework, which mathematically guarantees global optimality. However, the computational time of the conventional branch-and-bound algorithms is rather far from real-time. In this paper, we propose a novel bound computation method on an efficient measurement domain for MF estimation, i.e., the extended Gaussian image (EGI). By relaxing the original problem, we can compute the bound with a constant complexity, while preserving global optimality. Furthermore, we quantitatively and qualitatively demonstrate the performance of the proposed method for various synthetic and real-world data. We also show the versatility of our approach through three different applications: extension to multiple MF estimation, 3D rotation based video stabilization, and vanishing point estimation (line clustering).
Tasks
Published 2016-05-12
URL http://arxiv.org/abs/1605.03730v2
PDF http://arxiv.org/pdf/1605.03730v2.pdf
PWC https://paperswithcode.com/paper/robust-and-globally-optimal-manhattan-frame
Repo
Framework
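The objective being maximized, the consensus (inlier) count of a candidate rotation, can be sketched directly; the branch-and-bound search and the EGI-based bound that make this tractable are the paper's contribution and are not reproduced here:

```python
import numpy as np

def mf_inlier_count(normals, R, thresh_deg=5.0):
    """Consensus score of a candidate Manhattan frame: the number of unit
    surface normals within thresh_deg of one of the six signed axes of
    the rotation R (rows of R are the frame axes)."""
    axes = np.vstack([R, -R])
    cos_t = np.cos(np.radians(thresh_deg))
    dots = normals @ axes.T          # cosine of the angle to each signed axis
    return int(np.sum(dots.max(axis=1) >= cos_t))

# Unit normals: three on the canonical axes, one 45 degrees off.
normals = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0],
                    [np.sqrt(0.5), np.sqrt(0.5), 0.0]])
```

MF estimation then amounts to searching rotation space for the R maximizing this count, with branch-and-bound certifying the global optimum.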

Transferring Knowledge from Text to Predict Disease Onset

Title Transferring Knowledge from Text to Predict Disease Onset
Authors Yun Liu, Kun-Ta Chuang, Fu-Wen Liang, Huey-Jen Su, Collin M. Stultz, John V. Guttag
Abstract In many domains such as medicine, training data is in short supply. In such cases, external knowledge is often helpful in building predictive models. We propose a novel method to incorporate publicly available domain expertise to build accurate models. Specifically, we use word2vec models trained on a domain-specific corpus to estimate the relevance of each feature’s text description to the prediction problem. We use these relevance estimates to rescale the features, causing more important features to experience weaker regularization. We apply our method to predict the onset of five chronic diseases in the next five years in two genders and two age groups. Our rescaling approach improves the accuracy of the model, particularly when there are few positive examples. Furthermore, our method selects 60% fewer features, easing interpretation by physicians. Our method is applicable to other domains where feature and outcome descriptions are available.
Tasks
Published 2016-08-06
URL http://arxiv.org/abs/1608.02071v1
PDF http://arxiv.org/pdf/1608.02071v1.pdf
PWC https://paperswithcode.com/paper/transferring-knowledge-from-text-to-predict
Repo
Framework
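The rescaling trick, scaling each feature by its estimated text relevance so that a uniform penalty regularizes important features more weakly, can be sketched with a ridge fit (the relevance scores, which the paper derives from word2vec similarities, are given as inputs here):

```python
import numpy as np

def fit_ridge_rescaled(X, y, relevance, lam=2.0, alpha=1.0):
    """Scale each feature column by (1 + alpha * relevance) before a
    standard uniform-penalty ridge fit; mapping the learned weights back
    multiplies them by the same scales, so relevant features are
    effectively penalized less."""
    s = 1.0 + alpha * np.asarray(relevance)
    Xs = X * s
    w_s = np.linalg.solve(Xs.T @ Xs + lam * np.eye(X.shape[1]), Xs.T @ y)
    return s * w_s   # weights in the original feature space

# Two interchangeable features; only the second is deemed relevant.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
y = np.array([1.0, 1.0, 1.0, 1.0])
```

With equal data support, the relevant feature's recovered coefficient is shrunk less toward zero, which is how prior knowledge enters the model.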

Detection of epileptic seizure in EEG signals using linear least squares preprocessing

Title Detection of epileptic seizure in EEG signals using linear least squares preprocessing
Authors Z. Roshan Zamir
Abstract An epileptic seizure is a transient event of abnormal excessive neuronal discharge in the brain. This unwanted event can be obstructed by detection of electrical changes in the brain that happen before the seizure takes place. The automatic detection of seizures is necessary since the visual screening of EEG recordings is a time-consuming task and requires experts to improve the diagnosis. Four linear least squares-based preprocessing models are proposed to extract key features of an EEG signal in order to detect seizures. The first two models are newly developed. The original signal (EEG) is approximated by a sinusoidal curve. Its amplitude is formed by a polynomial function and compared with the pre-developed spline function. Different statistical measures, namely classification accuracy, true positive and negative rates, false positive and negative rates and precision, are utilized to assess the performance of the proposed models. These metrics are derived from confusion matrices obtained from classifiers. Different classifiers are used over the original dataset and the set of extracted features. The proposed models significantly reduce the dimension of the classification problem and the computational time while the classification accuracy is improved in most cases. The first and third models are promising feature extraction methods. Logistic, LazyIB1, LazyIB5 and J48 are the best classifiers. Their true positive and negative rates are $1$ while false positive and negative rates are zero and the corresponding precision values are $1$. Numerical results suggest that these models are robust and efficient for detecting epileptic seizures.
Tasks EEG
Published 2016-04-27
URL http://arxiv.org/abs/1604.08500v1
PDF http://arxiv.org/pdf/1604.08500v1.pdf
PWC https://paperswithcode.com/paper/detection-of-epileptic-seizure-in-eeg-signals
Repo
Framework
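The core preprocessing idea, approximating the signal by a sinusoid fitted with linear least squares and keeping a few fit parameters as features, can be sketched as follows. The specific basis (constant plus one sine/cosine pair at a known frequency) is an assumption of this illustration, simpler than the paper's polynomial- and spline-amplitude models:

```python
import numpy as np

def sinusoid_features(signal, freq, fs):
    """Linear least-squares fit of a + b*sin(2*pi*f*t) + c*cos(2*pi*f*t);
    returns (offset, amplitude, residual norm) as compact features."""
    t = np.arange(len(signal)) / fs
    A = np.column_stack([np.ones_like(t),
                         np.sin(2 * np.pi * freq * t),
                         np.cos(2 * np.pi * freq * t)])
    coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
    a, b, c = coef
    fitted = A @ coef
    return float(a), float(np.hypot(b, c)), float(np.linalg.norm(signal - fitted))

# Synthetic 5 Hz "EEG" segment sampled at 100 Hz.
t = np.arange(200) / 100.0
eeg = 0.5 + 3.0 * np.sin(2 * np.pi * 5.0 * t)
offset, amplitude, resid = sinusoid_features(eeg, 5.0, 100.0)
```

A few such coefficients replace hundreds of raw samples, which is how the models shrink the classification problem before the classifiers run.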

Non-convex regularization in remote sensing

Title Non-convex regularization in remote sensing
Authors Devis Tuia, Remi Flamary, Michel Barlaud
Abstract In this paper, we study the effect of different regularizers and their implications in high dimensional image classification and sparse linear unmixing. Although kernelization or sparse methods are globally accepted solutions for processing data in high dimensions, we present here a study on the impact of the form of regularization used and its parametrization. We consider regularization via traditional squared ($\ell_2$) and sparsity-promoting ($\ell_1$) norms, as well as more unconventional nonconvex regularizers ($\ell_p$ and Log Sum Penalty). We compare their properties and advantages on several classification and linear unmixing tasks and provide advice on the choice of the best regularizer for the problem at hand. Finally, we also provide a fully functional toolbox for the community.
Tasks Image Classification
Published 2016-06-23
URL http://arxiv.org/abs/1606.07289v1
PDF http://arxiv.org/pdf/1606.07289v1.pdf
PWC https://paperswithcode.com/paper/non-convex-regularization-in-remote-sensing
Repo
Framework
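The contrast between the convex $\ell_1$ norm and the nonconvex Log Sum Penalty can be made concrete: the $\ell_1$ prox is soft thresholding, while LSP charges a dense vector far more than an equally sized sparse one, behaving closer to an $\ell_0$ count. A minimal sketch (the $\epsilon$ value is an illustrative choice):

```python
import numpy as np

def prox_l1(v, lam):
    # Proximal operator of lam * ||.||_1: soft thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def log_sum_penalty(v, eps=1e-3):
    # Non-convex Log Sum Penalty: sum_i log(1 + |v_i| / eps).
    return float(np.sum(np.log1p(np.abs(v) / eps)))

sparse = np.array([1.0, 0.0, 0.0])
dense = np.full(3, 1.0 / 3.0)   # same l1 norm as `sparse`
```

Both vectors have identical $\ell_1$ norm, yet LSP penalizes the dense one much more, which is the sparsity-promoting behavior the paper studies.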

Active Learning for Community Detection in Stochastic Block Models

Title Active Learning for Community Detection in Stochastic Block Models
Authors Akshay Gadde, Eyal En Gad, Salman Avestimehr, Antonio Ortega
Abstract The stochastic block model (SBM) is an important generative model for random graphs in network science and machine learning, useful for benchmarking community detection (or clustering) algorithms. The symmetric SBM generates a graph with $2n$ nodes which cluster into two equally sized communities. Nodes connect with probability $p$ within a community and $q$ across different communities. We consider the case of $p=a\ln (n)/n$ and $q=b\ln (n)/n$. In this case, it was recently shown that recovering the community membership (or label) of every node with high probability (w.h.p.) using only the graph is possible if and only if the Chernoff-Hellinger (CH) divergence $D(a,b)=(\sqrt{a}-\sqrt{b})^2 \geq 1$. In this work, we study if, and by how much, community detection below the clustering threshold (i.e. $D(a,b)<1$) is possible by querying the labels of a limited number of chosen nodes (i.e., active learning). Our main result is to show that, under certain conditions, sampling the labels of a vanishingly small fraction of nodes (a number sub-linear in $n$) is sufficient for exact community detection even when $D(a,b)<1$. Furthermore, we provide an efficient learning algorithm which recovers the community memberships of all nodes w.h.p. as long as the number of sampled points meets the sufficient condition. We also show that recovery is not possible if the number of observed labels is less than $n^{1-D(a,b)}$. The validity of our results is demonstrated through numerical experiments.
Tasks Active Learning, Community Detection
Published 2016-05-08
URL http://arxiv.org/abs/1605.02372v1
PDF http://arxiv.org/pdf/1605.02372v1.pdf
PWC https://paperswithcode.com/paper/active-learning-for-community-detection-in
Repo
Framework
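The quantities driving the results above, the Chernoff-Hellinger divergence, the graph-only recovery threshold, and the paper's lower bound on the number of queried labels, can be computed directly:

```python
import math

def ch_divergence(a, b):
    """Chernoff-Hellinger divergence D(a,b) = (sqrt(a) - sqrt(b))^2 for
    the symmetric two-community SBM with p = a*ln(n)/n, q = b*ln(n)/n."""
    return (math.sqrt(a) - math.sqrt(b)) ** 2

def recoverable_from_graph_alone(a, b):
    # Exact recovery w.h.p. without label queries iff D(a,b) >= 1.
    return ch_divergence(a, b) >= 1.0

def label_query_lower_bound(a, b, n):
    # Below the threshold, at least ~ n^(1 - D(a,b)) observed labels
    # are necessary (the paper's impossibility result).
    return n ** (1.0 - ch_divergence(a, b))
```

For instance, (a, b) = (4, 1) sits exactly at the threshold, while (2, 1) is below it and requires a sub-linear but nonzero number of queried labels.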

Stateology: State-Level Interactive Charting of Language, Feelings, and Values

Title Stateology: State-Level Interactive Charting of Language, Feelings, and Values
Authors Konstantinos Pappas, Steven Wilson, Rada Mihalcea
Abstract People’s personality and motivations are manifest in their everyday language usage. With the emergence of social media, ample examples of such usage are procurable. In this paper, we aim to analyze the vocabulary used by close to 200,000 Blogger users in the U.S. with the purpose of geographically portraying various demographic, linguistic, and psychological dimensions at the state level. We give a description of a web-based tool for viewing maps that depict various characteristics of the social media users as derived from this large blog dataset of over two billion words.
Tasks
Published 2016-12-20
URL http://arxiv.org/abs/1612.06685v1
PDF http://arxiv.org/pdf/1612.06685v1.pdf
PWC https://paperswithcode.com/paper/stateology-state-level-interactive-charting
Repo
Framework

Adaptive Lambda Least-Squares Temporal Difference Learning

Title Adaptive Lambda Least-Squares Temporal Difference Learning
Authors Timothy A. Mann, Hugo Penedones, Shie Mannor, Todd Hester
Abstract Temporal Difference learning or TD($\lambda$) is a fundamental algorithm in the field of reinforcement learning. However, setting TD’s $\lambda$ parameter, which controls the timescale of TD updates, is generally left up to the practitioner. We formalize the $\lambda$ selection problem as a bias-variance trade-off where the solution is the value of $\lambda$ that leads to the smallest Mean Squared Value Error (MSVE). To solve this trade-off we suggest applying Leave-One-Trajectory-Out Cross-Validation (LOTO-CV) to search the space of $\lambda$ values. Unfortunately, this approach is too computationally expensive for most practical applications. For Least Squares TD (LSTD) we show that LOTO-CV can be implemented efficiently to automatically tune $\lambda$ and apply function optimization methods to efficiently search the space of $\lambda$ values. The resulting algorithm, ALLSTD, is parameter free and our experiments demonstrate that ALLSTD is significantly computationally faster than the naïve LOTO-CV implementation while achieving similar performance.
Tasks
Published 2016-12-30
URL http://arxiv.org/abs/1612.09465v1
PDF http://arxiv.org/pdf/1612.09465v1.pdf
PWC https://paperswithcode.com/paper/adaptive-lambda-least-squares-temporal
Repo
Framework
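The selection criterion, pick the $\lambda$ with the smallest MSVE, can be sketched with tabular TD($\lambda$). Note this sketch scores candidates against a known reference value function; ALLSTD's point is to estimate that score via efficient leave-one-trajectory-out cross-validation over LSTD, which is not reproduced here:

```python
def td_lambda(trajectories, n_states, lam, alpha=0.1, gamma=1.0):
    """Tabular TD(lambda) with accumulating eligibility traces. Each
    trajectory is a list of (state, reward, next_state) steps, with
    next_state = None at termination."""
    V = [0.0] * n_states
    for traj in trajectories:
        e = [0.0] * n_states
        for s, r, s_next in traj:
            v_next = V[s_next] if s_next is not None else 0.0
            delta = r + gamma * v_next - V[s]
            e = [gamma * lam * ei for ei in e]
            e[s] += 1.0
            for i in range(n_states):
                V[i] += alpha * delta * e[i]
    return V

def msve(V, V_true):
    return sum((v - t) ** 2 for v, t in zip(V, V_true)) / len(V)

def select_lambda(trajectories, n_states, V_true, candidates):
    # Pick the lambda minimizing MSVE against the reference values.
    return min(candidates,
               key=lambda lam: msve(td_lambda(trajectories, n_states, lam), V_true))

# Two-state chain: state 0 -> state 1 (reward 0) -> terminal (reward 1).
trajs = [[(0, 0.0, 1), (1, 1.0, None)]] * 50
```

On this deterministic chain the true values are (1, 1); larger $\lambda$ propagates the terminal reward back faster, so the selection rule picks $\lambda = 1$.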