July 29, 2019

3126 words 15 mins read

Paper Group ANR 142

From visual words to a visual grammar: using language modelling for image classification. Crowdsourcing with Sparsely Interacting Workers. An Effective Feature Selection Method Based on Pair-Wise Feature Proximity for High Dimensional Low Sample Size Data. Unsupervised Adaptive Re-identification in Open World Dynamic Camera Networks. Direct Pose Es …

From visual words to a visual grammar: using language modelling for image classification

Title From visual words to a visual grammar: using language modelling for image classification
Authors Antonio Foncubierta-Rodríguez, Henning Müller, Adrien Depeursinge
Abstract The Bag–of–Visual–Words (BoVW) is a visual description technique that aims at shortening the semantic gap by partitioning a low–level feature space into regions of the feature space that potentially correspond to visual concepts and by giving more value to this space. In this paper we present a conceptual analysis of three major properties of language grammar and how they can be adapted to the computer vision and image understanding domain based on the bag of visual words paradigm. Evaluation of the visual grammar shows that a positive impact on classification accuracy and/or descriptor size is obtained when the proposed techniques are applied.
Tasks Image Classification, Language Modelling
Published 2017-03-16
URL http://arxiv.org/abs/1703.05571v1
PDF http://arxiv.org/pdf/1703.05571v1.pdf
PWC https://paperswithcode.com/paper/from-visual-words-to-a-visual-grammar-using
Repo
Framework

Crowdsourcing with Sparsely Interacting Workers

Title Crowdsourcing with Sparsely Interacting Workers
Authors Yao Ma, Alex Olshevsky, Venkatesh Saligrama, Csaba Szepesvari
Abstract We consider estimation of worker skills from worker-task interaction data (with unknown labels) for the single-coin crowd-sourcing binary classification model in symmetric noise. We define the (worker) interaction graph whose nodes are workers and an edge between two nodes indicates whether or not the two workers participated in a common task. We show that skills are asymptotically identifiable if and only if an appropriate limiting version of the interaction graph is irreducible and has odd-cycles. We then formulate a weighted rank-one optimization problem to estimate skills based on observations on an irreducible, aperiodic interaction graph. We propose a gradient descent scheme and show that for such interaction graphs estimates converge asymptotically to the global minimum. We characterize noise robustness of the gradient scheme in terms of spectral properties of signless Laplacians of the interaction graph. We then demonstrate that a plug-in estimator based on the estimated skills achieves state-of-the-art performance on a number of real-world datasets. Our results have implications for the rank-one matrix completion problem in that gradient descent can provably recover $W \times W$ rank-one matrices based on $W+1$ off-diagonal observations of a connected graph with a single odd-cycle.
Tasks Matrix Completion
Published 2017-06-20
URL http://arxiv.org/abs/1706.06660v1
PDF http://arxiv.org/pdf/1706.06660v1.pdf
PWC https://paperswithcode.com/paper/crowdsourcing-with-sparsely-interacting
Repo
Framework
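
A minimal sketch of the rank-one idea behind this paper (not the authors' code; the agreement statistic and the weighting scheme below are assumptions): under the single-coin model the agreement between two workers who share tasks concentrates around $s_i s_j$ with $s_i = 2p_i - 1$, so skills can be estimated by a weighted rank-one least-squares fit over the edges of the interaction graph, solved by gradient descent.

```python
import numpy as np

def estimate_skills(C, W, n_iter=500, lr=0.1):
    """C[i, j]: observed agreement statistic for edge (i, j); W[i, j]: edge weight
    (e.g. number of shared tasks), zero where two workers never interacted."""
    n = C.shape[0]
    s = np.random.uniform(-0.1, 0.1, size=n)   # random initialisation of skills
    for _ in range(n_iter):
        R = W * (np.outer(s, s) - C)           # weighted residual on observed edges
        np.fill_diagonal(R, 0.0)               # only off-diagonal entries are observed
        grad = 2.0 * R @ s                     # gradient of the rank-one least-squares loss
        s -= lr * grad / max(W.sum(), 1.0)
    return np.clip(s, -1.0, 1.0)
```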

An Effective Feature Selection Method Based on Pair-Wise Feature Proximity for High Dimensional Low Sample Size Data

Title An Effective Feature Selection Method Based on Pair-Wise Feature Proximity for High Dimensional Low Sample Size Data
Authors S L Happy, Ramanarayan Mohanty, Aurobinda Routray
Abstract Feature selection has been studied widely in the literature. However, the efficacy of the selection criteria for low sample size applications is neglected in most cases. Most of the existing feature selection criteria are based on the sample similarity. However, the distance measures become insignificant for high dimensional low sample size (HDLSS) data. Moreover, the variance of a feature with a few samples is pointless unless it represents the data distribution efficiently. Instead of looking at the samples in groups, we evaluate their efficiency in a pairwise fashion. In our investigation, we noticed that considering a pair of samples at a time and selecting the features that bring them closer or put them far away is a better choice for feature selection. Experimental results on benchmark data sets demonstrate the effectiveness of the proposed method with low sample size, which outperforms many other state-of-the-art feature selection methods.
Tasks Feature Selection
Published 2017-08-08
URL http://arxiv.org/abs/1708.02443v1
PDF http://arxiv.org/pdf/1708.02443v1.pdf
PWC https://paperswithcode.com/paper/an-effective-feature-selection-method-based
Repo
Framework
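
A hedged reading of the pair-wise proximity criterion in Python (the paper's exact scoring may differ; `top_k` and the voting scheme are illustrative assumptions): for every pair of samples, vote for the features that keep same-class pairs close and push different-class pairs apart, which stays meaningful even when n is tiny.

```python
import numpy as np
from itertools import combinations

def pairwise_proximity_scores(X, y, top_k=5):
    n, d = X.shape
    votes = np.zeros(d)
    for i, j in combinations(range(n), 2):
        diff = np.abs(X[i] - X[j])
        if y[i] == y[j]:
            # same class: reward the features on which the pair is closest
            votes[np.argsort(diff)[:top_k]] += 1
        else:
            # different classes: reward the features on which the pair is farthest apart
            votes[np.argsort(diff)[-top_k:]] += 1
    return votes  # select the features with the highest vote counts
```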

Unsupervised Adaptive Re-identification in Open World Dynamic Camera Networks

Title Unsupervised Adaptive Re-identification in Open World Dynamic Camera Networks
Authors Rameswar Panda, Amran Bhuiyan, Vittorio Murino, Amit K. Roy-Chowdhury
Abstract Person re-identification is an open and challenging problem in computer vision. Existing approaches have concentrated on either designing the best feature representation or learning optimal matching metrics in a static setting where the number of cameras is fixed in a network. Most approaches have neglected the dynamic and open world nature of the re-identification problem, where a new camera may be temporarily inserted into an existing system to get additional information. To address such a novel and very practical problem, we propose an unsupervised adaptation scheme for re-identification models in a dynamic camera network. First, we formulate a domain perceptive re-identification method based on geodesic flow kernel that can effectively find the best source camera (already installed) to adapt with a newly introduced target camera, without requiring a very expensive training phase. Second, we introduce a transitive inference algorithm for re-identification that can exploit the information from the best source camera to improve the accuracy across other camera pairs in a network of multiple cameras. Extensive experiments on four benchmark datasets demonstrate that the proposed approach significantly outperforms the state-of-the-art unsupervised learning based alternatives whilst being extremely efficient to compute.
Tasks Person Re-Identification
Published 2017-06-09
URL http://arxiv.org/abs/1706.03112v1
PDF http://arxiv.org/pdf/1706.03112v1.pdf
PWC https://paperswithcode.com/paper/unsupervised-adaptive-re-identification-in
Repo
Framework
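
The "best source camera" selection step could look roughly like the following. The paper uses a geodesic-flow-kernel-based measure; the (biased) RBF-kernel MMD statistic here is only an illustrative stand-in for comparing unlabeled feature distributions across cameras.

```python
import numpy as np

def rbf_gram(A, B, gamma=1.0):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma=1.0):
    # simple biased MMD^2 estimate between two unlabeled feature sets
    return rbf_gram(X, X, gamma).mean() + rbf_gram(Y, Y, gamma).mean() \
        - 2.0 * rbf_gram(X, Y, gamma).mean()

def best_source_camera(source_feats, target_feats):
    """source_feats: dict camera_id -> (n_i, d) features; target_feats: (m, d) array."""
    return min(source_feats, key=lambda cam: mmd2(source_feats[cam], target_feats))
```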

Direct Pose Estimation with a Monocular Camera

Title Direct Pose Estimation with a Monocular Camera
Authors Darius Burschka, Elmar Mair
Abstract We present a direct method to calculate a 6DoF pose change of a monocular camera for mobile navigation. The calculated pose is estimated up to a constant unknown scale parameter that is kept constant over the entire reconstruction process. This method allows a direct calculation of the metric position and rotation without any necessity to fuse the information in a probabilistic approach over longer frame sequences, as is the case in most currently used VSLAM approaches. The algorithm provides two novel aspects to the field of monocular navigation. It allows a direct pose estimation without any a-priori knowledge about the world directly from any two images and it provides a quality measure for the estimated motion parameters that allows the resulting information to be fused in Kalman Filters. We present the mathematical formulation of the approach together with experimental validation on real scene images.
Tasks Pose Estimation
Published 2017-09-18
URL http://arxiv.org/abs/1709.05815v1
PDF http://arxiv.org/pdf/1709.05815v1.pdf
PWC https://paperswithcode.com/paper/direct-pose-estimation-with-a-monocular
Repo
Framework
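
For context, a standard two-view relative-pose pipeline with OpenCV is sketched below (this is the generic essential-matrix route, not the paper's specific direct formulation): translation is recovered only up to a global scale factor, exactly the ambiguity described in the abstract.

```python
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """Estimate R, t between two grayscale images given the camera matrix K."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=mask)
    return R, t   # t is known only up to the global scale factor
```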

Matrix Completion and Related Problems via Strong Duality

Title Matrix Completion and Related Problems via Strong Duality
Authors Maria-Florina Balcan, Yingyu Liang, David P. Woodruff, Hongyang Zhang
Abstract This work studies the strong duality of non-convex matrix factorization problems: we show that under certain dual conditions, these problems and their duals have the same optimum. This has been well understood for convex optimization, but little was known for non-convex problems. We propose a novel analytical framework and show that under certain dual conditions, the optimal solution of the matrix factorization program is the same as its bi-dual and thus the global optimality of the non-convex program can be achieved by solving its bi-dual, which is convex. These dual conditions are satisfied by a wide class of matrix factorization problems, although matrix factorization problems are hard to solve in full generality. This analytical framework may be of independent interest to non-convex optimization more broadly. We apply our framework to two prototypical matrix factorization problems: matrix completion and robust Principal Component Analysis (PCA). These are examples of efficiently recovering a hidden matrix given limited reliable observations of it. Our framework shows that exact recoverability and strong duality hold with nearly-optimal sample complexity guarantees for matrix completion and robust PCA.
Tasks Matrix Completion
Published 2017-04-27
URL http://arxiv.org/abs/1704.08683v5
PDF http://arxiv.org/pdf/1704.08683v5.pdf
PWC https://paperswithcode.com/paper/matrix-completion-and-related-problems-via
Repo
Framework
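
As a point of reference, the classical convex relaxation of matrix completion (nuclear-norm minimization) can be written in a few lines of cvxpy; this illustrates the recovery problem the paper studies, not its bi-dual construction.

```python
import numpy as np
import cvxpy as cp

def complete(M_obs, Omega):
    """M_obs: observed matrix (zeros where unobserved); Omega: 0/1 observation mask."""
    X = cp.Variable(M_obs.shape)
    objective = cp.Minimize(cp.normNuc(X))                         # nuclear-norm surrogate for rank
    constraints = [cp.multiply(Omega, X) == cp.multiply(Omega, M_obs)]  # match observed entries
    cp.Problem(objective, constraints).solve()
    return X.value
```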

Classifying Symmetrical Differences and Temporal Change in Mammography Using Deep Neural Networks

Title Classifying Symmetrical Differences and Temporal Change in Mammography Using Deep Neural Networks
Authors Thijs Kooi, Nico Karssemeijer
Abstract We investigate the addition of symmetry and temporal context information to a deep Convolutional Neural Network (CNN) with the purpose of detecting malignant soft tissue lesions in mammography. We employ a simple linear mapping that takes the location of a mass candidate and maps it to either the contra-lateral or prior mammogram and Regions Of Interest (ROI) are extracted around each location. We subsequently explore two different architectures (1) a fusion model employing two datastreams where both ROIs are fed to the network during training and testing and (2) a stage-wise approach where a single ROI CNN is trained on the primary image and subsequently used as feature extractor for both primary and symmetrical or prior ROIs. A ‘shallow’ Gradient Boosted Tree (GBT) classifier is then trained on the concatenation of these features and used to classify the joint representation. Results show a significant increase in performance using the first architecture and symmetry information, but only marginal gains in performance using temporal data and the other setting. We feel results are promising and can be greatly improved when more temporal data becomes available.
Tasks
Published 2017-03-22
URL http://arxiv.org/abs/1703.07715v2
PDF http://arxiv.org/pdf/1703.07715v2.pdf
PWC https://paperswithcode.com/paper/classifying-symmetrical-differences-and
Repo
Framework
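
A sketch of the stage-wise architecture (the second setting) described above; the CNN feature extractor is stubbed out with a placeholder so the snippet runs standalone, and all shapes and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def cnn_features(roi_batch):
    # placeholder for a frozen ROI-CNN's penultimate-layer activations
    return roi_batch.reshape(len(roi_batch), -1)[:, :256]

primary_rois = np.random.rand(100, 32, 32)     # toy stand-ins for mass-candidate ROIs
symmetry_rois = np.random.rand(100, 32, 32)    # contra-lateral ROIs at the mapped locations
labels = np.random.randint(0, 2, size=100)

# concatenate primary and symmetrical features, then classify with a shallow GBT
features = np.hstack([cnn_features(primary_rois), cnn_features(symmetry_rois)])
gbt = GradientBoostingClassifier(n_estimators=200, max_depth=3)
gbt.fit(features, labels)
scores = gbt.predict_proba(features)[:, 1]     # malignancy scores for the candidates
```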

Reinforcement Learning with Budget-Constrained Nonparametric Function Approximation for Opportunistic Spectrum Access

Title Reinforcement Learning with Budget-Constrained Nonparametric Function Approximation for Opportunistic Spectrum Access
Authors Theodoros Tsiligkaridis, David Romero
Abstract Opportunistic spectrum access is one of the emerging techniques for maximizing throughput in congested bands and is enabled by predicting idle slots in spectrum. We propose a kernel-based reinforcement learning approach coupled with a novel budget-constrained sparsification technique that efficiently captures the environment to find the best channel access actions. This approach allows learning and planning over the intrinsic state-action space and extends well to large state spaces. We apply our methods to evaluate coexistence of a reinforcement learning-based radio with a multi-channel adversarial radio and a single-channel CSMA-CA radio. Numerical experiments show the performance gains over carrier-sense systems.
Tasks
Published 2017-06-14
URL http://arxiv.org/abs/1706.04546v2
PDF http://arxiv.org/pdf/1706.04546v2.pdf
PWC https://paperswithcode.com/paper/reinforcement-learning-with-budget
Repo
Framework
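
One way a budget-constrained sparsification rule can be realized is a coherence-based dictionary of state-action prototypes, sketched below; this is a simplified stand-in for illustration, not necessarily the criterion used in the paper.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((x - y) ** 2))

class BudgetedDictionary:
    def __init__(self, budget=100, novelty_threshold=0.9):
        self.budget = budget
        self.threshold = novelty_threshold
        self.points = []

    def maybe_add(self, x):
        # admit a sample only if it is novel enough and the budget allows it
        if not self.points:
            self.points.append(x)
            return True
        similarity = max(rbf(x, p) for p in self.points)
        if similarity < self.threshold and len(self.points) < self.budget:
            self.points.append(x)
            return True
        return False

    def features(self, x):
        # kernel features used to approximate the Q-function over the dictionary
        return np.array([rbf(x, p) for p in self.points])
```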

Servant of Many Masters: Shifting priorities in Pareto-optimal sequential decision-making

Title Servant of Many Masters: Shifting priorities in Pareto-optimal sequential decision-making
Authors Andrew Critch, Stuart Russell
Abstract It is often argued that an agent making decisions on behalf of two or more principals who have different utility functions should adopt a {\em Pareto-optimal} policy, i.e., a policy that cannot be improved upon for one agent without making sacrifices for another. A famous theorem of Harsanyi shows that, when the principals have a common prior on the outcome distributions of all policies, a Pareto-optimal policy for the agent is one that maximizes a fixed, weighted linear combination of the principals’ utilities. In this paper, we show that Harsanyi’s theorem does not hold for principals with different priors, and derive a more precise generalization which does hold, which constitutes our main result. In this more general case, the relative weight given to each principal’s utility should evolve over time according to how well the agent’s observations conform with that principal’s prior. The result has implications for the design of contracts, treaties, joint ventures, and robots.
Tasks Decision Making
Published 2017-10-31
URL http://arxiv.org/abs/1711.00363v1
PDF http://arxiv.org/pdf/1711.00363v1.pdf
PWC https://paperswithcode.com/paper/servant-of-many-masters-shifting-priorities
Repo
Framework
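
Schematically, the result stated in the abstract says that the effective weight on each principal's utility gets rescaled by how well that principal's prior predicted the observations so far (the notation below is illustrative, not the paper's exact statement):

```latex
% w_i^{(t)}: weight on principal i's utility after observations o_{1:t};
% P_i: likelihood of those observations under principal i's prior.
\[
  w_i^{(t)} \;\propto\; w_i^{(0)} \, P_i\!\left(o_{1:t}\right)
\]
```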

Mode-Seeking Clustering and Density Ridge Estimation via Direct Estimation of Density-Derivative-Ratios

Title Mode-Seeking Clustering and Density Ridge Estimation via Direct Estimation of Density-Derivative-Ratios
Authors Hiroaki Sasaki, Takafumi Kanamori, Aapo Hyvärinen, Gang Niu, Masashi Sugiyama
Abstract Modes and ridges of the probability density function behind observed data are useful geometric features. Mode-seeking clustering assigns cluster labels by associating data samples with the nearest modes, and estimation of density ridges enables us to find lower-dimensional structures hidden in data. A key technical challenge both in mode-seeking clustering and density ridge estimation is accurate estimation of the ratios of the first- and second-order density derivatives to the density. A naive approach takes a three-step approach of first estimating the data density, then computing its derivatives, and finally taking their ratios. However, this three-step approach can be unreliable because a good density estimator does not necessarily mean a good density derivative estimator, and division by the estimated density could significantly magnify the estimation error. To cope with these problems, we propose a novel estimator for the \emph{density-derivative-ratios}. The proposed estimator does not involve density estimation, but rather \emph{directly} approximates the ratios of density derivatives of any order. Moreover, we establish a convergence rate of the proposed estimator. Based on the proposed estimator, novel methods both for mode-seeking clustering and density ridge estimation are developed, and the respective convergence rates to the mode and ridge of the underlying density are also established. Finally, we experimentally demonstrate that the developed methods significantly outperform existing methods, particularly for relatively high-dimensional data.
Tasks Density Estimation
Published 2017-07-06
URL http://arxiv.org/abs/1707.01711v3
PDF http://arxiv.org/pdf/1707.01711v3.pdf
PWC https://paperswithcode.com/paper/mode-seeking-clustering-and-density-ridge
Repo
Framework
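
For contrast, the "naive" route the abstract warns about is the familiar KDE plug-in version of mean-shift mode seeking, which implicitly follows the ratio $\nabla p(x)/p(x)$; the paper's contribution is to estimate such derivative-to-density ratios directly instead.

```python
import numpy as np

def mean_shift(X, x, bandwidth=0.5, n_iter=100):
    """Move a query point x toward the nearest mode of a Gaussian KDE on X."""
    for _ in range(n_iter):
        w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * bandwidth ** 2))
        x_new = (w[:, None] * X).sum(axis=0) / w.sum()   # step along grad(p)/p
        if np.linalg.norm(x_new - x) < 1e-6:
            break
        x = x_new
    return x  # an approximate mode of the underlying density

# Mode-seeking clustering: run mean_shift from every sample and group the points
# whose modes coincide (up to a tolerance).
```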

Random Subspace with Trees for Feature Selection Under Memory Constraints

Title Random Subspace with Trees for Feature Selection Under Memory Constraints
Authors Antonio Sutera, Célia Châtel, Gilles Louppe, Louis Wehenkel, Pierre Geurts
Abstract Dealing with datasets of very high dimension is a major challenge in machine learning. In this paper, we consider the problem of feature selection in applications where the memory is not large enough to contain all features. In this setting, we propose a novel tree-based feature selection approach that builds a sequence of randomized trees on small subsamples of variables mixing both variables already identified as relevant by previous models and variables randomly selected among the other variables. As our main contribution, we provide an in-depth theoretical analysis of this method in the infinite sample setting. In particular, we study its soundness with respect to common definitions of feature relevance and its convergence speed under various variable dependence scenarios. We also provide some preliminary empirical results highlighting the potential of the approach.
Tasks Feature Selection
Published 2017-09-04
URL http://arxiv.org/abs/1709.01177v2
PDF http://arxiv.org/pdf/1709.01177v2.pdf
PWC https://paperswithcode.com/paper/random-subspace-with-trees-for-feature
Repo
Framework
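
A sketch of the sequential random-subspace procedure in Python (parameter names and the importance threshold are illustrative assumptions): each round fits a tree on a small feature subsample mixing previously flagged features with fresh random ones, mirroring the memory-constrained setting where only a subsample of features is handled at a time.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def sequential_random_subspace(X, y, q=50, n_rounds=100, seed=0):
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    relevant = set()
    for _ in range(n_rounds):
        kept = list(relevant)[:q // 2]                    # carry over some relevant features
        fresh = rng.choice(
            [f for f in range(n_features) if f not in kept],
            size=min(q - len(kept), n_features - len(kept)),
            replace=False,
        )
        subspace = np.array(kept + list(fresh))
        tree = DecisionTreeClassifier(max_depth=5, random_state=0)
        tree.fit(X[:, subspace], y)
        # flag every feature the tree actually used as relevant
        relevant |= {subspace[i] for i, imp in enumerate(tree.feature_importances_) if imp > 0}
    return sorted(relevant)
```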

TorusE: Knowledge Graph Embedding on a Lie Group

Title TorusE: Knowledge Graph Embedding on a Lie Group
Authors Takuma Ebisu, Ryutaro Ichise
Abstract Knowledge graphs are useful for many artificial intelligence (AI) tasks. However, knowledge graphs often have missing facts. To populate the graphs, knowledge graph embedding models have been developed. Knowledge graph embedding models map entities and relations in a knowledge graph to a vector space and predict unknown triples by scoring candidate triples. TransE is the first translation-based method and it is well known because of its simplicity and efficiency for knowledge graph completion. It employs the principle that the differences between entity embeddings represent their relations. The principle seems very simple, but it can effectively capture the rules of a knowledge graph. However, TransE has a problem with its regularization. TransE forces entity embeddings to be on a sphere in the embedding vector space. This regularization warps the embeddings and makes it difficult for them to fulfill the abovementioned principle. The regularization also adversely affects the accuracy of link prediction. On the other hand, regularization is important because entity embeddings diverge by negative sampling without it. This paper proposes a novel embedding model, TorusE, to solve the regularization problem. The principle of TransE can be defined on any Lie group. A torus, which is one of the compact Lie groups, can be chosen for the embedding space to avoid regularization. To the best of our knowledge, TorusE is the first model that embeds objects in a space other than a real or complex vector space, and this paper is the first to formally discuss the problem of regularization of TransE. Our approach outperforms other state-of-the-art approaches such as TransE, DistMult and ComplEx on a standard link prediction task. We show that TorusE is scalable to large-size knowledge graphs and is faster than the original TransE.
Tasks Entity Embeddings, Graph Embedding, Knowledge Graph Completion, Knowledge Graph Embedding, Knowledge Graphs, Link Prediction
Published 2017-11-15
URL http://arxiv.org/abs/1711.05435v1
PDF http://arxiv.org/pdf/1711.05435v1.pdf
PWC https://paperswithcode.com/paper/toruse-knowledge-graph-embedding-on-a-lie
Repo
Framework
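
The translation principle transplanted onto the torus can be sketched in a few lines: embeddings live in $[0, 1)^n$ with addition modulo 1, and the wrap-around distance keeps them bounded without norm regularization. The exact scoring variants in the paper may differ from this L1 form.

```python
import numpy as np

def torus_score(h, r, t):
    d = np.abs((h + r - t) % 1.0)          # componentwise difference on the circle
    return np.sum(np.minimum(d, 1.0 - d))  # wrap-around (toroidal) L1 distance

h = np.random.rand(50)   # head entity embedding on the 50-torus
r = np.random.rand(50)   # relation embedding
t = (h + r) % 1.0        # a tail that satisfies the translation principle exactly
print(torus_score(h, r, t))   # -> 0.0, the best possible score for a true triple
```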

Estimating the Fundamental Limits is Easier than Achieving the Fundamental Limits

Title Estimating the Fundamental Limits is Easier than Achieving the Fundamental Limits
Authors Jiantao Jiao, Yanjun Han, Irena Fischer-Hwang, Tsachy Weissman
Abstract We show through case studies that it is easier to estimate the fundamental limits of data processing than to construct explicit algorithms to achieve those limits. Focusing on binary classification, data compression, and prediction under logarithmic loss, we show that in the finite space setting, when it is possible to construct an estimator of the limits with vanishing error with $n$ samples, it may require at least $n\ln n$ samples to construct an explicit algorithm to achieve the limits.
Tasks
Published 2017-07-05
URL http://arxiv.org/abs/1707.01203v2
PDF http://arxiv.org/pdf/1707.01203v2.pdf
PWC https://paperswithcode.com/paper/estimating-the-fundamental-limits-is-easier
Repo
Framework

Maximum Resilience of Artificial Neural Networks

Title Maximum Resilience of Artificial Neural Networks
Authors Chih-Hong Cheng, Georg Nührenberg, Harald Ruess
Abstract The deployment of Artificial Neural Networks (ANNs) in safety-critical applications poses a number of new verification and certification challenges. In particular, for ANN-enabled self-driving vehicles it is important to establish properties about the resilience of ANNs to noisy or even maliciously manipulated sensory input. We are addressing these challenges by defining resilience properties of ANN-based classifiers as the maximal amount of input or sensor perturbation which is still tolerated. This problem of computing maximal perturbation bounds for ANNs is then reduced to solving mixed integer optimization problems (MIP). A number of MIP encoding heuristics are developed for drastically reducing MIP-solver runtimes, and using parallelization of MIP-solvers results in an almost linear speed-up in the number of computing cores (up to a certain limit) in our experiments. We demonstrate the effectiveness and scalability of our approach by means of computing maximal resilience bounds for a number of ANN benchmark sets ranging from typical image recognition scenarios to the autonomous maneuvering of robots.
Tasks
Published 2017-04-28
URL http://arxiv.org/abs/1705.01040v2
PDF http://arxiv.org/pdf/1705.01040v2.pdf
PWC https://paperswithcode.com/paper/maximum-resilience-of-artificial-neural
Repo
Framework
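
For readers unfamiliar with such encodings, the standard big-M linearization of a single ReLU unit $y = \max(0, x)$ is shown below purely as an illustration of how these verification problems become mixed integer programs; the paper's exact encoding and heuristics may differ. Here $M$ is any valid bound with $|x| \le M$.

```latex
% b = 1 forces y = x (active unit, x >= 0); b = 0 forces y = 0 (inactive, x <= 0).
\begin{aligned}
  & y \ge x, \qquad y \ge 0, \\
  & y \le x + M\,(1 - b), \qquad y \le M\,b, \\
  & b \in \{0, 1\}.
\end{aligned}
```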

Deep Kernels for Optimizing Locomotion Controllers

Title Deep Kernels for Optimizing Locomotion Controllers
Authors Rika Antonova, Akshara Rai, Christopher G. Atkeson
Abstract Sample efficiency is important when optimizing parameters of locomotion controllers, since hardware experiments are time consuming and expensive. Bayesian Optimization, a sample-efficient optimization framework, has recently been widely applied to address this problem, but further improvements in sample efficiency are needed for practical applicability to real-world robots and high-dimensional controllers. To address this, prior work has proposed using domain expertise for constructing custom distance metrics for locomotion. In this work we show how to learn such a distance metric automatically. We use a neural network to learn an informed distance metric from data obtained in high-fidelity simulations. We conduct experiments on two different controllers and robot architectures. First, we demonstrate improvement in sample efficiency when optimizing a 5-dimensional controller on the ATRIAS robot hardware. We then conduct simulation experiments to optimize a 16-dimensional controller for a 7-link robot model and obtain significant improvements even when optimizing in perturbed environments. This demonstrates that our approach is able to enhance sample efficiency for two different controllers, hence is a fitting candidate for further experiments on hardware in the future.
Tasks
Published 2017-07-27
URL http://arxiv.org/abs/1707.09062v2
PDF http://arxiv.org/pdf/1707.09062v2.pdf
PWC https://paperswithcode.com/paper/deep-kernels-for-optimizing-locomotion
Repo
Framework
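
A hedged sketch of the "deep kernel" idea: distances are computed in the output space of a learned feature map rather than in raw controller-parameter space, and the resulting kernel drives a Gaussian-process surrogate for Bayesian optimization. Here the feature map is a small random-weight network standing in for one trained on high-fidelity simulation data.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(5, 16)), rng.normal(size=(16, 8))

def phi(X):
    return np.tanh(np.tanh(X @ W1) @ W2)      # stand-in for the learned feature map

def deep_kernel(A, B, lengthscale=1.0):
    # RBF kernel evaluated in the feature space defined by phi
    FA, FB = phi(A), phi(B)
    sq = ((FA[:, None, :] - FB[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / lengthscale ** 2)

def gp_posterior(X_train, y_train, X_test, noise=1e-4):
    K = deep_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = deep_kernel(X_test, X_train)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var   # feed these into an acquisition function (e.g. expected improvement)
```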