May 7, 2019


Paper Group ANR 35


Classification of Human Whole-Body Motion using Hidden Markov Models

Title Classification of Human Whole-Body Motion using Hidden Markov Models
Authors Matthias Plappert
Abstract Human motion plays an important role in many fields. Large databases exist that store and make available recordings of human motions. However, annotating each motion with multiple labels is a cumbersome and error-prone process. This bachelor’s thesis presents different approaches to solve the multi-label classification problem using Hidden Markov Models (HMMs). First, different features that can be directly obtained from the raw data are introduced. Next, additional features are derived to improve classification performance. These features are then used to perform the multi-label classification using two different approaches. The first approach simply transforms the multi-label problem into a multi-class problem. The second, novel approach solves the same problem without the need to construct a transformation by predicting the labels directly from the likelihood scores. The second approach scales linearly with the number of labels, whereas the first approach is subject to combinatorial explosion. All aspects of the classification process are evaluated on a data set that consists of 454 motions. System 1 achieves an accuracy of 98.02% and System 2 an accuracy of 93.39% on the test set.
Tasks Multi-Label Classification
Published 2016-05-05
URL http://arxiv.org/abs/1605.01569v1
PDF http://arxiv.org/pdf/1605.01569v1.pdf
PWC https://paperswithcode.com/paper/classification-of-human-whole-body-motion
Repo
Framework
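
The thesis's second approach, predicting labels directly from per-model likelihood scores, can be sketched with one small discrete HMM per label. The motion labels, transition/emission values, and decision threshold below are hypothetical, chosen only to show how per-label scoring stays linear in the number of labels:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the (scaled) forward algorithm."""
    alpha = pi * B[:, obs[0]]
    logp = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        logp += np.log(s)
        alpha = alpha / s
    return logp

pi = np.array([0.5, 0.5])
A = np.array([[0.8, 0.2], [0.2, 0.8]])
# one HMM per label; emission rows are over 3 discretized motion symbols
models = {
    "walk": np.array([[0.60, 0.35, 0.05], [0.35, 0.60, 0.05]]),
    "wave": np.array([[0.05, 0.15, 0.80], [0.15, 0.05, 0.80]]),
}

obs = [0, 1, 0, 1, 0, 0, 1]  # hypothetical quantized motion sequence
# predict every label whose per-frame log-likelihood clears a threshold;
# this is one scoring pass per label, i.e. linear in the number of labels
labels = [name for name, B in models.items()
          if forward_loglik(obs, pi, A, B) / len(obs) > -1.0]
```

Each added label costs one more scoring pass, whereas the label-powerset transformation of the first approach multiplies the number of classes.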

Superresolution of Noisy Remotely Sensed Images Through Directional Representations

Title Superresolution of Noisy Remotely Sensed Images Through Directional Representations
Authors Wojciech Czaja, James M. Murphy, Daniel Weinberg
Abstract We develop an algorithm for single-image superresolution of remotely sensed data, based on the discrete shearlet transform. The shearlet transform extracts directional features of signals, and is known to provide near-optimally sparse representations for a broad class of images. This often leads to superior performance in edge detection and image representation when compared to isotropic frames. We justify the use of shearlets mathematically, before presenting a denoising single-image superresolution algorithm that combines the shearlet transform with sparse mixing estimators (SME). Our algorithm is compared with a variety of single-image superresolution methods, including wavelet SME superresolution. Our numerical results demonstrate competitive performance in terms of PSNR and SSIM.
Tasks Denoising, Edge Detection
Published 2016-02-27
URL http://arxiv.org/abs/1602.08575v2
PDF http://arxiv.org/pdf/1602.08575v2.pdf
PWC https://paperswithcode.com/paper/superresolution-of-noisy-remotely-sensed
Repo
Framework

Building a Learning Database for the Neural Network Retrieval of Sea Surface Salinity from SMOS Brightness Temperatures

Title Building a Learning Database for the Neural Network Retrieval of Sea Surface Salinity from SMOS Brightness Temperatures
Authors Adel Ammar, Sylvie Labroue, Estelle Obligis, Michel Crépon, Sylvie Thiria
Abstract This article deals with an important aspect of the neural network retrieval of sea surface salinity (SSS) from SMOS brightness temperatures (TBs). The neural network retrieval method is an empirical approach that offers the possibility of being independent from any theoretical emissivity model during the in-flight phase. A previous study [1] has shown that this approach is applicable to all ocean pixels by designing a set of neural networks with different inputs. The present study focuses on the choice of the learning database and demonstrates that a judicious distribution of the geophysical parameters makes it possible to markedly reduce the systematic regional biases of the retrieved SSS, which are due to the high noise on the TBs. An equalization of the distribution of the geophysical parameters, followed by a new technique for boosting the learning process, makes the regional biases almost disappear for latitudes between 40°S and 40°N, while the global standard deviation remains between 0.6 psu (at the center of the swath) and 1 psu (at the edges).
Tasks
Published 2016-01-17
URL http://arxiv.org/abs/1601.04296v1
PDF http://arxiv.org/pdf/1601.04296v1.pdf
PWC https://paperswithcode.com/paper/building-a-learning-database-for-the-neural
Repo
Framework
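
The equalization step described in the abstract can be sketched as density-inverse resampling: draw training samples with probability inversely proportional to how common their SSS value is, so the learning database covers rare salinities as well as frequent ones. The synthetic data and bin count below are illustrative, not the SMOS setup:

```python
import numpy as np

rng = np.random.default_rng(0)
sss = rng.normal(35.0, 1.0, 20000)   # synthetic salinity values (psu)

edges = np.linspace(sss.min(), sss.max(), 21)
counts, _ = np.histogram(sss, edges)
counts = np.maximum(counts, 1)       # guard against empty bins
bin_idx = np.clip(np.digitize(sss, edges) - 1, 0, len(counts) - 1)

# sampling weight inversely proportional to the local density
w = 1.0 / counts[bin_idx]
w /= w.sum()
flat = sss[rng.choice(len(sss), size=5000, replace=True, p=w)]

def cv(c):
    """Coefficient of variation of bin counts: lower means flatter."""
    return c.std() / c.mean()

cv_orig = cv(np.histogram(sss, edges)[0].astype(float))
cv_flat = cv(np.histogram(flat, edges)[0].astype(float))
```

The resampled set has a much flatter histogram, so the network is no longer dominated by the most frequent salinity range.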

Performance Trade-Offs in Multi-Processor Approximate Message Passing

Title Performance Trade-Offs in Multi-Processor Approximate Message Passing
Authors Junan Zhu, Ahmad Beirami, Dror Baron
Abstract We consider large-scale linear inverse problems in Bayesian settings. Our general approach follows a recent line of work that applies the approximate message passing (AMP) framework in multi-processor (MP) computational systems by storing and processing a subset of rows of the measurement matrix along with corresponding measurements at each MP node. In each MP-AMP iteration, nodes of the MP system and its fusion center exchange lossily compressed messages pertaining to their estimates of the input. There is a trade-off between the physical costs of the reconstruction process (computation time and communication load) and the reconstruction quality, and it is impossible to minimize all of these costs simultaneously. We pose this minimization as a multi-objective optimization problem (MOP), and study the properties of the best trade-offs (Pareto optimality) in this MOP. We prove that the achievable region of this MOP is convex, and conjecture how the combined cost of computation and communication scales with the desired mean squared error. These properties are verified numerically.
Tasks
Published 2016-04-10
URL http://arxiv.org/abs/1604.02752v1
PDF http://arxiv.org/pdf/1604.02752v1.pdf
PWC https://paperswithcode.com/paper/performance-trade-offs-in-multi-processor
Repo
Framework
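
A single-node AMP iteration, the building block that MP-AMP distributes across processors, can be sketched for noiseless sparse recovery. The problem sizes and the soft-threshold policy below are illustrative choices, not the paper's setup:

```python
import numpy as np

def amp(y, A, iters=50):
    """Basic AMP for sparse recovery: a soft-threshold denoiser plus
    the Onsager correction term in the residual update."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        tau = np.sqrt(np.mean(z ** 2))       # effective noise estimate
        r = x + A.T @ z                      # pseudo-data
        x = np.sign(r) * np.maximum(np.abs(r) - tau, 0.0)  # soft threshold
        z = y - A @ x + (z / m) * np.count_nonzero(x)      # Onsager term
    return x

rng = np.random.default_rng(1)
m, n, k = 120, 300, 12
A = rng.normal(0, 1 / np.sqrt(m), (m, n))
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
y = A @ x0
xhat = amp(y, A)
```

In the MP variant, the matrix-vector products with row blocks of A run on separate nodes and the fusion center aggregates (lossily compressed) partial estimates.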

Engagement Detection in Meetings

Title Engagement Detection in Meetings
Authors Maria Frank, Ghassem Tofighi, Haisong Gu, Renate Fruchter
Abstract Group meetings are frequent business events aimed at developing and conducting project work, such as Big Room design and construction project meetings. To be effective in these meetings, participants need to have an engaged mental state. The mental state of participants, however, is hidden from other participants and thereby difficult to evaluate. Mental state is understood as an inner process of thinking and feeling that is formed of a conglomerate of mental representations and propositional attitudes. There is a need to create transparency of these hidden states to understand, evaluate and influence them. Facilitators need to evaluate the meeting situation and adjust for higher engagement and productivity. This paper presents a framework that defines a spectrum of engagement states and an array of classifiers aimed to detect the engagement state of participants in real time. The Engagement Framework integrates multi-modal information from 2D and 3D imaging and sound. Engagement is detected and evaluated for individual participants and aggregated at the group level. We use empirical data collected at the lab of Konica Minolta, Inc. to test initial applications of this framework. The paper presents examples of the tested engagement classifiers, which are based on research in psychology, communication, and human computer interaction. Their accuracy is illustrated in dyadic interaction for engagement detection. In closing, we discuss the potential extension to complex group collaboration settings and future feedback implementations.
Tasks
Published 2016-08-31
URL http://arxiv.org/abs/1608.08711v1
PDF http://arxiv.org/pdf/1608.08711v1.pdf
PWC https://paperswithcode.com/paper/engagement-detection-in-meetings
Repo
Framework

Fixed Points of Belief Propagation – An Analysis via Polynomial Homotopy Continuation

Title Fixed Points of Belief Propagation – An Analysis via Polynomial Homotopy Continuation
Authors Christian Knoll, Franz Pernkopf, Dhagash Mehta, Tianran Chen
Abstract Belief propagation (BP) is an iterative method to perform approximate inference on arbitrary graphical models. Whether BP converges and if the solution is a unique fixed point depends on both the structure and the parametrization of the model. To understand this dependence it is interesting to find \emph{all} fixed points. In this work, we formulate a set of polynomial equations, the solutions of which correspond to BP fixed points. To solve such a nonlinear system we present the numerical polynomial-homotopy-continuation (NPHC) method. Experiments on binary Ising models and on error-correcting codes show how our method is capable of obtaining all BP fixed points. On Ising models with fixed parameters we show how the structure influences both the number of fixed points and the convergence properties. We further assess the accuracy of the marginals and weighted combinations thereof. Weighting marginals with their respective partition function increases the accuracy in all experiments. Contrary to the conjecture that uniqueness of BP fixed points implies convergence, we find graphs for which BP fails to converge, even though a unique fixed point exists. Moreover, we show that this fixed point gives a good approximation, and the NPHC method is able to obtain this fixed point.
Tasks
Published 2016-05-20
URL http://arxiv.org/abs/1605.06451v3
PDF http://arxiv.org/pdf/1605.06451v3.pdf
PWC https://paperswithcode.com/paper/fixed-points-of-belief-propagation-an
Repo
Framework
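
A BP fixed point is a message set left unchanged by the update equations; the polynomial system the paper solves encodes exactly that condition. As a minimal illustration (plain iteration rather than homotopy continuation, on a hypothetical 3-node Ising cycle with illustrative couplings), one can iterate the updates and check that the fixed-point residual vanishes:

```python
import numpy as np

states = np.array([-1.0, 1.0])
edges_list = [(0, 1), (1, 2), (0, 2)]   # 3-node cycle
J, h = 0.3, 0.1                         # pairwise coupling, local field

# directed messages: m[(i, j)] is a length-2 vector over the state of j
msgs = {(i, j): np.ones(2) / 2
        for a, b in edges_list for (i, j) in [(a, b), (b, a)]}

def neighbors(i):
    return [j for a, b in edges_list
            for j in ((b,) if a == i else (a,) if b == i else ())]

def bp_update(msgs):
    """One parallel sweep of the BP message-update equations."""
    new = {}
    for (i, j) in msgs:
        out = np.zeros(2)
        for sj, xj in enumerate(states):
            total = 0.0
            for si, xi in enumerate(states):
                prod = np.exp(J * xi * xj + h * xi)
                for k in neighbors(i):
                    if k != j:
                        prod *= msgs[(k, i)][si]
                total += prod
            out[sj] = total
        new[(i, j)] = out / out.sum()
    return new

for _ in range(200):
    msgs = bp_update(msgs)

# fixed-point residual: how much one more update changes the messages
residual = max(np.max(np.abs(bp_update(msgs)[e] - msgs[e])) for e in msgs)
```

The NPHC approach replaces this iteration by solving the stationarity equations themselves, which also exposes fixed points that iteration never reaches.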

Early Warning System for Seismic Events in Coal Mines Using Machine Learning

Title Early Warning System for Seismic Events in Coal Mines Using Machine Learning
Authors Robert Bogucki, Jan Lasek, Jan Kanty Milczek, Michal Tadeusiak
Abstract This document describes an approach to the problem of predicting dangerous seismic events in active coal mines up to 8 hours in advance. It was developed as a part of the AAIA’16 Data Mining Challenge: Predicting Dangerous Seismic Events in Active Coal Mines. The solutions presented consist of ensembles of various predictive models trained on different sets of features. The best one achieved a winning score of 0.939 AUC.
Tasks
Published 2016-09-21
URL http://arxiv.org/abs/1609.06957v1
PDF http://arxiv.org/pdf/1609.06957v1.pdf
PWC https://paperswithcode.com/paper/early-warning-system-for-seismic-events-in
Repo
Framework
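
The challenge metric (AUC) and the probability-averaging style of ensembling can both be sketched in a few lines; the rank-sum formula below assumes no tied scores, and the model outputs and blend weights are made up for illustration:

```python
import numpy as np

def auc(y_true, scores):
    """AUC via the Mann-Whitney rank-sum statistic (assumes no ties)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# averaging the predicted event probabilities of two hypothetical models
y = np.array([0, 0, 1, 1])
p_model_a = np.array([0.10, 0.40, 0.35, 0.80])
p_model_b = np.array([0.20, 0.30, 0.60, 0.70])
p_ens = 0.5 * (p_model_a + p_model_b)
```

Because AUC depends only on the ranking of scores, averaging calibrated probabilities from diverse models often smooths out individual ranking mistakes.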

CMA-ES for Hyperparameter Optimization of Deep Neural Networks

Title CMA-ES for Hyperparameter Optimization of Deep Neural Networks
Authors Ilya Loshchilov, Frank Hutter
Abstract Hyperparameters of deep neural networks are often optimized by grid search, random search or Bayesian optimization. As an alternative, we propose to use the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), which is known for its state-of-the-art performance in derivative-free optimization. CMA-ES has some useful invariance properties and is friendly to parallel evaluations of solutions. We provide a toy example comparing CMA-ES and state-of-the-art Bayesian optimization algorithms for tuning the hyperparameters of a convolutional neural network for the MNIST dataset on 30 GPUs in parallel.
Tasks Hyperparameter Optimization
Published 2016-04-25
URL http://arxiv.org/abs/1604.07269v1
PDF http://arxiv.org/pdf/1604.07269v1.pdf
PWC https://paperswithcode.com/paper/cma-es-for-hyperparameter-optimization-of
Repo
Framework
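
The flavor of evolution-strategy hyperparameter tuning can be sketched with a stripped-down (1+1)-ES with step-size adaptation; this is far simpler than full CMA-ES (no covariance matrix, no population), and the quadratic below is only a stand-in for a real validation loss over two hyperparameters:

```python
import numpy as np

def loss(theta):
    """Stand-in for a validation loss over two hyperparameters
    (e.g. log learning rate, log weight decay); purely illustrative."""
    return (theta[0] - 1.0) ** 2 + 10 * (theta[1] + 2.0) ** 2

rng = np.random.default_rng(0)
theta = np.zeros(2)   # current hyperparameter vector
sigma = 1.0           # mutation step size
best = loss(theta)
for _ in range(500):
    cand = theta + sigma * rng.normal(size=2)
    f = loss(cand)
    if f < best:
        theta, best = cand, f
        sigma *= 1.1          # expand step on success (1/5th-rule flavour)
    else:
        sigma *= 0.98         # shrink step on failure
```

CMA-ES extends this idea by adapting a full covariance matrix, which gives it the invariance properties the abstract mentions, and by evaluating a whole population per generation, which is what makes parallel GPU evaluation natural.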

Evaluation System for a Bayesian Optimization Service

Title Evaluation System for a Bayesian Optimization Service
Authors Ian Dewancker, Michael McCourt, Scott Clark, Patrick Hayes, Alexandra Johnson, George Ke
Abstract Bayesian optimization is an elegant solution to the hyperparameter optimization problem in machine learning. Building a reliable and robust Bayesian optimization service requires careful testing methodology and sound statistical analysis. In this talk we will outline our development of an evaluation framework to rigorously test and measure the impact of changes to the SigOpt optimization service. We present an overview of our evaluation system and discuss how this framework empowers our research engineers to confidently and quickly make changes to our core optimization engine.
Tasks Hyperparameter Optimization
Published 2016-05-19
URL http://arxiv.org/abs/1605.06170v1
PDF http://arxiv.org/pdf/1605.06170v1.pdf
PWC https://paperswithcode.com/paper/evaluation-system-for-a-bayesian-optimization
Repo
Framework

Compositional Learning of Relation Path Embedding for Knowledge Base Completion

Title Compositional Learning of Relation Path Embedding for Knowledge Base Completion
Authors Xixun Lin, Yanchun Liang, Fausto Giunchiglia, Xiaoyue Feng, Renchu Guan
Abstract Large-scale knowledge bases have currently reached impressive sizes; however, these knowledge bases are still far from complete. In addition, most of the existing methods for knowledge base completion only consider the direct links between entities, ignoring the vital impact of the consistent semantics of relation paths. In this paper, we study the problem of how to better embed entities and relations of knowledge bases into different low-dimensional spaces by taking full advantage of the additional semantics of relation paths, and we propose a compositional learning model of relation path embedding (RPE). Specifically, with the corresponding relation and path projections, RPE can simultaneously embed each entity into two types of latent spaces. It is also proposed that type constraints could be extended from traditional relation-specific constraints to the new proposed path-specific constraints. The results of experiments show that the proposed model achieves significant and consistent improvements compared with the state-of-the-art algorithms.
Tasks Knowledge Base Completion
Published 2016-11-22
URL http://arxiv.org/abs/1611.07232v4
PDF http://arxiv.org/pdf/1611.07232v4.pdf
PWC https://paperswithcode.com/paper/compositional-learning-of-relation-path
Repo
Framework
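
RPE builds on translation-style embeddings, in which a relation path composes additively: following r1 and then r2 should translate the head roughly onto the tail. A toy numpy sketch with synthetic embeddings (the dimensions and noise level are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16
h = rng.normal(size=dim)     # head entity embedding
r1 = rng.normal(size=dim)    # first relation on the path
r2 = rng.normal(size=dim)    # second relation on the path
t_true = h + r1 + r2 + 0.01 * rng.normal(size=dim)  # path-consistent tail
t_rand = rng.normal(size=dim)                       # unrelated entity

def path_score(h, path, t):
    """Translation-style path score: smaller ||h + sum(path) - t|| is better."""
    return np.linalg.norm(h + np.sum(path, axis=0) - t)

s_true = path_score(h, [r1, r2], t_true)
s_rand = path_score(h, [r1, r2], t_rand)
```

The path-specific projections in RPE refine exactly this kind of additive composition so that an entity can look different depending on the path through which it is reached.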

Link Prediction using Embedded Knowledge Graphs

Title Link Prediction using Embedded Knowledge Graphs
Authors Yelong Shen, Po-Sen Huang, Ming-Wei Chang, Jianfeng Gao
Abstract Since large knowledge bases are typically incomplete, missing facts need to be inferred from observed facts in a task called knowledge base completion. The most successful approaches to this task have typically explored explicit paths through sequences of triples. These approaches have usually resorted to human-designed sampling procedures, since large knowledge graphs produce prohibitively large numbers of possible paths, most of which are uninformative. As an alternative approach, we propose performing a single, short sequence of interactive lookup operations on an embedded knowledge graph which has been trained through end-to-end backpropagation to be an optimized and compressed version of the initial knowledge base. Our proposed model, called Embedded Knowledge Graph Network (EKGN), achieves new state-of-the-art results on popular knowledge base completion benchmarks.
Tasks Knowledge Base Completion, Knowledge Graphs, Link Prediction
Published 2016-11-14
URL http://arxiv.org/abs/1611.04642v5
PDF http://arxiv.org/pdf/1611.04642v5.pdf
PWC https://paperswithcode.com/paper/link-prediction-using-embedded-knowledge
Repo
Framework

A Tight Convex Upper Bound on the Likelihood of a Finite Mixture

Title A Tight Convex Upper Bound on the Likelihood of a Finite Mixture
Authors Elad Mezuman, Yair Weiss
Abstract The likelihood function of a finite mixture model is a non-convex function with multiple local maxima and commonly used iterative algorithms such as EM will converge to different solutions depending on initial conditions. In this paper we ask: is it possible to assess how far we are from the global maximum of the likelihood? Since the likelihood of a finite mixture model can grow unboundedly by centering a Gaussian on a single datapoint and shrinking the covariance, we constrain the problem by assuming that the parameters of the individual models are members of a large discrete set (e.g. estimating a mixture of two Gaussians where the means and variances of both Gaussians are members of a set of a million possible means and variances). For this setting we show that a simple upper bound on the likelihood can be computed using convex optimization and we analyze conditions under which the bound is guaranteed to be tight. This bound can then be used to assess the quality of solutions found by EM (where the final result is projected on the discrete set) or any other mixture estimation algorithm. For any dataset our method allows us to find a finite mixture model together with a dataset-specific bound on how far the likelihood of this mixture is from the global optimum of the likelihood.
Tasks
Published 2016-08-18
URL http://arxiv.org/abs/1608.05275v1
PDF http://arxiv.org/pdf/1608.05275v1.pdf
PWC https://paperswithcode.com/paper/a-tight-convex-upper-bound-on-the-likelihood
Repo
Framework
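
The structural fact the paper exploits can be demonstrated directly: once the component parameters are restricted to a fixed discrete set, the log-likelihood is concave in the mixture weights alone, so weights-only EM updates climb monotonically toward the global optimum of that restricted problem. A small sketch (the grid of candidate Gaussians below is an illustrative choice, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(3, 1.0, 200)])

# fixed discrete candidate components: means on a grid, two widths
means = np.linspace(-5, 5, 21)
sds = np.array([0.5, 1.0])
mu, sd = np.meshgrid(means, sds)
mu, sd = mu.ravel(), sd.ravel()

# candidate component densities at every data point: (n_points, n_components)
P = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

w = np.full(len(mu), 1.0 / len(mu))

def loglik(w):
    return np.sum(np.log(P @ w))

prev = loglik(w)
for _ in range(200):
    resp = P * w                           # responsibilities (unnormalized)
    resp /= resp.sum(axis=1, keepdims=True)
    w = resp.mean(axis=0)                  # EM update over weights only
ll = loglik(w)
```

Because only the weights move, each update is an ascent step on a concave objective, so the final `ll` is (up to numerics) the global optimum over the discrete component set, which is what makes it usable as a reference bound.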

Exploring and measuring non-linear correlations: Copulas, Lightspeed Transportation and Clustering

Title Exploring and measuring non-linear correlations: Copulas, Lightspeed Transportation and Clustering
Authors Gautier Marti, Sebastien Andler, Frank Nielsen, Philippe Donnat
Abstract We propose a methodology to explore and measure the pairwise correlations that exist between variables in a dataset. The methodology leverages copulas for encoding dependence between two variables, state-of-the-art optimal transport for providing a relevant geometry to the copulas, and clustering for summarizing the main dependence patterns found between the variables. Some of the cluster centers can be used to parameterize a novel dependence coefficient which can target or forget specific dependence patterns. Finally, we illustrate and benchmark the methodology on several datasets. Code and numerical experiments are available online for reproducible research.
Tasks
Published 2016-10-30
URL http://arxiv.org/abs/1610.09659v1
PDF http://arxiv.org/pdf/1610.09659v1.pdf
PWC https://paperswithcode.com/paper/exploring-and-measuring-non-linear
Repo
Framework
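
The copula encoding is just a rank transform: map each variable to its normalized ranks so only the dependence pattern survives, then compare the result to the independence copula. The binned L1 distance below is a simple stand-in for the optimal-transport geometry the paper actually uses:

```python
import numpy as np

def empirical_copula(u_raw, v_raw):
    """Map a bivariate sample to its normalized ranks in (0, 1)."""
    n = len(u_raw)
    u = (np.argsort(np.argsort(u_raw)) + 1) / (n + 1)
    v = (np.argsort(np.argsort(v_raw)) + 1) / (n + 1)
    return u, v

def dep_distance(u, v, bins=10):
    """L1 distance between the binned copula density and independence
    (the independence copula has constant density 1 on the unit square)."""
    H, _, _ = np.histogram2d(u, v, bins=bins,
                             range=[[0, 1], [0, 1]], density=True)
    return np.mean(np.abs(H - 1.0))

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
d_dep = dep_distance(*empirical_copula(x, x + 0.3 * rng.normal(size=2000)))
d_ind = dep_distance(*empirical_copula(x, rng.normal(size=2000)))
```

Strongly dependent pairs concentrate mass along the diagonal of the unit square and sit far from independence, while independent pairs stay near the uniform density.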

European Union regulations on algorithmic decision-making and a “right to explanation”

Title European Union regulations on algorithmic decision-making and a “right to explanation”
Authors Bryce Goodman, Seth Flaxman
Abstract We summarize the potential impact that the European Union’s new General Data Protection Regulation will have on the routine use of machine learning algorithms. Slated to take effect as law across the EU in 2018, it will restrict automated individual decision-making (that is, algorithms that make decisions based on user-level predictors) which “significantly affect” users. The law will also effectively create a “right to explanation,” whereby a user can ask for an explanation of an algorithmic decision that was made about them. We argue that while this law will pose large challenges for industry, it highlights opportunities for computer scientists to take the lead in designing algorithms and evaluation frameworks which avoid discrimination and enable explanation.
Tasks Decision Making
Published 2016-06-28
URL http://arxiv.org/abs/1606.08813v3
PDF http://arxiv.org/pdf/1606.08813v3.pdf
PWC https://paperswithcode.com/paper/european-union-regulations-on-algorithmic
Repo
Framework

Neighborhood Mixture Model for Knowledge Base Completion

Title Neighborhood Mixture Model for Knowledge Base Completion
Authors Dat Quoc Nguyen, Kairit Sirts, Lizhen Qu, Mark Johnson
Abstract Knowledge bases are useful resources for many natural language processing tasks; however, they are far from complete. In this paper, we define a novel entity representation as a mixture of its neighborhood in the knowledge base and apply this technique to TransE, a well-known embedding model for knowledge base completion. Experimental results show that the neighborhood information significantly helps to improve the results of the TransE model, leading to better performance than obtained by other state-of-the-art embedding models on three benchmark datasets for triple classification, entity prediction and relation prediction tasks.
Tasks Knowledge Base Completion
Published 2016-06-21
URL http://arxiv.org/abs/1606.06461v3
PDF http://arxiv.org/pdf/1606.06461v3.pdf
PWC https://paperswithcode.com/paper/neighborhood-mixture-model-for-knowledge-base
Repo
Framework
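
The neighborhood-mixture idea can be illustrated on top of the TransE translation assumption (neighbor + relation ≈ entity): a weighted combination of translated neighbors approximates the entity's own embedding. All embeddings, weights, and the noise level below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
e_true = rng.normal(size=dim)                 # the entity to represent
rels = [rng.normal(size=dim) for _ in range(3)]
# each neighbor n_i participates in a triple (n_i, r_i, e),
# so under TransE it satisfies n_i + r_i ≈ e
nbrs = [e_true - r + 0.05 * rng.normal(size=dim) for r in rels]

weights = np.array([0.5, 0.3, 0.2])           # mixture weights, sum to 1
e_mix = sum(w * (n + r) for w, n, r in zip(weights, nbrs, rels))
err = np.linalg.norm(e_mix - e_true)
```

The learned model weights neighbors by usefulness rather than fixing them by hand, but the payoff is the same: the mixture representation stays close to the entity even when its own embedding is poorly constrained by direct triples.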