May 6, 2019

3169 words 15 mins read

Paper Group ANR 315

Characterizing Quantifier Fuzzification Mechanisms: a behavioral guide for practical applications

Title Characterizing Quantifier Fuzzification Mechanisms: a behavioral guide for practical applications
Authors F. Diaz-Hermida, M. Pereira-Fariña, Juan C. Vidal, A. Ramos-Soto
Abstract Important advances have been made in the field of fuzzy quantification. Nevertheless, some problems remain when facing the decision of selecting the most convenient model for a specific application. Several desirable adequacy properties have been proposed in the literature, but theoretical limits impede quantification models from simultaneously fulfilling every adequacy property that has been defined. Moreover, the complexity of the model definitions and adequacy properties makes it very difficult for practitioners to understand the particularities of the different models that have been presented. In this work we present several criteria conceived to help in the process of selecting the most adequate Quantifier Fuzzification Mechanism for a specific practical application. In addition, some of the best-known well-behaved models are compared against this list of criteria. Based on this analysis, guidance for choosing fuzzy quantification models for practical applications is provided.
Tasks
Published 2016-05-11
URL http://arxiv.org/abs/1605.03506v1
PDF http://arxiv.org/pdf/1605.03506v1.pdf
PWC https://paperswithcode.com/paper/characterizing-quantifier-fuzzification
Repo
Framework
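
The abstract stays at the level of selection criteria, so as a concrete reference point, below is a minimal sketch of fuzzy quantifier evaluation using Zadeh's classical relative sigma-count; the membership function for "most" is invented for the example, and none of the paper's well-behaved QFMs are implemented here.

```python
# Illustrative only: Zadeh's relative sigma-count evaluation of a fuzzy
# quantified sentence; the membership function for "most" is made up.

def most(p):
    """Fuzzy quantifier 'most' over a proportion p in [0, 1] (invented shape)."""
    return min(1.0, max(0.0, 2.0 * p - 0.6))

def sigma_count(memberships):
    """Sigma-count: the 'fuzzy cardinality' of a fuzzy set."""
    return sum(memberships)

# Membership degrees of five people in the fuzzy sets 'student' and 'tall'.
student = [1.0, 1.0, 1.0, 1.0, 1.0]
tall = [0.9, 0.8, 0.3, 1.0, 0.6]

intersection = [s * t for s, t in zip(student, tall)]   # product t-norm
proportion = sigma_count(intersection) / sigma_count(student)
print(f"'most students are tall' holds to degree {most(proportion):.2f}")  # 0.84
```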

Continuous multilinguality with language vectors

Title Continuous multilinguality with language vectors
Authors Robert Östling, Jörg Tiedemann
Abstract Most existing models for multilingual natural language processing (NLP) treat language as a discrete category, and make predictions for either one language or the other. In contrast, we propose using continuous vector representations of language. We show that these can be learned efficiently with a character-based neural language model, and used to improve inference about language varieties not seen during training. In experiments with 1303 Bible translations into 990 different languages, we empirically explore the capacity of multilingual language models, and also show that the language vectors capture genetic relationships between languages.
Tasks Language Modelling
Published 2016-12-22
URL http://arxiv.org/abs/1612.07486v2
PDF http://arxiv.org/pdf/1612.07486v2.pdf
PWC https://paperswithcode.com/paper/continuous-multilinguality-with-language-1
Repo
Framework
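
A minimal sketch of the core modeling idea, a character-level language model conditioned on a learned language embedding; PyTorch and the toy dimensions are assumptions, not details from the paper.

```python
# Sketch: a char-level LM whose predictions are conditioned on a continuous
# "language vector" looked up from a shared embedding space.
import torch
import torch.nn as nn

class CharLMWithLanguageVectors(nn.Module):
    def __init__(self, n_chars=256, n_langs=990, char_dim=64, lang_dim=64, hidden=512):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.lang_emb = nn.Embedding(n_langs, lang_dim)   # the "language vectors"
        self.rnn = nn.LSTM(char_dim + lang_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_chars)

    def forward(self, chars, lang_ids):
        # chars: (batch, seq); lang_ids: (batch,)
        c = self.char_emb(chars)
        l = self.lang_emb(lang_ids).unsqueeze(1).expand(-1, chars.size(1), -1)
        h, _ = self.rnn(torch.cat([c, l], dim=-1))  # condition every step on the language
        return self.out(h)                          # next-character logits

model = CharLMWithLanguageVectors()
logits = model(torch.randint(0, 256, (2, 50)), torch.tensor([3, 7]))
print(logits.shape)  # torch.Size([2, 50, 256])
```

Because the language vectors live in a continuous space, inference for an unseen variety amounts to estimating its vector rather than adding a new discrete category.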

Multi-Person Tracking by Multicut and Deep Matching

Title Multi-Person Tracking by Multicut and Deep Matching
Authors Siyu Tang, Bjoern Andres, Mykhaylo Andriluka, Bernt Schiele
Abstract In [1], we proposed a graph-based formulation that links and clusters person hypotheses over time by solving a minimum cost subgraph multicut problem. In this paper, we modify and extend [1] in three ways: 1) We introduce a novel local pairwise feature based on local appearance matching that is robust to partial occlusion and camera motion. 2) We perform extensive experiments to compare different pairwise potentials and to analyze the robustness of the tracking formulation. 3) We consider a plain multicut problem and remove outlying clusters from its solution. This allows us to employ an efficient primal feasible optimization algorithm that is not applicable to the subgraph multicut problem of [1]. Unlike the branch-and-cut algorithm used there, the efficient algorithm used here is applicable to long videos and many detections. Together with the novel feature, it eliminates the need for the intermediate tracklet representation of [1]. We demonstrate the effectiveness of our overall approach on the MOT16 benchmark [2], achieving state-of-the-art performance.
Tasks
Published 2016-08-17
URL http://arxiv.org/abs/1608.05404v1
PDF http://arxiv.org/pdf/1608.05404v1.pdf
PWC https://paperswithcode.com/paper/multi-person-tracking-by-multicut-and-deep
Repo
Framework
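
For intuition about the graph decomposition view, here is a toy greedy merging heuristic over pairwise same-person scores; the paper actually solves a minimum cost (subgraph) multicut problem with a primal feasible algorithm, which this sketch does not implement.

```python
# Illustrative only: greedily cluster detections using pairwise potentials
# (positive score = likely the same person across frames).

def greedy_cluster(n_detections, pair_scores, threshold=0.0):
    """pair_scores: dict {(i, j): score}. Returns a cluster id per detection."""
    cluster = list(range(n_detections))
    # Merge strongest pairs first; stop once scores favor a cut.
    for (i, j), s in sorted(pair_scores.items(), key=lambda kv: -kv[1]):
        if s <= threshold:
            break
        ci, cj = cluster[i], cluster[j]
        if ci != cj:
            cluster = [ci if c == cj else c for c in cluster]
    return cluster

scores = {(0, 1): 2.3, (1, 2): 1.1, (2, 3): -0.8, (0, 3): -1.5}
print(greedy_cluster(4, scores))  # [0, 0, 0, 3]: detections 0-2 form one track
```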

Learning Support Correlation Filters for Visual Tracking

Title Learning Support Correlation Filters for Visual Tracking
Authors Wangmeng Zuo, Xiaohe Wu, Liang Lin, Lei Zhang, Ming-Hsuan Yang
Abstract Sampling and budgeting training examples are two essential factors in tracking algorithms based on support vector machines (SVMs), as a trade-off between accuracy and efficiency. Recently, the circulant matrix formed by dense sampling of translated image patches has been utilized in correlation filters for fast tracking. In this paper, we derive an equivalent formulation of an SVM model with a circulant matrix expression and present an efficient alternating optimization method for visual tracking. We incorporate the discrete Fourier transform into the proposed alternating optimization process and pose the tracking problem as an iterative learning of support correlation filters (SCFs), which find the globally optimal solution with real-time performance. For a given circulant data matrix with $n^2$ samples of size $n \times n$, the computational complexity of the proposed algorithm is $O(n^2 \log n)$, whereas that of the standard SVM-based approaches is at least $O(n^4)$. In addition, we extend the SCF-based tracking algorithm with multi-channel features, kernel functions, and scale-adaptive approaches to further improve the tracking performance. Experimental results on a large benchmark dataset show that the proposed SCF-based algorithms perform favorably against the state-of-the-art tracking methods in terms of accuracy and speed.
Tasks Visual Tracking
Published 2016-01-22
URL http://arxiv.org/abs/1601.06032v1
PDF http://arxiv.org/pdf/1601.06032v1.pdf
PWC https://paperswithcode.com/paper/learning-support-correlation-filters-for
Repo
Framework
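
The complexity claim rests on the circulant/DFT structure, which the classic correlation-filter update below illustrates; this is ridge-regression-style filter learning in the Fourier domain, not the paper's SVM alternating optimization.

```python
# Sketch of the circulant trick: with dense cyclic shifts of an n x n patch,
# filter learning diagonalizes in the Fourier domain and costs O(n^2 log n).
import numpy as np

def train_correlation_filter(patch, target, lam=1e-2):
    """patch, target: (n, n) arrays; target is e.g. a Gaussian peak."""
    X = np.fft.fft2(patch)
    Y = np.fft.fft2(target)
    # Every DFT frequency decouples for circulant data: element-wise solve.
    return (np.conj(X) * Y) / (np.conj(X) * X + lam)

def respond(filter_hat, patch):
    return np.real(np.fft.ifft2(filter_hat * np.fft.fft2(patch)))

n = 64
xs, ys = np.meshgrid(np.arange(n), np.arange(n))
target = np.exp(-((xs - n // 2) ** 2 + (ys - n // 2) ** 2) / (2 * 2.0 ** 2))
patch = np.random.rand(n, n)
H = train_correlation_filter(patch, target)
resp = respond(H, patch)
print(np.unravel_index(resp.argmax(), resp.shape))  # response peaks near (32, 32)
```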

Detecting Relative Anomaly

Title Detecting Relative Anomaly
Authors Richard Neuberg, Yixin Shi
Abstract System states that are anomalous from the perspective of a domain expert occur frequently in some anomaly detection problems. The performance of commonly used unsupervised anomaly detection methods may suffer in that setting, because they use frequency as a proxy for anomaly. We propose a novel concept for anomaly detection, called relative anomaly detection. It is tailored to be robust towards anomalies that occur frequently, by taking into account their location relative to the most typical observations. The approaches we develop are computationally feasible even for large data sets, and they allow real-time detection. We illustrate using data sets of potential scraping attempts and Wi-Fi channel utilization, both from Google, Inc.
Tasks Anomaly Detection, Unsupervised Anomaly Detection
Published 2016-05-12
URL http://arxiv.org/abs/1605.03805v2
PDF http://arxiv.org/pdf/1605.03805v2.pdf
PWC https://paperswithcode.com/paper/detecting-relative-anomaly
Repo
Framework
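
A minimal sketch of the relative-anomaly intuition, scoring points by distance to the most typical observations instead of by rarity; the typicality measure and data here are invented for illustration and do not reproduce the paper's estimator.

```python
# Illustrative only: a frequent but atypical cluster still scores as anomalous,
# because the score is relative to the most typical observations.
import numpy as np

def relative_anomaly_scores(X, n_typical=10):
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    typicality = -D.mean(axis=1)            # central points are "most typical"
    typical_idx = np.argsort(typicality)[-n_typical:]
    return D[:, typical_idx].min(axis=1)    # distance to the nearest typical point

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, (200, 2))
frequent_anomaly = rng.normal(6, 0.2, (50, 2))   # frequent, yet far from typical
X = np.vstack([normal, frequent_anomaly])
scores = relative_anomaly_scores(X)
print(scores[:200].mean(), scores[200:].mean())  # anomalies score much higher
```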

Harnessing disordered quantum dynamics for machine learning

Title Harnessing disordered quantum dynamics for machine learning
Authors Keisuke Fujii, Kohei Nakajima
Abstract Quantum computers have amazing potential for fast information processing. However, the realisation of a digital quantum computer is still a challenging problem requiring highly accurate controls and key application strategies. Here we propose a novel platform, quantum reservoir computing, that addresses these issues by exploiting for machine learning the natural quantum dynamics ubiquitous in today's laboratories. In this framework, nonlinear dynamics, including classical chaos, can be universally emulated in quantum systems. A number of numerical experiments show that quantum systems consisting of at most seven qubits possess computational capabilities comparable to conventional recurrent neural networks of 500 nodes. This discovery opens up a new paradigm for information processing with artificial intelligence powered by quantum physics.
Tasks
Published 2016-02-26
URL http://arxiv.org/abs/1602.08159v2
PDF http://arxiv.org/pdf/1602.08159v2.pdf
PWC https://paperswithcode.com/paper/harnessing-disordered-quantum-dynamics-for
Repo
Framework
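
A toy simulation of the quantum reservoir idea, assuming NumPy/SciPy: inject a classical input into one qubit of a small disordered system, let fixed random unitary dynamics mix it, read out Pauli-Z expectations, and train only a linear readout on a one-step memory task. The Hamiltonian, input encoding, and task are invented for illustration.

```python
# Toy quantum reservoir (invented parameters): 4 qubits, a fixed random
# Hamiltonian, input injected by overwriting one qubit, linear readout only.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n_qubits, dim = 4, 16

A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
U = expm(-1j * 1.5 * (A + A.conj().T) / 2)              # fixed disordered dynamics

def z_op(q):                                            # Pauli-Z on qubit q
    ops = [np.eye(2)] * n_qubits
    ops[q] = np.diag([1.0, -1.0])
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out
Zs = [z_op(q) for q in range(n_qubits)]

def inject(rho, s):
    """Overwrite qubit 0 with sqrt(1-s)|0> + sqrt(s)|1>, keeping the rest."""
    rho_rest = np.trace(rho.reshape(2, dim // 2, 2, dim // 2), axis1=0, axis2=2)
    psi = np.array([np.sqrt(1 - s), np.sqrt(s)])
    return np.kron(np.outer(psi, psi), rho_rest)

rho = np.eye(dim) / dim
inputs = rng.random(300)
features = []
for s in inputs:
    rho = U @ inject(rho, s) @ U.conj().T               # one reservoir step
    features.append([np.real(np.trace(Z @ rho)) for Z in Zs])
X, y = np.array(features), np.roll(inputs, 1)           # task: recall previous input

w = np.linalg.lstsq(X[10:], y[10:], rcond=None)[0]      # train the linear readout
print(np.corrcoef(X[10:] @ w, y[10:])[0, 1])            # positive: the reservoir retains memory
```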

Bayesian Optimization with Shape Constraints

Title Bayesian Optimization with Shape Constraints
Authors Michael Jauch, Víctor Peña
Abstract In typical applications of Bayesian optimization, minimal assumptions are made about the objective function being optimized. This is true even when researchers have prior information about the shape of the function with respect to one or more of its arguments. We make the case that shape constraints are often appropriate in at least two important application areas of Bayesian optimization: (1) hyperparameter tuning of machine learning algorithms and (2) decision analysis with utility functions. We describe a methodology for incorporating a variety of shape constraints within the usual Bayesian optimization framework and present positive results from simple applications which suggest that Bayesian optimization with shape constraints is a promising topic for further research.
Tasks
Published 2016-12-28
URL http://arxiv.org/abs/1612.08915v1
PDF http://arxiv.org/pdf/1612.08915v1.pdf
PWC https://paperswithcode.com/paper/bayesian-optimization-with-shape-constraints
Repo
Framework
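
For context, a minimal sketch of the unconstrained Bayesian optimization loop (GP surrogate plus expected improvement on a grid) that the paper starts from; the shape-constrained GP fit that is the paper's contribution would replace the plain fit below, and the objective is a hypothetical 1-D function.

```python
# Vanilla BO loop: fit a GP, maximize expected improvement, evaluate, repeat.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):                      # hypothetical 1-D objective (maximize)
    return -(x - 0.7) ** 2 + 0.05 * np.sin(20 * x)

grid = np.linspace(0, 1, 200)[:, None]
X = [[0.1], [0.9]]
y = [objective(0.1), objective(0.9)]

for _ in range(10):
    gp = GaussianProcessRegressor(alpha=1e-6, normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    best = max(y)
    z = (mu - best) / np.maximum(sd, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_next = float(grid[ei.argmax(), 0])
    X.append([x_next])
    y.append(objective(x_next))

print(max(y), X[int(np.argmax(y))])    # converges near the optimum at x = 0.7
```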

Adaptive Design of Experiments for Conservative Estimation of Excursion Sets

Title Adaptive Design of Experiments for Conservative Estimation of Excursion Sets
Authors Dario Azzimonti, David Ginsbourger, Clément Chevalier, Julien Bect, Yann Richet
Abstract We consider the problem of estimating the set of all inputs that lead a system to some particular behavior. The system is modeled by an expensive-to-evaluate function, such as a computer experiment, and we are interested in its excursion set, i.e., the set of points where the function takes values above or below some prescribed threshold. The objective function is emulated with a Gaussian Process (GP) model based on an initial design of experiments enriched with evaluation results at (batch-)sequentially determined input points. The GP model provides conservative estimates for the excursion set, which control false positives while minimizing false negatives. We introduce adaptive strategies that sequentially select new evaluations of the function by reducing the uncertainty on conservative estimates. Following the Stepwise Uncertainty Reduction approach, we obtain new evaluations by minimizing adapted criteria. Tractable formulae for the conservative criteria are derived, which allow more convenient optimization. The method is benchmarked on random functions generated under the model assumptions in different scenarios of noise and batch size. We then apply it to a reliability engineering test case. Overall, the proposed strategy of minimizing false negatives in conservative estimation achieves competitive performance in terms of both model-based and model-free indicators.
Tasks
Published 2016-11-22
URL https://arxiv.org/abs/1611.07256v6
PDF https://arxiv.org/pdf/1611.07256v6.pdf
PWC https://paperswithcode.com/paper/adaptive-design-of-experiments-for
Repo
Framework
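
A pointwise simplification of the loop described: fit a GP, declare excursion only where the posterior is confident, and sample where membership is most ambiguous. The paper's conservative estimates control the error jointly over the set and use dedicated SUR criteria, neither of which is implemented here.

```python
# Sketch: GP-based excursion set estimation with a pointwise conservative rule.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

f = lambda x: np.sin(3 * x) + 0.5 * x          # toy stand-in for the expensive function
T, alpha = 0.8, 0.05                           # excursion threshold, error level
grid = np.linspace(0, 3, 300)[:, None]

X = list(np.linspace(0, 3, 5)[:, None])        # initial design
y = [f(x[0]) for x in X]
for _ in range(15):
    gp = GaussianProcessRegressor(alpha=1e-6, normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    p_exc = 1 - norm.cdf((T - mu) / np.maximum(sd, 1e-9))   # P(f(x) > T | data)
    estimate = grid[p_exc >= 1 - alpha, 0]     # conservative: only confident points
    x_next = grid[np.abs(p_exc - 0.5).argmin()]             # most ambiguous membership
    X.append(x_next)
    y.append(f(x_next[0]))

print(len(estimate), "of 300 grid points in the conservative excursion estimate")
```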

Attentive Contexts for Object Detection

Title Attentive Contexts for Object Detection
Authors Jianan Li, Yunchao Wei, Xiaodan Liang, Jian Dong, Tingfa Xu, Jiashi Feng, Shuicheng Yan
Abstract Modern deep neural network based object detection methods typically classify candidate proposals using their interior features. However, the global and local surrounding contexts that are believed to be valuable for object detection are not yet fully exploited by existing methods. In this work, we take a step towards understanding what constitutes a robust practice for extracting and utilizing contextual information to facilitate object detection. Specifically, we consider the following two questions: “how to identify useful global contextual information for detecting a certain object?” and “how to exploit local context surrounding a proposal for better inferring its contents?”. We provide preliminary answers to these questions through developing a novel Attention to Context Convolution Neural Network (AC-CNN) based object detection model. AC-CNN effectively incorporates global and local contextual information into the region-based CNN (e.g., Fast R-CNN) detection model and provides better object detection performance. It consists of one attention-based global contextualized (AGC) sub-network and one multi-scale local contextualized (MLC) sub-network. To capture global context, the AGC sub-network recurrently generates an attention map for an input image to highlight useful global contextual locations, through multiple stacked Long Short-Term Memory (LSTM) layers. To capture surrounding local context, the MLC sub-network exploits both the inside and outside contextual information of each specific proposal at multiple scales. The global and local context are then fused together to make the final detection decision. Extensive experiments on PASCAL VOC 2007 and VOC 2012 clearly demonstrate the superiority of the proposed AC-CNN over well-established baselines. In particular, AC-CNN outperforms the popular Fast R-CNN by 2.0% and 2.2% mAP on VOC 2007 and VOC 2012, respectively.
Tasks Object Detection
Published 2016-03-24
URL http://arxiv.org/abs/1603.07415v1
PDF http://arxiv.org/pdf/1603.07415v1.pdf
PWC https://paperswithcode.com/paper/attentive-contexts-for-object-detection
Repo
Framework
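
A highly simplified sketch of the fusion idea, assuming PyTorch/torchvision: an attention-pooled global context vector concatenated with inner and enlarged ROI features per proposal. The toy one-layer attention stands in for the paper's stacked-LSTM AGC sub-network, and the backbone is replaced by random features.

```python
# Toy stand-in for AC-CNN's AGC + MLC fusion: global attention pooling plus
# multi-scale (inner/enlarged) ROI features, concatenated for classification.
import torch
import torch.nn as nn
import torchvision.ops as ops

class ContextFusionHead(nn.Module):
    def __init__(self, c=256, n_classes=21):
        super().__init__()
        self.att = nn.Conv2d(c, 1, 1)          # toy attention (the paper stacks LSTMs)
        self.cls = nn.Linear(c + 2 * c * 7 * 7, n_classes)

    def forward(self, fmap, boxes):
        # Global context: attention-weighted average over all locations.
        a = torch.softmax(self.att(fmap).flatten(2), dim=-1)     # (1, 1, H*W)
        g = (fmap.flatten(2) * a).sum(-1).squeeze(0)             # (C,)
        # Local context: ROI features from each proposal and an enlarged version.
        ctr = (boxes[:, :2] + boxes[:, 2:]) / 2
        half = (boxes[:, 2:] - boxes[:, :2]) / 2 * 1.5
        outer_boxes = torch.cat([ctr - half, ctr + half], dim=1)
        inner = ops.roi_align(fmap, [boxes], output_size=7)
        outer = ops.roi_align(fmap, [outer_boxes], output_size=7)
        local = torch.cat([inner, outer], dim=1).flatten(1)      # (N, 2*C*49)
        fused = torch.cat([g.expand(len(boxes), -1), local], dim=1)
        return self.cls(fused)                                   # per-proposal logits

fmap = torch.randn(1, 256, 32, 32)                  # stand-in backbone features
boxes = torch.tensor([[4.0, 4.0, 12.0, 12.0]])      # one proposal (x1, y1, x2, y2)
print(ContextFusionHead()(fmap, boxes).shape)       # torch.Size([1, 21])
```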

A Machine learning approach for Shape From Shading

Title A Machine learning approach for Shape From Shading
Authors Lyes Abada, Saliha Aouat
Abstract The aim of the Shape From Shading (SFS) problem is to reconstruct the relief of an object from a single gray level image. In this paper we present a new method to solve the SFS problem using machine learning. Our approach belongs to the local resolution category. The orientation of each part of the object is represented by the vector perpendicular to the surface (the normal vector). This vector is defined by two angles, SLANT and TILT, such that the TILT is the angle between the normal vector and the Z-axis, and the SLANT is the angle between the X-axis and the projection of the normal onto the plane. The TILT can be determined from the gray level; the unknown is the SLANT. To calculate the normal of each part of the surface (pixel), a supervised machine learning method is proposed. The method is divided into three steps: the first is the preparation of training data from 3D mathematical functions and synthetic objects; the second is the creation of a database of examples from 3D objects (off-line process); the third is the application to test images (on-line process). The idea is to find, for each pixel of the test image, the most similar element in the example database using a similarity value.
Tasks
Published 2016-07-12
URL http://arxiv.org/abs/1607.03284v1
PDF http://arxiv.org/pdf/1607.03284v1.pdf
PWC https://paperswithcode.com/paper/a-machine-learning-approach-for-shape-from
Repo
Framework
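
A minimal sketch of the off-line/on-line nearest-neighbor pipeline with invented Lambertian training data; in this toy setup the gray level pins down one of the two angles, mirroring the abstract's point that the other must be inferred.

```python
# Illustrative only: example database of (patch, normal) pairs, then
# nearest-neighbor lookup for a test patch.
import numpy as np

def make_database(n=5000, rng=np.random.default_rng(0)):
    """Off-line step (hypothetical data): random unit normals rendered with
    Lambertian shading under an overhead light, stored as 3x3 patches."""
    light = np.array([0.0, 0.0, 1.0])
    normals = rng.normal(size=(n, 3))
    normals[:, 2] = np.abs(normals[:, 2])            # normals face the camera
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    gray = normals @ light                           # Lambertian: gray level = n . L
    patches = gray[:, None] + rng.normal(0, 0.01, (n, 9))
    return patches, normals

def estimate_normal(test_patch, patches, normals):
    """On-line step: return the normal of the most similar database example."""
    i = np.argmin(np.linalg.norm(patches - test_patch, axis=1))
    return normals[i]

patches, normals = make_database()
true_n = np.array([0.3, 0.1, 0.949])                 # ground-truth unit normal
test_patch = np.full(9, true_n[2])                   # its gray level under the light
est = estimate_normal(test_patch, patches, normals)
# The gray level pins down the angle to the Z-axis; the in-plane angle stays
# ambiguous, which is exactly the SLANT/TILT ambiguity the paper discusses.
print(est[2], "vs", true_n[2])
```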

Multilevel Monte Carlo methods for the approximation of invariant measures of stochastic differential equations

Title Multilevel Monte Carlo methods for the approximation of invariant measures of stochastic differential equations
Authors Michael B. Giles, Mateusz B. Majka, Lukasz Szpruch, Sebastian Vollmer, Konstantinos Zygalakis
Abstract We develop a framework that allows the use of the multi-level Monte Carlo (MLMC) methodology (Giles, 2015) to calculate expectations with respect to the invariant measure of an ergodic SDE. In that context, we study the (over-damped) Langevin equations with a strongly concave potential. We show that, when appropriate contracting couplings for the numerical integrators are available, one can obtain a uniform-in-time estimate of the MLMC variance, in contrast to the majority of the results in the MLMC literature. As a consequence, a root mean square error of $\mathcal{O}(\varepsilon)$ is achieved with $\mathcal{O}(\varepsilon^{-2})$ complexity, on par with Markov Chain Monte Carlo (MCMC) methods, which however can be computationally intensive when applied to large data sets. Finally, we present a multi-level version of the recently introduced Stochastic Gradient Langevin Dynamics (SGLD) method (Welling and Teh, 2011) built for large-dataset applications. We show that this is the first stochastic gradient MCMC method with complexity $\mathcal{O}(\varepsilon^{-2}|\log \varepsilon|^{3})$, in contrast to the complexity $\mathcal{O}(\varepsilon^{-3})$ of currently available methods. Numerical experiments confirm our theoretical findings.
Tasks
Published 2016-05-04
URL https://arxiv.org/abs/1605.01384v4
PDF https://arxiv.org/pdf/1605.01384v4.pdf
PWC https://paperswithcode.com/paper/multi-level-monte-carlo-methods-for-a-class
Repo
Framework
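
A minimal MLMC sketch (not the paper's coupled integrators or SGLD variant) for an invariant-measure expectation: Euler schemes at geometrically refined step sizes, with fine and coarse paths coupled through shared Brownian increments, telescoped across levels. The target is a standard Gaussian, so the answer E[X^2] = 1 is known.

```python
# MLMC for E[f(X)] under the invariant measure of the overdamped Langevin SDE
# dX = -U'(X) dt + sqrt(2) dW with the quadratic potential U(x) = x^2 / 2,
# whose invariant measure is the standard Gaussian.
import numpy as np

rng = np.random.default_rng(0)
grad_U = lambda x: x
f = lambda x: x ** 2
T, h0 = 10.0, 0.5                                  # horizon, coarsest step size

def coupled_estimate(level, n_paths):
    """Mean of f(fine) - f(coarse); both paths share the same Brownian motion."""
    hf = h0 / 2 ** level
    xf = np.zeros(n_paths)
    xc = np.zeros(n_paths)
    dW_prev = None
    for k in range(int(T / hf)):
        dW = rng.normal(0, np.sqrt(hf), n_paths)
        xf += -grad_U(xf) * hf + np.sqrt(2) * dW
        if level > 0 and k % 2 == 1:               # coarse step uses summed increments
            xc += -grad_U(xc) * 2 * hf + np.sqrt(2) * (dW + dW_prev)
        dW_prev = dW
    return np.mean(f(xf) - (f(xc) if level > 0 else 0.0))

# Telescoping sum: coarse estimate plus coupled level corrections.
estimate = sum(coupled_estimate(l, 20000 // 2 ** l + 1000) for l in range(4))
print(estimate)   # close to the true value 1.0 (up to remaining bias at the finest level)
```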

Process Monitoring of Extrusion Based 3D Printing via Laser Scanning

Title Process Monitoring of Extrusion Based 3D Printing via Laser Scanning
Authors Matthias Faes, Wim Abbeloos, Frederik Vogeler, Hans Valkenaers, Kurt Coppens, Toon Goedemé, Eleonora Ferraris
Abstract Extrusion based 3D Printing (E3DP) is an Additive Manufacturing (AM) technique that extrudes thermoplastic polymer in order to build up components using a layerwise approach. AM typically requires long production times in comparison to mass production processes such as Injection Molding. Failures during the AM process are often only noticed after build completion and frequently lead to part rejection because of dimensional inaccuracy or lack of mechanical performance, resulting in an important loss of time and material. A solution to improve the accuracy and robustness of a manufacturing technology is the integration of sensors to monitor and control process state-variables online. In this way, errors can be rapidly detected and possibly compensated at an early stage. To achieve this, we integrated a modular 2D laser triangulation scanner into an E3DP machine and analyzed the feedback signals. A 2D laser triangulation scanner was selected here owing to its very compact size, achievable accuracy, and ability to capture geometrical 3D data. Thus, our implemented system is able to provide both quantitative and qualitative information. In this work, first steps towards the development of a quality control loop for E3DP processes are also presented and opportunities are discussed.
Tasks
Published 2016-12-07
URL http://arxiv.org/abs/1612.02219v1
PDF http://arxiv.org/pdf/1612.02219v1.pdf
PWC https://paperswithcode.com/paper/process-monitoring-of-extrusion-based-3d
Repo
Framework
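
A minimal sketch, with hypothetical tolerances and simulated data, of the kind of online check a layer-wise height scan enables: compare the measured profile of each layer against the nominal height and flag local under-extrusion before the build completes.

```python
# Illustrative only: nominal layer height and tolerance are invented values.
import numpy as np

NOMINAL_LAYER_HEIGHT_MM = 0.2   # assumed process setting
TOLERANCE_MM = 0.05

def check_layer(scan_heights_mm, layer_index):
    """scan_heights_mm: heights of one layer measured by the profile scanner."""
    deviation = scan_heights_mm - NOMINAL_LAYER_HEIGHT_MM * (layer_index + 1)
    bad = np.abs(deviation) > TOLERANCE_MM
    if bad.mean() > 0.05:       # more than 5% of the profile out of tolerance
        return f"layer {layer_index}: {bad.mean():.0%} out of tolerance -> intervene"
    return f"layer {layer_index}: OK (max |dev| = {np.abs(deviation).max():.3f} mm)"

rng = np.random.default_rng(2)
scan = np.full(500, 0.6) + rng.normal(0, 0.01, 500)
scan[200:260] -= 0.12           # simulated local under-extrusion
print(check_layer(scan, 2))     # third layer, nominal cumulative height 0.6 mm
```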

Parameterized Principal Component Analysis

Title Parameterized Principal Component Analysis
Authors Ajay Gupta, Adrian Barbu
Abstract When modeling multivariate data, one might have an extra parameter of contextual information that could be used to treat some observations as more similar to others. For example, images of faces can vary by age, and one would expect the face of a 40-year-old to be more similar to the face of a 30-year-old than to a baby face. We introduce a novel manifold approximation method, parameterized principal component analysis (PPCA), which models data with linear subspaces that change continuously according to the extra parameter of contextual information (e.g., age), instead of ad-hoc atlases. Special care has been taken in the loss function and the optimization method to encourage smoothly changing subspaces across the parameter values. The approach ensures that each observation’s projection will share information with observations that have similar parameter values, but not with observations that have large parameter differences. We tested PPCA on artificial data based on known, smooth functions of an added parameter, as well as on three real datasets with different types of parameters. We compared PPCA to PCA, sparse PCA, and independent principal component analysis (IPCA), which groups observations by their parameter values and projects each group using PCA with no sharing of information between groups. PPCA recovers the known functions with less error and projects the datasets’ test set observations with consistently less reconstruction error than IPCA does. In some cases where the manifold is truly nonlinear, PCA outperforms all the other manifold approximation methods compared.
Tasks
Published 2016-08-16
URL http://arxiv.org/abs/1608.04695v2
PDF http://arxiv.org/pdf/1608.04695v2.pdf
PWC https://paperswithcode.com/paper/parameterized-principal-component-analysis
Repo
Framework
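
A minimal sketch of the setting (not the paper's joint optimization): for a query parameter value, weight observations by a kernel on parameter distance and run weighted PCA, so the recovered subspace drifts smoothly with the parameter; IPCA would instead use hard groups with no sharing.

```python
# Illustrative only: kernel-weighted local PCA as a stand-in for PPCA's
# smoothly parameter-dependent subspaces.
import numpy as np

def local_subspace(X, params, theta, bandwidth=5.0, n_components=2):
    w = np.exp(-0.5 * ((params - theta) / bandwidth) ** 2)   # kernel weights
    Xc = X - np.average(X, axis=0, weights=w)
    _, _, Vt = np.linalg.svd(Xc * np.sqrt(w)[:, None], full_matrices=False)
    return Vt[:n_components]                                 # local principal directions

rng = np.random.default_rng(3)
ages = rng.uniform(20, 60, 500)
# Toy data: the dominant direction rotates slowly with the parameter (age).
angle = ages * np.pi / 180
X = (rng.normal(0, 1, 500)[:, None]
     * np.stack([np.cos(angle), np.sin(angle)], 1)
     + rng.normal(0, 0.05, (500, 2)))
print(local_subspace(X, ages, theta=30.0)[0])   # close to (cos 30deg, sin 30deg) up to sign
```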

NPCs Vote! Changing Voter Reactions Over Time Using the Extreme AI Personality Engine

Title NPCs Vote! Changing Voter Reactions Over Time Using the Extreme AI Personality Engine
Authors Jeffrey Georgeson
Abstract Can non-player characters have human-realistic personalities, changing over time depending on input from those around them? And can they have different reactions and thoughts about different people? Using Extreme AI, a psychology-based personality engine using the Five Factor model of personality, I answer these questions by creating personalities for 100 voters and allowing them to react to two politicians to see if the NPC voters’ choice of candidate develops in a realistic-seeming way, based on initial and changing personality facets and on their differing feelings toward the politicians (in this case, across liking, trusting, and feeling affiliated with the candidates). After 16 test runs, the voters did indeed change their attitudes and feelings toward the candidates in different and yet generally realistic ways, and even changed their attitudes about other issues based on what a candidate extolled.
Tasks
Published 2016-09-17
URL http://arxiv.org/abs/1609.05315v1
PDF http://arxiv.org/pdf/1609.05315v1.pdf
PWC https://paperswithcode.com/paper/npcs-vote-changing-voter-reactions-over-time
Repo
Framework
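
A minimal sketch with an invented update rule (the Extreme AI engine itself is not reproduced here) of the simulation pattern described: voters carrying Five Factor traits update per-candidate feelings after each statement, at a rate modulated by a personality facet.

```python
# Illustrative only: the update rule and rates are invented for the sketch.
import random

class Voter:
    def __init__(self, rng):
        self.openness = rng.random()                      # one of the five factors
        self.feelings = {}                                # candidate -> liking in [-1, 1]

    def hear(self, candidate, statement_appeal):
        """statement_appeal in [-1, 1]: how much this statement resonates."""
        old = self.feelings.get(candidate, 0.0)
        rate = 0.1 + 0.4 * self.openness                  # open voters shift faster
        self.feelings[candidate] = old + rate * (statement_appeal - old)

    def vote(self):
        return max(self.feelings, key=self.feelings.get)

rng = random.Random(4)
voters = [Voter(rng) for _ in range(100)]
for v in voters:
    for _ in range(16):                                   # 16 rounds, as in the tests
        v.hear("A", rng.uniform(-1, 1))
        v.hear("B", rng.uniform(0, 1))
print(sum(v.vote() == "B" for v in voters), "of 100 voters prefer candidate B")
```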

False Discovery Rate Control and Statistical Quality Assessment of Annotators in Crowdsourced Ranking

Title False Discovery Rate Control and Statistical Quality Assessment of Annotators in Crowdsourced Ranking
Authors Qianqian Xu, Jiechao Xiong, Xiaochun Cao, Yuan Yao
Abstract With the rapid growth of crowdsourcing platforms, it has become easy and relatively inexpensive to collect a dataset labeled by multiple annotators in a short time. However, due to the lack of control over the quality of the annotators, some abnormal annotators may be affected by position bias, which can potentially degrade the quality of the final consensus labels. In this paper we introduce a statistical framework to model and detect annotators' position bias while controlling the false discovery rate (FDR), without prior knowledge of the number of biased annotators. Controlling the FDR keeps the expected fraction of false discoveries among all discoveries from being too high, assuring that most of the discoveries are indeed true and replicable. The key technical development relies on new knockoff filters adapted to our problem and new algorithms based on Inverse Scale Space dynamics, whose discretization is potentially suitable for large-scale crowdsourcing data analysis. Our studies are supported by experiments with both simulated examples and real-world data. The proposed framework provides a useful tool for quantitatively studying annotators' abnormal behavior in crowdsourcing data arising in machine learning, sociology, computer vision, multimedia, etc.
Tasks
Published 2016-05-19
URL http://arxiv.org/abs/1605.05860v3
PDF http://arxiv.org/pdf/1605.05860v3.pdf
PWC https://paperswithcode.com/paper/false-discovery-rate-control-and-statistical
Repo
Framework
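
As a simplified illustration of FDR control in this setting, here is the classical Benjamini-Hochberg procedure applied to hypothetical per-annotator bias p-values; the paper instead develops knockoff filters and Inverse Scale Space algorithms, but the goal, bounding the expected fraction of falsely flagged annotators, is the same.

```python
# Illustrative only: BH step-up procedure, not the paper's knockoff filters.
import numpy as np

def benjamini_hochberg(p_values, q=0.1):
    """Return indices of discoveries with FDR controlled at level q."""
    p = np.asarray(p_values)
    order = np.argsort(p)
    thresholds = q * np.arange(1, len(p) + 1) / len(p)
    below = p[order] <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    return order[:k]        # reject the k smallest p-values

rng = np.random.default_rng(5)
# Hypothetical p-values: 90 unbiased annotators, 10 with strong position bias.
p_values = np.concatenate([rng.uniform(0, 1, 90), rng.uniform(0, 0.002, 10)])
flagged = benjamini_hochberg(p_values, q=0.1)
print(sorted(flagged))      # mostly indices 90-99, the truly biased annotators
```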