October 16, 2019

3535 words 17 mins read

Paper Group ANR 1093

EmbNum: Semantic labeling for numerical values with deep metric learning

Title EmbNum: Semantic labeling for numerical values with deep metric learning
Authors Phuc Nguyen, Khai Nguyen, Ryutaro Ichise, Hideaki Takeda
Abstract Semantic labeling for numerical values is the task of assigning semantic labels to unknown numerical attributes. The semantic labels could be numerical properties in ontologies, instances in knowledge bases, or labeled data manually annotated by domain experts. In this paper, we treat semantic labeling as a retrieval setting in which an unknown attribute is assigned the label of the most relevant attribute in the labeled data. One of the greatest challenges is that an unknown attribute rarely has the same set of values as a similar attribute in the labeled data. To overcome this issue, a statistical interpretation of the value distribution is taken into account. However, existing studies assume a specific form of distribution, which is not appropriate for open data in particular, where there is no prior knowledge of the data. To address these problems, we propose a neural numerical embedding model (EmbNum) that learns useful representation vectors for numerical attributes without prior assumptions on the data distribution. The “semantic similarities” between attributes are then measured on these representation vectors by Euclidean distance. Our empirical experiments on City Data and Open Data show that EmbNum significantly outperforms state-of-the-art methods for numerical attribute semantic labeling in both effectiveness and efficiency.
Tasks Metric Learning
Published 2018-06-26
URL http://arxiv.org/abs/1807.01367v2
PDF http://arxiv.org/pdf/1807.01367v2.pdf
PWC https://paperswithcode.com/paper/embnum-semantic-labeling-for-numerical-values
Repo
Framework
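
To make the retrieval setting concrete, the sketch below illustrates nearest-neighbour labeling over attribute embeddings with Euclidean distance. It is not the authors' code: `embed` is a hypothetical placeholder (simple summary statistics) standing in for the learned EmbNum encoder, and the labeled attributes are toy data.

```python
import numpy as np

def embed(values):
    """Hypothetical stand-in for the learned EmbNum encoder: here, summary statistics."""
    v = np.asarray(values, dtype=float)
    return np.array([v.mean(), v.std(), np.median(v), v.min(), v.max()])

def label_unknown_attribute(unknown_values, labeled_attributes):
    """Assign the label of the labeled attribute closest in embedding space."""
    q = embed(unknown_values)
    best_label, best_dist = None, np.inf
    for label, values in labeled_attributes.items():
        d = np.linalg.norm(q - embed(values))  # Euclidean distance between embeddings
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

labeled = {"population": [1.1e6, 7.0e5, 9.0e5], "area_km2": [783.8, 1302.0, 606.1]}
print(label_unknown_attribute([1.2e6, 6.5e5, 9.8e5], labeled))  # -> population
```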

Study and Observation of the Variation of Accuracies of KNN, SVM, LMNN, ENN Algorithms on Eleven Different Datasets from UCI Machine Learning Repository

Title Study and Observation of the Variation of Accuracies of KNN, SVM, LMNN, ENN Algorithms on Eleven Different Datasets from UCI Machine Learning Repository
Authors Mohammad Mahmudur Rahman Khan, Rezoana Bente Arif, Md. Abu Bakr Siddique, Mahjabin Rahman Oishe
Abstract Machine learning enables computers to learn from data without being explicitly programmed [1, 2]. Machine learning can be classified into supervised and unsupervised learning. In supervised learning, computers learn a function that maps an input to an output based on training input-output pairs [3]. Among the most efficient and widely used supervised learning algorithms are K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Large Margin Nearest Neighbor (LMNN), and Extended Nearest Neighbor (ENN). The main contribution of this paper is to implement these learning algorithms on eleven different datasets from the UCI machine learning repository and observe the variation in accuracy of each algorithm across all datasets. Analyzing the accuracies gives a brief idea of the relationship between the machine learning algorithms and data dimensionality. All the algorithms are implemented in Matlab. From these accuracy observations, a comparison can be drawn among KNN, SVM, LMNN, and ENN regarding their performance on each dataset.
Tasks
Published 2018-09-17
URL http://arxiv.org/abs/1809.06186v3
PDF http://arxiv.org/pdf/1809.06186v3.pdf
PWC https://paperswithcode.com/paper/study-and-observation-of-the-variation-of
Repo
Framework
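
The paper's implementations are in Matlab; for readers who want to reproduce the flavour of the comparison, here is a minimal scikit-learn sketch measuring cross-validated accuracy of KNN and SVM on one UCI dataset (LMNN and ENN are not available in scikit-learn and are omitted).

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)  # Iris is one of the UCI datasets

models = {
    "KNN (k=5)": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "SVM (RBF)": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)  # 10-fold cross-validated accuracy
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")
```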

An Accurate and Real-time Self-blast Glass Insulator Location Method Based On Faster R-CNN and U-net with Aerial Images

Title An Accurate and Real-time Self-blast Glass Insulator Location Method Based On Faster R-CNN and U-net with Aerial Images
Authors Zenan Ling, Robert C. Qiu, Zhijian Jin, Yuhang Zhang, Xing He, Haichun Liu, Lei Chu
Abstract Locating broken insulators in aerial images is a challenging task. This paper, focusing on the self-blast glass insulator, proposes a deep learning solution. We address the broken-insulator location problem with a low signal-to-noise-ratio image location framework consisting of two modules: 1) object detection based on Fast R-CNN, and 2) pixel classification based on U-net. A diverse set of aerial images from a power grid in China is used to validate the proposed approach. Furthermore, a comparison among different methods shows that our approach is accurate and runs in real time.
Tasks Object Detection
Published 2018-01-16
URL http://arxiv.org/abs/1801.05143v1
PDF http://arxiv.org/pdf/1801.05143v1.pdf
PWC https://paperswithcode.com/paper/an-accurate-and-real-time-self-blast-glass
Repo
Framework
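
A minimal PyTorch sketch of the two-module pipeline the abstract describes: a region detector followed by per-pixel classification inside each detected box. It assumes torchvision >= 0.13 for the pretrained Faster R-CNN and treats `unet` as a separately trained segmentation network; it illustrates the wiring only, not the authors' implementation.

```python
import torch
import torchvision

# Pretrained detector (assumes torchvision >= 0.13 for the `weights` argument).
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

def locate_self_blast(image, unet, score_thresh=0.7):
    """Stage 1: detect insulator boxes; stage 2: segment defect pixels inside each box."""
    with torch.no_grad():
        det = detector([image])[0]                      # dict with boxes, labels, scores
        results = []
        for box, score in zip(det["boxes"], det["scores"]):
            if score < score_thresh:
                continue
            x1, y1, x2, y2 = box.int().tolist()
            crop = image[:, y1:y2, x1:x2].unsqueeze(0)  # (1, C, h, w) patch
            mask = torch.sigmoid(unet(crop))            # per-pixel defect probability
            results.append((box, mask > 0.5))
    return results
```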

Learning and Testing Causal Models with Interventions

Title Learning and Testing Causal Models with Interventions
Authors Jayadev Acharya, Arnab Bhattacharyya, Constantinos Daskalakis, Saravanan Kandasamy
Abstract We consider testing and learning problems on causal Bayesian networks as defined by Pearl (Pearl, 2009). Given a causal Bayesian network $\mathcal{M}$ on a graph with $n$ discrete variables and bounded in-degree and bounded ‘confounded components’, we show that $O(\log n)$ interventions on an unknown causal Bayesian network $\mathcal{X}$ on the same graph, and $\tilde{O}(n/\epsilon^2)$ samples per intervention, suffice to efficiently distinguish whether $\mathcal{X}=\mathcal{M}$ or whether there exists some intervention under which $\mathcal{X}$ and $\mathcal{M}$ are farther than $\epsilon$ in total variation distance. We also obtain sample/time/intervention efficient algorithms for: (i) testing the identity of two unknown causal Bayesian networks on the same graph; and (ii) learning a causal Bayesian network on a given graph. Although our algorithms are non-adaptive, we show that adaptivity does not help in general: $\Omega(\log n)$ interventions are necessary for testing the identity of two unknown causal Bayesian networks on the same graph, even adaptively. Our algorithms are enabled by a new subadditivity inequality for the squared Hellinger distance between two causal Bayesian networks.
Tasks
Published 2018-05-24
URL http://arxiv.org/abs/1805.09697v1
PDF http://arxiv.org/pdf/1805.09697v1.pdf
PWC https://paperswithcode.com/paper/learning-and-testing-causal-models-with
Repo
Framework
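
As a small numerical aside, the two distances that appear in the statement above are easy to compute for discrete distributions; a sketch with toy probability vectors follows (it is not part of the paper's algorithms).

```python
import numpy as np

def total_variation(p, q):
    """TV(P, Q) = 1/2 * sum_i |p_i - q_i|."""
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

def squared_hellinger(p, q):
    """H^2(P, Q) = 1 - sum_i sqrt(p_i * q_i)."""
    return 1.0 - np.sqrt(np.asarray(p) * np.asarray(q)).sum()

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
print(total_variation(p, q), squared_hellinger(p, q))
# The standard inequality H^2(P, Q) <= TV(P, Q) holds for these vectors.
```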

Deep Graph Translation

Title Deep Graph Translation
Authors Xiaojie Guo, Lingfei Wu, Liang Zhao
Abstract Inspired by the tremendous success of deep generative models on generating continuous data such as images and audio, in recent years a few deep graph generative models have been proposed to generate discrete data such as graphs. They are typically unconditioned generative models that have no control over the modes of the graphs being generated. In contrast, in this paper we are interested in a new problem named \emph{Deep Graph Translation}: given an input graph, we want to infer a target graph based on their underlying (both global and local) translation mapping. Graph translation could be highly desirable in many applications such as disaster management and rare event forecasting, where rare and abnormal graph patterns (e.g., traffic congestion and terrorism events) are inferred prior to their occurrence, even without historical data on the abnormal patterns for this graph (e.g., a road network or human contact network). To achieve this, we propose a novel Graph-Translation Generative Adversarial Network (GT-GAN) that learns a graph translator from input to target graphs. GT-GAN consists of a graph translator, for which we propose new graph convolution and deconvolution layers to learn the global and local translation mapping. A new conditional graph discriminator is also proposed to classify target graphs by conditioning on input graphs. Extensive experiments on multiple synthetic and real-world datasets demonstrate the effectiveness and scalability of the proposed GT-GAN.
Tasks
Published 2018-05-25
URL http://arxiv.org/abs/1805.09980v2
PDF http://arxiv.org/pdf/1805.09980v2.pdf
PWC https://paperswithcode.com/paper/deep-graph-translation
Repo
Framework
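
A deliberately simplified PyTorch skeleton of the conditional setup: a translator maps an input adjacency matrix to a target one, and a discriminator scores (input, target) pairs. The real GT-GAN uses dedicated graph convolution/deconvolution layers; plain MLPs over flattened adjacency matrices are used here only to show the conditioning, and all sizes are toy values.

```python
import torch
import torch.nn as nn

N = 16  # number of nodes; adjacency matrices are flattened to vectors

translator = nn.Sequential(           # input adjacency -> target adjacency
    nn.Linear(N * N, 256), nn.ReLU(),
    nn.Linear(256, N * N), nn.Sigmoid(),
)
discriminator = nn.Sequential(        # (input, target) pair -> real/fake score
    nn.Linear(2 * N * N, 256), nn.ReLU(),
    nn.Linear(256, 1),
)

def d_logits(a_in, a_tgt):
    """Condition the discriminator on the input graph by concatenation."""
    return discriminator(torch.cat([a_in.flatten(1), a_tgt.flatten(1)], dim=1))

a_in = torch.rand(4, N, N)                        # batch of input graphs
a_fake = translator(a_in.flatten(1)).view(4, N, N)
print(d_logits(a_in, a_fake).shape)               # torch.Size([4, 1])
```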

Deep attention-based classification network for robust depth prediction

Title Deep attention-based classification network for robust depth prediction
Authors Ruibo Li, Ke Xian, Chunhua Shen, Zhiguo Cao, Hao Lu, Lingxiao Hang
Abstract In this paper, we present our deep attention-based classification (DABC) network for robust single image depth prediction, in the context of the Robust Vision Challenge 2018 (ROB 2018). Unlike conventional depth prediction, our goal is to design a model that can perform well in both indoor and outdoor scenes with a single parameter set. However, robust depth prediction suffers from two challenging problems: a) How to extract more discriminative features for different scenes (compared to a single scene)? b) How to handle the large differences in depth ranges between indoor and outdoor datasets? To address these two problems, we first formulate depth prediction as a multi-class classification task and apply a softmax classifier to classify the depth label of each pixel. We then introduce a global pooling layer and a channel-wise attention mechanism to adaptively select the discriminative channels of features and to update the original features by assigning important channels with higher weights. Further, to reduce the influence of quantization errors, we employ a soft-weighted sum inference strategy for the final prediction. Experimental results on both indoor and outdoor datasets demonstrate the effectiveness of our method. It is worth mentioning that we won 2nd place in the single-image depth prediction entry of ROB 2018, held in conjunction with the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2018.
Tasks Deep Attention, Depth Estimation, Quantization
Published 2018-07-11
URL http://arxiv.org/abs/1807.03959v1
PDF http://arxiv.org/pdf/1807.03959v1.pdf
PWC https://paperswithcode.com/paper/deep-attention-based-classification-network
Repo
Framework
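
The soft-weighted-sum inference step lends itself to a short sketch: depth is discretized into bins, the network emits per-pixel class scores, and the final depth is the probability-weighted sum of the bin centers rather than the argmax bin. The bin layout and tensor shapes below are illustrative assumptions, not the paper's exact configuration.

```python
import torch

def soft_weighted_depth(logits, bin_centers):
    """logits: (B, K, H, W) class scores; bin_centers: (K,) depth of each bin."""
    probs = torch.softmax(logits, dim=1)                        # per-pixel class probabilities
    return (probs * bin_centers.view(1, -1, 1, 1)).sum(dim=1)   # (B, H, W) depth map

K = 80
bin_centers = torch.logspace(-1, 2, K)   # assumed bins: 0.1 m .. 100 m, log-spaced
logits = torch.randn(2, K, 32, 32)       # placeholder network output
depth = soft_weighted_depth(logits, bin_centers)
print(depth.shape)                       # torch.Size([2, 32, 32])
```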

Improving Bag-of-Visual-Words Towards Effective Facial Expressive Image Classification

Title Improving Bag-of-Visual-Words Towards Effective Facial Expressive Image Classification
Authors Dawood Al Chanti, Alice Caplier
Abstract The Bag-of-Visual-Words (BoVW) approach has been widely used in recent years for image classification. However, its limitations regarding optimal feature selection, the clustering technique, the lack of spatial organization of the data, and the weighting of visual words are crucial. These factors affect the stability of the model and reduce performance. We propose an algorithm based on BoVW for facial expression analysis that goes beyond these limitations. The visual codebook is built using the k-Means++ method to avoid poor clustering. To exploit reliable low-level features, we search for the best feature detector, one that avoids locating a large number of keypoints that do not contribute to the classification process. Then, we compute the relative conjunction matrix in order to preserve the spatial order of the data by coding the relationships among visual words. In addition, a weighting scheme that reflects how important a visual word is with respect to a given image is introduced. We speed up the learning process by using a histogram intersection kernel with a Support Vector Machine to learn a discriminative classifier. The proposed algorithm is compared with the standard bag-of-visual-words method and with the bag-of-visual-words method with a spatial pyramid. Extensive experiments on the CK+, MMI and JAFFE databases show good average recognition rates. Likewise, the ability to recognize spontaneous and non-basic expressive states is investigated using the DynEmo database.
Tasks Feature Selection, Image Classification
Published 2018-09-30
URL http://arxiv.org/abs/1810.00360v1
PDF http://arxiv.org/pdf/1810.00360v1.pdf
PWC https://paperswithcode.com/paper/improving-bag-of-visual-words-towards
Repo
Framework
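
Two of the ingredients above, the k-means++ codebook and the histogram intersection kernel SVM, can be sketched with scikit-learn as follows. Local descriptors and labels are random placeholders; the relative conjunction matrix and the weighting scheme from the paper are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
descriptors = rng.random((5000, 64))               # stand-in local features
codebook = KMeans(n_clusters=100, init="k-means++", n_init=10).fit(descriptors)

def bovw_histogram(image_descriptors):
    words = codebook.predict(image_descriptors)
    hist = np.bincount(words, minlength=100).astype(float)
    return hist / hist.sum()                       # L1-normalized word histogram

def intersection_kernel(A, B):
    """K[i, j] = sum_k min(A[i, k], B[j, k])."""
    return np.minimum(A[:, None, :], B[None, :, :]).sum(axis=2)

X = np.stack([bovw_histogram(rng.random((200, 64))) for _ in range(40)])
y = rng.integers(0, 2, size=40)                    # toy expression labels
clf = SVC(kernel="precomputed").fit(intersection_kernel(X, X), y)
print(clf.predict(intersection_kernel(X[:5], X)))  # kernel rows vs. training set
```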

Inference for Individual Mediation Effects and Interventional Effects in Sparse High-Dimensional Causal Graphical Models

Title Inference for Individual Mediation Effects and Interventional Effects in Sparse High-Dimensional Causal Graphical Models
Authors Abhishek Chakrabortty, Preetam Nandy, Hongzhe Li
Abstract We consider the problem of identifying intermediate variables (or mediators) that regulate the effect of a treatment on a response variable. While there has been significant research on this topic, little work has been done when the set of potential mediators is high-dimensional and when they are interrelated. In particular, we assume that the causal structure of the treatment, the potential mediators and the response is a directed acyclic graph (DAG). High-dimensional DAG models have previously been used for the estimation of causal effects from observational data and methods called IDA and joint-IDA have been developed for estimating the effects of single interventions and multiple simultaneous interventions respectively. In this paper, we propose an IDA-type method called MIDA for estimating mediation effects from high-dimensional observational data. Although IDA and joint-IDA estimators have been shown to be consistent in certain sparse high-dimensional settings, their asymptotic properties such as convergence in distribution and inferential tools in such settings remained unknown. We prove high-dimensional consistency of MIDA for linear structural equation models with sub-Gaussian errors. More importantly, we derive distributional convergence results for MIDA in similar high-dimensional settings, which are applicable to IDA and joint-IDA estimators as well. To the best of our knowledge, these are the first distributional convergence results facilitating inference for IDA-type estimators. These results have been built on our novel theoretical results regarding uniform bounds for linear regression estimators over varying subsets of high-dimensional covariates, which may be of independent interest. Finally, we empirically validate our asymptotic theory and demonstrate the usefulness of MIDA in identifying large mediation effects via simulations and application to real data in genomics.
Tasks
Published 2018-09-27
URL http://arxiv.org/abs/1809.10652v1
PDF http://arxiv.org/pdf/1809.10652v1.pdf
PWC https://paperswithcode.com/paper/inference-for-individual-mediation-effects
Repo
Framework

Efficient Load Sampling for Worst-Case Structural Analysis Under Force Location Uncertainty

Title Efficient Load Sampling for Worst-Case Structural Analysis Under Force Location Uncertainty
Authors Yining Wang, Erva Ulu, Aarti Singh, Levent Burak Kara
Abstract An important task in structural design is to quantify the structural performance of an object under the external forces it may experience during its use. The problem proves to be computationally very challenging as the external forces’ contact locations and magnitudes may exhibit significant variations. We present an efficient analysis approach to determine the most critical force contact location in such problems with force location uncertainty. Given an input 3D model and regions on its boundary where arbitrary normal forces may make contact, our algorithm predicts the worst-case force configuration responsible for creating the highest stress within the object. Our approach uses a computationally tractable experimental design method to select a set of sample force locations based on geometry only, without inspecting the stress response, which requires computationally expensive finite-element analysis. Then, we construct a simple regression model on these samples and their corresponding maximum stresses. Combined with a simple ranking-based post-processing step, our method provides a practical solution to the worst-case structural analysis problem. The results indicate that our approach achieves significant improvements over existing work and brute-force approaches. We demonstrate that further speed-up can be obtained when a small error tolerance in the maximum stress is allowed.
Tasks
Published 2018-10-25
URL http://arxiv.org/abs/1810.10977v1
PDF http://arxiv.org/pdf/1810.10977v1.pdf
PWC https://paperswithcode.com/paper/efficient-load-sampling-for-worst-case
Repo
Framework
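
A rough sketch of the sample-then-regress-then-rank idea: a handful of candidate contact locations are evaluated with the expensive solver, a surrogate regressor predicts the maximum stress everywhere, and only the top-ranked candidates are re-verified. Here `max_stress_fea` is a hypothetical stand-in for a finite-element solve, and the random sampling replaces the geometry-based experimental design of the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def max_stress_fea(location):
    """Hypothetical expensive FEA call returning the peak stress for a contact point."""
    x, y = location
    return np.sin(3 * x) ** 2 + 0.5 * np.cos(5 * y) + 0.1 * x * y

rng = np.random.default_rng(1)
candidates = rng.random((500, 2))            # candidate contact locations on the boundary
sample_idx = rng.choice(len(candidates), size=25, replace=False)  # geometry-based in the paper
X_train = candidates[sample_idx]
y_train = np.array([max_stress_fea(p) for p in X_train])          # 25 expensive solves

surrogate = GaussianProcessRegressor().fit(X_train, y_train)
ranked = np.argsort(-surrogate.predict(candidates))               # descending predicted stress
verified = max(max_stress_fea(candidates[i]) for i in ranked[:10])  # re-check top 10 with FEA
print("estimated worst-case max stress:", verified)
```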

The GaussianSketch for Almost Relative Error Kernel Distance

Title The GaussianSketch for Almost Relative Error Kernel Distance
Authors Jeff M. Phillips, Wai Ming Tai
Abstract We introduce two versions of a new sketch for approximately embedding the Gaussian kernel into Euclidean inner product space. These work by truncating infinite expansions of the Gaussian inner product, and carefully invoking the TensorSketch. After providing concentration and approximation properties of these sketches, we use them to approximate the kernel distance between point sets. These sketches yield almost $(1+\varepsilon)$-relative error, but with a small additive $\alpha$ term. In the first variant the dependence on $1/\alpha$ is logarithmic, but there is a separate exponential dependence on the original dimension $d$. In the second variant, the dependence on $1/\alpha$ is still sub-polynomial, but the dependence on $d$ is linear.
Tasks
Published 2018-11-09
URL https://arxiv.org/abs/1811.04136v2
PDF https://arxiv.org/pdf/1811.04136v2.pdf
PWC https://paperswithcode.com/paper/relative-error-rkhs-embeddings-for-gaussian
Repo
Framework
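
For reference, the un-sketched quantity being approximated is the Gaussian kernel distance between two point sets, with squared value mean K(a, a') + mean K(b, b') - 2 * mean K(a, b). The brute-force computation below, at O(|A||B|d) cost, is what the proposed sketches avoid; it is provided only as a baseline, not as part of the paper's method.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Exact Gaussian kernel matrix K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2))."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq / (2 * sigma ** 2))

def kernel_distance(A, B, sigma=1.0):
    k_aa = gaussian_kernel(A, A, sigma).mean()
    k_bb = gaussian_kernel(B, B, sigma).mean()
    k_ab = gaussian_kernel(A, B, sigma).mean()
    return np.sqrt(max(k_aa + k_bb - 2 * k_ab, 0.0))

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 3))
B = rng.normal(loc=0.5, size=(120, 3))
print(kernel_distance(A, B))
```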

A Benchmark for Iris Location and a Deep Learning Detector Evaluation

Title A Benchmark for Iris Location and a Deep Learning Detector Evaluation
Authors Evair Severo, Rayson Laroca, Cides S. Bezerra, Luiz A. Zanlorensi, Daniel Weingaertner, Gladston Moreira, David Menotti
Abstract The iris is considered the biometric trait with the highest uniqueness. Iris location is an important task for biometric systems, directly affecting the results obtained in specific applications such as iris recognition, spoofing and contact lens detection, among others. This work defines the iris location problem as the delimitation of the smallest squared window that encompasses the iris region. In order to build a benchmark for iris location, we annotate four databases from different biometric applications with squared iris bounding boxes and make them publicly available to the community. Besides these 4 annotated databases, we include 2 others from the literature. We perform experiments on these six databases, five obtained with near-infrared sensors and one with a visible-light sensor. We compare the classical and outstanding Daugman iris location approach with two window-based detectors: 1) a sliding-window detector based on Histogram of Oriented Gradients (HOG) features and a linear Support Vector Machine (SVM) classifier; 2) a deep learning based detector fine-tuned from the YOLO object detector. Experimental results show that the deep learning based detector outperforms the others in terms of accuracy and runtime (GPU version) and should be chosen whenever possible.
Tasks Iris Recognition
Published 2018-03-03
URL http://arxiv.org/abs/1803.01250v5
PDF http://arxiv.org/pdf/1803.01250v5.pdf
PWC https://paperswithcode.com/paper/a-benchmark-for-iris-location-and-a-deep
Repo
Framework
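
The first baseline above (HOG features scored by a linear SVM inside a sliding window) is easy to sketch. The version below uses scikit-image and scikit-learn with randomly generated training crops as placeholders, so it only illustrates the mechanics, not the benchmark results.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

WIN = 64  # square window size in pixels

def hog_features(patch):
    return hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

rng = np.random.default_rng(0)
train_patches = rng.random((50, WIN, WIN))          # stand-in iris / non-iris crops
train_labels = rng.integers(0, 2, size=50)
svm = LinearSVC(dual=False).fit([hog_features(p) for p in train_patches], train_labels)

def locate_iris(image, stride=16):
    """Return the (row, col) of the window with the highest SVM score."""
    best, best_score = None, -np.inf
    for r in range(0, image.shape[0] - WIN + 1, stride):
        for c in range(0, image.shape[1] - WIN + 1, stride):
            score = svm.decision_function([hog_features(image[r:r + WIN, c:c + WIN])])[0]
            if score > best_score:
                best, best_score = (r, c), score
    return best, best_score

print(locate_iris(rng.random((200, 200))))
```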

Holarchic Structures for Decentralized Deep Learning - A Performance Analysis

Title Holarchic Structures for Decentralized Deep Learning - A Performance Analysis
Authors Evangelos Pournaras, Srivatsan Yadhunathan, Ada Diaconescu
Abstract Structure plays a key role in learning performance. In centralized computational systems, hyperparameter optimization and regularization techniques such as dropout are computational means to enhance learning performance by adjusting the deep hierarchical structure. However, in decentralized deep learning over the Internet of Things, the structure is an actual network of autonomous interconnected devices such as smart phones that interact via complex network protocols. Self-adaptation of the learning structure is a challenge. Uncertainties such as network latency, node and link failures, or even bottlenecks caused by limited processing capacity and energy availability can significantly downgrade learning performance. Network self-organization and self-management are complex and require additional computational and network resources that hinder the feasibility of decentralized deep learning. In contrast, this paper introduces a self-adaptive learning approach based on holarchic learning structures for exploring, mitigating and boosting learning performance in distributed environments with uncertainties. A large-scale performance analysis with 864,000 experiments fed with synthetic and real-world data from smart grid and smart city pilot projects confirms the cost-effectiveness of holarchic structures for decentralized deep learning.
Tasks Hyperparameter Optimization
Published 2018-05-07
URL http://arxiv.org/abs/1805.02686v2
PDF http://arxiv.org/pdf/1805.02686v2.pdf
PWC https://paperswithcode.com/paper/holarchic-structures-for-decentralized-deep
Repo
Framework

Computing and Testing Pareto Optimal Committees

Title Computing and Testing Pareto Optimal Committees
Authors Haris Aziz, Jerome Lang, Jerome Monnot
Abstract Selecting a set of alternatives based on the preferences of agents is an important problem in committee selection and beyond. Among the various criteria put forth for the desirability of a committee, Pareto optimality is a minimal and important requirement. As asking agents to specify their preferences over exponentially many subsets of alternatives is practically infeasible, we assume that each agent specifies a weak order on single alternatives, from which a preference relation over subsets is derived using some preference extension. We consider five prominent extensions (responsive, downward lexicographic, upward lexicographic, best, and worst). For each of them, we consider the corresponding Pareto optimality notion, and we study the complexity of computing and verifying Pareto optimal outcomes. We also consider strategic issues: for four of the set extensions, we present a linear-time, Pareto optimal and strategyproof algorithm that even works for weak preferences.
Tasks
Published 2018-03-18
URL http://arxiv.org/abs/1803.06644v1
PDF http://arxiv.org/pdf/1803.06644v1.pdf
PWC https://paperswithcode.com/paper/computing-and-testing-pareto-optimal
Repo
Framework
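
To make one of the set extensions concrete, the brute-force sketch below checks Pareto optimality under the "best" extension, where an agent compares committees by its most-preferred member. Preferences are encoded as ranks (lower is better, ties allowed); the preference profile is a toy example, and the paper's polynomial-time algorithms are not reproduced.

```python
from itertools import combinations

ranks = [  # one dict per agent: alternative -> rank (lower is better)
    {"a": 0, "b": 1, "c": 2, "d": 3},
    {"b": 0, "a": 1, "d": 2, "c": 3},
    {"c": 0, "d": 1, "a": 2, "b": 2},
]

def best_rank(agent, committee):
    return min(agent[x] for x in committee)

def dominates(t, s):
    """True if every agent weakly prefers T to S and at least one strictly prefers it."""
    weak = all(best_rank(a, t) <= best_rank(a, s) for a in ranks)
    strict = any(best_rank(a, t) < best_rank(a, s) for a in ranks)
    return weak and strict

def pareto_optimal_committees(alternatives, k):
    committees = list(combinations(alternatives, k))
    return [s for s in committees if not any(dominates(t, s) for t in committees)]

print(pareto_optimal_committees("abcd", 2))
```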

Learning from Demonstration in the Wild

Title Learning from Demonstration in the Wild
Authors Feryal Behbahani, Kyriacos Shiarlis, Xi Chen, Vitaly Kurin, Sudhanshu Kasewa, Ciprian Stirbu, João Gomes, Supratik Paul, Frans A. Oliehoek, João Messias, Shimon Whiteson
Abstract Learning from demonstration (LfD) is useful in settings where hand-coding behaviour or a reward function is impractical. It has succeeded in a wide range of problems but typically relies on manually generated demonstrations or specially deployed sensors and has not generally been able to leverage the copious demonstrations available in the wild: those that capture behaviours that were occurring anyway using sensors that were already deployed for another purpose, e.g., traffic camera footage capturing demonstrations of natural behaviour of vehicles, cyclists, and pedestrians. We propose Video to Behaviour (ViBe), a new approach to learn models of behaviour from unlabelled raw video data of a traffic scene collected from a single, monocular, initially uncalibrated camera with ordinary resolution. Our approach calibrates the camera, detects relevant objects, tracks them through time, and uses the resulting trajectories to perform LfD, yielding models of naturalistic behaviour. We apply ViBe to raw videos of a traffic intersection and show that it can learn purely from videos, without additional expert knowledge.
Tasks
Published 2018-11-08
URL http://arxiv.org/abs/1811.03516v2
PDF http://arxiv.org/pdf/1811.03516v2.pdf
PWC https://paperswithcode.com/paper/learning-from-demonstration-in-the-wild
Repo
Framework

Fast Robust Methods for Singular State-Space Models

Title Fast Robust Methods for Singular State-Space Models
Authors Jonathan Jonker, Aleksandr Y. Aravkin, James V. Burke, Gianluigi Pillonetto, Sarah Webster
Abstract State-space models are used in a wide range of time series analysis formulations. Kalman filtering and smoothing are work-horse algorithms in these settings. While classic algorithms assume Gaussian errors to simplify estimation, recent advances use a broader range of optimization formulations to allow outlier-robust estimation, as well as constraints to capture prior information. Here we develop methods for state-space models where either innovations or error covariances may be singular. These models frequently arise in navigation (e.g. for ‘colored noise’ models or deterministic integrals) and are ubiquitous in auto-correlated time series models such as ARMA. We reformulate all state-space models (singular as well as nonsingular) as constrained convex optimization problems, and develop an efficient algorithm for this reformulation. The convergence rate is {\it locally linear}, with constants that do not depend on the conditioning of the problem. Numerical comparisons show that the new approach outperforms competing approaches for {\it nonsingular} models, including state-of-the-art interior point (IP) methods. IP methods converge at superlinear rates, so one would expect them to dominate. However, the steep rate of the proposed approach (independent of problem conditioning) combined with cheap iterations wins against IP in a run-time comparison. We therefore suggest that the proposed approach be the {\it default choice} for estimating state space models outside of the Gaussian context, regardless of whether the error covariances are singular or not.
Tasks Time Series, Time Series Analysis
Published 2018-03-07
URL http://arxiv.org/abs/1803.02525v2
PDF http://arxiv.org/pdf/1803.02525v2.pdf
PWC https://paperswithcode.com/paper/fast-robust-methods-for-singular-state-space
Repo
Framework
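
A hedged cvxpy sketch of the constrained convex-optimization view: in a toy constant-velocity model the position update is a deterministic integral (a singular innovation), which is written as a hard constraint, and a Huber loss on the measurements gives outlier robustness. This illustrates the formulation only; the paper's specialized algorithm and its linear convergence analysis are not implemented here.

```python
import cvxpy as cp
import numpy as np

T, dt = 50, 0.1
rng = np.random.default_rng(0)
truth = np.cumsum(0.5 * np.ones(T)) * dt                 # true positions (constant velocity)
y = truth + 0.05 * rng.normal(size=T)                    # noisy position measurements
y[::10] += 2.0                                           # inject outliers

x = cp.Variable((T, 2))                                  # state: [position, velocity]
objective = cp.sum(cp.huber(y - x[:, 0], M=0.1)) \
    + 10 * cp.sum_squares(x[1:, 1] - x[:-1, 1])          # penalize velocity innovations
constraints = [x[1:, 0] == x[:-1, 0] + dt * x[:-1, 1]]   # singular (deterministic) update
cp.Problem(cp.Minimize(objective), constraints).solve()
print(np.round(x.value[:5, 0], 3))                       # smoothed positions
```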