April 1, 2020

3391 words 16 mins read

Paper Group ANR 450

Semi-supervised Anomaly Detection on Attributed Graphs

Title Semi-supervised Anomaly Detection on Attributed Graphs
Authors Atsutoshi Kumagai, Tomoharu Iwata, Yasuhiro Fujiwara
Abstract We propose a simple yet effective method for detecting anomalous instances on an attributed graph with label information for a small number of instances. Although standard anomaly detection methods usually assume that instances are independent and identically distributed, in many real-world applications instances are explicitly connected with each other, resulting in so-called attributed graphs. The proposed method embeds nodes (instances) of the attributed graph in a latent space by taking into account their attributes as well as the graph structure, based on graph convolutional networks (GCNs). To learn node embeddings specialized for anomaly detection, in which there is a class imbalance due to the rarity of anomalies, the parameters of a GCN are trained to minimize the volume of a hypersphere that encloses the node embeddings of normal instances while embedding anomalous ones outside the hypersphere. This enables us to detect anomalies by simply calculating the distances between the node embeddings and the hypersphere center. The proposed method can effectively propagate label information from a small number of nodes to unlabeled ones by taking into account the nodes’ attributes, graph structure, and class imbalance. In experiments with five real-world attributed graph datasets, we demonstrate that the proposed method achieves better performance than various existing anomaly detection methods.
Tasks Anomaly Detection
Published 2020-02-27
URL https://arxiv.org/abs/2002.12011v1
PDF https://arxiv.org/pdf/2002.12011v1.pdf
PWC https://paperswithcode.com/paper/semi-supervised-anomaly-detection-on
Repo
Framework
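
The objective described above is close in spirit to Deep SVDD-style one-class learning on GCN embeddings. Below is a minimal, hedged sketch in PyTorch on a toy graph, assuming a two-layer GCN, a center taken from the initial embeddings of labeled normals, and a 1/distance² push term for labeled anomalies; these are illustrative choices, not the paper's exact formulation.

```python
# Hedged sketch: GCN node embeddings trained with a hypersphere objective.
# Layer sizes, loss form, and handling of unlabeled nodes are assumptions.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, a_hat, x):
        # a_hat: symmetrically normalized adjacency with self-loops
        return torch.relu(self.lin(a_hat @ x))

def normalize_adj(adj):
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = torch.diag(a.sum(1).pow(-0.5))
    return d_inv_sqrt @ a @ d_inv_sqrt

def hypersphere_loss(z, center, labels):
    # labels: +1 labeled normal, -1 labeled anomaly, 0 unlabeled (ignored here)
    dist2 = ((z - center) ** 2).sum(dim=1)
    pull = dist2[labels == 1].sum()                    # normals pulled inside
    push = (1.0 / (dist2[labels == -1] + 1e-6)).sum()  # anomalies pushed outside
    return pull + push

# toy attributed graph: 6 nodes, 4 attributes each
adj = torch.tensor([[0, 1, 1, 0, 0, 0], [1, 0, 1, 0, 0, 0], [1, 1, 0, 1, 0, 0],
                    [0, 0, 1, 0, 1, 1], [0, 0, 0, 1, 0, 1], [0, 0, 0, 1, 1, 0]],
                   dtype=torch.float)
x = torch.randn(6, 4)
labels = torch.tensor([1, 1, 0, 0, -1, 0])

a_hat = normalize_adj(adj)
layers = nn.ModuleList([GCNLayer(4, 8), GCNLayer(8, 2)])

with torch.no_grad():                                  # center from initial normals
    h0 = x
    for layer in layers:
        h0 = layer(a_hat, h0)
    center = h0[labels == 1].mean(dim=0)

opt = torch.optim.Adam(layers.parameters(), lr=1e-2)
for _ in range(200):
    h = x
    for layer in layers:
        h = layer(a_hat, h)
    loss = hypersphere_loss(h, center, labels)
    opt.zero_grad(); loss.backward(); opt.step()

scores = ((h - center) ** 2).sum(dim=1)                # anomaly score = distance
```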

Expected Improvement versus Predicted Value in Surrogate-Based Optimization

Title Expected Improvement versus Predicted Value in Surrogate-Based Optimization
Authors Frederik Rehbach, Martin Zaefferer, Boris Naujoks, Thomas Bartz-Beielstein
Abstract Surrogate-based optimization relies on so-called infill criteria (acquisition functions) to decide which point to evaluate next. When Kriging is used as the surrogate model of choice (also called Bayesian optimization), one of the most frequently chosen criteria is expected improvement. We argue that the popularity of expected improvement largely relies on its theoretical properties rather than empirically validated performance. Few results from the literature show evidence that, under certain conditions, expected improvement may perform worse than something as simple as the predicted value of the surrogate model. We benchmark both infill criteria in an extensive empirical study on the BBOB function set. This investigation includes a detailed study of the impact of problem dimensionality on algorithm performance. The results support the hypothesis that exploration loses importance with increasing problem dimensionality. A statistical analysis reveals that the purely exploitative search with the predicted-value criterion performs better on most problems of five or more dimensions. Possible reasons for these results are discussed. In addition, we give an in-depth guide for choosing the infill criterion based on prior knowledge about the problem at hand, its dimensionality, and the available budget.
Tasks
Published 2020-01-09
URL https://arxiv.org/abs/2001.02957v2
PDF https://arxiv.org/pdf/2001.02957v2.pdf
PWC https://paperswithcode.com/paper/expected-improvement-versus-predicted-value
Repo
Framework
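
For reference, the two infill criteria being compared can be written down in a few lines. The sketch below assumes a minimization problem and a Gaussian predictive distribution from the surrogate (mean mu, standard deviation sigma); the toy numbers only illustrate how expected improvement rewards uncertainty while the predicted value ignores it.

```python
# Hedged sketch of the two infill criteria (for minimization); values are toy data.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_min):
    """EI favors points with low predicted mean OR high uncertainty."""
    sigma = np.maximum(sigma, 1e-12)
    z = (f_min - mu) / sigma
    return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def predicted_value(mu):
    """Purely exploitative criterion: just the surrogate's mean (negated so larger is better)."""
    return -mu

# Two candidates with equal predicted mean but different uncertainty.
mu = np.array([0.5, 0.5])
sigma = np.array([0.01, 0.5])
print(expected_improvement(mu, sigma, f_min=0.6))  # prefers the uncertain point
print(predicted_value(mu))                         # indifferent between them
```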

Learning to mirror speaking styles incrementally

Title Learning to mirror speaking styles incrementally
Authors Siyi Liu, Ziang Leng, Derry Wijaya
Abstract Mirroring is the behavior in which one person subconsciously imitates the gesture, speech pattern, or attitude of another. In conversations, mirroring often signals the speakers’ enjoyment and engagement in their communication. In chatbots, methods have been proposed to add personas to the chatbots and to train them to speak in, or shift their dialogue style to, that of the personas. However, they often require a large dataset consisting of dialogues of the target personalities to train on. In this work, we explore a method that can learn to mirror the speaking styles of a person incrementally. Our method extracts n-grams that capture a person’s speaking styles and uses the n-grams to create patterns for transforming sentences to the person’s speaking styles. Our experiments show that our method is able to capture patterns of speaking style that can be used to transform regular sentences into sentences with the target style.
Tasks
Published 2020-03-05
URL https://arxiv.org/abs/2003.04993v1
PDF https://arxiv.org/pdf/2003.04993v1.pdf
PWC https://paperswithcode.com/paper/learning-to-mirror-speaking-styles
Repo
Framework
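
A minimal sketch of the n-gram idea in the abstract: score a speaker's n-grams against a background corpus and keep the most over-used ones as style markers. The scoring rule and tokenization here are illustrative assumptions; the paper's extraction and sentence-transformation patterns are more involved.

```python
# Hedged sketch: extracting a speaker's characteristic n-grams by comparing
# their counts against a background corpus (scoring rule is an assumption).
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def distinctive_ngrams(speaker_sents, background_sents, n=2, top_k=5):
    spk = Counter(g for s in speaker_sents for g in ngrams(s.lower().split(), n))
    bg = Counter(g for s in background_sents for g in ngrams(s.lower().split(), n))
    # score: relative over-use by the speaker (add-one smoothing on the background)
    score = {g: c / (bg[g] + 1) for g, c in spk.items()}
    return sorted(score, key=score.get, reverse=True)[:top_k]

speaker = ["well you know that is quite lovely", "well you know I might"]
background = ["that is nice", "I might go"]
print(distinctive_ngrams(speaker, background))  # e.g. ('well', 'you'), ('you', 'know')
```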

Segmentation of Satellite Imagery using U-Net Models for Land Cover Classification

Title Segmentation of Satellite Imagery using U-Net Models for Land Cover Classification
Authors Priit Ulmas, Innar Liiv
Abstract The focus of this paper is using a convolutional machine learning model with a modified U-Net structure to create land cover classification mappings from satellite imagery. The aim of the research is to train and test convolutional models for automatic land cover mapping and to assess their usability for increasing land cover mapping accuracy and for change detection. To solve these tasks, the authors prepared a dataset and trained machine learning models for land cover classification and semantic segmentation from satellite images. The results were analysed on three different land classification levels. The BigEarthNet satellite image archive was selected as one of the two main datasets for the research. This novel and recent dataset was published in 2019 and includes Sentinel-2 satellite photos from 10 European countries taken in 2017 and 2018. As a second dataset, the authors composed an original set containing a Sentinel-2 image and a CORINE land cover map of Estonia. The developed classification model shows a high overall F1 score of 0.749 on multiclass land cover classification with 43 possible image labels. The model also highlights noisy data in the BigEarthNet dataset, where images appear to have incorrect labels. The segmentation models offer a solution for generating automatic land cover mappings from Sentinel-2 satellite images and show a high IoU score for land cover classes such as forests, inland waters and arable land. The models show potential for increasing the accuracy of existing land classification maps and for land cover change detection.
Tasks Semantic Segmentation
Published 2020-03-05
URL https://arxiv.org/abs/2003.02899v1
PDF https://arxiv.org/pdf/2003.02899v1.pdf
PWC https://paperswithcode.com/paper/segmentation-of-satellite-imagery-using-u-net
Repo
Framework
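
For orientation, a minimal U-Net-style encoder-decoder for per-pixel land cover classification might look like the sketch below. The channel counts, depth, and absence of the paper's modifications are assumptions made only to keep the example short.

```python
# Hedged sketch: a tiny U-Net-style network for multi-class segmentation
# (the paper's modified U-Net and its channel counts differ).
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, in_ch=3, n_classes=5):
        super().__init__()
        self.enc1, self.enc2 = conv_block(in_ch, 16), conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)           # 16 upsampled + 16 skip channels
        self.head = nn.Conv2d(16, n_classes, 1)  # per-pixel class logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)

logits = TinyUNet()(torch.randn(1, 3, 64, 64))   # -> (1, 5, 64, 64)
```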

A Benchmark for Temporal Color Constancy

Title A Benchmark for Temporal Color Constancy
Authors Yanlin Qian, Jani Käpylä, Joni-Kristian Kämäräinen, Samu Koskinen, Jiri Matas
Abstract Temporal Color Constancy (CC) is a recently proposed approach that challenges conventional single-frame color constancy. The conventional approach uses a single frame - the shot frame - to estimate the scene illumination color. In temporal CC, multiple frames from the viewfinder sequence are used to estimate the color. However, there are no realistic large-scale temporal color constancy datasets for method evaluation. In this work, a new temporal CC benchmark is introduced. The benchmark comprises (1) 600 real-world sequences recorded with a high-resolution mobile phone camera, (2) a fixed train-test split which ensures consistent evaluation, and (3) a baseline method which achieves high accuracy on the new benchmark and on the dataset used in previous works. Results for more than 20 well-known color constancy methods, including recent state-of-the-art approaches, are reported in our experiments.
Tasks Color Constancy
Published 2020-03-08
URL https://arxiv.org/abs/2003.03763v1
PDF https://arxiv.org/pdf/2003.03763v1.pdf
PWC https://paperswithcode.com/paper/a-benchmark-for-temporal-color-constancy
Repo
Framework
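
Color constancy methods are commonly evaluated with the angular error between the estimated and ground-truth illuminant. The abstract does not spell out the benchmark's exact protocol, so the snippet below is only a generic reference implementation of that standard metric.

```python
# Hedged sketch: recovery angular error, the usual color constancy metric
# (whether the benchmark uses exactly this protocol is an assumption).
import numpy as np

def angular_error_deg(est_rgb, gt_rgb):
    est = np.asarray(est_rgb, dtype=float)
    gt = np.asarray(gt_rgb, dtype=float)
    cos = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

print(angular_error_deg([1.0, 0.9, 0.7], [1.0, 1.0, 0.8]))  # error in degrees
```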

Print Defect Mapping with Semantic Segmentation

Title Print Defect Mapping with Semantic Segmentation
Authors Augusto C. Valente, Cristina Wada, Deangela Neves, Deangeli Neves, Fábio V. M. Perez, Guilherme A. S. Megeto, Marcos H. Cascone, Otavio Gomes, Qian Lin
Abstract Efficient automated print defect mapping is valuable to the printing industry since such defects directly influence customer-perceived printer quality and manually mapping them is cost-ineffective. Conventional methods consist of complicated and hand-crafted feature engineering techniques, usually targeting only one type of defect. In this paper, we propose the first end-to-end framework to map print defects at the pixel level, adopting an approach based on semantic segmentation. Our framework uses Convolutional Neural Networks, specifically DeepLab-v3+, and achieves promising results in the identification of defects in printed images. We use synthetic training data by simulating two types of print defects and a print-scan effect with image processing and computer graphics techniques. Compared with conventional methods, our framework is versatile, allowing two inference strategies, one being near real-time and providing coarser results, and the other focusing on offline processing with more fine-grained detection. Our model is evaluated on a dataset of real printed images.
Tasks Feature Engineering, Semantic Segmentation
Published 2020-01-27
URL https://arxiv.org/abs/2001.10111v1
PDF https://arxiv.org/pdf/2001.10111v1.pdf
PWC https://paperswithcode.com/paper/print-defect-mapping-with-semantic
Repo
Framework
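
As a toy illustration of the synthetic-data idea, the sketch below injects a simple vertical streak into a clean page and records its pixel-level mask; the paper's defect models and print-scan simulation are considerably more elaborate.

```python
# Hedged sketch: simulating a simple "streak"-type print defect and its mask
# (the paper's defect types and print-scan effect are only approximated here).
import numpy as np

def add_vertical_streak(page, col, width=3, strength=0.4):
    """Darken a vertical band and return the image plus its pixel-level mask."""
    out, mask = page.copy(), np.zeros(page.shape[:2], dtype=np.uint8)
    out[:, col:col + width] *= (1.0 - strength)   # simulated streak defect
    mask[:, col:col + width] = 1                  # ground-truth segmentation label
    return out, mask

page = np.full((64, 64, 3), 0.9)                  # a light-gray "printed" page
defective, mask = add_vertical_streak(page, col=20)
```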

UAV Autonomous Localization using Macro-Features Matching with a CAD Model

Title UAV Autonomous Localization using Macro-Features Matching with a CAD Model
Authors Akkas Haque, Ahmed Elsaharti, Tarek Elderini, Mohamed Atef Elsaharty, Jeremiah Neubert
Abstract Research in the field of autonomous Unmanned Aerial Vehicles (UAVs) has significantly advanced in recent years, mainly due to their relevance in a large variety of commercial, industrial, and military applications. However, UAV navigation in GPS-denied environments continues to be a challenging problem that has been tackled in recent research through sensor-based approaches. This paper presents a novel offline, portable, real-time indoor UAV localization technique that relies on macro-feature detection and matching. The proposed system leverages the support of machine learning, traditional computer vision techniques, and pre-existing knowledge of the environment. The main contribution of this work is the real-time creation of a macro-feature description vector from the UAV’s captured images, which is simultaneously matched against an offline, pre-existing vector from a Computer-Aided Design (CAD) model. This results in a quick UAV localization within the CAD model. The effectiveness and accuracy of the proposed system were evaluated through simulations and experimental prototype implementation. Final results reveal the algorithm’s low computational burden as well as its ease of deployment in GPS-denied environments.
Tasks
Published 2020-01-30
URL https://arxiv.org/abs/2001.11610v1
PDF https://arxiv.org/pdf/2001.11610v1.pdf
PWC https://paperswithcode.com/paper/uav-autonomous-localization-using-macro
Repo
Framework
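
A minimal sketch of the matching step described above: compare an online macro-feature descriptor with descriptors precomputed offline from the CAD model and return the closest pose. The descriptor contents, distance measure, and pose format are placeholders, since the abstract does not specify them.

```python
# Hedged sketch: nearest-neighbor matching of an online descriptor against
# CAD-derived descriptors (descriptor/pose contents are illustrative assumptions).
import numpy as np

def localize(query_desc, cad_descs, cad_poses):
    """Return the CAD pose whose descriptor is closest to the query descriptor."""
    d = np.linalg.norm(cad_descs - query_desc, axis=1)
    return cad_poses[int(np.argmin(d))], float(d.min())

cad_descs = np.array([[1, 0, 2, 1], [0, 3, 1, 0], [2, 1, 0, 2]], dtype=float)
cad_poses = [(0.0, 0.0, 90.0), (3.5, 1.0, 0.0), (6.0, 2.5, 180.0)]  # (x, y, yaw)
pose, dist = localize(np.array([0.2, 2.8, 1.1, 0.1]), cad_descs, cad_poses)
```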

Automatic Hyper-Parameter Optimization Based on Mapping Discovery from Data to Hyper-Parameters

Title Automatic Hyper-Parameter Optimization Based on Mapping Discovery from Data to Hyper-Parameters
Authors Bozhou Chen, Kaixin Zhang, Longshen Ou, Chenmin Ba, Hongzhi Wang, Chunnan Wang
Abstract Machine learning algorithms have made remarkable achievements in the field of artificial intelligence. However, most machine learning algorithms are sensitive to their hyper-parameters. Manually optimizing the hyper-parameters is a common method of hyper-parameter tuning, but it is costly and depends on experience. Automatic hyper-parameter optimization (autoHPO) is favored due to its effectiveness, but current autoHPO methods are usually only effective for a certain type of problem, and their time cost is high. In this paper, we propose an efficient automatic hyper-parameter optimization approach, which is based on a mapping from data to the corresponding hyper-parameters. To describe such a mapping, we propose a sophisticated network structure. To obtain such a mapping, we develop effective network construction algorithms. We also design a strategy to further optimize the result during the application of the mapping. Extensive experimental results demonstrate that the proposed approaches significantly outperform the state-of-the-art approaches.
Tasks
Published 2020-03-03
URL https://arxiv.org/abs/2003.01751v1
PDF https://arxiv.org/pdf/2003.01751v1.pdf
PWC https://paperswithcode.com/paper/automatic-hyper-parameter-optimization-based
Repo
Framework
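
The core idea, learning a mapping from dataset characteristics to good hyper-parameters, can be illustrated with a tiny regressor over hand-picked meta-features. Everything below (the meta-features, the scikit-learn MLP, the toy values) is an assumption for illustration; the paper proposes a far more sophisticated network and training procedure.

```python
# Hedged sketch: regressing good hyper-parameters from dataset meta-features
# (meta-features, model choice, and values are toy stand-ins).
import numpy as np
from sklearn.neural_network import MLPRegressor

# meta-features of past datasets: [log10(n_samples), n_features, class entropy]
meta = np.array([[3.0, 10, 0.9], [5.0, 50, 0.4], [3.7, 20, 0.7]])
# hyper-parameters that worked well on them: [learning_rate, depth]
best_hp = np.array([[0.10, 3], [0.01, 8], [0.05, 5]])

mapper = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mapper.fit(meta, best_hp)

new_dataset_meta = np.array([[4.3, 30, 0.6]])
print(mapper.predict(new_dataset_meta))  # predicted hyper-parameters to start from
```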

Detection Method Based on Automatic Visual Shape Clustering for Pin-Missing Defect in Transmission Lines

Title Detection Method Based on Automatic Visual Shape Clustering for Pin-Missing Defect in Transmission Lines
Authors Zhenbing Zhao, Hongyu Qi, Yincheng Qi, Ke Zhang, Yongjie Zhai, Wenqing Zhao
Abstract Bolts are the most numerous fasteners in transmission lines and are prone to losing their split pins. How to realize automatic pin-missing defect detection for bolts in transmission lines, so as to achieve timely and efficient troubleshooting, is a difficult problem and a long-term research target for power systems. In this paper, an automatic detection model called the Automatic Visual Shape Clustering Network (AVSCNet) for pin-missing defects is constructed. Firstly, an unsupervised clustering method for the visual shapes of bolts is proposed and applied to construct a defect detection model which can learn differences in visual shape. Next, three deep convolutional neural network optimization methods are used in the model: feature enhancement, feature fusion, and region feature extraction. The defect detection results are obtained by applying regression and classification to the regional features. In this paper, object detection models based on different networks are tested on a pin-missing defect dataset constructed from aerial images of transmission lines from multiple locations, and the proposed model is evaluated with various indicators and fully verified. The results show that our method achieves a considerably satisfactory detection effect.
Tasks Object Detection
Published 2020-01-17
URL https://arxiv.org/abs/2001.06236v1
PDF https://arxiv.org/pdf/2001.06236v1.pdf
PWC https://paperswithcode.com/paper/detection-method-based-on-automatic-visual
Repo
Framework
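
The first step, unsupervised clustering of bolt visual shapes, could look roughly like the following; the shape descriptors and the choice of k-means are stand-ins, as the abstract does not describe the actual features or clustering algorithm.

```python
# Hedged sketch: clustering per-bolt shape descriptors so a detector can
# condition on the visual-shape cluster (descriptors and k-means are assumptions).
import numpy as np
from sklearn.cluster import KMeans

shape_feats = np.random.rand(200, 16)   # stand-in for per-bolt shape descriptors
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(shape_feats)
# each bolt is now assigned a visual-shape cluster label
```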

A Unified System for Aggression Identification in English Code-Mixed and Uni-Lingual Texts

Title A Unified System for Aggression Identification in English Code-Mixed and Uni-Lingual Texts
Authors Anant Khandelwal, Niraj Kumar
Abstract Wide usage of social media platforms has increased the risk of aggression, which results in mental stress and negatively affects people’s lives through psychological agony, fighting behavior, and disrespect toward others. The majority of such conversations contain code-mixed languages [28]. Additionally, the way used to express thoughts or the communication style also changes from one social media platform to another (e.g., communication styles differ between Twitter and Facebook). All of this has increased the complexity of the problem. To solve these problems, we have introduced a unified and robust multi-modal deep learning architecture which works for both an English code-mixed dataset and a uni-lingual English dataset. The devised system uses psycho-linguistic features and very basic linguistic features. Our multi-modal deep learning architecture contains a Deep Pyramid CNN, a Pooled BiLSTM, and a Disconnected RNN (with both GloVe and FastText embeddings). Finally, the system takes its decision based on model averaging. We evaluated our system on the English code-mixed TRAC 2018 dataset and a uni-lingual English dataset obtained from Kaggle. Experimental results show that our proposed system outperforms all previous approaches on both the English code-mixed dataset and the uni-lingual English dataset.
Tasks
Published 2020-01-15
URL https://arxiv.org/abs/2001.05493v2
PDF https://arxiv.org/pdf/2001.05493v2.pdf
PWC https://paperswithcode.com/paper/aggressionnet-generalised-multi-modal-deep
Repo
Framework
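
The final decision by model averaging is simple to illustrate: average the class probabilities of the individual models and take the argmax. The probabilities below are stand-ins for the outputs of the DPCNN, Pooled BiLSTM, and Disconnected RNN models.

```python
# Hedged sketch: model averaging over per-class probabilities (toy values).
import numpy as np

probs_dpcnn  = np.array([[0.7, 0.2, 0.1]])   # P(class) from model 1
probs_bilstm = np.array([[0.5, 0.3, 0.2]])   # model 2
probs_drnn   = np.array([[0.6, 0.1, 0.3]])   # model 3

avg = np.mean([probs_dpcnn, probs_bilstm, probs_drnn], axis=0)
prediction = int(np.argmax(avg, axis=1)[0])  # final class after averaging
```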

Going in circles is the way forward: the role of recurrence in visual inference

Title Going in circles is the way forward: the role of recurrence in visual inference
Authors Ruben S. van Bergen, Nikolaus Kriegeskorte
Abstract Biological visual systems exhibit abundant recurrent connectivity. State-of-the-art neural network models for visual recognition, by contrast, rely heavily or exclusively on feedforward computation. Any finite-time recurrent neural network (RNN) can be unrolled along time to yield an equivalent feedforward neural network (FNN). This important insight suggests that computational neuroscientists may not need to engage recurrent computation, and that computer-vision engineers may be limiting themselves to a special case of FNN if they build recurrent models. Here we argue, to the contrary, that FNNs are a special case of RNNs and that computational neuroscientists and engineers should engage recurrence to understand how brains and machines can (1) achieve greater and more flexible computational depth, (2) compress complex computations into limited hardware, (3) integrate priors and priorities into visual inference through expectation and attention, (4) exploit sequential dependencies in their data for better inference and prediction, and (5) leverage the power of iterative computation.
Tasks
Published 2020-03-26
URL https://arxiv.org/abs/2003.12128v1
PDF https://arxiv.org/pdf/2003.12128v1.pdf
PWC https://paperswithcode.com/paper/going-in-circles-is-the-way-forward-the-role
Repo
Framework
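
The unrolling equivalence the authors build on can be demonstrated in a few lines: running an RNN for T steps gives exactly the same output as a T-layer feedforward pass whose layers happen to share weights. The sketch below uses a toy tanh RNN for illustration.

```python
# Sketch: a finite-time RNN and its unrolled "feedforward" form give identical
# outputs (weight sharing is what distinguishes the recurrent formulation).
import numpy as np

rng = np.random.default_rng(0)
W_h = rng.standard_normal((4, 4)) * 0.1
W_x = rng.standard_normal((4, 3)) * 0.1

def rnn(xs):                                  # recurrent form
    h = np.zeros(4)
    for x in xs:
        h = np.tanh(W_h @ h + W_x @ x)
    return h

def unrolled(xs):                             # T distinct "layers" sharing weights
    h = np.zeros(4)
    layers = [(W_h, W_x)] * len(xs)
    for (Wh, Wx), x in zip(layers, xs):
        h = np.tanh(Wh @ h + Wx @ x)
    return h

xs = rng.standard_normal((5, 3))
assert np.allclose(rnn(xs), unrolled(xs))     # identical outputs
```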

Identity Recognition in Intelligent Cars with Behavioral Data and LSTM-ResNet Classifier

Title Identity Recognition in Intelligent Cars with Behavioral Data and LSTM-ResNet Classifier
Authors Michael Hammann, Maximilian Kraus, Sina Shafaei, Alois Knoll
Abstract Identity recognition in a car cabin is a critical task nowadays and offers a great field of applications, ranging from personalizing intelligent cars to suit drivers’ physical and behavioral needs to increasing safety and security. However, the performance and applicability of published approaches are still not suitable for use in series cars and need to be improved. In this paper, we investigate human identity recognition in a car cabin with Time Series Classification (TSC) and deep neural networks. We use gas and brake pedal pressure as input to our models. This data is easily collectable during everyday driving. Since our classifiers have very small memory requirements and do not require any input data preprocessing, we were able to train on a single Intel i5-3210M processor. Our classification approach is based on a combination of LSTM and ResNet. The network trained on a subset of NUDrive outperforms the ResNet and LSTM models trained individually by 35.9% and 53.85% accuracy, respectively. We reach a final accuracy of 79.49% on a 10-driver subset of NUDrive and 96.90% on a 5-driver subset of UTDrive.
Tasks Time Series, Time Series Classification
Published 2020-03-02
URL https://arxiv.org/abs/2003.00770v1
PDF https://arxiv.org/pdf/2003.00770v1.pdf
PWC https://paperswithcode.com/paper/identity-recognition-in-intelligent-cars-with
Repo
Framework
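
A rough sketch of an LSTM-plus-residual-convolution classifier over pedal-pressure time series is given below; the branch sizes, fusion by concatenation, and number of classes are assumptions, since the abstract does not detail the architecture.

```python
# Hedged sketch: LSTM branch + residual 1-D conv branch over pedal signals
# (architecture details are assumptions, not the paper's exact LSTM-ResNet).
import torch
import torch.nn as nn

class LSTMResNetTSC(nn.Module):
    def __init__(self, n_channels=2, n_classes=10, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.conv1 = nn.Conv1d(n_channels, hidden, 7, padding=3)
        self.conv2 = nn.Conv1d(hidden, hidden, 7, padding=3)
        self.skip = nn.Conv1d(n_channels, hidden, 1)      # residual shortcut
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                                 # x: (batch, time, channels)
        _, (h, _) = self.lstm(x)
        xc = x.transpose(1, 2)                            # (batch, channels, time)
        res = torch.relu(self.conv2(torch.relu(self.conv1(xc))) + self.skip(xc))
        feats = torch.cat([h[-1], res.mean(dim=2)], dim=1)
        return self.fc(feats)

# gas and brake pedal pressure over 200 time steps for a batch of 4 drivers
logits = LSTMResNetTSC()(torch.randn(4, 200, 2))
```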

Fastidious Attention Network for Navel Orange Segmentation

Title Fastidious Attention Network for Navel Orange Segmentation
Authors Xiaoye Sun, Gongyan Li, Shaoyun Xu
Abstract Deep learning achieves excellent performance in many domains, so we not only apply it to the navel orange semantic segmentation task to solve the two problems of distinguishing defect categories and identifying the stem end and blossom end, but also propose a fastidious attention mechanism to further improve model performance. This lightweight attention mechanism includes two learnable parameters, activations and thresholds, to capture long-range dependence. Specifically, the threshold picks out part of the spatial feature map and the activation excites this area. Based on activations and thresholds learned from different types of feature maps, we design a fastidious self-attention module (FSAM) and a fastidious inter-attention module (FIAM). We then construct the Fastidious Attention Network (FANet), which uses U-Net as the backbone and embeds these two modules, to solve the semantic segmentation problems for stem end, blossom end, flaw and ulcer. Compared with some state-of-the-art deep-learning-based networks on our navel orange dataset, experiments show that our network achieves the best performance, with pixel accuracy of 99.105%, mean accuracy of 77.468%, mean IU of 70.375% and frequency-weighted IU of 98.335%. The embedded modules also show better discrimination of the 5 categories including background; in particular, the IU of flaw is increased by 3.165%.
Tasks Semantic Segmentation
Published 2020-03-26
URL https://arxiv.org/abs/2003.11734v1
PDF https://arxiv.org/pdf/2003.11734v1.pdf
PWC https://paperswithcode.com/paper/fastidious-attention-network-for-navel-orange
Repo
Framework
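
One plausible, and speculative, reading of "the threshold picks out part of the spatial feature map and the activation excites this area" is a module with a learnable threshold and a learnable excitation scalar, sketched below; the actual FSAM/FIAM modules likely differ in detail.

```python
# Speculative sketch: learnable threshold selects a spatial region, learnable
# activation excites it (NOT the paper's exact FSAM/FIAM formulation).
import torch
import torch.nn as nn

class ThresholdExcite(nn.Module):
    def __init__(self):
        super().__init__()
        self.threshold = nn.Parameter(torch.tensor(0.5))
        self.activation = nn.Parameter(torch.tensor(1.0))

    def forward(self, feat):                              # feat: (B, C, H, W)
        gate = torch.sigmoid(feat.mean(dim=1, keepdim=True) - self.threshold)
        return feat * (1.0 + self.activation * gate)      # excite selected region

out = ThresholdExcite()(torch.randn(2, 16, 32, 32))
```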

A Learning Strategy for Contrast-agnostic MRI Segmentation

Title A Learning Strategy for Contrast-agnostic MRI Segmentation
Authors Benjamin Billot, Douglas Greve, Koen Van Leemput, Bruce Fischl, Juan Eugenio Iglesias, Adrian V. Dalca
Abstract We present a deep learning strategy that enables, for the first time, contrast-agnostic semantic segmentation of completely unpreprocessed brain MRI scans, without requiring additional training or fine-tuning for new modalities. Classical Bayesian methods address this segmentation problem with unsupervised intensity models, but require significant computational resources. In contrast, learning-based methods can be fast at test time, but are sensitive to the data available at training. Our proposed learning method, SynthSeg, leverages a set of training segmentations (no intensity images required) to generate synthetic sample images of widely varying contrasts on the fly during training. These samples are produced using the generative model of the classical Bayesian segmentation framework, with randomly sampled parameters for appearance, deformation, noise, and bias field. Because each mini-batch has a different synthetic contrast, the final network is not biased towards any MRI contrast. We comprehensively evaluate our approach on four datasets comprising over 1,000 subjects and four types of MR contrast. The results show that our approach successfully segments every contrast in the data, performing slightly better than classical Bayesian segmentation, and three orders of magnitude faster. Moreover, even within the same type of MRI contrast, our strategy generalizes significantly better across datasets, compared to training using real images. Finally, we find that synthesizing a broad range of contrasts, even if unrealistic, increases the generalization of the neural network. Our code and model are open source at https://github.com/BBillot/SynthSeg.
Tasks Semantic Segmentation
Published 2020-03-04
URL https://arxiv.org/abs/2003.01995v1
PDF https://arxiv.org/pdf/2003.01995v1.pdf
PWC https://paperswithcode.com/paper/a-learning-strategy-for-contrast-agnostic-mri
Repo
Framework
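
The generative idea can be sketched by sampling a random intensity per label, then adding a bias field and noise, which yields a new contrast for every draw. The real SynthSeg generator is 3-D and also applies random spatial deformations; this 2-D toy version is only an illustration (see the authors' repository linked in the abstract for the actual model).

```python
# Hedged 2-D sketch: synthesizing a random-contrast image from a label map
# (the real generator is 3-D and includes random deformations and more).
import numpy as np

def synth_image(labels, rng):
    img = np.zeros(labels.shape, dtype=float)
    for lab in np.unique(labels):
        mean, std = rng.uniform(0, 1), rng.uniform(0.01, 0.1)
        img[labels == lab] = rng.normal(mean, std, size=(labels == lab).sum())
    # crude multiplicative bias field and additive noise
    yy, xx = np.mgrid[0:labels.shape[0], 0:labels.shape[1]]
    bias = 1 + 0.2 * np.sin(xx / 20.0) * np.cos(yy / 25.0)
    return img * bias + rng.normal(0, 0.02, labels.shape)

rng = np.random.default_rng(0)
label_map = np.zeros((64, 64), dtype=int)
label_map[16:48, 16:48] = 1
sample = synth_image(label_map, rng)   # a new random contrast on every call
```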

Towards Noise-resistant Object Detection with Noisy Annotations

Title Towards Noise-resistant Object Detection with Noisy Annotations
Authors Junnan Li, Caiming Xiong, Richard Socher, Steven Hoi
Abstract Training deep object detectors requires a significant amount of human-annotated images with accurate object labels and bounding box coordinates, which are extremely expensive to acquire. Noisy annotations are much more easily accessible, but they could be detrimental for learning. We address the challenging problem of training object detectors with noisy annotations, where the noise contains a mixture of label noise and bounding box noise. We propose a learning framework which jointly optimizes object labels, bounding box coordinates, and model parameters by performing alternating noise correction and model training. To disentangle label noise and bounding box noise, we propose a two-step noise correction method. The first step performs class-agnostic bounding box correction by minimizing classifier discrepancy and maximizing region objectness. The second step distils knowledge from dual detection heads for soft label correction and class-specific bounding box refinement. We conduct experiments on the PASCAL VOC and MS-COCO datasets with both synthetic noise and machine-generated noise. Our method achieves state-of-the-art performance by effectively cleaning both label noise and bounding box noise. Code to reproduce all results will be released.
Tasks Object Detection
Published 2020-03-03
URL https://arxiv.org/abs/2003.01285v1
PDF https://arxiv.org/pdf/2003.01285v1.pdf
PWC https://paperswithcode.com/paper/towards-noise-resistant-object-detection-with
Repo
Framework