January 29, 2020

3412 words 17 mins read

Paper Group ANR 601



Streaming Networks: Enable A Robust Classification of Noise-Corrupted Images

Title Streaming Networks: Enable A Robust Classification of Noise-Corrupted Images
Authors Sergey Tarasenko, Fumihiko Takahashi
Abstract Convolutional neural nets (conv nets) have achieved state-of-the-art performance in many applications of image and video processing. The most recent studies illustrate that conv nets’ recognition accuracy is fragile to various image distortions such as noise, scaling, rotation, etc. In this study we focus on the problem of robust recognition of random-noise-distorted images. A common solution to this problem is either to add many noisy images to the training dataset, which can be very costly, or to use sophisticated loss functions and denoising techniques. We introduce a novel conv net architecture with multiple streams. Each stream takes a certain intensity slice of the original image as input, and each stream’s parameters are trained independently. We call this novel network a “Streaming Net”. Our results indicate that Streaming Net outperforms a 1-stream conv net (employed as a single stream) and a 1-stream wide conv net (which employs the same number of filters as Streaming Net) in recognition accuracy on noise-corrupted images, while producing the same or higher recognition accuracy on noise-free images in almost all tests. Thus, we introduce a new, simple method to increase the robustness of recognition of noisy images without data generation or sophisticated training techniques.
Tasks Denoising
Published 2019-10-23
URL https://arxiv.org/abs/1910.11107v1
PDF https://arxiv.org/pdf/1910.11107v1.pdf
PWC https://paperswithcode.com/paper/streaming-networks-enable-a-robust
Repo
Framework
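
The multi-stream idea above lends itself to a compact sketch. Below is a minimal, hedged PyTorch rendering of a Streaming Net: the input is cut into intensity slices, each slice drives its own independently parameterized stream, and the streams’ features are concatenated for classification. The slice boundaries, stream depth, and head size are illustrative assumptions, not the paper’s exact configuration.

```python
import torch
import torch.nn as nn

def intensity_slice(x, lo, hi):
    """Keep pixels whose intensity falls in [lo, hi); zero out the rest."""
    return x * ((x >= lo) & (x < hi))

class Stream(nn.Module):
    """One small conv stream with its own parameters."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
    def forward(self, x):
        return self.body(x).flatten(1)

class StreamingNet(nn.Module):
    def __init__(self, n_streams=4, n_classes=10):
        super().__init__()
        self.bounds = torch.linspace(0.0, 1.0, n_streams + 1)
        self.streams = nn.ModuleList(Stream() for _ in range(n_streams))
        self.head = nn.Linear(32 * n_streams, n_classes)
    def forward(self, x):  # x: (B, 1, H, W), intensities in [0, 1]
        feats = [s(intensity_slice(x, self.bounds[i], self.bounds[i + 1]))
                 for i, s in enumerate(self.streams)]
        return self.head(torch.cat(feats, dim=1))
```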

A Novel Visual Fault Detection and Classification System for Semiconductor Manufacturing Using Stacked Hybrid Convolutional Neural Networks

Title A Novel Visual Fault Detection and Classification System for Semiconductor Manufacturing Using Stacked Hybrid Convolutional Neural Networks
Authors Tobias Schlosser, Frederik Beuth, Michael Friedrich, Danny Kowerko
Abstract Automated visual inspection in the semiconductor industry aims to detect and classify manufacturing defects using modern image processing techniques. As the earliest possible detection of defect patterns enables quality control and the automation of manufacturing chains, manufacturers benefit from increased yield and reduced manufacturing costs. Since classical image processing systems are limited in their ability to detect novel defect patterns, and machine learning approaches often involve a tremendous amount of computational effort, this contribution introduces a novel deep neural network-based hybrid approach. Unlike classical deep neural networks, a multi-stage system allows the detection and classification of the finest structures, down to pixel size, within high-resolution imagery. Consisting of stacked hybrid convolutional neural networks (SH-CNN) and inspired by current approaches to visual attention, the realized system shifts its focus across levels of detail, from fine structures to more task-relevant areas of interest. The results of our test environment show that the SH-CNN outperforms current approaches to learning-based automated visual inspection, while distinguishing defects by level of detail enables their elimination in earlier stages of the manufacturing process.
Tasks Fault Detection
Published 2019-11-25
URL https://arxiv.org/abs/1911.11250v3
PDF https://arxiv.org/pdf/1911.11250v3.pdf
PWC https://paperswithcode.com/paper/a-novel-visual-fault-detection-and
Repo
Framework
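
The multi-stage behavior described above can be illustrated with a generic coarse-to-fine inspection loop: a cheap first stage screens tiles of the high-resolution image, and a finer second stage classifies only the flagged tiles. This is a hedged sketch of the general pattern; both models, the tile size, and the threshold are placeholders, since the paper’s concrete SH-CNN architecture is not reproduced here.

```python
def two_stage_inspect(image, coarse_model, fine_model, tile=256, thresh=0.5):
    """Scan an H x W image in tiles; return (row, col, defect_class) findings.
    coarse_model and fine_model are hypothetical stand-ins for the stacked stages."""
    h, w = image.shape[:2]
    findings = []
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            patch = image[r:r + tile, c:c + tile]
            if coarse_model.defect_score(patch) > thresh:        # cheap screening pass
                findings.append((r, c, fine_model.classify(patch)))  # detailed pass
    return findings
```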

Multi-PCA based Fault Detection Model Combined with Prior knowledge of HVAC

Title Multi-PCA based Fault Detection Model Combined with Prior knowledge of HVAC
Authors Ziming Liu, Xiaobo Liu
Abstract Traditional PCA fault detection methods depend entirely on the training data; prior knowledge, such as the physical principles of the system, is not taken into account. In this paper, we propose a new multi-PCA fault detection model combined with prior knowledge. The model adapts to the variable operating conditions of a central air conditioning system, detects small deviation faults of sensors, and significantly shortens the time delay in detecting drift faults. We also conduct extensive ablation experiments to demonstrate that our model is more robust and efficient.
Tasks Fault Detection
Published 2019-11-21
URL https://arxiv.org/abs/1911.13263v1
PDF https://arxiv.org/pdf/1911.13263v1.pdf
PWC https://paperswithcode.com/paper/multi-pca-based-fault-detection-model
Repo
Framework
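
As background for the entry above, a single-PCA fault detector is commonly built by fitting PCA on healthy data and flagging samples whose squared prediction error (the SPE, or Q statistic) exceeds a control limit; the multi-PCA variant keeps one such model per operating mode. The sketch below assumes the mode label comes from prior knowledge (e.g., the HVAC operating condition) and uses a 99th-percentile limit, both illustrative choices.

```python
import numpy as np
from sklearn.decomposition import PCA

class MultiPCADetector:
    """One PCA model per operating mode; SPE-based fault flagging."""
    def __init__(self, n_components=3):
        self.n_components = n_components
        self.models, self.limits = {}, {}

    def fit(self, healthy_data_by_mode):
        """healthy_data_by_mode: {mode: (n_samples, n_sensors) array}."""
        for mode, X in healthy_data_by_mode.items():
            pca = PCA(n_components=self.n_components).fit(X)
            resid = X - pca.inverse_transform(pca.transform(X))
            spe = (resid ** 2).sum(axis=1)
            self.models[mode] = pca
            self.limits[mode] = np.percentile(spe, 99)  # assumed control limit
        return self

    def is_faulty(self, x, mode):
        """mode is supplied by prior knowledge of the operating condition."""
        pca = self.models[mode]
        x = x.reshape(1, -1)
        spe = ((x - pca.inverse_transform(pca.transform(x))) ** 2).sum()
        return spe > self.limits[mode]
```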

White Noise Analysis of Neural Networks

Title White Noise Analysis of Neural Networks
Authors Ali Borji, Sikun Lin
Abstract A white noise analysis of modern deep neural networks is presented to unveil their biases at the whole-network level and the single-neuron level. Our analysis is based on two popular and related methods in psychophysics and neurophysiology, namely classification images and spike-triggered analysis. These methods have been widely used to understand the underlying mechanisms of sensory systems in humans and monkeys. We leverage them to investigate the inherent biases of deep neural networks and to obtain a first-order approximation of their functionality. We emphasize CNNs, since they are currently the state-of-the-art methods in computer vision and a decent model of human visual processing. In addition, we study multi-layer perceptrons, logistic regression, and recurrent neural networks. Experiments over four classic datasets, MNIST, Fashion-MNIST, CIFAR-10, and ImageNet, show that the computed bias maps resemble the target classes and, when used for classification, lead to performance more than twice the chance level. Further, we show that classification images can be used to attack a black-box classifier and to detect adversarial patch attacks. Finally, we utilize spike-triggered averaging to derive the filters of CNNs and explore how the behavior of a network changes when neurons in different layers are modulated. Our effort illustrates a successful example of borrowing from neuroscience to study ANNs and highlights the importance of cross-fertilization and synergy across machine learning, deep learning, and computational neuroscience.
Tasks
Published 2019-12-23
URL https://arxiv.org/abs/1912.12106v1
PDF https://arxiv.org/pdf/1912.12106v1.pdf
PWC https://paperswithcode.com/paper/white-noise-analysis-of-neural-networks-1
Repo
Framework
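
The classification-images procedure mentioned above is simple enough to sketch: feed pure white noise to a trained classifier and average the noise samples it assigns to each class, yielding one bias map per class. Sample count, batch size, and the use of standard Gaussian noise are illustrative assumptions.

```python
import torch

@torch.no_grad()
def classification_images(model, n_classes, shape, n_samples=100_000, batch=500):
    """Average white-noise inputs by the class the model assigns to them."""
    sums = torch.zeros(n_classes, *shape)
    counts = torch.zeros(n_classes)
    for _ in range(n_samples // batch):
        noise = torch.randn(batch, *shape)       # white-noise stimuli
        preds = model(noise).argmax(dim=1)       # the network's decisions
        for c in range(n_classes):
            chosen = noise[preds == c]
            sums[c] += chosen.sum(dim=0)
            counts[c] += chosen.shape[0]
    # per-class mean noise = first-order "bias map" of the classifier
    return sums / counts.clamp(min=1).view(-1, *([1] * len(shape)))
```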

Where Do Human Heuristics Come From?

Title Where Do Human Heuristics Come From?
Authors Marcel Binz, Dominik Endres
Abstract Human decision-making deviates in many situations from the optimal solution that maximizes cumulative rewards. Here we approach this discrepancy from the perspective of bounded rationality, and our goal is to provide a justification for such seemingly sub-optimal strategies. More specifically, we investigate the hypothesis that humans do not know optimal decision-making algorithms in advance, but instead employ a learned, resource-bounded approximation. The idea is formalized by combining a recently proposed meta-learning model based on Recurrent Neural Networks with a resource-bounded objective. The resulting approach is closely connected to variational inference and the Minimum Description Length principle. Empirical evidence is obtained from a two-armed bandit task, where we observe patterns in our family of models that resemble differences between individual human participants.
Tasks Decision Making, Meta-Learning
Published 2019-02-20
URL https://arxiv.org/abs/1902.07580v2
PDF https://arxiv.org/pdf/1902.07580v2.pdf
PWC https://paperswithcode.com/paper/where-do-human-heuristics-come-from
Repo
Framework
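
The “resource-bounded objective” above is usually written as a trade-off between expected reward and the information cost of the learned policy, which is also what ties it to variational inference and MDL. One plausible form, under standard bounded-rationality notation (the paper’s exact formulation may differ), is:

```latex
% Expected reward under the approximate posterior over policy parameters,
% penalized by the description length of moving away from the prior.
\max_{q(\theta)} \;
  \mathbb{E}_{q(\theta)}\!\left[ R(\theta) \right]
  \;-\; \beta \, \mathrm{KL}\!\left( q(\theta) \,\|\, p(\theta) \right)
```

Here beta sets the resource budget: beta near zero recovers unbounded reward maximization, while a large beta forces the learned policy to stay close to the cheap-to-describe prior.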

FaultNet: Faulty Rail-Valves Detection using Deep Learning and Computer Vision

Title FaultNet: Faulty Rail-Valves Detection using Deep Learning and Computer Vision
Authors Ramanpreet Singh Pahwa, Jin Chao, Jestine Paul, Yiqun Li, Ma Tin Lay Nwe, Shudong Xie, Ashish James, Arulmurugan Ambikapathi, Zeng Zeng, Vijay Ramaseshan Chandrasekhar
Abstract Regular inspection of rail valves and engines is an important task for ensuring the safety and efficiency of railway networks around the globe. Over the past decade, computer vision and pattern recognition based techniques have gained traction for such inspection and defect detection tasks. An automated, end-to-end trained system can potentially provide a low-cost, high-throughput alternative to manual visual inspection of these components. However, such systems require a huge number of defect images for networks to understand complex defects. In this paper, a multi-phase deep learning based technique is proposed to perform accurate fault detection of rail valves. Our approach uses a two-step method to perform high-precision, pixel-wise accurate image segmentation of rail valves. Thereafter, a computer vision technique is used to identify faulty valves. We demonstrate that the proposed approach improves detection performance compared to current state-of-the-art techniques used in fault detection.
Tasks Fault Detection, Semantic Segmentation
Published 2019-11-09
URL https://arxiv.org/abs/1912.04219v1
PDF https://arxiv.org/pdf/1912.04219v1.pdf
PWC https://paperswithcode.com/paper/faultnet-faulty-rail-valves-detection-using
Repo
Framework
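
The two-step pipeline described above separates learning from rule-based judgment, which can be sketched generically: a segmentation model yields a pixel-wise valve mask, and a plain computer-vision measurement on the mask decides whether the valve is faulty. The geometric gap-ratio rule below is an illustrative stand-in, since the paper does not spell out its exact criterion here.

```python
import numpy as np

def detect_faulty_valve(image, seg_model, min_area=500, max_gap_ratio=0.15):
    """seg_model is a hypothetical stand-in for the trained segmentation net."""
    mask = seg_model.predict(image) > 0.5            # step 1: pixel-wise valve mask
    ys, xs = np.nonzero(mask)
    if ys.size < min_area:                           # no valve found in the image
        return None
    box_area = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    gap_ratio = 1.0 - ys.size / box_area             # empty space inside the box
    return gap_ratio > max_gap_ratio                 # step 2: geometric fault rule
```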

SeizureNet: Multi-Spectral Deep Feature Learning for Seizure Type Classification

Title SeizureNet: Multi-Spectral Deep Feature Learning for Seizure Type Classification
Authors Umar Asif, Subhrajit Roy, Jianbin Tang, Stefan Harrer
Abstract Automatic classification of epileptic seizure types in EEG data could enable more precise diagnosis and efficient management of the disease. Automatic seizure type classification using clinical electroencephalograms (EEGs) is challenging due to factors such as low signal-to-noise ratios, signal artefacts, high variance in the seizure semiology among individual epileptic patients, and limited availability of clinical data. To overcome these challenges, in this paper, we present a deep learning based framework which learns multi-spectral feature embeddings using multiple CNN models in an ensemble architecture for accurate cross-patient seizure type classification. Experiments on the recently released TUH EEG Seizure Corpus show that our multi-spectral dense feature learning produces a weighted f1 score of 0.98 for seizure type classification, setting new benchmarks on the dataset.
Tasks EEG, Seizure Detection
Published 2019-03-08
URL https://arxiv.org/abs/1903.03232v3
PDF https://arxiv.org/pdf/1903.03232v3.pdf
PWC https://paperswithcode.com/paper/seizurenet-a-deep-convolutional-neural
Repo
Framework
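
“Multi-spectral feature learning” as described above amounts to giving an ensemble of CNNs the same EEG window at several frequency resolutions. A hedged sketch follows; the window lengths, log-power features, and probability averaging are illustrative assumptions rather than the paper’s exact design.

```python
import numpy as np
from scipy.signal import spectrogram

def multi_spectral_predict(eeg, fs, models, npersegs=(64, 128, 256)):
    """eeg: 1-D signal; fs: sampling rate; models: one classifier per resolution,
    each a hypothetical object exposing predict_proba on a spectrogram."""
    probs = []
    for nperseg, model in zip(npersegs, models):
        _, _, sxx = spectrogram(eeg, fs=fs, nperseg=nperseg)
        feat = np.log(sxx + 1e-8)                     # log-power spectrogram
        probs.append(model.predict_proba(feat[None, ...]))
    return np.mean(probs, axis=0).argmax()            # ensemble-averaged decision
```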

Automated Focal Loss for Image based Object Detection

Title Automated Focal Loss for Image based Object Detection
Authors Michael Weber, Michael Fürst, J. Marius Zöllner
Abstract Current state-of-the-art object detection algorithms still suffer from the imbalanced distribution of training data over object classes and background. Recent work introduced a new loss function called focal loss to mitigate this problem, but at the cost of an additional hyperparameter. Manually tuning this hyperparameter for each training task is highly time-consuming. With automated focal loss we introduce a new loss function which substitutes this hyperparameter with a parameter that is automatically adapted during training and controls the amount of focusing on hard training examples. We show on the COCO benchmark that this leads to up to 30% faster training convergence. We further introduce a focal regression loss which, on the more challenging task of 3D vehicle detection, outperforms other loss functions by up to 1.8 AOS and can be used as a value-range-independent metric for regression.
Tasks Object Detection
Published 2019-04-19
URL http://arxiv.org/abs/1904.09048v1
PDF http://arxiv.org/pdf/1904.09048v1.pdf
PWC https://paperswithcode.com/paper/automated-focal-loss-for-image-based-object
Repo
Framework
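
The key move above is replacing focal loss’s hand-tuned focusing parameter with a value adapted during training. Below is a hedged PyTorch sketch in which gamma tracks a running estimate of the model’s confidence on true classes; this specific schedule is an illustrative assumption, as the paper defines its own adaptation rule.

```python
import torch
import torch.nn.functional as F

class AutomatedFocalLoss(torch.nn.Module):
    """Focal loss whose focusing parameter adapts to training progress."""
    def __init__(self, momentum=0.99):
        super().__init__()
        self.momentum = momentum
        self.register_buffer("avg_pt", torch.tensor(0.5))  # running true-class prob

    def forward(self, logits, targets):
        ce = F.cross_entropy(logits, targets, reduction="none")
        pt = torch.exp(-ce)                       # probability of the true class
        with torch.no_grad():                     # update the progress estimate
            self.avg_pt.mul_(self.momentum).add_((1 - self.momentum) * pt.mean())
        gamma = 2.0 * self.avg_pt                 # assumed schedule: focus harder
        return ((1 - pt) ** gamma * ce).mean()    # as the model grows confident
```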

Testing that a Local Optimum of the Likelihood is Globally Optimum using Reparameterized Embeddings

Title Testing that a Local Optimum of the Likelihood is Globally Optimum using Reparameterized Embeddings
Authors Joel W. LeBlanc, Brian J. Thelen, Alfred O. Hero
Abstract Many mathematical imaging problems are posed as non-convex optimization problems. When numerically tractable global optimization procedures are not available, one is often interested in testing ex post facto whether or not a locally convergent algorithm has found the globally optimal solution. When the problem is formulated in terms of maximizing the likelihood function of a statistical model, a local test of global optimality can be constructed. In this paper, we develop such a test, based on a global maximum validation function proposed by Biernacki, under the assumption that the statistical distribution is in the generalized location family, a condition often satisfied in inverse problems. In addition, a new reparameterization and embedding is presented that exploits knowledge about the forward operator to improve the global maximum validation function. It is shown that the reparameterized embedding can be gainfully applied to a physically motivated joint inverse problem arising in camera-blur estimation. Improved accuracy and reduced computation are demonstrated for the proposed global maximum testing method.
Tasks
Published 2019-05-31
URL https://arxiv.org/abs/1906.00101v2
PDF https://arxiv.org/pdf/1906.00101v2.pdf
PWC https://paperswithcode.com/paper/testing-that-a-local-optimum-of-the
Repo
Framework
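
Although the paper’s test statistic is not reproduced here, the flavor of an ex post facto global-optimality check can be sketched with a parametric bootstrap in the spirit of Biernacki’s validation function: simulate data from the fitted model, re-run the same local optimizer, and ask whether the observed maximized log-likelihood is typical of those refits. All callables below are placeholders.

```python
import numpy as np

def looks_globally_optimal(theta_hat, data, loglik, simulate, optimize,
                           n_sims=200, alpha=0.05):
    """loglik(theta, data) -> float; simulate(theta) -> dataset;
    optimize(dataset) -> locally optimal theta. All are user-supplied stubs."""
    observed = loglik(theta_hat, data)
    refits = []
    for _ in range(n_sims):
        sim_data = simulate(theta_hat)            # parametric bootstrap sample
        refits.append(loglik(optimize(sim_data), sim_data))
    # a value far below what refits achieve suggests theta_hat is a spurious
    # local optimum rather than the global one
    return observed >= np.quantile(refits, alpha)
```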

Adaptive Class Weight based Dual Focal Loss for Improved Semantic Segmentation

Title Adaptive Class Weight based Dual Focal Loss for Improved Semantic Segmentation
Authors Md Sazzad Hossain, Andrew P Paplinski, John M Betts
Abstract In this paper, we propose a Dual Focal Loss (DFL) function as a replacement for the standard cross entropy (CE) function, to better handle unbalanced classes in a dataset. Our DFL method improves on the recently reported Focal Loss (FL) cross-entropy function, which scales the loss to put more weight on examples that are difficult to classify than on those that are easy. However, the scaling parameter of FL is set empirically and is problem-dependent. In addition, like other CE variants, FL only considers the loss on true classes, so no loss feedback is gained from the false classes. Although focusing only on true examples increases the probability of true classes and correspondingly reduces the probability of false classes due to the nature of the softmax function, it does not achieve the best convergence because the loss on false classes is ignored. Our DFL method improves on the simple FL in two ways. Firstly, it takes the idea of FL to focus more on difficult examples than on easy ones, but evaluates loss on both true and negative classes with equal importance. Secondly, the scaling parameter of DFL is learnable, so it can tune itself by backpropagation rather than depending on manual tuning. In this way, our proposed DFL method offers an auto-tunable loss function that can reduce the class imbalance effect as well as put more focus on both true difficult examples and negative easy examples. Experimental results show that our proposed method provides better accuracy in every test run conducted over a variety of network models and datasets.
Tasks Semantic Segmentation
Published 2019-09-26
URL https://arxiv.org/abs/1909.11932v2
PDF https://arxiv.org/pdf/1909.11932v2.pdf
PWC https://paperswithcode.com/paper/adaptive-class-weight-based-dual-focal-loss
Repo
Framework
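
A hedged reading of the two improvements above, loss on both true and false classes plus a learnable scaling parameter, can be put into a short PyTorch module. The exact functional form below is an illustrative interpretation of the abstract, not the paper’s published equation.

```python
import torch
import torch.nn.functional as F

class DualFocalLoss(torch.nn.Module):
    """Focal-style loss on the true class and on all false classes,
    with the focusing parameter learned by backpropagation."""
    def __init__(self):
        super().__init__()
        self.gamma = torch.nn.Parameter(torch.tensor(2.0))    # learnable scaling

    def forward(self, logits, targets):
        p = F.softmax(logits, dim=1)
        g = F.softplus(self.gamma)                            # keep gamma positive
        pt = p.gather(1, targets.unsqueeze(1)).squeeze(1)     # true-class prob
        true_term = -((1 - pt) ** g) * torch.log(pt + 1e-8)   # focus hard positives
        false_mask = torch.ones_like(p).scatter_(1, targets.unsqueeze(1), 0.0)
        false_term = -(false_mask * (p ** g) * torch.log(1 - p + 1e-8)).sum(dim=1)
        return (true_term + false_term).mean()                # feedback from both sides
```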

Towards Personalized Dialog Policies for Conversational Skill Discovery

Title Towards Personalized Dialog Policies for Conversational Skill Discovery
Authors Maryam Fazel-Zarandi, Sampat Biswas, Ryan Summers, Ahmed Elmalt, Andy McCraw, Michael McPhilips, John Peach
Abstract Many businesses and consumers are extending the capabilities of voice-based services such as Amazon Alexa, Google Home, Microsoft Cortana, and Apple Siri to create custom voice experiences (also known as skills). As the number of these experiences increases, a key problem is the discovery of skills that can be used to address a user’s request. In this paper, we focus on conversational skill discovery and present a conversational agent which engages in a dialog with users to help them find the skills that fulfill their needs. To this end, we start with a rule-based agent and improve it by using reinforcement learning. In this way, we enable the agent to adapt to different user attributes and conversational styles as it interacts with users. We evaluate our approach in a real production setting by deploying the agent to interact with real users, and show the effectiveness of the conversational agent in helping users find the skills that serve their request.
Tasks
Published 2019-11-15
URL https://arxiv.org/abs/1911.06747v1
PDF https://arxiv.org/pdf/1911.06747v1.pdf
PWC https://paperswithcode.com/paper/towards-personalized-dialog-policies-for
Repo
Framework

Weighted Laplacian and Its Theoretical Applications

Title Weighted Laplacian and Its Theoretical Applications
Authors Shijie Xu, Jiayan Fang, Xiang-Yang Li
Abstract In this paper, we develop a novel weighted Laplacian method, partially inspired by the theory of the graph Laplacian, to study recently popular graph problems, such as multilevel graph partitioning and the balanced minimum cut problem, in a more convenient manner. Since the weighted Laplacian strategy inherits the virtues of spectral methods, graph algorithms designed using the weighted Laplacian possess more robust theoretical guarantees on algorithmic performance compared with existing algorithms that are heuristically proposed. To illustrate its utility both in theory and in practice, we present two effective applications of our weighted Laplacian method, to multilevel graph partitioning and to the balanced minimum cut problem. By means of variational methods and the theory of partial differential equations (PDEs), we establish the equivalence among the weighted cut problem, the balanced minimum cut problem, and the initial clustering problem that arises in the middle stage of multilevel graph partitioning algorithms. These equivalence relations provide solid theoretical support for algorithms based on our proposed weighted Laplacian strategy. Moreover, from the perspective of the balanced minimum cut problem, the weighted Laplacian makes it possible for numerical methods for PDEs to serve as a powerful tool in the algorithmic study of graph problems. Experimental results indicate that the algorithm embedded with our strategy outperforms other existing graph algorithms, especially in terms of accuracy, verifying the efficacy of the proposed weighted Laplacian.
Tasks graph partitioning
Published 2019-11-23
URL https://arxiv.org/abs/1911.10311v1
PDF https://arxiv.org/pdf/1911.10311v1.pdf
PWC https://paperswithcode.com/paper/weighted-laplacian-and-its-theoretical
Repo
Framework
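
For readers unfamiliar with the object the entry above generalizes: the weighted graph Laplacian is L = D - W, where W is the symmetric weight matrix and D the diagonal degree matrix, and its second eigenvector (the Fiedler vector) already yields a balanced two-way cut. The sketch below shows only this textbook baseline; the paper’s specific weighting strategy and PDE machinery are not reproduced.

```python
import numpy as np

def weighted_laplacian(W):
    """W: symmetric (n, n) nonnegative weight matrix; returns L = D - W."""
    return np.diag(W.sum(axis=1)) - W

def spectral_bipartition(W):
    """Balanced two-way cut from the Fiedler vector of the weighted Laplacian."""
    L = weighted_laplacian(W)
    _, eigvecs = np.linalg.eigh(L)           # eigenvectors, ascending eigenvalues
    fiedler = eigvecs[:, 1]                  # second-smallest eigenvalue's vector
    return fiedler >= np.median(fiedler)     # median split keeps the cut balanced
```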

DBP: Discrimination Based Block-Level Pruning for Deep Model Acceleration

Title DBP: Discrimination Based Block-Level Pruning for Deep Model Acceleration
Authors Wenxiao Wang, Shuai Zhao, Minghao Chen, Jinming Hu, Deng Cai, Haifeng Liu
Abstract Neural network pruning is one of the most popular methods of accelerating the inference of deep convolutional neural networks (CNNs). The dominant methods, which prune at the filter level, evaluate their performance through the reduction ratio of computations and deem a higher reduction ratio of computations to be equivalent to a higher acceleration ratio in terms of inference time. However, we argue that the two are not equivalent if parallel computing is considered. Given that filter-level pruning only prunes filters within layers and the computations in a layer usually run in parallel, most computations reduced by filter-level pruning run in parallel with the un-reduced ones, so the acceleration ratio of filter-level pruning is limited. To obtain a higher acceleration ratio, it is better to prune redundant layers, because computations of different layers cannot run in parallel. In this paper, we propose our Discrimination based Block-level Pruning method (DBP). Specifically, DBP takes a sequence of consecutive layers (e.g., Conv-BN-ReLU) as a block and removes redundant blocks according to the discrimination of their output features. As a result, DBP achieves a considerable acceleration ratio by reducing the depth of CNNs. Extensive experiments show that DBP surpasses state-of-the-art filter-level pruning methods in both accuracy and acceleration ratio. Our code will be made available soon.
Tasks Network Pruning
Published 2019-12-21
URL https://arxiv.org/abs/1912.10178v1
PDF https://arxiv.org/pdf/1912.10178v1.pdf
PWC https://paperswithcode.com/paper/dbp-discrimination-based-block-level-pruning
Repo
Framework
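
The selection step above, removing blocks “according to the discrimination of their output features”, can be illustrated with a linear-probe proxy: score each block by how well a simple classifier separates classes using that block’s output, then drop the lowest scorers. The probe and the accuracy proxy are illustrative assumptions standing in for the paper’s exact criterion.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def blocks_to_prune(block_feats, labels, n_prune=2):
    """block_feats: list of (n_samples, d_i) arrays, one per block's output;
    returns indices of the n_prune least discriminative blocks."""
    scores = []
    for feats in block_feats:
        probe = LogisticRegression(max_iter=200).fit(feats, labels)
        scores.append(probe.score(feats, labels))   # discrimination proxy
    return np.argsort(scores)[:n_prune]             # lowest scores get pruned
```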

Towards Effective Device-Aware Federated Learning

Title Towards Effective Device-Aware Federated Learning
Authors Vito Walter Anelli, Yashar Deldjoo, Tommaso Di Noia, Antonio Ferrara
Abstract With the wealth of information produced by social networks, smartphones, and medical or financial applications, concerns have been raised about the sensitivity of such data in terms of users’ personal privacy and data security. To address these issues, Federated Learning (FL) has recently been proposed as a means to leave data and computational resources distributed over a large number of nodes (clients), while a central coordinating server aggregates only locally computed updates without knowing the original data. In this work, we extend the FL framework by pushing forward the state of the art in the field along several dimensions: (i) unlike the original FedAvg approach, which relies solely on a single criterion (i.e., local dataset size), a suite of domain- and client-specific criteria forms the basis for computing each local client’s contribution; (ii) the multi-criteria contribution of each device is computed in a prioritized fashion by leveraging a priority-aware aggregation operator used in the field of information retrieval; and (iii) a mechanism is proposed for online adjustment of the aggregation operator’s parameters via a local search strategy with backtracking. Extensive experiments on a publicly available dataset indicate the merits of the proposed approach compared to the standard FedAvg baseline.
Tasks Information Retrieval
Published 2019-08-20
URL https://arxiv.org/abs/1908.07420v1
PDF https://arxiv.org/pdf/1908.07420v1.pdf
PWC https://paperswithcode.com/paper/towards-effective-device-aware-federated
Repo
Framework
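
Point (i) above replaces FedAvg’s single dataset-size weight with a multi-criteria contribution per client. A hedged sketch of such an aggregation follows; the simple priority-weighted mean stands in for the paper’s priority-aware operator from information retrieval, whose exact definition is not reproduced here.

```python
import numpy as np

def multi_criteria_aggregate(updates, criteria, priorities):
    """updates: list of flat parameter arrays, one per client;
    criteria: (n_clients, n_criteria) scores in [0, 1] (e.g., dataset size,
    battery, link quality -- all hypothetical here);
    priorities: nonnegative importance of each criterion."""
    prio = np.asarray(priorities, dtype=float)
    weights = np.asarray(criteria, dtype=float) @ (prio / prio.sum())
    weights = weights / weights.sum()                    # normalized contributions
    return sum(w * u for w, u in zip(weights, updates))  # weighted model average
```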

UQ-CHI: An Uncertainty Quantification-Based Contemporaneous Health Index for Degenerative Disease Monitoring

Title UQ-CHI: An Uncertainty Quantification-Based Contemporaneous Health Index for Degenerative Disease Monitoring
Authors Aven Samareh, Shuai Huang
Abstract Developing a knowledge-driven contemporaneous health index (CHI) that can precisely reflect the underlying patient condition across the course of its progression holds unique value, such as facilitating a range of clinical decision-making opportunities. This is particularly important for monitoring degenerative conditions such as Alzheimer’s disease (AD), where the condition of the patient deteriorates over time. Detecting early symptoms and progression signs, and continuously evaluating severity, are all essential for disease management. While a few methods have been developed in the literature, uncertainty quantification for those health index models has been largely neglected. To ensure continuity of care, we should be more explicit about the level of confidence in model outputs. Ideally, decision-makers should be provided with recommendations that are robust in the face of substantial uncertainty about future outcomes. In this paper, we aim to fill this gap by developing an uncertainty-quantification-based contemporaneous longitudinal index, named UQ-CHI, with a particular focus on continuous patient monitoring of degenerative conditions. Our method combines convex optimization and Bayesian learning using the maximum entropy learning (MEL) framework, integrating uncertainty on labels as well. Our methodology also provides closed-form solutions for some important decision-making tasks, such as predicting the label of a new sample. Numerical studies demonstrate the effectiveness of the proposed UQ-CHI method in prediction accuracy and monitoring efficacy, and its unique advantages when uncertainty quantification is enabled in practice.
Tasks Decision Making
Published 2019-02-21
URL http://arxiv.org/abs/1902.08246v1
PDF http://arxiv.org/pdf/1902.08246v1.pdf
PWC https://paperswithcode.com/paper/uq-chi-an-uncertainty-quantification-based
Repo
Framework