April 1, 2020

3111 words 15 mins read

Paper Group ANR 390


On the human evaluation of audio adversarial examples

Title On the human evaluation of audio adversarial examples
Authors Jon Vadillo, Roberto Santana
Abstract Human-machine interaction is increasingly dependent on speech communication. Machine learning models are usually applied to interpret human speech commands. However, these models can be fooled by adversarial examples, which are inputs intentionally perturbed to produce a wrong prediction without the change being noticed. While much research has focused on developing new techniques to generate adversarial perturbations, less attention has been given to the aspects that determine whether and how the perturbations are noticed by humans. This question is relevant, since high fooling rates of proposed adversarial perturbation strategies are only valuable if the perturbations are not detectable. In this paper we investigate the extent to which the distortion metrics proposed in the literature for audio adversarial examples, which are commonly applied to evaluate the effectiveness of methods for generating these attacks, are a reliable measure of the human perception of the perturbations. Using an analytical framework, and an experiment in which 18 subjects evaluate audio adversarial examples, we demonstrate that the metrics employed by convention are not a reliable measure of the perceptual similarity of adversarial examples in the audio domain.
Tasks
Published 2020-01-23
URL https://arxiv.org/abs/2001.08444v1
PDF https://arxiv.org/pdf/2001.08444v1.pdf
PWC https://paperswithcode.com/paper/on-the-human-evaluation-of-audio-adversarial
Repo
Framework
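
The two distortion measures most often reported for audio attacks, and questioned by this paper, are easy to state concretely. Below is a minimal sketch of the L-infinity norm and the perturbation signal-to-noise ratio, assuming waveforms as floating-point NumPy arrays; the function names and the toy signal are illustrative, not from the paper.

```python
import numpy as np

def linf_distortion(x, x_adv):
    """Maximum absolute sample-level perturbation (L-infinity norm)."""
    return np.max(np.abs(x_adv - x))

def perturbation_snr_db(x, x_adv):
    """Signal-to-noise ratio of the clean signal vs. the perturbation, in dB."""
    noise = x_adv - x
    return 10.0 * np.log10(np.sum(x ** 2) / np.sum(noise ** 2))

# Toy example: a sine wave with a small additive perturbation.
t = np.linspace(0, 1, 16000)
clean = 0.5 * np.sin(2 * np.pi * 440 * t)
adv = clean + 0.001 * np.random.default_rng(0).normal(size=t.size)
print(linf_distortion(clean, adv), perturbation_snr_db(clean, adv))
```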

G-VAE: A Continuously Variable Rate Deep Image Compression Framework

Title G-VAE: A Continuously Variable Rate Deep Image Compression Framework
Authors Ze Cui, Jing Wang, Bo Bai, Tiansheng Guo, Yihui Feng
Abstract Rate adaptation of deep image compression in a single model will become one of the decisive factors in competing with classical image compression codecs. However, until now there has been no perfect solution that neither increases the computation nor affects the compression performance. In this paper, we propose a novel image compression framework, G-VAE (Gained Variational Autoencoder), which achieves a continuously variable rate in a single model. Unlike previous solutions that encode progressively or change the internal units of the network, G-VAE only adds a pair of gain units at the output of the encoder and the input of the decoder. The design is so concise that G-VAE can be applied to almost any image compression method, achieving a continuously variable rate with negligible additional parameters and computation. We also propose a new deep image compression framework that outperforms all published results on the Kodak dataset in PSNR and MS-SSIM metrics. Experimental results show that adding a pair of gain units does not affect the performance of the basic models while endowing them with a continuously variable rate.
Tasks Image Compression
Published 2020-03-04
URL https://arxiv.org/abs/2003.02012v1
PDF https://arxiv.org/pdf/2003.02012v1.pdf
PWC https://paperswithcode.com/paper/g-vae-a-continuously-variable-rate-deep-image
Repo
Framework
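
The gain-unit idea lends itself to a short sketch. Below is one plausible reading in PyTorch, under stated assumptions: a learnable per-channel gain vector per trained rate point scales the encoder's latent tensor, and interpolating between two gain vectors in the exponent yields intermediate rates; the paper pairs this with a matching unit at the decoder input, which is omitted here. All names and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class GainUnit(nn.Module):
    """Channel-wise gains applied to the latent tensor; one gain vector per rate point."""
    def __init__(self, n_rates, n_channels):
        super().__init__()
        self.gains = nn.Parameter(torch.ones(n_rates, n_channels))

    def forward(self, y, i, j=None, t=1.0):
        # Interpolate between trained gain vectors i and j (exponent t in [0, 1])
        # for a continuously variable rate; t=1 reproduces rate point i exactly.
        g = self.gains[i] if j is None else self.gains[i] ** t * self.gains[j] ** (1 - t)
        return y * g.view(1, -1, 1, 1)

# Usage: scale the encoder output before quantization; a second, independently
# learned unit would rescale the decoder input.
gain = GainUnit(n_rates=8, n_channels=192)
y = torch.randn(4, 192, 16, 16)          # latent from some encoder
y_scaled = gain(y, i=2, j=3, t=0.5)      # an intermediate rate between points 2 and 3
```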

Fair Allocation Based Soft Load Shedding

Title Fair Allocation Based Soft Load Shedding
Authors Sarwan Ali, Haris Mansoor, Imdadullah Khan, Naveed Arshad, Muhammad Asad Khan, Safiullah Faizullah
Abstract Renewable sources are taking center stage in electricity generation. Due to the intermittent nature of these renewable resources, the problem of a demand-supply gap arises. To solve this problem, several techniques have been proposed in the literature, trading off cost (adding peaker plants), availability of data (Demand Side Management, DSM), hardware infrastructure (appliance-controlling DSM), and safety (voltage reduction). However, these solutions are not fair in terms of electricity distribution. In many cases, although the available supply may not match the demand in peak hours, the total aggregated demand remains less than the total supply for the whole day. Load shedding (complete blackout) is a commonly used solution to deal with the demand-supply gap, but it can cause substantial economic losses. To solve the demand-supply gap problem, we propose a solution called Soft Load Shedding (SLS), which assigns an electricity quota to each household in a fair way. We measure the fairness of SLS by defining a function for household satisfaction level. We model household utilities with a parametric function and formulate the SLS problem as a social welfare problem. We also consider revenue generated from the fair allocation as a performance measure. To evaluate our approach, extensive experiments were performed on both synthetic and real-world datasets, and our model is compared with several baselines to show its effectiveness in terms of fair allocation and revenue generation.
Tasks
Published 2020-02-02
URL https://arxiv.org/abs/2002.00451v1
PDF https://arxiv.org/pdf/2002.00451v1.pdf
PWC https://paperswithcode.com/paper/fair-allocation-based-soft-load-shedding
Repo
Framework
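
The abstract does not spell out the allocation rule or the satisfaction function, but the core quota idea can be sketched. The following is a hypothetical proportional-fairness variant, not the paper's exact formulation: when the daily supply falls short of total demand, each household receives a demand-proportional share of the cap, and satisfaction is a concave function of the fraction of demand served.

```python
import numpy as np

def soft_load_shedding(demand, supply):
    """Assign each household a quota proportional to its demand, capped at its
    demand, so that the quotas sum to min(supply, total demand)."""
    demand = np.asarray(demand, dtype=float)
    total = demand.sum()
    if total <= supply:
        return demand.copy()                 # no gap: everyone is fully served
    return supply * demand / total           # proportional fair share of the cap

def satisfaction(quota, demand, alpha=0.5):
    """Concave satisfaction: diminishing returns as the quota approaches demand."""
    return (quota / demand) ** alpha

demand = np.array([3.0, 5.0, 2.0, 10.0])     # household demands (kWh), illustrative
quota = soft_load_shedding(demand, supply=15.0)
print(quota, satisfaction(quota, demand))
```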

A deep learning-facilitated radiomics solution for the prediction of lung lesion shrinkage in non-small cell lung cancer trials

Title A deep learning-facilitated radiomics solution for the prediction of lung lesion shrinkage in non-small cell lung cancer trials
Authors Antong Chen, Jennifer Saouaf, Bo Zhou, Randolph Crawford, Jianda Yuan, Junshui Ma, Richard Baumgartner, Shubing Wang, Gregory Goldmacher
Abstract Herein we propose a deep learning-based approach for the prediction of lung lesion response based on radiomic features extracted from clinical CT scans of patients in non-small cell lung cancer trials. The approach starts with the classification of lung lesions from the set of primary and metastatic lesions at various anatomic locations. Focusing on the lung lesions, we perform automatic segmentation to extract their 3D volumes. Radiomic features are then extracted from each lesion on the pre-treatment scan and the first follow-up scan to predict which lesions will shrink at least 30% in diameter during treatment (either Pembrolizumab or combinations of chemotherapy and Pembrolizumab), which is defined as a partial response by the Response Evaluation Criteria In Solid Tumors (RECIST) guidelines. A 5-fold cross-validation on the training set led to an AUC of 0.84 +/- 0.03, and the prediction on the testing dataset reached an AUC of 0.73 +/- 0.02 for the outcome of 30% diameter shrinkage.
Tasks
Published 2020-03-05
URL https://arxiv.org/abs/2003.02943v1
PDF https://arxiv.org/pdf/2003.02943v1.pdf
PWC https://paperswithcode.com/paper/a-deep-learning-facilitated-radiomics
Repo
Framework
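
The final prediction stage of such a pipeline reduces to a supervised classifier evaluated with cross-validated AUC. A minimal sketch follows, with random placeholders standing in for the extracted radiomic features; the classifier choice is an assumption, since the abstract does not name one.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold

# X: radiomic features per lung lesion (pre-treatment + first follow-up scans);
# y: 1 if the lesion shrank >= 30% in diameter during treatment (RECIST partial
# response), 0 otherwise. Random placeholders stand in for real features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = rng.integers(0, 2, size=200)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = cross_val_score(GradientBoostingClassifier(), X, y, cv=cv, scoring="roc_auc")
print(f"5-fold CV AUC: {aucs.mean():.2f} +/- {aucs.std():.2f}")
```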

Almost-Matching-Exactly for Treatment Effect Estimation under Network Interference

Title Almost-Matching-Exactly for Treatment Effect Estimation under Network Interference
Authors M. Usaid Awan, Marco Morucci, Vittorio Orlandi, Sudeepa Roy, Cynthia Rudin, Alexander Volfovsky
Abstract We propose a matching method that recovers direct treatment effects from randomized experiments where units are connected in an observed network, and units that share edges can potentially influence each other's outcomes. Traditional treatment effect estimators for randomized experiments are biased and error-prone in this setting. Our method matches units almost exactly on counts of unique subgraphs within their neighborhood graphs. The matches we construct are interpretable and high-quality. Our method can easily be extended to accommodate additional unit-level covariate information. We show empirically that our method performs better than other existing methodologies for this problem, while producing meaningful, interpretable results.
Tasks
Published 2020-03-02
URL https://arxiv.org/abs/2003.00964v1
PDF https://arxiv.org/pdf/2003.00964v1.pdf
PWC https://paperswithcode.com/paper/almost-matching-exactly-for-treatment-effect
Repo
Framework
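
A rough sketch of the matching idea: summarize each unit's neighborhood graph, group units whose summaries match exactly, and contrast treated and control outcomes within groups. The signature below (node, edge, and triangle counts of the radius-1 ego graph) is a coarser stand-in for the paper's counts of unique subgraphs, and the unweighted average over groups is a simplification.

```python
import networkx as nx
import numpy as np
from collections import defaultdict

def neighborhood_signature(G, v):
    """Coarse stand-in for the paper's subgraph counts: statistics of the
    radius-1 ego graph around v (excluding v itself)."""
    H = nx.ego_graph(G, v, radius=1).copy()
    H.remove_node(v)
    return (H.number_of_nodes(), H.number_of_edges(),
            sum(nx.triangles(H).values()) // 3)

def matched_direct_effect(G, treatment, outcome):
    groups = defaultdict(list)
    for v in G.nodes:
        groups[neighborhood_signature(G, v)].append(v)
    diffs = []
    for units in groups.values():
        t = [outcome[v] for v in units if treatment[v] == 1]
        c = [outcome[v] for v in units if treatment[v] == 0]
        if t and c:                       # need both arms inside a matched group
            diffs.append(np.mean(t) - np.mean(c))
    return float(np.mean(diffs))          # simplification: groups weighted equally

G = nx.erdos_renyi_graph(300, 0.02, seed=1)
rng = np.random.default_rng(1)
treatment = {v: int(rng.random() < 0.5) for v in G.nodes}
outcome = {v: 2.0 * treatment[v] + rng.normal() for v in G.nodes}
print(matched_direct_effect(G, treatment, outcome))   # should be close to 2.0
```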

Residual-Sparse Fuzzy $C$-Means Clustering Incorporating Morphological Reconstruction and Wavelet frames

Title Residual-Sparse Fuzzy $C$-Means Clustering Incorporating Morphological Reconstruction and Wavelet frames
Authors Cong Wang, Witold Pedrycz, ZhiWu Li, MengChu Zhou, Jun Zhao
Abstract Instead of directly clustering an observed image containing outliers, noise, or intensity inhomogeneity, using its ideal value (e.g., the noise-free image) has a favorable impact on clustering. Hence, accurate estimation of the residual (e.g., unknown noise) between the observed image and its ideal value is an important task. To this end, we propose an $\ell_0$ regularization-based Fuzzy $C$-Means (FCM) algorithm incorporating a morphological reconstruction operation and a tight wavelet frame transform. To achieve a sound trade-off between detail preservation and noise suppression, morphological reconstruction is used to filter the observed image. By combining the observed and filtered images, a weighted sum image is generated. Since a tight wavelet frame system provides sparse representations of an image, it is employed to decompose the weighted sum image, thus forming its corresponding feature set. Taking this feature set as the data for clustering, we present an improved FCM algorithm that imposes an $\ell_0$ regularization term on the residual between the feature set and its ideal value, so that a favorable estimate of the residual is obtained and the ideal value participates in clustering. Spatial information is also introduced into clustering, since it arises naturally in image segmentation; furthermore, it makes the estimation of the residual more reliable. To further enhance the segmentation results of the improved FCM algorithm, we also employ morphological reconstruction to smooth the labels generated by clustering. Finally, based on the prototypes and smoothed labels, the segmented image is reconstructed using a tight wavelet frame reconstruction operation. Experimental results reported for synthetic, medical, and color images show that the proposed algorithm is effective and efficient, and outperforms other algorithms.
Tasks Semantic Segmentation
Published 2020-02-14
URL https://arxiv.org/abs/2002.08418v1
PDF https://arxiv.org/pdf/2002.08418v1.pdf
PWC https://paperswithcode.com/paper/residual-sparse-fuzzy-c-means-clustering
Repo
Framework
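
The central mechanism, an $\ell_0$-regularized residual inside FCM, can be sketched compactly: alternate standard fuzzy C-means updates on the residual-corrected data with a hard-thresholding step, which is the proximal operator of the $\ell_0$ penalty. The sketch below omits the morphological reconstruction, wavelet frame transform, and spatial term, so it illustrates the residual idea only.

```python
import numpy as np

def residual_sparse_fcm(X, K, lam=0.5, m=2.0, iters=50, seed=0):
    """FCM run on X - R, where the sparse residual R is re-estimated each
    iteration by hard thresholding (the proximal operator of the l0 penalty)."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    C = X[rng.choice(N, K, replace=False)]        # initial prototypes
    R = np.zeros_like(X)                          # sparse residual estimate
    for _ in range(iters):
        Y = X - R
        d = np.linalg.norm(Y[:, None, :] - C[None, :, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)         # standard fuzzy memberships
        W = U ** m
        C = (W.T @ Y) / W.sum(axis=0)[:, None]    # prototype update
        recon = (W @ C) / W.sum(axis=1)[:, None]  # per-point fuzzy reconstruction
        R = X - recon
        R[np.abs(R) < np.sqrt(2.0 * lam)] = 0.0   # hard threshold -> l0 sparsity
    return U, C, R
```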

Securing of Unmanned Aerial Systems (UAS) against security threats using human immune system

Title Securing of Unmanned Aerial Systems (UAS) against security threats using human immune system
Authors Reza Fotohi
Abstract UASs form a large part of the fighting ability of advanced military forces. In particular, systems that carry confidential information are subject to security attacks. Accordingly, we propose an Intrusion Detection System (IDS) based on the human immune system (HIS) to protect against these security problems. IDSs are used to detect and respond to attempts to compromise the target system. Since UASs operate in the real world, testing and validating these systems with a variety of sensors is challenging. In our HIS-inspired mapping, insecure signals are equivalent to antigens, which are detected by antibody-based training patterns and removed from the operation cycle. The main advantages of the proposed design are quick detection of intrusive signals and quarantining of their activity. Moreover, the SUAS-HIS method is evaluated here via extensive simulations carried out in the NS-3 environment. The simulation results indicate that UAS network performance is improved in terms of false positive rate, false negative rate, detection rate, and packet delivery rate.
Tasks Intrusion Detection
Published 2020-03-01
URL https://arxiv.org/abs/2003.04984v1
PDF https://arxiv.org/pdf/2003.04984v1.pdf
PWC https://paperswithcode.com/paper/securing-of-unmanned-aerial-systems-uas
Repo
Framework
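
The abstract's antigen/antibody mapping follows the classic negative-selection scheme from artificial immune systems. Below is a generic negative-selection sketch, not the paper's exact SUAS-HIS protocol: detectors are random points kept only if they match no normal ("self") sample, and any signal matched by a surviving detector is flagged as an intrusion. Feature choices and radii are illustrative.

```python
import numpy as np

def train_detectors(self_samples, n_detectors, radius, seed=0):
    """Negative selection: keep random detectors that match NO self (normal)
    sample within the given radius."""
    rng = np.random.default_rng(seed)
    dim = self_samples.shape[1]
    detectors = []
    while len(detectors) < n_detectors:
        d = rng.uniform(0.0, 1.0, size=dim)
        if np.min(np.linalg.norm(self_samples - d, axis=1)) > radius:
            detectors.append(d)               # survives selection: covers non-self
    return np.array(detectors)

def is_intrusion(signal, detectors, radius):
    """A signal matched by any detector is treated as an antigen (insecure)."""
    return bool(np.any(np.linalg.norm(detectors - signal, axis=1) <= radius))

# Toy usage on features normalized to [0, 1] (e.g., packet rate, RSSI, delay).
self_samples = np.random.default_rng(1).uniform(0.4, 0.6, size=(200, 3))
det = train_detectors(self_samples, n_detectors=50, radius=0.15)
print(is_intrusion(np.array([0.5, 0.5, 0.5]), det, 0.15))   # normal -> False
print(is_intrusion(np.array([0.95, 0.1, 0.9]), det, 0.15))  # anomalous -> likely True
```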

Simultaneous Inference for Massive Data: Distributed Bootstrap

Title Simultaneous Inference for Massive Data: Distributed Bootstrap
Authors Yang Yu, Shih-Kang Chao, Guang Cheng
Abstract In this paper, we propose a bootstrap method for massive data processed distributedly across a large number of machines. The new method is computationally efficient in that we bootstrap on the master machine without the over-resampling typically required by existing methods (Kleiner et al., 2014; Sengupta et al., 2016), while provably achieving optimal statistical efficiency with minimal communication. Our method does not require repeatedly re-fitting the model; it only applies a multiplier bootstrap on the master machine to the gradients received from the worker machines. Simulations validate our theory.
Tasks
Published 2020-02-19
URL https://arxiv.org/abs/2002.08443v1
PDF https://arxiv.org/pdf/2002.08443v1.pdf
PWC https://paperswithcode.com/paper/simultaneous-inference-for-massive-data
Repo
Framework
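
The key communication pattern, in which workers ship averaged gradients and the master re-weights them with random multipliers instead of re-fitting, can be illustrated on the simplest case of a distributed mean, where the squared-loss gradient at theta is theta minus the local sample mean. The sketch below builds a simultaneous confidence band from the bootstrap max statistic; scaling constants and the general inverse-Hessian correction are omitted for clarity, so this is a toy illustration rather than the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n_local, d = 20, 500, 5                    # workers, samples per worker, dimension
data = [rng.normal(loc=1.0, size=(n_local, d)) for _ in range(k)]

theta_hat = np.mean([x.mean(axis=0) for x in data], axis=0)   # distributed estimate

# Each worker ships a single averaged gradient of the squared loss at theta_hat;
# for the mean, grad_i = theta_hat - xbar_i, so total communication is O(k * d).
grads = np.stack([theta_hat - x.mean(axis=0) for x in data])

# Master-side multiplier bootstrap: perturb worker gradients with i.i.d. N(0, 1)
# multipliers; no model re-fitting and no access to the raw data.
B = 1000
max_stats = np.empty(B)
for b in range(B):
    w = rng.normal(size=k)
    g_star = (w[:, None] * grads).mean(axis=0)
    max_stats[b] = np.max(np.abs(g_star))

crit = np.quantile(max_stats, 0.95)
print("simultaneous 95% band: theta_hat +/-", crit)
```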

Nonlinear Functional Output Regression: a Dictionary Approach

Title Nonlinear Functional Output Regression: a Dictionary Approach
Authors Dimitri Bouche, Marianne Clausel, François Roueff, Florence d’Alché-Buc
Abstract Many applications in signal processing involve data consisting of a large number of simultaneous or sequential measurements of the same phenomenon. Such data is inherently high dimensional; however, it contains strong within-observation correlations and smoothness patterns that can be exploited in the learning process. Functional data analysis provides a relevant modelling framework. We consider the setting of functional output regression. We introduce Projection Learning, a novel dictionary-based approach that combines a representation of the functional output on a dictionary with the minimization of a functional loss. This general method is instantiated with vector-valued kernels, allowing us to impose structure on the model. We prove general theoretical results on Projection Learning, including a bound on the estimation error. From a practical point of view, experiments on several data sets show the efficiency of the method. Notably, we provide evidence that Projection Learning is competitive with other nonlinear functional output regression methods and shows an interesting ability to deal with sparsely observed functions with missing data.
Tasks
Published 2020-03-03
URL https://arxiv.org/abs/2003.01432v1
PDF https://arxiv.org/pdf/2003.01432v1.pdf
PWC https://paperswithcode.com/paper/nonlinear-functional-output-regression-a
Repo
Framework
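
A simplified sketch of the dictionary approach: project each output curve onto a fixed dictionary (a Fourier basis here, where the paper allows richer choices), then regress from inputs to dictionary coefficients, with an independent kernel ridge per coefficient standing in for the paper's vector-valued kernels and functional loss.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def fourier_dictionary(t, n_atoms):
    """Columns are 1, cos(2*pi*j*t), sin(2*pi*j*t) evaluated on the output grid t."""
    cols = [np.ones_like(t)]
    for j in range(1, n_atoms // 2 + 1):
        cols += [np.cos(2 * np.pi * j * t), np.sin(2 * np.pi * j * t)]
    return np.stack(cols[:n_atoms], axis=1)        # shape (len(t), n_atoms)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)                         # common output grid
X = rng.uniform(-1, 1, size=(150, 3))              # input covariates
Y = np.sin(2 * np.pi * t[None, :] * (1 + X[:, :1])) + 0.05 * rng.normal(size=(150, 100))

Phi = fourier_dictionary(t, n_atoms=15)
coef = Y @ Phi @ np.linalg.inv(Phi.T @ Phi)        # least-squares projection of outputs

model = KernelRidge(kernel="rbf", alpha=1e-2).fit(X, coef)
Y_pred = model.predict(X[:5]) @ Phi.T              # reconstruct predicted curves
print(Y_pred.shape)                                # (5, 100)
```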

A Driver Fatigue Recognition Algorithm Based on Spatio-Temporal Feature Sequence

Title A Driver Fatigue Recognition Algorithm Based on Spatio-Temporal Feature Sequence
Authors Chen Zhang, Xiaobo Lu, Zhiliang Huang
Abstract Research shows that fatigued driving is one of the major causes of road traffic accidents, so studying driver fatigue recognition algorithms is of great significance for improving road traffic safety. In recent years, with the development of deep learning, the field of pattern recognition has advanced considerably. This paper presents a real-time fatigue state recognition algorithm based on spatio-temporal feature sequences, aimed mainly at fatigue driving recognition scenarios. The algorithm is divided into three task networks: a face detection network; a facial landmark detection and head pose estimation network; and a fatigue recognition network. Experiments show that the algorithm has the advantages of small model size, high speed, and high accuracy.
Tasks Face Detection, Facial Landmark Detection, Head Pose Estimation, Pose Estimation
Published 2020-03-18
URL https://arxiv.org/abs/2003.08134v1
PDF https://arxiv.org/pdf/2003.08134v1.pdf
PWC https://paperswithcode.com/paper/a-driver-fatigue-recognition-algorithm-based
Repo
Framework

Policy Evaluation Networks

Title Policy Evaluation Networks
Authors Jean Harb, Tom Schaul, Doina Precup, Pierre-Luc Bacon
Abstract Many reinforcement learning algorithms use value functions to guide the search for better policies. These methods estimate the value of a single policy while generalizing across many states. The core idea of this paper is to flip this convention and estimate the value of many policies, for a single set of states. This approach opens up the possibility of performing direct gradient ascent in policy space without seeing any new data. The main challenge for this approach is finding a way to represent complex policies that facilitates learning and generalization. To address this problem, we introduce a scalable, differentiable fingerprinting mechanism that retains essential policy information in a concise embedding. Our empirical results demonstrate that combining these three elements (a learned Policy Evaluation Network, policy fingerprints, and gradient ascent) can produce policies that outperform those that generated the training data, in a zero-shot manner.
Tasks
Published 2020-02-26
URL https://arxiv.org/abs/2002.11833v1
PDF https://arxiv.org/pdf/2002.11833v1.pdf
PWC https://paperswithcode.com/paper/policy-evaluation-networks
Repo
Framework
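
The three elements named in the abstract compose naturally in a few lines of PyTorch. In the sketch below, the fingerprint is the policy's output on a fixed set of probe states (the probing mechanism and all dimensions are assumptions), the Policy Evaluation Network maps fingerprints to a return estimate, and gradient ascent updates the policy through the frozen network; training the network itself on (fingerprint, return) pairs is elided.

```python
import torch
import torch.nn as nn

obs_dim, act_dim, n_probe = 4, 2, 16

policy = nn.Sequential(nn.Linear(obs_dim, 32), nn.Tanh(), nn.Linear(32, act_dim))
probe_states = torch.randn(n_probe, obs_dim)      # fixed probe inputs (assumption)

def fingerprint(pi):
    """Concatenate the policy's action outputs on the probe states."""
    return pi(probe_states).flatten()

# Policy Evaluation Network: fingerprint -> scalar return estimate.
pvn = nn.Sequential(nn.Linear(n_probe * act_dim, 64), nn.ReLU(), nn.Linear(64, 1))

# Step 1 (elided): fit pvn on (fingerprint, measured return) pairs from
# existing policies. Step 2: freeze it and ascend in policy space.
for p in pvn.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
for step in range(100):
    value = pvn(fingerprint(policy)).squeeze()
    opt.zero_grad()
    (-value).backward()                           # gradient ascent on predicted return
    opt.step()
```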

Forecasting NIFTY 50 benchmark Index using Seasonal ARIMA time series models

Title Forecasting NIFTY 50 benchmark Index using Seasonal ARIMA time series models
Authors Amit Tewari
Abstract This paper analyses how time series analysis techniques can be applied to capture the movement of an exchange-traded index in a stock market. Specifically, the Seasonal Auto Regressive Integrated Moving Average (SARIMA) class of models is applied to capture the movement of the NIFTY 50 index, one of the most actively traded contracts globally [1]. A total of 729 model parameter combinations were evaluated, and the most appropriate was selected for making the final forecast based on the AIC criterion [8]. NIFTY 50 can be used for a variety of purposes such as benchmarking fund portfolios and launching index funds, exchange-traded funds (ETFs), and structured products. The index tracks the behaviour of a portfolio of blue-chip companies, the largest and most liquid Indian securities, and can be regarded as a true reflection of the Indian stock market [2].
Tasks Time Series, Time Series Analysis
Published 2020-01-09
URL https://arxiv.org/abs/2001.08979v1
PDF https://arxiv.org/pdf/2001.08979v1.pdf
PWC https://paperswithcode.com/paper/forecasting-nifty-50-benchmark-index-using
Repo
Framework
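
Since 729 = 3^6, the search presumably ranges p, d, q, P, D, Q over {0, 1, 2}. A sketch of such an AIC-driven grid search with statsmodels follows; the seasonal period and the placeholder series are assumptions, not the paper's settings.

```python
import itertools
import warnings
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

def best_sarima_by_aic(y, s=12, grid=(0, 1, 2)):
    """Evaluate all 3^6 = 729 SARIMA(p,d,q)x(P,D,Q,s) combinations and keep
    the fit with the lowest AIC. The seasonal period s is an assumption."""
    best_aic, best_order, best_res = np.inf, None, None
    for p, d, q, P, D, Q in itertools.product(grid, repeat=6):
        try:
            with warnings.catch_warnings():
                warnings.simplefilter("ignore")
                res = SARIMAX(y, order=(p, d, q),
                              seasonal_order=(P, D, Q, s)).fit(disp=False)
        except Exception:
            continue                      # some combinations fail to converge
        if res.aic < best_aic:
            best_aic, best_order, best_res = res.aic, (p, d, q, P, D, Q), res
    return best_order, best_aic, best_res

y = np.cumsum(np.random.default_rng(0).normal(size=300))  # stand-in for NIFTY 50 closes
order, aic, res = best_sarima_by_aic(y)
print(order, aic)
print(res.forecast(steps=10))             # out-of-sample forecast from the best fit
```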

Synthetic Error Dataset Generation Mimicking Bengali Writing Pattern

Title Synthetic Error Dataset Generation Mimicking Bengali Writing Pattern
Authors Md. Habibur Rahman Sifat, Chowdhury Rafeed Rahman, Mohammad Rafsan, Md. Hasibur Rahman
Abstract When writing Bengali using an English keyboard, users often make spelling mistakes. The accuracy of any Bengali spell checker or paragraph correction module largely depends on the kind of error dataset it is based on. Manual generation of such an error dataset is a cumbersome process. In this research, we present an algorithm for automatically generating misspelled Bengali words from correct words by analyzing Bengali writing patterns on a QWERTY-layout English keyboard. As part of our analysis, we have compiled a list of the most commonly used Bengali words, phonetically similar replaceable clusters, frequently mispressed replaceable clusters, frequently mispressed insertion-prone clusters, and some rules for handling Juktakkhar (consonant clusters) while generating errors.
Tasks
Published 2020-03-07
URL https://arxiv.org/abs/2003.03484v1
PDF https://arxiv.org/pdf/2003.03484v1.pdf
PWC https://paperswithcode.com/paper/synthetic-error-dataset-generation-mimicking
Repo
Framework
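
The paper's cluster lists are language-specific and not reproduced in the abstract, but the mispress mechanism itself is simple to sketch. Below is a generic keyboard-adjacency corrupter on romanized text, with a tiny illustrative adjacency map and made-up probabilities; the paper's phonetic and Juktakkhar rules would replace this map.

```python
import random

# Tiny excerpt of QWERTY adjacency; the paper's replaceable and insertion-prone
# cluster lists for Bengali phonetic typing are far richer than this.
ADJACENT = {
    "a": "qwsz", "s": "awedxz", "d": "serfcx", "k": "jiolm",
    "o": "iklp", "h": "gyujbn", "t": "rfgy", "b": "vghn",
}

def corrupt(word, p_sub=0.1, p_ins=0.05, rng=random.Random(0)):
    """Simulate mispressed keys: substitute or insert an adjacent key."""
    out = []
    for ch in word:
        near = ADJACENT.get(ch, "")
        if near and rng.random() < p_sub:
            out.append(rng.choice(near))      # mispressed-substitution error
        else:
            out.append(ch)
        if near and rng.random() < p_ins:
            out.append(rng.choice(near))      # insertion-prone adjacent key
    return "".join(out)

print(corrupt("bhalobasha"))   # romanized Bengali input, illustrative
```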

Visual Concept-Metaconcept Learning

Title Visual Concept-Metaconcept Learning
Authors Chi Han, Jiayuan Mao, Chuang Gan, Joshua B. Tenenbaum, Jiajun Wu
Abstract Humans reason with concepts and metaconcepts: we recognize red and green from visual input; we also understand that they describe the same property of objects (i.e., the color). In this paper, we propose the visual concept-metaconcept learner (VCML) for joint learning of concepts and metaconcepts from images and associated question-answer pairs. The key is to exploit the bidirectional connection between visual concepts and metaconcepts. Visual representations provide grounding cues for predicting relations between unseen pairs of concepts. Knowing that red and green describe the same property of objects, we can generalize to the fact that cube and sphere also describe the same property of objects, since they both categorize the shape of objects. Meanwhile, knowledge about metaconcepts empowers visual concept learning from limited, noisy, and even biased data. From just a few examples of purple cubes, we can understand the new color purple, which resembles the hue of the cubes rather than their shape. Evaluation on both synthetic and real-world datasets validates our claims.
Tasks
Published 2020-02-04
URL https://arxiv.org/abs/2002.01464v1
PDF https://arxiv.org/pdf/2002.01464v1.pdf
PWC https://paperswithcode.com/paper/visual-concept-metaconcept-learning-1
Repo
Framework

Out-of-Distribution Detection in Multi-Label Datasets using Latent Space of $β$-VAE

Title Out-of-Distribution Detection in Multi-Label Datasets using Latent Space of $β$-VAE
Authors Vijaya Kumar Sundar, Shreyas Ramakrishna, Zahra Rahiminasab, Arvind Easwaran, Abhishek Dubey
Abstract Learning Enabled Components (LECs) are widely used in a variety of perception-based autonomy tasks such as image segmentation, object detection, and end-to-end driving. These components are trained on large image datasets with multimodal factors such as weather conditions, time of day, and traffic density. The LECs learn these factors during training, and when any of these factors vary at test time, the components become confused, resulting in low-confidence predictions. Images with factors not seen during training are commonly referred to as Out-of-Distribution (OOD). For safe autonomy, it is important to identify OOD images so that a suitable mitigation strategy can be applied. Classical one-class classifiers like SVM and SVDD are used to perform OOD detection. However, the multiple labels attached to the images in these datasets restrict the direct application of these techniques. We address this problem using the latent space of the $\beta$-Variational Autoencoder ($\beta$-VAE). We use the fact that the compact latent space generated by an appropriately selected $\beta$-VAE encodes the information about these factors in a few latent variables, which can be used for computationally inexpensive detection. We evaluate our approach on the nuScenes dataset, and our results show that the latent space of the $\beta$-VAE is sensitive to changes in the values of the generative factors.
Tasks Object Detection, Out-of-Distribution Detection, Semantic Segmentation
Published 2020-03-10
URL https://arxiv.org/abs/2003.08740v1
PDF https://arxiv.org/pdf/2003.08740v1.pdf
PWC https://paperswithcode.com/paper/out-of-distribution-detection-in-multi-label
Repo
Framework
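
Assuming a trained β-VAE encoder that returns per-image latent means and log-variances, the detection idea can be sketched as follows: rank latent dimensions by their average KL term (the dimensions a β-VAE actually uses), model those dimensions on training data, and score test images by their deviation. The scoring rule below is an assumption, not the paper's exact detector.

```python
import numpy as np

def kl_per_dim(mu, logvar):
    """Per-dimension contribution to KL(q(z|x) || N(0, I))."""
    return 0.5 * (mu ** 2 + np.exp(logvar) - logvar - 1.0)

def fit_ood_detector(mu_train, logvar_train, n_dims=4):
    """Pick the dims with highest mean KL (the informative latents), and model
    their training means with a per-dimension Gaussian."""
    informative = np.argsort(kl_per_dim(mu_train, logvar_train).mean(0))[-n_dims:]
    m = mu_train[:, informative]
    return informative, m.mean(axis=0), m.std(axis=0) + 1e-8

def ood_score(mu_test, detector):
    informative, mean, std = detector
    z = (mu_test[:, informative] - mean) / std
    return np.max(np.abs(z), axis=1)      # large score -> likely out-of-distribution

# Toy usage with random stand-ins for encoder outputs of shape (batch, latent_dim).
rng = np.random.default_rng(0)
mu_tr, lv_tr = rng.normal(size=(1000, 32)), rng.normal(-2, 0.1, size=(1000, 32))
det = fit_ood_detector(mu_tr, lv_tr)
print(ood_score(rng.normal(size=(5, 32)), det))
```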