Paper Group ANR 374
Adversarial Distributional Training for Robust Deep Learning. Fair Adversarial Networks. Taylor Moment Expansion for Continuous-Discrete Gaussian Filtering and Smoothing. Cooperative Object Detection and Parameter Estimation Using Visible Light Communications. Hybrid Tiled Convolutional Neural Networks for Text Sentiment Classification. Multi-modal …
Adversarial Distributional Training for Robust Deep Learning
Title | Adversarial Distributional Training for Robust Deep Learning |
Authors | Zhijie Deng, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu |
Abstract | Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples. However, adversarially trained models do not perform well enough on test data or under attack algorithms unseen during training, which leaves room for improvement. In this paper, we introduce a novel adversarial distributional training (ADT) framework for learning robust models. Specifically, we formulate ADT as a minimax optimization problem, where the inner maximization aims to learn an adversarial distribution that characterizes the potential adversarial examples around a natural one, and the outer minimization aims to train robust classifiers by minimizing the expected loss over the worst-case adversarial distributions. We conduct a theoretical analysis of how to solve the minimax problem, leading to a general algorithm for ADT. We further propose three different approaches to parameterizing the adversarial distributions. Empirical results on various benchmarks validate the effectiveness of ADT compared with the state-of-the-art AT methods. |
Tasks | |
Published | 2020-02-14 |
URL | https://arxiv.org/abs/2002.05999v1 |
https://arxiv.org/pdf/2002.05999v1.pdf | |
PWC | https://paperswithcode.com/paper/adversarial-distributional-training-for |
Repo | |
Framework | |
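A minimal PyTorch-style sketch of the minimax idea in the abstract, assuming a per-example Gaussian over perturbations with learnable mean and log-std (the paper proposes three parameterizations; this is only one illustrative choice). The model, the L-infinity bound `eps`, and the step counts are placeholder assumptions rather than the authors' settings.

```python
import torch
import torch.nn.functional as F

def adt_step(model, x, y, eps=8/255, inner_steps=7, inner_lr=0.1, n_samples=4):
    """One simplified ADT step: fit a per-example Gaussian over perturbations to
    maximize the loss (inner problem), then return the expected loss over that
    distribution for the outer minimization. Illustrative sketch only."""
    mu = torch.zeros_like(x, requires_grad=True)            # distribution mean
    log_sigma = torch.full_like(x, -3.0).requires_grad_()   # distribution log-std

    # Inner maximization over the distribution parameters (sign-gradient ascent).
    for _ in range(inner_steps):
        # Reparameterized sample, squashed into the L_inf ball of radius eps.
        delta = eps * torch.tanh(mu + log_sigma.exp() * torch.randn_like(x))
        loss = F.cross_entropy(model(x + delta), y)
        g_mu, g_sigma = torch.autograd.grad(loss, [mu, log_sigma])
        with torch.no_grad():
            mu += inner_lr * g_mu.sign()
            log_sigma += inner_lr * g_sigma.sign()

    # Outer objective: expected loss under the learned adversarial distribution.
    outer_loss = 0.0
    for _ in range(n_samples):
        delta = (eps * torch.tanh(mu + log_sigma.exp() * torch.randn_like(x))).detach()
        outer_loss = outer_loss + F.cross_entropy(model(x + delta), y)
    return outer_loss / n_samples

# Usage: loss = adt_step(model, x, y); loss.backward(); optimizer.step()
```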
Fair Adversarial Networks
Title | Fair Adversarial Networks |
Authors | George Cevora |
Abstract | The influence of human judgement is ubiquitous in datasets used across the analytics industry, yet humans are known to be sub-optimal decision makers prone to various biases. Analysing biased datasets then leads to biased outcomes of the analysis. Bias by protected characteristics (e.g. race) is of particular interest as it may not only make the output of the analytical process sub-optimal, but also illegal. Countering the bias by constraining the analytical outcomes to be fair is problematic because A) fairness lacks a universally accepted definition, while at the same time some definitions are mutually exclusive, and B) the use of optimisation constraints ensuring fairness is incompatible with most analytical pipelines. Both problems are solved by methods which remove bias from the data and return an altered dataset. This approach aims not only to remove the actual bias variable (e.g. race), but also to alter all proxy variables (e.g. postcode) so that the bias variable is not detectable from the rest of the data. The advantage of this approach is that the definition of fairness as a lack of detectable bias in the data (as opposed to the output of the analysis) is universal and therefore solves problem (A). Furthermore, as the data is altered to remove bias, problem (B) disappears because the analytical pipelines can remain unchanged. This approach has been adopted by several technical solutions. None of them, however, is satisfactory in its ability to remove multivariate, non-linear and non-binary biases. Therefore, in this paper I propose the concept of Fair Adversarial Networks as an easy-to-implement general method for removing bias from data. This paper demonstrates that Fair Adversarial Networks achieve this aim. |
Tasks | |
Published | 2020-02-23 |
URL | https://arxiv.org/abs/2002.12144v1 |
https://arxiv.org/pdf/2002.12144v1.pdf | |
PWC | https://paperswithcode.com/paper/fair-adversarial-networks |
Repo | |
Framework | |
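The paper's contribution is an adversarial network that returns an altered, bias-free dataset; no implementation details are given in the abstract, so the following is only a generic adversarial-debiasing sketch in PyTorch under assumed layer sizes. An encoder alters the data, a bias detector tries to recover the protected attribute from the altered data, and the encoder is trained to stay close to the original data while defeating the detector.

```python
import torch
import torch.nn as nn

d_in = 20   # number of original features (hypothetical)
enc = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_in))  # data "cleaner"
adv = nn.Sequential(nn.Linear(d_in, 32), nn.ReLU(), nn.Linear(32, 1))     # bias detector

opt_enc = torch.optim.Adam(enc.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adv.parameters(), lr=1e-3)
bce, mse = nn.BCEWithLogitsLoss(), nn.MSELoss()

def train_step(x, protected, lam=1.0):
    """x: (batch, d_in) features; protected: (batch, 1) binary protected attribute."""
    # 1) The detector learns to spot the protected attribute in the altered data.
    loss_adv = bce(adv(enc(x).detach()), protected)
    opt_adv.zero_grad()
    loss_adv.backward()
    opt_adv.step()

    # 2) The encoder keeps the altered data close to the original while pushing
    #    the detector's predictions toward an uninformative 0.5.
    x_alt = enc(x)
    p = torch.sigmoid(adv(x_alt))
    loss_enc = mse(x_alt, x) + lam * ((p - 0.5) ** 2).mean()
    opt_enc.zero_grad()
    loss_enc.backward()
    opt_enc.step()
    return loss_adv.item(), loss_enc.item()
```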
Taylor Moment Expansion for Continuous-Discrete Gaussian Filtering and Smoothing
Title | Taylor Moment Expansion for Continuous-Discrete Gaussian Filtering and Smoothing |
Authors | Zheng Zhao, Toni Karvonen, Roland Hostettler, Simo Särkkä |
Abstract | The paper is concerned with non-linear Gaussian filtering and smoothing in continuous-discrete state-space models, where the dynamic model is formulated as an Itô stochastic differential equation (SDE), and the measurements are obtained at discrete time instants. We propose a novel Taylor moment expansion (TME) Gaussian filter and smoother which approximate the moments of the SDE with a temporal Taylor expansion. Differently from classical linearisation or Itô–Taylor approaches, the Taylor expansion is formed for the moment functions directly and in the time variable, not by using a Taylor expansion on the non-linear functions in the model. We analyse the theoretical properties, including the positive definiteness of the covariance estimate and the stability of the TME Gaussian filter and smoother. By numerical experiments, we demonstrate that the proposed TME Gaussian filter and smoother significantly outperform the state-of-the-art methods in terms of estimation accuracy and numerical stability. |
Tasks | |
Published | 2020-01-08 |
URL | https://arxiv.org/abs/2001.02466v1 |
https://arxiv.org/pdf/2001.02466v1.pdf | |
PWC | https://paperswithcode.com/paper/taylor-moment-expansion-for-continuous |
Repo | |
Framework | |
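As a concrete illustration of the core idea (a Taylor expansion of the moment functions in the time variable rather than of the model's non-linearities), the NumPy snippet below works out a second-order TME prediction for the scalar Ornstein-Uhlenbeck SDE, for which the exact conditional moments are known. The parameter values are arbitrary; this is not the authors' code or their multivariate filter.

```python
import numpy as np

# Ornstein-Uhlenbeck SDE: dx = -lam * x dt + sqrt(q) dB  (illustrative parameters).
lam, q, dt, x0 = 0.8, 0.5, 0.1, 1.5

# Second-order TME: E[phi(x(t+dt)) | x(t)] ~ sum_{r=0..2} dt^r / r! * A^r phi(x(t)),
# where A phi = -lam * x * phi' + (q / 2) * phi'' is the generator of the SDE.
#   phi(x) = x:    A x   = -lam*x,           A^2 x   = lam^2 * x
#   phi(x) = x^2:  A x^2 = -2*lam*x^2 + q,   A^2 x^2 = 4*lam^2*x^2 - 2*lam*q
mean_tme = x0 + dt * (-lam * x0) + dt**2 / 2 * (lam**2 * x0)
m2_tme = x0**2 + dt * (-2 * lam * x0**2 + q) + dt**2 / 2 * (4 * lam**2 * x0**2 - 2 * lam * q)
var_tme = m2_tme - mean_tme**2

# Exact conditional moments of the OU process, for comparison.
mean_exact = x0 * np.exp(-lam * dt)
var_exact = q / (2 * lam) * (1 - np.exp(-2 * lam * dt))

print(f"mean: TME {mean_tme:.6f} vs exact {mean_exact:.6f}")
print(f"var : TME {var_tme:.6f} vs exact {var_exact:.6f}")
```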
Cooperative Object Detection and Parameter Estimation Using Visible Light Communications
Title | Cooperative Object Detection and Parameter Estimation Using Visible Light Communications |
Authors | Hamid Hosseinianfar, Maite Brandt-Pearce |
Abstract | Visible light communication (VLC) systems are promising candidates for future indoor access and peer-to-peer networks. The performance of these systems, however, is vulnerable to the line of sight (LOS) link blockage due to objects inside the room. In this paper, we develop a probabilistic object detection method that takes advantage of the blockage status of the LOS links between the user devices and transceivers on the ceiling to locate those objects. The target objects are modeled as cylinders with random radii. The location and size of an object can be estimated by using a quadratic programming approach. Simulation results show that the root-mean-squared error can be less than $1$ cm and $8$ cm for estimating the center and the radius of the object, respectively. |
Tasks | Object Detection |
Published | 2020-03-17 |
URL | https://arxiv.org/abs/2003.07525v1 |
https://arxiv.org/pdf/2003.07525v1.pdf | |
PWC | https://paperswithcode.com/paper/cooperative-object-detection-and-parameter |
Repo | |
Framework | |
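The paper casts the estimation as a quadratic program; the snippet below is only a 2D toy stand-in that illustrates the same inference (recovering a circle's centre and radius from which LOS links are blocked) with a generic least-squares objective and SciPy's Nelder-Mead optimizer. The room geometry, link count, and loss form are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def seg_dist(p, a, b):
    """Shortest distance from point p to the segment a-b (2D plan view)."""
    ab, ap = b - a, p - a
    t = np.clip(ap @ ab / (ab @ ab), 0.0, 1.0)
    return np.linalg.norm(a + t * ab - p)

# Toy 5 m x 5 m room: 40 LOS links between ceiling transceivers and user devices
# (projected onto the floor plane) and a cylindrical object with known ground truth.
tx = rng.uniform(0, 5, size=(40, 2))
rx = rng.uniform(0, 5, size=(40, 2))
c_true, r_true = np.array([2.5, 3.0]), 0.4
blocked = np.array([seg_dist(c_true, a, b) <= r_true for a, b in zip(tx, rx)])

def cost(theta):
    """Blocked links must pass within radius r of the centre; unblocked ones must not."""
    c, r = theta[:2], theta[2]
    total = 0.0
    for a, b, blk in zip(tx, rx, blocked):
        d = seg_dist(c, a, b)
        total += max(0.0, d - r) ** 2 if blk else max(0.0, r - d) ** 2
    return total

est = minimize(cost, x0=np.array([1.0, 1.0, 0.2]), method="Nelder-Mead").x
print("estimated centre:", est[:2], "estimated radius:", est[2])
```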
Hybrid Tiled Convolutional Neural Networks for Text Sentiment Classification
Title | Hybrid Tiled Convolutional Neural Networks for Text Sentiment Classification |
Authors | Maria Mihaela Trusca, Gerasimos Spanakis |
Abstract | The tiled convolutional neural network (tiled CNN) has so far been applied only to computer vision, for learning invariances. We adjust its architecture to NLP to improve the extraction of the most salient features for sentiment analysis. Knowing that the major drawback of the tiled CNN in the NLP field is its inflexible filter structure, we propose a novel architecture called the hybrid tiled CNN that applies a filter only to words that appear in similar contexts and to their neighbouring words (a necessary step for preventing the loss of some n-grams). Experiments on the IMDB movie review and SemEval 2017 datasets demonstrate the effectiveness of the hybrid tiled CNN, which performs better than both the standard CNN and the tiled CNN. |
Tasks | Sentiment Analysis |
Published | 2020-01-31 |
URL | https://arxiv.org/abs/2001.11857v1 |
https://arxiv.org/pdf/2001.11857v1.pdf | |
PWC | https://paperswithcode.com/paper/hybrid-tiled-convolutional-neural-networks |
Repo | |
Framework | |
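For readers unfamiliar with tiled convolutions, the sketch below implements a plain tiled 1D convolution over word embeddings in PyTorch: k separate filter banks are reused cyclically across positions, which is the inflexible filter structure the abstract refers to. The hybrid variant that ties filters to words occurring in similar contexts is not reproduced, and the dimensions are illustrative.

```python
import torch
import torch.nn as nn

class TiledConv1d(nn.Module):
    """Tiled 1D convolution: k distinct filter banks are applied cyclically across
    positions (position p uses bank p mod k) instead of one shared filter as in a
    standard CNN. Illustrative sketch, not the paper's code."""
    def __init__(self, in_ch, out_ch, kernel_size, tile_size):
        super().__init__()
        self.tile_size = tile_size
        self.banks = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, kernel_size) for _ in range(tile_size)
        )

    def forward(self, x):                                            # x: (batch, in_ch, seq_len)
        outs = torch.stack([bank(x) for bank in self.banks], dim=-1)  # (B, C, L, k)
        pos = torch.arange(outs.size(2))
        return outs[:, :, pos, pos % self.tile_size]                  # pick bank (p mod k) at each p

# Example: 50-dim word embeddings, 36 tokens, 3-gram filters, tile size 2.
x = torch.randn(8, 50, 36)
y = TiledConv1d(50, 100, kernel_size=3, tile_size=2)(x)
print(y.shape)   # torch.Size([8, 100, 34])
```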
Multi-modal Sentiment Analysis using Super Characters Method on Low-power CNN Accelerator Device
Title | Multi-modal Sentiment Analysis using Super Characters Method on Low-power CNN Accelerator Device |
Authors | Baohua Sun, Lin Yang, Hao Sha, Michael Lin |
Abstract | In recent years, NLP research has witnessed record-breaking accuracy improvements from DNN models. However, power consumption is one of the practical concerns for deploying NLP systems. Most current state-of-the-art algorithms are implemented on GPUs, which are not power-efficient, and the deployment cost is also very high. On the other hand, the CNN Domain Specific Accelerator (CNN-DSA) has been in mass production, providing low-power and low-cost computation. In this paper, we implement the Super Characters method on the CNN-DSA. In addition, we modify the Super Characters method to utilize multi-modal data, i.e. text plus tabular data, in the CL-Aff shared task. |
Tasks | Sentiment Analysis |
Published | 2020-01-28 |
URL | https://arxiv.org/abs/2001.10179v1 |
https://arxiv.org/pdf/2001.10179v1.pdf | |
PWC | https://paperswithcode.com/paper/multi-modal-sentiment-analysis-using-super |
Repo | |
Framework | |
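The Super Characters method renders text as an image of glyphs and classifies that image with a CNN; the multi-modal extension in the abstract additionally draws tabular fields onto the same canvas. The sketch below is a toy PIL + PyTorch illustration under an assumed image size, grid layout, and network, not the CNN-DSA deployment.

```python
import numpy as np
import torch
import torch.nn as nn
from PIL import Image, ImageDraw

def text_to_super_image(text, size=224, cols=14):
    """Draw the characters of `text` onto a square grayscale canvas, one glyph per
    grid cell; tabular fields can simply be appended to the text before drawing.
    Toy layout using PIL's default bitmap font."""
    img = Image.new("L", (size, size), color=255)
    draw = ImageDraw.Draw(img)
    cell = size // cols
    for i, ch in enumerate(text[: cols * cols]):
        row, col = divmod(i, cols)
        draw.text((col * cell + 2, row * cell + 2), ch, fill=0)
    return torch.from_numpy(np.asarray(img, dtype=np.float32) / 255.0).unsqueeze(0)

# A small CNN standing in for the classifier deployed on the low-power accelerator.
cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
    nn.Flatten(), nn.Linear(32 * 14 * 14, 2),   # two sentiment classes
)

x = text_to_super_image("great movie , loved it | age:34 | country:US")
logits = cnn(x.unsqueeze(0))                    # add a batch dimension
print(logits.shape)                             # torch.Size([1, 2])
```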
Search-Based Software Engineering for Self-Adaptive Systems: One Survey, Five Disappointments and Six Opportunities
Title | Search-Based Software Engineering for Self-Adaptive Systems: One Survey, Five Disappointments and Six Opportunities |
Authors | Tao Chen, Miqing Li, Ke Li, Kalyanmoy Deb |
Abstract | Search-Based Software Engineering (SBSE) is a promising paradigm that exploits computational search to optimize different processes when engineering complex software systems. Self-adaptive systems (SASs) are one category of such complex systems; they permit the optimization of different functional and non-functional objectives/criteria under changing environments (e.g., requirements and workload), which involves problems that are amenable to search. In this regard, over the years there has been a considerable amount of work investigating SBSE for SASs. In this paper, we provide the first systematic and comprehensive survey exclusively on SBSE for SASs, covering 3,740 papers in 27 venues from 7 repositories, which eventually leads to several key statistics drawn from the most notable 73 primary studies in this particular field of research. Our results, surprisingly, reveal five disappointing issues that are of utmost importance but have been overwhelmingly ignored in existing studies. We provide evidence to justify our arguments regarding these disappointments and highlight six emergent, but currently under-explored, opportunities for future work on SBSE for SASs. By mitigating the disappointing issues revealed in this work, together with the highlighted opportunities, we hope to stimulate much more significant growth in this particular research direction. |
Tasks | |
Published | 2020-01-22 |
URL | https://arxiv.org/abs/2001.08236v1 |
https://arxiv.org/pdf/2001.08236v1.pdf | |
PWC | https://paperswithcode.com/paper/search-based-software-engineering-for-self |
Repo | |
Framework | |
Pattern Similarity-based Machine Learning Methods for Mid-term Load Forecasting: A Comparative Study
Title | Pattern Similarity-based Machine Learning Methods for Mid-term Load Forecasting: A Comparative Study |
Authors | Grzegorz Dudek, Pawel Pelka |
Abstract | Pattern similarity-based methods are widely used in classification and regression problems. The repeated, similar-shaped cycles observed in seasonal time series encourage the application of these methods to forecasting. In this paper we use pattern similarity-based methods for forecasting monthly electricity demand exhibiting annual seasonality. An integral part of the models is the representation of the time series using patterns of its sequences. Pattern representation unifies the input and output data through trend filtering and variance equalization. Consequently, pattern representation simplifies the forecasting problem and allows us to use models based on pattern similarity. We consider four such models: the nearest neighbor model, the fuzzy neighborhood model, the kernel regression model and the general regression neural network. A regression function is constructed by aggregating output patterns with weights dependent on the similarity between input patterns. The advantages of the proposed models are: a clear principle of operation, a small number of parameters to adjust, a fast optimization procedure, good generalization ability, operation on the newest data without retraining, robustness to missing input variables, and generation of a vector as an output. In the experimental part of the work the proposed models were used to forecast the monthly electricity demand of 35 European countries. Their performance was compared with that of classical models such as ARIMA and exponential smoothing, as well as state-of-the-art models such as the multilayer perceptron, the neuro-fuzzy system and the long short-term memory model. The results show the high performance of the proposed models, which outperform the comparative models in accuracy, simplicity and ease of optimization. |
Tasks | Load Forecasting, Time Series |
Published | 2020-03-03 |
URL | https://arxiv.org/abs/2003.01475v1 |
https://arxiv.org/pdf/2003.01475v1.pdf | |
PWC | https://paperswithcode.com/paper/pattern-similarity-based-machine-learning |
Repo | |
Framework | |
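A minimal NumPy sketch of the pattern-similarity pipeline described above: yearly cycles are normalized into patterns (trend filtering and variance equalization), and the forecast is a similarity-weighted average of the output patterns that followed the most similar historical input patterns. Only the nearest-neighbour variant is shown, and de-normalizing with the last cycle's mean and standard deviation is a simplifying assumption.

```python
import numpy as np

def make_patterns(series, period=12):
    """Split a monthly series into yearly cycles and normalize each cycle
    (subtract its mean, divide by its std): the 'pattern' representation."""
    cycles = series[: len(series) // period * period].reshape(-1, period)
    mu = cycles.mean(axis=1, keepdims=True)
    sd = cycles.std(axis=1, keepdims=True)
    return (cycles - mu) / sd, mu.ravel(), sd.ravel()

def forecast_next_year(series, period=12, k=3):
    """Nearest-neighbour pattern-similarity forecast of the next yearly cycle."""
    x, mu, sd = make_patterns(series, period)
    query, history, targets = x[-1], x[:-1], x[1:]   # target = pattern following each input
    d = np.linalg.norm(history - query, axis=1)
    nn_idx = np.argsort(d)[:k]
    w = 1.0 / (d[nn_idx] + 1e-8)                     # similarity weights
    pattern = (w[:, None] * targets[nn_idx]).sum(axis=0) / w.sum()
    return pattern * sd[-1] + mu[-1]                 # de-normalize (simplified)

# Toy monthly demand with a trend and annual seasonality.
t = np.arange(240)
demand = 100 + 0.3 * t + 15 * np.sin(2 * np.pi * t / 12) \
         + np.random.default_rng(1).normal(0, 2, t.size)
print(forecast_next_year(demand))
```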
Knowledge Discovery from Social Media using Big Data provided Sentiment Analysis (SoMABiT)
Title | Knowledge Discovery from Social Media using Big Data provided Sentiment Analysis (SoMABiT) |
Authors | Mahdi Bohlouli, Jens Dalter, Mareike Dornhöfer, Johannes Zenkert, Madjid Fathi |
Abstract | In today's competitive business world, being aware of customer needs and market-oriented production is a key success factor for industries. To this end, the use of efficient analytic algorithms ensures a better understanding of customer feedback and improves the next generation of products. Accordingly, the dramatic increase in the use of social media in daily life provides beneficial sources for market analytics. But how traditional analytic algorithms and methods can scale up for such disparate and multi-structured data sources is the main challenge in this regard. This paper presents and discusses the technological and scientific focus of SoMABiT as a social media analysis platform using big data technology. Sentiment analysis has been employed in order to discover knowledge from social media. The use of MapReduce, and the development of a distributed algorithm towards an integrated platform that can scale to any data volume and provide social-media-driven knowledge, is the main novelty of the proposed concept in comparison to state-of-the-art technologies. |
Tasks | Sentiment Analysis |
Published | 2020-01-16 |
URL | https://arxiv.org/abs/2001.05996v1 |
https://arxiv.org/pdf/2001.05996v1.pdf | |
PWC | https://paperswithcode.com/paper/knowledge-discovery-from-social-media-using |
Repo | |
Framework | |
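The platform's stated novelty is a MapReduce-style distributed sentiment computation. The snippet below is a single-machine toy in plain Python showing the map and reduce phases with a hypothetical mini-lexicon; it is not the Hadoop-based SoMABiT implementation.

```python
from collections import Counter
from functools import reduce

# Tiny illustrative lexicon; a production system would use a full lexicon or model.
LEXICON = {"good": 1, "great": 1, "love": 1, "bad": -1, "awful": -1, "hate": -1}

def map_post(post):
    """Map phase: emit a (topic -> sentiment score) partial result for one post."""
    topic, text = post
    score = sum(LEXICON.get(w, 0) for w in text.lower().split())
    return Counter({topic: score})

def reduce_counts(acc, part):
    """Reduce phase: merge partial per-topic sentiment sums."""
    acc.update(part)
    return acc

posts = [
    ("product_x", "I love the new product_x, great battery"),
    ("product_x", "awful support, bad experience"),
    ("product_y", "good value"),
]
totals = reduce(reduce_counts, map(map_post, posts), Counter())
print(dict(totals))   # {'product_x': 0, 'product_y': 1}
```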
Ensemble Deep Learning on Large, Mixed-Site fMRI Datasets in Autism and Other Tasks
Title | Ensemble Deep Learning on Large, Mixed-Site fMRI Datasets in Autism and Other Tasks |
Authors | Matthew Leming, Juan Manuel Górriz, John Suckling |
Abstract | Deep learning models for MRI classification face two recurring problems: they are typically limited by low sample size, and are abstracted by their own complexity (the “black box problem”). In this paper, we train a convolutional neural network (CNN) with the largest multi-source, functional MRI (fMRI) connectomic dataset ever compiled, consisting of 43,858 datapoints. We apply this model to a cross-sectional comparison of autism (ASD) vs typically developing (TD) controls that has proved difficult to characterise with inferential statistics. To contextualise these findings, we additionally perform classifications of gender and task vs rest. Employing class-balancing to build a training set, we trained 3$\times$300 modified CNNs in an ensemble model to classify fMRI connectivity matrices with overall AUROCs of 0.6774, 0.7680, and 0.9222 for ASD vs TD, gender, and task vs rest, respectively. Additionally, we aim to address the black box problem in this context using two visualization methods. First, class activation maps show which functional connections of the brain our models focus on when performing classification. Second, by analyzing maximal activations of the hidden layers, we were also able to explore how the model organizes a large and mixed-centre dataset, finding that it dedicates specific areas of its hidden layers to processing different covariates of data (depending on the independent variable analyzed), and other areas to mix data from different sources. Our study finds that deep learning models that distinguish ASD from TD controls focus broadly on temporal and cerebellar connections, with a particularly high focus on the right caudate nucleus and paracentral sulcus. |
Tasks | |
Published | 2020-02-14 |
URL | https://arxiv.org/abs/2002.07874v1 |
https://arxiv.org/pdf/2002.07874v1.pdf | |
PWC | https://paperswithcode.com/paper/ensemble-deep-learning-on-large-mixed-site |
Repo | |
Framework | |
Towards Photo-Realistic Virtual Try-On by Adaptively Generating$\leftrightarrow$Preserving Image Content
Title | Towards Photo-Realistic Virtual Try-On by Adaptively Generating$\leftrightarrow$Preserving Image Content |
Authors | Han Yang, Ruimao Zhang, Xiaobao Guo, Wei Liu, Wangmeng Zuo, Ping Luo |
Abstract | Image-based virtual try-on aims at transferring a target clothing image onto a reference person, and has become a hot topic in recent years. Prior arts usually focus on preserving the character of a clothing image (e.g. texture, logo, embroidery) when warping it to an arbitrary human pose. However, it remains a big challenge to generate photo-realistic try-on images when large occlusions and challenging human poses are present in the reference person. To address this issue, we propose a novel visual try-on network, namely the Adaptive Content Generating and Preserving Network (ACGPN). In particular, ACGPN first predicts the semantic layout of the reference image that will be changed after try-on (e.g. long sleeve shirt$\rightarrow$arm, arm$\rightarrow$jacket), and then determines whether its image content needs to be generated or preserved according to the predicted semantic layout, leading to photo-realistic try-on results and rich clothing details. ACGPN generally involves three major modules. First, a semantic layout generation module utilizes the semantic segmentation of the reference image to progressively predict the desired semantic layout after try-on. Second, a clothes warping module warps clothing images according to the generated semantic layout, where a second-order difference constraint is introduced to stabilize the warping process during training. Third, an inpainting module for content fusion integrates all information (e.g. reference image, semantic layout, warped clothes) to adaptively produce each semantic part of the human body. In comparison to the state-of-the-art methods, ACGPN can generate photo-realistic images with much better perceptual quality and richer fine details. |
Tasks | Semantic Segmentation |
Published | 2020-03-12 |
URL | https://arxiv.org/abs/2003.05863v1 |
https://arxiv.org/pdf/2003.05863v1.pdf | |
PWC | https://paperswithcode.com/paper/towards-photo-realistic-virtual-try-on-by |
Repo | |
Framework | |
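The clothes warping module is stabilized with a second-order difference constraint on the warp. A plausible PyTorch reading of such a constraint, as a penalty on the second differences of a grid of warping control points, is sketched below; the grid shape and weighting are assumptions, not the authors' exact formulation.

```python
import torch

def second_order_difference_loss(ctrl_pts):
    """Penalize second differences of a (batch, H, W, 2) grid of warp control
    points along both grid axes, encouraging locally linear (smooth) warps."""
    d2_w = ctrl_pts[:, :, :-2] - 2 * ctrl_pts[:, :, 1:-1] + ctrl_pts[:, :, 2:]  # along width
    d2_h = ctrl_pts[:, :-2] - 2 * ctrl_pts[:, 1:-1] + ctrl_pts[:, 2:]           # along height
    return (d2_w ** 2).mean() + (d2_h ** 2).mean()

# Example: a 5x5 TPS-style control grid for a batch of 4 clothing images.
grid = torch.randn(4, 5, 5, 2, requires_grad=True)
loss = second_order_difference_loss(grid)
loss.backward()   # would be added to the warping module's training loss
```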
Scalable Tactile Sensing for an Omni-adaptive Soft Robot Finger
Title | Scalable Tactile Sensing for an Omni-adaptive Soft Robot Finger |
Authors | Zeyi Yang, Sheng Ge, Fang Wan, Yujia Liu, Chaoyang Song |
Abstract | Robotic fingers made of soft material and compliant structures usually lead to superior adaptation when interacting with the unstructured physical environment. In this paper, we present an embedded sensing solution using optical fibers for an omni-adaptive soft robotic finger with exceptional adaptation in all directions. In particular, we managed to insert a pair of optical fibers inside the finger’s structural cavity without interfering with its adaptive performance. The resulting integration is scalable as a versatile, low-cost, and moisture-proof solution for physically safe human-robot interaction. In addition, we experimented with our finger design in an object sorting task and identified the sectional diameters of 94% of objects within a $\pm$6 mm error, and measured 80% of the structural strains within a $\pm$0.1 mm/mm error. The proposed sensor design opens many doors for future applications of soft robotics in scalable and adaptive physical interactions with unstructured environments. |
Tasks | |
Published | 2020-02-29 |
URL | https://arxiv.org/abs/2003.01583v1 |
https://arxiv.org/pdf/2003.01583v1.pdf | |
PWC | https://paperswithcode.com/paper/scalable-tactile-sensing-for-an-omni-adaptive |
Repo | |
Framework | |
ContainerStress: Autonomous Cloud-Node Scoping Framework for Big-Data ML Use Cases
Title | ContainerStress: Autonomous Cloud-Node Scoping Framework for Big-Data ML Use Cases |
Authors | Guang Chao Wang, Kenny Gross, Akshay Subramaniam |
Abstract | Deploying big-data Machine Learning (ML) services in a cloud environment presents a challenge to the cloud vendor with respect to sizing the cloud container configuration for any given customer use case. OracleLabs has developed an automated framework that uses nested-loop Monte Carlo simulation to autonomously scope customer ML use cases of any size across the range of cloud CPU-GPU “Shapes” (configurations of CPUs and/or GPUs in cloud containers available to end customers). Moreover, the OracleLabs and NVIDIA authors have collaborated on an ML benchmark study which analyzes the compute cost and GPU acceleration of any ML prognostic algorithm and assesses the reduction of compute cost in a cloud container comprising conventional CPUs and NVIDIA GPUs. |
Tasks | |
Published | 2020-03-18 |
URL | https://arxiv.org/abs/2003.08011v1 |
https://arxiv.org/pdf/2003.08011v1.pdf | |
PWC | https://paperswithcode.com/paper/containerstress-autonomous-cloud-node-scoping |
Repo | |
Framework | |
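A toy Python sketch of nested-loop Monte Carlo scoping in the spirit of the framework described above: the outer loop perturbs the customer workload, the inner loop perturbs per-Shape throughput, and the cheapest Shape that meets a runtime deadline with high probability is selected. The Shape catalogue, throughput figures, and thresholds are invented for illustration.

```python
import random

# Hypothetical catalogue of cloud "Shapes" with hourly cost and rough throughput;
# the real framework measures these empirically per customer use case.
SHAPES = {
    "cpu_small":  {"cost_per_h": 0.5, "records_per_s": 2_000},
    "cpu_large":  {"cost_per_h": 2.0, "records_per_s": 9_000},
    "gpu_single": {"cost_per_h": 3.5, "records_per_s": 40_000},
}

def scope_shape(n_records, deadline_s, n_outer=200, n_inner=50, seed=0):
    """Return the cheapest Shape whose simulated runtime meets the deadline in
    at least 95% of Monte Carlo trials (nested workload x throughput loops)."""
    rng = random.Random(seed)
    feasible = {}
    for name, shape in SHAPES.items():
        ok, runtimes = 0, []
        for _ in range(n_outer):
            workload = n_records * rng.uniform(0.8, 1.2)              # workload uncertainty
            for _ in range(n_inner):
                thr = shape["records_per_s"] * rng.uniform(0.7, 1.0)  # contention / variance
                runtime = workload / thr
                runtimes.append(runtime)
                ok += runtime <= deadline_s
        if ok / (n_outer * n_inner) >= 0.95:
            mean_hours = sum(runtimes) / len(runtimes) / 3600
            feasible[name] = shape["cost_per_h"] * mean_hours          # expected job cost
    return min(feasible, key=feasible.get) if feasible else None

print(scope_shape(n_records=50_000_000, deadline_s=3600))   # -> "gpu_single"
```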
Real-Time High-Performance Semantic Image Segmentation of Urban Street Scenes
Title | Real-Time High-Performance Semantic Image Segmentation of Urban Street Scenes |
Authors | Genshun Dong, Yan Yan, Chunhua Shen, Hanzi Wang |
Abstract | Deep Convolutional Neural Networks (DCNNs) have recently shown outstanding performance in semantic image segmentation. However, state-of-the-art DCNN-based semantic segmentation methods usually suffer from high computational complexity due to the use of complex network architectures. This greatly limits their applications in real-world scenarios that require real-time processing. In this paper, we propose a real-time high-performance DCNN-based method for robust semantic segmentation of urban street scenes, which achieves a good trade-off between accuracy and speed. Specifically, a Lightweight Baseline Network with Atrous convolution and Attention (LBN-AA) is first used as our baseline network to efficiently obtain dense feature maps. Then, Distinctive Atrous Spatial Pyramid Pooling (DASPP), which exploits pooling operations of different sizes to encode rich and distinctive semantic information, is developed to detect objects at multiple scales. Meanwhile, a Spatial detail-Preserving Network (SPN) with shallow convolutional layers is designed to generate high-resolution feature maps preserving detailed spatial information. Finally, a simple but practical Feature Fusion Network (FFN) is used to effectively combine the shallow and deep features from the semantic branch (DASPP) and the spatial branch (SPN), respectively. Extensive experimental results show that the proposed method achieves accuracies of 73.6% and 68.0% mean Intersection over Union (mIoU) at inference speeds of 51.0 fps and 39.3 fps on the challenging Cityscapes and CamVid test datasets, respectively (using only a single NVIDIA TITAN X card). This demonstrates that the proposed method offers excellent performance at real-time speed for semantic segmentation of urban street scenes. |
Tasks | Semantic Segmentation |
Published | 2020-03-11 |
URL | https://arxiv.org/abs/2003.08736v1 |
https://arxiv.org/pdf/2003.08736v1.pdf | |
PWC | https://paperswithcode.com/paper/real-time-high-performance-semantic-image |
Repo | |
Framework | |
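The DASPP module combines atrous convolutions with pooling operations of different sizes. The PyTorch block below is a generic ASPP-style stand-in (parallel atrous branches at several dilation rates, concatenated and fused) with assumed channel counts and rates; it is not the authors' exact DASPP.

```python
import torch
import torch.nn as nn

class AtrousPyramid(nn.Module):
    """Parallel 3x3 atrous convolutions at several dilation rates, concatenated
    and fused by a 1x1 convolution. A generic stand-in for DASPP, which further
    varies the pooling size per branch."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        )
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

# Example: fuse a low-resolution feature map from the baseline network.
feat = torch.randn(2, 256, 32, 64)
out = AtrousPyramid(256, 128)(feat)
print(out.shape)   # torch.Size([2, 128, 32, 64])
```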
COVIDX-Net: A Framework of Deep Learning Classifiers to Diagnose COVID-19 in X-Ray Images
Title | COVIDX-Net: A Framework of Deep Learning Classifiers to Diagnose COVID-19 in X-Ray Images |
Authors | Ezz El-Din Hemdan, Marwa A. Shouman, Mohamed Esmail Karar |
Abstract | Background and Purpose: Coronaviruses (CoV) are perilous viruses that may cause Severe Acute Respiratory Syndrome (SARS-CoV) and Middle East Respiratory Syndrome (MERS-CoV). The novel 2019 coronavirus disease (COVID-19) was discovered as a novel pneumonia-causing disease in the city of Wuhan, China at the end of 2019. It has since become a worldwide coronavirus outbreak, and the numbers of infected people and deaths are increasing rapidly every day according to the updated reports of the World Health Organization (WHO). Therefore, the aim of this article is to introduce a new deep learning framework, namely COVIDX-Net, to assist radiologists in automatically diagnosing COVID-19 in X-ray images. Materials and Methods: Due to the lack of public COVID-19 datasets, the study is validated on 50 chest X-ray images with 25 confirmed positive COVID-19 cases. COVIDX-Net includes seven different architectures of deep convolutional neural network models, such as the modified Visual Geometry Group Network (VGG19) and the second version of Google MobileNet. Each deep neural network model is able to analyze the normalized intensities of the X-ray image to classify the patient status as either a negative or a positive COVID-19 case. Results: Experiments and evaluation of COVIDX-Net have been successfully carried out based on an 80-20% split of the X-ray images between the model training and testing phases, respectively. The VGG19 and Dense Convolutional Network (DenseNet) models showed good and similar performance for automated COVID-19 classification, with f1-scores of 0.89 and 0.91 for normal and COVID-19 cases, respectively. Conclusions: This study demonstrated the useful application of deep learning models to classify COVID-19 in X-ray images based on the proposed COVIDX-Net framework. Clinical studies are the next milestone of this research work. |
Tasks | |
Published | 2020-03-24 |
URL | https://arxiv.org/abs/2003.11055v1 |
https://arxiv.org/pdf/2003.11055v1.pdf | |
PWC | https://paperswithcode.com/paper/covidx-net-a-framework-of-deep-learning |
Repo | |
Framework | |
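A hedged sketch of the kind of transfer-learning setup the abstract describes, using torchvision's pretrained VGG19 and an 80-20 split. The dataset path `xray_folder/` (with `normal/` and `covid/` sub-directories), the hyperparameters, and fine-tuning only the final layer are placeholder assumptions; the real COVIDX-Net evaluates seven architectures.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),   # X-rays are single-channel
    transforms.ToTensor(),
])
data = datasets.ImageFolder("xray_folder", transform=tfm)          # placeholder path
n_train = int(0.8 * len(data))                                     # 80/20 split
train_set, test_set = torch.utils.data.random_split(data, [n_train, len(data) - n_train])
train_loader = torch.utils.data.DataLoader(train_set, batch_size=8, shuffle=True)

model = models.vgg19(weights="IMAGENET1K_V1")
model.classifier[6] = nn.Linear(4096, 2)                           # normal vs COVID-19
opt = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)  # fine-tune the head only
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for xb, yb in train_loader:
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        opt.step()
```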