April 2, 2020

3228 words 16 mins read

# Paper Group ANR 124

Coronavirus Optimization Algorithm: A bioinspired metaheuristic based on the COVID-19 propagation model. Tensor denoising and completion based on ordinal observations. Novelty search employed into the development of cancer treatment simulations. IoT Device Identification Using Deep Learning. Anypath Routing Protocol Design via Q-Learning for Underw …

#### Coronavirus Optimization Algorithm: A bioinspired metaheuristic based on the COVID-19 propagation model

Title Coronavirus Optimization Algorithm: A bioinspired metaheuristic based on the COVID-19 propagation model
Authors F. Martínez-Álvarez, G. Asencio-Cortés, J. F. Torres, D. Gutiérrez-Avilés, L. Melgar-García, R. Pérez-Chacón, C. Rubio-Escudero, J. C. Riquelme, A. Troncoso
Abstract A novel bioinspired metaheuristic is proposed in this work, simulating how the coronavirus spreads and infects healthy people. From an initial individual (the patient zero), the coronavirus infects new patients at known rates, creating new populations of infected people. Every individual can either die or infect and, afterwards, is sent to the recovered population. Relevant terms such as the re-infection probability, super-spreading rate, and traveling rate are introduced into the model in order to simulate the coronavirus activity as accurately as possible. The Coronavirus Optimization Algorithm has two major advantages compared to other similar strategies. First, the input parameters are already set according to the disease statistics, sparing researchers from initializing them with arbitrary values. Second, the approach is able to end after several iterations, without this value having to be set either. The infected population initially grows at an exponential rate, but after some iterations the high number of recovered and dead people starts to decrease the number of infected people in new iterations. As an application case, the algorithm has been used to train a deep learning model for electricity load forecasting, showing quite remarkable results after a few iterations.
Published 2020-03-30
URL https://arxiv.org/abs/2003.13633v1
PDF https://arxiv.org/pdf/2003.13633v1.pdf
PWC https://paperswithcode.com/paper/coronavirus-optimization-algorithm-a
Repo
Framework
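
The propagation loop described in the abstract (infect at known rates, die or recover, stop naturally when the epidemic dies out) is easy to sketch. The following is a toy illustration of that loop, not the authors' implementation: the rates, the population cap, and the test problem are all my own illustrative choices rather than the paper's disease statistics.

```python
import random

def cvoa_sketch(fitness, new_solution, mutate, iterations=20,
                spread_rate=5, die_rate=0.05, max_pop=200, seed=0):
    """Toy sketch of a CVOA-style propagation loop (minimization).

    `fitness`, `new_solution`, and `mutate` are user-supplied; the rates
    and the population cap are illustrative, not the paper's statistics.
    """
    rng = random.Random(seed)
    infected = [new_solution()]                 # patient zero
    recovered = []
    best = infected[0]
    for _ in range(iterations):
        newly_infected = []
        for person in infected:
            # an individual may die (removed from the search)...
            if len(infected) > 1 and rng.random() < die_rate:
                continue
            # ...or infect several others, then move to 'recovered'
            for _ in range(rng.randint(1, spread_rate)):
                newly_infected.append(mutate(person))
            recovered.append(person)
        # cap the population so the exponential phase stays tractable
        infected = sorted(newly_infected, key=fitness)[:max_pop]
        if not infected:                        # epidemic died out: stop
            break
        if fitness(infected[0]) < fitness(best):
            best = infected[0]
    return best

# Toy usage: minimize (x - 3)^2 starting from patient zero at x = 0
mut_rng = random.Random(1)
best = cvoa_sketch(fitness=lambda x: (x - 3.0) ** 2,
                   new_solution=lambda x=None: 0.0,
                   mutate=lambda x: x + mut_rng.uniform(-1.0, 1.0),
                   die_rate=0.0)
```

Note how the algorithm's two selling points show up in the sketch: the loop can terminate on its own when `infected` empties, and the only problem-specific pieces are the three callables.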

#### Tensor denoising and completion based on ordinal observations

Title Tensor denoising and completion based on ordinal observations
Authors Chanwoo Lee, Miaoyan Wang
Abstract Higher-order tensors arise frequently in applications such as neuroimaging, recommender systems, social network analysis, and psychological studies. We consider the problem of low-rank tensor estimation from possibly incomplete, ordinal-valued observations. Two related problems are studied, one on tensor denoising and another on tensor completion. We propose a multi-linear cumulative link model, develop a rank-constrained M-estimator, and obtain theoretical accuracy guarantees. Our mean squared error bound enjoys a faster convergence rate than previous results, and we show that the proposed estimator is minimax optimal under the class of low-rank models. Furthermore, the procedure developed serves as an efficient completion method which guarantees consistent recovery of an order-$K$ $(d,\ldots,d)$-dimensional low-rank tensor using only $\tilde{\mathcal{O}}(Kd)$ noisy, quantized observations. We demonstrate that our approach outperforms previous methods on the tasks of clustering and collaborative filtering.
Published 2020-02-16
URL https://arxiv.org/abs/2002.06524v1
PDF https://arxiv.org/pdf/2002.06524v1.pdf
PWC https://paperswithcode.com/paper/tensor-denoising-and-completion-based-on
Repo
Framework
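
The data model the estimator consumes is easy to simulate. Here is a small NumPy sketch, my own illustration rather than the paper's code, that draws ordinal observations from a low-rank order-3 tensor through a cumulative link model (a logit link, rank 2, and the cutoffs are illustrative assumptions), then masks some entries as missing:

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent low-rank (rank-2) order-3 tensor: Theta = sum_r a_r o b_r o c_r
d, rank = 8, 2
A, B, C = (rng.standard_normal((d, rank)) for _ in range(3))
Theta = np.einsum('ir,jr,kr->ijk', A, B, C)

# Cumulative link (logit) model: P(Y <= l) = sigmoid(b_l - Theta)
cutoffs = np.array([-1.0, 1.0])                # 3 ordinal levels
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
cum = sigmoid(cutoffs[:, None, None, None] - Theta)   # (L-1, d, d, d)
probs = np.diff(np.concatenate([np.zeros((1, d, d, d)), cum,
                                np.ones((1, d, d, d))], axis=0), axis=0)

# Sample one ordinal value per entry, then hide some entries
U = rng.random((d, d, d))
Y = (U[None] > np.cumsum(probs, axis=0)).sum(axis=0)  # values in {0, 1, 2}
mask = rng.random((d, d, d)) < 0.7                    # observed entries
```

The denoising problem is then to recover `Theta` from the quantized `Y`, and the completion problem to do so from only the entries where `mask` is true.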

#### Novelty search employed into the development of cancer treatment simulations

Title Novelty search employed into the development of cancer treatment simulations
Authors Michail-Antisthenis Tsompanas, Larry Bull, Andrew Adamatzky, Igor Balaz
Abstract Conventional optimization methodologies may be hindered when the automated search is stuck in local optima because of a deceptive objective function landscape. Consequently, open-ended search methodologies, such as novelty search, have been proposed to tackle this issue. Overlooking the objective while putting pressure on discovering novel solutions may lead to better solutions in practical problems. Novelty search was employed here to optimize the simulated design of a targeted drug delivery system for tumor treatment under the PhysiCell simulator. A hybrid objective equation was used containing both the actual objective of an effective tumour treatment and the novelty measure of the possible solutions. Different weights of the two components of the hybrid equation were investigated to unveil the significance of each one.
Published 2020-03-21
URL https://arxiv.org/abs/2003.11624v1
PDF https://arxiv.org/pdf/2003.11624v1.pdf
PWC https://paperswithcode.com/paper/novelty-search-employed-into-the-development
Repo
Framework
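
The hybrid objective the abstract describes combines the task objective with a novelty score. A minimal sketch of the two ingredients, assuming the common convention of novelty as mean distance to the k nearest behaviours in an archive and a linear blend with weight `w` (the authors' exact normalisation may differ):

```python
import numpy as np

def novelty(x, archive, k=3):
    """Novelty = mean distance to the k nearest behaviours in the archive."""
    if len(archive) == 0:
        return 0.0
    d = np.sort(np.linalg.norm(np.asarray(archive) - x, axis=1))
    return float(d[:k].mean())

def hybrid_score(objective_value, novelty_value, w):
    """Weighted blend of the two components studied in the paper:
    w = 0 is pure objective search, w = 1 is pure novelty search."""
    return (1.0 - w) * objective_value + w * novelty_value
```

Sweeping `w` between 0 and 1, as the paper does, reveals how much search pressure each component deserves on a given landscape.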

#### IoT Device Identification Using Deep Learning

Title IoT Device Identification Using Deep Learning
Authors Jaidip Kotak, Yuval Elovici
Abstract The growing use of IoT devices in organizations has increased the number of attack vectors available to attackers due to the less secure nature of the devices. The widely adopted bring your own device (BYOD) policy which allows an employee to bring any IoT device into the workplace and attach it to an organization’s network also increases the risk of attacks. In order to address this threat, organizations often implement security policies in which only the connection of white-listed IoT devices is permitted. To monitor adherence to such policies and protect their networks, organizations must be able to identify the IoT devices connected to their networks and, more specifically, to identify connected IoT devices that are not on the white-list (unknown devices). In this study, we applied deep learning on network traffic to automatically identify IoT devices connected to the network. In contrast to previous work, our approach does not require that complex feature engineering be applied on the network traffic, since we represent the communication behavior of IoT devices using small images built from the IoT devices’ network traffic payloads. In our experiments, we trained a multiclass classifier on a publicly available dataset, successfully identifying 10 different IoT devices and the traffic of smartphones and computers, with over 99% accuracy. We also trained multiclass classifiers to detect unauthorized IoT devices connected to the network, achieving over 99% overall average detection accuracy.
Published 2020-02-25
URL https://arxiv.org/abs/2002.11686v1
PDF https://arxiv.org/pdf/2002.11686v1.pdf
PWC https://paperswithcode.com/paper/iot-device-identification-using-deep-learning
Repo
Framework
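
The key trick is representing payload bytes as small images so a standard image classifier can consume them. A sketch of that step, with the image size and zero-padding scheme as my own assumptions (the paper's exact preprocessing may differ):

```python
import numpy as np

def payload_to_image(payload: bytes, side: int = 32) -> np.ndarray:
    """Render a packet payload as a small grayscale image (side x side).

    Mirrors the general idea in the paper -- treating payload bytes as
    pixels -- but the image size and normalisation are assumptions.
    """
    buf = np.frombuffer(payload, dtype=np.uint8)[: side * side]
    img = np.zeros(side * side, dtype=np.uint8)
    img[: buf.size] = buf                   # zero-pad short payloads
    return img.reshape(side, side).astype(np.float32) / 255.0
```

Stacks of such images, one per packet, then feed a conventional multiclass CNN; the "unknown device" case is handled by the classifier's rejection of anything off the white-list.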

#### Anypath Routing Protocol Design via Q-Learning for Underwater Sensor Networks

Title Anypath Routing Protocol Design via Q-Learning for Underwater Sensor Networks
Authors Yuan Zhou, Tao Cao, Wei Xiang
Abstract As a promising technology in the Internet of Underwater Things, underwater sensor networks have drawn widespread attention from both academia and industry. However, designing a routing protocol for underwater sensor networks is a great challenge due to high energy consumption and large latency in the underwater environment. This paper proposes a Q-learning-based localization-free anypath routing (QLFR) protocol to prolong the lifetime as well as reduce the end-to-end delay for underwater sensor networks. Aiming at optimal routing policies, the Q-value is calculated by jointly considering the residual energy and depth information of sensor nodes throughout the routing process. More specifically, we define two reward functions (i.e., depth-related and energy-related rewards) for Q-learning with the objective of reducing latency and extending network lifetime. In addition, a new holding time mechanism for packet forwarding is designed according to the priority of forwarding candidate nodes. Furthermore, a mathematical analysis is presented to analyze the performance of the proposed routing protocol. Extensive simulation results demonstrate the superior performance of the proposed routing protocol in terms of end-to-end delay and network lifetime.
Published 2020-02-22
URL https://arxiv.org/abs/2002.09623v1
PDF https://arxiv.org/pdf/2002.09623v1.pdf
PWC https://paperswithcode.com/paper/anypath-routing-protocol-design-via-q
Repo
Framework
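
The core of such a protocol is a tabular Q-learning update over next-hop choices, with a reward that blends depth progress and residual energy. A minimal sketch under illustrative assumptions (the weights, learning rate, and discount are mine, not the paper's settings, and the holding-time mechanism is omitted):

```python
def reward(depth_gain, residual_energy, w_depth=0.7, w_energy=0.3):
    """Hybrid reward combining depth progress toward the surface sink and
    the forwarder's residual energy (weights are illustrative)."""
    return w_depth * depth_gain + w_energy * residual_energy

def q_update(q, node, action, r, next_node, neighbours,
             alpha=0.5, gamma=0.9):
    """One tabular Q-learning step for next-hop selection.

    q maps (node, next_hop) -> value.  The depth- and energy-related
    reward shaping from the paper is abstracted into `r`.
    """
    best_next = max((q.get((next_node, n), 0.0) for n in neighbours),
                    default=0.0)
    old = q.get((node, action), 0.0)
    q[(node, action)] = old + alpha * (r + gamma * best_next - old)
    return q[(node, action)]
```

In an anypath setting, every overhearing candidate in the forwarding set would run this update, and the learned Q-values then set the candidates' forwarding priority.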

#### Improving Multi-Turn Response Selection Models with Complementary Last-Utterance Selection by Instance Weighting

Title Improving Multi-Turn Response Selection Models with Complementary Last-Utterance Selection by Instance Weighting
Authors Kun Zhou, Wayne Xin Zhao, Yutao Zhu, Ji-Rong Wen, Jingsong Yu
Abstract Open-domain retrieval-based dialogue systems require a considerable amount of training data to learn their parameters. However, in practice, the negative samples of training data are usually selected from an unannotated conversation data set at random. The generated training data is likely to contain noise and affect the performance of the response selection models. To address this difficulty, we consider utilizing the underlying correlation in the data resource itself to derive different kinds of supervision signals and reduce the influence of noisy data. More specifically, we consider a main-complementary task pair. The main task (i.e., our focus) selects the correct response given the last utterance and context, and the complementary task selects the last utterance given the response and context. The key point is that the output of the complementary task is used to set instance weights for the main task. We conduct extensive experiments on two public datasets and obtain significant improvement on both. We also investigate variants of our approach in multiple aspects, and the results have verified the effectiveness of our approach.
Published 2020-02-18
URL https://arxiv.org/abs/2002.07397v1
PDF https://arxiv.org/pdf/2002.07397v1.pdf
PWC https://paperswithcode.com/paper/improving-multi-turn-response-selection
Repo
Framework
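
The instance-weighting idea is the transferable part: the complementary task's score for an instance tells you how trustworthy that instance is for the main task. A sketch of that mechanism, where the score-to-weight mapping and the weight floor are my own illustrative choices, not the paper's exact scheme:

```python
import math

def instance_weights(comp_scores, floor=0.1):
    """Map complementary-task scores (how well the last utterance is
    predicted from response + context) to per-instance weights for the
    main response-selection loss.  The floor keeps noisy instances from
    being discarded entirely."""
    return [max(floor, s) for s in comp_scores]

def weighted_bce(probs, labels, weights):
    """Instance-weighted binary cross-entropy for the main task."""
    total = sum(w * -(y * math.log(p) + (1 - y) * math.log(1 - p))
                for p, y, w in zip(probs, labels, weights))
    return total / sum(weights)
```

A randomly sampled "negative" that the complementary task scores highly is probably a false negative, so down-weighting it reduces the noise it injects into training.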

#### Limited Angle Tomography for Transmission X-Ray Microscopy Using Deep Learning

Title Limited Angle Tomography for Transmission X-Ray Microscopy Using Deep Learning
Authors Yixing Huang, Shengxiang Wang, Yong Guan, Andreas Maier
Abstract In transmission X-ray microscopy (TXM) systems, the rotation of a scanned sample might be restricted to a limited angular range to avoid collisions with other system parts or high attenuation at certain tilting angles. Image reconstruction from such limited angle data suffers from artifacts due to missing data. In this work, deep learning is applied to limited angle reconstruction in TXMs for the first time. Given the challenge of obtaining sufficient real data for training, training a deep neural network from synthetic data is investigated. In particular, the U-Net, the state-of-the-art neural network in biomedical imaging, is trained from synthetic ellipsoid data and multi-category data to reduce artifacts in filtered back-projection (FBP) reconstruction images. The proposed method is evaluated on synthetic data and real scanned chlorella data in $100^\circ$ limited angle tomography. For synthetic test data, the U-Net significantly reduces the root-mean-square error (RMSE) from $2.55 \times 10^{-3}$ $\mu$m$^{-1}$ in the FBP reconstruction to $1.21 \times 10^{-3}$ $\mu$m$^{-1}$ in the U-Net reconstruction, and also improves the structural similarity (SSIM) index from 0.625 to 0.920. With penalized weighted least square denoising of measured projections, the RMSE and SSIM are further improved to $1.16 \times 10^{-3}$ $\mu$m$^{-1}$ and 0.932, respectively. For real test data, the proposed method remarkably improves the 3-D visualization of the subcellular structures in the chlorella cell, which indicates its important value for nano-scale imaging in biology, nanoscience and materials science.
Tasks Denoising, Image Reconstruction
Published 2020-01-08
URL https://arxiv.org/abs/2001.02469v1
PDF https://arxiv.org/pdf/2001.02469v1.pdf
PWC https://paperswithcode.com/paper/limited-angle-tomography-for-transmission-x
Repo
Framework

#### A kernel Principal Component Analysis (kPCA) digest with a new backward mapping (pre-image reconstruction) strategy

Title A kernel Principal Component Analysis (kPCA) digest with a new backward mapping (pre-image reconstruction) strategy
Authors Alberto García-González, Antonio Huerta, Sergio Zlotnik, Pedro Díez
Abstract Methodologies for multidimensionality reduction aim at discovering low-dimensional manifolds on which data lie. Principal Component Analysis (PCA) is very effective if data have a linear structure, but it fails to identify a possible dimensionality reduction if data belong to a nonlinear low-dimensional manifold. For nonlinear dimensionality reduction, kernel Principal Component Analysis (kPCA) is appreciated because of its simplicity and ease of implementation. The paper provides a concise review of the main ideas of PCA and kPCA, trying to collect in a single document aspects that are often dispersed. Moreover, a strategy to map back the reduced dimension into the original high dimensional space is also devised, based on the minimization of a discrepancy functional.
Tasks Dimensionality Reduction, Image Reconstruction
Published 2020-01-07
URL https://arxiv.org/abs/2001.01958v1
PDF https://arxiv.org/pdf/2001.01958v1.pdf
PWC https://paperswithcode.com/paper/a-kernel-principal-component-analysis-kpca
Repo
Framework
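
The backward-mapping (pre-image) problem is the interesting part: a point in kernel feature space generally has no exact pre-image in input space, so one must minimise a discrepancy. As a generic stand-in for the paper's strategy, here is the classical fixed-point iteration for the Gaussian kernel (Mika et al.), which minimises the feature-space distance to a point expressed as a kernel expansion; the paper's own functional and minimisation differ:

```python
import numpy as np

def rbf(X, Y, sigma=1.0):
    """Gaussian kernel matrix between rows of X and rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def preimage_fixed_point(gamma, X, x0, sigma=1.0, iters=50):
    """Approximate pre-image x of the feature-space point
    sum_i gamma_i phi(x_i), via the classical fixed-point update
    x <- sum_i w_i x_i / sum_i w_i with w_i = gamma_i k(x, x_i)."""
    x = x0.copy()
    for _ in range(iters):
        w = gamma * rbf(x[None, :], X, sigma)[0]
        if abs(w.sum()) < 1e-12:        # degenerate expansion: give up
            break
        x = (w[:, None] * X).sum(0) / w.sum()
    return x
```

For a uniform expansion over a symmetric point cloud, the iteration settles on the cloud's centroid, which is the intuitively correct pre-image.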

#### Advancing machine learning for MR image reconstruction with an open competition: Overview of the 2019 fastMRI challenge

Title Advancing machine learning for MR image reconstruction with an open competition: Overview of the 2019 fastMRI challenge
Authors Florian Knoll, Tullie Murrell, Anuroop Sriram, Nafissa Yakubova, Jure Zbontar, Michael Rabbat, Aaron Defazio, Matthew J. Muckley, Daniel K. Sodickson, C. Lawrence Zitnick, Michael P. Recht
Abstract Purpose: To advance research in the field of machine learning for MR image reconstruction with an open challenge. Methods: We provided participants with a dataset of raw k-space data from 1,594 consecutive clinical exams of the knee. The goal of the challenge was to reconstruct images from these data. In order to strike a balance between realistic data and a shallow learning curve for those not already familiar with MR image reconstruction, we ran multiple tracks for multi-coil and single-coil data. We performed a two-stage evaluation based on quantitative image metrics followed by evaluation by a panel of radiologists. The challenge ran from June to December of 2019. Results: We received a total of 33 challenge submissions. All participants chose to submit results from supervised machine learning approaches. Conclusion: The challenge led to new developments in machine learning for image reconstruction, provided insight into the current state of the art in the field, and highlighted remaining hurdles for clinical adoption.
Published 2020-01-06
URL https://arxiv.org/abs/2001.02518v1
PDF https://arxiv.org/pdf/2001.02518v1.pdf
Repo
Framework

#### ParasNet: Fast Parasites Detection with Neural Networks

Title ParasNet: Fast Parasites Detection with Neural Networks
Authors X. F. Xu, S. Talbot, T. Selvaraja
Abstract Deep learning has dramatically improved the performance in many application areas such as image classification, object detection, speech recognition, drug discovery and etc since 2012. Where deep learning algorithms promise to discover the intricate hidden information inside the data by leveraging the large dataset, advanced model and computing power. Although deep learning techniques show medical expert level performance in a lot of medical applications, but some of the applications are still not explored or under explored due to the variation of the species. In this work, we studied the bright field based cell level Cryptosporidium and Giardia detection in the drink water with deep learning. Our experimental demonstrates that the new developed deep learning-based algorithm surpassed the handcrafted SVM based algorithm with above 97 percentage in accuracy and 700+fps in speed on embedded Jetson TX2 platform. Our research will lead to real-time and high accuracy label-free cell level Cryptosporidium and Giardia detection system in the future.
Tasks Drug Discovery, Image Classification, Object Detection, Speech Recognition
Published 2020-02-26
URL https://arxiv.org/abs/2002.11327v2
PDF https://arxiv.org/pdf/2002.11327v2.pdf
PWC https://paperswithcode.com/paper/deep-learning-based-cell-parasites-detection
Repo
Framework

#### A Safety Framework for Critical Systems Utilising Deep Neural Networks

Title A Safety Framework for Critical Systems Utilising Deep Neural Networks
Authors Xingyu Zhao, Alec Banks, James Sharp, Valentin Robu, David Flynn, Michael Fisher, Xiaowei Huang
Abstract Increasingly sophisticated mathematical modelling processes from Machine Learning are being used to analyse complex data. However, the performance and explainability of these models within practical critical systems requires a rigorous and continuous verification of their safe utilisation. Working towards addressing this challenge, this paper presents a principled novel safety argument framework for critical systems that utilise deep neural networks. The approach allows various forms of predictions, e.g., future reliability of passing some demands, or confidence in a required reliability level. It is supported by a Bayesian analysis using operational data and recent verification and validation techniques for deep learning. The prediction is conservative: it starts with partial prior knowledge obtained from lifecycle activities and then determines the worst-case prediction. Open challenges are also identified.
Published 2020-03-07
URL https://arxiv.org/abs/2003.05311v1
PDF https://arxiv.org/pdf/2003.05311v1.pdf
PWC https://paperswithcode.com/paper/a-safety-framework-for-critical-systems
Repo
Framework

#### A Hypersensitive Breast Cancer Detector

Title A Hypersensitive Breast Cancer Detector
Authors Stefano Pedemonte, Brent Mombourquette, Alexis Goh, Trevor Tsue, Aaron Long, Sadanand Singh, Thomas Paul Matthews, Meet Shah, Jason Su
Abstract Early detection of breast cancer through screening mammography yields a 20-35% increase in survival rate; however, there are not enough radiologists to serve the growing population of women seeking screening mammography. Although commercial computer aided detection (CADe) software has been available to radiologists for decades, it has failed to improve the interpretation of full-field digital mammography (FFDM) images due to its low sensitivity over the spectrum of findings. In this work, we leverage a large set of FFDM images with loose bounding boxes of mammographically significant findings to train a deep learning detector with extreme sensitivity. Building upon work from the Hourglass architecture, we train a model that produces segmentation-like images with high spatial resolution, with the aim of producing 2D Gaussian blobs centered on ground-truth boxes. We replace the pixel-wise $L_2$ norm with a weak-supervision loss designed to achieve high sensitivity, asymmetrically penalizing false positives and false negatives while softening the noise of the loose bounding boxes by permitting a tolerance in misaligned predictions. The resulting system achieves a sensitivity for malignant findings of 0.99 with only 4.8 false positive markers per image. When utilized in a CADe system, this model could enable a novel workflow where radiologists can focus their attention with trust on only the locations proposed by the model, expediting the interpretation process and bringing attention to potential findings that could otherwise have been missed. Due to its nearly perfect sensitivity, the proposed detector can also be used as a high-performance proposal generator in two-stage detection systems.
Published 2020-01-23
URL https://arxiv.org/abs/2001.08382v1
PDF https://arxiv.org/pdf/2001.08382v1.pdf
PWC https://paperswithcode.com/paper/a-hypersensitive-breast-cancer-detector
Repo
Framework
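
The loss design is the heart of the abstract: penalise false negatives far harder than false positives, while tolerating slightly misaligned predictions around loose boxes. A small NumPy sketch of such an asymmetric, tolerance-aware pixel-wise loss; the weights, the squared-error form, and the dilation radius are my own illustrative choices, not the paper's exact formulation:

```python
import numpy as np

def asymmetric_loss(pred, target, w_fn=10.0, w_fp=1.0, tol=1):
    """Asymmetric pixel-wise loss for high-sensitivity detection.

    False negatives (target blob present, prediction low) are penalised
    w_fn times harder than false positives; `tol` dilates the target so
    predictions slightly off a loose box are not punished.
    """
    # dilate target with a (2*tol+1)^2 max filter: the tolerance zone
    padded = np.pad(target, tol)
    dilated = np.max(np.stack([
        padded[i:i + target.shape[0], j:j + target.shape[1]]
        for i in range(2 * tol + 1) for j in range(2 * tol + 1)]), axis=0)
    err = pred - target
    fn = np.where((target > 0) & (err < 0), w_fn, 0.0)    # missed findings
    fp = np.where((dilated == 0) & (err > 0), w_fp, 0.0)  # spurious output
    return float(((fn + fp) * err ** 2).mean())
```

Predictions inside the dilated zone incur no false-positive cost, which is exactly the softening of loose-bounding-box noise the abstract describes.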

#### Exploiting Unsupervised Inputs for Accurate Few-Shot Classification

Title Exploiting Unsupervised Inputs for Accurate Few-Shot Classification
Authors Yuqing Hu, Vincent Gripon, Stéphane Pateux
Abstract In few-shot classification, the aim is to learn models able to discriminate classes with only a small number of labelled examples. Most of the literature considers the problem of labelling a single unknown input at a time. Instead, it can be beneficial to consider a setting where a batch of unlabelled inputs are treated conjointly and non-independently. In this vein, we propose a method able to exploit three levels of information: a) feature extractors pretrained on generic datasets, b) few labelled examples of classes to discriminate and c) other available unlabelled inputs. While we use state-of-the-art approaches for a), we introduce the use of simplified graph convolutions to perform b) and c) together. Our proposed model reaches state-of-the-art accuracy with a 6-11% increase compared to available alternatives on standard few-shot vision classification datasets.
Published 2020-01-27
URL https://arxiv.org/abs/2001.09849v3
PDF https://arxiv.org/pdf/2001.09849v3.pdf
PWC https://paperswithcode.com/paper/exploiting-unsupervised-inputs-for-accurate
Repo
Framework
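
The "simplified graph convolution" ingredient is concrete enough to sketch: smooth the features of labelled and unlabelled batch items jointly over a similarity graph, then classify. Below is a minimal stand-in for that step (SGC-style propagation followed by nearest-class-mean classification); the graph construction and classifier here are my own simplifications of the paper's pipeline:

```python
import numpy as np

def sgc_propagate(features, adj, k=2):
    """Simplified graph convolution: k hops of normalised-adjacency
    smoothing with self-loops, no learned weights, applied to the joint
    batch of labelled and unlabelled features."""
    A = adj + np.eye(adj.shape[0])               # self-loops
    d = A.sum(1)
    S = A / np.sqrt(np.outer(d, d))              # D^-1/2 (A+I) D^-1/2
    X = features
    for _ in range(k):
        X = S @ X
    return X

def nearest_class_mean(X, labels):
    """Classify every row by the closest mean of the labelled rows
    (label -1 marks an unlabelled input)."""
    classes = sorted(set(l for l in labels if l >= 0))
    means = np.stack([X[[i for i, l in enumerate(labels) if l == c]].mean(0)
                      for c in classes])
    d = ((X[:, None, :] - means[None]) ** 2).sum(-1)
    return [classes[i] for i in d.argmin(1)]
```

Because smoothing mixes unlabelled features into their labelled neighbours (and vice versa), the batch is classified conjointly rather than one input at a time, which is the setting the abstract argues for.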

#### A unified framework for spectral clustering in sparse graphs

Title A unified framework for spectral clustering in sparse graphs
Authors Lorenzo Dall’Amico, Romain Couillet, Nicolas Tremblay
Abstract This article considers spectral community detection in the regime of sparse networks with heterogeneous degree distributions, for which we devise an algorithm to efficiently retrieve communities. Specifically, we demonstrate that a conveniently parametrized form of regularized Laplacian matrix can be used to perform spectral clustering in sparse networks without suffering from their degree heterogeneity. Moreover, we exhibit important connections between this proposed matrix and the now popular non-backtracking matrix, the Bethe-Hessian matrix, as well as the standard Laplacian matrix. Interestingly, as opposed to competing methods, our proposed improved parametrization inherently accounts for the hardness of the classification problem. These findings are summarized in the form of an algorithm capable of both estimating the number of communities and achieving high-quality community reconstruction.
Published 2020-03-20
URL https://arxiv.org/abs/2003.09198v1
PDF https://arxiv.org/pdf/2003.09198v1.pdf
PWC https://paperswithcode.com/paper/a-unified-framework-for-spectral-clustering
Repo
Framework
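
To make "regularized Laplacian" concrete, here is a two-community sketch following the standard regularisation recipe (add a small constant to every edge weight, normalise, and split on the sign of the second eigenvector). The paper derives a more careful parametrisation that adapts to the hardness of the problem; this sketch uses the common average-degree default instead:

```python
import numpy as np

def regularized_spectral_2way(A, tau=None):
    """Two-community regularised spectral clustering sketch.

    A is a symmetric adjacency matrix; tau defaults to the average degree,
    a common heuristic rather than the paper's tuned parametrisation.
    """
    n = A.shape[0]
    if tau is None:
        tau = A.sum() / n
    A_tau = A + tau / n                     # regularise: A + (tau/n) * J
    d = A_tau.sum(1)
    L = A_tau / np.sqrt(np.outer(d, d))     # D^-1/2 A_tau D^-1/2
    vals, vecs = np.linalg.eigh(L)          # ascending eigenvalues
    split = vecs[:, -2]                     # second-largest eigenvector
    return (split > 0).astype(int)
```

The regularisation term keeps the informative eigenvectors from being swamped by high-degree or isolated nodes, which is what breaks vanilla spectral clustering in the sparse regime.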

#### Methodologies for Successful Segmentation of HRTEM Images via Neural Network

Title Methodologies for Successful Segmentation of HRTEM Images via Neural Network
Authors Catherine K. Groschner, Christina Choi, M. C. Scott
Abstract High throughput analysis of samples has been a topic increasingly discussed in both light and electron microscopy. Deep learning can help implement high throughput analysis by segmenting images in a pixel-by-pixel fashion and classifying these regions. However, to date, relatively little has been done in the realm of automated high resolution transmission electron microscopy (HRTEM) micrograph analysis. Neural networks for HRTEM have, so far, focused on identification of single atomic columns in single materials systems. For true high throughput analysis, networks will need to not only recognize atomic columns but also segment out regions of interest from background for a wide variety of materials. We therefore analyze the requirements for achieving a high performance convolutional neural network for segmentation of nanoparticle regions from amorphous carbon in HRTEM images. We also examine how to achieve generalizability of the neural network to a range of materials. We find that networks trained on micrographs of a single material system yield worse segmentation outcomes than a network trained on micrographs of a variety of materials. Our final network is able to segment nanoparticle regions from amorphous background with 91% pixelwise accuracy.