October 19, 2019


Paper Group ANR 268


Computationally Efficient Deep Neural Network for Computed Tomography Image Reconstruction

Title Computationally Efficient Deep Neural Network for Computed Tomography Image Reconstruction
Authors Dufan Wu, Kyungsang Kim, Quanzheng Li
Abstract Deep-neural-network-based image reconstruction has demonstrated promising performance in medical imaging for under-sampled and low-dose scenarios. However, it requires a large amount of memory and extensive training time. It is especially challenging to train reconstruction networks for three-dimensional computed tomography (CT) because of the high resolution of CT images. The purpose of this work is to reduce the memory and time needed to train reconstruction networks for CT, making the training practical on current hardware while maintaining the quality of the reconstructed images. We unrolled the proximal gradient descent algorithm for iterative image reconstruction into finite iterations and replaced the terms related to the penalty function with trainable convolutional neural networks (CNN). The network was trained greedily, iteration by iteration, in the image domain on patches, which requires a reasonable amount of memory and time on a mainstream graphics processing unit (GPU). To overcome the local-minimum problem caused by greedy learning, we used a deep UNet as the CNN and incorporated a separable quadratic surrogate with ordered subsets for data fidelity, so that the solution could escape shallow local minima and achieve better image quality. The proposed method achieved image quality comparable to state-of-the-art neural networks for CT image reconstruction on 2D sparse-view and limited-angle problems on the low-dose CT challenge dataset.
Tasks Computed Tomography (CT), Image Reconstruction
Published 2018-10-05
URL https://arxiv.org/abs/1810.03999v3
PDF https://arxiv.org/pdf/1810.03999v3.pdf
PWC https://paperswithcode.com/paper/computationally-efficient-deep-neural-network
Repo
Framework
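
The unrolling described in the abstract above can be pictured with a short, generic sketch: each unrolled iteration applies a data-fidelity gradient step followed by a small CNN standing in for the learned proximal operator. This is a minimal PyTorch illustration under stated assumptions, not the authors' implementation; the forward operator `A`, its adjoint `At`, the step size, and the tiny CNN are all placeholders (the paper uses a deep UNet and an ordered-subsets separable quadratic surrogate).

```python
import torch
import torch.nn as nn

class UnrolledPGD(nn.Module):
    """Sketch of an unrolled proximal-gradient reconstruction: a gradient step on the
    data-fidelity term followed by a learned CNN replacing the penalty's proximal step."""

    def __init__(self, forward_op, adjoint_op, n_iters=5, step=1e-3):
        super().__init__()
        self.A, self.At = forward_op, adjoint_op   # placeholder linear operators
        self.step = step
        # One small CNN per unrolled iteration (trained greedily, iteration by iteration).
        self.cnns = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1),
            )
            for _ in range(n_iters)
        ])

    def forward(self, y, x0):
        x = x0
        for cnn in self.cnns:
            grad = self.At(self.A(x) - y)   # gradient of 0.5 * ||Ax - y||^2
            x = x - self.step * grad        # data-fidelity step
            x = x + cnn(x)                  # learned residual "proximal" step
        return x
```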

Distributed Anomaly Detection using Autoencoder Neural Networks in WSN for IoT

Title Distributed Anomaly Detection using Autoencoder Neural Networks in WSN for IoT
Authors Tie Luo, Sai G. Nagarajan
Abstract Wireless sensor networks (WSN) are fundamental to the Internet of Things (IoT) by bridging the gap between the physical and the cyber worlds. Anomaly detection is a critical task in this context as it is responsible for identifying various events of interest such as equipment faults and undiscovered phenomena. However, this task is challenging because of the elusive nature of anomalies and the volatility of the ambient environments. In a resource-scarce setting like WSN, this challenge is further elevated, weakening the suitability of many existing solutions. In this paper, for the first time, we introduce autoencoder neural networks into WSN to solve the anomaly detection problem. We design a two-part algorithm that resides on sensors and the IoT cloud respectively, such that (i) anomalies can be detected at sensors in a fully distributed manner without the need for communicating with any other sensors or the cloud, and (ii) the relatively more computation-intensive learning task can be handled by the cloud with a much lower (and configurable) frequency. In addition to the minimal communication overhead, the computational load on sensors is also very low (of polynomial complexity) and readily affordable by most COTS sensors. Using a real WSN indoor testbed and sensor data collected over 4 consecutive months, we demonstrate via experiments that our proposed autoencoder-based anomaly detection mechanism achieves high detection accuracy and low false alarm rate. It is also able to adapt to unforeseeable and new changes in a non-stationary environment, thanks to the unsupervised learning feature of our chosen autoencoder neural networks.
Tasks Anomaly Detection
Published 2018-12-12
URL http://arxiv.org/abs/1812.04872v1
PDF http://arxiv.org/pdf/1812.04872v1.pdf
PWC https://paperswithcode.com/paper/distributed-anomaly-detection-using
Repo
Framework
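
As a rough illustration of the sensor-side half of such a two-part scheme, the sketch below runs only the forward pass of a small autoencoder (weights assumed to be trained elsewhere, e.g. in the cloud, and pushed to the sensor) and flags a reading as anomalous when the reconstruction error exceeds a threshold. The array shapes, threshold, and random stand-in weights are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def autoencoder_reconstruct(x, W_enc, b_enc, W_dec, b_dec):
    """Forward pass of a one-hidden-layer autoencoder (weights supplied by the cloud)."""
    h = np.tanh(W_enc @ x + b_enc)   # encoder
    return W_dec @ h + b_dec         # decoder

def is_anomaly(x, params, threshold=0.05):
    """Flag a sensor reading as anomalous if its reconstruction error is large."""
    x_hat = autoencoder_reconstruct(x, *params)
    error = float(np.mean((x - x_hat) ** 2))
    return error > threshold, error

# Toy usage with random stand-in weights for a 4-feature sensor reading.
rng = np.random.default_rng(0)
params = (rng.normal(size=(2, 4)), np.zeros(2), rng.normal(size=(4, 2)), np.zeros(4))
flag, err = is_anomaly(rng.normal(size=4), params)
```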

Circular Antenna Array Design for Breast Cancer Detection

Title Circular Antenna Array Design for Breast Cancer Detection
Authors Kalthoum Ouerghi, Najib Fadlallah, Amor Smida, Ridha Ghayoula, Jaouhar Fattahi, Noureddine Boulejfen
Abstract Microwave imaging for breast cancer detection is based on the contrast in the electrical properties of healthy fatty breast tissues. This paper presents a comparative study, in the industrial, scientific and medical (ISM) band, of five microstrip patch antennas for microwave imaging at a frequency of 2.45 GHz. One antenna is then selected for an eight-element array used in a microwave breast imaging system. Each antenna element is arranged in a circular configuration so that it directly faces the breast phantom for better tumor detection. The selection is made by placing each antenna alone on the breast skin and studying the electric field, magnetic field, and current density in the healthy tissue of the breast phantom, designed and simulated in the Ansoft High Frequency Structure Simulator (HFSS).
Tasks Breast Cancer Detection
Published 2018-01-15
URL http://arxiv.org/abs/1801.05068v1
PDF http://arxiv.org/pdf/1801.05068v1.pdf
PWC https://paperswithcode.com/paper/circular-antenna-array-design-for-breast
Repo
Framework
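
A trivial way to picture the circular eight-element arrangement is to compute element positions and inward-facing orientations on a ring around the phantom. The ring radius and element count below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def circular_array(n_elements=8, radius_mm=80.0):
    """Positions (x, y) and inward-facing unit orientations of antennas on a ring."""
    angles = 2 * np.pi * np.arange(n_elements) / n_elements
    positions = np.stack([radius_mm * np.cos(angles), radius_mm * np.sin(angles)], axis=1)
    facing = -positions / radius_mm   # each element points toward the center (the phantom)
    return positions, facing

positions, facing = circular_array()
```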

Inferring Multidimensional Rates of Aging from Cross-Sectional Data

Title Inferring Multidimensional Rates of Aging from Cross-Sectional Data
Authors Emma Pierson, Pang Wei Koh, Tatsunori Hashimoto, Daphne Koller, Jure Leskovec, Nicholas Eriksson, Percy Liang
Abstract Modeling how individuals evolve over time is a fundamental problem in the natural and social sciences. However, existing datasets are often cross-sectional with each individual observed only once, making it impossible to apply traditional time-series methods. Motivated by the study of human aging, we present an interpretable latent-variable model that learns temporal dynamics from cross-sectional data. Our model represents each individual’s features over time as a nonlinear function of a low-dimensional, linearly-evolving latent state. We prove that when this nonlinear function is constrained to be order-isomorphic, the model family is identifiable solely from cross-sectional data provided the distribution of time-independent variation is known. On the UK Biobank human health dataset, our model reconstructs the observed data while learning interpretable rates of aging associated with diseases, mortality, and aging risk factors.
Tasks Time Series
Published 2018-07-12
URL http://arxiv.org/abs/1807.04709v3
PDF http://arxiv.org/pdf/1807.04709v3.pdf
PWC https://paperswithcode.com/paper/inferring-multidimensional-rates-of-aging
Repo
Framework
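
The generative model summarized above can be sketched by simulation: each individual has per-dimension rates of aging, the latent state grows linearly with age, and observed features are a monotone nonlinear function of the latent state plus time-independent noise. The dimensions, the specific monotone map, and the noise scale below are assumptions chosen only to make the sketch concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_cross_section(n=1000, k=2, d=5):
    """Simulate one observation per individual from a linearly evolving latent state."""
    ages = rng.uniform(40, 80, size=n)
    rates = rng.lognormal(mean=0.0, sigma=0.3, size=(n, k))    # per-individual rates of aging
    z = ages[:, None] * rates                                   # latent state grows linearly with age
    W = rng.normal(size=(k, d))
    # Coordinate-wise monotone nonlinearity (stand-in for the order-isomorphic map),
    # plus time-independent variation.
    features = np.tanh(z @ W) + 0.1 * rng.normal(size=(n, d))
    return ages, rates, features

ages, rates, X = simulate_cross_section()
```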

Enhancing Cooperative Coevolution for Large Scale Optimization by Adaptively Constructing Surrogate Models

Title Enhancing Cooperative Coevolution for Large Scale Optimization by Adaptively Constructing Surrogate Models
Authors Bei Pang, Zhigang Ren, Yongsheng Liang, An Chen
Abstract It has been shown that cooperative coevolution (CC) can effectively deal with large scale optimization problems (LSOPs) through a divide-and-conquer strategy. However, its performance is severely restricted by the current context-vector-based sub-solution evaluation method, since this method needs to access the original high-dimensional simulation model when evaluating each sub-solution and thus consumes substantial computational resources. To alleviate this issue, this study proposes an adaptive surrogate-model-assisted CC framework. This framework adaptively constructs surrogate models for different sub-problems by fully considering their characteristics. For the single-dimensional sub-problems obtained through decomposition, sufficiently accurate surrogate models can be obtained and used to find the optimal solutions of the corresponding sub-problems directly. As for the nonseparable sub-problems, the surrogate models are employed to evaluate the corresponding sub-solutions, and the original simulation model is adopted only to reevaluate some good sub-solutions selected by the surrogate models. By these means, the computation cost can be greatly reduced without significantly sacrificing evaluation quality. Empirical studies on the IEEE CEC 2010 benchmark functions show that the concrete algorithm based on this framework is able to find much better solutions than conventional CC algorithms and a non-CC algorithm, even with far fewer computational resources.
Tasks
Published 2018-03-01
URL http://arxiv.org/abs/1803.00906v1
PDF http://arxiv.org/pdf/1803.00906v1.pdf
PWC https://paperswithcode.com/paper/enhancing-cooperative-coevolution-for-large
Repo
Framework
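
A stripped-down sketch of the surrogate idea for a single-dimensional sub-problem: fit a cheap quadratic surrogate from a handful of true evaluations, take its minimizer analytically, and spend a real (expensive) evaluation only on that promising candidate. The test function, sample count, and quadratic surrogate are assumptions for illustration, not the adaptive framework proposed in the paper.

```python
import numpy as np

def expensive_subproblem(x):
    """Stand-in for an expensive single-dimensional sub-problem evaluation."""
    return (x - 1.3) ** 2 + 0.1 * np.sin(5 * x)

# Fit a quadratic surrogate from a few true evaluations.
xs = np.linspace(-2.0, 4.0, 7)
ys = np.array([expensive_subproblem(x) for x in xs])
a, b, c = np.polyfit(xs, ys, 2)

# Optimize the surrogate analytically (vertex of the fitted parabola) ...
x_surrogate_opt = -b / (2 * a)

# ... and re-evaluate only that promising candidate on the real function.
y_true = expensive_subproblem(x_surrogate_opt)
```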

EpiRL: A Reinforcement Learning Agent to Facilitate Epistasis Detection

Title EpiRL: A Reinforcement Learning Agent to Facilitate Epistasis Detection
Authors Kexin Huang, Rodrigo Nogueira
Abstract Epistasis (gene-gene interaction) is crucial to predicting genetic disease. Our work tackles the computational challenges faced by previous work in epistasis detection by modeling it as a one-step Markov Decision Process in which the state is the genome data, the actions are the interacting genes, and the reward is an interaction measurement for the selected actions. A reinforcement learning agent using the policy gradient method then learns to discover a set of highly interacting genes.
Tasks
Published 2018-09-24
URL http://arxiv.org/abs/1809.09143v1
PDF http://arxiv.org/pdf/1809.09143v1.pdf
PWC https://paperswithcode.com/paper/epirl-a-reinforcement-learning-agent-to
Repo
Framework
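
The one-step formulation can be sketched as a REINFORCE-style update over a softmax policy that picks a pair of SNPs and receives an interaction score as its reward. The SNP count, the toy reward, the learning rate, and the simplified gradient for sampling two items without replacement are all placeholder assumptions; the paper's interaction measurement and agent architecture are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_snps = 20
logits = np.zeros(n_snps)                      # policy parameters: one score per SNP

def interaction_reward(pair):
    """Stand-in for an epistasis interaction measurement of the chosen gene pair."""
    return 1.0 if set(pair) == {3, 7} else rng.uniform(0.0, 0.2)

def sample_pair(logits):
    p = np.exp(logits - logits.max()); p /= p.sum()
    return rng.choice(n_snps, size=2, replace=False, p=p), p

lr = 0.5
for _ in range(2000):                          # one-step episodes
    pair, p = sample_pair(logits)
    r = interaction_reward(pair)
    grad = -p.copy()                           # approximate REINFORCE gradient of log-prob
    grad[list(pair)] += 1.0
    logits += lr * r * grad

best_pair = np.argsort(logits)[-2:]            # genes with the highest policy scores
```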

Bidding Machine: Learning to Bid for Directly Optimizing Profits in Display Advertising

Title Bidding Machine: Learning to Bid for Directly Optimizing Profits in Display Advertising
Authors Kan Ren, Weinan Zhang, Ke Chang, Yifei Rong, Yong Yu, Jun Wang
Abstract Real-time bidding (RTB) based display advertising has become one of the key technological advances in computational advertising. RTB enables advertisers to buy individual ad impressions via an auction in real-time and facilitates the evaluation and the bidding of individual impressions across multiple advertisers. In RTB, the advertisers face three main challenges when optimizing their bidding strategies, namely (i) estimating the utility (e.g., conversions, clicks) of the ad impression, (ii) forecasting the market value (thus the cost) of the given ad impression, and (iii) deciding the optimal bid for the given auction based on the first two. Previous solutions assume the first two are solved before addressing the bid optimization problem. However, these challenges are strongly correlated, and dealing with any individual problem independently may not be globally optimal. In this paper, we propose Bidding Machine, a comprehensive learning-to-bid framework, which consists of three optimizers dealing with each challenge above and, as a whole, jointly optimizes these three parts. We show that such a joint optimization largely increases campaign effectiveness and profit. From the learning perspective, we show that the bidding machine can be updated smoothly with either offline periodic batch or online sequential training schemes. Our extensive offline empirical study and online A/B testing verify the high effectiveness of the proposed bidding machine.
Tasks
Published 2018-03-01
URL http://arxiv.org/abs/1803.02194v2
PDF http://arxiv.org/pdf/1803.02194v2.pdf
PWC https://paperswithcode.com/paper/bidding-machine-learning-to-bid-for-directly
Repo
Framework
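
The third challenge (deciding a bid given a utility estimate and a market-price forecast) can be illustrated with a tiny grid search for the bid that maximizes expected profit in a second-price auction. The CTR, click value, and log-normal market-price model below are placeholder assumptions, not the paper's learned components.

```python
import numpy as np

def expected_profit(bid, ctr, click_value, price_samples):
    """Expected profit in a second-price auction: win when bid > market price,
    pay the market price, and earn ctr * click_value per won impression."""
    won = price_samples < bid
    return np.mean(won * (ctr * click_value - price_samples))

rng = np.random.default_rng(0)
price_samples = rng.lognormal(mean=1.0, sigma=0.5, size=10_000)   # market-price forecast
ctr, click_value = 0.002, 3000.0                                   # placeholder utility estimate

bids = np.linspace(0.0, 20.0, 401)
best_bid = bids[np.argmax([expected_profit(b, ctr, click_value, price_samples) for b in bids])]
```

As expected for a second-price auction, the grid search recovers a bid close to the impression value `ctr * click_value`.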

Effect of secular trend in drug effectiveness study in real world data

Title Effect of secular trend in drug effectiveness study in real world data
Authors Sharon Hensley Alford, Piyush Madan, Shilpa Mahatma, Italo Buleje, Yanyan Han, Fang Lu
Abstract We discovered secular trend bias in a drug effectiveness study for a recently approved drug. We compared treatment outcomes between patients who received the newly approved drug and patients exposed to the standard treatment. All patients diagnosed after the new drug’s approval date were considered. We built a machine learning causal inference model to determine patient subpopulations likely to respond better to the newly approved drug. After identifying the presence of secular trend bias in our data, we attempted to adjust for the bias in two different ways. First, we matched patients on the number of days from the new drug’s approval date that the patient’s treatment (new or standard) began. Second, we included a covariate in the model for the number of days between the date of approval of the new drug and the treatment (new or standard) start date. Neither approach completely mitigated the bias. We attribute the residual bias to differences in patient disease severity or other unmeasured patient characteristics. Had we not identified the secular trend bias in our data, the causal inference model would have been interpreted without consideration for this underlying bias. Being aware of, testing for, and handling potential bias in the data is essential to diminish the uncertainty in AI modeling.
Tasks Causal Inference
Published 2018-08-18
URL http://arxiv.org/abs/1808.06117v1
PDF http://arxiv.org/pdf/1808.06117v1.pdf
PWC https://paperswithcode.com/paper/effect-of-secular-trend-in-drug-effectiveness
Repo
Framework
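
The second adjustment described above (including a days-since-approval covariate alongside the treatment indicator) can be sketched with a plain logistic outcome model. The synthetic cohort, column names, and coefficients below are invented for illustration and are not the study's data or model.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Synthetic cohort: treatment indicator, days from drug approval to treatment start, outcome.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "new_drug": rng.integers(0, 2, n),
    "days_since_approval": rng.integers(0, 730, n),
})
# Build a secular trend into the simulated outcome on purpose.
logit = -0.5 + 0.4 * df["new_drug"] + 0.001 * df["days_since_approval"]
df["good_outcome"] = rng.random(n) < 1 / (1 + np.exp(-logit))

# Outcome model with the secular-trend covariate included alongside treatment.
X = df[["new_drug", "days_since_approval"]]
model = LogisticRegression().fit(X, df["good_outcome"])
treatment_coefficient = model.coef_[0][0]
```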

Machine learning in APOGEE: Unsupervised spectral classification with $K$-means

Title Machine learning in APOGEE: Unsupervised spectral classification with $K$-means
Authors Rafael Garcia-Dias, Carlos Allende Prieto, Jorge Sánchez Almeida, Ignacio Ordovás-Pascual
Abstract The data volume generated by astronomical surveys is growing rapidly. Traditional analysis techniques in spectroscopy either demand intensive human interaction or are computationally expensive. In this scenario, machine learning, and unsupervised clustering algorithms in particular, offer interesting alternatives. The Apache Point Observatory Galactic Evolution Experiment (APOGEE) offers a vast data set of near-infrared stellar spectra which is perfect for testing such alternatives. We apply an unsupervised classification scheme based on $K$-means to the massive APOGEE data set and explore whether the data are amenable to classification into discrete classes. We apply the $K$-means algorithm to 153,847 high-resolution spectra ($R\approx22,500$). We discuss the main virtues and weaknesses of the algorithm, as well as our choice of parameters. We show that a classification based on normalised spectra captures the variations in stellar atmospheric parameters, chemical abundances, and rotational velocity, among other factors. The algorithm is able to separate the bulge and halo populations, and distinguish dwarfs, sub-giants, RC and RGB stars. However, a discrete classification in flux space does not result in a neat organisation in parameter space. Furthermore, the lack of obvious groups in flux space causes the results to be fairly sensitive to the initialisation, and disrupts the efficiency of commonly used methods to select the optimal number of clusters. Our classification is publicly available, including extensive online material associated with the APOGEE Data Release 12 (DR12). Our description of the APOGEE database can be of great help in identifying specific types of targets for various applications. We find a lack of obvious groups in flux space, and identify limitations of the $K$-means algorithm in dealing with this kind of data.
Tasks
Published 2018-01-24
URL http://arxiv.org/abs/1801.07912v2
PDF http://arxiv.org/pdf/1801.07912v2.pdf
PWC https://paperswithcode.com/paper/machine-learning-in-apogee-unsupervised
Repo
Framework
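
The core clustering step amounts to running $K$-means on normalised spectra; a minimal scikit-learn sketch with random stand-in data follows. The spectrum matrix, normalisation, and cluster count are placeholders (the study classifies 153,847 APOGEE spectra and discusses the choice of $K$ and the sensitivity to initialisation in detail).

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
spectra = rng.random((1000, 500))        # stand-in for continuum-normalised stellar spectra

# Normalise each spectrum (the classification operates on normalised flux vectors).
spectra = spectra / np.linalg.norm(spectra, axis=1, keepdims=True)

# Multiple initialisations (n_init) help with the initialisation sensitivity noted above.
kmeans = KMeans(n_clusters=50, n_init=10, random_state=0).fit(spectra)
labels = kmeans.labels_                  # one class per spectrum
```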

Algorithms that Remember: Model Inversion Attacks and Data Protection Law

Title Algorithms that Remember: Model Inversion Attacks and Data Protection Law
Authors Michael Veale, Reuben Binns, Lilian Edwards
Abstract Many individuals are concerned about the governance of machine learning systems and the prevention of algorithmic harms. The EU’s recent General Data Protection Regulation (GDPR) has been seen as a core tool for achieving better governance of this area. While the GDPR does apply to the use of models in some limited situations, most of its provisions relate to the governance of personal data, while models have traditionally been seen as intellectual property. We present recent work from the information security literature around ‘model inversion’ and ‘membership inference’ attacks, which indicate that the process of turning training data into machine learned systems is not one-way, and demonstrate how this could lead some models to be legally classified as personal data. Taking this as a probing experiment, we explore the different rights and obligations this would trigger and their utility, and posit future directions for algorithmic governance and regulation.
Tasks
Published 2018-07-12
URL http://arxiv.org/abs/1807.04644v2
PDF http://arxiv.org/pdf/1807.04644v2.pdf
PWC https://paperswithcode.com/paper/algorithms-that-remember-model-inversion
Repo
Framework

Neural Feature Learning From Relational Database

Title Neural Feature Learning From Relational Database
Authors Hoang Thanh Lam, Tran Ngoc Minh, Mathieu Sinn, Beat Buesser, Martin Wistuba
Abstract Feature engineering is one of the most important but most tedious tasks in data science. This work studies the automation of feature learning from relational databases. We first prove theoretically that finding the optimal features from relational data for predictive tasks is NP-hard. We propose an efficient rule-based approach based on heuristics and a deep neural network to automatically learn appropriate features from relational data. We benchmark our approaches in ensembles in past Kaggle competitions. Our new approach wins late medals and beats the state-of-the-art solutions with significant margins. To the best of our knowledge, this is the first time an automated data science system has won medals in Kaggle competitions involving complex relational databases.
Tasks Feature Engineering
Published 2018-01-16
URL https://arxiv.org/abs/1801.05372v4
PDF https://arxiv.org/pdf/1801.05372v4.pdf
PWC https://paperswithcode.com/paper/neural-feature-learning-from-relational
Repo
Framework
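
The flavour of rule-based feature generation over a relational schema can be shown with pandas: join a child table onto the main table and emit simple aggregate features for every related numeric column. The two-table schema and column names are invented for illustration; the paper's rule set and neural feature learner are not reproduced here.

```python
import pandas as pd

# Invented schema: one row per customer, many transactions per customer.
customers = pd.DataFrame({"customer_id": [1, 2, 3], "label": [0, 1, 0]})
transactions = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3, 3],
    "amount": [10.0, 25.0, 5.0, 7.5, 12.0, 3.0],
})

# Rule-based features: aggregate the related table per key and join back.
aggregates = (transactions.groupby("customer_id")["amount"]
              .agg(["count", "sum", "mean", "max"])
              .add_prefix("amount_")
              .reset_index())
features = customers.merge(aggregates, on="customer_id", how="left")
```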

Using Curvilinear Features in Focus for Registering a Single Image to a 3D Object

Title Using Curvilinear Features in Focus for Registering a Single Image to a 3D Object
Authors Hatem A. Rashwan, Sylvie Chambon, Pierre Gurdjos, Géraldine Morin, Vincent Charvillat
Abstract In the context of 2D/3D registration, this paper introduces an approach for matching features detected in two different modalities, photographs and 3D models, by using a common 2D representation. More precisely, 2D images are matched with a set of depth images representing the 3D model. After introducing the concept of curvilinear saliency, related to curvature estimation, we propose a new ridge and valley detector for depth images rendered from the 3D model. A variant of this detector is adapted to photographs, in particular by applying it at multiple scales and by combining it with the principle of focus curves. Finally, a registration algorithm for determining the correct viewpoint of the 3D model, and thus the pose, is proposed. It is based on histogram-of-gradients features adapted to the features manipulated in 2D and in 3D, and on the introduction of repeatability scores. The results presented highlight the quality of the detected features, in terms of repeatability, and the interest of the approach for registration and pose estimation.
Tasks Pose Estimation
Published 2018-02-26
URL http://arxiv.org/abs/1802.09384v1
PDF http://arxiv.org/pdf/1802.09384v1.pdf
PWC https://paperswithcode.com/paper/using-curvilinear-features-in-focus-for
Repo
Framework
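
Ridge/valley responses on a depth image are commonly derived from the eigenvalues of the image Hessian; the sketch below is that generic textbook recipe, with an arbitrary Gaussian scale, and is not the curvilinear-saliency detector proposed in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ridge_valley_response(depth, sigma=2.0):
    """Largest-magnitude Hessian eigenvalue per pixel (negative on ridges, positive in valleys)."""
    dxx = gaussian_filter(depth, sigma, order=(0, 2))
    dyy = gaussian_filter(depth, sigma, order=(2, 0))
    dxy = gaussian_filter(depth, sigma, order=(1, 1))
    # Eigenvalues of the 2x2 Hessian [[dxx, dxy], [dxy, dyy]].
    trace, det = dxx + dyy, dxx * dyy - dxy ** 2
    disc = np.sqrt(np.maximum(trace ** 2 / 4 - det, 0.0))
    lam1, lam2 = trace / 2 + disc, trace / 2 - disc
    return np.where(np.abs(lam1) >= np.abs(lam2), lam1, lam2)

response = ridge_valley_response(np.random.default_rng(0).random((128, 128)))
```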

Land-Cover Classification with High-Resolution Remote Sensing Images Using Transferable Deep Models

Title Land-Cover Classification with High-Resolution Remote Sensing Images Using Transferable Deep Models
Authors Xin-Yi Tong, Gui-Song Xia, Qikai Lu, Huanfeng Shen, Shengyang Li, Shucheng You, Liangpei Zhang
Abstract In recent years, large amounts of high-spatial-resolution remote sensing (HRRS) images have become available for land-cover mapping. However, due to the complex information brought by the increased spatial resolution and the data disturbances caused by different conditions of image acquisition, it is often difficult to find an efficient method for achieving accurate land-cover classification with high-resolution and heterogeneous remote sensing images. In this paper, we propose a scheme to apply a deep model learned from a labeled land-cover dataset to classify unlabeled HRRS images. The main idea is to rely on deep neural networks to represent the contextual information contained in different types of land cover, and to propose a pseudo-labeling and sample selection scheme for improving the transferability of deep models. More precisely, a deep convolutional neural network (CNN) is first pre-trained with a well-annotated land-cover dataset, referred to as the source data. Then, given a target image with no labels, the pre-trained CNN model is used to classify the image in a patch-wise manner. Patches classified with high confidence are assigned pseudo-labels and employed as queries to retrieve related samples from the source data. The pseudo-labels confirmed by the retrieved results are treated as supervised information for fine-tuning the pre-trained deep model. To obtain a pixel-wise land-cover classification of the target image, we rely on the fine-tuned CNN and develop a hybrid classification that combines patch-wise classification and hierarchical segmentation. In addition, we create a large-scale land-cover dataset containing 150 Gaofen-2 satellite images for CNN pre-training. Experiments on multi-source HRRS images show encouraging results and demonstrate the applicability of the proposed scheme to land-cover classification.
Tasks
Published 2018-07-16
URL https://arxiv.org/abs/1807.05713v2
PDF https://arxiv.org/pdf/1807.05713v2.pdf
PWC https://paperswithcode.com/paper/learning-transferable-deep-models-for-land
Repo
Framework
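
The pseudo-labelling step (keeping only patches the pre-trained CNN classifies with high confidence) can be sketched as a simple threshold over softmax confidences. The stand-in classifier, patch tensor, class count, and threshold below are placeholders, not the pipeline described in the paper.

```python
import torch

def select_pseudo_labels(model, patches, threshold=0.95):
    """Assign pseudo-labels only to patches classified with high confidence."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(patches), dim=1)   # (N, n_classes)
    confidence, labels = probs.max(dim=1)
    keep = confidence >= threshold
    return patches[keep], labels[keep]

# Toy usage with a stand-in classifier over 64x64 RGB patches and 6 land-cover classes.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 6))
patches = torch.rand(32, 3, 64, 64)
kept_patches, pseudo_labels = select_pseudo_labels(model, patches)
```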

ISIC 2018-A Method for Lesion Segmentation

Title ISIC 2018-A Method for Lesion Segmentation
Authors Hongdiao Wen, Rongjian Xu, Tie Zhang
Abstract Our team participated in the challenge of Task 1: Lesion Boundary Segmentation, using a combination of two networks: one designed by ourselves, named updcnn net, and the other an improved VGG 16-layer net. The updcnn net is trained on reduced-size images, while the VGG 16-layer net uses large images. Image enhancement is used to obtain a richer data set. In the VGG 16-layer net, we use boxes for local attention regularization to fine-tune the loss function, which increases the amount of training data and also makes the model more robust. At test time, the two models are used jointly and achieve good results.
Tasks Image Enhancement, Lesion Segmentation
Published 2018-07-19
URL http://arxiv.org/abs/1807.07391v2
PDF http://arxiv.org/pdf/1807.07391v2.pdf
PWC https://paperswithcode.com/paper/isic-2018-a-method-for-lesion-segmentation
Repo
Framework

Impact of ultrasound image reconstruction method on breast lesion classification with neural transfer learning

Title Impact of ultrasound image reconstruction method on breast lesion classification with neural transfer learning
Authors Michal Byra, Tomasz Sznajder, Danijel Korzinek, Hanna Piotrzkowska-Wroblewska, Katarzyna Dobruch-Sobczak, Andrzej Nowicki, Krzysztof Marasek
Abstract Deep learning algorithms, especially convolutional neural networks, have become a methodology of choice in medical image analysis. However, recent studies in computer vision show that even a small modification of input image intensities may cause a deep learning model to classify the image differently. In medical imaging, the distribution of image intensities is related to the applied image reconstruction algorithm. In this paper we investigate the impact of the ultrasound image reconstruction method on breast lesion classification with neural transfer learning. Due to their high dynamic range, raw ultrasonic signals are commonly compressed in order to reconstruct B-mode images. Based on raw data acquired from breast lesions, we reconstruct B-mode images using different compression levels. Next, transfer learning is applied for classification. Differently reconstructed images are employed for training and evaluation. We show that modifying the reconstruction algorithm leads to a decrease in classification performance. As a remedy, we propose a data augmentation method. We show that augmenting the training set with differently reconstructed B-mode images leads to a more robust and efficient classification. Our study suggests that it is important to take into account the image reconstruction algorithms implemented in medical scanners during the development of computer-aided diagnosis systems.
Tasks Data Augmentation, Image Reconstruction, Transfer Learning
Published 2018-04-06
URL http://arxiv.org/abs/1804.02119v1
PDF http://arxiv.org/pdf/1804.02119v1.pdf
PWC https://paperswithcode.com/paper/impact-of-ultrasound-image-reconstruction
Repo
Framework
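
The augmentation idea (reconstructing B-mode images from the same raw data at several compression levels) can be sketched with standard envelope detection followed by log compression. The Hilbert-transform envelope and the dynamic-range values below are generic ultrasound practice used for illustration, not the paper's exact reconstruction pipeline.

```python
import numpy as np
from scipy.signal import hilbert

def bmode(rf, dynamic_range_db=60.0):
    """Envelope-detect RF scanlines and log-compress to a chosen dynamic range."""
    envelope = np.abs(hilbert(rf, axis=0))                    # per-scanline analytic signal
    db = 20.0 * np.log10(envelope / envelope.max() + 1e-12)
    return np.clip(db, -dynamic_range_db, 0.0) / dynamic_range_db + 1.0   # map to [0, 1]

# Augmentation: several images from the same raw frame, differing only in compression level.
rf = np.random.default_rng(0).normal(size=(2048, 128))        # stand-in raw RF frame
augmented = [bmode(rf, dr) for dr in (40.0, 50.0, 60.0, 70.0)]
```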