April 1, 2020

3353 words · 16 min read

Paper Group ANR 482



How to Evaluate Solutions in Pareto-based Search-Based Software Engineering? A Critical Review and Methodological Guidance

Title How to Evaluate Solutions in Pareto-based Search-Based Software Engineering? A Critical Review and Methodological Guidance
Authors Miqing Li, Tao Chen, Xin Yao
Abstract With modern requirements, there is an increasing tendency to consider multiple objectives/criteria simultaneously in many Software Engineering (SE) scenarios. Such a multi-objective optimization scenario comes with an important issue: how to evaluate the outcome of optimization algorithms, which is typically a set of incomparable solutions (i.e., solutions that are Pareto non-dominated with respect to each other). This issue can be challenging for the SE community, particularly for practitioners of Search-Based SE (SBSE). On one hand, multi-objective optimization may still be relatively new to SE/SBSE researchers, who may not be able to identify the right evaluation methods for their problems. On the other hand, simply following the evaluation methods for general multi-objective optimization problems may not be appropriate for specific SE problems, especially when the problem nature or the decision maker’s preferences are explicitly/implicitly available. This has been well echoed in the literature by various inappropriate/inadequate selections and inaccurate/misleading uses of evaluation methods. In this paper, we carry out a critical review of quality evaluation for multi-objective optimization in SBSE. We survey 717 papers published between 2009 and 2019 from 36 venues in 7 repositories, and select 97 prominent studies, through which we identify five important but overlooked issues in the area. We then conduct an in-depth analysis of quality evaluation indicators and general situations in SBSE, which, together with the identified issues, enables us to provide methodological guidance for selecting and using evaluation methods in different SBSE scenarios.
Tasks Multiobjective Optimization
Published 2020-02-20
URL https://arxiv.org/abs/2002.09040v2
PDF https://arxiv.org/pdf/2002.09040v2.pdf
PWC https://paperswithcode.com/paper/how-to-evaluate-solutions-in-pareto-based
Repo
Framework
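The core notion the paper's evaluation guidance rests on, Pareto non-dominance between solutions, can be sketched in a few lines. This is a minimal illustration with hypothetical helper names, assuming every objective is to be minimized:

```python
def dominates(a, b):
    """True if objective vector a weakly dominates b and is strictly
    better in at least one objective (minimization convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(solutions):
    """Filter a list of objective vectors down to its Pareto front:
    keep only solutions that no other solution dominates."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]
```

An algorithm's output is typically such a front, which is why scalar quality indicators (hypervolume, IGD, etc.) are needed to compare two fronts at all.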

Dominance Move calculation using a MIP approach for comparison of multi and many-objective optimization solution sets

Title Dominance Move calculation using a MIP approach for comparison of multi and many-objective optimization solution sets
Authors Claudio Lucio do Val Lopes, Flávio Vinícius Cruzeiro Martins, Elizabeth Fialho Wanner
Abstract Dominance move (DoM) is a binary quality indicator that can be used in multiobjective optimization. It can compare solution sets while representing important features such as convergence, spread, uniformity, and cardinality. DoM has an intuitive concept: it considers the minimum total move needed for one set to weakly Pareto dominate the other. Despite these properties, DoM is hard to calculate. The original formulation presents an efficient and exact method only for the biobjective case. This work presents a new approach that extends DoM to three or more objectives by formulating its calculation as a mixed integer programming (MIP) problem. Initial experiments in the biobjective space were done to verify the model’s correctness, and further experiments with three, five, and ten objective functions show how the model behaves in higher-dimensional cases. Algorithms such as IBEA, MOEA/D, NSGA-III, NSGA-II, and SPEA2 were used to generate the solution sets; however, any other algorithm could be used with the DoM indicator. The results confirm the effectiveness of the MIP DoM on problems with more than three objective functions. Final notes, considerations, and future research directions discuss how to exploit particularities of some solution sets and how to improve the model and extend its use to other situations.
Tasks Multiobjective Optimization
Published 2020-01-10
URL https://arxiv.org/abs/2001.03657v1
PDF https://arxiv.org/pdf/2001.03657v1.pdf
PWC https://paperswithcode.com/paper/dominance-move-calculation-using-a-mip
Repo
Framework
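For intuition about what DoM measures, here is a toy exhaustive-search sketch, not the paper's MIP model: it tries every assignment of the second set's points to the first set's points and sums the cheapest moves. The Manhattan-distance choice and the helper names are assumptions made for illustration only; the exact indicator and its MIP formulation are defined in the paper.

```python
from itertools import product

def move_cost(p, targets):
    """Manhattan cost to move point p just far enough that it weakly
    dominates every target assigned to it (minimization objectives)."""
    if not targets:
        return 0.0
    needed = [min(t[i] for t in targets) for i in range(len(p))]
    return sum(max(p[i] - needed[i], 0.0) for i in range(len(p)))

def dominance_move_bruteforce(P, Q):
    """Exhaustive DoM-style value: try every assignment of Q's points to
    P's points and return the cheapest total move. Exponential cost, so
    this is only usable on tiny sets -- hence the need for a MIP model."""
    best = float('inf')
    for assign in product(range(len(P)), repeat=len(Q)):
        groups = [[Q[j] for j, a in enumerate(assign) if a == i]
                  for i in range(len(P))]
        best = min(best, sum(move_cost(p, g) for p, g in zip(P, groups)))
    return best
```

If P already dominates Q the value is zero; otherwise it grows with how far P must travel, which is what makes DoM sensitive to convergence, spread, and cardinality at once.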

Improved dual channel pulse coupled neural network and its application to multi-focus image fusion

Title Improved dual channel pulse coupled neural network and its application to multi-focus image fusion
Authors Huai-Shui Tong, Xiao-Jun Wu, Hui Li
Abstract This paper presents an improved dual-channel pulse coupled neural network (IDC-PCNN) model for image fusion that overcomes some defects of the standard PCNN model. In this fusion scheme, the multiplication rule is replaced by an addition rule in the information fusion pool of the dual-channel PCNN (DC-PCNN) model. Meanwhile, the sum of modified Laplacian (SML) measure is adopted, which outperforms other focus measures. This method not only inherits the good characteristics of the standard PCNN model but also enhances computing efficiency and fusion quality. The performance of the proposed method is evaluated using four criteria: average cross entropy, root mean square error, peak signal-to-noise ratio, and the structural similarity index. Comparative studies show that the proposed fusion algorithm outperforms both the standard PCNN method and the DC-PCNN method.
Tasks
Published 2020-02-04
URL https://arxiv.org/abs/2002.01102v1
PDF https://arxiv.org/pdf/2002.01102v1.pdf
PWC https://paperswithcode.com/paper/improved-dual-channel-pulse-coupled-neural
Repo
Framework
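The SML focus measure the scheme adopts can be sketched directly from its textbook definition. This is a hedged NumPy illustration; the paper's exact step size and window parameters may differ:

```python
import numpy as np

def sum_modified_laplacian(img, step=1):
    """Sum-of-modified-Laplacian (SML) focus measure on a 2-D grayscale
    array: |2I(x,y) - I(x-s,y) - I(x+s,y)| + |2I(x,y) - I(x,y-s) - I(x,y+s)|
    summed over the interior pixels. Sharper (in-focus) regions give
    larger values, which is what drives multi-focus fusion decisions."""
    img = np.asarray(img, dtype=float)
    c = img[step:-step, step:-step]
    ml = (np.abs(2 * c - img[:-2*step, step:-step] - img[2*step:, step:-step])
          + np.abs(2 * c - img[step:-step, :-2*step] - img[step:-step, 2*step:]))
    return ml.sum()
```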

Adversarial Examples and the Deeper Riddle of Induction: The Need for a Theory of Artifacts in Deep Learning

Title Adversarial Examples and the Deeper Riddle of Induction: The Need for a Theory of Artifacts in Deep Learning
Authors Cameron Buckner
Abstract Deep learning is currently the most widespread and successful technology in artificial intelligence. It promises to push the frontier of scientific discovery beyond current limits. However, skeptics have worried that deep neural networks are black boxes, and have called into question whether these advances can really be deemed scientific progress if humans cannot understand them. Relatedly, these systems also possess bewildering new vulnerabilities: most notably a susceptibility to “adversarial examples”. In this paper, I argue that adversarial examples will become a flashpoint of debate in philosophy and diverse sciences. Specifically, new findings concerning adversarial examples have challenged the consensus view that the networks’ verdicts on these cases are caused by overfitting idiosyncratic noise in the training set, and may instead be the result of detecting predictively useful “intrinsic features of the data geometry” that humans cannot perceive (Ilyas et al., 2019). These results should cause us to re-examine responses to one of the deepest puzzles at the intersection of philosophy and science: Nelson Goodman’s “new riddle” of induction. Specifically, they raise the possibility that progress in a number of sciences will depend upon the detection and manipulation of useful features that humans find inscrutable. Before we can evaluate this possibility, however, we must decide which (if any) of these inscrutable features are real but available only to “alien” perception and cognition, and which are distinctive artifacts of deep learning, since artifacts like lens flares or Gibbs phenomena can be similarly useful for prediction but are usually seen as obstacles to scientific theorizing. Thus, machine learning researchers urgently need to develop a theory of artifacts for deep neural networks, and I conclude by sketching some initial directions for this area of research.
Tasks
Published 2020-03-20
URL https://arxiv.org/abs/2003.11917v1
PDF https://arxiv.org/pdf/2003.11917v1.pdf
PWC https://paperswithcode.com/paper/adversarial-examples-and-the-deeper-riddle-of
Repo
Framework

Perceptual Image Super-Resolution with Progressive Adversarial Network

Title Perceptual Image Super-Resolution with Progressive Adversarial Network
Authors Lone Wong, Deli Zhao, Shaohua Wan, Bo Zhang
Abstract Single Image Super-Resolution (SISR) aims to recover a high-resolution image from a single small, low-quality input. With the popularity of consumer electronics in our daily life, this topic has become more and more attractive. In this paper, we argue that the curse of dimensionality is the underlying reason for the limited performance of state-of-the-art algorithms. To address this issue, we propose the Progressive Adversarial Network (PAN), which is capable of coping with this difficulty for domain-specific image super-resolution. The key principle of PAN is that we do not apply any distance-based reconstruction error as the loss to be optimized, and are thus free from the restriction of the curse of dimensionality. To maintain faithful reconstruction precision, we resort to U-Net and progressive growing of the neural architecture. With U-Net, the low-level features in the encoder can be transferred into the decoder to enhance textural details. Progressive growing enhances image resolution gradually, thereby preserving the precision of the recovered image. Moreover, to obtain high-fidelity outputs, we leverage the framework of the powerful StyleGAN to perform adversarial learning. Without the curse of dimensionality, our model can super-resolve large-size images with remarkable photo-realistic details and few distortions. Extensive experiments demonstrate the superiority of our algorithm over the state of the art both quantitatively and qualitatively.
Tasks Image Super-Resolution, Super-Resolution
Published 2020-03-08
URL https://arxiv.org/abs/2003.03756v4
PDF https://arxiv.org/pdf/2003.03756v4.pdf
PWC https://paperswithcode.com/paper/domain-specific-image-super-resolution-with
Repo
Framework

PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models

Title PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models
Authors Sachit Menon, Alexandru Damian, Shijia Hu, Nikhil Ravi, Cynthia Rudin
Abstract The primary aim of single-image super-resolution is to construct a high-resolution (HR) image from a corresponding low-resolution (LR) input. In previous approaches, which have generally been supervised, the training objective typically measures a pixel-wise average distance between the super-resolved (SR) and HR images. Optimizing such metrics often leads to blurring, especially in high variance (detailed) regions. We propose an alternative formulation of the super-resolution problem based on creating realistic SR images that downscale correctly. We present a novel super-resolution algorithm addressing this problem, PULSE (Photo Upsampling via Latent Space Exploration), which generates high-resolution, realistic images at resolutions previously unseen in the literature. It accomplishes this in an entirely self-supervised fashion and is not confined to a specific degradation operator used during training, unlike previous methods (which require training on databases of LR-HR image pairs for supervised learning). Instead of starting with the LR image and slowly adding detail, PULSE traverses the high-resolution natural image manifold, searching for images that downscale to the original LR image. This is formalized through the “downscaling loss,” which guides exploration through the latent space of a generative model. By leveraging properties of high-dimensional Gaussians, we restrict the search space to guarantee that our outputs are realistic. PULSE thereby generates super-resolved images that both are realistic and downscale correctly. We show extensive experimental results demonstrating the efficacy of our approach in the domain of face super-resolution (also known as face hallucination). Our method outperforms state-of-the-art methods in perceptual quality at higher resolutions and scale factors than previously possible.
Tasks Face Hallucination, Image Super-Resolution, Super-Resolution
Published 2020-03-08
URL https://arxiv.org/abs/2003.03808v1
PDF https://arxiv.org/pdf/2003.03808v1.pdf
PWC https://paperswithcode.com/paper/pulse-self-supervised-photo-upsampling-via
Repo
Framework
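The "downscaling loss" idea can be illustrated with a toy stand-in for the generator. PULSE itself walks StyleGAN's latent space with gradient-based updates constrained to a sphere; the random-search loop, the linear "generator" matrix, and the average-pooling degradation below are simplifications assumed for illustration only:

```python
import numpy as np

def downscale(img, factor):
    """Average-pool a square image by an integer factor (toy stand-in
    for the known degradation operator)."""
    n = img.shape[0] // factor
    return img[:n*factor, :n*factor].reshape(n, factor, n, factor).mean(axis=(1, 3))

def downscaling_loss(sr, lr, factor):
    """PULSE-style objective: how far the candidate HR image lands from
    the LR input after passing through the downscaling operator."""
    return float(np.mean((downscale(sr, factor) - lr) ** 2))

def search_latent(generator, lr, factor, steps=200, seed=0):
    """Toy latent-space search: keep the random latent whose generated
    image downscales closest to the LR input."""
    rng = np.random.default_rng(seed)
    side = int(np.sqrt(generator.shape[0]))
    best_img, best_loss = None, float('inf')
    for _ in range(steps):
        z = rng.standard_normal(generator.shape[1])
        img = (generator @ z).reshape(side, side)
        loss = downscaling_loss(img, lr, factor)
        if loss < best_loss:
            best_img, best_loss = img, loss
    return best_img, best_loss
```

The key design choice this mirrors is that realism comes from staying on the generator's manifold, while fidelity comes only from the downscaling constraint, never from a pixel-wise distance to a ground-truth HR image.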

HybridCite: A Hybrid Model for Context-Aware Citation Recommendation

Title HybridCite: A Hybrid Model for Context-Aware Citation Recommendation
Authors Michael Färber, Ashwath Sampath
Abstract Citation recommendation systems aim to recommend citations for either a complete paper or a small portion of text called a citation context. The process of recommending citations for citation contexts is called local citation recommendation and is the focus of this paper. In this paper, firstly, we develop citation recommendation approaches based on embeddings, topic modeling, and information retrieval techniques. We combine, for the first time to the best of our knowledge, the best-performing algorithms into a semi-genetic hybrid recommender system for citation recommendation. We evaluate the single approaches and the hybrid approach offline based on several data sets, such as the Microsoft Academic Graph (MAG) and the MAG in combination with arXiv and ACL. We further conduct a user study for evaluating our approaches online. Our evaluation results show that a hybrid model containing embedding and information retrieval-based components outperforms its individual components and further algorithms by a large margin.
Tasks Information Retrieval, Recommendation Systems
Published 2020-02-15
URL https://arxiv.org/abs/2002.06406v1
PDF https://arxiv.org/pdf/2002.06406v1.pdf
PWC https://paperswithcode.com/paper/hybridcite-a-hybrid-model-for-context-aware
Repo
Framework
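One simple way to hybridize several recommenders, shown here as a hedged stand-in for the paper's semi-genetic combination (which it does not describe in this abstract), is weighted reciprocal-rank fusion of their ranked candidate lists:

```python
def fuse_rankings(rankings, weights):
    """Combine several recommenders' ranked candidate lists into one
    ranking by weighted reciprocal-rank scoring: a candidate at position
    p in a list with weight w contributes w / (p + 1) to its score."""
    scores = {}
    for ranked, w in zip(rankings, weights):
        for pos, doc in enumerate(ranked):
            scores[doc] = scores.get(doc, 0.0) + w / (pos + 1)
    return sorted(scores, key=scores.get, reverse=True)
```

In a genetic variant, the `weights` vector would be the genome being evolved against offline evaluation metrics.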

Weighted Encoding Based Image Interpolation With Nonlocal Linear Regression Model

Title Weighted Encoding Based Image Interpolation With Nonlocal Linear Regression Model
Authors Junchao Zhang
Abstract Image interpolation is a special case of image super-resolution, where the low-resolution image is directly down-sampled from its high-resolution counterpart without blurring or noise. Therefore, assumptions adopted in super-resolution models are not valid for image interpolation. To address this problem, we propose a novel image interpolation model based on sparse representation. Two widely used priors, sparsity and nonlocal self-similarity, are used as regularization terms to enhance the stability of the interpolation model. Meanwhile, we incorporate nonlocal linear regression into this model, since nonlocal similar patches can provide a better approximation to a given patch. Moreover, we propose a new approach to learn adaptive sub-dictionaries online instead of by clustering: for each patch, similar patches are grouped to learn an adaptive sub-dictionary, generating a sparser and more accurate representation. Finally, weighted encoding is introduced to suppress the tailing of fitting residuals in the data-fidelity term. Abundant experimental results demonstrate that our proposed method outperforms several state-of-the-art methods in terms of quantitative measures and visual quality.
Tasks Image Super-Resolution, Super-Resolution
Published 2020-03-04
URL https://arxiv.org/abs/2003.04811v1
PDF https://arxiv.org/pdf/2003.04811v1.pdf
PWC https://paperswithcode.com/paper/weighted-encoding-based-image-interpolation
Repo
Framework
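The role of weighted encoding, down-weighting samples with heavy residual tails so they distort the fit less, can be illustrated with iteratively reweighted least squares. This is a toy dense-regression sketch, not the paper's sparse-coding model over learned sub-dictionaries:

```python
import numpy as np

def weighted_encoding(D, y, iters=20, eps=1e-6):
    """Iteratively reweighted least squares: at each step, samples with
    large fitting residuals get small weights, so outliers in the
    residual "tail" pull the encoding coefficients less."""
    a = np.linalg.lstsq(D, y, rcond=None)[0]
    for _ in range(iters):
        r = y - D @ a
        w = 1.0 / np.sqrt(np.abs(r) + eps)   # sqrt of 1/|r| weights -> L1-like fit
        a = np.linalg.lstsq(D * w[:, None], w * y, rcond=None)[0]
    return a
```

With nine of ten points on a line and one gross outlier, the reweighted fit recovers the line while ordinary least squares is visibly pulled off.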

Audio Impairment Recognition Using a Correlation-Based Feature Representation

Title Audio Impairment Recognition Using a Correlation-Based Feature Representation
Authors Alessandro Ragano, Emmanouil Benetos, Andrew Hines
Abstract Audio impairment recognition is based on finding noise in audio files and categorising the impairment type. Recently, significant performance improvements have been obtained thanks to advanced deep learning models. However, feature robustness is still an unresolved issue, and it is one of the main reasons why we need powerful deep learning architectures. In the presence of a variety of musical styles, hand-crafted features are less effective at capturing audio degradation characteristics: they are prone to failure when recognising audio impairments and may mistakenly learn musical concepts rather than impairment types. In this paper, we propose a new representation of hand-crafted features that is based on the correlation of feature pairs. We experimentally compare the proposed correlation-based feature representation with a typical raw feature representation used in machine learning, and we show superior performance in terms of compact feature dimensionality and improved computational speed in the test stage whilst achieving comparable accuracy.
Tasks
Published 2020-03-22
URL https://arxiv.org/abs/2003.09889v2
PDF https://arxiv.org/pdf/2003.09889v2.pdf
PWC https://paperswithcode.com/paper/audio-impairment-recognition-using-a
Repo
Framework
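The central idea, replacing raw per-frame features with the correlations between feature pairs, is easy to sketch. The exact feature set and aggregation in the paper may differ; this hedged illustration just takes the upper triangle of the feature correlation matrix as the new, compact representation:

```python
import numpy as np

def correlation_features(frames):
    """Turn a (n_frames, n_features) matrix of per-frame hand-crafted
    features into one vector of pairwise feature correlations (the
    upper triangle of the correlation matrix). For p features this
    gives p*(p-1)/2 values, independent of the clip length."""
    corr = np.corrcoef(np.asarray(frames, dtype=float).T)
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]
```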

Stable Prediction with Model Misspecification and Agnostic Distribution Shift

Title Stable Prediction with Model Misspecification and Agnostic Distribution Shift
Authors Kun Kuang, Ruoxuan Xiong, Peng Cui, Susan Athey, Bo Li
Abstract For many machine learning algorithms, two main assumptions are required to guarantee performance. One is that the test data are drawn from the same distribution as the training data, and the other is that the model is correctly specified. In real applications, however, we often have little prior knowledge of the test data and the underlying true model. Under model misspecification, agnostic distribution shift between training and test data leads to inaccuracy of parameter estimation and instability of prediction across unknown test data. To address these problems, we propose a novel Decorrelated Weighting Regression (DWR) algorithm which jointly optimizes a variable decorrelation regularizer and a weighted regression model. The variable decorrelation regularizer estimates a weight for each sample such that variables are decorrelated on the weighted training data. Then, these weights are used in the weighted regression to improve the accuracy of estimation on the effect of each variable, thus helping to improve the stability of prediction across unknown test data.  Extensive experiments clearly demonstrate that our DWR algorithm can significantly improve the accuracy of parameter estimation and stability of prediction with model misspecification and agnostic distribution shift.
Tasks
Published 2020-01-31
URL https://arxiv.org/abs/2001.11713v1
PDF https://arxiv.org/pdf/2001.11713v1.pdf
PWC https://paperswithcode.com/paper/stable-prediction-with-model-misspecification
Repo
Framework
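The decorrelation-by-reweighting idea can be shown exactly in a discrete toy case: weight each sample by the product of its variables' marginal frequencies divided by their joint frequency, and the variables become uncorrelated in the weighted data. This is a hypothetical stand-in for DWR's learned regularizer (which handles continuous variables and is optimized jointly with the regression), not the paper's algorithm:

```python
import numpy as np

def independence_weights(a, b):
    """Per-sample weights that make two discrete variables independent
    (hence uncorrelated) in the weighted empirical distribution:
    w_i = P(a_i) * P(b_i) / P(a_i, b_i)."""
    a, b = np.asarray(a), np.asarray(b)
    w = np.empty(len(a), dtype=float)
    for i in range(len(a)):
        pa = np.mean(a == a[i])
        pb = np.mean(b == b[i])
        pab = np.mean((a == a[i]) & (b == b[i]))
        w[i] = pa * pb / pab
    return w

def weighted_corr(x, y, w):
    """Correlation of x and y under sample weights w."""
    W = w / w.sum()
    mx, my = W @ x, W @ y
    cov = W @ ((x - mx) * (y - my))
    return cov / np.sqrt((W @ (x - mx) ** 2) * (W @ (y - my) ** 2))
```

Once variables are decorrelated this way, a weighted regression attributes effects to the right variables even when the model is misspecified, which is the stability DWR targets.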

Abstractive Summarization with Combination of Pre-trained Sequence-to-Sequence and Saliency Models

Title Abstractive Summarization with Combination of Pre-trained Sequence-to-Sequence and Saliency Models
Authors Itsumi Saito, Kyosuke Nishida, Kosuke Nishida, Junji Tomita
Abstract Pre-trained sequence-to-sequence (seq-to-seq) models have significantly improved the accuracy of several language generation tasks, including abstractive summarization. Although the fluency of abstractive summarization has been greatly improved by fine-tuning these models, it is not clear whether they can also identify the important parts of the source text to be included in the summary. In this study, we investigated the effectiveness of combining saliency models that identify the important parts of the source text with the pre-trained seq-to-seq models through extensive experiments. We also proposed a new combination model consisting of a saliency model that extracts a token sequence from a source text and a seq-to-seq model that takes the sequence as an additional input text. Experimental results showed that most of the combination models outperformed a simple fine-tuned seq-to-seq model on both the CNN/DM and XSum datasets even when the seq-to-seq model is pre-trained on large-scale corpora. Moreover, for the CNN/DM dataset, the proposed combination model exceeded the previous best-performing model by 1.33 points on ROUGE-L.
Tasks Abstractive Text Summarization, Text Generation
Published 2020-03-29
URL https://arxiv.org/abs/2003.13028v1
PDF https://arxiv.org/pdf/2003.13028v1.pdf
PWC https://paperswithcode.com/paper/abstractive-summarization-with-combination-of
Repo
Framework

An Optimal Statistical and Computational Framework for Generalized Tensor Estimation

Title An Optimal Statistical and Computational Framework for Generalized Tensor Estimation
Authors Rungang Han, Rebecca Willett, Anru Zhang
Abstract This paper describes a flexible framework for generalized low-rank tensor estimation problems that includes many important instances arising from applications in computational imaging, genomics, and network analysis. The proposed estimator consists of finding a low-rank tensor fit to the data under generalized parametric models. To overcome the difficulty of non-convexity in these problems, we introduce a unified approach of projected gradient descent that adapts to the underlying low-rank structure. Under mild conditions on the loss function, we establish both an upper bound on statistical error and the linear rate of computational convergence through a general deterministic analysis. Then we further consider a suite of generalized tensor estimation problems, including sub-Gaussian tensor denoising, tensor regression, and Poisson and binomial tensor PCA. We prove that the proposed algorithm achieves the minimax optimal rate of convergence in estimation error. Finally, we demonstrate the superiority of the proposed framework via extensive experiments on both simulated and real data.
Tasks Denoising
Published 2020-02-26
URL https://arxiv.org/abs/2002.11255v1
PDF https://arxiv.org/pdf/2002.11255v1.pdf
PWC https://paperswithcode.com/paper/an-optimal-statistical-and-computational
Repo
Framework
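The paper's unified algorithm, a gradient step on the loss followed by projection back onto the low-rank set, can be sketched in the simpler matrix case with truncated SVD as the projection. This is an illustrative simplification under a squared loss; the paper works with general tensors, general parametric losses, and a factored low-rank representation:

```python
import numpy as np

def svd_project(M, r):
    """Project a matrix onto the set of rank-<=r matrices via truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def lowrank_pgd(Y, r, lr=0.5, steps=100):
    """Projected gradient descent for least-squares low-rank estimation:
    take a gradient step on ||X - Y||_F^2, then project back to rank r.
    Starts from a spectral-style initialization (truncated SVD of Y)."""
    X = svd_project(Y, r)
    for _ in range(steps):
        X = svd_project(X - lr * 2 * (X - Y), r)
    return X
```

For this particular loss the projected iterates settle at the rank-r truncation of Y; the value of the framework is that the same step-then-project loop provably converges linearly for much harder losses (Poisson, binomial, regression designs) where no closed form exists.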

Fast Generation of Big Random Binary Trees

Title Fast Generation of Big Random Binary Trees
Authors William B. Langdon
Abstract random_tree() is a linear-time and linear-space C++ implementation able to create trees of up to a billion nodes for genetic programming and genetic improvement experiments. A 3.60GHz CPU can generate more than 18 million random GP program tree nodes per second.
Tasks
Published 2020-01-13
URL https://arxiv.org/abs/2001.04505v1
PDF https://arxiv.org/pdf/2001.04505v1.pdf
PWC https://paperswithcode.com/paper/fast-generation-of-big-random-binary-trees
Repo
Framework
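One classical way to generate a uniformly random binary tree in linear time is Rémy's algorithm: repeatedly pick an existing node uniformly at random and push it down under a new internal node whose other child is a fresh leaf. The Python sketch below is an illustrative stand-in; the paper's C++ random_tree() may use a different construction and data layout to reach billion-node scale:

```python
import random

def remy_random_tree(n, seed=None):
    """Rémy's algorithm: a uniformly random binary tree with n internal
    nodes (2n + 1 nodes total) built in O(n) time. Returns the root and
    left/right child arrays indexed by node id; -1 marks a leaf."""
    rng = random.Random(seed)
    left, right, parent = [-1], [-1], [-1]   # node 0 starts as the lone leaf
    root = 0
    for _ in range(n):
        x = rng.randrange(len(left))         # uniform over existing nodes
        internal = len(left)                 # id of the new internal node
        leaf = internal + 1                  # id of the new leaf
        side = rng.randrange(2)              # which side x goes on
        left += [x if side == 0 else leaf, -1]
        right += [leaf if side == 0 else x, -1]
        p = parent[x]
        parent += [p, internal]
        parent[x] = internal
        if p == -1:                          # x was the root
            root = internal
        elif left[p] == x:                   # splice the new node under p
            left[p] = internal
        else:
            right[p] = internal
    return root, left, right
```

Because each step does constant work on flat arrays, the construction stays linear in both time and space, which is the property the paper exploits at billion-node scale.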

Cross-stained Segmentation from Renal Biopsy Images Using Multi-level Adversarial Learning

Title Cross-stained Segmentation from Renal Biopsy Images Using Multi-level Adversarial Learning
Authors Ke Mei, Chuang Zhu, Lei Jiang, Jun Liu, Yuanyuan Qiao
Abstract Segmentation of renal pathological images is a key step in automatically analyzing renal histological characteristics. However, model performance varies significantly across differently stained datasets due to appearance variations. In this paper, we design a robust and flexible model for cross-stained segmentation. It is a novel multi-level deep adversarial network architecture that consists of three sub-networks: (i) a segmentation network; (ii) a pair of multi-level mirrored discriminators that guide the segmentation network to extract domain-invariant features; and (iii) a shape discriminator that further distinguishes the output of the segmentation network from the ground truth. Experimental results on glomeruli segmentation from renal biopsy images indicate that our network improves segmentation performance on the target stain type and can use unlabeled data to achieve accuracy similar to that obtained with labeled data. In addition, this method can be easily applied to other tasks.
Tasks
Published 2020-02-20
URL https://arxiv.org/abs/2002.08587v1
PDF https://arxiv.org/pdf/2002.08587v1.pdf
PWC https://paperswithcode.com/paper/cross-stained-segmentation-from-renal-biopsy
Repo
Framework

COPD Classification in CT Images Using a 3D Convolutional Neural Network

Title COPD Classification in CT Images Using a 3D Convolutional Neural Network
Authors Jalil Ahmed, Sulaiman Vesal, Felix Durlak, Rainer Kaergel, Nishant Ravikumar, Martine Remy-Jardin, Andreas Maier
Abstract Chronic obstructive pulmonary disease (COPD) is a lung disease that is not fully reversible and one of the leading causes of morbidity and mortality in the world. Early detection and diagnosis of COPD can increase the survival rate and reduce the risk of COPD progression in patients. Currently, the primary examination tool to diagnose COPD is spirometry. However, computed tomography (CT) is used for detecting symptoms and sub-type classification of COPD. Using different imaging modalities is a difficult and tedious task even for physicians and is subject to inter- and intra-observer variations. Hence, developing methods that can automatically classify COPD versus healthy patients is of great interest. In this paper, we propose a 3D deep learning approach to classify COPD and emphysema using volume-wise annotations only. We also demonstrate the impact of transfer learning on the classification of emphysema using knowledge transfer from a pre-trained COPD classification model.
Tasks Computed Tomography (CT), Transfer Learning
Published 2020-01-04
URL https://arxiv.org/abs/2001.01100v1
PDF https://arxiv.org/pdf/2001.01100v1.pdf
PWC https://paperswithcode.com/paper/copd-classification-in-ct-images-using-a-3d
Repo
Framework