February 1, 2020

3373 words 16 mins read

Paper Group AWR 321

Spatial Search Strategies for Open Government Data: A Systematic Comparison. A Cone-Beam X-Ray CT Data Collection designed for Machine Learning. Progressive Differentiable Architecture Search: Bridging the Depth Gap between Search and Evaluation. Image Super-Resolution by Neural Texture Transfer. 3D Object Recognition with Ensemble Learning — A S …

Spatial Search Strategies for Open Government Data: A Systematic Comparison

Title Spatial Search Strategies for Open Government Data: A Systematic Comparison
Authors Auriol Degbelo, Brhane Bahrishum Teka
Abstract The increasing availability of open government datasets on the Web calls for ways to enable their efficient access and searching. There is, however, an overall lack of understanding regarding which spatial search strategies perform best in this context. To address this gap, this work assessed the impact of different spatial search strategies on performance and user relevance judgment. We harvested machine-readable spatial datasets and their metadata from three English-based open government data portals, enhanced the metadata, developed a prototype, and performed both a theoretical and a user-based evaluation. The results highlight that (i) switching between area of overlap and Hausdorff distance for spatial similarity computation does not have any substantial impact on performance; and (ii) the use of Hausdorff distance induces slightly better user relevance ratings than the use of area of overlap. The data collected and the insights gleaned may serve as a baseline against which future work can compare.
Tasks
Published 2019-11-04
URL https://arxiv.org/abs/1911.01097v1
PDF https://arxiv.org/pdf/1911.01097v1.pdf
PWC https://paperswithcode.com/paper/spatial-search-strategies-for-open-government
Repo https://github.com/brhanebt/recommender
Framework none
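
As a rough illustration of the two spatial similarity measures compared in this paper, here is a minimal sketch that scores a query extent against a dataset extent using an area-of-overlap ratio and the Hausdorff distance (via SciPy). The bounding-box representation and the coordinate values are illustrative assumptions, not the authors' implementation, which lives in the linked repository.

```python
# Minimal sketch of the two spatial similarity measures compared in the paper, computed on
# axis-aligned bounding boxes (min_x, min_y, max_x, max_y). Coordinates are illustrative.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def overlap_ratio(a, b):
    # Area of overlap between boxes a and b, normalised by the area of their union.
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def hausdorff(a, b):
    # Symmetric Hausdorff distance between the corner points of the two boxes.
    pa = np.array([[a[0], a[1]], [a[0], a[3]], [a[2], a[1]], [a[2], a[3]]])
    pb = np.array([[b[0], b[1]], [b[0], b[3]], [b[2], b[1]], [b[2], b[3]]])
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

query = (7.0, 51.9, 7.7, 52.1)     # hypothetical query extent (lon, lat)
dataset = (7.1, 51.8, 7.6, 52.0)   # hypothetical dataset extent
print(overlap_ratio(query, dataset), hausdorff(query, dataset))
```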

A Cone-Beam X-Ray CT Data Collection designed for Machine Learning

Title A Cone-Beam X-Ray CT Data Collection designed for Machine Learning
Authors Henri Der Sarkissian, Felix Lucka, Maureen van Eijnatten, Giulia Colacicco, Sophia Bethany Coban, Kees Joost Batenburg
Abstract Unlike previous works, this open data collection consists of X-ray cone-beam (CB) computed tomography (CT) datasets specifically designed for machine learning applications and high cone-angle artefact reduction. Forty-two walnuts were scanned with a laboratory X-ray set-up to provide not only data from a single object but from a class of objects with natural variability. For each walnut, CB projections on three different source orbits were acquired to provide CB data with different cone angles and to allow artefact-free, high-quality ground truth images to be computed from the combined data for supervised learning. We provide the complete image reconstruction pipeline: raw projection data, a description of the scanning geometry, pre-processing and reconstruction scripts using open software, and the reconstructed volumes. As a result, the dataset can be used not only for high cone-angle artefact reduction but also for algorithm development and evaluation on other tasks, such as image reconstruction from limited or sparse-angle (low-dose) scanning, super-resolution, or segmentation.
Tasks Computed Tomography (CT), Image Reconstruction, Super-Resolution
Published 2019-05-12
URL https://arxiv.org/abs/1905.04787v2
PDF https://arxiv.org/pdf/1905.04787v2.pdf
PWC https://paperswithcode.com/paper/a-cone-beam-x-ray-ct-data-collection-designed
Repo https://github.com/cicwi/WalnutReconstructionCodes
Framework none
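
The collection ships raw projections together with pre-processing and reconstruction scripts. As a hedged sketch of the first pre-processing step such a pipeline typically performs, the snippet below applies flat-/dark-field correction and the negative-log transform to a single projection; the file names are placeholders, not the actual layout of the walnut data.

```python
# Hedged sketch of standard cone-beam pre-processing: flat-/dark-field correction followed by
# the negative-log transform. The file names are placeholders, not the dataset's actual layout.
import numpy as np
import imageio.v2 as imageio

dark = imageio.imread("dark_field.tif").astype(np.float32)       # detector dark frame (placeholder name)
flat = imageio.imread("flat_field.tif").astype(np.float32)       # open-beam flat frame (placeholder name)
proj = imageio.imread("projection_0000.tif").astype(np.float32)  # one raw projection (placeholder name)

# Normalise to transmission values in (0, 1], then convert to line integrals for reconstruction.
transmission = np.clip((proj - dark) / (flat - dark), 1e-6, 1.0)
line_integrals = -np.log(transmission)
```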

Progressive Differentiable Architecture Search: Bridging the Depth Gap between Search and Evaluation

Title Progressive Differentiable Architecture Search: Bridging the Depth Gap between Search and Evaluation
Authors Xin Chen, Lingxi Xie, Jun Wu, Qi Tian
Abstract Recently, differentiable search methods have made major progress in reducing the computational costs of neural architecture search. However, these approaches often report lower accuracy in evaluating the searched architecture or transferring it to another dataset. This is arguably due to the large gap between the architecture depths in search and evaluation scenarios. In this paper, we present an efficient algorithm which allows the depth of searched architectures to grow gradually during the training procedure. This brings two issues, namely, heavier computational overheads and weaker search stability, which we solve using search space approximation and regularization, respectively. With a significantly reduced search time (~7 hours on a single GPU), our approach achieves state-of-the-art performance on both the proxy dataset (CIFAR10 or CIFAR100) and the target dataset (ImageNet). Code is available at https://github.com/chenxin061/pdarts.
Tasks Neural Architecture Search
Published 2019-04-29
URL http://arxiv.org/abs/1904.12760v1
PDF http://arxiv.org/pdf/1904.12760v1.pdf
PWC https://paperswithcode.com/paper/progressive-differentiable-architecture
Repo https://github.com/stdacore/pdnas
Framework pytorch
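
The search space approximation mentioned in the abstract amounts to pruning low-scoring candidate operations between search stages so that deeper stages remain affordable. Below is a toy sketch of that pruning step, assuming softmax-normalized architecture weights per edge; it is not the authors' code, which is linked above.

```python
# Toy sketch of "search space approximation": after a search stage, keep only the top-k
# candidate operations per edge, ranked by their softmax architecture weight.
import torch

ops = ["none", "skip_connect", "sep_conv_3x3", "sep_conv_5x5", "dil_conv_3x3", "max_pool_3x3"]
alpha = torch.randn(len(ops))            # architecture weights for one edge (stand-in values)

def prune_edge(alpha, ops, keep=4):
    weights = torch.softmax(alpha, dim=0)
    kept = torch.topk(weights, keep).indices
    return [ops[i] for i in kept], alpha[kept]

ops_stage2, alpha_stage2 = prune_edge(alpha, ops, keep=4)   # a deeper stage searches fewer ops
print(ops_stage2)
```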

Image Super-Resolution by Neural Texture Transfer

Title Image Super-Resolution by Neural Texture Transfer
Authors Zhifei Zhang, Zhaowen Wang, Zhe Lin, Hairong Qi
Abstract Due to the significant information loss in low-resolution (LR) images, it has become extremely challenging to further advance the state of the art of single image super-resolution (SISR). Reference-based super-resolution (RefSR), on the other hand, has proven promising in recovering high-resolution (HR) details when a reference (Ref) image with content similar to that of the LR input is given. However, the quality of RefSR can degrade severely when the Ref image is less similar. This paper aims to unleash the potential of RefSR by leveraging more texture details from Ref images with stronger robustness, even when irrelevant Ref images are provided. Inspired by recent work on image stylization, we formulate the RefSR problem as neural texture transfer. We design an end-to-end deep model which enriches HR details by adaptively transferring the texture from Ref images according to their textural similarity. Instead of matching content in the raw pixel space as done by previous methods, our key contribution is a multi-level matching conducted in the neural space. This matching scheme facilitates multi-scale neural transfer that allows the model to benefit more from semantically related Ref patches, and to gracefully degrade to SISR performance on the least relevant Ref inputs. We build a benchmark dataset for general RefSR research, which contains Ref images paired with LR inputs at varying levels of similarity. Both quantitative and qualitative evaluations demonstrate the superiority of our method over the state of the art.
Tasks Image Stylization, Image Super-Resolution, Super-Resolution
Published 2019-03-03
URL http://arxiv.org/abs/1903.00834v2
PDF http://arxiv.org/pdf/1903.00834v2.pdf
PWC https://paperswithcode.com/paper/image-super-resolution-by-neural-texture
Repo https://github.com/ZZUTK/SRNTT
Framework tf
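
The core idea above is matching LR and Ref content in neural feature space rather than pixel space. Here is a simplified sketch of such patch matching in PyTorch, correlating the LR feature map with unit-norm reference feature patches; the feature maps are random stand-ins and the paper's multi-level, multi-scale machinery is omitted.

```python
# Sketch of feature-space patch matching: correlate the (upsampled) LR feature map with
# unit-norm patches taken from the Ref feature map; the argmax over patches at each
# position is the cosine-similarity best match. Feature maps here are random stand-ins.
import torch
import torch.nn.functional as F

lr_feat = torch.randn(1, 64, 40, 40)    # features of the upsampled LR input (stand-in)
ref_feat = torch.randn(1, 64, 40, 40)   # features of the reference image (stand-in)

patches = F.unfold(ref_feat, kernel_size=3, padding=1)       # (1, 64*9, 1600): one column per Ref patch
patches = patches.permute(0, 2, 1).reshape(-1, 64, 3, 3)     # use each Ref patch as a conv kernel
patches = patches / (patches.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-8)

similarity = F.conv2d(lr_feat, patches, padding=1)           # (1, 1600, 40, 40)
best_match = similarity.argmax(dim=1)                        # index of the best Ref patch per LR position
```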

3D Object Recognition with Ensemble Learning — A Study of Point Cloud-Based Deep Learning Models

Title 3D Object Recognition with Ensemble Learning — A Study of Point Cloud-Based Deep Learning Models
Authors Daniel Koguciuk, Łukasz Chechliński, Tarek El-Gaaly
Abstract In this study, we present an analysis of model-based ensemble learning for 3D point-cloud object classification and detection. An ensemble of multiple model instances is known to outperform a single model instance, but ensemble learning for 3D point clouds has received little study. First, an ensemble of multiple model instances trained on the same part of the $\textit{ModelNet40}$ dataset was tested for seven deep learning, point cloud-based classification algorithms: $\textit{PointNet}$, $\textit{PointNet++}$, $\textit{SO-Net}$, $\textit{KCNet}$, $\textit{DeepSets}$, $\textit{DGCNN}$, and $\textit{PointCNN}$. Second, an ensemble of different architectures was tested. Results of our experiments show that the tested ensemble learning methods improve over the state of the art on the $\textit{ModelNet40}$ dataset, from $92.65%$ to $93.64%$ for the ensemble of single-architecture instances, $94.03%$ for two different architectures, and $94.15%$ for five different architectures. We show that an ensemble of two models with different architectures can be as effective as an ensemble of 10 models with the same architecture. Third, classic bagging (i.e., training multiple model instances on different subsets of the data) was tested, and the sources of ensemble accuracy growth were investigated for the best-performing architecture, $\textit{SO-Net}$. We also investigate ensemble learning for the $\textit{Frustum PointNet}$ approach in the task of 3D object detection, increasing the average precision of 3D box detection on the $\textit{KITTI}$ dataset from $63.1%$ to $66.5%$ using only three model instances. We measure the inference time of all 3D classification architectures on an $\textit{Nvidia Jetson TX2}$, a common embedded computer for mobile robots, to allude to the use of these models in real-life applications.
Tasks 3D Object Detection, 3D Object Recognition, Object Classification, Object Detection, Object Recognition
Published 2019-04-17
URL https://arxiv.org/abs/1904.08159v2
PDF https://arxiv.org/pdf/1904.08159v2.pdf
PWC https://paperswithcode.com/paper/3d-object-recognition-with-ensemble-learning
Repo https://github.com/dkoguciuk/ensemble_learning_for_point_clouds
Framework pytorch
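
The ensembles studied above combine several trained model instances at prediction time. A minimal sketch of soft voting, i.e. averaging per-class probabilities across models before taking the argmax; the model outputs here are random stand-ins.

```python
# Minimal sketch of model-instance ensembling by soft voting: average the class
# probabilities of several independently trained classifiers and take the argmax.
import numpy as np

num_models, num_samples, num_classes = 5, 8, 40                      # e.g. 40 ModelNet40 classes
rng = np.random.default_rng(0)
logits = rng.normal(size=(num_models, num_samples, num_classes))     # stand-in model outputs

probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)  # softmax per model
ensemble_pred = probs.mean(axis=0).argmax(axis=-1)                   # soft-voting prediction
single_pred = probs[0].argmax(axis=-1)                               # single-model prediction
```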

DetNAS: Backbone Search for Object Detection

Title DetNAS: Backbone Search for Object Detection
Authors Yukang Chen, Tong Yang, Xiangyu Zhang, Gaofeng Meng, Xinyu Xiao, Jian Sun
Abstract Object detectors are usually equipped with backbone networks designed for image classification. This can be sub-optimal because of the gap between the tasks of image classification and object detection. In this work, we present DetNAS, which uses Neural Architecture Search (NAS) to design better backbones for object detection. This is non-trivial because detection training typically needs ImageNet pre-training, while NAS systems require accuracies on the target detection task as supervisory signals. Based on the technique of the one-shot supernet, which contains all possible networks in the search space, we propose a framework for backbone search on object detection. We train the supernet under the typical detector training schedule: ImageNet pre-training and detection fine-tuning. Then, the architecture search is performed on the trained supernet, using the detection task as the guidance. This framework makes NAS on backbones very efficient. In experiments, we show the effectiveness of DetNAS on various detectors, for instance, the one-stage RetinaNet and the two-stage FPN. We empirically find that networks searched on object detection show consistent superiority compared to those searched on ImageNet classification. The resulting architecture achieves superior performance to hand-crafted networks on COCO with much lower FLOPs complexity.
Tasks Image Classification, Neural Architecture Search, Object Detection
Published 2019-03-26
URL https://arxiv.org/abs/1903.10979v4
PDF https://arxiv.org/pdf/1903.10979v4.pdf
PWC https://paperswithcode.com/paper/detnas-neural-architecture-search-on-object
Repo https://github.com/megvii-model/DetNAS
Framework pytorch
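
A one-shot supernet contains every candidate network as a path through shared weights; training samples one path per step, and the search afterwards scores candidate paths on the detection task. Below is a toy sketch of that single-path sampling, with plain convolutional blocks standing in for the real search space.

```python
# Toy sketch of single-path sampling in a one-shot supernet: each layer holds several
# candidate blocks with shared weights, and every training step picks one block per layer.
import random
import torch
import torch.nn as nn

class SuperLayer(nn.Module):
    def __init__(self, channels, candidates=4):
        super().__init__()
        # Stand-in candidate blocks: convolutions with growing kernel sizes.
        self.blocks = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=2 * i + 1, padding=i)
            for i in range(candidates)
        )

    def forward(self, x, choice):
        return self.blocks[choice](x)

layers = nn.ModuleList(SuperLayer(16) for _ in range(6))
path = [random.randrange(4) for _ in layers]       # a sampled architecture (one block per layer)
x = torch.randn(1, 16, 32, 32)
for layer, choice in zip(layers, path):
    x = layer(x, choice)
```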

Evolution of Novel Activation Functions in Neural Network Training with Applications to Classification of Exoplanets

Title Evolution of Novel Activation Functions in Neural Network Training with Applications to Classification of Exoplanets
Authors Snehanshu Saha, Nithin Nagaraj, Archana Mathur, Rahul Yedida
Abstract We present an analytical exploration of novel activation functions, arising from the integration of several ideas, leading to their implementation and subsequent use in the habitability classification of exoplanets. Neural networks, although a powerful engine in supervised methods, often require expensive tuning efforts for optimized performance. Habitability classes are hard to discriminate, especially when attributes used as hard markers of separation are removed from the data set. The solution is approached by investigating the analytical properties of the proposed activation functions. The theory of ordinary differential equations and fixed points is exploited to justify the “lack of tuning efforts” needed to achieve optimal performance compared to traditional activation functions. Additionally, the relationship between the proposed activation functions and the more popular ones is established through extensive analytical and empirical evidence. Finally, the activation functions have been implemented in a plain vanilla feed-forward neural network to classify exoplanets.
Tasks
Published 2019-06-01
URL https://arxiv.org/abs/1906.01975v1
PDF https://arxiv.org/pdf/1906.01975v1.pdf
PWC https://paperswithcode.com/paper/190601975
Repo https://github.com/yrahul3910/symnet
Framework none
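
As a hedged sketch of the setting, the snippet below runs a plain feed-forward classifier in which the non-linearity is a single swappable callable; the placeholder activation, layer sizes, and feature/class counts are illustrative and are not the activation functions proposed in the paper.

```python
# Hedged sketch: a plain feed-forward forward pass with a swappable activation function.
# The activation used here is an ordinary smooth placeholder, NOT the paper's proposed function.
import numpy as np

def activation(x):
    return x / (1.0 + np.exp(-x))                  # placeholder non-linearity; swap in the function under study

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(7, 32)), np.zeros(32)    # 7 input features (illustrative)
W2, b2 = rng.normal(size=(32, 3)), np.zeros(3)     # 3 output classes (illustrative)

def forward(x):
    h = activation(x @ W1 + b1)
    logits = h @ W2 + b2
    return np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)   # softmax class probabilities

print(forward(rng.normal(size=(4, 7))).shape)      # (4, 3)
```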

CoinNet: Deep Ancient Roman Republican Coin Classification via Feature Fusion and Attention

Title CoinNet: Deep Ancient Roman Republican Coin Classification via Feature Fusion and Attention
Authors Hafeez Anwar, Saeed Anwar, Sebastian Zambanini, Fatih Porikli
Abstract We perform classification of ancient Roman Republican coins by recognizing their reverse motifs, where various objects, faces, scenes, animals, and buildings are minted along with legends. Most of these coins are eroded due to their age and varying degrees of preservation, thereby affecting their informative attributes for visual recognition. Changes in the positions of principal symbols on the reverse motifs also cause huge variations among the coin types. Lastly, in-plane orientations, uneven illumination, and moderate background clutter further make the task of classification non-trivial and challenging. To this end, we present a novel network model, CoinNet, that employs compact bilinear pooling, residual groups, and feature attention layers. Furthermore, we gathered the largest and most diverse image dataset of Roman Republican coins, containing more than 18,000 images belonging to 228 different reverse motifs. On this dataset, our model achieves a classification accuracy of more than \textbf{98%} and outperforms conventional bag-of-visual-words based approaches and more recent state-of-the-art deep learning methods. We also provide a detailed ablation study of our network and its generalization capability.
Tasks
Published 2019-08-26
URL https://arxiv.org/abs/1908.09428v1
PDF https://arxiv.org/pdf/1908.09428v1.pdf
PWC https://paperswithcode.com/paper/coinnet-deep-ancient-roman-republican-coin
Repo https://github.com/saeed-anwar/CoinNet
Framework pytorch
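
Two named ingredients of the model, bilinear pooling of deep features and feature attention, can be illustrated roughly as below. This is a simplification: CoinNet uses compact bilinear pooling (a count-sketch approximation of the full outer product), whereas the sketch computes the full bilinear descriptor.

```python
# Simplified sketch of channel attention followed by (full, non-compact) bilinear pooling.
# The feature map is a random stand-in for a backbone's output.
import torch
import torch.nn as nn

feat = torch.randn(2, 256, 14, 14)                 # backbone feature map (stand-in)

# Channel attention: squeeze spatially, excite channels, re-weight the feature map.
attn = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 256), nn.Sigmoid())
weights = attn(feat.mean(dim=(2, 3)))              # (2, 256)
feat = feat * weights[:, :, None, None]

# Bilinear pooling: average outer product of channel features over spatial positions.
flat = feat.flatten(2)                                                  # (2, 256, 196)
bilinear = torch.bmm(flat, flat.transpose(1, 2)) / flat.shape[-1]       # (2, 256, 256)
descriptor = torch.sign(bilinear) * torch.sqrt(bilinear.abs() + 1e-8)   # signed square-root normalisation
descriptor = nn.functional.normalize(descriptor.flatten(1), dim=1)      # final image descriptor
```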

Multimodal Style Transfer via Graph Cuts

Title Multimodal Style Transfer via Graph Cuts
Authors Yulun Zhang, Chen Fang, Yilin Wang, Zhaowen Wang, Zhe Lin, Yun Fu, Jimei Yang
Abstract An assumption widely used in recent neural style transfer methods is that image styles can be described by global statistics of deep features, such as Gram or covariance matrices. Alternative approaches have represented styles by decomposing them into local pixel or neural patches. Despite the recent progress, most existing methods treat the semantic patterns of the style image uniformly, resulting in unpleasing results on complex styles. In this paper, we introduce a more flexible and general universal style transfer technique: multimodal style transfer (MST). MST explicitly considers the matching of semantic patterns in content and style images. Specifically, the style image features are clustered into sub-style components, which are matched with local content features under a graph cut formulation. A reconstruction network is trained to transfer each sub-style and render the final stylized result. We also generalize MST to improve some existing methods. Extensive experiments demonstrate the superior effectiveness, robustness, and flexibility of MST.
Tasks Style Transfer
Published 2019-04-09
URL https://arxiv.org/abs/1904.04443v6
PDF https://arxiv.org/pdf/1904.04443v6.pdf
PWC https://paperswithcode.com/paper/multimodal-style-transfer-via-graph-cuts
Repo https://github.com/yulunzhang/MST
Framework tf
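
The sub-style idea can be approximated by clustering style-image features and assigning each content position to its closest sub-style. The sketch below uses k-means with nearest-centroid assignment as a deliberate simplification of the paper's graph-cut matching; the feature matrices are random stand-ins.

```python
# Sketch of the sub-style idea: cluster style features into K components and assign each
# content-feature position to its closest sub-style (the paper does this matching via graph cut).
import numpy as np
from sklearn.cluster import KMeans

style_feat = np.random.randn(4096, 256)      # style features, one row per spatial position (stand-in)
content_feat = np.random.randn(4096, 256)    # content features (stand-in)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(style_feat)
labels = kmeans.predict(content_feat)         # sub-style index per content position
# Each content position would then be stylised with the statistics of its matched sub-style cluster.
```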

A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks

Title A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks
Authors Hadi Salman, Greg Yang, Huan Zhang, Cho-Jui Hsieh, Pengchuan Zhang
Abstract Verification of neural networks enables us to gauge their robustness against adversarial attacks. Verification algorithms fall into two categories: exact verifiers that run in exponential time and relaxed verifiers that are efficient but incomplete. In this paper, we unify all existing LP-relaxed verifiers, to the best of our knowledge, under a general convex relaxation framework. This framework works for neural networks with diverse architectures and nonlinearities and covers both primal and dual views of robustness verification. We further prove strong duality between the primal and dual problems under very mild conditions. Next, we perform large-scale experiments, amounting to more than 22 CPU-years, to obtain the exact solution to the convex-relaxed problem, which is optimal within our framework, for ReLU networks. We find that this exact solution does not significantly narrow the gap between PGD and existing relaxed verifiers for various networks trained normally or robustly on the MNIST and CIFAR datasets. Our results suggest there is an inherent barrier to tight verification for the large class of methods captured by our framework. We discuss possible causes of this barrier and potential future directions for bypassing it. Our code and trained models are available at http://github.com/Hadisalman/robust-verify-benchmark .
Tasks
Published 2019-02-23
URL https://arxiv.org/abs/1902.08722v5
PDF https://arxiv.org/pdf/1902.08722v5.pdf
PWC https://paperswithcode.com/paper/a-convex-relaxation-barrier-to-tight
Repo https://github.com/Hadisalman/robust-verify-benchmark
Framework pytorch
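
For intuition about the family of relaxed verifiers being unified above, the sketch below propagates interval bounds through one affine + ReLU layer, the loosest member of that family; LP relaxations tighten these bounds but, per the paper, still face the same barrier.

```python
# Sketch of interval bound propagation through one affine + ReLU layer, a simple relaxed verifier.
import numpy as np

rng = np.random.default_rng(0)
W, b = rng.normal(size=(8, 4)), rng.normal(size=8)

x0 = rng.normal(size=4)
eps = 0.1
lower, upper = x0 - eps, x0 + eps                  # l_inf ball around the input

# Affine layer: split W into positive and negative parts to propagate the box exactly.
W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
z_lower = W_pos @ lower + W_neg @ upper + b
z_upper = W_pos @ upper + W_neg @ lower + b

# ReLU is monotone, so the bounds pass through elementwise.
a_lower, a_upper = np.maximum(z_lower, 0), np.maximum(z_upper, 0)
```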

Uncertainty Guided Multi-Scale Residual Learning-using a Cycle Spinning CNN for Single Image De-Raining

Title Uncertainty Guided Multi-Scale Residual Learning-using a Cycle Spinning CNN for Single Image De-Raining
Authors Rajeev Yasarla, Vishal M. Patel
Abstract Single image de-raining is an extremely challenging problem since the rainy image may contain rain streaks that vary in size, direction, and density. Previous approaches have attempted to address this problem by leveraging some prior information to remove rain streaks from a single image. One of the major limitations of these approaches is that they do not consider the location information of rain drops in the image. The proposed Uncertainty guided Multi-scale Residual Learning (UMRL) network attempts to address this issue by learning the rain content at different scales and using it to estimate the final de-rained output. In addition, we introduce a technique which guides the network to learn its weights based on a confidence measure about the estimate. Furthermore, we introduce a new training and testing procedure based on the notion of cycle spinning to improve the final de-raining performance. Extensive experiments on synthetic and real datasets demonstrate that the proposed method achieves significant improvements over recent state-of-the-art methods. Code is available at: https://github.com/rajeevyasarla/UMRL--using-Cycle-Spinning
Tasks
Published 2019-06-12
URL https://arxiv.org/abs/1906.11129v1
PDF https://arxiv.org/pdf/1906.11129v1.pdf
PWC https://paperswithcode.com/paper/uncertainty-guided-multi-scale-residual-1
Repo https://github.com/rajeevyasarla/UMRL--using-Cycle-Spinning
Framework pytorch
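
Cycle spinning at test time means running the network on several circularly shifted copies of the input, undoing the shifts, and averaging. A minimal sketch, with an identity module standing in for the de-raining network (the real one is in the linked repository).

```python
# Sketch of cycle spinning at test time: de-rain several circularly shifted copies of the
# input, undo the shifts, and average the outputs.
import torch

def cycle_spin(model, image, shifts=((0, 0), (0, 16), (16, 0), (16, 16))):
    outputs = []
    for dy, dx in shifts:
        shifted = torch.roll(image, shifts=(dy, dx), dims=(-2, -1))
        restored = model(shifted)
        outputs.append(torch.roll(restored, shifts=(-dy, -dx), dims=(-2, -1)))
    return torch.stack(outputs).mean(dim=0)

model = torch.nn.Identity()                  # placeholder for the de-raining network
rainy = torch.rand(1, 3, 256, 256)
derained = cycle_spin(model, rainy)
```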

IMaT: Unsupervised Text Attribute Transfer via Iterative Matching and Translation

Title IMaT: Unsupervised Text Attribute Transfer via Iterative Matching and Translation
Authors Zhijing Jin, Di Jin, Jonas Mueller, Nicholas Matthews, Enrico Santus
Abstract Text attribute transfer aims to automatically rewrite sentences such that they possess certain linguistic attributes, while simultaneously preserving their semantic content. This task remains challenging due to a lack of supervised parallel data. Existing approaches try to explicitly disentangle content and attribute information, but this is difficult and often results in poor content-preservation and ungrammaticality. In contrast, we propose a simpler approach, Iterative Matching and Translation (IMaT), which: (1) constructs a pseudo-parallel corpus by aligning a subset of semantically similar sentences from the source and the target corpora; (2) applies a standard sequence-to-sequence model to learn the attribute transfer; (3) iteratively improves the learned transfer function by refining imperfections in the alignment. In sentiment modification and formality transfer tasks, our method outperforms complex state-of-the-art systems by a large margin. As an auxiliary contribution, we produce a publicly-available test set with human-generated transfer references.
Tasks Style Transfer, Text Attribute Transfer, Text Style Transfer
Published 2019-01-31
URL https://arxiv.org/abs/1901.11333v4
PDF https://arxiv.org/pdf/1901.11333v4.pdf
PWC https://paperswithcode.com/paper/unsupervised-text-style-transfer-via
Repo https://github.com/zhijing-jin/IMT
Framework none
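
Step (1) of IMaT builds a pseudo-parallel corpus by pairing each source sentence with its most similar sentence in the target corpus. A rough sketch of that matching step using cosine similarity of sentence embeddings; the embedding function, threshold, and example sentences are placeholders, not the paper's choices.

```python
# Sketch of IMaT step (1): pair each source sentence with its most similar target-corpus
# sentence and keep pairs above a similarity threshold, forming a pseudo-parallel corpus.
import numpy as np

def embed(sentences):
    # Placeholder encoder: random unit vectors. A real sentence encoder would go here.
    rng = np.random.default_rng(0)
    vecs = rng.normal(size=(len(sentences), 64))
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

source = ["the food was terrible", "service was slow"]                   # e.g. negative-sentiment corpus
target = ["the food was wonderful", "service was quick", "great place"]  # e.g. positive-sentiment corpus

sims = embed(source) @ embed(target).T        # cosine similarity between all sentence pairs
best = sims.argmax(axis=1)                    # most similar target sentence per source sentence
threshold = 0.3                               # placeholder similarity cut-off
pseudo_parallel = [
    (src, target[j])
    for i, (src, j) in enumerate(zip(source, best))
    if sims[i, j] > threshold
]
# The pseudo-parallel pairs train a seq2seq model whose outputs then refine the matching iteratively.
```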

Robust Angular Local Descriptor Learning

Title Robust Angular Local Descriptor Learning
Authors Yanwu Xu, Mingming Gong, Tongliang Liu, Kayhan Batmanghelich, Chaohui Wang
Abstract In recent years, learned local descriptors have outperformed handcrafted ones by a large margin, due to powerful deep convolutional neural network architectures such as L2-Net [1] and triplet-based metric learning [2]. However, there are two problems in current methods which hinder overall performance. First, the widely-used margin loss is sensitive to incorrect correspondences, which are prevalent in existing local descriptor learning datasets. Second, the L2 distance ignores the fact that the feature vectors have been normalized to unit norm. To tackle these two problems and further boost performance, we propose a robust angular loss which 1) uses cosine similarity instead of L2 distance to compare descriptors and 2) relies on a robust loss function that gives a smaller penalty to triplets with negative relative similarity. The resulting descriptor shows robustness on different datasets, reaching the state-of-the-art result on the Brown dataset, as well as demonstrating excellent generalization ability on the Hpatches dataset and a Wide Baseline Stereo dataset.
Tasks Metric Learning
Published 2019-01-21
URL http://arxiv.org/abs/1901.07076v2
PDF http://arxiv.org/pdf/1901.07076v2.pdf
PWC https://paperswithcode.com/paper/robust-angular-local-descriptor-learning
Repo https://github.com/xuyanwu/RAL-Net
Framework pytorch
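
The two fixes described above, cosine similarity instead of L2 distance and a soft penalty that limits the influence of badly-wrong triplets, can be sketched as follows; the exact loss shape and the descriptors are illustrative, not the authors' formulation.

```python
# Sketch of a triplet loss on unit-norm descriptors that (1) compares descriptors by cosine
# similarity and (2) uses a soft, bounded-gradient penalty rather than a hard margin hinge.
import torch
import torch.nn.functional as F

anchor = F.normalize(torch.randn(32, 128), dim=1)     # unit-norm descriptors (stand-ins)
positive = F.normalize(torch.randn(32, 128), dim=1)
negative = F.normalize(torch.randn(32, 128), dim=1)

pos_sim = (anchor * positive).sum(dim=1)              # cosine similarity to the match
neg_sim = (anchor * negative).sum(dim=1)              # cosine similarity to the non-match
relative = pos_sim - neg_sim

# Softplus gives a smooth penalty whose gradient saturates, so triplets that are already
# badly wrong (relative < 0, e.g. mislabelled correspondences) do not dominate training.
loss = F.softplus(-relative).mean()
```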

Multi-adversarial Faster-RCNN for Unrestricted Object Detection

Title Multi-adversarial Faster-RCNN for Unrestricted Object Detection
Authors Zhenwei He, Lei Zhang
Abstract Conventional object detection methods essentially assume that the training and testing data are collected from a restricted target domain with expensive labeling cost. To alleviate the problems of domain dependency and cumbersome labeling, this paper proposes to detect objects in an unrestricted environment by leveraging domain knowledge trained from an auxiliary source domain with sufficient labels. Specifically, we propose a multi-adversarial Faster-RCNN (MAF) framework for unrestricted object detection, which inherently addresses domain disparity minimization for domain adaptation in feature representation. The paper's merits are three-fold: 1) Observing that object detectors often become domain-incompatible when domain disparity arises from differences in image distribution, we propose a hierarchical domain feature alignment module in which multiple adversarial domain classifier submodules for layer-wise domain feature confusion are designed; 2) An information-invariant scale reduction module (SRM) for hierarchical feature map resizing is proposed to promote the training efficiency of adversarial domain adaptation; 3) To improve domain adaptability, the aggregated proposal features with detection results are fed into a proposed weighted gradient reversal layer (WGRL) for characterizing hard, confused domain samples. We evaluate MAF on unrestricted tasks, including Cityscapes, KITTI, Sim10k, etc., and the experiments show state-of-the-art performance over existing detectors.
Tasks Domain Adaptation, Object Detection
Published 2019-07-24
URL https://arxiv.org/abs/1907.10343v2
PDF https://arxiv.org/pdf/1907.10343v2.pdf
PWC https://paperswithcode.com/paper/multi-adversarial-faster-rcnn-for
Repo https://github.com/He-Zhenwei/MAF
Framework none
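
The adversarial domain classifiers in MAF are trained through gradient reversal. Below is a minimal sketch of a standard (unweighted) gradient reversal layer in PyTorch: identity on the forward pass, negated and scaled gradient on the backward pass. The weighted variant (WGRL) described above additionally re-weights samples, which is omitted here.

```python
# Sketch of a gradient reversal layer (GRL), the building block behind adversarial domain
# classifiers: identity forward, sign-flipped (and scaled) gradient backward.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Negating the gradient makes the feature extractor learn to confuse the domain classifier.
        return -ctx.lambd * grad_output, None

features = torch.randn(4, 256, requires_grad=True)
domain_logits = GradReverse.apply(features, 1.0).sum()
domain_logits.backward()
print(features.grad[0, :3])    # gradients arrive sign-flipped at the feature extractor
```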

Predicting Retrosynthetic Reaction using Self-Corrected Transformer Neural Networks

Title Predicting Retrosynthetic Reaction using Self-Corrected Transformer Neural Networks
Authors Shuangjia Zheng, Jiahua Rao, Zhongyue Zhang, Jun Xu, Yuedong Yang
Abstract Synthesis planning is the process of recursively decomposing target molecules into available precursors. Computer-aided retrosynthesis can potentially assist chemists in designing synthetic routes, but at present it is cumbersome and provides results of unsatisfactory quality. In this study, we develop a template-free self-corrected retrosynthesis predictor (SCROP) that performs retrosynthesis prediction using the Transformer neural network architecture. In this method, retrosynthesis planning is cast as a machine translation problem between the linear molecular notations of products and reactants. Coupled with a neural network-based syntax corrector, our method achieves an accuracy of 59.0% on a standard benchmark dataset, an improvement of more than 21% over other deep learning methods and more than 6% over template-based methods. More importantly, our method shows an accuracy 1.7 times higher than other state-of-the-art methods for compounds not appearing in the training set.
Tasks Machine Translation
Published 2019-07-02
URL https://arxiv.org/abs/1907.01356v2
PDF https://arxiv.org/pdf/1907.01356v2.pdf
PWC https://paperswithcode.com/paper/predicting-retrosynthetic-reaction-using-self
Repo https://github.com/sysu-yanglab/Self-Corrected-Retrosynthetic-Reaction-Predictor
Framework pytorch
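
Framing retrosynthesis as translation requires turning molecules into token sequences. A rough sketch of SMILES tokenization with a simplified regular expression; the pattern and example molecule are illustrative, and the actual tokenizer and Transformer training are in the linked repository.

```python
# Sketch of the "retrosynthesis as translation" framing: molecules are written as SMILES
# strings and tokenized, so a standard Transformer seq2seq model can map product tokens to
# reactant tokens. The regex below is a simplified tokenizer for illustration only.
import re

SMILES_TOKEN = re.compile(
    r"(\[[^\]]+\]|Br|Cl|Si|@@|[B-Zb-z0-9=#\-\+\(\)\\\/\.%@])"
)

product = "CC(=O)Oc1ccccc1C(=O)O"            # aspirin, as an example target molecule
src_tokens = SMILES_TOKEN.findall(product)
print(src_tokens)                             # ['C', 'C', '(', '=', 'O', ')', 'O', 'c', '1', ...]
# A Transformer trained on (product, reactants) token pairs then predicts reactant SMILES.
```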