April 3, 2020

Paper Group AWR 13

Bridging the Gap Between Spectral and Spatial Domains in Graph Neural Networks

Title Bridging the Gap Between Spectral and Spatial Domains in Graph Neural Networks
Authors Muhammet Balcilar, Guillaume Renton, Pierre Heroux, Benoit Gauzere, Sebastien Adam, Paul Honeine
Abstract This paper revisits Graph Convolutional Neural Networks by bridging the gap between the spectral and spatial design of graph convolutions. We theoretically demonstrate the equivalence of the graph convolution process regardless of whether it is designed in the spatial or the spectral domain. The resulting general framework allows us to carry out a spectral analysis of the most popular ConvGNNs, explaining their performance and showing their limits. Moreover, the proposed framework is used to design new convolutions in the spectral domain with a custom frequency profile while applying them in the spatial domain. We also propose a generalization of the depthwise separable convolution framework for graph convolutional networks, which decreases the total number of trainable parameters while preserving model capacity. To the best of our knowledge, such a framework has never been used in the GNN literature. Our proposals are evaluated on both transductive and inductive graph learning problems. The obtained results show the relevance of the proposed method and provide some of the first experimental evidence of the transferability of spectral filter coefficients from one graph to another. Our source code is publicly available at: https://github.com/balcilar/Spectral-Designed-Graph-Convolutions
Tasks Node Classification
Published 2020-03-26
URL https://arxiv.org/abs/2003.11702v1
PDF https://arxiv.org/pdf/2003.11702v1.pdf
PWC https://paperswithcode.com/paper/bridging-the-gap-between-spectral-and-spatial
Repo https://github.com/balcilar/Spectral-Designed-Graph-Convolutions
Framework tf
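
To make the spectral design concrete: a filter is specified by its frequency profile over the eigenvalues of the normalized graph Laplacian, then converted into an ordinary spatial convolution support. Below is a minimal NumPy sketch of that conversion, assuming a graph with no isolated nodes; it is illustrative, not the authors' implementation, and the low-pass profile is an arbitrary example.

```python
import numpy as np

def spectral_designed_support(A, freq_profile):
    """Build a graph-convolution support with a desired frequency profile.

    A: (n, n) symmetric adjacency matrix (no isolated nodes assumed).
    freq_profile: callable mapping Laplacian eigenvalues to filter gains.
    Returns a dense (n, n) support usable as a fixed spatial kernel.
    """
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    lam, U = np.linalg.eigh(L)                    # graph Fourier basis
    return U @ np.diag(freq_profile(lam)) @ U.T   # back to the spatial domain

# Example: a low-pass support (normalized-Laplacian eigenvalues lie in [0, 2]).
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
C_low = spectral_designed_support(A, lambda lam: 1.0 - lam / 2.0)
```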

SparseIDS: Learning Packet Sampling with Reinforcement Learning

Title SparseIDS: Learning Packet Sampling with Reinforcement Learning
Authors Maximilian Bachl, Fares Meghdouri, Joachim Fabini, Tanja Zseby
Abstract Recurrent Neural Networks (RNNs) have been shown to be valuable for constructing Intrusion Detection Systems (IDSs) for network data. They can determine whether a flow is malicious before it is over, making it possible to take action immediately. However, considering the large number of packets that have to be inspected, the question of computational efficiency arises. We show that by using a novel Reinforcement Learning (RL)-based approach called SparseIDS, we can reduce the number of consumed packets by more than three fourths while keeping classification accuracy high. Compared to various other sampling techniques, SparseIDS consistently achieves higher classification accuracy by learning to sample only relevant packets. A major novelty of our RL-based approach is that, unlike approaches proposed in the domain of Natural Language Processing that can only skip up to a predefined maximum number of samples, it can skip arbitrarily many packets in one step. This saves even more computational resources for long sequences. Inspecting SparseIDS’s behavior when choosing packets shows that it adopts different sampling strategies for different attack types and network flows. Finally, we build an automatic steering mechanism that can guide SparseIDS in deployment to achieve a desired level of sparsity.
Tasks Intrusion Detection
Published 2020-02-10
URL https://arxiv.org/abs/2002.03872v1
PDF https://arxiv.org/pdf/2002.03872v1.pdf
PWC https://paperswithcode.com/paper/sparseids-learning-packet-sampling-with
Repo https://github.com/CN-TU/adversarial-recurrent-ids
Framework none
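
The skipping mechanism the abstract describes can be made concrete with a short inference-loop sketch: after each inspected packet, a policy emits a non-negative skip count, so arbitrarily long jumps are possible in a single step. All names below (`policy`, `classifier_step`) are illustrative placeholders, not the authors' API.

```python
def sparse_sample(flow, policy, classifier_step):
    """Consume a packet sequence, letting a learned policy skip ahead.

    flow: list of per-packet feature vectors.
    policy: maps (packet, hidden) -> number of packets to skip next (0 = none);
            in SparseIDS this would be the trained RL agent.
    classifier_step: RNN cell updating the maliciousness estimate per packet.
    """
    i, hidden, consumed = 0, None, 0
    while i < len(flow):
        hidden = classifier_step(flow[i], hidden)  # inspect this packet
        consumed += 1
        skip = policy(flow[i], hidden)             # may be arbitrarily large
        i += 1 + skip                              # jump over `skip` packets
    return hidden, consumed

# Toy run: a fixed policy that always skips 3 packets reads 25% of the flow.
state, used = sparse_sample(list(range(100)),
                            policy=lambda p, h: 3,
                            classifier_step=lambda p, h: (h or 0) + 1)
assert used == 25
```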

Quantisation and Pruning for Neural Network Compression and Regularisation

Title Quantisation and Pruning for Neural Network Compression and Regularisation
Authors Kimessha Paupamah, Steven James, Richard Klein
Abstract Deep neural networks are typically too computationally expensive to run in real-time on consumer-grade hardware and low-powered devices. In this paper, we investigate reducing the computational and memory requirements of neural networks through network pruning and quantisation. We examine their efficacy on large networks like AlexNet compared to recent compact architectures: ShuffleNet and MobileNet. Our results show that pruning and quantisation compress these networks to less than half their original size and improve their efficiency, particularly on MobileNet, with a 7x speedup. We also demonstrate that pruning, in addition to reducing the number of parameters in a network, can help correct overfitting.
Tasks Network Pruning, Neural Network Compression
Published 2020-01-14
URL https://arxiv.org/abs/2001.04850v1
PDF https://arxiv.org/pdf/2001.04850v1.pdf
PWC https://paperswithcode.com/paper/quantisation-and-pruning-for-neural-network
Repo https://github.com/kpaupamah/compression-and-regularisation
Framework pytorch
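
Since the repository is PyTorch-based, the two techniques can be approximated with the standard pruning and dynamic-quantisation utilities. The sketch below is a plausible minimal reproduction, not the paper's exact pipeline; the 50% sparsity level is an arbitrary choice.

```python
import torch
import torch.nn.utils.prune as prune
from torchvision.models import mobilenet_v2

model = mobilenet_v2(num_classes=10)

# Magnitude pruning: zero out the 50% smallest-magnitude weights per conv layer.
for module in model.modules():
    if isinstance(module, torch.nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the mask into the weights

# Post-training dynamic quantisation of the linear layers to int8.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
```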

BlendMask: Top-Down Meets Bottom-Up for Instance Segmentation

Title BlendMask: Top-Down Meets Bottom-Up for Instance Segmentation
Authors Hao Chen, Kunyang Sun, Zhi Tian, Chunhua Shen, Yongming Huang, Youliang Yan
Abstract Instance segmentation is one of the fundamental vision tasks. Recently, fully convolutional instance segmentation methods have drawn much attention as they are often simpler and more efficient than two-stage approaches like Mask R-CNN. To date, almost all such approaches fall behind the two-stage Mask R-CNN method in mask precision when models have similar computational complexity, leaving great room for improvement. In this work, we achieve improved mask prediction by effectively combining instance-level information with lower-level, fine-grained semantic information. Our main contribution is a blender module which draws inspiration from both top-down and bottom-up instance segmentation approaches. The proposed BlendMask can effectively predict dense per-pixel position-sensitive instance features with very few channels, and learn attention maps for each instance with merely one convolution layer, thus being fast in inference. BlendMask can be easily incorporated into state-of-the-art one-stage detection frameworks and outperforms Mask R-CNN under the same training schedule while being 20% faster. A lightweight version of BlendMask achieves 34.2% mAP at 25 FPS evaluated on a single 1080Ti GPU card. Because of its simplicity and efficacy, we hope that BlendMask can serve as a simple yet strong baseline for a wide range of instance-wise prediction tasks.
Tasks Instance Segmentation, Semantic Segmentation
Published 2020-01-02
URL https://arxiv.org/abs/2001.00309v2
PDF https://arxiv.org/pdf/2001.00309v2.pdf
PWC https://paperswithcode.com/paper/blendmask-top-down-meets-bottom-up-for
Repo https://github.com/aim-uofa/AdelaiDet
Framework pytorch
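
The blender module's core computation is to crop the shared bottom-up bases for each detected box and merge them under that instance's attention map. Here is a hedged PyTorch sketch: the tensor shapes follow the paper's description, but the softmax normalization over bases and the bilinear resizing details are assumptions.

```python
import torch
import torch.nn.functional as F

def blend(bases, attns, boxes, crop_size=56):
    """Merge bottom-up bases with top-down per-instance attention maps.

    bases: (K, H, W) position-sensitive bases shared by all instances.
    attns: (N, K, R, R) low-resolution attention maps, one set per instance.
    boxes: list of N (y0, x0, y1, x1) integer boxes in base-map coordinates.
    """
    masks = []
    for attn, (y0, x0, y1, x1) in zip(attns, boxes):
        crop = bases[:, y0:y1, x0:x1]                      # RoI-crop the bases
        crop = F.interpolate(crop[None], size=(crop_size, crop_size),
                             mode="bilinear", align_corners=False)[0]
        a = F.interpolate(attn[None], size=(crop_size, crop_size),
                          mode="bilinear", align_corners=False)[0]
        a = a.softmax(dim=0)                               # normalize over the K bases
        masks.append((crop * a).sum(dim=0))                # weighted blend -> (56, 56)
    return torch.stack(masks) if masks else bases.new_empty(0, crop_size, crop_size)
```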

A Simple Framework for Contrastive Learning of Visual Representations

Title A Simple Framework for Contrastive Learning of Visual Representations
Authors Ting Chen, Simon Kornblith, Mohammad Norouzi, Geoffrey Hinton
Abstract This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels.
Tasks Self-Supervised Image Classification, Semi-Supervised Image Classification
Published 2020-02-13
URL https://arxiv.org/abs/2002.05709v2
PDF https://arxiv.org/pdf/2002.05709v2.pdf
PWC https://paperswithcode.com/paper/a-simple-framework-for-contrastive-learning
Repo https://github.com/spijkervet/simclr
Framework pytorch
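
The objective at the heart of SimCLR is the normalized temperature-scaled cross-entropy (NT-Xent) loss over pairs of augmented views. Below is a compact, standard formulation in PyTorch, not the authors' code; the temperature 0.5 is one of the settings explored in the paper.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss. z1, z2: (B, D) projections of two views of the same images."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2B, D) unit vectors
    sim = z @ z.t() / tau                         # temperature-scaled cosine similarity
    sim.fill_diagonal_(float("-inf"))             # exclude self-similarity
    B = z1.size(0)
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
    return F.cross_entropy(sim, targets)          # positive pair = the other view
```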

AraBERT: Transformer-based Model for Arabic Language Understanding

Title AraBERT: Transformer-based Model for Arabic Language Understanding
Authors Wissam Antoun, Fady Baly, Hazem Hajj
Abstract The Arabic language is a morphologically rich language with relatively few resources and a less explored syntax compared to English. Given these limitations, Arabic Natural Language Processing (NLP) tasks like Sentiment Analysis (SA), Named Entity Recognition (NER), and Question Answering (QA) have proven to be very challenging to tackle. Recently, with the surge of transformer-based models, language-specific BERT models have proven to be very efficient at language understanding, provided they are pre-trained on a very large corpus. Such models were able to set new standards and achieve state-of-the-art results for most NLP tasks. In this paper, we pre-trained BERT specifically for the Arabic language, in pursuit of the same success that BERT achieved for the English language. The performance of AraBERT is compared to multilingual BERT from Google and other state-of-the-art approaches. The results show that the newly developed AraBERT achieves state-of-the-art performance on most tested Arabic NLP tasks. The pretrained AraBERT models are publicly available at https://github.com/aub-mind/arabert, in the hope of encouraging research and applications in Arabic NLP.
Tasks Named Entity Recognition, Question Answering, Sentiment Analysis
Published 2020-02-28
URL https://arxiv.org/abs/2003.00104v2
PDF https://arxiv.org/pdf/2003.00104v2.pdf
PWC https://paperswithcode.com/paper/arabert-transformer-based-model-for-arabic
Repo https://github.com/aub-mind/araBERT
Framework tf
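
For usage, the released checkpoints load through the Hugging Face transformers library. The model identifier below is an assumption based on the authors' organization name; check the linked repo for the exact published checkpoint names.

```python
from transformers import AutoModel, AutoTokenizer

# "aubmindlab/bert-base-arabert" is an assumed hub id; see the repo for the
# authoritative list of released AraBERT checkpoints.
tokenizer = AutoTokenizer.from_pretrained("aubmindlab/bert-base-arabert")
model = AutoModel.from_pretrained("aubmindlab/bert-base-arabert")

inputs = tokenizer("...", return_tensors="pt")  # Arabic input text goes here
outputs = model(**inputs)                       # contextual embeddings per token
```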

PDDLGym: Gym Environments from PDDL Problems

Title PDDLGym: Gym Environments from PDDL Problems
Authors Tom Silver, Rohan Chitnis
Abstract We present PDDLGym, a framework that automatically constructs OpenAI Gym environments from PDDL domains and problems. Observations and actions in PDDLGym are relational, making the framework particularly well-suited for research in relational reinforcement learning and relational sequential decision-making. PDDLGym is also useful as a generic framework for rapidly building numerous, diverse benchmarks from a concise and familiar specification language. We discuss design decisions and implementation details, and also illustrate empirical variations between the 15 built-in environments in terms of planning and model-learning difficulty. We hope that PDDLGym will facilitate bridge-building between the reinforcement learning community (from which Gym emerged) and the AI planning community (which produced PDDL). We look forward to gathering feedback from all those interested and expanding the set of available environments and features accordingly. Code: https://github.com/tomsilver/pddlgym
Tasks Decision Making
Published 2020-02-15
URL https://arxiv.org/abs/2002.06432v1
PDF https://arxiv.org/pdf/2002.06432v1.pdf
PWC https://paperswithcode.com/paper/pddlgym-gym-environments-from-pddl-problems
Repo https://github.com/tomsilver/pddlgym
Framework none
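
Interaction follows the usual Gym loop, with the twist that observations and actions are relational, so valid actions are sampled conditioned on the current state. The environment name and call signatures below follow the project README at the time of writing; treat them as assumptions.

```python
import pddlgym

env = pddlgym.make("PDDLEnvSokoban-v0")     # one of the built-in PDDL environments
obs, debug_info = env.reset()
for _ in range(10):
    action = env.action_space.sample(obs)   # relational actions depend on the state
    obs, reward, done, debug_info = env.step(action)
    if done:
        break
```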

A Regression Tsetlin Machine with Integer Weighted Clauses for Compact Pattern Representation

Title A Regression Tsetlin Machine with Integer Weighted Clauses for Compact Pattern Representation
Authors K. Darshana Abeyrathna, Ole-Christoffer Granmo, Morten Goodwin
Abstract The Regression Tsetlin Machine (RTM) addresses the lack of interpretability impeding state-of-the-art nonlinear regression models. It does this by using conjunctive clauses in propositional logic to capture the underlying non-linear frequent patterns in the data. These, in turn, are combined into a continuous output through summation, akin to a linear regression function, but with non-linear components and unity weights. Although the RTM has solved non-linear regression problems with competitive accuracy, the resolution of the output is proportional to the number of clauses employed, which means that computation cost increases with resolution. To alleviate this problem, we here introduce integer weighted RTM clauses. Our integer weighted clause is a compact representation of multiple clauses that capture the same sub-pattern: N repeating clauses are turned into one clause with an integer weight N. This reduces computation cost N times and increases interpretability through a sparser representation. We further introduce a novel learning scheme that allows us to simultaneously learn both the clauses and their weights, taking advantage of so-called stochastic searching on the line. We evaluate the potential of the integer weighted RTM empirically using six artificial datasets. The results show that the integer weighted RTM achieves on-par or better accuracy using significantly fewer computational resources compared to regular RTMs. We further show that integer weights yield improved accuracy over real-valued ones.
Tasks
Published 2020-02-04
URL https://arxiv.org/abs/2002.01245v1
PDF https://arxiv.org/pdf/2002.01245v1.pdf
PWC https://paperswithcode.com/paper/a-regression-tsetlin-machine-with-integer
Repo https://github.com/cair/pyTsetlinMachineMT
Framework none
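
The weighting idea is simple to state in code: the RTM's continuous output is a sum of 0/1 clause outputs, and giving a clause an integer weight N is equivalent to repeating it N times. A self-contained toy sketch (the clauses here are hand-written, not learned):

```python
def rtm_output(clauses, weights, x):
    """Continuous RTM-style output: weighted sum of conjunctive clause outputs.

    clauses: predicates mapping a boolean input vector to 0/1.
    weights: positive integer clause weights; weight N stands in for N
             identical unweighted clauses.
    """
    return sum(w * c(x) for c, w in zip(clauses, weights))

# Two hand-written clauses over three boolean literals.
clauses = [lambda x: int(x[0] and not x[2]), lambda x: int(x[1])]
weights = [3, 1]                                   # first clause counts three times
assert rtm_output(clauses, weights, [1, 0, 0]) == 3
```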

Learning to Structure an Image with Few Colors

Title Learning to Structure an Image with Few Colors
Authors Yunzhong Hou, Liang Zheng, Stephen Gould
Abstract Color and structure are the two pillars that construct an image. Usually, the structure is well expressed through a rich spectrum of colors, allowing objects in an image to be recognized by neural networks. However, under extreme limitations of color space, the structure tends to vanish, and thus a neural network might fail to understand the image. Interested in exploring this interplay between color and structure, we study the scientific problem of identifying and preserving the most informative image structures while constraining the color space to just a few bits, such that the resulting image can still be recognized with high accuracy. To this end, we propose a color quantization network, ColorCNN, which learns to structure the images from the classification loss in an end-to-end manner. Given a color space size, ColorCNN quantizes colors in the original image by generating a color index map and an RGB color palette. Then, this color-quantized image is fed to a pre-trained task network to evaluate its performance. In our experiment, with only a 1-bit color space (i.e., two colors), the proposed network achieves 82.1% top-1 accuracy on the CIFAR10 dataset, outperforming traditional color quantization methods by a large margin. For applications, when encoded with PNG, the proposed color quantization shows superiority over other image compression methods in the extremely low bit-rate regime. The code is available at: https://github.com/hou-yz/color_distillation.
Tasks Image Compression, Quantization
Published 2020-03-17
URL https://arxiv.org/abs/2003.07848v1
PDF https://arxiv.org/pdf/2003.07848v1.pdf
PWC https://paperswithcode.com/paper/learning-to-structure-an-image-with-few
Repo https://github.com/hou-yz/color_distillation
Framework pytorch
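
At test time the quantization step itself is just a nearest-color assignment against a K-color palette, which is easy to sketch. Note that ColorCNN learns the index map with a CNN and a differentiable relaxation during training, which this hard-assignment sketch does not capture.

```python
import torch

def quantize_colors(img, palette):
    """Hard color quantization: assign each pixel its nearest palette color.

    img: (3, H, W) RGB image in [0, 1]; palette: (K, 3) colors,
    with K = 2 for a 1-bit color space.
    """
    pix = img.permute(1, 2, 0).reshape(-1, 3)        # (H*W, 3)
    idx = torch.cdist(pix, palette).argmin(dim=1)    # nearest color per pixel
    out = palette[idx].reshape(*img.shape[1:], 3).permute(2, 0, 1)
    return idx.reshape(img.shape[1:]), out           # index map + quantized image

# 1-bit example: a two-color (black/white) palette.
img = torch.rand(3, 32, 32)
palette = torch.tensor([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
index_map, quantized = quantize_colors(img, palette)
```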

DNN+NeuroSim V2.0: An End-to-End Benchmarking Framework for Compute-in-Memory Accelerators for On-chip Training

Title DNN+NeuroSim V2.0: An End-to-End Benchmarking Framework for Compute-in-Memory Accelerators for On-chip Training
Authors Xiaochen Peng, Shanshi Huang, Hongwu Jiang, Anni Lu, Shimeng Yu
Abstract DNN+NeuroSim is an integrated framework to benchmark compute-in-memory (CIM) accelerators for deep neural networks, with hierarchical design options from the device level, to the circuit level, and up to the algorithm level. A Python wrapper is developed to interface NeuroSim with a popular machine learning platform, PyTorch, to support flexible network structures. The framework provides automatic algorithm-to-hardware mapping, and evaluates chip-level area, energy efficiency and throughput for training or inference, as well as training/inference accuracy with hardware constraints. Our prior work (DNN+NeuroSim V1.1) was developed to estimate the impact of reliability in synaptic devices, and of analog-to-digital converter (ADC) quantization loss, on the accuracy and hardware performance of inference engines. In this work, we further investigated the impact of the non-ideal device properties of analog emerging non-volatile memory on on-chip training. By introducing the nonlinearity, asymmetry, device-to-device and cycle-to-cycle variation of weight update into the Python wrapper, and peripheral circuits for error/weight gradient computation into the NeuroSim core, we benchmarked CIM accelerators based on state-of-the-art SRAM and eNVM devices for VGG-8 on the CIFAR-10 dataset, revealing the crucial specs of synaptic devices for on-chip training. The proposed DNN+NeuroSim V2.0 framework is available on GitHub.
Tasks Quantization
Published 2020-03-13
URL https://arxiv.org/abs/2003.06471v1
PDF https://arxiv.org/pdf/2003.06471v1.pdf
PWC https://paperswithcode.com/paper/dnnneurosim-v20-an-end-to-end-benchmarking
Repo https://github.com/neurosim/DNN_NeuroSim_V2.0
Framework pytorch
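
The device non-idealities the wrapper injects (nonlinear, asymmetric conductance updates with cycle-to-cycle noise) can be illustrated with a toy weight-update model. This is a deliberate simplification, not NeuroSim's calibrated device equations, and every parameter name here is an assumption.

```python
import numpy as np

def analog_weight_update(w, direction, rng, step=0.1, alpha_p=1.0, alpha_d=1.2,
                         sigma_c2c=0.02, w_min=-1.0, w_max=1.0):
    """Toy non-ideal synaptic update: nonlinear, asymmetric, and noisy.

    direction: +1 potentiation, -1 depression. The step saturates as the
    weight nears its bound (nonlinearity), depression uses a different rate
    than potentiation (asymmetry), and Gaussian noise models cycle-to-cycle
    variation.
    """
    if direction > 0:
        dw = step * alpha_p * (w_max - w) / (w_max - w_min)
    else:
        dw = -step * alpha_d * (w - w_min) / (w_max - w_min)
    dw *= 1.0 + rng.normal(0.0, sigma_c2c)            # cycle-to-cycle variation
    return float(np.clip(w + dw, w_min, w_max))

rng = np.random.default_rng(0)
w = 0.0
for _ in range(50):                                   # repeated potentiation pulses
    w = analog_weight_update(w, +1, rng)              # asymptotically approaches w_max
```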

Automatic Perturbation Analysis on General Computational Graphs

Title Automatic Perturbation Analysis on General Computational Graphs
Authors Kaidi Xu, Zhouxing Shi, Huan Zhang, Minlie Huang, Kai-Wei Chang, Bhavya Kailkhura, Xue Lin, Cho-Jui Hsieh
Abstract Linear relaxation based perturbation analysis for neural networks, which aims to compute tight linear bounds of output neurons given a certain amount of input perturbation, has become a core component in robustness verification and certified defense. However, the majority of linear relaxation based methods only consider feed-forward ReLU networks. While several works extended them to relatively complicated networks, they often require tedious manual derivations and implementations that are error-prone. Their limited flexibility makes it difficult to handle more complicated tasks. In this paper, we take a significant leap by developing an automatic perturbation analysis algorithm that enables perturbation analysis on any neural network structure, whose computation can be done automatically in a manner similar to the back-propagation algorithm for gradient computation. The main idea is to express a network as a computational graph and then generalize linear relaxation algorithms such as CROWN as a graph algorithm. Our algorithm itself is differentiable and integrated with PyTorch, which allows us to optimize network parameters to reshape bounds into desired specifications, enabling automatic robustness verification and certified defense. In particular, we demonstrate a few tasks that are not easily achievable without an automatic framework. We first perform certified robust training and robustness verification for complex natural language models, which could be challenging with manual derivation and implementation. We further show that our algorithm can be used for tasks beyond certified defense - we create a neural network with a provably flat optimization landscape and study its generalization capability, and we show that this network can preserve accuracy better after aggressive weight quantization. Code is available at https://github.com/KaidiXu/auto_LiRPA.
Tasks Quantization
Published 2020-02-28
URL https://arxiv.org/abs/2002.12920v1
PDF https://arxiv.org/pdf/2002.12920v1.pdf
PWC https://paperswithcode.com/paper/automatic-perturbation-analysis-on-general
Repo https://github.com/KaidiXu/auto_LiRPA
Framework pytorch
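
In practice the library wraps an ordinary nn.Module and returns certified output bounds for a perturbation set around the input. The class and method names below follow the auto_LiRPA README; verify them against the current release.

```python
import torch
import torch.nn as nn
from auto_LiRPA import BoundedModule, BoundedTensor
from auto_LiRPA.perturbations import PerturbationLpNorm

net = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
x = torch.zeros(1, 2)

model = BoundedModule(net, x)                        # trace the computational graph
ptb = PerturbationLpNorm(norm=float("inf"), eps=0.1) # L_inf ball of radius 0.1
x_bounded = BoundedTensor(x, ptb)
lb, ub = model.compute_bounds(x=(x_bounded,), method="CROWN")
```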

Exploring the Connection Between Binary and Spiking Neural Networks

Title Exploring the Connection Between Binary and Spiking Neural Networks
Authors Sen Lu, Abhronil Sengupta
Abstract On-chip edge intelligence has necessitated the exploration of algorithmic techniques to reduce the compute requirements of current machine learning frameworks. This work aims to bridge the recent algorithmic progress in training Binary Neural Networks and Spiking Neural Networks - both of which are driven by the same motivation, yet synergies between the two have not been fully explored. We show that training Spiking Neural Networks in the extreme quantization regime results in near full-precision accuracies on large-scale datasets like CIFAR-100 and ImageNet. An important implication of this work is that Binary Spiking Neural Networks can be enabled by “In-Memory” hardware accelerators catered for Binary Neural Networks without suffering any accuracy degradation due to binarization. We utilize standard training techniques for non-spiking networks to generate our spiking networks via a conversion process, and also perform an extensive empirical analysis, exploring simple design-time and run-time optimization techniques for reducing the inference latency of spiking networks (both for binary and full-precision models) by an order of magnitude over prior work.
Tasks Quantization
Published 2020-02-24
URL https://arxiv.org/abs/2002.10064v1
PDF https://arxiv.org/pdf/2002.10064v1.pdf
PWC https://paperswithcode.com/paper/exploring-the-connection-between-binary-and
Repo https://github.com/NeuroCompLab-psu/SNN-Conversion
Framework pytorch
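
The conversion idea is that a ReLU activation maps onto the firing rate of an integrate-and-fire neuron whose threshold is balanced against the ANN's observed pre-activations. The sketch below is a toy illustration of that correspondence, not the authors' pipeline.

```python
import numpy as np

def if_layer(spikes_in, w, v_th, timesteps):
    """Integrate-and-fire layer driven by input spike trains.

    spikes_in: (T, n_in) binary input spikes; w: (n_in, n_out) weights from the
    trained (binary or full-precision) ANN; v_th: layer threshold, typically
    set from the maximum ANN pre-activation (threshold balancing).
    """
    v = np.zeros(w.shape[1])
    out = np.zeros((timesteps, w.shape[1]))
    for t in range(timesteps):
        v += spikes_in[t] @ w          # accumulate synaptic current
        out[t] = v >= v_th             # spike where the membrane crosses threshold
        v[out[t] > 0] -= v_th          # soft reset keeps the residual charge
    return out                         # mean over t approximates ReLU(x @ w) / v_th
```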

Deep Optimized Multiple Description Image Coding via Scalar Quantization Learning

Title Deep Optimized Multiple Description Image Coding via Scalar Quantization Learning
Authors Lijun Zhao, Huihui Bai, Anhong Wang, Yao Zhao
Abstract In this paper, we introduce a deep multiple description coding (MDC) framework optimized by minimizing multiple description (MD) compressive loss. First, an MD multi-scale-dilated encoder network generates multiple description tensors, which are discretized by scalar quantizers, while these quantized tensors are decompressed by MD cascaded-ResBlock decoder networks. To greatly reduce the total number of artificial neural network parameters, an auto-encoder network composed of these two types of networks is designed as a symmetrical parameter-sharing structure. Second, this auto-encoder network and a pair of scalar quantizers are simultaneously learned in an end-to-end self-supervised way. Third, considering the variation in the image spatial distribution, each scalar quantizer is accompanied by an importance-indicator map to generate MD tensors, rather than using direct quantization. Fourth, we introduce the multiple description structural similarity distance loss, which implicitly regularizes the diversified multiple description generations, to explicitly supervise multiple description diversified decoding in addition to the MD reconstruction loss. Finally, we demonstrate that our MDC framework performs better than several state-of-the-art MDC approaches regarding image coding efficiency when tested on several commonly available datasets.
Tasks Quantization
Published 2020-01-12
URL https://arxiv.org/abs/2001.03851v1
PDF https://arxiv.org/pdf/2001.03851v1.pdf
PWC https://paperswithcode.com/paper/deep-optimized-multiple-description-image
Repo https://github.com/mdcnn/Deep-Multiple-Description-Coding
Framework none
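
The third ingredient, importance-modulated scalar quantization, can be sketched directly: feature values are scaled by an importance map before being snapped to the nearest learned codeword. Names and the masking scheme below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def importance_weighted_quantize(z, importance, levels):
    """Scalar-quantize a feature tensor, modulated by an importance map.

    z: feature tensor; importance: same-shape map in [0, 1] indicating how
    much rate each location deserves; levels: 1-D array of quantizer codewords.
    """
    z_masked = z * importance                             # suppress unimportant regions
    idx = np.abs(z_masked[..., None] - levels).argmin(axis=-1)
    return levels[idx], idx                               # dequantized tensor + symbols

z = np.random.randn(4, 4)
importance = np.random.rand(4, 4)
levels = np.linspace(-1.0, 1.0, 8)                        # a 3-bit scalar quantizer
z_hat, symbols = importance_weighted_quantize(z, importance, levels)
```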

Holopix50k: A Large-Scale In-the-wild Stereo Image Dataset

Title Holopix50k: A Large-Scale In-the-wild Stereo Image Dataset
Authors Yiwen Hua, Puneet Kohli, Pritish Uplavikar, Anand Ravi, Saravana Gunaseelan, Jason Orozco, Edward Li
Abstract With the mass-market adoption of dual-camera mobile phones, leveraging stereo information in computer vision has become increasingly important. Current state-of-the-art methods utilize learning-based algorithms, where the amount and quality of training samples heavily influence results. Existing stereo image datasets are limited either in size or subject variety. Hence, algorithms trained on such datasets do not generalize well to scenarios encountered in mobile photography. We present Holopix50k, a novel in-the-wild stereo image dataset, comprising 49,368 image pairs contributed by users of the Holopix mobile social platform. In this work, we describe our data collection process and statistically compare our dataset to other popular stereo datasets. We experimentally show that using our dataset significantly improves results for tasks such as stereo super-resolution and self-supervised monocular depth estimation. Finally, we showcase practical applications of our dataset to motivate novel works and use cases. The Holopix50k dataset is available at http://github.com/leiainc/holopix50k
Tasks Depth Estimation, Monocular Depth Estimation, Super-Resolution
Published 2020-03-25
URL https://arxiv.org/abs/2003.11172v1
PDF https://arxiv.org/pdf/2003.11172v1.pdf
PWC https://paperswithcode.com/paper/holopix50k-a-large-scale-in-the-wild-stereo-1
Repo https://github.com/leiainc/holopix50k
Framework none
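
A typical first step with the dataset is iterating over left/right pairs. The directory layout and filename convention below are assumptions about what the repo's download script produces; adjust them to the actual layout.

```python
from pathlib import Path
from PIL import Image

root = Path("Holopix50k/train")                     # assumed layout: left/ and right/
for left_path in sorted((root / "left").glob("*.jpg"))[:3]:
    right_path = root / "right" / left_path.name.replace("_left", "_right")
    left, right = Image.open(left_path), Image.open(right_path)
    print(left_path.name, left.size, right.size)    # stereo pair ready for training
```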

Fractional Skipping: Towards Finer-Grained Dynamic CNN Inference

Title Fractional Skipping: Towards Finer-Grained Dynamic CNN Inference
Authors Jianghao Shen, Yonggan Fu, Yue Wang, Pengfei Xu, Zhangyang Wang, Yingyan Lin
Abstract While increasingly deep networks are still in general desired for achieving state-of-the-art performance, for many specific inputs a simpler network might already suffice. Existing works exploited this observation by learning to skip convolutional layers in an input-dependent manner. However, we argue their binary decision scheme, i.e., either fully executing or completely bypassing one layer for a specific input, can be enhanced by introducing finer-grained, “softer” decisions. We therefore propose a Dynamic Fractional Skipping (DFS) framework. The core idea of DFS is to hypothesize layer-wise quantization (to different bitwidths) as intermediate “soft” choices to be made between fully utilizing and skipping a layer. For each input, DFS dynamically assigns a bitwidth to both weights and activations of each layer, where fully executing and skipping could be viewed as two “extremes” (i.e., full bitwidth and zero bitwidth). In this way, DFS can “fractionally” exploit a layer’s expressive power during input-adaptive inference, enabling finer-grained accuracy-computational cost trade-offs. It presents a unified view to link input-adaptive layer skipping and input-adaptive hybrid quantization. Extensive experimental results demonstrate the superior tradeoff between computational cost and model expressive power (accuracy) achieved by DFS. More visualizations also indicate a smooth and consistent transition in the DFS behaviors, especially the learned choices between layer skipping and different quantizations when the total computational budgets vary, validating our hypothesis that layer quantization could be viewed as intermediate variants of layer skipping. Our source code and supplementary material are available at https://github.com/Torment123/DFS.
Tasks Quantization
Published 2020-01-03
URL https://arxiv.org/abs/2001.00705v1
PDF https://arxiv.org/pdf/2001.00705v1.pdf
PWC https://paperswithcode.com/paper/fractional-skipping-towards-finer-grained
Repo https://github.com/Torment123/DFS
Framework pytorch
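
The DFS decision space can be sketched with a per-layer gate that returns a bitwidth, where zero bits means skipping the layer and full precision means executing it outright; intermediate bitwidths are the "fractional" choices. Below is a hedged toy sketch with equal-width linear layers (so a skip is dimension-safe); the gate is a placeholder for the learned input-dependent controller, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def fake_quantize(x, bits):
    """Uniform symmetric fake-quantization of a tensor to `bits` bits."""
    scale = x.abs().max().clamp(min=1e-8) / (2 ** (bits - 1) - 1)
    return torch.round(x / scale) * scale

def dfs_forward(x, layers, gate):
    """Run layers at input-dependent bitwidths chosen by a gating function.

    gate(x, i) -> bitwidth in {0, 4, 8, 32}: 0 skips layer i entirely (one
    extreme), 32 runs it at full precision (the other), and intermediate
    bitwidths "fractionally" exploit the layer's expressive power.
    """
    for i, layer in enumerate(layers):
        bits = gate(x, i)
        if bits == 0:
            continue                                       # layer skipping
        w = layer.weight if bits >= 32 else fake_quantize(layer.weight, bits)
        x = F.relu(F.linear(x, w, layer.bias))
    return x

layers = torch.nn.ModuleList(torch.nn.Linear(8, 8) for _ in range(4))
out = dfs_forward(torch.randn(1, 8), layers, gate=lambda x, i: [32, 8, 0, 4][i])
```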