Paper Group AWR 98
A Symbolic Neural Network Representation and its Application to Understanding, Verifying, and Patching Networks
Title | A Symbolic Neural Network Representation and its Application to Understanding, Verifying, and Patching Networks |
Authors | Matthew Sotoudeh, Aditya V. Thakur |
Abstract | Analysis and manipulation of trained neural networks is a challenging and important problem. We propose a symbolic representation for piecewise-linear neural networks and discuss its efficient computation. With this representation, one can translate the problem of analyzing a complex neural network into that of analyzing a finite set of affine functions. We demonstrate the use of this representation for three applications. First, we apply the symbolic representation to computing weakest preconditions on network inputs, which we use to exactly visualize the advisories made by a network meant to operate an aircraft collision avoidance system. Second, we use the symbolic representation to compute strongest postconditions on the network outputs, which we use to perform bounded model checking on standard neural network controllers. Finally, we show how the symbolic representation can be combined with a new form of neural network to perform patching; i.e., correct user-specified behavior of the network. |
Tasks | |
Published | 2019-08-17 |
URL | https://arxiv.org/abs/1908.06223v2 |
https://arxiv.org/pdf/1908.06223v2.pdf | |
PWC | https://paperswithcode.com/paper/a-symbolic-neural-network-representation-and |
Repo | https://github.com/95616ARG/SyReNN |
Framework | pytorch |
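As a rough illustration of the representation the paper computes, the sketch below (assuming NumPy; not the authors' SyReNN implementation) shows the core fact it exploits: restricted to a line segment of inputs, a ReLU layer is exactly piecewise affine, with pieces delimited by the parameter values where some pre-activation crosses zero. Composing such partitions layer by layer yields the finite set of affine functions describing the whole network on that segment.

```python
# A minimal sketch of finding the affine pieces of ReLU(Wx + b) along a segment.
import numpy as np

def relu_layer_breakpoints(W, b, x0, x1):
    """Return t values in [0, 1] splitting x(t) = (1 - t) * x0 + t * x1
    into sub-segments on which ReLU(W x(t) + b) is affine."""
    z0, z1 = W @ x0 + b, W @ x1 + b          # pre-activations at both endpoints
    ts = {0.0, 1.0}
    for a, c in zip(z0, z1):                 # unit's pre-activation: a + t * (c - a)
        if (a < 0) != (c < 0):               # sign change => one breakpoint inside
            ts.add(a / (a - c))
    return sorted(ts)

# Toy layer: 3 hidden units over a 2-D input segment.
W = np.array([[1.0, -1.0], [0.5, 2.0], [-1.0, 0.3]])
b = np.array([-0.2, 0.1, 0.0])
print(relu_layer_breakpoints(W, b, np.array([-1.0, -1.0]), np.array([1.0, 1.0])))
```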
ABD-Net: Attentive but Diverse Person Re-Identification
Title | ABD-Net: Attentive but Diverse Person Re-Identification |
Authors | Tianlong Chen, Shaojin Ding, Jingyi Xie, Ye Yuan, Wuyang Chen, Yang Yang, Zhou Ren, Zhangyang Wang |
Abstract | Attention mechanisms have been shown to be effective for person re-identification (Re-ID). However, the learned attentive feature embeddings, which are often neither naturally diverse nor uncorrelated, can compromise retrieval performance based on the Euclidean distance. We advocate that enforcing diversity could greatly complement the power of attention. To this end, we propose an Attentive but Diverse Network (ABD-Net), which seamlessly integrates attention modules and diversity regularization throughout the entire network, to learn features that are representative, robust, and more discriminative. Specifically, we introduce a pair of complementary attention modules, focusing on channel aggregation and position awareness, respectively. Furthermore, a new efficient form of orthogonality constraint is derived to enforce orthogonality on both hidden activations and weights. Through careful ablation studies, we verify that the proposed attentive and diverse terms each contribute to the performance gains of ABD-Net. On three popular benchmarks, ABD-Net consistently outperforms existing state-of-the-art methods. |
Tasks | Person Re-Identification |
Published | 2019-08-03 |
URL | https://arxiv.org/abs/1908.01114v3 |
https://arxiv.org/pdf/1908.01114v3.pdf | |
PWC | https://paperswithcode.com/paper/abd-net-attentive-but-diverse-person-re |
Repo | https://github.com/mangye16/ReID-Survey |
Framework | pytorch |
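The diversity term can be pictured with the following hedged sketch. The paper derives a more efficient spectral-norm-based constraint; shown here is the simpler Frobenius-norm Gram penalty such constraints build on, applicable to a weight matrix or a (suitably reshaped) batch of hidden activations.

```python
# A hedged sketch of an orthogonality (diversity) regularizer, not ABD-Net's exact form.
import torch

def orthogonality_penalty(M: torch.Tensor) -> torch.Tensor:
    """M: (rows, features). Penalize correlation between rows of M."""
    M = torch.nn.functional.normalize(M, dim=1)      # unit-norm rows
    gram = M @ M.t()                                 # pairwise cosine similarities
    eye = torch.eye(M.shape[0], device=M.device)
    return ((gram - eye) ** 2).sum()

W = torch.randn(64, 512, requires_grad=True)         # e.g. a conv weight, flattened
loss = orthogonality_penalty(W)                      # added to the task loss, scaled
loss.backward()
```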
Unrestricted Adversarial Examples via Semantic Manipulation
Title | Unrestricted Adversarial Examples via Semantic Manipulation |
Authors | Anand Bhattad, Min Jin Chong, Kaizhao Liang, Bo Li, D. A. Forsyth |
Abstract | Machine learning models, especially deep neural networks (DNNs), have been shown to be vulnerable to adversarial examples, which are carefully crafted samples with small-magnitude perturbations. Such adversarial perturbations are usually restricted by bounding their $\mathcal{L}_p$ norm such that they are imperceptible, and thus many current defenses can exploit this property to reduce their adversarial impact. In this paper, we instead introduce “unrestricted” perturbations that manipulate semantically meaningful image-based visual descriptors - color and texture - in order to generate effective and photorealistic adversarial examples. We show that these semantically aware perturbations are effective against JPEG compression, feature squeezing, and adversarially trained models. We also show that the proposed methods can effectively be applied to both image classification and image captioning tasks on complex datasets such as ImageNet and MSCOCO. In addition, we conduct comprehensive user studies to show that our generated semantic adversarial examples are photorealistic to humans despite large-magnitude perturbations when compared to other attacks. |
Tasks | Colorization, Image Captioning, Image Classification |
Published | 2019-04-12 |
URL | https://arxiv.org/abs/1904.06347v2 |
https://arxiv.org/pdf/1904.06347v2.pdf | |
PWC | https://paperswithcode.com/paper/big-but-imperceptible-adversarial |
Repo | https://github.com/mchong6/Semantic_Adversarial_Cadv |
Framework | pytorch |
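For intuition only, here is a minimal PyTorch sketch of an "unrestricted" perturbation in the paper's spirit: instead of $\mathcal{L}_p$-bounded pixel noise, it optimizes a smooth global color shift, which can be large in norm yet remain photorealistic. The paper's actual attacks use colorization and texture-transfer models, which this sketch does not reproduce.

```python
# A hedged sketch of a semantic (color-space) attack; `model` is an assumed
# pretrained classifier taking (1, 3, H, W) inputs in [0, 1].
import torch

def color_shift_attack(model, x, label, steps=50, lr=0.01):
    """Learn a per-channel color bias that pushes model away from `label`."""
    shift = torch.zeros(1, 3, 1, 1, requires_grad=True)
    opt = torch.optim.Adam([shift], lr=lr)
    for _ in range(steps):
        logits = model((x + shift).clamp(0, 1))
        loss = -torch.nn.functional.cross_entropy(logits, label)  # maximize CE
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + shift.detach()).clamp(0, 1)
```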
Boundary and Entropy-driven Adversarial Learning for Fundus Image Segmentation
Title | Boundary and Entropy-driven Adversarial Learning for Fundus Image Segmentation |
Authors | Shujun Wang, Lequan Yu, Kang Li, Xin Yang, Chi-Wing Fu, Pheng-Ann Heng |
Abstract | Accurate segmentation of the optic disc (OD) and cup (OC) in fundus images from different datasets is critical for glaucoma disease screening. The cross-domain discrepancy (domain shift) hinders the generalization of deep neural networks to work on different domain datasets. In this work, we present an unsupervised domain adaptation framework, called Boundary and Entropy-driven Adversarial Learning (BEAL), to improve the OD and OC segmentation performance, especially on the ambiguous boundary regions. In particular, our proposed BEAL framework utilizes adversarial learning to encourage the boundary prediction and mask probability entropy map (uncertainty map) of the target domain to be similar to the source ones, generating more accurate boundaries and suppressing the high-uncertainty predictions of OD and OC segmentation. We evaluate the proposed BEAL framework on two public retinal fundus image datasets (Drishti-GS and RIM-ONE-r3), and the experiment results demonstrate that our method outperforms the state-of-the-art unsupervised domain adaptation methods. Codes will be available at https://github.com/EmmaW8/BEAL. |
Tasks | Domain Adaptation, Semantic Segmentation, Unsupervised Domain Adaptation |
Published | 2019-06-26 |
URL | https://arxiv.org/abs/1906.11143v2 |
https://arxiv.org/pdf/1906.11143v2.pdf | |
PWC | https://paperswithcode.com/paper/boundary-and-entropy-driven-adversarial |
Repo | https://github.com/EmmaW8/BEAL |
Framework | pytorch |
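The entropy-driven branch can be sketched as follows (a simplified reading of the abstract; the discriminator and segmenter definitions are assumed): the segmenter's softmax output is turned into a pixel-wise entropy map, and an adversarial loss pushes target-domain maps to resemble source-domain ones.

```python
# A simplified sketch of BEAL's entropy-driven adversarial alignment.
import torch
import torch.nn.functional as F

def entropy_map(logits):
    """logits: (B, C, H, W) -> (B, 1, H, W) pixel-wise prediction entropy."""
    p = torch.softmax(logits, dim=1)
    return -(p * torch.log(p + 1e-8)).sum(dim=1, keepdim=True)

def alignment_loss(discriminator, target_logits):
    """Segmenter-side loss: fool the discriminator into labeling the
    target-domain entropy map as coming from the source domain."""
    pred = discriminator(entropy_map(target_logits))   # assumed D: map -> logit
    return F.binary_cross_entropy_with_logits(pred, torch.ones_like(pred))
```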
Hierarchical Text Classification with Reinforced Label Assignment
Title | Hierarchical Text Classification with Reinforced Label Assignment |
Authors | Yuning Mao, Jingjing Tian, Jiawei Han, Xiang Ren |
Abstract | While existing hierarchical text classification (HTC) methods attempt to capture label hierarchies for model training, they either make local decisions regarding each label or completely ignore the hierarchy information during inference. To solve the mismatch between training and inference as well as modeling label dependencies in a more principled way, we formulate HTC as a Markov decision process and propose to learn a Label Assignment Policy via deep reinforcement learning to determine where to place an object and when to stop the assignment process. The proposed method, HiLAP, explores the hierarchy during both training and inference time in a consistent manner and makes inter-dependent decisions. As a general framework, HiLAP can incorporate different neural encoders as base models for end-to-end training. Experiments on five public datasets and four base models show that HiLAP yields an average improvement of 33.4% in Macro-F1 over flat classifiers and outperforms state-of-the-art HTC methods by a large margin. Data and code can be found at https://github.com/morningmoni/HiLAP. |
Tasks | Text Classification |
Published | 2019-08-27 |
URL | https://arxiv.org/abs/1908.10419v1 |
https://arxiv.org/pdf/1908.10419v1.pdf | |
PWC | https://paperswithcode.com/paper/hierarchical-text-classification-with |
Repo | https://github.com/morningmoni/HiLAP |
Framework | pytorch |
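A minimal sketch of the label-assignment view at inference time, with the learned scorer assumed as a stand-in for HiLAP's policy network: starting from the root, the policy repeatedly chooses a child of the current label or stops, so inference respects the hierarchy the same way training does. During RL training, actions would be sampled from a softmax over scores rather than taken greedily.

```python
# A hedged sketch of greedy label assignment over a hierarchy.
def assign_labels(doc_vec, children, score, root=0, max_steps=10):
    """children: dict mapping a label to its child labels; None = stop action.
    score(doc_vec, current, action) -> float is the assumed learned scorer."""
    current, path = root, []
    for _ in range(max_steps):
        actions = children.get(current, []) + [None]
        action = max(actions, key=lambda a: score(doc_vec, current, a))
        if action is None:                   # policy decides to stop here
            break
        path.append(action)                  # place the object at this label
        current = action
    return path
```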
Flight Controller Synthesis Via Deep Reinforcement Learning
Title | Flight Controller Synthesis Via Deep Reinforcement Learning |
Authors | William Koch |
Abstract | Traditional control methods are inadequate in many deployment settings involving control of Cyber-Physical Systems (CPS). In such settings, CPS controllers must operate and respond to unpredictable interactions, conditions, or failure modes. Dealing with such unpredictability requires the use of executive and cognitive control functions that allow for planning and reasoning. Motivated by the sport of drone racing, this dissertation addresses these concerns for state-of-the-art flight control by investigating the use of deep neural networks to bring essential elements of higher-level cognition to the construction of low-level flight controllers. This thesis reports on the development and release of an open source, full solution stack for building neuro-flight controllers. This stack consists of a methodology for constructing a multicopter digital twin used to synthesize a flight controller unique to a specific aircraft, a tuning framework for implementing training environments (GymFC), and a firmware for the world’s first neural-network-supported flight controller (Neuroflight). GymFC’s novel approach fuses the digital-twin paradigm with flight-control training to provide seamless transfer to hardware. Additionally, this thesis examines alternative reward functions as well as changes to the software environment to bridge the gap between the simulation and real-world deployment environments. Work summarized in this thesis demonstrates that reinforcement learning can be leveraged to train neural network controllers capable not only of maintaining stable flight but also of performing precision aerobatic maneuvers in real-world settings. As such, this work provides a foundation for developing the next generation of flight control systems. |
Tasks | |
Published | 2019-09-14 |
URL | https://arxiv.org/abs/1909.06493v1 |
https://arxiv.org/pdf/1909.06493v1.pdf | |
PWC | https://paperswithcode.com/paper/flight-controller-synthesis-via-deep |
Repo | https://github.com/wil3/neuroflight |
Framework | tf |
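As a hedged illustration of the reward-shaping question the thesis examines, the snippet below shows one common form of attitude-control reward in a GymFC-style environment: penalizing angular-velocity tracking error plus a small control-effort term. The thesis's actual reward functions differ.

```python
# An illustrative attitude-control reward, not the thesis's exact shaping.
import numpy as np

def attitude_reward(omega_target, omega_measured, action, effort_weight=0.1):
    """omega_*: roll/pitch/yaw rates (deg/s); action: normalized motor commands."""
    tracking_error = np.abs(omega_target - omega_measured).sum()
    control_effort = np.abs(action).sum()
    return -(tracking_error + effort_weight * control_effort)
```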
Counting with Focus for Free
Title | Counting with Focus for Free |
Authors | Zenglin Shi, Pascal Mettes, Cees G. M. Snoek |
Abstract | This paper aims to count arbitrary objects in images. The leading counting approaches start from point annotations per object from which they construct density maps. Then, their training objective transforms input images to density maps through deep convolutional networks. We posit that the point annotations serve more supervision purposes than just constructing density maps. We introduce ways to repurpose the points for free. First, we propose supervised focus from segmentation, where points are converted into binary maps. The binary maps are combined with a network branch and accompanying loss function to focus on areas of interest. Second, we propose supervised focus from global density, where the ratio of point annotations to image pixels is used in another branch to regularize the overall density estimation. To assist both the density estimation and the focus from segmentation, we also introduce an improved kernel size estimator for the point annotations. Experiments on six datasets show that all our contributions reduce the counting error, regardless of the base network, resulting in state-of-the-art accuracy using only a single network. Finally, we are the first to count on WIDER FACE, allowing us to show the benefits of our approach in handling varying object scales and crowding levels. Code is available at https://github.com/shizenglin/Counting-with-Focus-for-Free |
Tasks | Density Estimation |
Published | 2019-03-28 |
URL | https://arxiv.org/abs/1903.12206v2 |
https://arxiv.org/pdf/1903.12206v2.pdf | |
PWC | https://paperswithcode.com/paper/counting-with-focus-for-free |
Repo | https://github.com/shizenglin/Counting-with-Focus-for-Free |
Framework | tf |
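The point-annotation preprocessing the paper builds on can be sketched as follows (fixed-width Gaussians for simplicity; the paper proposes an improved kernel size estimator): the density map integrates to the object count, and the same points also yield a binary focus map used as a segmentation target.

```python
# A minimal sketch of density-map and focus-map construction from point annotations.
import numpy as np
from scipy.ndimage import gaussian_filter

def density_map(points, height, width, sigma=4.0):
    """points: iterable of (row, col) object locations; integral ~= object count."""
    dmap = np.zeros((height, width), dtype=np.float32)
    for r, c in points:
        dmap[int(r), int(c)] += 1.0
    return gaussian_filter(dmap, sigma)

def focus_map(points, height, width):
    """Binary map marking annotated locations, repurposed as a focus target."""
    fmap = np.zeros((height, width), dtype=np.float32)
    for r, c in points:
        fmap[int(r), int(c)] = 1.0
    return fmap
```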
Interpreting Adversarially Trained Convolutional Neural Networks
Title | Interpreting Adversarially Trained Convolutional Neural Networks |
Authors | Tianyuan Zhang, Zhanxing Zhu |
Abstract | We attempt to interpret how adversarially trained convolutional neural networks (AT-CNNs) recognize objects. We design systematic approaches to interpret AT-CNNs in both qualitative and quantitative ways and compare them with normally trained models. Surprisingly, we find that adversarial training alleviates the texture bias of standard CNNs when trained on object recognition tasks, and helps CNNs learn a more shape-biased representation. We validate our hypothesis from two aspects. First, we compare the salience maps of AT-CNNs and standard CNNs on clean images and images under different transformations. The comparison could visually show that the prediction of the two types of CNNs is sensitive to dramatically different types of features. Second, to achieve quantitative verification, we construct additional test datasets that destroy either textures or shapes, such as style-transferred version of clean data, saturated images and patch-shuffled ones, and then evaluate the classification accuracy of AT-CNNs and normal CNNs on these datasets. Our findings shed some light on why AT-CNNs are more robust than those normally trained ones and contribute to a better understanding of adversarial training over CNNs from an interpretation perspective. |
Tasks | Object Recognition |
Published | 2019-05-23 |
URL | https://arxiv.org/abs/1905.09797v1 |
https://arxiv.org/pdf/1905.09797v1.pdf | |
PWC | https://paperswithcode.com/paper/interpreting-adversarially-trained |
Repo | https://github.com/PKUAI26/AT-CNN |
Framework | pytorch |
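One of the texture/shape-destroying transformations used for the quantitative tests is easy to reproduce: a patch-shuffled image destroys global shape cues while roughly preserving local texture statistics, so a texture-biased model should remain comparatively accurate on it.

```python
# A minimal sketch of patch shuffling as a shape-destroying transformation.
import numpy as np

def patch_shuffle(img, k=4, seed=0):
    """img: (H, W, C) with H and W divisible by k; shuffle the k x k patch grid."""
    h, w = img.shape[0] // k, img.shape[1] // k
    patches = [img[i*h:(i+1)*h, j*w:(j+1)*w] for i in range(k) for j in range(k)]
    rng = np.random.default_rng(seed)
    rng.shuffle(patches)
    rows = [np.concatenate(patches[i*k:(i+1)*k], axis=1) for i in range(k)]
    return np.concatenate(rows, axis=0)
```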
Brno Mobile OCR Dataset
Title | Brno Mobile OCR Dataset |
Authors | Martin Kišš, Michal Hradiš, Oldřich Kodym |
Abstract | We introduce the Brno Mobile OCR Dataset (B-MOD) for document Optical Character Recognition from low-quality images captured by handheld mobile devices. While OCR of high-quality scanned documents is a mature field where many commercial tools are available, and large datasets of text in the wild exist, no existing datasets can be used to develop and test document OCR methods robust to non-uniform lighting, image blur, strong noise, built-in denoising, sharpening, compression and other artifacts present in many photographs from mobile devices. This dataset contains 2,113 unique pages from random scientific papers, which were photographed by multiple people using 23 different mobile devices. The resulting 19,728 photographs of various visual quality are accompanied by precise positions and text annotations of 500k text lines. We further provide an evaluation methodology, including an evaluation server and a test set with non-public annotations. We provide a state-of-the-art text recognition baseline built on convolutional and recurrent neural networks trained with Connectionist Temporal Classification loss. This baseline achieves 2%, 22% and 73% word error rates on the easy, medium and hard parts of the dataset, respectively, confirming that the dataset is challenging. The presented dataset will enable future development and evaluation of document analysis methods for low-quality images. It is primarily intended for line-level text recognition, and can be further used for line localization, layout analysis, image restoration and text binarization. |
Tasks | Denoising, Image Restoration, Optical Character Recognition |
Published | 2019-07-02 |
URL | https://arxiv.org/abs/1907.01307v1 |
https://arxiv.org/pdf/1907.01307v1.pdf | |
PWC | https://paperswithcode.com/paper/brno-mobile-ocr-dataset |
Repo | https://github.com/DCGM/B-MOD |
Framework | tf |
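A hedged sketch of the convolutional-recurrent CTC baseline described in the abstract; the layer sizes here are illustrative assumptions, not the authors' configuration.

```python
# A minimal CRNN for line-level text recognition, trained with CTC loss.
import torch
import torch.nn as nn

class TinyCRNN(nn.Module):
    def __init__(self, n_classes, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(                         # (B, 1, 32, W) -> (B, 64, 8, W/4)
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU())
        self.rnn = nn.LSTM(64 * 8, hidden, bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_classes + 1)   # +1 for the CTC blank

    def forward(self, x):                                  # x: (B, 1, 32, W)
        f = self.conv(x)                                   # (B, C, H', W')
        f = f.permute(0, 3, 1, 2).flatten(2)               # (B, W', C * H') per-column features
        out, _ = self.rnn(f)
        return self.head(out).log_softmax(-1)              # (B, W', n_classes + 1)

# Training step (targets and length tensors assumed):
# loss = nn.CTCLoss(blank=n_classes)(logits.permute(1, 0, 2), targets, in_lens, tgt_lens)
```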
Learning elementary structures for 3D shape generation and matching
Title | Learning elementary structures for 3D shape generation and matching |
Authors | Theo Deprelle, Thibault Groueix, Matthew Fisher, Vladimir G. Kim, Bryan C. Russell, Mathieu Aubry |
Abstract | We propose to represent shapes as the deformation and combination of learnable elementary 3D structures, which are primitives resulting from training over a collection of shapes. We demonstrate that the learned elementary 3D structures lead to clear improvements in 3D shape generation and matching. More precisely, we present two complementary approaches for learning elementary structures: (i) patch deformation learning and (ii) point translation learning. Both approaches can be extended to abstract structures of higher dimensions for improved results. We evaluate our method on two tasks: reconstructing ShapeNet objects and estimating dense correspondences between human scans (FAUST inter challenge). We show a 16% improvement over surface deformation approaches for shape reconstruction and outperform the FAUST inter challenge state of the art by 6%. |
Tasks | 3D Shape Generation |
Published | 2019-08-13 |
URL | https://arxiv.org/abs/1908.04725v2 |
https://arxiv.org/pdf/1908.04725v2.pdf | |
PWC | https://paperswithcode.com/paper/learning-elementary-structures-for-3d-shape |
Repo | https://github.com/TheoDEPRELLE/AtlasNetV2 |
Framework | pytorch |
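The "point translation learning" variant can be pictured with the hedged sketch below: a shared, learnable set of elementary 3D points is translated per shape by a small network conditioned on a shape encoding; training would minimize a reconstruction loss such as Chamfer distance (encoder and loss omitted here).

```python
# A hedged sketch of point translation learning over a shared elementary structure.
import torch
import torch.nn as nn

class PointTranslation(nn.Module):
    def __init__(self, n_points=1024, latent=256):
        super().__init__()
        self.elementary = nn.Parameter(torch.rand(n_points, 3))  # learned structure
        self.translate = nn.Sequential(
            nn.Linear(3 + latent, 256), nn.ReLU(), nn.Linear(256, 3))

    def forward(self, shape_code):                 # shape_code: (B, latent)
        B = shape_code.shape[0]
        n = self.elementary.shape[0]
        pts = self.elementary.unsqueeze(0).expand(B, n, 3)
        code = shape_code.unsqueeze(1).expand(B, n, shape_code.shape[1])
        return pts + self.translate(torch.cat([pts, code], dim=-1))
```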
Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming
Title | Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming |
Authors | Claudio Michaelis, Benjamin Mitzkus, Robert Geirhos, Evgenia Rusak, Oliver Bringmann, Alexander S. Ecker, Matthias Bethge, Wieland Brendel |
Abstract | The ability to detect objects regardless of image distortions or weather conditions is crucial for real-world applications of deep learning like autonomous driving. We here provide an easy-to-use benchmark to assess how object detection models perform when image quality degrades. The three resulting benchmark datasets, termed Pascal-C, Coco-C and Cityscapes-C, contain a large variety of image corruptions. We show that a range of standard object detection models suffer a severe performance loss on corrupted images (down to 30–60% of the original performance). However, a simple data augmentation trick—stylizing the training images—leads to a substantial increase in robustness across corruption type, severity and dataset. We envision our comprehensive benchmark to track future progress towards building robust object detection models. Benchmark, code and data are publicly available. |
Tasks | Autonomous Driving, Data Augmentation, Instance Segmentation, Object Detection, Robust Object Detection |
Published | 2019-07-17 |
URL | https://arxiv.org/abs/1907.07484v2 |
https://arxiv.org/pdf/1907.07484v2.pdf | |
PWC | https://paperswithcode.com/paper/benchmarking-robustness-in-object-detection |
Repo | https://github.com/bethgelab/robust-detection-benchmark |
Framework | none |
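The benchmark's headline numbers reduce to a simple aggregation, sketched below: mean performance under corruption (mPC) averages a model's score over corruption types and severities, and relative performance under corruption (rPC) reports it as a fraction of the clean score. The example scores are made up.

```python
# A minimal sketch of mPC and rPC aggregation over corruption results.
def mpc(scores):
    """scores: dict corruption_name -> list of scores at severities 1..5."""
    per_corruption = [sum(s) / len(s) for s in scores.values()]
    return sum(per_corruption) / len(per_corruption)

def rpc(scores, clean_score):
    return mpc(scores) / clean_score

scores = {"gaussian_noise": [0.31, 0.25, 0.18, 0.11, 0.06],
          "fog": [0.42, 0.38, 0.33, 0.27, 0.20]}
print(mpc(scores), rpc(scores, clean_score=0.48))
```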
A Multi Hidden Recurrent Neural Network with a Modified Grey Wolf Optimizer
Title | A Multi Hidden Recurrent Neural Network with a Modified Grey Wolf Optimizer |
Authors | Tarik A. Rashid, Dosti K. Abbas, Yalin K. Turel |
Abstract | Identifying university students’ weaknesses results in better learning and can function as an early warning system to enable students to improve. However, the satisfaction level of existing systems is not promising. New and dynamic hybrid systems are needed to imitate this mechanism. A hybrid system (a modified Recurrent Neural Network with an adapted Grey Wolf Optimizer) is used to forecast students’ outcomes. This proposed system would improve instruction by the faculty and enhance the students’ learning experiences. The results show that a modified recurrent neural network with an adapted Grey Wolf Optimizer has the best accuracy when compared with other models. |
Tasks | |
Published | 2019-03-27 |
URL | http://arxiv.org/abs/1903.11712v1 |
http://arxiv.org/pdf/1903.11712v1.pdf | |
PWC | https://paperswithcode.com/paper/a-multi-hidden-recurrent-neural-network-with |
Repo | https://github.com/Tarik4Rashid4/student-performance |
Framework | none |
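For reference, the standard Grey Wolf Optimizer position update that the paper adapts looks like the following sketch: every wolf moves toward the average of three positions dictated by the current best solutions (alpha, beta, delta), with the coefficient `a` decaying from 2 to 0 over the run.

```python
# A minimal sketch of one standard GWO iteration (minimization).
import numpy as np

def gwo_step(wolves, fitness, a, rng):
    """wolves: (n, d) positions; fitness: callable on one position; a: in [0, 2]."""
    order = np.argsort([fitness(w) for w in wolves])
    leaders = wolves[order[:3]]                         # alpha, beta, delta
    new = np.empty_like(wolves)
    for i, x in enumerate(wolves):
        guided = []
        for leader in leaders:
            A = a * (2 * rng.random(x.shape) - 1)       # exploration coefficient
            C = 2 * rng.random(x.shape)
            D = np.abs(C * leader - x)                  # distance to the leader
            guided.append(leader - A * D)
        new[i] = np.mean(guided, axis=0)
    return new
```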
Improved Knowledge Distillation via Teacher Assistant
Title | Improved Knowledge Distillation via Teacher Assistant |
Authors | Seyed-Iman Mirzadeh, Mehrdad Farajtabar, Ang Li, Nir Levine, Akihiro Matsukawa, Hassan Ghasemzadeh |
Abstract | Despite the fact that deep neural networks are powerful models and achieve appealing results on many tasks, they are too large to be deployed on edge devices like smartphones or embedded sensor nodes. There have been efforts to compress these networks, and a popular method is knowledge distillation, where a large (teacher) pre-trained network is used to train a smaller (student) network. However, in this paper, we show that the student network's performance degrades when the gap between student and teacher is large. Given a fixed student network, one cannot employ an arbitrarily large teacher; in other words, a teacher can effectively transfer its knowledge only to students of sufficiently similar size. To alleviate this shortcoming, we introduce knowledge distillation via a teacher assistant, which employs an intermediate-sized network to bridge the gap between the student and the teacher. Moreover, we study the effect of teacher assistant size and extend the framework to multi-step distillation. Theoretical analysis and extensive experiments on CIFAR-10, CIFAR-100 and ImageNet datasets and on CNN and ResNet architectures substantiate the effectiveness of our proposed approach. |
Tasks | |
Published | 2019-02-09 |
URL | https://arxiv.org/abs/1902.03393v2 |
https://arxiv.org/pdf/1902.03393v2.pdf | |
PWC | https://paperswithcode.com/paper/improved-knowledge-distillation-via-teacher |
Repo | https://github.com/unconst/MACH |
Framework | tf |
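Each stage of the teacher, teacher-assistant, student chain trains the next-smaller network with the usual temperature-scaled distillation loss; a minimal sketch:

```python
# A minimal sketch of the standard KD objective used at every distillation step.
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T      # T^2 restores gradient scale
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```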
Eigenvalue Normalized Recurrent Neural Networks for Short Term Memory
Title | Eigenvalue Normalized Recurrent Neural Networks for Short Term Memory |
Authors | Kyle Helfrich, Qiang Ye |
Abstract | Several variants of recurrent neural networks (RNNs) with orthogonal or unitary recurrent matrices have recently been developed to mitigate the vanishing/exploding gradient problem and to model long-term dependencies of sequences. However, with the eigenvalues of the recurrent matrix on the unit circle, the recurrent state retains all input information which may unnecessarily consume model capacity. In this paper, we address this issue by proposing an architecture that expands upon an orthogonal/unitary RNN with a state that is generated by a recurrent matrix with eigenvalues in the unit disc. Any input to this state dissipates in time and is replaced with new inputs, simulating short-term memory. A gradient descent algorithm is derived for learning such a recurrent matrix. The resulting method, called the Eigenvalue Normalized RNN (ENRNN), is shown to be highly competitive in several experiments. |
Tasks | |
Published | 2019-11-18 |
URL | https://arxiv.org/abs/1911.07964v1 |
https://arxiv.org/pdf/1911.07964v1.pdf | |
PWC | https://paperswithcode.com/paper/eigenvalue-normalized-recurrent-neural |
Repo | https://github.com/KHelfrich1/ENRNN |
Framework | tf |
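The core idea can be approximated by a crude projection, sketched below; the paper instead derives a dedicated gradient descent algorithm for learning such matrices. Keeping the recurrent matrix's spectral radius inside the unit disc makes old inputs dissipate over time.

```python
# An illustrative projection keeping eigenvalues inside the unit disc.
import numpy as np

def normalize_spectral_radius(W, target=0.95):
    rho = np.max(np.abs(np.linalg.eigvals(W)))      # spectral radius
    return W if rho <= target else W * (target / rho)
```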
Image Deformation Meta-Networks for One-Shot Learning
Title | Image Deformation Meta-Networks for One-Shot Learning |
Authors | Zitian Chen, Yanwei Fu, Yu-Xiong Wang, Lin Ma, Wei Liu, Martial Hebert |
Abstract | Humans can robustly learn novel visual concepts even when images undergo various deformations and lose certain information. Mimicking the same behavior and synthesizing deformed instances of new concepts may help visual recognition systems perform better one-shot learning, i.e., learning concepts from one or few examples. Our key insight is that, while the deformed images may not be visually realistic, they still maintain critical semantic information and contribute significantly to formulating classifier decision boundaries. Inspired by the recent progress of meta-learning, we combine a meta-learner with an image deformation sub-network that produces additional training examples, and optimize both models in an end-to-end manner. The deformation sub-network learns to deform images by fusing a pair of images — a probe image that keeps the visual content and a gallery image that diversifies the deformations. We demonstrate results on the widely used one-shot learning benchmarks (miniImageNet and ImageNet 1K Challenge datasets), which significantly outperform state-of-the-art approaches. Code is available at https://github.com/tankche1/IDeMe-Net. |
Tasks | Meta-Learning, One-Shot Learning |
Published | 2019-05-28 |
URL | https://arxiv.org/abs/1905.11641v2 |
https://arxiv.org/pdf/1905.11641v2.pdf | |
PWC | https://paperswithcode.com/paper/190511641 |
Repo | https://github.com/tankche1/IDeMe-Net |
Framework | pytorch |
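The deformation sub-network's fusion step can be sketched as a per-patch linear blend of the probe image (content) and the gallery image (diversity); in the paper the blending weights are predicted by a learned network, while here they are plain function inputs.

```python
# A hedged sketch of per-patch probe/gallery fusion on a k x k grid.
import torch

def fuse_patches(probe, gallery, weights, k=3):
    """probe, gallery: (B, C, H, W); weights: (B, k*k) in [0, 1]."""
    B, C, H, W = probe.shape
    out = probe.clone()
    ph, pw = H // k, W // k
    for i in range(k):
        for j in range(k):
            w = weights[:, i * k + j].view(B, 1, 1, 1)
            rs, cs = slice(i * ph, (i + 1) * ph), slice(j * pw, (j + 1) * pw)
            out[:, :, rs, cs] = w * probe[:, :, rs, cs] + (1 - w) * gallery[:, :, rs, cs]
    return out
```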