February 1, 2020

3340 words 16 mins read

Paper Group AWR 287

Exploiting the Redundancy in Convolutional Filters for Parameter Reduction. Animal Detection in Man-made Environments. Evaluating Commonsense in Pre-trained Language Models. Collaborative Translational Metric Learning. A Bi-Directional Co-Design Approach to Enable Deep Learning on IoT Devices. Compressing Gradient Optimizers via Count-Sketches. Ass …

Exploiting the Redundancy in Convolutional Filters for Parameter Reduction

Title Exploiting the Redundancy in Convolutional Filters for Parameter Reduction
Authors Kumara Kahatapitiya, Ranga Rodrigo
Abstract Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance in many computer vision tasks over the years. However, such performance comes at the cost of computation- and memory-intensive network designs, suggesting potential improvements in efficiency. Convolutional layers of CNNs partly account for this inefficiency, as they are known to learn redundant features. In this work, we exploit this redundancy, observing it as the correlation between convolutional filters of a layer, and propose an alternative to reproduce it efficiently. The proposed ‘LinearConv’ layer learns a set of orthogonal filters and a set of coefficients that linearly combines them to introduce a controlled redundancy. We introduce a correlation-based regularization loss to achieve such flexibility over redundancy, and in turn control the number of parameters. The layer is designed as a plug-and-play replacement for a conventional convolutional layer, without any additional changes required in the network architecture or the hyperparameter settings. Our experiments verify that LinearConv models achieve performance on par with their counterparts, with almost a 50% reduction in parameters on average, while having the same computational requirements at inference.
Tasks
Published 2019-07-26
URL https://arxiv.org/abs/1907.11432v2
PDF https://arxiv.org/pdf/1907.11432v2.pdf
PWC https://paperswithcode.com/paper/linearconv-regenerating-redundancy-in
Repo https://github.com/kumarak93/LinearConv
Framework pytorch
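
A minimal PyTorch sketch of the idea described in the abstract, assuming a layer that stores a small set of basis filters plus a coefficient matrix that combines them into the full filter bank; the class and the regularizer below are illustrative, not the authors' implementation (see the linked repo for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearConv2d(nn.Module):
    """Sketch of a LinearConv-style layer: `rank` basis filters are
    linearly combined into `out_channels` effective filters."""
    def __init__(self, in_channels, out_channels, kernel_size, rank,
                 stride=1, padding=0):
        super().__init__()
        self.basis = nn.Parameter(
            0.01 * torch.randn(rank, in_channels, kernel_size, kernel_size))
        self.coeff = nn.Parameter(0.01 * torch.randn(out_channels, rank))
        self.stride, self.padding = stride, padding

    def forward(self, x):
        # Combine the basis into the full filter bank, then convolve once.
        # At inference this weight can be precomputed, so per-example
        # compute matches a conventional convolution.
        w = torch.einsum('or,rikj->oikj', self.coeff, self.basis)
        return F.conv2d(x, w, stride=self.stride, padding=self.padding)

    def correlation_penalty(self):
        # Correlation-based regularizer pushing the basis filters toward
        # orthogonality, which controls the layer's redundancy.
        f = F.normalize(self.basis.flatten(1), dim=1)
        gram = f @ f.t()
        off_diag = gram - torch.eye(gram.size(0), device=gram.device)
        return off_diag.pow(2).sum()
```

Adding `lam * sum(m.correlation_penalty() for m in linearconv_layers)` to the task loss is how one would wire the regularizer into training.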

Animal Detection in Man-made Environments

Title Animal Detection in Man-made Environments
Authors Abhineet Singh, Marcin Pietrasik, Gabriell Natha, Nehla Ghouaiel, Ken Brizel, Nilanjan Ray
Abstract Automatic detection of animals that have strayed into human-inhabited areas has important security and road safety applications. This paper attempts to solve this problem using deep learning techniques from a variety of computer vision fields including object detection, tracking, segmentation and edge detection. Several interesting insights into transfer learning are elicited while adapting models trained on benchmark datasets for real-world deployment. Empirical evidence is presented to demonstrate the inability of detectors to generalize from training images of animals in their natural habitats to deployment scenarios of man-made environments. A solution is also proposed using semi-automated synthetic data generation for domain-specific training. Code and data used in the experiments are made available to facilitate further work in this domain.
Tasks Edge Detection, Object Detection, Synthetic Data Generation, Transfer Learning
Published 2019-10-24
URL https://arxiv.org/abs/1910.11443v2
PDF https://arxiv.org/pdf/1910.11443v2.pdf
PWC https://paperswithcode.com/paper/animal-detection-in-man-made-environments
Repo https://github.com/abhineet123/animal_detection
Framework tf
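
The abstract's semi-automated synthetic data generation can be illustrated with a toy compositing step: paste an alpha-masked animal crop onto a target-domain (man-made environment) background and keep the bounding box as a detection label. This is a hypothetical sketch, not the paper's actual pipeline.

```python
import numpy as np

def composite(background, animal_rgba, x, y):
    """Alpha-blend an RGBA animal crop onto `background` at (x, y); assumes
    the crop fits within the background. Returns the image and a box label."""
    h, w = animal_rgba.shape[:2]
    patch = background[y:y + h, x:x + w].astype(np.float32)
    rgb = animal_rgba[..., :3].astype(np.float32)
    alpha = animal_rgba[..., 3:4].astype(np.float32) / 255.0
    blended = alpha * rgb + (1.0 - alpha) * patch
    background[y:y + h, x:x + w] = blended.astype(np.uint8)
    return background, (x, y, x + w, y + h)  # (x1, y1, x2, y2) label
```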

Evaluating Commonsense in Pre-trained Language Models

Title Evaluating Commonsense in Pre-trained Language Models
Authors Xuhui Zhou, Yue Zhang, Leyang Cui, Dandan Huang
Abstract Contextualized representations trained over large raw text data have given remarkable improvements for NLP tasks including question answering and reading comprehension. Prior work has shown that syntactic, semantic and word sense knowledge are contained in such representations, which explains why they benefit such tasks. However, relatively little work has investigated the commonsense knowledge contained in contextualized representations, which is crucial for human question answering and reading comprehension. We study the commonsense ability of GPT, BERT, XLNet, and RoBERTa by testing them on seven challenging benchmarks, finding that language modeling and its variants are effective objectives for promoting models’ commonsense ability, while bi-directional context and a larger training set are bonuses. We additionally find that current models do poorly on tasks requiring more inference steps. Finally, we test the robustness of models by making dual test cases, which are correlated so that the correct prediction of one sample should lead to the correct prediction of the other. Interestingly, the models show confusion on these test cases, suggesting that they learn commonsense at a surface level rather than a deep level. We publicly release a test set, named CATs, for future research.
Tasks Language Modelling, Question Answering, Reading Comprehension
Published 2019-11-27
URL https://arxiv.org/abs/1911.11931v1
PDF https://arxiv.org/pdf/1911.11931v1.pdf
PWC https://paperswithcode.com/paper/evaluating-commonsense-in-pre-trained
Repo https://github.com/XuhuiZhou/CATS
Framework pytorch
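
One plausible way to probe a pre-trained LM on a dual test case of the kind described above is to compare the average token negative log-likelihood of the paired sentences; the sentences here are invented examples, and the paper's actual scoring protocol may differ.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def lm_loss(sentence):
    # Average per-token negative log-likelihood under the LM.
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

# A dual test case: a model with commonsense should prefer the first.
plausible = "She put the milk back in the refrigerator."
implausible = "She put the milk back in the fireplace."
print(lm_loss(plausible) < lm_loss(implausible))  # expect True
```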

Collaborative Translational Metric Learning

Title Collaborative Translational Metric Learning
Authors Chanyoung Park, Donghyun Kim, Xing Xie, Hwanjo Yu
Abstract Recently, matrix factorization-based recommendation methods have been criticized for the problem raised by the triangle inequality violation. Although several metric learning-based approaches have been proposed to overcome this issue, existing approaches typically project each user to a single point in the metric space, and thus do not suffice for properly modeling the intensity and the heterogeneity of user-item relationships in implicit feedback. In this paper, we propose TransCF to discover such latent user-item relationships embodied in implicit user-item interactions. Inspired by the translation mechanism popularized by knowledge graph embedding, we construct user-item specific translation vectors by employing the neighborhood information of users and items, and translate each user toward items according to the user’s relationships with the items. Our proposed method outperforms several state-of-the-art methods for top-N recommendation on seven real-world datasets by up to 17% in terms of hit ratio. We also conduct extensive qualitative evaluations on the translation vectors learned by our proposed method to ascertain the benefit of adopting the translation mechanism for implicit feedback-based recommendations.
Tasks Graph Embedding, Knowledge Graph Embedding, Metric Learning
Published 2019-06-04
URL https://arxiv.org/abs/1906.01637v1
PDF https://arxiv.org/pdf/1906.01637v1.pdf
PWC https://paperswithcode.com/paper/collaborative-translational-metric-learning
Repo https://github.com/pcy1302/TransCF
Framework pytorch
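
A hedged sketch of a TransCF-style score: a user-item specific translation vector is built from neighborhood embeddings (the element-wise product here is one plausible construction, not necessarily the paper's), and candidates are ranked by negative distance in the metric space.

```python
import torch

def transcf_score(user_emb, item_emb, user_nbr_emb, item_nbr_emb):
    """All inputs are (batch, dim). `user_nbr_emb` summarizes the items the
    user interacted with; `item_nbr_emb` summarizes the item's users."""
    translation = user_nbr_emb * item_nbr_emb   # assumed construction
    # Translate the user toward the item; smaller distance = better match.
    return -torch.norm(user_emb + translation - item_emb, dim=1)
```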

A Bi-Directional Co-Design Approach to Enable Deep Learning on IoT Devices

Title A Bi-Directional Co-Design Approach to Enable Deep Learning on IoT Devices
Authors Xiaofan Zhang, Cong Hao, Yuhong Li, Yao Chen, Jinjun Xiong, Wen-mei Hwu, Deming Chen
Abstract Developing deep learning models for resource-constrained Internet-of-Things (IoT) devices is challenging, as it is difficult to achieve both good quality of results (QoR), such as DNN model inference accuracy, and quality of service (QoS), such as inference latency, throughput, and power consumption. Existing approaches typically separate the DNN model development step from its deployment on IoT devices, resulting in suboptimal solutions. In this paper, we first introduce a few interesting but counterintuitive observations about such a separate design approach, and empirically show why it may lead to suboptimal designs. Motivated by these observations, we then propose a novel and practical bi-directional co-design approach: a bottom-up DNN model design strategy together with a top-down flow for DNN accelerator design. It enables a joint optimization of both DNN models and their deployment configurations on IoT devices, represented here by FPGAs. We demonstrate the effectiveness of the proposed co-design approach on a real-life object detection application using the Pynq-Z1 embedded FPGA. Our method obtains state-of-the-art results on both QoR, with high accuracy (IoU), and QoS, with high throughput (FPS) and high energy efficiency.
Tasks Object Detection
Published 2019-05-20
URL https://arxiv.org/abs/1905.08369v1
PDF https://arxiv.org/pdf/1905.08369v1.pdf
PWC https://paperswithcode.com/paper/a-bi-directional-co-design-approach-to-enable
Repo https://github.com/TomG008/SkyNet
Framework pytorch
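
The joint optimization can be pictured as a search over (model, accelerator-configuration) pairs that filters by QoS constraints and maximizes QoR. The loop below is a schematic illustration under assumed inputs, not the authors' actual bottom-up/top-down flow.

```python
def co_design_search(candidates, evaluate, min_fps, power_budget_w):
    """`candidates` yields (model, hw_config) pairs; `evaluate` is assumed
    to return (iou, fps, power_w) for a pair deployed on the target FPGA."""
    best, best_iou = None, -1.0
    for model, hw_config in candidates:
        iou, fps, power_w = evaluate(model, hw_config)
        # Keep only designs meeting the QoS budget; maximize QoR (IoU).
        if fps >= min_fps and power_w <= power_budget_w and iou > best_iou:
            best, best_iou = (model, hw_config), iou
    return best, best_iou
```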

Compressing Gradient Optimizers via Count-Sketches

Title Compressing Gradient Optimizers via Count-Sketches
Authors Ryan Spring, Anastasios Kyrillidis, Vijai Mohan, Anshumali Shrivastava
Abstract Many popular first-order optimization methods (e.g., Momentum, AdaGrad, Adam) accelerate the convergence rate of deep learning models. However, these algorithms require auxiliary parameters, which cost additional memory proportional to the number of parameters in the model. The problem is becoming more severe as deep learning models continue to grow larger in order to learn from complex, large-scale datasets. Our proposed solution is to maintain a linear sketch to compress the auxiliary variables. We demonstrate that our technique has the same performance as the full-sized baseline, while using significantly less space for the auxiliary variables. Theoretically, we prove that count-sketch optimization maintains the SGD convergence rate, while gracefully reducing memory usage for large models. On the large-scale 1-Billion Word dataset, we save 25% of the memory used during training (8.6 GB instead of 11.7 GB) by compressing the Adam optimizer in the Embedding and Softmax layers with negligible accuracy and performance loss. For an Amazon extreme classification task with over 49.5 million classes, we also reduce the training time by 38% by increasing the mini-batch size 3.5x using our count-sketch optimizer.
Tasks
Published 2019-02-01
URL http://arxiv.org/abs/1902.00179v2
PDF http://arxiv.org/pdf/1902.00179v2.pdf
PWC https://paperswithcode.com/paper/compressing-gradient-optimizers-via-count
Repo https://github.com/rdspring1/Count-Sketch-Optimizers
Framework pytorch
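
A self-contained sketch of the underlying data structure: a count-sketch that stores optimizer state (e.g. momentum) in memory independent of the parameter count. The multiplicative hashes here are simplified stand-ins; real implementations use stronger hash families.

```python
import numpy as np

class CountSketch:
    """Updates add into a (depth, width) table; queries return the
    sign-corrected median across rows."""
    def __init__(self, depth=5, width=2**18, seed=0):
        rng = np.random.default_rng(seed)
        self.table = np.zeros((depth, width))
        self.col_salt = rng.integers(1, 2**31 - 1, size=(depth, 1))
        self.sign_salt = rng.integers(1, 2**31 - 1, size=(depth, 1))
        self.width = width

    def _hash(self, idx):
        idx = np.asarray(idx, dtype=np.int64) + 1
        cols = (self.col_salt * idx) % (2**31 - 1) % self.width
        parity = ((self.sign_salt * idx) % (2**31 - 1)) % 2
        return cols, np.where(parity == 0, 1.0, -1.0)

    def update(self, idx, values):
        cols, signs = self._hash(idx)
        for d in range(len(self.table)):
            np.add.at(self.table[d], cols[d], signs[d] * values)

    def query(self, idx):
        cols, signs = self._hash(idx)
        reads = [signs[d] * self.table[d, cols[d]] for d in range(len(self.table))]
        return np.median(np.array(reads), axis=0)
```

A sparse momentum step would decay the whole table (`sketch.table *= beta`), call `update(idx, (1 - beta) * grad_values)` for the touched indices, and read the smoothed gradient back with `query(idx)`.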

Association of genomic subtypes of lower-grade gliomas with shape features automatically extracted by a deep learning algorithm

Title Association of genomic subtypes of lower-grade gliomas with shape features automatically extracted by a deep learning algorithm
Authors Mateusz Buda, Ashirbani Saha, Maciej A Mazurowski
Abstract Recent analysis identified distinct genomic subtypes of lower-grade glioma tumors which are associated with shape features. In this study, we propose a fully automatic way to quantify tumor imaging characteristics using deep learning-based segmentation and test whether these characteristics are predictive of tumor genomic subtypes. We used preoperative imaging and genomic data of 110 patients from 5 institutions with lower-grade gliomas from The Cancer Genome Atlas. Based on automatic deep learning segmentations, we extracted three features which quantify two-dimensional and three-dimensional characteristics of the tumors. Genomic data for the analyzed cohort of patients consisted of previously identified genomic clusters based on IDH mutation and 1p/19q co-deletion, DNA methylation, gene expression, DNA copy number, and microRNA expression. To analyze the relationship between the imaging features and genomic clusters, we conducted Fisher exact tests for 10 hypotheses, one for each pair of imaging feature and genomic subtype. To account for multiple hypothesis testing, we applied a Bonferroni correction. P-values lower than 0.005 were considered statistically significant. We found the strongest association between RNASeq clusters and the bounding ellipsoid volume ratio ($p<0.0002$) and between RNASeq clusters and margin fluctuation ($p<0.005$). In addition, we identified associations between bounding ellipsoid volume ratio and all tested molecular subtypes ($p<0.02$) as well as between angular standard deviation and RNASeq cluster ($p<0.02$). For the automatic tumor segmentation used to generate the quantitative image characteristics, our deep learning algorithm achieved a mean Dice coefficient of 82%, which is comparable to human performance.
Tasks 3D Medical Imaging Segmentation, Brain Segmentation, Brain Tumor Segmentation
Published 2019-06-09
URL https://arxiv.org/abs/1906.03720v1
PDF https://arxiv.org/pdf/1906.03720v1.pdf
PWC https://paperswithcode.com/paper/association-of-genomic-subtypes-of-lower
Repo https://github.com/mateuszbuda/brain-segmentation-pytorch
Framework pytorch
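
The statistical analysis described above (Fisher exact tests across 10 hypotheses with a Bonferroni correction) is straightforward to reproduce in outline; the inputs are assumed to be 2x2 contingency tables of dichotomized imaging feature versus cluster membership.

```python
from scipy.stats import fisher_exact

def test_associations(contingency_tables, alpha=0.05):
    """`contingency_tables` maps a (feature, subtype) pair name to a 2x2
    table. With 10 hypotheses, the Bonferroni threshold is 0.05/10 = 0.005,
    matching the significance level quoted in the abstract."""
    threshold = alpha / len(contingency_tables)
    results = {}
    for name, table in contingency_tables.items():
        _, p_value = fisher_exact(table)
        results[name] = (p_value, p_value < threshold)
    return results
```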

Inter-Level Cooperation in Hierarchical Reinforcement Learning

Title Inter-Level Cooperation in Hierarchical Reinforcement Learning
Authors Abdul Rahman Kreidieh, Samyak Parajuli, Nathan Lichtle, Yiling You, Rayyan Nasr, Alexandre M. Bayen
Abstract This article presents a novel algorithm for promoting cooperation between internal actors in a goal-conditioned hierarchical reinforcement learning (HRL) policy. Current techniques for HRL policy optimization treat the higher- and lower-level policies as separate entities which are trained to maximize different objective functions, rendering the HRL problem formulation more similar to a general-sum game than a single-agent task. Within this setting, we hypothesize that improved cooperation between the internal agents of a hierarchy can simplify the credit assignment problem from the perspective of the high-level policies, thereby leading to significant improvements in training in situations where intricate sets of action primitives must be performed to yield improvements in performance. In order to promote cooperation within this setting, we propose the inclusion of a connected gradient term in the gradient computations of the higher-level policies. Our method is demonstrated to achieve superior results to existing techniques on a set of difficult long-time-horizon tasks.
Tasks Hierarchical Reinforcement Learning
Published 2019-12-05
URL https://arxiv.org/abs/1912.02368v1
PDF https://arxiv.org/pdf/1912.02368v1.pdf
PWC https://paperswithcode.com/paper/191202368
Repo https://github.com/AboudyKreidieh/h-baselines
Framework tf
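
A heavily hedged sketch of the "connected gradient" idea: keep the manager's goal inside the computation graph so the higher-level objective receives gradients through the lower-level policy. The critic-based surrogate below is an assumption for illustration, not the paper's estimator.

```python
def cooperative_manager_loss(manager, worker, critic, states):
    """PyTorch-style sketch. `manager(states)` proposes goals, `worker`
    conditions on them, and `critic` scores the joint behavior. Because
    `goals` is not detached, the manager's parameters receive gradients
    that flow through the worker -- the inter-level connection."""
    goals = manager(states)            # differentiable goal proposal
    actions = worker(states, goals)    # lower-level policy, goal-conditioned
    return -critic(states, actions).mean()
```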

Accelerating Training of Deep Neural Networks with a Standardization Loss

Title Accelerating Training of Deep Neural Networks with a Standardization Loss
Authors Jasmine Collins, Johannes Ballé, Jonathon Shlens
Abstract A significant advance in accelerating neural network training has been the development of normalization methods, permitting deep models to be trained both faster and with better accuracy. These advances come with practical challenges: for instance, batch normalization ties the prediction of individual examples to other examples within a batch, resulting in a network that is heavily dependent on batch size. Layer normalization and group normalization are data-dependent and thus must be continually used, even at test time. To address the issues that arise from using explicit normalization techniques, we propose to replace existing normalization methods with a simple, secondary objective loss that we term a standardization loss. This formulation is flexible and robust across different batch sizes and, surprisingly, this secondary objective accelerates learning on the primary training objective. Because it is a training loss, it is simply removed at test time, and no further effort is needed to maintain normalized activations. We find that a standardization loss accelerates training on both small- and large-scale image classification experiments, works with a variety of architectures, and is largely robust to training across different batch sizes.
Tasks Image Classification
Published 2019-03-03
URL http://arxiv.org/abs/1903.00925v1
PDF http://arxiv.org/pdf/1903.00925v1.pdf
PWC https://paperswithcode.com/paper/accelerating-training-of-deep-neural-networks
Repo https://github.com/lessw2020/auto-adaptive-ai
Framework pytorch
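
A plausible form of the standardization loss, assuming it penalizes deviations of per-feature activation statistics from zero mean and unit variance; the paper's exact formulation may differ.

```python
import torch

def standardization_loss(activations):
    """`activations` is (batch, features). The penalty vanishes exactly
    when each feature is standardized, so it can replace an explicit
    normalization layer during training and be dropped at test time."""
    mean = activations.mean(dim=0)
    var = activations.var(dim=0)
    return (mean.pow(2) + (var - 1.0).pow(2)).mean()

# Usage sketch: total = task_loss + lam * sum(standardization_loss(a) for a in acts)
```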

Robust Variational Autoencoders for Outlier Detection and Repair of Mixed-Type Data

Title Robust Variational Autoencoders for Outlier Detection and Repair of Mixed-Type Data
Authors Simão Eduardo, Alfredo Nazábal, Christopher K. I. Williams, Charles Sutton
Abstract We focus on the problem of unsupervised cell outlier detection and repair in mixed-type tabular data. Traditional methods are concerned only with detecting which rows in the dataset are outliers. However, identifying which cells are corrupted in a specific row is an important problem in practice, and the very first step towards repairing them. We introduce the Robust Variational Autoencoder (RVAE), a deep generative model that learns the joint distribution of the clean data while identifying the outlier cells, allowing their imputation (repair). RVAE explicitly learns the probability of each cell being an outlier, balancing different likelihood models in the row outlier score, making the method suitable for outlier detection in mixed-type datasets. We show experimentally that RVAE not only performs better than several state-of-the-art methods in cell outlier detection and repair for tabular data, but is also robust against the initial hyper-parameter selection.
Tasks Imputation, Outlier Detection
Published 2019-07-15
URL https://arxiv.org/abs/1907.06671v2
PDF https://arxiv.org/pdf/1907.06671v2.pdf
PWC https://paperswithcode.com/paper/robust-variational-autoencoders-for-outlier
Repo https://github.com/sfme/RVAE_MixedTypes
Framework pytorch
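
The per-cell outlier mechanism can be sketched as a two-component mixture in log space, where pi is the learned probability that a cell is clean; the decoder and outlier likelihoods are assumed given, and the function is illustrative rather than the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def robust_cell_log_lik(clean_log_prob, outlier_log_prob, pi_logit):
    """Per-cell log-likelihood mixing a clean-data model (the VAE decoder)
    with a broad outlier model. sigmoid(pi_logit) is P(cell is clean);
    low values flag the cell as an outlier and drive its repair."""
    log_pi = F.logsigmoid(pi_logit)              # log P(clean)
    log_one_minus_pi = F.logsigmoid(-pi_logit)   # log P(outlier)
    return torch.logsumexp(torch.stack([
        log_pi + clean_log_prob,
        log_one_minus_pi + outlier_log_prob,
    ]), dim=0)
```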

Coreference Resolution as Query-based Span Prediction

Title Coreference Resolution as Query-based Span Prediction
Authors Wei Wu, Fei Wang, Arianna Yuan, Fei Wu, Jiwei Li
Abstract In this paper, we present an accurate and extensible approach for the coreference resolution task. We formulate the problem as a span prediction task, as in machine reading comprehension (MRC): a query is generated for each candidate mention using its surrounding context, and a span prediction module is employed to extract the text spans of the coreferences within the document using the generated query. This formulation comes with the following key advantages: (1) The span prediction strategy provides the flexibility of retrieving mentions left out at the mention proposal stage; (2) In the MRC framework, encoding the mention and its context explicitly in a query makes it possible to have a deep and thorough examination of cues embedded in the context of coreferent mentions; and (3) A plethora of existing MRC datasets can be used for data augmentation to improve the model’s generalization capability. Experiments demonstrate a significant performance boost over previous models, with an F1 score of 87.5 (+2.5) on the GAP benchmark and 83.1 (+3.5) on the CoNLL-2012 benchmark.
Tasks Coreference Resolution, Data Augmentation, Machine Reading Comprehension, Reading Comprehension
Published 2019-11-05
URL https://arxiv.org/abs/1911.01746v1
PDF https://arxiv.org/pdf/1911.01746v1.pdf
PWC https://paperswithcode.com/paper/coreference-resolution-as-query-based-span
Repo https://github.com/ShannonAI/CorefQA
Framework tf
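
Query generation can be illustrated with a small helper that highlights a candidate mention in its surrounding context before handing it to a span-prediction model; the marker tokens are hypothetical placeholders, not necessarily those used in the paper.

```python
def build_query(tokens, mention_start, mention_end,
                open_marker="<mention>", close_marker="</mention>"):
    """Wrap the candidate mention (inclusive token span) in marker tokens.
    The resulting sequence serves as the MRC query; a span-prediction
    module then extracts coreferent spans from the full document."""
    return (tokens[:mention_start]
            + [open_marker]
            + tokens[mention_start:mention_end + 1]
            + [close_marker]
            + tokens[mention_end + 1:])

# build_query(["I", "saw", "her", "yesterday"], 2, 2)
# -> ['I', 'saw', '<mention>', 'her', '</mention>', 'yesterday']
```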

SceneGraphNet: Neural Message Passing for 3D Indoor Scene Augmentation

Title SceneGraphNet: Neural Message Passing for 3D Indoor Scene Augmentation
Authors Yang Zhou, Zachary While, Evangelos Kalogerakis
Abstract In this paper we propose a neural message passing approach to augment an input 3D indoor scene with new objects matching their surroundings. Given an input, potentially incomplete, 3D scene and a query location, our method predicts a probability distribution over object types that fit well in that location. Our distribution is predicted through passing learned messages in a dense graph whose nodes represent objects in the input scene and edges represent spatial and structural relationships. By weighting messages through an attention mechanism, our method learns to focus on the most relevant surrounding scene context to predict new scene objects. In our experiments on the SUNCG dataset, our method significantly outperforms state-of-the-art approaches at correctly predicting objects missing in a scene. We also demonstrate other applications of our method, including context-based 3D object recognition and iterative scene generation.
Tasks 3D Object Recognition, Object Recognition, Scene Generation
Published 2019-07-25
URL https://arxiv.org/abs/1907.11308v1
PDF https://arxiv.org/pdf/1907.11308v1.pdf
PWC https://paperswithcode.com/paper/scenegraphnet-neural-message-passing-for-3d
Repo https://github.com/yzhou359/3DIndoor-SceneGraphNet
Framework pytorch
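
An attention-weighted aggregation step of the kind described, sketched under assumed shapes; the full model passes messages over a dense graph with typed spatial/structural edges, which this minimal module omits.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionAggregator(nn.Module):
    """Pool messages from surrounding objects into a query location with
    learned attention, then predict a distribution over object types."""
    def __init__(self, dim, num_types):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)
        self.classify = nn.Linear(dim, num_types)

    def forward(self, query, messages):
        # query: (dim,), messages: (num_neighbors, dim)
        pairs = torch.cat([query.expand(messages.size(0), -1), messages], dim=1)
        attn = F.softmax(self.score(pairs).squeeze(-1), dim=0)
        pooled = (attn.unsqueeze(-1) * messages).sum(dim=0)
        return F.log_softmax(self.classify(pooled), dim=-1)
```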

Bridging the Gap between Community and Node Representations: Graph Embedding via Community Detection

Title Bridging the Gap between Community and Node Representations: Graph Embedding via Community Detection
Authors Artem Lutov, Dingqi Yang, Philippe Cudré-Mauroux
Abstract Graph embedding has become a key component of many data mining and analysis systems. Current graph embedding approaches either sample a large number of node pairs from a graph to learn node embeddings via stochastic optimization or factorize a high-order proximity/adjacency matrix of the graph via computationally expensive matrix factorization techniques. These approaches typically require significant resources for the learning process and rely on multiple parameters, which limits their applicability in practice. Moreover, most of the existing graph embedding techniques operate effectively in one specific metric space only (e.g., the one produced with cosine similarity), do not preserve higher-order structural features of the input graph and cannot automatically determine a meaningful number of embedding dimensions. Typically, the produced embeddings are not easily interpretable, which complicates further analyses and limits their applicability. To address these issues, we propose DAOR, a highly efficient and parameter-free graph embedding technique producing metric space-robust, compact and interpretable embeddings without any manual tuning. Compared to a dozen state-of-the-art graph embedding algorithms, DAOR yields competitive results on both node classification (which benefits from high-order proximity) and link prediction (which mostly relies on low-order proximity). Unlike existing techniques, however, DAOR does not require any parameter tuning and improves the embeddings generation speed by several orders of magnitude. Our approach hence aims to greatly simplify and speed up data analysis tasks involving graph representation learning.
Tasks Community Detection, Graph Embedding, Graph Representation Learning, Link Prediction, Node Classification, Representation Learning, Stochastic Optimization
Published 2019-12-17
URL https://arxiv.org/abs/1912.08808v1
PDF https://arxiv.org/pdf/1912.08808v1.pdf
PWC https://paperswithcode.com/paper/bridging-the-gap-between-community-and-node
Repo https://github.com/eXascaleInfolab/daor
Framework none
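
The interpretable-embedding idea can be pictured as building each node's vector directly from its community memberships, so each dimension has a concrete meaning. The sketch assumes a precomputed (possibly overlapping) community assignment rather than DAOR's own clustering.

```python
import numpy as np

def community_embeddings(node_communities, num_communities):
    """`node_communities` maps node -> iterable of community ids. Each node
    gets a binary membership vector; overlapping and multi-level community
    structures simply light up several dimensions."""
    nodes = sorted(node_communities)
    emb = np.zeros((len(nodes), num_communities))
    for row, node in enumerate(nodes):
        for community in node_communities[node]:
            emb[row, community] = 1.0
    return nodes, emb
```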

EL Embeddings: Geometric construction of models for the Description Logic EL++

Title EL Embeddings: Geometric construction of models for the Description Logic EL++
Authors Maxat Kulmanov, Wang Liu-Wei, Yuan Yan, Robert Hoehndorf
Abstract An embedding is a function that maps entities from one algebraic structure into another while preserving certain characteristics. Embeddings are being used successfully for mapping relational data or text into vector spaces where they can be used for machine learning, similarity search, or similar tasks. We address the problem of finding vector space embeddings for theories in the Description Logic $\mathcal{EL}^{++}$ that are also models of the TBox. To find such embeddings, we define an optimization problem that characterizes the model-theoretic semantics of the operators in $\mathcal{EL}^{++}$ within $\mathbb{R}^n$, thereby solving the problem of finding an interpretation function for an $\mathcal{EL}^{++}$ theory given a particular domain $\Delta$. Our approach is mainly relevant to large $\mathcal{EL}^{++}$ theories and knowledge bases such as the ontologies and knowledge graphs used in the life sciences. We demonstrate that our method can be used for improved prediction of protein–protein interactions when compared to semantic similarity measures or knowledge graph embeddings.
Tasks Graph Embedding, Knowledge Graph Embedding, Knowledge Graphs, Semantic Similarity, Semantic Textual Similarity
Published 2019-02-27
URL http://arxiv.org/abs/1902.10499v1
PDF http://arxiv.org/pdf/1902.10499v1.pdf
PWC https://paperswithcode.com/paper/el-embeddings-geometric-construction-of
Repo https://github.com/bio-ontology-research-group/el-embeddings
Framework none
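
Geometrically, each class becomes an n-ball and subsumption becomes ball containment; a hinge penalty of the following shape captures the constraint for C ⊑ D (the paper's full objective has additional terms for the other $\mathcal{EL}^{++}$ operators, so treat this as a sketch).

```python
import torch

def subsumption_loss(c_center, c_radius, d_center, d_radius, margin=0.0):
    """Zero when the ball for C lies inside the ball for D, i.e. when
    ||c - d|| + r_C <= r_D; positive otherwise, penalizing violations."""
    dist = torch.norm(c_center - d_center)
    return torch.relu(dist + c_radius - d_radius + margin)
```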

Statistical Guarantees for the Robustness of Bayesian Neural Networks

Title Statistical Guarantees for the Robustness of Bayesian Neural Networks
Authors Luca Cardelli, Marta Kwiatkowska, Luca Laurenti, Nicola Paoletti, Andrea Patane, Matthew Wicker
Abstract We introduce a probabilistic robustness measure for Bayesian Neural Networks (BNNs), defined as the probability that, given a test point, there exists a point within a bounded set such that the BNN prediction differs between the two. Such a measure can be used, for instance, to quantify the probability of the existence of adversarial examples. Building on statistical verification techniques for probabilistic models, we develop a framework that allows us to estimate probabilistic robustness for a BNN with statistical guarantees, i.e., with a priori error and confidence bounds. We provide an experimental comparison of several approximate BNN inference techniques on image classification tasks associated with MNIST and a two-class subset of the GTSRB dataset. Our results enable the quantification of uncertainty in BNN predictions in adversarial settings.
Tasks Image Classification
Published 2019-03-05
URL http://arxiv.org/abs/1903.01980v1
PDF http://arxiv.org/pdf/1903.01980v1.pdf
PWC https://paperswithcode.com/paper/statistical-guarantees-for-the-robustness-of
Repo https://github.com/matthewwicker/StatisticalGuarenteesForBNNs
Framework tf
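
The "a priori error and confidence bounds" can be illustrated with a Hoeffding-style sample-size calculation for a Monte Carlo estimate over posterior samples; the paper's exact statistical machinery may differ, and `sample_posterior` / `is_locally_robust` are assumed callables.

```python
import math

def required_samples(eps, delta):
    """Smallest n so the empirical mean of Bernoulli outcomes is within
    eps of the true probability with confidence 1 - delta (Hoeffding)."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

def estimate_robustness(sample_posterior, is_locally_robust, eps=0.05, delta=0.01):
    n = required_samples(eps, delta)   # eps=0.05, delta=0.01 -> 1060 samples
    hits = sum(is_locally_robust(sample_posterior()) for _ in range(n))
    return hits / n
```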