April 3, 2020

2670 words 13 mins read

Paper Group AWR 74

Point-GNN: Graph Neural Network for 3D Object Detection in a Point Cloud

Title Point-GNN: Graph Neural Network for 3D Object Detection in a Point Cloud
Authors Weijing Shi, Ragunathan Rajkumar
Abstract In this paper, we propose a graph neural network to detect objects from a LiDAR point cloud. To this end, we encode the point cloud efficiently in a fixed-radius near-neighbors graph. We design a graph neural network, named Point-GNN, to predict the category and shape of the object that each vertex in the graph belongs to. In Point-GNN, we propose an auto-registration mechanism to reduce translation variance, and also design a box merging and scoring operation to combine detections from multiple vertices accurately. Our experiments on the KITTI benchmark show that the proposed approach achieves leading accuracy using the point cloud alone and can even surpass fusion-based algorithms. Our results demonstrate the potential of graph neural networks as a new approach for 3D object detection. The code is available at https://github.com/WeijingShi/Point-GNN.
Tasks 3D Object Detection, Object Detection
Published 2020-03-02
URL https://arxiv.org/abs/2003.01251v1
PDF https://arxiv.org/pdf/2003.01251v1.pdf
PWC https://paperswithcode.com/paper/point-gnn-graph-neural-network-for-3d-object
Repo https://github.com/WeijingShi/Point-GNN
Framework tf
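
The entry's key preprocessing step, encoding the cloud as a fixed-radius near-neighbors graph, is easy to picture in code. Below is a minimal sketch (my illustration, not the authors' TensorFlow implementation) using SciPy's KD-tree; the point cloud and radius are toy values.

```python
# Sketch: fixed-radius near-neighbors graph over a point cloud.
import numpy as np
from scipy.spatial import cKDTree

def build_radius_graph(points: np.ndarray, radius: float) -> np.ndarray:
    """Return directed edges (i, j) for all point pairs within `radius`."""
    tree = cKDTree(points)
    pairs = tree.query_pairs(r=radius)           # set of undirected pairs (i < j)
    edges = [(i, j) for i, j in pairs] + [(j, i) for i, j in pairs]
    return np.array(edges)                       # each pair stored in both directions

points = np.random.rand(1000, 3) * 50.0          # toy LiDAR-like cloud
edges = build_radius_graph(points, radius=4.0)
print(edges.shape)                               # (num_edges, 2)
```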

Statistical power for cluster analysis

Title Statistical power for cluster analysis
Authors E. S. Dalmaijer, C. L. Nord, D. E. Astle
Abstract Cluster algorithms are gaining in popularity due to their compelling ability to identify discrete subgroups in data, and their increasing accessibility in mainstream programming languages and statistical software. While researchers can follow guidelines to choose the right algorithms, and to determine what constitutes convincing clustering, there are no firmly established ways of computing a priori statistical power for cluster analysis. Here, we take a simulation approach to estimate power and classification accuracy for popular analysis pipelines. We systematically varied cluster size, number of clusters, number of different features between clusters, effect size within each different feature, and cluster covariance structure in generated datasets. We then subjected these datasets to common dimensionality reduction approaches (none, multi-dimensional scaling, or uniform manifold approximation and projection) and cluster algorithms (k-means; hierarchical agglomerative clustering with Ward linkage and Euclidean distance, or with average linkage and cosine distance; and HDBSCAN). Furthermore, we simulated additional datasets to explore the effect of sample size and cluster separation on statistical power and classification accuracy. We found that clustering outcomes were driven by large effect sizes or the accumulation of many smaller effects across features, and were mostly unaffected by differences in covariance structure. Sufficient statistical power can be achieved with relatively small samples (N=20 per subgroup), provided cluster separation is large (Δ=4). Finally, we discuss whether fuzzy clustering (c-means) could provide a more parsimonious alternative for identifying separable multivariate normal distributions, particularly those with lower centroid separation.
Tasks Dimensionality Reduction
Published 2020-03-01
URL https://arxiv.org/abs/2003.00381v1
PDF https://arxiv.org/pdf/2003.00381v1.pdf
PWC https://paperswithcode.com/paper/statistical-power-for-cluster-analysis
Repo https://github.com/esdalmaijer/cluster_power
Framework none
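
The simulation recipe the abstract outlines can be condensed into a few lines: generate subgroups with a known separation, cluster them, and count how often the structure is recovered. The sketch below is a simplified stand-in for the authors' pipeline; the success criterion (ARI >= 0.8) and all parameters are illustrative assumptions.

```python
# Sketch: estimate statistical power for k-means by simulation.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def simulate_power(n_per_group=20, n_features=10, delta=4.0,
                   n_sims=200, threshold=0.8, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, size=(n_per_group, n_features))
        b = rng.normal(0.0, 1.0, size=(n_per_group, n_features))
        b[:, 0] += delta                         # centroid separation on one feature
        X = np.vstack([a, b])
        y = np.repeat([0, 1], n_per_group)
        labels = KMeans(n_clusters=2, n_init=10,
                        random_state=int(rng.integers(1 << 31))).fit_predict(X)
        if adjusted_rand_score(y, labels) >= threshold:
            hits += 1
    return hits / n_sims                         # fraction of successful recoveries

print(simulate_power())
```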

Multi-variate Probabilistic Time Series Forecasting via Conditioned Normalizing Flows

Title Multi-variate Probabilistic Time Series Forecasting via Conditioned Normalizing Flows
Authors Kashif Rasul, Abdul-Saboor Sheikh, Ingmar Schuster, Urs Bergmann, Roland Vollgraf
Abstract Time series forecasting is often fundamental to scientific and engineering problems and enables decision making. With ever-increasing data set sizes, a trivial solution to scale up predictions is to assume independence between interacting time series. However, modeling statistical dependencies can improve accuracy and enable the analysis of interaction effects. Deep learning methods are well suited for this problem, but multi-variate models often assume a simple parametric distribution and do not scale to high dimensions. In this work, we model the multi-variate temporal dynamics of time series via an autoregressive deep learning model, where the data distribution is represented by a conditioned normalizing flow. This combination combines the power of autoregressive models, such as good performance when extrapolating into the future, with the flexibility of flows as a general-purpose high-dimensional distribution model, while remaining computationally tractable. We show that it improves over the state of the art on standard metrics for many real-world data sets with several thousand interacting time series.
Tasks Decision Making, Time Series, Time Series Forecasting
Published 2020-02-14
URL https://arxiv.org/abs/2002.06103v1
PDF https://arxiv.org/pdf/2002.06103v1.pdf
PWC https://paperswithcode.com/paper/multi-variate-probabilistic-time-series
Repo https://github.com/zalandoresearch/pytorch-ts
Framework pytorch
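
To make the architecture concrete: an RNN summarizes past observations, and its hidden state conditions a normalizing flow over the next multivariate observation. The PyTorch sketch below is my own minimal version with a single affine coupling layer, not the pytorch-ts implementation linked above.

```python
# Sketch: an RNN hidden state conditions an affine coupling flow.
import torch
import torch.nn as nn

class ConditionedAffineCoupling(nn.Module):
    """One coupling layer: scale/shift of the second half of x are
    predicted from the first half concatenated with the condition h."""
    def __init__(self, dim, cond_dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)))

    def forward(self, x, h):                     # x -> z, plus log|det J|
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(torch.cat([x1, h], dim=-1)).chunk(2, dim=-1)
        s = torch.tanh(s)                        # keep scales well-behaved
        z2 = x2 * torch.exp(s) + t
        return torch.cat([x1, z2], dim=-1), s.sum(dim=-1)

# Toy usage: condition on the RNN summary of the past; the training loss
# is the negative log-likelihood under a standard normal base distribution.
dim, cond_dim = 8, 16
rnn = nn.GRU(dim, cond_dim, batch_first=True)
flow = ConditionedAffineCoupling(dim, cond_dim)
past = torch.randn(32, 24, dim)                  # (batch, time, series)
x_t = torch.randn(32, dim)                       # next observation
_, h = rnn(past)
z, log_det = flow(x_t, h.squeeze(0))
base = torch.distributions.Normal(0.0, 1.0)
nll = -(base.log_prob(z).sum(dim=-1) + log_det).mean()
print(nll.item())
```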

Synonymous Generalization in Sequence-to-Sequence Recurrent Networks

Title Synonymous Generalization in Sequence-to-Sequence Recurrent Networks
Authors Ning Shi
Abstract When learning a language, people can quickly expand their understanding of unknown content by using compositional skills, such as going from the two words “go” and “fast” to the new phrase “go fast.” Recent work by Lake and Baroni (2017) shows that modern Sequence-to-Sequence (seq2seq) Recurrent Neural Networks (RNNs) can make powerful zero-shot generalizations in specifically controlled experiments. However, the nature of such strong generalization and its precise requirements remain unclear. This paper explores this positive result in detail and defines the pattern as synonymous generalization: the ability to recognize an unknown sequence by decomposing its difference from a known sequence into corresponding existing synonyms. To investigate it, I introduce a new environment called Colorful Extended Cleanup World (CECW), which consists of complex commands paired with logical expressions. I demonstrate that sequential RNNs can perform synonymous generalization on unseen commands and identify the prerequisites for success. I also propose a data augmentation method, verified on the Geoquery (GEO) dataset, as a novel application of synonymous generalization to real cases.
Tasks Data Augmentation
Published 2020-03-14
URL https://arxiv.org/abs/2003.06658v1
PDF https://arxiv.org/pdf/2003.06658v1.pdf
PWC https://paperswithcode.com/paper/synonymous-generalization-in-sequence-to
Repo https://github.com/MrShininnnnn/CECW
Framework none
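
The proposed augmentation can be illustrated with a toy substitution scheme: given a command/logical-form pair and a synonym table, new commands are generated whose logical form stays unchanged. Everything below, including the synonym table and logical form, is hypothetical.

```python
# Sketch: synonym-substitution data augmentation for command/logical-form pairs.
SYNONYMS = {
    "go": ["move", "walk"],
    "fast": ["quickly", "rapidly"],
}

def augment(command: str, logical_form: str):
    """Yield new (command, logical_form) pairs via synonym substitution."""
    tokens = command.split()
    for i, tok in enumerate(tokens):
        for syn in SYNONYMS.get(tok, []):
            new_cmd = " ".join(tokens[:i] + [syn] + tokens[i + 1:])
            # logical form is unchanged: synonyms share one meaning
            yield new_cmd, logical_form

for pair in augment("go fast", "SPEED(HIGH)"):
    print(pair)
```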

CRYSPNet: Crystal Structure Predictions via Neural Network

Title CRYSPNet: Crystal Structure Predictions via Neural Network
Authors Haotong Liang, Valentin Stanev, A. Gilad Kusne, Ichiro Takeuchi
Abstract Structure is the most basic and important property of crystalline solids; it determines, directly or indirectly, most materials characteristics. However, predicting the crystal structure of solids remains a formidable and not fully solved problem. Standard theoretical tools for this task are computationally expensive and at times inaccurate. Here we present an alternative approach utilizing machine learning for crystal structure prediction. We developed a tool called Crystal Structure Prediction Network (CRYSPNet) that can predict the Bravais lattice, space group, and lattice parameters of an inorganic material based only on its chemical composition. CRYSPNet consists of a series of neural network models that take as input predictors aggregating the properties of the elements constituting the compound. It was trained and validated on more than 100,000 entries from the Inorganic Crystal Structure Database. The tool demonstrates robust predictive capability and outperforms alternative strategies by a large margin. Made available to the public (at https://github.com/AuroraLHT/cryspnet), it can be used both as an independent prediction engine and as a method to generate candidate structures for further computational and/or experimental validation.
Tasks
Published 2020-03-31
URL https://arxiv.org/abs/2003.14328v1
PDF https://arxiv.org/pdf/2003.14328v1.pdf
PWC https://paperswithcode.com/paper/cryspnet-crystal-structure-predictions-via
Repo https://github.com/AuroraLHT/cryspnet
Framework pytorch
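
The pipeline the abstract describes, aggregating elemental properties into fixed-length predictors and feeding them to a classifier, might be sketched as follows. The two-property table, toy compositions, and labels are illustrative placeholders; the real tool uses many more descriptors and far more data.

```python
# Sketch: composition -> aggregated elemental features -> lattice classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

ELEMENT_PROPS = {                 # (atomic radius in pm, electronegativity)
    "Na": (186, 0.93), "Cl": (99, 3.16), "Mg": (160, 1.31), "O": (60, 3.44),
}

def featurize(composition: dict) -> np.ndarray:
    """Weighted mean and element-wise max of each elemental property."""
    props = np.array([ELEMENT_PROPS[el] for el in composition])
    weights = np.array(list(composition.values()), dtype=float)
    weights /= weights.sum()
    return np.concatenate([weights @ props, props.max(axis=0)])

X = np.array([featurize({"Na": 1, "Cl": 1}), featurize({"Mg": 1, "O": 1})])
y = ["cubic", "hexagonal"]        # toy labels for illustration only
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
print(clf.predict(X))
```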

NYTWIT: A Dataset of Novel Words in the New York Times

Title NYTWIT: A Dataset of Novel Words in the New York Times
Authors Yuval Pinter, Cassandra L. Jacobs, Max Bittker
Abstract We present the New York Times Word Innovation Types dataset, or NYTWIT, a collection of over 2,500 novel English words published in the New York Times between November 2017 and March 2019, manually annotated for their class of novelty (such as lexical derivation, dialectal variation, blending, or compounding). We present baseline results for both uncontextual and contextual prediction of novelty class, showing that there is room for improvement even for state-of-the-art NLP systems. We hope this resource will prove useful for linguists and NLP practitioners by providing a real-world environment of novel word appearance.
Tasks
Published 2020-03-06
URL https://arxiv.org/abs/2003.03444v1
PDF https://arxiv.org/pdf/2003.03444v1.pdf
PWC https://paperswithcode.com/paper/nytwit-a-dataset-of-novel-words-in-the-new
Repo https://github.com/yuvalpinter/nytwit
Framework none
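
An “uncontextual” baseline of the kind the paper reports could be as simple as a character n-gram classifier over the word form alone. The sketch below is a toy stand-in; the words, labels, and model choice are assumptions, not drawn from NYTWIT.

```python
# Sketch: predict a novel word's class from character n-grams alone.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

words = ["hangry", "brunchify", "yeet", "mansplainer"]   # illustrative only
labels = ["blend", "derivation", "novel_root", "compound"]

clf = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000))
clf.fit(words, labels)
print(clf.predict(["workcation"]))
```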

SAM: The Sensitivity of Attribution Methods to Hyperparameters

Title SAM: The Sensitivity of Attribution Methods to Hyperparameters
Authors Naman Bansal, Chirag Agarwal, Anh Nguyen
Abstract Attribution methods can provide powerful insights into the reasons for a classifier’s decision. We argue that a key desideratum of an explanation method is its robustness to input hyperparameters, which are often randomly set or empirically tuned. High sensitivity to arbitrary hyperparameter choices not only impedes reproducibility but also calls the correctness of an explanation into question and erodes end-user trust. In this paper, we provide a thorough empirical study of the sensitivity of existing attribution methods. We found an alarming trend: many methods are highly sensitive to changes in their common hyperparameters, e.g., even changing a random seed can yield a different explanation! Interestingly, such sensitivity is not reflected in the average explanation accuracy scores over the dataset as commonly reported in the literature. In addition, explanations generated for robust classifiers (i.e., those trained to be invariant to pixel-wise perturbations) are surprisingly more robust than those generated for regular classifiers.
Tasks
Published 2020-03-04
URL https://arxiv.org/abs/2003.08754v1
PDF https://arxiv.org/pdf/2003.08754v1.pdf
PWC https://paperswithcode.com/paper/sam-the-sensitivity-of-attribution-methods-to
Repo https://github.com/anguyen8/sam
Framework pytorch
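
The paper's core experiment, re-running an attribution method with only one hyperparameter changed, can be sketched quickly. Below, a SmoothGrad-style attribution is computed twice with different random seeds and the two explanations are compared; the toy model and all parameters are my assumptions, not the authors' setup.

```python
# Sketch: measure how much an attribution changes when only the seed changes.
import torch
import torch.nn as nn

def smoothgrad(model, x, target, n=25, sigma=0.15, seed=0):
    g = torch.Generator().manual_seed(seed)
    total = torch.zeros_like(x)
    for _ in range(n):
        noise = sigma * torch.randn(x.shape, generator=g)
        noisy = (x + noise).requires_grad_(True)
        model(noisy)[0, target].backward()       # gradient of the target logit
        total += noisy.grad
    return total / n

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
x = torch.rand(1, 3, 32, 32)
a = smoothgrad(model, x, target=3, seed=0)
b = smoothgrad(model, x, target=3, seed=1)       # only the seed differs
print(torch.nn.functional.cosine_similarity(a.flatten(), b.flatten(), dim=0))
```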

On the distance between two neural networks and the stability of learning

Title On the distance between two neural networks and the stability of learning
Authors Jeremy Bernstein, Arash Vahdat, Yisong Yue, Ming-Yu Liu
Abstract How far apart are two neural networks? This is a foundational question in their theory. We derive a simple and tractable bound that relates distance in function space to distance in parameter space for a broad class of nonlinear compositional functions. The bound distills a clear dependence on depth of the composition. The theory is of practical relevance since it establishes a trust region for first-order optimisation. In turn, this suggests an optimiser that we call Frobenius matched gradient descent—or Fromage. Fromage involves a principled form of gradient rescaling and enjoys guarantees on stability of both the spectra and Frobenius norms of the weights. We find that the new algorithm increases the depth at which a multilayer perceptron may be trained as compared to Adam and SGD and is competitive with Adam for training generative adversarial networks. We further verify that Fromage scales up to a language transformer with over $10^8$ parameters. Please find code & reproducibility instructions at: https://github.com/jxbz/fromage.
Tasks
Published 2020-02-09
URL https://arxiv.org/abs/2002.03432v1
PDF https://arxiv.org/pdf/2002.03432v1.pdf
PWC https://paperswithcode.com/paper/on-the-distance-between-two-neural-networks
Repo https://github.com/jxbz/fromage
Framework pytorch
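
Based on the update rule described in the paper, a Fromage-style step rescales each layer's gradient by the ratio of weight norm to gradient norm and then shrinks the weights by 1/sqrt(1 + lr^2). The sketch below is my reading of that rule; the linked repo is the authoritative implementation.

```python
# Sketch: one Fromage-style update over a list of parameters.
import math
import torch

@torch.no_grad()
def fromage_step(params, lr=0.01, eps=1e-12):
    for p in params:
        if p.grad is None:
            continue
        scale = (p.norm() / (p.grad.norm() + eps)).item()
        p.sub_(lr * scale * p.grad)              # norm-matched gradient step
        p.div_(math.sqrt(1.0 + lr ** 2))         # controls weight-norm growth

# usage: after loss.backward(), call fromage_step(model.parameters())
```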

Exploiting Verified Neural Networks via Floating Point Numerical Error

Title Exploiting Verified Neural Networks via Floating Point Numerical Error
Authors Kai Jia, Martin Rinard
Abstract We show how to construct adversarial examples for neural networks with exactly verified robustness against $\ell_{\infty}$-bounded input perturbations by exploiting floating point error. We argue that any exact verification of real-valued neural networks must accurately model the implementation details of any floating point arithmetic used during inference or verification.
Tasks
Published 2020-03-06
URL https://arxiv.org/abs/2003.03021v1
PDF https://arxiv.org/pdf/2003.03021v1.pdf
PWC https://paperswithcode.com/paper/exploiting-verified-neural-networks-via
Repo https://github.com/jia-kai/realadv
Framework pytorch
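
The root cause is elementary: floating point addition is not associative, so the same affine layer can produce different outputs depending on evaluation order, and near a verified decision boundary that discrepancy can flip the class. A toy float32 demonstration (not the paper's actual attack):

```python
# Sketch: the same dot product, two summation orders, two different answers.
import numpy as np

w = np.array([1e8, -1e8, 1.0], dtype=np.float32)
x = np.ones(3, dtype=np.float32)

acc = np.float32(0.0)
for wi, xi in zip(w, x):                 # left-to-right: (1e8 - 1e8) + 1 = 1.0
    acc += wi * xi
rev = np.float32(0.0)
for wi, xi in zip(w[::-1], x[::-1]):     # right-to-left: the 1.0 is absorbed
    rev += wi * xi
print(acc, rev)                          # 1.0 vs 0.0 in float32
```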

Deep differentiable forest with sparse attention for the tabular data

Title Deep differentiable forest with sparse attention for the tabular data
Authors Yingshi Chen
Abstract We present a general architecture for deep differentiable forests and their sparse attention mechanism. The differentiable forest has the advantages of both trees and neural networks. Its structure is a simple binary tree, easy to use and understand. It is fully differentiable and all of its variables are learnable parameters. We train it with gradient-based optimization methods, which have shown great power in the training of deep CNNs. We find and analyze an attention mechanism in the differentiable forest: each decision depends on only a few important features, while the others are irrelevant. The attention is always sparse. Based on this observation, we improve its sparsity by data-aware initialization, using attribute importance to initialize the attention weights. The learned weights are then much sparser than those from random initialization. Our experiments on several large tabular datasets show that the differentiable forest achieves higher accuracy than GBDT, the state-of-the-art algorithm for tabular data. The source code is available at https://github.com/closest-git/QuantumForest
Tasks
Published 2020-02-29
URL https://arxiv.org/abs/2003.00223v1
PDF https://arxiv.org/pdf/2003.00223v1.pdf
PWC https://paperswithcode.com/paper/deep-differentiable-forest-with-sparse
Repo https://github.com/closest-git/QuantumForest
Framework pytorch
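
A soft binary decision tree of the kind described, where each internal node attends over features and routes samples with a sigmoid gate, can be written compactly in PyTorch. This is my minimal sketch, not the QuantumForest code; the paper's data-aware initialization would replace the random attention init shown here.

```python
# Sketch: a differentiable (soft) binary decision tree with feature attention.
import torch
import torch.nn as nn

class SoftTree(nn.Module):
    def __init__(self, n_features, depth=3):
        super().__init__()
        n_inner = 2 ** depth - 1
        self.depth = depth
        self.attn = nn.Parameter(torch.randn(n_inner, n_features))
        self.thresh = nn.Parameter(torch.zeros(n_inner))
        self.leaves = nn.Parameter(torch.zeros(2 ** depth))

    def forward(self, x):                        # x: (batch, n_features)
        attn = torch.softmax(self.attn, dim=-1)  # learned attention tends to be sparse
        feat = x @ attn.t()                      # attended feature per inner node
        gate = torch.sigmoid(feat - self.thresh) # soft left/right routing
        prob = x.new_ones(x.shape[0], 1)         # probability of reaching the root
        for d in range(self.depth):
            idx = torch.arange(2 ** d) + 2 ** d - 1
            g = gate[:, idx]                     # gates at this tree level
            prob = torch.stack([prob * g, prob * (1 - g)], dim=-1)
            prob = prob.reshape(x.shape[0], -1)
        return prob @ self.leaves                # (batch,) regression output

tree = SoftTree(n_features=10)
print(tree(torch.randn(4, 10)).shape)            # torch.Size([4])
```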

COVID-19 Image Data Collection

Title COVID-19 Image Data Collection
Authors Joseph Paul Cohen, Paul Morrison, Lan Dao
Abstract This paper describes the initial COVID-19 open image data collection. It was created by assembling medical images from websites and publications and currently contains 123 frontal view X-rays.
Tasks
Published 2020-03-25
URL https://arxiv.org/abs/2003.11597v1
PDF https://arxiv.org/pdf/2003.11597v1.pdf
PWC https://paperswithcode.com/paper/covid-19-image-data-collection
Repo https://github.com/ieee8023/covid-chestxray-dataset
Framework none

How Useful is Self-Supervised Pretraining for Visual Tasks?

Title How Useful is Self-Supervised Pretraining for Visual Tasks?
Authors Alejandro Newell, Jia Deng
Abstract Recent advances have spurred incredible progress in self-supervised pretraining for vision. We investigate what factors may play a role in the utility of these pretraining methods for practitioners. To do this, we evaluate various self-supervised algorithms across a comprehensive array of synthetic datasets and downstream tasks. We prepare a suite of synthetic data that enables an endless supply of annotated images as well as full control over dataset difficulty. Our experiments offer insights into how the utility of self-supervision changes as the number of available labels grows, as well as how the utility changes as a function of the downstream task and the properties of the training data. We also find that linear evaluation does not correlate with finetuning performance. Code and data are available at https://github.com/princeton-vl/selfstudy.
Tasks
Published 2020-03-31
URL https://arxiv.org/abs/2003.14323v1
PDF https://arxiv.org/pdf/2003.14323v1.pdf
PWC https://paperswithcode.com/paper/how-useful-is-self-supervised-pretraining-for
Repo https://github.com/princeton-vl/selfstudy
Framework none
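
For reference, the “linear evaluation” protocol the abstract contrasts with finetuning freezes the pretrained backbone and trains only a linear classifier on its features. A generic PyTorch sketch (not the paper's exact setup):

```python
# Sketch: linear evaluation of a (stand-in) pretrained backbone.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=None)         # stand-in for a pretrained net
backbone.fc = nn.Identity()                      # expose 512-d features
for p in backbone.parameters():
    p.requires_grad = False                      # linear eval: backbone frozen
backbone.eval()

probe = nn.Linear(512, 10)                       # the only trainable part
opt = torch.optim.SGD(probe.parameters(), lr=0.1)

x, y = torch.randn(8, 3, 64, 64), torch.randint(0, 10, (8,))
with torch.no_grad():
    feats = backbone(x)
loss = nn.functional.cross_entropy(probe(feats), y)
opt.zero_grad(); loss.backward(); opt.step()
# finetuning would instead unfreeze the backbone and train everything,
# which the paper finds can rank pretraining methods differently.
```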

SuPer Deep: A Surgical Perception Framework for Robotic Tissue Manipulation using Deep Learning for Feature Extraction

Title SuPer Deep: A Surgical Perception Framework for Robotic Tissue Manipulation using Deep Learning for Feature Extraction
Authors Jingpei Lu, Ambareesh Jayakumari, Florian Richter, Yang Li, Michael C. Yip
Abstract Robotic automation in surgery requires precise tracking of surgical tools and mapping of deformable tissue. Previous works on surgical perception frameworks require significant effort in developing features for surgical tool and tissue tracking. In this work, we overcome the challenge by exploiting deep learning methods for surgical perception. We integrated deep neural networks, capable of efficient feature extraction, into the tissue reconstruction and instrument pose estimation processes. By leveraging transfer learning, the deep learning based approach requires minimal training data and reduced feature engineering efforts to fully perceive a surgical scene. The framework was tested on three publicly available datasets, which use the da Vinci Surgical System, for comprehensive analysis. Experimental results show that our framework achieves state-of-the-art tracking performance in a surgical environment by utilizing deep learning for feature extraction.
Tasks Feature Engineering, Pose Estimation, Transfer Learning
Published 2020-03-07
URL https://arxiv.org/abs/2003.03472v1
PDF https://arxiv.org/pdf/2003.03472v1.pdf
PWC https://paperswithcode.com/paper/super-deep-a-surgical-perception-framework
Repo https://github.com/jingpeilu/psmnet_ros
Framework pytorch

Represented Value Function Approach for Large Scale Multi Agent Reinforcement Learning

Title Represented Value Function Approach for Large Scale Multi Agent Reinforcement Learning
Authors Weiya Ren
Abstract In this paper, we consider the problem of large-scale multi-agent reinforcement learning. First, we study the representation problem of the pairwise value function to reduce the complexity of the interactions among agents. Second, we adopt an l2-norm trick to ensure that the trivial term of the approximated value function is bounded. Third, experimental results on a battle game demonstrate the effectiveness of the proposed approach.
Tasks Multi-agent Reinforcement Learning
Published 2020-01-04
URL https://arxiv.org/abs/2001.01096v2
PDF https://arxiv.org/pdf/2001.01096v2.pdf
PWC https://paperswithcode.com/paper/represented-value-function-approach-for-large
Repo https://github.com/renweiya/RFQ-RFAC-Represented-Value-Function-Approach-for-Large-Scale-Multi-Agent-Reinforcement-Learning
Framework tf

HybridPose: 6D Object Pose Estimation under Hybrid Representations

Title HybridPose: 6D Object Pose Estimation under Hybrid Representations
Authors Chen Song, Jiaru Song, Qixing Huang
Abstract We introduce HybridPose, a novel 6D object pose estimation approach. HybridPose utilizes a hybrid intermediate representation to express different geometric information in the input image, including keypoints, edge vectors, and symmetry correspondences. Compared to a unitary representation, our hybrid representation allows pose regression to exploit more and diverse features when one type of predicted representation is inaccurate (e.g., because of occlusion). The different intermediate representations used by HybridPose can all be predicted by the same simple neural network, and outliers in the predicted intermediate representations are filtered by a robust regression module. Compared to state-of-the-art pose estimation approaches, HybridPose is comparable in running time and significantly more accurate. For example, on the Occlusion Linemod dataset, our method achieves a prediction speed of 30 fps with a mean ADD(-S) accuracy of 79.2%, a 67.4% improvement over the current state-of-the-art approach. The implementation of HybridPose is available at https://github.com/chensong1995/HybridPose.
Tasks 6D Pose Estimation using RGB, Pose Estimation
Published 2020-01-07
URL https://arxiv.org/abs/2001.01869v2
PDF https://arxiv.org/pdf/2001.01869v2.pdf
PWC https://paperswithcode.com/paper/hybridpose-6d-object-pose-estimation-under
Repo https://github.com/chensong1995/HybridPose
Framework pytorch
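
As a rough illustration of the keypoint half of such a pipeline, one standard way to turn predicted 2D keypoints into a 6D pose while rejecting outliers is RANSAC-based PnP. The OpenCV sketch below uses random stand-ins for a detector's outputs and is not HybridPose's robust regression module, which additionally consumes edge vectors and symmetry correspondences.

```python
# Sketch: 2D-3D keypoint correspondences -> 6D pose via RANSAC PnP.
import cv2
import numpy as np

object_pts = np.random.rand(8, 3).astype(np.float32)        # 3D model keypoints
image_pts = np.random.rand(8, 2).astype(np.float32) * 640   # predicted 2D keypoints
K = np.array([[600, 0, 320], [0, 600, 240], [0, 0, 1]], dtype=np.float32)

ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    object_pts, image_pts, K, distCoeffs=None,
    reprojectionError=8.0)        # predictions beyond 8 px count as outliers
if ok:
    R, _ = cv2.Rodrigues(rvec)    # rotation matrix + translation = 6D pose
    print(R, tvec.ravel())
```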