April 1, 2020

378 words 2 mins read

Paper Group NAWR 7

GL2vec: Graph Embedding Enriched by Line Graphs with Edge Features

Title GL2vec: Graph Embedding Enriched by Line Graphs with Edge Features
Authors Hong Chen, Hisashi Koga
Abstract Recently, several techniques have been proposed to learn embeddings for a given graph dataset. Among them, Graph2vec is significant in that it learns the embeddings of entire graphs in an unsupervised manner, which is useful for graph classification. This paper develops an algorithm that improves on Graph2vec. First, we point out two limitations of Graph2vec: (1) edge labels cannot be handled, and (2) Graph2vec does not always preserve enough structural information to evaluate structural similarity, because it bundles the node label information and the structural information when extracting subgraphs. Our algorithm overcomes these limitations by exploiting the line graphs (edge-to-vertex dual graphs) of the given graphs. Specifically, it complements either the edge label information or the structural information that Graph2vec misses with the embeddings of the line graphs. Our method is named GL2vec (Graph and Line graph to vector) because it concatenates the embedding of an original graph with that of the corresponding line graph. Experimentally, GL2vec achieves significant improvements over Graph2vec on the graph classification task for many benchmark datasets.
Tasks Graph Classification, Graph Embedding
Published 2020-01-23
URL https://link.springer.com/chapter/10.1007/978-3-030-36718-3_1
PDF https://link.springer.com/chapter/10.1007/978-3-030-36718-3_1
PWC https://paperswithcode.com/paper/gl2vec-graph-embedding-enriched-by-line
Repo https://github.com/benedekrozemberczki/karateclub
Framework none
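The edge-to-vertex dual construction at the heart of GL2vec is easy to illustrate. The sketch below is a minimal pure-Python version (not the authors' implementation; a maintained one lives in the karateclub repo linked above): each edge of the original graph becomes a node of the line graph, and two such nodes are adjacent exactly when the original edges share an endpoint.

```python
from itertools import combinations

def line_graph(edges):
    """Build the line graph (edge-to-vertex dual) of an undirected graph.

    Each edge of the original graph becomes a node of the line graph;
    two line-graph nodes are adjacent iff the original edges share an
    endpoint.
    """
    nodes = [tuple(sorted(e)) for e in edges]
    adj = []
    for e1, e2 in combinations(nodes, 2):
        if set(e1) & set(e2):  # original edges share a vertex
            adj.append((e1, e2))
    return nodes, adj

# A triangle 0-1-2 with a pendant vertex 3 attached to 2:
nodes, adj = line_graph([(0, 1), (1, 2), (2, 0), (2, 3)])
print(nodes)  # four line-graph nodes, one per original edge
print(adj)    # five adjacencies among them
```

Per the abstract, GL2vec then runs a Graph2vec-style embedding on both the original graph and its line graph and concatenates the two vectors, so edge-label or structural information missed on the original graph is recovered from the dual.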
Title AtomNAS: Fine-Grained End-to-End Neural Architecture Search
Authors Anonymous
Abstract Designing the search space is a critical problem for neural architecture search (NAS) algorithms. We propose a fine-grained search space composed of atomic blocks, a minimal search unit much smaller than the ones used in recent NAS algorithms. This search space facilitates the direct selection of channel numbers and kernel sizes in convolutions. In addition, we propose a resource-constrained architecture search algorithm that dynamically selects atomic blocks during training. The algorithm is further accelerated by a dynamic network shrinkage technique. Instead of a two-stage search-and-retrain paradigm, our method searches and trains the target architecture simultaneously, in an end-to-end manner. Our method achieves state-of-the-art performance under several FLOPs configurations on ImageNet with negligible search cost.
Tasks Image Classification, Neural Architecture Search
Published 2020-01-01
URL https://openreview.net/forum?id=BylQSxHFwr
PDF https://openreview.net/pdf?id=BylQSxHFwr
PWC https://paperswithcode.com/paper/atomnas-fine-grained-end-to-end-neural
Repo https://github.com/meijieru/AtomNAS
Framework pytorch
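The resource-constrained selection described in the abstract can be caricatured as keeping the most important atomic blocks that fit a FLOPs budget. The sketch below is a hypothetical simplification, not the paper's method (AtomNAS learns block importance during training and shrinks the network dynamically); names, scores, and costs are invented for illustration.

```python
def shrink(blocks, flops_budget):
    """Greedy sketch of resource-constrained atomic-block selection.

    `blocks` is a list of (name, importance, flops) tuples. Keep the
    most important blocks while the total cost stays within the FLOPs
    budget; the rest are pruned, mimicking dynamic network shrinkage.
    """
    kept, total = [], 0
    for name, importance, flops in sorted(blocks, key=lambda b: -b[1]):
        if total + flops <= flops_budget:
            kept.append(name)
            total += flops
    return kept, total

# Hypothetical atomic blocks: (name, importance score, FLOPs cost)
blocks = [
    ("conv3x3_c32", 0.9, 40),
    ("conv5x5_c16", 0.7, 35),
    ("conv3x3_c16", 0.4, 20),
    ("conv7x7_c8",  0.1, 30),
]
kept, cost = shrink(blocks, flops_budget=100)
print(kept, cost)  # keeps the top three blocks at a cost of 95 FLOPs
```

In the paper the selection happens inside a single training run rather than as a post-hoc pass, which is what lets search and training proceed end-to-end.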