Paper Title
SIGN: Scalable Inception Graph Neural Networks
Paper Authors
Paper Abstract
Graph representation learning has recently been applied to a broad spectrum of problems ranging from computer graphics and chemistry to high-energy physics and social media. The popularity of graph neural networks has sparked interest, both in academia and in industry, in developing methods that scale to very large graphs such as the Facebook or Twitter social networks. In most of these approaches, the computational cost is alleviated by a sampling strategy that retains a subset of node neighbors or subgraphs at training time. In this paper we propose a new, efficient and scalable graph deep learning architecture that sidesteps the need for graph sampling by using graph convolutional filters of different sizes that are amenable to efficient precomputation, allowing extremely fast training and inference. Our architecture allows using different local graph operators (e.g. motif-induced adjacency matrices or the Personalized PageRank diffusion matrix) to best suit the task at hand. We conduct an extensive experimental evaluation on various open benchmarks and show that our approach is competitive with other state-of-the-art architectures, while requiring a fraction of the training and inference time. Moreover, we obtain state-of-the-art results on ogbn-papers100M, the largest public graph dataset, with over 110 million nodes and 1.5 billion edges.
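The core mechanism the abstract describes, precomputing multi-size graph convolutional filters once so that training reduces to a sampling-free feedforward model, can be illustrated with a short sketch. The following is a minimal illustration under stated assumptions, not the authors' implementation: it uses the symmetrically normalized adjacency matrix with self-loops as the diffusion operator, and the function name `sign_precompute` and the choice of `r` powers are hypothetical, chosen for this example only.

```python
import numpy as np
import scipy.sparse as sp

def sign_precompute(adj, feats, r=3):
    """Stack [X, AX, A^2 X, ..., A^r X] into one wide feature matrix.

    adj:   scipy.sparse adjacency matrix (n x n)
    feats: dense node feature matrix X (n x d)
    r:     number of diffusion powers to precompute
    """
    n = adj.shape[0]
    # Symmetric normalization with self-loops: A_hat = D^{-1/2} (A + I) D^{-1/2}
    a = (adj + sp.eye(n)).tocsr()
    deg = np.asarray(a.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(deg))
    a_hat = d_inv_sqrt @ a @ d_inv_sqrt

    outs = [feats]
    x = feats
    for _ in range(r):
        x = a_hat @ x  # one extra hop of diffusion per power
        outs.append(x)
    # Concatenating the powers gives each node a multi-scale receptive field;
    # a plain MLP trained on these fixed features needs no graph at run time.
    return np.concatenate(outs, axis=1)

# Usage on a toy graph: 4 nodes in a path, 2-dimensional features.
adj = sp.csr_matrix(np.array([[0, 1, 0, 0],
                              [1, 0, 1, 0],
                              [0, 1, 0, 1],
                              [0, 0, 1, 0]], dtype=float))
feats = np.random.rand(4, 2)
stacked = sign_precompute(adj, feats, r=2)
print(stacked.shape)  # (4, 6): 2 original features times (r + 1) = 3 blocks
```

Since the stacked matrix is computed once as a preprocessing step, any of the local operators mentioned in the abstract (for instance a motif-induced adjacency or a Personalized PageRank diffusion matrix) could stand in for `a_hat` without changing the downstream training loop.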