Title
GPNet: Simplifying Graph Neural Networks via Multi-channel Geometric Polynomials
Authors
Abstract
Graph Neural Networks (GNNs) are a promising deep learning approach for tackling many real-world problems on graph-structured data. However, these models usually suffer from at least one of four fundamental limitations: over-smoothing, over-fitting, difficulty of training, and a strong homophily assumption. For example, Simple Graph Convolution (SGC) is known to suffer from the first and fourth limitations. To tackle these limitations, we identify a set of key designs, including (D1) dilated convolution, (D2) multi-channel learning, (D3) self-attention score, and (D4) sign factor, to boost learning from different types (i.e., homophily and heterophily) and scales (i.e., small, medium, and large) of networks, and combine them into a graph neural network, GPNet, a simple and efficient one-layer model. We theoretically analyze the model and show that it can approximate various graph filters by adjusting the self-attention score and sign factor. Experiments show that GPNet consistently outperforms baselines in terms of average rank, average accuracy, complexity, and parameters on semi-supervised and fully-supervised tasks, and achieves competitive performance compared to state-of-the-art models on the inductive learning task.
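The abstract names the four designs (D1-D4) but gives no implementation. The following is a minimal, hypothetical PyTorch sketch of how a one-layer model might wire those designs together: dilated propagation over a normalized adjacency (D1), per-channel linear heads (D2), learnable per-hop attention weights (D3), and learnable sign factors (D4). All names here (GPNetSketch, attn, sign, hops, dilation) are illustrative assumptions, not the authors' code, and the real GPNet architecture may differ.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GPNetSketch(nn.Module):
    """Hypothetical one-layer model combining the four designs (D1-D4)
    named in the abstract. Not the authors' implementation."""

    def __init__(self, in_dim, num_classes, num_channels=3, dilation=2, hops=4):
        super().__init__()
        self.dilation = dilation  # (D1) propagation stride: use A^d, A^{2d}, ...
        self.hops = hops
        # (D3) learnable self-attention scores, one per channel and hop
        self.attn = nn.Parameter(torch.ones(num_channels, hops))
        # (D4) learnable sign factors, squashed into [-1, 1] via tanh
        self.sign = nn.Parameter(torch.zeros(num_channels, hops))
        # (D2) one linear head per channel; channel outputs are summed
        self.heads = nn.ModuleList(
            nn.Linear(in_dim, num_classes) for _ in range(num_channels))

    def forward(self, x, adj_norm):
        # x: [N, in_dim] node features; adj_norm: [N, N] normalized adjacency
        # (dense or sparse). Precompute dilated powers of the adjacency (D1).
        feats, h = [], x
        for _ in range(self.hops):
            for _ in range(self.dilation):
                h = adj_norm @ h
            feats.append(h)
        out = 0
        for c, head in enumerate(self.heads):      # (D2) channels
            w = F.softmax(self.attn[c], dim=0)     # (D3) per-hop scores
            s = torch.tanh(self.sign[c])           # (D4) per-hop signs
            z = sum(w[k] * s[k] * feats[k] for k in range(self.hops))
            out = out + head(z)
        return out

Because the propagated features depend only on fixed parameters of the graph, the inner powers could be precomputed once, which is consistent with the abstract's claim of a simple and efficient one-layer model; the signed attention weights are what let such a polynomial filter flip between low-pass (homophily) and high-pass (heterophily) behavior.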