Paper Title

Universal Prompt Tuning for Graph Neural Networks

Authors

Taoran Fang, Yunchao Zhang, Yang Yang, Chunping Wang, Lei Chen

Abstract

In recent years, prompt tuning has sparked a research surge in adapting pre-trained models. Unlike the unified pre-training strategy employed in the language field, the graph field exhibits diverse pre-training strategies, posing challenges in designing appropriate prompt-based tuning methods for graph neural networks. While some pioneering work has devised specialized prompting functions for models that employ edge prediction as their pre-training task, these methods are limited to specific pre-trained GNN models and lack broader applicability. In this paper, we introduce a universal prompt-based tuning method called Graph Prompt Feature (GPF) for pre-trained GNN models under any pre-training strategy. GPF operates on the input graph's feature space and can theoretically achieve an effect equivalent to any form of prompting function. Consequently, we no longer need to explicitly specify the prompting function corresponding to each pre-training strategy. Instead, we employ GPF to obtain the prompted graph for the downstream task in an adaptive manner. We provide rigorous derivations to demonstrate the universality of GPF and guarantee its effectiveness. The experimental results under various pre-training strategies indicate that our method performs better than fine-tuning, with an average improvement of about 1.4% in full-shot scenarios and about 3.2% in few-shot scenarios. Moreover, our method significantly outperforms existing specialized prompt-based tuning methods when applied to models utilizing the pre-training strategy they specialize in. These numerous advantages position our method as a compelling alternative to fine-tuning for downstream adaptation.
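
To make the abstract's description more concrete, below is a minimal PyTorch sketch of what a feature-space graph prompt of this kind could look like: a single learnable vector added to every node's input features, tuned on the downstream task while the pre-trained GNN stays frozen. The class and function names (GraphPromptFeature, tune_on_downstream_task) and the pretrained_gnn / task_head / loader interfaces are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class GraphPromptFeature(nn.Module):
    """Sketch of a feature-space graph prompt (assumed form): one shared
    learnable vector added to every node's input features."""

    def __init__(self, feature_dim: int):
        super().__init__()
        # Single prompt vector, broadcast across all nodes of the input graph.
        self.prompt = nn.Parameter(torch.empty(1, feature_dim))
        nn.init.xavier_uniform_(self.prompt)

    def forward(self, node_features: torch.Tensor) -> torch.Tensor:
        # node_features: [num_nodes, feature_dim] -> prompted features, same shape
        return node_features + self.prompt


def tune_on_downstream_task(pretrained_gnn, task_head, prompt, loader,
                            epochs: int = 50, lr: float = 1e-3):
    """Illustrative tuning loop: the pre-trained GNN stays frozen; only the
    prompt vector and the downstream task head receive gradient updates."""
    for param in pretrained_gnn.parameters():
        param.requires_grad_(False)
    optimizer = torch.optim.Adam(
        list(prompt.parameters()) + list(task_head.parameters()), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, edge_index, batch, y in loader:
            optimizer.zero_grad()
            # Prompt the raw node features, then encode with the frozen GNN.
            graph_repr = pretrained_gnn(prompt(x), edge_index, batch)
            loss = loss_fn(task_head(graph_repr), y)
            loss.backward()
            optimizer.step()
```

The design choice worth noting is that the prompt lives in the input feature space rather than inside the model, which is why the same mechanism can be attached to a GNN regardless of how it was pre-trained.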
