Paper Title


DyG2Vec: Efficient Representation Learning for Dynamic Graphs

Authors

Mohammad Ali Alomrani, Mahdi Biparva, Yingxue Zhang, Mark Coates

Abstract


Temporal graph neural networks have shown promising results in learning inductive representations by automatically extracting temporal patterns. However, previous works often rely on complex memory modules or inefficient random walk methods to construct temporal representations. To address these limitations, we present an efficient yet effective attention-based encoder that leverages temporal edge encodings and window-based subgraph sampling to generate task-agnostic embeddings. Moreover, we propose a joint-embedding architecture using non-contrastive SSL to learn rich temporal embeddings without labels. Experimental results on 7 benchmark datasets indicate that on average, our model outperforms SoTA baselines on the future link prediction task by 4.23% for the transductive setting and 3.30% for the inductive setting while only requiring 5-10x less training/inference time. Lastly, different aspects of the proposed framework are investigated through experimental analysis and ablation studies. The code is publicly available at https://github.com/huawei-noah/noah-research/tree/master/graph_atlas.
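The abstract mentions window-based subgraph sampling as one ingredient of the encoder. The idea can be sketched as follows; the function name, fixed-size window, and edge-list representation are illustrative assumptions, not the authors' actual implementation.

```python
# Hedged sketch of window-based temporal subgraph sampling: given a
# stream of timestamped interactions, keep only the most recent
# fixed-size window of edges preceding a query time. This is an
# illustrative assumption about the technique, not the paper's code.

def sample_window_subgraph(edges, t_query, window_size):
    """Return the last `window_size` interactions occurring strictly
    before `t_query`.

    edges: list of (src, dst, timestamp) tuples, sorted by timestamp.
    """
    # Discard edges at or after the query time (no future leakage).
    past = [e for e in edges if e[2] < t_query]
    # The most recent `window_size` interactions form the subgraph.
    return past[-window_size:]

edges = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 3.0), (2, 3, 4.0)]
subgraph = sample_window_subgraph(edges, t_query=4.0, window_size=2)
# subgraph == [(1, 2, 2.0), (0, 2, 3.0)]
```

Restricting attention to a recent window keeps the sampled subgraph size bounded regardless of history length, which is consistent with the efficiency claims in the abstract.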
