Paper Title

Interpretable Deep Graph Generation with Node-Edge Co-Disentanglement

Paper Authors

Xiaojie Guo, Liang Zhao, Zhao Qin, Lingfei Wu, Amarda Shehu, Yanfang Ye

Paper Abstract

Disentangled representation learning has recently attracted a significant amount of attention, particularly in the field of image representation learning. However, learning the disentangled representations behind a graph remains largely unexplored, especially for the attributed graph with both node and edge features. Disentanglement learning for graph generation has substantial new challenges including 1) the lack of graph deconvolution operations to jointly decode node and edge attributes; and 2) the difficulty in enforcing the disentanglement among latent factors that respectively influence: i) only nodes, ii) only edges, and iii) joint patterns between them. To address these challenges, we propose a new disentanglement enhancement framework for deep generative models for attributed graphs. In particular, a novel variational objective is proposed to disentangle the above three types of latent factors, with a novel architecture for node and edge deconvolutions. Moreover, within each type, individual-factor-wise disentanglement is further enhanced, which is shown to be a generalization of the existing framework for images. Qualitative and quantitative experiments on both synthetic and real-world datasets demonstrate the effectiveness of the proposed model and its extensions.
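
The sketch below is only an illustration of the three-way latent split described in the abstract (node-only, edge-only, and joint factors feeding separate node and edge decoders); it is not the authors' architecture. The class name, plain MLP encoders/decoders, dimensions, and loss weighting are all assumptions made for exposition; the paper's actual model uses dedicated graph (de)convolution operations and a more elaborate variational objective.

```python
# Minimal sketch (assumed, not the paper's model): a VAE whose latent space is
# partitioned into node-only, edge-only, and joint factors, with the node decoder
# reading (z_node, z_joint) and the edge decoder reading (z_edge, z_joint).
import torch
import torch.nn as nn
import torch.nn.functional as F


class NodeEdgeCoDisentangledVAE(nn.Module):
    def __init__(self, n_nodes=8, node_feat=4, edge_feat=2, z_dim=16):
        super().__init__()
        self.n_nodes, self.node_feat, self.edge_feat = n_nodes, node_feat, edge_feat
        in_dim = n_nodes * node_feat + n_nodes * n_nodes * edge_feat
        # One encoder per latent partition: node-only, edge-only, joint.
        self.enc = nn.ModuleDict({
            k: nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, 2 * z_dim))
            for k in ("node", "edge", "joint")
        })
        # Decoders: nodes depend on (z_node, z_joint); edges on (z_edge, z_joint).
        self.dec_node = nn.Linear(2 * z_dim, n_nodes * node_feat)
        self.dec_edge = nn.Linear(2 * z_dim, n_nodes * n_nodes * edge_feat)

    def encode(self, x):
        # Each encoder outputs (mu, logvar); sample z via reparameterization.
        stats = {k: enc(x).chunk(2, dim=-1) for k, enc in self.enc.items()}
        z = {k: mu + torch.randn_like(mu) * (0.5 * logvar).exp()
             for k, (mu, logvar) in stats.items()}
        return z, stats

    def decode(self, z):
        nodes = self.dec_node(torch.cat([z["node"], z["joint"]], dim=-1))
        edges = self.dec_edge(torch.cat([z["edge"], z["joint"]], dim=-1))
        return (nodes.view(-1, self.n_nodes, self.node_feat),
                edges.view(-1, self.n_nodes, self.n_nodes, self.edge_feat))

    def loss(self, node_x, edge_x):
        x = torch.cat([node_x.flatten(1), edge_x.flatten(1)], dim=-1)
        z, stats = self.encode(x)
        node_hat, edge_hat = self.decode(z)
        recon = F.mse_loss(node_hat, node_x) + F.mse_loss(edge_hat, edge_x)
        # Separate KL terms per partition, so each factor group can be
        # regularized (and in the paper's spirit, disentangled) independently.
        kl = sum(-0.5 * (1 + lv - mu.pow(2) - lv.exp()).sum(-1).mean()
                 for mu, lv in stats.values())
        return recon + kl


# Quick smoke test on random node/edge tensors.
model = NodeEdgeCoDisentangledVAE()
nodes = torch.randn(5, 8, 4)
edges = torch.randn(5, 8, 8, 2)
print(model.loss(nodes, edges).item())
```

The design choice worth noting is the factorized latent code: because the node decoder never sees z_edge and the edge decoder never sees z_node, the joint partition is the only path through which node-edge correlations can be expressed, which is one simple way to encourage the node/edge/joint separation the abstract describes.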
