Paper Title
DAN-SNR: A Deep Attentive Network for Social-Aware Next Point-of-Interest Recommendation

Paper Authors

Liwei Huang, Yutao Ma, Yanbo Liu, Keqing He

Abstract
Next (or successive) point-of-interest (POI) recommendation has attracted increasing attention in recent years. Most of the previous studies attempted to incorporate the spatiotemporal information and sequential patterns of user check-ins into recommendation models to predict the target user's next move. However, none of these approaches utilized the social influence of each user's friends. In this study, we discuss a new topic of next POI recommendation and present a deep attentive network for social-aware next POI recommendation called DAN-SNR. In particular, the DAN-SNR makes use of the self-attention mechanism instead of the architecture of recurrent neural networks to model sequential influence and social influence in a unified manner. Moreover, we design and implement two parallel channels to capture short-term user preference and long-term user preference as well as social influence, respectively. By leveraging multi-head self-attention, the DAN-SNR can model long-range dependencies between any two historical check-ins efficiently and weigh their contributions to the next destination adaptively. Also, we carried out a comprehensive evaluation using large-scale real-world datasets collected from two popular location-based social networks, namely Gowalla and Brightkite. Experimental results indicate that the DAN-SNR outperforms seven competitive baseline approaches regarding recommendation performance and is of high efficiency among six neural-network- and attention-based methods.
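The abstract's key architectural claim is that multi-head self-attention, rather than a recurrent network, lets the model relate any two historical check-ins directly and weigh their contributions adaptively. As a rough illustration of that mechanism (not the paper's actual implementation — all dimensions, weight matrices, and the plain-numpy formulation here are illustrative assumptions), a minimal multi-head self-attention pass over a sequence of check-in embeddings looks like this:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, Wq, Wk, Wv, Wo, num_heads):
    """Scaled dot-product self-attention over a check-in sequence.

    X: (seq_len, d_model) embeddings of historical check-ins.
    Every position attends over the whole sequence, so any two
    check-ins interact directly regardless of how far apart they
    are in the history (the "long-range dependency" property).
    """
    seq_len, d_model = X.shape
    d_head = d_model // num_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    head_outputs = []
    for h in range(num_heads):
        s = slice(h * d_head, (h + 1) * d_head)
        scores = (Q[:, s] @ K[:, s].T) / np.sqrt(d_head)
        # softmax weights = adaptive contribution of each past check-in
        weights = softmax(scores, axis=-1)
        head_outputs.append(weights @ V[:, s])
    # concatenate heads, then project back to d_model
    return np.concatenate(head_outputs, axis=-1) @ Wo

rng = np.random.default_rng(0)
d_model, seq_len, num_heads = 8, 5, 2  # toy sizes, chosen arbitrarily
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv, Wo = (rng.normal(size=(d_model, d_model)) for _ in range(4))
out = multi_head_self_attention(X, Wq, Wk, Wv, Wo, num_heads)
print(out.shape)  # one contextualized vector per check-in: (5, 8)
```

In DAN-SNR's setup, two such channels would run in parallel: one over the user's own recent check-ins (short-term preference) and one over the full history plus friends' check-ins (long-term preference and social influence).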