Paper Title

Towards Unified Conversational Recommender Systems via Knowledge-Enhanced Prompt Learning

Paper Authors

Xiaolei Wang, Kun Zhou, Ji-Rong Wen, Wayne Xin Zhao

Paper Abstract

Conversational recommender systems (CRS) aim to proactively elicit user preference and recommend high-quality items through natural language conversations. Typically, a CRS consists of a recommendation module to predict preferred items for users and a conversation module to generate appropriate responses. To develop an effective CRS, it is essential to seamlessly integrate the two modules. Existing works either design semantic alignment strategies, or share knowledge resources and representations between the two modules. However, these approaches still rely on different architectures or techniques to develop the two modules, making it difficult for effective module integration. To address this problem, we propose a unified CRS model named UniCRS based on knowledge-enhanced prompt learning. Our approach unifies the recommendation and conversation subtasks into the prompt learning paradigm, and utilizes knowledge-enhanced prompts based on a fixed pre-trained language model (PLM) to fulfill both subtasks in a unified approach. In the prompt design, we include fused knowledge representations, task-specific soft tokens, and the dialogue context, which can provide sufficient contextual information to adapt the PLM for the CRS task. Besides, for the recommendation subtask, we also incorporate the generated response template as an important part of the prompt, to enhance the information interaction between the two subtasks. Extensive experiments on two public CRS datasets have demonstrated the effectiveness of our approach.
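The abstract describes prompts assembled from fused knowledge representations, task-specific soft tokens, and the dialogue context, all fed to a fixed pre-trained language model. The snippet below is a minimal illustrative sketch of that general idea in PyTorch with Hugging Face Transformers; the class name `KnowledgePromptedPLM`, the linear fusion layer `fuse_proj`, the GPT-2 backbone, and all sizes are assumptions for illustration, not the paper's exact UniCRS architecture.

```python
# Minimal sketch of knowledge-enhanced prompt learning with a frozen PLM.
# Assumptions (not from the paper): GPT-2 backbone, a single linear layer
# for knowledge fusion, 10 soft prompt tokens, 128-dim entity embeddings.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer


class KnowledgePromptedPLM(nn.Module):
    def __init__(self, plm_name="gpt2", n_soft_tokens=10, kg_dim=128):
        super().__init__()
        self.plm = GPT2LMHeadModel.from_pretrained(plm_name)
        for p in self.plm.parameters():          # keep the PLM fixed
            p.requires_grad = False
        d = self.plm.config.n_embd
        # task-specific soft prompt tokens (trainable)
        self.soft_tokens = nn.Parameter(torch.randn(n_soft_tokens, d) * 0.02)
        # projects pre-computed knowledge-graph entity embeddings into the
        # PLM embedding space (a stand-in for the fused knowledge representation)
        self.fuse_proj = nn.Linear(kg_dim, d)

    def forward(self, input_ids, kg_embeds):
        # dialogue context -> word embeddings of the frozen PLM
        ctx = self.plm.get_input_embeddings()(input_ids)   # (B, L, d)
        know = self.fuse_proj(kg_embeds)                   # (B, K, d)
        soft = self.soft_tokens.unsqueeze(0).expand(ctx.size(0), -1, -1)
        # prompt = [knowledge; soft tokens; dialogue context]
        inputs_embeds = torch.cat([know, soft, ctx], dim=1)
        return self.plm(inputs_embeds=inputs_embeds)


# Usage: encode a dialogue turn and pair it with (here random) entity embeddings.
tok = GPT2Tokenizer.from_pretrained("gpt2")
ids = tok("User: I loved Inception. Any similar movies?",
          return_tensors="pt").input_ids
model = KnowledgePromptedPLM()
out = model(ids, kg_embeds=torch.randn(1, 4, 128))
print(out.logits.shape)  # (1, K + n_soft_tokens + L, vocab_size)
```

For the recommendation subtask, the abstract states that the generated response template is also included in the prompt; in a sketch like this, that would amount to concatenating the template's token embeddings alongside the dialogue context before feeding the sequence to the frozen PLM.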
