Paper Title

LAVA: Latent Action Spaces via Variational Auto-encoding for Dialogue Policy Optimization

Paper Authors

Lubis, Nurul, Geishauser, Christian, Heck, Michael, Lin, Hsien-chin, Moresi, Marco, van Niekerk, Carel, Gašić, Milica

Paper Abstract

Reinforcement learning (RL) can enable task-oriented dialogue systems to steer the conversation towards successful task completion. In an end-to-end setting, a response can be constructed in a word-level sequential decision-making process with the entire system vocabulary as the action space. Policies trained in such a fashion do not require expert-defined action spaces, but they have to deal with large action spaces and long trajectories, making RL impractical. Using the latent space of a variational model as the action space alleviates this problem. However, current approaches use an uninformed prior for training and optimize the latent distribution solely on the context. It is therefore unclear whether the latent representation truly encodes the characteristics of different actions. In this paper, we explore three ways of leveraging an auxiliary task to shape the latent variable distribution: via pre-training, to obtain an informed prior, and via multitask learning. We choose response auto-encoding as the auxiliary task, as this captures the generative factors of dialogue responses while requiring low computational cost and neither additional data nor labels. Our approach yields more action-characterized latent representations, which support end-to-end dialogue policy optimization and achieve state-of-the-art success rates. These results warrant a more widespread use of RL in end-to-end dialogue models.
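
The abstract describes using the latent space of a response auto-encoder as a compact action space for the dialogue policy. The sketch below is only an illustrative approximation of that idea, not the authors' implementation: it assumes a bag-of-words response VAE in PyTorch, and all layer sizes, the KL weight, and the names `ResponseVAE` and `auto_encoding_loss` are hypothetical choices for this example.

```python
# Illustrative sketch (not the LAVA code): a response auto-encoder whose latent
# variable z can later serve as the action space of a dialogue policy.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResponseVAE(nn.Module):
    def __init__(self, vocab_size: int, hidden: int = 256, latent: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(vocab_size, hidden), nn.ReLU())
        self.to_mu = nn.Linear(hidden, latent)      # mean of q(z | response)
        self.to_logvar = nn.Linear(hidden, latent)  # log-variance of q(z | response)
        self.decoder = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                     nn.Linear(hidden, vocab_size))

    def forward(self, response_bow: torch.Tensor):
        h = self.encoder(response_bow)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar

def auto_encoding_loss(logits, response_bow, mu, logvar, kl_weight: float = 0.1):
    # Reconstruction term: predict which tokens occur in the response.
    recon = F.binary_cross_entropy_with_logits(logits, response_bow)
    # KL term keeps q(z | response) close to a standard normal prior.
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl_weight * kl

# Toy usage with a random batch of bag-of-words response vectors.
vae = ResponseVAE(vocab_size=1000)
batch = torch.bernoulli(torch.full((8, 1000), 0.01))
logits, mu, logvar = vae(batch)
auto_encoding_loss(logits, batch, mu, logvar).backward()
```

Under this reading, the auto-encoder can be used for pre-training or as an informed prior, or its reconstruction loss can be optimized jointly with the policy objective (multitask learning), so that points in the latent space correspond to distinct response characteristics rather than only to dialogue contexts.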
