Paper Title

Dynamics-Adaptive Continual Reinforcement Learning via Progressive Contextualization

Authors

Tiantian Zhang, Zichuan Lin, Yuxing Wang, Deheng Ye, Qiang Fu, Wei Yang, Xueqian Wang, Bin Liang, Bo Yuan, Xiu Li

Abstract

A key challenge of continual reinforcement learning (CRL) in dynamic environments is to promptly adapt the RL agent's behavior as the environment changes over its lifetime while minimizing catastrophic forgetting of the learned information. To address this challenge, in this article we propose DaCoRL, i.e., dynamics-adaptive continual RL. DaCoRL learns a context-conditioned policy using progressive contextualization, which incrementally clusters a stream of stationary tasks in the dynamic environment into a series of contexts and opts for an expandable multihead neural network to approximate the policy. Specifically, we define a set of tasks with similar dynamics as an environmental context and formalize context inference as a procedure of online Bayesian infinite Gaussian mixture clustering on environment features, resorting to online Bayesian inference to infer the posterior distribution over contexts. Under the assumption of a Chinese restaurant process prior, this technique can accurately classify the current task as a previously seen context, or instantiate a new context as needed, without relying on any external indicator to signal environmental changes in advance. Furthermore, we employ an expandable multihead neural network, whose output layer is expanded synchronously with each newly instantiated context, together with a knowledge distillation regularization term for retaining performance on learned tasks. As a general framework that can be coupled with various deep RL algorithms, DaCoRL features consistent superiority over existing methods in terms of stability, overall performance, and generalization ability, as verified by extensive experiments on several robot navigation and MuJoCo locomotion tasks.
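To make the abstract's mechanism concrete, here is a minimal illustrative Python sketch (not the authors' code) of the context-inference step: online clustering of environment features under a Chinese restaurant process (CRP) prior over isotropic Gaussian contexts. It makes a greedy MAP assignment per task rather than maintaining the full posterior over contexts that the paper's online Bayesian inference computes, and all hyperparameters (`alpha`, `sigma`, `sigma0`, `dim`) are hypothetical choices, not values from the paper.

```python
# Illustrative CRP-based context inference: assign each incoming environment
# feature to the best existing Gaussian context, or open a new context when
# the CRP "new table" term dominates. MAP simplification of the paper's
# online Bayesian posterior inference; hyperparameters are assumptions.
import numpy as np

class CRPContextInference:
    def __init__(self, alpha=1.0, sigma=0.5, sigma0=3.0, dim=4):
        self.alpha = alpha    # CRP concentration: propensity to open new contexts
        self.sigma = sigma    # assumed-known within-context feature std
        self.sigma0 = sigma0  # broad zero-mean prior std for a candidate new context
        self.dim = dim
        self.means = []       # running mean of features per context
        self.counts = []      # number of tasks assigned to each context

    def infer(self, x):
        """Assign feature vector x to an existing context or open a new one."""
        x = np.asarray(x, dtype=float)
        n = sum(self.counts)
        scores = []
        for mu, c in zip(self.means, self.counts):
            # log[ CRP prior * Gaussian likelihood ] for an existing context
            scores.append(np.log(c / (n + self.alpha))
                          - self.dim * np.log(self.sigma)
                          - 0.5 * np.sum((x - mu) ** 2) / self.sigma ** 2)
        # "new table" term: CRP mass alpha/(n+alpha) under the broad prior
        scores.append(np.log(self.alpha / (n + self.alpha))
                      - self.dim * np.log(self.sigma0)
                      - 0.5 * np.sum(x ** 2) / self.sigma0 ** 2)
        k = int(np.argmax(scores))
        if k == len(self.means):          # instantiate a new context
            self.means.append(x.copy())
            self.counts.append(1)
        else:                             # online update of the matched context
            self.counts[k] += 1
            self.means[k] += (x - self.means[k]) / self.counts[k]
        return k
```

A companion PyTorch sketch, under the same caveat, of the other two ingredients: a policy network that adds one output head per instantiated context, plus a distillation penalty anchoring old heads to a frozen snapshot taken before training on the new context. The layer sizes, the MSE distillation target (the paper may instead distill action distributions), and the weight `beta` are assumptions.

```python
# Illustrative expandable multihead policy with knowledge distillation.
import copy
import torch
import torch.nn as nn

class ExpandableMultiheadPolicy(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.heads = nn.ModuleList()       # one output head per context
        self.act_dim, self.hidden = act_dim, hidden

    def expand(self):
        """Add a fresh output head when a new context is instantiated."""
        self.heads.append(nn.Linear(self.hidden, self.act_dim))

    def forward(self, obs, context_id):
        return self.heads[context_id](self.trunk(obs))

def distillation_loss(policy, frozen_policy, obs, old_contexts, beta=1.0):
    """Penalize drift of old heads' outputs from the pre-expansion snapshot."""
    loss = 0.0
    for k in old_contexts:
        with torch.no_grad():
            target = frozen_policy(obs, k)
        loss = loss + nn.functional.mse_loss(policy(obs, k), target)
    return beta * loss

# Usage: snapshot the network before a new context, add the penalty to the RL loss.
policy = ExpandableMultiheadPolicy(obs_dim=8, act_dim=2)
policy.expand()                            # context 0 learned earlier
frozen = copy.deepcopy(policy).eval()      # snapshot of the learned heads
policy.expand()                            # context 1 is instantiated
obs = torch.randn(32, 8)
reg = distillation_loss(policy, frozen, obs, old_contexts=[0])
```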
