Paper Title

Optimal Tracking Current Control of Switched Reluctance Motor Drives Using Reinforcement Q-learning Scheduling

Paper Authors

Hamad A. Alharkan, Sepehr Saadatmand, Mehdi Ferdowsi, Pourya Shamsi

Paper Abstract

In this paper, a novel Q-learning scheduling method for the current controller of a switched reluctance motor (SRM) drive is investigated. The Q-learning algorithm is a class of reinforcement learning approaches that can find the optimal forward-in-time solution of a linear control problem. This paper introduces a new scheduled Q-learning algorithm that uses a table of Q-cores lying on the nonlinear surface of the SRM model, without requiring any information about the model parameters, to track the reference current trajectory by scheduling infinite-horizon linear quadratic trackers (LQTs) handled by Q-learning algorithms. Additionally, a linear interpolation algorithm is proposed to guide the transition of the LQT between trained Q-cores and ensure a smooth response as the state variables evolve on the nonlinear surface of the model. Lastly, simulation and experimental results are provided to validate the effectiveness of the proposed control scheme.
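To make the scheduling idea concrete, the sketch below illustrates (in Python) how trained Q-cores could be gain-scheduled with linear interpolation over rotor position. It is a minimal illustration under assumptions, not the authors' implementation: the `QCore` container, the rotor-position grid, the augmented tracking state `[i, i_ref]`, and all gain values are hypothetical placeholders introduced only for this example.

```python
# Minimal sketch (assumed structure, not the paper's code): each Q-core stores an
# LQT feedback gain learned by Q-learning at one rotor-position operating point,
# and the active gain is obtained by linear interpolation between neighboring cores.
import numpy as np

class QCore:
    """Illustrative container: an LQT gain trained at one operating point."""
    def __init__(self, theta, K):
        self.theta = theta          # rotor position (rad) where the core was trained
        self.K = np.asarray(K)      # gain acting on the augmented state [i, i_ref]

def interpolate_gain(cores, theta):
    """Linearly interpolate the LQT gain between the two neighboring Q-cores."""
    thetas = np.array([c.theta for c in cores])
    idx = np.searchsorted(thetas, theta)
    if idx == 0:
        return cores[0].K
    if idx >= len(cores):
        return cores[-1].K
    lo, hi = cores[idx - 1], cores[idx]
    w = (theta - lo.theta) / (hi.theta - lo.theta)   # interpolation weight in [0, 1]
    return (1.0 - w) * lo.K + w * hi.K

def control_voltage(cores, theta, i_meas, i_ref):
    """Phase-voltage command u = -K(theta) @ [i, i_ref] from the scheduled gain."""
    K = interpolate_gain(cores, theta)
    x_aug = np.array([i_meas, i_ref])                # augmented tracking state
    return float(-K @ x_aug)

if __name__ == "__main__":
    # Hypothetical gains at three rotor positions (placeholder values).
    cores = [QCore(0.0, [0.8, -0.8]),
             QCore(0.2, [1.1, -1.0]),
             QCore(0.4, [1.5, -1.3])]
    print(control_voltage(cores, theta=0.3, i_meas=4.0, i_ref=5.0))
```

The interpolation step mirrors the smooth hand-off described in the abstract: as the state moves along the nonlinear surface, the controller blends the gains of the two nearest trained Q-cores instead of switching abruptly between them.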
