Paper Title
Federated Quantum Natural Gradient Descent for Quantum Federated Learning
Paper Authors
Paper Abstract
The heart of Quantum Federated Learning (QFL) lies in a distributed learning architecture spanning several local quantum devices, and a more efficient training algorithm for QFL is expected to minimize the communication overhead among the different quantum participants. In this work, we put forth an efficient learning algorithm, federated quantum natural gradient descent (FQNGD), applied within a QFL framework that consists of variational quantum circuit (VQC)-based quantum neural networks (QNNs). The FQNGD algorithm requires far fewer training iterations for the QFL model to converge and can therefore significantly reduce the total communication cost among the local quantum devices. Our experiments on a handwritten digit classification dataset corroborate the effectiveness of FQNGD for QFL compared with other federated learning algorithms, in terms of a faster convergence rate on the training set and higher accuracy on the test set.
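
Since the abstract summarizes FQNGD only at a high level, the following is a minimal NumPy sketch of the kind of aggregation step it describes: each device computes a natural-gradient direction F_k^{-1} ∇L_k(θ) from its local VQC, and the server combines these directions so that a single communication round carries a full natural-gradient update. The function name fqngd_round, the dataset-size weighting, the damping term, and the placeholder grad/metric callables are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def fqngd_round(theta, clients, lr=0.05, damping=1e-6):
    """One global round of a federated natural-gradient aggregation (sketch).

    Each client is a dict with:
      "size"   -- number of local training samples,
      "grad"   -- callable theta -> gradient of the local loss,
      "metric" -- callable theta -> local metric tensor (positive semidefinite).
    """
    total = sum(c["size"] for c in clients)
    update = np.zeros_like(theta)
    for c in clients:
        grad = c["grad"](theta)
        metric = c["metric"](theta)
        # Damped linear solve instead of an explicit matrix inverse.
        nat_grad = np.linalg.solve(metric + damping * np.eye(len(theta)), grad)
        # Weight each client's natural-gradient direction by its dataset size.
        update += (c["size"] / total) * nat_grad
    return theta - lr * update

# Toy usage with two synthetic clients whose "metric tensors" are fixed
# positive-definite matrices and whose losses are quadratics (illustrative only).
rng = np.random.default_rng(0)
theta = rng.normal(size=4)

def make_client(size):
    A = np.eye(4) + 0.1 * rng.normal(size=(4, 4))
    F = A @ A.T                      # synthetic positive-definite "metric"
    target = rng.normal(size=4)
    return {"size": size,
            "grad": lambda th, F=F, t=target: F @ (th - t),
            "metric": lambda th, F=F: F}

clients = [make_client(60), make_client(40)]
for _ in range(20):
    theta = fqngd_round(theta, clients)
```

In an actual VQC setting, the local metric tensor would be the Fubini-Study metric of the circuit rather than the fixed matrices used in this toy example; the point of the sketch is only that each communication round transmits a preconditioned (natural-gradient) update rather than a plain gradient.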