Paper Title

Federated Learning Meets Multi-objective Optimization

Authors

Zeou Hu, Kiarash Shaloudegi, Guojun Zhang, Yaoliang Yu

Abstract

Federated learning has emerged as a promising, massively distributed way to train a joint deep model across a large number of edge devices while keeping private user data strictly on device. In this work, motivated by ensuring fairness among users and robustness against malicious adversaries, we formulate federated learning as multi-objective optimization and propose a new algorithm, FedMGDA+, that is guaranteed to converge to Pareto stationary solutions. FedMGDA+ is simple to implement, has fewer hyperparameters to tune, and refrains from sacrificing the performance of any participating user. We establish the convergence properties of FedMGDA+ and point out its connections to existing approaches. Extensive experiments on a variety of datasets confirm that FedMGDA+ compares favorably against the state of the art.
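The abstract's multi-objective view builds on the multiple-gradient descent algorithm (MGDA), which finds a common descent direction that does not increase any user's loss by taking the min-norm point of the convex hull of per-user gradients. The sketch below, a minimal illustration and not the paper's exact implementation (function names and the Frank-Wolfe solver are my assumptions), shows that core step:

```python
import numpy as np

def min_norm_weights(grads, iters=200):
    """Frank-Wolfe solve of: min over simplex lam of || sum_i lam_i * g_i ||^2.

    grads: (m, d) array, one per-user gradient per row.
    Returns simplex weights lam (hypothetical helper, not the paper's API).
    """
    m = grads.shape[0]
    lam = np.full(m, 1.0 / m)          # start at the uniform weighting
    G = grads @ grads.T                # Gram matrix of pairwise inner products
    for _ in range(iters):
        v = G @ lam                    # gradient of the quadratic (up to a factor 2)
        i = int(np.argmin(v))          # best simplex vertex for the linearized problem
        e = np.zeros(m)
        e[i] = 1.0
        d = e - lam                    # Frank-Wolfe direction
        gap = -(v @ d)                 # duality gap; zero at the optimum
        denom = d @ G @ d
        if gap <= 1e-12 or denom <= 1e-12:
            break
        lam = lam + min(1.0, gap / denom) * d   # exact line search for a quadratic
    return lam

def common_descent_direction(grads):
    """Direction with nonnegative alignment to every -g_i (Pareto descent)."""
    lam = min_norm_weights(grads)
    return -(lam[:, None] * grads).sum(axis=0)
```

With two directly conflicting gradients the min-norm combination is zero, signaling a Pareto stationary point; with orthogonal gradients it returns a direction that decreases both losses equally.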
