Paper Title
On the Generalization of Wasserstein Robust Federated Learning
Paper Authors
Paper Abstract
In federated learning, participating clients typically hold non-i.i.d. data, which poses a significant challenge to generalization on unseen distributions. To address this, we propose a Wasserstein distributionally robust optimization scheme called WAFL. Leveraging its duality, we frame WAFL as an empirical surrogate risk minimization problem and solve it with a local SGD-based algorithm that has convergence guarantees. We show that the robustness of WAFL is more general than that of related approaches, and that the generalization bound holds for all adversarial distributions inside the Wasserstein ball (the ambiguity set). Because the center location and radius of the Wasserstein ball can be suitably modified, WAFL is applicable not only to robustness but also to domain adaptation. Through empirical evaluation, we demonstrate that WAFL generalizes better than vanilla FedAvg in non-i.i.d. settings and is more robust than related methods under distribution shift. Further, using benchmark datasets, we show that WAFL can generalize to unseen target domains.
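For context, the Wasserstein DRO problem the abstract refers to has a standard form with a well-known strong-duality reformulation; the statement below is the generic textbook version, not necessarily the exact WAFL formulation. Here $\widehat{P}$ denotes the empirical center of the ball, $\rho$ its radius, $c$ a ground cost, and $\lambda \ge 0$ the dual multiplier, all notation introduced for illustration:

```latex
\min_{\theta}\ \sup_{Q:\, W_c(Q,\widehat{P}) \le \rho}\ \mathbb{E}_{\xi \sim Q}\big[\ell(\theta;\xi)\big]
\;=\;
\min_{\theta}\ \min_{\lambda \ge 0}\ \Big\{ \lambda\rho
  + \mathbb{E}_{\xi \sim \widehat{P}}\Big[\sup_{\xi'}\big(\ell(\theta;\xi') - \lambda\, c(\xi',\xi)\big)\Big] \Big\}
```

The expectation over the empirical data on the right-hand side is the kind of surrogate risk a local SGD-based federated algorithm can minimize. The sketch below is a minimal, self-contained illustration of one plausible instantiation, not the authors' reference implementation: each client approximates the inner supremum with a few gradient-ascent steps on a perturbed copy of the input (a WRM-style surrogate), runs local SGD on the resulting loss, and the server averages parameters as in FedAvg. All function names, the fixed multiplier `lam`, and every hyperparameter are illustrative assumptions.

```python
# Minimal sketch: FedAvg-style training on a Wasserstein-robust surrogate loss.
# The inner sup of the dual is approximated by gradient ascent on a perturbed
# input; `lam` stands in for the dual multiplier and is held fixed here.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


def robust_surrogate_loss(model, x, y, lam, ascent_steps=5, ascent_lr=0.1):
    """Approximate psi_lam(theta; x, y) = sup_{x'} [loss(theta; x', y) - lam*||x' - x||^2]."""
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(ascent_steps):
        obj = F.cross_entropy(model(x_adv), y) - lam * ((x_adv - x) ** 2).sum(dim=1).mean()
        (grad,) = torch.autograd.grad(obj, x_adv)
        x_adv = (x_adv + ascent_lr * grad).detach().requires_grad_(True)
    return F.cross_entropy(model(x_adv), y) - lam * ((x_adv - x) ** 2).sum(dim=1).mean()


def local_sgd(global_model, batch, lam, local_steps=10, lr=0.05):
    """One client's update: a few local SGD steps on the robust surrogate loss."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    x, y = batch
    for _ in range(local_steps):
        opt.zero_grad()
        robust_surrogate_loss(model, x, y, lam).backward()
        opt.step()
    return model.state_dict()


def fedavg_round(global_model, client_batches, lam):
    """Server step: average the clients' locally trained parameters."""
    states = [local_sgd(global_model, b, lam) for b in client_batches]
    avg = {k: torch.stack([s[k] for s in states]).mean(dim=0) for k in states[0]}
    global_model.load_state_dict(avg)


if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Linear(2, 2)  # toy classifier
    # Two synthetic non-i.i.d. clients with shifted feature distributions.
    clients = [
        (torch.randn(64, 2) + 1.0, torch.randint(0, 2, (64,))),
        (torch.randn(64, 2) - 1.0, torch.randint(0, 2, (64,))),
    ]
    for _ in range(5):
        fedavg_round(model, clients, lam=1.0)
    print("trained weights:", model.weight.data)
```

Holding `lam` fixed turns each local step into ordinary SGD on a penalized adversarial loss; a fuller treatment would also optimize the multiplier (or, equivalently, calibrate the radius $\rho$), which is where the choice of the ball's center and radius mentioned in the abstract comes into play.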