Paper Title

From Local SGD to Local Fixed-Point Methods for Federated Learning

Paper Authors

Grigory Malinovsky, Dmitry Kovalev, Elnur Gasanov, Laurent Condat, Peter Richtárik

Paper Abstract

Most algorithms for solving optimization problems or finding saddle points of convex-concave functions are fixed-point algorithms. In this work we consider the generic problem of finding a fixed point of an average of operators, or an approximation thereof, in a distributed setting. Our work is motivated by the needs of federated learning. In this context, each local operator models the computations done locally on a mobile device. We investigate two strategies to achieve such a consensus: one based on a fixed number of local steps, and the other based on randomized computations. In both cases, the goal is to limit communication of the locally-computed variables, which is often the bottleneck in distributed frameworks. We perform convergence analysis of both methods and conduct a number of experiments highlighting the benefits of our approach.
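
To make the communication pattern concrete, here is a minimal sketch of the first strategy (a fixed number of local steps between communications). It is not the paper's exact algorithm or experimental setup: the local operator `T`, the synthetic quadratic losses defined by `A` and `b`, the step size, and the counts `local_steps` and `rounds` are all illustrative placeholders. Each device repeatedly applies its local operator, and only once every `local_steps` iterations does the server average the local variables, which is the communication-saving idea described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
M, d = 10, 5                                     # devices, problem dimension
G = rng.standard_normal((M, d, d))
A = np.einsum('mij,mkj->mik', G, G) + np.eye(d)  # local SPD matrices A_i
b = rng.standard_normal((M, d))
step = 0.01                                      # small enough for contraction

def T(i, x):
    # Hypothetical local operator: one gradient step on
    # f_i(x) = 0.5 * x^T A_i x - b_i^T x, so each T_i is a contraction
    # and the average operator (1/M) * sum_i T_i has a unique fixed point.
    return x - step * (A[i] @ x - b[i])

x = np.zeros(d)
local_steps, rounds = 8, 200                     # communicate every `local_steps` steps
for _ in range(rounds):
    locals_ = [x.copy() for _ in range(M)]
    for _ in range(local_steps):                 # local computation, no communication
        locals_ = [T(i, z) for i, z in enumerate(locals_)]
    x = np.mean(locals_, axis=0)                 # one communication round: server averages

# Distance to being a fixed point of the average operator; with several
# local steps per round, the iterates converge to a neighborhood of it.
residual = np.linalg.norm(x - np.mean([T(i, x) for i in range(M)], axis=0))
print(f"fixed-point residual after {rounds} rounds: {residual:.2e}")
```

With `local_steps = 1` this reduces to plain averaged fixed-point iteration; larger values trade fewer communication rounds for convergence to a neighborhood of the fixed point, which is the tension the paper's convergence analysis quantifies.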
