Paper Title


Multi-agent Performative Prediction with Greedy Deployment and Consensus Seeking Agents

Paper Authors

Qiang Li, Chung-Yiu Yau, Hoi-To Wai

Paper Abstract


We consider a scenario where multiple agents are learning a common decision vector from data which can be influenced by the agents' decisions. This leads to the problem of multi-agent performative prediction (Multi-PfD). In this paper, we formulate Multi-PfD as a decentralized optimization problem that minimizes a sum of loss functions, where each loss function is based on a distribution influenced by the local decision vector. We first prove the necessary and sufficient condition for the Multi-PfD problem to admit a unique multi-agent performative stable (Multi-PS) solution. We show that enforcing consensus leads to a laxer condition for the existence of a Multi-PS solution with respect to the distributions' sensitivities, compared to the single-agent case. Then, we study a decentralized extension of the greedy deployment scheme [Mendler-Dünner et al., 2020], called the DSGD-GD scheme. We show that DSGD-GD converges to the Multi-PS solution and analyze its non-asymptotic convergence rate. Numerical results validate our analysis.
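To make the scheme concrete, below is a minimal toy sketch of a DSGD-GD-style loop: each agent greedily deploys its current local model, samples from the distribution induced by that model, takes a local stochastic-gradient step, and mixes its iterate with neighbors via a doubly stochastic gossip matrix. The Gaussian location family, quadratic loss, ring topology, and all numeric values are illustrative assumptions for exposition, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 5                            # number of agents on a ring graph
eps = 0.3                        # sensitivity of each local distribution to the deployed model
mu = np.linspace(0.0, 2.0, n)    # base means of the local data distributions
eta = 0.05                       # step size

# Doubly stochastic mixing matrix for a ring (uniform neighbor weights).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n] = 1 / 3
    W[i, (i + 1) % n] = 1 / 3

theta = np.zeros(n)              # each agent's scalar decision variable
for t in range(5000):
    # Greedy deployment: agent i samples from the distribution induced by
    # its *current* local model, z_i ~ N(mu_i + eps * theta_i, 1).
    z = mu + eps * theta + rng.standard_normal(n)
    grad = theta - z             # stochastic gradient of the loss (theta - z)^2 / 2
    theta = W @ theta - eta * grad   # gossip (consensus) step plus local SGD step

# For this location-family toy problem, the performatively stable consensus
# value is mean(mu) / (1 - eps), which the iterates hover around.
print(theta)
```

Because `eps < 1`, the distribution shift is a contraction and the iterates settle near the Multi-PS value; increasing `eps` toward 1 makes the stable point drift away and slows convergence, mirroring the sensitivity condition discussed in the abstract.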
