Paper Title

Cocktail Party Attack: Breaking Aggregation-Based Privacy in Federated Learning using Independent Component Analysis

Paper Authors

Sanjay Kariyappa, Chuan Guo, Kiwan Maeng, Wenjie Xiong, G. Edward Suh, Moinuddin K. Qureshi, Hsien-Hsin S. Lee

Paper Abstract

Federated learning (FL) aims to perform privacy-preserving machine learning on distributed data held by multiple data owners. To this end, FL requires the data owners to perform training locally and share the gradient updates (instead of the private inputs) with the central server, which are then securely aggregated over multiple data owners. Although aggregation by itself does not provably offer privacy protection, prior work showed that it may suffice if the batch size is sufficiently large. In this paper, we propose the Cocktail Party Attack (CPA) that, contrary to prior belief, is able to recover the private inputs from gradients aggregated over a very large batch size. CPA leverages the crucial insight that the aggregate gradient of a fully connected layer is a linear combination of its inputs, which leads us to frame gradient inversion as a blind source separation (BSS) problem (informally called the cocktail party problem). We adapt independent component analysis (ICA)--a classic solution to the BSS problem--to recover private inputs for fully-connected and convolutional networks, and show that CPA significantly outperforms prior gradient inversion attacks, scales to ImageNet-sized inputs, and works on large batch sizes of up to 1024.
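To make the linear-mixture insight concrete, below is a minimal sketch under toy assumptions: the batch size, layer dimensions, and synthetic uniform inputs are all hypothetical, and scikit-learn's off-the-shelf FastICA stands in for the paper's adapted ICA objective. It verifies that each row of a fully-connected layer's aggregate weight gradient is a linear combination of the batch inputs, then unmixes those rows to approximately recover the inputs up to sign, scale, and permutation.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import FastICA

torch.manual_seed(0)

# Toy dimensions (hypothetical, for illustration only):
# batch size, input dimension, FC output dimension.
B, d_in, d_out = 8, 1024, 128

# Synthetic non-Gaussian "private inputs" (ICA cannot separate Gaussian sources).
x = torch.rand(B, d_in) - 0.5                 # uniform, roughly zero-mean
layer = nn.Linear(d_in, d_out, bias=False)
loss = layer(x).pow(2).mean()                 # any scalar loss; only the identity below matters
loss.backward()

# Identity: for y = x @ W.T, dL/dW = G.T @ X with G = dL/dy of shape (B, d_out).
# So each of the d_out rows of the aggregate weight gradient is a linear
# combination of the B private inputs -- a blind-source-separation mixture.
grad_W = layer.weight.grad.numpy()            # shape (d_out, d_in): d_out observed mixtures

# Unmix with FastICA: treat the transposed gradient's columns as mixed signals
# and ask for B independent components (the candidate private inputs).
ica = FastICA(n_components=B, max_iter=2000, random_state=0)
recovered = ica.fit_transform(grad_W.T).T     # shape (B, d_in), up to sign/scale/order

# Match each recovered source to its closest input by |cosine similarity|.
x_np = x.numpy()
sim = np.abs(recovered @ x_np.T) / (
    np.linalg.norm(recovered, axis=1, keepdims=True) * np.linalg.norm(x_np, axis=1)
)
print("best |cosine| per recovered source:", np.round(sim.max(axis=1), 3))
```

With uniform (hence non-Gaussian) inputs, the best-match cosine similarities come out close to 1. Gaussian sources would defeat ICA, which is why the non-Gaussian statistics of natural images matter for the attack; the paper's adaptation additionally handles convolutional networks and ImageNet-scale inputs, which this toy setup does not attempt.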
