Paper Title
On the Effectiveness of Regularization Against Membership Inference Attacks
Paper Authors
Paper Abstract
Deep learning models often raise privacy concerns as they leak information about their training data. This enables an adversary to determine whether a data point was in a model's training set by conducting a membership inference attack (MIA). Prior work has conjectured that regularization techniques, which combat overfitting, may also mitigate the leakage. While many regularization mechanisms exist, their effectiveness against MIAs has not been studied systematically, and the resulting privacy properties are not well understood. We explore the lower bound for information leakage that practical attacks can achieve. First, we evaluate the effectiveness of 8 mechanisms in mitigating two recent MIAs, on three standard image classification tasks. We find that certain mechanisms, such as label smoothing, may inadvertently help MIAs. Second, we investigate the potential of improving the resilience to MIAs by combining complementary mechanisms. Finally, we quantify the opportunity of future MIAs to compromise privacy by designing a white-box 'distance-to-confident' (DtC) metric, based on adversarial sample crafting. Our metric reveals that, even when existing MIAs fail, the training samples may remain distinguishable from test samples. This suggests that regularization mechanisms can provide a false sense of privacy, even when they appear effective against existing MIAs.
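To make concrete what a membership inference attack exploits, below is a minimal sketch of a simple confidence-thresholding attack: it flags a sample as a training member whenever the model's confidence on it exceeds a threshold, so any gap between training and test confidences translates into attack accuracy above the 50% random-guess baseline. This is an illustrative, assumption-based example and not the specific attacks or the DtC metric evaluated in the paper; the function name and the synthetic confidence values are hypothetical.

```python
# Minimal sketch of a confidence-thresholding membership inference attack.
# NOT the paper's attacks; confidences below are synthetic placeholders.
import numpy as np

def confidence_threshold_mia(train_conf, test_conf, threshold):
    """Predict 'member' when the model's confidence on the true label
    exceeds `threshold`; return the attack's balanced accuracy."""
    tpr = np.mean(train_conf >= threshold)   # members correctly flagged as members
    fpr = np.mean(test_conf >= threshold)    # non-members wrongly flagged as members
    return 0.5 * (tpr + (1.0 - fpr))

# Toy usage: an overfit model tends to be more confident on its training
# samples, so the attack accuracy rises above the 0.5 random-guess baseline.
rng = np.random.default_rng(0)
train_conf = rng.beta(8, 2, size=1000)   # hypothetical confidences on training data
test_conf = rng.beta(5, 5, size=1000)    # hypothetical confidences on held-out data
print(confidence_threshold_mia(train_conf, test_conf, threshold=0.8))
```

In this framing, an effective regularization mechanism is one that shrinks the train-test confidence gap the attack relies on; the paper's DtC metric probes whether a residual, exploitable gap remains even when such threshold-style attacks no longer succeed.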