Paper Title


EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness against Adversarial Attacks

Authors

Sanchari Sen, Balaraman Ravindran, Anand Raghunathan

Abstract


Ensuring robustness of Deep Neural Networks (DNNs) is crucial to their adoption in safety-critical applications such as self-driving cars, drones, and healthcare. Notably, DNNs are vulnerable to adversarial attacks in which small input perturbations can produce catastrophic misclassifications. In this work, we propose EMPIR, ensembles of quantized DNN models with different numerical precisions, as a new approach to increase robustness against adversarial attacks. EMPIR is based on the observation that quantized neural networks often demonstrate much higher robustness to adversarial attacks than full precision networks, but at the cost of a substantial loss in accuracy on the original (unperturbed) inputs. EMPIR overcomes this limitation to achieve the 'best of both worlds', i.e., the higher unperturbed accuracies of the full precision models combined with the higher robustness of the low precision models, by composing them in an ensemble. Further, as low precision DNN models have significantly lower computational and storage requirements than full precision models, EMPIR models only incur modest compute and memory overheads compared to a single full-precision model (<25% in our evaluations). We evaluate EMPIR across a suite of DNNs for 3 different image recognition tasks (MNIST, CIFAR-10 and ImageNet) and under 4 different adversarial attacks. Our results indicate that EMPIR boosts the average adversarial accuracies by 42.6%, 15.2% and 10.5% for the DNN models trained on the MNIST, CIFAR-10 and ImageNet datasets respectively, when compared to single full-precision models, without sacrificing accuracy on the unperturbed inputs.
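The core idea — combining a full-precision model with lower-precision quantized copies and taking a vote over their predictions — can be illustrated with a minimal sketch. This is not the authors' implementation: the toy linear classifier, the symmetric uniform quantization scheme, and the simple majority vote are all illustrative assumptions.

```python
def quantize_rows(rows, bits):
    """Symmetric uniform quantization of a weight matrix (list of rows)
    to the given bit width. Illustrative only; EMPIR's exact quantization
    scheme may differ."""
    scale = max(abs(w) for row in rows for w in row) or 1.0
    levels = 2 ** (bits - 1) - 1  # e.g. 2 bits -> levels in {-1, 0, 1}
    return [[round(w / scale * levels) / levels * scale for w in row]
            for row in rows]

def predict(weight_rows, x):
    """Toy linear classifier: one weight row per class, argmax of scores."""
    scores = [sum(w * xi for w, xi in zip(row, x)) for row in weight_rows]
    return scores.index(max(scores))

def ensemble_predict(weight_rows, x, precisions=(32, 4, 2)):
    """Mixed-precision ensemble: each member runs the same weights at its
    own precision; the final label is the majority vote across members."""
    votes = [predict(weight_rows if bits >= 32
                     else quantize_rows(weight_rows, bits), x)
             for bits in precisions]
    return max(set(votes), key=votes.count)
```

An adversarial perturbation crafted against the full-precision member often fails to transfer to the coarsely quantized members, so the vote can still recover the correct label; on clean inputs the full-precision member keeps accuracy high, which is the "best of both worlds" behaviour the abstract describes.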
