Paper Title

Pick-Object-Attack: Type-Specific Adversarial Attack for Object Detection

Paper Authors

Omid Mohamad Nezami, Akshay Chaturvedi, Mark Dras, Utpal Garain

Paper Abstract

Many recent studies have shown that deep neural models are vulnerable to adversarial samples: images with imperceptible perturbations, for example, can fool image classifiers. In this paper, we present the first type-specific approach to generating adversarial examples for object detection, which entails detecting bounding boxes around multiple objects present in the image and classifying them at the same time, making it a harder task than image classification. We specifically aim to attack the widely used Faster R-CNN by changing the predicted label for a particular object in an image: where prior work has targeted one specific object (a stop sign), we generalise to arbitrary objects, with the key challenge being the need to change the labels of all bounding boxes for all instances of that object type. To do so, we propose a novel method named Pick-Object-Attack. Pick-Object-Attack successfully adds perturbations only to bounding boxes for the targeted object, preserving the labels of other detected objects in the image. In terms of perceptibility, the perturbations induced by the method are very small. Furthermore, for the first time, we examine the effect of adversarial attacks on object detection in terms of a downstream task, image captioning; we show that whereas a method that can modify all object types leads to very obvious changes in captions, the changes from our constrained attack are much less apparent.
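To make the core idea concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of a targeted attack whose perturbation is restricted to the bounding boxes of one object type, leaving all other pixels untouched. The detector interface `model(image, boxes)` returning per-box class logits, and the names `target_boxes`, `adv_label`, `eps`, and `alpha` are illustrative assumptions, not from the paper.

```python
# Hypothetical sketch of a box-masked targeted perturbation (PGD-style),
# illustrating the idea of perturbing only the targeted object's boxes.
import torch
import torch.nn.functional as F

def box_mask(image, boxes):
    """Binary mask that is 1 inside each (x1, y1, x2, y2) box, 0 elsewhere."""
    mask = torch.zeros_like(image)
    for x1, y1, x2, y2 in boxes:
        mask[..., int(y1):int(y2), int(x1):int(x2)] = 1.0
    return mask

def masked_targeted_attack(model, image, target_boxes, adv_label,
                           eps=2 / 255, alpha=0.5 / 255, steps=50):
    """Perturb only pixels inside `target_boxes` so that the detector's
    class scores for those boxes favour the adversarial label `adv_label`."""
    mask = box_mask(image, target_boxes)
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        # Assumed interface: per-box class logits for the given boxes.
        scores = model(image + delta * mask, target_boxes)
        targets = torch.full((scores.size(0),), adv_label, dtype=torch.long)
        loss = -F.cross_entropy(scores, targets)  # maximise score of adv_label
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # signed gradient step
            delta.clamp_(-eps, eps)             # keep perturbation imperceptible
            delta.grad.zero_()
    return (image + delta * mask).detach()
```

Because the perturbation is multiplied by the box mask, regions containing other detected objects are never modified, which is what allows their labels (and much of a downstream caption) to be preserved.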
