Paper
15 August 2023 Robustness evaluation of object detection models: a gradient attack method based on spatial sensitivity
Kai Fu, Mingzheng Wang, Kuangyin Meng, Changbin Shao
Proceedings Volume 12719, Second International Conference on Electronic Information Technology (EIT 2023); 1271936 (2023) https://doi.org/10.1117/12.2685460
Event: Second International Conference on Electronic Information Technology (EIT 2023), 2023, Wuhan, China
Abstract
In the field of model robustness, adversarial attacks have become the most powerful threat to the performance of deep models. An adversarial attack adds adversarial noise to ordinary samples to generate adversarial examples that mislead the model; the aggressiveness and the imperceptibility of the noise are the two main measures of an attack. In this paper, we focus on attacks against object detection models. Previous gradient-sign noise generation methods achieve powerful attacks, but the adversarial examples they produce are usually easily perceived by the human visual system, mainly because the gradient-sign method applies a crude sign-noise strategy over the entire input space. To address this problem, we analyze the sensitivity of a deep detection model to the input sample space and, guided by this analysis, propose a gradient attack method based on spatial sensitivity. Specifically, inspired by the attention mechanism of deep models, we investigate the global gradient information of the entire image and identify the spatial regions that play a key role in detection and classification; we then propose a noise-screening strategy based on this key gradient information to generate adversarial examples. This not only avoids the perceptibility flaw caused by directly attaching the global gradient sign but also yields a stronger attack. Taking the YOLOv3 detection model as an example, we conduct observational verification and attack tests on the VOC detection dataset, and the experimental results confirm the effectiveness of our method. This poses a greater challenge for conducting effective model defense.
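The screening idea described in the abstract, perturbing only the spatially most sensitive pixels rather than applying the gradient sign over the whole image, can be sketched as follows. This is a minimal NumPy illustration under assumed details: the paper's exact sensitivity measure, selection threshold, and loss function are not given in the abstract, so the top-k gradient-magnitude rule and the `keep_ratio` parameter here are hypothetical.

```python
import numpy as np

def masked_sign_attack(image, grad, epsilon=0.03, keep_ratio=0.1):
    """FGSM-style sign perturbation restricted to the spatially most
    sensitive pixels, selected by per-pixel gradient magnitude.

    `image` is an H x W x C array in [0, 1]; `grad` is the loss gradient
    with respect to `image` (same shape), obtained from the detector.
    """
    # Per-pixel spatial sensitivity: aggregate gradient magnitude over channels.
    sensitivity = np.abs(grad).sum(axis=-1)               # shape (H, W)
    # Keep only the top `keep_ratio` fraction of pixels.
    k = max(1, int(keep_ratio * sensitivity.size))
    threshold = np.partition(sensitivity.ravel(), -k)[-k]
    mask = (sensitivity >= threshold)[..., None]          # broadcast over channels
    # Sign noise is added only inside the sensitive regions;
    # everywhere else the image is left untouched.
    adv = image + epsilon * np.sign(grad) * mask
    return np.clip(adv, 0.0, 1.0)
```

Compared with a global gradient-sign attack (`clip(image + epsilon * sign(grad))`), this variant modifies only a small fraction of pixels, which is the mechanism the abstract credits for improved imperceptibility.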
© (2023) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Kai Fu, Mingzheng Wang, Kuangyin Meng, and Changbin Shao "Robustness evaluation of object detection models: a gradient attack method based on spatial sensitivity", Proc. SPIE 12719, Second International Conference on Electronic Information Technology (EIT 2023), 1271936 (15 August 2023); https://doi.org/10.1117/12.2685460
KEYWORDS
Object detection
Performance modeling
Data modeling
Statistical modeling
Defense and security
Detection and tracking algorithms
Image classification