Adversarial attack on human pose estimation network
Zhaoxin Zhang, Shize Huang, Xiaowen Liu, Qianhui Fan, Decun Dong
Abstract

Real-time human pose estimation (HPE) using convolutional neural networks (CNNs) is critical for enabling machines to better understand humans from images and videos, and for helping supervisors identify human behavior. However, CNN-based systems are susceptible to adversarial attacks, and attacks specifically targeting HPE have received little attention. We present a gradient-based adversarial example generation method, named AdaptiveFool, which performs a keypoint-invisibility attack against OpenPose by aggregating the loss over human keypoints and generating adaptive adversarial perturbations. In addition, we introduce an object-oriented perturbation generation step into the AdaptiveFool process that confines perturbations to the person region, eliminating background perturbations. On the COCO 2017 dataset, our attack reduces the mean average precision of OpenPose to 6.3%. This research provides inspiration for future work on efficient and effective adversarial example defenses for HPE.
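The paper's AdaptiveFool algorithm is not reproduced in this abstract, but the general idea it describes — iterative sign-gradient descent on an aggregated keypoint-confidence loss, with perturbations masked to the object region — can be illustrated with a minimal NumPy sketch. Everything here is an assumption for illustration: `keypoint_confidences` is a toy linear-sigmoid "detector" standing in for OpenPose's heatmap branch, and `masked_pgd_attack` is a generic masked projected-gradient attack, not the authors' exact procedure.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def keypoint_confidences(img, W):
    """Toy stand-in for a pose network: one linear 'detector' per keypoint.

    img: (H, W) image in [0, 1]; W: (K, H, W) detector weights.
    Returns a (K,) vector of keypoint confidences.
    """
    return sigmoid(np.tensordot(W, img, axes=([1, 2], [0, 1])))

def masked_pgd_attack(img, W, mask, eps=0.05, alpha=0.01, steps=40):
    """Suppress all keypoint confidences, perturbing only inside `mask`.

    Aggregates the per-keypoint loss (sum of confidences), takes sign-gradient
    descent steps restricted to the object mask, and projects the perturbation
    back into an L-infinity ball of radius eps after each step.
    """
    adv = img.copy()
    for _ in range(steps):
        conf = keypoint_confidences(adv, W)                # (K,)
        # Analytic gradient of sum_k sigmoid(<W_k, adv>) w.r.t. the image:
        # sum_k conf_k * (1 - conf_k) * W_k
        grad = np.tensordot(conf * (1.0 - conf), W, axes=(0, 0))
        adv = adv - alpha * np.sign(grad) * mask           # object-only step
        adv = img + np.clip(adv - img, -eps, eps)          # eps-ball projection
        adv = np.clip(adv, 0.0, 1.0)                       # valid pixel range
    return adv
```

Because the descent step is multiplied by the binary mask, background pixels are left bit-for-bit identical to the clean image — the "object-oriented perturbation" property the abstract describes — while the aggregated loss drives every keypoint's confidence down at once.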

© 2024 SPIE and IS&T
Zhaoxin Zhang, Shize Huang, Xiaowen Liu, Qianhui Fan, and Decun Dong "Adversarial attack on human pose estimation network," Journal of Electronic Imaging 33(1), 013052 (24 February 2024). https://doi.org/10.1117/1.JEI.33.1.013052
Received: 18 July 2023; Accepted: 8 February 2024; Published: 24 February 2024
KEYWORDS: Pose estimation, Object detection, Adaptive optics, Convolutional neural networks, Detection and tracking algorithms, Computer vision technology, Design
