Defending against adversarial examples using perceptual image hashing
Ke Wu, Zichi Wang, Xinpeng Zhang, Zhenjun Tang
Abstract

Conventional deep neural networks (DNNs) have been shown to be vulnerable to images with adversarial perturbations, referred to as adversarial examples. In this study, we propose a method to protect neural networks against adversarial examples using perceptual image hashing. Because perceptual hashing is robust to adversarial perturbations, we combine the hash sequences of input images with the parameters of a neural network in an image-hash processing network. The outputs of the network are therefore influenced by the image hashes, which makes the model robust to adversarial examples to some extent and thereby provides a defense against them. Experiments were conducted on the CIFAR-10 dataset with ResNet-18 as the target network. To verify our method, we tested the defense network against several common white-box and black-box attacks. The results show a significant improvement in classification accuracy on adversarial examples.
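The abstract does not give implementation details of how the hash sequences are combined with the network parameters, so the following is only a minimal, hypothetical PyTorch sketch of the general idea: a perceptual average hash (aHash), which is stable under small pixel perturbations, is concatenated with the backbone features so that the hash influences the logits. The names average_hash and HashConditionedResNet, and the concatenation scheme itself, are illustrative assumptions, not the paper's architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

def average_hash(images, hash_size=8):
    # Perceptual average hash (aHash): downsample to hash_size x hash_size
    # grayscale and threshold at the mean. Small adversarial perturbations
    # usually leave these coarse bits unchanged.
    # images: (N, 3, H, W) tensor in [0, 1]; returns (N, hash_size**2) in {0, 1}.
    gray = images.mean(dim=1, keepdim=True)          # crude grayscale
    small = F.adaptive_avg_pool2d(gray, hash_size)   # coarse downsampling
    flat = small.flatten(1)
    return (flat > flat.mean(dim=1, keepdim=True)).float()

class HashConditionedResNet(nn.Module):
    # Hypothetical defense wrapper: the classification head sees both the
    # ResNet-18 features and the (perturbation-robust) image hash, so the
    # hash bits influence the output logits.
    def __init__(self, num_classes=10, hash_bits=64):
        super().__init__()
        backbone = resnet18(num_classes=num_classes)
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop fc
        self.fc = nn.Linear(512 + hash_bits, num_classes)

    def forward(self, x):
        h = average_hash(x)                        # non-differentiable hash branch
        f = self.features(x).flatten(1)            # (N, 512) backbone features
        return self.fc(torch.cat([f, h], dim=1))   # hash-conditioned logits

model = HashConditionedResNet()
logits = model(torch.rand(4, 3, 32, 32))  # CIFAR-10-sized input
print(logits.shape)  # torch.Size([4, 10])

Note that thresholding makes the hash branch non-differentiable, which is one plausible reason such a component would blunt gradient-based white-box attacks; the paper's actual mechanism may differ.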

© 2023 SPIE and IS&T
Ke Wu, Zichi Wang, Xinpeng Zhang, and Zhenjun Tang "Defending against adversarial examples using perceptual image hashing," Journal of Electronic Imaging 32(2), 023016 (19 March 2023). https://doi.org/10.1117/1.JEI.32.2.023016
Received: 27 October 2022; Accepted: 20 February 2023; Published: 19 March 2023
KEYWORDS
Defense and security, Image processing, Education and training, Neural networks, Image classification, Data modeling, Feature extraction
