Paper
27 November 2019
XGAN: adversarial attacks with GAN
Xiaoyu Fang, Guoxu Cao, Huapeng Song, Zhiyou Ouyang
Proceedings Volume 11321, 2019 International Conference on Image and Video Processing, and Artificial Intelligence; 113211G (2019) https://doi.org/10.1117/12.2543218
Event: The Second International Conference on Image, Video Processing and Artificial Intelligence, 2019, Shanghai, China
Abstract
Recent studies have demonstrated that deep neural networks can be attacked by adding small pixel-level perturbations to the input data. Such perturbations are generally imperceptible to the human eye, yet they can completely subvert the output of a deep neural network classifier to achieve non-targeted or targeted attacks. The common practice is to generate a perturbation against the neural network and then superimpose it on the original image. In this paper, we apply a method that uses a GAN to generate the target adversarial images directly, thereby attacking deep neural networks. This method achieves excellent results on black-box attacks and is also compatible with the preconditions of most neural network attacks. Using it, we achieved an 82% success rate on black-box targeted attacks on the CIFAR-10 and MNIST datasets, while ensuring that the generated images remain comparable to the originals.
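The following is a minimal PyTorch-style sketch of the idea described above, not the paper's actual architecture: a generator maps a clean image directly to an adversarial image (rather than producing a perturbation to be added back), and its training loss combines a targeted-misclassification term, an image-similarity term, and a GAN realism term. All module names, layer sizes, and the weight lam_sim are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical generator: maps a clean image directly to an adversarial
# image, instead of outputting a perturbation to superimpose on the input.
class Generator(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),  # output in [-1, 1]
        )

    def forward(self, x):
        return self.net(x)

def generator_loss(G, D, classifier, x, target_class, lam_sim=10.0):
    """Combined generator loss (illustrative weighting, not from the paper):
    push the classifier toward target_class, keep the generated image close
    to the original, and make the discriminator judge it as real. In a
    black-box setting, `classifier` would be a surrogate of the target model.
    """
    x_adv = G(x)
    # Targeted attack term: cross-entropy toward the chosen target label.
    adv_loss = F.cross_entropy(classifier(x_adv), target_class)
    # Similarity term: generated image should stay visually close to x.
    sim_loss = F.l1_loss(x_adv, x)
    # GAN realism term: discriminator logits should look like "real".
    d_out = D(x_adv)
    gan_loss = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    return adv_loss + lam_sim * sim_loss + gan_loss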
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Xiaoyu Fang, Guoxu Cao, Huapeng Song, and Zhiyou Ouyang "XGAN: adversarial attacks with GAN", Proc. SPIE 11321, 2019 International Conference on Image and Video Processing, and Artificial Intelligence, 113211G (27 November 2019); https://doi.org/10.1117/12.2543218
KEYWORDS
Image classification
Neural networks
Image processing
Computer vision technology