Paper, 1 October 2018
Exploiting random perturbations to defend against adversarial attacks
Pawel Zawistowski and Bartlomiej Twardowski
Proceedings Volume 10808, Photonics Applications in Astronomy, Communications, Industry, and High-Energy Physics Experiments 2018; 108082N (2018) https://doi.org/10.1117/12.2501606
Event: Photonics Applications in Astronomy, Communications, Industry, and High-Energy Physics Experiments 2018, 2018, Wilga, Poland
Abstract
Adversarial examples are deliberately crafted data points which aim to induce errors in machine learning models. This phenomenon has gained much attention recently, especially in the field of image classification, where many methods have been proposed to generate such malicious examples. In this paper we focus on defending a trained model against such attacks by introducing randomness to its inputs.
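The abstract does not spell out how randomness is injected; a minimal sketch of one common form of input randomization, averaging a model's predictions over several Gaussian-perturbed copies of the input, might look like the following. The toy softmax model and all function names here are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def randomized_predict(model_fn, x, n_samples=32, sigma=0.1, seed=0):
    """Average class probabilities over Gaussian-perturbed copies of x.

    The noise makes the effective decision function stochastic, which can
    blunt adversarial perturbations crafted against the deterministic model.
    """
    rng = np.random.default_rng(seed)
    acc = None
    for _ in range(n_samples):
        p = model_fn(x + rng.normal(0.0, sigma, size=x.shape))
        acc = p if acc is None else acc + p
    return acc / n_samples

# Hypothetical 3-class linear softmax classifier standing in for a trained network.
W = np.array([[1.0, -0.5, 0.2],
              [0.3, 0.8, -0.6]])

def toy_model(x):
    z = x @ W
    e = np.exp(z - z.max())
    return e / e.sum()

x = np.array([0.5, -1.0])
probs = randomized_predict(toy_model, x, n_samples=64, sigma=0.05)
```

With a small noise scale the averaged prediction stays close to the clean one; larger `sigma` trades clean accuracy for robustness, a tension any such defense must balance.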
© (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Pawel Zawistowski and Bartlomiej Twardowski "Exploiting random perturbations to defend against adversarial attacks", Proc. SPIE 10808, Photonics Applications in Astronomy, Communications, Industry, and High-Energy Physics Experiments 2018, 108082N (1 October 2018); https://doi.org/10.1117/12.2501606
KEYWORDS
Data modeling, Image classification, Neural networks, Machine learning, Statistical modeling, Artificial intelligence, Binary data
