Deep Neural Networks (DNNs) have been deployed in many real-world applications across both industrial and academic domains and have proven to deliver outstanding performance. However, DNNs are vulnerable to adversarial attacks, i.e., small perturbations embedded in an image. As a result, introducing DNNs into safety-critical systems, such as autonomous vehicles, unmanned aerial vehicles, or healthcare devices, carries a high risk of limiting their ability to recognize and interpret the environment in which they operate and could therefore lead to devastating consequences. Thus, enhancing the robustness of DNNs through the development of defense mechanisms is of the utmost importance. In this paper, we evaluate a set of state-of-the-art denoising filters designed for impulsive noise removal as defensive solutions. The proposed methods are applied as a pre-processing step, in which the adversarial patterns in the source image are removed before the classification task is performed. As a result, the pre-processing defense block can easily be integrated with any type of classifier, without any knowledge of the training procedure or the internal architecture of the model. Moreover, the evaluated filtering methods can be considered universal defensive techniques, as they are completely independent of the internal characteristics of the selected attack and can therefore be applied against any type of adversarial threat. The experimental results obtained on the German Traffic Sign Recognition Benchmark (GTSRB) demonstrate that the denoising filters provide high robustness against sparse adversarial attacks without significantly decreasing classification performance on non-altered data.
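
To illustrate how such a filtering-based pre-processing defense can be attached to an arbitrary classifier, the sketch below applies a simple median filter, a classical impulsive-noise removal technique used here only as a stand-in for the filters evaluated in the paper. The `defend_and_classify` function and the `classifier` callable are hypothetical names introduced for illustration and do not come from the original text.

```python
# Minimal sketch of a filtering-based pre-processing defense (assumption:
# a median filter stands in for the impulsive-noise removal filters; the
# classifier is any trained model, e.g. a GTSRB traffic-sign classifier).
import numpy as np
from scipy.ndimage import median_filter

def defend_and_classify(image: np.ndarray, classifier) -> int:
    """Suppress sparse (impulsive) adversarial perturbations, then classify.

    image      -- H x W x C array with pixel values in [0, 1]
    classifier -- any callable mapping a filtered image to class scores;
                  the defense needs no access to its weights or training.
    """
    # Filter each colour channel independently with a 3x3 window,
    # removing isolated salt-and-pepper-like adversarial pixels.
    filtered = median_filter(image, size=(3, 3, 1))
    scores = classifier(filtered)
    return int(np.argmax(scores))
```

Because the filtering step only touches the input image, it can be chained in front of any model without retraining, which is what makes this kind of defense attack- and architecture-agnostic.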