Images can be captured using devices operating in different regions of the light spectrum. As a result, cross-domain image translation becomes a nontrivial task that requires adapting Deep Convolutional Neural Networks (DCNNs) to resolve the aforementioned imagery challenges. Automatic target recognition (ATR) from infrared imagery in a real-time environment is one such difficult task. Generative Adversarial Networks (GANs) have already shown promising performance in translating image characteristics from one domain to another. In this paper, we explore the potential of the GAN architecture for cross-domain image translation. Our proposed GAN model maps images from the source domain to the target domain in a conditional GAN framework. We verify the quality of the generated images with the help of a CNN-based target classifier. Classification results on the synthetic images achieve performance comparable to the ground truth, confirming that the designed network generates realistic images.
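As a rough illustration of the conditional GAN framework described above, the sketch below shows a pix2pix-style training step in PyTorch. The generator, discriminator, loss weights, and optimizer settings are simplified placeholders for exposition, not the architecture actually used in the paper.

```python
# Minimal conditional GAN (pix2pix-style) training step for source -> target
# image translation. All module and variable names are illustrative assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy encoder-decoder mapping a source-domain image to the target domain."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, channels, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """PatchGAN-style critic conditioned on the source image."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),
        )
    def forward(self, src, tgt):
        return self.net(torch.cat([src, tgt], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

def train_step(src, tgt, l1_weight=100.0):
    # Discriminator update: real (src, tgt) pairs -> 1, generated pairs -> 0.
    fake = G(src).detach()
    pred_real, pred_fake = D(src, tgt), D(src, fake)
    d_loss = bce(pred_real, torch.ones_like(pred_real)) + \
             bce(pred_fake, torch.zeros_like(pred_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: fool the discriminator while staying close to ground truth.
    fake = G(src)
    pred_fake = D(src, fake)
    g_loss = bce(pred_fake, torch.ones_like(pred_fake)) + l1_weight * l1(fake, tgt)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

The L1 term keeps the translated image pixel-wise close to the paired target-domain image, while the adversarial term pushes it toward the target-domain distribution; this is the standard conditional-GAN trade-off, with the exact weighting here being an assumed value.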
Deep Convolutional Neural Networks (DCNNs) have proven to be an exceptional tool for object recognition in various computer vision applications. However, recent findings have shown that such state-of-the-art models can be easily deceived by inserting slight, imperceptible perturbations at key pixels in the input image. In this paper, we focus on deceiving Automatic Target Recognition (ATR) classifiers. These classifiers are built to recognize specified targets in a scene and simultaneously identify their class types. In our work, we explore the vulnerabilities of DCNN-based target classifiers. We demonstrate significant progress in developing infrared adversarial target images by adding small perturbations to the input image such that the perturbation cannot be easily detected. The algorithm is built to adapt to both targeted and non-targeted adversarial attacks. Our findings reveal promising results that reflect the serious implications of adversarial attacks.
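The abstract does not spell out the attack algorithm. As a minimal sketch of the kind of small-perturbation attack it describes, the following FGSM-style function (in PyTorch) supports both non-targeted and targeted variants; the model, label tensors, and epsilon value are illustrative assumptions rather than the paper's actual method.

```python
# FGSM-style sketch of a small-perturbation attack on a classifier.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.02, target_class=None):
    """Return an adversarial copy of `image` within an L-infinity ball of `epsilon`.

    label / target_class: tensors of class indices (one per batch element).
    If target_class is None -> non-targeted: push the prediction away from `label`.
    Otherwise -> targeted: pull the prediction toward `target_class`.
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)

    if target_class is None:
        # Non-targeted: step to increase the loss on the true label.
        loss = F.cross_entropy(logits, label)
        loss.backward()
        perturbed = image + epsilon * image.grad.sign()
    else:
        # Targeted: step to decrease the loss on the desired (wrong) label.
        loss = F.cross_entropy(logits, target_class)
        loss.backward()
        perturbed = image - epsilon * image.grad.sign()

    # Keep pixel values in a valid range (assuming inputs normalized to [0, 1]).
    return perturbed.clamp(0.0, 1.0).detach()
```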
Target detection systems identify targets by localizing their coordinates in the input image of interest. This is ideally achieved by labeling each pixel in an image as either background or a potential target pixel. Deep Convolutional Neural Network (DCNN) classifiers have proven to be successful tools for computer vision applications. However, prior research confirms that even state-of-the-art classifier models are susceptible to adversarial attacks. In this paper, we show how to generate adversarial infrared images, by adding small perturbations to the target regions, that deceive a DCNN-based target detector at remarkable levels. We demonstrate significant progress in developing visually imperceptible adversarial infrared images in which the targets remain recognizable to an expert, yet a DCNN-based target detector cannot detect them.
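A hedged sketch of how such region-restricted perturbations could be computed is given below: an iterative, signed-gradient attack confined to a binary target mask. The `detection_score` callable stands in for whatever confidence the detector assigns to the target region and is an assumption for illustration, not the paper's actual algorithm.

```python
# Mask-restricted iterative attack: perturbations are confined to the annotated
# target region, so the rest of the infrared image is left untouched.
import torch

def masked_attack(image, mask, detection_score, epsilon=0.03, alpha=0.005, steps=20):
    """Iteratively lower the detector's confidence inside the target mask only.

    mask: binary tensor, 1 on target pixels, 0 elsewhere (same shape as image).
    detection_score: callable returning a scalar confidence for the target.
    """
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        score = detection_score(adv)          # scalar detector confidence
        score.backward()
        with torch.no_grad():
            # Step down the confidence, but only where mask == 1.
            adv = adv - alpha * adv.grad.sign() * mask
            # Project back into the epsilon-ball and the valid pixel range.
            adv = image + (adv - image).clamp(-epsilon, epsilon)
            adv = adv.clamp(0.0, 1.0)
        adv = adv.detach()
    return adv
```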