Presentation + Paper
12 September 2021
A comparison of deep saliency map generators on multispectral data in object detection
Jens Bayer, David Münch, Michael Arens
Abstract
Deep neural networks, especially convolutional neural networks, are state-of-the-art methods to classify, segment, or even generate images, movies, or sounds. However, these methods lack a good semantic understanding of what happens internally. The question of why a COVID-19 detector has classified a stack of lung CT images as positive is sometimes more interesting than the overall specificity and sensitivity, especially when human domain expert knowledge disagrees with the given output. In this way, human domain experts can also be prompted to reconsider their decision in light of the information highlighted by the system. In addition, the deep learning model can be audited, and an existing dataset bias can be uncovered. Currently, most explainable AI methods in the computer vision domain are applied only to image classification, where the images are ordinary images in the visible spectrum. As a result, there is no comparison of how the methods behave on multimodal image data, and most methods have not been investigated with respect to object detection. This work closes these gaps by investigating how the maps of three saliency map generators differ across the spectra, based on careful and systematic training. Additionally, we examine how they perform when used for object detection. As a practical problem, we chose object detection in the infrared and visual spectrum for autonomous driving. The dataset used in this work is the Multispectral Object Detection Dataset,1 where each scene is available in the long-wave (FIR), mid-wave (MIR), and short-wave (NIR) infrared as well as the visual (RGB) spectrum. The results show that the infrared and visual activation maps differ. Furthermore, advanced training with both the infrared and the visual data not only improves the network's output, it also leads to more focused spots in the saliency maps.
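The abstract does not name the three saliency map generators it compares. As a rough illustration of the general idea behind gradient-style saliency maps (a sketch only, not the specific methods evaluated in the paper), the snippet below approximates the sensitivity of a detector's confidence score to each input pixel via finite differences; the toy `detector_score` function and all names here are hypothetical stand-ins for a real detection network.

```python
import numpy as np

def detector_score(image):
    # Hypothetical stand-in for a detector's confidence for one box:
    # a fixed (seeded) random filter response passed through a nonlinearity.
    rng = np.random.default_rng(0)
    w = rng.normal(size=image.shape)
    return np.tanh(np.mean(w * image))

def vanilla_saliency(image, score_fn, eps=1e-4):
    """Finite-difference approximation of |d score / d pixel| per pixel."""
    saliency = np.zeros_like(image)
    base = score_fn(image)
    it = np.nditer(image, flags=["multi_index"])
    for _ in it:
        idx = it.multi_index
        bumped = image.copy()
        bumped[idx] += eps  # perturb a single pixel
        saliency[idx] = abs(score_fn(bumped) - base) / eps
    return saliency

# A small toy "image"; real inputs would be e.g. NIR, MIR, FIR, or RGB frames.
image = np.random.default_rng(1).normal(size=(8, 8))
smap = vanilla_saliency(image, detector_score)
```

In practice, such maps are computed per spectrum and per detection with backpropagation rather than finite differences; comparing where the maps concentrate across the infrared and visual inputs is the kind of analysis the paper performs.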
Conference Presentation
© (2021) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Jens Bayer, David Münch, and Michael Arens "A comparison of deep saliency map generators on multispectral data in object detection", Proc. SPIE 11869, Counterterrorism, Crime Fighting, Forensics, and Surveillance Technologies V, 118690I (12 September 2021); https://doi.org/10.1117/12.2599742
KEYWORDS
RGB color model, Convolution, Near infrared, Visualization, Far infrared, Feature extraction, Image classification