Paper
30 November 2016
Computationally efficient target classification in multispectral image data with Deep Neural Networks
Lukas Cavigelli, Dominic Bernath, Michele Magno, Luca Benini
Proceedings Volume 9997, Target and Background Signatures II; 99970L (2016) https://doi.org/10.1117/12.2241383
Event: SPIE Security + Defence, 2016, Edinburgh, United Kingdom
Abstract
Detecting and classifying targets in video streams from surveillance cameras is a cumbersome, error-prone and expensive task. Often, the incurred costs are prohibitive for real-time monitoring. This leads to data being stored locally or transmitted to a central storage site for post-incident examination. The required communication links and archiving of the video data are expensive, and this setup precludes preemptive actions in response to imminent threats. An effective way to overcome these limitations is to build a smart camera that analyzes the data on-site, close to the sensor, and transmits alerts when relevant video sequences are detected.

Deep neural networks (DNNs) have come to outperform humans in visual classification tasks and are also performing exceptionally well on other computer vision tasks. The concept of DNNs and Convolutional Networks (ConvNets) can easily be extended to make use of higher-dimensional input data such as multispectral data. We explore this opportunity in terms of achievable accuracy and required computational effort.
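Extending a ConvNet to multispectral input only changes the depth of the first layer: all channels are stacked and the first-layer filters span that full depth. The sketch below illustrates this with the channel counts from the abstract (3 RGB + 25 VIS-NIR); it is a minimal, hypothetical illustration in plain NumPy, not the authors' network or implementation, and the frame size, filter count, and naive convolution are assumptions made for clarity.

```python
import numpy as np

# Hypothetical frame size; channel counts (3 RGB + 25 VIS-NIR) follow the abstract.
H, W = 64, 64
rgb = np.random.rand(3, H, W)        # RGB frame, channels-first
vis_nir = np.random.rand(25, H, W)   # 25-channel multispectral frame

# Fusing the sensors at the input: stack all channels into one tensor.
x = np.concatenate([rgb, vis_nir], axis=0)   # shape (28, H, W)

def conv2d(x, w):
    """Naive valid-mode multi-channel 2D convolution.
    x: (C_in, H, W), w: (C_out, C_in, k, k) -> (C_out, H-k+1, W-k+1)."""
    c_out, c_in, k, _ = w.shape
    h_out, w_out = x.shape[1] - k + 1, x.shape[2] - k + 1
    y = np.zeros((c_out, h_out, w_out))
    for o in range(c_out):
        for i in range(h_out):
            for j in range(w_out):
                # Each output value sums over all 28 input channels.
                y[o, i, j] = np.sum(x[:, i:i+k, j:j+k] * w[o])
    return y

# First-layer filters must match the fused input depth (28 here);
# the rest of the network is unchanged by the extra spectral channels.
filters = np.random.rand(16, 28, 3, 3)
y = conv2d(x, filters)
print(y.shape)  # (16, 62, 62)
```

The point of the sketch is that the only architectural change for multispectral fusion at the input is the `C_in` dimension of the first filter bank; deeper layers are agnostic to how many spectral channels were fed in.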

To analyze the precision of DNNs for scene labeling in an urban surveillance scenario, we have created a dataset with 8 classes obtained in a field experiment. We combine an RGB camera with a 25-channel VIS-NIR snapshot sensor to assess the potential of multispectral image data for target classification. We evaluate several new DNNs, showing that the spectral information fused with the RGB frames can be used to improve the accuracy of the system or to achieve similar accuracy with 3x less computation effort. We achieve a very high per-pixel accuracy of 99.1%. Even for rarely occurring, but particularly interesting classes, such as cars, 75% of the pixels are labeled correctly, with errors occurring only around the borders of the objects. This high accuracy was obtained with a training set of only 30 labeled images, paving the way for fast adaptation to various application scenarios.
© (2016) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Lukas Cavigelli, Dominic Bernath, Michele Magno, and Luca Benini "Computationally efficient target classification in multispectral image data with Deep Neural Networks", Proc. SPIE 9997, Target and Background Signatures II, 99970L (30 November 2016); https://doi.org/10.1117/12.2241383
CITATIONS
Cited by 15 scholarly publications and 2 patents.
KEYWORDS
Cameras, RGB color model, Sensors, Multispectral imaging, Neural networks, Image processing, Image segmentation