Identification and tracking of dynamic 3D objects from Synthetic Aperture Radar (SAR) and Infrared (IR) thermal imagery in the presence of significant clutter and occlusion is a highly challenging task. In this paper, we present an approach for 3D object recognition and tracking based on multi-modality (e.g., SAR and IR) imagery signatures and discuss a multi-scale scheme for extracting salient keypoint descriptors of 3D objects from multi-modality imagery. Next, we describe how local salient keypoints are clustered and modeled as signature surface-patch features suitable for object detection and recognition. During the supervised training phase, multiple views of each test model are presented to the system; a set of multi-scale invariant surface features is extracted from each view and registered as the object class's signature exemplar. These features are then employed during the online recognition phase to generate recognition hypotheses. Once an object of interest is verified and recognized, its attributes are annotated semantically. The coded semantic annotations are then efficiently presented to a Hidden Markov Model (HMM) for spatiotemporal object state discovery and tracking. Through this process, corresponding features of the same objects across sequential multi-modality imagery frames are associated and tracked over time. The proposed algorithm was tested using the IRIS simulation model under two test scenarios: one for activity recognition of ground-based vehicles and one for classification of Unmanned Aerial Vehicles (UAVs). In both scenarios, synthetic SAR and IR imagery generated by the IRIS simulation model is used for training and testing of the newly developed algorithms. Experimental results show that our algorithms are both efficient and effective.
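The spatiotemporal state-tracking step described above can be illustrated with a minimal discrete HMM decoder. This is a generic Viterbi sketch, not the paper's implementation: the two hypothetical states ("stationary", "moving"), the transition/emission probabilities, and the encoding of semantic annotations as integer observation symbols are all illustrative assumptions.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most-likely hidden state sequence for a discrete HMM.

    obs: sequence of observation symbol indices (coded semantic annotations)
    pi:  initial state probabilities, shape (n_states,)
    A:   state transition matrix, A[i, j] = P(state j | state i)
    B:   emission matrix, B[j, k] = P(symbol k | state j)
    """
    n_states, T = len(pi), len(obs)
    delta = np.zeros((T, n_states))          # best path score ending in each state
    psi = np.zeros((T, n_states), dtype=int) # backpointers
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        for j in range(n_states):
            scores = delta[t - 1] * A[:, j]
            psi[t, j] = np.argmax(scores)
            delta[t, j] = scores[psi[t, j]] * B[j, obs[t]]
    # Backtrack from the best final state.
    states = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        states.append(int(psi[t, states[-1]]))
    return states[::-1]

# Illustrative model: state 0 = "stationary", state 1 = "moving";
# symbol 0 = "static signature", symbol 1 = "displaced signature".
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.3, 0.7]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
path = viterbi([0, 0, 1, 1, 1], pi, A, B)  # → [0, 0, 1, 1, 1]
```

With these toy parameters the decoder recovers a stationary-then-moving state sequence from the annotation stream, which is the kind of object-state discovery the abstract attributes to the HMM stage.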
Detection and recognition of 3D objects and their motion characteristics from Synthetic Aperture Radar (SAR) and Infrared (IR) thermal imagery in the presence of significant clutter and occlusion is a highly challenging task. In this paper, we present an efficient technique for generating static and dynamic synthetic SAR and IR imagery data in cluttered virtual environments. These imagery data sets closely represent the view of the physical environment as it would be perceived by physical SAR and IR imaging systems, respectively. We present the IRIS simulation model for the efficient construction and modeling of cluttered virtual environments and discuss our techniques for low-poly 3D object surface-patch generation. Furthermore, we present several test scenarios from which synthetic SAR and IR imagery data sets are obtained and discuss the key control parameters that affect the performance of our synthetic multi-modality imaging systems. Lastly, we describe a method for multi-scale feature extraction from 3D objects based on the synthetic SAR and IR imagery data sets for a variety of ground-based and aerial vehicles and demonstrate the efficiency and effectiveness of this approach in different test scenarios.
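The multi-scale feature extraction mentioned in both abstracts can be sketched with a generic difference-of-Gaussians (DoG) detector over a small scale stack. This is a standard multi-scale scheme offered only as an illustration, not the paper's method: the scale values, threshold, and one-peak-per-scale simplification are all assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_keypoints(img, sigmas=(1.0, 2.0, 4.0), thresh=0.05):
    """Detect salient keypoints as strong difference-of-Gaussian
    responses across a stack of scales (illustrative parameters).

    Returns a list of (row, col, sigma) tuples, at most one per
    adjacent scale pair, keeping only responses above `thresh`.
    """
    blurred = [gaussian_filter(img.astype(float), s) for s in sigmas]
    keypoints = []
    for i in range(len(sigmas) - 1):
        dog = blurred[i] - blurred[i + 1]          # band-pass response
        peak = np.unravel_index(np.argmax(np.abs(dog)), dog.shape)
        if abs(dog[peak]) > thresh:                # keep only salient peaks
            keypoints.append((int(peak[0]), int(peak[1]), sigmas[i]))
    return keypoints

# Toy stand-in for a synthetic SAR/IR frame: one bright point target.
frame = np.zeros((32, 32))
frame[16, 16] = 1.0
kps = multiscale_keypoints(frame)  # detects the target at (16, 16)
```

In a full pipeline, descriptors computed around such keypoints would then be clustered into the surface-patch signature features described in the abstracts; here the toy frame simply shows the detector localizing an isolated point target at its finest responding scale.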