Advancements in remote sensing capabilities have led to unprecedented quantity and quality of data across a number of sensing modalities. It is now possible to outfit nearly any mobile platform not only with high-resolution cameras, but also with inexpensive infrared and LIDAR sensors. With the specific goal of providing a comprehensive assessment of vehicle maneuverability, we address the problem of co-registering multiple sensor phenomenologies, such as visual, infrared, and LIDAR imagery, collected from vehicle-mounted sensors. We show that data fusion across these sensors provides invaluable information for hazard detection, localization, and classification. In addition, the co-registered measurements make enhanced machine learning methods on heterogeneous data feasible. This approach is verified on a dataset collected by the U.S. Army ERDC.
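The abstract does not specify the registration method used; as a generic illustration of the kind of co-registration involved, the sketch below projects 3D LIDAR points into a camera image plane with a pinhole model. The intrinsic matrix `K` and the extrinsics `R`, `t` are assumed (hypothetical) calibration parameters, not values from the paper.

```python
import numpy as np

def project_lidar_to_image(points_xyz, K, R, t):
    """Project 3D LIDAR points (N x 3, LIDAR frame) into pixel coordinates.

    K    : 3x3 camera intrinsic matrix (assumed calibration)
    R, t : rotation (3x3) and translation (3,) taking LIDAR coordinates
           into the camera frame (assumed extrinsics)
    Returns pixel coordinates for points in front of the camera, plus the
    boolean mask of which input points were kept.
    """
    cam = points_xyz @ R.T + t           # transform into the camera frame
    in_front = cam[:, 2] > 0             # discard points behind the camera
    cam = cam[in_front]
    uvw = cam @ K.T                      # apply intrinsics
    uv = uvw[:, :2] / uvw[:, 2:3]        # perspective divide -> pixels
    return uv, in_front

# Toy example with an identity pose and simple intrinsics:
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[0.0, 0.0, 10.0],        # straight ahead -> principal point
                [1.0, 0.0, 10.0]])
uv, mask = project_lidar_to_image(pts, K, R, t)
print(uv[0])  # -> [320. 240.]
```

Once LIDAR returns are mapped to pixel coordinates this way, per-pixel visual and infrared measurements can be associated with range data, which is the kind of fused representation the abstract describes feeding into heterogeneous-data learning methods.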