Leidos has completed a two-year Rapid Innovation Fund (RIF) effort with the Army CCDC Ground Vehicle Systems Center (GVSC) entitled "Vision Based Localization" (VBL) to provide long-duration precision navigation for ground vehicles in a GPS-denied environment. The Leidos system, called the Vision Integrated Spatial Estimator (VISE), uses Convolutional Neural Networks (CNNs) to extract position information from monocular camera feeds. VISE runs the Leidos Dynamically Reconfigurable Particle Filter (DRPF) as the engine for sensor fusion, enabling incorporation of open-source road network information to aid the navigation solution in real time without having to make simplifying assumptions about the measurement likelihood distribution. The VISE system was demonstrated in September 2019 by completing a 4-hour, 160 km drive test in Detroit, MI under GPS-denied conditions, achieving a median error of less than 20 m and a final error of 20 m. Details of the results are presented, including video of the particle filtering system and the CNN processing.
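The abstract's key point about the DRPF is that a particle filter fuses measurements by re-weighting particles with an arbitrary likelihood function, so a CNN-derived position likelihood (which may be multimodal and non-Gaussian, e.g. when two intersections look alike) can be incorporated directly. The sketch below is a minimal, hypothetical illustration of that idea in Python/NumPy; it is not the Leidos DRPF implementation, and the motion model, noise values, and bimodal "CNN" likelihood are invented for demonstration.

```python
# Minimal particle-filter sketch (hypothetical; not the Leidos DRPF).
# Illustrates fusing a non-Gaussian, CNN-derived position likelihood
# by simply re-weighting particles -- no Gaussian assumption required.
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, velocity, dt, process_noise=1.0):
    """Propagate 2-D position particles with a simple constant-velocity motion model."""
    return particles + velocity * dt + rng.normal(0.0, process_noise, particles.shape)

def update(particles, weights, likelihood_fn):
    """Re-weight particles with any measurement likelihood, e.g. a CNN position score."""
    weights = weights * likelihood_fn(particles)
    weights = weights + 1e-300            # guard against an all-zero weight vector
    return weights / weights.sum()

def resample(particles, weights):
    """Systematic resampling to combat particle degeneracy."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    idx = np.minimum(idx, n - 1)          # protect against floating-point overrun
    return particles[idx], np.full(n, 1.0 / n)

# Example measurement: a bimodal "CNN" likelihood over position
# (e.g. two visually similar intersections 200 m apart).
def cnn_likelihood(p):
    return (np.exp(-0.5 * np.sum((p - [100.0, 50.0]) ** 2, axis=1) / 25.0) +
            0.5 * np.exp(-0.5 * np.sum((p - [300.0, 50.0]) ** 2, axis=1) / 25.0))

particles = rng.uniform(0.0, 400.0, size=(5000, 2))   # meters, local frame
weights = np.full(len(particles), 1.0 / len(particles))

particles = predict(particles, velocity=np.array([10.0, 0.0]), dt=1.0)
weights = update(particles, weights, cnn_likelihood)
particles, weights = resample(particles, weights)
print("estimated position:", np.average(particles, axis=0, weights=weights))
```

In this toy example the posterior concentrates around the dominant mode of the CNN likelihood; in a real system, additional aiding sources such as road network constraints would further disambiguate the multimodal estimate over time.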
Jonathan Ryan, Paul Muench, Kyung-Min Su, David Francis, Christopher Rose, Scott Sexton, Aaron Maitland, "Using vision navigation and convolutional neural networks to provide absolute position aiding for ground vehicles," Proc. SPIE 11758, Unmanned Systems Technology XXIII, 117580A (26 April 2021); https://doi.org/10.1117/12.2586116