Proceedings Article

3D reconstruction from a monocular vision system for unmanned ground vehicles

Author Affiliations
R. Cortland Tompkins, Yakov Diskin, Menatoallah M. Youssef, Vijayan K. Asari

Univ. of Dayton (USA)

Proc. SPIE 8186, Electro-Optical Remote Sensing, Photonic Technologies, and Applications V, 818608 (October 19, 2011); doi:10.1117/12.897749
From Conference Volume 8186

  • Electro-Optical Remote Sensing, Photonic Technologies, and Applications V
  • Gary W. Kamerman; Ove Steinvall; Gary J. Bishop; John D. Gonglewski; Keith L. Lewis; Richard C. Hollins; Thomas J. Merlet
  • Prague, Czech Republic | September 19, 2011

Abstract

In this paper we present a 3D reconstruction technique designed to support an autonomously navigated unmanned system. The algorithm and methods presented focus on the 3D reconstruction of a scene, with color and distance information, using only a single moving camera. In this way, the system can provide positional self-awareness for navigation within a known, GPS-denied area; it can also be used to construct a new model of unknown areas. Existing 3D reconstruction methods for GPS-denied areas often rely on expensive inertial measurement units to establish camera location and orientation. After the preprocessing tasks of stabilization and video enhancement, the proposed algorithm performs Speeded-Up Robust Feature (SURF) extraction, locating unique stable points within every frame. Additional features are extracted using an optical flow method, and the resulting points are fused and pruned based on several quality metrics. Each unique point is then tracked through the video sequence and assigned a disparity value used to compute the depth of each feature within the scene. The algorithm also assigns each feature point a horizontal and vertical coordinate using the camera's field-of-view specifications. From this, the resultant point cloud, generated from pairs of sequential frames, consists of thousands of feature points plotted from a particular camera position and viewing direction. The proposed method can use the yaw, pitch and roll information calculated from visual cues within the image data to accurately compute location and orientation. This positioning information enables the reconstruction of a robust 3D model particularly suitable for autonomous navigation and mapping tasks.
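
As an illustration of how such a monocular pipeline can be assembled, the sketch below strings together the stages named in the abstract (SURF extraction, optical-flow feature detection, point tracking, disparity-to-depth conversion, and field-of-view-based coordinate assignment) using Python and OpenCV. It is a rough approximation, not the authors' implementation: the detector parameters, the pruning rule, the depth relation Z = fx * B / d with the inter-frame camera translation B used as a baseline, and the back-projection formulas are all assumptions introduced here for illustration.

    # Minimal sketch of a monocular reconstruction pipeline in the spirit of the
    # abstract. Assumptions: SURF (opencv-contrib) with ORB fallback, Lucas-Kanade
    # tracking, a stereo-like depth relation with the inter-frame translation as
    # baseline, and pinhole back-projection derived from the field-of-view specs.
    import numpy as np
    import cv2

    def detect_features(gray):
        """SURF keypoints fused with optical-flow corner candidates."""
        try:
            detector = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
        except (AttributeError, cv2.error):
            detector = cv2.ORB_create(nfeatures=2000)  # fallback if SURF unavailable
        kps = detector.detect(gray, None)
        surf_pts = np.float32([kp.pt for kp in kps]).reshape(-1, 1, 2)
        corner_pts = cv2.goodFeaturesToTrack(gray, maxCorners=2000,
                                             qualityLevel=0.01, minDistance=5)
        return np.vstack([surf_pts, corner_pts]) if corner_pts is not None else surf_pts

    def reconstruct_pair(prev_gray, gray, fov_h_deg, fov_v_deg, baseline_m):
        """Return an Nx3 point cloud from two consecutive (stabilized) frames."""
        h, w = gray.shape
        # Focal lengths in pixels derived from the camera's field-of-view specs.
        fx = (w / 2.0) / np.tan(np.radians(fov_h_deg) / 2.0)
        fy = (h / 2.0) / np.tan(np.radians(fov_v_deg) / 2.0)

        prev_pts = detect_features(prev_gray)
        # Track each unique point into the next frame with Lucas-Kanade optical flow.
        next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
        good = status.ravel() == 1
        p0, p1 = prev_pts[good].reshape(-1, 2), next_pts[good].reshape(-1, 2)

        # Prune low-quality tracks; here simply drop near-zero disparities
        # (the threshold is an assumption, standing in for the paper's quality metrics).
        disparity = np.linalg.norm(p1 - p0, axis=1)
        keep = disparity > 1.0
        p0, disparity = p0[keep], disparity[keep]

        # Depth from disparity (assumed relation Z = fx * B / d, inter-frame baseline B).
        Z = fx * baseline_m / disparity
        # Horizontal/vertical coordinates from the pixel offset and focal lengths.
        X = (p0[:, 0] - w / 2.0) * Z / fx
        Y = (p0[:, 1] - h / 2.0) * Z / fy
        return np.column_stack([X, Y, Z])

Running reconstruct_pair over consecutive stabilized frame pairs, and rotating each partial cloud by the yaw, pitch and roll estimated from visual cues, would then accumulate the full scene model described in the abstract.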

© (2011) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Citation

R. Cortland Tompkins; Yakov Diskin; Menatoallah M. Youssef and Vijayan K. Asari, "3D reconstruction from a monocular vision system for unmanned ground vehicles", Proc. SPIE 8186, Electro-Optical Remote Sensing, Photonic Technologies, and Applications V, 818608 (October 19, 2011); doi:10.1117/12.897749; http://dx.doi.org/10.1117/12.897749

