In this paper we present a 3D reconstruction technique designed to support an autonomously navigated unmanned system. The algorithm and methods presented focus on the 3D reconstruction of a scene, with color and distance information, using only a single moving camera. In this way, the system may provide positional self-awareness for navigation within a known, GPS-denied area. It can also be used to construct a new model of unknown areas. Existing 3D reconstruction methods for GPS-denied areas often rely on expensive inertial measurement units to establish camera location and orientation. The algorithm proposed---after the preprocessing tasks of stabilization and video enhancement---performs Speeded-Up Robust Features (SURF) extraction, in which we locate unique stable points within every frame. Additional features are extracted using an optical flow method, with the resultant points fused and pruned based on several quality metrics. Each unique point is then tracked through the video sequence and assigned a disparity value used to compute the depth of each feature within the scene. The algorithm also assigns each feature point a horizontal and vertical coordinate using the camera's field-of-view specifications. The result is a point cloud of thousands of feature points, plotted from a particular camera position and direction and generated from pairs of sequential frames. The proposed method can use the yaw, pitch, and roll information calculated from visual cues within the image data to accurately compute location and orientation. This positioning information enables the reconstruction of a robust 3D model particularly suitable for autonomous navigation and mapping tasks.

© (2011) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
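The depth and coordinate assignment described above can be illustrated with a minimal sketch. The function name, the field-of-view values, and the inter-frame baseline below are all assumptions for illustration (the paper does not specify them); the depth follows the standard stereo relation Z = f·B/d, and the horizontal and vertical coordinates are derived from the pixel offset scaled by the camera's field of view:

```python
import numpy as np

def feature_to_point(u, v, disparity, img_w, img_h,
                     hfov_deg=60.0, vfov_deg=45.0, baseline=0.1):
    """Hypothetical sketch: map one tracked feature to a 3D point.

    u, v      : pixel coordinates of the feature
    disparity : pixel shift of the feature between sequential frames
    hfov_deg, vfov_deg : camera field of view in degrees (assumed values)
    baseline  : camera translation between the frame pair in metres (assumed)
    """
    # Focal length in pixels, derived from the horizontal field of view.
    f_px = (img_w / 2.0) / np.tan(np.radians(hfov_deg) / 2.0)
    # Stereo-style depth from disparity: Z = f * B / d.
    z = f_px * baseline / disparity
    # Angular offsets from the optical axis, via the field of view.
    az = np.radians(hfov_deg) * (u - img_w / 2.0) / img_w
    el = np.radians(vfov_deg) * (v - img_h / 2.0) / img_h
    # Lateral offsets of the feature at depth z.
    x = z * np.tan(az)
    y = z * np.tan(el)
    return x, y, z
```

Applying this to every fused-and-pruned feature in a frame pair yields the point cloud described in the abstract; accumulating clouds across frame pairs, repositioned by the visually estimated yaw, pitch, and roll, builds the full model.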