We propose and simulate a method for reconstructing a three-dimensional scene from a two-dimensional image, with the goal of developing and augmenting world models for autonomous navigation. The method extends the Perspective-n-Point (PnP) approach, using a sampling of 3D scene point–2D image point pairings and Random Sample Consensus (RANSAC) to infer object pose and produce a 3D mesh of the original scene. Using object recognition and segmentation, we simulate the method on a scene of 3D objects with an eye to implementation on embeddable hardware. The final solution will be deployed on the NVIDIA Tegra platform.
Franz Parkins and Eddie Jacobs
"Three-dimensional scene reconstruction from a two-dimensional image", Proc. SPIE 10199, Geospatial Informatics, Fusion, and Motion Video Analytics VII, 1019909 (1 May 2017); https://doi.org/10.1117/12.2266411