Paper
8 October 2007
Self-localization of a mobile robot using a single vision sensor
Jaehong Shim
Proceedings Volume 6718, Optomechatronic Computer-Vision Systems II; 67180B (2007) https://doi.org/10.1117/12.754552
Event: International Symposium on Optomechatronic Technologies, 2007, Lausanne, Switzerland
Abstract
Two of the most important capabilities of an autonomous mobile robot are building a map of its surrounding environment and estimating its own location within that map. This paper proposes a real-time localization and map-building method based on 3-D reconstruction of scale-invariant features from a single camera. A mobile robot fitted with a monocular camera facing a wall extracts scale-invariant features from each image using SIFT (Scale Invariant Feature Transform) as it follows the wall. The extracted features are matched, transformed into absolute coordinates through 3-D point reconstruction and geometrical analysis of the surrounding environment, and stored as a feature map in a database. Once the feature map is built, the robot matches newly observed points against the stored map and estimates its pose from affine parameters in real time. In experiments, the position error of the proposed method was at most 6.2 cm and the angular error was within 5°.
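The abstract outlines a two-phase pipeline: SIFT features from monocular wall images are reconstructed and stored as a map, and at run time newly detected features are matched against that map and the pose is recovered from a fitted affine transform. The sketch below illustrates only that flow; it assumes OpenCV's SIFT and affine-estimation routines (which the 2007 paper does not use), and the function names and the map_points/map_descriptors structures are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np
import cv2

# Hypothetical sketch of the matching/localization phase described in the
# abstract, using OpenCV's SIFT implementation (an assumption, not the
# paper's own code).

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

def extract_features(image):
    """Detect SIFT keypoints and descriptors in one wall image."""
    keypoints, descriptors = sift.detectAndCompute(image, None)
    return keypoints, descriptors

def match_to_map(descriptors, map_descriptors, ratio=0.75):
    """Match current descriptors against the stored feature map,
    using Lowe's ratio test to discard ambiguous matches."""
    matches = matcher.knnMatch(descriptors, map_descriptors, k=2)
    return [m for m, n in matches if m.distance < ratio * n.distance]

def estimate_pose(keypoints, map_points, matches):
    """Estimate a planar pose from an affine transform fitted to the
    matched point pairs (translation plus heading angle)."""
    src = np.float32([keypoints[m.queryIdx].pt for m in matches])
    dst = np.float32([map_points[m.trainIdx] for m in matches])
    affine, _inliers = cv2.estimateAffinePartial2D(src, dst)
    # A 2x3 partial-affine matrix encodes rotation, scale and translation;
    # the heading is recovered from its rotation part.
    theta = np.arctan2(affine[1, 0], affine[0, 0])
    tx, ty = affine[0, 2], affine[1, 2]
    return tx, ty, np.degrees(theta)
```

In the map-building phase the reconstructed points and their descriptors would be accumulated into map_points and map_descriptors; during navigation each new frame would pass through extract_features, match_to_map, and estimate_pose to obtain the robot's position and heading.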
© (2007) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Jaehong Shim "Self-localization of a mobile robot using a single vision sensor", Proc. SPIE 6718, Optomechatronic Computer-Vision Systems II, 67180B (8 October 2007); https://doi.org/10.1117/12.754552
KEYWORDS
Sensors
Mobile robots
Cameras
Image processing
Robotic systems
Embedded systems
Feature extraction