Paper
Semantic perception for ground robotics
25 May 2012
M. Hebert, J. A. Bagnell, M. Bajracharya, K. Daniilidis, L. H. Matthies, L. Mianzo, L. Navarro-Serment, J. Shi, M. Wellfare
Abstract
Semantic perception involves naming objects and features in the scene, understanding the relations between them, and inferring the behaviors and intent of agents, e.g., people, from sensor data. Semantic perception is a central component of future UGVs: it provides representations that 1) can be used for higher-level reasoning and tactical behaviors, beyond the immediate needs of autonomous mobility, and 2) describe the robot's environment intuitively, in terms of semantic elements that can be shared effectively with a human operator. In this paper, we summarize the main approaches that we are investigating in the RCTA as initial steps toward the development of perception systems for UGVs.
© (2012) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
M. Hebert, J. A. Bagnell, M. Bajracharya, K. Daniilidis, L. H. Matthies, L. Mianzo, L. Navarro-Serment, J. Shi, and M. Wellfare "Semantic perception for ground robotics", Proc. SPIE 8387, Unmanned Systems Technology XIV, 83870Y (25 May 2012); https://doi.org/10.1117/12.918915
CITATIONS
Cited by 7 scholarly publications.
KEYWORDS
Image segmentation
3D image processing
Data modeling
Sensors
LIDAR
Environmental sensing
Motion models
