Learning object models from few examples
13 May 2016
Ishan Misra, Yuxiong Wang, Martial Hebert
Abstract
Current computer vision systems rely primarily on fixed models learned in a supervised fashion, i.e., with extensive manually labelled data. This is appropriate when all possible visual queries can be anticipated in advance, but it does not scale to scenarios in which new objects must be added while the system is operating, as in dynamic interaction with unmanned ground vehicles (UGVs). For example, the user might discover a new type of object of interest, e.g., a particular vehicle, that needs to be added to the system right away. In such cases, the supervised approach is impractical: there is no time to acquire extensive data and annotate it. In this paper, we describe techniques for rapidly updating or creating models from sparsely labelled data. The techniques address scenarios in which only a few annotated training samples are available and must be used to generate models suitable for recognition. These approaches are crucial for on-the-fly insertion of models by users and for online learning.
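The abstract does not describe a specific algorithm, but the general idea of generating a recognition model from only a few annotated samples can be illustrated with a common transfer-learning sketch: keep a pre-trained feature extractor fixed and fit only a small classifier head on the handful of labelled examples. The following Python/PyTorch snippet is a hypothetical illustration under those assumptions, not the authors' method; the function names, choice of backbone, and hyperparameters are ours.

    # Hypothetical sketch (not the paper's method): adapt a pre-trained CNN to
    # new object categories using only a few labelled images by freezing the
    # backbone and training a fresh linear classifier head.
    import torch
    import torch.nn as nn
    import torchvision.models as models

    def build_few_shot_model(num_classes: int) -> nn.Module:
        # Pre-trained ImageNet backbone acts as a generic feature extractor.
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        for p in backbone.parameters():
            p.requires_grad = False  # keep the generic features fixed
        # Replace the final layer so only this small head is learned.
        backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
        return backbone

    def fit_on_few_examples(model: nn.Module,
                            images: torch.Tensor,   # shape (N, 3, 224, 224), N small
                            labels: torch.Tensor,   # shape (N,)
                            epochs: int = 25,
                            lr: float = 1e-2) -> nn.Module:
        # Only the new classifier head is optimised, so a handful of
        # labelled examples is enough to produce a usable model quickly.
        opt = torch.optim.SGD(model.fc.parameters(), lr=lr, momentum=0.9)
        loss_fn = nn.CrossEntropyLoss()
        model.train()
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
        return model

Because only the small head is updated, the same pattern also supports on-the-fly insertion of a new category during operation: collect a few labelled images, call fit_on_few_examples, and the updated model is available immediately.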
Ishan Misra, Yuxiong Wang, and Martial Hebert "Learning object models from few examples", Proc. SPIE 9837, Unmanned Systems Technology XVIII, 98370O (13 May 2016); https://doi.org/10.1117/12.2231108
KEYWORDS
Sensors, Data modeling, Video, Systems modeling, Performance modeling, Statistical modeling, Visual process modeling
