Paper
Robust visual tracking based on online learning of joint sparse dictionary
24 December 2013
Qiaozhe Li, Yu Qiao, Jie Yang, Li Bai
Proceedings Volume 9067, Sixth International Conference on Machine Vision (ICMV 2013); 90671E (2013) https://doi.org/10.1117/12.2051541
Event: Sixth International Conference on Machine Vision (ICMV 13), 2013, London, United Kingdom
Abstract
In this paper, we propose a robust visual tracking algorithm based on online learning of a joint sparse dictionary. The joint sparse dictionary consists of positive and negative sub-dictionaries, which model the foreground target and the background, respectively. An online dictionary learning method is developed to update the joint sparse dictionary by selecting both positive and negative bases from bags of positive and negative image patches/templates during tracking. A linear classifier is trained on the sparse coefficients of image patches in the current frame, which are computed over the joint sparse dictionary. This classifier is then used to locate the target in the next frame. Experimental results show that our tracking method is robust against object variation, occlusion, and illumination change.
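The pipeline described in the abstract — sparse-code each candidate patch over the joint positive/negative dictionary, then score it with a linear classifier trained on those sparse coefficients — can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: the dictionary contents, the orthogonal-matching-pursuit coder, the sparsity level, and the ridge-regularized classifier are all assumptions made for the example.

```python
import numpy as np

def sparse_code(D, x, n_nonzero=5):
    """Greedy orthogonal matching pursuit (an assumed coder): approximate
    patch x as a sparse combination of dictionary atoms (columns of D)."""
    residual = x.copy()
    support = []
    coef_s = np.zeros(0)
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        if j not in support:
            support.append(j)
        # Refit coefficients on the current support, update the residual.
        coef_s, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef_s
    coef = np.zeros(D.shape[1])
    coef[support] = coef_s
    return coef

rng = np.random.default_rng(0)
d, n_pos, n_neg = 32, 8, 8

# Hypothetical joint dictionary: the first n_pos atoms stand in for
# foreground templates, the rest for background patches collected online.
D = rng.normal(size=(d, n_pos + n_neg))
D /= np.linalg.norm(D, axis=0)  # unit-norm atoms

# Synthetic labeled patches: each is one atom of its sub-dictionary plus noise.
fg = np.stack([D[:, i % n_pos] + 0.05 * rng.normal(size=d)
               for i in range(20)], axis=1)
bg = np.stack([D[:, n_pos + i % n_neg] + 0.05 * rng.normal(size=d)
               for i in range(20)], axis=1)
X = np.hstack([fg, bg])
y = np.array([1.0] * 20 + [-1.0] * 20)  # +1 foreground, -1 background

# Sparse coefficients over the joint dictionary are the classifier features.
C = np.stack([sparse_code(D, X[:, i]) for i in range(X.shape[1])])

# Ridge-regularized linear classifier trained on the sparse codes.
w = np.linalg.solve(C.T @ C + 1e-3 * np.eye(C.shape[1]), C.T @ y)

# Score a new candidate patch: a positive score indicates the target.
candidate = D[:, 0] + 0.05 * rng.normal(size=d)
score = float(sparse_code(D, candidate) @ w)
```

In a tracker, the candidate patches would be sampled around the previous target location, and the dictionary and classifier would be updated as new positive and negative patches are bagged each frame.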
© (2013) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Qiaozhe Li, Yu Qiao, Jie Yang, and Li Bai "Robust visual tracking based on online learning of joint sparse dictionary", Proc. SPIE 9067, Sixth International Conference on Machine Vision (ICMV 2013), 90671E (24 December 2013); https://doi.org/10.1117/12.2051541
CITATIONS
Cited by 1 scholarly publication.
KEYWORDS
Associative arrays, Detection and tracking algorithms, Optical tracking, Algorithm development, Knowledge management, Control systems