Paper
17 February 2006
Key-text spotting in documentary videos using Adaboost
M. Lalonde, L. Gagnon
Abstract
This paper presents a method for spotting key-text in videos, based on a cascade of classifiers trained with Adaboost. The video is first reduced to a set of key-frames. Each key-frame is then analyzed for its text content. Text spotting is performed by scanning the image with a variable-size window (to account for scale) within which simple features (mean/variance of grayscale values and x/y derivatives) are extracted in various sub-areas. Training builds classifiers using the most discriminant spatial combinations of features for text detection. The text-spotting module outputs a decision map of the size of the input key-frame showing regions of interest that may contain text suitable for recognition by an OCR system. Performance is measured against a dataset of 147 key-frames extracted from 22 documentary films of the National Film Board (NFB) of Canada. A detection rate of 97% is obtained with relatively few false alarms.
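The abstract describes a sliding-window detector: simple statistics are computed over sub-areas of each window and fed to a boosted cascade that accepts or rejects the position. The sketch below (Python) illustrates that pipeline under stated assumptions; it is not the authors' code, and the window sizes, the 2x2 sub-area grid, the stage structure, and all thresholds are hypothetical placeholders.

```python
# Illustrative sketch of cascade-based text spotting on a grayscale key-frame.
# Features per window: mean/variance of gray values and of x/y derivatives,
# computed over a 2x2 grid of sub-areas (a hypothetical layout).
import numpy as np


def window_features(patch: np.ndarray) -> np.ndarray:
    """Mean/variance of gray values and x/y derivatives over a 2x2 sub-grid."""
    gy, gx = np.gradient(patch.astype(float))  # y- and x-derivatives
    h, w = patch.shape
    feats = []
    for channel in (patch.astype(float), gx, gy):
        for i in (0, 1):
            for j in (0, 1):
                sub = channel[i * h // 2:(i + 1) * h // 2,
                              j * w // 2:(j + 1) * w // 2]
                feats.extend([sub.mean(), sub.var()])
    return np.array(feats)


def cascade_predict(feats: np.ndarray, cascade) -> bool:
    """Each stage is (weights, thresholds, stage_threshold); reject early if a stage fails."""
    for weights, thresholds, stage_threshold in cascade:
        score = np.sum(weights * (feats > thresholds))  # weighted vote of weak classifiers
        if score < stage_threshold:
            return False
    return True


def spot_text(frame: np.ndarray, cascade, window_sizes=(16, 32, 64), step=8):
    """Return a binary decision map the size of the frame marking likely text regions."""
    decision = np.zeros(frame.shape, dtype=bool)
    for size in window_sizes:  # variable-size window to account for scale
        for y in range(0, frame.shape[0] - size, step):
            for x in range(0, frame.shape[1] - size, step):
                patch = frame[y:y + size, x:x + size]
                if cascade_predict(window_features(patch), cascade):
                    decision[y:y + size, x:x + size] = True
    return decision
```

In this sketch the cascade parameters would come from AdaBoost training on labeled text/non-text windows; the decision map can then be passed to an OCR stage, as the abstract outlines.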
© (2006) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
M. Lalonde and L. Gagnon "Key-text spotting in documentary videos using Adaboost", Proc. SPIE 6064, Image Processing: Algorithms and Systems, Neural Networks, and Machine Learning, 60641N (17 February 2006); https://doi.org/10.1117/12.641924
CITATIONS
Cited by 10 scholarly publications.
KEYWORDS
Video
Error analysis
Optical character recognition
Feature extraction
Image processing
Sensors
Image quality