Paper
19 June 2017
Gaze inspired subtitle position evaluation for MOOCs videos
Hongli Chen, Mengzhen Yan, Sijiang Liu, Bo Jiang
Proceedings Volume 10443, Second International Workshop on Pattern Recognition; 1044318 (2017) https://doi.org/10.1117/12.2280281
Event: Second International Workshop on Pattern Recognition, 2017, Singapore, Singapore
Abstract
Online educational resources, such as MOOCs, are becoming increasingly popular, especially in higher education. One of the most important media types for MOOCs is the course video. Besides the traditional bottom-positioned subtitles that accompany such videos, researchers have in recent years tried to develop more advanced algorithms that generate speaker-following subtitles. However, the effectiveness of such subtitles is still unclear. In this paper, we investigate the relationship between subtitle position and the learning effect after watching the video on tablet devices. Inspired by image-based human eye tracking techniques, this work combines objective gaze estimation statistics with a subjective user study to reach a convincing conclusion: speaker-following subtitles are more suitable for online educational videos.
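As an illustration of the "objective gaze estimation statistics" mentioned in the abstract, the sketch below shows how per-frame gaze estimates could be aggregated into dwell-time proportions for different screen regions (a fixed bottom subtitle band versus a speaker-following subtitle box). This is not code from the paper; the region coordinates, field names, and sample data are hypothetical.

```python
# Illustrative sketch only: aggregates estimated gaze points into dwell-time
# proportions for a few screen regions. Region coordinates, data fields, and
# the sample gaze trace are hypothetical, not taken from the paper.

def in_region(x, y, region):
    """Return True if a normalized gaze point (x, y) lies inside region=(x0, y0, x1, y1)."""
    x0, y0, x1, y1 = region
    return x0 <= x <= x1 and y0 <= y <= y1

def dwell_proportions(gaze_samples, regions):
    """gaze_samples: list of (x, y) pairs in normalized screen coordinates [0, 1].
    regions: dict mapping region name -> bounding box (x0, y0, x1, y1).
    Returns the fraction of gaze samples falling inside each region."""
    counts = {name: 0 for name in regions}
    for x, y in gaze_samples:
        for name, box in regions.items():
            if in_region(x, y, box):
                counts[name] += 1
    total = max(len(gaze_samples), 1)
    return {name: c / total for name, c in counts.items()}

if __name__ == "__main__":
    # Hypothetical layout: bottom subtitle band, a subtitle box near the
    # speaker in the upper-left, and the slide area of a lecture video.
    regions = {
        "bottom_subtitle": (0.05, 0.85, 0.95, 0.98),
        "speaker_following_subtitle": (0.10, 0.20, 0.45, 0.30),
        "lecture_slides": (0.50, 0.10, 0.95, 0.70),
    }
    # Fake gaze trace standing in for per-frame eye-tracking output.
    gaze_samples = [(0.3, 0.9), (0.2, 0.25), (0.7, 0.4), (0.6, 0.88), (0.15, 0.22)]
    print(dwell_proportions(gaze_samples, regions))
```

Comparing such dwell-time proportions across viewing conditions is one way the objective gaze data could be related to the subjective user-study results.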
© (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Hongli Chen, Mengzhen Yan, Sijiang Liu, and Bo Jiang "Gaze inspired subtitle position evaluation for MOOCs videos", Proc. SPIE 10443, Second International Workshop on Pattern Recognition, 1044318 (19 June 2017); https://doi.org/10.1117/12.2280281
CITATIONS
Cited by 1 scholarly publication.
KEYWORDS
Video, Tablets, Eye, Detection and tracking algorithms, Video processing, Algorithm development, Statistical analysis