Paper
Adaptive gaze estimation based on channel attention mechanism
29 March 2023
Proceedings Volume 12594, Second International Conference on Electronic Information Engineering and Computer Communication (EIECC 2022); 125942O (2023) https://doi.org/10.1117/12.2671296
Event: Second International Conference on Electronic Information Engineering and Computer Communication (EIECC 2022), 2022, Xi'an, China
Abstract
Attention mechanisms have been found to be effective for human gaze estimation. To address the limited ability of traditional attention to extract higher-order contextual information in gaze estimation tasks, a gaze estimation network based on the ECA (Efficient Channel Attention) mechanism is proposed. The network exploits the channel relations of features through a global average pooling layer without dimensionality reduction, suppressing facial regions that do not contribute to gaze estimation and activating the subtle facial features that do. The model thus makes full use of the user's appearance, which helps improve the accuracy of gaze estimation. Experiments on the MPIIGaze dataset show that the channel-attention-based network reduces the estimation error and that the proposed model achieves more accurate gaze estimation.
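The abstract describes an ECA-style channel attention block: each feature map is summarized by global average pooling, and channel weights are produced without any dimensionality reduction before rescaling the features. Below is a minimal PyTorch sketch following the standard ECA formulation (a local 1D convolution across the pooled channel descriptors); the paper's exact layer placement and hyperparameters are not given in the abstract, so the class name, kernel-size rule, and parameter values here are illustrative assumptions.

```python
import math
import torch
import torch.nn as nn

class ECABlock(nn.Module):
    """ECA-style channel attention: global average pooling followed by a
    1D convolution across channels, with no dimensionality reduction."""

    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        # Kernel size adapted to the channel count, forced to be odd
        # (common ECA heuristic; assumed here, not specified in the abstract).
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        y = self.pool(x)                       # (B, C, 1, 1) channel descriptors
        y = y.squeeze(-1).transpose(-1, -2)    # (B, 1, C)
        y = self.conv(y)                       # local cross-channel interaction
        y = y.transpose(-1, -2).unsqueeze(-1)  # (B, C, 1, 1)
        w = self.sigmoid(y)                    # per-channel attention weights
        return x * w.expand_as(x)              # reweight the feature channels

# Example: reweighting a backbone feature map before the gaze regression head.
feat = torch.randn(8, 64, 14, 14)
feat = ECABlock(channels=64)(feat)
```

In a gaze estimation pipeline of this kind, the learned channel weights would down-weight feature channels tied to uninformative facial regions and emphasize those carrying subtle gaze cues, which is how the suppression and activation described above would typically be realized.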
© (2023) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Chenwei Zhao, Hui Xu, Bicai Yin, and Jingyi Zhao "Adaptive gaze estimation based on channel attention mechanism", Proc. SPIE 12594, Second International Conference on Electronic Information Engineering and Computer Communication (EIECC 2022), 125942O (29 March 2023); https://doi.org/10.1117/12.2671296
KEYWORDS
Convolution, Education and training, Feature extraction, Machine learning, Deep learning