17 June 2024
Face antispoofing method based on single-modal and lightweight network
Guoxiang Tong, Xinrong Yan
Abstract

In the field of face antispoofing, researchers are increasingly focusing on multimodal approaches and feature fusion. Although multimodal approaches are more effective than single-modal ones, they typically involve a large number of parameters, require substantial computational resources, and are difficult to deploy on mobile devices. To address these real-time constraints, we propose a fast, lightweight framework based on ShuffleNet V2. Our approach takes patch-level images as input, enhances the performance of each unit by introducing an attention module, and mitigates dataset sample imbalance through the focal loss function. We evaluate the model on the CASIA-FASD, Replay-Attack, and MSU-MFSD datasets. The results demonstrate that our method outperforms current state-of-the-art methods in both intratest and intertest scenarios. Furthermore, the network has only 0.84 M parameters and 0.81 GFLOPs, making it suitable for mobile and real-time deployment. Our work can serve as a valuable reference for researchers developing single-modal face antispoofing methods for mobile and real-time applications.
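The abstract names the focal loss as the mechanism for handling sample imbalance between live and spoof examples. As a rough illustration only (the paper's hyperparameters are not given here; the alpha and gamma defaults below are the common values from the original focal loss formulation, assumed purely for the sketch), the per-sample binary focal loss can be written as:

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for one prediction.

    p: predicted probability of the positive (live) class.
    y: ground-truth label, 1 = live, 0 = spoof.
    alpha, gamma: balancing and focusing hyperparameters
    (illustrative defaults, not the authors' settings).
    """
    p_t = p if y == 1 else 1.0 - p          # probability of the true class
    alpha_t = alpha if y == 1 else 1.0 - alpha
    # (1 - p_t)^gamma down-weights easy, well-classified samples,
    # so abundant easy examples dominate the total loss less.
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

# A confident correct prediction contributes far less loss
# than a misclassified one.
easy = focal_loss(0.95, 1)
hard = focal_loss(0.30, 1)
```

With gamma = 0 the modulating factor vanishes and the expression reduces to alpha-weighted cross-entropy, which is why focal loss is usually described as a generalization of it.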

© 2024 SPIE and IS&T
Guoxiang Tong and Xinrong Yan "Face antispoofing method based on single-modal and lightweight network," Journal of Electronic Imaging 33(3), 033030 (17 June 2024). https://doi.org/10.1117/1.JEI.33.3.033030
Received: 29 December 2023; Accepted: 23 May 2024; Published: 17 June 2024
KEYWORDS: Convolution, RGB color model, Video, Performance modeling, Data modeling, Feature extraction, Statistical modeling
