Despite significant progress in recent years, deep face recognition is often treated as a “black box” and has been criticized for lacking explainability. It is becoming increasingly important to understand the characteristics and decisions of deep face recognition systems in order to make them more acceptable to the public. Explainable face recognition (XFR) refers to the problem of interpreting why a recognition model matches a probe face with one identity over others. Recent studies have explored the use of visual saliency maps as an explanation mechanism, but they often lack a deeper analysis in the context of face recognition. This paper starts by proposing a rigorous definition of XFR that focuses on the decision-making process of the deep recognition model. Based on that definition, a similarity-based RISE algorithm (S-RISE) is then introduced to produce high-quality visual saliency maps for a deep face recognition model. Furthermore, an evaluation approach is proposed to systematically validate the reliability and accuracy of general visual saliency-based XFR methods.
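The abstract describes S-RISE only at a high level. As a rough illustration of the general idea behind RISE-style, similarity-weighted saliency (not the paper's actual implementation), one can randomly mask the probe image and weight each mask by the cosine similarity between the embedding of the masked probe and the embedding of a reference image; `embed` below is a hypothetical face-embedding function standing in for the deep recognition model:

```python
import numpy as np

def similarity_saliency(probe, reference, embed, n_masks=1000, p=0.5, seed=0):
    """Sketch of a RISE-style, similarity-weighted saliency map.

    probe, reference: H x W x C float arrays.
    embed: hypothetical function mapping an image to a feature vector.
    Returns an H x W saliency map (higher = more influential region).
    """
    rng = np.random.default_rng(seed)
    h, w = probe.shape[:2]
    ref_emb = embed(reference)
    ref_emb = ref_emb / (np.linalg.norm(ref_emb) + 1e-12)
    saliency = np.zeros((h, w))
    total = 0.0
    for _ in range(n_masks):
        # Coarse random binary mask, upsampled to image size (as in RISE).
        coarse = (rng.random((7, 7)) < p).astype(float)
        mask = np.kron(coarse, np.ones((h // 7 + 1, w // 7 + 1)))[:h, :w]
        emb = embed(probe * mask[..., None])
        emb = emb / (np.linalg.norm(emb) + 1e-12)
        sim = float(emb @ ref_emb)  # cosine similarity as the mask's weight
        saliency += sim * mask
        total += sim
    return saliency / max(total, 1e-8)
```

Regions whose occlusion lowers the probe-to-reference similarity accumulate less weight, so high values indicate pixels the match decision depends on. The mask grid size, number of masks, and normalization are illustrative choices, not values from the paper.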
Conference Presentation
(2023) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Yuhang Lu, Touradj Ebrahimi, "Explanation of face recognition via saliency maps," Proc. SPIE 12674, Applications of Digital Image Processing XLVI, 126740U (4 October 2023); https://doi.org/10.1117/12.2677353