In the field of facial expression recognition, traditional dataset acquisition suffers from high economic cost, time-consuming collection, and difficulty in avoiding subjective bias. Expression synthesis methods offer a solution to these problems. Within expression synthesis, the decoupling of facial expression traits is a key technique that directly affects image quality. To address the incomplete separation of the two main feature types, identity features and expression features, the present study proposes a new method for trait decoupling. ResNet50 was used to obtain the initial features, and a mapping network converted them into latent codes. A progressive generative adversarial network then generated images at different resolution layers. Subsequently, a feedback network was constructed to re-estimate the latent codes. In this way, the separation of identity features and expression features was achieved, and the FID and SSIM between the generated images and the raw images were reduced. The proposed method can improve the accuracy of facial feature editing.
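The pipeline described above (encoder features, mapping network, latent-code split, feedback re-estimation) can be sketched at a high level. The following is a minimal illustrative sketch, not the authors' implementation: the layer sizes, the half-and-half identity/expression split, and the single-linear-layer mapping and feedback networks are all assumptions introduced for clarity; only the 2048-dimensional feature size (ResNet50's pooled output) comes from a known property of that backbone.

```python
# Illustrative sketch of the described decoupling pipeline.
# Assumptions (not from the paper): one linear layer stands in for the
# mapping and feedback networks, and the latent code is split in half
# into identity and expression parts.
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 2048          # ResNet50's global-average-pooled feature size
LATENT_DIM = 512         # assumed latent-code size
ID_DIM = LATENT_DIM // 2 # assumed split: first half identity, second half expression

W_map = rng.normal(0.0, 0.01, (FEAT_DIM, LATENT_DIM))  # mapping network (sketch)
W_fb = rng.normal(0.0, 0.01, (FEAT_DIM, LATENT_DIM))   # feedback network (sketch)

def map_to_latent(features: np.ndarray) -> np.ndarray:
    """Mapping network: initial encoder features -> latent code w."""
    return features @ W_map

def split_latent(w: np.ndarray):
    """Decouple the latent code into identity and expression parts."""
    return w[:ID_DIM], w[ID_DIM:]

def feedback_loss(w: np.ndarray, generated_features: np.ndarray) -> float:
    """Cycle-style consistency: the latent code re-estimated from the
    generated image's features should match the original latent code."""
    w_back = generated_features @ W_fb
    return float(np.mean((w - w_back) ** 2))

features = rng.normal(size=FEAT_DIM)   # stand-in for ResNet50 features
w = map_to_latent(features)
w_id, w_exp = split_latent(w)          # decoupled identity / expression codes
loss = feedback_loss(w, features)      # stand-in for generated-image features
```

In an actual system, the progressive GAN generator would sit between the latent code and the feedback network, and the consistency loss would be minimized during training so that identity and expression codes can be edited independently.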