Face attribute editing, which modifies a target attribute of a face image to generate a new image, is a research hotspot in computer vision. Current methods based on Generative Adversarial Networks (GANs) suffer from attribute entanglement, and their implementation is relatively complicated. To address this, this paper proposes a face attribute editing network based on style-content disentanglement and convolutional attention. A convolutional attention (CAT) module is added to the StyleGAN generator so that the network's control of content features is no longer affected by the overall style of the image, realizing a coarse-to-fine separation of spatial content and style. In addition, hierarchical CAT modules control attribute features at different levels: changing the input to the CAT module at any layer changes the corresponding attribute features. Experiments on the CelebA-HQ dataset show that the proposed method achieves disentangled editing of face attributes and outperforms existing models on the evaluated metrics.
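The abstract does not give the CAT module's exact architecture, but the core idea it describes — gating spatial content features with an attention mask so they can be edited independently of the global style — can be illustrated with a minimal NumPy sketch. Everything here (the 1x1-convolution stand-in `w_spatial`, the sigmoid gate, the tensor shapes) is an assumption for illustration, not the paper's actual design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv_attention(feat, w_spatial):
    """Toy convolutional-attention gate (illustrative only, not the paper's CAT).

    feat      : (C, H, W) content feature map from one generator level.
    w_spatial : (C,) per-channel weights that pool the channels into a
                single spatial map (a stand-in for a learned 1x1 convolution).

    Returns the feature map modulated by a spatial attention mask, so that
    spatial content at this level can be adjusted independently of the
    style signal applied elsewhere in the generator.
    """
    # Collapse channels into one spatial map (acts like a 1x1 convolution).
    spatial = np.tensordot(w_spatial, feat, axes=([0], [0]))  # (H, W)
    mask = sigmoid(spatial)                                    # values in (0, 1)
    return feat * mask[None, :, :]                             # broadcast over channels

# Example: a coarse 4-channel 8x8 feature map, as at an early generator layer.
rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8, 8))
w = rng.standard_normal(4)
out = conv_attention(feat, w)
print(out.shape)  # (4, 8, 8)
```

In the hierarchical scheme the abstract describes, one such gate would sit at each resolution level of the generator, so coarse levels modulate coarse attributes (e.g., pose) and fine levels modulate fine attributes (e.g., texture).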