Image-to-image translation with unaligned domains is a seminal line of work in which suitable images are selected automatically from an unaligned dataset for translation. Unaligned datasets often contain irrelevant images that disrupt translation, so the accuracy of selecting appropriate images urgently needs improvement. To better capture image features and enhance unaligned image-to-image translation without altering much semantic information, we propose UCGAN, a novel unaligned image-to-image translation framework based on contrastive learning and generative adversarial networks, which attends to both local and global features of the image. Inspired by contrastive learning, we propose patch-by-patch feature contrast (PPFC). PPFC reuses the generator's encoder to extract multi-layer features from different image patches. We then normalize the various features using
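The patch-wise contrastive idea behind PPFC can be illustrated with a minimal NumPy sketch of an InfoNCE-style loss over patch features. This is an assumption-laden illustration, not the paper's implementation: `patch_nce_loss`, the temperature `tau`, and the feature shapes are all hypothetical, and it shows only the generic pattern of normalizing per-patch features and contrasting each patch against its corresponding patch (positive) versus all other patches (negatives).

```python
import numpy as np

def patch_nce_loss(feat_src, feat_tgt, tau=0.07):
    """InfoNCE-style loss over corresponding patch features.

    feat_src, feat_tgt: (N, D) arrays, one row of encoder features per
    image patch; row i of feat_tgt is the positive for row i of
    feat_src, and the other rows serve as negatives.
    """
    # L2-normalize each patch feature vector
    f_s = feat_src / np.linalg.norm(feat_src, axis=1, keepdims=True)
    f_t = feat_tgt / np.linalg.norm(feat_tgt, axis=1, keepdims=True)
    # (N, N) similarity matrix scaled by a temperature
    logits = f_s @ f_t.T / tau
    # cross-entropy with the diagonal (matching patches) as the positive class
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Under this sketch, perfectly matched source/target patch features yield a small loss, while mismatched pairings yield a larger one, which is the signal a contrastive objective uses to keep patch content consistent across translation.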