Infrared and visible image fusion is an active research topic in image processing. However, existing fusion methods still have limitations, such as insufficient use of intermediate-layer information and an inability to focus on the features that matter for fusion. To address these problems, we propose an infrared and visible image fusion method based on a generative adversarial network with dense connections and attention mechanisms (DAFuse). Since infrared and visible images belong to different modalities, we design two branches to extract the features of each modality separately. To make full use of the features extracted by the intermediate layers and to steer the model toward useful information, we introduce dense blocks, a channel attention mechanism, and a spatial attention mechanism into the generator, and incorporate a self-attention module into the discriminator. The proposed method not only retains rich texture details and sufficient contrast information but also conforms to human visual perception. Extensive qualitative and quantitative experiments show that the proposed method outperforms existing state-of-the-art methods in both visual quality and quantitative evaluation.
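The abstract names channel and spatial attention inside the generator but does not specify their exact form. Below is a minimal NumPy sketch of one common design (CBAM-style attention) purely for illustration: channel attention re-weights feature channels from globally pooled descriptors passed through a small shared MLP, and spatial attention re-weights locations from channel-wise pooled maps. The weight matrices here are random placeholders standing in for learned parameters, and the spatial step replaces a learned convolution with a simple average; DAFuse's actual modules may differ.

```python
import numpy as np

def channel_attention(feat, reduction=4, seed=0):
    """Scale each channel of feat (C, H, W) by a learned-style gate.

    Placeholder random weights stand in for the trained shared MLP.
    """
    C = feat.shape[0]
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((C // reduction, C)) * 0.1  # squeeze
    W2 = rng.standard_normal((C, C // reduction)) * 0.1  # excite
    avg = feat.mean(axis=(1, 2))                 # global average pooling
    mx = feat.max(axis=(1, 2))                   # global max pooling
    mlp = lambda v: W2 @ np.maximum(W1 @ v, 0.0)  # shared MLP with ReLU
    gate = 1.0 / (1.0 + np.exp(-(mlp(avg) + mlp(mx))))  # sigmoid
    return feat * gate[:, None, None]

def spatial_attention(feat):
    """Scale each spatial location of feat (C, H, W) by a gate built
    from channel-wise avg/max maps (a learned conv is elided here)."""
    avg = feat.mean(axis=0)
    mx = feat.max(axis=0)
    gate = 1.0 / (1.0 + np.exp(-(avg + mx) / 2.0))  # sigmoid
    return feat * gate[None, :, :]

# Toy feature map: 8 channels, 16x16 spatial grid.
feat = np.random.default_rng(1).standard_normal((8, 16, 16))
out = spatial_attention(channel_attention(feat))
```

Both gates lie in (0, 1), so the attended output keeps the input's shape while attenuating less informative channels and locations; in the paper's pipeline these modules would sit after the dense blocks in each generator branch.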