21 September 2023 Hyperspectral unmixing method based on dual-branch multiscale residual attention network
Congping Chen, Zhiwei Xu, Peng Lu, Nuo Cao
Abstract

We address the limitation of hyperspectral unmixing methods that consider only local spectral–spatial information at the pixel or pixel-block level. The proposed method is a two-stage convolutional autoencoder network that takes global spatial context information into account. In stage I, a parallel dual-branch module extracts multiscale spatial and spectral features. The extracted spatial–spectral prior information is then propagated from stage I to stage II to assist the extraction of joint spectral–spatial features. The proposed method also uses a spectral–spatial attention residual module to refine spectral–spatial features into distinct deep spectral–spatial features and suppress irrelevant redundant features. We validate the proposed method on synthetic and real datasets and find that it outperforms existing unmixing methods in both endmember extraction and abundance estimation. The source code for the proposed model will be made public in the GitHub repository at https://github.com/xzw001212/two-stage-DBMRANet.
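For readers new to unmixing, the task rests on the linear mixing model: each pixel spectrum is an abundance-weighted sum of endmember spectra, with abundances non-negative and summing to one. Endmember extraction quality is commonly scored with the spectral angle between estimated and reference spectra. The sketch below is a minimal plain-Python illustration of these two ideas (toy spectra and abundances are invented for the example; it is not the paper's network):

```python
import math

def spectral_angle(a, b):
    """Spectral angle (radians) between two spectra; lower means a closer match."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def mix(endmembers, abundances):
    """Linear mixing model: a pixel is an abundance-weighted sum of endmembers."""
    bands = len(endmembers[0])
    return [sum(a * e[i] for a, e in zip(abundances, endmembers))
            for i in range(bands)]

# Two toy endmembers over 4 spectral bands (hypothetical values)
E = [[0.9, 0.7, 0.2, 0.1],
     [0.1, 0.3, 0.8, 0.9]]
abund = [0.6, 0.4]  # non-negative and sum to one, as unmixing requires
pixel = mix(E, abund)  # [0.58, 0.54, 0.44, 0.42]

# A perfectly recovered endmember has zero spectral angle to the reference
assert abs(spectral_angle(E[0], E[0])) < 1e-9
```

An unmixing autoencoder such as the one proposed here learns the abundances in its bottleneck and the endmembers in its decoder weights, so the decoder effectively implements `mix` above.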

© 2023 Society of Photo-Optical Instrumentation Engineers (SPIE)
Congping Chen, Zhiwei Xu, Peng Lu, and Nuo Cao "Hyperspectral unmixing method based on dual-branch multiscale residual attention network," Optical Engineering 62(9), 093102 (21 September 2023). https://doi.org/10.1117/1.OE.62.9.093102
Received: 3 July 2023; Accepted: 6 September 2023; Published: 21 September 2023
KEYWORDS
Feature extraction, Convolution, Optical engineering, Matrices, Network architectures, Roads, Visualization
