Fusion of complementary information from multisensor data is of great importance for identifying land covers; however, integrating multisource information is a challenging task. A framework is developed to integrate hyperspectral and LiDAR data for land cover classification. In the proposed method, sparse stacked autoencoders represent the spectral and spatial information in a compact form. The spatial information is extracted from both the hyperspectral and LiDAR data using morphological operators. The encoded spectral and spatial features are combined with elevation information to form a joint feature vector, which is fed to a convolutional neural network (CNN) classifier to classify the land covers. The CNN classifier is a hybrid three-dimensional (3D)/two-dimensional (2D) model with three 3D convolutional layers and one 2D convolutional layer. Experiments are carried out on two datasets, Houston and Samford, to evaluate the performance of the proposed method. The results demonstrate the effectiveness of the method, with a global kappa coefficient κ = 0.9285 and overall accuracy (OA) of 93.44% on the Houston data; on the Samford data, it achieves κ = 0.9811 and OA = 98.93%.
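The abstract mentions morphological operators for extracting spatial information from the hyperspectral and LiDAR bands. As a hedged illustration of that step, the sketch below builds a standard morphological profile (openings and closings with square structuring elements of increasing size) over a single band; the function names, window sizes, and numpy-only implementation are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def grey_erode(img, size):
    # Grey-scale erosion: minimum over a size x size square neighborhood.
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + size, j:j + size].min()
    return out

def grey_dilate(img, size):
    # Grey-scale dilation: maximum over a size x size square neighborhood.
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + size, j:j + size].max()
    return out

def morphological_profile(band, sizes=(3, 5, 7)):
    # Stack the input band with its openings and closings at each scale.
    feats = [band]
    for s in sizes:
        feats.append(grey_dilate(grey_erode(band, s), s))  # opening
        feats.append(grey_erode(grey_dilate(band, s), s))  # closing
    return np.stack(feats, axis=-1)

band = np.random.rand(16, 16).astype(np.float32)
profile = morphological_profile(band)
print(profile.shape)  # (16, 16, 7): input band + 3 openings + 3 closings
```

In practice such profiles are computed per band (e.g., on a few principal components of the hyperspectral cube and on the LiDAR elevation raster) and the stacked channels serve as the spatial features that are later encoded and concatenated.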
Keywords: LIDAR, Data fusion, Hyperspectral imaging, 3D modeling, Image fusion, Feature extraction, Information fusion