We present an automatic method, termed the principal component analysis network with composite kernel (PCANet-CK), for the classification of three-dimensional (3-D) retinal optical coherence tomography (OCT) images. Specifically, the proposed PCANet-CK method first uses the PCANet to automatically learn features from each B-scan of the 3-D retinal OCT images. Multiple kernels are then separately applied to a set of highly informative B-scan features and fused together, jointly exploiting the correlations among features of the 3-D OCT images. Finally, the fused (composite) kernel is incorporated into an extreme learning machine for OCT image classification. We tested the proposed algorithm on two real 3-D spectral domain OCT (SD-OCT) datasets (normal subjects and subjects with macular edema or age-related macular degeneration), and the results demonstrate its effectiveness.
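The pipeline described above can be illustrated with a minimal numerical sketch. The Python code below is not the authors' implementation; the filter counts, kernel parameters, and fusion weights are illustrative assumptions. It shows PCA-learned convolution filters, per-feature-set RBF kernels fused into a composite kernel, and a kernel extreme learning machine (ELM) solved in closed form.

```python
# Hypothetical sketch of a PCANet-CK-style pipeline: PCA filters learned from
# B-scan patches, RBF kernels on feature subsets fused into a composite
# kernel, and a kernel ELM classifier. All names and parameters are
# illustrative assumptions, not the paper's exact settings.
import numpy as np

def pca_filters(patches, n_filters):
    """Learn PCANet convolution filters as the leading PCA eigenvectors.

    patches: (n_patches, patch_dim) array of vectorized B-scan patches.
    """
    X = patches - patches.mean(axis=0)            # remove the patch mean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:n_filters]                         # each row is one filter

def rbf_kernel(A, B, gamma):
    """Gaussian (RBF) kernel between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def composite_kernel(feat_sets_a, feat_sets_b, gammas, weights):
    """Fuse one RBF kernel per feature set by a weighted combination."""
    return sum(w * rbf_kernel(a, b, g)
               for a, b, g, w in zip(feat_sets_a, feat_sets_b, gammas, weights))

def kernel_elm_train(K, labels, C=1.0):
    """Kernel ELM: solve (K + I/C) @ beta = T for the output weights beta."""
    T = np.eye(labels.max() + 1)[labels]          # one-hot class targets
    return np.linalg.solve(K + np.eye(len(K)) / C, T)

def kernel_elm_predict(K_test_train, beta):
    """K_test_train: composite kernel between test and training samples."""
    return np.argmax(K_test_train @ beta, axis=1)
```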
Detection and recognition of macular lesions in optical coherence tomography (OCT) images are important for the diagnosis and treatment of retinal diseases. Because a single retinal disease (e.g., diabetic retinopathy) can produce multiple lesions (e.g., edema, exudates, and microaneurysms), and a patient may suffer from several retinal diseases at once, multiple lesions often coexist within one retinal image; a single-lesion detector therefore cannot fully support the diagnosis of clinical eye diseases. To address this issue, we propose a multi-instance multilabel lesion recognition (MIML-LR) method for the simultaneous detection and recognition of multiple lesions. The proposed MIML-LR method consists of the following steps: (1) segment the regions of interest (ROIs) for the different lesions, (2) compute descriptive instances (features) for each lesion region, (3) construct multilabel detectors, and (4) recognize each ROI with the detectors. The method was tested on 823 clinically labeled OCT images of normal maculae and maculae with three common lesions: epiretinal membrane, edema, and drusen. For each input OCT image, MIML-LR automatically identifies the number of lesions and assigns the class labels, achieving an average accuracy of 88.72% for cases with multiple lesions, which better assists macular disease diagnosis and treatment.
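To make the decision stage of steps (3) and (4) concrete, here is a hypothetical sketch in which each OCT image is treated as a bag of ROI instances and one binary detector per lesion label is max-pooled over the bag. The logistic-regression detectors, feature dimensions, and threshold are assumptions for illustration, not the paper's exact classifiers.

```python
# Hypothetical MIML decision stage: each image is a bag of ROI instance
# features; one binary detector per lesion label scores every instance, and
# the bag receives a label if any instance fires. Detectors and threshold
# are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

LABELS = ["epiretinal_membrane", "edema", "drusen"]

def train_detectors(instance_feats, instance_labels):
    """instance_feats: (n_instances, d) ROI features;
    instance_labels: (n_instances, 3) binary lesion indicators."""
    return [LogisticRegression(max_iter=1000)
            .fit(instance_feats, instance_labels[:, j])
            for j in range(len(LABELS))]

def recognize(bag, detectors, threshold=0.5):
    """bag: (n_rois, d) features for one image; returns its lesion labels."""
    present = []
    for name, det in zip(LABELS, detectors):
        p = det.predict_proba(bag)[:, 1]          # per-ROI lesion probability
        if p.max() >= threshold:                  # max-pool ROIs -> bag label
            present.append(name)
    return present or ["normal"]                  # no lesion -> normal macula
```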
Image fusion combines multiple images of the same scene into a single image suitable for human perception and practical applications. Different images of the same scene can be viewed as an ensemble of intercorrelated images. This paper proposes a novel multimodal image fusion scheme based on the joint sparsity model derived from distributed compressed sensing. First, the source images are jointly and sparsely represented as common and innovation components using an over-complete dictionary. Second, the common and innovation sparse coefficients are combined into the jointly sparse coefficients of the fused image. Finally, the fused result is reconstructed from the obtained sparse coefficients. The proposed method is compared with popular image fusion methods, including multiscale transform-based methods and a simultaneous orthogonal matching pursuit-based method. The experimental results demonstrate the effectiveness of the proposed method in terms of both visual quality and quantitative fusion evaluation indexes.
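The common/innovation decomposition can be sketched for one pair of source patches as follows. Under the JSM-1 model, y1 = D(s_c + s_1) and y2 = D(s_c + s_2), which stacks into a single sparse recovery problem over a block dictionary. The dictionary D, the use of scikit-learn's orthogonal matching pursuit, and the innovation fusion rule (summing them) are assumptions for illustration.

```python
# Minimal sketch of JSM-1 fusion for a pair of co-registered patches: a
# shared common component plus per-image innovations, recovered jointly over
# a stacked dictionary, then recombined. D, the solver, and the fusion rule
# (summing innovations) are illustrative assumptions.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def jsm_fuse_patch(y1, y2, D, n_nonzero=10):
    """y1, y2: vectorized source patches (length n); D: (n, k) dictionary."""
    n, k = D.shape
    Z = np.zeros_like(D)
    # Stacked system: [y1; y2] = [D D 0; D 0 D] @ [s_common; s_inno1; s_inno2]
    A = np.block([[D, D, Z],
                  [D, Z, D]])
    y = np.concatenate([y1, y2])
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False).fit(A, y)
    s = omp.coef_
    s_common, s1, s2 = s[:k], s[k:2 * k], s[2 * k:]
    # Fuse: keep the shared component and combine the innovations.
    return D @ (s_common + s1 + s2)
```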
The multiwavelet transform, a recent addition to wavelet theory, simultaneously offers orthogonality, symmetry, short support, and vanishing moments, a combination that is not possible with scalar wavelets. In this paper, a novel fusion method for Landsat Thematic Mapper (TM) and Synthetic Aperture Radar (SAR) images using the multiwavelet transform is proposed. The approximation subband of the fused multiwavelet representation is taken from the Landsat TM image, and the detail subbands are constructed by fusing those of the Landsat TM and SAR images. The proposed method was compared quantitatively with intensity-hue-saturation, principal component analysis, and wavelet transform fusion, and was found to preserve both spectral and spatial information clearly better than those methods.
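As an approximation of this fusion rule, the sketch below uses a scalar wavelet via PyWavelets in place of the multiwavelet transform, which has no widely available off-the-shelf Python implementation: the approximation subband is taken from the TM band, and each detail coefficient takes the larger-magnitude value of the TM and SAR coefficients. The wavelet name, decomposition level, and max-magnitude detail rule are assumptions.

```python
# Scalar-wavelet stand-in for the multiwavelet fusion rule described above:
# approximation coefficients come from the TM band; detail coefficients take
# the larger magnitude of the TM and SAR details. Wavelet and level are
# illustrative assumptions.
import numpy as np
import pywt

def fuse_tm_sar(tm_band, sar, wavelet="db4", level=3):
    """tm_band, sar: co-registered 2-D arrays of equal shape."""
    c_tm = pywt.wavedec2(tm_band, wavelet, level=level)
    c_sar = pywt.wavedec2(sar, wavelet, level=level)
    fused = [c_tm[0]]                               # approximation: TM only
    for d_tm, d_sar in zip(c_tm[1:], c_sar[1:]):    # per-level detail tuples
        fused.append(tuple(
            np.where(np.abs(a) >= np.abs(b), a, b)  # max-magnitude rule
            for a, b in zip(d_tm, d_sar)))
    return pywt.waverec2(fused, wavelet)
```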