Reticular pseudodrusen (RPD) are subretinal drusenoid deposits that represent an important disease feature in age-related macular degeneration (AMD). RPD are of particular interest because their presence is a strong predictor of progression to advanced AMD. RPD features can be characterized using volumetric spectral-domain optical coherence tomography (SD-OCT). In this work, we curated a dataset from the Age-Related Eye Disease Study 2 (AREDS2) ancillary OCT study. The dataset included 826 SD-OCT scans, with RPD present in 222 of them. Binary RPD labels were transferred from fundus autofluorescence (FAF) images taken at the same visits as the SD-OCT scans. The dataset was split at the participant level into training (70%), validation (10%), and test (20%) sets. We propose a 3D classification network to detect RPD from SD-OCT scans and compare it to a baseline 2D network with average bagging and a 3D network with multi-tasking. The proposed network achieved the highest accuracy of 0.7784, area under the receiver operating characteristic curve of 0.8689, and mean average precision of 0.7706 for detecting RPD from SD-OCT scans.
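The participant-level split described above (70% train / 10% validation / 20% test) can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the scan and participant identifiers, the helper name `split_by_participant`, and the random seed are all assumptions; only the ratios and the split-by-participant requirement come from the abstract.

```python
# Hypothetical sketch of a participant-level dataset split: every scan
# from a given participant lands in exactly one of train/val/test, so
# no participant leaks across splits.
import random

def split_by_participant(scan_to_participant, ratios=(0.7, 0.1, 0.2), seed=0):
    """Split scans into train/val/test sets at the participant level."""
    participants = sorted(set(scan_to_participant.values()))
    rng = random.Random(seed)
    rng.shuffle(participants)
    n = len(participants)
    n_train = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    train_p = set(participants[:n_train])
    val_p = set(participants[n_train:n_train + n_val])
    splits = {"train": [], "val": [], "test": []}
    for scan, pid in scan_to_participant.items():
        if pid in train_p:
            splits["train"].append(scan)
        elif pid in val_p:
            splits["val"].append(scan)
        else:
            splits["test"].append(scan)
    return splits

# Toy example: 30 scans from 10 participants (3 scans each).
scans = {f"scan_{i}": f"participant_{i % 10}" for i in range(30)}
splits = split_by_participant(scans)
```

Splitting by participant rather than by scan is what prevents scans of the same eye from appearing in both training and test sets, which would inflate the reported metrics.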
Geographic atrophy (GA) is the defining lesion of advanced atrophic age-related macular degeneration (AMD). GA can be detected and characterized most accurately using spectral-domain optical coherence tomography (SDOCT), which provides detailed 3D information about changes in multiple retinal layers. Existing methods are limited to 2D convolutional neural networks (CNNs); they therefore do not capture the 3D context between adjacent 2D slices of the OCT scan and incur long inference times. We propose 3D CNNs with 3D attention mechanisms for the automated detection of GA on SDOCT scans using scan-level labels. The best network achieved an accuracy of 88%, and its visualizations suggest the interpretability of its predictions.
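The idea of attending over the depth dimension of a volume can be illustrated with a minimal numpy sketch of attention pooling across B-scan embeddings. This is in the spirit of the 3D attention mechanisms mentioned above but is not the paper's architecture: the feature dimension, the number of slices, and the single weight vector `w` are illustrative assumptions.

```python
# Toy attention pooling across the slices of an OCT volume: per-slice
# feature vectors are combined into one volume-level feature, weighted
# by a softmax over learned attention scores.
import numpy as np

def slice_attention_pool(features, w):
    """features: (num_slices, feat_dim) per-slice embeddings.
    w: (feat_dim,) attention weight vector.
    Returns the attention-weighted pooled feature and the per-slice
    softmax weights used to form it."""
    scores = features @ w                       # (num_slices,)
    scores = scores - scores.max()              # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    pooled = weights @ features                 # (feat_dim,)
    return pooled, weights

rng = np.random.default_rng(0)
feats = rng.normal(size=(49, 16))               # e.g. 49 B-scans per volume
pooled, attn = slice_attention_pool(feats, rng.normal(size=16))
```

The attention weights double as a crude interpretability signal: slices with high weight are the ones driving the volume-level prediction.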
Corneal pathologies are leading causes of blindness and represent a world health problem according to the World Health Organization. Early detection of corneal diseases is necessary to prevent blindness. In this paper, we use transfer learning with pretrained deep learning networks to diagnose three common corneal diseases, namely dry eye, Fuchs' endothelial dystrophy, and keratoconus, as well as healthy eyes, using only optical coherence tomography (OCT) images. Corneal OCT scans were obtained from 413 eyes of 269 patients and used to train, validate, and test the networks. All networks achieved all-category accuracy values > 99%, categorical area under curve values > 0.99, categorical specificity values > 99%, and categorical sensitivity values > 99% on the training, validation, and test sets. The work in this paper has clinical significance and can potentially be applied in clinical practice to help address a significant world health problem.
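The categorical sensitivity and specificity values reported above are per-class, one-vs-rest quantities. A minimal sketch of how they are computed from labels and predictions (the function name and the toy four-class example are assumptions; the definitions themselves are standard):

```python
# One-vs-rest sensitivity (recall) and specificity per class, computed
# from integer class labels.
import numpy as np

def per_class_sens_spec(y_true, y_pred, num_classes):
    """Return (sensitivities, specificities), one value per class."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    sens, spec = [], []
    for c in range(num_classes):
        tp = np.sum((y_true == c) & (y_pred == c))
        fn = np.sum((y_true == c) & (y_pred != c))
        tn = np.sum((y_true != c) & (y_pred != c))
        fp = np.sum((y_true != c) & (y_pred == c))
        sens.append(tp / (tp + fn) if tp + fn else 0.0)
        spec.append(tn / (tn + fp) if tn + fp else 0.0)
    return sens, spec

# Four classes: 0 = healthy, 1 = dry eye, 2 = Fuchs' dystrophy, 3 = keratoconus.
sens, spec = per_class_sens_spec([0, 1, 2, 3, 0, 1], [0, 1, 2, 3, 0, 2], 4)
```

Reporting both per-class values matters in multi-class medical diagnosis because a high overall accuracy can hide poor sensitivity on a rare class.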
Various common corneal eye diseases, such as dry eye, Fuchs' endothelial dystrophy, keratoconus, and corneal graft rejection, can be diagnosed based on changes in the thickness of corneal microlayers. Optical coherence tomography (OCT) technology has made it possible to obtain high-resolution corneal images that show the microlayered structures of the cornea. Manual segmentation is subjective and not feasible given the large volume of acquired images. Existing automatic methods for segmenting corneal layer interfaces are not robust and segment only a few corneal microlayer interfaces. Moreover, there is no large annotated database of corneal OCT images, which is an obstacle to the application of powerful machine learning methods such as deep learning to the segmentation of corneal interfaces. In this paper, we propose a novel segmentation method for corneal OCT images using graph search and the Radon transform. To the best of our knowledge, we are the first to develop an automatic segmentation method for the six corneal microlayer interfaces. The proposed method involves a novel image denoising method and an inner-interface localization method. It was tested on 15 corneal OCT images that were randomly selected and manually segmented by two operators. Experimental results show that our method has a mean segmentation error of 3.87 ± 5.21 pixels (i.e., 5.81 ± 7.82 μm) across all interfaces compared with the manual segmentations. The two manual operators have a mean segmentation difference of 4.07 ± 4.71 pixels (i.e., 6.11 ± 7.07 μm). The mean running time to segment all corneal microlayer interfaces is 6.66 ± 0.22 seconds.
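The graph-search component can be illustrated with a generic dynamic-programming formulation: each layer interface is traced as the minimum-cost left-to-right path through a cost image, with the path allowed to move at most one row per column. This is a standard shortest-path sketch, not the paper's exact graph construction or cost function, and the synthetic cost image is an assumption for demonstration.

```python
# Dynamic-programming "graph search" for one layer interface: find, per
# column, the row of the minimum-cost path that moves at most one row
# between adjacent columns.
import numpy as np

def trace_interface(cost):
    """cost: (rows, cols) image where low values mark the interface.
    Returns the row index of the traced interface in each column."""
    rows, cols = cost.shape
    acc = cost.astype(float).copy()          # accumulated path cost
    back = np.zeros((rows, cols), dtype=int)  # backtracking pointers
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - 1), min(rows, r + 2)
            prev = acc[lo:hi, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] += prev[k]
            back[r, c] = lo + k
    # Backtrack from the cheapest endpoint in the last column.
    path = [int(np.argmin(acc[:, -1]))]
    for c in range(cols - 1, 0, -1):
        path.append(int(back[path[-1], c]))
    return path[::-1]

# Synthetic cost image whose cheapest path hugs row 2.
cost = np.ones((5, 6))
cost[2, :] = 0.0
path = trace_interface(cost)
```

The one-row-per-column smoothness constraint is what keeps the traced interface from jumping between unrelated dark structures in a noisy B-scan.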
Measuring the thickness of different corneal microlayers is important for the diagnosis of common corneal eye diseases such as dry eye, keratoconus, Fuchs' endothelial dystrophy, and corneal graft rejection. High-resolution corneal images, obtained using optical coherence tomography (OCT), have made it possible to measure the thickness of different corneal microlayers in vivo. The manual segmentation of these images is subjective and time consuming, so automatic segmentation is necessary. Several methods have been proposed for segmenting corneal OCT images, but none of them segments all the microlayer interfaces, and they are not robust. In addition, the lack of a large annotated database of corneal OCT images impedes the application of machine learning methods such as deep learning, which have proven very powerful. In this paper, we present a new corneal OCT image segmentation algorithm using the Randomized Hough Transform. To the best of our knowledge, we developed the first automatic segmentation method for the six corneal microlayer interfaces. The proposed method includes a robust estimate of the relative distances of inner corneal interfaces with respect to the outer corneal interfaces. It also properly enforces the correct ordering and non-intersection of corneal microlayer interfaces. The proposed method was tested on 15 randomly selected corneal OCT images, which were manually segmented by two trained operators for comparison. Comparison with the manual segmentation shows that the proposed method has a mean segmentation error of 3.77 ± 4.25 pixels across all interfaces, which corresponds to 5.66 ± 6.38 μm. The mean segmentation error between the two manual operators is 4.07 ± 4.71 pixels, which corresponds to 6.11 ± 7.07 μm. The proposed method takes a mean time of 2.59 ± 0.06 seconds to segment the six corneal interfaces.
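The Randomized Hough Transform voting idea can be shown with a toy straight-line detector: sample random point pairs, accumulate votes for the (slope, intercept) each pair implies, and keep the most-voted line. The corneal interfaces in the paper are curves fit with a more elaborate model; this sketch, with its assumed function name, rounding-based binning, and synthetic points, only demonstrates the randomized voting principle.

```python
# Toy Randomized Hough Transform for line detection: unlike the classic
# Hough transform, which votes for every parameter cell per point, the
# randomized variant samples point pairs and votes for the single
# (slope, intercept) pair each sample determines.
import random
from collections import Counter

def rht_line(points, num_samples=500, seed=0):
    """Return the (slope, intercept) pair, rounded to 1 decimal place,
    that receives the most votes from randomly sampled point pairs."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(num_samples):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # skip vertical pairs in this toy parameterization
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        votes[(round(m, 1), round(b, 1))] += 1
    return votes.most_common(1)[0][0]

# Points on y = 2x + 1 plus two outliers; the collinear majority wins.
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 9), (7, 2)]
line = rht_line(pts)
```

Because only consistent point pairs reinforce the same parameter bin, the voting is naturally robust to the speckle-like outliers typical of OCT images, which is why Hough-style methods suit this noisy modality.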