This study investigates the efficacy of the red, green, and blue channels in color fundus photography for deep learning classification of retinopathy of prematurity (ROP). We used a total of 200 color fundus images from four ROP stages and applied transfer learning for deep learning classification. To enhance visibility, contrast limited adaptive histogram equalization (CLAHE) was utilized. A multi-color-channel fusion approach was also tested to determine its effect on ROP classification. For individual channel classification, the green channel demonstrated the best results, with an accuracy of 80.5%, sensitivity of 61%, and specificity of 87%. Multi-color-channel fusion provided slightly better performance than the green channel alone, with an accuracy of 81%, sensitivity of 62%, and specificity of 87.33%. After CLAHE, the red-only, green-only, and RGB-fusion approaches showed comparable performance, with accuracies of 83.5%, 84%, and 84.25%, sensitivities of 67%, 68%, and 68.5%, and specificities of 89%, 89.33%, and 89.50%, respectively. This observation suggests that the red channel after contrast enhancement can provide sufficient information for ROP stage classification.
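As an illustration, the channel-splitting and contrast-enhancement preprocessing can be sketched in NumPy. This is a simplified, per-tile variant of CLAHE (omitting the bilinear blending between tiles that full CLAHE, e.g. OpenCV's `cv2.createCLAHE`, performs), and the input here is random stand-in data rather than an actual fundus photograph:

```python
import numpy as np

def clahe_simplified(channel, tiles=8, clip_frac=0.01):
    """Simplified CLAHE: clipped histogram equalization applied per tile.
    Full CLAHE additionally blends neighboring tile mappings bilinearly."""
    h, w = channel.shape
    out = np.empty_like(channel)
    th, tw = h // tiles, w // tiles
    for i in range(tiles):
        for j in range(tiles):
            r0, r1 = i * th, (i + 1) * th if i < tiles - 1 else h
            c0, c1 = j * tw, (j + 1) * tw if j < tiles - 1 else w
            tile = channel[r0:r1, c0:c1]
            hist = np.bincount(tile.ravel(), minlength=256).astype(float)
            # Clip the histogram and redistribute the excess uniformly.
            clip = clip_frac * tile.size
            excess = np.maximum(hist - clip, 0.0)
            hist = np.minimum(hist, clip) + excess.sum() / 256.0
            # Map intensities through the normalized CDF of the clipped histogram.
            cdf = hist.cumsum()
            cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1e-9)
            out[r0:r1, c0:c1] = (255.0 * cdf[tile]).astype(np.uint8)
    return out

# Stand-in RGB fundus image (random data, for illustration only).
rgb = (np.random.rand(128, 128, 3) * 255).astype(np.uint8)
red, green, blue = rgb[..., 0], rgb[..., 1], rgb[..., 2]
green_eq = clahe_simplified(green)  # enhanced green channel
```

In practice the study's pipeline would apply this enhancement per channel before feeding the single-channel or fused images to the classifier.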
Early detection of diabetic retinopathy (DR) is an essential step to prevent vision loss. This study is the first effort to explore convolutional neural networks (CNNs) for transfer-learning based optical coherence tomography angiography (OCTA) detection and classification of DR. We employed transfer learning with a CNN (VGG16) pre-trained on the ImageNet dataset for classification of OCTA images. To prevent overfitting, data augmentation (e.g., rotations, flips, and zooming) and 5-fold cross-validation were implemented. A dataset comprising 131 OCTA images from 20 control, 17 diabetic patients without DR (NoDR), and 60 nonproliferative DR (NPDR) patients was used for preliminary validation. The best classification performance was achieved by fine-tuning nine layers of the sixteen-layer CNN model.
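The 5-fold protocol can be sketched as follows. The class counts mirror the patient numbers reported above (20 control, 17 NoDR, 60 NPDR; the paper reports 131 images in total), and the splitter is a minimal stand-in for an off-the-shelf routine such as `sklearn.model_selection.StratifiedKFold`:

```python
import numpy as np

# Hypothetical per-patient labels: 0=control, 1=NoDR, 2=NPDR.
labels = np.array([0] * 20 + [1] * 17 + [2] * 60)

def stratified_kfold_indices(labels, k=5, seed=0):
    """Return k (train_idx, val_idx) splits that preserve class
    proportions in each fold."""
    rng = np.random.default_rng(seed)
    folds = [[] for _ in range(k)]
    for cls in np.unique(labels):
        # Shuffle this class's indices, then deal them across the k folds.
        idx = rng.permutation(np.where(labels == cls)[0])
        for f, chunk in enumerate(np.array_split(idx, k)):
            folds[f].extend(chunk.tolist())
    splits = []
    for f in range(k):
        val = np.array(sorted(folds[f]))
        train = np.array(sorted(set(range(len(labels))) - set(folds[f])))
        splits.append((train, val))
    return splits

splits = stratified_kfold_indices(labels)
```

Each of the five folds then serves once as the validation set while the pre-trained network is fine-tuned on the remaining four.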
Retinopathy of prematurity (ROP) is a disease that affects premature infants, in which abnormal growth of the retinal blood vessels can lead to blindness unless treated accordingly. Infants considered at risk of severe ROP are monitored for symptoms of plus disease, characterized by arterial tortuosity and venous dilation at the posterior pole, with a standard photographic definition. Disagreement among ROP experts in diagnosing plus disease has driven the development of computer-based methods that classify images based on hand-crafted features extracted from the vasculature. However, most of these approaches are semi-automated, which makes them time-consuming and subject to variability. In contrast, deep learning is a fully automated approach that has shown great promise in a wide variety of domains, including medical genetics, informatics, and imaging. Convolutional neural networks (CNNs) are deep networks that learn rich representations of disease features that are highly robust to variations in acquisition and image quality. In this study, we utilized a U-Net architecture to perform vessel segmentation and then a GoogLeNet to perform disease classification. The classifier was trained on 3,000 retinal images and validated on an independent test set of patients with different observed progressions and treatments. We show that our fully automated algorithm can be used to monitor the progression of plus disease over multiple patient visits with results that are consistent with the experts’ consensus diagnosis. Future work will aim to further validate the method on larger cohorts of patients to assess its applicability within the clinic as a treatment monitoring tool.
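The two-stage design (vessel segmentation feeding disease classification) can be sketched as a simple composition, with toy placeholder callables standing in for the trained U-Net and GoogLeNet:

```python
from typing import Callable

def classify_visit(image,
                   segment_vessels: Callable,
                   classify_plus: Callable) -> str:
    """Two-stage pipeline: segmentation output is the classifier's input."""
    vessel_map = segment_vessels(image)
    return classify_plus(vessel_map)

# Toy stand-ins (NOT the trained models): threshold segmentation and a
# vessel-density rule in place of U-Net and GoogLeNet.
def fake_unet(img):
    return [[1 if px > 0.5 else 0 for px in row] for row in img]

def fake_googlenet(vessel_map):
    return "plus" if sum(map(sum, vessel_map)) > 2 else "normal"

image = [[0.9, 0.2], [0.8, 0.7]]
print(classify_visit(image, fake_unet, fake_googlenet))  # → plus
```

Keeping the two stages behind a single interface makes it straightforward to re-run the same pipeline on each patient visit when tracking disease progression.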
In conventional fundus imaging devices, transpupillary illumination is used for illuminating the inside of the eye. In this method, the illumination light is directed into the posterior segment of the eye through the cornea and passes through the pupillary area. Because the illumination beam and observation path share the pupillary area, pupil dilation is typically necessary for wide-angle fundus examination, and the field of view is inherently limited. An alternative approach is to deliver light through the sclera. Trans-scleral illumination makes it possible to image a wider retinal area; however, the required physical contact between the illumination probe and the sclera is a drawback of this method. We report here trans-palpebral illumination as a new method to deliver light through the upper eyelid (palpebra). For this study, we used a 1.5 mm diameter fiber with a warm white LED light source. To illuminate the inside of the eye, the fiber illuminator was placed at the location corresponding to the pars plana region. A custom-designed optical system was attached to a digital camera for retinal imaging. The optical system contained a 90-diopter ophthalmic lens and a 25-diopter relay lens. The ophthalmic lens collected light coming from the posterior of the eye and formed an aerial image between the ophthalmic and relay lenses. The aerial image was captured by the camera through the relay lens. An adequate illumination level was obtained to capture wide-angle fundus images within ocular safety limits, defined by the ISO 15004-2:2007 standard. This novel trans-palpebral illumination approach enables wide-angle fundus photography without eyeball contact and pupil dilation.
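For reference, the quoted lens powers translate directly to focal lengths via the thin-lens relation f = 1/D (focal length in meters, power in diopters); the helper below is a minimal sketch, not part of the described system:

```python
def focal_length_mm(diopters: float) -> float:
    """Focal length in millimeters for a thin lens of the given optical power."""
    return 1000.0 / diopters

# The 90 D ophthalmic lens and 25 D relay lens from the setup above:
print(focal_length_mm(90))  # ≈ 11.1 mm
print(focal_length_mm(25))  # 40.0 mm
```

The short focal length of the ophthalmic lens is what forms the compact aerial image of the fundus that the longer-focal-length relay lens then projects onto the camera sensor.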