High myopia has become a worldwide concern in ophthalmology because of its increasing prevalence. Linear lesions are an important clinical sign of the pathological changes of high myopia. Indocyanine green angiography (ICGA) is considered the “ground truth” for the diagnosis of linear lesions, but it is invasive and may cause adverse reactions such as allergy, dizziness, and even shock in some patients. It is therefore desirable to find a non-invasive imaging modality to replace ICGA for the diagnosis of linear lesions. Multi-color scanning laser (MCSL) imaging is a non-invasive technique that reveals linear lesions more richly than other non-invasive techniques such as color fundus imaging and red-free fundus imaging, and than some invasive ones such as fundus fluorescein angiography (FFA). To the best of our knowledge, no previous study has addressed linear lesion segmentation in MCSL images. In this paper, we propose a new U-shape segmentation network, named SGCNet, with a multi-scale and global context fusion (SGCF) block to segment linear lesions in MCSL images. The multi-scale features and global context information extracted by the SGCF block are fused by learnable parameters to obtain richer high-level features. Four-fold cross-validation was adopted to evaluate the performance of the proposed method on 86 MCSL images from 57 high myopia patients. The IoU, Dice, Sensitivity and Specificity are 0.494±0.109, 0.654±0.104, 0.676±0.131 and 0.998±0.002, respectively. Experimental results indicate the effectiveness of the proposed network.
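The abstract states only that the SGCF block fuses multi-scale and global-context features by learnable parameters; the exact fusion rule is not given. A minimal sketch of one common choice, a softmax-normalised weighted sum of same-shaped feature maps, is shown below (the function names and toy data are our assumptions, not the paper's implementation):

```python
import numpy as np

def softmax(w):
    """Numerically stable softmax over a 1-D weight vector."""
    e = np.exp(w - w.max())
    return e / e.sum()

def fuse_multiscale(features, weights):
    """Fuse same-shaped feature maps with softmax-normalised learnable weights."""
    alphas = softmax(weights)
    return sum(a * f for a, f in zip(alphas, features))

# toy example: three 4x4 "feature maps" at different scales, already resampled
feats = [np.full((4, 4), v) for v in (1.0, 2.0, 3.0)]
w = np.zeros(3)                     # equal weights -> a simple average
fused = fuse_multiscale(feats, w)
```

In a trained network `w` would be a learned parameter vector updated by backpropagation; the softmax keeps the fusion a convex combination of the branches.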
Glaucoma is a leading cause of irreversible blindness. Accurate optic disc (OD) and optic cup (OC) segmentation in fundus images is beneficial to glaucoma screening and diagnosis. Recently, convolutional neural networks have demonstrated promising progress in joint OD and OC segmentation in fundus images. However, OC segmentation remains challenging due to low contrast and blurred boundaries. In this paper, we propose an improved U-shape network to jointly segment the OD and OC. There are three main contributions: (1) Efficient channel attention (ECA) blocks are embedded into the proposed network to avoid dimensionality reduction and capture cross-channel interaction in an efficient way. (2) A multiplexed dilation convolution (MDC) module is proposed to extract target features of various sizes and preserve more spatial information. (3) Three global context extraction (GCE) modules are used in our network. By introducing multiple GCE modules between the encoder and decoder, the global semantic information flow from high-level stages can be gradually guided to different stages. The proposed method was tested on 240 fundus images. Compared with U-Net, Attention U-Net, SegNet and FCN, the mean Dice similarity coefficients of the proposed method for the OD and OC reach 96.20% and 90.00%, respectively, outperforming these networks.
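An ECA block avoids dimensionality reduction by pooling each channel to a single descriptor and running a small 1-D convolution across the channel dimension, followed by a sigmoid gate. A minimal NumPy sketch of that mechanism follows; the fixed averaging kernel stands in for the learned 1-D convolution weights, so this is an illustration of the idea rather than the paper's trained block:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def eca(x, k=3):
    """Efficient channel attention on a (C, H, W) feature map.

    Global average pooling gives one descriptor per channel; a 1-D
    convolution of kernel size k across the channel axis captures local
    cross-channel interaction without any dimensionality reduction.
    """
    c = x.shape[0]
    y = x.mean(axis=(1, 2))                 # (C,) channel descriptors
    pad = k // 2
    yp = np.pad(y, pad, mode="edge")
    kernel = np.ones(k) / k                 # illustrative; learned in practice
    attn = sigmoid(np.array([np.dot(yp[i:i + k], kernel) for i in range(c)]))
    return x * attn[:, None, None]          # rescale channels by attention

x = np.ones((8, 4, 4))
out = eca(x)
```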
Optical coherence tomography (OCT) is an imaging modality that is extensively used in ophthalmic diagnosis and treatment. OCT can reveal disease-related alterations below the retinal surface, such as retinal fluid, which can cause vision impairment. In this paper, we propose a novel context attention-and-fusion network (CAF-Net) for multiclass retinal fluid segmentation, covering intraretinal fluid (IRF), subretinal fluid (SRF) and pigment epithelial detachment (PED). To deal with the severely uneven sizes and irregular distributions of the different fluid types, CAF-Net introduces a context shrinkage encode (CSE) module and a context pyramid guide (CPG) module to extract and fuse global context information. The CSE module, embedded in the encoder path, suppresses redundant information and focuses on useful information via a shrinkage function. The CPG module, inserted between the encoder and decoder, dynamically fuses multi-scale information in high-level features. CAF-Net was evaluated on a public dataset from the RETOUCH challenge at MICCAI 2017, which consists of 70 OCT volumes containing the three types of retinal fluid, acquired with three different devices. The average Dice similarity coefficient (DSC) and Intersection over Union (IoU) are 74.64% and 62.08%, respectively.
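The abstract does not specify the CSE module's shrinkage function. Soft thresholding is a common choice of shrinkage and illustrates the stated idea of ignoring redundant activations while keeping strong responses; the sketch below is illustrative, not the paper's implementation:

```python
import numpy as np

def soft_threshold(x, tau):
    """Soft-thresholding shrinkage: pulls small (likely redundant)
    activations to zero while preserving the sign and reducing the
    magnitude of strong responses by tau."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

a = np.array([-2.0, -0.1, 0.05, 1.5])
shrunk = soft_threshold(a, tau=0.2)   # small entries are zeroed out
```

In attention-style shrinkage modules the threshold `tau` is typically predicted per channel from the features themselves rather than fixed.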
Pathologic myopia (PM) is a major cause of legal blindness worldwide. Linear lesions are closely related to PM and comprise two types of lesions visible in the posterior fundus of pathologic eyes in optical coherence tomography (OCT) images: retinal pigment epithelium-Bruch's membrane-choriocapillaris complex (RBCC) disruption and myopic stretch line (MSL). In this paper, a fully automated method based on a U-shape network is proposed to segment RBCC disruption and MSL in retinal OCT images. Compared with the original U-Net, the proposed network has two main improvements: (1) We propose a new downsampling module, named the feature aggregation pooling module (FAPM), which aggregates context information and local information. (2) A deep supervision module (DSM) is adopted to help the network converge faster and improve segmentation performance. The proposed method was evaluated via a 3-fold cross-validation strategy on a dataset of 667 2D OCT B-scan images. The mean Dice similarity coefficient, Sensitivity and Jaccard index are 0.626, 0.665 and 0.491 for RBCC disruption and 0.739, 0.814 and 0.626 for MSL, respectively. These preliminary results show the effectiveness of the proposed method.
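The Dice and Jaccard scores reported across these abstracts are standard overlap metrics computed from binary masks per lesion class; a minimal sketch of both:

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

def jaccard(pred, gt, eps=1e-7):
    """Jaccard index (IoU): |A∩B| / |A∪B|."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / (union + eps)

# toy 2x3 prediction and ground-truth masks
pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
gt   = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
```

Here the overlap is 2 pixels against 3 predicted and 3 ground-truth pixels, giving Dice 2/3 and Jaccard 1/2; note Dice is always at least as large as Jaccard on the same pair of masks.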
Next-generation spectral-domain optical coherence tomography (OCT) has become increasingly important in the detection and investigation of retinal diseases. However, the patient's unstable eye position makes it difficult to track disease progression over short periods. This paper proposes a method to remove eye-position differences in longitudinal retinal OCT data. In the proposed method, pre-processing is first applied to obtain the projection image. Then, a vessel enhancement filter is applied to detect vessel shadows. Third, the SURF algorithm is used to extract feature points and the RANSAC algorithm is used to remove outliers. Finally, the transform parameters are estimated and the longitudinal OCT data are registered. Results show that the proposed method is accurate.
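The RANSAC outlier-removal step above can be illustrated on matched point pairs. The sketch below estimates a pure 2-D translation, the simplest transform a single match can hypothesise; the actual method matches SURF keypoints and may fit a richer transform, and all names and data here are illustrative:

```python
import numpy as np

def ransac_translation(src, dst, n_iter=200, tol=1.0, seed=0):
    """Estimate a 2-D translation from matched point pairs with RANSAC.

    src, dst: (N, 2) arrays of corresponding points. Each iteration
    hypothesises a translation from one random pair, counts how many
    matches agree within tol, and the final estimate is refit on the
    largest inlier set found.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                              # one-pair hypothesis
        inliers = np.linalg.norm(src + t - dst, axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers

# synthetic matches: true shift (3, -2) plus two gross mismatches
src = np.array([[0, 0], [1, 5], [4, 2], [7, 1], [2, 9], [6, 6]], float)
dst = src + np.array([3.0, -2.0])
dst[4] += 40.0                                           # outlier matches
dst[5] -= 25.0
t, inliers = ransac_translation(src, dst)
```

The mismatched pairs are rejected as outliers, and the translation is recovered from the consistent majority.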
Optical coherence tomography (OCT) has been widely applied in the examination and diagnosis of corneal diseases, but the information that can be obtained from OCT images by manual inspection is limited. We propose an automatic processing method to assist ophthalmologists in locating the boundaries in corneal OCT images and analyzing the recovery of corneal wounds after treatment from longitudinal OCT images. It includes the following steps: preprocessing, epithelium and endothelium boundary segmentation and correction, wound detection, corneal boundary fitting, and wound analysis. The method was tested on a dataset of longitudinal corneal OCT images from 20 subjects, each with five images acquired over a period of time after corneal surgery. The proposed algorithm achieves high segmentation and classification accuracy and can be used to analyze wound recovery after corneal surgery.
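The boundary-fitting step can bridge the gap a wound leaves in the detected corneal boundary. A minimal sketch using a least-squares polynomial fit follows; the paper does not state its fitting model, so the parabolic boundary and the function names here are assumptions for illustration:

```python
import numpy as np

def fit_boundary(cols, rows, degree=2):
    """Fit a smooth polynomial through detected boundary points so the
    curve can be evaluated across columns where detection failed
    (e.g. inside a wound region)."""
    coeffs = np.polyfit(cols, rows, degree)
    return np.poly1d(coeffs)

# synthetic epithelium boundary: a parabola with the wound columns missing
cols = np.array([0, 1, 2, 3, 7, 8, 9, 10], float)   # columns 4-6 undetected
rows = 0.5 * (cols - 5.0) ** 2 + 10.0
curve = fit_boundary(cols, rows)
# curve(5.0) reconstructs the boundary row at the centre of the gap
```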