Eye diseases have long threatened public health worldwide. Many people suffer from various eye diseases, yet there are not enough skilled ophthalmologists to meet the demand for medical care, so a method for performing ophthalmic examinations automatically and conveniently is needed. Although many well-designed diagnosis systems use artificial intelligence algorithms to identify ophthalmic disorders, they tend to depend on high-quality anterior segment images to perform well. To capture high-quality anterior segment images with nothing more than a smartphone, we propose a system comprising a semantic segmentation model and an image quality assessment method for anterior segment images. The segmentation model, the multi-task anterior segment image semantic segmentation (MT-ASISS) model, uses a purpose-built multitask learning network structure and achieves a Dice score of 92.63% at a processing speed of 138 ms per frame on smartphones. The quality assessment method, the Mixed-Parameters Quality Assessment (MPQA) method, achieves a mean average precision (mAP) of 92.6%. The system reduces the need for professional image-collection equipment, eases the burden of manually selecting satisfactory images, and improves the efficiency of acquiring anterior segment images.
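The Dice score reported above measures overlap between a predicted segmentation mask and the ground truth. A minimal sketch of that metric for binary masks (the function name and toy masks are illustrative, not part of the authors' system):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total

# Toy example: a 4-pixel prediction against a 6-pixel ground truth
a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1   # 4 pixels
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1   # 6 pixels
print(dice_score(a, b))  # 2*4 / (4+6) = 0.8
```

A score of 1.0 means perfect overlap; the 92.63% reported for MT-ASISS is this quantity averaged over the test set.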
We propose an automated segmentation method to detect, segment, and quantify hyperreflective foci (HFs) in three-dimensional (3-D) spectral domain optical coherence tomography (SD-OCT). The algorithm comprises three stages: preprocessing, layer segmentation, and HF segmentation. A supervised classifier (random forest) produces a set of boundary probabilities, and an optimal graph search method is then applied, using the Sobel edge algorithm, to identify and produce the layer segmentation. An automated grow-cut algorithm then segments the HFs. The proposed algorithm was tested on 20 3-D SD-OCT volumes from 20 patients diagnosed with proliferative diabetic retinopathy (PDR) or diabetic macular edema (DME). The average Dice similarity coefficient and correlation coefficient (r) are 62.30% and 96.90% for PDR, and 63.80% and 97.50% for DME, respectively. The algorithm provides clinicians with accurate quantitative information, such as the size and volume of the HFs, which can assist in clinical diagnosis, treatment, disease monitoring, and progression assessment.
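Grow-cut, used here for the HF segmentation stage, is a seeded cellular-automaton method: a labelled pixel "attacks" its neighbours, and a neighbour adopts the label when the attacker's confidence, attenuated by the intensity difference, exceeds its own. A minimal 2-D, 4-connected sketch under these standard rules (the abstract does not give the authors' exact variant, so the attenuation function and seeds below are illustrative):

```python
import numpy as np

def grow_cut(image, labels, strength, n_iter=20):
    """Toy grow-cut segmentation. labels: 0 = unlabelled, 1 = background
    seed, 2 = foreground seed; strength: seed confidence in [0, 1]."""
    img = image.astype(float)
    max_c = img.max() - img.min() + 1e-9
    lab, strg = labels.copy(), strength.astype(float).copy()
    h, w = img.shape
    for _ in range(n_iter):
        new_lab, new_strg = lab.copy(), strg.copy()
        for y in range(h):
            for x in range(w):
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and lab[ny, nx] != 0:
                        # attacker's strength decays with intensity difference
                        g = 1.0 - abs(img[y, x] - img[ny, nx]) / max_c
                        attack = g * strg[ny, nx]
                        if attack > new_strg[y, x]:
                            new_lab[y, x] = lab[ny, nx]
                            new_strg[y, x] = attack
        lab, strg = new_lab, new_strg
    return lab

# Toy image: a bright 3x3 "focus" on a dark background, one seed each
img = np.zeros((5, 5)); img[1:4, 1:4] = 1.0
labels = np.zeros((5, 5), dtype=int); strength = np.zeros((5, 5))
labels[2, 2], strength[2, 2] = 2, 1.0   # foreground seed inside the focus
labels[0, 0], strength[0, 0] = 1, 1.0   # background seed
seg = grow_cut(img, labels, strength)
```

On this toy input the foreground label fills exactly the bright 3x3 block, since attacks across the strong intensity edge are fully attenuated; a production implementation would vectorize the neighbour updates and run in 3-D.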
The goal of no-reference/blind image quality assessment (NR-IQA) is to devise a perceptual model that predicts the quality of a distorted image consistently with human opinion; feature extraction is a central issue in this task. However, the features used in state-of-the-art "general purpose" NR-IQA algorithms are usually either natural scene statistics (NSS) based or perceptually relevant, which limits the performance of these models. To improve performance further, we propose a general purpose NR-IQA algorithm that combines NSS-based features with perceptually relevant features, extracting them in both the spatial and gradient domains. In the spatial domain, we extract point-wise statistics of single pixel values, characterized by a generalized Gaussian distribution model, to form the underlying features. In the gradient domain, statistical features based on neighboring gradient magnitude similarity are extracted. A mapping from features to quality scores is then learned with support vector regression. Experimental results on benchmark image databases demonstrate that the proposed algorithm correlates highly with human judgments of quality and yields significant performance improvements over state-of-the-art methods.
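The spatial-domain features rest on fitting a generalized Gaussian distribution (GGD) to pixel statistics. A common way to do this in NSS-based IQA is moment matching: the shape parameter α is the value at which the ratio Γ(1/α)Γ(3/α)/Γ(2/α)² equals the empirical E[x²]/E[|x|]². A minimal numpy sketch of that estimate (the grid range and step are illustrative; the abstract does not specify the authors' fitting procedure):

```python
import math
import numpy as np

def ggd_shape(x):
    """Estimate the GGD shape parameter alpha by moment matching:
    find alpha where Gamma(1/a)*Gamma(3/a)/Gamma(2/a)**2 matches the
    empirical second-moment-to-mean-absolute-value ratio."""
    x = np.asarray(x, dtype=float).ravel()
    rho = np.mean(x * x) / (np.mean(np.abs(x)) ** 2 + 1e-12)
    alphas = np.arange(0.2, 10.0, 0.001)          # illustrative search grid
    r = np.array([math.gamma(1 / a) * math.gamma(3 / a) / math.gamma(2 / a) ** 2
                  for a in alphas])
    return alphas[np.argmin(np.abs(r - rho))]

rng = np.random.default_rng(0)
alpha = ggd_shape(rng.normal(size=100_000))
print(alpha)  # expected close to 2.0, since a Gaussian is a GGD with alpha = 2
```

In a full pipeline this estimate (together with a variance term) would be computed over locally normalized pixel values and fed, alongside the gradient-domain features, into the support vector regressor.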