This study examines the potential of Self-Supervised Learning (SSL) for segmenting the aorta and coronary arteries in Coronary Computed Tomography Angiography (CCTA) volumes to facilitate automated Pericoronary Adipose Tissue (PCAT) analysis. Using 83 CCTA volumes, we explored the efficacy of SSL in a limited-data setting: 49 volumes were designated for SSL and supervised learning, while the remaining 34 formed a held-out test set. The Deep Learning (DL) model's encoder was pretrained on unlabeled CCTA volumes during SSL and subsequently fine-tuned on labeled volumes in the supervised learning phase, enabling the model to learn feature representations without extensive annotations. Segmentation performance was assessed by varying the percentage of the 49 CCTA volumes used in supervised learning. With SSL, the model consistently outperformed training from non-pretrained (random) weights, achieving a Dice of 0.866 with only 15 labeled volumes (30%), compared to a Dice of 0.864 for which random weights required 44 labeled volumes (90%). Additionally, we segmented PCAT and found no significant differences in mean Hounsfield Unit (HU) attenuation between ground truth and predictions: mean attenuations of PCAT-LAD and PCAT-RCA for ground truth were -79.97 HU (SD = 9.54) and -85.47 HU (SD = 8.41), respectively, and did not differ significantly from the predicted values of -80.11 HU (SD = 9.35) for PCAT-LAD and -86.19 HU (SD = 8.36) for PCAT-RCA. These findings suggest that in-domain SSL pretraining required about 66% fewer labeled volumes for comparable segmentation performance, offering a more efficient way to leverage limited datasets for DL applications in medical imaging.
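The Dice score reported above measures the overlap between predicted and ground-truth segmentation masks. A minimal sketch of how it is typically computed for binary masks (the function name and toy arrays are illustrative, not from the study):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity between two binary masks (1 = foreground).

    Dice = 2 * |A ∩ B| / (|A| + |B|); eps guards against empty masks.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection) / (pred.sum() + target.sum() + eps)

# Toy 2D masks standing in for CCTA segmentation slices.
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(a, b), 3))  # 2*2 / (3+3) ≈ 0.667
```

A Dice of 1.0 indicates perfect overlap; values around 0.86, as in the abstract, indicate strong agreement for vascular structures.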
Non-contrast cardiac CT Calcium Score (CTCS) images provide a low-cost cardiovascular disease screening exam to guide therapeutics. We are extending the standard Agatston score to include cardiovascular risk assessments from features of epicardial adipose tissue, pericoronary adipose tissue, heart size, and more, which are currently extracted from Coronary CT Angiography (CCTA) images. To aid such determinations, we developed a deep learning method to synthesize Virtual CT Angiography (VCTA) images from CTCS images. We retrospectively collected data from 256 patients who underwent both CCTA and CTCS at our hospitals (MacKay and UH). Training on 205 patients from UH, we used the contrastive unpaired translation method to create VCTA images. Testing on 51 patients from MacKay, we generated VCTA images that compared favorably to the matched CCTA images, with enhanced coronaries and a ventricular cavity that were well delineated from surrounding tissues (epicardial adipose tissue and myocardium). Automated segmentation of the myocardium and left-ventricle cavity in VCTA showed strong agreement with measurements obtained from CCTA: percent volume differences between VCTA and CCTA segmentations were 2±8% for the myocardium and 5±10% for the left-ventricle cavity, respectively. Manually segmented coronary arteries from VCTA and CTCS (with guidance from registered CCTA) aligned well, with centerline displacements within 50% of the coronary artery diameter (4 mm). Pericoronary adipose tissue measurements using the axial disk method showed excellent agreement between VCTA ROIs and manual segmentations (e.g., average HU differences were typically <3 HU). These promising results suggest that VCTA can be used to add assessments indicative of cardiovascular risk from CTCS images.
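The percent volume difference quoted above compares two binary segmentations of the same structure. A minimal sketch of one plausible definition, signed and relative to the CCTA reference (the function name and toy masks are assumptions for illustration, not the paper's exact formula):

```python
import numpy as np

def percent_volume_difference(mask_a: np.ndarray, mask_b: np.ndarray,
                              voxel_volume_mm3: float = 1.0) -> float:
    """Signed percent volume difference of mask_a relative to mask_b.

    The voxel volume cancels in the ratio but is kept for clarity:
    physical volume = voxel count * per-voxel volume.
    """
    va = mask_a.sum() * voxel_volume_mm3
    vb = mask_b.sum() * voxel_volume_mm3
    return 100.0 * (va - vb) / vb

# Toy example: a VCTA segmentation with 102 voxels vs. a CCTA reference with 100.
vcta_mask = np.zeros(1000, dtype=bool); vcta_mask[:102] = True
ccta_mask = np.zeros(1000, dtype=bool); ccta_mask[:100] = True
print(percent_volume_difference(vcta_mask, ccta_mask))  # 2.0
```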
Purpose: Retinopathy of prematurity (ROP) is a retinal vascular disease affecting premature infants that can culminate in blindness within days if not monitored and treated. A disease stage for scrutiny and administration of treatment within ROP is “plus disease,” characterized by increased tortuosity and dilation of posterior retinal blood vessels. The monitoring of ROP occurs via routine imaging, typically using expensive instruments ($50K to $140K) that are unavailable in low-resource settings at the point of care. Approach: As part of the smartphone-ROP program to enable referrals to expert physicians, fundus images are acquired using smartphone cameras and inexpensive lenses. We developed methods for artificial intelligence determination of plus disease, consisting of a preprocessing pipeline to enhance vessels and harmonize images followed by deep learning classification. A deep learning binary classifier (plus disease versus no plus disease) was developed using GoogLeNet. Results: Vessel contrast was enhanced by 90% after preprocessing, as assessed by the contrast improvement index. In an image quality evaluation, preprocessed and original images were evaluated by pediatric ophthalmologists from the US and South America with years of experience diagnosing ROP and plus disease. All participating ophthalmologists agreed or strongly agreed that vessel visibility was improved with preprocessing. Using images from various smartphones, harmonized via preprocessing (e.g., vessel enhancement and size normalization) and augmented in physically reasonable ways (e.g., image rotation), we achieved an area under the ROC curve of 0.9754 for plus disease on a limited dataset. Conclusions: Promising results indicate the potential for developing algorithms and software to facilitate the usage of cell phone images for staging of plus disease.
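A contrast improvement index (CII) is commonly defined as the ratio of a contrast measure computed on the processed image to the same measure on the original, with values above 1 indicating improvement. A minimal sketch assuming a block-wise Michelson contrast (the window size and exact definition here are assumptions for illustration, not necessarily the paper's):

```python
import numpy as np

def local_contrast(img: np.ndarray, win: int = 3) -> float:
    """Mean Michelson contrast (max-min)/(max+min) over non-overlapping windows."""
    h, w = (img.shape[0] // win) * win, (img.shape[1] // win) * win
    blocks = img[:h, :w].reshape(h // win, win, w // win, win)
    mx = blocks.max(axis=(1, 3)).astype(float)
    mn = blocks.min(axis=(1, 3)).astype(float)
    return float(np.mean((mx - mn) / (mx + mn + 1e-7)))

def contrast_improvement_index(processed: np.ndarray, original: np.ndarray) -> float:
    """CII > 1 means the processed image has higher local contrast."""
    return local_contrast(processed) / local_contrast(original)

# Toy 3x3 images: contrast stretching widens the intensity range.
original = np.tile(np.array([100, 110, 120]), (3, 1))
processed = np.tile(np.array([50, 110, 170]), (3, 1))
print(round(contrast_improvement_index(processed, original), 1))  # 6.0
```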
Retinopathy of prematurity (ROP) is a retinal vascular disease that affects premature infants and can result in blindness within days if not monitored and treated. A disease stage for increased scrutiny and treatment within ROP is “plus disease,” characterized by increased tortuosity and dilation of posterior retinal blood vessels. Monitoring of ROP occurs with routine imaging, typically using expensive instruments ranging from $50K to $140K. In low-resource areas of the world, smartphone cameras and inexpensive Volk 28D lenses are being used to image the fundus, albeit with lower fields of view and image quality than the expensive systems. We developed a preprocessing pipeline to enhance vessel visualization and harmonize images for automated analysis using deep learning algorithms. After preprocessing, vessel contrast was enhanced by 90% as assessed by the contrast improvement index. In an image quality evaluation, 441 images were evaluated by pediatric ophthalmologists from the US and South America, all with years of experience diagnosing ROP and plus disease. All participating ophthalmologists either agreed or strongly agreed that vessel visibility was improved in the processed images. A preliminary deep learning binary classifier (plus vs. no plus disease) was developed using GoogLeNet. Using smartphone images harmonized via preprocessing (e.g., vessel enhancement and size normalization) and augmented in physically reasonable ways (e.g., image rotation), we achieved an accuracy of 0.96 for plus disease on a limited dataset. These promising results suggest the potential to create algorithms and software to improve usage of cell phone images for ROP staging.
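The "physically reasonable" augmentation mentioned above exploits the fact that the retina has no canonical orientation in a fundus image, so rotated copies are valid training samples. A minimal sketch using 90° rotations (the function name and angle set are assumptions for illustration; arbitrary-angle rotations would require interpolation):

```python
import numpy as np

def augment_rotations(img: np.ndarray, angles=(90, 180, 270)) -> list:
    """Return the original image plus lossless rotated copies.

    Multiples of 90° avoid interpolation artifacts; for fundus images
    every rotation is a physically plausible acquisition.
    """
    out = [img]
    for a in angles:
        out.append(np.rot90(img, k=a // 90))
    return out

img = np.arange(9).reshape(3, 3)  # toy stand-in for a fundus image
print(len(augment_rotations(img)))  # 4 samples from 1 image
```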
Vagus nerve stimulation (VNS) is a method to treat drug-resistant epilepsy and depression, but therapeutic outcomes are often suboptimal. Newer electrode designs such as intra-fascicular electrodes offer potential improvements in reducing off-target effects but require a detailed understanding of the fascicular anatomy of the vagus nerve. We have adapted a section-and-image technique, cryo-imaging, with UV excitation to visualize fascicles along the length of the vagus nerve. In addition to offering optical sectioning at the surface via reduced penetration depth, UV illumination produces sufficient contrast between fascicular structures and connective tissue. Here we demonstrate the utility of this approach in pilot experiments: we imaged fixed cadaveric vagus nerve samples, segmented fascicles, and demonstrated 3D tracking of fascicles. Such data can serve as input for computer models of vagus nerve stimulation.
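The 3D tracking of fascicles described above can be approximated by linking fascicle cross-section centroids between adjacent cryo-imaging slices. A minimal sketch using greedy nearest-neighbour linking (a deliberate simplification with hypothetical names; the abstract does not specify the authors' actual tracking method):

```python
import numpy as np

def link_fascicles(centroids_by_slice, max_jump=50.0):
    """Greedily link fascicle centroids (x, y) across consecutive slices.

    Each track extends to the closest unclaimed centroid in the next
    slice, provided the jump is below max_jump (in pixels/µm).
    """
    tracks = [[c] for c in centroids_by_slice[0]]
    for slice_centroids in centroids_by_slice[1:]:
        remaining = list(slice_centroids)
        for track in tracks:
            if not remaining:
                break
            last = track[-1]
            dists = [np.hypot(last[0] - c[0], last[1] - c[1]) for c in remaining]
            i = int(np.argmin(dists))
            if dists[i] <= max_jump:
                track.append(remaining.pop(i))
    return tracks

# Two slices, two fascicles drifting slightly between sections.
centroids_by_slice = [[(0, 0), (100, 0)], [(2, 1), (98, 3)]]
tracks = link_fascicles(centroids_by_slice)
print([len(t) for t in tracks])  # [2, 2]
```

Real fascicle tracking must also handle fascicles splitting and merging, which this greedy one-to-one scheme does not.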