Intravenous contrast phase classification can improve the data curation of CT scans for medical AI applications. In our early work, a five-phase classification model based on ResNet was developed to serve this purpose. It predicts CT phases accurately when the test data follow the distribution of the training data, but its performance drops substantially on out-of-distribution data. To address this issue, this work incorporates inherent domain knowledge about the different CT phases into their classification. We examine the intensity distributions of a few key organs across CT phases. TotalSegmentator was used to segment these organs, including the pulmonary artery, heart atria, heart ventricles, heart myocardium, aorta, portal vein, splenic vein, liver, kidneys, inferior vena cava, iliac arteries, iliac veins, and urinary bladder. The intensity information was then extracted as image features to train a random forest classification model. The classification models were trained on an in-house dataset of 252 CT scans and validated on a test dataset of 213 out-of-distribution CT scans. The proposed method achieved an accuracy of 62.44%, compared with 53.02% for the ResNet model. These results show that embedding domain knowledge can make CT phase classification more robust to out-of-distribution data.
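A minimal sketch of the described pipeline is given below, assuming per-organ segmentation masks have already been produced by TotalSegmentator and saved as NIfTI files. The specific mask file names, the choice of intensity statistics (mean, standard deviation, quartiles), and the random-forest settings are illustrative assumptions; the abstract only states that intensity information was extracted as features for a random forest classifier.

```python
import numpy as np
import nibabel as nib
from sklearn.ensemble import RandomForestClassifier

# Organs listed in the abstract; the exact mask file names below follow
# TotalSegmentator-style labels but are assumptions for illustration.
ORGANS = [
    "pulmonary_artery", "heart_atrium_left", "heart_ventricle_left",
    "heart_myocardium", "aorta", "portal_vein_and_splenic_vein",
    "liver", "kidney_left", "inferior_vena_cava",
    "iliac_artery_left", "iliac_vena_left", "urinary_bladder",
]

def organ_intensity_features(ct_path, mask_dir):
    """Summarize HU intensities inside each organ mask (hypothetical layout:
    one NIfTI mask per organ, produced beforehand by TotalSegmentator)."""
    ct = nib.load(ct_path).get_fdata()
    feats = []
    for organ in ORGANS:
        mask = nib.load(f"{mask_dir}/{organ}.nii.gz").get_fdata() > 0
        voxels = ct[mask]
        if voxels.size == 0:  # organ absent from the field of view
            feats.extend([0.0, 0.0, 0.0, 0.0])
        else:
            feats.extend([voxels.mean(), voxels.std(),
                          np.percentile(voxels, 25), np.percentile(voxels, 75)])
    return np.asarray(feats)

def train_phase_classifier(train_scans):
    """train_scans is a hypothetical list of (ct_path, mask_dir, phase_label)
    tuples; phase_label is an integer in 0..4 for the five phases."""
    X = np.stack([organ_intensity_features(p, m) for p, m, _ in train_scans])
    y = np.array([label for _, _, label in train_scans])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, y)
    return clf
```

At inference time, the same feature extraction would be applied to an unseen scan and passed to `clf.predict`, so the classifier depends only on organ-wise intensity summaries rather than raw image appearance.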
Intravenous contrast enhancement phase information is important for computer-aided diagnosis of CT scans because the visual appearance of the scans varies substantially among the different phases. Although phase information could help refine training data curation for downstream tasks, it is seldom included in data augmentation when training a deep learning model. Unfortunately, in current clinical settings, phase information is either unavailable or unreliable in most PACS systems. This motivated us to develop a method to automatically classify multiphase CT scans. In this study, a residual network (ResNet34) was used to classify five CT phases commonly used in the clinical environment: non-contrast, arterial, portal venous, nephrographic, and delayed. A dataset of 395 multiphase CT scans was weakly labeled using keywords and split into 316 training and 79 test CT scans. We compared ResNet34 with two other popular classification models, VGG19 and DenseNet121. ResNet34 achieved the highest accuracy of 99%, while the accuracies of VGG19 and DenseNet121 were 97% and 95%, respectively. In addition, ResNet34 had fewer trainable parameters than the other two models, which could reduce the inference time to 35 seconds per scan and enhance the generalizability of the model. The high accuracy of multiphase classification suggests a potential way to improve data curation based on the CT contrast enhancement phase. This would be useful for improving deep learning models by enhancing dataset curation and providing more realistic augmented data.
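A hedged sketch of how such a ResNet34 five-phase classifier could be set up with torchvision is shown below. The slice-wise 2D input handling, the intensity windowing, and the training hyperparameters are assumptions, since the abstract does not describe how the CT volumes are fed to the network.

```python
import torch
import torch.nn as nn
from torchvision import models

# Five phases from the abstract: non-contrast, arterial, portal venous,
# nephrographic, delayed.
NUM_PHASES = 5

# Standard torchvision ResNet34 with the final layer replaced by a 5-way head.
model = models.resnet34(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_PHASES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed settings

def train_step(images, labels):
    """One optimization step. `images` is assumed to be a batch of 3-channel
    2D slices (e.g., a windowed HU slice repeated across channels), and
    `labels` holds the phase index of each slice's parent scan."""
    optimizer.zero_grad()
    logits = model(images)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this slice-wise assumption, a scan-level prediction could be obtained by averaging the softmax outputs over a scan's slices; this aggregation step is likewise an assumption rather than a detail given in the abstract.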