We trained and evaluated an AI/ML model, previously developed in-house, for COVID severity prediction using two consecutive cohorts of COVID-19-positive adult patients from a single institution. The first cohort was from the period when the Delta strain was dominant, accounting for >95% of cases (June 24–December 11, 2021; 820 patients, 1331 chest radiographs (CXRs)), and the second cohort was from the period when the Omicron variant was dominant (January 1–21, 2022; 656 patients, 970 CXRs). Inclusion criteria were COVID positivity and the availability of CXR exams; for patients admitted to the ICU as part of their treatment, only CXRs acquired prior to ICU admission were included. Exclusion criteria were image acquisition in the ICU or the presence of mechanical ventilation. Our image-based AI/ML model was trained to predict, from each frontal CXR of a COVID-positive patient, whether that patient would be admitted to the ICU within a 24-, 48-, 72-, or 96-hour window. The model was evaluated (1) in a cross-sectional test, trained on a subset of the Delta cohort and tested on an independent subset of that cohort; (2) similarly for the Omicron cohort; and (3) in a longitudinal test, trained on the Delta cohort and tested on the Omicron cohort.
Cohorts were similar in ICU admission rate and in the fraction of portable CXRs, while the immunization rate was higher for the Omicron cohort. The model did not demonstrate signs of aging: performance in the longitudinal test was very similar to that within the Delta cohort, e.g., an area under the ROC curve (AUC) in the task of predicting ICU admission within 24 hours of 0.76 [0.68; 0.84] when trained and tested within the Delta cohort versus 0.77 [0.73; 0.80] in the longitudinal test (p > 0.05). Performance within the Omicron cohort was similar as well, at 0.76 [0.66; 0.84].
Our AI/ML model for COVID-severity prediction did not demonstrate signs of aging in a longitudinal test when trained on the Delta cohort and applied as-is to the Omicron cohort.
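The evaluation scheme above (training on one cohort, scoring a later cohort as-is, and reporting the AUC with a bracketed 95% interval) can be sketched as follows. This is a minimal illustration on synthetic data: the feature generator, the logistic-regression stand-in for the image-based model, and the 2000-resample bootstrap are all assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_cohort(n, n_features=10):
    """Synthetic stand-in for per-CXR features and ICU-admission labels."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, n_features)) + 0.8 * y[:, None]  # weak signal
    return X, y

# "Delta" training cohort and "Omicron" longitudinal test cohort (synthetic sizes).
X_delta, y_delta = make_cohort(820)
X_omicron, y_omicron = make_cohort(656)

# Logistic regression stands in for the image-based AI/ML model.
model = LogisticRegression(max_iter=1000).fit(X_delta, y_delta)
scores = model.predict_proba(X_omicron)[:, 1]
auc = roc_auc_score(y_omicron, scores)

# Nonparametric bootstrap of the test set for a 95% CI on the AUC.
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y_omicron), size=len(y_omicron))
    if len(np.unique(y_omicron[idx])) < 2:
        continue  # need both classes present to compute an AUC
    boot.append(roc_auc_score(y_omicron[idx], scores[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC = {auc:.2f} [{lo:.2f}; {hi:.2f}]")
```

The percentile bootstrap shown here is one common way to obtain the bracketed intervals reported in the abstract; the study's exact interval-estimation procedure is not specified.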
DICOM header information is frequently used to classify medical image types; however, if a header is missing fields or contains incorrect data, its utility is limited. To expedite image classification, we trained convolutional neural networks (CNNs) on two classification tasks for thoracic radiographic views obtained from dual-energy studies: (a) distinguishing between frontal, lateral, soft-tissue, and bone images and (b) distinguishing between posteroanterior (PA) and anteroposterior (AP) chest radiographs. CNNs with the AlexNet architecture were trained from scratch. For task (a), 1910 manually classified radiographs were used for training, and the network was tested on an independent set of 3757 images. Frontal radiographs from the two datasets were combined to train a network for task (b), which was tested on an independent set of 1000 radiographs. ROC analysis was performed for each trained CNN with the area under the curve (AUC) as the performance metric. Classification between frontal images (AP/PA) and other image types yielded an AUC of 0.997 [95% confidence interval (CI): 0.996, 0.998]. Classification between PA and AP radiographs resulted in an AUC of 0.973 (95% CI: 0.961, 0.981). The CNNs were able to rapidly classify thoracic radiographs with high accuracy, thus potentially contributing to an effective and efficient workflow.
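The two-stage pipeline described above (first sorting images by view type, then subdividing the frontal images into PA vs. AP) can be sketched as below. Logistic regression on random "image" vectors stands in for the two trained AlexNet CNNs, and all data are synthetic; only the cascade logic reflects the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
TYPES = ["frontal", "lateral", "soft_tissue", "bone"]

def fake_images(n, label, dim=64):
    """Synthetic feature vectors with a label-dependent offset (stand-in for pixels)."""
    return rng.normal(size=(n, dim)) + 1.5 * label

# Task (a): 4-class view-type classifier (stand-in for the first CNN).
Xa = np.vstack([fake_images(200, k) for k in range(4)])
ya = np.repeat(np.arange(4), 200)
clf_type = LogisticRegression(max_iter=1000).fit(Xa, ya)

# Task (b): PA vs. AP classifier, trained on frontal images only.
Xb = np.vstack([fake_images(200, 0), fake_images(200, 0) + 0.7])
yb = np.repeat([0, 1], 200)  # 0 = PA, 1 = AP
clf_pa_ap = LogisticRegression(max_iter=1000).fit(Xb, yb)

def classify(image):
    """Cascade: identify the view type; only frontal images get a PA/AP call."""
    view = TYPES[clf_type.predict(image[None, :])[0]]
    if view != "frontal":
        return view
    return "PA" if clf_pa_ap.predict(image[None, :])[0] == 0 else "AP"

print(classify(fake_images(1, 2)[0]))
```

Routing only the frontal images into the second classifier mirrors the study design, in which the task (b) network was trained solely on frontal radiographs.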
An editorial by Editor-in-Chief Maryellen Giger explains the journal’s transition to structured abstracts.
The purpose of this study was to evaluate breast MRI radiomics in predicting, prior to any treatment, the response to neoadjuvant chemotherapy (NAC) in patients with invasive lymph node (LN)-positive breast cancer for two tasks: (1) prediction of pathologic complete response and (2) prediction of post-NAC LN status. Our study included 158 patients: 19 showed post-NAC complete pathologic response (pathologic TNM stage T0,N0,MX) and 139 showed incomplete response. Forty-two patients were post-NAC LN-negative, and 116 were post-NAC LN-positive. We further analyzed prediction of response by hormone receptor subtype of the primary cancer (77 hormone receptor-positive, 39 HER2-enriched, 38 triple-negative, and 4 cancers with unknown receptor status). Only pre-NAC MRIs underwent computer analysis, initialized by an expert breast radiologist indicating the index cancers and metastatic axillary sentinel LNs on DCE-MR images. Forty-nine computer-extracted radiomics features were obtained, both for the primary cancers and for the metastatic sentinel LNs. Because the dataset contained MRIs acquired at 1.5 T and at 3.0 T, we eliminated features affected by magnet strength using the Mann–Whitney U-test under the null hypothesis that the 1.5 T and 3.0 T samples were selected from populations having the same distribution. Bootstrapping and ROC analysis were used to assess the performance of individual features in the two classification tasks. Eighteen features appeared unaffected by magnet strength. Pre-NAC tumor features generally appeared uninformative in predicting response to therapy. In contrast, some pre-NAC LN features were predictive: two pre-NAC LN features predicted pathologic complete response (area under the ROC curve (AUC) up to 0.82 [0.70; 0.88]), and another two predicted post-NAC LN status (AUC up to 0.72 [0.62; 0.77]).
In the analysis by hormone receptor subtype, several potentially useful features were identified for predicting response to therapy in hormone receptor-positive and HER2-enriched cancers.
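The magnet-strength filtering step described above can be sketched as follows: for each radiomic feature, a Mann–Whitney U-test compares the 1.5 T and 3.0 T samples, and only features showing no evidence of a field-strength effect are retained. The data, sample sizes, and the 0.05 significance threshold here are illustrative assumptions, not the study's values.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)

# Synthetic stand-in: 49 radiomic features for lesions imaged at 1.5 T and 3.0 T.
n_features = 49
feats_15T = rng.normal(size=(80, n_features))
feats_30T = rng.normal(size=(70, n_features))
feats_30T[:, :5] += 1.0  # make the first 5 features magnet-strength dependent

robust = []
for j in range(n_features):
    # Null hypothesis: the 1.5 T and 3.0 T samples come from the same distribution.
    _, p = mannwhitneyu(feats_15T[:, j], feats_30T[:, j], alternative="two-sided")
    if p >= 0.05:  # no evidence of a field-strength effect -> keep the feature
        robust.append(j)

print(f"{len(robust)} of {n_features} features appear unaffected by magnet strength")
```

Features that survive this filter can then be evaluated individually with bootstrapping and ROC analysis, as in the abstract.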
If comparisons failed to meet p < 0.0015, features were considered potentially robust. While, as expected, the morphology feature of irregularity differed significantly (p = 0.0003) for benign lesions, because biopsy increased the irregularity of benign lesions, most features were potentially robust between biopsy conditions. While features performed well in distinguishing between luminal A and benign lesions, none demonstrated a significant difference in AUC between biopsy conditions.
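A comparison of a feature's classification AUC between two acquisition conditions, as described above, can be sketched with a simple bootstrap of the AUC difference. The data, the 2000 resamples, and the percentile interval are illustrative assumptions rather than the study's exact statistical procedure.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

def sample_condition(n, shift):
    """One feature value per lesion, with benign (0) vs. luminal A (1) labels."""
    y = rng.integers(0, 2, size=n)
    x = rng.normal(size=n) + shift * y
    return x, y

# Same feature measured under two biopsy conditions (synthetic; same true effect).
x_pre, y_pre = sample_condition(150, shift=1.0)
x_post, y_post = sample_condition(150, shift=1.0)

auc_pre = roc_auc_score(y_pre, x_pre)
auc_post = roc_auc_score(y_post, x_post)

# Bootstrap the AUC difference; a 95% interval containing 0 is consistent
# with no significant difference between biopsy conditions.
diffs = []
for _ in range(2000):
    i = rng.integers(0, len(y_pre), size=len(y_pre))
    j = rng.integers(0, len(y_post), size=len(y_post))
    if len(np.unique(y_pre[i])) < 2 or len(np.unique(y_post[j])) < 2:
        continue  # both classes needed in each resample
    diffs.append(roc_auc_score(y_pre[i], x_pre[i]) -
                 roc_auc_score(y_post[j], x_post[j]))
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"AUC pre = {auc_pre:.2f}, post = {auc_post:.2f}, diff 95% = [{lo:.2f}; {hi:.2f}]")
```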