Radiomics involves the quantitative analysis of medical images to provide useful information for a range of clinical applications, including disease diagnosis and treatment assessment. However, the generalizability of radiomics models is often challenged by undesirable variability in radiomics feature values introduced by different scanners and imaging conditions. To address this issue, we developed a novel dual-domain deep learning algorithm that recovers ground truth feature values given known blur and noise in the image. The network consists of two U-Nets connected by a differentiable GLCM estimator: the first U-Net restores the image, and the second restores the GLCM. We evaluated the network on lung CT image patches in terms of both the closeness of recovered feature values to the ground truth and the accuracy of classification between normal and COVID lungs, and compared its performance with an image restoration-only method and an analytical method developed in previous work. The proposed network outperforms both methods, recovering the GLCM with the lowest mean absolute error from the ground truth; recovered GLCM feature values are, on average, within 2.19% of the ground truth. Classification performance using recovered features from the network closely matches the “best case” performance achieved with ground truth feature values. These results show the deep learning method to be a promising tool for radiomics standardization, paving the way for more reliable and repeatable radiomics models.
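The dual-domain design described above can be sketched as follows. This is a minimal PyTorch illustration, not the authors' implementation: the soft-binning form of the differentiable GLCM estimator, the bin count, the single pixel offset, and the class/function names (soft_glcm, DualDomainNet) are all assumptions made for the sketch.

import torch
import torch.nn as nn

def soft_glcm(img, n_bins=16, offset=(0, 1), sigma=0.05):
    """Differentiable GLCM: softly assign intensities (assumed in [0, 1]) to bins
    with a Gaussian kernel, then accumulate co-occurrences for one pixel offset."""
    b, _, h, w = img.shape
    centers = torch.linspace(0.0, 1.0, n_bins, device=img.device)
    # Soft one-hot assignment of each pixel to intensity bins: (B, n_bins, H, W)
    assign = torch.exp(-((img - centers.view(1, n_bins, 1, 1)) ** 2) / (2 * sigma ** 2))
    assign = assign / (assign.sum(dim=1, keepdim=True) + 1e-8)
    dy, dx = offset
    a = assign[:, :, : h - dy, : w - dx]   # reference pixels
    p = assign[:, :, dy:, dx:]             # neighbouring pixels at the offset
    # Outer product over the bin dimension, summed over pixel positions
    glcm = torch.einsum('bihw,bjhw->bij', a, p)
    return glcm / (glcm.sum(dim=(1, 2), keepdim=True) + 1e-8)

class DualDomainNet(nn.Module):
    """Image-restoration U-Net -> differentiable GLCM estimator -> GLCM-restoration network."""
    def __init__(self, image_unet, glcm_net, n_bins=16):
        super().__init__()
        self.image_unet = image_unet   # first U-Net: restores the degraded image
        self.glcm_net = glcm_net       # second network: restores the estimated GLCM
        self.n_bins = n_bins

    def forward(self, degraded):
        restored = self.image_unet(degraded)
        glcm = soft_glcm(restored, n_bins=self.n_bins).unsqueeze(1)  # add channel dim
        return restored, self.glcm_net(glcm)

Because the GLCM estimator is differentiable, losses on both the restored image and the restored GLCM can be backpropagated through the whole pipeline, which is what allows the two restoration stages to be trained jointly.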