Radiomics involves the quantitative analysis of medical images to extract information useful for a range of clinical applications, including disease diagnosis and treatment assessment. However, the generalizability of radiomics models is often challenged by undesirable variability in radiomics feature values introduced by different scanners and imaging conditions. To address this issue, we developed a novel dual-domain deep learning algorithm that recovers ground truth feature values given known blur and noise in the image. The network consists of two U-Nets connected by a differentiable gray-level co-occurrence matrix (GLCM) estimator: the first U-Net restores the image, and the second restores the GLCM. We evaluated the network on lung CT image patches in terms of both the closeness of recovered feature values to ground truth and the accuracy of classification between normal and COVID-affected lungs, comparing performance against an image restoration-only method and an analytical method developed in previous work. The proposed network outperforms both methods, yielding GLCMs with the lowest mean absolute error from ground truth; recovered GLCM feature values are, on average, within 2.19% of ground truth. Classification performance using features recovered by the network closely matches the "best case" performance achieved using ground truth feature values. The deep learning method is thus a promising tool for radiomics standardization, paving the way for more reliable and repeatable radiomics models.
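To make the GLCM concrete, the sketch below computes a normalized gray-level co-occurrence matrix and one Haralick-style texture feature (contrast) for a tiny quantized patch. This is a generic, minimal illustration of the kind of feature the abstract refers to, not the paper's differentiable estimator; the function names and the two-level example patch are ours.

```python
from collections import Counter

def glcm(patch, levels, offset=(0, 1)):
    """Normalized gray-level co-occurrence matrix for a 2-D patch.

    Counts ordered pairs (patch[r][c], patch[r+dr][c+dc]) for the given
    pixel offset, then normalizes so all entries sum to 1.
    """
    dr, dc = offset
    rows, cols = len(patch), len(patch[0])
    counts = Counter()
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                counts[(patch[r][c], patch[r2][c2])] += 1
    total = sum(counts.values())
    return [[counts[(i, j)] / total for j in range(levels)]
            for i in range(levels)]

def glcm_contrast(P):
    """Haralick contrast: sum over i, j of (i - j)^2 * P[i][j]."""
    return sum((i - j) ** 2 * p
               for i, row in enumerate(P)
               for j, p in enumerate(row))

# A 3x3 patch quantized to two gray levels (illustrative only).
patch = [[0, 0, 1],
         [0, 1, 1],
         [1, 1, 1]]
P = glcm(patch, levels=2)      # horizontal neighbor offset (0, 1)
contrast = glcm_contrast(P)    # 2 of 6 pairs differ -> contrast 1/3
```

Blur and noise perturb the joint pixel-pair statistics that this matrix captures, which is why scanner-dependent imaging conditions shift GLCM feature values and why the paper restores the GLCM directly rather than only the image.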
Restoration of images contaminated by blur is an important processing tool across modalities, including computed tomography, where the blur induced by various system factors can be complex, with dependencies on acquisition and reconstruction protocol, and can even be patient-dependent. In many cases, such blur can be modeled and predicted with high accuracy, providing an important input to classical deconvolution approaches. While traditional deblurring methods tend to strongly magnify noise, deep learning approaches have the potential to improve upon classic performance limits. However, most network architectures base their restoration on image data alone, without knowledge of the system blur. In this work, we explore a deep learning approach that takes both image inputs and information characterizing the system blur, combining modeling and deep learning. We apply the approach to CT image restoration and compare it with an image-only deep learning approach. We find that inclusion of the system blur model improves deblurring performance, suggesting the potential power of the combined modeling and deep learning technique.