Optimization of CT image quality typically involves balancing variance and bias. In traditional filtered back-projection, this trade-off is controlled by the filter cutoff frequency. In model-based iterative reconstruction, the regularization strength parameter often serves the same function. Deep neural networks (DNNs) typically do not provide this tunable control over output image properties. Models are often trained to minimize the expected mean squared error, which penalizes both variance and bias in image outputs but does not offer any control over the trade-off between the two. We propose a method for controlling the output image properties of neural networks with a new loss function called weighted covariance and bias (WCB). Our proposed method uses multiple noise realizations of the input images during training to allow for separate weighting matrices for the variance and bias penalty terms. Moreover, we show that tuning these weights enables targeted penalization of specific image features with spatial frequency domain penalties. To evaluate our method, we present a simulation study using digital anthropomorphic phantoms, physical simulation of CT measurements, and image formation with various algorithms. We show that the WCB loss function offers a greater degree of control over trade-offs between variance and bias, whereas mean squared error provides only one specific image quality configuration. We also show that WCB can be used to control specific image properties including variance, bias, spatial resolution, and the noise correlation of neural network outputs. Finally, we present a method to optimize the proposed weights for a spiculated lung nodule shape discrimination task. Our results demonstrate that this new image quality loss function can control the image properties of DNN outputs and optimize image quality for task-specific performance.
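To illustrate the idea described above, the sketch below shows one way a variance/bias-decomposed loss with frequency-domain weights could be written in PyTorch. It is not the authors' implementation: the function name `wcb_loss`, the tensor shapes, and the use of an FFT to apply the weights are assumptions made for this example. The key ingredients follow the abstract: multiple noise realizations of the same object are passed through the network, the sample mean over realizations separates a bias term from a variance (noise deviation) term, and each term receives its own weighting before the quadratic penalty.

```python
import torch

def wcb_loss(outputs, target, w_bias=None, w_var=None):
    """Hypothetical sketch of a weighted covariance/bias-style loss.

    outputs: (R, B, 1, H, W) network outputs for R noise realizations
             of the same underlying object.
    target:  (B, 1, H, W) noiseless ground-truth image.
    w_bias, w_var: optional (H, W) spatial-frequency weights applied to
             the bias and variance terms, respectively.
    """
    mean_out = outputs.mean(dim=0)          # sample mean over realizations
    bias = mean_out - target                # bias image
    dev = outputs - mean_out.unsqueeze(0)   # zero-mean noise deviations

    def weighted_power(img, w):
        # Optional frequency-domain weighting before the quadratic penalty;
        # with w=None this reduces to an ordinary L2 penalty (Parseval).
        spec = torch.fft.fft2(img)
        if w is not None:
            spec = spec * w
        return spec.abs().pow(2).mean()

    bias_term = weighted_power(bias, w_bias)
    var_term = torch.stack([weighted_power(d, w_var) for d in dev]).mean()
    return bias_term + var_term
```

With uniform weights this decomposition recovers the usual mean-squared-error behavior; choosing, for example, a high-pass `w_var` would preferentially penalize high-frequency noise, while a low-pass `w_bias` would tolerate sharpening-induced bias, which is the kind of targeted trade-off the abstract describes.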
Topics: Education and training, Image quality, Computed tomography, Neural networks, Image acquisition, Data modeling, Image restoration