Open Access Paper
17 October 2022
Virtual non-metal network for metal artifact reduction in the sinogram domain
Da-in Choi, Taejin Kwon, Jaehong Hwang, Joon Il Hwang, Yeonkyoung Choi, Seungryong Cho
Proceedings Volume 12304, 7th International Conference on Image Formation in X-Ray Computed Tomography; 1230426 (2022) https://doi.org/10.1117/12.2646950
Event: Seventh International Conference on Image Formation in X-Ray Computed Tomography (ICIFXCT 2022), 2022, Baltimore, United States
Abstract
In CT imaging, artifacts caused by high-density objects often degrade image quality with streaks and information loss. In recent years, machine learning has proven a powerful tool for reducing metal artifacts. In this work, we propose a novel CNN-based metal artifact reduction (MAR) method that requires no metal segmentation. The approach removes the sensitive metal segmentation step to improve robustness and tackles beam hardening directly in the sinogram domain. We trained the network on sinogram pairs: sinograms that include metal objects and corresponding sinograms in which the metal is replaced by virtual non-metal (VNM) objects. A VNM object is designed to be less dense than metal but denser than soft tissue. The novelty of this method lies in sinogram-to-sinogram training without metal segmentation: replacing the metal object with a virtual non-metal object in the sinogram reduces beam hardening and compensates for the information loss.

I. INTRODUCTION

High-density materials degrade CT image quality through beam hardening, photon starvation, and scatter. Metal artifact is the overarching term for the resulting artifacts, which appear as streaks, loss of image information, structural deformation, and more [1-2]. For decades, researchers have devised a multitude of methods to tackle metal artifacts. The most common analytic MAR methods are based on sinogram interpolation, such as linear interpolation MAR and normalized MAR (NMAR). These methods replace the metal trace in the sinogram with neighboring information, and the essential first step is metal segmentation [3-5]. However, in cases with severe beam hardening and photon starvation, segmenting the metal accurately is a great challenge.
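For context, the sinogram-interpolation step these methods share can be sketched as follows. This is a minimal linear-interpolation MAR, assuming the metal trace mask is already known (in practice it is obtained by segmenting the metal in the image and forward-projecting it):

```python
import numpy as np

def li_mar(sinogram, metal_trace):
    """Replace the metal trace in each projection view by linear
    interpolation from the neighboring detector bins (classic LI-MAR).

    sinogram:     2D array, shape (views, detector_bins)
    metal_trace:  boolean mask of the same shape, True where the ray
                  passed through metal
    """
    corrected = sinogram.copy()
    bins = np.arange(sinogram.shape[1])
    for v in range(sinogram.shape[0]):
        trace = metal_trace[v]
        if trace.any() and not trace.all():
            # Fill the masked bins from the unmasked neighbors.
            corrected[v, trace] = np.interp(bins[trace], bins[~trace],
                                            sinogram[v, ~trace])
    return corrected
```

Because the interpolated values carry no information about the tissue that the metal rays also traversed, this step is exactly where segmentation errors and tissue loss enter the conventional pipeline.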

CNNs excel in medical imaging at segmentation and at solving complex problems by learning patterns and features [6-8]. In recent years, methods that incorporate a CNN to aid interpolation-based MAR have been proposed. For instance, the CNN-MAR method of Zhang et al. trains a network to synthesize an artifact-reduced image from the uncorrected, beam-hardening-corrected (BHC), and linearly interpolated images [9]; the CNN output is then used as the prior image in the NMAR process. DuDoNet, proposed by Lin et al., uses a CNN both to enhance the sinogram while retaining geometric consistency and to improve the reconstructed image based on the linearly interpolated images [10]. While both methods improve image quality, for metal artifacts of different shapes and sizes the resulting image remains sensitive to metal segmentation and to soft-tissue smoothing and deformation.

Since metal segmentation and prior-image generation are sensitive steps, more approaches that do not require metal segmentation have been published. However, most use reconstructed images for training. As a result, metal artifacts that are not fully removed by the MAR preprocessing applied to the labeled training data remain in the test results; the quality of image-domain training is thus limited by the MAR preprocessing applied to the training dataset. To overcome this limitation, Park et al. trained a U-Net on pairs of metal-corrupted and metal-artifact-corrected sinograms, with the metal mask of the hip prosthesis replaced by air [11].

In this work, we propose a sinogram-to-sinogram CNN that tackles the effect of beam hardening without the need for metal tracing. The common tactic of replacing the metal trace by interpolating neighboring pixels, or of replacing the metal object with air, is undesirable because the metal trace contains not only the metal but also soft tissue. Instead, we reduce metal artifacts by tackling the beam-hardening effect directly in the sinogram domain: the method replaces the metal regions of the sinogram with objects of significantly lower density than metal. When reconstructed, the resulting sinogram restores soft-tissue information in the shading-artifact region and reduces streak artifacts. The feasibility of the proposed method is tested in a simulation study, and the image quality of the result is evaluated qualitatively and quantitatively against the conventional NMAR algorithm.

II. Materials and Methods

2.1 Dataset preparation

Attaining a large set of projection data of clinical volumes proved difficult, so this study was performed with a simulated dataset. To obtain more realistic simulation data, a material map was generated by applying thresholds to human-body CT images without metal implants obtained from the NIH Clinical Center [12]. The thresholds divide the object into bone, soft tissue, and fat. Elliptical objects of random size, number, and position were then added to the material map, with positions limited to the body. Each elliptical object was assigned a material: titanium for the metal case and carbon for the VNM substitute. Finally, projection data were generated using a polyenergetic forward-projection simulator with the parameters shown in Table I. An example of the material map and the simulator results is shown in Fig. 1.
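The dataset-generation step described above can be sketched as follows. The HU thresholds, label codes, and implant-size range are illustrative assumptions, not values stated in the paper:

```python
import numpy as np

def make_material_map(ct_hu, rng, n_implants=(1, 4)):
    """Threshold a CT image (in HU) into fat / soft-tissue / bone labels,
    then stamp random elliptical implants inside the body."""
    mats = np.zeros(ct_hu.shape, dtype=np.uint8)   # 0 = air
    mats[(ct_hu > -200) & (ct_hu <= -30)] = 1      # fat
    mats[(ct_hu > -30) & (ct_hu <= 150)] = 2       # soft tissue
    mats[ct_hu > 150] = 3                          # bone
    body = mats > 0
    yy, xx = np.indices(ct_hu.shape)
    for _ in range(rng.integers(*n_implants)):
        cy, cx = rng.choice(np.argwhere(body))     # random center inside the body
        a, b = rng.integers(3, 12, size=2)         # random semi-axes (pixels)
        ellipse = ((yy - cy) / a) ** 2 + ((xx - cx) / b) ** 2 <= 1.0
        mats[ellipse & body] = 4                   # metal (titanium)
    return mats

def to_vnm(mats, metal_label=4, vnm_label=5):
    """Produce the paired VNM material map: same implant geometry,
    but the objects are carbon instead of titanium."""
    out = mats.copy()
    out[out == metal_label] = vnm_label
    return out
```

The key property the pair construction preserves is that the metal and VNM maps are identical everywhere except in the implant material, so the network learns only the material substitution, not any geometric change.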

Figure 1.

a) CT image, b) material map, c) material map with metal, d) sinogram simulated with metal, e) sinogram simulated with VNM material


TABLE I

Fan-beam Parameters for Simulated Data

Parameters                       Values
Views per rotation               720
Detector pixel number            512 x 1
Detector pixel pitch             0.8 mm
Distance of source to detector   1300 mm
Distance of source to object     900 mm
X-ray source                     120 kVp

2.2 U-Net Training

The U-Net architecture implemented for this experiment is similar to the original U-Net published by Ronneberger et al. and is shown in Fig. 2 [13]. The output is adjusted to a single channel, since the goal of this network is not to segment but to produce a new sinogram image. The training inputs are the metal-inclusive sinograms generated by the simulator, and the labels are the corresponding VNM-inclusive sinograms. Of the 2351 data pairs in total, 80% were used for training and 10% each for validation and testing. To increase the variety of the dataset, data augmentation was performed by shifting the sinograms randomly each epoch. The batch size was 4. The network was optimized with the ADAM optimizer and an MSE loss function, and was trained for 400 epochs with a learning rate of 10⁻⁴, reaching convergence.
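The per-epoch shift augmentation might look like the sketch below. The choice of a circular shift along the view axis (equivalent to rotating the object, which keeps the pair physically consistent) is an assumption, since the paper does not specify the shift direction:

```python
import numpy as np

def shift_augment(metal_sino, vnm_sino, rng):
    """Apply the same random circular shift to an input/label sinogram
    pair, drawn fresh each epoch.

    metal_sino, vnm_sino: arrays of shape (views, detector_bins)
    rng:                  numpy.random.Generator
    """
    shift = int(rng.integers(0, metal_sino.shape[0]))
    # Rolling along the view axis corresponds to rotating the object,
    # so the metal/VNM correspondence is preserved pixel for pixel.
    return (np.roll(metal_sino, shift, axis=0),
            np.roll(vnm_sino, shift, axis=0))
```

The essential point is that the input and label receive the identical shift; shifting them independently would break the supervised pairing.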

Figure 2.

U-Net structure with the input as a metal-inclusive sinogram and the output image as the VNM-inclusive sinogram pair.


III. Results

For testing, 235 sinogram pairs were synthesized with the simulator in the same manner as the training data. The metal-inclusive sinograms were fed into the trained network, and the resulting output sinograms were reconstructed with an FBP algorithm. The simulated VNM-inclusive sinograms served as the reference. Fig. 3 shows an example of the test cases. A metal mask image, extracted by thresholding during NMAR, was added back to the reconstructed images of the proposed method so that the results could be compared qualitatively.
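Pasting the thresholded metal back into a corrected reconstruction for display can be sketched as below; the function name and the default threshold are hypothetical, and the paper reuses the mask obtained during NMAR rather than recomputing it:

```python
import numpy as np

def reinsert_metal(corrected, uncorrected_fbp, threshold=0.1):
    """Threshold the metal from the uncorrected FBP image (high
    attenuation values) and paste those pixels back into the
    MAR-corrected reconstruction for side-by-side comparison."""
    metal = uncorrected_fbp > threshold
    out = corrected.copy()
    out[metal] = uncorrected_fbp[metal]
    return out
```

This step is purely cosmetic: it restores the metal for display without altering the corrected tissue values around it.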

Figure 3.

Images reconstructed from the simulator sinograms (Reference and FBP), NMAR with and without metal, and the proposed method with and without metal.


Root-mean-square error (RMSE) and structural similarity index (SSIM) values in five ROIs were calculated to assess the effectiveness of the MAR algorithms not only in removing metal artifacts but also in retaining anatomical structures [14]. The ROIs were selected to assess the performance of the NMAR algorithm and the proposed method on different types of metal artifacts. ROI 1 evaluates the reduction of dark streaks and the recovery of soft tissue. ROI 2 focuses on the bone structure near a large metal object. ROI 3 shows no pronounced metal artifacts, but streak artifacts degrade the contrast and structure of the soft tissue there. ROI 4 has streak artifacts on boneless soft tissue. ROI 5 includes all pixels except the metal mask. The ROIs were selected so that the metal replacements are excluded but their neighboring pixels are included. Table II shows the quantitative results for the ROIs marked in Fig. 4.
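The ROI metrics can be computed as sketched below. Note this uses a single-window (global) SSIM for brevity; standard implementations of Wang et al.'s index [14] use a sliding Gaussian window, so the values would differ slightly:

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two arrays."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def ssim_global(a, b, data_range):
    """Single-window SSIM with the usual constants K1=0.01, K2=0.03."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(((2 * mu_a * mu_b + c1) * (2 * cov + c2)) /
                 ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2)))

def score_roi(reference, result, roi):
    """roi is a boolean mask, e.g. an ROI box minus the metal pixels."""
    a, b = reference[roi], result[roi]
    return rmse(a, b), ssim_global(a, b, data_range=np.ptp(reference))
```

Excluding the metal pixels from the mask, as described above, keeps the scores from being dominated by the (intentionally different) implant values.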

Figure 4.

The four ROIs are marked on the reference image. Each odd row shows the corresponding ROI. The window levels of the ROIs are [0.001, 0.035], [0.01, 0.025], [0.01, 0.025], and [0.01, 0.015]. Each even row shows the difference image from the reference, with window level [0, 0.005]. The window levels are adjusted so that the artifacts and their reduction can be observed easily. The columns show the enlarged ROI images in the order: reference, FBP, NMAR, and proposed method.


TABLE II

QUANTITATIVE ANALYSIS OF THE ROI

ROI   Method     RMSE        SSIM
1     FBP        2.514E-3    0.8289
      NMAR       1.747E-3    0.8655
      Proposed   5.058E-4    0.9521
2     FBP        4.574E-3    0.6053
      NMAR       3.093E-3    0.6529
      Proposed   6.471E-4    0.8612
3     FBP        1.230E-3    0.5928
      NMAR       8.113E-4    0.7694
      Proposed   3.600E-4    0.9353
4     FBP        1.532E-3    0.4031
      NMAR       1.476E-3    0.5753
      Proposed   2.265E-4    0.8705
5     FBP        2.6283E-6   0.6368
      NMAR       2.9660E-6   0.7062
      Proposed   4.7452E-7   0.9556

IV. Discussion

The differences between the methods vary significantly with the ROI. The proposed method excels at recovering soft-tissue and bone information even under harsh beam hardening, since no prior metal mask is required and the sinogram is substituted not only within the metal mask region but throughout. This can be observed in ROI 1 and ROI 2. In ROI 1, the dark region marked with the red arrow neighbors a large metal object. Because NMAR tries to recover the lost information with the prior image, the bone information is not fully recovered; this may be because the dark region was classified as air instead of tissue in the prior image due to its low pixel values. The proposed method requires no well-defined prior image, and the beam-hardening effects are corrected in the sinogram domain. As a result, the marked region is recovered more faithfully.

Similarly, in ROI 2, the proposed method successfully retrieves the bone and neighboring soft-tissue information that are almost lost in the FBP result (red arrow). Its success in retrieving soft-tissue information even for pixels with values close to air invites speculation about the simplicity of the soft-tissue model and possible overfitting. However, since the sinograms for the training and test sets were simulated from material maps of different patient sets with varying anatomical details, this is unlikely. It can be investigated further by testing the same network on clinical data with more complex soft-tissue and bone structures.

The SSIM values of FBP and NMAR are similar for ROI 2 and ROI 4, while that of the proposed method is significantly better. These two ROIs are contaminated with dark and white streaks and are prone to producing misleading NMAR priors. In ROI 4, streaks remain prevalent in the soft tissue even after NMAR; furthermore, the beam-hardening artifacts at the perimeter of the metal objects were not corrected properly, and residual streaks are observed. The proposed method not only reduces the streaks better but also improves the visibility and detail of the soft-tissue structure.

As for ROI 3, the streaks were reduced well by both NMAR and the proposed method. Yet the NMAR reconstruction lacks contrast and suffers from soft-tissue deformation in multiple areas; an example of structural degradation is marked with a red arrow. This is reflected quantitatively in the lower SSIM value for NMAR compared to the proposed method: 0.7694 versus 0.9353.

Finally, to compare the overall effectiveness of the MAR algorithms, ROI 5 was evaluated. The RMSE values of all three results are negligible, since a large part of the RMSE is computed over air regions, which are similar in all cases. The overall performance of MAR is, however, clearly reflected in the improved SSIM value. Overall, the proposed method succeeds in reducing metal artifacts in the sinogram domain, and the quality of the reconstructed image improves as the beam-hardening and streak artifacts are corrected. Notably, unlike methods that require metal segmentation or depend on the quality of prior images, the proposed method works without metal segmentation. Furthermore, because the proposed method replaces the metal-inclusive sinogram with a virtual non-metal sinogram rather than with air, the discontinuity of the sinogram is reduced and the streak artifacts are further improved.

V. Conclusion

Overall, the proposed method outperforms FBP and NMAR in reducing metal artifacts and in appropriately reconstructing soft-tissue and bone information. Through this simulation study, we have demonstrated the feasibility of the proposed method, and the results of this preliminary study show promise for further exploration. We are interested in testing the method on datasets of greater complexity, which could be achieved by increasing the number of materials and types of metals used to create the sinogram pairs. Additionally, the metal objects used in this feasibility study were simply elliptical; objects with sharp corners and complex shapes can be added to simulate screws and needles. Finally, noise and photon-starvation effects can be added to the data simulator to test the algorithm under exacerbated metal-artifact conditions.

References

[1] B. De Man, J. Nuyts, P. Dupont, G. Marchal, and P. Suetens, “Metal streak artifacts in x-ray computed tomography: A simulation study,” IEEE Transactions on Nuclear Science, 46 (3), 691-696 (1999). https://doi.org/10.1109/TNS.23

[2] M. L. Kataoka, M. G. Hochman, E. K. Rodriguez, P.-J. P. Lin, S. Kubo, and V. D. Raptopolous, “A review of factors that affect artifact from metallic hardware on multi-row detector computed tomography,” Curr. Probl. Diagn. Radiol., 39 (4), 125-136 (2010). https://doi.org/10.1067/j.cpradiol.2009.05.002

[3] E. Meyer, F. Bergner, R. Raupach, T. Flohr, and M. Kachelrieß, “Normalized metal artifact reduction (NMAR) in computed tomography,” Medical Physics, 37 (10), 5482-5493 (2010). https://doi.org/10.1118/1.3484090

[4] V. Ruth, D. Kolditz, C. Steiding, and W. A. Kalender, “Metal artifact reduction in X-ray computed tomography using computer-aided design data of implants as prior information,” Invest. Radiol., 52 (6), 349-359 (2017). https://doi.org/10.1097/RLI.0000000000000345

[5] J. W. Stayman, Y. Otake, J. L. Prince, A. J. Khanna, and J. H. Siewerdsen, “Model-based tomographic reconstruction of objects containing known components,” IEEE Trans. Med. Imaging, 31 (10), 1837-1848 (2012). https://doi.org/10.1109/TMI.2012.2199763

[6] L. Zhu, Y. Han, L. Li, X. Xi, M. Zhu, and B. Yan, “Metal Artifact Reduction for X-Ray Computed Tomography Using U-Net in Image Domain,” IEEE Access, 7, 98743-98754 (2019). https://doi.org/10.1109/Access.6287639

[7] J. Lee, J. Gu, and J. C. Ye, “Unsupervised CT metal artifact learning using attention-guided β-CycleGAN,” IEEE Trans. Med. Imag., 40 (12), 3932-3944 (2021). https://doi.org/10.1109/TMI.2021.3101363

[8] D. F. Bauer, C. Ulrich, T. Russ, A.-K. Golla, L. R. Schad, and F. G. Zöllner, “End-to-End Deep Learning CT Image Reconstruction for Metal Artifact Reduction,” Applied Sciences, 12 (1), 404 (2021). https://doi.org/10.3390/app12010404

[9] Y. Zhang and H. Yu, “Convolutional Neural Network Based Metal Artifact Reduction in X-Ray Computed Tomography,” IEEE Transactions on Medical Imaging, 37 (6), 1370-1381 (2018). https://doi.org/10.1109/TMI.2018.2823083

[10] W. A. Lin, H. Liao, C. Peng, X. Sun, J. Zhang, J. Luo, R. Chellappa, and S. K. Zhou, “DuDoNet: Dual domain network for CT metal artifact reduction,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10512-10521 (2019).

[11] H. S. Park, S. M. Lee, H. P. Kim, and J. K. Seo, “CT sinogram-consistency learning for metal-induced beam hardening correction,” (2017). https://arxiv.org/abs/1708.00607

[12] K. Yan, X. Wang, L. Lu, and R. M. Summers, “DeepLesion: Automated Mining of Large-Scale Lesion Annotations and Universal Lesion Detection with Deep Learning,” Journal of Medical Imaging, 5 (3), 036501 (2018). https://doi.org/10.1117/1.JMI.5.3.036501

[13] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” Medical Image Computing and Computer-Assisted Interventions (MICCAI), 234-241 (2015).

[14] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, 13 (4), 600-612 (2004).
© (2022) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
KEYWORDS: Metals, Tissues, Image segmentation, Image quality, Bone, Nonmetals, Computed tomography