Dual-energy CT (DECT) has the ability to characterize different materials and quantify the densities or proportions of different contrast agents. However, basis-image decomposition is an ill-posed problem, and traditional model-based and image-domain direct-inversion methods often suffer from serious degradation of the signal-to-noise ratio (SNR). To address this issue, we propose a new strategy that combines model-based and learning-based methods to suppress noise in the material images after direct inversion, and design a semi-supervised framework, the Adaptive Semi-supervised Learning Material Estimation Network (ASLME-Net), to balance structural detail preservation and noise suppression when only a small amount of paired data is available in the training stage. Specifically, ASLME-Net contains two sub-networks, i.e., a supervised sub-network and an unsupervised sub-network. The supervised sub-network captures key features learned from the labeled data, and the unsupervised sub-network adaptively learns the feature distribution transferred from the supervised sub-network via the Kullback-Leibler (KL) divergence. Experiments show that the presented method can suppress noise propagation during decomposition and yield qualitatively and quantitatively accurate material decomposition results.
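The KL-divergence feature transfer between the two sub-networks can be sketched as follows. This is a minimal NumPy illustration with hypothetical feature values, not the ASLME-Net implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) = sum_i p_i * log(p_i / q_i)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Hypothetical flattened feature activations from the two sub-networks.
sup_feat = softmax(np.array([2.0, 1.0, 0.5, 0.1]))    # supervised branch
unsup_feat = softmax(np.array([1.8, 1.1, 0.4, 0.2]))  # unsupervised branch

# Training would minimize this term to align the two feature distributions.
kl_loss = kl_divergence(sup_feat, unsup_feat)
```

Minimizing such a term pulls the unsupervised branch's feature distribution toward the supervised one without requiring labels for its inputs.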
Inspired by deep learning techniques, data-driven methods have been developed to improve image quality and material decomposition accuracy in dual-energy computed tomography (DECT) imaging. Most of these data-driven DECT imaging methods exploit the image priors within a large amount of training data to learn, in a supervised manner, the mapping from noisy DECT images to the desired high-quality material images. However, these supervised DECT imaging methods estimate the material images directly from the network without incorporating the material decomposition mechanism, and they fail to exploit unlabeled noisy DECT images to further improve performance. In this work, to address these issues, we propose a novel Weakly-supervised learning Multi-material Decomposition Network with a self-attention mechanism (WMD-Net) to estimate multiple material images from DECT images accurately and effectively by combining labeled and unlabeled DECT images. Specifically, in the proposed WMD-Net, the labeled DECT images are used to estimate the three material images in a supervised sub-network, and the unlabeled DECT images are used to construct an unsupervised sub-network that benefits from the material decomposition mechanism. Finally, the two sub-networks are combined into the proposed WMD-Net. The proposed WMD-Net method is validated and evaluated on synthesized clinical data, and the experimental results demonstrate that it estimates more accurate material images than the other competing methods in terms of noise-induced artifact reduction and structural detail preservation.
Sparse-view computed tomographic (CT) image reconstruction aims to shorten scanning time, reduce radiation dose, and yield high-quality CT images simultaneously. Some researchers have developed deep learning (DL) based models for sparse-view CT reconstruction on circular scanning trajectories. However, cone beam CT (CBCT) image reconstruction based on a circular trajectory is theoretically an ill-posed problem and cannot accurately reconstruct 3D CT images, whereas CBCT reconstruction along a helical trajectory can, in principle, be exact because it satisfies the Tuy condition. Therefore, we propose a dual-domain helical projection-fidelity network (DHPF-Net) for sparse-view helical CT (SHCT) reconstruction. The DHPF-Net mainly consists of three modules, namely an artifact reduction network (ARN), a helical projection fidelity (HPF) module, and a union restoration network (URN). Specifically, the ARN reconstructs high-quality CT images by suppressing the noise and artifacts of sparse-view images. The HPF module uses the measured sparse-view projection to replace the projection values at the corresponding positions in the projection of the ARN output, which ensures data fidelity of the final predicted projection and preserves the sharpness of the reconstructed CT images. The URN further improves the reconstruction performance by combining the sparse-view images, the ARN images, and the HPF images. In addition, to extract the structural information of adjacent images, leverage the structural self-similarity, and avoid expensive computational cost, we arrange the 3D CT volume along the channel direction. The experimental results on a public dataset demonstrate that the proposed method achieves superior performance for sparse-view helical CT image reconstruction.
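The HPF module's replacement step can be sketched as below; the sizes and the sampling pattern (every fourth view measured) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 360 full views, every 4th view actually measured.
full_views, n_bins = 360, 128
pred_proj = rng.random((full_views, n_bins))       # projection of the ARN output
measured = rng.random((full_views // 4, n_bins))   # measured sparse-view projection

mask = np.zeros(full_views, dtype=bool)
mask[::4] = True                                   # positions of measured views

# Helical projection fidelity: overwrite predicted values at measured positions.
fused = pred_proj.copy()
fused[mask] = measured
```

Only the unmeasured views keep the network's prediction, so the measured data are passed through exactly (data fidelity).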
Deep learning (DL)-based algorithms have shown promising performance in low-dose computed tomography (LDCT) and are becoming mainstream methods. These DL-based methods focus on different aspects of CT image restoration, such as noise suppression, artifact removal, and structure preservation. Therefore, in this paper, we propose a Bayesian ensemble learning network (BENet) that fuses several representative denoising algorithms from a denoiser pool to improve LDCT imaging performance. Specifically, we first select four advanced CT-image and natural-image restoration networks, namely REDCNN, FBPConvNet, HINet, and Restormer, to form the denoiser pool, which integrates the denoising capabilities of different networks. The denoiser pool is pre-trained to obtain the denoising result of each denoiser. Then, we present a Bayesian neural network to predict the weight maps and variances of the denoiser pool by modeling the aleatoric and epistemic uncertainties of DL. Finally, the predicted pixel-wise weight maps are used to fuse the denoising results into the final reconstruction. Qualitative and quantitative analyses show that the proposed BENet can effectively boost the denoising performance and robustness of LDCT image reconstruction.
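The pixel-wise weighted fusion of the denoiser pool can be sketched as follows, assuming softmax-normalized weight maps; the shapes and values are hypothetical stand-ins for the network's predictions:

```python
import numpy as np

def fuse(denoised_stack, weight_logits):
    """Pixel-wise softmax fusion of K denoiser outputs.

    denoised_stack: (K, H, W) outputs of the denoiser pool
    weight_logits:  (K, H, W) unnormalized weight maps from the weighting network
    """
    w = np.exp(weight_logits - weight_logits.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)          # softmax over the K denoisers
    return (w * denoised_stack).sum(axis=0)

rng = np.random.default_rng(0)
K, H, W = 4, 8, 8                              # four denoisers, toy image size
stack = rng.random((K, H, W))
logits = rng.standard_normal((K, H, W))
fused = fuse(stack, logits)
```

Because the weights are a per-pixel convex combination, the fused image always lies within the range spanned by the individual denoiser outputs at each pixel.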
The presence of metal often heavily degrades computed tomography (CT) image quality and inevitably affects subsequent clinical diagnosis and therapy. With the rapid development of deep learning (DL), many DL-based methods have been proposed for the metal artifact reduction (MAR) task in CT imaging, including image-domain, projection-domain and dual-domain MAR methods. Recently, the view-by-view backprojection tensor (VVBP-Tensor) domain has been developed as an intermediary between the image domain and the projection domain, and the VVBP-Tensor also has many useful mathematical properties, such as low rank and structural self-similarity. Therefore, we present a VVBP-Tensor based deep neural network (DNN) framework for better MAR performance in CT imaging. Specifically, the original projection is separately pre-processed by a linear-interpolation completion algorithm and a clipping algorithm to quickly remove most metal artifacts while preserving structural information. Then, the clipped projection is restored by a sinogram recovery network to smooth the projection values inside and outside the metal trajectory. In addition, the two pre-processed projections are separately transformed into two tensors by filtering, backprojecting and sorting, and the two sorted tensors are simultaneously fed into the MAR reconstruction network to further improve the reconstructed CT image quality. The proposed method has good interpretability, since the MAR reconstruction network can be considered a weighted CT image reconstruction process with learnable adaptive weights along the scan-view direction. The superior MAR performance of the presented method is demonstrated on a simulated dataset in terms of qualitative and quantitative measurements.
KEYWORDS: Digital breast tomosynthesis, Breast, Image restoration, Reconstruction algorithms, Tunable filters, Computer simulations, Deep learning, 3D modeling, 3D image reconstruction, Monte Carlo methods
High-attenuation artifacts in digital breast tomosynthesis (DBT) imaging can obscure lesions in the breast, which may increase the false-negative rate. Many image-domain and projection-domain methods have been developed to reduce high-attenuation artifacts. However, these artifacts have not been effectively removed, since the existing methods do not directly address the inherent DBT imaging constraint of sparse-view, low-dose scanning over a limited angular range. Recently, the view-by-view backprojection tensor (VVBP-Tensor) domain has been presented as an intermediary between the projection domain and the image domain, which may benefit DBT image reconstruction. Moreover, high-attenuation artifacts are related to the imaging geometry, so it is reasonable to hypothesize that the diffusion patterns of artifacts in the VVBP-Tensor domain are similar for the same DBT imaging system. Therefore, we propose a VVBP-Tensor based deep learning framework for high-attenuation artifact reduction in DBT imaging (abbreviated as VTDL-DBT), which learns the artifact diffusion pattern in the VVBP-Tensor domain and removes these artifacts in a data-driven manner. The proposed method can be considered an implicit weighted filtered backprojection (wFBP) algorithm that replaces the explicit weighted summation with a learnable deep neural network. In addition, a pipeline for generating paired training data is presented for the DBT high-attenuation artifact removal task, which utilizes digital anthropomorphic breast phantoms and a Monte Carlo simulation algorithm. Both qualitative and quantitative results demonstrate that the presented VTDL-DBT method achieves superior DBT imaging performance on the simulated DBT dataset in terms of high-attenuation artifact reduction and structural texture preservation.
Computed tomography (CT) is a widely used medical imaging modality capable of displaying the fine details of the human body. In clinics, CT images need to highlight different desired details or structures with different filter kernels and different display windows. To achieve this goal, in this work we propose a deep learning based "All-in-One" (DAIO) combined visualization strategy for high-performance disease screening. Specifically, the presented DAIO method takes both kernel conversion and display-window mapping into consideration in the deep learning network. First, the images reconstructed with sharp and smooth kernels, together with the lung mask, are collected for network training. Then, each structure is adaptively transferred to the appropriate kernel style through local kernel conversion, giving the image higher diagnostic value. Finally, the dynamic range of the image is compressed to a limited number of gray levels by a mapping operator based on the traditional window settings. Moreover, to promote the enhancement of structural details, we introduce a weighted mean-filtering loss function. In the experiments, nine of the ten full-dose patient cases from the Mayo Clinic dataset are used to train the presented DAIO method, and the remaining patient case is used for testing. Results show that the proposed DAIO method can merge multiple kernels and multiple window settings into a single image for disease screening.
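The traditional window-setting operator referenced above is a clip-and-rescale mapping from Hounsfield units to display gray levels. A minimal sketch, with common illustrative window values rather than the DAIO-learned settings:

```python
import numpy as np

def apply_window(hu, level, width, n_gray=256):
    """Map HU values to display gray levels for a given window (level/width)."""
    lo, hi = level - width / 2.0, level + width / 2.0
    clipped = np.clip(hu, lo, hi)                      # saturate outside the window
    return np.round((clipped - lo) / (hi - lo) * (n_gray - 1))

# A toy HU line: air, lung tissue, soft tissue, bone.
hu = np.array([-1000.0, -700.0, 40.0, 700.0])

lung = apply_window(hu, level=-600, width=1500)        # typical lung window
soft = apply_window(hu, level=40, width=400)           # typical soft-tissue window
```

Different windows map the same HU data to very different gray levels, which is why a single "all-in-one" visualization must compress several such mappings into one output.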
KEYWORDS: Bone, Data modeling, Computed tomography, Signal to noise ratio, Signal attenuation, Model-based design, Image processing, Signal processing, Performance modeling, Medical imaging
Dual-energy computed tomography (DECT) imaging plays an important role in clinical diagnosis due to its material decomposition capability. However, in low-dose DECT imaging the decomposition is ill-conditioned, and the material images directly decomposed from the DECT images suffer from severe noise-induced artifacts, leading to low quality and accuracy. In this paper, we propose a self-supervised Nonlocal Spectral Similarity-induced Decomposition Network (NSSD-Net) to produce decomposed material images with high quality and accuracy in low-dose DECT imaging. Specifically, we first build a model-driven iterative decomposition model and optimize the objective function by the iterative shrinkage-thresholding algorithm (ISTA) combined with a convolutional neural network. Considering the intrinsic characteristics underlying DECT images (i.e., structural similarity and spectral correlation), which can serve as prior information to improve the accuracy of the decomposed material images, we construct a nonlocal spectral similarity-based cost function from this prior information and incorporate it into the iterative decomposition network to guarantee stability. The proposed NSSD-Net method was validated and evaluated on real clinical data. Experimental results showed that the presented NSSD-Net method outperforms the other competing methods in terms of noise-induced artifact reduction and decomposition accuracy.
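The ISTA backbone of such an iterative decomposition can be sketched on a toy linear problem; the operator `A` and the sparse target below are illustrative stand-ins for the decomposition model, not the NSSD-Net formulation:

```python
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, y, lam=0.01, n_iter=300):
    """Solve min_x 0.5 * ||Ax - y||^2 + lam * ||x||_1 by
    iterative shrinkage-thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # Gradient step on the data term, then shrinkage on the L1 term.
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))                 # toy measurement operator
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]                      # sparse ground truth
y = A @ x_true
x_hat = ista(A, y)
```

Unrolling a fixed number of such iterations, with learned components replacing the hand-set pieces, is the standard way to turn this scheme into a trainable network.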
Deep learning (DL) is being extensively investigated for low-dose computed tomography (CT). The success of DL lies in the availability of big data for learning the non-linear mapping from low-dose CT to target images with convolutional neural networks. However, due to the commercial confidentiality of CT vendors, very little raw projection data is publicly available for simulating paired training data, which greatly limits the generalization and performance of such networks. In this paper, we propose a dual-task learning network (DTNet) for simultaneous low-dose CT simulation and denoising at arbitrary dose levels. The DTNet integrates low-dose CT simulation and denoising into a unified optimization framework by learning the joint distribution of low-dose and normal-dose CT data. Specifically, in the simulation task, we train the simulation network to learn a mapping from normal dose to low dose at different levels, where the dose level can be continuously controlled by a noise factor. In the denoising task, we propose a multi-level low-dose CT learning strategy to train the denoising network, which learns a many-to-one mapping. The experimental results demonstrate the effectiveness of our proposed method in low-dose CT simulation and denoising at arbitrary dose levels.
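A continuously dose-controlled simulation can be sketched by injecting Poisson noise in the transmission domain; the photon counts and projection values below are hypothetical, and this physics-based sketch stands in for, rather than reproduces, the learned DTNet simulator:

```python
import numpy as np

def simulate_low_dose(proj, full_dose_I0=1e5, dose_factor=0.25, rng=None):
    """Simulate a low-dose projection from a noise-free/normal-dose one.

    Poisson noise is injected in the transmission domain; `dose_factor`
    continuously controls the simulated dose level (1.0 = normal dose).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    I0 = full_dose_I0 * dose_factor               # fewer incident photons
    counts = rng.poisson(I0 * np.exp(-proj))      # noisy transmitted photon counts
    counts = np.maximum(counts, 1)                # avoid log(0)
    return -np.log(counts / I0)                   # back to line integrals

proj = np.full((64, 64), 2.0)                     # toy noise-free projection
low = simulate_low_dose(proj, dose_factor=0.25)
lower = simulate_low_dose(proj, dose_factor=0.05)
```

Shrinking the dose factor shrinks the photon budget, so the noise level rises continuously, which is the behavior the noise factor in the simulation task is meant to control.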
Sparse-view sampling is one of the effective ways to reduce the radiation dose in CT imaging. However, the artifacts and noise in sparse-view filtered backprojection reconstructed CT images are severe and must be removed effectively to maintain diagnostic accuracy. In this paper, we propose a novel sparse-view CT reconstruction framework that integrates the projection-to-image and image-to-projection mappings to build a dual-domain closed-loop learning network, termed the closed-loop learning reconstruction network (CLRecon) for simplicity. Specifically, the primal mapping (i.e., projection-to-image) contains a projection-domain network, a backprojection module, and an image-domain network. The dual mapping (i.e., image-to-projection) contains an image-domain network and a forward-projection module. All modules are trained simultaneously during the training stage, and only the primal mapping is used in the inference stage. Notably, neither the inference time nor the hardware requirements increase compared with traditional hybrid-domain networks. Experiments on low-dose CT data demonstrate that the proposed CLRecon model can obtain promising reconstruction results in terms of edge preservation, texture recovery, and reconstruction accuracy in the sparse-view CT reconstruction task.
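The closed-loop idea, penalizing both the reconstruction error of the primal mapping and the re-projection error of the dual mapping, can be sketched with toy linear operators standing in for the networks and the projector:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 20))     # toy forward-projection operator
A_pinv = np.linalg.pinv(A)            # toy reconstruction (primal mapping)

x = rng.standard_normal(20)           # ground-truth image (flattened)
p = A @ x                             # noiseless measured projection

# Primal mapping: projection -> image; dual mapping: image -> projection.
x_hat = A_pinv @ p
p_hat = A @ x_hat

# A closed-loop training objective penalizes errors in both domains.
loss = np.sum((x_hat - x) ** 2) + np.sum((p_hat - p) ** 2)
```

In the ideal noiseless linear case the loop closes exactly; with learned nonlinear mappings, the re-projection term acts as an extra consistency constraint that is dropped at inference time.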
Dynamic imaging (e.g., computed tomography (CT) perfusion, dynamic CT angiography, dynamic positron emission tomography, and four-dimensional CT) is widely used in the clinic. The multiple-scan mechanism of dynamic imaging greatly increases the radiation dose and prolongs the acquisition time. To deal with these problems, low-mAs or sparse-view protocols are usually adopted, which lead to noisy or incomplete data for each frame. To obtain high-quality images from the corrupted data, a popular strategy is to incorporate the composite image reconstructed from the full dataset into the iterative reconstruction procedure. Previous studies have tried to enforce each frame to approach the composite image in each iteration, which, however, introduces mixed temporal information into each frame. In this paper, we propose an average-consistency (AC) model for dynamic CT image reconstruction. The core idea of AC is to enforce the average of all frames to approach the composite image in each iteration, which preserves image edges and noise characteristics while avoiding the intrusion of mixed temporal information. Experiments on a dynamic phantom and a patient case for CT perfusion imaging show that the proposed method obtains the best qualitative and quantitative results. We conclude that the AC model is a general framework and a superior way of using the composite image for dynamic CT reconstruction.
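The key difference from per-frame enforcement can be sketched in one update step; the frames and composite below are random stand-ins, and the update is a simplified illustration of the average-consistency idea rather than the paper's full iterative algorithm:

```python
import numpy as np

def ac_correction(frames, composite, step=1.0):
    """Average-consistency update: pull the *average* of all frames toward the
    composite image, distributing the same correction to every frame."""
    avg = frames.mean(axis=0)
    correction = step * (composite - avg)   # identical correction for each frame
    return frames + correction              # inter-frame differences preserved

rng = np.random.default_rng(0)
frames = rng.standard_normal((5, 16, 16))   # five dynamic frames
composite = rng.standard_normal((16, 16))   # full-data composite image

updated = ac_correction(frames, composite)
```

Because every frame receives the same correction, the frame average moves to the composite while the frame-to-frame (temporal) differences are untouched; enforcing each frame individually toward the composite would instead erase them.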
In this study, we present a novel contrast-medium anisotropy-aware TTV (Cute-TTV) model to reflect the intrinsic sparsity configurations of a cerebral perfusion computed tomography (PCT) object. We also propose a PCT reconstruction scheme via the Cute-TTV model to improve the performance of PCT reconstruction in weak-radiation tasks (referred to as CuteTTV-RECON), and develop an efficient optimization algorithm for it. Preliminary simulation studies demonstrate that CuteTTV-RECON achieves significant improvements over existing state-of-the-art methods in terms of artifact suppression, structure preservation and parametric-map accuracy under weak radiation.
For a very long time, low-dose computed tomography (CT) imaging techniques have relied on either preprocessing the projection data or regularizing the iterative reconstruction; the conventional filtered backprojection (FBP) algorithm itself is rarely studied. In this work, we show that the intermediate data produced during FBP possess some fascinating properties and can readily be processed to reduce noise and artifacts. The FBP algorithm can be decomposed into three steps: filtering, view-by-view backprojection, and summation. The data after view-by-view backprojection naturally form a tensor, which contains information useful for processing in higher dimensionality. We introduce a sorting operation on this tensor along the angular direction based on pixel intensity; the sorting for each point in the image plane is independent. Through the sorting operation, the structures of the object are explicitly encoded into the tensor data, and the artifacts are automatically driven into the top and bottom slices of the tensor. The sorted tensor also provides high-dimensional information and good low-rank properties, so any advanced processing method can be applied. In the experiments, we demonstrate that under the proposed scheme even Gaussian smoothing can remove the streaking artifacts in the ultra-low-dose case, with nearly no compromise of image resolution. We note that the scheme presented in this paper is a heuristic idea for developing new low-dose CT imaging algorithms.
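The per-pixel angular sorting can be sketched in NumPy; the tensor here is synthetic, with a few injected view-wise outliers standing in for the streak contributions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_views, H, W = 180, 32, 32

# Hypothetical view-by-view backprojection tensor: one partial image per view.
vvbp = rng.standard_normal((n_views, H, W)) + 5.0
vvbp[rng.integers(0, n_views, 10)] += 20.0   # view-wise outliers ("streaks")

# Sort independently per pixel along the angular (view) direction.
sorted_tensor = np.sort(vvbp, axis=0)

# Outliers are driven to the top slices; trimming them before summing
# suppresses streaks, whereas plain summation would keep them.
trimmed_sum = sorted_tensor[:-15].sum(axis=0) * n_views / (n_views - 15)
```

After sorting, each pixel's values are monotone along the view axis, so artifact-heavy contributions concentrate at the ends of the tensor where simple operations (trimming, smoothing) can act on them.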
High radiation dose in CT imaging is a major concern, as it can increase the lifetime risk of cancer. Therefore, reducing the radiation dose while maintaining clinically acceptable CT image quality is desirable in CT applications. One of the most successful strategies is statistical iterative reconstruction (SIR), which can produce promising CT images at low dose. Although SIR algorithms are effective, they usually have three disadvantages: 1) the design of the desired-image prior; 2) the selection of optimal parameters; and 3) a high computational burden. To address these three issues, in this work, inspired by deep learning networks for inverse problems, we present a low-dose CT image reconstruction strategy driven by a deep dual network (LdCT-Net) to yield high-quality CT images by incorporating projection information and image information simultaneously. Specifically, the present LdCT-Net effectively reconstructs CT images by adequately taking into account the information learned in the dual domains, i.e., the projection domain and the image domain, simultaneously. The experimental results on patient data demonstrate that the present LdCT-Net can achieve promising gains over other existing algorithms in terms of noise-induced artifact suppression and edge-detail preservation.
KEYWORDS: Computed tomography, Dual energy imaging, Gold, Convolution, Bone, Convolutional neural networks, Signal attenuation, Medical imaging, Surgery, Biological research
Dual-energy computed tomography (DECT) usually scans the object twice using different energy spectra and can then obtain two material decompositions by directly performing signal decomposition; in general, one is the water-equivalent fraction and the other is the bone-equivalent fraction. Note that material decomposition typically depends on two or more different energy spectra. In this study, we present a deep learning-based framework to obtain basis material images directly from single-energy CT images via cascaded deep convolutional neural networks (CD-ConvNet); we denote this imaging procedure pseudo-DECT imaging. The CD-ConvNet is designed to learn the non-linear mapping from the measured energy-specific CT images to the desired basis material decomposition images. Specifically, the output of each convolutional neural network (ConvNet) in the CD-ConvNet is used as part of the input to the following ConvNet to produce high-quality material decomposition images. Clinical patient data were used to validate and evaluate the performance of the presented CD-ConvNet. Experimental results demonstrate that the presented CD-ConvNet yields qualitatively and quantitatively accurate results when compared against the gold standard. We conclude that the presented CD-ConvNet can help improve the research utility of CT in quantitative imaging, especially single-energy CT.
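For context, the classical image-domain two-material decomposition that the two-spectrum case allows can be sketched as a per-pixel 2x2 inversion; the attenuation coefficients below are illustrative numbers, not calibrated values:

```python
import numpy as np

# Hypothetical effective attenuation coefficients (1/cm) of the water and
# bone basis materials under the low- and high-energy spectra.
M = np.array([[0.23, 0.45],    # low-kVp:  [mu_water, mu_bone]
              [0.19, 0.28]])   # high-kVp: [mu_water, mu_bone]

M_inv = np.linalg.inv(M)

def decompose(mu_low, mu_high):
    """Image-domain material decomposition: per-pixel 2x2 inversion."""
    return M_inv @ np.array([mu_low, mu_high])

# A pure-water pixel and a 50/50 water-bone mixture.
f_water = decompose(0.23, 0.19)
f_mix = decompose(0.5 * 0.23 + 0.5 * 0.45, 0.5 * 0.19 + 0.5 * 0.28)
```

With only a single energy spectrum this system is underdetermined (one equation, two unknowns), which is exactly the gap the learned CD-ConvNet mapping is meant to bridge.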
Computed tomography (CT) is one of the most important medical imaging modalities. CT images can be used to assist in the detection and diagnosis of lesions and to facilitate follow-up treatment. However, CT images are vulnerable to noise. There are two major sources intrinsically causing noise in CT data, i.e., X-ray photon statistics and the electronic background noise. Therefore, it is necessary to perform image quality assessment (IQA) in CT imaging before diagnosis and treatment. Most existing CT image IQA methods are based on human observer studies. However, these methods are impractical in clinical settings because they are complex and time-consuming. In this paper, we present a blind CT image quality assessment method based on a deep learning strategy. A database of 1500 CT images is constructed, containing 300 high-quality images and 1200 corresponding noisy images. Specifically, the high-quality images were used to simulate the corresponding noisy images at four different dose levels. Then, the images were scored by experienced radiologists on the following attributes, each on a five-point scale: image noise, artifacts, edges and structures, overall image quality, and tumor size and boundary estimation. We trained a network to learn the non-linear mapping from CT images to the subjective evaluation scores, and the pre-trained model is then used to predict the score of a test image. To evaluate the performance of the deep learning network for IQA, two correlation coefficients are utilized: the Pearson linear correlation coefficient (PLCC) and the Spearman rank-order correlation coefficient (SROCC). The experimental results demonstrate that the presented deep learning based IQA strategy can be used for CT image quality assessment.
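Both correlation metrics can be computed directly; a minimal NumPy sketch with hypothetical predicted and subjective scores:

```python
import numpy as np

def plcc(x, y):
    """Pearson linear correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))

def srocc(x, y):
    """Spearman rank-order correlation: PLCC computed on the ranks."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return plcc(rank(x), rank(y))

predicted = [4.8, 3.9, 3.1, 2.2, 1.0]    # hypothetical network scores
subjective = [5.0, 4.0, 3.0, 2.0, 1.0]   # radiologist MOS (five-point scale)

p = plcc(predicted, subjective)
s = srocc(predicted, subjective)
```

PLCC measures the linearity of the prediction against the subjective scores, while SROCC only measures monotonicity, so a model can score perfectly on SROCC while PLCC still reflects residual calibration error.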