The simplified reference tissue model (SRTM) can provide robust estimation of binding potential (BP) without a measured arterial blood input function. Although voxel-wise estimation of BP (a so-called parametric image) is far more informative than region-of-interest (ROI) based estimation, it is challenging to compute because of the limited signal-to-noise ratio (SNR) of dynamic PET data. To achieve reliable parametric imaging, temporal frames are commonly low-pass filtered prior to kinetic parameter estimation, which significantly sacrifices resolution. In this project, we propose an innovative method, the residual simplified reference tissue model (R-SRTM), to compute parametric images at high resolution. In phantom simulations, we demonstrate that the proposed method outperforms the conventional SRTM method.
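As a minimal illustration of the conventional SRTM baseline that the abstract compares against (not the proposed R-SRTM; the TAC shape and rate constants below are illustrative assumptions), the linearized SRTM operational equation can be fit per voxel by ordinary least squares:

```python
import numpy as np

def cumtrapz(y, t):
    """Cumulative trapezoidal integral of y(t), starting at 0."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))))

def srtm_bp(ct, cr, t):
    """Fit the linearized SRTM operational equation
    C_T(t) = R1*C_R(t) + k2*int(C_R) - k2a*int(C_T)  and return BP = k2/k2a - 1."""
    X = np.column_stack([cr, cumtrapz(cr, t), -cumtrapz(ct, t)])
    r1, k2, k2a = np.linalg.lstsq(X, ct, rcond=None)[0]
    return k2 / k2a - 1.0

# Noiseless check: simulate a target-region TAC from known parameters (BP = 2.0).
t = np.arange(0.0, 60.0, 0.01)        # minutes
cr = t * np.exp(-t / 20.0)            # reference-region TAC (arbitrary shape)
R1, k2, k2a = 1.0, 0.3, 0.1           # assumed ground truth
ct = np.zeros_like(t)
for i in range(len(t) - 1):           # Euler integration of the SRTM ODE
    dt = t[i + 1] - t[i]
    ct[i + 1] = ct[i] + R1 * (cr[i + 1] - cr[i]) + dt * (k2 * cr[i] - k2a * ct[i])
bp = srtm_bp(ct, cr, t)               # recovers a value close to 2.0
```

With noisy voxel TACs this least-squares fit becomes unstable, which is exactly the SNR limitation the abstract describes.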
Positron emission tomography (PET) images still suffer from low signal-to-noise ratio (SNR) due to various physical degradation factors. Recently, deep neural networks (DNNs) have been successfully applied to medical image denoising tasks when a large number of training pairs are available. The deep image prior framework previously showed that individual information can be enough to train a denoising network, with the noisy image itself serving as the training label. In this work, we propose to improve PET image quality by jointly exploiting population and individual information in a DNN. Population information is utilized by pre-training the network on a group of patients; individual information is introduced during the testing phase by fine-tuning the population-trained network. Unlike traditional DNN denoising, fine-tuning during the testing phase is possible in this framework because the noisy PET image itself is treated as the training label. Quantification results based on clinical PET/MR datasets of thirty patients demonstrate that the proposed framework outperforms Gaussian, non-local means, and deep image prior denoising methods.
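The two-phase training scheme can be sketched with a toy numpy network (a stand-in for the paper's CNN; the signal model, network size, and learning rates are illustrative assumptions): pre-train on population (noisy, clean) pairs, then fine-tune on the test subject using its own noisy data as the label, deep-image-prior style.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class TinyDenoiser:
    """Two-layer MLP mapping a noisy 1-D signal to a denoised one."""
    def __init__(self, n, hidden=64):
        self.W1 = rng.normal(0, np.sqrt(2.0 / n), (hidden, n))
        self.W2 = rng.normal(0, np.sqrt(2.0 / hidden), (n, hidden))

    def forward(self, x):
        self.x, self.h = x, relu(self.W1 @ x)
        return self.W2 @ self.h

    def step(self, x, target, lr=1e-2):
        """One gradient-descent step on the MSE loss, manual backprop."""
        y = self.forward(x)
        g = 2.0 * (y - target) / y.size          # dL/dy
        gW2 = np.outer(g, self.h)
        gh = (self.W2.T @ g) * (self.h > 0)
        gW1 = np.outer(gh, self.x)
        self.W2 -= lr * gW2
        self.W1 -= lr * gW1
        return float(np.mean((y - target) ** 2))

n = 32
net = TinyDenoiser(n)
t = np.linspace(0, 2 * np.pi, n)

# Phase 1: population pre-training on (noisy, clean) pairs.
for _ in range(300):
    clean = np.sin(rng.uniform(1, 3) * t)
    noisy = clean + 0.3 * rng.normal(size=n)
    net.step(noisy, clean)

# Phase 2: individual fine-tuning -- the test subject's own noisy signal
# serves as BOTH input and label (deep-image-prior style), typically with
# early stopping so the network does not refit the noise.
subject_noisy = np.sin(2.2 * t) + 0.3 * rng.normal(size=n)
losses = [net.step(subject_noisy, subject_noisy, lr=1e-3) for _ in range(50)]
```

The key structural point matches the abstract: because the label in phase 2 is the noisy data itself, fine-tuning needs no clean reference for the test subject.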
Dual-energy computed tomography (DECT) typically uses 80 kVp and 140 kVp for patient scans. Due to high attenuation, the 80 kVp image can become too noisy in reduced-photon-flux scenarios such as low-dose protocols or large patients, leading to unacceptable decomposed image quality. In this paper, we propose a deep-neural-network-based reconstruction approach to compensate for the increased noise in low-dose DECT scans. The learned primal-dual network structure was used in this study, where the input and output of the network consist of both low- and high-energy data. The network was trained on 30 patients who underwent normal-dose chest DECT scans, with simulated noise inserted into the raw data, and was further evaluated on another 10 patients undergoing half-dose chest DECT scans. Image quality close to that of the normal-dose scans was achieved, and no significant bias was found in Hounsfield unit (HU) values or iodine concentration.
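The learned primal-dual architecture unrolls the classical primal-dual hybrid gradient (PDHG) scheme and replaces its hand-crafted update rules with trained CNNs. A minimal numpy sketch of the classical scheme it generalizes, on a toy linear system (the operator and data here are placeholders, not a CT projector), looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
A = np.eye(n) + 0.02 * rng.normal(size=(n, n))   # stand-in forward operator
x_true = rng.uniform(0.0, 1.0, n)
y = A @ x_true                                    # noiseless toy "data"

# Classical PDHG for min_x 0.5*||Ax - y||^2.  The learned primal-dual
# network keeps this alternating skeleton but learns the two update rules.
L = np.linalg.norm(A, 2)
sigma = tau = 0.9 / L                             # step sizes with sigma*tau*L^2 < 1
x = np.zeros(n)
x_bar = x.copy()
z = np.zeros(n)
for _ in range(2000):
    z = (z + sigma * (A @ x_bar - y)) / (1.0 + sigma)   # dual proximal step
    x_new = x - tau * (A.T @ z)                          # primal gradient step
    x_bar = 2.0 * x_new - x                              # over-relaxation
    x = x_new
```

In the paper's setting the primal variables carry both low- and high-energy images, so the learned updates can share information across the two energies.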
PET image reconstruction is challenging due to the ill-posedness of the inverse problem and the limited number of detected photons. Recently, deep neural networks have been widely applied to medical image denoising. In this work, based on the MAPEM algorithm, we propose a novel unrolled neural network framework for 3D PET image reconstruction. In this framework, a convolutional neural network is combined with the MAPEM update steps so that data consistency can be enforced. Both simulation and clinical datasets were used to evaluate the effectiveness of the proposed method. Quantification results show that the proposed MAPEM-Net outperforms neural-network and Gaussian denoising methods.
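The MAPEM iteration that gets unrolled has a closed-form voxel-wise update for a quadratic penalty pulling the image toward a reference r. A hedged toy sketch (the system matrix is random, and a moving average stands in for the CNN that supplies r in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 40, 20
A = rng.uniform(0.1, 1.0, (m, n))     # toy nonnegative system matrix
x_true = rng.uniform(0.5, 2.0, n)
y = A @ x_true                         # noiseless "measured" sinogram
s = A.T @ np.ones(m)                   # sensitivity image A^T 1
beta = 0.01                            # prior strength (illustrative)

x = np.ones(n)
for _ in range(200):
    e = x * (A.T @ (y / (A @ x)))      # EM numerator (plain EM would set x = e/s)
    r = np.convolve(x, np.ones(3) / 3.0, mode="same")  # stand-in for CNN output
    b = s - beta * r
    # Closed-form MAPEM update solving beta*x^2 + (s - beta*r)*x - e = 0
    # for the penalty (beta/2)*||x - r||^2 (nonnegative root):
    x = (-b + np.sqrt(b * b + 4.0 * beta * e)) / (2.0 * beta)
```

Unrolling a fixed number of such updates, with the CNN producing r at each stage, gives a network whose every layer re-enforces consistency with the measured data.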
PET image reconstruction is challenging due to the ill-posedness of the inverse problem and the limited number of detected photons. Recently, deep neural networks have been widely applied to medical image denoising. In this work, based on the expectation maximization (EM) algorithm, we propose an unrolled neural network framework for PET image reconstruction, named EMnet. An innovative feature of the proposed framework is that the deep neural network is combined with the EM update steps in a single computation graph, so that data consistency acts as a constraint during network training. Both simulated and real data are used to evaluate the proposed method. Quantification results show that the proposed EMnet outperforms neural-network denoising and Gaussian denoising methods.
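The building block being unrolled is the classic ML-EM (MLEM) update. A minimal numpy sketch on a toy system (the matrix and data are illustrative placeholders, not a PET projector) shows the step that EMnet interleaves with network layers:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 40, 20
A = rng.uniform(0.1, 1.0, (m, n))     # toy nonnegative system matrix
x_true = rng.uniform(0.5, 2.0, n)
y = A @ x_true                         # noiseless "measured" sinogram
s = A.T @ np.ones(m)                   # sensitivity image A^T 1

x = np.ones(n)                         # uniform initialization
for _ in range(500):
    # Classic multiplicative EM update: forward project, compare to the
    # data, back project the ratio, and normalize by the sensitivity.
    x = x / s * (A.T @ (y / (A @ x)))
```

Because the update is multiplicative, nonnegativity is preserved automatically; in EMnet the CNN output is inserted between such steps inside one differentiable graph so the whole pipeline stays tied to the measured data.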
Deep neural networks have attracted growing interest in medical imaging due to their success in computer vision tasks. One barrier to their application is the need for large numbers of training pairs, which are not always available in clinical practice. Recently, the deep image prior framework showed that a convolutional neural network (CNN) can learn intrinsic structure information from the corrupted image itself. In this work, an iterative parametric reconstruction framework is proposed that uses a deep neural network as a constraint. The network needs no prior training pairs, only the patient's own CT image. The training is based on the Logan plot derived from multi-bed-position dynamic positron emission tomography (PET) images acquired with the 68Ga-PRGD2 tracer. We formulate the estimation of the Logan-plot slope as a constrained optimization problem and solve it using the alternating direction method of multipliers (ADMM) algorithm. Quantification results based on a real patient dataset show that the proposed parametric reconstruction method outperforms Gaussian denoising and non-local means denoising.
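For context, the quantity being regularized is the slope of the Logan plot. A minimal numpy sketch of standard Logan graphical analysis on a simulated one-tissue-compartment voxel (kinetic parameters and input function are illustrative assumptions; the paper estimates the slope inside an ADMM-constrained reconstruction rather than by a direct fit):

```python
import numpy as np

def cumtrapz(y, t):
    """Cumulative trapezoidal integral of y(t), starting at 0."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))))

# One-tissue-compartment simulation with known distribution volume V_T = K1/k2.
t = np.arange(0.0, 90.0, 0.01)        # minutes
cp = t * np.exp(-t / 10.0)            # plasma input function (arbitrary shape)
K1, k2 = 0.5, 0.1                     # assumed ground truth: V_T = 5.0
ct = np.zeros_like(t)
for i in range(len(t) - 1):           # Euler integration of dC/dt = K1*Cp - k2*C
    ct[i + 1] = ct[i] + 0.01 * (K1 * cp[i] - k2 * ct[i])

# Logan graphical analysis: int(C_T)/C_T vs int(C_p)/C_T becomes linear for
# t > t*, with slope equal to the distribution volume V_T.
mask = t > 30.0
xg = cumtrapz(cp, t)[mask] / ct[mask]
yg = cumtrapz(ct, t)[mask] / ct[mask]
slope = np.polyfit(xg, yg, 1)[0]      # close to 5.0
```

With noisy voxel TACs, this per-voxel regression is unstable, which motivates constraining the slope image with the network in the proposed framework.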
In computed tomography (CT) image reconstruction, image prior design and parameter tuning are important for improving reconstruction quality from noisy or undersampled projections. In recent years, developments in deep learning for medical image reconstruction have made it possible to find both suitable image priors and hyperparameters automatically. By unrolling a reconstruction algorithm into a finite number of iterations and parameterizing the prior functions and hyperparameters with deep neural networks, all parameters can be learned end-to-end to reduce the difference between the reconstructed images and the training ground truth. Despite its superior performance, the unrolling scheme suffers from huge memory consumption and computational cost during training, making it hard to apply to three-dimensional (3D) CT applications such as cone-beam CT, helical CT, and tomosynthesis. In this paper, we propose a cascaded neural network for CT image reconstruction that is computationally efficient at training time: several sequentially trained cascades of networks improve image quality, connected by data-fidelity correction steps. Each cascade is trained purely in the image domain, so image patches can be used for training, which significantly accelerates training and reduces memory consumption. The proposed method is fully scalable to 3D data with current hardware. Simulations of sparse-view sampling demonstrate that the proposed method achieves image quality similar to that of state-of-the-art unrolled networks.
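The alternation the abstract describes (image-domain network, then data-fidelity correction) can be sketched in numpy on a toy underdetermined system; the smoothing filter below is only a placeholder for a trained cascade, and the operator, step size, and stage counts are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 20, 30                         # fewer measurements than unknowns (sparse-view toy)
A = rng.normal(size=(m, n))           # stand-in projection operator
x_true = np.sin(np.linspace(0.0, 3.0 * np.pi, n))  # smooth "image"
y = A @ x_true
L2sq = np.linalg.norm(A, 2) ** 2

def denoise(x):
    """Placeholder for a trained image-domain network cascade."""
    return np.convolve(x, np.ones(5) / 5.0, mode="same")

x0 = A.T @ y / L2sq                   # crude initial reconstruction
x = x0.copy()
for _ in range(5):                    # sequentially trained cascades
    x = denoise(x)                    # image-domain step (patch-friendly, no projector)
    for _ in range(20):               # data-fidelity correction: gradient steps
        x = x - (0.9 / L2sq) * (A.T @ (A @ x - y))
```

Because each cascade sees only images, it can be trained on patches without backpropagating through the projector, which is the source of the memory savings relative to fully unrolled networks.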
Attenuation correction is essential for the quantitative reliability of positron emission tomography (PET) imaging. In time-of-flight (TOF) PET, the attenuation sinogram can be determined up to a global constant from noiseless emission data via the TOF PET data consistency condition. This provides the theoretical basis for jointly estimating the activity image and the attenuation sinogram/image directly from TOF PET emission data. Several joint estimation methods, such as maximum likelihood activity and attenuation (MLAA) and maximum likelihood attenuation correction factors (MLACF), have been shown to produce improved reconstructions in the TOF case. However, due to the nonconcavity of the joint log-likelihood function and the Poisson noise present in PET data, these iterative methods still require proper initialization and well-designed regularization to prevent convergence to local maxima. To address this issue, we propose to jointly estimate the activity image and attenuation sinogram using the TOF PET data consistency condition as an attenuation-sinogram filter, and we evaluate the performance of the proposed method using computer simulations.
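The degeneracy that motivates such regularization can be seen in a deliberately naive toy (all quantities illustrative; this is not MLAA/MLACF and uses no TOF information): alternating activity and attenuation updates on a non-TOF model fit the data perfectly while telling us almost nothing about the true pair.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 40, 20
P = rng.uniform(0.1, 1.0, (m, n))     # toy projector
lam_true = rng.uniform(0.5, 2.0, n)   # true activity
a_true = rng.uniform(0.3, 1.0, m)     # true attenuation factor per LOR
y = a_true * (P @ lam_true)           # attenuated, noiseless data

lam = np.ones(n)
a = np.full(m, 0.5)
for _ in range(20):
    # Activity step: EM update with the attenuation factors held fixed.
    lam = lam / (P.T @ a) * (P.T @ (y / (P @ lam)))
    # Attenuation step: per-bin least squares, which fits the data exactly.
    a = y / (P @ lam)
# The pair (lam, a) now explains y perfectly, yet lam differs from lam_true:
# without TOF information the attenuation step can absorb any activity error,
# so many (activity, attenuation) pairs are equally likely.  This crosstalk is
# why the TOF consistency condition and careful regularization are needed.
```

In real TOF joint estimation, the extra timing information and the consistency condition break exactly this kind of non-uniqueness (up to the global constant noted in the abstract).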