Dual-energy CT (DECT) provides additional material-based contrast using spectral information. Realizing DECT with rotation-to-rotation kVp switching may suffer from structural misalignment due to patient motion and therefore requires deformable image registration (DIR) between the two kVp images. Recent studies in DIR have highlighted deep-learning-based methods, which can achieve superior registration accuracy with reasonable computational time. However, current deep-learning-based DIR methods may eliminate important anatomical features or hallucinate fake structures, and their lack of interpretability complicates robustness verification. Alternatively, recent studies have introduced algorithm unrolling, which provides a concrete and systematic connection between model-based iterative methods and data-driven methods. In this work, we present an unsupervised Model-Based deep Unrolling Registration Network (MBURegNet) for DIR in DECT. MBURegNet comprises a sequence of stacked update blocks that unroll the Large Deformation Diffeomorphic Metric Mapping (LDDMM) method, where each block samples a velocity field that follows the diffeomorphism physics. Preliminary studies using clinical data have shown that the proposed network achieves superior performance compared to a baseline deep-learning-based method, as evidenced by both qualitative and quantitative analyses. Additionally, the network can generate a sequence of intermediate images connecting the initial and final motion states, effectively illustrating the continuous flow of diffeomorphisms.
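A minimal sketch of the unrolled diffeomorphic registration idea described above is given below, assuming a simple CNN inside each update block, a stationary velocity field integrated by scaling and squaring, and 2D inputs; all module names, block counts, and layer widths are illustrative assumptions, not the paper's actual MBURegNet architecture or training losses.

```python
# Sketch of an unrolled diffeomorphic registration network (PyTorch).
# Assumed/illustrative: UpdateBlock design, num_blocks, 2D single-channel images.
import torch
import torch.nn as nn
import torch.nn.functional as F

def warp(image, displacement):
    """Warp a 2D image with a dense displacement field (normalized x,y channels)."""
    b, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).to(image)   # identity sampling grid
    warped_grid = grid + displacement.permute(0, 2, 3, 1)
    return F.grid_sample(image, warped_grid, align_corners=True)

def scaling_and_squaring(velocity, steps=6):
    """Integrate a stationary velocity field into a diffeomorphic displacement."""
    disp = velocity / (2 ** steps)
    for _ in range(steps):
        disp = disp + warp(disp, disp)   # compose the flow with itself
    return disp

class UpdateBlock(nn.Module):
    """One unrolled step: predict a velocity increment from the current image pair."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 2, 3, padding=1),
        )

    def forward(self, moving, fixed):
        return self.net(torch.cat((moving, fixed), dim=1))

class UnrolledRegNet(nn.Module):
    """Stack of update blocks; each refines the velocity field and re-warps."""
    def __init__(self, num_blocks=5):
        super().__init__()
        self.blocks = nn.ModuleList(UpdateBlock() for _ in range(num_blocks))

    def forward(self, moving, fixed):
        velocity = torch.zeros_like(torch.cat((moving, moving), dim=1))
        intermediates = []
        for block in self.blocks:
            warped = warp(moving, scaling_and_squaring(velocity))
            velocity = velocity + block(warped, fixed)
            intermediates.append(warp(moving, scaling_and_squaring(velocity)))
        # final registered image plus the intermediate motion states
        return intermediates[-1], intermediates
```

Because each block only adds an increment to a velocity field that is then integrated, every intermediate output stays on a diffeomorphic path, which is what yields the sequence of images connecting the initial and final motion states.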
Numerous dual-energy CT (DECT) techniques have been developed in the past few decades. DECT statistical iterative reconstruction (SIR) has demonstrated its potential for reducing noise and increasing accuracy. Our lab proposed a joint statistical DECT algorithm for stopping power estimation and showed that it outperforms competing image-based material-decomposition methods. However, due to its slow convergence and the high computational cost of projections, the elapsed time of 3D DECT SIR is often not clinically acceptable. Therefore, to improve its convergence, we have embedded DECT SIR into a deep-learning model-based unrolled network for 3D DECT reconstruction (MB-DECTNet) that can be trained in an end-to-end fashion. This deep-learning-based method is trained to learn shortcuts between the initial conditions and the stationary points of iterative algorithms while preserving the unbiased estimation property of model-based algorithms. MB-DECTNet is formed by stacking multiple update blocks, each of which consists of a data consistency (DC) layer and a spatial mixer layer, where the spatial mixer layer is a shrunken U-Net and the DC layer is a one-step update of an arbitrary traditional iterative method. Although the proposed network can be combined with numerous iterative DECT algorithms, we demonstrate its performance with the dual-energy alternating minimization (DEAM) algorithm. The qualitative results show that MB-DECTNet with DEAM significantly reduces noise while increasing the resolution of the test image. The quantitative results show that MB-DECTNet has the potential to estimate attenuation coefficients as accurately as traditional statistical algorithms but at a much lower computational cost.
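The following is a minimal sketch of the stacked update-block structure described above, assuming a generic gradient-descent data-consistency step on a least-squares sinogram fit in place of the DEAM update, and a small residual CNN standing in for the shrunken U-Net spatial mixer; the projector callables, block count, and step size are placeholders rather than MB-DECTNet's actual components.

```python
# Sketch of an unrolled reconstruction network with alternating
# data-consistency and spatial-mixer layers (PyTorch).
# Assumed/illustrative: gradient-step DC update, CNN mixer, fixed step size.
import torch
import torch.nn as nn

class DataConsistency(nn.Module):
    """One-step update pulling the image estimate toward the measured data."""
    def __init__(self, forward_op, adjoint_op, step=1e-3):
        super().__init__()
        self.A, self.At, self.step = forward_op, adjoint_op, step

    def forward(self, x, y):
        # gradient of 0.5 * ||A x - y||^2 with respect to x
        return x - self.step * self.At(self.A(x) - y)

class SpatialMixer(nn.Module):
    """Small CNN jointly refining the two energy/material channels."""
    def __init__(self, channels=2, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)   # residual refinement

class UnrolledDECTNet(nn.Module):
    """Stack of (data consistency, spatial mixer) update blocks."""
    def __init__(self, forward_op, adjoint_op, num_blocks=10):
        super().__init__()
        self.dc = nn.ModuleList(DataConsistency(forward_op, adjoint_op)
                                for _ in range(num_blocks))
        self.mix = nn.ModuleList(SpatialMixer() for _ in range(num_blocks))

    def forward(self, x_init, y):
        x = x_init
        for dc, mix in zip(self.dc, self.mix):
            x = mix(dc(x, y))
        return x
```

Training the whole stack end to end against reference reconstructions is what lets the learned mixer layers act as a shortcut from the initial condition toward the iterative algorithm's stationary point in far fewer updates.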
Accuracy in proton range prediction is critical in proton therapy to ensure a conformal tumor dose. Our lab proposed a joint statistical image reconstruction (JSIR) method based on a basis vector model (BVM) for estimating stopping power ratio maps and demonstrated that it outperforms competing dual-energy CT (DECT) methods. However, no study has been performed on the clinical utility of our method. Here, we study the resulting dose prediction error: the difference between the dose delivered to tissue based on the more accurate JSIR-BVM method and the planned dose based on single-energy CT (SECT).
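As a minimal sketch of the quantity being studied, the snippet below computes a voxelwise dose prediction error from two dose grids; the array names and the normalization to prescription dose are assumptions for illustration, not the study's actual evaluation protocol.

```python
# Voxelwise dose prediction error between JSIR-BVM-based and SECT-based dose.
# Assumed/illustrative: array names, percentage normalization to prescription dose.
import numpy as np

def dose_prediction_error(dose_jsir_bvm, dose_sect, prescription_dose):
    """Difference between delivered (JSIR-BVM) and planned (SECT) dose,
    expressed as a percentage of the prescription dose."""
    return 100.0 * (dose_jsir_bvm - dose_sect) / prescription_dose

# Hypothetical usage: summarize the error inside a tumor mask.
# err = dose_prediction_error(d_jsir, d_sect, prescription_dose=60.0)
# print(err[tumor_mask].mean(), np.abs(err[tumor_mask]).max())
```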