MRI and transrectal ultrasound (TRUS) prostate images provide essential information for prostate intervention and cancer treatment, and registering MRI with TRUS images is beneficial for cancer diagnosis and tumor delineation. However, the two imaging modalities have distinct intensity distributions, which makes registration difficult. A deep learning based image registration framework was proposed to perform MRI-TRUS image fusion. The prostate was first segmented from the MRI and TRUS images, and its shape was modeled using point clouds. The network aligns the prostates via point cloud matching, and biomechanical modeling was used to regularize the prostate deformation during registration. The performance of the network was evaluated using the Dice similarity coefficient (DSC), mean surface distance (MSD), Hausdorff distance (HD), and target registration error (TRE). The calculated DSC, MSD, HD, and TRE were 0.94±0.02, 0.90±0.23 mm, 2.96±1.00 mm, and 1.57±0.77 mm, respectively. These results show that the proposed method can accurately register the prostate on MRI to the prostate on TRUS.
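For clarity, the two overlap/distance metrics reported above can be illustrated with a minimal sketch. This is not the authors' evaluation code; it assumes NumPy and SciPy are available, and uses small toy masks in place of real segmentations.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(p, q):
    """Symmetric Hausdorff distance between two point clouds (N x 3)."""
    return max(directed_hausdorff(p, q)[0], directed_hausdorff(q, p)[0])

# Toy example: two overlapping cubic masks standing in for
# fixed and registered prostate segmentations.
a = np.zeros((10, 10, 10), dtype=bool); a[2:8, 2:8, 2:8] = True
b = np.zeros((10, 10, 10), dtype=bool); b[3:9, 3:9, 3:9] = True

print(round(dice(a, b), 3))                       # overlap in [0, 1]
print(round(hausdorff(np.argwhere(a), np.argwhere(b)), 3))  # in voxel units
```

MSD is computed analogously to HD but averages, rather than maximizes, the closest-point surface distances; TRE additionally requires corresponding anatomical landmarks in both modalities.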