For 3D ultrasound (US) images acquired with a large slice thickness, high-frequency information in the slice direction is missing and cannot be recovered through interpolation. Because super-resolution is an ill-posed problem, current methods rely on external training atlases to learn the mapping from low-resolution to high-resolution images. In this study, we propose a self-supervised learning method that uses no external atlas images, yet recovers high-resolution images relying only on the acquired image with a large slice thickness. To circumvent the lack of training data, simulated training data are generated from the input image itself: each 2D sagittal slice is regarded as a high-resolution image, while each coronal and axial slice is regarded as a low-resolution image. A deep learning-based model trained on the sagittal slices is then applied to the low-resolution coronal and axial slices, mapping the image with large slice thickness to an estimated high-resolution image with thin slice thickness. The proposed algorithm was evaluated using 30 sets of breast US data. The US image downsampled along the z-axis served as the low-resolution input, and the original US image served as the ground truth. The normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR) and normalized cross-correlation (NCC) indices were used to quantify the accuracy of the proposed algorithm. The NMAE, PSNR and NCC were 0.011±0.02, 34.6±2.14 dB and 0.98±0.01, respectively. The proposed method produced image quality comparable to the ground truth.
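As an illustration of the self-supervised pair construction described above, the following minimal sketch builds (low-resolution, high-resolution) 2D slice pairs from a single volume by degrading each sagittal slice along one in-plane axis. The function name make_training_pairs, the use of skimage for resampling, and the linear downsample/upsample degradation model are illustrative assumptions, not the paper's exact implementation.

import numpy as np
from skimage.transform import resize

def make_training_pairs(volume, factor):
    """Build (LR, HR) 2D slice pairs from one 3D volume (illustrative sketch).

    Each sagittal slice is treated as a high-resolution target; a copy
    downsampled by `factor` along one in-plane axis and resampled back
    simulates the thick-slice, low-resolution appearance.
    """
    pairs = []
    for x in range(volume.shape[0]):                 # iterate sagittal slices
        hr = volume[x].astype(np.float32)
        # Simulate the thick-slice acquisition along one axis, then
        # interpolate back onto the high-resolution grid.
        small = resize(hr, (hr.shape[0], max(1, hr.shape[1] // factor)),
                       order=1, anti_aliasing=True)
        lr = resize(small, hr.shape, order=1)
        pairs.append((lr, hr))
    return pairs

The model is then trained on these (lr, hr) pairs and applied slice-by-slice in the coronal and axial orientations at inference time.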
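The reported evaluation indices can be computed as below. Since the abstract does not specify the exact normalizations, common conventions are assumed here: NMAE normalized by the ground-truth intensity range, PSNR using the ground-truth maximum as the peak value, and zero-mean NCC.

import numpy as np

def nmae(pred, gt):
    """Mean absolute error normalized by the ground-truth intensity range."""
    return np.mean(np.abs(pred - gt)) / (gt.max() - gt.min())

def psnr(pred, gt):
    """Peak signal-to-noise ratio in dB, with the ground-truth max as peak."""
    mse = np.mean((pred - gt) ** 2)
    return 10.0 * np.log10(gt.max() ** 2 / mse)

def ncc(pred, gt):
    """Zero-mean normalized cross-correlation between the two images."""
    p = pred - pred.mean()
    g = gt - gt.mean()
    return np.sum(p * g) / (np.sqrt(np.sum(p ** 2)) * np.sqrt(np.sum(g ** 2)))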