KEYWORDS: Image segmentation, Medical imaging, Education and training, Content addressable memory, Control systems, Magnetic resonance imaging, Machine learning, Data modeling, White matter, Biomedical applications
Medical image harmonization aims to transform the image ‘style’ across heterogeneous datasets while preserving the anatomical content. It enables data-sensitive, learning-based approaches to fully leverage large multi-site datasets acquired with different imaging protocols. Recently, the attention mechanism has achieved excellent performance on image-to-image (I2I) translation of natural images. In this work, we further explore the potential of the attention mechanism to improve medical image harmonization. For the first time, we apply two attention-based frameworks with outstanding performance on natural-image I2I translation to cross-scanner MRI harmonization. We compare them with commonly used existing harmonization frameworks by evaluating how much each improves a downstream subcortical segmentation task on T1-weighted (T1w) MRI datasets from 1.5T vs. 3T scanners. Both qualitative and quantitative results show that the attention mechanism contributes a noticeable improvement in harmonization ability.
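For intuition, the following is a minimal, illustrative sketch (in PyTorch, not the authors' code) of the kind of spatial self-attention block commonly used in attention-based I2I translation networks; the class name and the channel-reduction factor of 8 are assumptions for illustration, not the specific frameworks evaluated in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    """Illustrative spatial self-attention over 2D feature maps (SAGAN-style)."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned weight of the attention branch

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c//8)
        k = self.key(x).flatten(2)                      # (b, c//8, hw)
        attn = F.softmax(torch.bmm(q, k), dim=-1)       # (b, hw, hw) attention map
        v = self.value(x).flatten(2)                    # (b, c, hw)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                     # residual connection
```

Such a block lets every spatial position aggregate context from the whole feature map, which is the property attention-based harmonization frameworks exploit to transfer scanner-specific style globally while leaving local anatomy intact.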
Brain extraction, also known as skull stripping, from magnetic resonance images (MRIs) is an essential preprocessing step for many medical image analysis tasks and is also useful as a stand-alone task for estimating the total brain volume. Many existing methods perform very well on T1-weighted images, especially for healthy adults. However, such methods do not always generalize well to more challenging datasets such as pediatric, severely pathological, or heterogeneous data. In this paper, we propose an automatic deep learning framework for brain extraction on T1-weighted MRIs of healthy adult controls, Huntington’s disease patients and pediatric Aicardi-Goutières syndrome (AGS) patients. We evaluate our method on the PREDICT-HD and AGS datasets, which are multi-site datasets acquired with different protocols and scanners. Compared to current state-of-the-art methods, our method produced the best segmentations, with the highest Dice score, lowest average surface distance and lowest 95-percent Hausdorff distance on both datasets. These results indicate that our method has better accuracy and generalizability for heterogeneous T1-weighted MRI datasets.
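As a concrete reference for the overlap metric reported here and in the following abstracts, below is a minimal sketch (an assumed NumPy implementation, not the authors' evaluation code) of the Dice coefficient between a predicted and a reference binary brain mask; the surface-based metrics (average surface distance, 95-percent Hausdorff distance) are typically derived from distance transforms of the mask boundaries rather than from voxel overlap.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks of the same shape."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```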
The subcortical structures of the brain are relevant for many neurodegenerative diseases like Huntington’s disease (HD). Quantitative segmentation of these structures from magnetic resonance images (MRIs) has been studied in clinical and neuroimaging research. Recently, convolutional neural networks (CNNs) have been successfully used for many medical image analysis tasks, including subcortical segmentation. In this work, we propose a 2-stage cascaded 3D subcortical segmentation framework, with the same 3D CNN architecture for both stages. Attention gates, residual blocks and output adding are used in our proposed 3D CNN. In the first stage, we apply our model to downsampled images to output a coarse segmentation. Next, we crop the extended subcortical region from the original image based on this coarse segmentation, and we input the cropped region to the second CNN to obtain the final segmentation. Left and right pairs of thalamus, caudate, pallidum and putamen are considered in our segmentation. We use the Dice coefficient as our metric and evaluate our method on two datasets: the publicly available IBSR dataset and a subset of the PREDICT-HD database, which includes healthy controls and HD subjects. We train our models on only healthy control subjects and test on both healthy controls and HD subjects to examine model generalizability. Compared with the state-of-the-art methods, our method has the highest mean Dice score on all considered subcortical structures (except the thalamus on IBSR), with more pronounced improvement for HD subjects. This suggests that our method may have better ability to segment MRIs of subjects with neurodegenerative disease.
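The coarse-to-fine cascade described above can be outlined as follows. This is a hypothetical PyTorch sketch: the network handles (coarse_net, fine_net), the downsampling factor, and the crop margin are illustrative assumptions rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F

def cascaded_segmentation(image, coarse_net, fine_net, margin=8):
    """image: (1, 1, D, H, W) T1w volume; coarse_net/fine_net: 3D segmentation CNNs."""
    # Stage 1: coarse segmentation on a downsampled volume.
    small = F.interpolate(image, scale_factor=0.5, mode='trilinear', align_corners=False)
    coarse = coarse_net(small).argmax(dim=1, keepdim=True)
    coarse_full = F.interpolate(coarse.float(), size=image.shape[2:], mode='nearest').long()

    # Stage 2: crop an extended box around the coarse subcortical region.
    fg = (coarse_full[0, 0] > 0).nonzero(as_tuple=False)
    lo = (fg.min(dim=0).values - margin).clamp(min=0)
    hi = fg.max(dim=0).values + margin + 1
    crop = image[:, :, lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    fine = fine_net(crop).argmax(dim=1)  # refined labels inside the crop

    # Paste the refined labels back into a full-size segmentation.
    seg = torch.zeros_like(coarse_full[:, 0])
    seg[:, lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = fine
    return seg
```

The design choice is that the first network only needs to localize the subcortical region at low resolution, so the second network can spend its capacity on fine boundaries within a much smaller, full-resolution crop.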
Longitudinal information is important for monitoring the progression of neurodegenerative diseases, such as Huntington's disease (HD). Specifically, longitudinal magnetic resonance imaging (MRI) studies may allow the discovery of subtle intra-subject changes over time that may otherwise go undetected because of inter-subject variability. For HD patients, the primary imaging-based marker of disease progression is the atrophy of subcortical structures, mainly the caudate and putamen. To better understand the course of subcortical atrophy in HD and its correlation with clinical outcome measures, highly accurate segmentation is important. In recent years, subcortical segmentation methods have moved towards deep learning, given the state-of-the-art accuracy and computational efficiency provided by these models. However, these methods are not designed for longitudinal analysis, but rather treat each time point as an independent sample, discarding the longitudinal structure of the data. In this paper, we propose a deep-learning-based subcortical segmentation method that takes this longitudinal information into account. Our method takes a longitudinal pair of 3D MRIs as input, and jointly computes the corresponding segmentations. We use bi-directional convolutional long short-term memory (C-LSTM) blocks in our model to leverage the longitudinal information between scans. We test our method on the PREDICT-HD dataset and use the Dice coefficient, average surface distance and 95-percent Hausdorff distance as our evaluation metrics. Compared to cross-sectional segmentation, we improve the overall accuracy of segmentation, and our method has more consistent performance across time points. Furthermore, our method identifies a stronger correlation between subcortical volume loss and decline in the total motor score, an important clinical outcome measure for HD.
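To illustrate how a bi-directional C-LSTM can couple the two time points, here is a minimal, assumed PyTorch sketch of a 3D convolutional LSTM cell applied forward and backward over a longitudinal pair of feature maps; the class and function names and the gate layout are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn

class ConvLSTMCell3d(nn.Module):
    """Minimal 3D convolutional LSTM cell (illustrative sketch)."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        # One convolution produces all four gates from the input and hidden state.
        self.gates = nn.Conv3d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        i, f, o, g = i.sigmoid(), f.sigmoid(), o.sigmoid(), g.tanh()
        c = f * c + i * g
        h = o * c.tanh()
        return h, c

def bidirectional_clstm(features, cell_fw, cell_bw):
    """features: list of per-time-point feature maps, each (B, C, D, H, W)."""
    B, _, D, H, W = features[0].shape
    zeros = lambda ch: torch.zeros(B, ch, D, H, W, device=features[0].device)
    h_fw, c_fw = zeros(cell_fw.hid_ch), zeros(cell_fw.hid_ch)
    h_bw, c_bw = zeros(cell_bw.hid_ch), zeros(cell_bw.hid_ch)
    out_fw, out_bw = [], []
    # Forward pass over time (t1 -> t2).
    for x in features:
        h_fw, c_fw = cell_fw(x, (h_fw, c_fw))
        out_fw.append(h_fw)
    # Backward pass over time (t2 -> t1).
    for x in reversed(features):
        h_bw, c_bw = cell_bw(x, (h_bw, c_bw))
        out_bw.append(h_bw)
    out_bw.reverse()
    # Concatenate both directions at each time point.
    return [torch.cat([f, b], dim=1) for f, b in zip(out_fw, out_bw)]
```

The concatenated features at each time point would then feed a segmentation decoder, so each prediction is informed by both scans rather than by a single time point in isolation.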