KEYWORDS: Machine learning, Clinical trials, Magnetic resonance imaging, Image segmentation, Data storage, Video, Tumors, Super resolution, Prostate cancer, Medical imaging
Machine learning and deep learning are ubiquitous across a wide variety of scientific disciplines, including medical imaging. An overview of multiple application areas along the imaging chain where deep learning methods are utilized in discovery and clinical quantitative imaging trials is presented. Example application areas along the imaging chain include quality control, preprocessing, segmentation, and scoring. Within each area, one or more specific applications are demonstrated, such as automated structural brain MRI quality control assessment in a core lab environment, super-resolution MRI preprocessing for neurodegenerative disease quantification in translational clinical trials, and multimodal PET/CT tumor segmentation in prostate cancer trials. The quantitative output of these algorithms is described, including their impact on decision making and their relationship to traditional read-based methods. Development and deployment of these techniques for use in quantitative imaging trials present unique challenges. The interplay between the technical and scientific domain knowledge required for algorithm development is highlighted. The infrastructure surrounding algorithm deployment is critical, given regulatory, method-robustness, computational, and performance considerations. The sensitivity of a given technique to these considerations, and thus the complexity of its deployment, is task- and phase-dependent. Context is provided for the infrastructure surrounding these methods, including common strategies for data flow, storage, access, and dissemination, as well as application-specific considerations for individual use cases.
This work reports on a comparative study of five manual and automated methods for intra-subject pair-wise registration of images from different modalities. The study includes a variety of inter-modal image registrations (MR-CT, PET-CT, PET-MR) utilizing different methods, including two manual point-based techniques using rigid and similarity transformations, one automated point-based approach based on the Iterative Closest Point (ICP) algorithm, and two automated intensity-based methods using mutual information (MI) and normalized mutual information (NMI). These techniques were employed for inter-modal registration of brain images of 9 subjects from a publicly available dataset, and the results were evaluated qualitatively via checkerboard images and quantitatively using root mean square error and MI criteria. In addition, for each inter-modal registration, a paired t-test was performed on the quantitative results to identify significant differences between the studied registration techniques.
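The intensity-based methods above score candidate alignments by mutual information estimated from a joint intensity histogram. A minimal sketch of that similarity measure (plug-in MI estimate from a 2-D histogram; the function name, bin count, and normalization choice are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Plug-in MI estimate between two spatially aligned images.

    Builds a joint intensity histogram, normalizes it to a joint
    probability table, and sums p(x,y) * log(p(x,y) / (p(x)p(y)))
    over occupied bins.
    """
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)          # marginal over rows
    py = pxy.sum(axis=0, keepdims=True)          # marginal over cols
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

In a registration loop, this score is evaluated after resampling the moving image under each candidate transform; the transform that maximizes MI is taken as the alignment. NMI divides the sum of marginal entropies by the joint entropy instead, which is less sensitive to the amount of image overlap.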
This paper describes enhancements to automated classification of brain tissues for multi-site degenerative magnetic resonance imaging (MRI) data analysis. Processing of large collections of MR images is a key research technique for advancing our understanding of the human brain. Previous studies have developed a robust multi-modal tool for automated tissue classification of large-scale data based on the expectation maximization (EM) method, initialized by group-wise prior probability distributions. This work aims to augment the EM-based classification with a non-parametric fuzzy k-Nearest Neighbor (k-NN) classifier that can model the unique anatomical state of each subject in the study of degenerative diseases. The presented method is applicable to multi-center heterogeneous data analysis and is quantitatively validated on a set of 18 synthetic multi-modal MR datasets with six different levels of noise and three degrees of bias field, provided with known ground truth. The Dice index and average Hausdorff distance are used to compare the accuracy and robustness of the proposed method to a state-of-the-art classification method implemented with the EM algorithm. Both evaluation measures show that the presented enhancements produce superior results compared to the EM-only classification.
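The two validation measures named above are standard segmentation metrics. A minimal sketch of both (the function names are illustrative; the average Hausdorff variant shown here symmetrizes the mean nearest-neighbor distance between boundary point sets, one common definition among several in use):

```python
import numpy as np

def dice_index(seg, ref):
    """Dice overlap between two binary masks: 2|A∩B| / (|A|+|B|)."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    return 2.0 * inter / (seg.sum() + ref.sum())

def average_hausdorff(pts_a, pts_b):
    """Average Hausdorff distance between two point sets of shape (N, dim).

    Mean of the two directed average nearest-neighbor distances.
    """
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

Dice rewards volumetric overlap, while the average Hausdorff distance penalizes boundary disagreement, so reporting both captures complementary aspects of classification quality.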
Anatomical landmarks such as the anterior commissure (AC) and posterior commissure (PC) are commonly used by researchers for co-registration of images. In this paper, we present a novel, automated approach for landmark detection that combines morphometric constraints and statistical shape models to provide accurate estimation of landmark points. The method is made robust to large rotations in initial head orientation by extracting additional information from the eye centers using a radial Hough transform and by exploiting the centroid of head mass (CM) using a novel estimation approach. To evaluate the effectiveness of this method, the algorithm is trained on a set of 20 images with manually selected landmarks, and a test dataset is used to compare the automatically detected against the manually detected landmark locations of the AC, PC, midbrain-pons junction (MPJ), and fourth ventricle notch (VN4). The results show that the proposed method is accurate, as the average error between the automatically and manually labeled landmark points is less than 1 mm. The algorithm is also highly robust, as it was successfully run on a large dataset that included different kinds of images with various orientations, spacings, and origins.
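The sub-millimeter accuracy claim rests on averaging Euclidean distances between automatic and manual landmark positions in physical (mm) units. A minimal sketch of that evaluation, assuming landmarks are given as voxel indices with a known per-axis voxel spacing (function and argument names are illustrative):

```python
import numpy as np

def mean_landmark_error(auto_vox, manual_vox, spacing_mm):
    """Mean Euclidean distance in mm between automatic and manual
    landmark voxel coordinates, shape (N, 3), given voxel spacing in mm.
    """
    auto_vox = np.asarray(auto_vox, dtype=float)
    manual_vox = np.asarray(manual_vox, dtype=float)
    diff_mm = (auto_vox - manual_vox) * np.asarray(spacing_mm, dtype=float)
    return float(np.linalg.norm(diff_mm, axis=1).mean())
```

Converting voxel offsets to millimeters before averaging is what allows a single accuracy threshold (here, 1 mm) to be compared across images with different voxel spacings.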