CT imaging provides physicians with valuable insights when diagnosing disease in a clinical setting. To provide an accurate diagnosis, it is important to achieve high accuracy with controlled variability across CT scans from different scanners and imaging parameters. The purpose of this study was to analyze the variability of lung imaging biomarkers across various scanners and parameters using a customized version of a commercially available anthropomorphic chest phantom (Kyoto Kagaku) with several experimental sample inserts. The phantom was imaged across 10 different CT scanners with a total of 209 imaging conditions. An algorithm was developed to compute different imaging biomarkers. Variability across images from the same scanner and from different scanners was analyzed by computing coefficients of variation (CV) and standard deviations of HU values. The LAA-950 and LAA-856 biomarkers had the highest levels of variability, while the majority of other biomarkers varied by less than 10 HU or 10% CV in both inter- and intrascanner measurements. There was no clear trend between the biomarker measurements and CTDIvol. The results of this study demonstrate the existing variability in CT quantifications for lung imaging, prompting further studies on how to reduce such variation.
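The biomarkers and variability statistics named above can be sketched as follows. This is a minimal illustration, not the study's actual algorithm: LAA-950 is assumed to be the standard definition (percentage of lung voxels below -950 HU), and CV the standard ratio of sample standard deviation to mean.

```python
import numpy as np

def laa_percentage(hu_values, threshold=-950):
    """Low-attenuation area: percentage of lung voxels below an HU
    threshold (e.g. -950 for LAA-950, -856 for LAA-856)."""
    hu = np.asarray(hu_values, dtype=float)
    return 100.0 * np.mean(hu < threshold)

def coefficient_of_variation(measurements):
    """CV (%) of repeated biomarker measurements across scans,
    using the sample standard deviation (ddof=1)."""
    m = np.asarray(measurements, dtype=float)
    return 100.0 * m.std(ddof=1) / m.mean()
```

For interscanner analysis, `measurements` would hold one biomarker value per scanner; for intrascanner analysis, repeated values from the same scanner.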
Traditional methods of quantitative analysis of CT images typically involve working with patient data, which is often expensive and limited in terms of ground truth. To counter these restrictions, quantitative assessments can instead be made through Virtual Imaging Trials (VITs), which simulate the CT imaging process. This study sought to validate DukeSim (a scanner-specific CT simulator) utilizing clinically relevant biomarkers for a customized anthropomorphic chest phantom. The physical phantom was imaged utilizing two commercial CT scanners (Siemens Somatom Force and Definition Flash) with varying imaging parameters. A computational version of the phantom was simulated utilizing DukeSim for each corresponding real acquisition. Biomarkers were computed and compared between the real and virtually acquired CT images to assess the validity of DukeSim. The simulated images closely matched the real images both qualitatively and quantitatively, with an average biomarker percent difference of 3.84% (range 0.19% to 18.27%). Results showed that DukeSim is reasonably well validated across various patient imaging conditions and scanners, indicating its utility for further VIT studies where real patient data may not be feasible.
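The real-versus-simulated comparison above reduces to a percent-difference calculation per biomarker. The abstract does not state the exact formula used; the sketch below assumes the common convention of normalizing by the real (reference) measurement.

```python
import numpy as np

def percent_difference(real, simulated):
    """Absolute percent difference of a biomarker between a real
    acquisition and its DukeSim-simulated counterpart, relative to
    the real measurement (one common convention; the study's exact
    definition is an assumption here)."""
    real = np.asarray(real, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 100.0 * np.abs(simulated - real) / np.abs(real)
```

Averaging this quantity over all biomarkers and imaging conditions would yield a summary figure like the 3.84% reported above.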
In recent years, the use of large animals in neurological research has escalated due to advantages over small animals. Unfortunately, large animal imaging researchers lack functional automated medical imaging tools, requiring laborious manual processing. In response, we have implemented a reinforcement learning pipeline for brain anatomical landmark detection in minipig MRIs. Leveraging a deep convolutional network, a two-step detection process, and multiple multi-agent Deep Q-networks, our approach is suitable for accurate landmark detection in large animals. Using a heterogeneous dataset containing 154 minipig images, we achieved an average accuracy of 1.56 mm in predicting 19 landmarks.
Current computer-aided diagnosis (CAD) models for determining pulmonary nodule malignancy characterize nodule shape, density, and border in computed tomography (CT) data. Analyzing the lung parenchyma surrounding the nodule has been minimally explored. We hypothesize that improved nodule classification is achievable by including features quantified from the surrounding lung tissue. To explore this hypothesis, we have developed expanded quantitative CT feature extraction techniques, including volumetric Laws texture energy measures for the parenchyma and nodule, border descriptors using ray-casting and rubber-band straightening, histogram features characterizing densities, and global lung measurements. Using stepwise forward selection and leave-one-case-out cross-validation, a neural network was used for classification. When applied to 50 nodules (22 malignant and 28 benign) from high-resolution CT scans, 52 features (8 nodule, 39 parenchymal, and 5 global) were statistically significant. Nodule-only features yielded an area under the ROC curve of 0.918 (including nodule size) and 0.872 (excluding nodule size). Performance was improved through inclusion of parenchymal (0.938) and global features (0.932). These results show a trend toward increased performance when the parenchyma is included; together with the large number of significant parenchymal features, this supports our hypothesis that the pulmonary parenchyma is influenced differentially by malignant versus benign nodules, assisting CAD-based nodule characterization.
Current computer-aided diagnosis (CAD) models, developed to determine the malignancy of pulmonary nodules, characterize the nodule’s shape, density, and border. Analyzing the lung parenchyma surrounding the nodule is an area that has been minimally explored. We hypothesize that improved classification of nodules can be achieved through the inclusion of features quantified from the surrounding lung tissue. From computed tomography (CT) data, feature extraction techniques were developed to quantify the parenchymal and nodule textures, including a three-dimensional application of Laws’ Texture Energy Measures. Border irregularity was investigated using ray-casting and rubber-band straightening techniques, while histogram features characterized the densities of the nodule and parenchyma. The feature set was reduced by stepwise feature selection to a few independent features that best summarized the dataset. Using leave-one-out cross-validation, a neural network was used for classification. The CAD tool was applied to 50 nodules (22 malignant, 28 benign) from high-resolution CT scans. Forty-seven features, including 39 parenchymal features, were statistically significant, with both nodule and parenchyma features selected for classification, yielding an area under the ROC curve (AUC) of 0.935. This was compared to classification based solely on the nodule, yielding an AUC of 0.917. These preliminary results show an increase in performance when the surrounding parenchyma is included in the analysis. While modest, the improvement and the large number of significant parenchymal features support our hypothesis that the parenchyma contains meaningful data that can assist in CAD development.
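The three-dimensional Laws texture filtering mentioned in both abstracts can be sketched with the classic separable 1-D kernels. This is a generic illustration under standard assumptions (5-tap Laws kernels, zero-padded same-size convolution, energy as mean absolute response), not the authors' exact implementation.

```python
import numpy as np

# Classic 1-D Laws 5-tap kernels: level, edge, spot
L5 = np.array([1, 4, 6, 4, 1], dtype=float)
E5 = np.array([-1, -2, 0, 2, 1], dtype=float)
S5 = np.array([-1, 0, 2, 0, -1], dtype=float)

def laws_energy_3d(volume, kx, ky, kz):
    """Apply a separable 3-D Laws filter (one 1-D kernel per axis)
    and return the texture energy as the mean absolute response."""
    r = np.asarray(volume, dtype=float)
    for axis, k in ((2, kx), (1, ky), (0, kz)):
        # convolve every 1-D line along this axis with the kernel
        r = np.apply_along_axis(np.convolve, axis, r, k, mode='same')
    return float(np.abs(r).mean())
```

A full feature vector would collect this energy for every combination of the kernels (e.g. L5E5S5, E5E5E5, ...) over the nodule and parenchymal regions of interest.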
KEYWORDS: Computed tomography, Magnetic resonance imaging, Image registration, Animal model studies, Medical imaging, Genetics, Preclinical imaging, Visualization, Bone, Skeletal system
Recent growth of genetic disease models in swine has presented the opportunity to advance translation of developed imaging protocols while characterizing the genotype-to-phenotype relationship. Repeated imaging with multiple clinical modalities provides non-invasive detection, diagnosis, and monitoring of disease to accomplish these goals; however, longitudinal scanning requires repeatable and reproducible positioning of the animals. A modular positioning unit was designed to provide a fixed, stable base for the anesthetized animal through transit and imaging. After ventilation and sedation, animals were placed supine in the unit and monitored for consistent vitals. Comprehensive imaging was performed with a computed tomography (CT) chest-abdomen-pelvis scan at each screening time point. Longitudinal images were rigidly registered, accounting for rotation, translation, and anisotropic scaling, and the skeleton was isolated using a basic thresholding algorithm. Alignment was quantified via eleven pairs of corresponding points on the skeleton, with the first time point as the reference. Results were obtained with five animals over five screening time points. The developed unit enabled skeletal alignment to within an average of 13.13 ± 6.7 mm for all five subjects, providing a strong foundation for developing qualitative and quantitative methods of disease tracking.
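The point-pair alignment assessment described above amounts to computing distances between corresponding skeletal landmarks in the reference and each registered follow-up scan. A minimal sketch, assuming landmarks are given as Nx3 arrays of physical coordinates in millimeters:

```python
import numpy as np

def landmark_alignment_error(points_ref, points_followup):
    """Mean and standard deviation (mm) of Euclidean distances
    between corresponding landmark pairs: the reference (first
    time point) versus a registered follow-up scan."""
    a = np.asarray(points_ref, dtype=float)
    b = np.asarray(points_followup, dtype=float)
    distances = np.linalg.norm(a - b, axis=1)
    return float(distances.mean()), float(distances.std())
```

Evaluating this over the eleven landmark pairs for each animal and time point would produce summary statistics like the 13.13 ± 6.7 mm reported above.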