We are developing DCNN-based methods for the segmentation of lung nodules with a wide variety of sizes, shapes, and margins. In this study, we developed several fusion methods for hybridizing multiple DCNNs with different structures to improve segmentation accuracy. Two groups of fusion methods that combined the output information from our previously designed shallow and deep U-shape-based deep learning models (U-DL) were compared. Group 1 denotes the late fusion (LF) methods, which concatenated the feature maps output from the last layers (before a sigmoid activation layer) of the two U-DLs. Group 2 denotes the early fusion (EF) methods, which combined multi-scale output information from the encoders and decoders, or the decoders alone, of the two U-DLs. A set of 883 cases from the LIDC-IDRI database, in which the lung nodules were manually marked by at least two radiologists, was selected for this study. We split the data into 683 training/validation cases and 200 independent test cases. The multiple DCNNs with the fusion method were trained simultaneously and end-to-end. The baseline LF-1 method, which used pre-defined thresholds, achieved an average DICE coefficient of 0.718±0.159. The newly developed LF method using a Squeeze-and-Excitation Attention Block (SEAB) followed by a sigmoid activation layer (LF-4), and two Convolutional Block Attention Module (CBAM) based EF methods that combined multi-scale information from the decoders alone (EF-2) or from both the encoders and decoders (EF-3) of the two U-DLs, achieved significantly (p<0.05) better performance, with average DICE coefficients of 0.745±0.135, 0.745±0.142, and 0.747±0.142, respectively.
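To make the LF-4 idea concrete, the sketch below shows one plausible PyTorch realization: the pre-sigmoid feature maps of two U-DL branches are concatenated, reweighted by a Squeeze-and-Excitation attention block, and reduced to a single-channel mask by a 1x1 convolution and sigmoid. The class names, channel counts, and patch size are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweight channels by global context."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # squeeze: global average pool
        self.fc = nn.Sequential(                      # excitation: bottleneck MLP
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # channel-wise reweighting

class LateFusionSE(nn.Module):
    """Fuse the pre-sigmoid feature maps of two U-DL branches (LF-4 style)."""
    def __init__(self, ch_a: int, ch_b: int):
        super().__init__()
        self.se = SEBlock(ch_a + ch_b)
        self.head = nn.Conv2d(ch_a + ch_b, 1, kernel_size=1)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([feat_a, feat_b], dim=1)    # late fusion by concatenation
        return torch.sigmoid(self.head(self.se(fused)))

# Example: fuse hypothetical 32- and 64-channel feature maps of a 96x96 patch.
mask = LateFusionSE(32, 64)(torch.randn(2, 32, 96, 96), torch.randn(2, 64, 96, 96))
```

Because the fusion module sits after both branches in one computational graph, the two U-DLs and the attention head can be trained simultaneously and end-to-end, as described above.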
We developed a radiomics-based reinforcement learning (R-RL) model for the early diagnosis of lung cancer. We formulated the classification of malignant and benign lung nodules over multiple years of screening as a Markov decision process. The reinforcement learning method learned a policy mapping from the set of states (patients' clinical conditions) of the environment (the patients) to the set of possible actions (decisions). The mapping between the two sets was based on a value function whose expected reward was designed to be associated with lung cancer risk: the reward increased when the patient was diagnosed with lung cancer and decreased otherwise in the Markov chains. The trained model can be deployed on a single baseline CT scan for early diagnosis of malignant nodules. A set of 215 NLST cases (108 positive and 107 negative) with 431 LDCT scans collected over 3 years of screening was used as the training set, and another 70 cases (35 positive and 35 negative) were used as the independent test set. For each screen-detected nodule in a CT exam, forty-three texture features were extracted and used as the state in reinforcement learning. An offline, model-free value iteration method was used to build the R-RL model. Our R-RL model trained with 3 years of serial CT exams achieved an AUC of 0.824 ± 0.003 when deployed on the first-year CT exams of the test set. In comparison, the R-RL model trained with only the first-year CT scans achieved a significantly (p<0.05) lower test AUC of 0.736 ± 0.004. Our study demonstrated that the R-RL model built with serial CT scans has the potential to improve early diagnosis of indeterminate lung nodules in screening programs, thereby reducing follow-up exams or unnecessary biopsies and the associated costs.
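The NumPy sketch below illustrates the general shape of offline, model-free value iteration over logged screening trajectories, in the spirit of the R-RL formulation above. It assumes the continuous 43-feature radiomic state has already been discretized (e.g., by clustering); the number of states, the reward values, and the toy transition log are illustrative assumptions, not the study's data or reward design.

```python
import numpy as np

N_STATES, N_ACTIONS = 50, 2        # discretized radiomic states; 0 = benign call, 1 = malignant call
GAMMA, ALPHA, SWEEPS = 0.9, 0.1, 200

# Logged transitions from serial exams: (state, action, reward, next_state, terminal).
# The reward is tied to cancer risk: positive when the decision matches the
# eventual diagnosis, negative otherwise (a stand-in for the study's reward).
transitions = [
    (3, 1, +1.0, 7, False),        # year-1 exam -> year-2 exam, cancer confirmed later
    (7, 1, +1.0, 7, True),
    (12, 0, +1.0, 15, False),      # benign trajectory
    (15, 0, +1.0, 15, True),
]

Q = np.zeros((N_STATES, N_ACTIONS))
for _ in range(SWEEPS):            # repeated sweeps over the fixed log (offline)
    for s, a, r, s_next, done in transitions:
        target = r if done else r + GAMMA * Q[s_next].max()
        Q[s, a] += ALPHA * (target - Q[s, a])   # model-free value update

# Deployment on a single baseline CT: map its discretized state to a decision.
baseline_state = 3
print("decision:", "malignant" if Q[baseline_state].argmax() == 1 else "benign")
```

Training on full multi-year trajectories lets the value function propagate later diagnostic outcomes back to the baseline state, which is why the serial-CT model can outperform one trained on first-year scans alone even when both are deployed on a single baseline exam.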
We are developing quantitative image analysis methods for the early diagnosis of lung cancer. We designed a hybrid deep learning (H-DL) method for volume segmentation of lung nodules with large variations in size, shape, margin, and opacity. In our H-DL method, two UNet++-based DL models, one using a 19-layer VGG network and the other a 201-layer DenseNet network as the backbone, were trained separately and then combined to segment nodules with wide ranges of size, shape, margin, and opacity. A data set collected from LIDC-IDRI containing 430 cases with lung nodules manually segmented by at least two radiologists was split into 352 training and 78 independent test cases. The 50% consensus consolidation of the radiologists' annotations was used as the reference standard for each nodule. For the 78 test cases with 167 nodules, our H-DL model achieved an average 3D DICE coefficient of 0.732±0.158 over all nodules. For nodules larger than 9.5 mm, nodules with margins described in LIDC-IDRI as sharp or spiculated, and nodules with structure described as lobulated or having solid opacity, the segmentation accuracy achieved by our H-DL model was not significantly different from the average of the radiologists' manual annotations in terms of the DICE coefficient. The results demonstrated that our hybrid deep learning scheme can achieve segmentation accuracy comparable to radiologists' average segmentations for a wide variety of nodules.
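A minimal sketch of the H-DL idea is shown below using the third-party segmentation_models_pytorch library (not necessarily what the authors used): two UNet++ models with VGG19 and DenseNet201 backbones are instantiated separately, and at inference their probability maps are combined here by simple averaging. The averaging rule, patch size, and single-channel input are assumptions for illustration.

```python
import torch
import segmentation_models_pytorch as smp

# Two UNet++ branches with different backbones, trained separately in the paper.
unet_vgg = smp.UnetPlusPlus(encoder_name="vgg19", encoder_weights=None,
                            in_channels=1, classes=1)
unet_dense = smp.UnetPlusPlus(encoder_name="densenet201", encoder_weights=None,
                              in_channels=1, classes=1)
unet_vgg.eval()
unet_dense.eval()

@torch.no_grad()
def hybrid_segment(ct_patch: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """Average the two branches' sigmoid maps and threshold to a binary mask
    (an assumed fusion rule; the abstract does not specify the combination)."""
    p_vgg = torch.sigmoid(unet_vgg(ct_patch))
    p_dense = torch.sigmoid(unet_dense(ct_patch))
    return ((p_vgg + p_dense) / 2 > threshold).float()

# Example: segment a batch of two 96x96 single-channel CT patches.
mask = hybrid_segment(torch.randn(2, 1, 96, 96))
```

The motivation for pairing a shallow and a deep backbone is that each tends to handle a different part of the nodule spectrum; combining their outputs covers the wide ranges of size, shape, margin, and opacity described above.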