Glioblastoma multiforme (GBM) is the most common and most lethal primary brain tumor in adults, with a five-year survival rate of 5%. The standard of care and survival rate have remained largely unchanged, in part because these tumors are difficult to remove surgically, even though the extent of resection plays a crucial role in survival: better surgical resection leads to longer survival times. Thus, novel technologies are needed to improve resection accuracy. Our study features a curated database of GBM and normal brain tissue specimens, which we used to train and validate a multi-instance learning model for GBM detection via rapid evaporative ionization mass spectrometry (REIMS), a method that enables real-time tissue typing. The specimens were collected by a surgeon, reviewed by a pathologist, and sampled with an electrocautery device. The dataset comprised 276 normal tissue burns and 321 GBM tissue burns. Our multi-instance learning model was adapted to identify the molecular signatures of GBM, and we employed a patient-stratified four-fold cross-validation approach for model training and evaluation. Our models were robust and outperformed baseline models, achieving an AUC of 0.95 and an accuracy of 0.95 in distinguishing GBM from normal brain. This study marks the first application of deep learning to REIMS data for brain tumor tissue characterization and sets the foundation for investigating more clinically relevant questions where intraoperative tissue detection in neurosurgery is pertinent.
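The patient-stratified cross-validation mentioned above keeps all burns from one patient inside a single fold, so no patient's data leaks between training and testing. A minimal sketch of one way to build such folds (the function name and greedy balancing heuristic are illustrative assumptions, not the authors' implementation):

```python
from collections import defaultdict

def patient_stratified_folds(sample_patient_ids, n_folds=4):
    """Assign sample indices to folds so that all samples from one
    patient land in the same fold (hypothetical helper)."""
    # Group sample indices by patient ID
    by_patient = defaultdict(list)
    for idx, pid in enumerate(sample_patient_ids):
        by_patient[pid].append(idx)
    # Greedily place patients (largest first) into the smallest fold
    folds = [[] for _ in range(n_folds)]
    for pid in sorted(by_patient, key=lambda p: -len(by_patient[p])):
        smallest = min(range(n_folds), key=lambda f: len(folds[f]))
        folds[smallest].extend(by_patient[pid])
    return folds
```

A library alternative with the same guarantee is scikit-learn's `GroupKFold`, passing patient IDs as the `groups` argument.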
Treatment for basal cell carcinoma (BCC) includes excisional surgery to remove cancerous tissue, using a cautery tool to make burns along a defined resection margin around the tumor. Margin evaluation occurs post-surgically, requiring repeat surgery if positive margins are detected. Rapid evaporative ionization mass spectrometry (REIMS) can help distinguish healthy and cancerous tissue but does not provide spatial information about the location of the cautery tool where the spectra are acquired. We propose using intraoperative surgical video recordings and deep learning to provide surgeons with guidance to locate sites of potential positive margins. Frames from 14 intraoperative videos of BCC surgery were extracted and used to train a sequence of networks. The first network extracts frames showing surgery in progress; an object detection network then localizes the cautery tool and resection margin. Finally, our burn prediction model leverages both a Long Short-Term Memory (LSTM) network and a Receiver Operating Characteristic (ROC) curve to accurately predict when the surgeon is cutting. The cut identifications will be used in the future to synchronize with iKnife data, providing localizations when cuts are predicted. The model was trained with four-fold cross-validation on a patient-wise split between training, validation, and testing sets. Average recall over the four folds of testing was 0.80 for the LSTM and 0.73 for the ROC method. The video-based approach is simple yet effective at identifying tool-to-skin contact instances and may help guide surgeons, enabling them to deliver precise treatments in combination with iKnife data.
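The ROC-based predictor above implies choosing an operating threshold on a continuous cut score. One common criterion is maximizing Youden's J statistic (TPR − FPR); a small sketch under that assumption (the abstract does not state which criterion was used, so this is illustrative only):

```python
def best_threshold(scores, labels):
    """Pick a decision threshold from ROC analysis by maximizing
    Youden's J = TPR - FPR. Illustrative criterion; the study's exact
    threshold-selection rule is not specified in the abstract."""
    pos = sum(labels)              # number of positive (cutting) frames
    neg = len(labels) - pos        # number of negative frames
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        j = tp / pos - fp / neg    # Youden's J at this threshold
        if j > best_j:
            best_j, best_t = j, t
    return best_t
```

With the threshold fixed, frames scoring at or above it would be flagged as cut events for synchronization with the iKnife stream.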
Surgical excision for basal cell carcinoma (BCC) is a common treatment to remove the affected areas of skin. Minimizing positive margins around excised tissue is essential for successful treatment. Residual cancer cells may result in repeat surgery; however, detecting remaining cancer can be challenging and time-consuming. Using chemical signal data acquired while tissue is excised with a cautery tool, the iKnife system can discriminate between healthy and cancerous tissue but lacks spatial information, making it difficult to navigate back to suspicious margins. Intraoperative videos of BCC excision allow cautery locations to be tracked, providing the sites of potential positive margins. We propose a deep learning approach using convolutional neural networks to recognize phases in the videos and subsequently track the cautery location, comparing two localization methods (supervised and semi-supervised). Phase recognition was used for preprocessing to classify frames as showing the surgery or the start/stop of iKnife data acquisition. Only frames designated as showing the surgery were used for cautery localization. Fourteen videos were recorded during BCC excisions with iKnife data collection. On unseen testing data (2 videos, 1,832 frames), the phase recognition model showed an overall accuracy of 86%. Tool localization performed with a mean average precision of 0.98 and 0.96 for the supervised and semi-supervised methods, respectively, at a 0.5 intersection over union threshold. Incorporating intraoperative phase data with tool tracking provides surgeons with spatial information about the cautery tool location around suspicious regions, potentially improving the surgeon's ability to navigate back to the area of concern.
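The localization results above are reported at a 0.5 intersection-over-union (IoU) threshold, the standard criterion for counting a predicted bounding box as a correct detection. A self-contained sketch of the IoU computation (a generic implementation, not the authors' evaluation code):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned bounding boxes,
    each given as (x1, y1, x2, y2) corner coordinates."""
    # Corners of the intersection rectangle
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A predicted cautery-tool box is then counted as a true positive when `iou(pred, ground_truth) >= 0.5`, which is the matching rule underlying the reported mean average precision.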
PURPOSE: Basal cell carcinoma (BCC) is the most common cancer in the world. Surgery is the standard treatment, and margin assessment is used to evaluate the outcome. The presence of cancerous cells at the edge of resected tissue, i.e., a positive margin, can negatively impact patient outcomes and increase the probability of cancer recurrence. Novel mass spectrometry technologies paired with machine learning can provide surgeons with real-time feedback about margins and eliminate the need for repeat surgery. To our knowledge, this is the first study to report the performance of cancer detection using graph convolutional networks (GCNs) on mass spectrometry data from resected BCC samples. METHODS: The dataset used in this study is a subset of an ongoing clinical dataset acquired by our group and annotated with the help of a trained pathologist. The dataset contains a total of 190 spectra, including 127 normal and 63 BCC samples. We propose single-layer and multi-layer conversion methods to represent each mass spectrum as a structured graph. The graph classifier is built on a deep GCN architecture to distinguish between cancer and normal spectra. The results are compared with the state of the art in mass spectrometry analysis. RESULTS: The classification performance of the GCN with the multi-layer representation, without any data augmentation, is comparable to previous studies that used augmentation. CONCLUSION: The results indicate the capability of the proposed graph-based analysis of mass spectrometry data for tissue characterization and real-time margin assessment during cancer surgery.
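The abstract does not spell out the single-layer or multi-layer conversion schemes, but a natural baseline for turning a 1-D spectrum into a graph is a path graph: one node per m/z bin carrying the intensity as its feature, with edges between adjacent bins. The sketch below is a guess at that simplest form, not the paper's actual conversion:

```python
def spectrum_to_chain_graph(intensities):
    """Represent a binned 1-D mass spectrum as a path graph:
    node i holds the intensity of m/z bin i, and consecutive bins
    are connected by an edge. Hypothetical baseline conversion;
    the study's single-/multi-layer schemes may differ."""
    n = len(intensities)
    nodes = list(intensities)                   # node features
    edges = [(i, i + 1) for i in range(n - 1)]  # chain connectivity
    return nodes, edges
```

The resulting node-feature list and edge list map directly onto the inputs expected by common GCN toolkits (e.g. an `edge_index` tensor in PyTorch Geometric).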
KEYWORDS: Visualization, Image segmentation, Tissues, 3D image processing, Tumors, Open source software, Surgery, 3D modeling, Image visualization, Information visualization
PURPOSE: Several interstitial (needle-based) image-guided ablation planning systems are available, but most of them are closed or unsupported. We propose an open source software platform for planning image-guided interstitial ablation procedures, providing generic functionality and support for specialized plug-ins. METHODS: The patient's image data is loaded or streamed into the system and the relevant structures are segmented. The user places fiducial points as ablation needle entries and tips and sets the ablation times, and the thermal dose is calculated by a dose engine. The thermal dose is then visualized on the 2D image slices and in the 3D rendering using a combination of isodose lines and surfaces. Quantitative feedback is provided by dose volume histograms. The treatment plan can be edited iteratively until a satisfactory dose distribution is achieved. We performed a usability study in which eight novice users were asked to create a satisfactory treatment plan. RESULTS: Interventionists can use the proposed system to create and visualize thermal ablation plans. Researchers can use the platform to create a wide range of specialized applications by adding plug-ins for various ablation methods, thermal models, and dose calculation engines. Existing extensions of the platform can provide real-time imaging and tracked or robotic navigation to aid the user in optimal needle placement. In our usability study, the users found the visual information well represented and the platform intuitive to use. The users averaged 4.4 recalculation attempts before finding an optimal treatment plan, defined as 100% necrosis of the tumor. CONCLUSION: The developed platform fulfills a demand for a universal, shared ablation planning system. Supported by state-of-the-art specialized plug-ins, the open source system can adapt to the desired dose calculation method or ablation procedure.
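The quantitative feedback described above comes from cumulative dose volume histograms (DVHs): for each dose level, the fraction of the structure's voxels receiving at least that dose. A minimal sketch of the computation (generic; the platform's actual dose engine and binning are not specified here):

```python
def dose_volume_histogram(voxel_doses, dose_levels):
    """Cumulative DVH: for each dose level, return the fraction of
    voxels receiving at least that dose. Generic illustration of the
    standard DVH definition, not the platform's implementation."""
    n = len(voxel_doses)
    return [sum(1 for d in voxel_doses if d >= level) / n
            for level in dose_levels]
```

On such a curve, a plan achieving 100% necrosis of the tumor corresponds to the tumor's DVH staying at 1.0 up to the lethal thermal dose.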