PURPOSE: Deep learning methods for classifying prostate cancer (PCa) in ultrasound images typically employ convolutional neural networks (CNNs) to detect cancer in small regions of interest (ROIs) along a needle trace region. However, this approach suffers from weak labelling, since the ground-truth histopathology labels do not describe the properties of individual ROIs. Recently, multi-scale approaches have sought to mitigate this issue by combining the context awareness of transformers with a convolutional feature extractor to detect cancer from multiple ROIs using multiple-instance learning (MIL). In this work, we present a detailed study of several image transformer architectures for both ROI-scale and multi-scale classification, and a comparison of the performance of CNNs and transformers for ultrasound-based prostate cancer classification. We also design a novel multi-objective learning strategy that combines both ROI and core predictions to further mitigate label noise. METHODS: We use a dataset of 6607 prostate biopsy cores extracted from 693 patients at 5 distinct clinical centers. We evaluate 3 image transformers on ROI-scale cancer classification, then use the strongest model to tune a multi-scale classifier with MIL, wherein another transformer is fine-tuned on top of the existing model’s features. We train our MIL models using our novel multi-objective learning strategy and compare our results to existing baselines. RESULTS: We find that for both ROI-scale and multi-scale PCa detection, image transformer backbones lag behind their CNN counterparts. This deficit in performance is even more noticeable for larger models. When using multi-objective learning, we are able to improve the performance of MIL models, achieving an AUROC of 77.9%, a sensitivity of 75.9%, and a specificity of 66.3%, a considerable improvement over the baseline. CONCLUSION: We conclude that convolutional networks are better suited for modelling sparse prostate ultrasound datasets, producing more robust features than their transformer counterparts in PCa detection. Multi-scale methods remain the best architecture for this task, with multi-objective learning presenting an effective way to improve performance.
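To make the multi-objective strategy concrete, the sketch below is a minimal, hypothetical PyTorch illustration (not the authors' released code): it combines a core-level MIL loss with an auxiliary ROI-level loss in which each ROI inherits its core's histopathology label. The module name, feature dimension, and weighting `alpha` are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiObjectiveMIL(nn.Module):
    """Hypothetical multi-scale MIL head over pre-extracted ROI features."""
    def __init__(self, feat_dim=256, n_heads=4):
        super().__init__()
        # stand-in for the transformer fine-tuned on top of backbone features
        layer = nn.TransformerEncoderLayer(feat_dim, n_heads, batch_first=True)
        self.aggregator = nn.TransformerEncoder(layer, num_layers=2)
        self.roi_head = nn.Linear(feat_dim, 1)   # per-ROI prediction
        self.core_head = nn.Linear(feat_dim, 1)  # whole-core prediction

    def forward(self, roi_feats):
        # roi_feats: (batch, n_rois, feat_dim) features from a frozen backbone
        ctx = self.aggregator(roi_feats)
        roi_logits = self.roi_head(ctx).squeeze(-1)               # (batch, n_rois)
        core_logits = self.core_head(ctx.mean(dim=1)).squeeze(-1) # (batch,)
        return roi_logits, core_logits

def multi_objective_loss(roi_logits, core_logits, core_label, alpha=0.5):
    """Weighted sum of a core-level and a weakly labelled ROI-level BCE term."""
    bce = nn.functional.binary_cross_entropy_with_logits
    core_loss = bce(core_logits, core_label)
    # every ROI inherits its core's label -- this is the weak-label assumption
    roi_labels = core_label.unsqueeze(1).expand_as(roi_logits)
    return alpha * core_loss + (1 - alpha) * bce(roi_logits, roi_labels)
```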
After breast-conserving surgery, positive margins occur when breast cancer cells are found on the resection margin, leading to a higher chance of recurrence and the need for repeat surgery. The NaviKnife is an electromagnetic tracking-based surgical navigation system that provides visual and spatial feedback to the surgeon. In this study, we conduct a gross evaluation of this navigation system with respect to resection margins. The trajectory of the surgical cautery relative to the ultrasound-visible tumor is visualized, and its distance and location from the tumor are compared with pathology reports. Six breast-conserving surgery cases that resulted in positive margins were performed using the NaviKnife system. Trackers were placed on the surgical tools, and their positions in three-dimensional space were recorded throughout the procedure. The closest distance between the cautery and the tumor throughout the procedure was measured. The trajectory of the cautery when it came closest to the tumor model was plotted in 3D Slicer and compared with pathology reports. In two of the six cases, the side on which the cautery came closest to the tumor model coincided with the side on which positive margins were found in the pathology reports. Our results suggest that positive margins occur mainly in areas that are not visible in ultrasound imaging. Our system will need to be used in combination with intraoperative tissue characterization methods to effectively predict the occurrence and location of positive margins.
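The closest-approach measurement described above reduces to a nearest-pair search between the tracked cautery trajectory and the tumor model. A minimal NumPy sketch, assuming the tracked positions and tumor surface points are already expressed in a common coordinate system (function and variable names are illustrative):

```python
import numpy as np

def closest_approach(cautery_path, tumor_points):
    """Minimum cautery-to-tumor distance and the points where it occurred.

    cautery_path : (T, 3) tracked cautery tip positions over the procedure
    tumor_points : (M, 3) surface points of the ultrasound-derived tumor model
    """
    # pairwise distances between every tracked tip position and tumor point
    d = np.linalg.norm(cautery_path[:, None, :] - tumor_points[None, :, :],
                       axis=-1)
    t_idx, m_idx = np.unravel_index(np.argmin(d), d.shape)
    return d[t_idx, m_idx], cautery_path[t_idx], tumor_points[m_idx]
```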
KEYWORDS: Visualization, Open source software, 3D applications, Statistical analysis, Analytical research, Tissues, Data analysis, Ions, Biological research, Principal component analysis
Mass Spectrometry Imaging (MSI) is a powerful tool capable of visualizing molecular patterns to identify disease markers in tissue analysis. However, data analysis is computationally heavy and currently time-consuming, as there is no single platform capable of performing the entire preprocessing, visualization, and analysis pipeline end-to-end. Using different software tools, and the file formats each requires, also makes the process prone to error. The purpose of this work is to develop a free, open-source software implementation called “Visualization, Preprocessing, and Registration Environment” (ViPRE), capable of end-to-end analysis of MSI data. ViPRE was developed to provide the various functionalities required for MSI analysis, including data import, data visualization, data registration, Region of Interest (ROI) selection, spectral data alignment, and data analysis. The software implementation is offered as an open-source module in 3D Slicer, a medical imaging platform. It is also designed for flexibility and ease of use. ViPRE was tested using sample MSI data to evaluate the computational pipeline, with the results showing successful implementation of its functionalities and end-to-end usage. A preliminary usability test was also performed to assess user experience, with findings showing positive results. ViPRE aspires to satisfy the need for a single-stop comprehensive interface for MSI data analysis. The source code and documentation will be made publicly available.
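As one illustrative step of the spectral alignment stage, spectra acquired with differing m/z sampling can be resampled onto a shared axis. The following is a hypothetical sketch of that single step, not ViPRE's actual implementation:

```python
import numpy as np

def align_spectra(spectra, common_mz):
    """Resample each (mz, intensity) spectrum onto a shared m/z axis.

    spectra   : iterable of (mz_array, intensity_array) pairs, mz increasing
    common_mz : 1-D array defining the shared m/z axis
    """
    return np.stack([np.interp(common_mz, mz, inten) for mz, inten in spectra])
```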
Treatment for Basal Cell Carcinoma (BCC) includes excisional surgery to remove cancerous tissue, using a cautery tool to make burns along a defined resection margin around the tumor. Margin evaluation occurs post-surgically, requiring repeat surgery if positive margins are detected. Rapid Evaporative Ionization Mass Spectrometry (REIMS) can help distinguish healthy and cancerous tissue but does not provide spatial information about the cautery tool location where the spectra are acquired. We propose using intraoperative surgical video recordings and deep learning to provide surgeons with guidance to locate sites of potential positive margins. Frames from 14 intraoperative videos of BCC surgery were extracted and used to train a sequence of networks. The first network extracts frames showing surgery in progress; an object detection network then localizes the cautery tool and resection margin. Finally, our burn prediction model leverages both a Long Short-Term Memory (LSTM) network and a Receiver Operating Characteristic (ROC) curve to predict when the surgeon is cutting. The cut identifications will be used in the future for synchronization with iKnife data to provide localizations when cuts are predicted. The model was trained with four-fold cross-validation on a patient-wise split between training, validation, and testing sets. Average recall over the four folds of testing was 0.80 for the LSTM and 0.73 for the ROC approach. The video-based approach is simple yet effective at identifying tool-to-skin contact instances and may help guide surgeons, enabling them to deliver precise treatments in combination with iKnife data.
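A minimal sketch of the LSTM half of the burn prediction model, assuming per-frame feature vectors (e.g., detector or CNN embeddings) have already been extracted; the class name and dimensions are illustrative, not the authors' code:

```python
import torch
import torch.nn as nn

class CutPredictor(nn.Module):
    """Binary 'cutting vs. not cutting' classifier over per-frame features."""
    def __init__(self, feat_dim=512, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, frame_feats):
        # frame_feats: (batch, n_frames, feat_dim) features for a video window
        out, _ = self.lstm(frame_feats)
        return self.head(out[:, -1])  # cut logit for the window's final frame
```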
Up to 30% of breast-conserving surgery patients require secondary surgery to remove cancerous tissue missed in the initial intervention. We hypothesize that tracked tissue sensing can improve the success rate of breast-conserving surgery. Tissue sensor tracking allows the surgeon to intraoperatively scan the tumor bed for leftover cancerous tissue. In this study, we characterize the performance of our tracked optical scanning testbed using an experimental pipeline. We assess the Dice similarity coefficient, accuracy, and latency of the testbed.
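The Dice similarity coefficient used to characterize the testbed is a standard overlap metric; for reference, a minimal NumPy implementation:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    total = pred.sum() + truth.sum()
    # convention: two empty masks are treated as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / total if total else 1.0
```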
Glioblastoma Multiforme (GBM) is the most common and most lethal primary brain tumor in adults, with a five-year survival rate of 5%. The current standard of care and survival rate have remained largely unchanged due to the difficulty of surgically removing these tumors; the extent of resection plays a crucial role in survival, as better surgical resection leads to longer survival times. Thus, novel technologies need to be identified to improve resection accuracy. Our study features a curated database of GBM and normal brain tissue specimens, which we used to train and validate a multi-instance learning model for GBM detection via Rapid Evaporative Ionization Mass Spectrometry (REIMS). This method enables real-time tissue typing. The specimens were collected by a surgeon, reviewed by a pathologist, and sampled with an electrocautery device. The dataset comprised 276 normal tissue burns and 321 GBM tissue burns. Our multi-instance learning model was adapted to identify the molecular signatures of GBM, and we employed a patient-stratified four-fold cross-validation approach for model training and evaluation. Our models demonstrated robustness and outperformed baseline models, with an improved AUC of 0.95 and accuracy of 0.95 in correctly classifying GBM and normal brain tissue. This study marks the first application of deep learning to REIMS data for brain tumor tissue characterization and sets the foundation for investigating more clinically relevant questions where intraoperative tissue detection in neurosurgery is pertinent.
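The patient-stratified four-fold cross-validation can be reproduced in spirit with scikit-learn's StratifiedGroupKFold, which keeps every patient's burns on one side of each split while roughly preserving class balance; this is a sketch of the splitting scheme, not the authors' exact code:

```python
from sklearn.model_selection import StratifiedGroupKFold

def patient_stratified_folds(X, y, patient_ids, n_splits=4):
    """Yield train/test index pairs; no patient appears on both sides."""
    sgkf = StratifiedGroupKFold(n_splits=n_splits, shuffle=True,
                                random_state=0)
    yield from sgkf.split(X, y, groups=patient_ids)
```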
Up to 35% of breast-conserving surgeries fail to completely resect the tumor. Ideally, machine learning methods using data from the iKnife, which is based on Rapid Evaporative Ionization Mass Spectrometry (REIMS), can be used to predict tissue type in real time during surgery, resulting in better tumor resections. Because REIMS data are heterogeneous and weakly labeled, and datasets are often small, model performance and reliability can be adversely affected. Self-supervised training and uncertainty estimation of the predictions can mitigate these challenges by learning the signatures of the input data without their labels and by including predictive confidence in the output reporting. We first design an autoencoder model using a reconstruction pretext task as a self-supervised pretraining step, without considering tissue type. Next, we construct our uncertainty-aware classifier using the encoder part of the model with Masksembles layers to estimate the uncertainty associated with its predictions. The pretext task was trained on 190 burns collected from 34 patients in Basal Cell Carcinoma iKnife data. The model was further trained on breast cancer data comprising 200 burns collected from 15 patients. Our proposed model shows improvements in sensitivity and uncertainty metrics of 10% and 15.7% over the baseline, respectively. The proposed strategies lead to improvements in uncertainty calibration and overall performance, toward reducing the likelihood of incomplete resection, supporting removal of minimal non-neoplastic tissue, and improving model reliability during surgery. Future work will focus on further testing the model on intraoperative data and additional ex vivo data following collection of more breast samples.
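A minimal PyTorch sketch of the two-stage design, assuming spectra binned to a fixed length; the layer sizes are illustrative, and the Masksembles head is only indicated in a comment since its implementation is not reproduced here:

```python
import torch
import torch.nn as nn

class SpectrumAutoencoder(nn.Module):
    """Reconstruction pretext task on spectra, trained without tissue labels."""
    def __init__(self, n_bins=1000, latent=64):  # sizes are illustrative
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_bins, 256), nn.ReLU(),
                                     nn.Linear(256, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(),
                                     nn.Linear(256, n_bins))

    def forward(self, x):
        return self.decoder(self.encoder(x))

ae = SpectrumAutoencoder()
# Stage 1: pretrain with an MSE reconstruction loss on unlabeled burns, e.g.
#   loss = nn.functional.mse_loss(ae(x), x)
# Stage 2: reuse the pretrained encoder under a classification head. The paper
# uses Masksembles layers for uncertainty; the plain linear head below is a
# placeholder only.
classifier = nn.Sequential(ae.encoder, nn.Linear(64, 2))
```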
Breast cancer commonly requires surgical treatment. A procedure used to remove breast cancer is lumpectomy, which removes the tumor with a minimal margin of surrounding healthy tissue, called a negative margin. A cancer-free margin is difficult to achieve because tumors are not visible or palpable, and the breast deforms during surgery. One notable solution is Rapid Evaporative Ionization Mass Spectrometry (REIMS), which differentiates tumor from healthy tissue with high accuracy from the vapor generated by the surgical cautery. REIMS combined with navigation could detect where the surgical cautery breaches tumor tissue. However, fusing position tracking and REIMS data for navigation is challenging, because REIMS has a time delay that depends on a series of factors. Our objective was to evaluate the REIMS time delay for surgical navigation. The average time delay of REIMS classifications was measured by video recording: incisions and the corresponding REIMS classifications were recorded in tissue samples, and we measured the time delay between the physical incision of the tissue and its classification. We also measured the typical timing of incisions by tracking the cautery in five lumpectomy procedures. The average REIMS time delay was found to be 2.1 ± 0.36 s (average ± SD), with a 95% confidence interval of 0.08 s. The average time between incisions was 2.5 ± 0.87 s. In conclusion, the variation in the REIMS tissue classification time delay allows localization of the tracked incision where the tissue sample originates. REIMS could be used to update surgeons about the location of cancerous tissue with only a few seconds of delay.
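For reference, the reported mean ± SD and 95% confidence interval follow from standard formulas; a minimal sketch using SciPy's t-distribution (the authors' exact statistical procedure is not specified):

```python
import numpy as np
from scipy import stats

def delay_stats(delays):
    """Mean, sample SD, and 95% confidence half-width of time delays (s)."""
    d = np.asarray(delays, dtype=float)
    mean, sd = d.mean(), d.std(ddof=1)
    # half-width of the 95% CI for the mean, using the t-distribution
    half_width = stats.t.ppf(0.975, d.size - 1) * sd / np.sqrt(d.size)
    return mean, sd, half_width
```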
PURPOSE: Over 30% of breast-conserving surgery patients must undergo repeat surgery to address incomplete tumor resection. We hypothesize that the addition of a robotic cavity scanning system can improve the success rates of these procedures by performing additional, intraoperative imaging to detect leftover cancer cells. In this study, we assess the feasibility of a combined optical and acoustic imaging approach for this cavity scanning system. METHODS: Dual-layer tissue phantoms are imaged with both throughput broadband spectroscopy and an endocavity ultrasound probe. The absorbance and transmittance of the incident light from the broadband source are used to characterize each tissue sample optically. Additionally, a temporally enhanced ultrasound approach is used to distinguish the heterogeneity of the tissue sample by classifying individual pixels in the ultrasound image with a support vector machine (SVM). The goal of this combined approach is to use optical characterization to classify the tissue surface, and acoustic characterization to classify the sample heterogeneity. RESULTS: Both optical and acoustic characterization demonstrated promising preliminary results. The class of each tissue sample is distinctly separable based on the transmittance and absorbance of the broadband light. Additionally, an SVM trained on the temporally enhanced ultrasound signals for each tissue type showed 82% linear separability of labelled temporally enhanced ultrasound sequences in our test set. CONCLUSIONS: By combining broadband and ultrasound imaging, we demonstrate a potential non-destructive imaging approach for this robotic cavity scanning system. With this approach, our system can detect both surface-level tissue characteristics and depth information. Applying this to breast-conserving surgery can help inform the surgeon about the tissue composition of the resection cavity after initial tumor resection.
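A minimal scikit-learn sketch of the acoustic branch: a linear SVM classifying per-pixel intensity time series ("temporally enhanced" ultrasound). The data here are random placeholders, and the feature dimensions are assumptions:

```python
import numpy as np
from sklearn.svm import SVC

# X: (n_pixels, n_frames) per-pixel intensity time series; y: tissue labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))        # placeholder: 200 pixels, 64 frames
y = rng.integers(0, 2, size=200)      # placeholder: two tissue classes

clf = SVC(kernel="linear").fit(X, y)  # linear kernel, per the separability claim
print(f"training accuracy: {clf.score(X, y):.2f}")
```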
Surgical excision for basal cell carcinoma (BCC) is a common treatment to remove the affected areas of skin. Minimizing positive margins around excised tissue is essential for successful treatment. Residual cancer cells may result in repeat surgery; however, detecting remaining cancer can be challenging and time-consuming. Using chemical signal data acquired while tissue is excised with a cautery tool, the iKnife system can discriminate between healthy and cancerous tissue but lacks spatial information, making it difficult to navigate back to suspicious margins. Intraoperative videos of BCC excision allow cautery locations to be tracked, providing the sites of potential positive margins. We propose a deep learning approach using convolutional neural networks to recognize phases in the videos and subsequently track the cautery location, comparing two localization methods (supervised and semi-supervised). Phase recognition was used as preprocessing to classify frames as showing the surgery or the start/stop of iKnife data acquisition. Only frames designated as showing the surgery were used for cautery localization. Fourteen videos were recorded during BCC excisions with iKnife data collection. On unseen testing data (2 videos, 1,832 frames), the phase recognition model showed an overall accuracy of 86%. Tool localization performed with a mean average precision of 0.98 and 0.96 for the supervised and semi-supervised methods, respectively, at a 0.5 intersection-over-union threshold. Incorporating intraoperative phase data with tool tracking provides surgeons with spatial information about the cautery tool location around suspicious regions, potentially improving the surgeon's ability to navigate back to the area of concern.
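The 0.5 intersection-over-union threshold used in the mean-average-precision evaluation is computed per pair of predicted and ground-truth boxes; a minimal reference implementation:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```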
PURPOSE: Basal Cell Carcinoma (BCC) is the most common cancer in the world. Surgery is the standard treatment, and margin assessment is used to evaluate the outcome. The presence of cancerous cells at the edge of resected tissue, i.e., a positive margin, can negatively impact patient outcomes and increase the probability of cancer recurrence. Novel mass spectrometry technologies paired with machine learning can provide surgeons with real-time feedback about margins to eliminate the need for repeat surgery. To our knowledge, this is the first study to report the performance of cancer detection using Graph Convolutional Networks (GCNs) on mass spectrometry data from resected BCC samples. METHODS: The dataset used in this study is a subset of an ongoing clinical dataset acquired by our group and annotated with the help of a trained pathologist. The dataset contains a total of 190 spectra, including 127 normal and 63 BCC samples. We propose single-layer and multi-layer conversion methods to represent each mass spectrum as a structured graph. The graph classifier is developed based on the deep GCN structure to distinguish between cancer and normal spectra. The results are compared with the state of the art in mass spectrum analysis. RESULTS: The classification performance of the GCN with the multi-layer representation, without any data augmentation, is comparable to previous studies that have used augmentation. CONCLUSION: The results indicate the capability of the proposed graph-based analysis of mass spectrometry data for tissue characterization or real-time margin assessment during cancer surgery.
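The paper's exact graph conversions are not reproduced here, but a hypothetical single-layer-style sketch (one node per m/z bin, chain-connected neighbors) with a small GCN in PyTorch Geometric conveys the idea; the names and sizes are assumptions:

```python
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv, global_mean_pool

def spectrum_to_chain_graph(intensities):
    """Hypothetical conversion: one node per m/z bin, adjacent bins linked
    by undirected edges."""
    n = len(intensities)
    x = torch.tensor(intensities, dtype=torch.float).view(-1, 1)
    src = torch.arange(n - 1)
    edge_index = torch.stack([torch.cat([src, src + 1]),
                              torch.cat([src + 1, src])])
    return Data(x=x, edge_index=edge_index)

class SpectrumGCN(torch.nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.conv1 = GCNConv(1, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, 2)  # normal vs. BCC

    def forward(self, data):
        h = self.conv1(data.x, data.edge_index).relu()
        h = self.conv2(h, data.edge_index).relu()
        # data.batch is filled in when graphs are batched with PyG's DataLoader
        return self.head(global_mean_pool(h, data.batch))
```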
PURPOSE: Raman spectroscopy is an optical imaging technique used to characterize tissue via molecular analysis. The use of Raman spectroscopy for real-time intraoperative tissue classification requires fast analysis with minimal human intervention. To make accurate predictions and classifications, a large and reliable database of tissue classifications with spectral results is required. We have developed a system that generates an efficient scanning path for robotic scanning of tissues using Raman spectroscopy. METHODS: A camera mounted to a robotic controller is used to take an image of a tissue slide. The corners of the tissue slide within the sample image are identified, and the size of the slide is calculated. The image is cropped to fit the size of the slide and manipulated to identify the tissue contour. A grid sized to fit around the tissue is calculated, and a grid scanning pattern is generated. A masked image of the tissue contour is used to create a scanning pattern containing only the tissue. The tissue scanning pattern points are transformed to the robot controller coordinate system and used for robotic tissue scanning. The pattern is validated using spectroscopic scans of the tissue sample. The run time of the tissue scan pattern is compared to that of a region-of-interest scanning pattern encapsulating the tissue using the robotic controller. RESULTS: The average scanning time for the tissue scanning pattern was reduced by 4 minutes and 58 seconds compared to region-of-interest scanning. CONCLUSION: This method reduces the number of points used for automated robotic scanning and can be used to reduce scanning time and unusable data points, improving data collection efficiency.
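A minimal OpenCV sketch of the contour-masked grid pattern, assuming an 8-bit grayscale slide image; the thresholding choice and grid step are illustrative, and the transform into robot coordinates is left as the final step:

```python
import cv2
import numpy as np

def tissue_scan_points(gray_img, step=20):
    """Grid points (pixel coordinates) that fall inside the tissue contour."""
    # Otsu thresholding to separate tissue from the slide background
    _, mask = cv2.threshold(gray_img, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    tissue = max(contours, key=cv2.contourArea)  # largest blob = tissue
    x, y, w, h = cv2.boundingRect(tissue)
    # keep only grid points inside the contour, discarding empty slide area
    pts = [(u, v)
           for v in range(y, y + h, step)
           for u in range(x, x + w, step)
           if cv2.pointPolygonTest(tissue, (float(u), float(v)), False) >= 0]
    return np.array(pts)  # these would then be mapped to robot coordinates
```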
PURPOSE: The iKnife is a new surgical tool designed to aid in tumor resection procedures by providing enriched chemical feedback about the tumor resection cavity from electrosurgical vapors. We build and compare machine learning classifiers that are capable of distinguishing primary cancer from surrounding tissue at different stages of tumor progression. In developing our classification framework, we implement feature reduction and recognition tools that will assist in the translation of xenograft studies to clinical application, and we compare these tools to standard linear methods that have been previously demonstrated. METHODS: Two cohorts (n=6 each) of 12-week-old female immunocompromised (Rag2−/−;Il2rg−/−) mice were injected with the same human breast adenocarcinoma (MDA-MB-231) cell line. Mice in the two cohorts were euthanized at 4 and 6 weeks after cell injection, respectively, followed by iKnife burns performed on tumors and tissues prior to sample collection for future studies. A feature reduction technique that uses a neural network is compared to traditional linear analysis. For each method, we fit a classifier to distinguish primary cancer from surrounding tissue. RESULTS: Both classifiers can distinguish primary cancer from metastasis and surrounding tissue. The classifier that uses a neural network achieves an accuracy of 96.8%, while the classifier without the neural network achieves an accuracy of 96%. CONCLUSIONS: The performance of these classifiers indicates that this device has the potential to offer real-time, intraoperative classification of tissue. This technology may be used to assist in intraoperative margin detection and inform surgical decisions to offer a better standard of care for cancer patients.
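As a sketch of the comparison, scikit-learn pipelines can stand in for the two approaches: a linear PCA-based feature reduction feeding a linear classifier versus a small neural network on the same spectra. The specific components below are assumptions, since the paper's exact architectures are not detailed here:

```python
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# linear baseline: PCA feature reduction feeding a linear discriminant
linear_clf = make_pipeline(StandardScaler(), PCA(n_components=20),
                           LinearDiscriminantAnalysis())

# neural-network counterpart: a small MLP on the same spectra
nn_clf = make_pipeline(StandardScaler(),
                       MLPClassifier(hidden_layer_sizes=(64,), max_iter=500))

# both would be fit on (spectrum, tissue-label) pairs:
#   linear_clf.fit(X_train, y_train); nn_clf.fit(X_train, y_train)
```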