PURPOSE: Deep learning methods for classifying prostate cancer (PCa) in ultrasound images typically employ convolutional neural networks (CNN) to detect cancer in small regions of interest (ROI) along a needle trace region. However, this approach suffers from weak labelling, since the ground-truth histopathology labels do not describe the properties of individual ROIs. Recently, multi-scale approaches have sought to mitigate this issue by combining the context awareness of transformers with a convolutional feature extractor to detect cancer from multiple ROIs using multiple-instance learning (MIL). In this work, we present a detailed study of several image transformer architectures for both ROI-scale and multi-scale classification, and a comparison of the performance of CNNs and transformers for ultrasound-based prostate cancer classification. We also design a novel multi-objective learning strategy that combines both ROI and core predictions to further mitigate label noise.

METHODS: We use a dataset of 6607 prostate biopsy cores extracted from 693 patients at 5 distinct clinical centers. We evaluate 3 image transformers on ROI-scale cancer classification, then use the strongest model to tune a multi-scale classifier with MIL, wherein another transformer is fine-tuned on top of the existing model’s features. We train our MIL models using our novel multi-objective learning strategy and compare our results to existing baselines.

RESULTS: We find that for both ROI-scale and multi-scale PCa detection, image transformer backbones lag behind their CNN counterparts. This deficit in performance is even more noticeable for larger models. When using multi-objective learning, we are able to improve the performance of MIL models, with a 77.9% AUROC, a sensitivity of 75.9%, and a specificity of 66.3%, a considerable improvement over the baseline.

CONCLUSION: We conclude that convolutional networks are better suited for modelling sparse datasets of prostate ultrasounds, producing more robust features than their transformer counterparts in PCa detection. Multi-scale methods remain the best architecture for this task, with multi-objective learning presenting an effective way to improve performance.
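The abstract does not include implementation details, but the core idea of the multi-objective strategy, supervising both per-ROI and per-core (bag-level) predictions with the weak core-level histopathology label, can be sketched roughly as below. This is a minimal, hypothetical PyTorch sketch: the backbone, the transformer aggregator, and the loss weighting `alpha` are assumptions, not the authors' configuration.

```python
# Hypothetical sketch of a multi-objective MIL loss combining per-ROI and
# per-core predictions; module choices and the weighting are assumptions.
import torch
import torch.nn as nn


class MultiObjectiveMIL(nn.Module):
    def __init__(self, roi_encoder: nn.Module, feat_dim: int, alpha: float = 0.5):
        super().__init__()
        self.roi_encoder = roi_encoder            # CNN or ViT backbone producing (N, feat_dim) features
        self.roi_head = nn.Linear(feat_dim, 1)    # per-ROI cancer logit
        self.aggregator = nn.TransformerEncoder(  # transformer over the ROI features of one core
            nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8, batch_first=True),
            num_layers=2,
        )                                         # nhead=8 assumes feat_dim is divisible by 8
        self.core_head = nn.Linear(feat_dim, 1)   # per-core (bag-level) cancer logit
        self.alpha = alpha                        # balance between core and ROI objectives
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, rois: torch.Tensor, core_label: torch.Tensor):
        # rois: (batch, n_rois, C, H, W); core_label: (batch,) binary histopathology label
        b, n = rois.shape[:2]
        feats = self.roi_encoder(rois.flatten(0, 1)).view(b, n, -1)
        roi_logits = self.roi_head(feats).squeeze(-1)                           # (batch, n_rois)
        core_logits = self.core_head(self.aggregator(feats).mean(dim=1)).squeeze(-1)  # (batch,)

        # Weak ROI labels: every ROI in a core inherits the core's histopathology label.
        roi_labels = core_label.unsqueeze(1).expand(-1, n).float()
        loss = self.alpha * self.bce(core_logits, core_label.float()) \
             + (1 - self.alpha) * self.bce(roi_logits, roi_labels)
        return loss, torch.sigmoid(core_logits)
```

The intuition matches the abstract's stated motivation: the ROI term is noisy because the weak core label is shared across all ROIs, while the core-level term gives the aggregator a cleaner bag-level signal, so combining the two objectives helps mitigate label noise.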
Up to 35% of breast-conserving surgeries fail to resect the tumor completely. Ideally, machine learning methods applied to data from the iKnife, which uses Rapid Evaporative Ionization Mass Spectrometry (REIMS), could predict tissue type in real time during surgery, leading to better tumor resections. Because REIMS data are heterogeneous and weakly labeled, and datasets are often small, model performance and reliability can be adversely affected. Self-supervised training and uncertainty estimation can mitigate these challenges by learning signatures of the input data without labels and by reporting predictive confidence alongside each prediction. We first design an autoencoder model using a reconstruction pretext task as a self-supervised pretraining step, without considering tissue type. Next, we construct our uncertainty-aware classifier from the encoder part of the model with Masksembles layers to estimate the uncertainty associated with its predictions. The pretext task was trained on 190 burns collected from 34 patients in a Basal Cell Carcinoma iKnife dataset. The model was further trained on breast cancer data comprising 200 burns collected from 15 patients. Our proposed model shows improvements in sensitivity and uncertainty metrics of 10% and 15.7% over the baseline, respectively. The proposed strategies improve uncertainty calibration and overall performance, which may reduce the likelihood of incomplete resection, support removal of minimal non-neoplastic tissue, and improve model reliability during surgery. Future work will focus on further testing the model on intraoperative data and on additional ex vivo data once more breast samples have been collected.
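As with the first abstract, no implementation is given; the two-stage recipe (self-supervised reconstruction pretraining, then an uncertainty-aware classifier built on the pretrained encoder) can be sketched as follows. The layer sizes and spectrum length are hypothetical, and Monte Carlo dropout is used here only as a simple stand-in for the Masksembles layers described in the abstract.

```python
# Minimal sketch, assuming 1D REIMS spectra of length SPECTRUM_LEN and PyTorch:
# stage 1 pretrains an autoencoder on unlabeled burns; stage 2 reuses the encoder
# in a classifier and approximates predictive uncertainty with MC dropout
# (a stand-in for the Masksembles layers used in the paper).
import torch
import torch.nn as nn

SPECTRUM_LEN = 2000   # hypothetical number of m/z bins per burn
LATENT_DIM = 64       # hypothetical latent size


class SpectrumAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(SPECTRUM_LEN, 512), nn.ReLU(),
            nn.Linear(512, LATENT_DIM), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 512), nn.ReLU(),
            nn.Linear(512, SPECTRUM_LEN),
        )

    def forward(self, x):
        # Reconstruction pretext task: encode and decode the spectrum.
        return self.decoder(self.encoder(x))


class UncertaintyAwareClassifier(nn.Module):
    def __init__(self, encoder: nn.Module, n_classes: int = 2, p_drop: float = 0.2):
        super().__init__()
        self.encoder = encoder                     # pretrained on the reconstruction task
        self.head = nn.Sequential(
            nn.Dropout(p_drop),                    # kept stochastic at inference (MC dropout)
            nn.Linear(LATENT_DIM, n_classes),
        )

    def forward(self, x):
        return self.head(self.encoder(x))

    @torch.no_grad()
    def predict_with_uncertainty(self, x, n_samples: int = 20):
        self.train()                               # enable stochastic dropout masks
        probs = torch.stack(
            [torch.softmax(self(x), dim=-1) for _ in range(n_samples)]
        )
        self.eval()
        return probs.mean(0), probs.std(0)         # mean prediction and per-class spread
```

In this sketch, stage 1 would minimize a mean-squared reconstruction error on the unlabeled burns; stage 2 trains the classifier head (or fine-tunes the encoder) on the labeled breast data and reports the spread across stochastic forward passes as the uncertainty estimate.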