KEYWORDS: Education and training, Performance modeling, Breast cancer, Data modeling, Silver, Deep learning, Tunable filters, Tumors, Pathology, Evolutionary algorithms
Deep learning (DL) systems obtain high accuracy on digital pathology datasets that are within the same distribution as the training set. When applied to unseen datasets, performance degrades due to differences in acquisition hardware/software and staining protocols/vendors. This issue poses a barrier to clinical translation, since developed models cannot be readily deployed at new labs. To overcome this challenge, we present silver standard (SS) annotations as a method to improve the performance of deep learning architectures on unseen Ki67 pathology images. An unsupervised technique referred to as IHCCH was used to generate SS masks for Ki67+ and Ki67− nuclei from the target lab. A previously validated architecture for Ki67, UV-Net, is trained with a combination of the gold standard (GS) and SS masks to enhance performance consistency. It was found that adding SS masks from the unseen center to the training pool improved performance over clinically relevant proliferation index (PI) ranges.
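As a rough illustration of the training setup described above, the sketch below simply pools gold standard and silver standard patches into one training set. The dataset wrapper, file paths, and batch size are hypothetical placeholders, and the IHCCH mask-generation step itself is not reproduced here.

```python
# Minimal sketch (not the authors' code): augmenting a gold-standard (GS)
# training pool with silver-standard (SS) masks from an unseen target lab.
# `Ki67PatchDataset`, the directory paths, and the batch size are
# hypothetical placeholders.
from torch.utils.data import ConcatDataset, DataLoader

from ki67_data import Ki67PatchDataset  # hypothetical dataset wrapper

# gs_* : source-lab patches with expert (gold standard) annotations
# ss_* : target-lab patches with IHCCH-generated (silver standard) masks
gs_set = Ki67PatchDataset(images="gs_patches/", masks="gs_masks/")
ss_set = Ki67PatchDataset(images="target_lab_patches/", masks="ihcch_ss_masks/")

# Pool GS and SS samples so the network sees the target lab's stain and
# scanner characteristics during training.
train_loader = DataLoader(ConcatDataset([gs_set, ss_set]),
                          batch_size=16, shuffle=True)
```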
Nuclei detection is a key task in Ki67 proliferation index estimation in breast cancer images. Deep learning algorithms have shown strong potential in nuclei detection tasks. However, they face challenges when applied to pathology images with dense, overlapping nuclei, since fine details are often diluted or completely lost by early max-pooling layers. This paper introduces an optimized UV-Net architecture, specifically developed to recover nuclear detail at high resolution through feature preservation for Ki67 proliferation index computation. UV-Net achieves an average F1-score of 0.83 on held-out test patch data, while other architectures obtain 0.74-0.79. On unseen tissue microarray test data obtained from multiple centers, UV-Net's accuracy exceeds that of other architectures by a wide margin, including 9-42% on Ontario Veterinary College, 7-35% on Protein Atlas, and 0.3-3% on University Health Network data.
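To illustrate the feature-preservation idea (fine nuclear detail being lost to early max-pooling), the generic PyTorch sketch below contrasts a stem that pools immediately with one that keeps full resolution for several convolutions. This is not the published UV-Net; the channel counts and block layout are arbitrary.

```python
# Generic illustration (not the published UV-Net): keeping full-resolution
# feature maps in the first encoder stages so small, overlapping nuclei are
# not diluted by early max-pooling. Channel counts are arbitrary.
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

# Standard U-Net-style stem: pools immediately, halving spatial resolution.
standard_stem = nn.Sequential(conv_block(3, 64), nn.MaxPool2d(2))

# Feature-preserving stem: stacks convolutions at full resolution and only
# pools after fine nuclear detail has been encoded into deeper channels.
preserving_stem = nn.Sequential(conv_block(3, 64), conv_block(64, 64),
                                conv_block(64, 128), nn.MaxPool2d(2))
```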
An artifact found in magnetic resonance images (MRI) called partial volume averaging (PVA) has received much attention, since accurate segmentation of cerebral anatomy and pathology is impeded by this artifact. Traditional neurological segmentation techniques rely on Gaussian mixture models to handle noise and PVA, or on high-dimensional feature sets that exploit redundancy in multispectral datasets. Unfortunately, model-based techniques may not be optimal for images with non-Gaussian noise distributions and/or pathology, and multispectral techniques model probabilities instead of the partial volume (PV) fraction. For robust segmentation, a PV fraction estimation approach is developed for cerebral MRI that does not depend on predetermined intensity distribution models or multispectral scans. Instead, the PV fraction is estimated directly from each image using an adaptively defined global edge map constructed by exploiting a relationship between edge content and PVA. The final PVA map is used to segment anatomy and pathology with subvoxel accuracy. Validation on simulated and real pathology-free T1 MRI (Gaussian noise), as well as pathological fluid-attenuated inversion recovery (FLAIR) MRI (non-Gaussian noise), demonstrates that the PV fraction is accurately estimated and the resultant segmentation is robust. Comparison to model-based methods further highlights the benefits of the current approach.
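For context on what a PV fraction represents, the sketch below implements the standard linear mixing model between two pure-tissue intensities. The adaptive edge-map estimator described in the abstract is not reproduced; the pure-tissue intensity estimates are assumed to be supplied by some other means.

```python
# Simplified sketch of the linear partial-volume mixing model that PV
# fraction estimators build on: a boundary voxel's intensity is a convex
# combination of the two pure-tissue intensities. The adaptive edge-map
# estimation from the abstract is not reproduced here; i_tissue_a and
# i_tissue_b are assumed pure-tissue intensity estimates.
import numpy as np

def pv_fraction(image, i_tissue_a, i_tissue_b):
    """Fraction of tissue A in each voxel under linear mixing:
       I = alpha * I_A + (1 - alpha) * I_B  =>  alpha = (I - I_B) / (I_A - I_B)."""
    alpha = (image.astype(float) - i_tissue_b) / (i_tissue_a - i_tissue_b)
    return np.clip(alpha, 0.0, 1.0)  # clamp to a valid fraction
```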
Alzheimer's disease (AD) is the most common form of dementia in the elderly and is characterized by extracellular deposition of amyloid plaques (AP). Using animal models, AP loads have been manually measured from histological specimens to understand disease etiology, as well as response to treatment. Due to the manual nature of these approaches, obtaining the AP load is laborious, subjective, and error-prone. Automated algorithms can be designed to alleviate these challenges by objectively segmenting AP. In this paper, we focus on the development of a novel algorithm for AP segmentation based on robust preprocessing and a Type II fuzzy system. Type II fuzzy systems offer an advantage over traditional Type I fuzzy systems, since ambiguity in the membership function can be modeled and exploited to generate excellent segmentation results. The ambiguity in the membership function is defined as an adaptively changing parameter that is tuned based on the local contrast characteristics of the image. Using transgenic mouse brains with AP ground truth, validation studies were carried out showing a high degree of overlap and a low degree of oversegmentation (0.8233 and 0.0917, respectively). The results highlight that such a framework is able to handle plaques of various types (diffuse, punctate), plaques with varying Aβ concentrations, as well as intensity variation caused by treatment effects or staining variability.
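As a toy illustration of the adaptive-ambiguity idea, the sketch below builds an interval Type II membership whose upper and lower bounds widen where local contrast is high. The Gaussian primary membership, its parameters, and the contrast scaling are illustrative choices, not the paper's actual formulation.

```python
# Toy illustration (not the paper's algorithm) of an interval Type II fuzzy
# membership: a Gaussian primary membership whose footprint of uncertainty
# (upper/lower bounds) widens where local contrast is high. `center`,
# `sigma`, and the 0.2 contrast scaling are illustrative choices.
import numpy as np

def interval_type2_membership(intensity, local_contrast, center=0.5, sigma=0.15):
    # Primary (Type I) membership of normalized intensity in [0, 1].
    primary = np.exp(-0.5 * ((intensity - center) / sigma) ** 2)
    # Ambiguity (footprint width) adapts to local contrast in [0, 1].
    delta = 0.2 * local_contrast
    upper = np.clip(primary + delta, 0.0, 1.0)
    lower = np.clip(primary - delta, 0.0, 1.0)
    return lower, upper  # interval membership per pixel
```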
Conference Committee Involvement (7)
Digital and Computational Pathology
18 February 2025 | San Diego, California, United States
Digital and Computational Pathology
19 February 2024 | San Diego, California, United States
Digital and Computational Pathology
20 February 2023 | San Diego, California, United States
Digital and Computational Pathology
20 February 2022 | San Diego, California, United States
Digital and Computational Pathology
15 February 2021 | Online Only, California, United States
Digital Pathology
19 February 2020 | Houston, Texas, United States
Digital Pathology
20 February 2019 | San Diego, California, United States