Presentation
3 April 2024
Bias in radiology artificial intelligence: causes, evaluation and mitigation
Imon Banerjee
Abstract
Despite the expert-level performance of artificial intelligence (AI) models on various medical imaging tasks, real-world performance failures, with disparate outputs for minority subgroups, limit the usefulness of AI in improving patients’ lives. AI models have shown a remarkable ability to detect protected attributes such as age, sex, and race, while the same models demonstrate bias against historically underserved subgroups of age, sex, and race in disease diagnosis. An AI model may therefore learn shortcut predictions from these correlations and generate outputs that are biased toward certain subgroups even when protected attributes are not explicitly used as model inputs. This talk will discuss the types of bias arising from shortcut learning that may occur at different phases of AI model development. I will also summarize current techniques for mitigating bias in preprocessing (data-centric solutions), during model development (computational solutions), and in postprocessing (recalibration of learning).
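As a minimal illustrative sketch, not drawn from the talk itself, the Python snippet below shows two of the ideas the abstract names: auditing per-subgroup performance (evaluation) and a simple data-centric preprocessing mitigation via inverse-frequency sample reweighting. All data, the binary subgroup labels, and the logistic-regression model are synthetic assumptions chosen only to keep the example self-contained.

```python
# Sketch: subgroup performance audit + inverse-frequency reweighting (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic cohort: features, a disease label, and a protected attribute (subgroup A/B).
n = 2000
subgroup = rng.choice(["A", "B"], size=n, p=[0.8, 0.2])      # imbalanced subgroups
X = rng.normal(size=(n, 5))
X[:, 0] += (subgroup == "B") * 1.5                           # feature correlated with subgroup -> potential shortcut
y = (X[:, 1] + 0.5 * rng.normal(size=n) > 0).astype(int)     # label independent of subgroup

# Evaluation: per-subgroup AUC of an unweighted model (in-sample, for brevity).
model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]
for g in ["A", "B"]:
    mask = subgroup == g
    print(f"unweighted  AUC[{g}] = {roc_auc_score(y[mask], scores[mask]):.3f}")

# Preprocessing mitigation: weight samples inversely to subgroup frequency so the
# minority subgroup contributes equally to the training loss.
freq = {g: np.mean(subgroup == g) for g in ["A", "B"]}
sample_weight = np.array([1.0 / freq[g] for g in subgroup])

model_rw = LogisticRegression().fit(X, y, sample_weight=sample_weight)
scores_rw = model_rw.predict_proba(X)[:, 1]
for g in ["A", "B"]:
    mask = subgroup == g
    print(f"reweighted  AUC[{g}] = {roc_auc_score(y[mask], scores_rw[mask]):.3f}")
```

Comparing the per-subgroup AUC gaps before and after reweighting gives a rough sense of whether a data-centric intervention narrows the disparity; computational (in-training) and postprocessing (recalibration) approaches mentioned in the abstract would intervene at later stages instead.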
Conference Presentation
© (2024) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Imon Banerjee "Bias in radiology artificial intelligence: causes, evaluation and mitigation", Proc. SPIE 12926, Medical Imaging 2024: Image Processing, 129260O (3 April 2024); https://doi.org/10.1117/12.3023603