Recently, deep learning networks have achieved considerable success in segmenting organs in medical images. Several methods have exploited volumetric information with deep networks to improve segmentation accuracy. However, for very challenging objects such as the brachial plexuses, these networks suffer from interference from image artifacts, risk of overfitting, and consequently low accuracy. In this paper, to address these issues, we synergize the strengths of high-level human knowledge (i.e., Natural Intelligence (NI)) with deep learning (i.e., Artificial Intelligence (AI)) for recognition and delineation of the thoracic Brachial Plexuses (BPs) in Computed Tomography (CT) images. We formulate an anatomy-guided deep learning hybrid intelligence approach for segmenting the thoracic right and left brachial plexuses that consists of two key stages. In the first stage (AAR-R), objects are recognized based on a previously built fuzzy anatomy model of the body region and its key organs relevant to the task at hand, in which high-level human anatomic knowledge is precisely codified. The second stage (DL-D) uses the AAR-R output to limit the search region to just where each object is most likely to reside and performs encoder-decoder delineation slice by slice. The proposed method is tested on a dataset of 125 CT images of the thorax acquired for radiation therapy planning of thoracic tumors and achieves a Dice coefficient of 0.659.
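To make the two-stage pipeline concrete, the sketch below illustrates the idea in Python. It is an illustrative outline only, not the authors' implementation: recognize_object, encoder_decoder, segment, and the Dice helper are hypothetical placeholders standing in for the AAR-R fuzzy-model-based recognition step, the trained DL-D encoder-decoder network, and the reported evaluation metric.

```python
# Illustrative sketch of a two-stage recognition-then-delineation pipeline.
# All function names here are hypothetical placeholders, not the paper's code.
import numpy as np

def recognize_object(ct_volume: np.ndarray) -> tuple:
    """Stand-in for AAR-R: return a bounding region (zmin, zmax, ymin, ymax, xmin, xmax)
    where the object is most likely to reside. A real system would query the
    fuzzy anatomy model; here we simply return the central half of the volume."""
    z, y, x = ct_volume.shape
    return z // 4, 3 * z // 4, y // 4, 3 * y // 4, x // 4, 3 * x // 4

def encoder_decoder(ct_slice: np.ndarray) -> np.ndarray:
    """Stand-in for the trained 2D encoder-decoder (DL-D); returns a binary mask."""
    return (ct_slice > ct_slice.mean()).astype(np.uint8)

def segment(ct_volume: np.ndarray) -> np.ndarray:
    """Stage 1 restricts the search region; stage 2 delineates slice by slice within it."""
    zmin, zmax, ymin, ymax, xmin, xmax = recognize_object(ct_volume)
    mask = np.zeros(ct_volume.shape, dtype=np.uint8)
    for z in range(zmin, zmax):
        roi = ct_volume[z, ymin:ymax, xmin:xmax]              # crop to the recognized region
        mask[z, ymin:ymax, xmin:xmax] = encoder_decoder(roi)  # delineate within the crop only
    return mask

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient: 2|A and B| / (|A| + |B|), the overlap measure reported above."""
    inter = np.logical_and(pred > 0, truth > 0).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + 1e-8)
```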
Image segmentation is the process of delineating the regions occupied by objects of interest in a given image. This operation is a fundamental first step in numerous applications of medical imagery. In the medical imaging field, this activity has a rich literature spanning over 45 years. In spite of numerous advances, including deep learning (DL) networks (DLNs) in recent years, the problem has defied a robust, fail-safe, and satisfactory solution, especially for objects that exhibit low contrast, are spatially sparse, have variable shape among individuals, or are affected by imaging artifacts, pathology, or post-treatment change in the body. Although image processing techniques, notably DLNs, are uncanny in their ability to harness low-level intensity pattern information on objects, they fall short in the high-level task of identifying and localizing an entire object as a gestalt. This dilemma has been a fundamental unmet challenge in medical image segmentation. In this paper, we demonstrate that these challenges can be significantly overcome by synergistically marrying the unmatched strengths of high-level human knowledge (i.e., natural intelligence (NI)) with the capabilities of DL networks (i.e., artificial intelligence (AI)) in garnering intricate details. Focusing on the object recognition task, we formulate an anatomy-guided DL object recognition approach named Automatic Anatomy Recognition-Deep Learning (AAR-DL), which combines an advanced anatomy-modeling strategy, model-based non-DL object recognition, and DL object detection networks to achieve expert human-like performance.
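As a toy illustration of how model-based localization and a DL detector might be combined, the following sketch fuses an anatomy-model prior bounding box with a detection network's box by confidence-weighted averaging. The fusion rule and the Box and fuse names are assumptions made purely for illustration; they are not taken from the AAR-DL method.

```python
# Toy illustration of fusing a model-based prior box with a DL detection box.
# The weighted-average rule and all names here are assumptions for illustration,
# not the AAR-DL algorithm.
from dataclasses import dataclass

@dataclass
class Box:
    ymin: float
    ymax: float
    xmin: float
    xmax: float
    confidence: float = 1.0

def fuse(prior: Box, detection: Box) -> Box:
    """Blend the anatomy-model prior with the detector's box,
    weighting each corner coordinate by the boxes' confidences."""
    w = detection.confidence / (prior.confidence + detection.confidence)
    blend = lambda a, b: (1.0 - w) * a + w * b
    return Box(
        ymin=blend(prior.ymin, detection.ymin),
        ymax=blend(prior.ymax, detection.ymax),
        xmin=blend(prior.xmin, detection.xmin),
        xmax=blend(prior.xmax, detection.xmax),
        confidence=max(prior.confidence, detection.confidence),
    )

# Example: prior from an anatomy model, detection from a DL network.
prior = Box(100, 160, 200, 260, confidence=0.5)
detection = Box(110, 150, 210, 250, confidence=0.9)
print(fuse(prior, detection))
```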