We propose an automatic approach to anatomy partitioning of three-dimensional (3D) computed tomography (CT) images that divides the human torso into several volumes of interest (VOIs) according to anatomical definitions. In the proposed approach, a deep convolutional neural network (CNN) is trained to automatically detect the bounding boxes of organs on two-dimensional (2D) sections of the CT images. The coordinates of those boxes are then grouped so that a vote on a 3D VOI (the localization) can be obtained separately for each organ. We applied this approach to localize the 3D VOIs of 17 types of organs in the human torso and evaluated its performance by a four-fold cross-validation on a dataset of 240 3D CT scans with human-annotated ground truth for each organ region. The preliminary results showed that 86.7% of the 3D VOIs of the 3,177 organs in the 240 test CT images were localized with acceptable accuracy (mean Jaccard index of 72.8%) relative to the human annotations. This performance surpassed that of a recently reported state-of-the-art method. The experimental results demonstrated that using a deep CNN for anatomy partitioning of 3D CT images is more efficient and useful than the method used in our previous work.
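The aggregation step described above (grouping per-slice 2D box detections into a single 3D VOI, then scoring it with the Jaccard index) can be sketched as follows. This is a minimal illustration under assumed conventions, not the paper's actual voting algorithm: here, low-confidence slice detections are discarded and the 3D VOI is taken as the bounding extent of the surviving boxes; the function names and the score threshold are hypothetical.

```python
# Hypothetical sketch: fuse per-slice 2D organ detections into one 3D VOI and
# evaluate it with the Jaccard index, as the abstract describes. The simple
# threshold-and-union voting rule below is an assumption for illustration.

def boxes_to_voi(detections, score_thresh=0.5):
    """detections: list of (z, x1, y1, x2, y2, score) per-slice 2D boxes.
    Returns a 3D VOI as (x1, y1, z1, x2, y2, z2), or None if no box survives."""
    kept = [d for d in detections if d[5] >= score_thresh]
    if not kept:
        return None
    return (
        min(d[1] for d in kept),   # x1: leftmost surviving box edge
        min(d[2] for d in kept),   # y1
        min(d[0] for d in kept),   # z1: first surviving slice
        max(d[3] for d in kept),   # x2
        max(d[4] for d in kept),   # y2
        max(d[0] for d in kept),   # z2: last surviving slice
    )

def jaccard_3d(a, b):
    """Jaccard index (intersection over union) of two axis-aligned 3D boxes,
    each given as (x1, y1, z1, x2, y2, z2)."""
    ix = max(0, min(a[3], b[3]) - max(a[0], b[0]))
    iy = max(0, min(a[4], b[4]) - max(a[1], b[1]))
    iz = max(0, min(a[5], b[5]) - max(a[2], b[2]))
    inter = ix * iy * iz
    vol = lambda v: (v[3] - v[0]) * (v[4] - v[1]) * (v[5] - v[2])
    union = vol(a) + vol(b) - inter
    return inter / union if union else 0.0
```

In this sketch a localized VOI would count as acceptable when `jaccard_3d` against the human-annotated box exceeds a chosen cutoff; the actual acceptance criterion is defined in the paper.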