ShapeAXI is a cutting-edge framework for shape analysis that leverages a multi-view approach, capturing 3D objects from diverse viewpoints and analyzing them via 2D Convolutional Neural Networks (CNNs). We implement an automatic N-fold cross-validation process and aggregate the results across all folds, producing explainability heat-maps for each class on every shape, which enhances interpretability and supports a more nuanced understanding of the underlying phenomena. We demonstrate the versatility of ShapeAXI through two targeted classification experiments. The first categorizes condyles into healthy and degenerative states. The second, more intricate experiment works with shapes extracted from CBCT scans of cleft patients and classifies them into four severity classes. This application aligns with existing medical research and opens new avenues for specialized cleft patient analysis, holding considerable promise for both scientific exploration and clinical practice. The insights derived from ShapeAXI's explainability images reinforce existing knowledge and provide a platform for fresh discovery in condyle assessment and cleft severity classification. As a versatile and interpretable tool, ShapeAXI sets a new benchmark in 3D object interpretation and classification, and we expect its approach to contribute to research and practical applications across various domains. ShapeAXI is available in our GitHub repository https://github.com/DCBIA-OrthoLab/ShapeAXI.
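The multi-view idea behind ShapeAXI (render a 3D shape from several viewpoints, run a 2D CNN on the renders, and aggregate the per-view predictions) can be outlined as follows. This is a minimal sketch rather than the actual ShapeAXI code: the ResNet backbone, the mean aggregation over views, and the dummy render tensor are assumptions made for illustration, and the multi-view renderer itself is not shown.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class MultiViewClassifier(nn.Module):
    """Classify a 3D shape from a stack of its 2D renderings (sketch)."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.backbone = resnet18(weights=None)   # any 2D CNN could be used here
        self.backbone.fc = nn.Identity()         # keep the 512-d per-view features
        self.head = nn.Linear(512, n_classes)

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (batch, n_views, 3, H, W) -- one image per viewpoint
        b, v, c, h, w = views.shape
        feats = self.backbone(views.reshape(b * v, c, h, w))  # per-view features
        feats = feats.reshape(b, v, -1).mean(dim=1)           # aggregate across views
        return self.head(feats)                               # per-class logits

# Example with dummy data: 2 shapes, 12 viewpoints each, 224x224 renders.
views = torch.rand(2, 12, 3, 224, 224)
logits = MultiViewClassifier(n_classes=4)(views)
print(logits.shape)  # torch.Size([2, 4])
```

In a pipeline like the one described above, this per-shape prediction would be repeated inside each fold of the N-fold cross-validation, and per-view CNN attribution maps could be projected back onto the surface to form the class-wise explainability heat-maps.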
In this paper, we present FlyBy CNN, a novel deep learning-based approach for 3D shape segmentation. FlyBy CNN consists of sampling the surface of the 3D object from different viewpoints and extracting surface features such as the normal vectors. The generated 2D images are then analyzed via 2D convolutional neural networks such as RUNETs. We test our framework in a dental application for segmentation of intra-oral surfaces. The RUNET is trained for the segmentation task using image pairs of surface features and image labels as ground truth. The resulting labels from each segmented image are put back onto the surface thanks to our sampling approach, which generates a 1-1 correspondence between image pixels and triangles in the surface model. The segmentation task achieved an accuracy of 0.9.
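The step of putting per-pixel segmentation labels back onto the surface relies on knowing which triangle each rendered pixel corresponds to. The sketch below illustrates one way that fusion could be done under the stated 1-1 pixel/triangle correspondence; the function name and the majority-vote aggregation are illustrative assumptions, not the FlyBy CNN implementation.

```python
import numpy as np

def fuse_labels(label_images, triangle_id_images, n_triangles, n_classes):
    """Accumulate per-pixel class votes onto triangles, then take the majority.

    label_images:       list of (H, W) integer class maps predicted by the CNN
    triangle_id_images: list of (H, W) maps giving the triangle index visible
                        at each pixel (-1 where no triangle is visible)
    Returns a (n_triangles,) array with one label per triangle.
    """
    votes = np.zeros((n_triangles, n_classes), dtype=np.int64)
    for labels, tri_ids in zip(label_images, triangle_id_images):
        visible = tri_ids >= 0                     # ignore background pixels
        np.add.at(votes, (tri_ids[visible], labels[visible]), 1)
    return votes.argmax(axis=1)                    # majority label per triangle
```

Aggregating votes from all views in this way also smooths out disagreements between views that see the same triangle from different angles.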