A new imaging modality (viz., Long-Film [LF]) for acquiring long-length tomosynthesis images of the spine was recently enabled on the O-arm™ system and used in an IRB-approved clinical study at our institution. The work presented here implements and evaluates a combined image synthesis and registration approach to solve multi-modality registration of MR and LF images. The approach is well suited to pediatric cases that use MR for preoperative diagnosis and aim for lower levels of intraoperative radiation exposure. A patch-based conditional GAN was used to synthesize 3D CT images from MR. The network was trained on deformably co-registered MR and CT image pairs. Synthesized images were registered to LF images using a model-based 3D-2D registration algorithm. Images from our clinical study were manually labeled, and the intra-user variability in anatomical landmark definition was measured in a simulation study. Geometric accuracy of registration was evaluated on anatomical landmarks in separate test cases from the clinical study. The synthesis process generated CT images with clear bone structures. Analysis of manual labeling revealed a 3.1 ± 2.2 mm projection distance error between 3D and 2D anatomical landmarks. Anatomical MR landmarks projected onto lateral LF images demonstrated a median projection distance error of 3.6 mm after registration. This work constitutes the first reported approach to MR-to-LF registration based on deep image synthesis. Preliminary results demonstrated the feasibility of globally rigid registration in aligning preoperative MR and intraoperative LF images. Work currently underway extends this approach to vertebra-level, locally rigid / globally deformable registration, with initialization based on automatically labeled vertebral levels.
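To illustrate the geometric evaluation described above, the following is a minimal Python sketch of computing projection distance error between 3D anatomical landmarks (projected through an estimated registration and system geometry) and corresponding 2D landmarks labeled in a LF image. The projection matrix, landmark values, and pixel spacing are hypothetical placeholders for illustration, not the actual study geometry or registration pipeline.

```python
import numpy as np

def project_points(P, pts3d):
    """Project Nx3 world points (mm) through a 3x4 projection matrix P (homogeneous)."""
    pts_h = np.hstack([pts3d, np.ones((pts3d.shape[0], 1))])   # Nx4 homogeneous points
    proj = (P @ pts_h.T).T                                      # Nx3 homogeneous detector coords
    return proj[:, :2] / proj[:, 2:3]                           # perspective divide -> Nx2

def projection_distance_error(P, pts3d_mr, pts2d_lf, pixel_size_mm=1.0):
    """Distance (mm) between projected 3D MR landmarks and labeled 2D LF landmarks."""
    proj2d = project_points(P, pts3d_mr)
    return np.linalg.norm(proj2d - pts2d_lf, axis=1) * pixel_size_mm

# Hypothetical example: three landmarks and a toy projection geometry, for illustration only.
P = np.array([[1000., 0., 512., 0.],
              [0., 1000., 512., 0.],
              [0., 0., 1., 1500.]])
pts3d = np.array([[10., 5., 20.], [12., -3., 40.], [8., 0., 60.]])    # MR landmarks after registration
pts2d = project_points(P, pts3d) + np.random.normal(0, 2.0, (3, 2))   # simulated labeled LF landmarks
pde = projection_distance_error(P, pts3d, pts2d)
print("median PDE (mm):", np.median(pde))
```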
Purpose: A recent imaging method (viz., Long-Film) for capturing long-length images of the spine was enabled on the O-arm™ system. The proposed work uses a custom, multi-perspective, region-based convolutional neural network (R-CNN) for labeling vertebrae in Long-Film images and evaluates approaches for incorporating long-range contextual information to take advantage of the extended field of view and improve labeling accuracy. Methods: The evaluated methods for incorporating contextual information include: (1) a recurrent network module with long short-term memory (LSTM) added after R-CNN classification; and (2) a post-processing, sequence-sorting step based on label confidence scores. The models were trained and validated on 11,805 Long-Film images simulated from projections of 370 CT images and tested on 50 Long-Film images of 14 cadaveric specimens. Results: The multi-perspective R-CNN with the LSTM module achieved a 91.7% vertebral level identification rate, compared to 72.4% without the LSTM, demonstrating the benefit of incorporating contextual information. While sequence sorting achieved 89.4% labeling accuracy, it could not handle detection errors and provided no additional improvement when applied after the LSTM module. Conclusions: The proposed LSTM module significantly improved labeling accuracy over the base model by effectively incorporating contextual information and training in an end-to-end fashion. Compared to sequence sorting, it showed greater robustness to false positives and false negatives in vertebra detection. The proposed model offers the potential to provide a valuable check for target localization and forms the basis for automatic measurement of spinal curvature changes in interventional settings.
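The following PyTorch sketch illustrates the general idea of an LSTM-based labeling module of the kind described above: per-vertebra feature vectors from the detection stage, ordered along the spine, are passed through an LSTM so that each label prediction can draw on long-range context across the extended field of view. The layer sizes, feature dimension, number of labels, and use of a bidirectional LSTM are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class VertebraLabelLSTM(nn.Module):
    """Hypothetical sequence-labeling head: refines per-detection class scores
    using an LSTM over detections ordered along the spine (superior -> inferior)."""
    def __init__(self, feat_dim=256, hidden_dim=128, num_labels=26):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, det_feats):
        # det_feats: (batch, num_detections, feat_dim) pooled R-CNN features per detection
        seq, _ = self.lstm(det_feats)
        return self.classifier(seq)          # (batch, num_detections, num_labels) logits

# Illustrative use: one image, 17 detected vertebrae, 256-dim features per detection.
feats = torch.randn(1, 17, 256)
logits = VertebraLabelLSTM()(feats)
labels = logits.argmax(dim=-1)               # predicted vertebral level per detection
print(labels.shape)                          # torch.Size([1, 17])
```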
Spinal degeneration and deformity present an enormous healthcare burden, with spine surgery among the main treatment modalities. Unfortunately, spine surgery (e.g., lumbar fusion) exhibits broad variability in the quality of outcome, with ~20-40% of patients gaining no benefit in pain or function (“failed back surgery”), drawing criticism that is difficult to reconcile with the rapid growth in frequency and cost over the last decade. Vital to advancing the quality of care in spine surgery are improved clinical decision support (CDS) tools that are accurate, explainable, and actionable: accurate in prediction of outcomes; explainable in terms of the physical / physiological factors underlying the prediction; and actionable within the shared decision process between surgeon and patient in identifying steps that could improve outcome. This technical note presents an overview of a novel outcome prediction framework for spine surgery (dubbed SpineCloud) that leverages innovative image analytics in combination with explainable prediction models to achieve accurate outcome prediction. Key to the SpineCloud framework are image analysis methods for extraction of high-level quantitative features from multi-modality perioperative images (CT, MR, and radiography) related to spinal morphology (including bone and soft-tissue features), the surgical construct (including deviation from an ideal reference), and longitudinal change in such features. The inclusion of such image-based features is hypothesized to boost the predictive power of models that conventionally rely on demographic / clinical data alone (e.g., age, gender, BMI, etc.). Preliminary results using gradient-boosted decision trees demonstrate that such prediction models are explainable (i.e., why a particular prediction is made), actionable (identifying features that may be addressed by the surgeon and/or patient), and boost predictive accuracy compared to analysis based on demographics alone (e.g., AUC improved by ~25% in preliminary studies). Incorporation of such CDS tools in spine surgery could fundamentally alter and improve the shared decision-making process between surgeons and patients by highlighting actionable features to improve selection of therapeutic and rehabilitative pathways.
Purpose: Data-intensive modeling could provide insight on the broad variability in outcomes in spine surgery. Previous studies were limited to analysis of demographic and clinical characteristics. We report an analytic framework called “SpineCloud” that incorporates quantitative features extracted from perioperative images to predict spine surgery outcome.
Approach: A retrospective study was conducted in which patient demographics, imaging, and outcome data were collected. Image features were automatically computed from perioperative CT. Postoperative 3- and 12-month functional and pain outcomes were analyzed in terms of improvement relative to the preoperative state. A boosted decision tree classifier was trained to predict outcome using demographic and image features as predictor variables. Predictions were computed based on SpineCloud and conventional demographic models, and features associated with poor outcome were identified from weighting terms evident in the boosted tree.
Results: Neither approach was predictive of 3- or 12-month outcomes based on preoperative data alone in the current, preliminary study. However, SpineCloud predictions incorporating image features obtained during and immediately following surgery (i.e., intraoperative and immediate postoperative images) exhibited significant improvement in area under the receiver operating characteristic curve (AUC): AUC = 0.72 (CI95 = 0.59 to 0.83) at 3 months and AUC = 0.69 (CI95 = 0.55 to 0.82) at 12 months.
Conclusions: Predictive modeling of lumbar spine surgery outcomes was improved by incorporation of image-based features compared to analysis based on conventional demographic data. The SpineCloud framework could improve understanding of factors underlying outcome variability and warrants further investigation and validation in a larger patient cohort.
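To make the modeling step described in the Approach concrete, the following is a minimal sketch of a boosted-tree outcome classifier trained on combined demographic and image-derived features and evaluated by AUC. The synthetic data, feature layout, and use of scikit-learn's GradientBoostingClassifier are illustrative assumptions, not the actual SpineCloud implementation or dataset.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
# Hypothetical predictor matrix: demographic features (age, BMI, ...) plus
# image-derived features (e.g., spinal curvature change, construct deviation).
X = rng.normal(size=(n, 8))
# Surrogate binary outcome ("improved at follow-up") for illustration only.
y = (X[:, 3] + 0.5 * X[:, 5] + rng.normal(scale=1.0, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.05)
model.fit(X_tr, y_tr)

auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print("test AUC:", round(auc, 3))
# Feature importances indicate which predictors the boosted trees weight most heavily,
# supporting the explainable / actionable aspect described above.
print("feature importances:", np.round(model.feature_importances_, 3))
```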