X23D - Intraoperative 3D Lumbar Spine Shape Reconstruction Based on Sparse Multi-View X-ray Data

1 Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
2 Department of Orthopedics, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland

Abstract

Visual assessment based on intraoperative 2D X-rays remains the predominant aid for intraoperative decision-making, surgical guidance, and error prevention. However, correctly assessing the 3D shape of complex anatomies, such as the spine, from planar fluoroscopic images remains a challenge even for experienced surgeons. This work proposes a novel deep learning-based method to intraoperatively estimate the 3D shape of patients' lumbar vertebrae directly from sparse, multi-view X-ray data. High-quality and accurate 3D reconstructions were achieved with a learned multi-view stereo machine approach capable of incorporating the X-ray calibration parameters into the neural network. This strategy allowed a priori knowledge of the spinal shape to be acquired while preserving patient specificity and achieving higher accuracy than the state of the art. Our method was trained and evaluated on 17,420 fluoroscopy images that were digitally reconstructed from the public CTSpine1K dataset. Evaluated on unseen data, our method achieved an 88% average F1 score and a 71% surface score. Furthermore, by utilizing the calibration parameters of the input X-rays, our method outperformed a state-of-the-art counterpart by 22% in terms of surface score. This increase in accuracy opens new possibilities for surgical navigation and intraoperative decision-making solely based on intraoperative data, especially in surgical applications where the acquisition of 3D image data is not part of the standard clinical workflow.
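The abstract reports F1 and surface scores computed between reconstructed and ground-truth vertebral surfaces. As a minimal sketch of how such a point-cloud F1 metric is typically computed, the snippet below matches each point to its nearest neighbor in the other cloud and thresholds the distance. The threshold `tau` and the use of `scipy.spatial.cKDTree` are illustrative assumptions, not the paper's exact evaluation protocol.

```python
import numpy as np
from scipy.spatial import cKDTree

def f1_at_threshold(pred_pts, gt_pts, tau=1.0):
    """Chamfer-style precision/recall/F1 between two surface point clouds.

    A predicted point counts as correct if its nearest ground-truth
    neighbor lies within distance tau (the unit and exact threshold
    used in the paper are not restated here, so tau is an assumption).
    """
    d_pred_to_gt, _ = cKDTree(gt_pts).query(pred_pts)  # nearest GT distance per prediction
    d_gt_to_pred, _ = cKDTree(pred_pts).query(gt_pts)  # nearest prediction distance per GT point
    precision = float(np.mean(d_pred_to_gt < tau))
    recall = float(np.mean(d_gt_to_pred < tau))
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# toy check: identical clouds yield a perfect score
pts = np.random.rand(500, 3)
print(f1_at_threshold(pts, pts.copy()))  # 1.0
```

Precision rewards reconstructions that do not hallucinate surface, recall rewards coverage of the true surface, and the harmonic mean balances the two.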

Video

Applying Domain Adaptation: X23D on Real Data

In our follow-up study, “Domain Adaptation Strategies for 3D Reconstruction of the Lumbar Spine Using Real Fluoroscopy Data”, we addressed the domain gap between synthetic training data and real intraoperative images. This study introduced a paired dataset combining synthetic and real fluoroscopic images, enabling our deep learning model to achieve robust 3D reconstructions directly from real X-rays. Utilizing transfer learning and style adaptation, the refined X23D model now provides real-time, high-accuracy spinal reconstructions with a computational time of just 81.1 ms. This advancement bridges critical gaps in surgical navigation, paving the way for enhanced surgical planning and robotics. Read more in the Medical Image Analysis journal.

DRR Generation UI

Figure 1: DRR Generation UI

Data Collection Pipeline

Figure 2: Data Collection Pipeline
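The training images were digitally reconstructed radiographs (DRRs) rendered from CTSpine1K CT volumes, as shown in the pipeline above. As a hedged illustration of the principle, a toy orthographic DRR can be produced by mapping Hounsfield units to linear attenuation coefficients and applying the Beer-Lambert law along each ray. The actual pipeline uses perspective projection governed by the X-ray calibration parameters, and the HU-to-attenuation mapping here (`mu_water = 0.02` per mm) is an assumption.

```python
import numpy as np

def simple_drr(ct_volume, axis=0, mu_water=0.02):
    """Minimal orthographic DRR: integrate attenuation along one axis.

    ct_volume holds Hounsfield units; they are mapped to linear
    attenuation coefficients and summed along the ray direction,
    then converted to transmitted intensity via Beer-Lambert.
    A toy parallel-beam stand-in for calibrated perspective ray casting.
    """
    mu = mu_water * (1.0 + ct_volume / 1000.0)  # HU -> attenuation (water-based scaling)
    mu = np.clip(mu, 0.0, None)                 # air/padding cannot attenuate negatively
    path_integral = mu.sum(axis=axis)           # line integral along each ray
    return np.exp(-path_integral)               # transmitted fraction in [0, 1]

# toy CT: a bone-like block suspended in air
vol = np.full((64, 64, 64), -1000.0)            # air (HU = -1000)
vol[16:48, 16:48, 16:48] = 400.0                # dense block
image = simple_drr(vol)                         # 64 x 64 projection
```

Rays passing only through air arrive unattenuated (intensity 1.0), while rays crossing the dense block are darkened, reproducing the familiar radiographic contrast.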

Results on Real and Synthetic Data

Figure 3: Results on Real and Synthetic Data

Evaluation Study: Ex-Vivo Validation of X23D

Our evaluation study, “Spinal Navigation with AI-Driven 3D Reconstruction of Fluoroscopy Images: An Ex-Vivo Feasibility Study”, validated X23D’s surgical application in realistic settings. In this study, five spine surgeons placed pedicle screws on human torsi using our AI-based system and compared it against traditional fluoroscopy. Results showed comparable breach rates, reduced radiation exposure, and promising user feedback, affirming X23D’s potential as a reliable, radiation-efficient alternative to conventional navigation systems. Learn more about our findings in BMC Musculoskeletal Disorders.

BibTeX


@article{jecklin2022x23d,
  author    = {Jecklin, Sascha and Jancik, Carla and Farshad, Mazda and F{\"u}rnstahl, Philipp and Esfandiari, Hooman},
  title     = {X23D - Intraoperative 3D Lumbar Spine Shape Reconstruction Based on Sparse Multi-View X-ray Data},
  journal   = {Journal of Imaging},
  volume    = {8},
  number    = {10},
  pages     = {271},
  year      = {2022},
  publisher = {MDPI}
}

@article{jecklin_domain_2024,
  author    = {Jecklin, Sascha and Shen, Youyang and Gout, Amandine and Suter, Daniel and Calvet, Lilian and Zingg, Lukas and Straub, Jennifer and Cavalcanti, Nicola Alessandro and Farshad, Mazda and F{\"u}rnstahl, Philipp and Esfandiari, Hooman},
  title     = {Domain Adaptation Strategies for 3D Reconstruction of the Lumbar Spine Using Real Fluoroscopy Data},
  journal   = {Medical Image Analysis},
  volume    = {98},
  pages     = {103322},
  year      = {2024},
  doi       = {10.1016/j.media.2024.103322},
  url       = {https://www.sciencedirect.com/science/article/pii/S1361841524002470}
}

@article{luchmann_spinal_2024,
  author    = {Luchmann, Dietmar and Jecklin, Sascha and Cavalcanti, Nicola A. and Laux, Christoph J. and Massalimova, Aidana and Esfandiari, Hooman and Farshad, Mazda and F{\"u}rnstahl, Philipp},
  title     = {Spinal Navigation with {AI}-Driven 3D Reconstruction of Fluoroscopy Images: An Ex-Vivo Feasibility Study},
  journal   = {{BMC} Musculoskeletal Disorders},
  volume    = {25},
  number    = {1},
  pages     = {925},
  year      = {2024},
  doi       = {10.1186/s12891-024-08052-2},
  url       = {https://doi.org/10.1186/s12891-024-08052-2}
}