As surgical robots are made progressively smaller and their actuation systems simplified, the opportunity arises to re-evaluate how we integrate them into operating room workflows. Over the past few years, several research groups have shown that robots can be made small and light enough to become hand-held tools, in contrast to the prevailing commercial paradigm of large, multi-arm, floor-mounted systems that must be teleoperated from a remote console. This hand-held paradigm enables robots to fit much more seamlessly into existing clinical workflows, and these new robots consequently need to be paired with similarly compact user interfaces. It also gives rise to a new area of user interface research: exploring how the surgeon can control the position and orientation of the overall system while simultaneously controlling the small robotic manipulators that maneuver dexterously at the tip. In this paper, we compare an onboard user interface mounted directly to the robotic platform against a traditional offboard user interface positioned away from the robot. With the offboard interface, the surgeon positions the robot, a support arm holds it in place, and the surgeon operates the manipulators from the surgeon console, moving back and forth between the robot and the console as often as desired. Three experiments were conducted, and the results show that the onboard interface enables statistically significantly faster performance in a point-touching task performed in a virtual environment.
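To make the kind of timing comparison described above concrete, the sketch below shows one way paired completion times for the two interfaces could be tested statistically. The data values, sample size, and choice of a paired t-test (with a Wilcoxon alternative) are illustrative assumptions, not the analysis actually reported in the paper.

```python
# Hypothetical sketch: comparing point-touching completion times for the onboard
# vs. offboard interfaces with a paired test. All numbers are made up for
# illustration; the paper's actual data and statistical test may differ.
import numpy as np
from scipy import stats

# Completion times in seconds for the same participants under both interfaces.
onboard = np.array([41.2, 38.5, 44.0, 39.8, 42.1, 37.6, 40.3, 43.5])
offboard = np.array([52.7, 49.1, 55.3, 47.8, 51.0, 48.4, 53.2, 50.6])

# Paired t-test: each participant completes the task with both interfaces,
# so the two samples are dependent.
t_stat, p_value = stats.ttest_rel(onboard, offboard)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")

# Non-parametric alternative if normality of the paired differences is doubtful.
w_stat, p_wilcoxon = stats.wilcoxon(onboard, offboard)
print(f"Wilcoxon W = {w_stat:.2f}, p = {p_wilcoxon:.4f}")
```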
Ureteroscopic intrarenal surgery involves passing a flexible ureteroscope through the ureter into the kidney and is commonly used to treat kidney stones or upper tract urothelial carcinoma (UTUC). Flexible ureteroscopes (fURS) are limited by their visualization capability and fragility: regions of the collecting system can be missed during the procedure because they are hard to visualize or because the scope breaks. This contributes to a high recurrence rate for both kidney stone and UTUC patients. We introduce an automated, patient-specific analysis that uses preoperative CT scans to determine which regions of the renal collecting system are viewable.
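As a rough illustration of what a patient-specific viewability analysis can involve, the sketch below marks which points on a collecting-system surface mesh (segmented from a preoperative CT) fall inside an assumed scope field of view and are unoccluded from an assumed tip pose. The mesh file, pose, field-of-view angle, and ray-casting approach are all assumptions for illustration, not the method described in the paper.

```python
# Hypothetical viewability check from a single candidate scope pose.
# Assumes a single watertight surface mesh segmented from the preoperative CT.
import numpy as np
import trimesh

mesh = trimesh.load("collecting_system.stl")        # assumed segmented mesh file
tip = np.array([10.0, -5.0, 30.0])                  # assumed scope tip position (CT coordinates)
view_dir = np.array([0.0, 0.0, 1.0])                # assumed viewing direction (unit vector)
fov_half_angle = np.deg2rad(45.0)                   # assumed half field-of-view of the scope

points = mesh.vertices
rays = points - tip
dists = np.linalg.norm(rays, axis=1)
dirs = rays / dists[:, None]

# 1) Field-of-view test: angle between the viewing direction and the ray to each point.
in_fov = np.arccos(np.clip(dirs @ view_dir, -1.0, 1.0)) < fov_half_angle

# 2) Occlusion test: the closest mesh intersection along each ray must lie (nearly)
#    at the point itself, otherwise intervening anatomy blocks the line of sight.
#    (One ray per vertex is slow without trimesh's optional embree backend.)
origins = np.repeat(tip[None, :], len(points), axis=0)
locations, ray_idx, _ = mesh.ray.intersects_location(origins, dirs)
hit_dist = np.full(len(points), np.inf)
np.minimum.at(hit_dist, ray_idx, np.linalg.norm(locations - tip, axis=1))
unoccluded = np.abs(hit_dist - dists) < 1e-3

viewable = in_fov & unoccluded
print(f"{viewable.sum()} of {len(points)} surface points viewable from this pose")
```

Repeating this check over a set of reachable scope poses would give a per-point viewability map for the whole collecting system.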
Image segmentation has been increasingly applied in medical settings as recent advances in deep learning have greatly expanded its potential applications. Urology in particular is a field of medicine primed for the adoption of real-time image segmentation, with the long-term aim of automating endoscopic stone treatment. In this project, we explored supervised deep learning models for annotating kidney stones in surgical endoscopic video feeds. In this paper, we describe how we built a dataset from the raw videos and how we developed a pipeline that automates as much of the process as possible. For the segmentation task, we adapted and analyzed three baseline deep learning models, U-Net, U-Net++, and DenseNet, to predict annotations on the frames of the endoscopic videos, with the best achieving accuracy above 90%. To show clinical potential for real-time use, we also confirmed that our best trained model can accurately annotate new videos at 30 frames per second. Our results demonstrate that image segmentation for annotating ureteroscopic video feeds justifies continued development and study.
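As an illustration of the kind of real-time inference loop described above, the sketch below runs a U-Net-style model frame by frame over a video and reports throughput. The segmentation_models_pytorch architecture, checkpoint file, input size, and video path are assumptions for illustration, not the paper's actual models or pipeline.

```python
# Minimal sketch of frame-by-frame kidney-stone segmentation on an endoscopic
# video, with a throughput measurement. Checkpoint and video paths are assumed.
import time
import cv2
import numpy as np
import torch
import segmentation_models_pytorch as smp

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# U-Net with a ResNet-34 encoder; weights come from an assumed trained checkpoint.
model = smp.Unet(encoder_name="resnet34", encoder_weights=None, in_channels=3, classes=1)
model.load_state_dict(torch.load("stone_unet.pt", map_location=device))
model.to(device).eval()

cap = cv2.VideoCapture("ureteroscopy_clip.mp4")  # assumed endoscopic video file
frames, start = 0, time.time()
with torch.no_grad():
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Resize and normalize the frame, then convert to an NCHW tensor.
        img = cv2.resize(frame, (256, 256)).astype(np.float32) / 255.0
        x = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0).to(device)
        mask = torch.sigmoid(model(x))[0, 0] > 0.5   # binary stone mask for this frame
        frames += 1
cap.release()
print(f"{frames / (time.time() - start):.1f} frames per second")
```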
KEYWORDS: 3D modeling, Data modeling, 3D image reconstruction, Surgery, Imaging systems, 3D image processing, Luminescence, Image segmentation, Endoscopes, Kidney, Robotic surgery
Over the past several years, researchers have made significant progress toward providing image guidance for the da Vinci system, using data sources such as robot kinematic data, endoscope image data, and preoperative medical images. One data source that could provide additional subsurface information for use in image guidance is the da Vinci’s FireFly camera system. FireFly is a fluorescence imaging feature for the da Vinci system that uses injected indocyanine green dye and special endoscope filters to illuminate subsurface anatomical features as the surgeon operates. FireFly is now standard for many surgical procedures with the da Vinci robot; however, it is currently challenging to understand spatial relationships between preoperative CT images and intraoperative fluorescence images. Here, we extend our image guidance system to also incorporate FireFly information, so that the surgeon can view FireFly data in the image guidance display while operating with the da Vinci robot. We present a method for reconstructing 3D models of the FireFly fluorescence data from endoscope images and mapping the models into an image guidance display that also includes segmented, registered preoperative CT images. We analyze the accuracy of our reconstruction and mapping method and present a proof-of-concept application where we reconstruct a fluorescent subsurface blood vessel and map it into our image guidance display. Our method could be used to provide surgeons with additional context for the FireFly fluorescence imaging data or to provide additional data for computing or verifying the registration between the robot and preoperative images.
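For illustration, the sketch below shows one plausible final step of such a pipeline: triangulating corresponding fluorescent image points from the stereo endoscope views and mapping the resulting 3D points into the preoperative CT frame with a rigid registration transform. The projection matrices, correspondences, and registration transform are placeholder assumptions, not the reconstruction method reported in the paper.

```python
# Hypothetical sketch: triangulate fluorescent points from the stereo endoscope,
# then express them in the preoperative CT frame via a rigid registration.
# All matrices and point correspondences below are illustrative placeholders.
import numpy as np
import cv2

# 3x4 projection matrices for the left/right endoscope cameras (assumed calibrated).
P_left = np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = np.hstack([np.eye(3), np.array([[-5.0], [0.0], [0.0]])])  # assumed 5 mm baseline

# Matching fluorescent pixel locations in both views, shape 2xN (assumed correspondences).
pts_left = np.array([[320.0, 330.0, 340.0], [240.0, 242.0, 238.0]])
pts_right = np.array([[310.0, 321.0, 332.0], [240.0, 242.0, 238.0]])

# Triangulate to homogeneous 3D points in the endoscope frame, then dehomogenize.
pts_h = cv2.triangulatePoints(P_left, P_right, pts_left, pts_right)   # 4xN
pts_endo = (pts_h[:3] / pts_h[3]).T                                   # Nx3

# Rigid transform from the endoscope frame to the CT frame, e.g. from the
# robot-to-CT registration used by the image guidance system (identity placeholder).
T_endo_to_ct = np.eye(4)
pts_ct = (T_endo_to_ct @ np.vstack([pts_endo.T, np.ones(len(pts_endo))]))[:3].T

print(pts_ct)  # fluorescence points expressed in preoperative CT coordinates
```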