Augmented Reality (AR) is a convenient way of porting information from medical images into the surgical field of view and can deliver valuable assistance to the surgeon, especially in laparoscopic procedures. In addition, high-definition (HD) laparoscopic video devices are a great improvement over the previously used low-resolution equipment. However, in AR applications that rely on real-time detection of fiducials from video streams, the introduction of HD devices has increased the demand for efficient image processing. We present an algorithm based on the well-known Conditional Density Propagation (CONDENSATION) algorithm that can satisfy these new demands. By wrapping a prediction step around an existing, robust segmentation algorithm, we can speed up the whole procedure while leaving the robustness of the fiducial segmentation untouched. For evaluation purposes we tested the algorithm on recordings from real interventions, allowing for a meaningful interpretation of the results. Our results show that we can accelerate the segmentation by a factor of 3.5 on average. Moreover, the prediction information can be used to compensate for fiducials that are temporarily occluded or outside the field of view, providing greater stability.
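To make the prediction step concrete, here is a minimal Python sketch of one CONDENSATION (particle filter) iteration for a single 2D fiducial position. The Gaussian motion model, the `measure` callable standing in for the segmentation response, and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def condensation_step(particles, weights, measure, motion_std=3.0):
    """One CONDENSATION iteration for a 2D fiducial position.

    particles: (N, 2) array of candidate image positions.
    weights:   (N,) importance weights, normalized to sum to 1.
    measure:   callable mapping an (x, y) position to a fiducial likelihood,
               e.g. a color segmentation response (placeholder assumption).
    """
    n = len(particles)
    # 1. Resample: draw hypotheses in proportion to their current weights.
    idx = np.random.choice(n, size=n, p=weights)
    resampled = particles[idx]
    # 2. Predict: diffuse each hypothesis with a simple Gaussian motion model.
    predicted = resampled + np.random.normal(0.0, motion_std, size=(n, 2))
    # 3. Measure: re-weight each hypothesis by the segmentation response.
    new_weights = np.array([measure(p) for p in predicted])
    new_weights = np.maximum(new_weights, 1e-12)  # avoid division by zero
    return predicted, new_weights / new_weights.sum()
```

The weighted mean of the particles then gives a predicted fiducial position; presumably the full segmentation only needs to run in a small region of interest around that prediction, which is where a speed-up of this kind comes from.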
Augmented reality (AR) for enhancement of intra-operative images is gaining increasing interest in the field of
navigated medical interventions. In this context, various imaging modalities such as ultrasound (US), C-Arm
computed tomography (CT) and endoscopic images have been applied to acquire intra-operative information
about the patient's anatomy. The aim of this paper was to evaluate the potential of the novel Time-of-Flight
(ToF) camera technique as a means for markerless intra-operative registration. For this purpose, ToF range data
and corresponding CT images were acquired from a set of explanted non-transplantable human and porcine
organs equipped with a set of markers that served as targets. Based on a rigid matching of the surfaces generated
from the ToF images with the organ surfaces generated from the CT data, the targets extracted from the
planning images were superimposed on the 2D ToF intensity images, and the target visualization error (TVE)
was computed as quality measure. Color video data of the same organs were further used to assess the TVE of a
previously proposed marker-based registration method. The ToF-based registration showed promising accuracy, yielding a mean TVE of 2.5 ± 1.1 mm, compared to 0.7 ± 0.4 mm with the marker-based approach. Furthermore,
the target registration error (TRE) was assessed to determine the anisotropy in the localization error of ToF
image data. The TRE was 8.9 ± 4.7 mm on average, indicating a high localization error in the viewing direction of the camera. Nevertheless, the still young ToF technique may become a valuable means for intra-operative surface
acquisition. Future work should focus on the calibration of systematic distance errors.
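Although the paper's registration matches entire surfaces, its core is a rigid transform estimate. The sketch below shows the standard closed-form least-squares rigid alignment (Kabsch/Horn) for corresponding 3D point sets, as used inside one ICP iteration, plus the kind of mean Euclidean target error behind TVE/TRE-style measures. The function names and the assumption of known correspondences are ours, not the paper's.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst.

    src, dst: (N, 3) arrays of corresponding 3D points, e.g. ToF surface
    samples and their closest CT surface points in one ICP iteration.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

def mean_target_error(R, t, targets_src, targets_ref):
    """Mean Euclidean distance between transformed and reference targets."""
    mapped = targets_src @ R.T + t
    return np.linalg.norm(mapped - targets_ref, axis=1).mean()
```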
KEYWORDS: Endoscopy, Video, 3D acquisition, Medical imaging, Visualization, Visual process modeling, Laparoscopy, 3D modeling, Natural surfaces
A growing number of applications in the field of computer-assisted laparoscopic interventions depend on accurate and fast 3D surface acquisition. The most commonly applied methods for 3D reconstruction of organ surfaces from 2D endoscopic images involve establishing correspondences in image pairs to allow for computation of 3D point coordinates via triangulation. The popular feature-based approach to correspondence search applies a feature descriptor to compute high-dimensional feature vectors describing the characteristics of selected image points. Correspondences are established between image points with similar feature vectors. In a previous study, the performance of a large set of state-of-the-art descriptors for use in minimally invasive surgery was assessed. However, standard Phase Alternating Line (PAL) endoscopic images were utilized for this purpose. In this paper, we apply some of the best performing feature descriptors to in-vivo PAL endoscopic images as well as to High Definition Television (HDTV) endoscopic images of the same scene and show that the quality of the correspondences can be increased significantly when using high resolution images.
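As an illustration of the feature-based pipeline, the following Python sketch detects keypoints in an endoscopic image pair, computes descriptors, and keeps only unambiguous correspondences via Lowe's ratio test, using OpenCV. SIFT merely stands in for the descriptors evaluated in the paper; the function name and ratio threshold are our assumptions.

```python
import cv2

def match_endoscopic_pair(img_left, img_right, ratio=0.75):
    """Descriptor-based correspondence search between two grayscale frames."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_left, None)
    kp2, des2 = sift.detectAndCompute(img_right, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) < 2:
            continue
        m, n = pair
        if m.distance < ratio * n.distance:   # Lowe's ratio test
            good.append((kp1[m.queryIdx].pt, kp2[m.trainIdx].pt))
    return good  # list of ((x1, y1), (x2, y2)) pixel correspondences
```

The returned point pairs are exactly what a subsequent triangulation step consumes to compute 3D surface coordinates.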
Image Guided Therapy (IGT) confronts researchers with high demands in system design, prototype implementation, and evaluation. The lack of standardized software tools, such as algorithm implementations, tracking device and tool setups, and data processing methods, escalates the labor of system development and sustainable
system evaluation. In this paper, a new toolkit component of the Medical Imaging and Interaction Toolkit
(MITK), the MITK-IGT, and its exemplary application for computer-assisted prostate surgery are presented.
MITK-IGT aims at integrating software tools, algorithms and tracking device interfaces into the MITK toolkit
to provide a comprehensive software framework for computer aided diagnosis support, therapy planning, treatment
support, and radiological follow-up. An exemplary application of the MITK-IGT framework is introduced
with a surgical navigation system for laparoscopic prostate surgery. It illustrates the broad range of application
possibilities provided by the framework, as well as its simple extensibility with custom algorithms and other
software modules.
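MITK-IGT processes tracking data in a pipeline of connected filter stages. The Python sketch below mimics that design only conceptually: MITK is a C++ toolkit, and every class and method name here is hypothetical, not MITK-IGT's actual API.

```python
from dataclasses import dataclass

@dataclass
class NavigationData:
    """Hypothetical pose record for one tracked tool (position in mm)."""
    position: tuple
    valid: bool = True

class TrackingSource:
    """Hypothetical source stage wrapping a tracking device driver."""
    def __init__(self, device):
        self.device = device                      # assumed driver object
    def update(self):
        x, y, z, ok = self.device.read()          # assumed driver interface
        return NavigationData((x, y, z), ok)

class SmoothingFilter:
    """Hypothetical filter stage: exponential smoothing of positions."""
    def __init__(self, upstream, alpha=0.3):
        self.upstream, self.alpha, self._last = upstream, alpha, None
    def update(self):
        nd = self.upstream.update()
        if nd.valid and self._last is not None:
            nd.position = tuple(self.alpha * p + (1 - self.alpha) * q
                                for p, q in zip(nd.position, self._last))
        self._last = nd.position
        return nd

# Stages are chained source -> filter(s) -> consumer; the consumer simply
# calls update() on the last stage once per frame.
```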
KEYWORDS: Image segmentation, 3D modeling, Data modeling, Statistical modeling, Prostate, Ultrasonography, 3D image processing, Databases, Medical imaging, Prostate cancer
Due to the high noise and artifacts typically encountered in ultrasound images, segmenting objects from this
modality is one of the most challenging tasks in medical image analysis. Model-based approaches like statistical
shape models (SSMs) incorporate prior knowledge that supports object detection in case of incomplete evidence
from the image data. How well the model adapts to an unseen image is primarily determined by the suitability of the appearance model used, which evaluates the goodness of fit during model evolution. In this paper, we compare
two gradient profile models with a region-based approach featuring local histograms to detect the prostate in
3D transrectal ultrasound (TRUS) images. All models are used within an SSM segmentation framework with
optimal surface detection for outlier removal. Evaluation was performed using cross-validation on 35 datasets.
While the histogram model failed in 10 cases, both gradient models had only 2 failures and reached an average
surface distance of 1.16 ± 0.38 mm in comparison with interactively generated reference contours.
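To illustrate what such an appearance model evaluates, here is a simplified gradient profile score for a 1D intensity profile sampled along a surface normal. Rewarding a strong edge near the current contour position is the basic idea, but the exact formulation, weighting, and names are our assumptions rather than the models compared in the paper.

```python
import numpy as np

def gradient_profile_score(profile):
    """Score a 1D intensity profile sampled along a surface normal.

    profile: (K,) array of image intensities, centered on the current
    boundary estimate. A strong gradient close to the profile center
    indicates a well-placed boundary point.
    """
    grad = np.gradient(profile.astype(float))
    k = int(np.argmax(np.abs(grad)))     # position of the strongest edge
    center = len(profile) // 2
    # Reward edge strength, penalize displacement from the current contour.
    return float(np.abs(grad[k])) - abs(k - center)
```

During model evolution, each candidate boundary position receives such a score, and the optimal surface detection step then removes outliers among the per-point optima.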
We introduce a novel navigation system to support minimally invasive prostate surgery. The system utilizes
transrectal ultrasonography (TRUS) and needle-shaped navigation aids to visualize hidden structures via Augmented
Reality. During the intervention, the navigation aids are segmented once from a 3D TRUS dataset and
subsequently tracked by the endoscope camera. Camera pose estimation methods directly determine the position and orientation of the camera in relation to the navigation aids. Accordingly, our system does not require any external tracking device for registration of the endoscope camera and the ultrasonography probe. In addition to a preoperative planning step in which the navigation targets are defined, the procedure consists of two main steps that are carried out during the intervention: First, the preoperatively prepared planning data is registered with an
intraoperatively acquired 3D TRUS dataset and the segmented navigation aids. Second, the navigation aids are
continuously tracked by the endoscope camera. The camera's pose can thereby be derived and relevant medical
structures can be superimposed on the video image.
This paper focuses on the latter step. We have implemented several promising real-time algorithms and incorporated them into the open-source toolkit MITK (www.mitk.org). Furthermore, we have evaluated them
for minimally invasive surgery (MIS) navigation scenarios. For this purpose, a virtual evaluation environment
has been developed, which allows for the simulation of navigation targets and navigation aids, including their
measurement errors. Besides evaluating the accuracy of the computed pose, we have analyzed the impact of an
inaccurate pose and the resulting displacement of navigation targets in Augmented Reality.
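As a concrete example of the pose estimation step, the sketch below recovers the camera pose from the 3D navigation-aid coordinates and their 2D detections in the endoscopic video using OpenCV's standard PnP solver. This is one common choice, not necessarily among the algorithms evaluated in the paper, and the function name and parameters are our assumptions.

```python
import numpy as np
import cv2

def estimate_camera_pose(aid_points_3d, aid_points_2d, K, dist_coeffs=None):
    """Camera pose from >= 4 2D-3D correspondences via PnP.

    aid_points_3d: (N, 3) navigation-aid coordinates from the 3D TRUS dataset.
    aid_points_2d: (N, 2) corresponding pixel positions in the video frame.
    K:             3x3 intrinsic matrix from endoscope camera calibration.
    """
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)                 # assume undistorted images
    ok, rvec, tvec = cv2.solvePnP(
        aid_points_3d.astype(np.float32),
        aid_points_2d.astype(np.float32),
        K, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)                    # rotation vector -> matrix
    return R, tvec                                # pose of aids in camera frame
```

With the pose in hand, the planned targets can be projected into the video image to produce the Augmented Reality overlay, and any pose error translates directly into the target displacement analyzed in the evaluation.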