C-arm fluoroscopy units provide continuously updating X-ray video images during surgical procedures. The
modality is widely adopted for its low cost, real-time imaging capabilities, and its ability to display radio-opaque
tools in the anatomy. It is, however, important to correct for fluoroscopic image distortion and estimate camera
parameters, such as focal length and camera center, for registration with 3D CT scans in fluoroscopic image-guided
procedures. This paper describes a method for C-arm calibration and evaluates its accuracy in multiple
C-arm units and in different viewing orientations. The proposed calibration method employs a commercially available
unit to track the C-arm and a calibration plate. The method estimates both the internal calibration
parameters and the transformation between the coordinate systems of tracker and C-arm. The method was
successfully tested on two C-arm units (GE OEC 9800 and GE OEC 9800 Plus) of different image intensifier
sizes and verified with a rigid airway phantom model. The mean distortion-model error was found to be 0.14
mm and 0.17 mm for the respective C-arms. The mean overall system reprojection error (which measures the
accuracy of predicting an image using tracker coordinates) was found to be 0.63 mm for the GE OEC 9800.
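As a rough illustration of the reprojection-error metric quoted above, a minimal sketch of pinhole projection and tracker-to-camera reprojection follows; the function names, matrix conventions, and units are assumptions of this sketch, not the paper's implementation.

```python
# Hypothetical sketch of pinhole projection and the reprojection-error check
# described above; intrinsic values and bead coordinates are illustrative.
import numpy as np

def project(points_cam, focal, center):
    """Project 3D points (N x 3, camera frame) onto the image plane."""
    x = focal * points_cam[:, 0] / points_cam[:, 2] + center[0]
    y = focal * points_cam[:, 1] / points_cam[:, 2] + center[1]
    return np.stack([x, y], axis=1)

def reprojection_error(beads_tracker, T_cam_from_tracker, focal, center, observed):
    """Mean 2D distance between predicted and observed bead positions."""
    beads_h = np.hstack([beads_tracker, np.ones((len(beads_tracker), 1))])
    beads_cam = (T_cam_from_tracker @ beads_h.T).T[:, :3]
    predicted = project(beads_cam, focal, center)
    return np.mean(np.linalg.norm(predicted - observed, axis=1))
```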
Stereotactic body-radiation therapy (SBRT) has gained acceptance in treating lung cancer. Localization of a
thoracic lesion is challenging as tumors can move significantly with breathing. Some SBRT systems compensate
for tumor motion with the intrafraction tracking of targets by two stereo fluoroscopy cameras. However, many
lung tumors lack a fluoroscopic signature and cannot be directly tracked. Small radiopaque fiducial markers,
acting as fluoroscopically visible surrogates, are instead implanted nearby. The spacing and configuration of
the fiducial markers is important to the success of the therapy as SBRT systems impose constraints on the
geometry of a fiducial-marker constellation. It is difficult even for experienced physicians to mentally assess the
validity of a constellation a priori. To address this challenge, we present the first automated planning system
for bronchoscopic fiducial-marker placement. Fiducial-marker planning is posed as a constrained combinatorial
optimization problem. Constraints include requiring access from a navigable airway, having sufficient separation
in the fluoroscopic imaging planes to resolve each individual marker, and avoiding major blood vessels.
Automated fiducial-marker planning takes approximately fifteen seconds, fitting within the clinical workflow.
The resulting locations are integrated into a virtual bronchoscopic planning system, which provides guidance to
each location during the implantation procedure. To date, we have retrospectively planned over 50 targets for
treatment, and have implanted markers according to the automated plan in one patient who then underwent
SBRT treatment. To our knowledge, this approach is the first to address automated bronchoscopic fiducial-marker
planning for SBRT.
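To make the combinatorial formulation concrete, here is a hedged sketch of a constrained search over candidate marker sites; the candidate arrays, the minimum-separation threshold, and the spread-based objective are hypothetical placeholders rather than the system's actual constraints or scoring.

```python
# Illustrative sketch of marker planning as a constrained combinatorial search.
# Candidate sites, thresholds, and the scoring rule are hypothetical.
from itertools import combinations
import numpy as np

def plan_constellation(candidates, airway_ok, vessel_clear, n_markers=3,
                       min_separation_mm=10.0):
    """Return the best n-marker subset satisfying all constraints.

    candidates   : (N, 3) array of candidate marker positions (mm)
    airway_ok    : (N,) bool, reachable from a navigable airway
    vessel_clear : (N,) bool, sufficiently far from major vessels
    """
    feasible = [i for i in range(len(candidates))
                if airway_ok[i] and vessel_clear[i]]
    best, best_score = None, -np.inf
    for combo in combinations(feasible, n_markers):
        pts = candidates[list(combo)]
        # Pairwise separation so each marker resolves in the imaging planes.
        dists = [np.linalg.norm(pts[i] - pts[j])
                 for i, j in combinations(range(n_markers), 2)]
        if min(dists) < min_separation_mm:
            continue
        score = min(dists)  # placeholder objective: prefer well-spread markers
        if score > best_score:
            best, best_score = combo, score
    return best
```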
Reliable transbronchial access of peripheral lung lesions is desirable for the diagnosis and potential treatment
of lung cancer. This procedure can be difficult, however, because accessory devices (e.g., needle or forceps)
cannot be reliably localized while deployed. We present a fluoroscopic image-guided intervention (IGI) system
for tracking such bronchoscopic accessories. Fluoroscopy, an imaging technology currently utilized by many
bronchoscopists, has a fundamental shortcoming: many lung lesions are invisible in its images. Our IGI
system aligns a digitally reconstructed radiograph (DRR) defined from a pre-operative computed tomography
(CT) scan with live fluoroscopic images. Radiopaque accessory devices are readily apparent in fluoroscopic video,
while lesions lacking a fluoroscopic signature but identifiable in the CT scan are superimposed in the scene. The
IGI system processing steps consist of: (1) calibrating the fluoroscopic imaging system; (2) registering the CT
anatomy with its depiction in the fluoroscopic scene; and (3) optically tracking the fluoroscope to continually update
the DRR and target positions as it is moved about the patient. The end result is a continuous correlation of the
DRR and projected targets with the anatomy depicted in the live fluoroscopic video feed. Because both targets
and bronchoscopic devices are readily apparent in arbitrary fluoroscopic orientations, multiplane guidance is
straightforward. The system tracks in real-time with no computational lag. We have measured a mean projected
tracking accuracy of 1.0 mm in a phantom and present results from an in vivo animal study.
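The per-frame overlay in step (3) above amounts to re-projecting CT-defined targets through the calibrated, tracked geometry. A minimal sketch, assuming a standard pinhole model and placeholder transform names, is shown below; it is not the system's actual code path.

```python
# Minimal sketch, under assumed geometry, of re-projecting a CT-defined target
# into the live fluoroscopic frame as the tracked C-arm moves.
import numpy as np

def target_in_fluoro(target_ct, T_tracker_from_ct, T_cam_from_tracker, K):
    """Project a CT-space target (3-vector, mm) into fluoroscopic pixel coords.

    T_tracker_from_ct  : 4x4 CT-to-tracker transform (from registration)
    T_cam_from_tracker : 4x4 tracker-to-camera transform (from calibration,
                         updated as the C-arm is repositioned)
    K                  : 3x3 intrinsic matrix of the fluoroscope
    """
    p = np.append(target_ct, 1.0)
    p_cam = T_cam_from_tracker @ T_tracker_from_ct @ p
    uvw = K @ p_cam[:3]
    return uvw[:2] / uvw[2]
```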
Bronchoscopes contain wide-angle lenses that produce a large field of view but suffer from radial distortion. For
image-guided bronchoscopy, geometric calibration including distortion correction is essential for comparing video
images to renderings developed from 3D computed-tomography (CT) images. This paper describes an easy-to-use
system for bronchoscopic video-distortion correction and studies the robustness of the resulting calibration over a
wide range of conditions. The internal calibration method integrated into the system incorporates a well-known
camera calibration framework devised for general camera-distortion correction. The robustness study considers
the calibration results as follows: (1) varying lighting during video capture, (2) using different numbers of captured
images for parameter estimation, (3) changing camera pose with respect to the calibration pattern, (4) recording
temporal changes in estimated parameters, and (5) comparing parameters between different bronchoscopes of the
same model. Multiple bronchoscopes were successfully calibrated under a variety of conditions.
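One widely used realization of such a general camera-calibration framework is checkerboard calibration as implemented in OpenCV; the sketch below illustrates that approach under assumed pattern dimensions and hypothetical frame filenames, and is not necessarily the exact framework integrated into the system.

```python
# Sketch of checkerboard-based intrinsic and distortion calibration with OpenCV.
# Pattern size and frame filenames are assumptions of this example.
import cv2
import numpy as np

pattern = (9, 6)  # assumed number of inner checkerboard corners
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points, size = [], [], None
for path in ["frame_%02d.png" % i for i in range(10)]:  # hypothetical frames
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        continue
    size = img.shape[::-1]
    found, corners = cv2.findChessboardCorners(img, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Estimate intrinsics plus radial/tangential distortion coefficients.
rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, size, None, None)
print("RMS reprojection error (px):", rms)
# Frames can then be undistorted with cv2.undistort(frame, K, dist).
```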
KEYWORDS: Video, Volume rendering, Visualization, 3D image processing, 3D modeling, 3D scanning, Bronchoscopy, Associative arrays, Video processing, Computed tomography
Early lung cancer can cause structural and color changes to the airway mucosa. A three-dimensional (3D)
multidetector CT (MDCT) chest scan provides 3D structural data for airway walls, but no detailed mucosal
information. Conversely, bronchoscopy gives color mucosal information that reflects airway-wall inflammation and
early cancer formation. Unfortunately, each bronchoscopic video image provides only a limited local view of
the airway mucosal surface and no 3D structural/location information. The physician has to mentally correlate
the video images with each other and the airway surface data to analyze the airway mucosal structure and
color. A fusion of the topographical information from the 3D MDCT data and the color information from the
bronchoscopic video enables 3D visualization, navigation, localization, and combined color-topographic analysis
of the airways. This paper presents a fast method for topographic airway-mucosal surface fusion of bronchoscopic
video with 3D MDCT endoluminal views. Tests were performed on phantom sequences, real bronchoscopy
patient video, and associated 3D MDCT scans. Results show that we can effectively accomplish mapping over
a continuous sequence of airway images spanning several generations of airways in a few seconds. Real-time
navigation and visualization of the combined data was performed. The average surface-point mapping error for
a phantom case was estimated to be only on the order of 2 mm for a 20 mm diameter airway.
Endoscopic needle biopsy requires off-line 3D computed-tomography (CT) chest image analysis to plan a biopsy site followed by live endoscopy to perform the biopsy. We present a method for continuous image-based endoscopic guidance that interleaves periodic normalized-mutual-information-based CT-video registration with optical-flow-based endoscopic video motion tracking. The method operates at a near real-time rate and was successfully tested on endoscopic video sequences for phantom and human lung-cancer cases. We also illustrate its use when incorporated into a complete system for image-based planning and guidance of endoscopy.
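A hedged sketch of how such interleaving might look follows: pyramidal Lucas-Kanade optical flow propagates tracked points between frames, while a histogram-based normalized-mutual-information score (one standard formulation) supports the periodic CT-video registration; the registration optimizer itself is omitted, and none of this is the paper's exact implementation.

```python
# Sketch: fast frame-to-frame optical-flow tracking interleaved with a
# placeholder NMI measure for periodic CT-video registration.
import cv2
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI = (H(a) + H(b)) / H(a, b) for two grayscale images."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))
    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

def track_between_registrations(prev_frame, next_frame, prev_pts):
    """Propagate feature points (N x 1 x 2 float32) with pyramidal LK flow."""
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_frame, next_frame,
                                                   prev_pts, None)
    return next_pts[status.ravel() == 1]
```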
An endoscope is a commonly used instrument for performing minimally invasive visual examination of the tissues inside the body. A physician uses the endoscopic video images to identify tissue abnormalities. The images, however, are highly dependent on the optical properties of the endoscope and its orientation and location with respect to the tissue structure. The analysis of endoscopic video images is, therefore, purely subjective. Studies suggest that the fusion of endoscopic video images (providing color and texture information) with virtual endoscopic views (providing structural information) can be useful for assessing various pathologies for several applications: (1) surgical simulation, training, and pedagogy; (2) the creation of a database for pathologies; and (3) the building of patient-specific models. Such fusion requires both geometric and radiometric alignment of endoscopic video images in the texture space. Inconsistent estimates of texture/color of the tissue surface result in seams when multiple endoscopic video images are combined together. This paper (1) identifies the endoscope-dependent variables to be calibrated for objective and consistent estimation of surface texture/color and (2) presents an integrated set of methods to measure them. Results show that the calibration method can
be successfully used to estimate objective color/texture values for simple planar scenes, whereas uncalibrated endoscopes performed very poorly for the same tests.
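As one illustrative radiometric component that such a calibration could include (not the paper's full pipeline), a flat-field correction derived from an image of a uniform white target can be sketched as follows.

```python
# Generic flat-field (vignetting) correction sketch; the white-reference image
# and normalization scheme are assumptions, not the paper's method.
import numpy as np

def flat_field_correct(frame, white_reference, eps=1e-6):
    """Normalize per-pixel gain using a captured white-target image (float arrays)."""
    gain = white_reference / (white_reference.max() + eps)
    return np.clip(frame / (gain + eps), 0.0, 1.0)
```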
Previous research has indicated that use of guidance systems during endoscopy can improve the performance
and decrease the skill variation of physicians. Current guidance systems, however, rely on
computationally intensive registration techniques or costly and error-prone electromagnetic (E/M)
registration techniques, neither of which fits seamlessly into the clinical workflow. We have previously
proposed a real-time image-based registration technique that addresses both of these problems. We
now propose a system-level approach that incorporates this technique into a complete paradigm for
real-time image-based guidance in order to provide a physician with continuously-updated navigational
and guidance information. At the core of the system is a novel strategy for guidance of endoscopy. Additional
elements such as global surface rendering, local cross-sectional views, and pertinent distances
are also incorporated into the system to provide additional utility to the physician. Phantom results
were generated using bronchoscopy performed on a rapid prototype model of a human tracheobronchial
airway tree. The system has also been tested in ongoing live human tests. Thus far, ten such tests,
focused on bronchoscopic interventions in pulmonary patients, have been run successfully.
KEYWORDS: Video, Image registration, Cameras, Bronchoscopy, 3D image processing, 3D modeling, Video processing, Image processing, Algorithm development, Lung cancer
Previous research has shown that CT-image-based guidance could be
useful for the bronchoscopic assessment of lung cancer. This
research drew upon the registration of bronchoscopic video images to
CT-based endoluminal renderings of the airway tree. The proposed methods were either restricted to discrete single-frame
registration, which took several seconds to complete, or required
non-real-time buffering and processing of video sequences. We have
devised a fast 2D/3D image registration method that performs
single-frame CT-video registration in under 1/15th of a second. This
allows the method to be used for real-time registration at full
video frame rates without significantly altering the physician's
behavior. The method achieves its speed through a gradient-based
optimization method that allows most of the computation to be
performed off-line. During live registration, the optimization
iteratively steps toward the locally optimal viewpoint at which a
CT-based endoluminal view is most similar to a current bronchoscopic
video frame. After an initial registration to begin the process
(generally done in the trachea for bronchoscopy), subsequent
registrations are performed in real-time on each incoming video
frame. As each new bronchoscopic video frame becomes available, the
current optimization is initialized using the previous frame's
optimization result, allowing continuous guidance to proceed without
manual re-initialization. Tests were performed using both synthetic
and pre-recorded bronchoscopic video. The results show that the
method is robust to initialization errors, that registration
accuracy is high, and that continuous registration can proceed on
real-time video at more than 15 frames per second with minimal
user-intervention.
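A schematic sketch of the per-frame warm-started optimization loop described above is given below; the renderer, similarity gradient, step size, and iteration count are abstract placeholders, and the paper's precomputed off-line terms are not reproduced here.

```python
# Schematic per-frame registration loop: each frame's optimization is
# initialized from the previous frame's result (warm start).
import numpy as np

def register_sequence(frames, render_at, similarity_grad, initial_pose,
                      step=0.05, iters_per_frame=5):
    """Track the endoluminal viewpoint across incoming video frames.

    frames          : iterable of bronchoscopic video frames
    render_at       : pose -> CT-based endoluminal rendering
    similarity_grad : (frame, rendering, pose) -> gradient of image similarity
    initial_pose    : 6-vector (translation + rotation), e.g. from an initial
                      registration in the trachea
    """
    pose = np.asarray(initial_pose, dtype=float)
    poses = []
    for frame in frames:
        # A few gradient-ascent steps toward the locally optimal viewpoint.
        for _ in range(iters_per_frame):
            rendering = render_at(pose)
            pose = pose + step * similarity_grad(frame, rendering, pose)
        poses.append(pose.copy())
    return poses
```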
One of the indicators of early lung cancer is a color change in airway mucosa. Bronchoscopy of the major airways can provide high-resolution color video of the airway tree's mucosal surfaces. In addition, 3D MDCT chest images provide 3D structural information of the airways. Unfortunately, the bronchoscopic video contains no explicit 3D structural and position information, and the 3D MDCT data captures no color or textural information of the mucosa. A fusion of the topographical information from the 3D CT data and the color information from the bronchoscopic video, however, enables realistic 3D visualization, navigation, localization, and quantitative color-topographic analysis of the airways. This paper presents a method for topographic airway-mucosal surface mapping from bronchoscopic video onto 3D MDCT endoluminal views. The method uses registered video images and CT-based virtual endoscopic renderings of the airways. The visibility and depth data are also generated by the renderings. Uniform sampling and over-scanning of the visible triangles are done before they are packed into a texture space. The texels are then re-projected onto video images and assigned color values based on depth and illumination data obtained from renderings. The texture map is loaded into the rendering engine to enable real-time navigation through the combined 3D CT surface and bronchoscopic video data. Tests were performed on pre-recorded bronchoscopy patient video and associated 3D MDCT scans. Results show that we can effectively accomplish mapping over a continuous sequence of airway images spanning several generations of airways.
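To illustrate the re-projection step, a simplified sketch follows in which each visible surface texel is projected into the registered video frame, checked against a rendered depth buffer for occlusion, and assigned the sampled color; the intrinsic matrix, depth tolerance, and function signature are assumptions of this sketch.

```python
# Sketch of assigning colors to surface texels by re-projection into a
# registered video frame, with a depth-buffer visibility check.
import numpy as np

def sample_texel_colors(texels_world, T_cam_from_world, K, depth_buffer,
                        video_frame, depth_tol=1.0):
    colors = np.zeros((len(texels_world), 3), dtype=np.uint8)
    for i, p in enumerate(texels_world):
        p_cam = (T_cam_from_world @ np.append(p, 1.0))[:3]
        if p_cam[2] <= 0:
            continue                       # behind the camera
        uvw = K @ p_cam
        u, v = int(round(uvw[0] / uvw[2])), int(round(uvw[1] / uvw[2]))
        h, w = depth_buffer.shape
        if not (0 <= u < w and 0 <= v < h):
            continue                       # outside the image
        if abs(depth_buffer[v, u] - p_cam[2]) > depth_tol:
            continue                       # occluded per the rendered depth map
        colors[i] = video_frame[v, u]
    return colors
```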
The standard procedure for diagnosing lung cancer involves two
stages. First, the physician evaluates a high-resolution three-dimensional (3D) computed-tomography (CT) chest image to produce a procedure plan. Next, the physician performs bronchoscopy on the patient, which involves navigating the bronchoscope through the airways to planned biopsy sites. Unfortunately, the physician has no link between the 3D CT image data and the live video stream provided during bronchoscopy. In addition, these data sources differ greatly in the information they provide, and no true 3D tools exist for planning and guiding procedures. This makes it difficult for the physician to translate a CT-based procedure plan to the video domain of the bronchoscope. Thus, the physician must essentially perform the biopsy blindly, and skill levels differ greatly between physicians. We describe a system that enables direct 3D CT-based procedure planning and provides direct 3D guidance during bronchoscopy. 3D CT-based information on biopsy sites is provided interactively as the physician moves the bronchoscope. Moreover, graphical information through a live fusion of the 3D CT data and bronchoscopic video is provided during the procedure. This information is coupled with a series of computer-graphics tools to give the physician a greatly augmented reality of the patient's interior anatomy during a procedure. Through a series of controlled tests and studies with human lung-cancer patients, we have found that the system not only reduces the variation in skill level between different physicians, but also increases the biopsy success rate.