KEYWORDS: Image registration, Detection and tracking algorithms, Video, Cameras, 3D modeling, Image processing, 3D acquisition, 3D image processing, Motion models, Tin
Targeting from video relies on precise image and video registration. Historically, automated georegistration
technology has operated in 2D transform spaces under the often naive assumption that the imaged
geometry is planar. The author previously demonstrated a fast 2D-to-3D registration algorithm that removes this
assumption, provided a digital elevation model (DEM) is available. Whereas the previous algorithm operated
independently on each frame of a video sequence, a new 2D-to-3D algorithm is proposed that exploits the
structural consistency of the imaged geometry across frames. This work presents this novel algorithm and
explores its efficacy in reducing targeting error.
KEYWORDS: Image registration, Cameras, 3D modeling, 3D acquisition, Infrared imaging, Detection and tracking algorithms, Video, Information technology, 3D image processing, Optimization (mathematics)
Targeting and precision-guided munitions rely on precise image and video registration. Current approaches for
geo-registration typically utilize registration algorithms that operate in two-dimensional (2D) transform spaces,
in the absence of an underlying three-dimensional (3D) surface model. However, because of their two-dimensional
motion assumptions, these algorithms place limitations on the types of imagery and collection geometries that
can be used. Incorporating a 3D reference surface enables the use of 2D-to-3D registration algorithms and
removes many such limitations. The author has previously demonstrated a fast 2D-to-3D registration algorithm
for registering live video to surface data extracted from medical images. The algorithm uses an illumination-tolerant,
gradient-descent-based optimization to register a 2D image to 3D surface data in order to globally locate
the camera's origin with respect to the 3D model. The rapid convergence of the algorithm is achieved through
a reformulation of the optimization problem that allows many data elements to be re-used through multiple
iterations. This paper details the extension of this algorithm to the more difficult problem of registering aerial
imagery to terrain data.
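The optimization described above can be sketched in miniature: iteratively step the camera pose toward the viewpoint whose rendering best matches the image. The sketch below is an illustrative assumption, not the paper's implementation; `render` is a toy synthetic-image stand-in for the actual surface/terrain renderer, and the finite-difference gradient stands in for the paper's reformulated, data-reusing gradient computation.

```python
import numpy as np

def render(pose):
    """Toy stand-in for rendering the 3D surface from a 6-DOF camera pose."""
    x, y, z, rx, ry, rz = pose
    u, v = np.meshgrid(np.linspace(0.0, 1.0, 16), np.linspace(0.0, 1.0, 16))
    return np.sin(u * (1.0 + x) + rx) + np.cos(v * (1.0 + y) + ry) + z + rz * u * v

def cost(pose, frame):
    """Mean-squared intensity difference between rendering and image."""
    diff = render(pose) - frame
    return float(np.mean(diff * diff))

def register(frame, pose0, step=0.1, iters=500, eps=1e-4):
    """Finite-difference gradient descent toward the locally optimal viewpoint."""
    pose = np.asarray(pose0, dtype=float)
    for _ in range(iters):
        grad = np.zeros_like(pose)
        for i in range(pose.size):
            dp = np.zeros_like(pose)
            dp[i] = eps
            # Central-difference estimate of the cost gradient in parameter i.
            grad[i] = (cost(pose + dp, frame) - cost(pose - dp, frame)) / (2 * eps)
        pose -= step * grad
    return pose

# Recover a perturbed pose from a synthetic "frame" rendered at a known pose.
true_pose = np.array([0.3, -0.2, 0.1, 0.05, -0.05, 0.02])
frame = render(true_pose)
est = register(frame, pose0=np.zeros(6))
```

In the actual algorithm, speed comes from precomputing and reusing data elements across iterations rather than from brute-force finite differences.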
Lung cancer remains the leading cause of cancer death in the United States and is expected to account for nearly 30% of
all cancer deaths in 2007. Central to the lung-cancer diagnosis and staging process is the assessment of the central chest
lymph nodes. This assessment typically requires two major stages: (1) location of the lymph nodes in a three-dimensional
(3D) high-resolution volumetric multi-detector computed-tomography (MDCT) image of the chest; (2) subsequent nodal
sampling using transbronchial needle aspiration (TBNA). We describe a computer-based system for automatically locating
the central chest lymph-node stations in a 3D MDCT image. Automated analysis methods are first run that extract the
airway tree, airway-tree centerlines, aorta, pulmonary artery, lungs, key skeletal structures, and major-airway labels. This
information provides geometrical and anatomical cues for localizing the major nodal stations. Our system demarcates these
stations, conforming to criteria outlined for the Mountain and Wang standard classification systems. Visualization tools
within the system then enable the user to interact with these stations to locate visible lymph nodes. Results derived from
a set of human 3D MDCT chest images illustrate the usage and efficacy of the system.
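The idea of using automatically extracted structures as geometric cues can be sketched as follows. This is purely illustrative: the landmark names, offsets, and box-shaped regions are assumptions for demonstration, not the Mountain or Wang station criteria.

```python
import numpy as np

def station_box(landmarks, ref_a, ref_b, margin):
    """Axis-aligned bounding box spanning two landmark points, padded by a
    margin (mm), as a simple stand-in for a station's demarcated region."""
    lo = np.minimum(landmarks[ref_a], landmarks[ref_b]) - margin
    hi = np.maximum(landmarks[ref_a], landmarks[ref_b]) + margin
    return lo, hi

# Hypothetical landmark centroids (mm, scanner coordinates), as would be
# produced by the automated airway/aorta/skeletal extraction steps.
landmarks = {
    "carina": np.array([0.0, 0.0, 0.0]),
    "aortic_arch": np.array([-15.0, 10.0, 40.0]),
}
lo, hi = station_box(landmarks, "carina", "aortic_arch", margin=5.0)
```

Real station criteria are anatomically defined and considerably more intricate; the point is only that extracted structures supply the coordinates from which station regions are derived.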
Previous research has indicated that use of guidance systems during endoscopy can improve the performance
and decrease the skill variation of physicians. Current guidance systems, however, rely on
computationally intensive registration techniques or costly and error-prone electromagnetic (E/M)
registration techniques, neither of which fit seamlessly into the clinical workflow. We have previously
proposed a real-time image-based registration technique that addresses both of these problems. We
now propose a system-level approach that incorporates this technique into a complete paradigm for
real-time image-based guidance in order to provide a physician with continuously-updated navigational
and guidance information. At the core of the system is a novel strategy for guidance of endoscopy. Additional
elements such as global surface rendering, local cross-sectional views, and pertinent distances
are also incorporated into the system to provide additional utility to the physician. Phantom results
were generated using bronchoscopy performed on a rapid prototype model of a human tracheobronchial
airway tree. The system is also being evaluated in ongoing live human tests; thus far, ten such tests,
focused on bronchoscopic interventions in pulmonary patients, have been completed successfully.
KEYWORDS: Video, Image registration, Cameras, Bronchoscopy, 3D image processing, 3D modeling, Video processing, Image processing, Algorithm development, Lung cancer
Previous research has shown that CT-image-based guidance could be
useful for the bronchoscopic assessment of lung cancer. This
research drew upon the registration of bronchoscopic video images to
CT-based endoluminal renderings of the airway tree. Previously proposed methods were either restricted to
discrete single-frame registration, which took several seconds to complete, or required
non-real-time buffering and processing of video sequences. We have
devised a fast 2D/3D image registration method that performs
single-frame CT-video registration in under 1/15th of a second. This
allows the method to be used for real-time registration at full
video frame rates without significantly altering the physician's
behavior. The method achieves its speed through a gradient-based
optimization method that allows most of the computation to be
performed off-line. During live registration, the optimization
iteratively steps toward the locally optimal viewpoint at which a
CT-based endoluminal view is most similar to a current bronchoscopic
video frame. After an initial registration to begin the process
(generally done in the trachea for bronchoscopy), subsequent
registrations are performed in real-time on each incoming video
frame. As each new bronchoscopic video frame becomes available, the
current optimization is initialized using the previous frame's
optimization result, allowing continuous guidance to proceed without
manual re-initialization. Tests were performed using both synthetic
and pre-recorded bronchoscopic video. The results show that the
method is robust to initialization errors, that registration
accuracy is high, and that continuous registration can proceed on
real-time video at over 15 frames per second with minimal
user intervention.
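The warm-start scheme described above, in which each incoming frame's optimization is initialized from the previous frame's result, can be sketched as a simple loop. The sketch below is illustrative: `register_frame` is a toy one-dimensional stand-in for the CT-video optimization, with each "frame" reduced to the scalar viewpoint that generated it.

```python
def track(frames, pose0, register_frame):
    """Warm-start each per-frame registration with the previous frame's
    result, so continuous guidance proceeds without manual re-initialization."""
    pose, poses = pose0, []
    for frame in frames:
        pose = register_frame(frame, pose)  # previous result seeds the optimization
        poses.append(pose)
    return poses

def register_frame(frame, init, step=0.4, iters=20):
    # Toy stand-in: gradient descent on (pose - frame)^2 from the given
    # initialization, mimicking convergence to the locally optimal viewpoint.
    pose = init
    for _ in range(iters):
        pose -= step * 2.0 * (pose - frame)
    return pose

# Simulated per-frame viewpoints along a bronchoscope path; because frame-to-frame
# motion is small, each warm start lies close to the next optimum.
trajectory = [0.0, 0.1, 0.25, 0.4, 0.6]
poses = track(trajectory, pose0=0.0, register_frame=register_frame)
```

This is why only the first registration (e.g., in the trachea) needs manual initialization: small inter-frame motion keeps each warm start inside the basin of convergence of the next frame's optimum.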
The standard procedure for diagnosing lung cancer involves two
stages. First, the physician evaluates a high-resolution three-dimensional (3D) computed-tomography (CT) chest image to produce a procedure plan. Next, the physician performs bronchoscopy on the patient, navigating the bronchoscope through the airways to planned biopsy sites. Unfortunately, the physician has no link between the 3D CT image data and the live video stream provided during bronchoscopy. In addition, these data sources differ greatly in what they physically represent, and no true 3D tools exist for planning and guiding procedures. This makes it difficult for the physician to translate a CT-based procedure plan to the video domain of the bronchoscope. Thus, the physician must essentially perform biopsies blindly, and skill levels differ greatly between physicians. We describe a system that enables direct 3D CT-based procedure planning and provides direct 3D guidance during bronchoscopy. 3D CT-based information on biopsy sites is provided interactively as the physician moves the bronchoscope. Moreover, graphical information is provided during the procedure through a live fusion of the 3D CT data and the bronchoscopic video. This information is coupled with a series of computer-graphics tools to give the physician a greatly augmented view of the patient's interior anatomy during a procedure. Through a series of controlled tests and studies with human lung-cancer patients, we have found that the system not only reduces the variation in skill level between physicians but also increases the biopsy success rate.