Electromagnetic (EM) tracking is an important guidance tool that can aid procedures
requiring accurate localization, such as needle injections or catheter guidance. Using EM tracking, information
from different modalities can be easily combined using pre-procedural calibration information. These calibrations
are performed individually, per modality, allowing different imaging systems to be mixed and matched according
to the procedure at hand. In this work, a framework for the calibration of a 3D transesophageal echocardiography
probe to EM tracking is developed. The complete calibration framework includes three required steps: data
acquisition, needle segmentation, and calibration. Ultrasound (US) images of an EM-tracked needle are
acquired, and the needle position in each volume is subsequently extracted by segmentation. The calibration
transformation is determined through a registration between the segmented points and the recorded EM needle
positions. Additionally, the speed of sound is compensated for since calibration is performed in water that has a
different speed then is assumed by the US machine. A statistical validation framework has also been developed
to provide further information related to the accuracy and consistency of the calibration. Further validation of
the calibration showed an accuracy of 1.39 mm.
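The registration step at the heart of this calibration, aligning the segmented needle points with the recorded EM positions, is a rigid point-set fit. Below is a minimal sketch of the standard SVD-based least-squares method (the Arun/Kabsch approach); the function name and interface are illustrative, not the authors' implementation:

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q
    (SVD-based Arun/Kabsch method); P and Q are (N, 3) corresponding points."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)               # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

In the calibration described above, P would correspond to the segmented needle points (after speed-of-sound correction) and Q to the EM-recorded needle positions.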
A feature-based registration was developed to align biplane and tracked ultrasound images of the aortic root with
a preoperative CT volume. In transcatheter aortic valve replacement, a prosthetic valve is inserted into the aortic
annulus via a catheter. Poor anatomical visualization of the aortic root region can result in incorrect positioning,
leading to significant morbidity and mortality. Registration of pre-operative CT to transesophageal ultrasound
and fluoroscopy images is a major step towards providing augmented image guidance for this procedure. The
proposed registration approach uses an iterative closest point algorithm to register a surface mesh generated from
CT to 3D US points reconstructed from a single biplane US acquisition, or multiple tracked US images. The use
of a single simultaneous acquisition biplane image eliminates reconstruction error introduced by cardiac gating
and TEE probe tracking, creating potential for real-time intra-operative registration. A simple initialization
procedure is used to minimize changes to operating room workflow. The algorithm is tested on images acquired
from excised porcine hearts. Results demonstrate clinically acceptable accuracies of 2.6 mm and 5 mm for tracked
US-to-CT and biplane US-to-CT registration, respectively.
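The iterative closest point (ICP) loop used above can be sketched generically as follows, assuming the CT surface mesh has been sampled into a point cloud; this is a textbook ICP with SVD-based rigid updates, not the authors' code:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(P, Q):
    """SVD-based least-squares rigid transform mapping P onto Q (paired points)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def icp(source, target, iters=50, tol=1e-10):
    """Align sparse US points (source) to a dense CT surface cloud (target)."""
    tree = cKDTree(target)
    src, prev = source.copy(), np.inf
    for _ in range(iters):
        d, idx = tree.query(src)             # closest-point correspondences
        R, t = best_rigid(src, target[idx])  # best rigid fit to those matches
        src = src @ R.T + t
        err = float(d.mean())
        if abs(prev - err) < tol:
            break
        prev = err
    return src, err
```

As the abstract notes, a simple initialization is still required: ICP only converges locally, so a rough starting pose keeps the closest-point correspondences sensible.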
Electromagnetic (EM) tracking systems are often used for real-time navigation of medical tools in an Image
Guided Therapy (IGT) system. They are specifically advantageous when the medical device requires tracking
within the body of a patient where line of sight constraints prevent the use of conventional optical tracking. EM
tracking systems are however very sensitive to electromagnetic field distortions. These distortions, arising from
changes in the electromagnetic environment due to the presence of conductive ferromagnetic surgical tools or
other medical equipment, limit the accuracy of EM tracking, in some cases potentially rendering tracking data
unusable. We present a mapping method for the operating region over which EM tracking sensors are used,
allowing for characterization of measurement errors, in turn providing physicians with visual feedback about
measurement confidence or reliability of localization estimates.
In this instance, we employ a calibration phantom to assess distortion within the operating field of the
EM tracker and to display in real time the distribution of measurement errors, as well as the location and
extent of the field associated with minimal spatial distortion. The accuracy is assessed relative to successive
measurements: error is computed for a reference point, and consecutive measurement errors are displayed relative
to the reference to characterize accuracy in near-real-time. In an initial set-up phase, the phantom
geometry is calibrated by registering the data from a multitude of EM sensors in a non-ferromagnetic ("clean")
EM environment. The registration results in the locations of sensors with respect to each other and defines
the geometry of the sensors in the phantom. In a measurement phase, the position and orientation data from
all sensors are compared with the known geometry of the sensor spacing, and localization errors (displacement
and orientation) are computed. Based on error thresholds provided by the operator, the spatial distribution of
localization errors is clustered and dynamically displayed as separate confidence zones within the operating
region of the EM tracker space.
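The measurement-phase comparison can be sketched as follows: measured sensor positions are checked against the calibrated "clean" phantom geometry, and per-sensor errors are binned into operator-thresholded confidence zones. The thresholds, function names, and error metric here are illustrative assumptions, not the paper's values:

```python
import numpy as np

def localization_errors(measured, reference):
    """Mean pairwise-distance deviation per sensor against the calibrated
    ("clean") phantom geometry; measured/reference are (N, 3) positions."""
    dm = np.linalg.norm(measured[:, None] - measured[None, :], axis=-1)
    dr = np.linalg.norm(reference[:, None] - reference[None, :], axis=-1)
    return np.abs(dm - dr).sum(axis=1) / (len(measured) - 1)

def confidence_zones(errors, thresholds=(0.5, 2.0)):
    """Bin per-sensor errors into zones (0 = reliable, higher = worse).
    The thresholds stand in for operator-provided values."""
    return np.digitize(errors, thresholds)
```

Because the phantom's inter-sensor spacing is fixed, deviations in the measured pairwise distances isolate field distortion from any rigid motion of the phantom itself.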
Live three dimensional (3D) transesophageal echocardiography (TEE) provides real-time imaging of cardiac
structure and function, and has been shown to be useful in interventional cardiac procedures. Its application in
catheter-based cardiac procedures is, however, constrained by its narrow field of view (FOV). In order to mitigate
this limitation, we register pre-operative magnetic resonance (MR) images to live 3D TEE images. Conventional
multimodal image registration techniques that use mutual information (MI) as the similarity measure use
statistics from the entire image. In such cases, however, correct registration may not coincide with the global
maximum of the MI metric. To address this problem, we present an automated registration algorithm that
combines global and local edge-based statistics. The weighted sum of global and local statistics is
computed as the similarity measure, where the weights are chosen based on the strength of the local statistics.
Phantom validation experiments show improved capture ranges when compared with conventional MI-based
methods. The proposed method provided robust results with accuracy better than 3 mm (5°) in the range of
-10 to 12 mm (-6 to 3°), -14 to 12 mm (-6 to 6°) and -16 to 6 mm (-6 to 3°) in x-, y-, and z- axes respectively.
We believe that the proposed registration method has the potential for real time intra-operative image fusion
during percutaneous cardiac interventions.
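The weighted similarity described here can be illustrated with a simple histogram-based MI estimator. The edge mask and weight below are free parameters standing in for the paper's edge-based local statistics and strength-derived weights; this is a sketch, not the authors' method:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based MI between two images (or image regions) of equal size."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()                                   # joint distribution
    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def weighted_similarity(fixed, moving, edge_mask, w_local):
    """Weighted sum of global MI and MI restricted to an edge-based region.
    `edge_mask` and `w_local` are illustrative stand-ins for the paper's
    local statistics and strength-derived weights."""
    global_mi = mutual_information(fixed, moving)
    local_mi = mutual_information(fixed[edge_mask], moving[edge_mask])
    return w_local * local_mi + (1.0 - w_local) * global_mi
```

The intuition matches the abstract: the global term keeps the metric smooth over a wide capture range, while the local edge-based term sharpens the peak near correct alignment.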
Purpose: Brachytherapy (radioactive seed insertion) has emerged as one of the most effective treatment options
for patients with prostate cancer, with the added benefit of a convenient outpatient procedure. The main
limitation in contemporary brachytherapy is faulty seed placement, predominantly due to the presence of intra-operative
edema (tissue expansion). Though currently not available, the capability to intra-operatively monitor
the seed distribution could significantly improve cancer control. We present such a system here.
Methods: Intra-operative measurement of edema in prostate brachytherapy requires localization of inserted
radioactive seeds relative to the prostate. Seeds were reconstructed using a typical non-isocentric C-arm, and
exported to a commercial brachytherapy delivery system. Technical obstacles for 3D reconstruction on a non-isocentric
C-arm include pose-dependent C-arm calibration; distortion correction; pose estimation of C-arm
images; seed reconstruction; and C-arm to TRUS registration.
Results: In precision-machined hard phantoms with 40-100 seeds and soft tissue phantoms with 45-87 seeds,
we correctly reconstructed the seed implant shape with an average 3D precision of 0.35 mm and 0.24 mm,
respectively. In a DoD Phase-1 clinical trial on 6 patients with 48-82 planned seeds, we achieved intra-operative
monitoring of seed distribution and dosimetry, correcting for dose inhomogeneities by inserting an average of
4.17 (range 1-9) additional seeds. Additionally, in each patient, the system automatically detected intra-operative seed
migration induced by edema (mean 3.84 mm, STD 2.13 mm, max 16.19 mm).
Conclusions: The proposed system is the first of its kind to make intra-operative detection of edema (and
subsequent re-optimization) possible on any typical non-isocentric C-arm, at negligible additional cost to the
existing clinical installation. It achieves a significantly more homogeneous seed distribution, and has the potential
to affect a paradigm shift in clinical practice. Large scale studies and commercialization are currently underway.
KEYWORDS: Distortion, Principal component analysis, Sensors, Statistical analysis, Video, Data modeling, 3D modeling, Error analysis, Image intensifiers, 3D image processing
C-arm images suffer from pose-dependent distortion, which must be corrected for intra-operative quantitative
3D surgical guidance. Several distortion correction techniques have been proposed in the literature, the current
state of the art using a dense grid pattern rigidly attached to the detector. These methods become cumbersome
for intra-operative use, such as 3D reconstruction, since the grid pattern interferes with patient anatomy. The
primary contribution of this paper is a framework to statistically analyze the distortion pattern which enables
us to study alternate intra-operative distortion correction methods. In particular, we propose a new phantom
that uses very few BBs, and yet accurately corrects for distortion.
The high-dimensional space of distortion patterns can be effectively characterized by principal component analysis
(PCA). The analysis shows that only the first three eigenmodes are significant, capturing about 99% of the
variation. Phantom experiments indicate that the distortion map can be recovered with these three modes to an
average accuracy better than 0.1 mm/pixel. With this prior statistical knowledge, a subset of BBs is
sufficient to recover the distortion map accurately. Phantom experiments indicate that as few as 15 BBs
can recover the distortion with an average error of 0.17 mm/pixel, accuracy sufficient for most clinical applications.
These BBs can be arranged on the periphery of the C-arm detector, minimizing the interference with patient
anatomy and hence allowing the grid to remain attached to the detector permanently. The proposed method
is fast, economical, and C-arm independent, potentially boosting the clinical viability of applications such as
quantitative 3D fluoroscopic reconstruction.
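The recovery scheme can be sketched as follows: PCA of a set of training distortion maps yields a few eigenmodes, and a handful of BB observations then suffices to fit the mode coefficients by least squares. Names and dimensions are illustrative, not the paper's implementation:

```python
import numpy as np

def fit_distortion_basis(maps, k=3):
    """PCA of training distortion maps (each row a flattened map); keep k modes."""
    mean = maps.mean(axis=0)
    _, _, Vt = np.linalg.svd(maps - mean, full_matrices=False)
    return mean, Vt[:k]                      # mean map and k eigenmodes

def recover_map(sparse_idx, sparse_vals, mean, basis):
    """Least-squares fit of the mode coefficients from a few BB observations,
    then reconstruction of the full distortion map."""
    A = basis[:, sparse_idx].T               # (num_BBs, k) design matrix
    coeffs, *_ = np.linalg.lstsq(A, sparse_vals - mean[sparse_idx], rcond=None)
    return mean + coeffs @ basis
```

With only three significant modes, any BB subset whose design matrix is well-conditioned (e.g., 15 BBs on the detector periphery, as proposed) determines the coefficients stably.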
C-arm fluoroscopy is modelled as a perspective projection, the parameters of which are estimated through a calibration
procedure. It has been universally accepted that precise intra-procedural calibration is a prerequisite for
accurate quantitative C-arm fluoroscopy guidance. Calibration, however, significantly adds to system complexity,
which is a major impediment to clinical practice. We challenge the status quo by questioning the assumption that
precise intra-procedural C-arm calibration is really necessary. Using our theoretical framework, we derive upper
bounds on the effect of mis-calibration on algorithms such as C-arm tracking, 3D reconstruction and surgical
guidance in virtual fluoroscopy, some of the most common techniques in intra-operative fluoroscopic guidance.
To derive bounds as a function of mis-calibration, we model the error using an affine transform. This is fairly
intuitive, since small amounts of mis-calibration result in predictably linear transformation of the reconstruction
space. Experiments indicate the validity of this approximation even for 50 mm mis-calibrations.
For quantitative C-arm fluoroscopy, we had earlier proposed a unified mathematical framework to tackle the
issues of pose estimation, correspondence and reconstruction, without the use of external trackers. The method
used randomly distributed unknown points in the imaging volume, either naturally present or induced by placing
beads on the patient. These points were then input to an algorithm that computed the 3D reconstruction. The
algorithm had an 8° region of convergence, which in general could be considered sufficient for most applications.
Here, we extend the earlier algorithm to make it more robust and clinically acceptable. We propose the use of
a circle/ellipse, naturally found in many images. We show that the projection of an elliptic curve constrains 5 of
the 6 degrees of freedom of the C-arm pose. To completely recover the true C-arm pose, we use constraints
in the form of point correspondences between the images. We provide an algorithm to easily obtain a virtual
correspondence across all the images and show that two correspondences can recover the true pose 95% of the
time when the seeds employed are separated by a distance of 40 mm or greater. Phantom experiments across
three images indicate a pose estimation accuracy of 1.7° using an ellipse and two sufficiently separated point
correspondences. Average execution time in this case is 130 seconds. The method appears to be sufficiently
accurate for clinical applications and does not require any significant modification of clinical protocol.
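One concrete building block of such a pipeline is recovering the projected ellipse itself from segmented image points. A direct least-squares conic fit finds the coefficients as the null vector of a design matrix; the sketch below is illustrative, not the authors' segmentation code:

```python
import numpy as np

def fit_conic(x, y):
    """Fit the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to 2D points.
    Returns the unit-norm coefficient vector (a, b, c, d, e, f)."""
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]  # right singular vector of the smallest singular value
```

The fitted conic then feeds the pose constraint: as the abstract states, the projection of an elliptic curve pins down 5 of the 6 pose degrees of freedom, with point correspondences supplying the last.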
Intra-operative quality assurance and dosimetry optimization in prostate brachytherapy critically depend on the ability to discern the locations of implanted seeds. Various methods exist for seed matching and reconstruction from multiple segmented C-arm images. Unfortunately, using three or more images makes the problem NP-hard, i.e., no polynomial-time algorithm can provably compute the complete matching. Typically, a statistical analysis of performance is considered sufficient. Hence, it is of utmost importance to exploit all the available information in order to minimize the matching and reconstruction errors. Current algorithms use only the information about seed centers, disregarding the orientations and lengths of the seeds. While the latter have little dosimetric impact, they can positively contribute to improving the seed matching rate and 3D implant reconstruction accuracy. They can also become critical information when hidden and spuriously segmented seeds need to be matched, for which reliable and generic methods are not yet available. Expecting orientation information to be useful in reconstructing large and dense implants, we have developed a method that incorporates seed orientation information into our previously proposed reconstruction algorithm (MARSHAL). A simulation study shows that under normal segmentation errors, when considering seed orientations, implants of 80 to 140 seeds with densities of 2.0-3.0 seeds/cc give an average matching rate >97% using three-image matching. This is higher than the roughly 96% matching rate obtained when considering only seed positions. This suggests that seed orientation information is a valuable addition to fluoroscopy-based brachytherapy implant reconstruction.
The registration of preoperative CT to intra-operative reality systems is a crucial step in Computer Assisted Orthopedic Surgery (CAOS). The intra-operative sensors include 3D digitizers, fiducials, X-rays and Ultrasound (US). FAPBED is designed to process CT volumes for registration to tracked US data. Tracked US is advantageous because it is real-time, noninvasive, and non-ionizing, but it is also known to have inherent inaccuracies, which creates the need to develop a framework that is robust to various uncertainties and can be useful in US-CT registration. Furthermore, conventional registration methods depend on accurate and absolute segmentation. Our proposed probabilistic framework addresses the segmentation-registration duality, wherein exact segmentation is not a prerequisite to achieve accurate registration. In this paper, we develop a method for fast and automatic probabilistic bone surface (edge) detection in CT images. Various features that influence the likelihood of the surface at each spatial coordinate are combined using a simple probabilistic framework, which strikes a fair balance between a high-level understanding of features in an image and the low-level number crunching of standard image processing techniques. The algorithm evaluates different features for detecting the probability of a bone surface at each voxel, and compounds the results of these methods to yield a final, low-noise probability map of bone surfaces in the volume. Such a probability map can then be used in conjunction with a similar map from tracked intra-operative US to achieve accurate registration. Eight sample pelvic CT scans were used to extract feature parameters and validate the final probability maps. An unoptimized, fully automatic Matlab implementation runs in five minutes per CT volume on average, and was validated by comparison against hand-segmented gold standards.
The mean probability assigned to nonzero surface points was 0.8, while nonzero non-surface points had a mean value of 0.38, indicating clear identification of surface points on average. The segmentation was also sufficiently crisp, with a full width at half maximum (FWHM) value of 1.51 voxels.
Purpose: Intraoperative dosimetric quality assurance in prostate brachytherapy critically depends on discerning the 3D locations of implanted seeds. The ability to reconstruct the implanted seeds intraoperatively will allow us to make immediate provisions for dosimetric deviations from the optimal implant plan. A method for seed reconstruction from segmented C-arm fluoroscopy images is proposed. Method: The 3D coordinates of the implanted seeds can be calculated upon resolving the correspondence of seeds in multiple X-ray images. We formalize seed-matching as a network flow problem, which has salient features: (a) extensively studied exact solutions, (b) performance claims on the space-time complexity, and (c) optimality bounds on the final solution. A fast implementation is realized using the Hungarian algorithm. Results: We prove that two images can correctly match only about 67% of the seeds, and that a third image renders the matching problem to be of non-polynomial complexity. We utilize the special structure of the problem and propose a pseudo-polynomial time algorithm. Using three images, MARSHAL achieved 100% matching in simulation experiments, and 98.5% in phantom experiments. 3D reconstruction error for correctly matched seeds has a mean of 0.63 mm, and 0.91 mm for incorrectly matched seeds. Conclusion: Both on synthetic data and in phantom experiments, the matching rate and reconstruction accuracy were found to be sufficient for prostate brachytherapy. The algorithm is extendable to deal with an arbitrary number of images without loss in speed or accuracy. The algorithm is sufficiently generic to be used for establishing correspondences across any choice of features in different imaging modalities.
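The two-image assignment step can be illustrated with SciPy's Hungarian-algorithm solver. The Euclidean cost below is a placeholder for the paper's projection-ray reconstruction cost, and the interface is a sketch rather than MARSHAL itself:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_seeds(seeds_a, seeds_b):
    """Min-cost one-to-one matching between two candidate seed sets (N, 3).

    cost[i, j] is a placeholder Euclidean distance between candidates; in the
    seed-matching problem it would be the 3D reconstruction residual of the
    corresponding projection rays from two C-arm images."""
    cost = np.linalg.norm(seeds_a[:, None] - seeds_b[None, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
    return cols, float(cost[rows, cols].sum())
```

This is the polynomially solvable two-image core; as the abstract explains, adding a third image makes the full matching problem non-polynomial, motivating the proposed pseudo-polynomial algorithm.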
Purpose: C-arm fluoroscopy is ubiquitous in contemporary surgery, but it lacks the ability to accurately reconstruct 3D information. A major obstacle in fluoroscopic reconstruction is discerning the pose of the X-ray image in 3D space. Optical/magnetic trackers are prohibitively expensive, intrusive and cumbersome. Method: We present single-image-based fluoroscope tracking (FTRAC) using an external radiographic fiducial consisting of a mathematically optimized set of points, lines, and ellipses. The fiducial encodes six degrees of freedom in a single image by creating a unique view from any direction. A non-linear optimizer can rapidly compute the pose of the fiducial from this image. The current embodiment has salient attributes: small dimensions (3 × 3 × 5 cm); it need not be close to the anatomy of interest; and it can be segmented automatically. Results: We tested the fiducial and the pose recovery method on synthetic data and experimentally on a precisely machined mechanical phantom. Pose recovery had an error of 0.56 mm in translation and 0.33° in orientation. Object reconstruction had a mean error of 0.53 mm with 0.16 mm STD. Conclusion: The method offers accuracies similar to commercial tracking systems, and is sufficiently robust for intra-operative quantitative C-arm fluoroscopy.
The registration of preoperative CT to intra-operative reality systems is a crucial step in Computer Assisted Orthopedic Surgery (CAOS). The intra-operative sensors include 3D digitizers, fiducials, X-rays and Ultrasound (US). Although US has many advantages over the others, tracked US for orthopedic surgery has been researched by only a few authors. An important factor limiting the accuracy of tracked US to CT registration (1-3 mm) has been the difficulty in determining the exact location of the bone surfaces in the US images (the response could range from 2-4 mm). It is thus crucial to localize the bone surface accurately from these images. Moreover, conventional US imaging systems are known to have certain inherent inaccuracies, mainly due to the fact that the imaging model is assumed planar. This creates the need to develop a bone segmentation framework that can couple information from various post-processed, spatially separated US images (of the bone) to enhance the localization of the bone surface. In this paper we discuss the various reasons that cause inherent uncertainties in bone surface localization (in B-mode US images) and suggest methods to account for these. We also develop a method for automatic bone surface detection. To do so, we account objectively for the high-level understanding of the various bone surface features visible in typical US images. A combination of these features finally decides the surface position. We use a Bayesian probabilistic framework, which strikes a fair balance between high-level understanding from features in an image and the low-level number crunching of standard image processing techniques. It also provides us with a mathematical approach that facilitates combining multiple images to augment the bone surface estimate.
Conventional freehand 3D ultrasound (US) is a complex process, involving calibration, scanning, processing, volume reconstruction, and visualization. Prior to calibration, a position sensor is attached to the probe for tagging each image with its position and orientation in space; a calibration process is then performed to determine the spatial transformation of the scan plane with respect to the position sensor. Finding this transformation matrix is a critical, but often underrated, task in US-guided surgery. The purpose of this study is to enhance previously known calibration methods by introducing a novel calibration fixture and process. The proposed phantom is inexpensive, easy to construct, and easy to scan, while yielding more data points per image than previously known designs. The processing phase is semi-automated, allowing for fast processing of a massive amount of data, which in turn increases accuracy by reducing human error.