Quantification of myocardial blood flow (MBF) can aid in the diagnosis and treatment of coronary artery disease. However, there are no widely accepted clinical methods for estimating MBF. Dynamic cardiac perfusion computed tomography (CT) holds the promise of providing a quick and easy method to measure MBF quantitatively. However, the need for repeated scans can potentially result in a high patient radiation dose, limiting the clinical acceptance of this approach. In our previous work, we explored techniques to reduce the patient dose by either uniformly reducing the tube current or by uniformly reducing the number of temporal frames in the dynamic CT sequence. These dose reduction techniques result in noisy time-attenuation curves (TACs), which can give rise to significant errors in MBF estimation. We seek to investigate whether nonuniformly varying the tube current and/or sampling intervals can yield more accurate MBF estimates for a given dose. Specifically, we try to minimize the dose and obtain the most accurate MBF estimate by addressing the following questions: when in the TAC should the CT data be collected and at what tube current(s)? We hypothesize that increasing the sampling rate and/or tube current during the time frames when the myocardial CT number is most sensitive to the flow rate, while reducing them elsewhere, can achieve better estimation accuracy for the same dose. We perform simulations of contrast agent kinetics and CT acquisitions to evaluate the relative MBF estimation performance of several clinically viable variable acquisition methods. We find that variable temporal and tube current sequences can be performed that impart an effective dose of 5.5 mSv and allow for reductions in MBF estimation root-mean-square error on the order of 20% compared to uniform acquisition sequences with comparable or higher radiation doses.
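The hypothesis above — concentrating frames where the myocardial enhancement is most sensitive to flow — can be illustrated with a small numerical sketch. The enhancement curve, flow value, and frame budget below are invented for illustration and are not the kinetic model or acquisition protocol used in the study:

```python
import numpy as np

def tissue_tac(t, flow, delay=4.0):
    """Toy myocardial enhancement curve (HU) for a given flow (ml/min/g).
    Invented for illustration; not the kinetic model used in the study."""
    tt = np.clip(t - delay, 0.0, None)
    return 30.0 * flow * tt * np.exp(-flow * tt / 6.0)

t = np.arange(0.0, 30.0, 0.5)                # candidate frame times (s)
eps = 0.01                                   # finite-difference step in flow

# Sensitivity of the enhancement to the flow rate at each candidate time.
sens = np.abs(tissue_tac(t, 1.0 + eps) - tissue_tac(t, 1.0 - eps)) / (2 * eps)

# Spend a fixed frame budget at the most flow-sensitive times.
budget = 15
frames = np.sort(np.argsort(sens)[-budget:])
print(t[frames])
```

With a model like this, the selected frames concentrate wherever the finite-difference sensitivity to flow is largest, which is the intuition behind the variable acquisition sequences evaluated in the study.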
Photon counting x-ray detectors (PCDs) offer great potential for energy-resolved imaging, enabling promising applications such as low-dose imaging, quantitative contrast-enhanced imaging, and spectral tissue decomposition. However, physical processes in photon counting detectors produce undesirable effects, such as charge sharing and pulse pile-up, that can adversely affect the imaging application. Existing detector response models for photon counting detectors have mainly used either x-ray fluorescence or radionuclide sources to calibrate the detector and estimate the model parameters. The purpose of our work was to apply one such model to our photon counting detector and to determine the model parameters from transmission measurements. The model uses a polynomial fit to describe the charge sharing response and energy resolution of the detector, as well as an aluminum filter to model the modification of the incident x-ray spectrum. Our experimental setup uses a Si-based photon counting detector to generate transmission spectra from multiple materials at varying thicknesses. Materials were selected so as to exhibit k-edges within the 15-35 keV region. We find that transmission measurements can be used to successfully model the detector response. Ultimately, this approach could be used for practical detector energy calibration. A fully validated detector response model will allow for exploration of imaging applications for a given detector.
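As a loose illustration of the kind of forward model involved, the sketch below blurs a toy transmitted spectrum, containing a k-edge in the 15-35 keV region, with a Gaussian energy response. The spectrum shape, attenuation curve, k-edge position, and 2 keV FWHM are all invented for illustration and are not the calibrated parameters of the detector described here:

```python
import numpy as np

energies = np.arange(15.0, 36.0, 0.5)        # keV grid over the k-edge region

# Invented incident spectrum (already filtered) and attenuation curve
# with a k-edge placed at 25 keV.
spectrum = (energies - 14.0) * np.exp(-0.05 * energies)
mu = 2.0 * (energies / 20.0) ** -3 \
     + np.where(energies >= 25.0, 1.5 * (energies / 25.0) ** -3, 0.0)
thickness = 0.1                              # sample thickness (cm), assumed
transmitted = spectrum * np.exp(-mu * thickness)

def respond(true_spectrum, fwhm=2.0):
    """Blur with a Gaussian energy response (assumed 2 keV FWHM)."""
    sigma = fwhm / 2.355
    kernel = np.exp(-((energies[:, None] - energies[None, :]) ** 2)
                    / (2.0 * sigma ** 2))
    kernel /= kernel.sum(axis=0)             # column-normalize: preserve counts
    return kernel @ true_spectrum

recorded = respond(transmitted)

# Counts registered above a single comparator threshold, as a PCD bin would.
threshold = 25.0
counts = recorded[energies >= threshold].sum()
print(counts)
```

In an actual calibration, response parameters such as the energy-resolution coefficients would be adjusted until a forward model of this kind reproduces the measured transmission spectra.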
Cardiac computed tomography (CT) acquisitions for perfusion assessment can be performed in a dynamic or static mode. Either method may be used for a variety of clinical tasks, including (1) stratifying patients into categories of ischemia and (2) using a quantitative myocardial blood flow (MBF) estimate to evaluate disease severity. In this simulation study, we compare method performance on these classification and quantification tasks for matched radiation dose levels and for different flow states, patient sizes, and injected contrast levels. Under the conditions simulated, the dynamic method has low bias in MBF estimates (0 to 0.1 ml/min/g) compared to linearly interpreted static assessment (0.45 to 0.48 ml/min/g), making it more suitable for quantitative estimation. At matched radiation dose levels, receiver operating characteristic analysis demonstrated that the static method, with its high bias but generally lower variance, had superior performance (p<0.05) in stratifying patients, especially for larger patients and lower contrast doses [area under the curve (AUC) = 0.95 to 0.96 versus 0.86]. We also demonstrate that static assessment with a correctly tuned exponential relationship between the apparent CT number and MBF has superior quantification performance to static assessment with a linear relationship and to dynamic assessment. However, tuning the exponential relationship to the patient and scan characteristics will likely prove challenging. This study demonstrates that the selection and optimization of static or dynamic acquisition modes should depend on the specific clinical task.
Quantification of myocardial blood flow (MBF) can aid in the diagnosis and treatment of coronary artery disease (CAD). However, there are no widely accepted clinical methods for estimating MBF. Dynamic CT holds the promise of providing a quick and easy method to measure MBF quantitatively; however, the need for repeated scans has raised concerns about the potential for high radiation dose. In our previous work, we explored techniques to reduce the patient dose by either uniformly reducing the tube current or by uniformly reducing the number of temporal frames in the dynamic CT sequence. These dose reduction techniques result in very noisy data, which can give rise to large errors in MBF estimation. In this work, we seek to investigate whether nonuniformly varying the tube current or sampling intervals can yield more accurate MBF estimates. Specifically, we try to minimize the dose and obtain the most accurate MBF estimate by addressing the following questions: when in the time attenuation curve (TAC) should the CT data be collected, and at what tube current(s)? We hypothesize that increasing the sampling rate and/or tube current during the time frames when the myocardial CT number is most sensitive to the flow rate, while reducing them elsewhere, can achieve better estimation accuracy for the same dose. We perform simulations of contrast agent kinetics and CT acquisitions to evaluate the relative MBF estimation performance of several clinically viable adaptive acquisition methods. We find that adaptive temporal and tube current sequences can be performed that impart an effective dose of about 5 mSv and allow for reductions in MBF estimation root-mean-square error (RMSE) on the order of 11% compared to uniform acquisition sequences with comparable or higher radiation doses.
Cardiac CT acquisitions for perfusion assessment can be performed in a dynamic or static mode. In this simulation study, we evaluate the relative classification and quantification performance of these modes for assessing myocardial blood flow (MBF). In the dynamic method, a series of low dose cardiac CT acquisitions yields data on contrast bolus dynamics over time; these data are fit with a model to give a quantitative MBF estimate. In the static method, a single CT acquisition is obtained, and the relative CT numbers in the myocardium are used to infer perfusion states. The static method does not directly yield a quantitative estimate of MBF, but these estimates can be roughly approximated by introducing assumed linear relationships between CT number and MBF, consistent with the ways such images are typically visually interpreted. Data obtained by either method may be used for a variety of clinical tasks, including 1) stratifying patients into differing categories of ischemia and 2) using the quantitative MBF estimate directly to evaluate ischemic disease severity. Through simulations, we evaluate the performance on each of these tasks. The dynamic method has very low bias in MBF estimates, making it particularly suitable for quantitative estimation. At matched radiation dose levels, ROC analysis demonstrated that the static method, with its high bias but generally lower variance, has superior performance in stratifying patients, especially for larger patients.
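The bias/variance trade-off described above can be mimicked with a toy simulation. The error models below (an unbiased but noisier "dynamic" estimate and a biased but less noisy "static" estimate), the ischemia threshold, and all numeric values are assumptions for illustration, not the simulation parameters of the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
true_mbf = rng.uniform(0.5, 3.0, n)          # assumed flow range (ml/min/g)
ischemic = true_mbf < 1.0                    # illustrative ischemia threshold

# Invented error models: "dynamic" ~ unbiased but noisier,
# "static" ~ biased (compressed scale) but less noisy.
dynamic_est = true_mbf + rng.normal(0.0, 0.6, n)
static_est = 0.6 * true_mbf + 0.4 + rng.normal(0.0, 0.2, n)

def auc_low_flow(est, labels):
    """Rank-based (Mann-Whitney) AUC for flagging ischemia from low MBF."""
    score = -np.asarray(est)                 # lower estimate -> higher score
    ranks = np.empty(len(score))
    ranks[np.argsort(score)] = np.arange(1, len(score) + 1)
    pos = np.asarray(labels, dtype=bool)
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg)

auc_dynamic = auc_low_flow(dynamic_est, ischemic)
auc_static = auc_low_flow(static_est, ischemic)
print(auc_dynamic, auc_static)
```

Because stratification only requires ranking patients, a lower-variance but biased estimate can yield a higher AUC than an unbiased, noisier one, even though its MBF values are unsuitable for quantification.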
Dynamic contrast-enhanced computed tomography (CT) could provide an accurate and widely available technique for myocardial blood flow (MBF) estimation to aid in the diagnosis and treatment of coronary artery disease. However, one of its primary limitations is the radiation dose imparted to the patient. We are exploring techniques to reduce the patient dose by either reducing the tube current or by reducing the number of temporal frames in the dynamic CT sequence. Both of these dose reduction techniques result in noisy data. In order to extract the MBF information from the noisy acquisitions, we have explored several data-domain smoothing techniques. In this work, we investigate two specific smoothing techniques: sinogram restoration in both the spatial and temporal domains, and the Karhunen–Loeve (KL) transform applied to provide temporal smoothing in the sinogram domain. The KL transform smoothing technique has previously been applied to dynamic image sequences in positron emission tomography. We apply a quantitative two-compartment blood flow model to estimate MBF from the time-attenuation curves and determine which smoothing method provides the most accurate MBF estimates in a series of simulated dynamic contrast-enhanced cardiac CT acquisitions at different dose levels. As measured by the root-mean-square percentage error (%RMSE) in MBF estimates, sinogram smoothing generally provides the best MBF estimates, except at the lowest simulated dose levels (tube current = 25 mAs, 2 or 3 s temporal spacing), where the KL transform method provides the best MBF estimates. The KL transform technique provides improved MBF estimates compared to conventional processing only at very low doses (<7 mSv). Results suggest that the proposed smoothing techniques could provide high-fidelity MBF information and allow for substantial radiation dose savings.
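As a schematic of model-based MBF estimation from a time-attenuation curve, the sketch below fits a one-compartment kinetic model to a noisy simulated tissue curve by grid-search least squares. The arterial input function, distribution volume, noise level, and the one-compartment form itself are simplifying assumptions; they stand in for, and are not, the two-compartment model and fitting procedure used in this work:

```python
import numpy as np

t = np.arange(0.0, 30.0, 1.0)               # 1 s frames over 30 s

def aif(t):
    """Assumed gamma-variate arterial input function (HU enhancement)."""
    return 400.0 * (t / 8.0) ** 3 * np.exp(3.0 * (1.0 - t / 8.0))

def tissue_model(t, flow, vd=0.15):
    """One-compartment stand-in for the two-compartment model:
    C(t) = conv(AIF, (F/60) exp(-(F/60) t / vd)), flow F in ml/min/g."""
    dt = t[1] - t[0]
    irf = (flow / 60.0) * np.exp(-(flow / 60.0) * t / vd)
    return np.convolve(aif(t), irf)[: len(t)] * dt

rng = np.random.default_rng(1)
true_flow = 1.5                              # ml/min/g, assumed ground truth
noisy_tac = tissue_model(t, true_flow) + rng.normal(0.0, 2.0, len(t))

# Grid-search least squares, standing in for a nonlinear optimizer.
flows = np.linspace(0.2, 4.0, 381)
sse = np.array([np.sum((tissue_model(t, f) - noisy_tac) ** 2) for f in flows])
flow_hat = float(flows[np.argmin(sse)])
print(flow_hat)
```

The smoothing techniques compared in the abstract act on the data before this fitting step; smoother TACs give a better-behaved least-squares surface and, in turn, more accurate flow estimates.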
KEYWORDS: Blood circulation, Smoothing, Iodine, Computed tomography, Signal attenuation, Data modeling, Arteries, Computer aided diagnosis and therapy, Image filtering, Bone
There is a strong need for an accurate and easily available technique for myocardial blood flow (MBF) estimation to aid in the diagnosis and treatment of coronary artery disease (CAD). Dynamic CT would provide a quick and widely available technique to do so. However, its biggest limitation is the dose imparted to the patient. We are exploring techniques to reduce the patient dose by either reducing the tube current or by reducing the number of temporal frames in the dynamic CT sequence. Both of these dose reduction techniques result in very noisy data. In order to extract the myocardial blood flow information from the noisy sinograms, we have investigated several data-domain smoothing techniques. In our previous work [1], we explored the sinogram restoration technique in both the spatial and temporal domains. In this work, we explore the use of the Karhunen-Loeve (KL) transform to provide temporal smoothing in the sinogram domain. This technique has previously been applied to dynamic image sequences in PET [2, 3]. We find that the cluster-based KL transform method yields noticeable improvement in the smoothness of time attenuation curves (TACs). We make use of a quantitative blood flow model to estimate MBF from these TACs and determine which smoothing method provides the most accurate MBF estimates.
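A minimal sketch of KL-transform temporal smoothing, assuming an ensemble of TACs that share a common temporal structure: estimate the temporal covariance across the ensemble, keep the dominant eigenvectors, and project each noisy curve onto that basis. The synthetic curves, noise level, and choice of three retained components are illustrative, and this sketch omits the clustering step mentioned above:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0.0, 30.0, 1.0)

# Assumed ensemble: 200 sinogram-bin TACs sharing one temporal shape,
# differing only in scale (a stand-in for real dynamic CT data).
scales = rng.uniform(0.5, 2.0, 200)
shape = (t / 8.0) ** 3 * np.exp(3.0 * (1.0 - t / 8.0))
clean = 50.0 * scales[:, None] * shape
noisy = clean + rng.normal(0.0, 5.0, clean.shape)

# KL transform along time: eigenvectors of the temporal covariance.
mean = noisy.mean(axis=0)
cov = np.cov(noisy, rowvar=False)            # 30 x 30 temporal covariance
evals, evecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
basis = evecs[:, -3:]                        # keep 3 dominant components

# Smooth each TAC by projecting it onto the retained basis.
smoothed = mean + (noisy - mean) @ basis @ basis.T

err_noisy = np.sqrt(np.mean((noisy - clean) ** 2))
err_kl = np.sqrt(np.mean((smoothed - clean) ** 2))
print(err_noisy, err_kl)
```

Because the underlying curves here span a low-dimensional temporal subspace, the discarded components carry mostly noise, which is why the projection smooths the TACs with little loss of signal.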
Contrast enhancement on cardiac CT provides valuable information about myocardial perfusion, and methods have been proposed to assess perfusion with static and dynamic acquisitions. There is a lack of knowledge and consensus on the appropriate approach to ensure 1) sufficient diagnostic accuracy for clinical decisions and 2) low radiation doses for patient safety. This work developed a thorough dynamic CT simulation and several accepted blood flow estimation techniques to evaluate the performance of perfusion assessment across a range of acquisition and estimation scenarios. Cardiac CT acquisitions were simulated for a range of flow states (flow = 0.5, 1, 2, or 3 ml/min/g; cardiac output = 3, 5, or 8 L/min). CT acquisitions were simulated with a validated CT simulator incorporating polyenergetic data acquisition and realistic x-ray flux levels for dynamic acquisitions across a range of scenarios, including 1, 2, or 3 s sampling for 30 s with 25, 70, or 140 mAs. Images were generated using conventional image reconstruction with an additional image-based beam hardening correction to account for iodine content. Time attenuation curves were extracted for multiple regions around the myocardium and used to estimate flow. In total, 2,700 independent realizations of dynamic sequences were generated, and multiple MBF estimation methods were applied to each of them. Evaluation of quantitative kinetic modeling yielded blood flow estimates with a root-mean-square error (RMSE) of ~0.6 ml/min/g averaged across multiple scenarios. Semi-quantitative modeling and qualitative static imaging resulted in significantly more error (RMSE ≈ 1.2 ml/min/g for each). For quantitative methods, dose reduction through reduced temporal sampling or reduced tube current had comparable impact on MBF estimate fidelity. On average, half-dose acquisitions increased the RMSE of estimates by only 18%, suggesting that substantial dose reductions can be employed in the context of quantitative myocardial blood flow estimation. In conclusion, quantitative model-based dynamic cardiac CT perfusion assessment is capable of accurately estimating MBF across a range of cardiac outputs and tissue perfusion states, outperforms comparable static perfusion estimates, and is relatively robust to noise and temporal subsampling.
Attenuation effects can be significant in photoacoustic tomography since the generated pressure signals are broadband, and ignoring them may lead to image artifacts and blurring. La Rivière et al. [Opt. Lett. 31(6), 781-783 (2006)] previously derived a method for modeling the attenuation effect and correcting for it in the image reconstruction. This was done by relating the ideal, unattenuated pressure signals to the attenuated pressure signals via an integral operator. We derive an integral operator relating the attenuated pressure signals to the absorbed optical energy for a planar measurement geometry. The matrix operator relating the two quantities is a function of the temporal frequency, the attenuation coefficient, and the two-dimensional spatial frequency. We perform singular-value decomposition (SVD) of this integral operator to study the problem further. We find that the smallest singular values correspond to wavelet-like eigenvectors in which most of the energy is concentrated at times corresponding to greater depths in tissue. This allows us to characterize the ill-posedness of recovering the absorbed optical energy distribution at different depths in an attenuating medium. This integral equation can be inverted using standard SVD methods, and the initial pressure distribution can be recovered. We conduct simulations and derive an algorithm for image reconstruction using SVD for a planar measurement geometry. We also study the noise and resolution properties of this image-reconstruction method.
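The SVD-based inversion can be sketched with a generic ill-conditioned operator. The matrix below (a Gaussian blur with depth-dependent damping) is a stand-in invented for illustration, not the attenuation operator derived in the paper, and the truncation threshold is likewise arbitrary:

```python
import numpy as np

n = 64
depth = np.arange(n)

# Invented stand-in for the attenuation integral operator: a Gaussian
# blur whose columns are damped with depth, so deeper sources contribute
# weaker signals (the hard-to-recover components).
A = (np.exp(-((depth[:, None] - depth[None, :]) ** 2) / (2.0 * 2.0 ** 2))
     * np.exp(-0.03 * depth[None, :]))

x_true = np.zeros(n)
x_true[10] = 1.0                             # shallow absorber
x_true[45] = 1.0                             # deep absorber
y = A @ x_true + np.random.default_rng(3).normal(0.0, 1e-3, n)

# Truncated-SVD inversion: drop singular values below an (arbitrary)
# threshold instead of amplifying the noise they carry.
U, s, Vt = np.linalg.svd(A)
k = int(np.sum(s > 1e-2 * s[0]))
x_hat = Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])
print(k, "of", n, "components kept")
```

Truncation regularizes the inversion: data components tied to the smallest singular values — which, for the attenuation operator in the paper, correspond to greater depths — are discarded rather than amplified, trading depth resolution for noise stability.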
Previous research correcting for variable speed of sound in photoacoustic tomography (PAT) based on a generalized Radon transform (GRT) model assumes a first-order geometrical acoustics (GA) approximation. In the GRT model, the pressure is related to the optical absorption, in an acoustically inhomogeneous medium, through integration over nonspherical isochronous surfaces. Previous research based on the GRT model assumes that the path taken by acoustic rays is linear and neglects amplitude perturbations to the measured pressure. We have derived a higher-order GA expression that takes into account the first-order effect in the amplitude of the measured signal and higher-order perturbations to the travel times. The higher-order perturbation to travel time incorporates the effect of ray bending; incorrect travel times can lead to image distortion and blurring. These corrections are expected to impact image quality and quantitative PAT. We have previously shown in 2-D that perceptible differences in the isochronous surfaces can be seen when the second-order travel-time perturbations are taken into account with a 10% speed-of-sound variation. In this work, we develop iterative image reconstruction algorithms that incorporate this higher-order GA approximation, assuming that the speed of sound map is known. We evaluate the effect of the higher-order GA approximation on image quality and accuracy.
Several papers have recently addressed the issue of estimating chromophore concentration in PAT using multiple wavelengths. The question is how to choose the wavelengths for imaging in PAT that would give the most accurate results. Previous work was based on knowledge of the wavelength dependence of the extinction coefficients of chromophores but did not directly address this question. One might assume that the wavelength that maximizes the extinction coefficient of the chromophore would be the most suitable. However, this may not always be the case, especially if the extinction peak of the chromophore is fairly broad. In this paper, we derive an expression for the variance of the measured signal based on the Cramér-Rao lower bound (CRLB). This lower bound on variance can be evaluated numerically for different wavelengths using the variation of the extinction coefficients and scattering coefficients with wavelength. The wavelength that gives the smallest variance will be optimal for multi-wavelength PAT estimation of chromophore concentration. The expression for the CRLB has been derived analytically for estimating the concentration of oxy- or deoxyhemoglobin contained in a background tissue-like solution, using knowledge of the illumination function in a specific photoacoustic-microscope geometry. This approach could also be extended to the estimation of concentrations of multiple chromophores and to other geometries.
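A numerical sketch of the wavelength-selection idea, under a deliberately simple signal model p(λ) = k·c·ε(λ)·Φ(λ) with additive Gaussian noise, for which the Fisher information for c is (k·ε·Φ)²/σ² and the CRLB is its inverse. The extinction spectrum, background attenuation, target depth, and noise level are all invented for illustration, not measured hemoglobin or tissue properties:

```python
import numpy as np

wavelengths = np.arange(650, 951, 5)   # candidate wavelengths (nm)

# Assumed broad extinction peak of the chromophore (arbitrary units).
eps = np.exp(-((wavelengths - 760.0) ** 2) / (2 * 60.0 ** 2))

# Assumed background effective attenuation (mm^-1), larger at shorter
# wavelengths; the fluence reaching a target at `depth` decays accordingly.
mu_eff = 0.5 + 0.003 * (900.0 - wavelengths)
depth = 5.0                            # mm, assumed target depth
fluence = np.exp(-mu_eff * depth)

sigma = 1.0                            # measurement noise std (arbitrary)
k = 1.0                                # lumped system constant, assumed

# For p = k * c * eps * fluence + noise, the Fisher information for c is
# (dp/dc)^2 / sigma^2, and the CRLB is its inverse (independent of c here).
fisher = (k * eps * fluence) ** 2 / sigma ** 2
crlb = 1.0 / fisher

best = int(wavelengths[np.argmin(crlb)])
peak = int(wavelengths[np.argmax(eps)])
print(peak, best)
```

In this toy example the CRLB-optimal wavelength sits on the long-wavelength side of the extinction peak, because the fluence penalty from background attenuation outweighs the small loss in extinction coefficient — consistent with the point above that the extinction maximum is not always the best choice.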
Our goal is to compare and contrast various image reconstruction algorithms for optoacoustic tomography (OAT) assuming a finite linear aperture of the kind that arises when using a linear-array transducer. Because such transducers generally have tall, narrow elements, they are essentially insensitive to out-of-plane acoustic waves, and the usually 3-D OAT problem reduces to a 2-D problem. Algorithms developed for the 3-D problem may not perform optimally in 2-D. We have implemented and evaluated a number of previously described OAT algorithms, including an exact (in 3-D) Fourier-based algorithm and a synthetic-aperture-based algorithm. We have also implemented a 2-D algorithm developed by Norton for reflection-mode tomography that has not, to the best of our knowledge, been applied to OAT before. Our simulation studies of resolution, contrast, noise properties, and signal detectability measures suggest that the algorithm based on Norton's approach has the best contrast, resolution, and signal detectability.
Attenuation effects can be significant in photoacoustic tomography (PAT) since the measured pressure signals are broadband, and ignoring them may lead to image artifacts and blurring. Previous work by our group derived a method for modeling the attenuation effect and correcting for it in the image reconstruction. This was done by relating the ideal, unattenuated pressure signals to the attenuated pressure signals via an integral operator. In this work, we explore singular-value decomposition (SVD) of a previously derived 3D integral equation that relates the Fourier transform of the measured pressure with respect to time and two spatial components to the 2D spatial Fourier transform of the optical absorption function. We find that the smallest singular values correspond to wavelet-like eigenvectors in which most of the energy is concentrated at times corresponding to greater depths in tissue. This allows us to characterize the ill-posedness of recovering absorption information at depth in an attenuating medium. The integral equation can be inverted using standard SVD methods, and the optical absorption function can be recovered. We will conduct simulations and derive an algorithm for image reconstruction using the SVD of this integral operator.
Previous research correcting for variable speed of sound in photoacoustic tomography (PAT) has used a generalized Radon transform (GRT) model. In this model, the pressure is related to the optical absorption, in an acoustically inhomogeneous medium, through integration over non-spherical isochronous surfaces. This model assumes that the path taken by acoustic rays is linear and neglects amplitude perturbations to the measured pressure. We have derived a higher-order geometrical acoustics (GA) expression, which takes into account the first-order effect in the amplitude of the measured signal and higher-order perturbations to the travel times. The higher-order perturbation to travel time incorporates the effect of ray bending; incorrect travel times can lead to image distortion and blurring. These corrections are expected to impact image quality and quantitative PAT. We have previously shown in 2D that perceptible differences in the isochronous surfaces can be seen when the second-order travel-time perturbations are taken into account with a 10% speed-of-sound variation. In this work, we develop iterative image reconstruction algorithms that incorporate this higher-order GA approximation, assuming that the speed of sound map is known. We evaluate the effect of the higher-order GA approximation on image quality and accuracy.
The goal of this paper is to compare and contrast various image reconstruction algorithms for optoacoustic tomography (OAT) assuming a finite linear aperture of the kind that arises when using a linear-array transducer. Because such transducers generally have tall, narrow elements, they are essentially insensitive to out-of-plane acoustic waves, and the usually 3D OAT problem reduces to a 2D problem. Algorithms developed for the 3D problem may not perform optimally in 2D. We have implemented and evaluated a number of previously described OAT algorithms, including an exact (in 3D) Fourier-based algorithm and a synthetic-aperture-based algorithm. We have also implemented an exact 2D algorithm developed by Norton for reflection-mode tomography that has not, to the best of our knowledge, been applied to OAT before. Our simulation studies of resolution, contrast, noise properties, and signal detectability measures suggest that the algorithm based on Norton's approach has the best contrast, resolution, and signal detectability.