Accurate proton range prediction is critical in proton therapy to ensure conformal tumor dose. Our lab proposed a joint statistical image reconstruction (JSIR) method based on a basis vector model (BVM) for estimating stopping power ratio maps and demonstrated that it outperforms competing dual energy CT (DECT) methods. However, no study has yet examined the clinical utility of our method. Here, we study the resulting dose prediction error: the difference between the dose delivered to tissue as estimated by the more accurate JSIR-BVM method and the planned dose based on single energy CT (SECT).
In previous work, we generated computational breast phantoms using a principal component analysis (PCA) or "Eigenbreast" technique. For this study, we sought to address resolution limitations of the previously synthesized breast phantoms by analyzing a new, higher-resolution human subject data set.
We use PCA to decompose a set of input breast cases; weighted sums along the resulting eigenvectors, or "eigenbreasts," can then generate any number of new cases. Because breasts vary widely in structure and form, we used a series of compressed breasts derived from human subject breast CT volumes to create the eigenbreasts. We used an initial set of thirty-five phantoms from a new CT patient population with 155×155×155 μm³ voxel size. The training set and synthesized phantoms were evaluated by the power law exponent β and by changes in volumetric breast density resulting from the PCA process.
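The PCA-based synthesis described above can be sketched as follows. This is a minimal numpy illustration, not the authors' implementation: the function name and the choice of sampling new component scores from an independent Gaussian fitted to the training scores are assumptions for the sketch.

```python
import numpy as np

def synthesize_phantoms(training, n_new, rng=None):
    """Generate synthetic samples as weighted sums of principal
    components ("eigenbreasts") learned from a training set.

    training : (n_cases, n_voxels) array, each row a flattened phantom.
    Returns a (n_new, n_voxels) array of synthetic phantoms.
    """
    rng = np.random.default_rng(rng)
    mean = training.mean(axis=0)
    centered = training - mean
    # SVD of the centered data: rows of vt are the eigenbreasts
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    # Standard deviation of the training scores along each component
    sigma = s / np.sqrt(len(training) - 1)
    # Sample new score vectors (assumed Gaussian) and project back
    scores = rng.normal(scale=sigma, size=(n_new, len(sigma)))
    return mean + scores @ vt
```

Each synthetic phantom is the training mean plus a random weighted sum of eigenbreasts, so it stays within the subspace spanned by the training cases.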
The synthetic phantoms were found to have β and fibroglandular density distributions similar to those of the training dataset. Individual synthetic phantoms appeared to capture glandular features present in the training phantoms while exhibiting visually distinct texture features. This study shows that the eigenbreast technique can be extended to newer, higher-resolution datasets and can produce synthetic phantoms that retain the quantitative properties of the training data.
KEYWORDS: Breast, Systems modeling, Tissues, Image segmentation, Visualization, Digital breast tomosynthesis, Sensors, Signal attenuation, Reconstruction algorithms, Volume rendering, Radiology
This work uses a cohort of computational, patient-based breast phantoms, with anthropomorphic lesions inserted therein, to determine trends in breast lesion detectability as a function of several clinically relevant variables. One of the proposed measures of local density gives rise to a statistically significant trend in lesion detectability, and lesion type is also found to be a predictor of relative detectability.
KEYWORDS: Breast, Image segmentation, Image processing, Mammography, Digital breast tomosynthesis, 3D modeling, 3D acquisition, Blood vessels, Mathematical modeling, Digital mammography
While patient-based breast phantoms are realistic, they are limited to low resolution by the image acquisition and segmentation process. The purpose of this study is to restore the high-frequency components of the patient-based phantoms by adding power law noise (PLN) and breast structures generated from mathematical models. First, 3D radially symmetric PLN with β=3 was added at the boundary between adipose and glandular tissue to connect broken tissue and create a high-frequency contour of the glandular tissue. Next, selected high-frequency features from the FDA rule-based computational phantom (Cooper's ligaments, ductal network, and blood vessels) were fused into the phantom. The effects of this enhancement were demonstrated with 2D mammography projections and digital breast tomosynthesis (DBT) reconstruction volumes. The addition of PLN and rule-based models leads to a continuous decrease in β; the new value of 2.76 is similar to that typically found for reconstructed DBT volumes. The new combined breast phantoms retain the realism of segmentation and gain higher resolution after restoration.
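A 3D radially symmetric power-law noise volume of the kind described above can be generated by shaping the amplitude spectrum of random-phase noise in frequency space. The sketch below is a generic numpy construction under the standard convention that the power spectrum falls as 1/f^β (so the amplitude falls as f^(-β/2)); the function name and normalization are illustrative, not from the paper.

```python
import numpy as np

def radial_power_law_noise(n, beta=3.0, rng=None):
    """Generate an n*n*n noise volume whose power spectrum falls off
    as 1/f^beta, radially symmetric in 3D frequency space."""
    rng = np.random.default_rng(rng)
    f = np.fft.fftfreq(n)
    fx, fy, fz = np.meshgrid(f, f, f, indexing="ij")
    radius = np.sqrt(fx**2 + fy**2 + fz**2)
    radius[0, 0, 0] = 1.0            # placeholder; DC term zeroed below
    # Amplitude ~ f^(-beta/2) gives power ~ f^(-beta)
    amplitude = radius ** (-beta / 2.0)
    amplitude[0, 0, 0] = 0.0         # zero-mean: drop the DC component
    phase = rng.uniform(0.0, 2.0 * np.pi, size=(n, n, n))
    noise = np.fft.ifftn(amplitude * np.exp(1j * phase)).real
    # Normalize to zero mean, unit variance before scaling into tissue
    return (noise - noise.mean()) / noise.std()
```

In practice the normalized volume would then be scaled and added only near the adipose-glandular boundary, as the study describes.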
KEYWORDS: Video, Denoising, Neural networks, Associative arrays, Motion models, Video processing, Video acceleration, Visualization, Visual process modeling, 3D modeling
Video denoising can be described as the problem of mapping a sequence of noisy frames to clean ones. We propose a deep architecture based on recurrent neural networks (RNNs) for video denoising. The model learns a patch-based end-to-end mapping between noisy and clean video sequences: it takes corrupted video sequences as input and outputs clean ones. Our deep network, which we refer to as a deep recurrent neural network (deep RNN or DRNN), stacks RNN layers such that each layer receives the hidden state of the previous layer as input. Experiments show that (i) the recurrent architecture extracts motion information across the temporal domain and benefits video denoising, (ii) the deep architecture has sufficient capacity to express the mapping from corrupted to clean videos, and (iii) the model generalizes to learn different mappings from videos corrupted by different types of noise (e.g., Poisson-Gaussian noise). By training on large video databases, we are able to compete with existing video denoising methods.
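The stacking scheme described in the abstract, where each RNN layer consumes the hidden-state sequence of the layer below, can be illustrated with a minimal numpy forward pass. This is a sketch of the generic stacked-RNN structure, not the paper's trained model; the weight layout, tanh activation, and linear readout are assumptions.

```python
import numpy as np

def drnn_forward(frames, weights):
    """Forward pass of a stacked ("deep") RNN over a sequence of noisy
    patches. Each layer runs a vanilla RNN over the output sequence of
    the layer below; a linear readout maps the top hidden state back to
    a denoised patch per frame. All weight names are illustrative.

    frames  : list of 1D arrays (flattened noisy patches, one per frame)
    weights : {"layers": [(w_in, w_rec, b), ...],
               "readout": (w_out, b_out)}
    """
    seq = frames
    for w_in, w_rec, b in weights["layers"]:
        h = np.zeros(w_rec.shape[0])
        outputs = []
        for x in seq:                       # unroll over time
            h = np.tanh(w_in @ x + w_rec @ h + b)
            outputs.append(h)
        seq = outputs                       # feed hidden states upward
    w_out, b_out = weights["readout"]
    return [w_out @ h + b_out for h in seq]
```

The recurrence carries information across frames (the temporal/motion cue the abstract refers to), while stacking layers increases the capacity of the per-frame mapping.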