In this study, we address the challenge of enhancing image quality and spatial resolution in computed tomography (CT) imaging by introducing the simulation and fabrication of high-aspect-ratio, point-like transmission targets. Utilizing advanced electroplating techniques traditionally employed in the fabrication of Through Substrate Via (TSV) interconnects for CMOS circuitry, we successfully embed copper targets within silicon substrates. This method allows us to create high-aspect-ratio features specifically designed for x-ray transmission targets, resulting in micro-targets with an increased target volume compared to conventional evaporated surface targets. Furthermore, we present simulation results of the x-ray spectrum generated by these targets, demonstrating their potential to significantly improve both image quality and spatial resolution in CT applications. Our findings suggest that leveraging advanced fabrication techniques can open new avenues for the development of enhanced imaging technologies in medical diagnostics and beyond.
Applications such as counterfeit identification, quality control, and non-destructive material identification benefit from improved spatial and compositional analysis. X-ray computed tomography is used in these applications but is limited by the X-ray focal spot size and the lack of energy-resolved data. Recently developed hyperspectral X-ray detectors estimate photon energy, which enables composition analysis but offers limited spatial resolution. Moving beyond bulk homogeneous transmission anodes toward multi-metal patterned anodes enables improvements in spatial resolution and signal-to-noise ratio in these hyperspectral X-ray imaging systems. We aim to design and fabricate transmission anodes that facilitate confirmation of previous simulation results. These anodes are fabricated on diamond substrates with conventional photolithography and metal deposition processes. The final transmission anode design consists of a cluster of three disjoint metal bumps selected from molybdenum, silver, samarium, tungsten, and gold. These metals are chosen because their k-lines fall within distinct energy intervals of interest and because they are readily available in standard clean rooms. The diamond substrate is chosen for its high thermal conductivity and high X-ray transmittance. The feature size of the metal bumps is chosen such that the cluster is smaller than the 100 µm diameter of the impinging electron beam in the X-ray tube, effectively shrinking the X-ray focal spot in the selected energy bands. Once fabricated, the transmission anode is packaged in a stainless-steel holder that can be retrofitted into our existing X-ray tube. Innovations in anode design enable an inexpensive and simple method to improve existing X-ray imaging systems.
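As a point of reference for the energy intervals mentioned above, the following Python sketch tabulates approximate Kα1 emission energies for the candidate anode metals and derives illustrative energy windows around them; the window half-width is an assumed value for illustration, not a design parameter from this work.

```python
# Approximate K-alpha1 emission energies (keV) for the candidate anode metals
# (textbook values, rounded); not taken from this paper.
K_ALPHA1_KEV = {"Mo": 17.5, "Ag": 22.2, "Sm": 40.1, "W": 59.3, "Au": 68.8}

# Illustrative energy windows centered on each k-line; the 1.5 keV half-width
# is an assumption for this sketch.
HALF_WIDTH_KEV = 1.5
windows = {metal: (e - HALF_WIDTH_KEV, e + HALF_WIDTH_KEV)
           for metal, e in K_ALPHA1_KEV.items()}
```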
The goal of this work is to develop an x-ray computed tomography (CT) capability that delivers improved imaging resolution while reliably identifying the material composition of the interrogated object. By pairing a hyperspectral x-ray detector with a multi-metal patterned anode, one can simultaneously enhance the achievable spatial resolution and improve the spectral signal by using energy intervals that capture the k-lines of each metal present in the anode. This paper will present preliminary Monte Carlo results of the anode design and simulated CT datasets, along with the machine learning techniques applied to identify materials and their concentrations.
KEYWORDS: Imaging systems, Data modeling, Calibration, Computing systems, Sensors, Data acquisition, Monte Carlo methods, Computed tomography, Signal attenuation, Data storage
Sandia National Laboratories has developed a model characterizing the nonlinear encoding operator of the world's first hyperspectral x-ray computed tomography (H-CT) system as a sequence of discrete-to-discrete, linear image system matrices across unique and narrow energy windows. In fields such as national security, industry, and medicine, H-CT has various applications in the non-destructive analysis of objects, such as material identification, anomaly detection, and quality assurance. However, many approaches to computed tomography (CT) make gross assumptions about the image formation process in order to apply post-processing and reconstruction techniques, leading to inferior data and, in turn, faulty measurements, assessments, and quantifications. To abate this challenge, Sandia National Laboratories has modeled the H-CT system through a set of point response functions, which can be used for calibration and analysis of the real-world system. This work presents the numerical method used to produce the model through the collection of data needed to describe the system; the parameterization used to compress the model; and the decompression of the model for computation. By using this linear model, large amounts of accurate synthetic H-CT data can be efficiently produced, greatly reducing the costs associated with physical H-CT scans. Furthermore, successfully approximating the encoding operator for the H-CT system enables quick assessment of H-CT behavior for various applications in high-performance reconstruction, sensitivity analysis, and machine learning.
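As a rough illustration of the per-energy-window linear model described above, the Python sketch below shows how synthetic hyperspectral data would be produced once one system matrix per window is available; the shapes are illustrative, and random matrices stand in for the calibrated operators.

```python
import numpy as np

# Minimal sketch: one discrete-to-discrete linear system matrix per narrow
# energy window. In the real model each H_e would be assembled from measured
# point response functions; random placeholders are used here.
n_voxels, n_meas, n_windows = 1024, 2048, 8
rng = np.random.default_rng(0)
H = [1e-3 * rng.random((n_meas, n_voxels)) for _ in range(n_windows)]

x = rng.random(n_voxels)               # flattened object (attenuation map)
y = np.stack([H_e @ x for H_e in H])   # synthetic H-CT data: (windows, measurements)
```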
Sandia National Laboratories has developed a method that applies machine learning methods to high-energy spectral x-ray computed tomography data to identify material composition for every reconstructed voxel in the field-of-view. While initial experiments led by Koundinyan et al. demonstrated that supervised machine learning techniques perform well in identifying a variety of classes of materials, this work presents an unsupervised approach that differentiates isolated materials with highly similar properties, and can be applied on spectral computed tomography data to identify materials more accurately compared to traditional performance. Additionally, if regions of the spectrum for multiple voxels become unusable due to artifacts, this method can still reliably perform material identification. This enhanced capability can tremendously impact fields in security, industry, and medicine that leverage non-destructive evaluation for detection, verification, and validation applications.
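The abstract does not specify the clustering algorithm, so the following Python sketch uses k-means on normalized per-voxel spectra purely as an illustration of the unsupervised setting; all shapes and data are synthetic placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical spectral reconstruction flattened to (voxels, energy_bins).
rng = np.random.default_rng(1)
spectra = rng.random((10000, 128))

# Normalizing each voxel spectrum makes the clustering key on spectral shape
# (a material signature) rather than overall density.
norms = np.linalg.norm(spectra, axis=1, keepdims=True)
shapes = spectra / np.clip(norms, 1e-12, None)

# k-means stands in for whatever unsupervised method the work actually uses;
# the cluster count is an assumption.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(shapes)
```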
KEYWORDS: Computer security, Radiography, Data acquisition, Sensors, Absorption, Nondestructive evaluation, X-rays, Computed tomography, Interfaces, Signal to noise ratio
Sandia National Laboratories has recently developed the capability to acquire multi-channel radiographs for multiple research and development applications in industry and security. This capability allows x-ray radiographs or sinogram data to be acquired at up to 300 keV with up to 128 channels per pixel. This work will investigate whether multiple quality metrics for computed tomography can benefit from binned projection data compared to traditionally acquired grayscale sinogram data. Features and metrics to be evaluated include the ability to distinguish between two different materials with similar absorption properties, artifact reduction, and signal-to-noise ratio for both raw data and reconstructed volumetric data. The impact of this technology on non-destructive evaluation, national security, and industry is wide-ranging and has the potential to improve upon many inspection methods such as dual-energy methods, material identification, object segmentation, and computer vision on radiographs.
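To make the binned-versus-grayscale comparison concrete, the following Python sketch computes a simple region-of-interest signal-to-noise ratio for an energy-integrated sinogram and for a single energy bin; the Poisson data and ROI are synthetic stand-ins, not measurements from this work.

```python
import numpy as np

def roi_snr(image, roi):
    """Mean/standard-deviation SNR inside a (nominally uniform) region."""
    vals = image[roi]
    return vals.mean() / vals.std()

rng = np.random.default_rng(2)
binned = rng.poisson(lam=50, size=(128, 256, 256)).astype(float)  # (bins, rows, cols)
integrated = binned.sum(axis=0)  # traditional grayscale sinogram

roi = (slice(100, 150), slice(100, 150))
print("integrated SNR:", roi_snr(integrated, roi))
print("single-bin SNR:", roi_snr(binned[64], roi))
```

The per-bin SNR penalty visible here is exactly why the trade-off against improved material discrimination needs to be quantified.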
This work seeks to develop an autonomous optimization of input computational resource parameters for arbitrary big-data computed tomography (CT) configurations. It is well known that graphics processing units (GPUs) have been a boon to many high-performance applications, including CT. The reconstruction task has both colossal computational and data throughput requirements that easily tax high-end GPUs to their limit. For big-data industrial and research applications, the burden is exacerbated by the use of high pixel count detectors (≥ 16 megapixels) and the large number of projections needed to meet Nyquist sampling requirements, resulting in datasets up to terabytes in size. Previous work has shown that the GPU kernels can be optimized to efficiently handle big data; however, as this work will show, some sensitivities exist with respect to the tunable input parameters that can exact an exaggerated toll on reconstruction performance. This work will investigate the input parameter space for various relevant and future-sized datasets and will present a calibration approach to optimize reconstruction performance for detectors of varying sizes, geometries, and graphics processing resources. This work has the potential to dramatically improve many non-destructive evaluation and inspection applications in industry, security, and research where reconstruction rate is the main bottleneck of the resource chain. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
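One simple form such a calibration could take is an exhaustive timing sweep over the tunable inputs, as in the hedged Python sketch below; reconstruct_subset and the parameter names are hypothetical stand-ins for the real kernel launch and its inputs.

```python
import itertools
import time

def reconstruct_subset(chunk_rows, threads_per_block, streams):
    """Hypothetical stand-in for a timed, representative reconstruction task."""
    time.sleep(0.01)  # placeholder for the actual GPU work

best = None
for chunk_rows, tpb, streams in itertools.product((64, 128, 256),
                                                  (128, 256, 512),
                                                  (1, 2, 4)):
    t0 = time.perf_counter()
    reconstruct_subset(chunk_rows, tpb, streams)
    elapsed = time.perf_counter() - t0
    if best is None or elapsed < best[0]:
        best = (elapsed, chunk_rows, tpb, streams)

print("selected parameters:", best[1:])
```

A real calibration would likely replace the exhaustive sweep with a smarter search, but the structure is the same: measure, compare, select.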
This work will investigate the imaging capabilities of the Multix multi-channel linear array detector and its potential suitability for big-data industrial and security applications compared with currently deployed detectors. Multi-channel imaging data holds great promise not only for finer-grained materials classification, but also for materials identification and elevated data quality in various radiography and computed tomography applications. The potential pitfall is the signal quality contained within individual channels, as well as the exposure and acquisition time required to obtain images comparable to those of traditional configurations. This work will present results for these detector technologies as they pertain to a subset of materials of interest to the industrial and security communities; namely, water, copper, lead, polyethylene, and tin.
Although object detection, recognition, and identification are very active areas of computer vision research, many of the available tools to aid in these processes are designed with only photographs in mind. Although some algorithms used specifically for feature detection and identification may not take explicit advantage of the colors available in the image, they still under-perform on radiographs, which are grayscale images. We are especially interested in the robustness of these algorithms, specifically their performance on a preexisting database of X-ray radiographs in compressed JPEG form, with multiple ways of describing pixel information. We will review various aspects of the performance of available feature detection and identification systems, including MATLAB's Computer Vision toolbox, VLFeat, and OpenCV, on our non-ideal database. In the process, we will explore possible reasons for the algorithms' lessened ability to detect and identify features from the X-ray radiographs.
While object detection is a relatively well-developed field with respect to visible-light photographs, there are significantly fewer algorithms designed to work with other imaging modalities. X-ray radiographs have many unique characteristics that introduce additional challenges and can cause common image processing and object detection algorithms to fail. Examples of these problematic attributes include the fact that radiographs are represented only in grayscale with similar textures, and that transmission overlap occurs when multiple objects are overlaid on top of each other. In this paper we not only analyze the effectiveness of common object detection techniques as applied to our specific database, but also outline how we combined various techniques to improve overall performance. While significant strides have been made toward developing a robust object detection algorithm for use with the given database, it is still a work in progress. Further research will be needed to deal with the specific obstacles posed by radiographs and X-ray imaging systems. Success in this project would have disruptive repercussions in fields ranging from medical imaging to manufacturing quality assurance and national security.
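As one example of the kind of combination this describes, the Python/OpenCV sketch below pairs local contrast enhancement with a standard feature detector; the file path and parameter values are illustrative, and this is not necessarily the pipeline used in the paper.

```python
import cv2

# "radiograph.jpg" is a placeholder path. CLAHE restores local contrast in a
# grayscale radiograph before feature detection.
img = cv2.imread("radiograph.jpg", cv2.IMREAD_GRAYSCALE)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)

# ORB is used here simply as a representative detector/descriptor.
orb = cv2.ORB_create(nfeatures=2000)
keypoints, descriptors = orb.detectAndCompute(enhanced, None)
```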
This position paper describes a potential implementation of a large-scale grating-based X-ray Phase Contrast Imaging (XPCI) simulation tool, along with the associated challenges in its implementation. This work proposes an implementation based on the approach of Peterzol et al., where each grating is treated as an object imaged in the field-of-view. Two main challenges exist: the first is the required sampling and information management in object space due to the micron-scale periods of each grating propagating over significant distances; the second is maintaining algorithmic numerical stability for imaging systems relevant to industrial applications. We present preliminary results for a numerical stability study using a simplified algorithm that performs Talbot imaging in a big-data context.
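For orientation, a minimal version of the Talbot-imaging step can be written with the angular spectrum method, as in the Python sketch below; the grid, period, and energy are illustrative, and the paper's point is precisely that realistic XPCI geometries demand far larger grids than this.

```python
import numpy as np

# Free-space propagation of a binary grating to the Talbot distance
# z_T = 2 p^2 / wavelength via the angular spectrum method. All parameters
# are illustrative, not values from this work.
wavelength = 0.5e-10          # ~25 keV x-rays (m)
p = 4e-6                      # grating period (m)
n, dx = 2**16, 5e-8           # 1-D grid; realistic problems need far more samples

x = (np.arange(n) - n // 2) * dx
u0 = (np.mod(x, p) < p / 2).astype(complex)   # binary amplitude grating

fx = np.fft.fftfreq(n, dx)
kz = 2 * np.pi * np.sqrt(np.maximum(1.0 / wavelength**2 - fx**2, 0.0))
z_talbot = 2 * p**2 / wavelength
u1 = np.fft.ifft(np.fft.fft(u0) * np.exp(1j * kz * z_talbot))
intensity = np.abs(u1)**2     # a self-image of the grating appears at z_T
```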
Although there have been great strides in object recognition with optical images (photographs), there has been comparatively little research into object recognition for X-ray radiographs. Our exploratory work contributes to this area by creating an object recognition system designed to recognize components from a related database of radiographs. Object recognition for radiographs must be approached differently than for optical images, because radiographs have much less color-based information to distinguish objects, and they exhibit transmission overlap that alters perceived object shapes. The dataset used in this work contained more than 55,000 intermixed radiographs and photographs, all in a compressed JPEG form and with multiple ways of describing pixel information. For this work, a robust and efficient system is needed to combat problems presented by properties of the X-ray imaging modality, the large size of the given database, and the quality of the images contained in said database. We have explored various pre-processing techniques to clean the cluttered and low-quality images in the database, and we have developed our object recognition system by combining multiple object detection and feature extraction methods. We present the preliminary results of the still-evolving hybrid object recognition system.
This work describes a high-performance approach to radiograph (i.e., X-ray image) simulation for arbitrary objects. The generation of radiographs is more generally known as the forward projection imaging model. The formation of radiographs is very computationally expensive and is not typically attempted for large-scale applications such as industrial radiography. The approach described in this work revolves around a single-GPU implementation that performs the attenuation calculation in a massively parallel environment. Additionally, further performance gains are realized by exploiting GPU-specific hardware. Early results show that using a single GPU can increase computational performance by three orders of magnitude for volumes of 1000³ voxels and images with 1000² pixels.
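The underlying attenuation calculation is the Beer-Lambert line integral; the Python sketch below shows the axis-aligned, parallel-beam special case on a small synthetic volume, whereas the paper's contribution is performing the equivalent ray sums massively in parallel on a GPU for arbitrary geometries.

```python
import numpy as np

# Synthetic attenuation volume; values and size are placeholders.
rng = np.random.default_rng(3)
mu = 0.01 * rng.random((256, 256, 256), dtype=np.float32)  # 1/mm per voxel
voxel_mm = 0.1

# Beer-Lambert: I = I0 * exp(-integral of mu along the ray). For a beam
# aligned with axis 0, the line integral reduces to a scaled sum.
path = mu.sum(axis=0) * voxel_mm
I0 = 1.0
radiograph = I0 * np.exp(-path)   # one detector-pixel intensity per ray
```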
Estimation of the x-ray attenuation properties of an object with respect to the energy emitted from the source is a challenging task for traditional Bremsstrahlung sources. This exploratory work attempts to estimate the x-ray attenuation profile for the energy range of a given Bremsstrahlung profile. Previous work has shown that calculating a single effective attenuation value for a polychromatic source is not accurate due to the non-linearities associated with the image formation process. Instead, we completely characterize the imaging system virtually and utilize an iterative search method/constrained optimization technique to approximate the attenuation profile of the object of interest. This work presents preliminary results from various approaches that were investigated. The early results illustrate the challenges associated with these techniques and the potential for obtaining an accurate estimate of the attenuation profile for objects composed of homogeneous materials.
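One concrete form such a search could take is a bounded least-squares fit of the attenuation profile to the polychromatic Beer-Lambert model, sketched below in Python on synthetic data; the spectrum, thickness set, and ground-truth profile are all fabricated for illustration, and the ill-conditioning of the fit reflects the very challenge the abstract describes.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative source spectrum and measurement geometry (not from this work).
energies = np.linspace(20, 160, 29)                  # keV bins
S = np.exp(-(energies - 60.0)**2 / (2 * 30.0**2))    # Bremsstrahlung-like stand-in
S /= S.sum()
thicknesses = np.linspace(0.5, 40.0, 60)             # mm

def forward(mu):
    """Polychromatic Beer-Lambert: transmitted fraction per thickness."""
    return np.array([(S * np.exp(-mu * t)).sum() for t in thicknesses])

mu_true = 0.5 * (energies / 60.0) ** -2.5            # synthetic ground truth
measured = forward(mu_true)

# Non-negative attenuation profile fit; the problem is badly conditioned,
# which is the core difficulty discussed above.
fit = least_squares(lambda mu: forward(mu) - measured,
                    x0=np.full_like(energies, 0.1),
                    bounds=(0.0, np.inf))
mu_est = fit.x
```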
This paper will investigate energy efficiency for various real-world industrial computed tomography reconstruction algorithms, in both CPU- and GPU-based implementations. This work shows that the energy required for a given reconstruction depends on performance and problem size. There are many ways to describe performance and energy efficiency, thus this work will investigate multiple metrics including performance-per-watt, energy-delay product, and energy consumption. This work found that irregular GPU-based approaches [1] realized tremendous savings in energy consumption when compared to CPU implementations, while also significantly improving the performance-per-watt and energy-delay product metrics. Additional energy savings and other metric improvements were realized on the GPU-based reconstructions by improving storage I/O through a parallel MIMD-like modularization of the compute and I/O tasks.
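For clarity, the metrics named above can be written out directly; the numbers in this Python sketch are illustrative placeholders, not measurements from the paper.

```python
# Worked definitions of the efficiency metrics for one reconstruction run.
runtime_s = 120.0     # wall-clock reconstruction time (illustrative)
avg_power_w = 250.0   # average device power over the run (illustrative)
work_done = 1e12      # e.g. voxel updates in the run (illustrative)

throughput = work_done / runtime_s           # performance (voxel updates/s)
energy_j = avg_power_w * runtime_s           # energy consumption (J)
perf_per_watt = throughput / avg_power_w     # performance-per-watt
edp = energy_j * runtime_s                   # energy-delay product (J*s)
```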
This work will present the utilization of the massively multi-threaded environment of graphics processors (GPUs) to improve the computation time needed to reconstruct large computed tomography (CT) datasets, along with the arising challenges for system implementation. Intelligent algorithm design for massively multi-threaded graphics processors differs greatly from traditional CPU algorithm design. Although a brute-force port of a CPU algorithm to a GPU kernel may yield non-trivial performance gains, further measurable gains can be achieved by designing the algorithm with consideration given to the computing architecture. Previous work has shown that CT reconstruction on GPUs becomes an irregular problem for large datasets (10 GB-4 TB) [1], thus memory bandwidth at the host and device levels becomes a significant bottleneck for industrial CT applications. We present a set of GPU reconstruction kernels that utilize various GPU-specific optimizations and measure their performance impact.
Although there has been progress in applying GPU technology to computed tomography (CT) reconstruction algorithms, much of the work has concentrated on optimizing reconstruction performance for smaller, medical-scale datasets. Industrial CT datasets can vary widely in size and number of projections. With new advancements in high-resolution cameras, it is entirely possible that the industrial CT community may soon need to pursue a 100-megapixel detector for CT applications. To reconstruct such a massive dataset, simply adding extra GPUs would not be an option, as memory and storage bottlenecks would result in prolonged periods of GPU downtime, thus negating performance gains. Additionally, current reconstruction algorithms would not be sufficient due to the various bottlenecks in the processor hardware. Past work has shown that CT reconstruction is an irregular problem for large-scale datasets on a GPU due to the massively parallel environment. This work proposes a high-performance, multi-GPU, modularized approach to reconstruction in which computation, memory transfers, and disk I/O are optimized to occur in parallel while accommodating the irregular nature of the computation kernel. Our approach utilizes a dynamic MIMD-type architecture in a hybrid environment of CUDA and OpenMP. The modularized approach showed an improvement in load-balancing and performance such that a 1-trillion-voxel volume was reconstructed from 10,000 100-megapixel projections in less than a day.
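The compute/transfer/I-O overlap can be pictured as a small producer-consumer pipeline; the Python sketch below captures only the structure, with load_projection, backproject, and store_slab as hypothetical stand-ins for the real disk reads, CUDA kernels, and disk writes.

```python
import queue
import threading

def load_projection(i):      # placeholder for a disk read
    return i

def backproject(p):          # placeholder for a GPU kernel launch
    return p

def store_slab(s):           # placeholder for a disk write
    pass

# Bounded queues throttle the stages so I/O and compute overlap without
# exhausting memory.
in_q, out_q = queue.Queue(maxsize=4), queue.Queue(maxsize=4)

def reader(n):
    for i in range(n):
        in_q.put(load_projection(i))
    in_q.put(None)                       # end-of-stream sentinel

def computer():
    while (p := in_q.get()) is not None:
        out_q.put(backproject(p))
    out_q.put(None)

def writer():
    while (s := out_q.get()) is not None:
        store_slab(s)

threads = [threading.Thread(target=t)
           for t in (lambda: reader(100), computer, writer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```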