General-purpose graphics processing units (GPGPUs) have emerged in recent years as the
powerhouse behind many large-scale computing efforts. For example, the recently unveiled
world's fastest supercomputer achieved this feat by utilizing low-cost, high-performance
GPGPUs.
In the past year, the synthetic aperture radar (SAR) community has
begun to adopt GPGPUs as well. To date, their use has been limited mainly to SAR image
formation, where tremendous performance improvements over equivalent CPU-based algorithms
have been demonstrated. However, image formation is only one of many steps required for
SAR image exploitation. Image registration, filtering, interpolation, and interferometric
flattening are equally important in generating many of the desired output products, such as
coherence change detection (CCD) products and terrain-adjusted interferograms. We will
demonstrate that by transitioning the entire SAR exploitation processing chain, from image
formation through product generation, onto a GPGPU, it is possible to achieve more than an
order of magnitude improvement in performance. In this paper we will
review results presented at last year's SPIE conference regarding SAR image formation and
present new results obtained for coherent exploitation of SAR data including CCD and
interferometric SAR processing. In addition to presenting these results, we will discuss the
challenges associated with migrating CPU-based exploitation algorithms to the GPGPU
environment, as well as possible future improvements using these powerful new devices and
their associated software tools.
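The coherence products mentioned above can be made concrete with a short sketch. Below is a minimal windowed sample-coherence estimator of the kind that underlies CCD: the complex cross-correlation of two co-registered SAR images, normalized by the per-image power, evaluated over a small sliding window. The function name and window size are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ccd_map(f, g, win=5):
    """Windowed sample-coherence magnitude between two co-registered
    complex SAR images f and g (illustrative sketch, not the paper's code).

    Per window: gamma = |sum(f * conj(g))| / sqrt(sum|f|^2 * sum|g|^2).
    Values near 1 indicate an unchanged scene; drops flag change.
    """
    h = win // 2
    rows, cols = f.shape
    gamma = np.zeros((rows, cols))
    for r in range(h, rows - h):
        for c in range(h, cols - h):
            fw = f[r - h:r + h + 1, c - h:c + h + 1]
            gw = g[r - h:r + h + 1, c - h:c + h + 1]
            num = abs(np.vdot(gw, fw))  # vdot conjugates its first argument
            den = np.sqrt((np.abs(fw) ** 2).sum() * (np.abs(gw) ** 2).sum())
            gamma[r, c] = num / den if den > 0 else 0.0
    return gamma
```

Each output pixel is computed independently of its neighbors, which is precisely why this kind of per-pixel loop maps naturally onto thousands of GPGPU threads.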
With sensor technologies rapidly improving, the need to process ever-larger data sets has become the main
bottleneck in many real-time applications associated with persistent surveillance, such as VideoSAR and volumetric
SAR imaging. In many instances image fidelity is of utmost importance, which constrains the choice of
algorithm used to generate the desired data products. The performance improvements afforded by algorithms
such as the fast back projection (FBP) algorithm are attractive in such environments. Unfortunately, even though the
FBP algorithm is orders of magnitude faster than a traditional back projection algorithm, it is still incapable of meeting the strict
requirements of some of the aforementioned real-time applications. However, the emergence of general-purpose
graphics processing units (GPGPUs) in recent years has afforded many scientific fields orders-of-magnitude
performance improvements across a large variety of applications. This is also the case for the FBP algorithm. By
distributing the processing across the 480 processing cores of a single video card, it is possible to achieve substantial
performance improvements compared to the serial FBP algorithm. Considering that many PCs are capable of housing
three to four video cards, more than two orders of magnitude improvement in performance is possible with
the parallel approach. This technology provides the ability to process enormous datasets in the field without the
supercomputers that have to date been the only means of keeping pace with the incoming data.
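To make the serial baseline concrete, the following is a minimal sketch of time-domain back projection, the algorithm that FBP and the GPGPU variants accelerate. The geometry, nearest-neighbor range-bin lookup, and all parameter names are simplifying assumptions, not the paper's implementation; a real system would interpolate between range bins, and a GPGPU version would assign each image pixel to its own thread.

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def backproject(range_profiles, platform_pos, pixel_pos, fc, dr, r0):
    """Serial time-domain back projection (illustrative sketch).

    range_profiles : (n_pulses, n_bins) complex range-compressed data
    platform_pos   : (n_pulses, 3) antenna phase-center positions, m
    pixel_pos      : (n_pixels, 3) ground-pixel positions, m
    fc             : center frequency, Hz
    dr             : range-bin spacing, m
    r0             : range to the first bin, m
    """
    lam = C / fc
    img = np.zeros(len(pixel_pos), dtype=complex)
    for s, ant in zip(range_profiles, platform_pos):
        # Distance from this pulse's antenna position to every pixel.
        R = np.linalg.norm(pixel_pos - ant, axis=1)
        # Nearest-neighbor bin lookup (real systems interpolate here).
        bins = np.round((R - r0) / dr).astype(int)
        valid = (bins >= 0) & (bins < s.shape[0])
        # Remove the two-way propagation phase and accumulate coherently.
        img[valid] += s[bins[valid]] * np.exp(4j * np.pi * R[valid] / lam)
    return img
```

The cost is O(n_pulses x n_pixels), which is why the serial version cannot keep up with persistent-surveillance data rates and why distributing the per-pixel work across hundreds of GPU cores pays off so directly.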