Digital adaptive optics with interferometric homodyne encoding for mitigating atmospheric turbulence
Abstract

Digital adaptive optics (DAO) created using homodyne encoding can mitigate atmospheric turbulence in passive imaging systems. This work demonstrates a self-referencing homodyne interferometry technique that combines the passive imaging utility of multiframe algorithmic procedures with the single-frame correction capability of the Shack–Hartmann adaptive optics technique. This paper presents image reconstruction improvements through the addition of (1) phase diversity modulation techniques within the interferometry reconstruction algorithm and (2) temporal image processing techniques applied after the interferometry reconstruction algorithm. By imaging quick response codes through a turbulent air chamber in the laboratory, it was possible to quantify the machine-readable performance gain provided by DAO when compared with a standard imaging camera. Results from this research verify that DAO from homodyne encoding provides turbulence mitigation for single frames of data, paving the way for environmentally robust, high-speed, self-contained imaging systems.

1.

Introduction

Atmospheric turbulence limits all ground-based, long-range imaging systems and can blur important target features, degrading the information collected by the system.1–6 Degradation from ideal, diffraction-limited resolution is caused by phase errors that accumulate as photons propagate along varying path lengths. Producing high-quality images immune to these refractive index fluctuations is universally beneficial for applications ranging from astronomy to medical imaging. As accurate, precise, and high-speed information plays an ever-greater role in our daily lives, high-quality imagery provides immense value.

One of the oldest forms of atmospheric compensation was developed for astronomical telescopes, as the atmosphere blurs distant objects under investigation. Specifically, difficulties arose when trying to differentiate whether a distant celestial body was one object or two separate but closely spaced objects. To compensate for atmospheric turbulence degradation, adaptive optics (AO) techniques have been developed. Traditional AO techniques require a beacon laser, a deformable mirror, and some type of wavefront sensor. The laser creates an artificial guide star in the upper atmosphere by interacting with sodium atoms to emit light as a point source.6 With the point source providing a locally known reference, the wavefront sensor measures the atmospheric error. A common AO method leverages a Shack–Hartmann6 wavefront sensor; this device uses a microlens array to estimate phase aberrations. More recent wavefront sensor developments include holographic and sensor-less techniques.7 These systems compensate for turbulence in real time and use the full optical bandwidth of the receiving telescope. However, such systems still rely on reference source beacons in conjunction with deformable mirrors to physically apply the compensations and are thus generally expensive and complicated to operate.

Other turbulence compensation techniques rely on computational postprocessing of collected image data rather than upfront optical correction. Techniques such as speckle imaging8 and contrast enhancement9 are extremely useful for applications involving existing imaging systems as they do not require hardware modification. Lucky imaging requires a high-frame-rate camera and uses temporal atmospheric fluctuations to its advantage by capturing moments of minimal turbulence and stitching together many frames of data to create an enhanced frame. Multiframe blind deconvolution (MFBD) leverages multiple frames of data to estimate atmospheric blur and then uses deconvolution to remove this blur from the image. One of the most common uses of MFBD is to enhance images of satellites collected by ground-based telescopes.10,11 Unfortunately, these multiframe approaches are often computationally expensive, can introduce unacceptable image processing delays, and generally require that the target be observed over a significant duration.

Hybrid opto-computational techniques that leverage self-referencing interferometry enable single-frame turbulence mitigation while reducing hardware complexity. Previous computational studies on six-aperture pupil relays showed that multi-aperture systems enable phase correction (including pistons, tips, and tilts) and enhancement of images degraded by turbulence.12 A seven-aperture imaging system with phase-correction algorithms demonstrated turbulence mitigation, although it was an active system requiring a coherent illumination source.13 A technique leveraging diffraction gratings via two-aperture pupil remapping enables piston-corrected images with improved field-dependent contrast.14,15 In another study, sophisticated simulations of temporally independent turbulence effects pave the way for single-frame image restoration.16

The homodyne-encoded digital AO17–20 (DAO) technique in this paper incorporates many of the features described above, compensating for phase distortions and delivering higher quality data than standard imagery. While future applications may include medical imaging through nonuniform fluids and undersea imaging through clear, turbulent waters,21,22 DAO is particularly suited to correct for atmospheric turbulence, and it does so passively with a single frame of data. The DAO technique in this paper is an opto-computational self-referencing homodyne interferometer approach that does not require a beacon source, deformable mirrors, or multiple images to create a single near-diffraction-limited image. This approach combines the single-frame correction of the Shack–Hartmann technique with the passive imaging utility of the multiframe lucky imaging and MFBD methods.

The remainder of this paper proceeds in the following sections. Section 2 provides background on and an example of the effects of turbulence on imaging systems. Section 3 introduces the homodyne DAO approach, optical hardware used, data acquisition system, calibration procedure, and preliminary optical results. Section 4 presents recent improvements to standard DAO results through the addition of (1) phase diversity modulation within the DAO reconstruction algorithm and (2) temporal image processing methods applied after the DAO reconstruction algorithm. Each reconstruction method is quantified using optical turbulence experiments on static quick response (QR) codes. Section 5 presents a summary of this work and future concepts.

2.

Atmospheric Turbulence

2.1.

Background

All light obeys Snell’s law,1 which relates the angles of incidence and refraction to the indices of refraction. Specifically, it dictates the refracted angle $\theta_f$ within a medium based on $\theta_i$, the initial angle of incidence; $n_i$, the index of refraction of the initial medium; and $n_f$, the index of refraction of the final medium, governed by the following equation:

Eq. (1)

$n_i \sin\theta_i = n_f \sin\theta_f$.

Atmospheric turbulence is caused by continuous temperature and pressure changes that induce refractive index fluctuations. The quantification of this phenomenon is detailed in Refs. 1–6 and is summarized below. The atmospheric index of refraction, $n$, is defined in Eq. (2), where $n_0$ is the mean index of refraction of the atmosphere, $n_1$ is the randomly fluctuating term, $\mathbf{r}$ is the three-dimensional (3D) position vector, and $t$ is time:

Eq. (2)

$n(\mathbf{r},t) = n_0 + n_1(\mathbf{r},t)$.

The mean atmospheric index of refraction is $n_0 = 1.0003$. Fluctuations can be defined using Eqs. (3a) and (3b), where $T$ is the temperature in Kelvin and $P$ is the air pressure in millibars:

Eq. (3a)

$n_1 = 77.6\,\frac{P}{T} \times 10^{-6}$,

Eq. (3b)

$\frac{dn_1}{dT} = -77.6\,\frac{P}{T^2} \times 10^{-6}$.

As can be seen from these equations, temperature is the dominant driver in changing the index of refraction; for example, at $P = 1013$ mbar and $T = 300$ K, Eq. (3b) gives a sensitivity of roughly $-8.7 \times 10^{-7}$ per Kelvin. A common example of optical turbulence can be seen in the mirage effect caused by the heat released from an asphalt road on a sunny day. Here, the difference between surface and air temperatures creates gradients that cause the rays to bend in an unexpected manner. These effects have been historically documented using optical beams that propagate along horizontal ground paths23 or over bodies of water.24 Even slant-path beams that propagate hundreds of meters above the ground surface experience substantial turbulence.25 This is because the atmosphere is not uniform; instead, it is broken into separate layers, and at the interfaces between these layers there are significant changes in meteorological conditions, which adversely affect optical propagation.

To better define the statistical nature of atmospheric turbulence, Kolmogorov theory is employed,2 which defines the spatial power spectral density of the fluctuating term $n_1$ in Eq. (4), where $C_n^2(z)$ is the refractive-index structure parameter at altitude $z$, $\lambda$ is the optical wavelength, and $k$ is the optical wavenumber (with $k = 2\pi/\lambda$):

Eq. (4)

$\Phi_n(k,z) = 0.033\,C_n^2(z)\,k^{-11/3}$.

For modeling and simulations, the refractive-index structure parameter can be thought of as the primary driver of atmospheric turbulence strength, with common values ranging from $10^{-18}\,\mathrm{m}^{-2/3}$ (for minimal turbulence effects) to $10^{-12}\,\mathrm{m}^{-2/3}$ (for strong turbulence effects).

2.2.

Turbulence Effects on Imaging Systems

Atmospheric turbulence is inherently a zero-mean process. From an imaging perspective, this means that given a long enough exposure, centroid motion averages out and the higher-frequency information is lost. Fried1 first quantified atmospheric seeing by introducing the Fried parameter, $r_0$, where

Eq. (5)

$r_0 = 0.185\left[\frac{4\pi^2}{k^2 C_n^2 z}\right]^{3/5}$,
and $z$ is the propagation distance in meters. This equation can be generalized to heterogeneous media by discretizing $C_n^2$ and $z$ along the path and then summing over every slice $i$, as follows:

Eq. (6)

$r_{0_i} = 0.185\left[\frac{4\pi^2}{k^2 C_{n_i}^2 \Delta z_i}\right]^{3/5}$,

Eq. (7)

$r_0^{-5/3} = \sum_{i=1}^{N} r_{0_i}^{-5/3}$.

From an image resolution standpoint, when collecting data, the $r_0$ parameter effectively reduces the input aperture of the imaging system. Conceptually, and from a simplified modeling and simulation perspective, it can be thought of as applying a low-pass filter, where the spatial resolution of an imaging system with aperture diameter $D$ is defined as

Eq. (8)

$\Delta x = \lambda z / \min(D, r_0)$.
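As a numeric sanity check, the following minimal MATLAB sketch evaluates Eqs. (5) and (8) for the scenario of Fig. 1(b); the 670-nm wavelength is an assumption borrowed from the laboratory system's filters, as the simulation wavelength is not stated.

```matlab
% Fried parameter and resolution limit for a horizontal path, Eqs. (5) and (8).
% Assumed values: lambda matches the paper's 670-nm filters; Cn2 and z match Fig. 1(b).
lambda = 670e-9;          % optical wavelength [m] (assumption)
k      = 2*pi/lambda;     % optical wavenumber [rad/m]
Cn2    = 1e-14;           % refractive-index structure parameter [m^(-2/3)]
z      = 1000;            % propagation distance [m]
D      = 0.152;           % telescope aperture diameter [m]

r0 = 0.185 * (4*pi^2 / (k^2 * Cn2 * z))^(3/5);   % Fried parameter, Eq. (5)
dx = lambda * z / min(D, r0);                    % spatial resolution at target, Eq. (8)
fprintf('r0 = %.1f mm, resolution at target = %.1f cm\n', 1e3*r0, 1e2*dx);
% For these values r0 (about 29 mm) is far smaller than D (152 mm),
% so the system is turbulence-limited rather than aperture-limited.
```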

A simulation demonstrating this imaging scenario is shown in Fig. 1: panel (a) shows a diffraction-limited image of a license plate at 1 km. Panel (b) shows the same setup with moderate turbulence ($C_n^2 = 10^{-14}\,\mathrm{m}^{-2/3}$) along the same 1-km propagation path. Panel (c) maintains the same conditions as panel (b) but assumes a shorter 500-m path. Moderate turbulence rendered the clear image of panel (a) nearly unreadable in panel (b). The reduced path of panel (c) improved visibility, but ranges shorter than 1 km severely limit many imaging applications. It should be noted that the images were cropped to show identical fields of view for comparison purposes. To compensate for turbulence-induced path differences, phase compensation techniques are needed. The remainder of this paper will focus on solving these path differences using DAO achieved via homodyne interferometry.

Fig. 1

The images above demonstrate optical degradation caused by turbulence. In this series of optical simulations, a 152-mm-diameter telescope is assumed. Results first show (a) diffraction-limited imaging performance at 1 km, followed by turbulence-limited imaging performance at (b) 1 km and (c) 500 m (where $C_n^2 = 10^{-14}\,\mathrm{m}^{-2/3}$).


3.

Digital Adaptive Optics with Homodyne Encoding

3.1.

Overview

DAO with homodyne encoding is an opto-computational closure phase technique that uses an optical interferometer to impose a moiré pattern on the received image and an algorithm to then remove this pattern, creating a final near-diffraction-limited image. A comparison between a standard imaging system and the DAO process is illustrated in Fig. 2. When computing a two-dimensional (2D) fast Fourier transform (FFT) of the intensity received by a standard imaging system, the frequency information is overlapped and nonseparable. By adding the homodyne interferometer and running the same 2D FFT on DAO system images, the frequency components become separable, allowing for extraction and injection into an eigenvalue system solver to correct for atmospheric phase errors and recover lost information. To accomplish this, the system solver holds the phase of one of the overlapped areas constant and then estimates the phase solutions necessary for the other overlapped regions to force a global in-phase solution across the aperture. An overview of this process is shown in Fig. 3. This correction requires only a single frame of data; temporal information is unnecessary for first-order corrections. It should be noted that the system solver could also be replaced by generating a series of phase estimates, applying them to the complex amplitudes, spatially shifting the frequency components back to generate the corrected images, and then selecting the correct phase solution based on contrast maximization. However, this approach is computationally inefficient.

Fig. 2

This diagram compares a standard imaging system (top row) with the DAO system (bottom row) using three apertures (closely spaced, for illustration purposes). Overlap in aperture segment interference patterns prevents the correction of phase errors when using standard imaging. Using DAO techniques, the separation of aperture segment interference patterns allows for postdetection turbulence correction and full-aperture resolution.


Fig. 3

This flowchart illustrates the three main elements of DAO: (a) the optical beamline setup (green), (b) the data acquisition assembly (blue), and (c)–(h) the reconstruction algorithm suite implemented using MATLAB software (violet).


The homodyne interferometry component of DAO is accomplished by employing specialized optics in front of the system’s final focusing optics to sub-divide the input aperture of the imaging system into laterally separated sub-apertures. This light is then passed to focusing optics to create an image with an interference pattern on the sensor. The diagram provided in Fig. 2 shows how the spatially separated apertures result in a modification to the information within the system.

The three main DAO system elements include the optical separation component, a frame of data, and the algorithm suite that synthesizes the reconstructed image. These components are illustrated and color coded in Fig. 3. The primary optical component to create the interference pattern is the diffraction-grating-based interferometer [Fig. 3(a)], which creates sub-apertures and spatially separates the sub-apertures before collection by the camera system. To accomplish this separation, matched pairs of gratings are utilized. The primary gratings [Fig. 5(a)] diverge the three beams in angular space, and the secondary gratings [Fig. 5(b)]—affixed 50 mm from the primary gratings—re-collimate the light, as depicted in the Fig. 5(c) illustration. All DAO diffraction gratings are blazed and possess the following properties: 300 lines per mm, blaze angle of 11.25 deg, diffraction efficiency of 60%, center wavelength of 670 nm, and blaze arrow direction toward the beamline’s center. The primary gratings have a diameter of d1=12.7  mm and a separation of s1=13.7  mm. The secondary gratings have a diameter of d2=25.4  mm and a separation of s2=44.0  mm. Images of the primary and secondary assemblies after final fabrication and integration are shown in Figs. 5(d) and 5(e).
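As a consistency check not given in the source, the first-order deflection of these gratings follows from the standard grating equation $\sin\theta_m = m\lambda/\Lambda$, with period $\Lambda = 1/300\,\mathrm{mm} \approx 3.33\,\mu\mathrm{m}$. At the 670-nm center wavelength,

$\sin\theta_1 = \lambda/\Lambda = 0.670\,\mu\mathrm{m} / 3.33\,\mu\mathrm{m} \approx 0.201 \;\Rightarrow\; \theta_1 \approx 11.6\ \mathrm{deg}$,

which is the angular separation that the secondary gratings must remove to re-collimate the three beams.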

For the experimental results in this paper, a synchronized data acquisition system (depicted in Fig. 6) was developed to collect both DAO and standard-camera frames simultaneously. The synchronized system was developed solely for quantitative comparison of the DAO technique with a traditional imaging system.

At a top level, after the image is collected, the algorithms spatially extract the frequency information as created by the secondary aperture plane, solve for any phase errors by forcing the computed overlap regions to be in phase, and then spatially place the frequency information back in the correct locations as defined by the primary aperture. The reconstruction algorithms (written in MATLAB) leverage closure phase techniques and perform several steps to solve for the image degradation.

The algorithm begins by taking an FFT of the received intensity image, identifying the isolated frequency terms, and then spatially extracting the terms. In computational memory space, the extracted frequency terms are stored as 2D images that have had all other frequency information removed using a binary mask [Fig. 3(c)]. The 2D images are stored in a 3D array, with the third dimension accounting for the aperture component number. Once the frequency components have been extracted, the overlap regions are computed. Data within these overlap regions provide the primary information leveraged by the system solvers. The overlap regions are defined by the primary input aperture as if the secondary aperture did not exist.

With the frequency components extracted and stored in a 3D array, the phase errors can be computed using the following process: global tip/tilt phase error estimation between aperture pairs [Fig. 3(d)]; phase piston jitter correction [Fig. 3(e)]; and deconvolution of the raw, interfered image from the modulation transfer function (MTF), which comprises the phase ramp’s estimated power and the noise inherent in the original digital image; deconvolution removes the calculated MTF from the raw image [Fig. 3(f)]. The frequency terms are then spatially shifted back to the primary, standard-image locations [Fig. 3(g)] and summed to create a final 2D array of frequency components. The final step uses standard FFTs to transform the phase-corrected and shifted frequency terms into a reconstructed image [Fig. 3(h)]. The laboratory calibration data of Fig. 7 show the frequency components as initially captured after the secondary aperture, followed by their spatial shift back to the primary aperture locations.
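The following minimal MATLAB sketch outlines the bookkeeping of steps (c), (g), and (h). The input filename, beat centroids, shifts, and mask radius are hypothetical placeholders (real values come from the calibration of Sec. 3.3), and the solver steps (d)–(f) are elided.

```matlab
% Skeleton of DAO frequency extraction and replacement [Figs. 3(c), 3(g), 3(h)].
% Beat centroids, shifts, and mask radius are hypothetical placeholders; the real
% system has six beat lobes with values taken from the calibration grid (Sec. 3.3).
I = double(imread('dao_frame.png'));      % raw interfered image (placeholder file)
F = fftshift(fft2(I));                    % 2D FFT with dc term centered
[rows, cols] = size(F);
[u, v] = meshgrid(1:cols, 1:rows);

beats  = [284 420; 284 148; 420 284];     % beat centroids [row col] (placeholders)
shifts = [0 -136; 0 136; -136 0];         % shifts back to primary locations (placeholders)
radius = 40;                              % binary extraction mask radius [pixels]

Fsum = zeros(rows, cols);
for i = 1:size(beats, 1)
    mask = (v - beats(i,1)).^2 + (u - beats(i,2)).^2 <= radius^2;
    comp = F .* mask;                     % extract one frequency component [Fig. 3(c)]
    % ... tip/tilt, piston, and MTF corrections would be applied here [Figs. 3(d)-3(f)]
    comp = circshift(comp, shifts(i,:));  % shift back to natural location [Fig. 3(g)]
    Fsum = Fsum + comp;                   % accumulate corrected components
end
Irec = real(ifft2(ifftshift(Fsum)));      % reconstructed image [Fig. 3(h)]
```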

3.2.

Laboratory Setup

As diagrammed in Fig. 4, the laboratory setup for this experiment contains a common optical path, in which the target is imaged through a turbulence generator, before splitting to the standard camera and the DAO system. The common optical path begins with a white-light halogen lamp source. The lamp was chosen to mimic the incoherent solar radiation expected during future outdoor field tests. Despite the incoherent nature of the source, the gratings were crafted from a single master and carefully aligned within the brassboard system to path-match the light, creating sufficient coherence within the interferometer. The source back-illuminates an optical target for the two systems to image. This study analyzed two types of negative-pattern targets: a 1951 United States Air Force (USAF) test target and QR codes. The USAF chrome-on-glass target enables preliminary, qualitative comparisons between standard and DAO methods. This test target is common (e.g., it is seen in Ref. 12 and others), and it allows the viewer to quickly and intuitively assess image quality. For the comprehensive, quantitative machine-vision analysis, three QR code targets were printed on plastic transparency sheets. After collimation by a lens (200-mm focal length), light from the target passes through the turbulence generator.

Fig. 4

The experimental setup contains a standard camera path, a DAO path, and an optical path common to both. The three-aperture interferometer is the main component that differentiates the DAO path from the standard camera path. The common optical path employs a broadband source, an optical target (1951 USAF test target or QR code), a turbulence generator, a demagnification telescope, and beam-steering optics. The optional 4× telescope enables resolution of the 4 to 5 region of the 1951 USAF target.


The turbulence generation system is composed of an air mixing chamber and a hot plate. Within the air mixing chamber, multiple 12-V fans circulate hot and room-temperature air to create turbulence. Currently, the setup cannot produce a specified turbulence strength, but by following a procedure to collect temporally similar data from different optical targets, it is possible to obtain similar turbulence profiles. In the future, it would be useful to have a more controlled and quantified turbulence generation system.

The collection process starts at room temperature, and the chamber’s heating elements slowly activate, creating low levels of turbulence within the first 1 to 2 min. As the experiment progresses to the 3- to 4-min mark, the heating elements have continued to warm and moderate turbulence is observed. By the 5- to 6-min mark, the system has generated higher levels of turbulence and reaches a saturation point. At the conclusion of each experiment, a minimum waiting period of 20 min allowed the heating elements and mixing chamber to return to ambient temperature before starting another data collection.

For image comparison purposes, an optional 4× telescope enables resolution of the 4 to 5 region of the 1951 USAF target. The telescope assembly was omitted when analyzing QR codes. A demagnification telescope, steering mirrors, and a 50:50 nonpolarizing cube beam-splitter comprise the remainder of the common optical path.

The three-aperture interferometer (Fig. 5) is the main component that differentiates the DAO path from the standard camera path. The standard camera iris is employed to match the effective aperture of the DAO interferometer. The visible-band cameras are identical (FLIR Systems Inc. Blackfly S; BFS-U3-32S4M), and both cameras employ optical bandpass filters centered at 670 nm. While the standard camera path employs a 20-nm-bandwidth filter, the DAO path uses a 40-nm-bandwidth filter to compensate for the reduced signal received at the camera. Images are collected from central, square regions (568 × 568 pixels) on the sensor planes using integration times of 5 and 1 ms for the USAF target and QR codes, respectively.

Fig. 5

(a) Primary and (b) secondary three-aperture optical interferometer assemblies are depicted (shown in relative scale). The primary blazed diffraction gratings have a diameter of d1=12.7 mm and a spacing of s1=13.7 mm, and they separate the three beams in angular space. The secondary blazed gratings have a diameter of d2=25.4 mm and a spacing of s2=44.0 mm; they are affixed 50 mm from the primary gratings and re-collimate the light [as depicted in panel (c)]. Images of the (d) primary and (e) secondary assemblies after final integration are provided (not shown in relative scale).


The standard camera data and DAO data are captured simultaneously using triggered, synchronized cameras. Thus, even if sequential turbulence experiments are not exactly reproducible, the beam paths and timing shared by the standard and DAO systems are identical, ensuring valid point-by-point comparisons between the two techniques. The data collection system allows for microsecond-accurate frame synchronization across multiple devices, including camera systems, wavefront sensors, and other measurement devices from various vendors. A schematic of the synchronized data collection system is shown in Fig. 6.

Fig. 6

The synchronized data collection system diagram is shown. The Intel NUC PC sends trigger requests via serial to the Teensy 4.0 microcontroller, which then simultaneously triggers both FLIR cameras. The PC then captures images via high-speed USB 3.1.


The data collection system is composed of a microcontroller (Teensy 4.0), a data acquisition computer (Intel NUC PC), and custom software written in C++. The microcontroller generates hardware timing signals that trigger all attached devices using the digitalWriteFast() function. In addition to the two primary FLIR cameras, the trigger lines can be used to synchronize other heterogeneous hardware (e.g., wavefront sensors and photodiodes). Five lines are currently enabled, but further expansion is possible. For flexibility and ease of implementation, trigger lines for each device are enabled on separate pins of the microcontroller. This limits timing accuracy to the sequential command resolution of the microcontroller, but bench testing verified delays of <10 ns, which was within the timing requirements. Alternatively, a single pin could drive all triggered devices (resulting in zero lag between triggered devices), but this would require an amplifier and level shifter to accommodate the required current and device input levels.

The data acquisition computer simultaneously records images and ancillary data from measurement devices, and it communicates with the microcontroller to change trigger settings using a serial-over-USB protocol. The custom C++ software coordinates the collection of images and facilitates modifications to the camera trigger and framerate settings. In addition, the software performs diagnostic processing on the image streams, computing and displaying FFTs of the images in real time. Currently, the FLIR Spinnaker library for camera control and acquisition is integrated within the custom C++ DAO application; other device libraries may be added in the future as necessary.

3.3.

DAO Calibration

Before the DAO system can collect general data, an initial calibration dataset must be created and initial offset shifts computed. These offset shifts are used for future field data collections and provide the baseline spatial shifts required to successfully reconstruct images across the sensor plane. All future solutions are based on this calibration baseline.

The initial calibration data grid is composed of a 7×7 array of 50-μm point sources. To compile the grid of data, a 50-μm pinhole is sequentially scanned to each of the 49 locations; the complete image [shown in Fig. 7(a)] is a summation of the individual acquisitions. A moiré pattern, caused by the aperture separation, is superimposed on each point source. When the 2D FFT of the raw data is computed, the six frequency terms surrounding the direct current (dc) term result [Fig. 7(b)]. Centroid beats are shown in yellow, and as a visual check, predicted beat locations are indicated in green. With the frequency components separated and isolated, the calibration algorithm suite solves for residual phase errors in the system and then computes the primary spatial shifts for moving the frequency components back to their initial locations, as dictated by the primary aperture. The Fourier space after spatially shifting the frequency information back to the correct (i.e., natural, nondiffracted) locations is seen in Fig. 7(c). By compensating for inherent phase issues and then removing the moiré pattern in the three-aperture system, the normal point source images of the calibration grid are recovered [Fig. 7(d)].
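A minimal sketch of one way to locate the beat centroids of Fig. 7(b) follows; thresholding plus regionprops is an assumed approach, as the paper does not specify its detection method, and the filename and threshold are placeholders.

```matlab
% Locate beat-lobe centroids in the Fourier magnitude of a calibration frame.
% Thresholding plus regionprops is an assumed approach (the paper's detector is
% not specified). Requires the Image Processing Toolbox.
I   = double(imread('calibration_grid.png'));  % hypothetical raw calibration image
mag = log(1 + abs(fftshift(fft2(I))));         % log-magnitude spectrum for contrast
bw  = mag > 0.6 * max(mag(:));                 % empirical threshold (assumption)
bw  = bwareaopen(bw, 20);                      % drop isolated noise pixels
stats   = regionprops(bw, 'Centroid');         % centroid of each frequency lobe
centers = vertcat(stats.Centroid);             % [x y] rows: dc term plus six beats
% The measured centroids define the baseline spatial shifts that later move each
% frequency component back to its primary-aperture location [Fig. 7(c)].
```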

Fig. 7

DAO image reconstruction is illustrated using a calibration routine as an example. (a) The raw calibration dataset consists of a 7×7 grid of point sources. A moiré pattern is visible at each point. (b) A 2D Fourier transform of the raw data is shown with centroid beats indicated in yellow. (c) The Fourier space is shown after spatially shifting the frequency information back to the correct locations. (d) After a final transform, processing ends with a normal point source image.


3.4.

Preliminary Results with Bar Targets

Initial results using the three-aperture DAO system with the 1951 USAF target demonstrate the mitigation of laboratory-generated turbulence when compared with standard imaging techniques. As shown in Fig. 8, DAO consistently outperforms the standard camera setup under low, medium, and high turbulence.

Fig. 8

(a)–(c) Standard camera images and (d)–(f) DAO-reconstructed images of a 1951 USAF test target were collected in a series of sequential laboratory experiments through an air chamber with low, medium, and high degrees of turbulence (left, center, and right columns, respectively). Simultaneous recording of standard and DAO-reconstructed images enables direct comparisons of the two techniques for a given turbulence level. This preliminary, qualitative study of a familiar target validated the DAO technique and paved the way for quantitative analysis on QR codes.


3.5.

Algorithm Enhancement Through Phase Modulation

Speckle patterns and speckle noise are generated when light waves or signals self-interfere. For example, when light propagates through turbulent media, speckle patterns show peaks and nulls that evolve with time. Optical systems often display different types of speckle noise. The DAO interferometer is based on a pair of three-aperture grating assemblies, resulting in seven frequency regions (one dc and six beat) with various overlapping regions [as shown in Fig. 7(b)]. These overlap regions provide an opportunity to identify and eliminate phase errors that occur within the imaging system. However, this modest number of apertures limits the system’s ability to solve for higher-frequency phase errors. More apertures would refine the solution at the expense of increased optical and computational complexity.

Study of the preliminary DAO results identified this residual high-frequency noise, and adding phase diversity to the reconstruction algorithm was proposed as a solution to reduce it. While standard DAO results are superior to those from a standard camera, additional noise suppression would further improve image quality. To mitigate this noise, two methods were explored to enhance the reconstruction algorithm: randomized and discretized phase modulation. Increasing phase diversity within the solution set and averaging this diversity within the algorithm improved QR code read rates when compared with standard DAO reconstructions.19 The extra step of implementing either randomized or discretized phase modulation increases computation time by tens of milliseconds, but the final images display reduced high-frequency noise. The payoff from these techniques is presented in the following sections.

Randomized and discretized phase diversity modulation methods are introduced within the jitter and phase piston solver portion of the MATLAB-based DAO algorithm suite [Fig. 3(e)]. Both methods reduce the residual phase error within the solution through averaging, effectively applying a specialized high-frequency filter. These methods shift the overall phase solution by a constant factor in the frequency domain [as defined in Eqs. (9) and (10)]. A simplified mathematical description of randomized phase modulation is represented as

Eq. (9)

$I(x,y) = \sum_{i=1}^{n} \mathrm{FFT}\{\mathrm{rand} \cdot 2\pi \cdot f(u,v)\}$,
and discretized phase modulation is represented as

Eq. (10)

$I(x,y) = \sum_{i=1}^{n} \mathrm{FFT}\{(i/n) \cdot 2\pi \cdot f(u,v)\}$,
where rand is a uniform distribution between 0 and 1, f(u,v) is the frequency space of the initially solved phase solution, n is the number of phase modulation steps (with n=10), and I(x,y) is the final reconstructed image. In keeping with the flowchart of Fig. 3, a step of phase modulation is applied to the complex amplitude between steps (e) and (f) before continuing through to create an instance of the reconstructed image [Fig. 3(h)]. These images are then averaged to create the final I(x,y) image with reduced high-frequency speckle noise.
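A minimal MATLAB sketch of the discretized variant follows; the phase-application details are one interpretation of Eq. (10), and both the input array F and the reconstructImage() helper are hypothetical placeholders standing in for the solver outputs and steps (f)–(h) of Fig. 3.

```matlab
% Discretized phase-diversity modulation: a sketch interpreting Eq. (10).
% F is assumed to be the solved complex-amplitude array from Fig. 3(e);
% reconstructImage() is a hypothetical placeholder for steps (f)-(h).
n    = 10;                                  % number of phase modulation steps (per the paper)
Iacc = 0;
for i = 1:n
    phi  = (i/n) * 2*pi;                    % discretized offset; use rand*2*pi for Eq. (9)
    Fmod = F .* exp(1j * phi);              % shift the overall phase solution by a constant
    Iacc = Iacc + reconstructImage(Fmod);   % deconvolve, shift, and inverse FFT one instance
end
I = Iacc / n;                               % average the n instances to suppress speckle noise
```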

4.

QR Code Analysis

4.1.

QR Code Basics

There are numerous methods to measure image quality, including using line rulings and calculating contrast. To avoid the ambiguity in traditional analyses of what constitutes sufficient contrast for a given task, static QR codes were used as optical targets, and a standard QR code reader generated a pass/fail metric to quantify imaging performance. Here, the task of maintaining data bandwidth in the face of turbulence is directly tied to the imaging target itself.

One feature that makes QR codes useful in daily life is their inherent error correction (EC) capability in the form of pattern redundancies to overcome data corruption. These redundancies are expressed by repeating information throughout the pattern, and different regions of the QR code are read in different directions. This study analyzed QR codes with three levels of built-in EC: low (L), medium (M), and quartile (Q), with 7%, 15%, and 25% EC, respectively.26 This work used the MATLAB function readBarcode.m to generate QR code pass/fail metrics.
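For concreteness, a minimal sketch of such a pass/fail metric built on readBarcode follows; the frames variable is a hypothetical cell array of collected images, and the cumulative-average bookkeeping mirrors the plots of Sec. 4.2.

```matlab
% Pass/fail QR readability metric, mirroring the paper's use of readBarcode.m.
% 'frames' is a hypothetical cell array of grayscale images from one experiment.
nFrames = numel(frames);
success = false(1, nFrames);
for i = 1:nFrames
    msg = readBarcode(frames{i}, "QR-CODE");   % Computer Vision Toolbox decoder
    success(i) = strlength(msg) > 0;           % pass if any payload was decoded
end
runningRate = cumsum(success) ./ (1:nFrames);  % cumulative running average (cf. Fig. 10)
```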

4.2.

QR Code Results

The QR code images shown in Fig. 9 compare standard optics with standard, randomized, and discretized DAO reconstructions. These static level M QR code images were collected under moderate turbulence at the time indicated by the black vertical line in Fig. 10(b).

Fig. 9

A comparison of QR code imaging using (a) conventional optics, (b) standard DAO reconstruction, (c) randomized DAO reconstruction, and (d) discretized DAO reconstruction is shown. These images of a level M QR code (15% EC) were collected under moderate turbulence, as indicated by the black vertical line in Fig. 10(b).


Fig. 10

Standard camera results are compared with DAO standard, randomized, and discretized reconstruction techniques. Three levels of QR code EC are analyzed: (a) level L (7% EC), (b) level M (15% EC), and (c) level Q (25% EC). Plots display the cumulative running average success rates for each imaging technique. Relative turbulence increases with test duration. The black line in panel (b) at 4 min, 43 s indicates the QR code frame images presented in Fig. 9.


The standard camera image is partially blurred and generally indecipherable by the QR code reader. The standard DAO reconstruction appears crisp and resulted in improved code readability. Visually, the randomized and discretized DAO reconstructions may appear degraded, but they provide further enhanced machine readability. In particular, the high-frequency background noise in the discretized reconstruction is lower than that present in both the standard and randomized versions, resulting in a maximal success rate.

As shown in Figs. 10(a)–10(c), experiments were performed using level L, level M, and level Q codes, respectively. Each sequential experiment commenced using the same initial environmental conditions, resulting in similar (although not identical) turbulence profiles as functions of time. To accommodate the QR code reader’s binary pass/fail metric, success rates were computed as cumulative running averages. Thus, earlier portions of each test exhibited larger fluctuations.

DAO reconstructions consistently provide higher QR code success rates when compared with the standard camera system. Furthermore, discretized reconstructions offer the highest DAO success rates when imaging the data-dense level L and level M codes. As shown in the Fig. 9 images and the Fig. 10(b) plot, the level M code results at 4 min, 43 s indicate the following cumulative success rate gains over the standard imaging camera: +8.4% (standard DAO), +17.4% (randomized DAO), and +30.1% (discretized DAO). When imaging level Q codes (which feature 25% EC), the subtle computational differences between the randomized and discretized methods become negligible, and the results are similar. Here, both methods improve upon standard DAO techniques and significantly improve upon standard camera methods.

4.3.

Resolution Enhancement Through Temporal Image Processing

After verifying that phase modulation within the DAO reconstruction pipeline reduces background noise, temporal image processing methods were explored to further improve results. The three techniques used to enhance images, further suppress high-frequency speckling, and ultimately improve read rates are (1) subpixel image frame registration (FR),27 (2) moving-window averaging (MATLAB movmean.m), and (3) flat-field contrast correction (MATLAB imflatfield.m); a sketch of this chain is given below. Both FR and moving average (MA) employed a window of five frames. In this analysis, each QR code EC level and imaging technique result from Fig. 10 is plotted separately with the FR, MA, and flat field (FF) methods progressively added.
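The following minimal MATLAB sketch chains the three methods; dftregistration.m is the published implementation from Ref. 27 and is assumed to be on the path, the five-frame window follows the paper, and the flat-field sigma is a placeholder.

```matlab
% Temporal postprocessing chain: frame registration (FR), moving average (MA),
% and flat-field correction (FF). 'stack' is a hypothetical [rows x cols x nFrames]
% array of DAO-reconstructed frames; dftregistration.m is the Ref. 27 implementation.
win = 5;                                          % five-frame window (per the paper)
ref = fft2(stack(:,:,1));                         % register everything to frame 1
for i = 2:size(stack, 3)
    [~, reg] = dftregistration(ref, fft2(stack(:,:,i)), 10);  % 1/10-pixel accuracy
    stack(:,:,i) = abs(ifft2(reg));               % FR: subpixel-aligned frame
end
stack = movmean(stack, win, 3);                   % MA: moving average along time
for i = 1:size(stack, 3)
    stack(:,:,i) = imflatfield(stack(:,:,i), 30); % FF: Gaussian sigma = 30 px (assumption)
end
```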

The plots shown in Fig. 11 highlight several results. First, FR offers results that are the same as or slightly better than the original results. The marginal improvement enabled by FR postprocessing should only be considered if computational demands are low. Second, MA provides significant improvements over the prior step for the majority of test cases. Introducing MA to codes reconstructed by randomized [Fig. 11(k)] and discretized [Fig. 11(l)] DAO brought gains exceeding 25%. Therefore, MA should be included in all reconstructions when feasible. Third, FF provides moderate improvement over the prior step for half of the test cases. Gains offered by FF processing are clearest for randomized and discretized reconstructions, and as such, should be incorporated when available. Fourth, none of the temporal image processing methods significantly improved standard camera imaging of QR codes through moderate to high turbulence. For example, at a test duration of 6 min, the advantage provided by image processing to standard camera imaging is just 9.3% [Fig. 11(a)], and in one case [Fig. 11(e)], image processing actually degrades the success rate by 7.3%. Only through a combination of DAO phase modulation and temporal image processing techniques were QR codes imaged through high turbulence a majority of the time.

Fig. 11

Image processing is applied to standard camera results (column 1) and to standard, randomized, and discretized DAO techniques (columns 2 to 4, respectively). Beginning with the QR code data of Fig. 10, the temporal image processing methods of FR, MA, and FF are progressively applied to (a)–(d) level L, (e)–(h) level M, and (i)–(l) level Q results.


5.

Summary

DAO via homodyne encoding is an opto-computational technique that mitigates atmospheric turbulence using a single frame of data, and it eliminates the need for active sources or moving parts. The advantages of passively compensating for atmospheric turbulence are numerous. The primary downsides to this technology are that it requires more photons because (1) part of the beam is blocked by the primary apertures, (2) the gratings used to create the apertures are bandwidth limited, (3) the gratings limit in-band throughput with their 60% diffraction efficiency, and (4) the moiré pattern superimposed on the image reduces the dynamic range of the sensor. These limitations are outweighed by the enhanced resolution of the imaging system.

Recently, randomized and discretized phase solution diversity modulation methods were introduced to the DAO reconstruction algorithms to drive down high-frequency solution noise at the cost of minor computational time increases. In addition, the temporal image processing techniques of FR, MA, and FF further enhanced resolution. When image processing is added to the discretized computation chain, the QR code success rate is nearly 100% for low-to-moderate turbulence and above 50% for high turbulence when reading all three levels of EC codes. Future work to generate steady-state turbulence levels in the laboratory and measure the corresponding refractive-index structure parameters ($C_n^2$) would help quantify the degree to which the DAO system compensates for image degradation. With these two upgrades in place, advanced work on mitigating outdoor atmospheric propagation turbulence will be possible.

Acknowledgments

The authors would like to thank the Naval Innovative Science and Engineering Fund for its financial support of this research and the Air Force Research Laboratory’s Sensors Directorate for providing hardware to enable laboratory experiments. The authors declare no conflicts of interest regarding this article.

References

1. 

D. L. Fried, “Optical resolution through a randomly inhomogeneous medium for very long and very short exposures,” J. Opt. Soc. Am. 56, 1372–1379 (1966). https://doi.org/10.1364/JOSA.56.001372

2. 

A. N. Kolmogorov, “The local structure of turbulence in incompressible viscous fluids for very large Reynolds numbers,” in Turbulence: Classic Papers on Statistical Theory, Wiley-Interscience, New York (1961).

3. 

V. I. Tatarskii, Wave Propagation in a Turbulent Medium, 1st ed., Dover Publications, New York (1967).

4. 

A. Ishimaru, Wave Propagation and Scattering in Random Media, Vol. 2, 1st ed., Academic Press, New York (1978).

5. 

J. W. Goodman, Statistical Optics, 1st ed., John Wiley & Sons, New York (1985).

6. 

M. Roggemann and B. Welsh, Imaging Through Turbulence, 1st ed., CRC Press, Boca Raton, Florida (1996).

7. 

S. Gladysz et al., “Wavefront sensing for terrestrial, underwater, and space-borne free-space optical communications,” Proc. SPIE 11834, 118340F (2021). https://doi.org/10.1117/12.2595801

8. 

J. P. Bos and M. C. Roggemann, “Robustness of speckle-imaging techniques applied to horizontal imaging scenarios,” Opt. Eng. 51(8), 083201 (2012). https://doi.org/10.1117/1.OE.51.8.083201

9. 

K. B. Gibson and T. Q. Nguyen, “An analysis and method for contrast enhancement turbulence mitigation,” IEEE Trans. Image Process. 23(7), 3179–3190 (2014). https://doi.org/10.1109/TIP.2014.2328180

10. 

T. J. Schulz, B. E. Stribling, and J. J. Miller, “Multiframe blind deconvolution with real data: imagery of the Hubble Space Telescope,” Opt. Express 1, 355–362 (1997). https://doi.org/10.1364/OE.1.000355

11. 

C. L. Matson et al., “Fast and optimal multiframe blind deconvolution algorithm for high-resolution ground-based imaging of space objects,” Appl. Opt. 48, A75–A92 (2009). https://doi.org/10.1364/AO.48.000A75

12. 

S. E. Krug and D. J. Rabb, “Computational phase correction algorithms for multi-aperture systems,” J. Opt. Soc. Am. A 37(4), 552–567 (2020). https://doi.org/10.1364/JOSAA.379316

13. 

N. J. Miller et al., “Active multi-aperture imaging through turbulence,” Proc. SPIE 8395, 839504 (2012). https://doi.org/10.1117/12.921160

14. 

A. M. Tai, “Passive synthetic aperture imaging using an achromatic grating interferometer,” Appl. Opt. 25(18), 3179–3190 (1986). https://doi.org/10.1364/AO.25.003179

15. 

S. E. Krug and D. J. Rabb, “Blazed grating pupil remapping and rotational synthesis with computational phase correction for a two-aperture system,” J. Opt. Soc. Am. A 38(12), 1866–1874 (2021). https://doi.org/10.1364/JOSAA.431363

16. 

R. C. Hardie et al., “Simulation of anisoplanatic imaging through optical turbulence using numerical wave propagation with new validation analysis,” Opt. Eng. 56(7), 071502 (2017). https://doi.org/10.1117/1.OE.56.7.071502

17. 

K. Drexler and K. Watson, “Digital adaptive optics,” in Imaging and Applied Optics 2017 (3D, AIO, COSI, IS, MATH, pcAOP), OSA Technical Digest (online), CM2B.4 (2017). https://doi.org/10.1364/COSI.2017.CM2B.4

18. 

K. R. Drexler, S. D. Lilledahl, and B. Laxton, “Digital adaptive optics for turbulence mitigation,” Proc. SPIE 11834, 118340D (2021). https://doi.org/10.1117/12.2595801

19. 

K. R. Drexler, S. D. Lilledahl, and B. Laxton, “Atmospheric mitigation for imaging applications,” Proc. SPIE 12118, 121180J (2022). https://doi.org/10.1117/12.2624467

20. 

S. Lilledahl et al., “Digital adaptive optics for turbulence mitigation with QR codes,” Proc. SPIE 12237, 122370D (2022). https://doi.org/10.1117/12.2635497

21. 

S. Matt et al., “A controlled laboratory environment to study EO signal degradation due to underwater turbulence,” Proc. SPIE 9459, 94590H (2015). https://doi.org/10.1117/12.2177028

22. 

B. Neuner III et al., “Deployable scintillometer for ocean optical turbulence characterization,” Proc. SPIE 11752, 117520A (2021). https://doi.org/10.1117/12.2588721

23. 

C. Wu et al., “Near ground surface turbulence measurements and validation: a comparison between different systems,” Proc. SPIE 10770, 107700K (2018). https://doi.org/10.1117/12.2322723

24. 

W. S. Rabinovich et al., “Estimation of terrestrial FSO availability,” Proc. SPIE 10524, 105240K (2018). https://doi.org/10.1117/12.2290237

25. 

L. C. Andrews et al., “Creating a Cn2 profile as a function of altitude using scintillation measurements along a slant path,” Proc. SPIE 8238, 82380F (2012). https://doi.org/10.1117/12.913756

27. 

M. Guizar-Sicairos, S. T. Thurman, and J. R. Fienup, “Efficient subpixel image registration algorithms,” Opt. Lett. 33, 156–158 (2008). https://doi.org/10.1364/OL.33.000156

Biography

Burton Neuner III received his BS degree in engineering physics from the University of Illinois at Urbana-Champaign in 2004 and his PhD in physics from the University of Texas at Austin in 2011. He is a research scientist at NIWC Pacific, San Diego, California, United States. His current research interests include oceanic and atmospheric optical turbulence, optical communication, environmental sensing, and machine learning.

Skylar D. Lilledahl received his BS degree in physics from San Diego State University in 2019, focusing on modern optical communications. He is a research scientist at NIWC Pacific, San Diego, California, United States, focusing on optical atmospheric propagation. His current research is in mitigating atmospheric optical turbulence in imaging and communications systems.

Benjamin Laxton received his MS degree in computer science and engineering from the University of California, San Diego, in 2007, and has worked on a wide range of problems in optical instrument design and image processing. He is a research scientist at NIWC Pacific, San Diego, California, United States, focusing on instrument and algorithmic development for atmospheric sensing applications.

Kyle R. Drexler received his PhD in electrical engineering from Michigan Technological University in 2012 with a focus on phase compensation algorithms for mitigating atmospheric turbulence in free space optical communications. He is a senior research scientist at NIWC Pacific, San Diego, California, United States, and is currently developing electro-optical propagation and atmospheric turbulence estimation techniques for imaging and directed energy applications. Since 2007, he has built system models, optical simulations, reconstruction algorithms, and hardware validation routines.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Burton Neuner III, Skylar D. Lilledahl, Benjamin Laxton, and Kyle R. Drexler "Digital adaptive optics with interferometric homodyne encoding for mitigating atmospheric turbulence," Optical Engineering 62(2), 023104 (28 February 2023). https://doi.org/10.1117/1.OE.62.2.023104
Received: 10 November 2022; Accepted: 30 January 2023; Published: 28 February 2023
KEYWORDS
Turbulence, Imaging systems, Cameras, Homodyne detection, Adaptive optics, Atmospheric optics, Optical signal processing