1. Introduction

The point spread function (PSF) is the image of a point source of light. The PSF is shaped by various effects, including the spatial frequency response of the optics and image motion due to line-of-sight (LOS) motion. The optical transfer function (OTF) of an isoplanatic (shift-invariant) optical system is the two-dimensional (2-D) spatial Fourier transform (FT) of the PSF for noncoherent (incoherent) optical radiation, and the modulation transfer function (MTF) is its magnitude.1,2 Prominent uses for the OTF of an optical system include predicting performance from simulation information, specifying performance tolerances and requirements for an optical system, and analyzing performance from test data. Optical systems operating in real-world scenarios are subject to dynamic environments. The principal dynamic effect is image motion during the exposure interval in which electromagnetic energy is collected by the detector. Image motion reduces the system OTF, particularly at higher spatial frequencies, and therefore reduces image quality. Image motion is potentially a limiting factor in the imaging performance of an optical system. The image motion treated herein is the relative LOS pointing motion projected onto the two spatial dimensions of a focal plane. The relative pointing motion is due to camera attitude error, deliberate attitude motion, translational camera motion, and translational target motion. Other sources of image motion, for example distortion and varying target aspect, are not considered. The various types and sources of image motion are illustrated and explained in detail in Ref. 3 (Ch. 8, pp. 103–115). The effect of image motion on the performance of an optical system is measured by an image motion OTF. In addition to its contribution to the system OTF described above, the image motion OTF is also needed to calculate an inverse filter for image compensation.
In this work, we consider systems where all elements of the image sensor are exposed simultaneously. Line scan detectors, time delay integration (TDI), and moving shutter systems are not considered.

1.1. Objective

The purpose of this paper is to derive statistical image motion OTFs in two dimensions of spatial frequency for image displacement, smear, and jitter, and to provide a methodology to compute the parameters of the OTFs from the LOS pointing motion of the optical system. Conventional analysis of the smear OTF (a sinc function) assumes some particular value for smear, so we call it a deterministic smear OTF. In general, image smear has a mean value plus a random variation from one image to another. In some optical systems, the random variation dominates the mean. The statistical smear OTF measures the average performance of an ensemble of images subject to nonzero-mean Gaussian random smear. It is best visualized as a surface over the two dimensions of spatial frequency. The derivations yield the familiar Gaussian jitter OTF, which is also a statistical OTF, and a displacement OTF, which measures image offset due to image motion. The parameters of the OTFs are means and covariances computed from the power spectrum of the pointing motion weighted by frequency domain weighting functions. Various types of pointing metrics are defined. The OTFs and the method to compute their parameters are intended to support integrated modeling, multidisciplinary analysis, and simulation of electro-optical systems.

1.2. Historical Literature Survey

Various authors3–18 have analyzed the effect of image motion on the performance of optical systems.
The OTF has been studied analytically and numerically for specific motions such as uniform linear motion, accelerated motion, low-frequency sinusoidal motion with period greater than the exposure interval and with various initial phase angles, high-frequency sinusoidal motion with period less than the exposure interval, and white Gaussian random motion (jitter). The image motion OTF has been studied extensively for deterministic motion. Except for the jitter OTF, statistical treatment of the image motion OTF has been limited to numerical evaluation. The image motion MTF derived in Refs. 4 and 5 for high-frequency sinusoidal motion, assuming an integral number of cycles during an exposure or many cycles so that the fractional cycle is negligible, is shown to be a zero-order Bessel function of the spatial frequency and amplitude of the sinusoid. The low-frequency image motion MTF in Ref. 5 is simply the image motion MTF for uniform linear motion with the assumption that the image exposure time is much shorter than the period of the sinusoid. The image motion OTFs for uniform linear motion and Gaussian random motion are also given in Ref. 4. The OTFs for uniform linear motion and for sinusoidal motion, with zero to two cycles in the exposure interval, including fractional cycles, and with various initial phase angles, are analyzed in Ref. 7. The image motion OTF for quadratic motion was first analyzed in Ref. 6. The image motion OTF for linear plus quadratic (accelerated) motion is derived in Ref. 8, where it is shown that in the presence of accelerated motion the MTF is nonzero at any spatial frequency but approaches the sinc function as the smear due to acceleration becomes small compared with the smear due to the initial velocity. The MTF for a fractional-cycle sinusoid at a particular initial phase angle shown in Ref. 7 is similar to the MTF for accelerated motion in Ref. 8. This is not surprising, since a short segment of a sinusoid can be approximated as a quadratic.
Image degradation due to various types of image motion is summarized in Ref. 3 (Ch. 8, pp. 115–124). A “lucky shot” probability model is derived in Ref. 10 and confirmed experimentally in Ref. 11 to predict how many independent exposures are needed, with a given probability, to obtain at least one image with a smear less than a given length. This result is important to compute the probability of target acquisition. A numerical method to compute the MTF from arbitrary motion data is presented in Ref. 12, and MTFs are computed numerically for linear motion and for high- and low-frequency sinusoidal motion. Average MTFs for low-frequency sinusoidal motion with random initial phase (relative to the start of the exposure) and for low-frequency motion with a range of amplitudes are also computed in Ref. 12. The OTF for sinusoidal image motion is computed in Ref. 13 by first obtaining a line spread function (LSF) from a histogram (probability density function) of the image motion data and then computing the OTF by a fast Fourier transform (FFT) of the LSF. The numerically computed OTF due to sinusoidal image motion is studied in greater detail and confirmed experimentally using motion sensor data in Ref. 13. The numerically computed OTF for accelerated motion is also analyzed in Refs. 13 and 14. The image motion analyses in Refs. 10–14 are summarized in Ref. 15 (Ch. 14). An image motion MTF is derived in Ref. 16 by using moments of the motion data. There is no assumption about the type of motion or about its probability density. Results show that a large number of moments are typically required to achieve acceptable numerical accuracy, and the number of moments required depends on the data. A deterministic image motion MTF for a TDI detector subject to uniform linear motion and a statistical (jitter) image motion MTF are derived in Ref. 17.
The image motion MTF for a TDI line-scan detector and uniform linear motion was derived and analyzed via simulation in Ref. 18. In many systems, the image motion is more accurately represented by a power spectral density (PSD) spread over a range of frequencies rather than a single vibrational frequency. Image motion is defined in Refs. 19 and 20 as a displacement plus jitter, and the variances of the displacement and jitter are computed from the PSD of the image motion weighted by temporal-frequency domain weighting functions for displacement and jitter. The computation of covariance matrices in the present work follows that of Refs. 19 and 20. The jitter MTF given in Refs. 19 and 20 is the same as in Ref. 17. In previous work by the first author,21,22 the terms “stability” and “jitter” are defined and (point-to-point) stability and windowed stability are introduced as measures of image stability over multiple images. A standard adopted by the National Geospatial-Intelligence Agency for spatial data accuracy23 defines a point-to-point stability metric and an algorithm to compute it, typically for line-scan data. This metric can be computed more efficiently by our method (Sec. 4.1). Windowed stability measures the change in displacement from one image to another and is useful for image registration and target tracking.

1.3. Approach

This work extends results24 for LOS motion in one spatial dimension to two spatial dimensions and removes the assumption of zero-mean smear rate (and zero-mean smear). We first define the displacement, mean smear rate, and jitter components of image motion over the exposure interval and derive expressions for these as a function of the pointing motion. The mean smear is the mean smear rate times the exposure time. We then derive from first principles the general image motion OTF as a function of the pointing motion.
The general image motion OTF is written in terms of displacement, smear, and jitter and is shown to be separable in these components of pointing motion. Taking expectations and time averages yields the statistical image motion OTFs. The OTFs are parameterized by means and covariances, which are computed from the power spectrum of the pointing motion weighted by frequency domain weighting functions. The frequency domain weighting functions are a direct result of the definitions of the displacement, mean smear rate, and jitter. A notable result is that the statistical smear OTF cannot be written as the product of one-dimensional (1-D) OTFs, unlike the deterministic smear OTF and the Gaussian jitter OTF.

1.4. Organization

In Sec. 2, the components of the pointing motion (displacement, smear, and jitter) are defined and expressions for these components in terms of the pointing motion are derived. In Sec. 3, we derive the general image motion OTF and obtain expressions for the displacement, statistical smear, and jitter OTFs. The statistical smear OTF is characterized in Sec. 3.2. The statistical smear LSF (1-D PSF) is derived in Sec. 3.3. The OTFs and LSF are summarized in Table 1 in Sec. 3.3. In Sec. 4 and in Appendices A–H, equations for the mean and covariance of the components of the pointing motion are derived based on the power spectrum of the pointing motion. The lengthy derivations of the weighting functions are relegated to the Appendices but summarized in Table 2 in Sec. 4. Weighting functions used to compute the various covariance matrices are discussed in Sec. 4.1. The computation of the power spectrum is presented in Sec. 4.2, and a method for simulating and analyzing the pointing motion in an imaging vehicle or platform is discussed in Sec. 4.4.

2. Pointing Motion Model

Image motion is due to the relative LOS motion of the camera and the observed object.
The relative LOS motion is caused by the relative translational and rotational motion of the camera and the observed object. Image motion due to changes in aspect of the object is not considered here. The image motion can be modeled by where is a camera model parameterized by the relative translation and relative attitude . The relative attitude here is a small-angle rotation vector but could be represented by a quaternion, direction cosine matrix, Euler angles, or other parameterization. Relative pointing motion and image motion are synonymous, with image motion being interior to the camera and relative pointing motion being exterior to the camera. Figure 1 shows image motion comprising displacement, smear, and jitter. Displacement is the average image offset over the exposure interval of length . Smear is due to a linear motion over the interval and is equal to times the smear rate, where the smear rate is the average slope of the image motion over the exposure interval. Jitter is the residual motion after displacement and smear are removed from the image motion. Smear results in a streaked image and jitter causes an image to be blurred. The image exposure interval of length centered at time is The image motion over the exposure interval is where is the image displacement over , is the uniform smear rate (the average rate) over , and is the jitter motion in the interval . For convenience, let and write Eq. (2) as The jitter over is obtained from Eq. (3) as We shall compute the displacement and smear rate from a least-squares fit of and to the pointing motion over the interval . The jitter motion is then the least-squares residual.
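This least-squares split can be sketched numerically. The sketch below is our illustration, not code from the paper: the variable names and parameters are assumed, and the motion is 1-D for brevity. With time centered on the exposure midpoint, the fitted intercept is the displacement (the window average), the fitted slope is the smear rate, and the residual is the jitter.

```python
import numpy as np

def decompose_motion(theta, t, t0):
    """Least-squares split of sampled pointing motion over one exposure into
    displacement (window average), smear rate (slope), and jitter (residual).
    theta: sampled 1-D pointing motion; t: sample times; t0: exposure center."""
    tc = t - t0                      # centered time makes the two fit terms orthogonal
    v, d = np.polyfit(tc, theta, 1)  # slope = smear rate, intercept = displacement
    jitter = theta - (d + v * tc)    # residual motion after removing the linear fit
    return d, v, jitter

# Synthetic exposure: displacement 2.0, smear rate 0.5, small sinusoidal jitter
T, t0 = 1.0, 10.0                    # exposure time (s) and center time (assumed)
t = t0 + np.linspace(-T / 2, T / 2, 2001)
theta = 2.0 + 0.5 * (t - t0) + 0.01 * np.sin(2 * np.pi * 25 * (t - t0))
d, v, j = decompose_motion(theta, t, t0)
```

Because the sinusoidal term is not exactly orthogonal to the linear term over a finite window, the recovered smear rate differs slightly from 0.5; the residual is zero mean by construction of the least-squares fit.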
The best fit minimizes the mean square jitter where is the mean square jitter in the interval . Differentiation of with respect to the displacement gives Setting this partial derivative to zero yields The displacement is thus given by the resulting expression. Differentiation of with respect to the smear rate gives Setting this partial derivative to zero yields The smear rate is thus given by the corresponding expression. Smear, rather than smear rate, is the observable to which optical system performance is traditionally linked. The smear length is the linear change in the pointing over the interval and is given by We assume that and are Gaussian and wide sense stationary with expected value means and covariances and , respectively. The mean smear due to the uniform smear rate is with covariance We assume also that the jitter motion is Gaussian. The jitter is zero mean when the data fits the model of Eq. (2) at each : The mean-square jitter over the interval is given by This is similar to the scalar average square jitter in Eq. (5). The jitter covariance is Formulas to compute , , , and from the power spectrum of the pointing motion are summarized in Table 2 in Sec. 4. These covariance matrices are used in the formulas in the next section to compute the smear and jitter OTFs.

3. Image Motion Optical Transfer Function

In noncoherent (incoherent) imaging by an isoplanatic (shift-invariant) electro-optical system, the Fourier transform of the image is the product of the Fourier transform of the point spread function of the optical system and the Fourier transform of the object geometrically projected onto the detector plane. The imaging process in the Fourier transform domain is where is the 2-D spatial frequency. The OTF of an optical system is the product of the OTF of the image motion, the OTF of the optical diffraction (aperture, wavefront, and so on), the OTF of the detector, the OTF of the atmosphere, and any other effects that may be present.
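The product form of the system OTF can be sketched numerically. In the sketch below (our illustration, with assumed parameters and units), the diffraction MTF is the standard closed form for an aberration-free circular aperture, the smear MTF is the deterministic sinc form, and the jitter MTF is the familiar Gaussian form; none of these values are taken from the paper.

```python
import numpy as np

def diffraction_mtf(nu, nu_c):
    """MTF of an aberration-free circular aperture; nu_c is the optical cutoff."""
    x = np.clip(nu / nu_c, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x**2))

def smear_mtf(nu, s):
    """Deterministic smear MTF, |sinc(nu * s)|, for smear length s."""
    return np.abs(np.sinc(nu * s))          # np.sinc is the normalized sinc

def jitter_mtf(nu, sigma):
    """Gaussian jitter MTF, exp(-2 * pi^2 * sigma^2 * nu^2)."""
    return np.exp(-2.0 * (np.pi * nu * sigma)**2)

nu = np.linspace(0.0, 50.0, 501)            # spatial frequency, cycles/mm (assumed)
mtf = (diffraction_mtf(nu, 60.0)            # cutoff 60 cycles/mm (assumed)
       * smear_mtf(nu, 0.01)                # 0.01 mm smear (assumed)
       * jitter_mtf(nu, 0.002))             # 0.002 mm jitter dispersion (assumed)
```

The product equals unity at zero frequency and decreases monotonically over this band, illustrating how each effect multiplicatively degrades the system response.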
A typical system OTF is thus given by In this work we are concerned only with the effect of image motion on the system OTF. The irradiance of a still image is a function of the spatial location in the image. The average irradiance at over an exposure of duration seconds centered at time is We assume that the irradiance of a still image is constant with time so that , and so we have The 2-D Fourier transform of is The irradiance of the image subject to image motion is , and the average irradiance over the exposure is The 2-D Fourier transform of is Let . Then and . Substitute these into Eq. (26) to get The term is the general single image motion OTF for the exposure interval centered at : It will be convenient to substitute and into Eq. (28) to obtain This single image motion OTF depends on the image motion during the exposure interval centered at time .

3.1. Statistical Image Motion Optical Transfer Function

An analytical expression for the statistical image motion OTF is derived in this section. The statistical image motion OTF is the expected value of the single image motion OTF in Eq. (29): The integrand in Eq. (30) is Substitute for from Eq. (3) into Eq. (31) and factor the exponential: It is shown in Appendix D that and are independent random variables in each interval , and are independent of the least-squares residual , so we have for and , where is the displacement OTF, is the smear OTF, and is the jitter OTF. The dependence of on will be removed by integration over in Eq. (30) so that

3.1.1. Displacement optical transfer function

The displacement of an image is represented in the OTF by a constant phase shift. For any given exposure centered at time , the displacement is a random constant during the exposure interval. Therefore we take the expectation in Eq.
(33) to obtain An image displacement is merely a shift in position of the image in the focal plane, and so the displacement MTF is unity, since .

3.1.2. Smear optical transfer function

For a Gaussian random smear rate with mean and covariance , the second term in Eq. (33) is the characteristic function associated with the Gaussian density of , so we have [Ref. 25 (p. 115)], The dependence on is removed by taking a time average over the exposure interval, which yields the statistical smear OTF, where For convenience, using Eqs. (15) and (16), and can be written in terms of the mean smear and smear covariance. The function in Eq. (39) is the complex error function (usually denoted erf), and is the real part of its complex argument. Equation (39) was obtained with the aid of Ref. 26 (p. 108, §2.33-1), which is also found in Ref. 27 [p. 3, §3.2, Eq. (3)]. The complex error function is found in Fourier analysis, Fresnel integrals, and the plasma dispersion function. The error function for a real argument is given in Ref. 25 (p. 48). The complex error function erfz is the error function28 continued into the complex plane with a complex argument in place of the real argument . The complex error function is Hermitian, so . [Similarly, the standard error function is odd, so .] The complex error function is bounded between for all real , but unbounded on the imaginary axis as . Therefore Eq. (39) is best computed from rather than from separate terms. The complex error function and its properties, related functions, and series expansions are given in Ref. 29 (pp. 297–309). Algorithms, code, and documentation for computing the complex error function are found in Refs. 30–34. One must be careful in using any numerical algorithm; remarks on the method in Ref. 33 indicate that it may be less accurate for complex arguments near the imaginary axis. It is beyond the scope of this paper to provide a detailed treatment of numerical methods to compute the complex error function.
The reader is directed to Ref. 30 (§7) as a starting point, but beware that some articles and algorithms cited in Ref. 30 (§7.25) (and elsewhere) as methods for computing the complex error function actually compute the Faddeeva function. An exception is Ref. 32, which provides algorithms and code to compute the complex error function, the complementary complex error function, and the Faddeeva function. Although Ref. 35 provides algorithms for the Voigt function, it includes two series approximations for the complex error function, which are found in Refs. 33 and 34. The erfz function in Ref. 31, which comprises three separate algorithms noted in comments in the code, was used with its default settings to generate results in the next section.

Two limiting cases of the statistical smear OTF are of interest. (1) When and , the statistical smear OTF becomes the well-known deterministic smear OTF,17,18 where is the sinus cardinalis (cardinal sine) function. (2) When and , we have in Eq. (38), and the statistical smear OTF becomes where is the real error function. This is the same as the result obtained in Ref. 24 where () is assumed at the outset (and where the motion is 1-D). Equation (44) is the average of deterministic smear OTFs for images whose smear lengths are zero-mean Gaussian random variables. Equations (39)–(41) are the average smear OTF for images whose smear lengths are nonzero-mean Gaussian random variables. The relationships between these OTFs are illustrated and discussed in Sec. 3.2. From Eqs. (39)–(41), the statistical smear OTF is clearly not separable in terms of mean smear and dispersion. Therefore, it is incorrect to model the deterministic and stochastic effects of smear as the product of the deterministic smear OTF (the sinc function) and the statistical smear OTF (with ).
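Limiting case (2) can be checked by Monte Carlo: averaging the deterministic sinc OTF over zero-mean Gaussian random smear lengths reproduces an erf-type closed form. The sketch below is ours, with assumed parameter values and our own conventions; under the normalized-sinc convention, a standard calculation gives E[sinc(nu*s)] = sqrt(pi/2) * erf(pi*nu*sigma/sqrt(2)) / (pi*nu*sigma) for s ~ N(0, sigma^2), consistent with the real-error-function form cited above.

```python
import numpy as np
from math import erf, pi, sqrt

rng = np.random.default_rng(1)
sigma = 1.0                          # smear dispersion (mm), assumed
nu = 0.4                             # spatial frequency (cycles/mm), assumed

# Monte Carlo: average the deterministic sinc OTF over random smear lengths
s = rng.normal(0.0, sigma, 400_000)  # zero-mean Gaussian smear lengths
mc = np.mean(np.sinc(nu * s))        # np.sinc(x) = sin(pi x)/(pi x)

# Closed-form ensemble average (our derivation, zero-mean case)
a = pi * nu * sigma
closed = sqrt(pi / 2.0) * erf(a / sqrt(2.0)) / a
```

The Monte Carlo average agrees with the closed form to within sampling error, illustrating that the statistical smear OTF is an ensemble average of per-image sinc responses.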
Furthermore, the two spatial frequency components of the statistical smear OTF are also not separable, so the statistical smear OTF cannot be expressed as the product of 1-D OTFs in each frequency variable. A coordinate transformation can be applied so that one component of is zero. This rotates the graph of so that one axis is aligned with the direction of the mean smear. Even in this coordinate system, is not separable. In computing , we could have switched the order of expectation and time averaging, where and is the random smear over the exposure interval centered at time . The remainder of the derivation in Eq. (45) is omitted here.

3.2. Comparison of Deterministic and Statistical Smear Optical Transfer Functions

The statistical smear OTF is characterized to show how it behaves as a function of smear and smear dispersion (standard deviation of smear). For clarity, the characterization is shown for one frequency axis. Surface plots are also provided to illustrate the statistical smear OTF in two dimensions of spatial frequency. Figure 2 shows two plots of the 2-D statistical smear OTF evaluated for various mean smear and various smear dispersion along one dimension in frequency. The frequency axis is one-sided since the OTF is an even function. Figure 2(a) shows the statistical smear OTF for mean smear and smear dispersion ranging from 0.02 to 2.0 mm. The OTF converges to the sinc function, Eq. (43), as , and converges to the statistical smear OTF, Eq. (44), as . The curve for is almost indistinguishable from the curve for and so is not shown. An interesting characteristic is that the curves rise as the dispersion increases until the dispersion equals the mean smear, and then fall as the dispersion increases further. The degradation of the statistical smear OTF is pronounced as the dispersion increases above the mean smear. The curves begin to look like the sinc function when the dispersion is less than about half the mean smear.
Figure 2(b) shows the statistical smear OTF for a smear dispersion and mean smear ranging from 0.02 to 2.0 mm. The curve for is almost indistinguishable from the curve for and so is not shown. The curves move left and down as the smear increases, again indicating worsening degradation of the image. The curves begin to look like the sinc function when the smear is greater than twice the dispersion, which is consistent with Fig. 2(a). The statistical smear OTF in two dimensions of spatial frequency is shown in the contour plots in Fig. 3. (A three-dimensional mesh plot is difficult to show clearly, so it is omitted.) The statistical smear OTF in Fig. 3(a) was produced with mean smear , and smear dispersion . Compare with Fig. 2(a). Although there is smear in the direction, the average smear OTF is not a sinc function, even though the response is a sinc function for each realization of smear in each image. Figure 3(b) was produced with mean smear and smear dispersion , . Although the mean smear is along a line, the sinc response is along the axis, and the response is essentially along a line. Although this may seem counterintuitive, it is because of the large random smear in the axis. Since the statistical smear OTF can change dramatically with changes in the parameters, one must be careful in making any general statements regarding the smear OTF. Nevertheless, the statistical smear OTF provides information about OTF performance that is not revealed by the deterministic smear OTF (the sinc function) for any choice of smear such as a “worst-case” smear. In Fig. 4(a), the statistical smear MTF (, ) tightly bounds the deterministic smear MTF (, ). It also bounds the deterministic smear MTF for , since the deterministic smear MTF is smaller for longer smear lengths. The statistical smear and jitter OTFs are shown in Fig. 4(b) for , , and . From Eq. (60), the contribution of smear to the root-mean-square (RMS) attitude motion is , or 1.15 for .
This is only slightly larger than the jitter in this example. Empirical evidence indicates that image quality tolerates degradation from smear better than from jitter. The reason is that the statistical smear OTF goes to zero slowly with increasing spatial frequency, whereas the jitter OTF goes to zero quickly.

3.3. Statistical Smear Line Spread Function

The deterministic smear LSF is a rectangle (boxcar) function. The rectangle function is shown in Fig. 5(a) for three values of smear. The statistical smear LSF describes the average LSF over an ensemble of random rectangle LSFs whose widths are random from one image to another. The statistical smear LSF can be computed as the expected value of the random rectangle function. Alternatively, the statistical smear LSF can be obtained by computing the inverse Fourier transform of the statistical smear OTF, Eq. (39). The computation is facilitated by substituting Eq. (36) for the integrand in Eq. (38), then switching the order of integration and inverse Fourier transform, and by assuming that the random smear rate is Gaussian. The derivation is lengthy by either method, so details are omitted. We have derived the statistical smear LSF in one dimension for zero-mean Gaussian smear. The 1-D statistical smear LSF for zero-mean smear is where is the spatial distance in the image, is the smear dispersion, and is the exponential integral function. The statistical smear LSF is shown in Fig. 5(b) for values of from 0.2 to 2.0 mm. In comparison, the deterministic smear LSF for smear is a rectangle function of width and amplitude (a Dirac delta function for ). The statistical smear LSF has the required property that it has unit area, as does the deterministic smear LSF. Gaussian random smears are concentrated around the mean, which is zero in Eq. (47); hence is large there, and as . Large smears are infrequent, so as . The LSF broadens but becomes thinner near zero as the dispersion increases.
For small , is concentrated near and becomes a delta function [similar to the Dirac delta function ] as . As can be seen in Eq. (47), scales the graph [Fig. 5(b)] of in both axes.

Table 1. Summary of image motion OTFs and LSF.
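The ensemble-average interpretation of the statistical smear LSF can be checked numerically without the closed form. Since each realization contributes a boxcar of width |s| and height 1/|s|, the zero-mean statistical LSF is h(x) = E[(1/|s|) * 1(|x| < |s|/2)], which can be evaluated by quadrature; the sketch below is ours, with assumed symbols and parameters, and verifies the unit-area property and the peaking near zero.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

sigma = 1.0  # smear dispersion (assumed units)

def lsf(x, sigma):
    """Ensemble-average of rectangle LSFs with random width |s|, s ~ N(0, sigma^2):
    h(x) = integral over s > 2|x| of (1/s) * 2 * phi(s; sigma) ds
    (the factor 2 folds the two signs of s together)."""
    val, _ = quad(lambda s: (1.0 / s) * 2.0 * norm.pdf(s, scale=sigma),
                  2.0 * abs(x), np.inf)
    return val

# Evaluate on a grid that avoids x = 0, where h diverges logarithmically
x = np.linspace(-4.0, 4.0, 800)
h = np.array([lsf(xi, sigma) for xi in x])
area = np.sum(h) * (x[1] - x[0])   # should be close to 1 (tails beyond |x|=4 are tiny)
```

The numerical area is close to unity (the small deficit comes from the integrable logarithmic spike at the origin and the truncated grid), and h decreases monotonically away from zero, consistent with the behavior described above.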
4. Pointing Covariance

The OTFs for displacement, smear rate, smear, and jitter in Table 1 are parameterized with means (mean displacement), (mean smear rate), (mean smear), and covariances (displacement covariance), (smear rate covariance), (smear covariance), and (jitter covariance) in Eqs. (13)–(19). These are derived in Appendices B, C, and E. Covariance matrices for other performance metrics are also derived. These are the covariances , , , and of the relative pointing motion, smitter (the sum of smear and jitter), point-to-point stability (the relative motion at points in time separated by seconds), and windowed stability (displacements separated by seconds). These are defined in Appendices A, F, G, and H and discussed further in Sec. 4.1. Although the covariance matrices can be computed directly from sampled motion data, that approach is computationally intensive and does not reveal what spectral content of the pointing motion contributes significantly to the covariance matrices, hence to the image motion OTF. The spectral content can also reveal which sources of relative pointing motion contribute most significantly to the covariance matrices and to the image motion OTFs. The basic idea19,20 is to compute the PSD from the autocorrelation of the relative pointing motion . Expressions for the covariances in terms of the PSD are derived in Appendices A–H. We assume only that is wide sense stationary during the exposure intervals. We can ignore pointing motion between exposure intervals that is not characteristic of the motion during the exposures (e.g., a slew between exposures). Any average displacement and trend over the ensemble of images should be removed so that a valid autocorrelation and PSD are computed. After subtracting the overall trend, the trend in each exposure can vary but average to zero over the ensemble. The average displacement and trend are added back into and (or ) before computing the OTFs. The autocorrelation of is the matrix where .
The Wiener–Khinchin theorem states that the PSD of is the Fourier transform of the autocorrelation function of . From the inverse Fourier transform, we obtain the autocorrelation in terms of the PSD. Since is real, is real and even, is real and even, and we can write Eq. (50) as and similarly for Eq. (49). The PSD and autocorrelation are used in the Appendices to derive expressions for the covariance matrices , , , , , , and . The covariances are computed from expressions of the form where the subscript X is one of A, D, R, S, J, SJ, WS, and PS. The in Eq. (52) are frequency domain weighting functions, which are derived in the Appendices and summarized in Table 2. The means are also defined in Table 2.

Table 2. Summary of covariances and corresponding frequency domain weighting functions.
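In discrete form, the Wiener–Khinchin relation can be verified directly: the inverse FFT of a suitably scaled power spectrum gives the circular sample autocorrelation, whose zero-lag value is the mean square of the data. The sketch below is ours, with an assumed scaling convention.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4096
theta = rng.standard_normal(N)   # sampled pointing motion (illustrative)

X = np.fft.fft(theta)
P = (np.abs(X)**2) / N           # power spectrum scaled so that ...
r = np.fft.ifft(P).real          # ... r is the circular sample autocorrelation
# r[0] equals the mean square of the data, and r is even under circular reflection
```

This is the discrete analog of recovering the autocorrelation from the PSD by the inverse Fourier transform, and the zero-lag identity is the same Parseval-type check used for the power spectrum in Sec. 4.2.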
The covariance of the pointing motion is derived in Appendix A, Eq. (60), and is given by Similarly, the displacement, smear, and jitter weighting functions in Table 2 are such that Smitter is the sum of smear and jitter, or equivalently the pointing motion with the displacement removed. The smitter covariance in Table 2 is the jitter covariance defined in previous works,19–22 and is not used to compute the image motion MTFs in Table 1.

4.1. Weighting Functions

The displacement, smear, jitter, and smitter weighting functions in Table 2 are plotted in Fig. 6 with second. It can be seen that the frequency content of the pointing contributes to the covariances, hence the OTFs, over certain ranges of frequency. The displacement weighting function is lowpass, so low-frequency pointing motion contributes to the displacement. The smear weighting function peaks at 0.7 Hz and is zero at 0 and 1.4 Hz, and exhibits smaller peaks at higher frequencies. The smear weighting function is essentially bandpass, so pointing motion over a certain range of frequencies contributes significantly to smear. The jitter weighting function is highpass. Large-amplitude pointing motion can be significant at frequencies where the weighting function is small. The displacement, smear, and jitter weighting functions overlap, and so the spectral content of the image motion at any frequency contributes to all three measures of image motion. The contribution of the pointing motion to displacement, smear, and jitter depends on the PSD of the pointing motion as well as the weighting functions, so there are no arbitrary frequency regions associated with displacement, smear, and jitter. The point-to-point stability covariance measures the change in pointing from one instant to another. The stability weighting function with second, shown in Fig. 7, is a minimum at 0, 1, 2, … Hz and a maximum at 0.5, 1.5, 2.5, … Hz, so frequencies above contribute to the point-to-point stability.
Point-to-point stability is called stability in Refs. 21 and 22. The point-to-point stability metric for spatial data accuracy of line scan data23 can be computed more efficiently by our method. However, the displacement, smear, and jitter metrics may be more appropriate. The windowed stability covariance measures the change in displacement from one image to another. The windowed stability weighting function has two time parameters, the exposure time and the time between image center times. The windowed stability weighting function is plotted in Fig. 8(a) with and . It is essentially a bandpass function and goes to zero at low and high frequencies for any choice of and . The windowed stability weighting function looks significantly different for various and , as exemplified by Fig. 8(b) where and . Windowed stability is useful in image registration and to specify or evaluate performance for a frame-differencing camera.

4.2. Computation of the Power Spectrum and Covariances

A pointing performance analysis will typically produce both time-domain and frequency-domain data. Time-domain data is typically obtained from a time-domain simulator, and frequency-domain data is typically obtained from transfer functions driven by harmonic or white noise. Although the mean and covariance of displacement, smear, and jitter can be computed in either the time domain or in the frequency domain, their computation is most conveniently and efficiently performed in the frequency domain. The main tool for computing the pointing covariances from uniformly sampled data is the FFT. The FFT of the sampled pointing motion is scaled to a power spectrum (not a density) by dividing it by and then computing its magnitude squared, where is the number of samples of data and is assumed to be a power of two, . (This assumption can be relaxed.) The power spectrum is then shifted (FFTSHIFT in Table 3) so that the zero-frequency line is at the center.
The frequencies range from −1/(2T) to 1/(2T) Hz in increments of 1/(NT) Hz, where T is the sample interval in seconds. Note that the sum of the discrete power spectrum is equal to the second moment of the time-domain data; this serves as a useful check that the power spectrum is scaled correctly. The mean of the data should be subtracted out so that the computed accuracy and displacement covariances do not include the overall mean pointing motion.Table 3Summary of calculations for the power spectrum of pointing motion.
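The scaling and checking steps above can be sketched in a few lines; `power_spectrum` is a hypothetical helper name, and the final assertion mirrors the second-moment identity stated in the text:

```python
import numpy as np

def power_spectrum(x, T):
    """Scale the FFT of uniformly sampled pointing data to a power spectrum.

    Following the procedure in the text: divide the FFT by the number of
    samples N, take the magnitude squared, and FFTSHIFT so that the
    zero-frequency line is at the center.  Returns the frequency axis,
    which runs from -1/(2T) in increments of 1/(N*T) Hz, and the spectrum.
    """
    N = len(x)
    P = np.fft.fftshift(np.abs(np.fft.fft(x) / N) ** 2)
    f = np.fft.fftshift(np.fft.fftfreq(N, d=T))
    return f, P

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)           # simulated one-axis pointing samples
x -= x.mean()                           # remove the mean before the transform
f, P = power_spectrum(x, T=0.01)
# the sum of the discrete power spectrum equals the second moment of the data
assert np.isclose(P.sum(), np.mean(x ** 2))
```

The check holds by Parseval's relation regardless of the data, which is what makes it useful for catching scaling mistakes.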
Since the power spectrum does not converge to the true spectrum with increasing N, the data should be segmented and the power spectra of the segments averaged. Alternatively, the power spectrum can be computed from the biased, and possibly windowed, sample correlation function of the data. A detailed discussion of the computation of the power spectrum is beyond the scope of this paper; the reader is referred to Ref. 36 or to one of many books on spectral analysis. In the frequency domain, the pointing covariances are evaluated by computing the weighting functions at each frequency point, multiplying by the power spectrum of the pointing motion at each frequency, and then summing the terms. This computational algorithm is summarized in Table 3. The power spectrum can be computed using only non-negative frequencies, provided that the zero-frequency term is multiplied by one and the positive-frequency terms are multiplied by two in the summation. Once the power spectrum is computed, the covariance matrix corresponding to any of the weighting functions in Table 2 is easily computed. 4.3.Pointing Covariance from Relative Motion CovarianceIn Sec. 4.2, the pointing motion is computed from the relative translation and relative attitude (a small-angle representation) by using the camera model in Eq. (1). The power spectrum and covariance matrices are then computed from the pointing motion. An alternative approach is to first compute the power spectrum of the relative translation and the power spectrum of the relative attitude motion. At this point, there are two paths to compute the covariance matrices. One path is to apply sensitivity matrices, evaluated at the nominal relative translation and attitude of the camera model in Eq. (1), to map the power spectra of the relative motions into the power spectrum of the pointing vector, as in Eq. (54). The covariance matrices are then computed from the power spectrum in Eq. (54). 
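The summation algorithm of Table 3 can be sketched as follows; `weighted_covariance` is an illustrative helper, and the sinc-squared weight is a placeholder standing in for the Table 2 weighting functions (which are assumed even in frequency):

```python
import numpy as np

def weighted_covariance(x, T, weight):
    """Frequency-weighted covariance using non-negative frequencies only.

    Implements the summation described in the text: the weighting function
    is evaluated at each frequency, multiplied by the power spectrum, and
    summed, with the zero-frequency term weighted once and the
    positive-frequency terms weighted twice.
    """
    N = len(x)
    P = np.abs(np.fft.fft(x) / N) ** 2          # two-sided power spectrum
    f = np.abs(np.fft.fftfreq(N, d=T))          # frequency magnitudes, Hz
    c = weight(f[0]) * P[0] + 2.0 * np.sum(weight(f[1:N // 2]) * P[1:N // 2])
    if N % 2 == 0:                              # Nyquist bin occurs only once
        c += weight(f[N // 2]) * P[N // 2]
    return c

rng = np.random.default_rng(2)
x = rng.standard_normal(256)
w = lambda f: np.sinc(f) ** 2                   # placeholder even weighting function
c_one = weighted_covariance(x, T=0.1, weight=w)
```

For real data the spectrum is even in frequency, so the one-sided sum reproduces the full two-sided summation at half the cost.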
This approach is computationally intensive because the mapping matrices have to be applied to each frequency component of the power spectra. Furthermore, the attitude motion may be simulated at a higher sample rate than the translational motion because of the typically higher spectral bandwidth of the attitude motion. A computationally more efficient path is to compute the covariance matrices corresponding to the relative translation and relative attitude by using the formulas in Table 2. These covariance matrices are then mapped into the pointing covariances by applying the sensitivity matrices directly to the covariances rather than to the power spectra. This may be the preferred approach, since the control system designer will simulate the relative translation and attitude motions but may not have details of the camera model. In addition, image motion on multiple focal planes can be evaluated from the same set of power spectra and covariance matrices by applying multiple mapping matrices. Another advantage of this approach is that the contributions of the relative translation and relative attitude motions to the displacement, smear, and jitter can be evaluated separately. An optical sensitivity matrix for the James Webb Space Telescope (JWST), formerly called the Next Generation Space Telescope, is presented in Refs. 37 and 38. 4.4.Pointing Performance AnalysisFigure 9 shows the pointing control system for an optical payload on an imaging vehicle. In the case of a spacecraft, the system model comprises models of attitude sensors, actuators, fuel slosh, a solar array drive, internal disturbances, and the optical system, all connected to appropriate nodes of a reduced-order Nastran model comprising rigid-body and flexible-body modes and mode shapes. The control loop is closed through the attitude controller. The attitude command reference input is a disturbance because it can excite structural and slosh modes, and the command itself may be subject to error (e.g., scan rate error or tracking rate error). Similar integrated modeling approaches are found in Refs. 38–43. 
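The covariance-mapping path of Sec. 4.3 can be sketched as a congruence transformation. The sensitivity (Jacobian) matrices and covariance values below are hypothetical placeholders, not taken from the paper, and the omission of a translation-attitude cross term is an added assumption that the two motions are uncorrelated:

```python
import numpy as np

# Hypothetical 2x3 sensitivity (Jacobian) matrices of the camera model in
# Eq. (1), evaluated at the nominal relative translation and attitude.
J_t = np.array([[1.0e-3, 0.0,     2.0e-4],
                [0.0,    1.0e-3, -1.0e-4]])      # d(pointing)/d(translation)
J_a = np.array([[0.0, -1.0, 0.1],
                [1.0,  0.0, 0.2]])               # d(pointing)/d(attitude)

C_t = np.diag([1.0e-6, 1.0e-6, 4.0e-6])          # relative translation covariance
C_a = np.diag([1.0e-10, 1.0e-10, 9.0e-10])       # relative attitude covariance

# Map the relative-motion covariances into the 2-D pointing covariance;
# the cross term is omitted under the assumption of uncorrelated motions.
C_p = J_t @ C_t @ J_t.T + J_a @ C_a @ J_a.T
```

Because the mapping is applied once to each covariance matrix rather than to every frequency line of the power spectra, this path is far cheaper than mapping the spectra, which is the efficiency argument made above.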
An overview of modeling and analysis is given in Ref. 44. As suggested in Ref. 19 (pp. 21–22) and Ref. 20 (pp. 573–574), the weighting functions can be approximated by linear transfer functions for use in control system analysis and synthesis. Standard state-space methods can then be applied to calculate the covariance matrices. A state-space solution that avoids having to compute the weighted FFT is presented in Refs. 45 and 46, but it would have to be modified for our model of pointing motion to include smear and a different jitter weighting function. Analysis of pointing performance is often faster and numerically more reliable (due to time scales) if the system response to disturbances is computed directly in the frequency domain from a linear or linearized closed-loop transfer function rather than in the time domain from a simulator. A time-domain simulator can, of course, capture nonlinear and time-varying effects. The response of a system to high-frequency noise and disturbances is most accurately and efficiently computed in the frequency domain. For stochastic sources, the power spectrum can be computed directly by using standard state-space covariance methods. Once the power spectrum of the pointing motion is computed, it is a trivial effort to compute the covariances, as discussed in Sec. 4.2. Segments of the pointing motion pertaining to nonimaging attitude motions have to be eliminated if they are not representative of the motion during the exposure interval. In a linear or linearized system, the covariance of the pointing motion from individual noise, disturbance, and other sources can be computed individually and added to obtain the total pointing covariance. The individual contributions can then be ranked so that the greatest offenders can be identified. 
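Because the covariances add in a linear system, ranking the contributors is straightforward; the source names and numerical values below are hypothetical:

```python
import numpy as np

# Pointing covariance contributions (rad^2) from individual disturbance
# sources in a linear system; the names and values are illustrative only.
sources = {
    "reaction_wheel": np.diag([4.0e-12, 1.0e-12]),
    "sensor_noise":   np.diag([1.0e-12, 1.0e-12]),
    "solar_array":    np.diag([9.0e-12, 2.5e-12]),
}

# Covariances are additive, so the total is the sum of the contributions.
C_total = sum(sources.values())

# Rank the "greatest offenders" by total mean-square contribution (trace).
ranking = sorted(sources, key=lambda k: np.trace(sources[k]), reverse=True)
```

The same bookkeeping applies per frequency band, so an analyst can attribute displacement, smear, and jitter budgets to individual disturbances.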
The power spectrum may be computed as a combination of a system frequency response, an FFT of the autocorrelation of time-series data, discrete spectral lines due to harmonic disturbance sources, and stochastic sources such as sensor noise. The pointing motion from each source can be computed at different sample rates or frequency resolutions, though the sample rate should be high enough and the frequency resolution small enough to accurately represent the high-frequency responses of the system and so that numerical errors in the computed covariance matrices are not significant. Similarly, time-domain data from different simulations do not have to be resampled to a common sample rate. 5.ConclusionTwo-dimensional statistical image motion OTFs for the displacement, smear, and jitter components of image motion are derived. The LSF for zero-mean random smear is also derived. The statistical smear OTF measures the average optical system performance for an ensemble of images subject to nonzero-mean Gaussian random smear. In comparison, the deterministic (sinc function) smear OTF measures performance for a specified smear length. The familiar Gaussian jitter OTF is also a statistical OTF. Limiting cases for the statistical smear OTF are given: (1) fixed nonzero-mean smear and diminishing smear dispersion, and (2) diminishing mean smear and fixed nonzero smear dispersion. In the first case, the statistical smear OTF converges to a sinc function (the well-known deterministic smear OTF), and in the second case it converges to the limiting function derived herein. The statistical smear OTF begins to resemble the sinc function when the mean smear exceeds about twice the dispersion in the smear. For equal RMS attitude motion due to zero-mean random smear and jitter, the statistical smear OTF is greater than the jitter OTF at higher spatial frequencies. This corroborates the empirical observation that optical systems tolerate smear better than jitter. 
The statistical OTFs are parameterized by means and covariances of the displacement, smear, and jitter components of pointing motion, with spatial frequency as the independent variable. The covariances are computed accurately and efficiently from a temporal-frequency-weighted power spectrum of the LOS pointing motion. The weighting functions are parameterized by the exposure time alone. Essentially, the displacement weighting function is lowpass, the smear weighting function is bandpass, and the jitter weighting function is highpass. These frequency regions overlap, so the spectral content of the image motion at any frequency contributes to all three measures of image motion; therefore, there are no arbitrary frequency regions associated with displacement, smear, and jitter. By examining the weighted power spectrum, a control system engineer can determine the temporal frequencies where the sensitivity of the OTFs to pointing motion is greatest. The control system engineer can then focus on the most significant disturbance sources or frequencies, which can lead directly to improvements in the design of the pointing control system and of the optical system. Because covariances are additive, individual disturbance sources can be analyzed to determine their relative contributions to the displacement, smear, and jitter OTFs. The weighting functions can also be used in control system synthesis to optimize a controller. The statistical OTFs and the method for determining their parameters are a basis for integrated modeling and for multidisciplinary analysis and simulation. In addition to the image motion OTFs and their associated means, covariances, and weighting functions, point-to-point stability and windowed stability are defined, and formulas for the corresponding covariance matrices are derived. Point-to-point stability measures the change in pointing from one instant of time to another. 
Windowed stability measures the change in displacement from one image to the next. AppendicesAppendix A:Pointing CovarianceThe pointing (accuracy) covariance is the covariance of the pointing motion. For consistency with other measures of pointing motion, we write the covariance as an integral of the PSD multiplied by an accuracy weighting function. From Eqs. (43) and (74), one term is the contribution to the pointing covariance from the smear component of pointing motion.Appendix B:Displacement CovarianceThe displacement and displacement variance were originally derived in Refs. 19 and 20. We have written the definition of the displacement in a different but equivalent form in Eq. (8), so it is instructive to rederive the displacement covariance using our definition of the displacement. The steps involved are similar to those in Refs. 19 and 20. The displacement covariance is obtained from Eqs. (8) and (50). Since the pointing error is assumed to be a wide-sense stationary process, the autocorrelation is independent of the time origin, and so the displacement metric is valid for all times. The displacement covariance can be written as an integral of the PSD multiplied by the displacement weighting function.Appendix C:Smear and Smear Rate CovarianceThe smear rate covariance is obtained by substituting the smear rate from Eq. (11) into Eq. (48) and then using Eq. (50). Since the pointing error is assumed to be a wide-sense stationary process, the autocorrelation is independent of the time origin, and so the smear rate metric is valid for all times. The smear rate covariance can be written as an integral of the PSD multiplied by the smear rate weighting function. The smear is defined in terms of the average smear rate; the smear covariance and the corresponding smear weighting function follow directly.Appendix D:Correlation of Displacement and Smear RateHere, we show that the displacement and the smear rate are uncorrelated. This result is used in the derivation of Eq. (72): the integrand is an odd function of frequency, so the integral is zero. 
The integrand is also purely imaginary, but the left side of the equation is real, so the integral evaluates to zero.Appendix E:Jitter CovarianceThe jitter covariance is the expected value of the mean-square jitter over the exposure interval. Since the jitter is zero mean, as a result of the least-squares minimization, we omit the means of the displacement and smear rate from the derivation, since they drop out. We also use the fact that the displacement and smear rate are uncorrelated, which is shown in Appendix D. Now substitute the pointing motion from Eq. (4) and carry out the expectation using the definitions of the displacement, smear rate, and jitter; this yields an expression for the jitter covariance. Substituting from Eq. (14), and then substituting Eqs. (58), (59), (62), (63), (65), (66), (67), and (68) into Eq. (74), gives the jitter covariance in terms of the PSD of the pointing motion.Appendix F:Smitter CovarianceJitter was formerly defined in Refs. 20–22 as the pointing motion with the displacement removed. It is easy to show that this quantity can be obtained from a least-squares minimization that yields the same expression for the displacement as in Eq. (8). From Eqs. (78) and (3), the former jitter defined in Refs. 20–22 is the sum of smear and jitter, which we term “smitter.” Because smear and jitter affect the image motion OTF differently, the former definition of jitter is less useful than the present definition. The smitter covariance is the expected value of the mean-square smitter over the exposure interval. The smitter covariance in Eq. (82) is analogous to the jitter variance defined in Refs. 20–22. Substitute Eqs. (58), (59), (62), and (63) into Eq. 
(82) to obtain the smitter covariance in terms of the PSD of the pointing motion.Appendix G:Point-to-Point Stability CovarianceThe change in the LOS pointing over an interval is the difference between the pointing at two instants of time. The point-to-point stability covariance measures the mean-square change in pointing from one instant to another and is given by the second-order structure function. These equations suggest two ways of computing the stability covariance in the time domain: either by a time average or by way of the autocorrelation. In the frequency domain, the point-to-point stability covariance is obtained by substituting Eq. (50) or Eq. (51) into Eq. (71), which yields an integral of the PSD multiplied by the stability weighting function. An obvious characteristic of the stability weighting function is that it does not roll off at high frequency. This is because the two pointing values are taken at instantaneous points in time. A useful fact is that the stability weighting function is bounded above, so the point-to-point stability covariance is bounded by a multiple of the pointing covariance. Therefore, if this bound is less than the stability requirement, no further analysis of stability is needed. These statements hold if the trace is applied to each side of the inequality. (For matrices A and B, A ≤ B means that B − A has non-negative eigenvalues.)Appendix H:Windowed Stability CovarianceThere may be a requirement on the change in displacement of one image compared with a subsequent image. The change in displacement over an interval is the difference between the displacements of the two images. The windowed stability covariance measures the mean-square change in displacement and is given by a second-order structure function. The autocorrelation of the displacement is most easily obtained from the inverse Fourier transform of the PSD of the pointing motion. Substituting Eq. (91) into Eq. (90) yields the windowed stability covariance, where the windowed stability weighting function is shown in Fig. 8. A windowing factor in Eq. (93) causes the weighting to go to zero as the frequency increases.AcknowledgmentsThis work is the result of independent research by the authors. Financial support for publication was provided by the Harris Corporation and by Aerospace Control Systems LLC. 
The authors especially thank the reviewers for their insightful comments, which considerably improved this paper. ReferencesJ. D. Gaskill, Linear Systems, Fourier Transforms, and Optics, John Wiley & Sons, New York
(1978). Google Scholar
M. Born and E. Wolf, Principles of Optics, 2nd ed., Pergamon Press, Oxford
(1964). Google Scholar
N. Jensen, Optical and Photographic Systems, 116
–124, John Wiley and Sons, Inc., New York
(1968). Google Scholar
R. M. Scott,
“Contrast rendition as a design tool,”
Photogr. Sci. Eng., 3
(5), 201
–209
(1959). PSENAC 0031-8760 Google Scholar
T. Trott,
“The effects of motion on resolution,”
Photogramm. Eng. J. Am. Soc. Photogramm., 26 819
–827
(1960). Google Scholar
M. D. Rosenau,
“Parabolic image motion,”
Photogramm. Eng. J. Am. Soc. Photogramm., 27
(3), 421
–427
(1961). Google Scholar
R. V. Shack,
“The influence of image motion and shutter operation on the photographic transfer function,”
Appl. Opt., 3
(10), 1171
–1181
(1964). http://dx.doi.org/10.1364/AO.3.001171 APOPAI 0003-6935 Google Scholar
S. C. Som,
“Analysis of the effect of linear smear on photographic images,”
J. Opt. Soc. Am., 61 859
–864
(1971). http://dx.doi.org/10.1364/JOSA.61.000859 JOSAAH 0030-3941 Google Scholar
G. C. Holst, Electro-Optical Imaging System Performance, PM187, 5th ed., SPIE Press, Bellingham, Washington
(2008). Google Scholar
D. Wulich and N. S. Kopeika,
“Image resolution limits resulting from mechanical vibration,”
Opt. Eng., 26 266529
(1987). http://dx.doi.org/10.1117/12.7974110 Google Scholar
S. Rudoler et al.,
“Image resolution limits resulting from mechanical vibrations, Part II: experiment,”
Opt. Eng., 30
(5), 577
–589
(1991). http://dx.doi.org/10.1117/12.55843 Google Scholar
O. Hadar, M. Fisher and N. S. Kopeika,
“Image resolution limits resulting from mechanical vibrations. Part III: numerical calculation of modulation transfer function,”
Opt. Eng., 31
(3), 581
–589
(1992). http://dx.doi.org/10.1117/12.56084 Google Scholar
O. Hadar, I. Dror and N. S. Kopeika,
“Image resolution limits resulting from mechanical vibrations. Part IV: real-time numerical calculation of optical transfer functions and experimental verification,”
Opt. Eng., 33
(2), 566
–578
(1994). http://dx.doi.org/10.1117/12.153186 Google Scholar
O. Hadar, S. R. Rotman and N. S. Kopeika,
“Target acquisition modeling of forward-motion considerations for airborne reconnaissance over hostile territory,”
Opt. Eng., 33
(9), 3106
–3117
(1994). http://dx.doi.org/10.1117/12.177485 Google Scholar
N. S. Kopeika, A System Engineering Approach to Imaging, PM38 SPIE Press, Bellingham, Washington
(1998). Google Scholar
A. Stern and N. S. Kopeika,
“Analytical method to calculate optical transfer functions for image motion and vibrations using moments,”
J. Opt. Soc. Am. A, 14
(2), 388
–396
(1997). http://dx.doi.org/10.1364/JOSAA.14.000388 JOSAAH 0030-3941 Google Scholar
J. F. Johnson,
“Modeling imager deterministic and statistical modulation transfer functions,”
Appl. Opt., 32
(32), 6503
–6513
(1993). http://dx.doi.org/10.1364/AO.32.006503 APOPAI 0003-6935 Google Scholar
S. L. Smith et al.,
“Understanding image quality losses due to smear in high-resolution remote sensing imaging systems,”
Opt. Eng., 38
(5), 821
–826
(1999). http://dx.doi.org/10.1117/1.602054 Google Scholar
S. W. Sirlin and A. M. San Martin,
“A new definition of pointing stability,”
JPL Engineering Memorandum EM 343-1189, Jet Propulsion Laboratory, Pasadena, California
(1990). Google Scholar
R. L. Lucke et al.,
“New definitions of pointing stability: AC and DC effects,”
AAS J. Astronaut. Sci., 40
(4), 557
–576
(1992). Google Scholar
M. E. Pittelkau,
“Definitions, metrics, and algorithms for displacement, jitter, and stability,”
in Advances in the Astronautical Sciences, AAS/AIAA Astrodynamics Specialist Conf.,
901
–920
(2003). Google Scholar
M. E. Pittelkau,
“Definitions, metrics, and algorithms for displacement, jitter, and stability,”
in Flight Mechanics Symp.,
(2003). Google Scholar
T. P. Ager, An Analysis of Metric Accuracy Definitions and Methods of Computation, 13 NIMA InnoVision, Springfield, Virginia
(2002). Google Scholar
M. E. Pittelkau and W. G. McKinley,
“Pointing error metrics: displacement, smear, jitter, and smitter with application to image motion MTF,”
in AIAA Guidance, Navigation, and Control Conf.,
19 Google Scholar
A. Papoulis, Probability, Random Variables, and Stochastic Processes, 2nd ed., McGraw-Hill, New York
(1984). Google Scholar
I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series, and Products, 7th ed., Elsevier, Boston, Massachusetts
(2007). Google Scholar
E. W. Ng and M. Geller,
“A table of integrals of the error functions,”
J. Res. Natl. Bur. Stand. B. Math. Sci., 73B
(1),
(1969). http://dx.doi.org/10.6028/jres.073B.001 Google Scholar
E. W. Weisstein,
“Erf; Erfi,”
(2015) http://mathworld.wolfram.com/Erf.html (accessed August 2015). Google Scholar
M. Abramowitz and I. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, §7 Error Functions and Fresnel Integrals, 55 297 U.S. Government Printing Office, Washington, D.C.
(1972). Google Scholar
N. M. Temme, NIST digital library of mathematical functions,
“Section 7: error functions, Dawson’s and Fresnel integrals,”
(2015) http://dlmf.nist.gov/7 (accessed August 2015). Google Scholar
K. Johnson,
“Complex Erf (error function), Fresnel integrals,”
(2011) http://www.mathworks.com/matlabcentral/fileexchange/33577-complex-erf-error-function-fresnel-integrals (accessed February 2013). Google Scholar
S. G. Johnson,
“Faddeeva package: complex error functions,”
(2012) http://www.mathworks.com/matlabcentral/fileexchange/38787-faddeeva-package-complex-error-functions (accessed February 2013). Google Scholar
M. Leutenegger,
“Error function of complex numbers,”
(2008) http://www.mathworks.com/matlabcentral/fileexchange/18312-error-function-of-complex-numbers (accessed February 2013). Google Scholar
I. A. Stegun and R. Zucker,
“Automatic computing methods for special functions. IV. Complex error function, Fresnel integrals, and other related functions,”
J. Res. Nat. Bur. Stand., 86
(6), 661
–686
(1981). http://dx.doi.org/10.6028/jres.086.031 Google Scholar
S. M. Abrarov and B. M. Quine,
“Efficient algorithmic implementation of the Voigt/complex error function based on exponential series approximation,”
Appl. Math. Comput., 218 1894
–1902
(2011). http://dx.doi.org/10.1016/j.amc.2011.06.072 AMHCBQ 0096-3003 Google Scholar
S. M. Kay and S. L. Marple,
“Spectrum analysis—a modern perspective,”
Proc. IEEE, 69
(11), 1380
–1419
(1981). http://dx.doi.org/10.1109/PROC.1981.12184 IEEPAD 0018-9219 Google Scholar
O. de Weck, Singular Value Decomposition for NGST Optics, 20 Massachusetts Institute of Technology, MIT Space Systems Laboratory, Cambridge, Massachusetts
(1998). Google Scholar
O. de Weck and D. W. Miller, Integrated Modeling and Dynamics Simulation for the Next Generation Space Telescope, Massachusetts Institute of Technology, Cambridge, Massachusetts
(1999). Google Scholar
G. Mosier et al.,
“Fine pointing control for a next generation space telescope,”
Proc. SPIE, 3356 1070
(1998). http://dx.doi.org/10.1117/12.324507 PSISDG 0277-786X Google Scholar
O. de Weck et al.,
“Integrated modeling and dynamics simulation for the next generation space telescope (NGST),”
Proc. SPIE, 4013 920
(2000). http://dx.doi.org/10.1117/12.393964 PSISDG 0277-786X Google Scholar
G. Mosier et al., Dynamics and Controls, NASA Goddard Space Flight Center, Greenbelt, Maryland
(1998). Google Scholar
K. C. Liu et al.,
“Jitter test program and on-orbit mitigation strategies for solar dynamic observatory,”
in 20th Int. Symp. on Space Flight Dynamics,
16
(2007). Google Scholar
“STEREO guidance and control system specification,”
Laurel, Maryland
(2003). Google Scholar
M. Santina et al.,
“Line-of-sight pointing and control of electro-optical space systems – an overview,”
in Advances in the Astronautical Sciences, Guidance and Control 2011, AAS Guidance and Control Conf.,
213
–235
(2011). Google Scholar
D. S. Bayard,
“State-space approach to computing spacecraft pointing jitter,”
J. Guid. Control Dyn., 27
(3), 426
–433
(2004). http://dx.doi.org/10.2514/1.2783 Google Scholar
D. S. Bayard,
“A simple analytic method of computing instrument pointing jitter,”
JPL New Technology Pasadena, California
(2000). Google Scholar
BiographyMark E. Pittelkau received his BS and PhD degrees from Tennessee Technological University in 1981 and 1988, and his MS degree from Virginia Polytechnic Institute in 1983, all in electrical engineering. His work has been in spacecraft guidance, navigation, and control. He has designed and analyzed attitude determination and control systems for precision-pointing imaging and science spacecraft. William G. McKinley received his BS degree in physics magna cum laude from Arizona State University in 1971 and his MS degree in optical science from the University of Arizona in 1975. In his optical career, he has worked at Kodak, Goodyear, TRW, ITEK, Goodrich Aerospace, and Harris Corporation. He has been engaged in the creation and analysis of all types and aspects of optical systems and the processing and evaluation of optical system data products. |