Imaging static Fourier transform spectrometry: impact of trajectory perturbations on the hyperspectral images
12 July 2023
Varvara Chiliaeva, Olivier Gazzano, Yann Ferrec, Herve Sauer, Andrés Almansa, François Goudail
Proceedings Volume 12777, International Conference on Space Optics — ICSO 2022; 127774H (2023) https://doi.org/10.1117/12.2690836
Event: International Conference on Space Optics — ICSO 2022, 2022, Dubrovnik, Croatia
Abstract
Imaging static Fourier transform spectrometry (isFTS) may be used for pushbroom air- or spaceborne hyperspectral remote sensing. In isFTS, the spectral information is multiplexed over several instantaneous images, and numerical reconstruction is needed to recover the full spectrum for each pixel. The registration of the instantaneous images is a crucial step: insufficient precision leads to artefacts in the images and to a degradation of the estimated spectra. We developed a simulation program of the entire processing chain of an isFTS system that can reproduce the degradations of actual isFTS images. Using this program, we studied the spatial and spectral effects of temporally correlated and uncorrelated trajectory perturbations. We also performed a theoretical study to quantify these degradations. We demonstrated that the most significant degradations occur in the regions of high radiance gradient.

1. INTRODUCTION

Imaging static Fourier transform spectrometry (isFTS) has been used for different ground-based or airborne remote sensing applications, such as the observation of the plume of a volcano [1] or the observation of the surface of the Earth [2]. Since the instrument has no moving parts, isFTS is robust and suitable for space applications [2]. It is also SNR efficient in most application scenarios [3]. The key component of isFTS instruments is an interferometer that produces the interferograms required to retrieve the spectra [4]. The interferometer is designed so that the optical path difference depends on the position in the field; hence it is necessary to scan the scene by flying over it in order to build the interferogram. A sequence of raw (or instantaneous) images is acquired and then registered. The spectra are then numerically reconstructed [5, 6].

The goal of our work is to understand the physical and numerical limitations of isFTS, and especially the impact of image registration errors. We developed two programs to perform a thorough study of isFTS: a direct program and an inverse program. After validating these programs in an ideal scenario, we added uncorrected trajectory perturbations. We studied the spatial and spectral effects of perturbations in the along-track and across-track directions, and identified the regions of the images that are most affected by perturbation-related noise.

In Sec. 2, we explain the general principles of isFTS. In Sec. 3, the image formation model and its use in the simulation program are detailed. Section 4 focuses on the influence of a single perturbation on the hyperspectral images. Finally, in Sec. 5 we study the effects of two particular cases: white noise and sinusoidal perturbations.

2. GENERAL PRINCIPLE OF ISFTS

2.1 Classical FTS and Imaging FTS

isFTS derives from classical (non-imaging) FTS. In an FTS system, a two-wave interferometer scans the available optical path difference (OPD) span. The scanning is usually temporal, meaning that one of the arms of the interferometer is moving. The intensity represented as a function of the OPD is called an interferogram. Unlike spectrometers based on dispersive elements such as prisms or gratings, FTS systems require a numerical treatment of the signal to recover the spectrum. It can be proven that the spectrum is the real part of the Fourier transform of the interferogram [7]. In its simplest form, this relationship is:

$$ I(\delta) = \int_{\sigma_{\min}}^{\sigma_{\max}} S(\sigma)\,\big[\,1 + \cos(2\pi\sigma\delta)\,\big]\,\mathrm{d}\sigma \tag{1} $$

with δ the OPD (m), σ the wavenumber (m⁻¹), S(σ) the spectrum, and σmin and σmax the limits of the spectral range of the instrument.
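As a purely illustrative sketch of this relationship (and not of the processing chain described later), the following Python snippet builds an interferogram from a synthetic spectrum according to Eq. 1 and recovers the spectrum as the real part of a discrete Fourier transform; the spectral lines, the OPD sampling and the normalization are arbitrary choices.

    import numpy as np

    # Synthetic two-line spectrum on an arbitrary wavenumber axis (m^-1).
    sigma = np.linspace(1.0e6, 3.0e6, 2000)
    spectrum = np.exp(-((sigma - 1.8e6) / 2e4) ** 2) + 0.5 * np.exp(-((sigma - 2.4e6) / 2e4) ** 2)

    # OPD axis (m); the sampling step fixes the maximum measurable wavenumber.
    d_delta = 1.0e-7
    delta = np.arange(-512, 512) * d_delta

    # Interferogram following Eq. 1: I(delta) = integral of S(sigma) [1 + cos(2 pi sigma delta)] d sigma.
    interferogram = np.trapz(
        spectrum[None, :] * (1.0 + np.cos(2.0 * np.pi * sigma[None, :] * delta[:, None])),
        sigma, axis=1)

    # Spectrum estimate: real part of the Fourier transform of the mean-subtracted interferogram.
    ac_part = np.fft.ifftshift(interferogram - interferogram.mean())   # put delta = 0 at index 0
    sigma_axis = np.fft.rfftfreq(delta.size, d=d_delta)                # wavenumber axis (m^-1)
    estimate = np.real(np.fft.rfft(ac_part))

    print("strongest recovered line near %.3e m^-1 (expected 1.8e6)" % sigma_axis[np.argmax(estimate)])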

The isFTS systems we study are high-etendue systems [8], unlike dispersive instruments, which have an entrance slit. The field of view of a single frame is two-dimensional and limited by the size of the focal plane array (FPA). The interferometer is static, and the OPD depends on the position of a ground point in the along-track direction. The interference fringes appear on the raw image itself (Fig. 1 (a)). As the carrier moves linearly over the scene, the same ground point is successively seen through all the OPDs. A sequence of raw images is taken, then registered. In the registered sequence, a given ground point falls on the same pixel in every image (Fig. 1 (b)). The interferogram of each pixel is extracted from the stack of registered images; each image yields one point of the interferogram. The Fourier transform is then applied to each interferogram. The set of spectra corresponding to each point constitutes the hyperspectral cube, an object with two spatial dimensions and one spectral dimension. The hyperspectral cube can also be seen as a stack of spectral (or monochromatic) images (Fig. 1 (c)).

Figure 1.

Principle of isFTS: (a) three raw images of the same scene taken at different moments; the yellow dot follows the same ground point; (b) the images are registered; (c) three monochromatic images at different wavelengths.

00137_PSISDG12777_127774H_page_03_2.jpg

2.2 Image Registration Errors

Registration is performed by correlation methods and/or using line-of-sight data [5]. The line of sight can be perturbed by insufficient stabilization of the platform or by micro-vibrations. Registration errors cause a mixing of interferograms from different ground points, which leads to a degradation of the spectra.

Figure 2 (a) illustrates this issue. It represents a spectral image taken by the instrument SIELETERS, an airborne infrared isFTS instrument developed by ONERA [9]. In this specific case, where the micro-vibrations were not corrected by fine registration, a periodic, spatially correlated artefact can be observed on the edge of the building, in one direction only. This artefact disappears when a fine registration is applied (Fig. 2 (b)). In some cases this artefact could be mistaken for a periodic pattern present in the actual scene.

Figure 2.

Spectral images taken with the SIELETERS instrument designed by ONERA. a) Noise observed when no fine registration is applied. b) After fine registration is applied.

00137_PSISDG12777_127774H_page_04_1.jpg

In the following sections, we quantify this effect and identify the parameters on which it depends. This will enable us to specify the registration precision required so that this effect does not exceed a given level.

3. SIMULATION TOOLS

3.1 Image Formation Model

In this section, we detail the parameters taken into account to derive an expression for the signal on the detector (in photoelectrons).

The optical system is made of four elements: a front afocal system, an interferometer, a lens and a detector:

  • The afocal system is assumed ideal.

  • The lateral shearing interferometer introduces no distortion, so an incident plane wave results in two emergent plane waves propagating in the same direction; the fringes are thus localized at infinity. The function δ(x, y) that associates the optical path difference with the position on the FPA is called the OPD map. The interferometer introduces the sinusoidal term responsible for the presence of the fringes on the images (see Eq. 1).

  • The lens focuses the image on the focal plane. It is assumed to be a diffraction-limited, circular aperture system.

  • The detector is a focal plane array (FPA) of NDX by NDY pixels of pitch adet. All pixels have the same quantum efficiency η(σ).

We define the function h(x, y, σ) as the point spread function (PSF) of the system consisting of the lens and the detector, normalized such that ∫∫h(x, y, σ)dxdy = 1. The photonic spectral radiance (photons·s⁻¹·sr⁻¹·m⁻²·(m⁻¹)⁻¹) of the scene at coordinates (x, y) is written Lσ(x, y, σ).

Consequently, if G is the geometrical etendue associated with a pixel (m²·sr), T0 the constant transmission coefficient of the system, Δt the integration time (s), and (xC, yC) the coordinates of the optical center (or nadir point), then the number I(x, y) of electrons generated on a pixel of the FPA centered on (x, y) is given by:

00137_PSISDG12777_127774H_page_04_2.jpg
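Since Eq. 2 is not reproduced here, the following hypothetical helper merely illustrates the structure of the model for a monochromatic scene: the radiance map is multiplied by the fringe term introduced by the interferometer and integrated over the FPA pixels, with the PSF, etendue, transmission, integration time and quantum efficiency collapsed into a unit factor.

    import numpy as np

    def raw_frame(scene, opd_map, sigma_e, r=10):
        """Simplified raw isFTS frame for a monochromatic scene (illustrative sketch only).

        scene   : 2-D radiance map sampled on a grid r times finer than the FPA
        opd_map : OPD map delta(x, y) on the same fine grid (m)
        sigma_e : emission wavenumber (m^-1)
        r       : oversampling factor (scene samples per FPA pixel); the scene
                  dimensions must be multiples of r
        """
        # Fringe term introduced by the interferometer (ideal two-wave contrast).
        fringes = 0.5 * (1.0 + np.cos(2.0 * np.pi * sigma_e * opd_map))
        fine = scene * fringes
        ny, nx = fine.shape
        # Integrate each r x r block of fine-grid samples onto one FPA pixel
        # (this crude box integration stands in for the PSF and pixel footprint).
        return fine.reshape(ny // r, r, nx // r, r).sum(axis=(1, 3))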

3.2 Simulation Programs

The main tool used for our study of the isFTS method is a simulation program consisting of a direct program and an inverse program.

The direct program simulates an isFTS acquisition by computing a sequence of raw images based on the image model (Eq. 2), geometric and spectral information on the scene, and the trajectory of the carrier. The input parameters of the direct program must be chosen carefully to make sure that the sequence is invertible and to minimize non-physical numerical artefacts. The main parameters are:

  • The dimensions of the FPA: NDX × NDY

  • The photonic spectral radiance of the scene. The scene is discretized in both spatial dimensions and in the spectral dimension: it is a three-dimensional array of size NSX × NSY × Nσ. The initial scene is defined with a higher spatial resolution than the image formed on the detector, so the pixels of the scene are smaller than the pixels of the FPA. Their size is asc = adet/r, where r is an integer. This oversampling of the scene allows us to shift the scene by less than one detector pixel without numerical artefacts, which proves useful for the study of spatial perturbations.

  • The map of optical path differences δ(x, y)

  • The trajectory of the carrier: the user can set the successive positions of the optical centre. The carrier can only be translated (not rotated), at constant altitude and with nadir viewing. Perturbations of the trajectory can be introduced, for example by adding white noise or a sinusoid to the expected positions (a possible implementation is sketched after this list). The perturbations can occur in the along-track or across-track direction.

  • The optical cutoff frequency of the lens, which can be set to exceed the Nyquist frequency of the detector.
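As announced in the trajectory item above, the following is a minimal sketch of how a nominal trajectory and its uncorrected perturbation could be generated; the function names, the pixel units and the default values are illustrative assumptions, not the actual interface of the direct program.

    import numpy as np

    def nominal_trajectory(n_images, dy, x0=0.0, y0=0.0):
        """Nominal nadir-point positions (x_C, y_C): pure along-track translation by dy per frame."""
        k = np.arange(n_images)
        return np.stack([np.full(n_images, x0), y0 + k * dy], axis=1)

    def perturb_trajectory(trajectory, kind="white", amplitude=0.4, period=10.0, axis=1, rng=None):
        """Add an uncorrected perturbation to the nadir positions (units: FPA pixels).

        kind = "white"    : zero-mean uniform random offsets on the chosen axis
        kind = "sinusoid" : sinusoid of the given period (in images), mimicking micro-vibrations
        axis = 0 for across-track (x), 1 for along-track (y)
        """
        rng = np.random.default_rng() if rng is None else rng
        k = np.arange(len(trajectory))
        if kind == "white":
            eps = rng.uniform(-amplitude, amplitude, size=len(trajectory))
        else:
            eps = amplitude * np.sin(2.0 * np.pi * k / period)
        perturbed = trajectory.copy()
        perturbed[:, axis] += eps
        return perturbed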

The inversion program registers the images produced by the direct program, retrieves the interferograms and performs the inversion to obtain the hyperspectral cube. The interferogram is retrieved for every pixel of the scene that has been seen through the entire OPD span, in other words for all pixels of the scene that have crossed the entire field in the direction of the motion. The displacement of the carrier between two frames is assumed to be constant and known; that is, the inverse program does not take perturbations into account, which leads to degradations of the hyperspectral images. The spectrum of each pixel is obtained by computing the real part of the discrete Fourier transform of the interferogram.
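A toy version of this inversion step, under the same assumption of a constant and known displacement, could look as follows; the integer-pixel registration, the absence of border handling and the DC removal are simplifications of ours, not the authors' implementation.

    import numpy as np

    def invert_sequence(frames, dy_pix):
        """Toy inversion: register with the *expected* displacement, then FFT each pixel.

        frames : array of shape (N, Ny, Nx) holding the raw images
        dy_pix : assumed constant along-track displacement between frames (FPA pixels);
                 trajectory perturbations are deliberately ignored, as in the paper
        Returns a (Ny, Nx, N//2 + 1) cube of Re(DFT) values; only pixels seen through
        the whole OPD span are meaningful (border effects are not handled here).
        """
        n_frames = frames.shape[0]
        registered = np.empty_like(frames)
        for k in range(n_frames):
            # Shift frame k back by its expected displacement (integer pixels only here).
            registered[k] = np.roll(frames[k], -int(round(k * dy_pix)), axis=0)
        interferograms = np.moveaxis(registered, 0, -1)                      # (Ny, Nx, N)
        interferograms = interferograms - interferograms.mean(axis=-1, keepdims=True)
        return np.real(np.fft.rfft(interferograms, axis=-1))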

4. SINGLE REGISTRATION ERROR PROBLEM ON A MONOCHROMATIC SCENE

The study of a theoretical situation in which only one image of the sequence is perturbed yields valuable insight into the spatial characteristics of the noise in the spectral image.

4.1 Theoretical Analysis

In this section, we determine the expression of the intensity perturbation of the hyperspectral cube as a function of the pixel position and the wavenumber, in the case of a single registration error.

We consider a scene consisting of a rectangle of monochromatic uniform radiance I0 on a dark background, as illustrated in Fig. 3. A sequence of N instantaneous images is generated by displacing the scene in the y direction by a constant step equal to Δy.

Figure 3.

a) Four instantaneous images from a sequence with a single registration error at image k. The scene is a rectangle of intensity I0 on a dark background. The displacement step is Δy. Note that the fringes are omitted in this model. b) The same four images after registration.

00137_PSISDG12777_127774H_page_06_1.jpg

At image 0, the position of the optical center in the registered frame is C0(xC,0, yC,0). At image k, the expected position of the optical center, without perturbations, is Ck(xC,0, yC,0 + kΔy). If a registration error of small amplitude (ϵx, ϵy) occurs at image k, the position of the optical center becomes Ck(xC,0 + ϵx, yC,0 + kΔy + ϵy).

The OPD map is linear: δ(y) = py, with p a constant slope. We denote by δM,k the OPD associated with the point M at image k. We define O(xO, yO) as the reference point, so that δO,0 = δ0.

At image number k, a mixing of interferograms of different points of the scene occurs. The interferogram of point M, which should be periodic since the emission is monochromatic, presents an error at sample number k, as illustrated in Fig. 4.

Figure 4.

Perturbed interferogram: (a) as a function of the image number; (b) as a function of the OPD.

00137_PSISDG12777_127774H_page_06_2.jpg

The expression of the interferogram at the point O is:

$$ I_O(\delta) = I_u(\delta) + I_{P,k}\, D\!\left(\delta - \delta_{O,k}\right) $$

with D the Dirac distribution, IP,k the intensity of the perturbation (which depends on k; its expression is calculated below), and Iu(δ) the undisturbed interferogram.

The optical path difference associated with the point M(x, y) at image k, situated at the distance y − yO from the point O, is δM,k = δO,k + p(y − yO). If we consider that the contrast introduced by the fringes is negligible compared to the contrast of the scene, we can neglect the fringes on the instantaneous images (see Fig. 3). Then, the undisturbed interferogram is the same at the points M and O. This approximation is valid when the spectrum is broad, in which case the contrast of the fringes is low except when the OPD is close to zero. We will see below that this approximation is also reasonably valid in the case of a monochromatic spectrum.

The expression of the interferogram at M is:

$$ I_M(\delta) = I_u(\delta) + I_{P,k}\, D\!\left(\delta - \delta_{M,k}\right) $$

The spectrum at M is the real part of the Fourier transform of the interferogram:

$$ S_M(\sigma) = \operatorname{Re}\!\left[\int I_M(\delta)\, e^{-2i\pi\sigma\delta}\,\mathrm{d}\delta\right] = S_u(\sigma) + I_{P,k}\,\cos\!\left(2\pi\sigma\,\delta_{M,k}\right) $$

with Su(σ) the undisturbed spectrum.

We can note:

$$ S_M(x, y, \sigma) = S_u(\sigma) + P(x, y, \sigma) $$

where P(x, y, σ) is the perturbation:

$$ P(x, y, \sigma) = I_{P,k}\,\cos\!\left(2\pi\sigma\,\delta_{M,k}\right) = I_{P,k}\,\cos\!\big(2\pi\sigma\,[\,\delta_{O,k} + p\,(y - y_O)\,]\big) $$

Let us now determine the quantity IP,k.

On the image k, the intensity at M is:

$$ I_k(x, y) = I(x + \epsilon_x,\, y + \epsilon_y) \simeq I(x, y) + \epsilon_x\,\frac{\partial I}{\partial x}(x, y) + \epsilon_y\,\frac{\partial I}{\partial y}(x, y) $$

with I(x, y) the intensity of the initial scene.

Let us define the vectors:

$$ \vec{\epsilon} = \begin{pmatrix} \epsilon_x \\ \epsilon_y \end{pmatrix}, \qquad \vec{\nabla} I(x, y) = \begin{pmatrix} \partial I/\partial x \\ \partial I/\partial y \end{pmatrix} $$

Then the perturbation is:

$$ P(x, y, \sigma) = \left(\vec{\epsilon}\cdot\vec{\nabla} I(x, y)\right)\,\cos\!\big(2\pi\sigma\,[\,\delta_{O,k} + p\,(y - y_O)\,]\big) \tag{3} $$
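As a numerical companion to Eq. 3, the perturbation map can be evaluated directly on a pixel grid. The sketch below uses conventions of ours (image rows as the along-track coordinate y, OPD slope p in metres per pixel) and is only an illustration of the formula, not part of the simulation programs.

    import numpy as np

    def theoretical_perturbation(scene, eps, sigma, p, delta_ok, y_o=0.0, pitch=1.0):
        """Evaluate the approximated perturbation of Eq. 3 on a pixel grid (illustrative only).

        scene    : 2-D intensity map I(x, y), rows along y (along-track), columns along x
        eps      : (eps_x, eps_y) registration error of the perturbed frame (pixels)
        sigma    : wavenumber (m^-1)
        p        : slope of the linear OPD map (m per pixel)
        delta_ok : OPD of the reference point O at the perturbed frame k (m)
        """
        grad_y, grad_x = np.gradient(scene, pitch)            # d/dy (rows), d/dx (columns)
        y = np.arange(scene.shape[0])[:, None] * pitch
        phase = 2.0 * np.pi * sigma * (delta_ok + p * (y - y_o))
        return (eps[0] * grad_x + eps[1] * grad_y) * np.cos(phase)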

4.2 Illustration of the Impact of the Single Perturbation on the Hyperspectral Image

On a vertical edge of the rectangle of Fig. 3, ∂I/∂y = 0 and the gradient reduces to its x component, ∂I/∂x.

Equation 3 becomes:

$$ P(x, y, \sigma) = \epsilon_x\,\frac{\partial I}{\partial x}(x, y)\,\cos\!\big(2\pi\sigma\,[\,\delta_{O,k} + p\,(y - y_O)\,]\big) $$

The perturbation presents a periodic sinusoidal behaviour in the direction of the movement, with the same spatial frequency as the fringes on the raw images. This result is consistent with the images taken by SIELETERS, which present a “crenel” effect on vertical edges (Fig. 2).

The ability to control the geometry of the initial scene in the simulation program enabled us to identify the areas of the images that are most affected by the perturbation. The simulation was performed with an initial scene composed of a uniform rectangle on a dark background, emitting a monochromatic spectrum at σe. A sequence of 160 images was generated. A single registration error was introduced at image 80 (when the entire rectangle is in the field of view of the camera), such that ϵx = ϵy = –0.4 FPA pixels. We extracted the monochromatic image at σe from the hyperspectral cube (Fig. 5).

Figure 5.

Monochromatic images from a hyperspectral cube simulated with a monochromatic rectangle as the initial scene. Left: monochromatic image at σe without perturbations. Centre: monochromatic image at σe with a single perturbation; a periodic noise appears on the vertical edges. Right: perturbation P(x, y, σe).

00137_PSISDG12777_127774H_page_08_1.jpg

As in the SIELETERS image, the simulated monochromatic image shows spatially correlated noise in the direction of the movement. The right-hand image of Fig. 5 shows that the perturbation is larger on the edges of the rectangle, i.e. in the areas of high radiance gradient.

We compared the theoretical and simulated results for a uniform rectangle with a single perturbation, using Eq. 3 for the (strongly approximated) theoretical calculation. The theoretical and simulated perturbations are represented in Fig. 6 and show good agreement. In Fig. 6(c) we show the difference between the theoretical and simulated results: the image presents a pattern with the exact periodicity of the fringes on the instantaneous images, which indicates that the discrepancies are caused by the approximation of neglecting the fringes.

Figure 6.

P(x, y, σe) with a single perturbation. a) Theoretical perturbation. b) Simulated perturbation. c) Difference between theoretical and simulated perturbation.

00137_PSISDG12777_127774H_page_09_1.jpg

5. WHITE NOISE AND SINUSOIDAL PERTURBATIONS

Having studied the case of a single registration error, we now extend the study to situations with multiple errors, focusing on two relevant cases: white noise, which corresponds to random registration errors, and sinusoidal errors, which simulate micro-vibrations of the line of sight.

5.1 General Case Theory

If more than one image is perturbed, the expression of the perturbation becomes:

$$ P(x, y, \sigma) = \sum_{k=0}^{N-1} \left(\vec{\epsilon}_k\cdot\vec{\nabla} I(x, y)\right)\,\cos\!\big(2\pi\sigma\,[\,\delta_{O,k} + p\,(y - y_O)\,]\big) $$

and δO,k = δ0 + kpΔy, thus:

$$ P(x, y, \sigma) = \sum_{k=0}^{N-1} \left(\vec{\epsilon}_k\cdot\vec{\nabla} I(x, y)\right)\,\cos\!\big(2\pi\sigma\,[\,\delta_0 + p\,(y - y_O) + k\,p\,\Delta y\,]\big) \tag{4} $$

5.2 White Noise

In the simulation, the white noise is introduced using a uniformly distributed random integer generator. The initial scene is oversampled with respect to the detector by a factor r = 10, thus allowing displacements by a distance as small as a tenth of an FPA pixel. To simulate the perturbations ϵx and ϵy, we generate random numbers in the discrete set {–5/r, –4/r, …, 0, 1/r, …, 4/r} and add them to the expected position of the nadir point Ck(xC,k, yC,k).

We performed Monte-Carlo simulations to determine the regions of the scene that undergo the largest degradations. The target is a tilted wedge with an orientation of 12°, emitting a monochromatic spectrum at the wavenumber σe. We simulated a movement of the carrier in the along-track direction and generated sequences of 100 perturbed images. Each sequence was then processed by the inversion program without taking the perturbations into account. We generated several realizations of these sequences and computed, for each pixel of the monochromatic image at σe, the standard deviation over the realizations.
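A minimal sketch of this Monte-Carlo procedure is given below, assuming r = 10, sequences of 100 images and 10 realizations as in the text and figures; run_once is a hypothetical stand-in for the direct and inverse programs of Sec. 3.2.

    import numpy as np

    def discrete_white_noise(n_images, r=10, lo=-5, hi=4, rng=None):
        """Random registration errors quantized to multiples of 1/r, as in Sec. 5.2.

        Draws integers uniformly in [lo, hi] and divides by the oversampling factor r,
        i.e. values in {-5/r, -4/r, ..., 4/r} for the defaults quoted in the paper.
        """
        rng = np.random.default_rng() if rng is None else rng
        return rng.integers(lo, hi + 1, size=n_images) / r

    def per_pixel_std(run_once, n_realizations=10, n_images=100, rng=None):
        """Monte-Carlo estimate of the per-pixel standard deviation of a monochromatic image.

        run_once : callable taking an array of per-frame errors and returning the
                   reconstructed monochromatic image (hypothetical; stands for the
                   direct + inverse programs).
        """
        rng = np.random.default_rng() if rng is None else rng
        images = [run_once(discrete_white_noise(n_images, rng=rng)) for _ in range(n_realizations)]
        return np.std(np.stack(images), axis=0)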

We performed the simulation for perturbations in both the along-track (Figure 7) and across-track directions (Figure 8).

Figure 7.

Monochromatic image with white noise in the along-track direction (parallel to the movement of the carrier). Left: one realization of the white noise. Right: standard deviation over 10 realizations.

00137_PSISDG12777_127774H_page_09_4.jpg

Figure 8.

Monochromatic image with white noise in the across-track direction (perpendicular to the movement of the carrier). Left: one realization of the white noise. Right: standard deviation over 10 realizations.

00137_PSISDG12777_127774H_page_10_03a.jpg

The pseudo-periodic noise on the quasi-vertical edge can be observed with both along-track and across-track perturbations. The noise is spatially correlated even though the perturbations are temporally uncorrelated.

The standard deviation images show that the intensity estimation errors are larger in the areas of high intensity gradient (i.e. on the edges), and even more so on the edges that are approximately perpendicular to the direction of the perturbations. For instance, in Fig. 7 the random noise is in the vertical direction and the most perturbed edge is the quasi-horizontal one, whereas in Fig. 8 the noise is in the horizontal direction and the quasi-vertical edge is more perturbed. This is consistent with Eq. 3.

5.3 Sinusoidal Perturbation

The case of the sinusoidal perturbation is interesting since it reproduces micro-vibrations of the carrier. For a sinusoidal variation along a fixed direction, the amplitude of the perturbation at image k has the form:

$$ \epsilon_k = a\,\sin\!\left(\frac{2\pi k}{K}\right) $$

where a is the amplitude and K the period of the perturbation in number of images, not necessarily an integer.

We note:

00137_PSISDG12777_127774H_page_10_06.jpg

From Eq. 4 we obtain:

00137_PSISDG12777_127774H_page_10_07.jpg

Using the exponential notation, so that $\cos\theta = \operatorname{Re}\!\left(e^{i\theta}\right)$:

00137_PSISDG12777_127774H_page_10_09.jpg

We note that $\sum_{k=0}^{N-1} e^{ik\Phi}$ is the sum of a geometric progression of common ratio $e^{i\Phi}$:

$$ \sum_{k=0}^{N-1} e^{ik\Phi} = \frac{1 - e^{iN\Phi}}{1 - e^{i\Phi}} $$

The modulus of this sum takes non-negligible values only when $|1 - e^{i\Phi}|$ tends to zero, i.e. when Φ = 2πn with n an integer.

Consequently, the perturbation is non-negligible for values of σ such that σ = n/(pΔy) ± 1/(KpΔy), with n an integer. At these values of σ the spectrum presents parasitic peaks.

Let Δδ = pΔy denote the sampling step of the interferogram. According to the Shannon–Nyquist theorem, the estimated spectrum is periodic with a period of 1/Δδ, and the maximum measurable wavenumber is 1/(2Δδ). Hence, if we only represent the period such that n = 0, two parasitic peaks will appear:

$$ \sigma_{0\pm} = \pm\frac{1}{K\,\Delta\delta} $$

A simulation was performed with a scene emitting a monochromatic spectrum at σe = 2.5 · 10⁶ m⁻¹, with a sinusoidal perturbation in the direction of the movement. We chose the values K = 10, p = 10⁻⁷ m/pixel and Δy = 1 FPA pixel. We therefore expected to find the peaks σ0+ = 10⁶ m⁻¹ and σ0– = –10⁶ m⁻¹. The spectrum of a point of that scene is represented in Fig. 9.
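Substituting the values quoted above into the expression of σ0± makes the expected positions explicit:

$$ \sigma_{0\pm} = \pm\frac{1}{K\,p\,\Delta y} = \pm\frac{1}{10 \times 10^{-7}\,\mathrm{m}} = \pm 1\cdot 10^{6}\ \mathrm{m}^{-1} $$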

Figure 9.

Spectrum of a monochromatic emission at σe = 2.5 · 10⁶ m⁻¹ with a sinusoidal perturbation of period K = 10 images.

00137_PSISDG12777_127774H_page_11_7.jpg

The peaks at ±σe are visible, as well as the peaks at σ0±. Moreover, another set of parasitic peaks appears at ±σe ± σ0, namely at ±3.5 · 10⁶ m⁻¹ and ±1.5 · 10⁶ m⁻¹. These peaks do not appear in our approximated theoretical model, since we neglected the fringes of the instantaneous images.
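The origin of all these peaks can be reproduced with a one-pixel toy model: under the local linearization of Sec. 4.1, the registered interferogram of a pixel lying on an intensity gradient is (I0 + g·εk) multiplied by the fringe term, so the sinusoidal error creates lines at ±σ0 and, through the product with the fringes, at ±σe ± σ0. In the sketch below, the number of frames and the amplitudes are arbitrary choices of ours; only the peak positions matter.

    import numpy as np

    # Values from the example above (sigma_e, OPD step, period K); N and amplitudes are arbitrary.
    sigma_e, d_delta, K, N = 2.5e6, 1.0e-7, 10, 100
    I0, g, a = 1.0, 0.5, 0.3            # local intensity, local gradient, error amplitude (a.u.)

    k = np.arange(N)
    delta = k * d_delta                                  # OPD sampled at frame k
    eps = a * np.sin(2.0 * np.pi * k / K)                # sinusoidal registration error
    fringes = 0.5 * (1.0 + np.cos(2.0 * np.pi * sigma_e * delta))

    # Registered interferogram of a pixel on an intensity gradient, error left uncorrected:
    interferogram = (I0 + g * eps) * fringes

    spectrum = np.fft.rfft(interferogram - interferogram.mean())
    sigma_axis = np.fft.rfftfreq(N, d=d_delta)

    strongest = np.sort(sigma_axis[np.argsort(np.abs(spectrum))[-4:]])
    print(strongest)   # ~ [1.0e6, 1.5e6, 2.5e6, 3.5e6] m^-1: sigma_0, sigma_e - sigma_0, sigma_e, sigma_e + sigma_0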

6. CONCLUSION

The goal of our work was to study the physical and numerical limitations of imaging static Fourier transform spectrometry (isFTS), and in particular the impact of image registration errors. We developed and implemented a simulation tool composed of two programs: a direct program that generates a sequence of instantaneous images, and an inversion program that retrieves the interferograms and then the spectra in order to obtain the hyperspectral cube. We performed simulations introducing registration errors and compared the results to our approximated analytical model of the intensity perturbation.

Our approximated theoretical model quantifies the degradation of the estimated spectrum as a function of the amplitude of the perturbation, the intensity gradient of the scene and the wavenumber. Our simulation results show good agreement with the model. We showed that the errors on the estimation of the scene radiance are larger in regions of high gradient of the initial scene, i.e. at the edges between regions of different radiance. A pseudo-periodic effect is observed on the edges parallel to the direction of the movement, with the same periodicity as the fringes on the instantaneous images. With sinusoidal registration errors, parasitic peaks are observed in the spectra at wavenumbers that depend on the temporal period of the perturbation. Further developments of this study should include simulations of more complex scenarios, with broad spectra corresponding to actual materials.

ACKNOWLEDGEMENTS

Our research is funded by Thales Alenia Space, Cannes, France.

REFERENCES

[1] 

Gabrieli, A., Wright, R., Porter, J. N., Lucey, P. G., and Honnibal, C., “Applications of quantitative thermal infrared hyperspectral imaging (8–14 μm): measuring volcanic SO2 mass flux and determining plume transport velocity using a single sensor,” Bulletin of Volcanology 81(8) (2019). https://doi.org/10.1007/s00445-019-1305-x

[2] 

Wright, R., Lucey, P., Crites, S., Horton, K., Wood, M., and Garbeil, H., “BBM/EM design of the thermal hyperspectral imager: An instrument for remote sensing of earth’s surface, atmosphere and ocean, from a microsatellite platform,” Acta Astronautica 87, 182–192 (2013). https://doi.org/10.1016/j.actaastro.2013.01.001

[3] 

Ferrec, Y., “Noise sources in imaging static Fourier transform spectrometers,” Optical Engineering 51(11), 111716 (2012). https://doi.org/10.1117/1.OE.51.11.111716

[4] 

Lucey, P. G., Hinrichs, J. L., and Akagi, J., “A compact LWIR hyperspectral system employing a microbolometer array and a variable gap Fabry-Perot interferometer employed as a Fourier transform spectrometer,” Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 8390, 83900R (2012). https://doi.org/10.1117/12.918974

[5] 

Ferrec, Y., Taboury, J., Sauer, H., Chavel, P., Fournet, P., Coudrain, C., Deschamps, J., and Primot, J., “Experimental results from an airborne static Fourier transform imaging spectrometer,” Applied Optics 50(30), 5894–5904 (2011). https://doi.org/10.1364/AO.50.005894

[6] 

Su, L., Yuan, Y., Xiangli, B., Huang, F., Cao, J., Li, L., and Zhou, S., “Spectrum reconstruction method for airborne temporally-spatially modulated Fourier transform imaging spectrometers,” IEEE Transactions on Geoscience and Remote Sensing 52(6), 3720–3728 (2014). https://doi.org/10.1109/TGRS.2013.2275174

[7] 

Davis, S., Abrams, M., and Brault, J., Fourier Transform Spectrometry, Academic Press (2001).

[8] 

Horton, R. F., “Optical design for a high-etendue imaging Fourier-transform spectrometer,” Imaging Spectrometry II, 2819, 300–315, SPIE (1996). https://doi.org/10.1117/12.258077

[9] 

Coudrain, C., Bernhardt, S., Caes, M., Domel, R., Ferrec, Y., Gouyon, R., Henry, D., Jacquart, M., Kattnig, A., Perrault, P., Poutier, L., Rousset-Rouvière, L., Tauvy, M., Thétas, S., and Primot, J., “SIELETERS, an airborne infrared dual-band spectro-imaging system for measurement of scene spectral signatures,” Optics Express 23(12), 16164 (2015). https://doi.org/10.1364/OE.23.016164