Recording and identifying faint objects through atmospheric scattering media with an optical system is fundamentally interesting and technologically important. We introduce a comprehensive model that incorporates contributions from target characteristics, atmospheric effects, imaging systems, digital processing, and visual perception to assess the ultimate perceptible limit of geometrical imaging, specifically the angular resolution at the boundary of the visible distance. The model allows us to reevaluate the effectiveness of conventional image recording, processing, and perception and to analyze the limiting factors that constrain image recognition capabilities in atmospheric media. The simulations were compared with experimental results measured in a fog chamber and in outdoor settings. The results reveal good general agreement between analysis and experiment, pointing the way toward harnessing the physical limit of optical imaging in scattering media. An immediate application of the study is the extension of the imaging range by a factor of 1.2 through noise reduction via multiframe averaging, hence greatly enhancing the capability of optical imaging in the atmosphere.
1. Introduction

Image acquisition and recognition abilities are traditionally constrained by either the diffraction limit or the resolving capability of an optical imaging system. In low-visibility environments, optical images suffer severe contrast degradation due to atmospheric effects, resulting in reduced viewing distance and angular resolution (AR). Hence, image perceptibility can be partially or completely impaired. Although existing imaging models based on the contrast threshold of human vision1–6 have succeeded in predicting the probability of human recognition of targets captured by traditional imaging systems through the atmosphere, the evolution of imaging technologies has substantially enhanced the capability to perceive images. The continuous development of imaging technologies in the atmosphere and in scattering media necessitates a re-examination of the traditional optical imaging models. Modern experimental settings, incorporating advanced optical components and processing techniques, present new opportunities and challenges for improving image perceptibility in atmospheric scattering media. For instance, Bian et al.7 applied liquid crystals to achieve a higher signal-to-interference ratio, allowing deeper penetration for imaging through fog. Likewise, Wang et al.8 and Demos et al.9 applied optical filtering and polarizers to improve imaging performance. The development of high-dynamic-range sensors enables exponentially increased sensitivity for recording optical signals. Deep-learning-assisted signal processing10–13 demonstrates unprecedented effects. Advanced display systems14 have shown the potential to further improve the human perception of foggy images. All these advancements need to be taken into consideration to accurately predict the enhanced imaging performance under low-visibility conditions.
On the other hand, one of the main challenges in imaging modeling is the complex interaction between light and atmospheric particles. Traditional models often simplify these interactions, neglecting factors such as dynamic changes in atmospheric conditions. In addition, the validation of these models in both controlled and real-world settings requires unambiguous experimental verification. In this work, we introduce a reformulated imaging model that comprehensively includes all processes in imaging: optical transfer, recording, signal processing, and perception. The model is based on the principles of the meteorological optical range (MOR). More specifically, we have incorporated a special parameter, the perceiving factor, which describes image perceptibility as introduced in human vision models,15 into our imaging model. The validity of this parameter for discernible images has been experimentally verified. By introducing this parameter into the modeling of imaging systems, we enable a dynamic evaluation of the imaging limit. Our research aims to bridge the gap between theoretical modeling and practical applications by providing a detailed analysis of the modulation transfer function (MTF) in the presence of updated detection configurations. This work not only provides physical insight for dehazing algorithms16,17 but also offers guidance for refining optical imaging systems. Our analysis was examined through experiments in a fog chamber as well as in outdoor settings. Good agreement between theoretical analysis and experimental results was obtained. Hence, the work described allows the quantitative determination of the physical boundary of optical imaging in atmospheric scattering media, filling a critical gap in current modeling approaches and setting the stage for future innovation in atmospheric imaging.
2. Principles and Methods

2.1. Imaging Model

The MTF is widely used to describe the characteristics of an optical system,18 and it is applied in this work to describe the image transfer behavior in atmospheric scattering media. Inspired by the MOR theory, in which image perceptibility is determined by the vision's minimum modulation difference, we advocate a refined imaging model that relies merely on the appearance of the final modulation delivered by the imaging system. Hence, the following formulation15 is applied to describe the imaging system:

M_out = M_t · MTF_sys ≥ max(α · M_n, 2^(−b)),  (1)

where M_t, M_out, and M_n represent the modulation (or contrast) of the target, the imaging system's output, and the noise, respectively. MTF_sys is the system's MTF, α is the perceiving factor, a multiplier of the theoretical minimum discernible modulation, and b is the minimum of the sensor's dynamic range and bit depth. Equation (1) presents two critical conditions: the signal-to-noise ratio (SNR) condition, delineating the system's capability to distinguish modulation variations across spatial frequencies amid noise; and the signal-to-interference ratio (SIR) condition, illustrating an absolute threshold for modulation detection limited by the system's sensitivity to optical intensity. Specifically, for the SNR condition, the parameter α serves as a comprehensive indicator of the effectiveness of image perceptibility, spanning from naked-eye observation to algorithm-assisted analysis, with its magnitude reflecting the efficacy of postprocessing techniques. When the imaging modulation falls below α · M_n, representing the system perceptibility, the target becomes obscured by noise, thus reaching the noise-dominated imaging limit. Likewise, when the signal modulation falls short of 2^(−b), the signal is lost during the optoelectronic recording and conversion.
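As a minimal sketch of how Eq. (1) can be applied, the following check evaluates both conditions for a given target modulation, system MTF, and noise modulation; the function and all numerical values are illustrative assumptions, not measured parameters of the systems discussed below.

```python
def discernible(m_target, mtf_sys, m_noise, alpha=3.0, bit_depth=8):
    """Feasibility check of the two conditions in Eq. (1).

    m_target  : modulation (contrast) of the target, M_t
    m tf_sys is the system MTF at the spatial frequency of interest,
    m_noise   : modulation of the noise, M_n
    alpha     : perceiving factor (multiplier on the noise modulation)
    bit_depth : effective bit depth; 2**-bit_depth is the smallest
                modulation the quantizer can record
    """
    m_out = m_target * mtf_sys            # modulation delivered by the system
    snr_ok = m_out >= alpha * m_noise     # SNR condition
    sir_ok = m_out >= 2.0 ** -bit_depth   # SIR condition
    return snr_ok and sir_ok

# 40% target contrast, heavily attenuated (MTF 0.008), low-noise sensor:
print(discernible(0.4, 0.008, 0.0005, bit_depth=8))   # False: below the 8-bit SIR floor
print(discernible(0.4, 0.008, 0.0005, bit_depth=16))  # True: the 16-bit floor is far lower
```

The same target is lost on the 8-bit sensor but recorded on the 16-bit one, which is the SIR-limited behavior discussed for the experiments below.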
The expression for the target modulation M_t is19

M_t = (I_max − I_min)/(I_max + I_min) = (ρ_max − ρ_min)/(ρ_max + ρ_min),  (2)

where I_max and I_min represent the maximum and minimum light intensities, respectively, whereas ρ_max and ρ_min represent the maximum and minimum reflectances, respectively. The light transmission and imaging process involves the ambient light, the reflected light, the interference light, and the scattered light. The latter three transmit through the atmosphere and lenses and are then converted into electrical signals by the imaging sensor. Thus, the MTF of the imaging system, MTF_sys, can be expressed as

MTF_sys = MTF_atm · MTF_lens · MTF_sensor,  (3)

where MTF_atm, MTF_lens, and MTF_sensor denote the MTFs of the atmosphere, lens, and sensor, respectively. MTF_atm is further divided into an aerosol term and a turbulence term.20–23 Following Sec. 5, the expressions of all these MTFs can be written in terms of ν, the spatial frequency; λ, the working wavelength; f, the focal length; D, the effective diameter of the lens; r0, the atmospheric coherence width; τ, the optical thickness; d, the Airy disk diameter; and w, the line spread width of the sensor. The modulation of the noise, M_n,15 is calculated from the Fourier spectrum of the recorded noise, which originates both from the atmosphere and from the imaging system itself. By substituting the above equations into Eq. (1), we can obtain the expression for the AR as a function of τ under the SNR condition, in which the AR is set by the minimum distinguishable distance. To satisfy the SIR condition, there exists a maximum value for τ. Equation (8) indicates that, when attenuation is weak, the limitation of the imaging is entirely due to the Rayleigh criterion and the imaging resolution. These two equations quantify two boundaries for imaging, representing the imaging limit and revealing the complex nature of imaging through atmospheric scattering media.
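A short numerical sketch of the quantities defined above, assuming a Gaussian approximation for the lens MTF and a sinc model for the sensor MTF (both standard forms; the specific parameter values d, w, and nu below are made-up illustrations, not the paper's parameters):

```python
import numpy as np

def modulation(intensity):
    """Michelson modulation M = (I_max - I_min) / (I_max + I_min), Eq. (2)."""
    i_max, i_min = float(np.max(intensity)), float(np.min(intensity))
    return (i_max - i_min) / (i_max + i_min)

def mtf_lens_gaussian(nu, d):
    """Gaussian approximation of the lens MTF for a PSF of FWHM d:
    exp(-pi^2 d^2 nu^2 / (4 ln 2))."""
    return np.exp(-(np.pi ** 2) * d ** 2 * nu ** 2 / (4.0 * np.log(2.0)))

def mtf_sensor_sinc(nu, w):
    """Sensor MTF modeled as |sinc(w nu)|; np.sinc(x) = sin(pi x)/(pi x)."""
    return np.abs(np.sinc(w * nu))

# Sinusoidal test pattern with 20% contrast about a mean level of 1.0
x = np.linspace(0.0, 2.0 * np.pi, 256)
pattern = 1.0 + 0.2 * np.sin(x)
print(round(modulation(pattern), 3))  # ~0.2

# Cascade two components at one spatial frequency, in the spirit of Eq. (3)
nu = 50.0            # cycles/mm, illustrative
d, w = 0.005, 0.004  # mm; assumed Airy FWHM and LSF width
print(mtf_lens_gaussian(nu, d) * mtf_sensor_sinc(nu, w))
```

Because independent stages multiply, any one component falling toward zero (e.g., the atmospheric term at large τ) dominates the product and hence the perceptible limit.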
The discussion indicates that the imaging limit can be pushed via various strategies, such as refining imaging methods, employing lenses with higher numerical apertures, enhancing sensor dynamic range and bit depth, reducing system noise, and implementing advanced algorithms. The choice of the perceiving factor α is pivotal in framing our approach to quantifying imaging limitations in atmospheric scattering media. Within the sensitivity of the human eye (10-bit),24,25 there exists a positive correlation between the probability of visual perception and the value of α for unprocessed images.26 When α reaches the detection threshold, the human eye can reliably detect objects. Following the guidelines of the International Union of Pure and Applied Chemistry (IUPAC), we set α = 3 as the detection limit.27,28 Drawing from both the theoretical groundwork laid out above and the empirical benchmarks set by existing research, experimental validation of this threshold becomes imperative, as it offers a critical test of its applicability in real-world settings. The objective of the indoor experiment was to rigorously evaluate the model's accuracy in predicting imaging limits within a controlled low-visibility environment and to replicate typical imaging scenarios examining both the SIR condition and the SNR condition. Thus, we chose the following two cameras: Daheng MER-232-48GM-P NIR (8-bit) and PCO EDGE 4.2 (16-bit). The MER-232-48GM-P NIR is optimized for the near-infrared and is SIR-limited, whereas the EDGE 4.2 is visible-light-responsive and SNR-limited. Lens choices were the Kowa M7528-MP and Zeiss Milvus 2/100M with focal lengths of 75 and 100 mm, respectively, to match the cameras' pixel dimensions. The fog chamber experiment was conducted at the Visibility Calibration Laboratory of the China Meteorological Administration in Shanghai, China. The temperature was 298 K, and the air pressure was 101 kPa.
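A back-of-the-envelope comparison shows why the two cameras probe different conditions of Eq. (1); the noise modulation used here is an assumed placeholder, not a measured value:

```python
# Which condition binds for an 8-bit vs. a 16-bit sensor?
alpha, m_noise = 3.0, 1e-3          # perceiving factor; assumed noise modulation

for bits in (8, 16):
    sir_floor = 2.0 ** -bits        # smallest modulation the quantizer records
    snr_floor = alpha * m_noise     # smallest modulation visible above the noise
    binding = "SIR" if sir_floor > snr_floor else "SNR"
    print(f"{bits}-bit: SIR floor {sir_floor:.1e}, "
          f"SNR floor {snr_floor:.1e} -> {binding}-limited")
```

With these numbers the 8-bit quantization floor (about 3.9e-3) sits above the noise floor, so that camera is SIR-limited, while the 16-bit floor (about 1.5e-5) sits far below it, leaving that camera SNR-limited.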
The fog in the experiment was made of water particles with micrometer-scale diameters. The target was the USAF 1951 resolution test chart, featuring six groups (G:0 to G:5) with six elements (E:1 to E:6) in each group; the reflectance of the chart's black portion is around 2.5% and that of the white area around 90%. The target and cameras were aligned horizontally, with a working distance of 16 m. A visibility meter sensor (HUAYUN SOUNDING DNQ1) with a 10% margin of error was placed at the same height as the target to measure the visibility,29,30 as Fig. 1 demonstrates. To further advance the system's ability to penetrate fog and to validate the SNR condition, we conducted an in-depth test and analysis in outdoor settings. The outdoor experiment was conducted in Chongqing, China, during October, with the target facing west. The temperature and air pressure at the location were 300 K and 100 kPa, respectively. In foggy weather, the fog persisted stably for over 3 h, with the visibility increasing from about 1 to 3 km. We utilized the PCO EDGE 4.2 paired with a Canon EF 400 mm F/5.6L lens. The resolution test chart was positioned 5 km away from the camera. Visibility measurement was facilitated by a drone-mounted visibility sensor, offering real-time data on the atmospheric conditions. We recorded the target under the same experimental settings with and without fog. The modulation depth of the target without fog was used in our calculation, effectively accounting for the turbulence effect on the MTF. This calibration allows a simplification of the theoretical analysis by neglecting MTF terms other than the aerosol term. In the data acquired without fog, the theoretical resolution of the imaging system was predicted by Eq. (9) using the modulation depth of the target captured at a close distance.
However, due to the effect of atmospheric turbulence, the resolution was adjusted when using the modulation depth measured at a distance without fog. According to Eq. (8), noise reduction can effectively improve the system's imaging capability. We adopted a simple approach of multiframe (M-F) averaging to reduce Gaussian noise,31,32 thereby increasing the SNR; the noise variance decays in inverse proportion to the number of frames averaged. To estimate the noise variance,33 we identified a homogeneous region within the image and filtered it with a moderate smoothing kernel to obtain a noise-free reference against which the variance was computed. The data were sampled at intervals of 100 ms, and 200 images were averaged.

2.2. Experimental Imaging Criteria

The recorded images were further processed with the following techniques. For the 8-bit images, the series comprised the original image, the dark channel prior (DCP) result,34 and the contrast limited adaptive histogram equalization (CLAHE) result.35 DCP leverages the atmospheric degradation model to enhance images. CLAHE enhances local modulation by dividing the image into small regions and applying histogram equalization separately to each region. Both are effective dehazing algorithms for enhancing images. For the 16-bit data, participants were invited to perform custom histogram adjustments to achieve the optimal enhancement without prior knowledge of the target. Examples of the data collected in the fog chamber experiment are shown in Fig. 2. The figure presents the target alongside the averaged and normalized values inside the red and blue regions in Figs. 2(a) and 2(b), at high and low visibility, respectively. In Fig. 2(a), the imaging is limited by the system's resolution, extending beyond G:2, E:2. Conversely, in Fig. 2(b), the imaging is limited by visibility; three clear peaks are discernible before G:2, E:2, and no feature is discernible beyond.
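The 1/N decay of the residual variance under M-F averaging can be reproduced in a small synthetic experiment; the scene, noise level, and frame count below are arbitrary stand-ins for the measured data:

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.uniform(0.4, 0.6, size=(64, 64))   # stand-in "scene" (made up)
sigma = 0.05                                   # assumed Gaussian noise level
frames = clean + rng.normal(0.0, sigma, size=(200, 64, 64))

def residual_variance(stack, n):
    """Noise variance remaining after averaging the first n frames."""
    avg = stack[:n].mean(axis=0)
    return float(np.var(avg - clean))

v_1 = residual_variance(frames, 1)
v_100 = residual_variance(frames, 100)
print(v_1 / v_100)  # ~100: variance falls as 1/N for purely Gaussian noise
```

With real data the ratio saturates once the non-Gaussian residue dominates, which is exactly the convergence behavior reported for more than 100 averaged frames.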
The quantitative analysis is in good agreement with both human visual observation and algorithm-aided observation, indicating that the obliteration of signal features renders discernment infeasible without prior knowledge of the target. These findings not only delineate the limits of image perceptibility but also set the stage for studies that push the resolution limits.

3. Results and Discussion

3.1. Fog Chamber Experiment

For the MER-232-48GM-P NIR imaging system, Fig. 3(a) presents the data analysis, where the red curve represents the SNR condition and the black dashed line signifies the SIR condition. The yellow, orange, and blue error points correspond to the minimum AR observed without algorithms, using DCP, and using CLAHE, respectively. With algorithm-aided vision, the optical thickness penetrable by the imaging system improves. However, for smaller objects, these algorithms offer a relatively minor enhancement; here, the imaging limit is mainly determined by the SIR condition. For the PCO EDGE 4.2 imaging system, Fig. 3(b) illustrates the performance analysis, where the red curve and black dashed line represent the SNR and SIR conditions, respectively. The blue error points represent the observed data. Given that this system is SNR-limited, we focus on fitting the data with respect to the perceiving factor α. To quantify the fitting's accuracy and reliability, we employed two statistical measures, R-squared (R²) and the root mean square error (RMSE):36

R² = 1 − Σ_i (y_i − ŷ_i)² / Σ_i (y_i − ȳ)²,
RMSE = sqrt[ Σ_i (y_i − ŷ_i)² / N_data ],

where y_i is the measured value, ŷ_i is the predicted value, ȳ is the mean of the measured values, and N_data is the number of data points. The fitted perceiving factor aligns well with the IUPAC limit, with an R-squared of 0.9544 and an RMSE of 0.0001, affirming the imaging limit's reliance on the SNR condition. These results underscore the versatility and accuracy of our proposed model in predicting the single-frame (S-F) imaging limit of a detecting system affected by atmospheric scattering.
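Both goodness-of-fit measures are standard; a minimal implementation follows (the function names and sample values are ours, not the experimental data):

```python
import numpy as np

def r_squared(y, y_hat):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return float(1.0 - ss_res / ss_tot)

def rmse(y, y_hat):
    """Root mean square error."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

y_meas = [1.0, 2.0, 3.0, 4.0]     # hypothetical measurements
y_pred = [1.1, 1.9, 3.2, 3.9]     # hypothetical model predictions
print(r_squared(y_meas, y_pred))  # 0.986
print(rmse(y_meas, y_pred))       # ~0.132
```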
The experiment not only confirms the model's applicability across different camera systems and conditions but also highlights the potential of enhancement algorithms to mitigate visibility limitations in foggy environments.

3.2. Outdoor Experiment

We first examine the modulation and the variance of the image noise. The noise modulation is measured to examine the relationship between image clarity and the number of frames averaged, and the variance is measured to confirm the elimination of Gaussian noise. The numerical results of the two quantities calculated from the measured experimental data are shown as blue dots in Figs. 4(a) and 4(b), showcasing the decay of the variance and the noise modulation in accordance with the laws expected for Gaussian noise, as the fitted black lines indicate. When more than 100 frames are averaged, the modulation and variance quickly converge to a constant level, indicating that the image contains, besides Gaussian noise, non-Gaussian noise that cannot be eliminated by averaging.37 We then examine the efficacy of averaging in actual imaging. Figure 5 shows the AR for S-F and M-F averaged images against the optical thickness, highlighting the impact on the SNR. The gray line and green dotted line are the S-F and M-F SNR curves, with R-squared values of 0.92761 and 0.94839, respectively. The black dashed line is the SIR condition. For a camera limited by SNR, noise reduction through averaging effectively extended the imaging range to 1.2 times that of the S-F imaging system. Compared to the S-F system, whose fitted perceiving factor is 3, the M-F system exhibits a significantly reduced value of 0.1839, underscoring the technique's potential to extend the imaging system's operational range.
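The convergence behavior above suggests a two-term model, v(N) = a/N + c, in which c is the floor left by non-Gaussian noise that averaging cannot remove. A sketch of such a fit on synthetic data (the model form and all numbers here are our assumptions, not the measured values):

```python
import numpy as np

# Assumed noise model: residual variance v(N) = a/N + c, with c the
# non-Gaussian floor that frame averaging cannot remove.
rng = np.random.default_rng(1)
n_frames = np.arange(1, 201)
a_true, c_true = 2.0e-3, 5.0e-5                 # synthetic ground truth
v_meas = a_true / n_frames + c_true + rng.normal(0.0, 1.0e-6, n_frames.size)

# Plotting v against 1/N turns the model into a straight line:
# slope a (Gaussian part), intercept c (non-removable floor).
a_fit, c_fit = np.polyfit(1.0 / n_frames, v_meas, 1)
print(a_fit, c_fit)  # close to 2.0e-3 and 5.0e-5
```

The fitted intercept directly estimates the floor at which further averaging stops paying off, matching the saturation seen beyond about 100 frames.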
As a result, the field experiment reinforces the comprehensive framework of our proposed model, highlighting both the potential and the limitations of current image enhancement techniques in addressing atmospheric scattering effects.

4. Conclusion

In conclusion, we derived a comprehensive physical model to describe the behavior of optical images after traversing a given optical thickness in the atmosphere, based on the principles of the MOR. We show explicitly that the image can be retrieved provided that the image modulation survives to a minimum level, so that the modulation can be detected via high-dynamic-range detectors or, alternatively, via M-F averaging. Experimental validations of our predictions were conducted both in a fog chamber and in outdoor settings, revealing substantial alignment and demonstrating that the optical imaging limit can be approached via an optimized optical system, a suitable detecting system, and effective post-detection signal processing.

5. Appendix: The System MTF

In this section, we focus on the derivation of the system MTF. The optical imaging process is shown in Fig. 1. The ambient light illuminating the distant target undergoes diffuse reflection, resulting in a reflectance distribution within the field of view. The imaging light consists of the reflected light as well as the interference light and the scattered light. When the target subtends a restricted AR, the light enters the lenses at a fixed angle. Therefore, we consider the extinction coefficient to be a constant, which allows us to formulate the radiative transfer equation,19 in which the scattering coefficient appears. The coordinates are established in a reference system with the target as the origin.
Assuming the distance between the target and the lenses is z, we denote the corresponding optical thickness as τ. The light intensity distribution at the lens can then be calculated, and substituting Eq. (15) into the definition of modulation yields the relationship between the modulation depth and τ. For ideal target imaging with an infinitely distant background and an object appearing black, this leads to a simplified expression for the modulation depth induced by atmospheric attenuation with respect to τ, from which the aerosol MTF follows. The turbulence MTF for short-exposure imaging is expressed as38,39

MTF_turb(ν) = exp{−3.44 (λfν/r0)^(5/3) [1 − (λfν/D)^(1/3)]},

where ν is the spatial frequency, λ is the working wavelength, f is the focal length, D is the effective diameter of the lens, and r0 is the atmospheric coherence width, which is determined by the refractive index structure parameter, the zenith angle, and the atmospheric pressure and temperature on the Kelvin scale. The point spread function (PSF) of the lenses is characterized by Bessel functions, and its diameter at the first minimum is known as the Airy disk; the full width at half-maximum of the PSF is set by the Airy disk diameter d. The lenses' PSF can be approximated as a Gaussian function,40 and the MTF of the lens then follows as the Gaussian resulting from its Fourier transform. In theory, the MTF of the sensor can be simulated by a sinc function,41 defined as sinc(x) = sin(πx)/(πx), but in practical engineering a Gaussian function also provides a good approximation.42,43 The PSF of the sensor can be obtained by applying low-pass filtering to the edge spread function derived from the sensor's captured edges; the MTF of the sensor is then obtained from a Fourier transform of the line spread function (LSF), whose total width is w.

Code and Data Availability

The data that support the findings of this article are not publicly available due to privacy concerns. They can be requested from the author at liuyk6@sysu.edu.cn.
Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant Nos. 61991452 and 12074444), the Guangdong Major Project of Basic and Applied Basic Research (Grant No. 2020B0301030009), and the National Key Research and Development Program of China (Grant Nos. 2022YFA1404300 and 2020YFC2007102).

References

1. R. Vollmerhausen, E. L. Jacobs, and R. G. Driggers, "New metric for predicting target acquisition performance," Opt. Eng. 43(11), 2806–2818 (2004). https://doi.org/10.1117/1.1799111
2. J. A. Ratches et al., "Night vision laboratory static performance model for thermal viewing systems," (1975).
3. J. Johnson, "Analysis of image forming systems," Proc. SPIE 513, 761 (1985).
4. G. L. DesAutels, "A modern review of the Johnson image resolution criterion," Optik 249, 168246 (2021). https://doi.org/10.1016/j.ijleo.2021.168246
5. T. Sjaardema, C. S. Smith, and G. C. Birch, "History and evolution of the Johnson criteria," (2015).
6. N. S. Kopeika et al., "Atmospheric effects on target acquisition," Proc. SPIE 8355, 83550D (2012). https://doi.org/10.1117/12.921151
7. Y. Bian et al., "Passive imaging through dense scattering media," Photon. Res. 12, 134–140 (2023). https://doi.org/10.1364/PRJ.503451
8. Q. Z. Wang et al., "Fourier spatial filter acts as a temporal gate for light propagating through a turbid medium," Opt. Lett. 20(13), 1498–1500 (1995). https://doi.org/10.1364/OL.20.001498
9. S. G. Demos and R. R. Alfano, "Optical polarization imaging," Appl. Opt. 36, 150–155 (1997). https://doi.org/10.1364/AO.36.000150
10. A. Alfalou and C. Brosseau, "Recent advances in optical image processing," Prog. Opt. 60, 119–262 (2015). https://doi.org/10.1016/bs.po.2015.02.002
11. Y. Qi et al., "A comprehensive overview of image enhancement techniques," Arch. Comput. Methods Eng. 29, 583–607 (2021). https://doi.org/10.1007/s11831-021-09587-6
12. C.-H. Hsieh, "Dehazed image enhancement by a gamma correction with global limits," in IEEE 10th Int. Conf. Awareness Sci. Technol. (iCAST), 1–4 (2019).
13. N. Sharma, V. Kumar, and S. K. Singla, "Single image defogging using deep learning techniques: past, present and future," Arch. Comput. Methods Eng. 28, 4449–4469 (2021). https://doi.org/10.1007/s11831-021-09541-6
14. A. Choudhury and S. J. Daly, "HDR display quality evaluation by incorporating perceptual component models into a machine learning framework," Signal Process. Image Commun. 74, 201–217 (2019). https://doi.org/10.1016/j.image.2019.02.007
15. P. G. J. Barten, Contrast Sensitivity of the Human Eye and Its Effects on Image Quality, Society of Photo-Optical Instrumentation Engineers (1999).
16. R. Hegadi, "Image processing: research opportunities and challenges," Nat. Seminar on Res. Comput. 36, 1–5 (2010).
17. C. Sharma and A. Manhar, "Development of image processing tools," Int. J. Sci. Res. Comput. Sci. Eng. Inf. Technol. 9, 535–543 (2023). https://doi.org/10.32628/CSEIT23903135
18. S. N. Ahmed, "Position-sensitive detection and imaging," in Physics and Engineering of Radiation Detection, 435–475 (2015).
19. R. Pierce, J. Ramaprasad, and E. Eisenberg, "Optical attenuation in fog and clouds," Proc. SPIE 4530 (2001). https://doi.org/10.1117/12.449815
20. N. S. Kopeika, S. C. Solomon, and Y. Gencay, "Wavelength variation of visible and near-infrared resolution through the atmosphere: dependence on aerosol and meteorological conditions," J. Opt. Soc. Am. 71, 892–901 (1981). https://doi.org/10.1364/JOSA.71.000892
21. Y. Kuga and A. Ishimaru, "Modulation transfer function and image transmission through randomly distributed spherical particles," J. Opt. Soc. Am. A 2, 2330–2336 (1985). https://doi.org/10.1364/JOSAA.2.002330
22. I. Dror and N. S. Kopeika, "Aerosol and turbulence modulation transfer functions: comparison measurements in the open atmosphere," Opt. Lett. 17, 1532–1534 (1992). https://doi.org/10.1364/OL.17.001532
23. D. Sadot and N. S. Kopeika, "Imaging through the atmosphere: practical instrumentation-based theory and verification of aerosol modulation transfer function," J. Opt. Soc. Am. A 10, 172–179 (1993). https://doi.org/10.1364/JOSAA.10.000172
24. V. Lakshminarayanan, "Light detection and sensitivity," in Handbook of Visual Display Technology, Springer, Berlin, Heidelberg (2012).
25. T. Kimpe and T. Tuytschaever, "Increasing the number of gray shades in medical display systems—how much is enough?," J. Digit. Imaging 20, 422–432 (2007). https://doi.org/10.1007/s10278-006-1052-3
26. F. A. Rosell and R. H. Willson, "Performance synthesis (electro-optical sensors)," (1971).
27. W. Pan et al., "single-crystal X-ray detectors with a low detection limit," Nat. Photonics 11, 726–732 (2017). https://doi.org/10.1038/s41566-017-0012-4
28. L. Pan et al., "Determination of X-ray detection limit and applications in perovskite X-ray detectors," Nat. Commun. 12, 5258 (2021). https://doi.org/10.1038/s41467-021-25648-7
29. Y. Huang et al., "Multi-view optical image fusion and reconstruction for defogging without a prior in-plane," Photonics 8, 454 (2021). https://doi.org/10.3390/photonics8100454
30. L. Chen et al., "Multi-channel visibility distribution measurement via optical imaging," Photonics 10, 945 (2023). https://doi.org/10.3390/photonics10080945
31. M. Tico, "Multi-frame image denoising and stabilization," in 16th Eur. Signal Process. Conf., 1–4 (2008).
32. A. Nazir, M. S. Younis, and M. K. Shahzad, "MFNR: multi-frame method for complete noise removal of all PDF types in multi-dimensional data using KDE," (2020).
33. S. D.-I. Schuster et al., "Noise variance and signal-to-noise ratio estimation from spectral data," in IEEE Int. Instrum. Meas. Technol. Conf. (I2MTC), 1–6 (2019).
34. K. He, J. Sun, and X. J. Tang, "Single image haze removal using dark channel prior," IEEE Trans. Pattern Anal. Mach. Intell. 33, 2341–2353 (2011). https://doi.org/10.1109/TPAMI.2010.168
35. K. J. Zuiderveld, "Contrast limited adaptive histogram equalization," in Graphics Gems IV, 474–485, Academic Press Professional, Inc. (1994).
36. A. H. A. Kamel, A. S. Shaqlaih, and A. Rozyyev, "Which friction factor model is the best? A comparative analysis of model selection criteria," J. Energy Power Eng. 12, 158–168 (2018). https://doi.org/10.17265/1934-8975/2018.03.006
37. A. Foi et al., "Practical Poissonian-Gaussian noise modeling and fitting for single-image raw-data," IEEE Trans. Image Process. 17(10), 1737–1754 (2008). https://doi.org/10.1109/TIP.2008.2001399
38. A. N. Khan et al., "Atmospheric turbulence and fog attenuation effects in controlled environment FSO communication links," IEEE Photonics Technol. Lett. 34, 1341–1344 (2022). https://doi.org/10.1109/LPT.2022.3217072
39. D. L. Fried, "Optical resolution through a randomly inhomogeneous medium for very long and very short exposures," J. Opt. Soc. Am. 56, 1372–1379 (1966). https://doi.org/10.1364/JOSA.56.001372
40. E. Bayati et al., "Role of refractive index in metalens performance," Appl. Opt. 58, 1460–1466 (2019). https://doi.org/10.1364/AO.58.001460
41. B. Tharun et al., "Contrast computation methods for interferometric measurement of sensor modulation transfer function," J. Electron. Imaging 27, 013015 (2018). https://doi.org/10.1117/1.JEI.27.1.013015
42. K. A. Krapels et al., "Atmospheric turbulence modulation transfer function for infrared target acquisition modeling," Opt. Eng. 40(9), 1906–1913 (2001). https://doi.org/10.1117/1.1390299
43. S. Zhang et al., "MTF measurement by slanted-edge method based on improved Zernike moments," Sensors 23(1), 509 (2023). https://doi.org/10.3390/s23010509