This PDF file contains the front matter associated with SPIE Proceedings Volume 12967, including the Title Page, Copyright information, Table of Contents, and Conference Committee information.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/Open Athens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Light field microscopy (LFM) is an emerging three-dimensional (3D) imaging technology that simultaneously captures both the spatial and angular information of the incident light by placing a microlens array (MLA) in front of the imaging sensor, enabling computational reconstruction of the full 3D volume of a specimen from a single camera frame. Unlike 3D imaging techniques that acquire spatial information sequentially or by scanning, LFM's four-dimensional (4D) imaging scheme decouples volume acquisition time from the spatial scale and is easy to miniaturize, making LFM a highly scalable tool for diverse applications. However, its broad application has been held back by the low resolution that follows from the limited angular and spatial information in a single snapshot, the inhomogeneous resolution of the reconstructed depth images, and the lack of lateral shift invariance, which degrades the spatial resolution, causes grid-like artifacts, and incurs great computational complexity. The introduction of Fourier light field microscopy (FLFM) provides a promising path toward improving current LFM techniques and achieving high-quality imaging with rapid light field reconstruction. However, the inherent trade-off between angular and spatial resolution cannot be fundamentally resolved without introducing additional information. Polarization, another dimension of the light field, has been shown to integrate well with other 3D imaging techniques to obtain finer 3D reconstructions. Unfortunately, this aspect has seldom received attention and is largely ignored in LFM. This paper presents a resolution enhancement scheme for the Fourier light field microscopy system that fuses polarization normals with light field point cloud data to improve imaging resolution and 3D reconstruction accuracy.
Unlike conventional FLFM, this approach actively introduces additional surface polarization information of the sample into the reconstruction of the 3D volume. A universal polarization-integrated FLFM configuration is designed and built, allowing polarization and light field data to be acquired simultaneously through the same optical path. A mathematical model is derived to describe the mapping and fusion of the polarization normals and the light field point cloud data. Simulation studies show that the resolution and accuracy of the 3D reconstruction of the proposed FLFM imaging system improve significantly after incorporating the polarization information, confirming the validity of the proposed methods. Finally, the implications of this approach for FLFM are discussed, providing guidance for future experiments and applications. The resolution enhancement approach based on polarization and data fusion offers a feasible solution to the contradiction between lateral and axial resolution and further improves the resolution of the FLFM imaging system.
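The paper's specific mapping and fusion model is not reproduced here, but the polarization front end of such a system typically starts from the standard Stokes-parameter estimates of the degree and angle of linear polarization (DoLP/AoLP), from which surface normals are inferred. A minimal sketch, with illustrative function names:

```python
import numpy as np

def polarization_dolp_aolp(i0, i45, i90, i135):
    """Estimate degree and angle of linear polarization from four
    polarizer-angle intensity images via the linear Stokes parameters."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal/vertical preference
    s2 = i45 - i135                      # +45/-45 diagonal preference
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    aolp = 0.5 * np.arctan2(s2, s1)      # radians
    return dolp, aolp
```

The AoLP constrains the azimuth of the surface normal at each pixel, which is the information a fusion model can combine with the light field point cloud.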
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks.
You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/Open Athens users─please
sign in
to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on
SPIE.org.
Digital images are widely used in daily life and scientific research; however, owing to uncertainties in image capture devices, improper exposure may occur during operation, which can significantly reduce image quality and severely degrade the visual impression or subsequent processing. To correct such improperly exposed images, a GretagMacbeth Colour Checker DC was used in this study. Using multi-exposure technology, we measured the colour features across the multi-level dynamic range of the camera, and a polynomial model was then adopted to establish a 3D colour mapping to a reference image, which serves as the basis of an image colour correction algorithm. To evaluate the differences between corrected and reference images, the S-CIELAB image difference model was applied, which takes the spatial characteristics of the human visual system into account. Experimental results show that the proposed colour correction method achieves 2.0~5.5 S-CIELAB units relative to the reference images. To verify that this result is acceptable, a visual experiment was designed using the psychophysical method of constant stimuli; its results showed that the threshold for visually discriminating image differences is around 4.4 S-CIELAB units. This means that the correction method of this study can successfully correct improperly exposed images to visually pleasing ones at or around the visual discrimination level.
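As a rough sketch of the polynomial mapping step, assuming a common second-order polynomial basis (the study's actual polynomial order and term set are not specified here):

```python
import numpy as np

def _design_matrix(rgb):
    """Second-order polynomial terms of an (N, 3) RGB array."""
    r, g, b = rgb.T
    return np.stack([np.ones_like(r), r, g, b,
                     r * g, r * b, g * b,
                     r**2, g**2, b**2], axis=1)

def fit_color_polynomial(src, ref):
    """Least-squares fit mapping measured patches `src` to reference `ref`."""
    coeffs, *_ = np.linalg.lstsq(_design_matrix(src), ref, rcond=None)
    return coeffs  # shape (10, 3)

def apply_color_polynomial(coeffs, rgb):
    return _design_matrix(rgb) @ coeffs
```

With the Colour Checker DC's patch count, the 10-term system is heavily overdetermined, which stabilizes the fit across exposure levels.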
The rise of “computational optics” provides a new way for label-free microscopic imaging. Programmable illumination modulation devices [light-emitting diode (LED) arrays, liquid-crystal displays (LCDs), or spatial light modulators (SLMs)] have been introduced into light microscopes to give them multimodal imaging capabilities. Nevertheless, due to the discontinuity of the LED array and the high light loss of the LCD, existing illumination modulation schemes still need to be optimized. In this paper, we report a programmable illumination modulation condenser for multimodal microscopic imaging. By placing a high-transmittance LCD inside a condenser, this device enables flexible modulation of the illumination wavelength and angle without altering the existing optical layout. The high-transmittance LCD greatly increases the transmittance of the illumination light, which is further converged onto the sample by the condenser, yielding higher illumination intensity and higher-contrast images. In addition, the programmable condenser can be used as a flexible add-on module compatible with any commercial microscope to extend the imaging modalities of conventional bright-field microscopes. Finally, we validated the accuracy and dependability of the programmable illumination modulation condenser by imaging spongy germ cells.
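A hedged illustration of the kinds of patterns such a programmable condenser could display to switch imaging modes (grid size and NA ratio below are illustrative, not taken from the paper):

```python
import numpy as np

def pupil_patterns(n=64, na_ratio=0.6):
    """Binary illumination patterns on an n-by-n display grid: a
    bright-field disc, a dark-field annulus, and a half-circle pattern
    of the kind used for differential phase contrast."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    r = np.hypot(x, y)
    bright = r <= na_ratio                 # inside the objective NA
    dark = (r > na_ratio) & (r <= 1.0)     # outside the objective NA
    dpc_half = bright & (x > 0)            # asymmetric half-pupil
    return bright, dark, dpc_half
```

Displaying these masks in sequence on the internal LCD is what gives one optical layout several contrast modes.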
A swimmer's level and ability are often evaluated based on speed, so it is essential to quantify this speed accurately to guide training. Swimming speed measurement methods typically fall into two categories: using an LED as a reference object or wearing inertial navigation equipment. The former does not provide real-time feedback, while the latter can easily drift. Through digital image processing and tracking, speed can be measured accurately in real time by analyzing deep abstract image features frame by frame, even in complex and constantly changing sports scenes. To meet the high-precision positioning and speed measurement requirements of multiple swimmers, YOLOv5 offers full-view, low-latency, high-precision multi-target recognition, making it well suited to swimming speed measurement. DeepSort, in turn, leverages powerful representation learning and accurate matching and, combined with YOLOv5, can achieve real-time, precise tracking of multiple targets. This paper therefore proposes a real-time high-precision positioning and speed measurement algorithm for swimming based on YOLOv5 and DeepSort. A nine-point calibration method obtains swimmers' position and speed information from the target boxes tracked by YOLOv5 and DeepSort. Multiple tests in an actual swimming competition scene show that the algorithm's tracking accuracy exceeds 90%, and the positioning error in the first swimming lane is about 1.2 cm. The algorithm has strong feasibility and engineering practicability.
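The nine-point calibration itself is not reproduced here, but once calibration yields a pixel-to-pool-coordinate homography, speed follows from the frame-to-frame displacement of tracked box centers. A minimal sketch, assuming a known 3x3 homography `H` and the camera frame rate:

```python
import numpy as np

def track_speed(pixel_centers, H, fps):
    """Map tracked box centers (N, 2, pixels) into pool coordinates
    (metres) with homography H, then differentiate to get speed."""
    pts = np.hstack([pixel_centers, np.ones((len(pixel_centers), 1))])
    world = (H @ pts.T).T
    world = world[:, :2] / world[:, 2:3]            # dehomogenize
    step = np.linalg.norm(np.diff(world, axis=0), axis=1)
    return step * fps  # metres/second between consecutive frames
```

In practice the per-frame speeds would be smoothed (e.g., a short moving average) before being reported to coaches.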
We propose a radial shear differential interference contrast (DIC) microscope based on Greek-ladder sieves, in which three structures are set up to demonstrate the feasibility of radial shear interference phase contrast imaging. A detailed analysis and mathematical derivation are presented, based on the fact that DIC contrast arises from the optical path difference between two interfering beams spatially separated within a small focal volume, focusing on the optical path differences among the three structures. This work provides an important reference for subsequent applications, in particular radial shear differential interference for the localisation of surface damage points and tweezers in large-scale integrated circuits.
Image classification behind complex inhomogeneous media is a pervasive problem in computational optics. In recent years, optical neural networks have shown high accuracy and low computational cost in image classification. However, improving their scalability and handling greater complexity remain challenging. This paper presents an optronic speckle transformer (OPST) for image classification through scattering media. We utilize an optical self-attention mechanism to extract the speckle pattern’s local and global properties, and achieve excellent speckle classification results with minimal computational cost. The OPST improves classification accuracy by more than 8% and reduces the network’s parameters by more than 30% compared with the optronic convolutional neural network (OPCNN). Moreover, our OPST demonstrates high scalability with existing optical neural networks and is adaptable to more complex tasks. Our work paves the way toward an all-optical, computationally lighter approach to object classification through opaque media.
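The optical implementation is not reproduced here, but the self-attention operation the OPST builds on can be sketched numerically (single head; the weight matrices are illustrative stand-ins for learned parameters):

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a sequence of
    speckle-patch embeddings x with shape (tokens, dim)."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # softmax over keys
    return weights @ v
```

Because every token attends to every other token, this mixing captures the global speckle correlations that purely convolutional layers reach only through deep stacks, which is the intuition behind the parameter savings claimed above.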
Wavefront coding is an efficient way to extend the depth of focus of optical imaging systems, and its core component, the phase mask, determines the extension performance. Notably, wavefront coding is also an important method for laser protection. On the one hand, its optical field modulation redistributes laser energy at the imaging plane, reducing the maximum single-pixel received power and improving the imaging system's protection against laser blinding; on the other hand, its out-of-focus imaging plane reduces the echo detection received power, improving protection against laser reconnaissance while maintaining imaging quality. This article presents a laser transmission model for a tangent-phase-plate wavefront coding imaging system and explores its protective performance at varying defocus distances to verify its laser protection potential. The findings show that the system offers efficient protection: the maximum single-pixel receiving power decreases by an order of magnitude, and the echo detection receiving power drops by nearly three orders of magnitude. Further, the study assesses the protective performance of the system at different propagation distances. The results indicate that as the propagation distance varies between 100 m and 10,000 m, the maximum single-pixel receiving power of the wavefront coding imaging system decreases rapidly and then stabilizes, whereas the echo detection receiving power rapidly decreases to approximately 0 mW. Through systematic simulation, the study explores the laser protection performance of the tangent phase mask wavefront coding imaging system and verifies its effectiveness.
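A hedged numerical sketch of the core mechanism, how a pupil phase mask spreads the PSF and thereby lowers the maximum single-pixel power; the tangent-like mask form and all parameters below are illustrative, not the paper's actual design:

```python
import numpy as np

def coded_psf(n=128, alpha=20.0, na=0.8):
    """Incoherent PSF of a circular pupil carrying an illustrative
    tangent-like phase mask phi = alpha*(tan(beta*x) + tan(beta*y))."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    pupil = (np.hypot(x, y) <= na).astype(float)
    beta = 1.2  # keeps tan() away from its poles on [-1, 1]
    phase = alpha * (np.tan(beta * x) + np.tan(beta * y))
    field = pupil * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field))))**2
    return psf / psf.sum()
```

Comparing the peak of the coded PSF against the unmodulated (alpha = 0) case shows directly how the energy redistribution lowers the worst-case single-pixel load.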
Recently, deep learning has yielded transformative success across optics and photonics, especially in optical metrology. Deep neural networks (DNNs) with a fully convolutional architecture (e.g., U-Net and its derivatives) have been widely implemented in an end-to-end manner to accomplish various optical metrology tasks, such as fringe denoising, phase unwrapping, and fringe analysis. However, the task of training a DNN to accurately identify an image-to-image transform from massive input and output data pairs seems at best naïve, as the physical laws governing the image formation or other domain expertise pertaining to the measurement have not yet been fully exploited in current deep learning practice. To this end, we introduce a physics-informed deep learning method for fringe pattern analysis (PI-FPA) to overcome this limit by integrating a lightweight DNN with a learning-enhanced Fourier transform profilometry (LeFTP) module. By parameterizing conventional phase retrieval methods, the LeFTP module embeds the prior knowledge in the network structure and the loss function to directly provide reliable phase results for new types of samples, while circumventing the requirement of collecting a large amount of high-quality data in supervised learning methods. Guided by the initial phase from LeFTP, the phase recovery ability of the lightweight DNN is enhanced to further improve the phase accuracy at a low computational cost compared with existing end-to-end networks. Experimental results demonstrate PI-FPA enables more accurate and computationally efficient single-shot phase retrieval.
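The LeFTP module's learned parameterization is not reproduced here, but the conventional Fourier transform profilometry it builds on can be sketched: isolate the +1 order of the fringe spectrum, shift it to DC, and take the angle of the inverse transform (the circular band-pass window below is an illustrative choice):

```python
import numpy as np

def ftp_phase(fringe, carrier):
    """Single-shot Fourier transform profilometry. `carrier` is the
    fringe carrier frequency in cycles per image along x. Returns the
    wrapped phase map."""
    ny, nx = fringe.shape
    F = np.fft.fftshift(np.fft.fft2(fringe))
    fy, fx = np.mgrid[-ny // 2:ny // 2, -nx // 2:nx // 2]
    mask = (fx - carrier)**2 + fy**2 < (carrier / 2)**2  # +1 order window
    side = np.roll(F * mask, -carrier, axis=1)           # remove carrier
    return np.angle(np.fft.ifft2(np.fft.ifftshift(side)))
```

The fixed window is exactly the hand-tuned element that LeFTP replaces with learnable parameters.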
This paper applies both traditional and deep learning algorithms to recover the datacubes acquired by CASSI and CSIMS, in order to verify that CSIMS outperforms CASSI, by comparing the peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and relative spectral quadratic error (RQE) of the reconstructed datacubes. The experimental results show that the datacubes of CASSI and CSIMS can both be reconstructed by the ADMM-TV algorithm, the most effective of the traditional algorithms. The PSNR of the reconstructed datacube is 32.50 dB for CASSI and 35.53 dB for CSIMS, an increase of 3.03 dB. With deep learning, both systems improve substantially under the PnP-HSI network, the PSNR of CASSI growing to 38.85 dB and that of CSIMS to 41.97 dB, leaving CSIMS still 3.12 dB higher than CASSI.
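For reference, the PSNR figures quoted above follow the standard definition:

```python
import numpy as np

def psnr(ref, rec, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference datacube
    and a reconstruction of matching shape."""
    mse = np.mean((ref - rec) ** 2)
    return 10.0 * np.log10(peak**2 / mse)
```

For a datacube, the mean squared error is taken over all spatial positions and spectral bands together, so a single PSNR value summarizes the whole cube.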
Multi-distance phase retrieval represents a computational imaging technique that synergizes a basic imaging setup with computational post-processing. This method involves capturing diffraction intensity at distinct distances, enabling the iterative reconstruction of the target's wavefront by incorporating the intensity patterns into the relevant algorithm. Despite the advantages of lensless imaging through multi-distance phase retrieval, including its uncomplicated setup, expansive field of view, and freedom from aberrations, challenges persist in terms of sluggish convergence and limited resolution. To address these concerns, the presented paper introduces enhancements to both the imaging system and the algorithm. This dual approach contributes to a remarkable 5.88 times acceleration in convergence speed, all achieved without the need for supplementary equipment. Moreover, a substantial enhancement in imaging quality is achieved when compared to the conventional method.
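The paper's specific accelerations are not reproduced here, but the baseline multi-distance iteration it improves on rests on angular spectrum propagation between measurement planes, with amplitude replacement at each plane. A sketch under illustrative parameters:

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a square complex field a distance z (angular spectrum)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = np.maximum(1 / wavelength**2 - FX**2 - FY**2, 0)  # drop evanescent
    H = np.exp(1j * 2 * np.pi * np.sqrt(arg) * z)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def multi_distance_retrieve(intensities, zs, wavelength, dx, iters=20):
    """Cycle through the measurement planes, replacing the amplitude with
    the recorded sqrt(intensity) while keeping the evolving phase."""
    field = np.sqrt(intensities[0]).astype(complex)
    z_prev = zs[0]
    for _ in range(iters):
        for I, z in zip(intensities, zs):
            field = angular_spectrum(field, wavelength, dx, z - z_prev)
            field = np.sqrt(I) * np.exp(1j * np.angle(field))
            z_prev = z
    return angular_spectrum(field, wavelength, dx, -z_prev)  # object plane
```

The slow convergence the paper addresses comes from this plane-by-plane amplitude replacement; its 5.88x speed-up modifies both the acquisition and this update loop.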
Quantitative phase imaging (QPI) has gained extensive attention in biomedical imaging and the life sciences due to its unique ability to quantify important physical characteristics of living cells and tissues without labeling. However, boundary conditions have always seriously affected the accuracy of QPI, yet they are frequently overlooked. When acquiring the original data, the sample under test is habitually placed at the center of the field of view (FOV), unconsciously avoiding the influence of the boundary conditions, but this does not fundamentally solve the problem. When the size of the object under test exceeds the imaging FOV, the boundary conditions cannot be avoided, and serious boundary artifacts appear in the reconstructed FOV. In various QPI techniques, such as the transport of intensity equation (TIE), differential phase contrast (DPC), and Fourier ptychographic microscopy (FPM), it has been demonstrated that the boundary conditions can significantly impact the accuracy of the phase reconstruction. The most fundamental reason for the incorrect reconstructions caused by the boundary conditions is the loss of information. This paper systematically studies the impact of the boundary conditions on the reconstruction accuracy of quantitative phase imaging and adaptive aberration correction based on FPM, and discusses the influence of data redundancy on the boundary artifacts of phase reconstruction.
In lensless imaging, encoding the diffraction field recorded by the detector by adding a laterally or axially moving mask is the most common configuration. In this paper, we propose a new scheme for improving the speed and accuracy of complex object reconstruction. Particularly, a random binary amplitude mask is placed upstream of the object and moves obliquely to introduce speckle illumination on the object plane. The object and the detector are stationary in the experiment. Inspired by the idea of ptychography, the extended ptychographic iterative engine algorithm is adopted to reconstruct the object and unknown mask simultaneously. It is verified by simulation that our proposed method can improve the reconstruction resolution of the complex object compared with the conventional method. Further, the relevant parameters of the proposed scheme are also optimized. This improvement will facilitate the widespread application of lensless imaging in biology and materials science.
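The extended ptychographic iterative engine update at a single mask position can be sketched as follows (unit step sizes assumed; this is the textbook ePIE form rather than necessarily the paper's exact variant):

```python
import numpy as np

def epie_update(obj, probe, psi_corrected, psi):
    """One ePIE update: revise the object and probe (here, the unknown
    moving mask) estimates from the difference between the detector-
    corrected exit wave psi_corrected and the modelled wave psi = probe*obj."""
    diff = psi_corrected - psi
    obj_new = obj + np.conj(probe) / np.max(np.abs(probe))**2 * diff
    probe_new = probe + np.conj(obj) / np.max(np.abs(obj))**2 * diff
    return obj_new, probe_new
```

In the proposed scheme the "probe" is the speckle field produced by the obliquely moving binary mask, so each mask position contributes one such update per iteration.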
Differential phase contrast (DPC) imaging is a non-interferometric, label-free, and efficient quantitative phase imaging method based on partially coherent illumination modulation. Its two widely used forms are the method based on the slow-varying-object approximation, solved via the phase gradient transfer function (PGTF), and the method based on the weak-object approximation, solved via the phase transfer function (PTF). Both methods recover phase well under their respective approximations, but for certain complex phase objects encountered in practice they fall short because of the limitations those approximations impose. To break this limitation, this paper proposes a spectral fusion method that accounts for both the low-frequency profile and the high-frequency details of phase objects, achieving efficient and accurate quantitative phase imaging by effectively merging the phases reconstructed by the PTF and the PGTF in the frequency domain on the basis of optimized illumination. Relevant simulations and experiments demonstrate the effectiveness of the method.
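The paper's optimized-illumination details are not reproduced here, but the frequency-domain merging step can be sketched with a simple Gaussian cross-over (the actual fusion weights may differ):

```python
import numpy as np

def fuse_spectra(phase_low, phase_high, cutoff):
    """Merge two phase maps in the frequency domain: keep the low
    frequencies of phase_low and the high frequencies of phase_high,
    with a Gaussian cross-over of width `cutoff` (cycles/image)."""
    n = phase_low.shape[0]
    fy, fx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    lp = np.exp(-(fx**2 + fy**2) / (2 * cutoff**2))   # low-pass weight
    Fl = np.fft.fftshift(np.fft.fft2(phase_low))
    Fh = np.fft.fftshift(np.fft.fft2(phase_high))
    fused = Fl * lp + Fh * (1 - lp)
    return np.real(np.fft.ifft2(np.fft.ifftshift(fused)))
```

Because the two weights sum to one at every frequency, regions where the PTF and PGTF reconstructions agree pass through unchanged.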
This article presents a simulation study of a phase retrieval technique that combines triangular aperture illumination as a support constraint with partially overlapped random binary amplitude modulation. Inspired by the concept of ptychography, a random binary amplitude mask is designed with partially transparent regions that overlap one another randomly. The overlapping regions of the amplitude modulation mask impose a strong constraint on the coherence of the light field, similar to the overlap constraint in ptychography. Moreover, the redundant information contributed by the overlapping binary masks improves the reconstruction accuracy, and the triangular aperture constraint enables faster convergence. Compared with the original binary amplitude modulation method, this constraint yields higher convergence accuracy and speed in the iterative algorithm.
The cascade X-ray phase-contrast imaging system is composed of a Talbot-Lau interferometer and an inverse Talbot-Lau interferometer, which avoids the difficulty of fabricating small-period, high-aspect-ratio absorption gratings and is expected to enable X-ray phase-contrast imaging over a large field of view. The system can simultaneously obtain absorption, phase-contrast, and scattering images of the sample from a single exposure using a Fourier transform algorithm. The selection of the frequency-domain window function for the sample fringe image and its influencing factors are key to optimizing image quality. For the X-ray cascade grating phase-contrast imaging system, this paper uses an X-ray chest transmission image as the simulated sample, numerically simulates the selection of the Fourier window function in the frequency domain and its influencing factors, and obtains the selection range of the window function for the optimal image. The simulation results show that the optimal window function is selected by taking the high-frequency edge of the sample fringe image as one edge of the window function and extending linearly toward the low-frequency side. The selection range of the window function is inversely proportional to the sample fringe period: the smaller the fringe period, the larger the selection range of the window function, and the more favorable it is for obtaining the optimal phase-contrast image.
Photoacoustic imaging (PAI) is an emerging modality that has generated increasing interest for its uses in clinical research and translation. To fully exploit its potential for various preclinical and clinical applications, it is necessary to develop systems that offer high imaging speed, reasonable cost, and manageable data flow. Currently, a significant challenge lies in the fabrication of ultrasound arrays, as many are not densely populated enough to fully sample the signals. Ideally, the pitch of the arrays should be half of the center ultrasonic wavelength to prevent spatial undersampling and the resulting reconstruction artifacts, such as aliasing artifacts and structural deformation. Here, a novel photoacoustic sparse sampling Transformer-CNN coupling network (passFormer) is proposed to decouple target details and spatial under-sampling artifacts from high-frequency image information in a heterogeneous feature-aware manner. Specifically, we first decompose the sparse sampling (SS) photoacoustic (PA) images into two parts: high-frequency (HF) and low-frequency (LF) compositions. Our methodology incorporates two bridging modules, the LF module and the HF module. The LF coupling module extracts content features (Xc) and a latent texture feature (Xtex), and the HF coupling module extracts a high-frequency embedding (Xemb) containing the target detail features (Xdetail) and under-sampling artifacts (Xart). We feed Xtex and Xemb into a modified transformer with three encoders and decoders to obtain well-refined HF texture features. Finally, we combine the refined HF texture features with the pre-extracted Xc by pixel-wise summation for reconstruction. Experimental results on publicly available full and sparse reconstruction datasets of mouse and phantom PA images highlight the superior performance of our method, particularly in live mouse imaging.
This new approach enables accelerated data acquisition and image reconstruction, facilitating the development of practical and cost-effective imaging systems.
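The HF/LF decomposition that the coupling modules operate on can be sketched with a simple Fourier-domain split (the network's learned decomposition is more elaborate; `sigma` is illustrative):

```python
import numpy as np

def split_frequencies(img, sigma):
    """Split a square image into low- and high-frequency parts with a
    Gaussian low-pass in the Fourier domain; img == lf + hf exactly."""
    n = img.shape[0]
    fy, fx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    lp = np.exp(-(fx**2 + fy**2) / (2 * sigma**2))
    F = np.fft.fftshift(np.fft.fft2(img))
    lf = np.real(np.fft.ifft2(np.fft.ifftshift(F * lp)))
    return lf, img - lf
```

The additive split is what makes pixel-wise summation a valid final reconstruction step: refining `hf` and adding it back to `lf` cannot corrupt the low-frequency content.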