Quantitative phase imaging (QPI) is a label-free technique that provides optical path length information for transparent specimens, finding utility in biology, materials science, and engineering. Here, we present QPI of a three-dimensional (3D) stack of phase-only objects using a wavelength-multiplexed diffractive optical processor. Utilizing multiple spatially engineered diffractive layers trained through deep learning, this diffractive processor can transform the phase distributions of multiple two-dimensional objects at various axial positions into intensity patterns, each encoded at a unique wavelength channel. These wavelength-multiplexed patterns are projected onto a single field of view at the output plane of the diffractive processor, enabling the capture of quantitative phase distributions of input objects located at different axial planes using an intensity-only image sensor. Based on numerical simulations, we show that our diffractive processor could simultaneously achieve all-optical QPI across several distinct axial planes at the input by scanning the illumination wavelength. A proof-of-concept experiment with a 3D-fabricated diffractive processor further validates our approach, showcasing successful imaging of two distinct phase objects at different axial positions by scanning the illumination wavelength in the terahertz spectrum. Diffractive network-based multiplane QPI designs can open up new avenues for compact on-chip phase imaging and sensing devices.
1. Introduction

Quantitative phase imaging (QPI) stands as a powerful label-free technique capable of revealing variations in optical path length caused by weakly scattering samples.1–3 QPI enables the generation of high-contrast images of transparent specimens, which are difficult to observe using conventional bright-field microscopy. In recent years, various QPI methodologies have been established, including, e.g., off-axis imaging methods,4,5 phase-shifting methods,6,7 and common-path QPI techniques.8,9 These methods have been instrumental in conducting precise measurements of various cellular dynamics and metabolic activities, covering applications in, e.g., cell biology,10,11 pathology,12–14 and biophysics,15 such as the monitoring of real-time cell growth and behavior,16,17 cancer detection,18,19 pathogen sensing,20,21 and the investigation of subcellular structures and processes.22 In addition, QPI also finds applications in materials science and nanotechnology, including the characterization of thin films, nanoparticles, and fibrous materials, revealing their unique optical and physical attributes.23–25 Predominantly, QPI systems are employed to extract quantitative phase information within a two-dimensional (2D) plane by utilizing a monochromatic light source and a sensor array. Given that standard optoelectronic sensors are limited to detecting only the intensity of light, advanced approaches utilizing customized illumination schemes and interferometric techniques,26–28 combined with digital postprocessing and reconstruction algorithms, are employed to convert the intensity signals into quantitative phase images. Building on the foundations of 2D QPI approaches, tomographic QPI and optical diffraction tomography methods have also expanded QPI's capabilities to encompass volumetric imaging.29–32 These techniques typically capture holographic images from multiple illumination angles, which allows for the digital reconstruction of the refractive index distribution across the entire three-dimensional (3D) volume of the sample. The digital postprocessing techniques in QPI and phase tomography systems have witnessed a paradigm shift, primarily attributed to the recent advancements in the field of artificial intelligence. Specifically, the efficiency of feed-forward neural networks utilizing the parallel processing power of graphics processing units (GPUs) has markedly increased the speed and throughput of image reconstruction in QPI systems.33–35 These deep-learning-based approaches have facilitated solutions to various complex QPI tasks, such as segmentation and classification,36–38 as well as inverse problems including phase retrieval,33,35,39–44 aberration correction,45,46 depth-of-field extension,47,48 and cross-modality image transformations.14,49 Additionally, deep-learning-based techniques have also been used to enhance 3D QPI systems by improving the accuracy and resolution of 3D refractive index reconstructions, utilizing methods such as physical approximant-guided learning,50 recurrent neural networks,51 and neural radiance fields,52 alongside the reduction of coherent noise through generative adversarial networks.53 However, the complexity of the digital neural networks employed in these reconstruction techniques requires substantial computational resources, leading to lower imaging frame rates and increased hardware costs and computing power.
These challenges are further intensified in 3D QPI systems due to the necessity of processing a larger set of interferometric images for 3D reconstructions. Here, we introduce an all-optical, wavelength-multiplexed QPI approach that utilizes the diffractive processing of coherent light to obtain the quantitative phase distributions of multiple phase objects distributed at varying axial depths. As illustrated in Fig. 1, our approach employs a diffractive optical processor that is composed of spatially engineered dielectric diffractive layers, optimized collectively via deep learning.54–64 Following the deep-learning-based design phase, these diffractive elements are physically fabricated to perform task-specific modulation of the incoming optical waves, converting the phase profile of each of the phase-only objects located at different axial planes into a distinct intensity distribution at a specific wavelength within its output field of view (FOV). These wavelength-multiplexed intensity distributions can then be recorded, either simultaneously with a multicolor image sensor equipped with a color filter array or sequentially using a monochrome detector by scanning the illumination wavelength, to directly reveal the object phase information through an intensity recording at the corresponding wavelength. Based on this framework, we conducted analyses through numerical simulations and proof-of-concept experiments. Initially, we examined how the overlap of the input objects at different axial positions affects the quality of the diffractive output images and the all-optical retrieval of quantitative phase information. Our results demonstrated that this diffractive QPI framework can achieve near-perfect QPI for phase objects without spatial overlap along the optical axis. Furthermore, even when the input objects are entirely overlapping along the axial direction, our diffractive processor can effectively reconstruct the quantitative phase information of each input plane with high fidelity and minimal cross talk among the imaging channels. Beyond these numerical analyses, we also experimentally validated our approach by designing and fabricating a diffractive multiplane QPI processor operating in the terahertz part of the spectrum. Our experimental results closely aligned with the numerical simulations, confirming the practical feasibility of diffractive processors in retrieving the quantitative phase information of specimens across different input planes. The presented diffractive multiplane QPI design incorporates wavelength multiplexing and passive optical elements, enabling the rapid capture of quantitative phase images of specimens across multiple axial planes. Its notable compactness, with an axial dimension that scales with the mean wavelength ($\lambda_{\text{mean}}$) of the operational spectral band, coupled with its all-optical phase recovery capability, sets it apart as a competitive analog alternative to traditional digital QPI methods. Additionally, the scalable nature of our design allows its adaptation to different parts of the electromagnetic spectrum by scaling the feature size of each diffractive layer proportionally to the illumination wavelength of interest. Our presented framework paves the way for the development of new phase-imaging solutions that can be integrated with focal plane arrays operating at various wavelengths to enable efficient, on-chip imaging and sensing devices, which can be especially valuable for applications in biomedical imaging/sensing, materials science, and environmental analysis, among others.
2. Results

2.1. Design of a Wavelength-Multiplexed Diffractive Processor for Multiplane QPI

Figure 1 presents a diagram of our diffractive multiplane QPI design, which is based on wavelength multiplexing. In this setup, multiple transparent samples, which are axially separated, are illuminated by broadband, spatially coherent light. This broadband illumination can be regarded as a combination of plane waves at distinct wavelengths $\lambda_m$ ($m = 1, 2, \ldots, N$), ordered from the longest ($\lambda_1$) to the shortest ($\lambda_N$). Here, $N$ represents the number of spectral channels as well as the number of phase objects/input planes, as each wavelength channel is uniquely assigned to a specific input plane. The illumination fields propagate through the $N$ phase-only transmissive objects, each exhibiting a unique phase profile $\phi_m(x, y)$ at the corresponding input plane $m$. As the illumination light encounters the sample at each plane, it undergoes a phase modulation of $\exp[j\phi_m(x, y)]$, resulting in multispectral optical fields $i_m(x, y)$ at the input aperture of the diffractive processor. The wavelength-multiplexed QPI diffractive processor consists of several modulation layers constructed from dielectric materials, where each layer is embedded with spatially designed diffractive features that have a fixed lateral size and a trainable/optimizable thickness, covering a phase modulation range of 0 to $2\pi$ for all the illumination wavelengths. These diffractive layers, along with the input and output planes, are interconnected through optical diffraction in free space (air). The complex fields $i_m(x, y)$, resulting from the stacked input planes along the axial ($z$) direction, are modulated by the diffractive optical processor to yield the output fields $o_m(x, y)$, i.e., $o_m(x, y) = D\{i_m(x, y)\}$, where $D\{\cdot\}$ denotes the all-optical transformation performed by the diffractive layers. The intensity variations of these output fields are then captured by a monochrome image sensor, which sequentially records the QPI signals across the $N$ illumination wavelengths. The resulting optical intensity measurement at each illumination wavelength, denoted as $I_m(x, y)$, can be expressed as

$$I_m(x, y) = |o_m(x, y)|^2. \quad (1)$$

Considering that the optical intensity recorded by the sensor is influenced by both the power of the illumination and the output diffraction efficiency, we used a straightforward normalization approach60,65 to counteract potential fluctuations caused by power variations and achieve consistent QPI performance. This involves dividing the output measurements $I_m$ into two zones: an output signal area $S$ and a reference signal area $R$. Here, $R$ is designated as a one-pixel-wide border surrounding the edges of $S$. This border is further segmented into $N$ subsections, each labeled as $R_m$ ($m = 1, 2, \ldots, N$). A given $R_m$ acts as a reference signal ($\sigma_m$) for the wavelength channel $\lambda_m$, i.e.,

$$\sigma_m = \frac{1}{N_R} \sum_{(x, y) \in R_m} I_m(x, y), \quad (2)$$

where $N_R$ denotes the total number of image sensor pixels located within $R_m$. Finally, the output quantitative phase image of the wavelength-multiplexed diffractive processor can be obtained through a simple normalization step,

$$\hat{\phi}_m(x, y) \propto \frac{I_m(x, y)}{\sigma_m}, \quad (x, y) \in S. \quad (3)$$

Once the training of our diffractive multiplane QPI processor successfully converges, all the output quantitative phase images obtained at the different wavelengths are expected to approximate the phase profiles of the input objects, which can be written as

$$\hat{\phi}_m(x, y) \approx \phi_m(x, y), \quad m = 1, 2, \ldots, N, \quad (4)$$

where the ground-truth phase images $\phi_m(x, y)$ are defined, without loss of generality, as the object phase distributions at the corresponding wavelength $\lambda_m$. Based on the above formulation, our diffractive multiplane QPI processor is optimized to act as an all-optical transformer that simultaneously performs two tasks: (1) converting the phase profile of each input plane into an intensity distribution within the output FOV, and (2) routing the information of each input plane exclusively into its assigned wavelength channel, with minimal cross talk among the channels.
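As a simple numerical illustration of this normalization, the following sketch (our own simplified pseudocode, not the released implementation of this work; the function and variable names are hypothetical) maps a raw sensor measurement at one wavelength channel to a normalized quantitative phase image using the reference border defined in Eqs. (2) and (3):

```python
import numpy as np

def normalize_output(intensity, signal_mask, ref_mask):
    """Convert a raw diffractive output intensity I_m(x, y) into a normalized
    quantitative phase image, following Eqs. (2) and (3).

    intensity:   2D array, sensor measurement I_m at wavelength channel m
    signal_mask: boolean mask of the output signal area S
    ref_mask:    boolean mask of the reference subsection R_m for channel m
    """
    sigma_m = intensity[ref_mask].mean()  # Eq. (2): average intensity within R_m
    phase_image = np.zeros_like(intensity, dtype=float)
    # Eq. (3): the output phase image is proportional to I_m / sigma_m within S
    phase_image[signal_mask] = intensity[signal_mask] / sigma_m
    return phase_image
```

This normalization makes the recovered phase image insensitive to global fluctuations in the illumination power, since the signal and the reference regions scale together.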
To optimize/train our diffractive multiplane QPI processor, we compiled a training data set of 110,000 images containing 55,000 handwritten digit images and 55,000 custom-designed grating/fringe-like patterns.66 During training, to form each multiplane object, $N$ images were randomly chosen from these 110,000 training images without replacement and encoded into the phase channels $\phi_m$ of the $N$ object planes. For this phase encoding, we adopted the assumption that all the object planes are composed of the same material and share an identical range of material thickness variations. This assumption ensures that these different object planes, regardless of their individual axial positions, induce a similar magnitude of phase modulation on the incoming complex fields, thereby better mirroring the real-world scenarios encountered in multiplane imaging systems. Based on this assumption, we confined the thickness profile of each phase-only object plane within the same dynamic range of $[0, \alpha\lambda_N]$, where $\alpha$ stands for the thickness range parameter used during the training, defined based on the shortest wavelength $\lambda_N$. Following this notation, for the $m$'th object plane, the maximum phase modulation of the incoming field at wavelength $\lambda_m$ can be written as

$$\theta_{m,\max} = \frac{2\pi\,[n(\lambda_m) - 1]\,\alpha\lambda_N}{\lambda_m}, \quad (5)$$

where $n(\lambda_m)$ denotes the refractive index of the object material at $\lambda_m$. Accordingly, we also define a phase contrast parameter $\alpha_m$ to represent the maximum phase contrast of the objects at wavelength $\lambda_m$, i.e.,

$$\alpha_m = \frac{[n(\lambda_m) - 1]\,\alpha\lambda_N}{\lambda_m} = \frac{\theta_{m,\max}}{2\pi}. \quad (6)$$

As a result, the phase modulation values in each object plane are confined to a range of $(0, 2\pi\alpha_m)$. Without loss of generality, in our numerical analyses, we chose a training thickness range parameter of $\alpha = 1$ and a constant material refractive index ($n$) of 1.5 for all $\lambda_m$. As a result, the phase contrast parameter values vary according to the operational wavelength, peaking at the shortest wavelength $\lambda_N$. Error backpropagation and stochastic gradient descent were employed to optimize the thickness values of the diffractive layers by minimizing a custom loss function defined based on the mean-squared error (MSE) between the diffractive output quantitative phase images and their ground truth across all the wavelength channels. More information about the training process is provided in the Appendix. To numerically demonstrate the feasibility of our diffractive system, we devised several diffractive multiplane QPI processors, focusing on the impact of input object lateral overlap, i.e., where the FOVs of the input objects located at different axial planes overlap in the $x$ and $y$ directions. The occurrence of lateral overlap, resulting in nonuniform illumination of the subsequent planes, can deteriorate the quality of the QPI reconstructions. To explore the dynamics between adjacent input phase objects during image reconstruction and assess our design's capability of handling laterally overlapping objects at different axial planes, we adapted our training models to various assumptions about the lateral separation between the different axial planes. The five input phase objects were uniformly distributed on the circumference of a circle with a radius of $r$ from the center, as shown in Fig. S1 in the Supplementary Material. A maximum lateral separation distance $r_{\max}$ was set, ensuring that the input FOVs are not distributed beyond the boundary of a diffractive layer. Building on this, we developed and trained six distinct diffractive designs by adjusting the lateral separation distance ($r$) of the input planes across values spanning $r = 0$ to $r = r_{\max}$, as illustrated in Fig. 2(a).
These different configurations of input object arrangements, covering conditions that range from a complete spatial overlap ($r = 0$) of the objects to a complete lateral separation ($r = r_{\max}$), enabled us to investigate the impact of $r$ on the system's QPI performance. Apart from the varying input lateral separations, these diffractive multiplane QPI designs share identical input specifications, featuring the same number of input planes with $N = 5$. All the diffractive designs are composed of 10 diffractive layers, where each layer contains a 2D array of trainable diffractive features. The entire diffractive volume spans a compact axial length and lateral size, forming a system that can be monolithically integrated with a complementary metal–oxide–semiconductor (CMOS) image sensor. At the output plane of these diffractive designs, a monochrome image sensor is assumed. A unit magnification is selected between the object/input plane and the monochrome output/sensor plane, resulting in the same size for the output signal region as for the input FOV of each axial plane. After their deep-learning-based optimization, the thickness profiles of the diffractive layers for each of the six designs are depicted in Fig. S2 in the Supplementary Material.

2.2. Performance Analysis of Wavelength-Multiplexed Diffractive Processors for Multiplane QPI

After the training stage, we first conducted blind testing of the resulting diffractive processor designs through numerical simulations. To evaluate the multiplane QPI performance of these designs, we constructed a test set comprising 5000 phase-only objects that were never used in the training process. These objects were synthesized by randomly selecting images from the MNIST data set and encoding them into the phase channels of the $m$'th input object with a dynamic phase range of $(0, 2\pi\alpha_{\text{test},m})$. Mirroring the approach used during the training, the phase ranges in the testing were derived from a thickness range of $[0, \alpha_{\text{test}}\lambda_N]$, consistent across the input planes, where $\alpha_{\text{test}}$ stands for the testing thickness range parameter. The corresponding diffractive QPI output examples of the blind testing results are visualized in Fig. 2(b). Here, the Pearson correlation coefficient (PCC) was utilized to quantify the performance of these diffractive processor designs. From the output examples shown in Fig. 2(b), it is evident that a large lateral separation distance among the different axial planes ensures a decent reconstruction of the inputs, yielding high-fidelity output images. Conversely, a smaller lateral separation distance results in diminished image contrast and the introduction of some imaging artifacts. We noted a consistent degradation in the image quality as the input lateral separation distance $r$ was reduced from $r_{\max}$ to 0. This decrease in the QPI performance can be attributed to two main factors: first, laterally overlapping phase objects perturb the illumination wavefronts that reach the subsequent axial planes, deviating from the approximately uniform illumination assumed at each plane; and second, the resulting intermingling of object information across the planes introduces cross talk among the wavelength channels, as analyzed further in Sec. 2.4.
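Before quantifying these effects, it is useful to make the thickness-to-phase encoding of Eqs. (5) and (6) concrete. The short sketch below (an illustrative helper of our own, assuming the dispersion-free refractive index of 1.5 used in our numerical analyses) converts a normalized grayscale image into the wavelength-dependent phase profiles of one object plane:

```python
import numpy as np

def encode_thickness_to_phase(img, alpha, wavelengths, n=1.5):
    """Encode a grayscale image (normalized to [0, 1]) as object phase maps.

    alpha:       thickness range parameter; thickness spans [0, alpha * lambda_N]
    wavelengths: [lambda_1, ..., lambda_N], ordered longest to shortest
    n:           refractive index of the object material (assumed constant here)
    """
    lambda_N = wavelengths[-1]
    thickness = img * alpha * lambda_N  # thickness map within [0, alpha * lambda_N]
    # Eq. (5): phase delay accumulated at wavelength lam is 2*pi*(n - 1)*h / lam
    return [2 * np.pi * (n - 1) * thickness / lam for lam in wavelengths]
```

Note that the same physical thickness map yields a larger phase contrast at shorter wavelengths, which is exactly the behavior captured by the phase contrast parameter $\alpha_m$ in Eq. (6).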
When the testing thickness range is larger than the training thickness range, i.e., $\alpha_{\text{test}} > \alpha$, the output QPI results of the laterally overlapping configurations were found to be degraded. These output images can hardly be recognized because of the stronger phase perturbations caused by the larger phase contrast at each object plane. On the contrary, the output QPI measurements of the laterally separated configurations still present a good image fidelity at the output of the wavelength-multiplexed diffractive QPI processor for $\alpha_{\text{test}} > \alpha$. These results highlight the diffractive design's ability to process and image phase-only objects with a larger thickness and higher phase contrast than encountered during the training phase. We also evaluated the resulting PCC values in Fig. 3, which reflect the examples shown in Fig. 2(b). As revealed in Fig. 3(a), the design with complete lateral separation of the inputs ($r = r_{\max}$) achieved high output PCC scores across all the imaging channels when $\alpha_{\text{test}} < 1$, corroborating the observations from the visual inspections. When the input phase objects were completely laterally overlapping ($r = 0$), the output PCC values dropped, whereas the reconstructed digit images could still be discerned. When $\alpha_{\text{test}}$ increased to 1, as shown in Fig. 3(b), the performance of the design with complete lateral separation of the inputs ($r = r_{\max}$) remained at a high level. However, for the completely overlapping input objects ($r = 0$), the PCC scores were reduced further. The PCC values quantified for the individual objects also showed that the axial planes closer to the front of the spatial sequence exhibit better imaging performance, consistent with the previously shown output images. To further explore the impact of varying $\alpha_{\text{test}}$ on the QPI performance, we extended our analysis across an array of $\alpha_{\text{test}}$ values {0.2, 0.4, 0.6, 0.8, 1, 1.2, 1.4, 1.6, 1.8}, all tested against the same diffractive QPI model trained with $\alpha = 1$, as shown in Fig. 3(c). It was found that the diffractive output QPI performance peaked at $\alpha_{\text{test}} = 1$. Below this peak, when $\alpha_{\text{test}} < 1$, a decrease in PCC was evident, demonstrating the challenge of resolving objects with significantly lower phase contrast. In scenarios where the test thickness exceeded the training range ($\alpha_{\text{test}} > 1$), our designs demonstrated some decrease in performance, especially as $\alpha_{\text{test}}$ approached 1.8. This decline can be attributed primarily to two factors. First, significant phase contrast deviations between the training and testing expose the diffractive processor to unseen contrast levels, presenting generalization challenges. Second, the inherently linear nature of our diffractive processor (except for the intensity measurements at the output plane) faces approximation challenges under larger input phase contrast values due to the increased contributions of the nonlinear terms in the phase-to-intensity transformation task. Overall, our diffractive processor designs present decent generalization to various thicknesses and phase contrast values, covering a wide span of $\alpha_{\text{test}}$ values using a fixed training thickness range parameter of $\alpha = 1$. To shed more light on the impact of lateral separation, we conducted an additional analysis examining the output PCC values as a function of the input lateral separation distance ($r$). As shown in Fig. S3a in the Supplementary Material, the three curves correspond to different testing thickness range parameters $\alpha_{\text{test}}$, with values of {0.6, 1, 1.4}.
A consistent trend of improved image quality with increasing $r$ was observed for all three curves, indicating that a reduced overlap between the objects leads to a higher image fidelity in the output QPI reconstructions. Additionally, to quantify the phase reconstruction accuracy, we measured the phase mean absolute error (MAE) of our diffractive outputs. As shown in Fig. S3b in the Supplementary Material, the phase error for $\alpha_{\text{test}} = 0.6$ is consistently lower compared to $\alpha_{\text{test}} = 1$ and 1.4. Specifically, at $r = r_{\max}$, the phase error was 4.1% for $\alpha_{\text{test}} = 0.6$, and it increased to 4.9% and 7.6% for $\alpha_{\text{test}} = 1$ and 1.4, respectively. Furthermore, the phase error gradually decreased with increased lateral separation $r$. For example, for $\alpha_{\text{test}} = 0.6$, the phase error reduces from 12.4% at $r = 0$ to 4.1% at $r = r_{\max}$. These findings demonstrate that our diffractive design not only achieves low phase-error values but also maintains acceptable phase-imaging performance, even in scenarios where the objects overlap laterally.

2.3. Impact of Axial Separation of Input Object Planes on the Multiplane QPI Performance

Beyond the lateral arrangement of the input phase objects, the axial distance separating these input planes is another crucial factor that influences the wavelength-multiplexed QPI performance of our diffractive processors. To investigate this, we expanded our analysis of the output QPI performance by changing the input axial separation distance ($\Delta z$) across a range of values, as shown in Fig. 4. Here, the testing thickness range parameter $\alpha_{\text{test}}$ was fixed at 0.6. Figure 4(a) reveals that, when the axial distance decreases, the PCC values of the laterally overlapping phase inputs ($r = 0$) drop. This decrease is expected due to the limited axial resolution of the diffractive QPI processor, leading to a degraded multiplane QPI performance for smaller axial separations. The output visualizations in Fig. 4(b) corroborate these findings, displaying a noticeable decrease in image fidelity for multiplane QPI as $\Delta z$ decreases. Conversely, in scenarios with laterally separated inputs ($r = r_{\max}$), the PCC values remained consistently high (around 0.993) even when the axial distance was substantially reduced. When the input phase objects were partially overlapping, the PCC values also remained stable as $\Delta z$ decreased. This suggests that the diffractive processor maintains its effectiveness in phase reconstruction with laterally separated input phase objects, regardless of the axial distance $\Delta z$. The output examples with varying $\Delta z$ values further reinforce this conclusion, showing high-quality reconstructions across different axial separations. These observations underscore the diffractive processor's capability in the multiplexed imaging of phase objects, especially when the inputs are not laterally overlapping.

2.4. Cross Talk among Imaging Channels

Ideally, our diffractive multiplane QPI processor should perform a precise phase-to-intensity transformation for each input plane independently. However, accurately channeling the spatial information of the individual object planes into their respective wavelength channels is challenging, as the features of the input objects positioned along the axial sequence can perturb the wave fields generated or modulated by the other object planes. This results in complex fields that, upon entering the diffractive processor, contain intermingled information from different object planes.
Consequently, information from one object plane can negatively impact the imaging process of another, especially when the input phase objects laterally overlap, leading to cross talk among the imaging channels associated with the different object planes. To delve deeper into the impact of this cross talk among the channels, we conducted a numerical analysis by individually testing each input sample plane across all five wavelengths. By placing a phase object in one of the five object planes and leaving the remaining planes vacant, we could directly assess how the phase information from one input plane, corresponding to a specific output wavelength channel, affects the other output channels. From the visualization of the output quantitative phase reconstructions shown in Fig. 5, it is clear that when the inputs were laterally separated with $r = r_{\max}$, the output images of the diffractive multiplane QPI processor at the target wavelength aligned well with the ground-truth images, and the signal leakage into the other wavelength channels was negligible. This result highlights the diffractive processor's proficiency in handling and mitigating the cross talk between the different wavelength channels. However, as the input separation distance decreased, a noticeable cross talk was observed across the different channels. This challenge became more pronounced when all the input objects were coaxially aligned at the center without any lateral separation ($r = 0$), resulting in more significant cross talk as well as a suboptimal quality of the QPI reconstructions. These findings confirm the diffractive processor's capability to correctly route the signals and mitigate the cross talk effectively, while acknowledging its limitations when the input objects present a notable lateral overlap across the different axial planes.

2.5. Lateral Resolution and Phase Sensitivity Analysis

To gain deeper insights into the diffractive multiplane QPI processor's capability to resolve the phase images of input objects, we further investigated the lateral imaging resolution of our processor designs across different levels of input thickness range. To standardize our tests, we created binary phase grating patterns with a fixed linewidth, and selected the testing thickness range parameter $\alpha_{\text{test}}$ from {0.2, 0.6, 1} and the input lateral separation distance $r$ from three representative values, as shown in Fig. 6. The results in Figs. 6(a) and 6(b) show that the diffractive QPI processors with $N = 5$ input planes effectively resolved the test phase gratings for the laterally separated and partially overlapping configurations, even with thickness ranges that differed from the training thickness range, such as $\alpha_{\text{test}} = 0.2$ or 0.6. In cases where $r = 0$, i.e., the test objects are positioned coaxially and exhibit complete lateral overlap, the processor can still resolve the grating patterns with $\alpha_{\text{test}} = 0.6$ or 1, as shown in Fig. 6(c). However, at a thinner thickness or a smaller testing phase contrast level, e.g., $\alpha_{\text{test}} = 0.2$, the resolution of the diffractive QPI outputs became worse. The output examples revealed that the diffractive processor under this condition falls short in reconstructing the last two input planes (i.e., $m = 4$ and $m = 5$). Our analyses revealed that the diffractive multiplane QPI designs could clearly resolve spatial phase features at the tested grating linewidth across all five input planes, particularly when the input phase objects had a thickness range parameter of $\alpha_{\text{test}} \geq 0.6$.
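For reference, the two image-quality metrics used throughout these analyses, the PCC and the normalized phase MAE (formally defined in the Appendix), can be computed as in the following minimal sketch (illustrative function names of our own):

```python
import numpy as np

def pcc(recon, gt):
    """Pearson correlation coefficient between a reconstructed phase image
    and its ground truth (both 2D arrays)."""
    return np.corrcoef(recon.ravel(), gt.ravel())[0, 1]

def normalized_phase_mae(recon, gt, alpha_test_m):
    """Mean absolute phase error, normalized by the dynamic range of the input
    phase contrast, 2*pi*alpha_test_m (see Eq. (18) in the Appendix)."""
    return np.mean(np.abs(recon - gt)) / (2 * np.pi * alpha_test_m)
```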
2.6. External Generalization Performance of Wavelength-Multiplexed Diffractive Processors for Multiplane QPI

The diffractive multiplane QPI processors reported so far were trained on a data set that included handwritten digits and grating-like spatial patterns. To further assess how well our diffractive multiplane QPI processors generalize to different types of spatial features, we conducted an additional numerical analysis using Pap smear microscopy images, which have significantly different spatial characteristics compared to our training data set. In addition, we used various testing thickness range parameters ($\alpha_{\text{test}}$), including {0.2, 0.4, 0.6, 0.8, 1, 1.2, 1.4, 1.6, 1.8}, with the aim of examining the diffractive QPI processor's adaptability to new spatial features with previously unseen object thicknesses or phase contrasts, covering both $\alpha_{\text{test}} < \alpha$ and $\alpha_{\text{test}} > \alpha$. These blind test results are showcased in Fig. 7(b), revealing a decent agreement between the diffractive multiplane QPI results and the corresponding ground-truth images. We also calculated the image-quality metrics across the entire Pap smear test data set [see Fig. 7(a)]. The QPI performance remained robust when the testing thickness range matched the training condition ($\alpha_{\text{test}} = \alpha = 1$) and across the neighboring $\alpha_{\text{test}}$ values, while starting to exhibit more degradation toward the largest thickness ranges tested. Overall, these external generalization test results demonstrate that our diffractive multiplane QPI design is not limited to specific object types or phase features but can serve as a general-purpose multiplane quantitative phase imager for various kinds of objects.

2.7. Output Power Efficiency of Diffractive Multiplane QPI Processors

All the diffractive multiplane QPI processor designs presented so far were optimized without considering the output power efficiency, resulting in relatively low diffraction efficiencies, mostly lower than 0.1%. When the output power efficiency becomes a concern in a given diffractive processor design, an additional diffraction efficiency-related loss term60,67,68 can be introduced into the training loss function to balance the trade-off between the task performance and the signal-to-noise ratio. We used the same approach to achieve a balance between the QPI performance and the diffraction efficiency of the diffractive processor (see the Appendix for details). In Fig. 8, we present a comprehensive quantitative analysis of this trade-off between the multiplane QPI performance and the output diffraction efficiency. For this comparison, we used two of the designs with different input lateral separations (as shown in Fig. S2 in the Supplementary Material), both of which were originally trained without any diffraction efficiency penalty terms and accordingly exhibited relatively low output diffraction efficiencies. Maintaining the same structural parameters and the same training/testing data sets, we retrained these wavelength-multiplexed diffractive multiplane QPI designs from scratch; this time, we incorporated varying degrees of diffraction efficiency penalty terms into the training loss functions, resulting in diffractive designs that demonstrated significantly enhanced output diffraction efficiencies. Figure 8(a) depicts the resulting PCC values of these new designs in relation to their output diffraction efficiencies.
When compared to the first of these original designs, the new designs showed an approximately 90-fold increase in the output diffraction efficiency. This major enhancement in the output diffraction efficiency was achieved with only a modest reduction in the multiplane QPI performance, evidenced by a small decrease in the PCC values. Similarly, compared to the other original design shown in Fig. S2 in the Supplementary Material, the new designs subjected to the diffraction efficiency penalty showcased a substantially improved diffraction efficiency with only a marginal decrease in the PCC values. Moreover, as the output examples in Fig. 8(b) illustrate, the diffractive processors, even with their enhanced output power efficiencies, still effectively reconstruct the multiplane QPI images with a decent image quality. These results reveal that, by properly incorporating an efficiency-related loss term into the optimization process, our wavelength-multiplexed diffractive multiplane QPI processors can be optimized to maintain an effective balance between the QPI performance and the power efficiency, which is important for practical applications of the presented framework. This training approach to boost the output power efficiency was also used in our experimental proof-of-concept demonstration, which is reported next.

2.8. Experimental Validation of a Wavelength-Multiplexed Diffractive Multiplane QPI Processor

We conducted an experimental demonstration of our diffractive multiplane QPI processor using the terahertz part of the spectrum. Because of the larger wavelength of terahertz radiation, the 3D fabrication and alignment of the resulting diffractive layers are easier compared to shorter wavelengths, such as the IR and visible parts of the spectrum. As illustrated in Fig. 9(a), we created an input aperture to better control the illumination wavefront. The experimental configuration includes two input planes ($N = 2$), each able to host a phase-only object characterized by its thickness range parameter, which was set empirically. This setup serves as a proof-of-concept demonstration of our multiplane QPI system, wherein only one of the two input planes contains a phase object at any given time. In our experiments, a diffractive multiplane QPI system composed of three phase-only dielectric diffractive layers ($K = 3$) was employed. This diffractive system converted the phase information of the input planes (axially separated by 20 mm) into intensity distributions captured at the output plane, where each illumination wavelength ($\lambda_1 = 0.8\ \mathrm{mm}$, $\lambda_2 = 0.75\ \mathrm{mm}$) was assigned to one axial plane, performing QPI through a phase-to-intensity transformation at each wavelength. Structural details of this experimental arrangement are provided in Fig. 9(a) and the accompanying Appendix. To optimize our experimental multiplane QPI design, we synthesized objects to train the diffractive design through deep learning. Our training data set comprised 10,000 sparse binary images, with each image featuring two random pixels set to one and the remainder set to zero. These binary images were encoded into phase-only objects with a phase range of $(0, 2\pi\alpha_m)$, where the phase contrast parameter values reached 0.94 at $\lambda_1 = 0.8\ \mathrm{mm}$ and 1 at $\lambda_2 = 0.75\ \mathrm{mm}$. These phase contrast values were derived from the preset thickness range together with the measured refractive index of the 3D-printing material at these wavelengths. Throughout the training process, for each iteration, one input plane was designated for the placement of the phase object, while the other was left vacant. The optimized phase profiles of the diffractive layers are displayed in the upper part of Fig. 9(b).
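A minimal sketch of how such a sparse binary training data set can be generated is shown below (illustrative code of our own; the image dimensions here are placeholders, with the actual pixel counts following the experimental FOV described in the Appendix):

```python
import numpy as np

def make_sparse_binary_dataset(num_images=10000, height=8, width=8, seed=0):
    """Generate binary images, each with exactly two randomly chosen pixels
    set to one and the remainder set to zero, mirroring the experimental
    training data described above. The 8x8 size is only a placeholder."""
    rng = np.random.default_rng(seed)
    images = np.zeros((num_images, height, width), dtype=np.float32)
    for img in images:
        idx = rng.choice(height * width, size=2, replace=False)  # two distinct pixels
        img.flat[idx] = 1.0
    return images
```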
After the training, the resulting diffractive layers were 3D-printed; images of the fabricated layers are showcased in the lower part of Fig. 9(b). After the 3D assembly and alignment of these fabricated layers, we employed a terahertz source and a detector to record the intensity distributions at the output plane. Detailed schematics and photographs of this experimental setup are presented in Fig. S4 in the Supplementary Material and Fig. 9(c), respectively. In the experimental phase, our system was tested with eight distinct phase objects (never seen during the training), with the testing thickness range parameter set to match the preset training range. These objects were equally divided between the two input planes, which are axially separated by 20 mm, totaling four test phase objects per axial plane; they were also fabricated using 3D printing. Figure 9(d) delineates the experimental output imaging results of the diffractive multiplane QPI processor, which align closely with our numerically simulated output patterns. The object phase profiles on both input planes were accurately transformed into intensity variations at the output plane, with each pixel clearly distinguishable and matching the expected ground-truth phase profiles. These experimental results demonstrate the proof-of-concept capability of our diffractive design in conducting QPI across multiple planes using wavelength multiplexing.

3. Discussion

In the experimental proof-of-concept results presented earlier, we used relatively simple patterns as input phase objects. To numerically assess the system's capability to handle more structurally complex objects, akin to real-world conditions, we also investigated diffractive processor designs capable of retrieving multiplane QPI signals across the visible spectrum (400 to 650 nm). In these numerical analyses, we assumed that the diffractive layers were fabricated using two-photon polymerization-based 3D printing (Photonic Professional GT2, Nanoscribe GmbH, Germany), and a particular type of photoresist (IP-DIP, Nanoscribe GmbH)69 was selected as the diffractive layer material due to its high transparency and prevalent usage in the visible range. This diffractive design adopted the same configuration, training data, and methods used for the designs shown in Fig. S2 in the Supplementary Material, with the diffractive feature sizes scaled based on the operational wavelengths. Accordingly, the axial distance between the input phase objects was scaled proportionally. Fabrication constraints for such a diffractive processor operating in the visible range restrict the achievable axial thickness levels of each layer to steps of hundreds of nanometers, which can degrade the performance compared to the ideal (continuous) numerical design. Therefore, we analyzed the performance of this visible diffractive processor design under a limited phase bit-depth selected from [16, 8, 6, 5, 4, 3, 2]. As shown in Fig. S5a in the Supplementary Material, the diffractive processor designed for the visible part of the spectrum maintains its QPI performance despite a reduction in the phase bit-depth of the diffractive layers. For example, for a 16-bit phase modulation design, the output PCC was 0.991 for the laterally separated ($r = r_{\max}$) case and 0.966 for the completely overlapping ($r = 0$) case. When the phase bit-depth was reduced to 4, the PCC values showed only a minor decrease: 0.980 for the $r = r_{\max}$ case and 0.943 for the $r = 0$ case. The results reported in Fig.
S5b in the Supplementary Material further support these conclusions and demonstrate that our diffractive processor can maintain a high-quality QPI performance under phase quantization limitations, down to a phase bit-depth of 4. In practical implementations of diffractive QPI processors, another challenge is the mechanical misalignment between the different layers, which can cause the optical waves to be modulated by the diffractive layers in an undesired way, leading to results that deviate from the designed performance. To investigate this, we used the same configuration of the visible diffractive multiplane QPI processor and subjected the diffractive layers to various levels of random displacements/misalignments, either in the lateral directions ($x$, $y$) or the axial direction ($z$), sampled from uniform random distributions. As shown in Figs. S6a and S7a in the Supplementary Material, the performance of the diffractive processor (indicated by the blue curves) peaks at an output PCC of 0.991 under the ideal alignment case, but it degrades with increasing lateral or axial misalignments. To address this misalignment sensitivity, a "vaccination" strategy can be applied during the optimization process by incorporating random misalignments into the numerical forward model of the system. Specifically, the 3D random displacements of the diffractive layers ($\Delta x$, $\Delta y$, and $\Delta z$) are modeled using random variables that change from iteration to iteration during the training process, providing resilience against such displacements with minimal performance loss. The efficacy of this strategy is demonstrated in Figs. S6 and S7 in the Supplementary Material, where new "vaccinated" diffractive multiplane QPI processors were trained under varying degrees of axial and lateral misalignments. As shown in Figs. S6a and S7a in the Supplementary Material, these vaccinated models (shown as the green and orange curves) maintain a good QPI performance across different levels of misalignment. For instance, under a substantial random misalignment level, the PCC value of the vaccinated design remains at 0.979, whereas it decays to 0.427 for the unvaccinated baseline diffractive QPI model. Figures S6b and S7b in the Supplementary Material also illustrate the outputs of the vaccinated diffractive QPI design, maintaining a good agreement with the ground-truth phase images under various degrees of random misalignment. These analyses highlight the effectiveness of our vaccination strategy and the diffractive QPI processor's capability to withstand unknown random misalignments (a code-level sketch of this training strategy is provided below). In the results and analyses presented above, we have unveiled diffractive multiplane QPI processor designs that utilize wavelength multiplexing to encode the phase information of multiple input objects, which can be implemented through the sequential imaging of different wavelength channels using, for example, a monochrome image sensor equipped with a spectral filter, each time adjusted to a unique wavelength; alternatively, a wavelength-scanning light source can also be used for the same multiplane QPI.
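As a code-level illustration of the vaccination strategy described above, a minimal PyTorch-style sketch is given below (our own simplified illustration; `forward_model` stands in for the differentiable optical forward model of the diffractive processor, and the misalignment bounds `dxy_max` and `dz_max` are hypothetical training hyperparameters):

```python
import torch

def vaccinated_training_step(forward_model, layers, phase_objects, targets,
                             loss_fn, optimizer, dxy_max, dz_max):
    """One training iteration with random 3D layer displacements injected
    into the differentiable forward model ('vaccination')."""
    optimizer.zero_grad()
    # Draw a fresh random (dx, dy, dz) displacement for every diffractive layer;
    # these change from iteration to iteration during the training
    displacements = [
        (torch.empty(1).uniform_(-dxy_max, dxy_max),
         torch.empty(1).uniform_(-dxy_max, dxy_max),
         torch.empty(1).uniform_(-dz_max, dz_max))
        for _ in layers
    ]
    # Propagate through the randomly displaced layers and measure the outputs
    outputs = forward_model(layers, phase_objects, displacements)
    loss = loss_fn(outputs, targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the displacements are resampled at every iteration, the optimized layers cannot rely on any single alignment state, which is what confers the reported resilience.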
We would like to emphasize that our diffractive designs are not confined to multishot, sequential image-capture configurations, where the diffractive outputs for the individual input planes are captured separately; our design framework can be further optimized to create a snapshot multiplane QPI system by building the functionality of a spectral filter array into the diffractive processor.65,70 This functionality allows the multiplane phase signals at the diffractive processor's output to be partitioned, following a virtual filter-array pattern, enabling a monochrome image sensor to obtain signals from distinct object planes within a single frame. After a standard image demosaicing process, each QPI channel corresponding to a unique axial plane can be retrieved from a single intensity-only image. It is crucial to highlight that our diffractive multiplane QPI design is tailored for a 3D stack of phase objects with weak scattering and absorption properties. This scenario meets the criterion of the first Born approximation,71 allowing a 3D phase-only object to be modeled as a discrete set of 2D phase modulation layers, which are assumed to be connected by free-space propagation and approximately uniform illumination at each axial plane. The diffractive optical processor, owing to its capacity for performing arbitrary complex-valued linear transformations between an input and an output FOV,59 emerges as a viable approach for phase reconstruction and QPI under the first Born approximation. As the lateral overlap among the axial planes that contain the phase-only input objects increases, the 3D QPI problem starts to deviate from the first Born approximation because each object imposes unknown wavefront distortions on the illumination reaching the other axial planes, where other unknown objects are located; this makes the problem nonlinear due to the interaction among the scattered fields that represent the object information at the different planes. This physical cross talk, and the resulting deviation from a linear coherent system approximation, is at the heart of the QPI performance degradation observed for the laterally overlapping designs ($r = 0$) when compared to the performance of the laterally separated ($r > 0$) designs; the latter diffractive designs provide a better fit to the first Born approximation, and the resulting fields at the output FOV of a diffractive QPI processor can be approximated as a linear superposition of the individual fields resulting separately from each axial plane. Having emphasized these points in relation to the first Born approximation, we should also note that our numerical forward model does not make any such approximations; in fact, it precisely models the object-to-object cross-talk fields for each case, taking all these nonlinear terms into account in the analyses and training/testing reported in this paper. To further increase the performance of the quantitative phase images and the spectral multiplexing factor ($N$), one would require a deeper diffractive architecture with more trainable degrees of freedom. Both theoretical analyses and empirical studies established earlier56,58,59,61,62 have substantiated that increasing the total number of trainable diffractive features within a diffractive processor can improve its processing capacity and inference accuracy,62,72 also achieving a significantly better diffraction efficiency at the output FOV.
A particularly effective design strategy here involves increasing the number of layers rather than the number of diffractive features per layer, which has been shown not only to boost the diffraction efficiency but also to achieve a more optimal utilization of the diffractive features by enhancing the optical connectivity between successive layers.59,68,73 By increasing the number of diffractive layers (forming a deeper diffractive architecture), the performance of our wavelength-multiplexed diffractive QPI processor can be further enhanced to perform the desired phase-to-intensity transformations more accurately across an even larger number of axial planes and to facilitate multiplane QPI reconstructions with an even higher spatial resolution. In our work, the input depth information is encoded into a specific set of wavelength channels at the output plane, and the design of the wavelength assignment could affect the QPI reconstruction accuracy. Our wavelength assignments were informed by the understanding that the input planes/phase objects at the axially deeper positions (closer to the diffractive layers) are subject to more distorted illumination wavefronts due to the phase perturbations caused by the other phase objects in front of them. Therefore, shorter wavelengths were assigned to the more difficult-to-reconstruct axial planes that are closer to the diffractive layers. This is also supported by previous work and empirical evidence, which revealed that a diffractive processor operating at shorter wavelengths can control larger degrees of freedom within the diffractive layers due to the diffraction limit of light, resulting in a better diffractive processing capability.62,65,70 As a result, in our wavelength-multiplexed multiplane QPI approach, shorter wavelengths were assigned to the deeper planes to better mitigate the cross talk from the earlier object planes and enhance the QPI performance at the output. We quantitatively analyzed the efficacy of this wavelength assignment strategy by comparing it with an alternative design that reverses the wavelength assignment order, i.e., longer wavelengths are assigned to the axially deeper planes. Figure S8a in the Supplementary Material demonstrates that our wavelength assignment yields more uniform and higher output PCC values than this alternative diffractive design with a reversed wavelength assignment. This trend is further highlighted in the phase-error analysis shown in Fig. S8b in the Supplementary Material, where our design achieved an average phase error of 4.1%, which is lower than the 5.0% phase error corresponding to the reversed wavelength assignment. These results confirm that our wavelength assignment strategy not only achieves a uniform performance across all the input planes but also improves the phase accuracy of our multiplane QPI reconstructions. As another key factor, we also investigated the QPI performance of our diffractive imager designs as a function of the illumination wavelength and bandwidth. For this quantitative analysis, we selected the object plane positioned in the middle of our object volume, which was designated to the middle wavelength channel ($\lambda_3$). As shown in Fig. S9a in the Supplementary Material, the peak QPI performance occurs when the testing wavelength matches the training wavelength, while the QPI performance gradually declines as the testing wavelength deviates from the training wavelength. The same trend can also be observed in the output examples shown in Fig.
S9b in the Supplementary Material. We also tested the same QPI processor under broadband illumination, as shown in Fig. S9c in the Supplementary Material. The average PCC of the output QPI images reduced to 0.843 and 0.676 for broadband spectral illumination uniformly covering a narrower and a broader spectral band around the design wavelength, respectively. The examples of the test objects shown in Fig. S9c in the Supplementary Material further illustrate the performance of the diffractive QPI processor under such broadband illumination, demonstrating its robustness and adaptability to spectral conditions not covered during its training. Note that, in this work, we assumed spatially coherent illumination, which is common in the measurement of the phase information of objects, especially in tomography and microscopy applications. Nevertheless, previous studies on diffractive processors have also reported that spatially incoherent or partially coherent illumination can be incorporated into the optical forward model of a diffractive processor for the optimization of its inference.74,75 By doing so, one can further broaden the diffractive QPI processor's applicability to scenarios where ideal coherent light sources are unavailable. Our presented system is optimized for the transparent objects commonly used in QPI. In terms of lateral resolution, our diffractive QPI processor could resolve spatial phase features at the tested grating linewidth across all five input planes; since the design scales with the illumination wavelength, this corresponds to a proportionally finer feature size within the visible spectrum. To further increase the resolution and the 3D volume of our diffractive QPI reconstructions, one could design a much wider and deeper diffractive architecture with significantly more degrees of freedom, which would require larger computational resources during training. It is important to note that our training process is a one-time effort, and the inference is performed all-optically, without any digital computation. By leveraging the versatility of our diffractive design, we can address the imaging needs of both thin, transparent objects and more complex 3D structures, making our technology potentially suitable for a wide array of scientific and industrial applications. Our diffractive optical processor design could also potentially support applications such as the optical sectioning of thicker objects by collecting information from specific planes while optically filtering out the background signals. However, the current demonstrations reported in this work show that our processor is most effective in scenarios where the objects are sparsely distributed or partially overlapping, following a linear coherent system approximation at each wavelength. Notably, the presented multiwavelength diffractive processors maintain their accuracy in reconstructing quantitative phase images for multiple distinct planes irrespective of potential variations in the intensity of the broadband light sources used for illumination. Furthermore, these diffractive optical processors are not limited to the terahertz spectrum. By choosing suitable nanofabrication techniques, including, e.g., two-photon polymerization-based 3D printing,76–78 it is possible to physically scale these diffractive optical processors to operate across different segments of the electromagnetic spectrum, including the visible and IR wavelengths. Such scalability and the passive nature of our diffractive processors pave the way for more efficient and compact on-chip phase-imaging and sensing devices, promising a transformative impact for biomedical imaging/sensing and materials science.
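As a side note on how such broadband tests can be modeled: for spatially coherent but temporally broadband illumination measured by an intensity-only sensor, the recorded image can be approximated as the incoherent (power) sum of the monochromatic output intensities across the band. A minimal sketch under this assumption (illustrative names of our own; `forward_model` stands in for the coherent monochromatic forward model):

```python
import numpy as np

def broadband_output(forward_model, phase_objects, band, num_samples=50):
    """Approximate a broadband sensor measurement by uniformly sampling the
    illumination band and averaging the monochromatic output intensities."""
    wavelengths = np.linspace(band[0], band[1], num_samples)
    intensity = 0.0
    for lam in wavelengths:
        field = forward_model(phase_objects, lam)  # complex output field at lam
        intensity = intensity + np.abs(field) ** 2
    return intensity / num_samples
```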
Finally, we would like to note that, compared to some of the earlier QPI work that utilized illumination wavelength and angular diversity to improve the lateral and/or axial resolution of phase microscopy and tomography,79–81 the presented approach stands out in that its quantitative phase reconstructions are performed through passive light–matter interactions, without the need for a digital reconstruction algorithm, which saves image reconstruction time and computing power. The all-optical phase-to-intensity transformations demonstrated in this work are multiplexed over different planes of the sample volume using wavelength encoding, as demonstrated in Sec. 2 (Results).

4. Appendix: Materials and Methods

4.1. Optical Forward Model of Wavelength-Multiplexed Diffractive QPI Processors

To numerically simulate a diffractive optical processor, each diffractive layer was treated as a thin optical element that modulates the complex field of the incoming coherent light. The complex transmission coefficient of the $q$'th diffractive feature of the $k$'th layer is determined by the local material thickness, $h_q^k$, and can be described as

$$t_q^k(\lambda_m) = \exp\!\left(-\frac{2\pi\,\kappa(\lambda_m)\,h_q^k}{\lambda_m}\right) \exp\!\left(j\,\frac{2\pi\,[n(\lambda_m) - 1]\,h_q^k}{\lambda_m}\right). \quad (7)$$

In this equation, $n(\lambda_m)$ and $\kappa(\lambda_m)$ are the refractive index and extinction coefficient, respectively, of the chosen dielectric material at $\lambda_m$. These values correspond to the real and imaginary parts of the complex refractive index $\tilde{n}(\lambda_m) = n(\lambda_m) + j\kappa(\lambda_m)$. For the experimentally tested diffractive multiplane QPI processor, $n$ and $\kappa$ were set based on the measurements from a terahertz spectroscopy system.63 As for the numerically analyzed diffractive multiplane QPI designs in the terahertz range, $n$ was kept the same, while $\kappa$ was set to 0. For the diffractive multiplane QPI designs within the visible range, $n$ was set based on the dispersion of the IP-DIP photoresist, and $\kappa$ was set to 0, as the absorption of this material within the visible spectrum is negligible. The thickness of each diffractive feature combines a constant base and a variable/learnable part, as shown in

$$h = h_v + h_b, \quad h_v \in [0, h_{\max}], \quad (8)$$

where $h_v$ is the adjustable thickness of each diffractive feature, constrained within the range $[0, h_{\max}]$. In all the diffractive designs demonstrated in the terahertz range, including the numerically simulated diffractive models and the experimentally validated diffractive model, $h_{\max}$ is 1.4 mm, providing full phase modulation from 0 to $2\pi$ for the longest wavelength. The base thickness $h_b$, empirically set as 0.2 mm, provides the substrate (mechanical) support for all the diffractive features. In the diffractive QPI designs for the visible range, $h_{\max}$ was selected as 1400 nm, and the base thickness was set as 1000 nm. To simulate the propagation of the coherent optical fields in free space between the layers (including the input object planes, the diffractive layers, and the output plane), we applied the angular spectrum approach.54 The field reaching the $(k+1)$'th layer, after modulation by the transmittance function $t^k$ of the $k$'th layer, is given by

$$u^{k+1}(x, y) = \mathcal{F}^{-1}\{\mathcal{F}\{u^k(x, y)\,t^k(x, y)\} \cdot H(f_x, f_y)\}, \quad (9)$$

where $\mathcal{F}$ and $\mathcal{F}^{-1}$ represent the 2D Fourier transform and its inverse operation, respectively. The transfer function of free-space propagation over a distance $d$ between two successive layers is given by

$$H(f_x, f_y) = \begin{cases} \exp\!\left(j\,\frac{2\pi d}{\lambda_m}\sqrt{1 - (\lambda_m f_x)^2 - (\lambda_m f_y)^2}\right), & f_x^2 + f_y^2 < 1/\lambda_m^2, \\ 0, & \text{otherwise}, \end{cases} \quad (10)$$

where $f_x$ and $f_y$ represent the spatial frequencies along the $x$ and $y$ directions, respectively. In our numerical simulations for all the diffractive designs demonstrated in this paper, we chose a fixed, wavelength-scaled spatial sampling period for the simulated complex fields, with the lateral size of the diffractive features on each layer selected accordingly.
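For reference, a minimal NumPy sketch of this angular spectrum propagation step, i.e., our own simplified illustration of Eqs. (9) and (10) assuming a square grid with uniform sampling, is given below:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, distance, dx):
    """Propagate a 2D complex field over `distance` in free space using the
    angular spectrum method of Eqs. (9) and (10).

    field:      2D complex array sampled on a square grid
    wavelength: illumination wavelength (same units as distance and dx)
    dx:         spatial sampling period of the grid
    """
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)  # spatial frequency coordinates
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    # Transfer function H(fx, fy); evanescent components are set to zero
    H = np.where(arg > 0,
                 np.exp(1j * 2 * np.pi * distance / wavelength
                        * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Cascading this propagation step with elementwise multiplications by the object transmittances of Eq. (11) and the layer transmittances of Eq. (7) reproduces the full forward model described in Sec. 4.2.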
The axial distance between consecutive layers, including both the diffractive layers and the input/output planes, was kept fixed within each design: one value for the numerical designs depicted in Fig. S2 in the Supplementary Material and another for the design used in our experimental validation, as illustrated in Fig. 9(a).

4.2. Numerical Implementation of Wavelength-Multiplexed Diffractive Multiplane QPI Processors

In our diffractive multiplane QPI processor design, multiple phase-only input objects are placed at different axial positions, indexed by $m = 1, 2, \ldots, N$. Each object features a phase profile $\phi_m(x, y)$ with a constant amplitude across the plane. The transmission through each of these phase objects/input planes is defined as

$$t_m(x, y) = \exp[j\phi_m(x, y)]. \quad (11)$$

Initially, a broadband, spatially coherent source illuminates the front phase object plane. As the light propagates through the axially stacked input planes, it is modulated by the phase objects, resulting in a complex field $u_m(x, y)$ at each phase object plane. This field is calculated using the angular spectrum approach after modulation by the object transmittance $t_m$, i.e., $u_{m+1}(x, y) = \mathcal{P}_{\Delta z}\{u_m(x, y)\,t_m(x, y)\}$, where $\mathcal{P}_{\Delta z}\{\cdot\}$ denotes the free-space propagation of Eqs. (9) and (10) over the axial separation $\Delta z$. Finally, after traversing all $N$ phase object planes, a cumulative multispectral complex field $i_m(x, y)$ is formed, which contains the desired phase information of the input objects. The resulting field then enters the diffractive multiplane QPI processor, where it undergoes a sequence of diffractive layer modulations and secondary wave formations, as elaborated in the previous subsection. This process ultimately results in a complex output field, denoted as $o_m(x, y)$, formed after all $K$ diffractive layers, where $K$ is the total number of diffractive layers. Upon normalization with the reference signal ($\sigma_m$) for each wavelength channel $\lambda_m$, the resultant output QPI signals can be obtained following Eq. (3). For the diffractive QPI processor designs depicted in Fig. S2 in the Supplementary Material, both the input 2D FOVs distributed in the input 3D volume and the output FOV were designed to have identical sizes and were discretized into equal numbers of pixels. To ensure an effective performance on the multiplane QPI task, every diffractive layer in these designs contains a dense 2D array of diffractive features. These diffractive QPI processors used in our numerical analyses operate in the terahertz spectral range with $N = 5$ wavelength channels. For the diffractive QPI processors in the visible range, the $N = 5$ wavelengths were selected within the 400 to 650 nm band. In the diffractive design used for our experimental validation, shown in Fig. 9(a), the input and output FOVs share identical dimensions, and each diffractive layer likewise consists of a 2D array of diffractive features. Here, the two wavelengths for the experimental validation were selected as $\lambda_1 = 0.8\ \mathrm{mm}$ and $\lambda_2 = 0.75\ \mathrm{mm}$.

4.3. Training Loss Function and Image Quality Metrics

For the optimization of our diffractive multiplane QPI processors, a normalized MSE-based loss function was formulated to penalize the structural errors between the output quantitative phase image and its ideal counterpart over each wavelength channel corresponding to an individual input plane. The loss for each channel is computed as

$$\mathcal{L}_m = \frac{1}{N_p} \sum_{(x, y) \in S} \left[\hat{\phi}_m(x, y) - \phi_m(x, y)\right]^2, \quad (12)$$

where $N_p$ denotes the number of pixels within the output signal area $S$. During the training stage, the overall loss function used for the diffractive multiplane QPI processor is determined by averaging these loss terms over all the spectral channels, thereby facilitating a concurrent optimization process across all the wavelength channels.
4.3. Training Loss Function and Image Quality Metrics

For the optimization of our diffractive multiplane QPI processors, a loss function based on the normalized MSE was formulated to penalize the structural errors between the output quantitative phase image and its ideal counterpart over each wavelength channel that corresponds to an individual input plane. The loss for the m-th channel can be written as

L_m = (1/N_p) Σ_{x,y} [ φ̂_{λ_m}(x, y) − φ_m^{GT}(x, y) ]²,

where φ̂_{λ_m} is the quantitative phase image decoded from the output intensity at wavelength λ_m, φ_m^{GT} is the corresponding ground-truth phase profile, and N_p is the number of pixels in the output FOV. During the training stage, the overall loss function used for the diffractive multiplane QPI processor is determined by averaging these loss terms over all the spectral channels, thereby facilitating a concurrent optimization process across all the wavelength channels. Consequently, the aggregate loss function can be expressed as

L_total = (1/M) Σ_{m=1}^{M} w_m L_m,   (13)

where w_m represents the dynamic channel balance weight assigned to each wavelength channel's loss. This mechanism was designed to balance the performance among different wavelength channels during the training. Initialized as 1, w_m undergoes adaptive adjustment in every training iteration based on the ratio of its channel loss to the average loss, i.e.,

w_m ← w_m · (L_m / L̄),

where L̄ denotes the average loss across all wavelength channels. According to this methodology, if a channel's loss is relatively low compared to the average, w_m will decrease automatically, dynamically reducing the weight of that channel in the training process. Conversely, a higher-than-average loss in a particular wavelength channel leads to an increase in w_m, amplifying the channel's balance weight and intensifying the penalty on its output image performance.

In our experimental validation and our output diffraction efficiency-related analyses, we employed a modified loss function that adds an output diffraction efficiency-related loss term L_eff to the original loss function defined in Eq. (13):

L'_total = L_total + α_eff · L_eff,

where α_eff denotes the weight coefficient associated with L_eff. L_eff is defined as

L_eff = max(0, η_th − η),

where η_th is the threshold set to maintain a decent output diffraction efficiency; that is, the efficiency penalty is activated only when η < η_th. During the training of the experimental model, the values of α_eff and η_th were empirically set as 100 and 0.02, respectively; for the diffractive models trained with the output diffraction efficiency penalty shown in Fig. 8(a), η_th was selected as 1%, 5%, or 10%, and α_eff was chosen as 100. Here, η represents the output diffraction efficiency, defined for each wavelength channel as the ratio of the optical power within the output FOV to the total optical power entering the diffractive processor, i.e.,

η = Σ_{(x,y)∈FOV_out} |u_K(x, y, λ)|² / Σ_{(x,y)} |u_0(x, y, λ)|².

The linear correlation of the quantitative phase images produced by the diffractive multiplane QPI processor against their ground truth was evaluated using the PCC metric. For a specific wavelength channel, the PCC value is quantified as

PCC = Σ (O − Ō)(G − Ḡ) / √( Σ (O − Ō)² · Σ (G − Ḡ)² ),

where O and G denote the output and ground-truth phase images, respectively, and Ō and Ḡ are their mean values. For evaluating the phase accuracy of our presented diffractive multiplane QPI processors, we calculated the normalized MAE between the reconstructed output phase profiles and the ground truth, defined as

MAE_m = (1 / (N_p · Δφ_test)) Σ_{x,y} | φ̂_{λ_m}(x, y) − φ_m^{GT}(x, y) |,

where the testing phase contrast parameter Δφ_test was used to normalize the phase error based on the dynamic range of the input phase contrast.
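For reference, the channel-balanced loss and efficiency penalty described above can be written compactly in PyTorch. This is a minimal sketch rather than the exact training code: the tensor shapes, the helper names, and the multiplicative form of the weight update are our own assumptions consistent with the description above.

```python
import torch

def multiplane_qpi_loss(pred, target, w, eta, eta_th=0.02, alpha=100.0):
    """Channel-balanced normalized-MSE loss with an output diffraction-efficiency penalty.

    pred, target: (M, H, W) output and ground-truth phase images, one channel per wavelength.
    w:            (M,) dynamic channel-balance weights.
    eta:          (M,) output diffraction efficiencies per wavelength channel.
    """
    per_channel = ((pred - target) ** 2).mean(dim=(1, 2))            # normalized MSE per channel
    loss = (w * per_channel).mean()                                  # weighted average over channels
    loss = loss + alpha * torch.clamp(eta_th - eta, min=0).mean()    # active only when eta < eta_th
    return loss, per_channel

@torch.no_grad()
def update_weights(w, per_channel):
    """Multiplicative balance update: channels with above-average loss gain weight."""
    return w * per_channel / per_channel.mean()
```

In a training loop, update_weights would be called once per iteration with the detached per-channel losses, mirroring the adaptive channel balancing described above.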
4.4. Training Data Preparation and Other Implementation Details

For training our diffractive multiplane QPI processors, we assembled a data set of 110,000 images, divided into two categories: (1) 55,000 handwritten digit images from the original MNIST training set, and (2) 55,000 custom-designed images featuring a variety of patterns, such as gratings, patches, and circles, each with unique spatial frequencies and orientations.66 In the training phase, we generated each set of input objects by randomly choosing images from this data set, with each image encoded into the phase channel of one of the object planes, thus creating an input object stack for the multiplane phase-imaging task.

The numerical simulations and the training process for the diffractive multiplane QPI processors described in this study were carried out using Python (version 3.7.13) and PyTorch (version 2.5.0, Meta Platforms, Inc.). The Adam optimizer from PyTorch, with its default settings, was utilized, with a learning rate of 0.001 and a batch size of 16. Our diffractive models underwent 100-epoch training on a workstation equipped with an Nvidia GeForce RTX 3090 GPU, an Intel Core i9-11900 CPU, and 128 GB of RAM. The training time for a 10-layer diffractive multiplane QPI processor design, as seen in Fig. S2 in the Supplementary Material, was roughly 12 days, which is a one-time design effort.

4.5. Details of the Experimental Diffractive Multiplane QPI System

Our diffractive multiplane QPI design was tested using a terahertz continuous-wave (CW) system, as illustrated in Fig. S4 in the Supplementary Material. This setup involved a terahertz source comprising a Virginia Diodes Inc. WR9.0M SGX/WR4.3x2 WR2.2 modular amplifier/multiplier chain (AMC), paired with a corresponding diagonal horn antenna (Virginia Diodes Inc. WR2.2). At the AMC's input, a 10-dBm radio-frequency (RF) signal was introduced at 10.4166 or 11.1111 GHz (f_RF1), which underwent a 36-fold multiplication, resulting in CW radiation at 0.375 or 0.4 THz, corresponding to an illumination wavelength of 0.8 or 0.75 mm, respectively. Additionally, for lock-in detection, the AMC's output was modulated with a 1-kHz square wave. Situated at from the horn antenna's exit plane, the input aperture was 1.6 mm wide. An XY positioning stage, comprising two Thorlabs NRT100 motorized stages, moved a single-pixel mixer (Virginia Diodes Inc. WRI 2.2) to conduct a 2D scan of the output intensity distribution with a step size of 0.8 mm. The detector also received a 10-dBm RF signal at 11.1111 or 10.4166 GHz (f_RF2) as the local oscillator, downconverting the output frequency to 1 GHz. This downconverted signal then passed through a low-noise amplifier (gain: 80 dB) and a KL Electronics 3C40-1000/T10-O/O bandpass filter centered at 1 GHz (±10 MHz), reducing noise from undesirable frequency bands. After a linear calibration with an HP 8495B tunable attenuator, the signal was relayed to a Mini-Circuits ZX47-60 low-noise power detector. A lock-in amplifier (Stanford Research SR830) then processed the detector's output voltage, using the 1-kHz square wave as a reference for linear-scale calibration.

For the fabrication of the diffractive multiplane QPI system depicted in Fig. 9(b), an Objet30 Pro 3D printer (Stratasys) was employed to print the diffractive layers and the input aperture. To ensure alignment with our optical forward model, a 3D-printed holder was fabricated using the same printer. This holder facilitated the precise positioning of both the input aperture and the printed diffractive layers, securing their accurate 3D assembly.

Code and Data Availability

All the data and methods needed to evaluate the conclusions of this work are presented in the main text and the Supplementary Material. Additional data can be requested from the corresponding author. The codes used in this work employ standard libraries and scripts that are publicly available in PyTorch.

Author Contributions

A.O. conceived and initiated the research. C.S. and J.L. conducted the numerical simulations and processed the resulting data. C.S., J.L., L.B., and Y.L. contributed to the PyTorch implementation of the diffractive processor. C.S. and T.G. conducted the experiments. All the authors contributed to the preparation of the manuscript. A.O. supervised the research.

Acknowledgments

This research was supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering (Grant No. DE-SC0023088).

References

G. Popescu, Quantitative Phase Imaging of Cells and Tissues, McGraw-Hill Education (2011).
Y. Park, C. Depeursinge and G. Popescu, "Quantitative phase imaging in biomedicine," Nat. Photonics 12, 578–589, https://doi.org/10.1038/s41566-018-0253-x (2018).
J. Park et al., "Artificial intelligence-enabled quantitative phase imaging methods for life sciences," Nat. Methods 20, 1645–1660, https://doi.org/10.1038/s41592-023-02041-4 (2023).
P. Marquet et al., "Digital holographic microscopy: a noninvasive contrast imaging technique allowing quantitative visualization of living cells with subwavelength axial accuracy," Opt. Lett. 30, 468–470, https://doi.org/10.1364/OL.30.000468 (2005).
T. Ikeda et al., "Hilbert phase microscopy for investigating fast dynamics in transparent systems," Opt. Lett. 30, 1165–1167, https://doi.org/10.1364/OL.30.001165 (2005).
G. A. Dunn and D. Zicha, "Phase-shifting interference microscopy applied to the analysis of cell behaviour," in Symp. Soc. Exp. Biol., 91–106 (1993).
S. K. Debnath and Y. Park, "Real-time quantitative phase imaging with a spatial phase-shifting algorithm," Opt. Lett. 36, 4677–4679, https://doi.org/10.1364/OL.36.004677 (2011).
G. Popescu et al., "Fourier phase microscopy for investigation of biological structures and dynamics," Opt. Lett. 29, 2503–2505, https://doi.org/10.1364/OL.29.002503 (2004).
B. Bhaduri et al., "Diffraction phase microscopy with white light," Opt. Lett. 37, 1094–1096, https://doi.org/10.1364/OL.37.001094 (2012).
G. Popescu, "Quantitative phase imaging of nanoscale cell structure and dynamics," Methods Cell Biol. 90, 87–115, https://doi.org/10.1016/S0091-679X(08)00805-4 (2008).
R. Kasprowicz, R. Suman and P. O'Toole, "Characterising live cell behaviour: traditional label-free and quantitative phase imaging approaches," Int. J. Biochem. Cell Biol. 84, 89–95, https://doi.org/10.1016/j.biocel.2017.01.004 (2017).
P. Wang et al., "Nanoscale nuclear architecture for cancer diagnosis beyond pathology via spatial-domain low-coherence quantitative phase microscopy," J. Biomed. Opt. 15, 066028, https://doi.org/10.1117/1.3523618 (2010).
R. Horstmeyer et al., "Digital pathology with Fourier ptychography," Comput. Med. Imaging Graphics 42, 38–43, https://doi.org/10.1016/j.compmedimag.2014.11.005 (2015).
Y. Rivenson et al., "PhaseStain: the digital staining of label-free quantitative phase microscopy images using deep learning," Light Sci. Appl. 8, 23, https://doi.org/10.1038/s41377-019-0129-y (2019).
K. C. M. Lee et al., "Quantitative phase imaging flow cytometry for ultra-large-scale single-cell biophysical phenotyping," Cytometry A 95, 510–520, https://doi.org/10.1002/cyto.a.23765 (2019).
G. Popescu et al., "Optical imaging of cell mass and growth dynamics," Am. J. Physiol.-Cell Physiol. 295, C538–C544, https://doi.org/10.1152/ajpcell.00121.2008 (2008).
L. Kastl et al., "Quantitative phase imaging for cell culture quality control," Cytometry A 91, 470–481, https://doi.org/10.1002/cyto.a.23082 (2017).
Z. El-Schich, A. Leida Mölder and A. Gjörloff Wingren, "Quantitative phase imaging for label-free analysis of cancer cells—focus on digital holographic microscopy," Appl. Sci. 8(7), 1027, https://doi.org/10.3390/app8071027 (2018).
D. Roitshtain et al., "Quantitative phase microscopy spatial signatures of cancer cells," Cytometry A 91, 482–493, https://doi.org/10.1002/cyto.a.23100 (2017).
H. Wang et al., "Early detection and classification of live bacteria using time-lapse coherent imaging and deep learning," Light Sci. Appl. 9, 118, https://doi.org/10.1038/s41377-020-00358-9 (2020).
T. Liu et al., "Rapid and stain-free quantification of viral plaque via lens-free holography and deep learning," Nat. Biomed. Eng. 7, 1040–1052, https://doi.org/10.1038/s41551-023-01057-7 (2023).
K. C. Lee et al., "Multi-ATOM: ultrahigh-throughput single-cell quantitative phase imaging with subcellular resolution," J. Biophotonics 12, e201800479, https://doi.org/10.1002/jbio.201800479 (2019).
O. Mudanyali et al., "Wide-field optical detection of nanoparticles using on-chip microscopy and self-assembled nanolenses," Nat. Photonics 7, 247–254, https://doi.org/10.1038/nphoton.2012.337 (2013).
L. Zhong et al., "Formation of monatomic metallic glasses through ultrafast liquid quenching," Nature 512, 177–180, https://doi.org/10.1038/nature13617 (2014).
R. Stoian and J.-P. Colombier, "Advances in ultrafast laser structuring of materials at the nanoscale," Nanophotonics 9, 4665, https://doi.org/10.1515/nanoph-2020-0310 (2020).
Z. Wang et al., "Spatial light interference microscopy (SLIM)," Opt. Express 19, 1016–1026, https://doi.org/10.1364/OE.19.001016 (2011).
T. H. Nguyen et al., "Gradient light interference microscopy for 3D imaging of unlabeled specimens," Nat. Commun. 8, 210, https://doi.org/10.1038/s41467-017-00190-7 (2017).
G. Zheng et al., "Concept, implementations and applications of Fourier ptychography," Nat. Rev. Phys. 3, 207–223, https://doi.org/10.1038/s42254-021-00280-y (2021).
F. Charrière et al., "Cell refractive index tomography by digital holographic microscopy," Opt. Lett. 31, 178–180, https://doi.org/10.1364/OL.31.000178 (2006).
Y. Sung et al., "Optical diffraction tomography for high resolution live cell imaging," Opt. Express 17, 266–277, https://doi.org/10.1364/OE.17.000266 (2009).
A. Matlock and L. Tian, "High-throughput, volumetric quantitative phase imaging with multiplexed intensity diffraction tomography," Biomed. Opt. Express 10, 6432–6448, https://doi.org/10.1364/BOE.10.006432 (2019).
M. H. Jenkins and T. K. Gaylord, "Three-dimensional quantitative phase imaging via tomographic deconvolution phase microscopy," Appl. Opt. 54, 9213–9227, https://doi.org/10.1364/AO.54.009213 (2015).
Y. Rivenson et al., "Phase recovery and holographic image reconstruction using deep learning in neural networks," Light Sci. Appl. 7, 17141, https://doi.org/10.1038/lsa.2017.141 (2018).
Y. Jo et al., "Quantitative phase imaging and artificial intelligence: a review," IEEE J. Sel. Top. Quantum Electron. 25(1), 6800914, https://doi.org/10.1109/JSTQE.2018.2859234 (2018).
Y. Rivenson, Y. Wu and A. Ozcan, "Deep learning in holography and coherent imaging," Light Sci. Appl. 8, 85, https://doi.org/10.1038/s41377-019-0196-0 (2019).
Y. Jo et al., "Holographic deep learning for rapid optical screening of anthrax spores," Sci. Adv. 3, e1700606, https://doi.org/10.1126/sciadv.1700606 (2017).
F. Yi, I. Moon and B. Javidi, "Automated red blood cells extraction from holographic images using fully convolutional neural networks," Biomed. Opt. Express 8, 4466–4479, https://doi.org/10.1364/BOE.8.004466 (2017).
V. Ayyappan et al., "Identification and staging of B-cell acute lymphoblastic leukemia using quantitative phase imaging and machine learning," ACS Sens. 5, 3281–3289, https://doi.org/10.1021/acssensors.0c01811 (2020).
F. Wang et al., "Phase imaging with an untrained neural network," Light Sci. Appl. 9, 77, https://doi.org/10.1038/s41377-020-0302-3 (2020).
H. Chen et al., "Fourier Imager Network (FIN): a deep neural network for hologram reconstruction with superior external generalization," Light Sci. Appl. 11, 254, https://doi.org/10.1038/s41377-022-00949-8 (2022).
L. Huang et al., "Self-supervised learning of hologram reconstruction using physics consistency," Nat. Mach. Intell. 5, 895–907, https://doi.org/10.1038/s42256-023-00704-7 (2023).
D. Pirone et al., "Speeding up reconstruction of 3D tomograms in holographic flow cytometry via deep learning," Lab Chip 22, 793, https://doi.org/10.1039/D1LC01087E (2022).
H. Chen et al., "eFIN: enhanced Fourier imager network for generalizable autofocusing and pixel super-resolution in holographic imaging," IEEE J. Sel. Top. Quantum Electron. 29(4), 1–10, https://doi.org/10.1109/JSTQE.2023.3248684 (2023).
D. Pirone et al., "Label-free liquid biopsy through the identification of tumor cells by machine learning-powered tomographic phase imaging flow cytometry," Sci. Rep. 13, 6042, https://doi.org/10.1038/s41598-023-32110-9 (2023).
T. Nguyen et al., "Automatic phase aberration compensation for digital holographic microscopy based on deep learning background detection," Opt. Express 25, 15043–15057, https://doi.org/10.1364/OE.25.015043 (2017).
V. K. Lam et al., "Quantitative assessment of cancer cell morphology and motility using telecentric digital holographic microscopy and machine learning," Cytometry A 93, 334–345, https://doi.org/10.1002/cyto.a.23316 (2018).
Y. Wu et al., "Extended depth-of-field in holographic imaging using deep-learning-based autofocusing and phase recovery," Optica 5, 704–710, https://doi.org/10.1364/OPTICA.5.000704 (2018).
H. Byeon, T. Go and S. J. Lee, "Deep learning-based digital in-line holographic microscopy for high resolution with extended field of view," Opt. Laser Technol. 113, 77–86, https://doi.org/10.1016/j.optlastec.2018.12.014 (2019).
Y. Wu et al., "Bright-field holography: cross-modality deep learning enables snapshot 3D imaging with bright-field contrast using a single hologram," Light Sci. Appl. 8, 25, https://doi.org/10.1038/s41377-019-0139-9 (2019).
A. Matlock, J. Zhu and L. Tian, "Multiple-scattering simulator-trained neural network for intensity diffraction tomography," Opt. Express 31, 4094–4107, https://doi.org/10.1364/OE.477396 (2023).
I. Kang, A. Goy and G. Barbastathis, "Dynamical machine learning volumetric reconstruction of objects' interiors from limited angular views," Light Sci. Appl. 10, 74, https://doi.org/10.1038/s41377-021-00512-x (2021).
R. Liu et al., "Recovery of continuous 3D refractive index maps from discrete intensity-only measurements using neural fields," Nat. Mach. Intell. 4, 781–791, https://doi.org/10.1038/s42256-022-00530-3 (2022).
G. Choi et al., "Cycle-consistent deep learning approach to coherent noise reduction in optical diffraction tomography," Opt. Express 27, 4927–4943, https://doi.org/10.1364/OE.27.004927 (2019).
X. Lin et al., "All-optical machine learning using diffractive deep neural networks," Science 361, 1004–1008, https://doi.org/10.1126/science.aat8084 (2018).
Y. Luo et al., "Design of task-specific optical systems using broadband diffractive neural networks," Light Sci. Appl. 8, 112, https://doi.org/10.1038/s41377-019-0223-1 (2019).
D. Mengu et al., "Analysis of diffractive optical neural networks and their integration with electronic neural networks," IEEE J. Sel. Top. Quantum Electron. 26(1), 1–14, https://doi.org/10.1109/JSTQE.2019.2921376 (2019).
D. Mengu et al., "Misalignment resilient diffractive optical networks," Nanophotonics 9, 4207–4219, https://doi.org/10.1515/nanoph-2020-0291 (2020).
O. Kulce et al., "All-optical information-processing capacity of diffractive surfaces," Light Sci. Appl. 10, 25, https://doi.org/10.1038/s41377-020-00439-9 (2021).
O. Kulce et al., "All-optical synthesis of an arbitrary linear transformation using diffractive surfaces," Light Sci. Appl. 10, 196, https://doi.org/10.1038/s41377-021-00623-5 (2021).
D. Mengu and A. Ozcan, "All-optical phase recovery: diffractive computing for quantitative phase imaging," Adv. Opt. Mater. 10, 2200281, https://doi.org/10.1002/adom.202200281 (2022).
B. Bai et al., "To image, or not to image: class-specific diffractive cameras with all-optical erasure of undesired objects," eLight 2, 14, https://doi.org/10.1186/s43593-022-00021-3 (2022).
J. Li et al., "Massively parallel universal linear transformations using a wavelength-multiplexed diffractive optical network," Adv. Photonics 5, 016003, https://doi.org/10.1117/1.AP.5.1.016003 (2023).
J. Li et al., "Unidirectional imaging using deep learning–designed materials," Sci. Adv. 9, eadg1505, https://doi.org/10.1126/sciadv.adg1505 (2023).
D. Mengu et al., "Snapshot multispectral imaging using a diffractive optical network," Light Sci. Appl. 12, 86, https://doi.org/10.1038/s41377-023-01135-0 (2023).
C.-Y. Shen et al., "Multispectral quantitative phase imaging using a diffractive optical network," Adv. Intell. Syst. 5, 2300300, https://doi.org/10.1002/aisy.202300300 (2023).
M. S. Sakib Rahman and A. Ozcan, "Computer-free, all-optical reconstruction of holograms using diffractive networks," ACS Photonics 8, 3375–3384, https://doi.org/10.1021/acsphotonics.1c01365 (2021).
J. Li et al., "Spectrally encoded single-pixel machine vision using diffractive networks," Sci. Adv. 7, eabd7690, https://doi.org/10.1126/sciadv.abd7690 (2021).
Y. Li et al., "Quantitative phase imaging (QPI) through random diffusers using a diffractive optical network," Light Adv. Manuf. 4, 17, https://doi.org/10.37188/lam.2023.017 (2023).
S. Dottermusch et al., "Exposure-dependent refractive index of Nanoscribe IP-Dip photoresist layers," Opt. Lett. 44, 29, https://doi.org/10.1364/OL.44.000029 (2019).
B. Chen and J. J. Stamnes, "Validity of diffraction tomography based on the first Born and the first Rytov approximations," Appl. Opt. 37, 2996–3006, https://doi.org/10.1364/AO.37.002996 (1998).
Y. Li et al., "Universal polarization transformations: spatial programming of polarization scattering matrices using a deep learning-designed diffractive polarization transformer," Adv. Mater. 35, 2303395, https://doi.org/10.1002/adma.202303395 (2023).
C.-Y. Shen et al., "All-optical phase conjugation using diffractive wavefront processing," Nat. Commun. 15, 4989, https://doi.org/10.1038/s41467-024-49304-y (2024).
M. S. S. Rahman et al., "Universal linear intensity transformations using spatially incoherent diffractive processors," Light Sci. Appl. 12, 195, https://doi.org/10.1038/s41377-023-01234-y (2023).
X. Yang et al., "Complex-valued universal linear transformations and image encryption using spatially incoherent diffractive networks," Adv. Photonics Nexus 3, 016010, https://doi.org/10.1117/1.APN.3.1.016010 (2024).
E. Goi et al., "Nanoprinted high-neuron-density optical linear perceptrons performing near-infrared inference on a CMOS chip," Light Sci. Appl. 10, 40, https://doi.org/10.1038/s41377-021-00483-z (2021).
H. Chen et al., "Diffractive deep neural networks at visible wavelengths," Engineering 7, 1483–1491, https://doi.org/10.1016/j.eng.2020.07.032 (2021).
B. Bai et al., "Data-class-specific all-optical transformations and encryption," Adv. Mater. 35(31), e2212091, https://doi.org/10.1002/adma.202212091 (2023).
C. Zuo et al., "Lensless phase microscopy and diffraction tomography with multi-angle and multi-wavelength illuminations using a LED matrix," Opt. Express 23, 14314–14328, https://doi.org/10.1364/OE.23.014314 (2015).
W. Luo et al., "Pixel super-resolution using wavelength scanning," Light Sci. Appl. 5, e16060, https://doi.org/10.1038/lsa.2016.60 (2016).
X. Wu et al., "Wavelength-scanning pixel-super-resolved lens-free on-chip quantitative phase microscopy with a color image sensor," APL Photonics 9, 016111, https://doi.org/10.1063/5.0175672 (2024).
Biography

Che-Yung Shen received his MS degree in optics and photonics from National Yang Ming Chiao Tung University, Taiwan. He is currently a PhD student in the Electrical and Computer Engineering Department at the University of California, Los Angeles (UCLA). His research interests include computational imaging, machine learning, and optics.

Jingxi Li received his PhD from the Electrical and Computer Engineering Department at UCLA. His work focuses on optical computing and information processing using diffractive networks and computational optical imaging for biomedical applications.

Yuhang Li received his BS degree in optical science and engineering from Zhejiang University, Hangzhou, China, in 2021. He is currently working toward his PhD in the Electrical and Computer Engineering Department at UCLA. His work focuses on the development of computational imaging, machine learning, and optics.

Tianyi Gan received his BS degree in physics from Peking University, Beijing, China, in 2021. He is currently a PhD student in the Electrical and Computer Engineering Department at UCLA. His research interests are terahertz sources and imaging.

Langxing Bai is currently an undergraduate student in the Department of Computer Science at UCLA. His research interests are computational imaging and machine learning.

Mona Jarrahi is a professor and the Northrop Grumman Endowed Chair in the Electrical and Computer Engineering Department at UCLA and the director of the Terahertz Electronics Laboratory. She has made significant contributions to the development of ultrafast electronic and optoelectronic devices and integrated systems for terahertz, infrared, and millimeter-wave sensing, imaging, computing, and communication systems by utilizing innovative materials, nanostructures, and quantum structures, as well as innovative plasmonic and optical concepts.

Aydogan Ozcan is the Chancellor's Professor and the Volgenau Chair for Engineering Innovation at UCLA and an HHMI professor at the Howard Hughes Medical Institute. He is also the associate director of the California NanoSystems Institute. He is an elected fellow of the National Academy of Inventors and holds >75 issued/granted patents in microscopy, holography, computational imaging, sensing, mobile diagnostics, nonlinear optics, and fiber optics. He is also the author of 1 book and the co-author of >1000 peer-reviewed publications in leading scientific journals and conferences. He is an elected fellow of Optica, AAAS, SPIE, IEEE, AIMBE, RSC, APS, and the Guggenheim Foundation and is a Lifetime Fellow Member of Optica, NAI, AAAS, SPIE, and APS. He is also listed as a highly cited researcher by Web of Science, Clarivate.