Significance: Fluorescence head-mounted microscopes, i.e., miniscopes, have emerged as powerful tools to analyze in-vivo neural populations but exhibit a limited depth-of-field (DoF) due to the use of high numerical aperture (NA) gradient refractive index (GRIN) objective lenses.
Aim: We present the extended depth-of-field (EDoF) miniscope, which integrates an optimized thin and lightweight binary diffractive optical element (DOE) onto the GRIN lens of a miniscope to extend the DoF by 2.8× between twin foci in fixed scattering samples.
Approach: We use a genetic algorithm that considers the GRIN lens’ aberration and intensity loss from scattering in a Fourier-optics forward model to optimize a DOE, and manufacture the DOE through single-step photolithography. We integrate the DOE into EDoF-Miniscope with a lateral accuracy of 70 μm to produce high-contrast signals without compromising the speed, spatial resolution, size, or weight.
Results: We characterize the performance of EDoF-Miniscope across 5- and 10-μm fluorescent beads embedded in scattering phantoms and demonstrate that EDoF-Miniscope facilitates deeper interrogation of neuronal populations in a 100-μm-thick mouse brain sample and of vessels in a whole mouse brain sample.
Conclusions: Built from off-the-shelf components and augmented by a customizable DOE, we expect that this low-cost EDoF-Miniscope may find utility in a wide range of neural recording applications.
We demonstrate an extended-depth-of-field miniscope (EDoF-Miniscope) that uses an optimized binary diffractive optical element (DOE), integrated at the pupil plane, to achieve a 2.8× axial elongation between twin foci. We optimize the DOE through a genetic algorithm that uses a Fourier-optics forward model to account for the native aberrations of the primary gradient refractive index (GRIN) lens, the optical properties of the immersion media, the geometric effects of the target fluorescent sources, and the axial intensity loss from tissue scattering, yielding a robust EDoF. We demonstrate that our platform achieves high-contrast signals that can be recovered with a simple filter across 5-μm and 10-μm beads embedded in scattering phantoms and in fixed mouse brain samples.
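The genetic-algorithm step described in the two abstracts above can be sketched in a toy form. Everything below is illustrative: the binary mask is a 1D stand-in for the DOE zones, and the fitness function is a crude surrogate (spectral flatness) rather than the authors' Fourier-optics forward model with GRIN aberrations and scattering loss.

```python
import numpy as np

def fitness(mask):
    # Placeholder EDoF merit: reward binary phase masks whose Fourier
    # spectrum spreads energy evenly (a crude stand-in for a uniform
    # axial intensity profile between the twin foci).
    spectrum = np.abs(np.fft.fft(np.exp(1j * np.pi * mask)))
    return -np.std(spectrum)

def genetic_optimize(n_zones=64, pop_size=40, n_gen=100, p_mut=0.02, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, n_zones))
    for _ in range(n_gen):
        scores = np.array([fitness(m) for m in pop])
        order = np.argsort(scores)[::-1]           # best first
        parents = pop[order[: pop_size // 2]]      # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_zones)         # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_zones) < p_mut     # bit-flip mutation
            child[flip] ^= 1
            children.append(child)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(m) for m in pop])
    return pop[np.argmax(scores)], scores.max()
```

Because the parent pool is carried over each generation, the best fitness is non-decreasing; in the real design the merit function would instead score the simulated axial PSF of the GRIN-plus-DOE system.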
KEYWORDS: 3D modeling, Data modeling, Stereoscopy, Point spread functions, Resolution enhancement technologies, Particles, Optical design, Microlens, Luminescence, Imaging systems
The Computational Miniature Mesoscope (CM2) is a novel fluorescence imaging device that achieves single-shot 3D imaging on a compact platform by jointly designing the optics and the algorithm. However, low axial resolution and heavy computational cost hinder its biomedical applications. Here, we demonstrate a deep learning framework, termed CM2Net, that performs fast and reliable 3D reconstruction. Specifically, the multi-stage CM2Net is trained on synthetic data with realistic field-varying aberrations based on a 3D linear shift-variant model. We experimentally demonstrate that CM2Net provides a 10× improvement in axial resolution and a 1400× faster reconstruction speed.
Imaging through scattering media has wide applications across many areas. Here, we present a new deep learning framework that improves robustness against physical perturbations of the scattering medium. The trained DNN makes high-quality predictions beyond the training range, extending across 10× the depth of field (DoF). We develop a new analysis framework based on dimensionality reduction to reveal the information contained in the speckle dataset, interpret the mechanism of our DNN, and visualize the generalizability of the DNN model. This allows us to further elucidate the information encoded in the raw speckle measurements and the working principle of our speckle-imaging deep learning model.
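The dimensionality-reduction analysis mentioned above can be illustrated with plain PCA via SVD on a stack of intensity frames; the random "speckle" data and the choice of five components here are purely illustrative, not the paper's actual dataset or method details.

```python
import numpy as np

# Simulated "speckle" stack: 200 frames of 32x32 intensity patterns,
# flattened to one row vector per frame.
rng = np.random.default_rng(1)
frames = rng.random((200, 32 * 32))

# PCA via SVD of the mean-centered data matrix.
centered = frames - frames.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

explained = S**2 / np.sum(S**2)      # variance fraction per component
embedding = centered @ Vt[:5].T      # project frames onto top 5 components
```

Plotting such an embedding (e.g., colored by diffuser identity or depth) is one simple way to visualize how much class-relevant structure the raw speckle intensities carry.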
Conventional ultrafast optical imaging methods in the ultraviolet (UV) spectral range are based on pump-probe techniques, which cannot record non-repeatable and difficult-to-reproduce transient dynamics. Compressed ultrafast photography (CUP), a single-shot ultrafast optical imaging technique, can capture an entire transient event with a single exposure. However, CUP has been experimentally demonstrated only in the visible and near-infrared spectral ranges. Moreover, the requirement to tilt a digital micromirror device (DMD) in the system and the limited number of controllable parameters in the reconstruction algorithm also hinder CUP’s performance. To overcome these limitations, we extended CUP to the UV spectrum by integrating a patterned palladium photocathode into a streak camera. This design also removes the previous restrictions of DMD-based spatial encoding, improves the system’s compactness, and offers good spectral adaptability. Meanwhile, by replacing the conventional TwIST algorithm with a plug-and-play alternating direction method of multipliers (ADMM) algorithm, the reconstruction is split into three subproblems that precisely update the separated variables in different steps, considerably enhancing CUP’s reconstruction quality. The system exhibits a sequence depth of up to 1500 frames with a frame size of 1750 × 500 pixels at an imaging speed of 0.5 trillion frames per second. We demonstrated the system’s ultrafast imaging capability by recording UV pulses traveling through various transmissive targets with a single exposure. We envision that our system will open up many new possibilities in imaging transient UV phenomena.
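As a rough illustration of the plug-and-play ADMM splitting described above, the sketch below solves a generic 1D circular deblurring problem y = h * x: the x-update is a closed-form quadratic solve in the Fourier domain, the z-update plugs in a denoiser (here a trivial moving average, standing in for a learned or hand-crafted prior), and the dual update couples them. The operator, step size, and denoiser are assumptions for illustration, not the paper's CUP reconstruction pipeline.

```python
import numpy as np

def pnp_admm_deconv(y, h, rho=0.5, n_iter=30):
    """Plug-and-play ADMM for y = h * x (circular blur), toy denoiser."""
    H = np.fft.fft(h, n=y.size)
    Hc = np.conj(H)
    Y = np.fft.fft(y)
    x = y.copy()
    z = y.copy()
    u = np.zeros_like(y)

    def denoise(v):
        # 3-tap moving average standing in for a plug-and-play denoiser.
        return np.convolve(v, np.ones(3) / 3, mode="same")

    for _ in range(n_iter):
        # x-update: closed-form quadratic solve in the Fourier domain.
        x = np.real(np.fft.ifft((Hc * Y + rho * np.fft.fft(z - u))
                                / (Hc * H + rho)))
        # z-update: the denoiser acts as the prior's proximal operator.
        z = denoise(x + u)
        # Dual (u) update ties the two variable blocks together.
        u = u + x - z
    return x
```

The appeal of this splitting is that the data-fidelity solve and the prior step are updated separately, so the prior can be swapped without touching the physics model.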
We present a Computational Miniature Mesoscope that enables 3D fluorescence imaging across an 8-mm field-of-view and 2.5-mm depth-of-field in a single shot, achieving 7-micrometer lateral resolution and better than 200-micrometer axial resolution. The mesoscope has a compact design that integrates a microlens array for imaging and an LED array for excitation on a single platform. Its expanded imaging capability is enabled by computational imaging. We experimentally validate the mesoscopic 3D imaging capability on volumetrically distributed fluorescent beads and fibers. We further quantify the effects of bulk scattering and background fluorescence on phantom experiments.
We demonstrate deep-learning (DL)-based computational microscopy for high-throughput phase imaging that takes multiplexed measurements and employs deep neural network (DNN)-based reconstruction. In particular, we develop a Bayesian convolutional neural network (BNN) to quantify the uncertainties of the DL inference, providing a surrogate estimate of the true prediction errors. The framework is demonstrated on a high-speed computational phase microscopy technique. We show that the BNN not only predicts high-resolution phase images but also provides a pixel-wise credibility map that evaluates the imperfections in the datasets and the training process.
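One common way to obtain such pixel-wise credibility estimates is Monte-Carlo dropout: keep dropout active at inference time and aggregate many stochastic forward passes into a mean prediction and a spread. The single-layer "network" below is a minimal numpy sketch under that assumption, not the authors' BNN architecture.

```python
import numpy as np

def mc_dropout_predict(x, W, b, p_drop=0.2, n_samples=50, seed=0):
    """Monte-Carlo dropout: run many stochastic forward passes with
    dropout still enabled, then report their mean (prediction) and
    standard deviation (per-output uncertainty)."""
    rng = np.random.default_rng(seed)
    outs = []
    for _ in range(n_samples):
        # Inverted dropout: zero out weights with prob. p_drop, rescale.
        mask = (rng.random(W.shape) >= p_drop) / (1.0 - p_drop)
        outs.append(x @ (W * mask) + b)  # one stochastic forward pass
    outs = np.stack(outs)
    return outs.mean(axis=0), outs.std(axis=0)
```

In an imaging network the same recipe yields a full uncertainty map: each output pixel's standard deviation across the sampled passes serves as its credibility value.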
Light scattering in complex media is a pervasive problem across many areas, such as deep tissue imaging, and imaging in degraded environment. Major progress has been made by using the transmission matrix (TM) framework that characterizes the “one-for-one” input-output relation of a fixed scattering medium as a linear shift-variant matrix. A major limitation of these existing approaches is their high susceptibility to model errors. The phase-sensitive TM is inherently intolerant to speckle decorrelations. Our goal here is to develop a highly scalable imaging through scattering framework by overcoming the existing limitations in susceptibility to speckle decorrelation and SBP. The proposed model is built on a deep learning (DL) framework. To satisfy the desired statistical properties, we do not train a convolutional neural network (CNN) to learn the TM of a single scattering medium. Instead, we build a CNN to learn a “one-for-all” mapping by training on several scattering media with different microstructures while having the same macroscopic parameter. Specifically, we show that our CNN model trained on a few diffusers can sufficiently support the statistical information of all diffusers having the same mean characteristics (e.g. ‘grits’). We then experimentally demonstrate that the CNN is able to “invert” speckles captured from entirely different diffusers to make high-quality object predictions. Our method significantly improves the system’s information throughput and adaptability as compared to existing approaches, by improving both the SBP and the robustness to speckle decorrelations.
Emerging deep learning based computational microscopy techniques promise novel imaging capabilities beyond traditional techniques. In this talk, I will discuss two microscopy applications.
First, high space-bandwidth product microscopy typically requires a large number of measurements. I will present a novel physics-assisted deep learning (DL) framework for large space-bandwidth product (SBP) phase imaging,1 enabling a significant reduction in the required measurements and opening up real-time applications. In this technique, we design asymmetric coded illumination patterns to encode high-resolution phase information across a wide field-of-view. We then develop a matching DL algorithm to provide large-SBP phase estimation. We demonstrate this technique on both static and dynamic biological samples, and show that it can reliably achieve 5x resolution enhancement across 4x FOVs using only five multiplexed measurements. In addition, we develop an uncertainty learning framework to provide a predictive assessment of the reliability of the DL prediction. We show that the predicted uncertainty maps can be used as a surrogate for the true error. We validate the robustness of our technique by analyzing the model uncertainty. We quantify the effects of noise, model errors, incomplete training data, and “out-of-distribution” testing data by assessing the data uncertainty. We further demonstrate that the predicted credibility maps allow identifying spatially and temporally rare biological events. Our technique enables scalable DL-augmented large-SBP phase imaging with reliable predictions.
Second, I will turn to the pervasive problem of imaging in scattering media. I will discuss a new deep learning-based technique that is highly generalizable and resilient to statistical variations of the scattering media.2 We develop a statistical ‘one-to-all’ deep learning technique that encapsulates a wide range of statistical variations for the model to be resilient to speckle decorrelations. Specifically, we develop a convolutional neural network (CNN) that is able to learn the statistical information contained in the speckle intensity patterns captured on a set of diffusers having the same macroscopic parameter. We then show that the trained CNN is able to generalize and make high-quality object predictions through an entirely different set of diffusers of the same class. Our work paves the way to a highly scalable deep learning approach for imaging through scattering media.
We investigate quantitative phase imaging techniques based on oblique illumination, including differential phase contrast (DPC) microscopy and Fourier ptychographic microscopy (FPM). DPC uses partially coherent, asymmetric illumination to achieve a 2× resolution improvement but has a small field of view (FOV). FPM achieves both wide FOV and high resolution but requires a large number of measurements. Achieving high space-bandwidth product (SBP) imaging in real time remains challenging. Our goal is to develop a data-driven approach that enables highly multiplexed illumination to substantially improve the acquisition speed for high-SBP quantitative phase imaging. To do so, we abandon the traditional sampling strategy and phase retrieval algorithms. Instead, we design a convolutional neural network (CNN) that uses only 4 brightfield and 3 darkfield images under asymmetrically coded illuminations as input and predicts high-SBP phase images. In particular, instead of restoring a deterministic image, our CNN predicts pixel-wise probability distributions (Laplace), each characterized by a location and a scale. The predicted location map corresponds to the desired high-resolution phase image; in addition, the scale map provides per-pixel confidence in the prediction. Additionally, we show the potential of transfer learning: with minor extra training, the CNN can be optimized for different cell types. Experimental results demonstrate that the proposed method is robust against experimental imperfections, e.g., aberrations and misalignment, and reconstructs high-SBP phase images with significantly reduced acquisition and processing times.
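Training a network to output Laplace location and scale maps, as described above, typically minimizes the Laplace negative log-likelihood per pixel; a minimal version might look like the following. The function name and the log-scale parameterization (used to keep the scale positive) are illustrative assumptions, not the paper's exact loss.

```python
import numpy as np

def laplace_nll(mu, log_b, target):
    """Mean per-pixel negative log-likelihood of a Laplace(mu, b) prediction.

    mu    : predicted location map (the phase estimate)
    log_b : predicted log-scale map; b = exp(log_b) stays positive, and a
            large b down-weights pixels the network is unsure about
    target: ground-truth phase map
    """
    b = np.exp(log_b)
    # NLL of Laplace: log(2b) + |t - mu| / b, averaged over pixels.
    return np.mean(np.abs(target - mu) / b + log_b + np.log(2.0))
```

Minimizing this loss trades data fidelity against confidence: the network can only reduce the |t - mu|/b term on hard pixels by inflating b there, which is exactly what turns the scale map into a per-pixel confidence estimate.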
To obtain an accurate integral point spread function (PSF) of an extended-depth-of-field (EDoF) microscope based on a liquid tunable lens and the volumetric sampling (VS) method, we propose a method based on statistical averaging and inverse filtering that uses a quantum-dot fluorescent nanosphere as a point source. First, raw quantum-dot images were captured separately with the focal length of the liquid lens fixed and with it swept over the exposure time. Second, the raw images in each set were summed and averaged to obtain two low-noise mean images. Third, the integral PSF was obtained by dividing the Fourier transforms of the two mean images and taking the inverse Fourier transform of the ratio. Finally, experimental results show that images restored with the measured integral PSF have good quality and no artifacts.
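The ratio-of-spectra step in this abstract amounts to a Fourier-domain division of the two mean images. Below is a minimal sketch under stated assumptions: the fixed-focus acquisition approximates the point source, the focus-swept acquisition is that source blurred by the integral PSF, and a small Wiener-style regularizer is added for numerical stability (the regularizer is our addition, not from the abstract).

```python
import numpy as np

def integral_psf(mean_fixed, mean_swept, eps=1e-6):
    """Recover the integral PSF as IFFT( F(swept) / F(fixed) ).

    mean_fixed : averaged image at fixed focus (approximate point source)
    mean_swept : averaged image with the focal length swept during exposure
    eps        : Wiener-style regularizer to avoid dividing by ~0 spectra
    """
    F_fixed = np.fft.fft2(mean_fixed)
    F_swept = np.fft.fft2(mean_swept)
    # Regularized spectral division: F_swept / F_fixed.
    ratio = F_swept * np.conj(F_fixed) / (np.abs(F_fixed) ** 2 + eps)
    psf = np.real(np.fft.ifft2(ratio))
    return np.fft.fftshift(psf)  # center the PSF for inspection
```

With the PSF in hand, the focus-swept (EDoF) images can then be restored by standard deconvolution against this measured kernel.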