Structured illumination microscopy (SIM) has been widely applied to superresolution imaging of subcellular dynamics in live cells. Higher spatial resolution is desired for the observation of finer structures. However, further increasing the spatial resolution of SIM under strong background and noise remains challenging. Here, we report a method to achieve deep resolution enhancement of SIM by combining an untrained neural network with an alternating direction method of multipliers (ADMM) framework, i.e., ADMM-DRE-SIM. By exploiting the implicit image priors in the neural network and the Hessian prior in the ADMM framework associated with the optical transfer model of SIM, ADMM-DRE-SIM can further extend the spatial frequency support without requiring training datasets. Moreover, an image degradation model containing the convolution with the equivalent point spread function of SIM and an additional background map is utilized to suppress the strong background while preserving structural fidelity. Experimental results on tubulins and actins show that ADMM-DRE-SIM obtains a resolution enhancement by a factor of ∼1.6 over conventional SIM, evidencing its promising applications in superresolution biomedical imaging.
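The ADMM splitting alluded to in this abstract can be illustrated on a toy deconvolution problem. The sketch below is not the authors' implementation: it uses a 1D signal, a Gaussian point spread function, and an L1 penalty on the second derivative as a simple stand-in for the Hessian prior; all names and parameter values are illustrative.

```python
import numpy as np

def admm_hessian_deconv(y, h, lam=0.05, rho=1.0, n_iter=100):
    """Deconvolve y = h * x (circular convolution) with an L1 penalty on the
    second derivative (a 1D stand-in for the Hessian prior), solved by ADMM."""
    n = y.size
    H = np.fft.fft(h)                          # transfer function of the blur
    d = np.zeros(n)
    d[0], d[1], d[2] = 1.0, -2.0, 1.0          # second-difference operator
    D = np.fft.fft(d)
    Y = np.fft.fft(y)
    denom = np.abs(H) ** 2 + rho * np.abs(D) ** 2
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(n_iter):
        # x-update: quadratic subproblem, solved exactly in the Fourier domain
        rhs = np.conj(H) * Y + rho * np.conj(D) * np.fft.fft(z - u)
        x = np.real(np.fft.ifft(rhs / denom))
        # z-update: soft thresholding of the second derivative
        Dx = np.real(np.fft.ifft(D * np.fft.fft(x)))
        v = Dx + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # dual update
        u = u + Dx - z
    return x

# Toy demo: recover a piecewise-linear signal from a Gaussian blur.
n = 128
x_true = np.abs(np.arange(n) - n // 2).astype(float)
k = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
k /= k.sum()
h = np.zeros(n)
h[:11] = k
y = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x_true)))
x_hat = admm_hessian_deconv(y, h)
```

The same alternating structure carries over to the 2D SIM setting, where the quadratic subproblem involves the equivalent SIM point spread function instead of a toy Gaussian kernel.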
Learning-based compressed sensing algorithms are widely used to recover the underlying datacube in snapshot compressive temporal imaging (SCTI), a novel technique for recording temporal data in a single exposure. Despite providing fast processing and high reconstruction performance, most deep-learning approaches are merely considered a substitute for analytical-modeling-based reconstruction methods. In addition, these methods often presume ideal behavior of the optical instruments, neglecting any deviations in the encoding and shearing processes. Consequently, they provide little feedback for evaluating SCTI's hardware performance, which limits the quality and robustness of reconstruction. To overcome these limitations, we develop a new end-to-end convolutional neural network—termed the deep high-dimensional adaptive net (D-HAN)—that provides multi-faceted process-aware supervision to an SCTI system. The D-HAN comprises three joint stages: four dense layers for shearing estimation, a set of parallel layers emulating the closed-form solution of SCTI's inverse problem, and a U-net structure that works as a filtering step. In system design, the D-HAN optimizes the coded aperture and establishes SCTI's sensing geometry. In image reconstruction, the D-HAN senses the shearing operation and retrieves the three-dimensional scene. D-HAN-supervised SCTI is experimentally validated using compressed optical-streaking ultrahigh-speed photography to image a rotating spinner at an imaging speed of 20 thousand frames per second. The D-HAN is expected to improve the reliability and stability of a variety of snapshot compressive imaging systems.
Various super-resolution microscopy techniques have been developed to explore the fine structures of biological specimens. However, the super-resolution capability is often achieved at the expense of imaging speed, through either point scanning or multiframe computation. This contradiction between spatial resolution and imaging speed seriously hampers the observation of high-speed dynamics in fine structures. To overcome this contradiction, here we propose and demonstrate a temporal compressive super-resolution microscopy (TCSRM) technique. TCSRM merges enhanced temporal compressive microscopy with deep-learning-based super-resolution image reconstruction: the former improves the imaging speed, and the latter realizes the resolution enhancement. The high-speed super-resolution imaging ability of TCSRM, with a frame rate of 1200 frames per second (fps) and a spatial resolution of 100 nm, is experimentally demonstrated by capturing flowing fluorescent beads in a microfluidic chip. Given this outstanding high-speed super-resolution performance, TCSRM provides a desirable tool for studying high-speed dynamical behaviors in fine structures, especially in the biomedical field.
Single-shot two-dimensional (2D) optical imaging of transient scenes is indispensable for numerous areas of study. Among existing techniques, compressed optical-streaking ultrahigh-speed photography (COSUP) uses a cost-efficient design to endow off-the-shelf CCD and CMOS cameras with ultrahigh frame rates. Thus far, COSUP's application scope has been limited by the long processing time and unstable image quality of existing analytical-modeling-based video reconstruction. To overcome these problems, we have developed a snapshot-to-video autoencoder (S2V-AE)—a new deep neural network that maps a compressively recorded 2D image to a movie. The S2V-AE preserves spatiotemporal coherence in reconstructed videos and presents a flexible structure that tolerates changes in input data. Implemented in compressed ultrahigh-speed imaging, the S2V-AE enables the development of single-shot machine-learning assisted real-time (SMART) COSUP, which features a reconstruction time of 60 ms and a large sequence depth of 100 frames. SMART COSUP is applied to wide-field multiple-particle tracking at 20 thousand frames per second. As a universal computational framework, the S2V-AE is readily adaptable to other modalities in high-dimensional compressed sensing. SMART COSUP is also expected to find wide applications in applied and fundamental sciences.
Compressed ultrafast photography (CUP) is the fastest receive-only single-shot imaging technique to date. By combining compressed sensing and streak imaging, CUP can capture ultrafast dynamics in a single shot. As a powerful tool for studying ultrafast phenomena, it has been widely applied in many areas. To meet the demand for more precise dynamic information and higher dimensionality in some applications, many improvements have been made to CUP. For example, we have proposed a total-variation and block-matching 3D filtering (TV-BM3D) algorithm and an augmented-Lagrangian deep-learning hybrid algorithm to improve the reconstructed image quality of CUP, and we have set up a stereo-volumetric CUP system to capture five-dimensional dynamic information in a single shot. Besides, we have also developed another single-shot ultrafast optical imaging technique, chirped spectral mapping ultrafast photography (CSMUP), which utilizes spectral-temporal mapping to extract temporal information from a hyperspectral image.
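The spectral-temporal mapping underlying CSMUP can be sketched as follows, assuming an idealized linear chirp in which each wavelength channel corresponds to one instant in time. The function name, the chirp rate, and the toy data below are hypothetical illustrations, not the actual CSMUP calibration:

```python
import numpy as np

def spectral_to_temporal(hyper_stack, wavelengths, lambda0, chirp_rate):
    """Map each spectral slice of a hyperspectral stack to a time stamp under a
    linear chirp, t = chirp_rate * (lambda - lambda0), and return the slices
    reordered as a temporal sequence together with their time axis.

    hyper_stack: (n_lambda, ny, nx) hyperspectral image of the chirped pulse.
    """
    times = chirp_rate * (wavelengths - lambda0)   # linear chirp assumption
    order = np.argsort(times)                      # sort channels into a movie
    return hyper_stack[order], times[order]

# Toy demo: 16 spectral channels spanning 40 nm, mapped over a 30 ps window.
wavelengths = np.linspace(780.0, 820.0, 16)        # nm
stack = np.random.default_rng(1).random((16, 8, 8))
frames, t = spectral_to_temporal(stack, wavelengths, 780.0, 0.75)  # 0.75 ps/nm
```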
In ultrafast optical imaging, it is critical to obtain the spatial structure, temporal evolution, and spectral composition of the object with snapshots in order to better observe and understand unrepeatable or irreversible dynamic scenes. However, so far, there are no ultrafast optical imaging techniques that can simultaneously capture the spatial–temporal–spectral five-dimensional (5D) information of dynamic scenes. To break the limitation of the existing techniques in imaging dimensions, we develop a spectral-volumetric compressed ultrafast photography (SV-CUP) technique. In our SV-CUP, the spatial resolutions in the x, y and z directions are, respectively, 0.39, 0.35, and 3 mm with an 8.8 mm × 6.3 mm field of view, the temporal frame interval is 2 ps, and the spectral frame interval is 1.72 nm. To demonstrate the excellent performance of our SV-CUP in spatial–temporal–spectral 5D imaging, we successfully measure the spectrally resolved photoluminescent dynamics of a 3D mannequin coated with CdSe quantum dots. Our SV-CUP brings unprecedented detection capabilities to dynamic scenes, which has important application prospects in fundamental research and applied science.
The spatial, temporal, and spectral information in optical imaging plays a crucial role in exploring the unknown world and unraveling natural mysteries. However, existing single-shot optical imaging techniques can only acquire the spatiotemporal or spatiospectral information of an object. In this talk, I'd like to introduce hyperspectrally compressed ultrafast photography (HCUP), which can simultaneously record the spatial, temporal, and spectral information of an object. In our HCUP, the dynamic spatial resolution is 1.26 lp/mm in the horizontal direction and 1.41 lp/mm in the vertical direction, the temporal frame interval is 2 ps, and the spectral frame interval is 1.72 nm. Based on our HCUP, we realized spatiotemporal-spatiospectral four-dimensional optical imaging of a chirped picosecond laser pulse and of photoluminescence dynamics. HCUP can be expected to flexibly couple to a variety of imaging modalities, including microscopes and telescopes, enabling the recording of objects at spatial scales from cellular organelles to galaxies. Given its powerful capability in optical imaging, HCUP will open a new route in related application areas.
Compressed ultrafast photography (CUP) is a burgeoning single-shot computational imaging technique that provides an imaging speed as high as 10 trillion frames per second and a sequence depth of up to a few hundred frames. This technique synergizes compressed sensing and the streak camera technique to capture nonrepeatable ultrafast transient events with a single shot. With recent unprecedented technical developments and extensions of this methodology, it has been widely used in ultrafast optical imaging and metrology, ultrafast electron diffraction and microscopy, and information security protection. We review the basic principles of CUP, its recent advances in data acquisition and image reconstruction, its fusions with other modalities, and its unique applications in multiple research fields.
Bringing ultrafast temporal resolution to transmission electron microscopy (TEM) has historically been challenging. Despite significant recent progress in this direction, it remains difficult to achieve sub-nanosecond temporal resolution with a single electron pulse. To address this limitation, here we propose a methodology that combines laser-assisted TEM with computational imaging methodologies based on compressed sensing (CS). In this technique, a two-dimensional (2D) transient event [i.e., (x, y) frames that vary in time] is recorded through a CS paradigm. The 2D streak image generated on a camera is used to reconstruct the datacube of the ultrafast event, with two spatial and one temporal dimensions, via a CS-based image reconstruction algorithm. Using numerical simulation, we find that the reconstructed results are in good agreement with the ground truth, which demonstrates the applicability of CS-based computational imaging methodologies to laser-assisted TEM. Our proposed method, complementing the existing ultrafast stroboscopic and nanosecond single-shot techniques, opens up the possibility of single-shot, spatiotemporal imaging of irreversible structural phenomena with sub-nanosecond temporal resolution.
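The CS recording paradigm described here—masking each temporal frame with a coded aperture, shearing it by its frame index, and integrating everything on one detector—can be written as a simple forward model. The operator below is a minimal numerical illustration with toy dimensions, not the actual instrument model:

```python
import numpy as np

def streak_forward(cube, code):
    """Forward model of a CS streak measurement: each temporal frame of the
    datacube is masked by a static coded aperture, sheared along the y axis by
    its frame index, and all frames are integrated into one 2D streak image.

    cube: (nt, ny, nx) datacube of the transient event.
    code: (ny, nx) coded-aperture pattern.
    """
    nt, ny, nx = cube.shape
    streak = np.zeros((ny + nt - 1, nx))       # sensor grows along the shear axis
    for t in range(nt):
        streak[t:t + ny, :] += code * cube[t]  # mask, shear by t rows, integrate
    return streak

# Toy demo: an 8-frame, 32x32-pixel transient scene and a binary mask.
rng = np.random.default_rng(0)
cube = rng.random((8, 32, 32))                      # (t, y, x)
code = (rng.random((32, 32)) > 0.5).astype(float)   # pseudo-random aperture
y_meas = streak_forward(cube, code)
```

Reconstruction then inverts this linear operator under a sparsity prior; the simulation study in the abstract evaluates how well such an inversion recovers the (x, y, t) datacube.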