Perfect imaging is one of the ultimate goals of humankind in perceiving the world, yet it is fundamentally limited by the optical aberrations arising from imperfect imaging systems or dynamic imaging environments. To address this long-standing problem, we develop a new framework of digital adaptive optics for universal incoherent imaging applications based on scanning light-field imaging systems. By digitally measuring and synthesizing the incoherent light field with unprecedented precision, we demonstrate a series of applications that are difficult for traditional methods, including long-term high-speed intravital 3D imaging in mammals, gigapixel imaging with a single lens, high-speed multi-site aberration correction for ground-based telescopes against turbulence, and real-time megapixel depth sensing. We anticipate that digital adaptive optics will facilitate broad applications across fields including industrial inspection, mobile devices, autonomous driving, surveillance, medical diagnosis, biology, and astronomy.
KEYWORDS: Microscopy, Spatial resolution, Deep learning, Resolution enhancement technologies, In vivo imaging, Biological imaging, 3D image processing, Microlens array
Investigating sophisticated cellular and intercellular behaviors in animals is crucial to biological research, which calls for intravital high-precision recording at ultrahigh spatiotemporal resolution. Light-field microscopy (LFM) achieves snapshot 3D imaging with a microlens array that decouples the angular information, but at the cost of low spatial resolution. Recently, deep learning has revolutionized various microscopy modalities, including LFM, with enhanced capabilities. However, deep-learning-based LFM has limited performance in resolution, robustness, and generalization ability. To address these challenges and expand the application boundaries of LFM-based technologies, we propose a learning-based framework, termed Virtual-scanning Network (Vs-Net), for light-field microscopy to achieve snapshot subcellular observations in vivo.
Significance: Light-field microscopy has achieved success in various life-science applications that require high-speed volumetric imaging. However, existing light-field reconstruction algorithms degrade severely in low-light conditions, and the deconvolution process is time-consuming.
Aim: This study aims to develop a noise-robust phase-space deconvolution method with low computational cost.
Approach: We reformulate the light-field phase-space deconvolution model in the Fourier domain with random-subset ordering and total-variation (TV) regularization. Additionally, we build a time-division-based multicolor light-field microscope and perform three-dimensional (3D) imaging of the beating heart in zebrafish larvae at over 95 Hz with a low light dose.
Results: We demonstrate that this approach reduces computational resources, brings a tenfold speedup, and achieves a tenfold improvement in noise robustness in terms of SSIM over the state-of-the-art approach.
Conclusions: We propose a phase-space deconvolution algorithm for 3D reconstruction in fluorescence imaging. Compared with the state-of-the-art method, we show significant improvement in both computational efficiency and noise robustness, and we further demonstrate a practical application on zebrafish larvae with low exposure and low light dose.
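The core ingredients named above, deconvolution carried out in the Fourier domain with a TV regularizer, can be illustrated with a minimal gradient-descent sketch. The function name, step size, and TV weight are illustrative assumptions; the published algorithm additionally uses a phase-space forward model and random-subset ordering, which are not reproduced here.

```python
import numpy as np

def fourier_deconv_tv(measured, psf, n_iter=50, step=0.5, tv_weight=0.01):
    """Toy 2D deconvolution in the Fourier domain with an anisotropic
    total-variation (TV) penalty, solved by projected gradient descent."""
    # Optical transfer function: PSF (centered) moved to the corner first
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    est = measured.copy()
    for _ in range(n_iter):
        # Data-fidelity gradient A^T (A est - measured), applied via FFT
        resid = np.fft.ifft2(otf * np.fft.fft2(est)).real - measured
        grad = np.fft.ifft2(np.conj(otf) * np.fft.fft2(resid)).real
        # Subgradient of the anisotropic TV term (sign of forward differences)
        dx = np.diff(est, axis=1, append=est[:, -1:])
        dy = np.diff(est, axis=0, append=est[-1:, :])
        tv_grad = -(np.sign(dx) - np.roll(np.sign(dx), 1, axis=1)) \
                  - (np.sign(dy) - np.roll(np.sign(dy), 1, axis=0))
        # Gradient step with a nonnegativity projection
        est = np.clip(est - step * (grad + tv_weight * tv_grad), 0, None)
    return est
```

In practice the Fourier-domain formulation is what buys the speedup: each iteration costs a few FFTs instead of a spatial-domain convolution per view.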
Significance: Mesoscale neural imaging in vivo has gained great popularity in neuroscience for its capacity to record large-scale neurons in action. Optical imaging with single-cell resolution and a millimeter-level field of view in vivo has been providing an accumulating database of neuron-behavior correspondence. Meanwhile, optical detection of neural signals is easily contaminated by noise, background, crosstalk, and motion artifacts, while neural-level signal processing and network-level coordination are extremely complicated, leading to laborious and challenging signal-processing demands. The existing data analysis procedure remains unstandardized, which can be daunting to neophytes or neuroscientists without a computational background.
Aim: We hope to provide a general data-analysis pipeline for mesoscale neural imaging, shared across imaging modalities and systems.
Approach: We divide the pipeline into two main stages. The first stage focuses on extracting high-fidelity neural responses at single-cell level from raw images, including motion registration, image denoising, neuron segmentation, and signal extraction. The second stage focuses on data mining, including neural functional mapping, clustering, and brain-wide network deduction.
Results: Here, we introduce the general pipeline for processing mesoscale neural images. We explain the principles of these procedures and compare different approaches and their application scopes, with detailed discussion of their shortcomings and remaining challenges.
Conclusions: There are great challenges and opportunities brought by large-scale mesoscale data, such as the balance between fidelity and efficiency, the increasing computational load, and neural-network interpretability. We believe that global circuits will be more extensively explored at the single-neuron level in the future.
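The first-stage steps described in the Approach (motion registration, segmentation-based signal extraction, and baseline normalization) can be sketched as a minimal pipeline. The function names and the percentile ΔF/F baseline are illustrative choices, not references to any specific published package.

```python
import numpy as np

def register(frames):
    """Stage 1a: rigid motion registration by FFT cross-correlation
    against the first frame (integer-pixel shifts only, for brevity)."""
    ref = np.fft.fft2(frames[0])
    out = []
    for f in frames:
        corr = np.fft.ifft2(ref * np.conj(np.fft.fft2(f))).real
        dy, dx = np.unravel_index(corr.argmax(), corr.shape)
        out.append(np.roll(f, (dy, dx), axis=(0, 1)))
    return np.stack(out)

def extract_traces(frames, masks):
    """Stage 1d: per-neuron fluorescence traces as the mean intensity
    inside each boolean segmentation mask; frames has shape (T, H, W)."""
    return np.stack([frames[:, m].mean(axis=1) for m in masks])

def dff(traces, baseline_pct=20):
    """Stage 2 preprocessing: deltaF/F against a percentile baseline,
    a common (here assumed) choice before functional mapping."""
    f0 = np.percentile(traces, baseline_pct, axis=1, keepdims=True)
    return (traces - f0) / f0
```

Real pipelines replace each step with a more robust implementation (subpixel or non-rigid registration, learned denoising, automated segmentation), but the data flow, raw frames to registered frames to per-neuron traces to normalized activity, is the same.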
Training an artificial neural network with backpropagation algorithms requires an extensive computational process. Our recent work proposes to implement the backpropagation algorithm optically for in situ training of both linear and nonlinear diffractive optical neural networks, accelerating training and improving the energy efficiency of the core computing modules. We numerically validate that the proposed in situ optical learning architecture achieves accuracy comparable to in silico training on an electronic computer for object classification and matrix-vector multiplication, while further allowing adaptation to system imperfections. Moreover, the self-adaptive property of our approach enables a novel application of the network: all-optical imaging through scattering media. The proposed approach paves the way for the robust implementation of large-scale diffractive neural networks that perform distinctive tasks all-optically.
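The mathematical core of such training, obtaining the gradient of the loss with respect to a layer's phase modulation by propagating the error field backward through the adjoint of the propagation operator, can be checked numerically. This is a generic adjoint-state sketch, not the paper's optical implementation; `H` is a hypothetical unitary Fourier-domain transfer function.

```python
import numpy as np

def forward(x, phase, H):
    """One diffractive layer: phase modulation of the incident field x,
    then linear propagation (H: Fourier-domain transfer function)."""
    return np.fft.ifft2(np.fft.fft2(x * np.exp(1j * phase)) * H)

def phase_gradient(x, phase, H, target):
    """Gradient of L = sum |forward - target|^2 w.r.t. the phase mask,
    computed by propagating the residual backward with conj(H) -- the
    adjoint of the forward propagation."""
    m = x * np.exp(1j * phase)                       # modulated field
    r = forward(x, phase, H) - target                # output residual
    b = np.fft.ifft2(np.fft.fft2(r) * np.conj(H))    # back-propagated error
    return 2 * np.real(1j * m * np.conj(b))
```

The key point, and what makes an optical implementation possible, is that the backward pass is itself a propagation (with the conjugated transfer function), so the same physical system can evaluate it.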
The pixel size of a charge-coupled device (CCD) camera plays a major role in image resolution, and square pixels introduce a physical anisotropy in the sampling frequency. We synthesize the high-sampling-frequency directions from multiple frames acquired at different angles to enhance the resolution 1.4× over conventional CCD orthogonal sampling. To directly demonstrate the improvement of frequency-domain diagonal extension (FDDE) microscopy, lens-free microscopy is used, as its resolution is predominantly determined by the pixel size. We demonstrate the resolution enhancement with a mouse skin histological specimen and a clinical blood smear sample. Further, FDDE is extended to lens-based photography with an ISO 12233 resolution target. This method paves a new way to enhance image resolution for a variety of imaging techniques in which the resolution is primarily limited by the sampling pixel size, for example, microscopy, photography, and spectroscopy.
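The principle of synthesizing a wider Fourier support from several differently sampled frames can be illustrated with a toy Fourier-domain merge. This is a deliberate simplification, assuming the frames are already rotated back to a common orientation and selecting the strongest spectral component per frequency; FDDE's actual registration and spectral weighting are not reproduced.

```python
import numpy as np

def fdde_merge(aligned_frames):
    """Toy Fourier-domain merge: given frames already aligned to a
    common orientation, keep the strongest spectral component at each
    frequency to synthesize an extended passband."""
    spectra = np.stack([np.fft.fft2(f) for f in aligned_frames])
    pick = np.abs(spectra).argmax(axis=0)                 # per-frequency winner
    merged = np.take_along_axis(spectra, pick[None], axis=0)[0]
    return np.fft.ifft2(merged).real
```

With frames rotated at 45° relative to each other, the diagonal directions of one frame's passband fill the corners of the other's, which is where the 1.4× (a factor of √2) comes from.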
Modern computer vision tasks are achieved by first capturing and storing large-scale images and then processing them electronically, a paradigm whose speed and power efficiency are fundamentally limited as data throughput and computational complexity continue to increase. We propose to build all-optical artificial intelligence for light-speed computing, which performs advanced computer vision tasks during imaging so that the detector directly measures the computed results. The proposed method uses the diffraction of light to build the optical neural network, where the neuron function is achieved by tuning the optical diffraction with a nonlinear threshold. Since every target scene has different frequency components, the proposed diffractive neural network is trained to perform various filtering operations on different frequency components and achieves different transform functions for the target scenes. We demonstrate that the proposed approach can be used for high-speed detection and segmentation of visually salient objects in microscopic samples and macroscopic scenes, as well as for object classification. The low power consumption, light-speed processing, and high throughput of the proposed approach can serve as significant support for high-performance computing and will find applications in self-driving automobiles, video monitoring, intelligent microscopy, and beyond.
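A single layer of such a diffractive network can be sketched with the standard angular-spectrum propagation model followed by a learned phase modulation. The wavelength, pixel pitch, and layer spacing below are placeholder values, and the nonlinear threshold stage mentioned above is omitted for brevity.

```python
import numpy as np

def angular_spectrum(field, wavelength, dz, pixel):
    """Free-space propagation of a complex field over distance dz
    (angular-spectrum method; evanescent components are suppressed)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel)
    fx2 = fx[:, None]**2 + fx[None, :]**2
    kz = 2 * np.pi * np.sqrt(np.maximum(0.0, 1 / wavelength**2 - fx2))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

def diffractive_layer(field, phase_mask, wavelength, dz, pixel):
    """One diffractive layer: propagate to the layer plane, then apply
    its learned phase modulation (phase_mask is the trainable part)."""
    return angular_spectrum(field, wavelength, dz, pixel) * np.exp(1j * phase_mask)
```

Because both the propagation transfer function and the phase mask are unit-modulus, a layer is passive: it redistributes energy between spatial positions rather than amplifying it, which is consistent with the low-power claim.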
Tomographic phase microscopy (TPM) is a unique imaging modality for measuring the three-dimensional refractive index distribution of transparent and semitransparent samples. However, the requirement of dense sampling over a large range of incident angles restricts its temporal resolution and prevents its application to dynamic scenes. Here, we propose a graphics-processing-unit-based implementation of a deep convolutional neural network to improve the performance of phase tomography, especially with far fewer incident angles. As the loss function for the regularized TPM, an ℓ1-norm sparsity constraint is introduced for both the data-fidelity term and the gradient-domain regularizer in the multislice beam propagation model. We compare our method with several state-of-the-art algorithms and obtain at least 14 dB improvement in signal-to-noise ratio. Experimental results on HeLa cells are also shown at different levels of data reduction.
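The multislice beam propagation forward model named above alternates a thin-slice phase delay with free-space propagation between slices. The sketch below is a generic illustration of that model under a weak-scattering assumption; parameter values and the refractive-index-contrast parameterization (`dn`) are placeholders.

```python
import numpy as np

def multislice_bpm(incident, slices, wavelength, dz, pixel):
    """Multislice beam propagation: for each slice of refractive-index
    contrast dn, apply the thin-slice phase delay, then propagate the
    field by dz with the angular-spectrum method."""
    n = incident.shape[0]
    fx = np.fft.fftfreq(n, d=pixel)
    fx2 = fx[:, None]**2 + fx[None, :]**2
    kz = 2 * np.pi * np.sqrt(np.maximum(0.0, 1 / wavelength**2 - fx2))
    prop = np.exp(1j * kz * dz)                  # inter-slice propagator
    k0 = 2 * np.pi / wavelength
    field = incident.astype(complex)
    for dn in slices:                            # dn: (H, W) index contrast
        field = field * np.exp(1j * k0 * dn * dz)   # thin-slice phase delay
        field = np.fft.ifft2(np.fft.fft2(field) * prop)
    return field
```

Reconstruction then amounts to inverting this model: finding the stack of `dn` slices whose predicted exit fields match the measurements, with the ℓ1 terms regularizing the fit.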
Fourier ptychographic microscopy (FPM) is a recently developed technique that stitches low-resolution images in the Fourier domain to realize wide-field, high-resolution imaging. However, the time-consuming image acquisition process greatly limits its application to dynamic imaging. We report a wavelength-multiplexing strategy that speeds up the acquisition process of FPM severalfold. A proof-of-concept system is built to verify its feasibility. Distinguished from many current multiplexing methods that operate in the Fourier domain, we explore the potential of high-speed FPM in the spectral domain. Compatible with most existing FPM methods, our strategy provides an approach to high-speed gigapixel microscopy. Several experimental results are presented to validate the strategy.
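The Fourier-domain stitching that FPM relies on can be illustrated with a simplified synthesis step. This is a deliberate idealization: real FPM records only intensities and recovers the phase iteratively, whereas here complex-valued low-resolution fields are assumed so that each angled illumination's spectrum can be pasted directly at its shifted position.

```python
import numpy as np

def fpm_stitch(lowres_fields, shifts, hi_shape):
    """Simplified FPM synthesis: each oblique illumination shifts a
    different region of the sample spectrum into the objective passband;
    paste each (assumed complex) low-res spectrum back at its shift
    (sy, sx) to synthesize a wide Fourier support."""
    H, W = hi_shape
    spectrum = np.zeros(hi_shape, complex)
    for field, (sy, sx) in zip(lowres_fields, shifts):
        h, w = field.shape
        sub = np.fft.fftshift(np.fft.fft2(field))
        cy, cx = H // 2 + sy, W // 2 + sx        # tile center in hi-res spectrum
        spectrum[cy - h // 2:cy + h // 2, cx - w // 2:cx + w // 2] = sub
    return np.fft.ifft2(np.fft.ifftshift(spectrum))
```

The wavelength-multiplexing idea is orthogonal to this synthesis: by illuminating at several wavelengths at once and separating them spectrally, multiple Fourier tiles are captured per exposure instead of one.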