Molecular imaging tools that can visualize plant metabolism and the effects of external agricultural treatments within the micro-environment of plant tissues are important for understanding plant biology and for optimizing the formulation of new agricultural products. Mass spectrometry, a common tool among plant biologists, can neither resolve nano-crystalline active ingredients (AIs) on the leaf surface nor achieve 3D molecular imaging of living plants. To address these limitations, multiphoton microscopy (MPM) and fluorescence lifetime imaging microscopy (FLIM) are combined to measure sub-cellular, depth-resolved fluorescence lifetimes of both AIs and intrinsic proteins and pigments (e.g., chlorophyll and cytosolic NADH) after herbicide treatment. Here we present a method using a custom-designed, high-speed MPM-FLIM system, “Instant FLIM”, to achieve real-time, label-free 3D functional molecular imaging of intrinsic proteins and pigments in optically thick, highly scattering plant samples following the application of external treatments. To validate the capability of MPM-FLIM to measure intrinsic proteins and pigments within plant tissues, we present results from unlabeled bluegrass blade samples. To demonstrate simultaneous 3D imaging of plant tissue and the deposition and formation of agricultural AI nano-crystals, we evaluate the performance of MPM-FLIM by applying a commercial herbicide product to a gamagrass blade sample. Additionally, to measure herbicide-induced cellular-level functional responses within living plant tissues, 3D time-resolved molecular MPM-FLIM imaging of a hemp dogbane leaf treated with herbicide is performed. The results demonstrate that MPM-FLIM is capable of simultaneous 3D functional imaging of label-free living plant tissues and of quantitative measurement of the location and formation of AI nanocrystals within the plant tissues.
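The lifetime contrast underlying the abstract above comes from estimating a fluorescence lifetime at each pixel from a time-resolved decay. This is not the authors' implementation; it is a minimal NumPy sketch of the simplest possible estimator, a log-linear fit of a single-exponential decay, on synthetic noise-free data.

```python
import numpy as np

def estimate_lifetime(t, decay):
    """Estimate a single-exponential lifetime tau from a decay curve
    I(t) = A * exp(-t / tau) via a log-linear least-squares fit."""
    mask = decay > 0                      # log is only defined for positive counts
    slope, _ = np.polyfit(t[mask], np.log(decay[mask]), 1)
    return -1.0 / slope                   # log I(t) has slope -1/tau

# Synthetic decay: 2.5 ns lifetime sampled over 12 ns (noise-free)
t = np.linspace(0, 12e-9, 256)
decay = 1000.0 * np.exp(-t / 2.5e-9)
tau = estimate_lifetime(t, decay)
print(round(tau * 1e9, 2))  # → 2.5
```

Real FLIM data are photon-limited and often multi-exponential, so practical pipelines use maximum-likelihood fitting or the phasor transform instead, but the log-linear fit conveys the basic idea.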
Significance: Machine learning (ML) models based on deep convolutional neural networks have been used to significantly increase microscopy resolution, speed [signal-to-noise ratio (SNR)], and data interpretation. The bottleneck in developing effective ML systems is often the need to acquire large datasets to train the neural network. We demonstrate how adding a “dense encoder-decoder” (DenseED) block can be used to effectively train a neural network that produces super-resolution (SR) images from conventional diffraction-limited (DL) microscopy images using a small training dataset [15 fields of view (FOVs)].
Aim: ML can retrieve SR information from a DL image when trained with a massive dataset. The aim of this work is to demonstrate a neural network that estimates SR images from DL images using modifications that enable training with a small dataset.
Approach: We employ “DenseED” blocks in existing SR ML network architectures. DenseED blocks use a dense layer that concatenates features from the previous convolutional layer to the next convolutional layer. DenseED blocks in fully convolutional networks (FCNs) estimate SR images when trained with a small dataset (15 FOVs) of human cells from the Widefield2SIM dataset and of fluorescently labeled fixed bovine pulmonary artery endothelial (BPAE) cell samples.
Results: Conventional ML models without DenseED blocks trained on small datasets fail to accurately estimate SR images, while models including DenseED blocks succeed. The average peak SNR (PSNR) and resolution improvements achieved by networks containing DenseED blocks are ≈3.2 dB and 2×, respectively. We evaluated various configurations of target image generation methods (e.g., experimentally captured targets and computationally generated targets) used to train FCNs with and without DenseED blocks, and showed that simple FCNs with DenseED blocks outperform simple FCNs without them.
Conclusions: DenseED blocks in neural networks enable accurate estimation of SR images even when the ML model is trained with a small training dataset of 15 FOVs. This approach allows microscopy applications to train on smaller, application-specific datasets, and shows promise for other imaging modalities, such as MRI and x-ray imaging.
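The defining operation of a dense block, as described in the Approach section, is concatenating each layer's output with all previous feature maps along the channel axis. The following is a minimal NumPy sketch of that connectivity pattern with random (untrained) weights, not the authors' network; the layer sizes are illustrative assumptions.

```python
import numpy as np

def conv3x3(x, w):
    """3x3 convolution with zero padding; x: (C, H, W), w: (C_out, C, 3, 3)."""
    C, H, W = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((w.shape[0], H, W))
    for o in range(w.shape[0]):
        for c in range(C):
            for i in range(3):
                for j in range(3):
                    out[o] += w[o, c, i, j] * xp[c, i:i + H, j:j + W]
    return out

def dense_block(x, weights):
    """Dense connectivity: each layer's input is the concatenation of all
    previous feature maps along the channel axis (the DenseED idea)."""
    features = x
    for w in weights:
        y = np.maximum(conv3x3(features, w), 0)      # conv + ReLU
        features = np.concatenate([features, y], 0)  # dense concatenation
    return features

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8, 8))
# two layers, each producing 2 feature maps; input channels grow 1 -> 3
weights = [rng.standard_normal((2, 1, 3, 3)), rng.standard_normal((2, 3, 3, 3))]
out = dense_block(x, weights)
print(out.shape)  # → (5, 8, 8)
```

Because earlier features are reused rather than recomputed, each layer adds only a few channels, which is one intuition for why dense connectivity trains well on small datasets.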
Fluorescence lifetime imaging microscopy (FLIM) is an important technique for understanding the chemical microenvironment in cells and tissues, since it provides additional contrast compared to conventional fluorescence imaging. When two fluorophores within a diffraction limit are excited, the resulting emission leads to nonlinear spatial distortion and localization effects in the intensity (magnitude) and lifetime (phase) components. To address this issue, in this work we provide a theoretical model for convolution in FLIM that describes how the resulting behavior differs from conventional fluorescence microscopy. We then present a Richardson-Lucy (RL) deconvolution method with total variation (TV) regularization to correct for the distortions in FLIM measurements due to optical convolution, and experimentally demonstrate this FLIM deconvolution method on multi-photon microscopy (MPM) FLIM images of fluorescently labeled fixed bovine pulmonary artery endothelial (BPAE) cells.
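The core of the method above is the classic Richardson-Lucy iteration. The sketch below implements plain RL on a 1D signal in NumPy, without the paper's TV regularization or the FLIM-specific phase handling, purely to illustrate the multiplicative update; the test signal and PSF are invented for the example.

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=200):
    """Plain Richardson-Lucy deconvolution (1D, 'same'-mode boundaries).
    Update: est <- est * (psf_flipped * (blurred / (psf * est)))."""
    est = np.full_like(blurred, blurred.mean())  # flat, positive initial guess
    psf_flip = psf[::-1]
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)  # avoid division by zero
        est *= np.convolve(ratio, psf_flip, mode="same")
    return est

# Two point emitters blurred by a Gaussian PSF
x = np.zeros(64)
x[20], x[40] = 1.0, 0.8
psf = np.exp(-0.5 * (np.arange(-6, 7) / 2.0) ** 2)
psf /= psf.sum()
blurred = np.convolve(x, psf, mode="same")
restored = richardson_lucy(blurred, psf)
print(int(np.argmax(restored)))  # index of the brightest restored peak
```

The multiplicative form keeps the estimate non-negative, which is why RL suits photon-count data; TV regularization, as used in the paper, additionally suppresses the noise amplification that plain RL exhibits at high iteration counts.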
KEYWORDS: 3D image processing, Microscopy, Compressed sensing, Luminescence, In vivo imaging, Signal to noise ratio, Confocal microscopy, Reconstruction algorithms, Microscopes, 3D modeling
Fluorescence microscopy has been a significant tool for long-term in vivo imaging of embryo growth over time. However, cumulative exposure is phototoxic to such sensitive live samples. While techniques like light-sheet fluorescence microscopy (LSFM) allow for reduced exposure, they are not well suited for deep imaging. Low-dosage techniques that reconstruct a 3D volume from a few optical sections along the axial direction (z-axis) have been developed, but they often lack restoration quality, while acquiring dense stacks (with small axial steps) is itself expensive. To address this challenge, we present a compressive sensing (CS) based approach that fully reconstructs 3D volumes at the same signal-to-noise ratio (SNR) with less than half the excitation dosage. We present the theory and experimentally validate the approach. To demonstrate the technique, we capture a 3D volume of RFP-labeled neurons in the zebrafish embryo spinal cord (30 μm thickness) with an axial sampling of 0.1 μm using a confocal microscope. We observe that the CS-based approach achieves accurate 3D volume reconstruction from less than 20% of the optical sections in the entire stack. The CS-based methodology developed in this work can be readily applied to other deep imaging modalities, such as two-photon and light-sheet microscopy, where reducing sample phototoxicity is a critical challenge.
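Compressive sensing recovers a signal that is sparse in some basis from far fewer measurements than unknowns. The paper's 3D volume reconstruction is not reproduced here; instead, this is a generic NumPy sketch of sparse recovery via the iterative shrinkage-thresholding algorithm (ISTA), with an invented 3-sparse "scene" measured at 40% sampling to mirror the under-sampling idea.

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=1000):
    """ISTA for the sparse recovery problem
    min_x 0.5 * ||A x - y||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L          # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0)  # soft threshold
    return x

rng = np.random.default_rng(1)
n, m = 100, 40                                 # 100 unknowns, 40 measurements
x_true = np.zeros(n)
x_true[[5, 37, 72]] = [1.0, -0.7, 0.5]         # sparse ground truth
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true                                 # noiseless measurements
x_hat = ista(A, y)
print(sorted(np.argsort(np.abs(x_hat))[-3:].tolist()))  # largest-magnitude indices
```

In the microscopy setting, the sparsely acquired z-sections play the role of y, and the sparsity prior is imposed in a transform domain (e.g., wavelets or gradients) rather than directly on the voxels as in this toy example.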
We propose and demonstrate the first analytical model of the spatial resolution of frequency-domain (FD) fluorescence lifetime imaging microscopy (FLIM), which explains how it is fundamentally different from the common resolution limit of conventional fluorescence microscopy. The model also reveals a frequency modulation (FM) capture effect, which results in distorted FLIM measurements. A super-resolution FLIM approach based on a localization technique, super-resolution radial fluctuations (SRRF), is presented. In this approach, we process the intensity and lifetime separately to generate a super-resolution FLIM composite image. The capability of the approach is validated both numerically and experimentally on a fixed-cell sample.
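The separate handling of intensity (magnitude) and lifetime (phase) rests on the standard FD-FLIM relations: a single-exponential emitter excited at modulation frequency f produces a phase shift with tan(φ) = 2πfτ and a modulation depth m = 1/√(1 + (2πfτ)²). A short NumPy check of these textbook formulas (the 80 MHz frequency and 2 ns lifetime are illustrative values, not from the paper):

```python
import numpy as np

def phase_lifetime(phi, f):
    """Invert tan(phi) = 2*pi*f*tau for the phase lifetime."""
    return np.tan(phi) / (2 * np.pi * f)

def modulation_lifetime(m, f):
    """Invert m = 1/sqrt(1 + (2*pi*f*tau)^2) for the modulation lifetime."""
    return np.sqrt(1.0 / m**2 - 1.0) / (2 * np.pi * f)

f = 80e6                                  # 80 MHz modulation (illustrative)
tau = 2.0e-9                              # 2 ns ground-truth lifetime
omega_tau = 2 * np.pi * f * tau
phi = np.arctan(omega_tau)                # forward model: phase shift
m = 1.0 / np.sqrt(1.0 + omega_tau**2)     # forward model: modulation depth
print(round(phase_lifetime(phi, f) * 1e9, 3),
      round(modulation_lifetime(m, f) * 1e9, 3))  # → 2.0 2.0
```

For multi-exponential mixtures the two estimates disagree (τ_φ < τ_m), which is one reason the phase and magnitude channels carry distinct spatial information and must be processed separately.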
KEYWORDS: Fluorescence lifetime imaging, Image segmentation, Microscopy, Denoising, In vivo imaging, Luminescence, Convolutional neural networks, Signal to noise ratio, Imaging systems, Signal processing
Fluorescence lifetime imaging microscopy (FLIM) systems are limited by their slow processing speed, low signal-to-noise ratio (SNR), and expensive and challenging hardware setups. In this work, we demonstrate applying a denoising convolutional network to improve FLIM SNR. The network is integrated with an instant FLIM system featuring fast data acquisition based on analog signal processing, high SNR using high-efficiency pulse modulation, and cost-effective implementation using off-the-shelf radio-frequency components. Our instant FLIM system simultaneously provides intensity, lifetime, and phasor plots in vivo and ex vivo. By denoising the FLIM data with the trained deep learning model, accurate FLIM phasor measurements are obtained. The enhanced phasor is then passed through K-means clustering, an unbiased and unsupervised machine learning technique, to accurately separate different fluorophores. Our experimental in vivo mouse kidney results indicate that introducing the deep learning denoising model before segmentation effectively removes noise in the phasor compared with existing methods and provides cleaner segments. Hence, the proposed deep-learning-based workflow provides fast and accurate automatic segmentation of fluorescence images using instant FLIM. The denoising operation benefits segmentation when the FLIM measurements are noisy, and the clustering can effectively enhance the detection of biological structures of interest in biomedical imaging applications.
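The phasor-plus-K-means pipeline described above can be sketched end to end in NumPy: single-exponential lifetimes map onto the universal semicircle of the phasor plot via g = 1/(1 + ω²τ²) and s = ωτ/(1 + ω²τ²), and K-means then separates the resulting point clouds. The two lifetime populations (≈1 ns and ≈4 ns) and the noise level are invented for illustration; this is not the authors' segmentation code.

```python
import numpy as np

def phasor(tau, omega):
    """Phasor coordinates (g, s) of single-exponential decays."""
    d = 1.0 + (omega * tau) ** 2
    return np.stack([1.0 / d, omega * tau / d], axis=-1)

def kmeans(points, k, n_iter=50, seed=0):
    """Plain K-means on the phasor plot to separate fluorophore populations."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((points[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.stack([points[labels == i].mean(0) if (labels == i).any()
                            else centers[i] for i in range(k)])
    return labels, centers

omega = 2 * np.pi * 80e6                       # 80 MHz repetition rate (illustrative)
rng = np.random.default_rng(2)
# two fluorophore populations: ~1 ns and ~4 ns lifetimes with small spread
taus = np.concatenate([rng.normal(1e-9, 0.05e-9, 100),
                       rng.normal(4e-9, 0.05e-9, 100)])
pts = phasor(taus, omega)
labels, centers = kmeans(pts, 2)
print(np.round(centers[np.argsort(centers[:, 0])], 2))  # cluster centers (g, s)
```

In the actual workflow the phasor points come from noisy measured decays, so the deep-learning denoising step tightens these clouds before K-means runs, which is what makes the segmentation cleaner.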
KEYWORDS: Super resolution, Microscopy, Luminescence, Organisms, Data modeling, X-rays, X-ray imaging, Visualization, Super resolution microscopy, Magnetic resonance imaging
Fluorescence microscopy has enabled dramatic developments in modern biology by visualizing biological organisms with micrometer-scale resolution. However, due to the diffraction limit, sub-micron/nanometer features are difficult to resolve. While various super-resolution techniques have been developed to achieve nanometer-scale resolution, they often require either expensive optical setups or specialized fluorophores. In recent years, deep learning has shown potential to reduce the technical barrier and obtain super-resolution from diffraction-limited images. For accurate results, conventional deep learning techniques require thousands of images as a training dataset. Obtaining large datasets from biological samples is often not feasible due to photobleaching of fluorophores, phototoxicity, and dynamic processes occurring within the organism. Therefore, achieving deep-learning-based super-resolution using small datasets is challenging. We address this limitation with a new convolutional-neural-network-based approach that is successfully trained with small datasets and achieves super-resolution images. To demonstrate the technique, we captured 750 images in total from 15 different fields of view (FOVs) as the training dataset. In each FOV, a single target image is generated using the super-resolution radial fluctuations (SRRF) method. As expected, this small dataset failed to produce a usable model using a traditional super-resolution architecture. However, using the new approach, a network can be trained to achieve super-resolution images from this small dataset. This deep learning model can be applied to other biomedical imaging modalities, such as MRI and X-ray imaging, where obtaining large training datasets is challenging.