Molecular imaging tools that can visualize plant metabolism and the effects of external agricultural treatments in the micro-environment of plant tissues are important for furthering our understanding of plant biology and for optimizing the formulation of new agricultural products. Mass spectrometry, a common tool used by plant biologists, can neither resolve nano-crystalline active ingredients (AIs) on the leaf surface nor achieve 3D molecular imaging of living plants. To address this, multiphoton microscopy (MPM) and fluorescence lifetime imaging microscopy (FLIM) are combined to obtain sub-cellular, depth-resolved fluorescence lifetime measurements of both AIs and intrinsic proteins and pigments (e.g., chlorophyll and cytosolic NADH) after herbicide treatment. Here we present a method using a custom-designed, high-speed MPM-FLIM system, “Instant FLIM”, to achieve real-time, label-free 3D functional molecular imaging of intrinsic proteins and pigments in optically thick, highly scattering plant samples subjected to external treatments. To validate the capability of MPM-FLIM to measure intrinsic proteins and pigments within plant tissues, we present results from unlabeled bluegrass blade samples. To demonstrate simultaneous 3D imaging of plant tissue and of the deposition and formation of agricultural AI nanocrystals, we evaluate the performance of MPM-FLIM by applying a commercial herbicide product to a gamagrass blade sample. Additionally, to measure herbicide-induced cellular-level functional responses within living plant tissues, 3D time-resolved molecular MPM-FLIM imaging of a hemp dogbane leaf treated with herbicide is performed. The results demonstrate that MPM-FLIM is capable of simultaneous 3D functional imaging of label-free living plant tissues and of quantitatively measuring the location and formation of AI nanocrystals within the plant tissues.
Fluorescence lifetime imaging microscopy (FLIM) is an important technique for probing the chemical microenvironment in cells and tissues, since it provides additional contrast compared to conventional fluorescence imaging. When two fluorophores within a diffraction-limited volume are excited, the resulting emission leads to nonlinear spatial distortion and localization effects in both the intensity (magnitude) and lifetime (phase) components. To address this issue, in this work we provide a theoretical model of convolution in FLIM that describes how the resulting behavior differs from conventional fluorescence microscopy. We then present a Richardson-Lucy (RL) deconvolution method with total variation (TV) regularization to correct for the distortions in FLIM measurements caused by optical convolution, and experimentally demonstrate this FLIM deconvolution method on multi-photon microscopy (MPM) FLIM images of fluorescently labeled fixed bovine pulmonary arterial endothelial (BPAE) cells.
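To make the deconvolution step concrete, the sketch below shows a standard Richardson-Lucy update with total-variation regularization (following the RL-TV form of Dey et al.) applied to a single 2D image. The abstract does not specify the exact formulation; applying it to the two phasor-weighted channels (intensity times cosine and sine of the phase) and recovering the phase from their ratio is an assumption for illustration, as are the parameter values.

```python
# Minimal sketch: RL deconvolution with TV regularization (assumed RL-TV form).
import numpy as np
from scipy.signal import fftconvolve

def rl_tv_deconvolve(image, psf, n_iter=50, tv_weight=0.002, eps=1e-8):
    """Richardson-Lucy deconvolution of a 2D image with TV regularization."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full(image.shape, float(image.mean()))
    for _ in range(n_iter):
        # Standard RL multiplicative correction term.
        blurred = fftconvolve(estimate, psf, mode="same") + eps
        correction = fftconvolve(image / blurred, psf_mirror, mode="same")
        # TV regularization: divergence of the normalized gradient.
        gy, gx = np.gradient(estimate)
        norm = np.sqrt(gx**2 + gy**2) + eps
        div = np.gradient(gy / norm, axis=0) + np.gradient(gx / norm, axis=1)
        estimate = np.clip(estimate * correction / (1.0 - tv_weight * div + eps), 0, None)
    return estimate

# Hypothetical FD-FLIM usage: deconvolve the phasor-weighted channels separately,
# then recover the corrected lifetime phase from their ratio.
# g = intensity * np.cos(phase); s = intensity * np.sin(phase)
# g_dec = rl_tv_deconvolve(g, psf); s_dec = rl_tv_deconvolve(s, psf)
# phase_dec = np.arctan2(s_dec, g_dec)
```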
We propose and demonstrate the first analytical model of the spatial resolution of frequency-domain (FD) fluorescence lifetime imaging microscopy (FLIM), which explains how it is fundamentally different from the common resolution limit of conventional fluorescence microscopy. A frequency-modulation (FM) capture effect, which results in distorted FLIM measurements, is also predicted by the model. A super-resolution FLIM approach based on a localization-based technique, super-resolution radial fluctuations (SRRF), is presented. In this approach, we process the intensity and lifetime separately and then combine them into a super-resolution FLIM composite image. The capability of the approach is validated both numerically and experimentally on fixed cell samples.
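As a rough illustration of how separately processed intensity and lifetime channels can be merged into a composite, the sketch below assumes a super-resolved intensity map has already been produced (e.g., by running SRRF on the raw intensity frames with an external tool) and that the SRRF magnification is an integer factor of the original grid. The lifetime map is upsampled to the same grid and mapped to hue, with the SRRF intensity as brightness. The variable names, lifetime range, and HSV mapping are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch: merge a super-resolved intensity map with an upsampled lifetime map.
import numpy as np
from scipy.ndimage import zoom
from matplotlib.colors import hsv_to_rgb

def flim_composite(srrf_intensity, lifetime, tau_range=(0.5, 4.0)):
    """Build an RGB FLIM composite: lifetime -> hue, SRRF intensity -> brightness."""
    scale = (srrf_intensity.shape[0] / lifetime.shape[0],
             srrf_intensity.shape[1] / lifetime.shape[1])
    tau_up = zoom(lifetime, scale, order=1)                 # bilinear upsampling to SRRF grid
    tau_lo, tau_hi = tau_range                              # assumed lifetime range in ns
    hue = np.clip((tau_up - tau_lo) / (tau_hi - tau_lo), 0, 1) * 0.7
    val = srrf_intensity / (srrf_intensity.max() + 1e-12)   # normalized brightness
    hsv = np.stack([hue, np.ones_like(hue), val], axis=-1)
    return hsv_to_rgb(hsv)
```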
KEYWORDS: Fluorescence lifetime imaging, Image segmentation, Microscopy, Denoising, In vivo imaging, Luminescence, Convolutional neural networks, Signal to noise ratio, Imaging systems, Signal processing
Fluorescence lifetime imaging microscopy (FLIM) systems are limited by their slow processing speed, low signal-to-noise ratio (SNR), and expensive, challenging hardware setups. In this work, we demonstrate the use of a denoising convolutional network to improve FLIM SNR. The network is integrated with an instant FLIM system featuring fast data acquisition based on analog signal processing, high SNR through high-efficiency pulse modulation, and cost-effective implementation using off-the-shelf radio-frequency components. Our instant FLIM system simultaneously provides intensity, lifetime, and phasor plots in vivo and ex vivo. By applying the trained deep-learning denoising model to the FLIM data, accurate FLIM phasor measurements are obtained. The enhanced phasor is then passed through K-means clustering, an unbiased, unsupervised machine learning technique, to accurately separate different fluorophores. Our experimental in vivo mouse kidney results indicate that introducing the deep-learning denoising model before segmentation removes noise in the phasor more effectively than existing methods and provides clearer segments. Hence, the proposed deep-learning-based workflow provides fast and accurate automatic segmentation of fluorescence images using instant FLIM. The denoising operation benefits segmentation when the FLIM measurements are noisy, and the clustering can effectively enhance the detection of biological structures of interest in biomedical imaging applications.
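The phasor-segmentation step can be sketched as follows: after the denoising network has cleaned the phasor components, each pixel's (g, s) coordinates are clustered with K-means and the cluster labels are reshaped into a segmentation map. The number of clusters and the choice of (g, s) alone as features are assumptions for illustration; the paper's exact settings may differ.

```python
# Minimal sketch: K-means segmentation of denoised FLIM phasor coordinates.
import numpy as np
from sklearn.cluster import KMeans

def segment_phasor(g, s, n_clusters=3, random_state=0):
    """Segment a FLIM image by clustering per-pixel phasor coordinates (g, s)."""
    features = np.stack([g.ravel(), s.ravel()], axis=1)      # one (g, s) pair per pixel
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=random_state).fit_predict(features)
    return labels.reshape(g.shape)                            # label map, same size as image
```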
KEYWORDS: Super resolution, Microscopy, Luminescence, Organisms, Data modeling, X-rays, X-ray imaging, Visualization, Super resolution microscopy, Magnetic resonance imaging
Fluorescence microscopy has enabled dramatic developments in modern biology by visualizing biological organisms with micrometer-scale resolution. However, due to the diffraction limit, sub-micron and nanometer features are difficult to resolve. While various super-resolution techniques have been developed to achieve nanometer-scale resolution, they often require either an expensive optical setup or specialized fluorophores. In recent years, deep learning has shown potential to reduce the technical barrier and obtain super-resolution from diffraction-limited images. For accurate results, conventional deep-learning techniques require thousands of images as a training dataset. Obtaining large datasets from biological samples is often not feasible due to photobleaching of fluorophores, phototoxicity, and dynamic processes occurring within the organism. Therefore, achieving deep-learning-based super-resolution using small datasets is challenging. We address this limitation with a new convolutional neural network (CNN)-based approach that can be trained successfully with small datasets to produce super-resolution images. To demonstrate the technique, we captured 750 images in total from 15 different fields of view (FOVs) as the training dataset. In each FOV, a single target image was generated using the super-resolution radial fluctuations (SRRF) method. As expected, this small dataset failed to produce a usable model using a traditional super-resolution architecture. However, using the new approach, a network can be trained to produce super-resolution images from this small dataset. This deep-learning model can be applied to other biomedical imaging modalities, such as MRI and X-ray imaging, where obtaining large training datasets is challenging.
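The general training setup can be illustrated with a compact residual CNN that maps diffraction-limited inputs (assumed to be upsampled to the SRRF target grid) to SRRF-generated targets. The architecture, loss, and hyperparameters below are assumptions for illustration only; the abstract does not specify the actual network, and this is not the paper's architecture.

```python
# Minimal sketch: training a small CNN on (diffraction-limited, SRRF target) pairs.
import torch
import torch.nn as nn

class SmallSRNet(nn.Module):
    """A few convolutional layers with a residual connection to the input."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)      # learn the residual to the SRRF target

def train(model, loader, epochs=200, lr=1e-3):
    """Train with an L1 loss on patch pairs cropped from the small dataset."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()
    for _ in range(epochs):
        for lowres, target in loader:   # tensors of shape (batch, 1, H, W)
            opt.zero_grad()
            loss = loss_fn(model(lowres), target)
            loss.backward()
            opt.step()
    return model
```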