Significance: Light-field microscopy has achieved success in various life-science applications that require high-speed volumetric imaging. However, existing light-field reconstruction algorithms degrade severely in low-light conditions, and the deconvolution process is time-consuming. Aim: This study aims to develop a noise-robust phase-space deconvolution method with low computational cost. Approach: We reformulate the light-field phase-space deconvolution model in the Fourier domain with random-subset ordering and total-variation (TV) regularization. Additionally, we build a time-division-based multicolor light-field microscope and perform three-dimensional (3D) imaging of the beating heart in zebrafish larvae at over 95 Hz with a low light dose. Results: We demonstrate that this approach reduces computational resources, brings a tenfold speedup, and achieves about a tenfold improvement in noise robustness, measured by the structural similarity (SSIM) index, over the state-of-the-art approach. Conclusions: We propose a phase-space deconvolution algorithm for 3D reconstruction in fluorescence imaging. Compared with the state-of-the-art method, it shows significant improvement in both computational efficiency and noise robustness; we further demonstrate a practical application on zebrafish larvae with low exposure time and low light dose.
1. Introduction

Light-field microscopy (LFM) is an elegant method for capturing diverse biological dynamics in 3D at high frame rates. It preserves the volumetric information of an organism in a single shot that simultaneously captures spatial and angular information. This information can be used to recover the 3D volume of the sample through deconvolution algorithms, making LFM popular in various biological applications, especially the 3D observation of large-scale neural activity.1–8 Furthermore, hardware improvement schemes based on LFM, such as scanning LFM (sLFM),9,10 mirror-enhanced sLFM,11 and confocal LFM,12 have also been proposed. Many LFM deconvolution algorithms have been developed to support the rapidly growing applications in recent years, such as the model-driven multiscale scattering model13 and Fourier-space models,14,15 the data-driven dictionary learning model,16 and deep learning models.17–19 These LFM algorithms have evolved through several stages. Light-field photography based on geometric optics was first proposed to extend the realm of traditional photography with the ability of refocusing and 3D imaging.2,20 The Fourier slice theory then improved the efficiency of geometric light-field processing by orders of magnitude.21 However, the geometric model cannot describe the light field accurately in microscopy: because the sampling size of the complementary metal-oxide-semiconductor (CMOS) sensor is close to the diffraction limit, the geometric model results in severe resolution degradation in LFM. Broxton et al.1 instead proposed a wave-optics model of LFM to perform 3D deconvolution and achieved much higher spatial resolution for some axial planes. On the other hand, the wave-optics model produces strong artifacts near the native object plane due to frequency aliasing, which severely reduces the imaging contrast. This model also incurs a high computational cost, limiting the scope of LFM applications.
Lu et al.5 proposed a phase-space deconvolution algorithm with a smoothness prior in the phase-space domain to eliminate the artifacts and accelerate the algorithm by ten times. However, it still takes a long time to reconstruct a sample volume. In addition, the noise robustness of these algorithms has not been analyzed carefully. LFM suffers from strong background fluorescence from out-of-focus planes, which contributes substantial shot noise to the phase-space measurements. Furthermore, a higher imaging frame rate entails a much shorter exposure time and thus a low signal-to-noise ratio (SNR), an intrinsic limitation of fluorescence microscopy.1 To address these problems, we note that the multiview images in LFM are projections along different phase-space point spread functions (PSFs), which can be solved in a tomographic way (see Fig. 1 and Sec. 2). In our framework, Fourier slice photography21 appears as a simplified model valid when the object scale is much larger than the diffraction limit, which also illustrates the difference between geometric optics and wave optics for LFM in phase space [Fig. 1(b)]. In this paper, we propose a new Fourier phase-space LFM model with novel algorithms to solve the above problems. The model leads to low computational cost and strong noise robustness in LFM 3D imaging. First, we transform the 3D deconvolution of LFM fluorescence samples to the Fourier domain by a 3D Fourier transform. Then, we use random-subset ordering22 to accelerate the maximum likelihood estimation (MLE) of the phase-space model for LFM, and a 3D total-variation regularization is applied to exploit the sparsity prior of fluorescence samples. Compared with the previous method,5 about a tenfold improvement is achieved in terms of the photon number required to reach a similar SNR, with reduced computational cost. We evaluated the method by both numerical simulations and experiments on our customized multicolor LFM.
High-speed heart-beating dynamics with 3D blood flow in a zebrafish larva were also observed at a volume rate of over 95 Hz. Limitations and potential future work are discussed in Sec. 4.

2. Method

2.1 Imaging Model

We build our imaging model in phase space based on wave optics. As shown in Fig. 2(a), we consider a 3D point source. One pixel on the sensor is realigned into phase space with a spatial coordinate, corresponding to the center position of the microlens in front of that sensor pixel, and an angular coordinate, corresponding to its relative position with respect to the microlens center. To obtain angular projections from the four-dimensional light-field capture, we introduce a realignment process that regroups the original light-field image by angular coordinate, as illustrated in Fig. 2(b). We then describe the phase-space PSF5 in terms of a constant phase component; a two-dimensional (2D) rectangular window function in spatial sampling, defined by the pitch size of a single microlens; a 2D Fourier transform over the spatial frequency; and a 2D rectangular function in the spatial-frequency domain whose window size is determined by the number of sensor pixels behind each microlens in our system. The complex field at the native image plane generated by a point source is described analytically by Debye theory.1 The imaging process is then modeled in phase space as the projection of the intensity distribution of the 3D volume onto the captured light-field image after pixel realignment. As the phase-space PSF has been proved to be spatially invariant for each angle, our imaging model simplifies to the operator of 3D convolution.
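The pixel-realignment step described above can be sketched in a few lines of NumPy. This is an illustrative implementation, not the authors' code; the function name and the per-microlens pixel count `n` are our own choices (the system's actual value is given in the text's setup description).

```python
import numpy as np

def realign(lf_image, n):
    """Regroup a raw light-field image into phase-space views: view (u, v)
    collects pixel (u, v) from behind every microlens.
    lf_image: (Y, X) raw capture, with Y and X divisible by n.
    n: number of sensor pixels per microlens along each axis (system-specific).
    Returns an (n, n, Y//n, X//n) stack of angular views."""
    Y, X = lf_image.shape
    views = lf_image.reshape(Y // n, n, X // n, n)
    return views.transpose(1, 3, 0, 2)  # (u, v, microlens_y, microlens_x)
```

Each returned slice `views[u, v]` is one angular view sampled once per microlens, which is exactly the per-angle measurement that the phase-space PSF model operates on.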
Because the constant background can be removed directly from the measurements and shot noise is the main noise source in fluorescence imaging, the final measurement model follows.

2.2 Fourier Space Representation

The MLE of this 3D convolution process is obtained through the 3D Richardson–Lucy (RL) algorithm.1 In the general RL iteration formula, the dot denotes the matrix element-wise product and the slash denotes matrix element-wise division; the formula combines the forward projection of the imaging model, the backward projection process, the set of spatial-frequency components (that is, the total number of pixels behind each microlens in our system setup), and a weight number corresponding to the energy distribution of the PSF. The weight number is employed to compensate for the shot-noise variances of the different angles. The forward projection of the volume is then derived in Fourier space in terms of the Fourier transform and the inverse Fourier transform. To calculate the backward projection in Fourier space, we first define a 3D volume that is zero everywhere except for the central slice, which holds a 2D value map, together with the 3D PSF (at a specific spatial frequency) transposed in 2D; a replication function replicates a 2D matrix into a 3D matrix. The backward projection of Eq. (7) can then be represented in this form. Notice that, as the defined volume has values only at the central slice, its 3D Fourier transform consists of replications of the 2D Fourier transform of that central slice. For the same reason, the backward projection of the light-field data after realignment in phase space, also called the phase-space measurements, takes the same replicated form. If Eqs. (8) and (10) are substituted back into Eq.
(5), we finally obtain the iteration formula for the MLE of the phase-space model in the Fourier domain. For the MLE solution, we need to go through every angle within one iteration, which slows the reconstruction. Random-subset ordering is a recently proposed method in x-ray computed tomography that accelerates the reconstruction while maintaining noise robustness.22 Because the phase-space model of LFM can also be viewed as a tomographic process, as described above, we implement random-subset ordering here to balance the noise-reduction benefit of the full MLE solution against the speed advantage of the single-angle iteration method.5 Before each iteration step, we mix the measurements of two randomly chosen angles into a new measurement to enhance the SNR. The combination of two random angles therefore preserves fast convergence while gaining robustness against shot noise. In this case, following our Fourier phase-space model, we further reduce the computational cost with the corresponding iteration formula.

2.3 Total Variation Regularization

Fluorescence samples are usually sparse compared with conventional photography because only the structures of interest are labeled in each fluorescence channel. To make full use of such sparsity, we add a 3D total-variation (TV) regularization during each iteration to improve the noise performance. In practice, for each iteration, we first update the volume by RL deconvolution and then apply several TV-regularization steps,23 based on the gradients along the three dimensions. We chose the same regularization settings for both the numerical simulations and the experimental reconstructions. The whole process of our Fourier phase-space deconvolution is illustrated by the pseudocode shown in Fig. 3.

3. Results

To quantitatively analyze the effectiveness of the Fourier-domain phase-space model, we conducted numerical simulations on denoised 3D data obtained by a commercial confocal microscope.
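The reconstruction loop summarized in the pseudocode of Fig. 3 can be sketched in NumPy as follows. This is a simplified illustration under our own assumptions, not the paper's implementation: convolutions are circular, the two angles in a random pair are mixed with equal weights (both the measurements and their PSFs), the per-angle noise weights are omitted, and all function and parameter names are ours.

```python
import numpy as np

def forward(vol, psf):
    """Project a 3D volume (Z, Y, X) to a 2D view: per-slice circular 2D
    convolution with the view's PSF stack (FFT-based), summed over depth."""
    return np.real(np.fft.ifft2(np.fft.fft2(vol) * np.fft.fft2(psf))).sum(0)

def backward(img2d, psf):
    """Adjoint: replicate the 2D image along depth, correlate with the PSF."""
    ft = np.fft.fft2(img2d)[None] * np.conj(np.fft.fft2(psf))
    return np.real(np.fft.ifft2(ft))

def tv_step(vol, weight, eps=1e-8):
    """One explicit descent step on a smoothed 3D total-variation energy."""
    g = [np.roll(vol, -1, ax) - vol for ax in range(3)]
    norm = np.sqrt(sum(gi**2 for gi in g) + eps)
    div = sum((gi / norm) - np.roll(gi / norm, 1, ax)
              for ax, gi in enumerate(g))
    return vol + weight * div

def reconstruct(meas, psfs, n_iter=5, tv_weight=0.01, rng=None):
    """Sketch of the Fourier phase-space pipeline: random pairs of angular
    views per pass, a multiplicative RL update per pair, then a TV step.
    meas: dict {angle: (Y, X) view}; psfs: dict {angle: (Z, Y, X) PSF}."""
    rng = np.random.default_rng() if rng is None else rng
    angles = list(meas)
    vol = np.ones(psfs[angles[0]].shape)
    for _ in range(n_iter):
        order = rng.permutation(len(angles))
        for i in range(0, len(order) - 1, 2):  # random pairs of angles
            a, b = angles[order[i]], angles[order[i + 1]]
            m = 0.5 * (meas[a] + meas[b])      # mixed measurement
            p = 0.5 * (psfs[a] + psfs[b])      # matching mixed PSF
            ratio = m / (forward(vol, p) + 1e-9)
            vol = np.maximum(vol * backward(ratio, p), 0)
        vol = tv_step(vol, tv_weight)
    return vol
```

The structure mirrors the text: an FFT-based forward/backward projection pair per angular subset, followed by a TV regularization step after each pass over the subsets.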
For further evaluation of the improvement, we compare our results with the state-of-the-art algorithm (Lu et al.5), which eliminated the reconstruction artifacts and obtained a tenfold speedup over traditional 3D deconvolution algorithms.1 The reduction of computational cost by our method is analyzed both in theory and in a noise-free deconvolution simulation. Different shot-noise levels are applied to show the noise robustness of our algorithm, especially the contribution of the TV regularization. The reconstruction quality is evaluated by the structural similarity (SSIM) index against the ground-truth 3D data. Experiments are conducted on a custom-built two-color LFM, with the reconstruction comparisons also shown in Video 1.

3.1 Simulation for Computational Cost

To fairly compare the time complexity of phase-space deconvolution in the spatial domain and in the Fourier domain, we note that the PSF and the phase-space measurements may not share the same size: the PSF is usually smaller than the phase-space measurements because of computer-memory limitations during calculation. Here, we denote the measurement size and the PSF size separately and, for simplicity, relate them directly. As shown in Fig. 3(b), traditional algorithms1,5 use multiple 2D convolutions in the spatial domain, which determines their time complexity in the spatial domain. Our Fourier phase-space deconvolution instead uses 3D Fourier transforms in both the forward and backward projections; by employing the fast Fourier transform, the time complexity of our algorithm is reduced accordingly. It should be noted that we zero-pad the PSF to the same size as the phase-space measurements when performing the Fourier transform, so the complexity should be computed based on the measurement size.
The ratio of the time complexities of the two algorithms indicates the reduction of computational cost in the Fourier phase-space model, especially for a large volume with a larger PSF size and more axial slices. To quantitatively analyze the noise performance and the computational cost of our algorithm in the Fourier domain, a numerical simulation was conducted. The 3D data used for the simulation were a pig testis slice with autofluorescence excited at about 488 nm. We obtained the 3D information from a commercial confocal microscope (Olympus FV1000) with an oil-immersion objective, a scanning pixel size of 100 nm, and an axial step of 530 nm. The whole imaging process of LFM was simulated based on wave optics, corresponding to our experimental setup with a microlens array inserted at the image plane and a sensor placed at the back focal plane of each microlens. The f-number of the microlens array is set to 23 to match the 0.5 numerical aperture (NA) of the objective after a magnification factor of 23. The synthesized light-field image is shown in Fig. 4(a). We ran all of the compared algorithms on the same computer with an Intel i9-9980XE CPU and 128 GB of RAM. All of the code was implemented in MATLAB R2019a; the GPU version, using MATLAB's gpuArray function, ran on an NVIDIA RTX Titan. We show the convergence curves of the ptychographic reconstruction algorithms with a maximum iteration number of 5. About a tenfold improvement in speed is obtained by our algorithm over the state-of-the-art algorithm5 for CPU processing. Such an improvement mainly results from the reduced computational cost of the Fourier phase-space model. We show the speed results in Fig.
4(b), indicating that similar reconstruction performance can be achieved in much less time by our algorithm, especially after GPU acceleration. Different from the convolution process in the spatial domain, most steps in our Fourier phase-space deconvolution are 3D Fourier transforms, which can be easily implemented with cuFFT for GPU acceleration. In this case, the reconstruction time can be reduced below 100 s, about 100 times faster than the previous method.5 The maximum reconstruction SSIM is quite low due to the resolution loss of LFM, which is the tradeoff for its snapshot 3D imaging capability.

3.2 Simulation for Noise Robustness

To assess the noise robustness of our method, we applied different levels of shot noise to the synthesized light-field measurements and evaluated the reconstruction results by SSIM, as shown in Fig. 5(a). The ground truth of a single slice and the depth-coded projection are shown in Fig. 5(b). For the depth-coded projection, we applied different hues to the voxels at different axial planes and averaged them along the axial dimension. The reconstructed results of the selected region in Fig. 5(b) under different noise levels, including both single slices and depth-coded projections, are shown in Fig. 5(c). We added the different noise levels with the MATLAB function imnoise (parameter 'poisson') and calculated the equivalent photon number per pixel in the simulation. The first row of Fig. 5(c) shows the center views of the synthesized data at different noise levels. All of the algorithms show strong robustness to noise, especially for light-field photon numbers below 100 per pixel. The shot noise in the measurements eventually introduces speckle-like patterns over the reconstructed volume, which may cause fluctuations in intensity. However, the performance of our method decreases more slowly than that of the previous algorithm5 in terms of SSIM.
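The noise protocol above can be reproduced in miniature. The sketch below mimics the role of MATLAB's `imnoise(...,'poisson')` by scaling a normalized image to a target mean photon budget and drawing Poisson counts; `global_ssim` is a simplified single-window stand-in for the windowed SSIM index (no sliding window), and all names are our own, not the paper's.

```python
import numpy as np

def add_shot_noise(img, photons_per_pixel, rng):
    """Rescale a nonnegative image to a mean photon budget and draw
    Poisson counts, then rescale back to the original intensity range."""
    scale = photons_per_pixel / img.mean()
    return rng.poisson(img * scale) / scale

def global_ssim(a, b, data_range=1.0):
    """Single-window SSIM over the whole image (means, variances, and
    covariance computed globally instead of in local windows)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a**2 + mu_b**2 + c1) * (a.var() + b.var() + c2))
```

As expected, the SSIM of an image with itself is 1, and images degraded at lower photon budgets score lower against the clean reference.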
From the color distribution in the depth-coded projections, we can clearly observe the decrease of depth accuracy as the noise increases. Such degradation mainly results from the reduced contrast of the disparity between different angles, which is swamped by the shot noise. Our results show better performance, especially along the axial dimension, because information from multiple angles is averaged together during backward projection. TV regularization further provides a smoother distribution of the structures, avoiding speckles during synthesis. Even in the low-photon-number condition (1 photon per pixel) shown in the fourth column of Fig. 5(c), our results with TV regularization achieve performance similar to that of the previous algorithm at around 10 photons per pixel. With the ablation study in Fig. 5(d), we show that both TV regularization and the random subset improve the phase-space reconstruction under low-light conditions, and our method benefits from both. The subset combination merges the signals of its two measurements, which reduces shot noise, whereas the TV regularization preserves more effective information during each iteration; the high-frequency components of each angular measurement are thus well preserved before they are fed into the next angle subsets. Moreover, the result for a 20-angle subset, with a decreased volume SSIM, shows that the subset size is critical for both noise robustness and convergence speed in our simulation, because the full MLE solution is equivalent to using all angles in every iteration but lacks convergence speed. We therefore choose a subset size of 2 angles as a trade-off between noise robustness and convergence speed during reconstruction.

3.3 Experiment on Zebrafish Larva

To further validate the effectiveness of the algorithm, we built a multicolor LFM based on a commercial inverted microscope (Zeiss Observer Z1). As shown in Fig.
6, the sample was excited by multiple lasers that can be switched on periodically at high speed. The laser switching is synchronized with the camera exposure to prevent color crosstalk. We used a dichroic mirror (DM) to combine the two laser beams, and a dual-channel DM to separate the excitation and emission light for multicolor epifluorescence imaging. The objective and an additional relay magnification were chosen to match the NA of the microlens array, which has an F-number of 21. With an unfocused light-field setup, to achieve the maximum depth range, we inserted the microlens array at the native image plane and used a 4f system to relay the back focal plane of the microlens array to the sensor plane. An sCMOS camera (Andor Zyla 4.2 plus) was used to collect the images of the different fluorescence channels with high sensitivity. The multicolor light-field measurements were processed sequentially and merged after reconstruction with different pseudocolors. To show the system performance, we imaged the 3D dynamics of the beating heart in a zebrafish larva at a high frame rate with a low light dose. The zebrafish larvae used in this study had ethical approval from the Animal Care and Use Committee of our affiliation. During imaging, the larva was embedded in 1% low-melting-point agarose in glass-bottom dishes (D35-14-0-N, In Vitro Scientific). As shown in Fig. 7, we labeled the endothelial cells with EGFP (green) and the red blood cells with DsRed (magenta). The light-field video was captured at 195 Hz over the two colors, corresponding to a 97.5-Hz frame rate for each color. To maintain this frame rate, the exposure time is only 2 ms per frame, which leads to strong shot noise, clearly observable in the center views and side views shown in Fig. 7(a).
Such strong noise reduces the disparity contrast between different views and degrades the reconstruction performance. The traditional artifacts near the native object plane produced by previous methods1 are all successfully eliminated in the phase-space model. Compared with the previous ptychographic algorithm,5 the MLE results at different axial planes have far fewer background artifacts and speckle-like patterns, which severely reduce the image contrast, as indicated by the white arrows. These artifacts may be mistaken for sample structures but can be clearly distinguished in the video. With TV regularization, the noise is further suppressed and the structures become even smoother, which can be observed after zooming in on the image. However, further increasing the weight of the TV regularization reduces the spatial resolution after reconstruction; this tradeoff should be kept in mind when using the algorithm. With our objective, the axial range achieved with similar resolution is sufficient to cover the heart. The engine-like beating process of the heart can be clearly observed both in the images at different time stamps and in Video 1. The 3D motion of the blood cells can be readily tracked at such a high imaging speed. Such 3D dynamic processes could previously be observed only with very complicated and expensive imaging setups, whereas our multicolor LFM achieves this in a much more compact and inexpensive way. Moreover, an ablation study with quantitative evaluation of our method is shown in Fig. 7(b); the zoomed-in slice view of a low-light region indicates that both random-subset ordering and TV regularization help reduce the shot noise in the reconstructed volume.
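The time-division acquisition described above interleaves the color channels frame by frame, so separating them before per-channel reconstruction is a simple stride over the stack. This is an illustrative sketch with names of our choosing, assuming the lasers strictly alternate every frame.

```python
import numpy as np

def deinterleave(frames, n_channels=2):
    """Split a time-division multicolor stack (T, Y, X) into per-channel
    stacks: with two alternating lasers, a 195-Hz capture yields two
    97.5-Hz single-color sequences."""
    return [frames[c::n_channels] for c in range(n_channels)]
```

Each returned stack is then reconstructed independently and merged with its pseudocolor, as described in the text.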
However, as a wide-field imaging method, LFM still suffers from background fluorescence, which reduces the reconstruction performance and creates artifacts at marginal axial planes, where the background contributes larger intensity. Future work could remove this influence for quantitative fluorescence imaging.

4. Conclusion

We have derived the phase-space model of LFM in the Fourier domain based on wave optics. A Fourier phase-space deconvolution algorithm was then proposed for noise-robust 3D reconstruction in fluorescence imaging with low computational cost. By analyzing the reconstruction performance at different shot-noise levels in comparison with the state-of-the-art method, we validated the improvements in computational efficiency and noise robustness brought by random-subset ordering. Further, we exploited the sparsity prior of fluorescence samples by applying 3D total-variation regularization. We demonstrated a practical application of our algorithm in the observation of heart-beating dynamics in zebrafish larvae with our custom-built multicolor LFM at low exposure time and low light dose. A limitation of the current framework is that it remains an iterative algorithm requiring several iterations to converge. Although the ptychographic iteration over random subsets accelerates convergence, the reconstruction could be extended into a deterministic algorithm by an interpolation procedure in the Fourier domain, as in a generic Radon transform; additional filtering could be incorporated in the Fourier domain at the same time. For future work, other regularizations, such as the Hessian constraint, could be used to improve the noise robustness.24 Currently, only spatial constraints have been considered, yet time-lapse 3D video should exhibit strong sparsity in the temporal domain; low-rank constraints on time-lapse 3D video could also be added to further enhance the noise performance.
As the deconvolution process can be modeled as a kind of tomography with an axially nonuniform PSF in phase space, the same missing-cone problem found in computed tomography and magnetic resonance imaging exists in LFM, leading to low axial resolution, which can be observed in our projections. Deep-learning techniques could be applied to fill this gap in the Fourier domain using priors on the data structure, as has been verified in other tomographic algorithms.25,26

Acknowledgments

We thank Dong Jiang for assistance in sample preparation and Yuanlong Zhang for great help with the manuscript. The project was funded by the National Natural Science Foundation of China (61927802, 61722209), the Beijing Municipal Science & Technology Commission (Z181100003118014), the National Key Research and Development Program of China (2020AAA0130000), the Postdoctoral Science Foundation of China (2019M660644), and the Tsinghua University Initiative Scientific Research Program. J.W. was also funded by the National Postdoctoral Program for Innovative Talent and the Shuimu Tsinghua Scholar Program.

Code, Data, and Materials Availability

The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.

References

1. M. Broxton et al., "Wave optics theory and 3-D deconvolution for the light field microscope," Opt. Express 21(21), 25418–25439 (2013). https://doi.org/10.1364/OE.21.025418
2. M. Levoy, Z. Zhang, and I. McDowall, "Recording and controlling the 4D light field in a microscope using microlens arrays," J. Microsc. 235(2), 144–162 (2009). https://doi.org/10.1111/j.1365-2818.2009.03195.x
3. A. Stefanoiu et al., "Artifact-free deconvolution in light field microscopy," Opt. Express 27(22), 31644–31666 (2019). https://doi.org/10.1364/OE.27.031644
4. L. Cong et al., "Rapid whole brain imaging of neural activity in freely behaving larval zebrafish (Danio rerio)," eLife 6, e28158 (2017). https://doi.org/10.7554/eLife.28158
5. Z. Lu et al., "Phase-space deconvolution for light field microscopy," Opt. Express 27(13), 18131–18145 (2019). https://doi.org/10.1364/OE.27.018131
6. R. Prevedel et al., "Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy," Nat. Methods 11(7), 727 (2014). https://doi.org/10.1038/nmeth.2964
7. T. Nöbauer et al., "Video rate volumetric Ca2+ imaging across cortex using seeded iterative demixing (SID) microscopy," Nat. Methods 14(8), 811 (2017). https://doi.org/10.1038/nmeth.4341
8. O. Skocek et al., "High-speed volumetric imaging of neuronal activity in freely moving rodents," Nat. Methods 15(6), 429 (2018). https://doi.org/10.1038/s41592-018-0008-0
9. M.-K. Park et al., "Fast-switching laterally virtual-moving microlens array for enhancing spatial resolution in light-field imaging system without degradation of angular sampling resolution," Sci. Rep. 9, 11297 (2019). https://doi.org/10.1038/s41598-019-47819-9
10. J. Wu et al., "Iterative tomography with digital adaptive optics permits hour-long intravital observation of 3D subcellular dynamics at millisecond scale," Cell 184(12), 3318–3332.e17 (2021). https://doi.org/10.1016/j.cell.2021.04.029
11. B. Xiong et al., "Mirror-enhanced scanning light-field microscopy for long-term high-speed 3D imaging with isotropic resolution," Light Sci. Appl. 10, 227 (2021). https://doi.org/10.1038/s41377-021-00665-9
12. Z. Zhang et al., "Imaging volumetric dynamics at high speed in mouse and zebrafish brain with confocal light field microscopy," Nat. Biotechnol. 39(1), 74–83 (2021). https://doi.org/10.1038/s41587-020-0628-7
13. Y. Zhang et al., "Computational optical sectioning with an incoherent multiscale scattering model for light-field microscopy," Nat. Commun. 12, 6391 (2021). https://doi.org/10.1038/s41467-021-26730-w
14. C. Guo et al., "Fourier light-field microscopy," Opt. Express 27(18), 25573–25594 (2019). https://doi.org/10.1364/OE.27.025573
15. L. Shi et al., "Light field reconstruction using sparsity in the continuous Fourier domain," ACM Trans. Graphics 34(1), 1–13 (2014). https://doi.org/10.1145/2682631
16. Y. Zhang et al., "DiLFM: an artifact-suppressed and noise-robust light-field microscopy through dictionary learning," Light Sci. Appl. 10, 152 (2021). https://doi.org/10.1038/s41377-021-00587-6
17. N. Wagner et al., "Deep learning-enhanced light-field imaging with continuous validation," Nat. Methods 18(5), 557–563 (2021). https://doi.org/10.1038/s41592-021-01136-0
18. Z. Wang et al., "Real-time volumetric reconstruction of biological dynamics with light-field microscopy and deep learning," Nat. Methods 18(5), 551–556 (2021). https://doi.org/10.1038/s41592-021-01058-x
19. K. Wang, "Deep-learning-enhanced light-field microscopy," Nat. Methods 18(5), 459–460 (2021). https://doi.org/10.1038/s41592-021-01151-1
20. M. Levoy et al., "Light field microscopy," ACM Trans. Graphics 25(3), 924–934 (2006). https://doi.org/10.1145/1141911.1141976
21. R. Ng, "Fourier slice photography," ACM Trans. Graphics 24(3), 735–744 (2005). https://doi.org/10.1145/1073204.1073256
22. D. Kim, S. Ramani, and J. A. Fessler, "Combining ordered subsets and momentum for accelerated x-ray CT image reconstruction," IEEE Trans. Med. Imaging 34(1), 167–178 (2014). https://doi.org/10.1109/TMI.2014.2350962
23. H. Banjak et al., "Evaluation of noise and blur effects with SIRT-FISTA-TV reconstruction algorithm: application to fast environmental transmission electron tomography," Ultramicroscopy 189, 109–123 (2018). https://doi.org/10.1016/j.ultramic.2018.03.022
24. X. Huang et al., "Fast, long-term, super-resolution imaging with Hessian structured illumination microscopy," Nat. Biotechnol. 36(5), 451 (2018). https://doi.org/10.1038/nbt.4115
25. H. Qiao et al., "GPU-based deep convolutional neural network for tomographic phase microscopy with ℓ1 fitting and regularization," J. Biomed. Opt. 23(6), 066003 (2018). https://doi.org/10.1117/1.JBO.23.6.066003
26. U. S. Kamilov et al., "Learning approach to optical tomography," Optica 2(6), 517–522 (2015). https://doi.org/10.1364/OPTICA.2.000517