This PDF file contains the front matter associated with SPIE Proceedings Volume 13041, including the Title Page, Copyright information, Table of Contents, and Conference Committee information.
Devices for 3D Imaging, TV, Video, and Visualization Systems
In multiview (MV) or super multiview (SMV) glasses-free 3D displays, depth of field (DOF) is one of the most important parameters. The greater the distance of the displayed object outside or inside the screen, the more blurred it will be; therefore, a larger DOF can convey more depth information and more pleasing 3D images. In this paper, we first analyze how voxels, the 3D equivalent of pixels in a conventional display, are formed in space and how this affects DOF; we then show how to increase DOF by increasing the focal length of the lenticular lens. In our experiments, we made two 3D displays using two lenticular lenses with identical parameters except for the focal length, and showed that the larger the focal length of the lenticular lens, the larger the DOF of the 3D display. However, for the same 2D screen, a larger DOF results in a smaller viewing angle. We therefore applied eye tracking to increase the viewing angle by a factor of about three, compensating for the reduction.
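The focal-length/viewing-angle trade-off noted above follows from basic lenticular geometry. Below is a minimal sketch of that relation, assuming the standard model in which the pixel group of width w behind each lenslet is projected through a lens of focal length f; the numerical values are hypothetical, not taken from the paper.

```python
import math

def viewing_angle_deg(pixel_group_width_mm: float, focal_length_mm: float) -> float:
    """Full viewing angle of a lenticular 3D display (standard geometry):
    the pixel group of width w behind each lenslet is projected through
    a lens of focal length f, giving theta = 2 * arctan(w / (2 * f))."""
    return math.degrees(2 * math.atan(pixel_group_width_mm / (2 * focal_length_mm)))

# Same pixel group width, two focal lengths: increasing f shrinks the
# viewing angle, illustrating the DOF / viewing-angle trade-off.
for f in (2.0, 6.0):  # hypothetical focal lengths in mm
    print(f"f = {f} mm -> viewing angle = {viewing_angle_deg(1.0, f):.1f} deg")
```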
Quantitative optical imaging techniques represent a new and highly promising approach to identifying cellular biomarkers, particularly when combined with artificial intelligence (AI) technologies, for scientific, industrial, and most importantly biomedical applications. Among these techniques, digital holographic microscopy (DHM) has recently emerged as a powerful method well suited to non-invasively exploring cell structure and dynamics with nanometric axial sensitivity, and hence to identifying new cellular biomarkers. This overview paper explains how DHM can be used to perform label-free phenotypic cellular assays. It further describes AI and deep learning pipelines for the development of an intelligent DHM that performs optical phase measurement, phase image processing, feature extraction, and classification. In addition, it offers some perspective on the use of the intelligent DHM in biomedical fields and shows its great potential for biomedical applications.
We present off-axis holography based on the intensity correlation of randomly scattered light, together with initial experimental results. The hologram is recorded in the intensity correlations rather than in the intensity itself, and numerical reconstruction is then applied to recover the complex fields encoded in the hologram. The performance of this technique is examined by recording off-axis holograms from the intensity correlation of laser speckles, yielding better reconstruction quality and a larger field of view. This technique may find applications in wide-field imaging and microscopy in the presence of randomness.
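For context, the numerical reconstruction step for an off-axis hologram is typically a Fourier sideband filter, applied here to the correlation-domain hologram just as it would be to an intensity hologram. A minimal sketch follows; the carrier location and window size are hypothetical inputs, not the authors' exact parameters.

```python
import numpy as np

def reconstruct_off_axis(hologram: np.ndarray, carrier: tuple[int, int], win: int) -> np.ndarray:
    """Standard off-axis reconstruction: FFT the hologram, crop the
    +1-order sideband around the carrier frequency, re-center it, and
    inverse-FFT to recover the complex object field."""
    F = np.fft.fftshift(np.fft.fft2(hologram))
    cy, cx = carrier                     # carrier peak in the shifted spectrum
    sideband = F[cy - win:cy + win, cx - win:cx + win]
    centered = np.zeros_like(F)
    h, w = F.shape
    centered[h//2 - win:h//2 + win, w//2 - win:w//2 + win] = sideband
    return np.fft.ifft2(np.fft.ifftshift(centered))
```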
This paper presents an overview of a previously published work on the performance comparison of different sensors (visible, LWIR, and LiDAR-based imaging systems) for the task of object detection and classification in the presence of degradations such as fog and partial occlusions. Three-dimensional integral imaging has been shown to improve the detection accuracy of object detectors operating in both the visible and LWIR domains. As fog affects the image quality of different sensors in different ways, we have trained deep learning detectors for each sensor for 2D imaging as well as 3D integral imaging to compare the performance of the sensors in the presence of such degradations.
Traditionally, a three-dimensional point cloud contains only points that are representative of the object's exterior surface. Some point cloud generation methods erroneously create interior points beneath the object's surface, which must be removed to prepare a point cloud for surface reconstruction. Our method utilizes spherical coordinates to establish a relative interior-exterior relationship. We outline and demonstrate a technique to procedurally identify multiple origins relative to the object; we then use these origins to remove all interior points within an azimuth and elevation swath. We demonstrate our method on a dataset of atypical models containing interior points.
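The following is a minimal sketch of the angular-swath idea for a single origin (the paper identifies multiple origins procedurally); the bin counts and keep-farthest rule are assumptions made for illustration.

```python
import numpy as np

def remove_interior(points: np.ndarray, origin: np.ndarray,
                    n_az: int = 360, n_el: int = 180) -> np.ndarray:
    """For each (azimuth, elevation) bin around `origin`, keep only the
    farthest point; closer points in the same angular swath are treated
    as interior and removed."""
    d = points - origin
    r = np.linalg.norm(d, axis=1)
    az = np.arctan2(d[:, 1], d[:, 0])                                # [-pi, pi]
    el = np.arcsin(np.clip(d[:, 2] / np.maximum(r, 1e-12), -1, 1))   # [-pi/2, pi/2]
    ia = np.clip(((az + np.pi) / (2 * np.pi) * n_az).astype(int), 0, n_az - 1)
    ie = np.clip(((el + np.pi / 2) / np.pi * n_el).astype(int), 0, n_el - 1)
    bins = ia * n_el + ie
    best = {}                        # bin -> index of max-radius point
    for i in np.argsort(r):          # ascending radius; last write wins
        best[bins[i]] = i
    keep = np.zeros(len(points), dtype=bool)
    keep[list(best.values())] = True
    return points[keep]
```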
Acquiring information such as the shape, height, and image of a target surrounded by scattering obscurants is a challenging problem involving several random processes. In this work, we approach and compare this problem using theoretical and experimental results from the speckle correlation method. The method involves illuminating the target with random fields generated by Gaussian and perfect optical vortex (POV) beams to evaluate the orientation of the target. We show that the orientation of the object can be obtained from vortex speckles, whereas Gaussian speckles provide less information about orientation. Additionally, we demonstrate that POV speckles are more sensitive to the edges of a target than Gaussian speckles.
While offering promising applications in augmented reality (AR), Maxwellian-based AR displays face a critical limitation due to their constrained exit pupil size, causing discomfort to wearers and vulnerability to image loss upon minor misalignments. To address this issue, we propose a method utilizing a Raman-Nath holographic grating for a Maxwellian waveguide-based display, aiming to expand the exit pupil effectively. It involves the development of two-dimensional gratings through the peristrophic multiplexing technique in the Raman-Nath regime. By spatially separating the outcoupled converging beam using the multiplexed grating, the exit pupil is expanded in both the horizontal and vertical directions. This enlarges the eye box in two dimensions, enhancing user comfort and mitigating the risk of image loss due to misalignment. Through experimental validation and optical simulation, we demonstrate the feasibility and effectiveness of the proposed method for Maxwellian waveguide displays in AR applications.
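As background on the regime named above, a thin sinusoidal phase grating operating in the Raman-Nath regime diffracts into many orders simultaneously, with the textbook efficiency of order m given by J_m(phi)^2 for modulation depth phi. The quick numerical check below illustrates that regime only; it is not the paper's multiplexed two-dimensional grating design, and the modulation depth is hypothetical.

```python
import numpy as np
from scipy.special import jv

# Raman-Nath (thin) sinusoidal phase grating: diffraction order m has
# efficiency J_m(phi)^2, where phi is the phase modulation depth.
phi = 1.8  # hypothetical modulation depth in radians
for m in range(-2, 3):
    print(f"order {m:+d}: efficiency = {jv(m, phi)**2:.3f}")
```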
Polarization, a fundamental property of light, is a useful parameter for understanding the complex optical response of a system and for unraveling unique properties that are otherwise missed. Usually, polarization measurement demands multiple acquisitions, which is not appropriate for live imaging. In this paper, we discuss and present some of our recent work on the development of a polarization digital holographic microscope for spatially resolved, label-free imaging. Our emphasis is on single-shot polarization imaging and its possible applications in live cancer cell imaging.
In this review paper, we present a previously proposed approach for the estimation of the degree of polarization under low-illumination conditions. To avoid saturation problems, zeros in the denominator of the degree-of-polarization calculation are excluded, which changes the photon-detection distribution from a Poisson to a truncated Poisson distribution. 3D polarimetric imaging experiments were conducted via light field imaging under low-illumination environments to verify the proposed approach.
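A minimal simulation of the zero-exclusion step is sketched below, assuming the common two-channel definition DoP = (I_par - I_perp) / (I_par + I_perp) and hypothetical mean photon counts; the paper's exact estimator and light-field geometry are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated photon-counting images for two orthogonal polarization
# channels under low light (hypothetical mean counts per pixel).
i_par = rng.poisson(lam=1.5, size=(256, 256)).astype(float)
i_perp = rng.poisson(lam=0.5, size=(256, 256)).astype(float)

den = i_par + i_perp
valid = den > 0                   # exclude zero denominators: the remaining
                                  # counts follow a truncated Poisson law
dop = (i_par[valid] - i_perp[valid]) / den[valid]
print(f"mean DoP over valid pixels: {dop.mean():.3f}")
```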
Owing to its high resolution, sensitivity, imaged field of view, and frame-rate acquisition, digital holographic microscopy (DHM) stands out among quantitative phase imaging (QPI) techniques for reconstructing high-resolution phase images of micrometer-sized samples, providing information about the sample's topography and refractive index. Despite the successful performance of DHM systems, their applicability to in-situ clinical research has been partially hampered by the lack of a standard phase reconstruction algorithm that provides quantitative phase distributions without any phase distortion. This invited talk overviews current advances in computational DHM reconstruction, from semi-heuristic to learning-based approaches.
We present an overview of previously reported Single Random Phase Encoding (SRPE) and Double Random Phase Encoding (DRPE) optical bio-sensing systems. In contrast to traditional imaging modalities that rely on lenses to capture and magnify subjects, SRPE and DRPE employ phase masks to modulate the light field emanating from an object. This modulation results in a pseudo-random optical signal at the sensor, which is then classified by an appropriate classification algorithm. This lensless paradigm not only reduces the physical bulk and expense associated with optical components but also provides a wide field of view and enhanced depth of field in comparison with lens-based imaging systems. In biomedical imaging, the application of SRPE and DRPE systems holds significant promise for distinguishing between various types of red blood cells (RBCs) for disease diagnosis. Specifically, these imaging systems have demonstrated remarkable efficacy in identifying horse and cow RBCs, as well as in differentiating between sickle cell-positive and sickle cell-negative RBCs, with high accuracy and robustness to noise. Convolutional Neural Networks (CNNs) trained directly on captured opto-biological signature (OBS) images show significant robustness to noise. Training a CNN on the Local Binary Patterns (LBP) of captured OBS images has not only improved classification performance but also maintained accuracy under significant data compression.
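A minimal sketch of the LBP preprocessing step using scikit-image follows; the OBS frame, pattern parameters, and downstream CNN are placeholders, not the authors' exact pipeline.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def obs_to_lbp(obs_image: np.ndarray, P: int = 8, R: float = 1.0) -> np.ndarray:
    """Map a captured opto-biological signature (OBS) image to its
    uniform Local Binary Pattern map, the representation reported to
    improve robustness to noise and data compression."""
    return local_binary_pattern(obs_image, P, R, method="uniform")

# hypothetical OBS frame; in practice this is the sensor capture
obs = (np.random.rand(128, 128) * 255).astype(np.uint8)
lbp = obs_to_lbp(obs)
print(lbp.shape)   # the LBP map is what gets fed to the CNN
```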
This paper presents an overview of previously published reports on three-dimensional (3D) profilometry for visualization of objects using integral imaging (InIm). The method was initially proposed for imaging in free space to map the color, depth, and texture of multiple objects onto a 3D surface. The advantage of 3D profilometric reconstruction over conventional InIm planar reconstruction is that the former can optically slice the entire object depth within a range of voxels, whereas the latter presents the depth of a single plane. Later, this method was studied for flexible-sensing integral imaging with occlusion removal. This work, which combined two-view geometry for camera pose estimation with 3D occluded object recognition, demonstrated that 3D profilometric reconstruction not only mitigates the effects of occlusion but is also perceptually better than conventional InIm reconstruction.
In this undergraduate research work, we studied 3D optical sensing and volumetric computational reconstruction algorithms based on conventional integral imaging and axially distributed sensing architectures. Imaging sensors are distributed along the x, y, and z dimensions for multi-perspective sensing, and corresponding reconstruction algorithms are developed for volumetric visualization. Experiments were conducted to verify the feasibility of the 3D sensing system and the developed algorithms.
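As an illustration of the conventional integral imaging reconstruction referred to above, below is a minimal one-axis shift-and-average sketch; the parameter names are hypothetical, and np.roll's wrap-around stands in for the crop-and-pad used in practice.

```python
import numpy as np

def reconstruct_plane(elemental: list[np.ndarray], pitch_mm: float,
                      focal_mm: float, pixel_mm: float, z_mm: float) -> np.ndarray:
    """Shift-and-average computational reconstruction for a linear camera
    array: elemental image k is shifted by k * pitch * f / (z * pixel)
    pixels so that objects at depth z overlap in focus, then all images
    are averaged (out-of-plane objects blur out)."""
    out = np.zeros_like(elemental[0], dtype=float)
    for k, img in enumerate(elemental):
        shift = int(round(k * pitch_mm * focal_mm / (z_mm * pixel_mm)))
        out += np.roll(img.astype(float), shift, axis=1)  # wrap-around; crop/pad in practice
    return out / len(elemental)
```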
In this paper, we develop an algorithm for calibrating a camera array for 3D image reconstruction and depth detection. The developed algorithm uses the functionality of MATLAB's Computer Vision Toolbox and allows for the calibration of any number of cameras. The resulting intrinsic and extrinsic parameters (x, y, and z position; focal length of the lens; and pixel size of the sensor) are formatted and saved into a matrix for further computations. This increases efficiency when conducting 3D image reconstruction and depth detection. The accuracy and precision of these calculations were experimentally verified using simulated scenes created in Autodesk 3ds Max. Future work includes adding functionality to make use of rotation parameters, testing with real-world data sets, and modifications to increase the accuracy and efficiency of the calculations.
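For readers without MATLAB, an analogous per-camera checkerboard calibration in OpenCV is sketched below; this is plainly a substitute for the authors' toolbox code, the checkerboard size is hypothetical, and in an array setting the returned parameters for each camera would be stacked into the matrix described above.

```python
import cv2
import numpy as np

# Inner-corner count of a hypothetical checkerboard target.
PATTERN = (9, 6)
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

def calibrate(images):
    """Calibrate one camera from a list of grayscale checkerboard captures;
    returns the intrinsic matrix, distortion, and per-view extrinsics."""
    obj_pts, img_pts, size = [], [], None
    for img in images:
        found, corners = cv2.findChessboardCorners(img, PATTERN)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
            size = img.shape[::-1]
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K, dist, rvecs, tvecs
```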
We overview a polarimetric integral imaging system for optical signal sensing and imaging. The proposed system is demonstrated to enhance signal detection and visualization in turbid water. For optical signal detection, a temporally encoded optical signal is recorded using single-shot polarimetric integral imaging and detected using nonlinear correlation. We also overview an integral imaging-based polarization dehazing method for polarization-based image recovery in turbid and occluded media. Reconstruction based on integral imaging reduces noise and improves the estimation of the intermediate parameters required for polarization-based image recovery. The overviewed systems enhance detection capabilities compared to conventional 2D imaging methods.
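The detection step referred to above uses nonlinear correlation; a common variant is the kth-law correlator sketched below. The value of k and the 1D signal form are assumptions for illustration, not necessarily the authors' exact choice.

```python
import numpy as np

def kth_law_correlation(signal: np.ndarray, reference: np.ndarray, k: float = 0.3) -> np.ndarray:
    """kth-law nonlinear correlation: keep the cross-spectrum phase but
    raise its magnitude to the power k (k = 1 is linear correlation;
    smaller k sharpens the correlation peak)."""
    S = np.fft.fft(signal)
    R = np.fft.fft(reference)
    cross = S * np.conj(R)
    nl = np.abs(cross) ** k * np.exp(1j * np.angle(cross))
    return np.abs(np.fft.ifft(nl)) ** 2
```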
In this paper, we overview a previously reported underwater signal detection system using 1D integral imaging convolutional neural networks (1DInImCNN). The system comprises cameras arranged in a one-dimensional configuration for optical signal collection and the 1DInImCNN approach for signal detection. The 1D camera array captures the spatial and temporal information, which is encoded using a Gold code and transmitted by a light-emitting diode (LED). Various turbidities and occlusions are created in a water tank to test the performance of the proposed method under such degradations. The 1DInImCNN method is compared to a previously proposed 3D integral imaging (3D InIm) approach with a convolutional neural network (CNN) and bidirectional long short-term memory (Bi-LSTM) network. The results suggest that the 1DInImCNN-based approach outperforms the 3D InIm with CNN-BiLSTM approach in terms of computational cost and detection performance.
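Gold codes, used above to temporally encode the LED signal, are generated by XOR-ing shifted copies of a preferred pair of m-sequences. A minimal sketch for length-31 codes follows, assuming a classic degree-5 preferred pair; the code length and modulation details used in the paper are not specified here.

```python
import numpy as np

def m_sequence(taps: list[int], degree: int, length: int) -> np.ndarray:
    """Binary m-sequence from a Fibonacci LFSR; taps list the polynomial
    exponents (e.g. [5, 2] for a degree-5 primitive polynomial)."""
    state = [1] * degree
    out = []
    for _ in range(length):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return np.array(out, dtype=np.uint8)

# Classic degree-5 preferred pair (octal 45 / 75 family): XOR-ing all
# relative shifts of the pair, plus the pair itself, yields 2^5 + 1 = 33 codes.
N = 31
a = m_sequence([5, 2], 5, N)
b = m_sequence([5, 4, 3, 2], 5, N)
gold_family = [a ^ np.roll(b, s) for s in range(N)] + [a, b]
print(len(gold_family), "Gold codes of length", N)
```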
We present an overview of 3D integral imaging-based human gesture recognition under degraded environments and a comparison of its performance with that of RGB-D sensors. In this work, we considered the problem of continuous gesture recognition under degradations such as partial occlusion. 3D integral imaging improves recognition under these degradations, as demonstrated through experimental results. Additionally, its performance is better than that of RGB-D sensors under the experimental conditions considered.
In this paper, we overview previously published works on the robustness of a diffuser-based single random phase encoding (SRPE) lensless imaging system to sensor parameters such as pixel size and number of pixels. Lensless imaging systems are cheaper, more compact, and more portable than their lens-based counterparts due to the absence of expensive and bulky optical elements such as lenses. Our recent work has shown that the performance of an SRPE system does not suffer appreciably as the pixel size of the sensor increases and the number of pixels decreases. For example, we have shown that reducing the number of sensor pixels by orders of magnitude does not appreciably affect the deep neural network-assisted classification accuracy of SRPE systems, providing many benefits in terms of data processing and storage. In addition, the lateral resolution of the SRPE system is robust to reducing the number of pixels and increasing the pixel size. Our results indicate that SRPE systems may be more advantageous than their lens-based counterparts in computationally constrained environments.
In this work, we present an overview of previously published work on the identification of COVID-19-affected red blood cells (RBCs) and sickle cell disease based on reconstructed phase profiles using a deep learning framework. Video holograms of thin blood smears were recorded using a compact, low-cost, field-portable, 3D-printed shear-based digital holographic system. Individual cells were segmented from the holograms, and each frame was then reconstructed to extract the spatio-temporal signatures of the cells. Morphology-based features, along with motility-based features extracted from the reconstructed phase images, were fed to a Bi-LSTM network to classify COVID-19-positive versus healthy red blood cells. Subjects were then classified as healthy or diseased based on a majority vote over their cells.
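A minimal sketch of such a Bi-LSTM classifier in PyTorch follows, with hypothetical feature and sequence sizes; the authors' feature set and training details are not reproduced.

```python
import torch
import torch.nn as nn

class CellBiLSTM(nn.Module):
    """Bi-LSTM over per-frame feature vectors (hypothetical sizes):
    morphology + motility features per reconstructed frame in,
    healthy / COVID-positive logits out."""
    def __init__(self, n_features: int = 16, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                  # x: (batch, frames, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # classify from the final time step

model = CellBiLSTM()
logits = model(torch.randn(4, 30, 16))     # 4 cells, 30 frames, 16 features
print(logits.shape)                         # torch.Size([4, 2])
```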
In this undergraduate research project, we use LiDAR mapping for object detection and combine AI and computer vision algorithms to enable robots to safely drive a vehicle in a given environment. AI and computer vision technologies allow the robot to identify lanes and intersections, enabling vehicle navigation, while LiDAR mapping quickly and accurately determines the distance between the vehicle and objects entering a specific area. This capability allows the robot to temporarily stop the vehicle, preventing collisions with objects. Through these technologies, our goal is to prevent collisions that may occur during driving, ensuring pedestrian safety and enabling safe robot-driven vehicle operation in crowded places. Simulations and tests have been conducted to verify the proposed methods.
This paper clarifies that gaps in an aerial image, caused by the gaps between multiple retro-reflectors used for large aerial images, can be perceptually filled in by binocular vision in AIRR (aerial imaging by retro-reflection). AIRR requires a large retro-reflector to increase the size of the aerial image or to enlarge the field of view. To increase the retro-reflector size, multiple retro-reflectors must be connected, but the seam at the connected part causes a gap in the aerial image. With binocular vision, however, this gap is perceived as filled in, and the entire image can be perceived. Moreover, the maximum gap width that is perceptually filled in is shown to depend on the viewing distance and the depth position of the retro-reflector. Thus, the gap problem can be resolved, increasing the flexibility of AIRR configurations, for example by easing the tiling of retro-reflectors when displaying large aerial images or by composing new optical systems that exploit a gap.
Accurate object detection and depth estimation are critical for a variety of applications such as autonomous driving and robotics. In the context of object avoidance, one may use a LiDAR sensor to determine the position of nearby objects; but, due to a lack of resolution, these sensors cannot be used to accurately categorize and label the detected object. In contrast, RGB cameras provide rich semantic information, which can be used to categorize and segment an object, but cannot provide accurate depth data. To overcome this, many algorithms have been developed that fuse the two sensors, among others, allowing for accurate depth detection and segmentation of a given object. The problem with many of these systems is that they are complex and create 3D bounding boxes, which can result in an agent taking a suboptimal path due to the size of the perceived object. The approach proposed in this paper simply determines the position of an object in an RGB image using a CNN and then translates the center pixel of the 2D bounding box into the point cloud to identify and segment the corresponding point cluster.
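A minimal sketch of the center-pixel-to-cluster step follows, assuming the LiDAR points are already in the camera frame with intrinsics K; the extrinsic transform, pixel radius, and clustering settings are placeholders, and DBSCAN stands in for whichever clustering the paper uses.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def segment_at_bbox_center(points_cam: np.ndarray, K: np.ndarray,
                           center_px: tuple[float, float],
                           radius_px: float = 10.0, eps: float = 0.3) -> np.ndarray:
    """Project camera-frame LiDAR points through intrinsics K, pick the
    points that land near the bounding-box center pixel, and return the
    largest DBSCAN cluster among them."""
    front = points_cam[:, 2] > 0                  # points in front of the camera
    uv = (K @ points_cam[front].T).T
    uv = uv[:, :2] / uv[:, 2:3]                   # perspective divide
    near = np.linalg.norm(uv - np.asarray(center_px), axis=1) < radius_px
    candidates = points_cam[front][near]
    if len(candidates) == 0:
        return candidates
    labels = DBSCAN(eps=eps, min_samples=5).fit_predict(candidates)
    good = labels[labels >= 0]
    if len(good) == 0:
        return candidates
    return candidates[labels == np.bincount(good).argmax()]
```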
In this undergraduate research work, we present studies on computational volumetric reconstruction and depth detection approaches. The computational integral imaging algorithm provides 3D reconstructed images that include both in-focus and out-of-focus pixels. Using two image analysis indicators, peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM), we discuss the accuracy and performance of 3D depth detection by comparing 2D captured images with 3D reconstructed images along a depth range.
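A minimal sketch of this comparison using scikit-image's metrics is shown below; how the two indicators are combined here is an assumption, and the reconstruction stack and depth list are placeholders.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def detect_depth(captured_2d: np.ndarray, recon_stack: list[np.ndarray],
                 depths_mm: list[float]) -> float:
    """Score each reconstructed plane against the 2D capture with PSNR and
    SSIM; the depth whose reconstruction matches best is taken as the
    detected object depth (images assumed normalized to [0, 1])."""
    scores = []
    for recon in recon_stack:
        p = peak_signal_noise_ratio(captured_2d, recon, data_range=1.0)
        s = structural_similarity(captured_2d, recon, data_range=1.0)
        scores.append(p + 20 * s)   # hypothetical weighting of the two metrics
    return depths_mm[int(np.argmax(scores))]
```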
The design and development of an eye-hand coordination assisting device was directed toward use by individuals with Parkinson's disease or cerebral palsy. The application is not limited to individuals with neuromuscular diseases; it could also be extended to others who wish to improve their hand-arm coordination skills, such as surgeons, soldiers, and student drivers. With this system, the user was prompted to stand in front of a large touch screen displaying a LabVIEW program containing three target buttons. A web camera, together with a customized LabVIEW Vision Development Module program, was used to determine where the test subject was positioned. A red-light marker was placed on the test subject, and a series of images was taken and processed to determine the marker's location in space; its unique color and intensity made it easy to locate against the noisy background. Based on where the individual was, the direction to move toward the target was displayed on a separate monitor so that the subject could complete the task successfully.
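For illustration, an OpenCV analogue of the color-and-intensity marker localization is sketched below; this is plainly a substitute for the LabVIEW Vision routine, and the HSV thresholds are hypothetical.

```python
import cv2
import numpy as np

def find_red_marker(frame_bgr: np.ndarray):
    """Locate a bright red marker: threshold in HSV (red wraps around
    hue 0, so two ranges are combined) and return the centroid of the
    largest blob, or None if nothing is found."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 120), (180, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)
    m = cv2.moments(c)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # (x, y) in pixels
```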
Currently, three-dimensional (3D) light field displays (LFDs) based on integral imaging (InIm) are among the most interesting technologies in the field of 3D displays. The InIm principle consists of two key stages: the capture and the reconstruction of the light field describing a specific 3D scene. However, these stages are two distinct processes requiring different tools and resources, making the evaluation of InIm-based 3D LFDs a laborious and time-consuming task. To address these problems, we propose an end-to-end simulation model, developed in the commercial lens design software Ansys Zemax OpticStudio, that integrates the two stages of InIm to facilitate the evaluation of the entire 3D image formation process. This work aims to provide a preliminary solution by ensuring that the targeted specifications are checked and achieved before embarking on the development of a costly and time-consuming prototype.
NIST and multiple industrial stakeholders are leading and supporting multiple efforts to develop standards for 3D imaging systems for manufacturing automation applications. Many manufacturers specify the performance of their sensors in non-standard ways and offer no method to verify those parameters independently. The standards are being developed under the auspices of the ASTM E57 committee on 3D Imaging Systems. They are meant to produce a) standards for measuring the performance of 3D imaging systems, b) standards for bin-picking vision systems, and c) guidelines for the selection of 3D imaging systems. This work presents the status of four work items.
This paper proposes a complete bibliometric and text mining analysis of 4Pi microscopy research, from its outset in 1992 to 2023. The study of data retrieved from Scopus through targeted query templates aims to provide comprehensive insight into the publication trends, citation patterns, and thematic evolution of 4Pi microscopy. Using analytical software such as Bibliometrix, Biblioshiny, and VOSviewer, the study not only quantifies the growth and influence of 4Pi microscopy across scientific disciplines but also qualitatively analyses the emergence of research themes, methods, and collaborative networks within the field. The results emphasize the multidisciplinary character of 4Pi microscopy, its important impact on technological progress and the life sciences, and its expanding role in solving complex research problems. The analysis shows a dynamic research field with constant growth in the number of publications, diversification of research applications, and a shift in focus areas over time. This study maps the trajectory of 4Pi microscopy research and provides valuable insights for academics, policymakers, and practitioners, creating a platform for future research directions and interdisciplinary collaborations to realize the full potential of 4Pi microscopy in scientific discovery and innovation.