The Normalized Difference Vegetation Index (NDVI) effectively reflects the growth state and spatial distribution of vegetation and can be used to study vegetation growth, regional coverage, and drought response. Traditionally, NDVI is acquired from passive remote sensing imagery, which provides limited information. The emergence of multispectral LiDAR offers a new way to obtain NDVI, as it captures rich spectral information alongside three-dimensional spatial information. In this paper, a two-channel multispectral LiDAR system is constructed in which lasers at 650 nm and 800 nm are used to detect leaves from various plants in different health states. The echo intensities at these two wavelengths are collected, and the corresponding NDVI values are calculated. It is found that the longer leaves have been detached from their plants, the closer their NDVI values approach 0. The range of NDVI varies with tree species, but the variation trend is the same, and the highest NDVI of freshly picked leaves is about 0.89. In addition, a multi-channel high-speed acquisition card is used to collect data, and the time of flight (TOF) is batch-averaged to obtain the objects' distance. The two-channel multispectral LiDAR can therefore detect plant health state and obtain spatial distance information simultaneously. This study not only verifies the effectiveness of the two-channel multispectral LiDAR system for obtaining NDVI, but also has constructive significance for studying short-period vegetation growth status and for forest terrain construction applications.
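As a worked illustration of the quantities described above, the sketch below computes NDVI from the two echo intensities (800 nm as the near-infrared channel, 650 nm as the red channel) and recovers range from a batch-averaged time of flight. The function names and example values are illustrative, not taken from the paper's implementation.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def ndvi(echo_800nm, echo_650nm, eps=1e-12):
    """NDVI from dual-channel echo intensities: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(echo_800nm, dtype=float)
    red = np.asarray(echo_650nm, dtype=float)
    return (nir - red) / (nir + red + eps)

def range_from_tof(tof_seconds):
    """Round-trip time of flight -> one-way distance, averaged over a batch of pulses."""
    tof = np.asarray(tof_seconds, dtype=float)
    return C * tof.mean() / 2.0

# Example: a healthy leaf reflects strongly at 800 nm and weakly at 650 nm.
print(ndvi(0.85, 0.05))              # ~0.89, in line with the freshly picked leaves above
print(range_from_tof([66.7e-9] * 8)) # ~10 m target, batch-averaged over 8 pulses
```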
Single-vehicle light detection and ranging (LiDAR) has limitations in capturing comprehensive environmental information. The advancement of vehicle-to-infrastructure (V2I) collaboration presents a potent solution to this challenge. During collaboration, point cloud registration precisely aligns data from multiple LiDARs, effectively mitigating the constraints of data collected by a single-vehicle LiDAR and furnishing autonomous vehicles with a more comprehensive and dependable understanding of the environment. In practical scenarios, LiDARs of different types and performance levels are deployed, which makes heterogeneous point cloud registration necessary, and there is still considerable room for improvement. Consequently, we introduce a coarse-to-fine approach to heterogeneous point cloud registration (C2F-HPCR), establishing the first benchmark for point cloud registration in complex vehicle-infrastructure collaboration contexts. C2F-HPCR acquires an initial registration matrix through its coarse registration module. Subsequently, it uses the overlap estimation module to extract overlapping points between the two point clouds. These points are fed into the fine registration module to obtain the final registration matrix. Experiments on the DAIR-V2X-C dataset show that C2F-HPCR achieves a registration recall of 72.99% on heterogeneous point clouds, demonstrating strong performance and enabling efficient registration of vehicle-side and infrastructure-side point clouds. The code is available at https://github.com/916718212/C2F-HPCR
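The coarse-to-fine flow described above can be sketched as follows. This is a toy stand-in, not the learned C2F-HPCR modules from the linked repository: centroid alignment plays the role of the coarse registration module, a nearest-neighbour distance threshold plays the role of the overlap estimation module, and a closed-form Kabsch/SVD step plays the role of the fine registration module.

```python
import numpy as np
from scipy.spatial import cKDTree

def coarse_register(src, dst):
    """Coarse stage: translate the source centroid onto the target centroid."""
    T = np.eye(4)
    T[:3, 3] = dst.mean(axis=0) - src.mean(axis=0)
    return T

def apply_transform(T, pts):
    return pts @ T[:3, :3].T + T[:3, 3]

def estimate_overlap(src, dst, T_init, radius=1.0):
    """Overlap stage: keep source points with a target neighbour within `radius`."""
    tree = cKDTree(dst)
    d, idx = tree.query(apply_transform(T_init, src))
    mask = d < radius
    return src[mask], dst[idx[mask]]

def fine_register(src_ov, dst_ov):
    """Fine stage: closed-form rigid alignment (Kabsch) on the overlapping pairs."""
    sc, dc = src_ov.mean(axis=0), dst_ov.mean(axis=0)
    H = (src_ov - sc).T @ (dst_ov - dc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, dc - R @ sc
    return T

def c2f_sketch(vehicle_cloud, infrastructure_cloud):
    """Coarse transform -> overlap extraction -> refined final transform."""
    T_init = coarse_register(vehicle_cloud, infrastructure_cloud)
    src_ov, dst_ov = estimate_overlap(vehicle_cloud, infrastructure_cloud, T_init)
    return fine_register(src_ov, dst_ov)
```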
Fusing LiDAR point clouds and camera images for 3D object detection in autonomous driving has emerged as a captivating research avenue. The core challenge of multimodal fusion is how to seamlessly fuse the 3D LiDAR point cloud with the 2D camera image. Although current approaches show promising results, they often rely on fusion at only the data level, feature level, or object level, leaving room for improvement in the utilization of multimodal information. We present an advanced and effective multimodal fusion framework, EPAWFusion, which fuses the 3D point cloud and the 2D camera image at both the data level and the feature level. EPAWFusion consists of three key modules: a point enhancement module based on semantic segmentation for data-level fusion, an adaptive weight allocation module for feature-level fusion, and a detector based on 3D sparse convolution. Semantic information is extracted from the 2D image via semantic segmentation, and the calibration matrix is used to establish the point-pixel correspondence. The semantic and distance information are then attached to the point cloud to achieve data-level fusion. The geometric features of the enhanced point cloud are extracted by voxel encoding, and the texture features of the image are obtained using a pretrained 2D CNN. Feature-level fusion is achieved via the adaptive weight allocation module, and the fused features are fed into the 3D sparse convolution-based detector to obtain accurate 3D detections. Experimental results demonstrate that EPAWFusion outperforms the baseline network MVXNet on the KITTI dataset for 3D detection of cars, pedestrians, and cyclists by 5.81%, 6.97%, and 3.88%, respectively. EPAWFusion also performs well for single-vehicle-side 3D object detection on the DAIR-V2X dataset, and the inference frame rate of the proposed model reaches 11.1 FPS. The two-level fusion of EPAWFusion significantly enhances the performance of multimodal 3D object detection.
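The data-level fusion step described above (point-pixel correspondence via the calibration matrix, followed by attaching semantics and distance to each point) can be sketched as below. The combined LiDAR-to-image projection matrix and the per-pixel semantic scores are assumed to be precomputed; the names and shapes are illustrative rather than EPAWFusion's actual interface.

```python
import numpy as np

def enhance_point_cloud(points_xyz, semantic_scores, lidar_to_img):
    """
    Data-level fusion sketch: attach image semantics and range to each LiDAR point.

    points_xyz      : (N, 3) LiDAR points.
    semantic_scores : (H, W, C) per-pixel class scores from a 2D segmentation network.
    lidar_to_img    : (3, 4) combined calibration matrix projecting LiDAR coordinates
                      to pixel coordinates (assumed precomputed from the calibration files).
    Returns an (M, 3 + C + 1) array of [xyz, semantic scores, distance] for the
    points that project inside the image.
    """
    H, W, C = semantic_scores.shape
    n = points_xyz.shape[0]
    homo = np.hstack([points_xyz, np.ones((n, 1))])   # homogeneous LiDAR coordinates
    proj = homo @ lidar_to_img.T                      # (N, 3) projective image coordinates
    u = proj[:, 0] / proj[:, 2]
    v = proj[:, 1] / proj[:, 2]
    valid = (proj[:, 2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    u, v = u[valid].astype(int), v[valid].astype(int)
    pts = points_xyz[valid]
    sem = semantic_scores[v, u]                            # point-pixel correspondence
    rng = np.linalg.norm(pts, axis=1, keepdims=True)       # per-point distance information
    return np.hstack([pts, sem, rng])
```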