KEYWORDS: Scattering, Radar, Point clouds, Data processing, 3D modeling, Voxels, Extremely high frequency, Error analysis, Light scattering, Feature extraction
In the current field of autonomous driving, millimeter-wave radar serves as an important complement to optical sensors in Simultaneous Localization and Mapping (SLAM). Because it can penetrate visual obstacles such as dense smoke, millimeter-wave radar has become a key tool for localization and navigation in adverse weather such as rain and snow. In particular, the emergence of 4D millimeter-wave radar has extended point cloud data from two dimensions to three. However, research on 4D millimeter-wave radar in SLAM is still scarce. Owing to its limited resolution and point cloud density, 4D millimeter-wave radar struggles to extract geometric features such as edges and planes, so contemporary approaches predominantly rely on spatial statistical distributions. These methods, however, do not adequately exploit scattering features in synergy with the SLAM process. This paper proposes a SLAM algorithm based on a scattering angle feature model. In the algorithm's front end, scattering angle feature constraints are introduced to enhance semantic information recognition during scan matching. A computational method for three-dimensional scattering angle features is presented and applied in front-end scan matching. Finally, 4D millimeter-wave radar data collected from real-world scenarios verify that scattering angle features improve SLAM performance and enhance the accuracy of pose estimation.
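The abstract does not specify how the scattering angle feature is computed. As a generic illustration only, one common geometric proxy is the angle between the radar line of sight and a local surface normal estimated by PCA over each point's nearest neighbors; the function below is a minimal sketch under that assumption and is not the paper's actual feature model.

```python
import numpy as np

def scattering_angle_features(points, sensor_origin, k=8):
    """Per-point scattering-angle proxy for a radar point cloud.

    points: (N, 3) array of 3D radar points.
    sensor_origin: (3,) radar position.
    For each point, a local surface normal is estimated via PCA on its
    k nearest neighbors; the returned value is the angle between the
    line of sight and that normal. Illustrative sketch only.
    """
    n = len(points)
    angles = np.empty(n)
    for i in range(n):
        # k nearest neighbors (including the point itself).
        d = np.linalg.norm(points - points[i], axis=1)
        nbrs = points[np.argsort(d)[:k]]
        # Normal = eigenvector of the smallest covariance eigenvalue.
        w, v = np.linalg.eigh(np.cov(nbrs.T))
        normal = v[:, 0]
        los = points[i] - sensor_origin
        los /= np.linalg.norm(los)
        # abs() folds the sign ambiguity of the estimated normal.
        c = abs(normal @ los)
        angles[i] = np.arccos(np.clip(c, -1.0, 1.0))
    return angles
```

For points lying on a plane viewed nearly head-on, the computed angles are close to zero; grazing viewing geometry drives them toward π/2.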
In recent years, multiclass target detection in remote sensing images has become a popular research topic with wide military and civilian applications. Multiclass-oriented target detection in remote sensing images presents the following challenges: small densely parked targets (SDPT), multidirectionality, an interclass unbalanced number of samples (ICUNS), and hard example detection, all of which degrade detection performance. To address these problems, we propose a multiclass-oriented target detection method for optical remote sensing images. First, in the detection stage, an oriented bounding box (OBB) is used to predict the angle of the target, which overcomes the problems of SDPT and multidirectionality, and a cascade refined module is proposed to counter the network performance degradation caused by using the OBB. Second, the smooth L1 loss function, extended with an angle parameter, completes OBB regression and improves network performance. Finally, gradient harmonized mechanism loss is applied to the OBB to address ICUNS and hard example detection. Experiments conducted on the DOTA public optical remote sensing dataset show that this method is effective for multiclass-oriented target detection in optical remote sensing images.
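The abstract states only that smooth L1 loss is extended with an angle parameter for OBB regression; the sketch below shows one plausible form, where the box offsets (dx, dy, dw, dh) gain a fifth angle offset and the angle difference is wrapped to a half-period so equivalent orientations are not penalized. The wrapping choice and parametrization are assumptions, not the paper's exact formulation.

```python
import numpy as np

def smooth_l1(diff, beta=1.0):
    # Standard smooth L1: quadratic near zero, linear beyond |x| = beta.
    absd = np.abs(diff)
    return np.where(absd < beta, 0.5 * absd**2 / beta, absd - 0.5 * beta)

def obb_regression_loss(pred, target, beta=1.0):
    """Smooth L1 loss over OBB offsets (dx, dy, dw, dh, dtheta).

    pred, target: (..., 5) arrays of regression offsets.
    The angle difference is wrapped into [-pi/2, pi/2) so that boxes
    differing by a half-turn are treated as identical (an assumption;
    the paper only says an angle parameter is added to smooth L1).
    """
    diff = pred - target
    diff = diff.copy()
    diff[..., 4] = (diff[..., 4] + np.pi / 2) % np.pi - np.pi / 2
    return smooth_l1(diff, beta).sum(axis=-1).mean()
```

With this form, a predicted angle offset of π relative to the target contributes zero loss, matching the orientation ambiguity of a rectangle.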