As robotics advances, visual SLAM technology continues to gain traction. Monocular-inertial SLAM is notable for its low cost and real-time capability. Researchers have introduced line features to strengthen traditional point-based SLAM in challenging scenarios such as weak texture or motion blur. However, mainstream methods, which combine the LSD line detector with the LBD descriptor, suffer from high computational demands that hinder real-time robot localization. To address this, we propose a monocular-inertial SLAM method based on an enhanced ELSED detector and IMU-aided line optical flow. Replacing LSD with the improved ELSED substantially accelerates line feature extraction, while the IMU-aided optical flow improves line tracking accuracy and matching precision. Comparative experiments against mainstream methods validate the effectiveness of the proposed approach.
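To make the IMU-aided tracking idea concrete, the sketch below shows one common way such a step can be realized: gyroscope measurements are integrated into a rotation, the infinite homography K·R·K⁻¹ predicts where line endpoints move, and pyramidal Lucas-Kanade optical flow refines the prediction. This is a minimal illustration built on OpenCV; the function names, parameters, and sign conventions are our assumptions, not the paper's actual implementation.

```python
import cv2
import numpy as np

def predict_endpoints(prev_pts, K, omega, dt):
    """Predict line-endpoint positions in the next frame from gyro data.

    Uses the infinite homography H = K @ R @ inv(K), which is exact under
    pure camera rotation and a good initial guess over one frame interval.
    prev_pts : (N, 2) endpoint pixels in the previous frame.
    omega    : (3,) mean angular velocity from the gyroscope (rad/s),
               assumed already expressed in the camera frame.
    """
    R, _ = cv2.Rodrigues((omega * dt).astype(np.float64))  # small-angle rotation
    H = K @ R @ np.linalg.inv(K)                 # rotation direction is assumed
    pts_h = np.hstack([prev_pts, np.ones((len(prev_pts), 1))])
    proj = (H @ pts_h.T).T
    return (proj[:, :2] / proj[:, 2:3]).astype(np.float32)

def track_endpoints(prev_img, next_img, prev_pts, pred_pts):
    """Refine IMU-predicted endpoints with pyramidal Lucas-Kanade flow.

    prev_img, next_img : 8-bit grayscale frames.
    """
    p0 = prev_pts.reshape(-1, 1, 2).astype(np.float32)
    p1 = pred_pts.reshape(-1, 1, 2).astype(np.float32)
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_img, next_img, p0, p1,
        winSize=(21, 21), maxLevel=3,
        flags=cv2.OPTFLOW_USE_INITIAL_FLOW)      # seed with the IMU prediction
    return next_pts.reshape(-1, 2), status.ravel().astype(bool)
```

Seeding the flow with the IMU prediction shrinks the search region, which is what makes shorter tracking windows and fewer mismatches possible under fast rotation.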
With the continuous development of digital photography and quality inspection, the demand for image stitching in practical applications is increasing. Traditional image stitching algorithms rely on a variety of hand-designed methods for feature extraction, matching, and optimization. Because these feature-based techniques depend heavily on feature extraction, they may perform poorly in scenes with few distinctive features. Supervised deep learning approaches to stitching, in turn, lack suitable datasets, and labeling data is cumbersome, which makes them unreliable in practice. The rise of unsupervised deep learning offers a new direction. We use unsupervised homography estimation to recover the geometric relationship between images, and a Stitching-Domain Transformer Layer to align feature maps, warp the images, and generate masks, which enhances the realism and continuity of the stitched result. We also employ a pre-trained deep learning model (VGG) for feature extraction and adjust the smoothness loss term to ensure smoother transitions within the stitching areas. Throughout training, we tune the number of convolutional layers, channel widths, and network depth to achieve optimal results. Experiments verify the superiority of the unsupervised learning algorithm over classic alternatives. Finally, we discuss the challenges and future applications of unsupervised deep learning in image stitching.
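The loss design can be illustrated with a short sketch: a frozen pre-trained VGG backbone supplies deep features for an appearance term, and a total-variation style smoothness term penalizes abrupt transitions in the stitched output. This is a hedged example of the general technique only; the layer cut-off, masking scheme, and weighting are assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn.functional as F
import torchvision

# Frozen pre-trained VGG-16 trunk as a fixed feature extractor
# (the weight-loading API shown is that of torchvision >= 0.13).
vgg = torchvision.models.vgg16(
    weights=torchvision.models.VGG16_Weights.DEFAULT).features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def content_loss(warped, reference, mask):
    """L1 distance between deep VGG features, restricted to the overlap.

    warped, reference : (B, 3, H, W) images, assumed ImageNet-normalized.
    mask              : (B, 1, H, W) soft overlap mask in [0, 1].
    """
    fw, fr = vgg(warped), vgg(reference)
    m = F.interpolate(mask, size=fw.shape[-2:], mode="bilinear",
                      align_corners=False)
    return (m * (fw - fr).abs()).mean()

def smoothness_loss(img):
    """Total-variation penalty encouraging gradual stitching transitions."""
    dx = (img[..., :, 1:] - img[..., :, :-1]).abs().mean()
    dy = (img[..., 1:, :] - img[..., :-1, :]).abs().mean()
    return dx + dy

# Hypothetical combined objective; the weight lam is a tunable assumption:
# total = content_loss(warped, reference, mask) + lam * smoothness_loss(stitched)
```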
The star sensor is an attitude-sensitive device for spaceflight and a critical component in the autonomous attitude determination of aerospace vehicles. Compared to other attitude sensors, it offers higher attitude accuracy, lower power consumption, smaller volume, and stronger autonomy, and it plays an important role in high-precision remote sensing, astronomical navigation, and other fields. Star extraction is an essential step in the star sensor's operation: its accuracy and the number of stars extracted directly affect sensor performance. This paper proposes a star extraction method, named IOFM-DF, that combines an Improved Optical Flow Method (IOFM) with Dynamic Filtering (DF). Building on the optical flow method, it exploits the motion characteristics of stars in both the time and space domains. Because stars and noise follow different motion trajectories, dynamic filtering suppresses the influence of image noise on star extraction. Using the statistical properties of the motion trajectories of multiple stars, the cosine distance between the motion track of an extracted point and that of the stars is computed to predict the probability that the point is a star. IOFM-DF can extract and track stars in star images with low signal-to-noise ratios. Experimental results show that IOFM-DF increases the number of extracted stars by at least 30% compared to traditional methods. This research is important for improving the accuracy and performance of star sensors.
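The cosine-distance test can be stated in a few lines: the candidate's motion track is compared against the average track of already-confirmed stars, and the similarity is mapped to a score. The sketch below is a minimal NumPy illustration; the probability mapping and any acceptance threshold are our assumptions, not the paper's formulation.

```python
import numpy as np

def star_probability(track, star_tracks):
    """Score how star-like a candidate's motion track is.

    track       : (T, 2) frame-to-frame displacements of one extracted point.
    star_tracks : (N, T, 2) displacements of N confirmed stars.
    Returns a value in [0, 1]; near 1 means the motion matches the common
    star trajectory, near 0 means the point moves like noise.
    """
    mean_track = star_tracks.mean(axis=0).ravel()   # average star motion
    v = track.ravel()
    cos_sim = np.dot(v, mean_track) / (
        np.linalg.norm(v) * np.linalg.norm(mean_track) + 1e-12)
    # Map cosine similarity from [-1, 1] to a pseudo-probability in [0, 1];
    # this mapping is an illustrative assumption.
    return 0.5 * (cos_sim + 1.0)
```

Because all stars share nearly the same apparent motion induced by the sensor's rotation, noise points with uncorrelated trajectories score low and can be filtered out.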