This PDF file contains the front matter associated with SPIE Proceedings Volume 13207, including the Title Page, Copyright information, Table of Contents, and Conference Committee information.
Examining improvised explosive devices (IEDs) and chemical, biological, radiological, and nuclear (CBRN) threats presents immediate risks to personnel and the surrounding area. Remote-controlled robots are crucial for approaching potentially hazardous objects safely. However, current camera-assisted manual control methods hinder precise manoeuvring, especially in unknown, complex environments without a direct line of sight to the robot. Additionally, navigating into the target area and examining suspicious objects at close distances are often time-consuming and resource-intensive tasks. We present a semi-automatic robotic system based on the Rosenbauer RTE Robot tracked platform equipped with an industrial-grade robotic arm. It is composed of Commercial-off-the-Shelf (COTS) components and is designed to minimise the operational burden during manoeuvring and robotic arm movements. The robot is equipped with LiDAR sensors, a GNSS receiver, an IMU, rotary encoders, and a Time-of-Flight (TOF) camera. These navigation sensors enable semi-autonomous navigation in both indoor and outdoor environments. A multi-sensor SLAM algorithm based on Factor Graph Optimisation (FGO) is used to construct a 3D point cloud of the environment. Via a user interface, an operator can set waypoints in this point cloud to which the robot navigates autonomously. As the robot navigates, it continuously scans its environment to avoid obstacles and to extend the map of the surrounding area. Furthermore, close-up inspection and surface scans are executed semi-autonomously. We use spline draping to wrap primitive user inputs around 3D shapes obtained from onboard 3D sensors. In addition, scan occlusions in the surface scan area are detected and closed automatically. This facilitates equidistant scan patterns for contamination monitoring using, e.g., a hyperspectral imager or α-radiation detectors.
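As an illustration of the spline-draping step described above, the following minimal Python sketch resamples a user-drawn 2D polyline at equal arc-length spacing and lifts each sample onto the surface reconstructed from the onboard point cloud. The KD-tree lift, function names, and parameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: wrap a primitive 2D user input onto a 3D point cloud surface
# by equidistant resampling and a nearest-neighbour height lift.
import numpy as np
from scipy.spatial import cKDTree

def drape_polyline(polyline_xy, cloud_xyz, step=0.05, k=16):
    """Resample a 2D polyline and project each sample onto the point cloud."""
    polyline_xy = np.asarray(polyline_xy, dtype=float)
    cloud_xyz = np.asarray(cloud_xyz, dtype=float)

    # resample the polyline at (roughly) equidistant arc-length steps
    seg = np.diff(polyline_xy, axis=0)
    arclen = np.concatenate([[0.0], np.cumsum(np.linalg.norm(seg, axis=1))])
    samples = np.arange(0.0, arclen[-1], step)
    xy = np.column_stack([np.interp(samples, arclen, polyline_xy[:, i]) for i in range(2)])

    # lift each sample to the local surface height using its k nearest cloud points
    tree = cKDTree(cloud_xyz[:, :2])
    _, idx = tree.query(xy, k=k)
    z = cloud_xyz[idx, 2].mean(axis=1)
    return np.column_stack([xy, z])   # equidistant 3D scan waypoints
```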
Due to the current global political situation, threats to critical maritime infrastructures such as LNG terminals, ports, offshore wind farms, subsea pipelines, and cables are increasing. The protection of these infrastructures is essential to prevent the deterioration of societal structures and to maintain essential services and functions that affect the well-being and livelihood of citizens. To counter this threat, permanent protection is necessary, which is difficult to achieve with human-operated surveillance technology for reasons of cost and personnel. One solution is to deploy a cooperating team (swarm) of unmanned surface vehicles (USVs) that coordinate with each other, independently carry out security tasks such as patrolling or surveillance, and react intelligently to suspicious events.
Within the Fraunhofer project HUGIN “Heterogeneous Unmanned Group of INtelligent surface vehicles”, the use of heterogeneous autonomous USVs for monitoring critical infrastructure and inspecting ship hulls was developed and tested. Up to three vehicles (two different platform types with heterogeneous sensor technology) were used. The team was assigned a joint task, which had to be divided into subtasks.
The subtasks were forwarded to the vehicles with the appropriate capabilities to be completed independently. The autonomous mission processing and coordination of the three USVs were investigated. The algorithms were tested in dynamic cooperative operation with parallel execution of specific subtasks. To enable the efficient, effective, and flexible operation of such systems and the integration of the operator into the overall system, we used the Management by Objective (MbO) concept. MbO for autonomous USVs operating in parallel is separated into three main aspects: vehicle autonomy, the product creation control cycle, and mission management.
Both the autonomous cooperative behavior of the vehicles, which affects vehicle autonomy as well as mission management, and the operator-independent sensor data evaluation (product generation) were further developed and tested in HUGIN.
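As an illustration of the capability-based subtask allocation described above, the following Python sketch shows one plausible greedy matching scheme; the class names, capability labels, and tie-breaking rule are assumptions for illustration, not the HUGIN implementation.

```python
# Illustrative sketch: a joint task is split into subtasks, each forwarded to a
# vehicle whose capabilities match, preferring the least-loaded vehicle.
from dataclasses import dataclass, field

@dataclass
class Vehicle:
    name: str
    capabilities: set
    assigned: list = field(default_factory=list)

def allocate(subtasks, vehicles):
    """Greedy capability matching; ties go to the least-loaded vehicle."""
    for task, required in subtasks:                    # e.g. ("hull inspection", {"sonar"})
        candidates = [v for v in vehicles if required <= v.capabilities]
        if not candidates:
            raise ValueError(f"no vehicle can perform {task!r}")
        best = min(candidates, key=lambda v: len(v.assigned))
        best.assigned.append(task)
    return {v.name: v.assigned for v in vehicles}

usvs = [Vehicle("USV-1", {"radar", "camera"}), Vehicle("USV-2", {"sonar", "camera"})]
print(allocate([("patrol sector A", {"radar"}), ("hull inspection", {"sonar"})], usvs))
```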
This paper presents the MUSAL ecosystem, which enables temporal scene modeling with multi-source data and provides a continuous scene representation at different abstraction layers. It consists of individual building blocks, each of which implements different functionalities and is orchestrated by a ROS2-based middleware. The building blocks comprise (i) data stores, which implement the actual persistence and management of the sensor data; (ii) data recorders, which connect the actual sensors with the data stores; (iii) data processors, which advertise services to process the sensor data on different abstraction levels; and (iv) a universal search engine, which provides a unified and transparent interface to the user. The functionality of the system is demonstrated through four use cases: the generation of obstacle maps and their distribution between multiple autonomous mobile robots, collaborative online 3D mapping and point cloud merging, multi-source image-based 3D reconstruction, and the monitoring of industrial facilities.
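To make the division into building blocks concrete, the minimal Python sketch below reduces the four roles to plain interfaces; in the actual ecosystem they are orchestrated via a ROS2-based middleware, and all class and method names here are illustrative assumptions rather than the MUSAL API.

```python
# Hedged sketch of the four building-block roles, reduced to plain interfaces.
from abc import ABC, abstractmethod

class DataStore(ABC):
    """Persists and manages sensor data."""
    @abstractmethod
    def put(self, key: str, payload: bytes, meta: dict): ...
    @abstractmethod
    def query(self, **filters) -> list: ...

class DataRecorder(ABC):
    """Connects a sensor stream to a data store."""
    @abstractmethod
    def record(self, sensor_msg, store: DataStore): ...

class DataProcessor(ABC):
    """Advertises a processing service on a given abstraction level."""
    @abstractmethod
    def process(self, items: list) -> list: ...

class SearchEngine:
    """Unified front end that fans a query out to all registered stores."""
    def __init__(self, stores: list):
        self.stores = stores
    def search(self, **filters) -> list:
        return [hit for s in self.stores for hit in s.query(**filters)]
```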
Automated systems are becoming widespread in many fields, e.g. transportation, exploration, defence, rescue, etc. These systems need to build a comprehensive and robust situational awareness, detailed in terms of spatial and temporal resolution. This situational awareness is based on the data provided by a suite of perception sensors (e.g. camera, LiDAR, RADAR, etc.). Due to internal and external noise factors, the quality of the sensor data can be heavily compromised.
It is impossible to test these systems and their sensor suites under all possible environmental conditions and safety-critical cases. To tackle the testing complexity and speed up the testing procedures, digital twins and models of the systems and test environments are needed to enable accelerated and thorough testing in virtual and/or mixed environments under a wide variety of non-ideal conditions. In order to use virtual/mixed testing to properly assess system performance and safety, the simulation-to-reality gap needs to be reduced as much as possible, using high-fidelity digital models in combination with validated sensor noise models to accurately reproduce the data that real sensors would produce.
This work discusses the development and validation of two high-fidelity digital models of one outdoor and one indoor testing facility, both offering on-site rain and fog emulation. Using high-resolution, geo-referenced point clouds and images combined with photogrammetry and 3D modelling, a semi-automatic 3D reconstruction and material creation process is presented. The created digital models, combined with real perception sensor data collection and the development of sensor noise models, enable the validation of these models and the production of trustworthy and realistic virtual sensor data. In turn, this data allows numerous, safety-critical tests to be executed reliably.
The hereby described digital models have been developed as a part of the EU Horizon ROADVIEW project∗ and will be made openly available.
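As an illustration of how a sensor noise model can be applied to clean virtual data, the sketch below adds range-dependent noise and dropouts to simulated LiDAR returns under reduced visibility. The functional form and coefficients are generic placeholders, not the validated models developed in the project.

```python
# Generic illustrative noise model for virtual LiDAR ranges under rain/fog.
import numpy as np

def degrade_ranges(ranges_m, visibility_m=200.0, sigma0=0.02, rng=None):
    """Apply range-dependent Gaussian noise and distance-dependent dropouts."""
    rng = rng or np.random.default_rng()
    ranges_m = np.asarray(ranges_m, dtype=float)
    # detection probability decays with range relative to meteorological visibility
    p_detect = np.exp(-ranges_m / visibility_m)
    detected = rng.random(ranges_m.shape) < p_detect
    # range noise grows mildly with distance
    noisy = ranges_m + rng.normal(0.0, sigma0 * (1.0 + ranges_m / 100.0))
    noisy[~detected] = np.nan          # dropped returns
    return noisy
```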
Personnel shortages in the military sector require deploying soldiers as effectively as possible. Increased vehicle automation, e.g. for displacements or for resupply convoys, can improve this effectiveness by lowering the mental load needed for driving. Drivers of automated vehicles resemble passengers and are thereby more susceptible to motion sickness than drivers of non-autonomous vehicles. It is useful to monitor potential motion sickness, to ensure personnel arrive fit for duty at their destination. Therefore, a system to automatically detect the presence of motion sickness would be beneficial. In this paper, we introduce a camera-based system that uses electro-optical (EO) and infrared (IR) video sets to monitor facial skin temperature and respiratory rate as a step towards camera-based motion sickness monitoring in autonomous vehicles. Our proof-of-concept system obtained sufficient measurement accuracy for use in an experimental setting in which participants were subjected to a condition that induced motion sickness. We discuss the successes and challenges encountered during system set-up and data analysis, and share insights relevant to the envisioned application in an autonomous vehicle. Specifically, we compare recordings with and without subject motion caused by the motion sickness inducing condition and discuss measurement inaccuracies that might be encountered because of IR thermal drift. Additionally, we reflect on obstacles that can arise when employing an EO/IR monitoring system in a military context.
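To illustrate one plausible building block of such a system, the sketch below estimates respiratory rate from the mean temperature of a facial region of interest in the IR stream, with a simple linear detrend to mitigate slow thermal drift. This is an assumed, generic formulation, not the authors' pipeline.

```python
# Minimal sketch: respiratory rate from a thermal region-of-interest signal.
import numpy as np

def respiratory_rate_bpm(roi_temperature, fps, band=(0.1, 0.5)):
    """Dominant spectral peak of the detrended ROI signal within 6-30 breaths/min."""
    x = np.asarray(roi_temperature, dtype=float)
    t = np.arange(len(x))
    x = x - np.polyval(np.polyfit(t, x, 1), t)          # remove slow drift
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])      # plausible breathing band
    return 60.0 * freqs[mask][np.argmax(power[mask])]
```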
The use of Wing-In-Ground (WIG) vehicles marks a significant evolution in autonomous transportation, bridging the gap between aerial and maritime domains and combining maritime vessels' efficiency with aircraft speed and flexibility. These vehicles navigate the complex interface between sea and air, requiring sophisticated navigational strategies to manage their unique dynamics. Central to their deployment in defence and security applications is the ability to rapidly deploy and intervene at sea without infrastructure or launch vehicles for departure and landing. This paper presents an obstacle avoidance framework for Unmanned WIG Vehicles (UWVs) that integrates advanced image segmentation techniques, drawing upon comprehensive datasets for obstacle detection and avoidance.
The datasets chosen for training and testing, which encompass a wide range of maritime scenarios including lakes, rivers, and seas, serve as the foundation for this study. They offer various scene types, obstacle classifications, and environmental conditions.
The study of different image segmentation CNNs represents a pivotal step towards robust autonomy in UWVs, particularly in defence and security, where reliability and precision are paramount. The methodology presented may establish the foundation for an obstacle avoidance system that improves the operational efficiency of UWVs while enhancing their safety and providing more accurate, collision-free navigation through dynamically changing maritime environments.
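As a simplified illustration of how a per-pixel segmentation output could feed obstacle avoidance, the sketch below selects the image sector with the lowest obstacle occupancy; the class indices and decision rule are assumptions for illustration only, not the framework studied here.

```python
# Hedged sketch: derive a free heading from a segmentation mask.
import numpy as np

WATER, SKY, OBSTACLE = 0, 1, 2   # assumed class map

def free_heading(seg_mask, n_sectors=9):
    """seg_mask: HxW array of class indices from the segmentation CNN."""
    h, w = seg_mask.shape
    lower = seg_mask[h // 2 :, :]                 # region ahead of the vehicle
    sectors = np.array_split(np.arange(w), n_sectors)
    # fraction of obstacle pixels per horizontal sector
    occupancy = [np.mean(lower[:, cols] == OBSTACLE) for cols in sectors]
    best = int(np.argmin(occupancy))
    # map the chosen sector to a normalized steering offset in [-1, 1]
    return 2.0 * (best + 0.5) / n_sectors - 1.0
```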
New algorithms for appropriate planning and dynamic control of pan-tilt-zoom sensors (stationary or mounted on mobile systems) in the context of reconnaissance missions have been developed at Fraunhofer IOSB. These algorithms are based on previously proposed solutions for the efficient control of robotic vehicles and groups of heterogeneous robotic vehicles.
In this paper, two specific algorithms (one deterministic and one non-deterministic) are referenced, and it is explored how these algorithms, originally developed for the control of unmanned vehicles from potentially different domains, can be adapted to also support intelligent autonomous sensor control. The aim is to maximize the effectiveness of these sensors when used in reconnaissance missions.
The deterministic algorithm is based on extensive pre-planning that considers all relevant aspects of the task at hand and of the optical sensors to be used, such as the target area, restricted zones, fields of view, resolutions, and zoom levels, and uses approximations and assumptions to determine the best possible area coverage. The non-deterministic algorithm does not perform pre-planning but instead provides basic behaviors and mission-relevant compiled information that the autonomous control system uses to identify the most reasonable actions based on the current situation.
Both algorithm types are suitable for the autonomous control of (heterogeneous) cooperative sensors without any operator interaction. To provide effective and efficient reconnaissance, the usage of each sensor assigned to the operation must be optimized and, depending on the task, must ensure the best possible coverage of the mission-relevant area, focus on certain areas by increasing the scanning cadence, etc. To provide sufficient image resolution and quality, the sensor footprint should cover each specified target for a defined time period at a suitable zoom level. Changing alignment angles and relative positions must be continuously taken into account. Therefore, for sensors mounted on mobile systems (flying, swimming, or driving), planning and control need to be fast and reliable in order to take the movements of the carrier platform into account. The theoretical foundations and practical approaches of the two algorithms are compared and discussed.
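The flavour of the deterministic pre-planning approach can be sketched with a greedy coverage heuristic over candidate pan/tilt/zoom settings, as below; the footprint computation is abstracted away and all names are illustrative, so this is not the algorithm developed at Fraunhofer IOSB.

```python
# Simplified sketch: greedily choose PTZ settings whose footprints cover the
# largest number of still-uncovered target cells.
def plan_coverage(target_cells, candidate_settings, footprint_fn, max_steps=50):
    """target_cells: set of grid cells; footprint_fn(setting) -> set of covered cells."""
    uncovered = set(target_cells)
    plan = []
    for _ in range(max_steps):
        if not uncovered:
            break
        # pick the setting that covers the largest number of uncovered cells
        best = max(candidate_settings, key=lambda s: len(footprint_fn(s) & uncovered))
        gain = footprint_fn(best) & uncovered
        if not gain:
            break                       # remaining cells unreachable with current candidates
        plan.append(best)
        uncovered -= gain
    return plan, uncovered              # ordered settings and any residual cells
```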
Traditional deep learning datasets often lack representations of unstructured environments, making it difficult to acquire the ground truth data needed to train models. We therefore present a novel approach that relies on platform-specific synthetic training data. To this end, we use an excavator simulation based on the Unreal Engine to accelerate data generation for object segmentation tasks in unstructured environments. We focus on barrels, a typical example of deformable objects with varying styles and shapes that are commonly encountered in hazardous environments.
Through extensive experimentation with different state-of-the-art (SOTA) models for semantic segmentation, we demonstrate the effectiveness of our approach in overcoming the limitations of small training sets and show how photorealistic synthetic data substantially improves model performance, even in corner cases such as occluded or deformed objects and different lighting conditions, which is crucial for ensuring robustness in real-world applications.
In addition, we demonstrate the usefulness of this approach with a real-world instance segmentation application together with a ROS-based barrel grasping pipeline for our excavator platform.
The process of self-organisation (SO) is pervasive in nature, being observed across physical, biological, chemical, social, and technological systems. It enables continuous adaptation of a system to its dynamic environment, addressing real-world challenges such as finding the shortest path or dividing labour. These models and principles have been integrated into state-of-the-art algorithms and computational approaches, though primarily within the field of software agents. SO is achieved through interactions at the device level but manifests at the system level, offering a method to achieve system-wide control through the actions of individual participants. Key advantages include scalability (as the process is independent of system size) and robustness (as the collective continuously adapts without a single point of failure). This paper explores how existing SO approaches can be combined to optimise the performance of, e.g., a fleet of logistics assets, ensuring that the preferences of individual fleet owners are respected while allowing their assets to collaborate through SO in non-permanent cooperating groups. The requirements, constraints, and operational paradigms of the (civilian) logistics industry are found to be applicable in the military domain as well, whether for supply and logistics operations or to facilitate SO of assets on the battlefield. Insights from self-organising logistics demonstrate how assets from different units or branches can collaborate on the battlefield while respecting the priorities of the deploying units. This article presents the insights gained, the opportunities envisioned, and the proposed approaches.
Interest in multiple object tracking (MOT) has grown in recent years, in both civil and military contexts, as it enhances situational awareness for better decision-making. Typically, state-of-the-art methods integrate motion and appearance features to preserve the trajectory of each object over time, using new detection information when available. Visual features are fundamental for resolving temporary occlusions and complex trajectories, i.e., non-linear motion associated with high object speeds or low framerates. Currently, these features are extracted by powerful deep-learning-based models trained on the re-identification (ReID) task. However, research focuses mostly on scenarios involving pedestrians or vehicles, limiting the adaptability and transferability of such methods to other use cases. In this paper, we investigate the added value of a variety of appearance features for comparing vessel appearance. We also include recent advances in foundation models, which show out-of-the-box applicability to unseen circumstances. Finally, we discuss how robust visual features could improve multiple object tracking performance in the specialized domain of maritime surveillance.
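As an illustration of how appearance features are typically compared, the following sketch matches L2-normalised ReID embeddings of new detections against existing vessel tracks using cosine similarity and a greedy one-to-one assignment; the embedding extractor (CNN or foundation model) and the threshold are assumed.

```python
# Minimal sketch of appearance-based association for tracking.
import numpy as np

def cosine_similarity_matrix(track_embs, det_embs):
    """track_embs: (T, D), det_embs: (N, D) -> (T, N) similarity matrix."""
    t = track_embs / np.linalg.norm(track_embs, axis=1, keepdims=True)
    d = det_embs / np.linalg.norm(det_embs, axis=1, keepdims=True)
    return t @ d.T

def appearance_match(track_embs, det_embs, threshold=0.6):
    """Greedy one-to-one matching on the similarity matrix above."""
    sim = cosine_similarity_matrix(track_embs, det_embs)
    matches = []
    while sim.size and sim.max() > threshold:
        ti, di = np.unravel_index(np.argmax(sim), sim.shape)
        matches.append((int(ti), int(di)))
        sim[ti, :] = -1.0               # remove matched track and detection
        sim[:, di] = -1.0
    return matches
```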
Recently, deep-learning-based methods for small object detection have been improved by leveraging temporal information. The capability of detecting objects down to five pixels provides new opportunities for automated surveillance with high-resolution, wide-field-of-view cameras. However, integration on unmanned vehicles generally comes with strict demands on size, weight, and power. This poses a challenge for processing high-framerate, high-resolution data, especially when multiple camera streams need to be analyzed in parallel for 360-degree situational awareness. This paper presents results of the Penta Mantis-Vision project, in which we investigated the parallel processing of four 4K camera video streams with commercially available edge computing hardware, specifically the Nvidia Jetson AGX Orin. As the computational power of the GPU on an embedded platform is a critical bottleneck, we explore widely available techniques to accelerate inference or reduce power consumption. Specifically, we analyze the effect of INT8 quantization and replacement of the activation function on small object detection. Furthermore, we propose a prioritized tiling strategy to process camera frames in such a way that new objects can be detected anywhere in the camera view while previously detected objects can still be tracked robustly. We implemented a video processing pipeline for different temporal YOLOv8 models and evaluated these with respect to object detection accuracy and throughput. Our results demonstrate that recently developed deep learning models can be deployed on embedded devices for real-time multi-camera detection and tracking of small objects without compromising object detection accuracy.
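A minimal sketch of the prioritized tiling idea is given below: tiles overlapping previously tracked objects are processed every frame, while the remaining tiles are visited round-robin so that new objects anywhere in the 4K view are eventually seen. The tile size, processing budget, and function names are assumptions, not the exact strategy evaluated in the paper.

```python
# Hedged sketch of prioritized tile selection for large camera frames.
def select_tiles(all_tiles, active_tracks, frame_idx, budget=8):
    """all_tiles: list of (x, y, w, h); active_tracks: list of (cx, cy) centers."""
    def contains(tile, pt):
        x, y, w, h = tile
        return x <= pt[0] < x + w and y <= pt[1] < y + h

    priority = [t for t in all_tiles if any(contains(t, c) for c in active_tracks)]
    rest = [t for t in all_tiles if t not in priority]
    # fill the remaining budget with a rotating subset of the non-priority tiles
    free = max(0, budget - len(priority))
    if rest and free:
        start = (frame_idx * free) % len(rest)
        rotating = [rest[(start + i) % len(rest)] for i in range(free)]
    else:
        rotating = []
    return priority[:budget] + rotating
```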
This research centres on a dynamic risk assessment and assurance strategy for object detection technology within a versatile Unmanned Aerial Vehicle (UAV) during surveillance missions. While existing studies focus on enhancing object detection accuracy, this research emphasises the integration of vision systems into autonomous systems with defined objectives. Using stereo cameras with YOLO object detection, the study monitors factors such as resource usage, environmental impact, and mission adaptability. Parameters such as slack time, battery levels, weather, and power consumption are continuously monitored and used to dynamically adjust the camera system's operation mode. By utilising formal modelling and verification techniques, the system can adapt to various scenarios, ensuring mission success. Additionally, Goal Structuring Notation (GSN) is employed to articulate and visualise mission success arguments, offering a structured framework for assessing evidence and assuring system properties. By addressing uncertainties and visualising them through GSN, the research enhances UAV adaptability and resilience in diverse environments.
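To illustrate the kind of dynamic adaptation described above, the sketch below maps monitored parameters to a camera operation mode with simple rules; the thresholds, mode names, and monitored fields are placeholders, not the formally verified models used in the study.

```python
# Illustrative rule-based mode selection from monitored mission parameters.
def select_camera_mode(battery_pct, slack_time_s, visibility_m, power_budget_w):
    if battery_pct < 20 or power_budget_w < 10:
        return "LOW_POWER"          # reduce framerate/resolution to preserve the mission
    if visibility_m < 100:
        return "DEGRADED"           # poor weather: rely more on redundant sensing
    if slack_time_s > 60:
        return "HIGH_ACCURACY"      # time margin available: full-resolution stereo + YOLO
    return "NOMINAL"
```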
In this study, we investigate the feasibility of performing real-time object detection on an edge device using 100 MP images. To demonstrate the monitoring capabilities over wide areas, we captured images of a peri-urban scenario containing vehicles and pedestrians using a DJI M300 drone equipped with a Phase One iXM-100 camera. We fine-tuned a YOLOX-Tiny object detector on the VisDrone2019 dataset, achieving a mean average precision of 0.32 at an IoU of 0.5. Subsequently, we deployed the detector on a Jetson AGX Orin board using the Nvidia TensorRT framework and FP16 quantization. The resulting YOLOX model was applied to the dataset collected with the iXM payload, making extensive use of the sliding-window technique. Our experiments demonstrate the trade-off between achieving real-time processing, i.e., 3 frames per second for the current setup, and maintaining the ability to detect an average of 200 targets per image. Additionally, we showcase the capability of detecting pedestrians up to 800 m away and vehicles up to 1 km away.
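The sliding-window technique mentioned above can be sketched as follows: overlapping windows are generated over the full frame, the detector runs per window, and boxes are mapped back to global image coordinates. The detector interface and window parameters are assumptions for illustration, not the authors' configuration.

```python
# Minimal sliding-window sketch for very large frames; `detector` is assumed
# to return boxes as (x1, y1, x2, y2, score, cls) in window coordinates.
def sliding_windows(img_w, img_h, win=1280, overlap=0.2):
    step = int(win * (1.0 - overlap))
    for y in range(0, max(1, img_h - win + step), step):
        for x in range(0, max(1, img_w - win + step), step):
            yield x, y, min(win, img_w - x), min(win, img_h - y)

def detect_full_frame(image, detector, win=1280, overlap=0.2):
    detections = []
    h, w = image.shape[:2]
    for x, y, ww, wh in sliding_windows(w, h, win, overlap):
        for x1, y1, x2, y2, score, cls in detector(image[y:y + wh, x:x + ww]):
            detections.append((x1 + x, y1 + y, x2 + x, y2 + y, score, cls))
    return detections   # global NMS would normally follow to merge duplicate boxes
```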
Autonomous vehicles typically employ several types of sensors simultaneously, such as LiDAR (Light Detection and Ranging) sensors, visual cameras, and radar, to provide information about the surrounding scene while driving. Ideally, coupling multiple sensors would improve upon a system that utilizes only a single sensing modality for its perception task. For an autonomous system to understand the scene intelligently, it must be equipped with object localization and classification, which is commonly performed using a visual camera. Object detection and classification may also be applied to LiDAR and thermal sensors to further enhance the scene awareness of the autonomous system. Herein, we investigate the fusion of information obtained from visual (RGB), LiDAR, and infrared (IR) sensors in order to improve object classification accuracy and heighten scene awareness. In autonomy, there are several levels of fusion that can be employed, such as sensor-level, feature-level, and decision-level; within the scope of this research, we will be exploring the impact of decision-level fusion. Three state-of-the-art object detection and classification models (visual-based, LiDAR-based, and thermal-based) will be employed in parallel, and object predictions will be fused using voting-based (or rule-based) fusion methods. Some questions remain as to the discrepancies that could occur: what can we do to mitigate sensor disagreement? Does the coupling of sensor decisions generally boost confidence or induce confusion? Will different fusion methods provide differing levels of impact on the final solution? Additionally, does this multi-source fusion application transfer well to different scenes? A qualitative and quantitative analysis will be presented for applications of simple and complex fusion methods, and, based on insights from past research, we hypothesize that multi-modality perception algorithms improve the final solution by balancing individual sensor strengths and weaknesses. The experiments performed herein will be conducted on a novel multi-sensor autonomous driving dataset created at the Center for Advanced Vehicular Systems (CAVS) at Mississippi State University in collaboration with the US Army's Engineer Research and Development Center (ERDC).
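As an illustration of a simple voting-based, decision-level fusion rule, the sketch below clusters detections from the three modalities by spatial overlap and accepts a cluster when a majority of modalities agree; the IoU threshold and vote count are illustrative assumptions, not the exact rules evaluated in this work.

```python
# Hedged sketch of majority-vote decision-level fusion of detections.
def iou(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix1, iy1, ix2, iy2 = max(ax1, bx1), max(ay1, by1), min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def majority_vote(detections_by_modality, iou_thr=0.5, min_votes=2):
    """detections_by_modality: {"rgb": [...], "lidar": [...], "ir": [...]}; boxes in a common frame."""
    pool = [(m, box) for m, boxes in detections_by_modality.items() for box in boxes]
    used, fused = set(), []
    for i, (m_i, box_i) in enumerate(pool):
        if i in used:
            continue
        cluster = [(m_i, box_i)]
        for j in range(i + 1, len(pool)):
            if j not in used and iou(box_i, pool[j][1]) >= iou_thr:
                cluster.append(pool[j])
                used.add(j)
        if len({m for m, _ in cluster}) >= min_votes:   # modality majority
            fused.append(box_i)
    return fused
```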
This paper reports on a novel verification and performance evaluation framework specifically designed and developed to facilitate a standardized comparative performance evaluation for commercial detection, tracking and identification (DTI) solutions to counter Unmanned Aerial System (UAS) threats. The test methodology is designed to compare commercial systems in a fair and reproducible manner based on end-user defined criteria.
DTI systems are increasingly relevant, e.g., for the perimeter protection of military facilities, critical infrastructures, and public events, and the expected end-users are law enforcement agencies, the military, civil defense agencies, and private entities. However, such systems are commonly hard to benchmark in a fair and comparable manner, and performance claims of these systems are currently not supported by evidence. In addition, no standardized test methodologies are currently available, making it nearly impossible to compare competing DTI systems.
In Courageous, we developed an objective-driven test methodology for use by the civilian sector. Courageous leads to a comparative performance evaluation system for commercial DTI solutions for Counter-UAS systems (C-UASs) using operationally relevant end-user scenarios and a generic DTI system layout. The work takes into account contextual information as well as end-user input, albeit focusing primarily on civilian use cases so far. We outline the process taken as well as the resulting system and discuss how the systems should be evaluated and validated iteratively over time. We furthermore elicit end-user input from the defense domain and argue that the scope of Courageous should be broadened to include military challenges, aspects, and concerns.
The work presented here with regard to homeland security use cases has first been verified in a simulation environment, where a number of relevant scenarios were used and the output of the simulation was injected into the testing system. Validation of the work in a relevant environment has been done in three operational trials.
The results from the operational trials held for homeland security scenarios show that the method allows for performance evaluation at component level (i.e., detection, tracking or identification component) and at system level (combinations of these components and integrated DTI system of system solutions).
The transition towards more-electric and autonomous vehicles in defence and maritime applications necessitates innovative approaches to energy management, particularly for hybrid systems combining fuel cells, batteries, and supercapacitors. This paper explores the development of an Energy Management System (EMS) tailored for Hybrid Unmanned Wing-In-Ground (WIG) Vehicles, leveraging comparative analyses from the more-electric aircraft sector to enhance reliability and operational flexibility while supporting the rapid manoeuvres crucial to defence applications.
Hybrid WIG vehicles, operating at the intersection of air and sea, present unique challenges and opportunities for energy optimisation. Drawing upon established energy management strategies, including state machine control, rule-based fuzzy logic, classical PI control, frequency decoupling/state machine control, the equivalent consumption minimisation strategy, the external energy maximisation strategy, and a newly proposed control strategy that is a modified version of the classical PI controller, this study adapts these methodologies to the specific requirements of unmanned maritime vehicles. Our work optimises hydrogen consumption, manages the state of charge of batteries and supercapacitors, and enhances overall system efficiency while considering the rapid manoeuvrability essential for defence missions.
Through simulation and experimental validation of a representative hybrid WIG vehicle model, this research identifies the most effective EMS approaches for sustaining high-performance, environmentally conscious operations while adequately supporting the high-stakes, fast-paced manoeuvres integral to defence strategies. The findings contribute to the advancement of autonomous naval technologies and offer insights into the broader application of hybrid energy systems in future more-electric vehicles.
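To make one of the named baselines concrete, the following sketch shows a classical-PI power-split strategy in which the fuel cell reference tracks the battery state of charge while the battery/supercapacitor branch absorbs the fast residual power; the gains, limits, and signal names are illustrative assumptions rather than the EMS implemented in this work.

```python
# Minimal sketch of a classical-PI power-split energy management strategy.
class PIPowerSplit:
    def __init__(self, kp=800.0, ki=40.0, soc_ref=0.7, p_fc_max=10_000.0):
        self.kp, self.ki, self.soc_ref, self.p_fc_max = kp, ki, soc_ref, p_fc_max
        self.integral = 0.0

    def step(self, soc, p_load_w, dt):
        err = self.soc_ref - soc                       # recharge when SOC is below reference
        self.integral += err * dt
        p_fc = p_load_w + self.kp * err + self.ki * self.integral
        p_fc = min(max(p_fc, 0.0), self.p_fc_max)      # fuel cell operating limits
        p_batt = p_load_w - p_fc                       # battery/supercap covers the transient
        return p_fc, p_batt
```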
In this study, a novel image fusion algorithm is proposed for detecting and tracking targets. The method is based on background suppression in both the infrared and the visible image. The algorithm's performance is evaluated on the defined datasets. Detection performance is quantified using the signal-to-clutter ratio (SCR). The proposed fusion algorithm, which uses both infrared and visible images, obtained a higher SCR than single-band images alone. To assess the tracking performance of the proposed fusion algorithm, feature-based tracking algorithms are used. In conclusion, the proposed image fusion algorithm yields better detection and tracking performance than the use of single-band images.
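For reference, a common formulation of the signal-to-clutter ratio is sketched below; the exact target and background windowing may differ from the definition used in the paper.

```python
# Common SCR definition for detection performance evaluation.
import numpy as np

def scr(image, target_mask, background_mask):
    """SCR = |mean(target) - mean(background)| / std(background)."""
    target = image[target_mask]
    background = image[background_mask]
    return np.abs(target.mean() - background.mean()) / (background.std() + 1e-12)
```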