Presentation + Paper
13 November 2024

Utilizing synthetic data for object segmentation on autonomous heavy machinery in dynamic unstructured environments
Miguel Granero, Raphael Hagmanns, Janko Petereit
Abstract
Traditional deep learning datasets often lack representations of unstructured environments, making it difficult to acquire the ground truth data needed to train models. We therefore present a novel approach that relies on platform-specific synthetic training data. To this end, we use an excavator simulation based on the Unreal Engine to accelerate data generation for object segmentation tasks in unstructured environments. We focus on barrels, which serve as a typical example of deformable objects of varying styles and shapes that are commonly encountered in hazardous environments.
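A core ingredient of such simulation-driven data generation is randomizing scene parameters so the synthetic set covers lighting, pose, deformation, and occlusion variation. The following is a minimal illustrative sketch of that sampling step; all parameter names and ranges are hypothetical and not taken from the paper's Unreal Engine setup.

```python
import random

def sample_scene_config(rng: random.Random) -> dict:
    """Sample one simulated scene variant (hypothetical parameters)."""
    return {
        "sun_elevation_deg": rng.uniform(5.0, 85.0),   # low sun -> long shadows
        "sun_intensity": rng.uniform(0.3, 1.5),        # under-/over-exposed cases
        "barrel_count": rng.randint(1, 8),
        "barrel_dent_depth": rng.uniform(0.0, 0.3),    # deformed objects
        "barrel_yaw_deg": rng.uniform(0.0, 360.0),
        "occlusion_ratio": rng.uniform(0.0, 0.6),      # partially hidden barrels
    }

# Each config would drive one rendered, auto-labeled image in the simulator.
rng = random.Random(42)
configs = [sample_scene_config(rng) for _ in range(1000)]
```

Seeding the generator keeps the synthetic dataset reproducible across rendering runs.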

Through extensive experimentation with different state-of-the-art (SOTA) models for semantic segmentation, we demonstrate the effectiveness of our approach in overcoming the limitations of small training sets and show how photorealistic synthetic data substantially improves model performance, even on corner cases such as occluded or deformed objects and different lighting conditions, which is crucial to ensure robustness in real-world applications.
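Segmentation performance in such comparisons is conventionally scored with intersection over union (IoU) per class and its mean over classes. The sketch below shows the standard metric on flat per-pixel label lists; it illustrates the evaluation concept only and is not code from the paper.

```python
def class_iou(pred, target, cls):
    """IoU for one class id over paired per-pixel labels; NaN if class absent."""
    inter = sum(1 for p, t in zip(pred, target) if p == cls and t == cls)
    union = sum(1 for p, t in zip(pred, target) if p == cls or t == cls)
    return inter / union if union else float("nan")

def mean_iou(pred, target, classes):
    """Mean IoU over the classes that appear in prediction or ground truth."""
    ious = [class_iou(pred, target, c) for c in classes]
    valid = [v for v in ious if v == v]  # drop NaN (class absent in both)
    return sum(valid) / len(valid)

# Toy example: class 1 = barrel, class 0 = background.
pred   = [0, 1, 1, 1, 0, 0]
target = [0, 0, 1, 1, 1, 0]
```

Here `class_iou(pred, target, 1)` gives 0.5: two pixels agree on the barrel class out of four pixels where either labeling claims it.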

In addition, we demonstrate the usefulness of this approach with a real-world instance segmentation application together with a ROS-based barrel grasping pipeline for our excavator platform.
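One step such a grasping pipeline plausibly needs is turning an instance segmentation mask into a grasp target in image space, e.g. via the mask centroid. The sketch below shows only that geometric step on a binary mask; the actual ROS message handling and 3D projection of the excavator pipeline are omitted, and the function name is hypothetical.

```python
def mask_centroid(mask):
    """mask: 2D list of 0/1; returns the (row, col) centroid of the 1-pixels,
    or None for an empty mask (no detected instance)."""
    pts = [(r, c) for r, row in enumerate(mask)
                  for c, v in enumerate(row) if v]
    if not pts:
        return None
    n = len(pts)
    return (sum(r for r, _ in pts) / n, sum(c for _, c in pts) / n)

# Toy 3x4 instance mask of a detected barrel.
mask = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
```

In a ROS node, this pixel target would then be back-projected to a 3D grasp pose using depth or stereo data before being handed to the manipulator planner.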
Conference Presentation
(2024) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Miguel Granero, Raphael Hagmanns, and Janko Petereit "Utilizing synthetic data for object segmentation on autonomous heavy machinery in dynamic unstructured environments", Proc. SPIE 13207, Autonomous Systems for Security and Defence, 132070A (13 November 2024); https://doi.org/10.1117/12.3030820
KEYWORDS
Data modeling, Image segmentation, Semantics, Sensors, Deep learning, Intelligent robotic vision, Robotics