Sensor data from multiple sources provides valuable insights for data-driven decision making. Sensor fusion is the process of combining data from multiple sensors to produce a more complete view than would be possible with a single sensor alone. Implementing sensor fusion at the tactical edge often requires combining heterogeneous data sources: various types of sensors, such as cameras, lidar, radar, and magnetometers, are frequently blended for optimal decision making. Multi-domain operations (MDO) bring a unique set of challenges to the subdomain of sensor fusion. MDO software architects must account for the need to operate in resource-constrained, fast-paced, and contested environments, and must design for high data velocity, high data volume, scalability, and real-time processing. This work offers a blueprint for benchmarking a multi-input video processing pipeline in search of its service degradation point. We vary the number of input video feeds while quantitatively measuring the pipeline's performance. We employ the NVIDIA DeepStream SDK, an established framework for building commercial computer vision and video analytics applications. We begin with a dual-input application performing object detection, semantic segmentation, and object tracking. We then scale up the number of video inputs to simulate the pipeline load variability frequently encountered in the field. By objectively measuring the pipeline's execution parameters, we identify the point of channel saturation and demonstrate the resulting degradation in quality of service. We propose that applying such a benchmarking template is a sound way to establish operating limits and predict anticipated pipeline processing times. We conclude with potential future developments of the proposed template.
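The benchmarking template described above can be sketched in outline as a loop that doubles the number of input streams while measuring achieved per-stream throughput, stopping when throughput falls below a real-time target. The sketch below is a minimal, self-contained illustration in Python: the frame-processing cost, the 30 FPS target, and all function names are assumptions standing in for a real DeepStream pipeline, not part of the paper's implementation.

```python
import time

TARGET_FPS = 30.0       # assumed per-stream real-time requirement
FRAME_BUDGET_MS = 5.0   # assumed fixed per-frame processing cost

def process_frame():
    """Stand-in for detection + segmentation + tracking on one frame.

    A real pipeline would run its inference and tracker elements here;
    this placeholder simply burns a fixed compute budget.
    """
    t0 = time.perf_counter()
    while (time.perf_counter() - t0) * 1000.0 < FRAME_BUDGET_MS:
        pass

def benchmark(num_streams, frames_per_stream=10):
    """Return achieved per-stream FPS when num_streams inputs share one worker."""
    start = time.perf_counter()
    for _ in range(frames_per_stream):
        for _ in range(num_streams):  # round-robin over the input feeds
            process_frame()
    elapsed = time.perf_counter() - start
    return frames_per_stream / elapsed

def find_saturation(max_streams=32):
    """Double the input count until per-stream FPS drops below TARGET_FPS."""
    n = 2  # start from the dual-input configuration
    while n <= max_streams:
        fps = benchmark(n)
        if fps < TARGET_FPS:
            return n, fps  # degradation point reached
        n *= 2
    return None, None
```

Because the simulated per-frame cost is fixed, per-stream throughput falls roughly in inverse proportion to the number of streams, which mimics the saturation behavior the benchmark is designed to expose.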