We develop a software system to automatically identify aircraft in thermal camera imagery to assist with uplink laser safety for deep space optical communications. Ground terminals often transmit a high-powered laser beacon to assist spacecraft pointing, and the wavelengths used for these beacons are not eye safe for humans. Previous missions have used spotters and transponder-based aircraft detection (TBAD) as a warning system; however, not all aircraft (e.g., low-flying planes, hang gliders) carry transponders. For this reason, we take an image-based approach, using data from multiple thermal cameras aligned with the telescope to detect low-flying aircraft as part of a multi-tiered system. We use Kalman filter-based tracking software capable of detecting and tracking aircraft within 20 km of the ground terminal. At these ranges, smaller aircraft span only 1-2 pixels, and any system sensitive enough to detect and track all possible aircraft will also detect and track non-aircraft such as insects and birds. We develop traditional machine learning and neural network classifiers to separate aircraft from non-aircraft, using key distinguishing features based on track statistics. In addition, we develop convolutional and recurrent neural network models that incorporate the time-series history of the tracks. Since we cannot tolerate missed aircraft, we select a decision threshold that yields a true positive rate of 1 (all aircraft are identified) and compare the performance of a variety of machine learning classifiers. We demonstrate the system in the field, where we correctly identify all aircraft, with a false positive rate of roughly 50% when classification uses only the initial 45 frames of a track and below 20% when full system tracks are used.
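As an illustration of the operating-point selection described above, the following is a minimal sketch (not the authors' code) of choosing the largest decision threshold at which the true positive rate reaches 1, so that no aircraft track is missed and the corresponding false positive rate is simply the cost paid for full recall. It assumes scikit-learn, validation labels `y_val` (1 = aircraft), and classifier scores `scores`; all names are hypothetical.

```python
import numpy as np
from sklearn.metrics import roc_curve

def threshold_for_full_recall(y_true, scores):
    """Return the largest score threshold with TPR == 1, plus the FPR paid for it."""
    fpr, tpr, thresholds = roc_curve(y_true, scores)  # thresholds are in decreasing order
    idx = np.argmax(tpr >= 1.0)      # first (largest-threshold) point where every aircraft is detected
    return thresholds[idx], fpr[idx]

# Hypothetical usage with a trained classifier `clf` and validation set (X_val, y_val):
#   scores = clf.predict_proba(X_val)[:, 1]
#   thr, fpr_at_full_recall = threshold_for_full_recall(y_val, scores)
# A track is then declared an aircraft whenever its score >= thr.
```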