KEYWORDS: Education and training, Performance modeling, Data modeling, Object detection, Machine learning, Image quality, Detection and tracking algorithms, Visualization, Target detection, RGB color model
The interpretability of an image indicates its potential information value. Historically, the National Imagery Interpretability Rating Scale (NIIRS) has been the standard for quantifying the interpretability of an image. With the growing reliance on machine learning (ML) for image analysis, however, NIIRS fails to capture the image quality attributes relevant to ML. Empirical studies have demonstrated that the relationship between NIIRS and ML performance is weak at best. In this study, we explore several image characteristics by examining the relationship between the training data and the test data using two standard ML frameworks: TensorFlow and Detectron2. We employed quantitative methods to measure color diversity, edge density, and image texture as ways to characterize the training and test sets. A series of experiments demonstrates the utility of these measures. The results suggest that each of the proposed measures quantifies an aspect of image difficulty for the ML model. Performance is generally better for test sets with lower levels of color diversity, edge density, and texture. In addition, the experiments suggest that training on higher-complexity imagery yields more resilient models. Future studies will assess the relationships among these image features and explore methods for extending them.
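The abstract does not give the exact formulations of the three measures. A minimal sketch of plausible stand-ins, assuming hue-histogram entropy for color diversity, the fraction of Canny edge pixels for edge density, and gray-level co-occurrence matrix (GLCM) contrast for texture (OpenCV and scikit-image):

```python
# Hypothetical proxies for the three image-complexity measures named in
# the abstract; the paper's actual definitions may differ.
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def color_diversity(bgr: np.ndarray) -> float:
    """Shannon entropy of the hue histogram (higher = more color diversity)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [180], [0, 180]).ravel()
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def edge_density(bgr: np.ndarray, lo: int = 100, hi: int = 200) -> float:
    """Fraction of pixels marked as edges by the Canny detector."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, lo, hi)
    return float((edges > 0).mean())

def texture_contrast(bgr: np.ndarray) -> float:
    """GLCM contrast at unit distance, horizontal offset (higher = rougher texture)."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    glcm = graycomatrix(gray, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return float(graycoprops(glcm, "contrast")[0, 0])

# Example: characterize one image from a training or test set.
# img = cv2.imread("sample.png")
# print(color_diversity(img), edge_density(img), texture_contrast(img))
```

Averaging these three scores over a training or test set gives a per-set complexity profile of the kind the study uses to compare conditions.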
Automatic Target Detection (ATD) leverages machine learning to efficiently process datasets that are too large for humans to evaluate quickly enough for practical applications. Technological and natural factors, such as the type of sensor, collection conditions, and environment, can affect image interpretability. Synthetic Aperture Radar (SAR) sensors are sensitive to different issues than optical sensors. While SAR imagery can be collected at any time of day and in almost any weather, some conditions remain uniquely challenging, and properties of the targets and the environment can alter their radar signatures. In this experiment, we simulated these effects in quantifiable increments to measure how strongly each one degrades the performance of a machine learning model detecting targets. The experiments demonstrate how image interpretability differs for machine learning versus human perception.
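The abstract does not name the specific degradations, detector, or metric. As one illustration of the experimental pattern, a sketch that sweeps multiplicative speckle noise (a common SAR effect) over a test set in fixed increments and scores a detector at each level; `run_detector` and `score_fn` are hypothetical stand-ins for the actual model and metric:

```python
# Hypothetical sweep: apply one simulated SAR degradation (multiplicative
# speckle noise) at increasing strengths and record detector performance
# at each step, so the performance-vs-degradation curve can be plotted.
import numpy as np

def add_speckle(image: np.ndarray, strength: float,
                rng: np.random.Generator | None = None) -> np.ndarray:
    """Multiplicative speckle model: pixel * (1 + strength * N(0, 1))."""
    rng = rng or np.random.default_rng(0)
    noise = rng.standard_normal(image.shape)
    noisy = image.astype(np.float32) * (1.0 + strength * noise)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def sweep_degradation(images, labels, run_detector, score_fn, levels):
    """Score the detector on progressively degraded copies of the test set."""
    results = {}
    for strength in levels:
        degraded = [add_speckle(img, strength) for img in images]
        detections = [run_detector(img) for img in degraded]
        results[strength] = score_fn(detections, labels)
    return results

# Example (names hypothetical):
# curve = sweep_degradation(test_images, test_labels, model.predict,
#                           mean_average_precision,
#                           levels=[0.0, 0.1, 0.2, 0.4])
```

The same harness applies to other quantifiable degradations (e.g., resolution loss or contrast reduction) by swapping the degradation function.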