Reinforcement learning (RL) agents offer significant value for military applications by effectively navigating the complex, dynamic environments typical of mission engineering and operational analysis. Once trained, these agents can inform mission planners about optimal strategies, tactics, or even innovative ways to employ different military platforms within a given scenario. In recent years, RL has become a major research area for automation and for solving complex sequential decision-making problems. A notable challenge, however, lies in the inherent black-box nature of RL models and their inability to explain their decisions and actions; this limitation is a major adoption barrier, especially in Defense. This paper studies EXplainable RL (XRL) within an operational context. XRL is a distinct branch of Explainable Artificial Intelligence (XAI) that provides the transparency needed to address this challenge. This research is an effort to gain insight into the behavior of RL agents in an operational environment and to discuss explainability and interpretability through the lens of different roles within the decision-making pipeline.
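To make the idea of XRL concrete, the following is a minimal, illustrative sketch of one common post-hoc explanation technique: perturbation-based feature attribution applied to a trained policy's action choice. The policy, state features, and scoring here are hypothetical stand-ins and are not drawn from the paper; the actual agents, scenarios, and XRL methods studied in this work are not shown.

```python
# Illustrative only: perturbation-based feature attribution for a trained RL
# policy. The policy, state features, and noise settings below are
# hypothetical stand-ins, not the paper's actual agents or methods.
import numpy as np

def policy_action_scores(state: np.ndarray) -> np.ndarray:
    """Stand-in for a trained policy: returns one score per discrete action."""
    weights = np.array([[0.8, -0.2, 0.5],   # action 0
                        [-0.3, 0.9, 0.1]])  # action 1
    return weights @ state

def perturbation_saliency(state: np.ndarray, noise: float = 0.1,
                          samples: int = 200, seed: int = 0) -> np.ndarray:
    """Estimate how much each state feature influences the chosen action.

    Each feature is perturbed with Gaussian noise, and we record how often the
    policy's greedy action flips. Higher flip rates suggest the decision at
    this state relies more heavily on that feature.
    """
    rng = np.random.default_rng(seed)
    base_action = int(np.argmax(policy_action_scores(state)))
    saliency = np.zeros(state.shape[0])
    for i in range(state.shape[0]):
        flips = 0
        for _ in range(samples):
            perturbed = state.copy()
            perturbed[i] += rng.normal(0.0, noise)
            if int(np.argmax(policy_action_scores(perturbed))) != base_action:
                flips += 1
        saliency[i] = flips / samples
    return saliency

if __name__ == "__main__":
    state = np.array([1.0, 0.2, -0.5])  # e.g., normalized sensor readings
    print("chosen action:", int(np.argmax(policy_action_scores(state))))
    print("per-feature saliency:", perturbation_saliency(state))
```

Explanations of this kind are local (they describe a single decision, not the whole policy), which is one reason explainability needs differ across roles in the decision-making pipeline, from analysts inspecting individual agent actions to planners assessing overall courses of action.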