It is undeniable that modern multi-domain operations facing sophisticated threats are demanding an ever-growing list of complex mission capabilities and changing how systems are designed. Today's systems of systems must have highly interoperable components, requiring sophisticated integration techniques and holistic analyses spanning from the lowest component level to the highest enterprise level. From increasingly demanding C4ISR and cyber applications, to real-time assimilation and fusion of disparate information, to cost-effective mission planning and operation, innovative technologies, often driven by artificial intelligence (AI) and machine learning (ML), are working their way into defense systems and challenging the traditional systems engineering practices associated with this domain. Incorporating and integrating AI/ML-driven technologies into defense systems brings a new set of challenges to designers, integrators, and evaluators due to the "black-box" nature of these technologies. For instance, Deep Neural Networks (DNNs) are at the foundation of recent innovations in AI/ML, and the lack of explainability of their algorithms is well documented in the computer science and systems engineering literature.
As engineers become increasingly involved in designing solutions to complex technological and societal needs with AI/ML techniques, modern engineering principles dictate the use of techniques such as Modeling and Simulation (M&S) and Explainable Artificial Intelligence (XAI) to understand, analyze, and validate the assumptions, theories, and operations of modern complex systems. Combining XAI techniques with M&S provides a window of opportunity to understand the decision-making constructs of DNNs, which can directly aid systems engineers in their professional practice. In this context, the C4I & Cyber Center at George Mason University is aggressively pursuing novel methods for integrating theories and results across multiple disciplines, including M&S, AI, and XAI. The Center's goals are to develop an understanding of system-level behavior driven by complex technologies, align the state of theoretical knowledge with the state of practice in systems engineering, and bridge the cultural gaps between government, industry, and academia. The Center was created in the 1980s as the nation's first civilian university-based entity offering comprehensive academic and research programs in the national security domain and is actively working on projects funded by DARPA, AFRL, NATO, and Sandia National Laboratories, to name a few.
In this presentation, we will highlight the spectrum of ongoing research projects at the C4I & Cyber Center, focusing on the technical achievements of two efforts that directly address systems engineering with the aid of M&S and XAI techniques. The first combines Probabilistic Ontologies with Multi-Entity Bayesian Networks to provide explainable AI insights into the design of adaptable system-of-systems architectures. The second focuses on utilizing Shapley Additive Explanations (SHAP), a state-of-the-art XAI technique, to understand the decision making of Reinforcement Learning agents in dynamic environments. A synopsis of these efforts is provided in the following paragraphs.
Anytime Reasoning and Analysis for Kill-Web Negotiation and Instantiation across Domains (ARAKNID): this project is part of the DARPA Adapting Cross-Domain Kill-Webs (ACK) program (www.darpa.mil/program/adapting-cross-domain-kill-webs) and is being developed to manage data on various threats in real time to support decisions involving weapons, sensors, and military assets. The C4I & Cyber Center supports Raytheon BBN Technologies by providing state-of-the-art research on applied explainable AI and probabilistic semantic technologies (e.g., Probabilistic Ontologies and Multi-Entity Bayesian Network inference) for the analysis and prediction of courses of action. The goal of the ACK program is to provide a decision aid that assists mission commanders in rapidly identifying and selecting options for tasking (and re-tasking) assets within and across organizational boundaries. Important decisions are ultimately made by a real person, but automating machine-to-machine interactions is a significant tactical advantage when seconds and minutes matter.
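To give a flavor of the probabilistic reasoning involved, the single-entity building block of such inference is Bayes' rule; Multi-Entity Bayesian Networks compose many such conditional-probability fragments across interrelated entities. The sketch below is purely illustrative, not ARAKNID code, and the scenario and numbers are hypothetical:

```python
def posterior_threat(p_threat, p_alert_given_threat, p_alert_given_clear):
    """Bayes' rule: P(threat | sensor alert) from a prior and two likelihoods."""
    numerator = p_alert_given_threat * p_threat
    evidence = numerator + p_alert_given_clear * (1.0 - p_threat)
    return numerator / evidence

# Hypothetical numbers: 20% prior threat probability, a sensor that alerts
# 90% of the time on a real threat and 10% of the time on a false one.
p = posterior_threat(0.2, 0.9, 0.1)
print(round(p, 4))  # → 0.6923
```

A single alert raises the threat estimate from 20% to roughly 69%; MEBN inference performs this kind of evidence propagation over networks instantiated on demand for each entity in the scenario.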
The second segment of the presentation will focus on XAI techniques for Reinforcement Learning (RL). RL is becoming increasingly popular in AI communities and provides the ability to train an AI agent to operate in dynamic and uncertain environments. Recently, RL has demonstrated promising performance in missile guidance, flight controls, and motion planning for swarms of autonomous vehicles, and the list of potential application areas continues to grow. Despite these demonstrated performance outcomes, characterizing performance boundaries, explaining the logic behind RL decisions, and quantifying the resulting uncertainties in RL outputs are major challenges that need to be addressed by the systems engineering community. We have built a robustness testing framework for RL solutions and successfully applied Shapley Additive Explanations (SHAP) to uncover the decision-making constructs of DNNs trained by RL algorithms.
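As a minimal sketch of the idea behind SHAP (not our framework's implementation), Shapley values attribute a model's output to its input features by averaging each feature's marginal contribution over all feature coalitions; absent features are replaced by baseline values. The toy linear "policy" below is a hypothetical stand-in for a trained DNN's action-value output:

```python
import itertools
import math

# Hypothetical stand-in for a trained RL policy's scalar output.
# Features (illustrative): [distance_to_target, fuel_cost, threat_level]
def policy_score(x):
    return 2.0 * x[0] - 1.0 * x[1] + 0.5 * x[2]

def shapley_values(f, x, baseline):
    """Exact Shapley values by enumerating all coalitions of the other features.
    Features outside a coalition are masked with their baseline values."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for subset in itertools.combinations(others, k):
                # Classic Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                with_i = [x[j] if j in subset or j == i else baseline[j] for j in range(n)]
                without_i = [x[j] if j in subset else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

phi = shapley_values(policy_score, [3.0, 1.0, 2.0], [0.0, 0.0, 0.0])
print(phi)  # → [6.0, -1.0, 1.0]
```

The efficiency property holds by construction: the attributions sum to f(x) minus f(baseline), so an analyst can read off exactly how much each observation feature pushed the agent toward its chosen action. Practical SHAP implementations approximate this enumeration, which is exponential in the number of features.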
We believe the capabilities addressed in our presentation are applicable not only in the field of defense but also in a broader sphere of applications, including information technology and cybernetics. Our presentation directly addresses and advances the tools and techniques required for Systems Engineering for AI (SE4AI), an identified area of need for the SE community under INCOSE's FuSE efforts.