AI-Enhanced Autonomous Formation Flying - Definition of a Mission-driven and Safety-critical Software Development Environment

8.4.2: AI

Session Chair: Ali Raz

Session Host: Frank Salvatore

Presenter(s):
Bernard Dion (Ansys)

Author(s):
Alexandre Luc, Nicolas Dalmasso, Guilherme Goretkin, Matthieu Paquet (Ansys)

Presentation: 380
When: Thu 04 Jul, 14:15-14:55 IST
Where: Wicklow Hall 2B
Keywords: Autonomy;Mission;Safety-Critical Systems;AI/ML-Enabled Subsystems;Airworthiness;Certification;MBSE;Safety;Simulation
Topics: 14. Autonomous Systems; 2. Aerospace; 4.6. System Safety; 5.11. Artificial Intelligence, Machine Learning; 5.3. MBSE; 6. Defense; #AI
Abstract: The challenges posed by the introduction of autonomy in mission-critical and safety-critical aeronautics applications are driving a strong shift toward the utilization of Artificial Intelligence/Machine Learning (AI/ML)-based techniques. These applications must function in complex and uncertain environments, support autonomous and pilot-assistance systems, ensure system safety, and facilitate the design of efficient system performance, such as energy-aware trajectories or area-coverage maximization. Examples of such applications include formation flying and teaming, manned-unmanned teaming, collision avoidance, last-mile delivery, urban air mobility (UAM), and aerial infrastructure inspection.

Standardization bodies, such as SAE and EUROCAE, have explicitly identified, in the "Artificial Intelligence in Aeronautical Systems: Statement of Concerns," the necessity to produce a standard supporting the integration of AI/ML-enabled sub-systems into safety-critical aeronautics software, hardware, and system development.

To address these development and regulatory challenges, this session introduces an Autonomy Model-Based Systems Engineering (MBSE) Framework, heavily reliant on simulation, for developing and validating mission-critical and safety-critical applications, including AI/ML-based constituents within a safety-critical function implemented in a model-based environment. This framework enables users to build digital models covering mission and vehicle behavior, and lays the foundations of a digital training and validation environment for autonomous systems that can provide early and accurate feedback to autonomous systems developers. Furthermore, this framework aims to comply with emerging safety standards for AI-based systems, such as the future SAE ARP6983.

Users of the Autonomy Framework include both system developers and system operators, who can build and use digital and executable reference models covering mission and vehicle behavior. This enables the inclusion of operational experience into a digital validation environment that system developers can leverage to assess their design and implementation. Reciprocally, simulating the system in an actual mission environment allows system operators to better understand system behavior and provide earlier and more accurate feedback to system developers.

In this presentation, we will go over the main aspects of the Autonomy MBSE Framework, before illustrating each step of this approach with a concrete Fixed Wing Formation Flying Case Study.


Autonomy MBSE Framework:

In the initial stages of the system development cycle, standard Systems Engineering and Safety tasks are performed:

- Functional Hazard Assessment (FHA)
- System Architecture Definition
- Preliminary System Safety Assessment (PSSA)
- Operational Design Domain (ODD) and Scenario Mission Definition, used to train an application that is typically composed of both traditionally developed and AI/ML-based constituents (see the sketch after this list)
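
To make the last step concrete, the sketch below shows one possible way to capture ODD scenario parameters together with assumed probability distributions and to sample scenarios from them. The parameter names, ranges, and distributions are hypothetical placeholders for illustration, not values from the presented framework.

```python
# Hypothetical ODD scenario parameters with assumed probability distributions.
# Names, ranges, and distributions are illustrative only.
import numpy as np

rng = np.random.default_rng(seed=42)

ODD_PARAMETERS = {
    "lead_speed_kts":    lambda: rng.uniform(250.0, 450.0),      # lead aircraft speed
    "initial_range_ft":  lambda: rng.uniform(200.0, 800.0),      # initial separation
    "wind_gust_std_fps": lambda: rng.triangular(0.0, 2.0, 6.0),  # turbulence level
    "camera_noise_px":   lambda: abs(rng.normal(0.0, 1.5)),      # perception sensor noise
}

def sample_scenario() -> dict:
    """Draw one training/validation scenario from the ODD distributions."""
    return {name: float(draw()) for name, draw in ODD_PARAMETERS.items()}

if __name__ == "__main__":
    for _ in range(3):
        print(sample_scenario())
```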

The AI/ML training process involves simulating these scenarios within the framework, varying their parameters according to their probability distributions. This process accommodates supervised learning for perception and reinforcement learning for decision-making. Sensitivity and robustness analyses are then carried out to further characterize the resulting neural networks.
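
As a minimal, hedged illustration of the reinforcement-learning part of this process, the sketch below trains a stable-baselines3 PPO agent while sweeping a scenario parameter drawn from an assumed distribution. The Pendulum-v1 environment and its gravity parameter stand in for the actual flight-dynamics simulation and ODD parameters, which are not reproduced here.

```python
# Sketch of RL training over sampled scenario parameters with stable-baselines3 PPO.
# Pendulum-v1 and its gravity parameter g are placeholders for the real
# flight-dynamics simulation and an ODD parameter drawn from its distribution.
import numpy as np
import gymnasium as gym
from stable_baselines3 import PPO

rng = np.random.default_rng(seed=0)
model = None

for batch in range(3):
    g = float(rng.uniform(8.0, 12.0))          # sampled scenario parameter
    env = gym.make("Pendulum-v1", g=g)
    if model is None:
        model = PPO("MlpPolicy", env, verbose=0)
    else:
        model.set_env(env)                     # continue training on a new scenario
    model.learn(total_timesteps=2_000, reset_num_timesteps=False)
    env.close()

model.save("ppo_scenario_randomized")
```

Supervised training of the perception network and the subsequent sensitivity and robustness analyses would sweep the same scenario parameters, but are omitted from this sketch.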

Once trained and validated, the AI/ML constituents are integrated into the overall application design model, and simulation is used again to conduct reliability analysis and estimate the probability of mission failure. If the system performance and/or safety objectives of the application over its ODD are not met, the recommended approach is to trigger further training or, if necessary, redesign activities. Finally, the embedded code is generated from the software model using a certified code generator.
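
The reliability analysis step can be pictured as a Monte Carlo estimate of the mission failure probability over the ODD, as sketched below. The simulate_mission stub and its threshold logic are hypothetical stand-ins for executing the integrated application, including its AI/ML constituents, in the simulation environment.

```python
# Monte Carlo sketch for estimating the probability of mission failure over the ODD.
# simulate_mission is a hypothetical stand-in for running the full simulation.
import numpy as np

rng = np.random.default_rng(seed=1)

def simulate_mission(scenario: dict) -> bool:
    """Hypothetical stub: returns True if the mission fails for this scenario."""
    return scenario["wind_gust_std_fps"] > 5.5 and rng.random() < 0.3

n_trials = 10_000
failures = 0
for _ in range(n_trials):
    scenario = {"wind_gust_std_fps": float(rng.triangular(0.0, 2.0, 6.0))}
    failures += simulate_mission(scenario)

p_hat = failures / n_trials
half_width = 1.96 * np.sqrt(p_hat * (1.0 - p_hat) / n_trials)  # 95% normal-approximation CI
print(f"P(mission failure) ~ {p_hat:.4f} +/- {half_width:.4f}")
```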

Overall, the framework facilitates AI/ML-based decision-making for autonomous systems in complex and uncertain environments, supporting both autonomous and pilot-assistance systems while ensuring system safety.


Fixed Wing Formation Flying Case Study:

A Case Study will be presented to demonstrate formation flying (two fixed-wing aircraft) executing a series of 90-degree turns at high speed, following the different steps of the Autonomy MBSE Framework.

The functions to be developed include:

- Traditional flight and engine control for the ego aircraft (automatically following the lead aircraft)
- AI-based perception software, using camera sensors on the ego aircraft, to compute the position and orientation of the lead aircraft
- AI-based automated stick agent for the ego aircraft to achieve the formation flying objective (separation maintained between 250 ft and 500 ft); see the sketch after this list
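
As a hedged sketch of how the third function's objective might be expressed for the reinforcement-learning stick agent, the snippet below encodes the 250 ft to 500 ft separation band as a reward term. The shaping constants are illustrative assumptions, not the values used in the case study.

```python
# Reward term encoding the formation flying objective: keep separation to the
# lead aircraft between 250 ft and 500 ft. Shaping constants are illustrative.

def separation_reward(separation_ft: float,
                      lower_ft: float = 250.0,
                      upper_ft: float = 500.0) -> float:
    """Positive inside the allowed band, penalized in proportion to the violation."""
    if lower_ft <= separation_ft <= upper_ft:
        return 1.0
    violation = (lower_ft - separation_ft) if separation_ft < lower_ft \
        else (separation_ft - upper_ft)
    return -0.01 * violation

assert separation_reward(375.0) == 1.0
assert separation_reward(200.0) < 0.0   # too close
assert separation_reward(600.0) < 0.0   # too far
```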

As part of this demo, the use of the You Only Look Once (YOLO) v7 algorithm, OpenAI's Proximal Policy Optimization (PPO) as implemented in stable-baselines3, and SysML V2 for System Architecture Modeling will be demonstrated.
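
To give a flavor of how the YOLOv7-based perception constituent might be exercised, the sketch below runs an ONNX-exported detector with onnxruntime, one common deployment route. The file name, input size, and output layout are assumptions that depend on the export options; box decoding and non-maximum suppression are omitted.

```python
# Sketch of running a YOLOv7 detector exported to ONNX with onnxruntime.
# File name, input size, and output layout are assumptions tied to the export.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("yolov7_lead_aircraft.onnx",   # hypothetical export
                               providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# One 640x640 RGB frame, normalized to [0, 1], in NCHW layout (typical YOLOv7 input).
frame = np.zeros((1, 3, 640, 640), dtype=np.float32)

outputs = session.run(None, {input_name: frame})
print([o.shape for o in outputs])   # raw detections; decoding depends on the export
```

In the case study, such detections would feed the perception function that estimates the lead aircraft's position and orientation relative to the ego aircraft.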