EASA Promotes Safe Machine Learning Integration for the Aviation Industry

March 10, 2024

The European Union Aviation Safety Agency (EASA) has taken a step forward in promoting the safe use of machine learning (ML) in the aviation industry. Its recently released concept paper provides much-needed clarity for manufacturers, software providers, and operators looking to leverage this powerful technology. As part of EASA’s AI Roadmap, the paper aims to develop principles and guidance that can later be integrated into rules and acceptable means of compliance (AMC).
Safe and Responsible Machine Learning Integration
The EASA guidance examines in detail how ML can be safely integrated into various aviation applications, outlining two key levels of ML implementation:

Level 1: Decision Support for Human Users: Here, ML acts as a co-pilot, assisting human operators by providing insights and recommendations.  This could involve tasks like analysing sensor data to predict potential maintenance issues or optimizing flight paths for fuel efficiency. The ultimate decision-making authority, however, remains with the human pilot or engineer.

Level 2: Taking Control of Specific Functions (Under Certain Conditions): Here, ML may take over pre-defined functions under controlled conditions. For example, an ML system might automatically manage certain aircraft systems during routine flight phases, freeing up pilot workload for other critical tasks. EASA emphasizes, however, that human oversight remains essential, with clear transition procedures in place for when the system needs to hand control back to the pilot.

To ensure the safety and reliability of both levels, EASA establishes a set of trustworthiness objectives that developers must meet. These objectives address critical aspects such as data quality, explainability of AI decisions, and robust risk management processes.

High-Quality Data: The Fuel for Safe ML in Aviation
One of the most critical aspects of the paper is the focus on high-quality data. As with any AI system, the effectiveness of ML applications hinges on the data they are trained on. EASA emphasizes the importance of using relevant, complete, accurate, and up-to-date data. Additionally, proper data labelling is crucial to ensure the ML system interprets the information correctly.
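
As a purely illustrative sketch of what such checks might look like in practice (the dataset, column names, and allowed labels below are invented and not taken from the EASA paper), a developer could start with simple completeness, duplication, and label-validity checks before training:

```python
import pandas as pd

# Hypothetical maintenance-log dataset and column names, used only for illustration.
records = pd.DataFrame({
    "sensor_id": ["S1", "S1", "S2", None],
    "vibration_mm_s": [2.1, 2.1, 15.8, 3.4],
    "label": ["normal", "normal", "fault", "normal"],
})

ALLOWED_LABELS = {"normal", "fault"}

def basic_data_quality_report(df: pd.DataFrame) -> dict:
    """Collect simple completeness, duplication, and label-validity metrics."""
    return {
        "missing_values": int(df.isna().sum().sum()),                        # completeness
        "duplicate_rows": int(df.duplicated().sum()),                        # redundancy
        "invalid_labels": int((~df["label"].isin(ALLOWED_LABELS)).sum()),    # labelling
    }

print(basic_data_quality_report(records))
```
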
Transparency and Explainability: Building Trust in Aviation AI
Building trust in AI systems requires transparency in how they arrive at their decisions. The EASA document highlights the need for clear explanations of how a system works and why it makes particular recommendations, so that users can understand, trust, and rely on its output.
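
One common way to approach explainability, sketched below under invented assumptions (synthetic data and hypothetical feature names such as engine_temp), is to report how strongly each input feature influences a model's predictions, for example via permutation importance:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for flight/sensor features; the feature names are assumptions.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["engine_temp", "vibration", "fuel_flow", "altitude"]

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature degrade performance?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Techniques like this do not replace the structured explainability objectives in the EASA paper, but they illustrate the kind of evidence a developer might produce for users and assessors.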
AI and Safety Risk Management: Soaring to New Heights
Data science plays a vital role in safety risk management by enabling the analysis of large volumes of operational data to identify patterns and potential risks. Machine learning enhances this process by:

  • Emerging Risk Detection: AI can proactively identify new and unforeseen safety risks by analysing vast datasets.
  • Risk Classification: Machine learning algorithms can potentially classify safety occurrences based on severity and impact (see the sketch below).

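As a minimal sketch of the risk-classification idea (the occurrence narratives and severity labels below are invented for illustration), a simple text classifier could be trained to assign a severity category to free-text occurrence reports:

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny illustrative set of occurrence narratives; texts and labels are invented.
reports = [
    "bird strike on approach, no damage observed",
    "engine fire warning during climb, emergency declared",
    "minor taxiway excursion, aircraft returned to stand",
    "cabin smoke reported, diversion and evacuation",
]
severity = ["minor", "severe", "minor", "severe"]

# Bag-of-words text classifier as a stand-in for occurrence-severity classification.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(reports, severity)

print(classifier.predict(["smoke detected in galley during cruise"]))
```
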
The Future of AI in Aviation Safety Management
The future holds great potential for AI applications, especially in the aviation industry. Establishing a framework and guidelines for the safe integration of this technology is a necessary step towards harnessing the power of AI responsibly. Ensuring safe human-AI interaction will be an exciting challenge in the near future – not just here at ASQS!

Find the EASA concept paper for download here: https://www.easa.europa.eu/en/newsroom-and-events/news/easa-publishes-artificial-intelligence-concept-paper-issue-02-guidance