EASA Promotes Safe Machine Learning Integration for the Aviation Industry
Knowledge / March 10, 2024
Safe and Responsible Machine Learning Integration
Level 1: Decision Support for Human Users: Here, ML acts as a co-pilot, assisting human operators by providing insights and recommendations. This could involve tasks like analysing sensor data to predict potential maintenance issues or optimizing flight paths for fuel efficiency. The ultimate decision-making authority, however, remains with the human pilot or engineer.
Level 2: Taking Control of Specific Functions (Under Certain Conditions): This level allows ML to take over specific, pre-defined functions under controlled conditions. For example, an ML system might be used to automatically manage certain aircraft systems during routine flight phases, reducing pilot workload and freeing attention for other critical tasks. However, EASA emphasizes that human oversight remains essential, with clear transition procedures in place for when the system needs to hand control back to the pilot.
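To make the hand-back idea more concrete, here is a minimal, purely illustrative sketch (not taken from the EASA paper) of how a Level 2 style function could gate its authority: the ML component acts only while pre-defined conditions hold and reverts control to the pilot the moment any of them fails. The flight phases, confidence threshold, and names used here are assumptions made for this example.

```python
"""Illustrative sketch only (not EASA guidance): a guard that lets an ML
function act under pre-defined conditions and hands control back to the
pilot otherwise. Phases, thresholds and names are assumptions."""

from dataclasses import dataclass
from enum import Enum, auto


class FlightPhase(Enum):
    CLIMB = auto()
    CRUISE = auto()
    DESCENT = auto()
    APPROACH = auto()


class Authority(Enum):
    ML_SYSTEM = auto()   # ML may manage the delegated function
    PILOT = auto()       # control stays with (or reverts to) the pilot


@dataclass
class SystemState:
    phase: FlightPhase
    sensors_healthy: bool
    ml_confidence: float   # model's self-reported confidence, 0..1
    pilot_override: bool   # pilot has requested manual control


# Assumed envelope in which the ML function is allowed to act.
ALLOWED_PHASES = {FlightPhase.CLIMB, FlightPhase.CRUISE}
MIN_CONFIDENCE = 0.95


def resolve_authority(state: SystemState) -> Authority:
    """Return who holds authority for the delegated function right now."""
    if state.pilot_override:
        return Authority.PILOT          # a pilot request always wins
    if state.phase not in ALLOWED_PHASES:
        return Authority.PILOT          # outside the approved envelope
    if not state.sensors_healthy:
        return Authority.PILOT          # degraded inputs: hand back
    if state.ml_confidence < MIN_CONFIDENCE:
        return Authority.PILOT          # low confidence: hand back
    return Authority.ML_SYSTEM


if __name__ == "__main__":
    cruise = SystemState(FlightPhase.CRUISE, True, 0.98, False)
    approach = SystemState(FlightPhase.APPROACH, True, 0.98, False)
    print(resolve_authority(cruise))    # Authority.ML_SYSTEM
    print(resolve_authority(approach))  # Authority.PILOT
```

The point is the shape of the logic rather than the specific conditions: authority is conditional, and the default whenever any condition is in doubt is the human pilot.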
To ensure the safety and reliability of both levels, EASA establishes a set of trustworthiness objectives that developers must meet. These objectives address critical aspects such as data quality, the explainability of AI decisions, and robust risk management processes.
High-Quality Data: The Fuel for Safe ML in Aviation
Transparency and Explainability: Building Trust in Aviation AI
AI and Safety Risk Management: Soaring to New Heights
- Emerging Risk Detection: AI can proactively identify new and unforeseen safety risks by analysing vast datasets.
- Risk Classification: Machine learning algorithms can potentially classify safety occurrences by severity and impact (a rough sketch of both ideas follows this list).
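As a rough, non-authoritative illustration of these two ideas (not a method described in the EASA paper), the sketch below uses scikit-learn: an isolation forest flags unusual occurrence records as candidate emerging risks, and a small text classifier assigns a severity label to occurrence narratives. The feature names, narratives, and labels are invented for the example.

```python
"""Illustrative sketch only (not from the EASA paper): anomaly detection for
emerging-risk candidates and a simple severity classifier for occurrence
narratives. All features, labels and data below are assumptions."""

import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# --- Emerging risk detection: flag unusual occurrence records ------------
# Toy numeric features per occurrence, e.g. [altitude_deviation_ft, speed_deviation_kt]
rng = np.random.default_rng(0)
routine = rng.normal(loc=[50, 5], scale=[20, 2], size=(200, 2))
unusual = np.array([[400, 30], [350, 25]])          # injected outliers
occurrences = np.vstack([routine, unusual])

detector = IsolationForest(contamination=0.02, random_state=0)
flags = detector.fit_predict(occurrences)           # -1 marks candidate anomalies
print("candidate emerging risks:", np.where(flags == -1)[0])

# --- Risk classification: severity label from occurrence narratives ------
narratives = [
    "minor taxiway incursion, no conflict",
    "runway excursion on landing in heavy rain",
    "bird strike on climb, precautionary return",
    "loss of separation resolved by TCAS advisory",
]
severity = ["low", "high", "low", "high"]           # toy labels

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(narratives, severity)
print(classifier.predict(["runway overrun after late touchdown"]))
```

In practice, any such models would themselves fall under the trustworthiness objectives described above, particularly those on data quality and explainability.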
The Future of AI in Aviation Safety Management
The EASA concept paper is available for download here: https://www.easa.europa.eu/en/newsroom-and-events/news/easa-publishes-artificial-intelligence-concept-paper-issue-02-guidance