The European Union Aviation Safety Agency (EASA) has released results of a survey focused on Ethics for Artificial Intelligence (AI) in Aviation.
Released to coincide with the Agency’s annual AI Days event, the report presents comments and views from aviation professionals across Europe on the ethics of AI use in aviation operations.

The survey examined eight hypothetical scenarios where AI could be applied and measured levels of comfort, trust and general acceptance. Results gathered showed a balanced perspective, with an average score of 4.4 out of 7 for the acceptance of AI within operations.
Key findings include reservations from around two-thirds of respondents, with one AI scenario rejected outright, as well as concerns about the limits of AI performance, data protection and privacy, accountability, and possible safety implications.
Elsewhere, a majority of respondents called for robust regulation and supervision by EASA, as well as national aviation authorities, to maintain safe and responsible AI integration.
Finally, concerns were voiced regarding the ‘de-skilling’ of the workforce, with participants worried that human knowledge and abilities could begin to degrade if AI were to take over tasks.
Commenting on the report, Guillaume Soudain, EASA AI Programme Manager, said: “AI offers tremendous opportunities to improve aviation safety and efficiency, but trust is critical. This survey underscores the importance of a balanced regulatory framework, one that ensures the highest level of safety for citizens while also fostering innovation and competitiveness in Europe’s aviation sector.”
The third edition of EASA’s AI Days event was held in Cologne on 27–28 August, with nearly 200 aviation professionals in attendance.
In a keynote speech, Christine Berg, Head of Unit – Aviation Safety at the European Commission (DG MOVE), showcased a wider European context for the adoption of AI.
Christine Berg said: “In the transport domain, AI is already more than theory.
It is being deployed to optimise traffic flows, enhance predictive maintenance, and enable autonomous systems. The potential is vast. But so are the safety and certification challenges.
Aviation is safety-critical by definition. This means we need systems that are not only intelligent, but also explainable, reliable, and certifiable. EASA’s work, including its AI roadmap and guidance on machine learning assurance levels, is essential in addressing these questions.”
The full survey report is available on the EASA website.