Beluga Competition Winners: Explainability Prize – Teaching AI to Explain Itself

In this article, we teamed up with Daniel Gnad and his brilliant team at Linköping University, who took home the Explainability Prize for teaching AI how to explain its own decisions.
Besides Daniel Gnad, the team consisted of his colleague Jendrik Seipp and four doctoral students: Elliot Gestrin, Gustaf Söderholm, Paul Höft, and Mauricio Salerno.
The Problem
Imagine you are managing a massive aircraft factory where giant cargo planes called Belugas arrive daily, delivering wings, turbines and other airplane parts. These components must be carefully stored on limited rack space before they can be assembled into new aircraft.
Perhaps you would like an artificial intelligence system to solve this complex logistics problem, deciding where to place each part and when to move them around.
But what happens when a human supervisor questions the AI’s choices?
“Why did you put that wing component on rack 3 instead of rack 7?”
To gain human trust and enable effective collaboration, it is crucial that AI systems can answer these kinds of questions. As such, TUPLES and Airbus organized the international Beluga AI Challenge, where one of the tasks was to develop an AI system capable of explaining itself.
The Swedish Solution
A team from Linköping University took part and won the competition with their system BeLiUga-Explain. Their breakthrough came from an approach called “counterfactual reasoning”, essentially teaching the AI to consider “what if” scenarios. When someone questions a decision, the system creates an alternative version of the problem where that specific choice is forbidden or enforced. It then finds the best possible solution under these new constraints and compares it with the original.
For example, if asked why a wing component was stored on rack A instead of rack B, BeLiUga-Explain re-solves the entire logistics task with the restriction that the wing must go on rack B. It then compares the two solutions and uses modern large language models (LLMs) to explain the trade-offs in a human-understandable way:
“Using rack B would have required three additional component moves and thereby delayed assembly.”
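The core idea of counterfactual explanation can be illustrated with a small sketch: solve the original problem, re-solve it with the questioned choice enforced, and report the difference in cost. This is only a toy model of the approach, not the actual BeLiUga-Explain system; the parts, racks, and move costs below are made-up illustrative values.

```python
from itertools import permutations

# Toy model: each (part, rack) pair carries a cost in "extra moves" needed
# before assembly. All names and numbers are illustrative assumptions.
MOVE_COST = {
    ("wing", "rack_a"): 1, ("wing", "rack_b"): 4,
    ("turbine", "rack_a"): 3, ("turbine", "rack_b"): 2,
}

def solve(forbidden=None, enforced=None):
    """Brute-force the cheapest one-to-one part->rack assignment,
    optionally forbidding or enforcing specific (part, rack) choices."""
    forbidden = forbidden or set()
    enforced = enforced or set()
    parts, racks = ["wing", "turbine"], ["rack_a", "rack_b"]
    best = None
    for order in permutations(racks):
        plan = dict(zip(parts, order))
        if any((p, r) in forbidden for p, r in plan.items()):
            continue  # counterfactual constraint: this choice is banned
        if any(plan.get(p) != r for p, r in enforced):
            continue  # counterfactual constraint: this choice is required
        cost = sum(MOVE_COST[p, r] for p, r in plan.items())
        if best is None or cost < best[1]:
            best = (plan, cost)
    return best

def explain(part, alt_rack):
    """Counterfactual question: why was `part` not placed on `alt_rack`?
    Re-solve with that choice enforced and compare the two solutions."""
    _, cost = solve()
    _, alt_cost = solve(enforced={(part, alt_rack)})
    return (f"Placing {part} on {alt_rack} would cost "
            f"{alt_cost - cost} extra move(s) ({cost} -> {alt_cost}).")

print(explain("wing", "rack_b"))
```

In the real system, the "solve" step is a full AI planner and the comparison is handed to an LLM to phrase the trade-offs naturally; the contrastive structure, however, is exactly this: original plan versus best plan under the added constraint.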
More Than Just Moving Parts
The implications extend far beyond aircraft manufacturing. Daniel Gnad, the team leader and assistant professor at Linköping University, explains:
“This is not just about logistics – it is about trust. When AI systems make decisions that affect safety, efficiency, or costs, humans need to understand the reasoning behind those choices.”
As part of a PhD course, the team combined their expertise in AI planning and human-computer interaction to design their solution. What makes this success remarkable is that the team had no prior specialization in explainable AI; it was the unique combination of different areas of expertise that made the difference.
Flying Onwards
Members of the team have since continued working on explainability solutions. Their newest system, called PlanPilot, allows users to interactively explore the diverse set of plans found by AI planning systems, through similarly enforcing or forbidding certain choices, without needing to re-solve a modified problem. Making systems capable of naturally and quickly interacting with humans like this is a key requirement for AI to be applicable in real-world scenarios.
The team will continue building on their findings to make AI a transparent and trustworthy partner in complex decision-making.
