From Competition to Classroom: A PhD Course at Linköping University

Scientific competitions are increasingly proving to be effective bridges between academia and industry. They push professionals to look at their challenges with fresh perspectives while giving students a realistic view of how problems are tackled in practice.
The TUPLES Beluga Challenge was so compelling that Linköping University launched a specialized PhD course: “Sequential Decision-Making at Airbus – The Beluga Challenge.”
Organized by the Machine Reasoning Lab, the course immersed students in an industrial AI application, guiding them to understand the problem in depth, design solutions, and exchange feedback.
A team of ten PhD students and two professors from LiU submitted their work to the competition and achieved remarkable success: first place in the Explainability track.
A Comprehensive Approach to Complex AI Problems
The TUPLES Beluga Challenge models the logistics of transporting aircraft parts using Airbus Beluga airplanes. The core of the problem lies in managing the delivery of parts on “jigs”, which need to be stored and reordered in capacity-limited racks, a task made difficult by potential congestion and the large number of jigs.
The LiU team tackled this intricate challenge head-on by creating three distinct systems:
- BeLiUga-Plan for the Scalability Deterministic track, where the team secured third place. This system, designed for scenarios with fixed flight schedules, decomposes the overall problem into a sequence of smaller, more manageable subtasks, one for each aircraft (see the first sketch after this list). This decomposition made it possible to solve very large problem instances with off-the-shelf planning systems.
- BeLiUga-Reinforce for the Scalability Deterministic and Probabilistic tracks, the latter adding the complexity of uncertain flight arrival times. This solution used Proximal Policy Optimization (PPO), a reinforcement learning algorithm, together with dynamic action masking to rule out invalid actions (see the second sketch below) and curriculum learning to expose the agent to progressively harder problems.
- BeLiUga-Explain for the Explainability track, which the team won. This system answers a user’s questions about a given plan, thereby explaining the reasoning behind a planning system’s decisions. It works by generating “counterfactuals”: alternative plans that satisfy the constraints implied by a specific question (e.g., “Why was jig X loaded on rack A instead of rack B?”), and then using a large language model to compare the original plan with the counterfactual and produce a clear, natural-language explanation (see the third sketch below).
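To make the decomposition idea behind BeLiUga-Plan concrete, here is a minimal sketch of solving one subtask per aircraft and chaining the resulting sub-plans. The names (solve_subtask, flights, and so on) are illustrative assumptions, not the team’s actual interfaces:

```python
# Illustrative sketch of per-aircraft decomposition (hypothetical names,
# not the actual BeLiUga-Plan code): solve one small planning problem per
# flight and chain the sub-plans together.

def solve_by_decomposition(flights, initial_state, solve_subtask):
    """Plan the whole logistics problem one flight at a time.

    solve_subtask(state, flight) is assumed to call an off-the-shelf
    planner on the small problem of loading/unloading a single aircraft
    and to return (sub_plan, resulting_state).
    """
    state = initial_state
    full_plan = []
    for flight in flights:
        sub_plan, state = solve_subtask(state, flight)
        full_plan.extend(sub_plan)
    return full_plan
```

Because each subtask involves only one aircraft’s jigs, the planner never has to reason about the full instance at once, which is what lets off-the-shelf systems scale to very large instances.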
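Dynamic action masking, as used in BeLiUga-Reinforce, can be illustrated in a few lines of NumPy: before sampling an action, the agent sets the logits of currently invalid actions to negative infinity so they receive zero probability. This is a generic sketch of the technique, not the team’s PPO implementation:

```python
import numpy as np

def sample_masked_action(logits, valid_mask, rng=None):
    """Sample an action from policy logits, excluding invalid actions.

    logits:     raw policy outputs, shape (num_actions,)
    valid_mask: boolean array, True where an action is currently legal
    """
    rng = rng or np.random.default_rng()
    # Invalid actions get a logit of -inf, i.e. zero probability after softmax.
    masked = np.where(valid_mask, logits, -np.inf)
    # Numerically stable softmax over the legal actions.
    exp = np.exp(masked - masked[valid_mask].max())
    probs = exp / exp.sum()
    return rng.choice(len(logits), p=probs)
```

Curriculum learning then amounts to ordering the training instances from easy to hard, for example by the number of jigs, so the agent masters small scenarios before facing large ones.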
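Finally, the counterfactual idea behind BeLiUga-Explain can be sketched in three steps: encode the user’s question as an extra constraint, re-plan under that constraint, and let a language model contrast the two plans. The helper names (replan_with_constraint, the llm callable, the prompt wording) are assumptions for illustration, since the article does not describe the system’s actual interfaces:

```python
def explain_decision(problem, original_plan, constraint, replan_with_constraint, llm):
    """Answer a "why A instead of B?" question about a plan.

    constraint encodes the user's alternative, e.g. "jig X goes on rack B".
    replan_with_constraint is assumed to call the planner with that
    constraint added, returning a counterfactual plan (or None if infeasible).
    """
    counterfactual = replan_with_constraint(problem, constraint)
    if counterfactual is None:
        return llm(f"Explain why no plan exists when we require: {constraint}.\n"
                   f"Original plan:\n{original_plan}")
    # Let the language model contrast the two plans in natural language.
    prompt = (
        "Compare these two plans and explain, in plain language, why the "
        "original choice was made.\n\n"
        f"Original plan:\n{original_plan}\n\n"
        f"Counterfactual plan (with '{constraint}'):\n{counterfactual}"
    )
    return llm(prompt)
```

The key design choice here is that the planner, not the language model, produces the counterfactual, so the explanation is grounded in an actual alternative plan rather than in the model’s own guesswork.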
A Glimpse into the Student Experience
The course provided students with a unique opportunity to apply their academic knowledge to a challenging industrial application. The students describe the experience as a rewarding step from theoretical research to practical, real-world problem-solving.
“Working on a problem so closely related to an actual industrial application was incredibly eye-opening”, said Elliot Gestrin. “It really showed us the kind of challenges that exist beyond the academic benchmarks we’re used to.”
Paul Höft remarked, “The Beluga Challenge pushed us to think creatively and collaboratively. The problem was complex, and each track required a completely different approach. It was a true team effort to find solutions that were not only effective but also innovative.”
Mauricio Salerno, a guest PhD student from Spain, highlighted the significance of the Explainability track win: “Winning the Explainability track was particularly special. In a world where AI is becoming more and more integrated into critical systems, the ability to explain ‘why’ a decision was made is just as important as the decision itself. Our work shows that we’re ready to build AI that is not only smart but also trustworthy.”
The LiU team’s success highlights the value of the course in bridging the gap between academic research and industrial application, preparing the next generation of AI experts to tackle some of the world’s most complex problems.
The team, led by Daniel Gnad, consisted of Elliot Gestrin, Arash Haratian, Paul Höft, Oliver Joergensen, Arnaud Lequen, Mauricio Salerno, Jendrik Seipp, Gustaf Söderholm, and Amath Sow.
Arash Haratian, Arnaud Lequen, Amath Sow, and Oliver Joergensen participated in the Beluga Challenge but not in the Explainability track.
