A paper by the Saarland University AI Group has been presented at the 37th AAAI Conference

Marcel Vinzent, PhD student at the AI Group at Saarland University, has presented a paper on neural policy safety at the prestigious AAAI Conference, whose 37th edition took place in Washington, DC. The paper is “Neural Policy Safety Verification via Predicate Abstraction: CEGAR”, written by Marcel Vinzent, Siddhant Sharma, and Jörg Hoffmann.

Neural networks (NNs) are an increasingly important representation of action policies π. Recent work has extended predicate abstraction to prove the safety of such π through policy predicate abstraction (PPA), which over-approximates the state space subgraph induced by π. The advantage of PPA is that reasoning about the NN – calls to SMT solvers – is required only locally, at individual abstract state transitions, in contrast to bounded model checking (BMC), where SMT must reason globally about sequences of NN decisions.
Indeed, it has been shown that PPA can outperform a simple BMC implementation. However, the abstractions underlying these results (i.e., the abstraction predicates) were supplied manually.
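To give a flavor of the locality argument, here is a minimal toy sketch (not the paper's implementation): concrete states are integers, a stand-in function plays the role of the neural policy, and two hand-picked predicates partition the state space into abstract states. Each abstract transition is obtained by querying the policy only at the concrete states involved, rather than reasoning over whole action sequences as BMC would.

```python
# Toy sketch of policy predicate abstraction (PPA). Illustrative only:
# the policy, predicates, and transition function are all invented here.

def policy(x):
    # Stand-in for a neural policy: choose an action from the state.
    return "inc" if x < 3 else "dec"

def step(x, action):
    # Deterministic transition function for this toy domain.
    return x + 1 if action == "inc" else x - 1

# Abstraction predicates over the state variable x.
predicates = [lambda x: x >= 0, lambda x: x <= 5]

def abstract(x):
    # An abstract state is the tuple of predicate truth values at x.
    return tuple(p(x) for p in predicates)

def abstract_transitions(domain):
    # PPA idea: the policy is queried *locally*, one concrete state at a
    # time, and each query induces one abstract transition. No global
    # reasoning about sequences of policy decisions (as in BMC) is needed.
    trans = set()
    for x in domain:
        trans.add((abstract(x), abstract(step(x, policy(x)))))
    return trans

print(sorted(abstract_transitions(range(-2, 8))))
```

In the real setting, each local query is an SMT call about the NN's behavior on one abstract state; the toy enumeration above just mimics that locality.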

In this work, the authors automate this step by extending counterexample-guided abstraction refinement (CEGAR) to PPA. This involves dealing with a new source of spuriousness in abstract unsafe paths, pertaining not to transition behavior but to the decisions of the neural network π.
They introduce two methods tackling this issue based on the states involved, and show that global SMT calls deciding spuriousness exactly can be avoided. They also devise algorithmic enhancements leveraging incremental computation and heuristic search. Empirically, the resulting verification tool has significant advantages over an encoding into the state-of-the-art model checker nuXmv; in particular, it is the only approach in the authors' experiments that succeeds in proving policies safe.

The work contributes to TUPLES’s goal of developing verification methods capable of reasoning about AI systems, specifically those represented by neural networks. Saarland University is a partner in the TUPLES project (link to the Saarland page in TUPLES).

You can read the paper here.

Latest News

Workshop on Reliability In Planning and Learning (RIPL)

We’re proud to share that members of the TUPLES consortium — including Daniel Höller, Jörg Hoffmann, and our coordinator Sylvie Thiébaux — are on the organizing committee of the RIPL Workshop (Reliability in Planning and Learning), part of the prestigious ICAPS 2025 conference.

TUPLES Waste services

A recent article in the Czech magazine Nový Prostor, featuring our partners at the Faculty of Electrical Engineering at CVUT (Czech Technical University in Prague), offers a clear and accessible view of how TUPLES technology is already making a difference in urban waste collection — one of the project’s five real-world use cases.

Nearing the finish line of the TUPLES project, we’re reflecting on three years of ambitious, collaborative research into Trustworthy AI for Planning and Scheduling.

The Beluga™ Competition set a tough real-world scheduling challenge — and we have our winners! Thanks to all who took part.