A paper by the Saarland AI Group was presented at the 37th AAAI Conference


Marcel Vinzent, a PhD student in the AI Group at Saarland University, presented a paper on neural policy safety at the prestigious AAAI Conference, whose 37th edition took place in Washington, DC.
The paper, “Neural Policy Safety Verification via Predicate Abstraction: CEGAR”, is written by Marcel Vinzent, Siddhant Sharma, and Jörg Hoffmann.

Neural networks (NNs) are an increasingly important representation of action policies π. Recent work has extended predicate abstraction to prove safety of such π through policy predicate abstraction (PPA), which over-approximates the state-space subgraph induced by π. The advantage of PPA is that reasoning about the NN (calls to SMT solvers) is required only locally, at individual abstract state transitions, in contrast to bounded model checking (BMC), where SMT must reason globally about sequences of NN decisions.
Indeed, it has been shown that PPA can outperform a simple BMC implementation. However, the abstractions underlying these results (i.e., the abstraction predicates) were supplied manually.
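To make the contrast concrete, here is a minimal sketch of such a local SMT query in Python with the z3 solver. The tiny ReLU policy network, its weights, and the predicates defining the abstract state are illustrative assumptions, not taken from the paper; PPA would issue one such query per candidate abstract transition, whereas BMC would have to encode the network once per step of an unrolled path.

```python
# A minimal sketch of PPA's "local" SMT reasoning (illustrative toy, not the
# paper's encoding): one query about one abstract state and one action.
from z3 import Real, Solver, If, And, sat

x, y = Real("x"), Real("y")            # concrete state variables

# Abstract state: the truth values of two predicates, here x >= 0 and y >= 1.
abstract_state = And(x >= 0, y >= 1)

def relu(t):
    return If(t > 0, t, 0)

# Toy policy network pi: one ReLU score per action (hand-picked weights).
score_a0 = relu(x - y)                 # score for action a0
score_a1 = relu(2 * y - x)             # score for action a1

# Local PPA query: does pi select action a0 for SOME concrete state in this
# abstract state? If not, all a0-transitions can be pruned here.
s = Solver()
s.add(abstract_state, score_a0 > score_a1)
if s.check() == sat:
    print("pi selects a0 somewhere in the abstract state:", s.model())
else:
    print("pi never selects a0 here; prune its abstract transitions")
```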

In this work, the authors automate this step by extending counterexample-guided abstraction refinement (CEGAR) to PPA. This involves dealing with a new source of spuriousness in abstract unsafe paths, pertaining not to transition behavior but to the decisions of the neural network π.
They introduce two methods for tackling this issue, based on the states involved, and show that global SMT calls deciding spuriousness exactly can be avoided. They also devise algorithmic enhancements leveraging incremental computation and heuristic search. Empirically, the resulting verification tool has significant advantages over an encoding into the state-of-the-art model checker nuXmv; in particular, their approach is the only one in their experiments that succeeds in proving policies safe.
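To illustrate the overall loop, here is a self-contained toy CEGAR-for-PPA run in Python. The integer state space, the hand-written policy, the threshold predicates, and the naive refinement rule are all assumptions made for illustration; the paper works with SMT-level abstractions of genuine neural policies and derives new predicates from the spurious path itself.

```python
# Toy CEGAR loop over a policy-restricted state space. Everything here
# (domain, policy, predicates, refinement rule) is illustrative only.
STATES = range(32)                       # finite concrete state space
START = 0

def unsafe(s):                           # safety property: never reach s >= 24
    return s >= 24

def policy(s):
    # Stand-in for the NN policy pi: climb while low, back off when high.
    return -4 if s >= 16 else (3 if s % 2 == 0 else -1)

def step(s):                             # pi-induced successor (clamped)
    return max(0, min(31, s + policy(s)))

def abstract(s, preds):                  # abstract state = predicate truth vector
    return tuple(s >= t for t in preds)

def build_ppa(preds):
    # Over-approximate the pi-induced subgraph. The toy enumerates concrete
    # states; PPA instead issues one *local* SMT call per candidate abstract
    # transition, asking whether any concrete state realizes it.
    return {(abstract(s, preds), abstract(step(s), preds)) for s in STATES}

def find_unsafe_path(preds, edges):
    # Search the abstraction for a path from the initial abstract state to an
    # abstract state containing unsafe concrete states (the paper uses
    # heuristic search; plain BFS suffices for the toy).
    bad = {abstract(s, preds) for s in STATES if unsafe(s)}
    frontier, seen = [[abstract(START, preds)]], set()
    while frontier:
        path = frontier.pop(0)
        if path[-1] in bad:
            return path
        for a, b in edges:
            if a == path[-1] and b not in seen:
                seen.add(b)
                frontier.append(path + [b])
    return None

def realizable(path, preds):
    # Decide spuriousness by replaying concrete pi-successors along the path.
    cur = {START}
    for abs_next in path[1:]:
        cur = {step(s) for s in cur if abstract(step(s), preds) == abs_next}
        if not cur:
            return False                 # spurious: refine the abstraction
    return any(unsafe(s) for s in cur)

preds = []                               # start from the trivial abstraction
while True:
    path = find_unsafe_path(preds, build_ppa(preds))
    if path is None:
        print("SAFE with predicates", preds)
        break
    if realizable(path, preds):
        print("UNSAFE, abstract counterexample is realizable:", path)
        break
    # Naive refinement, sufficient for this toy: split on the unsafety
    # bound first, then bisect.
    preds.append(24 if not preds else max(preds) // 2)
```

Run as-is, this terminates after one refinement: the trivial abstraction yields a spurious unsafe path, adding the threshold predicate s >= 24 separates safe from unsafe states, and the refined abstraction proves the toy policy safe.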

The work contributes to the TUPLES goal of developing verification methods capable of reasoning about AI systems, specifically those represented by neural networks. Saarland University is a partner in the TUPLES project.

You can read the paper here.

Latest News

AI in the places where it's actually done, with little media hype and a lot of hard work.

Daniel Höller, Jörg Hoffmann, Marcel Steinmetz, and our chief coordinator Sylvie Thiebaux are on the organizing committee of the upcoming third Workshop on Reliable Data-Driven Planning and Scheduling (RDDPS), held within the International Conference on Automated Planning and Scheduling (ICAPS), the premier forum for researchers and practitioners in planning and scheduling.

The manufacturing use case builds on well-known scheduling, resource-allocation, and planning challenges, but a number of special features make it considerably more complex. It comprises two applications: robust scheduling of resources and resource allocation for rack systems.

Online collaboration works well, but sometimes meeting face to face is really a good thing, so we did it for the third time, at KU Leuven.