where data-driven and algorithmic solutions meet

Trustworthy AI

Our long-term objective is to help build world-class transparent, explainable, accountable and trustworthy AI, based on smart, safe, secure, resilient, accurate, robust, reliable and dependable solutions.

Decision Support Systems (DSSs) provide algorithmic solutions to help a human decision maker deal with complex problems. Most DSSs used in real-world contexts rely on symbolic approaches: these allow a decision maker to specify, in a formal language, which decisions or actions are available, how they affect the world, and which constraints must be satisfied. Automated solvers can then search for a set of decisions (i.e. a solution) that satisfies all constraints and optimises a given objective. In contrast, purely data-driven approaches have recently emerged; they compute action policies by learning from historical decision data or by autonomously experimenting with a simulator.
  • Symbolic approaches are more interpretable, give the user better control, and typically enable guarantees on constraint satisfaction and other desirable properties of the solution. However, they struggle with uncertainty, cannot exploit implicit knowledge, typically rely on (possibly inaccurate) hand-crafted models, and have limited scalability.
  • Purely data-driven approaches have complementary properties: they handle uncertainty well and (once learning is over) can cope with large-scale problems. However, they are more opaque, they are less controllable, and guarantees on properties of the solution are typically not given.
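
To make the symbolic formulation above concrete, the following is a minimal, self-contained sketch of a toy scheduling problem; the tasks, constraints and objective are invented purely for illustration, and a real symbolic DSS would use a dedicated modelling language and solver (e.g. constraint programming, mixed-integer programming, or a PDDL planner) rather than exhaustive enumeration.

```python
from itertools import product

TASKS = ["inspect", "refuel", "load"]   # hypothetical decisions to place
SLOTS = [0, 1, 2, 3]                    # available time slots

def feasible(assignment):
    # Constraint 1: no two tasks may share a slot.
    if len(set(assignment.values())) < len(assignment):
        return False
    # Constraint 2: refuelling must happen before loading.
    return assignment["refuel"] < assignment["load"]

def objective(assignment):
    # Minimise the makespan, i.e. the latest occupied slot.
    return max(assignment.values())

candidates = (dict(zip(TASKS, slots)) for slots in product(SLOTS, repeat=len(TASKS)))
best = min((a for a in candidates if feasible(a)), key=objective)
print(best)   # {'inspect': 0, 'refuel': 1, 'load': 2}
```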

Recently, hybrid approaches have emerged as a promising avenue to address some of the main limitations of “pure” solutions.


Unfortunately, the actual implementation of hybrid methods is challenging:

  • If we are adding ML models to a symbolic system, the hybrid DSS may inherit some of the drawbacks of those ML models. In particular, if a black-box ML model is employed (e.g. a neural network), the resulting DSS will be less interpretable and its decisions will ultimately be harder to explain; bias in the original dataset may translate into bias in the decisions suggested by the system; and vulnerabilities in the ML model may lead to constraint violations.
  • If we are adding symbolic components to an ML system, the hybrid DSS may inherit some of the drawbacks of those symbolic components, including lack of scalability and high manual costs for model design.

Furthermore, for the complex models of many practical applications, explanations of system behaviour may be understandable only by someone with strong expertise in the underlying technology; conversely, if hand-crafted symbolic models are not sufficiently accurate, any result based on them will have limited reliability.

The TUPLES project sets out to investigate suitable combinations of symbolic and data-driven methods in planning and scheduling that accentuate their strengths while compensating for their weaknesses.

We will establish novel technical contributions including algorithms and quantitative metrics, and define controlled environments for reproducible research, all in tight interaction with representative use cases.

The project structure is organised around a set of target properties and stages, which will guide the development of new techniques and provide the basis of a process for stakeholder engagement and validation.

The TUPLES project

Safe AI

No harm to people or the environment

An action policy is safe if executing it does not lead the environment to a fatal or undesirable state.

In many applications, safety is paramount. For instance, an aircraft pilot assistance system cannot prescribe a course of action that would cause the aircraft to run out of fuel before landing. In the presence of uncertainty, it may not be possible to completely eliminate the possibility of unsafe behaviour, but the probability of it occurring must be upper-bounded by a very small threshold.

We need to be able to generate safe solutions, and, especially in the presence of uncertainty, to be able to verify and quantify the extent to which a solution is safe.
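
As an illustration of quantifying safety under uncertainty, the sketch below uses a hypothetical toy fuel model and plain Monte Carlo simulation to estimate the probability of an unsafe outcome; the model, numbers and threshold are invented, and stronger guarantees would require formal verification or statistical model checking with explicit confidence bounds.

```python
import random

def rollout(initial_fuel=100.0, legs=10):
    """Simulate one flight; return True if the fuel level stays positive throughout."""
    fuel = initial_fuel
    for _ in range(legs):
        fuel -= random.gauss(9.0, 1.0)   # uncertain fuel burn per leg (toy model)
        if fuel <= 0.0:
            return False                 # unsafe state: out of fuel before landing
    return True

def estimate_unsafe_probability(n_rollouts=100_000):
    failures = sum(not rollout() for _ in range(n_rollouts))
    return failures / n_rollouts

p_unsafe = estimate_unsafe_probability()
print(f"estimated P(unsafe) = {p_unsafe:.5f}")
# The plan would only be accepted if this estimate (plus a suitable confidence
# margin) stays below the application-specific threshold, e.g. 1e-3.
```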


Robust AI

Stability and sensitivity in all situations

In the ML literature, robustness is typically understood as the stability of the (correct) decision under irrelevant changes in the input.

Additionally, in planning and scheduling, robustness includes sensitivity to relevant changes in the environment, and generalisation to a wide range of situations.

For all of these interpretations, it is important to produce robust solutions, to be able to measure and quantify robustness, and to formally verify that a solution is robust in a given range of situations.
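
As a minimal illustration of the first notion (stability under irrelevant input changes), the sketch below probes a hypothetical decision function by sampling small perturbations; all names and thresholds are invented, and such sampling only provides evidence of robustness, whereas formal verification would cover the entire neighbourhood.

```python
import random

def is_empirically_robust(decide, features, radius=0.01, n_samples=1_000):
    """Check that `decide` gives the same answer for small perturbations of `features`."""
    reference = decide(features)
    for _ in range(n_samples):
        perturbed = [x + random.uniform(-radius, radius) for x in features]
        if decide(perturbed) != reference:
            return False          # found a nearby input that flips the decision
    return True

# Toy "decision maker": a thresholded scoring rule (hypothetical).
def decide(features):
    return "accept" if sum(features) > 1.0 else "reject"

print(is_empirically_robust(decide, [0.6, 0.7]))     # far from the threshold: True
print(is_empirically_robust(decide, [0.5, 0.505]))   # near the threshold: likely False
```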


Scalable AI

Crucial for real-life applications

Real-world P&S problems feature large numbers of actions, state variables, resources, and contingencies, as well as complex constraints and objectives. The ability of a P&S system to scale to such large and complex problems is critical to its applicability.

Given the computational complexity of P&S (even in their simplest forms, scheduling is NP-hard and planning is PSPACE-hard), scalability is a major issue.
Data-driven approaches often deliver superior scalability, but they:

  • often require large amounts of data
  • suffer from the aforementioned lack of transparency, safety, and robustness.

Hybrid approaches are well suited to address both of these issues.

While not directly tied to trustworthiness, the ability of an AI system to scale up to large and complex decision-making problems, and to verify and explain DSS behaviour in such problems, is critical for its practical applicability.

 

Hybrid DSSs pose challenges to scalability, but also offer opportunities to address them if the integration is carefully managed.

 

Improving scalability will be a major goal within the project, and will necessitate advancement of the state-of-the-art.

Explainable AI

Understand the complexity to have confidence

Understanding the “what” and the “why” behind the solution returned by an AI system is a prerequisite to trusting the system, and it is a fundamental step for the trust calibration process.

In planning and scheduling in particular, users need to understand why a solution was chosen over others amongst a very large, potentially infinite, space of possible solutions. 

The presence of uncertainty, and the use of data-driven approaches to handle it, add considerably to these challenges.

Finally, ethical and legal dimensions are also critical to trustworthy P&S systems: an explainee is morally, and may be legally, entitled to an explanation of any bias or injurious consequence of a plan or schedule.
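
One deliberately simplified way to approach the contrastive question raised above ("why this plan rather than that alternative?") is to evaluate a user-proposed alternative against the same constraints and objective, as in the hypothetical sketch below; actual explanation facilities for P&S go well beyond such objective comparisons.

```python
def explain_choice(chosen, alternative, feasible, objective):
    """Contrastive explanation: why `chosen` rather than `alternative`?
    `feasible` and `objective` are problem-specific functions (objective minimised),
    e.g. the hypothetical ones from the scheduling sketch above."""
    if not feasible(alternative):
        return "The suggested alternative violates at least one constraint."
    gap = objective(alternative) - objective(chosen)
    if gap > 0:
        return f"The alternative is feasible but worse by {gap} on the objective."
    return "The alternative is at least as good: the chosen plan is not uniquely best."
```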
