The BABYLON Project

A Tool for Specification and Verification of Parameterized Systems to Benchmark Infinite-State Model Checkers


Motivations and goals

Our goal is to set up a library of examples of infinite-state systems to be used to evaluate theoretical and practical
verification methods. For this purpose we have defined a very simple language based on counter machines that can be used to encode
different extensions of Petri Nets.
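As a rough illustration (the encoding and all names below are ours, not taken from the BABYLON languages), a Petri net transition can be viewed as a guarded command over counters, one counter per place; extensions such as transfer, inhibitor, or broadcast arcs only change the form of the guards and updates.

# Illustrative sketch only: a Petri net transition as a counter-machine
# guarded command, with one counter per place (hypothetical Python encoding).

# Transition t consumes one token from place p1 and produces one token in p2:
#   guard:  c_p1 >= 1
#   action: c_p1 := c_p1 - 1; c_p2 := c_p2 + 1
transition_t = {
    "guard": {"p1": 1},             # minimum number of tokens required per place
    "delta": {"p1": -1, "p2": +1},  # effect of firing on the counters
}

def fire(marking, t):
    """Fire t on an explicit marking (dict place -> token count); return None if disabled."""
    if any(marking.get(p, 0) < n for p, n in t["guard"].items()):
        return None
    new = dict(marking)
    for p, d in t["delta"].items():
        new[p] = new.get(p, 0) + d
    return new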

The low-level language (together with a high-level language based on communication machines) is defined here.

All examples should be specified in the low-level language, i.e., as a Counter Machine.

For our experiments we plan to consider the following data structures:

Adding new data structures to the BABYLON tool can be done as defined here.

We plan to apply these data structures as Constraint Solvers within the framework of Constraint-based Automated Verification:

Safety Properties: given initial states S0 and target (unsafe) states T, check whether any state in T is reachable from S0.

Forward Reachability: compute the set of states reachable from S0 and check whether it intersects T.

Backward Reachability: compute the set of states from which T is reachable (the backward fixpoint pre*(T)) and check whether it intersects S0. A toy illustration of this scheme is sketched below.
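The following toy Python fragment (ours, purely illustrative; the real tool performs the same computation symbolically through constraint solvers) shows the backward scheme on an explicit finite transition relation:

# Toy illustration of backward reachability on an explicit, finite transition
# relation; the actual platform works with symbolic representations instead.
def backward_reachable(transitions, target):
    """transitions: set of (src, dst) pairs; target: set of states.
    Returns pre*(target), the least fixpoint of X = target U pre(X)."""
    reached = set(target)
    frontier = set(target)
    while frontier:
        frontier = {s for (s, d) in transitions if d in reached} - reached
        reached |= frontier
    return reached

def safe(transitions, initial, target):
    # The safety property holds iff no initial state can reach the target.
    return not (backward_reachable(transitions, target) & set(initial))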

The experiments can be performed by scaling up the following dimensions and evaluating the following parameters:

Dimensions

Parameters
Plan

Phase 1: A Fair Benchmarking

The goal of this phase is to compare the trade-offs between efficiency, robustness, and usability of the constraint solvers.
In more detail, the idea is to treat the constraint solvers as providers of services (set operations) that we take as black boxes
and that we incorporate into a general scheme of the standard verification algorithm used in symbolic model checking
(backward reachability).
This way, we try to guarantee the fairness of the comparison of the performance of the different solvers (i.e., we avoid optimizations
peculiar to the data structures; these will be re-introduced later).

The structure of the algorithm and the nature of the services

- The algorithm must implement a Breadth-First Search in which the accessibility relation is given by the pre-image operator

- Since we restrict this phase to Petri Nets, we will consider a fixed representation of the transition system (at the level of the platform)

- The set of services we need here are:
        - EMPTY: the empty set
        - INTERSECT: intersection of denotations
        - DIFFERENCE: difference of denotations
        - UNION: union of denotations
        - IS_EMPTY: emptiness test

        - REPRESENT(F): represent the initial states and target states taken in input inside the solver
        - PREt: pre-image computation over a single transition of the input
        - PRET: pre-image computation over the whole transition system

        Note 1: REPRESENT and the pre-image operators are the only non-standard operators that we expect the designer of the platform to implement
                      in order to integrate the platform with the constraint solvers.

                      Concerning the pre-image operators, they take as input a single transition (for PREt) or the whole relation (for PRET), together with a set of states.
                      The first parameter is the same for all libraries (the implementation of PREt/PRET will take care of translating it into the right format for the solver),
                      whereas the second parameter is peculiar to the considered data structure (it is a variable of the appropriate type).
                      A hypothetical rendering of this service interface is sketched after this list.
 

- We remove all optimizations from the implementations of the different operators

- Based on the previous set of operators, we will focus our experiments on the following search algorithms:

There are (at least) two types of search: the plain search, in which we always try to compute the full fixpoint
and only then check for the initial states, and the on-the-fly search, in which we interleave the steps of the main loop with
a test for the initial states.
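The services above can be pictured as the following Python interface (a sketch of ours; the actual platform API and names may differ):

# Hypothetical Python rendering of the services each constraint solver must
# export; names mirror the list above, the real platform API may differ.
from abc import ABC, abstractmethod

class SolverServices(ABC):
    @abstractmethod
    def empty(self): ...                          # EMPTY: the empty set

    @abstractmethod
    def union(self, a, b): ...                    # UNION of denotations

    @abstractmethod
    def intersect(self, a, b): ...                # INTERSECT of denotations

    @abstractmethod
    def difference(self, a, b): ...               # DIFFERENCE of denotations

    @abstractmethod
    def is_empty(self, a): ...                    # IS_EMPTY: emptiness test

    @abstractmethod
    def represent(self, states): ...              # REPRESENT: load input states into the solver

    @abstractmethod
    def pre(self, t, states): ...                 # PREt: pre-image w.r.t. a single transition

    @abstractmethod
    def pre_all(self, transitions, states): ...   # PRET: pre-image w.r.t. the whole transition system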

Here are the abstract algorithms:

Algorithm 1 Plain Backward Search

Let T be the set of transitions {t1,...,tk},
TS the target states, and IS the initial states, all read from input

F:=REPRESENT(TS);
I:=REPRESENT(IS);
R:=emptyset;

while (not(IS_EMPTY(F))) do
    begin
          R:=UNION(R,F);

          L:=EMPTY;
          for every transition t in T  do
            begin
               AUX:=PREt(F);
               L:=UNION(AUX,L);
            end;
          F:=DIFFERENCE(L,R);

    end;
if IS_EMPTY(INTERSECT(I,R)) then VERIFIED
otherwise TRACE(?)
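Assuming the hypothetical SolverServices interface sketched earlier, Algorithm 1 could be coded as follows (variable names mirror the pseudocode; this is an illustration, not the platform's actual code):

# Sketch of Algorithm 1 (plain backward search) on top of the hypothetical
# SolverServices interface; returns a verdict string for illustration.
def plain_backward_search(solver, transitions, target_states, initial_states):
    F = solver.represent(target_states)
    I = solver.represent(initial_states)
    R = solver.empty()

    while not solver.is_empty(F):
        R = solver.union(R, F)
        L = solver.empty()
        for t in transitions:                       # one pre-image per transition
            L = solver.union(solver.pre(t, F), L)
        F = solver.difference(L, R)                 # new frontier

    # R now contains the backward fixpoint pre*(target).
    return "VERIFIED" if solver.is_empty(solver.intersect(I, R)) else "TRACE"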
 

Algorithm 2 On-the-Fly Backward Search

Let T be the set of transitions {t1,...,tk},
TS the target states, and IS the initial states, all read from input

F:=REPRESENT(TS);
I:=REPRESENT(IS);

R:=emptyset;
Error:=false;

while (not(IS_EMPTY(F))) do
    begin
          R:=UNION(R,F);

           if not(IS_EMPTY(INTERSECT(I,R))) then begin Error:=true; break end;

          L:=EMPTY;
          for every transition t in T  do
            begin
               AUX:=PREt(F);
               L:=UNION(AUX,L);
            end;
          F:=DIFFERENCE(L,R);

    end;
if Error then TRACE
otherwise VERIFIED
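A sketch of the corresponding code, again over the hypothetical SolverServices interface; it differs from plain_backward_search only in the early check against the initial states.

# Sketch of Algorithm 2 (on-the-fly backward search): the intersection with
# the initial states is tested inside the loop so the search can stop early.
def on_the_fly_backward_search(solver, transitions, target_states, initial_states):
    F = solver.represent(target_states)
    I = solver.represent(initial_states)
    R = solver.empty()
    error = False

    while not solver.is_empty(F):
        R = solver.union(R, F)
        if not solver.is_empty(solver.intersect(I, R)):
            error = True                            # an initial state reaches the target
            break
        L = solver.empty()
        for t in transitions:
            L = solver.union(solver.pre(t, F), L)
        F = solver.difference(L, R)

    return "TRACE" if error else "VERIFIED"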

Algorithm 3 Fully Symbolic Backward Search

Let T be the whole transition system,
TS the target states, and IS the initial states, all read from input

F:=REPRESENT(TS);
I:=REPRESENT(IS);

R:=emptyset;
Error:=false;

while (not(IS_EMPTY(F))) do
    begin
          R:=UNION(R,F);

           if not(IS_EMPTY(INTERSECT(I,R))) then begin Error:=true; break end;

           L:=PRET(F);
           F:=DIFFERENCE(L,R);
    end;
if Error then TRACE
otherwise VERIFIED
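Under the same assumptions, only the computation of the new frontier changes with respect to the on-the-fly search: the per-transition loop is replaced by a single call to the whole-relation pre-image service.

# Sketch of the step that distinguishes Algorithm 3 (fully symbolic search):
# one PRET call over the whole transition system instead of a loop over
# the individual transitions.
def fully_symbolic_step(solver, transitions, F, R):
    L = solver.pre_all(transitions, F)   # symbolic pre-image w.r.t. the whole system
    return solver.difference(L, R)       # new frontier F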

Models and Measures

What do we expect from the fair experiments?
Well, we expect to have a first selection w.r.t.: the performance of each library, the compactness of the resulting data, and the flexibility
w.r.t. the set of models we have in mind.
Furthermore, for each library we would like to know: how much time did we spend adapting it to export the adequate services?
On average, how much time does each operation take w.r.t. the whole computation? What are the peaks in memory usage and execution times?
Specifically, this is my proposal for models/measures.

Models
In the first phase we will focus our attention on Petri Nets (Bounded/Unbounded) and Petri Nets with Inhibitor Arcs.
This way, we avoid many of the problems due to optimizations and all other tricks.
The examples we have at the moment are (see the list below):

- Lamport, Dekker, Peterson
- CSM, FMS, CTS, Multipoll, Kanban, Mesh (different sizes)
- Readers & writers

Measures

Based, e.g., on the spy of Gilles, we should measure the following (fixing the parameters discussed before, e.g., swap space, etc.):

Overall Computation:
- Execution time
- Number of Steps
- Memory used to store the fixpoint
- Features peculiar to the data structure: e.g., number of nodes of CSTs and DDDs, size of the automata of NDDs
- Number of symbolic states

Average
- Max size of intermediate set variables (w.r.t. the peculiarities of the data structure)
- Max execution time of each step

Services Profiling
- Execution time of each operation, with max and average values
- Memory used for each operation, with max and average values
- Sizes (again peculiar to the different data structures) of the data structures used in the intermediate operations

I think this should give us good indicators for the performance of the solvers.
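One possible way to collect the per-service measures (a sketch of ours, assuming the hypothetical SolverServices interface above; memory usage and data-structure sizes would be recorded analogously):

# Wrap a SolverServices instance and record the execution time of every call,
# so that per-operation max and average values can be reported afterwards.
import time
from collections import defaultdict

class ProfilingSolver:
    def __init__(self, solver):
        self._solver = solver
        self.timings = defaultdict(list)   # operation name -> list of durations (seconds)

    def __getattr__(self, name):
        op = getattr(self._solver, name)
        if not callable(op):
            return op
        def timed(*args, **kwargs):
            start = time.perf_counter()
            result = op(*args, **kwargs)
            self.timings[name].append(time.perf_counter() - start)
            return result
        return timed

    def report(self):
        return {op: {"calls": len(ts), "max": max(ts), "avg": sum(ts) / len(ts)}
                for op, ts in self.timings.items()}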

Invariants

Concerning invariants, we could run a second series of experiments in which we
prune via invariants. However, we must be careful about how we prune; a minimal sketch is given below.
Alternatively, we can see invariants as part of the optimizations (every solver uses them in its own and
more effective way).
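A minimal sketch of invariant-based pruning in terms of the hypothetical SolverServices interface (the invariant is assumed to be already available in solver form, e.g., computed from the structure of the net):

# Invariant-based pruning: after each pre-image step, intersect the new
# frontier with the invariant, discarding states that cannot occur.
def prune_with_invariant(solver, frontier, invariant):
    return solver.intersect(frontier, invariant)

# In the main loop:
#     F = prune_with_invariant(solver, solver.difference(L, R), invariant)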

Here are some results.


Examples

Warning: we should turn all examples into one of the following specification languages: Global Machines and Counter Machines, both defined here

The examples we have are listed below (classified in terms of Petri Net extensions)

Bounded Petri Nets

Petri Nets
  • Basic ME protocol, Fig. 1 in the Description

Petri Nets with Transfer Arcs

Petri Nets with Inhibitor Arcs

Broadcast Protocols
  • Abstract Multithreaded Java Programs
  • Consistency Protocols with Atomic Synchronization Actions

Petri Nets with Broadcast and Inhibitor Arcs

Available Software

The CBP2GM compiler converts an abstract multithreaded piece of code (described in the Concurrent Boolean Programs language) into a Global Machine, suitable for verification.

A beta version of CBP2GM is available. Click here to download CBP2GM (.tar.gz source file, 70 Kb).

CBP2GM is free software.