DOI 10.1007/s00165-008-0075-6 BCS © 2008 Formal Aspects of Computing (2008) 20: 507–560
Formal Aspects of Computing
Deductive verification of alternating systems

Matteo Slanina1,∗, Henny B. Sipma2 and Zohar Manna2
1 Google Inc., 1600 Amphitheatre Pkwy, Mountain View, CA 94043, USA. E-mail: [email protected]
2 Stanford University, Computer Science Department, 353 Serra Mall, Stanford, CA 94305-9045, USA. E-mails: [email protected]; [email protected]
Abstract. Alternating systems are models of computer programs whose behavior is governed by the actions of multiple agents with, potentially, different goals. Examples include control systems, resource schedulers, security protocols, auctions and election mechanisms. Proving properties about such systems has emerged as an important new area of study in formal verification, with the development of logical frameworks such as the alternating temporal logic atl*. Techniques for model checking atl* over finite-state systems have been well studied, but many important systems are infinite-state and thus their verification requires, either explicitly or implicitly, some form of deductive reasoning. This paper presents a theoretical framework for the analysis of alternating infinite-state systems. It describes models of computation, of various degrees of generality, and alternating-time logics such as atl* and its variations. It then develops a proof system that makes it possible to prove arbitrary atl* properties over these infinite-state models. The proof system is shown to be complete relative to validities in the weakest possible assertion language. The paper then derives auxiliary proof rules and verification diagram techniques and applies them to security protocols, deriving a new formal proof of fairness of a multi-party contract signing protocol where the model of the protocol and of the properties contains both game-theoretic and infinite-state (parameterized) aspects.
1. Introduction

Alternating systems are models of computer programs whose behavior is governed by the actions of multiple agents with, potentially, different goals. Examples include control systems, resource schedulers, security protocols, auctions and election mechanisms. The mathematical analysis of a computer system, as that of any other physical system, consists of two interdependent subtasks: forming a mathematical model of the physical system, and proving properties of the model. The process of modeling is, in general, one of forming an abstraction of the physical system, i.e., of identifying the essential traits of the system that influence the properties we want to study and of providing a mathematical model that can be used to approximate the observations of these traits with sufficient degree
∗ This work was done while at Stanford University. Correspondence and offprint requests to: M. Slanina, E-mail: [email protected]
of accuracy. Analogously, every formal logic formula expresses an abstraction of a property of a real system. Therefore, the model chosen is paramount for the success of the analysis, both in whether the latter is technically possible and in whether the results provide useful insight into the behavior of the system. There is no perfect model of a system or a perfect specification, only an infinite sequence of better approximations. Advancements in formal modeling and verification offer progressively more precise models of more and more features of the real systems and more and more properties in the specification. This progress must balance two opposing forces: on one side, a finer abstraction makes the formal verification problem harder to solve; on the other, any time we leave out a feature from the model, we need to prove the correctness of the abstraction with respect to the system at a meta-level, and this introduces the possibility of human error. The study of formal methods for reactive systems has developed a sequence of increasingly more powerful abstractions:
• the linear view;
• the branching view;
• the alternating view.
In all cases, we consider the possible evolution of the system as it moves between possible states through time. The linear view models the system as the collection of all the sequences of states (linear traces) that it can move through. This makes it possible to answer questions of the form Do all computations of the system satisfy condition . . . ?, which is enough for many applications. The most widely used temporal logic, ltl, can only describe linear properties of a system. The branching view models the system as a tree (computation tree) of states it can evolve through, a tree node being a child of another if the former can be reached from the latter by executing an atomic instruction. This makes it possible to answer additional questions, like Does there exist a computation such that . . . ?, or Can every reachable state begin a computation such that . . . ? The temporal logics ctl and ctl*, widely used in hardware verification for their computational properties, implement the branching view. The alternating view is a recent paradigm that addresses an additional feature of many reactive systems: Different parts of the system may be controlled by different agents, and not all agents may share the same goals. Any system used in control theory is a typical example. So is virtually any security protocol. It is important to realize that the mere presence of multiple agents in the system does not necessarily mean that alternation is an essential trait. In fact, any reactive system in which concurrency plays a role can, in general, be described as a system with multiple agents (processes). If the agents all share the same goal, transition systems, automata, and ltl or ctl* specifications suffice to describe the intended behavior. Consider, for example, a mutual exclusion protocol, such as the bakery or Peterson’s algorithms [MP95], through which processes in a multitasking computer contend for the exclusive use of a resource: If we can assume that all the interested parties will engage in the correct execution of the protocol, for example because they will only be able to access the shared resource through well-defined operating system calls, then we can use ltl to specify that mutual exclusion and fairness are achieved in all possible computations. If, on the other hand, we want to take into account the possibility that some process may maliciously try to circumvent the protocol, for example by making system calls or sending messages in an unpredicted order, then we are in the presence of a real alternating system. The specifications, then, express statements about the existence or non-existence of strategies for agents (or groups of agents) to achieve a given goal regardless of the actions of the other agents.
Consider again the mutual exclusion protocol, and assume we have two agents p1 and p2 representing two processes and an agent q representing the operating system. There are no other agents in the system. In the trusted case, an ltl specification of mutual exclusion may be of the form □¬(using1 ∧ using2), where usingi means that process pi is using the resource. We use □ to denote the ltl operator always in the future. Thus the formula reads in all future states reachable by the program, it is never the case that both agent p1 and agent p2 are using the intended resource at the same time. In the untrusted case, we may not be able to prove such a specification. We may then consider the more appropriate atl* specification ⟨⟨q, p1⟩⟩□¬(using1 ∧ using2) ∧ ⟨⟨q, p2⟩⟩□¬(using1 ∧ using2).
The first conjunct should be read players q and p1 have a strategy to guarantee (can act in such a way) that ¬(using1 ∧ using2) holds no matter how all other agents (in this case, just p2) act. This strategy may, for example, involve ignoring certain actions of p2 that are not conforming with the protocol and may require the operating system q to keep some extra state to be able to check the consistency of the processes’ actions. A different protocol could, on the other hand, ensure that only one of the two agents can be guaranteed to be able to reach mutual exclusion (perhaps this is a high priority process that can preempt the other); we could then write ⟨⟨q, p1⟩⟩□¬(using1 ∧ using2) or even, if we wanted to prove that p2 cannot force mutual exclusion, ⟨⟨q, p1⟩⟩□¬(using1 ∧ using2) ∧ ¬⟨⟨q, p2⟩⟩□¬(using1 ∧ using2). Our approach in this paper is to study proof systems for the validity of properties over a system. This mirrors the approach to ltl verification in [MP91, MP95] and of ctl* verification in [KP05]. Consider again our specification for mutual exclusion. If the specification is an ltl formula of the form □π, where π is an assertion, then we know [MP95, MP89] that we can use the following rule, which is sound and relatively complete for this class of properties:

ltl-inv:
  Θ → η
  η → π
  η → pre(η)
  ──────────
  □π

This rule can be interpreted as induction on the computations: To prove that π holds in all reachable states of the system (“□π”), we find a strengthening η of π (“η → π”) that holds at all initial states of the system (“Θ → η”, where Θ is the initial condition) and that is preserved by all computation steps, or transitions, of the system (“η → pre(η)”). The pre operator is the weakest precondition operator: given an assertion η describing a set of states, it returns an assertion denoting the set of states such that all transitions necessarily lead to η. Suppose now that we want to prove an alternating version of the property, that is, ⟨⟨q, p1⟩⟩□π.
As we shall see later, the following proof rule is sound and relatively complete for this new class of properties:

inv:
  Θ → η
  η → π
  η → cpreq,p1(η)
  ───────────────
  ⟨⟨q, p1⟩⟩□π

The rule is another kind of induction: It tells us to find a strengthening η of π (“η → π”) that holds at all initial states (“Θ → η”) and such that, from any state satisfying η, agents q and p1 can choose actions in such a way that, no matter the actions of all other agents (p2, in this case), the resulting next state will always satisfy η (“η → cpreq,p1(η)”). The operator cpreX, where X is a set of agents, is the alternating analogue of a weakest precondition, called the controllable predecessors operator: given a formula η denoting a set of states, it returns an assertion denoting the set of states at which X can act in such a way that, no matter how the other agents act, the resulting next state necessarily satisfies η. An obvious objection is that, instead of complicating the model in this way, we may refine the system until we are able to prove the original ltl specification. What this implies, though, is committing to a specific strategy early in the algorithm development, while we may prefer to specify the minimal, abstract properties required of the system so that the design can be as adaptable as possible to new requirements. For example, there may be many different ways for the operating system to ensure that no process breaks the access control, but we may not know, at an initial verification phase, which one will be the most adequate: additional specifications may be added later, or new efficiency constraints may have to be met. If, to meet these criticisms, we were to augment the model with all the possible strategies that we can think of, the resulting model may become unwieldy and still we may have restricted it more than necessary.
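For intuition, the premises of rule inv can be discharged mechanically on a finite-state instance. The following Python sketch is purely illustrative (the toy system, the coalition abstraction, and all names are ours, not the paper's): it checks the three premises Θ → η, η → π, and η → cpre(η) by enumeration.

```python
# A purely illustrative sketch (our toy system, not the paper's formalism)
# of discharging the premises of rule inv by enumeration. The coalition's
# one-step game is abstracted as: for each state, each coalition action
# maps to the set of successor states the remaining agents can force.
THETA = {0}        # initial states
PI = {0, 1, 2}     # safety target: the coalition keeps the run inside PI
ETA = {0, 1}       # candidate inductive strengthening of PI

# moves[w][action] = possible successors once the other agents reply
moves = {
    0: {"a": {0, 1}, "b": {3}},
    1: {"a": {1}, "b": {2, 3}},
    2: {"a": {3}},
    3: {"a": {3}},
}

def cpre(U):
    """States where some coalition action forces the next state into U."""
    return {w for w, acts in moves.items()
            if any(succ <= U for succ in acts.values())}

def check_inv(theta, eta, pi):
    """The three premises of inv: Theta -> eta, eta -> pi, eta -> cpre(eta)."""
    return theta <= eta and eta <= pi and eta <= cpre(eta)

assert check_inv(THETA, ETA, PI)     # ETA is inductive, so the property holds
assert not check_inv(THETA, PI, PI)  # PI itself is not inductive (state 2 loses)
```

Note that π alone fails the inductive step here: from state 2 the coalition is forced into the unsafe state 3, which is exactly why the rule asks for a strengthening η.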
It also seems epistemologically sounder and intellectually more satisfying to study formal methods for specifying and proving the properties that we are truly interested in, instead of changing the system to fit existing theories, as long as the new theory is at least as useful and effective as the previously existing ones. Finally, there are many cases in which the system cannot be modified and we must study it for what it is. For example, alternating systems bring together formal verification and control theory, where we model a system with a controller and an environment and ask whether the controller can act in order to satisfy a temporal property, no matter what the environment does: control theorists have traditionally used restricted forms of alternating models to find efficient solutions to special cases of this problem. As another example, consider again security protocols: here, more than in other kinds of systems, it is preferable for the model to be as close to the implemented system as possible, and abstracting away some of the alternating behavior of the agents may not allow us to capture all the important details: such over-approximations always run the risk of causing errors in the verification process. Most previous work on formal verification of alternating systems has been focused on model-checking algorithms for finite-state systems [AHK97, AHK02, AHM+98, HMMR00]. Apart from the initial work on abstract interpretation for alternating systems [HMMR00], very little has been done on techniques for infinite-state systems. This paper is a first step in remedying this situation, by generalizing some of the successful techniques for the deductive verification of reactive systems [MP91, MP95, MP94, MBSU98] to the alternating framework. Analogously to the case of traditional reactive systems, the move from finite-state to infinite-state systems presents both exciting perspectives and new challenges.
On one side, we immediately lose decidability of most problems, and we must thus resort, either explicitly or implicitly, to some form of deduction, or theorem proving. On the other side, some of the most interesting alternating systems are, by their very nature, infinite-state. Many security protocols, for example, are inherently parametric, i.e., they are designed to act in an environment with a variable number of actors. Each actor, in turn, may manipulate complex data structures; but even if we manage to abstract each actor to a finite-state system (which, as argued before, always has the potential to introduce imprecisions in the model), we are still left with a parameterized, thus infinite-state, system to verify. We shall see an example of this in Sect. 5, where we describe a case study of a parametric, multi-agent, contract-signing protocol.

Organization of the Paper. The rest of the paper is structured as follows:
• Section 2 introduces several formal models of alternating systems that will be used in the rest of the paper. It then introduces the alternating temporal logics atl and atl*. Some of the models are slightly more general than strictly required later, but this lets us unify the treatment and also cover previous models in the literature under a unified framework. This section is partly based on work published in [Sla02] and [SSM06c].
• Section 3 contains the major theoretical contribution: it defines a sound and relatively complete proof system for atl* over alternating systems. It is based on the work published in [SSM06c]. The proofs of soundness and completeness (Sect. 3.2) can be skipped without compromising readability of the rest of the article.
• Section 4 shows how to extend the proof system by auxiliary and derived proof rules and verification diagrams that greatly simplify the verification of complex systems by making it possible to write more intuitive and concise proofs. It is based on [Sla02] and more recent work in [SSM06a].
• Section 5 contains our two major case studies that apply the techniques of Sects. 3 and 4 to the verification of fairness of two contract-signing protocols. Preliminary reports on this work appeared in [SSM06a, SSM06b].
• Section 6 contains our conclusions, and Sect. 7 contains notes on the history of the concepts presented and on related work and ideas for extensions.
• An appendix contains a list of symbols used in the paper for standard operations from set theory, first-order logic, and other areas. Please refer to it if uncertain about the notation.
Some of the results were previously reported in [SSM06c] and [Sla07].
2. Mathematical models

2.1. Models of computation

The mathematical models of computation that we are interested in extend traditional transition systems by adding multiple agents that play a game at every time step to determine what the next state of the system will be. Models of this kind are known in the literature as game graphs, game structures, or alternating systems. We present here our own formalism, which lets us present our results in a more orderly and concise fashion. For a discussion of the relations between our models and similar choices in the literature, see Sect. 7 at the end of the paper. For the sake of exposition, we distinguish two types of models:
1. Extensional models: These are the mathematically simplest, most elegant, and most generic, consisting of abstract sets of states and games in strategic form. We shall use them to discuss general concepts that apply to all other models and provide a uniform semantics. They are usually not suitable for defining models of systems, unless the system is finite-state.
2. Intensional models: These provide a formal language and a logical framework in which to describe specific classes of systems. We shall use them to discuss computability and expressibility characteristics of logics and proof methods, and for examples and case studies. We shall usually define the semantics of these models by translating each into an extensional model.
The following subsections discuss the two classes in detail.
2.1.1. Extensional models

Notation. Throughout this section, and in later sections when possible, we shall denote the set of agents by P, elements of P by p, q, . . . , and subsets of P by X, Y, Z, X1, . . . . We shall denote the set of actions by A, elements of A by α, β, γ, . . . , and subsets of A by B, C, D, . . . .

Definition 2.1.1 A (one-step, strategic-form) game is a quintuple ⟨P, A, O, a, g⟩, where:
• P is a set of players or agents,
• A is a set of actions,
• O is a set of outcomes,
• a : P → 2^A defines a set of choices of available actions for each player, and
• g : ∏p∈P a(p) ⇀ O is the game matrix, a partial function associating an outcome to complete choices of actions by all players.

The set of all games between players P, with actions A and outcomes O, denoted G(P, A, O), is the set of all games ⟨P, A′, O′, a, g⟩ such that A′ ⊆ A and O′ ⊆ O.

Example 2.1.2 Consider a game of rock-paper-scissors, between two players, l (left player) and r (right player). It can be defined as the following one-step game ⟨P, A, O, a, g⟩:

P = {l, r}
A = {R, P, S}
O = {win-l, win-r, draw}
a(l) = a(r) = A
g(R, P) = g(P, S) = g(S, R) = win-r
g(P, R) = g(S, P) = g(R, S) = win-l
g(R, R) = g(P, P) = g(S, S) = draw

Definition 2.1.3 Given a game G = ⟨P, A, O, a, g⟩, a play of G is a pair ⟨α, o⟩, where α ∈ ∏p∈P a(p) and o = g(α). The set of all plays of G is denoted plays(G). If plays(G) = ∅, we say that G is blocking.
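The strategic-form definitions above are easy to animate on a finite example. The following Python sketch is our own illustration, not part of the paper's formalism: it encodes the rock-paper-scissors game of Example 2.1.2 and enumerates its plays.

```python
# A minimal sketch of a one-step strategic-form game <P, A, O, a, g>
# (Definition 2.1.1), instantiated with rock-paper-scissors (Example 2.1.2).
# All encodings below are ours, chosen for illustration only.
from itertools import product

P = ["l", "r"]                       # players
A = {"R", "P", "S"}                  # actions
O = {"win-l", "win-r", "draw"}       # outcomes
a = {"l": A, "r": A}                 # every action is available to both players

BEATS = {("R", "S"), ("S", "P"), ("P", "R")}  # first action beats the second

def g(alpha):
    """Game matrix: maps a complete choice of actions to an outcome."""
    al, ar = alpha["l"], alpha["r"]
    if al == ar:
        return "draw"
    return "win-l" if (al, ar) in BEATS else "win-r"

def plays(a, g):
    """All plays <alpha, o> of the game (Definition 2.1.3)."""
    return [({"l": al, "r": ar}, g({"l": al, "r": ar}))
            for al, ar in product(sorted(a["l"]), sorted(a["r"]))]

# The game is non-blocking: every joint choice of actions has an outcome.
assert len(plays(a, g)) == 9
assert g({"l": "P", "r": "S"}) == "win-r"   # the play of Example 2.1.4
```

Since g is total here, plays(G) is non-empty and the game is non-blocking; a blocking game would correspond to g being undefined on every joint choice.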
Example 2.1.4 In the rock-paper-scissors game, an example play is ⟨⟨P, S⟩, win-r⟩, which we may sometimes, for clarity, also write ⟨⟨l : P, r : S⟩, win-r⟩.

Definition 2.1.5 A game frame (gf) is a quadruple ⟨P, W, A, δ⟩, where
• P is a set of players or agents,
• W is a set of states or worlds,
• A is a set of actions, and
• δ : W → G(P, A, W) is an alternating transition function or one-step game function.
Notice that, in the definition of δ, we specialize the generic definition of game so that the set of outcomes is the set of states, W. Thus, in game frames, the combined choices the players make determine the next state the system evolves to. This is analogous to regular transition systems, where instead of a game we have a nondeterministic relation that describes the possible next states given the current state. From now on, unless otherwise specified, when we say outcomes we shall intend it in this specialized sense. We say that a state w ∈ W is blocking if δ(w) is a blocking game. If δ(w) = ⟨P, A, W, a, g⟩, we denote a by a(δ(w)) and g by g(δ(w)).

Example 2.1.6 Gomoku, go-moku, or gobang, or in English Connect 5 is an abstract strategy board game. It is played with go pieces (black and white stones) on a go board (19×19 intersections) . . . Black plays first, and players alternate in placing a stone of their color on an empty intersection. The winner is the first player to get an unbroken row of five stones horizontally, vertically, or diagonally . . . [Wik]

We model a version of go-moku in which the board is infinite in all directions. Consider the following game frame ⟨P, W, A, δ⟩. The black player is player 1, the white player is player 2. A state of the game is a pair consisting of a player whose turn is next and an unbounded matrix with entries 1 (black stone), 2 (white stone), or 0 (empty) and finitely many non-zero entries.

P = {1, 2}
A = Z × Z = Z²
W = {1, 2} × {M : Z² → {0, 1, 2} | Mij ≠ 0 for finitely many i, j}
a(δ(⟨t, M⟩))(t) = {⟨i, j⟩ ∈ A | Mij = 0}
a(δ(⟨t, M⟩))(3 − t) = A
(g(δ(⟨t, M⟩))(⟨i1, j1⟩, ⟨i2, j2⟩))ij = t if ⟨i, j⟩ = ⟨it, jt⟩, and Mij otherwise

The left side of the last equation denotes the ⟨i, j⟩-th entry in the matrix component of the new state. The turn component, not shown here, simply alternates from one player to the other. Notice also that we could have defined a(δ(⟨t, M⟩))(3 − t) = B for any set B ⊆ A, B ≠ ∅: the choice of player 3 − t, whose turn it is not to play, is disregarded by g.

Definition 2.1.7 Given a game frame F = ⟨P, W, A, δ⟩, a trail of F starting at state w0 ∈ W is a finite or infinite sequence w0, α¹, w1, α², w2, . . . such that ⟨αⁱ, wi⟩ ∈ plays(δ(wi−1)) for all i. The run based on the trail is the sequence w0, w1, w2, . . . . A sequence in W* ∪ Wω (the set of all finite and infinite sequences of elements from W [Tho97]) is a run if it is a run based on some trail. A trail (resp. run) is full if it is either infinite or it ends in a blocking state. The set of full trails of F is denoted trails(F). The set of all full runs of F is denoted runs(F).

Definition 2.1.8 Given a frame F = ⟨P, W, A, δ⟩, a strategy for player p ∈ P is a partial function s : W⁺ ⇀ A such that s(xw) ∈ a(δ(w))(p) for all x ∈ W* and w ∈ W, and the left side is defined if, and only if, the right side is non-empty.
A trail w0, α¹, w1, α², w2, . . . is compatible with a strategy s for player p if αⁱ[p] = s(w0 w1 . . . wi−1) for all i ∈ {1, 2, . . . }. A strategy for a set of players X is a collection s of strategies, one for each player p ∈ X. The component strategy for player p is denoted s[p]. A trail is compatible with a strategy s for a set of players X if it is compatible with s[p] for each p ∈ X. The set of all strategies for a set of players X is denoted strat(X). The set of all runs based on trails compatible with a strategy s for X and starting at a state w ∈ W is called the outcome set of s from w and denoted out(w, s).

Definition 2.1.9 Given a set Π of proposition symbols, a game transition system (gts) is a quintuple S = ⟨P, W, A, δ, I⟩, where:
• ⟨P, W, A, δ⟩ is a frame and
• I : Π → W → 𝔹 is the interpretation of the proposition symbols for all states.

Note that, although technically Π should belong to the tuple defining the gts, we do not write it as it will normally be clear from the context.

Example 2.1.10 Consider the gomoku game frame from Example 2.1.6. We can define a gts by providing some propositions and their interpretations. Here is a useful set of propositions:
• turn(p), for p ∈ {1, 2}, denotes that it’s player p’s turn: I(turn(p))(⟨t, M⟩) = 1 if and only if t = p
• five(p), for p ∈ {1, 2}, denotes the fact that player p has an unbroken row of five stones horizontally, vertically, or diagonally: I(five(p))(⟨t, M⟩) = 1 if and only if
∃ i, j .  Mij = Mi+1,j = Mi+2,j = Mi+3,j = Mi+4,j = p
       ∨ Mij = Mi,j+1 = Mi,j+2 = Mi,j+3 = Mi,j+4 = p
       ∨ Mij = Mi+1,j+1 = Mi+2,j+2 = Mi+3,j+3 = Mi+4,j+4 = p
       ∨ Mij = Mi+1,j−1 = Mi+2,j−2 = Mi+3,j−3 = Mi+4,j−4 = p

Definition 2.1.11 A fair game transition system (fgts) is a sextuple S = ⟨P, W, A, δ, I, F⟩, where
• ⟨P, W, A, δ, I⟩ is a gts and
• F : P → 2^(Wω) associates a fairness condition to every player; we commonly write Fp for F(p).

A run x = w0, w1, w2, . . . is fair to player p if x ∈ Fp; we write x ⊨ Fp. A run x is fair to a set X of players if x ∈ ⋂p∈X Fp; we write x ⊨ FX.
Example 2.1.12 Coming back to the go-moku example, suppose that we want to exclude runs in which one of the players wins and are only interested in games that continue forever. Then we can add a fairness condition for each player, where

Fp = {⟨t⁰, M⁰⟩⟨t¹, M¹⟩⟨t², M²⟩ · · · | tⁱ ∈ {1, 2} and I(five(p))(⟨tⁱ, Mⁱ⟩) = 0 for all i ∈ ℕ}
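The notions of strategy and outcome set (Definition 2.1.8) can likewise be animated on a small finite frame. The following Python sketch is illustrative only (the two-state frame and all names are invented by us): it unrolls the outcome set out(w, s) to a bounded depth, with one controlled player p and one adversarial player q.

```python
# Sketch of strategies and bounded-depth outcome sets on a tiny invented
# two-player frame. A strategy maps the history of visited states to an
# action for p; q ranges over all of its available actions.

# delta[w] = (available actions per player, game matrix on joint actions)
delta = {
    "s": ({"p": {"l", "r"}, "q": {"0", "1"}},
          {("l", "0"): "s", ("l", "1"): "t",
           ("r", "0"): "t", ("r", "1"): "t"}),
    "t": ({"p": {"l"}, "q": {"0"}}, {("l", "0"): "t"}),
}

def out(w, strategy_p, depth):
    """Runs of length `depth` based on trails compatible with p's strategy."""
    runs = {(w,)}
    for _ in range(depth):
        new = set()
        for run in runs:
            acts, g = delta[run[-1]]
            ap = strategy_p(run)           # p's choice is fixed by the strategy
            assert ap in acts["p"]
            for aq in acts["q"]:           # the adversary q plays anything
                new.add(run + (g[(ap, aq)],))
        runs = new
    return runs

s_p = lambda hist: "l"                     # strategy: always play "l"
# From "s", q decides whether the run stays in "s" or moves to "t".
assert out("s", s_p, 1) == {("s", "s"), ("s", "t")}
assert out("s", lambda h: "r", 1) == {("s", "t")}   # playing "r" forces "t"
```

The two assertions show the game-theoretic reading: under the "l" strategy the outcome set still contains two runs because q retains a choice, while the "r" strategy collapses it to a single run.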
2.1.2. Controllable and uncontrollable predecessors

The controllable predecessors operator is an essential ingredient for reasoning about game transition systems: it enables us to reason about the one-step games at the nodes, analogously to how the weakest precondition operator lets us reason about the individual transitions in traditional system models.

Definition 2.1.13 Given a set of states U ⊆ W and a set of players X ⊆ P, cpreX(U) denotes the set of states from which the players in X have actions with which they can ensure that the game will be in U at
the next state. Formally:

cpreX(U) = {w ∈ W | ∃ αX ∈ ∏p∈X a(δ(w))(p) such that ∀ αP\X ∈ ∏p∈P\X a(δ(w))(p), we have g(αX ∪ αP\X) ∈ U}

In this formula, αX ∪ αP\X is an element of ∏p∈P a(δ(w))(p) that corresponds to picking elements in the tuple from either αX or αP\X as appropriate.¹ Dual to cpreX is the uncontrollable predecessors operator upreX, defined as

upreX(U) = W \ cpreX(W \ U)

or, explicitly, as

upreX(U) = {w ∈ W | ∀ αX ∈ ∏p∈X a(δ(w))(p), ∃ αP\X ∈ ∏p∈P\X a(δ(w))(p) such that g(αX ∪ αP\X) ∈ U}

For classes of systems with special properties, the cpre transformers have simpler forms [AHK02, Sla02]. To take advantage of these simpler forms, we express our verification conditions in terms of these transformers as much as possible.
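On an explicit finite frame, cpre and upre are direct exists-forall and forall-exists checks. Here is a small illustrative Python sketch (the frame and all names are ours, not the paper's), which also checks the duality upreX(U) = W \ cpreX(W \ U):

```python
# cpre_X and its dual upre_X over an explicit finite frame (our own toy
# example). Coalition actions are listed first, opponent actions second.
STATES = {0, 1, 2}
# acts[w] = (coalition actions, opponent actions); g[w][(ax, ay)] = successor
acts = {0: ({"a", "b"}, {"x", "y"}), 1: ({"a"}, {"x"}), 2: ({"a"}, {"x"})}
g = {
    0: {("a", "x"): 1, ("a", "y"): 1, ("b", "x"): 1, ("b", "y"): 2},
    1: {("a", "x"): 1},
    2: {("a", "x"): 2},
}

def cpre(U):
    """Exists a coalition action s.t. all opponent replies land in U."""
    return {w for w in STATES
            if any(all(g[w][(ax, ay)] in U for ay in acts[w][1])
                   for ax in acts[w][0])}

def upre(U):
    """For every coalition action, some opponent reply leads into U."""
    return {w for w in STATES
            if all(any(g[w][(ax, ay)] in U for ay in acts[w][1])
                   for ax in acts[w][0])}

U = {1}
assert cpre(U) == {0, 1}                      # at 0, action "a" forces U
assert upre(U) == STATES - cpre(STATES - U)   # duality: upre = W \ cpre(W \ U)
```

The last assertion is exactly the duality stated above; negating the exists-forall pattern of cpre yields the forall-exists pattern of upre.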
2.1.3. Intensional models

Alternating discrete systems. Alternating discrete systems (ads) are our most general intensional models. They are based on the fair discrete systems of Kesten and Pnueli [KP05]. An ads is a general first-order representation of game transition systems, that generalizes the turn-based, synchronous and asynchronous concurrency models of [AHK02] and recursive programming languages. States and fairness conditions are represented as value assignments to a finite set of typed variables. To enable a first-order representation of the next-state game, the players' available actions are represented by special action variables. The formal definitions are as follows.

Definition 2.1.14 An alternating discrete system (ads) is a sextuple S = ⟨P, VS, VP, ξ, χ, Φ⟩, where:
• P is a set of players.
• VS is a set of typed system variables; a state of the system will be a typed value assignment to the variables in VS.
• VP = {p → Vp | p ∈ P} associates to each player a finite set of typed action variables. Player p will choose an action by choosing values for Vp.
• ξ = {p → ξp | p ∈ P} associates to each player p a first-order formula over variables VS and Vp that restricts the actions player p can choose at each state: at state VS, player p can choose only actions such that ξp(VS, Vp) holds. We require that ∀VS ∃Vp ξp(VS, Vp) for all p ∈ P, where ∀VS and ∃Vp denote quantification over all variables in VS and Vp, respectively. The extension of ξ to a set of players X ⊆ P is defined as ξX(VS, VX) ≡ ⋀p∈X ξp(VS, Vp).
• χ is a first-order formula over VS, VP, and V′S that defines the next state V′S, i.e., such that

∀VS ∀VP . ξP(VS, VP) → ∃! V′S . χ(VS, VP, V′S);

χ represents the game matrix: χ(VS, VP, V′S) expresses the fact that the system moves from state VS to state V′S when the players' choices are VP.

¹ This follows from the set-theoretic definition of ∏x∈X Ax = {f : X → ⋃x∈X Ax | f(x) ∈ Ax for all x ∈ X}.
• Φ : P → B(∞QF(VS)) assigns to each player a fairness condition, represented as a Boolean formula over atoms of the form ∞ϕ (read “infinitely many times ϕ”), where ϕ is an assertion (quantifier-free formula) over VS. For example, ∞(x = 2 ∧ y > x) → ∞(y ≥ z²).

Example 2.1.15 Let us redefine the endless gomoku game of Example 2.1.12 as an ads. As before, we let P = {1, 2}. The following parameters are similar to the ones for the fgts, with some changes to make the system non-blocking. The set VS of system variables is:
• t : {0, 1, 2} — the player whose turn it is, or 0 to represent that the system is blocked;
• M : Z² → {0, 1, 2} — the board.

The set of action variables for player p ∈ {1, 2} is a singleton Vp = {mp : Z²}, denoting the position at which the player wants to put the stone. Letting read and write denote function symbols in the language, whose semantics is defined in the underlying assertional theory, we then have

ξp ≡ (t = p → read(M, mp) = 0),
χ ≡ (t = 0 ∧ t′ = 0 ∧ M′ = M)
  ∨ (t ≠ 0 ∧ M′ = write(M, t, mt) ∧ ∃k : Z² (read(M′, k) = 0) ∧ t′ = 3 − t)
  ∨ (t ≠ 0 ∧ M′ = write(M, t, mt) ∧ ∀k : Z² (read(M′, k) ≠ 0) ∧ t′ = 0)

and we can write the no-one-wins fairness conditions as

Φp ≡ ∞¬(five(1, M) ∨ five(2, M)),

which, while not identical to the one of Example 2.1.12 for arbitrary sequences, gives the same result for this system because stones are only added and never removed.

Definition 2.1.16 Given an ads, we define an equivalent fgts: the semantics of the ads is then defined as the semantics of the associated fgts. Precisely, given an ads ⟨P, VS, VP, ξ, χ, Φ⟩ on players P, we define the fgts ⟨P, W, A, δ, I, F⟩ on players P and propositions {π | π is a first-order formula with free variables VS} as follows:
• W = ∏v∈VS dom(v) is the set of all type-consistent value assignments to the variables in VS.
• A = ∐p∈P ∏v∈Vp dom(v) is the coproduct (sum) of the sets of all type-consistent value assignments to the variables Vp as p varies.
• δ(w) is the game ⟨P, A, W, a, g⟩, where:
  – a : P → 2^A is defined by a(p) = the set of value assignments to Vp such that ξp holds when VS has value w.
  – g : ∏p∈P a(p) → W is defined as g(α) = the unique assignment to V′S such that χ holds when VS has value w and, for all p, Vp has value α[p].
• I(π)(w) simply evaluates the first-order formula π with VS set to w.
• Fp = {x ∈ Wω | Φp evaluates to true when each ∞ϕ is interpreted as ϕ is true at infinitely many positions in x}.

Notice that, by definition, we only admit non-blocking adss. We can do this without loss of generality, since any blocking system can be converted into a non-blocking one by the addition of an extra state and proper transition games, and atl* properties converted similarly. Later in the paper, when we discuss proof rules, some transformations will modify the underlying ads. In those cases, the non-blocking and unique-successor properties of ξ and χ will always be preserved. This is an important fact to keep in mind, although often we shall not prove it for the specific cases, since it will be clear from inspection of the constructions.
Controllable and Uncontrollable Predecessors. The controllable and uncontrollable predecessor operators for adss are expressible by first-order operations:

cpreX(π)(VS) ≡ ∃VX . ξX(VS, VX) ∧ ∀VP\X . (ξP\X(VS, VP\X) → ∀V′S . (χ(VS, VP, V′S) → π(V′S)))    (1)

upreX(π)(VS) ≡ ∀VX . (ξX(VS, VX) → ∃VP\X . (ξP\X(VS, VP\X) ∧ ∃V′S . (χ(VS, VP, V′S) ∧ π(V′S))))    (2)
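For a finite-state ads, formula (1) can be evaluated by brute-force enumeration of the quantifiers. The following Python sketch is a toy illustration of that reading (one Boolean system variable, two players, and all names are ours, not from the paper):

```python
# Brute-force evaluation of the first-order cpre of (1) for a tiny
# invented ads: players p and q each own one Boolean action variable,
# and the single system variable is the Boolean x.
STATES = [False, True]
ACTS = [False, True]

def xi_p(x, ap): return True          # p may always choose either action
def xi_q(x, aq): return True          # q likewise
def chi(x, ap, aq, x2):               # next state: x' = ap AND (not aq)
    return x2 == (ap and not aq)

def cpre_p(pi, x):
    """exists a_p. xi_p and forall a_q. (xi_q -> forall x'. (chi -> pi(x')))"""
    return any(xi_p(x, ap) and
               all(not xi_q(x, aq) or
                   all(not chi(x, ap, aq, x2) or pi(x2) for x2 in STATES)
                   for aq in ACTS)
               for ap in ACTS)

# p can force x' = False from any state by playing a_p = False ...
assert all(cpre_p(lambda x2: not x2, x) for x in STATES)
# ... but p cannot force x' = True, since q can always reply a_q = True.
assert not any(cpre_p(lambda x2: x2, x) for x in STATES)
```

The nesting of any/all mirrors the ∃∀∀ quantifier prefix of (1) exactly; swapping each any for all and dualizing the implications would give formula (2).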
Guarded Alternating Discrete Systems. adss are a very general model of computation. In most applications, we can simplify the modeling and the verification significantly if we use a more specialized version of the model. This section presents a model based on guarded transitions, shared memory, and asynchronous concurrency that is the most useful among these specialized models.

Definition 2.1.17 A guarded alternating discrete system (gads) is a quadruple ⟨P, V, T, J⟩, where:
• P is a finite set of agents.
• V is a finite set of typed variables.
• T = ∪p∈P Tp is a finite set of transitions, partitioned among agents (transitions in Tp are controlled by (“belong to”) agent p). A transition τ ∈ T has the form τ : G → x := t, where G is the guard, a boolean expression over V, and x := t is an assignment of a list of expressions t over V to a list of variables x from V.
• J ⊆ T is the set of just transitions.

The intuition behind gadss is that an extra player, the scheduler, chooses, at every step, which agent executes. That agent then chooses one of its enabled transitions and executes it by simultaneously evaluating the expressions in t and assigning them to x, leaving all other variables unchanged. We also make the following two assumptions (which are inessential, but simplify the translation to adss considerably and hold in all the case studies that we shall treat):
1. For every agent p, the set Tp contains an idling transition, that is, a transition that is always enabled and leaves all variables unchanged.
2. The set J of just transitions only contains transitions τ : G → x := t such that G → ¬G[x/t] is valid, that is, all just transitions disable their own guard when taken.

Definition 2.1.18 We provide a formal semantics by associating, to every gads ⟨P, V, T, J⟩, an ads ⟨P ∪ {z}, VS, VP, ξ, χ, Φ⟩, where:
• z is a fresh agent called the scheduler;
• VS = V ∪ {λ}, where λ is a fresh variable of type P denoting the last scheduled agent;
• Vz = P, i.e., the scheduler z chooses which agent to schedule;
• Vp = Tp for p ∈ P, i.e., a regular player chooses which transition to execute;
• ξp(s, τ) holds if and only if s ⊨ Gτ; ξz(s, p) ≡ true;
• χ(s, ⟨τp1, . . . , τpn, pi⟩, s′) holds iff τpi is G → x := t and s′ = s[x → t(s), λ → pi];
• Φp ≡ ⋀τ∈Jp ∞¬Gτ for p ∈ P, where Jp = J ∩ Tp; Φz ≡ ⋀p∈P ∞(λ = p).
The condition ξ_z(s, p) ≡ t reflects the fact that the scheduler is free to choose any agent to execute next. This is possible because we assumed that every player has an idling transition that is always enabled, so the scheduler does not have to check whether an agent has available moves before it schedules it. Because of our assumption of self-disabling transitions, the simple fairness condition F_p suffices to capture the justice requirement of the guarded transitions. The fairness condition F_z states that every agent must be scheduled infinitely many times.

Controllable and uncontrollable predecessors. The main advantage of using a model like gads instead of a generic ads, besides the simplified modeling stage, is the simplified form of the cpre and upre operators, which is reflected in applications of virtually all proof rules.
Deductive verification of alternating systems
517
The cpre for a gads is:

cpre_X(π) ≡
  ⋀_{p∈X} ⋁_{τ∈T_p} (G_τ ∧ π[x_τ/t_τ, λ/p]) ∧ ⋀_{p∈P\X} ⋀_{τ∈T_p} (G_τ → π[x_τ/t_τ, λ/p])    if z ∉ X
  ⋁_{p∈X} ⋁_{τ∈T_p} (G_τ ∧ π[x_τ/t_τ, λ/p]) ∨ ⋁_{p∈P\X} ⋀_{τ∈T_p} (G_τ → π[x_τ/t_τ, λ/p])    if z ∈ X
                                                                                               (3)
Even if on paper this form seems more cumbersome than (1), in practice it is usually much more tractable. In particular, we shall see in Sects. 3 and 4 that many verification conditions are of the form ρ → cpre_X(π); in that case, if z ∉ X then validity of the verification condition is equivalent to the validity of the many smaller verification conditions

ρ → ⋁_{τ∈T_p} (G_τ ∧ π[x_τ/t_τ, λ/p])    for p ∈ X
ρ ∧ G_τ → π[x_τ/t_τ, λ/p]                for p ∈ P\X and τ ∈ T_p
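The gads form of cpre can be evaluated mechanically once guards and updates are explicit. A sketch of the z ∉ X case of formula (3), with assertions as Python predicates over valuations; the two-agent counter system is invented for illustration, and the λ bookkeeping of Definition 2.1.18 is omitted because the target assertion does not mention λ:

```python
# Toy gads: one integer variable n; agent "inc" increments while n < 3,
# agent "dec" decrements while n > 0; each agent also has an idling
# transition. Transitions are (guard, update) pairs.
T = {
    "inc": [(lambda s: s["n"] < 3, lambda s: {"n": s["n"] + 1}),
            (lambda s: True,       lambda s: dict(s))],   # idling transition
    "dec": [(lambda s: s["n"] > 0, lambda s: {"n": s["n"] - 1}),
            (lambda s: True,       lambda s: dict(s))],   # idling transition
}

def cpre_gads(X, pi):
    """z not in X case of (3): every agent in X has some enabled transition
    into pi, and every enabled transition of the remaining agents leads
    into pi. (The lambda/scheduled-agent update is omitted here.)"""
    def pred(s):
        own = all(any(g(s) and pi(u(s)) for g, u in T[p]) for p in X)
        others = all(pi(u(s)) for p in T if p not in X
                     for g, u in T[p] if g(s))
        return own and others
    return pred
```

For example, `cpre_gads(("inc",), lambda s: s["n"] <= 2)` holds exactly when n ≤ 2: the adversary's idling transition forces the target to hold already in the current state.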
Given that checking verification conditions usually requires either a theorem prover or a decision procedure whose complexity is at least exponential, several small verification conditions are usually a win over a few large ones.

Linear controlled transition systems. As another illustration of an even more specialized model of computation, we describe a form of ads in which the only variable types are discrete or vector spaces and whose actions are combinations of linear tests and transformations. We also do not consider fairness. This is a very interesting model of computation because of its relations with the theory of control of discrete systems (see [Sla07] for some applications of the methods presented in this paper to linear systems). Because of this, we also fix the players, although extending this model to allow additional players is simple.

Definition 2.1.19 A linear controlled transition system (lcts) is a sextuple ⟨N_x, N_u, N_d, L, I, T⟩, where:
• N_x ∈ ℕ is the dimension of the real vector space of system (state) variables x = (x_1 … x_{N_x})^tr ∈ ℝ^{N_x}.
• N_u ∈ ℕ is the dimension of the space of control (input) variables u.
• N_d ∈ ℕ is the dimension of the space of disturbance (drift) variables d.
• L is a finite set of locations.
• I is a map from L to linear assertions over x.
• T is a finite set of transitions. Each transition τ = ⟨ℓ, m, ξ_u, ξ_d, f⟩ ∈ T consists of:
  – a source location ℓ ∈ L;
  – a target location m ∈ L;
  – ξ_u: a linear assertion over x and u restricting the control variables;
  – ξ_d: a linear assertion over x and d restricting the disturbance variables;
  – f: the transition function, an affine transformation of the form x := f(x, u, d) = Ax + Bu + Cd + b, specifying the action taken by the transition; A, B, C, and b are constants for the transition but vary between transitions.
We provide a formal semantics by associating, to every lcts ⟨N_x, N_u, N_d, L, I, T⟩, a gts ⟨P, W, A, δ, I⟩, where:
• P = {z, u, d}:
  – z is the scheduler;
  – u is the controller;
  – d is the disturber.
• W = {⟨ℓ, x⟩ ∈ L × ℝ^{N_x} | x ⊨ I(ℓ)}; in this setting we also call the elements of W configurations.
• A = T + ℝ^{N_u} + ℝ^{N_d}.
• δ(⟨ℓ, x⟩) = ⟨P, A, W, a, g⟩ is the game defined by:
  – a(z) = {τ = ⟨ℓ, …⟩ ∈ T}, the set of transitions with source in ℓ;
  – a(u) = ⋃_{⟨ℓ,m,ξ_u,ξ_d,f⟩∈a(z)} {u ∈ ℝ^{N_u} | ξ_u(x, u)};
  – a(d) = ⋃_{⟨ℓ,m,ξ_u,ξ_d,f⟩∈a(z)} {d ∈ ℝ^{N_d} | ξ_d(x, d)};
  – g(τ, u, d) = ⟨m, f(x, u, d)⟩ if f(x, u, d) ⊨ I(m), and ⊥ otherwise.
• The underlying set of propositions is {⟨A, b⟩ | A ∈ ℝ^{M×N_x} and b ∈ ℝ^M for some M ∈ ℕ} and

I(⟨A, b⟩)(x) = 1 if Ax + b ≥ 0, and 0 otherwise.
In other words, the propositions that we consider are affine assertions over x. Similarly to what was done for adss, to limit finite runs we assume, without loss of generality, the following syntactic restrictions:
• Each ℓ ∈ L has an outgoing transition τ = ⟨ℓ, …⟩, and therefore the scheduler can always pick a transition.
• For all configurations ⟨ℓ, x⟩ such that x ⊨ I(ℓ) and all transitions ⟨ℓ, m, ξ_u, ξ_d, f⟩, we have ∃u. ξ_u(x, u) and ∃d. ξ_d(x, d), and therefore, no matter the scheduler's choice, the controller and the disturber can always pick some values.

Controllable and uncontrollable predecessors. In lctss we shall always be interested in playing the controller (player u) against the other two players. The controllable predecessor, in this case, is as follows:

cpre_u(π) ≡ ⋀_{⟨ℓ,m,ξ_u,ξ_d,f⟩∈T} [ I(ℓ)(x) → ∃u. ξ_u(x, u) ∧ ∀d. ξ_d(x, d) → π(⟨m, f(x, u, d)⟩) ∧ I(m)(f(x, u, d)) ]    (4)

In the context of linear systems, one is usually only concerned with propositions of the form ⟨m, π⟩, where π is an affine assertion on x. In this case, cpre_u further simplifies to

cpre_u(⟨m, π⟩) ≡ ⋀_{⟨ℓ,m,…⟩∈T} cpre_u(π, τ),

where

cpre_u(π, τ) ≡ I(ℓ)(x) → ∃u. ξ_u(x, u) ∧ ∀d. ξ_d(x, d) → π(f(x, u, d)) ∧ I(m)(f(x, u, d)).
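The ∃u.∀d. alternation in (4) can be sanity-checked numerically before attempting exact quantifier elimination. The sketch below uses finite grids in place of the quantifiers for a hypothetical one-dimensional lcts with a single transition x′ = x + u + d, |u| ≤ 1, |d| ≤ 0.5; it is a debugging aid, not a decision procedure, and all the concrete bounds are invented:

```python
# Grid-sampling approximation of cpre_u at a single configuration:
# exists u (grid), for all d (grid): pi(f(x, u, d)).
def cpre_u_sample(x, pi,
                  us=[(i - 20) / 20 for i in range(41)],    # u in [-1, 1]
                  ds=[(i - 10) / 20 for i in range(21)]):   # d in [-0.5, 0.5]
    return any(all(pi(x + u + d) for d in ds) for u in us)

pi = lambda x: abs(x) <= 2   # target region |x'| <= 2
```

From x = 2.4 the controller can choose u = −1, which keeps x′ within [0.9, 1.9] for every disturbance; from x = 3.6 even u = −1 leaves x′ ≥ 2.1, so the check fails there.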
2.2. Alternating temporal logics

Since temporal logics are well studied and our focus is on alternation, we define a flexible version of atl that allows more than just ltl operators in the temporal part of the specification. In most cases we shall use atl*, and this will be understood. The more general definition in the first subsection lets us focus on the essence of alternation and keep the semantic definitions concise. The following sections provide examples of specific instances, including atl* and atl, but also more general logics to be used as examples. We refer the reader to [MP95] for an introduction to ltl and to [Tho97] for the theory of ω-languages.
2.2.1. Syntax

Fix a set Σ of ω-language symbols and a function n : Σ → ℕ, the arity function of the symbols.

Definition 2.2.1 Given a set P of agent symbols, a set Π of proposition symbols, a set Σ of ω-language symbols called the signature, and an arity function n, we define atl(Σ) state formulas ϕ and path formulas ψ, by mutual induction, by the following grammar:

ϕ ::= π | ¬ϕ | ϕ ∧ ϕ | ϕ ∨ ϕ | ⟪X⟫ψ | ⟦X⟧ψ | ⟪X⟫_f ψ | ⟦X⟧_f ψ
ψ ::= σ(ϕ_1, …, ϕ_{n(σ)})

where π ∈ Π, X ⊆ P, and σ ∈ Σ. In other words, a state formula is a Boolean combination of propositions and alternating quantifiers ⟪X⟫, ⟦X⟧, ⟪X⟫_f, and ⟦X⟧_f applied to path formulas; a path formula is an ω-language symbol applied to state formulas.

Notation If p1, …, pn ∈ P, we write ⟪p1, …, pn⟫ψ instead of ⟪{p1, …, pn}⟫ψ, and similarly for the ⟦·⟧, ⟪·⟫_f, and ⟦·⟧_f operators. We also write ⟪⟫ψ instead of ⟪∅⟫ψ (and similarly for the other operators).

Example 2.2.2 We shall define the logic atl [AHK02]. Let Σ_atl = {◯, □, ♦, U, W} and

n(◯) = n(□) = n(♦) = 1,    n(U) = n(W) = 2.

Then, for example, the following are state formulas (let P = {p1, p2}, Π = {π1, π2}):

⟪p1⟫_f ◯π2    ⟪⟫W(π2, ¬π2)    ⟪p1, p2⟫U(π1, ⟪p1⟫□(π1 ∧ π2))

We commonly use infix notation for U and W and write

⟪p1⟫_f ◯π2    ⟪⟫(π2 W ¬π2)    ⟪p1, p2⟫(π1 U ⟪p1⟫□(π1 ∧ π2))

In the future, if the set Σ is not specified, we shall assume it is the set of all future ltl languages and use the syntax with operators ◯, □, ♦, U, and W. A precise definition of this language will be given in Sect. 2.2.4.

We use ϕ1 → ϕ2 and ϕ1 ↔ ϕ2 as abbreviations for (¬ϕ1) ∨ ϕ2 and (ϕ1 → ϕ2) ∧ (ϕ2 → ϕ1), respectively.

Notation Operator precedence, in decreasing order, is:
1. the unary symbols ¬, ⟪X⟫, ⟦X⟧, ⟪X⟫_f, and ⟦X⟧_f;
2. the binary symbols ∧ and ∨;
3. the derived binary symbols → and ↔.
Inside each of these three classes, we define no precedence and use parentheses to specify the correct parsing.
2.2.2. Semantics

Definition 2.2.3 Let ⟨B, ⊓, ⊔, ¬, 0, 1⟩ be the two-element Boolean lattice. Given a set Σ of ω-language symbols and an arity function n, a temporal interpretation or temporal logic is a function L : Σ → ⋃_{n∈ℕ} ((B^ω)^n → B) such that, for all σ ∈ Σ, L(σ) : (B^ω)^{n(σ)} → B.
Example 2.2.4 Consider the signature Σ_atl from Example 2.2.2. We define the interpretation L_atl as follows:

L_atl(◯)(b_0 b_1 …) = b_1
L_atl(□)(b_0 b_1 …) = ⊓_{j≥0} b_j
L_atl(♦)(b_0 b_1 …) = ⊔_{j≥0} b_j
L_atl(U)(b_0^1 b_1^1 …, b_0^2 b_1^2 …) = ⊔_{j2≥0} ( b_{j2}^2 ⊓ ⊓_{0≤j1<j2} b_{j1}^1 )
L_atl(W)(b_0^1 b_1^1 …, b_0^2 b_1^2 …) = L_atl(U)(b_0^1 b_1^1 …, b_0^2 b_1^2 …) ⊔ L_atl(□)(b_0^1 b_1^1 …)

This corresponds to the standard definitions. For example, L_atl(□)(b_0 b_1 …) = 1 if and only if b_j = 1 for all j ≥ 0.

Definition 2.2.5 Fix:
• a set Π of propositions or assertions;
• a set Σ of ω-language symbols;
• an arity function n for Σ;
• a temporal interpretation L over Σ.
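The lattice operations of Example 2.2.4 become computable if we restrict attention to ultimately periodic sequences b = u·v^ω, represented as a pair (u, v) of 0/1 lists with v nonempty. The sketch below is an illustration under that assumption; the bound used in `until` (prefix lengths plus the product of the two periods) safely covers one full joint period of the pair of sequences:

```python
def at(b, i):
    """Bit i of the sequence u . v^omega."""
    u, v = b
    return u[i] if i < len(u) else v[(i - len(u)) % len(v)]

def nxt(b):            # L_atl(next): the bit at position 1
    return at(b, 1)

def always(b):         # L_atl(box): meet over all positions
    return int(all(b[0]) and all(b[1]))

def eventually(b):     # L_atl(diamond): join over all positions
    return int(any(b[0]) or any(b[1]))

def until(b1, b2):     # L_atl(U); a witness, if any, appears this early
    n = len(b1[0]) + len(b2[0]) + len(b1[1]) * len(b2[1])
    for j in range(n):
        if at(b2, j) and all(at(b1, i) for i in range(j)):
            return 1
    return 0
```

For instance, with b = ([1, 0], [1]) (the sequence 1 0 1 1 1 …), `always(b)` is 0 and `eventually(b)` is 1.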
Every fair game transition system S = ⟨P, W, A, δ, I, F⟩ defines the interpretation of atl(Σ) formulas by associating:
• to every state formula ϕ, a function ⟦ϕ⟧ : W → B, and
• to every path formula ψ, a function ⟦ψ⟧ : W^ω → B.

The definition is by induction on the structure of the formulas:
• If π ∈ Π, then ⟦π⟧ = I(π);
• ⟦¬ϕ⟧(w) = ¬⟦ϕ⟧(w) for all w ∈ W;
• ⟦ϕ1 ∧ ϕ2⟧(w) = ⟦ϕ1⟧(w) ⊓ ⟦ϕ2⟧(w) for all w ∈ W;
• ⟦ϕ1 ∨ ϕ2⟧(w) = ⟦ϕ1⟧(w) ⊔ ⟦ϕ2⟧(w) for all w ∈ W;
• ⟦⟪X⟫ψ⟧(w) = max_{s∈strat(X)} min_{x∈out(w,s)} ⟦ψ⟧(x);
• ⟦⟦X⟧ψ⟧(w) = min_{s∈strat(X)} max_{x∈out(w,s)} ⟦ψ⟧(x);
• ⟦⟪X⟫_f ψ⟧(w) = max_{s∈strat(X)} min_{x∈out(w,s)} ( F_{P\X}(x) → (F_X(x) ⊓ ⟦ψ⟧(x)) );
• ⟦⟦X⟧_f ψ⟧(w) = min_{s∈strat(X)} max_{x∈out(w,s)} ( F_{P\X}(x) ⊓ (F_X(x) → ⟦ψ⟧(x)) );
• ⟦σ(ϕ1, …, ϕ_{n(σ)})⟧(x) = L(σ)( ⟦ϕ1⟧(x), …, ⟦ϕ_{n(σ)}⟧(x) ), where ⟦ϕ⟧ : W^ω → B^ω simply applies the function ⟦ϕ⟧ component-wise.
A gts with no fairness similarly defines an interpretation of all formulas without occurrences of ⟪·⟫_f or ⟦·⟧_f.

If ⟦ϕ⟧(w) = 1 in a game transition system S, we say that ϕ is true at state w in S and write S, w ⊨ ϕ. Analogously, if ⟦ψ⟧(x) = 1 in S, we say that ψ is true of path x in S and write S, x ⊨ ψ. If S is clear from the context, we shall sometimes omit it and write simply w ⊨ ϕ, x ⊨ ψ.

Finally, if π is an assertion and ϕ is a state formula, we shall write π ⇒ ϕ to mean that w ⊨ ϕ for all w with I(π)(w) = 1.
2.2.3. Positive normal form

Definition 2.2.6 A temporal logic L is normal if for every σ ∈ Σ there is σ̄ ∈ Σ such that L(σ̄)(I_1, …, I_{n(σ)}) = ¬L(σ)(Ī_1, …, Ī_{n(σ)}) for all I_1, …, I_{n(σ)} ∈ B^ω (where Ī_k denotes component-wise complementation).

Given a normal interpretation, we can push negations to the assertion level in any atl(Σ) formula by applying the following rewrite rules exhaustively:

¬¬ϕ −→ ϕ
¬(ϕ ∨ ψ) −→ (¬ϕ) ∧ (¬ψ)
¬(ϕ ∧ ψ) −→ (¬ϕ) ∨ (¬ψ)
¬⟪X⟫ϕ −→ ⟦X⟧¬ϕ
¬⟦X⟧ϕ −→ ⟪X⟫¬ϕ
¬⟪X⟫_f ϕ −→ ⟦X⟧_f ¬ϕ
¬⟦X⟧_f ϕ −→ ⟪X⟫_f ¬ϕ
¬σ(ϕ_1, …, ϕ_n) −→ σ̄(¬ϕ_1, …, ¬ϕ_n)
Proposition 2.2.7 Given a normal temporal logic L:
1. for all state formulas ϕ1 and ϕ2, if ϕ1 −→* ϕ2 then ⟦ϕ1⟧ = ⟦ϕ2⟧;
2. for all path formulas ψ1 and ψ2, if ψ1 −→* ψ2 then ⟦ψ1⟧ = ⟦ψ2⟧;
3. the rewriting system −→ is terminating and convergent;
4. normal forms are the formulas in which the only negations appear in front of proposition symbols.

Proof Parts (1) and (2) follow from a simple induction argument. For (3), consider the following ranking function δ on atl(Σ) formulas:

δ(ϕ) = the multiset of heights of occurrences of ¬ in ϕ
Then it is easy to check that δ decreases from the left to the right of every rewrite rule. Thus the rewriting system is terminating. It is confluent because it is left-linear and non-overlapping [Mit96]. Finally, (4) follows by examination of the rules.
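The rewriting system above is easy to implement directly. A sketch with formulas as nested tuples, where `"E"`/`"A"` stand for ⟪X⟫/⟦X⟧ with a fairness flag, and `("dual", sigma)` names the symbol written σ̄ in the text (the encoding is my own, chosen for compactness):

```python
def pnf(phi, negated=False):
    """Push negations down to the proposition level."""
    op = phi[0]
    if op == "prop":
        return ("not", phi) if negated else phi
    if op == "not":                           # double negations cancel here
        return pnf(phi[1], not negated)
    if op in ("and", "or"):                   # De Morgan
        flip = {"and": "or", "or": "and"}
        return (flip[op] if negated else op,
                pnf(phi[1], negated), pnf(phi[2], negated))
    if op in ("E", "A"):                      # ("E", X, fair, psi): dualize
        flip = {"E": "A", "A": "E"}
        return (flip[op] if negated else op, phi[1], phi[2],
                pnf(phi[3], negated))
    if op == "sig":                           # ("sig", sigma, phi1, ..., phik)
        sigma = ("dual", phi[1]) if negated else phi[1]
        return ("sig", sigma) + tuple(pnf(a, negated) for a in phi[2:])
    raise ValueError(op)
```

Carrying the pending negation down as a flag makes termination immediate, mirroring the multiset ranking argument in the proof.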
2.2.4. The ltl signature and atl*

The most important signature, and the one that will be considered most of the time in this paper, is the ltl signature. The classical logic atl* from [AHK02] is precisely the atl(Σ_ltl) defined below.

Definition 2.2.8 The ltl signature for players P and propositions Π is the signature ⟨Σ, n⟩ where:
• Σ is the set of all future ltl formulas on propositions 1, 2, …: i.e., the set of strings generated by the grammar

F ::= 1 | 2 | … | ¬F | F ∧ F | F ∨ F | ◯F | □F | ♦F | F U F | F W F

• n(ϕ), for ϕ ∈ Σ, is the largest nonnegative integer i such that i occurs in ϕ.

Notation Operator precedence, in decreasing order, is:
1. the unary symbols ¬, ⟪X⟫, ⟦X⟧, ⟪X⟫_f, ⟦X⟧_f, ◯, □, and ♦;
2. the binary symbols U and W;
3. the binary symbols ∧ and ∨;
4. the derived binary symbols → and ↔.
The operators U and W associate to the right. Otherwise, we define no precedence inside each class and use parentheses to specify the correct parsing. For example, ¬(x = y) U (x = y) U ¬(x = y) is the same as the fully parenthesized (¬(x = y)) U ((x = y) U (¬(x = y))).

Definition 2.2.9 We now define the temporal interpretation L_ltl by defining, by induction on σ ∈ Σ_ltl, the value of L_ltl(σ)(b) for all b ∈ (B^k)^ω such that k ≥ n(σ) (we omit the subscript ltl in the positions below):
• If σ = i ∈ {1, 2, …}, then L(σ)(b) = b_0[i].
• L(¬σ) = ¬L(σ).
• L(σ1 ∧ σ2) = L(σ1) ⊓ L(σ2).
• L(σ1 ∨ σ2) = L(σ1) ⊔ L(σ2).
• L(◯σ)(b) = L(σ)(b_{1…}).
• L(□σ)(b) = ⊓_{j≥0} L(σ)(b_{j…}).
• L(♦σ)(b) = ⊔_{j≥0} L(σ)(b_{j…}).
• L(σ1 U σ2)(b) = ⊔_{j2≥0} ( L(σ2)(b_{j2…}) ⊓ ⊓_{0≤j1<j2} L(σ1)(b_{j1…}) ).
• L(σ1 W σ2) = L(σ1 U σ2) ⊔ L(□σ1).

Remark 2.2.10 The alternating next-time operators are tightly connected to the predecessor operators of Sect. 2.1: for every state formula ϕ,

cpre_X(⟦ϕ⟧) = ⟦⟪X⟫◯ϕ⟧    and    upre_X(⟦ϕ⟧) = ⟦⟦X⟧◯ϕ⟧.

This can be verified by comparing the definitions.
2.2.5. The Muller signature

The second signature that we shall use is that which includes all ω-regular languages. Any family of automata (Büchi, Streett, parity, …) or the extended temporal logic etl can of course be used for the purpose. As an illustration, we define a temporal interpretation based on deterministic Muller automata.

Definition 2.2.11 The Muller signature for players P and propositions Π is the signature ⟨Σ, n⟩ where:
• Σ is the set of all deterministic Muller automata on alphabet B^n for any n ∈ ℕ. A deterministic Muller automaton on B^n is a quadruple ⟨Q, q0, δ, F⟩ where:
  – Q is a finite set of states;
  – q0 ∈ Q is the initial state;
  – δ : Q × B^n → Q is the transition function;
  – F ⊆ 2^Q is the acceptance condition.
• The arity function has value n on the automata on alphabet B^n.

Definition 2.2.12 We now define the temporal interpretation L_Muller. Given a Muller automaton A = ⟨Q, q0, δ, F⟩ on B^n and an infinite sequence b ∈ (B^n)^ω, the run of the automaton on b is the unique infinite sequence q0, q1, … ∈ Q^ω such that
• q0 is the initial state of the automaton;
• q_{i+1} = δ(q_i, b_i) for all i ∈ ℕ.

Then we define L_Muller(A)(b) = 1 if {q | q appears infinitely many times in the run on b} ∈ F, and 0 otherwise.
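Definition 2.2.12 can be read off directly for ultimately periodic inputs b = u·v^ω: run the automaton through u, then iterate around v until a (state, position-in-v) pair repeats; the states on that loop are exactly the ones visited infinitely often. A sketch (the two-state automaton below, accepting "eventually only 1s are read", is a made-up example):

```python
def muller_accepts(q0, delta, F, u, v):
    q = q0
    for b in u:                      # consume the finite prefix
        q = delta[(q, b)]
    seen, loop = {}, []
    i, pos = 0, 0
    while (q, pos) not in seen:      # find the lasso on the periodic part
        seen[(q, pos)] = i
        loop.append(q)
        q = delta[(q, v[pos])]
        pos = (pos + 1) % len(v)
        i += 1
    inf = frozenset(loop[seen[(q, pos)]:])   # states visited infinitely often
    return inf in F

# State "s1" iff the last letter read was 1:
delta = {(q, b): "s%d" % b for q in ("s0", "s1") for b in (0, 1)}
F = {frozenset({"s1"})}              # accept iff the run settles in s1
```

For example, the word 0 0 1^ω is accepted (the run settles in s1), while 0 (1 0)^ω is not (both states recur).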
3. A complete proof system for ATL*

Since our goal is to prove atl* properties of infinite-state alternating systems, we first seek the strongest and most general proof system possible, in order to understand the complexities involved and the theoretical possibilities and limitations of such an approach. Our underlying philosophy is that it is epistemologically sounder to start with a complete proof system, albeit an impractical one, and then derive more practical approaches by simplifications and approximations. Doing so, we know where the approximations have been made and gain a deeper understanding of the applicability of a particular method. Even methods devised independently of our general proof system can gain, from fitting in a general framework, a better understanding of their powers and limitations.

We hence present, in this section, the central result of the paper: a sound and relatively complete proof system for arbitrary atl* properties over alternating discrete systems (ads). While the proof system could easily be adapted to somewhat more general logics and modeling languages, the choices of atl* and ads are the most natural: atl* is the most common alternating logic in the literature and is sufficient to cover all the problems that we want to address in the applications part of this paper. Alternating discrete systems are the most general model in which single steps of the computation can be expressed in first-order logic and which enables us to devise a proof system of the form commonly used in formal verification: proof rules that generate assertional verification conditions to be solved by a first-order theorem prover or decision procedures. Moreover, we shall prove relative completeness of our system with respect to the same assertional language required for relative completeness of Hoare-Floyd logic of while programs [Hoa69, Flo67, SdRG89].
If we disregarded the expressibility requirements, then the completeness proof would go through for arbitrary fgtss with ω-regular fairness conditions.

Our proof system consists of proof rules that reduce the verification of an atl* property over an ads to a set of assertional (normally, first-order) verification conditions in the underlying assertion language of the system. The verification conditions are expressed in terms of the cpre and upre predicate transformers. The advantage of parameterizing the proof rules by cpre is that the rules are independent of the system structure, but the resulting verification conditions for different types of systems (e.g., turn-based systems) can be simplified by instantiating cpre with the version that exploits the more constrained system structure.
3.1. Description of the proof system

Our proof system operates on statements of the form S ⊨ π ⇒ ((X))ϕ, where S is an ads, π is an assertion, ((X)) is an alternating quantifier, and ϕ is a path formula in positive normal form. When clear from the context, we omit S and simply write π ⇒ ((X))ϕ. The rules of the proof system can be classified into four groups:
1. A basic state rule, which reduces all statements to the form π ⇒ ((X))ϕ, where ϕ is an ltl formula.
2. A basic path rule, which reduces ϕ (an ltl formula) to an assertion while extending the system S by synchronously composing it with an automaton for ϕ. Thus our proof system incrementally converts temporal formulas into finite automata that are then composed with the system. This technique of lifting automata-theoretic results to proof systems was first proposed by Vardi and applied to ltl [Var91]. Later a similar approach was applied to ctl [FG96] and ctl* [KP05]. Our approach is most closely related to that in [KP05].
3. A history rule, which augments the system with extra history variables such that, in the new system, players X can win ϕ with memoryless strategies from their winning set.
4. Assertion rules, which reduce the validity of statements about winnability with memoryless strategies to assertional verification conditions.

The application of these rules results in a set of verification conditions to be proved in the underlying theory of the system plus the cpre predicate transformers. The advantage of parameterizing the underlying language by the cpre operators is that, as illustrated in Sect. 2, in most practical cases the alternating system has specific properties that can be exploited by defining a simpler version of cpre than the generic form for adss.
Completeness of the proof system is relative to validities in the first-order logic, with fixed points and cpre, of the underlying theory, the same as required for relatively complete proof systems for ltl [MP89] or for program termination.
3.1.1. Basic state rule

For ψ a state formula appearing with positive polarity in ϕ(ψ),

basic-state:
    π ⇒ ϕ(ρ)        ρ ⇒ ψ
    ──────────────────────
         π ⇒ ϕ(ψ)

Here and later in the paper, if no "S ⊨" is present in the rule, we imply a unique system on the left of all statements. This rule says that, in order to prove π ⇒ ϕ(ψ), where ψ is a state formula appearing with positive polarity in ϕ, we guess an assertion ρ underapproximating the set of states on which ψ holds and substitute ρ for ψ in ϕ. The two premises require us to prove that ρ is indeed an underapproximation (ρ ⇒ ψ) and that the formula after the substitution holds in the initial states (π ⇒ ϕ(ρ)). Notice that there is an implicit "S ⊨" at the left of every line in the rule.
3.1.2. Basic path rule

The unfair rule. The basic-path rule requires the alternating quantifiers to be fair and adds more fairness conditions to the system. If we start with an unfair alternating quantifier, we simply remove all fairness conditions from the system before applying the rule. This is formalized in the following simple unfair rule:

unfair:
    S_unfair ⊨ π ⇒ ((X))_f ϕ
    ────────────────────────
    S ⊨ π ⇒ ((X))ϕ

where S_unfair is identical to S except that all players' fairness conditions are replaced by t.

Augmentation. The basic-path rule applies to statements of the form S ⊨ π ⇒ ((X))_f ϕ, where ϕ is an ltl formula and ((X))_f is a fair alternating quantifier (⟪X⟫_f or ⟦X⟧_f). The first step in the application of this rule is the synchronous composition of an automaton for ϕ with S [Var91, KP05]. Let ϕ be a (quantifier-free) ltl formula over variables V_S. Let A_ϕ = ⟨Q, q0, δ, F⟩ be a deterministic Muller automaton accepting the language of ϕ, where:
• Q is a finite set of states;
• q0 ∈ Q is the initial state;
• δ : Q × W → Q is a full deterministic transition function;
• F ⊆ 2^Q is the Muller acceptance condition.

The definition suggests that the alphabet of A_ϕ is W, the (generally infinite) set of states of the underlying ads. Only a finite quotient of W, however, is necessary, determined by the values assumed by the states on the atoms of ϕ. We use the Muller acceptance condition (a run is accepted if, and only if, the set of states that it traverses infinitely often is a member of F) for simplicity of notation. In practice, other forms of acceptance conditions (e.g., Streett, Rabin, if possible Büchi) are preferable for efficiency of representation.

The role of A_ϕ is to act as a temporal tester [KP05], that is, to observe the evolution of ϕ on the ads. To achieve this, we construct a synchronous composition of A_ϕ and the ads S and introduce a new player p_ϕ with the fairness conditions of A_ϕ. The requirement that A_ϕ be deterministic ensures that no player gains power by the composition with A_ϕ. In particular, the new player p_ϕ has only one choice of action at all times. The formal definition is as follows.
Let S = ⟨P, V_S, V_P, ξ, χ, F⟩ be an ads, and A = ⟨Q, q0, δ, F_A⟩ a deterministic Muller (or Büchi, or Streett, …) automaton on alphabet W, as defined above. The synchronous composition of S and A, denoted S ⊗ A, is the ads Ŝ = ⟨P̂, V̂_S, V̂_P, ξ̂, χ̂, F̂⟩, where:
• P̂ = P ∪ {p_A}, where p_A is a new player;
• V̂_S = V_S ∪ {q}, where q is a new variable of type Q;
• V̂_p = V_p if p ∈ P; V̂_{p_A} = ∅;
• ξ̂_p(V̂_S, V_p) ≡ ξ_p(V_S, V_p) if p ∈ P; ξ̂_{p_A}(V̂_S, ∅) ≡ t;
• χ̂(V̂_S, V̂_P, V̂_S′) ≡ χ(V_S, V_P, V_S′) ∧ q′ = δ(q, V_S);
• F̂_p ≡ F_p if p ∈ P; F̂_{p_A} ≡ F_A, where F_A is an expression of A's acceptance condition.

The basic-path rule. For an ltl formula ϕ,

basic-path:
    S ⊗ A_ϕ ⊨ π ∧ q = q0 ⇒ ((X, p_ϕ))_f t
    ──────────────────────────────────────
    S ⊨ π ⇒ ((X))_f ϕ

where A_ϕ is a deterministic automaton on infinite words accepting ϕ, ⊗ is synchronous composition, ((X))_f is either ⟪X⟫_f or ⟦X⟧_f, and p_ϕ stands for p_{A_ϕ}.
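On explicit finite graphs the synchronous composition is a simple product construction: product states are (w, q) pairs and the automaton component is updated deterministically by δ reading the current system state, so no player gains choices. A minimal sketch (the example graph and automaton are invented):

```python
def compose(succ, delta, q0, init):
    """succ: system state -> successor set; delta: (q, state) -> q'.
    Returns the reachable product successor map from the initial states."""
    prod, frontier = {}, [(w, q0) for w in init]
    while frontier:
        w, q = frontier.pop()
        if (w, q) in prod:
            continue
        # the automaton reads the current system state w
        prod[(w, q)] = {(w2, delta[(q, w)]) for w2 in succ[w]}
        frontier.extend(prod[(w, q)])
    return prod

prod = compose({"a": {"b"}, "b": {"a"}},
               {("q0", "a"): "q1", ("q0", "b"): "q0",
                ("q1", "a"): "q1", ("q1", "b"): "q0"},
               "q0", ["a"])
```

Here only two of the four product states are reachable, which is the finite quotient the composition actually needs to track.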
3.1.3. History rule

The purpose of the history rule is to allow for memoryless strategies in all the games of interest. Consider a property of the form S ⊨ π ⇒ ⟪X⟫ϕ, where ϕ is an ltl property. By [Zie98], if we partition the states of S into two sets, W1 and W2, such that players X have a winning strategy (can ensure ϕ) from every state in W1, but not from any state in W2, then players X have a finite-memory winning strategy from every state in W1. The application of the assertion rules requires that X have a memoryless strategy. To achieve this, we add to the structure some new variables, called history variables, and a new player h (for history). We let h play in coalition with X and give it the task of maintaining the history variables: at every step, h will make a deterministic choice for the history variables, and the game matrix χ will simply copy these choices into the next stage of the game.

History augmentations. Let h be a new player and V_h a new set of history variables. We define the history augmentation of S with history V_h, denoted S[h, V_h], to be the ads Ŝ = ⟨P̂, V̂_S, V̂_P, ξ̂, χ̂, F̂⟩, where:
• P̂ = P ∪ {h};
• V̂_S = V_S ∪ V_h*, where V_h* is a copy of V_h;
• V̂_p = V_p if p ∈ P; V̂_h = V_h;
• ξ̂_p(V̂_S, V_p) ≡ ξ_p(V_S, V_p) if p ∈ P; ξ̂_h(V̂_S, V_h) ≡ t;
• χ̂(V̂_S, V̂_P, V̂_S′) ≡ χ(V_S, V_P, V_S′) ∧ V_h*′ = V_h;
• F̂ ≡ F.
Fig. 1. Example 3.1.1. (a) Game transition system S. (b) The two memoryless strategies; neither guarantees □♦b ∧ □♦c. (c) The augmented system S[h, V_h : B] and a memoryless strategy for ⟪p⟫(□♦b ∧ □♦c)
The history rule. For an ltl formula ϕ,

history:
    S[h, V_h] ⊨ π ⇒ ((X, h))ϕ
    ──────────────────────────
    S ⊨ π ⇒ ((X))ϕ

where S[h, V_h] is a history augmentation of S.

Example 3.1.1 In practice, the history rule is not needed much. In fact, in our case study in Sect. 5 we never use it. Nevertheless, this example, adapted from [Tho95], shows that it is necessary for completeness of the proof system. Consider the gts in Fig. 1a. There is only one player, p. Vertices in the graph are states and p chooses the next state. There is one proposition per state. Consider the atl* formula

ϕ ≡ ⟪p⟫(□♦b ∧ □♦c)

Then ϕ holds in any state of the system, as exemplified by the strategy for p that, at a, goes alternately to b and to c. No memoryless winning strategy can achieve the same goal, though: Fig. 1b shows the only two memoryless strategies (a memoryless strategy in this case is the choice of precisely one outgoing edge for every state). If we augment the system with a Boolean variable we obtain the gts in Fig. 1c. Now there is a memoryless winning strategy. Since the assertion rules can only prove alternating formulas achievable with memoryless strategies, this example proves that they are incomplete on their own.
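Example 3.1.1 can be replayed concretely. In the graph a → {b, c}, b → {a}, c → {a}, a memoryless strategy fixes one successor per state, so from a it always moves to the same one of b, c; one bit of history repairs this. The simulation below crudely approximates "infinitely often" by the states seen in the second half of a long run:

```python
def limit_visits(strategy, state, extra=None, steps=200):
    """States visited in the late part of a run driven by the strategy;
    strategy(state, extra) -> (next_state, extra)."""
    seen_late = set()
    for i in range(steps):
        state, extra = strategy(state, extra)
        if i >= steps // 2:
            seen_late.add(state)
    return seen_late

def memoryless(target):          # from a, always go to the same successor
    return lambda s, _: (target if s == "a" else "a", None)

def with_history(s, bit):        # one Boolean history variable alternates
    if s == "a":
        return ("b" if bit else "c", not bit)
    return ("a", bit)

# Each memoryless strategy misses one of b, c in the limit:
misses = [{"b", "c"} - limit_visits(memoryless(t), "a") for t in ("b", "c")]
```

The history-augmented strategy, by contrast, keeps visiting both b and c, which is exactly why the augmented system of Fig. 1c admits a memoryless winner.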
3.1.4. Assertion rules

After applying the previous rules as much as possible, we are left with a set of statements of the form S ⊨ π ⇒ ((X))ρ, where S is an ads, π and ρ are assertions, and ((X)) is one of the four alternating quantifiers ⟪X⟫, ⟦X⟧, ⟪X⟫_f, ⟦X⟧_f. In this section we show how to reduce each of these to first-order validities. First, we transform the fair quantifiers into unfair ones by making the fairness conditions explicit. This reintroduces temporal operators in the scope of the alternating quantifier. The resulting temporal formula, however, is of a special form that is dealt with directly by the assertion rule.
Making fairness conditions explicit. Recall that every fairness condition F_p is a Boolean combination of atoms of the form ∞π, where π is an assertion. We write ∞π instead of □♦π to make it clear that we are now dealing with a special case and not with arbitrary ltl formulas. From the definitions of the semantics of the alternating quantifiers it follows that π ⇒ ⟪X⟫_f ρ can be rewritten as the conjunction of the two statements

π ∧ ρ ⇒ ⟪X⟫(F_{P\X} → F_X)
π ∧ ¬ρ ⇒ ⟪X⟫¬F_{P\X}

and π ⇒ ⟦X⟧_f ρ as the conjunction of

π ∧ ρ ⇒ ⟦X⟧F_{P\X}
π ∧ ¬ρ ⇒ ⟦X⟧(F_{P\X} ∧ ¬F_X)

where F_Y denotes the conjunction of the fairness conditions of the players in Y. These statements are all of the form π ⇒ ((X))Φ, where ((X)) is either ⟪X⟫ or ⟦X⟧ and Φ is a Boolean combination of ∞ atoms. If we transform Φ into conjunctive normal form, it becomes a conjunction of clauses of the form

∞J_1 ∧ … ∧ ∞J_k → ∞R_1 ∨ … ∨ ∞R_m,

where each J_i and R_j is an assertion. Since ∞ distributes over ∨, this is equivalent to

∞J_1 ∧ … ∧ ∞J_k → ∞(R_1 ∨ … ∨ R_m).

Finally, then, we are left with statements of the form

S ⊨ π ⇒ ((X)) ⋀_i ( ∞J_1^i ∧ … ∧ ∞J_k^i → ∞ρ_i ).
(Technically, the number k of antecedents is different for every i. Without loss of generality, we drop this distinction to lighten the notation.) Below we present proof rules to reduce these particular forms to assertional verification conditions.

The positive assertion rule. This rule applies to statements of the form

π ⇒ ⟪X⟫ ⋀_{i=1}^n ( ∞J_1^i ∧ … ∧ ∞J_k^i → ∞ρ_i ).    (5)

To apply the rule, we guess intermediate assertions η^i and η_j^i (for i ∈ {1, …, n} and j ∈ {1, …, k}) and ranking functions δ_j^i into a well-founded domain ⟨D, ≺⟩.

pos-assertion:
    π ⇒ ⋁_{i=1}^n η^i
    For every i ∈ {1, …, n}:  η^i ⇒ ⋁_{j=1}^k η_j^i
    For every choice of {j_i | i ∈ {1, …, n}}:
        ⋀_{i=1}^n (η_{j_i}^i ∧ δ_{j_i}^i = a_i) ⇒ cpre_X( ⋁_{i=1}^n [ (η^i ∧ ρ_i) ∨ ⋁_{l=1}^k (η_l^i ∧ δ_l^i ≺ a_i) ∨ (η_{j_i}^i ∧ δ_{j_i}^i = a_i ∧ ¬J_{j_i}^i) ] )
    ─────────────────────────────────────────────────────
    π ⇒ ⟪X⟫ ⋀_{i=1}^n ( ∞J_1^i ∧ … ∧ ∞J_k^i → ∞ρ_i )

where the a_i are rigid variables ranging over D.

The intuition behind this rule is similar to that for the analogous rules for ltl [MP89]. The ranking functions δ_j^i enforce progress towards realizing the ρ_i, and the η_j^i denote regions inside which the ranking functions are constant. The verification conditions assure that, assuming fairness of the adversaries, the players X can eventually force the game out of these regions, and thus decrease the ranking.
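The shape of these premises can be sanity-checked on a finite system. The sketch below specializes the rule to the case n = k = 1 with no fairness antecedents, i.e. proving π ⇒ ⟪X⟫♦ρ, so the only remaining premise is η ∧ δ = a ⇒ cpre_X(ρ ∨ (η ∧ δ ≺ a)); the one-player countdown system and the chosen η, δ are all illustrative:

```python
# Toy one-player countdown system on states 0..4: the player decrements
# until 0, then idles there.
states = range(5)
succ = {s: {s - 1} if s > 0 else {0} for s in states}

def cpre(pi):
    # single controlling player: it may pick any successor
    return {s for s in states if succ[s] & pi}

rho = {0}                        # goal region (eventually reach 0)
eta = set(states)                # guessed invariant region
delta = {s: s for s in states}   # guessed ranking into (N, <)

# Premise: from every eta-state outside the goal with rank a, the player
# can force the next state into rho or into a strictly smaller rank.
premise_holds = all(
    s in cpre(rho | {t for t in eta if delta[t] < delta[s]})
    for s in eta if s not in rho
)
```

Here the premise holds because each step from s > 0 reaches s − 1, which either is the goal or has smaller rank; this is the finite shadow of the progress argument behind pos-assertion.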
This proof rule is sound and relatively complete for proving properties of the form (5). Relative completeness means that, if (5) holds of a system and is achievable with memoryless strategies, then there exist assertions η^i and η_j^i and ranking functions δ_j^i that are expressible in the language and satisfy the premises.

The negative assertion rule. The negative assertion rule is used for statements of the form

π ⇒ ⟦X⟧ ⋀_{i=1}^n ( ∞J_1^i ∧ … ∧ ∞J_k^i → ∞ρ_i ).

The rule and its properties are identical to the positive version, except that it uses the predicate transformer upre instead of cpre.

neg-assertion:
    π ⇒ ⋁_{i=1}^n η^i
    For every i ∈ {1, …, n}:  η^i ⇒ ⋁_{j=1}^k η_j^i
    For every choice of {j_i | i ∈ {1, …, n}}:
        ⋀_{i=1}^n (η_{j_i}^i ∧ δ_{j_i}^i = a_i) ⇒ upre_X( ⋁_{i=1}^n [ (η^i ∧ ρ_i) ∨ ⋁_{l=1}^k (η_l^i ∧ δ_l^i ≺ a_i) ∨ (η_{j_i}^i ∧ δ_{j_i}^i = a_i ∧ ¬J_{j_i}^i) ] )
    ─────────────────────────────────────────────────────
    π ⇒ ⟦X⟧ ⋀_{i=1}^n ( ∞J_1^i ∧ … ∧ ∞J_k^i → ∞ρ_i )
3.2. Soundness and completeness

3.2.1. Overview

In the following subsections we prove soundness and completeness of each of the rules previously described. Together, these results show that the rules form a sound and complete proof system that can reduce arbitrary atl* properties over programs to first-order validities in the underlying assertion language.

For relative completeness to hold, we need the underlying assertion language to be expressive enough. We require the same expressivity as needed for completeness of ltl in [MP89] (this is the same expressive power required to prove termination of simple while loops). Namely, the assertion language should contain Peano Arithmetic, allow first-order quantification, and be able to express fixed points [SdRG89]. If an assertion language satisfies these criteria, we shall hereafter say that it is expressive.

In many of the completeness arguments we use a classic result about ω-regular games on infinite graphs (see, for example, [Zie98]), that in our case can be expressed as follows: If two players play a game on an ads (or any similar structure, as long as a player's winning condition is an ω-regular set of runs), then every player has a finite-memory winning strategy from its winning set (the set of starting states from which it can win). Moreover, the winning set and a finite-memory strategy can be obtained with fixed point constructions, and therefore they can be encoded in an expressive language.
3.2.2. Basic state rule Proposition 3.2.1 (Soundness and completeness of basic-state) 1. Soundness: If the premises of rule basic- state hold of system S for some assertion ρ, then the conclusion holds. 2. Relative completeness: If the conclusion holds and the assertion language is expressive, then there exists an assertion ρ such that the premises hold. Proof 1. Since ρ is positive in ϕ(ρ) and ρ ⇒ ψ, we have ϕ(ρ) ⇒ ϕ(ψ). To conclude, apply transitivity of ⇒ with the first premise. 2. It suffices to take for ρ an assertion representing the winning region of ψ. Since ψ is omega-regular,
such an expression exists, as discussed in the overview section. Then ρ ⇔ ψ and the premises hold trivially.
3.2.3. Basic path rule

We shall often need to talk about the projection of a sequence of states x ∈ Ŵ⁺ onto its W components. We shall use the notation x̌, or Proj(x) ∈ W⁺, for this purpose. In general, as a mnemonic, a letter with a check, such as x̌, denotes an object (state, run, strategy, …) related to S obtained from a related object in Ŝ. A letter with a hat, such as x̂, denotes an object related to Ŝ obtained from something in S. Think about checks and hats as down and up arrows. The following proposition summarizes the fundamental properties of synchronous composition that we are interested in.

Proposition 3.2.2 Let S = ⟨P, V_S, V_P, ξ, χ, F⟩ be an ads, A = ⟨Q, q0, δ, F_A⟩ a full deterministic Muller automaton, and Ŝ = S ∥ A their synchronous composition. Let Proj denote the projection of a state or a sequence of states in Ŝ onto its W components. Then:
1. For every run prefix x ∈ W⁺ in S from some state w, there is exactly one run prefix x̂ ∈ Ŵ⁺ from ⟨w, q0⟩ in Ŝ whose projection on W is equal to x.
2. For every strategy f for X in S there is a strategy f̂ for X ∪ {p_ϕ} in Ŝ such that out_S(w, f) = Proj(out_Ŝ(⟨w, q0⟩, f̂_X)) = Proj(out_Ŝ(⟨w, q0⟩, f̂)) for all w ∈ W.
3. For every strategy f for X in Ŝ there is a strategy f̌ for X \ {p_ϕ} in S such that out_S(w, f̌) = Proj(out_Ŝ(⟨w, q0⟩, f)) for all w ∈ W.
4. A run x ∈ Ŵ^ω of Ŝ is fair to player p ∈ P if, and only if, x̌ is fair to p in S. A run x ∈ Ŵ^ω of Ŝ is fair to p_ϕ if, and only if, x̌ is accepted by A.

Proof
1. Define x̂[n] = ⟨x[n], δ(q0, x[0…n−1])⟩. The statement can then be directly checked by induction on the length of x and x̂.
2. Define f̂_p(x) = f_p(x̌) for p ∈ X and x ∈ Ŵ⁺. Again, proceed by induction. The last equality is obvious once we notice that p_ϕ has only one strategy.
3. Define f̌_p(x) = f_p(x̂) for p ∈ X \ {p_ϕ} and x ∈ W⁺, where x̂ is the run prefix defined in part 1.
4. Follows directly from examination of the definitions. □
With this proposition, soundness and completeness are immediate.

Corollary 3.2.3 The premise and the conclusion of rule basic-path are equivalent.

Proof We prove soundness for the case of ⟨⟨X⟩⟩_f; the other case is similar, the only differences being which parts of the proposition are applied. Assume the premise is true and let w be an S-reachable π-state. Consider the state ŵ = ⟨w, q0⟩ in Ŝ. By the premise, there is a strategy f for players X ∪ {p_ϕ} in Ŝ such that for all x ∈ out_Ŝ(ŵ, f): if x is fair to the players in P \ X, then x is fair to the players in X and to p_ϕ.
Now consider out_S(w, f̌), where f̌ is the strategy defined in the previous proposition. By the proposition, every run therein is of the form x̌ for some x ∈ out_Ŝ(ŵ, f). Therefore, if x̌ is fair to the players in P \ X, then x̌ is fair to the players in X and x̌ is accepted by A_ϕ, i.e., x̌ ⊨ ϕ. Hence, w ⊨ ⟨⟨X⟩⟩_f ϕ. □
3.2.4. History rule

We use the same notational conventions as in the previous section on the basic-path rule.
Proposition 3.2.4 Let f : Ŵ⁺ → A × ∏_{v∈V_h} dom(v) be a strategy for players X ∪ {h} in Ŝ = S[h, V_h] and let t0 ∈ ∏_{v∈V_h} dom(v) be arbitrary. There exists a strategy f̌ : W⁺ → A for X in S such that out_S(w, f̌) = Proj(out_Ŝ(⟨w, t0⟩, f)).

Proof For all x ∈ W⁺, we define x̂ ∈ Ŵ⁺ by induction on |x|:
• |x| = 1: We define x̂ = ⟨x, t0⟩.
• |x| > 1: Let x = yw, for y ∈ W⁺ and w ∈ W. We define x̂ = ŷ⟨w, f_h(ŷ)⟩.
We then define f̌(x) = f(x̂) for all x ∈ W⁺. Take an arbitrary x = w0 w1 ⋯ ∈ out_S(w, f̌) and let x[0] α[1] x[1] α[2] … be a trail of x. We define x̂ ∈ Ŵ^ω as

    x̂[n] = ⟨x[0], t0⟩                     if n = 0
    x̂[n] = ⟨x[n], f_h(x̂[0…n−1])⟩         if n ≥ 1
(It is easy to check that this definition is consistent with the inductive one above.) Clearly, the projection of x̂ is x. To check that x̂ ∈ out_Ŝ(⟨w, t0⟩, f), we exhibit an appropriate trail for x̂: α̂[n] = ⟨α[n], f_h(x̂[0…n])⟩. We need to verify two properties:
1. ξ̂_p(x̂[n], α̂[n]_p) for p ∈ P ∪ {h} and n ∈ ℕ. For p ∈ P (assuming n ≥ 1; the other case is analogous): We know, since x is a run with trail defined by α, that ξ_p(x[n], α[n]_p). It follows from the definition of ξ̂_p that ξ̂_p(⟨x[n], f_h(x̂[0…n−1])⟩, α[n]_p). It then follows from the definitions of x̂[n] and α̂[n] that ξ̂_p(x̂[n], α̂[n]_p). The p = h case is obvious since ξ_h is t.
2. χ̂(x̂[n], α̂[n], x̂[n+1]): We know, since α defines a trail for run x, that χ(x[n], α[n], x[n+1]). Since x̂[n+1] = ⟨x[n+1], f_h(x̂[0…n])⟩, it then follows that χ(x[n], α[n], x[n+1]) ∧ x̂[n+1]_h = α̂[n]_h, which is, by definition, χ̂(x̂[n], α̂[n], x̂[n+1]).
Thus x̂ is a run of S[h, V_h] from ⟨w, t0⟩. For the other direction, let x ∈ out_Ŝ(⟨w, t0⟩, f) and x̌ be the projection of x onto W. Again, we can prove by induction, using the definitions, that x̌ ∈ out_S(w, f̌). The details are similar to the previous case and we omit them. □

From the previous proposition it follows immediately, as was the case with the basic-path rule, that the history rule is sound and complete. Moreover, the new system has an important additional property:

Proposition 3.2.5 There is a finite domain D_h, depending only on X and ϕ, such that, for all t0 ∈ D_h:
1. For ⟨⟨X⟩⟩ being either ⟨⟨X⟩⟩ or ⟨⟨X⟩⟩_f: if S, w ⊨ ⟨⟨X⟩⟩ϕ, then X ∪ {h} have a memoryless strategy to achieve ϕ (and fairness in the case of ⟨⟨X⟩⟩_f) from ⟨w, t0⟩ in S[h, V_h].
2. For ⟦X⟧ being either ⟦X⟧ or ⟦X⟧_f: if X ∪ {h} have no memoryless strategy to achieve ¬ϕ (and fairness if necessary) from ⟨w, t0⟩ in S[h, V_h], then S, w ⊨ ⟦X⟧ϕ.

Proof Clearly, (1) and (2) are equivalent, and the fairness of the alternating quantifier is not an issue.
Since ϕ is omega-regular, players X can win ⟨⟨X⟩⟩ϕ, from their winning set, with a finite-memory strategy f (see,
for example, [Zie98]). But then, we can let V_h be a single variable, with domain D_h representing the amount of memory needed by f, and use the memory update function of f as a strategy for h in S[h, V_h] to achieve the same result. The choice of t0 ∈ D_h is clearly immaterial. Notice that, in fact, we can compute bounds on the size of D_h by, for example, the techniques in [Zie98]. □

Corollary 3.2.6 Rule history is sound and complete, and it remains so if the alternating quantifier in the premise is evaluated over memoryless strategies only.

Proof Follows from the two previous propositions. □
3.2.5. Assertion rules

The proofs of soundness and completeness of the assertion rules use techniques similar to those for related rules in [MP89, Sla02].

Proposition 3.2.7 (Soundness of pos-assertion) If the premises of rule pos-assertion hold for an ads S, then the conclusion holds for S.

Proof The last premise defines a memoryless strategy for players X: at a state w ∈ W, we can define f_p(w), for p ∈ X, as follows:
• If w ⊭ ⋁_i η_i, choose an arbitrary action.
• Otherwise, for each i we can identify the minimum index j_i such that w ⊨ η^i_{j_i}. We can take the tuple ⟨j_i | i ∈ {1, …, n}⟩ of all these indices and define f_p(w) to be the action given, for this choice, by the last premise.
Let now x ∈ out(w, f_X) for an arbitrary w ⊨ π. It is clear that all states of x satisfy ⋁_i η_i, hence we do not need to consider the first part of the definition of f. Let i ∈ {1, …, n} be an arbitrary index. We shall prove that x ⊨ □◇J^i_1 ∧ ⋯ ∧ □◇J^i_k → □◇ρ_i. By contraposition, assume that x ⊭ □◇ρ_i, that is, x contains only finitely many ρ_i-states. Let ν be a position in x such that there is no ρ_i-state in x[ν…]. Then the first disjunct of the argument of cpre_X is never taken from ν onwards, and therefore in x[ν…] the value of δ^i_{j_i} is weakly decreasing. Since the domain of the δs is well-founded, this value must eventually stabilize. Assume it stabilizes at position ν′ ≥ ν and consider the least index j_i such that there is an η^i_{j_i}-state in x[ν′…]. Let ν″ ≥ ν′ be an index such that x[ν″] ⊨ η^i_{j_i}. From the definition of f_X and the last premise it follows that η^i_{j_i} ∧ ¬J^i_{j_i} holds in all states of x following ν″ (since the first two disjuncts of the argument of cpre never occur). Hence J^i_{j_i} holds only finitely many times in x. This proves the contrapositive and thus x ⊨ □◇J^i_1 ∧ ⋯ ∧ □◇J^i_k → □◇ρ_i. The strategy f_X is hence winning for the conclusion of pos-assertion, and this concludes the proof. □
Proposition 3.2.8 (Relative completeness of pos-assertion) If, in rule pos-assertion, the conclusion holds with a memoryless winning strategy and the assertion language is expressive, then there exist assertions {η_i} and {η^i_j} and expressible ranking functions {δ^i_j} such that the premises hold.

The rest of the section is devoted to the proof of the proposition above. We shall first define certain sets of states without proving them expressible. We shall prove expressibility later, when not obvious. Given a strategy f and states w, w′ ∈ W, a segment of computation from w to w′ compatible with f is a sequence of states w1 … wν that can be extended by one more state w0 in such a way that w0 = w, wν = w′, and w_{i+1} is a possible successor of w_i compatibly with f for all i ∈ {0, …, ν−1}. Notice that we include w′, but not w, in the segment: this lets us concatenate segments easily. An assertion π is satisfied on a segment if it is satisfied in some state in the segment. A segment is π-free if all states on the segment satisfy ¬π (notice that this allows w ⊨ π, since w does not appear in the segment). Let

    η = { w | w ⊨ ⟨⟨X⟩⟩ ⋀_{i=1}^{n} (□◇J^i_1 ∧ ⋯ ∧ □◇J^i_k → □◇ρ_i) }.
By assumption, every π-state is in η. Let f be a memoryless X-strategy winning the game on η. It follows from [Zie98] and our assumptions on the power of the assertion language that η and f are expressible.
For i ∈ {1, …, n}, we define a relation ≻_i over η-states as follows:

    w ≻_i w′  ≝  there exists a ρ_i-free segment of computation, leading from w to w′ and compatible with f, on which each of J^i_1, …, J^i_k is satisfied at least once, and w is reachable from an η-state through a computation segment compatible with f.
Notice that, by assumption, ≻_i is well-founded (otherwise, f would not be a winning strategy for the game in the conclusion of the rule). From ≻_i we can obtain, by a well-known result of [LPS81] (see also [MP89]), a function δ̃_i from η into a prefix ζ of the ordinals such that:
1. w ≻_i w′ → δ̃_i(w) > δ̃_i(w′).
2. If δ̃_i(w) = β and α < β, then there exists an element w′ such that δ̃_i(w′) = α and w ≻_i w′.
3. If w′ ≻_i w″ → w ≻_i w″ for every w″ ∈ W, then δ̃_i(w) ≥ δ̃_i(w′).

For a segment x, we define the i-deficit d_i(x) as

    d_i(x) ≝ μj. (no J^i_j-state occurs in x), or k+1 if all the J^i_j s happen in x.

For an η-state w, the i-height of w is

    h_i(w) = max{ d_i(x) | x is a δ̃_i-leveled, ρ_i-free segment compatible with f departing from w },

where a segment w1 … wν from w to wν is said to be δ̃_i-leveled if δ̃_i(w) = δ̃_i(w1) = ⋯ = δ̃_i(wν) (notice the inclusion of w). Now we define D = ζ × {1, …, k+1} and δ_i = ⟨δ̃_i, h_i⟩ composed lexicographically, i.e., ⟨α, j⟩ ⪰ ⟨α′, j′⟩ ⇔ α > α′ ∨ (α = α′ ∧ j ≥ j′). The preorder ⪰ is well-founded on η. The following lemma establishes the properties of η and h_i that we need.

Lemma 3.2.9
(0) If w is an η-state and w′ is an f-successor of w, then w′ is an η-state.
(1) Every η-state w satisfies h_i(w) ≤ k.
(2) For every η-state w and its f-successor w′, either w′ ⊨ ρ_i or δ_i(w) ⪰ δ_i(w′).
(3) For every η-state w and its f-successor w′ such that h_i(w) = j and w′ ⊨ J^i_j, we have either w′ ⊨ ρ_i or δ_i(w) ≻ δ_i(w′).

Proof
(0) Immediate from the definitions of η and f.
(1) Let x be a ρ_i-free, δ̃_i-leveled segment from w to some w′ compatible with f. We need to prove that d_i(x) ≤ k. Assume, toward a contradiction, that d_i(x) = k+1. Then all of J^i_1, …, J^i_k happen in x, hence w ≻_i w′, and therefore δ̃_i(w) > δ̃_i(w′). But then x is not leveled, a contradiction.
(2) Assume w′ ⊭ ρ_i. First we prove that δ̃_i(w) ≥ δ̃_i(w′). By (0) we know that w′ is also an η-state. By the third property of δ̃_i above, it then suffices to show that ∀w″. w′ ≻_i w″ → w ≻_i w″. Let w″ be any state such that w′ ≻_i w″. This means that there exists a ρ_i-free segment x, compatible with f, on which each of the J^i_j happens, and w′ is reachable from an η-state through a segment compatible with f. Consider the segment w′x obtained by prepending w′ to x. Since w′ is not a ρ_i-state, w′x is a segment that proves w ≻_i w″. Now, if δ̃_i(w) > δ̃_i(w′), then δ_i(w) ≻ δ_i(w′) and we are done. Otherwise, δ̃_i(w) = δ̃_i(w′) and we need to compare h_i(w) and h_i(w′). Let h_i(w′) = j ≤ k (by (1)).
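The deficit can be read operationally; a small sketch (ours, not from the paper), treating the J^i_j as predicates on states, makes the μ-definition concrete:

```python
# Hedged sketch (ours): the i-deficit of a segment, d_i(x) = the least j
# such that no J_j-state occurs in x, or k+1 if every J_j is satisfied
# somewhere on x. The predicates below are invented for illustration.

def deficit(segment, J_preds):
    for j, J in enumerate(J_preds, start=1):
        if not any(J(s) for s in segment):
            return j          # first fairness condition never hit on x
    return len(J_preds) + 1   # all conditions hit somewhere on x

# Example with k = 2: J_1 holds on even states, J_2 on states above 10.
J = [lambda s: s % 2 == 0, lambda s: s > 10]
```

For instance, `deficit([2, 4], J)` is 2 (J_1 is hit, J_2 never), while `deficit([2, 11], J)` is 3 = k+1.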
Let x be a segment from w′ to some w″, δ̃_i-leveled, ρ_i-free, and compatible with f, such that d_i(x) = j. Then d_i(w′x) ≥ j, since all the J^i_l that
are satisfied in x are also satisfied in w′x. Hence h_i(w) ≥ j = h_i(w′), and we conclude also in this case that δ_i(w) ⪰ δ_i(w′).
(3) Assume w′ ⊭ ρ_i and δ̃_i(w) = δ̃_i(w′) (otherwise we are done by (2)). Let h_i(w′) = j′ and let x be a segment from w′ to some w″ realizing the deficit j′ for w′. Then w′x satisfies all the J^i_l that x does, and in addition it satisfies J^i_j. Hence d_i(w′x) > j′, and hence h_i(w) > j′ = h_i(w′). □

Finally, we define
    η^i_j ≝ η ∧ (h_i = j)        and        η_i ≝ ⋁_{j=1}^{k} η^i_j.
The lemma above shows precisely that, with these definitions, the premises of rule pos-assertion hold. At last, we need to show that the ranking functions δ̃_i are expressible in the assertion language. We can do this in exactly the same way as it is done in [MP89]. First, we remark that ≻_i is expressible. Then, it suffices to examine the construction of δ̃_i from ≻_i in [LPS81]: since it can be interpreted as a fixed point construction, the ranking function (δ̃_i in this case) is expressible whenever the original well-founded relation (≻_i in this case) is. The details are exactly the same as in [MP89] and we omit them. If the δ̃_i s are expressible, then clearly so are h_i and δ_i, and this concludes the proof of Proposition 3.2.8. The proofs for neg-assertion are completely similar to the ones for pos-assertion and we omit them.
4. Derived rules and verification diagrams

So far we have kept the proof system as small as possible by avoiding any redundancy: every rule is needed for completeness and addresses a very general class of formulas. This is a useful approach in meta-proofs, especially in order to prove completeness. To verify real systems it is nonetheless useful to expand the proof system by either adding new rules or specializing existing ones to more specific systems or formulas: clearly, all we need to do in these cases is to show soundness; the original rules are always available, so completeness is not compromised [MP95]. For significant verification projects, such derivation of new proof rules should always be addressed on a case-by-case basis, as most likely further simplifications can be made for any particular situation. Thus the techniques examined in this section should be considered examples of the approach. On the other hand, these are proof rules for some of the most common and important classes of formulas, and they are likely to be useful in a wide variety of situations. We shall use some of them in our case study in Sect. 5.
4.1. Some derived and auxiliary rules

In this section we illustrate some derived rules that we shall find useful in the next section. We do not prove any soundness theorems as they are all immediate and along the lines of Proposition 3.2.7. The invariance rule and the two termination rules are also relatively complete for the classes of properties and systems to which they apply, and we omit the proofs, which are just simpler versions of that of Proposition 3.2.8 and can be found in [Sla02]. A first class of auxiliary rules consists of rules that do not depend on the system, but are tautologies of propositional logic or atl. Any number of rules from Gentzen-style proof systems for propositional logic can be very useful to simplify proofs. Here are some examples:

left-∨:
    π ⇒ ϕ        ρ ⇒ ϕ
    ──────────────────
        π ∨ ρ ⇒ ϕ

right-∧:
    π ⇒ ϕ        π ⇒ ψ
    ──────────────────
        π ⇒ ϕ ∧ ψ
Another very useful auxiliary rule is the invariance rule already mentioned in the Introduction, which enables us to prove properties of the form π ⇒ ⟨⟨X⟩⟩□ρ:

inv:
    π → η
    η → ρ
    η → cpre_X(η)
    ─────────────
    π ⇒ ⟨⟨X⟩⟩□ρ

Another simple rule, the simple termination rule, can be used to prove alternating reachability that does not depend on fairness assumptions of the ads:

term:
    π → ρ ∨ η
    η ∧ δ = a → cpre_X(ρ ∨ (η ∧ δ ≺ a))
    ───────────────────────────────────
    π ⇒ ⟨⟨X⟩⟩◇ρ

An even more interesting form of the above is the just termination rule, with which we can prove alternating reachability that may also depend on simple fairness requirements of the adversaries P \ X:

just-term:
    π → ρ ∨ η
    η → ⋁_i η_i
    for all i:  η_i ∧ δ_i = a → cpre_X(ρ ∨ ⋁_j (η_j ∧ δ_j ≺ a) ∨ (η_i ∧ δ_i = a ∧ ¬J_i))
    ────────────────────────────────────────────────────────────────────────────────────
    π ⇒ ⟨⟨X⟩⟩(⋀_i □◇J_i → ◇ρ)

The conclusion of the just termination rule can equivalently be expressed as π ⇒ ⟨⟨X⟩⟩_f ◇ρ for an ads such that F_p = ⋀_i □◇J_i for p ∈ P \ X and F_p = t for p ∈ X. For examples of applications of some of these rules, see Sect. 5.
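To make the shape of these premises concrete, the following sketch (our toy example, not from the paper) discharges the two premises of the term rule by brute force on a bounded fragment of a countdown game: player X alone removes one to three stones, the goal ρ is x = 0, with η ≡ x > 0 and ranking function δ(x) = x over (ℕ, <).

```python
# Toy check (ours) of the `term` rule premises on a single-player countdown game.
rho = lambda x: x == 0          # goal assertion ρ
eta = lambda x: x > 0           # intermediate assertion η
delta = lambda x: x             # ranking function δ into (ℕ, <)

def cpre_x(pred, x):
    """Controllable predecessor: X alone moves here, so cpre is existential."""
    if x == 0:
        return pred(0)                            # stutter at the goal
    return any(pred(x - d) for d in (1, 2, 3) if x >= d)

# Premise 1: π → ρ ∨ η, with π ≡ true over the bounded fragment.
assert all(rho(x) or eta(x) for x in range(100))

# Premise 2: η ∧ δ = a → cpre_X(ρ ∨ (η ∧ δ ≺ a)).
for x in range(1, 100):
    a = delta(x)
    assert cpre_x(lambda y: rho(y) or (eta(y) and delta(y) < a), x)
```

Such a finite enumeration only samples the verification conditions; in the deductive setting they are discharged as first-order validities instead.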
4.2. Some verification diagrams for atl*

The application of proof rules generally requires several intermediate assertions and ranking functions and produces a large number of first-order verification conditions. Verification diagrams [MBSU98] provide a precise and concise representation to visualize such proofs: all the required assertions and ranking functions are embedded in the diagram, and their interdependencies and relations with the system are more clearly visible. (A self-contained account of verification diagrams is beyond the scope of this paper, and readers unfamiliar with the concept may find it difficult to follow the technical part of this section. We have provided references to the relevant literature in Sect. 7.) We shall present verification diagram forms of all the proof rules introduced in the previous section. The last one, just termination diagrams, will be the core of the case studies in Sect. 5. All three classes of diagrams that we define are sound and relatively complete for the classes of formulas and systems to which they apply. As for the derived rules, proving soundness is immediate; relative completeness follows from the fact that each class of diagrams corresponds to a class of rules, and every proof using a rule can be directly transformed into a valid diagram for the same property. The definition of a single class of verification diagrams that is sound and relatively complete for arbitrary atl* properties and produces intuitive proofs over large systems is an open problem.
4.2.1. Invariance diagrams

Definition 4.2.1 Given an ads S and a set of agents X ⊆ P, an invariance verification diagram is a triple ⟨N, E, μ⟩ where:
• N is a set of nodes;
• E ⊆ N × N is a set of edges, so that ⟨N, E⟩ is a digraph;
• μ associates an assertion μ(n) to every node n ∈ N.

Definition 4.2.2 To every node n ∈ N we associate the following verification condition:

    μ(n) → cpre_X( ⋁_{m∈succ(n)} μ(m) ),

where succ(n) = {m ∈ N | ⟨n, m⟩ ∈ E}. The set of verification conditions associated to a diagram is the collection of the verification conditions associated to all its nodes. A diagram is called valid if all its associated verification conditions are valid.

Proposition 4.2.3
1. A valid invariance diagram ⟨N, E, μ⟩ over an ads S and agents X is a proof of

    S ⊨ ⋁_{n∈N} μ(n) ⇒ ⟨⟨X⟩⟩□ ⋁_{n∈N} μ(n).

2. A valid invariance diagram such that

    π → ⋁_{n∈N} μ(n)        and        μ(n) → ρ for all n ∈ N

is a proof of S ⊨ π ⇒ ⟨⟨X⟩⟩□ρ.

Example 4.2.4 Consider the following simple ads:
• P = {p1, p2}.
• V_S = {x : ℝ}.
• V_{p1} = {y1 : ℝ}, V_{p2} = {y2 : ℝ}.
• ξ_{p1} ≡ (−2 ≤ y1 ≤ 3), ξ_{p2} ≡ (−1 ≤ y2 ≤ 2).
• χ ≡ (x′ = x + y1 + y2).
• F_{p1} ≡ F_{p2} ≡ t.
This could be, for example, a very simple model of a thermostat, where x is the temperature, p1 is the controller, y1 is the controller's choice of how much heat or cooling to apply, p2 is the environment, and y2 is the environmental heating or cooling. Time is discrete, so the rate of temperature change is constant for, e.g., a minute at a time. The valid invariance diagram in Fig. 2 then proves that

    18 ≤ x ≤ 22 ⇒ ⟨⟨p1⟩⟩□(18 ≤ x ≤ 22).
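The single verification condition behind such a diagram can be sanity-checked numerically; the following sketch (ours) evaluates the controllable predecessor ∃y1 ∀y2 of the invariant on a grid of temperatures (a finite sample, not a proof over the reals).

```python
# Grid-based sanity check (ours) of the invariance VC for the thermostat:
# for each sampled x in [18, 22], some y1 ∈ [-2, 3] keeps x + y1 + y2
# inside [18, 22] for every sampled y2 ∈ [-1, 2].
inv = lambda x: 18 <= x <= 22

def cpre_p1(x, step=0.5):
    y1s = [-2 + step * i for i in range(int(5 / step) + 1)]   # samples of [-2, 3]
    y2s = [-1 + step * i for i in range(int(3 / step) + 1)]   # samples of [-1, 2]
    return any(all(inv(x + y1 + y2) for y2 in y2s) for y1 in y1s)

xs = [18 + 0.5 * i for i in range(9)]                         # samples of [18, 22]
assert all(cpre_p1(x) for x in xs)
```

The winning choice hidden in the `any` is y1 ∈ [19 − x, 20 − x]: it cancels the worst-case environmental disturbance, which is exactly what the diagram's edge annotations record.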
Fig. 2. Invariance diagram for the thermostat game of Example 4.2.4
4.2.2. Termination diagrams

Definition 4.2.5 Given an ads S and a set of agents X ⊆ P, a termination diagram is a quintuple ⟨N, E, N_T, μ, δ⟩ where:
• N is a set of nodes;
• E ⊆ N × N is a set of edges, so that ⟨N, E⟩ is a digraph;
• N_T ⊆ N is a non-empty set of nodes called the terminal nodes;
• μ associates an assertion μ(n) to every node;
• δ : N → (W → D) associates, to every node n, a ranking function δ(n) mapping states to elements of a well-founded order ⟨D, ≺⟩, in such a way that all terminal nodes have minimum ranking.
Definition 4.2.6 To every node n ∈ N \ N_T we associate the following verification condition:

    μ(n) ∧ δ(n) = a → cpre_X( ⋁_{m∈succ(n)} (μ(m) ∧ δ(m) ≺ a) ),

where succ(n) = {m ∈ N | ⟨n, m⟩ ∈ E}. The set of verification conditions associated to a diagram is the collection of the verification conditions associated to its nonterminal nodes. A diagram is called valid if all its associated verification conditions are valid.

Proposition 4.2.7
1. A valid termination diagram ⟨N, E, N_T, μ, δ⟩ over an ads S and agents X is a proof of

    S ⊨ ⋁_{n∈N} μ(n) ⇒ ⟨⟨X⟩⟩◇ ⋁_{n∈N_T} μ(n).

2. A valid termination diagram such that

    π → ⋁_{n∈N} μ(n)        and        μ(n) → ρ for all n ∈ N_T

is a proof of S ⊨ π ⇒ ⟨⟨X⟩⟩◇ρ.

Example 4.2.8 We consider the simplest possible version of the game of nim: There are x stones in a pile; two players alternately remove one, two, or three stones. The player who removes the last stone wins. We can model this system as an ads ⟨P, V_S, V_P, ξ, χ, F⟩, where:
• P = {p1, p2}.
• V_S = {x, t}, where
  – x : ℕ — number of stones remaining;
  – t : {1, 2} — turn (player to play), or the loser if x = 0.
• V_{p1} = {y1}, V_{p2} = {y2}.
• ξ_{p1} ≡ (x > 0 → 1 ≤ y1 ≤ min(3, x)), ξ_{p2} ≡ (x > 0 → 1 ≤ y2 ≤ min(3, x))
Fig. 3. Termination diagram for nim game of Example 4.2.8
• χ ≡ (x > 0 ∧ t = 1 ∧ x′ = x − y1 ∧ t′ = 2) ∨ (x > 0 ∧ t = 2 ∧ x′ = x − y2 ∧ t′ = 1) ∨ (x = 0 ∧ x′ = 0 ∧ t′ = t).
• F_{p1} = F_{p2} = t.
We want to prove the following atl* formula, defining the winning condition for player 1:

    t = 1 ∧ x > 0 ∧ x ≢ 0 (mod 4) ⇒ ⟨⟨p1⟩⟩◇(t = 2 ∧ x = 0)

Figure 3 shows a termination diagram proving this property.
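The mod-4 winning condition can be cross-checked by brute force; the following sketch (ours, independent of the diagram) computes by backward induction which pile sizes are winning for the player to move.

```python
# Brute-force cross-check (ours) of the nim winning condition: the player
# to move from x > 0 stones can force a win iff x mod 4 != 0.
from functools import lru_cache

@lru_cache(maxsize=None)
def mover_wins(x):
    if x == 0:
        return False          # the previous player took the last stone and won
    # A move to x - d wins if it leaves the opponent in a losing position.
    return any(not mover_wins(x - d) for d in (1, 2, 3) if d <= x)

assert all(mover_wins(x) == (x % 4 != 0) for x in range(1, 200))
```

The winning strategy implicit here (always move to a multiple of 4) is the one embedded in the ranking function of the diagram.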
4.2.3. Just termination diagrams

Here we introduce a new type of verification diagram that can represent proofs of properties of the form π ⇒ ⟨⟨X⟩⟩(⋀_i □◇J_i → ◇q) over adss.

Definition 4.2.9 Given an ads S, k assertions J1, …, Jk (occurring in fairness conditions of S), and a set of agents X ⊆ P, a just termination diagram is a sextuple ⟨N, E, N_T, μ, δ, j⟩ where:
• N is a set of nodes;
• E ⊆ N × N is a set of edges, so that ⟨N, E⟩ is a digraph;
• N_T ⊆ N is a non-empty set of nodes called the terminal nodes;
• μ associates an assertion μ(n) to every node;
• δ : N → (W → D) associates, to every node n, a ranking function δ(n) mapping states to elements of a well-founded order ⟨D, ≺⟩, in such a way that all terminal nodes have minimum ranking;
• j : N → {1, …, k} associates, to every node n, a fairness condition J_{j(n)}.

Definition 4.2.10 To every node n ∈ N \ N_T we associate the following verification condition:

    μ(n) ∧ δ(n) = a → cpre_X( ⋁_{m∈succ(n)} (μ(m) ∧ δ(m) ≺ a) ∨ ⋁_{m∈succ_L(n)} (μ(m) ∧ δ(m) = a ∧ ¬J_{j(n)}) ),

where succ(n) = {m ∈ N | ⟨n, m⟩ ∈ E} and succ_L(n) = {m ∈ succ(n) | j(m) = j(n)}, called the level successors. The set of verification conditions associated to a diagram is the collection of the verification conditions associated to its nonterminal nodes. A diagram is called valid if all its associated verification conditions are valid.
Proposition 4.2.11
1. A valid just termination diagram ⟨N, E, N_T, μ, δ, j⟩ over an ads S and agents X is a proof of

    S ⊨ ⋁_{n∈N} μ(n) ⇒ ⟨⟨X⟩⟩( ⋀_{i=1}^{k} □◇J_i → ◇ ⋁_{n∈N_T} μ(n) ).

2. A valid just termination diagram such that

    π → ⋁_{n∈N} μ(n)        and        μ(n) → ρ for all n ∈ N_T

is a proof of

    S ⊨ π ⇒ ⟨⟨X⟩⟩( ⋀_{i=1}^{k} □◇J_i → ◇ρ ).
We shall see many examples of use of just termination diagrams in Sect. 5. Notation In some of the proofs in the next section we shall label edges with transition names indicating what actions X can take to satisfy the verification conditions. These names are not officially part of the diagram; they can be discovered by a theorem prover or decision procedure checking the verification condition.
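The bookkeeping in Definition 4.2.10 is mechanical; the following sketch (ours, with invented node names) extracts, for each nonterminal node, the successor sets succ(n) and succ_L(n) from which its verification condition is assembled.

```python
# Sketch (ours): computing succ(n) and the level successors succ_L(n)
# of a just termination diagram, per Definition 4.2.10.

def successor_sets(nodes, edges, terminal, j):
    """Map each nonterminal node n to (succ(n), succ_L(n))."""
    out = {}
    for n in nodes - terminal:
        succ = {m for (u, m) in edges if u == n}
        succ_l = {m for m in succ if j[m] == j[n]}   # same fairness index j
        out[n] = (succ, succ_l)
    return out

# Invented two-level diagram: n0 waits on fairness condition 1, n1 on 2.
nodes = {'n0', 'n1', 'nT'}
edges = {('n0', 'n0'), ('n0', 'n1'), ('n1', 'nT')}
terminal = {'nT'}
j = {'n0': 1, 'n1': 2, 'nT': 1}
```

From these sets, the VC of node n0 is μ(n0) ∧ δ(n0) = a → cpre_X((μ(n0) ∧ δ(n0) ≺ a) ∨ (μ(n1) ∧ δ(n1) ≺ a) ∨ (μ(n0) ∧ δ(n0) = a ∧ ¬J_1)), since n0 is its only level successor.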
5. Two case studies: fairness of contract-signing protocols

Security protocols are excellent case studies for the techniques we have developed in the previous sections. Although usually relatively small, they are often intricate and surprisingly hard to verify, especially in an asynchronous environment where participants can deviate from the protocol at any time, network connections can fail, and intruders can interfere. At the same time, security protocols are most naturally modeled as alternating systems that can represent both cooperative and adversarial behaviors, and atl* is an excellent language to specify most properties of interest. A number of recent papers—e.g., [KR02, KR03, CKS06]—use finite-state gads models and atl model checking to prove security properties of protocols. Model checking and deductive verification have competing advantages that make each more suitable in a different situation. For example, model checking can find violations of the specification and produce concrete runs that exhibit the violations, while deductive verification does not produce counterexamples directly. Using model checking to verify protocols, however, also has some disadvantages that can be overcome by using deductive methods. The first limitation is the restriction to finite-state systems. Although in some of the analyses mentioned above the restriction to finite-state systems was not a problem, in general this is not the case. For example, the analysis of the multi-party contract signing protocol of [KR02, CKS06], which is parameterized by the number of participating parties, was limited to small instances with three or four parties. Also, when dealing with multi-session protocols, the state space becomes unbounded. A second disadvantage is the general problem that model checking is attractive for debugging, but not very convincing when it returns no bugs.
A deductive proof, on the other hand, provides evidence of the validity of the protocol that exposes explicitly the assumptions made on each of the participants and their interactions and can be checked independently. This section presents the results of two case studies in which we applied the techniques presented in earlier sections to two contract-signing protocols. A contract-signing protocol is an algorithm used in digital transactions when two parties need to agree on the terms of a contract. For example, in a contract between buyer and seller when buying a good on the Internet, the buyer’s burden is to pay the agreed price and the seller’s is to ship the item. Digital contract-signing protocols must deal with more complex interactions and avoid more possible frauds than can occur when two people who know each other sit at a table to sign a paper contract. For example, it is not enough that each party sends a message with her signature to the other, for then, given the asynchronous nature of the communication, a cheating party could fail to send her signature after receiving the other’s and could thus choose to enforce the contract while the other could not. A protocol
where the latter event is provably impossible is said to be fair (a formal definition of fairness will be given later). Another example of a property of interest is abuse freeness: a protocol between two parties A and B is abuse-free for B if it is impossible, at any stage of the protocol, for A to be able to prove to a third party, C, that it is in her power either to terminate the protocol with a successful pair of signatures or to abort it, at her choice (to see why this could be a problem, imagine B and C being sellers of the same item, A a potential buyer, and the contract being the sale from B to A at a certain price). In this section we show how to use our techniques to prove fairness of two contract-signing protocols. The first, the asw protocol [ASW98], is a finite-state protocol for contract-signing between two players; it can be verified by model checking as well, but it is a good introduction to the style of proof that we use in the second example, and a successful proof provides more information than a model-checking analysis that finds no bugs, as explained earlier. The second example is a parameterized system modeling a simple version of the bw multiparty contract signing protocol [BWW00]; being parameterized (we prove properties for an arbitrary number of signatories), the system is infinite-state: thus model checking techniques do not apply and some form of theorem proving is necessary.
5.1. Two-party contract signing: ASW protocol

Our first case study is a deductive proof of fairness of the contract-signing protocol (henceforth called the asw protocol) first proposed by Asokan, Shoup, and Waidner in [ASW98]. The asw protocol was proposed as the first optimistic contract-signing protocol to guarantee timeliness. In an optimistic protocol, the signing parties only involve a trusted third party in exceptional circumstances, when one thinks that the other may be cheating or that the communication may have been compromised. Optimistic protocols are important because of their efficiency in the use of communication resources. Timeliness means that each player can choose to interrupt the protocol at any time without losing fairness; i.e., if she interrupts the protocol, either she can show a signed contract or she can show a certificate that proves that the contract has not been signed. Timeliness is important in asynchronous settings when a player has no guarantee on the delivery times of messages. Previous protocols assumed either a reliable channel between each signing party and the trusted third party or finite bounds on the delays in the communication channels [ASW98]. The asw protocol was earlier analyzed by Kremer and Raskin using atl model checking [KR02]. The purpose of this case study is to illustrate how the rules and diagrams developed so far in the paper can be used together in a real-world example. The asw protocol is finite-state and thus all that can be verified by our deductive method can also be verified by model checking, given enough resources. In the more complex case study in the next section we shall verify an infinite-state protocol where model checking is not applicable.
In the following sections, we first summarize the workings of the asw protocol informally, then present a formal model as a gads and a formal specification of fairness in atl, and finally present a proof of fairness using the proof rules and diagram from Sect. 4.
5.1.1. Protocol and modeling assumptions

Figure 4 shows the three main components of the asw protocol in a common message-flow diagrammatic notation. Before we proceed to formally model the protocol, we shall give a short informal description of it. For more details, please refer to the original paper [ASW98], or to [KR02].

Notation In this section we abandon the convention of using lowercase letters for players in order to stay close to the conventions of the security field. In asw, A and B are the contract signers, with A the originator and B the recipient; T is the trusted third party; t is the contract text; N_X is a secret nonce generated by agent X; Sig_X(m), where X ∈ {A, B, T}, is a signature of message m with X's key; and h is a one-way collision-resistant hash function.

We shall model the protocol, similarly to [KR02], as the following gads: The agents are A, B, and T as above, and C, which represents the communication channels. The agents A and B are potentially dishonest. The corresponding honest agents, which always act according to the protocol, are denoted A_h and B_h. For each agent X ∈ {A, B, T, C} and every message m ∈ {m1, m2, …}, K(X, m) stands for "X knows m" if X ∈ {A, B, T}, and for "C contains m", that is, m is in transit, if X = C. In the construction of the model we make the following assumptions:
540
M. Slanina et al.
Fig. 4. asw protocol
1. The channel C is resilient: every message is eventually delivered. This is the only assumption on the communication channel.
2. A dishonest agent has control over C.
3. C does not necessarily deliver messages in order.

The model could easily be adapted to other assumptions, e.g., that any message known by C is automatically known by any dishonest player (i.e., a dishonest agent can eavesdrop on the channel), or that a dishonest player can send duplicate messages. Furthermore, we assume perfect public key cryptography. We encode this assumption in the transition model as follows: an agent can only send a certain message after he has received (knows) all the information necessary to compose it. This can be encoded either as a propositional theory on the K(X, m) atoms or as a first-order theory of K, Sig, N, etc., that covers the propositional theory. For our purposes it is more convenient to use the former, but our approach would work equally well with the latter.
5.1.2. Formal model of the asw protocol

We model the asw protocol as a gads. The set of agents is P = {A, B, T, C}. The set of variables V includes one boolean variable for each K(X, m) proposition, as discussed above, and three state variables A.state, B.state, and T.state. The variables A.state and B.state range over the values {M1, M2, M3, M4, A1, A2, R1, R2, F}, where F stands for finished and denotes termination, while T.state ranges over {clean, resolved, aborted}. The transitions of the four agents are shown in Figs. 5 and 6. For conciseness, we use the following abbreviated notation for the checking and setting of state variables: for agent p and state value x, x (¬x) appearing in a guard stands for p.state = x (p.state ≠ x), and x appearing in the update stands for p.state := x. Similarly for propositions π appearing in the update: π stands for π := t and ¬π stands for
Fig. 5. Guarded transitions of honest agent Ah and potentially dishonest agent B
Fig. 6. Guarded transitions of the trusted party and communication channel
π := f. All transitions owned by C and T are just, except for the idling transitions. None of the transitions owned by Ah and B is just. We use simpler, but equivalent, fairness conditions for T than those following directly from the translation in the previous section:

FT ≡ □♦¬K(T, a1) ∧ □♦¬K(T, A.r1) ∧ □♦¬K(T, B.r1)
FC ≡ □♦¬K(C, m1) ∧ □♦¬K(C, m2) ∧ □♦¬K(C, m3) ∧ □♦¬K(C, m4) ∧ □♦¬K(C, a1) ∧ □♦¬K(C, A.r1) ∧ □♦¬K(C, B.r1) ∧ □♦¬K(C, TA.aborted) ∧ □♦¬K(C, TA.resolved) ∧ □♦¬K(C, TB.aborted) ∧ □♦¬K(C, TB.resolved)
Fz ≡ □♦(λ = Ah) ∧ □♦(λ = B) ∧ □♦(λ = T) ∧ □♦(λ = C)   (6)

The assertion Θ, which will denote the initial condition of the system and will be used in the specification,
is defined by

Θ ≡ A.state = M1 ∧ T.state = clean ∧ ⋀_{π ∈ {all other assertions}} ¬π   (7)
5.1.3. Proving fairness for Alice

Fairness for agent A is expressed by the formula

Ah ∥ B ∥ T ∥ C ⊨ Θ ⇒ [[B, C, z]]f □(contractB → ⟨⟨Ah⟩⟩f ♦contractA)

where Θ is the initial condition (7) and

contractA ≡ (K(A, m2) ∧ K(A, m4)) ∨ K(A, TA.resolved)
contractB ≡ (K(B, m1) ∧ K(B, m3)) ∨ K(B, TB.resolved)

Before we apply the proof rules we first establish some global invariants of the protocol, that is, properties of the form S ⊨ π ⇒ □ρ, where π and ρ are assertions (also written π ⇒ □ρ, or □ρ when π is clear from the context). Global invariants form the basis of any deductive proof. Methods for the generation and verification of invariants have been studied extensively (e.g., [MP95, BBM97]) and apply directly to adss, so we shall not elaborate on them here. In the remainder of the proof we will assume that the following invariants hold and use them in the discharge of the verification conditions:
□(K(A, m2) → A.state ≠ M1)   (8)
□(A.state = R2 ∧ T.state = aborted → K(C, A.r1) ∨ K(T, A.r1) ∨ K(C, TA.resolved))   (9)
□K(A, m1)   (10)
□K(A, a1)   (11)
□(T.state = resolved → A.state ≠ M1)   (12)
□(A.state ∈ {M3, M4} → K(A, m2))   (13)
The first step of the proof is to apply rule basic-state with the intermediate assertion

poss_contractA ≡ (K(B, m1) ∧ K(B, m3) ∧ K(A, m1) ∧ K(A, m2) ∧ recoverable)
              ∨ (K(B, TB.resolved) ∧ T.state = resolved ∧ recoverable)

where

recoverable ≡ contractA ∨ (¬K(C, a1) ∧ T.state ≠ aborted ∧ A.state ∉ {A1, A2, F})

which produces the two subgoals

Θ ⇒ [[B, C, z]]f □(contractB → poss_contractA)   (14)
poss_contractA ⇒ ⟨⟨Ah⟩⟩f ♦contractA   (15)
Discovering intermediate assertions is the creative and, in general, hard part of the proof; it requires insight into the system. In this case, contractB was strengthened into an assertion that characterizes the states from which it is possible for agent Ah to guarantee contractA eventually. Goal (14) follows immediately from the global invariant □(contractB → poss_contractA).
Goal (15) can be split using rule left-∨ (Sect. 4.1) into the two new goals

poss_contractA,1 ⇒ ⟨⟨Ah⟩⟩f ♦contractA   (16)
poss_contractA,2 ⇒ ⟨⟨Ah⟩⟩f ♦contractA   (17)

where

poss_contractA,1 ≡ K(B, m1) ∧ K(B, m3) ∧ K(A, m1) ∧ K(A, m2) ∧ recoverable
poss_contractA,2 ≡ K(B, TB.resolved) ∧ T.state = resolved ∧ recoverable

To prove goal (16) we first make the fairness conditions explicit:

poss_contractA,1 ⇒ ⟨⟨Ah⟩⟩(FT ∧ FC ∧ Fz → ♦contractA),   (18)
or, explicitly:

K(B, m1) ∧ K(B, m3) ∧ K(A, m1) ∧ K(A, m2) ∧ recoverable
⇒ ⟨⟨Ah⟩⟩(FT ∧ FC ∧ Fz → ♦((K(A, m2) ∧ K(A, m4)) ∨ K(A, TA.resolved)))   (19)
and then apply rule just-term (Sect. 4.1) with auxiliary assertions and corresponding fairness conditions

η1 ≡ ϕR ∧ A.state ∈ {M2, M3, M4, R1}    J1 ≡ λ = Ah
η2 ≡ ϕR ∧ K(C, A.r1)                    J2 ≡ ¬K(C, A.r1)
η3 ≡ ϕR ∧ K(T, A.r1)                    J3 ≡ ¬K(T, A.r1)
η4 ≡ ϕR ∧ K(C, TA.resolved)             J4 ≡ ¬K(C, TA.resolved)

with ϕR ≡ K(A, m2) ∧ recoverable. We can choose ηi ≡ f for all the other Ji. The ranking functions have range {0, . . . , 11}, with the normal order on integers; their value corresponds, intuitively, to the index of the diagram node under consideration (notice that the nodes are indexed in reverse topological order). Formally, they are defined as follows:
δ1 ≡  11  if A.state = M2 ∧ λ ≠ Ah
      10  if A.state = M2 ∧ λ = Ah
       9  if A.state = M3 ∧ λ ≠ Ah
       8  if A.state = M3 ∧ λ = Ah
       7  if A.state = M4 ∧ λ ≠ Ah
       6  if A.state = M4 ∧ λ = Ah
       5  if A.state = R1 ∧ λ ≠ Ah
       4  if A.state = R1 ∧ λ = Ah

δ2 ≡ 3      δ3 ≡ 2      δ4 ≡ 1
With these assertions, fairness conditions, and ranking functions, the resulting verification conditions are valid relative to the global invariants established before, which completes the proof of goal (16). The just termination diagram corresponding to this proof is shown in Fig. 7. Verification conditions of just termination diagrams were given in Definition 4.2.10. To prove goal (17), we again first make the fairness conditions explicit:

poss_contractA,2 ⇒ ⟨⟨Ah⟩⟩(FT ∧ FC ∧ Fz → ♦contractA)   (20)
Fig. 7. A diagram for goal (19). Each node n is labeled with the assertion µ(n). The expression M2 is an abbreviation for A.state = M2, and so on for the other values. The expression in the outer box is common and considered conjoined to all the inner ones. The ranking function is defined by δ(ni) = i (which corresponds to a topological order of the graph without self-loops). The fairness index j(ni) is 1 for n4, . . . , n11, 2 for n3, 3 for n2, and 4 for n1. Node n0 is the only terminal node
or, explicitly:

K(B, TB.resolved) ∧ T.state = resolved ∧ recoverable
⇒ ⟨⟨Ah⟩⟩(FT ∧ FC ∧ Fz → ♦((K(A, m2) ∧ K(A, m4)) ∨ K(A, TA.resolved)))   (21)
We then apply auxiliary rule just-term or, equivalently, the diagram from Fig. 8. We use fairness with intermediate assertions η1, . . . , η4 and fairness conditions J1, . . . , J4 as in the proof of (16), plus the additional intermediate assertions

η5 ≡ ϕT ∧ K(C, a1)    J5 ≡ ¬K(C, a1)
η6 ≡ ϕT ∧ K(T, a1)    J6 ≡ ¬K(T, a1)

where ϕT ≡ T.state = resolved ∧ recoverable, and we can choose ηi ≡ f for all the other Ji. Again, all verification conditions hold relative to the invariants established before. A diagram representation of the proof is shown in Fig. 8.
Fig. 8. A diagram for the proof of (21). The conventions are the same as those of Fig. 7. The fairness index j(ni) is 1 for n4, . . . , n9 and n12, . . . , n15, j(n3) = 2, j(n2) = 3, j(n1) = 4, j(n11) = 5, and j(n10) = 6. Node n0 is the only terminal node
5.2. Multi-party contract signing: BW protocol

Our second case study is the multi-party contract signing protocol of [BWW00] (bw protocol). In contrast to the previous case study, the bw protocol can involve an arbitrarily large number of agents; model checking can only verify finite instances (e.g., up to 3 agents). This is thus a case where the deductive methods of this paper appear to have a distinct advantage over model checking. Like the asw protocol in our first case study, bw is an optimistic protocol; i.e., the signing parties only contact the trusted third party in exceptional circumstances, when one thinks that the other may be cheating or that messages may have been lost; in most cases, the protocol can terminate without ever involving the third party. This is important since the trusted third party is usually a scarce resource that may serve many runs of the protocol at once. The bw protocol was the first multi-party contract-signing protocol to solve the problem in the optimal O(n) communication steps (where n is the number of signing parties). Previous solutions required the exchange of Ω(n²) messages in the worst case. Since this example is significantly more complex than the one of the previous section and some of the
proof strategies are similar, it may be beneficial to the reader to become familiar with the previous section first.
5.2.1. Informal description of the protocol

Before we provide the formal model, we describe the protocol informally in English. The following description is taken from [BWW00], except that we drop a few indices since we only consider a single session. We assume a fixed but arbitrary number, n, of parties, or signatories, involved in the contract. At most t ≤ n − 1 of these may be dishonest.

Protocol “Sign” for Honest Pi. The protocol is given by the following rules; Pi applies them, sequentially, until it stops or it generates an exception. Let c be the contract to sign. The protocol proceeds in locally defined rounds. Let r (= 1 initially) be a local round counter for Pi. Let M0 = nil.

• If r = 1 then:
  – Pi sends m1,i = signi(c, 1, “prev rnd ok”) to all signatories.
  – From all received messages of type m1,j it tries to compute full and consistent vectors M1 = ⟨m1,1, . . . , m1,n⟩ and X1 = M1. If this succeeds, Pi sets r to 2.
• If 2 ≤ r ≤ t + 2 then:
  – Pi sends message mr,i = ⟨mLr,i, mRr,i⟩, containing the two submessages mLr,i = signi(Mr−1, r, “vec ok”) and mRr,i = signi(c, r, “prev rnd ok”), to all signatories.
  – From all received messages of type mr,j it has collected, it tries to compile full and consistent vectors Mr = ⟨mRr,1, . . . , mRr,n⟩ and Xr = ⟨mLr,1, . . . , mLr,n⟩. If it succeeds and r < t + 2 then it sets r to r + 1. If it succeeds and r = t + 2 then it decides signed, with proof Ci = Mt+2, and stops.
• If any step above fails, e.g., because Pi cannot verify a signature, or it times out waiting for a message, then it sends a resolve request message Rr,i to T. If r = 1, then Rr,i = ⟨1, i, signi(m1,i, “resolve”)⟩. If 2 ≤ r ≤ t + 2, then Rr,i = ⟨r, i, signi(Xr−1, “resolve”), Xr−1, Mr−2⟩. After sending the request, Pi waits for an answer from T and stops. If the answer is sr,i then it sets Ci to sr,i.

Protocol “Sign” for Third Party T. We assume T receives a message Rr,i as defined in the protocol for Pi. Let c be the unique contract contained in Rr,i.
If this is the first time that T is asked about this contract, then T initializes a boolean variable signed to false and two sets con and abort_set, both set to ∅. The boolean variable signed indicates T's current decision, signed or failed. The set con contains all the indices of the signatories that contacted T, and abort_set is the set of all “failed” messages sent by T. T makes its decision deterministically as follows:

• (T accepts only one request from each Pi; all other requests are ignored.) If i ∈ con then the message Rr,i is ignored.
• (If T receives a request from round r = 1 and has not yet decided “signed” then T sends an “abort” to the requester.) If i ∉ con and signed = false and r = 1 then T sets

    ar,i := signT(c, r, i, “aborted”)
    abort_set := abort_set ∪ {(r, i)}
    con := con ∪ {i}

  and sends ar,i to Pi.
• (If T receives a request from a round r ≥ 2 and has not yet decided “signed” then T checks whether all previous requests came from dishonest players. If this is the case then it changes its decision to “signed,” otherwise it sticks to “aborted.”) If i ∉ con, signed = false, and r ≥ 2, then:
– If for all (s, k) ∈ abort_set we have s < r − 1 then T sets

    first_signed := ⟨Rr,i, signT(c, r, i, “signed”)⟩
    sr,i := first_signed
    signed := true
    con := con ∪ {i}

  and sends sr,i to Pi.
– Otherwise, T sets

    ar,i := signT(c, r, i, “aborted”)
    abort_set := abort_set ∪ {(r, i)}
    con := con ∪ {i}

  and sends ar,i to Pi.
• (A decision “signed” is always preserved.) If i ∉ con and signed = true then T sets

    sr,i := first_signed
    con := con ∪ {i}

  and sends sr,i to Pi.
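As a concrete illustration, T's decision rules above can be sketched in code. The following is a minimal, hypothetical Python rendering of T's bookkeeping (the class and method names are ours, and message contents, signatures, and the channel are elided); it is a sketch of the rules, not the paper's formal model:

```python
# Hypothetical sketch of the trusted party's deterministic decision rules
# in the bw protocol. Only decisions are tracked; cryptography is elided.

class TrustedParty:
    def __init__(self):
        self.signed = False        # T's current decision
        self.first_signed = None   # the first "signed" certificate issued
        self.con = set()           # indices of parties that contacted T
        self.abort_set = set()     # (round, index) pairs aborted by T

    def resolve(self, r, i):
        """Process resolve request R_{r,i}; return 'ignored', 'aborted' or 'signed'."""
        if i in self.con:                       # only one request per party
            return 'ignored'
        if not self.signed and r == 1:          # round-1 requests are aborted
            self.abort_set.add((r, i))
            self.con.add(i)
            return 'aborted'
        if not self.signed and r >= 2:
            # switch to "signed" only if every recorded abort happened
            # strictly before round r - 1
            if all(s < r - 1 for (s, k) in self.abort_set):
                self.signed = True
                self.first_signed = (r, i)
                self.con.add(i)
                return 'signed'
            self.abort_set.add((r, i))          # otherwise stick to "aborted"
            self.con.add(i)
            return 'aborted'
        self.con.add(i)                         # "signed" is preserved
        return 'signed'
```

For example, if T aborts a round-1 request and is later contacted at round 3 by another party, the only abort on record happened before round 2, so T switches its decision to signed, and every party contacting T afterwards also receives the signed answer.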
5.2.2. Modeling assumptions

We do not intend to provide a formal proof of all possible properties of the protocol. Instead, our goal is to show the procedure in detail on one interesting property, in order to demonstrate how to approach infinite-state protocols with the deductive approach. In fact, we use the first version of the protocol from [BWW00], instead of the extended version given at the end of that paper, which ensures abuse-freeness. In order to obtain a manageable model, we also make the following assumptions:

• We consider a single session: we can thus treat the contract and the session id as constants in the model, and effectively disregard them.
• We keep intact, on the other hand, the parameterization on the number of parties. Our study is thus completely parametric and hence infinite-state.
• We model the effects of the epistemic structure of the system, and not the epistemic structure itself. This is a common simplification in formal verification of security protocols [KR03, CKS06].

The restrictions, apart from the last one, are didactic, and the example demonstrates that the proof technique can be applied to the more complex situations as well.
5.2.3. Modeling of messages and first-order theory of messages

Consider all the possible messages involved in the protocol:

• c: the contract.
• ki, for i ∈ {1, . . . , n}, where n is the number of parties involved in the contract: the private keys of the agents (parties) Pi.
• kT: the private key of the trusted third party T.
• mLr,i, mRr,i for i ∈ {1, . . . , n} and r ∈ {1, . . . , t + 2}, where t is the maximum number of dishonest parties: the left and right sides of the main protocol messages exchanged by the parties.
• Xr, Mr: the vectors of all mLr,i and mRr,i, respectively (see [BWW00]).
• M0: an empty message; by extending the range of subscripts of M we can simplify several formulas and diagrams.
• Rr,i: the resolve request from Pi to the trusted third party T.
• ar,i: an “aborted” answer to Rr,i from T to Pi.
• sr,i: a “signed” answer to Rr,i from T to Pi.
Fig. 9. Dependencies between messages in the bw protocol. A set of incoming edges connected by an arc means that the destination message is known by an agent if all source messages are known by the agent
For each of the possible messages m in the system, we introduce atoms Kp m, where p is an agent Pi, the trusted third party T, or the channel C, meaning that Pi, T, or C “knows” message m. We usually abbreviate KPi by Ki. Sending a message m corresponds, in the model, to a transition making KC m true. Receiving m corresponds to making Ki m (if Pi receives m) or KT m (if T receives m) true. Notice that we are not using an epistemic logic, although we borrow the notation: Kj sr,i, for example, is simply a first-order relation symbol applied to the integer variables i, r, and j. To capture the relationship between messages and how one is formed from others, we need a theory of the K· m atoms. This theory is provided by Fig. 9, which is to be interpreted as follows. Consider a set of edges incoming to a node labeled with message m and connected with an arc:
Every such combination gives rise to the Horn clause Kp m ← Kp m1 ∧ · · · ∧ Kp mn (for all agents p). The theory is that of the minimal model of this definite program: if

Kp m ← Kp m^1_1 ∧ · · · ∧ Kp m^1_{n_1}
  ...
Kp m ← Kp m^k_1 ∧ · · · ∧ Kp m^k_{n_k}

are all the rules with head Kp m, then the theory contains the formula

Kp m ↔ ⋁_{i=1}^{k} ⋀_{j=1}^{n_i} Kp m^i_j

A double-arrowed edge is just an abbreviation for two edges. For example, one of the implications that we read off Fig. 9 is

Kp Xr ↔ (Kp mLr,1 ∧ · · · ∧ Kp mLr,n) ∨ Kp Rr,i   (for r > 1)

As another example, since the private key ki has no incoming edge, knowledge of it can never be inferred from knowledge of other messages. The action Inform(p, m), for a player p and a message m, appearing in the assignment part of a transition in the formalization of the protocol, is an abbreviation for: make Kp m true in the new state, together with all atoms that can be inferred from Kp m and other K· atoms that already hold in the current state. This can easily be expressed in the formal language.
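The closure just described can be sketched as a simple forward-chaining computation. The following Python fragment (our own illustration; the function and rule names are hypothetical, not the paper's) computes the effect of Inform(p, m) as the minimal model of the definite program restricted to agent p:

```python
# Hypothetical sketch of the knowledge closure behind Inform(p, m):
# forward chaining over definite Horn clauses K_p m <- K_p m1 /\ ... /\ K_p mn,
# as read off the message-dependency graph.

def inform(known, msg, rules):
    """known: set of messages agent p knows; rules: list of (head, body) pairs.
    Returns the closure of known ∪ {msg} under the rules (minimal model)."""
    known = set(known) | {msg}
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            # fire a rule when its whole body is known and its head is new
            if head not in known and all(b in known for b in body):
                known.add(head)
                changed = True
    return known
```

For instance, with a rule stating that the vector X1 is known once m1 and m2 are known, informing an agent that already knows m1 of the message m2 also adds X1 to its knowledge.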
5.2.4. Variables and initial condition

Apart from the messages, the system contains these other variables:

• statei : {normal, waiting, exception, finished} for all honest parties Pi.
• ri : [1 . . . t + 2] for all honest parties Pi, the turn in the main protocol that the party is executing.
• signedi : boolean for all honest parties Pi.
• con : set of [1 . . . n], a variable where T keeps record of the parties that contacted it.
• abort_set : set of [1 . . . t + 2] × [1 . . . n], a variable where T keeps record of all abort decisions it has made.

The initial condition of the system is:

statei = normal, signedi = f, Ki ki, Ki c, Ki M0 (for all i),
con = ∅, abort_set = ∅, KT kT,
¬Kp m for all other agents and messages, excluding those that are implied true by the above.
5.2.5. Transitions

The system is modeled as a guarded alternating transition system, as for the two-party protocol of [SSM06a]. Figures 10, 11, 12, and 13 define the transition relations. Fairness conditions are associated with the scheduler, which must schedule every process infinitely many times, and with the channel, which must eventually deliver every message. The trusted third party T is deterministic, and thus does not need any fairness condition.
Fig. 10. Transitions for honest Pi
Fig. 11. Transitions for dishonest Pi
Fig. 12. Transitions for T
Fig. 13. Transitions for the channel C
5.2.6. Deductive verification of fairness

To simplify the notation, we assume that party P1 is honest, and all the others, P2, . . . , Pn, are dishonest. Thus t = n − 1 and the protocol proceeds for at most n + 1 turns. What we are left with is still an infinite-state protocol, representing the worst case of the n-party contract signing scenario. We now show how to prove fairness for P1, represented by the statement

contracti ⇒ ⟨⟨P1⟩⟩♦contract1   (22)

for arbitrary i (without loss of generality, i ≠ 1), where

contracti ≡ Ki Mn+1 ∨ ∃r (Ki sr,i)

Using rule left-∨, we split goal (22) into two subgoals:

Ki Mn+1 ⇒ ⟨⟨P1⟩⟩♦contract1   (23)
∃r (Ki sr,i) ⇒ ⟨⟨P1⟩⟩♦contract1   (24)
Verification diagrams. We prove subgoal (23) using the just termination diagram of Fig. 14, and subgoal (24) using the just termination diagram of Fig. 15. The two diagrams are valid with respect to the system invariants listed below.
Fig. 14. Verification diagram for first case of proof of fairness of the bw protocol
Fig. 15. Verification diagram for second case of proof of fairness of the bw protocol
Invariants. The invariants on which the verification diagrams depend are the following (universally quantified):

i ≠ 1 ∧ Ki Mn+1 → r1 = n + 1   (25)
i ≠ 1 ∧ Ki mn+1,1 → state1 ≠ normal   (26)
(r, i) ∈ abort_set → i ∈ con   (27)
i ∈ con → ∃r ((r, i) ∈ abort_set) ∨ (signed ∧ ∃r (KC sr,i))   (28)
(r, i) ∈ abort_set → KT Rr,i   (29)
KT Rr,i → KC Rr,i   (30)
KC Rr,1 → r1 = r   (31)
r > 1 ∧ (r, i) ∈ abort_set → ∃c : vector[1 . . . r] of [1 . . . n]. c[r] = i ∧ ∀s ((s, c[s]) ∈ abort_set) ∧ ∀s ∀s′ (s ≠ s′ → c[s] ≠ c[s′])   (32)
(n + 1, i) ∉ abort_set   (33)
1 ∉ con → ∀i ((n, i) ∉ abort_set)   (34)
statei = finished → contracti ∨ ∃r (Ki ar,i)   (35)
Ki ar,i → (r, i) ∈ abort_set ∧ KT Rr,i   (36)
∃r (KC sr,i) → 1 ∈ con ∨ contract1 ∨ KC sr1,1   (37)
Ki sr,i → signed = t ∧ r1 ≥ 2   (38)
Most of these are inductive invariants, either alone or with respect to others. Invariant (32) needs special consideration. It is the key to the validity of the diagram of Fig. 14 and is part of the proof of Theorem 1 of [BWW00]. Intuitively, it says that if (r, i) ∈ abort_set, that is, if T sent an abort decision to Pi at turn r, then T must have sent at least one abort decision for every turn before r
(to a different party every time): according to T's rules, if this were not the case, it would have switched its decision to signed at some point. To prove invariant (32) formally, we use the classic inv rule from [MP95]: we must prove that the invariant holds at the initial state and that it is preserved by all transitions. It holds at the initial state trivially, since abort_set = ∅. The only transitions that need to be examined are those that can change the antecedent of the implication from false to true or the consequent from true to false. The latter possibility cannot happen because no transition ever removes pairs from abort_set. The only transitions that can change the antecedent from false to true are abortr,i for some r and i. Assume then that abortr,i is taken. If r = 1, then the invariant holds trivially in the new state. Otherwise, if r ≥ 2, then, according to the transition relation, there exists some (s, k) ∈ abort_set with s ≥ r − 1. By the inductive hypothesis, there is a vector c of length s satisfying the invariant for (s, k). We build a new vector, of length r, for (r, i) by appending i at the end of c[1, . . . , r − 1]. The new vector satisfies the properties required by the invariant because c has length at least r − 1 and because i does not appear in c[1, . . . , r − 1], as can be verified by checking invariant (27). Invariants (33) and (34) are consequences of (32).

Verification conditions. We show how to prove some of the verification conditions for the diagram of Fig. 14.

• Ki Mn+1 ∧ ¬contract1 ∧ r1 = n + 1 → state1 = waiting ∨ (state1 = exception ∧ KC sn+1,1) ∨ (state1 = exception ∧ 1 ∉ con ∧ (KC Rn+1,1 ∨ KT Rn+1,1)): Assume the antecedent. state1 ≠ normal follows, by invariant (26), from Ki mn+1,1, which is implied by Ki Mn+1. Next, we assume that state1 = exception ∧ ¬KC sn+1,1 and prove that 1 ∉ con; this follows from invariants (28), (29), (30), (31), and (33). Last, we prove that state1 ≠ finished: assume the contrary. It follows from (35), since ¬contract1, that ∃r (K1 ar,1); by (36), that ∃r ((r, 1) ∈ abort_set ∧ KT Rr,1); and from there, using (30) and (31), that (r1, 1) ∈ abort_set, which contradicts (33).
• Ki Mn+1 ∧ ¬contract1 ∧ r1 = n + 1 ∧ state1 = exception ∧ KT Rn+1,1 ∧ 1 ∉ con ∧ ¬signed → ∀(s, k) ∈ abort_set (s < r1 − 1) (needed to prove that a transition out of node n1 through signr,i is always possible): This follows from (34).

This finishes the proof of (22) and successfully concludes this case study.
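The witness construction used in the inductive step of invariant (32) above can be sketched concretely. The following Python fragment (our own illustration, storing the 1-based rounds of the chain in a 0-indexed list) builds the new vector for (r, i) from a chain c witnessing (s, k) with s ≥ r − 1, and checks the properties the invariant requires:

```python
# Hypothetical sketch of the inductive step of invariant (32): extend a
# witnessing chain c (c[s-1] = party aborted at round s) to a new abort (r, i).

def extend_chain(c, r, i):
    """c has length >= r - 1 and i does not occur in c[:r-1] (guaranteed by
    invariant (27), since i was not yet in con). Returns a chain of length r
    witnessing invariant (32) for (r, i)."""
    assert len(c) >= r - 1 and i not in c[:r - 1]
    new_c = c[:r - 1] + [i]        # append i at position r
    assert len(new_c) == r and new_c[r - 1] == i
    assert len(set(new_c)) == r    # all entries distinct
    return new_c
```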
6. Conclusions

Alternating systems and properties are essential for the precise modeling of several domains: security, control systems, and auction mechanisms are just a few examples. In these same domains one usually finds programs that are short in terms of lines of code, but exhibit extremely complex interactions and are thus very hard to get right manually; their correctness is also usually critical, and these two factors make them good candidates for formal verification. With the recent advances in static analysis, theorem proving, and other techniques for the analysis of infinite-state systems, alternating systems are a natural next step of expansion for formal verification. In this paper we presented one possible framework for the analysis of alternating infinite-state systems, derived some fundamental theoretical tools such as a complete proof system and verification diagrams, and described some applications in the verification of security protocols and in interactions with control theory. Some of the natural next steps in this line of research include:

• Further development of decision procedures for quantified fragments of the theories of important data types: most recent advances in decision procedures have been restricted to quantifier-free fragments. We have seen, on the other hand, that proof techniques for alternating logics generate verification conditions that may contain quantifier alternation. The fact that most theories become undecidable or intractable in this setting is not a good argument against pursuing this line of research, because verification conditions are usually very simple, but large in number: much of traditional theorem proving has been concerned with the opposite combination.
• Interactions with constructive proof theory and synthesis of programs from proofs: all our proof rules and diagrams are constructive, in the sense that, if the proofs of the verification conditions are constructive and provide one-step strategies, then the proof rule or diagram enables us to combine these strategies into global strategies for the alternating formula. Thus, using constructive theorem-proving techniques and designing decision procedures for quantified formulas that can also return realizations for ∀∃ verification conditions is an important research direction.
• Finding a class of verification diagrams for atl* that is complete and intuitive to use, i.e., the analogue of gvds for ltl.
• Imperfect information: with alternation now in the picture, the most important trait still required for a precise modeling of many real systems is imperfect information. In this paper, we assumed that the full state of the system (gts) is always visible to all agents. If this is not the case, the definition of an agent's strategy changes, since the agent's choice must be the same in two states that it cannot distinguish, for example because they only differ in an unobservable local variable of another agent. Alternation and imperfect information are the crucial characteristics of most security protocols. In some cases, like the ones we analyzed in Sect. 5, we can model a system as one of perfect information by a careful a priori metalevel analysis. In general, though, not all systems can be faithfully represented as ones of perfect information, and even when they can, the meta-level reasoning needed is susceptible to errors. Since the presence of imperfect information makes even the simple model-checking problem over finite-state alternating systems undecidable (this fact is stated without proof in [AHK02]; it is also a simple consequence of [FS05]), we expect techniques like those described in this paper to be essential for this new extension.
In particular, a complete proof system for atl* over systems with imperfect information is an important open problem.
7. History and related work

This section contains some pointers to the history of the ideas on which this paper rests, as well as connections to related work not mentioned elsewhere in the paper. We do not attempt to provide a comprehensive overview of the field, but rather point to some key contributions that have guided us in our work. We hope that the interested reader will find these a useful starting point from which to explore the field further.

Introduction. The use of linear temporal logic as an adequate specification language for reactive systems was first advocated in [Pnu77]. For a complete account of deductive and algorithmic verification using ltl, see [MP91, MP95]. See [Tho97] for the theory of ω-languages. The use of branching temporal logics in verification was first advocated in [BMP81]. The logic ctl was introduced in [EC80, EC82] and ctl* in [EH86]. Good surveys on branching-time logics in verification are [Eme90, CGP99]. The logics atl and atl* were introduced in [AHK97]. More on the history of alternating temporal logics will be given in the notes for Sect. 2. Detailed case studies of formal verification of mutual exclusion protocols can be found in [MP95]. The ltl-inv rule first appeared, under the name “inv”, in [MP83], although the same proof principle had been used in some form or another in all deductive verification since Floyd's [Flo67] and Hoare's [Hoa69] founding papers; in this paper, since the focus is on alternation, the rule is renamed ltl-inv. The alternating inv rule first appeared in [Sla02]. The concept of alternation as applied to atl* should not be confused with the use of alternating finite-state automata on infinite words in verification as acceptors for linear and branching-time logics; for more information on this use, see [Var95, Fin02]. The adjective alternating was introduced in [CS76] as describing a form of computation generalizing nondeterminism, also independently discovered in [Koz76].
The concept of alternation quickly became an essential tool in various fields of complexity, automata theory, and verification. It was adopted in [AHK97] in the definition of alternating systems and alternating temporal logic, probably because it had by then become, in theoretical computer science, a metonymy for any model of computation containing game-theoretic elements, even though strict alternation between two players, or even the restriction to only two players, was no longer present. Alur, Henzinger, and Kupferman later changed the name of the computation model to concurrent game structures, but the adjective alternating attached to the logic had stuck. A survey of alternation in complexity and automata theory can be found in [CKS81].
Mathematical Models. Our definition of strategic game generalizes that of [AHK02] and is based on the general notion of game structure in game theory [OR94]. The study of infinite games has a long history in mathematics and important applications to computer science and set theory. The first study of infinite games is due to Gale and Stewart [GS53], who introduced the abstract model of an infinite extensive game on the infinite binary tree and showed that Zermelo's determinacy theorem for finite-state games fails in general, since there exists a winning condition (a set of infinite binary strings) for which no player has a winning strategy. The proof of this central fact made essential use of the Axiom of Choice and was highly non-constructive. Gale and Stewart then showed that games whose winning condition is open or closed (in the natural Cantor topology on infinite strings) are always determined, and posed as an open problem the study of the dependency of determinacy on the level of the winning condition in higher orders of the Borel hierarchy [Kur58, Kur61]. This problem stimulated a lot of research in the following decades: [Wol55] proved that Gδ and Fσ games are necessarily determined; [Dav64] proved it for Gδσ and Fσδ games. Finally, [Mar75, Mar85] showed that the game is determined for any winning condition in the Borel hierarchy. The result of Davis [Dav64] was a starting point for the study of computability of winning strategies and other applications to theoretical computer science: the accepting conditions of automata on infinite trees, used in Rabin's celebrated proof of decidability of monadic second-order logic [Rab69], are Gδσ. [GH82] used finite-memory strategies in infinite games to considerably simplify the proof of Rabin's theorem. [McN93] studied ω-regular games played on finite graphs and gave algorithms to construct winning sets and finite-memory winning strategies.
Zielonka's survey [Zie98] generalized this and several subsequent papers by showing how to compute winning sets and minimum-memory winning strategies for infinite games played on infinite graphs with a general form of finitary winning condition. These techniques enable us, for example, to express the value of atl* formulas over an arbitrary ads. See [Zie98] for more details on the history of infinite games. Our game frames and game transition systems are the obvious generalization to infinite states of the concurrent game structures of [AHK02]. [AHK97, AHK02] describe many sub-classes of finite-state alternating systems, modeling different facets of real-world concurrency such as synchronous and asynchronous communication. The use of first-order logic formulas to represent the transitions of a reactive system has a long tradition and is well surveyed in [MP91]. Our ads model is derived from gtss by choosing the most general version of gtss that can still be expressed in first-order logic. The choice of making the fairness conditions depend only on the state, and not on the transition, is inspired by [KP05, Var91]. This simplifies the theoretical arguments considerably and is clearly without loss of generality, since the system can always be augmented with a variable remembering the last transition taken. Our gads model is a close relative of the model used in the Mocha model checker [AHM+98]. lctss are an alternating version of the linear transition system models of [San05]. [AHK97] introduced the logics atl and atl* and developed model-checking algorithms over finite-state systems. Another important game-theoretic logic, more useful in the study of mechanism design than in verification, is that of [Par85]. A model checker for atl was presented in [AHM+98]. Subsequent papers developed all the standard tools of modal and temporal verification, such as abstract interpretation [HMMR00] and bisimulation and simulation relations [AHKV98].
Both atl and atl* are sub-logics of the alternating mu-calculus [AHK02], the alternating analogue of the modal mu-calculus [Koz83].
Proof System for atl*. The idea of using formal logic to prove properties of programs originated with [Flo67] and was extended in [Hoa69] to a proof system for partial correctness of structured imperative programs. The first proof of relative completeness of that proof system was given in [Coo78]. Proofs of relative completeness for proof systems for total correctness were given in [HPS77] and [HMP77]. The question of what assertion language is necessary for relative completeness to hold was addressed in [SdRG89], which advocated the use of the mu-calculus for this purpose. For reactive programs, a complete proof system for ltl was given in [MP89] and for ctl* in [PK02, KP05]. The technique of lifting automata-theoretic results to proof systems was first proposed by Vardi and applied to ltl [Var91]. Later a similar approach was applied to ctl [FG96] and ctl* [KP05]. Our approach is most closely related to that of [KP05]. Deductive completeness of Hilbert-style proof systems for logics independent of programs has also been shown for ltl [GPSS80], ctl [FG96], ctl* [Rey00], and atl [GvD06]. Proof systems of this kind can also be used for verification, either by reducing proofs of system properties to proofs of much more complex
properties in the program logic [Lam02], or as auxiliary rules to simplify and split temporal proof obligations before applying system-dependent rules. The basic-state and basic-path rules in our proof system were already present in [KP05]; the only difference is that, in the alternating setting, we must require the temporal tester to be deterministic. The history rule is new and is inspired by the method of proof by history variables of [MP91]. The assertion rules are new; they extend the response rules of [Sla02], in turn based on the rules of [MP89, KP05] for ltl and ctl*. The relative completeness proof for the assertion rules is based on techniques from [MP89, LPS81]. The proof of Prop. 3.2.5 depends on results from [Zie98] on finite-memory strategies for games on infinite graphs.
Derived Rules and Verification Diagrams. For an introduction to Gentzen-style proof systems see, e.g., [TS00]. Specialized proof rules for classes of ltl formulas were studied in [MP89, MP95, MP96]. Our termination rules are most similar to those of [LPS81]. Verification diagrams were introduced in [MP94] and further developed in [MP95, MP96]. [BMS95, MBSU98, Sip99] defined a single, general class of verification diagrams that is sound and relatively complete for proving arbitrary ltl properties of reactive systems. Finding a similar class of diagrams for atl* that is both complete and intuitive to use is an open problem.
Security Protocols. Recent papers that use model checking of atl properties over finite-state models to prove properties of security protocols include [KR02, KR03, CKS06, MS05]. [BDD+04, DDMP04] developed a logic and a proof system for linear-time properties of infinite-state security protocols. The assumption of perfect information is necessary for model checking, because checking atl properties over systems with imperfect information is undecidable already for finite-state systems. (This fact is stated without proof in [AHK02]; it is a simple consequence of the results of [FS05].) A complete proof system for atl* over systems with imperfect information is an important open problem.
Acknowledgments We thank Aaron Bradley, César Sánchez, David Dill, Marc Pauly, Grigori Mints, the referees and participants of ICTAC 2006, and the journal's anonymous referees and editors for their insightful comments. This research was supported in part by NSF grants CCR-02-20134, CCR-02-09237, CNS-0411363, CCF-0430102, and CSR-0615449, and by NAVY/ONR contract N00014-03-1-0939.
References
[AHK97]
Alur R, Henzinger TA, Kupferman O (1997) Alternating-time temporal logic. In: FOCS, pp 100–109. An extended version appeared in Compositionality—The Significant Difference, vol 1536, LNCS. Springer, Heidelberg, pp 23–60
[AHK02]
Alur R, Henzinger TA, Kupferman O (2002) Alternating-time temporal logic. J ACM 49(5):672–713
[AHKV98]
Alur R, Henzinger TA, Kupferman O, Vardi MY (1998) Alternating refinement relations. In: Sangiorgi D, de Simone R (eds) CONCUR, vol 1466, LNCS. Springer, Heidelberg, pp 163–178
[AHM+98]
Alur R, Henzinger TA, Mang FYC, Qadeer S, Rajamani SK, Tasiran S (1998) Mocha: modularity in model checking. In: Proceedings of the 10th international conference on computer aided verification, vol 1427, LNCS. Springer, Heidelberg, pp 516–520
[ASW98]
Asokan N, Shoup V, Waidner M (1998) Asynchronous protocols for optimistic fair exchange. In: Proceedings of the IEEE symposium on security and privacy
[BBM97]
Bjørner NS, Browne A, Manna Z (1997) Automatic generation of invariants and intermediate assertions. Theor Comput Sci 173(1):49–87. Preliminary version appeared in: 1st international conference on principles and practice of constraint programming, vol 976, LNCS (1995). Springer, Heidelberg, pp 589–623
[BDD+04]
Backes M, Datta A, Derek A, Mitchell JC, Turuani M (2004) Compositional analysis of contract signing protocols. In: Proceedings of the 18th IEEE computer security foundations workshop, pp 30–45
[BMP81]
Ben-Ari M, Manna Z, Pnueli A (1981) The temporal logic of branching time. In: POPL. ACM Press, pp 164–176
[BMS95]
Browne A, Manna Z, Sipma HB (1995) Generalized temporal verification diagrams. In: 15th conference on the foundations of software technology and theoretical computer science, vol 1026, LNCS. Springer, Heidelberg, pp 484–498
[BWW00]
Baum-Waidner B, Waidner M (2000) Round-optimal and abuse-free optimistic multi-party contract signing. In: Montanari U, Rolim JDP, Welzl E (eds) 27th international colloquium on automata, languages and programming (ICALP 2000), vol 1853, LNCS. Springer, Heidelberg, pp 524–535
[CGP99]
Clarke EM, Jr, Grumberg O, Peled DA (1999) Model Checking. MIT Press, Cambridge
[CKS81]
Chandra AK, Kozen DC, Stockmeyer LJ (1981) Alternation. J ACM 28:114–133
[CKS06]
Chadha R, Kremer S, Scedrov A (2006) Formal analysis of multi-party contract signing. J Autom Reason 36(1–2):39–83
[Coo78]
Cook SA (1978) Soundness and completeness of an axiom system for program verification. SIAM J Comput 7(1):70–90
[CS76]
Chandra AK, Stockmeyer LJ (1976) Alternation. In: FOCS. IEEE Computer Society, pp 98–108
[Dav64]
Davis M (1964) Infinite games of perfect information. In: Dresher M, Shapley LS, Tucker AW (eds) Advances in game theory. Annals of Mathematics Studies 52. Princeton University Press, Princeton, pp 85–101
[DDMP04]
Datta A, Derek A, Mitchell JC, Pavlović D (2004) Secure protocol composition. In: Proceedings of the 19th annual conference on mathematical foundations of programming semantics, number 83, Electronic Notes in Theoretical Computer Science
[EC80]
Emerson EA, Clarke EM, Jr (1980) Characterizing correctness properties of parallel programs using fixpoints. In: ICALP, vol 85, LNCS. Springer, Heidelberg, pp 169–181
[EC82]
Emerson EA, Clarke EM Jr (1982) Using branching time temporal logic to synthesize synchronization skeletons. Sci Comput Program 2(3):241–266
[EH86]
Emerson EA, Halpern JY (1986) ‘Sometimes’ and ‘not never’ revisited: on branching time versus linear time. J ACM 33:151–178
[Eme90]
Emerson EA (1990) Temporal and modal logic. In: van Leeuwen J (ed) Handbook of Theoretical Computer Science, vol B. Elsevier, Amsterdam, pp 995–1072
[FG96]
Fix L, Grumberg O (1996) Verification of temporal properties. J Logic Comput 6(3):343–361
[Fin02]
Finkbeiner BE (2002) Verification algorithms based on alternating automata. PhD thesis, Stanford University
[Flo67]
Floyd RW (1967) Assigning meanings to programs. In: Mathematical aspects of computer science, vol 19, Proceedings of symposia in applied mathematics. American Mathematical Society, pp 19–32
[FS05]
Finkbeiner BE, Schewe S (2005) Uniform distributed synthesis. In: LICS, pp 321–330
[GH82]
Gurevich Y, Harrington L (1982) Trees, automata, and games. In: STOC, pp 60–65
[GPSS80]
Gabbay DM, Pnueli A, Shelah S, Stavi J (1980) On the temporal analysis of fairness. In: POPL. ACM Press, pp 163–173
[GS53]
Gale D, Stewart FM (1953) Infinite games with perfect information. Ann Math Stud 28:245–266
[GvD06]
Goranko V, van Drimmelen G (2006) Complete axiomatization and decidability of alternating-time temporal logic. Theor Comput Sci 353(1–3):93–117
[HMMR00]
Henzinger TA, Majumdar R, Mang FYC, Raskin J-F (2000) Abstract interpretation of game properties. In: Proceedings of 7th international static analysis symposium (SAS), vol 1824, LNCS. Springer, Heidelberg, pp 220–239
[HMP77]
Harel D, Meyer AR, Pratt VR (1977) Computability and completeness in logics of programs. In: STOC, pp 261–268
[Hoa69]
Hoare CAR (1969) An axiomatic basis for computer programming. Commun ACM 12(10):576–583
[HPS77]
Harel D, Pnueli A, Stavi J (1977) A complete axiomatic system for proving deductions about recursive programs. In: STOC, pp 249–260
[Koz76]
Kozen D (1976) On parallelism in turing machines. In: FOCS. IEEE Computer Society, pp 89–97
[Koz83]
Kozen D (1983) Results on the propositional µ-calculus. Theor Comput Sci 27(3):333–354
[KP05]
Kesten Y, Pnueli A (2005) A compositional approach to CTL∗ verification. Theor Comput Sci 331:397–428
[KR02]
Kremer S, Raskin J-F (2002) Game analysis of abuse-free contract signing. In: Computer security foundations workshop (CSFW). IEEE Computer Society
[KR03]
Kremer S, Raskin J-F (2003) A game-based verification of non-repudiation and fair exchange protocols. J Comput Security 11(3):399–429
[Kur58]
Kuratowski C (1958) Topologie, vol I, 4th edn. Państwowe Wydawnictwo Naukowe, Warsaw
[Kur61]
Kuratowski C (1961) Topologie, vol II, 3rd edn. Państwowe Wydawnictwo Naukowe, Warsaw
[Lam02]
Lamport L (2002) Specifying systems. Addison-Wesley, Reading
[LPS81]
Lehmann D, Pnueli A, Stavi J (1981) Impartiality, justice and fairness: the ethics of concurrent termination. In: Even S, Kariv O (eds) Automata, languages and programming—8th international colloquium, ICALP, vol 115, LNCS. Springer, Heidelberg, pp 264–277
[Mar75]
Martin DA (1975) Borel determinacy. Ann Math 102:363–371
[Mar85]
Martin DA (1985) A purely inductive proof of Borel determinacy. In: Proceedings of symposia in pure mathematics, vol 42. American Mathematical Society, pp 303–308
[MBSU98]
Manna Z, Browne A, Sipma HB, Uribe TE (1998) Visual abstractions for temporal verification. In: Haeberer A (ed) AMAST, vol 1548, LNCS. Springer, Heidelberg, pp 28–41
[McN93]
McNaughton R (1993) Infinite games played on finite graphs. Ann Pure Appl Logic 65:149–184
[Mit96]
Mitchell JC (1996) Foundations of programming languages. Foundations of computing. MIT Press, Cambridge
[MP83]
Manna Z, Pnueli A (1983) Verification of concurrent programs: a temporal proof system. In: de Bakker JW, van Leeuwen J (eds) Foundations of computer science IV, distributed systems: part 2. Mathematical Centre Tracts 159. Center for Mathematics and Computer Science (CWI), Amsterdam, pp 163–255
[MP89]
Manna Z, Pnueli A (1989) Completing the temporal picture. In: Ausiello G, Dezani-Ciancaglini M, Della Rocca SR (eds) 16th international colloquium on automata, languages, and programming, vol 372, LNCS. Springer, Heidelberg, pp 534–558
[MP91]
Manna Z, Pnueli A (1991) The temporal logic of reactive and concurrent systems: specification. Springer, Heidelberg
[MP94]
Manna Z, Pnueli A (1994) Temporal verification diagrams. In: Proceedings of the international symposium on theoretical aspects of computer software, vol 789, LNCS. Springer, Heidelberg, pp 726–765
[MP95]
Manna Z, Pnueli A (1995) Temporal verification of reactive systems: safety. Springer, Heidelberg
[MP96]
Manna Z, Pnueli A (1996) Temporal verification of reactive systems: progress. Draft. Available online at http://theory.stanford.edu/~zm/tvors3.html
[MS05]
Mahimkar A, Shmatikov V (2005) Game-based analysis of denial-of-service prevention protocols. In: Computer security foundations workshop (CSFW). IEEE Computer Society
[OR94]
Osborne MJ, Rubinstein A (1994) A course in game theory. MIT Press, Cambridge
[Par85]
Parikh R (1985) The logic of games and its applications. Ann Discrete Math 24:111–140
[PK02]
Pnueli A, Kesten Y (2002) A deductive proof system for CTL∗. In: Brim L, Jančar P, Křetínský M, Kučera A (eds) CONCUR 2002—concurrency theory: 13th international conference, vol 2421, LNCS. Springer, Heidelberg, pp 24–40
[Pnu77]
Pnueli A (1977) The temporal logic of programs. In: FOCS. IEEE Computer Society, pp 46–57
[Rab69]
Rabin MO (1969) Decidability of second order theories and automata on infinite trees. Trans AMS 141:1–35
[Rey00]
Reynolds M (2000) An axiomatization of full computation tree logic. J Symbol Logic 66(3):1011–1057
[San05]
Sankaranarayanan S (2005) Mathematical analysis of programs. PhD thesis, Stanford University
[SdRG89]
Stomp FA, de Roever W-P, Gerth RT (1989) The µ-calculus as an assertion language for fairness arguments. Inf Comput 82(3):278–322
[Sip99]
Sipma HB (1999) Diagram-based verification of discrete, real-time and hybrid systems. PhD thesis, Computer Science Department, Stanford University
[Sla02]
Slanina M (2002) Control rules for reactive system games. In: Fischer B, Smith DR (eds) Logic-based program synthesis: state of the art and future trends, AAAI spring symposium. AAAI Press, pp 95–104. Available from AAAI as Technical Report SS-02-05
[Sla07]
Slanina M (2007) Deductive verification of alternating systems. PhD thesis, Stanford University
[SSM06a]
Slanina M, Sipma HB, Manna Z (2006) Game-theoretic deductive verification of a contract-signing protocol. Technical Report REACT-TR-2006-03, Stanford University, Computer Science Department, REACT Group. Available at http://react.stanford.edu/TR/
[SSM06b]
Slanina M, Sipma HB, Manna Z (2006) Game-theoretic deductive verification of a contract-signing protocol II. Technical Report REACT-TR-2006-04, Stanford University, Computer Science Department, REACT Group. Available at http://react.stanford.edu/TR/
[SSM06c]
Slanina M, Sipma HB, Manna Z (2006) Proving atl* properties of infinite-state systems. In: Barkaoui K, Cavalcanti A, Cerone A (eds) 3rd international colloquium on theoretical aspects of computing (ICTAC 2006), vol 4281, LNCS. Springer, Heidelberg, pp 242–256
[Tho95]
Thomas W (1995) On the synthesis of strategies in infinite games. In: Mayr EW, Puech C (eds) Proceedings of the 12th annual symposium on theoretical aspects of computer science, STACS ’95, vol 900, LNCS. Springer, Heidelberg, pp 1–13
[Tho97]
Thomas W (1997) Languages, automata, and logic. In: Rozenberg G, Salomaa A (eds) Handbook of formal languages, vol 3, chap 7. Springer, Heidelberg, pp 389–455
[TS00]
Troelstra AS, Schwichtenberg H (2000) Basic proof theory, 2nd edn. Cambridge University Press, Cambridge
[Var91]
Vardi MY (1991) Verification of concurrent programs: the automata-theoretic framework. Ann Pure Appl Logic 51(1–2):79–98
[Var95]
Vardi MY (1995) Alternating automata and program verification. In: van Leeuwen J (ed) Computer science today: recent trends and developments, vol 1000, LNCS. Springer, Heidelberg, pp 471–485
[Wik]
Wikipedia. Gomoku. http://en.wikipedia.org/wiki/Gomoku. Retrieved on February 17, 2007
[Wol55]
Wolfe P (1955) The strict determinateness of certain infinite games. Pacific J Math 5(Supplement I)
[Zie98]
Zielonka W (1998) Infinite games on finitely coloured graphs with applications to automata on infinite trees. Theor Comput Sci 200:135–183
A. List of symbols

Set theory
a ∈ A                      a is a member of set A
A ⊆ B                      set A is contained in set B
A ∩ B                      intersection of sets A and B
A ∪ B                      union of A and B
A \ B                      set difference, A minus B
A × B                      cartesian product of A and B
A + B                      disjoint union (coproduct) of A and B, equal to ({0} × A) ∪ ({1} × B); when no confusion arises, we identify ⟨0, a⟩ with a ∈ A and ⟨1, b⟩ with b ∈ B
∩_{i∈I} A_i                intersection of a family of sets
∪_{i∈I} A_i                union of a family of sets
∑_{i∈I} A_i                coproduct (disjoint union) of a family of sets
∏_{i∈I} A_i                product of a family of sets
A^n                        product of n copies of A
⟨a_1, a_2, …, a_n⟩         tuple, member of a product
α_i or α[i]                if α ∈ ∏_{i∈I} A_i, the component of α at index i
2^A                        power set of A
f : A → B                  (total) function from A to B
f : A ⇀ B                  partial function from A to B
(A → B), B^A               set (type) of all functions from A to B
Formal language theory
A*                         finite strings, free monoid over alphabet A
A+                         non-empty finite strings over A
A^ω                        infinite strings over A
x[i…j]                     substring from positions i to j (positions start at 0)
x[i…]                      infinite suffix from position i onward
a −→ b                     rewriting rule
First-order logic
≡                          syntactic equivalence
¬ϕ                         not, negation
ϕ ∧ ψ                      and, conjunction
ϕ ∨ ψ                      or, disjunction
ϕ → ψ                      implies, implication
ϕ ↔ ψ                      if and only if, double implication
∧_{i∈I} ϕ_i                conjunction of a family of formulas
∨_{i∈I} ϕ_i                disjunction of a family of formulas
∀x ϕ                       universal quantification, scope as short as possible
∃x ϕ                       existential quantification, scope as short as possible
∀x. ϕ                      universal quantification, scope as long as possible
∃x. ϕ                      existential quantification, scope as long as possible
⊨ ϕ                        validity
s ⊨ ϕ                      truth, ϕ holds in state (variable assignment) s
s[x ↦ v]                   state identical to s except at variables x, where it has values v
ϕ[x/t]                     safe substitution of terms t for variables x in ϕ
Boolean algebra
B                          the two-element Boolean algebra {0, 1}
ā                          complement in Boolean lattice
a ⊓ b                      meet in Boolean lattice
a ⊔ b                      join in Boolean lattice
⊓_{i∈I} a_i                meet in complete Boolean lattice
⊔_{i∈I} a_i                join in complete Boolean lattice
Linear algebra
x                          (column) vector
A                          matrix
A^tr                       transpose of matrix A
A ≤ B, A ≥ B               element-wise comparison of matrices
Received 1 May 2007
Accepted in revised form 11 March 2008 by K. Barkaoui, M. Broy, A. Cavalcanti and A. Cerone
Published online 24 April 2008