Game


Contents

1 Semantics
1.1 Formalizing Program Syntax
1.2 Semantics of Expressions
1.3 States
1.4 Operational Semantics
1.4.1 Small-Step (Structural) Semantics
1.4.2 Transitive Closure
1.4.3 TODO Termination
1.4.4 Big-Step Semantics
1.4.5 TODO Code Optimization / Compilation
1.5 Denotational Semantics
1.5.1 Non-Determinism
1.6 Axiomatic Semantics and Hoare Logic
1.6.1 Hoare Triples
2 Infinite-duration Two-player Games on Graphs
2.1 Basic Concepts
2.2 Classical Objectives
2.3 Strategy
2.4 Solving Reachability Games
2.5 Solving Buchi Games
2.5.1 Classical 𝑂(𝑛𝑚) Algorithm
2.5.2 Faster 𝑂(𝑛²) Algorithm
2.6 Solving Parity Games
3 Automata on Finite Words
3.1 Basic Closure Properties
4 Automata on Infinite Words (𝜔-automata)
4.1 Buchi Automata
4.1.1 Complementation of DBW
4.1.2 Union of DBWs
4.1.3 Intersection of DBWs
4.1.4 Union of NBWs.
4.1.5 Intersection of NBWs
4.1.6 NBW Complement of DBW
4.1.7 Emptiness of DBW/NBW
4.1.8 Universality Problem of DBW
4.1.9 Universality Problem of NBW
4.1.10 Complementation NBW
4.1.11 Containment Problem
4.2 Parity Automata

1 Semantics

1.1 Formalizing Program Syntax

We first introduce components of our program (and their informal meanings)

skip means a command that does nothing.

A variable 𝑥 is a string or a name. The set of all variables is denoted by Vars.

Definition 1.1 (Syntax of Arithmetic Expression). The syntax of arithmetic expressions is defined as

For 𝑐 ∈ ℤ and 𝑥 ∈ Vars,

𝑒 := (𝑐) | (𝑥) | (𝑒)+(𝑒) | (𝑒)−(𝑒) | (𝑒)/(𝑒) | (𝑒)×(𝑒) | (𝑒)%(𝑒),

where / stands for integer division and % stands for mod. We add parentheses here to avoid the arithmetic-priority issue, and will omit them later on.

The set of all arithmetic expressions is denoted by AExp.

Definition 1.2 (Syntax of Boolean Expression). The syntax of a boolean expression is defined as

For 𝑎 ∈ AExp,

𝑏 := true | false | 𝑎>𝑎 | 𝑎<𝑎 | 𝑎==𝑎 | 𝑎≥𝑎 | 𝑎≤𝑎 | 𝑎≠𝑎 | 𝑏 and 𝑏 | 𝑏 or 𝑏 | not 𝑏,

where 𝑎 is an arithmetic expression.

The set of all boolean expressions is denoted by BExp.

Then we can define the syntax of program.

Definition 1.3 (Syntax of Program). The syntax of a program 𝑃 is defined as

𝑃 := skip | 𝑥 = 𝑒 | 𝑃;𝑃 | if 𝑒 then 𝑃 else 𝑃 fi | while 𝑒 do 𝑃 od,

where 𝑥 ∈ Vars, 𝑒 is an expression, and 𝑃;𝑃 means a sequence of two programs.

The meaning of these programs are not formally defined yet. We will add more program syntax later.

1.2 Semantics of Expressions

Definition 1.4 (Valuation). A valuation is a function 𝑣:VarsZ, which specifies values of variables. The set of all valuations is denoted by Val. Given a valuation 𝑣, the value of the variable 𝑥 is 𝑣(𝑥). During the execution of the program, the variable itself remains the same (just a name), but its value changes as the valuation changes.

Definition 1.5 (Semantics of Arithmetic Expression). The semantics of an expression 𝑒 is the meaning / value of the expression, denoted by [[𝑒]]. Formally, [[𝑒]] is a function Val → ℤ (it is [[𝑒]] as a whole that is the function, not [[·]] alone), such that the value of the function [[𝑒]] at 𝑣, denoted by [[𝑒]](𝑣) or simply [[𝑒]]𝑣, is the value of the expression 𝑒 given the valuation 𝑣.

We can immediately get the semantics of some simple expressions by definition: 𝑣Val,

Definition 1.6 (Semantics of Boolean Expression). The semantics of a Boolean expression 𝑏 is a function [[𝑏]]:Val{true,false}, such that [[𝑏]](𝑣) is the truth value of 𝑏 given a valuation 𝑣.

For simplicity, we assume that non-zero numbers stand for true and zero stands for false, so that we do not need to explicitly name a domain for values of boolean variable.
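The definitions above can be sketched in code. Below is a minimal Python rendering (my own encoding, not from the notes): expressions are nested tuples, and the semantics [[𝑒]] is literally a Python function from valuations (dicts Vars → int) to values.

```python
def sem_aexp(e):
    """[[e]] : Val -> Z, for an arithmetic expression e given as a nested tuple."""
    tag = e[0]
    if tag == "const":
        return lambda v: e[1]
    if tag == "var":
        return lambda v: v[e[1]]
    if tag in ("+", "-", "*", "/", "%"):
        f, g = sem_aexp(e[1]), sem_aexp(e[2])
        op = {"+": lambda a, b: a + b,
              "-": lambda a, b: a - b,
              "*": lambda a, b: a * b,
              "/": lambda a, b: a // b,   # integer division, as in the notes
              "%": lambda a, b: a % b}[tag]
        return lambda v: op(f(v), g(v))
    raise ValueError(tag)

def sem_bexp(b):
    """[[b]] : Val -> {True, False}, for a boolean expression b."""
    tag = b[0]
    if tag in ("true", "false"):
        return lambda v: tag == "true"
    if tag in (">", "<", "=="):
        f, g = sem_aexp(b[1]), sem_aexp(b[2])
        cmp = {">": lambda a, c: a > c,
               "<": lambda a, c: a < c,
               "==": lambda a, c: a == c}[tag]
        return lambda v: cmp(f(v), g(v))
    if tag == "and":
        f, g = sem_bexp(b[1]), sem_bexp(b[2])
        return lambda v: f(v) and g(v)
    if tag == "not":
        f = sem_bexp(b[1])
        return lambda v: not f(v)
    raise ValueError(tag)

# [[x + 2]] at the valuation {x: 1} is 3
print(sem_aexp(("+", ("var", "x"), ("const", 2)))({"x": 1}))
```

Note that `sem_aexp` returns a function, matching the statement that [[𝑒]] itself is a function Val → ℤ.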

Notation 1 (Inference-rule notation). Writing

  Assumptions
  ───────────
    Result

means that if the Assumptions are true, then the Result is true.

Semantics of more complex expressions can be defined as rules under some assumptions:

Rules for boolean expressions:

1.3 States

Definition 1.7 (State of Program). We can define the state of a program as:

1.
The current valuation: 𝑣:VarsZ.
2.
The current valuation together with a pointer showing the current position in code: (𝑙,𝑣) where 𝑙 is a line number.
3.
The current valuation together with a piece of remaining code to be executed: (𝑃,𝑣).

We denote by Σ the set of all states. Program execution is simply captured by transitions between states.

Analysis of the 3 types of states:

1.
The first type 𝑣: It does not capture the intermediate process of the program (termination, exceptions).
2.
The second type (𝑙,𝑣): E.g.,
      while x > 0 && y > 0 do
        if x > y then
          x = x - y
        else
          y = y - x
        fi
      od

Suppose the initial valuation is (2,1) for 𝑥,𝑦 respectively. Then the execution process can be represented as (2, 2,1) → (3, 2,1) → (6, 1,1). Drawbacks: it requires knowing the whole program, and cannot model separate functions in the program independently.

3.
Thus, we use the third type of state (𝑃,𝑣). E.g., x = 1; y = x + 2 can be represented as (𝑥=1;𝑦=𝑥+2, (0,0)) → (𝑦=𝑥+2, (1,0)) → (∅, (1,3)).
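The trace above can be reproduced with a minimal small-step stepper. This is my own encoding, not the notes' syntax: programs are nested tuples, the empty program is modelled as ("skip",), and expressions are Python functions of the valuation.

```python
def step(P, val):
    """One small-step transition (P, val) -> (P', val')."""
    tag = P[0]
    if tag == "assign":                      # (x = e, v) -> (skip, v[x := [[e]]v])
        _, x, e = P
        val2 = dict(val)
        val2[x] = e(val)
        return ("skip",), val2
    if tag == "seq":
        _, P1, P2 = P
        if P1 == ("skip",):                  # (skip; P2, v) -> (P2, v)
            return P2, val
        P1p, val2 = step(P1, val)            # otherwise step inside the first component
        return ("seq", P1p, P2), val2
    raise ValueError("stuck state")

# Trace x = 1; y = x + 2 from the valuation (x, y) = (0, 0)
prog = ("seq", ("assign", "x", lambda v: 1),
               ("assign", "y", lambda v: v["x"] + 2))
state = (prog, {"x": 0, "y": 0})
while state[0] != ("skip",):
    state = step(*state)
print(state[1])  # {'x': 1, 'y': 3}
```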

1.4 Operational Semantics

Definition 1.8 (Operational Semantics). Operational semantics is a category of semantics that is defined by inference rules over transitions between states.

1.4.1 Small-Step (Structural) Semantics

Definition 1.9 (Small-step Transition Relation). A small-step transition relation (𝑃,𝑣) → (𝑃′,𝑣′) means that the first state transits to the second state in one execution step.

Definition 1.10 (Small-step Operational Semantics). Small-step operational semantics defines inference rules for single-step transitions.

Inference rules (with new syntax):

Consider the extension with Break, Continue, and for:

1.4.2 Transitive Closure

Definition 1.11 (Transitive closure). (𝑃,𝑣) →∗ (𝑃′,𝑣′) means that starting from state (𝑃,𝑣), we can reach state (𝑃′,𝑣′) in some (possibly zero) number of steps. Formally, it is defined by the rules

  ──────────────────
   (𝑃,𝑣) →∗ (𝑃,𝑣)

   (𝑃,𝑣) → (𝑃′,𝑣′)
  ──────────────────
   (𝑃,𝑣) →∗ (𝑃′,𝑣′)

   (𝑃,𝑣) →∗ (𝑃′,𝑣′)    (𝑃′,𝑣′) →∗ (𝑃′′,𝑣′′)
  ────────────────────────────────────────────
   (𝑃,𝑣) →∗ (𝑃′′,𝑣′′)

1.4.3 TODO Termination

Definition 1.12 (Termination). A program 𝑃 with valuation 𝑣 terminates if it eventually transits to the empty program, denoted by Term(𝑃,𝑣). Formally, it is defined by

   (𝑃,𝑣) →∗ (∅,𝑣′)
  ──────────────────
     Term(𝑃,𝑣)

E.g., prove that the Euclidean Algorithm for calculating the GCD terminates for all 𝑥,𝑦 ∈ ℤ:

while 𝑥 > 0 and 𝑦 > 0 do      (guard 𝑔)
  if 𝑥 > 𝑦 then 𝑥 = 𝑥 − 𝑦 else 𝑦 = 𝑦 − 𝑥 fi      (body 𝑃′)
od

TODO We denote the whole program by 𝑃, the condition in while by 𝑔, and the loop body by 𝑃′.
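A quick sanity check (my own sketch, not the notes' proof): for positive inputs, 𝑥 + 𝑦 strictly decreases in every iteration, so the loop runs at most 𝑥 + 𝑦 times and must terminate.

```python
def euclid_steps(x, y):
    """Run the loop from the notes; return (#iterations, final (x, y))."""
    steps = 0
    while x > 0 and y > 0:        # guard g
        if x > y:                 # body P'
            x = x - y
        else:
            y = y - x
        steps += 1                # x + y strictly decreased this iteration
    return steps, (x, y)

print(euclid_steps(2, 1))    # terminates after 2 iterations
print(euclid_steps(12, 18))  # the surviving value is gcd(12, 18) = 6
```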

1.4.4 Big-Step Semantics

Definition 1.13 (Big-Step Relation). A big-step relation (𝑃,𝑣) ↓ 𝑣′ means that the program 𝑃 with valuation 𝑣 terminates, modifying the valuation to 𝑣′.

Definition 1.14 (Big-Step Operational Semantics). Big-step operational semantics defines inference rules for big-step relations.

Inference rules:
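The big-step relation (𝑃,𝑣) ↓ 𝑣′ can be sketched directly as a recursive evaluator. As before, this reuses my own tuple encoding of programs (guards and expressions are Python functions of the valuation; this encoding is an assumption, not the notes' concrete syntax).

```python
def big_step(P, val):
    """Compute v' with (P, val) ↓ v', when P terminates."""
    tag = P[0]
    if tag == "skip":
        return val
    if tag == "assign":
        _, x, e = P
        val2 = dict(val)
        val2[x] = e(val)
        return val2
    if tag == "seq":                          # (P1;P2, v) ↓ v''  via the intermediate v' of P1
        return big_step(P[2], big_step(P[1], val))
    if tag == "if":
        _, b, P1, P2 = P
        return big_step(P1 if b(val) else P2, val)
    if tag == "while":
        _, b, body = P
        if not b(val):
            return val
        # unfold once: (while b do body od, v) behaves as (body; while b do body od, v)
        return big_step(P, big_step(body, val))
    raise ValueError(tag)

loop = ("while", lambda v: v["m"] < v["n"],
        ("seq", ("assign", "s", lambda v: v["s"] + v["i"]),
        ("seq", ("assign", "i", lambda v: v["i"] * 2),
               ("assign", "m", lambda v: v["m"] + 1))))
print(big_step(loop, {"n": 4, "m": 0, "i": 1, "s": 0}))  # s ends as 2**4 - 1 = 15
```

Divergence of the object program shows up as non-termination (or stack overflow) of the evaluator, matching the fact that ↓ relates only terminating runs.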

1.4.5 TODO Code Optimization / Compilation

TODO

𝑃 and 𝑄 are equivalent only if (necessary but not sufficient)

∀𝑣𝑎𝑙, 𝑣𝑎𝑙′. (𝑃,𝑣𝑎𝑙) ↓ 𝑣𝑎𝑙′ ⟺ (𝑄,𝑣𝑎𝑙) ↓ 𝑣𝑎𝑙′.

This is not sufficient because the termination is not taken into consideration.

Example 1.1. Consider 𝑃: 𝑥 = 1 and 𝑄: 𝑥 = 0; while 𝑥 ≠ 1 do (𝑥 = 1 or skip) od, where “or” denotes a nondeterministic choice. 𝑃 always terminates with 𝑥 = 1, but 𝑄 terminates only on runs in which 𝑥 = 1 happens to be executed. We cannot say they are equivalent.

1.5 Denotational Semantics

Definition 1.15 (Denotational Semantics). Denotational semantics is a category of semantics that is defined as mathematical functions. Usually, the denotational semantics of a program is obtained inductively based on the (already defined) denotational semantics of its subprograms.

We are going to use the current valuation as the state, i.e., the set of states is Σ = Val.

The type of semantics of arithmetic / boolean expressions does not change (but Val becomes Σ): If 𝑒 is an arithmetic expression, then [[𝑒]]:ΣZ. If 𝑏 is a boolean expression, then [[𝑏]]:Σ{true,false}. The denotational semantics of them basically rewrites the inference rules (operational semantics) as operations on functions:

Definition 1.16 (Denotational Semantics of Program). The semantics of a program 𝑃 is the function that transforms the initial state (valuation) to the final state (valuation), denoted by [[𝑃]]. Formally, it is a partial function [[𝑃]]:ΣΣ. It is partial because it is not defined on initial states that do not terminate, etc.

Definition 1.17 (Conditional statement). Let 𝑓,𝑔:ΣΣ,𝑏:Σ{true,false}, then the conditional semantics cond(𝑏,𝑓,𝑔):ΣΣ is defined as

cond(𝑏,𝑓,𝑔)(𝑣𝑎𝑙) = 𝑓(𝑣𝑎𝑙) if 𝑏(𝑣𝑎𝑙) = true, and 𝑔(𝑣𝑎𝑙) if 𝑏(𝑣𝑎𝑙) = false.

E.g., we can rewrite [[𝑎1>𝑎2]](𝑣𝑎𝑙) as cond([[𝑎1]]>[[𝑎2]], true, false)(𝑣𝑎𝑙), where true and false are viewed as constant functions.

Semantics of program (check that the R.H.S.s are valuations):

Semantics of while:

Proposition 1.1. Given a boolean expression 𝑏 and a program 𝑃, there exists a fixed point of 𝑇𝑏,𝑃.

Proof 1.1. We prove by constructing functions based on Operational Semantics.

For any program 𝑃 and valuation 𝑣𝑎𝑙, we define [[𝑃]]𝑣𝑎𝑙 = 𝑣𝑎𝑙′ iff (𝑃,𝑣𝑎𝑙) ↓ 𝑣𝑎𝑙′ in Operational Semantics, and [[𝑃]]𝑣𝑎𝑙 = undefined if there is no such 𝑣𝑎𝑙′ (the program does not terminate).

Running the inference rules, we get a function 𝑓 for [[while 𝑏 do 𝑃 od]] and a function 𝑔 for [[if 𝑏 then 𝑃; while 𝑏 do 𝑃 od else skip fi]]. Recall the corollary about the equivalence between the two, which basically means

𝑓(𝑣𝑎𝑙) = 𝑣𝑎𝑙′ ⟹ 𝑔(𝑣𝑎𝑙) = 𝑣𝑎𝑙′, and 𝑔(𝑣𝑎𝑙) = 𝑣𝑎𝑙′ ⟹ 𝑓(𝑣𝑎𝑙) = 𝑣𝑎𝑙′.

Thus, 𝑓 = 𝑔 = 𝑇𝑏,𝑃(𝑓), and 𝑓 is indeed a fixed point.

Proposition 1.2. The fixed point is not unique, but the least defined fixed point is unique.

Proof 1.2. We can prove the nonuniqueness simply by finding multiple fixed points: 𝑇(𝑓)=𝑓 iff

∀𝑣𝑎𝑙, 𝑓(𝑣𝑎𝑙) = 𝑓([[𝑃]](𝑣𝑎𝑙)) if [[𝑏]](𝑣𝑎𝑙) = true, and 𝑓(𝑣𝑎𝑙) = [[skip]](𝑣𝑎𝑙) = 𝑣𝑎𝑙 if [[𝑏]](𝑣𝑎𝑙) = false.

So 𝑓 could be cond([[𝑏]], 𝑐, [[skip]]) for any constant function 𝑐 (i.e., 𝑐(𝑣𝑎𝑙) is the same for every 𝑣𝑎𝑙).

Proof 1.3. More detailed analysis of constraints on the fixed points: Let 𝑔 be any fixed point of 𝑇𝑏,𝑃 (abbreviated to 𝑇 below). Let 𝑣𝑎𝑙₀ be an arbitrary initial valuation; then there are three possible cases of the while loop.

1.
The entire while loop terminates after a finite number, say 𝑛, of iterations:

We have

𝑣𝑎𝑙ᵢ₊₁ = [[𝑃]](𝑣𝑎𝑙ᵢ)  (1)

for 𝑖 = 0,1,…,𝑛−1. Moreover,

[[𝑏]]𝑣𝑎𝑙₀ = ⋯ = [[𝑏]]𝑣𝑎𝑙ₙ₋₁ = true, [[𝑏]]𝑣𝑎𝑙ₙ = false.

This implies that

𝑇(𝑔)(𝑣𝑎𝑙ᵢ) = cond([[𝑏]], 𝑔∘[[𝑃]], [[skip]])(𝑣𝑎𝑙ᵢ) = 𝑔∘[[𝑃]](𝑣𝑎𝑙ᵢ) = 𝑔(𝑣𝑎𝑙ᵢ₊₁)  (2)

for 𝑖 = 0,…,𝑛−1, and 𝑇(𝑔)(𝑣𝑎𝑙ₙ) = [[skip]](𝑣𝑎𝑙ₙ) = 𝑣𝑎𝑙ₙ. Since 𝑔 is a fixed point,

𝑔(𝑣𝑎𝑙₀) = 𝑔(𝑣𝑎𝑙₁) = ⋯ = 𝑔(𝑣𝑎𝑙ₙ) = 𝑣𝑎𝑙ₙ.  (*1)
2.
Some iteration, say the 𝑛-th one, does not terminate so the whole loop does not terminate.

Equation (1) still holds for 𝑖 = 0,…,𝑛−1. And we have

[[𝑏]]𝑣𝑎𝑙₀ = ⋯ = [[𝑏]]𝑣𝑎𝑙ₙ₋₁ = [[𝑏]]𝑣𝑎𝑙ₙ = true.

Then, equation (2) also holds for 𝑖 = 0,…,𝑛−1. Since 𝑔 is a fixed point of 𝑇,

𝑔(𝑣𝑎𝑙₀) = ⋯ = 𝑔(𝑣𝑎𝑙ₙ).  (*2)
3.
Every iteration terminates but the loop itself does not:

Equation (1) and [[𝑏]]𝑣𝑎𝑙ᵢ = true hold for 𝑖 = 0,1,…. Similarly, we have

𝑔(𝑣𝑎𝑙₀) = 𝑔(𝑣𝑎𝑙₁) = ⋯.  (*3)

Equations (*1), (*2), and (*3) are the only constraints we have on 𝑔 in the three cases, respectively. In (*1), the values 𝑔(𝑣𝑎𝑙₀),…,𝑔(𝑣𝑎𝑙ₙ) are uniquely determined to be 𝑣𝑎𝑙ₙ. However, in (*2) and (*3), the values are not unique.

We want the 𝑔 that aligns with the Operational Semantics. Requirements are:

Now, this 𝑔 is uniquely determined.

Least defined fixed point: the fixed point whose value is undefined at every initial valuation where it is not uniquely determined.

The 𝑔 uniquely determined above is the least defined fixed point.
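This least defined fixed point can be reached by iterating the transformer starting from the nowhere-defined function. A sketch on my own finite toy example (not from the notes): the loop while 𝑥 ≠ 0 do 𝑥 = 𝑥 − 1 od with valuations restricted to 𝑥 ∈ {0,…,5}, partial functions represented as Python dicts.

```python
def T(f):
    """T_{b,P}(f) = cond([[b]], f ∘ [[P]], [[skip]]) on the toy state space."""
    g = {}
    for x in range(6):
        if x != 0:               # [[b]](val) = true: g = f ∘ [[P]]
            y = x - 1            # [[P]] is the assignment x = x - 1
            if y in f:           # g stays undefined wherever f is undefined
                g[x] = f[y]
        else:                    # [[b]](val) = false: behave like skip
            g[x] = x
    return g

f = {}                           # least element: the nowhere-defined function
while T(f) != f:                 # Kleene iteration up to the fixed point
    f = T(f)
print(f)                         # every x in {0,...,5} is mapped to the final valuation x = 0
```

If the loop diverged on some valuation, that valuation would simply never enter the dict, which is exactly "undefined where the value is not uniquely determined".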

1.5.1 Non-Determinism

Without non-determinism, [[𝑃]] : Σ ⇀ Σ is a partial function. Let ⊥ be the state representing non-termination, so that we can instead write the total function [[𝑃]] : Σ → Σ ∪ {⊥}.

With non-determinism, [[𝑃]] : Σ → Powerset(Σ ∪ {⊥}) \ {∅}, since no valuation is mapped to the empty set of outcomes.

For while, we also find a fixed point of the transformer, but the definition of the transformer is more complicated.

1.6 Axiomatic Semantics and Hoare Logic

Definition 1.18 (Invariant). An assertion attached to some point of the program that always holds whenever execution reaches that point.

E.g., the first line of a while loop may be reached many times.

Definition 1.19 (Pre-condition and post-condition). The invariant assigned to the beginning or the end of a (part of a) program, respectively.

E.g.,

{𝑛 > 0}
𝑖 = 1;
{𝑛 > 0, 𝑖 = 1}
𝑠 = 0;
{𝑛 > 0, 𝑖 = 1, 𝑠 = 0}
𝑚 = 0;
{𝑛 > 0, 0 ≤ 𝑚 ≤ 𝑛, 𝑖 = 2^𝑚, 𝑠 = 2^0 + ⋯ + 2^(𝑚−1)}
while 𝑚 < 𝑛 do
  𝑠 = 𝑠 + 𝑖;
  𝑖 = 𝑖 × 2;
  𝑚 = 𝑚 + 1;
od
{𝑠 = 2^𝑛 − 1}

1.6.1 Hoare Triples

Definition 1.20 (Hoare triple). A Hoare triple {𝐴}𝑃{𝐵}, where 𝐴 and 𝐵 are two assertions and 𝑃 is a program, means that, for any current valuation 𝑣𝑎𝑙, if 𝐴 holds in state 𝑣𝑎𝑙 (denoted by 𝑣𝑎𝑙 ⊨ 𝐴), and 𝑣𝑎𝑙′ is a state that can be reached by a terminating run of 𝑃 starting at 𝑣𝑎𝑙, then 𝐵 holds in state 𝑣𝑎𝑙′. Formally, {𝐴}𝑃{𝐵} is defined as the logical statement

∀𝑣𝑎𝑙, 𝑣𝑎𝑙′. (𝑣𝑎𝑙 ⊨ 𝐴 ∧ (𝑃,𝑣𝑎𝑙) ↓ 𝑣𝑎𝑙′) ⟹ 𝑣𝑎𝑙′ ⊨ 𝐵.

Remark: the assertions can be true or false. true always holds (∀𝑣𝑎𝑙, 𝑣𝑎𝑙 ⊨ true) and false never holds (∀𝑣𝑎𝑙, ¬(𝑣𝑎𝑙 ⊨ false)).

If the program does not terminate, the Hoare triple does not say anything.

Hoare triples talk about all possibilities of the program execution.
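A Hoare triple can be checked empirically on concrete states (my own sketch, no substitute for a proof): for several initial valuations with 𝑣𝑎𝑙 ⊨ 𝑛 > 0, run the annotated loop from the example above and test the post-condition 𝑠 = 2^𝑛 − 1 on the final state.

```python
def run(n):
    """Execute the loop body of the annotated program for a given n > 0."""
    i, s, m = 1, 0, 0
    while m < n:
        s = s + i
        i = i * 2
        m = m + 1
    return s

for n in range(1, 8):              # initial states satisfying the pre-condition n > 0
    assert run(n) == 2 ** n - 1    # the post-condition holds in the final state
print("Hoare triple holds on all tested valuations")
```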

Axioms:

2 Infinite-duration Two-player Games on Graphs

2.1 Basic Concepts

Games in which two players play forever.

Definition 2.1. A game graph or arena is a finite directed graph (𝑉,𝐸) with a partition (𝑉1,𝑉2) of 𝑉, where

Given an initial vertex 𝑣 ∈ 𝑉, the game proceeds as follows:

2.2 Classical Objectives

Let 𝜌 = 𝜌₀,𝜌₁,… be an infinite path. We denote the suffix of 𝜌 starting from the 𝑖-th vertex by 𝜌[𝑖…]. Define:

Let 𝑇 be a subset of vertices, here are some of the classical objectives:

Definition 2.2. Let 𝑇 be a set of target vertices. Then,

Definition 2.3. Let 𝑇 be a set of Büchi vertices. Then,

We can check that:

Definition 2.4. Let 𝑝 : 𝑉 → {0,1,…,𝑑} be a priority function. Then the parity objective is defined as parity(𝑝) = {𝜌 | min{𝑝(𝑣) | 𝑣 ∈ inf(𝜌)} is even}.

It is the most general objective we will study.

Proposition 2.1. Buchi(𝑇) = parity(𝑝) where 𝑝(𝑣) = 0 for 𝑣 ∈ 𝑇 and 𝑝(𝑣) = 1 otherwise.

In this case, parity(𝑝) = {𝜌 | min{𝑝(𝑣) | 𝑣 ∈ inf(𝜌)} = 0} = {𝜌 | ∃𝑣 ∈ inf(𝜌), 𝑣 ∈ 𝑇} = Buchi(𝑇).

Proposition 2.2. coBuchi(𝑇) = parity(𝑝) where 𝑝(𝑣) = 2 for 𝑣 ∈ 𝑇 and 𝑝(𝑣) = 1 otherwise.

To find a priority function for coBuchi, we want {𝜌 | inf(𝜌) ⊈ 𝑇} = {𝜌 | min{𝑝(𝑣) | 𝑣 ∈ inf(𝜌)} is odd}. This hints that we set 𝑝(𝑣) to be a smaller odd number for 𝑣 ∉ 𝑇.

Proposition 2.3. In general, parity(𝑝) is the complement of parity(𝑞) for 𝑞 = 1 + 𝑝 (or 𝑞 = 2𝑘 + 1 + 𝑝 for 𝑘 ∈ ℤ≥0).

Definition 2.5. We say an objective 𝑂 is a tail objective if whenever 𝜌 ∈ 𝑂, we have 𝜌[𝑖…] ∈ 𝑂 for every 𝑖.

Proposition 2.4. Parity objectives are tail objectives, since finite prefixes do not matter.

2.3 Strategy

Notation 2. For a vertex 𝑢, we denote by Succ(𝑢) = {𝑣 ∈ 𝑉 : (𝑢,𝑣) ∈ 𝐸} the set of successor vertices of 𝑢.

Definition 2.6. A strategy for player 𝑖 is a function 𝜎𝑖 : 𝑉∗ × 𝑉𝑖 → 𝑉, where 𝑉∗ represents the history (the portion of the path visited so far). Given a history 𝑤 and the current vertex 𝑣, the strategy outputs player 𝑖’s choice of the next vertex to visit, 𝜎𝑖(𝑤,𝑣) ∈ Succ(𝑣).

Notation 3. The outcome is fixed given the initial vertex 𝑣 and strategies (no matter memoryless or not) of the two players 𝜎1,𝜎2. We denote it by Outcome(𝑣,𝜎1,𝜎2).

Without other specifications, we assume that strategies are memoryless in the following discussion.

Remark 1. In the case where strategies have memory, one may discuss the size of the memory. The memory is said to be finite if it has finitely many bits (not merely that it can memorize a finite number of steps).

Definition 2.7. A strategy 𝜎1 for player 1 is called a winning strategy for player 1 from a vertex 𝑣 if for every strategy 𝜎2 of player 2 we have Outcome(𝑣,𝜎1,𝜎2) ∈ 𝑂1. The winning strategies for player 2 are defined analogously.

Definition 2.8 (Spoiling strategy). A strategy 𝜎2 for player 2 spoils 𝜎1 if Outcome(𝑣,𝜎1,𝜎2) ∉ 𝑂1.

Definition 2.9. Given an objective 𝑂𝑖 for player 𝑖, the winning set Win𝑖(𝑂𝑖) is the subset of vertices from which player 𝑖 has a winning strategy (i.e., the other player has no spoiling strategy for this).

Definition 2.10. A game with objectives 𝑂1,𝑂2 for players 1 and 2 respectively is determined if Win1(𝑂1)=𝑉\Win2(𝑂2).

Usually we aim at finding the winning set of player 1.

2.4 Solving Reachability Games

Suppose the objective for player 1 is Reach(𝑇). (Besides attractors and traps for this objective, I also include some related conclusions on winning strategies for player 2 and objective Safe(𝑉\𝑇).)

Definition 2.11. Let 𝑇 be a set of target vertices. The attractor for player 𝑖 is defined as Attr𝑖(𝑇)=Win𝑖(Reach(𝑇)), the set of vertices from which player 𝑖 has a strategy to reach 𝑇.

Proposition 2.5. The set Attr1(𝑇) can be inductively defined as follows:

The inductive definition of Attr2(𝑇) is analogous.

Definition 2.12. For all vertices in Attr1(𝑇), define rank(𝑣) = 𝑖 if 𝑣 ∈ 𝐴ᵢ \ 𝐴ᵢ₋₁ (see Prop. 2.5).

Proposition 2.6. By definition, 𝐴 ⊆ Win1(Reach(𝑇)) and 𝐴 ∩ Win2(Safe(𝑉\𝑇)) = ∅.

[TODO: We will soon see that they are equal]

Intuitively, 𝐴ᵢ consists of all vertices from which player 1 can force reaching 𝑇 in at most 𝑖 steps, so 𝐴 ⊆ Win1(Reach(𝑇)).

Definition 2.13. Let 𝑇 be a set of target vertices. A vertex set 𝐶 with 𝐶 ∩ 𝑇 = ∅ is said to be a trap for player 1 if player 2 has a strategy to confine the play in 𝐶 (and hence never reach 𝑇), i.e.,

1.
If 𝑣 ∈ 𝐶 ∩ 𝑉1, then Succ(𝑣) ⊆ 𝐶;
2.
If 𝑣 ∈ 𝐶 ∩ 𝑉2, then Succ(𝑣) ∩ 𝐶 ≠ ∅.

We often use the term “a trap of player 1 in 𝐶” or “a trap of player 1 out of 𝑇”.

That is, 𝐶 ⊆ Win2(Safe(𝑉\𝑇)) and 𝐶 ∩ Win1(Reach(𝑇)) = ∅.

Proposition 2.7. 𝑉 \ ⋃ᵢ≥₀ 𝐴ᵢ is a trap for player 1.

Proof 2.1.

1.
𝑇 ⊆ 𝐴 ⟹ (𝑉\𝐴) ∩ 𝑇 = ∅.
2.
Suppose 𝑣 ∈ (𝑉\𝐴) ∩ 𝑉1. If Succ(𝑣) ⊈ 𝑉\𝐴, i.e., Succ(𝑣) ∩ 𝐴 ≠ ∅, then Succ(𝑣) ∩ 𝐴ᵢ ≠ ∅ for some 𝑖. In this case, 𝑣 should have been added into 𝐴, which is a contradiction. Thus, Succ(𝑣) ⊆ 𝑉\𝐴.
3.
Suppose 𝑣 ∈ (𝑉\𝐴) ∩ 𝑉2. If Succ(𝑣) ∩ (𝑉\𝐴) = ∅, i.e., Succ(𝑣) ⊆ 𝐴, then since the 𝐴ᵢ are increasing and Succ(𝑣) is finite, Succ(𝑣) ⊆ 𝐴ᵢ for some 𝑖. Then 𝑣 should have been added into 𝐴, which is a contradiction. Thus, Succ(𝑣) ∩ (𝑉\𝐴) ≠ ∅.

Derivation 2.1. Together, we have 𝑉\𝐴 ⊆ Win2(Safe(𝑉\𝑇)) and (𝑉\𝐴) ∩ Win1(Reach(𝑇)) = ∅. This implies that

Proposition 2.8. 𝐴 = Win1(Reach(𝑇)) and 𝑉\𝐴 = Win2(Safe(𝑉\𝑇)).

Find 𝐴 using a BFS:

Algorithm 1: Linear Algorithm for Finding Attractor
Input: Game graph 𝐺 = (𝑉,𝐸), set of target vertices 𝑇 ⊆ 𝑉
Output: Attractor set 𝐴
  𝑄 ← 𝑇; 𝐴 ← ∅
  SuccInA[𝑣] ← 0 for all 𝑣 ∈ 𝑉
  while 𝑄 ≠ ∅ do
    pop one vertex 𝑣 from 𝑄
    𝐴 ← 𝐴 ∪ {𝑣}
    for 𝑤 ∈ Pred(𝑣) \ 𝐴 do
      SuccInA[𝑤] ← SuccInA[𝑤] + 1
      if 𝑤 ∈ 𝑉1 then                                  // corresponds to {𝑣 ∈ 𝑉1 ∣ Succ(𝑣) ∩ 𝐴 ≠ ∅}
        𝑄 ← 𝑄 ∪ {𝑤}
      if 𝑤 ∈ 𝑉2 and SuccInA[𝑤] = outdeg(𝑤) then      // corresponds to {𝑣 ∈ 𝑉2 ∣ Succ(𝑣) ⊆ 𝐴}
        𝑄 ← 𝑄 ∪ {𝑤}
  return 𝐴
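A runnable sketch of this backward BFS, under my own graph encoding (an assumption, not from the notes): `succ` maps each vertex to its successor list, and `v1` is the set of player-1 vertices.

```python
from collections import deque

def attractor(succ, v1, T):
    """Compute Attr_1(T) with SuccInA counters, as in Algorithm 1."""
    pred = {v: [] for v in succ}
    for u, ws in succ.items():
        for w in ws:
            pred[w].append(u)
    cnt = {v: 0 for v in succ}          # SuccInA[v]
    A, Q, queued = set(), deque(T), set(T)
    while Q:
        v = Q.popleft()
        A.add(v)
        for w in pred[v]:
            if w in A:
                continue
            cnt[w] += 1                 # one more successor of w lies in A
            if w in queued:
                continue
            if w in v1 or cnt[w] == len(succ[w]):
                queued.add(w)
                Q.append(w)
    return A

# Tiny arena: player 1 owns a and b, player 2 owns c; target T = {t}.
succ = {"a": ["t", "c"], "b": ["c"], "c": ["b", "t"], "t": ["t"]}
# Only a and t are attracted: at c, player 2 can move to b, and b must return to c.
print(attractor(succ, {"a", "b"}, ["t"]))
```

Each edge is inspected a constant number of times, matching the 𝑂(𝑚) bound.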

Running time:

2.5 Solving Buchi Games

Consider the zero-sum Buchi game with 𝑂1=Buchi(𝑇) and 𝑂2=coBuchi(𝑉\𝑇). We want to solve Win1(𝑂1) and Win2(𝑂2).

Definition 2.14 (Subgame arena). Given an arena 𝐺 = (𝑉,𝑉1,𝑉2,𝐸) and a set 𝑊 ⊆ 𝑉, if every vertex in 𝑊 has a successor in 𝑊, then the subgame arena at 𝑊 is defined as 𝐺↾𝑊 = (𝑊, 𝑉1∩𝑊, 𝑉2∩𝑊, 𝐸∩(𝑊×𝑊)) (keeping only edges within 𝑊).

The condition of having a successor ensures that 𝐺↾𝑊 satisfies the third requirement of an arena, namely that every vertex has at least one outgoing edge.

We denote Attr𝑖(𝑇)=Win𝑖(Reach(𝑇)).

2.5.1 Classical 𝑂(𝑛𝑚) Algorithm

Proposition 2.9. Player 2 has a winning strategy (for coBuchi(𝑉\𝑇)) starting from every vertex of 𝐶 = Attr2(𝐵), where 𝐵 = 𝑉 \ Attr1(𝑇); hence player 1 does not have a winning strategy from this set (𝐶 ⊆ Win2(𝑂2)).

Proof 2.2.

Proposition 2.10. The following algorithm finds Win2(coBuchi(𝑉\𝑇)) and Win1(Buchi(𝑇)): Mark all vertices in 𝐶 as winning for Player 2, and recursively solve the subgame at 𝑉\𝐶 with objective 𝑂1 = Buchi(𝑇\𝐶). Formally:

Algorithm 2: Classical Algorithm for Finding coBüchi Winning Set
Input: Game graph 𝐺 = (𝑉,𝐸), target set 𝑇 ⊆ 𝑉
Output: Winning set 𝑊2
  𝑇0 ← 𝑇; 𝐺0 ← 𝐺; 𝑊2 ← ∅
  for 𝑗 ← 0 to ∞ do
    𝐴𝑗 ← Attr1(𝑇𝑗, 𝐺𝑗)
    𝐵𝑗 ← 𝑉𝑗 \ 𝐴𝑗
    𝐶𝑗 ← Attr2(𝐵𝑗)
    𝑊2 ← 𝑊2 ∪ 𝐶𝑗
    𝑉𝑗+1 ← 𝑉𝑗 \ 𝐶𝑗; 𝑇𝑗+1 ← 𝑇𝑗 \ 𝐶𝑗; 𝐺𝑗+1 ← 𝐺[𝑉𝑗+1]
    if 𝐶𝑗 = ∅ then break
  return 𝑊2

Win2(coBuchi(𝑉\𝑇))=𝑊2 and Win1(Buchi(𝑇))=𝑉\𝑊2.
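The iteration above can be sketched in a few lines of Python (my own encoding again: `succ` maps vertices to successor lists, `v1` is the player-1 vertex set). The attractor is recomputed on the shrinking subgame in each round.

```python
from collections import deque

def attr(succ, is_owner, verts, S):
    """Attractor of S within `verts` for the player whose vertices satisfy is_owner."""
    pred = {v: [] for v in verts}
    out = {v: 0 for v in verts}
    for u in verts:
        for w in succ[u]:
            if w in verts:
                pred[w].append(u)
                out[u] += 1
    cnt = {v: 0 for v in verts}
    A, Q, queued = set(), deque(S), set(S)
    while Q:
        v = Q.popleft()
        A.add(v)
        for w in pred[v]:
            if w in A:
                continue
            cnt[w] += 1
            if w not in queued and (is_owner(w) or cnt[w] == out[w]):
                queued.add(w)
                Q.append(w)
    return A

def buchi_win2(succ, v1, T):
    """Win_2(coBuchi(V \\ T)); Win_1(Buchi(T)) is its complement."""
    verts, target, W2 = set(succ), set(T), set()
    while True:
        A = attr(succ, lambda v: v in v1, verts, target)   # A_j = Attr_1(T_j)
        B = verts - A                                      # trap for player 1 out of T
        C = attr(succ, lambda v: v not in v1, verts, B)    # C_j = Attr_2(B_j)
        if not C:
            return W2
        W2 |= C
        verts -= C
        target -= C

# All vertices owned by player 1; from b the play can never reach t.
arena = {"t": ["a"], "a": ["t"], "b": ["b"]}
print(buchi_win2(arena, {"t", "a", "b"}, {"t"}))
```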

Proof 2.3. We first show that, in each iteration, it is “safe” to keep the subgame at 𝑉\𝐶 only (subscript omitted), i.e., the game starting from 𝑉\𝐶 will remain in 𝑉\𝐶 if both players play optimally. (Note that this excludes the edges across 𝐶 and 𝑉\𝐶 as well). Consider any initial vertex in 𝑉\𝐶:

Then we show that player 1 has a winning strategy in 𝑉\𝑊2. Suppose the loop breaks at 𝐶ₙ = ∅, i.e., Attr2(𝐵ₙ) = ∅. Then 𝐵ₙ must be ∅, since Attr2(𝐵ₙ) ⊇ 𝐵ₙ. Thus, 𝐴ₙ = 𝑉ₙ = 𝑉\𝑊2. Then, 𝑉\𝑊2 plays two roles:

Corollary 2.1. Buchi game is determined, Win1(𝑂1)=𝑉\Win2(𝑂2) (𝑉\𝑊2):

Corollary 2.2. Both winning strategies for player 1 and player 2 are memoryless: Their strategies can be decomposed into many stages of reachability strategies, which are memoryless.

Runtime of the algorithm: 𝑂(𝑚) (attractor algorithm) for each iteration. At most 𝑛 iterations. Overall 𝑂(𝑛𝑚).

Solving 𝑂1 = coBuchi(𝑇): set 𝑂2 = Buchi(𝑉\𝑇) and solve for player 2 using the algorithm.

2.5.2 Faster 𝑂(𝑛²) Algorithm

More detailed analysis of the classical algorithm:

Proposition 2.11. The total running time for finding all 𝐶𝑗=Attr2(𝐵𝑗,𝐺𝑗) (and removing relevant vertices and edges to derive 𝐺𝑗) is 𝑂(𝑚).

Proof 2.4. In the process of finding 𝐶𝑗=Attr2(𝐵𝑗,𝐺𝑗), all the vertices in 𝐶𝑗 and their in-edges are visited.

Then it removes all edges incident on these vertices, i.e., all in-edges and out-edges.

Thus, every edge is visited 𝑂(1) times.

Notice that the only property of 𝐵 we use is that it is a trap of player 1 out of 𝑇. We took 𝐵 as the largest such trap, namely 𝑉\𝐴, but actually it can be any trap out of 𝑇.

Definition 2.15 (Subgraph Trap). Let 𝐸′ ⊆ 𝐸. A set 𝑆 ⊆ 𝑉 is an 𝐸′-trap of player 1 outside 𝑇 if

It is a set of vertices from which player 2 has a strategy to restrict the play within the set, using only edges of the subgraph 𝐸′, while playing in the entire arena. It is not the same as a trap of player 1 outside 𝑇 when playing in the subgame arena (𝑉,𝐸′).

We can use an algorithm similar to the reachability algorithm to find the maximum 𝐸-trap:

Algorithm 3: Linear Algorithm for Finding Subgraph Trap
Input: Game graph 𝐺 = (𝑉,𝐸), subgraph 𝐺′ = (𝑉,𝐸′), and target set 𝑇 ⊆ 𝑉
Output: Trap set 𝑆
  𝑄 ← 𝑇 ∪ {𝑣 ∈ 𝑉1 ∣ ∃(𝑣,𝑤) ∈ 𝐸 \ 𝐸′}      // change
  𝐴 ← ∅
  SuccInA_E′[𝑣] ← 0 for all 𝑣 ∈ 𝑉
  while 𝑄 ≠ ∅ do
    pop one vertex 𝑣 from 𝑄
    𝐴 ← 𝐴 ∪ {𝑣}
    for 𝑤 ∈ Pred(𝑣, 𝐸′) \ 𝐴 do             // change
      SuccInA_E′[𝑤] ← SuccInA_E′[𝑤] + 1
      if 𝑤 ∈ 𝑉1 then
        𝑄 ← 𝑄 ∪ {𝑤}
      if 𝑤 ∈ 𝑉2 and SuccInA_E′[𝑤] = outdeg(𝑤, 𝐸′) then
        𝑄 ← 𝑄 ∪ {𝑤}
  𝑆 ← 𝑉 \ 𝐴
  return 𝑆

It is basically the complement of the “𝐸′-attractor” to the union of 𝑇 and the player-1 vertices which have an outgoing edge not in 𝐸′. Again, we cannot simply apply the previous algorithm to the subgame (𝑉,𝐸′).

Hierarchical Decomposition:

Coloring Vertices:

Proposition 2.12. Let 𝑋𝑖 be the set of vertices that are

Then 𝑋𝑖 is the set of all vertices from which player 1 has a strategy to leave 𝐺𝑖 (i.e., to visit an edge not in 𝐸𝑖).

Proof 2.5.

Proposition 2.13. The following algorithm finds a trap of player 1 outside of 𝑇. In addition, let 𝑘 be the size of the found trap, then the algorithm takes 𝑂(𝑛𝑘) time:

Algorithm 4: Algorithm for Finding Trap
Input: Game graph 𝐺 = (𝑉,𝐸), target set 𝑇 ⊆ 𝑉
Output: Trap 𝑆𝑖 for player 1, or report no trap found
  for 𝑖 ← 0 to log 𝑛 do
    try to find an 𝐸𝑖-trap 𝑆𝑖 for player 1 out of 𝑇
    if such an 𝑆𝑖 is found then
      output 𝑆𝑖 as the trap
      break
  if no trap has been found then
    report that no trap is found

Proof 2.6 (Correctness). By definition, an 𝐸𝑖-trap of player 1 outside 𝑇 is in particular an 𝐸-trap of player 1 outside 𝑇, i.e., a genuine trap.

Proof 2.7 (Running Time). Suppose we find a trap 𝑆𝑖 in the 𝑖-th iteration. We claim |𝑆𝑖| ≥ 2^(𝑖−1). We prove this by casework on the color of player-1 vertices in the previous graph:

In summary, the algorithm finds a trap of size ≥ 2^(𝑖−1) in 𝑂(2^𝑖 𝑛) time. In other words, it can find a trap of size 𝑘 in 𝑂(𝑛𝑘) time (𝑘 is the size of the actual trap it found).

The faster algorithm:

Algorithm 5: Faster Algorithm for Finding coBüchi Winning Set
Input: Game graph 𝐺 = (𝑉,𝐸), target set 𝑇 ⊆ 𝑉
Output: Winning set 𝑊2
  𝑇0 ← 𝑇; 𝐺0 ← 𝐺; 𝑊2 ← ∅
  for 𝑗 ← 0 to ∞ do
    𝐴𝑗 ← Attr1(𝑇𝑗, 𝐺𝑗)
    𝐵𝑗 ← a trap of size 𝑘 computed by the hierarchical decomposition algorithm
    𝐶𝑗 ← Attr2(𝐵𝑗)                          // as in Algorithm 2
    𝑊2 ← 𝑊2 ∪ 𝐶𝑗
    𝑉𝑗+1 ← 𝑉𝑗 \ 𝐶𝑗; 𝑇𝑗+1 ← 𝑇𝑗 \ 𝐶𝑗; 𝐺𝑗+1 ← 𝐺[𝑉𝑗+1]
    if 𝐶𝑗 = ∅ then break
  return 𝑊2

Running time: Since 𝐶𝑗 ⊇ 𝐵𝑗 is removed from 𝑉𝑗, all the 𝑘’s add up to 𝑂(𝑛), so the overall running time is 𝑂(𝑛² + 𝑚) = 𝑂(𝑛²).

2.6 Solving Parity Games

Consider a zero-sum game with 𝑂1=parity(𝑝) and 𝑂2=parity(𝑝+1), where 𝑝:𝑉{0,1,,𝑑}.

W.l.o.g., we can assume that

Proposition 2.14. Parity game is determined and both players have memoryless winning strategies.

Proof 2.8. We prove by induction on the number of priorities 𝑑.

Base cases:

Induction step: Assume any parity game with 𝑑 priorities {0,1,…,𝑑−1} is memoryless and determined. We want to show that any game with 𝑑+1 priorities {0,1,…,𝑑} is memoryless and determined, by eliminating priority 0 in the current game.

1.
Firstly, we find Win2(𝑂2):
  • Let 𝑃0 = {𝑣 ∈ 𝑉 | 𝑝(𝑣) = 0} ⊆ 𝑉.
  • Let 𝐴=Attr1(𝑃0) and 𝐵=𝑉\𝐴 (trap of player 1 out of 𝑃0).
  • 𝐵 has a subgame and the subgame does not have any priority 0 vertex. By induction, the subgame with the same objectives is determined and memoryless. Let 𝐵1,𝐵2 be the winning sets of player 1 and 2, respectively, in the subgame at 𝐵.
  • Player 2 has a winning strategy in the original game from 𝐵2: Play the strategy that traps the game in 𝐵 and the same winning strategy as in the subgame.
  • Player 2 has a winning strategy in the original game from 𝐶 = Attr2(𝐵2): First play the strategy that reaches 𝐵2 in finitely many steps, then play the above strategy. The finitely many steps do not affect the parity (see Tail Objective).
  • The winning strategies above are memoryless.
  • Mark all vertices in 𝐶 as winning starting vertices for player 2. And continue recursively on the subgame at 𝑉\𝐶. The argument is exactly the same as Buchi games.
2.
Secondly, we show that Win1(𝑂1)=𝑉\Win2(𝑂2).
  • The process continues until 𝐶 = ∅, 𝐵2 = ∅, 𝐵 = 𝐵1. We use the same notation for 𝑃0, 𝐴. Consider this strategy for player 1: if the current state is in 𝐵, play the winning strategy from the subgame; if the current state is in 𝐴, play the winning strategy for the objective Reach(𝑃0). Then no matter what strategy player 2 plays, player 1 wins:

    • By definition, if player 2 keeps the game in 𝐵 = 𝐵1, player 2 will lose and player 1 will win (determinacy in 𝐵).
    • Since the objective is a tail objective, if player 2 keeps the game in 𝐵 after finitely many steps, player 2 will still lose and player 1 will win (determinacy in 𝐵).
    • Otherwise, the game enters 𝐴 infinitely many times. Every time the game is in 𝐴, player 1 plays the strategy for the objective Reach(𝑃0). Thus, player 1 reaches priority 0 infinitely many times, and still wins.
3.
This shows that the game is determined and memoryless.

Running time:

3 Automata on Finite Words

Definition 3.1 (Deterministic Finite Automaton, DFA). A Deterministic Finite Automaton (DFA) is a tuple 𝐴=(Σ,𝑄,𝑞0,𝛿,𝑇) in which:

A DFA operates on finite words as follows: It starts from the initial state. Each symbol of the word is read sequentially. If the current state and symbol are 𝑞 and 𝑎 respectively, it transits to 𝛿(𝑞,𝑎).

A word is accepted if the (unique) run with that word ends in an accepting state. (P.S. if the transition corresponding to the input symbol is not available at the current state, then it directly leads to a rejection.)

The language of the automaton is the set of accepted words.

P.S., we denote the set of all finite words consisting of letters in Σ by Σ∗.

Definition 3.2 (Nondeterministic Finite Automaton, NFA). A Nondeterministic Finite Automaton NFA is a tuple 𝐴=(Σ,𝑄,𝑄0,𝛿,𝑇) in which:

A word is accepted if there exists a run over the word ending in an accepting state.

3.1 Basic Closure Properties

Proposition 3.1. Any NFA has an NFA complement.

4 Automata on Infinite Words (𝜔-automata)

We denote the set of all infinite words consisting of letters in Σ by Σ𝜔.

Definition 4.1 (Deterministic Automaton of Infinite Words). A Deterministic Automaton on Infinite words is a tuple 𝐴=(Σ,𝑄,𝑞0,𝛿,𝑇,𝐴𝑐𝑐) in which:

Definition 4.2 (Nondeterministic Automaton on Infinite Words). A Nondeterministic Automaton on Infinite words is a tuple 𝐴=(Σ,𝑄,𝑄0,𝛿,𝑇) in which:

The Automata on Infinite words with the acceptance condition being Buchi or Parity are called Buchi Automata and Parity Automata, resp.

4.1 Buchi Automata

Definition 4.3 (Buchi Automaton). A Buchi Automaton is an Automaton on Infinite words with Buchi acceptance condition: Let 𝑇𝑄 be the target set, a run 𝜚 is accepting if it visits T infinitely many times, i.e. inf(𝜚)𝑇.

We abbreviate Deterministic Buchi Automaton to DBW, and the nondeterministic one to NBW.

Definition 4.4 (𝜔-regular Language). We call a language 𝜔-regular if it is the language of some NBW.

Proposition 4.1. NBW are strictly more expressive than DBW:

Proof 4.1. The second statement is trivial: we can turn a DBW into an NBW by making the initial state and the outputs of the transition function singleton sets.

The first statement is proved by contradiction: Consider this NBW which accepts all words with finitely many zeros (grey nodes are target states):

[Figure: an NBW accepting all words with finitely many zeros]

Suppose 𝐴 is a DBW with the same language. Let 𝑛 be the number of states in 𝐴 (finite). Consider the following accepting infinite words:

By the Pigeonhole principle, two among these accepting states, say 𝑞𝑖𝑗 and 𝑞𝑖𝑘, must be the same. Let the prefix of the last word before 𝑞𝑖𝑗 be 𝑝, and the segment between 𝑞𝑖𝑗 and 𝑞𝑖𝑘 (excluded) be 𝑟. Starting from 𝑞𝑖𝑗, the run of 𝐴 returns to this state after reading 𝑟. Then 𝑝·𝑟^𝜔, the concatenation of 𝑝 and infinitely many repetitions of 𝑟, is also accepted, because its run visits accepting states infinitely many times.

However, 𝑝·𝑟^𝜔 contains infinitely many zeros. This contradicts the assumption. Thus, a DBW with the same language does not exist.

4.1.1 Complementation of DBW

In the following, we will call languages that can be recognized by some DBW “DBW languages”.

Proposition 4.2. DBW languages are not closed under complementation: there is a DBW such that no DBW recognizes the complement of its language.

Proof 4.2. Consider the example in the last proof: We showed that there is no DBW of the language consisting of words with finitely many zeros. However, there exists a DBW of the language consisting of all words with infinitely many zeros:

[Figure: a DBW accepting all words with infinitely many zeros]

4.1.2 Union of DBWs

Proposition 4.3. DBW languages are closed under union. For any two DBWs, there is another DBW that recognizes the union of the languages of the two DBWs.

Proof 4.3. Let 𝐿1,𝐿2 be any two languages. Suppose 𝐿1=𝐿(𝐴1),𝐿2=𝐿(𝐴2), where 𝐴1=(Σ,𝑄1,𝑞01,𝛿1,𝑇1) and 𝐴2=(Σ,𝑄2,𝑞02,𝛿2,𝑇2).

Consider the automaton that simulates the transitions of 𝐴1,𝐴2 at the same time: 𝐴 = (Σ, 𝑄1×𝑄2, (𝑞01,𝑞02), 𝛿, (𝑇1×𝑄2)∪(𝑄1×𝑇2)), where 𝛿((𝑞,𝑞′),𝑎) = (𝛿1(𝑞,𝑎), 𝛿2(𝑞′,𝑎)). Each state of 𝐴 is a pair of states of 𝐴1,𝐴2.

Then 𝐴 accepts a word iff the run with the word visits (𝑇1×𝑄2)(𝑄1×𝑇2) infinitely many times, which is equivalent to visiting 𝑇1 infinitely often, or visiting 𝑇2 infinitely often, or visiting both infinitely often. This means that the word is in 𝐿1𝐿2. Thus, 𝐿(𝐴)=𝐿1𝐿2.

4.1.3 Intersection of DBWs

Proposition 4.4. DBW languages are closed under intersection. For any two DBWs, there is another DBW that recognizes the intersection of the languages of the two DBWs.

Proof 4.4. Let 𝐴1,𝐴2 be two automata and 𝐿1=𝐿(𝐴1),𝐿2=𝐿(𝐴2). The tuples of 𝐴1,𝐴2 use the same notation as above.

𝐿1𝐿2 consists of the words that visit both target sets infinitely many times, which is equivalent to alternating between both target sets infinitely many times (before switching to the other target set, can visit the same target set multiple times).

Consider two copies of the “joint” automata 𝐴1×𝐴2 𝐴=(Σ,𝑄1×𝑄2×{𝐿,𝑅},(𝑞01,𝑞02)×{𝐿},𝛿,𝑇). We want to modify the transitions such that once the run reaches a target state in one copy, it switches to the other copt and continue the transition, so that it cannot visits target states in the same copy consecutively. We define the target set to be 𝑇=𝑄1×𝑇2×{𝑅} and 𝛿 by:

𝛿(𝑞1,𝑞2,𝐿,𝑎)=(𝛿1(𝑞1,𝑎),𝛿2(𝑞2,𝑎),𝐿) if 𝑞1∉𝑇1, and (𝛿1(𝑞1,𝑎),𝛿2(𝑞2,𝑎),𝑅) if 𝑞1∈𝑇1; 𝛿(𝑞1,𝑞2,𝑅,𝑎)=(𝛿1(𝑞1,𝑎),𝛿2(𝑞2,𝑎),𝑅) if 𝑞2∉𝑇2, and (𝛿1(𝑞1,𝑎),𝛿2(𝑞2,𝑎),𝐿) if 𝑞2∈𝑇2.

A run in 𝐴 visits 𝑄1×𝑇2×{𝑅} infinitely often iff the underlying runs visit 𝑇1 and 𝑇2 infinitely often: Between two consecutive visits to 𝑄1×𝑇2×{𝑅}, the run switches to copy 𝐿 and must pass through 𝑇1×𝑄2×{𝐿} in order to switch back to copy 𝑅; conversely, if both 𝑇1 and 𝑇2 are visited infinitely often, the run keeps switching copies and hence visits 𝑄1×𝑇2×{𝑅} infinitely often.
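The copy-switching construction can likewise be sketched in Python. This is a minimal illustration with our own encoding of a DBW as (states, initial state, transition map, target set); the names are not from a library.

```python
# Minimal sketch of the L/R copy-switching construction for DBW
# intersection. A DBW is encoded as (states, initial, delta, targets),
# with a total transition map delta: (state, symbol) -> state.

def dbw_intersection(A1, A2, alphabet):
    """Build a DBW accepting L(A1) ∩ L(A2)."""
    Q1, q01, d1, T1 = A1
    Q2, q02, d2, T2 = A2
    states = {(p, q, s) for p in Q1 for q in Q2 for s in ('L', 'R')}
    delta = {}
    for (p, q, s) in states:
        for a in alphabet:
            # Switch copies exactly when the currently tracked
            # target set is visited.
            if s == 'L':
                s2 = 'R' if p in T1 else 'L'
            else:
                s2 = 'L' if q in T2 else 'R'
            delta[((p, q, s), a)] = (d1[(p, a)], d2[(q, a)], s2)
    # Accept on visiting T2 while in the R copy.
    targets = {(p, q, s) for (p, q, s) in states if q in T2 and s == 'R'}
    return states, (q01, q02, 'L'), delta, targets
```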

4.1.4 Union of NBWs.

Proposition 4.5. 𝜔-regular languages are closed under union. For any two NBWs, there is an NBW that recognizes the union of the languages of the two NBWs.

Proof 4.5. Let 𝐴1=(Σ,𝑄1,𝑄01,𝛿1,𝑇1) and 𝐴2=(Σ,𝑄2,𝑄02,𝛿2,𝑇2) and let 𝐿1,𝐿2 be their languages, resp. We define 𝐴=(Σ,𝑄1×{1}∪𝑄2×{2},𝑄01×{1}∪𝑄02×{2},𝛿1∪𝛿2,𝑇1×{1}∪𝑇2×{2}), where (𝛿1∪𝛿2)((𝑞,1),𝑎)=𝛿1(𝑞,𝑎)×{1} and (𝛿1∪𝛿2)((𝑞,2),𝑎)=𝛿2(𝑞,𝑎)×{2}.

Basically, we keep two isolated components in 𝐴. Then every accepting run of 𝐴𝑖 corresponds to an accepting run of 𝐴 starting from 𝑄0𝑖×{𝑖}, and vice versa.

P.S.: Since a DBW can have only one initial state, we cannot use this construction to prove closure of DBWs under union.

4.1.5 Intersection of NBWs

Proposition 4.6. 𝜔-regular languages are closed under intersection. For any two NBWs, there is an NBW that recognizes the intersection of the languages of the two NBWs.

Proof 4.6. The same proof as for DBWs works. The initial set becomes 𝑄01×𝑄02×{𝐿}. The definition of 𝛿 becomes 𝛿(𝑞1,𝑞2,𝐿,𝑎)=𝛿1(𝑞1,𝑎)×𝛿2(𝑞2,𝑎)×{𝐿} if 𝑞1∉𝑇1, etc.

4.1.6 NBW Complement of DBW

Proposition 4.7. For any DBW, there is an NBW that recognizes the complement of the language of the DBW.

Proof 4.7. Let 𝐴=(Σ,𝑄,𝑞0,𝛿,𝑇) be a DBW. Intuitively, in a complement NBW 𝐵 with (roughly) the same set of states as 𝐴, an accepting run should never see 𝑇 again after finitely many steps.

We define 𝐵=(Σ,𝑄′,𝑄′0,𝛿′,𝑇′) as follows: 𝑄′=(𝑄×{1})∪((𝑄\𝑇)×{2}), i.e., 𝑄 plus an extra copy of 𝑄\𝑇; 𝑄′0={(𝑞0,1)}; 𝑇′=(𝑄\𝑇)×{2}; 𝛿′(𝑞,1,𝑎)=({𝛿(𝑞,𝑎)}×{1,2})∩𝑄′ and 𝛿′(𝑞,2,𝑎)=({𝛿(𝑞,𝑎)}×{2})∩𝑇′.

The words accepted by 𝐵 are not accepted by 𝐴: An accepting run of 𝐵 visits 𝑇′ infinitely many times, and once it enters the second copy it can never visit a state of 𝑇 again; hence the unique run of 𝐴 on the same word visits 𝑇 only finitely often.

Conversely, the words accepted by 𝐴 are not accepted by 𝐵: Their runs visit 𝑇 infinitely often, so any run of 𝐵 that enters the second copy (the only way to visit 𝑇′ at all) gets stuck at the next visit to 𝑇.

Remark: This method is not applicable for finding an NBW complement of an NBW. In the nondeterministic setting, a word accepted by the original automaton may have one run that visits 𝑇 infinitely often and another run that eventually stops seeing 𝑇, so the construction would wrongly accept it.
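A minimal Python sketch of this two-copy construction follows, with an illustrative encoding: the DBW transition map is total, and the resulting NBW transition map returns sets of successors. The names are ours, not a library API.

```python
# Minimal sketch of the NBW that complements a DBW: copy 1 mirrors A,
# copy 2 contains only Q \ T, and an accepting run must eventually
# jump to copy 2 and stay there, never seeing T again.

def complement_dbw(Q, q0, delta, T, alphabet):
    """Build an NBW (delta maps to successor sets) for the complement."""
    states = {(q, 1) for q in Q} | {(q, 2) for q in Q if q not in T}
    d = {}
    for q in Q:
        for a in alphabet:
            succ = delta[(q, a)]
            # In copy 1 we may nondeterministically guess that T is
            # never seen again and move to copy 2.
            d[((q, 1), a)] = {(succ, 1)} | ({(succ, 2)} if succ not in T else set())
            if q not in T:
                # Copy 2 has no transitions into T: such runs die.
                d[((q, 2), a)] = {(succ, 2)} if succ not in T else set()
    targets = {(q, 2) for q in Q if q not in T}
    return states, {(q0, 1)}, d, targets
```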

4.1.7 Emptiness of DBW/NBW

Given a DBW/NBW 𝐴 with 𝑛 states, decide whether 𝐿(𝐴)=∅.

𝐿(𝐴)≠∅ iff there exists an accepting run of 𝐴 on some infinite word: Any valid run of 𝐴 corresponds to a unique word, so the problem is equivalent to deciding whether 𝐴 has an accepting run.

Proposition 4.8. We can find a corresponding Buchi game for the emptiness problem: Let 𝑇 be the set of accepting states in 𝐴. We call the graph (𝑉,𝐸) defined below the underlying graph of 𝐴: 𝑉=𝑄, and (𝑞1,𝑞2)∈𝐸 iff ∃𝑎∈Σ with 𝑞2∈𝛿(𝑞1,𝑎) (since we only care about the existence of an accepting run, not the letters along it).

Define a single-player game (𝑉,𝑉1,𝑉2,𝐸), where 𝑉1=𝑉 and 𝑉2=∅, with objective 𝐵𝑢𝑐ℎ𝑖(𝑇). Then 𝐿(𝐴)≠∅ iff there exists an outcome, starting from an initial vertex, that is in 𝐵𝑢𝑐ℎ𝑖(𝑇), which is equivalent to 𝑄0∩𝑊𝑖𝑛1(𝐵𝑢𝑐ℎ𝑖(𝑇))≠∅.

Proof 4.8. A run corresponds to an outcome, and an accepting run, which visits 𝑇 infinitely often, corresponds to a winning outcome starting from an initial vertex in the game.

Since the outcome is solely controlled by player 1, the condition is equivalent to 𝑄0∩𝑊𝑖𝑛1(𝐵𝑢𝑐ℎ𝑖(𝑇))≠∅.

Thus, this is solvable using the algorithm for finding the winning set of a Buchi game in 𝑂(𝑛2) or 𝑂(𝑛𝑚) time.

1.
Linear-time algorithm

Proposition 4.9. Consider the underlying game graph. Since there are finitely many vertices, a run visits 𝑇 infinitely often iff it visits some particular vertex in 𝑇 infinitely often. Furthermore, this is equivalent to: (1) there is a cycle containing a vertex in 𝑇, and (2) the cycle is reachable from the initial vertices.

Consider the strongly connected components of the graph. Condition (1) is equivalent to an accepting state being contained in some non-trivial SCC, i.e., an SCC that is not a single vertex without a self-loop: it either is a single vertex with a self-loop or has at least two vertices.

The algorithm:

(a)
Find all reachable vertices: Linear time.
(b)
Find all SCCs: Linear time (Tarjan's algorithm).
(c)
Check whether some non-trivial SCC is reachable and contains an accepting state: Linear time.

Overall, it is solvable in linear time 𝑂(𝑛+𝑚).
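The three steps can be sketched in one pass over the graph. The following is a minimal illustration (the adjacency-list encoding is ours) using Tarjan's SCC algorithm:

```python
# Minimal sketch of the linear-time emptiness check: L(A) is non-empty
# iff some non-trivial SCC containing an accepting state is reachable
# from an initial state. graph: {vertex: [successors]}.

def nonempty(graph, initial, accepting):
    # (a) Vertices reachable from the initial set (iterative DFS).
    reach, stack = set(initial), list(initial)
    while stack:
        v = stack.pop()
        for w in graph.get(v, []):
            if w not in reach:
                reach.add(w)
                stack.append(w)
    # (b) Tarjan's SCC algorithm, restricted to the reachable part.
    index, low, onstk, st, sccs, cnt = {}, {}, set(), [], [], [0]
    def dfs(v):
        index[v] = low[v] = cnt[0]
        cnt[0] += 1
        st.append(v)
        onstk.add(v)
        for w in graph.get(v, []):
            if w not in reach:
                continue
            if w not in index:
                dfs(w)
                low[v] = min(low[v], low[w])
            elif w in onstk:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            scc = set()
            while True:
                w = st.pop()
                onstk.discard(w)
                scc.add(w)
                if w == v:
                    break
            sccs.append(scc)
    for v in reach:
        if v not in index:
            dfs(v)
    # (c) Look for a non-trivial SCC containing an accepting state.
    for scc in sccs:
        v = next(iter(scc))
        trivial = len(scc) == 1 and v not in graph.get(v, [])
        if not trivial and scc & set(accepting):
            return True
    return False
```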

4.1.8 Universality Problem of DBW

The dual of emptiness.

Given a DBW 𝐴, decide whether 𝐿(𝐴)=Σ𝜔.

For a DBW, every word has a unique run and vice versa. Thus, 𝐿(𝐴)=Σ𝜔 iff every run of 𝐴 is accepting.

Proposition 4.10. We have a corresponding coBuchi game for the universality problem: Let 𝑇 be the set of accepting states in 𝐴. Define a game on the underlying graph of 𝐴 as in the emptiness problem, with objective 𝑐𝑜𝐵𝑢𝑐ℎ𝑖(𝑇). Then every run is accepting iff every path starting from an initial vertex is in 𝐵𝑢𝑐ℎ𝑖(𝑇), i.e., 𝑄0∩𝑊𝑖𝑛1(𝑐𝑜𝐵𝑢𝑐ℎ𝑖(𝑇))=∅.

Thus, the problem is solvable using the algorithm for finding the winning set of a coBuchi game in 𝑂(𝑛𝑚) or 𝑂(𝑛2) time.

1.
Linear-time algorithm

Proposition 4.11. 𝐿(𝐴)=Σ𝜔 iff every reachable cycle in the graph contains an accepting state.

Proof 4.9. Otherwise, there is a path that repeats a cycle containing no accepting state infinitely many times, and hence visits 𝑇 only finitely often.

Every reachable cycle contains an accepting state iff, after the removal of all accepting vertices, no cycle remains: This can be checked easily.

The algorithm:

(a)
Remove all the unreachable vertices: Linear time.
(b)
Remove all the accepting vertices: Linear time.
(c)
Check whether there is a cycle in the remaining graph: Linear time.

Overall, it is solvable in linear time 𝑂(𝑛+𝑚).
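The algorithm can be sketched as follows. This is a minimal illustration; the adjacency-list encoding is ours, and the cycle check uses Kahn's topological sort.

```python
# Minimal sketch of the linear-time DBW universality check: the
# language is the set of all infinite words iff every reachable cycle
# contains an accepting state, i.e. the reachable graph minus the
# accepting vertices is acyclic. graph: {vertex: [successors]}.

def universal(graph, initial, accepting):
    # (a) Restrict to vertices reachable from the initial set.
    reach, stack = set(initial), list(initial)
    while stack:
        v = stack.pop()
        for w in graph.get(v, []):
            if w not in reach:
                reach.add(w)
                stack.append(w)
    # (b) Remove accepting vertices; (c) cycle-check what remains
    # via Kahn's topological sort.
    rest = reach - set(accepting)
    indeg = {v: 0 for v in rest}
    for v in rest:
        for w in graph.get(v, []):
            if w in rest:
                indeg[w] += 1
    queue = [v for v in rest if indeg[v] == 0]
    seen = 0
    while queue:
        v = queue.pop()
        seen += 1
        for w in graph.get(v, []):
            if w in rest:
                indeg[w] -= 1
                if indeg[w] == 0:
                    queue.append(w)
    return seen == len(rest)  # acyclic iff every vertex was popped
```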

4.1.9 Universality Problem of NBW

Proposition 4.12. Non-universality of NBW is NP-hard.

Definition 4.5 (3-coloring Problem). Given an (undirected) graph 𝐺=(𝑉,𝐸), is it possible to color the vertices of 𝐺 with 3 colors such that no two adjacent vertices have the same color?

Proof 4.10. We provide a reduction from 3-coloring, which is a classic NP-complete problem.

Given an (undirected) graph 𝐺=(𝑉,𝐸), let Σ=𝑉×{𝑟,𝑔,𝑏}. We create the following NBWs:

1.
The first NBW 𝐴 is defined to accept all words that, when ignoring the colors, are not valid paths in 𝐺. That is, a word (𝑣1,𝑐1),(𝑣2,𝑐2),(𝑣3,𝑐3),… is accepted iff 𝜌=𝑣1,𝑣2,𝑣3,… is an invalid path in 𝐺.

𝐴=(Σ,𝑉∪{⊥,𝑞0},{𝑞0},𝛿,{⊥}) with Buchi condition, where:

𝛿(𝑞0,(𝑣,𝑐))={𝑣};  𝛿(𝑢,(𝑣,𝑐))={𝑣} if (𝑢,𝑣)∈𝐸, and {⊥} if (𝑢,𝑣)∉𝐸;  𝛿(⊥,(𝑣,𝑐))={⊥}.

A word is accepted by 𝐴 iff its run ever takes a transition 𝛿(𝑢,(𝑣,𝑐)) with (𝑢,𝑣)∉𝐸 and moves to ⊥. Thus, the path associated with every accepted word contains an edge that is not in 𝐸, and hence is invalid.

2.
The second NBW 𝐵 is defined to accept all words in which there is a vertex that appears with two different colors.

𝐵=(Σ,𝑆𝑎∪𝑆𝑏∪{𝑞0},{𝑞0},𝛿,𝑆𝑏) with Buchi condition, where 𝑆𝑎={𝑎𝑣,𝑐1,𝑐2|𝑣∈𝑉, 𝑐1,𝑐2∈{𝑟,𝑔,𝑏}, 𝑐1≠𝑐2}, 𝑆𝑏={𝑏𝑣,𝑐1,𝑐2|𝑣∈𝑉, 𝑐1,𝑐2∈{𝑟,𝑔,𝑏}, 𝑐1≠𝑐2}, and

𝑞0∈𝛿(𝑞0,∗), and for all 𝑣∈𝑉 and 𝑐1,𝑐2∈{𝑟,𝑔,𝑏} with 𝑐1≠𝑐2: 𝑎𝑣,𝑐1,𝑐2∈𝛿(𝑞0,(𝑣,𝑐1)), 𝑎𝑣,𝑐1,𝑐2∈𝛿(𝑎𝑣,𝑐1,𝑐2,∗), 𝑏𝑣,𝑐1,𝑐2∈𝛿(𝑎𝑣,𝑐1,𝑐2,(𝑣,𝑐2)), 𝑏𝑣,𝑐1,𝑐2∈𝛿(𝑏𝑣,𝑐1,𝑐2,∗), where ∗ stands for an arbitrary letter.

An accepted word is of the form ⋯(𝑣,𝑐1)⋯(𝑣,𝑐2)⋯ with 𝑐1≠𝑐2.

[Figure: the NBW 𝐵 detecting a vertex that appears with two different colors.]
3.
The third NBW 𝐶 is defined to accept all words that do not contain all edges of the graph. It is rather complicated to define 𝐶 by an expression, so we only show a subpart of it: The initial set 𝑄0 includes 𝑞𝑢,𝑣 for all undirected edges (𝑢,𝑣)∈𝐸. The subpart for an edge (𝑢,𝑣)∈𝐸 is:
[Figure: the subpart of 𝐶 for the edge (𝑢,𝑣).]

which accepts all the words that miss at least the edge (𝑢,𝑣) (in either direction).

4.
The fourth NBW 𝐷 is defined to accept all the words that have the same color twice in a row.
[Figure: the NBW 𝐷 detecting two consecutive occurrences of the same color.]
5.
The NBW 𝐸=𝐴∪𝐵∪𝐶∪𝐷 (so 𝐿(𝐸)=𝐿(𝐴)∪𝐿(𝐵)∪𝐿(𝐶)∪𝐿(𝐷)).

We claim that 𝐸 is universal iff 𝐺 is not 3-colorable, i.e., 𝐸 is not universal iff 𝐺 is 3-colorable.

Only if side: 𝐸 rejects a word iff the word (a) is a valid path in 𝐺, (b) assigns a unique color to each vertex, (c) visits all edges of 𝐺, and (d) never uses the same color twice in a row. Such a word represents a valid 3-coloring of the graph.

If side: If 𝐺 has a valid 3-coloring, then any infinite path that visits all edges, labeled with the colors of its vertices, yields an infinite word that is not accepted by 𝐸.

Corollary 4.1. Unless P=NP, there is no polynomial-time complementation of NBW.

Otherwise, we could run this polynomial-time complementation and apply the (polynomial-time) emptiness algorithm to the complement, solving universality in polynomial time.

4.1.10 Complementation NBW

Given an NBW 𝐴, find an NBW 𝐵 such that 𝐿(𝐵)=Σ𝜔\𝐿(𝐴).

Proposition 4.13. If we color each edge of the complete graph on 6 vertices with red or blue, then there must be a monochrome triangle.

Proof 4.11. Vertex 1 has five incident edges, so at least three of them share a color, say blue. Consider three vertices connected with 1 by blue edges. There are two cases: (1) At least two of them are connected by a blue edge, in which case a blue triangle is formed together with vertex 1. (2) All of them are pairwise connected by red edges, in which case a red triangle is formed.
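This finite Ramsey claim is small enough to verify exhaustively; the following sketch checks all 2^15 red/blue colorings of the edges of K6 (the encoding and names are ours):

```python
# Brute-force check of the claim: every red/blue coloring of the
# edges of the complete graph K6 contains a monochrome triangle.
from itertools import combinations, product

edges = list(combinations(range(6), 2))      # the 15 edges of K6
triangles = list(combinations(range(6), 3))  # the 20 triangles of K6

def has_mono_triangle(coloring):
    col = dict(zip(edges, coloring))
    return any(col[(a, b)] == col[(a, c)] == col[(b, c)]
               for (a, b, c) in triangles)

# Every one of the 2^15 colorings has a monochrome triangle.
assert all(has_mono_triangle(c) for c in product('RB', repeat=15))
```

The analogous check on K5 fails (coloring the pentagon blue and the pentagram red avoids monochrome triangles), so 6 vertices are necessary.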

Theorem 4.1 (Ramsey Theorem for Infinite Graphs). Consider a complete infinite graph 𝐺=(𝑉,𝐸) where every edge is colored by one of 𝑐 colors. There is an infinite monochrome clique (complete subgraph induced by some vertices) in 𝐺.

Proof 4.12. Let 𝑎1∈𝑉 be an arbitrary vertex. Since there are finitely many colors, there is a color 𝑐1 such that 𝑎1 has infinitely many neighbours joined to it by edges of color 𝑐1. Let the set of these neighbours be 𝑉1.

Consider the infinite complete graph 𝐺[𝑉1] induced by 𝑉1. For the same reason, there exist a vertex 𝑎2∈𝑉1, a color 𝑐2, and an infinite set 𝑉2⊆𝑉1 with the same properties. Continuing this process forever, we get three infinite sequences 𝑎1,𝑎2,…, 𝑐1,𝑐2,…, and 𝑉1⊇𝑉2⊇⋯.

Thus, there is a color 𝑐 that appears infinitely many times in 𝑐1,𝑐2,…. Let 𝑉′={𝑎𝑖|𝑐𝑖=𝑐}. For 𝑖<𝑗 with 𝑎𝑖,𝑎𝑗∈𝑉′, we have 𝑎𝑗∈𝑉𝑖, so the edge {𝑎𝑖,𝑎𝑗} has color 𝑐𝑖=𝑐; hence 𝐺[𝑉′] is an infinite monochrome clique.

Definition 4.6 (Equivalence class). Given an NBW 𝐴, we define an equivalence relation ∼ over all non-empty finite words as follows: For 𝑣,𝑤∈Σ+, we say 𝑣∼𝑤 if, for all states 𝑞,𝑞′, 𝑞′ is reachable from 𝑞 using 𝑣 iff it is reachable using 𝑤, and 𝑞′ is target-reachable (reachable via a run that visits 𝑇) from 𝑞 using 𝑣 iff it is target-reachable using 𝑤.

Define a function 𝑓𝑣:𝑄→2𝑄×2𝑄 for each word 𝑣 that maps each 𝑞 to the pair ({𝑞′|𝑞′ is reachable from 𝑞 using 𝑣},{𝑞′|𝑞′ is target-reachable from 𝑞 using 𝑣}). Then 𝑣∼𝑤 iff 𝑓𝑣≡𝑓𝑤. Let 𝐿𝑓 be the equivalence class of a function 𝑓, i.e., {𝑣|𝑓𝑣≡𝑓}. Since there are finitely many such functions, we have finitely many equivalence classes.
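The function 𝑓𝑣 is directly computable. Here is a minimal Python sketch, with an illustrative NBW encoding in which delta maps (state, symbol) to a set of successors; we adopt the convention that every state on the run, endpoints included, counts as visited:

```python
# Minimal sketch of f_v: for each state q, the set of states reachable
# from q by reading the word v, and the subset reachable by a run that
# also touches the target set T along the way.

def f(word, Q, delta, T):
    """delta: {(state, symbol): set of successors}. Two words v, w are
    equivalent iff f(v, ...) == f(w, ...)."""
    summary = {}
    for q in Q:
        # Track pairs (state, whether T was visited so far); the
        # starting state itself counts as visited if it lies in T.
        frontier = {(q, q in T)}
        for a in word:
            frontier = {(r, seen or r in T)
                        for (p, seen) in frontier
                        for r in delta.get((p, a), ())}
        summary[q] = (frozenset(p for (p, _) in frontier),
                      frozenset(p for (p, seen) in frontier if seen))
    return summary
```

On a two-state DBW for "infinitely many 0s", for instance, 𝑓 identifies the words 0 and 00 but separates 0 from 1, so 0∼00 while 0≁1.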

Proposition 4.14. Every 𝐿𝑓 is regular (recognized by some NFA).

Definition 4.7. For each such 𝑓, we can find an NFA that recognizes exactly 𝐿𝑓.

Proposition 4.15. For every infinite word 𝑤∈Σ𝜔, there is a decomposition 𝑤∈𝐿𝑓(𝐿𝑔)𝜔, where 𝐿𝑓,𝐿𝑔 are two equivalence classes of finite words. That is, 𝑤 must be the concatenation of some finite word in 𝐿𝑓 and infinitely many finite words in 𝐿𝑔.

Proof 4.13. Let 𝑤=𝑤1𝑤2⋯ be any infinite word, where each 𝑤𝑖 is a symbol. We define 𝑤[𝑖..𝑗)=𝑤𝑖𝑤𝑖+1⋯𝑤𝑗−1. Consider the infinite complete graph 𝐺=(N,N×N). We view equivalence classes as colors, and color the edge {𝑖,𝑗} (𝑖<𝑗) with the (unique) 𝐿𝑓 that contains 𝑤[𝑖..𝑗).

By the Ramsey theorem, there is a color 𝐿𝑔 and an infinite monochrome clique 𝑖0<𝑖1<⋯ whose edges are all colored 𝐿𝑔. In particular, {𝑖0,𝑖1},{𝑖1,𝑖2},… all have color 𝐿𝑔. Thus, 𝑤 can be decomposed as 𝑤[1..𝑖0)⋅𝑤[𝑖0..𝑖1)⋅𝑤[𝑖1..𝑖2)⋯, implying that 𝑤∈𝐿𝑓(𝐿𝑔)𝜔, where 𝐿𝑓 is the class containing 𝑤[1..𝑖0).

Proposition 4.16. Let 𝐿𝑓 and 𝐿𝑔 be equivalence classes. Either 𝐿(𝐴)∩𝐿𝑓(𝐿𝑔)𝜔=∅ or 𝐿𝑓(𝐿𝑔)𝜔⊆𝐿(𝐴).

Proof 4.14. Let 𝑤∈𝐿𝑓(𝐿𝑔)𝜔 with 𝑤=𝑤1𝑤2𝑤3⋯, where 𝑤1∈𝐿𝑓 and 𝑤2,𝑤3,⋯∈𝐿𝑔 are finite words. Suppose 𝑤∈𝐿(𝐴). By the definition of the equivalence classes, we can replace each 𝑤𝑖 by any word in the same class without affecting which states the run can reach and whether it can pass through 𝑇 in between. Thus, every altered infinite word is also in 𝐿(𝐴).

Proposition 4.17. 𝜔-regular languages are closed under complementation. Any NBW has an NBW complement.

Proof 4.15 (Complementing NBW). There are finitely many 𝐿𝑓, so there are finitely many sets 𝐿𝑓(𝐿𝑔)𝜔.

We have an NBW for every (𝐿𝑔)𝜔: Take the NFA for 𝐿𝑔 and, for each accepting state, add copies of the outgoing edges of the initial states. The initial states and accepting states of the NBW are the same as those of the NFA.

We have an NBW for every 𝐿𝑓(𝐿𝑔)𝜔: Take the NBW for (𝐿𝑔)𝜔 and the NFA for 𝐿𝑓. For each accepting state of the NFA, add copies of the outgoing edges of the initial states of the NBW. The initial states of the resulting NBW are the initial states of the NFA, and the accepting states of the resulting NBW are the accepting states of the NBW for (𝐿𝑔)𝜔.

Then take the union of the NBWs of all 𝐿𝑓(𝐿𝑔)𝜔 that do not intersect 𝐿(𝐴). Since every infinite word is in some 𝐿𝑓(𝐿𝑔)𝜔, and each such set is either disjoint from 𝐿(𝐴) or contained in it, the union is exactly 𝐿(𝐴)𝑐.

4.1.11 Containment Problem

Given two DBWs/NBWs 𝐴 and 𝐵, decide whether 𝐿(𝐴)⊆𝐿(𝐵).

Note that 𝐿(𝐴)⊆𝐿(𝐵) iff 𝐿(𝐴)∩𝐿(𝐵)𝑐=𝐿(𝐴)∩𝐿(𝐵𝑐)=∅. Thus we can find the complement NBW 𝐵𝑐 of 𝐵 (such that 𝐿(𝐵𝑐)=𝐿(𝐵)𝑐), build the intersection automaton, and then answer the emptiness problem.

4.2 Parity Automata

Definition 4.8 (Parity Automaton). A parity automaton is an automaton on infinite words with a parity acceptance condition: Let 𝑝:𝑄→{0,…,𝑑} be a priority function; a run 𝜚 of the automaton is accepting if it satisfies the parity condition, i.e., liminf𝑖→∞ 𝑝(𝜚𝑖) is even.

We abbreviate Deterministic Parity Automaton to DPW, and the nondeterministic one to NPW.