Game


Contents

1 Semantics
1.1 Formalizing Program Syntax
1.2 Semantics of Expressions
1.3 States
1.4 Operational Semantics
1.4.1 Small-Step (Structural) Semantics
1.4.2 Transitive Closure
1.4.3 TODO Termination
1.4.4 Big-Step Semantics
1.4.5 TODO Code Optimization / Compilation
1.5 Denotational Semantics
1.5.1 Non-Determinism
1.6 Axiomatic Semantics and Hoare Logic
1.6.1 Hoare Triples
2 Infinite-duration Two-player Games on Graphs
2.1 Basic Concepts
2.2 Classical Objectives
2.3 Strategy
2.4 Solving Reachability Games
2.5 Solving Buchi Games
2.5.1 Classical 𝑂(π‘›π‘š) Algorithm
2.5.2 Faster 𝑂(𝑛²) Algorithm
2.6 Solving Parity Games
3 Automata on Finite Words
3.1 Basic Closure Properties
4 Automata on Infinite Words (πœ”-automata)
4.1 Buchi Automata
4.1.1 Complementation of DBW
4.1.2 Union of DBWs
4.1.3 Intersection of DBWs
4.1.4 Union of NBWs.
4.1.5 Intersection of NBWs
4.1.6 NBW Complement of DBW
4.1.7 Emptiness of DBW/NBW
4.1.8 Universality Problem of DBW
4.1.9 Universality Problem of NBW
4.1.10 Complementation NBW
4.1.11 Containment Problem
4.2 Parity Automata

1 Semantics

1.1 Formalizing Program Syntax

We first introduce components of our program (and their informal meanings)

skip means a command that does nothing.

A variable x is a string or a name. The set of all variables is denoted by Vars.

Definition 1.1 (Syntax of Arithmetic Expression). The syntax of arithmetic expressions is defined as

c ∈ ℤ,  x ∈ Vars,
e := (c) | (x) | (e)+(e) | (e)−(e) | (e)*(e) | (e)/(e) | (e)%(e),

where / stands for integer division, % stands for mod. We add parentheses here to avoid the arithmetic priority issue, and will omit them later on.

The set of all arithmetic expressions is denoted by AExp ⁑ .

Definition 1.2 (Syntax of Boolean Expression). The syntax of a boolean expression is defined as

π‘Ž ∈AExp ⁑ ,𝑏 := true ⁑ |false ⁑ π‘Ž > π‘Ž|π‘Ž < π‘Ž|π‘Ž == π‘Ž|π‘Ž β‰₯π‘Ž|π‘Ž β‰€π‘Ž|π‘Ž β‰  π‘Ž |𝑏and ⁑ 𝑏|𝑏or ⁑ 𝑏|not ⁑ 𝑏,

where a is an arithmetic expression.

The set of all boolean expressions is denoted by BExp.

Then we can define the syntax of a program.

Definition 1.3 (Syntax of Program). The syntax of a program 𝑃 is defined as

P := skip | x = e | P; P | if e then P else P fi | while e do P od,

where π‘₯ ∈Vars ⁑ , 𝑒 is an expression, and 𝑃; 𝑃 means a sequence of two program.

The meaning of these programs are not formally defined yet. We will add more program syntax later.

1.2 Semantics of Expressions

Definition 1.4 (Valuation). A valuation is a function v : Vars → ℤ, which specifies values of variables. The set of all valuations is denoted by Val. Given a valuation v, the value of the variable x is v(x). During the execution of the program, the variable itself remains the same (just a name), but its value changes as the valuation changes.

Definition 1.5 (Semantics of Arithmetic Expression). The semantics of an expression e is the meaning / value of the expression, denoted by [[e]]. Formally, [[e]] is a function Val → ℤ (the function obtained by applying the operator [[·]] to the particular expression e), such that the value of [[e]] at v, denoted by [[e]](v) or simply [[e]]v, is the value of expression e given the valuation v.

We can immediately get the semantics of some simple expressions by definition: for all v ∈ Val, [[c]](v) = c for a constant c, and [[x]](v) = v(x) for a variable x.

Definition 1.6 (Semantics of Boolean Expression). The semantics of a Boolean expression b is a function [[b]] : Val → {true, false}, such that [[b]](v) is the truth value of b given a valuation v.

For simplicity, we assume that non-zero numbers stand for true and zero stands for false, so that we do not need to explicitly name a domain for values of boolean variables.
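The two semantic functions can be sketched directly in Python. The nested-tuple encoding of expressions used below (e.g. `("+", ("var", "x"), ("const", 2))`) is an assumed representation for illustration, not one fixed by the notes:

```python
def aeval(e, v):
    """[[e]](v): value of arithmetic expression e under valuation v (a dict)."""
    tag = e[0]
    if tag == "const":
        return e[1]
    if tag == "var":
        return v[e[1]]
    l, r = aeval(e[1], v), aeval(e[2], v)
    if tag == "+": return l + r
    if tag == "-": return l - r
    if tag == "*": return l * r
    if tag == "/": return l // r      # integer division, as in Definition 1.1
    if tag == "%": return l % r       # mod
    raise ValueError(f"unknown operator {tag}")

def beval(b, v):
    """[[b]](v): truth value of boolean expression b under valuation v."""
    tag = b[0]
    if tag == "true":
        return True
    if tag == "false":
        return False
    if tag == "not":
        return not beval(b[1], v)
    if tag in ("and", "or"):
        l, r = beval(b[1], v), beval(b[2], v)
        return (l and r) if tag == "and" else (l or r)
    # remaining cases compare two arithmetic subexpressions
    l, r = aeval(b[1], v), aeval(b[2], v)
    return {">": l > r, "<": l < r, "==": l == r,
            ">=": l >= r, "<=": l <= r, "!=": l != r}[tag]
```

For example, `aeval(("%", ("+", ("var", "x"), ("const", 3)), ("const", 2)), {"x": 4})` computes (x + 3) % 2 at x = 4, which is 1.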

Notation 1 (Inference-rule notation). Writing the Assumptions above a horizontal bar and the Result below it,

   Assumptions
   ───────────
     Result

means that if the Assumptions are true, then the Result is true.

Semantics of more complex expressions can be defined as rules under some assumptions:

Rules for boolean expressions:

1.3 States

Definition 1.7 (State of Program). We can define the state of a program as:

1.
The current valuation: 𝑣 : Vars ⁑ β†’Z.
2.
The current valuation together with a pointer showing the current position in code: (𝑙,𝑣) where 𝑙 is a line number.
3.
The current valuation together with a piece of remaining code to be executed: (𝑃,𝑣).

We denote by Ξ£ the set of all states. Program execution is simply captured by transitions between states.

Analysis of the 3 types of states:

1.
The first type v: does not capture the intermediate process of the program (termination, exceptions).
2.
The second type (𝑙,𝑣): E.g.,
      while x > 0 && y > 0 do
        if x > y then
          x = x - y
        else
          y = y - x
        fi
      od

Suppose the initial valuation is (2, 1) for x, y respectively. Then the execution process can be represented as (2, 2, 1) → (3, 2, 1) → (6, 1, 1), where each triple is (line number, x, y). Drawbacks: requires knowing the whole program; cannot model separate functions in the program independently.

3.
Thus, we use the third type of state (P, v). E.g., x = 1; y = x + 2 can be represented as (x = 1; y = x + 2, (0, 0)) → (y = x + 2, (1, 0)) → (∅, (1, 3)).

1.4 Operational Semantics

Definition 1.8 (Operational Semantics). Operational semantics is a category of semantics that is defined by inference rules over transitions between states.

1.4.1 Small-Step (Structural) Semantics

Definition 1.9 (Small-step Transition Relation). A small-step transition relation (P, v) → (P′, v′) means that the first state transits to the second state in one execution step.

Definition 1.10 (Small-step Operational Semantics). Small-step operational semantics defines inference rules for single-step transitions.

Inference rules (with new syntax):

Consider the extension with break, continue, and for:

1.4.2 Transitive Closure

Definition 1.11 (Transitive closure). (P, v) →* (P′, v′) means that starting from state (P, v), we can reach state (P′, v′) in some finite number of steps (possibly zero). Formally, it is defined by

──────────────────
 (P, v) →* (P, v)

 (P, v) → (P′, v′)
──────────────────
 (P, v) →* (P′, v′)

 (P, v) →* (P′, v′),   (P′, v′) →* (P′′, v′′)
─────────────────────────────────────────────
            (P, v) →* (P′′, v′′)
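The small-step rules and their transitive closure can be sketched as a tiny interpreter. Programs are nested tuples; to keep the sketch short, expressions and guards are assumed to be Python functions from valuations (dicts) to values rather than syntax trees, and `None` plays the role of the empty program ∅:

```python
def step(P, v):
    """One small-step transition (P, v) -> (P', v'); P' is None for ∅."""
    tag = P[0]
    if tag == "skip":
        return None, v
    if tag == "assign":                    # x = e
        _, x, e = P
        return None, {**v, x: e(v)}
    if tag == "seq":                       # P1; P2: step inside P1 first
        _, P1, P2 = P
        P1n, vn = step(P1, v)
        return (P2 if P1n is None else ("seq", P1n, P2)), vn
    if tag == "if":                        # if b then P1 else P2 fi
        _, b, P1, P2 = P
        return (P1 if b(v) else P2), v
    if tag == "while":                     # unfold one loop iteration
        _, b, body = P
        return ("if", b, ("seq", body, P), ("skip",)), v
    raise ValueError(f"unknown construct {tag}")

def run(P, v, max_steps=10_000):
    """The transitive closure ->*: iterate until the empty program."""
    for _ in range(max_steps):
        if P is None:
            return v
        P, v = step(P, v)
    raise RuntimeError("step bound exceeded; possibly non-terminating")

# The GCD loop from these notes: while x > 0 and y > 0 do ... od
gcd_prog = ("while", lambda v: v["x"] > 0 and v["y"] > 0,
            ("if", lambda v: v["x"] > v["y"],
             ("assign", "x", lambda v: v["x"] - v["y"]),
             ("assign", "y", lambda v: v["y"] - v["x"])))
```

For instance, `run(gcd_prog, {"x": 2, "y": 1})` returns `{"x": 1, "y": 0}`, matching the hand trace in Section 1.3.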

1.4.3 TODO Termination

Definition 1.12 (Termination). A program P with valuation v terminates if it eventually transits to the empty program, denoted by Term(P, v). Formally, it is defined by

 (P, v) →* (∅, v′)
──────────────────
    Term(P, v)

E.g., prove that the Euclidean Algorithm for calculating the GCD terminates for all x, y ∈ ℤ:

while x > 0 and y > 0 do                        (the guard g)
  if x > y then x = x − y else y = y − x fi     (the body P′)
od

TODO We denote the program by 𝑃, the condition in while by 𝑔, and the body by 𝑃′.

1.4.4 Big-Step Semantics

Definition 1.13 (Big-Step Relation). A big-step relation (P, v) ↓ v′ means that the program P with valuation v terminates, modifying the valuation to v′.

Definition 1.14 (Big-Step Operational Semantics). Big-step operational semantics defines inference rules for big-step relations.

Inference rules:
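The rules for the basic constructs can be cross-checked with a direct big-step evaluator. As before, programs are nested tuples and expressions/guards are assumed to be Python functions from valuations to values:

```python
def bigstep(P, v):
    """The big-step relation (P, v) ↓ v', computed when P terminates."""
    tag = P[0]
    if tag == "skip":                      # (skip, v) ↓ v
        return v
    if tag == "assign":                    # (x = e, v) ↓ v[x := [[e]](v)]
        _, x, e = P
        return {**v, x: e(v)}
    if tag == "seq":                       # run P1 to completion, then P2
        _, P1, P2 = P
        return bigstep(P2, bigstep(P1, v))
    if tag == "if":
        _, b, P1, P2 = P
        return bigstep(P1 if b(v) else P2, v)
    if tag == "while":                     # unfold: if b then P; while ... fi
        _, b, body = P
        return bigstep(("seq", body, P), v) if b(v) else v
    raise ValueError(f"unknown construct {tag}")

# The example from Section 1.3: x = 1; y = x + 2
xy_prog = ("seq", ("assign", "x", lambda v: 1),
                  ("assign", "y", lambda v: v["x"] + 2))
```

Here `bigstep(xy_prog, {"x": 0, "y": 0})` returns `{"x": 1, "y": 3}`, matching the transition sequence shown for the third kind of state.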

1.4.5 TODO Code Optimization / Compilation

TODO

P and Q are equivalent only if (necessary but not sufficient)

∀ val, val′, (P, val) →* (∅, val′) ⟺ (Q, val) →* (∅, val′).

This is not sufficient because the termination is not taken into consideration.

Example 1.1. Consider P : x = 1 and Q : x = 0; while x ≠ 1 do (x = 1 or skip) od, where the loop body makes a nondeterministic choice between the two commands. P always terminates with x = 1, but Q terminates only in runs where x = 1 happens to be executed, so we cannot say they are equivalent.

1.5 Denotational Semantics

Definition 1.15 (Denotational Semantics). Denotational semantics is a category of semantics that is defined as mathematical functions. Usually, the denotational semantics of a program is obtained inductively based on the (already defined) denotational semantics of its subprograms.

We are going to use the current valuation as the state, i.e., the set of states is Σ = Val.

The type of semantics of arithmetic / boolean expressions does not change (but Val ⁑ becomes Ξ£): If 𝑒 is an arithmetic expression, then [[𝑒]]: Ξ£ β†’Z. If 𝑏 is a boolean expression, then [[𝑏]]: Ξ£ β†’{true ⁑ ,false ⁑ }. The denotational semantics of them basically rewrites the inference rules (operational semantics) as operations on functions:

Definition 1.16 (Denotational Semantics of Program). The semantics of a program 𝑃 is the function that transforms the initial state (valuation) to the final state (valuation), denoted by [[𝑃]]. Formally, it is a partial function [[𝑃]]: Σ⇀ ⁑ Ξ£. It is partial because it is not defined on initial states that do not terminate, etc.

Definition 1.17 (Conditional statement). Let 𝑓,𝑔 : Σ⇀ ⁑ Ξ£,𝑏 : Ξ£ β†’{true ⁑ ,false ⁑ }, then the conditional semantics cond ⁑ (𝑏,𝑓,𝑔): Σ⇀ ⁑ Ξ£ is defined as

cond(b, f, g)(val) = { f(val),  if b(val) = true
                     { g(val),  if b(val) = false

E.g., we can rewrite [[a1 > a2]](val) as cond([[a1]] > [[a2]], true, false)(val), where [[a1]] > [[a2]] stands for the pointwise comparison val ↦ ([[a1]](val) > [[a2]](val)), and true, false for the constant functions.
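Definition 1.17 transcribes almost verbatim into code. Modeling the partial functions Σ ⇀ Σ as Python functions that may return `None` for "undefined" is an assumed convention of this sketch:

```python
def cond(b, f, g):
    """cond(b, f, g): apply f where b is true and g where b is false."""
    return lambda val: f(val) if b(val) else g(val)

def skip_sem(val):
    """[[skip]]: the identity on states."""
    return val

def compose(f, g):
    """[[P1; P2]] = [[P2]] ∘ [[P1]]: apply f, then g; None (undefined)
    propagates, since a run that gets stuck in P1 never reaches P2."""
    def h(val):
        mid = f(val)
        return None if mid is None else g(mid)
    return h
```

For example, `cond(lambda v: v["x"] > 0, lambda v: {**v, "y": 1}, skip_sem)` maps `{"x": 2, "y": 0}` to `{"x": 2, "y": 1}` and leaves `{"x": -1, "y": 0}` unchanged.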

Semantics of program (check that the R.H.S.s are valuations):

Semantics of while: [[while b do P od]] is defined as a fixed point of the transformer T_{b,P}, where T_{b,P}(f) = cond([[b]], f ∘ [[P]], [[skip]]).

Proposition 1.1. Given a boolean expression b and a program P, there exists a fixed point of T_{b,P}.

Proof 1.1. We prove by constructing functions based on Operational Semantics.

For any program 𝑃 and valuation π‘£π‘Žπ‘™, we define [[𝑃]]π‘£π‘Žπ‘™ = π‘£π‘Žπ‘™β€² iff (𝑃,π‘£π‘Žπ‘™)β†“π‘£π‘Žπ‘™β€² in Operational Semantics, and [[𝑃]]π‘£π‘Žπ‘™ =undefined ⁑ if there is no such π‘£π‘Žπ‘™β€² (does not terminate).

Running the inference rules, we will get a function f for [[while b do P]] and a function g for [[if b then P; while b do P else skip fi]]. Recall the corollary about the equivalence between the two, which basically means

𝑓(π‘£π‘Žπ‘™)= π‘£π‘Žπ‘™β€²βŸΉπ‘”(π‘£π‘Žπ‘™)= π‘£π‘Žπ‘™β€²,𝑔(π‘£π‘Žπ‘™)= π‘£π‘Žπ‘™β€²βŸΉπ‘“(π‘£π‘Žπ‘™)= π‘£π‘Žπ‘™β€².

Thus, f = g = T_{b,P}(f), so f is indeed a fixed point.

Proposition 1.2. The fixed point is not unique, but the least defined fixed point is unique.

Proof 1.2. We can prove the nonuniqueness simply by finding multiple fixed points: 𝑇(𝑓)= 𝑓 iff

βˆ€ β‘π‘£π‘Žπ‘™,𝑓(π‘£π‘Žπ‘™)= {𝑓 β—¦[[𝑃]](π‘£π‘Žπ‘™), [[𝑏]]=true ⁑ [[skip ⁑ ]](π‘£π‘Žπ‘™)= π‘£π‘Žπ‘™,[[𝑏]]=false ⁑

So 𝑓 could be cond ⁑ ([[𝑏]],𝑐,[[skip ⁑ ]]) for any constant function 𝑐 (βˆ€ β‘π‘£π‘Žπ‘™,𝑐(π‘£π‘Žπ‘™)= 𝑐).

Proof 1.3. More detailed analysis of constraints on the fixed points: Let g be any fixed point of T_{b,P} (abbreviated to T below). Let val0 be an arbitrary initial valuation; then there are three possible cases of the while loop.

1.
The entire while loop terminates after a finite number, say 𝑛, of iterations:

We have

π‘£π‘Žπ‘™π‘–+1 = [[𝑃]]π‘£π‘Žπ‘™π‘– (1)

for 𝑖 = 0,1,…,π‘›βˆ’1. Moreover,

[[𝑏]]π‘£π‘Žπ‘™0 = β‹― = [[𝑏]]π‘£π‘Žπ‘™π‘›βˆ’1 =true ⁑ ,[[𝑏]]π‘£π‘Žπ‘™π‘› =false ⁑ .

This implies that

𝑇(𝑔)(π‘£π‘Žπ‘™π‘–) =cond ⁑ ([[𝑏]],𝑔 β—¦[[𝑃]],[[skip ⁑ ]])(π‘£π‘Žπ‘™π‘–) = 𝑔 β—¦[[𝑃]](π‘£π‘Žπ‘™π‘–)= 𝑔(π‘£π‘Žπ‘™π‘–+1) (2)

for 𝑖 = 0,…,π‘›βˆ’1, and 𝑇(𝑔)(π‘£π‘Žπ‘™π‘›)= [[skip ⁑ ]](π‘£π‘Žπ‘™π‘›)= π‘£π‘Žπ‘™π‘›. Since 𝑔 is a fixed point,

𝑔(π‘£π‘Žπ‘™0)= 𝑔(π‘£π‘Žπ‘™1)= β‹― = 𝑔(π‘£π‘Žπ‘™π‘›)= π‘£π‘Žπ‘™π‘›. (*1)
2.
Some iteration, say the 𝑛-th one, does not terminate so the whole loop does not terminate.

Equation (1) still holds for 𝑖 = 0,…,π‘›βˆ’1. And we have

[[𝑏]]π‘£π‘Žπ‘™0 = β‹― = [[𝑏]]π‘£π‘Žπ‘™π‘›βˆ’1 = [[𝑏]]π‘£π‘Žπ‘™π‘› =true ⁑ .

Then, equation (2) also holds for 𝑖 = 0,…,π‘›βˆ’1. Since 𝑔 is a fixed point of 𝑇,

𝑔(π‘£π‘Žπ‘™0)= β‹― = 𝑔(π‘£π‘Žπ‘™π‘›). (*2)
3.
Every iteration terminates but the loop itself does not:

Equation (1) and [[𝑏]]π‘£π‘Žπ‘™π‘– =true ⁑ holds for 𝑖 = 0,1,…. Similarly, we have

𝑔(π‘£π‘Žπ‘™0)= 𝑔(π‘£π‘Žπ‘™1)= …𝑔(π‘£π‘Žπ‘™π‘›)= …. (*3)

Equations (*1), (*2), and (*3) are the only constraints we have on g in the three cases, respectively. In (*1), the values g(val0), …, g(valn) are uniquely determined to be valn. However, in (*2) and (*3), the values are not unique.

We want the 𝑔 that aligns with the Operational Semantics. Requirements are:

Now, this 𝑔 is uniquely determined.

Least defined fixed point: the fixed point whose value is undefined at every initial valuation where the value is not uniquely determined.

The 𝑔 uniquely determined above is the least defined fixed point.
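The least defined fixed point can also be approximated by Kleene iteration: start from the everywhere-undefined function f0 and repeatedly apply T_{b,P}. A bounded sketch, where `None` models "undefined" and the iteration bound `fuel` is an assumption of the sketch (it makes divergence observable in finite time):

```python
def while_sem(b, body_sem, fuel=10_000):
    """Approximate the least defined fixed point of T_{b,P}:
    f_0 = undefined everywhere, f_{k+1} = cond([[b]], f_k ∘ [[P]], [[skip]])."""
    def f(val):
        for _ in range(fuel):
            if not b(val):
                return val           # [[skip]] branch: the loop has finished
            val = body_sem(val)      # one application of [[P]]
            if val is None:
                return None          # body undefined, so the loop is undefined
        return None                  # approximation bound hit: undefined
    return f

# [[while x != 0 do x = x - 1 od]]
w = while_sem(lambda v: v["x"] != 0, lambda v: {**v, "x": v["x"] - 1})
```

Here `w({"x": 3})` returns `{"x": 0}`, while `w({"x": -1})` returns `None`: the loop diverges there, so the least defined fixed point is undefined, exactly as the operational reading requires.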

1.5.1 Non-Determinism

Without non-determinism, [[P]] : Σ ⇀ Σ. Let ⟳ be the state representing non-termination, so that [[P]] becomes a total function [[P]] : Σ → Σ ∪ {⟳}. Equivalently, viewing each output as a singleton set, [[P]] : Σ → Powerset(Σ ∪ {⟳}) \ {∅}, since no valuation is mapped to ∅.

With non-determinism, [[P]] : Σ → Powerset(Σ ∪ {⟳}) \ {∅}.

For while, we also find a fixed point of the transformer, but the definition of the transformer is more complicated.

1.6 Axiomatic Semantics and Hoare Logic

Definition 1.18 (Invariant). An assertion assigned to some point of the program that always holds whenever the execution reaches that point.

E.g., the first line of a while loop may be reached many times.

Definition 1.19 (Pre-condition and post-condition). The invariants assigned to the beginning and the end of a (part of a) program, respectively.

E.g.,

{n > 0}
i = 1;
{n > 0, i = 1}
s = 0;
{n > 0, i = 1, s = 0}
m = 0;
{n > 0, m ≥ 0, n ≥ m, i = 2^m, s = 2^0 + ⋯ + 2^(m−1)}
while m < n do
  s = s + i;
  i = i * 2;
  m = m + 1;
od
{s = 2^n − 1}

1.6.1 Hoare Triples

Definition 1.20 (Hoare triple). A Hoare triple {A} P {B}, where A and B are two assertions and P is a program, means that, for any current val, if A holds in state val (denoted by val ⊨ A), and val′ is a state that can be reached by a terminating run of P starting at val, then B holds in state val′. Formally, {A} P {B} is defined as the logical statement

∀ val, val′, (val ⊨ A ∧ (P, val) ↓ val′) ⟹ val′ ⊨ B.

Remark: the assertions can be true or false. true always holds (∀ val, val ⊨ true) and false never holds (∀ val, ¬(val ⊨ false)).

If the program does not terminate, the Hoare triple does not say anything.

Hoare triples talk about all possibilities of the program execution.
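Definition 1.20 can be sanity-checked by brute force over a finite sample of valuations. This is a testing aid, not a proof; representing P as a Python function that returns the final valuation, or `None` on non-termination, is an assumption of the sketch:

```python
def check_triple(A, run_P, B, samples):
    """Finite check of {A} P {B}: whenever val ⊨ A and P terminates from
    val (run_P returns a valuation rather than None), B must hold after.
    Non-terminating runs satisfy the triple vacuously."""
    for val in samples:
        if A(val):
            out = run_P(val)
            if out is not None and not B(out):
                return False
    return True

samples = [{"x": i} for i in range(-5, 6)]
inc = lambda v: {**v, "x": v["x"] + 1}        # P: x = x + 1
# {x > 0} x = x + 1 {x > 1} holds on the sample
ok = check_triple(lambda v: v["x"] > 0, inc, lambda v: v["x"] > 1, samples)
```

With the stronger postcondition x > 2 the check fails, since val = {"x": 1} satisfies the precondition but ends at x = 2.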

Axioms:

2 Infinite-duration Two-player Games on Graphs

2.1 Basic Concepts

Games in which two players play forever.

Definition 2.1. A game graph or arena is a finite directed graph (𝑉,𝐸) with a partition (𝑉1,𝑉2) of 𝑉, where

Given an initial vertex 𝑣 βˆˆπ‘‰, the game proceeds as follows:

2.2 Classical Objectives

Let ρ = ⟨ρ0, ρ1, …⟩ be an infinite path. We denote the suffix of ρ starting from the i-th vertex by ρ[i…]. Define:

Let 𝑇 be a subset of vertices, here are some of the classical objectives:

Definition 2.2. Let 𝑇 be a set of target vertices. Then,

Definition 2.3. Let 𝑇 be a set of BΓΌchi vertices. Then,

We can check that:

Definition 2.4. Let 𝑝 : 𝑉 β†’{0,1,…,𝑑} be a priority function. Then the parity objective is defined as parity ⁑ (𝑝)= {𝜌|min ⁑ {𝑝(𝑣)|𝑣 ∈inf ⁑ (𝜌)}isΒ even}.

It is the most general objective we will study.

Proposition 2.1. Buchi ⁑ (𝑇)= parity ⁑ (𝑝) where 𝑝(𝑣)= 0 for 𝑣 βˆˆπ‘‡ and 𝑝(𝑣)= 1 otherwise.

In this case, parity ⁑ (𝑝)= {𝜌|min ⁑ {𝑝(𝑣)|𝑣 ∈inf ⁑ (𝜌)}= 0}= {𝜌|βˆƒ ⁑𝑣 ∈inf ⁑ (𝜌),𝑣 βˆˆπ‘‡}= Buchi ⁑ (𝑇).
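For an ultimately periodic play (a finite prefix followed by a cycle repeated forever), inf(ρ) is exactly the set of vertices on the cycle, so the parity objective reduces to a minimum over the cycle. A small sketch of this check and of the reduction in Proposition 2.1 (the dict encoding of p is an assumption):

```python
def parity_wins(p, cycle):
    """Does a play looping forever through `cycle` satisfy parity(p)?
    p: dict vertex -> priority; only the cycle matters for inf(ρ)."""
    return min(p[v] for v in cycle) % 2 == 0

def buchi_priorities(V, T):
    """Priority function from Proposition 2.1: 0 on T, 1 elsewhere."""
    return {v: (0 if v in T else 1) for v in V}

p = buchi_priorities({"a", "b", "c"}, {"a"})
```

Here `parity_wins(p, ["a", "b"])` holds (the cycle visits T = {a}, so the minimum priority seen infinitely often is 0), while `parity_wins(p, ["b", "c"])` does not.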

Proposition 2.2. coBuchi ⁑ (𝑇)= parity ⁑ (𝑝) where 𝑝(𝑣)= 2 for 𝑣 βˆˆπ‘‡ and 𝑝(𝑣)= 1 otherwise.

To find a priority function for coBuchi, we want {𝜌 : inf ⁑ (𝜌)βŠˆπ‘‡}= {𝜌|min ⁑ {𝑝(𝑣)|𝑣 ∈inf ⁑ (𝜌)}isΒ odd}. This hints that we set 𝑝(𝑣) to be a smaller odd number for 𝑣 βˆ‰ 𝑇.

Proposition 2.3. In general, parity(p) is the complement of parity(q) for q = 1 + p (or q = 2k + 1 + p for k ∈ ℤ).

Definition 2.5. We say an objective 𝑂 is a tail objective if whenever 𝜌 βˆˆπ‘‚, we have 𝜌[𝑖…]βˆˆπ‘‚ for every 𝑖.

Proposition 2.4. Parity objectives are tail objectives, since finite prefixes do not matter.

2.3 Strategy

Notation 2. For a vertex u, we denote by Succ(u) = {v ∈ V : (u, v) ∈ E} the set of successor vertices of u.

Definition 2.6. A strategy for player i is a function σi : V* × Vi → V, where V* represents the history (the portion of the path visited so far). Given history w and current vertex v, the strategy outputs player i’s choice of the next vertex to visit, σi(w, v) ∈ Succ(v).

Notation 3. The outcome is fixed given the initial vertex 𝑣 and strategies (no matter memoryless or not) of the two players 𝜎1,𝜎2. We denote it by Outcome ⁑ (𝑣,𝜎1,𝜎2).

Without other specifications, we assume that strategies are memoryless in the following discussion.

Remark 1. In the case where strategies have memory, one may discuss the size of the memory. The memory is said to be finite if it consists of finitely many bits (not that it can memorize a finite number of steps).

Definition 2.7. A strategy 𝜎1 for player 1 is called a winning strategy for player 1 from a vertex 𝑣 if for every strategy 𝜎2 of player 2 we have Outcome ⁑ (𝑣,𝜎1,𝜎2)βˆˆπ‘‚1. The winning strategies for player 2 are defined analogously.

Definition 2.8 (Spoiling strategy). A strategy 𝜎2 for player 2 spoils 𝜎1 if Outcome ⁑ (𝑣,𝜎1,𝜎2)βˆ‰ 𝑂1.

Definition 2.9. Given an objective 𝑂𝑖 for player 𝑖, the winning set Win ⁑ 𝑖(𝑂𝑖) is the subset of vertices from which player 𝑖 has a winning strategy (i.e., the other player has no spoiling strategy for this).

Definition 2.10. A game with objectives 𝑂1,𝑂2 for players 1 and 2 respectively is determined if Win ⁑ 1(𝑂1)= 𝑉 \Win ⁑ 2(𝑂2).

Usually we aim at finding the winning set of player 1.

2.4 Solving Reachability Games

Suppose the objective for player 1 is Reach ⁑ (𝑇). (Besides attractors and traps for this objective, I also include some related conclusions on winning strategies for player 2 and objective Safe ⁑ (𝑉 \𝑇).)

Definition 2.11. Let 𝑇 be a set of target vertices. The attractor for player 𝑖 is defined as Attr ⁑ 𝑖(𝑇)= Win ⁑ 𝑖(Reach ⁑ (𝑇)), the set of vertices from which player 𝑖 has a strategy to reach 𝑇.

Proposition 2.5. The set Attr ⁑ 1(𝑇) can be inductively defined as follows:

The inductive definition for Attr2(T) is analogous.

Definition 2.12. For all vertices in Attr ⁑ 1(𝑇), define rank ⁑ (𝑣)= 𝑖 if 𝑣 βˆˆπ΄π‘– \π΄π‘–βˆ’1 (See Prop.Β 2.5).

Proposition 2.6. By definition, 𝐴 βŠ†Win ⁑ 1(Reach ⁑ (𝑇)) and 𝐴∩Win ⁑ 2(Safe ⁑ (𝑉 \𝑇))= βˆ….

[TODO: We will soon see that they are equal]

Intuitively, Ai consists of all vertices from which player 1 can force reaching T in at most i steps, so any valid attractor must be included in ∪i Ai.

Definition 2.13. Let T be a set of target vertices. A vertex set C with C ∩ T = ∅ is said to be a trap for player 1 if player 2 has a strategy to confine the play in C (and hence never reach T), i.e.,

1.
If 𝑣 ∈𝐢 βˆ©π‘‰1, then Succ ⁑ (𝑣)βŠ†πΆ
2.
If 𝑣 ∈𝐢 βˆ©π‘‰2, then Succ ⁑ (𝑣)∩𝐢 β‰  βˆ….

We often use the term β€œa trap of player 1 in 𝐢” or β€œa trap of player 1 out of 𝑇”.

That is, 𝐢 βŠ†Win ⁑ 2(Safe ⁑ (𝑉 \𝑇)) and 𝐢 ∩Win ⁑ 1(Reach ⁑ (𝑇))= βˆ….

Proposition 2.7. 𝑉 \βˆͺ𝑖β‰₯0𝐴𝑖 is a trap for player 1.

Proof 2.1.

1.
𝑇 βŠ†π΄βŸΉ(𝑉 \𝐴)βˆ©π‘‡ = βˆ….
2.
Suppose 𝑣 ∈(𝑉 \𝐴)βˆ©π‘‰1. If Succ ⁑ (𝑣)βŠˆπ‘‰ \𝐴, i.e., Succ ⁑ (𝑣)∩𝐴 β‰  βˆ…, then Succ ⁑ (𝑣)βˆ©π΄π‘– β‰  βˆ… for some 𝑖. In this case, 𝑣 should have been added into 𝐴, which is a contradiction. Thus, Succ ⁑ (𝑣)βŠ†π‘‰ \𝐴.
3.
Suppose v ∈ (V \ A) ∩ V2. If Succ(v) ∩ (V \ A) = ∅, i.e., Succ(v) ⊆ A, then since the Ai are increasing and Succ(v) is finite, Succ(v) ⊆ Ai for some i. Then v should have been added into A, which is a contradiction. Thus, Succ(v) ∩ (V \ A) ≠ ∅.

Derivation 2.1. Together, we have 𝑉 \𝐴 βŠ†Win ⁑ 2(Safe ⁑ (𝑉 \𝑇)), and (𝑉 \𝐴)∩Win ⁑ 1(Reach ⁑ (𝑇))= βˆ…. This implies that

Proposition 2.8. 𝐴 = Win ⁑ 1(Reach ⁑ (𝑇)) and 𝑉 \𝐴 = Win ⁑ 2(Safe ⁑ (𝑉 \𝑇)).

Find 𝐴 using a BFS:

Algorithm 1: Linear Algorithm for Finding Attractor
Input: Game graph G = (V, E), set of target vertices T ⊆ V
Output: Attractor set A

  Q ← T; A ← ∅
  SuccInA[v] ← 0 for all v ∈ V
  while Q ≠ ∅ do
      pop one vertex v from Q
      A ← A ∪ {v}
      for w ∈ Pred(v) \ A do
          SuccInA[w] ← SuccInA[w] + 1
          if w ∈ V1 then                               // {v ∈ V1 | Succ(v) ∩ A ≠ ∅}
              Q ← Q ∪ {w}
          if w ∈ V2 and SuccInA[w] = outdeg(w) then    // {v ∈ V2 | Succ(v) ⊆ A}
              Q ← Q ∪ {w}
  return A
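Algorithm 1 transcribes into Python almost line by line; the edge-set input format and the names below are assumptions of this sketch, not fixed by the notes:

```python
from collections import deque

def attractor(V, V1, E, T):
    """Player-1 attractor Attr_1(T) in O(n + m) time.
    V: vertices; V1: player-1 vertices (the rest belong to player 2);
    E: set of directed edges (u, w); T: target set."""
    succ = {v: set() for v in V}
    pred = {v: set() for v in V}
    for u, w in E:
        succ[u].add(w)
        pred[w].add(u)
    succ_in_A = {v: 0 for v in V}     # the SuccInA counter of the pseudocode
    A = set()
    queued = set(T)
    Q = deque(T)
    while Q:
        v = Q.popleft()
        A.add(v)
        for w in pred[v]:
            if w in queued:           # already known to belong to the attractor
                continue
            succ_in_A[w] += 1
            # player 1 needs one successor in A; player 2 must have all of them
            if w in V1 or succ_in_A[w] == len(succ[w]):
                queued.add(w)
                Q.append(w)
    return A
```

For instance, with V = {1, 2, 3, 4}, V1 = {1, 2, 4}, edges {(1,2), (2,4), (2,3), (3,3), (3,1), (4,4)} and T = {4}, the result is {1, 2, 4}: the player-2 vertex 3 escapes via its self-loop, so it is outside the attractor.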

Running time:

2.5 Solving Buchi Games

Consider the zero-sum Buchi game with 𝑂1 = Buchi ⁑ (𝑇) and 𝑂2 = coBuchi ⁑ (𝑉 \𝑇). We want to solve Win ⁑ 1(𝑂1) and Win ⁑ 2(𝑂2).

Definition 2.14 (Subgame arena). Given an arena G = (V, V1, V2, E) and a set W ⊆ V, if every vertex in W has a successor in W, then the subgame arena at W is defined as G_W = (W, V1 ∩ W, V2 ∩ W, E ∩ (W × W)) (keeping only edges within W).

The condition of having a successor in W ensures that G_W satisfies the third requirement of an arena: every vertex has at least one outgoing edge.

We denote Attr ⁑ 𝑖(𝑇)= Win ⁑ 𝑖(Reach ⁑ (𝑇)).

2.5.1 Classical 𝑂(π‘›π‘š) Algorithm

Proposition 2.9. Player 2 has a winning strategy (for coBuchi(V \ T)) starting from C = Attr2(B), where B = V \ Attr1(T); hence player 1 does not have a winning strategy from this set (C ⊆ Win2(O2)).

Proof 2.2.

Proposition 2.10. The following algorithm finds Win2(coBuchi(V \ T)) and Win1(Buchi(T)): mark all vertices in C as winning for player 2, and recursively solve the subgame at V \ C with objective O1 = Buchi(T \ C). Formally:

Algorithm 2: Classical Algorithm for Finding coBüchi Winning Set
Input: Game graph G = (V, E), target set T ⊆ V
Output: Winning set W2

  T0 ← T; G0 ← G; W2 ← ∅
  for j ← 0 to ∞ do
      Aj ← Attr1(Tj, Gj)
      Bj ← Vj \ Aj
      Cj ← Attr2(Bj)
      W2 ← W2 ∪ Cj
      Vj+1 ← Vj \ Cj
      Tj+1 ← Tj \ Cj
      Gj+1 ← G[Vj+1]
      if Cj = ∅ then break
  return W2

Win ⁑ 2(coBuchi ⁑ (𝑉 \𝑇))= π‘Š2 and Win ⁑ 1(Buchi ⁑ (𝑇))= 𝑉 \π‘Š2.

Proof 2.3. We first show that, in each iteration, it is β€œsafe” to keep the subgame at 𝑉 \𝐢 only (subscript omitted), i.e., the game starting from 𝑉 \𝐢 will remain in 𝑉 \𝐢 if both players play the best. (Note that this does not include edges across 𝐢 and 𝑉 \𝐢 either). Consider any initial vertex in 𝑉 \𝐢:

Then we show that player 1 has a winning strategy in V \ W2. Suppose the loop breaks at Cn = ∅, i.e., Attr2(Bn) = ∅. Then Bn must be ∅, since Attr2(Bn) ⊇ Bn. Thus, An = Vn = V \ W2. Then, V \ W2 plays two roles:

Corollary 2.1. The Buchi game is determined: Win1(O1) = V \ Win2(O2) (= V \ W2):

Corollary 2.2. Both winning strategies for player 1 and player 2 are memoryless: Their strategies can be decomposed into many stages of reachability strategies, which are memoryless.

Runtime of the algorithm: O(m) (attractor algorithm) per iteration, and at most n iterations; overall O(nm).
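The loop of Algorithm 2 can be sketched in Python, reusing one generic attractor helper for both players; the input format (vertex sets and an edge set) is an assumption of the sketch:

```python
def solve_buchi(V, V1, E, T):
    """Classical O(nm) sketch: returns (Win1, Win2) for O1 = Buchi(T)
    and O2 = coBuchi(V \\ T). V1 is the set of player-1 vertices."""
    def attr(player, S, verts, edges):
        """Attractor of S for the player owning the vertex set `player`."""
        succ = {v: set() for v in verts}
        pred = {v: set() for v in verts}
        for u, w in edges:
            succ[u].add(w)
            pred[w].add(u)
        A, stack = set(S), list(S)
        cnt = {v: 0 for v in verts}
        while stack:
            v = stack.pop()
            for w in pred[v]:
                if w in A:
                    continue
                cnt[w] += 1
                if w in player or cnt[w] == len(succ[w]):
                    A.add(w)
                    stack.append(w)
        return A

    verts, edges, target = set(V), set(E), set(T)
    W2 = set()
    while True:
        A = attr(V1 & verts, target, verts, edges)   # Attr_1(T_j)
        B = verts - A                                # trap for player 1
        C = attr(verts - V1, B, verts, edges)        # Attr_2(B_j)
        if not C:
            break
        W2 |= C
        verts -= C
        target -= C
        edges = {(u, w) for (u, w) in edges if u in verts and w in verts}
    return verts, W2
```

For instance, on the arena with V = {a, b, c}, V1 = {a}, edges a→b, b→a, b→c, c→c and T = {a}, player 2 can steer the play into the self-loop at c, and the sketch reports Win1 = ∅ and Win2 = {a, b, c}.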

Solving O1 = coBuchi(T): take O2 = Buchi(V \ T) and solve O2 using the algorithm.

2.5.2 Faster 𝑂(𝑛²) Algorithm

More detailed analysis of the classical algorithm:

Proposition 2.11. The total running time for finding all 𝐢𝑗 = Attr ⁑ 2(𝐡𝑗,𝐺𝑗) (and removing relevant vertices and edges to derive 𝐺𝑗) is 𝑂(π‘š).

Proof 2.4. In the process of finding 𝐢𝑗 = Attr ⁑ 2(𝐡𝑗,𝐺𝑗), all the vertices in 𝐢𝑗 and their in-edges are visited.

Then it removes all edges incident on these vertices, i.e., all in-edges and out-edges.

Thus, every edge is visited 𝑂(1) times.

Notice that the only property of 𝐡 we use is that it is a trap of player 1 out of 𝑇. We took 𝐡 as the largest such trap, namely 𝑉 \𝐴, but actually it can be any trap out of 𝑇.

Definition 2.15 (Subgraph Trap). Let πΈβ€²βŠ†πΈ, a set 𝑆 βŠ†π‘‰ is an 𝐸′-trap of player 1 outside 𝑇 if

It is a set of vertices from which player 2 has a strategy to keep the play within the set using only edges of E′, while playing in the entire arena. It is not the same as a trap of player 1 outside T when playing in the subgame arena (V, E′).

We can use an algorithm similar to the reachability algorithm to find the maximum 𝐸′-trap:

Algorithm 3: Linear Algorithm for Finding Subgraph Trap
Input: Game graph G = (V, E), subgraph G′ = (V, E′), and target set T ⊆ V
Output: Trap set S

  Q ← T ∪ {v ∈ V1 | ∃(v, w) ∈ E \ E′}              // Change
  A ← ∅
  SuccInA_E′[v] ← 0 for all v ∈ V
  while Q ≠ ∅ do
      pop one vertex v from Q
      A ← A ∪ {v}
      for w ∈ Pred(v, E′) \ A do                    // Change
          SuccInA_E′[w] ← SuccInA_E′[w] + 1
          if w ∈ V1 then
              Q ← Q ∪ {w}
          if w ∈ V2 and SuccInA_E′[w] = outdeg(w, E′) then
              Q ← Q ∪ {w}
  S ← V \ A
  return S

It is basically the complement of the “E′-attractor” of player 1 to the union of T and the player-1 vertices that have an outgoing edge not in E′. Again, we cannot simply apply the previous algorithm to the subgame (V, E′).

Hierarchical Decomposition:

Coloring Vertices:

Proposition 2.12. Let 𝑋𝑖 be the set of vertices that are

Then 𝑋𝑖 is the set of all the vertices from which player 1 has a strategy to leave 𝐺𝑖 (visit edges that are not in 𝐸𝑖).

Proof 2.5.

Proposition 2.13. The following algorithm finds a trap of player 1 outside of 𝑇. In addition, let π‘˜ be the size of the found trap, then the algorithm takes 𝑂(π‘›π‘˜) time:

_______________________________________________________________________________________ AlgorithmΒ 4:Β Algorithm for Finding Trap______________________________________________________________________________ Input: Game graph 𝐺 = (𝑉,𝐸), target set 𝑇 βŠ†π‘‰ Output: Trap 𝑆𝑖 for player 1, or report no trap found 1for 𝑖 ←0 to log ⁑ 𝑛 do 2 3 // Try to find an 𝐸𝑖-trap 𝑆𝑖 for player 1 out of 𝑇 4Try to find an 𝐸𝑖-trap 𝑆𝑖 for player 1 out of 𝑇; 5ifΒ such 𝑆𝑖 is foundΒ then 6 7 8Output 𝑆𝑖 as the trap; 9break; 10ifΒ no trap has been foundΒ then 11 12 13Report no trap is found;_______________________________________________________________________________________________________________________________

Proof 2.6 (Correctness). By definition, an 𝐸′-trap of player 1 is an 𝐸-trap of player 1.

Proof 2.7 (Running Time). Suppose we find a trap Si in the i-th iteration. We claim |Si| ≥ 2^(i−1). We prove this by casework on the colors of the player-1 vertices in the previous graph:

In summary, the algorithm finds a trap of size ≥ 2^(i−1) in O(2^i n) time. In other words, it can find a trap of size k in O(nk) time (k is the size of the trap it actually found).

The faster algorithm:

Algorithm 5: Faster Algorithm for Finding coBüchi Winning Set
Input: Game graph G = (V, E), target set T ⊆ V
Output: Winning set W2

  T0 ← T; G0 ← G; W2 ← ∅
  for j ← 0 to ∞ do
      Aj ← Attr1(Tj, Gj)
      Bj ← a trap of size k computed by the hierarchical decomposition algorithm
      Cj ← Attr2(Bj)
      W2 ← W2 ∪ Cj
      Vj+1 ← Vj \ Cj
      Tj+1 ← Tj \ Cj
      Gj+1 ← G[Vj+1]
      if Cj = ∅ then break
  return W2

Running time: since Cj ⊇ Bj is removed from Vj, all the k’s add up to O(n), so the overall running time is O(n² + m) = O(n²).

2.6 Solving Parity Games

Consider a zero-sum game with 𝑂1 = parity ⁑ (𝑝) and 𝑂2 = parity ⁑ (𝑝 +1), where 𝑝 : 𝑉 β†’{0,1,…,𝑑}.

W.l.o.g., we can assume that

Proposition 2.14. Parity game is determined and both players have memoryless winning strategies.

Proof 2.8. We prove by induction on the number of priorities 𝑑.

Base cases:

Induction step: Assume any parity game with d priorities {0, 1, …, d − 1} is memoryless determined. We want to show that any game with d + 1 priorities {0, 1, …, d} is memoryless determined, by eliminating priority 0 in the current game.

1.
Firstly, we find Win ⁑ 2(𝑂2):
  • Let 𝑃0 = {𝑣 βˆˆπ‘‰|𝑝(𝑣)= 0}βŠ†π‘‰.
  • Let 𝐴 = Attr ⁑ 1(𝑃0) and 𝐡 = 𝑉 \𝐴 (trap of player 1 out of 𝑃0).
  • 𝐡 has a subgame and the subgame does not have any priority 0 vertex. By induction, the subgame with the same objectives is determined and memoryless. Let 𝐡1,𝐡2 be the winning sets of player 1 and 2, respectively, in the subgame at 𝐡.
  • Player 2 has a winning strategy in the original game from 𝐡2: Play the strategy that traps the game in 𝐡 and the same winning strategy as in the subgame.
  • Player 2 has a winning strategy in the original game from C = Attr2(B2): first play the strategy that reaches B2 in finitely many steps, then play the above strategy. These finitely many steps do not affect the parity (see Tail Objective).
  • The winning strategies above are memoryless.
  • Mark all vertices in 𝐢 as winning starting vertices for player 2. And continue recursively on the subgame at 𝑉 \𝐢. The argument is exactly the same as Buchi games.
2.
Secondly, we show that Win ⁑ 1(𝑂1)= 𝑉 \Win ⁑ 2(𝑂2).
  • The process continues until C = ∅, B2 = ∅, B = B1. We use the same notation for P0, A. Consider this strategy for player 1: if the current state is in B, play the winning strategy as in the subgame; if the current state is in A, play the winning strategy for objective Reach(P0). Then no matter what strategy player 2 plays, player 1 always wins:

    • By definition, if player 2 keeps the game in B = B1, player 2 will lose and player 1 will win (determinacy in B).
    • Since the objective is a tail objective, if player 2 keeps the game in B after finitely many steps, player 2 will still lose and player 1 will win (determinacy in B).
    • Otherwise, player 2 does not keep the game in B, i.e., the game enters A infinitely many times. Every time the game is in A, player 1 plays the strategy for objective Reach(P0). Thus, player 1 reaches priority 0 infinitely many times. In this case, player 1 still wins.
3.
This shows that the game is determined and memoryless.

Running time:

3 Automata on Finite Words

Definition 3.1 (Deterministic Finite Automaton, DFA). A Deterministic Finite Automaton (DFA) is a tuple 𝐴 = (Ξ£,𝑄,π‘ž0,𝛿,𝑇) in which:

A DFA operates on finite words as follows: it starts from the initial state and reads the symbols of the word sequentially. If the current state and symbol are q and a respectively, it transits to δ(q, a).

A word is accepted if the (unique) run with that word ends in an accepting state. (P.S. if the transition corresponding to the input symbol is not available at the current state, then it directly leads to a rejection.)

The language of the automaton is the set of accepted words.

P.S., we denote the set of all finite words consisting of letters in Ξ£ by Ξ£βˆ—.
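The acceptance procedure above is a few lines of code. The transition-table encoding and the example automaton below are assumptions for illustration:

```python
def dfa_accepts(delta, q0, accepting, word):
    """Run a DFA on a finite word. delta: dict (state, symbol) -> state.
    A missing transition leads directly to rejection, as noted above."""
    q = q0
    for a in word:
        if (q, a) not in delta:
            return False
        q = delta[(q, a)]
    return q in accepting

# Example DFA over {0, 1} accepting exactly the words that end in 1
delta = {("s0", 0): "s0", ("s0", 1): "s1",
         ("s1", 0): "s0", ("s1", 1): "s1"}
```

Here `dfa_accepts(delta, "s0", {"s1"}, [0, 1, 1])` is `True` and `dfa_accepts(delta, "s0", {"s1"}, [1, 0])` is `False`.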

Definition 3.2 (Nondeterministic Finite Automaton, NFA). A Nondeterministic Finite Automaton (NFA) is a tuple A = (Σ, Q, Q0, δ, T) in which:

A word is accepted if there exists a run over the word ending in an accepting state.

3.1 Basic Closure Properties

Proposition 3.1. Any NFA has an NFA complement, i.e., an NFA accepting the complement of its language.

4 Automata on Infinite Words (πœ”-automata)

We denote the set of all infinite words consisting of letters in Ξ£ by Ξ£πœ”.

Definition 4.1 (Deterministic Automaton of Infinite Words). A Deterministic Automaton on Infinite words is a tuple 𝐴 = (Ξ£,𝑄,π‘ž0,𝛿,𝑇,𝐴𝑐𝑐) in which:

Definition 4.2 (Nondeterministic Automaton on Infinite Words). A Nondeterministic Automaton on Infinite words is a tuple 𝐴 = (Ξ£,𝑄,𝑄0,𝛿,𝑇) in which:

The Automata on Infinite words with the acceptance condition being Buchi or Parity are called Buchi Automata and Parity Automata, resp.

4.1 Buchi Automata

Definition 4.3 (Buchi Automaton). A Buchi Automaton is an Automaton on Infinite words with Buchi acceptance condition: Let 𝑇 βŠ†π‘„ be the target set, a run 𝜚 is accepting if it visits T infinitely many times, i.e. inf ⁑ (𝜚)βˆ©π‘‡ β‰  βˆ….

We abbreviate Deterministic Buchi Automaton to DBW, and the nondeterministic one to NBW.

Definition 4.4 (πœ”-regular Language). We call a language πœ”-regular if it is the language of some NBW.

Proposition 4.1. NBW are strictly more expressive than DBW: (1) there is an NBW whose language is not recognized by any DBW; (2) every DBW can be converted into an equivalent NBW.

Proof 4.1. The second statement is trivial: we can turn a DBW into an NBW by replacing the initial state and each output of the transition function with singleton sets.

The first statement is proved by contradiction: Consider this NBW which accepts all words with finitely many zeros (grey nodes are target states):

[Figure: NBW accepting all words with finitely many zeros; grey nodes are target states]

Suppose 𝐴 is a DBW with the same language. Let 𝑛 be the number of states in 𝐴 (finite). Consider the following accepting infinite words: 1^πœ”, 1^{𝑖1} 0 1^πœ”, 1^{𝑖1} 0 1^{𝑖2} 0 1^πœ”, …, where each 𝑖𝑗 is chosen so that the (unique) run visits an accepting state π‘žπ‘–π‘— right after reading the prefix ending with 1^{𝑖𝑗} (possible because each word is accepted). Repeating this 𝑛+1 times yields accepting states π‘žπ‘–1,…,π‘žπ‘–(𝑛+1).

By the pigeonhole principle, two among these accepting states, say π‘žπ‘–π‘— and π‘žπ‘–π‘˜ with 𝑗 < π‘˜, must be the same. Let the prefix of the last word before the visit to π‘žπ‘–π‘— be 𝑝, and the segment between the visits to π‘žπ‘–π‘— and π‘žπ‘–π‘˜ (excluded) be π‘Ÿ; note that π‘Ÿ contains at least one 0. Starting from π‘žπ‘–π‘—, the run of 𝐴 returns to this state by reading π‘Ÿ. Then 𝑝(π‘Ÿπœ”), the concatenation of 𝑝 and infinitely many repetitions of π‘Ÿ, is also accepted, because its run visits accepting states infinitely many times.

However, 𝑝(π‘Ÿπœ”) contains infinitely many zeros. This contradicts with the assumption. Thus, DBW with the same language does not exist.

4.1.1 Complementation of DBW

In what follows, we will call languages that can be recognized by some DBW β€œDBW languages”.

Proposition 4.2. DBW languages are not closed under complementation: there is a DBW such that no DBW recognizes the complement of its language.

Proof 4.2. Consider the example in the last proof: we showed that there is no DBW for the language of words with finitely many zeros. However, there is a DBW for its complement, the language of all words with infinitely many zeros:

[Figure: DBW accepting all words with infinitely many zeros]

4.1.2 Union of DBWs

Proposition 4.3. DBW languages are closed under union. For any two DBWs, there is another DBW that recognizes the union of the languages of the two DBWs.

Proof 4.3. Let 𝐿1,𝐿2 be any two DBW languages, say 𝐿1 = 𝐿(𝐴1) and 𝐿2 = 𝐿(𝐴2), where 𝐴1 = (Ξ£,𝑄1,π‘ž01,𝛿1,𝑇1) and 𝐴2 = (Ξ£,𝑄2,π‘ž02,𝛿2,𝑇2).

Consider the automaton that simulates the transitions of 𝐴1,𝐴2 at the same time: 𝐴 = (Ξ£,𝑄1 ×𝑄2,(π‘ž01,π‘ž02),𝛿,(𝑇1 ×𝑄2)βˆͺ(𝑄1 ×𝑇2)), where 𝛿((π‘ž,π‘žβ€²),π‘Ž)= (𝛿1(π‘ž,π‘Ž),𝛿2(π‘žβ€²,π‘Ž)). Each state of 𝐴 is a pair of states of 𝐴1 and 𝐴2.

Then 𝐴 accepts a word iff its run visits (𝑇1 ×𝑄2)βˆͺ(𝑄1 ×𝑇2) infinitely many times. A run visiting the union infinitely often must visit at least one of the two sets infinitely often, so this is equivalent to visiting 𝑇1 infinitely often or visiting 𝑇2 infinitely often, i.e., to the word being in 𝐿1 βˆͺ𝐿2. Thus, 𝐿(𝐴)= 𝐿1 βˆͺ𝐿2.
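This product construction can be sketched executably. The encoding (transition dictionaries, a lasso-word acceptance check) is our own convention, not from the notes; for a deterministic automaton, acceptance of an ultimately periodic word u(v)^Ο‰ can be decided by iterating v until the state at the start of an iteration repeats, then checking whether an accepting state occurs inside the repeating part.

```python
def product_union(d1, t1, q01, d2, t2, q02):
    """Product DBW for the union: a state is accepting when either
    component is accepting, i.e., it lies in (T1 x Q2) u (Q1 x T2)."""
    delta = lambda s, a: (d1[(s[0], a)], d2[(s[1], a)])
    accepting = lambda s: s[0] in t1 or s[1] in t2
    return delta, accepting, (q01, q02)

def dbw_accepts_lasso(delta, accepting, q0, prefix, loop):
    """Acceptance of prefix(loop)^omega by a deterministic automaton:
    iterate the (non-empty) loop until the state at its start repeats;
    accept iff an accepting state occurs in the repeating iterations."""
    q = q0
    for a in prefix:
        q = delta(q, a)
    first_seen, hits = {}, []
    while q not in first_seen:
        first_seen[q] = len(hits)
        hit = False
        for a in loop:
            q = delta(q, a)
            hit = hit or accepting(q)
        hits.append(hit)
    return any(hits[first_seen[q]:])

# Toy components (our example): D1 accepts words over {a, b} with
# infinitely many a's, D2 those with infinitely many b's
# (each state records the last letter read).
d = {(q, a): a for q in "ab" for a in "ab"}
delta, acc, q0 = product_union(d, {"a"}, "b", d, {"b"}, "b")
print(dbw_accepts_lasso(delta, acc, q0, "", "b"))    # True: D2 accepts b^omega
print(dbw_accepts_lasso(delta, acc, q0, "ab", "a"))  # True: D1 accepts (ab)a^omega
```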

4.1.3 Intersection of DBWs

Proposition 4.4. DBW languages are closed under intersection. For any two DBWs, there is another DBW that recognizes the intersection of the languages of the two DBWs.

Proof 4.4. Let 𝐴1,𝐴2 be two automata and 𝐿1 = 𝐿(𝐴1),𝐿2 = 𝐿(𝐴2). The tuples of 𝐴1,𝐴2 use the same notation as above.

𝐿1 ∩𝐿2 consists of the words whose runs visit both target sets infinitely many times. This is equivalent to alternating between the two target sets infinitely many times (before switching to the other target set, the run may visit the same target set multiple times).

Consider an automaton built from two copies of the β€œjoint” (product) automaton 𝐴1 ×𝐴2: 𝐴 = (Ξ£,𝑄1 ×𝑄2 Γ—{𝐿,𝑅},(π‘ž01,π‘ž02,𝐿),𝛿,𝑇). We modify the transitions so that once the run reaches a target state in one copy, it switches to the other copy and continues there; hence it cannot visit target states in the same copy consecutively. We define the target set to be 𝑇 = 𝑄1 ×𝑇2 Γ—{𝑅} and 𝛿 by:

𝛿(π‘ž1,π‘ž2,𝐿,π‘Ž)= {(𝛿1(π‘ž1,π‘Ž),𝛿2(π‘ž2,π‘Ž),𝐿),ifΒ π‘ž1 βˆ‰ 𝑇1 (𝛿1(π‘ž1,π‘Ž),𝛿2(π‘ž2,π‘Ž),𝑅),ifΒ π‘ž1 βˆˆπ‘‡1 𝛿(π‘ž1,π‘ž2,𝑅,π‘Ž)= {(𝛿1(π‘ž1,π‘Ž),𝛿2(π‘ž2,π‘Ž),𝑅),ifΒ π‘ž2 βˆ‰ 𝑇2 (𝛿1(π‘ž1,π‘Ž),𝛿2(π‘ž2,π‘Ž),𝐿),ifΒ π‘ž2 βˆˆπ‘‡2

A run in 𝐴 visits 𝑄1 ×𝑇2 Γ—{𝑅} infinitely often iff it visits 𝑇1 ×𝑄2 Γ—{𝐿} infinitely often: once it visits 𝑄1 ×𝑇2 Γ—{𝑅}, it switches to the 𝐿-copy, where it may visit 𝑄1 ×𝑇2 Γ—{𝐿} zero or more times until it visits 𝑇1 ×𝑄2 Γ—{𝐿}, and vice versa. Hence a run visits 𝑇 infinitely often iff both 𝑇1 and 𝑇2 are visited infinitely often in the components.
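The flagged-product construction can be written down directly. The sketch below uses our own encoding and the same toy components as before (last-letter automata over {a, b}); it is an illustration, not the notes' definition verbatim.

```python
def product_intersection(d1, t1, q01, d2, t2, q02):
    """Flagged product for the intersection: stay in copy 'L' until the
    A1-component is in T1, then move to copy 'R' until the A2-component
    is in T2, then switch back. Target set: Q1 x T2 x {'R'}."""
    def delta(s, a):
        q1, q2, side = s
        n1, n2 = d1[(q1, a)], d2[(q2, a)]
        if side == "L":
            return (n1, n2, "R" if q1 in t1 else "L")
        return (n1, n2, "L" if q2 in t2 else "R")
    accepting = lambda s: s[1] in t2 and s[2] == "R"
    return delta, accepting, (q01, q02, "L")

def run_states(delta, q0, word):
    """States visited after each symbol of a finite word."""
    out, q = [], q0
    for a in word:
        q = delta(q, a)
        out.append(q)
    return out

# Intersection of "infinitely many a's" and "infinitely many b's".
d = {(q, a): a for q in "ab" for a in "ab"}
delta, acc, q0 = product_intersection(d, {"a"}, "b", d, {"b"}, "b")
# (ab)^omega lies in the intersection: target states keep recurring.
print(any(acc(s) for s in run_states(delta, q0, "ab" * 8)[8:]))   # True
# a^omega does not: the run gets stuck in copy 'R' waiting for T2.
print(any(acc(s) for s in run_states(delta, q0, "a" * 16)))       # False
```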

4.1.4 Union of NBWs.

Proposition 4.5. πœ”-regular languages are closed under union. For any two NBWs, there is an NBW that recognizes the union of the languages of the two NBWs.

Proof 4.5. Let 𝐴1 = (Ξ£,𝑄1,𝑄01,𝛿1,𝑇1) and 𝐴2 = (Ξ£,𝑄2,𝑄02,𝛿2,𝑇2), and let 𝐿1,𝐿2 be their languages, resp. We define 𝐴 = (Ξ£,𝑄1 Γ—{1}βˆͺ𝑄2 Γ—{2},𝑄01 Γ—{1}βˆͺ𝑄02 Γ—{2},𝛿1 βˆͺ𝛿2,𝑇1 Γ—{1}βˆͺ𝑇2 Γ—{2}), where (𝛿1 βˆͺ𝛿2)((π‘ž,1),π‘Ž)= 𝛿1(π‘ž,π‘Ž)Γ—{1} and (𝛿1 βˆͺ𝛿2)((π‘ž,2),π‘Ž)= 𝛿2(π‘ž,π‘Ž)Γ—{2}.

Basically, we keep two isolated components in 𝐴. Then every accepting run of 𝐴𝑖 corresponds to an accepting run of 𝐴 starting from 𝑄0𝑖 Γ—{𝑖}, and vice versa.

P.S.: Since a DBW has only one initial state, this construction cannot be used to prove closure of DBW languages under union.

4.1.5 Intersection of NBWs

Proposition 4.6. πœ”-regular languages are closed under intersection. For any two NBWs, there is an NBW that recognizes the intersection of the languages of the two NBWs.

Proof 4.6. The same construction as for DBWs works. The initial set becomes 𝑄01 ×𝑄02 Γ—{𝐿}. The definition of 𝛿 becomes 𝛿(π‘ž1,π‘ž2,𝐿,π‘Ž)= 𝛿1(π‘ž1,π‘Ž)×𝛿2(π‘ž2,π‘Ž)Γ—{𝐿} if π‘ž1 βˆ‰ 𝑇1, etc.

4.1.6 NBW Complement of DBW

Proposition 4.7. For any DBW, there is an NBW that recognizes the complement of the language of the DBW.

Proof 4.7. Let 𝐴 = (Ξ£,𝑄,π‘ž0,𝛿,𝑇) be a DBW. Intuitively, in a complement NBW 𝐡 with the β€œsame” (not exactly) set of states as 𝐴, the accepted words are exactly those whose run stops seeing 𝑇 after finitely many steps; 𝐡 nondeterministically guesses that point.

We define 𝐡 = (Ξ£,𝑄′,𝑄0,𝛿′,𝑇′) as follows: 𝑄′= (𝑄 Γ—{1})βˆͺ((𝑄 \𝑇)Γ—{2}), i.e., a copy of 𝑄 plus an extra copy of 𝑄 \𝑇. 𝑄0 = {(π‘ž0,1)}. 𝑇′= (𝑄 \𝑇)Γ—{2}. 𝛿′((π‘ž,1),π‘Ž)= {𝛿(π‘ž,π‘Ž)}Γ—{1,2}, 𝛿′((π‘ž,2),π‘Ž)= ({𝛿(π‘ž,π‘Ž)}Γ—{2})βˆ©π‘‡β€².

The words accepted by 𝐡 are not accepted by 𝐴: their runs visit 𝑇′ infinitely many times, and once a run enters the second copy it can never visit 𝑇 Γ—{1} (which corresponds to 𝑇 in 𝐴) again, so the run of 𝐴 visits 𝑇 only finitely often.

Conversely, the words accepted by 𝐴 are not accepted by 𝐡: their underlying run visits 𝑇 infinitely often, so a run of 𝐡 either stays in the first copy forever (never visiting 𝑇′) or switches to the second copy and gets stuck at the next visit to 𝑇.

Remark: This method is not applicable for finding an NBW complement of an NBW. In the nondeterministic setting, a word may have one run that visits 𝑇 infinitely often (so the NBW accepts it) and another run that eventually avoids 𝑇 (so the construction above would accept it as well).

4.1.7 Emptiness of DBW/NBW

Given a DBW/NBW 𝐴 with 𝑛 states, decide whether 𝐿(𝐴)= βˆ….

𝐿(𝐴)β‰  βˆ… iff there exists an accepting run of 𝐴 with an infinite word: Any valid run of 𝐴 corresponds to a unique word, so the problem is equivalent to deciding whether 𝐴 has an accepting run.

Proposition 4.8. We can find a corresponding Buchi-game for emptiness problem: Let 𝑇 be the set of accepting states in 𝐴. We call the graph (𝑉,𝐸) defined below the underlying graph of 𝐴: 𝑉 = 𝑄, and (π‘ž1,π‘ž2)∈𝐸 iff βˆƒ β‘π‘Ž ∈Σ,π‘ž2 βˆˆπ›Ώ(π‘ž1,π‘Ž) (since we only care about the existence of an accepting run, not the characters in the run).

Define a single-player game (𝑉,𝑉1,𝑉2,𝐸), where 𝑉1 = 𝑉 and 𝑉2 = βˆ…, with objective π΅π‘’π‘β„Žπ‘–(𝑇). Then 𝐿(𝐴)β‰  βˆ… iff there exists an outcome starting from an initial vertex βˆˆπ΅π‘’π‘β„Žπ‘–(𝑇), which is equivalent to 𝑄0 βˆ©π‘Šπ‘–π‘›1(π΅π‘’π‘β„Žπ‘–(𝑇))β‰  βˆ….

Proof 4.8. A run corresponds to an outcome, and an accepting run, which visits 𝑇 infinitely often, corresponds to a winning outcome starting from an initial vertex in the game.

Since the outcome is solely controlled by player 1, the condition is equivalent to 𝑄0 βˆ©π‘Šπ‘–π‘›1(π΅π‘’π‘β„Žπ‘–(𝑇))β‰  βˆ….

Thus, this is solvable using the algorithm for finding the winning set of a Buchi game in 𝑂(𝑛2) or 𝑂(π‘›π‘š) time.

1.
Linear-time algorithm

Proposition 4.9. Consider the underlying game graph. Since there are finitely many vertices, a run visits 𝑇 infinitely often iff it visits some particular vertex in 𝑇 infinitely often. Furthermore, this is equivalent to: (1) there is a cycle containing a vertex in 𝑇, and (2) the cycle is reachable from the initial vertices.

Consider the strongly connected components (SCCs) of the graph. Condition (1) is equivalent to some accepting state being contained in a non-trivial SCC, i.e., an SCC that is not a single vertex without a self-loop (equivalently, either a single vertex with a self-loop or an SCC with at least two vertices).

The algorithm:

(a)
Find all reachable vertices: Linear time.
(b)
Find all SCCs: Linear time (Tarjan's algorithm).
(c)
Check if a non-trivial SCC contains an accepting state and is reachable: Linear time.

Overall, it is solvable in linear time 𝑂(𝑛+π‘š).
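The three steps above can be sketched as follows. The encoding (vertices 0..n-1, an edge list, sets for initial and accepting states) is our own; the SCC step uses Tarjan's algorithm in a recursive form for brevity.

```python
from collections import defaultdict

def nbw_nonempty(n, edges, initial, accepting):
    """L(A) is non-empty iff some reachable non-trivial SCC of the
    underlying graph contains an accepting vertex. Runs in O(n + m)."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)

    # (a) vertices reachable from the initial set
    reach, stack = set(initial), list(initial)
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in reach:
                reach.add(v)
                stack.append(v)

    # (b) SCC decomposition (Tarjan's algorithm, recursive sketch)
    index, low, on_stack, st, sccs = {}, {}, set(), [], []
    def dfs(u):
        index[u] = low[u] = len(index)
        st.append(u); on_stack.add(u)
        for v in adj[u]:
            if v not in index:
                dfs(v)
                low[u] = min(low[u], low[v])
            elif v in on_stack:
                low[u] = min(low[u], index[v])
        if low[u] == index[u]:
            comp = []
            while True:
                w = st.pop(); on_stack.discard(w); comp.append(w)
                if w == u:
                    break
            sccs.append(comp)
    for u in range(n):
        if u not in index:
            dfs(u)

    # (c) a qualifying SCC: non-trivial, reachable, meets the accepting set
    for comp in sccs:
        nontrivial = len(comp) > 1 or comp[0] in adj[comp[0]]
        if nontrivial and comp[0] in reach and any(q in accepting for q in comp):
            return True
    return False

# 0 -> 1 <-> 2 with accepting state 2 on a reachable cycle
print(nbw_nonempty(3, [(0, 1), (1, 2), (2, 1)], {0}, {2}))  # True
```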

4.1.8 Universality Problem of DBW

The dual of emptiness.

Given a DBW 𝐴, decide whether 𝐿(𝐴)= Ξ£πœ”.

For a DBW, every word has a unique run and vice versa. Thus, 𝐿(𝐴)= Ξ£πœ” iff every run of 𝐴 is accepting.

Proposition 4.10. We have a corresponding coBuchi game for universality problem: Let 𝑇 be the set of accepting states in 𝐴. Define a game with the underlying graph of 𝐴 as in the emptiness problem with objective π‘π‘œπ΅π‘’π‘β„Žπ‘–(𝑇). Then, every run is accepting iff every path starting from an initial vertex is in π΅π‘’π‘β„Žπ‘–(𝑇), i.e., 𝑄0 βˆ©π‘Šπ‘–π‘›1(π‘π‘œπ΅π‘’π‘β„Žπ‘–(𝑇))= βˆ….

Thus, the problem is solvable using the algorithm for finding the winning set of a coBuchi game in 𝑂(π‘›π‘š) or 𝑂(𝑛2) time.

1.
Linear-time algorithm

Proposition 4.11. 𝐿(𝐴)= Ξ£πœ” iff every reachable cycle in the graph contains an accepting state.

Proof 4.9. Otherwise, there is a path that repeats a cycle containing no accepting state infinitely many times, and hence does not visit 𝑇 infinitely often.

Every cycle contains an accepting state iff, after the removal of all accepting vertices, no cycle exists: This can be checked easily.

The algorithm:

(a)
Remove all the unreachable vertices: Linear time.
(b)
Remove all the accepting vertices: Linear time.
(c)
Check whether there is a cycle in the remaining graph: Linear time.

Overall, it is solvable in linear time 𝑂(𝑛+π‘š).
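A sketch of this three-step check, under our own encoding of the underlying graph (and assuming, as the argument above does, that every word has a run in the DBW):

```python
from collections import defaultdict, deque

def dbw_universal(n, edges, initial, accepting):
    """Universality of the underlying graph condition: every reachable
    cycle contains an accepting vertex, i.e., the reachable graph is
    acyclic once accepting vertices are removed. Runs in O(n + m)."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)

    # (a) remove unreachable vertices: keep only those reachable from Q0
    reach, stack = set(initial), list(initial)
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in reach:
                reach.add(v)
                stack.append(v)

    # (b) remove accepting vertices
    rest = reach - set(accepting)

    # (c) cycle check via Kahn's algorithm: a cycle remains iff
    # topological sorting cannot consume every remaining vertex
    indeg = {u: 0 for u in rest}
    for u in rest:
        for v in adj[u]:
            if v in rest:
                indeg[v] += 1
    queue = deque(u for u in rest if indeg[u] == 0)
    removed = 0
    while queue:
        u = queue.popleft()
        removed += 1
        for v in adj[u]:
            if v in rest:
                indeg[v] -= 1
                if indeg[v] == 0:
                    queue.append(v)
    return removed == len(rest)   # no cycle left -> universal

print(dbw_universal(2, [(0, 1), (1, 0)], {0}, {1}))    # True
print(dbw_universal(2, [(0, 1), (1, 0)], {0}, set()))  # False
```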

4.1.9 Universality Problem of NBW

Proposition 4.12. Non-universality of NBW is NP-hard.

Definition 4.5 (3-coloring Problem). Given an undirected graph 𝐺 = (𝑉,𝐸), is it possible to color the vertices in 𝐺 with 3 colors such that no two adjacent vertices have the same color?

Proof 4.10. We provide a reduction from 3-coloring, which is a classic NP-complete problem.

Given an undirected graph 𝐺 = (𝑉,𝐸), let Ξ£ = 𝑉 Γ—{π‘Ÿ,𝑔,𝑏}. We create the following NBWs:

1.
The first NBW 𝐴 is defined to accept all words that, when ignoring the colors, are not valid paths in 𝐺. That is, a word (𝑣1,𝑐1),(𝑣2,𝑐2),(𝑣3,𝑐3),… is accepted iff 𝜌 = βŸ¨π‘£1,𝑣2,𝑣3,β€¦βŸ© is an invalid path in 𝐺.

𝐴 = (Ξ£,𝑉 βˆͺ{βŠ₯,π‘ž0},{π‘ž0},𝛿,{βŠ₯}) with Buchi condition, where:

𝛿(π‘ž0,(𝑣,𝑐))= 𝑣,𝛿(𝑒,(𝑣,𝑐))= {𝑣, ifΒ (𝑒,𝑣)∈𝐸 βŠ₯,ifΒ (𝑒,𝑣)βˆ‰ 𝐸 ,𝛿(βŠ₯,(𝑣,𝑐))= βŠ₯.

A word is accepted by 𝐴 iff the run ever transits through (𝑒,(𝑣,𝑐)) for some (𝑒,𝑣)βˆ‰ 𝐸. Thus, the path associated with every accepting word contains an edge that is not in 𝐸, and hence is invalid.

2.
The second NBW 𝐡 is defined to accept all words in which there is a vertex that appears with two different colors.

𝐡 = (Ξ£,𝐴βˆͺ{π‘ž0},{π‘ž0},𝛿,𝐡 with Buchi condition, where 𝐴 = {π‘Žπ‘£,𝑐1,𝑐2|𝑣 βˆˆπ‘‰,𝑐1,𝑐2 ∈{π‘Ÿ,𝑔,𝑏},𝑐1 β‰  𝑐2}, 𝐡 = {𝑏𝑣,𝑐1,𝑐2|𝑣 βˆˆπ‘‰,𝑐1,𝑐2 ∈{π‘Ÿ,𝑔,𝑏},𝑐1 β‰  𝑐2}, and

π‘ž0 βˆˆπ›Ώ(π‘ž0,βˆ—),βˆ€ ⁑𝑐1,𝑐2 ∈{π‘Ÿ,𝑔,𝑏},𝑐1 β‰  𝑐2 : π‘Žπ‘£,𝑐1,𝑐2 βˆˆπ›Ώ(π‘ž0,(𝑣,𝑐1)), π‘Žπ‘£,𝑐1,𝑐2 βˆˆπ›Ώ(π‘Žπ‘£,𝑐1,𝑐2,βˆ—), 𝑏𝑣,𝑐1,𝑐2 βˆˆπ›Ώ(π‘Žπ‘£,𝑐1,𝑐2,(𝑣,𝑐2)), 𝑏𝑣,𝑐1,𝑐2 βˆˆπ›Ώ(𝑏𝑣,𝑐1,𝑐2,βˆ—).

An accepting word is of the form …(𝑣,𝑐1)…(𝑣,𝑐2)….

[Figure: gadget of 𝐡 for a vertex 𝑣 and colors 𝑐1 β‰  𝑐2]
3.
The third NBW 𝐢 is defined to accept all words that do not contain all edges of the graph. Defining 𝐢 by an expression is rather complicated, so we only show a subpart of it: the initial state set 𝑄0 includes π‘ž(𝑒,𝑣) for every undirected edge (𝑒,𝑣)∈𝐸. The subpart for an edge (𝑒,𝑣)∈𝐸 is:
[Figure: subpart of 𝐢 for the edge (𝑒,𝑣)]

which accepts all the words that miss the edge (𝑒,𝑣) (in either direction).

4.
The fourth NBW 𝐷 is defined to accept all the words that have the same color twice in a row.
[Figure: the NBW 𝐷]
5.
The NBW 𝐸 = 𝐴βˆͺ𝐡 βˆͺ𝐢 βˆͺ𝐷 (𝐿(𝐸)= 𝐿(𝐴)βˆͺ𝐿(𝐡)βˆͺ𝐿(𝐢)βˆͺ𝐿(𝐷)).

We claim that 𝐸 is universal iff 𝐺 is not 3-colorable, i.e., 𝐸 is not universal iff 𝐺 is 3-colorable.

Only if side: 𝐸 does not accept a word iff the word (a) is a valid path in 𝐺, (b) assigns a unique color to each vertex, (c) visits all edges of 𝐺, and (d) never uses the same color twice in a row. Since every edge of 𝐺 then appears somewhere as a pair of consecutive letters, (a), (c) and (d) force the endpoints of every edge to receive different colors, so such a word represents a valid 3-coloring of the graph.

If side: If 𝐺 has a valid 3-coloring, then any infinite path that visits all edges, with each vertex labeled by its color, is an infinite word that is not accepted by 𝐸.

Corollary 4.1. Unless P=NP, there is no polynomial-time complementation of NBW.

Otherwise, we could run this polynomial-time complementation and apply the (polynomial-time) emptiness algorithm to the complement, solving the universality problem in polynomial time.

4.1.10 Complementation NBW

Given an NBW 𝐴, find an NBW 𝐡 such that 𝐿(𝐡)= Ξ£πœ” \𝐿(𝐴).

Proposition 4.13. If we color each edge of the complete 6-vertex graph with red or blue, then there must be a monochrome triangle.

Proof 4.11. Vertex 1 has five incident edges, so at least three of them have the same color, say blue. Consider the three vertices connected to vertex 1 by blue edges. There are two cases: (1) at least two of them are connected by a blue edge, forming a blue triangle together with vertex 1; (2) all three are pairwise connected by red edges, forming a red triangle.
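This finite case is small enough to verify exhaustively: all 2^15 = 32768 red/blue colorings of the 15 edges of K6 can be enumerated, and in contrast K5 has a triangle-free 2-coloring (pentagon edges one color, pentagram edges the other). The check below is our own illustration.

```python
from itertools import combinations, product

# Exhaustive check: every 2-coloring of K6 has a monochrome triangle.
edges = list(combinations(range(6), 2))
triangles = list(combinations(range(6), 3))

def mono_triangle(color):
    return any(color[(a, b)] == color[(a, c)] == color[(b, c)]
               for a, b, c in triangles)

assert all(mono_triangle(dict(zip(edges, c)))
           for c in product("RB", repeat=len(edges)))

# K5, however, admits a triangle-free coloring: pentagon vs pentagram.
edges5 = list(combinations(range(5), 2))
color5 = {(u, v): "R" if (v - u) % 5 in (1, 4) else "B" for u, v in edges5}
print(any(color5[(a, b)] == color5[(a, c)] == color5[(b, c)]
          for a, b, c in combinations(range(5), 3)))  # False
```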

Theorem 4.1 (Ramsey Theorem for Infinite Graphs). Consider a complete graph 𝐺 = (𝑉,𝐸) on infinitely many vertices where every edge is colored by one of 𝑐 colors. Then 𝐺 contains an infinite monochrome clique (a complete subgraph induced by some set of vertices).

Proof 4.12. Let π‘Ž1 βˆˆπ‘‰ be an arbitrary vertex. Since there are finitely many colors, there is a color 𝑐1 such that π‘Ž1 has infinitely many neighbours with edge of color 𝑐1. Let the set of these neighbours be 𝑉1.

Consider the complete infinite graph 𝐺[𝑉1] induced by 𝑉1. For the same reason, there exist a vertex π‘Ž2 βˆˆπ‘‰1, a color 𝑐2, and an infinite set 𝑉2 βŠ†π‘‰1 with the same properties. Continuing this process forever, we get three infinite sequences π‘Ž1,π‘Ž2,…, 𝑐1,𝑐2,…, 𝑉1,𝑉2,….

Since there are finitely many colors, some color 𝑐′ appears infinitely many times in 𝑐1,𝑐2,…. Let 𝑉′= {π‘Žπ‘– | 𝑐𝑖 = 𝑐′}. For 𝑖 < 𝑗 we have π‘Žπ‘— βˆˆπ‘‰π‘–, so the edge {π‘Žπ‘–,π‘Žπ‘—} has color 𝑐𝑖 = 𝑐′; hence 𝐺[𝑉′] is an infinite monochrome clique.

Definition 4.6 (Equivalence class). Given an NBW 𝐴, we define an equivalence relation ∼ over all non-empty finite words as follows: for 𝑣,𝑀 ∈Σ+, we say 𝑣 βˆΌπ‘€ if, for all states π‘ž,π‘žβ€², (1) 𝐴 can move from π‘ž to π‘žβ€² by reading 𝑣 iff it can do so by reading 𝑀, and (2) 𝐴 can move from π‘ž to π‘žβ€² by reading 𝑣 while visiting a target state iff it can do so by reading 𝑀.

Define a function 𝑓𝑣 : 𝑄 β†’2𝑄 Γ—2𝑄 for each non-empty finite word 𝑣 that maps π‘ž to the pair ({π‘žβ€² | π‘žβ€² is reachable from π‘ž using 𝑣}, {π‘žβ€² | π‘žβ€² is target-reachable from π‘ž using 𝑣}). Then 𝑣 βˆΌπ‘€ iff 𝑓𝑣 ≑𝑓𝑀. Let 𝐿𝑓 be the equivalence class of the function 𝑓, i.e., {𝑣 | 𝑓 ≑𝑓𝑣}. Since there are finitely many such functions, there are finitely many equivalence classes.
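The profile 𝑓𝑣 of a word is straightforward to compute. The encoding below (𝛿 as a dictionary from (state, symbol) to a set of states) is our own; whether the endpoints of a run count as "visiting 𝑇" is a convention choice, and here both do.

```python
def profile(delta, states, targets, word):
    """f_v: map each state q to (reachable via word, target-reachable
    via word), tracking for each run whether T was seen so far."""
    prof = {}
    for q in states:
        frontier = {(q, q in targets)}          # (current state, T seen?)
        for a in word:
            frontier = {(q2, seen or q2 in targets)
                        for q1, seen in frontier
                        for q2 in delta.get((q1, a), set())}
        prof[q] = (frozenset(q2 for q2, _ in frontier),
                   frozenset(q2 for q2, seen in frontier if seen))
    return prof

# Tiny example NBW (ours): from state 0, 'a' may stay or move to target 1.
delta = {(0, "a"): {0, 1}, (1, "a"): {1}, (0, "b"): {0}}
S, T = {0, 1}, {1}
print(profile(delta, S, T, "a") == profile(delta, S, T, "aa"))  # True: a ~ aa
print(profile(delta, S, T, "a") == profile(delta, S, T, "b"))   # False
```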

Proposition 4.14. Every 𝐿𝑓 is regular (recognized by some NFA).

Definition 4.7. We can find an NFA that recognizes exactly 𝐿𝑓 :

Proposition 4.15. For every infinite word 𝑀 βˆˆΞ£πœ”, there is a decomposition 𝑀 βˆˆπΏπ‘“ (𝐿𝑔)πœ”, where 𝐿𝑓 ,𝐿𝑔 are two equivalence classes of finite words. That is, 𝑀 must be the concatenation of some finite word in 𝐿𝑓 and infinitely many finite words in 𝐿𝑔.

Proof 4.13. Let 𝑀 = 𝑀1𝑀2… be any infinite word, where each 𝑀𝑖 is a symbol. We define 𝑀[𝑖..𝑗)= 𝑀𝑖𝑀𝑖+1β€¦π‘€π‘—βˆ’1. Consider the infinite complete graph 𝐺 = (N,NΓ—N). We view equivalence classes as colors, and color the edge {𝑖,𝑗} (𝑖 < 𝑗) with the (unique) 𝐿𝑓 that contains 𝑀[𝑖..𝑗).

By the Ramsey theorem, there is a color 𝐿𝑔 such that there is an infinite monochrome clique 𝑖0 < 𝑖1 < … in which edges are all colored 𝐿𝑔. Particularly, {𝑖0,𝑖1},{𝑖1,𝑖2},… all have color 𝐿𝑔. Thus, 𝑀 can be decomposed as 𝑀[1..𝑖0)𝑀[𝑖0..𝑖1)𝑀[𝑖1..𝑖2)…, implying that 𝑀 βˆˆπΏπ‘“ (𝐿𝑔)πœ”, where 𝐿𝑓 is the class containing 𝑀[1..𝑖0).

Proposition 4.16. Let 𝐿𝑓 and 𝐿𝑔 be equivalence classes. Either 𝐿(𝐴)βˆ©πΏπ‘“ (𝐿𝑔)πœ” = βˆ… or 𝐿(𝐴)βŠ‡πΏπ‘“ (𝐿𝑔)πœ”.

Proof 4.14. Let 𝑀 βˆˆπΏπ‘“ (𝐿𝑔)πœ”, 𝑀 = 𝑀1𝑀2𝑀3… where 𝑀1 βˆˆπΏπ‘“ and 𝑀2,𝑀3,β‹― βˆˆπΏπ‘” are finite words. Suppose 𝑀 ∈𝐿(𝐴). By the definition of the equivalence classes, we can replace each 𝑀𝑖 by any word in the same class without affecting whether some run visits 𝑇 infinitely often. Thus, every altered infinite word is also in 𝐿(𝐴).

Proposition 4.17. πœ”-regular languages are closed under complementation. Any NBW has an NBW complement.

Proof 4.15 (Complementing NBW). There are finitely many 𝐿𝑓 , so there are finitely many sets 𝐿𝑓 (𝐿𝑔)πœ”.

We have an NBW for every πΏπ‘”πœ”: Take the NFA for 𝐿𝑔. For each accepting state, create edges that the initial states have. The initial states and accepting states for the NBW are the same as the NFA.

We have an NBW for every 𝐿𝑓 (𝐿𝑔)πœ”: Take the NBW for πΏπ‘”πœ” and the NFA for 𝐿𝑓 . For each accepting state of the NFA, create edges that the initial states of the NBW have. The initial states of the resulting NBW are the initial states of the NFA, and the accepting states of the resulting NBW are the accepting states of the NBW for πΏπ‘”πœ”.

Then take the union of the NBWs of all 𝐿𝑓 (𝐿𝑔)πœ” that do not intersect 𝐿(𝐴). Each chosen set is disjoint from 𝐿(𝐴), and every word in 𝐿(𝐴)𝑐 lies in some 𝐿𝑓 (𝐿𝑔)πœ”, which by the previous proposition must then be disjoint from 𝐿(𝐴); hence the union is exactly 𝐿(𝐴)𝑐.

4.1.11 Containment Problem

Given two DBW/NBW 𝐴 and 𝐡, decide whether 𝐿(𝐴)βŠ†πΏ(𝐡).

Note that 𝐿(𝐴)βŠ†πΏ(𝐡)⟺𝐿(𝐴)∩𝐿(𝐡)𝑐 = βˆ…βŸΊπΏ(𝐴)∩𝐿(𝐡𝑐)= βˆ…. If we can find the complement NBW 𝐡𝑐 of 𝐡 (such that 𝐿(𝐡𝑐)= 𝐿(𝐡)𝑐), find the intersection automaton, and then answer the emptiness problem.

4.2 Parity Automata

Definition 4.8 (Parity Automaton). A Parity Automaton is an Automaton on Infinite words with the Parity acceptance condition: Let 𝑝 : 𝑄 β†’{0,...,𝑑} be a priority function. A run 𝜚 of the automaton is accepting if it satisfies the parity condition, i.e. liminf ⁑ (𝑝(πœšπ‘–)) is even (the least priority seen infinitely often is even).

We abbreviate Deterministic Parity Automaton to DPW, and the nondeterministic one to NPW.