In my first post I promised to give a polynomial-time procedure to compute the probability that an infinite word (chosen uniformly at random) is accepted by a given unambiguous Büchi automaton.
For example, let's take this Büchi automaton again:
Let's call the probability x_1, x_2, x_3, depending on the start state: \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} := \begin{pmatrix} \text{Pr(state $1$ accepts a random word)}\\ \text{Pr(state $2$ accepts a random word)}\\ \text{Pr(state $3$ accepts a random word)} \end{pmatrix} I showed in the first post that x_1, x_2, x_3 are nonzero in this example. In order to compute the x_i we set up a linear system of equations and solve it. We have x_1 \ = \ \frac12 \cdot x_{1,a} + \frac12 \cdot x_{1,b}\,, where x_{1,a} and x_{1,b} denote the (conditional) probabilities that state 1 accepts a random word, under the condition that the word starts with a and b, respectively.
- x_{1,b} = 0 because state 1 has no b-labelled outgoing transitions;
- x_{1,a} = x_1 + x_3 because state 1 has a-labelled transitions to states 1 and 3. You might be afraid that we are double-counting events here, but we are not: by unambiguousness, it is impossible that the same word, say w, is accepted by both states 1 and 3, because then state 1 would accept the word a w along two different paths.
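Substituting x_{1,b} = 0 and x_{1,a} = x_1 + x_3 back into the equation for x_1 turns it into a constraint purely in the unknowns:

```latex
x_1 \;=\; \frac12 \cdot x_{1,a} + \frac12 \cdot x_{1,b}
    \;=\; \frac12\,(x_1 + x_3) + \frac12 \cdot 0
    \;=\; \frac12\,(x_1 + x_3)\,.
```

Carrying out the analogous substitutions for x_2 and x_3 produces one such equation per state.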
The equation system above says that the probabilities x_1, x_2, x_3 are the entries of an eigenvector of the average transition matrix T, corresponding to eigenvalue 1. (For a uniformly random word over \{a,b\}, we have T = \frac12 (M_a + M_b), where M_\sigma denotes the 0/1 matrix of \sigma-labelled transitions.) It follows from the Perron-Frobenius theorem for irreducible matrices that all such eigenvectors are scalar multiples of each other. So we just need to add one independent equation to the system in order to make the solution unique. What should that equation be?
I claim that this additional equation can be x_1 + x_3 = 1 or x_2 = 1 (either works). The relevant feature of \{1,3\} and \{2\} is that they can be reached from start state 1 but cannot reach the empty set \emptyset in the determinization of our automaton:
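To see this on a concrete transition relation, here is a small Python sketch of the subset construction. Only state 1's transitions (a to states 1 and 3, no b-transition) are stated above; the transitions of states 2 and 3 below are my own guess, since the automaton's picture is not reproduced here:

```python
from collections import deque

# Hypothetical transition relation: state 1 is as described in the post,
# states 2 and 3 are invented for illustration.
delta = {
    'a': {1: {1, 3}, 2: {1, 3}, 3: set()},
    'b': {1: set(),  2: {2},    3: {2}},
}

def reachable_sets(start):
    """All state sets reachable from `start` in the determinization."""
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        for letter in delta:
            t = frozenset(q2 for q in s for q2 in delta[letter][q])
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

print(reachable_sets(frozenset({1})))     # contains the empty set (via letter b)
print(reachable_sets(frozenset({1, 3})))  # only {1,3} and {2}: never empty
print(reachable_sets(frozenset({2})))     # only {2} and {1,3}: never empty
```

For this guessed automaton, \{1,3\} and \{2\} are exactly the reachable sets that cannot reach \emptyset.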
Here is why the equalities x_2 = 1 and x_1 + x_3 = 1 hold:
- Clearly, we have x_2 \le 1. We also have x_1 + x_3 \le 1: if the sum exceeded 1, the two acceptance events would have to overlap, i.e., some word, say w, would be accepted by both states 1 and 3, and so state 1 would accept the word a w along two different paths.
- These inequalities cannot be strict. Let's argue for x_1 + x_3 \ge 1. Recall that no word leads from \{1,3\} to the empty set \emptyset. So after reading any finite word v from \{1,3\}, at least one state is still alive, and hence the probability that state 1 or state 3 accepts v w (where w is a random infinite word) is at least \min\{x_1, x_2, x_3\}, which is bounded away from 0. Applying a general probability principle (Lévy's zero–one law) we can conclude that the probability that state 1 or state 3 accepts a random infinite word is 1. Hence x_1 + x_3 \ge 1.
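The claim that no word leads \{1,3\} to \emptyset can also be checked empirically by sampling random finite prefixes and tracking the set of live states. The transition relation below is hypothetical (only state 1's transitions are given above; the rest is invented for illustration):

```python
import random

# Hypothetical transition relation; states 2 and 3 are guessed.
delta = {
    'a': {1: {1, 3}, 2: {1, 3}, 3: set()},
    'b': {1: set(),  2: {2},    3: {2}},
}

def survives(start, word):
    """Does the set of live states stay nonempty while reading `word`?"""
    live = set(start)
    for letter in word:
        live = {q2 for q in live for q2 in delta[letter][q]}
        if not live:
            return False
    return True

rng = random.Random(2016)
words = [[rng.choice('ab') for _ in range(100)] for _ in range(2000)]

frac_13 = sum(survives({1, 3}, w) for w in words) / len(words)
frac_1 = sum(survives({1}, w) for w in words) / len(words)
print(frac_13)  # 1.0: no sampled word leads {1,3} to the empty set
print(frac_1)   # roughly 0.5: from state 1 alone, the set dies unless the word starts with a
```

This is only an illustration, of course; the actual argument in the text is the zero–one law, not sampling.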
In general, assume the states are 1, \ldots, n. Without much loss of generality we can assume (as before) that every state is accepting and is reachable from every other state. Here is how to compute the probabilities x_1, \ldots, x_n:
1. Determine in polynomial time if all x_i are zero, as shown in the first post. If that's not the case, then all x_i are nonzero, as the states can reach each other.
2. Compute the average transition matrix T and set up the linear system \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} \ = \ T \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}\,.
3. Compute a set of states \alpha \subseteq \{1, \ldots, n\} that can be reached from a single state but cannot reach the empty set \emptyset in the determinization. I explained in the previous post how this can be done in polynomial time, without computing the determinization. Add to the linear system the extra equation x_{i_1} + \cdots + x_{i_k} = 1\,, \quad \text{where }\{i_1, \ldots, i_k\} = \alpha\,.
4. Solve the linear system.
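As a concrete end-to-end sketch, here is the procedure in Python with exact rational arithmetic. The automaton is an assumption: state 1's transitions (a to states 1 and 3, no b-transition) are as described above, while the remaining transitions and the set \alpha = \{1,3\} are guessed rather than taken from the (unshown) figure, and \alpha is hard-coded rather than computed:

```python
from fractions import Fraction as F

# Hypothetical 3-state unambiguous Büchi automaton over {a, b}, all states
# accepting. delta[letter][state] = set of successor states. Only state 1's
# transitions are from the post; the rest is invented for illustration.
delta = {
    'a': {1: {1, 3}, 2: {1, 3}, 3: set()},
    'b': {1: set(),  2: {2},    3: {2}},
}
n = 3

# Step 2: average transition matrix T = 1/2 (M_a + M_b), uniform letters.
T = [[F(0)] * n for _ in range(n)]
for letter, rows in delta.items():
    for i in range(1, n + 1):
        for j in rows[i]:
            T[i - 1][j - 1] += F(1, 2)

# Step 3: a set reachable from state 1 that cannot reach the empty set in
# the determinization (taken as given here instead of computed).
alpha = {1, 3}

# Step 4: solve (I - T) x = 0 together with sum over alpha of x_i = 1.
# I - T has rank n - 1, so one row is redundant and can be replaced by
# the alpha-equation; for this automaton the last row is redundant.
A = [[(F(1) if i == j else F(0)) - T[i][j] for j in range(n)] for i in range(n)]
A[n - 1] = [F(1) if j + 1 in alpha else F(0) for j in range(n)]
b = [F(0)] * (n - 1) + [F(1)]

# Gauss-Jordan elimination with exact arithmetic.
for col in range(n):
    piv = next(r for r in range(col, n) if A[r][col] != 0)
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(n):
        if r != col and A[r][col] != 0:
            f = A[r][col] / A[col][col]
            A[r] = [arj - f * acj for arj, acj in zip(A[r], A[col])]
            b[r] -= f * b[col]
x = [b[i] / A[i][i] for i in range(n)]
print(x)  # [Fraction(1, 2), Fraction(1, 1), Fraction(1, 2)]
```

For this guessed automaton the solver returns x = (1/2, 1, 1/2), consistent with x_2 = 1 and x_1 + x_3 = 1.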
Given a Markov chain and an unambiguous Büchi automaton, one can compute in polynomial time the probability that the automaton accepts a word randomly generated by the Markov chain.
Using an unpublished result (modifying step 3. above), one can sharpen the theorem by replacing "polynomial time" with the complexity class "NC", which captures problems that can be efficiently solved in parallel.
This leads to an automata-theoretic PSPACE procedure for model-checking Markov chains against LTL specifications, and PSPACE is optimal for this problem.
The material from this post is from a CAV'16 paper by Christel Baier, Joachim Klein, Sascha Klüppelholz, David Müller, James Worrell, and myself.