### Model Checking Markov Chains Against Unambiguous Automata: The Qualitative Case

This is my first blog post. The purpose of this blog is to describe some neat ideas from my research. Ideally only one idea per post, which explains the title of this blog.

Today I start with a classical topic from probabilistic verification: model checking Markov chains against $\omega$-regular specifications. I will focus on specifications given by Büchi automata, which are automata like this one:

This Büchi automaton accepts all infinite words over $\{a,b\}$ that start with $a$. The automaton is nondeterministic, but it is also unambiguous, which means that every infinite word has at most one accepting run.

A Markov chain, on the other hand, generates infinite words. A Markov chain looks like this:

For instance, with probability $\frac12 \cdot \frac12 \cdot 1 = \frac14$ it generates an infinite word that starts with $a b a$. For simplicity, we consider only a very simple Markov chain in this post:

This Markov chain generates an infinite word over $\{a,b\}$ uniformly at random. Having fixed this random word generator, we consider the following problem:
Given a nondeterministic Büchi automaton, is the probability that it accepts a random (infinite) word positive?
Let's make two further assumptions: The given automaton is strongly connected (every state is reachable from every other state) and all states are accepting:

These assumptions don't take much away from the key challenges. In fact, the problem is PSPACE-complete, with or without all mentioned assumptions. The only thing that will make the problem easier is if the given automaton is unambiguous. And that is what this post is about.

For perspective, let's first solve the problem for possibly ambiguous automata. We determinize the automaton (starting from state 1) with the standard subset construction:
Remembering that the letters have probability 1/2 each, we see a Markov chain shine through:
That Markov chain, call it $\mathcal{M}_{\mathit{det}}$, may have two kinds of bottom SCCs: a rejecting one, which is the one with the empty set $\emptyset$, and accepting ones, which are the other bottom SCCs. In the example there is one rejecting and one accepting bottom SCC. The probability that the given automaton accepts a random word is equal to the probability that $\mathcal{M}_{\mathit{det}}$ reaches an accepting bottom SCC, here $\frac12$.
The probability of accepting a random word is positive if and only if $\, \mathcal{M}_{\mathit{det}}$ has an accepting bottom SCC.
The determinized automaton is of exponential size, so the proposition does not directly lead to a polynomial-time algorithm.
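To make the construction concrete, here is a minimal Python sketch (not from the paper) of the subset construction for the example automaton. The transition relation below is read off the transition matrices $T(a)$ and $T(b)$ given further down in the post; state names and the start state $1$ follow the post.

```python
from itertools import chain

# Transitions of the example automaton, read off the matrices T(a), T(b):
# an entry 1 in row i, column j of T(c) means a transition i --c--> j.
# States are 1, 2, 3; state 1 is the start state; all states are accepting.
delta = {
    ('a', 1): {1, 3}, ('a', 2): {1, 3}, ('a', 3): set(),
    ('b', 1): set(),  ('b', 2): {2},    ('b', 3): {2},
}

def step(subset, letter):
    """Subset construction: successor of a state set under one letter."""
    return frozenset(chain.from_iterable(delta[(letter, q)] for q in subset))

# Explore the deterministic automaton from the start subset {1}.
start = frozenset({1})
succ = {}
todo = [start]
while todo:
    S = todo.pop()
    if S in succ:
        continue
    succ[S] = {c: step(S, c) for c in 'ab'}
    todo.extend(succ[S].values())

def reachable(S):
    seen, todo = {S}, [S]
    while todo:
        X = todo.pop()
        for Y in succ[X].values():
            if Y not in seen:
                seen.add(Y)
                todo.append(Y)
    return seen

# A subset lies in a bottom SCC iff everything reachable from it can reach it back.
bottom = [S for S in succ if all(S in reachable(X) for X in reachable(S))]
accepting_bottom = [S for S in bottom if S]  # nonempty subsets are accepting
print(sorted(map(sorted, accepting_bottom)))  # → [[1, 3], [2]]
```

As in the post, the empty set forms the rejecting bottom SCC, and the subsets $\{1,3\}$ and $\{2\}$ together form the single accepting bottom SCC.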

In the unambiguous case we can do better: there is a polynomial-time algorithm. Consider again our unambiguous automaton

and its transition matrices \begin{aligned} T(a) &= \begin{pmatrix} 1 & 0 & 1 \\ 1 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} \\[1em] T(b) &= \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 0 \end{pmatrix} \end{aligned} and the transition matrix of an "average" letter $T \ = \ \frac12 (T(a) + T(b)) \ = \ \begin{pmatrix} \frac12 & 0 & \frac12 \\ \frac12 & \frac12 & \frac12 \\ 0 & \frac12 & 0 \end{pmatrix}\,.$ More precisely, the $(i,j)$-entry of $T$ is the probability that a random letter labels a transition from state $i$ to state $j$. This generalizes smoothly to random finite words:
The $(i,j)$-entry of $\ T^n$ is the probability that a random word of length $n$ labels a path from state $i$ to state $j$.
For this property, unambiguousness of the automaton is crucial: every word labels at most one path from $i$ to $j$. Otherwise a word with several $i$-to-$j$ paths would be counted multiple times, and the entries of $T^n$ could exceed the actual probabilities (and even exceed $1$).
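The proposition can be checked by brute force for the example: enumerate all words of length $n$, multiply out their transition matrices, and compare with $T^n$. A small sketch (my own, not from the paper), using exact rational arithmetic:

```python
from fractions import Fraction
from itertools import product

# Transition matrices of the example automaton, as in the post.
Ta = [[1, 0, 1], [1, 0, 1], [0, 0, 0]]
Tb = [[0, 0, 0], [0, 1, 0], [0, 1, 0]]
M = {'a': Ta, 'b': Tb}

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# The "average letter" matrix T = (T(a) + T(b)) / 2, and its n-th power.
T = [[Fraction(Ta[i][j] + Tb[i][j], 2) for j in range(3)] for i in range(3)]
n = 4
Tn = [[Fraction(i == j) for j in range(3)] for i in range(3)]
for _ in range(n):
    Tn = matmul(Tn, T)

# Count, for each pair (i, j), how many length-n words label a path i -> j.
counts = [[0] * 3 for _ in range(3)]
for w in product('ab', repeat=n):
    P = [[int(i == j) for j in range(3)] for i in range(3)]
    for c in w:
        P = matmul(P, M[c])
    # Unambiguity: every word labels at most one path from i to j.
    assert all(e in (0, 1) for row in P for e in row)
    for i in range(3):
        for j in range(3):
            counts[i][j] += P[i][j]

# The (i, j)-entry of T^n is exactly the fraction of such words.
assert all(Tn[i][j] == Fraction(counts[i][j], 2 ** n)
           for i in range(3) for j in range(3))
```

The inner assertion is exactly where unambiguity enters: for an ambiguous automaton some word matrix $T(w)$ would have an entry greater than $1$, and the equality with $T^n$ would no longer read as a probability.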

Let's analyze the limit behaviour of the matrix powers $T, T^2, T^3, \ldots$. By the proposition, all entries of $T^n$ are at most 1. There are two cases:
• $T, T^2, T^3, \ldots$ converges to the zero matrix.
• $T, T^2, T^3, \ldots$ does not converge to the zero matrix.
By strong connectedness, either all or no entries of $T^n$ converge to 0 for $n \to \infty$.
• If the Markov chain $\mathcal{M}_{\mathit{det}}$ does not have an accepting bottom SCC then an infinite random word will almost surely lead to the rejecting bottom SCC. So the probability that a random infinite word labels an automaton path from a start state is zero. By the proposition, we are in the first case.
• If the Markov chain $\mathcal{M}_{\mathit{det}}$ has an accepting bottom SCC then some finite word $w$ (which has positive probability) leads to this accepting bottom SCC, and all infinite extensions of $w$ label infinite paths of the automaton. So the probability that a random infinite word labels an automaton path is positive. By the proposition, we are in the second case.
It follows that the probability of accepting a random word is positive if and only if $T, T^2, T^3, \ldots$ does not converge to the zero matrix.
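For the example we can watch this dichotomy play out numerically. A short sketch (assumptions: the matrix $T$ from the post, exact arithmetic via `Fraction`):

```python
from fractions import Fraction

half = Fraction(1, 2)
# The "average letter" matrix T from the post.
T = [[half, 0,    half],
     [half, half, half],
     [0,    half, 0]]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Iterate the powers T, T^2, T^3, ... up to T^10 and watch them stabilise.
P = T
for n in range(2, 11):
    P = matmul(P, T)

# All entries of P are 1/4 or 1/2: the powers stay bounded away from
# the zero matrix, so we are in the second case.
print(P)
```

In fact, for this example $T^n = T^2$ for all $n \ge 2$, so the powers converge, but not to the zero matrix.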

Since all entries of $T^n$ are at most $1$, standard matrix theory says that the spectral radius of $T$ (i.e., the largest absolute value of the eigenvalues of $T$) is at most $1$. We also have that $T, T^2, T^3, \ldots$ converges to the zero matrix if and only if the spectral radius of $T$ is less than $1$. So:
The probability of accepting a random word is positive if and only if the spectral radius of $\ T$ is $1$.
Since $T$ is nonnegative, Perron–Frobenius theory guarantees that its spectral radius is itself an eigenvalue. Hence the spectral radius of $T$ is $1$ if and only if $T$ has an eigenvector with eigenvalue $1$, i.e., a nonzero vector $x$ with $T x = x$. This amounts to checking whether the linear system $(T - I)x = 0$ has a nonzero solution, i.e., whether $T - I$ is singular. In our example we have: $T \begin{pmatrix}1 \\ 2 \\ 1 \end{pmatrix} \ = \ \begin{pmatrix} \frac12 & 0 & \frac12 \\ \frac12 & \frac12 & \frac12 \\ 0 & \frac12 & 0 \end{pmatrix} \begin{pmatrix}1 \\ 2 \\ 1 \end{pmatrix} \ = \ \begin{pmatrix}1 \\ 2 \\ 1 \end{pmatrix}$ We get:
Given an unambiguous Büchi automaton, one can decide in polynomial time whether the probability that it accepts a random word is positive.
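The eigenvector test can be sketched in a few lines of Python (my own sketch; for the 3-state example a determinant suffices, while in general one would test singularity of $T - I$ by Gaussian elimination):

```python
from fractions import Fraction

half = Fraction(1, 2)
# The "average letter" matrix T from the post.
T = [[half, 0,    half],
     [half, half, half],
     [0,    half, 0]]

# Check T x = x for the eigenvector x = (1, 2, 1) from the post.
x = [1, 2, 1]
Tx = [sum(T[i][j] * x[j] for j in range(3)) for i in range(3)]
assert Tx == x

# Eigenvalue 1 means T - I is singular; for a 3x3 matrix we can simply
# test det(T - I) == 0 (exactly, thanks to Fraction arithmetic).
A = [[T[i][j] - (i == j) for j in range(3)] for i in range(3)]
det = (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
     - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
     + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))
print(det)  # → 0: the spectral radius is 1, so the acceptance probability is positive
```

All arithmetic is over the rationals, which is one reason the overall algorithm runs in polynomial time.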
A Markov chain may also be part of the input:
Given a Markov chain and an unambiguous Büchi automaton, one can decide in polynomial time whether the probability is positive that the automaton accepts a word randomly generated by the Markov chain.
This result may be sharpened by replacing "polynomial time" by the complexity class "NC", which captures problems that can be efficiently solved in parallel. This combines nicely with a translation from LTL to unambiguous Büchi automata: the translation is exponential but can be computed by a PSPACE-transducer. Altogether, we obtain a PSPACE algorithm for model checking Markov chains against LTL. The problem is PSPACE-complete, so this automata-theoretic algorithm is optimal.

In a forthcoming post I'll show that, in addition to comparing the probability with zero, one can also compute the probability equally efficiently.

The material from this post is from a CAV'16 paper by Christel Baier, Joachim Klein, Sascha Klüppelholz, David Müller, James Worrell, and myself.
