This is my first blog post. The purpose of this blog is to describe some neat ideas from my research. Ideally only one idea per post, which explains the title of this blog.

Today I start with a classical topic from probabilistic verification: model checking Markov chains against $\omega$-regular specifications. I will focus on specifications given by Büchi automata, which are automata like this one:

This Büchi automaton accepts all infinite words over $\{a,b\}$ that start with $a$. The automaton is nondeterministic. The automaton is also

*unambiguous*, which means that every infinite word has at most one accepting run.

A Markov chain, on the other hand, *generates* infinite words. A Markov chain looks like this:

For instance, with probability $\frac12 \cdot \frac12 \cdot 1 = \frac14$ it generates an infinite word that starts with $a b a$. For simplicity, we consider only a very simple Markov chain in this post:

This Markov chain generates an infinite word over $\{a,b\}$ uniformly at random. Having fixed this random word generator, we consider the following problem:

Given a nondeterministic Büchi automaton.

Is the probability that it accepts a random (infinite) word positive?

Let's make two further assumptions: the given automaton is strongly connected (every state is reachable from every other state) and all states are accepting:

Is the probability that it accepts a random (infinite) word positive?

These assumptions don't take much away from the key challenges. In fact, the problem is PSPACE-complete, with or without all mentioned assumptions. The only thing that will make the problem easier is if the given automaton is unambiguous. And that is what this post is about.

For perspective, let's first solve the problem for possibly ambiguous automata. We determinize the automaton (starting from state 1) with the standard subset construction:

Remembering that the letters have probability 1/2 each, we see a Markov chain shine through:

That Markov chain, call it $\mathcal{M}_{\mathit{det}}$, may have two kinds of bottom SCCs: a

*rejecting* one, which is the one with the empty set $\emptyset$, and *accepting* ones, which are the other bottom SCCs. In the example there is one rejecting and one accepting bottom SCC. The probability that the given automaton accepts a random word is equal to the probability that $\mathcal{M}_{\mathit{det}}$ reaches an accepting bottom SCC, here $\frac12$.

The probability of accepting a random word is positive if and only if $\, \mathcal{M}_{\mathit{det}}$ has an accepting bottom SCC.
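To see the proposition in action on the example, here is a small sketch in plain Python (an illustration, not the paper's algorithm): it determinizes the automaton with the subset construction and searches for an accepting bottom SCC. The transition function `delta` is my reading of the example automaton, inferred from the transition matrices $T(a)$ and $T(b)$ shown further below.

```python
# Transitions read off from the matrices T(a), T(b):
# on 'a': 1 -> {1,3}, 2 -> {1,3}, 3 -> {};  on 'b': 2 -> {2}, 3 -> {2}.
delta = {
    (1, 'a'): {1, 3}, (1, 'b'): set(),
    (2, 'a'): {1, 3}, (2, 'b'): {2},
    (3, 'a'): set(),  (3, 'b'): {2},
}

def determinize(start):
    """Standard subset construction, starting from {start}."""
    det, todo = {}, [frozenset({start})]
    while todo:
        S = todo.pop()
        if S in det:
            continue
        det[S] = {}
        for letter in 'ab':
            succ = frozenset(q for p in S for q in delta[(p, letter)])
            det[S][letter] = succ
            todo.append(succ)
    return det

def reachable(S, det):
    """All subset-states reachable from S in the determinized automaton."""
    seen, todo = set(), [S]
    while todo:
        X = todo.pop()
        if X not in seen:
            seen.add(X)
            todo.extend(det[X].values())
    return seen

det = determinize(1)
# A state lies in a bottom SCC iff everything reachable from it reaches it back.
bottom = [S for S in det
          if all(S in reachable(X, det) for X in reachable(S, det))]

# The empty subset forms the rejecting bottom SCC; a nonempty bottom state
# witnesses an accepting bottom SCC, i.e. positive acceptance probability.
has_accepting = any(S for S in bottom)
```

On this example the determinized automaton has the four states $\{1\}$, $\{1,3\}$, $\{2\}$, $\emptyset$, and `has_accepting` comes out true, matching the acceptance probability $\frac12$ computed above.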

The determinized automaton is of exponential size, so the proposition does not directly lead to a polynomial-time algorithm.

In the unambiguous case we can do better: there is a polynomial-time algorithm. Consider again our unambiguous automaton

and its transition matrices \[\begin{aligned} T(a) &= \begin{pmatrix} 1 & 0 & 1 \\ 1 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} \\[1em] T(b) &= \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 0 \end{pmatrix} \end{aligned} \] and the transition matrix of an "average" letter \[ T \ = \ \frac12 (T(a) + T(b)) \ = \ \begin{pmatrix} \frac12 & 0 & \frac12 \\ \frac12 & \frac12 & \frac12 \\ 0 & \frac12 & 0 \end{pmatrix}\,. \] More precisely, the $(i,j)$-entry of $T$ is the probability that a random letter labels a transition from state $i$ to state $j$. This generalizes smoothly to random finite words:

The $(i,j)$-entry of $\ T^n$ is the probability that a random word of length $n$ labels a path from state $i$ to state $j$.
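The proposition can be sanity-checked by brute force for small $n$: averaging, over all $2^n$ words of length $n$, the matrices that count labelled paths must reproduce $T^n$. A sketch in plain Python with exact arithmetic, using the transition matrices of the example automaton from above:

```python
from fractions import Fraction
from itertools import product

# Transition matrices of the example automaton, as given above.
Ta = [[1, 0, 1], [1, 0, 1], [0, 0, 0]]
Tb = [[0, 0, 0], [0, 1, 0], [0, 1, 0]]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, n):
    P = A
    for _ in range(n - 1):
        P = matmul(P, A)
    return P

half = Fraction(1, 2)
T = [[half * (Ta[i][j] + Tb[i][j]) for j in range(3)] for i in range(3)]

# For a word w, the (i,j)-entry of T(w_1) ... T(w_n) counts the paths from
# i to j labelled by w; by unambiguousness it is 0 or 1.  Averaging with
# weight 2^(-n) over all words of length n must reproduce T^n.
n = 5
avg = [[Fraction(0)] * 3 for _ in range(3)]
for word in product([Ta, Tb], repeat=n):
    M = word[0]
    for letter in word[1:]:
        M = matmul(M, letter)
    for i in range(3):
        for j in range(3):
            avg[i][j] += Fraction(M[i][j], 2 ** n)

assert avg == mat_pow(T, n)                     # the proposition, for n = 5
assert max(x for row in avg for x in row) <= 1  # entries are probabilities
```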

For this property, unambiguousness of the automaton is crucial: any word labels **at most one** path from $i$ to $j$.

Let's analyze the limit behaviour of the matrix powers $T, T^2, T^3, \ldots$ By the proposition, all entries of $T^n$ are at most $1$. There are two cases:

- $T, T^2, T^3, \ldots$ converges to the zero matrix.
- $T, T^2, T^3, \ldots$ does not converge to the zero matrix.

Which of the two cases we are in corresponds exactly to the structure of $\mathcal{M}_{\mathit{det}}$:

- If the Markov chain $\mathcal{M}_{\mathit{det}}$ does not have an accepting bottom SCC, then an infinite random word almost surely leads to the rejecting bottom SCC. So the probability that a random infinite word labels an automaton path from a start state is zero. By the proposition, we are in the first case.
- If the Markov chain $\mathcal{M}_{\mathit{det}}$ has an accepting bottom SCC, then some finite word $w$ (which has positive probability) leads to this accepting bottom SCC, and all infinite extensions of $w$ label infinite paths of the automaton. So the probability that a random infinite word labels an automaton path is positive. By the proposition, we are in the second case.
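In our example the second case applies, and this can be observed numerically: computing a high power of $T$ with exact arithmetic shows entries that stay at most $1$ without dying out. A small sketch, assuming the matrix $T$ from above:

```python
from fractions import Fraction

half = Fraction(1, 2)
# The "average letter" matrix T of the example automaton.
T = [[half, 0,    half],
     [half, half, half],
     [0,    half, 0]]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Compute T^50 exactly: all entries stay bounded by 1 (the proposition),
# yet the powers do not approach the zero matrix.
P = T
for _ in range(49):
    P = matmul(P, T)

assert all(x <= 1 for row in P for x in row)
assert max(x for row in P for x in row) >= Fraction(1, 10)
```

The lower bound is no accident: since $T (1,2,1)^\top = (1,2,1)^\top$, every power of $T$ fixes the same vector, so the powers cannot vanish.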

Since all entries of $T^n$ are at most $1$, standard matrix theory says that the spectral radius of $T$ (i.e., the largest absolute value of the eigenvalues of $T$) is at most $1$. We also have that $T, T^2, T^3, \ldots$ converges to the zero matrix if and only if the spectral radius of $T$ is less than $1$. So:

The probability of accepting a random word is positive if and only if the spectral radius of $\ T$ is $1$.

The spectral radius of $T$ is $1$ if and only if $T$ has an eigenvector with eigenvalue $1$, i.e., a nonzero vector $x$ with $T x = x$.
This amounts to checking the satisfiability of a linear system of equations.
In our example we have:
\[
T \begin{pmatrix}1 \\ 2 \\ 1 \end{pmatrix} \ = \
\begin{pmatrix}
\frac12 & 0 & \frac12 \\
\frac12 & \frac12 & \frac12 \\
0 & \frac12 & 0
\end{pmatrix}
\begin{pmatrix}1 \\ 2 \\ 1 \end{pmatrix} \ = \ \begin{pmatrix}1 \\ 2 \\ 1 \end{pmatrix}
\]
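As a minimal sketch (assuming the example matrix $T$ from above), one can verify mechanically that $(1,2,1)^\top$ is a fixed point of $T$ and, equivalently, that the linear system $(T - I)x = 0$ has a nonzero solution because $\det(T - I) = 0$:

```python
from fractions import Fraction

half = Fraction(1, 2)
T = [[half, 0,    half],
     [half, half, half],
     [0,    half, 0]]

# (1, 2, 1) is a fixed point of T, so 1 is an eigenvalue; together with the
# bound on the spectral radius, the spectral radius is exactly 1.
x = [1, 2, 1]
Tx = [sum(T[i][j] * x[j] for j in range(3)) for i in range(3)]
assert Tx == x

# Equivalently, (T - I) x = 0 has a nonzero solution, i.e. det(T - I) = 0.
A = [[T[i][j] - (1 if i == j else 0) for j in range(3)] for i in range(3)]
det = (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
       - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
       + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))
assert det == 0
```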
We get:
Given an unambiguous Büchi automaton, one can decide in polynomial time whether the probability that it accepts a random word is positive.

A Markov chain may also be part of the input:
Given a Markov chain and an unambiguous Büchi automaton, one can decide in polynomial time whether the probability is positive that the automaton accepts a word randomly generated by the Markov chain.

This result may be sharpened by replacing "polynomial time" by the complexity class "NC", which captures problems that can be efficiently solved in parallel.
This combines nicely with a translation from LTL to unambiguous Büchi automata: the translation is exponential but can be computed by a PSPACE-transducer.
Altogether, we obtain a PSPACE algorithm for model checking Markov chains against LTL.
The problem is PSPACE-complete, so this automata-theoretic algorithm is optimal.
In a forthcoming post I'll show that in addition to comparing the probability with zero one can

*compute* the probability equally efficiently.

The material from this post is from a CAV'16 paper by Christel Baier, Joachim Klein, Sascha Klüppelholz, David Müller, James Worrell, and myself.
