Markov chains

Updated: November 11, 2024

Let \(\ALPHABET X\) be a finite set. A stochastic process \(\{X_n\}_{n \ge 0}\), \(X_n \in \ALPHABET X\), is called a Markov chain if it satisfies the Markov property: for any \(n \in \integers_{\ge 0}\) and any \(x_{0:n+1} \in \ALPHABET X^{n+2}\), we have \[\begin{equation}\tag{Markov property}\label{eq:Markov} \PR(X_{n+1} = x_{n+1} \mid X_{0:n} = x_{0:n}) = \PR(X_{n+1} = x_{n+1} \mid X_n = x_n). \end{equation}\]

The variable \(X_n\) is called the state of the Markov chain at time \(n\); the set \(\ALPHABET X\) is called the state space. Property \(\eqref{eq:Markov}\) implies that the current state captures all the information from the past that is relevant for the future.

The Markov chain is called time-homogeneous if the right hand side does not depend on \(n\). In this case, we describe the Markov chain by a state transition matrix \(P\), where \(P_{ij} = \PR(X_{n+1} = j | X_n = i)\).
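As an illustration, here is a minimal sketch (in Python with NumPy; the three-state matrix `P` is a hypothetical example, not one from these notes) of simulating such a chain: the next state is sampled from the row of \(P\) indexed by the current state, which is exactly the Markov property.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical transition matrix on the state space {0, 1, 2}.
P = np.array([[0.5,  0.5,  0.0],
              [0.25, 0.5,  0.25],
              [0.0,  0.5,  0.5]])

def simulate(P, x0, n, rng):
    """Simulate n steps of a Markov chain with transition matrix P,
    starting from state x0; returns the list of visited states."""
    x, path = x0, [x0]
    for _ in range(n):
        # The next state depends only on the current state.
        x = rng.choice(len(P), p=P[x])
        path.append(int(x))
    return path

print(simulate(P, x0=0, n=10, rng=rng))
```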

Let \(μ^{(n)}\) denote the PMF of the state of the Markov chain at time \(n\), viewed as a row vector. Then, we have that \[ μ^{(n)} = μ^{(n-1)} P \] and, by recursively expanding the right hand side, we have \[ μ^{(n)} = μ^{(0)} P^n. \]

We will abbreviate \([P^n]_{ij}\) as \(P^{(n)}_{ij}\).
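As a sanity check on \(μ^{(n)} = μ^{(0)} P^n\), the following sketch (reusing the hypothetical `P` from above) propagates a distribution one step at a time and compares the result with the matrix-power formula; since \(μ\) is a row vector, it multiplies \(P\) from the left.

```python
import numpy as np

P = np.array([[0.5,  0.5,  0.0],
              [0.25, 0.5,  0.25],
              [0.0,  0.5,  0.5]])

mu0 = np.array([1.0, 0.0, 0.0])   # start deterministically in state 0

# mu^{(n)} = mu^{(n-1)} P, applied recursively for n = 1, ..., 5.
mu = mu0
for _ in range(5):
    mu = mu @ P

# Equivalently, mu^{(5)} = mu^{(0)} P^5.
assert np.allclose(mu, mu0 @ np.linalg.matrix_power(P, 5))
print(mu)
```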

1 Hitting times and expected number of visits

  1. We use the following notation:

    • \(P_i(A)\) denotes \(\PR(A \mid X_0 = i)\), and
    • \(\EXP_i[Y]\) denotes \(\EXP[Y \mid X_0 = i]\).
  2. Let \(A\) be a subset of \(\ALPHABET X\). The hitting time \(T_A\) of \(A\) is defined by \[ T_A = \min\{n > 0 : X_n \in A\}. \] The standard convention is that \(T_A\) is taken to be \(∞\) if \(X_n \notin A\) for all \(n > 0\). For a state \(j \in \ALPHABET X\), we use the short-hand \(T_j\) to denote \(T_{\{j\}}\).

  3. Define \[ f^{(n)}_{ij} \coloneqq P_i(T_j = n) = P_i(X_1 \neq j, \dots, X_{n-1} \neq j, X_n = j). \]

  4. \(f^{(n)}_{ij}\) satisfies the following recursion.

    • \(f^{(1)}_{ij} = P_i(X_1 = j) = P_{ij}\).
    • For \(n \ge 1\), \(f^{(n+1)}_{ij} = \sum_{k \neq j} P_{ik} f^{(n)}_{kj}\) (see the sketch at the end of this section).
  5. Let \[ f_{ij} = \sum_{n=1}^∞ f^{(n)}_{ij} = P_i(T_j < ∞) \] denote the probability that a chain starting in \(i\) eventually visits \(j\). In particular, \(f_{jj}\) denotes the probability that a chain starting in \(j\) will return to \(j\).

  6. \(f^{(n)}_{ij}\) and \(P^{(n)}_{ij}\) are related by the following first-passage decomposition.
    \[\displaystyle P^{(n)}_{ij} = \sum_{m=1}^n f^{(m)}_{ij} P^{(n-m)}_{jj}.\]

    An immediate implication of the above is that if \(j\) is an absorbing state then \(P^{(n)}_{ij} = \sum_{m=1}^n f^{(m)}_{ij} = P_i(T_j \le n).\)

  7. Let \(N^{(n)}_j = \sum_{m=1}^{n} \IND\{X_m = j\}\) denote the number of visits to state \(j\) in \(n\) steps. Define \[ G_{ij}^{(n)} = \EXP_i[ N^{(n)}_j ] = \sum_{m=1}^n P^{(m)}_{ij} \] to be the expected number of visits to state \(j\) in \(n\) steps, starting in \(i\).

  8. Let \(N_j = \lim_{n \to ∞} N^{(n)}_j = \sum_{m=1}^∞ \IND\{X_m = j \}\) denote the total number of visits to state \(j\). Similarly, define \[ G_{ij} = \EXP_{i}[N_j] = \lim_{n \to ∞} G_{ij}^{(n)} = \sum_{m=1}^{∞} P^{(m)}_{ij} \] to denote the expected number of visits to state \(j\) for a chain starting in \(i\).

  9. A state \(j\) is recurrent if \(f_{jj} = 1\) and transient if \(f_{jj} < 1\).

  10. For every transient state \(j\), we have for every \(i\), \(P_i(N_j < ∞) = 1\) and \[ G_{ij} = \dfrac{f_{ij}}{1 - f_{jj}}.\] On the other hand, if \(j\) is recurrent, then \(P_j(N_j = ∞) = 1\) and \(G_{jj} = ∞\). Moreover, \[ P_i(N_j = ∞) = P_i(T_j < ∞) = f_{ij}. \] So, for recurrent \(j\), if \(f_{ij} = 0\) then \(G_{ij} = 0\), while if \(f_{ij} > 0\) then \(G_{ij} = ∞\).

  11. Thus, a state \(i\) is recurrent if and only if \[G_{ii} = \sum_{n=1}^∞ P^{(n)}_{ii} = ∞. \]

  12. A state \(j\) is said to be accessible from \(i\) (abbreviated as \(i \rightsquigarrow j\)) if there is an ordered string of states \((i_0, \dots, i_m)\) such that \(i_0 = i\), \(i_m = j\), and \(P_{i_k i_{k+1}} > 0\) for all \(0 \le k < m\). Equivalently, \(i \rightsquigarrow j\) if there exists an \(m\) such that \(P^{(m)}_{ij} > 0\).

  13. Accessibility is a transitive relation, i.e., \(i \rightsquigarrow j\) and \(j \rightsquigarrow k\) imply that \(i \rightsquigarrow k\).

  14. If \(f_{ij} > 0\) but \(f_{ji} < 1\), then \(i\) is transient.

  15. If \(i\) is recurrent and \(i \rightsquigarrow j\), then \(j\) is also recurrent and \(f_{ij} = f_{ji} = 1\).

  16. A subset \(C\) of \(\ALPHABET X\) is said to be closed if no state inside \(C\) can lead to any state outside \(C\), i.e., \[ f_{ij} = 0, \quad \forall i \in C \text{ and } j \not\in C. \]

  17. A closed set \(C\) is called irreducible if \(i \rightsquigarrow j\) for all \(i,j \in C\). Thus, if \(C\) is an irreducible set, then either all states in \(C\) are recurrent or all states in \(C\) are transient.

  18. Consequently, if \(C\) is an irreducible and closed set of recurrent states, then for all \(i,j \in C\),

    • \(f_{ij} = 1\)
    • \(P_i(N_j = ∞) = 1\)
    • \(G_{ij} = ∞\)
  19. If \(C\) is a finite irreducible closed set of states, then every state in \(C\) is recurrent.

  20. Let \(\ALPHABET X_T\) and \(\ALPHABET X_R\) denote the set of transient and recurrent states. The set \(\ALPHABET X_R\) can be partitioned into a finite or countable number of irreducible closed sets \(C_1\), \(C_2\), \(\dots\).
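To make items 4–11 concrete, here is a minimal sketch (with the same hypothetical three-state `P` as earlier) that computes \(f^{(n)}_{ij}\) via the recursion of item 4 and approximates the infinite sums defining \(f_{ij}\) and \(G_{ij}\) by truncating them at a horizon \(N\):

```python
import numpy as np

P = np.array([[0.5,  0.5,  0.0],
              [0.25, 0.5,  0.25],
              [0.0,  0.5,  0.5]])
N = 200                            # truncation horizon for the infinite sums
S = len(P)

# f[n-1][i, j] stores f^{(n)}_{ij} = P_i(T_j = n).
f = np.zeros((N, S, S))
f[0] = P                           # f^{(1)}_{ij} = P_{ij}
for n in range(1, N):
    for j in range(S):
        mask = np.arange(S) != j
        # f^{(n+1)}_{ij} = sum_{k != j} P_{ik} f^{(n)}_{kj}
        f[n, :, j] = P[:, mask] @ f[n - 1][mask, j]

f_ij = f.sum(axis=0)               # truncated sum approximating P_i(T_j < oo)
G = sum(np.linalg.matrix_power(P, m) for m in range(1, N + 1))

print(np.round(f_ij, 4))   # all entries near 1: this chain is irreducible,
                           # so every state is recurrent (items 15 and 19)
print(np.round(G, 2))      # entries grow without bound as N increases (item 11)
```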

2 Absorption probabilities

  1. Let \(C\) be one of the irreducible closed sets of recurrent states. Define \(f_{iC} = P_i(T_C < ∞)\) to be the probability that a chain starting at \(i\) eventually hits \(C\) (and is absorbed in \(C\)).

  2. Clearly, \(f_{iC} = 1\) for \(i \in C\) and \(f_{iC} = 0\) if \(i\) is a recurrent state not in \(C\).

  3. Suppose \(i \in \ALPHABET X_T\). Then, \(f_{iC}\) satisfies the following: \[ f_{iC} = \sum_{j \in C} P_{ij} + \sum_{j \in \ALPHABET X_T} P_{ij} f_{jC}. \]

    When \(\ALPHABET X_T\) is finite, the above system has a unique solution.

Example 1 Consider a gambler’s ruin problem where, at each step, the state increases by \(1\) with probability \(p\) and decreases by \(1\) with probability \(q = 1 - p\); we start at state \(1\) and stop if we hit state \(0\) or \(3\).

Figure: A gambler’s ruin problem.

Find the probability of getting absorbed in state \(0\) before state \(3\).

Let \(f_{i0}\) denote the probability of getting absorbed in state \(0\) before \(3\), starting at state \(i\). Then, we can write the following system of equations to describe the absorption probabilities: \[\begin{align*} f_{10} &= q + p f_{20} \\ f_{20} &= q f_{10}. \end{align*}\] Substituting the second equation into the first gives \(f_{10} = q + pq\, f_{10}\), so \[ f_{10} = \frac{q}{1 - pq}, \qquad f_{20} = \frac{q^2}{1 - pq}. \]
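As a numerical check, the following sketch solves the same system by linear algebra, using the equation of the previous section restricted to the transient states: \((I - Q) f = b\), where \(Q\) is the transition matrix among transient states and \(b_i = \sum_{j \in C} P_{ij}\). The value \(p = 0.4\) is an arbitrary choice for illustration.

```python
import numpy as np

p = 0.4                   # hypothetical win probability
q = 1 - p

# States 0, 1, 2, 3; states 0 and 3 are absorbing.
P = np.array([[1, 0, 0, 0],
              [q, 0, p, 0],
              [0, q, 0, p],
              [0, 0, 0, 1]], dtype=float)

T = [1, 2]                # transient states
Q = P[np.ix_(T, T)]       # transitions among transient states
b = P[T, 0]               # one-step probability of entering C = {0}

# f_{iC} = sum_{j in C} P_{ij} + sum_{j in T} P_{ij} f_{jC}, i.e. (I - Q) f = b.
f = np.linalg.solve(np.eye(len(T)) - Q, b)

print(f)                                      # [f_{10}, f_{20}]
print(q / (1 - p * q), q**2 / (1 - p * q))    # analytic answers, for comparison
```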

3 Expected duration of play

Let’s start with a simple example. Suppose we toss a coin multiple times and stop at the first heads. What is the expected number of tosses until stopping?

From elementary probability we know that the number of tosses until stopping is a geometric random variable. However, we will model this using a Markov chain where the state denotes the number of consecutive heads so far. Let \(p\) denote the probability of heads and \(q = 1-p\) denote the probability of tails. Then, the Markov chain model is as follows.

Figure: Markov chain for coin tossing until one head.

Let \(v_i\) denote the expected number of tosses until stopping when starting at state \(i\). Then, we have \[\begin{align*} v_0 &= 1 + q v_0 + p v_1, \\ v_1 &= 0. \end{align*}\] Solving this system of equations, we get \(v_0 = 1/(1-q) = 1/p\).

Now, let’s try a variation of the above model. Suppose we toss a coin multiple times and stop at two consecutive heads. What is the expected number of tosses until stopping?

We can model this in the same manner as before, where the state denotes the number of consecutive heads so far. The Markov chain is as follows:

Figure: Markov chain for coin tossing until two heads.

As before, let \(v_i\) denote the expected number of tosses until stopping when starting at state \(i\). Then, we have \[\begin{align*} v_0 &= 1 + q v_0 + p v_1, \\ v_1 &= 1 + q v_0 + p v_2, \\ v_2 &= 0. \end{align*}\] Substituting \(v_1 = 1 + q v_0\) into the first equation and using \(q = 1 - p\), we get \(v_0 = (1+p)/p^2\).

We can generalize these ideas to find the expected time to hit any given state, as the sketch below shows.
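In general, if \(Q\) denotes the transition matrix restricted to the states that have not yet hit the target, the vector \(v\) of expected hitting times satisfies \(v = \mathbf{1} + Q v\), i.e., \(v = (I - Q)^{-1} \mathbf{1}\). A minimal sketch that verifies both coin-tossing answers this way (the value \(p = 0.3\) is an arbitrary choice):

```python
import numpy as np

def expected_steps(Q):
    """Expected number of steps to absorption from each transient state:
    solve (I - Q) v = 1."""
    n = len(Q)
    return np.linalg.solve(np.eye(n) - Q, np.ones(n))

p = 0.3                   # hypothetical probability of heads
q = 1 - p

# One head: single transient state 0 (no head yet); stays put w.p. q.
Q1 = np.array([[q]])
print(expected_steps(Q1)[0], 1 / p)              # both equal 1/p

# Two consecutive heads: transient states 0 and 1 (consecutive heads so far).
Q2 = np.array([[q, p],
               [q, 0]])
print(expected_steps(Q2)[0], (1 + p) / p**2)     # both equal (1+p)/p^2
```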

4 Classification of states

The states in a finite-state Markov chain can be partitioned into two sets: recurrent states and transient states. A state is recurrent if it is accessible from all states that are accessible from it (i.e., \(i\) is recurrent if \(i \rightsquigarrow j\) implies that \(j \rightsquigarrow i\)). States that are not recurrent are transient.

Two distinct states \(i\) and \(j\) are said to communicate (abbreviated to \(i \leftrightsquigarrow j\)) if \(i\) is accessible from \(j\) (i.e., \(j \rightsquigarrow i\)) and \(j\) is accessible from \(i\) (\(i \rightsquigarrow j\)). An important fact about communicating states is that if \(i \leftrightsquigarrow j\) and \(j \leftrightsquigarrow k\), then \(i \leftrightsquigarrow k\).

A class \(C\) of states is a non-empty set of states such that each \(i \in C\) communicates with every \(j \in C\) and does not communicate with any \(j \not\in C\). In a finite-state Markov chain, the states in a class are either all recurrent or all transient.

The period of a state \(i\), denoted by \(d(i)\), is the greatest common divisor of those values of \(n\) for which \(P^{(n)}_{ii} > 0\). If the period is \(1\), the state is aperiodic, and if the period is \(2\) or more, the state is periodic. It can be shown that all states in the same class have the same period.
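A minimal sketch (hypothetical `P`; the truncation at horizon `N` is a practical approximation) that computes \(d(i)\) by collecting the times \(n \le N\) with \(P^{(n)}_{ii} > 0\) and taking their greatest common divisor:

```python
import math
import numpy as np

def periods(P, N=50):
    """Period d(i) = gcd of { n <= N : P^(n)_{ii} > 0 } for each state i."""
    S = len(P)
    d = [0] * S                      # math.gcd(0, n) == n
    Pn = np.eye(S)
    for n in range(1, N + 1):
        Pn = Pn @ P
        for i in range(S):
            if Pn[i, i] > 1e-12:     # tolerance guards against round-off
                d[i] = math.gcd(d[i], n)
    return d

# A two-state chain that alternates deterministically has period 2.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(periods(P))                    # [2, 2]
```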

A Markov chain is said to be ergodic if it has only one recurrent class that is also aperiodic.

5 Steady-state distribution

A Markov chain is said to have a steady-state distribution if the distribution \(μ^{(n)}\) converges to a limit \(π\) as \(n \to ∞\) and the limit does not depend on the initial distribution \(μ^{(0)}\).

A Markov chain has a steady-state distribution if it is ergodic. We can find the steady-state distribution by solving the balance equations \(π = π P\) together with the normalization condition \(\sum_{i \in \ALPHABET X} π_i = 1\).
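A minimal sketch (again with a hypothetical `P`) that solves the balance equations directly: the rows of \((P - I)^\top\) are linearly dependent, so one of them is replaced by the normalization condition.

```python
import numpy as np

P = np.array([[0.5,  0.5,  0.0],
              [0.25, 0.5,  0.25],
              [0.0,  0.5,  0.5]])

S = len(P)
# pi P = pi  <=>  (P - I)^T pi^T = 0; replace the last (redundant) equation
# with sum_i pi_i = 1 to pin down the solution.
A = (P - np.eye(S)).T
A[-1, :] = 1.0
b = np.zeros(S)
b[-1] = 1.0

pi = np.linalg.solve(A, b)
print(pi)          # the steady-state distribution
print(pi @ P)      # equals pi: the balance equations hold
```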