The notion of Markov dependence, defined in section 5 for Bernoulli trials, may be extended to trials with several possible outcomes. Consider \(n\) trials of an experiment with \(r\) possible outcomes \(s_{1}, s_{2}, \ldots, s_{r}\) , in which \(r>2\) . For \(k=1,2, \ldots, n\) and \(j=1,2, \ldots, r\) let \(A_{k}^{(j)}\) be the event that on the \(k\) th trial outcome \(s_{j}\) occurred. The trials are defined as Markov dependent if for any integer \(k\) from 2 to \(n\) and integers \(j_{1}, j_{2}, \ldots, j_{k}\) from 1 to \(r\) , the events \(A_{1}^{\left(j_{1}\right)}, A_{2}^{\left(j_{2}\right)}, \ldots, A_{k}^{\left(j_{k}\right)}\) satisfy the condition \[P\left[A_{k}^{\left(j_{k}\right)} \mid A_{k-1}^{\left(j_{k-1}\right)}, \cdots, A_{1}^{\left(j_{1}\right)}\right]=P\left[A_{k}^{\left(j_{k}\right)} \mid A_{k-1}^{\left(j_{k-1}\right)}\right] . \tag{6.1}\] 

In discussing Markov dependent trials with \(r\) possible outcomes, it is usual to employ an intuitively meaningful language. Instead of speaking of \(n\) trials of an experiment with \(r\) possible outcomes, we speak of observing at \(n\) times the state of a system which has \(r\) possible states. We number the states \(1,2, \ldots, r\) (or sometimes \(0,1, \ldots, r-1\) ) and let \(A_{k}^{(j)}\) be the event that the system is in state \(j\) at time \(k\) . If (6.1) holds, we say that the system is a Markov chain with \(r\) possible states. In words, (6.1) states that at any time the conditional probability of transition from one’s present state to any other state does not depend on how one arrived in one’s present state. One sometimes says that a Markov chain is a system without memory of the past.

Now suppose that for any states \(i\) and \(j\) the conditional probability \begin{align} P(i, j)= & \text { conditional probability that the Markov chain } \tag{6.2} \\ & \text { is at time } t \text { in state } j, \text { given that at time }(t-1) \\ & \text { it was in state } i, \end{align} is independent of \(t\) . The Markov chain is then said to be homogeneous (or time homogeneous). A homogeneous Markov chain with \(r\) states corresponds to the notion of Markov dependent repeated trials with \(r\) possible outcomes. In this section, by a Markov chain we mean a homogeneous Markov chain.

The \(m\) -step transition probabilities defined by \begin{align} P_{m}(i, j)= & \text { the conditional probability that the Markov } \tag{6.3} \\ & \text { chain is at time } t+m \text { in state } j, \text { given that at } \\ & \text { time } t \text { it was in state } i, \end{align} are given recursively in terms of \(P(i, j)\) by the system of equations, for \(m=2,3, \ldots\) , \begin{align} P_{m}(i, j)= & P_{m-1}(i, 1) P(1, j)+P_{m-1}(i, 2) P(2, j) \tag{6.4} \\ & +\cdots+P_{m-1}(i, r) P(r, j) \\ = & \sum_{k=1}^{r} P_{m-1}(i, k) P(k, j). \end{align} To justify (6.4) , write it in the form \[P\left[A_{m+1}^{(j)} \mid A_{1}^{(i)}\right]=\sum_{k=1}^{r} P\left[A_{m}^{(k)} \mid A_{1}^{(i)}\right] P\left[A_{m+1}^{(j)} \mid A_{m}^{(k)}\right],\] and recall that \(A_{m}^{(j)}\) is the event that at time \(m\) the system is in state \(j\) ; the equation then follows from the law of total probability together with the Markov property (6.1) .

One may similarly prove \[P_{m}(i, j)=\sum_{k=1}^{r} P(i, k) P_{m-1}(k, j) \tag{6.5}\] 
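The recursions (6.4) and (6.5) translate directly into a short computation. The following is a minimal sketch, not part of the text, in Python, assuming the one-step transition probabilities are stored as a nested list `P` indexed from 0:

```python
def m_step(P, m):
    """m-step transition probabilities by the recursion (6.4):
    P_m(i, j) = sum over k of P_{m-1}(i, k) * P(k, j)."""
    r = len(P)
    Pm = [row[:] for row in P]                     # P_1 = P
    for _ in range(m - 1):
        Pm = [[sum(Pm[i][k] * P[k][j] for k in range(r))
               for j in range(r)]
              for i in range(r)]
    return Pm
```

Applied to the matrix of example 6A below, `m_step(P, 2)` and `m_step(P, 3)` reproduce \(P_2\) and \(P_3\).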

The unconditional probabilities \begin{align} p_{n}(j)= & \text { the probability that at time } n \text { the Markov } \tag{6.6} \\ & \text { chain is in state } j, \end{align} are given for \(n \geq 2\) in terms of the initial unconditional probabilities \(p_{1}(j)\) by \[p_{n}(j)=\sum_{k=1}^{r} p_{1}(k) P_{n-1}(k, j) \tag{6.7}\] 

One proves (6.7) in exactly the same way that one proves (6.4) and (6.5) . Similarly, one may show that \[p_{n}(j)=\sum_{k=1}^{r} p_{n-1}(k) P(k, j) \tag{6.8}\] 

The transition probabilities \(P(i, j)\) of a Markov chain with \(r\) states are best exhibited in the form of a matrix. \[P=\left[\begin{array}{llllll} P(1,1) & P(1,2) & P(1,3) & \ldots & P(1, r-1) & P(1, r) \tag{6.9} \\ P(2,1) & P(2,2) & P(2,3) & \ldots & P(2, r-1) & P(2, r) \\ \ldots \ldots & \ldots \ldots& \ldots \ldots& \ldots & \ldots \ldots& \ldots \ldots \\ P(r-1,1) & P(r-1,2) & P(r-1,3)& \ldots &P(r-1, r-1) & P(r-1, r) \\ P(r, 1) & P(r, 2) & P(r, 3)& \ldots & P(r, r-1) & P(r, r) \end{array}\right]\] The matrix \(P\) is said to be an \(r \times r\) matrix, since it has \(r\) rows and \(r\) columns.

Given an \(m \times r\) matrix \(A\) and an \(r \times n\) matrix \(B\) , \[A=\left[\begin{array}{cccc} a_{11} & a_{12} & \cdots & a_{1 r} \\ a_{21} & a_{22} & \cdots & a_{2 r} \\ \ldots & \cdots & \cdots & \cdots \\ a_{m 1} & a_{m 2} & \cdots & a_{m r} \end{array}\right] \quad B=\left[\begin{array}{cccc} b_{11} & b_{12} & \cdots & b_{1 n} \\ b_{21} & b_{22} & \cdots & b_{2 n} \\ \ldots & \cdots & \cdots & \ldots \\ b_{r 1} & b_{r 2} & \cdots & b_{r n} \end{array}\right]\] we define the product \(C=A B\) of the two matrices as the \(m \times n\) matrix whose element \(c_{i j}\) , lying at the intersection of the \(i\) th row and the \(j\) th column, is given by \[c_{i j}=a_{i 1} b_{1 j}+a_{i 2} b_{2 j}+\cdots+a_{i r} b_{r j}=\sum_{k=1}^{r} a_{i k} b_{k j} \tag{6.10}\] It should be noted that matrix multiplication is associative; \(A(B C)=\) \((A B) C\) for any matrices \(A, B\) , and \(C\) .
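Definition (6.10) is the familiar row-by-column rule. As an illustration (the function name `mat_mul` is ours, not the text's), a short Python sketch:

```python
def mat_mul(A, B):
    """Product C = AB of an m x r matrix A and an r x n matrix B,
    with c_ij = sum over k of a_ik * b_kj, as in (6.10)."""
    m, r, n = len(A), len(B), len(B[0])
    assert all(len(row) == r for row in A), "inner dimensions must agree"
    return [[sum(A[i][k] * B[k][j] for k in range(r)) for j in range(n)]
            for i in range(m)]
```

Associativity can then be spot-checked numerically: `mat_mul(A, mat_mul(B, C))` agrees with `mat_mul(mat_mul(A, B), C)` up to rounding.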

If we define the \(m\) -step transition probability matrix \(P_{m}\) of a Markov chain by \[P_{m}=\left[\begin{array}{cccc} P_{m}(1,1) & P_{m}(1,2) & \ldots & P_{m}(1, r) \tag{6.11} \\ P_{m}(2,1) & P_{m}(2,2) & \ldots& P_{m}(2, r) \\ \ldots \ldots& \ldots\ldots & \ldots &\ldots \ldots \\ P_{m}(r, 1) & P_{m}(r, 2)& \ldots & P_{m}(r, r) \end{array}\right]\] we see that (6.4) and (6.5) may be concisely expressed as \[P_{m}=P P_{m-1}=P_{m-1} P, \quad m=2,3, \cdots \tag{6.12}\] It follows by induction that \(P_{m}=P^{m}\) , the \(m\) th power of the matrix \(P\) .

Example 6A . If the transition probability matrix \(P\) of a Markov chain is given by \[P=\left[\begin{array}{ccc} 0 & \frac{1}{3} & \frac{2}{3} \\ \frac{2}{3} & 0 & \frac{1}{3} \\ \frac{1}{3} & \frac{2}{3} & 0 \end{array}\right]\] then the chain consists of three states, since \(P\) is a \(3 \times 3\) matrix, and the 2 -step and 3 -step transition probability matrices are given by \[P_{2}=P P=\left[\begin{array}{ccc} \frac{4}{9} & \frac{4}{9} & \frac{1}{9} \\ \frac{1}{9} & \frac{4}{9} & \frac{4}{9} \\ \frac{4}{9} & \frac{1}{9} & \frac{4}{9} \end{array}\right], \quad P_{3}=P_{2} P=\left[\begin{array}{ccc} \frac{1}{3} & \frac{2}{9} & \frac{4}{9} \\ \frac{4}{9} & \frac{1}{3} & \frac{2}{9} \\ \frac{2}{9} & \frac{4}{9} & \frac{1}{3} \end{array}\right]\] 

If the initial unconditional probabilities are assumed to be given by \[p_{1}=\left(p_{1}(1), p_{1}(2), p_{1}(3)\right)=\left(\frac{1}{2}, \frac{1}{4}, \frac{1}{4}\right),\] then \begin{align} & p_{2}=\left(p_{2}(1), p_{2}(2), p_{2}(3)\right)=p_{1} P=\left(\frac{1}{4}, \frac{1}{3}, \frac{5}{12}\right) \\ & p_{3}=\left(p_{3}(1), p_{3}(2), p_{3}(3)\right)=p_{1} P_{2}=p_{2} P=\left(\frac{13}{36}, \frac{13}{36}, \frac{10}{36}\right) \\ & p_{4}=\left(p_{4}(1), p_{4}(2), p_{4}(3)\right)=p_{1} P_{3}=p_{3} P=\left(\frac{12}{36}, \frac{11}{36}, \frac{13}{36}\right) \end{align} 
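The arithmetic of example 6A is easy to check with exact rational arithmetic; the following sketch, not part of the text, uses Python's `fractions` module:

```python
from fractions import Fraction as F

P = [[F(0), F(1, 3), F(2, 3)],
     [F(2, 3), F(0), F(1, 3)],
     [F(1, 3), F(2, 3), F(0)]]

def mat_mul(A, B):
    # row-by-column product, as in (6.10)
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P2 = mat_mul(P, P)    # rows (4/9, 4/9, 1/9), (1/9, 4/9, 4/9), (4/9, 1/9, 4/9)
P3 = mat_mul(P2, P)   # rows (1/3, 2/9, 4/9), (4/9, 1/3, 2/9), (2/9, 4/9, 1/3)

p1 = [[F(1, 2), F(1, 4), F(1, 4)]]   # initial probabilities as a row vector
p2 = mat_mul(p1, P)   # (1/4, 1/3, 5/12)
p3 = mat_mul(p2, P)   # (13/36, 13/36, 5/18)
p4 = mat_mul(p3, P)   # (1/3, 11/36, 13/36)
```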

We define a Markov chain with \(r\) states as ergodic if numbers \(\pi_{1}, \pi_{2}, \ldots, \pi_{r}\) exist such that for any states \(i\) and \(j\) 

\[\lim _{m \rightarrow \infty} P_{m}(i, j)=\pi_{j} \tag{6.13}\] 

In words, a Markov chain is ergodic if, as \(m\) tends to \(\infty\) , the \(m\) -step transition probabilities \(P_{m}(i, j)\) tend to a limit that depends only on the final state \(j\) and not on the initial state \(i\) . If a Markov chain is ergodic, then after a large number of trials it achieves statistical equilibrium in the sense that the unconditional probabilities \(p_{n}(j)\) tend to limits

\[\lim _{n \rightarrow \infty} p_{n}(j)=\pi_{j}, \tag{6.14}\] 

which are the same, no matter what the values of the initial unconditional probabilities \(p_{1}(j)\) . To see that (6.13) implies (6.14) , take the limit of both sides of (6.7) and use the fact that \(\sum_{k=1}^{r} p_{1}(k)=1\) .

In view of (6.14) , we call \(\pi_{1}, \pi_{2}, \ldots, \pi_{r}\) the stationary probabilities of the Markov chain, since these represent the probabilities of being in the various states after one has achieved statistical equilibrium.

One of the important problems of the theory of Markov chains is to determine conditions under which a Markov chain is ergodic. A discussion of this problem is beyond the scope of this book. We state without proof the following theorem.

If there exists an integer \(m\) such that \[P_{m}(i, j)>0 \quad \text { for all states } i \text { and } j, \tag{6.15}\] then the Markov chain with transition probability matrix \(P\) is ergodic. 

It is sometimes possible to establish that a Markov chain is ergodic without having to exhibit an \(m\) -step transition probability matrix \(P_{m}\) , all the entries of which are positive. Given two states \(i\) and \(j\) in a Markov chain, we say that one can reach \(i\) from \(j\) if states \(i_{1}, i_{2}, \ldots, i_{N}\) exist such that

\[0<P\left(j, i_{1}\right) P\left(i_{1}, i_{2}\right) \cdots P\left(i_{N-1}, i_{N}\right) P\left(i_{N}, i\right). \tag{6.16}\] 

Two states \(i\) and \(j\) are said to communicate if one can reach \(i\) from \(j\) and also \(j\) from \(i\) . The following theorem can be proved.

If all states in a Markov chain communicate and if a state \(i\) exists such that \(P(i, i)>0\) , then the Markov chain is ergodic. 

Having established that a Markov chain is ergodic, the next problem is to obtain the stationary probabilities \(\pi_{j}\) . It is clear from (6.4) that the stationary probabilities satisfy the system of linear equations, \[\pi_{j}=\sum_{k=1}^{r} \pi_{k} P(k, j), \quad j=1,2, \ldots, r. \tag{6.17}\] 

Consequently, if a Markov chain is ergodic, then a solution of (6.17) that satisfies the conditions \[\pi_{j} \geq 0 \quad \text { for } j=1,2, \ldots, r; \quad \sum_{j=1}^{r} \pi_{j}=1, \tag{6.18}\] exists. It may be shown that if a Markov chain with transition probability matrix \(P\) is ergodic then the solution of (6.17) satisfying (6.18) is unique and necessarily satisfies (6.13) and (6.14) . Consequently, to find the stationary probabilities, we need solve only (6.17) .

Example 6B . The Markov chain considered in example 6A is ergodic, since \(P_{2}(i, j)>0\) for all states \(i\) and \(j\) . To compute the stationary probabilities \(\pi_{1}, \pi_{2}, \pi_{3}\) , we need only to solve the equations \begin{align} & \pi_{1}=\quad \frac{2}{3} \pi_{2}+\frac{1}{3} \pi_{3} \\ & \pi_{2}=\frac{1}{3} \pi_{1} \quad+\frac{2}{3} \pi_{3} \tag{6.19} \\ & \pi_{3}=\frac{2}{3} \pi_{1}+\frac{1}{3} \pi_{2} \end{align} subject to (6.18) . It is clear that \[\pi_{1}=\pi_{2}=\pi_{3}=\frac{1}{3}\] is a solution of (6.19) satisfying (6.18) . In the long run, the states 1,2, and 3 are equally likely to be the state of the Markov chain.
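As a numerical aside, not part of the text, the ergodicity check (6.15) with \(m=2\) and the solution of (6.17)-(6.18) for example 6A can be carried out as follows, assuming `numpy` is available:

```python
import numpy as np

P = np.array([[0, 1/3, 2/3],
              [2/3, 0, 1/3],
              [1/3, 2/3, 0]])

# Ergodicity via (6.15): every entry of the 2-step matrix is positive.
P2 = P @ P
assert (P2 > 0).all()

# Stationary probabilities: solve pi = pi P with sum(pi) = 1, i.e. (6.17)-(6.18).
r = P.shape[0]
A = np.vstack([P.T - np.eye(r), np.ones(r)])   # r + 1 equations in r unknowns
b = np.concatenate([np.zeros(r), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)   # approximately [1/3, 1/3, 1/3]
```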

A matrix \[A = \left[\begin{array}{llll} a_{11} & a_{12} & \cdots & a_{1r} \\ a_{21} & a_{22} & \cdots & a_{2r} \\ \vdots & \vdots & \ddots & \vdots \\ a_{r1} & a_{r2} & \cdots & a_{rr} \end{array}\right]\] is defined as stochastic if the sum of the entries in any row is equal to 1; in symbols, \(A\) is stochastic if \[\sum_{j=1}^{r} a_{i j}=1 \quad \text { for } i=1,2, \ldots, r \tag{6.20}\] 

The matrix \(A\) is defined as doubly stochastic if in addition the sum of the entries in any column is equal to 1; in symbols, \(A\) is doubly stochastic if (6.20) holds and also \[\sum_{i=1}^{r} a_{i j}=1 \quad \text { for } j=1,2, \ldots, r \tag{6.21}\] 

It is clear that the transition probability matrix \(P\) of a Markov chain is stochastic. If \(P\) is doubly stochastic (as is the matrix in example 6A), then the stationary probabilities are given by \[\pi_{1}=\pi_{2}=\cdots=\pi_{r}=\frac{1}{r} \tag{6.22}\] in which \(r\) is the number of states in the Markov chain. To prove (6.22) one need only verify that if \(P\) is doubly stochastic then (6.22) satisfies (6.17) and (6.18) .
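A brief sketch, ours rather than the text's, that checks (6.20)-(6.22) for the matrix of example 6A:

```python
import numpy as np

def is_doubly_stochastic(P):
    """Row sums (6.20) and column sums (6.21) all equal to 1."""
    return (np.allclose(P.sum(axis=1), 1.0) and
            np.allclose(P.sum(axis=0), 1.0))

P = np.array([[0, 1/3, 2/3],
              [2/3, 0, 1/3],
              [1/3, 2/3, 0]])          # the matrix of example 6A
r = P.shape[0]
uniform = np.full(r, 1.0 / r)

assert is_doubly_stochastic(P)
assert np.allclose(uniform @ P, uniform)   # the uniform vector satisfies (6.17)
```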

Example 6C . Random walk with retaining barriers. Consider a straight line on which are marked off positions \(0,1,2,3,4\) , and 5, arranged from left to right. A man (or an atomic particle, if one prefers physically significant examples) performs a random walk among the six positions by tossing a coin that has probability \(p\) (where \(0<p<1\) ) of falling heads and acting in accordance with the following set of rules: if the coin falls heads, move one position to the right, if at \(0,1,2,3\) , or 4, and remain at 5, if at 5; if the coin falls tails, move one position to the left, if at \(1,2,3,4\) , or 5, and remain at 0, if at 0. The positions 0 and 5 are retaining barriers; one cannot move past them. In example 6D we consider the case in which positions 0 and 5 are absorbing barriers; if one reaches these positions, the walk stops. The transition probability matrix of the random walk with retaining barriers is given by \[P=\left[\begin{array}{llllll} q & p & 0 & 0 & 0 & 0 \tag{6.23} \\ q & 0 & p & 0 & 0 & 0 \\ 0 & q & 0 & p & 0 & 0 \\ 0 & 0 & q & 0 & p & 0 \\ 0 & 0 & 0 & q & 0 & p \\ 0 & 0 & 0 & 0 & q & p \end{array}\right].\] All states in this Markov chain communicate, since \(P(i, i+1)=p>0\) for \(i=0,1, \ldots, 4\) and \(P(i, i-1)=q>0\) for \(i=1,2, \ldots, 5\) , so that any state can be reached from any other by a chain of one-step transitions, each of positive probability.

The chain is ergodic, since \(P(0,0)>0\) . To find the stationary probabilities \(\pi_{0}, \pi_{1}, \ldots, \pi_{5}\) , we solve the system of equations: \begin{align} & \pi_{0}=q \pi_{0}+q \pi_{1} \\ & \pi_{1}=p \pi_{0}+q \pi_{2} \\ & \pi_{2}=p \pi_{1}+q \pi_{3} \\ & \pi_{3}=p \pi_{2}+q \pi_{4} \tag{6.24} \\ & \pi_{4}=p \pi_{3}+q \pi_{5} \\ & \pi_{5}=p \pi_{4}+p \pi_{5}. \end{align} 

We solve these equations by successive substitution.

From the first equation we obtain \[q \pi_{1}=p \pi_{0} \quad \text { or } \quad \pi_{1}=\frac{p}{q} \pi_{0}\] By subtracting this result from the second equation in (6.24) , we obtain \[q \pi_{2}=p \pi_{1} \quad \text { or } \quad \pi_{2}=\frac{p}{q} \pi_{1}=\left(\frac{p}{q}\right)^{2} \pi_{0}\] Similarly, we obtain \[\begin{array}{lll} q \pi_{3}=p \pi_{2} & \text { or } & \pi_{3}=\frac{p}{q} \pi_{2}=\left(\frac{p}{q}\right)^{3} \pi_{0} \\ q \pi_{4}=p \pi_{3} & \text { or } & \pi_{4}=\frac{p}{q} \pi_{3}=\left(\frac{p}{q}\right)^{4} \pi_{0} \\ q \pi_{5}=p \pi_{4} & \text { or } & \pi_{5}=\frac{p}{q} \pi_{4}=\left(\frac{p}{q}\right)^{5} \pi_{0}. \end{array}\] 

To determine \(\pi_{0}\) , we use the fact that \begin{align} 1 & =\pi_{0}+\pi_{1}+\cdots+\pi_{5}=\pi_{0}\left[1+\frac{p}{q}+\left(\frac{p}{q}\right)^{2}+\cdots+\left(\frac{p}{q}\right)^{5}\right] \\ & = \begin{cases}\pi_{0} \frac{1-(p / q)^{6}}{1-(p / q)} & \text { if } p \neq q \\ 6 \pi_{0} & \text { if } p=q=\frac{1}{2} .\end{cases} \end{align} 

We finally conclude that the stationary probabilities for the random walk with retaining barriers for \(j=0,1, \ldots, 5\) are given by \[\pi_{j}= \begin{cases}\left(\frac{p}{q}\right)^{j} \frac{1-(p / q)}{1-(p / q)^{6}} & \text { if } p \neq q \tag{6.25} \\ \frac{1}{6} & \text { if } p=q=\frac{1}{2} .\end{cases}\] 
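The closed form (6.25) can be checked against a direct numerical solution of (6.17)-(6.18) for the matrix (6.23); the following sketch, not part of the text, assumes an illustrative value \(p=0.7\):

```python
import numpy as np

def retaining_barrier_matrix(p, n=6):
    """Transition matrix (6.23): random walk on states 0, ..., n-1
    with retaining barriers at 0 and n-1."""
    q = 1.0 - p
    P = np.zeros((n, n))
    P[0, 0], P[n - 1, n - 1] = q, p
    for i in range(n - 1):
        P[i, i + 1] = p      # step right with probability p
        P[i + 1, i] = q      # step left with probability q
    return P

p, n = 0.7, 6                 # illustrative value of p (p != q)
q = 1.0 - p
P = retaining_barrier_matrix(p, n)

# Solve (6.17)-(6.18) numerically.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Compare with the closed form (6.25).
rho = p / q
pi_formula = np.array([rho**j * (1 - rho) / (1 - rho**n) for j in range(n)])
assert np.allclose(pi, pi_formula)
```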

If a Markov chain is ergodic, then the physical process represented by the Markov chain can continue indefinitely. Indeed, after a long time it achieves statistical equilibrium and probabilities \(\pi_{1}, \ldots, \pi_{r}\) exist of being in the various states that depend only on the transition probability matrix \(P\) .

We next desire to study an important class of non-ergodic Markov chains, namely those that possess absorbing states. A state \(j\) in a Markov chain is said to be absorbing if \(P(j, i)=0\) for all states \(i \neq j\) , so that it is impossible to leave an absorbing state. Equivalently, a state \(j\) is absorbing if \(P(j, j)=1\) .

Example 6D . Random walk with absorbing barriers. Consider a straight line on which positions \(0,1,2,3,4\) , and 5, arranged from left to right, are marked off. Consider a man who performs a random walk among the six positions according to the following transition probability matrix: \[P=\left[\begin{array}{llllll} 1 & 0 & 0 & 0 & 0 & 0 \tag{6.26} \\ q & 0 & p & 0 & 0 & 0 \\ 0 & q & 0 & p & 0 & 0 \\ 0 & 0 & q & 0 & p & 0 \\ 0 & 0 & 0 & q & 0 & p \\ 0 & 0 & 0 & 0 & 0 & 1 \end{array}\right].\] 

In the Markov chain with transition probability matrix \(P\) , given by (6.26) , the states 0 and 5 are absorbing states; consequently, this Markov chain is called a random walk with absorbing barriers. The model of a random walk with absorbing barriers describes the fortunes of gamblers with finite capital. Let two opponents, \(A\) and \(B\) , have 5 cents between them. Let \(A\) toss a coin, which has probability \(p\) of falling heads. On each toss he wins a penny if the coin falls heads and loses a penny if the coin falls tails. For \(j=0, \ldots, 5\) we define the chain to be in state \(j\) if \(A\) has \(j\) cents.

Given a Markov chain with an absorbing state \(j\) , it is of interest to compute for each state \(i\) \begin{align} u_{j}(i)= & \text { the conditional probability of ever arriving at the } \tag{6.27} \\ & \text { absorbing state } j \text {, given that one started from state } i. \end{align} 

We call \(u_{j}(i)\) the probability of absorption in state \(j\) , given the initial state \(i\) , since one remains in \(j\) if one ever arrives there.

The probability \(u_{j}(i)\) is defined on a sample description space consisting of a countably infinite number of trials. We do not in this book discuss the definition of probabilities on such sample spaces. Consequently, we cannot give a proof of the following basic theorem, which facilitates the computation of the absorption probabilities \(u_{j}(i)\) .

If \(j\) is an absorbing state in a Markov chain with states \(\{1,2, \ldots, r\}\) , then the absorption probabilities \(u_{j}(1), \ldots, u_{j}(r)\) are the unique solution to the system of equations: \begin{align} & u_{j}(j)=1 \\ & u_{j}(i)=0, \quad \text { if } j \text { cannot be reached from } i \tag{6.28} \\ & u_{j}(i)=\sum_{k=1}^{r} P(i, k) u_{j}(k), \quad \text { if } j \text { can be reached from } i. \end{align} 

The third equation in (6.28) may be justified intuitively as follows. The probability \(u_{j}(i)\) of ever arriving at state \(j\) from state \(i\) is the sum, over all states \(k\) , of the probability of going from \(i\) to \(j\) via a first step to \(k\) ; this probability is the product of the probability \(P(i, k)\) of going from \(i\) to \(k\) in one step and the probability \(u_{j}(k)\) of then ever passing from \(k\) to \(j\) .
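Since (6.28) is a linear system, it can be solved mechanically. The following sketch (the function name is ours) assumes, as is true for the random walk (6.26), that every non-absorbing state can reach some absorbing state:

```python
import numpy as np

def absorption_probabilities(P, j):
    """Solve system (6.28) for u_j(i), the probability of absorption in
    state j given initial state i.  Assumes every non-absorbing state
    can reach some absorbing state (true for the matrix (6.26))."""
    P = np.asarray(P, dtype=float)
    r = P.shape[0]
    A = np.zeros((r, r))
    b = np.zeros(r)
    for i in range(r):
        if i == j:
            A[i, i], b[i] = 1.0, 1.0       # u_j(j) = 1
        elif np.isclose(P[i, i], 1.0):
            A[i, i] = 1.0                  # another absorbing state: u_j(i) = 0
        else:
            A[i] = -P[i]                   # u_j(i) - sum_k P(i, k) u_j(k) = 0
            A[i, i] += 1.0
    return np.linalg.solve(A, b)
```

Applied to the matrix (6.26) with a specific value of \(p\), `absorption_probabilities(P, 0)` reproduces the ruin probabilities derived in example 6E below.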

Example 6E . Probability of a gambler’s ruin. Let \(A\) and \(B\) play the coin-tossing game described in example 6D. If \(A\) ’s initial fortune is 3 cents and \(B\) ’s initial fortune is 2 cents, what is the probability that \(A\) ’s fortune will be 0 cents before it is 5 cents, so that \(A\) will be ruined?

 

Solution

For \(i=0,1, \ldots, 5\) let \(u_{0}(i)\) be the probability that \(A\) ’s fortune will ever be 0, given that his initial fortune was \(i\) cents. In view of (6.28), the absorption probabilities \(u_{0}(i)\) are the unique solution of the equations \begin{align} & u_{0}(0)=1 \\ & u_{0}(1)=q u_{0}(0)+p u_{0}(2) \quad \text { or } q\left[u_{0}(1)-u_{0}(0)\right]=p\left[u_{0}(2)-u_{0}(1)\right] \\ & u_{0}(2)=q u_{0}(1)+p u_{0}(3) \quad \text { or } q\left[u_{0}(2)-u_{0}(1)\right]=p\left[u_{0}(3)-u_{0}(2)\right] \\ & u_{0}(3)=q u_{0}(2)+p u_{0}(4) \text { or } q\left[u_{0}(3)-u_{0}(2)\right]=p\left[u_{0}(4)-u_{0}(3)\right] \tag{6.29} \\ & u_{0}(4)=q u_{0}(3)+p u_{0}(5) \text { or } q\left[u_{0}(4)-u_{0}(3)\right]=p\left[u_{0}(5)-u_{0}(4)\right] \\ & u_{0}(5)=0. \end{align} 

 

To solve these equations we note that, defining \(c=u_{0}(1)-u_{0}(0)\) , \begin{align} & u_{0}(2)-u_{0}(1)=\frac{q}{p}\left[u_{0}(1)-u_{0}(0)\right]=\frac{q}{p} c \\ & u_{0}(3)-u_{0}(2)=\frac{q}{p}\left[u_{0}(2)-u_{0}(1)\right]=\left(\frac{q}{p}\right)^{2} c \\ & u_{0}(4)-u_{0}(3)=\frac{q}{p}\left[u_{0}(3)-u_{0}(2)\right]=\left(\frac{q}{p}\right)^{3} c \\ & u_{0}(5)-u_{0}(4)=\left(\frac{q}{p}\right)\left[u_{0}(4)-u_{0}(3)\right]=\left(\frac{q}{p}\right)^{4} c . \end{align} Therefore, there is a constant \(c\) such that (since \(u_{0}(0)=1\) ), \begin{align} & u_{0}(1)=1+c \\ & u_{0}(2)=\frac{q}{p} c+1+c \\ & u_{0}(3)=\left(\frac{q}{p}\right)^{2} c+\left(\frac{q}{p}\right) c+1+c \\ & u_{0}(4)=\left(\frac{q}{p}\right)^{3} c+\left(\frac{q}{p}\right)^{2} c+\left(\frac{q}{p}\right) c+1+c \\ & u_{0}(5)=\left(\frac{q}{p}\right)^{4} c+\left(\frac{q}{p}\right)^{3} c+\left(\frac{q}{p}\right)^{2} c+\left(\frac{q}{p}\right) c+1+c \end{align} To determine the constant \(c\) , we use the fact that \(u_{0}(5)=0\) . We see that \begin{align} 1+5 c=0 & \text { if } p=q=\frac{1}{2} \\ 1+c\left(\frac{1-(q / p)^{5}}{1-(q / p)}\right)=0 & \text { if } p \neq q \end{align} so that \[c=\left\{ \begin{aligned} &-\frac{1}{5} &&\text { if } p=q=\frac{1}{2} \\ &-\frac{1-(q / p)}{1-(q / p)^{5}} &&\text { if } p \neq q \end{aligned} \right.\] Consequently, for \(i=0,1, \ldots, 5\) \[u_{0}(i) =\left\{\begin{aligned} &1-\frac{i}{5} &&\text { if } p=q=\frac{1}{2} \\ &1-\frac{1-(q / p)^{i}}{1-(q / p)^{5}} &&\text { if } p \neq q. \end{aligned}\right.\tag{6.30}\] In particular, the probability that \(A\) will be ruined, given his initial fortune is 3 cents, is given by \[u_{0}(3) =\left\{ \begin{aligned} & \frac{2}{5} &&\text { if } p=q=\frac{1}{2} \\ & \frac{(q / p)^{3}-(q / p)^{5}}{1-(q / p)^{5}}=\frac{q^{5}-p^{2} q^{3}}{q^{5}-p^{5}} && \text { if } p \neq q \end{aligned}\right.\] 
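As a rough cross-check, not part of the text, the closed form (6.30) can be compared with a Monte Carlo simulation of the walk; the value \(p=0.6\) below is assumed purely for illustration:

```python
import random

def ruin_probability_exact(i, p, n=5):
    """u_0(i) from (6.30): ruin starting from i cents out of n total."""
    q = 1.0 - p
    if abs(p - q) < 1e-12:
        return 1.0 - i / n
    rho = q / p
    return 1.0 - (1.0 - rho**i) / (1.0 - rho**n)

def ruin_probability_simulated(i, p, n=5, trials=200_000, seed=1):
    """Monte Carlo estimate of the same probability."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        x = i
        while 0 < x < n:
            x += 1 if rng.random() < p else -1
        ruined += (x == 0)
    return ruined / trials

p = 0.6   # assumed value, for illustration only
print(ruin_probability_exact(3, p))       # about 0.1896
print(ruin_probability_simulated(3, p))   # should be close to the exact value
```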

Exercises

6.1 . Compute the 2-step and 3-step transition probability matrices for the Markov chains whose transition probability matrices are given by

(i)

\[P=\left[\begin{array}{ll} \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} \end{array}\right]\] 

(ii)

\[P=\left[\begin{array}{ccc} \frac{1}{2} & \frac{1}{2} & 0 \\ \frac{1}{2} & \frac{1}{2} & 0 \\ 0 & \frac{1}{2} & \frac{1}{2} \end{array}\right]\] 

(iii)

\[P=\left[\begin{array}{ccc} \frac{1}{2} & \frac{1}{2} & 0 \\ \frac{1}{2} & \frac{1}{2} & 0 \\ 0 & 0 & 1 \end{array}\right]\] 

(iv)

\[P=\left[\begin{array}{ccc} \frac{1}{2} & \frac{1}{2} & 0 \\ 0 & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & 0 \end{array}\right]\] 

 

Answer

(i), (iii) \(P_{2}=P_{3}=P\) ;

 

(ii) \[P_{2}=\left[\begin{array}{ccc} \frac{1}{2} & \frac{1}{2} & 0 \\ \frac{1}{2} & \frac{1}{2} & 0 \\ \frac{1}{4} & \frac{1}{2} & \frac{1}{4} \end{array}\right], \quad P_{3}=\left[\begin{array}{ccc} \frac{1}{2} & \frac{1}{2} & 0 \\ \frac{1}{2} & \frac{1}{2} & 0 \\ \frac{3}{8} & \frac{1}{2} & \frac{1}{8} \end{array}\right]\] 

(iv) \[P_{2}=P_{3}=\left[\begin{array}{ccc} \frac{1}{4} & \frac{1}{2} & \frac{1}{4} \\ \frac{1}{4} & \frac{1}{2} & \frac{1}{4} \\ \frac{1}{4} & \frac{1}{2} & \frac{1}{4} \end{array}\right]\] 

6.2 . For each Markov chain in exercise 6.1, determine whether or not (i) it is ergodic, (ii) it has absorbing states.

6.3 . Find the stationary probabilities for each of the following ergodic Markov chains:

(i)

\(\left[\begin{array}{ll}\frac{2}{3} & \frac{1}{3} \\ \frac{1}{3} & \frac{2}{3}\end{array}\right]\) 

(ii)

\(\left[\begin{array}{ll}\frac{2}{3} & \frac{1}{3} \\ \frac{2}{3} & \frac{1}{3}\end{array}\right]\) 

(iii)

\(\left[\begin{array}{ll}0.99 & 0.01 \\ 0.01 & 0.99\end{array}\right]\) 

 

Answer

(i), (iii) \(\pi_{1}=\pi_{2}=\frac{1}{2}\) ; (ii) \(\pi_{1}=\frac{2}{3}, \pi_{2}=\frac{1}{3}\) .

 

6.4 . Find the stationary probabilities for each of the following ergodic Markov chains:

(i)

\[\left[\begin{array}{lll} \frac{1}{2} & \frac{1}{4} & \frac{1}{4} \\ \frac{1}{4} & \frac{1}{2} & \frac{1}{4} \\ \frac{1}{4} & \frac{1}{4} & \frac{1}{2} \end{array}\right]\] 

(ii)

\[\left[\begin{array}{lll} \frac{1}{4} & \frac{1}{4} & \frac{1}{2} \\ 0 & \frac{2}{3} & \frac{1}{3} \\ \frac{3}{4} & \frac{1}{4} & 0 \end{array}\right]\] 

(iii)

\[\left[\begin{array}{ccc} \frac{1}{3} & \frac{1}{4} & \frac{5}{12} \\ \frac{1}{3} & \frac{1}{4} & \frac{5}{12} \\ \frac{1}{3} & \frac{1}{4} & \frac{5}{12} \end{array}\right]\] 

6.5 . Consider a series of independent repeated tosses of a coin that has probability \(p>0\) of falling heads. Let us say that at time \(n\) we are in state \(s_{1}, s_{2}, s_{3}\) , or \(s_{4}\) , depending on whether the outcomes of tosses \(n-1\) and \(n\) were \((H, H),(H, T),(T, H)\) , or \((T, T)\) , respectively. Find the transition probability matrix \(P\) of this Markov chain. Also find \(P^{2}, P^{3}, P^{4}\) .

 

Answer

\(P\) has rows \((p, q, 0,0),(0,0, p, q),(p, q, 0,0),(0,0, p, q)\) . For \(n>1\) every row of \(P^{n}\) is \(\left(p^{2}, p q, p q, q^{2}\right)\) .

 

6.6 . Random walk with retaining barriers. Consider a straight line on which positions \(0,1,2, \ldots, 7\) are marked off. Consider a man who performs a random walk among the positions according to the following transition probability matrix: \[P=\left[\begin{array}{llllllll} q & p & 0 & 0 & 0 & 0 & 0 & 0 \\ q & 0 & p & 0 & 0 & 0 & 0 & 0 \\ 0 & q & 0 & p & 0 & 0 & 0 & 0 \\ 0 & 0 & q & 0 & p & 0 & 0 & 0 \\ 0 & 0 & 0 & q & 0 & p & 0 & 0 \\ 0 & 0 & 0 & 0 & q & 0 & p & 0 \\ 0 & 0 & 0 & 0 & 0 & q & 0 & p \\ 0 & 0 & 0 & 0 & 0 & 0 & q & p \end{array}\right]\] Prove that the Markov chain is ergodic. Find the stationary probabilities.

6.7 . Gambler’s ruin. Let two players \(A\) and \(B\) have 7 cents between them. Let \(A\) toss a coin, which has probability \(p\) of falling heads. On each toss he wins a penny if the coin falls heads and he loses a penny if the coin falls tails. If \(A\) ’s initial fortune is 3 cents, what is the probability that \(A\) ’s fortune will be 0 cents before it is 7 cents, so that \(A\) will be ruined?

 

Answer

\(\frac{4}{7}\) if \(p=q=\frac{1}{2}\) ; \(\left(q^{7}-p^{4} q^{3}\right) /\left(q^{7}-p^{7}\right)\) if \(p \neq q\) .

 

6.8 . Consider 2 urns, I and II, each of which contains 1 white and 1 red ball. One ball is drawn simultaneously from each urn and placed in the other urn. Let the probabilities that after \(n\) repetitions of this procedure urn \(I\) will contain 2 white balls, 1 white and 1 red, or 2 red balls be denoted by \(p_{n}, q_{n}\) , and \(r_{n}\) , respectively. Deduce formulas expressing \(p_{n+1}, q_{n+1}\) , and \(r_{n+1}\) in terms of \(p_{n}, q_{n}\) , and \(r_{n}\) . Show that \(p_{n}, q_{n}\) , and \(r_{n}\) tend to limiting values as \(n\) tends to infinity. Interpret these values.

6.9 . In exercise 6.8 find the most probable number of red balls in urn \(I\) after (i) 2, (ii) 6 exchanges.

 

Answer

\(1\) .