
Nicolas Privault

Notes on Markov Chains MAS328

[Cover figure: transition graph of the 5-state discrete-time Markov chain described in the Preface.]

This version: November 28, 2011 http://www.ntu.edu.sg/home/nprivault/indext.html



Notes on Markov Chains

Preface

The goal of MAS328 is to serve as an introduction to the modeling of time-dependent randomness. The prerequisites for this course consist of basic probabilistic concepts such as random variables, discrete distributions (binomial, hypergeometric and Poisson), continuous distributions (normal, exponential) and densities, expectation, independence, and conditional probabilities, cf. e.g. MAS215. Such topics can be regarded as belonging to the field of “static” probability, i.e. probability without time dependence, as opposed to the contents of this course, which are concerned with time dependence and random evolution.

This second version of the notes has benefited from numerous questions and comments from the MAS328 students of years 2010/2011 and 2011/2012.

The cover graph is an animation of a 5-state discrete-time Markov chain with transition matrix

P = ( 0    0.2  0.8  0    0
      0.4  0    0    0.6  0
      0.5  0    0    0.5  0
      0    0    0    0.4  0.6
      0    0.4  0.6  0    0  ).

This pdf file contains animations and external links that may require the use of Acrobat Reader for viewing.
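As an illustration only (not part of the original notes), the cover chain can be put into code. The matrix `P` below copies the entries of the cover graph, with the second row's missing mass assigned to state 4 so that every row sums to 1 (an assumption, since that entry is unclear in the source), and a short routine draws a sample path.

```python
import random

# Transition matrix of the 5-state cover chain (states 1..5 mapped to 0..4).
# Each row lists P(next = j | current = i) and sums to 1.
P = [
    [0.0, 0.2, 0.8, 0.0, 0.0],
    [0.4, 0.0, 0.0, 0.6, 0.0],  # assumption: row completed to sum to 1
    [0.5, 0.0, 0.0, 0.5, 0.0],
    [0.0, 0.0, 0.0, 0.4, 0.6],
    [0.0, 0.4, 0.6, 0.0, 0.0],
]

def step(i, rng):
    """Sample the next state from row i of P by inverting the cumulative row."""
    u, cum = rng.random(), 0.0
    for j, p in enumerate(P[i]):
        cum += p
        if u < cum:
            return j
    return len(P) - 1  # guard against floating-point rounding

def sample_path(n, start=0, seed=0):
    """Return the starting state followed by n successive states of the chain."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(n):
        path.append(step(path[-1], rng))
    return path

rows_sum_to_one = all(abs(sum(row) - 1.0) < 1e-12 for row in P)
```

Replacing `sample_path` by repeated matrix-vector products would give the n-step distributions instead of a single trajectory.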



Contents

Introduction

1 Probability Background
  1.1 Probability Spaces
  1.2 Events
  1.3 Probability Measures
  1.4 Conditional Probabilities and Independence
  1.5 Random Variables
  1.6 Probability Distributions
  1.7 Expectation of a Random Variable
  1.8 Conditional Expectation
  1.9 Characteristic and Generating Functions, Laplace Transforms
  1.10 Exercises

2 Gambling Problems
  2.1 Ruin probability
  2.2 Mean game duration
  2.3 Exercises

3 Random Walks
  3.1 Mean and variance
  3.2 Distribution
  3.3 First returns to zero
  3.4 Exercises

4 Discrete-Time Markov Chains
  4.1 Markov property
  4.2 The two-state Markov chain
  4.3 Exercises

5 First Step Analysis
  5.1 Hitting probabilities
  5.2 Mean hitting and absorption times
  5.3 First return times
  5.4 Number of returns
  5.5 Exercises

6 Classification of States
  6.1 Communicating states
  6.2 Communicating class
  6.3 Recurrent states
  6.4 Transient states
  6.5 Positive and null recurrence
  6.6 Periodicity and aperiodicity
  6.7 Exercises

7 Limiting and Stationary Distributions
  7.1 Limiting Distributions
  7.2 Stationary Distributions
  7.3 Exercises

8 Branching Processes
  8.1 Definition and example
  8.2 Extinction probability
  8.3 Exercises

9 Continuous-Time Markov Chains
  9.1 The Poisson process
  9.2 Birth and death processes
  9.3 Continuous-time Markov chains
  9.4 The two-state continuous-time Markov chain
  9.5 Limiting and stationary distributions
  9.6 The embedded chain
  9.7 Absorption probabilities
  9.8 Mean absorption times
  9.9 Exercises

10 Spatial Poisson Processes
  10.1 Spatial Poisson processes
  10.2 Characteristic functions
  10.3 Transformations of Poisson measures
  10.4 Exercises

11 Reliability and Renewal Processes
  11.1 Survival probabilities
  11.2 Poisson process with time-dependent intensity
  11.3 Mean time to failure

12 Discrete-Time Martingales
  12.1 Definition and properties
  12.2 Ruin probabilities

Some Useful Identities

Solutions to the Exercises
  13.1 Chapter 1 - Probability Background
  13.2 Chapter 2 - Gambling Problems
  13.3 Chapter 3 - Random Walks
  13.4 Chapter 4 - Discrete-Time Markov Chains
  13.5 Chapter 5 - First Step Analysis
  13.6 Chapter 6 - Classification of States
  13.7 Chapter 7 - Limiting and Stationary Distributions
  13.8 Chapter 8 - Branching Processes
  13.9 Chapter 9 - Continuous-Time Markov Chains
  13.10 Chapter 10 - Spatial Poisson Processes

References
Index



Introduction

A stochastic process is a mathematical tool used for the modeling of time-dependent random phenomena. Here, the term “stochastic” means random, and “process” refers to the time-evolving state of a given system. Stochastic processes have applications in many fields, and are useful whenever one recognizes the role of randomness and the unpredictability of events occurring at random times in a physical, biological, or financial system.

In applications to physics, for example, one can mention phase transitions, atom emission, etc. In biology, the behavior over time of living beings is often subject to randomness, at least when the observer has only partial information. This point is important, as it shows that the notion of randomness is linked to the concept of information: what appears random to one observer may not be random to another observer equipped with more information. Think for example of an observer watching the apparently random behavior of cars turning at a crossroad, vs the point of view of the car drivers, each of whom is acting according to their own decisions. In finance, the importance of modeling time-dependent random phenomena is quite clear, as no one can efficiently predict the future moves of risky assets.

The concrete outcome of such modeling lies in the computation of expectations or expected values, which often turn out to be more useful than the probability values themselves. The long-term statistical behavior of such systems is another issue of interest.

Basically, a stochastic process is a time-dependent family (Xt)t∈T of random variables, where t is a time index belonging to a parameter set T. That is, instead of considering a single random variable X, one considers a whole family of random variables (Xt)t∈T, adding another level of technical difficulty. The index set T can be finite (e.g. T = {1, 2, . . . , N}), countably infinite (e.g. T = N), or even uncountable (e.g. T = [0, 1], T = R+).

The case of uncountable T corresponds to continuous-time stochastic processes, and this setting is the most difficult theoretically. In fact, a serious treatment of



continuous-time processes requires a background in measure theory, which is outside the scope of this course. Measure theory is the general study of measures, which include probability measures as a particular case, and allows for a rigorous treatment of integrals using integration in the Lebesgue sense. The Lebesgue integral is a powerful tool that allows one to integrate functions and random variables under minimal technical conditions. In this course we mainly work in the discrete-time framework, which mostly does not require the use of measure theory.

That being said, the definition of a stochastic process (Xt)t∈T remains vague at this stage, since virtually any family of random variables can be called a stochastic process. In addition, working at such a level of generality, without imposing any structure or properties on the processes under consideration, can be of little practical use. As we will see later on, stochastic processes can be classified into two main families:

- Markov processes. Roughly speaking, a process is Markov when its statistical behavior after time t can be recovered from the value Xt of the process at time t. In particular, the values Xs of the process at times s < t have no influence on this behavior once the value of Xt is known.

- Martingales. Originally, a martingale is a strategy designed to win repeatedly in a casino. In mathematics, a stochastic process is a martingale if the best possible estimate at time t of its future value Xs at time s > t is simply given by Xt. Here we need to carefully define the meaning of “best possible estimate”, and for this we need the tool of conditional expectation. Martingales are useful in physics and finance, where they are linked to the notion of equilibrium.

The outline of the course is as follows.
After reviewing in Chapter 1 the probabilistic tools needed, we will turn to simple gambling problems in Chapter 2, due to their practical usefulness and to the fact that they only require a minimal theoretical background. Next, in Chapter 3 we will turn to the study of random walks, which can be defined as discrete-time processes with independent increments, without requiring much abstract formalism. In Chapters 4, 5, 6 and 7 we will then review the general framework of Markov chains in discrete time, which includes the gambling process and the simple random walk as particular cases. Branching processes are other examples of discrete-time Markov processes which have important applications in life sciences, and they are considered in Chapter 8. Then in Chapter 9 we consider Markov chains in continuous time, including birth and death processes. Spatial Poisson processes, which can be considered as stochastic


processes which are not indexed by a time parameter, are presented in Chapter 10. Reliability and renewal processes, which are important engineering applications of Markov chains, are dealt with in Chapter 11. Martingales are considered in Chapter 12. All the stochastic processes considered in these notes have discrete state spaces and discontinuous trajectories. Brownian motion is the first main example of a stochastic process with continuous trajectories in continuous time, and at this stage it is not covered in this document. Next we consider some examples of simulated paths of stochastic processes (source: Wikipedia). The graphs presented are for illustration purposes only; not all such processes are within the scope of this course. Example: Standard Brownian motion, d = 1.

Fig. 0.1: Sample trajectory of Brownian motion. Example: Standard Brownian motion, d = 3.


Fig. 0.2: Sample trajectory of Brownian motion. Example: Drifted Brownian motion, d = 1.

Fig. 0.3: Drifted Brownian motion, d = 1. Example: Poisson process, d = 1.


Fig. 0.4: Sample trajectories of a Poisson process. Example: Compound Poisson process, d = 1.

Fig. 0.5: Sample trajectories of a compound Poisson process. Example: Gamma process, d = 1.


Fig. 0.6: Sample trajectories of a gamma process. Example: Stable process, d = 1.

Fig. 0.7: Sample trajectories of a stable process. Example: Cauchy process, d = 1.


Fig. 0.8: Sample trajectory of a Cauchy process. Example: Variance Gamma process.


Fig. 0.9: Sample trajectory of a variance gamma process. Example: Negative Inverse Gaussian process.

Fig. 0.10: Sample trajectory of a negative inverse Gaussian process.



Chapter 1

Probability Background

We review a number of basic probabilistic tools that are needed for the study of stochastic processes.

1.1 Probability Spaces

We will need the following notation from set theory. Given two abstract sets A and B, “A ⊂ B” means that A is contained in B. The property that ω belongs to the set A is denoted by “ω ∈ A”. The finite set made of n elements ω1, . . . , ωn is denoted by {ω1, . . . , ωn}. Note that the element ω is in fact distinct from the set {ω}.

A probability space is an abstract set Ω made of the possible outcomes of a random experiment.

Examples:
• Coin tossing: Ω = {H, T}.
• Rolling one die: Ω = {1, 2, 3, 4, 5, 6}.
• Picking one card at random in a pack of 52: Ω = {1, 2, . . . , 52}.
• An integer-valued random outcome: Ω = N. In this case the outcome ω ∈ N can be the random number of trials needed until some event occurs.
• A non-negative, real-valued outcome: Ω = R+.


In this case the outcome ω ∈ R+ may represent the (non-negative) value of a stock price, or a continuous random time.
• A random continuous parameter (such as time, weather, price, or temperature): Ω = R.
• Random choice of a continuous path in the space Ω = C(R+) of all continuous functions on R+. In this case, ω ∈ Ω is a function ω : R+ → R, and a typical example is the graph t ↦ ω(t) of a stock price over time.

Product spaces: Probability spaces can be built as product spaces and used for the modeling of repeated random experiments.
• Rolling two dice: Ω = {1, 2, 3, 4, 5, 6} × {1, 2, 3, 4, 5, 6}. In this case a typical element of Ω is written as ω = (k, l) with k, l ∈ {1, 2, 3, 4, 5, 6}.
• A finite number n of real-valued samples: Ω = Rn. In this case the outcome ω is a vector ω = (x1, . . . , xn) ∈ Rn with n components.

Note that, to some extent, the more complex Ω is, the better it fits a practical and useful situation: e.g. Ω = {H, T} corresponds to a simple coin-tossing experiment, while Ω = C(R+) can be applied to the modeling of stock markets. On the other hand, in many situations, and especially the most complex ones, we will not attempt to specify Ω explicitly.

1.2 Events

An event is a collection of outcomes, which is represented by a subset of Ω. The collections G of events that we will consider are called σ-algebras, and are assumed to satisfy the following conditions:


(i) ∅ ∈ G,
(ii) for every countable sequence An ∈ G, n ≥ 1, we have ⋃_{n≥1} An ∈ G,
(iii) A ∈ G ⟹ Ω \ A ∈ G.

Note that in set-theoretic notation, an event A is a subset of Ω, i.e. A ⊂ Ω, while it is an element of F, i.e. A ∈ F. Given G a σ-algebra on Ω, a random variable X : Ω → R is said to be G-measurable if

{X ≤ x} = {ω ∈ Ω : X(ω) ≤ x} ∈ G, for all x ∈ R.

In this case we will also say that X depends only on the information contained in G. In the context of stochastic processes, two σ-algebras G and F such that G ⊂ F will refer to two different amounts of information, the amount of information associated to G being lower than the one associated to F.

Example: Ω = {1, 2, 3, 4, 5, 6}. The event A = {2, 4, 6} corresponds to “the result of the experiment is an even number”. This formalism is helpful to describe events in a short and precise way. The collection of all events in Ω will generally be denoted by F.

Example: Take Ω = {H, T} × {H, T} = {(H, H), (H, T), (T, H), (T, T)}. In this case, the collection F of all possible events is given by

F = { ∅, {(H, H)}, {(T, T)}, {(H, T)}, {(T, H)},
{(T, T), (H, H)}, {(H, T), (T, H)}, {(H, T), (T, T)}, {(T, H), (T, T)}, {(H, T), (H, H)}, {(T, H), (H, H)},
{(H, H), (T, T), (T, H)}, {(H, H), (T, T), (H, T)}, {(H, T), (T, H), (H, H)}, {(H, T), (T, H), (T, T)}, Ω }.

The empty set ∅ and the full space Ω are considered as events, but they are of less importance, because Ω corresponds to “any outcome may occur” while ∅ corresponds to an absence of outcome, or no experiment.
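The three conditions can be verified mechanically for a finite collection of subsets. A minimal sketch (the collection G below is our own example, not taken from the text); for a finite collection, closure under countable unions reduces to closure under pairwise unions:

```python
from itertools import combinations

Omega = frozenset({1, 2, 3, 4, 5, 6})
even, odd = frozenset({2, 4, 6}), frozenset({1, 3, 5})
G = {frozenset(), even, odd, Omega}  # sigma-algebra generated by {even, odd}

def is_sigma_algebra(G, Omega):
    """Check conditions (i)-(iii) for a finite collection G of subsets of Omega."""
    if frozenset() not in G:                # (i) contains the empty set
        return False
    if any(Omega - A not in G for A in G):  # (iii) closed under complement
        return False
    # (ii) closed under unions: pairwise closure suffices for a finite G
    return all(A | B in G for A, B in combinations(G, 2))
```

Dropping `odd` from G, for example, breaks condition (iii), and the check detects it.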


Note that, taking n = 4, the set F of all events above contains

\binom{n}{0} = 1 event of cardinality 0,
\binom{n}{1} = 4 events of cardinality 1,
\binom{n}{2} = 6 events of cardinality 2,
\binom{n}{3} = 4 events of cardinality 3,
\binom{n}{4} = 1 event of cardinality 4,

for a total of 2^n = \sum_{k=0}^{n} \binom{n}{k} = 1 + 4 + 6 + 4 + 1 = 16 events.

Exercise: Write down the set of all events of Ω = {H, T}.

Note also that (H, T) is different from (T, H), whereas {(H, T), (T, H)} is equal to {(T, H), (H, T)}. In addition we will usually make a distinction between the outcome ω ∈ Ω and its associated event {ω} ∈ F, which satisfies {ω} ⊂ Ω.
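This count can be reproduced by enumerating the collection F of all subsets of Ω = {H, T} × {H, T} (a sketch for illustration; the function names are our own):

```python
from itertools import combinations

def all_events(omega):
    """Return the collection F of all events, i.e. every subset of omega."""
    omega = list(omega)
    return [frozenset(c) for r in range(len(omega) + 1)
            for c in combinations(omega, r)]

Omega = [(a, b) for a in "HT" for b in "HT"]  # {H,T} x {H,T}
F = all_events(Omega)

n_events = len(F)  # 2**4 = 16, as counted above
by_cardinality = [sum(1 for A in F if len(A) == k) for k in range(5)]
# by_cardinality lists the binomial coefficients C(4, k): [1, 4, 6, 4, 1]
```

For the exercise, `all_events("HT")` lists the 2^2 = 4 events of a single coin toss.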

1.3 Probability Measures

A probability measure is a mapping P : F → [0, 1] that assigns a probability P(A) ∈ [0, 1] to any event A, with the properties

a) P(Ω) = 1, and
b) P( ⋃_{n=1}^∞ An ) = Σ_{n=1}^∞ P(An), whenever Ak ∩ Al = ∅, k ≠ l.

A property or event is said to hold P-almost surely (also written P-a.s.) if it holds with probability equal to one. In particular we have

P(A1 ∪ · · · ∪ An) = P(A1) + · · · + P(An)

when the subsets A1, . . . , An of Ω are disjoint, and


P(A ∪ B) = P(A) + P(B) if A ∩ B = ∅. In the general case we can write

P(A ∪ B) = P(A) + P(B) − P(A ∩ B).

The triple

(Ω, F, P)   (1.1)

was introduced by A.N. Kolmogorov (1903-1987), and is generally referred to as the Kolmogorov framework. In addition we have the following convergence properties.

1. Let (An)n∈N be an increasing sequence of events, i.e. An ⊂ An+1, n ∈ N. Then we have

P( ⋃_{n∈N} An ) = lim_{n→∞} P(An).

2. Let (An)n∈N be a decreasing sequence of events, i.e. An+1 ⊂ An, n ∈ N. Then we have

P( ⋂_{n∈N} An ) = lim_{n→∞} P(An).
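The additivity rules above can be checked by direct enumeration on a finite Ω; the die and the events A, B below are our own example:

```python
from fractions import Fraction

Omega = set(range(1, 7))  # one fair die

def prob(A):
    """Uniform probability of the event A on Omega."""
    return Fraction(len(A & Omega), len(Omega))

A = {2, 4, 6}   # "the result is even"
B = {4, 5, 6}   # "the result is at least 4"

lhs = prob(A | B)
rhs = prob(A) + prob(B) - prob(A & B)  # inclusion-exclusion
```

Using exact `Fraction` arithmetic makes the identity an equality rather than a floating-point approximation.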

1.4 Conditional Probabilities and Independence

We start with an example. Consider a population Ω = M ∪ W made of a set M of men and a set W of women. Here we can consider the σ-algebra F = {Ω, ∅, W, M}, which corresponds to the information given by gender. After polling the population, e.g. for a market survey, it turns out that a proportion p ∈ [0, 1] of the population declares to like apples, while a proportion 1 − p declares to dislike apples. Let L ⊂ Ω denote the subset of individuals who like apples, while D ⊂ Ω denotes the subset of individuals who dislike apples, with p = P(L) and 1 − p = P(D). Suppose now that p = 60% of the population likes apples. It may be interesting to get more precise information and to determine
- the relative proportion of women who like apples, and
- the relative proportion of men who like apples.
Given that M ∩ L (resp. W ∩ L) denotes the set of men (resp. women) who like apples, these relative proportions are respectively given by


P(L ∩ M)/P(M) and P(L ∩ W)/P(W).

Here, P(L ∩ W)/P(W) represents the probability that a woman picked at random in W likes apples, and P(L ∩ M)/P(M) represents the probability that a man picked at random in M likes apples. Thus these two proportions are also interpreted as conditional probabilities; for example, P(L ∩ M)/P(M) denotes the probability that an individual likes apples given that he is a man.

More generally, given any two events A, B ⊂ Ω with P(B) ≠ 0, we call

P(A | B) := P(A ∩ B) / P(B)

the probability of A given B, or conditionally to B. Note that if P(B) = 1 we have P(A ∩ B^c) ≤ P(B^c) = 0, hence P(A ∩ B) = P(A) and P(A | B) = P(A). We also recall the following property:

P( B ∩ ⋃_{n=1}^∞ An ) = Σ_{n=1}^∞ P(B ∩ An) = Σ_{n=1}^∞ P(B | An) P(An),

provided Ai ∩ Aj = ∅, i ≠ j, and P(An) > 0, n ≥ 1. Similarly we have

P( B ∩ ⋃_{n=1}^∞ An ) = Σ_{n=1}^∞ P(B ∩ An) = Σ_{n=1}^∞ P(An | B) P(B).

This also shows that conditional probability measures are probability measures, in the sense that if P(B) > 0 we have

a) P(Ω | B) = 1, and
b) P( ⋃_{n=1}^∞ An | B ) = Σ_{n=1}^∞ P(An | B), whenever Ak ∩ Al = ∅, k ≠ l.

In particular, if ⋃_{n=1}^∞ An = Ω we get

P(B) = Σ_{n=1}^∞ P(B ∩ An) = Σ_{n=1}^∞ P(An | B) P(B),

and

P(B) = Σ_{n=1}^∞ P(B ∩ An) = Σ_{n=1}^∞ P(B | An) P(An),


provided Ai ∩ Aj = ∅, i ≠ j, and P(An) > 0, n ≥ 1. However, in general we have

P( A | ⋃_{n=1}^∞ Bn ) ≠ Σ_{n=1}^∞ P(A | Bn),

even when Bk ∩ Bl = ∅, k ≠ l. Indeed, taking for example A = Ω = B1 ∪ B2 with B1 ∩ B2 = ∅ and P(B1) = P(B2) = 1/2, we have

1 = P(Ω | B1 ∪ B2) ≠ P(Ω | B1) + P(Ω | B2) = 2.

Finally, two events A and B are said to be independent if P(A | B) = P(A), i.e. if P(A ∩ B) = P(A) P(B).
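The apples example can be made concrete with a small synthetic population (the head counts below are invented for illustration, chosen so that P(L) = 60% as in the text); the code checks the law of total probability P(L) = P(L | M) P(M) + P(L | W) P(W):

```python
from fractions import Fraction

# Hypothetical population: 40 men (20 like apples), 60 women (40 like apples).
population = ([("M", True)] * 20 + [("M", False)] * 20
              + [("W", True)] * 40 + [("W", False)] * 20)

def P(event):
    """Probability of an event under the uniform measure on the population."""
    return Fraction(sum(1 for x in population if event(x)), len(population))

def P_cond(event, given):
    """Conditional probability P(A | B) = P(A and B) / P(B)."""
    return P(lambda x: event(x) and given(x)) / P(given)

likes = lambda x: x[1]
man = lambda x: x[0] == "M"
woman = lambda x: x[0] == "W"

total = P_cond(likes, man) * P(man) + P_cond(likes, woman) * P(woman)
```

Here `total` recovers P(likes) = 3/5 exactly, since M and W partition the population.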

1.5 Random Variables

A random variable is a mapping

X : Ω −→ R, ω ↦ X(ω),

where Ω is a probability space. Given a random variable X : Ω → R and a measurable subset A of R, we denote by {X ∈ A} the event

{X ∈ A} = {ω ∈ Ω : X(ω) ∈ A}.

Examples:
- Let Ω = {1, 2, 3, 4, 5, 6} × {1, 2, 3, 4, 5, 6}, and consider the mapping

X : Ω −→ R, (k, l) ↦ k + l.


Then X is a random variable giving the sum of the numbers appearing on each die.
- The time needed every day to travel from home to work or school is a random variable, as the precise value of this time may change from day to day under unexpected circumstances.
- The price of a risky asset is a random variable.

In the sequel we will often use the notion of the indicator function 1_A of an event A. The indicator function 1_A is the random variable

1_A : Ω −→ {0, 1}, ω ↦ 1_A(ω),

defined by

1_A(ω) = 1 if ω ∈ A, and 1_A(ω) = 0 if ω ∉ A,

with the property 1_{A∩B} = 1_A 1_B. In addition, any Bernoulli random variable X : Ω −→ {0, 1} can be written as an indicator function X = 1_A on Ω with A = {X = 1} = {ω ∈ Ω : X(ω) = 1}. For example, if Ω = N and A = {k}, then for all l ∈ N we have 1_{k}(l) = 1 if k = l, and 0 if k ≠ l. If X is a random variable we also let 1_{X=n} = 1 if X = n, 0 if X ≠ n, and 1_{X<n} = 1 if X < n, 0 if X ≥ n.
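Indicator functions translate directly into code; a small sketch (our own illustration) checking the product rule 1_{A∩B} = 1_A 1_B pointwise:

```python
def indicator(A):
    """Return the indicator function 1_A of the event A."""
    return lambda omega: 1 if omega in A else 0

Omega = range(1, 7)
A, B = {2, 4, 6}, {4, 5, 6}
one_A, one_B, one_AB = indicator(A), indicator(B), indicator(A & B)

# 1_{A∩B}(w) = 1_A(w) * 1_B(w) for every outcome w
product_rule = all(one_AB(w) == one_A(w) * one_B(w) for w in Omega)
```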

1.6 Probability Distributions

The probability distribution of a random variable X : Ω −→ R is the collection


{P(X ∈ A) : A a measurable subset of R}.

In fact the distribution of X can be reduced to the knowledge of either

{P(a < X ≤ b) : a < b ∈ R}, or {P(X ≤ a) : a ∈ R}, or {P(X ≥ a) : a ∈ R}.

Two random variables X and Y are said to be independent under the probability P if their probability distributions satisfy

P(X ∈ A, Y ∈ B) = P(X ∈ A) P(Y ∈ B)

for all measurable¹ subsets A and B of R.

¹ This concept will not be defined in this course.

Distributions Admitting a Density

The distribution of X is given by

P(a ≤ X ≤ b) = ∫_a^b f(x) dx,

where the function f : R → R+ is called the density of the distribution of X. In this case we also say that the distribution of X is continuous, or that X is a continuous random variable. This, however, does not imply that the density function f : R → R+ is itself continuous. In particular we always have

∫_{−∞}^∞ f(x) dx = P(−∞ ≤ X ≤ ∞) = 1

for all probability density functions f : R → R+. The density fX can be recovered from the distribution functions

x ↦ P(X ≤ x) = ∫_{−∞}^x fX(s) ds, and x ↦ P(X ≥ x) = ∫_x^∞ fX(s) ds,


as

fX(x) = (∂/∂x) ∫_{−∞}^x fX(s) ds = −(∂/∂x) ∫_x^∞ fX(s) ds, x ∈ R.

Examples:

• The Gaussian distribution. The density of the standard normal distribution is given by

f(x) = (1/√(2π)) e^{−x²/2}.

More generally, X has a Gaussian distribution with mean µ ∈ R and variance σ² > 0 (in this case we write X ∼ N(µ, σ²)) if

f(x) = (1/√(2πσ²)) e^{−(x−µ)²/(2σ²)}.

• The exponential distribution with parameter λ > 0. In this case,

f(x) = λ e^{−λx} for x ≥ 0, and f(x) = 0 for x < 0.

We also have

P(X > t) = e^{−λt}, t ∈ R+.

In addition, if X1, . . . , Xn are independent exponentially distributed random variables with parameters λ1, . . . , λn, we have

P(min(X1, . . . , Xn) > t) = P(X1 > t, . . . , Xn > t) = P(X1 > t) · · · P(Xn > t) = e^{−t(λ1 + ··· + λn)}, t ∈ R+,   (1.2)

hence min(X1, . . . , Xn) is an exponentially distributed random variable with parameter λ1 + · · · + λn. We also have

P(X1 < X2) = P(X1 ≤ X2) = λ1 λ2 ∫_0^∞ ∫_0^y e^{−λ1 x − λ2 y} dx dy = λ1 / (λ1 + λ2).   (1.3)
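Identities (1.2) and (1.3) lend themselves to a quick Monte Carlo check (a sketch for illustration; parameter values and sample size are arbitrary):

```python
import math
import random

def check_exponential_identities(lam1=1.0, lam2=2.0, t=0.5, n=200_000, seed=42):
    """Estimate P(min(X1, X2) > t) and P(X1 < X2) for independent
    exponentials X1 ~ Exp(lam1), X2 ~ Exp(lam2)."""
    rng = random.Random(seed)
    min_gt_t = less = 0
    for _ in range(n):
        x1, x2 = rng.expovariate(lam1), rng.expovariate(lam2)
        min_gt_t += min(x1, x2) > t
        less += x1 < x2
    return min_gt_t / n, less / n

p_min, p_less = check_exponential_identities()
target_min = math.exp(-(1.0 + 2.0) * 0.5)  # (1.2): e^{-(lam1 + lam2) t}
target_less = 1.0 / (1.0 + 2.0)            # (1.3): lam1 / (lam1 + lam2)
```

Both estimates should agree with the closed-form targets up to Monte Carlo error of order n^{-1/2}.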


• The gamma distribution. In this case,

f(x) = (a^λ / Γ(λ)) x^{λ−1} e^{−ax} for x ≥ 0, and f(x) = 0 for x < 0,

where a > 0 and λ > 0 are parameters and

Γ(λ) = ∫_0^∞ x^{λ−1} e^{−x} dx, λ > 0,

is the Gamma function.

• The Cauchy distribution. In this case we have

f(x) = 1 / (π (1 + x²)), x ∈ R.

• The lognormal distribution. In this case,

f(x) = (1/(xσ√(2π))) e^{−(µ − log x)²/(2σ²)} for x > 0, and f(x) = 0 for x ≤ 0.

Exercise: For each of the above probability density functions, check that the condition

∫_{−∞}^∞ f(x) dx = 1

is satisfied.

Remark 1. Note that if the distribution of X admits a density, then for all a ∈ R we have

P(X = a) = ∫_a^a f(x) dx = 0,   (1.4)

and this is not a contradiction. In particular this shows that

P(a ≤ X ≤ b) = P(X = a) + P(a < X ≤ b) = P(a < X ≤ b) = P(a < X < b), for a ≤ b.

In practice, Property (1.4) appears for example in the framework of lottery games with a large number of participants, in which a given number


“a” chosen in advance has a very low (almost zero) probability of being chosen.

Given two continuous random variables X : Ω → R and Y : Ω → R we can form the R²-valued random variable (X, Y) defined by

(X, Y) : Ω −→ R², ω ↦ (X(ω), Y(ω)).

We say that (X, Y) admits a joint probability density f_{(X,Y)} : R² → R+ when

P((X, Y) ∈ A × B) = ∫_A ∫_B f_{(X,Y)}(x, y) dx dy

for all measurable subsets A, B of R. The density f_{(X,Y)} can be recovered from the distribution functions

(x, y) ↦ P(X ≤ x, Y ≤ y) = ∫_{−∞}^x ∫_{−∞}^y f_{(X,Y)}(s, t) ds dt,

and

(x, y) ↦ P(X ≥ x, Y ≥ y) = ∫_x^∞ ∫_y^∞ f_{(X,Y)}(s, t) ds dt,

as

f_{(X,Y)}(x, y) = (∂²/∂x∂y) ∫_{−∞}^x ∫_{−∞}^y f_{(X,Y)}(s, t) ds dt = (∂²/∂x∂y) ∫_x^∞ ∫_y^∞ f_{(X,Y)}(s, t) ds dt,

x, y ∈ R. The probability densities fX : R → R+ and fY : R → R+ of X : Ω → R and Y : Ω → R are called the marginal densities of (X, Y) and are given by

fX(x) = ∫_{−∞}^∞ f_{(X,Y)}(x, y) dy, x ∈ R,

and

fY(y) = ∫_{−∞}^∞ f_{(X,Y)}(x, y) dx, y ∈ R.
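As a numerical illustration (the joint density below, that of two independent exponentials, is our own example), the marginal fX can be recovered by integrating the joint density over y:

```python
import math

lam1, lam2 = 1.0, 2.0

def f_joint(x, y):
    """Joint density of two independent exponential random variables."""
    return lam1 * lam2 * math.exp(-lam1 * x - lam2 * y) if x >= 0 and y >= 0 else 0.0

def marginal_x(x, dy=1e-3, y_max=20.0):
    """f_X(x) as a midpoint Riemann sum of f_(X,Y)(x, y) over y."""
    n = int(y_max / dy)
    return dy * sum(f_joint(x, (k + 0.5) * dy) for k in range(n))

x0 = 0.7
approx = marginal_x(x0)
exact = lam1 * math.exp(-lam1 * x0)  # the exponential(lam1) density at x0
```

The truncation at `y_max` and the step `dy` are arbitrary accuracy choices; the tail beyond y = 20 contributes only about e^{-40}.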

The conditional density f_{X|Y=y} : R → R+ of X given Y = y is given by

f_{X|Y=y}(x) = f_{(X,Y)}(x, y) / fY(y), x ∈ R,

provided fY(y) > 0.

Discrete Distributions

We only consider integer-valued random variables, i.e. the distribution of X is given by the values of P(X = k), k ∈ N. Examples:

• The Bernoulli distribution. We have

P(X = 1) = p   and   P(X = 0) = 1 − p,   (1.5)

where p ∈ [0, 1] is a parameter.

• The binomial distribution. We have

P(X = k) = (n choose k) p^k (1 − p)^{n−k},   k = 0, 1, . . . , n,

where n ≥ 1 and p ∈ [0, 1] are parameters.

• The geometric distribution. We have

P(X = k) = (1 − p) p^k,   k ∈ N,   (1.6)

where p ∈ (0, 1) is a parameter. Note that if (Xk)k∈N is a sequence of independent Bernoulli random variables with distribution (1.5), then the random variable

X := inf{k ∈ N : Xk = 0}

has the geometric distribution (1.6).

• The negative binomial distribution (or Pascal distribution). We have

P(X = k) = ((k + r − 1) choose (r − 1)) (1 − p)^r p^k,   k ∈ N,

where p ∈ (0, 1) and r ≥ 1 are parameters. Note that the negative binomial distribution recovers the geometric distribution when r = 1.

• The Poisson distribution.


We have

P(X = k) = e^{−λ} λ^k / k!,   k ∈ N,

where λ > 0 is a parameter.

Remark 2. The distribution of a discrete random variable cannot admit a density. If this were the case, by Remark 1 we would have P(X = k) = 0 for all k ∈ N, and

1 = P(X ∈ R) = P(X ∈ N) = Σ_{k=0}^∞ P(X = k) = 0,

which is a contradiction.

Given two discrete random variables X and Y, the conditional distribution of X given Y = k is given by

P(X = n | Y = k) = P(X = n and Y = k) / P(Y = k),   n ∈ N.

1.7 Expectation of a Random Variable The expectation of a random variable X is the mean, or average value, of X. In practice, expectations can be even more useful than probabilities. For example, knowing that a given equipment (such as a bridge) has a failure probability of 1.78493 out of a billion can be of less practical use than knowing the expected lifetime (e.g. 200000 years) of that equipment. For example, the time T (ω) to travel from home to work/school can be a random variable with a new outcome and value every day, however we usually refer to its expectation IE[T ] rather than to its sample values that may change from day to day. The notion of expectation takes its full meaning under conditioning. For example, the expected return of a random asset usually depends on information such as economic data, location, etc. In this case, replacing the expectation by a conditional expectation will provide a better estimate of the expected value. For example, life expectancy is a natural example of a conditional expectation since it typically depends on location, gender, and other parameters. In general, the expectation of the indicator function 1A is defined as


IE[1A] = P(A),

for any event A. For a Bernoulli random variable X : Ω −→ {0, 1} with parameter p ∈ [0, 1], written as X = 1A with A = {X = 1}, we have

p = P(X = 1) = P(A) = IE[1A] = IE[X].

Discrete Distributions

Next, let X : Ω → N be a discrete random variable. The expectation IE[X] of X is defined as the sum

IE[X] = Σ_{k=0}^∞ k P(X = k),

in which the possible values k ∈ N of X are weighted by their probabilities. More generally we have

IE[φ(X)] = Σ_{k=0}^∞ φ(k) P(X = k),

for all sufficiently summable functions φ : N → R. The expectation of the indicator function X = 1A can be recovered as

IE[1A] = 0 × P(Ω \ A) + 1 × P(A) = P(A).

Note that the expectation is linear, i.e. we have

IE[aX + bY] = a IE[X] + b IE[Y],   a, b ∈ R,   (1.7)

provided IE[|X|] + IE[|Y|] < ∞. The conditional expectation of X : Ω → N given an event A is defined by

IE[X | A] = Σ_{k=0}^∞ k P(X = k | A),

with

IE[X | A] = (1/P(A)) Σ_{k=0}^∞ k P({X = k} ∩ A)
 = (1/P(A)) Σ_{k=0}^∞ k IE[1_{{X=k} ∩ A}]
 = (1/P(A)) Σ_{k=0}^∞ k IE[1_{X=k} 1_A]
 = (1/P(A)) IE[1_A Σ_{k=0}^∞ k 1_{X=k}]
 = (1/P(A)) IE[X 1_A],   (1.8)

where we used the relation

X = Σ_{k=0}^∞ k 1_{X=k},

which holds since X takes only integer values. If X is independent of A (i.e. P({X = k} ∩ A) = P(X = k)P(A), k ∈ N) we have IE[X 1_A] = IE[X]P(A), hence IE[X | A] = IE[X]. In particular, taking X = 1_B we also have

IE[1_B | A] = 0 × P(1_B = 0 | A) + 1 × P(1_B = 1 | A) = P(B | A).

One can also define the conditional expectation of X given that {Y = k}, as

IE[X | Y = k] = Σ_{n=0}^∞ n P(X = n | Y = k).

In general we have

IE[X] = IE[IE[X | Y]] = Σ_{k=0}^∞ IE[X | Y = k] P(Y = k),   (1.9)

since

IE[IE[X | Y]] = Σ_{k=0}^∞ IE[X | Y = k] P(Y = k)
 = Σ_{k=0}^∞ Σ_{n=0}^∞ n P(X = n | Y = k) P(Y = k)
 = Σ_{n=0}^∞ Σ_{k=0}^∞ n P(X = n and Y = k)
 = Σ_{n=0}^∞ n P(X = n)

= IE[X].

The relation IE[X] = IE[IE[X | Y]] is sometimes referred to as the tower property. In particular the expectation of a random sum Σ_{k=1}^Y Xk, where (Xk)k∈N is a sequence of random variables, can be computed as

IE[Σ_{k=1}^Y Xk] = Σ_{n=0}^∞ IE[Σ_{k=1}^Y Xk | Y = n] P(Y = n)
 = Σ_{n=0}^∞ IE[Σ_{k=1}^n Xk | Y = n] P(Y = n),

and if Y is independent of (Xk)k∈N this yields

IE[Σ_{k=1}^Y Xk] = Σ_{n=0}^∞ IE[Σ_{k=1}^n Xk] P(Y = n).

Similarly, for a random product we will have

IE[Π_{k=1}^Y Xk] = Σ_{n=0}^∞ IE[Π_{k=1}^n Xk] P(Y = n).

Example: The life expectancy in Singapore is IE[T] = 80 years overall, where T denotes the lifetime of an individual chosen at random. Let G ∈ {m, w} denote the gender of that individual. The statistics show that

IE[T | G = w] = 81.9   and   IE[T | G = m] = 78,

and we have

80 = IE[T] = IE[IE[T | G]]
 = P(G = w) IE[T | G = w] + P(G = m) IE[T | G = m]
 = 81.9 × P(G = w) + 78 × P(G = m)
 = 81.9 × (1 − P(G = m)) + 78 × P(G = m),

showing that

P(G = m) = (81.9 − 80)/(81.9 − 78) = 1.9/3.9 ≈ 0.487.
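The computation above amounts to inverting the tower property; a two-line check with the same figures:

```python
# Tower property: 80 = 81.9 * (1 - P(G = m)) + 78 * P(G = m),
# solved here for P(G = m)
p_m = (81.9 - 80) / (81.9 - 78)
print(round(p_m, 3))  # prints 0.487
```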

Distributions Admitting a Density

Given a random variable X whose distribution admits a density f : R → R+ we have

IE[X] = ∫_{−∞}^∞ x f(x) dx,

and more generally,

IE[φ(X)] = ∫_{−∞}^∞ φ(x) f(x) dx,

for all sufficiently integrable functions φ on R. For example, if X has a standard normal distribution we have

IE[φ(X)] = ∫_{−∞}^∞ φ(x) e^{−x²/2} dx/√(2π).

In case X has a Gaussian distribution with mean μ ∈ R and variance σ² > 0 we get

IE[φ(X)] = (1/√(2πσ²)) ∫_{−∞}^∞ φ(x) e^{−(x−μ)²/(2σ²)} dx.

The expectation of a continuous random variable satisfies the same linearity property (1.7) as in the discrete case.

Exercise: In case X has a Gaussian distribution with mean μ ∈ R and variance σ² > 0, check that

μ = IE[X]   and   σ² = IE[X²] − (IE[X])².
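For this exercise, the identities can at least be confirmed numerically; the sketch below integrates against the Gaussian density on a wide grid using a midpoint rule (the parameter values, grid bounds and step size are arbitrary choices).

```python
from math import exp, pi, sqrt

mu, sigma = 1.5, 2.0   # arbitrary parameter choices

def f(x):
    # Gaussian density with mean mu and variance sigma**2
    return exp(-(x - mu)**2 / (2 * sigma**2)) / sqrt(2 * pi * sigma**2)

# midpoint rule on [mu - 10*sigma, mu + 10*sigma]
h = 1e-3
n = int(20 * sigma / h)
xs = [mu - 10 * sigma + h * (i + 0.5) for i in range(n)]
m1 = sum(x * f(x) * h for x in xs)       # approximates E[X]
m2 = sum(x * x * f(x) * h for x in xs)   # approximates E[X^2]
var = m2 - m1**2
```

The computed first moment and variance match μ and σ² up to the quadrature error.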

The conditional expectation of a continuous random variable can be defined as

IE[X | Y = y] = ∫_{−∞}^∞ x fX|Y=y(x) dx,

with the relation

IE[X] = IE[IE[X | Y]]

as in the discrete case, since

IE[IE[X | Y]] = ∫_{−∞}^∞ IE[X | Y = y] fY(y) dy
 = ∫_{−∞}^∞ ∫_{−∞}^∞ x fX|Y=y(x) fY(y) dx dy
 = ∫_{−∞}^∞ x ∫_{−∞}^∞ f(X,Y)(x, y) dy dx
 = ∫_{−∞}^∞ x fX(x) dx
 = IE[X].

1.8 Conditional Expectation

The construction of conditional expectation given above for discrete and continuous random variables can be generalized to σ-algebras. Given G ⊂ F a sub σ-algebra of F and F ∈ L²(Ω, F, P), the conditional expectation of F given G, denoted by IE[F | G], can be defined as the orthogonal projection of F onto L²(Ω, G, P). That is, IE[F | G] is characterized by the relation

⟨G, F − IE[F | G]⟩ = IE[G(F − IE[F | G])] = 0,

i.e.

IE[GF] = IE[G IE[F | G]],   (1.10)

for all bounded and G-measurable random variables G, where ⟨·, ·⟩ denotes the scalar product in L²(Ω, F, P). The conditional expectation satisfies the following properties.

(i) IE[FG | G] = G IE[F | G] if G depends only on the information contained in G.

By the characterization (1.10) it suffices to show that IE[HFG] = IE[HG IE[F | G]] for all bounded and G-measurable random variables H, and this relation holds because the product HG is G-measurable.

(ii) IE[F | G] = F when F depends only on the information contained in G.

This is a consequence of point (i) above, applied with G replaced by F and F replaced by the constant 1.

(iii) IE[IE[F | G] | H] = IE[F | H] if H ⊂ G, called the tower property.

First we note that (iii) holds when H = {∅, Ω}, because taking G = 1 in (1.10) yields

IE[F] = IE[IE[F | G]].   (1.11)

Next, by the characterization (1.10) it suffices to show that IE[H IE[F | G]] = IE[H IE[F | H]] for all bounded and H-measurable random variables H. This holds because, by (1.11) and point (i) above, we have

IE[H IE[F | G]] = IE[IE[HF | G]] = IE[HF] = IE[IE[HF | H]] = IE[H IE[F | H]],

where we conclude by the characterization (1.10).

(iv) IE[F | G] = IE[F] when F does not depend on the information contained in G.

For all bounded G-measurable G we have

IE[FG] = IE[F] IE[G] = IE[G IE[F]],

and we conclude again by (1.10).

(v) If G depends only on G and F is independent of G, then

IE[h(F, G) | G] = IE[h(F, x)]_{x=G}.
1.9 Characteristic and Generating Functions, Laplace Transforms

The characteristic function of a random variable X is the function ΨX : R → C defined by

ΨX(t) = IE[e^{itX}],   t ∈ R.


The Laplace transform (or moment generating function) of a random variable X is the function ΦX : R → R defined by

ΦX(t) = IE[e^{tX}],   t ∈ R,

provided the expectation is finite. In particular we have

IE[X^n] = (∂^n/∂t^n) ΦX(0),   n ≥ 1.

The Laplace transform ΦX of a random variable X with density f : R → R+ satisfies

ΦX(t) = ∫_{−∞}^∞ e^{tx} f(x) dx,   t ∈ R.

Note that in probability we are using the bilateral Laplace transform. The characteristic function ΨX of a random variable X with density f : R → R+ satisfies

ΨX(t) = ∫_{−∞}^∞ e^{itx} f(x) dx,   t ∈ R.

On the other hand, if X : Ω → N is a discrete random variable we have

ΨX(t) = Σ_{n=0}^∞ e^{itn} P(X = n),   t ∈ R.

The main applications of characteristic functions lie in the following theorems:

Theorem 1. Two random variables X : Ω → R and Y : Ω → R have the same distribution if and only if

ΨX(t) = ΨY(t),   t ∈ R.

Theorem 1 is used to identify or to determine the probability distribution of a random variable X, by comparison with the characteristic function ΨY of a random variable Y whose distribution is known. The characteristic function of a random vector (X, Y) is the function ΨX,Y : R² → C defined by

ΨX,Y(s, t) = IE[e^{isX + itY}],   s, t ∈ R.

Theorem 2. Two random variables X : Ω → R and Y : Ω → R are independent if and only if

ΨX,Y(s, t) = ΨX(s)ΨY(t),   s, t ∈ R.

A random variable X is Gaussian with mean μ and variance σ² if and only if its characteristic function satisfies

IE[e^{iαX}] = e^{iαμ − α²σ²/2},   α ∈ R.   (1.12)

In terms of Laplace transforms we have, replacing iα by α,

IE[e^{αX}] = e^{αμ + α²σ²/2},   α ∈ R.   (1.13)

From Theorems 1 and 2 we deduce the following proposition.

Proposition 1. Let X ≃ N(μ, σ²_X) and Y ≃ N(ν, σ²_Y) be independent Gaussian random variables. Then X + Y also has a Gaussian distribution

X + Y ≃ N(μ + ν, σ²_X + σ²_Y).

Proof. Since X and Y are independent, by Theorem 2 the characteristic function ΨX+Y of X + Y is given by

ΨX+Y(t) = ΨX(t)ΨY(t) = e^{itμ − t²σ²_X/2} e^{itν − t²σ²_Y/2} = e^{it(μ+ν) − t²(σ²_X + σ²_Y)/2},

where we used (1.12). Consequently, the characteristic function of X + Y is that of a Gaussian random variable with mean μ + ν and variance σ²_X + σ²_Y, and we conclude by Theorem 1. □

The probability generating function of a (possibly infinite) discrete random variable X : Ω −→ N ∪ {+∞} is the function GX : [−1, 1] −→ R defined by

GX(s) = IE[s^X 1_{X<∞}] = Σ_{n=0}^∞ s^n P(X = n),   s ∈ [−1, 1].   (1.14)

We have

GX(1) = IE[1_{X<∞}] = P(X < ∞)

and

GX(0) = IE[0^X] = IE[1_{X=0}] = 0⁰ P(X = 0) = P(X = 0),

since 0⁰ = 1 and 0^X = 1_{X=0}. On the other hand we have²

G′X(s) = Σ_{k=1}^∞ k s^{k−1} P(X = k),   s ∈ (−1, 1),

hence

G′X(1⁻) = IE[X] = Σ_{k=0}^∞ k P(X = k),

provided IE[|X|] < ∞, and

Var[X] = G″X(1⁻) + G′X(1⁻) − (G′X(1⁻))²,

provided IE[|X|²] < ∞. When X : Ω → N and Y : Ω → N are two independent random variables we have

GX+Y(s) = IE[s^X s^Y] = IE[s^X] IE[s^Y] = GX(s)GY(s),   s ∈ [−1, 1].

Example: Consider a random variable X with probability generating function

GX(s) = e^{λ(s−1)},   s ∈ [−1, 1],

for some λ > 0. What is the distribution of X ? We have

GX(s) = e^{λ(s−1)} = e^{−λ} Σ_{n=0}^∞ s^n λ^n/n!,   s ∈ [−1, 1],

hence by identification with (1.14) we find

P(X = n) = e^{−λ} λ^n/n!,   n ∈ N,

i.e. X has the Poisson distribution with parameter λ. From the generating function we also recover

IE[X] = G′X(1) = λ e^{λ(s−1)}|_{s=1} = λ,

² Here G′X(1⁻) denotes the derivative on the left at the point 1.

and

Var[X] = G″X(1) + G′X(1) − (G′X(1))²
 = λ² e^{λ(s−1)}|_{s=1} + λ e^{λ(s−1)}|_{s=1} − λ²
 = λ² + λ − λ² = λ,

which are the expectation and variance of a Poisson random variable with parameter λ.
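The mean and variance just obtained can be cross-checked by differentiating the generating function numerically, using central finite differences at s = 1 (the parameter λ and the step size h are arbitrary choices):

```python
from math import exp

lam = 2.5                         # arbitrary Poisson parameter

def G(s):
    # probability generating function of the Poisson distribution
    return exp(lam * (s - 1))

h = 1e-4
G1 = (G(1 + h) - G(1 - h)) / (2 * h)          # approximates G'(1) = E[X]
G2 = (G(1 + h) - 2 * G(1) + G(1 - h)) / h**2  # approximates G''(1)

mean = G1
var = G2 + G1 - G1**2                          # Var[X] = G''(1) + G'(1) - G'(1)^2
```

Both quantities come out equal to λ, as derived above.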

1.10 Exercises

Exercise 1.1. (Exercise II.3.1 in [4]). A six-sided die is rolled and the number N on the uppermost face is recorded. Then a fair coin is tossed N times, and the total number Z of heads to appear is observed.
1. Determine the mean and variance of Z.
2. Determine the probability distribution of Z.
3. Recover the result of Question (1) from the result of Question (2).

Exercise 1.2. (Problem II.3.1 in [4]). The following experiment is performed: An observation is made of a Poisson random variable N with parameter λ. Then N independent Bernoulli trials are performed, each with probability p of success. Let Z be the total number of successes observed in the N trials.
1. Formulate Z as a random sum and thereby determine its mean and variance.
2. What is the distribution of Z ?
3. Recover the result of Question (1) from the result of Question (2).

Exercise 1.3. (Exercise II.4.5 in [4]). Let U be uniformly distributed over the interval [0, L] where L follows the gamma density fL(x) = 1_{[0,∞)}(x) x e^{−x}. What is the joint density function of U and V = L − U ?

Exercise 1.4. (Problem II.4.4 in [4]). Suppose X and Y are independent random variables having the Poisson distribution with parameter λ, but where λ is random, being exponentially distributed with parameter θ. What is the conditional distribution of X given that X + Y = n ?

Exercise 1.5. (Problem II.1.8 in [4]). Initially an urn contains one red and one green ball. A ball is drawn at random from the urn, observed, and then replaced. If this ball is red, then an additional red ball is placed in the urn. If the ball is green, then a green ball is added. A second ball is drawn. Find the conditional probability that the first ball was red given that the second ball drawn was red.

Exercise 1.6. This question is only a probability exercise that does not involve Markov chains. Consider a system made of three components, each of which is functioning with probability p and failing with probability 1 − p, independently of the others. The system operates if at least two of the components operate.
1. What is the probability that the system operates ?
2. Assume in addition that the system is in a random environment, so that the probability p itself is also random and uniformly distributed over the interval (0, 1], independently of the component states. What is the probability that the system operates ?



Chapter 2

Gambling Problems

To begin, let us repeat that this chapter on “gambling problems” is not designed to help the reader who has a problem with gambling such as addiction. We consider an amount of $S shared between two players A and B. At each round, player A may earn $1 with probability p ∈ (0, 1), and in this case player B loses $1. Conversely, player A may lose $1 with probability q := 1 − p, and in this case player B gains $1. The probability space Ω corresponding to this experiment could be taken as Ω = {−1, +1}^N, however here we do not focus on this particular representation. We let Xn represent the wealth of player A at time n ∈ N, while S − Xn represents the wealth of player B at time n ∈ N. The initial wealth X0 of player A could be negative,¹ but for simplicity we will assume that it lies between 0 and S. The game stops whenever the fortune of either player reaches 0, in which case the other player’s account holds $S. The two main issues of interest are:
- the probability of ruin of players A and B,
- the mean duration of the game.
According to the above description of the problem, for all n ∈ N we have

P(Xn+1 = k + 1 | Xn = k) = p   and   P(Xn+1 = k − 1 | Xn = k) = q,

¹ Singaporeans and permanent residents may have to start with X0 = −$100.


and in this case the chain is said to be time homogeneous since the transition probabilities do not depend on the time index n.

2.1 Ruin probability

We are interested in the event

RA = “player A loses all his capital at some time” = ∪_{n∈N} {Xn = 0},

and in computing the probability

f(k) = P(RA | X0 = k),   0 ≤ k ≤ S.

First, let us note that the problem is easy to solve in case S = 2, in which case we find

f(0) = P(RA | X0 = 0) = 1,
f(1) = P(RA | X0 = 1) = q,   (2.1)
f(2) = P(RA | X0 = 2) = 0.

In case S = 3 we find

f(0) = P(RA | X0 = 0) = 1,
f(1) = P(RA | X0 = 1) = q Σ_{n=0}^∞ (pq)^n = q/(1 − pq),
f(2) = P(RA | X0 = 2) = q² Σ_{n=0}^∞ (pq)^n = q²/(1 − pq),   (2.2)
f(3) = P(RA | X0 = 3) = 0.

Clearly, things can become quite complicated for S ≥ 4, and increasingly difficult as S gets larger. The general case will be solved by the method of first step analysis, which will be used in the framework of Markov processes at later stages in this course. The idea is to condition on the first transition from X0 to X1 in order to show that

P(RA | X0 = k) = p P(RA | X0 = k + 1) + q P(RA | X0 = k − 1).

For all 1 ≤ k ≤ S − 1 we have

f(k) := P(RA | X0 = k)
 = P(RA and X1 = k + 1 | X0 = k) + P(RA and X1 = k − 1 | X0 = k)
 = P(RA | X1 = k + 1, X0 = k) P(X1 = k + 1 | X0 = k) + P(RA | X1 = k − 1, X0 = k) P(X1 = k − 1 | X0 = k)
 = p P(RA | X1 = k + 1, X0 = k) + q P(RA | X1 = k − 1, X0 = k)
 = p P(RA | X1 = k + 1) + q P(RA | X1 = k − 1)
 = p P(RA | X0 = k + 1) + q P(RA | X0 = k − 1)
 = p f(k + 1) + q f(k − 1),

so that we get the equation

f(k) = p f(k + 1) + q f(k − 1),   1 ≤ k ≤ S − 1,   (2.3)

under the boundary conditions

f(0) = P(RA | X0 = 0) = 1,   (2.4)
f(S) = P(RA | X0 = S) = 0.   (2.5)

In the above derivation of (2.3) we used the relation

P(RA | X1 = k ± 1, X0 = k) = P(RA | X1 = k ± 1) = P(RA | X0 = k ± 1),

which can be shown in various ways:

1. Descriptive proof (preferred): we note that given X1 = k ± 1, the transition from X0 to X1 has no influence on the future of the process after time 1, and the probability of ruin starting at time 1 is the same as if the process is started at time 0.

2. Algebraic proof: first, for k ≥ 1 and k ± 1 ≥ 1, rewriting {X0 = k} as {X1 − X0 = ±1} on the event {X1 = k ± 1}, using the independence of the increment X1 − X0 from the behaviour of the chain after time 1, and finally shifting the time index by one step, we have

P(RA | X1 = k ± 1, X0 = k)
 = P(∪_{n≥0} {Xn = 0} | X1 = k ± 1, X0 = k)
 = P((∪_{n≥2} {Xn = 0}) ∩ {X1 = k ± 1} ∩ {X0 = k}) / P({X1 = k ± 1} ∩ {X0 = k})
 = P((∪_{n≥2} {Xn = 0}) ∩ {X1 = k ± 1} ∩ {X1 − X0 = ±1}) / P({X1 = k ± 1} ∩ {X1 − X0 = ±1})
 = P((∪_{n≥2} {Xn = 0}) ∩ {X1 = k ± 1}) P(X1 − X0 = ±1) / (P(X1 = k ± 1) P(X1 − X0 = ±1))
 = P(∪_{n≥2} {Xn = 0} | X1 = k ± 1)
 = P(∪_{n≥1} {Xn = 0} | X0 = k ± 1)

= P(RA | X0 = k ± 1).

Otherwise, if k = 1, we easily find that P(RA | X1 = 0, X0 = 1) = 1 = P(RA | X0 = 0), since {X1 = 0} ⊂ RA.

Noting that p + q = 1, we can rewrite (2.3) as

(p + q) f(k) = p f(k + 1) + q f(k − 1),   1 ≤ k ≤ S − 1,

or

p(f(k + 1) − f(k)) − q(f(k) − f(k − 1)) = 0,   1 ≤ k ≤ S − 1.   (2.6)

The solution of (2.3) turns out to be

f(k) = P(RA | X0 = k) = (S − k)/S = 1 − k/S,   0 ≤ k ≤ S,

in the symmetric case p = q = 1/2, and

f(k) = P(RA | X0 = k) = (r^k − r^S)/(1 − r^S),   0 ≤ k ≤ S,   (2.7)

with r := q/p, in the non-symmetric case p ≠ q, and we now explain why.²

Exercise: Check that (2.7) agrees with (2.1) and (2.2) when S = 2 and S = 3.

In the following graph the ruin probability (2.7) is plotted as a function of k for p = 0.45.
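One way to confirm (2.7) without re-deriving it is to check numerically that it satisfies the first step analysis equation (2.3) together with the boundary conditions (2.4)-(2.5); a short Python sketch (the values of p and S are arbitrary choices):

```python
p, S = 0.45, 20      # arbitrary choices, p != 1/2
q = 1 - p
r = q / p

# candidate ruin probabilities from (2.7)
f = [(r**k - r**S) / (1 - r**S) for k in range(S + 1)]

# residuals of the first step equation f(k) = p*f(k+1) + q*f(k-1)
residuals = [f[k] - (p * f[k + 1] + q * f[k - 1]) for k in range(1, S)]
```

All residuals vanish up to floating-point precision, and f(0) = 1, f(S) = 0 hold as required.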


Fig. 2.1: Ruin probability as a function of X0 ∈ [0, 20] for S = 20 and p = 0.45.

First, note that (2.7) does satisfy both boundary conditions (2.4) and (2.5).

Standard solution. In order to solve (2.3) we can look for a solution of the form³

k ↦ f(k) = C a^k,

where C and a are constants which will be determined from the boundary conditions and from the equation (2.3), respectively.

² The techniques used to solve (2.3) can be found in MAS214 Basic Discrete Mathematics and Number Theory, see also MTH116/MH1301.
³ Where did we get this idea ? From intuition, experience, or empirically by multiple trials and errors.

39

This version: November 28, 2011 http://www.ntu.edu.sg/home/nprivault/indext.html


N. Privault

Substituting this expression into (2.3) yields the characteristic equation

p a² − a + q = p(a − 1)(a − q/p) = 0,   (2.8)

of degree 2 when C is non-zero, and this equation in a has solutions

{a₁, a₂} = {(1 + √(1 − 4pq))/(2p), (1 − √(1 − 4pq))/(2p)} = {1, q/p} = {1, r}.   (2.9)

Non-symmetric case: r ≠ 1

In this case we have p ≠ 1/2 and q ≠ 1/2, and both f(k) = C₁ a₁^k = C₁ and f(k) = C₂ r^k are solutions. Since (2.3) is linear, the sum of two solutions remains a solution, hence the general solution of (2.3) is given by

f(k) = C₁ a₁^k + C₂ a₂^k,   0 ≤ k ≤ S,

i.e.

f(k) = C₁ + C₂ r^k,   0 ≤ k ≤ S,   (2.10)

where r = q/p and C₁, C₂ are two constants to be determined from the boundary conditions. From (2.4), (2.5) and (2.10) we have

f(0) = 1 = C₁ + C₂,
f(S) = 0 = C₁ + C₂ r^S,   (2.11)

and solving the system (2.11) of two equations we find

C₁ = −r^S/(1 − r^S)   and   C₂ = 1/(1 − r^S),

which yields (2.7).

Second case: r = 1

In case p = q = 1/2 (fair game), (2.8) reads

a² − 2a + 1 = (a − 1)² = 0,

which has the unique solution a = 1, corresponding to the fact that the constant function is a solution of (2.3). Noting that f(k) = k is also a solution of (2.3), the general solution is found to have the form

f(k) = C₁ + C₂ k.   (2.12)

From (2.4), (2.5) and (2.12) we have

f(0) = 1 = C₁,
f(S) = 0 = C₁ + C₂ S,   (2.13)

and solving the system (2.13) of two equations we find C₁ = 1 and C₂ = −1/S, which yields the quite intuitive solution

f(k) = P(RA | X0 = k) = (S − k)/S = 1 − k/S,   0 ≤ k ≤ S.   (2.14)

Exercise: Recover (2.14) from (2.7) by letting p go to 1/2, i.e. by letting r go to 1.

Direct solution. Note that (2.6) rewrites as

f(k + 1) − f(k) = (q/p)(f(k) − f(k − 1)),   1 ≤ k ≤ S − 1,

hence by induction on k we get

f(k + 1) − f(k) = (q/p)^k (f(1) − f(0)),   1 ≤ k ≤ S − 1.   (2.15)

Next, by the telescoping sum

f(n) = f(0) + Σ_{k=0}^{n−1} (f(k + 1) − f(k)),

Relation (2.15) implies

f(n) = f(0) + (f(1) − f(0)) Σ_{k=0}^{n−1} r^k,   1 ≤ n ≤ S − 1.   (2.16)

Non-symmetric case: r ≠ 1

In this case we get


f(n) = f(0) + (f(1) − f(0)) Σ_{k=0}^{n−1} r^k
 = f(0) + (f(1) − f(0)) (1 − r^n)/(1 − r),   1 ≤ n ≤ S − 1,

where we used (13.1). Conditions (2.4) and (2.5) show that

0 = 1 + (f(1) − f(0)) (1 − r^S)/(1 − r),

hence

f(n) = f(0) − (1 − r^n)/(1 − r^S) = (r^n − r^S)/(1 − r^S),   0 ≤ n ≤ S.

Second case: r = 1

In case r = 1, (2.6) simply becomes

f(k + 1) − f(k) = f(1) − f(0),   1 ≤ k ≤ S − 1,

and (2.16) reads

f(n) = f(0) + n(f(1) − f(0)),   1 ≤ n ≤ S − 1.

Then Conditions (2.4) and (2.5) yield 0 = 1 + S(f(1) − f(0)), hence

f(n) = f(0) − n/S = (S − n)/S,   0 ≤ n ≤ S.

Note that when p = q = 1/2, (2.6) is a discretization of the continuous Laplace equation

∂²f/∂x²(x) = 0,   x ∈ R,

which admits a solution of the form

f(x) = f(0) + x f′(0) = f(0) + x(f(1) − f(0)),   x ∈ R.

Note that when player A starts with X0 = k, so that the fortune of player B starts at S − k, the ruin probability of player B becomes, by (2.7), equal to

P(RB | X0 = k) = ((p/q)^{S−k} − (p/q)^S)/(1 − (p/q)^S)


after switching k to S − k and exchanging p and q. Not surprisingly, we can check that⁴

P(RA | X0 = k) + P(RB | X0 = k) = ((q/p)^k − (q/p)^S)/(1 − (q/p)^S) + ((p/q)^{S−k} − (p/q)^S)/(1 − (p/q)^S) = 1,   0 ≤ k ≤ S,

which means that eventually one of the two players has to lose the game. This means in particular that with probability one the game cannot continue endlessly, which is a priori not completely obvious. In the following graph the ruin probability (2.7) is plotted as a function of p for S = 20 and k = 10.
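The claim that the two ruin probabilities sum to one can also be verified numerically (the values of p and S are arbitrary choices):

```python
p, S = 0.45, 20          # arbitrary choices, p != 1/2
q = 1 - p
rA, rB = q / p, p / q    # ratios for players A and B respectively

ruin_A = [(rA**k - rA**S) / (1 - rA**S) for k in range(S + 1)]
ruin_B = [(rB**(S - k) - rB**S) / (1 - rB**S) for k in range(S + 1)]

totals = [a + b for a, b in zip(ruin_A, ruin_B)]
```

Every entry of `totals` equals 1 up to floating-point error: one of the two players is eventually ruined.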


Fig. 2.2: Ruin probability as a function of p ∈ [0, 1] for S = 20 and k = 10.

Gambling machines in casinos are computer controlled, and most countries permit by law a slight degree of “unfairness” in the payout percentage, or “return to player” (obtained by taking p < 1/2), in order to allow the house to make an income. Interestingly, we can note that taking e.g. p = 0.45 < 1/2 gives a ruin probability P(RA | X0 = 10) = 0.8815, close to 90%, which means that the probability p = 0.45 of winning one round translates into a probability of only 0.1185 of winning the game, i.e. a 73% drop, although the average proportion of winning rounds is still 45%. Hence a “slightly unfair” game on each round can become devastatingly unfair in the long run. Most (but not all) gamblers are aware that gambling machines are slightly unfair, however most people would intuitively believe that a small degree of unfairness on each round should only translate into a reasonably low degree of unfairness in the long run.

⁴ Exercise: check that the equality to 1 holds as stated.
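The figures quoted above are easy to reproduce from (2.7); with p = 0.45, S = 20 and an initial fortune of k = 10:

```python
p, S, k = 0.45, 20, 10
r = (1 - p) / p                       # r = q/p
ruin = (r**k - r**S) / (1 - r**S)     # ruin probability from (2.7)
win = 1 - ruin                        # probability for player A to win the game
print(round(ruin, 4), round(win, 4))  # prints 0.8815 0.1185
```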

2.2 Mean game duration

Let now

T = inf{n ≥ 0 : Xn = 0 or Xn = S}

denote the duration of the game.⁵ We are now interested in computing the expected duration of the game

h(k) = IE[T | X0 = k],   0 ≤ k ≤ S,

given that player A starts with a fortune equal to k ∈ {0, 1, . . . , S}. Clearly we should have

h(0) = IE[T | X0 = 0] = 0,   (2.17)

and

h(S) = IE[T | X0 = S] = 0.   (2.18)

We note that when S = 2 we have T = 0 if X0 = 0, T = 1 if X0 = 1, and T = 0 if X0 = 2, hence T is deterministic given the value of X0. In case S = 3 the law of T given X0 = k ∈ {0, 1, 2, 3} can be determined explicitly, and we find

P(T = 2k | X0 = 1) = p²(pq)^{k−1}, k ≥ 1,   and   P(T = 2k + 1 | X0 = 1) = q(pq)^k, k ≥ 0,

and

P(T = 2k | X0 = 2) = q²(pq)^{k−1}, k ≥ 1,   and   P(T = 2k + 1 | X0 = 2) = p(pq)^k, k ≥ 0,

⁵ Here the notation “inf” stands for “infimum”, meaning here the smallest n ≥ 0 such that Xn = 0 or Xn = S.

whereas T = 0 whenever X0 = 0 or X0 = 3. As a consequence we can directly compute

IE[T | X0 = 2] = 2 Σ_{k=1}^∞ k P(T = 2k | X0 = 2) + Σ_{k=0}^∞ (2k + 1) P(T = 2k + 1 | X0 = 2)
 = 2q² Σ_{k=1}^∞ k(pq)^{k−1} + p Σ_{k=0}^∞ (2k + 1)(pq)^k   (2.19)
 = 2q²/(1 − pq)² + 2p²q/(1 − pq)² + p/(1 − pq)
 = (2q² + p + qp²)/(1 − pq)²,   (2.20)

where we used (13.2), and by exchanging p and q we get

IE[T | X0 = 1] = (2p² + q + pq²)/(1 − pq)².   (2.21)

Again, things can become quite complicated for S ≥ 4, and increasingly difficult as S gets larger. In the general case S ≥ 4 we will only compute the conditional expectation of T and not its probability distribution. For this we rely again on first step analysis, i.e. we condition on the first transition from X0 to X1 in order to show that

IE[T | X0 = k] = 1 + p IE[T | X0 = k + 1] + q IE[T | X0 = k − 1].

Using the equality

1_{X0=k} = 1_{X1=k+1, X0=k} + 1_{X1=k−1, X0=k}

and conditional expectations, we show by first step analysis that for all 1 ≤ k ≤ S − 1 we have, applying (1.8) to A = {X0 = k},

h(k) = IE[T | X0 = k]
 = (1/P(X0 = k)) IE[T 1_{X0=k}]
 = (1/P(X0 = k)) (IE[T 1_{X1=k+1, X0=k}] + IE[T 1_{X1=k−1, X0=k}])
 = P(X1 = k + 1 | X0 = k) IE[T | X1 = k + 1, X0 = k] + P(X1 = k − 1 | X0 = k) IE[T | X1 = k − 1, X0 = k]
 = p IE[T | X1 = k + 1, X0 = k] + q IE[T | X1 = k − 1, X0 = k]
 = p IE[T + 1 | X0 = k + 1] + q IE[T + 1 | X0 = k − 1]
 = p(1 + IE[T | X0 = k + 1]) + q(1 + IE[T | X0 = k − 1])
 = p(1 + h(k + 1)) + q(1 + h(k − 1))
 = 1 + p h(k + 1) + q h(k − 1),

where we used the fact that, given X1 = k ± 1, the game observed from time 1 behaves as a game started from k ± 1 at time 0 and lasting one time step less than T. Thus we have to solve the equation

h(k) = 1 + p h(k + 1) + q h(k − 1),   1 ≤ k ≤ S − 1,   (2.22)

i.e., since p + q = 1,

p(h(k + 1) − h(k)) − q(h(k) − h(k − 1)) = −1,   1 ≤ k ≤ S − 1,   (2.23)

under the boundary conditions (2.17) and (2.18).⁶ Equation (2.6), i.e.

p(f(k + 1) − f(k)) − q(f(k) − f(k − 1)) = 0,   1 ≤ k ≤ S − 1,

is called the homogeneous equation associated to (2.23), and it is known that the general solution to (2.23) is the sum of a solution of the homogeneous equation (2.6) and a particular solution of (2.23).

Non-symmetric case: r ≠ 1

Searching for a particular solution of (2.23) of the form k ↦ Ck shows that C has to be equal to C = 1/(q − p), hence when r ≠ 1 the general solution of (2.23) has the form

h(k) = C₁ + C₂ r^k + k/(q − p),   (2.24)

where the homogeneous solution C₁ + C₂ r^k is given from (2.10). From the boundary conditions (2.17) and (2.18) and from (2.24) we have

⁶ The techniques used to solve (2.22) can be found in MAS214 Basic Discrete Mathematics and Number Theory, see also MTH116/MH1301.


h(0) = 0 = C₁ + C₂,
h(S) = 0 = C₁ + C₂ r^S + S/(q − p),   (2.25)

and solving the system (2.25) of two equations we find

C₁ = −S/((q − p)(1 − r^S))   and   C₂ = S/((q − p)(1 − r^S)),

hence from (2.24) we get

h(k) = (1/(q − p)) (k − S(1 − r^k)/(1 − r^S)),   0 ≤ k ≤ S,   (2.26)

which does satisfy (2.17) and (2.18). It is easy to show that (2.26) yields h(1) = 1 when S = 2. When S = 3, (2.26) shows that   1 − (q/p) 1 1−3 IE[T | X0 = 1] = , (2.27) q−p 1 − (q/p)3 and

1 IE[T | X0 = 2] = q−p

  1 − (q/p)2 2−3 . 1 − (q/p)3

(2.28)

When S = 3 it is more difficult to show that (2.27) and (2.28) agree respectively with (2.20) and (2.21). For this we might have to use formal computing tools, see for example here. In the following graph the mean game duration (2.26) is plotted as a function of k for p = 0.45.
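As a sanity check (not part of the notes), the closed form (2.26) can be verified numerically against the recurrence (2.22) and the boundary conditions; the function name mean_duration below is ours:

```python
# Mean game duration h(k) = IE[T | X0 = k] from (2.26), valid for p != 1/2.
def mean_duration(k, S, p):
    q = 1.0 - p
    r = q / p
    return (k - S * (1.0 - r**k) / (1.0 - r**S)) / (q - p)

# Check the boundary conditions h(0) = h(S) = 0 and the recurrence
# h(k) = 1 + p*h(k+1) + q*h(k-1) from (2.22), for S = 20 and p = 0.45.
S, p = 20, 0.45
q = 1.0 - p
h = [mean_duration(k, S, p) for k in range(S + 1)]
ok = all(abs(h[k] - (1 + p * h[k + 1] + q * h[k - 1])) < 1e-9
         for k in range(1, S))
```

In particular mean_duration(1, 2, 0.45) returns 1 up to rounding error, in agreement with the remark that (2.26) yields h(1) = 1 when S = 2.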


[Figure: plot of h(k) against k = 0, . . . , 20]

Fig. 2.1: Mean game duration as a function of X0 ∈ [0, 20] for S = 20 and p = 0.45.

Symmetric case: r = 1

In case r = 1 (fair game) we see that k ↦ Ck can no longer be a particular solution of (2.23), and we search for a particular solution of the form k ↦ Ck². In this case we find that C has to be equal to C = −1, hence when r = 1 the general solution of (2.23) has the form

    h(k) = C1 + C2 k − k²,        0 ≤ k ≤ S,        (2.29)

where the homogeneous solution C1 + C2 k is given from (2.12). From the boundary conditions (2.17) and (2.18) and from (2.29) we have

    h(0) = 0 = C1,
    h(S) = 0 = C1 + C2 S − S²,        (2.30)

and solving the system (2.30) of two equations we find

    C1 = 0    and    C2 = S,

hence from (2.29) we get

    h(k) = k(S − k),        0 ≤ k ≤ S,        (2.31)

which does satisfy (2.17) and (2.18). We note that for all values of p the expectation IE[T | X0 = k] has a finite value, which shows that the game duration T is finite with probability one

for all k = 0, 1, . . . , S. When r = 1, the continuous version of (2.23) is the Laplace equation

    (1/2) ∂²f/∂x²(x) = −1,        x ∈ R,

which has for solution

    f(x) = f(0) + x f′(0) − x²,        x ∈ R.

Note that (2.31) can also be recovered from (2.26) by letting p go to 1/2. In the next figure, the expected game duration (2.26) is plotted as a function of p for S = 20 and k = 10.

[Figure: plot of the expected duration against p ∈ [0, 1]]

Fig. 2.2: Mean game duration as a function of p ∈ [0, 1] for S = 20 and k = 10.

As expected, the duration will be maximal in a fair game for p = q = 1/2. On the other hand, it always takes exactly 10 = S − k = k steps to end the game in case p = 0 or p = 1, in which case there is no randomness. When p = 0.45 the expected duration of the game becomes 76.3, which represents only a drop of 24% from the “fair” value 100, as opposed to the 73% drop noticed above in terms of winning probabilities. Thus, a game with p = 0.45 is only slightly shorter than a fair game, whereas the probability of winning the game drops down to 0.12. The probability distribution P(T = n | X0 = k) can actually be computed explicitly for all values of S ≥ 1 using first step analysis, however the computation becomes more technical and will not be treated here.
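The numerical values quoted above can be recovered directly from (2.26) and (2.31); a short check (the helper names are ours, not from the notes):

```python
# Mean durations: (2.26) for p != 1/2, and (2.31) for the fair game.
def h_nonsym(k, S, p):
    q = 1.0 - p
    r = q / p
    return (k - S * (1.0 - r**k) / (1.0 - r**S)) / (q - p)

def h_sym(k, S):
    return k * (S - k)

S, k = 20, 10
fair = h_sym(k, S)             # 100 steps on average in the fair game
biased = h_nonsym(k, S, 0.45)  # about 76.3 when p = 0.45
drop = 1.0 - biased / fair     # about a 24% drop
```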


Remark 3. In this chapter we have noticed an interesting connection between analysis and probability. That is, a probabilistic quantity such as k 7→ P(RA | X0 = k) or k 7→ IE[T | X0 = k] can be shown to satisfy a difference equation which is solved by analytic methods. This fact actually extends beyond the present simple framework, and in continuous time it yields other connections between probability and partial differential equations. In the next chapter we will consider simple random walks which can be seen as “unrestricted” gambling processes.

2.3 Exercises

Exercise 2.1. We consider a gambling problem with the possibility of a draw, i.e. at time n the gain Xn of player A can increase by one unit with probability p, decrease by one unit with probability p, or remain stable with probability 1 − 2p. We let f(k) = P(RA | X0 = k) denote the probability of ruin of player A, and h(k) = IE[T | X0 = k] denote the expectation of the game duration T starting from X0 = k, 0 ≤ k ≤ S.

1. Using first step analysis, write down the difference equation satisfied by f(k) and its boundary conditions, 0 ≤ k ≤ S. We refer to this equation as the homogeneous equation.
2. Solve the homogeneous equation of Question 1 by your preferred method. Is this solution compatible with your intuition of the problem? Why?
3. Using first step analysis, write down the difference equation satisfied by h(k) and its boundary conditions, 0 ≤ k ≤ S.
4. Find a particular solution of the equation of Question 3.
5. Solve the equation of Question 3. Hint: the general solution of the equation is the sum of a particular solution and a solution of the homogeneous equation.
6. How does the mean duration h(k) behave as p goes to zero? Is this solution compatible with your intuition of the problem? Why?


Chapter 3

Random Walks

The simple unrestricted random walk (Sn)n≥0 is defined by S0 = 0 and

    Sn = X1 + · · · + Xn,        n ≥ 1,

where (Xk)k≥1 is a family of independent {−1, 0, +1}-valued random variables. We will assume in addition that the family (Xk)k≥1 is i.i.d., i.e. independent and identically distributed, with distribution

    P(Xk = +1) = p,
    P(Xk = 0)  = r,
    P(Xk = −1) = q,

k ≥ 1, with p + q + r = 1.

3.1 Mean and variance

In the sequel we take r = 0, in which case (Sn)n∈N is called a Bernoulli random walk. In this case the mean and variance of Xn are given by

    IE[Xn] = −1 × q + 1 × p = 2p − 1 = p − q,

and

    Var[Xn] = IE[Xn²] − (IE[Xn])²
            = 1 × q + 1 × p − (2p − 1)²
            = 4p(1 − p)



            = 4pq.

As a consequence we find IE[Sn] = n(2p − 1), and

    Var[Sn] = IE[Sn²] − (IE[Sn])²
            = IE[(X1 + · · · + Xn)²] − (IE[X1 + · · · + Xn])²
            = IE[ Σ_{k=1}^n Σ_{l=1}^n Xk Xl ] − Σ_{k=1}^n Σ_{l=1}^n IE[Xk] IE[Xl]
            = IE[ Σ_{k=1}^n Xk² ] + IE[ Σ_{1≤k≠l≤n} Xk Xl ] − Σ_{1≤k,l≤n} IE[Xk] IE[Xl]
            = Σ_{k=1}^n (IE[Xk²] − (IE[Xk])²)
            = Σ_{k=1}^n Var[Xk]
            = 4np(1 − p).
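Both formulas can be confirmed by exhaustive enumeration of the 2ⁿ sample paths for a small n; the sketch below is ours, not part of the notes:

```python
from itertools import product

# Exact IE[S_n] and Var[S_n] of the Bernoulli walk, by summing
# over all 2^n sample paths of +1/-1 steps.
def walk_moments(n, p):
    q = 1.0 - p
    mean = second = 0.0
    for steps in product((1, -1), repeat=n):
        prob = p**steps.count(1) * q**steps.count(-1)
        s = sum(steps)
        mean += prob * s
        second += prob * s * s
    return mean, second - mean**2

n, p = 8, 0.3
mean, var = walk_moments(n, p)  # compare with n(2p - 1) and 4np(1 - p)
```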

3.2 Distribution

First we note that in an even number of time steps, (Sn)n∈N can only reach an even state starting from 0. Similarly, in an odd number of time steps, (Sn)n∈N can only reach an odd state starting from 0. Consequently we have

    P(S2n = 2k + 1) = 0,        k ∈ Z, n ∈ N,
    P(S2n+1 = 2k) = 0,          k ∈ Z, n ∈ N,

and

    P(Sn = k) = 0,        |k| ≥ n + 1.

Next, let l denote the number of upwards steps between time 0 and time 2n, whereas 2n − l will denote the number of downwards steps. If S2n = 2k we have

    2k = l − (2n − l) = 2l − 2n,


hence there are l = n + k upwards steps and 2n − l = n − k downwards steps, −n ≤ k ≤ n. The probability of a given path having l = n + k upwards steps and 2n − l = n − k downwards steps is p^{n+k} q^{n−k}, and in order to find P(S2n = 2k) we need to multiply this number by the total number of paths leading from 0 to 2k in 2n steps. We find that this number is

    C(2n, n+k) = C(2n, n−k),

which represents the number of ways to arrange n + k upwards steps (or n − k downwards steps) within 2n time steps. Hence we have

    P(S2n = 2k) = C(2n, n+k) p^{n+k} q^{n−k},        −n ≤ k ≤ n.

Exercise: Show by a similar analysis that

    P(S2n+1 = 2k + 1) = C(2n+1, n+k+1) p^{n+k+1} q^{n−k},        −n ≤ k ≤ n.        (3.1)
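The even-step distribution can be checked to sum to one over its support, e.g. in Python (a sketch; the name p_even is ours):

```python
from math import comb

# P(S_{2n} = 2k) = C(2n, n+k) p^(n+k) q^(n-k), for -n <= k <= n.
def p_even(n, k, p):
    q = 1.0 - p
    return comb(2 * n, n + k) * p**(n + k) * q**(n - k)

n, p = 5, 0.3
total = sum(p_even(n, k, p) for k in range(-n, n + 1))  # should be 1
```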

3.3 First returns to zero

Let

    τ0 = inf{n ≥ 1 : Sn = 0}

denote the time of first return to zero of the random walk started at 0, with inf ∅ = +∞.¹ We are interested in computing the distribution

    g(n) = P(τ0 = n),        n ≥ 1,

of τ0. It is easy to show by pathwise analysis that

    P(τ0 = 1) = 0,    P(τ0 = 2) = 2pq    and    P(τ0 = 4) = 2p²q²,        (3.2)

however this computation is difficult to pursue for larger values of n. We have, partitioning

¹ Here the notation “inf” is for “infimum”, meaning here the smallest n ≥ 1 such that Sn = 0, with τ0 = +∞ if no such n exists.


    {Sn = 0} = ∪_{k=0}^{n−2} {Sk = 0, Sk+1 ≠ 0, . . . , Sn−1 ≠ 0, Sn = 0},

given the time k of last return to 0 before time n, we find

    h(n) := P(Sn = 0)
          = Σ_{k=0}^{n−2} P(Sk = 0, Sk+1 ≠ 0, . . . , Sn−1 ≠ 0, Sn = 0)
          = Σ_{k=0}^{n−2} P(Sk+1 ≠ 0, . . . , Sn−1 ≠ 0, Sn = 0 | Sk = 0) P(Sk = 0)        (3.3)
          = Σ_{k=0}^{n−2} P(S1 ≠ 0, . . . , Sn−k−1 ≠ 0, Sn−k = 0 | S0 = 0) P(Sk = 0)        (3.4)
          = Σ_{k=0}^{n−2} P(S1 ≠ 0, . . . , Sn−k−1 ≠ 0, Sn−k = 0) P(Sk = 0)
          = Σ_{k=0}^{n−2} P(τ0 = n − k) P(Sk = 0)
          = Σ_{k=0}^{n−2} h(k) g(n − k),        n ≥ 1,        (3.5)

where from (3.3) to (3.4) we applied a time shift from time k + 1 to time 1, and used the fact that P(S0 = 0) = 1. Hence we need to solve the convolution equation

    h(n) = Σ_{k=0}^{n−2} g(n − k) h(k),        (3.6)

for g(n) = P(τ0 = n), n ≥ 1, with the initial condition g(1) = 0, knowing that

    h(2n) = P(S2n = 0) = C(2n, n) pⁿ qⁿ,        (3.7)

and h(2n + 1) = 0, n ∈ N, from (3.1). For this we define the probability generating function

    Gτ0 : [−1, 1] −→ R,    s ↦ Gτ0(s),

of the random variable τ0 by


    Gτ0(s) := IE[s^{τ0} 1{τ0 < ∞}] = Σ_{n=0}^∞ sⁿ P(τ0 = n) = Σ_{n=0}^∞ sⁿ g(n),        −1 ≤ s ≤ 1.        (3.8)

Computing Gτ0(s) provides a number of pieces of information on τ0, such as

    P(τ0 < ∞) = IE[1{τ0 < ∞}] = Gτ0(1)

and

    IE[τ0 1{τ0 < ∞}] = Σ_{n=1}^∞ n P(τ0 = n) = G′τ0(1−).

Our aim is now to compute Gτ0(s) for all s ∈ [−1, 1]. For this we let H : R −→ R, s ↦ H(s), be defined by

    H(s) := Σ_{k=0}^∞ s^k P(Sk = 0),        s ∈ [−1, 1],

and we show that the convolution equation (3.6) implies that

    Gτ0(s) H(s) = H(s) − 1,        s ∈ [−1, 1].

By (3.7) and the fact that P(S2k+1 = 0) = 0, k ∈ N, we have

    H(s) = Σ_{k=0}^∞ s^k P(Sk = 0)        (3.9)
         = Σ_{k=0}^∞ s^{2k} P(S2k = 0)
         = Σ_{k=0}^∞ s^{2k} C(2k, k) p^k q^k
         = Σ_{k=0}^∞ (pq)^k s^{2k} (2k)! / (k!)²
         = Σ_{k=0}^∞ (4pq)^k s^{2k} (1/2)(3/2) × · · · × (k − 3/2)(k − 1/2) / k!
         = Σ_{k=0}^∞ (−1)^k (4pq)^k s^{2k} (−1/2)(−3/2) × · · · × (−1/2 − (k − 1)) / k!
         = Σ_{k=0}^∞ (−4pqs²)^k (−1/2)(−3/2) × · · · × (−1/2 − (k − 1)) / k!
         = (1 − 4pqs²)^{−1/2},        |4pqs²| < 1,

see for example here.² In order to compute Gτ0(s) we note that by (3.6) we have

    Gτ0(s) H(s) = ( Σ_{n=1}^∞ sⁿ g(n) ) ( Σ_{k=0}^∞ s^k P(Sk = 0) )
                = Σ_{n=1}^∞ Σ_{k=0}^∞ s^{n+k} g(n) P(Sk = 0)
                = Σ_{k=0}^∞ Σ_{n=2}^∞ s^{n+k} g(n) P(Sk = 0)
                = Σ_{l=2}^∞ s^l Σ_{k=0}^{l−2} g(l − k) P(Sk = 0)
                = Σ_{l=1}^∞ s^l P(Sl = 0)
                = −1 + Σ_{l=0}^∞ s^l P(Sl = 0)
                = H(s) − 1,

hence Gτ0(s) H(s) = H(s) − 1, where we used (3.6) and the change of variable (k, n) ↦ (k, l) with l = n + k. Hence

    Gτ0(s) = 1 − 1/H(s) = 1 − (1 − 4pqs²)^{1/2},        4pqs² < 1.
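The closed forms above can be checked numerically by truncating the series defining H(s); this sketch is ours, not part of the notes:

```python
from math import comb

# Truncated series H(s) = sum_k s^(2k) C(2k, k) (pq)^k, to be compared
# with the closed form (1 - 4pqs^2)^(-1/2), valid for |4pqs^2| < 1.
def H_series(s, p, terms=200):
    q = 1.0 - p
    return sum(s**(2 * k) * comb(2 * k, k) * (p * q)**k for k in range(terms))

p, s = 0.3, 0.9
q = 1.0 - p
H_closed = (1.0 - 4 * p * q * s**2) ** -0.5
G_closed = 1.0 - (1.0 - 4 * p * q * s**2) ** 0.5  # equals 1 - 1/H(s)
```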

The probability generating function

    Gτ0(s) = Σ_{n=0}^∞ sⁿ P(τ0 = n),        −1 ≤ s ≤ 1,        (3.10)

can now be used to determine the probability distribution P(τ0 = n) of τ0, as follows:

² We used the formula (1 + x)^α = Σ_{k=0}^∞ (x^k / k!) α(α − 1) × · · · × (α − (k − 1)), cf. (13.4).


    Gτ0(s) = 1 − (1 − 4pqs²)^{1/2}
           = 1 − Σ_{k=0}^∞ ((−4pqs²)^k / k!) (1/2)((1/2) − 1) × · · · × ((1/2) − (k − 1))
           = Σ_{k=1}^∞ s^{2k} ((4pq)^k / (2 · k!)) ∏_{m=1}^{k−1} (m − 1/2),

where we used (13.4) for α = 1/2. By identification with (3.10) we find

    P(τ0 = 2k) = ((4pq)^k / (2 · k!)) ∏_{m=1}^{k−1} (m − 1/2),        k ≥ 1,        (3.11)

while P(τ0 = 2k + 1) = 0, k ∈ N.

Exercise: Show that the formula (3.11) recovers (3.2).

In the general case the probability that the first return to zero occurs within a finite time is

    P(τ0 < ∞) = IE[1{τ0 < ∞}] = Gτ0(1)
              = 1 − (1 − 4pq)^{1/2}
              = 1 − |2p − 1| = 1 − |p − q|
              = 2q if p ≥ 1/2,    2p if p ≤ 1/2,
              = 2 min(p, q),

with P(τ0 = ∞) = |2p − 1| = |p − q|. In the non-symmetric case p ≠ q we have

    P(τ0 < ∞) < 1    and    P(τ0 = ∞) > 0,

whereas in the symmetric case (or fair game) p = q = 1/2 we find that

    P(τ0 < ∞) = 1    and    P(τ0 = ∞) = 0,


i.e. the random walk returns to zero with probability one. We also have

    IE[τ0 1{τ0 < ∞}] = Σ_{n=1}^∞ n P(τ0 = n)
                     = G′τ0(1)
                     = 4pqs / √(1 − 4pqs²) |_{s=1}
                     = 4pq / √(1 − 4pq)
                     = 4pq / |p − q|,        (3.12)

when p ≠ q, see for example here. In the symmetric case p = q = 1/2, taking the limit as p, q → 1/2 in (3.12) yields IE[τ0 1{τ0 < ∞}] = +∞, hence in particular

    IE[τ0] = +∞.        (3.13)

Therefore the random walk returns to 0 with probability one within a finite (random) time, however the average of this random time is infinite. This shows that even a fair game can be risky when the player’s fortune is negative. This also yields an example of a random variable τ0 which is (almost surely)³ finite, but whose expectation is infinite. We also have

    IE[τ0 | τ0 < ∞] = (1 / P(τ0 < ∞)) IE[τ0 1{τ0 < ∞}]
                    = (1 / (2 min(p, q))) · 4pq / |p − q|
                    = 2 max(p, q) / |p − q|.        (3.14)

When p ≠ 1/2 the time τ0 needed to return to 0 is infinite with probability P(τ0 = ∞) = |2p − 1| > 0, hence

    IE[τ0] = Σ_{k=1}^∞ k P(τ0 = k) + ∞ × P(τ0 = ∞) = +∞

³ “Almost surely” means “with probability 1”.



is also infinite in this case. Using similar arguments one can even compute the distribution of the first passage time to a given arbitrary level a ∈ N. This will not be done here. The gambler process and the standard random walk will later be reconsidered in the framework of Markov chains.
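Formula (3.11) and the identities P(τ0 < ∞) = 2 min(p, q) and (3.12) can be verified numerically; the following sketch (our code, evaluating (3.11) directly) truncates the sums at k = 150:

```python
from math import factorial

# P(tau_0 = 2k) from (3.11): (4pq)^k / (2 * k!) * prod_{m=1}^{k-1} (m - 1/2).
def g_even(k, p):
    q = 1.0 - p
    prod = 1.0
    for m in range(1, k):
        prod *= m - 0.5
    return (4 * p * q)**k / (2.0 * factorial(k)) * prod

p = 0.3
# k = 1 and k = 2 recover (3.2); summing over k approximates
# P(tau_0 < infinity) = 2 min(p, q) = 0.6, and summing 2k * g_even(k, p)
# approximates IE[tau_0 1_{tau_0 < infinity}] = 4pq/|p - q| from (3.12).
total = sum(g_even(k, p) for k in range(1, 151))
```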

3.4 Exercises

Exercise 3.1. We consider a simple random walk (Sn)n∈N with independent increments and S0 = 0, in which the probability of advance is p and the probability of retreat is 1 − p.

1. Enumerate all possible sample paths that lead to S4 = 2.
2. Show that

    P(S4 = 2) = C(4, 3) p³(1 − p) = C(4, 1) p³(1 − p).

3. Show that

    P(Sn = k) = C(n, (n + k)/2) p^{(n+k)/2} (1 − p)^{(n−k)/2},    if n + k is even and |k| ≤ n,
    P(Sn = k) = 0,    if n + k is odd or |k| > n.

4. Show that pn,k := P(Sn = k) satisfies the difference equation

    pn+1,k = p pn,k−1 + q pn,k+1,        (3.15)

under the boundary conditions p0,0 = 1, p0,k = 0, k ≠ 0.
5. Obtain the equation (3.15) by a direct argument using first step analysis on random walks.

Exercise 3.2. (cf. Proposition 7.1.2, pages 330-331 of [7]). Consider the random walk (Sn)n≥0 defined by S0 = 0 and

    Sn = X1 + · · · + Xn,        n ≥ 1,

where (Xk)k≥1 is an i.i.d. family of {−1, +1}-valued random variables with distribution

    P(Xk = +1) = p,    P(Xk = −1) = q,


k ≥ 1, with p + q = 1. We let Rn denote the range of (S0, . . . , Sn), i.e. the (random) number of distinct values appearing in the sequence (S0, . . . , Sn).

1. Explain why

    Rn = 1 + ( sup_{k=0,...,n} Sk ) − ( inf_{k=0,...,n} Sk ),

and give the value of R0 and R1.
2. Show that for all k ≥ 1, Rk − Rk−1 is a Bernoulli random variable, and that

    P(Rk − Rk−1 = 1) = P(Sk − S0 ≠ 0, Sk − S1 ≠ 0, . . . , Sk − Sk−1 ≠ 0).

3. Show that for all k ≥ 1 we have

    P(Rk − Rk−1 = 1) = P(X1 ≠ 0, X1 + X2 ≠ 0, . . . , X1 + · · · + Xk ≠ 0).

4. Show that the telescoping identity

    Rn = R0 + Σ_{k=1}^n (Rk − Rk−1)

holds, n ∈ N.
5. Show that P(τ0 = ∞) = lim_{k→∞} P(τ0 > k).
6. From the results of Questions 3 and 4, show that

    IE[Rn] = Σ_{k=0}^n P(τ0 > k),        n ∈ N,

where τ0 = inf{n ≥ 1 : Sn = 0} is the time of first return to zero of the random walk.
7. From the results of Questions 5 and 6, show that

    P(τ0 = ∞) = lim_{n→∞} (1/n) IE[Rn].

8. Show that

    lim_{n→∞} (1/n) IE[Rn] = 0

when p = 1/2, and that IE[Rn] ≃_{n→∞} n|p − q| when p ≠ 1/2.⁴

Hints and comments on Exercise 3.2.

1. No mathematical computation is needed here, a credible explanation (in words) is sufficient. It may be of interest to also compute IE[R2].

⁴ The meaning of f(n) ≃_{n→∞} g(n) is lim_{n→∞} f(n)/g(n) = 1, provided g(n) ≠ 0, n ≥ 1.


2. Show first that the two events

    {Rk − Rk−1 = 1}    and    {Sk − S0 ≠ 0, Sk − S1 ≠ 0, . . . , Sk − Sk−1 ≠ 0}

are the same.
3. Here the events

    {Rk − Rk−1 = 1}    and    {X1 ≠ 0, X1 + X2 ≠ 0, . . . , X1 + · · · + Xk ≠ 0}

are not the same, however we can show the equality between the probabilities.
4. Telescoping identities can be useful in many situations, not restricted to probability or stochastic processes.
5. Use point (2) on page MTH354-16 of the notes, after identifying what the events Ak are.
6. The basic identity IE[1A] = P(A) can be used.
7. A mathematically rigorous proof is asked here. The following definition of limit may be used:

    A real number a is said to be the limit of the sequence (xn)n≥1, written a = lim_{n→∞} xn, if and only if for every real number ε > 0, there exists a natural number N such that for every n > N we have |xn − a| < ε.

Alternatively, you can make a search for “Cesaro means” and state and apply correctly the relevant theorem.
8. Use the formula giving P(τ0 = +∞) in the notes.

[Figure: sample path of Sn against n, with the running range shaded]

Fig. 3.1: Illustration of the range process.

In Figure 3.1 the height at time n of the orange area coincides with Rn − 1.



Chapter 4

Discrete-Time Markov Chains

4.1 Markov property

A Z-valued discrete-time stochastic process (Zn)n∈N is said to be Markov, or to have the Markov property, if for all n ≥ 1 the probability distribution of Zn+1 is determined by the state Zn of the process at time n, and does not depend on the past values of Zk for k ≤ n − 1. In other words, for all n ≥ 1 and all j, i0, . . . , in ∈ Z we have

    P(Zn+1 = j | Zn = in, Zn−1 = in−1, . . . , Z0 = i0) = P(Zn+1 = j | Zn = in).

In particular we have

    P(Zn+1 = j | Zn = in, Zn−1 = in−1) = P(Zn+1 = j | Zn = in),

and

    P(Z2 = j | Z1 = i1, Z0 = i0) = P(Z2 = j | Z1 = i1).

On the other hand the first order transition probabilities can be used for the complete computation of the law of the process by induction, as

    P(Zn = in, Zn−1 = in−1, . . . , Z0 = i0)
        = P(Zn = in | Zn−1 = in−1) · · · P(Z1 = i1 | Z0 = i0) P(Z0 = i0),

i0, i1, . . . , in ∈ Z. By the law of total probability we also have

    P(Z1 = i) = Σ_{j∈Z} P(Z1 = i, Z0 = j)
              = Σ_{j∈Z} P(Z1 = i | Z0 = j) P(Z0 = j),        i ∈ Z.        (4.1)


Example: The random walk

    Sn = X1 + · · · + Xn,        n ∈ N,        (4.2)

considered in Chapter 3, where (Xn)n≥1 is a sequence of independent Z-valued random variables, is a discrete-time Markov chain. Indeed, for all j, i1, . . . , in ∈ Z we have (note that S0 = 0 here)

    P(Sn+1 = j | Sn = in, Sn−1 = in−1, . . . , S1 = i1)        (4.3)
      = P(Sn+1 = j, Sn = in, . . . , S1 = i1) / P(Sn = in, . . . , S1 = i1)
      = P(Sn+1 − Sn = j − in, Sn − Sn−1 = in − in−1, . . . , S2 − S1 = i2 − i1, S1 = i1)
        / P(Sn − Sn−1 = in − in−1, . . . , S2 − S1 = i2 − i1, S1 = i1)
      = P(Xn+1 = j − in, Xn = in − in−1, . . . , X2 = i2 − i1, X1 = i1)
        / P(Xn = in − in−1, . . . , X2 = i2 − i1, X1 = i1)
      = P(Xn+1 = j − in)
      = P(Xn+1 = j − in, Sn = in) / P(Sn = in)
      = P(Sn+1 = j, Sn = in) / P(Sn = in)
      = P(Sn+1 = j | Sn = in),

where we used the independence of the Xk. More generally, all processes with independent increments are Markov chains. However, not all Markov chains have independent increments; in fact the Markov chains of interest in this chapter do not have independent increments. As seen above, the random evolution of a Markov process (Zn)n∈N is determined by the data of

    Pi,j := P(Zn+1 = j | Zn = i),        i, j ∈ Z,

where we assume that the probability P(Zn+1 = j | Zn = i) does not depend on n ∈ N. In this case the Markov chain (Zn)n∈N is said to be time homogeneous. This data can be encoded as a matrix indexed by Z², called the transition matrix of the Markov chain:

    [ Pi,j ]i,j∈Z = [ P(Zn+1 = j | Zn = i) ]i,j∈Z,

also written as

    [ Pi,j ]i,j∈Z =

        [  ...     ...     ...    ...    ...    ...        ]
        [ · · ·  P−2,−2  P−2,−1  P−2,0  P−2,1  P−2,2  · · · ]
        [ · · ·  P−1,−2  P−1,−1  P−1,0  P−1,1  P−1,2  · · · ]
        [ · · ·  P0,−2   P0,−1   P0,0   P0,1   P0,2   · · · ]
        [ · · ·  P1,−2   P1,−1   P1,0   P1,1   P1,2   · · · ]
        [ · · ·  P2,−2   P2,−1   P2,0   P2,1   P2,2   · · · ]
        [  ...     ...     ...    ...    ...    ...        ]

Note the inversion of the order of indices (i, j) between P(Zn+1 = j | Zn = i) and Pi,j. In particular, the initial state i is a line number in the matrix, while the final state j corresponds to a column number. Due to the relation

    Σ_{j∈Z} P(Zn+1 = j | Zn = i) = 1,        i ∈ Z,        (4.4)

the lines of the transition matrix satisfy the condition

    Σ_{j∈Z} Pi,j = 1,

for every line index i ∈ Z. Using the matrix notation we find

    P(Zn = in, Zn−1 = in−1, . . . , Z0 = i0) = Pin−1,in · · · Pi0,i1 P(Z0 = i0),

i0, i1, . . . , in ∈ Z, and

    P(Z1 = i) = Σ_{j∈Z} P(Z0 = j) Pj,i,        i ∈ Z.


A state k ∈ Z is said to be absorbing if Pk,k = 1.

Exercise: Write down the transition matrix [ Pi,j ]i,j∈Z of the unrestricted random walk (4.2). We have

    [ Pi,j ]i,j∈Z =

        [ ...  ...  ...  ...  ...  ...  ... ]
        [ · · ·  0   p   0   0   0  · · ·   ]
        [ · · ·  q   0   p   0   0  · · ·   ]
        [ · · ·  0   q   0   p   0  · · ·   ]
        [ · · ·  0   0   q   0   p  · · ·   ]
        [ · · ·  0   0   0   q   0  · · ·   ]
        [ ...  ...  ...  ...  ...  ...  ... ]        (4.5)

In the sequel we will only consider N-valued Markov chains, in which case the transition matrix [ P(Zn+1 = j | Zn = i) ]i,j∈N of the Markov chain is written as

    [ Pi,j ]i,j∈N =
        [ P0,0  P0,1  P0,2  · · · ]
        [ P1,0  P1,1  P1,2  · · · ]
        [ P2,0  P2,1  P2,2  · · · ]
        [  ...   ...   ...   ...  ]

From (4.4) we have

    Σ_{j=0}^∞ Pi,j = 1,

for all i ∈ N. In case the Markov chain (Zk)k∈N takes values in the finite state space {0, . . . , N}, its transition matrix will simply have the form

    [ Pi,j ]0≤i,j≤N =
        [ P0,0  P0,1  P0,2  · · ·  P0,N ]
        [ P1,0  P1,1  P1,2  · · ·  P1,N ]
        [ P2,0  P2,1  P2,2  · · ·  P2,N ]
        [  ...   ...   ...   ...    ... ]
        [ PN,0  PN,1  PN,2  · · ·  PN,N ]


Example: the transition matrix [ Pi,j ]0≤i,j≤N of the gambling process on {0, 1, . . . , N} with absorbing states 0 and N is given by

    P = [ Pi,j ]0≤i,j≤N =
        [ 1  0  0  · · ·  0  0  0 ]
        [ q  0  p  · · ·  0  0  0 ]
        [ ... ... ...  .  ... ... ... ]
        [ 0  0  0  · · ·  q  0  p ]
        [ 0  0  0  · · ·  0  0  1 ]

Example: The Ehrenfest chain. Two volumes of air (left and right) are connected by a hole and contain a total of N balls. At each time step, one picks a ball at random and moves it to the other side. Let Xn ∈ {0, 1, . . . , N} denote the number of balls in the left side at time n. The transition probabilities P(Xn+1 = j | Xn = i), 0 ≤ i, j ≤ N, are given by

    P(Xn+1 = k + 1 | Xn = k) = (N − k)/N,        0 ≤ k ≤ N − 1,

and

    P(Xn+1 = k − 1 | Xn = k) = k/N,        1 ≤ k ≤ N.
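These transition probabilities assemble into an (N+1) × (N+1) stochastic matrix; a small sketch (our code, not from the notes):

```python
# Transition matrix of the Ehrenfest chain on {0, 1, ..., N}.
def ehrenfest_matrix(N):
    P = [[0.0] * (N + 1) for _ in range(N + 1)]
    for k in range(N + 1):
        if k < N:
            P[k][k + 1] = (N - k) / N  # a right-side ball moves left
        if k > 0:
            P[k][k - 1] = k / N        # a left-side ball moves right
    return P

P = ehrenfest_matrix(4)
row_sums = [sum(row) for row in P]  # each row sums to 1
```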

Example: Credit rating (transition probabilities are expressed in %). Rows give the rating at the start of a year, columns the rating at the end of the year.

          AAA    AA     A      BBB    BB     B      CCC    D      N.R.   Total
    AAA   90.34  5.62   0.39   0.08   0.03   0      0      0      3.5    100
    AA    0.64   88.78  6.72   0.47   0.06   0.09   0.02   0.01   3.21   100
    A     0.07   2.16   87.94  4.97   0.47   0.19   0.01   0.04   4.16   100
    BBB   0.03   0.24   4.56   84.26  4.19   0.76   0.15   0.22   5.59   100
    BB    0.03   0.06   0.4    6.09   76.09  6.82   0.96   0.98   8.58   100
    B     0      0.09   0.29   0.41   5.11   74.62  3.43   5.3    10.76  100
    CCC   0.13   0      0.26   0.77   1.66   8.93   53.19  21.94  13.14  100
    D     0      0      0      0      1.0    3.1    9.29   51.29  37.32  100
    N.R.  0      0      0      0      0      0.1    8.55   74.06  17.07  100

We note that higher ratings are more stable, since the diagonal coefficients of the matrix are decreasing. On the other hand, starting from the rating AA it is easier to be downgraded (probability 6.72%) than to be upgraded (probability 0.64%).


Example: Markov chains in music. By a statistical analysis of the note transitions, every type of music can be encoded into a Markov chain.

         A     A♯    B     C     D     E     F     G     G♯
    A    4/19  0     3/19  0     2/19  1/19  0     6/19  3/19
    A♯   1     0     0     0     0     0     0     0     0
    B    7/15  0     1/15  4/15  0     3/15  0     0     0
    C    0     0     6/15  3/15  6/15  0     0     0     0
    D    0     0     0     3/11  3/11  5/11  0     0     0
    E    4/19  1/19  0     3/19  0     5/19  4/19  1/19  1/19
    F    0     0     0     1/5   0     0     1/5   0     3/5
    G    1/5   0     1/5   2/5   0     0     0     1/5   0
    G♯   0     0     3/4   0     0     1/4   0     0     0

In this example, the transitions of Mozart’s variations on this famous tune have been analyzed to form a transition matrix.¹ Then that transition matrix was used for random melody generation.² See also this arrangement and here for details.³

Example: Text generation. See here for an example of the use of Markov chains for random text generation.

As noted above, the transition matrix is a convenient way to record the values of P(Zn+1 = j | Zn = i) in a table. However, it is much more than that. Suppose for example that we are interested in the two-step transition probability P(Zn+2 = j | Zn = i). This probability does not appear in the transition matrix, but it can be computed by first step analysis, as follows, denoting by S the state space of the process:

    P(Zn+2 = j | Zn = i)
      = Σ_{l∈S} P(Zn+2 = j, Zn+1 = l | Zn = i)
      = Σ_{l∈S} P(Zn+2 = j, Zn+1 = l, Zn = i) / P(Zn = i)
      = Σ_{l∈S} (P(Zn+2 = j, Zn+1 = l, Zn = i) / P(Zn+1 = l, Zn = i)) (P(Zn+1 = l, Zn = i) / P(Zn = i))

¹ ² ³ Try here if it does not work.


      = Σ_{l∈S} P(Zn+2 = j | Zn+1 = l, Zn = i) P(Zn+1 = l | Zn = i)
      = Σ_{l∈S} P(Zn+1 = l | Zn = i) P(Zn+2 = j | Zn+1 = l)
      = Σ_{l∈S} Pi,l Pl,j
      = [P²]i,j,        i, j ∈ S.

More generally, for all k ∈ N we have

    P(Zn+k+1 = j | Zn = i)
      = Σ_{l∈S} P(Zn+k+1 = j, Zn+k = l | Zn = i)
      = Σ_{l∈S} P(Zn+k+1 = j, Zn+k = l, Zn = i) / P(Zn = i)
      = Σ_{l∈S} (P(Zn+k+1 = j, Zn+k = l, Zn = i) / P(Zn+k = l, Zn = i)) (P(Zn+k = l, Zn = i) / P(Zn = i))
      = Σ_{l∈S} P(Zn+k+1 = j | Zn+k = l, Zn = i) P(Zn+k = l | Zn = i)
      = Σ_{l∈S} P(Zn+k+1 = j | Zn+k = l) P(Zn+k = l | Zn = i)
      = Σ_{l∈S} P(Zn+k = l | Zn = i) Pl,j.

We have just checked that the matrix [ P(Zn+k = j | Zn = i) ]i,j∈S satisfies the same induction relation as P^k, i.e.

    [P^{k+1}]i,j = Σ_{l∈S} [P^k]i,l Pl,j,

hence the equality

    [ P(Zn+k = j | Zn = i) ]i,j∈S = [ [P^k]i,j ]i,j∈S = P^k

holds not only for k = 0 and k = 1, but also for all k ∈ N. The relation P^{m+n} = P^m P^n reads


    [P^{m+n}]i,j = Σ_{l∈S} [P^m]i,l [P^n]l,j,

and can be rewritten as

    P(Zn+m = j | Z0 = i) = Σ_{l∈S} P(Zm = j | Z0 = l) P(Zn = l | Z0 = i),

i, j ∈ S, which is called the Chapman-Kolmogorov equation, cf. (1.1). On the finite state space S = {0, 1, . . . , N}, defining the line vectors

    π = [π0, . . . , πN] = [P(X0 = 0), . . . , P(X0 = N)]

and

    η = [η0, . . . , ηN] = [P(X1 = 0), . . . , P(X1 = N)],

Relation (4.1) rewrites in matrix notation as η = πP.

Example: The gambling process (Xn)n≥0. Taking N = 4 and p = 40%, the transition matrix of the process is

    P = [ Pi,j ]0≤i,j≤4 =

        [ 1    0    0    0    0   ]
        [ 0.6  0    0.4  0    0   ]
        [ 0    0.6  0    0.4  0   ]
        [ 0    0    0.6  0    0.4 ]
        [ 0    0    0    0    1   ],

and we have

    P² = P × P =

        [ 1     0     0     0     0    ]
        [ 0.6   0.24  0     0.16  0    ]
        [ 0.36  0     0.48  0     0.16 ]
        [ 0     0.36  0     0.24  0.4  ]
        [ 0     0     0     0     1    ].



Exercise: From the above matrix, show that P(X2 = 4 | X0 = 2) = 0.16, P(X2 = 1 | X0 = 2) = 0, and P(X2 = 2 | X0 = 2) = 0.48.
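The entries requested in the exercise can be checked by squaring P in code; a dependency-free sketch (our code):

```python
# Gambling process on {0, ..., 4}, p = 0.4, with absorbing states 0 and 4.
P = [[1.0, 0.0, 0.0, 0.0, 0.0],
     [0.6, 0.0, 0.4, 0.0, 0.0],
     [0.0, 0.6, 0.0, 0.4, 0.0],
     [0.0, 0.0, 0.6, 0.0, 0.4],
     [0.0, 0.0, 0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P2 = matmul(P, P)
# P2[2][4] = P(X2 = 4 | X0 = 2), P2[2][1] = P(X2 = 1 | X0 = 2), etc.
```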

4.2 The two-state Markov chain

The above discussion shows that there is some interest in computing the n-th order transition matrix Pⁿ. Although this is generally difficult, this is actually possible when the number of states equals two, i.e. S = {0, 1}. In this case the transition matrix has the form

    P = [ 1−a   a  ]
        [  b   1−b ]        (4.6)

with a ∈ [0, 1] and b ∈ [0, 1].

[Figure: two-state diagram, with loop probabilities 1−a at state 0 and 1−b at state 1, transition a from 0 to 1 and transition b from 1 to 0]

In what follows we exclude the case a = b = 0 since it corresponds to the identity matrix (constant chain). We have

    P(Zn+1 = 1 | Zn = 0) = a,    P(Zn+1 = 0 | Zn = 0) = 1 − a,

and

    P(Zn+1 = 0 | Zn = 1) = b,    P(Zn+1 = 1 | Zn = 1) = 1 − b.

The matrix P has two eigenvectors⁴

    [ 1 ]    and    [ −a ]
    [ 1 ]           [  b ]

with respective eigenvalues λ1 = 1 and λ2 = 1 − a − b, see for example here.

⁴ Please refer to MAS213 - Linear Algebra II for more on “eigenvectors, eigenvalues, diagonalization”.


Hence it can be put in diagonal form P = M × D × M⁻¹ as follows:

    P = [ 1 −a ] × [ λ1  0 ] × [  b/(a+b)  a/(a+b) ]
        [ 1  b ]   [ 0  λ2 ]   [ −1/(a+b)  1/(a+b) ].

Consequently we have

    Pⁿ = (M × D × M⁻¹)ⁿ        (4.7)
       = (M × D × M⁻¹) · · · (M × D × M⁻¹) = M × D × · · · × D × M⁻¹        (4.8)
       = M × Dⁿ × M⁻¹        (4.9)
       = [ 1 −a ] × [ 1  0   ] × [  b/(a+b)  a/(a+b) ]
         [ 1  b ]   [ 0  λ2ⁿ ]   [ −1/(a+b)  1/(a+b) ]
       = (1/(a+b)) [ b a ] + (λ2ⁿ/(a+b)) [  a −a ]
                   [ b a ]               [ −b  b ]
       = (1/(a+b)) [ b + aλ2ⁿ    a(1 − λ2ⁿ) ]
                   [ b(1 − λ2ⁿ)  a + bλ2ⁿ   ],        (4.10)

see also here, where

    Dⁿ = [ 1  0   ]
         [ 0  λ2ⁿ ],        n ∈ N.

Hence we can compute the probabilities

    P(Zn = 0 | Z0 = 0) = (b + aλ2ⁿ)/(a + b),    P(Zn = 1 | Z0 = 0) = a(1 − λ2ⁿ)/(a + b),        (4.11)

and

    P(Zn = 0 | Z0 = 1) = b(1 − λ2ⁿ)/(a + b),    P(Zn = 1 | Z0 = 1) = (a + bλ2ⁿ)/(a + b).        (4.12)

As an example, by pathwise analysis we can compute the value of

    P(Z3 = 0 | Z0 = 0) = (1 − a)³ + ab(1 − b) + 2(1 − a)ab,

and show that it coincides with


    P(Z3 = 0 | Z0 = 0) = (b + a(1 − a − b)³)/(a + b).
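The closed form (4.10) and the pathwise computation above can be cross-checked numerically; the sketch below is ours, not part of the notes:

```python
# Closed form (4.10) for P^n of the two-state chain, versus direct powers.
def closed_form(n, a, b):
    lam = (1.0 - a - b) ** n
    s = a + b
    return [[(b + a * lam) / s, a * (1.0 - lam) / s],
            [b * (1.0 - lam) / s, (a + b * lam) / s]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b = 0.2, 0.4
P = [[1 - a, a], [b, 1 - b]]
P3 = matmul(matmul(P, P), P)
# Pathwise value of P(Z3 = 0 | Z0 = 0): should match P3[0][0].
pathwise = (1 - a)**3 + a * b * (1 - b) + 2 * (1 - a) * a * b
```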

Letting n go to infinity in (4.10), we get in particular the long time behavior, or limiting distribution, of the Markov chain:

    lim_{n→∞} Pⁿ = (1/(a + b)) [ b a ]
                               [ b a ],

since |λ2| = |1 − a − b| < 1 whenever (a, b) ≠ (1, 1). Note that convergence will be faster when a + b is closer to 1. Hence we have

    lim_{n→∞} P(Zn = 1 | Z0 = 0) = lim_{n→∞} P(Zn = 1 | Z0 = 1) = a/(a + b)        (4.13)

and

    lim_{n→∞} P(Zn = 0 | Z0 = 0) = lim_{n→∞} P(Zn = 0 | Z0 = 1) = b/(a + b),        (4.14)

and

    π = [π0, π1] = [ b/(a + b), a/(a + b) ]        (4.15)

appears as a limiting distribution as n goes to infinity, provided (a, b) ≠ (1, 1). This means that whatever the starting point Z0, the probability of being at 1 after a “large” time is close to a/(a + b), while the probability of being at 0 is close to b/(a + b). We also note that the distribution π in (4.15) is invariant by P, i.e. π = πP, as we have

    πP = (1/(a + b)) [b, a] [ 1−a   a  ]
                            [  b   1−b ]
       = (1/(a + b)) [ b(1 − a) + ab,  ab + a(1 − b) ]
       = (1/(a + b)) [b, a]
       = π.

In addition, if a + b = 1, or a = 1 − b, one sees that

    Pⁿ = [ b a ] = P,        n ∈ N,
         [ b a ]


and we find $P(Z_n = 1 \mid Z_0 = 0) = P(Z_n = 1 \mid Z_0 = 1) = a$ and $P(Z_n = 0 \mid Z_0 = 0) = P(Z_n = 0 \mid Z_0 = 1) = b$ for all $n \geq 1$, regardless of the initial distribution $[P(Z_0 = 0), P(Z_0 = 1)]$. Note that when $a = b = 1$ the limit $\lim_{n\to\infty} P^n$ does not exist, as we have
$$P^n = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad n = 2k, \qquad P^n = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad n = 2k+1.$$
Next we consider a simulation of the two-state random walk with transition matrix
$$P = \begin{pmatrix} 0.8 & 0.2 \\ 0.4 & 0.6 \end{pmatrix},$$
i.e. $a = 0.2$ and $b = 0.4$. Figure 4.1 represents a sample path $(x_n)_{n=0,1,\ldots,100}$ of the chain, while Figure 4.2 represents the sample average
$$y_n = \frac{1}{n+1}(x_0 + x_1 + \cdots + x_n), \qquad n = 0, 1, \ldots, 100,$$
which counts the proportion of values of the chain in the state 1. This proportion is found to converge to $a/(a+b) = 1/3$.

Fig. 4.1: Sample path.


Fig. 4.2: The proportion of chain values in the state 1 converges to 1/3.

The source code of the above program, written in R, is given below.

a=0.2; b=0.4;
# Dimension of the transition matrix
dim=2
# Definition of the transition matrix
P=matrix(c(1-a,a,b,1-b),nrow=dim,ncol=dim,byrow=TRUE)
# Number of time steps
N=100
Z=array(N+1);
for(ll in seq(1,N)) {
  Z[1]=sample(dim,size=1,prob=P[2,])
  # Random simulation of Z[j+1] given Z[j]
  for (j in seq(1,N)) Z[j+1]=sample(dim,size=1,prob=P[Z[j],])
  Y=array(N+1); S=0;
  # Computation of the average over the l first steps
  for(l in seq(1,N+1)) { Z[l]=Z[l]-1; S=S+Z[l]; Y[l]=S/l; }
  X=array(N+1); for(l in seq(1,N+1)) { X[l]=l-1; }
  par(mfrow=c(2,1))
  plot(X,Y,type="l",yaxt="n",xaxt="n",xlim=c(0,N),xlab="",ylim=c(0,1),ylab="",xaxs="i",col="black",main="",bty="n")
  segments(0, a/(a+b), N, a/(a+b))
  axis(2,pos=0,at=c(0.0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0))
  axis(1,pos=0,at=seq(0,N,10),outer=TRUE)
  plot(X,Z,type="o",xlab="",ylab="",xlim=c(0,N),yaxt="n",xaxt="n",xaxs="i",col="black",main="",pch=20,bty="n")
  axis(1,pos=1,at=seq(0,N+1,10),outer=TRUE,padj=-4,tcl=0.5)
  axis(1,pos=0,at=seq(0,N+1,10),outer=TRUE)
  axis(2,las=2,at=0:1)
  readline(prompt = "Pause. Press <Enter> to continue...")
}

We close this chapter with two animations of Markov chains (use the controls to view the animations under Acrobat Reader). First we check that the proportion of chain values in the state 1 converges to 1/3 for a two-state Markov chain.

Fig. 4.3: Animated convergence for the two-state Markov chain.

Next is an animation of a five-state Markov chain.



Fig. 4.4: Animated five-state Markov chain.

4.3 Exercises

Exercise 4.1. (Exercise III.5.9 in [4]). In a simplified model of a certain television game show, suppose that the contestant, having won k dollars, will at the next play have k + 1 dollars with probability q, and be put out of the game and leave with nothing with probability p = 1 − q. Suppose that the contestant begins with one dollar. Model her winnings after n plays as a success runs Markov chain by specifying the corresponding transition probability matrix.



Chapter 5

First Step Analysis

5.1 Hitting probabilities

Let us consider a Markov chain $(Z_n)_{n\in\mathbb{N}}$ with state space $S$, and let $A \subset S$ denote a subset of $S$. We are interested in the first time $T_A$ the chain hits the subset $A$, i.e.
$$T_A = \inf\{n \geq 0 : Z_n \in A\}, \qquad (5.1)$$
with $T_A = 0$ if $Z_0 \in A$ and $T_A = +\infty$ if $\{n \geq 0 : Z_n \in A\} = \emptyset$, i.e. if $Z_n \notin A$ for all $n \in \mathbb{N}$. Similarly to the gambling problem we would like to compute
$$g_l(k) = P(Z_{T_A} = l \mid Z_0 = k), \qquad k \in S, \quad l \in A,$$
and
$$h_A(k) := \mathrm{IE}[T_A \mid Z_0 = k], \qquad k \in S.$$

This can be done by first step analysis. For all $k \in S \setminus A$ we have $T_A \geq 1$ given that $Z_0 = k$, hence we can write
$$\begin{aligned}
g_l(k) &= P(Z_{T_A} = l \mid Z_0 = k) \\
&= \sum_{m \in S} P(Z_{T_A} = l \mid Z_1 = m)\, P(Z_1 = m \mid Z_0 = k) \\
&= \sum_{m \in S} P_{k,m}\, P(Z_{T_A} = l \mid Z_1 = m) \\
&= \sum_{m \in S} P_{k,m}\, P(Z_{T_A} = l \mid Z_0 = m) \\
&= \sum_{m \in S} P_{k,m}\, g_l(m), \qquad k \in S \setminus A, \quad l \in A,
\end{aligned}$$
i.e.
$$g_l(k) = \sum_{m \in S} P_{k,m}\, g_l(m), \qquad k \in S, \quad l \in A, \qquad (5.2)$$
under the boundary conditions
$$g_l(k) = P(Z_{T_A} = l \mid Z_0 = k) = 1_{\{k=l\}}, \qquad k, l \in A,$$
since $T_A = 0$ whenever $Z_0 \in A$. Hence this equation can be rewritten in matrix form as
$$g_l = P g_l, \qquad l \in A, \qquad (5.3)$$
where $g_l$ is a column vector, under the boundary condition
$$g_l(k) = P(Z_{T_A} = l \mid Z_0 = k) = 1_{\{l\}}(k) = \begin{cases} 1, & k = l, \\ 0, & k \neq l, \end{cases}$$
for $k \in A$ and all $l \in S$. In case
$$P_{k,l} = 1_{\{k=l\}}, \qquad k, l \in A,$$
the set $A$ is said to be absorbing. In addition, the hitting probabilities $g_l(k) = P(Z_{T_A} = l \mid Z_0 = k)$ satisfy the condition
$$1 = P(T_A = +\infty \mid Z_0 = k) + \sum_{l \in A} P(Z_{T_A} = l \mid Z_0 = k) = P(T_A = +\infty \mid Z_0 = k) + \sum_{l \in A} g_l(k), \qquad (5.4)$$
for all $k \in S$. Note that we may have $P(T_A = +\infty \mid Z_0 = k) > 0$; for example in the following chain with $A = \{0\}$ and $k = 1$ we have $P(T_{\{0\}} = +\infty \mid Z_0 = 1) = 0.5$.

[Graph: state 1 moves to the absorbing states 0 and 2 with probability 0.5 each.]

Assume now that the state space is $S = \{0, 1, \ldots, N\}$ and the transition matrix $P$ has the form
$$P = \begin{pmatrix} Q & R \\ 0 & \mathrm{Id} \end{pmatrix}, \qquad (5.5)$$
where $Q$ is an $(r+1) \times (r+1)$ matrix and $\mathrm{Id}$ is the $(N-r) \times (N-r)$ identity matrix, in which case the states in $\{r+1, \ldots, N\}$ are absorbing. If $A = \{r+1, \ldots, N\}$ the equation (5.2) can be rewritten as
$$\begin{aligned}
g_l(k) &= \sum_{m=0}^{N} P_{k,m}\, g_l(m) \\
&= \sum_{m=0}^{r} P_{k,m}\, g_l(m) + \sum_{m=r+1}^{N} P_{k,m}\, g_l(m) \\
&= \sum_{m=0}^{r} P_{k,m}\, g_l(m) + P_{k,l}, \qquad 0 \leq k \leq N, \quad r+1 \leq l \leq N,
\end{aligned}$$
i.e.
$$g_l(k) = \sum_{m=0}^{r} P_{k,m}\, g_l(m) + P_{k,l}, \qquad 0 \leq k \leq N, \quad r+1 \leq l \leq N,$$
with boundary condition
$$g_l(k) = 1_{\{k=l\}}, \qquad 0 \leq l \leq N, \quad r+1 \leq k \leq N.$$
In the case of the two-state Markov chain with transition matrix (4.6) with $A = \{0\}$ we simply find $g_0(0) = 1$ and
$$g_0(1) = b + (1-b)\, g_0(1),$$


hence g0 (1) = 1 if b > 0 and g0 (1) = 0 if b = 0.
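In the block form (5.5), the equations above state that the matrix $G = (g_l(k))$, with rows indexed by the transient states $0, \ldots, r$ and columns by the absorbing states $r+1, \ldots, N$, solves $(I - Q)G = R$. A minimal numerical sketch follows; the 4-state chain below, with absorbing states 2 and 3, is a made-up example.

```python
import numpy as np

# Made-up absorbing chain in block form (5.5): states 0, 1 are transient,
# states 2, 3 are absorbing.  The equation g_l(k) = sum_m Q[k,m] g_l(m)
# + R[k,l] reads (I - Q) G = R in matrix form.
Q = np.array([[0.5, 0.2],
              [0.3, 0.1]])   # transitions among the transient states 0, 1
R = np.array([[0.1, 0.2],
              [0.2, 0.4]])   # transitions into the absorbing states 2, 3
G = np.linalg.solve(np.eye(2) - Q, R)

# Each row of G sums to 1: absorption in {2, 3} is certain here
assert np.allclose(G.sum(axis=1), 1.0)
```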

5.2 Mean hitting and absorption times

Given a Markov chain $(X_n)_{n\in\mathbb{N}}$, we consider the average time
$$h_A(k) := \mathrm{IE}[T_A \mid X_0 = k], \qquad k \in S.$$
For all $k \in S \setminus A$, using first step analysis we have
$$\begin{aligned}
h_A(k) &= \mathrm{IE}[T_A \mid X_0 = k] \\
&= \sum_{l \in S} P(X_1 = l \mid X_0 = k)(1 + \mathrm{IE}[T_A \mid X_0 = l]) \\
&= \sum_{l \in S} P(X_1 = l \mid X_0 = k) + \sum_{l \in S} P(X_1 = l \mid X_0 = k)\, \mathrm{IE}[T_A \mid X_0 = l] \\
&= 1 + \sum_{l \in S} P(X_1 = l \mid X_0 = k)\, \mathrm{IE}[T_A \mid X_0 = l] \\
&= 1 + \sum_{l \in S} P_{k,l}\, h_A(l), \qquad k \in S \setminus A,
\end{aligned}$$
i.e.
$$h_A(k) = 1 + \sum_{l \in S} P_{k,l}\, h_A(l), \qquad k \in S \setminus A, \qquad (5.6)$$
under the boundary conditions
$$h_A(l) = \mathrm{IE}[T_A \mid X_0 = l] = 0, \qquad l \in A,$$
which imply that (5.6) becomes
$$h_A(k) = 1 + \sum_{l \in S \setminus A} P_{k,l}\, h_A(l), \qquad k \in S \setminus A.$$
This equation can be rewritten in matrix form as
$$h_A = \begin{pmatrix} 1 \\ \vdots \\ 1 \end{pmatrix} + P h_A,$$
by considering only the lines with index $k \in S \setminus A$, under the boundary conditions $h_A(k) = 0$, $k \in A$.


When the transition matrix $P$ has the form (5.5) and $A = \{r+1, \ldots, N\}$, Equation (5.6) rewrites as
$$\begin{aligned}
h_A(k) &= 1 + \sum_{l=0}^{N} P_{k,l}\, h_A(l) \\
&= 1 + \sum_{l=0}^{r} P_{k,l}\, h_A(l) + \sum_{l=r+1}^{N} P_{k,l}\, h_A(l) \\
&= 1 + \sum_{l=0}^{r} P_{k,l}\, h_A(l), \qquad 0 \leq k \leq r,
\end{aligned}$$
since $h_A(l) = 0$, $l = r+1, \ldots, N$, i.e.
$$h_A(k) = 1 + \sum_{l=0}^{r} P_{k,l}\, h_A(l), \qquad 0 \leq k \leq r,$$
with $h_A(k) = 0$, $r+1 \leq k \leq N$. In the case of the two-state Markov chain with transition matrix (4.6) with $A = \{0\}$ we simply find $h_{\{0\}}(0) = 0$ and
$$h_{\{0\}}(1) = 1 + (1-b)\, h_{\{0\}}(1),$$
hence $h_{\{0\}}(1) = 1/b$, and similarly we find $h_{\{1\}}(0) = 1/a$, with $h_{\{0\}}(0) = h_{\{1\}}(1) = 0$.

The above can be generalized to derive an equation for an expectation of the form
$$h_A(k) := \mathrm{IE}\left[\left.\sum_{i=0}^{T_A - 1} f(X_i)\, \right|\, X_0 = k\right], \qquad 0 \leq k \leq N,$$
as follows:
$$h_A(k) = f(k) + \sum_{m=0}^{r} P_{k,m}\, h_A(m), \qquad 0 \leq k \leq r,$$
with $h_A(k) = 0$, $r+1 \leq k \leq N$. When $f = 1_{\{j\}}$, i.e.
$$f(Z_i) = 1_{\{j\}}(Z_i) = \begin{cases} 1, & Z_i = j, \\ 0, & Z_i \neq j, \end{cases}$$
this will yield the mean number of visits to state $j$ before absorption. When $A = \{m\}$ we find
$$h_{\{m\}}(k) = 1 + \sum_{\substack{l \in S \\ l \neq m}} P_{k,l}\, h_{\{m\}}(l), \qquad k \in S \setminus \{m\}, \qquad (5.7)$$

and $h_{\{m\}}(m) = 0$.

Examples.

1. Consider a Markov chain on $\{0, 1, 2, 3\}$ with transition matrix of the form
$$P = \begin{pmatrix} 1 & 0 & 0 & 0 \\ a & b & c & d \\ \alpha & \beta & \gamma & \eta \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
Let $A = \{0, 3\}$ and $T = T_{\{0,3\}} = \inf\{n \geq 0 : X_n = 0 \text{ or } X_n = 3\}$, and compute the probabilities $g_0(k) = P(X_T = 0 \mid X_0 = k)$ of hitting 0 first within $\{0, 3\}$ starting from $k = 0, 1, 2, 3$.

[Graph of the chain: states 0 and 3 are absorbing; state 1 moves to 0, 1, 2, 3 with probabilities $a, b, c, d$; state 2 moves to 0, 1, 2, 3 with probabilities $\alpha, \beta, \gamma, \eta$.]

Noting that 0 and 3 are absorbing states and writing the relevant lines of $g = Pg$ by first step analysis, we have
$$\begin{cases} g_0(0) = 1 \\ g_0(1) = a + b\, g_0(1) + c\, g_0(2) \\ g_0(2) = \alpha + \beta\, g_0(1) + \gamma\, g_0(2) \\ g_0(3) = 0, \end{cases}$$
which has for solution
$$g_0(0) = 1, \qquad g_0(1) = \frac{c\alpha + a(1-\gamma)}{(1-b)(1-\gamma) - c\beta}, \qquad g_0(2) = \frac{-b\alpha + a\beta + \alpha}{(1-b)(1-\gamma) - c\beta}, \qquad g_0(3) = 0.$$
We have $g_l(0) = g_l(3) = 0$ for $l = 1, 2$, and by a similar analysis we also find
$$g_3(1) = \frac{c\eta + d(1-\gamma)}{(1-b)(1-\gamma) - c\beta},$$
and note that
$$g_0(1) + g_3(1) = \frac{c\alpha + a(1-\gamma)}{(1-b)(1-\gamma) - c\beta} + \frac{c\eta + d(1-\gamma)}{(1-b)(1-\gamma) - c\beta} = 1,$$
since $\alpha + \eta = 1 - \gamma - \beta$. We also check that in case $a = d$ and $\alpha = \eta$ we have, using $1-b = 2a+c$ and $1-\gamma = 2\alpha+\beta$,
$$g_0(1) = \frac{c\alpha + a(2\alpha + \beta)}{(2a+c)(2\alpha+\beta) - c\beta} = g_0(2) = \frac{1}{2}.$$
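The closed-form expressions above for $g_0(1)$ and $g_0(2)$ can be verified numerically; the parameter values below are arbitrary sample choices satisfying the row-sum constraints.

```python
import numpy as np

# Sample values for a, b, c, d and alpha, beta, gamma, eta (each row sums to 1)
a, b, c, d = 0.3, 0.2, 0.4, 0.1
al, be, ga, et = 0.2, 0.3, 0.1, 0.4

# Solve the two interior first-step-analysis equations for g0(1), g0(2):
#   (1-b) g0(1) - c g0(2)  = a
#   -be   g0(1) + (1-ga) g0(2) = al
A = np.array([[1 - b, -c], [-be, 1 - ga]])
g1, g2 = np.linalg.solve(A, np.array([a, al]))

den = (1 - b) * (1 - ga) - c * be
assert abs(g1 - (c * al + a * (1 - ga)) / den) < 1e-12
assert abs(g2 - (-b * al + a * be + al) / den) < 1e-12
```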

2. (Problem III.6.2 in [4]). Consider the Markov chain whose transition probability matrix is given by
$$[P_{i,j}]_{0 \leq i,j \leq 3} = \begin{pmatrix} \alpha & 0 & \beta & 0 \\ \alpha & 0 & 0 & \beta \\ \alpha & \beta & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},$$
where $\alpha, \beta \geq 0$ and $\alpha + \beta = 1$. Determine the mean time to reach state 3 starting from state 0. We observe that state 3 is absorbing.

[Graph of the chain: each of states 0, 1, 2 returns to 0 with probability $\alpha$; state 0 moves to 2, state 1 moves to 3, and state 2 moves to 1, each with probability $\beta$; state 3 is absorbing.]

Let $h_3(k) = \mathrm{IE}[T_3 \mid X_0 = k]$ denote the mean (hitting) time to reach 3 starting from state $k = 0, 1, 2, 3$. We get
$$\begin{cases} h_3(0) = 1 + \alpha h_3(0) + \beta h_3(2) \\ h_3(1) = 1 + \alpha h_3(0) \\ h_3(2) = 1 + \alpha h_3(0) + \beta h_3(1) \\ h_3(3) = 0, \end{cases}$$
which, using the relation $\alpha = 1 - \beta$, yields
$$h_3(3) = 0, \qquad h_3(1) = \frac{1}{\beta^3}, \qquad h_3(2) = \frac{1+\beta}{\beta^3}, \qquad h_3(0) = \frac{1+\beta+\beta^2}{\beta^3}.$$
Since state 3 can only be reached from 1 with probability $\beta$, it is natural that the hitting times go to infinity as $\beta$ goes to zero. We also check that $h_3(3) < h_3(1) < h_3(2) < h_3(0)$, as can be expected from the graph. In addition, $(h_3(1), h_3(2), h_3(0))$ converge to $(1, 2, 3)$ as $\beta$ goes to 1, as can be expected.
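The closed-form hitting times above can also be checked by solving the linear system $h = 1 + Qh$ on the transient states numerically; the value $\beta = 0.5$ below is an arbitrary sample choice.

```python
import numpy as np

be = 0.5          # sample value of beta
al = 1 - be       # alpha = 1 - beta

# Transition matrix restricted to the transient states 0, 1, 2
Q = np.array([[al, 0, be],
              [al, 0, 0],
              [al, be, 0]])
# Mean hitting times of state 3 solve (I - Q) h = 1
h = np.linalg.solve(np.eye(3) - Q, np.ones(3))

assert abs(h[0] - (1 + be + be**2) / be**3) < 1e-9   # h3(0)
assert abs(h[1] - 1 / be**3) < 1e-9                  # h3(1)
assert abs(h[2] - (1 + be) / be**3) < 1e-9           # h3(2)
```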

5.3 First return times

Consider now the first return time $\tau_j$ to state $j \in S$, defined by
$$\tau_j = \inf\{n \geq 1 : X_n = j\},$$
with $\tau_j = +\infty$ if $X_n \neq j$ for all $n \geq 1$. Note that in contrast with (5.1) the infimum is taken here for $n \geq 1$. Denote by
$$\mu_j(i) = \mathrm{IE}[\tau_j \mid X_0 = i]$$
the mean return time to $j \in S$ starting from $i \in S$. Here the chain $(X_n)_{n\in\mathbb{N}}$ may have no absorbing states, and this problem is different from the hitting time problem as the time of return to a state is always greater than or equal to one. Nevertheless, by definition, for all $i \neq j$ we have
$$h_i(j) = \mathrm{IE}[T_{\{i\}} \mid X_0 = j] = \mathrm{IE}[\tau_i \mid X_0 = j] = \mu_i(j),$$
while $h_i(i) = 0$ and $\mu_i(i) \geq 1$, $i \in S$. The mean first return times can be computed via first step analysis. We have
$$\begin{aligned}
\mu_j(i) &= \mathrm{IE}[\tau_j \mid X_0 = i] \\
&= 1 \times P(X_1 = j \mid X_0 = i) + \sum_{\substack{l \in S \\ l \neq j}} P(X_1 = l \mid X_0 = i)(1 + \mathrm{IE}[\tau_j \mid X_0 = l]) \\
&= P_{i,j} + \sum_{\substack{l \in S \\ l \neq j}} P_{i,l}(1 + \mu_j(l)) \\
&= P_{i,j} + \sum_{\substack{l \in S \\ l \neq j}} P_{i,l} + \sum_{\substack{l \in S \\ l \neq j}} P_{i,l}\, \mu_j(l) \\
&= \sum_{l \in S} P_{i,l} + \sum_{\substack{l \in S \\ l \neq j}} P_{i,l}\, \mu_j(l) \\
&= 1 + \sum_{\substack{l \in S \\ l \neq j}} P_{i,l}\, \mu_j(l),
\end{aligned}$$
hence
$$\mu_j(i) = 1 + \sum_{\substack{l \in S \\ l \neq j}} P_{i,l}\, \mu_j(l), \qquad i, j \in S.$$

The difference with the mean hitting time computation (5.7) is that here we have no boundary conditions, and $\mu_i(i)$ cannot be zero, while we always have $h_i(i) = 0$, $i \in S$. In the sequel we let
$$p_{ij} = P(\tau_j < \infty \mid X_0 = i) = P(X_n = j \text{ for some } n \geq 1 \mid X_0 = i)$$
denote the probability of return to state $j$ starting from state $i$. The probability $p_{ii}$ can be computed as follows:
$$P(X_n = i \text{ for some } n \geq 1 \mid X_0 = i) = \sum_{n=1}^{\infty} P(X_n = i, X_{n-1} \neq i, \ldots, X_1 \neq i \mid X_0 = i) = \sum_{n=1}^{\infty} f_{ii}^{(n)},$$
where
$$f_{ij}^{(n)} := P(\tau_j = n \mid X_0 = i) = P(X_n = j, X_{n-1} \neq j, \ldots, X_1 \neq j \mid X_0 = i)$$
is the distribution of $\tau_j$ given that $X_0 = i$, with the relation
$$\begin{aligned}
P_{i,i}^n &= P(X_n = i \mid X_0 = i) \\
&= \sum_{k=1}^{n} P(X_k = i, X_{k-1} \neq i, \ldots, X_1 \neq i \mid X_0 = i)\, P(X_n = i \mid X_k = i) \\
&= \sum_{k=1}^{n} P(X_k = i, X_{k-1} \neq i, \ldots, X_1 \neq i \mid X_0 = i)\, P(X_{n-k} = i \mid X_0 = i) \\
&= \sum_{k=1}^{n} f_{ii}^{(k)}\, P_{i,i}^{n-k},
\end{aligned}$$
which extends (3.5), with
$$f_{ii}^{(1)} = P(X_1 = i \mid X_0 = i) = P_{i,i}, \qquad i \in S.$$

Examples.

1. First return times. First, let us consider the two-state Markov chain. The mean return time $\mu_0(i) = \mathrm{IE}[\tau_0 \mid X_0 = i]$ to 0 starting from $i \in \{0, 1\}$ satisfies
$$\begin{cases} \mu_0(0) = 1 + a\,\mu_0(1) \\ \mu_0(1) = 1 + (1-b)\,\mu_0(1), \end{cases}$$
which yields
$$\mu_0(0) = \frac{a+b}{b} \quad \text{and} \quad \mu_0(1) = \frac{1}{b}. \qquad (5.8)$$
In the two-state case the distribution of $\tau_i$ given $X_0 = i$ is given by
$$f_{00}^{(n)} = P(\tau_0 = n \mid X_0 = 0) = \begin{cases} 1-a & \text{if } n = 1, \\ ab(1-b)^{n-2} & \text{if } n \geq 2, \end{cases}$$
hence (5.8) can be directly recovered as¹
$$\begin{aligned}
\mu_0(0) &= 1 - a + ab \sum_{n=0}^{\infty} (n+2)(1-b)^n \\
&= 1 - a + ab(1-b) \sum_{n=0}^{\infty} n(1-b)^{n-1} + 2ab \sum_{n=0}^{\infty} (1-b)^n \\
&= \frac{a+b}{b}.
\end{aligned}$$
Similarly we check that
$$\begin{cases} \mu_1(0) = 1 + (1-a)\,\mu_1(0) \\ \mu_1(1) = 1 + b\,\mu_1(0), \end{cases}$$
which yields
$$\mu_1(0) = \frac{1}{a} \quad \text{and} \quad \mu_1(1) = \frac{a+b}{a},$$
and can be directly recovered by
$$\mu_1(1) = 1 - b + ab \sum_{n=0}^{\infty} (n+2)(1-a)^n = \frac{a+b}{a}.$$
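The series recovery of (5.8) can also be checked numerically by truncating the sum $\sum_n n f_{00}^{(n)}$ at a large finite horizon; the values a = 0.2, b = 0.4 below are arbitrary sample choices.

```python
# Truncated series sum_n n f_00^(n) for the two-state chain,
# compared with the closed form mu_0(0) = (a+b)/b
a, b = 0.2, 0.4
mu = (1 - a) + sum(n * a * b * (1 - b) ** (n - 2) for n in range(2, 2000))
assert abs(mu - (a + b) / b) < 1e-9
```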

¹ using the identities $\sum_{k=0}^{\infty} r^k = (1-r)^{-1}$ and $\sum_{k=1}^{\infty} k r^{k-1} = (1-r)^{-2}$.

2. Maze problem. Consider a fish placed in an aquarium with 9 compartments:

[Figure (5.9): 3 × 3 grid of compartments numbered 1 2 3 / 4 5 6 / 7 8 9, with open passages 1–2, 2–3, 3–6, 5–6, 5–8, 4–7, 7–8 and 8–9.]

The fish moves randomly: at each time step it changes compartments, and if there are $k \geq 1$ exit doors from one compartment, it chooses one of them with probability $1/k$, i.e. the transition matrix is
$$P = \begin{pmatrix}
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1/2 & 0 & 1/2 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1/2 & 0 & 0 & 0 & 1/2 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1/2 & 0 & 1/2 & 0 \\
0 & 0 & 1/2 & 0 & 1/2 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1/2 & 0 & 0 & 0 & 1/2 & 0 \\
0 & 0 & 0 & 0 & 1/3 & 0 & 1/3 & 0 & 1/3 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0
\end{pmatrix}.$$
Find the average time to come back to 1 starting from 1. Letting $\tau_1 = \inf\{n \geq 1 : X_n = 1\}$ denote the first return time to 1 and $\mu_1(k) = \mathrm{IE}[\tau_1 \mid X_0 = k]$ the mean return time to 1 starting from $k$, we have


$$\begin{cases}
\mu_1(1) = 1 + \mu_1(2) \\
\mu_1(2) = 1 + \tfrac{1}{2}\mu_1(3) \\
\mu_1(3) = 1 + \tfrac{1}{2}\mu_1(2) + \tfrac{1}{2}\mu_1(6) \\
\mu_1(4) = 1 + \mu_1(7) \\
\mu_1(5) = 1 + \tfrac{1}{2}\mu_1(8) + \tfrac{1}{2}\mu_1(6) \\
\mu_1(6) = 1 + \tfrac{1}{2}\mu_1(3) + \tfrac{1}{2}\mu_1(5) \\
\mu_1(7) = 1 + \tfrac{1}{2}\mu_1(4) + \tfrac{1}{2}\mu_1(8) \\
\mu_1(8) = 1 + \tfrac{1}{3}\mu_1(7) + \tfrac{1}{3}\mu_1(5) + \tfrac{1}{3}\mu_1(9) \\
\mu_1(9) = 1 + \mu_1(8),
\end{cases}$$
i.e., substituting the equation for $\mu_1(2)$ into the one for $\mu_1(3)$,
$$3\mu_1(3) = 6 + 2\mu_1(6),$$
i.e.
$$\mu_1(1) = 1 + \mu_1(2), \qquad \mu_1(2) = 1 + \tfrac{1}{2}\mu_1(3), \qquad \mu_1(3) = 2 + \tfrac{2}{3}\mu_1(6),$$
$$\mu_1(4) = 1 + \mu_1(7), \qquad \mu_1(5) = 1 + \tfrac{1}{2}\mu_1(8) + \tfrac{1}{2}\mu_1(6), \qquad \mu_1(7) = 3 + \mu_1(8),$$
$$0 = 80 + 5\mu_1(6) - 5\mu_1(8), \qquad 0 = 30 + 3\mu_1(8) - 5\mu_1(6), \qquad \mu_1(9) = 1 + \mu_1(8),$$
which yields
$$\mu_1(1) = 16, \quad \mu_1(2) = 15, \quad \mu_1(3) = 28, \quad \mu_1(4) = 59, \quad \mu_1(5) = 48, \qquad (5.10)$$
$$\mu_1(6) = 39, \quad \mu_1(7) = 58, \quad \mu_1(8) = 55, \quad \mu_1(9) = 56.$$
Consequently, it takes on average 16 steps to come back to 1 starting from 1, and 59 steps starting from 4.
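The maze system above can be solved numerically: the return times $\mu_1(k)$, $k \geq 2$, are hitting times of compartment 1, obtained from $h = 1 + Qh$ where $Q$ is the transition matrix restricted to the states 2–9, and then $\mu_1(1) = 1 + \mu_1(2)$.

```python
import numpy as np

# Build the 9x9 maze transition matrix from the open passages;
# states 1..9 are stored at indices 0..8
links = {1: [2], 2: [1, 3], 3: [2, 6], 4: [7], 5: [6, 8],
         6: [3, 5], 7: [4, 8], 8: [5, 7, 9], 9: [8]}
P = np.zeros((9, 9))
for i, nbrs in links.items():
    for j in nbrs:
        P[i - 1, j - 1] = 1 / len(nbrs)

Q = P[1:, 1:]                          # restriction to states 2..9
h = np.linalg.solve(np.eye(8) - Q, np.ones(8))
mu1 = [1 + h[0]] + list(h)             # mu1(1), mu1(2), ..., mu1(9)

assert abs(mu1[0] - 16) < 1e-9         # mean return time to 1 is 16
assert abs(mu1[3] - 59) < 1e-9         # mu1(4) = 59
```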

5.4 Number of returns

Let
$$R_i = \sum_{n=1}^{\infty} 1_{\{X_n = i\}}$$
denote the number of visits to state $i$ of the chain $(X_n)_{n\in\mathbb{N}}$. If $R_j = 0$ starting from $X_0 = i$, the chain never visits state $j$, and this happens with probability $1 - p_{ij}$, hence $P(R_j = 0 \mid X_0 = i) = 1 - p_{ij}$. On the other hand, when the chain $(X_n)_{n\in\mathbb{N}}$ makes a number $m \geq 1$ of visits to state $j$, it makes a first visit to state $j$ with probability $p_{ij}$ and then makes $m-1$ returns to $j$, each with probability $p_{jj}$. After those $m$ visits it never returns to $j$, and this happens with probability $1 - p_{jj}$. Hence given that $X_0 = i$ we have
$$P(R_j = m \mid X_0 = i) = \begin{cases} p_{ij}(1-p_{jj})(p_{jj})^{m-1}, & m \geq 1, \\ 1 - p_{ij}, & m = 0. \end{cases} \qquad (5.11)$$
In case $i = j$, $R_i$ is the number of returns to state $i$ starting from $i$, and it has the geometric distribution
$$P(R_i = m \mid X_0 = i) = (1-p_{ii})(p_{ii})^m, \qquad m \geq 0.$$
Note that we have
$$P(R_i = \infty \mid X_0 = i) = \begin{cases} 1, & \text{if } p_{ii} = 1, \\ 0, & \text{if } p_{ii} < 1, \end{cases} \qquad (5.12)$$
and
$$P(R_i < \infty \mid X_0 = i) = \begin{cases} 0, & \text{if } p_{ii} = 1, \\ 1, & \text{if } p_{ii} < 1. \end{cases} \qquad (5.13)$$
On the other hand, if $p_{jj} = 1$ we have $P(R_j = \infty \mid X_0 = i) = p_{ij}$. We also have²
$$\mathrm{IE}[R_j \mid X_0 = i] = \sum_{m=0}^{\infty} m\, P(R_j = m \mid X_0 = i) = (1-p_{jj})\, p_{ij} \sum_{m=0}^{\infty} m\, (p_{jj})^{m-1} = \frac{p_{ij}}{1-p_{jj}}, \qquad (5.14)$$
which is finite if $p_{jj} < 1$.
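The identity (5.14) can be checked numerically from the distribution (5.11) by truncating the sum at a large finite horizon; the return probabilities below are arbitrary sample values.

```python
# Truncated expectation of R_j under (5.11), compared with p_ij/(1 - p_jj)
p_ij, p_jj = 0.7, 0.6   # sample return probabilities
mean = sum(m * p_ij * (1 - p_jj) * p_jj ** (m - 1) for m in range(1, 5000))
assert abs(mean - p_ij / (1 - p_jj)) < 1e-9
```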

² using the identity $\sum_{k=1}^{\infty} k r^{k-1} = (1-r)^{-2}$.

5.5 Exercises

Exercise 5.1. (Exercise III.5.7 in [4]). Consider the random walk Markov chain whose transition probability matrix is given by
$$[P_{i,j}]_{0 \leq i,j \leq 3} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0.3 & 0 & 0.7 & 0 \\ 0 & 0.3 & 0 & 0.7 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
1. Find the absorbing states of the chain.
2. Starting in state 1, determine the probability that the process is absorbed into state 0.

Exercise 5.2. (Problem III.6.2 in [4]). Consider the random walk Markov chain whose transition probability matrix is given by
$$[P_{i,j}]_{0 \leq i,j \leq 3} = \begin{pmatrix} 0.5 & 0 & 0.5 & 0 \\ 0.5 & 0 & 0 & 0.5 \\ 0.5 & 0.5 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
Determine the mean time to reach state 3 starting from state 0.

Exercise 5.3. Consider a Markov chain $(X_n)_{n\in\mathbb{N}}$ on $\{0, 1, \ldots, N\}$, $N \geq 1$, with transition matrix $P = [P_{i,j}]_{0 \leq i,j \leq N}$.
1. Let
$$T_0 = \inf\{n \geq 0 : X_n = 0\}, \qquad T_N = \inf\{n \geq 0 : X_n = N\},$$
and
$$g(k) = P(T_0 < T_N \mid X_0 = k), \qquad k = 0, 1, \ldots, N.$$
What are the values of $g(0)$ and $g(N)$?
2. Show, using first step analysis, that the function $g$ satisfies the relation
$$g(k) = \sum_{l=0}^{N} P_{k,l}\, g(l), \qquad k = 1, \ldots, N-1.$$
3. In this question and the following ones, we consider the Wright-Fisher stochastic model in population genetics, in which the state $X_n$ denotes the number of individuals in the population at time $n$, and
$$P_{k,l} = P(X_{n+1} = l \mid X_n = k) = \binom{N}{l}\left(\frac{k}{N}\right)^l \left(1 - \frac{k}{N}\right)^{N-l}, \qquad k, l = 0, 1, \ldots, N.$$
Write down the transition matrix $P$ when $N = 3$.
4. Show, from Question 2, that
$$P(T_0 < T_N \mid X_0 = k) = \frac{N-k}{N}, \qquad k = 0, 1, \ldots, N.$$
5. Let $T_{0,N} = \inf\{n \geq 0 : X_n = 0 \text{ or } X_n = N\}$, and
$$h(k) = \mathrm{IE}[T_{0,N} \mid X_0 = k], \qquad k = 0, 1, \ldots, N.$$


What are the values of $h(0)$ and $h(N)$?
6. Show, using first step analysis, that the function $h$ satisfies the relation
$$h(k) = 1 + \sum_{l=0}^{N} P_{k,l}\, h(l), \qquad 1 \leq k \leq N-1.$$
7. Assuming that $N = 3$, compute $h(k) = \mathrm{IE}[T_{0,3} \mid X_0 = k]$, $k = 0, 1, 2, 3$.

Exercise 5.4. (Problem III.5.4 in [4]). Martha has a fair die with the usual six sides. She throws the die and records the number. She throws the die again and adds the second number to the first. She repeats until the cumulative sum of tosses first exceeds 10. What is the probability that she stops at a cumulative sum of 13?

Exercise 5.5. A fish is put into the linear maze as shown, and its state at time $n$ is denoted by $X_n \in \{0, 1, \ldots, 5\}$:

[Figure: linear maze with states 0, 1, 2, 3, 4, 5, where state 0 gives a shock and state 5 contains food.]

Starting from any state $k \in \{1, 2, 3, 4\}$, the fish moves to the right with probability $p$ and to the left with probability $q$, $p + q = 1$, $p \in (0, 1)$. Let
$$T_0 = \inf\{n \geq 0 : X_n = 0\} \quad \text{and} \quad T_5 = \inf\{n \geq 0 : X_n = 5\},$$
and $g(k) = P(T_5 < T_0 \mid X_0 = k)$, $k = 0, 1, \ldots, 5$.
1. Using first step analysis, write down the equation satisfied by $g(k)$, $k = 0, 1, \ldots, 5$, and give the values of $g(0)$ and $g(5)$.
2. Assume that the fish is equally likely to move right or left at each step. Compute the probability that starting from state $k$ it finds the food before getting shocked, for $k = 0, 1, \ldots, 5$.

Exercise 5.6. A zero-seeking device operates as follows: starting from a state $m \geq 1$ at time $k$, its next position at time $k+1$ is uniformly distributed over the states $0, 1, \ldots, m-1$.


1. Model the state of this device using a Markov chain and give its transition matrix.
2. Let $h_0(m)$ denote the expected time until the device first hits zero from state $m$. Using first step analysis, write down the equation satisfied by $h_0(m)$, $m \geq 1$, and give the values of $h_0(0)$ and $h_0(1)$.
3. Show that $h_0(m)$ satisfies
$$h_0(m) = h_0(m-1) + \frac{1}{m}, \qquad m \geq 1,$$
and that
$$h_0(m) = \sum_{k=1}^{m} \frac{1}{k}, \qquad \text{for all } m \in \mathbb{N}.$$

Exercise 5.7. An individual is placed in a castle tower having three exits. Exit 1 leads to a tunnel that returns to the tower after three days of walk. Exit 2 leads to a tunnel that returns to the tower after one day of walk. Exit 3 leads to the outside. Since the inside of the tower is dark, each exit is chosen at random with probability 1/3. The individual decides to remain outside after exiting the tower.
1. Show that this problem can be modeled using a Markov chain $(X_n)_{n\in\mathbb{N}}$ with three transient states and one absorbing state. Draw the graph of the chain $(X_n)_{n\in\mathbb{N}}$.
2. Write down the transition matrix of the chain $(X_n)_{n\in\mathbb{N}}$.
3. Starting from inside the tower, find the average time it takes to exit the tower.

Exercise 5.8. A rat is trapped in a maze. Initially it has to choose one of two directions. If it goes to the right, then it will wander around in the maze for three minutes and will then return to its initial position. If it goes to the left, then with probability 1/3 it will depart the maze after two minutes of travelling, and with probability 2/3 it will return to its initial position after five minutes of travelling. Assuming that the rat is at all times equally likely to go to the left or to the right, what is the expected number of minutes that it will be trapped in the maze?

Exercise 5.9. (Exercise III.4.3 in [4]). Consider the Markov chain whose transition probability matrix is given by
$$\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0.1 & 0.6 & 0.1 & 0.2 \\ 0.2 & 0.3 & 0.4 & 0.1 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$


1. Starting in state 1, determine the probability that the Markov chain ends in state 0.
2. Determine the mean time to absorption.

Exercise 5.10. (Problem III.4.7 in [4]). Let $(X_n)_{n \geq 0}$ be a Markov chain with transition probabilities $P_{ij}$. We are given a “discount factor” $\beta$ with $0 < \beta < 1$ and a cost function $c(i)$, and we wish to determine the total expected discounted cost starting from state $i$, defined by
$$h_i := E\left[\left.\sum_{n=0}^{\infty} \beta^n c(X_n)\,\right|\, X_0 = i\right].$$
Using a first step analysis, show that $h_i$ satisfies the system of linear equations
$$h_i = c(i) + \beta \sum_{j} P_{ij}\, h_j$$
for all states $i$.



Chapter 6

Classification of States

6.1 Communicating states

A state $j \in S$ is said to be accessible from $i \in S$, and we write “$i \mapsto j$”, if there exists a finite integer $n \geq 0$ such that
$$P_{i,j}^n = P(X_n = j \mid X_0 = i) > 0.$$
In other words, it is possible to travel from $i$ to $j$ with non-zero probability in some finite number of steps. In addition, since $P^0 = \mathrm{Id}$, the definition of accessibility states that $i$ is accessible from $i$, for every state $i \in S$, even if $P_{i,i} = 0$. In case $i \mapsto j$ and $j \mapsto i$ we say that $i$ and $j$ communicate, and we write $i \longleftrightarrow j$. The binary relation “$\longleftrightarrow$” satisfies the following properties:
1. Reflexivity: For all $i \in S$ we have $i \longleftrightarrow i$.
2. Symmetry: For all $i, j \in S$, $i \longleftrightarrow j$ is equivalent to $j \longleftrightarrow i$.
3. Transitivity: For all $i, j, k \in S$ such that $i \longleftrightarrow j$ and $j \longleftrightarrow k$, we have $i \longleftrightarrow k$.


Therefore the relation “$\longleftrightarrow$” is an equivalence relation¹ and it induces a partition of $S$ into disjoint subsets $A_1, \ldots, A_M$ such that $S = A_1 \cup \cdots \cup A_M$ and
1. we have $i \longleftrightarrow j$ for all $i, j \in A_q$, and
2. we have $i \not\longleftrightarrow j$ whenever $i \in A_p$ and $j \in A_q$ with $p \neq q$.
The sets $A_1, \ldots, A_M$ are called the communicating classes of the chain.

6.2 Communicating class

Definition 1. A Markov chain whose state space has a unique equivalence class is said to be irreducible, otherwise it is said to be reducible.

Clearly, all states in $S$ communicate when $(X_n)_{n\in\mathbb{N}}$ is irreducible.

Exercise: Given a Markov transition matrix, find its equivalence classes for the relation $\longleftrightarrow$.

[Graph of the chain on $\{0, 1, 2, 3\}$ with transition matrix]
$$P = \begin{pmatrix} 0 & 0.2 & 0.8 & 0 \\ 0.3 & 0.1 & 0 & 0.6 \\ 0.5 & 0 & 0 & 0.5 \\ 0 & 0 & 0 & 1 \end{pmatrix} \qquad (6.1)$$

The above state space has two communicating classes $\{0, 1, 2\}$ and $\{3\}$.

¹ Please refer to MAS111 - Foundations of Mathematics for more information on equivalence classes.


6.3 Recurrent states

A state $i \in S$ is said to be recurrent if, starting from state $i$, the chain will return to $i$ within a finite (random) time, with probability 1. In other words, $i \in S$ is recurrent if
$$p_{ii} := P(\tau_i < \infty \mid X_0 = i) = P(X_n = i \text{ for some } n \geq 1 \mid X_0 = i) = 1. \qquad (6.2)$$
We find from (5.14) and (6.2) that $i$ is recurrent if and only if
$$\mathrm{IE}[R_i \mid X_0 = i] = \infty. \qquad (6.3)$$
By the Markov property it follows that the state $i \in S$ is recurrent if and only if
$$P(R_i = +\infty \mid X_0 = i) = 1.$$

6.4 Transient states

A state $i \in S$ is said to be transient when it is not recurrent, i.e.
$$P(R_i = +\infty \mid X_0 = i) < 1,$$
which is equivalent to $P(R_i = +\infty \mid X_0 = i) = 0$ by (5.12), or to $P(R_i < \infty \mid X_0 = i) > 0$, which is equivalent to $P(R_i < \infty \mid X_0 = i) = 1$ by (5.13), i.e. the number of returns to state $i \in S$ can be finite with non-zero probability. In other words, $i \in S$ is transient if
$$p_{ii} = P(\tau_i < \infty \mid X_0 = i) = P(X_n = i \text{ for some } n \geq 1 \mid X_0 = i) < 1. \qquad (6.4)$$
Again, from (5.14) and (6.2), or directly from (6.3), a state $i$ is transient if and only if
$$\mathrm{IE}[R_i \mid X_0 = i] < \infty.$$


As a consequence we have the following result.

Theorem 3. A state $i \in S$ is recurrent if and only if
$$\sum_{n=1}^{\infty} P_{i,i}^n = +\infty.$$

Proof. We compute
$$\mathrm{IE}[R_j \mid X_0 = i] = \mathrm{IE}\left[\left.\sum_{n=1}^{\infty} 1_{\{X_n = j\}}\,\right|\, X_0 = i\right] = \sum_{n=1}^{\infty} \mathrm{IE}[1_{\{X_n = j\}} \mid X_0 = i] = \sum_{n=1}^{\infty} P(X_n = j \mid X_0 = i) = \sum_{n=1}^{\infty} P_{i,j}^n, \qquad (6.5)$$
for $i = j$, and use (6.3). □

As a consequence, a state $i \in S$ is transient if and only if
$$\sum_{n=1}^{\infty} P_{i,i}^n < \infty.$$

Example: For the two-state Markov chain, (4.11) and (4.12) show that
$$\sum_{n=1}^{\infty} P_{0,0}^n = \sum_{n=1}^{\infty} \frac{b + a\lambda_2^n}{a+b} = \begin{cases} +\infty, & \text{if } b > 0, \\ \displaystyle\sum_{n=1}^{\infty} (1-a)^n < \infty, & \text{if } b = 0 \text{ and } a > 0, \end{cases}$$
hence the state 0 is transient if $b = 0$ and $a > 0$, and recurrent otherwise. Similarly we have
$$\sum_{n=1}^{\infty} P_{1,1}^n = \sum_{n=1}^{\infty} \frac{a + b\lambda_2^n}{a+b} = \begin{cases} +\infty, & \text{if } a > 0, \\ \displaystyle\sum_{n=1}^{\infty} (1-b)^n < \infty, & \text{if } a = 0 \text{ and } b > 0, \end{cases}$$

hence the state 1 is transient if a = 0 and b > 0, and recurrent otherwise.


Corollary 1. Let $i \in S$ be a recurrent state. Then any state $j \in S$ that communicates with $i$ is recurrent.

Proof. Since $i \mapsto j$ and $j \mapsto i$ there exist $a \geq 1$ and $b \geq 1$ such that
$$P_{i,j}^a > 0, \qquad P_{j,i}^b > 0.$$
Next, for all $n \geq a + b$ we have
$$\begin{aligned}
P(X_n = j \mid X_0 = j) &= \sum_{l,m \in S} P(X_n = j \mid X_{n-a} = l)\, P(X_{n-a} = l \mid X_b = m)\, P(X_b = m \mid X_0 = j) \\
&\geq P(X_n = j \mid X_{n-a} = i)\, P(X_{n-a} = i \mid X_b = i)\, P(X_b = i \mid X_0 = j) \\
&= P_{i,j}^a\, P_{i,i}^{n-a-b}\, P_{j,i}^b,
\end{aligned}$$
hence
$$\sum_{n=a+b}^{\infty} P_{j,j}^n = \sum_{n=a+b}^{\infty} P(X_n = j \mid X_0 = j) \geq P_{i,j}^a\, P_{j,i}^b \sum_{n=a+b}^{\infty} P_{i,i}^{n-a-b} = P_{i,j}^a\, P_{j,i}^b \sum_{n=0}^{\infty} P_{i,i}^n = +\infty,$$
which shows that $j$ is recurrent from Theorem 3. □

As a consequence of Corollary 1, if a state j ∈ S communicates with a transient state i then j is also transient (otherwise the state i would be recurrent by Corollary 1). Exercise: Find which states are transient and recurrent in the chain (6.1). State 3 is clearly recurrent since τ3 = 1 with probability one when X0 = 3. State 2 is transient because P(τ2 = +∞ | X0 = 2) ≥ P(X1 = 3 | X0 = 2) = 0.5 > 0. By Corollary 1 states 0 and 1 are transient because they communicate with state 2.


Exercise: Which are the recurrent states in the simple random walk $(S_n)_{n\in\mathbb{N}}$ on $S = \mathbb{Z}$?

First we note that this random walk is irreducible, as all states communicate when $p \in (0, 1)$. The simple random walk $(S_n)_{n\in\mathbb{N}}$ on $S = \mathbb{Z}$ has the transition matrix
$$P_{i,i+1} = p, \qquad P_{i,i-1} = q = 1 - p, \qquad i \in \mathbb{Z}.$$
We have
$$P_{i,i}^n = P(S_n = i \mid S_0 = i) = P(S_n = 0 \mid S_0 = 0),$$
with
$$P(S_{2n} = 0) = \binom{2n}{n} p^n q^n \quad \text{and} \quad P(S_{2n+1} = 0) = 0, \qquad n \in \mathbb{N}.$$
Hence
$$\sum_{n=1}^{\infty} P_{i,i}^n = \sum_{n=1}^{\infty} P(S_n = 0 \mid S_0 = 0) = \sum_{n=1}^{\infty} P(S_{2n} = 0 \mid S_0 = 0) = \sum_{n=1}^{\infty} \binom{2n}{n} p^n q^n = H(1) - 1 = \frac{1}{\sqrt{1-4pq}} - 1,$$
where $H(s)$ is defined in (3.9). Consequently, by Theorem 3, a given state $i \in \mathbb{Z}$ is recurrent if and only if $p = q = 1/2$. Alternatively, we can reach the same conclusion by directly using (3.11) and (6.2).

Theorem 4. Let $(X_n)_{n\in\mathbb{N}}$ be a Markov chain with finite state space $S$. Then $(X_n)_{n\in\mathbb{N}}$ has at least one recurrent state.

Proof. For any $i \in S$ and any transient state $j \in S$, from (5.11) and (6.4) we have, by (5.14),
$$\mathrm{IE}(R_j \mid X_0 = i) = p_{ij}(1 - p_{jj}) \sum_{n=1}^{\infty} n\, (p_{jj})^{n-1} = \frac{p_{ij}}{1 - p_{jj}},$$
which is finite since $p_{jj} < 1$, hence by (6.5),
$$\mathrm{IE}(R_j \mid X_0 = i) = \sum_{n=1}^{\infty} P_{i,j}^n < \infty,$$
which implies²
$$\lim_{n\to\infty} P_{i,j}^n = 0$$
for all transient states $j \in S$. In case all states in $S$ were transient, since $S$ is finite we would have
$$0 = \sum_{j \in S} \lim_{n\to\infty} P_{i,j}^n = \lim_{n\to\infty} \sum_{j \in S} P_{i,j}^n = \lim_{n\to\infty} 1 = 1,$$
which is a contradiction. Hence not all states can be transient, and there exists at least one recurrent state. □

6.5 Positive and null recurrence

The expected time of return to a state $i \in S$ (also called the mean recurrence time of $i$) is given by
$$\mu_i(i) := \mathrm{IE}[\tau_i \mid X_0 = i] = \sum_{n=1}^{\infty} n\, P(\tau_i = n \mid X_0 = i) = \sum_{n=1}^{\infty} n f_{ii}^{(n)}.$$
A recurrent state $i \in S$ is said to be positive recurrent if
$$\mu_i(i) = \mathrm{IE}[\tau_i \mid X_0 = i] < \infty,$$
and null recurrent if

² For any sequence $(a_n)_{n \geq 0}$ of non-negative numbers, $\sum_{n=0}^{\infty} a_n < \infty$ implies $\lim_{n\to\infty} a_n = 0$.


$$\mu_i(i) = \mathrm{IE}[\tau_i \mid X_0 = i] = +\infty.$$

Exercise: Which states are positive/null recurrent in the simple random walk $(S_n)_{n\in\mathbb{N}}$ on $S = \mathbb{Z}$?

From (3.13) and (3.14) we know that $\mathrm{IE}[\tau_i \mid S_0 = i] = +\infty$ for all values of $p \in (0, 1)$, hence all states of the random walk on $\mathbb{Z}$ are null recurrent when $p = 1/2$, while all states are transient when $p \neq 1/2$ due to (3.11).

Theorem 5. Let $(X_n)_{n\in\mathbb{N}}$ be a Markov chain with finite state space $S$. Then all recurrent states of $(X_n)_{n\in\mathbb{N}}$ are positive recurrent.

In particular, Theorem 5 shows that a Markov chain with finite state space cannot have any null recurrent state. As a consequence of Definition 1, Corollary 1, and Theorems 4 and 5 we have the following corollary.

Corollary 2. Let $(X_n)_{n\in\mathbb{N}}$ be an irreducible Markov chain with finite state space $S$. Then all states of $(X_n)_{n\in\mathbb{N}}$ are positive recurrent.
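The recurrence/transience dichotomy of the random walk can be illustrated numerically: the partial sums of $\sum_n \binom{2n}{n} p^n q^n$ stabilize when $p \neq 1/2$ but keep growing when $p = 1/2$. The sketch below is a rough finite-horizon illustration, not a proof.

```python
from math import sqrt

def partial_sum(p, N):
    """Partial sums of sum_n C(2n,n) (pq)^n, computed iteratively via
    the term ratio C(2n,n)/C(2n-2,n-1) = (2n)(2n-1)/n^2 to avoid
    huge binomial coefficients."""
    q = 1 - p
    s, t = 0.0, 1.0
    for n in range(1, N + 1):
        t *= (2 * n) * (2 * n - 1) / (n * n) * p * q
        s += t
    return s

# For p != 1/2 the series converges to 1/sqrt(1 - 4pq) - 1 (transience)
p = 0.6
assert abs(partial_sum(p, 2000) - (1 / sqrt(1 - 4 * p * (1 - p)) - 1)) < 1e-6

# For p = 1/2 the partial sums keep growing (the terms decay like n^{-1/2})
assert partial_sum(0.5, 4000) > partial_sum(0.5, 1000) + 1
```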

6.6 Periodicity and aperiodicity

Given a state i ∈ S, consider the set of integers

    {n ≥ 1 : P^n_{i,i} > 0}.

The period of the state i ∈ S is the greatest common divisor³ of {n ≥ 1 : P^n_{i,i} > 0}. A state with period 1 is called aperiodic, which is the case in particular if Pi,i > 0. If P^n_{i,i} = 0 for all n ≥ 1 then the set {n ≥ 1 : P^n_{i,i} > 0} is empty and the period of i is defined to be 0.

³ Please refer to MAS111 - Foundations of Mathematics or MAS214 - Basic Discrete Mathematics and Number Theory for more information on greatest common divisors.

Note also that if {n ≥ 1 : P^n_{i,i} > 0}

contains two distinct numbers that are relatively prime to each other (i.e. their greatest common divisor is 1) then state i is aperiodic.

A Markov chain is said to be aperiodic when all states are aperiodic. In particular, an absorbing state is both aperiodic and recurrent.

Examples:

1. Consider the following Markov chain:

[Figure: transition graph of a four-state chain on {0, 1, 2, 3}.]

Here we have

    {n ≥ 1 : P^n_{0,0} > 0} = {2, 4, 6, 8, 10, . . .},
    {n ≥ 1 : P^n_{1,1} > 0} = {2, 4, 6, 8, 10, . . .},
    {n ≥ 1 : P^n_{2,2} > 0} = {4, 6, 8, 10, 12, . . .},
    {n ≥ 1 : P^n_{3,3} > 0} = {4, 6, 8, 10, 12, . . .},

hence all states have period 2.

2. Next, consider the modified chain:

[Figure: the same four-state chain with modified transition probabilities.]

Here the chain is aperiodic since we have

    {n ≥ 1 : P^n_{0,0} > 0} = {2, 3, 4, 5, 6, 7, . . .},
    {n ≥ 1 : P^n_{1,1} > 0} = {2, 3, 4, 5, 6, 7, . . .},
    {n ≥ 1 : P^n_{2,2} > 0} = {3, 4, 5, 6, 7, 8, . . .},
    {n ≥ 1 : P^n_{3,3} > 0} = {4, 6, 7, 8, 9, 10, . . .},

hence all states have period 1.

Exercise: What is the periodicity of the simple random walk (Sn)n∈N on S = Z ?

By (3.1) we have

    P^{2n}_{i,i} = (2n choose n) p^n q^n > 0 and P^{2n+1}_{i,i} = 0,   n ∈ N,

hence

    {n ≥ 1 : P^n_{i,i} > 0} = {2, 4, 6, 8, . . .},

and the chain has period 2.

A recurrent state i ∈ S is said to be ergodic if it is positive recurrent and aperiodic.

Exercise: Find the periodicity of the chain (6.1).

States 0, 1, 2 and 3 have period 1, hence the chain is aperiodic. The chain below is also aperiodic since it is irreducible and state 3 has a returning loop.

[Figure: transition graph of an irreducible five-state chain with a returning loop at state 3.]
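The period can be computed mechanically from powers of P: accumulate the gcd of the return times observed up to some horizon. A small sketch (the horizon n_max is a practical truncation, not part of the definition, and the example chains are illustrative):

```python
from math import gcd

def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def periods(P, n_max=50):
    """Period of each state, computed as gcd{n <= n_max : P^n[i][i] > 0};
    gcd(0, k) = k starts the accumulation, and 0 is returned when no
    return is possible within n_max steps."""
    n = len(P)
    out = [0] * n
    Q = P
    for step in range(1, n_max + 1):
        for i in range(n):
            if Q[i][i] > 1e-12:
                out[i] = gcd(out[i], step)
        Q = matmul(Q, P)
    return out

# The two-state chain that flips state at every step has period 2 (cf. Chapter 7),
# while adding a returning loop at state 0 makes both states aperiodic.
print(periods([[0.0, 1.0], [1.0, 0.0]]))  # [2, 2]
print(periods([[0.5, 0.5], [1.0, 0.0]]))  # [1, 1]
```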

6.7 Exercises

Exercise 6.1. Consider a Markov chain (Xn)n≥0 on the state space {0, 1, 2, 3}, with transition matrix

    | 1/3 1/3 1/3  0  |
    |  0   0   0   1  |
    |  0   1   0   0  |
    |  0   0   1   0  |

1. Draw the graph of this chain and find its communicating classes.
2. Find the period of each state 0, 1, 2, 3.
3. Which state(s) is (are) absorbing, recurrent, and transient ?
4. Is the Markov chain reducible ? Why ?

Exercise 6.2. Consider a Markov chain (Xn)n≥0 on the state space {0, 1, 2, 3, 4}, with transition matrix

    |  0  1/4 1/4 1/4 1/4 |
    |  1   0   0   0   0  |
    |  0   1   0   0   0  |
    |  0   0   1   0   0  |
    |  0   0   0   0   1  |

1. Draw the graph of this chain.
2. Find the periods of states 0, 1, 2, 3.
3. Which state(s) is (are) absorbing, recurrent, and transient ?
4. Is the Markov chain reducible ? Why ?


Exercise 6.3. (Exercise IV.3.2 in [4]). Find the communicating classes, the transient states, the recurrent states, and the period of each state for the Markov chain with transition probability matrix

    [ Pi,j ]0≤i,j≤5 =
    | 1/2  0  1/4  0   0  1/4 |
    | 1/3 1/3 1/3  0   0   0  |
    |  0   0   0   0   1   0  |
    | 1/6 1/2 1/6  0   0  1/6 |
    |  0   0   1   0   0   0  |
    |  0   0   0   0   0   1  |

Exercise 6.4. Consider the Markov chain with transition matrix

    | 0.8  0  0.2  0  |
    |  0   0   1   0  |
    |  1   0   0   0  |
    | 0.3 0.4  0  0.3 |

1. Is the chain irreducible ? If not, give its communicating classes.
2. Find the period of each state. Which states are absorbing, transient, recurrent, positive recurrent ?



Chapter 7

Limiting and Stationary Distributions

7.1 Limiting Distributions

A Markov chain (Xn)n∈N is said to have a limiting distribution if the limits

    lim_{n→∞} P(Xn = j | X0 = i)

exist for all i, j ∈ S. When the transition matrix P is regular (i.e. it has a power matrix whose coefficients are all non-zero) the chain admits a limiting distribution π = (πi)i∈S given by

    πj = lim_{n→∞} P(Xn = j | X0 = i),   i, j ∈ S.   (7.1)

For example, as noted in (4.13) and (4.14) above, the two-state Markov chain has a limiting distribution given by

    (π0, π1) = ( b/(a+b), a/(a+b) ),   (7.2)

while the corresponding mean return times are given from (5.8) by

    (µ0(0), µ1(1)) = ( (a+b)/b, (a+b)/a ),

i.e. the limiting probabilities are given by the inverses of the mean return times. This fact is not a simple coincidence: it is a consequence of the following more general result, which shows that the longer it takes on average to return to a state, the smaller the probability is to find the chain in that state.

Theorem 6. (Theorem IV.4.1 in [4]) Consider a recurrent aperiodic irreducible Markov chain (Xn)n∈N. We have

    lim_{n→∞} P(Xn = i | X0 = j) = 1/µi(i),   i, j ∈ S,

where µi(i) = IE[τi | X0 = i] ∈ [1, ∞] is the mean recurrence time of i ∈ S. As noted above, this result is consistent with (4.13), (4.14), and (7.2) in the case of the two-state Markov chain.
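Theorem 6 can be checked numerically on the two-state chain by raising P to a high power; every row of P^n converges to the limiting distribution (7.2), whose entries are the inverses of the mean return times (the values of a and b below are illustrative):

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Two-state chain: P(0 -> 1) = a, P(1 -> 0) = b.
a, b = 0.2, 0.4
P = [[1 - a, a], [b, 1 - b]]

# Iterate the matrix power: every row of P^n tends to (b/(a+b), a/(a+b)).
Q = P
for _ in range(200):
    Q = matmul(Q, P)

print(Q[0])                        # close to [2/3, 1/3]
print([(a + b) / b, (a + b) / a])  # mean return times: the inverses 1/pi_0, 1/pi_1
```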

7.2 Stationary Distributions

A probability distribution on S, i.e. a family π = (πi)i∈S in [0, 1] such that

    Σ_{i∈S} πi = 1,

is said to be stationary if, starting X0 at time 0 with the distribution (πi)i∈S, it turns out that the distribution of X1 is still (πi)i∈S at time 1. In other words, (πi)i∈S is stationary for the Markov chain with transition matrix P if letting

    P(X0 = i) = πi,   i ∈ S,

implies

    P(X1 = i) = πi,   i ∈ S.

This also means that

    πj = P(X1 = j) = Σ_{i∈S} P(X1 = j | X0 = i) P(X0 = i) = Σ_{i∈S} πi Pi,j,   j ∈ S,

i.e.

    π = πP,   (7.3)

and the distribution π is also said to be invariant. More generally, since the process (Xn)n∈N is time homogeneous we have

    P(Xn+1 = j) = Σ_{i∈S} P(Xn+1 = j | Xn = i) P(Xn = i) = Σ_{i∈S} Pi,j P(Xn = i),   j ∈ S,

and by induction on n this yields

    πj = P(Xk = j),   j ∈ S,

for all k ≥ 1. Note that in contrast with (5.3), the multiplication by P in (7.3) is on the right hand side and not on the left.

Proposition 1. Assume that S = {0, 1, . . . , N} is finite and that the limiting distribution (7.1)

    πj := lim_{n→∞} P(Xn = j | X0 = i),   i, j ∈ S,

exists and is independent of i ∈ S, i.e. we have

    lim_{n→∞} P^n = | π0 · · · πN |
                    |  ⋮   ⋱   ⋮ |
                    | π0 · · · πN |

Then π = (πj)j∈{0,1,...,N} is a stationary distribution and we have

    π = πP,   (7.4)

i.e. π is invariant by P.

Proof. We have

    πj := lim_{n→∞} P(Xn = j | X0 = i)
        = lim_{n→∞} P(Xn+1 = j | X0 = i)
        = lim_{n→∞} Σ_{l∈S} P(Xn+1 = j | Xn = l) P(Xn = l | X0 = i)
        = Σ_{l∈S} Pl,j lim_{n→∞} P(Xn = l | X0 = i)
        = Σ_{l∈S} πl Pl,j,   i, j ∈ S,
where we exchanged limit and summation because the state space S is assumed to be finite, which shows that (7.4) holds. □

For example the limiting distribution (7.2) of the two-state Markov chain is also an invariant distribution, i.e. it satisfies (7.3). In particular we have the following result.

Theorem 7. (Theorem IV.4.2 in [4]) Assume that the Markov chain (Xn)n∈N is positive recurrent, aperiodic, and irreducible. Then the limiting probabilities

    πi := lim_{n→∞} P(Xn = i | X0 = j) = 1/µi(i),   i, j ∈ S,

form a stationary distribution which is uniquely determined by the equation π = πP.

In view of Theorem 5 we have the following corollary of Theorem 7:

Corollary 3. Consider an irreducible aperiodic Markov chain with finite state space. Then the limiting probabilities

    πi = lim_{n→∞} P(Xn = i | X0 = j) = 1/µi(i),   i, j ∈ S,

form a stationary distribution which is uniquely determined by the equation π = πP.

The irreducibility of the chain is needed in Theorems 6, 7, and clearly a reducible chain may not have a limiting distribution independent of the initial state, as in the following example.

[Figure: reducible three-state chain in which states 0 and 2 are absorbing and state 1 moves to each of them with probability 0.5.]

The results of Theorems 6, 7 and Corollary 3 require the aperiodicity of the Markov chain and they ensure the existence of the limiting distribution. However, the limiting distribution may not exist when the chain is not aperiodic. For example, the two-state Markov chain with transition matrix

    P = | 0 1 |
        | 1 0 |

is not aperiodic (both states have period 2) and it has no limiting distribution. The chain does have an invariant distribution π solution of π = πP, given by π = (π0, π1) = (1/2, 1/2).

The following theorem gives sufficient conditions for the existence of a stationary distribution, without requiring aperiodicity or finiteness of the state space. As noted above, the limiting distribution may not exist in this case.

Theorem 8. ([1]) Consider a positive recurrent irreducible Markov chain (Xn)n∈N. Then the probabilities

    πi = 1/µi(i),   i ∈ S,

form a stationary distribution which is uniquely determined by the equation π = πP.

As a consequence of Corollary 2 we have the following corollary of Theorem 8, which does not require aperiodicity.

Corollary 4. ([1]) Let (Xn)n∈N be an irreducible Markov chain with finite state space S. Then the probabilities

    πi = 1/µi(i),   i ∈ S,

form a stationary distribution which is uniquely determined by the equation π = πP.

Note that the stationary distribution of a Markov chain may not exist, for example in the case of the random walk (Sn)n∈N, to which Theorems 7, 8 and Corollaries 3, 4 do not apply because this process is not positive recurrent (it is null recurrent when p = 1/2 and transient when p ≠ 1/2).

Under the assumptions of Theorem 7, if the stationary and limiting distributions both exist then they are equal, and in this case we only compute one of them. The main problem is that, in some situations, only the stationary distribution might exist. According to Corollary 4 above, the stationary distribution always exists when the chain is irreducible with finite state space. However, the limiting distribution may not exist if the chain is not aperiodic; consider for example the two-state Markov chain with a = b = 1.
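In practice the stationary distribution of a finite chain is found by solving the linear system π = πP together with Σi πi = 1. A minimal sketch (plain Gaussian elimination, assuming the chain is irreducible so that the solution is unique):

```python
def stationary(P):
    """Solve pi = pi P with sum(pi) = 1 by Gaussian elimination.

    Equations: (pi P)_j - pi_j = 0 for j = 0..n-2, plus normalization;
    assumes a unique stationary distribution (irreducible chain)."""
    n = len(P)
    A = [[P[i][j] - (1.0 if i == j else 0.0) for i in range(n)]
         for j in range(n - 1)]
    A.append([1.0] * n)               # normalization row: sum_i pi_i = 1
    b = [0.0] * (n - 1) + [1.0]
    for col in range(n):              # elimination with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n                     # back substitution
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

# The periodic two-state chain above: stationary but no limiting distribution.
print(stationary([[0.0, 1.0], [1.0, 0.0]]))  # [0.5, 0.5]
```

Note that the solver does not require aperiodicity: it finds the invariant distribution of the periodic flip chain even though that chain has no limiting distribution.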


According to Corollary 3, the limiting distribution and stationary distribution both exist (and coincide) when the chain is irreducible aperiodic with finite state space.

When the chain is irreducible it is usually easier to just compute the stationary distribution, and it gives us the long run probability (which is also the limiting distribution if it exists, for example when the chain is also aperiodic). In case the chain is not irreducible we need to try to split it into subchains and consider the subproblems separately.

In conclusion, usually we try first to compute the stationary distribution whenever possible, and this also gives the limiting distribution when it exists. Otherwise we may try to compute the limiting distribution by exponentiating the transition matrix and taking the limit, but this is normally much more complicated and done only in exceptional cases.

To summarize, we note that by Theorem 6 we have

    irreducible + recurrent + aperiodic =⇒ existence of limiting distribution,

and by Theorem 8,

    irreducible + positive recurrent =⇒ existence of stationary distribution.

In addition the limiting or stationary distribution is given by

    πi = 1/µi(i),   i ∈ S,

in both cases.

Example: Concerning the stationary distribution π of the maze random walk (5.9), the equation π = πP yields

    π1 = (1/2) π2
    π2 = π1 + (1/2) π3
    π3 = (1/2) π2 + (1/2) π6
    π4 = (1/2) π7
    π5 = (1/2) π6 + (1/3) π8
    π6 = (1/2) π3 + (1/2) π5
    π7 = π4 + (1/3) π8
    π8 = (1/2) π5 + (1/2) π7 + π9
    π9 = (1/3) π8,

hence

    π1 = (1/2) π2,   π2 = π3 = π6,   π4 = (1/2) π3,   (1/2) π7 = (1/3) π8,
    π6 = π5,   π6 = π7,   π9 = (1/3) π8,

and

    1 = π1 + π2 + π3 + π4 + π5 + π6 + π7 + π8 + π9
      = π1 + 2π1 + 2π1 + π1 + 2π1 + 2π1 + 2π1 + 3π1 + π1
      = 16 π1,

hence

    π1 = 2/32, π2 = 4/32, π3 = 4/32, π4 = 2/32, π5 = 4/32, π6 = 4/32, π7 = 4/32, π8 = 6/32, π9 = 2/32,

and we check that we indeed have

    1/16 = π1 = 1/µ1(1) = 1/16,

according to (5.10) and Corollary 4.

In the table below we summarize the definitions introduced in this chapter and in Chapter 6.

    Property            Definition
    absorbing           Pi,i = 1
    recurrent           P(τi < ∞ | X0 = i) = 1
    transient           P(τi < ∞ | X0 = i) < 1
    positive recurrent  recurrent and IE[τi | X0 = i] < ∞
    null recurrent      recurrent and IE[τi | X0 = i] = +∞
    aperiodic           period = 1
    ergodic             positive recurrent and aperiodic
    irreducible         all states communicate
    regular             all coefficients of P^n are > 0 for some n ≥ 1
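The maze computation can be verified exactly with rational arithmetic, assuming the room adjacency implied by the system of equations above (the walker moves to each neighbouring room with equal probability):

```python
from fractions import Fraction

# Adjacency read off from the maze equations above (assumed, since the
# figure for (5.9) appears in an earlier chapter).
neighbours = {1: [2], 2: [1, 3], 3: [2, 6], 4: [7], 5: [6, 8],
              6: [3, 5], 7: [4, 8], 8: [5, 7, 9], 9: [8]}

deg = {i: len(v) for i, v in neighbours.items()}
total = sum(deg.values())                       # = 16
pi = {i: Fraction(deg[i], total) for i in deg}  # candidate: pi_i proportional to deg(i)

# Check pi = pi P exactly: pi_j = sum over neighbours i of j of pi_i / deg(i).
for j in neighbours:
    assert pi[j] == sum(pi[i] / deg[i] for i in neighbours if j in neighbours[i])

print(pi[1], pi[8])  # 1/16 and 3/16, matching 2/32 and 6/32 above
```

This also illustrates a general fact: for a random walk on a graph the stationary distribution is proportional to the vertex degrees, which is exactly what the check above confirms.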

7.3 Exercises

Exercise 7.1. A signal processor is analysing a sequence of signals that can be either distorted or non-distorted. It turns out that on average, 1 out of 4 signals following a distorted signal are distorted, while 3 out of 4 signals following a non-distorted signal are non-distorted.

1. Let Xn ∈ {D, N} denote the state of the n-th signal being analysed by the processor. Show that (Xn)n≥1 is a Markov chain and determine its transition matrix.
2. Compute the stationary distribution of (Xn)n≥1.
3. In the long run, what fraction of analysed signals are distorted ?

4. Given that the last observed signal was distorted, how long does it take on average until the next non-distorted signal ?
5. Given that the last observed signal was non-distorted, how long does it take on average until the next distorted signal ?

Exercise 7.2. (Exercise IV.4.2 in [4]). Consider the Markov chain whose transition probability matrix is given by

    P = [ Pi,j ]0≤i,j≤3 =
    |  0   1   0   0  |
    | 0.1 0.4 0.2 0.3 |
    | 0.2 0.2 0.5 0.1 |
    | 0.3 0.3 0.4  0  |

1. Determine the limiting probability π0 that the process is in state 0.
2. By pretending that state 0 is absorbing, use a first step analysis to calculate the mean time µ0(1) for the process to go from state 1 to state 0.
3. Show that the relation π0 = 1/µ0(0) is satisfied.

Exercise 7.3. (Exercise IV.4.3 in [4]). Determine the stationary distribution for the periodic Markov chain whose transition probability matrix is

    [ Pi,j ]0≤i,j≤3 =
    |  0  1/2  0  1/2 |
    | 1/4  0  3/4  0  |
    |  0  1/3  0  2/3 |
    | 1/2  0  1/2  0  |

Exercise 7.4. (Exercise IV.2.6 in [4]). A component of a computer has an active life, measured in discrete units, that is a random variable T where

    P(T = 1) = 0.1,   P(T = 2) = 0.2,   P(T = 3) = 0.3,   P(T = 4) = 0.4.

Suppose one starts with a fresh component, and each component is replaced by a new component upon failure. Determine the long run probability that a failure occurs in a given period.

Exercise 7.5. Suppose a Markov chain has the one-step transition probability matrix on the state space (A, B, C, D, E) as the following:

    P = | 0.6 0.4  0   0   0  |
        | 0.3 0.7  0   0   0  |
        | 0.2  0  0.4  0  0.4 |
        | 0.2 0.2 0.2 0.2 0.2 |
        |  0   0   0   0   1  |

Find lim_{n→∞} P(Xn = A | X0 = C).

Exercise 7.6. Three out of 4 trucks passing under a bridge are followed by a car, while only 1 out of every 5 cars passing under that same bridge is followed by a truck. Let Xn ∈ {C, T} denote the nature of the n-th vehicle passing under the bridge, n ≥ 1.

1. Show that (Xn)n≥1 is a Markov chain and write down its transition matrix.
2. Compute the stationary distribution of (Xn)n≥1.
3. In the long run, what fraction of vehicles passing under the bridge are trucks ?
4. Given that the last vehicle seen was a truck, how long does it take on average until the next truck is seen under that same bridge ?

Exercise 7.7. (Problem IV.2.6 in [4]). Consider a computer system that fails on a given day with probability p and remains “up” with probability q = 1 − p. Suppose the repair time is a random variable N having the geometric distribution

    P(N = k) = β(1 − β)^{k−1},   k ≥ 1,

where β ∈ (0, 1). Let Xn = 1 if the computer is operating on day n and Xn = 0 if not.

1. Show that (Xn)n∈N is a Markov chain and write down its transition matrix.
2. Determine the long run probability that the computer is operating, in terms of β and p.

Exercise 7.8. (Problem IV.1.5 in [4]). Four players A, B, C, D are connected by the following network, and play by exchanging one token.

[Figure: network connecting the four players A, B, C, D.]

At each step of the game, the player who holds the token chooses another player he is connected to, and sends the token to that player.

1. Assuming that player choices are made at random and are equally distributed, model the states of the token as a Markov chain and give its transition matrix.
2. Compute the stationary distribution (πA, πB, πC, πD) of (Xn)n≥1. Hint: To simplify the resolution, start by arguing that we have πA = πD.
3. In the long run, what is the probability that player D holds the token ?
4. On average, how long does player D have to wait to recover the token ?

Exercise 7.9. (Exercise IV.4.1 in [4]). Consider the Markov chain whose transition probability matrix is

    | q p 0 0 0 |
    | q 0 p 0 0 |
    | q 0 0 p 0 |
    | q 0 0 0 p |
    | 1 0 0 0 0 |

with p + q = 1, p ∈ (0, 1).

1. Determine the stationary distribution (π0, . . . , π4) of this Markov chain.
2. Determine its limiting distribution.

Exercise 7.10. (Example 6(e) page 98 of [5]) Consider a Markov chain with state space {1, 2, . . . , N}, which moves from the state i to the state i + 1 with probability p ∈ (0, 1), and to state i − 1 with probability q = 1 − p, for i = 2, . . . , N − 1. From states 1 and N the particle is reflected, i.e. from state 1 it moves to state 2 with probability p and remains at state 1 with probability q, and similarly from state N it moves to state N − 1 with probability q and remains at state N with probability p.

1. Write down the transition probability matrix of this chain.
2. Is the chain reducible ?
3. Which states are absorbing, transient, recurrent, positive recurrent ?
4. Compute the stationary distribution of this chain.
5. Compute the limiting distribution of this chain.

Exercise 7.11. (Exercise 2.13 page 131 of [5]) Consider the chain having the transition probability matrix

    | 1 0 0 |
    | 0 1 0 |
    | a b c |

with a + b + c = 1.

1. Compute P^n for all n ≥ 1.
2. Compute the limiting probability distribution of the chain if it exists.
3. Compute the stationary distribution of the chain if it exists.


Chapter 8

Branching Processes

8.1 Definition and example

Branching processes are used as a tool for the modeling of population growth of entities such as living beings, genes, or particles in nuclear physics, or for the spread of epidemics. Assuming that the population is made of a number Xn of individuals at time n, each of these individuals may have a random number of descendants. For each k = 1, . . . , Xn we let Yk denote the number of descendants of individual no. k. That is, we have X0 = 1, X1 = Y1, and at time n + 1, Xn+1 will be given by

    Xn+1 = Y1 + · · · + YXn,

where the (Yk)k≥1 form a sequence of independent, identically distributed, non-negative (and almost surely finite) random variables. As a consequence, a branching process is a Markov process with state space S = N and transition matrix [Pi,j]i,j∈N. In addition the state 0 is absorbing since

    P(Xn+1 = 0 | Xn = 0) = 1,   n ∈ N.


[Fig. 8.1: Example of branching process, with X0 = 1, X1 = 6, X2 = 12, X3 = 11.]

Figure 8.1 represents an example of branching process with X0 = 1 and successively Y1 = 6 and X1 = YX0 = Y1 = 6, then

    (Y1, Y2, Y3, Y4, Y5, Y6) = (0, 4, 1, 2, 2, 3)

and

    X2 = Y1 + · · · + YX1 = Y1 + Y2 + Y3 + Y4 + Y5 + Y6 = 0 + 4 + 1 + 2 + 2 + 3 = 12,

then

    (Y1, Y2, Y3, Y4, Y5, Y6, Y7, Y8, Y9, Y10, Y11, Y12) = (0, 2, 0, 0, 0, 4, 2, 0, 0, 2, 0, 1),

and

    X3 = Y1 + · · · + YX2 = Y1 + Y2 + Y3 + Y4 + Y5 + Y6 + Y7 + Y8 + Y9 + Y10 + Y11 + Y12
       = 0 + 2 + 0 + 0 + 0 + 4 + 2 + 0 + 0 + 2 + 0 + 1 = 11.

The next figure presents another sample tree for the path of a branching process.

[Fig. 8.2: Sample graph of a branching process.]

In general, and except if otherwise specified, we will assume that all branching processes start from X0 = 1. However in Figure 8.2 above we took X0 = 2, X1 = 3, X2 = 5, X3 = 9, X4 = 9, X5 = 9.

Let now

    G1(s) = IE[s^X1] = IE[s^Y1] = Σ_{k=0}^∞ s^k P(Y1 = k),   s ∈ [−1, 1],

denote the generating function of X1 = Y1, with G(1) = 1, G(0) = P(Y1 = 0), and

    µ := G′(1) = Σ_{k=0}^∞ k P(Y1 = k) = IE[Y1] = IE[X1 | X0 = 1].   (8.1)

Let now

    Gn(s) = IE[s^Xn | X0 = 1] = Σ_{k=0}^∞ s^k P(Xn = k | X0 = 1),   s ∈ [−1, 1],

denote the generating function of Xn, n ∈ N, with G0(s) = s, s ∈ [−1, 1].

Proposition 2. We have the recurrence relation

    Gn+1(s) = Gn(G1(s)),   s ∈ [−1, 1],   n ∈ N.


Proof. We have

    Gn+1(s) = IE[s^{Xn+1}] = IE[s^{Y1+···+YXn}]
            = Σ_{k=0}^∞ IE[s^{Y1+···+YXn} | Xn = k] P(Xn = k)
            = Σ_{k=0}^∞ IE[s^{Y1+···+Yk}] P(Xn = k)
            = Σ_{k=0}^∞ IE[s^{Y1}] · · · IE[s^{Yk}] P(Xn = k)
            = Σ_{k=0}^∞ (IE[s^{Y1}])^k P(Xn = k)
            = Σ_{k=0}^∞ (G(s))^k P(Xn = k)
            = Gn(G1(s)),   s ∈ [−1, 1]. □

We also have

    Gn(s) = G(Gn−1(s)) = Gn−1(G(s)),   s ∈ [−1, 1],   (8.2)

and

    Gn(s) = G(G(· · · (G(s)) · · · )),   s ∈ [−1, 1].

By (8.2) and the chain rule of derivation, the mean population size µn := IE[Xn | X0 = 1] at time n ≥ 0 satisfies

    µn = G′n(1) = (d/ds) G(Gn−1(s))|_{s=1} = G′n−1(1) G′(Gn−1(1)) = G′n−1(1) G′(1) = µ × µn−1,
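Proposition 2 can be sanity-checked numerically by comparing a Monte Carlo estimate of IE[s^{X2}] with the composition G(G(s)); the offspring law and the value of s below are chosen purely for illustration:

```python
import random

rng = random.Random(7)

# Illustrative offspring law: P(Y = 0) = 0.35, P(Y = 2) = 0.65.
def Y():
    return 2 if rng.random() < 0.65 else 0

def G(s):
    """Generating function of Y: G(s) = 0.35 + 0.65 s^2."""
    return 0.35 + 0.65 * s * s

# Monte Carlo estimate of G_2(s) = IE[s^{X_2} | X_0 = 1] against G(G(s)).
s = 0.5
n_runs = 200000
acc = 0.0
for _ in range(n_runs):
    x1 = Y()                           # X_1
    x2 = sum(Y() for _ in range(x1))   # X_2 = Y_1 + ... + Y_{X_1}
    acc += s ** x2
print(acc / n_runs, G(G(s)))  # the two values should be close
```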


hence µ1 = µ, µ2 = µ × µ1 = µ², µ3 = µ × µ2 = µ³, and by induction on n ≥ 1 we obtain

    µn = IE[Xn | X0 = 1] = µ^n,   n ≥ 1,

where µ is given by (8.1). Similarly we find

    IE[Xn | X0 = k] = k IE[Xn | X0 = 1] = k µ^n,   k, n ≥ 1.

Consequently, the average of Xn goes to infinity when µ > 1. This is the case in particular if P(Y1 ≥ 1) = 1 and Y1 is not almost surely equal to 1, since under this condition we have P(Y1 ≥ 2) > 0 and

    µ = IE[Y1] = Σ_{n=0}^∞ n P(Y1 = n) > Σ_{n=1}^∞ P(Y1 = n) = P(Y1 ≥ 1) = 1,

hence µ > 1. On the other hand, µ^n converges to 0 when µ < 1.

The variance σn² = Var[Xn | X0 = 1] of Xn given that X0 = 1 can be shown in a similar way to satisfy the recurrence relation

    σ²_{n+1} = σ² µ^n + µ² σn²,

where σ² = Var[Y1], which shows that

    σn² = n σ²,                            µ = 1,
    σn² = σ² µ^{n−1} (1 − µ^n)/(1 − µ),    µ ≠ 1,

n ≥ 1, cf. pages 180-181 of [4].
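The identity IE[Xn | X0 = 1] = µ^n is easy to test by simulation. A minimal sketch with an illustrative critical offspring law (µ = 1, so the mean population size stays at 1 even though most runs eventually die out):

```python
import random

def branching_step(x, offspring, rng):
    """One generation: each of x individuals gets an i.i.d. offspring count."""
    return sum(offspring(rng) for _ in range(x))

def simulate(n, offspring, rng):
    """Population size X_n starting from X_0 = 1."""
    x = 1
    for _ in range(n):
        x = branching_step(x, offspring, rng)
    return x

# Illustrative critical law: P(Y=0) = 1/4, P(Y=1) = 1/2, P(Y=2) = 1/4,
# so mu = IE[Y] = 1 and IE[X_n] = mu^n = 1 for every n.
rng = random.Random(0)
offspring = lambda r: r.choices([0, 1, 2], weights=[1, 2, 1])[0]
runs = [simulate(10, offspring, rng) for _ in range(20000)]
print(sum(runs) / len(runs))  # close to 1 = mu^10
```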


8.2 Extinction probability

Here we are interested in the time to extinction

    T0 = inf{n ≥ 0 : Xn = 0},

and in the probability

    αk := P(T0 < ∞ | X0 = k)

of extinction within a finite time starting from X0 = k. Note that the meaning of “extinction” can be negative as well as positive, for example when the branching process is used to model the spread of an infection.

First we note that by the independence assumption, starting from X0 = k ≥ 2 independent individuals we have

    αk = P(T0 < ∞ | X0 = k) = (P(T0 < ∞ | X0 = 1))^k = α1^k,   k ≥ 1.

Indeed, starting from k individuals, each of them starts a branch of offspring, and in order to have extinction of the population, all k branches should become extinct. Since the k branches behave independently, αk is the product of the extinction probabilities for each branch, which yields αk = (α1)^k since these extinction probabilities are all equal to α1 and there are k of them.

Next, by first step analysis we have

    α1 = P(T0 < ∞ | X0 = 1)
       = P(X1 = 0 | X0 = 1) + Σ_{k=1}^∞ P(T0 < ∞ | X0 = k) P(X1 = k | X0 = 1)
       = P(Y1 = 0) + Σ_{k=1}^∞ P(T0 < ∞ | X0 = k) P(Y1 = k)
       = Σ_{k=0}^∞ α1^k P(Y1 = k)
       = G(α1),

hence the extinction probability α1 is solution of the equation

    α = G(α).   (8.3)

Note that any solution α of (8.3) also satisfies α = G(G(α)), α = G(G(G(α))), and more generally

    α = Gn(α),   n ≥ 1.   (8.4)

On the other hand the solution of (8.3) may not be unique; for example α = 1 is always a solution of (8.3) since G(1) = 1, and may not be equal to the extinction probability.

Proposition 3. The extinction probability α1 is the smallest solution of (8.3).

Proof. We have

    {T0 < ∞} = ∪_{n≥1} {Xn = 0},

because the finiteness of T0 means that Xn vanishes for some n ∈ N. In addition it holds that {Xn = 0} ⊂ {Xn+1 = 0}, n ∈ N, hence given that {X0 = 1} we have

    α1 = P(T0 < ∞ | X0 = 1)
       = P( ∪_{n≥1} {Xn = 0} | X0 = 1 )   (8.5)
       = P( lim_{n→∞} {Xn = 0} | X0 = 1 )
       = lim_{n→∞} P({Xn = 0} | X0 = 1)
       = lim_{n→∞} Gn(0),   (8.6)

and since s ↦ G(s) is increasing because

    G′(s) = IE[Y1 s^{Y1−1}] > 0,   s ∈ [0, 1),

for any solution α ≥ 0 of (8.3) we have, by (8.4),

    0 ≤ Gn(0) ≤ Gn(α) = α,   n ≥ 1,

and taking the limit in this inequality as n goes to infinity we get

    0 ≤ α1 = lim_{n→∞} Gn(0) ≤ α,

by (8.5)-(8.6), hence the extinction probability α1 is always smaller than any solution α of (8.3). Therefore α1 is the smallest solution of (8.3). □
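The proof suggests a direct way to compute α1 numerically: iterate G starting from 0, since α1 = lim_{n→∞} Gn(0). A minimal sketch, applied to the generating function G(s) = 1 − p + ps² studied below (the value of p is illustrative):

```python
def extinction_probability(G, tol=1e-12, max_iter=10**6):
    """Smallest solution of G(alpha) = alpha, obtained as lim G_n(0),
    the monotone iteration used in the proof of Proposition 3."""
    alpha = 0.0
    for _ in range(max_iter):
        new = G(alpha)
        if abs(new - alpha) < tol:
            return new
        alpha = new
    return alpha

# Offspring law P(Y1 = 2) = p, P(Y1 = 0) = 1 - p, so G(s) = 1 - p + p s^2.
p = 0.65
G = lambda s: 1 - p + p * s * s
print(extinction_probability(G))  # close to q/p = 0.35/0.65
```

Because each Gn(0) = P(Xn = 0 | X0 = 1), the successive iterates are the probabilities of extinction by generation n, so the sequence increases monotonically to α1.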


Since G(0) = P(Y1 = 0) we have

    P(Y1 = 0) = G(0) ≤ G(α1) = α1,

hence α1 ≥ P(Y1 = 0), which shows that the extinction probability is strictly positive whenever P(Y1 = 0) > 0. On the other hand if P(Y1 ≥ 1) = 1 then G(0) = 0, which implies α1 = 0.

Exercise: Assume that Y1 has a Bernoulli distribution with parameter p ∈ (0, 1), i.e.

    P(Y1 = 1) = p,   P(Y1 = 0) = 1 − p.

Compute the extinction probability of the associated branching process.

In this case the branching process is actually a two-state Markov chain with transition matrix

    P = |  1    0 |
        | 1−p   p |

and we have

    P(Xn = 0 | X0 = 1) = (1 − p) Σ_{k=0}^{n−1} p^k = 1 − p^n,

hence as in (8.5) the extinction probability α1 is given by

    α1 = P(T0 < ∞ | X0 = 1) = P( ∪_{n≥1} {Xn = 0} | X0 = 1 ) = lim_{n→∞} P({Xn = 0} | X0 = 1) = 1,

provided p < 1. This value can be recovered using the generating function

    G(s) = IE[s^Y1] = Σ_{k=0}^∞ s^k P(Y1 = k) = 1 − p + ps,

for which the unique solution of G(α) = α is the extinction probability α1 = 1, as shown in the next figure.

[Figure: generating function of Y1 with p = 0.65.]

Same question for

    P(Y1 = 2) = p,   P(Y1 = 0) = 1 − p.

Here we need to directly use the generating function

    G(s) = IE[s^Y1] = Σ_{k=0}^∞ s^k P(Y1 = k) = 1 − p + ps².

We check that the solutions of G(α) = α, i.e.¹

    pα² − α + q = p(α − 1)(α − q/p) = 0,   (8.7)

with q = 1 − p, are given by

    { (1 + √(1 − 4pq))/(2p), (1 − √(1 − 4pq))/(2p) } = { 1, q/p },   p ∈ (0, 1].   (8.8)

Example: Assume that Y1 has the geometric distribution with parameter p ∈ (0, 1), i.e. P(Y1 = n) = (1 − p)pⁿ, n ≥ 0. We have

    G(s) = IE[s^Y1] = Σ_{n=0}^∞ sⁿ P(Y1 = n) = (1 − p) Σ_{n=0}^∞ pⁿsⁿ = (1 − p)/(1 − ps).   (8.9)

¹ Remark that (8.7) is identical to the characteristic equation (2.8).


The equation G(α) = α reads

    (1 − p)/(1 − pα) = α,

i.e.

    pα² − α + q = p(α − 1)(α − q/p) = 0,

which is identical to (2.8) and (8.7) with q = 1 − p, and has for solutions (8.8). Hence the extinction probability is

    α1 = min(1, q/p) = { q/p,  p ≥ 1/2;   1,  p ≤ 1/2 }.

As can be seen from the following graphs, the extinction probability α1 is equal to 1 when p ≤ 1/2, meaning that extinction within a finite time is certain in that case.

[Fig. 8.1: Generating function of Y1 with p = 3/8 < 1/2.]

Next is a graph of the generating function s ↦ G(s) for p = 1/2.

[Fig. 8.2: Generating function of Y1 with p = 1/2.]

The following graph of generating function corresponds to p = 3/4.

[Fig. 8.3: Generating function of Y1 with p = 3/4 > 1/2.]

Assume now that Y1 is the sum of two independent geometric variables with parameter 1/2, i.e. it has the negative binomial distribution

    P(Y1 = n) = C(n + r − 1, r − 1) q^r pⁿ = (n + 1) q² pⁿ,   n ≥ 0,

with r = 2. In this case we have²

    G(s) = IE[s^Y1] = Σ_{n=0}^∞ sⁿ P(Y1 = n) = q² Σ_{n=0}^∞ (n + 1) pⁿ sⁿ = (1 − p)²/(1 − ps)²,   s ∈ [−1, 1],

see here.³ When p = 1/2 we check that the equation G(α) = α reads

    s³ − 4s² + 4s − 1 = 0,

which is of degree 3. Now, since α = 1 is a solution of this equation we can factorise it as follows:

    (s − 1)(s² − 3s + 1) = 0,

and we check that the smallest non-negative solution of this equation is given by

    α1 = (3 − √5)/2 ≈ 0.382,

which is the extinction probability, as illustrated in the next figure.

[Figure: probability generating function of Y1.]

See here for a simulation of branching processes in the modeling of infectious diseases.

² Here, Y1 is the sum of two independent geometric random variables, and G is the square of the generating function (8.9) of the geometric distribution.
³ We used the identity Σ_{n=0}^∞ (n + 1) rⁿ = 1/(1 − r)².

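The value α1 = (3 − √5)/2 can also be corroborated by simulating the branching process itself. A rough Monte Carlo sketch: the population cap and the horizon are practical cut-offs, justified because a large supercritical population survives with overwhelming probability:

```python
import random

def geom0(p, rng):
    """Draw from the geometric law P(G = n) = (1 - p) p^n, n >= 0."""
    n = 0
    while rng.random() < p:
        n += 1
    return n

def dies_out(p, rng, cap=500, horizon=200):
    """One run with offspring Y = sum of two independent geometric(p) counts;
    returns True if extinction occurs before the population exceeds `cap`."""
    x = 1
    for _ in range(horizon):
        x = sum(geom0(p, rng) + geom0(p, rng) for _ in range(x))
        if x == 0:
            return True
        if x > cap:      # a population this large essentially never dies out
            return False
    return False

rng = random.Random(42)
n_runs = 4000
est = sum(dies_out(0.5, rng) for _ in range(n_runs)) / n_runs
print(est)  # close to (3 - sqrt(5))/2, about 0.382
```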


The next animation illustrates the extinction of a branching process in finite time when Y1 has the geometric distribution with p = 1/2, in which case there is extinction within finite time with probability 1.

[Fig. 8.4: Animated sample path of a branching process (Xn)n≥0.]


N. Privault

In the next table we summarize some questions and the associated solution methods introduced in this chapter and the previous ones.

How to compute                        Method
$\mathbb{E}[X]$                       sum the values of $X$ weighted by their probabilities.
the hitting probabilities             solve⁴ $g = Pg$ for $g(k)$.
the mean hitting times                solve⁴ $h = 1 + Ph$ for $h(k)$.
the stationary distribution           solve⁵ $\pi = \pi P$ for $\pi$.
the extinction probability            solve $G(\alpha) = \alpha$ for $\alpha$.
$\displaystyle\lim_{n\to\infty} \begin{pmatrix} 1-a & a \\ b & 1-b \end{pmatrix}^{\!n}$     $\begin{pmatrix} \dfrac{b}{a+b} & \dfrac{a}{a+b} \\[2mm] \dfrac{b}{a+b} & \dfrac{a}{a+b} \end{pmatrix}$

8.3 Exercises

Exercise 8.1. Each individual in a population has a random number $Y$ of offspring, with
$$\mathbb{P}(Y = 0) = 1/2, \qquad \mathbb{P}(Y = 1) = 1/2.$$
Let $X_n$ denote the size of the population at time $n \in \mathbb{N}$, with $X_0 = 1$.
1. Compute the generating function $G(s) = \mathbb{E}[s^Y]$ of $Y$ for $s \in \mathbb{R}_+$.
2. Let $G_n(s) = \mathbb{E}[s^{X_n}]$ denote the generating function of $X_n$. Show that

⁴ Be sure to write only the relevant lines of the system under the appropriate boundary conditions.
⁵ Remember that the values of $\pi(k)$ have to add up to 1.





$$G_n(s) = 1 - \frac{1}{2^n} + \frac{s}{2^n}, \qquad s \in \mathbb{R}. \tag{8.10}$$
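As a numerical sanity check of the closed form (8.10) — a verification sketch only, not the requested proof — one can iterate the offspring generating function $G(s) = 1/2 + s/2$, since $G_n = G \circ \cdots \circ G$ when $X_0 = 1$:

```python
def G(s):
    # Offspring generating function: G(s) = E[s^Y] = 1/2 + s/2.
    return 0.5 + 0.5 * s

def Gn_iter(s, n):
    # G_n = G composed with itself n times, since X_0 = 1.
    for _ in range(n):
        s = G(s)
    return s

def Gn_formula(s, n):
    # Claimed closed form (8.10): G_n(s) = 1 - 1/2^n + s/2^n.
    return 1 - 1 / 2**n + s / 2**n

for n in (1, 3, 10):
    print(Gn_iter(0.3, n), Gn_formula(0.3, n))
```

The two columns agree for every $n$, consistent with (8.10).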

3. Compute the probability $\mathbb{P}(X_n = 0 \mid X_0 = 1)$ that the population is extinct at time $n$.
4. Compute the average size $\mathbb{E}[X_n \mid X_0 = 1]$ of the population at step $n$.
5. Compute the extinction probability of the population starting from one individual at time 0.

Exercise 8.2. Each individual in a population has a random number $Y$ of offspring, with
$$\mathbb{P}(Y = 0) = c, \qquad \mathbb{P}(Y = 1) = b, \qquad \mathbb{P}(Y = 2) = a,$$
where $a + b + c = 1$.
1. Compute the generating function $G(s)$ of $Y$ for $s \in [-1, 1]$.
2. Compute the probability that the population is extinct at time 2, starting from 1 individual at time 0.
3. Compute the probability that the population is extinct at time 2, starting from 2 individuals at time 0.
4. Show that when $0 < c \leq a$, the probability of eventual extinction of the population, starting from 2 individuals at time 0, is $(c/a)^2$. What is this probability equal to when $0 < a < c$?

Exercise 8.3. (Problem III.9.5 in [4]). At time 0, a blood culture starts with one red cell. At the end of one minute, the red cell dies and is replaced by one of the following combinations, with the probabilities as indicated:
- 2 red cells: 1/4,
- 1 red cell, 1 white cell: 2/3,
- 2 white cells: 1/12.
Each red cell lives for one minute and gives birth to offspring in the same way as the parent cell. Each white cell lives for one minute and dies without reproducing. Assume that individual cells behave independently.
1. At time $n + 1/2$ minutes after the culture begins, what is the probability that no white cells have appeared?
2. What is the probability that the entire culture eventually dies out?





Exercise 8.4. Consider a branching process with i.i.d. offspring sequence $(Y_k)_{k\geq 1}$. The number of individuals in the population at generation $n+1$ is given by the relation
$$X_{n+1} = Y_1 + \cdots + Y_{X_n},$$
with $X_0 = 1$.
1. (10 marks) Let
$$Z_n = \sum_{k=1}^{n} X_k$$
denote the total number of individuals generated from time 1 to $n$. Compute $\mathbb{E}[Z_n]$ as a function of $\mu = \mathbb{E}[Y_1]$.
2. (10 marks) Let $Z = \sum_{k=1}^{\infty} X_k$ denote the total number of individuals generated from time 1 to infinity. Compute $\mathbb{E}[Z]$ and show that it is finite when $\mu < 1$. In the sequel we work under the condition $\mu < 1$.
3. (20 marks) Let $H(s) = \mathbb{E}[s^Z]$, $s \in [0,1]$, denote the generating function of $Z$. Show, by first step analysis, that the relation
$$H(s) = G(sH(s)), \qquad s \in [0,1],$$
holds, where $G(x)$ is the probability generating function of $Y_1$.
4. (10 marks) In the sequel we assume that $Y_1$ has the geometric distribution $\mathbb{P}(Y_1 = k) = qp^k$, $k \in \mathbb{N}$, with $p \in (0,1)$ and $q = 1-p$. Compute $H(s)$ for $s \in [0,1]$.
5. (10 marks) Using the expression of the generating function $H(s)$ computed in Question 4, check that we have $H(0) = \lim_{s\to 0^+} H(s)$, where $H(0) = \mathbb{P}(Z = 0) = \mathbb{P}(Y_1 = 0) = G(0)$.
6. (10 marks) Using the generating function $H(s)$ computed in Question 4, recover the value of $\mathbb{E}[Z]$ found in Question 2.
7. (10 marks) Assume that each of the $Z$ individuals earns an income $U_k$, $k = 1, \ldots, Z$, where $(U_k)_{k\geq 1}$ is an i.i.d. sequence of random variables with finite expectation $\mathbb{E}[U]$ and distribution function $F(x) = \mathbb{P}(U \leq x)$. Compute the expected value of the sum of the gains of all the individuals in the population.





8. (10 marks) Compute the probability that none of the individuals earns an income higher than $x > 0$.
9. (10 marks) Evaluate the results of Questions 7 and 8 when $U_k$ has the exponential distribution with $F(x) = 1 - e^{-x}$, $x \in \mathbb{R}_+$.

Hints and comments on Exercise 8.4.
1. Use the expression of $\mathbb{E}[X_k]$ given in the notes.
2. $\mathbb{E}[Z] < \infty$ implies that $Z < \infty$ almost surely.
3. Given that $X_1 = k$, $Z$ can be decomposed into the sum of $k$ independent population sizes, each of them started with 1 individual.
4. Compute $G(s)$ and $\mu$ in this model, and check the condition $\mu < 1$. When computing $H(s)$ you will have to solve a quadratic equation, and to choose the relevant solution out of two possibilities.
5. The goal of this question is to confirm the result of Question 4 by checking the value of $H(0) = \lim_{s\to 0^+} H(s)$. For this, find the Taylor expansion of $\sqrt{1 - 4pqs}$ as $s$ tends to 0.
6. The identity $1 - 4pq = (q-p)^2$ can be useful, and the sign of $q - p$ is important when computing $\sqrt{1 - 4pq}$.
7. Use conditioning on $Z = n$, $n \in \mathbb{N}$.
8. The answer will use both $H$ and $F$.
9. Compute the value of $\mathbb{E}[U_1]$.

Exercise 8.5. (Problem III.9.9 in [4]). Families in a distant society continue to have children until the first girl, and then cease childbearing. Let $X$ denote the number of male offspring of a particular husband.
1. Assuming that each child is equally likely to be a boy or a girl, give the probability distribution of $X$.
2. Compute the probability generating function $G_X(s)$ of $X$.
3. What is the probability that the husband's male line of descent will cease to exist by the third generation?






Chapter 9

Continuous-Time Markov Chains

9.1 The Poisson process

In this chapter we start the study of continuous-time processes, which are families $(X_t)_{t\in\mathbb{R}_+}$ of random variables indexed by $\mathbb{R}_+$. The standard Poisson process is a stochastic process $(N_t)_{t\in\mathbb{R}_+}$ which has jumps of size $+1$ only,¹ and whose paths are constant in between two jumps, i.e. at time $t$, the value $N_t$ of the process is given by
$$N_t = \sum_{k=1}^{\infty} \mathbf{1}_{[T_k,\infty)}(t), \qquad t \in \mathbb{R}_+,$$
where
$$\mathbf{1}_{[T_k,\infty)}(t) = \begin{cases} 1 & \text{if } t \geq T_k, \\ 0 & \text{if } 0 \leq t < T_k, \end{cases}$$
$k \geq 1$, and $(T_k)_{k\geq 1}$ is the increasing family of jump times of $(N_t)_{t\in\mathbb{R}_+}$, such that $\lim_{k\to\infty} T_k = +\infty$.

In addition, $(N_t)_{t\in\mathbb{R}_+}$ satisfies the following conditions:
1. Independence of increments: for all $0 \leq t_0 < t_1 < \cdots < t_n$ and $n \geq 1$, the random variables
$$N_{t_1} - N_{t_0}, \; \ldots, \; N_{t_n} - N_{t_{n-1}}$$
are independent.

¹ We also say that $(N_t)_{t\in\mathbb{R}_+}$ is a counting process.




2. Stationarity of increments: $N_{t+h} - N_{s+h}$ has the same distribution as $N_t - N_s$ for all $h > 0$ and $0 \leq s \leq t$.

The stationarity condition means that for every fixed $k \in \mathbb{N}$, the probability
$$\mathbb{P}(N_{t+h} - N_{s+h} = k) = \mathbb{P}(N_t - N_s = k)$$
does not depend on $h > 0$. The next figure represents a sample path of a Poisson process.

Fig. 9.1: Sample path of a Poisson process $(N_t)_{t\in\mathbb{R}_+}$.

Based on the above assumptions, a natural question arises: what is the distribution of $N_t$ at time $t$? We already know that $N_t$ takes values in $\mathbb{N}$, and therefore it has a discrete distribution for all $t \in \mathbb{R}_+$. It is a remarkable fact that the distribution of the increments of $(N_t)_{t\in\mathbb{R}_+}$ can be completely determined from the above conditions, as shown in the following theorem.





As seen in the next result, $N_t - N_s$ has the Poisson distribution with parameter $\lambda(t-s)$.

Theorem 9. Assume that the counting process $(N_t)_{t\in\mathbb{R}_+}$ satisfies Conditions (1) and (2) above. Then we have
$$\mathbb{P}(N_t - N_s = k) = e^{-\lambda(t-s)} \frac{(\lambda(t-s))^k}{k!}, \qquad k \in \mathbb{N}, \quad 0 \leq s \leq t,$$
for some constant $\lambda > 0$. The parameter $\lambda > 0$ is called the intensity of the process, and it is given by
$$\lambda := \lim_{h\to 0} \frac{1}{h}\,\mathbb{P}(N_h = 1). \tag{9.1}$$

Proof. ([1], main steps only). Using the independence and stationarity of increments, we show that the generating function
$$G_t(u) := \mathbb{E}[u^{N_t}], \qquad u \in [-1,1],$$
satisfies
$$G_t(u) = (G_1(u))^t, \qquad u \in [-1,1],$$
which implies that
$$G_t(u) = e^{-t f(u)}, \qquad u \in [-1,1],$$
for some function $f(u)$ of $u$. Next, again from the independence and stationarity of increments, we show that
$$f(u) = \lambda(1-u), \qquad u \in [-1,1],$$
where $\lambda$ is given by (9.1). □

In particular, $N_t$ has a Poisson distribution with parameter $\lambda t$:
$$\mathbb{P}(N_t = k) = \frac{(\lambda t)^k}{k!} e^{-\lambda t}, \qquad t > 0.$$
From (9.1) above we see that²
$$\mathbb{P}(N_h = 1) = \lambda h\, e^{-\lambda h} \simeq \lambda h, \qquad h \to 0,$$

² We use the notation $f(h) \simeq h^k$ to mean that $\lim_{h\to 0} f(h)/h^k = 1$.





and
$$\mathbb{P}(N_h = 0) = e^{-\lambda h} \simeq 1 - \lambda h, \qquad h \to 0,$$
and more generally, for all $t > 0$,
$$\mathbb{P}(N_{t+h} - N_t = 1) = \lambda h\, e^{-\lambda h} \simeq \lambda h, \qquad h \to 0, \tag{9.2}$$
and
$$\mathbb{P}(N_{t+h} - N_t = 0) = e^{-\lambda h} \simeq 1 - \lambda h, \qquad h \to 0. \tag{9.3}$$
This means that within a "short" interval $[t, t+h]$ of length $h$, the increment $N_{t+h} - N_t$ behaves like a Bernoulli random variable with parameter $\lambda h$. This remark can be used for the random simulation of Poisson process paths. We also find that
$$\mathbb{P}(N_{t+h} - N_t = 2) \simeq h^2 \frac{\lambda^2}{2}, \qquad h \to 0, \quad t > 0,$$
and more generally
$$\mathbb{P}(N_{t+h} - N_t = k) \simeq h^k \frac{\lambda^k}{k!}, \qquad h \to 0, \quad t > 0.$$
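The Bernoulli remark above gives a simple simulation scheme: over each step of length $h$, increment the count with probability $\lambda h$. A minimal sketch (the values of $\lambda$, $h$, $T$ and the seed are illustrative choices, not from the text):

```python
import random

random.seed(0)
lam, h, T = 2.0, 0.01, 10.0
steps = int(T / h)

def simulate_N_T():
    # Approximate N_T by Bernoulli(lam * h) increments over [0, T].
    n = 0
    for _ in range(steps):
        if random.random() < lam * h:
            n += 1
    return n

trials = 2000
mean_N = sum(simulate_N_T() for _ in range(trials)) / trials
print(mean_N)   # close to lam * T = 20
```

For small $h$ the empirical mean of $N_T$ is close to $\lambda T$, as predicted by the Poisson distribution of $N_T$.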

In order to prove the next proposition we note that we have the equivalence
$$\{T_1 > t\} \iff \{N_t = 0\},$$
and more generally
$$\{T_n > t\} = \{N_t \leq n-1\}, \qquad n \geq 1.$$

Proposition 2. For $n \geq 1$, the law of $T_n$ has the density
$$t \mapsto \lambda^n e^{-\lambda t} \frac{t^{n-1}}{(n-1)!}$$
on $\mathbb{R}_+$.

Proof. We have $\mathbb{P}(T_1 > t) = \mathbb{P}(N_t = 0) = e^{-\lambda t}$, and by induction, assuming that
$$\mathbb{P}(T_{n-1} > t) = \lambda \int_t^{\infty} e^{-\lambda s} \frac{(\lambda s)^{n-2}}{(n-2)!}\, ds, \qquad t \in \mathbb{R}_+, \quad n \geq 2,$$
we obtain
$$\begin{aligned}
\mathbb{P}(T_n > t) &= \mathbb{P}(T_n > t \geq T_{n-1}) + \mathbb{P}(T_{n-1} > t) \\
&= \mathbb{P}(N_t = n-1) + \mathbb{P}(T_{n-1} > t) \\
&= e^{-\lambda t} \frac{(\lambda t)^{n-1}}{(n-1)!} + \lambda \int_t^{\infty} e^{-\lambda s} \frac{(\lambda s)^{n-2}}{(n-2)!}\, ds \\
&= \lambda \int_t^{\infty} e^{-\lambda s} \frac{(\lambda s)^{n-1}}{(n-1)!}\, ds, \qquad t \in \mathbb{R}_+,
\end{aligned}$$
where we applied an integration by parts to derive the last line. □



Similarly we could show that the times $\tau_k := T_{k+1} - T_k$ spent in state $k \in \mathbb{N}$, with $T_0 = 0$, form a sequence of independent, identically distributed random variables having the exponential distribution with parameter $\lambda > 0$, i.e.
$$\mathbb{P}(\tau_0 > t_0, \ldots, \tau_n > t_n) = \prod_{k=0}^{n} e^{-\lambda t_k} = e^{-\lambda(t_0 + \cdots + t_n)},$$
for all $t_0, \ldots, t_n \in \mathbb{R}_+$. Since the expectation of the exponentially distributed random variable $\tau_k$ with parameter $\lambda > 0$ is given by
$$\mathbb{E}[\tau_k] = \frac{1}{\lambda},$$
we can check that the higher the intensity $\lambda$ (i.e. the higher the probability of having a jump within a small interval), the smaller is the average time spent in each state $k \in \mathbb{N}$.
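The i.i.d. exponential inter-arrival times give a second, exact simulation scheme for Poisson paths: cumulate $\mathrm{Exp}(\lambda)$ holding times until the horizon is exceeded. A sketch (parameters and seed are illustrative):

```python
import random

def poisson_jump_times(lam, T, rng=random):
    # Jump times of a Poisson process of intensity lam on [0, T],
    # built from i.i.d. exponential inter-arrival times tau_k.
    times, t = [], 0.0
    while True:
        t += rng.expovariate(lam)   # exponential with mean 1/lam
        if t > T:
            return times
        times.append(t)

random.seed(0)
lam, T, trials = 2.0, 10.0, 2000
mean_count = sum(len(poisson_jump_times(lam, T)) for _ in range(trials)) / trials
print(mean_count)   # close to lam * T = 20
```

Unlike the Bernoulli scheme, this construction is exact: $N_T$ has exactly the Poisson distribution with parameter $\lambda T$.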

9.2 Birth and death processes

A continuous-time Markov chain $(X_t)_{t\in\mathbb{R}_+}$ such that
$$\mathbb{P}(X_{t+h} - X_t = 1 \mid X_t = i) \simeq \lambda_i h \quad \text{and} \quad \mathbb{P}(X_{t+h} - X_t = 0 \mid X_t = i) \simeq 1 - \lambda_i h, \qquad h \to 0, \quad i \in S,$$
is called a pure birth process with birth rates $\lambda_i \geq 0$, $i \in S$. The Poisson process $(N_t)_{t\in\mathbb{R}_+}$ is a pure birth process with state-independent birth rates $\lambda_n = \lambda > 0$, $n \in \mathbb{N}$. It can be shown that the time spent in state $i$ by the process $(X_t)_{t\in\mathbb{R}_+}$ is an exponentially distributed random variable with parameter $\lambda_i$, see (9.9) below.

A continuous-time Markov chain $(X_t)_{t\in\mathbb{R}_+}$ such that
$$\mathbb{P}(X_{t+h} - X_t = -1 \mid X_t = i) \simeq \mu_i h \quad \text{and} \quad \mathbb{P}(X_{t+h} - X_t = 0 \mid X_t = i) \simeq 1 - \mu_i h, \qquad h \to 0, \quad i \in S,$$
is called a pure death process with death rates $\mu_i \geq 0$, $i \in S$. When $(N_t)_{t\in\mathbb{R}_+}$ is a Poisson process, the process $(-N_t)_{t\in\mathbb{R}_+}$ is a pure death process with state-independent death rates $\mu_n = \lambda > 0$, $n \in \mathbb{N}$. It can be shown that the time spent in state $i$ by the process $(X_t)_{t\in\mathbb{R}_+}$ is an exponentially distributed random variable with parameter $\mu_i$, see (9.9) below.

A continuous-time Markov chain $(X_t)_{t\in\mathbb{R}_+}$ such that
$$\mathbb{P}(X_{t+h} - X_t = 1 \mid X_t = i) \simeq \lambda_i h, \qquad \mathbb{P}(X_{t+h} - X_t = -1 \mid X_t = i) \simeq \mu_i h,$$
and
$$\mathbb{P}(X_{t+h} - X_t = 0 \mid X_t = i) \simeq 1 - (\lambda_i + \mu_i) h, \qquad h \to 0, \quad i \in S,$$
is called a birth and death process with birth rates $\lambda_i \geq 0$ and death rates $\mu_i \geq 0$, $i \in S$.

The time $\tau_{i,i+1}$ spent in state $i$ before a pure birth process $(X_t)_{t\in\mathbb{R}_+}$ moves to $i+1$ is an exponentially distributed random variable with parameter $\lambda_i$, i.e.
$$\mathbb{P}(\tau_{i,i+1} > t) = e^{-\lambda_i t}, \qquad t \in \mathbb{R}_+, \qquad \text{and} \qquad \mathbb{E}[\tau_{i,i+1}] = \frac{1}{\lambda_i}.$$





Similarly, the time $\tau_{i,i-1}$ spent in state $i$ before a pure death process $(X_t)_{t\in\mathbb{R}_+}$ moves to $i-1$ is an exponentially distributed random variable with parameter $\mu_i$, i.e.
$$\mathbb{P}(\tau_{i,i-1} > t) = e^{-\mu_i t}, \qquad t \in \mathbb{R}_+, \qquad \text{and} \qquad \mathbb{E}[\tau_{i,i-1}] = \frac{1}{\mu_i}.$$
In a birth and death process, the time $\tau_i$ spent in state $i$ by $(X_t)_{t\in\mathbb{R}_+}$ is given by
$$\tau_i = \min(\tau_{i,i+1}, \tau_{i,i-1}),$$
which is an exponentially distributed random variable with parameter $\lambda_i + \mu_i$, and
$$\mathbb{E}[\tau_i] = \frac{1}{\lambda_i + \mu_i}.$$
Indeed, since $\tau_{i,i+1}$ and $\tau_{i,i-1}$ are two independent exponentially distributed random variables with parameters $\lambda_i$ and $\mu_i$, we have
$$\mathbb{P}(\min(\tau_{i,i+1}, \tau_{i,i-1}) > t) = \mathbb{P}(\tau_{i,i+1} > t \text{ and } \tau_{i,i-1} > t) = \mathbb{P}(\tau_{i,i+1} > t)\,\mathbb{P}(\tau_{i,i-1} > t) = e^{-t(\lambda_i + \mu_i)}, \qquad t \in \mathbb{R}_+,$$
hence $\tau_i = \min(\tau_{i,i+1}, \tau_{i,i-1})$ is an exponentially distributed random variable with parameter $\lambda_i + \mu_i$.
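The minimum-of-exponentials fact can be checked empirically, e.g. via the mean $1/(\lambda_i + \mu_i)$. A sketch with illustrative rates:

```python
import random

random.seed(1)
lam_i, mu_i = 1.5, 2.5   # illustrative birth/death rates for one state i
n = 200000

# min(tau_{i,i+1}, tau_{i,i-1}) should be exponential with
# parameter lam_i + mu_i, hence with mean 1/(lam_i + mu_i) = 0.25.
samples = [min(random.expovariate(lam_i), random.expovariate(mu_i))
           for _ in range(n)]
mean = sum(samples) / n
print(mean)   # close to 0.25
```

The empirical mean matches $1/(\lambda_i + \mu_i)$ to within Monte Carlo error.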

9.3 Continuous-time Markov chains

A $\mathbb{Z}$-valued continuous-time stochastic process $(X_t)_{t\in\mathbb{R}_+}$ is said to be Markov, or to have the Markov property, if for all $n \geq 1$ the probability distribution of $X_t$ given the past of the process up to time $s$ is determined by the state $X_s$ of the process at time $s$, and does not depend on the past values of $X_u$ for $u < s$. In other words, for all $0 < s_1 < \cdots < s_{n-1} < s < t$ we have
$$\mathbb{P}(X_t = j \mid X_s = i_n, X_{s_{n-1}} = i_{n-1}, \ldots, X_{s_1} = i_1) = \mathbb{P}(X_t = j \mid X_s = i_n).$$





In particular we have
$$\mathbb{P}(X_t = j \mid X_s = i_n, X_{s_{n-1}} = i_{n-1}) = \mathbb{P}(X_t = j \mid X_s = i_n).$$
Example: The Poisson process $(N_t)_{t\in\mathbb{R}_+}$ considered in Section 9.1 is a continuous-time Markov chain because it has independent increments. More generally, any continuous-time process $(S_t)_{t\in\mathbb{R}_+}$ with independent increments has the Markov property. Indeed, for all $j, i_n, \ldots, i_1 \in \mathbb{Z}$ we have (note that $S_0 = 0$ here)
$$\begin{aligned}
\mathbb{P}(S_t = j \mid S_s = i_n, S_{s_{n-1}} = i_{n-1}, \ldots, S_{s_1} = i_1)
&= \frac{\mathbb{P}(S_t = j, S_s = i_n, S_{s_{n-1}} = i_{n-1}, \ldots, S_{s_1} = i_1)}{\mathbb{P}(S_s = i_n, S_{s_{n-1}} = i_{n-1}, \ldots, S_{s_1} = i_1)} \\
&= \frac{\mathbb{P}(S_t - S_s = j - i_n, S_s = i_n, \ldots, S_{s_1} = i_1)}{\mathbb{P}(S_s = i_n, \ldots, S_{s_1} = i_1)} \\
&= \frac{\mathbb{P}(S_t - S_s = j - i_n)\,\mathbb{P}(S_s = i_n, \ldots, S_{s_1} = i_1)}{\mathbb{P}(S_s = i_n, \ldots, S_{s_1} = i_1)} \\
&= \mathbb{P}(S_t - S_s = j - i_n) \\
&= \frac{\mathbb{P}(S_t - S_s = j - i_n)\,\mathbb{P}(S_s = i_n)}{\mathbb{P}(S_s = i_n)} \\
&= \frac{\mathbb{P}(S_t - S_s = j - i_n, S_s = i_n)}{\mathbb{P}(S_s = i_n)} \\
&= \frac{\mathbb{P}(S_t = j, S_s = i_n)}{\mathbb{P}(S_s = i_n)} \\
&= \mathbb{P}(S_t = j \mid S_s = i_n),
\end{aligned}$$
cf. (4.3) for the discrete-time version of this argument. Hence, continuous-time processes with independent increments are Markov chains. However, not all continuous-time Markov chains have independent increments; in fact, the continuous-time Markov chains of interest in this chapter do not have independent increments. As seen above, the random evolution of a Markov process $(X_t)_{t\in\mathbb{R}_+}$ is determined by the data of





$$P_{i,j}(t) := \mathbb{P}(X_{t+s} = j \mid X_s = i), \qquad i, j \in \mathbb{Z}, \quad t \in \mathbb{R}_+,$$
where we assume that the probability $\mathbb{P}(X_{t+s} = j \mid X_s = i)$ does not depend on $s \in \mathbb{R}_+$. In this case the Markov process $(X_t)_{t\in\mathbb{R}_+}$ is said to be time homogeneous. Note that we always have $P(0) = \mathrm{Id}$. This data can be recorded as a time-dependent matrix indexed by $\mathbb{Z}^2$, called the transition semigroup of the Markov process:
$$[\,P_{i,j}(t)\,]_{i,j\in\mathbb{Z}} = [\,\mathbb{P}(X_{t+s} = j \mid X_s = i)\,]_{i,j\in\mathbb{Z}},$$
also written as
$$[\,P_{i,j}(t)\,]_{i,j\in\mathbb{Z}} =
\begin{pmatrix}
\ddots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \\
\cdots & P_{-2,-2}(t) & P_{-2,-1}(t) & P_{-2,0}(t) & P_{-2,1}(t) & P_{-2,2}(t) & \cdots \\
\cdots & P_{-1,-2}(t) & P_{-1,-1}(t) & P_{-1,0}(t) & P_{-1,1}(t) & P_{-1,2}(t) & \cdots \\
\cdots & P_{0,-2}(t) & P_{0,-1}(t) & P_{0,0}(t) & P_{0,1}(t) & P_{0,2}(t) & \cdots \\
\cdots & P_{1,-2}(t) & P_{1,-1}(t) & P_{1,0}(t) & P_{1,1}(t) & P_{1,2}(t) & \cdots \\
\cdots & P_{2,-2}(t) & P_{2,-1}(t) & P_{2,0}(t) & P_{2,1}(t) & P_{2,2}(t) & \cdots \\
\ddots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}.$$
As in the discrete-time case, note the inversion of the order of the indices $(i,j)$ between $\mathbb{P}(X_{t+s} = j \mid X_s = i)$ and $P_{i,j}(t)$. In particular, the initial state $i$ is a row number in the matrix, while the final state $j$ corresponds to a column number. Due to the relation
$$\sum_{j\in\mathbb{Z}} \mathbb{P}(X_{t+s} = j \mid X_s = i) = 1, \qquad i \in \mathbb{Z}, \tag{9.4}$$
the rows of the transition semigroup $(P(t))_{t\in\mathbb{R}_+}$ satisfy the condition
$$\sum_{j\in\mathbb{Z}} P_{i,j}(t) = 1,$$





for all $i \in \mathbb{Z}$. In the sequel we will only consider $\mathbb{N}$-valued Markov processes, in which case the transition semigroup $(P(t))_{t\in\mathbb{R}_+}$ of the Markov process is written as
$$P(t) = [\,P_{i,j}(t)\,]_{i,j\in\mathbb{N}} =
\begin{pmatrix}
P_{0,0}(t) & P_{0,1}(t) & P_{0,2}(t) & \cdots \\
P_{1,0}(t) & P_{1,1}(t) & P_{1,2}(t) & \cdots \\
P_{2,0}(t) & P_{2,1}(t) & P_{2,2}(t) & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix}.$$
From (9.4) we have
$$\sum_{j=0}^{\infty} P_{i,j}(t) = 1,$$
for all $i \in \mathbb{N}$ and $t \in \mathbb{R}_+$.

Exercise: Write down the transition semigroup $[\,P_{i,j}(t)\,]_{i,j\in\mathbb{N}}$ of the Poisson process $(N_t)_{t\in\mathbb{R}_+}$. We have

$$[\,P_{i,j}(t)\,]_{i,j\in\mathbb{N}} =
\begin{pmatrix}
e^{-\lambda t} & \lambda t\, e^{-\lambda t} & \frac{(\lambda t)^2}{2} e^{-\lambda t} & \cdots \\
0 & e^{-\lambda t} & \lambda t\, e^{-\lambda t} & \cdots \\
0 & 0 & e^{-\lambda t} & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix}.$$

Indeed we have
$$\begin{aligned}
P_{i,j}(t) &= \mathbb{P}(N_{s+t} = j \mid N_s = i) \\
&= \frac{\mathbb{P}(N_{s+t} = j \text{ and } N_s = i)}{\mathbb{P}(N_s = i)} \\
&= \frac{\mathbb{P}(N_{s+t} - N_s = j - i \text{ and } N_s = i)}{\mathbb{P}(N_s = i)} \\
&= \frac{\mathbb{P}(N_{s+t} - N_s = j - i)\,\mathbb{P}(N_s = i)}{\mathbb{P}(N_s = i)} \\
&= \mathbb{P}(N_{s+t} - N_s = j - i)
\end{aligned}$$





$$P_{i,j}(t) = \begin{cases} e^{-\lambda t} \dfrac{(\lambda t)^{j-i}}{(j-i)!} & \text{if } j \geq i, \\[2mm] 0 & \text{if } j < i. \end{cases}$$
In case the Markov process $(X_t)_{t\in\mathbb{R}_+}$ takes values in the finite state space $\{0, \ldots, N\}$, its transition semigroup simply has the form
$$P(t) = [\,P_{i,j}(t)\,]_{0\leq i,j\leq N} =
\begin{pmatrix}
P_{0,0}(t) & P_{0,1}(t) & P_{0,2}(t) & \cdots & P_{0,N}(t) \\
P_{1,0}(t) & P_{1,1}(t) & P_{1,2}(t) & \cdots & P_{1,N}(t) \\
P_{2,0}(t) & P_{2,1}(t) & P_{2,2}(t) & \cdots & P_{2,N}(t) \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
P_{N,0}(t) & P_{N,1}(t) & P_{N,2}(t) & \cdots & P_{N,N}(t)
\end{pmatrix}.$$
As noted above, the semigroup matrix is a convenient way to record the values of $\mathbb{P}(X_{t+s} = j \mid X_s = i)$ in a table. In addition, using the Markov property and denoting by $S$ the state space of the process, we have
$$\begin{aligned}
P_{i,j}(t+s) &= \mathbb{P}(X_{t+s} = j \mid X_0 = i) \\
&= \sum_{l\in S} \mathbb{P}(X_{t+s} = j, X_s = l \mid X_0 = i) \\
&= \sum_{l\in S} \frac{\mathbb{P}(X_{t+s} = j, X_s = l, X_0 = i)}{\mathbb{P}(X_0 = i)} \\
&= \sum_{l\in S} \frac{\mathbb{P}(X_{t+s} = j, X_s = l, X_0 = i)}{\mathbb{P}(X_s = l, X_0 = i)} \cdot \frac{\mathbb{P}(X_s = l, X_0 = i)}{\mathbb{P}(X_0 = i)} \\
&= \sum_{l\in S} \mathbb{P}(X_{t+s} = j \mid X_s = l, X_0 = i)\,\mathbb{P}(X_s = l \mid X_0 = i) \\
&= \sum_{l\in S} \mathbb{P}(X_s = l \mid X_0 = i)\,\mathbb{P}(X_{t+s} = j \mid X_s = l) \\
&= \sum_{l\in S} P_{i,l}(s) P_{l,j}(t) \\
&= [P(s)P(t)]_{i,j}, \qquad i, j \in S, \quad s, t \in \mathbb{R}_+.
\end{aligned}$$
We have shown the relation
$$P_{i,j}(s+t) = \sum_{l\in S} P_{i,l}(s) P_{l,j}(t),$$





hence
$$P(s+t) = P(s)P(t) = P(t)P(s), \tag{9.5}$$
which is called the semigroup property of $(P(t))_{t\in\mathbb{R}_+}$. In particular, the matrices $P(s)$ and $P(t)$ commute, i.e. we have $P(s)P(t) = P(t)P(s)$, $s, t \in \mathbb{R}_+$.

Example: For the Poisson process we check that

$$P(s)P(t) =
\begin{pmatrix}
e^{-\lambda s} & \lambda s\, e^{-\lambda s} & \frac{(\lambda s)^2}{2} e^{-\lambda s} & \cdots \\
0 & e^{-\lambda s} & \lambda s\, e^{-\lambda s} & \cdots \\
0 & 0 & e^{-\lambda s} & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix}
\times
\begin{pmatrix}
e^{-\lambda t} & \lambda t\, e^{-\lambda t} & \frac{(\lambda t)^2}{2} e^{-\lambda t} & \cdots \\
0 & e^{-\lambda t} & \lambda t\, e^{-\lambda t} & \cdots \\
0 & 0 & e^{-\lambda t} & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix}$$
$$=
\begin{pmatrix}
e^{-\lambda(s+t)} & \lambda(s+t)\, e^{-\lambda(s+t)} & \frac{(\lambda(s+t))^2}{2} e^{-\lambda(s+t)} & \cdots \\
0 & e^{-\lambda(s+t)} & \lambda(s+t)\, e^{-\lambda(s+t)} & \cdots \\
0 & 0 & e^{-\lambda(s+t)} & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix}$$

$= P(s+t)$.

The above identity can be checked by the following calculation, for all $0 \leq i \leq j$. Since $P_{i,l}(s) = 0$ for $l < i$, and $P_{l,j}(t) = 0$ for $l > j$, we have
$$\begin{aligned}
[P(s)P(t)]_{i,j} &= \sum_{l=0}^{\infty} P_{i,l}(s) P_{l,j}(t) \\
&= \sum_{l=i}^{j} P_{i,l}(s) P_{l,j}(t) \\
&= e^{-\lambda s - \lambda t} \sum_{l=i}^{j} \frac{(\lambda s)^{l-i}}{(l-i)!} \frac{(\lambda t)^{j-l}}{(j-l)!} \\
&= \frac{e^{-\lambda s - \lambda t}}{(j-i)!} \sum_{l=i}^{j} \binom{j-i}{l-i} (\lambda s)^{l-i} (\lambda t)^{j-l} \\
&= \frac{e^{-\lambda s - \lambda t}}{(j-i)!} \sum_{l=0}^{j-i} \binom{j-i}{l} (\lambda s)^{l} (\lambda t)^{j-i-l} \\
&= e^{-\lambda(s+t)} \frac{(\lambda s + \lambda t)^{j-i}}{(j-i)!} \\
&= P_{i,j}(s+t), \qquad s, t \in \mathbb{R}_+,
\end{aligned}$$
where the last sum was evaluated using the binomial theorem.

By differentiating (9.5) with respect to $s$ and letting $s$ tend to 0 we get
$$P'(t) = \lim_{s\to 0} \frac{P(t+s) - P(t)}{s} = \lim_{s\to 0} \frac{P(t)P(s) - P(t)}{s} = P(t) \lim_{s\to 0} \frac{P(s) - P(0)}{s} = P(t)Q,$$
where
$$Q := P'(0) = \lim_{s\to 0} \frac{P(s) - P(0)}{s}$$
is called the infinitesimal generator of $(X_t)_{t\in\mathbb{R}_+}$.

Denoting $Q = [\lambda_{i,j}]_{i,j\in S}$, for all $i \in S$ we have
$$\sum_{j\in S} \lambda_{i,j} = \sum_{j\in S} P'_{i,j}(0) = \frac{d}{dt} \sum_{j\in S} P_{i,j}(t)\Big|_{t=0} = \frac{d}{dt}\, 1\Big|_{t=0} = 0,$$
hence the rows of the infinitesimal generator $Q$ always add up to 0. The equation
$$P'(t) = P(t)Q, \qquad t > 0, \tag{9.6}$$
is called the forward Kolmogorov equation, cf. (1.1). In a similar way we can show that
$$P'(t) = QP(t), \qquad t > 0, \tag{9.7}$$
which is called the backward Kolmogorov equation. The forward and backward Kolmogorov equations (9.6)-(9.7) can be solved using matrix exponentials or systems of differential equations. The solution of (9.6) is given, using matrix exponentials, by
$$P(t) = P(0)\, e^{tQ} = e^{tQ},$$
where





$$e^{tQ} = \sum_{n=0}^{\infty} \frac{t^n}{n!} Q^n = \mathrm{Id} + \sum_{n=1}^{\infty} \frac{t^n}{n!} Q^n,$$
and $\mathrm{Id}$ is the identity matrix, written as
$$\mathrm{Id} = \begin{pmatrix}
1 & 0 & \cdots & 0 & 0 \\
0 & 1 & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 1 & 0 \\
0 & 0 & \cdots & 0 & 1
\end{pmatrix}$$
when the state space is $S = \{0, 1, \ldots, N\}$. The first-order approximation in $h \to 0$ of
$$e^{hQ} = \mathrm{Id} + \sum_{n=1}^{\infty} \frac{h^n}{n!} Q^n = \mathrm{Id} + hQ + \frac{h^2}{2!} Q^2 + \frac{h^3}{3!} Q^3 + \frac{h^4}{4!} Q^4 + \cdots$$
is given by
$$P(h) \simeq \mathrm{Id} + hQ + o(h), \qquad h \to 0.$$
We denote by $\lambda_{i,j}$, $i, j \in S$, the entries of the matrix $Q = (\lambda_{i,j})_{i,j\in S}$, i.e. when $S = \{0, 1, \ldots, N\}$ we have
$$Q = [\,\lambda_{i,j}\,]_{0\leq i,j\leq N} =
\begin{pmatrix}
\lambda_{0,0} & \lambda_{0,1} & \lambda_{0,2} & \cdots & \lambda_{0,N} \\
\lambda_{1,0} & \lambda_{1,1} & \lambda_{1,2} & \cdots & \lambda_{1,N} \\
\lambda_{2,0} & \lambda_{2,1} & \lambda_{2,2} & \cdots & \lambda_{2,N} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\lambda_{N,0} & \lambda_{N,1} & \lambda_{N,2} & \cdots & \lambda_{N,N}
\end{pmatrix}.$$

This yields the transition probabilities over a small time interval of length $h$:
$$P_{i,j}(h) = \mathbb{P}(X_{t+h} = j \mid X_t = i) \simeq
\begin{cases}
\lambda_{i,j}\, h, & i \neq j, \quad h \to 0, \\
1 + \lambda_{i,i}\, h, & i = j, \quad h \to 0,
\end{cases}$$
or
$$P_{i,j}(h) = \mathbb{P}(X_{t+h} = j \mid X_t = i) \simeq
\begin{cases}
\lambda_{i,j}\, h, & i \neq j, \quad h \to 0, \\
1 - h \displaystyle\sum_{l\neq i} \lambda_{i,l} = 1 + h\lambda_{i,i}, & i = j, \quad h \to 0,
\end{cases}$$
since we have
$$\lambda_{i,i} = -\sum_{l\neq i} \lambda_{i,l},$$
from the condition
$$\sum_{l\in S} \lambda_{i,l} = \lambda_{i,i} + \sum_{l\neq i} \lambda_{i,l} = 0.$$
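The relation $P(t) = e^{tQ}$ can be checked numerically with a truncated exponential series; since the rows of $Q$ sum to 0, every row of $P(t)$ must sum to 1. A sketch with illustrative two-state rates $\alpha = 2$, $\beta = 3$ (not values from the text):

```python
alpha, beta = 2.0, 3.0
Q = [[-alpha, alpha], [beta, -beta]]   # rows sum to 0

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, terms=60):
    # Truncated series exp(A) = sum_{n>=0} A^n / n!.
    R = [[1.0, 0.0], [0.0, 1.0]]      # running sum, starts at Id
    term = [[1.0, 0.0], [0.0, 1.0]]   # current term A^n / n!
    for n in range(1, terms):
        term = mat_mul(term, A)
        term = [[x / n for x in row] for row in term]
        R = [[R[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return R

t = 0.5
P = expm([[t * x for x in row] for row in Q])
print([sum(row) for row in P])   # each row sums to 1
print(P[0][1])                   # transition probability 0 -> 1 at time t
```

The entry $P_{0,1}(t)$ also matches the closed form $\frac{\alpha}{\alpha+\beta}(1 - e^{-t(\alpha+\beta)})$ derived in Section 9.4 below for the two-state chain.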

For example, in the case of a two-state continuous-time Markov chain we have
$$Q = \begin{pmatrix} -\alpha & \alpha \\ \beta & -\beta \end{pmatrix},$$
with $\alpha, \beta \geq 0$, and
$$P(h) \simeq \mathrm{Id} + hQ = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + h \begin{pmatrix} -\alpha & \alpha \\ \beta & -\beta \end{pmatrix} = \begin{pmatrix} 1 - h\alpha & h\alpha \\ h\beta & 1 - h\beta \end{pmatrix}, \tag{9.8}$$
as $h \to 0$. In this case, $P(h)$ above has the same form as the transition matrix (4.6) of a discrete-time Markov chain with "small" time step $h > 0$ and "small" transition probabilities, namely $h\alpha$ is the probability of switching from state 0 to state 1, and $h\beta$ is the probability of switching from state 1 to state 0, within a short period of time $h$.

We note that since
$$\mathbb{P}(X_{t+h} = j \mid X_t = i) \simeq \lambda_{i,j}\, h, \qquad h \to 0, \quad i \neq j,$$
and
$$\mathbb{P}(X_{t+h} \neq j \mid X_t = i) \simeq 1 - \lambda_{i,j}\, h, \qquad h \to 0, \quad i \neq j,$$
the transition of the process $(X_t)_{t\in\mathbb{R}_+}$ from state $i$ to state $j$ behaves identically to that of a Poisson process with intensity $\lambda_{i,j}$, cf. (9.2)-(9.3).





Similarly to the Poisson process, the time $\tau_{i,j}$ spent in state $i$ before moving to state $j \neq i$ is an exponentially distributed random variable with parameter $\lambda_{i,j}$, i.e.
$$\mathbb{P}(\tau_{i,j} > t) = e^{-\lambda_{i,j}\, t}, \qquad t \in \mathbb{R}_+, \tag{9.9}$$
and we have
$$\mathbb{E}[\tau_{i,j}] = \lambda_{i,j} \int_0^{\infty} t\, e^{-t\lambda_{i,j}}\, dt = \frac{1}{\lambda_{i,j}}, \qquad i \neq j.$$
We also have
$$\mathbb{P}(X_{t+h} = i \mid X_t = i) \simeq 1 - h \sum_{l\neq i} \lambda_{i,l} = 1 + h\lambda_{i,i}, \qquad h \to 0,$$
and
$$\mathbb{P}(X_{t+h} \neq i \mid X_t = i) \simeq h \sum_{l\neq i} \lambda_{i,l} = -h\lambda_{i,i}, \qquad h \to 0,$$

hence, by the same Poisson process analogy, the time $\tau_i$ spent in state $i$ before the next transition is an exponentially distributed random variable with parameter $\sum_{j\neq i} \lambda_{i,j}$, i.e.
$$\mathbb{P}(\tau_i > t) = \exp\Big(-t \sum_{j\neq i} \lambda_{i,j}\Big) = e^{t\lambda_{i,i}}, \qquad t \in \mathbb{R}_+.$$
In other words, the time $\tau_i$ spent in state $i$ satisfies
$$\tau_i = \min_{j\in S,\; j\neq i} \tau_{i,j},$$
hence $\tau_i$ is an exponential random variable with parameter $\sum_{j\neq i} \lambda_{i,j}$, since
$$\mathbb{P}(\tau_i > t) = \mathbb{P}\Big(\min_{j\in S,\; j\neq i} \tau_{i,j} > t\Big) = \prod_{j\in S,\; j\neq i} \mathbb{P}(\tau_{i,j} > t) = \exp\Big(-t \sum_{j\neq i} \lambda_{i,j}\Big) = e^{t\lambda_{i,i}}, \qquad t \in \mathbb{R}_+.$$
In addition we have
$$\mathbb{E}[\tau_i] = \sum_{l\neq i} \lambda_{i,l} \int_0^{\infty} t \exp\Big(-t \sum_{l\neq i} \lambda_{i,l}\Big)\, dt = \frac{1}{\sum_{l\neq i} \lambda_{i,l}} = -\frac{1}{\lambda_{i,i}},$$

and the times $(\tau_k)_{k\in\mathbb{Z}}$ spent in states $k \in \mathbb{Z}$ form a sequence of independent random variables.

Examples:

• For the two-state continuous-time Markov chain with generator
$$Q = \begin{pmatrix} -\alpha & \alpha \\ \beta & -\beta \end{pmatrix},$$
the mean time spent in state 0 is $1/\alpha$, whereas the mean time spent in state 1 is $1/\beta$.

• The generator of the Poisson process is given by $\lambda_{i,j} = \mathbf{1}_{\{j=i+1\}}\,\lambda$, $i \neq j$, i.e.
$$Q = [\,\lambda_{i,j}\,]_{i,j\in\mathbb{N}} = \begin{pmatrix} -\lambda & \lambda & 0 & \cdots \\ 0 & -\lambda & \lambda & \cdots \\ 0 & 0 & -\lambda & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$$
From the relation $P(h) \simeq \mathrm{Id} + hQ$, $h \to 0$, we recover the infinitesimal transition probabilities
$$\mathbb{P}(N_{t+h} - N_t = 1) = \mathbb{P}(N_{t+h} = k+1 \mid N_t = k) \simeq \lambda h, \qquad h \to 0, \quad k \in \mathbb{N},$$
and
$$\mathbb{P}(N_{t+h} - N_t = 0) = \mathbb{P}(N_{t+h} = k \mid N_t = k) \simeq 1 - \lambda h, \qquad h \to 0, \quad k \in \mathbb{N},$$
of the Poisson process.

• The generator of the pure birth process on $\mathbb{N}$ is
$$Q = [\,\lambda_{i,j}\,]_{i,j\in\mathbb{N}} = \begin{pmatrix} -\lambda_0 & \lambda_0 & 0 & \cdots \\ 0 & -\lambda_1 & \lambda_1 & \cdots \\ 0 & 0 & -\lambda_2 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix},$$


N. Privault

in which the rate λi is state-dependent. From the relation P (h) 'h→0 Id + hQ we recover the infinitesimal transition probabilities P(Xt+h −Xt = 1 | Xt = i) = P(Xt+h = i+1 | Xt = i) ' λi h,

h → 0,

i ∈ N,

h → 0,

i ∈ N,

and P(Xt+h −Xt = 0 | Xt = i) = P(Xt+h = i | Xt = i) ' 1−λi h, of the pure birth process. • The generator of the pure death process on −N is   · · · 0 µ0 −µ0      · · · µ1 −µ1 0    Q = [ λi,j ]i,j≤0 =  .    · · · −µ2 0 0    .. .. . .. .. . . . From the relation P (h) 'h→0 Id + hQ we recover the infinitesimal transition probabilities P(Xt+h −Xt = −1 | Xt = i) = P(Xt+h = i−1 | Xt = i) ' µi h,

h → 0,

i ∈ S,

and P(Xt+h = i | Xt = i) = P(Xt+h −Xt = 0 | Xt = i) ' 1−µi h,

h → 0,

i ∈ S,

of the pure death process. • The generator of the birth and death process on {0, 1, . . . , N } is −λ0 λ0  µ1 −λ1 − µ1   .. ..  . .   .. ..  . . =  . ..  .. .   . ..  .. .   0 0 0 0 

[ λi,j ]0≤i,j≤N

0 0 ··· λ1 0 · · · .. .. .. . . . .. .. .. . . . .. . . . . . . . .. .. . . . . . 0 0 ··· 0 0 ···

··· ··· ··· ··· .. .. . . . .. . .. .. .. . .

 0 0 0 0   .. ..  . .   .. ..  . .  , .. ..  . .   . ..  .. .. .. . . .   0 µN −1 −λN −1 − µN −1 λN −1  0 0 µN −µN

with µ0 = λN = 0.





From the relation $P(h) \simeq \mathrm{Id} + hQ$, $h \to 0$, we recover the infinitesimal transition probabilities
$$\mathbb{P}(X_{t+h} - X_t = 1 \mid X_t = i) \simeq \lambda_i h, \qquad h \to 0, \quad 0 \leq i \leq N,$$
and
$$\mathbb{P}(X_{t+h} - X_t = -1 \mid X_t = i) \simeq \mu_i h, \qquad h \to 0, \quad 0 \leq i \leq N,$$
and
$$\mathbb{P}(X_{t+h} - X_t = 0 \mid X_t = i) \simeq 1 - (\lambda_i + \mu_i) h, \qquad h \to 0, \quad 0 \leq i \leq N,$$
of the birth and death process on $\{0, 1, \ldots, N\}$, with $\mu_0 = \lambda_N = 0$. Recall that the time $\tau_i$ spent in state $i$ is an exponentially distributed random variable with parameter $\lambda_i + \mu_i$, and we have
$$P_{i,i}(t) \geq \mathbb{P}(\tau_i > t) = e^{-t(\lambda_i + \mu_i)}, \qquad t \in \mathbb{R}_+, \qquad \text{and} \qquad \mathbb{E}[\tau_i] = \frac{1}{\lambda_i + \mu_i}.$$
In the case of a pure birth process we find
$$P_{i,i}(t) = \mathbb{P}(\tau_i > t) = e^{-t\lambda_i}, \qquad t \in \mathbb{R}_+,$$
and similarly for a pure death process. This result can also be recovered by matrix exponentiation. When $S = \{0, 1, \ldots, N\}$ with $\lambda_i = \lambda$ and $\mu_i = \mu$, $1 \leq i \leq N-1$, and $\lambda_0 = \mu_N = 0$, the above birth and death process becomes a continuous-time analog of the discrete-time gambling process.
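The exponential-holding-time description of a birth and death process leads directly to a simulation scheme (often called the Gillespie method, named here as an assumption of standard usage, not taken from the text): in state $i$, wait an exponential time with parameter $\lambda_i + \mu_i$, then jump up with probability $\lambda_i/(\lambda_i+\mu_i)$, otherwise down. A sketch with illustrative rates for the gambling-type chain:

```python
import random

def simulate_birth_death(lam, mu, x0, T, rng=random):
    # In state i: hold for Exp(lam[i] + mu[i]) time, then move to i+1
    # with probability lam[i]/(lam[i]+mu[i]), otherwise to i-1.
    t, x, path = 0.0, x0, [(0.0, x0)]
    while True:
        rate = lam[x] + mu[x]
        if rate == 0.0:            # absorbing state: no more jumps
            return path
        t += rng.expovariate(rate)
        if t > T:
            return path
        x = x + 1 if rng.random() < lam[x] / rate else x - 1
        path.append((t, x))

N = 5
lam = [0.0] + [1.0] * (N - 1) + [0.0]   # lambda_0 = lambda_N = 0
mu  = [0.0] + [1.0] * (N - 1) + [0.0]   # mu_0 = mu_N = 0 (both ends absorbing)
random.seed(2)
path = simulate_birth_death(lam, mu, x0=2, T=100.0)
print(path[-1][1])   # with high probability an absorbing state, 0 or N
```

This is the continuous-time analog of simulating the discrete-time gambling chain, with the deterministic unit time step replaced by exponential holding times.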

9.4 The two-state continuous-time Markov chain

In this section we consider a continuous-time Markov process with state space $S = \{0, 1\}$. In this case the infinitesimal generator $Q$ of $(X_t)_{t\in\mathbb{R}_+}$ has the form





$$Q = \begin{pmatrix} -\alpha & \alpha \\ \beta & -\beta \end{pmatrix},$$
with $\alpha, \beta \geq 0$. The forward Kolmogorov equation (9.6) reads
$$P'(t) = P(t) \times \begin{pmatrix} -\alpha & \alpha \\ \beta & -\beta \end{pmatrix}, \qquad t > 0,$$
i.e.
$$\begin{pmatrix} P'_{0,0}(t) & P'_{0,1}(t) \\ P'_{1,0}(t) & P'_{1,1}(t) \end{pmatrix} = \begin{pmatrix} P_{0,0}(t) & P_{0,1}(t) \\ P_{1,0}(t) & P_{1,1}(t) \end{pmatrix} \times \begin{pmatrix} -\alpha & \alpha \\ \beta & -\beta \end{pmatrix}, \qquad t > 0,$$
or
$$\begin{cases}
P'_{0,0}(t) = -\alpha P_{0,0}(t) + \beta P_{0,1}(t), & P'_{0,1}(t) = \alpha P_{0,0}(t) - \beta P_{0,1}(t), \\
P'_{1,0}(t) = -\alpha P_{1,0}(t) + \beta P_{1,1}(t), & P'_{1,1}(t) = \alpha P_{1,0}(t) - \beta P_{1,1}(t),
\end{cases}$$
$t > 0$, which is a system of four differential equations³ with initial condition
$$P(0) = \begin{pmatrix} P_{0,0}(0) & P_{0,1}(0) \\ P_{1,0}(0) & P_{1,1}(0) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = \mathrm{Id}.$$
Its solution is given by the matrix exponential

$$P(t) = P(0)\, e^{tQ} = e^{tQ} = \exp\left( t \begin{pmatrix} -\alpha & \alpha \\ \beta & -\beta \end{pmatrix} \right) =
\begin{pmatrix}
\dfrac{\beta}{\alpha+\beta} + \dfrac{\alpha}{\alpha+\beta}\, e^{-t(\alpha+\beta)} & \dfrac{\alpha}{\alpha+\beta} - \dfrac{\alpha}{\alpha+\beta}\, e^{-t(\alpha+\beta)} \\[2mm]
\dfrac{\beta}{\alpha+\beta} - \dfrac{\beta}{\alpha+\beta}\, e^{-t(\alpha+\beta)} & \dfrac{\alpha}{\alpha+\beta} + \dfrac{\beta}{\alpha+\beta}\, e^{-t(\alpha+\beta)}
\end{pmatrix}, \tag{9.10}$$

 β −β   β α α α + e−t(α+β) − e−t(α+β) α+β  α+β α+β α+β   =  , (9.10)   β β α β −t(α+β) −t(α+β) − e + e α+β α+β α+β α+β 3

Please refer to MAS321 - Ordinary Differential Equations for more on first order linear systems of differential equations.





$t > 0$, see here and (9.11) below. In other words we have
$$P(h) = \begin{pmatrix}
1 - \dfrac{\alpha}{\alpha+\beta}(1 - e^{-h(\alpha+\beta)}) & \dfrac{\alpha}{\alpha+\beta}(1 - e^{-h(\alpha+\beta)}) \\[2mm]
\dfrac{\beta}{\alpha+\beta}(1 - e^{-h(\alpha+\beta)}) & 1 - \dfrac{\beta}{\alpha+\beta}(1 - e^{-h(\alpha+\beta)})
\end{pmatrix}, \qquad h > 0,$$
hence, since
$$1 - e^{-h(\alpha+\beta)} \simeq h(\alpha+\beta), \qquad h \to 0,$$
the expression (9.10) above recovers (9.8) as $h \to 0$, i.e. we have
$$P(h) \simeq \begin{pmatrix} 1 - h\alpha & h\alpha \\ h\beta & 1 - h\beta \end{pmatrix}, \qquad h \to 0.$$
The matrix exponential $e^{tQ}$ can also be computed by diagonalization. The matrix $Q$ has two eigenvectors⁴
$$\begin{pmatrix} 1 \\ 1 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} -\alpha \\ \beta \end{pmatrix},$$
with respective eigenvalues $\lambda_1 = 0$ and $\lambda_2 = -\alpha - \beta$, see for example here. Hence it can be put in diagonal form $Q = M \times D \times M^{-1}$ as follows:
$$Q = \begin{pmatrix} 1 & -\alpha \\ 1 & \beta \end{pmatrix} \times \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix} \times \begin{pmatrix} \dfrac{\beta}{\alpha+\beta} & \dfrac{\alpha}{\alpha+\beta} \\[2mm] -\dfrac{1}{\alpha+\beta} & \dfrac{1}{\alpha+\beta} \end{pmatrix}.$$
Consequently we have
$$e^{tQ} = \sum_{n=0}^{\infty} \frac{t^n}{n!} Q^n$$

⁴ Please refer to MAS213 - Linear Algebra II for more on eigenvectors, eigenvalues, and diagonalization.





$$\begin{aligned}
e^{tQ} &= \sum_{n=0}^{\infty} \frac{t^n}{n!} (M \times D \times M^{-1})^n \\
&= \sum_{n=0}^{\infty} \frac{t^n}{n!}\, M \times D^n \times M^{-1} \\
&= M \times \left( \sum_{n=0}^{\infty} \frac{t^n}{n!} D^n \right) \times M^{-1} \\
&= M \times e^{tD} \times M^{-1} \\
&= \begin{pmatrix} 1 & -\alpha \\ 1 & \beta \end{pmatrix} \times \begin{pmatrix} e^{t\lambda_1} & 0 \\ 0 & e^{t\lambda_2} \end{pmatrix} \times \begin{pmatrix} \dfrac{\beta}{\alpha+\beta} & \dfrac{\alpha}{\alpha+\beta} \\[2mm] -\dfrac{1}{\alpha+\beta} & \dfrac{1}{\alpha+\beta} \end{pmatrix} \\
&= \begin{pmatrix} 1 & -\alpha \\ 1 & \beta \end{pmatrix} \times \begin{pmatrix} 1 & 0 \\ 0 & e^{-t(\alpha+\beta)} \end{pmatrix} \times \begin{pmatrix} \dfrac{\beta}{\alpha+\beta} & \dfrac{\alpha}{\alpha+\beta} \\[2mm] -\dfrac{1}{\alpha+\beta} & \dfrac{1}{\alpha+\beta} \end{pmatrix} \\
&= \frac{1}{\alpha+\beta} \begin{pmatrix} \beta & \alpha \\ \beta & \alpha \end{pmatrix} + \frac{e^{-t(\alpha+\beta)}}{\alpha+\beta} \begin{pmatrix} \alpha & -\alpha \\ -\beta & \beta \end{pmatrix} \\
&= \begin{pmatrix}
\dfrac{\beta}{\alpha+\beta} + \dfrac{\alpha}{\alpha+\beta}\, e^{-t(\alpha+\beta)} & \dfrac{\alpha}{\alpha+\beta} - \dfrac{\alpha}{\alpha+\beta}\, e^{-t(\alpha+\beta)} \\[2mm]
\dfrac{\beta}{\alpha+\beta} - \dfrac{\beta}{\alpha+\beta}\, e^{-t(\alpha+\beta)} & \dfrac{\alpha}{\alpha+\beta} + \dfrac{\beta}{\alpha+\beta}\, e^{-t(\alpha+\beta)}
\end{pmatrix}, \qquad (9.11)
\end{aligned}$$

$t > 0$. Hence we can compute the probabilities
$$\mathbb{P}(X_t = 0 \mid X_0 = 0) = \frac{\beta + \alpha\, e^{-t(\alpha+\beta)}}{\alpha+\beta}, \qquad \mathbb{P}(X_t = 1 \mid X_0 = 0) = \frac{\alpha}{\alpha+\beta}\left(1 - e^{-t(\alpha+\beta)}\right),$$
and
$$\mathbb{P}(X_t = 0 \mid X_0 = 1) = \frac{\beta}{\alpha+\beta}\left(1 - e^{-t(\alpha+\beta)}\right), \qquad \mathbb{P}(X_t = 1 \mid X_0 = 1) = \frac{\alpha + \beta\, e^{-t(\alpha+\beta)}}{\alpha+\beta},$$
$t \in \mathbb{R}_+$. From this expression we get in particular the long-time behavior of the continuous-time Markov chain:
$$\lim_{t\to\infty} P(t) = \frac{1}{\alpha+\beta} \begin{pmatrix} \beta & \alpha \\ \beta & \alpha \end{pmatrix},$$
whenever $\alpha > 0$ or $\beta > 0$, whereas if $\alpha = \beta = 0$ we have




10

P (t) = 

  = Id ,

t ∈ R+ ,

01 and the chain is constant. Note that in continuous time convergence always occurs (unlike in the discrete-time case), and it will be faster when α + β is larger. Hence we have lim P(Xt = 1 | X0 = 0) = lim P(Xt = 1 | X0 = 1) =

α α+β

(9.12)

lim P(Xt = 0 | X0 = 0) = lim P(Xt = 0 | X0 = 1) =

β α+β

(9.13)

t→∞

t→∞

and t→∞

t→∞

and

 π = (π0 , π1 ) =

β α , α+β α+β

 (9.14)

appears as a limiting distribution as t goes to infinity, provided (α, β) 6= (0, 0). This means that whatever the starting point X0 , the probability of being at 1 after a “large” time is close to α/(α + β), while the probability of being at 0 is close to β/(α + β). Next we consider a simulation of the two-state continuous Markov chain with infinitesimal generator   −20 20 , Q= 40 −40 i.e. α = 20 and β = 40. Figure 9.1 represents a sample path (xn )n=0,1,...,100 of the continuous-time chain, while Figure 9.2 represents the sample average Z 1 t yt = xs ds, t ∈ [0, 1], t 0 which counts the proportion of values of the chain in the state 1. This proportion is found to converge to α/(α + β) = 1/3.
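The closed-form expression (9.11) for P(t) = exp(tQ) can be checked numerically against a truncated Taylor series of the matrix exponential. The following is a minimal sketch in plain Python; the values α = 20, β = 40 are those of the simulation example, and t = 0.05 is arbitrary:

```python
import math

# Compare the closed form (9.11) for the two-state chain with a
# truncated series exp(tQ) = sum_{n>=0} (t^n / n!) Q^n.
alpha, beta, t = 20.0, 40.0, 0.05

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(Q, t, terms=60):
    P = [[1.0, 0.0], [0.0, 1.0]]      # n = 0 term: identity
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        # term_n = term_{n-1} * (t Q / n) gives (t^n / n!) Q^n
        term = mat_mul(term, [[q * t / n for q in row] for row in Q])
        P = [[P[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return P

Q = [[-alpha, alpha], [beta, -beta]]
P_series = expm(Q, t)

s = alpha + beta
e = math.exp(-t * s)
P_closed = [[(beta + alpha * e) / s, (alpha - alpha * e) / s],
            [(beta - beta * e) / s, (alpha + beta * e) / s]]
```

Both computations agree to machine precision, and each row of P(t) sums to 1 as required of a transition semigroup.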

[Figure: sample path of the two-state chain]

Fig. 9.1: Sample path in continuous time.

[Figure: sample average yt over t ∈ [0, 1]]

Fig. 9.2: The proportion of process values in the state 1 converges to 1/3.
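The simulation behind Figures 9.1 and 9.2 can be reproduced without plotting by drawing exponential holding times in each state and averaging the time spent at 1. A sketch in plain Python (the horizon of 1000 time units is chosen larger than in the figures so that the average stabilizes):

```python
import random

# Simulate the two-state chain with generator Q = [[-20, 20], [40, -40]]
# and measure the long-run proportion of time spent in state 1.
random.seed(0)
alpha, beta = 20.0, 40.0        # rates 0 -> 1 and 1 -> 0
horizon = 1000.0

state, t, time_at_1 = 0, 0.0, 0.0
while t < horizon:
    rate = alpha if state == 0 else beta
    sojourn = random.expovariate(rate)      # exponential holding time
    sojourn = min(sojourn, horizon - t)     # truncate at the horizon
    if state == 1:
        time_at_1 += sojourn
    t += sojourn
    state = 1 - state                       # jump to the other state

proportion = time_at_1 / horizon            # close to alpha/(alpha+beta) = 1/3
```

The empirical proportion approaches α/(α+β) = 1/3, in agreement with (9.14).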

9.5 Limiting and stationary distributions

A probability distribution π = (πi)_{i∈S} is said to be stationary if

    πP(t) = π,    t ∈ R+.

Proposition 4. The probability distribution π = (πi)_{i∈S} is stationary if and only if πQ = 0.

Proof. Assuming that πQ = 0 we have

    πP(t) = π e^{tQ} = π Σ_{n≥0} (t^n/n!) Q^n = Σ_{n≥0} (t^n/n!) πQ^n = π,

since πQ^n = (πQ)Q^{n−1} = 0 for all n ≥ 1. On the other hand, differentiating the relation π = πP(t) at t = 0 shows that 0 = πP′(0) = πQP(0) = πQ. □

Next is the continuous-time analog of Proposition 1.

Proposition 5. Assume that the continuous-time Markov chain (Xt)_{t∈R+} admits a limiting distribution given by

    πj := lim_{t→∞} P(Xt = j | X0 = i) = lim_{t→∞} P_{i,j}(t),    j ∈ S,

independent of the initial state i ∈ S. Then we have

    πQ = 0,    (9.15)

i.e. π is a stationary distribution.

Proof. The limit of P′(t) exists as t → ∞ since by the forward Kolmogorov equation (9.6) we have

    lim_{t→∞} P′(t) = lim_{t→∞} P(t)Q = [ π ]
                                        [ ⋮ ] Q.
                                        [ π ]

On the other hand, since both

    P(t) = P(0) + ∫_0^t P′(s) ds

and P′(t) converge as t → ∞ we should have

    lim_{t→∞} P′(t) = 0,

which shows that

    [ π ]
    [ ⋮ ] Q = 0
    [ π ]

by (9.6), hence we have πQ = 0, or

    Σ_{i∈S} πi λ_{i,j} = 0,    j ∈ S.    □

Note that the limiting distribution satisfies

    lim_{t→∞} P(t) = [ lim_{t→∞} P_{0,0}(t)  ⋯  lim_{t→∞} P_{0,n}(t) ]
                     [         ⋮             ⋱           ⋮           ]
                     [ lim_{t→∞} P_{n,0}(t)  ⋯  lim_{t→∞} P_{n,n}(t) ]
                   = [ π0  ⋯  πn ]
                     [ ⋮   ⋱  ⋮  ]   =  [ π ]
                     [ π0  ⋯  πn ]      [ ⋮ ],
                                        [ π ]

where π is the line vector π = [π0, ..., πn]. Equation (9.15) is actually equivalent to

    π = π(Id + hQ),    h > 0,

which yields the stationary distribution of a discrete-time Markov chain with transition matrix P(h) ≃ Id + hQ on "small" discrete intervals of length h → 0. In the case of the two-state Markov chain with infinitesimal generator

    Q = [ −α    α ]
        [  β   −β ],
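Proposition 4 can be illustrated numerically: for the two-state generator, the distribution π = (β/(α+β), α/(α+β)) satisfies πQ = 0, hence also π(Id + hQ) = π for every h > 0. A quick sketch in plain Python, where the values α = 2, β = 3 are arbitrary:

```python
# Check that pi = (beta/(alpha+beta), alpha/(alpha+beta)) satisfies
# pi Q = 0, hence pi (Id + h Q) = pi for any h.
alpha, beta = 2.0, 3.0
Q = [[-alpha, alpha], [beta, -beta]]
pi = [beta / (alpha + beta), alpha / (alpha + beta)]

# row vector times matrix: (pi Q)_j = sum_i pi_i Q_{i,j}
piQ = [pi[0] * Q[0][j] + pi[1] * Q[1][j] for j in range(2)]

h = 0.1
P_h = [[(1.0 if i == j else 0.0) + h * Q[i][j] for j in range(2)]
       for i in range(2)]
pi_next = [pi[0] * P_h[0][j] + pi[1] * P_h[1][j] for j in range(2)]
```

Here piQ vanishes and pi_next equals pi, consistent with π being stationary.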


the limiting distribution solves

    0 = −απ0 + βπ1
    0 =  απ0 − βπ1,

with π0 + π1 = 1, i.e.

    π = (π0, π1) = ( β/(α+β), α/(α+β) ).    (9.16)

As a second example, for the birth and death process on N, with infinitesimal generator

    [λ_{i,j}]_{i,j∈N} = [ −λ0     λ0        0         0        0    ⋯ ]
                        [  µ1    −λ1−µ1    λ1         0        0    ⋯ ]
                        [  0      µ2      −λ2−µ2     λ2        0    ⋯ ]
                        [  0      0        µ3       −λ3−µ3    λ3    ⋯ ]
                        [  ⋮      ⋮        ⋮         ⋱        ⋱     ⋱ ],

the limiting distribution solves

    0 = −λ0 π0 + µ1 π1
    0 =  λ0 π0 − (λ1+µ1) π1 + µ2 π2
    0 =  λ1 π1 − (λ2+µ2) π2 + µ3 π3
    ⋮
    0 =  λ_{j−1} π_{j−1} − (λj+µj) πj + µ_{j+1} π_{j+1},
    ⋮

i.e.

    π1 = (λ0/µ1) π0
    π2 = −(λ0/µ2) π0 + ((λ1+µ1)/µ2) π1 = −(λ0/µ2) π0 + ((λ1+µ1)λ0/(µ2 µ1)) π0 = (λ1 λ0/(µ2 µ1)) π0
    π3 = −(λ1/µ3) π1 + ((λ2+µ2)/µ3) π2 = −(λ1 λ0/(µ3 µ1)) π0 + ((λ2+µ2)λ1 λ0/(µ3 µ2 µ1)) π0 = (λ2 λ1 λ0/(µ3 µ2 µ1)) π0
    ⋮
    π_{j+1} = (λj ⋯ λ0)/(µ_{j+1} ⋯ µ1) π0,
    ⋮

and under the condition

    1 = Σ_{i≥0} πi = π0 + π0 Σ_{j≥0} (λj ⋯ λ0)/(µ_{j+1} ⋯ µ1) = π0 Σ_{i≥0} (λ_{i−1} ⋯ λ0)/(µi ⋯ µ1),

we get

    π0 = 1 / Σ_{i≥0} (λ_{i−1} ⋯ λ0)/(µi ⋯ µ1),

with the conventions

    λ_{i−1} ⋯ λ0 = Π_{l=0}^{i−1} λl = 1 when i = 0,    and    µi ⋯ µ1 = Π_{l=1}^{i} µl = 1 when i = 0,

and

    πj = ( (λ_{j−1} ⋯ λ0)/(µj ⋯ µ1) ) / Σ_{i≥0} (λ_{i−1} ⋯ λ0)/(µi ⋯ µ1),    j ∈ N.

When λi = λ, i ∈ N, and µi = µ, i ≥ 1, this gives

    πj = (λ/µ)^j / Σ_{i≥0} (λ/µ)^i = (1 − λ/µ) (λ/µ)^j,    j ∈ N,

provided λ < µ, hence in this case the limiting distribution is the geometric distribution with parameter λ/µ. Concerning the birth and death process on {0, 1, ..., N} with generator

    [λ_{i,j}]_{0≤i,j≤N} =
        [ −λ0      λ0        0        ⋯        0                  0        ]
        [  µ1     −λ1−µ1    λ1        ⋯        0                  0        ]
        [  0       µ2      −λ2−µ2     ⋱        ⋮                  ⋮        ]
        [  ⋮       ⋱        ⋱         ⋱        0                  ⋮        ]
        [  0       ⋯        0    µ_{N−1}   −λ_{N−1}−µ_{N−1}    λ_{N−1}    ]
        [  0       ⋯        0        0         µN               −µN       ],

we can apply (1.8) with λj = 0, j ≥ N, and µj = 0, j ≥ N + 1, which yields

    πj = ( (λ_{j−1} ⋯ λ0)/(µj ⋯ µ1) ) / Σ_{i=0}^{N} (λ_{i−1} ⋯ λ0)/(µi ⋯ µ1),    j ∈ {0, 1, ..., N},

and coincides with (9.16) when N = 1. When λi = λ, i ∈ N, and µi = µ, i ≥ 1, this gives

    πj = (1 − λ/µ)/(1 − (λ/µ)^{N+1}) (λ/µ)^j,    j ∈ {0, 1, ..., N},

which is a truncated geometric distribution since πj = 0 for all j > N.
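The truncated geometric expression can be checked directly from the product formula πj = (λ_{j−1}⋯λ0)/(µj⋯µ1) π0 followed by normalization. A sketch in plain Python, with arbitrary constant rates λ = 1, µ = 2 and N = 10:

```python
# Stationary distribution of the birth and death process on {0, ..., N}
# from the product formula, compared with the truncated geometric form.
lam, mu, N = 1.0, 2.0, 10

lambdas = [lam] * N          # lambda_0, ..., lambda_{N-1}
mus = [mu] * N               # mu_1, ..., mu_N

weights = [1.0]              # unnormalized pi_j / pi_0
for j in range(N):
    weights.append(weights[-1] * lambdas[j] / mus[j])
total = sum(weights)
pi = [w / total for w in weights]

# Closed form for constant rates with lam != mu
r = lam / mu
pi_closed = [(1 - r) / (1 - r ** (N + 1)) * r ** j for j in range(N + 1)]
```

Both computations agree, and the probabilities sum to 1 as required.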

9.6 The embedded chain

Consider the sequence (Tn)_{n∈N} of jump times of the continuous-time Markov process (Xt)_{t∈R+}, defined recursively by T0 = 0,

    T1 = inf{t > 0 : Xt ≠ X0},

and

    T_{n+1} = inf{t > Tn : Xt ≠ X_{Tn}},    n ∈ N.

The embedded Markov chain is the discrete-time Markov chain (Zn )n∈N defined by Z0 = X0 and Zn = XTn , n ∈ N.


The next figure shows the graph of the embedded chain of a birth and death process.

[Figure: sample path of a birth and death process on {0, 1, 2, 3, 4} with its embedded chain]

Fig. 9.1: Birth and death process with its embedded chain.

The next figure represents the discrete-time embedded chain associated to the path of Figure 9.1, in which we have Z0 = 0, Z1 = 1, Z2 = 2, Z3 = 3, Z4 = 4, Z5 = 3, Z6 = 4, Z7 = 3.

[Figure: embedded chain of the birth and death process]

Fig. 9.2: Embedded chain of a birth and death process.

For example, if λ0 > 0 and µ0 > 0, the embedded chain of the two-state continuous-time Markov chain has transition matrix

    P = [ 0   1 ]
        [ 1   0 ].    (9.17)

In case one of the states {0, 1} is absorbing the transition matrix becomes

    P = [ 1   0 ]    if λ0 = 0, µ0 > 0,
        [ 1   0 ]

    P = [ 0   1 ]    if λ0 > 0, µ0 = 0,
        [ 0   1 ]

    P = [ 1   0 ]    if λ0 = 0, µ0 = 0.
        [ 0   1 ]

More generally, for the birth and death process with infinitesimal generator

    [λ_{i,j}]_{0≤i,j≤N} =
        [ −λ0      λ0        0        ⋯        0                  0        ]
        [  µ1     −λ1−µ1    λ1        ⋯        0                  0        ]
        [  0       µ2      −λ2−µ2     ⋱        ⋮                  ⋮        ]
        [  ⋮       ⋱        ⋱         ⋱        0                  ⋮        ]
        [  0       ⋯        0    µ_{N−1}   −λ_{N−1}−µ_{N−1}    λ_{N−1}    ]
        [  0       ⋯        0        0         µN               −µN       ],

the probability that a given transition occurs from i to i + 1 is

    P(X_{t+h} = i+1 | Xt = i and X_{t+h} − Xt ≠ 0)
        = P(X_{t+h} = i+1 and Xt = i) / P(X_{t+h} − Xt ≠ 0 and Xt = i)
        = P(X_{t+h} = i+1 and Xt = i) / ( P(X_{t+h} − Xt ≠ 0 | Xt = i) P(Xt = i) )
        = P(X_{t+h} = i+1 | Xt = i) / P(X_{t+h} − Xt ≠ 0 | Xt = i)
        = P(X_{t+h} − Xt = 1 | Xt = i) / P(X_{t+h} − Xt ≠ 0 | Xt = i)
        ≃ hλi / (hλi + hµi)
        = λi/(λi + µi),    h → 0,  i ∈ N,

hence

    P_{i,i+1} = lim_{h→0} P(X_{t+h} = i+1 | Xt = i and X_{t+h} − Xt ≠ 0) = λi/(λi + µi),    i ∈ N.    (9.18)

This result can also be obtained from (1.3), which states that

    P(τ_{i,i+1} < τ_{i,i−1}) = λi/(λi + µi).    (9.19)

Similarly the probability that a given transition occurs from i to i − 1 is

    P(X_{t+h} = i−1 | Xt = i and X_{t+h} − Xt ≠ 0) ≃ µi/(λi + µi),    h → 0,  i ∈ N,

which can also be obtained from (1.3), which states that

    P(τ_{i,i−1} < τ_{i,i+1}) = µi/(λi + µi).

Hence we have

    P_{i,i−1} = lim_{h→0} P(X_{t+h} = i−1 | Xt = i and X_{t+h} − Xt ≠ 0) = µi/(λi + µi),    i ∈ N,

and the embedded chain (Zn)_{n∈N} has the transition matrix

    P = [P_{i,j}] =
        [ 0               1             0               ⋯              0                            0               ]
        [ µ1/(λ1+µ1)      0             λ1/(λ1+µ1)      ⋯              0                            0               ]
        [ 0               µ2/(λ2+µ2)    0               ⋱              ⋮                            ⋮               ]
        [ ⋮               ⋱             ⋱               ⋱              0                            ⋮               ]
        [ 0               ⋯             0    µ_{N−1}/(λ_{N−1}+µ_{N−1})    0    λ_{N−1}/(λ_{N−1}+µ_{N−1})           ]
        [ 0               ⋯             0               0              1                            0               ],

provided λ0 > 0 and µN > 0. When N = 1 this coincides with (9.17). Note that if λ0 = µN = 0, states 0 and N are absorbing since the birth rate starting from 0 is 0 and the death rate starting from N is also 0, hence the transition matrix becomes

    P = [P_{i,j}] =
        [ 1               0             0               ⋯              0                            0               ]
        [ µ1/(λ1+µ1)      0             λ1/(λ1+µ1)      ⋯              0                            0               ]
        [ 0               µ2/(λ2+µ2)    0               ⋱              ⋮                            ⋮               ]
        [ ⋮               ⋱             ⋱               ⋱              0                            ⋮               ]
        [ 0               ⋯             0    µ_{N−1}/(λ_{N−1}+µ_{N−1})    0    λ_{N−1}/(λ_{N−1}+µ_{N−1})           ]
        [ 0               ⋯             0               0              0                            1               ],

which is the transition matrix of a gambling type process on {0, 1, ..., N}. When N = 1 this yields P = Id, which is consistent with the fact that a two-state Markov chain with two absorbing states is constant.

For example, for a continuous-time chain with infinitesimal generator

    Q = [λ_{i,j}]_{0≤i,j≤4} = [ −10    10     0     0     0 ]
                              [  10   −20    10     0     0 ]
                              [   0    10   −30    20     0 ]
                              [   0     0    10   −40    30 ]
                              [   0     0     0    20   −20 ],

the transition matrix of the embedded chain is

    P = [P_{i,j}]_{0≤i,j≤4} = [ 0      1      0      0      0   ]
                              [ 1/2    0     1/2     0      0   ]
                              [ 0     1/3     0     2/3     0   ]
                              [ 0      0     1/4     0     3/4  ]
                              [ 0      0      0      1      0   ].

In case the states 0 and 4 are absorbing, i.e.

    Q = [λ_{i,j}]_{0≤i,j≤4} = [   0     0     0     0     0 ]
                              [  10   −20    10     0     0 ]
                              [   0    10   −30    20     0 ]
                              [   0     0    10   −40    30 ]
                              [   0     0     0     0     0 ],

the transition matrix of the embedded chain becomes

    P = [P_{i,j}]_{0≤i,j≤4} = [ 1      0      0      0      0   ]
                              [ 1/2    0     1/2     0      0   ]
                              [ 0     1/3     0     2/3     0   ]
                              [ 0      0     1/4     0     3/4  ]
                              [ 0      0      0      0      1   ].

9.7 Absorption probabilities

The absorption probabilities of the continuous-time process (Xt)_{t∈R+} can be computed based on the behaviour of the embedded chain (Zn)_{n∈N}. In fact the continuous waiting time between two jumps has no influence on the absorption probabilities. Here we consider only the simple example of birth and death processes, which can be easily generalized to more complex situations. Assume that state 0 is absorbing, i.e. λ0 = 0, and let

    T0^A = inf{t ∈ R+ : Xt = 0}

denote the absorption time in state 0. Let now

    g0(i) = P(T0^A < ∞ | X0 = i),    0 ≤ i ≤ N,

denote the probability of absorption in 0 starting from state i ∈ N. We have the boundary condition g0(0) = 1, and by first step analysis on the chain (Zn)_{n≥1} we get

    g0(i) = λi/(λi+µi) g0(i+1) + µi/(λi+µi) g0(i−1),    1 ≤ i ≤ N−1.

When the rates λi = λ and µi = µ are independent of i ∈ {1, ..., N−1}, this equation becomes

    g0(i) = p g0(i+1) + q g0(i−1),    1 ≤ i ≤ N−1,

which is precisely Equation (2.3) for the gambling process with

    p = λ/(λ+µ)    and    q = µ/(λ+µ).

When λ0 = µN = 0 we have the boundary conditions g0(0) = 1 and g0(N) = 0 since the state N is also absorbing, and the solution becomes

    g0(k) = ( (µ/λ)^k − (µ/λ)^N ) / ( 1 − (µ/λ)^N ),    0 ≤ k ≤ N,

according to (2.7).
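The closed form for g0(k) can be double-checked by solving the first step analysis equations directly as a linear system, e.g. by a simple fixed-point (Gauss–Seidel) iteration. A sketch in plain Python, with arbitrary values λ = 1, µ = 2, N = 5:

```python
# Solve g(i) = p g(i+1) + q g(i-1), 1 <= i <= N-1, with g(0) = 1, g(N) = 0,
# by fixed-point iteration, and compare with the gambling-process solution.
lam, mu, N = 1.0, 2.0, 5
p, q = lam / (lam + mu), mu / (lam + mu)

g = [1.0] + [0.0] * N            # boundary values g[0] = 1, g[N] = 0
for _ in range(20000):           # sweeps; the map is a contraction
    for i in range(1, N):
        g[i] = p * g[i + 1] + q * g[i - 1]

r = mu / lam
g_closed = [(r ** k - r ** N) / (1 - r ** N) for k in range(N + 1)]
```

The iterates converge to the closed form, and g0 decreases in the starting state k as expected: absorption at 0 is less likely from higher states.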

9.8 Mean absorption times

We may still use the embedded chain (Zn)_{n∈N} to compute the mean absorption time, using the mean interjump times. Here, unlike in the case of absorption probabilities, the random time spent by the continuous-time process (Xt)_{t∈R+} should be taken into account in the calculation. We consider a birth and death process on {0, 1, ..., N} with absorbing states 0 and N, and we take N = 4. For this we have to associate a weighted graph to the Markov chain (Zn)_{n∈N} that includes the mean time spent in state i before the next transition, which is

    IE[τi] = 1/(λi + µi),    1 ≤ i ≤ N−1.

Recall that the mean time spent at state i, given that the next transition is from i to i + 1, is equal to

    IE[τ_{i,i+1}] = 1/λi = ( λi/(λi+µi) )^{−1} IE[τi],    1 ≤ i ≤ N−1,

and the mean time spent at state i, given that the next transition is from i to i − 1, is equal to

    IE[τ_{i,i−1}] = 1/µi = ( µi/(λi+µi) )^{−1} IE[τi],    1 ≤ i ≤ N−1.

In the next graph, drawn for N = 4, each link i → i ± 1 of the embedded chain carries the transition probability λi/(λi+µi) or µi/(λi+µi), and the underlined weights are the mean times 1/(λi+µi) spent in the states 1, 2, 3:

[Figure: weighted graph on the states 0, 1, 2, 3, 4 of the embedded chain]

with λ0 = µ4 = 0. Let now

    h0(i) = IE[ T^A_{{0,N}} | X0 = i ]

denote the mean absorption time in {0, N} starting from state i ∈ {0, 1, ..., N}. We have the boundary conditions h0(0) = h0(N) = 0, and by first step analysis on the chain (Zn)_{n≥1} we get

    h0(i) = µi/(λi+µi) (IE[τi] + h0(i−1)) + λi/(λi+µi) (IE[τi] + h0(i+1))
          = µi/(λi+µi) ( 1/(λi+µi) + h0(i−1) ) + λi/(λi+µi) ( 1/(λi+µi) + h0(i+1) ),

1 ≤ i ≤ N−1, i.e.

    h0(i) = 1/(λi+µi) + µi/(λi+µi) h0(i−1) + λi/(λi+µi) h0(i+1),    1 ≤ i ≤ N−1.    (9.20)

When the rates λi = λ and µi = µ are independent of i ∈ {1, ..., N−1}, this equation becomes

    h0(i) = 1/(λ+µ) + µ/(λ+µ) h0(i−1) + λ/(λ+µ) h0(i+1),    1 ≤ i ≤ N−1,

which is a modification of Equation (2.22). Rewriting the equation as

    h0(i) = 1/(λ+µ) + q h0(i−1) + p h0(i+1),    1 ≤ i ≤ N−1,

or

    (λ+µ) h0(i) = 1 + q (λ+µ) h0(i−1) + p (λ+µ) h0(i+1),    1 ≤ i ≤ N−1,

with p = λ/(λ+µ) and q = µ/(λ+µ), we find from (2.26) that, with r = q/p = µ/λ,

    (λ+µ) h0(k) = 1/(q−p) ( k − N (1 − r^k)/(1 − r^N) ),

i.e.

    h0(k) = 1/(µ−λ) ( k − N (1 − (µ/λ)^k)/(1 − (µ/λ)^N) ),    0 ≤ k ≤ N,    (9.21)

when λ ≠ µ. In the limit λ → µ we find by (2.31) that

    h0(k) = k(N−k)/(2µ),    0 ≤ k ≤ N.

This solution is similar to that of the gambling problem with draw of Exercise 8.5, as we multiply the solution of the gambling problem in the fair case by the average time 1/(2µ) spent in any state in {1, ..., N−1}. The mean absorption time for the embedded chain (Zn)_{n∈N} remains equal to

    (λ+µ)/(µ−λ) ( k − N (1 − (µ/λ)^k)/(1 − (µ/λ)^N) ),    0 ≤ k ≤ N,    (9.22)

in the non-symmetric case, and k(N−k), 0 ≤ k ≤ N, in the symmetric case, and by multiplying (9.22) by the average time 1/(λ+µ) spent in each state we recover (9.21).
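The closed form (9.21) can likewise be checked by iterating the first step analysis equation for h0. A sketch in plain Python, with the same arbitrary values λ = 1, µ = 2, N = 5 as before:

```python
# Solve h(i) = 1/(lam+mu) + q h(i-1) + p h(i+1), h(0) = h(N) = 0,
# by fixed-point iteration, and compare with the closed form (9.21).
lam, mu, N = 1.0, 2.0, 5
p, q = lam / (lam + mu), mu / (lam + mu)

h = [0.0] * (N + 1)              # boundary values h[0] = h[N] = 0
for _ in range(20000):           # sweeps; the map is a contraction
    for i in range(1, N):
        h[i] = 1.0 / (lam + mu) + q * h[i - 1] + p * h[i + 1]

r = mu / lam
h_closed = [(k - N * (1 - r ** k) / (1 - r ** N)) / (mu - lam)
            for k in range(N + 1)]
```

The iterates converge to the values given by (9.21), which are zero at both absorbing states and positive in between.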

In the next table we gather some frequent questions and the associated solution method.

    How to compute                                         Method
    the infinitesimal generator Q = (λ_{i,j})_{i,j∈S}      Q = P′(0) = dP(t)/dt |_{t=0}.
    the semigroup (P(t))_{t∈R+}                            P(t) = e^{tQ}, t ∈ R+;  P(h) ≃ Id + hQ, h → 0.
    the stationary distribution                            solve^5 πQ = 0 for π.
    the law of the time τ_{i,j} spent in i → j             exponential distribution with parameter λ_{i,j}.
    the law of the time τi spent at state i                exponential distribution with parameter Σ_{l≠i} λ_{i,l}.
    the limiting distribution                              compute lim_{t→∞} P(t), e.g.
                                                           lim_{t→∞} exp( t [ −α α ; β −β ] ) = [ β/(α+β)  α/(α+β) ; β/(α+β)  α/(α+β) ].
    the hitting probabilities                              solve^6 g = Pg for the embedded chain.
    the mean hitting times                                 use the embedded chain with weighted links.

^5 Remember that the values of π(k) have to add up to 1.
^6 Be sure to write only the relevant lines of the system under the appropriate boundary conditions.



9.9 Exercises

Exercise 9.1. Suppose that customers arrive at a facility according to a Poisson process having rate λ = 3. Let Xt be the number of customers that have arrived up to time t and let Wn be the waiting time for the nth customer, n = 1, 2, . . .. Determine the following (conditional) probabilities and (conditional) expectations.

1. P(X3 = 5 | X1 = 1).
2. IE[X1 X5 (X3 − X2)].
3. IE[X2 | W2 > 1].

Exercise 9.2. (Exercise VI.1.1 in [4]). A pure birth process starting from X0 = 0 has birth parameters λ0 = 1, λ1 = 3, λ2 = 2, λ3 = 5. Determine P0,n (t) for n = 0, 1, 2, 3.

Exercise 9.3. (Problem VI.1.5 in [4]). Let Wk be the time to the kth birth in a pure birth process starting from X0 = 0. Establish the equivalence P(W1 > t, W2 > t + s) = P0,0 (t)(P0,0 (s) + P0,1 (s)). Determine the joint density for W1 and W2 , and then the joint density of S0 = W1 and S1 = W2 − W1 .

Exercise 9.4. (Problem VI.4.3 in [4]). A factory has five machines and a single repairman. The operating time until failure of a machine is an exponentially distributed random variable with parameter (rate) 0.20 per hour. The repair time of a failed machine is an exponentially distributed random variable with parameter (rate) 0.50 per hour. Up to five machines may be operating at any given time, their failures being independent of one another, but at most one machine may be in repair at any time. In the long run, what fraction of time is the repairman idle ?


Exercise 9.5. (Exercise VI.6.2 in [4]). Let X1(t) and X2(t) be independent two-state Markov chains having the same infinitesimal matrix

    [ −λ    λ ]
    [  µ   −µ ].

Argue that Z(t) := X1(t) + X2(t) is a Markov chain on the state space S = {0, 1, 2} and determine the transition semigroup P(t) of Z(t).

Exercise 9.6. (Problem VI.3.1 in [4]). Let (ξn)_{n≥0} be a two-state Markov chain on {0, 1} with transition matrix

    [ 0      1 ]
    [ 1−α    α ],    (9.23)

let (Nt)_{t∈R+} be a Poisson process with parameter λ > 0, and let the two-state birth and death process Xt be defined by

    Xt = ξ_{Nt},    t ∈ R+.

1. Compute the mean return time E[τ0 | X0 = 0] to 0 of Xt.
2. Compute the mean return time E[τ1 | X0 = 1] to 1 of Xt.
3. Show that Xt is a two-state birth and death process and determine the matrix generator Q of the process Xt in terms of α and λ.

Exercise 9.7. (Problem V.4.7 in [4]). Let W1, W2, ... be the event times in a Poisson process Xt of rate λ, and let f : R → R be an integrable function. Verify that

    IE[ Σ_{k=1}^{Xt} f(Wk) ] = λ ∫_0^t f(s) ds.    (9.24)

Exercise 9.8. Two types of consultations occur at a database according to two independent Poisson processes: "read" consultations arrive at rate λR and "write" consultations arrive at rate λW.
1. What is the probability that the time interval between two consecutive "read" consultations is larger than t > 0 ?
2. What is the probability that during the time interval [0, t], at most three "write" consultations arrive ?
3. What is the probability that the next arriving consultation is a "read" consultation ?

4. Determine the distribution of the number of arrived "read" consultations during [0, t], given that in this interval a total number of n consultations occurred.

Exercise 9.9. Consider two machines, operating simultaneously and independently, where both machines have an exponentially distributed time to failure with mean 1/µ. There is a single repair facility, and the repair times are exponentially distributed with rate λ.
1. In the long run, what is the probability that no machines are operating when λ = µ = 1 ?
2. We now assume that at most one machine can operate at any time. How does this modify your answer to question (1) ?

Exercise 9.10. ([1]) Let (N1_t)_{t∈R+} and (N2_t)_{t∈R+} be two independent Poisson processes with intensities λ1 > 0 and λ2 > 0.
1. Show that (N1_t + N2_t)_{t∈R+} is a Poisson process and find its intensity.
2. Consider the process Mt = N1_t − N2_t, t ∈ R+. Show that (Mt)_{t∈R+} has stationary independent increments.
3. Find the distribution of Mt − Ms, 0 < s < t.
4. Let c > 0. Compute lim_{t→∞} P(|Mt| ≤ c).
5. Suppose that N1_t denotes the number of clients arriving at a taxi station during the time interval [0, t], and that N2_t denotes the number of taxis arriving at that same station during the same time interval [0, t]. How do you interpret the value of Mt depending on its sign ? How do you interpret the result of Question 4 ?

Exercise 9.11. ([3]) We consider a birth and death process (Xt)_{t∈R+} on {0, 1, ..., N} with transition semigroup (P(t))_{t∈R+} and birth and death rates

    λn = (N−n)λ,    µn = nµ,    n = 0, 1, ..., N.

This process is used for the modeling of membrane channels in neuroscience.


1. Write down the infinitesimal generator Q of (Xt)_{t∈R+}.
2. From the forward Kolmogorov equation P′(t) = P(t)Q, show that for all n = 0, 1, ..., N we have

    P′_{n,0}(t) = −λ0 P_{n,0}(t) + µ1 P_{n,1}(t),
    P′_{n,k}(t) = λ_{k−1} P_{n,k−1}(t) − (λk + µk) P_{n,k}(t) + µ_{k+1} P_{n,k+1}(t),    1 ≤ k ≤ N−1,
    P′_{n,N}(t) = λ_{N−1} P_{n,N−1}(t) − µN P_{n,N}(t).

3. Let

    Gk(s, t) = IE[s^{Xt} | X0 = k] = Σ_{n=0}^{N} s^n P(Xt = n | X0 = k) = Σ_{n=0}^{N} s^n P_{k,n}(t)

denote the generating function of Xt given that X0 = k ∈ {0, 1, ..., N}. From the result of Question 2, show that Gk(s, t) satisfies the partial differential equation

    ∂Gk/∂t (s, t) = λN(s−1) Gk(s, t) + (µ + (λ−µ)s − λs^2) ∂Gk/∂s (s, t),    (9.25)

with Gk(s, 0) = s^k, k = 0, 1, ..., N.
4. Verify that the solution of (9.25) is given by

    Gk(s, t) = (λ+µ)^{−N} ( µ + λs + µ(s−1)e^{−(λ+µ)t} )^k ( µ + λs − λ(s−1)e^{−(λ+µ)t} )^{N−k},

k = 0, 1, ..., N.
5. Show that

    IE[Xt | X0 = k] = (λ+µ)^{−N} [ k (λ + µe^{−(λ+µ)t})(µ+λ)^{k−1}(µ+λ)^{N−k} + (N−k)(µ+λ)^k (λ − λe^{−(λ+µ)t})(µ+λ)^{N−k−1} ].

6. Compute lim_{t→∞} IE[Xt | X0 = k] and show that it does not depend on k ∈ {0, 1, ..., N}.

Exercise 9.12. (Exercise VI.4.2 in [4]). This exercise continues Exercise 9.11. Let Xt be a birth and death process where the possible states are 0, 1, ..., N,


and the birth and death parameters are, respectively, λn = α(N−n) and µn = βn. Determine the stationary distribution.

Exercise 9.13. (Problem IV.1.6 in [4]). A fatigue model for the growth of a crack in a discrete time lattice proposes that the size of the crack evolves as a pure birth process with parameters

    λk = (1+k)^ρ,    k ≥ 1.

The theory behind the model postulates that the growth rate of the crack is proportional to some power of the stress concentration at its ends, and that this stress concentration is proportional to some power of 1 + k, where k is the crack length. Use the sojourn time description to deduce that the mean time for the crack to grow to infinite length is finite when ρ > 1, and that therefore the failure time of the system is a well-defined and finite-valued random variable.^8

Exercise 9.14. Cars pass a certain street location according to a Poisson process with rate λ > 0. A woman who wants to cross the street at that location waits until she can see that no cars will come by in the next T time units.
1. Find the probability that her waiting time is 0.
2. Find her expected waiting time.

Exercise 9.15. (Problem VI.4.7 in [4]). A system consists of two machines and two repairmen. The amount of time that an operating machine works before breaking down is exponentially distributed with mean 5. The amount of time it takes a single repairman to fix a machine is exponentially distributed with mean 4. Only one repairman can work on a failed machine at any given time.
1. Let Xt be the number of machines in operating condition at time t ∈ R+. Show that Xt is a continuous time Markov process and complete the missing entries in the matrix

    Q = [  ·     0.5     0   ]
        [ 0.2     ·      ·   ]
        [  0      ·    −0.4  ]

of its generator, where "·" denotes an entry to be filled in.
2. Calculate the long run probability distribution (π0, π1, π2) for Xt.

^8 Recall also that a finite-valued random variable may have an infinite mean.


3. Compute the average number of operating machines in the long run.
4. If an operating machine produces 100 units of output per hour, what is the long run output per hour of the system ?



Chapter 10

Spatial Poisson processes

10.1 Spatial Poisson processes

In this chapter we present the construction of spatial Poisson processes on a space of configurations of X = R^d, d ≥ 1. Let

    Ω^X = { ω = (xi)_{i=1}^{N} ⊂ X, N ∈ N ∪ {∞} }

denote the space of configurations on X = R^d.

[Figure: a configuration ω of points in X with a window A]

We let

    ω(A) = #{x ∈ ω : x ∈ A} = Σ_{x∈ω} 1_A(x).

The Poisson probability measure Pσ with intensity ρ(x)dx on X satisfies

    Pσ(ω ∈ Ω^X : ω(A) = n) = e^{−σ(A)} (σ(A))^n / n!,    n ∈ N,

with

    σ(A) = ∫_A ρ(x) dx = ∫_{R^d} 1_A(x) ρ(x) dx.

In addition, if A1, ..., An are disjoint subsets of X with σ(Ak) < ∞, k = 1, ..., n, then the N^n-valued vector

    ω ⟼ (ω(A1), ..., ω(An))

has independent components with Poisson distributions of respective parameters σ(A1), ..., σ(An). When σ(X) < ∞, the conditional distribution of ω = {x1, ..., xn} given that ω(X) = n is given by the formula

    Pσ({x1, ..., xn} ⊂ A | ω(X) = n) = ( σ(A)/σ(X) )^n.

Usually the intensity function ρ(x) will be constant, i.e. ρ(x) = λ > 0, x ∈ X, where λ > 0 is called the intensity parameter, and we have

    σ(A) = λ ∫_A dx.

In this case {x1, ..., xn} are uniformly distributed on X^n given that {ω(X) = n}.

10.2 Characteristic functions

In the next proposition we compute the characteristic function of the Poisson stochastic integral

    ∫_X f(x) ω(dx) = Σ_{x∈ω} f(x),

for f an integrable function on X.

Proposition 3. Let f be an integrable function on X. We have

    IE_{Pσ}[ exp( i ∫_X f(x) ω(dx) ) ] = exp( ∫_X (e^{if(x)} − 1) σ(dx) ).    (10.1)

Proof. We assume that σ(X) < ∞. We have

    IE_{Pσ}[ exp( i ∫_X f(x) ω(dx) ) ]
        = e^{−σ(X)} Σ_{n≥0} (1/n!) ∫_X ⋯ ∫_X e^{i(f(x1)+⋯+f(xn))} σ(dx1) ⋯ σ(dxn)
        = e^{−σ(X)} Σ_{n≥0} (1/n!) ( ∫_X e^{if(x)} σ(dx) )^n
        = exp( ∫_X (e^{if(x)} − 1) σ(dx) ).    □

We have

    IE[ ∫_X f(x) ω(dx) ] = −i d/dε IE_{Pσ}[ exp( iε ∫_X f(x) ω(dx) ) ] |_{ε=0}
                         = −i d/dε exp( ∫_X (e^{iεf(x)} − 1) σ(dx) ) |_{ε=0}
                         = ∫_X f(x) σ(dx),

for f an integrable function on X, and similarly,

    IE[ ( ∫_X f(x) (ω(dx) − σ(dx)) )^2 ] = ∫_X |f(x)|^2 σ(dx),    f ∈ L^2(X, σ).    (10.2)
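The first moment identity IE[∫ f dω] = ∫ f dσ can be illustrated by Monte Carlo on X = [0,1]^2 with flat intensity ρ(x) = λ. This is a sketch in plain Python; the test function f(x, y) = x + y and the value λ = 10 are arbitrary:

```python
import math
import random

# Monte Carlo check of IE[ sum_{x in omega} f(x) ] = integral of f d(sigma)
# on X = [0,1]^2 with flat intensity lam, so sigma(X) = lam and the
# expected value is lam * integral of f over the unit square = lam * 1.
random.seed(1)
lam = 10.0
f = lambda x, y: x + y

def poisson(rate):
    # Knuth's multiplication method for a Poisson sample
    limit, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

trials, acc = 2000, 0.0
for _ in range(trials):
    n = poisson(lam)                     # number of points in the square
    acc += sum(f(random.random(), random.random()) for _ in range(n))

estimate = acc / trials                  # close to lam * 1 = 10
```

The empirical average of Σ_{x∈ω} f(x) is close to λ ∫_{[0,1]^2} (x+y) dx dy = 10.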

10.3 Transformations of Poisson measures

Consider a mapping τ : (X, σ) → (Y, µ), and let

    µ(A) = ∫_X 1_A(τ(x)) σ(dx) = ∫_X 1_{τ^{−1}(A)}(x) σ(dx) = σ(τ^{−1}(A)).

Let τ* : Ω^X → Ω^Y be defined by

    τ*(ω) = {τ(x) : x ∈ ω},    ω ∈ Ω^X.

[Figure: mapping of a configuration ω on X to τ*(ω) on Y, with A and τ^{−1}(A)]

Then ω ↦ τ*(ω) has the Poisson distribution Pµ with intensity µ. Indeed we have

    Pσ(τ*ω(A) = n) = Pσ(ω(τ^{−1}(A)) = n)
                   = e^{−σ(τ^{−1}(A))} (σ(τ^{−1}(A)))^n / n!
                   = e^{−µ(A)} (µ(A))^n / n!.

For example, in the case of a flat intensity ρ(x) = λ on R and τ(x) = x/2, the intensity is doubled, since

    Pσ(τ*ω([0, t]) = n) = Pσ(ω(τ^{−1}([0, t])) = n)
                        = e^{−σ(τ^{−1}([0,t]))} (σ(τ^{−1}([0,t])))^n / n!
                        = e^{−2λt} (2λt)^n / n!.

[Figure: a Poisson configuration on R and its image under τ(x) = x/2]


We check that for all families A1, ..., An of disjoint subsets of X and k1, ..., kn ∈ N, we have

    Pσ({ω ∈ Ω^X : τ*ω(A1) = k1, ..., τ*ω(An) = kn})
        = Π_{i=1}^{n} Pσ({τ*ω(Ai) = ki})
        = Π_{i=1}^{n} Pσ({ω(τ^{−1}(Ai)) = ki})
        = exp( −Σ_{i=1}^{n} σ(τ^{−1}(Ai)) ) Π_{i=1}^{n} (σ(τ^{−1}(Ai)))^{ki} / ki!
        = exp( −Σ_{i=1}^{n} µ(Ai) ) Π_{i=1}^{n} (µ(Ai))^{ki} / ki!
        = Π_{i=1}^{n} Pµ({ω(Ai) = ki})
        = Pµ({ω(A1) = k1, ..., ω(An) = kn}).
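The doubling example above can be checked by simulation: sample a flat rate-λ Poisson configuration on [0, 2], map it through τ(x) = x/2, and count the points of τ*ω in [0, 1]. A sketch in plain Python with λ = 5, so that the image count should be Poisson with mean 2λ = 10:

```python
import random

# tau(x) = x/2 maps a rate-lam Poisson configuration on [0, 2]
# to a rate-2*lam configuration on [0, 1].
random.seed(2)
lam, trials = 5.0, 4000
counts = []
for _ in range(trials):
    omega, t = [], 0.0
    while True:
        t += random.expovariate(lam)     # jump times of a rate-lam process
        if t > 2.0:
            break
        omega.append(t)
    image = [x / 2.0 for x in omega]     # the image configuration tau_* omega
    counts.append(sum(1 for y in image if y <= 1.0))

mean_count = sum(counts) / trials        # close to 2 * lam = 10
```

The empirical mean number of image points in [0, 1] approaches 2λ, matching the computation e^{−2λt}(2λt)^n/n! with t = 1.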

10.4 Exercises

Exercise 10.1. Consider a spatial Poisson process on R^2 with intensity λ = 0.5 per square meter. What is the probability that there are 10 events within a circle of radius 3 meters ?

Exercise 10.2. (Exercise V.5.1 in [4]). Bacteria are distributed throughout a volume of liquid according to a Poisson process of intensity θ = 0.6 organisms per mm^3. A measuring device counts the number of bacteria in a 10 mm^3 volume of the liquid. What is the probability that more than two bacteria are in this measured volume ?


Exercise 10.3. (Exercise V.5.3 in [4]). Defects (air bubbles, contaminants, chips) occur over the surface of a varnished tabletop according to a Poisson process at a mean rate of one defect per top. If two inspectors each check separate halves of a given table, what is the probability that both inspectors find defects ?

Exercise 10.4. (Problem V.2.4 in [4]). Let λ > 0 and suppose that N points are independently and uniformly distributed over the interval [0, N]. Determine the probability distribution for the number of points in the interval [0, λ] as N → ∞.

Exercise 10.5. Suppose that X(A) is a spatial Poisson process of discrete items scattered on the plane R^2 with intensity λ = 0.5 per square meter. We let

    D((x, y), r) = {(u, v) ∈ R^2 : (x−u)^2 + (y−v)^2 ≤ r^2}

denote the disc with radius r centered at (x, y) in R^2. Note that no evaluation of numerical expressions is required in this exercise.
1. What is the probability that 10 items are found within the disk D((0, 0), 3) with radius 3 meters centered at the origin ?
2. What is the probability that 5 items are found within the disk D((0, 0), 3) and 3 items are found within the disk D((x, y), 3) with (x, y) = (7, 0) ?
3. What is the probability that 8 items are found anywhere within D((0, 0), 3) ∪ D((x, y), 3) with (x, y) = (7, 0) ?



Chapter 11

Reliability and Renewal Processes

11.1 Survival probabilities

Let τ denote the lifetime of an entity, and let P(τ ≥ t) denote its probability of surviving at least t years, t > 0. The probability of surviving up to a (deterministic) time T, given that the entity has already survived up to time t, T ≥ t, is

P(τ > T | τ > t) = P(τ > T and τ > t)/P(τ > t) = P(τ > T)/P(τ > t).

Let now

f(t) = lim_{h→0} P(τ < t + h | τ > t)/h,  t ∈ R+,

denote the failure rate function. Letting A = {τ < t + dt} and B = {τ > t} we note that A^c ⊂ B, hence A ∩ B = B \ A^c, and

f(t) = P(τ < t + dt | τ > t)/dt
= (1/dt) P(τ < t + dt and τ > t)/P(τ > t)
= (1/dt) (P(τ > t) − P(τ > t + dt))/P(τ > t)
= −(d/dt) log P(τ > t)
= −(1/P(τ > t)) (d/dt) P(τ > t).   (11.1)



Defining the reliability function as

R(t) := P(τ > t),  t ∈ R+,

we find R′(t) = −f(t)R(t), with R(0) = P(τ > 0) = 1, which has for solution

R(t) = P(τ > t) = R(0) exp(−∫_0^t f(u)du) = exp(−∫_0^t f(u)du).

We also have

P(τ > T | τ > t) = R(T)/R(t) = exp(−∫_t^T f(u)du).
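As a quick numerical sketch (not part of the original notes), the relation R(t) = exp(−∫_0^t f(u)du) can be checked by integrating an assumed failure rate; the choice f(u) = 2u below is hypothetical, and gives R(t) = exp(−t²):

```python
import math

def reliability(f, t, n=10000):
    # R(t) = exp(-int_0^t f(u) du), trapezoidal approximation of the integral.
    h = t / n
    integral = sum(0.5 * (f(i * h) + f((i + 1) * h)) * h for i in range(n))
    return math.exp(-integral)

# Hypothetical failure rate f(u) = 2u, for which R(t) = exp(-t^2).
R = reliability(lambda u: 2 * u, 1.0)
assert abs(R - math.exp(-1.0)) < 1e-9
```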

11.2 Poisson process with time-dependent intensity

Recall that the random variable τ has the exponential distribution with parameter λ > 0 if P(τ > t) = e^{−λt}, t ≥ 0. Given (τn)n≥0 a sequence of i.i.d. exponentially distributed random variables, letting Tn = τ0 + · · · + τn−1, n ≥ 1, and

Nt = Σ_{n≥1} 1_{[Tn,∞)}(t),  t ∈ R+,

defines a standard Poisson process with intensity λ > 0, and we have

P(Nt − Ns = k) = e^{−λ(t−s)} (λ(t − s))^k/k!,  k ≥ 0.

The intensity of the Poisson process can in fact be made time-dependent. For example, under the time change

Xt = N_{∫_0^t λ(s)ds},

where (λ(u))u∈R+ is a deterministic function of time, we have


P(Xt − Xs = k) = e^{−∫_s^t λ(u)du} (∫_s^t λ(u)du)^k/k!,  k ≥ 0.

In this case we have

P(Xt+h − Xt = 0) ≃ e^{−λ(t)h} ≃ 1 − λ(t)h,  h → 0,   (11.2)

and

P(Xt+h − Xt = 1) ≃ 1 − e^{−λ(t)h} ≃ λ(t)h,  h → 0.   (11.3)
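A time-changed (inhomogeneous) Poisson process can be simulated by thinning a homogeneous one; the sketch below is an illustration with an assumed intensity λ(t) = 1 + t bounded by λmax = 3 on [0, 2], and checks that the mean number of points matches ∫_0^T λ(u)du = 4:

```python
import random

def sample_inhomogeneous_poisson(lam, lam_max, T, rng):
    # Thinning: simulate the jump times of a Poisson process with
    # deterministic intensity lam(t) <= lam_max on [0, T].
    times, t = [], 0.0
    while True:
        t += rng.expovariate(lam_max)          # candidate jump time
        if t > T:
            return times
        if rng.random() <= lam(t) / lam_max:   # keep with prob. lam(t)/lam_max
            times.append(t)

rng = random.Random(0)
lam, T = (lambda t: 1.0 + t), 2.0   # hypothetical intensity; E[X_T] = 4
mean = sum(len(sample_inhomogeneous_poisson(lam, 3.0, T, rng))
           for _ in range(20000)) / 20000
assert abs(mean - 4.0) < 0.1
```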

Letting τ0 denote the first jump time of (Xt)t∈R+, we have

P(τ0 > t + h | τ0 > t) = e^{−λ(t)h} ≃ 1 − λ(t)h,  h → 0,

and

P(τ0 < t + h | τ0 > t) = 1 − e^{−λ(t)h} ≃ λ(t)h,  h → 0,

which coincide respectively with P(Xt+h − Xt = 0) and P(Xt+h − Xt = 1) in (11.2) and (11.3) above. We also have

R(t) = P(τ0 > t) = P(Xt = 0) = e^{−∫_0^t λ(u)du},  t ≥ 0.   (11.4)

The above formulas can be recovered informally as

P(τ0 > T) = ∏_{0<t<T} P(τ0 > t + dt | τ0 > t) = ∏_{0<t<T} exp(−λ(t)dt),

i.e. in the limit we recover

P(τ0 > t) = e^{−∫_0^t λ(s)ds},

and λ(s) is called the failure rate function.

Cox processes

The intensity process λ(s) can also be made random in the case of Cox processes.


For example, assume that (λu)u∈R+ is a two-state Markov chain on {0, λ}, with transitions

P(λt+h = λ | λt = 0) = αh,  h → 0,

and

P(λt+h = 0 | λt = λ) = βh,  h → 0.

In this case the law of Nt can be explicitly computed, cf. Ch. VI-7 in [4].

Renewal processes

A renewal process is a counting process

Nt = Σ_{n≥1} 1_{[Tn,∞)}(t),  t ∈ R+,

in which τk = Tk+1 − Tk, k ∈ N, is a sequence of independent identically distributed random variables. In particular, Poisson processes are renewal processes.

11.3 Mean time to failure

The mean time to failure is given, from (11.1), by

IE[τ] = ∫_0^∞ t (d/dt) P(τ < t) dt
= −∫_0^∞ t (d/dt) P(τ > t) dt
= −∫_0^∞ t R′(t) dt
= ∫_0^∞ R(t) dt,

provided lim_{t→∞} t R(t) = 0. For example when τ has the survival probability (11.4) we get

IE[τ] = ∫_0^∞ R(t) dt = ∫_0^∞ e^{−∫_0^t λ(u)du} dt.

In case the function λ(t) = λ > 0 is constant we recover


IE[τ] = ∫_0^∞ e^{−λt} dt = 1/λ.
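A small numerical sketch (not part of the original notes, with an assumed constant rate λ = 2) confirming IE[τ] = ∫_0^∞ R(t)dt = 1/λ by midpoint-rule integration:

```python
import math

def mttf(lam, t_max=50.0, n=100000):
    # IE[tau] = int_0^inf R(t) dt with R(t) = exp(-lam * t),
    # truncated at t_max (the tail beyond is negligible for lam = 2).
    h = t_max / n
    return sum(math.exp(-lam * (i + 0.5) * h) * h for i in range(n))

assert abs(mttf(2.0) - 0.5) < 1e-4
```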




Chapter 12

Discrete-Time Martingales

12.1 Definition and properties

As mentioned in the introduction chapter, stochastic processes can be classified into two main families:

- Markov processes,
- martingales.

Markov processes have been our main focus of attention so far, and in this chapter we turn to the notion of martingale. Our main application will be to recover in an easy way the previous results on gambling processes of Chapter 2. In particular we will give a precise mathematical meaning to the description of martingales stated in the introduction, which says that when (Xt)t∈R+ is a martingale, the best possible estimate at time t of the future value Xs at time s > t is Xt itself.

Before that, let us state that many recent applications of stochastic modeling rely on the notion of martingale. In finance for example, the notion of martingale is necessary to characterize the fairness and equilibrium of a market model. Due to those recent applications, martingales have become an indispensable tool, on a par with Markov processes.

Before dealing with martingales we need to introduce the important notion of filtration generated by a discrete-time stochastic process (Xn)n∈N.


The filtration generated by (Xn)n∈N is the family (Fn)n∈N in which Fn denotes all events possibly generated by X0, X1, . . . , Xn. Examples of such events include the event

{X0 ≤ a0, X1 ≤ a1, . . . , Xn ≤ an}

for a0, a1, . . . , an a given fixed sequence of real numbers. Note that we have the inclusion Fn ⊂ Fn+1, n ∈ N. One refers to Fn as the information generated by (Xk)k∈N up to time n.

We now review the definition of conditional expectation, cf. also Section 1.7. Given F a random variable with finite mean, the conditional expectation IE[F | Fn] refers to IE[F | Xn = kn, . . . , X0 = k0] given that X0, X1, . . . , Xn are respectively equal to k0, k1, . . . , kn ∈ S. The conditional expectation IE[F | Fn] is a random variable that depends only on the values of X0, X1, . . . , Xn, i.e. it depends on the history of the process up to time n ∈ N. It can also be described as the best possible estimate of F given the values of X0, X1, . . . , Xn.

A process (Zn)n∈N is said to be Fn-adapted if the value of Zn depends only on the information available at time n in Fn, n ∈ N.

We now turn to the definition of martingale.

Definition 2. A discrete-time stochastic process (Zn)n∈N is a martingale with respect to (Fn)n∈N if (Zn)n∈N is Fn-adapted and satisfies the property

IE[Zn+1 | Fn] = Zn,  n ∈ N.

The process (Zn)n∈N is a martingale with respect to (Fn)n∈N if, given the information Fn known up to time n, the best possible estimate of Zn+1 is simply Zn.

A particular property of martingales is that their expectation is constant over time.

Proposition 4. Let (Zn)n∈N be a martingale. Then we have

IE[Zn] = IE[Z0],  n ∈ N.


Proof. From the tower property (1.9) we have:

IE[Zn+1] = IE[IE[Zn+1 | Fn]] = IE[Zn] = · · · = IE[Z1] = IE[Z0],  n ∈ N.

Processes with centered independent increments give examples of martingales. For example, if (Xn)n∈N has centered and independent increments then it is a martingale with respect to (Fn)n∈N. Indeed, in this case we have

IE[Xn+1 | Fn] = IE[Xn | Fn] + IE[Xn+1 − Xn | Fn]
= IE[Xn | Fn] + IE[Xn+1 − Xn]
= IE[Xn | Fn]
= Xn,  n ∈ N.
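The constant-expectation property of Proposition 4 can be illustrated on the simplest centered-increment martingale, the symmetric ±1 random walk; the Monte Carlo sketch below (an illustration, not part of the original notes) checks that the sample mean of Zn stays near IE[Z0] = 0 at every step:

```python
import random

# Monte Carlo sketch: the symmetric +-1 random walk (Z_n) is a
# martingale, so IE[Z_n] = IE[Z_0] = 0 for every n.
rng = random.Random(1)
n_paths, n_steps = 20000, 20
totals = [0] * (n_steps + 1)
for _ in range(n_paths):
    z = 0
    for n in range(1, n_steps + 1):
        z += rng.choice((-1, 1))
        totals[n] += z
means = [t / n_paths for t in totals]
assert all(abs(m) < 0.15 for m in means)
```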

In particular, a process (Xn)n∈N with centered independent increments is a martingale and it is also a Markov process. However, not all martingales have the Markov property, and not all Markov processes are martingales. Next we turn to the definition of stopping time.

Definition 3. A stopping time is a random variable τ : Ω → N such that

{τ > n} ∈ Fn,  n ∈ N.   (12.1)

The meaning of Relation (12.1) is that the knowledge of the event {τ > n} depends only on the information present in Fn up to time n, i.e. on the knowledge of X0, X1, . . . , Xn. Not every N-valued random variable is a stopping time, however, hitting times provide natural examples of stopping times. For example, the hitting time

Tx = inf{k ≥ 0 : Xk = x}

of x ∈ S is a stopping time, as we have

{Tx > n} = {X0 ≠ x, X1 ≠ x, . . . , Xn ≠ x} = {X0 ≠ x} ∩ {X1 ≠ x} ∩ · · · ∩ {Xn ≠ x},  n ∈ N.

On the other hand, the first time


τ = inf{k ≥ 0 : Xk = max_{l=0,1,...,N} Xl}

the process (Xk)k∈N reaches its maximum over {0, 1, . . . , N} is not a stopping time.

Given (Zn)n∈N a stochastic process and τ : Ω → N a stopping time, the stopped process (Zn∧τ)n∈N is defined as

Zn∧τ = Zτ if τ ≤ n,  and  Zn∧τ = Zn if τ > n.

Using indicator functions we may also write

Zn∧τ = Zτ 1{τ≤n} + Zn 1{τ>n},  n ∈ N.

The following figure is an illustration of the path of a stopped process.


Fig. 12.1: Stopped process

The following Theorem 10, known as the stopping time theorem, is due to the mathematician J.L. Doob (1910-2004).

Theorem 10. Assume that (Mn)n∈N is a martingale with respect to (Fn)n∈N. Then the stopped process (Mn∧τ)n∈N is also a martingale with respect to (Fn)n∈N.


12.2 Ruin probabilities

In the sequel we will show that, as an application of the stopping time theorem, the ruin probabilities in the gambling problem can be recovered in a simple and elegant way.

Consider the gambling process (Xn)n∈N on {0, 1, . . . , N} introduced in Chapter 2. Let the stopping time τ : Ω → N be defined by

τ = inf{n ≥ 0 : Xn = N or Xn = 0},

which is also the hitting time of the boundary {0, N}. One checks easily that {τ > n} depends only on the history of (Xk)k∈N up to time n, since we have

{τ > n} = {0 < X0 < N} ∩ {0 < X1 < N} ∩ · · · ∩ {0 < Xn < N},

hence τ is a stopping time. We will recover the ruin probabilities computed in Chapter 7.2, in three steps.

Step 1. The process (Xn)n∈N is a martingale. We note that the process (Xn)n∈N has independent increments, and in the fair case p = q = 1/2 those increments are centered:

IE[Xn+1 − Xn] = 1 × p + (−1) × q = 0,

hence (Xn)n∈N is a martingale.

Step 2. The stopped process (Xτ∧n)n∈N is still a martingale. By Theorem 10 we know that the stopped process (Xτ∧n)n∈N is a martingale.

Step 3. The expectation IE[Xτ∧n] is constant in n ∈ N. Since the stopped process (Xτ∧n)n∈N is a martingale by Theorem 10, we find that its expectation is constant by Proposition 4, which gives


k = IE[X0 | X0 = k] = IE[Xτ ∧n | X0 = k],

0 ≤ k ≤ N.

Letting n go to infinity we get

IE[Xτ | X0 = k] = IE[lim_{n→∞} Xτ∧n | X0 = k] = lim_{n→∞} IE[Xτ∧n | X0 = k] = k,

hence

k = IE[Xτ | X0 = k] = 0 × P(Xτ = 0 | X0 = k) + N × P(Xτ = N | X0 = k),

which directly yields

P(Xτ = N | X0 = k) = k/N,

and also shows that

P(Xτ = 0 | X0 = k) = 1 − k/N,  0 ≤ k ≤ N,

which recovers (2.14) without use of boundary conditions, and with short calculations. Namely, the solution has been obtained in a simple way without solving any finite difference equation, demonstrating the power of the martingale approach.

Next, let us turn to the case where p ≠ q. In this case the process (Xn)n∈N is no longer a martingale and in order to use Theorem 10 we need to construct a martingale of a different type. Here we note that the process

Mn := (q/p)^{Xn},  n ∈ N,

is a martingale with respect to (Fn)n∈N.

Step 1. The process (Mn)n∈N is a martingale. Indeed, we have

IE[Mn+1 | Fn] = IE[(q/p)^{Xn+1} | Fn]
= IE[(q/p)^{Xn+1−Xn} (q/p)^{Xn} | Fn]
= (q/p)^{Xn} IE[(q/p)^{Xn+1−Xn} | Fn]
= (q/p)^{Xn} IE[(q/p)^{Xn+1−Xn}]
= (q/p)^{Xn} ((q/p) P(Xn+1 − Xn = 1) + (q/p)^{−1} P(Xn+1 − Xn = −1))
= (q/p)^{Xn} (p (q/p) + q (q/p)^{−1})
= (q/p)^{Xn} (q + p)
= (q/p)^{Xn}
= Mn,  n ∈ N.

In particular, the expectation of (Mn)n∈N is constant over time by Proposition 4 since it is a martingale, i.e. we have

(q/p)^k = IE[M0 | X0 = k] = IE[Mn | X0 = k],  0 ≤ k ≤ N,  n ∈ N.

Step 2. The stopped process (Mτ∧n)n∈N is still a martingale. By Theorem 10 we know that the stopped process (Mτ∧n)n∈N is a martingale.

Step 3. The expectation IE[Mτ∧n] is constant in n ∈ N. Since the stopped process (Mτ∧n)n∈N remains a martingale by Theorem 10, its expectation is constant by Proposition 4. This gives

(q/p)^k = IE[M0 | X0 = k] = IE[Mτ∧n | X0 = k].

Next, letting n go to infinity we find


(q/p)^k = IE[M0 | X0 = k]
= lim_{n→∞} IE[Mτ∧n | X0 = k]
= IE[lim_{n→∞} Mτ∧n | X0 = k]
= IE[Mτ | X0 = k],

hence

(q/p)^k = IE[Mτ | X0 = k]
= (q/p)^N P(Mτ = (q/p)^N | X0 = k) + (q/p)^0 P(Mτ = 1 | X0 = k)
= (q/p)^N P(Mτ = (q/p)^N | X0 = k) + P(Mτ = 1 | X0 = k),

under the additional condition

P(Mτ = (q/p)^N | X0 = k) + P(Mτ = 1 | X0 = k) = 1.

Finally this gives

P(Xτ = N | X0 = k) = P(Mτ = (q/p)^N | X0 = k) = ((q/p)^k − 1)/((q/p)^N − 1),   (12.2)

0 ≤ k ≤ N, and

P(Xτ = 0 | X0 = k) = P(Mτ = 1 | X0 = k) = ((q/p)^N − (q/p)^k)/((q/p)^N − 1),

0 ≤ k ≤ N, which recovers (2.7).

In the case of a fair game the martingale method also allows us to recover the mean game duration, after checking that (Xn² − n)n∈N is also a martingale.

Step 1. The process (Xn² − n)n∈N is a martingale. We have

IE[(Xn+1)² − (n + 1) | Fn] = IE[(Xn + Xn+1 − Xn)² − (n + 1) | Fn]
= IE[Xn² + (Xn+1 − Xn)² + 2Xn(Xn+1 − Xn) − (n + 1) | Fn]
= IE[Xn² − n | Fn] − 1 + IE[(Xn+1 − Xn)² | Fn] + 2 IE[Xn(Xn+1 − Xn) | Fn]
= Xn² − n − 1 + IE[(Xn+1 − Xn)²] + 2Xn IE[Xn+1 − Xn]
= Xn² − n,  n ∈ N,

since (Xn+1 − Xn)² = 1 and IE[Xn+1 − Xn] = 0 in the fair case p = q = 1/2.

Step 2. The stopped process ((Xτ∧n)² − τ ∧ n)n∈N is still a martingale. By Theorem 10 we know that the stopped process ((Xτ∧n)² − τ ∧ n)n∈N is a martingale.

Step 3. The expectation IE[(Xτ∧n)² − τ ∧ n] is constant in n ∈ N. Since the stopped process ((Xτ∧n)² − τ ∧ n)n∈N is also a martingale, we have

k² = IE[X0² − 0 | X0 = k] = IE[(Xτ∧n)² − τ ∧ n | X0 = k],

and after taking the limit as n goes to infinity,

k² = lim_{n→∞} IE[(Xτ∧n)² − τ ∧ n | X0 = k]
= IE[lim_{n→∞} (Xτ∧n)² − lim_{n→∞} τ ∧ n | X0 = k]
= IE[Xτ² − τ | X0 = k],

which gives

k² = IE[Xτ² − τ | X0 = k]
= IE[Xτ² | X0 = k] − IE[τ | X0 = k]
= N² P(Xτ = N | X0 = k) + 0² P(Xτ = 0 | X0 = k) − IE[τ | X0 = k],

i.e.

IE[τ | X0 = k] = N² P(Xτ = N | X0 = k) − k² = N² (k/N) − k² = k(N − k),

0 ≤ k ≤ N, which recovers (2.31).


Finally we show how to recover the value of the mean hitting time in the non-symmetric case.

Step 1. The process Xn − (p − q)n is a martingale. In this case we note that although (Xn)n∈N does not have centered increments and is not a martingale, the compensated process

Xn − (p − q)n,  n ∈ N,

is a martingale because, in addition to having independent increments, its increments are centered:

IE[Xn − X0 − (p − q)n] = IE[Σ_{k=1}^n (Xk − Xk−1)] − (p − q)n
= −(p − q)n + Σ_{k=1}^n IE[Xk − Xk−1]
= −(p − q)n + Σ_{k=1}^n (p − q)
= 0.

Step 2. The stopped process (Xτ∧n − (p − q)(τ ∧ n))n∈N is still a martingale. By Theorem 10 we know that the stopped process (Xτ∧n − (p − q)(τ ∧ n))n∈N is a martingale.

Step 3. The expectation IE[Xτ∧n − (p − q)(τ ∧ n)] is constant in n ∈ N. Since the stopped process (Xτ∧n − (p − q)(τ ∧ n))n∈N is a martingale, we have

k = IE[X0 − 0 | X0 = k] = IE[Xτ∧n − (p − q)(τ ∧ n) | X0 = k],

and after taking the limit as n goes to infinity,

k = lim_{n→∞} IE[Xτ∧n − (p − q)(τ ∧ n) | X0 = k]
= IE[lim_{n→∞} Xτ∧n − (p − q) lim_{n→∞} τ ∧ n | X0 = k]
= IE[Xτ − (p − q)τ | X0 = k],

which gives


k = IE[Xτ − (p − q)τ | X0 = k]
= IE[Xτ | X0 = k] − (p − q) IE[τ | X0 = k]
= N P(Xτ = N | X0 = k) + 0 × P(Xτ = 0 | X0 = k) − (p − q) IE[τ | X0 = k],

i.e.

(p − q) IE[τ | X0 = k] = N P(Xτ = N | X0 = k) − k = N ((q/p)^k − 1)/((q/p)^N − 1) − k,

from (12.2), hence

IE[τ | X0 = k] = (1/(p − q)) (N ((q/p)^k − 1)/((q/p)^N − 1) − k),  0 ≤ k ≤ N,

which recovers (2.26).
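The closed-form win probability (12.2) and the mean duration above can be evaluated and cross-checked by simulation; the parameter values p = 0.6, k = 5, N = 10 in this sketch (not part of the original notes) are arbitrary illustration choices:

```python
import random

def ruin_game_stats(p, k, N):
    # Closed forms: win probability (12.2) and mean duration (2.26).
    q = 1.0 - p
    r = q / p
    p_win = (r**k - 1) / (r**N - 1)
    mean_tau = (N * p_win - k) / (p - q)
    return p_win, mean_tau

p, k, N = 0.6, 5, 10          # arbitrary illustration values
p_win, mean_tau = ruin_game_stats(p, k, N)

# Cross-check by simulating the non-symmetric gambling process.
rng = random.Random(3)
sims, wins, steps_total = 20000, 0, 0
for _ in range(sims):
    x, steps = k, 0
    while 0 < x < N:
        x += 1 if rng.random() < p else -1
        steps += 1
    wins += (x == N)
    steps_total += steps
assert abs(wins / sims - p_win) < 0.02
assert abs(steps_total / sims - mean_tau) < 1.0
```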




Chapter 13

Some Useful Identities

In this chapter we give a summary of identities used in this course. We write C(n, k) for the binomial coefficient.

1_A(x) = 1 if x ∈ A, 0 if x ∉ A.

1_{[a,b]}(x) = 1 if a ≤ x ≤ b, 0 otherwise.

C(n, k) = n!/((n − k)!k!),  0 ≤ k ≤ n.

Σ_{k=0}^n r^k = (1 − r^{n+1})/(1 − r),  r ≠ 1.   (13.1)

Σ_{k=0}^∞ r^k = 1/(1 − r),  |r| < 1.

Σ_{k=1}^∞ k r^{k−1} = (∂/∂r) Σ_{k=0}^∞ r^k = (∂/∂r) (1/(1 − r)) = 1/(1 − r)²,  |r| < 1.   (13.2)

Σ_{k=0}^n C(n, k) a^k b^{n−k} = (a + b)^n.

Σ_{k=0}^n C(n, k) = 2^n.
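These identities can be spot-checked numerically, for instance with Python's `math.comb` for the binomial coefficient (a small sketch, not part of the original notes):

```python
from math import comb

n, r = 9, 0.5
# finite geometric series (13.1)
assert abs(sum(r**k for k in range(n + 1)) - (1 - r**(n + 1)) / (1 - r)) < 1e-12
# binomial theorem and two of its consequences
a, b = 2.0, 3.0
assert abs(sum(comb(n, k) * a**k * b**(n - k) for k in range(n + 1)) - (a + b)**n) < 1e-6
assert sum(comb(n, k) for k in range(n + 1)) == 2**n
assert sum(k * comb(n, k) for k in range(n + 1)) == n * 2**(n - 1)
# sums of integers and squares
assert sum(k for k in range(1, n + 1)) == n * (n + 1) // 2
assert sum(k * k for k in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
```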


Σ_{k=0}^n k C(n, k) a^k b^{n−k} = Σ_{k=1}^n n!/((n − k)!(k − 1)!) a^k b^{n−k}
= Σ_{k=0}^{n−1} n!/((n − 1 − k)!k!) a^{k+1} b^{n−1−k}
= n Σ_{k=0}^{n−1} C(n − 1, k) a^{k+1} b^{n−1−k}
= n a (a + b)^{n−1},  n ≥ 1,   (13.3)

Σ_{k=0}^n k C(n, k) a^k b^{n−k} = a (∂/∂a) Σ_{k=0}^n C(n, k) a^k b^{n−k} = a (∂/∂a) (a + b)^n = n a (a + b)^{n−1},  n ≥ 1.   (13.4)

Σ_{k=0}^n k C(n, k) = n 2^{n−1}.

Σ_{k=1}^n k = n(n + 1)/2.

Σ_{k=1}^n k² = n(n + 1)(2n + 1)/6.

(1 + x)^α = Σ_{k=0}^∞ (x^k/k!) α(α − 1) × · · · × (α − (k − 1)).


Solutions to the Exercises

13.1 Chapter 1 - Probability Background

Exercise 1.1. (Exercise II.3.1 in [4]). We write

Z = Σ_{k=1}^N Xk,

where P(N = n) = 1/6, n = 1, 2, . . . , 6, and Xk is a Bernoulli random variable with parameter 1/2, k = 1, . . . , 6.

1. We have

IE[Z] = IE[IE[Z | N]]
= Σ_{n=1}^6 IE[Z | N = n] P(N = n)
= Σ_{n=1}^6 IE[Σ_{k=1}^N Xk | N = n] P(N = n)
= Σ_{n=1}^6 IE[Σ_{k=1}^n Xk] P(N = n)
= Σ_{n=1}^6 Σ_{k=1}^n IE[Xk] P(N = n)
= (1/(2 × 6)) Σ_{n=1}^6 n
= (1/(2 × 6)) × (6 × (6 + 1))/2
= 7/4.   (13.5)

Concerning the variance we have

IE[Z²] = IE[IE[Z² | N]]
= Σ_{n=1}^6 IE[Z² | N = n] P(N = n)
= Σ_{n=1}^6 IE[(Σ_{k=1}^n Xk)² | N = n] P(N = n)
= Σ_{n=1}^6 IE[Σ_{k=1}^n Xk²] P(N = n) + Σ_{n=1}^6 IE[Σ_{1≤k≠l≤n} Xk Xl] P(N = n)
= (1/(2 × 6)) Σ_{n=1}^6 n + (1/(6 × 2²)) Σ_{n=1}^6 n(n − 1)
= (1/(2 × 6)) Σ_{n=1}^6 n + (1/(6 × 2²)) Σ_{n=1}^6 n² − (1/(6 × 2²)) Σ_{n=1}^6 n
= (1/(6 × 2²)) Σ_{n=1}^6 n² + (1/(6 × 2²)) Σ_{n=1}^6 n
= (1/(6 × 2²)) × (6(6 + 1)(2 × 6 + 1))/6 + (1/(6 × 2²)) × (6(6 + 1))/2
= 14/3,

where we used (13.5), hence

Var[Z] = IE[Z²] − (IE[Z])² = 14/3 − 49/16 = 77/48.   (13.6)

We could also write Var[Z | N = n] = np(1 − p) with p = 1/2, which implies

IE[Z² | N = n] = Var[Z | N = n] + (IE[Z | N = n])² = np(1 − p) + n²p²,

hence

IE[Z²] = Σ_{n=1}^6 IE[Z² | N = n] P(N = n) = (1/6) Σ_{n=1}^6 (np − np² + n²p²),
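A Monte Carlo sketch of this exercise (roll a fair die to get N, then toss N fair coins and count the heads Z); this illustration is not part of the original notes, and the sample mean should approach IE[Z] = 7/4:

```python
import random

# Monte Carlo: Z = number of heads among N coin tosses, N a die roll.
rng = random.Random(5)
sims = 100000
total = 0
for _ in range(sims):
    n = rng.randint(1, 6)                       # die roll N
    total += sum(rng.randint(0, 1) for _ in range(n))  # heads among n tosses
mean = total / sims
assert abs(mean - 1.75) < 0.02                  # IE[Z] = 7/4
```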


which leads to the same result.

2. We have

P(Z = l) = P(Σ_{k=1}^N Xk = l)
= Σ_{n=1}^6 P(Σ_{k=1}^N Xk = l | N = n) P(N = n)
= Σ_{n=1}^6 P(Σ_{k=1}^n Xk = l) P(N = n).

Next we notice that the random variable Σ_{k=1}^n Xk has a binomial distribution with parameter (n, 1/2), hence

P(Σ_{k=1}^n Xk = l) = C(n, l) (1/2)^l (1/2)^{n−l},

which yields

P(Z = l) = Σ_{n=1}^6 P(Σ_{k=1}^n Xk = l) P(N = n)
= (1/6) Σ_{n=l}^6 C(n, l) (1/2)^l (1/2)^{n−l}
= (1/6) Σ_{n=l}^6 C(n, l) (1/2)^n,  0 ≤ l ≤ 6.
3. We have IE[Z] =

6 X

lP(Z = l)

l=0 6 6  n   n 1X X 1 l = 6 2 l l=0 n=l n   6  n X 1X 1 n = l 6 n=1 2 l l=1 6  n X n 1X 1 n! = 6 n=1 2 (n − l)!(l − 1)! l=1

213

This version: November 28, 2011 http://www.ntu.edu.sg/home/nprivault/indext.html


N. Privault

 n n−1 6 X (n − 1)! 1 1X n = 6 n=1 2 (n − 1 − l)!l! l=0

=

6 X

1 n 6 × 2 n=1

1 6 × (6 + 1) 6×2 2 7 = , 4 =

which recovers (13.3). We also have IE[Z 2 ] =

6 X

l2 P(Z = l)

l=0

=

=

=

=

=

=

1 6

6 X

l2

6  n   X 1 n

2 l n=l     n 6 nX 1X 1 2 n l l 6 n=1 2 l=1 6  n X n 1X 1 (l − 1 + 1)n! 6 n=1 2 (n − l)!(l − 1)! l=1   6 n 6  n X n n X 1X 1 (l − 1)n! 1X 1 n! + 6 n=1 2 (n − l)!(l − 1)! 6 n=1 2 (n − l)!(l − 1)! l=1 l=1     6 n 6 n n n X X 1X 1 n! 1X 1 n! + 6 n=1 2 (n − l)!(l − 2)! 6 n=1 2 (n − l)!(l − 1)! l=2 l=1     6 n−2 6 n n n−1 X (n − 2)! X 1X 1 1X 1 n! + n(n − 1) 6 n=1 2 (n − 2 − l)!l! 6 n=1 2 (n − 1 − l)!l! l=0

l=0

=

1 6 × 22

6 X n=1

n(n − 1) +

l=0

6 X

1 n 6 × 2 n=1

=

6 6 6 1 X 2 1 X 1 X n − n + n 6 × 22 n=1 6 × 22 n=1 6 × 2 n=1

=

6 6 1 X 2 1 X n + n 6 × 22 n=1 6 × 22 n=1

1 6(6 + 1)(2 × 6 + 1) 1 6(6 + 1) + 6 × 22 6 6 × 22 2 14 = , 3 =

214

This version: November 28, 2011 http://www.ntu.edu.sg/home/nprivault/indext.html


Notes on Markov Chains

which recovers (13.6). Exercise 1.2. (Problem II.3.1 in [4]). 1. Let the sequence of Bernoulli trials be represented by a family (Xk )k≥1 of independent Bernoulli random variables with distribution P(Xk = 1) = p,

P(Xk = 0) = 1 − p,

We have Z = X1 + · · · + XN =

N X

k ≥ 1.

Xk ,

k=1

and, since IE[Xk ] = p, " IE[Z] = IE

N X

# Xk

k=1

= =

∞ X

" IE

n=0 ∞ X

n X

# Xk P(N = n)

k=1 n X

!

IE[Xk ] P(N = n)

n=0 k=1 ∞ X

=p

nP(N = n)

n=0

= p IE[N ]. Note that N need not have the Poisson distribution for the above equality to hold. Next, the expectation of the Poisson random variable N with parameter λ > 0 is given by IE[N ] =

∞ X

nP(N = n)

n=0

= e−λ = e−λ

∞ X λn n n! n=0 ∞ X

λn (n − 1)! n=1

= λe−λ

∞ X λn n! n=0

215

This version: November 28, 2011 http://www.ntu.edu.sg/home/nprivault/indext.html


N. Privault

= λe−λ eλ = λ,

(13.7)

hence IE[Z] = pλ.

Concerning the variance we have, since IE[Xk²] = p,

IE[Z²] = IE[(Σ_{k=1}^N Xk)²]
= Σ_{n=0}^∞ IE[(Σ_{k=1}^n Xk)²] P(N = n)
= Σ_{n=0}^∞ IE[Σ_{k=1}^n (Xk)² + Σ_{1≤k≠l≤n} Xk Xl] P(N = n)
= Σ_{n=0}^∞ (IE[Σ_{k=1}^n (Xk)²] + Σ_{1≤k≠l≤n} IE[Xk] IE[Xl]) P(N = n)
= Σ_{n=0}^∞ (np + n(n − 1)p²) P(N = n)
= p(1 − p) Σ_{n=0}^∞ n P(N = n) + p² Σ_{n=0}^∞ n² P(N = n)
= p(1 − p) IE[N] + p² IE[N²].

Again, the above equality holds without requiring that N has the Poisson distribution. Next we have

IE[N²] = Σ_{n=0}^∞ n² P(N = n)
= e^{−λ} Σ_{n=0}^∞ n² λ^n/n!
= e^{−λ} Σ_{n=1}^∞ n λ^n/(n − 1)!
= e^{−λ} Σ_{n=1}^∞ (n − 1) λ^n/(n − 1)! + e^{−λ} Σ_{n=1}^∞ λ^n/(n − 1)!
= e^{−λ} Σ_{n=2}^∞ λ^n/(n − 2)! + e^{−λ} Σ_{n=1}^∞ λ^n/(n − 1)!
= λ² e^{−λ} Σ_{n=0}^∞ λ^n/n! + λ e^{−λ} Σ_{n=0}^∞ λ^n/n!
= λ + λ²,

hence

Var[N] = IE[N²] − (IE[N])² = λ,   (13.8)

and

Var[Z] = IE[Z²] − (IE[Z])²
= p(1 − p) IE[N] + p² IE[N²] − (p IE[N])²
= p(1 − p) IE[N] + p² Var[N]
= λp(1 − p) + λp²
= pλ.

2. What is the distribution of Z ? We have

P(Z = l) = Σ_{n=0}^∞ P(Σ_{k=1}^n Xk = l) P(N = n)
= e^{−λ} Σ_{n=l}^∞ C(n, l) p^l (1 − p)^{n−l} λ^n/n!
= (p^l e^{−λ}/l!) Σ_{n=l}^∞ (1/(n − l)!) (1 − p)^{n−l} λ^n
= ((λp)^l/l!) e^{−λ} Σ_{n=0}^∞ (1 − p)^n λ^n/n!
= ((λp)^l/l!) e^{−λ} e^{(1−p)λ}
= ((λp)^l/l!) e^{−pλ},

hence Z has a Poisson distribution with parameter pλ.

3. From Question (2), Z is a Poisson random variable with parameter pλ, hence from (13.7) and (13.8) we have IE[Z] = Var[Z] = pλ.

Exercise 1.3. (Exercise II.4.5 in [4]). Since U is uniformly distributed given L over the interval [0, L], we have


fU|L=y(x) = (1/y) 1_{[0,y]}(x),

hence

f(U,L)(x, y) = fU|L=y(x) fL(y) = (1/y) 1_{[0,y]}(x) y e^{−y} 1_{[0,∞)}(y) = 1_{[0,y]}(x) 1_{[0,∞)}(y) e^{−y}.   (13.9)

Next we want to determine f(U,L−U)(x, y). We note that for all bounded functions h : R² → R we have

IE[h(U, L − U)] = ∫_{−∞}^∞ ∫_{−∞}^∞ h(x, z) f(U,L−U)(x, z) dx dz.   (13.10)

On the other hand, IE[h(U, L − U)] can also be written as

IE[h(U, L − U)] = ∫_{−∞}^∞ (∫_{−∞}^∞ h(x, y − x) f(U,L)(x, y) dy) dx
= ∫_{−∞}^∞ (∫_{−∞}^∞ h(x, y) f(U,L)(x, y + x) dy) dx
= ∫_{−∞}^∞ ∫_{−∞}^∞ h(x, z) f(U,L)(x, x + z) dx dz,

after the change of variable z = y − x. By identification with (13.10) this gives

f(U,L−U)(x, z) = f(U,L)(x, x + z),

hence from (13.9) we get

f(U,L−U)(x, z) = f(U,L)(x, x + z) = 1_{[0,x+z]}(x) 1_{[0,∞)}(x + z) e^{−x−z} = 1_{[0,∞)}(x) 1_{[0,∞)}(z) e^{−x−z}.
Exercise 1.4. (Problem II.4.4 in [4]). Let us assume first that X and Y are independent Poisson random variables with parameters λ and µ. We note that

P(X + Y = n) = Σ_{k=0}^n P(X = k, X + Y = n)
= Σ_{k=0}^n P(X = k, Y = n − k)
= Σ_{k=0}^n P(X = k) P(Y = n − k)
= e^{−λ−µ} Σ_{k=0}^n (λ^k/k!) (µ^{n−k}/(n − k)!)
= e^{−λ−µ} (1/n!) Σ_{k=0}^n C(n, k) λ^k µ^{n−k}
= e^{−λ−µ} (λ + µ)^n/n!,

hence X + Y has a Poisson distribution with parameter λ + µ. Next we have

P(X = k | X + Y = n) = P(X = k, X + Y = n)/P(X + Y = n)
= P(X = k, Y = n − k)/P(X + Y = n)
= P(X = k) P(Y = n − k)/P(X + Y = n)
= C(n, k) (λ/(λ + µ))^k (µ/(λ + µ))^{n−k},   (13.11)

hence, given X + Y = n, the random variable X has a binomial distribution with parameters λ/(λ + µ) and n. In case X and Y have the same parameter λ = µ we get

P(X = k | X + Y = n) = C(n, k)/2^n,

which becomes independent of λ. Hence when λ becomes random with probability density x ↦ fλ(x) we get

P(X = k | X + Y = n) = ∫_{−∞}^∞ P(X = k | X + Y = n, λ = x) fλ(x) dx
= ∫_{−∞}^∞ P(X = k | X + Y = n) fλ(x) dx
= P(X = k | X + Y = n) ∫_{−∞}^∞ fλ(x) dx


= P(X = k | X + Y = n)
= C(n, k)/2^n,

including in the case where λ has an exponential distribution with parameter θ > 0, i.e. fλ(x) = θ 1_{[0,∞)}(x) e^{−θx}, x ∈ R.

Exercise 1.5.

1. The probability that the system operates is

P(X ≥ 2) = C(3, 2) p²(1 − p) + p³ = 3p² − 2p³,

where X is a binomial random variable with parameter (3, p).

2. The probability that the system operates is

∫_0^1 (3p² − 2p³) dp = 3 ∫_0^1 p² dp − 2 ∫_0^1 p³ dp = 1/2.

13.2 Chapter 2 - Gambling Problems

Exercise 2.1.

1. By first step analysis we have

f(k) = (1 − 2p)f(k) + pf(k + 1) + pf(k − 1),

which yields the equation

f(k) = (1/2) f(k + 1) + (1/2) f(k − 1),  1 ≤ k ≤ S − 1,   (13.12)

with the boundary conditions f(0) = 1 and f(S) = 0. We refer to this equation as the homogeneous equation.

2. According to the notes we know that the general solution of (13.12) has the form

f(k) = C1 + C2 k,  0 ≤ k ≤ S,

and after taking into account the boundary conditions we find

f(k) = (S − k)/S,  0 ≤ k ≤ S.

Intuitively the problem is a fair game for all values of p, so that the probability of ruin should be the same for all p ∈ (0, 1/2), which is indeed the case.

3. By first step analysis we have

h(k) = (1 − 2p)(1 + h(k)) + p(1 + h(k + 1)) + p(1 + h(k − 1))
= 1 + (1 − 2p)h(k) + ph(k + 1) + ph(k − 1),

which yields the equation

h(k) = 1/(2p) + (1/2) h(k + 1) + (1/2) h(k − 1),  1 ≤ k ≤ S − 1,

with the boundary conditions h(0) = 0 and h(S) = 0.

4. After trying a solution of the form h(k) = Ck² we find that C should be equal to C = −1/(2p), hence k ↦ −k²/(2p) is a particular solution.

5. Given the hint, the general solution has the form

h(k) = C1 + C2 k − k²/(2p),  0 ≤ k ≤ S,

which gives

h(k) = (k/(2p)) (S − k),  0 ≤ k ≤ S,

after taking into account the boundary conditions. Note that

1/(2p) = 1/(1 − (1 − 2p)) = Σ_{k=0}^∞ (1 − 2p)^k

is the mean time spent in any given state k ∈ {1, 2, . . . , S − 1}.

6. Starting from any state k ∈ {1, 2, . . . , S − 1}, the mean duration goes to infinity when p goes to zero. When p goes to 0 the probability 1 − 2p of a draw increases and the game should take longer. Hence the above answer is compatible with intuition.
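The mean duration h(k) = k(S − k)/(2p) found in Question 5 can be checked by simulating the game with draw probability 1 − 2p; the values p = 0.3, k = 2, S = 5 in this sketch (not part of the original notes) are arbitrary illustration choices:

```python
import random

def mean_duration(p, k, S, n_sims, rng):
    # Game with P(up) = P(down) = p and P(draw) = 1 - 2p,
    # absorbed at 0 or S; estimate the expected duration from k.
    total = 0
    for _ in range(n_sims):
        x, steps = k, 0
        while 0 < x < S:
            u = rng.random()
            if u < p:
                x += 1
            elif u < 2 * p:
                x -= 1
            steps += 1
        total += steps
    return total / n_sims

rng = random.Random(4)
p, k, S = 0.3, 2, 5
est = mean_duration(p, k, S, 20000, rng)
assert abs(est - k * (S - k) / (2 * p)) < 0.5   # h(k) = k(S-k)/(2p) = 10
```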


13.3 Chapter 3 - Random Walks

Exercise 3.1. (Exercise III.5.9 in [4]).

1. We find C(4, 3) = C(4, 1) = 4 paths, as follows.

Fig. 13.1: Four paths leading from 0 to 2 in four steps.

2. In each of the C(4, 3) = C(4, 1) = 4 paths there are 3 steps up (with probability p) and 1 step down (with probability q = 1 − p), hence the result.

3. This follows from (3.1).

4. If n + 1 + k is odd the equation is clearly satisfied as both the right hand side and left hand side of (3.15) are equal to 0. If n + 1 + k is even we have


p pn,k−1 + q pn,k+1
= p C(n, (n − 1 + k)/2) p^{(n−1+k)/2} (1 − p)^{(n+1−k)/2} + q C(n, (n + 1 + k)/2) p^{(n+1+k)/2} (1 − p)^{(n−1−k)/2}
= C(n, (n − 1 + k)/2) p^{(n+1+k)/2} q^{(n+1−k)/2} + C(n, (n + 1 + k)/2) p^{(n+1+k)/2} q^{(n+1−k)/2}
= p^{(n+1+k)/2} q^{(n+1−k)/2} (C(n, (n − 1 + k)/2) + C(n, (n + 1 + k)/2))
= p^{(n+1+k)/2} q^{(n+1−k)/2} (n!/(((n + 1 − k)/2)!((n − 1 + k)/2)!) + n!/(((n − 1 − k)/2)!((n + 1 + k)/2)!))
= p^{(n+1+k)/2} q^{(n+1−k)/2} (n!((n + 1 − k)/2) + n!((n + 1 + k)/2))/(((n + 1 − k)/2)!((n + 1 + k)/2)!)
= p^{(n+1+k)/2} q^{(n+1−k)/2} n!(n + 1)/(((n + 1 − k)/2)!((n + 1 + k)/2)!)
= C(n + 1, (n + 1 + k)/2) p^{(n+1+k)/2} q^{(n+1−k)/2}
= pn+1,k,

which shows that pn,k satisfies Equation (3.15). In addition we clearly have

p0,0 = P(S0 = 0) = 1  and  p0,k = P(S0 = k) = 0,  k ≠ 0.

5. By first step analysis we have, letting pn,k := P(Sn = k),

pn+1,k = P(Sn+1 = k)
= P(Sn+1 = k | S0 = 0)
= P(Sn+1 = k | S1 = 1) P(S1 = 1 | S0 = 0) + P(Sn+1 = k | S1 = −1) P(S1 = −1 | S0 = 0)
= p P(Sn+1 = k | S1 = 1) + q P(Sn+1 = k | S1 = −1)
= p P(Sn+1 = k − 1 | S1 = 0) + q P(Sn+1 = k + 1 | S1 = 0)
= p P(Sn = k − 1 | S0 = 0) + q P(Sn = k + 1 | S0 = 0)
= p pn,k−1 + q pn,k+1,

which yields pn+1,k = p pn,k−1 + q pn,k+1, for all n ∈ N and k ∈ Z.

223

This version: November 28, 2011 http://www.ntu.edu.sg/home/nprivault/indext.html


N. Privault

Exercise 3.2.

1. Since the increment Xk takes its values in {−1, 1}, the set of distinct values of {S0, . . . , Sn} is the integer interval

[inf_{k=0,...,n} Sk, sup_{k=0,...,n} Sk],

which has

Rn = 1 + sup_{k=0,...,n} Sk − inf_{k=0,...,n} Sk

elements. In addition we have R0 = 1 and R1 = 2.

2. At each time step k ≥ 1 the range can only either increase by one unit or remain constant, hence Rk − Rk−1 ∈ {0, 1} is a Bernoulli random variable. In addition we have the identity

{Rk − Rk−1 = 1} = {Sk ≠ S0, Sk ≠ S1, . . . , Sk ≠ Sk−1},

hence applying the probability P on both sides we get

P(Rk − Rk−1 = 1) = P(Sk − S0 ≠ 0, Sk − S1 ≠ 0, . . . , Sk − Sk−1 ≠ 0).

3. For all k ≥ 1 we have

P(Rk − Rk−1 = 1) = P(Sk − S0 ≠ 0, Sk − S1 ≠ 0, . . . , Sk − Sk−1 ≠ 0)
= P(X1 + · · · + Xk ≠ 0, X2 + · · · + Xk ≠ 0, . . . , Xk ≠ 0)
= P(X1 ≠ 0, X1 + X2 ≠ 0, . . . , X1 + · · · + Xk ≠ 0),

since the sequence (Xk)k≥1 is made of independent and identically distributed random variables.

4. We have

Rn = R0 + (R1 − R0) + · · · + (Rn − Rn−1) = R0 + Σ_{k=1}^n (Rk − Rk−1),  n ∈ N.

5. First we note that ({τ0 > k})k≥1 is a decreasing sequence of events, since

{τ0 > k + 1} =⇒ {τ0 > k},  k ≥ 1,

hence we have

P(τ0 = ∞) = P(∩_{k≥1} {τ0 > k}) = lim_{k→∞} P(τ0 > k).

6. We have

IE[Rn] = IE[R0 + Σ_{k=1}^n (Rk − Rk−1)]
= R0 + Σ_{k=1}^n IE[Rk − Rk−1]
= R0 + Σ_{k=1}^n P(Rk − Rk−1 = 1)
= R0 + Σ_{k=1}^n P(X1 ≠ 0, X1 + X2 ≠ 0, . . . , X1 + · · · + Xk ≠ 0)
= R0 + Σ_{k=1}^n P(S1 ≠ 0, S2 ≠ 0, . . . , Sk ≠ 0)
= R0 + Σ_{k=1}^n P(τ0 > k)
= 1 + Σ_{k=1}^n P(τ0 > k)
= P(τ0 > 0) + Σ_{k=1}^n P(τ0 > k)
= Σ_{k=0}^n P(τ0 > k).
7. Let ε > 0. Since by Question 5 we have P(τ0 = ∞) = lim_{k→∞} P(τ0 > k), there exists N ≥ 1 such that

  |P(τ0 = ∞) − P(τ0 > k)| < ε,  k ≥ N.

Hence for n ≥ N we have

  | P(τ0 = ∞) − (1/n) Σ_{k=1}^{n} P(τ0 > k) | = | (1/n) Σ_{k=1}^{n} (P(τ0 = ∞) − P(τ0 > k)) |
    ≤ (1/n) Σ_{k=1}^{n} |P(τ0 = ∞) − P(τ0 > k)|
    ≤ (1/n) Σ_{k=1}^{N} |P(τ0 = ∞) − P(τ0 > k)| + (1/n) Σ_{k=N+1}^{n} |P(τ0 = ∞) − P(τ0 > k)|
    ≤ N/n + ((n − N)/n) ε
    ≤ N/n + ε.

Then, choosing N0 ≥ 1 such that (N + 1)/n ≤ ε for n ≥ N0, we get

  | P(τ0 = ∞) − (1/n) IE[Rn] | ≤ 1/n + | P(τ0 = ∞) − (1/n) Σ_{k=1}^{n} P(τ0 > k) | ≤ 2ε,  n ≥ N0,

which concludes the proof. Alternatively, the answer to that question can be derived by applying the Cesàro theorem, which states that in general we have

  lim_{n→∞} (1/n) Σ_{k=0}^{n} a_k = a

when the sequence (a_k)k∈N has the limit a, by taking a_k = P(τ0 > k), k ∈ N, since we have lim_{k→∞} P(τ0 > k) = P(τ0 = ∞).
8. It is known from the notes that we have P(τ0 = +∞) = |p − q|, hence

  lim_{n→∞} (1/n) IE[Rn] = |p − q|

when p ≠ 1/2, and

  lim_{n→∞} (1/n) IE[Rn] = 0

when p = 1/2.

13.4 Chapter 4 - Discrete-Time Markov Chains

Exercise 4.1. Let Sn denote the fortune of the player at time n ∈ N. The process (Sn)n∈N is a Markov chain whose transition matrix is given by

  [ Pi,j ]i,j∈N =
    [ 1  0  0  0  0  0  · · · ]
    [ p  0  q  0  0  0  · · · ]
    [ p  0  0  q  0  0  · · · ]
    [ p  0  0  0  q  0  · · · ]
    [ p  0  0  0  0  q  · · · ]
    [ ·  ·  ·  ·  ·  ·  · · · ]

After n steps we have

  P(Sn = n + 1 | S0 = 1) = q^n,  n ≥ 1,

and

  P(Sn = 0 | S0 = 1) = Σ_{l=0}^{n−1} p q^l = p (1 − q^n)/(1 − q) = 1 − q^n,  n ≥ 1,

and more generally

  P(Sn = n + k | S0 = k) = q^n,  k ≥ 1,

and

  P(Sn = 0 | S0 = k) = Σ_{l=0}^{n−1} p q^l = p (1 − q^n)/(1 − q) = 1 − q^n,  k ≥ 1,

hence P^n is given by row 0 = (1, 0, 0, . . .), and for every k ≥ 1,

  [P^n]k,0 = 1 − q^n,  [P^n]k,k+n = q^n,

all other entries of row k being 0.

We can also check by matrix multiplication that this relation is consistent with P^{n+1} = P × P^n, n ≥ 1: for k ≥ 1, row k of P has entry p in column 0 and q in column k + 1, hence row k of P × P^n has entry

  p × 1 + q(1 − q^n) = 1 − q^{n+1}

in column 0 and q × q^n = q^{n+1} in column k + n + 1, as claimed,
from the relation 1 − q + q(1 − q n ) = 1 − q n+1 , n ∈ N.
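As a numerical check of the closed form for P^n, the infinite chain can be truncated at a state N large enough that the entries of interest are unaffected (the truncation below is an illustration device, not part of the exercise):

```python
import numpy as np

p, q, N = 0.3, 0.7, 12           # q = 1 - p; truncate the infinite chain at state N
P = np.zeros((N + 1, N + 1))
P[0, 0] = 1.0                     # state 0 is absorbing
for i in range(1, N + 1):
    P[i, 0] = p                   # lose everything with probability p
    if i + 1 <= N:
        P[i, i + 1] = q           # win one unit with probability q
P[N, N] = q                       # crude boundary closure; entries with k + n <= N are exact

n, k = 4, 3                       # any k with k + n <= N is unaffected by the truncation
Pn = np.linalg.matrix_power(P, n)
assert abs(Pn[k, k + n] - q**n) < 1e-12
assert abs(Pn[k, 0] - (1 - q**n)) < 1e-12
```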

13.5 Chapter 5 - First Step Analysis

Exercise 5.1. This exercise is a particular case of the example of Section 5.2 by taking a = 0.3, b = 0, c = 0.7, d = 0, α = 0, β = 0.3, γ = 0, η = 0.7.

Exercise 5.2. This exercise is a particular case of Problem III.6.2 in [4] by taking α = β = 0.5. We observe that state 3 is absorbing. (Graph of the chain omitted: states 0, 1, 2 with transition probabilities 0.5, and state 3 absorbing.)

Let
  h3(k) = IE[T3 | X0 = k]

denote the mean time to reach state 3 starting from state k = 0, 1, 2, 3. We get

  h3(0) = 1 + (1/2)h3(0) + (1/2)h3(2)
  h3(1) = 1 + (1/2)h3(0)
  h3(2) = 1 + (1/2)h3(0) + (1/2)h3(1)
  h3(3) = 0,

which yields

  h3(3) = 0,  h3(1) = 8,  h3(2) = 12,  h3(0) = 14.
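The linear system above can also be solved numerically, writing it as (I − Q)h = 1 where Q is the restriction of the transition matrix to the transient states 0, 1, 2 (a standard check, not part of the original solution):

```python
import numpy as np

# Transient part Q read off from the system:
# h3(0) = 1 + 0.5 h3(0) + 0.5 h3(2), h3(1) = 1 + 0.5 h3(0), h3(2) = 1 + 0.5 h3(0) + 0.5 h3(1)
Q = np.array([[0.5, 0.0, 0.5],
              [0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0]])
h = np.linalg.solve(np.eye(3) - Q, np.ones(3))   # solves (I - Q) h = 1
print(h)                                          # close to [14, 8, 12]
```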

We check that h3(3) < h3(1) < h3(2) < h3(0), as can be expected from the graph.

Exercise 5.3.
1. The boundary conditions g(0) and g(N) are given by g(0) = 1 and g(N) = 0.
2. We have

  g(k) = P(T0 < TN | X0 = k)
       = Σ_{l=0}^{N} P(T0 < TN | X1 = l) P(X1 = l | X0 = k)
       = Σ_{l=0}^{N} P(T0 < TN | X1 = l) Pk,l
       = Σ_{l=0}^{N} P(T0 < TN | X0 = l) Pk,l
       = Σ_{l=0}^{N} g(l) Pk,l,  k = 0, 1, . . . , N.

3. We have

  [ Pi,j ]0≤i,j≤3 =
    [ 1        0             0             0      ]
    [ (2/3)³   3(1/3)(2/3)²  3(1/3)²(2/3)  (1/3)³ ]
    [ (1/3)³   3(1/3)²(2/3)  3(1/3)(2/3)²  (2/3)³ ]
    [ 0        0             0             1      ]

4. Letting g(k) = (N − k)/N, we check that g(k) satisfies the boundary conditions g(0) = 1 and g(N) = 0, and in addition we have

  Σ_{l=0}^{N} g(l) Pk,l = Σ_{l=0}^{N} ((N − l)/N) C(N, l) (k/N)^l (1 − k/N)^{N−l}
    = Σ_{l=0}^{N−1} (N!/((N − l)! l!)) ((N − l)/N) (k/N)^l (1 − k/N)^{N−l}
    = Σ_{l=0}^{N−1} ((N − 1)!/((N − l − 1)! l!)) (k/N)^l (1 − k/N)^{N−l}
    = (1 − k/N) Σ_{l=0}^{N−1} C(N − 1, l) (k/N)^l (1 − k/N)^{N−1−l}
    = (1 − k/N) (k/N + 1 − k/N)^{N−1}
    = (N − k)/N
    = g(k),  k = 0, 1, . . . , N.
5. The boundary conditions h(0) and h(N) are given by h(0) = 0 and h(N) = 0 since the states 0 and N are absorbing.
6. We have

  h(k) = IE[T0,N | X0 = k]
       = Σ_{l=0}^{N} (1 + IE[T0,N | X1 = l]) P(X1 = l | X0 = k)
       = Σ_{l=0}^{N} (1 + IE[T0,N | X1 = l]) Pk,l
       = Σ_{l=0}^{N} Pk,l + Σ_{l=0}^{N} IE[T0,N | X1 = l] Pk,l
       = 1 + Σ_{l=1}^{N−1} IE[T0,N | X1 = l] Pk,l
       = 1 + Σ_{l=1}^{N−1} h(l) Pk,l,  k = 1, . . . , N − 1.    (13.13)
7. In this case the Equation (13.13) reads

  h(0) = 0
  h(1) = 1 + (2/9)h(1) + (4/9)h(2)
  h(2) = 1 + (4/9)h(1) + (2/9)h(2)
  h(3) = 0,

which yields

  h(0) = 0,  h(1) = 3,  h(2) = 3,  h(3) = 0.

Exercise 5.4. (Problem III.5.4 in [4]). First we take a look at the complexity of the problem. Starting from 0 there are multiple ways to reach state 13 without reaching 11 or 12. For example: 3 + 4 + 1 + 5, or 1 + 6 + 3 + 3, or 1 + 1 + 2 + 1 + 3 + 1 + 4, etc. Clearly it would be difficult to count all such possibilities. For this reason we use the framework of Markov chains. We denote by Xn the cumulative sum at time n and choose to model it as a Markov chain. We have

  Xn = Σ_{k=1}^{n} ξk,  n ≥ 0,
where (ξk)k≥1 is a family of independent random variables uniformly distributed over {1, . . . , 6}. The process (Xn)n≥1 is a Markov chain since given the value of Xn at time n, the next value Xn+1 = Xn + ξn+1 depends only on Xn and not on the past before time n. The process (Xn)n≥1 is actually a random walk with independent increments ξ1, ξ2, . . .. The chain (Xn)n≥1 has the transition matrix

  [ Pi,j ]i,j∈N,  with Pi,j = 1/6 for j = i + 1, . . . , i + 6, and Pi,j = 0 otherwise.

Let Ti = inf{n ≥ 0 : Xn = i} denote the first hitting time of i ≥ 1. Note that since the chain is strictly increasing, for i < j we have either Ti < Tj or Ti = +∞. We will compute the probability

  gk := P(T13 < T11 and T13 < T12 | X0 = k) = P(T13 < ∞ and T11 = ∞ and T12 = ∞ | X0 = k),  k ∈ N,

which is the probability of reaching state 13 without previously hitting state 12 or state 11, and g0 is the answer to our problem.
The probabilities gk can be easily computed for k = 8, . . . , 13. We have gk = 0, k ≥ 14, and

  g13 = P(T13 < T12 and T13 < T11 | X0 = 13) = 1,
  g12 = P(T13 < T12 and T13 < T11 | X0 = 12) = 0,
  g11 = P(T13 < T12 and T13 < T11 | X0 = 11) = 0,
  g10 = P(T13 < T12 and T13 < T11 | X0 = 10) = 1/6,
  g9 = P(T13 < T12 and T13 < T11 | X0 = 9) = 1/6 + (1/6)(1/6) = 7/36,
  g8 = P(T13 < T12 and T13 < T11 | X0 = 8) = 1/6 + 2 × (1/6)(1/6) + (1/6)(1/6)(1/6) = 49/216,

where for the last term we note that there are only 4 ways to reach 13 from 8 without hitting 11 or 12, by the four combinations

  8 + 5,  8 + 1 + 4,  8 + 2 + 3,  8 + 1 + 1 + 3.
Clearly, things become easily complicated for k ≤ 7. To continue the calculation we rely on first step analysis. We have

  gk = (1/6) Σ_{i=1}^{6} gk+i,  k = 0, 1, . . . , 10,

i.e.

  g0 = (1/6)(g1 + g2 + g3 + g4 + g5 + g6)
  g1 = (1/6)(g2 + g3 + g4 + g5 + g6 + g7)
  g2 = (1/6)(g3 + g4 + g5 + g6 + g7 + g8)
  g3 = (1/6)(g4 + g5 + g6 + g7 + g8 + g9)
  g4 = (1/6)(g5 + g6 + g7 + g8 + g9 + g10)
  g5 = (1/6)(g6 + g7 + g8 + g9 + g10)
  g6 = (1/6)(g7 + g8 + g9 + g10)
  g7 = (1/6)(g8 + g9 + g10 + g13)
  g8 = (1/6)(g9 + g10 + g13)
  g9 = (1/6)(g10 + g13)
  g10 = (1/6)g13.

In order to solve this system of equations we rewrite it as

  g0 − g1 = (1/6)(g1 − g7)
  g1 − g2 = (1/6)(g2 − g8)
  g2 − g3 = (1/6)(g3 − g9)
  g3 − g4 = (1/6)(g4 − g10)
  g4 − g5 = (1/6)g5
  g5 − g6 = (1/6)g6
  g6 − g7 = (1/6)(g7 − 1)
  g7 − g8 = (1/6)g8
  g8 − g9 = (1/6)g9
  g9 − g10 = (1/6)g10
  g10 = (1/6)g13,

or

  g0 = (7^10 − 7^6 × 6^4 − 4 × 7^3 × 6^6)/6^11
  g1 = (7^9 − 7^5 × 6^4 − 3 × 7^2 × 6^6)/6^10
  g2 = (7^8 − 7^4 × 6^4 − 2 × 7 × 6^6)/6^9
  g3 = (7^7 − 7^3 × 6^4 − 6^6)/6^8
  g4 = (7^6 − 7^2 × 6^4)/6^7
  g5 = (7^5 − 7 × 6^4)/6^6
  g6 = (7^4 − 6^4)/6^5
  g7 = 7^3/6^4
  g8 = 7^2/6^3
  g9 = 7/6^2
  g10 = 1/6,

i.e. g0 = (7^10 − 7^6 × 6^4 − 4 × 7^3 × 6^6)/6^11 ≈ 0.181892636.
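The backward recursion gk = (1/6)(gk+1 + · · · + gk+6) with the boundary values g13 = 1, g11 = g12 = 0 and gk = 0 for k ≥ 14 can be run directly, which confirms the value of g0:

```python
# Solve the first step analysis recursion backwards from the boundary values.
g = {k: 0.0 for k in range(11, 20)}   # g11 = g12 = 0 and gk = 0 for k >= 14
g[13] = 1.0
for k in range(10, -1, -1):
    g[k] = sum(g[k + i] for i in range(1, 7)) / 6
print(g[0])                            # approximately 0.181892636
```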

Exercise 5.5.
1. The transition matrix is given by

  [ ×  ×  ×  ×  ×  × ]
  [ q  0  p  0  0  0 ]
  [ 0  q  0  p  0  0 ]
  [ 0  0  q  0  p  0 ]
  [ 0  0  0  q  0  p ]
  [ ×  ×  ×  ×  ×  × ]

The first and last lines of the matrix have not been completed because they have no influence on the result. We have g(0) = 0, g(5) = 1, and

  g(k) = q g(k − 1) + p g(k + 1),  1 ≤ k ≤ 4.

2. The probability that starting from state k the fish finds the food before getting shocked is

  g(k) = k/5,  k = 0, 1, . . . , 5.
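This boundary value problem can be checked by solving the corresponding linear system; the sketch below assumes the symmetric case p = q = 1/2, which is consistent with the stated answer g(k) = k/5:

```python
import numpy as np

# Interior equations g(k) = 0.5 g(k-1) + 0.5 g(k+1), 1 <= k <= 4,
# with boundary conditions g(0) = 0, g(5) = 1 (assuming p = q = 1/2).
A = np.zeros((6, 6))
b = np.zeros(6)
A[0, 0] = 1.0            # g(0) = 0
A[5, 5] = 1.0; b[5] = 1  # g(5) = 1
for k in range(1, 5):
    A[k, k] = 1.0
    A[k, k - 1] = -0.5
    A[k, k + 1] = -0.5
g = np.linalg.solve(A, b)
print(g)                 # close to [0, 0.2, 0.4, 0.6, 0.8, 1]
```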

Exercise 5.6.
1. The transition matrix is given by

  [ 1    0    0    0    0    0  · · · ]
  [ 1    0    0    0    0    0  · · · ]
  [ 1/2  1/2  0    0    0    0  · · · ]
  [ 1/3  1/3  1/3  0    0    0  · · · ]
  [ 1/4  1/4  1/4  1/4  0    0  · · · ]
  [ 1/5  1/5  1/5  1/5  1/5  0  · · · ]
  [ ·    ·    ·    ·    ·    ·  · · · ]

Note that an arbitrary choice has been made for the first line (i.e. state 0 is absorbing), however other choices would not change the answer to the question.
2. We have

  h0(m) = Σ_{k=0}^{m−1} (1/m)(1 + h0(k)) = 1 + (1/m) Σ_{k=0}^{m−1} h0(k),  m ≥ 1,

and h0(0) = 0, h0(1) = 1.
3. We show that h0(m) satisfies h0(m) = h0(m − 1) + 1/m, m ≥ 1, and that h0(m) = Σ_{k=1}^{m} 1/k, m ∈ N.

We have

  h0(m) = 1 + (1/m) Σ_{k=0}^{m−1} h0(k)
        = 1 + (1/m) h0(m − 1) + ((m − 1)/m) × (1/(m − 1)) Σ_{k=0}^{m−2} h0(k)
        = 1 + (1/m) h0(m − 1) − (m − 1)/m + ((m − 1)/m) ( 1 + (1/(m − 1)) Σ_{k=0}^{m−2} h0(k) )
        = 1 + (1/m) h0(m − 1) − (m − 1)/m + ((m − 1)/m) h0(m − 1)
        = 1 − (m − 1)/m + h0(m − 1)
        = h0(m − 1) + 1/m,  m ≥ 1,

hence

  h0(m) = h0(m − 1) + 1/m = h0(m − 2) + 1/(m − 1) + 1/m = · · · = Σ_{k=1}^{m} 1/k,  m ≥ 1.
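The identity between the first step analysis recursion and the harmonic numbers can be verified with exact rational arithmetic:

```python
from fractions import Fraction

# First step analysis recursion h0(m) = 1 + (1/m) * sum_{k < m} h0(k), h0(0) = 0,
# compared with the harmonic number sum_{k=1}^m 1/k.
h = [Fraction(0)]
for m in range(1, 11):
    h.append(1 + sum(h) / m)
harmonic = [sum(Fraction(1, k) for k in range(1, m + 1)) for m in range(11)]
print(h == harmonic)   # True
```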

We can also note that from the Markov property we have h0(m) = hm−1(m) + h0(m − 1), where the mean time hm−1(m) from m to m − 1 is equal to 1/m, since m − 1 is reached in one step from state m with probability 1/m.

Exercise 5.7.
1. (Graph of the chain omitted: from state 0 the chain moves to states 0, 1 and 3 with probability 1/3 each, state 1 leads to state 2, state 2 leads back to state 0, and state 3 is absorbing.)
2. We have

  P = [ 1/3  1/3  0  1/3 ]
      [ 0    0    1  0   ]
      [ 1    0    0  0   ]
      [ 0    0    0  1   ]

3. Let h3(i) denote the average time from state i to state 3, which solves

  h3(0) = (1/3)(1 + h3(0)) + (1/3)(1 + h3(1)) + 1/3
  h3(1) = 1 + h3(2)
  h3(2) = 1 + h3(0)
  h3(3) = 0,

which yields

  h3(0) = (1/3)(1 + h3(0)) + (1/3)(3 + h3(0)) + 1/3,

i.e. h3(0) = 5.
Additional comments:
a. The difficulty in this exercise is that the Markov chain itself is not given and one had to design a suitable Markov model, sometimes involving a Markov chain with weighted links. On the other hand, the equation h = 1 + P h cannot be directly used when the links have weights different from 1.
b. The problem can be solved in a simpler way with only 3 states, by putting a weight corresponding to the travel time on each link (graph omitted). Here the average time h2(i) from state i to state 2 solves

  h2(0) = (1/3)(1 + h2(0)) + (1/3)(1 + h2(1)) + (1/3) × 0
  h2(1) = 2 + h2(0)
  h2(2) = 0,

which yields

  h2(0) = (1/3)(1 + h2(0)) + (1/3)(3 + h2(0)) + (1/3) × 0,

i.e. h2(0) = 4.
c. The problem can even be further simplified using a graph which no longer uses the notion of Markov chain, and in which each link is weighted by a travel time (graph omitted). By first step analysis the average time t1(0) to travel from state 0 to state 1 is directly given by

  t1(0) = (1/3)(1 + t1(0)) + (1/3)(3 + t1(0)) + (1/3) × 0,

i.e. t1(0) = 4.
d. The problem could also be solved using 4 transient states and one absorbing state, with states labelled Tower, Exit 1, Exit 3 and Outside (graph omitted).
Exercise 5.8. Let t denote the expected time spent inside the maze. By first step analysis we have

  t = (1/2)(t + 3) + (1/6) × 2 + (2/6)(t + 5),

which yields t = 21. See the related Exercise 5.7 for a more detailed analysis.
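The arithmetic can be checked with exact fractions; the assignment of the weights 1/2, 1/6 and 2/6 to the three doors is reconstructed here so as to match the stated answer t = 21:

```python
from fractions import Fraction

# t = a t + c with a = 1/2 + 2/6 (doors that lead back into the maze)
# and c = (1/2)*3 + (1/6)*2 + (2/6)*5 (expected travel times).
a = Fraction(1, 2) + Fraction(2, 6)
c = Fraction(1, 2) * 3 + Fraction(1, 6) * 2 + Fraction(2, 6) * 5
t = c / (1 - a)
print(t)   # 21
```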

13.6 Chapter 6 - Classification of States

Exercise 6.1.
1. (Graph of the chain omitted.)
The communicating classes are {0} and {1, 2, 3}.
2. State 0 has period 1 and states 1, 2, 3 have period 3.
3. There are no absorbing states, state 0 is transient, and states 1, 2, 3 are recurrent by Corollary 1.
4. This Markov chain is reducible because its state space can be partitioned into two communicating classes as S = {0} ∪ {1, 2, 3}.

Exercise 6.2.
1. (Graph of the chain omitted.)
2. All states 0, 1, 2 and 3 have period 1. The chain is aperiodic.
3. State 4 is absorbing (and therefore recurrent), and all other states 0, 1, 2, 3 are transient.
4. The Markov chain is reducible because its state space S = {0, 1, 2, 3, 4} can be partitioned into two communicating classes {0, 1, 2, 3} and {4}.

Exercise 6.3. The communicating classes are {0}, {1}, {3}, {5}, and {2, 4}. State 3 has period 0, states 2 and 4 have period 2, and states 0, 1, 5 are aperiodic. States 0, 1, 3 are transient and states 2, 4, 5 are recurrent.
Exercise 6.4.
1. (Graph of the chain omitted.) The chain is reducible, with communicating classes {0, 2}, {1}, {3}.
2. States 0, 2, 3 are aperiodic, and state 1 has period 0. There are no absorbing states. States 1 and 3 are transient, states 0 and 2 are recurrent, and they are also positive recurrent since the state space is finite.

13.7 Chapter 7 - Limiting and Stationary Distributions

Exercise 7.1.
1. Clearly the transition from the current state to the next state depends only on the current state of the chain, hence the process is Markov. The transition matrix of the chain on the state space S = (D, N) is

  P = [ 1 − a  a     ] = [ 1/4  3/4 ]
      [ b      1 − b ]   [ 1/4  3/4 ]

2. The stationary distribution π = (πD, πN) is solution of π = πP, i.e.

  πD = (1 − a)πD + bπN = (1/4)πD + (1/4)πN
  πN = aπD + (1 − b)πN = (3/4)πD + (3/4)πN
under the condition πD + πN = 1, which yields πD = b/(a + b) = 1/4 and πN = a/(a + b) = 3/4.
3. In the long run the fraction of distorted signals is πD = 1/4 = 25%.
4. The average time µN(D) to reach state N starting from state D satisfies

  µN(D) = (1 − a)(1 + µN(D)) + a,    (13.14)

hence µN(D) = 1/a = 4/3.
Additional comments:
a. In Equation (13.14) above, note that µN(N) is not present on the right-hand side.
b. The value of µN(D) may also be recovered as

  µN(D) = (3/4) Σ_{k=1}^{∞} k (1/4)^{k−1} = (3/4) × 1/(1 − 1/4)² = 4/3,

cf. (13.2).
5. The average time µD(N) to reach state D starting from state N satisfies

  µD(N) = (1 − b)(1 + µD(N)) + b,    (13.15)

hence µD(N) = 1/b = 4.
Additional comments:
a. In Equation (13.15) above, note that µD(D) is not present on the right-hand side.
b. The value of µD(N) may also be recovered as

  µD(N) = (1/4) Σ_{k=1}^{∞} k (3/4)^{k−1} = (1/4) × 1/(1 − 3/4)² = 4,

cf. (13.2).
c. The values of µD(D) and µN(N) can be computed from µD(D) = 1/πD and µN(N) = 1/πN.

Exercise 7.2. (Exercise IV.4.2 in [4]).
1. We find

  π0 = c × 161,  π1 = c × 460,  π2 = c × 320,  π3 = c × 170,

where

  c = 1/(161 + 460 + 320 + 170) = 1/1111.

2. We have

  µ0(1) = 950/161,  µ0(2) = 860/161,  µ0(3) = 790/161.

3. We find

  µ0(0) = 1 + µ0(1) = 1 + 950/161 = (161 + 950)/161 = 1111/161,

hence the relation π0 = 1/µ0(0)
which has for solution π0 = 2c,

π1 = 2c,

π2 = 3c,

π3 = 3c,

see here, under the condition π0 + π1 + π2 + π3 = 1, 243

This version: November 28, 2011 http://www.ntu.edu.sg/home/nprivault/indext.html


N. Privault

which yields c = 1/10 and

  π0 = 20%,  π1 = 20%,  π2 = 30%,  π3 = 30%.
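The stationary distribution can be recomputed numerically by solving π = πP together with the normalization condition; the transition matrix below is read off from the four equations above:

```python
import numpy as np

# Transition matrix reconstructed from the equations of Exercise 7.3.
P = np.array([[0,   1/2, 0,   1/2],
              [1/4, 0,   3/4, 0  ],
              [0,   1/3, 0,   2/3],
              [1/2, 0,   1/2, 0  ]])
# Replace one redundant balance equation with sum(pi) = 1.
A = np.vstack([(P.T - np.eye(4))[:-1], np.ones(4)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
print(pi)   # close to [0.2, 0.2, 0.3, 0.3]
```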

Exercise 7.4. (Exercise IV.2.6 in [4]). We choose to model the problem on the state space {1, 2, 3, 4}, meaning that the replacement of a component is immediate upon failure. Let Xn denote the remaining active time of the component at time n. At any time n ≥ 1 we have

  Xn = 4 =⇒ Xn+1 = 3 =⇒ Xn+2 = 2 =⇒ Xn+3 = 1,

whereas when Xn = 1 the component will become inactive at the next time step and will be immediately replaced by a new component of random lifetime T ∈ {1, 2, 3, 4}. Hence we have

  P(Xn+1 = k | Xn = 1) = P(T = k),  k = 1, 2, 3, 4,

and the process (Xn)n∈N is a Markov chain on S = {1, 2, 3, 4}, with transition matrix

  P = [ P(T = 1)  P(T = 2)  P(T = 3)  P(T = 4) ]   [ 0.1  0.2  0.3  0.4 ]
      [ 1         0         0         0        ] = [ 1    0    0    0   ]
      [ 0         1         0         0        ]   [ 0    1    0    0   ]
      [ 0         0         1         0        ]   [ 0    0    1    0   ]

We are looking for the limit lim_{n→∞} P(Xn = 1). Since the chain is irreducible, aperiodic (all states are checked to have period 1) and its state space is finite, we know that

  π1 = lim_{n→∞} P(Xn = 1),

where π = (π1, π2, π3, π4) is the stationary distribution uniquely determined from the equation π = πP, as follows:

  π1 = 0.1π1 + π2
  π2 = 0.2π1 + π3
  π3 = 0.3π1 + π4
  π4 = 0.4π1,

hence

  π2 = 0.9π1,  π3 = 0.7π1,  π4 = 0.4π1,

under the condition π1 + π2 + π3 + π4 = 1, i.e. π1(1 + 0.9 + 0.7 + 0.4) = 1, which yields

  π1 = 1/3,  π2 = 9/30,  π3 = 7/30,  π4 = 4/30.
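The limiting value π1 = 1/3 can also be recovered by power iteration, since πP^n converges to the stationary distribution for this irreducible aperiodic chain:

```python
import numpy as np

P = np.array([[0.1, 0.2, 0.3, 0.4],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
pi = np.full(4, 0.25)        # any initial distribution works
for _ in range(2000):        # power iteration: pi P^n tends to the stationary distribution
    pi = pi @ P
print(pi[0])                 # close to 1/3
```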

Exercise 7.5. (Graph of the chain omitted.) We note that the chain is reducible, and that its state space S can be partitioned into 4 communicating classes:

  S = {A, B} ∪ {C} ∪ {D} ∪ {E},

where {A, B} and {E} are absorbing and C, D are transient. Starting from C, one can only return to C or end up in one of the absorbing classes {A, B} or {E}. Let us denote by

  τ{A,B} = inf{n ≥ 0 : Xn ∈ {A, B}}

the hitting time of {A, B}. By first step analysis we find that P(τ{A,B} < ∞ | X0 = C) satisfies

  P(τ{A,B} < ∞ | X0 = C) = 0.2 + 0.4 P(τ{A,B} < ∞ | X0 = C),

hence

  P(τ{A,B} < ∞ | X0 = C) = 1/3,

which can also be recovered using geometric series as

  P(τ{A,B} < ∞ | X0 = C) = 0.2 Σ_{n=0}^{∞} (0.4)^n = 0.2/(1 − 0.4) = 1/3.

On the other hand, starting from any state within {A, B}, the long run probability of being in A is given by

  lim_{n→∞} P(Xn = A | X0 ∈ {A, B}) = 0.3/(0.3 + 0.4) = 3/7,

hence the result is

  lim_{n→∞} P(Xn = A | X0 = C) = P(τ{A,B} < ∞ | X0 = C) × lim_{n→∞} P(Xn = A | X0 ∈ {A, B}) = (1/3) × (3/7) = 1/7.

On the other hand, the computation of the 20th power of P shows that

  P^20 ≈ [ 0.428571  0.571428  0              0              0        ]
         [ 0.428571  0.571428  0              0              0        ]
         [ 0.142857  0.190476  1.099511×10⁻⁸  0              0.666666 ]
         [ 0.250000  0.333333  1.099510×10⁻⁸  1.048576×10⁻¹⁴ 0.416666 ]
         [ 0         0         0              0              1        ]

which recovers

  lim_{n→∞} P(Xn = A | X0 = C) = 1/7 ≈ 0.142857

up to 6 decimals, and one can reasonably conjecture that

  lim_{n→∞} P^n = [ 3/7  4/7   0  0  0    ]
                  [ 3/7  4/7   0  0  0    ]
                  [ 1/7  4/21  0  0  2/3  ]
                  [ 1/4  1/3   0  0  5/12 ]
                  [ 0    0     0  0  1    ]

which could also be recovered term by term using the above method. From this matrix one also sees clearly that C and D are transient states. It is also

of interest to note that lim_{n→∞} P(Xn = i | X0 = j) is dependent on the initial state j. This is because the chain is not irreducible. Also the solution of π = πP is not unique: for example π = (3/7, 4/7, 0, 0, 0) and π = (1, 0, 0, 0, 0) are both stationary distributions, which cannot yield the limiting probabilities in this case, except if the initial state belongs to {A, B}, which is a communicating class.

Exercise 7.6.
1. The transition matrix P of the chain on the state space S = (C, T) is given by

  P = [ 4/5  1/5 ]
      [ 3/4  1/4 ]

2. The stationary distribution π = (πC, πT) is solution of π = πP, i.e.

  πC = (4/5)πC + (3/4)πT
  πT = (1/5)πC + (1/4)πT,

under the condition πC + πT = 1, which yields πC = 15/19 and πT = 4/19.
3. In the long run, 4 out of 19 vehicles are trucks.
4. Let µT(C) and µT(T) denote the mean return times to state T starting from C and T, respectively. By first step analysis we have

  µT(C) = 1 + (4/5)µT(C)
  µT(T) = 1 + (3/4)µT(C),

which has for solution µT(C) = 5 and µT(T) = 19/4. Consequently, it takes on average 19/4 = 4.75 vehicles after a truck until the next truck is seen under the bridge.

Exercise 7.7. (Exercise IV.4.1 in [4]).
1. We solve the system

  π0 = q(π0 + π1 + π2 + π3) + π4 = q + pπ4
  π1 = pπ0
  π2 = pπ1 = p²π0
  π3 = pπ2 = p³π0
  π4 = pπ3 = p⁴π0,

hence

  1 = π0 + π1 + π2 + π3 + π4 = π0(1 + p + p² + p³ + p⁴),

and

  π0 = 1/(1 + p + p² + p³ + p⁴),  π1 = p/(1 + p + p² + p³ + p⁴),  π2 = p²/(1 + p + p² + p³ + p⁴),  π3 = p³/(1 + p + p² + p³ + p⁴),  π4 = p⁴/(1 + p + p² + p³ + p⁴).
2. Since the chain is irreducible and aperiodic with finite state space, its limiting distribution coincides with its stationary distribution.

Exercise 7.8. (Problem IV.1.5 in [4]).
1. The transition matrix P is given by

  P = [ 0    1/2  0    1/2 ]
      [ 1/3  0    1/3  1/3 ]
      [ 0    1    0    0   ]
      [ 1/2  1/2  0    0   ]

2. Solving for πP = π we have

  πP = [ (1/3)πB + (1/2)πD,  (1/2)πA + πC + (1/2)πD,  (1/3)πB,  (1/2)πA + (1/3)πB ] = [πA, πB, πC, πD],

i.e. πA = πD = 2πC and πB = 3πC, which, under the condition πA + πB + πC + πD = 1, gives πA = 1/4, πB = 3/8, πC = 1/8, πD = 1/4.
3. This probability is πD = 0.25.
4. This average time is 1/πD = 4.
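The candidate stationary distribution can be verified directly against the transition matrix:

```python
import numpy as np

P = np.array([[0,   1/2, 0,   1/2],
              [1/3, 0,   1/3, 1/3],
              [0,   1,   0,   0  ],
              [1/2, 1/2, 0,   0  ]])
pi = np.array([1/4, 3/8, 1/8, 1/4])        # candidate stationary distribution
print(np.allclose(pi @ P, pi), pi.sum())   # True 1.0
```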

13.8 Chapter 8 - Branching Processes

Exercise 8.1.
1. We have

  G(s) = E[s^Y] = s⁰ P(Y = 0) + s¹ P(Y = 1) = 1/2 + s/2,  s ∈ R.

2. We prove this statement by induction. Clearly at the order 1 we have G1(s) = G(s). Next, assuming that (8.10) holds at the order n ≥ 1, we get

  Gn+1(s) = G(Gn(s)) = G(1 − 1/2^n + s/2^n) = 1/2 + (1/2)(1 − 1/2^n + s/2^n) = 1 − 1/2^{n+1} + s/2^{n+1}.

Additional comments:
a. We may also directly note that

  P(Xn = 1 | X0 = 1) = P(Y1 = 1, . . . , Yn = 1) = (P(Y = 1))^n = 1/2^n,

hence

  P(Xn = 0 | X0 = 1) = 1 − P(Xn = 1 | X0 = 1) = 1 − 1/2^n,

and

  Gn(s) = P(Xn = 0 | X0 = 1) + s P(Xn = 1 | X0 = 1) = 1 − 1/2^n + s/2^n.

b. It is also possible to write

  Gn(s) = 1/2 + 1/2² + · · · + 1/2^n + s/2^n = s/2^n + (1/2) Σ_{k=0}^{n−1} 1/2^k = s/2^n + (1/2)(1 − (1/2)^n)/(1 − 1/2) = 1 − 1/2^n + s/2^n,

although this is not recommended here.
c. It is wrong (why?) to write Gn(s) = E[s^{Y1+···+Y_{Xn}}] = E[s^{Y1}] · · · E[s^{Y_{Xn}}]. Note that the left hand side is a well-defined deterministic number, while the right hand side is not well-defined as a random product.
d. For n ≥ 2 we do not have Gn(s) = (G(s))^n.
3. We have

  P(Xn = 0 | X0 = 1) = Gn(0) = 1 − 1/2^n.

Additional comments:
a. We may also directly write

  P(Xn = 0 | X0 = 1) = 1 − P(Xn = 1 | X0 = 1) = 1 − P(Y1 = 1, . . . , Yn = 1) = 1 − (P(Y1 = 1))^n = 1 − 1/2^n.

On the other hand we do not have P(Xn = 0 | X0 = 1) = (P(Y1 = 0))^n, since the events {Xn = 0} and {Y1 = 0, . . . , Yn = 0} are not equivalent; more precisely we only have {Y1 = 0, . . . , Yn = 0} ⊊ {Xn = 0}, hence

  1/2^n ≤ P(Xn = 0 | X0 = 1) = 1 − 1/2^n,  n ≥ 1.

b. The probability βn := P(Xn = 0 | X0 = 1) is not solution of Gn(βn) = βn: it is easy to check that the equality Gn(1 − 1/2^n) = 1 − 1/2^n does not hold for n ≥ 1.
c. In fact, {Xn = 0} means that extinction occurs at time n, or has already occurred before time n.

This version: November 28, 2011 http://www.ntu.edu.sg/home/nprivault/indext.html


Notes on Markov Chains

4. We have E[Xn | X0 = 1] = G0n (s)|s=1 =

1 . 2n

Additional comment: In this simple setting we could also write E[Xn | X0 = 1] = 0 × P(Xn = 1 | X0 = 1) + 1 × P(Xn = 1 | X0 = 1) = P(Xn = 1 | X0 = 1) 1 = n. 2 5. The extinction probability α is solution of G(α) = α, i.e. α=

1 1 + α, 2 2

with unique solution α = 1. Additional comments: a. Since the sequence of events ({Xn = 0})n≥1 is increasing, we also have   [ 1 α = P {Xn = 0} = lim P({Xn = 0}) = lim 1 − n = 1. n→∞ n→∞ 2 n≥1
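The formula Gn(0) = P(Xn = 0 | X0 = 1) = 1 − 1/2^n can be reproduced by simply iterating the generating function:

```python
# Iterating G(s) = 1/2 + s/2 gives Gn(0) = 1 - 1/2**n exactly
# (all quantities involved are dyadic, hence exact in floating point).
G = lambda s: 0.5 + 0.5 * s
s = 0.0
for n in range(1, 11):
    s = G(s)                           # s = Gn(0) after n iterations
    assert s == 1 - 0.5**n
print(s)   # 1 - 1/2**10 = 0.9990234375
```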

Exercise 8.2.
1. We have

  G(s) = P(Y = 0) + sP(Y = 1) + s²P(Y = 2) = as² + bs + c,  s ∈ R.

2. Letting Xn denote the number of individuals in the population at time n, we have

  P(X2 = 0 | X0 = 1) = G(G(0)) = G(c) = ac² + bc + c.

3. We have

  P(X2 = 0 | X0 = 2) = (P(X2 = 0 | X0 = 1))² = (ac² + bc + c)².

4. The extinction probability α1 given that X0 = 1 is solution of G(α) = α, i.e. aα² + bα + c = α, or

  0 = aα² − (a + c)α + c = (α − 1)(aα − c),

from the condition a + b + c = 1. The extinction probability α1 is known to be the smallest solution of G(α) = α, hence α1 = c/a when 0 < c ≤ a, and α1 = 1 when 0 ≤ a ≤ c. The extinction probability α2 given that X0 = 2 is given by α2 = (α1)².

Exercise 8.3.
1. We compute the probability that only red cells are generated, which is

  Π_{k=0}^{n−1} (1/4)^{2^k}.

2. Since white cells cannot reproduce, the extinction of the culture is equivalent to the extinction of the red cells. The probability distribution of the number Y of red cells produced from one red cell is

  P(Y = 0) = 1/12,  P(Y = 1) = 2/3,  P(Y = 2) = 1/4,

which has the generating function

  G(s) = P(Y = 0) + sP(Y = 1) + s²P(Y = 2) = 1/12 + 2s/3 + s²/4 = (1 + 8s + 3s²)/12,

hence the equation G(α) = α reads

  3α² − 4α + 1 = (3α − 1)(α − 1) = 0,

which has α = 1/3 for smallest solution. Consequently the extinction probability of the culture is equal to 1/3.

Exercise 8.4.
1. We have

  IE[Zn] = Σ_{k=1}^{n} IE[Xk] = µ Σ_{k=0}^{n−1} µ^k = µ (1 − µ^n)/(1 − µ),  n ∈ N.

2. We have

  IE[Z] = IE[ Σ_{k=1}^{∞} Xk ] = Σ_{k=1}^{∞} IE[Xk] = µ Σ_{k=0}^{∞} µ^k = µ/(1 − µ),

provided µ < 1.
3. We have

  H(s) = IE[s^Z | X0 = 1]
       = P(Y1 = 0) + Σ_{k=1}^{∞} IE[s^Z | X1 = k] P(Y1 = k)
       = P(Y1 = 0) + Σ_{k=1}^{∞} (IE[s^{Z+1} | X0 = 1])^k P(Y1 = k)
       = Σ_{k=0}^{∞} (s IE[s^Z | X0 = 1])^k P(Y1 = k)

       = G(sH(s)).

4. We have

  H(s) = G(sH(s)) = (1 − p)/(1 − psH(s)),

hence psH²(s) − H(s) + q = 0, and

  H(s) = (1 − √(1 − 4pqs))/(2ps),

where we have chosen the minus sign since the plus sign leads to H(0) = +∞. In addition we have µ = p/q < 1, hence p < 1/2 < q, and the minus sign gives

  H(1) = (1 − √(1 − 4pq))/(2p) = (1 − |q − p|)/(2p) = 1.

5. We have

  lim_{s→0+} H(s) = lim_{s→0+} (1 − (1 − 2pqs))/(2ps) = q = P(Z = 0) = P(Y1 = 0) = H(0).

Alternatively, L’Hospital’s rule can be used to compute the limit of H(s) expressed as a ratio.

253

This version: November 28, 2011 http://www.ntu.edu.sg/home/nprivault/indext.html


N. Privault

6. We have

  H′(s) = pq/(ps√(1 − 4pqs)) − (1 − √(1 − 4pqs))/(2ps²),

and

  H′(1) = pq/(p√(1 − 4pq)) − (1 − √(1 − 4pq))/(2p)
        = pq/(p(q − p)) − (1 − (q − p))/(2p)
        = q/(q − p) − 1
        = p/(q − p)
        = µ/(1 − µ),

with µ = p/q for p < 1/2, which shows that IE[Z] = µ/(1 − µ)

and recovers the result of Question 2.
7. We have

  IE[ Σ_{k=1}^{Z} Uk ] = Σ_{n=0}^{∞} IE[ Σ_{k=1}^{Z} Uk | Z = n ] P(Z = n)
                       = Σ_{n=0}^{∞} IE[ Σ_{k=1}^{n} Uk ] P(Z = n)
                       = Σ_{n=0}^{∞} n IE[U1] P(Z = n)

= IE[U1 ] IE[Z] µ = IE[U1 ] . 1−µ 8. We have P(Uk < x, k = 1, . . . , Z) = = =

∞ X n=0 ∞ X n=0 ∞ X

P(Uk < x, k = 1, . . . , Z | Z = n)P(Z = n) P(Uk < x, k = 1, . . . , n)P(Z = n) P(U1 < x)n P(Z = n)

n=0

254

This version: November 28, 2011 http://www.ntu.edu.sg/home/nprivault/indext.html


Notes on Markov Chains

=

∞ X

F (x)n P(Z = n)

n=0

                             = H(F(x)),

under the convention that the condition is satisfied by default when Z = 0.
Remark: We could also compute the same probability given that Z ≥ 1, and this would give

  P(Uk < x, k = 1, . . . , Z | Z ≥ 1) = (1/P(Z ≥ 1)) Σ_{n=1}^{∞} P(Uk < x, k = 1, . . . , Z | Z = n) P(Z = n)
                                     = (1/P(Z ≥ 1)) Σ_{n=1}^{∞} P(Uk < x, k = 1, . . . , n) P(Z = n)
                                     = (1/P(Z ≥ 1)) Σ_{n=1}^{∞} P(U1 < x)^n P(Z = n)
                                     = (1/P(Z ≥ 1)) Σ_{n=1}^{∞} F(x)^n P(Z = n)
                                     = (1/P(Z ≥ 1)) (H(F(x)) − P(Z = 0))
                                     = (H(F(x)) − q)/p.

9. We have

  IE[ Σ_{k=1}^{Z} Uk ] = IE[U1] µ/(1 − µ) = µ/(1 − µ) = p/(q − p).

We find

  P(Uk < x, k = 1, . . . , Z) = H(F(x)) = H(1 − e^{−x}) = (1 − √(1 − 4pq(1 − e^{−x})))/(2p(1 − e^{−x})).

Exercise 8.5. (Problem III.9.9 in [4]).
1. We have P(X = k) = (1/2)^{k+1}, k ∈ N.
2. The probability generating function of X is given by

  GX(s) = E[s^X] = (1/2) Σ_{k=0}^{∞} (s/2)^k = 1/(2 − s).

3. The probability we are looking for is

  GX(GX(GX(0))) = 1/(2 − 1/(2 − 1/2)) = 3/4.

13.9 Chapter 9 - Continuous-Time Markov Chains

Exercise 9.1. (Problem VI.1.5 in [4]). We have
\[
\begin{aligned}
\mathrm{P}(W_1>t,\,W_2>t+s) &= \mathrm{P}(X_t=0,\ X_{s+t}\in\{0,1\} \mid X_0=0)\\
&= \mathrm{P}(X_{s+t}\in\{0,1\} \mid X_t=0)\,\mathrm{P}(X_t=0 \mid X_0=0)\\
&= \mathrm{P}(X_s\in\{0,1\} \mid X_0=0)\,\mathrm{P}(X_t=0 \mid X_0=0)\\
&= \big(\mathrm{P}(X_s=0 \mid X_0=0)+\mathrm{P}(X_s=1 \mid X_0=0)\big)\,\mathrm{P}(X_t=0 \mid X_0=0)\\
&= P_{0,0}(t)\big(P_{0,0}(s)+P_{0,1}(s)\big).
\end{aligned}
\]
Next, we note that we have
\[
P_{0,0}(t) = e^{-\lambda_0 t}, \qquad t \in \mathbb{R}_+,
\]
and
\[
P_{0,1}(t) = \frac{\lambda_0}{\lambda_1-\lambda_0}\big(e^{-\lambda_0 t}-e^{-\lambda_1 t}\big), \qquad t \in \mathbb{R}_+, \tag{13.16}
\]
see e.g. (13.17) below or page 338 of [4], hence
\[
\begin{aligned}
\mathrm{P}(W_1>t,\,W_2>t+s) &= e^{-\lambda_0(s+t)} + \frac{\lambda_0}{\lambda_1-\lambda_0}\big(e^{-\lambda_0(s+t)} - e^{-\lambda_0 t-\lambda_1 s}\big)\\
&= \frac{\lambda_1}{\lambda_1-\lambda_0}\,e^{-\lambda_0(s+t)} - \frac{\lambda_0}{\lambda_1-\lambda_0}\,e^{-\lambda_0 t-\lambda_1 s}.
\end{aligned}
\]

Then, since
\[
\mathrm{P}(W_1>x,\,W_2>y) = \int_x^\infty\!\!\int_y^\infty f_{W_1,W_2}(u,v)\,du\,dv,
\]
we get
\[
f_{W_1,W_2}(x,y) = \frac{\partial^2}{\partial y\,\partial x}\,\mathrm{P}(W_1>x,\,W_2>y)
\]

\[
\begin{aligned}
&= \frac{\partial^2}{\partial y\,\partial x}\Big(\frac{\lambda_1}{\lambda_1-\lambda_0}\,e^{-\lambda_0 y} - \frac{\lambda_0}{\lambda_1-\lambda_0}\,e^{-\lambda_0 x-\lambda_1(y-x)}\Big)\\
&= \frac{\partial}{\partial y}\Big(-\lambda_0\,e^{-\lambda_0 x-\lambda_1(y-x)}\Big)\\
&= \lambda_0\lambda_1\,e^{-\lambda_0 x-\lambda_1(y-x)},
\end{aligned}
\]
provided $y \ge x \ge 0$. When $x > y \ge 0$ we have $f_{W_1,W_2}(x,y) = 0$. The density of $(S_0,S_1)$ is given by
\[
f_{S_0,S_1}(x,y) = f_{W_1,W_2}(x,x+y) = \lambda_0\lambda_1\,e^{-\lambda_0 x-\lambda_1 y}, \qquad x,y \in \mathbb{R}_+,
\]
which shows that $S_0$ and $S_1$ are two independent exponentially distributed random variables with parameters $\lambda_0$ and $\lambda_1$, respectively.

Exercise 9.2.
1. We have
\[
\begin{aligned}
\mathrm{P}(X_3=5 \mid X_1=1) &= \frac{\mathrm{P}(X_3=5 \text{ and } X_1=1)}{\mathrm{P}(X_1=1)}\\
&= \frac{\mathrm{P}(X_3-X_1=4 \text{ and } X_1=1)}{\mathrm{P}(X_1=1)}\\
&= \frac{\mathrm{P}(X_3-X_1=4)\,\mathrm{P}(X_1=1)}{\mathrm{P}(X_1=1)}\\
&= \mathrm{P}(X_3-X_1=4)
\end{aligned}
\]

\[
= \mathrm{P}(X_2=4) = \frac{(2\lambda)^4}{4!}\,e^{-2\lambda}.
\]
2. We have, using the independence of increments,
\[
\begin{aligned}
\mathrm{IE}[X_1X_5(X_3-X_2)] &= \mathrm{IE}[X_1(X_5-X_3)(X_3-X_2)] + \mathrm{IE}[X_1(X_3-X_2)^2]\\
&\quad + \mathrm{IE}[X_1(X_2-X_1)(X_3-X_2)] + \mathrm{IE}[X_1^2(X_3-X_2)]\\
&= \mathrm{IE}[X_1]\,\mathrm{IE}[X_5-X_3]\,\mathrm{IE}[X_3-X_2] + \mathrm{IE}[X_1]\,\mathrm{IE}[(X_3-X_2)^2]\\
&\quad + \mathrm{IE}[X_1]\,\mathrm{IE}[X_2-X_1]\,\mathrm{IE}[X_3-X_2] + \mathrm{IE}[X_1^2]\,\mathrm{IE}[X_3-X_2]
\end{aligned}
\]

This version: November 28, 2011 http://www.ntu.edu.sg/home/nprivault/indext.html


N. Privault

\[
= 2\lambda^3 + \lambda(\lambda+\lambda^2) + \lambda^3 + (\lambda+\lambda^2)\lambda = 5\lambda^3 + 2\lambda^2.
\]
3. We have $\{W_2 > t\} = \{X_t \le 1\}$, $t \in \mathbb{R}_+$, hence
\[
\mathrm{IE}[X_2 \mid W_2>1] = \mathrm{IE}[X_2 \mid X_1\le 1] = \frac{1}{\mathrm{P}(X_1\le 1)}\,\mathrm{IE}\big[X_2\mathbf{1}_{\{X_1\le 1\}}\big],
\]
by (1.8). Now we have
\[
\begin{aligned}
\mathrm{IE}\big[X_2\mathbf{1}_{\{X_1\le 1\}}\big] &= \mathrm{IE}\big[(X_2-X_1)\mathbf{1}_{\{X_1\le 1\}}\big] + \mathrm{IE}\big[X_1\mathbf{1}_{\{X_1\le 1\}}\big]\\
&= \mathrm{IE}[X_2-X_1]\,\mathrm{IE}\big[\mathbf{1}_{\{X_1\le 1\}}\big] + \mathrm{IE}\big[X_1\mathbf{1}_{\{X_1\le 1\}}\big]\\
&= \mathrm{IE}[X_1]\,\mathrm{P}(X_1\le 1) + \mathrm{IE}\big[X_1\mathbf{1}_{\{X_1\le 1\}}\big],
\end{aligned}
\]
hence
\[
\begin{aligned}
\mathrm{IE}[X_2 \mid W_2>1] &= \frac{1}{\mathrm{P}(X_1\le 1)}\,\mathrm{IE}\big[X_2\mathbf{1}_{\{X_1\le 1\}}\big]\\
&= \mathrm{IE}[X_1] + \frac{1}{\mathrm{P}(X_1\le 1)}\sum_{k=0}^1 k\,\mathrm{P}(X_1=k)\\
&= \mathrm{IE}[X_1] + \frac{\mathrm{P}(X_1=1)}{\mathrm{P}(X_1\le 1)}\\
&= \lambda + \frac{\lambda e^{-\lambda}}{e^{-\lambda}+\lambda e^{-\lambda}} = \lambda + \frac{\lambda}{1+\lambda} = \frac{2\lambda+\lambda^2}{1+\lambda}.
\end{aligned}
\]
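Both identities of this exercise can be cross-checked by seeded Monte Carlo simulation of the Poisson process (a sketch; the parameter value $\lambda = 0.7$ and sample size are hypothetical choices):

```python
import numpy as np

# Monte Carlo cross-check of IE[X1 X5 (X3 - X2)] = 5*lam**3 + 2*lam**2
# and of IE[X2 | X1 <= 1] = (2*lam + lam**2) / (1 + lam).
rng = np.random.default_rng(0)
lam, N = 0.7, 200_000

# Build X_t ~ Poisson(lam * t) from independent increments, so X1 <= X2 <= X3 <= X5.
X1 = rng.poisson(lam * 1, N)
X2 = X1 + rng.poisson(lam * 1, N)
X3 = X2 + rng.poisson(lam * 1, N)
X5 = X3 + rng.poisson(lam * 2, N)

moment = np.mean(X1 * X5 * (X3 - X2))   # should be near 5*lam**3 + 2*lam**2
cond = np.mean(X2[X1 <= 1])             # should be near (2*lam + lam**2)/(1 + lam)
```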

Exercise 9.3. (Exercise VI.1.1 in [4]). The generator Q of this pure birth process is given by


\[
Q = [\,\lambda_{i,j}\,]_{i,j\in\mathbb{N}} =
\begin{pmatrix}
-1 & 1 & 0 & 0 & 0 & \cdots\\
0 & -3 & 3 & 0 & 0 & \cdots\\
0 & 0 & -2 & 2 & 0 & \cdots\\
0 & 0 & 0 & -5 & 5 & \cdots\\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix},
\]
hence the forward Kolmogorov equation $P'(t) = P(t)Q$ reads
\[
\begin{pmatrix}
P_{0,0}'(t) & P_{0,1}'(t) & P_{0,2}'(t) & P_{0,3}'(t) & \cdots\\
P_{1,0}'(t) & P_{1,1}'(t) & P_{1,2}'(t) & P_{1,3}'(t) & \cdots\\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}
=
\begin{pmatrix}
P_{0,0}(t) & P_{0,1}(t) & P_{0,2}(t) & P_{0,3}(t) & \cdots\\
P_{1,0}(t) & P_{1,1}(t) & P_{1,2}(t) & P_{1,3}(t) & \cdots\\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}
\begin{pmatrix}
-1 & 1 & 0 & 0 & \cdots\\
0 & -3 & 3 & 0 & \cdots\\
0 & 0 & -2 & 2 & \cdots\\
0 & 0 & 0 & -5 & \cdots\\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix},
\]
which yields
\[
\begin{cases}
P_{0,0}'(t) = -P_{0,0}(t),\\
P_{0,1}'(t) = P_{0,0}(t) - 3P_{0,1}(t),\\
P_{0,2}'(t) = 3P_{0,1}(t) - 2P_{0,2}(t),\\
P_{0,3}'(t) = 2P_{0,2}(t) - 5P_{0,3}(t).
\end{cases}
\]

The first equation is solved as
\[
P_{0,0}(t) = e^{-t}, \qquad t \in \mathbb{R}_+,
\]
and this solution can be easily recovered from
\[
P_{0,0}(t) = \mathrm{P}(X_t=0 \mid X_0=0) = \mathrm{P}(\tau_0>t) = e^{-t}, \qquad t \in \mathbb{R}_+.
\]
The second equation becomes
\[
P_{0,1}'(t) = e^{-t} - 3P_{0,1}(t),
\]
which has $t \mapsto C_1e^{-3t}$ as homogeneous solution. We note that $t \mapsto C_2e^{-t}$ is a particular solution, since
\[
-C_2e^{-t} = e^{-t} - 3C_2e^{-t},
\]


which yields $C_2 = 1/2$, hence
\[
P_{0,1}(t) = \frac{1}{2}e^{-t} + C_1e^{-3t},
\]
and from the initial condition $P_{0,1}(0) = 0$ we get $C_1 = -1/2$, i.e.
\[
P_{0,1}(t) = \frac{1}{2}\big(e^{-t}-e^{-3t}\big), \qquad t \in \mathbb{R}_+.
\]

Again we may recover this result by a probabilistic approach, writing, with $\lambda_0 = 1$ and $\lambda_1 = 3$,
\[
\begin{aligned}
P_{0,1}(t) &= \mathrm{P}(X_t=1 \mid X_0=0) = \mathrm{P}(\tau_0<t,\ \tau_0+\tau_1>t)\\
&= \lambda_0\lambda_1\int_{\{(x,y)\,:\,0<x<t,\ x+y>t\}} e^{-\lambda_0 x}e^{-\lambda_1 y}\,dx\,dy\\
&= \lambda_0\lambda_1\int_0^t\!\!\int_{t-x}^\infty e^{-\lambda_0 x}e^{-\lambda_1 y}\,dy\,dx\\
&= e^{-\lambda_1 t}\lambda_0\int_0^t e^{-(\lambda_0-\lambda_1)x}\,dx\\
&= \frac{\lambda_0}{\lambda_0-\lambda_1}\big(e^{-\lambda_1 t}-e^{-\lambda_0 t}\big)\\
&= \frac{1}{2}\big(e^{-t}-e^{-3t}\big), \qquad t \in \mathbb{R}_+. \tag{13.17}
\end{aligned}
\]

The remaining equations can be solved similarly by searching for a suitable particular solution. For
\[
P_{0,2}'(t) = \frac{3}{2}\big(e^{-t}-e^{-3t}\big) - 2P_{0,2}(t),
\]
searching for a particular solution of the form $t \mapsto ae^{-t}+be^{-3t}$, we find
\[
P_{0,2}(t) = \frac{3}{2}\,e^{-3t}\big(1-e^{t}\big)^2, \qquad t \in \mathbb{R}_+,
\]
and for
\[
P_{0,3}'(t) = 3e^{-3t}\big(1-e^{t}\big)^2 - 5P_{0,3}(t),
\]
we find
\[
P_{0,3}(t) = \frac{1}{4}\,e^{-5t}\big(e^{t}-1\big)^3\big(1+3e^{t}\big), \qquad t \in \mathbb{R}_+.
\]
(There is a typo in the solution given page 616 of [4], which does not satisfy $P_{0,2}(0) = 0$.)
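The closed-form solutions of the forward system can also be checked by direct numerical integration (a sketch using a hand-rolled classical Runge-Kutta step; the rates 1, 3, 2, 5 are those of the generator above):

```python
import math

# Forward Kolmogorov system of the pure birth process with rates 1, 3, 2, 5:
#   p0' = -p0,  p1' = p0 - 3 p1,  p2' = 3 p1 - 2 p2,  p3' = 2 p2 - 5 p3.
def deriv(p):
    p0, p1, p2, p3 = p
    return (-p0, p0 - 3*p1, 3*p1 - 2*p2, 2*p2 - 5*p3)

def rk4(p, h):
    # One classical 4th-order Runge-Kutta step for the system above.
    k1 = deriv(p)
    k2 = deriv(tuple(x + h/2*k for x, k in zip(p, k1)))
    k3 = deriv(tuple(x + h/2*k for x, k in zip(p, k2)))
    k4 = deriv(tuple(x + h*k for x, k in zip(p, k3)))
    return tuple(x + h/6*(a + 2*b + 2*c + d)
                 for x, a, b, c, d in zip(p, k1, k2, k3, k4))

t, steps = 1.0, 1000
h = t / steps
p = (1.0, 0.0, 0.0, 0.0)                # start from state 0 at time 0
for _ in range(steps):
    p = rk4(p, h)

# Closed-form values P_{0,0}(1), ..., P_{0,3}(1) derived in the exercise:
e = math.exp
exact = (e(-1),
         (e(-1) - e(-3)) / 2,
         1.5 * e(-3) * (1 - e(1))**2,
         0.25 * e(-5) * (e(1) - 1)**3 * (1 + 3*e(1)))
```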

Note that using the backward Kolmogorov equation $P'(t) = QP(t)$ can lead to more complicated calculations.

Exercise 9.4. (Problem VI.4.3 in [4]). We model the number of operating machines as a birth and death process $(X_t)_{t\in\mathbb{R}_+}$ on the state space $\{0,1,2,3,4,5\}$. A new machine can only be added at the rate $\lambda$ since the repairman can fix only one machine at a time. In order to determine the failure rate starting from state $k \in \{0,1,2,3,4,5\}$, let us assume that the number $X_t$ of working machines at time $t$ is equal to $k$. It is known that the lifetime $\tau_i$ of machine $i$ is an exponentially distributed random variable with parameter $\mu$. On the other hand we know that one out of $k$ machines will fail at time $\min(\tau_1,\dots,\tau_k)$, and we have
\[
\mathrm{P}(\min(\tau_1,\dots,\tau_k)>t) = \mathrm{P}(\tau_1>t,\dots,\tau_k>t) = \mathrm{P}(\tau_1>t)\cdots\mathrm{P}(\tau_k>t) = (e^{-\mu t})^k = e^{-k\mu t}, \qquad t \in \mathbb{R}_+,
\]

hence the time until failure of one machine out of $k$ is exponentially distributed with parameter $k\mu$, i.e. the death rate of the process is $\mu_k = k\mu$, $k = 1,\dots,5$. Consequently the infinitesimal generator $Q$ of $(X_t)_{t\in\mathbb{R}_+}$ is given by
\[
Q = \begin{pmatrix}
-\lambda & \lambda & 0 & 0 & 0 & 0\\
\mu & -\mu-\lambda & \lambda & 0 & 0 & 0\\
0 & 2\mu & -2\mu-\lambda & \lambda & 0 & 0\\
0 & 0 & 3\mu & -3\mu-\lambda & \lambda & 0\\
0 & 0 & 0 & 4\mu & -4\mu-\lambda & \lambda\\
0 & 0 & 0 & 0 & 5\mu & -5\mu
\end{pmatrix},
\]
with $\lambda = 0.5$ and $\mu = 0.2$. This chain admits a stationary distribution

$\pi = (\pi_0,\pi_1,\pi_2,\pi_3,\pi_4,\pi_5)$ which satisfies $\pi Q = 0$, i.e.
\[
\begin{cases}
0 = -\lambda\pi_0 + \mu\pi_1\\
0 = \lambda\pi_0 - (\lambda+\mu)\pi_1 + 2\mu\pi_2\\
0 = \lambda\pi_1 - (\lambda+2\mu)\pi_2 + 3\mu\pi_3\\
0 = \lambda\pi_2 - (\lambda+3\mu)\pi_3 + 4\mu\pi_4\\
0 = \lambda\pi_3 - (\lambda+4\mu)\pi_4 + 5\mu\pi_5\\
0 = \lambda\pi_4 - 5\mu\pi_5.
\end{cases}
\]
This gives
\[
\begin{cases}
0 = -\lambda\pi_0 + \mu\pi_1\\
0 = -\lambda\pi_1 + 2\mu\pi_2\\
0 = -\lambda\pi_2 + 3\mu\pi_3\\
0 = -\lambda\pi_3 + 4\mu\pi_4\\
0 = -\lambda\pi_4 + 5\mu\pi_5,
\end{cases}
\]
hence
\[
\pi_1 = \frac{\lambda}{\mu}\pi_0, \quad \pi_2 = \frac{\lambda}{2\mu}\pi_1, \quad \pi_3 = \frac{\lambda}{3\mu}\pi_2, \quad \pi_4 = \frac{\lambda}{4\mu}\pi_3, \quad \pi_5 = \frac{\lambda}{5\mu}\pi_4,
\]
hence
\[
\pi_0 + \frac{\lambda}{\mu}\pi_0 + \frac{\lambda^2}{2\mu^2}\pi_0 + \frac{\lambda^3}{3!\mu^3}\pi_0 + \frac{\lambda^4}{4!\mu^4}\pi_0 + \frac{\lambda^5}{5!\mu^5}\pi_0 = 1,
\]

and
\[
\pi_0 = \Big(1 + \frac{\lambda}{\mu} + \frac{\lambda^2}{2\mu^2} + \frac{\lambda^3}{3!\mu^3} + \frac{\lambda^4}{4!\mu^4} + \frac{\lambda^5}{5!\mu^5}\Big)^{-1}
= \frac{\mu^5}{\mu^5 + \lambda\mu^4 + \lambda^2\mu^3/2 + \lambda^3\mu^2/3! + \lambda^4\mu/4! + \lambda^5/5!},
\]
which is a truncated Poisson distribution. Finally, since $\pi_5$ is the probability that all 5 machines are operating, the fraction of time the repairman is idle in the long run is
\[
\pi_5 = \frac{\lambda^5}{120\mu^5 + 120\lambda\mu^4 + 60\lambda^2\mu^3 + 20\lambda^3\mu^2 + 5\lambda^4\mu + \lambda^5}.
\]
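The truncated-Poisson stationary distribution above can be computed directly (a sketch using the values $\lambda = 0.5$, $\mu = 0.2$ from the exercise):

```python
import math

# Stationary distribution of the machine-repair chain: birth rate lam (repair),
# death rate k*mu in state k (failure); pi_k is proportional to (lam/mu)**k / k!.
lam, mu = 0.5, 0.2
r = lam / mu
weights = [r**k / math.factorial(k) for k in range(6)]
Z = sum(weights)
pi = [w / Z for w in weights]           # truncated Poisson distribution on {0,...,5}

# pi[5] = long-run fraction of time all 5 machines work (repairman idle).
idle = pi[5]
closed_form = lam**5 / (120*mu**5 + 120*lam*mu**4 + 60*lam**2*mu**3
                        + 20*lam**3*mu**2 + 5*lam**4*mu + lam**5)
```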

Exercise 9.5. (Exercise VI.6.2 in [4]). The infinitesimal generator of $Z(t) := X_1(t)+X_2(t)$ is given by
\[
\begin{pmatrix}
-2\lambda & 2\lambda & 0\\
\mu & -\lambda-\mu & \lambda\\
0 & 2\mu & -2\mu
\end{pmatrix}.
\]
Recall also that the semigroup of $X_1(t)$ and $X_2(t)$ is given by
\[
\begin{pmatrix}
\dfrac{\mu}{\lambda+\mu} + \dfrac{\lambda}{\lambda+\mu}e^{-t(\lambda+\mu)} & \dfrac{\lambda}{\lambda+\mu} - \dfrac{\lambda}{\lambda+\mu}e^{-t(\lambda+\mu)}\\[2ex]
\dfrac{\mu}{\lambda+\mu} - \dfrac{\mu}{\lambda+\mu}e^{-t(\lambda+\mu)} & \dfrac{\lambda}{\lambda+\mu} + \dfrac{\mu}{\lambda+\mu}e^{-t(\lambda+\mu)}
\end{pmatrix}.
\]
As for the transition semigroup of $Z(t)$, we have
\[
\begin{aligned}
P_{0,0}(t) &= \mathrm{P}(Z(t)=0 \mid Z(0)=0)\\
&= \mathrm{P}(X_1(t)=0,\ X_2(t)=0 \mid X_1(0)=0,\ X_2(0)=0)\\
&= \mathrm{P}(X_1(t)=0 \mid X_1(0)=0)\,\mathrm{P}(X_2(t)=0 \mid X_2(0)=0)\\
&= \big(\mathrm{P}(X_1(t)=0 \mid X_1(0)=0)\big)^2 = \Big(\frac{\mu}{\lambda+\mu} + \frac{\lambda}{\lambda+\mu}e^{-t(\lambda+\mu)}\Big)^2.
\end{aligned}
\]
For $P_{0,1}(t)$ we have
\[
\begin{aligned}
P_{0,1}(t) &= \mathrm{P}(Z(t)=1 \mid Z(0)=0)\\
&= \mathrm{P}(X_1(t)=0,\ X_2(t)=1 \mid X_1(0)=0,\ X_2(0)=0)\\
&\quad + \mathrm{P}(X_1(t)=1,\ X_2(t)=0 \mid X_1(0)=0,\ X_2(0)=0)
\end{aligned}
\]

\[
\begin{aligned}
&= \mathrm{P}(X_1(t)=1 \mid X_1(0)=0)\,\mathrm{P}(X_2(t)=0 \mid X_2(0)=0)\\
&\quad + \mathrm{P}(X_1(t)=0 \mid X_1(0)=0)\,\mathrm{P}(X_2(t)=1 \mid X_2(0)=0)\\
&= 2\,\mathrm{P}(X_1(t)=1 \mid X_1(0)=0)\,\mathrm{P}(X_1(t)=0 \mid X_1(0)=0),
\end{aligned}
\]
since $X_1(t)$ and $X_2(t)$ share the same semigroup. Starting from $Z(0)=1$ and ending at $Z(t)=1$ we have two possibilities $(0,1)$ or $(1,0)$ for the terminal condition. However for the initial condition $Z(0)=1$ the two possibilities $(0,1)$ and $(1,0)$ count only for one since they both give $Z(0)=1$. Thus in order to compute $P_{1,1}(t)$ we can assign the value 0 to $X_1(0)$ and the value 1 to $X_2(0)$ without influencing the result, as the other choice would lead to the same probability value. Thus for $P_{1,1}(t)$ we have
\[
\begin{aligned}
P_{1,1}(t) &= \mathrm{P}(Z(t)=1 \mid Z(0)=1)\\
&= \mathrm{P}(X_1(t)=0,\ X_2(t)=1 \mid X_1(0)=0,\ X_2(0)=1)\\
&\quad + \mathrm{P}(X_1(t)=1,\ X_2(t)=0 \mid X_1(0)=0,\ X_2(0)=1)\\
&= \mathrm{P}(X_1(t)=0 \mid X_1(0)=0)\,\mathrm{P}(X_2(t)=1 \mid X_2(0)=1)\\
&\quad + \mathrm{P}(X_1(t)=1 \mid X_1(0)=0)\,\mathrm{P}(X_2(t)=0 \mid X_2(0)=1).
\end{aligned}
\]
Concerning $P_{1,0}(t)$ we have
\[
P_{1,0}(t) = \mathrm{P}(Z(t)=0 \mid Z(0)=1) = \mathrm{P}(X_1(t)=0,\ X_2(t)=0 \mid X_1(0)=0,\ X_2(0)=1).
\]
On the other hand we have
\[
P_{1,2}(t) = \mathrm{P}(Z(t)=2 \mid Z(0)=1) = \mathrm{P}(X_1(t)=1 \mid X_1(0)=0)\,\mathrm{P}(X_2(t)=1 \mid X_2(0)=1).
\]
We check that
\[
\begin{aligned}
P_{1,0}(t)+P_{1,1}(t)+P_{1,2}(t) &= \mathrm{P}(X_1(t)=0 \mid X_1(0)=0)\,\mathrm{P}(X_2(t)=0 \mid X_2(0)=1)\\
&\quad + \mathrm{P}(X_1(t)=0 \mid X_1(0)=0)\,\mathrm{P}(X_2(t)=1 \mid X_2(0)=1)\\
&\quad + \mathrm{P}(X_1(t)=1 \mid X_1(0)=0)\,\mathrm{P}(X_2(t)=0 \mid X_2(0)=1)\\
&\quad + \mathrm{P}(X_1(t)=1 \mid X_1(0)=0)\,\mathrm{P}(X_2(t)=1 \mid X_2(0)=1)
\end{aligned}
\]

\[
\begin{aligned}
&= \big(\mathrm{P}(X_1(t)=1 \mid X_1(0)=0) + \mathrm{P}(X_1(t)=0 \mid X_1(0)=0)\big)\\
&\qquad \times \big(\mathrm{P}(X_2(t)=1 \mid X_2(0)=1) + \mathrm{P}(X_2(t)=0 \mid X_2(0)=1)\big) = 1.
\end{aligned}
\]
Note that for $Z(t)$ to be Markov, the processes $X_1(t)$ and $X_2(t)$ should have the same infinitesimal generator. For example, if $X_1(t)$ and $X_2(t)$ have different transition rates, then starting from $Z(t)=1$ we need to know whether $X_1(t)=1$ or $X_2(t)=1$ in order to determine the next transition rate, and the knowledge of $Z(t)=1$ alone is not sufficient for this. Altogether there are $3\times 3 = 9$ transition probabilities to compute since the chain $Z(t)$ has 3 states $\{0,1,2\}$, and the remaining computations are left to the reader.

Exercise 9.6. (Problem VI.3.1 in [4]). Starting from state 0, the process $X_t$ stays at 0 during an exponentially distributed time with parameter $\lambda$, after which $N_t$ increases by one unit. In this case, $\xi_{N_t}=0$ becomes $\xi_{N_t+1}=1$ with probability 1, from the transition matrix (9.23), hence the birth rate of $X_t$ from 0 to 1 is $\lambda$. Next, starting from state 1, the process $X_t$ stays at 1 during an exponentially distributed time with parameter $\lambda$. The difference is that when $N_t$ increases by one unit, $\xi_{N_t}=1$ may move to $\xi_{N_t+1}=0$ with probability $1-\alpha$, or remain at $\xi_{N_t+1}=1$ with probability $\alpha$. In fact, due to the Markov property, $X_t$ will remain at 1 during an exponentially distributed time whose expectation may be higher than $1/\lambda$ when $\alpha>0$. We will compute the expectation of this random time.
1. We have
\[
\mathrm{E}[\tau_0 \mid X_0=0] = \frac{1}{\lambda} + \mathrm{E}[\tau_0 \mid X_0=1]
\]
and
\[
\mathrm{E}[\tau_0 \mid X_0=1] = \frac{1}{\lambda} + \alpha\,\mathrm{E}[\tau_0 \mid X_0=1],
\]

hence
\[
\mathrm{E}[\tau_0 \mid X_0=1] = \frac{1}{\lambda(1-\alpha)}
\]
and


\[
\mathrm{E}[\tau_0 \mid X_0=0] = \frac{2-\alpha}{\lambda(1-\alpha)}.
\]
2. We have
\[
\mathrm{E}[\tau_1 \mid X_0=1] = \frac{1}{\lambda} + \alpha\,\mathrm{E}[\tau_1 \mid X_0=1] + (1-\alpha)\,\mathrm{E}[\tau_1 \mid X_0=0] = \frac{2-\alpha}{\lambda} + \alpha\,\mathrm{E}[\tau_1 \mid X_0=1],
\]
since $\mathrm{E}[\tau_1 \mid X_0=0] = 1/\lambda$, hence
\[
\mathrm{E}[\tau_1 \mid X_0=1] = \frac{2-\alpha}{\lambda(1-\alpha)}.
\]
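The two first-step systems above are linear equations in the unknown expectations, and can be cross-checked numerically (a sketch; the values $\lambda = 2.0$, $\alpha = 0.3$ are hypothetical):

```python
# First-step analysis equations of Question 1, solved directly:
#   e0 = 1/lam + e1
#   e1 = 1/lam + alpha * e1
# where e0 = E[tau0 | X0 = 0] and e1 = E[tau0 | X0 = 1].
lam, alpha = 2.0, 0.3

e1 = (1 / lam) / (1 - alpha)            # from e1 * (1 - alpha) = 1/lam
e0 = 1 / lam + e1

# Closed forms derived in the exercise:
e1_closed = 1 / (lam * (1 - alpha))
e0_closed = (2 - alpha) / (lam * (1 - alpha))
```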

3. Since $\mathrm{E}[\tau_0 \mid X_0=1] = 1/(\lambda(1-\alpha))$, it takes an exponential random time with parameter $\lambda(1-\alpha)$ for the process $(X_t)_{t\in\mathbb{R}_+}$ to switch from state 1 to state 0. Hence the death rate is $(1-\alpha)\lambda$ and the infinitesimal generator $Q$ of $X_t$ is
\[
\begin{pmatrix}
-\lambda & \lambda\\
(1-\alpha)\lambda & -(1-\alpha)\lambda
\end{pmatrix}.
\]
This continuous-time first step analysis argument is similar to the one used in Exercise 9.9.

Exercise 9.7. (Problem V.4.7 in [4]). First we note that in case
\[
f(x) = \mathbf{1}_{[a,b]}(x), \qquad 0 \le a \le b \le t,
\]

we have, by direct counting,
\[
\sum_{k=1}^{X_t} f(W_k) = \sum_{k=1}^{X_t} \mathbf{1}_{[a,b]}(W_k) = X_b - X_a.
\]
Hence
\[
\mathrm{IE}\Big[\sum_{k=1}^{X_t} f(W_k)\Big] = \mathrm{IE}[X_b-X_a] = \lambda(b-a) = \lambda\int_0^t \mathbf{1}_{[a,b]}(s)\,ds = \lambda\int_0^t f(s)\,ds,
\]

hence (9.24) is proved for $f(x) = \mathbf{1}_{[a,b]}(x)$. Next, we check by linearity that (9.24) extends to a linear combination of indicator functions of the form
\[
f(x) = \sum_{k=1}^n \alpha_k \mathbf{1}_{[a_{k-1},a_k]}(x), \qquad 0 \le a_0 < a_1 < \cdots < a_n < t.
\]
The difficult part is to do the extension from linear combinations of indicator functions to "any" integrable function $f : [0,t] \to \mathbb{R}$, which requires the knowledge of measure theory. For a different proof using the exponentially distributed jump times of the Poisson process, see Proposition 2.3.4 in [6]. Let $g_n$, $n \ge 1$, be defined as

\[
g_n(t_1,\dots,t_n) = \sum_{k=1}^n \mathbf{1}_{\{t_{k-1}<t<t_k\}}\big(f(t_1)+\cdots+f(t_{k-1})\big) + \mathbf{1}_{\{t_n<t\}}\big(f(t_1)+\cdots+f(t_n)\big),
\]
with $t_0 = 0$, so that
\[
\sum_{k=1}^{n\wedge X_t} f(W_k) = g_n(W_1,\dots,W_n).
\]

Then, since the joint density of $(W_1,\dots,W_n)$ on $\{0<t_1<\cdots<t_n\}$ is $(t_1,\dots,t_n)\mapsto \lambda^ne^{-\lambda t_n}$, we have
\[
\begin{aligned}
\mathrm{E}\Big[\sum_{k=1}^{n\wedge X_t} f(W_k)\Big] &= \mathrm{E}[g_n(W_1,\dots,W_n)]\\
&= \lambda^n\sum_{k=1}^n \int_0^\infty\! e^{-\lambda t_n}\!\int_0^{t_n}\!\!\cdots\int_0^{t_2} \mathbf{1}_{\{t_{k-1}<t<t_k\}}\big(f(t_1)+\cdots+f(t_{k-1})\big)\,dt_1\cdots dt_n\\
&\quad + \lambda^n\int_0^t e^{-\lambda t_n}\!\int_0^{t_n}\!\!\cdots\int_0^{t_2}\big(f(t_1)+\cdots+f(t_n)\big)\,dt_1\cdots dt_n.
\end{aligned}
\]
In the $k$-th term of the sum, the variables $t_k,\dots,t_n$ range over $\{t<t_k<\cdots<t_n\}$ and are integrated out using
\[
\int_t^\infty e^{-\lambda t_n}\,\frac{(t_n-t)^{n-k}}{(n-k)!}\,dt_n = \frac{e^{-\lambda t}}{\lambda^{n-k+1}},
\]
while in the last term the factor $e^{-\lambda t_n}$ is expanded as $e^{-\lambda t}\sum_{j\ge 0}\lambda^j(t-t_n)^j/j!$, the powers $(t-t_n)^j/j!$ being rewritten as integrals of additional variables $t_{n+1},\dots,t_{n+j}$ over $\{t_n<t_{n+1}<\cdots<t_{n+j}<t\}$. Symmetrizing each simplex integral into an integral over the cube $[0,t]^k$, this yields
\[
\mathrm{E}\Big[\sum_{k=1}^{n\wedge X_t} f(W_k)\Big] = e^{-\lambda t}\sum_{k=0}^\infty \frac{\lambda^k}{k!}\int_0^t\!\!\cdots\int_0^t\big(f(t_1)+\cdots+f(t_{k\wedge n})\big)\,dt_1\cdots dt_k.
\]
Hence as $n$ goes to infinity, using the symmetry identity
\[
\int_0^t\!\!\cdots\int_0^t\big(f(t_1)+\cdots+f(t_k)\big)\,dt_1\cdots dt_k = k\,t^{k-1}\int_0^t f(s)\,ds,
\]
we find
\[
\begin{aligned}
\mathrm{E}\Big[\sum_{k=1}^{X_t} f(W_k)\Big] &= \lim_{n\to\infty}\mathrm{E}\Big[\sum_{k=1}^{n\wedge X_t} f(W_k)\Big]\\
&= \lim_{n\to\infty} e^{-\lambda t}\sum_{k=0}^\infty \frac{\lambda^k}{k!}\int_0^t\!\!\cdots\int_0^t\big(f(t_1)+\cdots+f(t_{k\wedge n})\big)\,dt_1\cdots dt_k\\
&= e^{-\lambda t}\sum_{k=1}^\infty \frac{\lambda^k}{k!}\,k\,t^{k-1}\int_0^t f(s)\,ds\\
&= \lambda\int_0^t f(s)\,ds\; e^{-\lambda t}\sum_{k=1}^\infty \frac{(\lambda t)^{k-1}}{(k-1)!}\\
&= \lambda\int_0^t f(s)\,ds.
\end{aligned}
\]
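The identity just proved can be checked by seeded Monte Carlo, using the fact that given $X_t = n$ the jump times are distributed as $n$ i.i.d. uniform points on $[0,t]$ (a sketch; the choices $f(s) = s^2$, $\lambda = 1.5$, $t = 2$ are hypothetical):

```python
import numpy as np

# Monte Carlo check of E[ sum_{k <= X_t} f(W_k) ] = lam * integral_0^t f(s) ds
# for a Poisson process, with f(s) = s**2, lam = 1.5, t = 2.
rng = np.random.default_rng(1)
lam, t, N = 1.5, 2.0, 100_000

total = 0.0
for _ in range(N):
    n = rng.poisson(lam * t)             # number of jumps on [0, t]
    w = rng.uniform(0.0, t, size=n)      # given X_t = n, jump times are i.i.d. uniform
    total += np.sum(w**2)
estimate = total / N

exact = lam * t**3 / 3                   # lam * integral_0^2 s^2 ds = 4.0
```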

Exercise 9.8.


1. Since the time between two Poisson arrivals is an exponentially distributed random variable with parameter $\lambda_R$, this probability is given by
\[
\mathrm{P}(\tau_0^R > t) = \mathrm{P}(N_t^R = 0) = e^{-\lambda_R t},
\]
where $N_t^R$ denotes a Poisson process with intensity $\lambda_R$.
2. This probability is given by
\[
\mathrm{P}(N_t^W \le 3) = e^{-\lambda_W t}\Big(1 + \lambda_W t + \frac{\lambda_W^2t^2}{2} + \frac{\lambda_W^3t^3}{6}\Big),
\]
where $N_t^W$ denotes a Poisson process with intensity $\lambda_W$.
3. This probability is given by the ratio $\lambda_R/(\lambda_W+\lambda_R)$ of arrival rates. Remark that the difference $N_t^W - N_t^R$ between the number $N_t^W$ of "write" consultations and the number $N_t^R$ of "read" consultations is a birth and death process with state-independent birth and death rates $\lambda_W$ and $\lambda_R$.
4. This distribution is given by $\mathrm{P}(X=k \mid X+Y=n)$ where $X$, $Y$ are independent Poisson random variables with parameters $\lambda_Rt$ and $\lambda_Wt$ respectively. We have
\[
\begin{aligned}
\mathrm{P}(X=k \mid X+Y=n) &= \frac{\mathrm{P}(X=k \text{ and } X+Y=n)}{\mathrm{P}(X+Y=n)} = \frac{\mathrm{P}(X=k \text{ and } Y=n-k)}{\mathrm{P}(X+Y=n)}\\
&= \frac{\mathrm{P}(X=k)\,\mathrm{P}(Y=n-k)}{\mathrm{P}(X+Y=n)}\\
&= \binom{n}{k}\Big(\frac{\lambda_R}{\lambda_R+\lambda_W}\Big)^k\Big(\frac{\lambda_W}{\lambda_R+\lambda_W}\Big)^{n-k},
\end{aligned}
\]

cf. (13.11) in Exercise 1.4.

Exercise 9.9.
1. The number $X_t$ of machines operating at time $t$ is a birth and death process on $\{0,1,2\}$ with infinitesimal generator
\[
Q = \begin{pmatrix}-\lambda & \lambda & 0\\ \mu & -(\lambda+\mu) & \lambda\\ 0 & 2\mu & -2\mu\end{pmatrix}.
\]
The stationary distribution $\pi = (\pi_0,\pi_1,\pi_2)$ is solution of $\pi Q = 0$, i.e.
\[
\begin{cases}
0 = -\lambda\pi_0 + \mu\pi_1\\
0 = \lambda\pi_0 - (\lambda+\mu)\pi_1 + 2\mu\pi_2\\
0 = \lambda\pi_1 - 2\mu\pi_2
\end{cases}
\]
under the condition $\pi_0+\pi_1+\pi_2 = 1$, which yields
\[
(\pi_0,\pi_1,\pi_2) = \Big(\frac{\mu^2}{\mu^2+\lambda\mu+\lambda^2/2},\ \frac{\lambda\mu}{\mu^2+\lambda\mu+\lambda^2/2},\ \frac{\lambda^2}{2\mu^2+2\lambda\mu+\lambda^2}\Big),
\]
i.e. the probability that no machine is operating when $\lambda = \mu = 1$ is $\pi_0 = 2/5$.
2. The number $X_t$ of machines operating at time $t$ is now a birth and death process on $\{0,1\}$. The time spent in state 0 is exponentially distributed with average $1/\mu$. When the chain is in state 1, one machine is working while the other one may still be under repair, and the mean time $t_1$ spent in state 1 has to be computed using first step analysis. We have
\[
t_1 = \frac{1}{\mu} + \frac{\lambda}{\lambda+\mu}\,t_1,
\]
since by (9.18) and (9.19), $\lambda/(\lambda+\mu)$ is the probability that repair of the idle machine finishes before failure of the working machine. This first step analysis argument is similar to the one used in Exercise 9.6. This yields
\[
t_1 = \frac{\lambda+\mu}{\mu^2},
\]
hence the corresponding rate is $\mu^2/(\lambda+\mu)$ and the infinitesimal generator of the chain becomes
\[
Q = \begin{pmatrix}-\lambda & \lambda\\ 1/t_1 & -1/t_1\end{pmatrix} = \begin{pmatrix}-\lambda & \lambda\\ \mu^2/(\lambda+\mu) & -\mu^2/(\lambda+\mu)\end{pmatrix}.
\]
The stationary distribution $\pi = (\pi_0,\pi_1)$ is solution of $\pi Q = 0$, i.e.
\[
\begin{cases}
0 = -\lambda\pi_0 + \pi_1\mu^2/(\lambda+\mu)\\
0 = \lambda\pi_0 - \pi_1\mu^2/(\lambda+\mu)
\end{cases}
\]
under the condition $\pi_0+\pi_1 = 1$, which yields
\[
(\pi_0,\pi_1) = \Big(\frac{\mu^2}{\mu^2+\lambda\mu+\lambda^2},\ \frac{\lambda\mu+\lambda^2}{\mu^2+\lambda\mu+\lambda^2}\Big),
\]
i.e. the probability that no machine is operating when $\lambda = \mu = 1$ is $\pi_0 = 1/3$. This question can be solved in another way, by considering $X_t$ a birth and death process on $\{0,1,2\}$ with infinitesimal generator
\[
Q = \begin{pmatrix}-\lambda & \lambda & 0\\ \mu & -(\lambda+\mu) & \lambda\\ 0 & \mu & -\mu\end{pmatrix},
\]
for which
\[
X_t = 0 \iff \text{no machine is working},
\]
\[
X_t = 1 \iff \text{one machine is working and the other is under repair},
\]
\[
X_t = 2 \iff \text{one machine is working and the other one is waiting}.
\]
In this case, $\pi Q = 0$ yields
\[
(\pi_0,\pi_1,\pi_2) = \Big(\frac{\mu^2}{\mu^2+\lambda\mu+\lambda^2},\ \frac{\lambda\mu}{\mu^2+\lambda\mu+\lambda^2},\ \frac{\lambda^2}{\mu^2+\lambda\mu+\lambda^2}\Big),
\]
hence $\pi_0 = \pi_1 = \pi_2 = 1/3$ when $\lambda = \mu$.

Exercise 9.10.
1. We need to show the following properties.
a. The process $(N_t^1+N_t^2)_{t\in\mathbb{R}_+}$ is a counting process. Clearly the jump heights are positive integers, and they can only be equal to one since the probability that $N_t^1$ and $N_t^2$ jump simultaneously is 0.
b. The process $(N_t^1+N_t^2)_{t\in\mathbb{R}_+}$ has independent increments. Letting $0 < t_1 < \cdots < t_n$, the family
\[
\big(N_{t_n}^1+N_{t_n}^2-(N_{t_{n-1}}^1+N_{t_{n-1}}^2),\ \dots,\ N_{t_2}^1+N_{t_2}^2-(N_{t_1}^1+N_{t_1}^2)\big)
= \big(N_{t_n}^1-N_{t_{n-1}}^1+N_{t_n}^2-N_{t_{n-1}}^2,\ \dots,\ N_{t_2}^1-N_{t_1}^1+N_{t_2}^2-N_{t_1}^2\big)
\]


is a family of independent random variables. In order to see this we note that $N_{t_n}^1-N_{t_{n-1}}^1$ is independent of
\[
N_{t_{n-1}}^1-N_{t_{n-2}}^1,\ \dots,\ N_{t_2}^1-N_{t_1}^1,
\]
and of
\[
N_{t_n}^2-N_{t_{n-1}}^2,\ \dots,\ N_{t_2}^2-N_{t_1}^2,
\]
hence it is also independent of
\[
N_{t_{n-1}}^1-N_{t_{n-2}}^1+N_{t_{n-1}}^2-N_{t_{n-2}}^2,\ \dots,\ N_{t_2}^1-N_{t_1}^1+N_{t_2}^2-N_{t_1}^2.
\]
Similarly it follows that $N_{t_n}^2-N_{t_{n-1}}^2$ is independent of the same family, hence $N_{t_n}^1+N_{t_n}^2-(N_{t_{n-1}}^1+N_{t_{n-1}}^2)$ is independent of
\[
N_{t_{n-1}}^1-N_{t_{n-2}}^1+N_{t_{n-1}}^2-N_{t_{n-2}}^2,\ \dots,\ N_{t_2}^1-N_{t_1}^1+N_{t_2}^2-N_{t_1}^2.
\]
This shows the required mutual independence by induction on $n \ge 1$.
c. The process $(N_t^1+N_t^2)_{t\in\mathbb{R}_+}$ has stationary increments. We note that the distributions of $N_{t+h}^1-N_{s+h}^1$ and $N_{t+h}^2-N_{s+h}^2$ are independent of $h \in \mathbb{R}_+$, hence by the law of total probability we check that
\[
\mathrm{P}\big(N_{t+h}^1+N_{t+h}^2-(N_{s+h}^1+N_{s+h}^2) = n\big) = \sum_{k=0}^n \mathrm{P}(N_{t+h}^1-N_{s+h}^1 = k)\,\mathrm{P}(N_{t+h}^2-N_{s+h}^2 = n-k)
\]
is independent of $h \in \mathbb{R}_+$. The intensity of $N_t^1+N_t^2$ is $\lambda_1+\lambda_2$.
2. a. The proof of independence of increments is similar to that of Question 1.
b. Concerning the stationarity of increments we have
\[
\mathrm{P}(M_{t+h}-M_{s+h} = n) = \mathrm{P}\big(N_{t+h}^1-N_{t+h}^2-(N_{s+h}^1-N_{s+h}^2) = n\big) = \mathrm{P}\big(N_{t+h}^1-N_{s+h}^1-(N_{t+h}^2-N_{s+h}^2) = n\big)
\]


\[
= \sum_{k=0}^\infty \mathrm{P}(N_{t+h}^1-N_{s+h}^1 = n+k)\,\mathrm{P}(N_{t+h}^2-N_{s+h}^2 = k),
\]
which is independent of $h \in \mathbb{R}_+$ since the distributions of $N_{t+h}^1-N_{s+h}^1$ and $N_{t+h}^2-N_{s+h}^2$ are independent of $h \in \mathbb{R}_+$.
3. For $n \in \mathbb{N}$ we have
\[
\begin{aligned}
\mathrm{P}(M_t=n) &= \mathrm{P}(N_t^1-N_t^2 = n) = \sum_{k=0\vee(-n)}^\infty \mathrm{P}(N_t^1 = n+k)\,\mathrm{P}(N_t^2 = k)\\
&= e^{-(\lambda_1+\lambda_2)t}\sum_{k=0\vee(-n)}^\infty \frac{\lambda_1^{n+k}\lambda_2^k t^{n+2k}}{k!\,(n+k)!}\\
&= \Big(\frac{\lambda_1}{\lambda_2}\Big)^{n/2} e^{-(\lambda_1+\lambda_2)t}\sum_{k=0\vee(-n)}^\infty \frac{\big(t\sqrt{\lambda_1\lambda_2}\big)^{n+2k}}{(n+k)!\,k!}\\
&= \Big(\frac{\lambda_1}{\lambda_2}\Big)^{n/2} e^{-(\lambda_1+\lambda_2)t}\, I_{|n|}\big(2t\sqrt{\lambda_1\lambda_2}\big),
\end{aligned}
\]
where
\[
I_n(x) = \sum_{k=0}^\infty \frac{(x/2)^{n+2k}}{k!\,(n+k)!}, \qquad x > 0,
\]
is the modified Bessel function with parameter $n \ge 0$. When $n \le 0$, by exchanging $\lambda_1$ and $\lambda_2$ we get
\[
\mathrm{P}(M_t=n) = \mathrm{P}(-M_t=-n) = \Big(\frac{\lambda_2}{\lambda_1}\Big)^{-n/2} e^{-(\lambda_1+\lambda_2)t}\, I_{-n}\big(2t\sqrt{\lambda_1\lambda_2}\big) = \Big(\frac{\lambda_1}{\lambda_2}\Big)^{n/2} e^{-(\lambda_1+\lambda_2)t}\, I_{-n}\big(2t\sqrt{\lambda_1\lambda_2}\big),
\]
hence in the general case we have
\[
\mathrm{P}(M_t=n) = \Big(\frac{\lambda_1}{\lambda_2}\Big)^{n/2} e^{-(\lambda_1+\lambda_2)t}\, I_{|n|}\big(2t\sqrt{\lambda_1\lambda_2}\big), \qquad n \in \mathbb{Z},
\]
which is the Skellam distribution. This also shows that the semigroup $P(t)$ of the birth and death process with state-independent birth and death rates $\lambda_1$ and $\lambda_2$ satisfies


\[
P_{i,j}(t) = \mathrm{P}(M_t=j \mid M_0=i) = \mathrm{P}(M_t=j-i) = \Big(\frac{\lambda_1}{\lambda_2}\Big)^{(j-i)/2} e^{-(\lambda_1+\lambda_2)t}\, I_{|j-i|}\big(2t\sqrt{\lambda_1\lambda_2}\big), \qquad i,j \in \mathbb{Z},\ t \in \mathbb{R}_+.
\]
When $\lambda_1 = \lambda_2 = \lambda$ we find $\mathrm{P}(M_t=n) = e^{-2\lambda t}I_{|n|}(2\lambda t)$.
4. From the bound
\[
I_{|n|}(y) < C_n y^{|n|} e^{y}, \qquad y > 1,
\]
(this bound has been given in class), we get
\[
\mathrm{P}(M_t=n) \le C_n e^{-(\lambda_1+\lambda_2)t}\Big(\frac{\lambda_1}{\lambda_2}\Big)^{n/2}\big(2t\sqrt{\lambda_1\lambda_2}\big)^{|n|} e^{2t\sqrt{\lambda_1\lambda_2}}
= C_n \Big(\frac{\lambda_1}{\lambda_2}\Big)^{n/2} e^{-t(\sqrt{\lambda_1}-\sqrt{\lambda_2})^2}\big(2t\sqrt{\lambda_1\lambda_2}\big)^{|n|},
\]
which tends to 0 as $t$ goes to infinity when $\lambda_1 \ne \lambda_2$. Hence we have
\[
\lim_{t\to\infty} \mathrm{P}(|M_t|<c) = \sum_{-c<k<c} \lim_{t\to\infty} \mathrm{P}(|M_t|=k) = 0, \qquad c > 0. \tag{13.18}
\]
(Treating the case $\lambda_1 = \lambda_2$ is more complicated and not required.)

Remark: there exists a shorter proof by a probabilistic argument (not required here), which is displayed below, cf. [1]. Since $\mathrm{IE}[M_t] = t(\lambda_1-\lambda_2)$, for $t$ large enough we have, depending on the sign of $\lambda_1-\lambda_2$,
\[
c - \mathrm{IE}[M_t] \le -\frac{1}{2}\mathrm{IE}[M_t] \qquad \text{or} \qquad -c - \mathrm{IE}[M_t] \ge \frac{1}{2}\mathrm{IE}[M_t],
\]
and we get
\[
-c \le M_t \le c \quad \Longrightarrow \quad M_t - \mathrm{IE}[M_t] \le -\frac{1}{2}\mathrm{IE}[M_t] \quad \text{or} \quad M_t - \mathrm{IE}[M_t] \ge \frac{1}{2}\mathrm{IE}[M_t],
\]
hence by Chebyshev's inequality we have
\[
\mathrm{P}(|M_t|\le c) \le \mathrm{P}\Big(|M_t-\mathrm{IE}[M_t]| \ge \frac{1}{2}\mathrm{IE}[M_t]\Big) \le 4\,\frac{\mathrm{Var}[M_t]}{(\mathrm{IE}[M_t])^2} = 4\,\frac{(\lambda_1+\lambda_2)t}{(\lambda_1-\lambda_2)^2t^2},
\]
which tends to 0 as $t$ tends to infinity, provided $\lambda_1 \ne \lambda_2$. A probabilistic proof is also available in case $\lambda_1 = \lambda_2$, using the central limit theorem.
5. When $M_t \ge 0$ it represents the number of waiting customers. When $M_t \le 0$, $-M_t$ represents the number of waiting drivers. Relation (13.18) shows that for any fixed $c > 0$, the probability of having either more than $c$ waiting customers or more than $c$ waiting drivers is high in the long run.

Exercise 9.11.
1. We have
\[
Q = \begin{pmatrix}
-N\lambda & N\lambda & 0 & \cdots & 0 & 0 & 0\\
\mu & -\mu-(N-1)\lambda & (N-1)\lambda & \cdots & 0 & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & (N-1)\mu & -(N-1)\mu-\lambda & \lambda\\
0 & 0 & 0 & \cdots & 0 & N\mu & -N\mu
\end{pmatrix}.
\]

2. The system of equations follows by writing the matrix multiplication $P'(t) = P(t)Q$ term by term.
3. We apply the result of Question 2 to
\[
\frac{\partial G_k}{\partial t}(s,t) = \sum_{n=0}^N s^n P_{k,n}'(t),
\]
and use the expression
\[
\frac{\partial G_k}{\partial s}(s,t) = \sum_{n=1}^N n s^{n-1} P_{k,n}(t).
\]
4. We have

the following cancellation. Writing $\tau = e^{-(\lambda+\mu)t}$ and setting
\[
A = (s-1)\mu\tau + \lambda s + \mu, \qquad B = -(s-1)\lambda\tau + \lambda s + \mu,
\]
so that $G_k(s,t)$ is proportional to $A^kB^{N-k}$, direct differentiation gives, using $\mu+(\lambda-\mu)s-\lambda s^2 = -(s-1)(\lambda s+\mu)$,
\[
\begin{aligned}
&\lambda N(s-1)G_k(s,t) + \big(\mu+(\lambda-\mu)s-\lambda s^2\big)\frac{\partial G_k}{\partial s}(s,t) - \frac{\partial G_k}{\partial t}(s,t)\\
&\propto \lambda N(s-1)A^kB^{N-k}\\
&\quad - (s-1)(\lambda s+\mu)\Big(k(\mu\tau+\lambda)A^{k-1}B^{N-k} + (N-k)\lambda(1-\tau)A^kB^{N-k-1}\Big)\\
&\quad + (s-1)(\lambda+\mu)\Big(k\mu\tau A^{k-1}B^{N-k} - (N-k)\lambda\tau A^kB^{N-k-1}\Big)\\
&= (s-1)\big(\lambda N - k\lambda - (N-k)\lambda\big)A^kB^{N-k} = 0,
\end{aligned}
\]
after grouping the $A^{k-1}B^{N-k}$ and $A^kB^{N-k-1}$ terms.

5. This expression follows from the relation
\[
\mathrm{IE}[X_t \mid X_0=k] = \frac{\partial G_k}{\partial s}(s,t)\Big|_{s=1}
\]
and the result of Question 4.
6. We have
\[
\lim_{t\to\infty} \mathrm{IE}[X_t \mid X_0=k] = k\,\frac{\lambda}{\lambda+\mu} + (N-k)\,\frac{\lambda}{\lambda+\mu} = \frac{N\lambda}{\lambda+\mu}.
\]
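The limit above can be checked against the closed-form mean of $N$ independent two-state chains, $\mathrm{IE}[X_t \mid X_0 = k] = k\,P_{1,1}(t) + (N-k)\,P_{0,1}(t)$ (a numerical sketch; the values $\lambda = 1.0$, $\mu = 2.0$, $N = 5$, $k = 2$ are hypothetical):

```python
import math

# Each two-state chain has P01(t) = lam/(lam+mu) * (1 - exp(-(lam+mu)*t)) and
# P11(t) = lam/(lam+mu) + mu/(lam+mu) * exp(-(lam+mu)*t); the mean of X_t
# given X_0 = k is then k * P11(t) + (N - k) * P01(t).
lam, mu, N, k = 1.0, 2.0, 5, 2

def mean_Xt(t):
    r = lam + mu
    p01 = lam / r * (1 - math.exp(-r * t))
    p11 = lam / r + mu / r * math.exp(-r * t)
    return k * p11 + (N - k) * p01

limit = N * lam / (lam + mu)            # long-run mean, N*lam/(lam+mu)
```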

Exercise 9.12. The generator of the process is given by
\[
Q = \begin{pmatrix}
-\lambda_0 & \lambda_0 & 0 & \cdots & 0 & 0 & 0\\
\mu_1 & -\lambda_1-\mu_1 & \lambda_1 & \cdots & 0 & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & \mu_{N-1} & -\lambda_{N-1}-\mu_{N-1} & \lambda_{N-1}\\
0 & 0 & 0 & \cdots & 0 & \mu_N & -\mu_N
\end{pmatrix}
=
\begin{pmatrix}
-N\alpha & N\alpha & 0 & \cdots & 0 & 0\\
\beta & -(N-1)\alpha-\beta & (N-1)\alpha & \cdots & 0 & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & -\alpha-(N-1)\beta & \alpha\\
0 & 0 & 0 & \cdots & N\beta & -N\beta
\end{pmatrix},
\]
with birth rates $\lambda_k = (N-k)\alpha$ and death rates $\mu_k = k\beta$. Writing the equation $\pi Q = 0$ shows that we have the recurrence relation
\[
\pi_{k+1} = \frac{\alpha(N-k)}{\beta(k+1)}\,\pi_k, \qquad 0 \le k \le N-1,
\]
and by induction on $k = 1,\dots,N$ we find
\[
\pi_k = \Big(\frac{\alpha}{\beta}\Big)^k \frac{N!}{(N-k)!\,k!}\,\pi_0, \qquad 0 \le k \le N.
\]


The condition $\pi_0+\pi_1+\cdots+\pi_N = 1$ shows that
\[
1 = \pi_0\sum_{k=0}^N \Big(\frac{\alpha}{\beta}\Big)^k \frac{N!}{(N-k)!\,k!} = \Big(1+\frac{\alpha}{\beta}\Big)^N\pi_0,
\]
hence
\[
\pi_0 = \Big(1+\frac{\alpha}{\beta}\Big)^{-N},
\]
and we have
\[
\pi_k = \Big(1+\frac{\alpha}{\beta}\Big)^{-N}\Big(\frac{\alpha}{\beta}\Big)^k\frac{N!}{(N-k)!\,k!} = \Big(\frac{\alpha}{\alpha+\beta}\Big)^k\Big(\frac{\beta}{\alpha+\beta}\Big)^{N-k}\frac{N!}{(N-k)!\,k!}, \qquad 0 \le k \le N,
\]
hence the stationary distribution $\pi$ is a binomial distribution with parameter $(\alpha/(\alpha+\beta), N)$.

Exercise 9.13. The length of the crack can be viewed as a continuous-time birth process in $\mathbb{N}$ with state-dependent rate $\lambda_k = (1+k)^\rho$, $k \in \mathbb{N}$. Let us denote by $\tau_k$ the time spent at state $k \in \mathbb{N}$, which is an exponentially distributed random variable with parameter $\lambda_k$. The time it takes for the crack length to grow to infinity is
\[
\sum_{k=0}^\infty \tau_k.
\]

It is known that
\[
\sum_{k=0}^\infty \tau_k < \infty
\]
almost surely, i.e. the crack grows to infinity within a finite time, if and only if the expectation
\[
\mathrm{IE}\Big[\sum_{k=0}^\infty \tau_k\Big]
\]
is finite. We have
\[
\mathrm{IE}\Big[\sum_{k=0}^\infty \tau_k\Big] = \sum_{k=0}^\infty \mathrm{IE}[\tau_k]
\]


\[
= \sum_{k=0}^\infty \frac{1}{\lambda_k} = \sum_{k=0}^\infty \frac{1}{(1+k)^\rho}.
\]
By comparison with the integral of the function $x \mapsto 1/(1+x)^\rho$ we get
\[
\begin{aligned}
\mathrm{IE}\Big[\sum_{k=0}^\infty \tau_k\Big] &= \sum_{k=0}^\infty \frac{1}{(1+k)^\rho} = 1 + \sum_{k=1}^\infty \frac{1}{(1+k)^\rho}\\
&\le 1 + \int_0^\infty \frac{1}{(1+x)^\rho}\,dx = 1 + \Big[\frac{(1+x)^{1-\rho}}{1-\rho}\Big]_0^\infty = 1 + \frac{1}{\rho-1} < \infty,
\end{aligned}
\]
provided $\rho > 1$. We conclude that the mean time for the crack to grow to infinite length is finite when $\rho > 1$.

Exercise 9.14. Let $(N_t)_{t\in\mathbb{R}_+}$ denote a Poisson process with intensity $\lambda > 0$.
1. This probability is equal to $\mathrm{P}(N_T=0) = \mathrm{P}(\tau_0>T) = e^{-\lambda T}$.
2. Let $t$ denote the expected time we are looking for. When the woman attempts to cross the street, she can do so immediately with probability $\mathrm{P}(N_T=0)$, in which case the waiting time is 0. Otherwise, with probability $1-\mathrm{P}(N_T=0)$, she has to wait on average $\mathrm{IE}[\tau_0] = 1/\lambda$ for the first car to pass, after which the process is reinitialized and the average waiting time is again $t$. Hence by first step analysis in continuous time we find the equation
\[
t = 0\times\mathrm{P}(N_T=0) + \Big(\frac{1}{\lambda}+t\Big)\times\mathrm{P}(N_T\ge 1)
\]
with unknown $t$, and solution


\[
t = \frac{e^{\lambda T}-1}{\lambda},
\]
which expands as $T + \lambda T^2/2 + O(\lambda^2)$ when $\lambda$ tends to 0.

Exercise 9.15. (Problem VI.4.7 in [4]). A system consists of two machines and two repairmen. The amount of time that an operating machine works before breaking down is exponentially distributed with mean 5. The amount of time it takes a single repairman to fix a machine is exponentially distributed with mean 4. Only one repairman can work on a failed machine at any given time.
(i) Let $X_t$ be the number of machines in operating condition at time $t \in \mathbb{R}_+$. Show that $X_t$ is a continuous-time Markov process and complete the missing entries in the matrix
\[
Q = \begin{pmatrix}
\ast & 0.5 & 0\\
0.2 & \ast & \ast\\
0 & \ast & -0.4
\end{pmatrix}
\]

of its generator. The generator $Q$ of $X_t$ is given by
\[
Q = \begin{pmatrix}
-0.5 & 0.5 & 0\\
0.2 & -0.45 & 0.25\\
0 & 0.4 & -0.4
\end{pmatrix}.
\]
(ii) Calculate the long run probability distribution $(\pi_0,\pi_1,\pi_2)$ for $X_t$. Solving $\pi Q = 0$, i.e.
\[
\pi Q = [\pi_0,\pi_1,\pi_2]\begin{pmatrix}
-0.5 & 0.5 & 0\\
0.2 & -0.45 & 0.25\\
0 & 0.4 & -0.4
\end{pmatrix} = [0,0,0],
\]
we get
\[
\begin{cases}
-0.5\,\pi_0 + 0.2\,\pi_1 = 0\\
0.5\,\pi_0 - 0.45\,\pi_1 + 0.4\,\pi_2 = 0\\
0.25\,\pi_1 - 0.4\,\pi_2 = 0,
\end{cases}
\]
i.e. $\pi_0 = 0.4\,\pi_1 = 0.64\,\pi_2$, under the condition $\pi_0+\pi_1+\pi_2 = 1$, which gives $\pi_0 = 16/81$, $\pi_1 = 40/81$, $\pi_2 = 25/81$.
(iii) Compute the average number of operating machines in the long run.


In the long run the average is $0\times\pi_0 + 1\times\pi_1 + 2\times\pi_2 = 40/81 + 50/81 = 90/81 = 10/9$.
(iv) If an operating machine produces 100 units of output per hour, what is the long run output per hour of the system? We find $100 \times 10/9 = 1000/9 \approx 111.1$ units per hour.
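The stationary distribution and the long-run output rate can be verified numerically from the balance relations derived above (a sketch; the generator entries are those of the exercise):

```python
# Long-run distribution of the two-machine / two-repairman chain, using the
# balance relations pi0 = 0.4*pi1 and pi2 = 0.625*pi1 plus normalization.
pi1 = 1 / (0.4 + 1 + 0.625)
pi0, pi2 = 0.4 * pi1, 0.625 * pi1

# Check pi * Q = 0 for the generator of the exercise.
Q = [[-0.5, 0.5, 0.0],
     [0.2, -0.45, 0.25],
     [0.0, 0.4, -0.4]]
residual = [sum(p * Q[i][j] for i, p in enumerate((pi0, pi1, pi2)))
            for j in range(3)]

output_rate = 100 * (pi1 + 2 * pi2)     # long-run output, should equal 1000/9
```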

13.10 Chapter 10 - Spatial Poisson Processes

Exercise 10.1. The probability that there are 10 events within a circle of radius 3 meters is
\[
e^{-9\pi\lambda}\,\frac{(9\pi\lambda)^{10}}{10!} = e^{-9\pi/2}\,\frac{(9\pi/2)^{10}}{10!} \simeq 0.0637.
\]
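Spatial Poisson probabilities like the one above reduce to Poisson probabilities with mean $\lambda$ times the area of the region, which is easy to evaluate directly (a sketch; $\lambda = 0.5$ is the intensity implied by the computation above):

```python
import math

# The count in a region of area A of a spatial Poisson process with
# intensity lam is Poisson distributed with mean lam * A.
def poisson_pmf(k, mean):
    return math.exp(-mean) * mean**k / math.factorial(k)

lam = 0.5
mean = lam * math.pi * 3**2             # disc of radius 3: mean = 9*pi/2

p_ten = poisson_pmf(10, mean)           # Exercise 10.1, about 0.0637

# An Exercise-10.2-style tail probability: P(N >= 3) with mean 6.
p_tail = 1 - sum(poisson_pmf(k, 6) for k in range(3))   # = 1 - 25*exp(-6)
```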

Exercise 10.2. The probability that more than two bacteria are in this measured volume is
\[
\mathrm{P}(N\ge 3) = 1 - \mathrm{P}(N\le 2) = 1 - e^{-10\theta}\Big(1+10\theta+\frac{(10\theta)^2}{2}\Big) = 1 - e^{-6}\Big(1+6+\frac{6^2}{2}\Big) = 1 - 25e^{-6} \simeq 0.938.
\]

Exercise 10.3. Letting $X_A$, resp. $X_B$, denote the number of defects found by the first, resp. second, inspector, we know that $X_A$ and $X_B$ are independent Poisson random variables with intensities 0.5, hence the probability that both inspectors find defects is
\[
\mathrm{P}(X_A\ge 1,\ X_B\ge 1) = \mathrm{P}(X_A\ge 1)\,\mathrm{P}(X_B\ge 1) = (1-\mathrm{P}(X_A=0))(1-\mathrm{P}(X_B=0)) = (1-e^{-0.5})^2 \simeq 0.1548.
\]

Exercise 10.4. The number $X_N$ of points in the interval $[0,\lambda]$ has a binomial distribution with parameter $(N,\lambda/N)$, i.e.
\[
\mathrm{P}(X_N=k) = \binom{N}{k}\Big(\frac{\lambda}{N}\Big)^k\Big(1-\frac{\lambda}{N}\Big)^{N-k}, \qquad k = 0,1,\dots,N,
\]
and we find
\[
\lim_{N\to\infty} \mathrm{P}(X_N=k) = \frac{\lambda^k}{k!}\lim_{N\to\infty}\prod_{i=0}^{k-1}\frac{N-i}{N}\Big(1-\frac{\lambda}{N}\Big)^{N-k} = e^{-\lambda}\,\frac{\lambda^k}{k!},
\]
which is the Poisson distribution with parameter $\lambda > 0$.

Exercise 10.5.
1. This probability is
\[
e^{-9\pi/2}\,\frac{(9\pi/2)^{10}}{10!}.
\]

2. This probability is
\[
e^{-9\pi/2}\,\frac{(9\pi/2)^{5}}{5!} \times e^{-9\pi/2}\,\frac{(9\pi/2)^{3}}{3!}.
\]
3. This probability is
\[
e^{-9\pi}\,\frac{(9\pi)^{8}}{8!}.
\]



References

1. D. Bosq and H.T. Nguyen. A Course in Stochastic Processes: Stochastic Models and Statistical Inference. Mathematical and Statistical Methods. Kluwer, 1996.
2. R. Durrett. Essentials of Stochastic Processes. Springer, 1999.
3. P.W. Jones and P. Smith. Stochastic Processes: An Introduction. Arnold Texts in Statistics. Hodder Arnold, 2001.
4. S. Karlin and H.M. Taylor. A Second Course in Stochastic Processes. Academic Press, New York, 1981.
5. J. Medhi. Stochastic Processes. New Age Science, Tunbridge Wells, UK, third edition, 2010.
6. N. Privault. Stochastic Analysis in Discrete and Continuous Settings, volume 1982 of Lecture Notes in Mathematics. Springer-Verlag, Berlin, 2009.
7. S.M. Ross. Stochastic Processes. Wiley Series in Probability and Statistics. John Wiley & Sons, New York, second edition, 1996.
8. J.M. Steele. Stochastic Calculus and Financial Applications, volume 45 of Applications of Mathematics. Springer-Verlag, New York, 2001.


Index

absorbing state, 66, 107
absorption probability, 174
absorption time, 82, 175
accessible state, 99
aperiodic chain, 107
aperiodic state, 106
aperiodicity, 106
backward Kolmogorov equation, 153
Bernoulli distribution, 21
binomial distribution, 21
birth and death process, 145, 168
branching process, 123
Cauchy distribution, 19
characteristic equation, 39
characteristic function, 28, 186
class, communicating, 100
classification of states, 99
communicating class, 100
communicating states, 99, 100
conditional expectation, 27
conditional probability, 13
conditioning, 13
continuous-time Markov chain, 141, 147
Cox process, 193
density function, 17
discrete distribution, 21
discrete-time Markov chain, 63
discrete-time martingale, 197
distribution: Bernoulli, 21; binomial, 21; Cauchy, 19; discrete, 21; exponential, 18; gamma, 18; Gaussian, 18; geometric, 21; invariant, 73, 113–115; limiting, 73, 111, 163, 164; lognormal, 19; negative binomial, 21; Pascal, 21; Poisson, 21; stationary, 112, 164
Ehrenfest chain, 67
embedded chain, 169
event, 10
expectation, 22
expectation, conditional, 27
exponential distribution, 18, 32, 139, 145, 156, 178, 192
extinction probability, 128
failure time, 194
first step analysis, 36, 45, 50, 68, 79, 87, 94, 119, 128, 138, 174, 220, 223, 229, 233, 239, 245, 270
forward Kolmogorov equation, 153
gambling process, 35
gamma distribution, 18
Gaussian distribution, 18
Gaussian random variable, 30

285


N. Privault

generating function, 29, 30, 54, 125, 130, 182 geometric distribution, 21 hitting probability, 79 hitting time, 82 increment independent, 142 stationary, 142 independence, 13 independent increments, 64, 141, 142, 148, 199 infinitesimal generator, 153 invariant distribution, 73, 113â&#x20AC;&#x201C;115 irreducibility, 100 irreducible Markov chain, 100 Kolmogorov equation, 153 Laplace transform, 29 limiting distribution, 73, 111, 163, 164 lognormal distribution, 19 Markov chain, 63 property, 63, 147 Markov chain continuous time, 141, 147 discrete time, 63 embedded, 169 irreducible, 100 reducible, 100 two-state, 71, 102, 159 martingale discrete time, 197 mean game duration, 44 mean recurrence time, 105 mean time to failure, 194 moment generating function, 29 negative binomial distribution, 21 number of returns, 92 Pascal distribution, 21 period, 106 periodicity, 106 Poisson distribution, 21

Poisson process, 4, 141, 150, 179, 185, 193 positive recurrence, 105 probability absorption, 174 conditional, 13 density function, 17 distribution, 16 extinction, 128 generating function, 30 measure, 12 ruin, 36, 201 space, 9 random variable, 15 random walk, 51 recurrence, 101 null, 105 positive, 105 recurrent class, 101 state, 101 reducibility, 100 reducible Markov chain, 100 renewal process, 194 return time, 53, 86 ruin probability, 36, 201 semigroup, 150 semigroup property, 152 spatial Poisson process, 185 state absorbing, 107 aperiodic, 106 communicating, 99, 100 recurrent, 101 transient, 101 stationary distribution, 112, 164 stationary increments, 142 survival probability, 191 time homogeneous, 36, 65 transience, 101 transient state, 101 transition matrix, 65 transition semigroup, 150 two-state Markov chain, 71, 102, 159 Wright-Fisher model, 94




Notes on Markov Chains





These notes present Markov chains in discrete and continuous time, with a focus on random walks, branching processes, and birth and death processes. Poisson and renewal processes are also covered, and martingales are treated in discrete time.
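As a small illustration of the discrete-time theory summarized above, the sketch below (not taken from the text; the function name and parameters are chosen here for illustration) simulates a two-state Markov chain with transition matrix P = [[1−a, a], [b, 1−b]] and compares the long-run occupation frequencies with the stationary distribution, which for this chain is known in closed form: π = (b/(a+b), a/(a+b)).

```python
import random

def simulate_two_state(a, b, steps, seed=0):
    """Simulate a two-state Markov chain with
    P = [[1 - a, a], [b, 1 - b]] and return the
    empirical occupation frequencies of states 0 and 1."""
    rng = random.Random(seed)
    state = 0
    visits = [0, 0]
    for _ in range(steps):
        visits[state] += 1
        if state == 0:
            # From state 0, jump to state 1 with probability a.
            state = 1 if rng.random() < a else 0
        else:
            # From state 1, jump to state 0 with probability b.
            state = 0 if rng.random() < b else 1
    return [v / steps for v in visits]

# Empirical frequencies should approach pi = (b/(a+b), a/(a+b))
# by the ergodic theorem for irreducible aperiodic chains.
freq = simulate_two_state(a=0.3, b=0.6, steps=100_000)
exact = [0.6 / 0.9, 0.3 / 0.9]
print(freq, exact)
```

This mirrors the "two-state Markov chain" and "stationary distribution" entries of the index: the chain is irreducible and aperiodic for 0 < a, b < 1, so the empirical frequencies converge to the stationary distribution.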



