Computational Methods in Stochastics

Transcript of Computational Methods in Stochastics, Lecture III

Page 1: Computational Methods in Stochastics

Lecture III

Page 2: Computational Methods in Stochastics

This time:

1. Conditional Probability and Expectation

2. Markov Processes and Stochastic Models

Page 3: Computational Methods in Stochastics

Conditional Probability and Conditional Expectation

Page 4: Computational Methods in Stochastics

Conditional Probability and Expectation

In what follows, the main properties of Markov processes needed for understanding advanced Monte Carlo methods are presented.

(Notation: Pr{·} = P(·) = probability of the event ·)

Chapter 5 of the online book is worth skimming through.

Page 5: Computational Methods in Stochastics

Discrete variables

The conditional probability of the event 𝐴 given the event 𝐡:

$$\Pr\{A \mid B\} = \frac{\Pr\{A \cap B\}}{\Pr\{B\}}, \quad \text{if } \Pr\{B\} > 0.$$

$\Pr\{A \mid B\}$ is not defined when $\Pr\{B\} = 0$.

Let $X$ and $Y$ be random variables that can attain countably many values. The probability mass function of $X$ given $Y = y$:

$$p_{X|Y}(x \mid y) = \frac{\Pr\{X = x \text{ and } Y = y\}}{\Pr\{Y = y\}}, \quad \text{if } \Pr\{Y = y\} > 0.$$

$p_{X|Y}(x \mid y)$ is not defined when $\Pr\{Y = y\} = 0$.

Conditional Probability and Expectation

Page 6: Computational Methods in Stochastics

Or, in terms of joint and marginal probability mass functions:

$$p_{X|Y}(x \mid y) = \frac{p_{XY}(x, y)}{p_Y(y)}, \quad \text{if } p_Y(y) > 0; \quad x, y = 0, 1, \ldots$$

The law of total probability:

$$\Pr\{X = x\} = \sum_{y=0}^{\infty} p_{X|Y}(x \mid y)\, p_Y(y)$$

Conditional Probability and Expectation

The law of total probability is used to obtain marginal probabilities (for examples, see Ch. 5 of the online book).

Note: In such ratios, probabilities, PMFs, and PDFs all work in the same way.

There are two variates in the conditional that can be summed over: 1. Summing over the "given" variate → the probability of a value.

Page 7: Computational Methods in Stochastics

Conditional Probability and Expectation

Example. Let $X$ have a binomial distribution with parameters $p$ and $N$, where $N$ has a binomial distribution with parameters $q$ and $M$. What is the marginal distribution of $X$?

Conditional probability mass function:

$$p_{X|N}(k \mid n) = \binom{n}{k} p^k (1-p)^{n-k}, \quad k = 0, 1, \ldots, n.$$

Marginal distribution of $N$:

$$p_N(n) = \binom{M}{n} q^n (1-q)^{M-n}, \quad n = 0, 1, \ldots, M.$$

The marginal distribution of $X$ (by the law of total probability):

$$\Pr\{X = k\} = \sum_{n=0}^{M} p_{X|N}(k \mid n)\, p_N(n) = \cdots = \frac{M!}{k!\,(M-k)!}\, (pq)^k (1-pq)^{M-k}, \quad k = 0, 1, \ldots, M.$$

→ "$X$ has a binomial distribution with parameters $M$ and $pq$."

(see Introduction to Stochastic Modeling: Ch 2)
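As a quick numerical sanity check of this result, here is a minimal Monte Carlo sketch (assuming NumPy is available; the values M = 10, q = 0.3, p = 0.6 are arbitrary illustrative choices, not from the lecture): sample N ~ Binomial(M, q), then X | N ~ Binomial(N, p), and compare the empirical mean of X with M·pq, the mean of a Binomial(M, pq) distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
M, q, p = 10, 0.3, 0.6            # illustrative parameter values (not from the lecture)
n_samples = 200_000

N = rng.binomial(M, q, size=n_samples)   # N ~ Binomial(M, q)
X = rng.binomial(N, p)                   # X | N = n  ~ Binomial(n, p)

print("empirical mean of X :", X.mean())
print("theoretical mean Mpq:", M * p * q)
# The full empirical distribution of X can likewise be compared with Binomial(M, pq).
```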

Page 8: Computational Methods in Stochastics

Conditional Probability and Expectation

Example 2. Let $X$ have a binomial distribution with parameters $p$ and $N$, where $N$ has a Poisson distribution with mean $\lambda$. What is the marginal distribution of $X$?

Conditional probability mass function:

$$p_{X|N}(k \mid n) = \binom{n}{k} p^k (1-p)^{n-k}, \quad k = 0, 1, \ldots, n.$$

Marginal distribution of $N$:

$$p_N(n) = \frac{\lambda^n e^{-\lambda}}{n!}, \quad n = 0, 1, \ldots$$

The marginal distribution of $X$ (by the law of total probability):

$$\Pr\{X = k\} = \sum_{n=0}^{\infty} p_{X|N}(k \mid n)\, p_N(n) = \sum_{n=k}^{\infty} \frac{n!}{k!\,(n-k)!}\, p^k (1-p)^{n-k}\, \frac{\lambda^n e^{-\lambda}}{n!} = \cdots = \frac{(\lambda p)^k e^{-\lambda p}}{k!}$$

for $k = 0, 1, \ldots$

→ "$X$ has a Poisson distribution with mean $\lambda p$."

Page 9: Computational Methods in Stochastics

Conditional Probability and Expectation

Let 𝑔(π‘₯) be a function for which the expectation of 𝑔(𝑋) is finite.

Then the conditional expected value of 𝑔(𝑋) given π‘Œ = 𝑦 is

𝐸 𝑔 𝑋 π‘Œ = 𝑦 =</

𝑔(π‘₯)𝑝!|#(π‘₯|𝑦) if 𝑝# 𝑦 > 0

(This conditional mean is not defined at 𝑦 for which 𝑝# 𝑦 = 0.)

The law of total probability β†’

𝐸 𝑔 𝑋 = 𝐸{𝐸 𝑔 𝑋 π‘Œ }

2. Summing over the "outcome" variate → an expectation.

(Summing over both variates → the unconditional expectation.)

Page 10: Computational Methods in Stochastics

Conditional Probability and Expectation

Discrete and continuous variables

Let $X$ and $N$ be jointly distributed random variables, where $X$ is continuous and $N$ has the discrete set of values $n = 0, 1, 2, \ldots$

The conditional distribution function $F_{X|N}(x \mid n)$ of the random variable $X$, given that $N = n$:

$$F_{X|N}(x \mid n) = \frac{\Pr\{X \le x \text{ and } N = n\}}{\Pr\{N = n\}}, \quad \text{if } \Pr\{N = n\} > 0.$$

$F_{X|N}(x \mid n)$ is not defined at values of $n$ for which $\Pr\{N = n\} = 0$.

The conditional probability density function:

$$f_{X|N}(x \mid n) = \frac{d}{dx} F_{X|N}(x \mid n), \quad \text{if } \Pr\{N = n\} > 0.$$

Page 11: Computational Methods in Stochastics

Conditional Probability and Expectation

⟹ Pr π‘Ž ≀ 𝑋 < 𝑏,𝑁 = 𝑛 = X0

1

𝑓!|* π‘₯ 𝑛 𝑝* 𝑛 𝑑π‘₯

for π‘Ž < 𝑏 and where 𝑝* 𝑛 = Pr{𝑁 = 𝑛}.

The law of total probability β†’ 𝑓! π‘₯ = <(%&

'

𝑓!|* π‘₯ 𝑛 𝑝* 𝑛

For the function 𝑔 such that 𝐸 𝑔 𝑋 < ∞, the conditional expectation of 𝑔(𝑋) given that 𝑁 = 𝑛:

𝐸 𝑔 𝑋 𝑁 = 𝑛 = X𝑔 π‘₯ 𝑓!|* π‘₯ 𝑛 𝑑π‘₯

The law of total probability:

𝐸 𝑔 𝑋 = <(%&

'

𝐸 𝑔 𝑋 𝑁 = 𝑛 𝑝* 𝑛 = 𝐸{𝐸 𝑔 𝑋 𝑁 }

Page 12: Computational Methods in Stochastics

Conditional Probability and Expectation

The previous derivations show that the same rules apply for discrete and continuous variables; one only needs to use $\sum\ldots$ or $\int\ldots$, respectively. This is the basic interpretation of the Lebesgue-Stieltjes integral $I = \int f(x)\, dg(x)$, where $g(x)$ is of bounded variation within the integration interval (see Lecture I: Random Variables). Remember, in computation you draw samples from $g$ to compute $I$; $dg$ defines your (probability) measure. (Lecture II, p. 3.)
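As a small illustration of this computational viewpoint, the following sketch (assuming NumPy; the choices of g as the standard normal CDF and f(x) = x² are arbitrary) estimates I = ∫ f(x) dg(x) by drawing samples from the measure dg and averaging f over them:

```python
import numpy as np

rng = np.random.default_rng(1)

# I = \int f(x) dg(x), with g the CDF of N(0, 1), so dg is the standard normal measure.
f = lambda x: x**2
samples = rng.standard_normal(1_000_000)   # draws from the measure dg
I_hat = f(samples).mean()                  # Monte Carlo estimate of I

print(I_hat)   # close to 1, the second moment of N(0, 1)
```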

Page 13: Computational Methods in Stochastics

Likelihood

Suppose $X_1, \ldots, X_n$ are i.i.d. observations from the density $f(x \mid \theta_1, \ldots, \theta_k)$. The likelihood function is

$$L(\theta \mid \boldsymbol{x}) = L(\theta_1, \ldots, \theta_k \mid x_1, \ldots, x_n) = \prod_{i=1}^{n} f(x_i \mid \theta_1, \ldots, \theta_k).$$

More generally, when the $X_i$'s are not i.i.d., the likelihood is defined as the joint density $f(x_1, \ldots, x_n \mid \theta)$. The value $\theta = \hat{\theta}$ at which $L(\theta \mid \boldsymbol{x})$ attains its maximum with $\boldsymbol{x}$ held fixed is the maximum likelihood estimator (MLE).

For the layman: $L(\theta \mid \boldsymbol{x})$ gives the likelihood of parameter values $\theta$ (of a hypothesised distribution) given the data $\boldsymbol{x}$.

Page 14: Computational Methods in Stochastics

Likelihood

Example. Gamma MLE. i.i.d. observations $X_1, \ldots, X_n$ from the gamma density

$$f(x \mid \alpha, \beta) = \frac{1}{\Gamma(\alpha)\,\beta^{\alpha}}\, x^{\alpha-1} e^{-x/\beta},$$

where $\alpha$ is known.

The log likelihood:

$$\log L(\alpha, \beta \mid x_1, \ldots, x_n) = \log \prod_{i=1}^{n} f(x_i \mid \alpha, \beta) = -n \log \Gamma(\alpha) - n\alpha \log \beta + (\alpha - 1) \sum_{i=1}^{n} \log x_i - \sum_{i=1}^{n} x_i / \beta.$$

Solving $\frac{\partial}{\partial \beta} \log L(\alpha, \beta \mid x_1, \ldots, x_n) = 0$ yields the MLE of $\beta$,

$$\hat{\beta} = \frac{\sum_{i=1}^{n} x_i}{n\alpha}.$$

For the layman: the most likely value for $\beta$ given the data $\boldsymbol{x}$.
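A minimal numerical check of the closed-form MLE (assuming NumPy; α = 2.5, β = 1.8 and n = 10 000 are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, beta = 2.5, 1.8                    # "true" parameters; alpha is treated as known
x = rng.gamma(shape=alpha, scale=beta, size=10_000)

beta_hat = x.sum() / (len(x) * alpha)     # MLE of beta for known alpha
print(beta_hat)                           # close to the true value 1.8
```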

Page 15: Computational Methods in Stochastics

Random Sums

In random sums the number of summands varies randomly.

𝑋 = πœ‰2 +β‹―+ πœ‰*,

where 𝑁 is random and πœ‰2, πœ‰8, … is a sequence of independent and identically distributed (i.i.d.) random variables. 𝑋 is taken as 0 when 𝑁 = 0.

Random sums arise frequently within stochastic processes (e.g. in queuing-type processes).

One often encounters conditional distributions of discrete (i.e. the random number of summands) and continuous variables in connection with random sums.
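A short simulation sketch of a random sum (assuming NumPy; N ~ Poisson(5) and ξ with mean 2 are arbitrary illustrative choices). Conditioning on N and using the law of total expectation gives E[X] = E[N] E[ξ], which the simulation reproduces:

```python
import numpy as np

rng = np.random.default_rng(3)
n_runs = 100_000

N = rng.poisson(5.0, size=n_runs)      # random number of summands
X = np.array([rng.exponential(2.0, size=n).sum() for n in N])   # X = xi_1 + ... + xi_N (X = 0 if N = 0)

print("empirical E[X]:", X.mean())
print("E[N] * E[xi]  :", 5.0 * 2.0)
```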

Page 16: Computational Methods in Stochastics

Martingales

An important class of stochastic processes is defined by the martingale property.

Definition. A stochastic process $\{X_n;\ n = 0, 1, \ldots\}$ is a martingale if, for $n = 0, 1, \ldots$,

(a) $E[|X_n|] < \infty$ and

(b) $E[X_{n+1} \mid X_0, \ldots, X_n] = X_n$.

Property (b) implies $E\{E[X_{n+1} \mid X_0, \ldots, X_n]\} = E[X_n]$. Using the law of total probability in the form

$$E\{E[X_{n+1} \mid X_0, \ldots, X_n]\} = E[X_{n+1}],$$

gives $E[X_{n+1}] = E[X_n]$.

Page 17: Computational Methods in Stochastics

Martingales

A martingale has constant mean:

$$E[X_0] = E[X_n] = E[X_m], \quad \text{for } m \ge n.$$

The martingale equality (b) extends to future times:

$$E[X_m \mid X_0, \ldots, X_n] = X_n \quad \text{for } m \ge n.$$

Many important stochastic processes have the martingale property, e.g. unbiased diffusion or random walk and the gambler's fortune (a concept of importance in the stock market). In "a game" (like finance) the property is related to "fairness": unpredictability; the chances are the same regardless of the past.
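A simple illustration (a sketch assuming NumPy): the symmetric random walk with independent ±1 steps is a martingale, so its mean stays constant (here equal to X_0 = 0) over time.

```python
import numpy as np

rng = np.random.default_rng(4)
n_paths, n_steps = 100_000, 50

steps = rng.choice([-1, 1], size=(n_paths, n_steps))   # fair +/-1 increments
paths = np.cumsum(steps, axis=1)                       # X_1, ..., X_50 for each path (X_0 = 0)

print(paths.mean(axis=0)[[0, 9, 49]])   # empirical E[X_1], E[X_10], E[X_50], all close to 0
```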

Page 18: Computational Methods in Stochastics

Markov Processes and Stochastic Models

Page 19: Computational Methods in Stochastics

Markov Chains

Some definitions

Stochastic process

A stochastic process is a random variable that evolves through time.

Markov process

A Markov process $\{X_t\}$ is a stochastic process with the property that, given the value of $X_t$, the values of $X_s$ for $s > t$ are not influenced by the values of $X_u$ for $u < t$. In words, the probability of any particular future behaviour of the process, when its current state is known exactly, is not altered by additional knowledge concerning its past behaviour.

Online book, Ch 11.2.

Page 20: Computational Methods in Stochastics

Markov Chains

To make the transition matrix notation as clear as possible, we first adopt the notation in 'An Introduction to Stochastic Modeling'. Later, the notation varies slightly depending on the system, but we'll make a due note of that.

Markov property:

$$\Pr\{X_{n+1} = j \mid X_0 = i_0, \ldots, X_{n-1} = i_{n-1}, X_n = i\} = \Pr\{X_{n+1} = j \mid X_n = i\}$$

for all time points $n$ and all states $i_0, \ldots, i_{n-1}, i, j$.

Common convention: label the states of the Markov chain by the non-negative integers $\{0, 1, 2, \ldots\}$. "$X_n$ is in state $i$": $X_n = i$.

One-step transition probability = the probability of $X_{n+1}$ being in state $j$ given that $X_n$ is in state $i$:

$$P_{ij}^{n,n+1} = \Pr\{X_{n+1} = j \mid X_n = i\}.$$

A discrete-time Markov chain: a Markov process whose state space is a finite or countable set and whose time index set is $T = \{0, 1, 2, \ldots\}$.

Page 21: Computational Methods in Stochastics

Markov Chains

In general, any transition probability may vary as a function of the time of the transition. To keep things clear, in what follows we deal with stationary transition probabilities (independent of the time variable $n$): $P_{ij}^{n,n+1} = P_{ij}$.

The Markov matrix or transition probability matrix of the process:

$$\mathbf{P} = \begin{pmatrix}
P_{00} & P_{01} & P_{02} & \cdots \\
P_{10} & P_{11} & P_{12} & \cdots \\
P_{20} & P_{21} & P_{22} & \cdots \\
\vdots & \vdots & \vdots & \\
P_{i0} & P_{i1} & P_{i2} & \cdots \\
\vdots & \vdots & \vdots &
\end{pmatrix} = \left[\, P_{ij} \,\right]$$

The $i$th row is the probability distribution of the values of $X_{n+1}$ under the condition that $X_n = i$.

Page 22: Computational Methods in Stochastics

Markov Chains

Clearly, $P_{ij} \ge 0$ and $\sum_{j=0}^{\infty} P_{ij} = 1$ for $i = 0, 1, 2, \ldots$ These properties define the matrix as stochastic.

A Markov process is completely defined once its transition probability matrix and initial state $X_0$ (the probability distribution of $X_0$) are specified.

Note that $\sum_{j=0}^{\infty} P_{ij} = 1$ simply states that the sum of the probabilities of all transitions starting from any state $i$ equals 1 (the rows of $\mathbf{P}$ sum to 1). If also the sum of the probabilities of all transitions ending in any state $j$ equals 1, $\sum_{i=0}^{\infty} P_{ij} = 1$ (the columns of $\mathbf{P}$ sum to 1), then the transition matrix $\mathbf{P}$ is said to be doubly stochastic.

Page 23: Computational Methods in Stochastics

Markov Chains

Transition probability matrices of a Markov chain

The analysis of a Markov chain concerns mainly the calculation of the probabilities of the possible realisations of the process.

The $n$-step transition probability matrices: $\mathbf{P}^{(n)} = \big[\, P_{ij}^{(n)} \,\big]$.

$$P_{ij}^{(n)} = \Pr\{X_{m+n} = j \mid X_m = i\}$$

= the probability that the process goes from state $i$ to state $j$ in $n$ transitions.

Theorem. The $n$-step transition probabilities of a Markov chain satisfy

$$P_{ij}^{(n)} = \sum_{k=0}^{\infty} P_{ik}\, P_{kj}^{(n-1)}, \quad \text{where we define } P_{ij}^{(0)} = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \ne j. \end{cases}$$

Note: A product of stochastic matrices is a stochastic matrix.

Page 24: Computational Methods in Stochastics

Markov Chains

This is matrix multiplication: $\mathbf{P}^{(n)} = \mathbf{P} \times \mathbf{P}^{(n-1)}$. By iteration, $\mathbf{P}^{(n)} = \mathbf{P} \times \mathbf{P} \times \cdots \times \mathbf{P} = \mathbf{P}^n$. So, the $n$-step transition probabilities $P_{ij}^{(n)}$ are the entries of the matrix $\mathbf{P}^n$, the $n$th power of $\mathbf{P}$.

Proof. The theorem can be proven via a first-step analysis, a breaking down, or analysis, of the possible transitions on the first step, followed by an application of the Markov property:

$$P_{ij}^{(n)} = \Pr\{X_n = j \mid X_0 = i\} = \sum_{k=0}^{\infty} \Pr\{X_n = j,\ X_1 = k \mid X_0 = i\}$$

$$= \sum_{k=0}^{\infty} \Pr\{X_1 = k \mid X_0 = i\}\, \Pr\{X_n = j \mid X_0 = i,\ X_1 = k\}$$

$$= \sum_{k=0}^{\infty} P_{ik}\, P_{kj}^{(n-1)}. \quad \blacksquare$$
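The theorem translates directly into code. A minimal sketch (assuming NumPy; the 2-state matrix is an arbitrary example): iterating P^(n) = P P^(n-1) from P^(0) = I gives the same result as the matrix power P^n.

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.3, 0.7]])          # arbitrary 2-state transition matrix (rows sum to 1)
n = 5

Pn = np.eye(2)                      # P^(0) = identity
for _ in range(n):
    Pn = P @ Pn                     # P^(m) = P P^(m-1)

print(np.allclose(Pn, np.linalg.matrix_power(P, n)))   # True
```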

Page 25: Computational Methods in Stochastics

Markov Chains

Markov chains are visualised by state transition diagrams.

(Figure from the online book.)

Page 26: Computational Methods in Stochastics

Markov Chains

It follows that if the probability of the process initially being in state $j$ is $p_j$, i.e. the distribution law of $X_0$ is $\Pr\{X_0 = j\} = p_j$, then the probability of the process being in state $k$ at time $n$ is

$$p_k^{(n)} = \sum_{j=0}^{\infty} p_j\, P_{jk}^{(n)} = \Pr\{X_n = k\}.$$

Note: Matrix P operates from the right, meaning that all vectors are row-vectors, not column-vectors like in almost any other field. This is the convention in stochastics.

(Illustration for a six-state chain: the row vector of state probabilities multiplies $\mathbf{P} = [\,P_{ij}\,]$ from the left.)

$$(p_0\ \ p_1\ \ p_2\ \ p_3\ \ p_4\ \ p_5)
\begin{pmatrix}
P_{00} & P_{01} & P_{02} & P_{03} & P_{04} & P_{05} \\
P_{10} & P_{11} & P_{12} & P_{13} & P_{14} & P_{15} \\
P_{20} & P_{21} & P_{22} & P_{23} & P_{24} & P_{25} \\
P_{30} & P_{31} & P_{32} & P_{33} & P_{34} & P_{35} \\
P_{40} & P_{41} & P_{42} & P_{43} & P_{44} & P_{45} \\
P_{50} & P_{51} & P_{52} & P_{53} & P_{54} & P_{55}
\end{pmatrix}$$
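In code, the row-vector convention simply means multiplying the distribution from the left. A sketch (assuming NumPy; the 3-state chain and the initial distribution are arbitrary examples):

```python
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])     # arbitrary 3-state transition matrix
p0 = np.array([1.0, 0.0, 0.0])      # distribution of X_0, as a row vector

n = 10
pn = p0 @ np.linalg.matrix_power(P, n)   # p^(n) = p0 P^n
print(pn, pn.sum())                      # distribution of X_n; the entries sum to 1
```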

Page 27: Computational Methods in Stochastics

Markov Chains

Note: Just to make sure you understand how to do calculations with Markov matrices, you might want to do a few exercises from Chapter 3 of 'An Introduction to Stochastic Modeling'. (The answers are given at the end of the book.) The online book serves this purpose just as well.

Page 28: Computational Methods in Stochastics

Markov Chains

Markov chain models

Markov chains are commonly used to model various stochastic processes in e.g. physics, biology, sociology and economics.

β€˜An Introduction to Stochastic Modeling’ gives a few examples that are useful to go through. We do this for one:

An inventory model

A commodity is stocked. The replenishment of stock takes place at the end of periods labelled $n = 0, 1, 2, \ldots$ The total aggregate demand during period $n$ is a random variable $\xi_n$ whose distribution function is independent of $n$,

$$\Pr\{\xi_n = k\} = a_k \quad \text{for } k = 0, 1, 2, \ldots,$$

where $a_k \ge 0$ and $\sum_{k=0}^{\infty} a_k = 1$.

Page 29: Computational Methods in Stochastics

Markov Chains

Replenishment policy: If the end-of-period stock quantity is not greater than $s$, an amount sufficient to increase the quantity of stock up to the level $S$ is procured. On the other hand, if the available stock is in excess of $s$, no replenishment is undertaken. $X_n$ = the quantity on hand at the end of period $n$. The states of the process $\{X_n\}$: $S, S-1, \ldots, +1, 0, -1, -2, \ldots$, where negative values are interpreted as unfilled demand that will be satisfied immediately after restocking.

The inventory process $\{X_n\}$:

Page 30: Computational Methods in Stochastics

Markov Chains

According to the inventory rules:

$$X_{n+1} = \begin{cases} X_n - \xi_{n+1} & \text{if } s < X_n \le S, \\ S - \xi_{n+1} & \text{if } X_n \le s, \end{cases}$$

where $\xi_n$ is the quantity demanded in the $n$th period.

Assuming that the successive demands $\xi_1, \xi_2, \ldots$ are independent random variables, the stock values $X_0, X_1, X_2, \ldots$ constitute a Markov chain whose transition probability matrix can be determined from

$$P_{ij} = \Pr\{X_{n+1} = j \mid X_n = i\} = \begin{cases} \Pr\{\xi_{n+1} = i - j\} & \text{if } s < i \le S, \\ \Pr\{\xi_{n+1} = S - j\} & \text{if } i \le s. \end{cases}$$

Page 31: Computational Methods in Stochastics

Markov Chains

To illustrate, consider a spare parts inventory in which either 0, 1, or 2 repair parts are demanded and for which

$$\Pr\{\xi_n = 0\} = 0.5, \quad \Pr\{\xi_n = 1\} = 0.4, \quad \Pr\{\xi_n = 2\} = 0.1,$$

and $s = 0$ and $S = 2$.

The possible values for $X_n$: $S = 2$, $1$, $0$, and $-1$.

$P_{10} = \Pr\{X_{n+1} = 0 \mid X_n = 1\}$:

When $X_n = 1$, no replenishment takes place $\Rightarrow$ $X_{n+1} = 0$ when $\xi_{n+1} = 1$. This occurs with probability $P_{10} = 0.4$.

When $X_n = 0$, replenishment to $S = 2$ takes place $\Rightarrow$ $X_{n+1} = 0$ when $\xi_{n+1} = 2$. This occurs with probability $P_{00} = 0.1$.

Page 32: Computational Methods in Stochastics

Markov Chains

Analysing each probability in this way results in (rows and columns ordered by the states $-1, 0, +1, +2$)

$$\mathbf{P} = \begin{array}{c|cccc}
 & -1 & 0 & +1 & +2 \\ \hline
-1 & 0 & 0.1 & 0.4 & 0.5 \\
 0 & 0 & 0.1 & 0.4 & 0.5 \\
+1 & 0.1 & 0.4 & 0.5 & 0 \\
+2 & 0 & 0.1 & 0.4 & 0.5
\end{array}$$

For the inventory model the important quantities are e.g. the long-term fraction of periods in which demand is not met ($X_n < 0$) and the long-term average inventory level; using $p_j^{(n)} = \Pr\{X_n = j\}$, these quantities are obtained from $\lim_{n \to \infty} \sum_{j<0} p_j^{(n)}$ and $\lim_{n \to \infty} \sum_{j>0} p_j^{(n)}$, respectively.

Typically, determining conditions under which the $p_j^{(n)}$ stabilise, and the limiting probabilities $\pi_j = \lim_{n \to \infty} p_j^{(n)}$, is of importance.
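The spare-parts example can be assembled and iterated numerically. The sketch below (assuming NumPy) builds P for the (s, S) policy from the demand probabilities given above and approximates the limiting probabilities π_j by raising P to a high power:

```python
import numpy as np

a = {0: 0.5, 1: 0.4, 2: 0.1}        # demand pmf Pr{xi = k}
s, S = 0, 2
states = [-1, 0, 1, 2]              # possible values of X_n

P = np.zeros((len(states), len(states)))
for r, i in enumerate(states):
    level = i if i > s else S                # stock available for the next period
    for c, j in enumerate(states):
        P[r, c] = a.get(level - j, 0.0)      # P_ij = Pr{xi_{n+1} = level - j}

print(P)                                     # matches the matrix above
pi = np.linalg.matrix_power(P, 100)[0]       # every row converges to the limiting distribution
print(dict(zip(states, np.round(pi, 4))))
```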

Page 33: Computational Methods in Stochastics

Markov Chains

The simplest models can be solved analytically. To obtain limiting values for more complicated models one has to resort to stochastic simulation, that is, do many Monte Carlo runs starting from the given $X_0$ and compute the probabilities of the different outcomes (the distribution) of $X_n$ as the number of times the different values $S_i$ are obtained, normalised by the total number of outcomes, e.g.

$$\Pr\{S_i\} = \frac{\#(S_i)}{\sum_{k=1}^{N} \#(S_k)}$$
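A direct Monte Carlo version of this for the same inventory chain (a sketch assuming NumPy; the transition matrix is the one from the slide above): simulate many independent runs from a fixed X_0 and estimate Pr{X_n = j} by the observed relative frequencies, as in the formula above.

```python
import numpy as np

rng = np.random.default_rng(5)
states = [-1, 0, 1, 2]
P = np.array([[0.0, 0.1, 0.4, 0.5],          # transition matrix of the spare-parts example
              [0.0, 0.1, 0.4, 0.5],
              [0.1, 0.4, 0.5, 0.0],
              [0.0, 0.1, 0.4, 0.5]])

n_runs, n_steps = 20_000, 50
start = states.index(2)                      # start from X_0 = 2, say

counts = np.zeros(len(states))
for _ in range(n_runs):
    st = start
    for _ in range(n_steps):
        st = rng.choice(len(states), p=P[st])    # one Markov transition
    counts[st] += 1

print(dict(zip(states, counts / n_runs)))    # relative frequencies ~ limiting probabilities pi_j
```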

Page 34: Computational Methods in Stochastics

Simple Stochastic Kinetics

As an example of a general process that is Markovian, let's look at a process where objects (people, cars, particles) arrive at a point randomly.

Here event times (arrival times) are governed by a stochastic process.

Poisson Process

In a Poisson process with rate $\lambda$ defined on the interval $[0, T]$, the inter-event times are $\mathrm{Exp}(\lambda)$-distributed. To simulate: initialise the process at time zero, $X_0 = t_0 = 0$. Simulate $t_k \sim \mathrm{Exp}(\lambda)$ and put $X_k = X_{k-1} + t_k$, where $k \in \{1, 2, 3, \ldots\}$. Here $t_k$ is the time from the $(k-1)$th to the $k$th event. Stop when $X_k > T$ and keep $X_1, X_2, \ldots$ as the realisation of the process.

Online book, Ch 11.1.
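The recipe above written out as code (a sketch assuming NumPy; λ = 2.0 and T = 10.0 are arbitrary illustrative values):

```python
import numpy as np

def simulate_poisson_process(lam, T, rng):
    """Return the event times 0 < X_1 < X_2 < ... <= T of a rate-lam Poisson process."""
    times = []
    x = 0.0                               # X_0 = t_0 = 0
    while True:
        x += rng.exponential(1.0 / lam)   # t_k ~ Exp(lam); X_k = X_{k-1} + t_k
        if x > T:                         # stop when X_k > T
            break
        times.append(x)
    return np.array(times)

rng = np.random.default_rng(6)
events = simulate_poisson_process(lam=2.0, T=10.0, rng=rng)
print(len(events), events[:5])            # on average about lam * T = 20 events
```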

Page 35: Computational Methods in Stochastics

Simple Stochastic Kinetics

The Poisson process is a point process. It is directly related to the corresponding counting process, where the process is defined via the number of events in a given time interval $(0, t]$ as $N_t \sim \mathrm{Po}(\lambda t)$ ($\lambda$ is the rate of events). Stochastic simulations are often constructed by directly regarding such processes as counting processes rather than point processes. Viewing the Poisson process as a counting process, one needs to specify the smallest time interval $\Delta t$, or the resolution of time, at which the system is observed. Then the probability of an event during that interval is, of course, $p = \lambda \Delta t$ (for small $\Delta t$). Typically, one is interested in the mean and variance of the inter-event times, or of the times for a certain number of events to occur.
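A discrete-time (counting-process) sketch of the same idea (assuming NumPy; λ = 2.0 and Δt = 0.001 are arbitrary illustrative values): in each small interval Δt an event occurs with probability p = λΔt, and the empirical mean and variance of the inter-event times come out close to 1/λ and 1/λ², as expected for Exp(λ) waiting times.

```python
import numpy as np

rng = np.random.default_rng(7)
lam, dt, T = 2.0, 1e-3, 1000.0
n_bins = int(T / dt)

events = rng.random(n_bins) < lam * dt    # one Bernoulli(lam * dt) trial per time bin
event_times = np.nonzero(events)[0] * dt
gaps = np.diff(event_times)               # inter-event times

print("mean inter-event time:", gaps.mean(), " (1/lam    =", 1.0 / lam, ")")
print("variance             :", gaps.var(),  " (1/lam**2 =", 1.0 / lam**2, ")")
```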

Page 36: Computational Methods in Stochastics

Simple Stochastic Kinetics

The stationary (and homogeneous) Poisson process is perhaps the most basic example of stochastic dynamics, but it describes surprisingly many real-world processes. In this form the state space and time are discrete. We will look at a slightly more complicated and more widely applicable variation, the nonstationary Poisson process, in the next lecture. There the process is defined in continuous time (but still simulated at discrete times).