Computational Methods in Stochastics
Transcript of Computational Methods in Stochastics
Lecture III
1. Conditional Probability and Expectation
2. Markov Processes and Stochastic Models
This time: Conditional Probability and Conditional Expectation
Conditional Probability and Expectation
In what follows, the main properties of Markov processes needed for understanding advanced Monte Carlo methods are presented.
(Notation: $\Pr\{\cdot\} = P\{\cdot\}$ = probability of the event $\cdot$.)
Chapter 5 of the online book is worth skimming through.
Discrete variables
The conditional probability of the event $A$ given the event $B$:
$$\Pr\{A \mid B\} = \frac{\Pr\{A \cap B\}}{\Pr\{B\}} \quad \text{if } \Pr\{B\} > 0.$$
$\Pr\{A \mid B\}$ is not defined when $\Pr\{B\} = 0$.

Let $X$ and $Y$ be random variables that can attain countably many values. The probability mass function of $X$ given $Y = y$:
$$p_{X|Y}(x \mid y) = \frac{\Pr\{X = x \text{ and } Y = y\}}{\Pr\{Y = y\}} \quad \text{if } \Pr\{Y = y\} > 0.$$
$p_{X|Y}(x \mid y)$ is not defined when $\Pr\{Y = y\} = 0$.
Conditional Probability and Expectation
Or, in terms of joint and marginal probability mass functions:
$$p_{X|Y}(x \mid y) = \frac{p_{XY}(x, y)}{p_Y(y)} \quad \text{if } p_Y(y) > 0; \quad x, y = 0, 1, \ldots$$
The law of total probability:
$$\Pr\{X = x\} = \sum_{y=0}^{\infty} p_{X|Y}(x \mid y)\, p_Y(y).$$
Conditional Probability and Expectation
The law of total probability is used to obtain marginal probabilities (for examples, see Ch. 5 of the online book).

Note: In these ratios, Pr's, PMFs, and PDFs all work equally well.

There are two variates in the conditional that can be summed over:
1. Summing over the "given" variate → the probability of a value.
Conditional Probability and Expectation

Example. Let $X$ have a binomial distribution with parameters $N$ and $p$, where $N$ has a binomial distribution with parameters $M$ and $q$. What is the marginal distribution of $X$?

Conditional probability mass function:
$$p_{X|N}(k \mid n) = \binom{n}{k} p^k (1-p)^{n-k}, \quad k = 0, 1, \ldots, n.$$

Marginal distribution:
$$p_N(n) = \binom{M}{n} q^n (1-q)^{M-n}, \quad n = 0, 1, \ldots, M.$$

The marginal distribution of $X$ (the law of total probability):
$$\Pr\{X = k\} = \sum_{n=0}^{M} p_{X|N}(k \mid n)\, p_N(n) = \cdots = \frac{M!}{k!\,(M-k)!}\, (pq)^k (1 - pq)^{M-k}, \quad k = 0, 1, \ldots, M.$$

$\Rightarrow$ "$X$ has a binomial distribution with parameters $M$ and $pq$."

(See Introduction to Stochastic Modeling: Ch. 2.)
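The mixture result can be checked numerically. A minimal Monte Carlo sketch (the parameter values $M = 20$, $q = 0.6$, $p = 0.3$ are illustrative choices, not from the lecture): sampling $X$ in two stages should match sampling directly from Binomial($M$, $pq$).

```python
import numpy as np

rng = np.random.default_rng(0)
M, q, p = 20, 0.6, 0.3            # illustrative parameter choices
n_samples = 200_000

# Two-stage sampling: N ~ Binomial(M, q), then X | N=n ~ Binomial(n, p)
N = rng.binomial(M, q, size=n_samples)
X = rng.binomial(N, p)

# Direct sampling from the claimed marginal Binomial(M, p*q)
X_direct = rng.binomial(M, p * q, size=n_samples)

print(X.mean(), X_direct.mean())  # both should be close to M*p*q = 3.6
print(X.var(), X_direct.var())    # both close to M*p*q*(1 - p*q)
```

Moments (and, with more work, the full histograms) of the two samples agree within Monte Carlo error.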
Conditional Probability and Expectation

Example 2. Let $X$ have a binomial distribution with parameters $N$ and $p$, where $N$ has a Poisson distribution with mean $\lambda$. What is the marginal distribution of $X$?

Conditional probability mass function:
$$p_{X|N}(k \mid n) = \binom{n}{k} p^k (1-p)^{n-k}, \quad k = 0, 1, \ldots, n.$$

Marginal distribution:
$$p_N(n) = \frac{\lambda^n e^{-\lambda}}{n!}, \quad n = 0, 1, \ldots$$

The marginal distribution of $X$ (the law of total probability):
$$\Pr\{X = k\} = \sum_{n=0}^{\infty} p_{X|N}(k \mid n)\, p_N(n) = \sum_{n=0}^{\infty} \frac{n!}{k!\,(n-k)!}\, p^k (1-p)^{n-k}\, \frac{\lambda^n e^{-\lambda}}{n!} = \cdots = \frac{(\lambda p)^k e^{-\lambda p}}{k!}, \quad \text{for } k = 0, 1, \ldots$$

$\Rightarrow$ "$X$ has a Poisson distribution with mean $\lambda p$."
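This "Poisson thinning" result can also be checked by simulation; a quick sketch with illustrative values $\lambda = 5$, $p = 0.4$ (for a Poisson variable, mean and variance are both $\lambda p$):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, p = 5.0, 0.4                 # illustrative parameter choices
n_samples = 200_000

# Two-stage sampling: N ~ Poisson(lam), then X | N=n ~ Binomial(n, p)
N = rng.poisson(lam, size=n_samples)
X = rng.binomial(N, p)

# Claimed marginal: X ~ Poisson(lam * p), so mean == variance == 2.0
print(X.mean(), X.var())
```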
Conditional Probability and Expectation
Let $g(x)$ be a function for which the expectation of $g(X)$ is finite. Then the conditional expected value of $g(X)$ given $Y = y$ is
$$E[g(X) \mid Y = y] = \sum_{x} g(x)\, p_{X|Y}(x \mid y) \quad \text{if } p_Y(y) > 0.$$
(This conditional mean is not defined at $y$ for which $p_Y(y) = 0$.)

The law of total probability $\Rightarrow$
$$E[g(X)] = E\{E[g(X) \mid Y]\}.$$

2. Summing over the "outcome" variate → expectation.
(Summing over both variates.)
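The tower identity $E[g(X)] = E\{E[g(X) \mid Y]\}$ can be verified exactly on a small joint PMF; the table below is an arbitrary illustrative example, not from the lecture.

```python
import numpy as np

# Joint PMF of (X, Y) on a small grid; rows index x, columns index y
p_xy = np.array([[0.10, 0.20],
                 [0.30, 0.15],
                 [0.05, 0.20]])
x_vals = np.array([0.0, 1.0, 2.0])

p_y = p_xy.sum(axis=0)                    # marginal p_Y(y)
p_x_given_y = p_xy / p_y                  # columns: p_{X|Y}(x|y)

def g(x):
    return x**2                           # any g with finite expectation

# Inner sum: E[g(X) | Y=y] for each y, then average over p_Y
e_g_given_y = (g(x_vals)[:, None] * p_x_given_y).sum(axis=0)
lhs = (e_g_given_y * p_y).sum()           # E{E[g(X)|Y]}

rhs = (g(x_vals)[:, None] * p_xy).sum()   # E[g(X)] directly from the joint
print(lhs, rhs)                           # identical up to rounding
```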
Conditional Probability and Expectation
Discrete and continuous variables
Let $X$ and $N$ be jointly distributed random variables, where $X$ is continuous and $N$ has the discrete set of values $n = 0, 1, 2, \ldots$

The conditional distribution function $F_{X|N}(x \mid n)$ of the random variable $X$, given that $N = n$:
$$F_{X|N}(x \mid n) = \frac{\Pr\{X \le x \text{ and } N = n\}}{\Pr\{N = n\}} \quad \text{if } \Pr\{N = n\} > 0.$$
$F_{X|N}(x \mid n)$ is not defined at values of $n$ for which $\Pr\{N = n\} = 0$.

The conditional probability density function:
$$f_{X|N}(x \mid n) = \frac{d}{dx} F_{X|N}(x \mid n) \quad \text{if } \Pr\{N = n\} > 0.$$
Conditional Probability and Expectation
$$\Rightarrow \quad \Pr\{a \le X < b,\ N = n\} = \int_a^b f_{X|N}(x \mid n)\, p_N(n)\, dx$$
for $a < b$ and where $p_N(n) = \Pr\{N = n\}$.

The law of total probability $\Rightarrow$
$$f_X(x) = \sum_{n=0}^{\infty} f_{X|N}(x \mid n)\, p_N(n).$$

For a function $g$ such that $E[|g(X)|] < \infty$, the conditional expectation of $g(X)$ given that $N = n$:
$$E[g(X) \mid N = n] = \int g(x)\, f_{X|N}(x \mid n)\, dx.$$

The law of total probability:
$$E[g(X)] = \sum_{n=0}^{\infty} E[g(X) \mid N = n]\, p_N(n) = E\{E[g(X) \mid N]\}.$$
Conditional Probability and Expectation
The previous derivations show that the same rules apply for discrete and continuous variables; one only needs to use $\sum$ or $\int$, respectively. This is the basic interpretation of the Lebesgue-Stieltjes integral $I = \int g(x)\, dF(x)$, where $F(x)$ is of bounded variation within the integration interval (see Lecture I: Random Variables). Remember, in computation you draw samples from $F$ to compute $I$; $dF$ defines your (probability) measure. (Lecture II, p. 3.)
Suppose $x_1, \ldots, x_n$ are i.i.d. observations from the density $f(x \mid \theta_1, \ldots, \theta_k)$. The likelihood function is
$$L(\theta \mid x) = L(\theta_1, \ldots, \theta_k \mid x_1, \ldots, x_n) = \prod_{i=1}^{n} f(x_i \mid \theta_1, \ldots, \theta_k).$$
More generally, when the $x_i$'s are not i.i.d., the likelihood is defined as the joint density $f(x_1, \ldots, x_n \mid \theta)$. The value $\theta = \hat{\theta}$ at which $L(\theta \mid x)$ attains its maximum, with $x$ held fixed, is the maximum likelihood estimator (MLE).

Likelihood

For the layman: $L(\theta \mid x)$ gives the likelihood of parameter values $\theta$ (of a hypothesised distribution) given the data $x$.
Likelihood

Example. Gamma MLE. i.i.d. observations $x_1, \ldots, x_n$ from the gamma density
$$f(x \mid \alpha, \beta) = \frac{1}{\Gamma(\alpha)\,\beta^{\alpha}}\, x^{\alpha - 1} e^{-x/\beta}, \quad \text{where } \alpha \text{ is known.}$$

The log likelihood:
$$\log L(\alpha, \beta \mid x_1, \ldots, x_n) = \log \prod_{i=1}^{n} f(x_i \mid \alpha, \beta) = -n \log \Gamma(\alpha) - n\alpha \log \beta + (\alpha - 1) \sum_{i=1}^{n} \log x_i - \sum_{i=1}^{n} \frac{x_i}{\beta}.$$

Solving $\frac{\partial}{\partial \beta} \log L(\alpha, \beta \mid x_1, \ldots, x_n) = 0$ yields the MLE of $\beta$,
$$\hat{\beta} = \frac{\sum_{i=1}^{n} x_i}{n\alpha}.$$

For the layman: the most likely value for $\beta$ given the data $x$.
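The closed-form estimator is easy to sanity-check on synthetic data; a sketch with illustrative true values $\alpha = 3$, $\beta = 2$ (NumPy's `scale` parameter corresponds to $\beta$ in this parametrisation):

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, beta = 3.0, 2.0                 # true parameters; alpha assumed known
x = rng.gamma(shape=alpha, scale=beta, size=50_000)

# Closed-form MLE obtained from d/d(beta) log L = 0
beta_hat = x.sum() / (len(x) * alpha)
print(beta_hat)                        # should be close to the true beta = 2.0
```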
Random Sums
In random sums the number of summands varies randomly.
$$X = \xi_1 + \cdots + \xi_N,$$
where $N$ is random and $\xi_1, \xi_2, \ldots$ is a sequence of independent and identically distributed (i.i.d.) random variables. $X$ is taken as 0 when $N = 0$.
Random sums arise frequently within stochastic processes (e.g. in queuing-type processes).
One often encounters conditional distributions of discrete (i.e. the random number of summands) and continuous variables in connection with random sums.
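Conditioning on $N$ gives the standard identity $E[X] = E[N]\,E[\xi_1]$ for random sums. A minimal Monte Carlo sketch, with illustrative choices (Poisson counts, exponential summands) not taken from the lecture:

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples = 100_000

# N ~ Poisson(4); summands xi ~ Exp with mean 1.5, independent of N
N = rng.poisson(4.0, size=n_samples)
X = np.array([rng.exponential(1.5, size=n).sum() for n in N])  # X = 0 when N = 0

# E[X] = E[N] * E[xi] = 4 * 1.5 = 6
print(X.mean())
```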
Martingales

An important class of stochastic processes is characterised by the Martingale property.

Definition. A stochastic process $\{X_n;\ n = 0, 1, \ldots\}$ is a martingale if for $n = 0, 1, \ldots$
(a) $E[\,|X_n|\,] < \infty$ and
(b) $E[X_{n+1} \mid X_0, \ldots, X_n] = X_n$.

$\Rightarrow E\{E[X_{n+1} \mid X_0, \ldots, X_n]\} = E[X_n]$. Using the law of total probability in the form
$$E\{E[X_{n+1} \mid X_0, \ldots, X_n]\} = E[X_{n+1}]$$
gives $E[X_{n+1}] = E[X_n]$.
Martingales

A martingale has constant mean:
$$E[X_0] = E[X_k] = E[X_n], \quad \text{for } n \ge k.$$

The martingale equality (b) extends to future times:
$$E[X_m \mid X_0, \ldots, X_n] = X_n \quad \text{for } m \ge n.$$

Many important stochastic processes have the Martingale property, e.g. unbiased diffusion or random walk and the gambler's fortune (a concept of importance in the stock market). In "a game" (like finance) the property is related to "fairness", i.e. unpredictability; the chances are the same regardless of the past.
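The unbiased random walk mentioned above is the simplest martingale; a short simulation illustrating the constant-mean property:

```python
import numpy as np

rng = np.random.default_rng(4)
n_paths, n_steps = 100_000, 50

# Unbiased random walk: X_n = sum of i.i.d. +/-1 steps, X_0 = 0.
# Martingale property: E[X_{n+1} | X_0,...,X_n] = X_n, so the mean
# stays constant (here 0) for all n.
steps = rng.choice([-1, 1], size=(n_paths, n_steps))
paths = steps.cumsum(axis=1)

print(paths[:, 9].mean(), paths[:, -1].mean())   # both near 0
```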
Markov Processes and Stochastic Models
Markov Chains
Some definitions
Stochastic process
A stochastic process is a random variable that evolves through time.
Markov process
A Markov process $\{X_t\}$ is a stochastic process with the property that, given the value of $X_t$, the values of $X_u$ for $u > t$ are not influenced by the values of $X_s$ for $s < t$. In words, the probability of any particular future behaviour of the process, when its current state is known exactly, is not altered by additional knowledge concerning its past behaviour.
Online book, Ch 11.2.
Markov Chains

To make the transition matrix notation as clear as possible, we first adopt the notation of "An Introduction to Stochastic Modeling". Later, the notation varies slightly depending on the system, but we'll make due note of that.
Markov property:
$$\Pr\{X_{n+1} = j \mid X_0 = i_0, \ldots, X_{n-1} = i_{n-1}, X_n = i\} = \Pr\{X_{n+1} = j \mid X_n = i\}$$
for all time points $n$ and all states $i_0, \ldots, i_{n-1}, i, j$.

Common convention: label the states of the Markov chain by the non-negative integers $\{0, 1, 2, \ldots\}$. "$X_n$ is in state $i$": $X_n = i$.

One-step transition probability = the probability of $X_{n+1}$ being in state $j$ given that $X_n$ is in state $i$:
$$P_{ij}^{n,n+1} = \Pr\{X_{n+1} = j \mid X_n = i\}.$$
A discrete-time Markov chain: a Markov process whose state space is a finite or countable set and whose time index set is $T = (0, 1, 2, \ldots)$.
Markov Chains

In general, any transition probability may vary as a function of the time of the transition. To keep things clear, in what follows we deal with stationary transition probabilities (independent of the time variable $n$): $P_{ij}^{n,n+1} = P_{ij}$.

The Markov matrix or transition probability matrix of the process:
$$\mathbf{P} = \begin{pmatrix} P_{00} & P_{01} & P_{02} & \cdots \\ P_{10} & P_{11} & P_{12} & \cdots \\ P_{20} & P_{21} & P_{22} & \cdots \\ \vdots & \vdots & \vdots & \\ P_{i0} & P_{i1} & P_{i2} & \cdots \\ \vdots & \vdots & \vdots & \end{pmatrix} = \left[ P_{ij} \right]$$

The $i$th row is the probability distribution of the values of $X_{n+1}$ under the condition that $X_n = i$.
Markov Chains

Clearly, $P_{ij} \ge 0$ and $\sum_{j=0}^{\infty} P_{ij} = 1$ for $i = 0, 1, 2, \ldots$ These properties define the matrix as stochastic.

A Markov process is completely defined once its transition probability matrix and initial state $X_0$ (the probability distribution of $X_0$) are specified.

Note that $\sum_{j=0}^{\infty} P_{ij} = 1$ simply states that the sum of the probabilities of all transitions starting from any state $i$ equals 1. (This means that the rows of $\mathbf{P}$ sum to 1.) If also the sum of the probabilities of all transitions ending in any state $j$ equals 1, $\sum_{i=0}^{\infty} P_{ij} = 1$ (the columns of $\mathbf{P}$ sum to 1), then the transition matrix $\mathbf{P}$ is said to be doubly stochastic.
Markov Chains
Transition probability matrices of a Markov chain
The analysis of a Markov chain concerns mainly the calculation of the probabilities of the possible realisations of the process.
The $n$-step transition probability matrices: $\mathbf{P}^{(n)} = \left[ P_{ij}^{(n)} \right]$.
$$P_{ij}^{(n)} = \Pr\{X_{m+n} = j \mid X_m = i\}$$
= the probability that the process goes from state $i$ to state $j$ in $n$ transitions.

Theorem. The $n$-step transition probabilities of a Markov chain satisfy
$$P_{ij}^{(n)} = \sum_{k=0}^{\infty} P_{ik}\, P_{kj}^{(n-1)},$$
where we define
$$P_{ij}^{(0)} = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \ne j. \end{cases}$$
Note: A product of stochastic matrices is a stochastic matrix.
Markov Chains

This is matrix multiplication: $\mathbf{P}^{(n)} = \mathbf{P} \times \mathbf{P}^{(n-1)}$. By iteration, $\mathbf{P}^{(n)} = \mathbf{P} \times \mathbf{P} \times \cdots \times \mathbf{P} = \mathbf{P}^n$. So, the $n$-step transition probabilities $P_{ij}^{(n)}$ are the entries in the matrix $\mathbf{P}^n$, the $n$th power of $\mathbf{P}$.

Proof. The theorem can be proven via a first-step analysis, a breaking down, or analysis, of the possible transitions on the first step, followed by an application of the Markov property:
$$P_{ij}^{(n)} = \Pr\{X_n = j \mid X_0 = i\} = \sum_{k=0}^{\infty} \Pr\{X_n = j, X_1 = k \mid X_0 = i\}$$
$$= \sum_{k=0}^{\infty} \Pr\{X_1 = k \mid X_0 = i\}\, \Pr\{X_n = j \mid X_0 = i, X_1 = k\} = \sum_{k=0}^{\infty} P_{ik}\, P_{kj}^{(n-1)}. \quad \blacksquare$$
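In computation, the theorem is just a matrix power; a sketch with a small illustrative 3-state stochastic matrix (not from the lecture):

```python
import numpy as np

# A small illustrative stochastic matrix (each row sums to 1)
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

# n-step transition probabilities: P^(n) = the n-th matrix power of P
P5 = np.linalg.matrix_power(P, 5)

# A product of stochastic matrices is stochastic: rows still sum to 1
print(P5.sum(axis=1))
# Entry (i, j) of P5 is Pr{X_5 = j | X_0 = i}
print(P5[0])
```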
Markov Chains

Markov chains are visualised by state transition diagrams.
(Figure from the online book.)
Markov Chains

It follows that if the probability of the process initially being in state $j$ is $p_j$, i.e. the distribution law of $X_0$ is $\Pr\{X_0 = j\} = p_j$, then the probability of the process being in state $k$ at time $n$ is
$$p_k^{(n)} = \sum_{j=0}^{\infty} p_j\, P_{jk}^{(n)} = \Pr\{X_n = k\}.$$

Note: The matrix $\mathbf{P}$ operates from the right, meaning that all vectors are row vectors, not column vectors like in almost any other field. This is the convention in stochastics. For example, with six states:
$$(p_0\ p_1\ p_2\ p_3\ p_4\ p_5) \begin{pmatrix} P_{00} & P_{01} & P_{02} & P_{03} & P_{04} & P_{05} \\ P_{10} & P_{11} & P_{12} & P_{13} & P_{14} & P_{15} \\ P_{20} & P_{21} & P_{22} & P_{23} & P_{24} & P_{25} \\ P_{30} & P_{31} & P_{32} & P_{33} & P_{34} & P_{35} \\ P_{40} & P_{41} & P_{42} & P_{43} & P_{44} & P_{45} \\ P_{50} & P_{51} & P_{52} & P_{53} & P_{54} & P_{55} \end{pmatrix}$$
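The row-vector convention in code, with a tiny illustrative 2-state chain (not from the lecture): the distribution at time $n$ is the initial row vector times $\mathbf{P}^n$.

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])          # illustrative 2-state transition matrix
p0 = np.array([1.0, 0.0])           # start surely in state 0

# Row vector times matrix power: p^(n) = p^(0) P^n
p3 = p0 @ np.linalg.matrix_power(P, 3)
print(p3)                           # distribution of X_3; here [0.825, 0.175]
```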
Markov Chains
Note: Just to make sure you understand how to do calculations with Markov matrices, you might want to do a few exercises from Chapter 3 of "An Introduction to Stochastic Modeling". (The answers are given at the end of the book.) The online book serves this purpose just as well.
Markov Chains

Markov chain models
Markov chains are commonly used to model various stochastic processes in e.g. physics, biology, sociology and economics.
βAn Introduction to Stochastic Modelingβ gives a few examples that are useful to go through. We do this for one:
An inventory model
A commodity is stocked. The replenishment of stock takes place at the end of periods labelled $n = 0, 1, 2, \ldots$ The total aggregate demand during period $n$ is a random variable $\xi_n$ whose distribution function is independent of $n$,
$$\Pr\{\xi_n = k\} = a_k \quad \text{for } k = 0, 1, 2, \ldots,$$
where $a_k \ge 0$ and $\sum_{k=0}^{\infty} a_k = 1$.
Markov Chains

Replenishment policy: If the end-of-period stock quantity is not greater than $s$, an amount sufficient to increase the quantity of stock up to the level $S$ is procured. On the other hand, if the available stock is in excess of $s$, no replenishment is undertaken. $X_n$ = the quantity on hand at the end of period $n$. The states of the process $\{X_n\}$: $S, S-1, \ldots, +1, 0, -1, -2, \ldots$, where negative values are interpreted as unfilled demand that will be satisfied immediately after restocking.

The inventory process $\{X_n\}$:
Markov Chains

According to the inventory rules:
$$X_{n+1} = \begin{cases} X_n - \xi_{n+1} & \text{if } s < X_n \le S, \\ S - \xi_{n+1} & \text{if } X_n \le s, \end{cases}$$
where $\xi_n$ is the quantity demanded in the $n$th period.

Assuming that the successive demands $\xi_1, \xi_2, \ldots$ are independent random variables, the stock values $X_0, X_1, X_2, \ldots$ constitute a Markov chain whose transition probability matrix can be determined from
$$P_{ij} = \Pr\{X_{n+1} = j \mid X_n = i\} = \begin{cases} \Pr\{\xi_{n+1} = i - j\} & \text{if } s < i \le S, \\ \Pr\{\xi_{n+1} = S - j\} & \text{if } i \le s. \end{cases}$$
Markov Chains

To illustrate, consider a spare-parts inventory in which either 0, 1, or 2 repair parts are demanded and for which
$$\Pr\{\xi_n = 0\} = 0.5, \quad \Pr\{\xi_n = 1\} = 0.4, \quad \Pr\{\xi_n = 2\} = 0.1,$$
and $s = 0$ and $S = 2$.

The possible values for $X_n$: $i = 2, 1, 0,$ and $-1$.

$P_{10} = \Pr\{X_{n+1} = 0 \mid X_n = 1\}$:
When $X_n = 1$, no replenishment takes place $\Rightarrow X_{n+1} = 0$ when $\xi_{n+1} = 1$. This occurs with probability $P_{10} = 0.4$.

When $X_n = 0$, replenishment to $S = 2$ takes place $\Rightarrow X_{n+1} = 0$ when $\xi_{n+1} = 2$. This occurs with probability $P_{00} = 0.1$.
Markov Chains

Analysing each probability in this way results in (states ordered $-1, 0, +1, +2$, for both rows and columns):
$$\mathbf{P} = \begin{pmatrix} 0 & 0.1 & 0.4 & 0.5 \\ 0 & 0.1 & 0.4 & 0.5 \\ 0.1 & 0.4 & 0.5 & 0 \\ 0 & 0.1 & 0.4 & 0.5 \end{pmatrix}$$
For the inventory model the important quantities are e.g. the long-term fraction of periods in which demand is not met ($X_n < 0$) and the long-term average inventory level; using $p_j^{(n)} = \Pr\{X_n = j\}$, these quantities are given by $\lim_{n\to\infty} \sum_{j<0} p_j^{(n)}$ and $\lim_{n\to\infty} \sum_{j>0} j\, p_j^{(n)}$, respectively.

Typically, determining conditions under which the $p_j^{(n)}$ stabilise, and the limiting probabilities $p_j = \lim_{n\to\infty} p_j^{(n)}$, is of importance.
Markov Chains

The simplest models can be solved analytically. To obtain limiting values for more complicated models one has to do stochastic simulation, that is, do many Monte Carlo runs starting from the given $X_0$ and compute the probabilities of different outcomes (the distribution) of $X_n$ as the number of times different values of $x$ are obtained, normalised by the total number of outcomes, e.g.
$$\Pr\{x_i\} = \frac{\#(x_i)}{\sum_{k=1}^{N} \#(x_k)}.$$
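A simulation sketch of the spare-parts example above ($s = 0$, $S = 2$, demand probabilities 0.5, 0.4, 0.1), estimating the limiting distribution from visit counts along one long run; the burn-in length is an arbitrary choice. Solving $\pi \mathbf{P} = \pi$ for this chain gives $\Pr\{X = -1\} = 2/45 \approx 0.044$ for the long-run fraction of unmet-demand periods, which the estimate should approach.

```python
import numpy as np

rng = np.random.default_rng(5)
s, S = 0, 2
demand_vals = np.array([0, 1, 2])
demand_probs = np.array([0.5, 0.4, 0.1])

n_steps, burn_in = 200_000, 1_000
xi_all = rng.choice(demand_vals, size=n_steps, p=demand_probs)

x = S                                     # initial stock X_0 = S
counts = {}                               # visit counts per state
for n, xi in enumerate(xi_all):
    # Inventory rule: deplete if stock above s, else restock to S first
    x = x - xi if s < x <= S else S - xi
    if n >= burn_in:                      # discard transient
        counts[x] = counts.get(x, 0) + 1

total = sum(counts.values())
probs = {state: c / total for state, c in sorted(counts.items())}
print(probs)                              # estimate over states {-1, 0, 1, 2}
frac_unmet = sum(p for state, p in probs.items() if state < 0)
print(frac_unmet)                         # long-run fraction of unmet demand
```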
Simple Stochastic Kinetics

As an example of a general process that is Markovian, let's look at a process where objects (people, cars, particles) arrive at a point randomly.
Here event times (arrival times) are governed by a stochastic process.
Poisson Process
In a Poisson process with rate $\lambda$ defined on the interval $[0, T]$, the inter-event times are $Exp(\lambda)$. To simulate: Initialise the process at time zero, $T_0 = t_0 = 0$. Simulate $t_k \sim Exp(\lambda)$ and put $T_k = T_{k-1} + t_k$, where $k \in 1, 2, 3, \ldots$ Here $t_k$ is the time from the $(k-1)$th to the $k$th event. Stop when $T_k > T$ and keep $T_1, T_2, \ldots$ as the realisation of the process.
Online book, Ch 11.1.
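The simulation recipe above, sketched in Python; the rate $\lambda = 2$ and horizon $T = 1000$ are illustrative. Note that NumPy parametrises the exponential by its mean, $1/\lambda$.

```python
import numpy as np

rng = np.random.default_rng(6)
lam, T = 2.0, 1000.0                 # illustrative rate and time horizon

# Event times: cumulative sums of Exp(lam) inter-event times, stop past T
event_times = []
t = 0.0
while True:
    t += rng.exponential(1.0 / lam)  # inter-event time t_k ~ Exp(lam)
    if t > T:
        break
    event_times.append(t)
event_times = np.array(event_times)

print(len(event_times) / T)          # empirical event rate, close to lam
gaps = np.diff(event_times)
print(gaps.mean())                   # mean inter-event time, close to 1/lam
```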
Simple Stochastic Kinetics

The Poisson process is a point process. It is directly related to the corresponding counting process, where the process is defined via the number of events in a given time interval $(0, t]$ as $X_t \sim Po(\lambda t)$ ($\lambda$ is the rate of events). Stochastic simulations are often constructed by directly regarding them as counting processes rather than point processes.

Viewing the Poisson process as a counting process, one needs to specify the smallest time interval $\Delta t$, or the resolution of time, at which the system is observed. Then the probability of an event during that time is, of course, $p = \lambda \Delta t$. Typically, one is interested in the mean and variance of inter-event times, or of the times for a certain number of events to occur.
Simple Stochastic Kinetics
The stationary (and homogeneous) Poisson process is perhaps the most basic example of stochastic dynamics, but it describes surprisingly many real-world processes. In this form the state space and time are discrete. We will look at a slightly more complicated and more widely applicable variation, the nonstationary Poisson process, in the next lecture. There the process is defined in continuous time (but still simulated at discrete times).