
“Ch01-9780123852328” — 2011/3/18 — 12:14 — page 1 — #1

Chapter 1

2.1 E[1{A1}] = Pr{A1} = 1/13. Similarly, E[1{Ak}] = Pr{Ak} = 1/13 for k = 1, …, 13. Then, because the expected value of a sum is always the sum of the expected values, E[N] = E[1{A1}] + ··· + E[1{A13}] = 1/13 + ··· + 1/13 = 1.

2.2 Let X be the first number observed and let Y be the second. We use the identity (∑ xi)² = ∑ xi² + ∑_{i≠j} xi xj several times.
E[X] = E[Y] = (1/N) ∑ xi;
Var[X] = Var[Y] = (1/N) ∑ xi² − ((1/N) ∑ xi)² = [(N − 1) ∑ xi² − ∑_{i≠j} xi xj]/N²;
E[XY] = ∑_{i≠j} xi xj / [N(N − 1)];
Cov[X, Y] = E[XY] − E[X]E[Y] = [∑_{i≠j} xi xj − (N − 1) ∑ xi²] / [N²(N − 1)];
ρ_{X,Y} = Cov[X, Y]/(σX σY) = −1/(N − 1).

2.3 Write Sr = ξ1 + ··· + ξr, where ξk is the number of additional samples needed to observe k distinct elements, assuming that k − 1 distinct elements have already been observed. Then, defining pk = Pr{ξk = 1} = 1 − (k − 1)/N, we have Pr{ξk = n} = pk(1 − pk)^{n−1} for n = 1, 2, … and E[ξk] = 1/pk. Finally, E[Sr] = E[ξ1] + ··· + E[ξr] = 1/p1 + ··· + 1/pr, which verifies the given formula.
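The formula E[Sr] = 1/p1 + ··· + 1/pr is easy to check numerically. The sketch below (an illustration, not part of the manual) evaluates it exactly with rational arithmetic, and confirms that for r = N it reduces to the classical full-collection value N(1 + 1/2 + ··· + 1/N):

```python
from fractions import Fraction

def expected_samples(N, r):
    # E[S_r] = sum over k = 1..r of 1/p_k, with p_k = 1 - (k-1)/N
    return sum(Fraction(1) / (1 - Fraction(k - 1, N)) for k in range(1, r + 1))

# For r = N, each term 1/p_k = N/(N - k + 1), so the sum is N times a harmonic sum.
N = 13
harmonic = sum(Fraction(1, j) for j in range(1, N + 1))
full_collection = expected_samples(N, N)
```

Using exact fractions avoids any floating-point doubt about the telescoping identity.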

2.4 Using an obvious notation, the event {N = n} is equivalent to either HTH…HT T or THT…TH H (the first n − 1 tosses alternating, followed by a repeat of the last toss), so
Pr{N = n} = 2 × (1/2)^{n−1} × (1/2) = (1/2)^{n−1} for n = 2, 3, …;
Pr{N is even} = ∑_{n=2,4,…} (1/2)^{n−1} = 2/3, and Pr{N ≤ 6} = ∑_{n=2}^{6} (1/2)^{n−1} = 31/32;
Pr{N is even and N ≤ 6} = ∑_{n=2,4,6} (1/2)^{n−1} = 21/32.
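The three values 2/3, 31/32, and 21/32 can be confirmed by direct summation (a quick check, not part of the original solution):

```python
from fractions import Fraction

def p(n):
    # Pr{N = n} = (1/2)**(n-1) for n = 2, 3, ...
    return Fraction(1, 2) ** (n - 1)

# Pr{N even}: geometric series (1/2) + (1/2)**3 + ... = (1/2)/(1 - 1/4)
pr_even = Fraction(1, 2) / (1 - Fraction(1, 4))
pr_le6 = sum(p(n) for n in range(2, 7))
pr_even_le6 = sum(p(n) for n in (2, 4, 6))
```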

2.5 Using an obvious notation, the probability that A wins on the (2n + 1)st trial is
Pr{AᶜBᶜ … AᶜBᶜ A (n losses for each)} = [(1 − p)(1 − q)]^n p, n = 0, 1, …
Pr{A wins} = ∑_{n=0}^{∞} [(1 − p)(1 − q)]^n p = p/[1 − (1 − p)(1 − q)].
Pr{A wins on play 2n + 1 | A wins} = (1 − π)π^n, where π = (1 − p)(1 − q).
E[# trials | A wins] = ∑_{n=0}^{∞} (2n + 1)(1 − π)π^n = 1 + 2π/(1 − π) = [1 + (1 − p)(1 − q)]/[1 − (1 − p)(1 − q)] = 2/[1 − (1 − p)(1 − q)] − 1.
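As a sanity check (not in the original), the closed form 2/[1 − (1 − p)(1 − q)] − 1 can be compared against a truncated version of the series ∑ (2n + 1)(1 − π)π^n for sample values of p and q (0.3 and 0.4 here are arbitrary choices):

```python
p, q = 0.3, 0.4
pi = (1 - p) * (1 - q)  # probability that a full round passes with no winner

series = sum((2 * n + 1) * (1 - pi) * pi ** n for n in range(200))
closed_form = 2 / (1 - pi) - 1
```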

2.6 Let N be the number of losses and let S be the sum. Then Pr{N = n, S = k} = (1/6)^{n−1}(5/6)pk, where p3 = p11 = p4 = p10 = 1/15; p5 = p9 = p6 = p8 = 2/15; and p7 = 3/15. Finally, Pr{S = k} = ∑_{n=1}^{∞} Pr{N = n, S = k} = pk. (It is not a correct argument to simply say Pr{S = k} = Pr{Sum of 2 dice = k | Dice differ}. Compare with Exercise II, 2.1.)

2.7 We are given that (*) Pr{U > u, W > w} = [1 − FU(u)][1 − FW(w)] for all u, w. According to the definition of independence, we wish to show that Pr{U ≤ u, W ≤ w} = FU(u)FW(w) for all u, w. Taking complements and using the addition law,
Pr{U ≤ u, W ≤ w} = 1 − Pr{U > u or W > w}
= 1 − [Pr{U > u} + Pr{W > w} − Pr{U > u, W > w}]
= 1 − [(1 − FU(u)) + (1 − FW(w)) − (1 − FU(u))(1 − FW(w))]
= FU(u)FW(w) after simplification.

An Introduction to Stochastic Modeling, Instructor Solutions Manualc© 2011 Elsevier Inc. All rights reserved.


2.8 (a) E[Y] = E[a + bX] = ∫(a + bx) dFX(x) = a∫dFX(x) + b∫x dFX(x) = a + bE[X] = a + bμ. In words, (a) implies that the expected value of a constant times a random variable is the constant times the expected value of the random variable, so E[b²(X − μ)²] = b²E[(X − μ)²].
(b) Var[Y] = E[(Y − E[Y])²] = E[(a + bX − a − bμ)²] = E[b²(X − μ)²] = b²E[(X − μ)²] = b²σ².

2.9 Use the usual sums-of-numbers formulas (see I, 6 if necessary) to establish
∑_{k=1}^{n} k(n − k) = (1/6)n(n + 1)(n − 1), and
∑_{k=1}^{n} k²(n − k) = n∑k² − ∑k³ = (1/12)n²(n + 1)(n − 1), so
E[X] = [2/(n(n − 1))]∑k(n − k) = (1/3)(n + 1),
E[X²] = [2/(n(n − 1))]∑k²(n − k) = (1/6)n(n + 1), and
Var[X] = E[X²] − (E[X])² = (1/18)(n + 1)(n − 2).

2.10 Observe, for example, Pr{Z = 4} = Pr{X = 3, Y = 1} = (1/2)(1/6), using independence. Continuing in this manner,

z            1     2    3    4     5    6
Pr{Z = z}   1/12  1/6  1/4  1/12  1/6  1/4

2.11 Observe, for example, Pr{W = 2} = Pr{U = 0, V = 2} + Pr{U = 1, V = 1} = 1/6 + 1/6 = 1/3. Continuing in this manner, arrive at

w            1    2    3    4
Pr{W = w}   1/6  1/3  1/3  1/6

2.12 Changing any of the random variables by adding or subtracting a constant will not affect the covariance. Therefore, by replacing U with U − E[U], if necessary, etc., we may assume, without loss of generality, that all of the means are zero. Because the means are zero,
Cov[X, Y] = E[XY] − E[X]E[Y] = E[XY] = E[UV − UW + VW − W²] = −E[W²] = −σ². (E[UV] = E[U]E[V] = 0, etc.)

2.13 Pr{v < V, U ≤ u} = Pr{v < X ≤ u, v < Y ≤ u}
= Pr{v < X ≤ u} Pr{v < Y ≤ u}   (by independence)
= (u − v)²
= ∫∫_{v < v′ ≤ u′ ≤ u} f_{U,V}(u′, v′) du′ dv′
= ∫_v^u [∫_{v′}^{u} f_{U,V}(u′, v′) du′] dv′.
The integrals are removed from the last expression by successive differentiation, first w.r.t. v (changing sign because v is a lower limit), then w.r.t. u. This tells us
f_{U,V}(u, v) = −(d/du)(d/dv)(u − v)² = 2 for 0 < v ≤ u ≤ 1.


3.1 Z has a discrete uniform distribution on 0,1, . . . ,9.

3.2 In maximizing a continuous function, we often set the derivative equal to zero. In maximizing a function of a discrete variable, we equate the ratio of successive terms to one. More precisely, k* is the smallest k for which p(k + 1)/p(k) < 1, or, the smallest k for which [(n − k)/(k + 1)][p/(1 − p)] < 1. Equivalently, (b) k* = [(n + 1)p], where [x] = greatest integer ≤ x. For (a), let n → ∞, p → 0, λ = np. Then k* = [λ].

3.3 Recall that e^λ = 1 + λ + λ²/2! + λ³/3! + ··· and e^{−λ} = 1 − λ + λ²/2! − λ³/3! + ···, so that sinh λ ≡ (1/2)(e^λ − e^{−λ}) = λ + λ³/3! + λ⁵/5! + ···
Then Pr{X is odd} = ∑_{k=1,3,5,…} λ^k e^{−λ}/k! = e^{−λ} sinh(λ) = (1/2)(1 − e^{−2λ}).
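A quick numerical check of Pr{X is odd} = ½(1 − e^{−2λ}) (illustrative only; λ = 2.5 is an arbitrary choice):

```python
import math

lam = 2.5
# Sum the Poisson pmf over odd k; the tail beyond k = 99 is negligible.
odd_sum = sum(lam ** k * math.exp(-lam) / math.factorial(k)
              for k in range(1, 100, 2))
closed_form = 0.5 * (1 - math.exp(-2 * lam))
```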

3.4 E[V] = ∑_{k=0}^{∞} [1/(k + 1)] λ^k e^{−λ}/k! = (e^{−λ}/λ) ∑_{k=0}^{∞} λ^{k+1}/(k + 1)! = (1/λ)e^{−λ}(e^λ − 1) = (1/λ)(1 − e^{−λ}).

3.5 E[XY] = E[X(N − X)] = NE[X] − E[X²]
= N²p − [Np(1 − p) + N²p²] = N²p(1 − p) − Np(1 − p)
Cov[X, Y] = E[XY] − E[X]E[Y] = −Np(1 − p).

3.6 Your intuition should suggest the correct answers: (a) X1 is binomially distributed with parameters M and π1; (b) N is binomial with parameters M and π1 + π2; and (c) X1, given N = n, is conditionally binomial with parameters n and p = π1/(π1 + π2). To derive these correct answers formally, begin with
Pr{X1 = i, X2 = j, X3 = k} = [M!/(i! j! k!)] π1^i π2^j π3^k,  i + j + k = M.
Since k = M − (i + j),
Pr{X1 = i, X2 = j} = [M!/(i! j! (M − i − j)!)] π1^i π2^j π3^{M−i−j},  0 ≤ i + j ≤ M.
(a) Pr{X1 = i} = ∑_j Pr{X1 = i, X2 = j}
= [M!/(i!(M − i)!)] π1^i ∑_{j=0}^{M−i} [(M − i)!/( j!(M − i − j)!)] π2^j π3^{M−i−j}
= C(M, i) π1^i (π2 + π3)^{M−i},  i = 0, 1, …, M.
(b) Observe that N = n if and only if X3 = M − n. Apply the result of (a) to X3:
Pr{N = n} = Pr{X3 = M − n} = [M!/(n!(M − n)!)] (π1 + π2)^n π3^{M−n}.
(c) Pr{X1 = k | N = n} = Pr{X1 = k, X2 = n − k}/Pr{N = n}
= {[M!/(k!(n − k)!(M − n)!)] π1^k π2^{n−k} π3^{M−n}} / {[M!/(n!(M − n)!)] (π1 + π2)^n π3^{M−n}}
= [n!/(k!(n − k)!)] [π1/(π1 + π2)]^k [π2/(π1 + π2)]^{n−k},  k = 0, 1, …, n.


3.7 Pr{Z = n} = ∑_{k=0}^{n} Pr{X = k}Pr{Y = n − k}
= ∑_{k=0}^{n} μ^k e^{−μ} ν^{n−k} e^{−ν}/[k!(n − k)!] = e^{−(μ+ν)} (1/n!) ∑_{k=0}^{n} [n!/(k!(n − k)!)] μ^k ν^{n−k}
= e^{−(μ+ν)} (μ + ν)^n/n!   (using the binomial formula).
Z is Poisson distributed, parameter μ + ν.

3.8 (a) X is the sum of N independent Bernoulli random variables, each with parameter p, and Y is the sum of M independent Bernoulli random variables, each with the same parameter p. Z is the sum of M + N independent Bernoulli random variables, each with parameter p.
(b) By considering the ways in which a committee of n people may be formed from a group comprised of M men and N women, establish the identity
C(M + N, n) = ∑_{k=0}^{n} C(N, k) C(M, n − k).
Then
Pr{Z = n} = ∑_{k=0}^{n} Pr{X = k}Pr{Y = n − k}
= ∑_{k=0}^{n} C(N, k) p^k (1 − p)^{N−k} C(M, n − k) p^{n−k} (1 − p)^{M−n+k}
= C(M + N, n) p^n (1 − p)^{M+N−n} for n = 0, 1, …, M + N.
Note: C(N, k) = 0 for k > N.

3.9 Pr{X + Y = n} = ∑_{k=0}^{n} Pr{X = k, Y = n − k} = ∑_{k=0}^{n} (1 − π)π^k (1 − π)π^{n−k}
= (1 − π)²π^n ∑_{k=0}^{n} 1 = (n + 1)(1 − π)²π^n for n ≥ 0.

3.10
k    Binomial n = 10, p = .1    Binomial n = 100, p = .01    Poisson λ = 1
0          .349                        .366                       .368
1          .387                        .370                       .368
2          .194                        .185                       .184
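The table entries can be regenerated directly (a check, not part of the text):

```python
import math

def binom_pmf(k, n, p):
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def poisson_pmf(k, lam):
    return lam ** k * math.exp(-lam) / math.factorial(k)

# One row per k = 0, 1, 2, rounded to three places as in the table.
rows = [(k,
         round(binom_pmf(k, 10, 0.1), 3),
         round(binom_pmf(k, 100, 0.01), 3),
         round(poisson_pmf(k, 1.0), 3)) for k in range(3)]
```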

3.11 Pr{U = u, W = 0} = Pr{X = u, Y = u} = (1 − π)²π^{2u}, u ≥ 0.
Pr{U = u, W = w > 0} = Pr{X = u, Y = u + w} + Pr{Y = u, X = u + w} = 2(1 − π)²π^{2u+w}.
Pr{U = u} = ∑_{w=0}^{∞} Pr{U = u, W = w} = π^{2u}(1 − π²).
Pr{W = 0} = ∑_{u=0}^{∞} Pr{U = u, W = 0} = (1 − π)²/(1 − π²).
Pr{W = w > 0} = ∑_{u=0}^{∞} Pr{U = u, W = w} = [2(1 − π)²/(1 − π²)]π^w, and
Pr{U = u, W = w} = Pr{U = u}Pr{W = w} for all u, w.


3.12 Let X = number of calls to the switchboard in a minute. Pr{X ≥ 7} = 1 − ∑_{k=0}^{6} 4^k e^{−4}/k! = .111.
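Numerically (an illustrative check):

```python
import math

lam = 4
# Complement of the Poisson(4) cdf at 6:
tail = 1 - sum(lam ** k * math.exp(-lam) / math.factorial(k) for k in range(7))
```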

3.13 Assume that inspected items are independently defective or good. Let X = # of defects in the sample.
Pr{X = 0} = (.95)^{10} = .599
Pr{X = 1} = 10(.95)^9(.05) = .315
Pr{X ≥ 2} = 1 − (.599 + .315) = .086.

3.14 (a) E[Z] = (1 − p)/p = 9, Var[Z] = (1 − p)/p² = 90.
(b) Pr{Z ≥ 10} = (.9)^{10} = .349.

3.15 Pr{X ≤ 2} = (1 + 2 + 2²/2)e^{−2} = 5e^{−2} = .677.

3.16 (a) p0 = 1 − b∑_{k=1}^{∞}(1 − p)^k = 1 − b(1 − p)/p.
(b) When b = p, then pk is given by (3.4). When b = p/(1 − p), then pk is given by (3.5).
(c) Pr{N = n > 0} = Pr{X = 0, Z = n} + Pr{X = 1, Z = n − 1}
= (1 − α)p(1 − p)^n + αp(1 − p)^{n−1}
= [(1 − α)p + αp/(1 − p)](1 − p)^n.
So b = (1 − α)p + αp/(1 − p).

4.1 E[e^{λZ}] = (1/√(2π)) ∫_{−∞}^{+∞} e^{−z²/2 + λz} dz = e^{λ²/2} (1/√(2π)) ∫_{−∞}^{+∞} e^{−(z−λ)²/2} dz = e^{λ²/2}.

4.2 (a) Pr{W > 1/θ} = e^{−θ(1/θ)} = e^{−1} = .368 ….
(b) Mode = 0.

4.3 X − θ and Y − θ are both uniform over [−1/2, 1/2], independent of θ, and W = X − Y = (X − θ) − (Y − θ). Therefore the distribution of W is independent of θ and we may determine it assuming θ = 0. Also, the density of W is symmetric, since those of both X and Y are.
Pr{W > w} = Pr{X > Y + w} = (1/2)(1 − w)², w > 0.
So fW(w) = 1 − w for 0 ≤ w ≤ 1, and fW(w) = 1 − |w| for −1 ≤ w ≤ +1.

4.4 μC = .010; σC² = (.005)². Pr{C < 0} = Pr{(C − .010)/.005 < −.010/.005} = Pr{Z < −2} = .0228.

4.5 Pr{Z < Y} = ∫_0^∞ {∫_x^∞ 3e^{−3y} dy} 2e^{−2x} dx = 2/5.

5.1 Pr{N > k} = Pr{X1 ≤ ξ, …, Xk ≤ ξ} = [F(ξ)]^k, k = 0, 1, …
Pr{N = k} = Pr{N > k − 1} − Pr{N > k} = [1 − F(ξ)]F(ξ)^{k−1}, k = 1, 2, …

5.2 Pr{Z > z} = Pr{X1 > z, …, Xn > z} = Pr{X1 > z} ··· Pr{Xn > z} = e^{−λz} ··· e^{−λz} = e^{−nλz}, z > 0.
Z is exponentially distributed, parameter nλ.

5.3 Pr{X > k} = ∑_{l=k+1}^{∞} p(1 − p)^l = (1 − p)^{k+1}, k = 0, 1, …
E[X] = ∑_{k=0}^{∞} Pr{X > k} = (1 − p)/p.


5.4 Write V = V⁺ − V⁻, where V⁺ = max{V, 0} and V⁻ = max{−V, 0}. Then Pr{V⁺ > v} = 1 − FV(v) and Pr{V⁻ > v} = FV(−v) for v > 0. Use (5.3) on V⁺ and V⁻ together with E[V] = E[V⁺] − E[V⁻]. The mean does not exist if E[V⁺] = E[V⁻] = ∞.

5.5 E[W²] = ∫_0^∞ Pr{W² > t} dt = ∫_0^∞ [1 − FW(√t)] dt = ∫_0^∞ 2y[1 − FW(y)] dy, by letting y = √t.

5.6 Pr{V > t} = ∫_t^∞ λe^{−λv} dv = e^{−λt}; E[V] = ∫_0^∞ Pr{V > t} dt = ∫_0^∞ e^{−λt} dt = 1/λ.

5.7 Pr{V > v} = Pr{X1 > v, …, Xn > v} = Pr{X1 > v} ··· Pr{Xn > v} = e^{−λ1v} ··· e^{−λnv} = e^{−(λ1+···+λn)v}, v > 0.
V is exponentially distributed with parameter ∑ λi.

5.8 With batteries A and B both in use, each stage lasts the minimum of two exponential(λ) lifetimes, which has mean 1/(2λ):

Spares:   3       2       1       0
Mean:   1/(2λ)  1/(2λ)  1/(2λ)  1/(2λ)

Expected flashlight operating duration = 1/(2λ) + 1/(2λ) + 1/(2λ) + 1/(2λ) = 2/λ = 2 expected battery operating durations!


Chapter 2

1.1 (a) Pr{X = k} = ∑_{m=k}^{N} [N!/(m!(N − m)!)] p^m (1 − p)^{N−m} [m!/(k!(m − k)!)] π^k (1 − π)^{m−k}
= C(N, k)(πp)^k (1 − πp)^{N−k} for k = 0, 1, …, N.
(b) E[XY] = E[MX] − E[X²]; E[M] = Np;
E[M²] = N²p² + Np(1 − p); E[X | M] = Mπ;
E[X] = Nπp; E[X² | M] = M²π² + Mπ(1 − π);
E[X²] = (Nπp)² + Nπ²p(1 − p) + Nπp(1 − π);
E[MX] = E{M E[X | M]} = π[N²p² + Np(1 − p)];
Cov[X, Y] = E[X(M − X)] − E[X]E[M − X] = −Np²π(1 − π).

1.2 Pr{X = x | Y = y} = (1/x)/[1/y + ··· + 1/N] for 1 ≤ y ≤ x ≤ N.

1.3 (a) Pr{U = u | V = v} = 2/(2v − 1) for 1 ≤ u < v ≤ 6
= 1/(2v − 1) for 1 ≤ u = v ≤ 6.
(b) Pr{S = s, T = t}

t\s    2     3     4     5     6     7     8     9    10    11    12
0    1/36        1/36        1/36        1/36        1/36        1/36
1          2/36        2/36        2/36        2/36        2/36
2                2/36        2/36        2/36        2/36
3                      2/36        2/36        2/36
4                            2/36        2/36
5                                  2/36

1.4 E[X | N] = N/2; E[X] = (1/2)E[N] = 5/2.

1.5 Pr{X = 0} = (3/4)^{20} = .00317.



1.6 Pr{N = n} = (1/2)^n, n = 1, 2, …
Pr{X = k | N = n} = C(n, k)(1/2)^n, 0 ≤ k ≤ n.
Pr{X = k} = ∑_{n=1}^{∞} C(n, k)(1/4)^n.
Pr{X = 0} = ∑_{n=1}^{∞} (1/4)^n = 1/3.
Pr{X = 1} = ∑_{n=1}^{∞} n(1/4)^n = 4/9.
E[X | N] = N/2; E[X] = (1/2)E[N] = 1.

1.7 Pr{True S.F. | Diag. S.F.} = .30(.85)/[.30(.85) + .70(.35)] = .51.

1.8 Pr{First Red | Second Red} = Pr{Both Red}/Pr{Second Red} = [(1/2)(2/3)]/[(1/2)(2/3) + (1/2)(1/3)] = 2/3.

1.9 Pr{X = k} = ∑_{n=k−1}^{∞} Pr{X = k | N = n}Pr{N = n}
= ∑_{n=k−1}^{∞} [1/(n + 2)] e^{−1}/n! = ∑_{n=k−1}^{∞} e^{−1}[1/(n + 1)! − 1/(n + 2)!] = e^{−1}/k! for k = 0, 1, … (Poisson, mean = 1).

1.10 A “typical” distribution of 8 families would break down as follows:

Family    #1   #2   #3   #4   #5     #6     #7       #8
Children  {G}  {G}  {G}  {G}  {B,G}  {B,G}  {B,B,G}  {B,B,B}

(a) Let N = the number of children in a family.
Pr{N = 1} = 1/2, Pr{N = 2} = Pr{N = 3} = 1/4.
(b) Let X = the number of girl children in a family.
Pr{X = 0} = 1/8, Pr{X = 1} = 7/8.
(c) Family #8 is three times as likely, and family #7 is twice as likely, to be chosen as families #5 and #6.
Pr{No sisters} = Pr{#8} = 3/7.
Pr{One sister} = 4/7.
Pr{Two brothers} = Pr{#8} = 3/7.
Pr{One brother} = Pr{#7} = 2/7.
Pr{No brothers} = 2/7.


2.1 (a) The bid X1 = x is accepted if x > A; otherwise, if 0 ≤ x ≤ A, one is back where one started, due to the independent and identically distributed nature of the bids.
E[XN | X1 = x] = x if x > A; M if 0 ≤ x ≤ A.
(b) The given equation results from the law of total probability:
E[XN] = ∫_0^∞ E[XN | X1 = x] dF(x).
(c) When X is exponentially distributed, the conditional distribution of X, given that X > A, is the same as the distribution of A + X.
(d) When dF(x) = λe^{−λx} dx, then
α = e^{−λA} and ∫_A^∞ xλe^{−λx} dx = Aα + α∫_0^∞ yλe^{−λy} dy = α(A + 1/λ), and M = A + 1/λ.

2.2 The distribution for the sum is
p(2) = p(12) = 1/36        p(5) = p(9) = 4/36
p(3) = p(11) = 2/36        p(6) = p(8) = 5/36 − (.02)²
p(4) = p(10) = 3/36        p(7) = 6/36 + 2(.02)²
Pr{Win} = .4924065.
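The value .4924065 can be reproduced with the usual craps pass-line computation, Pr{Win} = p(7) + p(11) + ∑_point p(point)²/(p(point) + p(7)), applied to the perturbed distribution above. The sketch below is an illustrative check under that assumed rule (win on 7 or 11 at the come-out, lose on 2, 3, 12, otherwise make the point before 7):

```python
p = {2: 1/36, 12: 1/36, 3: 2/36, 11: 2/36, 4: 3/36, 10: 3/36,
     5: 4/36, 9: 4/36, 6: 5/36 - 0.02 ** 2, 8: 5/36 - 0.02 ** 2,
     7: 6/36 + 2 * 0.02 ** 2}

# Natural wins plus, for each point s, Pr{roll s} * Pr{s before 7}:
win = p[7] + p[11] + sum(p[s] * p[s] / (p[s] + p[7])
                         for s in (4, 5, 6, 8, 9, 10))
```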

3.1 (a) ν = τ² = λ; μ = p, σ² = p(1 − p).
E[Z] = λp; Var[Z] = λp(1 − p) + λp² = λp.
(b) Z has the Poisson distribution.

3.2 Pr{Z = k} = ∑_{n=k}^{M} [n!/(k!(n − k)!)] p^k (1 − p)^{n−k} [M!/(n!(M − n)!)] q^n (1 − q)^{M−n}
= C(M, k)(qp)^k (1 − qp)^{M−k} for k = 0, 1, …, M.

3.3 (a) μ = 0, σ² = 1, ν = (1 − α)/α, τ² = (1 − α)/α²; E[Z] = 0, Var[Z] = (1 − α)/α.
(b) E[Z³] = 0; E[Z⁴] = E[N² + 2N(N − 1)] = 3[(1 − α)²/α² + (1 − α)/α²] − 2(1 − α)/α = (1 − α)(6 − 5α)/α².

3.4 (a) E[N] = Var[N] = λ; E[SN] = λμ, Var[SN] = λ(μ² + σ²).
(b) E[N] = λ, Var[N] = λ/p = λ(1 + λ);
E[SN] = λμ, Var[SN] = λ(μ² + σ²) + λ²μ².
(c) For large λ, Var[SN] ∝ λ in (a), while Var[SN] ∝ λ² in (b).


3.5 E[Z] = (1 + ν)μ; Var[Z] = νσ² + μ²τ².

4.1 E[X1] = E[X2] = ∫_0^1 p dp = 1/2.
E[X1X2] = ∫_0^1 p² dp = 1/3.
Cov[X1, X2] = E[X1X2] − E[X1]E[X2] = 1/12.

4.2 Pr{X = i, Y = j} = Pr{X = i, N = i + j} = Pr{X = i | N = i + j}Pr{N = i + j}
= C(i + j, i) p^i (1 − p)^j λ^{i+j} e^{−λ}/(i + j)!
= {(λp)^i e^{−λp}/i!}{[λ(1 − p)]^j e^{−λ(1−p)}/j!}, i, j ≥ 0.
Pr{X = i} = ∑_{j=0}^{∞} Pr{X = i, Y = j} = (λp)^i e^{−λp}/i!,
Pr{Y = j} = [λ(1 − p)]^j e^{−λ(1−p)}/j!,
and Pr{X = i, Y = j} = Pr{X = i}Pr{Y = j} for all i, j.

4.3 (a) Pr{X = i} = ∫_0^∞ [λ^i e^{−λ}/i!] θe^{−θλ} dλ = (θ/i!)∫_0^∞ λ^i e^{−(1+θ)λ} dλ = [θ/(1 + θ)][1/(1 + θ)]^i, i = 0, 1, … (geometric distribution).
(b) f(λ | X = k) = (1 + θ)^{k+1} λ^k e^{−(1+θ)λ}/k! (gamma).

4.4 Pr{X = i, Y = j} = [θ/(i! j!)] ∫_0^∞ λ^{i+j} e^{−(2+θ)λ} dλ
= C(i + j, i)[θ/(2 + θ)][1/(2 + θ)]^{i+j}, i, j ≥ 0.
Pr{X = i, N = n} = C(n, i)[θ/(2 + θ)][1/(2 + θ)]^n, 0 ≤ i ≤ n.
Pr{N = n} = [θ/(2 + θ)][1/(2 + θ)]^n ∑_{i=0}^{n} C(n, i) = [θ/(2 + θ)][2/(2 + θ)]^n, n ≥ 0.
Pr{X = i | N = n} = C(n, i)(1/2)^n, 0 ≤ i ≤ n.

4.5 Pr{X = 0, Y = 1} = 0 ≠ Pr{X = 0}Pr{Y = 1}, so X and Y are NOT independent.
E[X] = E[Y] = 0; Cov[X, Y] = E[XY] = 1/9 + 1/9 − 2/9 = 0.

4.6 Pr{N > n} = Pr{X0 is the largest among {X0, …, Xn}} = 1/(n + 1), since each random variable has the same chance to be the largest. Or, one can integrate:
Pr{N > n} = ∫_0^∞ [1 − F(x)]^n f(x) dx = −∫_0^∞ [1 − F(x)]^n d[1 − F(x)] = ∫_0^1 y^n dy = 1/(n + 1).
Pr{N = n} = Pr{N > n − 1} − Pr{N > n} = 1/n − 1/(n + 1) = 1/[n(n + 1)].
E[N] = ∑_{n=0}^{∞} Pr{N > n} = ∑_{n=0}^{∞} 1/(n + 1) = ∞.


4.7 fX,Z(x, z) = α²e^{−αz} for 0 ≤ x ≤ z.
fX|Z(x | z) = fX,Z(x, z)/fZ(z) = α²e^{−αz}/(α²ze^{−αz}) = 1/z, 0 < x < z.
X is, conditional on Z = z, uniformly distributed over the interval [0, z].

4.8 The key algebraic step in simplifying the joint density divided by the marginal is
Q(x, y) − [(y − μY)/σY]²
= [1/(1 − ρ²)]{[(x − μX)/σX]² − 2ρ[(x − μX)/σX][(y − μY)/σY] + ρ²[(y − μY)/σY]²}
= [1/(1 − ρ²)]{(x − μX)/σX − ρ(y − μY)/σY}²
= [1/((1 − ρ²)σX²)]{x − μX − ρ(σX/σY)(y − μY)}².

5.1 Following the suggestion,
E[Xn+2 | X0, …, Xn] = E[E{Xn+2 | X0, …, Xn+1} | X0, …, Xn] = E[Xn+1 | X0, …, Xn] = Xn.

5.2 E[Xn+1 | X0, …, Xn] = E[2Un+1Xn | Xn] = XnE[2Un+1] = Xn.

5.3 E[Xn+1 | X0, …, Xn] = E[2e^{−εn+1}Xn | Xn] = XnE[2e^{−εn+1}] = Xn.

5.4 E[Xn+1 | X0, …, Xn] = E[Xnξn+1/ρ | Xn] = XnE[ξn+1/ρ] = Xn, while lim_{n→∞} Xn = 0.

5.5 (b) Pr{Xn ≥ N for some n ≥ 0 | X0 = i} ≤ E[X0]/N = i/N. (In fact, equality holds. See III, 5.3.)


Chapter 3

1.1 P5,5 = 1; Pk,k+1 = α C(k, 1)C(5 − k, 1)/C(5, 2), k = 1, 2, 3, 4; Pij = 0 otherwise.

1.2 (a) Pr{X0 = 0, X1 = 0, X2 = 0} = p0P00² = 1·(1 − α)² = (1 − α)².
(b) Pr{X0 = 0, X2 = 0} = P00P00 + P01P10 = (1 − α)² + α².

1.3 Pr{X2 = G, X3 = G, X4 = G, X5 = D | X1 = G} = PGG³PGD = α³(1 − α).

1.4
P =
      0    1    2    3
  0  .1   .3   .2   .4
  1   0   .4   .2   .4
  2   0    0   .6   .4
  3   0    0    0    1
2.1 Observe that the columns of P sum to one. Then p_i^(0) = 1/4 for i = 0, 1, 2, 3, and by induction p_i^(n+1) = ∑_{k=0}^{3} p_k^(n)Pki = (1/4)∑_k Pki = 1/4.

2.2 P00^(5) = (1 − α)⁵ + 10(1 − α)³α² + 5(1 − α)α⁴ = (1/2)[1 + (1 − 2α)⁵].
2.3 P11^(3) = .684772.

2.4
         (0,0)  (0,1)  (1,0)  (1,1)
  (0,0)    α    1−α      0      0
  (0,1)    0      0    1−β      β
  (1,0)    α    1−α      0      0
  (1,1)    0      0    1−β      β

2.5 Pr{X3 = 0 | X0 = 0, T > 3} = P00^(3)/(P00^(3) + P01^(3)) = .6652.

3.1
P =
       0     1      2     3
  0    0     0      0     1
  1  1/15  14/15    0     0
  2    0   8/15   7/15    0
  3    0     0    3/5    2/5



3.2
P =
      0    1    2    3
  0   0    0    0    1
  1   0    0   1/2  1/2
  2   0   1/4  1/2  1/4
  3  1/8  3/8  3/8  1/8

3.3 (a)
P =
      0   1   2
  0  .1  .4  .5
  1  .5  .5   0
  2  .1  .4  .5

(b) Long run lost sales per period = (.1)π1 = .0444 ….

3.4 Pr{Customer completes service | In service} = Pr{Z = k | Z ≥ k} = α.
P01 = β, P00 = 1 − β, Pi,i+1 = β(1 − α), Pi,i−1 = (1 − β)α, Pii = (1 − β)(1 − α) + αβ, i ≥ 1.

3.5
P =
        0    H   HH   HHT
  0    1/2  1/2   0    0
  H    1/2   0   1/2   0
  HH    0    0   1/2  1/2
  HHT   0    0    0    1

3.6 P(a,b),(a,b+1) = 1 − p, a ≤ 3, b ≤ 2;
P(a,b),(a+1,b) = p, a ≤ 2, b ≤ 3;
P(3,b),A wins = p, b ≤ 3;
P(a,3),B wins = 1 − p, a ≤ 3;
PA wins,A wins = PB wins,B wins = 1.

3.7 P0,k = αk for k = 1, 2, …
P0,0 = 0
Pk,k−1 = 1, k ≥ 1.

3.8 Pk,k−1 = q(k/N), Pk,k+1 = p[(N − k)/N], Pk,k = p(k/N) + q[(N − k)/N].


3.9 Pk,k−1 = Pk,k+1 = (k/N)[(N − k)/N], Pk,k = (k/N)² + [(N − k)/N]².

3.10 (a)
ξ  =  2   3   4   0   2   1   2   2
n  =  1   2   3   4   5   6   7   8
Xn =  2  −1   0   4   2   1   2   0
(b) P41 = Pr{ξ = 3} = .2; P04 = Pr{ξ = 0} = .1.

4.1 With states 0, 1 = H, 2 = HH, 3 = HHT:
P =
      0    1    2    3
  0  1/2  1/2   0    0
  1  1/2   0   1/2   0
  2   0    0   1/2  1/2
  3   0    0    0    1

v0 = 1 + (1/2)v0 + (1/2)v1
v1 = 1 + (1/2)v0 + (1/2)v2
v2 = 1 + (1/2)v2
giving v0 = 8, v1 = 6, v2 = 2.

With states 0, 1 = H, 2 = HT, 3 = HTH:
P =
      0    1    2    3
  0  1/2  1/2   0    0
  1   0   1/2  1/2   0
  2  1/2   0    0   1/2
  3   0    0    0    1

v0 = 1 + (1/2)v0 + (1/2)v1
v1 = 1 + (1/2)v1 + (1/2)v2
v2 = 1 + (1/2)v0
giving v0 = 10, v1 = 8, v2 = 6.


4.2 v1 = 1
v2 = 1 + (1/2)v1 = 1 + 1/2
v3 = 1 + (1/3)v1 + (1/3)v2 = 1 + (1/3) + (1/3)(3/2) = 1 + 1/2 + 1/3
⋮
vm = 1 + (1/m)v1 + (1/m)v2 + ··· + (1/m)vm−1.
Solve to get vm = 1 + 1/2 + 1/3 + ··· + 1/m ∼ log m.
Note: wjj = 1 and wij = 1/( j + 1) for i > j.
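The recursion and its harmonic-sum solution can be checked exactly (a quick illustration, not part of the manual):

```python
from fractions import Fraction

# Build v_m from the recursion v_m = 1 + (1/m)(v_1 + ... + v_{m-1}).
v = {1: Fraction(1)}
for m in range(2, 11):
    v[m] = 1 + sum(v[j] for j in range(1, m)) / Fraction(m)

# The claimed solution: v_m equals the m-th harmonic number.
harmonic = {m: sum(Fraction(1, j) for j in range(1, m + 1)) for m in v}
```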

4.3 We will verify that vm = 2[(m + 1)/m](1 + 1/2 + ··· + 1/m) − 3 solves vm = 1 + ∑_{j=1}^{m−1} (2j/m²)vj. Change variables to Vk = kvk. To show: Vk = 2(k + 1)(1 + 1/2 + ··· + 1/k) − 3k solves Vm = m + (2/m)(V1 + ··· + Vm−1). Using the given Vk, use sums of numbers and interchange the order of summation as follows:
∑_{k=1}^{m−1} Vk = ∑_{k=1}^{m−1} ∑_{l=1}^{k} 2(k + 1)(1/l) − 3∑_{k=1}^{m−1} k
= ∑_{l=1}^{m−1} (2/l) ∑_{k=l}^{m−1} (k + 1) − 3m(m − 1)/2
= ∑_{l=1}^{m−1} (2/l)[m(m + 1)/2 − l(l + 1)/2] − 3m(m − 1)/2.
Then
m + (2/m)∑_{k=1}^{m−1} Vk = 2(m + 1)∑_{l=1}^{m−1} (1/l) − 3m + 2(m + 1)/m = Vm,
as was to be shown. As in Problem 4.2, vm ∼ log m.

4.4 Change the matrix to
      0   1   2   3
  0   1   0   0   0
  1  .1  .2  .5  .2
  2   0   0   1   0
  3  .2  .2  .3  .3
and calculate the probability that absorption takes place in state 0:
u10 = .1 + .2u10 + .2u30
u30 = .2 + .2u10 + .3u30
giving u10 = .2115, u30 = .3462.
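Solving the two first-step equations exactly with rational arithmetic (a check of the decimals .2115 and .3462):

```python
from fractions import Fraction

F = Fraction
# u10 = 1/10 + (1/5)u10 + (1/5)u30
# u30 = 1/5  + (1/5)u10 + (3/10)u30
# Eliminate u30 from the second equation, then back-substitute.
u30_const = F(1, 5) / (1 - F(3, 10))   # u30 = 2/7 + (2/7) u10
u30_coeff = F(1, 5) / (1 - F(3, 10))
u10 = (F(1, 10) + F(1, 5) * u30_const) / (1 - F(1, 5) - F(1, 5) * u30_coeff)
u30 = u30_const + u30_coeff * u10
```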


4.5
       1    2    3    4    5    6    7
  1    0   1/2   0   1/2   0    0    0
  2   1/3   0   1/3   0   1/3   0    0
  3    0    0    1    0    0    0    0
  4   1/3   0    0    0   1/3   0   1/3
  5    0   1/3   0   1/3   0   1/3   0
  6    0    0   1/2   0   1/2   0    0
  7    0    0    0    0    0    0    1

u13 = 7/12,  u23 = 3/4,  u43 = 5/12,  u53 = 2/3,  u63 = 5/6.

4.6 v0 = 1 + qv0 + pv1
v1 = 1 + qv0 + pv2        (q = 1 − p)
v2 = 1 + qv0 + pv3
v3 = 1 + qv0
gives v0 = (1 + qv0)(1 + p + p² + p³) = (1 + p + p² + p³) + v0(1 − p⁴),
so v0 = (1 + p + p² + p³)/[1 − (1 − p⁴)] = (1 − p⁴)/[p⁴(1 − p)].

4.7 The stationary Markov transitions imply that E[∑_{n=1}^{∞} β^n c(Xn) | X0 = i, X1 = j] = βhj, while E[β⁰c(X0) | X0 = i] = c(i). Now use the law of total probability.

4.8
      0    1    2    3    4    5
  0   1    0    0    0    0    0
  1  1/4  3/4   0    0    0    0
  2   0   2/5  3/5   0    0    0
  3   0    0   1/2  1/2   0    0
  4   0    0    0   4/7  3/7   0
  5   0    0    0    0   5/8  3/8

Xn = # of red balls in use at the nth draw.
v5 = 4 + 5/2 + 2 + 7/4 + 8/5 = 11.85.


4.9
      0    1    2    3    4    5
  0   1    0    0    0    0    0
  1  1/8  7/8   0    0    0    0
  2   0   2/8  6/8   0    0    0
  3   0    0   3/8  5/8   0    0
  4   0    0    0   4/8  4/8   0
  5   0    0    0    0   5/8  3/8

Xn = # of red balls in use.
v5 = 8/1 + 8/2 + 8/3 + 8/4 + 8/5 = 18 4/15 = 18.266 ….

4.10 Let Xn be the number of coins that fall tails in the nth toss, X0 = 5. Make “1” an absorbing state.
P =
      0     1     2      3      4     5
  0   1     0     0      0      0     0
  1   0     1     0      0      0     0
  2  1/4   1/2   1/4     0      0     0
  3  1/8   3/8   3/8    1/8     0     0
  4  1/16  4/16  6/16   4/16  1/16    0
  5  1/32  5/32  10/32  10/32  5/32  1/32

U51 = .7235023041 …
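The absorption probability U51 can be computed exactly from the first-step equations u_k = ∑_j P_{kj} u_j, with u1 = 1 and u0 = 0, working up from k = 2 (a check, not part of the text):

```python
from fractions import Fraction

F = Fraction
u = {0: F(0), 1: F(1)}
# Transition rows for the transient states 2..5 (binomial(k, 1/2) counts of tails):
rows = {
    2: {0: F(1, 4), 1: F(1, 2), 2: F(1, 4)},
    3: {0: F(1, 8), 1: F(3, 8), 2: F(3, 8), 3: F(1, 8)},
    4: {0: F(1, 16), 1: F(4, 16), 2: F(6, 16), 3: F(4, 16), 4: F(1, 16)},
    5: {0: F(1, 32), 1: F(5, 32), 2: F(10, 32), 3: F(10, 32), 4: F(5, 32), 5: F(1, 32)},
}
for k in (2, 3, 4, 5):
    row = rows[k]
    known = sum(p * u[j] for j, p in row.items() if j != k)
    u[k] = known / (1 - row[k])   # solve u_k = known + P_kk * u_k
```

Each u_k depends only on lower-indexed states, so a single forward pass suffices.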

4.11 Label the states (x, y), where x = # of red balls and y = # of green balls; “win” = (1,0), “lose” = {(0,2), (2,0)}.

          (2,2)  (1,2)  (2,1)  (1,1)  win  lose
  (2,2)     0     1/2    1/2     0     0     0
  (1,2)     0      0      0     2/3    0    1/3
  (2,1)     0      0      0     2/3    0    1/3
  (1,1)     0      0      0      0    1/2   1/2
  win       0      0      0      0     1     0
  lose      0      0      0      0     0     1

U(2,2),win = 1/3.


4.12
        0    1   win  lose
  0    .3   .2    0    .5
  1    .5   .1   .4     0
  win   0    0    1     0
  lose  0    0    0     1

z0 = U0,win = 8/53.

4.13 Use the matrix
       0    1    2    0′   1′   2′
  0    0    0    0   .3   .2   .5
  1    0    0    0   .5   .1   .4
  2    0    0    1    0    0    0
  0′  .3   .2   .5    0    0    0
  1′  .5   .1   .4    0    0    0
  2′   0    0    0    0    0    1

U0,2′ = .67669 ….

4.14 The long but sure way to solve the problem is to set up the 36 × 36 transition matrix for the Markov chain Zn = (Xn−1, Xn), making states (x, y) where x + y = 5 or x + y = 7 absorbing. However, all that is important prior to ending the game is the last roll, and by aggregating states where possible, we get the Markov transition matrix

           1 or 2  3 or 4  5 or 6   5    7
  1 or 2    1/3     1/6     1/6    1/6  1/6
  3 or 4    1/6     1/6     1/3    1/6  1/6
  5 or 6    1/6     1/3     1/3     0   1/6
  5          0       0       0      1    0
  7          0       0       0      0    1

U1,5 = 22/51,  U3,5 = 21/51,  U5,5 = 16/51
Pr{5 before 7} = (1/3)(U1,5 + U3,5 + U5,5) = 59/153.
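Writing a = U for last roll 1 or 2, b = U for last roll 3 or 4, and c = U for last roll 5 or 6, the first-step equations form a 3 × 3 linear system, which can be solved exactly (a check of 59/153; illustrative only):

```python
from fractions import Fraction

F = Fraction
# a = 1/6 + (1/3)a + (1/6)b + (1/6)c
# b = 1/6 + (1/6)a + (1/6)b + (1/3)c
# c =       (1/6)a + (1/3)b + (1/3)c
# Rearranged as M x = r:
M = [[F(2, 3), F(-1, 6), F(-1, 6)],
     [F(-1, 6), F(5, 6), F(-1, 3)],
     [F(-1, 6), F(-1, 3), F(2, 3)]]
r = [F(1, 6), F(1, 6), F(0)]

# Gaussian elimination over the rationals (no pivoting needed here):
n = 3
for i in range(n):
    for k in range(i + 1, n):
        f = M[k][i] / M[i][i]
        M[k] = [mk - f * mi for mk, mi in zip(M[k], M[i])]
        r[k] = r[k] - f * r[i]
x = [F(0)] * n
for i in reversed(range(n)):
    x[i] = (r[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]

a, b, c = x
prob_5_before_7 = (a + b + c) / 3
```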

4.15
      1    2    3    4    5
  1  .96  .04   0    0    0
  2   0   .94  .06   0    0
  3   0    0   .94  .06   0
  4   0    0    0   .96  .04
  5   0    0    0    0    1

v1 = 133 1/3.


4.16
      0   1   2   3   4   5
  0   1   0   0   0   0   0
  1  .2   0  .8   0   0   0
  2   0  .4   0  .6   0   0
  3   0   0  .6   0  .4   0
  4   0   0   0  .8   0  .2
  5   0   0   0   0   0   1

U35 = 17/32.

4.17 Let ϕi(s) = E[s^T | X0 = i] for i = 0, 1. Then
ϕ0(s) = s[.7ϕ0(s) + .3ϕ1(s)]
ϕ1(s) = s[.6ϕ1(s) + .4],
which solves to give
ϕ1(s) = .4s/(1 − .6s),  ϕ0(s) = [.4s/(1 − .6s)][.3s/(1 − .7s)].

4.18 The transition probabilities are: if Xn = w, then Xn+1 = w w. pr. h/(w + h) (and h → h − 1), or Xn+1 = w − 1 w. pr. w/(w + h) (and h → h + 1), where h = 2N − n − 2w is the number of half cigars in the box.
The recursion to verify is
vn(w) = [h/(w + h)]vn+1(w) + [w/(w + h)]vn+1(w − 1).
This is straightforward, and easiest if one first does the algebra to verify
vn+1(w) = vn(w) + 1/(1 + w),  vn+1(w − 1) = vn(w) + (n + 2w − 2N)/[w(1 + w)].

4.19 PN = (N/2) ∑_{n=0}^{∞} (1/2)^n [1 − (1/2)^n]^{N−1}.
As a function of N, this does NOT converge but oscillates very slowly (cycles ∝ log N) and very slightly about the (Cesaro limit) 1/(2 log 2).

5.1 v0 = 1 + a0v0 + a1v1 + a2v2
v1 = 1 + (a0 + a1)v1 + a2v2
v2 = 1 + (a0 + a1 + a2)v2
v2 = 1/[1 − (a0 + a1 + a2)] = 1/a3 = v0 = v1.


5.2 (a) p0 = a1, q0 = 1 − a1 = (a2 + a3 + ···); pk = ak+1/(ak+1 + ak+2 + ···) for k ≥ 1.
(b) p0 = a1, q0 = 1 − a1; pk = ak+1/(ak+1 + ak+2 + ···), qk = 1 − pk for 1 ≤ k < N − 1; pN−1 = 1.
The above assumes instantaneous replacement.

5.3
P =
       0    1    2
  0    α   1−α   0
  1    0    α   1−α
  2   1−α   0    α

5.4 After reviewing (5.8), set up the matrix

        0    1    2    3    4    5    6    7    8    9   10   13  ~13
   0    0   1/6  1/6  1/6  1/6  1/6  1/6   0    0    0    0    0    0
   1    0    0   1/6  1/6  1/6  1/6  1/6  1/6   0    0    0    0    0
   2    0    0    0   1/6  1/6  1/6  1/6  1/6  1/6   0    0    0    0
   3    0    0    0    0   1/6  1/6  1/6  1/6  1/6  1/6   0    0    0
   4    0    0    0    0    0   1/6  1/6  1/6  1/6  1/6  1/6   0    0
   5    0    0    0    0    0    0   1/6  1/6  1/6  1/6  1/6   0   1/6
   6    0    0    0    0    0    0    0   1/6  1/6  1/6  1/6   0   2/6
   7    0    0    0    0    0    0    0    0   1/6  1/6  1/6  1/6  2/6
   8    0    0    0    0    0    0    0    0    0   1/6  1/6  1/6  3/6
   9    0    0    0    0    0    0    0    0    0    0   1/6  1/6  4/6
  10    0    0    0    0    0    0    0    0    0    0    0   1/6  5/6
  13    0    0    0    0    0    0    0    0    0    0    0    1    0
 ~13    0    0    0    0    0    0    0    0    0    0    0    0    1

U0,13 = .1819.

5.5 If Xn = 0, then Xn+1 = 0 and E[Xn+1 | Xn = 0] = 0 = Xn. If Xn = i > 0, then Xn+1 = i ± 1, each w. pr. 1/2, so E[Xn+1 | Xn = i] = i = Xn. So {Xn} is a martingale. If X0 = i, then Pr{max Xn ≥ N} ≤ E[X0]/N = i/N by the maximal inequality. Of course, (5.13) asserts Pr{max Xn ≥ N} = i/N.


6.1 v1 = 1 + .7v2
v2 = 1 + .1v1
giving v1 = 170/93 = 1.827957.
n1 = 3/7, n2 = 1/21, δ1 = 10/7, δ2 = 80/63;
v1 = (δ1 + δ2)/(1 + n1 + n2) = 1.827957.

6.2 v0 = 1 + αv0 + βv2
v1 = 1 + αv0
v2 = 1 + αv0 + βv1
v0 = (1 + β + β²)/β³.

6.3 For three urns, v(a, b, c) = E[T] = 3abc/(a + b + c). The answer is unknown for four urns.

7.1 Observe that each state is visited at most a single time, so that Wij = Pr{Ever visit j | X0 = i}. Clearly, then, Wi,i = 1 and Wi,j = 0 for j > i.
Wi,i−1 = Pi,i−1 = 1/i
Wi,i−2 = Pi,i−2 + Pi,i−1Wi−1,i−2 = 1/i + (1/i)[1/(i − 1)] = 1/(i − 1).
Wi,i−3 = Pi,i−3 + Pi,i−2Wi−2,i−3 + Pi,i−1Wi−1,i−3 = (1/i)[1 + 1/(i − 2) + 1/(i − 2)] = 1/(i − 2).
Continuing in this manner, we deduce that Wi,j = 1/( j + 1) for 1 ≤ j < i.

7.2 Observe that each state is visited at most a single time, so that Wij = Pr{Ever visit j | X0 = i}. Clearly, then, Wi,i = 1 and Wi,j = 0 for j > i. We claim that
Wk,j = 2[(k + 1)/k] · j/[( j + 1)( j + 2)] for 1 ≤ j < k
satisfies the first step equation Wk,j = Pk,j + ∑_{l=j+1}^{k−1} Pk,lWl,j.
Evaluating the right side with the given Pk,l and Wl,j, we get
R.H.S. = 2j/k² + ∑_{l=j+1}^{k−1} (2l/k²) · 2[(l + 1)/l] · j/[( j + 1)( j + 2)]
= 2j/k² + (2j/k²) · 2/[( j + 1)( j + 2)] · ∑_{l=j+1}^{k−1} (l + 1)
= 2j/k² + (2j/k²) · 2/[( j + 1)( j + 2)] · [k(k + 1)/2 − ( j + 1)( j + 2)/2]
= 2[(k + 1)/k] · j/[( j + 1)( j + 2)] = Wk,j, as claimed.

7.3 Observe that for j transient and k absorbing, the event {Xn−1 = j, Xn = k} is the same as the event {T = n, XT−1 = j, XT = k}, whence
Pr{XT−1 = j, XT = k | X0 = i} = ∑_{n=1}^{∞} Pr{Xn−1 = j, Xn = k | X0 = i} = ∑_{n=1}^{∞} P^{(n−1)}_{i,j}Pjk = Wi,jPjk.


7.4 (a) Wi,j = 1/( j + 1) for 1 ≤ j < i. (See the solution to 7.1.) Using 7.3, then,
(b) Pr{XT−1 = j | X0 = i} = Wi,jPj,0 = 1/[( j + 1) j], 1 ≤ j < i, and Pr{XT−1 = i | X0 = i} = Pi,0 = 1/i.

7.5 vk = (π/2)BN,k[(N + 1)² − (N − 2k)²], where
BN,k = 2^{−2N} C(2(N − k), N − k) C(2k, k).

8.1 No adults survive if a local catastrophe occurs, which has probability 1−β, and if, independently, all N dispersed offspring fail to survive, which has probability (1−α)^N. This is another example in which looking at mean values alone is misleading.

8.2 A first step analysis yields E[Z] = 1 + μE[Z], whence E[Z] = 1/(1−μ) for μ < 1.

8.3 Possible families: GG, GB, BG, BBG, BBBG, ...
Probabilities: 1/4, 1/4, 1/4, 1/8, 1/16, ...

Let N = total children and X = male children.
(a) Pr{N = 2} = 3/4, Pr{N = k} = (1/2)^k for k ≥ 3.
(b) Pr{X = 1} = 1/2, Pr{X = 0} = 1/4, Pr{X = k} = (1/2)^(k+1) for k ≥ 2.

8.4 E[Zn+1 | Z0, ..., Zn] = (1/μ^(n+1)) E[ξ1 + ··· + ξXn | Xn = μ^n Zn] = (1/μ^(n+1)) μXn = Zn. Suppose Z0 = X0 = 1. By the maximal inequality for nonnegative martingales, Pr{Xn is ever greater than kμ^n} ≤ 1/k.

9.1 Let ξ = # of male children.

k:          0      1      2      3
Pr{ξ = k}: 11/32   9/32   9/32   3/32

φ(s) = 11/32 + (9/32)s + (9/32)s² + (3/32)s³.
u = φ(u) has smallest solution u∞ = .76887.
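The extinction probability in 9.1 can be checked by iterating u ← φ(u) starting from u = 0, which converges to the smallest fixed point in [0, 1] (a sketch; the function name `phi` is ours):

```python
def phi(s):
    # Offspring p.g.f. from 9.1
    return 11/32 + (9/32)*s + (9/32)*s**2 + (3/32)*s**3

u = 0.0
for _ in range(200):   # monotone convergence to the smallest root of u = phi(u)
    u = phi(u)
print(round(u, 5))  # 0.76887
```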

9.2 Let ξ = # of male children.

k:          0      1                    2      3
Pr{ξ = k}: 1/32   27/32 (= 3/32 + 3/4)  3/32   1/32

φ(s) = 1/32 + (27/32)s + (3/32)s² + (1/32)s³.
u = φ(u) has smallest solution u∞ = .23607.

9.3 Following the hint,

Pr{X = k} = ∫ π(k|λ) f(λ) dλ = [Γ(k+α)/(k! Γ(α))] (θ/(1+θ))^α (1/(1+θ))^k, k = 0, 1, ....

9.4 φ(1) = Σ_{k=0}^∞ pk = 1, and p0 = φ(0) = 1 − p. Referring to Equations I, (6.18) and (6.20),

φ(s) = 1 − p(1−s)^β = 1 − p Σ_{k=0}^∞ C(β, k)(−s)^k,

so

pk = −p C(β, k)(−1)^k = p β(1−β)···(k−1−β)/k! ≥ 0, because 0 < β < 1.

9.5 (a) n:           1     2             3                    4
Pr{All red}: 1/4   (1/4)(1/4)²   (1/4)(1/4)²(1/4)⁴   (1/4)(1/4)²(1/4)⁴(1/4)⁸

Pr{All red at generation n} = (1/4)^(2^n − 1).

(b) Pr{Culture dies out} = Pr{Red cells die out}. φ(s) = 1/12 + (2/3)s + (1/4)s². The smallest solution to u = φ(u) is u∞ = 1/3.

9.6 s = φ(s) is the quadratic 0 = as² − (a+c)s + c, whose smallest solution is u∞ = c/a < 1 if c < a.

9.7 Family:      GG   GB   BG   BBG  BBBG ...
Probability:  1/4  1/4  1/4  1/8  1/16 ...

Let X = # of male children.
Pr{X = 0} = 1/4, Pr{X = 1} = 1/2, Pr{X = k} = (1/2)^(k+1) for k ≥ 2.

φ(s) = 1/4 + (1/2)s + Σ_{k=2}^∞ (1/2)^(k+1) s^k = 1/4 + (1/2)s + (1/4) s²/(2−s).

φ′(s) = 1/2 + (1/4)[2s(2−s) + s²]/(2−s)², so E[X] = φ′(1) = 5/4.

Alternatively, Pr{X > 0} = 3/4 and Pr{X > k} = (1/2)^(k+1) for k ≥ 1, so E[X] = Σ_{k=0}^∞ Pr{X > k} = 5/4.

9.8 φ(s) = (1−c) Σ_{k=0}^∞ (cs)^k = (1−c)/(1−cs).

s = φ(s) is a quadratic whose smallest solution is u∞ = (1−c)/c, provided c > 1/2.

9.9 (a) Let X = # of male children.

Pr{X = 0} = 1/4 + (3/4)(1/2) = 5/8
Pr{X = k} = (3/4)(1/2)^(k+1) for k ≥ 1.

(b) φ(s) = 5/8 + (3/8) Σ_{k=1}^∞ (s/2)^k = 5/8 + (3/8) s/(2−s).

u0 = 0, un = φ(un−1), u5 = .9414.
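The five-step iterate quoted in 9.9(b) can be reproduced directly (a sketch; the function name `phi` is ours):

```python
def phi(s):
    # p.g.f. from 9.9(b)
    return 5/8 + (3/8) * s / (2 - s)

u = 0.0
for _ in range(5):     # u0 = 0, u_n = phi(u_{n-1})
    u = phi(u)
print(round(u, 4))  # 0.9414
```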

9.10 (a) u∞ is the smallest solution to u = f[g(u)].
(b) [f′(1)g′(1)]^(n/2) when n = 0, 2, 4, ...; f′(1)^((n+1)/2) g′(1)^((n−1)/2) when n = 1, 3, 5, ....
(c) Yes, both change (unless f′(1) = g′(1) in (b)).


Chapter 4

1.1 Let Xn be the number of balls in Urn A. Then {Xn} is a doubly stochastic Markov chain having 6 states, whence lim_{n→∞} Pr{Xn = 0 | X0 = i} = 1/6.

1.2 Let Xn be the number of balls in Urn A. P has rows (states 0, ..., 5):

0: (0, 1, 0, 0, 0, 0)
1: (1/5, 0, 4/5, 0, 0, 0)
2: (0, 2/5, 0, 3/5, 0, 0)
3: (0, 0, 3/5, 0, 2/5, 0)
4: (0, 0, 0, 4/5, 0, 1/5)
5: (0, 0, 0, 0, 1, 0)

π0 = π5 = 1/32; π1 = π4 = 5/32; π2 = π3 = 10/32.
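The binomial weights claimed in 1.2 can be verified to be stationary for this Ehrenfest-type chain (a sketch; the construction of `P` mirrors the matrix above):

```python
from math import comb

# Ehrenfest urn chain on states 0..5 (number of balls in Urn A).
N = 5
P = [[0.0] * (N + 1) for _ in range(N + 1)]
for k in range(N + 1):
    if k > 0:
        P[k][k - 1] = k / N          # one of the k balls in A moves out
    if k < N:
        P[k][k + 1] = (N - k) / N    # one of the N - k balls outside moves in

# Claimed stationary distribution: binomial(5, 1/2), i.e. C(5, k)/32.
pi = [comb(N, k) / 2 ** N for k in range(N + 1)]
piP = [sum(pi[i] * P[i][j] for i in range(N + 1)) for j in range(N + 1)]
print(max(abs(a - b) for a, b in zip(pi, piP)))  # 0.0 up to rounding
```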

1.3 The equations for the stationary distribution simplify to:

π0 = (α1 + α2 + α3 + α4 + α5 + α6)π0   (Σ αi = 1)
π1 = (α2 + α3 + α4 + α5 + α6)π0
π2 = (α3 + α4 + α5 + α6)π0
π3 = (α4 + α5 + α6)π0
π4 = (α5 + α6)π0
π5 = α6 π0

1 = (Σ_{k=1}^6 k αk) π0, so π0 = 1/Σ_{k=1}^6 k αk = 1/(mean of the α distribution).

1.4 Formally, one can look at the Markov chain Zn = (Xn, Xn+1) and ask for lim_{n→∞} Pr{Zn = (k, m)} = lim Pr{Xn = k, Xn+1 = m} = πk Pkm.

An Introduction to Stochastic Modeling, Instructor Solutions Manualc© 2011 Elsevier Inc. All rights reserved.


1.5 P has rows (states A, B, C, D):

A: (0, 1/2, 0, 1/2)
B: (1/3, 0, 1/3, 1/3)
C: (0, 1, 0, 0)
D: (1/2, 1/2, 0, 0)

πA = 2/8; πB = 3/8; πC = 1/8; πD = 2/8.

Note: πNODE ∝ # arcs at NODE.

1.6 lim_{n→∞} Pr{Xn+1 = j | X0 = i} = πj.
lim_{n→∞} Pr{Xn = k, Xn+1 = j | X0 = i} = πk Pkj.

1.7 π0 = 6/19; π1 = 3/19; π2 = 6/19; π3 = 4/19.

1.8 P⁸ has all positive entries.
π0 = .2529, π1 = .2299, π2 = .3103, π3 = .1379, π4 = .0690.

1.9 π0 = π1 = 1/10; π2 = π3 = 4/10.

1.10 The matrix is doubly stochastic, whence πk = 1/(N+1) for k = 0, 1, ..., N.

1.11 (a) π0 = 117/379 = .3087, π3 = 62/379 = .1636.
(b) π2 + π3 = 143/379 = .3773.
(c) π0(P02 + P03) + π1(P12 + P13) = .1559.

1.12 (a) ΠP = PΠ = Π² = Π. Then Q² = (P − Π)(P − Π) = P² − ΠP − PΠ + Π² = P² − Π. Similarly, ΠP^n = P^n Π = Π^(n+1) = Π, and Q^(n+1) = (P − Π)(P^n − Π) = P^(n+1) − Π.

(b) Q^n = (1/2) × matrix with rows ((1/2)^n, 0, −(1/2)^n), (0, 0, 0), (−(1/2)^n, 0, (1/2)^n).

1.13 lim_{n→∞} Pr{Xn−1 = 2 | Xn = 1} = π2 P21/π1 = (6 × .2)/7 = .1714, using π0 = 11/24, π1 = 7/24, π2 = 6/24.

2.1 [Figure: inventory level Xn over five periods, with demands ξ1 = 3, ξ2 = 4, ξ3 = 0, ξ4 = 2 and lost sales indicated.]

(a) X0 = 4, X1 = 1, X2 = 0, X3 = 2, X4 = 2.

(b) P has rows (states 0, ..., 4):
0: (.6, .3, .1, 0, 0)
1: (.3, .3, .3, .1, 0)
2: (.1, .2, .3, .3, .1)
3: (.3, .3, .3, .1, 0)
4: (.1, .2, .3, .3, .1)

(c) π0 + π1 + π2 = .3559 + .2746 + .2288 = .8593.

2.2 The state is (x, y), where x = # machines operating and y = # days of repair completed.

(a) P has rows (states (2,0), (1,0), (1,1), (0,0), (0,1)):
(2,0): ((1−α)², 2α(1−α), 0, α², 0)
(1,0): (0, 0, 1−β, 0, β)
(1,1): (1−β, β, 0, 0, 0)
(0,0): (0, 0, 0, 0, 1)
(0,1): (0, 1, 0, 0, 0)

(b) π(2,0) + π(1,0) + π(1,1) = .6197 + .1840 + .1472 = .9508.

2.3 (a) π0 = .2549; π1 = .2353; π2 = .3529; π3 = .1569

(b) π2+π3 = .5098

(c) π0 (P02+P03)+π1 (P12+P13)= .2235.

2.4 P has rows (states 0, ..., 3):
0: (.1, .3, .2, .4)
1: (1, 0, 0, 0)
2: (0, 1, 0, 0)
3: (0, 0, 1, 0)

E[ξ] = 2.9, and π0 = 10/29 = 1/E[ξ]; π1 = 9/29; π2 = 6/29; π3 = 4/29.
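The stationary distribution claimed in 2.4 can be recovered numerically by power iteration, since the chain is aperiodic (state 0 returns to itself in one step). A sketch:

```python
# Chain from 2.4: from state 0 move to 0,1,2,3 with prob .1,.3,.2,.4;
# from state k >= 1 move deterministically down to k-1.
P = [[.1, .3, .2, .4],
     [1.0, 0.0, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0]]

pi = [0.25] * 4
for _ in range(500):                     # power iteration: pi <- pi P
    pi = [sum(pi[i] * P[i][j] for i in range(4)) for j in range(4)]
print([round(x, 4) for x in pi])  # [0.3448, 0.3103, 0.2069, 0.1379]
```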

2.5 (a) P^(4)(s,s),(s,s) + P^(4)(s,s),(c,s) = .3421 + .1368 = .4789.
(b) π(s,s) + π(c,s) = .25 + .15 = .40.

2.6 To establish the Markov property and to get the transition probability matrix, use Pr{N = n | N ≥ n} = β. Then π1 = β/(p + β).

2.7 P00 = p; P0,1 = q = 1 − p; Pi,i+1 = αq; Pi,i = αp + βq; Pi,i−1 = βp for i ≥ 1.

2.8 π(1,0) = 1/(1 + 2p).

p:       .01     .02     .05     .10
π(1,0): .9804   .9615   .9091   .8333


3.1 (a) f^(0)00 = 0, f^(1)00 = 1 − a, f^(k)00 = ab(1−b)^(k−2) for k ≥ 2.

(b) We are asked to show

P^(n)00 = Σ_{k=0}^n f^(k)00 P^(n−k)00 for n ≥ 1, where P^(n)00 = b/(a+b) + (a/(a+b))(1−a−b)^n.

Some preliminary calculations:

(i) (b/(a+b)) Σ_{k=2}^n f^(k)00 = (b/(a+b)) Σ_{k=2}^n ab(1−b)^(k−2) = (ab/(a+b))[1 − (1−b)^(n−1)]

(ii) (a/(a+b)) Σ_{k=2}^n f^(k)00 (1−a−b)^(n−k) = (ab/(a+b))[(1−b)^(n−1) − (1−a−b)^(n−1)]

Then

Σ_{k=0}^n f^(k)00 P^(n−k)00 = f^(1)00 P^(n−1)00 + Σ_{k=2}^n f^(k)00 P^(n−k)00
= (1−a)[b/(a+b) + (a/(a+b))(1−a−b)^(n−1)] + (ab/(a+b))[1 − (1−a−b)^(n−1)]
= b/(a+b) + [(1−a)a/(a+b) − ab/(a+b)](1−a−b)^(n−1)
= b/(a+b) + (a/(a+b))(1−a−b)^n = P^(n)00,

as was to be shown.

3.2 For a finite-state aperiodic irreducible Markov chain, P^(n)ij → πj > 0 as n → ∞ for all i, j. Thus for each i, j there exists Nij such that P^(n)ij > 0 for all n > Nij. Because there are only a finite number of states, N = max_{i,j} Nij is finite, and for n > N we have P^(n)ij > 0 for all i, j.

3.3 We first evaluate

n:        1    2     3     4     5      6
P^(n)00:  0   1/4   1/8   3/8   7/32   17/64

Because P^(0)00 = 1, (3.2) may be rewritten

f^(n)00 = P^(n)00 − Σ_{k=1}^{n−1} f^(k)00 P^(n−k)00.

Finally,

f^(1)00 = P^(1)00 = 0
f^(2)00 = P^(2)00 − f^(1)00 P^(1)00 = 1/4
f^(3)00 = 1/8
f^(4)00 = 3/8 − (1/4)(1/4) = 5/16
f^(5)00 = 7/32 − (1/8)(1/4) − (1/4)(1/8) = 5/32
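The first-return recursion of 3.3 is easy to mechanize with exact fractions; the sketch below reproduces the values above from the tabulated P^(n)00:

```python
from fractions import Fraction as F

# P^(n)_00 for n = 0..6 from the table in 3.3 (P^(0)_00 = 1).
P00 = [F(1), F(0), F(1, 4), F(1, 8), F(3, 8), F(7, 32), F(17, 64)]

# First-return probabilities: f(n) = P(n) - sum_{k=1}^{n-1} f(k) P(n-k).
f = [F(0)] * 7
for n in range(1, 7):
    f[n] = P00[n] - sum(f[k] * P00[n - k] for k in range(1, n))
print(f[1:6])  # f(1)=0, f(2)=1/4, f(3)=1/8, f(4)=5/16, f(5)=5/32
```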


4.1 (a) π0 + π1 = 1 and (β, α)P = (β, α).
(b) A first return to 0 at time n entails leaving 0 on the first step, staying in 1 for n − 2 transitions, and then returning to 0, whence f^(n)00 = αβ(1−β)^(n−2) for n ≥ 2.
(c) m0 = Σ_{n=1}^∞ n f^(n)00 = Σ_{n=1}^∞ Σ_{k=1}^n f^(n)00 = Σ_{k=1}^∞ Σ_{n=k}^∞ f^(n)00
= 1 + Σ_{k=2}^∞ αβ Σ_{n=k}^∞ (1−β)^(n−2) = 1 + α Σ_{k=2}^∞ (1−β)^(k−2)
= 1 + α/β = (α+β)/β = 1/π0.

4.2 π0 = .1507 π1 = .3493 π2 = .1918 π3 = .3082

4.3 The period of the Markov chain is d = 2. While there is no limiting distribution, there is a stationary distribution. Set p0 = qN = 1 and solve:

π0 = q1 π1, so π1 = (1/q1)π0 = (p0/q1)π0
π1 = p0 π0 + q2 π2, so π2 = (1/q2)(π1 − p0 π0) = (p0 p1/(q1 q2))π0
π2 = p1 π1 + q3 π3, so π3 = (p0 p1 p2/(q1 q2 q3))π0
...
πk = pk−1 πk−1 + qk+1 πk+1, so πk+1 = ρk+1 π0,

where ρk+1 = p0 p1 ··· pk/(q1 q2 ··· qk+1). Upon adding,

1 = π0 + ··· + πN = (ρ0 + ρ1 + ··· + ρN)π0

π0 = 1/(1 + ρ1 + ρ2 + ··· + ρN) = 1/(1 + Σ_{k=1}^N Π_{l=1}^k (p_{l−1}/q_l)), and πk = ρk π0.
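The ρ-product construction of 4.3 can be sketched on a concrete small chain. The rates below are made-up illustrations (with p0 = qN = 1, pk + qk = 1 in the interior); the stationarity equations are then checked directly:

```python
# Birth-death chain on 0..N with p0 = qN = 1; illustrative rates only.
N = 4
p = [1.0, 0.6, 0.5, 0.4, 0.0]          # p[k] = up-probability from state k
q = [0.0, 0.4, 0.5, 0.6, 1.0]          # q[k] = down-probability from state k

rho = [1.0]
for k in range(1, N + 1):
    rho.append(rho[-1] * p[k - 1] / q[k])   # rho_k = p0...p_{k-1}/(q1...qk)

pi0 = 1.0 / sum(rho)
pi = [r * pi0 for r in rho]

# Interior stationarity: pi_k = p_{k-1} pi_{k-1} + q_{k+1} pi_{k+1}.
for k in range(1, N):
    assert abs(pi[k] - (p[k - 1] * pi[k - 1] + q[k + 1] * pi[k + 1])) < 1e-12
print([round(x, 4) for x in pi])  # [0.1, 0.25, 0.3, 0.25, 0.1]
```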

4.4 π0 = (Σ_{k=1}^∞ αk)π0
π0 = α1 π0 + π1, so π1 = (1 − α1)π0 = (Σ_{k=2}^∞ αk)π0
π1 = α2 π0 + π2, so π2 = (1 − α1 − α2)π0 = (Σ_{k=3}^∞ αk)π0
π2 = α3 π0 + π3
...
πn = (Σ_{k=n+1}^∞ αk)π0

Σ_{n=0}^∞ πn = 1 implies π0 = 1/Σ_{n=0}^∞ Σ_{k=n+1}^∞ αk.

Let ξ be a random variable with Pr{ξ = k} = αk. Then Σ_{n=0}^∞ Σ_{k=n+1}^∞ αk = Σ_{n=0}^∞ Pr{ξ > n} = E[ξ] = Σ_{k=1}^∞ k αk. In order that π0 > 0 we must have Σ_{k=1}^∞ k αk = E[ξ] < ∞. Note: The Markov chain here models the remaining life in a renewal


process. The result says that, under the conditions α1 > 0,α2 > 0, a renewal occurs during a particular period, in the limit,with probability 1/mean life of a component. See Chapter VII.

4.5 Recall that a return time is always at least 1.
(a) Straightforward. (b) To simplify, use Σ πi = 1 and Σ_i πi Pik = πk to get

Σ_i πi mij = 1 + Σ_{k≠j} πk mkj = 1 + Σ_{i≠j} πi mij

and subtract Σ_{i≠j} πi mij from both sides to get πj mjj = 1.

4.6 {n ≥ 1 : P^(n)00 > 0} = {4, 5, 8, 9, 10, 12, 13, 14, ...}, so d(0) = 1. P^(19)00 = 0, while P^(n)ij > 0 for all i, j once n ≥ 20.

4.7 Measure time in trips, so there are two trips each day. Let Xn = 1 if car and person are at the same location prior to the nth trip, and Xn = 0 if not. The transition probability matrix has rows (states 0, 1):

0: (0, 1)
1: (1−p, p)

In the long run, he is without the car for π0 = (1−p)/(2−p) of trips, and walks in the rain for π0 p = p(1−p)/(2−p) of trips. The fraction of days he/she walks in the rain is 2p(1−p)/(2−p).

With two cars, let Xn be the number of cars at the location of the person. P has rows (states 0, 1, 2):

0: (0, 0, 1)
1: (0, 1−p, p)
2: (1−p, p, 0)

π0 = (1−p)/(3−p), and the fraction of days walked in the rain is 2pπ0 = 2p(1−p)/(3−p). Note that the person never gets wet if p = 0 or p = 1.

4.8 The equations are

π0 = (1/2)π0 + (1/3)π1 + (1/4)π2 + (1/5)π3 + ···
π1 = (1/2)π0 + (1/3)π1 + (1/4)π2 + (1/5)π3 + ··· = π0
π2 = (1/3)π1 + (1/4)π2 + (1/5)π3 + ··· = π0 − (1/2)π0
π3 = (1/4)π2 + (1/5)π3 + ··· = π0 − (1/2)π0 − (1/3)π1
π4 = (1/5)π3 + ··· = π0 − (1/2)π0 − (1/3)π1 − (1/4)π2

Starting with the equation for π1 and solving recursively, we get

πk = πk−1 − (1/k)πk−2 = π0/k! for k ≥ 0.

Σ πk = 1 ⇒ π0 = e^(−1) and πk = e^(−1)/k!, k ≥ 0 (Poisson, θ = 1).


5.1 The stationary distributions for PB and PC are, respectively, (π3, π4) = (1/2, 1/2) and (π5, π6, π7) = (.4227, .2371, .3402). The hitting probabilities from the transient states to the recurrent classes are

U (columns {3−4}, {5−7}):
0: (.4569, .5431)
1: (.1638, .8362)
2: (.4741, .5259)

This gives the limiting matrix P^∞ with rows (columns 0, ..., 7; omitted entries are 0):

0: (—, —, —, .2284, .2284, .2296, .1288, .1848)
1: (—, —, —, .0819, .0819, .3534, .1983, .2845)
2: (—, —, —, .2371, .2371, .2223, .1247, .1789)
3: (—, —, —, 1/2, 1/2, —, —, —)
4: (—, —, —, 1/2, 1/2, —, —, —)
5: (—, —, —, —, —, .4227, .2371, .3402)
6: (—, —, —, —, —, .4227, .2371, .3402)
7: (—, —, —, —, —, .4227, .2371, .3402)

5.2 Aggregating states {3, 4} and {5, 6, 7}, the reduced chain has rows (states 0, 1, 2, 3−4, 5−7):

0: (.1, .2, .1, .3, .3)
1: (0, .1, .2, .1, .6)
2: (.5, 0, 0, .3, .2)
3−4: (0, 0, 0, 1, 0)
5−7: (0, 0, 0, 0, 1)

U (columns 3−4, 5−7):
0: (.44, .56)
1: (.23, .77)
2: (.52, .48)

.3π3 + .6π4 = π3 and π3 + π4 = 1 ⇒ π3 = 6/13 = .46, π4 = 7/13 = .54.

P^(∞)04 = U0,3−4 π4 = .44 × .54 = .24.

P^∞ has rows (columns 0, ..., 7; omitted entries are 0):
0: (—, —, —, .20, .24, .25, .15, .16)
1: (—, —, —, .10, .12, .35, .20, .22)
2: (—, —, —, .24, .28, .21, .12, .14)
3: (—, —, —, .46, .54, —, —, —)
4: (—, —, —, .46, .54, —, —, —)
5: (—, —, —, —, —, .45, .26, .29)
6: (—, —, —, —, —, .45, .26, .29)
7: (—, —, —, —, —, .45, .26, .29)


Chapter 5

1.1 Following the hint:

Pr{X = k} = ∫0^1 e^(−λ(1−x)) [λ^k x^(k−1)/(k−1)!] e^(−λx) dx = [λ^k e^(−λ)/(k−1)!] ∫0^1 x^(k−1) dx = λ^k e^(−λ)/k!

1.2 The number of defects in any interval is Poisson distributed, from Theorem 1.1, and the numbers of defects in disjoint intervals are independent random variables, because both statements hold for the major and minor types separately.

1.3 gX(s) = Σ_{k=0}^∞ [μ^k e^(−μ)/k!] s^k = e^(−μ) Σ_{k=0}^∞ (μs)^k/k! = e^(−μ) e^(μs) = e^(−μ(1−s)), |s| < 1.

1.4 gN(s) = E[s^(X+Y)] = E[s^X s^Y] = E[s^X] E[s^Y]   (using II, (1.10), (1.12))
= gX(s) gY(s).

In the Poisson case (Problem 1.3), gN(s) = e^(−α(1−s)) e^(−β(1−s)) = e^(−(α+β)(1−s))   (Poisson, θ = α + β).

1.5 (a) [1 − p0(h)]/h = (1 − e^(−λh))/h = [λh − (1/2)λ²h² + (1/3!)λ³h³ − ···]/h
= λ − (1/2)λ²h + (1/3!)λ³h² − ··· → λ as h → 0.

(b) p1(h)/h = λh e^(−λh)/h = λe^(−λh) → λ as h → 0.

(c) p2(h)/h = (1/2)λ²h² e^(−λh)/h = (1/2)λ²h e^(−λh) → 0 as h → 0.

1.6 Pr{X(t) = k, X(t+s) = n} = Pr{X(t) = k, X(t+s) − X(t) = n − k}
= [(λt)^k e^(−λt)/k!] [(λs)^(n−k) e^(−λs)/(n−k)!]

Pr{X(t) = k | X(t+s) = n} = {(λt)^k e^(−λt) (λs)^(n−k) e^(−λs)/[k!(n−k)!]} / {[λ(t+s)]^n e^(−λ(t+s))/n!}
= C(n, k) (t/(t+s))^k (s/(t+s))^(n−k)   (Binomial, p = t/(t+s))
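The binomial conditional law in 1.6 can be confirmed numerically for arbitrary (illustrative) parameter values; the helper `poisson` below is ours:

```python
from math import comb, exp, factorial

# Check: Pr{X(t)=k | X(t+s)=n} equals Binomial(n, t/(t+s)) for a Poisson process.
lam, t, s, n = 1.7, 2.0, 3.0, 5   # illustrative values

def poisson(k, mean):
    return mean ** k * exp(-mean) / factorial(k)

for k in range(n + 1):
    joint = poisson(k, lam * t) * poisson(n - k, lam * s)
    cond = joint / poisson(n, lam * (t + s))
    binom = comb(n, k) * (t / (t + s)) ** k * (s / (t + s)) ** (n - k)
    assert abs(cond - binom) < 1e-12
print("conditional distribution matches Binomial(n, t/(t+s))")
```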


1.7 Pr{Survive at time t} = Σ_{k=0}^∞ Pr{Survive k shocks} Pr{k shocks} = Σ_{k=0}^∞ α^k e^(−λt)(λt)^k/k! = e^(−λt(1−α)).

1.8 e^(λt) = 1 + λt + (1/2)(λt)² + (1/3!)(λt)³ + (1/4!)(λt)⁴ + ···
e^(−λt) = 1 − λt + (1/2)(λt)² − (1/3!)(λt)³ + (1/4!)(λt)⁴ − ···
(1/2)(e^(λt) − e^(−λt)) = λt + (1/3!)(λt)³ + (1/5!)(λt)⁵ + ···

Pr{X(t) is odd} = e^(−λt) (1/2)(e^(λt) − e^(−λt)) = (1/2)(1 − e^(−2λt)).

1.9 (a) E[X(T) | T = t] = λt, E[X(T)² | T = t] = λt + λ²t².

(b) E[X(T)] = ∫0^1 λt dt = (1/2)λ = 1 when λ = 2.
E[X(T)²] = ∫0^1 (λt + λ²t²) dt = (1/2)λ + (1/3)λ².
Var[X(T)] = E[X(T)²] − E[X(T)]² = (1/2)λ + (1/3)λ² − (1/4)λ² = (1/2)λ + (1/12)λ² = 4/3 when λ = 2.

1.10 (a) K.
(b) c E[∫0^T X(t) dt] = c ∫0^T λt dt = (1/2)λcT².
(c) A.C. = (1/T)(K + (1/2)λcT²).
(d) (d/dT) A.C. = 0 ⇒ T* = √(2K/(λc)).

Note: In (a), one could argue for Dispatch Cost = K·1{X(T) > 0}, with E[Dispatch Cost] = K(1 − e^(−λT)).

1.11 The gamma density fk(x) = λ^k x^(k−1) e^(−λx)/(k−1)!, x > 0.

1.12 (a) Pr{X′(t) = k} = ∫0^∞ [(θt)^k e^(−θt)/k!] e^(−θ) dθ = (t^k/k!) ∫0^∞ θ^k e^(−(1+t)θ) dθ
= (1/k!)(t/(1+t))^k (1/(1+t)) ∫0^∞ x^k e^(−x) dx = (t/(1+t))^k (1/(1+t))   (See I, (6.4)).

(b) Pr{X′(t) = j, X′(t+s) = j+k} = ∫0^∞ [(θt)^j e^(−θt)/j!] [(θs)^k e^(−θs)/k!] e^(−θ) dθ
= C(j+k, j) (t/(1+s+t))^j (s/(1+s+t))^k (1/(1+s+t)).

2.1 Pr{X(n,p) = 0} = (1−p)^n = (1 − λ/n)^n → e^(−λ) as n → ∞.

Pr{X(n,p) = k+1}/Pr{X(n,p) = k} = (n−k)p/[(k+1)(1−p)] → λ/(k+1) as n → ∞, with λ = np.
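The Poisson approximation in 2.1 can be sketched numerically: the binomial(n, λ/n) probability of a fixed k approaches the Poisson(λ) probability as n grows (the parameter values are illustrative):

```python
from math import comb, exp, factorial

# Binomial(n, lambda/n) pmf at a fixed k approaches Poisson(lambda).
lam, k = 2.0, 3

def binom_pmf(n):
    p = lam / n
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

poisson_pk = lam ** k * exp(-lam) / factorial(k)
err_small = abs(binom_pmf(10) - poisson_pk)
err_large = abs(binom_pmf(10000) - poisson_pk)
print(err_small, err_large)   # the approximation error shrinks with n
```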

2.2 In a small sample (sample size ≪ √(#tags)) there are many potential pairs, and a small probability for each particular pair to be drawn. The probability that pair i is drawn is approximately independent of the probability that pair j is drawn.

2.3 Pr{A} = [2N(2N−2)···(2N−2n+2)]/[2N(2N−1)···(2N−n+1)] = 2^n C(N, n)/C(2N, n) = .7895 when n = 10, N = 100.

Π_{i=1}^{n−1} (1 − i/(2N−i)) ≅ Π_{i=1}^{n−1} e^(−i/(2N−i)) ≅ exp{−Σ_{i=1}^{n−1} i/(2N)} = exp{−n(n−1)/(4N)} = .7985 when n = 10, N = 100.

2.4 XN, the number of points in [0, 1) when N points are scattered over [0, N), is binomially distributed with parameters N and p = 1/N.

Pr{XN = k} = C(N, k)(1/N)^k (1 − 1/N)^(N−k) → e^(−1)/k!, k ≥ 0.

2.5 Xr, the number of points within radius 1 of the origin, is binomially distributed with parameters N and p = π/(πr²) = λ/N. Pr{Xr = k} = C(N, k)(λ/N)^k (1 − λ/N)^(N−k) → λ^k e^(−λ)/k!, k = 0, 1, ....

2.6 Pr{Xi = k, Xj = l} = [N!/(k! l! (N−k−l)!)] (1/M)^k (1/M)^l (1 − 2/M)^(N−k−l)   (Multinomial)
→ (1/(k! l!)) λ^(k+l) e^(−2λ) = (λ^k e^(−λ)/k!)(λ^l e^(−λ)/l!).

Fraction of locations having 2 or more accounts → 1 − e^(−λ) − λe^(−λ).

2.7 Pr{k bacteria in region of area a} = C(N, k)(a/A)^k (1 − a/A)^(N−k) → c^k e^(−c)/k! as N → ∞, a → 0, Na/A → c.


2.8 p1 = .1, p2 = .2, p3 = .3, p4 = .4, Σ pi² = .30.

k   Pr{Sn = k}   e^(−1)/k!   Diff.
0     .3024       .3679      −.0655
1     .4404       .3679       .0725
2     .2144       .1840       .0304
3     .0404       .0613      −.0209
4     .00024      .0153      −.0151

2.9 p1 = p2 = p3 = .1, p4 = .2, Σ pi² = .07.

k   Pr{Sn = k}   (1/2)^k e^(−1/2)/k!   Diff.
0     .5832            .6065           −.0233
1     .3402            .3033            .0369
2     .0702            .0758           −.0056
3     .0062            .0126           −.0064
4     .0002            .0016           −.0014

2.10 One need only observe that

|Pr{Sn ∈ I} − Pr{X(μ) ∈ I}| ≤ Pr{Sn ≠ X(μ)} ≤ Σ_{k=1}^n Pr{ε(pi) ≠ X(pi)}.

2.11 From the hint, Pr{X in B} ≤ Pr{Y in B} + Pr{X ≠ Y}. Similarly, Pr{Y in B} ≤ Pr{X in B} + Pr{X ≠ Y}. Together, |Pr{X in B} − Pr{Y in B}| ≤ Pr{X ≠ Y}.

2.12 Most older random number generators fail this test, the pairs (U2N,U2N+1) all falling in a region having little or no area.

3.1 To justify differentiation as the correct means to obtain the density, look at

∫_{w1}^∞ ∫_{w2}^∞ fW1,W2(w′1, w′2) dw′1 dw′2 = [1 + λ(w2 − w1)] e^(−λw2).

3.2 fW2(w2) = ∫0^{w2} fW1,W2(w1, w2) dw1 = λ²w2 e^(−λw2), w2 > 0.

fW1|W2(w1|w2) = λ²e^(−λw2)/(λ²w2 e^(−λw2)) = 1/w2 for 0 < w1 < w2   (Uniform on (0, w2]).

Theorem 3.3 conditions on an event of positive probability. Here Pr{W2 = w} = 0 for all w; {W2 = w2} is NOT the same event as {X(w2) = 2}.

3.3 fS0,S1(s0, s1) = fW1,W2(s0, s0 + s1)   (Jacobian = 1)
= λ² exp{−λ(s0 + s1)} = (λe^(−λs0))(λe^(−λs1)).

Compare with Theorem 3.2 for another approach.


3.4 fW1(w1) = ∫_{w1}^∞ λ²e^(−λw2) dw2 = λe^(−λw1), w1 > 0.
fW2(w2) = ∫0^{w2} λ²e^(−λw2) dw1 = λ²w2 e^(−λw2), w2 > 0.

3.5 One can adapt the solution of Exercise 1.5 for a computational approach. For a different approach,

Pr{X(T) = 0} = Pr{T < W1} = θ/(λ + θ)   (Review I, 5.2).

Using the memoryless property and starting afresh at time W1,

Pr{X(T) ≥ 2} = Pr{W2 ≤ T} = Pr{W1 ≤ T} Pr{W2 ≤ T | W1 ≤ T} = (λ/(λ + θ))².

Similarly,

Pr{X(T) > k} = (λ/(λ + θ))^(k+1) and Pr{X(T) = k} = (θ/(λ + θ))(λ/(λ + θ))^k, k ≥ 0.

3.6 T = WQ, the waiting time for the Qth arrival, whence E[T] = Q/λ. ∫0^T N(t)dt = total customer waiting time = area under N(t) up to T = WQ = 1·S1 + 2·S2 + ··· + (Q−1)SQ−1 (draw a picture), so

E[∫0^T N(t)dt] = [1 + 2 + ··· + (Q−1)]/λ = Q(Q−1)/(2λ).

3.7 The failure rate is λ = 2 per year. Let X = # of failures in a year.

Stock (spares)   Inoperable unit if   Pr{Inoperable}
0                 X ≥ 1                .8647
1                 X ≥ 2                .5940
2                 X ≥ 3                .3233
3                 X ≥ 4                .1429
4                 X ≥ 5                .0527
5*                X ≥ 6                .0166

3.8 We use the binomial distribution in Theorem 3.3.

Pr{Wr > s | X(t) = n} = Pr{X(s) < r | X(t) = n} = Σ_{k=0}^{r−1} C(n, k)(s/t)^k (1 − s/t)^(n−k)

fWr|X(t)=n(s) = −(d/ds) Σ_{k=0}^{r−1} C(n, k)(s/t)^k (1 − s/t)^(n−k) = [n!/((r−1)!(n−r)!)] (s/t)^(r−1) (1/t)(1 − s/t)^(n−r).

3.9 (a) Pr{W1^(1) < W1^(2)} = λ1/(λ1 + λ2).

(b) Pr{W2^(1) < W2^(2)} = Pr{W2^(1) < W1^(2)} + Pr{W1^(2) < W2^(1) < W2^(2)}
= (λ1/(λ1+λ2))² + (λ2/(λ1+λ2))(λ1/(λ1+λ2))² + (λ1/(λ1+λ2))²(λ2/(λ1+λ2))
= (λ1/(λ1+λ2))² [1 + 2λ2/(λ1+λ2)].

3.10 Xn+1 = 2^(n+1) exp{−(Wn + Sn)} = Xn 2 e^(−Sn)

E[Xn+1 | Xn] = Xn 2 E[e^(−Sn)] = Xn 2 ∫0^∞ e^(−2x) dx = Xn.

4.1 fW1,...,Wk−1,Wk+1,...,Wn | X(1)=n, Wk=w (w1, ..., wk−1, wk+1, ..., wn)
= n!/{[n!/((k−1)!(n−k)!)] w^(k−1)(1−w)^(n−k)}
= (k−1)! (1/w)^(k−1) (n−k)! (1/(1−w))^(n−k)

4.2 Appropriately modify the argument leading to (4.7) so as to obtain the joint distribution of X(t) and Y(t). Answer: X(t) and Y(t) are independent Poisson random variables with parameters λ∫0^t [1 − G(v)]dv and λ∫0^t G(v)dv, respectively.

4.3 The joint distribution of U = W1/W2, V = (1−W3)/(1−W2), and W = W2 is

fU,V,W(u, v, w) = 6w(1−w) for 0 ≤ u, v, w ≤ 1,

whence

fU,V(u, v) = ∫0^1 6w(1−w) dw = 1 for 0 ≤ u, v ≤ 1.

4.4 Let Z(t) = min{W1 + ξ1, ..., WX(t) + ξX(t)}.

Pr{Z(t) > z} = Σ_{n=0}^∞ Pr{Z(t) > z | X(t) = n} Pr{X(t) = n}
= Σ_{n=0}^∞ Pr{U + ξ > z}^n Pr{X(t) = n}
= e^(−λt) exp{λt Pr{U + ξ > z}}   (U is unif. (0, t])
= exp{−λt + λt ∫0^t (1/t)[1 − F(z−u)]du}
= exp{−λ ∫_{z−t}^z F(v)dv}

Let t → ∞:

Pr{Z > z} = exp{−λ ∫_{−∞}^z F(v)dv}

4.5 Pr{W1 > w | N(t) = n} = Pr{U1 > w, ..., Un > w} = (1 − w/t)^n = (1 − βw/n)^n if n = βt
→ e^(−βw) as t → ∞, n → ∞, n = βt.

Under the specified conditions, W1 is exponentially distributed, but not with rate λ.

4.6 (a) E[W1 | X(t) = 2] = t/3.
(b) E[W3 | X(t) = 5] = 3t/6 = t/2.
(c) fW2|X(t)=5(w) = 20w(t − w)³/t⁵ for 0 < w < t.

4.7 E[Σ_{i=1}^{X(t)} f(Wi)] = Σ_{n=0}^∞ E[Σ_{i=1}^n f(Wi) | X(t) = n] Pr{X(t) = n}
= λt E[f(U)]   (U is unif. (0, t])
= λ ∫0^t f(u)du.

4.8 This is a generalization of the shot noise process in Section 4.1.

E[Z(t)] = E[ξ] λ ∫0^t e^(−αu) du = E[ξ](λ/α)(1 − e^(−αt)).

4.9 E[W1 W2 ··· WN(t)] = Σ_{n=0}^∞ E[W1···Wn | N(t) = n] Pr{N(t) = n}
= Σ_{n=0}^∞ E[U1 ··· Un] Pr{N(t) = n}
= Σ_{n=0}^∞ (t/2)^n e^(−λt)(λt)^n/n! = e^(−λt(1 − t/2)).

4.10 The generalization allows the impulse response function to also depend on a random variable ξ so that, in the notation of Section 4.1,

I(t) = Σ_{k=1}^{X(t)} h(ξk, t − Wk),

the ξk being independent of the Poisson process.

4.11 The limiting density is

f(x) = c for 0 < x < 1;
= c(1 − log x) for 1 < x < 2;
= c[1 − log x + ∫2^x log(t−1)/t dt] for 2 < x < 3;
...

where c = exp{−Euler's constant} = .561459....

In principle, one can get the density for any x from the differential equation. In practice?

5.1 f(x) = 2(x/R²) for 0 < x < R, because F(x) = Pr{X ≤ x | N = 1} = (x/R)².

5.2 It is a homogeneous Poisson process whose intensity is proportional to the mean of the Poisson random variable N. Note: If X = R cos Θ and Y = R sin Θ, then (X, Y) is uniform on the disk.

5.3 For i = 1, 2, ..., n², let ξi = 1 if two or more points are in box i, and ξi = 0 otherwise. Then the number of reactions is ξ1 + ··· + ξn², and p = Pr{ξi = 1} = (1/2)λ²d⁴ + o(d⁴).

n²p = (1/2)λ²(nd²)² + ··· → (1/2)λ²μ².

The number of reactions is asymptotically Poisson with parameter (1/2)λ²μ².

5.4 The probability that the centre of a sphere is located between r and r + dr units from the origin is λ d[(4/3)πr³] = 4λπr² dr. The probability that such a sphere covers the origin is ∫_r^∞ f(x)dx. The mean number of spheres that cover the origin is ∫0^∞ 4λπr² ∫_r^∞ f(x)dx dr = (4/3)λπ ∫0^∞ r³ f(r)dr. The distribution is Poisson with this mean.

5.5 FD(x) = Pr{D ≤ x} = 1 − Pr{D > x} = 1 − Pr{No particles in circle of radius x} = 1 − exp{−νπx²}, x > 0.

5.6 1 − FR(x) = Pr{R > x} = Pr{No stars in sphere of radius x} = exp{−λ(4/3)πx³},

whence

fR(x) = (d/dx)FR(x) = 4λπx² e^(−(4/3)λπx³), x > 0.

5.7 The hint should be sufficient for (a). For (b),

∫0^∞ λ(r)dr = 2πλ ∫0^∞ r ∫_r^∞ f(x)dx dr = ∫0^∞ 2πλ (∫0^x r dr) f(x)dx = πλ ∫0^∞ x² f(x)dx.

6.1 Nonhomogeneous Poisson, with intensity λ(t) = λG(t). To see this, let N(t) be the number of points ≤ t in the relocated process.

6.2 The key observation is that, if Θ is uniform on [0, 2π), and Y is independent with an arbitrary distribution, then Y + Θ (mod 2π) is uniform.

6.3 From the shock model of Section 6.1, modified for the discrete nature of the damage process, we have

E[T] = (1/λ) Σ_{n=0}^∞ G^(n)(a−1)
= (1/λ)[1 + Σ_{n=1}^∞ Σ_{k=0}^{a−1} C(n+k−1, k) p^n (1−p)^k]   (See I, (3.6))
= (1/λ)[1 + Σ_{k=0}^{a−1} p(1−p)^k {Σ_{n=1}^∞ C(n−1+k, n−1) p^(n−1)}]
= (1/λ)[1 + Σ_{k=0}^{a−1} p(1−p)^k (1−p)^(−k−1)]   (See I, (6.21))
= (1/λ)[1 + ap/(1−p)].

6.4 One can use the results of I, Section 5.2, or, since T1 is exponentially distributed with rate μ,

Pr{X(T1) = k} = ∫0^∞ [(λt)^k e^(−λt)/k!] μe^(−μt) dt = (μ/(λ+μ))(λ/(λ+μ))^k, k = 0, 1, ...   (See I, (6.4)).

Ta has a gamma density and a similar integration applies. See the last example in II, Section 4.

6.5 (1/2)T is exponentially distributed with rate parameter 2μ, whence

Pr{X((1/2)T) = k} = ∫0^∞ [(λt)^k e^(−λt)/k!] 2μe^(−2μt) dt = (2μ/(λ+2μ))(λ/(λ+2μ))^k, k = 0, 1, ...   (See I, (6.4)).

6.6 Instead of representing Wk + Xk and Wk + Yk as distinct points on a single axis, write them as a single point (Wk + Xk, Wk + Yk) in the plane. The result is a nonhomogeneous Poisson point process in the positive quadrant with intensity

λ(x, y) = λ ∫0^{min{x,y}} f(x−u) f(y−u) du,

where f(·) is the common density for X, Y.

6.7 Refer to Exercise 6.3.

Pr{Z(t) > z | N(t) > 0} = (e^(−λz^α t) − e^(−λt))/(1 − e^(−λt)), 0 < z < 1.

Let Y(t) = t^(1/α) Z(t). For large t, we have

Pr{Y(t) > y | N(t) > 0} = Pr{Z(t) > y/t^(1/α) | N(t) > 0} = (e^(−λy^α) − e^(−λt))/(1 − e^(−λt)) → e^(−λy^α) as t → ∞   (Weibull).

6.8 Following the remarks in Section 1.3, we may write N(t) = M[Λ(t)], where M(t) is a Poisson process of unit intensity.

Pr{Z(t) > z} = Pr{min[Y1, ..., YM[Λ(t)]] > z} = exp{−G(z)Λ(t)} = exp{−z^α Λ(t)}.

Pr{t^(1/α) Z(t) > z} = Pr{Z(t) > z/t^(1/α)} = exp{−z^α Λ(t)/t} → e^(−θz^α) as t → ∞, z > 0   (Weibull).


6.9 To carry the methods of this section a little further, write N(dt) = N(t + dt) − N(t), so N(dt) = 1 if and only if t < Wk ≤ t + dt for some Wk. Then

Σ_{k=1}^{N(t)} (Wk)² = ∫0^t x² N(dx), so

E[Σ_{k=1}^{N(t)} (Wk)²] = E[∫0^t x² N(dx)] = ∫0^t x² E[N(dx)] = λ ∫0^t x² dx = λt³/3.

6.10 (a) Pr{Sell asset} = 1 − e^(−(1−θ)).

(b) Because the offers are uniformly distributed, the expected selling price, given that it is sold, is (1+θ)/2.

E[Return] = ((1+θ)/2)[1 − e^(−(1−θ))], with θ* = .2079 and max_θ E[Return] = .330425.

(c) Pr{Sell in (t, t+dt]} = e^(−∫0^t [1−θ(u)]du) [1 − θ(t)] dt.

E[Return] = ∫0^1 ((1 + θ(t))/2) exp{−∫0^t [1−θ(u)]du} [1−θ(t)] dt = (2/9) ∫0^1 (2−t)dt = 1/3, a minuscule improvement over (b).


Chapter 6

1.1 Pr{X(U) = k} = ∫0^1 e^(−βu)(1 − e^(−βu))^(k−1) du = ∫0^{1−e^(−β)} x^(k−1) dx/β = (1 − e^(−β))^k/(βk).

1.2 λk = α + βk

λ0 λ1 ··· λn−1 = β^n (α/β)(α/β + 1)···(α/β + n − 1)
(λ0 − λk)···(λk−1 − λk) = β^k (−1)^k k!
(λn − λk)···(λk+1 − λk) = β^(n−k)(n − k)!

Pn(t) = λ0 ··· λn−1 Σ_{k=0}^n Bk,n e^(−λk t)
= [(α/β)(α/β + 1)···(α/β + n − 1)/n!] e^(−αt) Σ_{k=0}^n C(n, k)(−e^(−βt))^k
= C(α/β + n − 1, n) e^(−αt)(1 − e^(−βt))^n for n = 0, 1, ....

Note: Σ_{n=0}^∞ Pn(t) = 1, using I, (6.18)–(6.20).

1.3 The probabilistic rate of increase in the infected population is jointly proportional to the number who can give the disease and the number available to catch it: λk = αk(N − k), k = 0, 1, ..., N.

1.4 (a) λk = α + θk = 1 + 2k.
(b) P2(t) = 2[(1/8)e^(−t) − (1/4)e^(−3t) + (1/8)e^(−5t)], so P2(1) = (1/4)e^(−1) − (1/2)e^(−3) + (1/4)e^(−5) = .0688.

1.5 The two possibilities X(w2) = 0 or X(w2) = 1 give us

Pr{W1 > w1, W2 > w2} = Pr{X(w1) = 0, X(w2) = 0} + Pr{X(w1) = 0, X(w2) = 1}
= P0(w1)P0(w2 − w1) + P0(w1)P1(w2 − w1)
= e^(−λ0w2) + λ0 e^(−λ0w1)[(1/(λ1−λ0)) e^(−λ0(w2−w1)) + (1/(λ0−λ1)) e^(−λ1(w2−w1))]
= (λ1/(λ1−λ0)) e^(−λ0w2) + (λ0/(λ0−λ1)) e^(−λ1w2) e^(−(λ0−λ1)w1)
= ∫_{w1}^∞ ∫_{w2}^∞ fW1,W2(w′1, w′2) dw′1 dw′2,

hence

fW1,W2(w1, w2) = (∂²/∂w2 ∂w1) Pr{W1 > w1, W2 > w2} = λ0λ1 e^(−λ0w1) e^(−λ1(w2−w1)).

Setting s0 = w1, s1 = w2 − w1 (Jacobian = 1), fS0,S1(s0, s1) = (λ0 e^(−λ0s0))(λ1 e^(−λ1s1)).

1.6 E[Sk] = (1 + k)^(−θ); E[W∞] = Σ_{k=1}^∞ (1 + k)^(−θ) < ∞ when θ > 1.

1.7 (a) Pr{S0 ≤ t} = 1 − e^(−λ0t); Pr{S0 > t} = e^(−λ0t).

Pr{S0 + S1 ≤ t} = ∫0^t [1 − e^(−λ1(t−x))] λ0 e^(−λ0x) dx
= 1 − e^(−λ0t) − (λ0/(λ0−λ1)) e^(−λ1t)[1 − e^(−(λ0−λ1)t)]
= 1 − (λ0/(λ0−λ1)) e^(−λ1t) − (λ1/(λ1−λ0)) e^(−λ0t) = 1 − Pr{S0 + S1 > t}.

Pr{S0 + S1 + S2 ≤ t} = ∫0^t {1 − (λ0/(λ0−λ1)) e^(−λ1(t−x)) − (λ1/(λ1−λ0)) e^(−λ0(t−x))} λ2 e^(−λ2x) dx
= 1 − (λ0λ2/((λ0−λ1)(λ2−λ1))) e^(−λ1t) − (λ1λ2/((λ1−λ0)(λ2−λ0))) e^(−λ0t) − (λ0λ1/((λ0−λ2)(λ1−λ2))) e^(−λ2t).

(b) P2(t) = (λ0λ2/((λ0−λ1)(λ2−λ1))) e^(−λ1t) + (λ1λ2/((λ1−λ0)(λ2−λ0))) e^(−λ0t) + (λ0λ1/((λ0−λ2)(λ1−λ2))) e^(−λ2t)
− (λ0/(λ0−λ1)) e^(−λ1t) − (λ1/(λ1−λ0)) e^(−λ0t)
= λ0λ1 [(1/((λ1−λ0)(λ2−λ0))) e^(−λ0t) + (1/((λ0−λ1)(λ2−λ1))) e^(−λ1t) + (1/((λ0−λ2)(λ1−λ2))) e^(−λ2t)].

1.8 Equations (1.2) become

P′0(t) = −βP0(t)
P′n(t) = −βPn(t) + αPn−1(t), n = 2, 4, 6, ...
P′n(t) = −αPn(t) + βPn−1(t), n = 1, 3, 5, ...

Summing over n = 0, 2, 4, ... and n = 1, 3, 5, ... gives

P′even(t) = αPodd(t) − βPeven(t) = α[1 − Peven(t)] − βPeven(t) = α − (α+β)Peven(t).

Let Qeven(t) = e^((α+β)t) Peven(t); then Q′even(t) = α e^((α+β)t), and since Qeven(0) = 1, the solution is Qeven(t) = β/(α+β) + (α/(α+β)) e^((α+β)t), so Peven(t) = e^(−(α+β)t) Qeven(t) = α/(α+β) + (β/(α+β)) e^(−(α+β)t).

1.9 Equations (1.2) become (as in Problem 1.8)

P′0(t) = −βP0(t)
P′n(t) = −βPn(t) + αPn−1(t), n = 2, 4, 6, ...
P′n(t) = −αPn(t) + βPn−1(t), n = 1, 3, 5, ...

Multiply the nth equation by n, sum, collect terms, and simplify to get

M′(t) = αPodd(t) + βPeven(t) = α + (β − α)Peven(t),

and (see Problem 1.8 above)

M(t) = (2αβ/(α+β)) t + ((β−α)/(β+α))(β/(α+β))[1 − e^(−(α+β)t)].

1.10 It is easily checked through (1.5) that Pn(t) = C(N, n) e^(−(N−n)λt)(1 − e^(−λt))^n, by working with Qn(t) = e^(λn t) Pn(t) = C(N, n)(1 − e^(−λt))^n, where λn = (N−n)λ: Q0(t) ≡ 1 and Qn(t) = λn−1 ∫0^t e^(−λx) Qn−1(x)dx (from (1.5)).

1.11 P1(t) = λ0 e^(−λ1t) ∫0^t e^(λ1x) e^(−λ0x) dx = (λ0/(λ0−λ1)) e^(−λ1t) + (λ0/(λ1−λ0)) e^(−λ0t)

P2(t) = λ1 e^(−λ2t) ∫0^t [(λ0/(λ0−λ1)) e^(−λ1x) + (λ0/(λ1−λ0)) e^(−λ0x)] e^(λ2x) dx
= λ0λ1 [(1/((λ0−λ1)(λ2−λ1))) e^(−λ1t) + (1/((λ1−λ0)(λ2−λ0))) e^(−λ0t) + {1/((λ0−λ1)(λ1−λ2)) + 1/((λ1−λ0)(λ0−λ2))} e^(−λ2t)],

and

1/((λ0−λ1)(λ1−λ2)) + 1/((λ1−λ0)(λ0−λ2)) = 1/((λ0−λ2)(λ1−λ2)).

P3(t) = λ2 e^(−λ3t) ∫0^t λ0λ1 [(1/((λ0−λ1)(λ2−λ1))) e^(−λ1x) + (1/((λ1−λ0)(λ2−λ0))) e^(−λ0x) + (1/((λ0−λ2)(λ1−λ2))) e^(−λ2x)] e^(λ3x) dx
= λ0λ1λ2 [(1/((λ3−λ0)(λ2−λ0)(λ1−λ0))) e^(−λ0t) + (1/((λ3−λ1)(λ2−λ1)(λ0−λ1))) e^(−λ1t)
+ (1/((λ3−λ2)(λ1−λ2)(λ0−λ2))) e^(−λ2t) + (1/((λ0−λ3)(λ1−λ3)(λ2−λ3))) e^(−λ3t)].

1.12 P2(t) = λ1 e^(−λ2t) ∫0^t [(λ0/(λ0−λ1)) e^(−λ1x) + (λ0/(λ1−λ0)) e^(−λ0x)] e^(λ2x) dx
= λ0λ1 e^(−λ2t) [(1/((λ0−λ1)(λ1−λ2)))(1 − e^(−(λ1−λ2)t)) + (1/((λ1−λ0)(λ0−λ2)))(1 − e^(−(λ0−λ2)t))]
= λ0λ1 [e^(−λ0t)/((λ1−λ0)(λ2−λ0)) + e^(−λ1t)/((λ2−λ1)(λ0−λ1)) + e^(−λ2t)/((λ0−λ2)(λ1−λ2))].

Note: 1/((λ0−λ1)(λ1−λ2)) + 1/((λ1−λ0)(λ0−λ2)) = 1/((λ0−λ2)(λ1−λ2)).

1.13 Let Qn(t) = e^(λt) Pn(t). Then Q0(t) ≡ 1, and (1.5) becomes Q′n(t) = λQn−1(t), which solves to give Q1(t) = λt, Q2(t) = (1/2)(λt)², ..., Qn(t) = (λt)^n/n!, and Pn(t) = (λt)^n e^(−λt)/n!.
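The conclusion of 1.13 can be sketched numerically: Euler-integrating the forward equations P′n = −λPn + λPn−1 reproduces the Poisson probabilities (parameter values and step count are illustrative; the small residual is discretization error):

```python
from math import exp, factorial

# Pure birth with constant rate: integrate P'_n = -lam*P_n + lam*P_{n-1}
# by Euler steps and compare with (lam t)^n e^{-lam t}/n!.
lam, T, steps, nmax = 1.0, 2.0, 20000, 10
dt = T / steps
P = [1.0] + [0.0] * nmax               # start in state 0
for _ in range(steps):
    P = [P[n] + dt * (-lam * P[n] + (lam * P[n - 1] if n > 0 else 0.0))
         for n in range(nmax + 1)]

poisson = [(lam * T) ** n * exp(-lam * T) / factorial(n) for n in range(nmax + 1)]
err = max(abs(a - b) for a, b in zip(P, poisson))
print(err)   # small Euler-discretization error
```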

2.1 Using the memoryless property (Section I, 5.2),

Pr{X(T) = 0} = Pr{T > WN} = Π_{i=1}^N Pr{T > Wi | T > Wi−1} = Π_{i=1}^N μi/(μi + θ).


2.2 PN(t) = e^(−θt)
PN−1(t) = θt e^(−θt)
Pn(t) = (θt)^(N−n) e^(−θt)/(N−n)! for n = 1, 2, ..., N
P0(t) = 1 − Σ_{n=1}^N Pn(t).

2.3 Refer to Figure 2.1 to see that Area = WN + ··· + W1. Hence E[Area] = E[WN] + ··· + E[W1] = N E[SN] + (N−1)E[SN−1] + ··· + E[S1] = Σ_{n=1}^N n/μn.

2.4 (a) μN = NMθ
μN−1 = (N−1)(M−1)θ
μk = k(M − (N − k))θ, k = 0, 1, ..., N.

(b) E[WN] = Σ_{k=1}^N E[Sk] = Σ_{k=1}^N 1/(k[M − (N−k)]θ)
= (1/θ) Σ_{k=1}^N {1/((M−N)k) − 1/((M−N)(M−N+k))}
= (1/(θ(M−N))) {Σ_{k=1}^N 1/k − Σ_{k=1}^N 1/(M−N+k)}
∼ (1/(θ(M−N))) {log N − log M + log(M−N)}.

2.5 The breakdown rule is "exponential breakdown":

μk = k sinh(NL/k)

E[WN] = Σ_{k=1}^N 1/(k sinh(NL/k)) = Σ_{k=1}^N 1/[(k/N) sinh(L/(k/N))] (1/N) ≅ ∫0^1 dx/(x sinh(L/x))   (Riemann approximation).

2.6 (a) E[T] = E[WN] = Σ_{k=1}^N E[Sk] = (1/α) Σ_{k=1}^N 1/k.

(b) E[T] = ∫0^∞ Pr{T > t} dt = ∫0^∞ [1 − P0(t)] dt = ∫0^∞ [1 − (1 − e^(−αt))^N] dt = (1/α) ∫0^1 (1 − y^N)/(1 − y) dy
= (1/α) ∫0^1 [1 + y + y² + ··· + y^(N−1)] dy = (1/α)[1 + 1/2 + 1/3 + ··· + 1/N].
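The integral identity used in 2.6(b) is easy to sanity-check numerically; the midpoint rule below is our illustration (N is arbitrary):

```python
# Check: integral_0^1 (1 - y^N)/(1 - y) dy = 1 + 1/2 + ... + 1/N.
N = 7
harmonic = sum(1.0 / k for k in range(1, N + 1))

# (1 - y^N)/(1 - y) = 1 + y + ... + y^{N-1} is a polynomial, so a midpoint
# rule converges fast; m subintervals of [0, 1].
m = 100000
integral = sum((1 - ((i + 0.5) / m) ** N) / (1 - (i + 0.5) / m)
               for i in range(m)) / m
print(harmonic, integral)  # the two values agree
```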

3.1 Pr{X(t+h) = 1 | X(t) = 0} = λh + o(h), so λ0 = λ.
Pr{X(t+h) = 0 | X(t) = 1} = λ(1−α)h + o(h), so μ1 = λ(1−α).
The Markov property requires the independent increments of the Poisson process.

3.2 If we assume that the sojourn time on a single plant is exponentially distributed, then X(t) is a birth and death process. Reflecting behavior (μ_0 = 0, λ_0 = 1/m_0, λ_N = 0, μ_N = 1/m_N) might be assumed. In an actual experiment, and these have been done, one must allow for the escape or other loss of the beetle. We add a state Δ = "Escaped" (Δ is called the "cemetery") and assume (0 < α < 1)

Pr{X(t+h) = k+1 | X(t) = k} = (α/(2m_k)) h + o(h)
Pr{X(t+h) = k−1 | X(t) = k} = (α/(2m_k)) h + o(h)
Pr{X(t+h) = Δ | X(t) = k} = ((1−α)/m_k) h + o(h)
Pr{X(t+h) = Δ | X(t) = Δ} = 1.

This is a general finite state continuous time Markov process; see Section 6. It is also a birth and death process with "killing."

3.3 Pr{V(t) = 1} = π for all t (see Exercise 3.3).

E[V(s)V(t)] = Pr{V(s) = 1, V(t) = 1} = Pr{V(s) = 1} Pr{V(t) = 1 | V(s) = 1} = π P_{11}(t−s) = π[1 − P_{10}(t−s)]

Cov[V(s), V(t)] = E[V(s)V(t)] − E[V(s)]E[V(t)] = π P_{11}(t−s) − π^2 = π(1−π) e^{-(α+β)(t−s)}.

3.4 Because V(0) = 0, E[V(t)] = P_{01}(t), and

E[S(t)] = ∫_0^t P_{01}(u) du = ∫_0^t (π − π e^{-τu}) du = πt − (π/τ)(1 − e^{-τt}), τ = α + β.

4.1 Single repairman, R = 1:
λ_k = 2 for k = 0,1,2,3,4; μ_k = k for k = 0,1,2,3,4,5.
θ_0 = 1, θ_1 = 2, θ_2 = 2, θ_3 = 4/3, θ_4 = 2/3, θ_5 = 4/15;
∑_{k=0}^5 θ_k = 218/30 = 109/15.

Two repairmen, R = 2:
λ_0 = λ_1 = λ_2 = λ_3 = 4, λ_4 = 2; μ_k = k.
θ_0 = 1, θ_1 = 4, θ_2 = 8, θ_3 = 32/3, θ_4 = 32/3, θ_5 = 64/15;
∑_{k=0}^5 θ_k = 579/15 = 193/5.

         π_0      π_1      π_2       π_3       π_4       π_5
R = 1:   15/109   30/109   30/109    20/109    10/109    4/109
R = 2:   15/579   60/579   120/579   160/579   160/579   64/579

         (a) ∑ kπ_k    (b) (1/N) ∑ kπ_k    (c)
R = 1:   1.93          .39                 π_5 = .0367
R = 2:   3.01          .60                 2π_5 + π_4 = .4974
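The tabulated θ's and π's can be reproduced mechanically from the birth and death rates. A small sketch using exact rational arithmetic (the helper `stationary` is ours, not from the text):

```python
from fractions import Fraction as F

def stationary(lams, mus):
    """theta_k = (lam_0...lam_{k-1})/(mu_1...mu_k); pi_k = theta_k / sum(theta)."""
    thetas = [F(1)]
    for k in range(1, len(mus)):
        thetas.append(thetas[-1] * lams[k - 1] / mus[k])
    s = sum(thetas)
    return [t / s for t in thetas]

# R = 1: lam_k = 2 for k = 0..4, mu_k = k.
pi1 = stationary([2, 2, 2, 2, 2], [0, 1, 2, 3, 4, 5])
assert pi1[0] == F(15, 109) and pi1[5] == F(4, 109)

# R = 2: lam_0..lam_3 = 4, lam_4 = 2, mu_k = k.
pi2 = stationary([4, 4, 4, 4, 2], [0, 1, 2, 3, 4, 5])
assert pi2[0] == F(15, 579) and pi2[5] == F(64, 579)
```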


4.2 θ_0 = 1, θ_1 = λ/μ, θ_2 = (λ/μ)^2, ..., θ_k = (λ/μ)^k.
∑_{k=0}^∞ θ_k = ∑_{k=0}^∞ (λ/μ)^k = 1/(1 − λ/μ) if λ < μ (= ∞ if λ ≥ μ).
If λ < μ, then π_k = (1 − λ/μ)(λ/μ)^k for k ≥ 0. This is the geometric distribution. The model corresponds to the M/M/1 queue.
If λ ≥ μ, then π_k = 0 for all k.

4.3 Repairman model, M = N = 5, R = 1, μ = .2, λ = .5:

k:     0         1          2          3          4          5
λ_k:   .5        .5         .5         .5         .5         0
μ_k:   0         .2         .4         .6         .8         1.0
θ_k:   1         5/2        25/8       125/48     625/384    625/768
π_k:   768/8963  1920/8963  2400/8963  2000/8963  1250/8963  625/8963
   =   .086      .214       .268       .223       .139       .070

Fraction of time the repairman is idle = π_5 = .07.

4.4 There are C(4,2) = 6 possible links. (a) λ_k = α(6−k) and μ_k = βk for 0 ≤ k ≤ 6;
θ_0 = 1, θ_1 = 6(α/β), θ_2 = (6·5/(1·2))(α/β)^2, ..., θ_k = C(6,k)(α/β)^k.
∑_{k=0}^6 θ_k = (1 + α/β)^6 = ((α+β)/β)^6.
π_k = C(6,k)(α/β)^k (β/(α+β))^6 = C(6,k)(α/(α+β))^k (β/(α+β))^{6−k}, 0 ≤ k ≤ 6.
Binomial distribution. See the derivation of (4.8).

4.5 If X(t) = k, there are N − k unbonded A molecules and an equal number of unbonded B molecules: λ_k = α(N−k)^2, μ_k = βk.

4.6 This model can be formulated as a "repairman model":

k:     0    1     2              3
λ_k:   λ    λ     λ              0
μ_k:   0    μ     2μ             2μ
θ_k:   1    λ/μ   (1/2)(λ/μ)^2   (1/4)(λ/μ)^3

The computer is fully loaded in states k = 2 and 3:

π_2 + π_3 = [ (1/2)(λ/μ)^2 + (1/4)(λ/μ)^3 ] / [ 1 + λ/μ + (1/2)(λ/μ)^2 + (1/4)(λ/μ)^3 ].


4.7 The repairman model with N = 3, R = 2, M = 2; μ = 1/5 = .2, λ = 1/4 = .25:

k:     0        1        2        3
λ_k:   2/4      2/4      1/4      0
μ_k:   0        1/5      2/5      2/5
θ_k:   1        5/2      25/8     125/64
π_k:   64/549   160/549  200/549  125/549

∑θ_k = 1 + 5/2 + 25/8 + 125/64 = 549/64.

Long-run average number of machines operating = 0·π_0 + 1·π_1 + 2·π_2 + 2·π_3 = 810/549 ≈ 1.48; average output ≈ 148 items per hour.

4.8 θ_0 = 1, θ_1 = (1/2)(α/β), θ_2 = (1/3)(α/β)^2, ..., θ_k = (1/(k+1))(α/β)^k, k = 0, 1, ....

Because log(1−x) = −[x + x^2/2 + x^3/3 + ···] = −∑_{k=1}^∞ x^k/k for |x| < 1, we can evaluate

∑_{k=0}^∞ θ_k = (β/α) ∑_{k=0}^∞ (1/(k+1))(α/β)^{k+1} = (β/α) log(1/(1 − α/β)), 0 < α < β,

and

π_k = C (1/(k+1))(α/β)^{k+1}, C = 1/log(1/(1 − α/β)).

This is the "logarithmic distribution."

5.1 Change K into an absorbing state by setting λ_K = μ_K = 0. Then Pr{Absorption in 0} = Pr{Reach 0 before K}. Because ρ_i = 0 for i ≥ K, (5.7) becomes u_m = (∑_{i=m}^{K−1} ρ_i)/(1 + ∑_{i=1}^{K−1} ρ_i), as desired.

5.2 ρ_0 = 1, ρ_1 = 4, ρ_2 = 6, ρ_3 = 4, ρ_4 = 1, ρ_5 = 0.

u_2 = (ρ_2 + ρ_3 + ρ_4)/(1 + ρ_1 + ρ_2 + ρ_3 + ρ_4) = 11/16

(a) Alternatively, solve
u_1 = 4/5 + (1/5)u_2
u_2 = (3/5)u_1 + (2/5)u_3
u_3 = (2/5)u_2 + (3/5)u_4
u_4 = (1/5)u_3
giving u_1 = 15/16, u_2 = 11/16, u_3 = 5/16, u_4 = 1/16.


(b) w_1 = 1/5 + (1/5)w_2
w_2 = 1/5 + (3/5)w_1 + (2/5)w_3
w_3 = 1/5 + (2/5)w_2 + (3/5)w_4
w_4 = 1/5 + (1/5)w_3
giving w_1 = w_4 = 1/3, w_2 = w_3 = 2/3.
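Both stated solution vectors can be confirmed by substituting back into the linear systems with exact rational arithmetic:

```python
from fractions import Fraction as F

# Stated solutions of the two systems from 5.2 (a) and (b).
u = {1: F(15, 16), 2: F(11, 16), 3: F(5, 16), 4: F(1, 16)}
w = {1: F(1, 3), 2: F(2, 3), 3: F(2, 3), 4: F(1, 3)}

# (a) hitting-probability equations.
assert u[1] == F(4, 5) + F(1, 5) * u[2]
assert u[2] == F(3, 5) * u[1] + F(2, 5) * u[3]
assert u[3] == F(2, 5) * u[2] + F(3, 5) * u[4]
assert u[4] == F(1, 5) * u[3]

# (b) equations for the w's.
assert w[1] == F(1, 5) + F(1, 5) * w[2]
assert w[2] == F(1, 5) + F(3, 5) * w[1] + F(2, 5) * w[3]
assert w[3] == F(1, 5) + F(2, 5) * w[2] + F(3, 5) * w[4]
assert w[4] == F(1, 5) + F(1, 5) * w[3]
```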

6.1 For i ≠ j, Pr{X(t+h) = j | X(t) = i} = λP_{ij}h + o(h) for h ≈ 0. The independent increments of the Poisson process are needed to establish the Markov property.

6.2 Fraction of time the system operates = (α_A/(α_A+β_A)) [1 − (β_B/(α_B+β_B))(β_C/(α_C+β_C))].

6.3 Pr{Z(t+ h) = k+ 1|Z(t)= k} = (N− k)λh+ o(h),

Pr{Z(t+ h) = k− 1|Z(t)= k} = kµh+ o(h).

6.4 The infinitesimal matrix, with rows and columns indexed by the states (2,0), (1,0), (1,1), (0,0), (0,1), (0,2), has nonzero entries (each row sums to zero):

row (2,0):  −2μ (diagonal), 2μp, 2μq
row (1,0):  α, −(α+μ) (diagonal), μp, μq
row (1,1):  β, −(β+μ) (diagonal), μp, μq
row (0,0):  α, −α (diagonal)
row (0,1):  β, α, −(α+β) (diagonal)
row (0,2):  β, −β (diagonal)

7.1 (a) With τ = α + β, (1−π)P01(t) + πP11(t) = (1−π)π(1 − e^{-τt}) + π[π + (1−π)e^{-τt}] = π − π^2 + π^2 = π.

(b) Let N(dt) = 1 if there is an event in (t, t+dt], and zero otherwise. Then N((0,t]) = ∫_0^t N(ds) and E[N((0,t])] = E[∫_0^t N(ds)] = ∫_0^t E[N(ds)] = ∫_0^t πλ ds = πλt.

7.2 γ (t) > x if and only if N((t, t+ x])= 0. (Draw picture)

7.3 T > t if and only if N((0,t]) = 0. Pr{T > t} = Pr{N((0,t]) = 0} = f(t;λ); hence φ(t) = −(d/dt) f(t;λ) = c_+ μ_+ e^{-μ_+ t} + c_− μ_− e^{-μ_− t}. When α = β = 1 and λ = 2, then

μ_± = 2 ± √2, c_± = (1/4)(2 ∓ √2),

and φ(t) = e^{-2t} (e^{-√2 t} + e^{√2 t})/2 = e^{-2t} cosh √2 t.

7.4 E[T] = ∫_0^∞ Pr{T > t} dt = ∫_0^∞ f(t;λ) dt = c_+/μ_+ + c_−/μ_−. When α = β = 1 and λ = 2, then

E[T] = (1/4){ (2−√2)/(2+√2) + (2+√2)/(2−√2) } = 12/8 = 3/2.

Since events occur, on average, at a rate of πλ per unit time, the mean duration between events must be 1/(πλ) (= 1 when α = β = 1 and λ = 2). The discrepancy between the mean time to the first event and the mean time between events is another example of length-biased sampling.

7.5 Pr{N((t, t+s]) = 0 | N((0,t]) = 0} = f(t+s;λ)/f(t;λ)
= [c_+ e^{-μ_+(t+s)} + c_− e^{-μ_−(t+s)}]/[c_+ e^{-μ_+ t} + c_− e^{-μ_− t}]
= [c_+ e^{-(μ_+−μ_−)t}/(c_+ e^{-(μ_+−μ_−)t} + c_−)] e^{-μ_+ s} + [c_−/(c_+ e^{-(μ_+−μ_−)t} + c_−)] e^{-μ_− s}
→ e^{-μ_− s} as t → ∞.


7.6 ∫_0^∞ e^{-st} c e^{-μt} dt = c/(μ+s); hence

φ(s;λ) = c_+/(μ_+ + s) + c_−/(μ_− + s) = (c_+ μ_− + c_− μ_+ + s)/((μ_+ + s)(μ_− + s))

μ_± = (1/2){ (λ+τ) ± √((λ+τ)^2 − 4πτλ) }, c_+ μ_− + c_− μ_+ = τ + (1−π)λ.
(μ_+ + s)(μ_− + s) = s^2 + (τ+λ)s + πτλ.

(a) lim_{τ→∞} φ(s;λ) = 1/(πλ + s) = ∫_0^∞ e^{-st} e^{-πλt} dt

(b) lim_{τ→0} φ(s;λ) = (s + (1−π)λ)/(s^2 + λs) = π (1/(λ+s)) + (1−π)(1/s).

7.7 When α = β = 1 and λ = 2(1−θ), then

μ_± = (2−θ) ± R, c_± = 1/2 ∓ 1/(2R), where R = √(1 + (1−θ)^2),

and

g(t;θ) = e^{-(2−θ)t}{ (1/2 − 1/(2R))e^{-Rt} + (1/2 + 1/(2R))e^{Rt} } = e^{-(2−θ)t}{ cosh Rt + (1/R) sinh Rt }.

dg/dθ = t e^{-(2−θ)t}{ cosh Rt + (1/R) sinh Rt } + e^{-(2−θ)t}{ t sinh Rt + (t/R) cosh Rt − (1/R^2) sinh Rt } dR/dθ

R|_{θ=0} = √2, dR/dθ|_{θ=0} = −1/√2 = −√2/2.

dg/dθ|_{θ=0} = t e^{-2t}{ cosh √2t + (1/√2) sinh √2t } − (√2/2) e^{-2t}{ t sinh √2t + (t/√2) cosh √2t − (1/2) sinh √2t }

= (1/2) e^{-2t}{ t cosh √2t + (√2/2) sinh √2t } = Pr{N((0,t]) = 1}.

Students with symbol-manipulating computer skills may test them on Pr{N((0,t]) = 2} = (1/2) d^2 g/dθ^2 |_{θ=0}.
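Lacking a symbolic package, one can still test the identity dg/dθ|θ=0 = Pr{N((0,t]) = 1} with a central difference in θ; t = 1.3 below is an arbitrary test value.

```python
import math

def g(t, th):
    # Probability generating function g(t; theta) for alpha = beta = 1, lambda = 2(1 - theta).
    R = math.sqrt(1.0 + (1.0 - th) ** 2)
    return math.exp(-(2.0 - th) * t) * (math.cosh(R * t) + math.sinh(R * t) / R)

def p1(t):
    # Claimed closed form for Pr{N((0,t]) = 1}.
    r2 = math.sqrt(2.0)
    return 0.5 * math.exp(-2.0 * t) * (t * math.cosh(r2 * t) + (r2 / 2.0) * math.sinh(r2 * t))

t, h = 1.3, 1e-5
dg = (g(t, h) - g(t, -h)) / (2 * h)   # central difference in theta at theta = 0
assert abs(dg - p1(t)) < 1e-6
```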

7.8 Pr{λ(t) = 0 | N((0,t]) = 0} = Pr{λ(t) = 0 and N((0,t]) = 0}/Pr{N((0,t]) = 0} = f_0(t)/f(t) → a_−/c_− as t → ∞.

7.9 The initial conditions are easy to verify: f_0(0) = a_+ + a_− = 1−π, f_1(0) = b_+ + b_− = π. We wish to check that

f_0′(t) = −αf_0(t) + βf_1(t)
f_1′(t) = αf_0(t) − (β+λ)f_1(t)

Now

f_0′(t) = −a_+ μ_+ e^{-μ_+ t} − a_− μ_− e^{-μ_− t}

while

−αf_0(t) = −αa_+ e^{-μ_+ t} − αa_− e^{-μ_− t}
βf_1(t) = βb_+ e^{-μ_+ t} + βb_− e^{-μ_− t}


Upon equating coefficients, we want to check that −a_+ μ_+ ?= −αa_+ + βb_+ and −a_− μ_− ?= −αa_− + βb_−. Starting with the first: is 0 ?= a_+(μ_+ − α) + βb_+?

a_+(μ_+ − α) = (1/4)(1−π)[(λ−α+β) + R − (λ−α+β)(λ+α+β)/R − (α+β+λ)]
= (1/4)(1−π)[−2α + R − (λ−α+β)(λ+α+β)/R]

βb_+ = (1/2)πβ[1 + (λ−α−β)/R], π = α/(α+β).

Multiplying through by 4R(α+β):

4R(α+β)[a_+(μ_+−α) + βb_+] = βR^2 − β(λ−α+β)(λ+α+β) + 2αβ(λ−α−β)
= β[(α+β+λ)^2 − 4αλ − (λ−α+β)(λ+α+β) + 2α(λ−α−β)]
= β[(α+β+λ){(α+β+λ) − (λ−α+β)} − 4αλ + 2α(λ−α−β)]
= β[2α(α+β+λ) − 4αλ + 2α(λ−α−β)]
= 2αβ[(α+β+λ) − 2λ + (λ−α−β)] = 0

To check: 0 ?= a_−(μ_− − α) + βb_−

a_−(μ_− − α) = (1/4)(1−π)[1 + (α+β+λ)/R][(λ−α+β) − R]
= (1/4)(β/(α+β))[(λ−α+β) − R + (α+β+λ)(λ−α+β)/R − (α+β+λ)]
= (1/4)(β/(α+β))[−2α − R + (α+β+λ)(λ−α+β)/R]

βb_− = (1/2)(αβ/(α+β))[1 − (λ−α−β)/R] = (1/4)(β/(α+β))[2α − 2α(λ−α−β)/R]

a_−(μ_−−α) + βb_− = (1/4)(β/(α+β))[−R + (α+β+λ)(λ−α+β)/R − 2α(λ−α−β)/R]
= (1/(4R))(β/(α+β))[−(α+β+λ)^2 + 4αλ + (α+β+λ)(λ−α+β) − 2α(λ−α−β)]
= (1/(4R))(β/(α+β))[4αλ − 2α(α+β+λ) − 2α(λ−α−β)] = 0

Verifying the second differential equation reduces to checking whether 0 ?= αa_+ + b_+(μ_+ − β − λ) and 0 ?= αa_− + b_−(μ_− − β − λ). The algebra is similar to that used for the first equation.

7.10 (a) Pr{N((0,h]) > 0, N((h,h+t]) = 0} = Pr{N((h,h+t]) = 0} − Pr{N((0,h]) = 0, N((h,h+t]) = 0} = f(t;λ) − f(t+h;λ).

(b) lim_{h↓0} Pr{N((h,h+t]) = 0 | N((0,h]) > 0} = lim_{h↓0} { [f(t;λ) − f(t+h;λ)]/h } / { [1 − f(h;λ)]/h } = [−(d/dt)f(t;λ)] / [−(d/dt)f(t;λ)|_{t=0}] = f′(t;λ)/f′(0;λ).


(c) f′(t;λ)/f′(0;λ) = [−c_+ μ_+ e^{-μ_+ t} − c_− μ_− e^{-μ_− t}]/[−c_+ μ_+ − c_− μ_−] = p_+ e^{-μ_+ t} + p_− e^{-μ_− t}, where p_± = c_± μ_±/(c_+ μ_+ + c_− μ_−).

(d) Pr{τ > t | Event at time 0} = p_+ e^{-μ_+ t} + p_− e^{-μ_− t}, whence E[τ | Event at time 0] = p_+/μ_+ + p_−/μ_−.

When α = β = 1 and λ = 2, we have μ_± = 2 ± √2, c_± = (1/4)(2 ∓ √2), p_± = 1/2, and E[τ | Event at time 0] = 1 = long-run mean time between events.

7.11 V(u) plays the role of λ(u), and S(t) that of Λ(t).

7.12 This model arose in an attempt to describe a scientist observing butterflies. When the butterfly is at rest, it can be observed perfectly. When it is flying, it may fly out of range (the "event"). The observation time is random, and the model may guide us in summarizing the data. The stated properties should be clear from the picture.


Chapter 7

1.1 Pr{N(t−x) = N(t+y)} = ∑_{k=0}^∞ Pr{N(t−x) = k = N(t+y)}
= ∑_{k=0}^∞ Pr{W_k ≤ t−x, W_{k+1} > t+y}
= [1 − F(t+y)] + ∑_{k=1}^∞ Pr{W_k ≤ t−x, W_k + X_{k+1} > t+y}
= [1 − F(t+y)] + ∑_{k=1}^∞ ∫_0^{t−x} [1 − F(t+y−z)] dF_k(z).

In the exponential case:

= e^{-λ(t+y)} + ∑_{k=1}^∞ ∫_0^{t−x} e^{-λ(t+y−z)} (λ^k z^{k−1}/(k−1)!) e^{-λz} dz
= e^{-λ(t+y)} + ∫_0^{t−x} e^{-λ(t+y−z)} λ e^{λz} e^{-λz} dz
= e^{-λ(t+y)} + e^{-λ(t+y)} [e^{λ(t−x)} − 1] = e^{-λ(x+y)}.

1.2 Pr{N(t) = k} = Pr{W_k ≤ t, W_k + X_{k+1} > t} = ∫_0^t [1 − F(t−x)] dF_k(x),

and in the exponential case

= λ^k e^{-λt} ∫_0^t x^{k−1}/(k−1)! dx = (λt)^k e^{-λt}/k!, k = 0, 1, ....

1.3 E[γ_t] = E[W_{N(t)+1} − t] = E[X_1][M(t) + 1] − t.

1.4 Pr{γ_t > y | δ_t = x} = [1 − F(x+y)]/[1 − F(x)]

E[γ_t | δ_t = x] = ∫_0^∞ [1 − F(x+y)]/[1 − F(x)] dy.

An Introduction to Stochastic Modeling, Instructor Solutions Manual © 2011 Elsevier Inc. All rights reserved.


2.1 Block replacement, period cost C(K) = [4 + 5M(K−1)]/K:

K    C(K)
1    4.00
2    2.25
3    2.183
4    2.1138
5    2.1231

Replace on failure: 1.923, which is best.

2.2 We are asked to verify that M(n), as given, satisfies

M(n) = 1 + (1−q)M(n−1) + qM(n−2), n ≥ 2,

where M(n) = n/(1+q) − q^2/(1+q)^2 + (−q)^{n+2}/(1+q)^2.

1 = 1
(1−q)M(n−1) = (1−q)(n−1)/(1+q) − (1−q)q^2/(1+q)^2 + (1−q)(−q)^{n+1}/(1+q)^2
qM(n−2) = q(n−2)/(1+q) − q^3/(1+q)^2 + q(−q)^n/(1+q)^2

M(n) ?= 1 + [(1−q)(n−1) + q(n−2)]/(1+q) − [(1−q) + q]q^2/(1+q)^2 + [(1−q)(−q)^{n+1} + q(−q)^n]/(1+q)^2
= 1 + (n−1−q)/(1+q) − q^2/(1+q)^2 + (−q)^n[(1−q)(−q) + q]/(1+q)^2
= n/(1+q) − q^2/(1+q)^2 + (−q)^{n+2}/(1+q)^2 = M(n)
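A quick numeric check of the closed form M(n) = n/(1+q) − q²/(1+q)² + (−q)^{n+2}/(1+q)² against the recursion (q = 0.37 is an arbitrary test value; the base cases M(0) = 0 and M(1) = 1 − q are also confirmed):

```python
def M_closed(n, q):
    return n / (1 + q) - q**2 / (1 + q) ** 2 + (-q) ** (n + 2) / (1 + q) ** 2

q = 0.37
# Base cases: M(0) = 0 and M(1) = p_1 = 1 - q.
assert abs(M_closed(0, q)) < 1e-12
assert abs(M_closed(1, q) - (1 - q)) < 1e-12
# Recursion M(n) = 1 + (1-q) M(n-1) + q M(n-2), n >= 2.
for n in range(2, 30):
    rec = 1 + (1 - q) * M_closed(n - 1, q) + q * M_closed(n - 2, q)
    assert abs(M_closed(n, q) - rec) < 1e-9
```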

2.3 M(1) = p_1 = β
M(2) = p_1 + p_2 + p_1 M(1) = β + β(1−β) + β^2 = 2β
M(3) = p_1 + p_2 + p_3 + p_1 M(2) + p_2 M(1)
= β + β(1−β) + β(1−β)^2 + β(2β) + β(1−β)β
= β + β − β^2 + β − 2β^2 + β^3 + 2β^2 + β^2 − β^3 = 3β

In general, M(n) = nβ.
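The pattern M(n) = nβ can be confirmed from the discrete renewal equation M(n) = Σ_{k≤n} p_k + Σ_{k<n} p_k M(n−k), with geometric p_k = β(1−β)^{k−1}; β = 0.3 below is an arbitrary test value.

```python
beta = 0.3
# Geometric lifetime distribution p_k = beta (1 - beta)^{k-1}, k = 1, 2, ...
p = [0.0] + [beta * (1 - beta) ** (k - 1) for k in range(1, 41)]

M = [0.0] * 41
for n in range(1, 41):
    # Discrete renewal equation for the renewal function.
    M[n] = sum(p[k] for k in range(1, n + 1)) + sum(p[k] * M[n - k] for k in range(1, n))
    assert abs(M[n] - n * beta) < 1e-9
```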

3.1 E[1/(N(t)+1)] = ∑_{k=0}^∞ (1/(k+1)) (λt)^k e^{-λt}/k! = (1/(λt)) e^{-λt} ∑_{j=1}^∞ (λt)^j/j! = (1/(λt)) e^{-λt}(e^{λt} − 1) = (1/(λt))(1 − e^{-λt}).

Using the independence established in Exercise 3.3,

E[W_{N(t)+1}/(N(t)+1)] = E[W_{N(t)+1}] E[1/(N(t)+1)] = (t + 1/λ)(1/(λt))(1 − e^{-λt}) = (1/λ)(1 + 1/(λt))(1 − e^{-λt}).

3.2 Because W_{N(t)+1} = t + γ_t and E[γ_t] = 1/λ, E[W_{N(t)+1}] = t + 1/λ. On the other hand, E[X_1] = 1/λ and M(t) = λt, whence

E[X_1]{M(t) + 1} = (1/λ)(λt + 1) = t + 1/λ = E[W_{N(t)+1}].

3.3 For t > τ, p(t) = Pr{No arrivals in (t−τ, t]} = e^{-λτ}. For t ≤ τ, p(t) = Pr{No arrivals in (0, t]} = e^{-λt}. Thus p(t) = e^{-λ min(τ,t)}.


3.4 Let l(x,y) = x if 0 ≤ y ≤ x ≤ 1, and 1−x if 0 ≤ x < y ≤ 1.

E[L] = E[l(X,Y)] = ∫_0^1 ∫_0^1 l(x,y) dy dx
= ∫_0^1 x ∫_0^x dy dx + ∫_0^1 (1−x) ∫_x^1 dy dx = ∫_0^1 x^2 dx + ∫_0^1 (1−x)^2 dx = 2/3.

3.5 Pr{D(t) > x} = Pr{No birds in (t−x, t+x]} = e^{-2λx} for 0 < x < t, and e^{-λ(x+t)} for 0 < t ≤ x.

E[D(t)] = ∫_0^∞ Pr{D(t) > x} dx = ∫_0^t e^{-2λx} dx + ∫_t^∞ e^{-λ(x+t)} dx = (1/(2λ))(1 + e^{-2λt}).

f_{D(t)}(x) = −(d/dx) Pr{D(t) > x} = 2λe^{-2λx} for 0 < x < t, and λe^{-λ(x+t)} for 0 < t ≤ x.

4.1 (1/t)M(t) → 1/μ implies μ = 1.
M(t) − t/μ → (σ^2 − μ^2)/(2μ^2) = 1 implies σ^2 = 3.

4.2 μ = 1/α + 1/β; 1/μ = αβ/(α+β); σ^2 = 1/α^2 + 1/β^2.

M(t) ≈ t/μ + (σ^2 − μ^2)/(2μ^2) = αβt/(α+β) − αβ/(α+β)^2.

4.3 1 − F(y) = exp{ −∫_0^y θx dx } = e^{-θy^2/2}, y ≥ 0.

μ = ∫_0^∞ [1 − F(y)] dy = ∫_0^∞ e^{-θy^2/2} dy = √(π/(2θ)).

σ^2 + μ^2 = ∫_0^∞ y^2 dF(y) = 2/θ.

Limiting mean age = (σ^2 + μ^2)/(2μ) = √(2/(πθ)).

4.4 List the children, family after family, to form a renewal process.

Children: MF | MMMF | F | F | MF
Family:   #1   #2     #3  #4  #5

μ = E[#children in a family] = 2. One female per family, so the long-run fraction female = 1/μ = 1/2.

4.5 m_0 = 1 + .3m_0 ⇒ m_0 = 1/.7 = 10/7; m_2 = 1 + .5m_2 ⇒ m_2 = 2.

Successive visits to state 1 form renewal instants for which μ = 1 + .6m_0 + .4m_2 = 93/35. And π_1 = 35/93 (π_0 = 30/93, π_2 = 28/93).


5.1 (a) Pr{A down} = β_A/(α_A+β_A); Pr{B down} = β_B/(α_B+β_B).
Pr{System down} = (β_A/(α_A+β_A))(β_B/(α_B+β_B)).

(b) The system leaves the failed state upon the first component repair: E[Sojourn system down] = 1/(α_A+α_B).

(c) Pr{System down} = E[Sojourn system down]/E[Cycle], so
E[Cycle] = [1/(α_A+α_B)] / [(β_A/(α_A+β_A))(β_B/(α_B+β_B))].

(d) E[System sojourn up] = E[Cycle] − E[Sojourn down] = (1/(α_A+α_B)){ (α_A+β_A)(α_B+β_B)/(β_A β_B) − 1 }.

5.2 Pr{X > y | X > x} = [1 − F(y)]/[1 − F(x)], 0 < x < y.

E[X | X > x] = ∫_0^∞ Pr{X > y | X > x} dy = x + ∫_x^∞ [1 − F(y)]/[1 − F(x)] dy.

5.3 If successive fees are Y_1, Y_2, ..., then W(t) = ∑_{k=1}^{N(t)+1} Y_k, where N(t) is the number of customers arriving in (0, t].

lim_{t→∞} E[W(t)]/t = E[Y]/E[X] = ∫_0^∞ [1 − G(y)] dy / ∫_0^∞ [1 − F(x)] dx

5.4 The duration of the cycle in the obvious renewal process is the maximum of the two lives; the cycle duration when both are lit is the minimum. Let the bulb lives be ξ_1, ξ_2. The fraction of time half lit

= 1 − E[min{ξ_1, ξ_2}]/E[max{ξ_1, ξ_2}]

(a) E[min{ξ_1, ξ_2}] = 1/(2λ), E[max] = 1/(2λ) + 1/λ; Pr[Half lit] = 2/3.
(b) E[min] = 1/3, E[max] = 2/3; Pr[Half lit] = 1/2.

6.1 (a) u_0 = 1
u_1 = p_1 u_0 = α
u_2 = p_2 u_0 + p_1 u_1 = α(1−α) + α^2 = α
u_3 = p_3 u_0 + p_2 u_1 + p_1 u_2 = α(1−α)^2 + α[α + α(1−α)] = α

We guess u_n = α for all n ≥ 1, so

α ?= α(1−α)^{n−1} + α(p_1 + p_2 + ··· + p_{n−1})
= α(1−α)^{n−1} + α^2 (1 + (1−α) + ··· + (1−α)^{n−2})
= α(1−α)^{n−1} + α(1 − (1−α)^{n−1}) = α

(b) For the excess life, b_n = p_{n+m} = α(1−α)^{n+m−1} = p_m (1−α)^n

v_n = ∑_{k=0}^n b_{n−k} u_k = p_m (1−α)^n + α ∑_{k=1}^n (1−α)^{n−k} p_m = p_m [(1−α)^n + 1 − (1−α)^n] = p_m.


6.2 We want Pr{γ_n = 3} for n = 10, with p_1 = p_2 = ··· = p_6 = 1/6.

f_0 = 1/6 = .16666
f_1 = 1/6 + (1/6)f_0 = .19444
f_2 = 1/6 + (1/6)(f_0 + f_1) = .22685
f_3 = 1/6 + (1/6)(f_0 + f_1 + f_2) = .26466
f_4 = (1/6)(f_0 + f_1 + f_2 + f_3) = .14210
f_5 = (1/6)(f_0 + ··· + f_4) = .16578
f_6 = (1/6)(f_0 + ··· + f_5) = .19342
f_7 = (1/6)(f_1 + ··· + f_6) = .197878
f_8 = (1/6)(f_2 + ··· + f_7) = .198450
f_9 = (1/6)(f_3 + ··· + f_8) = .1937166
f_10 = (1/6)(f_4 + ··· + f_9) = .181892637

lim_{n→∞} Pr{γ_n = 3} = Pr{X_1 ≥ 3}/E[X_1] = (2/3)/(7/2) = 4/21 = .190476

(Compare with Problem 5.4 in III. The renewal theorem gives us the limit.)
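The recursion for the f_n and the renewal-theorem limit are easy to reproduce; the code below mirrors the table (an added 1/6 for n ≤ 3, a trailing six-term average thereafter):

```python
# f_n = Pr{gamma_n = 3} for a fair die, computed by the recursion from the solution.
f = []
for n in range(11):
    tail = sum(f[max(0, n - 6):n]) / 6.0
    f.append(tail + (1.0 / 6.0 if n <= 3 else 0.0))

assert abs(f[10] - 0.181893) < 1e-4
# Renewal-theorem limit: Pr{X1 >= 3} / E[X1] = (2/3)/(7/2) = 4/21.
assert abs((2.0 / 3.0) / (7.0 / 2.0) - 4.0 / 21.0) < 1e-12
```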

6.3 That delaying the age of first birth, even at constant family size, would lower population growth rates was an important conclusion in early work with this model. ∑_{v=0}^∞ m_v s^v = 2s^2 + 2s^3 = 1 solves to give s = .5652, whence λ = 1/s ≈ 1.77, compared to λ = 2.732 in the example.

6.4 (a) ∑_{v=0}^∞ m_v s^v = a ∑_{v=2}^∞ s^v = as^2/(1−s), |s| < 1.

∑ m_v s^v = 1 solves to give s = (−1 + √(1+4a))/(2a), λ_1 = 2a/(√(1+4a) − 1).

(b) ∑ m_v s^v = as^2 = 1 solves to give s = √(1/a) and λ_2 = √a. Compare λ_1 = 4/(√9 − 1) = 2 when a = 2 versus λ_2 = 2 when a = 4. That is, 4 offspring at age 2 is equivalent to 2 offspring repeatedly.


Chapter 8

1.1 Let

T_n = min{ t ≥ 0; B_n(t) ≤ −a or B_n(t) ≥ b }
= min{ t ≥ 0; S_{[nt]} ≤ −a√n or S_{[nt]} ≥ b√n }
= (1/n) min{ k ≥ 0; S_k ≤ −a√n or S_k ≥ b√n }

E[T_n] = (1/n)(a√n)(b√n)   (III, Section 5.3)
→ ab as n → ∞.

1.2 Begin with

1 = ∫_{-∞}^{+∞} (1/√(2π)) e^{-(1/2)(x − λ√t)^2} dx = e^{-(1/2)λ^2 t} ∫_{-∞}^{+∞} e^{λ√t x} (1/√(2π)) e^{-(1/2)x^2} dx

= e^{-(1/2)λ^2 t} ∫_{-∞}^{+∞} e^{λy} (1/√(2πt)) e^{-(1/2)y^2/t} dy = e^{-(1/2)λ^2 t} E[e^{λB(t)}],

so E[e^{λB(t)}] = e^{(1/2)λ^2 t}.

1.3 Pr{ |B(t)|/t > ε } = Pr{ |B(t)| > εt } = 2{1 − Φ_t(εt)} = 2{1 − Φ(ε√t)}   (1.8)

Pr{ |B(t)|/t > ε } → 0 as t → ∞ because Φ(ε√t) → 1.
Pr{ |B(t)|/t > ε } → 1 as t → 0 because Φ(ε√t) → 1/2.

1.4 Every linear combination of jointly normal random variables is normally distributed.

E[ ∑_{i=1}^n α_i B(t_i) ] = ∑ α_i E[B(t_i)] = 0

Var[ ∑_{i=1}^n α_i B(t_i) ] = E[ { ∑ α_i B(t_i) }^2 ] = E[ { ∑_i α_i B(t_i) }{ ∑_j α_j B(t_j) } ]
= ∑_{i=1}^n ∑_{j=1}^n α_i α_j E[B(t_i)B(t_j)] = ∑_{i=1}^n ∑_{j=1}^n α_i α_j min{t_i, t_j}.

1.5 (a) Pr{M_τ = 0} = Pr{S_n drops to −a < 0 before rising 1 unit} = 1/(1+a)   (using III, 5.3).

(b) In order that M_τ ≥ 2, we must first have M_τ ≥ 1, followed by moving ahead to 2, starting afresh from S′ = 1, before dropping a units below 1. Spatial homogeneity gives this second move the same probability as Pr{M_τ ≥ 1}. Thus Pr{M_τ ≥ 2} = [Pr{M_τ ≥ 1}]^2 = (a/(1+a))^2. The argument repeats to give Pr{M_τ ≥ k} = (a/(1+a))^k.

(c) Divide the state space into increments of length 1/n and observe the Brownian motion as it crosses the lines k/n. That is, τ_1 = min{ t ≥ 0; B(t) = 1/n or B(t) = −1/n }, and if, for example, B(τ_1) = 1/n, then let τ_2 = min{ t ≥ τ_1; B(t) = 2/n or B(t) = 0 }, etc. B(τ_1), B(τ_2), ... has the same distribution as (1/n)S_1, (1/n)S_2, .... We apply part (b) to this approximating process to see

Pr{M_n(τ) > x} = (na/(1+na))^{nx} → e^{-x/a}.

As n → ∞, the partially observed process gets closer to the Brownian motion. We conclude that M(τ) is exponentially distributed with mean a (rate parameter 1/a).
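The convergence (na/(1+na))^{nx} → e^{−x/a} can be observed directly; a = 2 and x = 1.5 below are arbitrary test values chosen so that nx is an integer for each n.

```python
import math

def p_exceed(n, a, x):
    """Pr{M_n(tau) > x} for the 1/n-grid approximation: (n a / (1 + n a))^{n x}."""
    k = int(round(n * x))
    return (n * a / (1.0 + n * a)) ** k

a, x = 2.0, 1.5
limit = math.exp(-x / a)
errs = [abs(p_exceed(n, a, x) - limit) for n in (10, 100, 1000, 10000)]
assert errs == sorted(errs, reverse=True)   # error shrinks monotonically with n
assert errs[-1] < 1e-4
```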

1.6 If S_n(e) is the compressive force on n balloons at a strain of e, we have E[S_n(e)] = nKe[1 − q(e)][1 − F(e)]. Because log E[S_n(e)] = log nK + log e − log(1/([1 − q(e)][1 − F(e)])), a plot of log E[S_n(e)] versus log e would have an intercept log nK reflecting material volume and modulus. Any departures from linearity would reflect the failure strain distribution plus departures from Hooke's law.

1.7 (a) E[B(n+1) | B(0), ..., B(n)] = E[B(n+1) − B(n) | B(0), ..., B(n)] + B(n) = B(n).

(b) E[B(n+1)^2 − (n+1) | B(n)^2 − n] = E[{B(n+1)^2 − B(n)^2 − 1} + B(n)^2 − n | B(n)^2 − n]
= B(n)^2 − n + E[B(n+1)^2 − B(n)^2] − 1 = B(n)^2 − n.

1.8 The suggested approximations probably do well in mimicking Brownian motion in some tests, and poorly in others. Because each B_N(t) is differentiable, ΔB_N(t) will be on the order of Δt, whereas ΔB(t) is on the order of √Δt.

2.1 Pr{B(u) ≠ 0, t < u < t+b | B(u) ≠ 0, t < u < t+a} = Pr{B(u) ≠ 0, t < u < t+b} / Pr{B(u) ≠ 0, t < u < t+a}
= (1 − ξ(t, t+b))/(1 − ξ(t, t+a)) = arcsin √(t/(t+b)) / arcsin √(t/(t+a)), 0 < a < b.

2.2 lim_{t→0} [ arcsin √(t/(t+b)) / arcsin √(t/(t+a)) ] = √(a/b), 0 < a < b.

2.3 Pr{M(t) > a} = 2{1 − Φ_t(a)} = Pr{|B(t)| > a}. But the joint distributions clearly differ (for 0 < s < t, it must be that M(t) ≥ M(s)).

2.4 The appropriate picture shows a path that reaches level z and its reflection; the two paths are equally likely.

(∂/∂x){ 1 − Φ((2z−x)/√t) } = (1/√t) φ((2z−x)/√t)

−(∂/∂z)(∂/∂x){ 1 − Φ((2z−x)/√t) } = −(1/√t)(∂/∂z) φ((2z−x)/√t) = (2/t)((2z−x)/√t) φ((2z−x)/√t)

2.5 The Jacobian is one, whence

f_{M(t),Y(t)}(z,y) = f_{M(t),B(t)}(z, z−y) = (2/t)((z+y)/√t) φ((z+y)/√t)

2.6 f_{Y(t)}(y) = ∫_0^∞ f_{M(t),Y(t)}(z,y) dz = (2/√t) ∫_{y/√t}^∞ u φ(u) du = (2/√t) φ(y/√t) = f_{|B(t)|}(y).

3.1 Pr{R(t)/√t > z} = ∫∫_{√(x²+y²)>z} (1/(2π)) e^{-(x^2+y^2)/2} dx dy = ∫_z^∞ ∫_0^{2π} (1/(2π)) r e^{-r^2/2} dθ dr = −∫_z^∞ d{e^{-r^2/2}} = e^{-z^2/2}.

E[R(t)/√t] = ∫_0^∞ Pr{R(t)/√t > z} dz = ∫_0^∞ e^{-z^2/2} dz = √(π/2), and E[R(t)] = √(πt/2).


3.2 The specified conditional process has the same properties as bt + B°(t), where B°(t) is the Brownian bridge. Whence E[B(t) | B(1) = b] = bt and Var[B(t) | B(1) = b] = t(1−t) for 0 < t < 1.

3.3 We need only show that B(1) and B(u) − uB(1) are uncorrelated (why?):

E[B(1){B(u) − uB(1)}] = E[B(1)B(u)] − uE[B(1)^2] = u − u = 0.

(a) B(t) = {B(t) − tB(1)} + tB(1). The conditional distribution of B(t) − tB(1), given B(1), is the same as the unconditional distribution, by independence, whereas tB(1), given B(1) = 0, is zero.

(b) E[B°(s)B°(t)] = E[{B(s) − sB(1)}{B(t) − tB(1)}]
= E[B(s)B(t)] − sE[B(1)B(t)] − tE[B(s)B(1)] + stE[B(1)^2] = s − st − st + st = s(1−t) for 0 < s < t < 1.

3.4 E[W°(s)W°(t)] = E[ (1−s)B(s/(1−s)) (1−t)B(t/(1−t)) ] = (1−s)(1−t) min{ s/(1−s), t/(1−t) } = s(1−t), 0 < s < t < 1.

3.5 ∫_0^∞ y[φ_t(y−x) − φ_t(y+x)] dy = ∫_{-∞}^{+∞} y φ_t(y−x) dy = x.

3.6 M > z if and only if a standard Brownian motion reaches z before 0, starting from x.

3.7 The same calculation as in Problem 3.5 shows that A(t) is a martingale. For the (continuous-time) martingale A(t), the maximal inequality is an equality.

3.8 Since φ_t(y−x) satisfies the diffusion equation, so will φ_t(y−x) − φ_t(y+x) and φ_t(y−x) + φ_t(y+x).

3.9 (a) E[B°(F(s))B°(F(t))] = F(s)[1 − F(t)] for s < t.
(b) The approximation is F_N(t) ≈ F(t) + (1/√N) B°(F(t)).

4.1 Pr{ max_{t≥0} [B(t) − bt] > a } = e^{-2ab}.

4.2 Pr{ max_{t≥0} (b + B(t))/(1+t) > a } = Pr{ max_{t≥0} [B(t) − at] > a − b } = e^{-2a(a−b)}.

4.3 Pr{ max_{0≤u≤1} B°(u) > a } = Pr{ max_{t>0} [(1+t)B°(t/(1+t)) − a(1+t)] > 0 } = Pr{ max_{t>0} [B(t) − at] > a } = e^{-2a^2}.

4.4 If the drift of X(t) is μ_0, then the drift of X′(t) = X(t) − (1/2)(μ_0+μ_1)t is −δ/2, where δ = μ_1 − μ_0 > 0. If the drift of X(t) is μ_1, then the drift of X′(t) = X(t) − (1/2)(μ_0+μ_1)t is +δ/2. Set δ = μ_1 − μ_0 and apply the procedure in the text to X′(t).

4.5 Pr{ max X_A(t) > B } = (e^{-2μx/σ^2} − 1)/(e^{-2μB/σ^2} − 1).

4.6 Pr{Double your money} = 1/2.

4.7 Pr{Z(τ) > a | Z(0) = z} = Pr{log Z(τ) > log a | log Z(0) = log z}

= 1 − Φ( ( log(a/z) − (α − σ^2/2)τ ) / (σ√τ) ).

4.8 The Black-Scholes value with σ = .35 is $5.21.


4.9 The differential equation is (d^2/dx^2) w(x) = 2λw(x) for x > 0. The solution is w(x) = c_1 e^{√(2λ) x} + c_2 e^{-√(2λ) x}. The constants are evaluated via the boundary conditions w(0) = 1, lim_{x→∞} w(x) = 0, whence w(x) = e^{-√(2λ) x}. There are many other ways to evaluate w(x), including martingale methods.

4.10 The relevant computation is E[Z(t)e^{-rt} | Z(0) = z] = z e^{rt} e^{-rt} = z.

5.1 (a) The formula follows easily from iteration; it is a discrete analog of (5.22). (b) Sum ΔV_n = V_n − V_{n−1} = −βV_{n−1} + ξ_n to get the given formula; it is a discrete analog of (5.24).

5.2 E[S(t)V(t)] = E[ (∫_0^t V(s) ds) V(t) ] = ∫_0^t E[V(s)V(t)] ds = (σ^2/(2β)) ∫_0^t [e^{-β(t−s)} − e^{-β(t+s)}] ds

= (σ^2/(2β^2)) e^{-βt} ∫_0^t (βe^{βs} − βe^{-βs}) ds = (σ^2/(2β^2)) e^{-βt} [e^{βt} + e^{-βt} − 2]

= (σ/β)^2 e^{-βt} [cosh βt − 1]

5.3 This is merely the result of Exercise 4.6 applied to the position process.

5.4 ΔV = V_N(t + 1/N) − V_N(t) = (1/√N)[X_{[Nt]+1} − X_{[Nt]}] = (1/√N)ΔX, where ΔX = X_{[Nt]+1} − X_{[Nt]}.

Pr{ΔV = ±1/√N | V_N(t) = v} = Pr{ΔX = ±1 | X_{[Nt]} = v√N} = 1/2 ∓ v√N/(2N).

E[ΔV | V_N(t) = v] = (1/√N)(1/2 − v√N/(2N)) − (1/√N)(1/2 + v√N/(2N)) = −v(1/N) = −vΔt.

E[ΔV^2 | V_N(t) = v] = (1/√N)^2 = 1/N = Δt.


Chapter 9

1.1 Let X(t) be the number of trucks at the loader at time t. Then X(t) is a birth and death process for which λ_0 = λ_1 = λ and μ_1 = μ_2 = μ (λ_2 = μ_0 = 0). Then θ_0 = 1, θ_1 = n = λ/μ, and θ_2 = n^2. π_0 = long-run fraction of time no trucks are at the loader = 1/(1 + n + n^2). Fraction of time trucks are loading = 1 − π_0 = (n + n^2)/(1 + n + n^2). Since trucks load at a rate of μ per unit time, long-run loads per unit time = μ(1 − π_0) = μ(n + n^2)/(1 + n + n^2) = λ(1 + n)/(1 + n + n^2).

2.1 For the M/M/2 system, θ_0 = 1 and θ_k = 2n^k for k ≥ 1, with n = λ/(2μ).

∑θ_k = 2∑_{k=0}^∞ n^k − 1 = 2/(1−n) − 1 = (1+n)/(1−n).

π_0 = (1−n)/(1+n); π_k = (2(1−n)/(1+n)) n^k, k ≥ 1.

L_0 = 2n^3/(1−n^2), L = 2n/(1−n^2).

L_0, L → ∞ as n → 1.

2.2 W = (1/λ)L = (1/μ)(1/(1−n^2)) (see the solution to 2.1).

When λ = 2 and μ = 1.2, then n = λ/(2μ) = 1/1.2 and

W = (1/1.2) · 1/(1 − (1/1.2)^2) ≈ 2.73.

For a single-server queue receiving half the arrival stream (λ/2 = 1), W = 1/(μ − λ/2) = 1/.2 = 5. This helps explain why many banks have a single line feeding several tellers (M/M/s) rather than a separate line for each teller (s parallel M/M/1 systems).
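The comparison between one shared line and separate lines can be checked directly (the manual's intermediate quantity 1/(1−n²) is 3.27; dividing by μ gives the mean time in system ≈ 2.73):

```python
lam, mu = 2.0, 1.2

# Shared M/M/2 queue: n = lam/(2 mu), W = (1/mu) / (1 - n^2).
n = lam / (2 * mu)
W_shared = (1.0 / mu) / (1.0 - n * n)

# Two separate M/M/1 queues, each fed lam/2: W = 1/(mu - lam/2).
W_separate = 1.0 / (mu - lam / 2.0)

assert abs(W_shared - 2.727) < 1e-2
assert abs(W_separate - 5.0) < 1e-9
assert W_shared < W_separate   # the shared line wins
```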

2.3 For the M/M/2 system θ_0 = 1, θ_k = 2n^k for k ≥ 1, π_0 = (1−n)/(1+n), π_k = 2((1−n)/(1+n)) n^k, k ≥ 1.

L = ∑ kπ_k = 2((1−n)/(1+n)) ∑_{k=1}^∞ k n^k = 2n/(1−n^2).

W = (1/μ)π_0 + (1/μ)π_1 + (1/μ + 1/(2μ))π_2 + (1/μ + 2/(2μ))π_3 + ···
= (1/μ)(π_0 + π_1 + π_2 + ···) + (1/(2μ)) ∑_{k=2}^∞ (k−1)π_k
= 1/μ + (1/μ)((1−n)/(1+n)) ∑_{k=2}^∞ (k−1)n^k = (1/μ)(1/(1−n^2)),

and L = λW.


2.4 (a) λ_0 = λ_1 = λ_2 = λ, μ_1 = μ_2 = μ_3 = μ, μ_0 = λ_3 = 0.

(b) θ_0 = 1, θ_k = n^k, k = 1, 2, 3, n = λ/μ.
∑θ_k = 1 + n + n^2 + n^3 = (1 − n^4)/(1 − n). π_0 = (1 − n)/(1 − n^4).

(c) Fraction of customers lost = π_3 = ((1 − n)/(1 − n^4)) n^3.

2.5 Observe that μπ_k = λπ_{k−1}. Following the hint, we get for j ≥ 1

π_0 P′_{0j}(t) = λπ_0 P_{1j}(t) − λπ_0 P_{0j}(t) = μπ_1 P_{1j}(t) − λπ_0 P_{0j}(t)

and

π_k P′_{kj}(t) = μπ_k P_{k−1,j−1}(t) + λπ_k P_{k+1,j}(t) − (λ+μ)π_k P_{k,j}(t)
= λπ_{k−1} P_{k−1,j−1}(t) + μπ_{k+1} P_{k+1,j}(t) − (λ+μ)π_k P_{k,j}(t).

Upon summing the above, the μ terms drop out and we get

P′_j(t) = λP_{j−1}(t) − λP_j(t), j ≥ 1.

Similar analysis yields P′_0(t) = −λP_0(t).

Set Q_n(t) = e^{λt} P_n(t). Then Q′_n(t) = λQ_{n−1}(t). The solution (using Q_0(0) = P_0(0) = 1) is

Q_n(t) = (λt)^n/n!, n = 0, 1, 2, ...; hence P_n(t) = (λt)^n e^{-λt}/n!.

Compare with Theorem 5.1.

2.6 We want the smallest c for which ∑_{k=c+1}^∞ π_k = n^{c+1} ≤ 1/1000, i.e., the smallest c for which

c + 1 ≥ log 1000 / log(1/n);

c* = [ log 1000 / log(1/n) ], where [x] = integer part of x.

2.7 Since X(0) = 0 we set P_j(t) = P_{0j}(t). The forward equations in this case are

P′_j(t) = λP_{j−1}(t) + μ(j+1)P_{j+1}(t) − (λ + μj)P_j(t). Multiply by j:

jP′_j(t) = λ(j−1+1)P_{j−1}(t) + μ[(j+1)^2 − (j+1)]P_{j+1}(t) − λjP_j(t) − μj^2 P_j(t), j ≥ 0.

Sum:

M′(t) = λM(t) + λ − μM(t) + μP_1(t) − μP_1(t) − λM(t) = λ − μM(t), t ≥ 0,

M(0) = 0. Let Q(t) = e^{μt} M(t). Then

Q′(t) = e^{μt}M′(t) + μe^{μt}M(t) = e^{μt}[M′(t) + μM(t)] = λe^{μt}, t ≥ 0,

Q(t) = (λ/μ) ∫_0^t μe^{μs} ds = (λ/μ)(e^{μt} − 1),

M(t) = e^{-μt}Q(t) = (λ/μ)(1 − e^{-μt}).
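The closed form M(t) = (λ/μ)(1 − e^{−μt}) can be checked against the ODE M′ = λ − μM by a simple Euler integration; the rates below are arbitrary test values.

```python
import math

lam, mu = 3.0, 0.7

def M_closed(t):
    return (lam / mu) * (1.0 - math.exp(-mu * t))

# Euler integration of M' = lam - mu*M from M(0) = 0 up to t = 2.
h, steps = 1e-4, 20000
M = 0.0
for _ in range(steps):
    M += h * (lam - mu * M)

assert abs(M - M_closed(2.0)) < 1e-3
```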


3.1 X(t) and Y(t) are independent random variables, each Poisson distributed, where

E[X(t)] = λ ∫_0^t [1 − G(y)] dy, E[Y(t)] = λ ∫_0^t G(y) dy.

Observe that Y(t) is the number of points (W_k, V_k) in the triangle B_t = {(w,v): 0 ≤ w ≤ t, 0 ≤ v ≤ t − w} and that A_t and B_t are disjoint. Apply Theorem V, 6.1.

3.2 Method A: v̄ = .5, τ^2 = .2, λ = 1, ρ = λv̄ = .5λ.

W = v̄ + λ(τ^2 + v̄^2)/(2(1 − ρ)) = .5 + .45λ/(2 − λ) = .95 when λ = 1.

If λ → 2 then W → ∞.

Method B: v̄ = .4, τ^2 = .9, λ = 1, ρ = λv̄ = .4λ.

W = v̄ + λ(τ^2 + v̄^2)/(2(1 − ρ)) = .4 + 1.06λ/(.8(2.5 − λ)) = 1.28 when λ = 1.

Method A is preferred when λ = 1, but B is better when λ < .3 or λ > 1.7 ....
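Both designs are instances of the Pollaczek–Khinchine mean-value formula, so the crossover behavior can be mapped out numerically (the helper `pk_wait` is ours, not from the text):

```python
def pk_wait(lam, vbar, tau2):
    """M/G/1 mean time in system: vbar + lam*(tau2 + vbar^2) / (2*(1 - rho))."""
    rho = lam * vbar
    return vbar + lam * (tau2 + vbar * vbar) / (2.0 * (1.0 - rho))

W_A = pk_wait(1.0, 0.5, 0.2)   # Method A at lam = 1
W_B = pk_wait(1.0, 0.4, 0.9)   # Method B at lam = 1
assert abs(W_A - 0.95) < 1e-9
assert abs(W_B - 1.28) < 5e-3
assert W_A < W_B               # A preferred at lam = 1

# B wins both for small lam and near A's saturation point lam -> 2:
assert pk_wait(0.2, 0.4, 0.9) < pk_wait(0.2, 0.5, 0.2)
assert pk_wait(1.9, 0.4, 0.9) < pk_wait(1.9, 0.5, 0.2)
```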

4.1 When the faster server is first: D = 1400 and π_(1,1) = .3977.
When the slower server is first: D = 1600 and π_(1,1) = .4082.
Slightly more customers are lost when the slower server is first.

4.2 Slower customers have priority: σ = 2/5, τ = 4/15.
Faster customers have priority: σ = 4/15, τ = 2/5.

                          L_p    L_n     L
Slower have priority:     2/3    8/5     2.27
Faster have priority:     4/11   82/55   1.85

                          L_slow  L_fast  L
Slower have priority:     .67     1.60    2.27
Faster have priority:     1.49    .36     1.85

Total L is reduced when the faster customers have priority.


4.3 A birth and death process with λ_n = λ and μ_n = μ + r_n.

∑θ_k = 1 + ∑_{k=1}^∞ λ^k/(μ_1 μ_2 ··· μ_k), π_0 = 1/∑θ_k, π_k = π_0 λ^k/(μ_1 μ_2 ··· μ_k).

The rate at which customers depart prior to service is ∑_k π_k r_k.

4.4 λ_n = λ, n ≥ 0; μ_0 = 0, μ_1 = α, μ_k = β for k ≥ 2.

θ_0 = 1, θ_1 = λ/α = (β/α)(λ/β), θ_k = (β/α)(λ/β)^k, k ≥ 1.

∑θ_k = 1 + (β/α) ∑_{k=1}^∞ (λ/β)^k = 1 + (β/α)(λ/(β − λ)); 0 < λ < β.

π_0 = α(β − λ)/[α(β − λ) + βλ], π_k = [β(β − λ)/(α(β − λ) + βλ)] (λ/β)^k, k ≥ 1.

4.5 λ_0 = λ_1 = λ_2 = λ, μ_1 = μ, μ_2 = μ_3 = 2μ.

θ_0 = 1, θ_1 = λ/μ, θ_2 = (1/2)(λ/μ)^2, θ_3 = (1/4)(λ/μ)^3.

π_0 = 1/[1 + λ/μ + (1/2)(λ/μ)^2 + (1/4)(λ/μ)^3], π_k = θ_k π_0.

5.1 Pr{X_3 = k} = (1 − 10/15)(10/15)^k, k ≥ 0; Pr{X_3 ≥ k} = (10/15)^k.

We want the smallest c such that (10/15)^c ≤ .01, i.e., c log(2/3) ≤ log .01, i.e.,

c ≥ log .01 / log(2/3) = 11.36; c* = 12.

6.1 Let x be the rate of feedback. Then x + λ flows into Server #1, and .6(x + λ) flows into and out of Server #2. But x = .2 of the output of Server #2. Therefore x = (.2)(.6)(x + λ) = .12x + .12λ, so .88x = .12λ and x = (12/88)λ = 3/11 (here λ = 2).

Server #2: arrival rate = .6(x + λ) = (6/10)(3/11 + 2) = 15/11; π_20 = 1 − (15/11)/3 = 6/11.

Server #3: arrival rate = .4(x + λ) = 10/11; π_30 = 1 − (10/11)/2 = 6/11.

Long run Pr{X_2 = 0, X_3 > 0} = (6/11) × (5/11) = 30/121.


Chapter 10

10.1 (a) Since the rows of the matrix P(t) add to 1, we may represent P(t) in the form

P(t) = [ φ(t), 1−φ(t); 1−ψ(t), ψ(t) ]  and set  Q = [ −a, a; b, −b ],

where φ(0) = 1, φ′(0) = −a, ψ(0) = 1, ψ′(0) = −b. The (1,1) term of the matrix equation P′ = PQ is the first-order linear equation

φ′(t) = −(a+b)φ(t) + b.

The general solution is the sum of a particular solution and a general solution of the homogeneous equation, obtained by a family of exponentials; in detail

φ(t) = b/(a+b) + ce^{-(a+b)t}   (1)

The constant c is obtained from the first derivative at t = 0 in (1); thus

−a = φ′(0) = −c(a+b) ⟹ c = a/(a+b),

which gives the result

φ(t) = b/(a+b) + (a/(a+b))e^{-(a+b)t}, 1 − φ(t) = a/(a+b) − (a/(a+b))e^{-(a+b)t}.

Similarly one obtains ψ(t) by interchanging a and b.

(b) We can use the result of part (a) to obtain

E[V(t) | V(0) = v_1] = v_1 P_{11}(t) + v_2 P_{12}(t)
= v_1 (a + be^{-μt})/(a+b) + v_2 (b − be^{-μt})/(a+b)
= (av_1 + bv_2)/(a+b) + e^{-μt} (bv_1 − bv_2)/(a+b)

(c) Interchange a with b and v_1 with v_2 in (b) to obtain the indicated result.
(d) Replace v_1 with v_1^2 and v_2 with v_2^2 in the result of part (b).
(e) Interchange a with b and replace v_1 with v_1^2, v_2 with v_2^2 in the result of part (d).

10.2 Assuming that g, g_t, g_tt, g_x, g_xx are smooth functions, we have

f(x,t) = ∫_R g(μ,t) e^{iμx} dμ
f_t(x,t) = ∫_R g_t(μ,t) e^{iμx} dμ
f_tt(x,t) = ∫_R g_tt(μ,t) e^{iμx} dμ
f_xx(x,t) = ∫_R g(μ,t)(iμ)^2 e^{iμx} dμ

f_tt + 2f_t − f_xx = ∫_R (g_tt + 2g_t + μ^2 g) e^{iμx} dμ

By definition, f is a solution of the telegraph equation if and only if the left side is zero. But the Fourier transform is 1:1, so that we must have g_tt + 2g_t + μ^2 g = 0 for almost all μ.

10.3 The ordinary differential equation for g is linear and homogeneous with constant coefficients. The form of the solution depends on whether |µ| < 1, |µ| = 1, or |µ| > 1. In every case we have the quadratic equation r^2 + 2r + µ^2 = 0, whose solution is r = −1 ± √(1 − µ^2), which leads to the following three-fold system:

|µ| < 1 implies g(t) = e^{−t}(A cosh(t√(1 − µ^2)) + B sinh(t√(1 − µ^2)))
|µ| = 1 implies g(t) = e^{−t}(A + Bt)
|µ| > 1 implies g(t) = e^{−t}(A cos(t√(µ^2 − 1)) + B sin(t√(µ^2 − 1)))

for suitable constants A, B.
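Each branch can be verified against the ODE g″ + 2g′ + µ^2 g = 0 by finite differences; a sketch for the oscillatory branch |µ| > 1, with assumed constants A = 1, B = 0.5, µ = 2:

```python
import math

# Assumed constants for the |mu| > 1 branch.
A, B, mu = 1.0, 0.5, 2.0
w = math.sqrt(mu*mu - 1.0)

def g(t):
    # g(t) = e^{-t}(A cos(t sqrt(mu^2-1)) + B sin(t sqrt(mu^2-1)))
    return math.exp(-t)*(A*math.cos(w*t) + B*math.sin(w*t))

def residual(t, h=1e-4):
    # Central-difference approximations to g'(t) and g''(t).
    g1 = (g(t + h) - g(t - h))/(2*h)
    g2 = (g(t + h) - 2*g(t) + g(t - h))/(h*h)
    return g2 + 2*g1 + mu*mu*g(t)
```

The residual vanishes up to discretization error at every sample point, confirming the branch solves the equation.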

Exercises (10.4) and (10.5) can be paraphrased as follows: the idea is to characterize those second-order differential operators which arise from random evolutions in one dimension. Consider the following set-up: let L be the set of second-order differential operators of the form (10.32) for some v1 < v2, q1 > 0, q2 > 0. Let M be the set of differential operators of the form (10.33) with

a20 = 1, a11^2 > 4 a02 a20, a10 > 0, v1 < a01/a10 < v2, a00 = 0. (2)

The revised exercises are written as follows:

Exercise (10.4′). Let L ∈ L for some v1 < v2, q1 > 0, q2 > 0. Then L ∈ M with

a20 = 1, a11 = v1 + v2, a02 = v1 v2, a10 = q1 + q2, a01 = q1 v2 + q2 v1, a00 = 0.

Exercise (10.5′). Let L ∈ M for some (a_ij) satisfying the above conditions (2). Then L ∈ L. Thus we have a necessary and sufficient condition for a second-order operator to be associated with a random evolution process.

Solution of Exercises (10.4′), (10.5′): If L ∈ L, equation (2) holds and we can make the suitable identifications:

a20 = 1, a11 = v1 + v2, a02 = v1 v2, a10 = q1 + q2, a01 = v1 q2 + v2 q1.

It is immediate that these satisfy the conditions (2), hence L ∈ M. Conversely, suppose that L ∈ M. Let v1, v2 be the roots of the equation λ^2 + a11 λ + a02 = 0. From the hypotheses, both roots are real and can be labeled so that v1 < v2.

Next, we define q1, q2 as the solution of the 2 × 2 system

a10 = q1 + q2, a01 = q1 v2 + q2 v1.

The unique solution is

q1 = (a01 − v1 a10)/(v2 − v1), q2 = (v2 a10 − a01)/(v2 − v1).

In both formulas the denominator is positive; in addition we have the hypothesis v1 < a01/a10 < v2, which shows that the numerators are also positive. Hence the ratios are positive, so that, in particular, q1 + q2 > 0. Finally we need to verify


the condition on a01/a10. But this follows by dividing the second equation of the system by q1 + q2 to obtain

v1 < a01/a10 < v2.

Indeed, a01/a10 is a convex combination of v1, v2, hence it must lie on the segment from v1 to v2. The proof is complete. For completeness, we include the self-contained proof of (10.4) and (10.5):
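The recovery of q1, q2 in the proof above can be checked numerically; a minimal sketch with assumed sample values v1 < v2 and q1, q2 > 0:

```python
# Assumed sample data satisfying v1 < v2, q1 > 0, q2 > 0.
v1, v2 = -1.0, 3.0
q1, q2 = 0.4, 1.1

# Forward map from (q1, q2) to the operator coefficients.
a10 = q1 + q2
a01 = q1*v2 + q2*v1

# Inverse formulas from the proof.
q1_rec = (a01 - v1*a10)/(v2 - v1)
q2_rec = (v2*a10 - a01)/(v2 - v1)
```

The recovered rates match the originals exactly, and a01/a10 lies strictly between v1 and v2 as the convex-combination argument predicts.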

10.4 In case N = 2 the determinant in eqn. (10.31) equals

(−q1 − ∂t + v1 ∂x)(−q2 − ∂t + v2 ∂x) − q12 q21 = 0.

When we multiply out the diagonal terms there appear 9 terms for the product and another one from the off-diagonal terms. The determinant is zero, hence we have only eight non-zero terms, namely

∂t^2 once, ∂t ∂x twice, v1 v2 ∂x^2 once, ∂t twice, ∂x twice.

When we add these eight terms we obtain (10.32), as required.

10.5 The polynomial equation is λ^2 − λ(v1 + v2) + v1 v2 = 0. By inspection the roots are λ = v1, v2, where we may assume that v1 < v2. The coefficient a10 = q1 + q2 > 0. Finally a01/a10 = (v1 q2 + v2 q1)/(q1 + q2), a convex combination of v1, v2. Hence this fraction belongs to the interval (v1, v2), as required.

10.6 If we are given v1, v2, q1, q2 we can define a20 = 1, a11 = −(v1 + v2), a02 = v1 v2, a10 = q1 + q2, a01 = v1 q2 + v2 q1.

10.7 Since the roots are distinct, the associated cubic polynomial separates into three linear factors: λ^3 + a21 λ^2 + a12 λ + a03 = (λ − v1)(λ − v2)(λ − v3), where we label the v's so that v1 < v2 < v3, which proves i′).

To prove the first part of ii′), note that a11/a20 is a convex combination of the three quantities v1 + v2, v1 + v3, v2 + v3, which are in increasing order. The set of such convex combinations can equally well be described as the open interval (v1 + v2, v2 + v3), which was to be proved. To prove the second part of ii′), note that a02/a20 = (q1 v2 v3 + q2 v1 v3 + q3 v1 v2)/(q1 + q2 + q3) is a convex combination of the three products v1 v2, v1 v3, v2 v3. This convex combination belongs to the interval (m, M), where m = min_{i≠j} v_i v_j and M = max_{i≠j} v_i v_j, which proves the second half of ii′).

10.8 From the formula for σ^2, we have

σ^2 = ∫_0^∞ πr^2 (1/(2πa)) e^{−r/a} dr
 = (1/(2a)) ∫_0^∞ r^2 e^{−r/a} dr
 = (a^2/2) ∫_0^∞ ρ^2 e^{−ρ} dρ
 = a^2
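The value σ^2 = a^2 can be confirmed by direct quadrature; a sketch with an assumed scale a = 1.7, using a midpoint sum on a truncated range:

```python
import math

# Assumed scale parameter (any a > 0 works).
a = 1.7

def integrand(r):
    # pi r^2 * (1/(2 pi a)) e^{-r/a}  =  r^2 e^{-r/a} / (2a)
    return math.pi * r*r * (1.0/(2.0*math.pi*a)) * math.exp(-r/a)

# Midpoint rule on [0, 60a]; the exponential tail beyond is negligible.
h = 1e-3
sigma2 = sum(integrand((k + 0.5)*h)*h for k in range(int(60*a/h)))
```

The numerical value agrees with a^2 to well within the quadrature error.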


Chapter 11

11.1 The moment of order k of the normal distribution is obtained through the formula √(2π) m_k = ∫_R x^k e^{−x^2/2} dx. If k is odd the integrand is an odd function, hence m_k is zero if k is odd. If k is even, we can write k = 2n for some n = 1, 2, .... Next, we integrate by parts:

√(2π) m_{2n} = ∫_{−∞}^{∞} x^{2n−1} · x e^{−x^2/2} dx
 = −∫_{−∞}^{∞} x^{2n−1} d(e^{−x^2/2})
 = +∫_{−∞}^{∞} (2n − 1) x^{2n−2} e^{−x^2/2} dx
 = √(2π) (2n − 1) m_{2n−2}

For example m_0 = 1, m_2 = m_0, m_4 = 3 m_2 = 3 m_0 and in general

m_{2n} = (2n − 1)(2n − 3) ··· 1 · m_0 = (2n)! / (2^n n!)
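The closed form (2n)!/(2^n n!) can be checked against direct numerical integration; a sketch (truncation limit and step count are arbitrary discretization choices):

```python
import math

def moment_numeric(k, lim=10.0, steps=200000):
    # Midpoint-rule approximation of (1/sqrt(2 pi)) * integral x^k e^{-x^2/2} dx.
    h = 2*lim/steps
    s = sum(x**k * math.exp(-x*x/2)
            for x in (-lim + (i + 0.5)*h for i in range(steps)))
    return s*h/math.sqrt(2*math.pi)

def moment_formula(n):
    # m_{2n} = (2n)! / (2^n n!)
    return math.factorial(2*n)//(2**n * math.factorial(n))
```

For instance the formula gives m_2 = 1, m_4 = 3, m_6 = 15, matching the numerical moments.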

11.2 The difference quotient is

φ(t + h) − φ(t) = E(e^{i(t+h)X} − e^{itX}).

The integrand satisfies the conditions that for each t

lim_{h→0} (e^{i(t+h)X} − e^{itX}) = 0, |e^{i(t+h)X} − e^{itX}| ≤ 2,

which implies, by the bounded convergence theorem, that the expectation tends to zero: lim_{h→0} E(e^{i(t+h)X} − e^{itX}) = 0; in other words lim_{h→0} (φ(t + h) − φ(t)) = 0, as required.

11.3 Write φ_X(t) = f(t) = u(t) + iv(t), where u, v are continuous functions with u(t)^2 + v(t)^2 ≤ 1. Then

∫ f(t) dt = ∫ u(t) dt + i ∫ v(t) dt = R e^{iθ}, where R = |∫ f(t) dt|,

and therefore

|∫ f(t) dt| = e^{−iθ} ∫ f(t) dt = cosθ ∫ u(t) dt + sinθ ∫ v(t) dt,

since the left side is real. Now use Cauchy–Schwarz as follows:

|cosθ ∫ u(t) dt + sinθ ∫ v(t) dt| = |∫ [cosθ u(t) + sinθ v(t)] dt| ≤ ∫ √(u(t)^2 + v(t)^2) dt = ∫ |f(t)| dt ≤ 1,

which was to be proved.
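The chain of inequalities can be illustrated numerically; a sketch with the assumed example f(t) = e^{3it} on [0, 1], which satisfies |f(t)| ≤ 1:

```python
import cmath

# Midpoint-rule approximations of the integral of f and of |f| over [0, 1].
N = 10000
h = 1.0/N
F = sum(cmath.exp(3j*(k + 0.5)*h)*h for k in range(N))        # ≈ ∫ f(t) dt
absF = sum(abs(cmath.exp(3j*(k + 0.5)*h))*h for k in range(N)) # ≈ ∫ |f(t)| dt
```

Here |∫ f| ≈ 0.665 while ∫ |f| = 1, consistent with |∫ f| ≤ ∫ |f| ≤ 1.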

11.4 We can extend Theorem (11.3) to intervals of the type (a,b), (a,b], [a,b). We work with the functions ψ±_ε which satisfy

ψ−_ε ≤ 1_(a,b) ≤ ψ+_ε, lim_{ε→0} ψ−_ε(x) = 1_(a,b)(x), lim_{ε→0} ψ+_ε(x) = 1_[a,b](x)

Now take the expectation, let n → ∞ and then ε → 0:

E ψ−_ε(X_n) ≤ P[a < X_n < b] ≤ E ψ+_ε(X_n) (3)
E ψ−_ε(X) ≤ liminf_n P[a < X_n < b] ≤ limsup_n P[a < X_n < b] ≤ E ψ+_ε(X)
P[a < X < b] ≤ liminf_n P[a < X_n < b] ≤ limsup_n P[a < X_n < b] ≤ P[a ≤ X ≤ b]

If P[X = a] and P[X = b] are both zero, then the extreme members in (3) are equal and we have

lim_n P(a < X_n < b) = P(a < X < b) = P(a ≤ X ≤ b)

which was to be proved. The proof for (a,b] and [a,b) is entirely similar. In the first case we have

E ψ−_ε(X_n) ≤ P[a < X_n ≤ b] ≤ E ψ+_ε(X_n) (4)
E ψ−_ε(X) ≤ liminf_n P[a < X_n ≤ b] ≤ limsup_n P[a < X_n ≤ b] ≤ E ψ+_ε(X)
P[a < X ≤ b] ≤ liminf_n P[a < X_n ≤ b] ≤ limsup_n P[a < X_n ≤ b] ≤ P[a ≤ X ≤ b]

If P[X = a] and P[X = b] are both zero, then the extreme members in (4) are equal and we have

lim_n P(a < X_n ≤ b) = P(a < X ≤ b) = P(a ≤ X ≤ b)

11.5 The complex exponential is defined by the power series

e^{iz} = Σ_{n=0}^{∞} (iz)^n / n!.

Moving the first three terms to the left side, we have

e^{iz} − 1 − iz − (iz)^2/2 = Σ_{n=3}^{∞} (iz)^n / n!

(e^{iz} − 1 − iz − (iz)^2/2) / z^2 = Σ_{n=3}^{∞} i^n z^{n−2} / n!

The term on the right is bounded by |z| e^{|z|}. Hence, since (iz)^2/2 = −z^2/2,

|(e^{iz} − 1 − iz)/z^2 + 1/2| ≤ |z| e^{|z|} → 0, z → 0
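A numerical sanity check of this bound at a few sample points (the z values are arbitrary):

```python
import cmath
import math

def lhs(z):
    # |(e^{iz} - 1 - iz)/z^2 + 1/2|
    return abs((cmath.exp(1j*z) - 1 - 1j*z)/(z*z) + 0.5)
```

The bound |z| e^{|z|} holds at each sample, and the left side shrinks as z → 0, as the series estimate predicts.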


11.6 If z = α + iβ is an arbitrary complex number, the multiplicative property of the exponential implies that

|e^{α+iβ}| = |e^α e^{iβ}| = e^α |e^{iβ}| = e^α |cosβ + i sinβ| = e^α √(cos^2 β + sin^2 β) = e^α.

11.7 For 0 ≤ θ ≤ π/2, cos t ≤ 1, so that

sinθ = ∫_0^θ cos t dt ≤ ∫_0^θ dt = θ.

Meanwhile θ ↦ sinθ is a concave function on [0, π/2], so there it lies above its chord; on [0, π/2] the chord is the line with equation y = (2/π)θ. Combining these two estimates, we have the two-sided estimate

(2/π)θ ≤ sinθ ≤ θ, 0 ≤ θ ≤ π/2
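The two-sided estimate can be verified on a grid; a minimal sketch (grid size and tolerance are arbitrary choices):

```python
import math

# Check (2/pi) theta <= sin(theta) <= theta on a uniform grid over [0, pi/2].
thetas = [k*(math.pi/2)/1000 for k in range(1001)]
eps = 1e-12  # slack for floating-point roundoff at the endpoints
lower_ok = all((2/math.pi)*t <= math.sin(t) + eps for t in thetas)
upper_ok = all(math.sin(t) <= t + eps for t in thetas)
```

Equality holds at θ = 0 for both bounds and at θ = π/2 for the lower bound, which is why a small roundoff slack is included.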

11.8 Changing to polar coordinates,

∫_{R^2} e^{−(x^2+y^2)/2} dx dy = ∫_0^{2π} ∫_0^∞ e^{−r^2/2} r dr dθ = 2π ∫_0^∞ r e^{−r^2/2} dr = 2π · 1 = 2π
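The value 2π can be confirmed by quadrature over a large square; a sketch (the truncation box and grid size are arbitrary discretization choices):

```python
import math

# Midpoint rule for the 2-D Gaussian integral over [-L, L]^2; the tail
# outside the box is negligible for L = 8.
L, N = 8.0, 400
h = 2*L/N
total = 0.0
for i in range(N):
    for j in range(N):
        x = -L + (i + 0.5)*h
        y = -L + (j + 0.5)*h
        total += math.exp(-(x*x + y*y)/2)*h*h
```

The sum matches the exact polar-coordinate value 2π to well within the quadrature error.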