
Martingale problems and stochastic equations for Markov processes

• Review of basic material on stochastic processes

• Characterization of stochastic processes by their martingale properties

• Weak convergence of stochastic processes

• Stochastic equations for general Markov processes in R^d

• Martingale problems for Markov processes

• Forward equations and operator semigroups

• Equivalence of martingale problems and stochastic differential equations

• Change of measure

• Filtering

• Averaging


• Control

• Exercises

• Glossary

• References


1. Review of basic material on stochastic processes

• Filtrations

• Stopping times

• Martingales

• Optional sampling theorem

• Doob’s inequalities

• Stochastic integrals

• Local martingales

• Semimartingales

• Computing quadratic variations

• Covariation

• Ito’s formula

Conventions and caveats

State spaces are always complete, separable metric spaces (sometimes called Polish spaces), usually denoted (E, r).

All probability spaces are complete.

All identities involving conditional expectations (or conditional probabilities) only hold almost surely (even when I don't say so).

If the filtration {Ft} involved is obvious, I will say adapted, rather than Ft-adapted, stopping time, rather than Ft-stopping time, etc.

All processes are cadlag (right continuous with left limits at each t > 0), unless otherwise noted.

A process is real-valued if that is the only way the formula makes sense.

References

Kurtz, Lecture Notes for Math 735
http://www.math.wisc.edu/~kurtz/m735.htm

Seppalainen, Basics of Stochastic Analysis
http://www.math.wisc.edu/~seppalai/sa-book/etusivu.html

Ethier and Kurtz, Markov Processes: Characterization and Convergence

Protter, Stochastic Integration and Differential Equations, Second Edition

Filtrations

(Ω, F, P) a probability space

Available information is modeled by a sub-σ-algebra of F

Ft information available at time t

{Ft} is a filtration: t < s implies Ft ⊂ Fs

A stochastic process X is adapted to {Ft} if X(t) is Ft-measurable for each t ≥ 0.

An E-valued stochastic process X adapted to {Ft} is {Ft}-Markov if

E[f(X(t + r))|Ft] = E[f(X(t + r))|X(t)], t, r ≥ 0, f ∈ B(E)

An R-valued stochastic process M adapted to {Ft} is an {Ft}-martingale if

E[M(t + r)|Ft] = M(t), t, r ≥ 0

Stopping times

τ is an {Ft}-stopping time if for each t ≥ 0, {τ ≤ t} ∈ Ft.

For a stopping time τ,

Fτ = {A ∈ F : {τ ≤ t} ∩ A ∈ Ft, t ≥ 0}

Exercise 1.1 1. Show that Fτ is a σ-algebra.

2. Show that for {Ft}-stopping times σ, τ, σ ≤ τ implies that Fσ ⊂ Fτ. In particular, Fτ∧t ⊂ Ft.

3. Let τ be a discrete {Ft}-stopping time satisfying {τ < ∞} = ∪_{k=1}^∞ {τ = t_k} = Ω. Show that Fτ = σ({A ∩ {τ = t_k} : A ∈ F_{t_k}, k = 1, 2, . . .}).

4. Show that the minimum of two stopping times is a stopping time and that the maximum of two stopping times is a stopping time.

Examples and properties

Define F_{t+} ≡ ∩_{s>t} Fs. {Ft} is right continuous if Ft = F_{t+} for all t ≥ 0. If {Ft} is right continuous, then τ is a stopping time if and only if {τ < t} ∈ Ft for all t > 0.

If K ⊂ E is closed, τ_K^h = inf{t : X(t) or X(t−) ∈ K} is a stopping time, but inf{t : X(t) ∈ K} may not be; however, if {Ft} is right continuous and complete, then for any B ∈ B(E), τ_B = inf{t : X(t) ∈ B} is an {Ft}-stopping time. This result is a special case of the debut theorem, a very technical result from set theory. Note that

{ω : τ_B(ω) < t} = {ω : ∃ s < t ∋ X(s, ω) ∈ B} = proj_Ω {(s, ω) : X(s, ω) ∈ B, s < t}

Piecewise constant approximations

ε > 0, τ_0^ε = 0,

τ_{i+1}^ε = inf{t > τ_i^ε : r(X(t), X(τ_i^ε)) ∨ r(X(t−), X(τ_i^ε)) ≥ ε}

Define X^ε(t) = X(τ_i^ε), τ_i^ε ≤ t < τ_{i+1}^ε. Then r(X(t), X^ε(t)) ≤ ε.

If X is adapted to {Ft}, then the τ_i^ε are {Ft}-stopping times and X^ε is {Ft}-adapted.
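A minimal numerical sketch of this construction (assuming, for illustration, a real-valued path observed on a fine grid; the function name and the Brownian example are not from the notes):

    import numpy as np

    def epsilon_skeleton(t, x, eps):
        # record a new value each time the path moves distance >= eps
        # from the last recorded value (the tau^eps_i of the text)
        jump_times, values = [t[0]], [x[0]]
        for ti, xi in zip(t, x):
            if abs(xi - values[-1]) >= eps:
                jump_times.append(ti)
                values.append(xi)
        return np.array(jump_times), np.array(values)

    # usage: skeleton of a simulated Brownian path; sup_t |X(t) - Xeps(t)| <= 0.1
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 10001)
    w = np.concatenate([[0.0], np.cumsum(rng.normal(0, np.sqrt(t[1]), 10000))])
    tau, xeps = epsilon_skeleton(t, w, eps=0.1)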

Martingales

An R-valued stochastic process M adapted to {Ft} is an {Ft}-martingale if

E[M(t + r)|Ft] = M(t), t, r ≥ 0

Every martingale has finite quadratic variation:

[M]_t = lim Σ (M(t ∧ t_{i+1}) − M(t ∧ t_i))²

where 0 = t_0 < t_1 < · · ·, t_i → ∞, and the limit is in probability as max(t_{i+1} − t_i) → 0. More precisely, for ε > 0 and t_0 > 0,

lim P{sup_{t≤t_0} |[M]_t − Σ (M(t ∧ t_{i+1}) − M(t ∧ t_i))²| > ε} = 0.

For standard Brownian motion W, [W]_t = t.

Exercise 1.2 Let N be a Poisson process with parameter λ. Then M(t) = N(t) − λt is a martingale. Compute [M]_t.

Optional sampling theorem

A real-valued process X is a submartingale if E[|X(t)|] < ∞, t ≥ 0, and

E[X(t + s)|Ft] ≥ X(t), t, s ≥ 0.

If τ1 and τ2 are stopping times, then

E[X(t ∧ τ2)|F_{τ1}] ≥ X(t ∧ τ1 ∧ τ2).

If τ2 is finite a.s., E[|X(τ2)|] < ∞, and lim_{t→∞} E[|X(t)|1_{{τ2>t}}] = 0, then

E[X(τ2)|F_{τ1}] ≥ X(τ1 ∧ τ2).

Square integrable martingales

M a martingale satisfying E[M(t)²] < ∞. Then

M(t)² − [M]_t

is a martingale. In particular, for t > s,

E[(M(t) − M(s))²] = E[[M]_t − [M]_s].

Doob's inequalities

Let X be a submartingale. Then for x > 0,

P{sup_{s≤t} X(s) ≥ x} ≤ x^{−1} E[X(t)^+]

P{inf_{s≤t} X(s) ≤ −x} ≤ x^{−1} (E[X(t)^+] − E[X(0)])

If X is nonnegative and α > 1, then

E[sup_{s≤t} X(s)^α] ≤ (α/(α − 1))^α E[X(t)^α].

Note that by Jensen's inequality, if M is a martingale, then |M| is a submartingale. In particular, if M is a square integrable martingale, then

E[sup_{s≤t} |M(s)|²] ≤ 4E[M(t)²].

Stochastic integrals

Definition 1.3 For cadlag processes X, Y,

X_− · Y(t) ≡ ∫_0^t X(s−)dY(s) = lim_{max|t_{i+1}−t_i|→0} Σ X(t_i)(Y(t_{i+1} ∧ t) − Y(t_i ∧ t))

whenever the limit exists in probability.

Sample paths of bounded variation: If Y is a finite variation process, the stochastic integral exists (apply the dominated convergence theorem) and

∫_0^t X(s−)dY(s) = ∫_{(0,t]} X(s−) α_Y(ds)

where α_Y is the signed measure with

α_Y(0, t] = Y(t) − Y(0)

Existence for square integrable martingales

If M is a square integrable martingale, then

E[(M(t + s) − M(t))²|Ft] = E[[M]_{t+s} − [M]_t|Ft]

For partitions {t_i} and {r_i},

E[(Σ X(t_i)(M(t_{i+1} ∧ t) − M(t_i ∧ t)) − Σ X(r_i)(M(r_{i+1} ∧ t) − M(r_i ∧ t)))²]
= E[∫_0^t (X(t(s−)) − X(r(s−)))² d[M]_s] = E[∫_{(0,t]} (X(t(s−)) − X(r(s−)))² α_{[M]}(ds)]

where t(s) = t_i for s ∈ [t_i, t_{i+1}) and r(s) = r_i for s ∈ [r_i, r_{i+1}).

Cauchy property

Let X be bounded by a constant. As sup(t_{i+1} − t_i) + sup(r_{i+1} − r_i) → 0, the right side converges to zero by the dominated convergence theorem.

M_X^{{t_i}}(t) ≡ Σ X(t_i)(M(t_{i+1} ∧ t) − M(t_i ∧ t)) is a square integrable martingale, so

E[sup_{t≤T} (Σ X(t_i)(M(t_{i+1} ∧ t) − M(t_i ∧ t)) − Σ X(r_i)(M(r_{i+1} ∧ t) − M(r_i ∧ t)))²]
≤ 4E[∫_{(0,T]} (X(t(s−)) − X(r(s−)))² α_{[M]}(ds)]

A completeness argument gives existence of the stochastic integral, and the uniformity implies the integral is cadlag.

Local martingales

Definition 1.4 M is a local martingale if there exist stopping times τn satisfying τ1 ≤ τ2 ≤ · · · and τn → ∞ a.s. such that M^{τn} defined by M^{τn}(t) = M(τn ∧ t) is a martingale. M is a local square-integrable martingale if the τn can be selected so that M^{τn} is square integrable.

{τn} is called a localizing sequence for M.

Remark 1.5 If {τn} is a localizing sequence for M, and {γn} is another sequence of stopping times satisfying γ1 ≤ γ2 ≤ · · ·, γn → ∞ a.s., then the optional sampling theorem implies that {τn ∧ γn} is localizing.

Local martingales with bounded jumps

Remark 1.6 If M is a continuous, local martingale, then τn = inf{t : |M(t)| ≥ n} will be a localizing sequence. More generally, if |∆M(t)| ≤ c for some constant c, then τn = inf{t : |M(t)| ∨ |M(t−)| ≥ n} will be a localizing sequence. Note that |M^{τn}| ≤ n + c, so M is local square integrable.

Probability estimates for SIs

Y = M + V

M local square-integrable martingale

V finite variation process

X^c(t) = X(t)1_{{τ_c^h > t}}, where τ_c^h = inf{t : |X(t)| ∨ |X(t−)| ≥ c}. Then for a stopping time σ,

P{sup_{s≤t} |X_− · Y(s)| > K}
≤ P{σ ≤ t} + P{sup_{s≤t} |X(s)| ≥ c} + P{sup_{s≤t∧σ} |X^c_− · M(s)| > K/2} + P{sup_{s≤t} |X^c_− · V(s)| > K/2}
≤ P{σ ≤ t} + 16c²E[[M]_{t∧σ}]/K² + P{T_t(V) ≥ c^{−1}K/2}

where T_t(V) denotes the total variation of V on [0, t].

A metric

For cadlag stochastic processes X and Y, define

d_t(X, Y) = inf{ε > 0 : P{sup_{s≤t} r(X(s), Y(s)) ≥ ε} ≤ ε}

and

d(X, Y) = ∫_0^∞ e^{−t} d_t(X, Y)dt.

Under d, the space of cadlag (or cadlag and adapted) stochastic processes is a complete metric space.

Semimartingales

Definition 1.7 Y is an {Ft}-semimartingale if and only if Y = M + V, where M is a local square integrable martingale with respect to {Ft} and V is an {Ft}-adapted finite variation process.

In particular, if X is cadlag and adapted and Y is a semimartingale, then ∫ X_− dY exists.

Computing quadratic variations

Let ∆Z(t) = Z(t) − Z(t−).

Lemma 1.8 If Y is finite variation, then

[Y]_t = Σ_{s≤t} ∆Y(s)²

Lemma 1.9 If Y is a semimartingale, X is adapted, and Z(t) = ∫_0^t X(s−)dY(s), then

[Z]_t = ∫_0^t X(s−)² d[Y]_s.

Proof. Check first for piecewise constant X and then approximate general X by piecewise constant processes.

Covariation

The covariation of Y1, Y2 is defined by

[Y1, Y2]_t ≡ lim Σ_i (Y1(t_{i+1} ∧ t) − Y1(t_i ∧ t))(Y2(t_{i+1} ∧ t) − Y2(t_i ∧ t))

Exercise 1.10 1. Show that if Y1 is cadlag and Y2 is finite variation, then

[Y1, Y2]_t = Σ_{s≤t} ∆Y1(s)∆Y2(s).

2. Using the fact that martingales have finite quadratic variation, show that semimartingales have finite quadratic variation.

3. Using the above results, show that the covariation of two semimartingales exists.

Ito's formula

If f : R → R is C² and Y is a semimartingale, then

f(Y(t)) = f(Y(0)) + ∫_0^t f′(Y(s−))dY(s) + ∫_0^t ½f″(Y(s))d[Y]^c_s + Σ_{s≤t} (f(Y(s)) − f(Y(s−)) − f′(Y(s−))∆Y(s))

where [Y]^c is the continuous part of the quadratic variation given by

[Y]^c_t = [Y]_t − Σ_{s≤t} ∆Y(s)².

Ito's formula for vector-valued semimartingales

If f : R^m → R is C², Y1, . . . , Ym are semimartingales, and Y = (Y1, . . . , Ym), then defining

[Yk, Yl]^c_t = [Yk, Yl]_t − Σ_{s≤t} ∆Yk(s)∆Yl(s),

f(Y(t)) = f(Y(0)) + Σ_{k=1}^m ∫_0^t ∂_k f(Y(s−))dYk(s) + Σ_{k,l=1}^m ½ ∫_0^t ∂_k∂_l f(Y(s−))d[Yk, Yl]^c_s
+ Σ_{s≤t} (f(Y(s)) − f(Y(s−)) − Σ_{k=1}^m ∂_k f(Y(s−))∆Yk(s)).

Examples

W standard Brownian motion

Z(t) = exp{W(t) − ½t} = 1 + ∫_0^t Z(s)d(W(s) − ½s) + ∫_0^t ½Z(s)ds = 1 + ∫_0^t Z(s)dW(s)
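A quick Monte Carlo check that Z has constant mean one, as a martingale started at 1 should (a sketch; the sample size and seed are arbitrary):

    import numpy as np

    # Z(t) = exp(W(t) - t/2) with W(t) ~ N(0, t); E[Z(t)] should be 1
    rng = np.random.default_rng(1)
    t = 2.0
    W = rng.normal(0.0, np.sqrt(t), size=100_000)
    Z = np.exp(W - 0.5 * t)
    print(Z.mean())  # ≈ 1.0 up to Monte Carlo error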

Exercise 1.11 Consider the stochastic differential equation

X(t) = X(0) + ∫_0^t aX(s)dW(s) + ∫_0^t bX(s)ds.

Find α and β so that X(t) = X(0) exp{αW(t) + βt} is a solution.


2. Characterization of stochastic processes by their martingale properties

• Levy’s characterization of Brownian motion

• Watanabe’s characterization of the Poisson process

• Strong Markov property for Poisson processes

• Intensity for a counting process

• Martingale problems for counting processes

• Multivariate counting processes

• Continuous time Markov chains

• Martingale problems for diffusion processes

Levy's characterization of Brownian motion

Theorem 2.1 Let M be a continuous local martingale with [M]_t = t. Then M is standard Brownian motion.

Remark 2.2 Note that

E[M^{τn}(t)²] = E[[M^{τn}]_t] = E[τn ∧ t] ≤ t

and E[sup_{s≤t} M^{τn}(s)²] ≤ 4E[M^{τn}(t)²] ≤ 4t, and it follows by the dominated convergence theorem and Fatou's lemma that M is a square integrable martingale.


Proof. Applying Ito's formula,

e^{iθM(t)} = 1 + ∫_0^t iθe^{iθM(s)}dM(s) − ½θ² ∫_0^t e^{iθM(s)}ds,

where the second term on the right is a martingale. Consequently,

E[e^{iθM(t+r)}|Ft] = e^{iθM(t)} − ½θ² ∫_t^{t+r} E[e^{iθM(s)}|Ft]ds

and

ϕ_t(θ, r) ≡ E[e^{iθ(M(t+r)−M(t))}|Ft] = 1 − ½θ² ∫_t^{t+r} E[e^{iθ(M(s)−M(t))}|Ft]ds = 1 − ½θ² ∫_0^r ϕ_t(θ, u)du

so ϕ_t(θ, r) = e^{−½θ²r}. It follows that for 0 = t_0 < t_1 < · · · < t_m,

E[∏_{k=1}^m e^{iθ_k(M(t_k)−M(t_{k−1}))}] = ∏ e^{−½θ_k²(t_k−t_{k−1})}

and hence M has independent Gaussian increments.

Continuous, finite variation martingales

Theorem 2.3 Suppose that M is a continuous local martingale and T_t(M) < ∞ a.s. for every t ≥ 0. Then M is constant in time.

Proof. Replace M by M − M(0). Since M is a continuous local martingale, it is locally square integrable (Remark 1.6), and since M is continuous with finite variation, [M] ≡ 0 by Lemma 1.8. Hence there exist stopping times τn → ∞ such that

E[M(t ∧ τn)²] = E[[M]_{t∧τn}] = 0,

so M(t ∧ τn) = 0 a.s. for each n, and M is constant.

Original form of Levy's theorem

Lemma 2.4 Suppose that M is a continuous local martingale and that M²(t) − t is a continuous local martingale also. Then [M]_t = t.

Proof. By Ito's formula

M²(t) = M²(0) + ∫_0^t 2M(s)dM(s) + [M]_t.

Consequently,

[M]_t − t = (M²(t) − t) − M²(0) − ∫_0^t 2M(s)dM(s)

is a finite variation, continuous local martingale and is zero by Theorem 2.3.

It follows that Levy's theorem (Theorem 2.1) holds with the condition [M]_t = t replaced by the assumption that M(t)² − t is a local martingale.

Dominated convergence theorem

Theorem 2.5 Let Xn → X and Yn → Y in probability. Suppose that |Xn| ≤ Yn a.s. and E[Yn|D] → E[Y|D] in probability. Then

E[Xn|D] → E[X|D] in probability

Proof. A sequence converges in probability iff every subsequence has a further subsequence that converges a.s., so we may as well assume almost sure convergence. Let D_{m,c} = {sup_{n≥m} E[Yn|D] ≤ c}. Then

E[Yn 1_{D_{m,c}}|D] = E[Yn|D]1_{D_{m,c}} → E[Y|D]1_{D_{m,c}} = E[Y 1_{D_{m,c}}|D] in L¹.

Consequently, E[Yn 1_{D_{m,c}}] → E[Y 1_{D_{m,c}}], so Yn 1_{D_{m,c}} → Y 1_{D_{m,c}} in L¹ by the ordinary dominated convergence theorem. It follows that Xn 1_{D_{m,c}} → X 1_{D_{m,c}} in L¹ and hence

E[Xn|D]1_{D_{m,c}} = E[Xn 1_{D_{m,c}}|D] → E[X 1_{D_{m,c}}|D] = E[X|D]1_{D_{m,c}} in L¹.

Since m and c are arbitrary, the lemma follows.

Poisson processes

A Poisson process N can be defined to be a counting process with stationary, independent increments, and it follows that the increments must be Poisson distributed:

E[e^{iθ(N(b)−N(a))}] = E[e^{iθN((b−a)/n)}]^n = (1 + P{N((b−a)/n) = 1}(e^{iθ} − 1) + E[(e^{iθN((b−a)/n)} − 1)1_{{N((b−a)/n)>1}}])^n.

Exercise 2.6 Show that lim_{n→∞} nP{N((b−a)/n) > 1} = 0.

By the identity and the exercise,

E[e^{iθ(N(b)−N(a))}] = lim_{n→∞} e^{nP{N((b−a)/n)=1}(e^{iθ}−1)},

so λ_{b−a} ≡ lim_{n→∞} nP{N((b−a)/n) = 1} exists. The assumption that N has stationary, independent increments then implies λ_{b−a} = λ_1(b − a). Letting λ = λ_1, we have

E[e^{iθ(N(b)−N(a))}] = e^{λ(b−a)(e^{iθ}−1)}.

Time inhomogeneous Poisson processes

Exercise 2.7 Extend the previous result to stochastically continuous counting processes with independent increments.

Definition 2.8 X is stochastically continuous if for every t ≥ 0 and ε > 0,

lim_{s→t} P{r(X(s), X(t)) > ε} = 0.

Watanabe's characterization of the Poisson process

Theorem 2.9 Let N be a counting process and λ > 0. If M(t) ≡ N(t) − λt is a local martingale, then N is a Poisson process with parameter λ.

Remark 2.10 Since the discontinuities of M are bounded by 1, M is local square integrable. (See Remark 1.6.) Consequently, there exist τn → ∞ such that

E[M(t ∧ τn)²] = E[[M]_{t∧τn}] = E[N(t ∧ τn)] = λE[t ∧ τn],

so M is square integrable by Fatou's lemma. Furthermore, sup_{s≤t} M(s ∧ τn)² ≤ sup_{s≤t} M(s)² and

E[sup_{s≤t} M(s)²] = lim E[sup_{s≤t} M(s ∧ τn)²] ≤ 4λt.

The dominated convergence theorem then gives

E[M(t + r)|Ft] = lim E[M((t + r) ∧ τn)|Ft] = lim M(t ∧ τn) = M(t)


Proof.

e^{iθN(t)} = 1 + ∫_0^t (e^{iθ(N(s−)+1)} − e^{iθN(s−)})dM(s) + ∫_0^t (e^{iθ(N(s−)+1)} − e^{iθN(s−)})λds

where the second term on the right is a martingale. Consequently,

E[e^{iθN(t+r)}|Ft] = e^{iθN(t)} + λ(e^{iθ} − 1) ∫_t^{t+r} E[e^{iθN(s)}|Ft]ds

and

ϕ_t(θ, r) ≡ E[e^{iθ(N(t+r)−N(t))}|Ft] = 1 + λ(e^{iθ} − 1) ∫_t^{t+r} E[e^{iθ(N(s)−N(t))}|Ft]ds = 1 + λ(e^{iθ} − 1) ∫_0^r ϕ_t(θ, u)du

so ϕ_t(θ, r) = e^{λr(e^{iθ}−1)}. It follows that for 0 = t_0 < t_1 < · · · < t_m,

E[∏_{k=1}^m e^{iθ_k(N(t_k)−N(t_{k−1}))}] = ∏ e^{λ(t_k−t_{k−1})(e^{iθ_k}−1)}

and hence N has independent Poisson increments.

Strong Markov property

A Poisson process N is compatible with a filtration {Ft} if N is Ft-adapted and N(t + ·) − N(t) is independent of Ft for every t ≥ 0.

Lemma 2.11 Let N be a Poisson process with parameter λ > 0 that is compatible with {Ft}, and let τ be an {Ft}-stopping time such that τ < ∞ a.s. Define Nτ(t) = N(τ + t) − N(τ). Then Nτ is a Poisson process that is independent of Fτ and compatible with {F_{τ+t}}.

Proof. Let M(t) = N(t)− λt. By the optional sampling theorem,

E[M((τ + t+ r) ∧ T )|Fτ+t] = M((τ + t) ∧ T ),

so

E[N((τ + t+ r) ∧ T )−N((τ + t) ∧ T )|Fτ+t] = λ((τ + t+ r) ∧ T − (τ + t) ∧ T ).

By the monotone convergence theorem

E[N(τ + t+ r)−N(τ + t)|Fτ+t] = λr

which gives the lemma.

Intensity for a counting process

If N is a Poisson process with parameter λ and N is compatible with {Ft}, then

P{N(t + ∆t) > N(t)|Ft} = 1 − e^{−λ∆t} ≈ λ∆t.

For a general counting process N, at least intuitively, a nonnegative, Ft-adapted stochastic process λ(·) is an {Ft}-intensity for N if

P{N(t + ∆t) > N(t)|Ft} ≈ E[∫_t^{t+∆t} λ(s)ds|Ft] ≈ λ(t)∆t.

Let σn be the nth jump time of N.

Definition 2.12 λ is an {Ft}-intensity for N if and only if for each n = 1, 2, . . .,

N(t ∧ σn) − ∫_0^{t∧σn} λ(s)ds

is an {Ft}-martingale.

Modeling with intensities

Let Z be a stochastic process (cadlag, E-valued for simplicity) that models "external noise." Let D_c[0,∞) denote the space of counting paths (zero at time zero and constant except for jumps of +1).

Condition 2.13

λ : [0,∞) × D_E[0,∞) × D_c[0,∞) → [0,∞)

is measurable and satisfies λ(t, z, v) = λ(t, z^t, v^t), where z^t(s) = z(s ∧ t) (λ is nonanticipating), and

∫_0^t λ(s, z, v)ds < ∞

for all z ∈ D_E[0,∞) and v ∈ D_c[0,∞).

Let Y be a unit Poisson process that is {Ft}-compatible and assume that Z(s) is F_0-measurable for every s ≥ 0. (In particular, Z is independent of Y.) Consider

N(t) = Y(∫_0^t λ(s, Z, N)ds). (2.1)
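Equation (2.1) can be solved from one jump to the next, since between jumps the argument of Y increases continuously. A minimal simulation sketch (Z omitted and a simple state-dependent rate assumed, so lam and its form are illustrative, not from the notes):

    import numpy as np

    def solve_time_change(lam, T, rng):
        # Between jumps N is constant, so the internal time
        # tau(t) = int_0^t lam(s, N) ds grows at the constant rate lam(n);
        # the next jump of the unit Poisson process Y comes after an Exp(1)
        # increment of internal time, i.e. after Exp(1)/lam(n) in real time.
        t, n, jumps = 0.0, 0, []
        while True:
            rate = lam(n)
            if rate <= 0:
                break
            t += rng.exponential(1.0) / rate
            if t >= T:
                break
            n += 1
            jumps.append(t)
        return jumps

    rng = np.random.default_rng(2)
    jumps = solve_time_change(lam=lambda n: 1.0 + 0.5 * n, T=5.0, rng=rng)  # pure birth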

Solution of the stochastic equation

Theorem 2.14 For n = 1, 2, . . ., there exists a unique solution of (2.1) up to σn, τ(t) = ∫_0^t λ(s, Z, N)ds is an {F_u}-stopping time, and

N(t ∧ σn) − ∫_0^{t∧σn} λ(s, Z, N)ds

is an {F_{τ(t)}}-martingale.


Proof. Existence and uniqueness follow by solving from one jump to the next. Let Y^r(u) = Y(r ∧ u) and let

N^r(t) = Y^r(∫_0^t λ(s, Z, N^r)ds).

Then N^r(t) = N(t) if τ(t) = ∫_0^t λ(s, Z, N)ds ≤ r. Consequently,

{τ(t) ≤ r} = {∫_0^t λ(s, Z, N^r)ds ≤ r} ∈ F_r,

as is {τ(t ∧ σn) ≤ r}. By the optional sampling theorem, with M(u) = Y(u) − u,

E[M(τ((t + v) ∧ σn) ∧ T)|F_{τ(t)}] = M(τ((t + v) ∧ σn) ∧ τ(t) ∧ T) = M(τ(t ∧ σn) ∧ T).

We can let T → ∞ by the monotone convergence argument used in the proof of the strong Markov property for Poisson processes.

Martingale problems for counting processes

Definition 2.15 Let Z be a cadlag, E-valued stochastic process, and let λ satisfy Condition 2.13. A counting process N is a solution of the martingale problem for (λ, Z) if

N(t ∧ σn) − ∫_0^{t∧σn} λ(s, Z, N)ds

is a martingale with respect to the filtration

Ft = σ(N(s), Z(r) : s ≤ t, r ≥ 0)

Theorem 2.16 If N is a solution of the martingale problem for (λ, Z), then N has the same distribution as the solution of the stochastic equation (2.1).


Proof. Suppose λ is an intensity for a counting process N and ∫_0^∞ λ(s)ds = ∞ a.s. Let γ(u) satisfy

γ(u) = inf{t : ∫_0^t λ(s)ds ≥ u}.

Then, since γ(u + v) ≥ γ(u),

E[N(γ(u+v) ∧ σn ∧ T) − ∫_0^{γ(u+v)∧σn∧T} λ(s)ds | F_{γ(u)}] = N(γ(u) ∧ σn ∧ T) − ∫_0^{γ(u)∧σn∧T} λ(s)ds.

The monotone convergence argument lets us send T and n to infinity, and we have

E[N(γ(u + v)) − (u + v)|F_{γ(u)}] = N(γ(u)) − u,

so Y(u) = N(γ(u)) is a Poisson process. But γ(τ(t)) = t, so (2.1) is satisfied.

If ∫_0^∞ λ(s)ds < ∞ with positive probability, then let Y* be a unit Poisson process that is independent of Ft for all t ≥ 0 and consider N^ε(t) = N(t) + Y*(εt). N^ε has intensity λ(t) + ε, and Y^ε, obtained as above, converges to

Y(u) = N(γ(u)) for u < τ(∞), Y(u) = N(∞) + Y*(u − τ(∞)) for u ≥ τ(∞)

(except at points of discontinuity).

Multivariate counting processes

D_c^d[0,∞) d-dimensional counting paths

Condition 2.17 λ_k : [0,∞) × D_c^d[0,∞) × D_E[0,∞) → [0,∞), measurable, nonanticipating,

∫_0^t Σ_k λ_k(s, z, v)ds < ∞, v ∈ D_c^d[0,∞), z ∈ D_E[0,∞)

Z cadlag, E-valued and independent of the independent unit Poisson processes Y_1, . . . , Y_d.

N_k(t) = Y_k(∫_0^t λ_k(s, Z, N)ds) (2.2)

where N = (N_1, . . . , N_d). Existence and uniqueness hold (including for d = ∞) and

N_k(t ∧ σn) − ∫_0^{t∧σn} λ_k(s, Z, N)ds

is a martingale for σn = inf{t : Σ_k N_k(t) ≥ n}, but what is the correct filtration?

Multiparameter optional stopping theorem

I is a directed set with partial ordering t ≤ s: if t_1, t_2 ∈ I, there exists t_3 ∈ I such that t_1 ≤ t_3 and t_2 ≤ t_3.

{Ft, t ∈ I}: s ≤ t implies Fs ⊂ Ft.

A stochastic process X(t) indexed by I is a martingale if and only if for s ≤ t,

E[X(t)|Fs] = X(s).

An I-valued random variable τ is a stopping time if and only if {τ ≤ t} ∈ Ft, t ∈ I, and

Fτ = {A ∈ F : A ∩ {τ ≤ t} ∈ Ft, t ∈ I}

Lemma 2.18 (Kurtz [6]) Let X be a martingale and let τ_1 and τ_2 be stopping times assuming countably many values and satisfying τ_1 ≤ τ_2 a.s. If there exists a sequence {T_m} ⊂ I such that lim_{m→∞} P{τ_2 ≤ T_m} = 1, lim_{m→∞} E[|X(T_m)|1_{{τ_2≤T_m}^c}] = 0, and E[|X(τ_2)|] < ∞, then

E[X(τ_2)|F_{τ_1}] = X(τ_1)


Proof. Define

τ_i^m = τ_i on {τ_i ≤ T_m}, τ_i^m = T_m on {τ_i ≤ T_m}^c.

Let Γ ⊂ I be countable with P{τ_i ∈ Γ} = 1 and {T_m} ⊂ Γ. Then τ_i^m is a stopping time, since

{τ_i^m ≤ t} = ({τ_i^m ≤ t} ∩ {τ_i ≤ T_m}) ∪ ({τ_i^m ≤ t} ∩ {τ_i ≤ T_m}^c)
= (∪_{s∈Γ, s≤t, s≤T_m} {τ_i = s}) ∪ ({T_m ≤ t} ∩ {τ_i ≤ T_m}^c)

For A ∈ F_{τ_1},

∫_{A∩{τ_1^m=t}} X(τ_2^m)dP = Σ_{s∈Γ, s≤T_m} ∫_{A∩{τ_1^m=t}∩{τ_2^m=s}} X(s)dP
= Σ_{s∈Γ, s≤T_m} ∫_{A∩{τ_1^m=t}∩{τ_2^m=s}} X(T_m)dP
= ∫_{A∩{τ_1^m=t}} X(T_m)dP
= ∫_{A∩{τ_1^m=t}} X(t)dP = ∫_{A∩{τ_1^m=t}} X(τ_1^m)dP

Multiple time change

I = [0,∞)^d, u ∈ I, F_u = σ(Y_k(s_k) : s_k ≤ u_k, k = 1, . . . , d). Then

M_k(u) ≡ Y_k(u_k) − u_k

is an {F_u}-martingale. For

N_k(t) = Y_k(∫_0^t λ_k(s, Z, N)ds),

define τ_k(t) = ∫_0^t λ_k(s, Z, N)ds and τ(t) = (τ_1(t), . . . , τ_d(t)). Then τ(t) is an {F_u}-stopping time.

Lemma 2.19 Let G_t = F_{τ(t)}. If σ is a {G_t}-stopping time, then τ(σ) is an {F_u}-stopping time.

Approximation by discrete stopping times

Lemma 2.20 If τ is an {F_u}-stopping time, then τ^{(n)} defined by

τ_k^{(n)} = ([τ_k 2^n] + 1)/2^n

is an {F_u}-stopping time.

Proof.

{τ^{(n)} ≤ u} = ∩_k {τ_k^{(n)} ≤ u_k} = ∩_k {[τ_k 2^n] + 1 ≤ [u_k 2^n]} = ∩_k {τ_k < [u_k 2^n]/2^n}

Note that τ_k^{(n)} decreases to τ_k.

Martingale problems for multivariate counting processes

Let σn = inf{t : Σ_k N_k(t) ≥ n}.

Theorem 2.21 Let Condition 2.17 hold. For n = 1, 2, . . ., there exists a unique solution of (2.2) up to σn, τ_k(t) = ∫_0^t λ_k(s, Z, N)ds defines an {F_u}-stopping time τ(t) = (τ_1(t), . . . , τ_d(t)), and

N_k(t ∧ σn) − ∫_0^{t∧σn} λ_k(s, Z, N)ds

is an {F_{τ(t)}}-martingale.

Definition 2.22 Let Z be a cadlag, E-valued stochastic process, and let λ = (λ_1, . . . , λ_d) satisfy Condition 2.17. A multivariate counting process N is a solution of the martingale problem for (λ, Z) if for each k,

N_k(t ∧ σn) − ∫_0^{t∧σn} λ_k(s, Z, N)ds

is a martingale with respect to the filtration

G_t = σ(N(s), Z(r) : s ≤ t, r ≥ 0)

Existence and uniqueness for the martingale problem

Theorem 2.23 Let Z be a cadlag, E-valued stochastic process, and let λ = (λ_1, . . . , λ_d) satisfy Condition 2.17. Then there exists a unique solution of the martingale problem for (λ, Z).

Continuous time Markov chains

Let X be a Markov chain with values in Z^d. Let N_l(t) be the number of jumps with X(s) − X(s−) = l up to time t. Then

X(t) = X(0) + Σ_l l N_l(t).

Define β_l(k) = q_{k,k+l}, where q_{k,k+l} is the usual intensity for a transition from k to k + l. Then

X(t) = X(0) + Σ_l l Y_l(∫_0^t β_l(X(s))ds).

Generator for a Markov chain

f(X(t)) = f(X(0)) + Σ_l ∫_0^t (f(X(s−) + l) − f(X(s−)))dN_l(s)
= f(X(0)) + Σ_l ∫_0^t (f(X(s−) + l) − f(X(s−)))dM_l(s) + ∫_0^t Σ_l β_l(X(s))(f(X(s) + l) − f(X(s)))ds
= f(X(0)) + ∫_0^t Af(X(s))ds + M_f(t)

where M_l(t) = N_l(t) − ∫_0^t β_l(X(s))ds and

Af(k) = Σ_l β_l(k)(f(k + l) − f(k))

Martingale problem for a Markov chain

Definition 2.24 X is a solution of the martingale problem for A with domain D(A) if and only if there is a filtration {Ft} such that

f(X(t)) − f(X(0)) − ∫_0^t Af(X(s))ds (2.3)

is a (local) Ft-martingale for each f ∈ D(A).

Remark 2.25 Usually, one can take D(A) to be the collection of functions with finite support.

If Af is bounded, then (2.3) will be a martingale.

Chemical reactions

Standard notation for chemical reactions:

A + B →^κ C

is interpreted as "a molecule of A combines with a molecule of B to give a molecule of C."

A + B ⇌ C

means that the reaction can go in either direction, that is, a molecule of C can dissociate into a molecule of A and a molecule of B.

A + B →^{κ_1} C at a rate proportional to the numbers of molecules of A and B and inversely proportional to the volume.

C →^{κ_2} A + B at a rate proportional to the number of molecules of C.

N = Avogadro's number times the volume.

Let θ = (1, 1, −1)^T. Then X(t) = (X_A(t), X_B(t), X_C(t))^T satisfies

X(t) = X(0) − θY_1(κ_1 ∫_0^t X_A(s)X_B(s)/N ds) + θY_2(κ_2 ∫_0^t X_C(s)ds)
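A simulation sketch of this system, stepping from reaction to reaction (equivalent here to Gillespie's algorithm; the rate constants and initial counts are illustrative, not from the notes):

    import numpy as np

    def simulate(x0, kappa1, kappa2, N, T, rng):
        x = np.array(x0, dtype=float)        # (XA, XB, XC)
        theta = np.array([1.0, 1.0, -1.0])
        t, path = 0.0, [(0.0, x.copy())]
        while True:
            r1 = kappa1 * x[0] * x[1] / N    # rate of A + B -> C
            r2 = kappa2 * x[2]               # rate of C -> A + B
            total = r1 + r2
            if total <= 0:
                break
            t += rng.exponential(1.0 / total)
            if t >= T:
                break
            # A + B -> C changes X by -theta; C -> A + B changes X by +theta
            x = x - theta if rng.uniform() < r1 / total else x + theta
            path.append((t, x.copy()))
        return path

    rng = np.random.default_rng(3)
    path = simulate([50, 40, 0], kappa1=1.0, kappa2=0.5, N=100.0, T=10.0, rng=rng)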

Law of large numbers for concentrations

Z_A^N(t) = N^{−1}X_A(t) is the concentration of A.

Z^N(t) = Z^N(0) − N^{−1}Y_1(κ_1 ∫_0^t X_A(s)X_B(s)/N ds)θ + N^{−1}Y_2(κ_2 ∫_0^t X_C(s)ds)θ
= Z^N(0) − N^{−1}Y_1(Nκ_1 ∫_0^t Z_A^N(s)Z_B^N(s)ds)θ + N^{−1}Y_2(Nκ_2 ∫_0^t Z_C^N(s)ds)θ

Since Y_i(Nu)/N ≈ u,

Z^N(t) = Z^N(0) − κ_1 ∫_0^t Z_A^N(s)Z_B^N(s)ds θ + κ_2 ∫_0^t Z_C^N(s)ds θ + ε_N(t)

Let

Z(t) = Z(0) − κ_1 ∫_0^t Z_A(s)Z_B(s)ds θ + κ_2 ∫_0^t Z_C(s)ds θ.

Then there exists a constant K depending on κ_1, κ_2, Z^N(0), and Z(0) such that

|Z^N(t) − Z(t)| ≤ |Z^N(0) − Z(0)| + sup_{s≤t}|ε_N(s)| + K ∫_0^t |Z^N(s) − Z(s)|ds

Gronwall's inequality

Lemma 2.26 Let µ be a Borel measure on [0,∞), let ε ≥ 0, and let f be a Borel measurable function that is bounded on bounded intervals and satisfies

0 ≤ f(t) ≤ ε + ∫_{[0,t)} f(s)µ(ds).

Then f(t) ≤ ε e^{µ[0,t)}.

Proof. Iterate the inequality.
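One way to carry out the iteration (a sketch of the standard argument, not spelled out in the notes): substituting the bound into itself n times gives

f(t) ≤ ε + εµ[0,t) + ∫_{[0,t)} ∫_{[0,s)} f(r)µ(dr)µ(ds) ≤ · · · ≤ ε Σ_{k=0}^n (µ[0,t))^k/k! + R_n(t),

since the k-fold iterated integral over {r_1 < · · · < r_k < t} is at most (µ[0,t))^k/k!, and the remainder satisfies R_n(t) ≤ (sup_{s≤t} f(s))(µ[0,t))^{n+1}/(n + 1)! → 0 because f is bounded on [0, t]. Letting n → ∞ gives f(t) ≤ ε e^{µ[0,t)}.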

Martingale problems for diffusion processes

The Ito equation for a diffusion process is

X(t) = X(0) + ∫_0^t σ(X(s))dW(s) + ∫_0^t b(X(s))ds

W m-dimensional standard Brownian motion

σ : R^d → M^{d×m}, b : R^d → R^d

Define a(x) = σ(x)σ^T(x) (a d × d-matrix). Then by Ito's formula

f(X(t)) = f(X(0)) + ∫_0^t ∇f(X(s))^T σ(X(s))dW(s) + ∫_0^t Lf(X(s))ds

where

Lf(x) = ½ Σ_{i,j} a_{ij}(x)∂_{x_i}∂_{x_j} f(x) + Σ_i b_i(x)∂_{x_i} f(x)

Consequently,

f(X(t)) − f(X(0)) − ∫_0^t Lf(X(s))ds

is a local martingale.

Simplified martingale problem

Note that

M(t) = X(t) − X(0) − ∫_0^t b(X(s))ds = ∫_0^t σ(X(s))dW(s)

is a local martingale, as is

M_i(t)M_j(t) − ∫_0^t a_{ij}(X(s))ds

Lemma 2.27 If M_1 and M_2 are continuous local martingales and

M_1(t)M_2(t) − ∫_0^t C(s)ds

is a local martingale, then

[M_1, M_2]_t = ∫_0^t C(s)ds


Proof. By Ito's formula

M_1(t)M_2(t) = M_1(0)M_2(0) + ∫_0^t M_1(s)dM_2(s) + ∫_0^t M_2(s)dM_1(s) + [M_1, M_2]_t,

so M_1(t)M_2(t) − [M_1, M_2]_t is a local martingale, as is

∫_0^t C(s)ds − [M_1, M_2]_t.

Since this process is a continuous, finite variation local martingale, it must be identically zero a.s. (Theorem 2.3).

Equivalence of martingale problems

Theorem 2.28 Let a : R^d → M^{d×d} and b : R^d → R^d be measurable and bounded on bounded sets. Suppose that X is a continuous R^d-valued process and that there exists a filtration {Ft} such that

M(t) = X(t) − X(0) − ∫_0^t b(X(s))ds and M_i(t)M_j(t) − ∫_0^t a_{ij}(X(s))ds

are local martingales. Then X is a solution of the martingale problem for L with D(L) = C_c²(R^d).


Proof. Note that [M_i, M_j]_t = [X_i, X_j]_t, so that

f(X(t)) = f(X(0)) + ∫_0^t ∇f(X(s))^T dX(s) + ½ Σ_{i,j} ∫_0^t a_{ij}(X(s))∂_{x_i}∂_{x_j} f(X(s))ds
= f(X(0)) + ∫_0^t ∇f(X(s))^T dM(s) + ∫_0^t Lf(X(s))ds.

For f ∈ D(L), f and Lf are bounded, so

f(X(t)) − f(X(0)) − ∫_0^t Lf(X(s))ds

is an Ft-martingale.

Equivalence of martingale problem and SDE

Theorem 2.29 Suppose there exists ε > 0 such that ε|z|² ≤ z^T a(x)z ≤ ε^{−1}|z|². Then there exists a symmetric, nonsingular matrix σ(x) such that σ(x)² = a(x). If X is a solution of the martingale problem for L and M(t) ≡ X(t) − ∫_0^t b(X(s))ds, then

W(t) = ∫_0^t σ^{−1}(X(s))dM(s)

is a standard Brownian motion and

X(t) = X(0) + ∫_0^t σ(X(s))dW(s) + ∫_0^t b(X(s))ds. (2.4)

Proof. W is a local martingale and

[W_i, W_j]_t = ∫_0^t Σ_{k,l} σ_{ik}^{−1}(X(s))σ_{jl}^{−1}(X(s))d[M_k, M_l]_s = δ_{ij}t.

(2.4) follows from the fact that

∫_0^t σ(X(s))dW(s) = ∫_0^t σ(X(s))σ^{−1}(X(s))dM(s) = M(t) − M(0).


3. Weak convergence for stochastic processes

• General definition of weak convergence

• Prohorov’s theorem

• Skorohod topology

• Conditions for tightness in the Skorohod topology

• Skorohod representation theorem

• Continuous mapping theorem

• Martingale central limit theorem

• Diffusion approximations

Topological proof of convergence

(S, d) metric space

Fn : S → R

Fn → F in some sense (e.g., xn → x implies Fn(xn) → F (x))

Fn(xn) = 0

1. Show that {xn} is relatively compact

2. Show that any limit point of {xn} satisfies F(x) = 0

3. Show that the equation F(x) = 0 has a unique solution x0

4. Conclude that xn → x0

Convergence in distribution

(S, d) complete, separable metric space

Xn S-valued random variable

Xn converges in distribution to X (P_{Xn} converges weakly to P_X) if for each bounded continuous f on S,

lim_{n→∞} E[f(Xn)] = E[f(X)].

Denote convergence in distribution by Xn ⇒ X.

Equivalent statements

Xn converges in distribution to X if and only if

lim inf_{n→∞} P{Xn ∈ A} ≥ P{X ∈ A}, each open A,

or equivalently

lim sup_{n→∞} P{Xn ∈ B} ≤ P{X ∈ B}, each closed B.

Tightness and Prohorov's theorem

A sequence {Xn} is tight if for each ε > 0, there exists a compact set K_ε ⊂ S such that

sup_n P{Xn ∉ K_ε} ≤ ε.

Theorem 3.1 Suppose that {Xn} is tight. Then there exists a subsequence {n(k)} along which the random variables converge in distribution.


Skorohod topology on D_E[0,∞)

(E, r) complete, separable metric space

D_E[0,∞) space of cadlag, E-valued functions

xn → x ∈ D_E[0,∞) in the Skorohod (J1) topology if and only if there exist strictly increasing λn mapping [0,∞) onto [0,∞) such that for each T > 0,

lim_{n→∞} sup_{t≤T} (|λn(t) − t| + r(xn ∘ λn(t), x(t))) = 0.

The Skorohod topology is metrizable so that D_E[0,∞) is a complete, separable metric space.

Note that 1_{[1+1/n,∞)} → 1_{[1,∞)} in D_R[0,∞), but (1_{[1+1/n,∞)}, 1_{[1,∞)}) does not converge in D_{R²}[0,∞).

Conditions for tightness

S_0^n(T) collection of discrete F_t^n-stopping times; q(x, y) = 1 ∧ r(x, y)

Theorem 3.2 Suppose that for t ∈ T_0, a dense subset of [0,∞), {Xn(t)} is tight. Then the following are equivalent.

a) {Xn} is tight in D_E[0,∞).

b) (Kurtz) For T > 0, there exist β > 0 and random variables γn(δ, T) such that for 0 ≤ t ≤ T, 0 ≤ u ≤ δ, and 0 ≤ v ≤ t ∧ δ,

E[q^β(Xn(t + u), Xn(t)) ∧ q^β(Xn(t), Xn(t − v))|F_t^n] ≤ E[γn(δ, T)|F_t^n],

lim_{δ→0} lim sup_{n→∞} E[γn(δ, T)] = 0,

and

lim_{δ→0} lim sup_{n→∞} E[q^β(Xn(δ), Xn(0))] = 0. (3.1)

c) (Aldous) Condition (3.1) holds, and for each T > 0, there exists β > 0 such that

Cn(δ, T) ≡ sup_{τ∈S_0^n(T)} sup_{u≤δ} E[sup_{v≤δ∧τ} q^β(Xn(τ + u), Xn(τ)) ∧ q^β(Xn(τ), Xn(τ − v))]

satisfies lim_{δ→0} lim sup_{n→∞} Cn(δ, T) = 0.

Example

η_1, η_2, . . . iid, E[η_i] = 0, σ² = E[η_i²] < ∞

Xn(t) = (1/√n) Σ_{i=1}^{[nt]} η_i

Then

E[(Xn(t + u) − Xn(t))²|F_t^{Xn}] = (([n(t + u)] − [nt])/n)σ² ≤ (δ + 1/n)σ²

for u ≤ δ.

Uniqueness of limit

Theorem 3.3 If {Xn} is tight in D_E[0,∞) and

(Xn(t1), . . . , Xn(tk)) ⇒ (X(t1), . . . , X(tk))

for t1, . . . , tk ∈ T0, T0 dense in [0,∞), then Xn ⇒ X .

For the example, this condition follows from the central limit theorem.

Skorohod representation theorem

Theorem 3.4 Suppose that Xn ⇒ X. Then there exists a probability space (Ω, F, P) and random variables X̃n and X̃ such that X̃n has the same distribution as Xn, X̃ has the same distribution as X, and X̃n → X̃ a.s.

Continuous mapping theorem

Corollary 3.5 Let G : S → E and define C_G = {x ∈ S : G is continuous at x}. Suppose Xn ⇒ X and that P{X ∈ C_G} = 1. Then G(Xn) ⇒ G(X).

Some mappings on D_E[0,∞)

π_t : D_E[0,∞) → E, π_t(x) = x(t)

C_{π_t} = {x ∈ D_E[0,∞) : x(t) = x(t−)}

G_t : D_R[0,∞) → R, G_t(x) = sup_{s≤t} x(s)

C_{G_t} = {x ∈ D_R[0,∞) : lim_{s→t−} G_s(x) = G_t(x)} ⊃ {x ∈ D_R[0,∞) : x(t) = x(t−)}

G : D_R[0,∞) → D_R[0,∞), G(x)(t) = G_t(x), is continuous

H_t : D_E[0,∞) → R, H_t(x) = sup_{s≤t} r(x(s), x(s−))

C_{H_t} = {x ∈ D_E[0,∞) : lim_{s→t−} H_s(x) = H_t(x)} ⊃ {x ∈ D_E[0,∞) : x(t) = x(t−)}

H : D_E[0,∞) → D_R[0,∞), H(x)(t) = H_t(x), is continuous

Level crossing times

τ_c : D_R[0,∞) → [0,∞), τ_c(x) = inf{t : x(t) > c}

τ_c^− : D_R[0,∞) → [0,∞), τ_c^−(x) = inf{t : x(t) ≥ c or x(t−) ≥ c}

C_{τ_c} = C_{τ_c^−} = {x : τ_c(x) = τ_c^−(x)}

Note that τ_c^−(x) ≤ τ_c(x) and that xn → x implies

τ_c^−(x) ≤ lim inf_{n→∞} τ_c^−(xn) ≤ lim sup_{n→∞} τ_c(xn) ≤ τ_c(x)

Localization

Theorem 3.6 Suppose that for each α > 0, τ_n^α is a stopping time and {Xn(· ∧ τ_n^α)} is relatively compact. If

lim_{α→∞} sup_n P{τ_n^α ≤ α} = 0,

then {Xn} is relatively compact.

Compactification of the state space

Theorem 3.7 Let E ⊂ E_0 where E_0 is compact and the topology on E is the restriction of the topology on E_0. Suppose that for each n, Xn is a process with sample paths in D_E[0,∞) and that Xn ⇒ X in D_{E_0}[0,∞). If X has sample paths in D_E[0,∞), then Xn ⇒ X in D_E[0,∞).

Simplified conditions for tightness

S_0^n(T) collection of discrete F_t^n-stopping times; q(x, y) = 1 ∧ r(x, y)

Theorem 3.8 Assume the following:

a) For t ∈ T_0, a dense subset of [0,∞), {Xn(t)} is tight.

b) For T > 0, there exist β > 0 and random variables γn(δ, T) such that for 0 ≤ t ≤ T, 0 ≤ u ≤ δ,

E[q^β(Xn(t + u), Xn(t))|F_t^n] ≤ E[γn(δ, T)|F_t^n]

and

lim_{δ→0} lim sup_{n→∞} E[γn(δ, T)] = 0.

Then {Xn} is tight in D_E[0,∞).

Martingale central limit theorem

Theorem 3.9 Let {Mn} be martingales such that

lim_{n→∞} E[sup_{s≤t} |Mn(s) − Mn(s−)|] = 0

and

[Mn]_t → ct

in probability. Then Mn ⇒ √c W.

Theorem 3.10 (Vector-valued version) If for each 1 ≤ i ≤ d,

lim_{n→∞} E[sup_{s≤t} |M_n^i(s) − M_n^i(s−)|] = 0

and for each 1 ≤ i, j ≤ d,

[M_n^i, M_n^j]_t → c_{ij}t,

then Mn ⇒ σW, where W is d-dimensional standard Brownian motion and σ is a symmetric d × d-matrix satisfying σ² = c = ((c_{ij})).

Proof of tightness

To avoid a truncation argument, assume that Mn is square integrable and that E[|[Mn]_t − ct|] → 0. In particular, sup_n E[Mn(t)²] = sup_n E[[Mn]_t] < ∞, so {Mn(t)} is tight.

By monotonicity and the dominated convergence theorem, this assumption implies

E[sup_{t≤T} |[Mn]_t − ct|] → 0. (3.2)

E[(Mn(t + u) − Mn(t))²|F_t^n] = E[[Mn]_{t+u} − [Mn]_t|F_t^n] ≤ E[sup_{r≤T} ([Mn]_{r+δ} − [Mn]_r)|F_t^n],

for t ≤ T and 0 ≤ u ≤ δ, and (3.2) implies

lim_{δ→0} lim_{n→∞} E[sup_{r≤T} ([Mn]_{r+δ} − [Mn]_r)] = lim_{δ→0} cδ = 0.

Consequently, {Mn} is tight.

Martingale properties of the limit

Assume that Mn ⇒ M, selecting a subsequence if necessary. Since Mn is a martingale,

E[(Mn(t + s) − Mn(t)) ∏_{i=1}^m f_i(Mn(t_i))] = 0

for 0 ≤ t_1 < · · · < t_m ≤ t < t + s and bounded continuous f_i. Convergence in distribution and the uniform integrability of the sequence {(Mn(t + s) − Mn(t)) ∏_{i=1}^m f_i(Mn(t_i))} imply

E[(M(t + s) − M(t)) ∏_{i=1}^m f_i(M(t_i))] = 0

which in turn implies M is a martingale.

Let τ_n^r = inf{t : |Mn(t)| ≥ r}. Then M_n^{τ^r}(t)² ≤ 2([Mn]_t + r²), so {M_n^{τ^r}(t)²} is uniformly integrable. By the same argument as above, it follows that

M(t)² − ct

is a local martingale, and hence M is a Brownian motion.

Example

Let N be a unit Poisson process. Define

X(t) = ∫_0^t (−1)^{N(s)}ds

and

Xn(t) = (1/n) X(n²t) = n ∫_0^t (−1)^{N(n²s)}ds

Since, with M(t) = N(t) − t,

(−1)^{N(t)} = 1 − ∫_0^t 2(−1)^{N(s−)}dN(s) = 1 − ∫_0^t 2(−1)^{N(s−)}dM(s) − ∫_0^t 2(−1)^{N(s−)}ds,

we have

Xn(t) = (1 − (−1)^{N(n²t)})/(2n) − (1/n) ∫_0^{n²t} (−1)^{N(s−)}dM(s) = ε_n(t) + Mn(t)

Since [Mn]_t = N(n²t)/n² → t and sup_t |ε_n(t)| → 0, Xn ⇒ W.
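A Monte Carlo sketch of this scaling limit (the parameter choices are arbitrary):

    import numpy as np

    # Simulate Xn(t) = (1/n) * int_0^{n^2 t} (-1)^N(s) ds by summing signed
    # exponential holding times of the unit Poisson process N.
    def sample_Xn(n, t, rng):
        T = n * n * t
        x, s, sign = 0.0, 0.0, 1.0
        while True:
            w = rng.exponential(1.0)
            if s + w >= T:
                x += sign * (T - s)
                break
            x += sign * w
            s += w
            sign = -sign
        return x / n

    rng = np.random.default_rng(4)
    samples = np.array([sample_Xn(50, 1.0, rng) for _ in range(2000)])
    print(samples.mean(), samples.var())  # ≈ 0 and ≈ 1 = Var W(1)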

Convergence of coupled counting processes

Theorem 3.11 For each n, let N_1^n, . . . , N_m^n be counting processes satisfying [N_i^n, N_j^n]_t = 0 for i ≠ j (that is, there are no simultaneous jumps). Suppose that the H_i^n are nondecreasing processes with H_i^n(t) − H_i^n(t−) ≤ 1 for all i and t ≥ 0, that

N_i^n − H_i^n, i = 1, . . . , m

are G_t^n-martingales, and that H_i^n(t) is G_0^n-measurable for each i and t ≥ 0. If (H_1^n, . . . , H_m^n) ⇒ H = (H_1, . . . , H_m) in the Skorohod topology on D_{R^m}[0,∞), then (N_1^n, . . . , N_m^n) ⇒ (N_1, . . . , N_m), where (N_1, . . . , N_m) are counting processes with joint distribution determined by

ϕ_f(t) = E[e^{−Σ_{i=1}^m ∫_0^t f_i(s)dN_i(s)} | H] = 1 + Σ_{i=1}^m ∫_0^t ϕ_f(u−)(e^{−f_i(u)} − 1)dH_i(u)

for all nonnegative, continuous, R^m-valued functions f = (f_1, . . . , f_m).


Proof. Using the fact that there are no simultaneous jumps among the N_i^n,

e^{−Σ_{i=1}^m ∫_0^t f_i(s)dN_i^n(s)} = 1 + Σ_{i=1}^m ∫_0^t (e^{−f_i(u)} − 1)e^{−Σ_j ∫_0^{u−} f_j(s)dN_j^n(s)}dN_i^n(u) (3.3)
= 1 + Σ_{i=1}^m ∫_0^t (e^{−f_i(u)} − 1)e^{−Σ_j ∫_0^{u−} f_j(s)dN_j^n(s)}d(N_i^n(u) − H_i^n(u))
+ Σ_{i=1}^m ∫_0^t (e^{−f_i(u)} − 1)e^{−Σ_j ∫_0^{u−} f_j(s)dN_j^n(s)}dH_i^n(u).

Using the martingale assumption and the measurability assumption, conditioning both sides of (3.3) on H^n, we have

ϕ_f^n(t) = E[e^{−Σ_{i=1}^m ∫_0^t f_i(s)dN_i^n(s)} | H^n] = 1 + Σ_{i=1}^m ∫_0^t ϕ_f^n(u−)(e^{−f_i(u)} − 1)dH_i^n(u)

and the convergence of H^n to H implies the convergence of ϕ_f^n to ϕ_f. Convergence of the finite dimensional distributions follows. Convergence in distribution of the processes under the Skorohod topology follows by Exercise 14.

Diffusion approximations

Theorem 3.12 Let a : R^d → M^{d×d} be continuous, symmetric and nonnegative definite, and let b : R^d → R^d be continuous. Suppose that for each ν ∈ P(R^d), the martingale problem for (L, ν) with D(L) = C_c²(R^d) has a unique solution in C_{R^d}[0,∞).

Let Xn and Bn be cadlag, R^d-valued processes and let An be a cadlag, M^{d×d}-valued process. Suppose

Mn ≡ Xn − Bn and M_n^i M_n^j − A_n^{ij}, i, j = 1, . . . , d

are F_t^n-local martingales and that for each T > 0 and i, j = 1, . . . , d,

lim_{n→∞} E[sup_{t≤T} |Xn(t) − Xn(t−)|²] + E[sup_{t≤T} |Bn(t) − Bn(t−)|²] + E[sup_{t≤T} |A_n^{ij}(t) − A_n^{ij}(t−)|] = 0

and

sup_{t≤T} |B_n^i(t) − ∫_0^t b_i(Xn(s))ds| + sup_{t≤T} |A_n^{ij}(t) − ∫_0^t a_{ij}(Xn(s))ds| → 0

in probability.

Let X be a solution of the martingale problem for L, and suppose Xn(0) ⇒ X(0). Then Xn ⇒ X.

Example

Markov chain: H : E × U → E, ξ_1, ξ_2, . . . iid U-valued, X_0 E-valued and independent of the {ξ_k}. For k ≥ 0, define X_{k+1} = H(X_k, ξ_{k+1}). Then {X_k} is a Markov chain. Taking E = R, let

X_{k+1}^n = X_k^n + (1/√n)G_n(X_k^n, ξ_{k+1}).

For

∫_U G_n(x, u)ν(du) = (1/√n)b_n(x), a_n(x) = ∫_U G_n(x, u)²ν(du),

M_m^n = X_m^n − X_0^n − (1/n) Σ_{k=0}^{m−1} b_n(X_k^n) and (M_m^n)² − (1/n) Σ_{k=0}^{m−1} (a_n(X_k^n) − (1/n)b_n²(X_k^n))

are martingales. Set

Xn(t) = X_{[nt]}^n

Bn(t) = (1/n) Σ_{k=0}^{[nt]−1} b_n(X_k^n) = ∫_0^{[nt]/n} b_n(Xn(s))ds

An(t) = (1/n) Σ_{k=0}^{[nt]−1} (a_n(X_k^n) − (1/n)b_n²(X_k^n)) = ∫_0^{[nt]/n} a_n(Xn(s))ds − (1/n) ∫_0^{[nt]/n} b_n²(Xn(s))ds
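A sketch of such a chain for one concrete choice of G_n (chosen here for illustration so that b_n(x) = −x and a_n(x) → 1, making the limit an Ornstein-Uhlenbeck process; none of these choices come from the notes):

    import numpy as np

    def run_chain(n, t, x0, rng):
        # X^n_{k+1} = X^n_k + Gn(X^n_k, xi)/sqrt(n) with
        # Gn(x, xi) = xi - x/sqrt(n), xi uniform on {-1, +1}
        x = x0
        for _ in range(int(n * t)):
            xi = rng.choice([-1.0, 1.0])
            x += (xi - x / np.sqrt(n)) / np.sqrt(n)
        return x

    rng = np.random.default_rng(5)
    xs = np.array([run_chain(n=400, t=1.0, x0=1.0, rng=rng) for _ in range(1000)])
    # OU comparison: E X(1) = e^{-1} ≈ 0.368, Var X(1) = (1 - e^{-2})/2 ≈ 0.432
    print(xs.mean(), xs.var())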

Wright-Fisher model

P{X_{k+1}^n = l/n | X_k^n = x} = (n choose l) x^l (1 − x)^{n−l}

E[X_{k+1}^n − X_k^n | X_k^n = x] = 0, E[(X_{k+1}^n − X_k^n)² | X_k^n = x] = (1/n)x(1 − x)

Let Xn(t) = X_{[nt]}^n. Then

P{sup_{t≤T} |Xn(t) − Xn(t−)| ≥ ε} ≤ Σ_{k=0}^{[nT]−1} P{|X_{k+1}^n − X_k^n| ≥ ε} ≤ ([nT]/(ε⁴n⁴)) max_x E[(ξ_{n,x} − nx)⁴]

where ξ_{m,x} is binomially distributed with parameters m and x. Then

lim_{n→∞} E[sup_{t≤T} |Xn(t) − Xn(t−)|²] = 0

by uniform integrability. Xn is approximated by the diffusion with generator

Lf(x) = ½ x(1 − x)f″(x)
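A simulation sketch (population size, horizon, and x_0 are illustrative): the chain absorbs at 0 or 1, and the fixation probability starting from x_0 is approximately x_0, matching the martingale property of the limiting diffusion.

    import numpy as np

    def wright_fisher(n, generations, x0, rng):
        # one binomial resampling step per generation
        x = x0
        for _ in range(generations):
            x = rng.binomial(n, x) / n
        return x

    rng = np.random.default_rng(6)
    finals = np.array([wright_fisher(n=200, generations=2000, x0=0.3, rng=rng)
                       for _ in range(1000)])
    print((finals == 1.0).mean())  # ≈ 0.3 = x0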


4. Stochastic equations for general Markov processes in R^d

• Poisson random measures

• Stochastic integrals for space-time Poisson random measures

• Stochastic integrals for centered space-time Poisson random measures

• Stochastic equations for Markov processes

Poisson distribution

Definition 4.1 A random variable X has a Poisson distribution with parameter λ > 0 (write X ∼ Poisson(λ)) if for each k ∈ {0, 1, 2, . . .},

P{X = k} = (λ^k/k!)e^{−λ}.

E[X] = λ, Var(X) = λ,

and the characteristic function of X is

E[e^{iθX}] = e^{λ(e^{iθ}−1)}.

Since the characteristic function of a random variable characterizes its distribution, a direct computation gives

Proposition 4.2 If X_1, X_2, . . . are independent random variables with X_i ∼ Poisson(λ_i) and Σ_{i=1}^∞ λ_i < ∞, then

X = Σ_{i=1}^∞ X_i ∼ Poisson(Σ_{i=1}^∞ λ_i)

Poisson sums of Bernoulli random variables

Proposition 4.3 Let N ∼ Poisson(λ), and suppose that Y_1, Y_2, . . . are i.i.d. Bernoulli random variables with parameter p ∈ [0, 1]. If N is independent of the Y_i, then Σ_{i=1}^N Y_i ∼ Poisson(λp).

For j = 1, . . . , m, let e_j be the vector in R^m that has all its entries equal to zero, except for the jth which is 1. For θ, y ∈ R^m, let

⟨θ, y⟩ = Σ_{j=1}^m θ_j y_j.

Proposition 4.4 Let N ∼ Poisson(λ). Suppose that Y_1, Y_2, . . . are independent R^m-valued random variables such that for all k ≥ 1 and j ∈ {1, . . . , m},

P{Y_k = e_j} = p_j,

where Σ_{j=1}^m p_j = 1. Define X = (X_1, . . . , X_m) = Σ_{k=1}^N Y_k. If N is independent of the Y_k, then X_1, . . . , X_m are independent random variables and X_j ∼ Poisson(λp_j).

Poisson random measures

Let (U, d_U) be a complete, separable metric space, and let ν be a σ-finite measure on U. Let N(U) denote the collection of counting measures on U.

Definition 4.5 A Poisson random measure on U with mean measure ν is a random counting measure ξ (that is, an N(U)-valued random variable) such that

a) for A ∈ B(U), ξ(A) has a Poisson distribution with expectation ν(A);

b) ξ(A) and ξ(B) are independent if A ∩ B = ∅.

For f ∈ M(U), f ≥ 0, define

ψ_ξ(f) = E[exp{−∫_U f(u)ξ(du)}] = exp{−∫_U (1 − e^{−f})dν}

(Verify the second equality by approximating f by simple functions.)

Existence

Proposition 4.6 Suppose that ν is a measure on U such that ν(U) < ∞. Then there exists a Poisson random measure with mean measure ν.

Proof. The case ν(U) = 0 is trivial, so assume that ν(U) ∈ (0,∞). Let N be a Poisson random variable defined on a probability space (Ω, F, P) with E[N] = ν(U). Let X_1, X_2, . . . be iid U-valued random variables such that for every A ∈ B(U),

P{X_j ∈ A} = ν(A)/ν(U),

and assume that N is independent of the X_j.

Define ξ by ξ(A) = Σ_{k=1}^N 1_{{X_k∈A}}. In other words, ξ = Σ_{k=1}^N δ_{X_k} where, for each x ∈ U, δ_x is the Dirac mass at x.

Extend the existence result to σ-finite measures by partitioning U = ∪_i U_i, where ν(U_i) < ∞.
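The construction translates directly into a sampler; a sketch on U = [0, 1]² with ν = c × Lebesgue (the region and the constant are illustrative):

    import numpy as np

    def poisson_random_measure(c, rng):
        n = rng.poisson(c)               # N ~ Poisson(nu(U)) with nu(U) = c
        return rng.uniform(size=(n, 2))  # N iid points with law nu / nu(U)

    rng = np.random.default_rng(7)
    points = poisson_random_measure(c=30.0, rng=rng)
    # xi(A) = number of points in A; for A = [0, 1/2] x [0, 1],
    # the count below is Poisson(15) distributed
    print(np.sum(points[:, 0] <= 0.5))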

Identities

Let ξ be a Poisson random measure with mean measure ν.

Lemma 4.7 Suppose f ∈ M(U), f ≥ 0. Then

E[∫ f(y)ξ(dy)] = ∫ f(y)ν(dy)

Lemma 4.8 Suppose ν is nonatomic and let f ∈ M(N(U) × U), f ≥ 0. Then

E[∫_U f(ξ, y)ξ(dy)] = E[∫_U f(ξ + δ_y, y)ν(dy)]


Proof. Suppose 0 ≤ f ≤ 1_{U_0}, where ν(U_0) < ∞. Let U_0 = ∪_k U_k^n, where the U_k^n are disjoint and diam(U_k^n) ≤ n^{−1}. If ξ(U_k^n) is 0 or 1, then

∫_{U_k^n} f(ξ, y)ξ(dy) = ∫_{U_k^n} f(ξ(· ∩ U_k^{n,c}) + δ_y, y)ξ(dy)

Consequently, if max_k ξ(U_k^n) ≤ 1,

∫_{U_0} f(ξ, y)ξ(dy) = Σ_k ∫_{U_k^n} f(ξ(· ∩ U_k^{n,c}) + δ_y, y)ξ(dy)

Since ξ(U_0) < ∞, for n sufficiently large, max_k ξ(U_k^n) ≤ 1, and

E[∫_U f(ξ, y)ξ(dy)] = E[∫_{U_0} f(ξ, y)ξ(dy)]
= lim_{n→∞} Σ_k E[∫_{U_k^n} f(ξ(· ∩ U_k^{n,c}) + δ_y, y)ξ(dy)]
= lim_{n→∞} Σ_k E[∫_{U_k^n} f(ξ(· ∩ U_k^{n,c}) + δ_y, y)ν(dy)]
= E[∫_U f(ξ + δ_y, y)ν(dy)].

Note that the last equality follows from the fact that

f(ξ(· ∩ U_k^{n,c}) + δ_y, y) ≠ f(ξ + δ_y, y)

only if ξ(U_k^n) > 0, and hence, assuming 0 ≤ f ≤ 1_{U_0},

|Σ_k ∫_{U_k^n} f(ξ(· ∩ U_k^{n,c}) + δ_y, y)ν(dy) − ∫_{U_0} f(ξ + δ_y, y)ν(dy)| ≤ Σ_k ξ(U_k^n)ν(U_k^n),

where the expectation of the right side is Σ_k ν(U_k^n)² = ∫_{U_0} ν(U^n(y))ν(dy) ≤ ∫_{U_0} ν(U_0 ∩ B_{1/n}(y))ν(dy), where U^n(y) = U_k^n if y ∈ U_k^n, and lim_{n→∞} ν(U_0 ∩ B_{1/n}(y)) = 0 since ν is nonatomic.

Space-time Poisson random measures

Let ξ be a Poisson random measure on U × [0,∞) with mean measure ν × ℓ (where ℓ denotes Lebesgue measure).

ξ(A, t) ≡ ξ(A × [0, t]) is a Poisson process with parameter ν(A).

ξ̃(A, t) ≡ ξ(A × [0, t]) − ν(A)t is a martingale.

Definition 4.9 ξ is {Ft}-compatible if, for each A ∈ B(U), ξ(A, ·) is Ft-adapted and for all t, s ≥ 0, ξ(A × (t, t + s]) is independent of Ft.

Stochastic integrals for Poisson random measures

For i = 1, . . . , m, let t_i < r_i and A_i ∈ B(U), and let η_i be F_{t_i}-measurable. Let

X(u, t) = Σ_i η_i 1_{A_i}(u)1_{[t_i,r_i)}(t),

and note that

X(u, t−) = Σ_i η_i 1_{A_i}(u)1_{(t_i,r_i]}(t). (4.1)

Define

I_ξ(X, t) = ∫_{U×[0,t]} X(u, s−)ξ(du × ds) = Σ_i η_i ξ(A_i × (t_i, r_i]).

Then

E[|I_ξ(X, t)|] ≤ E[∫_{U×[0,t]} |X(u, s−)|ξ(du × ds)] = ∫_{U×[0,t]} E[|X(u, s)|]ν(du)ds

and if the right side is finite, E[I_ξ(X, t)] = ∫_{U×[0,t]} E[X(u, s)]ν(du)ds.

Estimates in L^{1,0}

|I_ξ(X, t)| ∧ 1 ≤ ∫_{U×[0,t]} |X(u, s−)| ∧ 1 ξ(du × ds)

and

E[sup_{t≤T} |I_ξ(X, t)| ∧ 1] ≤ ∫_{U×[0,T]} E[|X(u, s)| ∧ 1]ν(du)ds

Definition 4.10 Let L^{1,0}(U, ν) denote the space of B(U) × B[0,∞) × F-measurable mappings (u, s, ω) → X(u, s, ω) such that ∫_0^∞ e^{−s} ∫_U E[|X(u, s)| ∧ 1]ν(du)ds < ∞.

Let S^− denote the collection of B(U) × B[0,∞) × F-measurable mappings (u, s, ω) → Σ_{i=1}^m η_i(ω)1_{A_i}(u)1_{(t_i,r_i]}(s) defined as in (4.1).

Lemma 4.11

d_{1,0}(X, Y) = ∫_0^∞ e^{−s} ∫_U E[|X(u, s) − Y(u, s)| ∧ 1]ν(du)ds

defines a metric on L^{1,0}(U, ν), and the definition of I_ξ extends to the closure of S^− in L^{1,0}(U, ν).

The predictable σ-algebra

Warning: Let N be a unit Poisson process. Then ∫_0^∞ e^{−s}E[|N(s) − N(s−)| ∧ 1]ds = 0, but P{∫_0^t N(s)dN(s) ≠ ∫_0^t N(s−)dN(s)} = 1 − e^{−t}.

Definition 4.12 Let (Ω, F, P) be a probability space and let {Ft} be a filtration in F. The σ-algebra P of predictable sets is the smallest σ-algebra in B(U) × B[0,∞) × F containing sets of the form A × (t_0, t_0 + r_0] × B for A ∈ B(U), t_0, r_0 ≥ 0, and B ∈ F_{t_0}.

Remark 4.13 Note that for B ∈ F_{t_0}, 1_{A×(t_0,t_0+r_0]×B}(u, t, ω) is left continuous in t and adapted, and that the mapping (u, t, ω) → X(u, t−, ω), where X(u, t−, ω) is defined in (4.1), is P-measurable.

Definition 4.14 A stochastic process X on U × [0,∞) is predictable if the mapping (u, t, ω) → X(u, t, ω) is P-measurable.

Lemma 4.15 If the mapping (u, t, ω) → X(u, t, ω) is B(U) × B[0,∞) × F-measurable and adapted and is left continuous in t, then X is predictable.

Proof. Let 0 = t_0^n < t_1^n < · · · with t_{i+1}^n − t_i^n ≤ n^{−1}. Define X_n(u, t, ω) = X(u, t_i^n, ω) for t_i^n < t ≤ t_{i+1}^n. Then X_n is predictable and lim_{n→∞} X_n(u, t, ω) = X(u, t, ω) for all (u, t, ω).

Stochastic integrals for predictable processes

Lemma 4.16 Let G ∈ P, B ∈ B(U) with ν(B) < ∞, and b > 0. Then 1_{B×[0,b]}(u, t)1_G(u, t, ω) is a predictable process and

I_ξ(1_{B×[0,b]}1_G, t)(ω) = ∫_{U×[0,t]} 1_{B×[0,b]}(u, s)1_G(u, s, ω)ξ(du × ds, ω) a.s. (4.2)

and

E[∫_{U×[0,t]} 1_{B×[0,b]}(u, s)1_G(u, s, ·)ξ(du × ds)] = E[∫_{U×[0,t]} 1_{B×[0,b]}(u, s)1_G(u, s, ·)ν(du)ds] (4.3)

Proof. Let

A = {∪_{i=1}^m A_i × (t_i, t_i + r_i] × G_i : t_i, r_i ≥ 0, A_i ∈ B(U), G_i ∈ F_{t_i}}.

Then A is an algebra, (4.2) holds by definition, and (4.3) holds by direct calculation. The collection of G that satisfy (4.2) and (4.3) is closed under increasing unions and decreasing intersections, and the monotone class theorem (see Theorem 4.1 of the Appendix of [2]) gives the lemma.


Lemma 4.17 Let X be a predictable process satisfying ∫_0^∞ e^{−s} ∫_U E[|X(u, s)| ∧ 1]ν(du)ds < ∞. Then ∫_{U×[0,t]} |X(u, s)|ξ(du × ds) < ∞ a.s. and

I_ξ(X, t)(ω) = ∫_{U×[0,t]} X(u, s, ω)ξ(du × ds, ω) a.s.

Proof. Approximate by simple functions.

Consequences of predictability

Lemma 4.18 If X is predictable and ∫_{U×[0,t]} |X(u, s)| ∧ 1 ν(du)ds < ∞ a.s. for all t, then

∫_{U×[0,t]} |X(u, s)|ξ(du × ds) < ∞ a.s. (4.4)

and

∫_{U×[0,t]} X(u, s)ξ(du × ds)

exists a.s.

Proof. Let τ_c = inf{t : ∫_{U×[0,t]} |X(u, s)| ∧ 1 ν(du)ds ≥ c}, and consider X_c(u, s) = 1_{[0,τ_c]}(s)X(u, s). Then X_c satisfies the conditions of Lemma 4.17, so

∫_{U×[0,t]} |X(u, s)| ∧ 1 ξ(du × ds) < ∞ a.s.

But this implies ξ({(u, s) : s ≤ t, |X(u, s)| > 1}) < ∞, so (4.4) holds.

Martingale properties

Theorem 4.19 Suppose X is predictable and ∫_{U×[0,t]} E[|X(u, s)|]ν(du)ds < ∞ for each t > 0. Then

∫_{U×[0,t]} X(u, s)ξ(du × ds) − ∫_0^t ∫_U X(u, s)ν(du)ds

is an {Ft}-martingale.


Proof. Let A ∈ Ft and define X_A(u, s) = 1_A X(u, s)1_{(t,t+r]}(s). Then X_A is predictable and

E[1_A ∫_{U×(t,t+r]} X(u, s)ξ(du × ds)] = E[∫_{U×[0,t+r]} X_A(u, s)ξ(du × ds)]
= E[∫_{U×[0,t+r]} X_A(u, s)ν(du)ds]
= E[1_A ∫_{U×(t,t+r]} X(u, s)ν(du)ds]

and hence

E[∫_{U×(t,t+r]} X(u, s)ξ(du × ds)|Ft] = E[∫_{U×(t,t+r]} X(u, s)ν(du)ds|Ft].

Local martingales

Lemma 4.20 If

∫_{U×[0,t]} |X(u, s)|ν(du)ds < ∞ a.s., t ≥ 0,

then

∫_{U×[0,t]} X(u, s)ξ(du × ds) − ∫_{U×[0,t]} X(u, s)ν(du)ds

is a local martingale.

Proof. If τ is a stopping time and X is predictable, then 1_{[0,τ]}(s)X(u, s) is predictable. Let

τ_c = inf{t > 0 : ∫_{U×[0,t]} |X(u, s)|ν(du)ds ≥ c}.

Then

∫_{U×[0,t∧τ_c]} X(u, s)ξ(du × ds) − ∫_{U×[0,t∧τ_c]} X(u, s)ν(du)ds = ∫_{U×[0,t]} 1_{[0,τ_c]}(s)X(u, s)ξ(du × ds) − ∫_{U×[0,t]} 1_{[0,τ_c]}(s)X(u, s)ν(du)ds

is a martingale.

Representation of counting processes

Let U = [0,∞) and ν = ℓ. Let λ be a nonnegative, predictable process, and define G = {(u, t) : u ≤ λ(t)}. Then

N(t) = ∫_{[0,∞)×[0,t]} 1_G(u, s)ξ(du × ds) = ∫_{[0,∞)×[0,t]} 1_{[0,λ(s)]}(u)ξ(du × ds)

is a counting process with intensity λ.

Stochastic equation for a counting process

λ : D_c[0,∞) × [0,∞) → [0,∞), λ(z, t) = λ(z^t, t), t ≥ 0, λ(z, t) cadlag for each z ∈ D_c[0,∞).

N(t) = ∫_{[0,∞)×[0,t]} 1_{[0,λ(N,s−)]}(u)ξ(du × ds)
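For a deterministic, bounded intensity this representation is exactly Poisson thinning, which gives a simple sampler (a sketch; the intensity and its bound are illustrative):

    import numpy as np

    def thin_counting_process(lam, lam_max, T, rng):
        # points of the Poisson random measure xi on [0, lam_max] x [0, T] ...
        n = rng.poisson(lam_max * T)
        u = rng.uniform(0, lam_max, n)
        s = rng.uniform(0, T, n)
        # ... keep those under the graph of lam: u <= lam(s)
        return np.sort(s[u <= lam(s)])

    rng = np.random.default_rng(8)
    jumps = thin_counting_process(lam=lambda s: 1.0 + np.sin(s) ** 2,
                                  lam_max=2.0, T=10.0, rng=rng)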

Semimartingale property

Corollary 4.21 If X is predictable and ∫_{U×[0,t]} |X(u, s)| ∧ 1 ν(du)ds < ∞ a.s. for all t, then ∫_{U×[0,t]} |X(u, s)|ξ(du × ds) < ∞ a.s. and

∫_{U×[0,t]} X(u, s)ξ(du × ds)
= ∫_{U×[0,t]} 1_{{|X(u,s)|≤1}}X(u, s)ξ(du × ds) − ∫_0^t ∫_U 1_{{|X(u,s)|≤1}}X(u, s)ν(du)ds (a local martingale)
+ ∫_0^t ∫_U 1_{{|X(u,s)|≤1}}X(u, s)ν(du)ds + ∫_{U×[0,t]} 1_{{|X(u,s)|>1}}X(u, s)ξ(du × ds) (finite variation)

is a semimartingale.

Stochastic integrals for centered Poisson random measures

Let ξ̃(du × ds) = ξ(du × ds) − ν(du)ds.

For

X(u, t−) = Σ_i η_i 1_{A_i}(u)1_{(t_i,r_i]}(t)

as in (4.1), define

I_{ξ̃}(X, t) = ∫_{U×[0,t]} X(u, s−)ξ̃(du × ds) = ∫_{U×[0,t]} X(u, s−)ξ(du × ds) − ∫_0^t ∫_U X(u, s)ν(du)ds

and note that

E[I_{ξ̃}(X, t)²] = ∫_{U×[0,t]} E[X(u, s)²]ν(du)ds

if the right side is finite. Then I_{ξ̃}(X, ·) is a square-integrable martingale.


Extension of integral 107

The integral extends to predictable integrands satisfying

∫_{U×[0,t]} |X(u,s)|² ∧ |X(u,s)| ν(du)ds < ∞ a.s. (4.5)

so that

∫_{U×[0,t∧τ]} X(u,s) ξ̃(du×ds) = ∫_{U×[0,t]} 1_{[0,τ]}(s) X(u,s) ξ̃(du×ds) (4.6)

is a martingale for any stopping time τ satisfying

E[∫_{U×[0,t∧τ]} |X(u,s)|² ∧ |X(u,s)| ν(du)ds] < ∞,

and (4.6) is a local square-integrable martingale if

∫_{U×[0,t]} |X(u,s)|² ν(du)ds < ∞ a.s.


Quadratic variation 108

Note that if X is predictable and

∫_{U×[0,t]} |X(u,s)| ∧ 1 ν(du)ds < ∞ a.s., t ≥ 0,

then

∫_{U×[0,t]} |X(u,s)|² ∧ 1 ν(du)ds < ∞ a.s., t ≥ 0,

and

[I_ξ(X,·)]_t = ∫_{U×[0,t]} X²(u,s) ξ(du×ds).

Similarly, if

∫_{U×[0,t]} |X(u,s)|² ∧ |X(u,s)| ν(du)ds < ∞ a.s.,

then

[I_ξ̃(X,·)]_t = ∫_{U×[0,t]} X²(u,s) ξ(du×ds).


Semimartingale properties 109

Theorem 4.22 Let Y be a cadlag, adapted process. If X satisfies (4.4), then I_ξ(X,·) is a semimartingale and

∫_0^t Y(s−) dI_ξ(X,s) = ∫_{U×[0,t]} Y(s−) X(u,s) ξ(du×ds),

and if X satisfies (4.5), then I_ξ̃(X,·) is a semimartingale and

∫_0^t Y(s−) dI_ξ̃(X,s) = ∫_{U×[0,t]} Y(s−) X(u,s) ξ̃(du×ds).


Levy processes 110

Theorem 4.23 Let U = ℝ and ∫_ℝ |u|² ∧ 1 ν(du) < ∞. Then

Z(t) = ∫_{[−1,1]×[0,t]} u ξ̃(du×ds) + ∫_{[−1,1]^c×[0,t]} u ξ(du×ds)

is a process with stationary, independent increments with

E[e^{iθZ(t)}] = exp{t ∫_ℝ (e^{iθu} − 1 − iθu 1_{[−1,1]}(u)) ν(du)}.


Proof. By Itô's formula,

e^{iθZ(t)} = 1 + ∫_0^t iθ e^{iθZ(s−)} dZ(s) + Σ_{s≤t} (e^{iθZ(s)} − e^{iθZ(s−)} − iθ e^{iθZ(s−)} ΔZ(s))
= 1 + ∫_{[−1,1]×[0,t]} iθ e^{iθZ(s−)} u ξ̃(du×ds) + ∫_{[−1,1]^c×[0,t]} iθ e^{iθZ(s−)} u ξ(du×ds)
  + ∫_{ℝ×[0,t]} (e^{iθ(Z(s−)+u)} − e^{iθZ(s−)} − iθ e^{iθZ(s−)} u) ξ(du×ds)
= 1 + ∫_{[−1,1]×[0,t]} iθ e^{iθZ(s−)} u ξ̃(du×ds)
  + ∫_{ℝ×[0,t]} e^{iθZ(s−)} (e^{iθu} − 1 − iθu 1_{[−1,1]}(u)) ξ(du×ds).

Taking expectations,

ϕ(θ,t) = 1 + ∫_{ℝ×[0,t]} ϕ(θ,s)(e^{iθu} − 1 − iθu 1_{[−1,1]}(u)) ν(du)ds,

so ϕ(θ,t) = exp{t ∫_ℝ (e^{iθu} − 1 − iθu 1_{[−1,1]}(u)) ν(du)}.


Scaling 112

Let U = ℝ and write ξ = Σ δ_{(u_i,s_i)}. Define ξ_{a,b} = Σ δ_{(au_i,bs_i)}. Then ξ_{a,b} is a Poisson random measure with mean measure b^{−1} ν_a(du) ds, where ν_a((c,d]) = ν((a^{−1}c, a^{−1}d]). If ν has a density γ, then γ_a(u) = a^{−1} γ(a^{−1}u).

Let

Z_{a,b}(t) = ∫_{[−1,1]×[0,t]} u ξ̃_{a,b}(du×ds) + ∫_{[−1,1]^c×[0,t]} u ξ_{a,b}(du×ds)
= ∫_{[−a^{−1},a^{−1}]×[0,b^{−1}t]} au ξ̃(du×ds) + ∫_{[−a^{−1},a^{−1}]^c×[0,b^{−1}t]} au ξ(du×ds)
= a Z(b^{−1}t) + ∫_{−∞}^∞ au (1_{[−1,1]}(u) − 1_{[−a^{−1},a^{−1}]}(u)) ν(du) b^{−1}t.

Example: γ(u) = c|u|^{−1−α}. Then the mean measure for ξ_{a,b} is c a^α |u|^{−1−α} b^{−1} du ds and the "drift" term on the right vanishes by symmetry. Consequently, if b = a^α, then Z_{a,b}(t) = aZ(a^{−α}t) has the same distribution as Z.


Approximation of Levy processes 113

For 0 < ε < 1, let

Z_ε(t) = ∫_{([−1,−ε)∪(ε,1])×[0,t]} u ξ̃(du×ds) + ∫_{[−1,1]^c×[0,t]} u ξ(du×ds)
= ∫_{((−∞,−ε)∪(ε,∞))×[0,t]} u ξ(du×ds) − t ∫_{[−1,−ε)∪(ε,1]} u ν(du),

that is, throw out all jumps of size less than or equal to ε and the corresponding centering. Then

E[|Z_ε(t) − Z(t)|²] = t ∫_{[−ε,ε]} u² ν(du).

Consequently, since Z_ε − Z is a square-integrable martingale, Doob's inequality gives

lim_{ε→0} E[sup_{s≤t} |Z_ε(s) − Z(s)|²] = 0.
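A hedged numerical sketch of this approximation (not from the notes), for the hypothetical symmetric measure ν(du) = c|u|^{−1−α} du on [−1,1]\{0}: Z_ε is compound Poisson, its centering vanishes by symmetry, and its variance approaches that of Z at rate ε^{2−α}.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical symmetric Levy measure nu(du) = c|u|^(-1-alpha) du on [-1,1]:
# infinite total mass, but finite mass outside (-eps, eps).
c, alpha = 1.0, 0.5

def sample_Z_eps(t, eps):
    """One sample of Z_eps(t): compound-Poisson jumps of size > eps.

    The centering ∫_{eps<|u|<=1} u nu(du) vanishes by symmetry here."""
    m = 2 * c * (eps ** -alpha - 1.0) / alpha        # nu{eps < |u| <= 1}
    n = rng.poisson(m * t)
    w = rng.uniform(size=n)                          # inverse-CDF sampling of |u|
    mag = (eps ** -alpha - w * (eps ** -alpha - 1.0)) ** (-1.0 / alpha)
    return (rng.choice([-1.0, 1.0], size=n) * mag).sum()

# Var Z_eps(1) = ∫_{eps<|u|<=1} u^2 nu(du) = 2c(1 - eps^(2-alpha))/(2-alpha),
# and E|Z_eps(1)-Z(1)|^2 = 2c eps^(2-alpha)/(2-alpha) -> 0 as eps -> 0.
for eps in (0.3, 0.1, 0.03):
    samples = [sample_Z_eps(1.0, eps) for _ in range(2000)]
    print(eps, np.var(samples), 2 * c * (1 - eps ** (2 - alpha)) / (2 - alpha))
```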


Summary on stochastic integrals 114

If X is predictable and ∫_{U×[0,t]} |X(u,s)| ∧ 1 ν(du)ds < ∞ a.s. for all t, then ∫_{U×[0,t]} |X(u,s)| ξ(du×ds) < ∞ a.s. and

∫_{U×[0,t]} X(u,s) ξ(du×ds)
= ∫_{U×[0,t]} 1_{|X(u,s)|≤1} X(u,s) ξ(du×ds) − ∫_0^t ∫_U 1_{|X(u,s)|≤1} X(u,s) ν(du)ds   [local martingale]
+ ∫_0^t ∫_U 1_{|X(u,s)|≤1} X(u,s) ν(du)ds + ∫_{U×[0,t]} 1_{|X(u,s)|>1} X(u,s) ξ(du×ds)   [finite variation]

is a semimartingale.

Exercise 4.24 Give an example of a right-continuous, adapted process X such that ∫_{U×[0,t]} |X(u,s)| ∧ 1 ν(du)ds < ∞ a.s. but ∫_{U×[0,t]} |X(u,s)| ξ(du×ds) = ∞ a.s.


If X is predictable and

∫_{U×[0,t]} |X(u,s)|² ∧ |X(u,s)| ν(du)ds < ∞ a.s.,

then

∫_{U×[0,t]} X(u,s) ξ̃(du×ds)
= lim_{ε→0+} ∫_{U×[0,t]} 1_{|X(u,s)|≥ε} X(u,s) ξ̃(du×ds)
= lim_{ε→0+} (∫_{U×[0,t]} 1_{|X(u,s)|≥ε} X(u,s) ξ(du×ds) − ∫_0^t ∫_U 1_{|X(u,s)|≥ε} X(u,s) ν(du)ds)

exists and is a local martingale.


Markov processes 116

Markov chain: X_{n+1} = H(X_n, η_{n+1}), {η_n} iid and independent of X_0.

ℝ^d-valued Markov process:

X(t) = X(0) + ∫_0^t σ(X(s)) dW(s) + ∫_0^t b(X(s)) ds (4.7)
+ ∫_{U_1×[0,t]} α_1(X(s−),u) ξ̃(du×ds) + ∫_{U_2×[0,t]} α_2(X(s−),u) ξ(du×ds)

where σ : ℝ^d → M^{d×m}, b : ℝ^d → ℝ^d, and for each compact K ⊂ ℝ^d,

sup_{x∈K} (|σ(x)| + |b(x)| + ∫_{U_1} |α_1(x,u)|² ν(du) + ∫_{U_2} |α_2(x,u)| ∧ 1 ν(du)) < ∞ (4.8)
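A minimal Euler-type simulation sketch for (4.7), assuming for simplicity that ν is finite on U_1 and U_2 so both jump integrals reduce to compound-Poisson sums over each time step; all coefficients and rates below are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 1-d coefficients for (4.7); nu is taken finite on U1 and U2.
sigma = lambda x: 1.0
b = lambda x: -x
alpha1 = lambda x, u: 0.1 * u / (1.0 + x * x)  # small jumps, centered
alpha2 = lambda x, u: u                        # large jumps, uncentered
rate1, rate2 = 2.0, 0.2                        # nu(U1), nu(U2)

def euler_step(x, dt):
    # continuous part: drift plus diffusion increment
    y = x + b(x) * dt + sigma(x) * np.sqrt(dt) * rng.normal()
    # centered jump integral over U1: marks uniform on (-1,1), so the
    # compensator ∫ alpha1(x,u) nu(du) dt vanishes by symmetry here
    for _ in range(rng.poisson(rate1 * dt)):
        y += alpha1(x, rng.uniform(-1, 1))
    # uncentered jump integral over U2
    for _ in range(rng.poisson(rate2 * dt)):
        y += alpha2(x, rng.normal())
    return y

x, dt = 0.0, 0.01
for _ in range(2000):
    x = euler_step(x, dt)
print("X(20) =", x)
```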


Uniqueness and the Markov property 117

If X is a solution of (4.7), then

X(t) = X(r) + ∫_r^t σ(X(s)) dW(s) + ∫_r^t b(X(s)) ds
+ ∫_{U_1×(r,t]} α_1(X(s−),u) ξ̃(du×ds) + ∫_{U_2×(r,t]} α_2(X(s−),u) ξ(du×ds).

Uniqueness implies X(r) is independent of W(·+r) − W(r) and ξ(A×(r,·]) and that {X(t), t ≥ r} is determined by X(r), W(·+r) − W(r), and ξ(A×(r,·]), which gives the Markov property.


Conditions for uniqueness 118

Lipschitz condition:

|σ(x) − σ(y)| + |b(x) − b(y)|
+ √(∫_{U_1} |α_1(x,u) − α_1(y,u)|² ν(du)) + ∫_{U_2} |α_2(x,u) − α_2(y,u)| ν(du)
≤ M |x − y|


Random measures 119

Π is a random measure on U × [0,∞) if for each ω ∈ Ω, Π(·,ω) is a measure on B(U)×B[0,∞) and for each G ∈ B(U)×B[0,∞), Π(G) is a random variable with values in [0,∞]. Π is adapted if for each D ∈ B(U), the process Π(D×[0,t]) is adapted, and Π is predictable if for each D ∈ B(U), the process Π(D×[0,t]) is predictable.


Dual predictable projection 120

The following result is essentially the existence of the dual predictable projection of a random measure. (See, for example, Jacod and Shiryaev [4], Theorem II.1.8.)

Lemma 4.25 Let {F_t} be a complete, right-continuous filtration. Let Γ be a random measure on U × [0,∞) (not necessarily adapted) satisfying Γ(U × {0}) = 0. Suppose that there exists a strictly positive predictable process H such that

E[∫_{U×[0,∞)} H(u,s) Γ(du×ds)] < ∞.

Then there exists a predictable random measure Γ̂ such that for each predictable Z satisfying |Z| ≤ K for some constant K < ∞,

M_Z(t) = E[∫_{U×[0,t]} Z(u,s) H(u,s) Γ(du×ds) | F_t] − ∫_{U×[0,t]} Z(u,s) H(u,s) Γ̂(du×ds) (4.9)

is an {F_t}-martingale. In addition, there exist a kernel γ from ([0,∞)×Ω, P) to U and a nondecreasing, right-continuous, predictable process A such that

Γ̂(du×ds, ω) = γ(s, ω, du) dA(s, ω).


Proof. See Appendix 1.


A martingale inequality 122

Discrete time version: Burkholder [1]

Continuous time version: Lenglart, Lepingle, and Pratelli [9]

See Lemma 2.4 of [8] for an easy proof due to Ichikawa [3].

If M is a local square-integrable martingale, there exists a nondecreasing predictable process 〈M〉 such that M(t)² − 〈M〉_t is a local martingale. In particular, [M] − 〈M〉 is a local martingale. Recall that a left-continuous, adapted process is predictable.

Lemma 4.26 For 0 < p ≤ 2 there exists a constant C_p such that for any local square-integrable martingale M with Meyer process 〈M〉 and any stopping time τ,

E[sup_{s≤τ} |M(s)|^p] ≤ C_p E[〈M〉_τ^{p/2}].


Proof. For p = 2 the result is an immediate consequence of Doob's inequality. Let 0 < p < 2. For x > 0, let σ_x = inf{t : 〈M〉_t > x²}. Since σ_x is predictable, there exists a strictly increasing sequence of stopping times σ_x^n → σ_x. Noting that 〈M〉_{σ_x^n} ≤ x², we have

P{sup_{s≤τ} |M(s)| > x} ≤ P{σ_x^n < τ} + P{sup_{s≤τ∧σ_x^n} |M(s)| > x}
≤ P{σ_x^n < τ} + E[〈M〉_{τ∧σ_x^n}]/x²
≤ P{σ_x^n < τ} + E[x² ∧ 〈M〉_τ]/x²,

and letting n → ∞, we have

P{sup_{s≤τ} |M(s)| > x} ≤ P{〈M〉_τ ≥ x²} + E[x² ∧ 〈M〉_τ]/x². (4.10)

Using the identity

∫_0^∞ E[x² ∧ X²] p x^{p−3} dx = (2/(2−p)) E[|X|^p],

the lemma follows by multiplying both sides of (4.10) by p x^{p−1} and integrating.


Computation of Meyer process 124

Let ξ be a Poisson random measure on U × [0,∞) with mean measure ν × ℓ, and let W be a standard Brownian motion. Assume that ξ and W are compatible with a filtration {F_t}.

If X is predictable and ∫_{U×[0,t]} X²(u,s) ν(du)ds < ∞ a.s., then M(t) = ∫_{U×[0,t]} X(u,s) ξ̃(du×ds) is a local square-integrable martingale with

〈M〉_t = ∫_{U×[0,t]} X²(u,s) ν(du)ds.

If X is adapted and ∫_0^t X(s)² ds < ∞ a.s., then M(t) = ∫_0^t X(s) dW(s) is a local square-integrable martingale with

〈M〉_t = ∫_0^t X(s)² ds.


Estimate 125

Let X and X̃ be solutions of the SDE (4.7). Then

E[sup_{s≤t} |X(s) − X̃(s)|]
≤ E[|X(0) − X̃(0)|] + C_1 E[(∫_0^t |σ(X(s)) − σ(X̃(s))|² ds)^{1/2}]
+ C_1 E[(∫_0^t ∫_{U_1} |α_1(X(s),u) − α_1(X̃(s),u)|² ν(du)ds)^{1/2}]
+ E[∫_0^t ∫_{U_2} |α_2(X(s−),u) − α_2(X̃(s−),u)| ν(du)ds]
+ E[∫_0^t |b(X(s−)) − b(X̃(s−))| ds]
≤ E[|X(0) − X̃(0)|] + D(√t + t) E[sup_{s≤t} |X(s) − X̃(s)|].

For t small enough that D(√t + t) ≤ 1/2,

E[sup_{s≤t} |X(s) − X̃(s)|] ≤ 2 E[|X(0) − X̃(0)|].


5. Martingale problems for Markov processes

• The martingale problem for a generator

• Generator for an SDE

• Pure jump processes

• Dynkin’s identity

• Moment estimates

• Lyapunov functions


Martingale problems 127

An E-valued process X is an {F_t}-Markov process if X is {F_t}-adapted and

E[g(X(t+s)) | F_t] = E[g(X(t+s)) | X(t)], g ∈ B(E).

The generator of a Markov process determines its short-time behavior:

E[g(X(t+Δt)) − g(X(t)) | F_t] ≈ Ag(X(t)) Δt.

Definition 5.1 X is a solution of the martingale problem for A if and only if there exists a filtration {F_t} such that X is {F_t}-adapted and

g(X(t)) − g(X(0)) − ∫_0^t Ag(X(s)) ds (5.1)

is an {F_t}-martingale for each g ∈ D(A).

For ν ∈ P(E), X is a solution of the martingale problem for (A,ν) if X is a solution of the martingale problem for A and X(0) has distribution ν.


Generator for the SDE 128

Let

X(t) = X(0) + ∫_0^t σ(X(s)) dW(s) + ∫_0^t b(X(s)) ds
+ ∫_{U_1×[0,t]} α_1(X(s−),u) ξ̃(du×ds) + ∫_{U_2×[0,t]} α_2(X(s−),u) ξ(du×ds).

Then

f(X(t)) − f(X(0)) − ∫_0^t Af(X(s)) ds
= ∫_0^t ∇f(X(s))^T σ(X(s)) dW(s)
+ ∫_{U_1×[0,t]} (f(X(s−) + α_1(X(s−),u)) − f(X(s−))) ξ̃(du×ds)
+ ∫_{U_2×[0,t]} (f(X(s−) + α_2(X(s−),u)) − f(X(s−))) ξ̃(du×ds).

Note that, assuming (4.8), f ∈ C²_c(ℝ^d), and that X exists for all t ≥ 0, the right side is a local square-integrable martingale.


Form of the generator 129

Af(x) = (1/2) Σ a_{ij}(x) ∂_i∂_j f(x) + b(x)·∇f(x)
+ ∫_{U_1} (f(x + α_1(x,u)) − f(x) − α_1(x,u)·∇f(x)) ν(du)
+ ∫_{U_2} (f(x + α_2(x,u)) − f(x)) ν(du)

Let D(A) be a collection of functions for which Af is bounded. Then a solution of the SDE is a solution of the martingale problem for A.


Pure jump processes 130

Consider

Af(x) = λ(x) ∫_E (f(y) − f(x)) μ(x,dy),

λ ≥ 0, μ a transition function.

The corresponding Markov process stays in a state x for an exponential length of time with parameter λ(x) and then jumps to a new point with distribution μ(x,·).

There exist a space U_0, a probability measure ν_0 ∈ P(U_0), and a measurable mapping H : E × U_0 → E such that μ(x,Γ) = ν_0{u : H(x,u) ∈ Γ}, that is,

∫_{U_0} f(H(x,u)) ν_0(du) = ∫_E f(y) μ(x,dy).


Stochastic equation for a pure jump process 131

Let ξ be a Poisson random measure on U_0 × [0,∞) × [0,∞) with mean measure ν_0 × ℓ × ℓ. Then there exists an E-valued process satisfying

f(X(t)) = f(X(0)) + ∫_{U_0×[0,∞)×[0,t]} 1_{[0,λ(X(s−))]}(u_1) (f(H(X(s−),u_0)) − f(X(s−))) ξ(du_0×du_1×ds)
= f(X(0)) + ∫_0^t Af(X(s)) ds + ∫_{U_0×[0,∞)×[0,t]} Bf(X(s−),u_0,u_1) ξ̃(du_0×du_1×ds)

up to

τ_∞ = lim_{n→∞} inf{t : ∫_{U_0×[0,∞)×[0,t]} 1_{[0,λ(X(s−))]}(u_1) ξ(du_0×du_1×ds) ≥ n}.

If E ⊂ ℝ^d, then write

X(t) = X(0) + ∫_{U_0×[0,∞)×[0,t]} 1_{[0,λ(X(s−))]}(u_1) (H(X(s−),u_0) − X(s−)) ξ(du_0×du_1×ds).
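The holding-time description translates directly into a Gillespie-style simulation; a minimal sketch (not from the notes) under hypothetical choices of λ and H:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_pure_jump(x0, lam, H, nu0_sample, T):
    """Simulate the pure jump process with generator
    Af(x) = lam(x) * ∫ (f(H(x,u)) - f(x)) nu0(du):
    hold at x an Exp(lam(x)) time, then jump to H(x,u) with u ~ nu0."""
    t, x = 0.0, x0
    times, states = [t], [x]
    while True:
        rate = lam(x)
        if rate <= 0:              # absorbing state
            break
        t += rng.exponential(1.0 / rate)
        if t > T:
            break
        x = H(x, nu0_sample())
        times.append(t)
        states.append(x)
    return times, states

# Hypothetical example: a birth-death-like chain on the integers.
lam = lambda x: 1.0 + abs(x)
H = lambda x, u: x + (1 if u < 0.5 else -1)
times, states = simulate_pure_jump(0, lam, H, lambda: rng.uniform(), 10.0)
print(states[:10])
```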


Lipschitz condition 132

α_2(x,u) = 1_{[0,λ(x)]}(u_1)(H(x,u_0) − x),

so

∫_U |α_2(x,u) − α_2(y,u)| ν(du)
≤ ∫_U |1_{[0,λ(x)]}(u_1) − 1_{[0,λ(y)]}(u_1)| |H(x,u_0) − x| ν_0(du_0) du_1
+ ∫_U 1_{[0,λ(y)]}(u_1) |H(x,u_0) − H(y,u_0) − (x−y)| ν_0(du_0) du_1
≤ ∫_{U_0} |λ(x) − λ(y)| |H(x,u_0) − x| ν_0(du_0)
+ ∫_{U_0} λ(y) |H(x,u_0) − H(y,u_0) − (x−y)| ν_0(du_0).

Exercise 5.2 Try estimating∫|α2(x, u)− α2(y, u)|2ν(du).


Dynkin’s identity 133

Lemma 5.3 Suppose X is a solution of the martingale problem for A. Then for a stopping time τ,

E[f(X(t∧τ))] = E[f(X(0))] + E[∫_0^{t∧τ} Af(X(s)) ds].

Proof. Apply the optional sampling theorem.


Moment estimates 134

Suppose

|a(x)| + |b(x)|² + ∫_{U_1} |α_1(x,u)|² ν(du) + ∫_{U_2} |α_2(x,u)|² ν(du) + (∫_{U_2} |α_2(x,u)| ν(du))² ≤ K_1 + K_2|x|². (5.2)

Then for f(x) = |x|², Af(x) ≤ C_1 + C_2|x|², where

Af(x) = (1/2) Σ a_{ij}(x) ∂_i∂_j f(x) + b(x)·∇f(x)
+ ∫_{U_1} (f(x+α_1(x,u)) − f(x) − α_1(x,u)·∇f(x)) ν(du)
+ ∫_{U_2} (f(x+α_2(x,u)) − f(x)) ν(du).


Truncation argument 135

Suppose X satisfies (4.7) and sup_{x,u} |α_1(x,u)| ≤ c. Let Y satisfy

Y(t) = X(0) + ∫_0^t σ(Y(s)) dW(s) + ∫_0^t b(Y(s)) ds (5.3)
+ ∫_{U_1×[0,t]} α_1(Y(s−),u) ξ̃(du×ds)
+ ∫_{U_2×[0,t]} (c / (c ∨ |α_2(Y(s−),u)|)) α_2(Y(s−),u) ξ(du×ds).

Then Y agrees with X until the first time that |X(t) − X(t−)| > c. If (5.2) holds, then A_c f(x) ≤ C_1 + C_2|x|² also, where A_c is the generator corresponding to (5.3).


Let τ_c = inf{t : |X(t)| ≥ c/2}, and note that τ_c ≤ inf{t : |X(t) − X(t−)| > c}. Consequently, if t < τ_c, |Y(t)| = |X(t)| < c/2. If τ_c ≤ t and |X(τ_c) − X(τ_c−)| ≤ c, then |X(t∧τ_c)| = |Y(t∧τ_c)| ≥ c/2. If τ_c ≤ t and |X(τ_c) − X(τ_c−)| > c, then |Y(τ_c) − Y(τ_c−)| = c and |Y(t∧τ_c)| ≥ c/2. Consequently,

|X(t∧τ_c)| ∧ (c/2) ≤ |Y(t∧τ_c)|. (5.4)

Let f(x) = |x|² for |x| ≤ 3c/2 and be constant for |x| sufficiently large. Then sup_x |A_c f(x)| < ∞ and, assuming |X(0)| ≤ 3c/2,

E[|Y(t∧τ_c)|²] = E[|X(0)|²] + E[∫_0^{t∧τ_c} A_c f(Y(s)) ds]
≤ E[|X(0)|²] + E[∫_0^{t∧τ_c} (C_1 + C_2|Y(s)|²) ds]
≤ E[|X(0)|²] + E[∫_0^t (C_1 + C_2|Y(s∧τ_c)|²) ds],

and hence, by Gronwall's inequality,

E[|Y(t∧τ_c)|²] ≤ (E[|X(0)|²] + C_1 t) e^{C_2 t}. (5.5)


By (5.4) and (5.5),

E[(|X(t)| ∧ (c/2))²] = E[(|X(t∧τ_c)| ∧ (c/2))²] ≤ (E[|X(0)|²] + C_1 t) e^{C_2 t}.

Consequently, letting c → ∞, the monotone convergence theorem gives

E[|X(t)|²] ≤ (E[|X(0)|²] + C_1 t) e^{C_2 t}.


Lyapunov functions 138

Lemma 5.4 Suppose there exist ϕ_n ∈ D(A) such that inf_{n,x} ϕ_n(x) > −∞, ϕ_n(x) ↑ ϕ(x) for each x ∈ E, sup_{n,x} Aϕ_n(x) < ∞, and Aϕ_n(x) → ψ(x). Then

ϕ(X(t)) − ϕ(X(0)) − ∫_0^t ψ(X(s)) ds

is a supermartingale.

Proof. For 0 ≤ t_1 < ⋯ < t_m ≤ t, nonnegative h_i ∈ B(E), and r > 0,

E[(ϕ(X(t+r)) − ∫_t^{t+r} ψ(X(s)) ds) Π h_i(X(t_i))]
≤ lim_{n→∞} E[(ϕ_n(X(t+r)) − ∫_t^{t+r} Aϕ_n(X(s)) ds) Π h_i(X(t_i))]
= lim_{n→∞} E[ϕ_n(X(t)) Π h_i(X(t_i))]
= E[ϕ(X(t)) Π h_i(X(t_i))].

The inequality follows by Fatou's lemma, the first equality by the martingale condition, and the last equality by the monotone convergence theorem.


Estimates on return times 139

Suppose there exists ε > 0 such that {x : ψ(x) ≥ −ε} is compact. Let τ = inf{t : ψ(X(t)) ≥ −ε}. The supermartingale property implies

E[ϕ(X(t∧τ))] ≤ E[ϕ(X(0))] + E[∫_0^{t∧τ} ψ(X(s)) ds] ≤ E[ϕ(X(0))] − ε E[t∧τ].

Rearranging the inequality and letting t → ∞, we have

E[τ] ≤ (E[ϕ(X(0))] − inf_x ϕ(x)) / ε.


Uniform stochastic boundedness 140

Suppose that ϕ ≥ 0, {x : ϕ(x) ≤ c} is compact for each c > 0, and for some ε > 0 and K ∈ ℝ, ψ(x) ≤ K − εϕ(x), x ∈ E. Then, for 0 ≤ r ≤ t,

h(t) ≡ E[ϕ(X(t))] ≤ E[ϕ(X(r))] + E[∫_r^t ψ(X(s)) ds]
≤ E[ϕ(X(r))] + E[∫_r^t (K − εϕ(X(s))) ds],

so, for a partition {t_i} of [0,t],

e^{εt} h(t) = h(0) + Σ (e^{εt_{i+1}} h(t_{i+1}) − e^{εt_i} h(t_i))
= h(0) + Σ (e^{εt_{i+1}} − e^{εt_i}) h(t_{i+1}) + Σ e^{εt_i} (h(t_{i+1}) − h(t_i))
≤ h(0) + ∫_0^t ε e^{εs} h(η̄(s)) ds + ∫_0^t e^{εη(s)} (K − εh(s)) ds
≤ h(0) + ∫_0^t ε e^{εs} (h(η̄(s)) − h(s)) ds + ∫_0^t ε(e^{εs} − e^{εη(s)}) h(s) ds + Kε^{−1}(e^{εt} − 1),

where η̄(s) = t_{i+1} and η(s) = t_i for t_i ≤ s < t_{i+1}. By judicious choice of the t_i,

h(t) ≤ h(0) e^{−εt} + Kε^{−1}(1 − e^{−εt}).


Exponential martingales 141

Suppose f ∈ D(A) and inf_x f(x) > 0. Then, with M_f(t) = f(X(t)) − f(X(0)) − ∫_0^t Af(X(s)) ds the martingale in (5.1),

f(X(t)) exp{−∫_0^t (Af(X(s))/f(X(s))) ds}
= f(X(0)) + ∫_0^t exp{−∫_0^r (Af(X(s))/f(X(s))) ds} df(X(r)) − ∫_0^t exp{−∫_0^r (Af(X(s))/f(X(s))) ds} Af(X(r)) dr
= f(X(0)) + ∫_0^t exp{−∫_0^r (Af(X(s))/f(X(s))) ds} dM_f(r)

is a martingale. If ϕ_n, ϕ, and ψ are as above and inf_{x,n} ϕ_n(x) > 0, then

ϕ(X(t)) exp{−∫_0^t (ψ(X(s))/ϕ(X(s))) ds}

is a supermartingale. In particular,

E[ϕ(X(t+r)) exp{−∫_t^{t+r} (ψ(X(s))/ϕ(X(s))) ds} | F_t] ≤ ϕ(X(t)).


Check that

lim inf_{n→∞} ϕ_n(X(t+r)) exp{−∫_t^{t+r} (Aϕ_n(X(s))/ϕ_n(X(s))) ds} ≥ ϕ(X(t+r)) exp{−∫_t^{t+r} (ψ(X(s))/ϕ(X(s))) ds}.


6. Forward equations and operator semigroups

• The forward equation for a general Markov process

• The operator semigroup associated with a Markov process

• The generator for a semigroup and the Hille-Yosida theorem

• Equivalence of the forward equation and the martingale problem

• Markov mapping theorem


The forward equation for a general Markov process 144

If X is a solution of the martingale problem for A and ν_t is the distribution of X(t), then

0 = E[f(X(t)) − f(X(0)) − ∫_0^t Af(X(s)) ds] = ν_t f − ν_0 f − ∫_0^t ν_s Af ds,

so

ν_t f = ν_0 f + ∫_0^t ν_s Af ds, f ∈ D(A). (6.1)

(6.1) gives the weak form of the forward equation.

Definition 6.1 A measurable mapping t ∈ [0,∞) → ν_t ∈ P(E) is a solution of the forward equation for A if (6.1) holds.


Fokker-Planck equation 145

Let Af(x) = (1/2)a(x)f″(x) + b(x)f′(x), f ∈ D(A) = C²_c(ℝ). If ν_t has a C² density ν(t,x), then, integrating by parts,

ν_t Af = ∫_{−∞}^∞ Af(x) ν(t,x) dx = ∫_{−∞}^∞ f(x) ((1/2)(∂²/∂x²)(a(x)ν(t,x)) − (∂/∂x)(b(x)ν(t,x))) dx,

and the forward equation is equivalent to

∂_t ν(t,x) = (1/2)(∂²/∂x²)(a(x)ν(t,x)) − (∂/∂x)(b(x)ν(t,x)),

known as the Fokker-Planck equation in the physics literature.
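A rough numerical illustration (mine, not from the notes): the Fokker-Planck equation can be integrated by an explicit conservative finite-difference scheme. Here a(x) = 1 and b(x) = −x (an Ornstein-Uhlenbeck process), for which the mean decays like e^{−t} and the variance approaches 1/2; the grid and initial density are hypothetical choices.

```python
import numpy as np

# Explicit finite-difference sketch for
#   d/dt v = (1/2) d^2/dx^2 (a v) - d/dx (b v)
x = np.linspace(-5, 5, 201)
dx = x[1] - x[0]
dt = 0.2 * dx * dx               # small step for stability of explicit scheme
a = np.ones_like(x)              # a(x) = 1
b = -x                           # Ornstein-Uhlenbeck drift
v = np.exp(-0.5 * (x - 2) ** 2)  # initial density, then normalized
v /= v.sum() * dx

for _ in range(int(2.0 / dt)):   # integrate to t = 2
    av, bv = a * v, b * v
    diff = 0.5 * (np.roll(av, -1) - 2 * av + np.roll(av, 1)) / dx ** 2
    drift = (np.roll(bv, -1) - np.roll(bv, 1)) / (2 * dx)
    v = v + dt * (diff - drift)
    v[0] = v[-1] = 0.0           # crude truncation of the real line

# mean(t) = 2 e^{-t} and var(t) -> 1/2 for this OU example
print(np.sum(x * v) * dx, np.sum(x ** 2 * v) * dx - (np.sum(x * v) * dx) ** 2)
```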


The operator semigroup associated with a Markov process 146

Let X be a time-homogeneous Markov process and for f ∈ B(E), define

T(t)f(x) = E[f(X(s+t)) | X(s) = x],

that is, T(t)f ∈ B(E) is a measurable function such that

T(t)f(X(s)) = E[f(X(s+t)) | X(s)] a.s.,

and temporal homogeneity simply means that T(t)f can be selected independently of s.

By the Markov property,

T(s)T(t)f(x) = E[T(t)f(X(s)) | X(0) = x]
= E[E[f(X(s+t)) | X(s)] | X(0) = x]
= E[f(X(s+t)) | X(0) = x]
= T(t+s)f(x).

T(t) is a contraction in the sense that

‖T(t)f‖ = sup_x |T(t)f(x)| ≤ ‖f‖.


Determining the finite dimensional distributions 147

For t_2 > t_1 ≥ 0,

E[f_1(X(t_1)) f_2(X(t_2)) | X(0) = x] = T(t_1)(f_1 T(t_2−t_1)f_2)(x),

and in general,

E[f_1(X(t_1)) ⋯ f_k(X(t_k)) | X(0) = x] = E[f_1(X(t_1)) ⋯ f_{k−1}(X(t_{k−1})) T(t_k−t_{k−1})f_k(X(t_{k−1})) | X(0) = x].

Note that, for these identities to determine the finite-dimensional distributions of a process, T(t) must be given by a transition function:

T(t)f(x) = ∫_E f(y) P(t,x,dy), (6.2)

where P(t,x,·) ∈ P(E) for all t, x and (t,x) → P(t,x,D) is measurable for all D ∈ B(E).


A continuity condition 148

If T(t) is given by a transition function, then

bp-lim_{n→∞} f_n = 0 (6.3)

implies

bp-lim_{n→∞} T(t)f_n = 0. (6.4)


The generator for a semigroup 149

The strong generator for a contraction semigroup: f is in the domain D(A) of the strong generator for T(t) if

Af(x) ≡ lim_{t→0} (T(t)f(x) − f(x))/t

exists uniformly in x. f is in the domain of the weak generator Â if the limit exists boundedly and pointwise.

Lemma 6.2 If f ∈ D(A), then T(t)f ∈ D(A),

AT(t)f = T(t)Af,
T(t)f = f + ∫_0^t AT(s)f ds = f + ∫_0^t T(s)Af ds,
‖T(t+h)f − T(t)f‖ ≤ h‖Af‖,

and T(t) is strongly continuous on L, the closure of D(A). If (6.4) holds for each sequence {f_n} satisfying (6.3), then the conclusion holds with A replaced by Â.


Dissipativity and the positive maximum principle 150

For λ > 0,

‖λf − t^{−1}(T(t)f − f)‖ ≥ (λ + t^{−1})‖f‖ − t^{−1}‖T(t)f‖ ≥ λ‖f‖,

so A is dissipative:

‖λf − Af‖ ≥ λ‖f‖, λ > 0.

Definition 6.3 A satisfies the positive maximum principle if f(x) = ‖f‖ implies Af(x) ≤ 0.

Lemma 6.4 The weak generator for a Markov process satisfies the positive maximum principle.

Lemma 6.5 Let E be compact and D(A) ⊂ C(E). If A satisfies the positive maximum principle, then A is dissipative.


Hille-Yosida theorem 151

Theorem 6.6 A linear operator A on a Banach space L is the strong generator of a strongly continuous, contraction semigroup on L if and only if

1. D(A) is dense in L.

2. A is dissipative.

3. R(λ− A) = L for some (and hence all) λ > 0.

Remark 6.7 If Condition 3 is replaced by the condition that R(λ−A) be dense in L, then the closure of A generates a strongly continuous contraction semigroup.


Uniqueness for the forward equation 152

Lemma 6.8 If ν_t and μ_t are solutions of the forward equation for A with ν_0 = μ_0 and R(λ−A) is separating for each λ > 0, then ∫_0^∞ e^{−λt} ν_t dt = ∫_0^∞ e^{−λt} μ_t dt, and if ν and μ are weakly right continuous, ν_t = μ_t for all t ≥ 0.

Proof.

λ ∫_0^∞ e^{−λt} ν_t f dt = ν_0 f + λ ∫_0^∞ e^{−λt} ∫_0^t ν_s Af ds dt
= ν_0 f + λ ∫_0^∞ ∫_s^∞ e^{−λt} ν_s Af dt ds
= ν_0 f + ∫_0^∞ e^{−λs} ν_s Af ds,

and hence

∫_0^∞ e^{−λt} ν_t(λf − Af) dt = ν_0 f.

Since R(λ−A) is separating, the result holds.


The semigroup and the martingale problem 153

Theorem 6.9 If T(t) is a semigroup corresponding to a Markov process X and A is its weak generator, then X is a solution of the martingale problem for A.

Proof.

E[f(X(t+r)) − f(X(t)) − ∫_t^{t+r} Af(X(s)) ds | F_t^X]
= T(r)f(X(t)) − f(X(t)) − ∫_t^{t+r} T(s−t)Af(X(t)) ds
= T(r)f(X(t)) − f(X(t)) − ∫_0^r T(s)Af(X(t)) ds
= 0.


The semigroup and the forward equation 154

If ν_0 ∈ P(E), then

ν_0 T(t)f = ν_0 f + ∫_0^t ν_0 T(s)Af ds,

and if T(t) is given by a transition function, ν_t = ∫_E P(t,x,·) ν_0(dx) satisfies

ν_t f = ν_0 f + ∫_0^t ν_s Af ds, f ∈ D(A).

If A is the strong generator and L is separating, then uniqueness holds for the forward equation.


Uniqueness for martingale problems 155

Theorem 6.10 If any two solutions X_1, X_2 of the martingale problem for A satisfying PX_1(0)^{−1} = PX_2(0)^{−1} also satisfy PX_1(t)^{−1} = PX_2(t)^{−1} for all t ≥ 0, then the finite-dimensional distributions of a solution X are uniquely determined by PX(0)^{−1}.

If X is a solution of the MGP for A and Y_a(t) = X(a+t), then Y_a is a solution of the MGP for A.

Theorem 6.11 If the conclusion of the above theorem holds, then any solution of the martingale problem for A is a Markov process.

Theorem 6.12 If for each ν_0 ∈ P(E) uniqueness holds for the forward equation for (A,ν_0), then uniqueness holds for the martingale problem for (A,ν_0).


Digression on the proof of the Hille-Yosida theorem 156

The conditions of the Hille-Yosida theorem (Theorem 6.6) imply (I − n^{−1}A)^{−1} exists and

‖(I − n^{−1}A)^{−1} f‖ ≤ ‖f‖.

In addition,

‖(I − n^{−1}A)^{−1} f − f‖ ≤ (1/n)‖Af‖.

One proof of the Hille-Yosida theorem is to show that

T_n(t)f = (I − n^{−1}A)^{−[nt]} f

is a Cauchy sequence and to observe that

T_n(t)f = f + (1/n) Σ_{k=1}^{[nt]} (I − n^{−1}A)^{−k} Af = f + ∫_0^{[nt]/n} T_n(s + n^{−1}) Af ds.


Probabilistic interpretation 157

(n − A)^{−1} = ∫_0^∞ e^{−nt} T(t) dt and (I − n^{−1}A)^{−1} = n ∫_0^∞ e^{−nt} T(t) dt. If T(t) is given by a transition function, then

η_n(x,dy) = n ∫_0^∞ e^{−nt} P(t,x,dy) dt

is a transition function. If {Y_k^n} is a Markov chain with transition function η_n, then

E[f(Y_k^n)] = E[(I − n^{−1}A)^{−k} f(Y_0)] = E[f(X((Δ_1 + ⋯ + Δ_k)/n))],

where the Δ_i are iid unit exponentials, and X_n(t) = Y_{[nt]}^n can be written as

X_n(t) = X((1/n) Σ_{k=1}^{[nt]} Δ_k).


Construction of solution of martingale problem 158

Assume that E is compact, A ⊂ C(E)×C(E), (1,0) ∈ A, and D(A) is dense in C(E). Assume that A satisfies the positive maximum principle (and is consequently dissipative). Note that

D((I − n^{−1}A)^{−1}) = R(I − n^{−1}A).

For each x ∈ E, η_x f = (I − n^{−1}A)^{−1} f(x) is a linear functional on R(I − n^{−1}A) and satisfies |η_x f| ≤ ‖f‖ and η_x 1 = 1. The Hahn-Banach theorem implies η_x extends to a positive linear functional on C(E) (hence a probability measure).

Γ_x = {η ∈ P(E) : ηf = (I − n^{−1}A)^{−1} f(x), f ∈ R(I − n^{−1}A)}

is closed and lim sup_{y→x} Γ_y ⊂ Γ_x. The measurable selection theorem implies that there exists a transition function η_n satisfying

∫_E f(y) η_n(x,dy) = (I − n^{−1}A)^{−1} f(x).


Approximating Markov chain 159

If {Y_k^n} is a Markov chain with transition function η_n, then

E[f(Y_{k+1}^n) | F_k^n] = (I − n^{−1}A)^{−1} f(Y_k^n)

for f ∈ D(A), and with f_n = f − n^{−1}Af,

f_n(Y_m^n) − f_n(Y_0^n) − (1/n) Σ_{k=0}^{m−1} Af(Y_k^n)

is a martingale. Recall the tightness condition, and defining X_n(t) = Y_{[nt]}^n, note that for f, h ∈ D(A) approximating g and g², with h_n = h − n^{−1}Ah,

E[(g(X_n(t+u)) − g(X_n(t)))² | F_t^n]
= E[g²(X_n(t+u)) − g²(X_n(t)) | F_t^n] − 2g(X_n(t)) E[g(X_n(t+u)) − g(X_n(t)) | F_t^n]
≤ 2‖g² − h_n‖ + 4‖g‖‖g − f_n‖ + (([nu]+1)/n)(‖Ah‖ + 2‖g‖‖Af‖).

Tightness for g(X_n) for each bounded continuous g and the compactness of E implies tightness for {X_n}.


Stationary distributions 160

Definition 6.13 A stochastic process X is stationary if the distribution of X_t ≡ X(t+·) does not depend on t.

Definition 6.14 μ is a stationary distribution for the martingale problem for A if there exists a stationary solution of the martingale problem for A with marginal distribution μ.

Theorem 6.15 Suppose that D(A) and R(λ−A) are separating and that for each ν ∈ P(E), there exists a solution of the martingale problem for (A,ν). If

∫_E Af dμ = 0, f ∈ D(A),

then μ is a stationary distribution for A.


Echeverria’s theorem 161

Theorem 6.16 Let E be compact, and let A ⊂ C(E)×C(E) satisfy the positive maximum principle. Suppose that D(A) is an algebra and dense in C(E). If μ ∈ P(E) satisfies

∫_E Af dμ = 0, f ∈ D(A),

then μ is a stationary distribution of A.

Example 6.17 E = [0,1], Af(x) = (1/2)f″(x),

D(A) = {f ∈ C²[0,1] : f′(0) = f′(1) = 0, f′(1/3) = f′(2/3)}.

Let μ(dx) = 3 · 1_{[1/3,2/3]}(x) dx.


Outline of proof 162

We constructed η_n so that

∫_E f_n(y) η_n(x,dy) = ∫_E (f(y) − (1/n)Af(y)) η_n(x,dy) = f(x).

Consequently,

∫_E ∫_E f_n(y) η_n(x,dy) μ(dx) = ∫_E f(x) μ(dx) = ∫_E (f(x) − (1/n)Af(x)) μ(dx) = ∫_E f_n(x) μ(dx).

For F(x,y) = Σ_{i=1}^m h_i(x)(f_i(y) − (1/n)Af_i(y)) + h_0(y), f_i ∈ D(A), define

Λ_n F = ∫ [Σ_{i=1}^m h_i(x) f_i(x) + h_0(x)] μ(dx).

If Λ_n is given by a measure ν_n, then both marginals are μ, and letting η_n satisfy ν_n(dx,dy) = η_n(x,dy)μ(dx), for f ∈ D(A),

∫ (f(y) − (1/n)Af(y)) η_n(x,dy) = f(x), μ-a.s.

The work is to show that Λ_n is a positive linear functional.


Extensions 163

Theorem 6.18 Let E be locally compact (e.g., E = ℝ^d), and let A ⊂ C(E)×C(E) satisfy the positive maximum principle. Suppose that D(A) is an algebra and dense in C(E). If μ ∈ P(E) satisfies

∫_E Af dμ = 0, f ∈ D(A),

then μ is a stationary distribution of A.

Proof. Let Ē = E ∪ {∞} and extend A to include (1,0). There exists an Ē-valued stationary solution X of the martingale problem for the extended A, but P{X(t) ∈ E} = μ(E) = 1.


Complete, separable E 164

E complete, separable; A ⊂ C(E)×C(E).

Assume that {g_k} ⊂ D(A) is closed under multiplication. Let I be the collection of finite subsets of positive integers, and for I ∈ I, let k(I) satisfy g_{k(I)} = Π_{i∈I} g_i. For each k, there exists a_k ≥ ‖g_k‖. Let

Ê = {z ∈ Π_{i=1}^∞ [−a_i,a_i] : z_{k(I)} = Π_{i∈I} z_i, I ∈ I}.

Note that Ê is compact, and define G : E → Ê by

G(x) = (g_1(x), g_2(x), ...).

Then G has a measurable inverse defined on the (measurable) set G(E).

Lemma 6.19 Let μ ∈ P(E). Then there exists a unique measure ν ∈ P(Ê) satisfying ∫_E g_k dμ = ∫_Ê z_k ν(dz). In particular, if Z has distribution ν, then G^{−1}(Z) has distribution μ.


Equivalence of the forward equation and the MGP 165

Suppose

ν_t f = ν_0 f + ∫_0^t ν_s Af ds.

Define

B_λ f(x,θ) = Af(x,θ) + λ(∫_E f(y,−θ) ν_0(dy) − f(x,θ))

and

μ_λ = λ ∫_0^∞ e^{−λt} ν_t dt × ((1/2)δ_1 + (1/2)δ_{−1}).

Then

∫ B_λ f dμ_λ = 0, f(x,θ) = f_1(x)f_2(θ), f_1 ∈ D(A).

Theorem 6.20 Let (Y,Θ) be a stationary solution of the martingale problem for B_λ with marginal distribution μ_λ. Let τ_1 = inf{t > 0 : Θ(t) ≠ Θ(0)} and τ_{k+1} = inf{t > τ_k : Θ(t) ≠ Θ(τ_k)}. Define X(t) = Y(τ_1 + t). Then conditioned on {τ_2 − τ_1 > t_0}, X is a solution of the martingale problem for A and the distribution of X(t) is ν_t for 0 ≤ t ≤ t_0.


Images of a Markov process 166

X a Markov process with generator A and initial distribution μ_0.

γ : E → E_0, Borel measurable.

Y = γ ∘ X.

• When is Y Markovian?

• If Y is Markovian, what is its generator?

• What is the conditional distribution of X given Y ?


Technical conditions 167

Definition 6.21 An operator A ⊂ B(E)×B(E) is a pre-generator if A is dissipative and there are sequences of functions μ_n : E → P(E) and λ_n : E → [0,∞) such that, for each (f,g) ∈ A,

g(x) = lim_{n→∞} λ_n(x) ∫_E (f(y) − f(x)) μ_n(x,dy) (6.5)

for each x ∈ E. Note that we have not assumed that μ_n and λ_n are measurable functions of x.

A is bp-separable if there exists a countable collection {g_k} ⊂ D(A) such that A is contained in the bounded, pointwise closure of the linear span of {(g_k, Ag_k)}.


Markov mapping theorem 168

Theorem 6.22 Let A ⊂ C(E)×C(E) be a pre-generator with bp-separable graph, with D(A) closed under multiplication and separating. Let γ : E → E_0 be Borel measurable, and let α be a transition function from E_0 into E satisfying

α(y, γ^{−1}(y)) = 1.

Let μ_0 ∈ P(E_0), ν_0 = ∫ α(y,·) μ_0(dy), and define

C = {(∫_E f(z) α(·,dz), ∫_E Af(z) α(·,dz)) : f ∈ D(A)}.

If Y is a solution of the MGP for (C,μ_0), then there exists a solution Z of the MGP for (A,ν_0) such that γ ∘ Z and Y have the same distribution on M_{E_0}[0,∞).


Interpretation of α 169

E[f(Z(t)) | F_t^Y] = ∫ f(z) α(Y(t), dz)

(at least for almost every t).


Uniqueness 170

Corollary 6.23 If uniqueness holds for the MGP for (A,ν_0), then uniqueness holds for the M_{E_0}[0,∞)-MGP for (C,μ_0). If Y has sample paths in D_{E_0}[0,∞), then uniqueness holds for the D_{E_0}[0,∞)-martingale problem for (C,μ_0).

Existence for (C,μ_0) and uniqueness for (A,ν_0) imply existence for (A,ν_0) and uniqueness for (C,μ_0), and hence that Y is Markov.


7. Equivalence of SDE and MGP

1. Weak solutions of stochastic equations

2. Ito equations

3. General equations for Markov processes

4. Yamada-Watanabe-Engelbert theorem


Weak solutions of stochastic equations 172

Stochastic equation: Given Γ : S_1 × S_2 → ℝ and an S_2-valued Y, find X such that

Γ(X,Y) = 0 a.s. (7.1)

Definition 7.1 Suppose Y has distribution μ_Y. (X,Y) is a weak solution (or distributional solution) of (7.1) if Y has distribution μ_Y and Γ(X,Y) = 0 a.s.

For stochastic processes, X is compatible with Y if

E[f(Y) | F_t^{X,Y}] = E[f(Y) | F_t^Y], t ≥ 0, f ∈ B(S_2).

Remark 7.2 If Y has independent increments, then compatibility is just the requirement that Y(t+·) − Y(t) is independent of F_t^{X,Y} for every t ≥ 0.


Equivalence of SDE and MGP 173

If

X(t) = X(0) + ∫_0^t σ(X(s)) dW(s) + ∫_0^t b(X(s)) ds,

then X is a solution of the martingale problem for

Af(x) = (1/2) Σ_{ij} a_{ij}(x) (∂²/∂x_i∂x_j) f(x) + Σ_i b_i(x) (∂/∂x_i) f(x),

where ((a_{ij})) = σσ^T.

Conversely, if X is a solution of the MGP for A, then X is a weak solution of the SDE. If σ is invertible, then we should have

W(t) = ∫_0^t σ^{−1}(X(s)) dX(s) − ∫_0^t σ^{−1}(X(s)) b(X(s)) ds.


Existence of cadlag version 174

Lemma 7.3 Let M be a martingale defined on (Ω,F,P). Then there exists Ω′ ⊂ Ω with P(Ω′) = 1 such that for each ω ∈ Ω′,

M_+(t,ω) = lim_{s∈ℚ→t+} M(s,ω) and M_−(t,ω) = lim_{s∈ℚ→t−} M(s,ω)

exist for all t > 0, and the first limit exists for t = 0.

Definition 7.4 Let X and Y be E-valued stochastic processes defined on (Ω,F,P). Y is a modification of X if and only if P{X(t) = Y(t)} = 1 for all t ≥ 0.

Lemma 7.5 Let E be compact and let D(A) ⊂ C(E) be separating. Suppose that there exists a countable subset {f_i} ⊂ D(A) such that {f_i} separates points. If X is a solution of the martingale problem for A, then there exists a cadlag modification of X.


Proof. Let

M_i(t) = f_i(X(t)) − ∫_0^t Af_i(X(s)) ds.

There exists Ω′ with P(Ω′) = 1 such that M_i^+(·,ω) exists for each ω ∈ Ω′ and each i. By the compactness of E and the fact that {f_i} separates points, there exists Y such that

M_i^+(t,ω) = f_i(Y(t,ω)) − ∫_0^t Af_i(X(s,ω)) ds, ω ∈ Ω′,

and Y(t,ω) = lim_{s∈ℚ→t+} X(s,ω). Consequently,

E[f(Y(t)) − f(X(t)) | F_t^X] = lim_{s∈ℚ→t+} E[∫_t^s Af(X(r)) dr | F_t^X] = 0,

and hence E[f(Y(t)) | F_t^X] = f(X(t)) and

E[f(Y(t)) g(X(t))] = E[f(X(t)) g(X(t))] (7.2)

for every g ∈ B(E) and f ∈ D(A). Since D(A) is separating, (7.2) holds for all f ∈ B(E) and

E[(g(Y(t)) − g(X(t)))²] = E[g²(Y(t)) + g²(X(t)) − 2g(Y(t))g(X(t))] = 0.


Proof of continuity 176

Let f ∈ C²_c(ℝ^d). A little calculus shows that Σ_{k=0}^4 (−1)^{4−k} C(4,k) Af^k(x) f(x)^{4−k} = 0, and hence

E[(f(X(t+h)) − f(X(t)))^4 | F_t]
= E[Σ_{k=0}^4 (−1)^{4−k} C(4,k) f(X(t+h))^k f(X(t))^{4−k} | F_t]
= Σ_{k=0}^4 (−1)^{4−k} C(4,k) E[∫_0^h Af^k(X(t+s)) ds | F_t] f(X(t))^{4−k}
= Σ_{k=0}^4 (−1)^{4−k} C(4,k) E[∫_0^h Af^k(X(t+s)) f(X(t+s))^{4−k} ds | F_t]
  − Σ_{k=0}^4 (−1)^{4−k} C(4,k) E[∫_0^h Af^k(X(t+s)) (f(X(t+s))^{4−k} − f(X(t))^{4−k}) ds | F_t]
= − Σ_{k=0}^4 (−1)^{4−k} C(4,k) E[∫_0^h Af^k(X(t+s)) (f(X(t+s))^{4−k} − f(X(t))^{4−k}) ds | F_t]


Setting h = T/n and η_n(t) = [tn]h,

Σ_{l=0}^{n−1} E[(f(X((l+1)h)) − f(X(lh)))^4]
= − Σ_{k=0}^4 (−1)^{4−k} C(4,k) E[∫_0^T Af^k(X(s)) (f(X(s))^{4−k} − f(X(η_n(s)))^{4−k}) ds].

Letting n → ∞, the right side goes to zero, so by Fatou's lemma, we have

E[Σ_{s≤T} (f(X(s)) − f(X(s−)))^4] ≤ lim_{n→∞} Σ_{l=0}^{n−1} E[(f(X((l+1)h)) − f(X(lh)))^4] = 0,

which implies the continuity of X.


Solution of MGP is weak solution of SDE 178

Theorem 7.6 Suppose X is a solution of the martingale problem for

Af(x) = (1/2) Σ_{ij} a_{ij}(x) (∂²/∂x_i∂x_j) f(x) + Σ_i b_i(x) (∂/∂x_i) f(x), f ∈ C²_c(ℝ^d),

where ((a_{ij})) = σσ^T, σ(x) is invertible, and sup_{s≤t} |X(s)| < ∞ for each t > 0. Then X is a weak solution of the stochastic differential equation

X(t) = X(0) + ∫_0^t σ(X(s)) dW(s) + ∫_0^t b(X(s)) ds.


Proof. Note that

M(t) = X(t) − ∫_0^t b(X(s)) ds

and

M_i(t)M_j(t) − ∫_0^t a_{ij}(X(s)) ds

are F_t^X-local martingales, and hence

W(t) = ∫_0^t σ^{−1}(X(s)) dX(s) − ∫_0^t σ^{−1}(X(s)) b(X(s)) ds = ∫_0^t σ^{−1}(X(s)) dM(s)

is also, and

[W_i,W_j]_t = 〈W_i,W_j〉_t = δ_{ij} t,

so W is a Brownian motion by Lévy's characterization. Compatibility follows from the fact that W(t+·) − W(t) is independent of F_t^X.


Alternative approach 180

Let d = 1. Define

Z(t) = Z(0) + W(t) mod 1.

Then (X,Z) is a solution of the MGP for

Âf(x,z) = (1/2)a(x)(∂²/∂x²)f(x,z) + σ(x)(∂²/∂x∂z)f(x,z) + (1/2)(∂²/∂z²)f(x,z) + b(x)(∂/∂x)f(x,z).

Conversely, if (X,Z) is a solution of the martingale problem for Â, then the corresponding W can be recovered from Z.


Verification of solution 181

For example, setting U_1(t) = cos 2πZ(t) and U_2(t) = sin 2πZ(t),

U_1(t) + ∫_0^t 2π² U_1(s) ds and U_2(t) + ∫_0^t 2π² U_2(s) ds

are martingales and

W(t) = (1/(2π)) (∫_0^t U_1(s) dU_2(s) − ∫_0^t U_2(s) dU_1(s))

is a Brownian motion. To see that X is a weak solution of the SDE, use the martingale properties to compute

E[(X(t∧τ) − X(0) − ∫_0^{t∧τ} σ(X(s)) dW(s) − ∫_0^{t∧τ} b(X(s)) ds)²].


Equivalence to original MGP 182

Let

αf(x) = ∫_0^1 f(x,z) dz.

Then

αÂf(x) = Aαf(x),

and any solution of the MGP for A corresponds to a solution of Â and hence is a weak solution of the SDE.


Processes with jumps 183

Af(x) = (1/2) Σ a_{ij}(x) ∂_i∂_j f(x) + b(x)·∇f(x)
+ ∫_{U_1} (f(x+α_1(x,u)) − f(x) − α_1(x,u)·∇f(x)) ν(du)
+ ∫_{U_2} (f(x+α_2(x,u)) − f(x)) ν(du)

Let D(A) = C²_c(ℝ^d) and suppose that Af is bounded for f ∈ D(A).

X(t) = X(0) + ∫_0^t σ(X(s)) dW(s) + ∫_0^t b(X(s)) ds (7.3)
+ ∫_{U_1×[0,t]} α_1(X(s−),u) ξ̃(du×ds) + ∫_{U_2×[0,t]} α_2(X(s−),u) ξ(du×ds)

Then a solution of the SDE is a solution of the martingale problem for A.


Stationary representations for PRM 184

Let {D_i} ⊂ B(U) be a countable collection of sets with ν(D_i) < ∞ that is closed under intersections and generates B(U). Then ξ is completely determined by {ξ(D_i,t)}. Define

Z_i(t) = Z_i(0)(−1)^{ξ(D_i,t)},

where Z_i(0) is ±1. Note that

ξ(D_i,t) = −(1/2) ∫_0^t Z_i(s−) dZ_i(s), (7.4)

and if the Z_i(0) are iid with P{Z_i(0) = 1} = P{Z_i(0) = −1} = 1/2 and independent of ξ, then for each t ≥ 0, the Z_i(t) are iid and independent of ξ.

Let z ∈ E_1 = {−1,1}^∞ and define

(−1)^{δ_u} z = ((−1)^{1_{D_1}(u)} z_1, (−1)^{1_{D_2}(u)} z_2, ...).


Joint generator 185

For f ∈ D(A) and g ∈ B(E_1) satisfying sup_z ∫ |g((−1)^{δ_u} z) − g(z)| ν(du) < ∞, let h(x,z) = f(x)g(z) and

Ah(x,z) = g(z) ((1/2) Σ a_{ij}(x) ∂_i∂_j f(x) + b(x)·∇f(x))
+ ∫_{U_1} (f(x+α_1(x,u)) g((−1)^{δ_u}z) − f(x)g(z) − g(z) α_1(x,u)·∇f(x)) ν(du)
+ ∫_{U_2} (f(x+α_2(x,u)) g((−1)^{δ_u}z) − f(x)g(z)) ν(du)


Markov mapping 186

E = ℝ^d × E_1

E_0 = ℝ^d

γ(x,z) = x

α(x, dy×dz) = δ_x(dy) Π_i ((1/2)δ_1(dz_i) + (1/2)δ_{−1}(dz_i))

Setting ḡ = ∫ g(z) Π_i ((1/2)δ_1(dz_i) + (1/2)δ_{−1}(dz_i)),

αh = ḡ f(x)

and

αAh(x) = ḡ Af(x) = Aαh(x).


Joint solution measure 187

Definition 7.7 Given Γ : S_1 × S_2 → ℝ and ν ∈ P(S_2), μ ∈ P(S_1×S_2) is a joint solution measure for (Γ,ν) if μ(S_1 × ·) = ν and

Γ(x,y) = 0, a.s. μ.

Joint uniqueness in law holds if there is at most one joint solution measure.

Lemma 7.8 The collection S_{Γ,ν} of joint solution measures is convex.

Let S_1 = D_{E_1}[0,∞) and S_2 = D_{E_2}[0,∞).

Lemma 7.9 Let C_ν be the collection of μ ∈ P(S_1×S_2) with the following properties:

a) μ(S_1 × ·) = ν.

b) If (X,Y) has distribution μ, then X is compatible with Y.

Then C_ν is convex.


Proof. X is compatible with Y if for each f ∈ B(D_{E_2}[0,∞)),

E[f(Y) | F_t^{X,Y}] = E[f(Y) | F_t^Y], t ≥ 0,

which is equivalent to

inf_{h∈B(D_{E_1×E_2}[0,t])} E[(f(Y) − h(X,Y))²] = inf_{h∈B(D_{E_2}[0,t])} E[(f(Y) − h(Y))²].

Note that the right side is determined by ν, so μ ∈ C_ν if μ(S_1 × ·) = ν and

∫_{S_1×S_2} (f(y) − h(x,y))² μ(dx×dy) ≥ inf_{h̃∈B(D_{E_2}[0,t])} ∫_{S_2} (f(y) − h̃(y))² ν(dy)

for each h ∈ B(D_{E_1×E_2}[0,t]), f ∈ B(D_{E_2}[0,∞)), and t ≥ 0. Each of these inequalities is preserved under convex combinations.


Conditions for compatibility 189

Lemma 7.10 X is compatible with Y if and only if for each t ≥ 0 and each g ∈ B(D_{E_1}[0,t]),

E[g(X) | Y] = E[g(X) | F_t^Y] a.s. (7.5)

Proof. Suppose that X is compatible with Y. Then for f ∈ B(S_2) and g ∈ B(D_{E_1}[0,t]),

E[f(Y)g(X)] = E[E[f(Y) | F_t^X ∨ F_t^Y] g(X)]
= E[E[f(Y) | F_t^Y] g(X)]
= E[E[f(Y) | F_t^Y] E[g(X) | F_t^Y]]
= E[f(Y) E[g(X) | F_t^Y]],

and (7.5) follows.

Conversely, suppose (7.5) holds. For f ∈ B(S_2), g ∈ B(D_{E_1}[0,t]), and h ∈ B(D_{E_2}[0,t]),

E[E[f(Y) | F_t^Y] g(X) h(Y)] = E[E[f(Y) | F_t^Y] E[g(X) | F_t^Y] h(Y)]
= E[f(Y) E[g(X) | Y] h(Y)]
= E[f(Y) g(X) h(Y)],

and compatibility follows.


Pathwise uniqueness 190

Definition 7.11 Let X_1, X_2, and Y be defined on the same probability space, with X_1 and X_2 S_1-valued and Y S_2-valued. (X_1,X_2) are jointly compatible with Y if

E[f(Y) | F_t^{X_1} ∨ F_t^{X_2} ∨ F_t^Y] = E[f(Y) | F_t^Y], t ≥ 0, f ∈ B(S_2).

Pathwise uniqueness holds for compatible solutions of (Γ,ν) if for every triple of processes (X_1,X_2,Y) defined on the same sample space such that μ_{X_1,Y}, μ_{X_2,Y} ∈ S_{Γ,ν} ∩ C_ν and (X_1,X_2) is jointly compatible with Y, X_1 = X_2 a.s.

Definition 7.12 A solution (X,Y) is strong if there exists a measurable F : S_2 → S_1 such that X = F(Y) a.s.


Yamada-Watanabe-Engelbert theorem 191

Theorem 7.13 Suppose S_{Γ,ν} ∩ C_ν ≠ ∅. The following are equivalent:

a) Pathwise uniqueness holds.

b) μ ∈ S_{Γ,ν} ∩ C_ν is unique and there is a strong, compatible solution.

Proof. Let μ, γ ∈ S_{Γ,ν} ∩ C_ν, and let Y, ξ_1, and ξ_2 be independent with μ_Y = ν and ξ_1 and ξ_2 uniform on [0,1]. Then there exist F_μ : S_2×[0,1] → S_1 and F_γ : S_2×[0,1] → S_1 such that (F_μ(Y,ξ_1), Y) has distribution μ and (F_γ(Y,ξ_2), Y) has distribution γ.

Claim: X_1 = F_μ(Y,ξ_1) and X_2 = F_γ(Y,ξ_2) are jointly compatible with Y.

For f ∈ B(D_{E_1}[0,t]),

E[f(F_μ(Y,ξ_1)) | Y, ξ_2] = E[f(F_μ(Y,ξ_1)) | Y] = E[f(F_μ(Y,ξ_1)) | F_t^Y].


Consequently, for f ∈ B(S_2), g_1, g_2 ∈ B(D_{E_1}[0,t]), and h ∈ B(D_{E_2}[0,t]),

E[f(Y) g_1(X_1) g_2(X_2) h(Y)]
= E[f(Y) E[g_1(X_1) | Y, ξ_2] g_2(X_2) h(Y)]
= E[f(Y) E[g_1(X_1) | F_t^Y] g_2(X_2) h(Y)]
= E[E[f(Y) | F_t^{X_2} ∨ F_t^Y] E[g_1(X_1) | F_t^Y] g_2(X_2) h(Y)]
= E[E[f(Y) | F_t^Y] E[g_1(X_1) | Y, ξ_2] g_2(X_2) h(Y)]
= E[E[f(Y) | F_t^Y] g_1(X_1) g_2(X_2) h(Y)],

giving the joint compatibility. By pathwise uniqueness, F_μ(Y,ξ_1) = F_γ(Y,ξ_2) a.s. The independence of ξ_1 and ξ_2 implies that the solution is strong.

Conversely, the strong solution must give the unique μ ∈ S_{Γ,ν} ∩ C_ν. Consequently, there exists F : S_2 → S_1 such that μ_{X,Y} = μ implies X = F(Y) almost surely, and pathwise uniqueness follows.


8. Change of measure

• Absolute continuity and the Radon-Nikodym theorem

• Applications of absolute continuity

• Bayes formula

• Local absolute continuity

• Martingales under a change of measure

• Change of measure for Brownian motion

• Change of measure for Poisson processes


Absolute continuity and the Radon-Nikodym theorem 194

Definition 8.1 Let P and Q be probability measures on (Ω,F). Then P is absolutely continuous with respect to Q (P << Q) if and only if Q(A) = 0 implies P(A) = 0.

Theorem 8.2 If P << Q, then there exists a random variable L ≥ 0 such that

P(A) = E^Q[1_A L] = ∫_A L dQ, A ∈ F.

Consequently, Z is P-integrable if and only if ZL is Q-integrable, and

E^P[Z] = E^Q[ZL].

Standard notation: dP/dQ = L.


Maximum likelihood estimation 195

Suppose for each α ∈ A,

P_α(Γ) = ∫_Γ L_α dQ

and

L_α = H(α, X_1, X_2, ..., X_n)

for random variables X_1, ..., X_n. The maximum likelihood estimate α̂ for the "true" parameter α_0 ∈ A based on observations of the random variables X_1, ..., X_n is the value of α that maximizes H(α, X_1, X_2, ..., X_n).

For example, under certain conditions the distribution of

X_α(t) = X(0) + ∫_0^t σ(X_α(s)) dW(s) + ∫_0^t b(X_α(s), α) ds

will be absolutely continuous with respect to the distribution of X satisfying

X(t) = X(0) + ∫_0^t σ(X(s)) dW(s). (8.1)
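A hedged sketch of this idea (not from the notes) for a drift family b(x,α) = α b_0(x) with known σ: by Girsanov, log L_α = α ∫ (b_0/σ²) dX − (α²/2) ∫ (b_0²/σ²) ds, which is quadratic in α, so the maximizer has a closed form. The model b_0, σ, and the simulated data below are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(4)

sigma = lambda x: 1.0
b0 = lambda x: -x                 # drift family b(x, alpha) = alpha * b0(x)

# simulate a path with "true" alpha0 = 1.5 (Euler scheme)
alpha0, dt, n = 1.5, 0.001, 20000
x = np.empty(n + 1); x[0] = 1.0
for k in range(n):
    x[k + 1] = x[k] + alpha0 * b0(x[k]) * dt \
               + sigma(x[k]) * np.sqrt(dt) * rng.normal()

# discretized Girsanov log-likelihood is alpha*S1 - (1/2)*alpha^2*S2,
# so the maximizer is alpha_hat = S1 / S2
dx = np.diff(x)
g = b0(x[:-1]) / sigma(x[:-1]) ** 2
S1 = np.sum(g * dx)               # ≈ ∫ b0/σ² dX
S2 = np.sum(g * b0(x[:-1]) * dt)  # ≈ ∫ b0²/σ² ds
print("alpha_hat =", S1 / S2)     # should be near 1.5
```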


Sufficiency 196

If dP_α = L_α dQ where

L_α(X,Y) = H_α(X) G(X,Y),

then X is a sufficient statistic for α. Without loss of generality, we can assume E^Q[G(X,Y)] = 1, and hence dQ̃ = G(X,Y) dQ defines a probability measure.

Example 8.3 If (X_1, ..., X_n) are iid N(μ,σ²) under P_{(μ,σ)} and Q = P_{(0,1)}, then

L_{(μ,σ)} = (1/σ^n) exp{−((1−σ²)/(2σ²)) Σ_{i=1}^n X_i² + (μ/σ²) Σ_{i=1}^n X_i − nμ²/(2σ²)},

so (Σ_{i=1}^n X_i², Σ_{i=1}^n X_i) is a sufficient statistic for (μ,σ).
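A quick numerical check of Example 8.3 (my sketch, not from the notes): the likelihood ratio computed directly from the densities agrees with the formula evaluated through the sufficient statistic.

```python
import numpy as np

rng = np.random.default_rng(5)
pdf = lambda x, m, sd: np.exp(-(x - m) ** 2 / (2 * sd * sd)) / (sd * np.sqrt(2 * np.pi))

# dP_(mu,sigma)/dP_(0,1) depends on the data only through (sum X_i, sum X_i^2)
mu, s = 0.7, 1.3
X = rng.normal(mu, s, size=50)

direct = np.prod(pdf(X, mu, s) / pdf(X, 0.0, 1.0))
T1, T2 = X.sum(), (X ** 2).sum()
via_stats = s ** (-len(X)) * np.exp(
    -(1 - s ** 2) / (2 * s ** 2) * T2
    + mu / s ** 2 * T1
    - len(X) * mu ** 2 / (2 * s ** 2))
print(direct, via_stats)   # agree up to floating point
```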


Parameter estimates and sufficiency 197

Theorem 8.4 If θ̂(X,Y) is an estimator of θ(α) and ϕ is convex, then

E^{P_α}[ϕ(θ(α) − θ̂(X,Y))] ≥ E^{P_α}[ϕ(θ(α) − E^{Q̃}[θ̂(X,Y)|X])].

Proof.

E^{P_α}[ϕ(θ(α) − θ̂(X,Y))] = E^{Q̃}[ϕ(θ(α) − θ̂(X,Y)) H_α(X)]
= E^{Q̃}[E^{Q̃}[ϕ(θ(α) − θ̂(X,Y)) | X] H_α(X)]
≥ E^{Q̃}[ϕ(θ(α) − E^{Q̃}[θ̂(X,Y)|X]) H_α(X)],

the last step by Jensen's inequality.


Other applications 198

Finance: Asset pricing models depend on finding a change of measure under which the price process becomes a martingale.

Stochastic control: For a controlled diffusion process

X(t) = X(0) + ∫_0^t σ(X(s)) dW(s) + ∫_0^t b(X(s), u(s)) ds,

where the control only enters the drift coefficient, the controlled process can be obtained from an uncontrolled process satisfying (8.1) via a change of measure.


Bayes Formula 199

Recall that Y = E[Z|D] if Y is D-measurable and for each D ∈ D, ∫_D Y dP = ∫_D Z dP.

Lemma 8.5 (Bayes formula) If dP = L dQ, then

E^P[Z|D] = E^Q[ZL|D] / E^Q[L|D]. (8.2)

Proof. Clearly the right side of (8.2) is D-measurable. Let D ∈ D. Then

∫_D (E^Q[ZL|D]/E^Q[L|D]) dP = ∫_D (E^Q[ZL|D]/E^Q[L|D]) L dQ
= ∫_D (E^Q[ZL|D]/E^Q[L|D]) E^Q[L|D] dQ
= ∫_D E^Q[ZL|D] dQ
= ∫_D ZL dQ = ∫_D Z dP,

which verifies the identity.


Examples 200

For real-valued random variables with a joint density, (X,Y) ∼ f_{XY}(x,y), conditional expectations can be computed by

E[g(Y)|X = x] = (∫_{−∞}^∞ g(y) f_{XY}(x,y) dy) / f_X(x),

that is, setting h(x) equal to the right side,

E[g(Y)|X] = h(X).

For general random variables, suppose X and Y are independent on (Ω,F,Q). Let L = H(X,Y) ≥ 0 with E[H(X,Y)] = 1. Define

ν_Y(Γ) = Q{Y ∈ Γ}, dP = H(X,Y) dQ.

Bayes formula becomes

E^P[g(Y)|X] = E^Q[g(Y)H(X,Y)|X] / E^Q[H(X,Y)|X] = ∫ g(y)H(X,y) ν_Y(dy) / ∫ H(X,y) ν_Y(dy).


Local absolute continuity 201

Theorem 8.6 Let (Ω,F) be a measurable space, and let P and Q be probability measures on F. Suppose D_n ⊂ D_{n+1} and that for each n, P|_{D_n} << Q|_{D_n}. Define L_n = (dP/dQ)|_{D_n}. Then {L_n} is a nonnegative {D_n}-martingale on (Ω,F,Q) and L = lim_{n→∞} L_n satisfies E^Q[L] ≤ 1. If E^Q[L] = 1, then P << Q on D = ∨_n D_n.

Proof. If D ∈ D_n ⊂ D_{n+1}, then P(D) = E^Q[L_n 1_D] = E^Q[L_{n+1} 1_D], which implies E[L_{n+1}|D_n] = L_n. If E[L] = 1, then L_n → L in L¹, so

P(D) = E^Q[L 1_D], D ∈ ∪_n D_n,

hence for all D ∈ ∨_n D_n.

Proposition 8.7 P << Q on D if and only if P{lim_{n→∞} L_n < ∞} = 1.

Proof. The dominated convergence theorem implies

P{sup_n L_n ≤ K} = lim_{m→∞} E^Q[1_{{sup_{n≤m} L_n ≤ K}} L_m] = E^Q[1_{{sup_n L_n ≤ K}} L].

Letting K → ∞, we see that E^Q[L] = 1.


Martingales and change of measure 202

(See [10], Section III.6.)

Let {F_t} be a filtration and assume that P|_{F_t} << Q|_{F_t}, with L(t) the corresponding Radon-Nikodym derivative. Then, as before, L is an {F_t}-martingale on (Ω,F,Q).

Lemma 8.8 Z is a P-local martingale if and only if LZ is a Q-local martingale.

Proof. For a bounded stopping time τ, Z(τ) is P-integrable if and only if L(τ)Z(τ) is Q-integrable. Furthermore, if L(τ∧t)Z(τ∧t) is Q-integrable, then L(t)Z(τ∧t) is Q-integrable and

E^Q[L(τ∧t)Z(τ∧t)] = E^Q[L(t)Z(τ∧t)].

By Bayes formula, E^P[Z(t+h) − Z(t) | F_t] = 0 if and only if E^Q[L(t+h)(Z(t+h) − Z(t)) | F_t] = 0, which is equivalent to

E^Q[L(t+h)Z(t+h) | F_t] = E^Q[L(t+h)Z(t) | F_t] = L(t)Z(t),

so Z is a martingale under P if and only if LZ is a martingale under Q.


Semimartingale decompositions under a change of measure 203

Theorem 8.9 If M is a Q-local martingale, then

Z(t) = M(t) − ∫_0^t (1/L(s)) d[L,M]_s (8.3)

is a P-local martingale. (Note that the integrand is 1/L(s), not 1/L(s−).)

Proof. Note that LM − [L,M] is a Q-local martingale. We need to show that LZ is a Q-local martingale. But letting V denote the second term on the right of (8.3), we have L(t)V(t) = ∫_0^t V(s−) dL(s) + ∫_0^t L(s) dV(s), and hence

L(t)Z(t) = L(t)M(t) − [L,M]_t − ∫_0^t V(s−) dL(s).

Both terms on the right are Q-local martingales.


Change of measure for Brownian motion 204

Let W be standard Brownian motion, and let ξ be an adapted process. Define

L(t) = exp{∫_0^t ξ(s) dW(s) − (1/2)∫_0^t ξ²(s) ds}

and note that

L(t) = 1 + ∫_0^t ξ(s)L(s) dW(s).

Then L is a local martingale.

Assume E^Q[L(t)] = 1 for all t ≥ 0. Then L is a martingale. Fix a time T, and restrict attention to the probability space (Ω,F_T,Q). On F_T, define dP = L(T)dQ.

For t < T, let A ∈ F_t. Then

P(A) = E^Q[1_A L(T)] = E^Q[1_A E^Q[L(T)|F_t]] = E^Q[1_A L(t)],

which has no dependence on T (crucial that L is a martingale).


New Brownian motion 205

Theorem 8.10 W̃(t) = W(t) − ∫_0^t ξ(s) ds is a standard Brownian motion on (Ω,F_T,P).

Proof. Since W̃ is continuous and [W̃]_t = t a.s., it is enough to show that W̃ is a local martingale (and hence a martingale). But since W is a Q-martingale and [L,W]_t = ∫_0^t ξ(s)L(s) ds, Theorem 8.9 gives the desired result.


Changing the drift of a diffusion 206

Suppose that

X(t) = X(0) + ∫_0^t σ(X(s)) dW(s)

and set

ξ(s) = b(X(s)).

Note that X is a diffusion with generator (1/2)σ²(x)f″(x). Define

L(t) = exp{∫_0^t b(X(s)) dW(s) − (1/2)∫_0^t b²(X(s)) ds},

and assume that E^Q[L(T)] = 1 (e.g., if b is bounded). Set dP = L(T)dQ on (Ω,F_T).


Transformed SDE 207

Define W̃(t) = W(t) − ∫_0^t b(X(s)) ds. Then

X(t) = X(0) + ∫_0^t σ(X(s)) dW(s) (8.4)
= X(0) + ∫_0^t σ(X(s)) dW̃(s) + ∫_0^t σ(X(s)) b(X(s)) ds,

so under P, X is a diffusion with generator

Af(x) = (1/2)σ²(x)f″(x) + σ(x)b(x)f′(x). (8.5)


Conditions that imply local absolute continuity 208

Let Af(x) = (1/2)σ²(x)f″(x) + σ(x)b(x)f′(x).

Condition 8.11 If (X_n, τ_n), n = 1, 2, ..., satisfy that

f(X_n(t∧τ_n)) − f(X_n(0)) − ∫_0^{t∧τ_n} Af(X_n(s)) ds

is an {F_t^n}-martingale for each f ∈ C²_c(ℝ) and

τ_n = inf{t : ∫_0^t b²(X_n(s)) ds ≥ n},

then lim_{n→∞} P{τ_n ≤ T} = 0 for each T > 0.

Theorem 8.12 Suppose Condition 8.11 holds, and let W be a Brownian motion on (Ω,F,Q). If X is a solution of

X(t) = X(0) + ∫_0^t σ(X(s)) dW(s)

on (Ω,F,Q), then for each T, there is a change of measure dP = L(T)dQ such that X on (Ω,F_T,P) is a solution of the martingale problem for A on [0,T].


Proof. Let

L(t) = exp{∫_0^t b(X(s)) dW(s) − (1/2)∫_0^t b²(X(s)) ds},

and define τ_n = inf{t : ∫_0^t b²(X(s)) ds > n}. Then E^Q[L(T∧τ_n)] = 1 and we can define dP = L(T∧τ_n)dQ on F_{T∧τ_n}. On (Ω,F_{T∧τ_n},P),

W̃(t∧τ_n) = W(t∧τ_n) − ∫_0^{t∧τ_n} b(X(s)) ds

is a Brownian motion stopped at τ_n and

X(t) = X(0) + ∫_0^t σ(X(s)) dW̃(s) + ∫_0^t σ(X(s)) b(X(s)) ds

for t ≤ T∧τ_n. Then (X,τ_n), n = 1, 2, ..., satisfies Condition 8.11, and since

P{L(T∧τ_n) > K} = P{∫_0^{T∧τ_n} b(X(s)) dW̃(s) + (1/2)∫_0^{T∧τ_n} b²(X(s)) ds > log K},

we can apply Proposition 8.7 to conclude that P << Q on F_T, that is, E^Q[L(T)] = 1.


Change of measure for Poisson processes 210

Theorem 8.13 Let N be a unit Poisson process on (Ω,F,Q) that is compatible with {F_t}. If Λ is nonnegative, {F_t}-predictable, and satisfies

∫_0^t Λ(s) ds < ∞ a.s., t ≥ 0,

then

L(t) = exp{∫_0^t ln Λ(s) dN(s) − ∫_0^t (Λ(s) − 1) ds}

satisfies

L(t) = 1 + ∫_0^t (Λ(s) − 1) L(s−) d(N(s) − s) (8.6)

and is a Q-local martingale. If E[L(T)] = 1 and we define dP = L(T)dQ on F_T, then N(t) − ∫_0^t Λ(s) ds is a P-local martingale.
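A numerical sanity check (mine, not from the notes): for constant Λ ≡ λ, L(t) = exp{N(t) ln λ − (λ−1)t}, and reweighting unit-Poisson samples by L(t) should reproduce Poisson(λt) moments.

```python
import numpy as np

rng = np.random.default_rng(6)

# Under dP = L(t) dQ with constant Lambda = lam, N(t) is Poisson(lam*t);
# check E^Q[L(t) f(N(t))] ≈ E[f(Poisson(lam*t))].
lam, t, n_paths = 2.5, 3.0, 200000
N = rng.poisson(t, size=n_paths)                  # unit Poisson at time t under Q
L = np.exp(N * np.log(lam) - (lam - 1.0) * t)

print("E^Q[L]      =", L.mean(), "(should be 1)")
print("E^P[N(t)]   =", (L * N).mean(), "(should be", lam * t, ")")
print("Var^P[N(t)] =", (L * N ** 2).mean() - (L * N).mean() ** 2,
      "(should be", lam * t, ")")
```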


Proof. By Theorem 8.9,

Z(t) = N(t) − t − ∫_0^t (1/L(s))(Λ(s) − 1)L(s−) dN(s) = ∫_0^t (1/Λ(s)) dN(s) − t

is a local martingale under P. Consequently,

∫_0^t Λ(s) dZ(s) = N(t) − ∫_0^t Λ(s) ds

is a local martingale under P.


Construction of counting processes by change of measure 212

Let J[0,∞) denote the collection of nonnegative integer-valued cadlag functions that are constant except for jumps of +1. Suppose that λ : J[0,∞) × [0,∞) → [0,∞),

∫_0^t λ(x,s) ds < ∞, t ≥ 0, x ∈ J[0,∞),

and that λ(x,s) = λ(x(·∧s), s) (that is, λ is nonanticipating). If we take Λ(t) = λ(N,t) and let τ_n = inf{t : N(t) = n}, then defining dP = L(τ_n)dQ on F_{τ_n}, N(·∧τ_n) on (Ω,F_{τ_n},P) has the same distribution as Ñ(·∧τ̃_n), where Ñ is the solution of

Ñ(t) = Y(∫_0^t λ(Ñ,s) ds)

for Y a unit Poisson process and τ̃_n = inf{t : Ñ(t) = n}.
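A minimal simulation sketch (not from the notes) of solving Ñ(t) = Y(∫_0^t λ(Ñ,s)ds): in the hypothetical example below λ(Ñ,·) is constant between jumps, so each inter-jump wait is an Exp(1) increment of Y divided by the current rate.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_time_change(lam, T):
    """Solve N(t) = Y(∫_0^t lam(jumps, s) ds) for Y a unit Poisson process.

    Between jumps the path is frozen; lam is assumed piecewise constant in s
    between jumps (a hypothetical choice), so the next jump time solves
    ∫_{t_k}^{t} lam ds = E_k with E_k ~ Exp(1) in closed form."""
    t, jumps = 0.0, []
    while True:
        rate = lam(jumps, t)          # constant until the next jump
        if rate <= 0:
            break
        t += rng.exponential(1.0) / rate
        if t > T:
            break
        jumps.append(t)
    return jumps

# Hypothetical intensity depending on the number of past jumps.
lam = lambda jumps, t: 1.0 + 0.5 * len(jumps)
print(len(simulate_time_change(lam, 5.0)))  # a pure-birth-type count
```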


Change of measure for Poisson random measures 213

Let ξ be a Poisson random measure on U × [0,∞) with mean measure ν(du)×dt, and let λ be a positive, predictable process satisfying

∫_{U×[0,t]} (λ(u,s) − 1)² ∧ |λ(u,s) − 1| ν(du)ds < ∞ a.s., t ≥ 0.

Let M_λ(t) = ∫_{U×[0,t]} (λ(u,s) − 1) ξ̃(du×ds) and let L be the solution of

L(t) = 1 + ∫_0^t L(s−) dM_λ(s) = 1 + ∫_{U×[0,t]} (λ(u,s) − 1) L(s−) ξ̃(du×ds). (8.7)

Then L is a local martingale. If ∫_{U×[0,t]} |λ(u,s) − 1| ν(du)ds < ∞ a.s., t ≥ 0, then

L(t) = exp{∫_{U×[0,t]} log λ(u,s) ξ(du×ds) − ∫_{U×[0,t]} (λ(u,s) − 1) ν(du)ds},

and in general,

L(t) = exp{∫_{U×[0,t]} log λ(u,s) ξ̃(du×ds) + ∫_{U×[0,t]} (log λ(u,s) − λ(u,s) + 1) ν(du)ds}.

Page 214: Martingale

•First •Prev •Next •Last •Go Back •Full Screen •Close •Quit

Intensity for the transformed counting measure 214

If E[L(T )] = 1, then for A ∈ B(U) with ν(A) <∞,

MA(t) =

∫A×[0,t]

λ(u, s)ξ(du× ds)

is a local martingale under Q and

[MA, L]t =

∫A×[0,t]

λ(u, s)(λ(u, s)− 1)L(s−)ξ(du× ds).

Consequently,

ZA(t) =

∫A×[0,t]

λ(u, s)ξ(du× ds)−∫

A×[0,t]

1

L(s)λ(u, s)(λ(u, s)− 1)L(s−)ξ(du× ds)

= ξ(A, t)−∫

A×[0,t]

λ(u, s)ν(du)ds

is a local martingales under dP = L(T )dQ.

Change for stochastic equations 215

$U = U_1 \cup U_2$, $U_1 \cap U_2 = \emptyset$,
\[ Af(x) = \int_{U_1} (f(x+\alpha(x,u)) - f(x) - \alpha(x,u)\cdot\nabla f(x))\,\nu(du) + \int_{U_2} (f(x+\alpha(x,u)) - f(x))\,\nu(du). \]
Let $\mathcal D(A) = C_c^2(\mathbb R^d)$ and suppose that $Af$ is bounded for $f \in \mathcal D(A)$. Then $X$ satisfying
\[ X(t) = X(0) + \int_{U_1\times[0,t]} \alpha(X(s-),u)\,\widetilde\xi(du\times ds) + \int_{U_2\times[0,t]} \alpha(X(s-),u)\,\xi(du\times ds) \]
is a solution of the martingale problem for $A$.

New martingale problem 216

Let $\lambda(u,s) = \lambda(u,X(s-))$. Under $Q$,
\[ f(X(t)) - f(X(0)) - \int_0^t Af(X(s))\,ds = \int_{U\times[0,t]} (f(X(s-)+\alpha(X(s-),u)) - f(X(s-)))\,\widetilde\xi(du\times ds) \]
is a local martingale, so under $P$,
\[ f(X(t)) - f(X(0)) - \int_0^t Af(X(s))\,ds - \int_0^t\int_U (f(X(s)+\alpha(X(s),u)) - f(X(s)))(\lambda(u,X(s))-1)\,\nu(du)\,ds \]
\[ = \int_{U\times[0,t]} (f(X(s-)+\alpha(X(s-),u)) - f(X(s-)))\,(\xi(du\times ds) - \lambda(u,X(s))\nu(du)\,ds) \]
is a local martingale.

New generator 217

$X$ is a solution of the martingale problem for
\[ Af(x) = \int_{U_1} (f(x+\alpha(x,u)) - f(x) - \alpha(x,u)\cdot\nabla f(x))\,\lambda(u,x)\,\nu(du) + \int_{U_2} (f(x+\alpha(x,u)) - f(x))\,\lambda(u,x)\,\nu(du) \]
\[ \quad + \int_{U_1} \alpha(x,u)(\lambda(u,x)-1)\,\nu(du)\cdot\nabla f(x). \]


9. Filtering

• Observation of a signal in noise

• Continuous time filtering in Gaussian white noise

• Zakai equation

• Kushner-Stratonovich equation

• Point process observations

• Convergence of stochastic integrals

• Convergence of exchangeable families

• Consistency of Monte Carlo approximation of filtering equations

• Finite dimensional approximations

Observation of a signal in noise 219

Signal: $X_1, X_2, \ldots$

Observation: $Y_k = h(X_k) + \zeta_k$, $\zeta_k$ iid "noise"

Filtering problem: Compute $\pi_n(\Gamma) = P\{X_n \in \Gamma\,|\,\mathcal F^Y_n\}$.

Suppose $\zeta_k$ has a strictly positive density $\gamma$ with respect to Lebesgue measure.

Theorem 9.1 Suppose that under $Q$, $\{Y_k\}$ are iid with density $\gamma(z)$ and are independent of $\{X_k\}$. Then
\[ L_n = \prod_{k=1}^n \frac{\gamma(Y_k - h(X_k))}{\gamma(Y_k)} \]
is a martingale and under $dP = L_n\,dQ$, $(Y_1,\ldots,Y_n)$ has the same distribution as $(h(X_1)+\zeta_1,\ldots,h(X_n)+\zeta_n)$.

Proof.
\[ E^Q[g(Y_1,\ldots,Y_n)L_n] = \int_{\mathbb R^d}\cdots\int_{\mathbb R^d} E^Q\Big[g(z_1,\ldots,z_n)\prod_{k=1}^n \gamma(z_k - h(X_k))\Big]\,dz_1\cdots dz_n \]
\[ = \int_{\mathbb R^d}\cdots\int_{\mathbb R^d} E^Q\Big[g(h(X_1)+z_1,\ldots,h(X_n)+z_n)\prod_{k=1}^n \gamma(z_k)\Big]\,dz_1\cdots dz_n = E^Q[g(h(X_1)+\zeta_1,\ldots,h(X_n)+\zeta_n)] \]

Recursive solution 221

Suppose $\{X_k\}$ is a Markov chain in $E$ with transition function $P(dz|x)$. Let $\pi_n(dx) = P\{X_n \in dx\,|\,\mathcal F^Y_n\}$.
\[ E^P[g(X_n)|\mathcal F^Y_n] = \frac{E^Q[g(X_n)L_n|\mathcal F^Y_n]}{E^Q[L_n|\mathcal F^Y_n]} = \frac{E^Q[g(X_n)\prod_{k=1}^n \gamma(Y_k-h(X_k))|\mathcal F^Y_n]}{E^Q[\prod_{k=1}^n \gamma(Y_k-h(X_k))|\mathcal F^Y_n]} \]
\[ = \frac{E^Q[E^Q[g(X_n)\prod_{k=1}^n \gamma(Y_k-h(X_k))|\mathcal F^X_{n-1}\vee\mathcal F^Y_n]|\mathcal F^Y_n]}{E^Q[E^Q[\prod_{k=1}^n \gamma(Y_k-h(X_k))|\mathcal F^X_{n-1}\vee\mathcal F^Y_n]|\mathcal F^Y_n]} \]
\[ = \frac{E^Q[\int_E g(z)\gamma(Y_n-h(z))\,P(dz|X_{n-1})\prod_{k=1}^{n-1}\gamma(Y_k-h(X_k))|\mathcal F^Y_n]}{E^Q[\int_E \gamma(Y_n-h(z))\,P(dz|X_{n-1})\prod_{k=1}^{n-1}\gamma(Y_k-h(X_k))|\mathcal F^Y_n]} \]
\[ \int_E g(x)\,\pi_n(dx) = \frac{\int_E\int_E g(z)\gamma(Y_n-h(z))\,P(dz|x)\,\pi_{n-1}(dx)}{\int_E\int_E \gamma(Y_n-h(z))\,P(dz|x)\,\pi_{n-1}(dx)} \]

Unnormalized conditional distributions 222

Define $\varphi_0(dz) = \pi_0(dz)$ and
\[ \int_E g(z)\,\varphi_n(dz) = \int_E\int_E g(z)\gamma(Y_n - h(z))\,P(dz|x)\,\varphi_{n-1}(dx). \]
Then
\[ \int_E g(x)\,\pi_n(dx) = \frac{\int_E g(x)\,\varphi_n(dx)}{\varphi_n(E)}. \]
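For a finite state space the recursion is a single matrix-vector line per observation: $\varphi_n(z) = \gamma(Y_n - h(z))\sum_x P(x,z)\varphi_{n-1}(x)$, normalized to give $\pi_n$. A sketch with a hypothetical three-state chain and Gaussian noise density $\gamma$:

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical finite-state model: 3 states, h(x) = x, Gaussian noise
states = np.array([-1.0, 0.0, 1.0])
P = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])                  # transition matrix P(dz|x)
gamma = lambda z: np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)

# simulate signal and observations Y_k = h(X_k) + zeta_k
n, x = 50, 1
X, Y = [], []
for _ in range(n):
    x = rng.choice(3, p=P[x])
    X.append(x); Y.append(states[x] + rng.normal())

# filter recursion: phi_n(z) = gamma(Y_n - h(z)) * sum_x P(x,z) phi_{n-1}(x)
phi = np.full(3, 1/3)                            # pi_0 uniform
for k in range(n):
    phi = gamma(Y[k] - states) * (P.T @ phi)
    phi /= phi.sum()                             # normalization gives pi_n
print("final pi_n:", phi, " true state:", X[-1])
```

Normalizing at every step is harmless since the constant cancels in $\pi_n = \varphi_n/\varphi_n(E)$.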

Continuous time filtering in Gaussian white noise 223

Suppose $Y^{(n)}_k = h(X^{(n)}_k)\frac1n + \frac{1}{\sqrt n}\zeta_k$, $X^{(n)}_k \approx X(k/n)$, and $\zeta_k$ are iid with mean zero and variance $\sigma^2$. Then $Y_n(t) = \sum_{k=1}^{[nt]} Y^{(n)}_k$ is approximately
\[ Y(t) = \int_0^t h(X(s))\,ds + \sigma W(t). \]
Then
\[ E^P[g(X(t))|\mathcal F^Y_t] = \frac{E^Q[g(X(t))L(t)|\mathcal F^Y_t]}{E^Q[L(t)|\mathcal F^Y_t]}, \]
where under $Q$, $X$ and $Y$ are independent, $Y$ is a Brownian motion with mean zero and variance $\sigma^2t$, and
\[ L(t) = \exp\Big\{\int_0^t \frac{h(X(s))}{\sigma^2}\,dY(s) - \frac12\int_0^t \frac{h^2(X(s))}{\sigma^2}\,ds\Big\}, \]
that is,
\[ L(t) = 1 + \int_0^t \frac{h(X(s))}{\sigma^2}L(s)\,dY(s). \]

Monte Carlo solution 224

Let $X_1, X_2,\ldots$ be iid copies of $X$ that are independent of $Y$ under $Q$, and let
\[ L_i(t) = 1 + \int_0^t \frac{h(X_i(s))}{\sigma^2}L_i(s)\,dY(s). \]
Note that
\[ \varphi(g,t) \equiv E^Q[g(X(t))L(t)|\mathcal F^Y_t] = E^Q[g(X_i(t))L_i(t)|\mathcal F^Y_t]. \]
Claim:
\[ \frac1n\sum_{i=1}^n g(X_i(t))L_i(t) \to E^Q[g(X(t))L(t)|\mathcal F^Y_t] \]
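The claim invites a direct experiment: generate one observation path $Y$ from a "true" signal, propagate independent copies $X_i$ under $Q$ together with their weights $L_i$, and compare the weighted average with the true state. A sketch, with an illustrative OU signal and $h(x) = x$ (both hypothetical choices); the weights are updated in the always-positive exponential (Doléans) form:

```python
import numpy as np

rng = np.random.default_rng(4)
T, m, npart, sig = 2.0, 2000, 5000, 0.5
dt = T / m
h = lambda x: x                         # illustrative observation function

# "true" OU signal and observation increments dY = h(X) dt + sig dW
x_true, dY = 0.0, np.empty(m)
for k in range(m):
    dY[k] = h(x_true) * dt + sig * np.sqrt(dt) * rng.normal()
    x_true += -x_true * dt + np.sqrt(dt) * rng.normal()

# particles under Q (signal independent of Y) with weights
# L_i(t) = exp{ int h(X_i)/sig^2 dY - (1/2) int h^2(X_i)/sig^2 ds }
X = np.zeros(npart); L = np.ones(npart)
for k in range(m):
    L *= np.exp((h(X) / sig**2) * dY[k] - 0.5 * (h(X)**2 / sig**2) * dt)
    X += -X * dt + np.sqrt(dt) * rng.normal(size=npart)

print("filter estimate:", (X * L).sum() / L.sum(), "  true X(T):", x_true)
```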

Zakai equation 225

Assume $X$ is a diffusion
\[ X(t) = X(0) + \int_0^t \sigma(X(s))\,dB(s) + \int_0^t b(X(s))\,ds, \]
where under $Q$, $B$ and $Y$ are independent. Since
\[ g(X(t)) = g(X(0)) + \int_0^t g'(X(s))\sigma(X(s))\,dB(s) + \int_0^t Ag(X(s))\,ds, \]
\[ g(X(t))L(t) = g(X(0)) + \int_0^t L(s)\,dg(X(s)) + \int_0^t g(X(s))\,dL(s) \]
\[ = g(X(0)) + \int_0^t L(s)g'(X(s))\sigma(X(s))\,dB(s) + \int_0^t L(s)Ag(X(s))\,ds + \frac{1}{\sigma^2}\int_0^t g(X(s))h(X(s))L(s)\,dY(s). \]

Monte Carlo derivation of Zakai equation 226

\[ X_i(t) = X_i(0) + \int_0^t \sigma(X_i(s))\,dB_i(s) + \int_0^t b(X_i(s))\,ds, \]
where $(X_i(0),B_i)$ are iid copies of $(X(0),B)$.
\[ g(X_i(t))L_i(t) = g(X_i(0)) + \int_0^t L_i(s)g'(X_i(s))\sigma(X_i(s))\,dB_i(s) + \int_0^t L_i(s)Ag(X_i(s))\,ds + \frac{1}{\sigma^2}\int_0^t g(X_i(s))h(X_i(s))L_i(s)\,dY(s), \]
and hence
\[ \varphi(g,t) = \varphi(g,0) + \int_0^t \varphi(Ag,s)\,ds + \frac{1}{\sigma^2}\int_0^t \varphi(gh,s)\,dY(s) \]

Kushner-Stratonovich equation 227

\[ \pi(g,t) = E^P[g(X(t))|\mathcal F^Y_t] = \frac{\varphi(g,t)}{\varphi(1,t)} \]
\[ = \frac{\varphi(g,0)}{\varphi(1,0)} + \int_0^t \frac{1}{\varphi(1,s)}\,d\varphi(g,s) - \int_0^t \frac{\varphi(g,s)}{\varphi(1,s)^2}\,d\varphi(1,s) + \int_0^t \frac{\varphi(g,s)}{\varphi(1,s)^3}\,d[\varphi(1,\cdot)]_s - \int_0^t \frac{1}{\varphi(1,s)^2}\,d[\varphi(g,\cdot),\varphi(1,\cdot)]_s \]
\[ = \pi(g,0) + \int_0^t \pi(Ag,s)\,ds + \frac{1}{\sigma^2}\int_0^t (\pi(gh,s)-\pi(g,s)\pi(h,s))\,dY(s) + \frac{1}{\sigma^2}\int_0^t \pi(g,s)\pi(h,s)^2\,ds - \frac{1}{\sigma^2}\int_0^t \pi(gh,s)\pi(h,s)\,ds \]
\[ = \pi(g,0) + \int_0^t \pi(Ag,s)\,ds + \frac{1}{\sigma^2}\int_0^t (\pi(gh,s)-\pi(g,s)\pi(h,s))\,(dY(s)-\pi(h,s)\,ds) \]

Counting process observations 228

Model: $X$ is a diffusion
\[ X(t) = X(0) + \int_0^t \sigma(X(s))\,dB(s) + \int_0^t b(X(s))\,ds \tag{9.1} \]
and
\[ Y(t) = V\Big(\int_0^t \lambda(X(s))\,ds\Big), \]
where $V$ is a unit Poisson process independent of $B$.

Reference measure construction: Under $Q$, $X$ is the diffusion given by (9.1) and $Y$ is an independent, unit Poisson process. The change of measure is given by (8.6):
\[ L(t) = 1 + \int_0^t (\lambda(X(s))-1)L(s-)\,d(Y(s)-s) \]

Zakai equation 229

\[ g(X(t))L(t) = g(X(0)) + \int_0^t g(X(s))\,dL(s) + \int_0^t L(s)\,dg(X(s)) \]
\[ = g(X(0)) + \int_0^t L(s)g'(X(s))\sigma(X(s))\,dB(s) + \int_0^t L(s)Ag(X(s))\,ds + \int_0^t g(X(s))(\lambda(X(s))-1)L(s-)\,d(Y(s)-s) \]
The unnormalized conditional distribution $\varphi(g,t) = E^Q[g(X(t))L(t)|\mathcal F^Y_t]$ satisfies
\[ \varphi(g,t) = \varphi(g,0) + \int_0^t \varphi((A-C)g,s)\,ds + \int_0^t \varphi(Cg,s-)\,dY(s), \]
where $Cg(x) = (\lambda(x)-1)g(x)$.

Kushner-Stratonovich equation 230

For $\pi(g,t) \equiv \varphi(g,t)/\varphi(1,t)$,
\[ \pi(g,t) = \pi(g,0) + \int_0^t (\pi(Ag,s) - \pi(\lambda g,s) + \pi(\lambda,s)\pi(g,s))\,ds + \int_0^t \frac{\pi(\lambda g,s-) - \pi(\lambda,s-)\pi(g,s-)}{\pi(\lambda,s-)}\,dY(s) \]

Solution of the Zakai equation 231

Let $T(t)$ be the semigroup given by
\[ T(t)f(x) = E[f(X(t))e^{-\int_0^t(\lambda(X(s))-1)\,ds}] \]
Suppose the jump times of $Y$ satisfy $0 < \tau_1 < \cdots < \tau_m < t < \tau_{m+1}$. Then
\[ \varphi(g,t) = \varphi(T(t-\tau_m)g,\tau_m), \qquad \varphi(g,\tau_{k+1}-) = \varphi(T(\tau_{k+1}-\tau_k)g,\tau_k), \]
\[ \varphi(g,\tau_{k+1}) = \varphi(\lambda g,\tau_{k+1}-) = \varphi(T(\tau_{k+1}-\tau_k)\lambda g,\tau_k). \]
If $\varphi(dx,t) = \varphi(x,t)\,dx$, then
\[ \varphi(\cdot,t) = T^*(t-\tau_m)\varphi(\cdot,\tau_m), \qquad \varphi(\cdot,\tau_{k+1}-) = T^*(\tau_{k+1}-\tau_k)\varphi(\cdot,\tau_k), \]
\[ \varphi(\cdot,\tau_{k+1}) = \lambda(\cdot)\varphi(\cdot,\tau_{k+1}-) = \lambda(\cdot)T^*(\tau_{k+1}-\tau_k)\varphi(\cdot,\tau_k). \]
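For a finite-state signal the semigroup is a matrix exponential, $T(t) = e^{t(\Lambda - \mathrm{diag}(\lambda-1))}$ with $\Lambda$ the rate matrix, and the filter alternates deterministic propagation with a multiplicative update at each observed jump. A minimal sketch (the two-state model is hypothetical):

```python
import numpy as np
from scipy.linalg import expm

# hypothetical two-state signal observed through a counting process
Qm  = np.array([[-1.0, 1.0], [2.0, -2.0]])      # rate matrix of X
lam = np.array([0.5, 3.0])                       # observation intensity lambda(x)
G   = Qm - np.diag(lam - 1.0)                    # generator of T(t)

def propagate(phi, dt):
    # phi(., t) = T*(dt) phi: the adjoint acts on row vectors
    return phi @ expm(G * dt)

taus, T = [0.7, 1.1, 2.4], 3.0                   # observed jump times on [0, T]
phi, t = np.array([0.5, 0.5]), 0.0
for tau in taus:
    phi = propagate(phi, tau - t)                # between jumps
    phi = lam * phi                              # multiplicative jump update
    t = tau
phi = propagate(phi, T - t)
print("pi(., T) =", phi / phi.sum())
```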

Point process observations 232

Model: $X$ is adapted to $\{\mathcal F_t\}$, and
\[ Y(t,\Gamma) - \int_0^t\int_\Gamma \lambda(X(s),u)\,\nu(du)\,ds \]
is an $\mathcal F_t$-martingale for all $\Gamma \in \mathcal B(U)$ with $\nu(\Gamma) < \infty$. Assume
\[ \sup_x \int_U |\lambda(x,u)-1| \wedge |\lambda(x,u)-1|^2\,\nu(du) < \infty. \]
Reference measure construction: Under $Q$, $X$ is the diffusion given by (9.1) and $Y$ is an independent Poisson random measure with mean measure $\nu(du)\times dt$. The change of measure is given by (8.7):
\[ L(t) = 1 + \int_0^t L(s-)\,dM_\lambda(s) = 1 + \int_{U\times[0,t]} (\lambda(X(s),u)-1)L(s-)\,\widetilde Y(du\times ds). \]

Zakai equation 233

\[ g(X(t))L(t) = g(X(0)) + \int_0^t g(X(s))\,dL(s) + \int_0^t L(s)\,dg(X(s)) \]
\[ = g(X(0)) + \int_0^t L(s)g'(X(s))\sigma(X(s))\,dB(s) + \int_0^t L(s)Ag(X(s))\,ds + \int_{U\times[0,t]} g(X(s))(\lambda(X(s),u)-1)L(s-)\,\widetilde Y(du\times ds) \]
Assume that $\nu(U) < \infty$. Setting $\bar\lambda(x) = \int_U \lambda(x,u)\,\nu(du)$ and $Cg(x) = (\bar\lambda(x)-1)g(x)$, the unnormalized conditional distribution $\varphi(g,t) = E^Q[g(X(t))L(t)|\mathcal F^Y_t]$ satisfies
\[ \varphi(g,t) = \varphi(g,0) + \int_0^t \varphi((A-C)g,s)\,ds + \int_{U\times[0,t]} \varphi((\lambda(\cdot,u)-1)g,s-)\,Y(du\times ds). \]

Solution of the Zakai equation 234

Let $T(t)$ be the semigroup given by
\[ T(t)f(x) = E[f(X(t))e^{-\int_0^t(\bar\lambda(X(s))-1)\,ds}] \]
Suppose $Y = \sum_k \delta_{(\tau_k,u_k)}$ and $0 < \tau_1 < \cdots < \tau_m < t < \tau_{m+1}$. Then
\[ \varphi(g,t) = \varphi(T(t-\tau_m)g,\tau_m), \qquad \varphi(g,\tau_{k+1}-) = \varphi(T(\tau_{k+1}-\tau_k)g,\tau_k), \]
\[ \varphi(g,\tau_{k+1}) = \varphi(\lambda(\cdot,u_{k+1})g,\tau_{k+1}-) = \varphi(T(\tau_{k+1}-\tau_k)\lambda(\cdot,u_{k+1})g,\tau_k). \]
If $\varphi(dx,t) = \varphi(x,t)\,dx$, then
\[ \varphi(\cdot,t) = T^*(t-\tau_m)\varphi(\cdot,\tau_m), \qquad \varphi(\cdot,\tau_{k+1}-) = T^*(\tau_{k+1}-\tau_k)\varphi(\cdot,\tau_k), \]
\[ \varphi(\cdot,\tau_{k+1}) = \lambda(\cdot,u_{k+1})\varphi(\cdot,\tau_{k+1}-) = \lambda(\cdot,u_{k+1})T^*(\tau_{k+1}-\tau_k)\varphi(\cdot,\tau_k). \]

Stock prices as noisy observations of stock value 235

Stock value:
\[ S(t) = S(0) + \int_0^t \sigma S(s)\,dB(s) + \int_0^t \mu S(s)\,ds \]
$B$ standard Brownian motion

$N(t)$ number of trades up to time $t$ (Assume Poisson with parameter $\lambda$.)

$P_i = S(\tau_i) + \eta_i$ price of the $i$th trade, $\eta_i$ iid with density $\rho(z)$

$Y(t,\Gamma)$ cumulative number of trades at prices in $\Gamma$.
\[ Y(t,\Gamma) - \int_0^t\int_\Gamma \lambda\rho(z-S(s))\,dz\,ds \]
is a martingale, $\Gamma \in \mathcal B[0,\infty)$.

Filtering and Bayesian parameter estimation 236

Assume a prior on the unknown parameters $\sigma,\mu$.

Signal: $(S(t),\sigma,\mu)$

Problem: Compute $\pi_{\sigma,\mu}(D,t) = P\{(\sigma,\mu)\in D\,|\,\mathcal F^{Y,R}_t\}$

Uniformity conditions for sequences of semimartingales 237

$Y_n = M_n + A_n$, a semimartingale adapted to $\{\mathcal F^n_t\}$

$T_t(A_n)$, the total variation of $A_n$ on $[0,t]$

$[M_n]_t$, the quadratic variation of $M_n$ on $[0,t]$

Uniform tightness (UT) [JMP]: For $\mathcal S^n_0$, the collection of piecewise constant, $\mathcal F^n_t$-adapted processes,
\[ H^0_t = \bigcup_{n=1}^\infty \Big\{\Big|\int_0^t Z(s-)\,dY_n(s)\Big| : Z \in \mathcal S^n_0,\ \sup_{s\le t}|Z(s)| \le 1\Big\} \]
is stochastically bounded.

Uniformly controlled variations (UCV) [KP]: $\{T_t(A_n),\ n=1,2,\ldots\}$ is stochastically bounded, and for each $\alpha > 0$, there exist stopping times $\tau^\alpha_n$ such that
\[ \sup_n E[[M_n]_{t\wedge\tau^\alpha_n}] < \infty \quad\text{and}\quad \lim_{\alpha\to\infty}\sup_n P\{\tau^\alpha_n \le \alpha\} = 0. \]

Examples 238

Lemma 9.2 Suppose $\eta_i$ are iid, $E[\eta_i] = 0$, $\mathrm{Var}(\eta_i) = \sigma^2 < \infty$. Then
\[ Y_n(t) = \frac{1}{\sqrt n}\sum_{i=1}^{[nt]} \eta_i \]
defines a sequence satisfying the uniformity conditions and $Y_n \Rightarrow Y$, for $Y = \sigma W$.

Lemma 9.3 Suppose that $\lambda_n(t)$ is nondecreasing and satisfies $\lim_{n\to\infty}\sup_{t\le T}|\lambda_n(t) - t| = 0$ for each $T>0$. If $Y$ is an $\mathcal F_t$-semimartingale, then $Y_n = Y\circ\lambda_n$ is an $\mathcal F_{\lambda_n(t)}$-semimartingale, $\{Y_n\}$ satisfies the uniformity conditions, and $Y_n \to Y$ in the Skorohod topology.

More generally, if $\{Y_n\}$ satisfies the uniformity conditions and $Y_n \Rightarrow Y$, then $\{Y_n\circ\lambda_n\}$ satisfies the uniformity conditions and $Y_n\circ\lambda_n \Rightarrow Y$.

$\lambda_n$ can be stochastic provided $\lambda_n(t)$ is an $\mathcal F^n_t$-stopping time for each $t$.

Basic convergence theorem 239

Theorem 9.4 [5, 7]

$(X_n,Y_n)$ $\mathcal F^n_t$-adapted in $D_{\mathbb M^{km}\times\mathbb R^m}[0,\infty)$.

$Y_n = M_n + A_n$ an $\mathcal F^n_t$-semimartingale.

Assume that $\{Y_n\}$ satisfies either UT or UCV.

If $(X_n,Y_n) \Rightarrow (X,Y)$ in $D_{\mathbb M^{km}\times\mathbb R^m}[0,\infty)$ with the Skorohod topology, then
\[ \Big(X_n, Y_n, \int X_n(s-)\,dY_n(s)\Big) \Rightarrow \Big(X, Y, \int X(s-)\,dY(s)\Big) \]
in $D_{\mathbb M^{km}\times\mathbb R^m\times\mathbb R^k}[0,\infty)$.

If $(X_n,Y_n) \to (X,Y)$ in probability in the Skorohod topology on $D_{\mathbb M^{km}\times\mathbb R^m}[0,\infty)$, then
\[ \Big(X_n, Y_n, \int X_n(s-)\,dY_n(s)\Big) \to \Big(X, Y, \int X(s-)\,dY(s)\Big) \]
in probability in $D_{\mathbb M^{km}\times\mathbb R^m\times\mathbb R^k}[0,\infty)$.

"IN PROBABILITY" CANNOT BE REPLACED BY "A.S."

Proof. Suppose $|X_n - X^\epsilon_n| \le \epsilon$. Then the uniformity conditions imply
\[ \lim_{\epsilon\to 0}\sup_n P\Big\{\sup_{t\le T}\Big|\int_0^t X_n(s-)\,dY_n(s) - \int_0^t X^\epsilon_n(s-)\,dY_n(s)\Big| \ge \delta\Big\} = 0. \]
Let $\theta$ be uniform $[0,1]$ and independent of $X_n$, $\tau^{n,\epsilon}_0 = 0$, and
\[ \tau^{n,\epsilon}_{k+1} = \inf\{t > \tau^{n,\epsilon}_k : |X_n(t)-X_n(\tau^{n,\epsilon}_k)| + |Y_n(t)-Y_n(\tau^{n,\epsilon}_k)| \ge \theta\epsilon\}. \]
Define $X^\epsilon_n(t) = X_n(\tau^{n,\epsilon}_k)$, $\tau^{n,\epsilon}_k \le t < \tau^{n,\epsilon}_{k+1}$. Claim: $(X^\epsilon_n,Y_n) \Rightarrow (X^\epsilon,Y)$ and
\[ \int X^\epsilon_n\,dY_n \Rightarrow \int X^\epsilon\,dY, \]
specifically, $(X_n(\tau^{n,\epsilon}_k), Y_n(\tau^{n,\epsilon}_k)) \Rightarrow (X(\tau^\epsilon_k), Y(\tau^\epsilon_k))$.

Continuity lemma 241

Lemma 9.5 Let $Z_n$ be cadlag, $E$-valued processes, and let $\tau^n_c = \inf\{t : r(Z_n(t),Z_n(0)) \ge c\}$. If $Z_n \Rightarrow Z$, then for all but countably many $c$, $(Z_n(\tau^n_c), \tau^n_c) \Rightarrow (Z(\tau_c), \tau_c)$.

Convergence of exchangeable families 242

Lemma 9.6 For $n = 1,2,\ldots$, let $\xi^n_1,\ldots,\xi^n_{N_n}$ be exchangeable (allowing $N_n = \infty$). Let $\Xi_n$ be the empirical measure (defined as a limit if $N_n = \infty$),
\[ \Xi_n = \frac{1}{N_n}\sum_{i=1}^{N_n} \delta_{\xi^n_i}. \]
Assume

• $N_n \to \infty$

• For each $m = 1,2,\ldots$, $(\xi^n_1,\ldots,\xi^n_m) \Rightarrow (\xi_1,\ldots,\xi_m)$ in $S^m$.

Then $\{\xi_i\}$ is exchangeable and, setting $\xi^n_i = s_0 \in S$ for $i > N_n$, $(\Xi_n,\xi^n_1,\xi^n_2,\ldots) \Rightarrow (\Xi,\xi_1,\xi_2,\ldots)$ in $\mathcal P(S)\times S^\infty$, where $\Xi$ is the deFinetti measure for $\{\xi_i\}$.

If for each $m$, $(\xi^n_1,\ldots,\xi^n_m) \to (\xi_1,\ldots,\xi_m)$ in probability in $S^m$, then $\Xi_n \to \Xi$ in probability in $\mathcal P(S)$.

Proof. $\Xi_n \Rightarrow \Xi$ if and only if $E[\langle f,\Xi_n^m\rangle] \to E[\langle f,\Xi^m\rangle]$ for each $m \ge 1$ and each $f \in C(S^m)$. If $N_n = \infty$,
\[ E[\langle f,\Xi_n^m\rangle] = E[f(\xi^n_1,\ldots,\xi^n_m)] \to E[f(\xi_1,\ldots,\xi_m)] = E[\langle f,\Xi^m\rangle]. \]
If $N_n < \infty$,
\[ E[\langle f,\Xi_n^m\rangle] = \binom{N_n}{m}N_n^{-m}\,E[f(\xi^n_1,\ldots,\xi^n_m)] + O\Big(1 - \binom{N_n}{m}N_n^{-m}\Big)\|f\| \to E[f(\xi_1,\ldots,\xi_m)] = E[\langle f,\Xi^m\rangle]. \]
For $f \in C(S^{k+m})$,
\[ E[\langle f(\xi^n_1,\ldots,\xi^n_k,\cdot),\Xi_n^m\rangle] = \binom{N_n-k}{m}N_n^{-m}\,E[f(\xi^n_1,\ldots,\xi^n_{k+m})] + O\Big(1 - \binom{N_n-k}{m}N_n^{-m}\Big)\|f\|. \]

Convergence lemma for processes 244

Lemma 9.7 Let $X^n = (X^n_1,\ldots,X^n_{N_n})$ be exchangeable families of $D_E[0,\infty)$-valued random variables such that $N_n \Rightarrow \infty$ and $X^n \Rightarrow X$ in $D_E[0,\infty)^\infty$. Define
\[ \Xi_n = \frac{1}{N_n}\sum_{i=1}^{N_n}\delta_{X^n_i} \in \mathcal P(D_E[0,\infty)), \qquad \Xi = \lim_{m\to\infty}\frac1m\sum_{i=1}^m \delta_{X_i}, \]
\[ V_n(t) = \frac{1}{N_n}\sum_{i=1}^{N_n}\delta_{X^n_i(t)} \in \mathcal P(E), \qquad V(t) = \lim_{m\to\infty}\frac1m\sum_{i=1}^m \delta_{X_i(t)}. \]
Then

a) For $t_1,\ldots,t_l \notin \{t : E[\Xi\{x : x(t) \ne x(t-)\}] > 0\}$,
\[ (\Xi_n, V_n(t_1),\ldots,V_n(t_l)) \Rightarrow (\Xi, V(t_1),\ldots,V(t_l)). \]

b) If $X^n \Rightarrow X$ in $D_{E^\infty}[0,\infty)$, then $V_n \Rightarrow V$ in $D_{\mathcal P(E)}[0,\infty)$. If $X^n \to X$ in probability in $D_{E^\infty}[0,\infty)$, then $V_n \to V$ in $D_{\mathcal P(E)}[0,\infty)$ in probability.

Properties of cadlag processes 245

a) The set $D_\Xi = \{t : E[\Xi\{x : x(t)\ne x(t-)\}] > 0\}$ is at most countable.

b) If for $i\ne j$, with probability one, $X_i$ and $X_j$ have no simultaneous discontinuities, then $D_\Xi = \emptyset$ and convergence of $X^n$ to $X$ in $D_E[0,\infty)^\infty$ implies convergence in $D_{E^\infty}[0,\infty)$.

Approximating the filtering equation 246

Consider
\[ Y(t) = \int_0^t h(X(s))\,ds + \sigma W(t), \]
where $X$ is a Markov process with generator $A$.
\[ \varphi(g,t) = \varphi(g,0) + \int_0^t \varphi(Ag,s)\,ds + \frac{1}{\sigma^2}\int_0^t \varphi(gh,s)\,dY(s) \]
\[ \varphi(g,t) = \lim_{m\to\infty}\frac1m\sum_{i=1}^m g(X_i(t))L_i(t) = \int_{E\times\mathbb R} g(x)z\,V(t,dx\times dz), \]
where $X_i$ are independent copies of $X$,
\[ L_i(t) = 1 + \int_0^t \frac{h(X_i(s))}{\sigma^2}L_i(s)\,dY(s), \]
and
\[ V(t) = \lim_{m\to\infty}\frac1m\sum_{i=1}^m \delta_{(X_i(t),L_i(t))}. \]

Continuous dependence on signal 247

Suppose $X^n$ has generator $A_n$ and $X^n \Rightarrow X$, and hence $(X^n,Y) \Rightarrow (X,Y)$. Then
\[ L^n(t) = \exp\Big\{\int_0^t \frac{h(X^n(s))}{\sigma^2}\,dY(s) - \frac12\int_0^t \frac{h^2(X^n(s))}{\sigma^2}\,ds\Big\}, \]
and if $h$ is continuous, $(X^n,Y,L^n) \Rightarrow (X,Y,L)$. More generally, $((X^n_i,L^n_i),Y) \Rightarrow ((X_i,L_i),Y)$, and $V_n \Rightarrow V$.
\[ \varphi_n(g,t) \ge \int g(x)(z\wedge c)\,V_n(t,dx\times dz) \Rightarrow \int g(x)(z\wedge c)\,V(t,dx\times dz) \]
\[ E^{Q_n}\Big[\Big|\varphi_n(g,t) - \int g(x)(z\wedge c)\,V_n(t,dx\times dz)\Big|\Big] \le \|g\| E^{Q_n}\Big[\int (z - z\wedge c)\,V_n(t,dx\times dz)\Big] \]
\[ \le \|g\|\Big(1 - E^{Q_n}\Big[\int (z\wedge c)\,V_n(t,dx\times dz)\Big]\Big) \to \|g\|\Big(1 - E^{Q}\Big[\int (z\wedge c)\,V(t,dx\times dz)\Big]\Big) = \|g\| E^Q[L(t) - L(t)\wedge c] \]

Consistency of Monte Carlo approximation 248

Suppose $X(t) = X(0) + \int_0^t \sigma(X(s))\,dB(s) + \int_0^t b(X(s))\,ds$. Euler approximation: $B^h(t) = B([t/h]h)$, $A^h(t) = [t/h]h$, $Y^h(t) = Y([t/h]h)$.
\[ X^h(t) = X(0) + \int_0^t \sigma(X^h(s))\,dB^h(s) + \int_0^t b(X^h(s))\,dA^h(s) \]
\[ X^h_i(t) = X_i(0) + \int_0^t \sigma(X^h_i(s))\,dB^h_i(s) + \int_0^t b(X^h_i(s))\,dA^h(s) \]
\[ L^h_i(t) = 1 + \int_0^t \frac{h(X^h_i(s))}{\sigma^2}L^h_i(s)\,dY^h(s) \]
Claim: As $h\to 0$, $((X^h_i,L^h_i),Y^h) \to ((X_i,L_i),Y)$ in probability, and hence, as $h\to 0$ and $n\to\infty$,
\[ \frac1n\sum_{i=1}^n \delta_{(X^h_i,L^h_i)} \to \lim_{m\to\infty}\frac1m\sum_{i=1}^m \delta_{(X_i,L_i)} \quad\text{and}\quad \frac1n\sum_{i=1}^n g(X^h_i(t))L^h_i(t) \to \varphi(g,t). \]

Finite dimensional approximation 249

Assume $E_n \subset E$ is finite, $X^n(t) \in E_n$, $t\ge 0$, and
\[ A_nf(z) = \sum_{y\in E_n} \lambda(z,y)(f(y)-f(z)). \]
Let $X^n_i$ be independent copies of $X^n$, let
\[ L^n_i(t) = 1 + \int_0^t \frac{h(X^n_i(s))}{\sigma^2}L^n_i(s)\,dY(s), \]
and define
\[ \varphi_n(g,t) = \lim_{m\to\infty}\frac1m\sum_{i=1}^m g(X^n_i(t))L^n_i(t). \]
Then
\[ \varphi_n(g,t) = \varphi_n(g,0) + \int_0^t \varphi_n(A_ng,s)\,ds + \frac{1}{\sigma^2}\int_0^t \varphi_n(gh,s)\,dY(s) \]

$\varphi_n$ given by finite dimensional system 250

$g_x = \mathbf 1_{\{x\}}$, $x \in E_n$, $\varphi_n(x,t) = \varphi_n(g_x,t)$. Then
\[ \varphi_n(x,t) = \varphi_n(x,0) + \int_0^t \Big(\sum_{z\ne x}\lambda(z,x)\varphi_n(z,s) - \sum_{y\ne x}\lambda(x,y)\varphi_n(x,s)\Big)ds + \frac{1}{\sigma^2}\int_0^t h(x)\varphi_n(x,s)\,dY(s). \]
If $X^n \Rightarrow X$, then $\varphi_n(g,t) \to \varphi(g,t)$.
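The finite-dimensional system is a linear SDE in $\varphi_n$ and can be stepped with Euler against an observation path; a minimal sketch, with a hypothetical two-state rate matrix and $h(x) = x$:

```python
import numpy as np

rng = np.random.default_rng(5)

# hypothetical two-state chain: states and rate matrix lambda(x, y)
states = np.array([0.0, 1.0])                   # h(x) = x
Lam = np.array([[-0.5, 0.5], [1.0, -1.0]])      # generator A_n
sig, T, m = 0.4, 5.0, 5000
dt = T / m

# simulate the chain and Y(t) = int h(X) ds + sig W, filter along the way
x, phi = 0, np.array([0.5, 0.5])
for _ in range(m):
    dY = states[x] * dt + sig * np.sqrt(dt) * rng.normal()
    # Euler step of d phi = Lam^T phi dt + sig^{-2} h phi dY
    phi = phi + (Lam.T @ phi) * dt + (states / sig**2) * phi * dY
    if rng.random() < -Lam[x, x] * dt:          # chain transition
        x = 1 - x
print("pi_n(1, T) =", phi[1] / phi.sum(), "  X(T) =", x)
```

Normalizing only at the end keeps the unnormalized $\varphi_n$ visible; in long runs one would renormalize periodically for numerical stability.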


10. Averaging

• Convergence of random measures

• Diffusion with rapidly varying coefficients

• Markov process with two time scales

• Diffusion approximation for random evolutions

Convergence of random measures 252

$\mathcal M(S)$ the space of finite measures on $S$ with the weak topology: $\mu_n \to \mu$ if and only if $\int f\,d\mu_n \to \int f\,d\mu$, $f \in C(S)$.

Prohorov metric:
\[ \rho(\mu,\nu) = \inf\{\epsilon > 0 : \mu(B) \le \nu(B^\epsilon)+\epsilon,\ \nu(B) \le \mu(B^\epsilon)+\epsilon,\ B \in \mathcal B(S)\}, \tag{10.1} \]
where $B^\epsilon = \{s \in S : \inf_{y\in B} d(s,y) < \epsilon\}$. The following lemma is a simple consequence of Prohorov's theorem.

Lemma 10.1 Let $\{\Gamma_n\}$ be a sequence of $\mathcal M(S)$-valued random variables. Then $\{\Gamma_n\}$ is relatively compact if and only if $\{\Gamma_n(S)\}$ is relatively compact as a family of $\mathbb R$-valued random variables and for each $\epsilon > 0$, there exists a compact $K \subset S$ such that $\sup_n P\{\Gamma_n(K^c) > \epsilon\} < \epsilon$.

Proof. The proof of necessity is left to the reader. To prove sufficiency, fix $\eta > 0$. Then there exist compact sets $K_k \subset S$ such that
\[ \sup_n P\{\Gamma_n(K_k^c) > \eta 2^{-(k+1)}\} < \eta 2^{-(k+1)} \]
and a constant $c$ such that
\[ \sup_n P\{\Gamma_n(S) > c\} < \frac{\eta}{2}. \]
Define $\mathcal K = \{\mu \in \mathcal M(S) : \mu(K_k^c) \le \eta 2^{-(k+1)},\ k=1,2,\ldots,\ \mu(S) \le c\}$, and observe that $P\{\Gamma_n \in \mathcal K\} \ge 1 - \eta$. Prohorov's theorem implies that $\mathcal K$ is a compact subset of $\mathcal M(S)$, and consequently, again by Prohorov's theorem, $\{\Gamma_n\}$ is relatively compact.

A convergence lemma 254

Let $\mathcal L(S)$ be the space of measures on $[0,\infty)\times S$ such that $\mu([0,t]\times S) < \infty$ for each $t > 0$, and let $\mathcal L_m(S) \subset \mathcal L(S)$ be the subspace on which $\mu([0,t]\times S) = t$. For $\mu \in \mathcal L(S)$, let $\mu^t$ denote the restriction of $\mu$ to $[0,t]\times S$. Let $\rho_t$ denote the Prohorov metric on $\mathcal M([0,t]\times S)$, and define $\rho$ on $\mathcal L(S)$ by
\[ \rho(\mu,\nu) = \int_0^\infty e^{-t}\,1\wedge\rho_t(\mu^t,\nu^t)\,dt, \]
that is, $\mu_n$ converges in $\rho$ if and only if $\mu_n^t$ converges weakly for almost every $t$.

Lemma 10.2 Let $\{(x_n,\mu_n)\} \subset D_E[0,\infty)\times\mathcal L(S)$, and $(x_n,\mu_n) \to (x,\mu)$. Let $h \in C(E\times S)$. Define
\[ u_n(t) = \int_{[0,t]\times S} h(x_n(s),y)\,\mu_n(ds\times dy), \qquad u(t) = \int_{[0,t]\times S} h(x(s),y)\,\mu(ds\times dy), \]
$z_n(t) = \mu_n([0,t]\times S)$, and $z(t) = \mu([0,t]\times S)$.

a) If $x$ is continuous on $[0,t]$ and $\lim_{n\to\infty} z_n(t) = z(t)$, then $\lim_{n\to\infty} u_n(t) = u(t)$.

b) If $(x_n,z_n,\mu_n) \to (x,z,\mu)$ in $D_{E\times\mathbb R}[0,\infty)\times\mathcal L(S)$, then $(x_n,z_n,u_n,\mu_n) \to (x,z,u,\mu)$ in $D_{E\times\mathbb R\times\mathbb R}[0,\infty)\times\mathcal L(S)$. In particular, $\lim_{n\to\infty}u_n(t) = u(t)$ at all points of continuity of $z$.

c) The continuity assumption on $h$ can be replaced by the assumption that $h$ is continuous a.e. $\nu_t$ for each $t$, where $\nu_t \in \mathcal M(E\times S)$ is the measure determined by $\nu_t(A\times B) = \mu\{(s,y) : x(s)\in A,\ s\le t,\ y\in B\}$.

d) In both (a) and (b), the boundedness assumption on $h$ can be replaced by the assumption that there exists a nonnegative convex function $\psi$ on $[0,\infty)$ satisfying $\lim_{r\to\infty}\psi(r)/r = \infty$ such that
\[ \sup_n \int_{[0,t]\times S}\psi(|h(x_n(s),y)|)\,\mu_n(ds\times dy) < \infty \]
for each $t > 0$.

Proof. Let $h \in C(E\times S)$. For each $\epsilon > 0$ and $t > 0$, there exists a compact $K \subset S$ with $\sup_n \mu_n([0,t]\times K^c) \le \epsilon$. If $x$ is continuous, then
\[ \lim_{n\to\infty}\sup_{y\in K,\,s\le t}|h(x_n(s),y) - h(x(s),y)| = 0, \]
and if $z_n(t) \to z(t)$, it follows that
\[ \limsup_{n\to\infty}\Big|\int_{[0,t]\times S} h(x_n(s),y)\,\mu_n(ds\times dy) - \int_{[0,t]\times S} h(x(s),y)\,\mu(ds\times dy)\Big| \le 2\|h\|\epsilon. \]
If $(x_n,z_n) \to (x,z)$ in the Skorohod topology, then there exist continuous, strictly increasing mappings $\eta_n$ of $[0,\infty)$ onto $[0,\infty)$ such that $\eta_n(t) \to t$ for each $t$ and $(x_n\circ\eta_n, z_n\circ\eta_n) \to (x,z)$ uniformly on bounded intervals. Define $\tilde\mu_n$ so that $\tilde\mu_n([0,t]\times H) = \mu_n([0,\eta_n(t)]\times H)$ and observe that $\tilde\mu_n \to \mu$ in $\mathcal L(S)$. But the uniformity of the convergence of $x_n\circ\eta_n$ to $x$ and $z_n\circ\eta_n$ to $z$ implies
\[ \int_{[0,\eta_n(t)]\times S} h(x_n(s),y)\,\mu_n(ds\times dy) = \int_{[0,t]\times S} h(x_n\circ\eta_n(s),y)\,\tilde\mu_n(ds\times dy) \to \int_{[0,t]\times S} h(x(s),y)\,\mu(ds\times dy) \]
for each fixed $t$.

Let $u_n(t)$ denote the integral on the left. To prove uniformity, it is sufficient to show that for any sequence satisfying $t_n\to t$, $u_n(t_n) - u(t_n) \to 0$. But this convergence holds if for any sequence satisfying $t_n \ge t$ and $t_n\to t$, we have $u_n(t_n) \to u(t)$, and for any sequence satisfying $t_n < t$ and $t_n \to t$, we have $u_n(t_n) \to u(t-)$. Since for all $r,s$, $|u_n(s) - u_n(r)| \le \|h\|\,|z_n\circ\eta_n(s) - z_n\circ\eta_n(r)|$, the pointwise convergence of $u_n$ and the uniformity of the convergence of $z_n\circ\eta_n$ imply, in the first case, that
\[ \limsup_{n\to\infty}|u_n(t_n) - u(t)| = \limsup_{n\to\infty}|u_n(t_n)-u_n(t)| \le \limsup_{n\to\infty}\|h\|\,|z_n\circ\eta_n(t_n) - z_n\circ\eta_n(t)| \le \limsup_{n\to\infty}\|h\|\,|z(t_n)-z(t)| = 0, \]
and in the second case, that
\[ \limsup_{n\to\infty}|u_n(t_n) - u(t-)| = \lim_{\epsilon\to 0}\limsup_{n\to\infty}|u_n(t_n)-u_n(t-\epsilon)| \le \lim_{\epsilon\to 0}\limsup_{n\to\infty}\|h\|\,|z_n\circ\eta_n(t_n) - z_n\circ\eta_n(t-\epsilon)| \le \lim_{\epsilon\to 0}\limsup_{n\to\infty}\|h\|\,|z(t_n)-z(t-\epsilon)| = 0. \]

Diffusion with rapidly varying coefficients 258

Let $Y$ be a stationary process. An event of the form $\{Y \in \Gamma\}$ is invariant if $\{Y(t+\cdot)\in\Gamma\}$ does not depend on $t$. A stationary process $Y$ is ergodic if every invariant event has probability zero or one. If $Y$ is ergodic, the ergodic theorem implies
\[ \lim_{T\to\infty}\frac1T\int_0^T g(Y(s))\,ds = \int g\,d\mu, \]
where $\mu$ is the marginal distribution of $Y$.

Let $Y$ be an ergodic stationary process, independent of $W$, and
\[ X_n(t) = X(0) + \int_0^t \sigma(X_n(s),Y(ns))\,dW(s) + \int_0^t b(X_n(s),Y(ns))\,ds. \]
What happens as $n\to\infty$?

Note that
\[ \int_t^{t+\delta} b(x,Y(ns))\,ds = \frac1n\int_{nt}^{n(t+\delta)} b(x,Y(u))\,du \to \delta\int_E b(x,y)\,\mu(dy), \]
suggesting that the drift term is asymptotic to $\int_0^t \bar b(X_n(s))\,ds$, where $\bar b(x) = \int_E b(x,y)\,\mu(dy)$.
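The heuristic can be checked directly: for an ergodic two-state Markov chain $Y$, the time integral $\int_0^1 b(x,Y(ns))\,ds$ flattens toward $\bar b(x)$ as $n$ grows. A sketch with hypothetical rates and drift (chosen so $\bar b(x) = 0$ at $x = 1$):

```python
import numpy as np

rng = np.random.default_rng(6)
b = lambda x, y: (-x if y == 0 else 2.0)        # illustrative drift b(x, y)
rates = (1.0, 2.0)                              # q01, q10 for Y on {0, 1}

def time_avg_b(x, T):
    """(1/T) int_0^T b(x, Y(u)) du for the two-state chain, via sojourn times."""
    t, y, acc = 0.0, 0, 0.0
    while t < T:
        hold = rng.exponential(1.0 / rates[y])
        acc += b(x, y) * min(hold, T - t)
        t += hold
        y = 1 - y
    return acc / T

x = 1.0
pi0 = rates[1] / (rates[0] + rates[1])          # stationary law: pi = (2/3, 1/3)
bbar = pi0 * b(x, 0) + (1 - pi0) * b(x, 1)
for n in [1, 10, 100, 1000, 10000]:
    # int_0^1 b(x, Y(ns)) ds equals the time average of b over [0, n]
    print(f"n={n:6d}  {time_avg_b(x, n):+.4f}   bbar = {bbar:+.4f}")
```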

Martingale approach 259

Let
\[ Af(x,y) = \frac12 a(x,y)f''(x) + b(x,y)f'(x), \]
and, with $Y_n(s) = Y(ns)$,
\[ \Gamma_n([0,t]\times B) = \int_0^t \mathbf 1_B(Y_n(s))\,ds. \]
Then, assuming relative compactness of $\{X_n\}$ and observing that $\Gamma_n \to m\times\mu$ ($m$ Lebesgue measure),
\[ f(X_n(t)) - \int_0^t Af(X_n(s),Y_n(s))\,ds = f(X_n(t)) - \int_{[0,t]\times E} Af(X_n(s),y)\,\Gamma_n(ds\times dy) \]
\[ \Rightarrow f(X(t)) - \int_0^t\int_E Af(X(s),y)\,\mu(dy)\,ds, \]
so any limit point is a solution of the martingale problem for
\[ \bar Af(x) = \int_E Af(x,y)\,\mu(dy). \]

Conditions for tightness 260

Theorem 10.3 For $n=1,2,\ldots$, let $X_n$ be a cadlag, $E$-valued process adapted to a filtration $\{\mathcal F^n_t\}$, and let $D \subset C(E)$ be closed under multiplication and separate points. Suppose

a) (Compact containment) For each $\epsilon > 0$ and $T > 0$, there exists a compact $K_{\epsilon,T}\subset E$ such that
\[ \inf_n P\{X_n(t)\in K_{\epsilon,T},\ 0\le t\le T\} \ge 1-\epsilon. \]

b) For each $n=1,2,\ldots$, $f\in D$, $T>0$, there exists $\gamma^n_{f,T}(\delta)\ge 0$, $\delta\ge 0$, such that
\[ |E[f(X_n(t+u)) - f(X_n(t))|\mathcal F^n_t]| \le E[\gamma^n_{f,T}(\delta)|\mathcal F^n_t], \quad 0\le t\le T,\ 0\le u\le\delta, \]
and
\[ \lim_{\delta\to 0}\limsup_{n\to\infty} E[\gamma^n_{f,T}(\delta)] = 0. \]
Then $\{X_n\}$ is relatively compact.

Proof. Note that
\[ |E[(f(X_n(t+u)) - f(X_n(t)))^2|\mathcal F^n_t]| \le |E[f^2(X_n(t+u)) - f^2(X_n(t))|\mathcal F^n_t]| + 2\|f\|\,|E[f(X_n(t+u)) - f(X_n(t))|\mathcal F^n_t]| \]
\[ \le E[\gamma^n_{f^2,T}(\delta)|\mathcal F^n_t] + 2\|f\|E[\gamma^n_{f,T}(\delta)|\mathcal F^n_t]. \]

Verification of compact containment 262

\[ |X_n(t)|^2 = |X_n(0)|^2 + \int_0^t 2X_n(s)^T\sigma(X_n(s),Y_n(s))\,dW(s) + \int_0^t \Big(\sum_{i=1}^d a_{ii}(X_n(s),Y_n(s)) + 2X_n(s)\cdot b(X_n(s),Y_n(s))\Big)ds \]
Simple condition:
\[ \sum_{i=1}^d a_{ii}(x,y) + 2x\cdot b(x,y) \le K_1 + K_2|x|^2 + K_3|y|^2, \qquad \tau_c = \inf\{t : |X_n(t)| \ge c\}. \]

Estimates for compact containment 263

Assume that $\sup_n\sup_t E[|Y_n(t)|^2] < \infty$.
\[ E[|X_n(t\wedge\tau_c)|^2] \le E[|X_n(0)|^2] + \int_0^t \big(K_1 + K_2E[|X_n(s\wedge\tau_c)|^2] + K_3E[|Y_n(s)|^2]\big)ds \]

Estimates on increments 264

For $t\le T$, $u\le\delta$, $p^{-1}+q^{-1} = 1$,
\[ |E[f(X_n(t+u)) - f(X_n(t))|\mathcal F^n_t]| = \Big|E\Big[\int_t^{t+u} Af(X_n(s),Y_n(s))\,ds\Big|\mathcal F^n_t\Big]\Big| \le \delta^{p^{-1}} E\Big[\Big(\int_0^{T+\delta}|Af(X_n(s),Y_n(s))|^q\,ds\Big)^{q^{-1}}\Big|\mathcal F^n_t\Big] \]
Example:
\[ X_n(t) = X_n(0) + \int_0^t \sigma(Y_n(s))X_n(s)\,dW(s) + \int_0^t b(Y_n(s))X_n(s)\,ds, \]
so for $f \in C^2_c(\mathbb R)$,
\[ |Af(X_n(s),Y_n(s))|^q \le K(\sigma^2(Y_n(s)) + |b(Y_n(s))|)^q. \]

Markov process with two time scales 265

Suppose
\[ X_n(t) = X(0) + \int_0^t \sigma(X_n(s),Y_n(s))\,dW(s) + \int_0^t b(X_n(s),Y_n(s))\,ds \]
\[ Y_n(t) = Y(0) + \int_0^t \sqrt{\beta_n}\,\alpha(X_n(s),Y_n(s))\,dW(s) + \int_0^t \beta_n c(X_n(s),Y_n(s))\,ds \]
For simplicity, assume all coefficients are bounded. Then
\[ f(X_n(t)) - \int_0^t Af(X_n(s),Y_n(s))\,ds \quad\text{and}\quad g(Y_n(t)) - \int_0^t \beta_n Bg(X_n(s),Y_n(s))\,ds \]
are martingales for $f\in C^2_c(\mathbb R^d)$, $g\in C^2_c(\mathbb R^m)$, with $A$, $B$ appropriately defined differential operators.

Abstract setting 266

Suppose that for $A \subset C(E_1)\times C(E_1\times E_2)$,
\[ f(X_n(t)) - \int_0^t Af(X_n(s),Y_n(s))\,ds + \epsilon^f_n(t) \tag{10.2} \]
is an $\mathcal F^n_t$-martingale and $E[|\epsilon^f_n(t)|]\to 0$, $t\ge 0$, and that for $B \subset C(E_2)\times C(E_1\times E_2)$,
\[ g(Y_n(t)) - \int_0^t \beta_n Bg(X_n(s),Y_n(s))\,ds + \delta^g_n(t) \]
is an $\mathcal F^n_t$-martingale, $\beta_n\to\infty$, and for each $t\ge 0$,
\[ \lim_{n\to\infty} E[\beta_n^{-1}|\delta^g_n(t)|] = 0. \]
Then if $\{(X_n,\Gamma_n)\}$ is relatively compact, for any limit point $(X,\Gamma)$,
\[ \int_{[0,t]\times E_2} Bg(X(s),y)\,\Gamma(ds\times dy) \tag{10.3} \]
is a martingale, and since (10.3) is continuous and of bounded variation,
\[ \int_{[0,t]\times E_2} Bg(X(s),y)\,\Gamma(ds\times dy) = 0, \quad t\ge 0,\ a.s. \tag{10.4} \]

Local stationarity 267

Suppose that there exists a countable subset $D \subset \mathcal D(B)$ such that the bounded, pointwise closure of $\{(g,Bg) : g\in D\}$ is the same as the bounded, pointwise closure of $\{(g,Bg) : g\in\mathcal D(B)\}$. (For example, if $\mathcal D(B) = C^\infty_c(\mathbb R^d)$ and $B$ is a second order differential operator with locally bounded coefficients.) Writing $\Gamma(ds\times dy) = \gamma_s(dy)\,ds$, (10.4) gives
\[ \int_0^t\int_{E_2} Bg(X(s),y)\,\gamma_s(dy)\,ds = 0 \]
for all $t$ a.s., and hence
\[ \int_{E_2} Bg(X(s),y)\,\gamma_s(dy) = 0 \tag{10.5} \]
a.e. $m$ a.s. Consequently, with probability one, there exists a single set $Q \subset [0,\infty)$ with $m(Q) = 0$ such that (10.5) holds for all $g\in D$ and all $s\in[0,\infty)\setminus Q$. But the choice of $D$ ensures that (10.5) holds for all $g\in\mathcal D(B)$ and all $s\in[0,\infty)\setminus Q$.

Limiting martingale problem 268

Define $B_x : \mathcal D(B) \to C(E_2)$ by $B_xg(y) = Bg(x,y)$. Suppose that there is a unique measure $\pi_x \in \mathcal P(E_2)$ satisfying
\[ \int_{E_2} B_xg\,d\pi_x = 0, \quad g\in\mathcal D(B). \]
(If $B_x$ is the generator for an $E_2$-valued Markov process, this assumption is essentially the assertion that there is a unique stationary distribution corresponding to $B_x$.) Then we can take $\gamma_s = \pi_{X(s)}$, and defining $C$ on $\mathcal D(A)$ by
\[ Cf(x) = \int_{E_2} Af(x,y)\,\pi_x(dy), \]
it follows that $X$ is a solution of the martingale problem for $C$.

Averaging for stochastic equations 269

$W$ a standard Brownian motion in $\mathbb R$

$X_n(0)$ independent of $W$

$Y$ a stochastic process with state space $U$, independent of $W$ and $X_n(0)$

Define $Y_n(t) = Y(nt)$.
\[ X_n(t) = X_n(0) + \int_0^t \sigma(X_n(s),Y_n(s))\,dW(s) + \int_0^t b(X_n(s),Y_n(s))\,ds \]
Define $M_n(A,t) = \int_0^t \mathbf 1_A(Y_n(s))\,dW(s)$ and $V_n(A,t) = \int_0^t \mathbf 1_A(Y_n(s))\,ds$, so that
\[ X_n(t) = X_n(0) + \int_{U\times[0,t]} \sigma(X_n(s),u)\,M_n(du\times ds) + \int_{U\times[0,t]} b(X_n(s),u)\,V_n(du\times ds) \]

Convergence of Driving Processes 270

Define
\[ M_n(\varphi,t) = \int_0^t \varphi(Y_n(s))\,dW(s), \qquad V_n(\varphi,t) = \int_0^t \varphi(Y_n(s))\,ds. \]
Assume that
\[ \frac1t\int_0^t \varphi(Y(s))\,ds \to \int_U \varphi(u)\,\nu(du) \]
in probability for each $\varphi\in C(U)$.

Observe that
\[ [M_n(\varphi_1,\cdot),M_n(\varphi_2,\cdot)]_t = \int_0^t \varphi_1(Y_n(s))\varphi_2(Y_n(s))\,ds \to t\int_U \varphi_1(u)\varphi_2(u)\,\nu(du). \]

Limiting Equation 271

The functional central limit theorem for martingales implies $M_n(\varphi,t) \Rightarrow M(\varphi,t)$, where $M$ is Gaussian with
\[ E[M(\varphi_1,t)M(\varphi_2,s)] = t\wedge s\int_U \varphi_1(u)\varphi_2(u)\,\nu(du), \]
and $V_n(\varphi,t) \to t\int_U \varphi(u)\,\nu(du)$.

Can we conclude that $X_n \Rightarrow X$ satisfying
\[ X(t) = X(0) + \int_0^t\int_U \sigma(X(s),u)\,M(du\times ds) + \int_0^t\int_U b(X(s),u)\,\nu(du)\,ds\,? \]

Construction of a partition of unity 272

Lemma 10.4 Let $(S,d)$ be a complete, separable metric space, and let $\{x_k\}$ be a countable dense subset of $S$. Then for each $\epsilon > 0$, there exists a sequence $\{\psi^\epsilon_k\}\subset C(S)$ such that $\mathrm{supp}\,\psi^\epsilon_k \subset B_\epsilon(x_k)$, $0\le\psi^\epsilon_k\le 1$, $|\psi^\epsilon_k(x)-\psi^\epsilon_k(y)| \le \frac4\epsilon d(x,y)$, and for each compact $K\subset S$, there exists $N_K < \infty$ such that $\sum_{k=1}^{N_K}\psi^\epsilon_k(x) = 1$, $x\in K$. In particular, $\sum_{k=1}^\infty \psi^\epsilon_k(x) = 1$ for all $x\in S$.

Proof. Fix $\epsilon > 0$. Let $\psi_k(x) = (1 - \frac2\epsilon d(x,B_{\epsilon/2}(x_k)))\vee 0$. Then $0\le\psi_k\le 1$, $\psi_k(x) = 1$, $x\in B_{\epsilon/2}(x_k)$, and $\psi_k(x) = 0$, $x\notin B_\epsilon(x_k)$. Note also that $|\psi_k(x)-\psi_k(y)| \le \frac2\epsilon d(x,y)$. Define $\psi^\epsilon_1 = \psi_1$, and for $k>1$, $\psi^\epsilon_k = \max_{i\le k}\psi_i - \max_{i\le k-1}\psi_i$. Clearly, $0\le\psi^\epsilon_k\le\psi_k$ and $\sum_{i=1}^k \psi^\epsilon_i = \max_{i\le k}\psi_i$. In particular, for compact $K\subset S$, there exists $N_K<\infty$ such that $K\subset \cup_{k=1}^{N_K}B_{\epsilon/2}(x_k)$ and hence $\sum_{k=1}^{N_K}\psi^\epsilon_k(x) = 1$ for $x\in K$. Finally,
\[ |\psi^\epsilon_k(x) - \psi^\epsilon_k(y)| \le 2\max_{i\le k}|\psi_i(x)-\psi_i(y)| \le \frac4\epsilon d(x,y). \]
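The construction translates directly into code. A small sketch on $S = \mathbb R$ with a grid of centers (the grid and $\epsilon$ are arbitrary illustrative choices), checking numerically that the telescoped bumps sum to one on a compact set:

```python
import numpy as np

eps = 0.5
centers = np.arange(-3.0, 3.01, 0.2)             # dense-enough grid of x_k

def psi(k, x):
    # psi_k(x) = (1 - (2/eps) * dist(x, B_{eps/2}(x_k))) v 0
    dist = np.maximum(np.abs(x - centers[k]) - eps / 2, 0.0)
    return np.maximum(1.0 - (2.0 / eps) * dist, 0.0)

def psi_eps(k, x):
    # telescoping: psi^eps_k = max_{i<=k} psi_i - max_{i<=k-1} psi_i
    m_k = np.max([psi(i, x) for i in range(k + 1)], axis=0)
    if k == 0:
        return m_k
    m_prev = np.max([psi(i, x) for i in range(k)], axis=0)
    return m_k - m_prev

x = np.linspace(-1.0, 1.0, 201)                   # a compact set K
total = sum(psi_eps(k, x) for k in range(len(centers)))
print("min/max of sum over K:", total.min(), total.max())   # both = 1
```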

Construction of Stochastic Integral for SMRM 273

$H$ a Banach space

$Y$ is an $H^\#$-semimartingale if

• For each $\varphi\in H$, $Y(\varphi,\cdot)$ is an $\mathcal F_t$-semimartingale.

• $Y(\sum_{k=1}^m a_k\varphi_k,\cdot) = \sum_{k=1}^m a_kY(\varphi_k,\cdot)$ a.s.

$\{\varphi_k\}$ dense in $H$, $\{\psi^\epsilon_k\}$ partition of unity with $\mathrm{supp}(\psi^\epsilon_k)\subset B_\epsilon(\varphi_k)$.

For $X$ cadlag, $H$-valued, $\mathcal F_t$-adapted, define
\[ X^\epsilon(t) = \sum_k \psi^\epsilon_k(X(t))\varphi_k. \]
Then $X^\epsilon$ is cadlag and $\mathcal F_t$-adapted and $\|X - X^\epsilon\|_H \le \epsilon$.

Define
\[ X^\epsilon_-\cdot Y(t) = \sum_k \int_0^t \psi^\epsilon_k(X(s-))\,dY(\varphi_k,s) \quad \Big(= \int_{U\times[0,t]} X^\epsilon(s-,u)\,Y(du\times ds)\Big). \]

Good Integrator Condition 274

Let $\mathcal S$ be the collection of cadlag, adapted processes of the form $Z(t) = \sum_{k=1}^m \xi_k(t)\varphi_k$. Define
\[ Z_-\cdot Y(t) = \sum_k\int_0^t \xi_k(s-)\,dY(\varphi_k,s). \]
Basic assumption:
\[ \mathcal H_t = \Big\{\sup_{s\le t}|Z_-\cdot Y(s)| : Z\in\mathcal S,\ \sup_{s\le t}\|Z(s)\|_H \le 1\Big\} \]
is stochastically bounded.

Definition of integral 275

Stochastic boundedness of $\mathcal H_t$ implies there exists $K(t,\delta)$ such that
\[ P\{\sup_{s\le t}|Z_-\cdot Y(s)| \ge K(t,\delta)\} \le \delta \]
for all $Z\in\mathcal S$ satisfying $\sup_{s\le t}\|Z(s)\|_H \le 1$.

For $Y$ an $H^\#$-semimartingale satisfying the good integrator condition, $X$ a cadlag, adapted, $H$-valued process, and $X^\epsilon$ as above,
\[ X_-\cdot Y \equiv \lim_{\epsilon\to 0} X^\epsilon_-\cdot Y \]
exists in the sense that for all $\eta > 0$,
\[ \lim_{\epsilon\to 0} P\{\sup_{s\le t}|X_-\cdot Y(s) - X^\epsilon_-\cdot Y(s)| > \eta\} = 0. \]
Since $\|X^{\epsilon_1}(s) - X^{\epsilon_2}(s)\|_H \le \epsilon_1+\epsilon_2$,
\[ P\{\sup_{s\le t}|X^{\epsilon_1}_-\cdot Y(s) - X^{\epsilon_2}_-\cdot Y(s)| \ge (\epsilon_1+\epsilon_2)K(t,\delta)\} \le \delta, \]
and $X^\epsilon_-\cdot Y$ is Cauchy in probability.

Convergence for $H^\#$-semimartingales 276

$H$ a separable Banach space of functions on $U$

$Y_n$ an $\{\mathcal F^n_t\}$-$H^\#$-semimartingale (for each $\varphi\in H$, $Y_n(\varphi,\cdot)$ is an $\mathcal F^n_t$-semimartingale)

$X_n$ cadlag, $H$-valued processes

$(X_n,Y_n) \Rightarrow (X,Y)$ if
\[ (X_n, Y_n(\varphi_1,\cdot),\ldots,Y_n(\varphi_m,\cdot)) \Rightarrow (X, Y(\varphi_1,\cdot),\ldots,Y(\varphi_m,\cdot)) \]
in $D_{H\times\mathbb R^m}[0,\infty)$ for each choice of $\varphi_1,\ldots,\varphi_m\in H$.

Convergence for Stochastic Integrals 277

Let
\[ \mathcal H_{n,t} = \Big\{\sup_{s\le t}|Z_-\cdot Y_n(s)| : Z\in\mathcal S_n,\ \sup_{s\le t}\|Z(s)\|_H\le 1\Big\}. \]
Definition: $\{Y_n\}$ is uniformly tight if $\cup_n\mathcal H_{n,t}$ is stochastically bounded for each $t$.

Theorem 10.5 Assume that $\{Y_n\}$ is uniformly tight. If $(X_n,Y_n) \Rightarrow (X,Y)$, then there is a filtration $\{\mathcal F_t\}$ such that $Y$ is an $\mathcal F_t$-adapted, standard, $H^\#$-semimartingale, $X$ is $\mathcal F_t$-adapted, and
\[ (X_n, Y_n, X_{n-}\cdot Y_n) \Rightarrow (X, Y, X_-\cdot Y). \]
If $(X_n,Y_n)\to(X,Y)$ in probability, then $(X_n,Y_n,X_{n-}\cdot Y_n) \to (X,Y,X_-\cdot Y)$ in probability.

Averaging theorem 278

\[ X_n(t) = X_n(0) + \int_0^t \sigma(X_n(s),Y_n(s))\,dW(s) + \int_0^t b(X_n(s),Y_n(s))\,ds \]
\[ = X_n(0) + \int_{U\times[0,t]} \sigma(X_n(s),u)\,M_n(du\times ds) + \int_{U\times[0,t]} b(X_n(s),u)\,V_n(du\times ds), \]
where $M_n(A,t) = \int_0^t \mathbf 1_A(Y_n(s))\,dW(s)$ and $V_n(A,t) = \int_0^t \mathbf 1_A(Y_n(s))\,ds$.

Then $M_n(\varphi,t) \Rightarrow M(\varphi,t)$, where $M$ is Gaussian with
\[ E[M(\varphi_1,t)M(\varphi_2,s)] = t\wedge s\int_U \varphi_1(u)\varphi_2(u)\,\nu(du), \]
and $V_n(\varphi,t) \to t\int_U\varphi(u)\,\nu(du)$.

Let $H = L^2(\nu)$. If $x\in\mathbb R \to \sigma(x,\cdot)\in H$ and $x\in\mathbb R \to b(x,\cdot)\in L^1(\nu)$ are bounded and continuous, then $X_n \Rightarrow X$ satisfying
\[ X(t) = X(0) + \int_0^t\int_U \sigma(X(s),u)\,M(du\times ds) + \int_0^t\int_U b(X(s),u)\,\nu(du)\,ds. \]

Diffusion approximation for random evolutions 279

Consider a sequence of stochastic ordinary differential equations in $\mathbb R^k$ of the form
\[ \dot X_n(t) = G(X_n(t),Y(n^2t)) + nH(X_n(t),Y(n^2t)), \tag{10.6} \]
where $Y$ is a stochastic process representing the noise in the system.

Let $Y$ be a continuous time Markov chain with state space $E = \{1,\ldots,m\}$ and intensity matrix $Q$.

Assume that $Q$ is irreducible and hence that there is a unique stationary distribution $\pi$.

Suppose $\sum_\beta H(x,\beta)\pi_\beta = 0$ and define $V_n$ and $W_n$ by
\[ V^\beta_n(t) = \int_0^t \mathbf 1_\beta(Y(n^2s))\,ds \to \pi_\beta t \]
and
\[ W^\beta_n(t) = n(V^\beta_n(t) - \pi_\beta t) = n\int_0^t (\mathbf 1_\beta(Y(n^2s)) - \pi_\beta)\,ds. \]

Equation driven by Vn and Wn 280

Then (10.6) becomes
\[ X_n(t) = X_n(0) + \sum_{\beta=1}^m \int_0^t G(X_n(s),\beta)\,dV^\beta_n(s) + \sum_{\beta=1}^m \int_0^t H(X_n(s),\beta)\,dW^\beta_n(s). \]
$W_n$ converges to a Brownian motion, but does not satisfy the uniformity conditions of the stochastic integral convergence theorem.

Let $h_\beta$ satisfy
\[ \sum_{k=1}^m q_{jk}h_\beta(k) = \mathbf 1_\beta(j) - \pi_\beta \]
($h_\beta$ exists by the uniqueness of $\pi$), and note that $Y_n$ defined by
\[ Y^\beta_n(t) = n\int_0^t(\mathbf 1_\beta(Y(n^2s)) - \pi_\beta)\,ds - \frac1n h_\beta(Y(n^2t)) + \frac1n h_\beta(Y(0)) \]
is a martingale and $W_n = Y_n + Z_n$, where
\[ Z^\beta_n(t) = \frac1n h_\beta(Y(n^2t)) - \frac1n h_\beta(Y(0)). \]

Integration by parts 281

Note that
\[ \int_0^t H(X_n(s),\beta)\,dW^\beta_n(s) = \int_0^t H(X_n(s),\beta)\,dY^\beta_n(s) + \int_0^t H(X_n(s),\beta)\,dZ^\beta_n(s) \]
and that $\{Y_n\}$ satisfies the uniformity conditions, so the first stochastic integral converges appropriately.
\[ \int_0^t H(X_n(s),\beta)\,dZ^\beta_n(s) = H(X_n(t),\beta)Z^\beta_n(t) - H(X_n(0),\beta)Z^\beta_n(0) - \int_0^t Z^\beta_n(s)\,dH(X_n(s),\beta) \]
\[ = H(X_n(t),\beta)Z^\beta_n(t) - \sum_\gamma\int_0^t Z^\beta_n(s)H'(X_n(s),\beta)G(X_n(s),\gamma)\,dV^\gamma_n(s) \]
\[ \quad - \sum_\gamma\int_0^t Z^\beta_n(s-)H'(X_n(s),\beta)H(X_n(s),\gamma)\,dY^\gamma_n(s) - \sum_\gamma\int_0^t H'(X_n(s),\beta)H(X_n(s),\gamma)Z^\beta_n(s-)\,dZ^\gamma_n(s) \]

Convergence 282

Let $N_{ij}(t)$ denote the number of transitions of $Y$ from state $i$ to state $j$ up to time $t$. Then
\[ [Y^\beta_n,Y^\gamma_n]_t = \sum_{i,j=1}^m \frac{N_{ij}(n^2t)}{n^2}(h_\beta(j)-h_\beta(i))(h_\gamma(j)-h_\gamma(i)), \qquad [Y^\beta_n,Z^\gamma_n]_t = -[Y^\beta_n,Y^\gamma_n]_t, \]
and
\[ \int_0^t Z^\beta_n(s-)\,dZ^\gamma_n(s) = \sum_{i,j=1}^m \frac{N_{ij}(n^2t)}{n^2}h_\beta(i)(h_\gamma(j)-h_\gamma(i)) - \frac1n h_\beta(Y(0))Z^\gamma_n(t). \]
\[ [Y^\beta_n,Y^\gamma_n]_t \to C_{\beta\gamma}t \equiv \sum_{i,j=1}^m \pi_iq_{ij}(h_\beta(j)-h_\beta(i))(h_\gamma(j)-h_\gamma(i))\,t, \]
\[ \int_0^t Z^\beta_n(s-)\,dZ^\gamma_n(s) \to D_{\beta\gamma}t \equiv \sum_{i,j=1}^m \pi_iq_{ij}h_\beta(i)(h_\gamma(j)-h_\gamma(i))\,t. \]
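The correctors $h_\beta$ and the matrices $C$ and $D$ are finite linear algebra: solve $Qh_\beta = \mathbf 1_\beta - \pi_\beta$ (a consistent singular system, since the right side is orthogonal to $\pi$) and sum over transitions. A sketch for a hypothetical three-state chain:

```python
import numpy as np
from scipy.linalg import null_space, lstsq

# hypothetical irreducible intensity matrix Q
Q = np.array([[-2.0, 1.0, 1.0],
              [0.5, -1.0, 0.5],
              [1.0, 1.0, -2.0]])
m = Q.shape[0]

pi = null_space(Q.T)[:, 0]                # stationary distribution: pi Q = 0
pi = pi / pi.sum()

H = np.zeros((m, m))                      # column beta holds h_beta
for b in range(m):
    rhs = (np.arange(m) == b).astype(float) - pi[b]
    H[:, b] = lstsq(Q, rhs)[0]            # least-squares solution of Q h = rhs

C = np.zeros((m, m)); D = np.zeros((m, m))
for i in range(m):
    for j in range(m):
        if i == j:
            continue
        w = pi[i] * Q[i, j]
        dh = H[j, :] - H[i, :]            # h_.(j) - h_.(i)
        C += w * np.outer(dh, dh)         # C_{beta gamma}
        D += w * np.outer(H[i, :], dh)    # D_{beta gamma}
print("C =\n", C, "\nD =\n", D)
```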

Limit theorem 283

The martingale central limit theorem gives $Y_n \Rightarrow Y$, where $Y$ is a Brownian motion with infinitesimal covariance $C = ((C_{\beta\gamma}))$.

Theorem 10.6 Let $Y$ in (10.6) be a finite Markov chain with state space $E = \{1,\ldots,m\}$ and intensity matrix $Q$, and let $X_n(0)$ be independent of $Y$. Let $G$ be bounded and continuous, and let $H$ be bounded and have bounded and continuous first and second derivatives. Assume that $Q$ is irreducible and that $\pi$, $((C_{\beta\gamma}))$, and $((D_{\beta\gamma}))$ are as above. Define $\bar G : \mathbb R\to\mathbb R$ by $\bar G(x) = \sum_\beta G(x,\beta)\pi_\beta$. If $X_n(0) \Rightarrow X(0)$, then $\{X_n\}$ is relatively compact and any limit point satisfies
\[ X(t) = X(0) + \int_0^t \bar G(X(s))\,ds + \sum_\beta\int_0^t H(X(s),\beta)\,dY^\beta(s) + \sum_{\beta,\gamma}\int_0^t H'(X(s),\beta)H(X(s),\gamma)D_{\gamma\beta}\,ds. \]


11. Control

• Examples

• Controlled martingale problems

• Standard cost structures

• Relaxed controls

• Pasting lemma

• Bellman principle and the Nisio semigroup

• Hamilton-Jacobi-Bellman equation

• Linear programming formulation

• Relationship of linear program to HJB equation

• Viscosity solutions

Controlled queueing networks 285

Deviations from a target value 286

Recalibration 287

Controlled martingale problems 288

$E$ state space, $F$ control space

$A : \mathcal D(A) \subset C(E) \to C(E\times F)$

Definition 11.1 $(X,U)$ is a solution of the controlled martingale problem for $A$ if there is a filtration $\{\mathcal F_t\}$ such that
\[ f(X(t)) - \int_0^t Af(X(s),U(s))\,ds \]
is an $\mathcal F_t$-martingale for each $f\in\mathcal D(A)$.

Examples of generators:
\[ Af(x,u) = \frac12\sum a_{ij}(x,u)\frac{\partial^2}{\partial x_i\partial x_j}f(x) + \sum b_i(x,u)\frac{\partial}{\partial x_i}f(x), \qquad E = \mathbb R^d, \]
\[ Af(x,u) = \lambda(x,u)\int_E (f(y)-f(x))\,\mu(x,u,dy). \]

Standard cost structures 289

Finite horizon: Running cost $c(x,u)$, terminal cost $g(x)$:
\[ \int_0^t c(X(s),U(s))\,ds + g(X(t)) \quad\text{or}\quad \int_0^t\int_F c(X(s),u)\,\lambda_s(du)\,ds + g(X(t)) \]
Discounted infinite horizon: $\alpha > 0$:
\[ \int_0^\infty e^{-\alpha s}c(X(s),U(s))\,ds \quad\text{or}\quad \int_0^\infty e^{-\alpha s}\int_F c(X(s),u)\,\lambda_s(du)\,ds \]
Long run average:
\[ \lim_{t\to\infty}\frac1t\int_0^t c(X(s),U(s))\,ds \quad\text{or}\quad \lim_{t\to\infty}\frac1t\int_0^t\int_F c(X(s),u)\,\lambda_s(du)\,ds \]

Relaxed controls 290

$\mathcal L(F)$ space of measures on $F\times[0,\infty)$ with $\mu(F\times[0,t]) < \infty$ for each $t>0$.

$\mathcal L_m(F) \subset \mathcal L(F)$ subspace with $\mu(F\times[0,t]) = t$.

$\mathcal L_{m,T}(F)$ space of measures on $F\times[0,T]$ that are restrictions of measures in $\mathcal L_m(F)$.

Definition 11.2 A process $(X,\Lambda)$ in $D_E[0,\infty)\times\mathcal L_m(F)$ is a relaxed solution of the controlled martingale problem for $A$ if there exists a filtration $\{\mathcal F_t\}$ such that $(X,\Lambda)$ is $\mathcal F_t$-adapted and for each $f\in\mathcal D(A)$,
\[ M_f(t) = f(X(t)) - \int_{F\times[0,t]} Af(X(s),u)\,\Lambda(du\times ds) \]
is an $\mathcal F_t$-martingale.

$\Lambda(du\times ds) = \lambda_s(du)\,ds$, so
\[ M_f(t) = f(X(t)) - \int_0^t\int_F Af(X(s),u)\,\lambda_s(du)\,ds. \]

Pasting lemma 291

Lemma 11.3 Let $(E,r)$, $(S_1,\rho_1)$, and $(S_2,\rho_2)$ be complete separable metric spaces, let $P_1\in\mathcal P(S_1)$ and $P_2\in\mathcal P(S_2)$, and suppose that $X_1 : S_1\to E$ and $X_2 : S_2\to E$ are Borel measurable. If $P_1\{X_1\in\cdot\} = P_2\{X_2\in\cdot\} = \mu\in\mathcal P(E)$, then there exists $P\in\mathcal P(S_1\times S_2)$ such that for each $Z_1\in B(S_1)$ and $Z_2\in B(S_2)$,
\[ E^P[Z_1Z_2] = \int_E E^{P_1}[Z_1|X_1 = x]\,E^{P_2}[Z_2|X_2 = x]\,\mu(dx) \]
and $X_1 = X_2$ a.s. $P$.

Properties of the collection of solutions 292

Suppose that every solution has a cadlag modification, so we can identify a solution with its distribution on $D_E[0,\infty)\times\mathcal L_m(F)$.

Let $\Gamma\subset\mathcal P(D_E[0,\infty)\times\mathcal L_m(F))$ be the collection of relaxed solutions of the controlled martingale problem for $A$, and let $\Gamma_\nu\subset\Gamma$ be the collection of solutions with initial distribution $\nu$.

For $P\in\Gamma$, $t>0$, $\mu = PX(t)^{-1}$, and $P'\in\Gamma_\mu$, there exists $Q\in\Gamma$ such that
\[ Q\{(X(\cdot\wedge t),\Lambda(\cdot\wedge t))\in\mathcal A,\ (X(t+\cdot),\Lambda(t+\cdot)-\Lambda(t))\in\mathcal B\} = \int_E P\{(X(\cdot\wedge t),\Lambda(\cdot\wedge t))\in\mathcal A\,|\,X(t)=x\}\,P'\{\mathcal B\,|\,X(0)=x\}\,\mu(dx) \]

Conditions for compactness 293

Suppose that $c(x,u)\ge 0$ and that for each $f\in\mathcal D(A)$, there exist $a_f$, $b_f$, and $0<\eta_f<1$ such that
\[ |Af(x,u)| \le a_f + b_fc(x,u)^{\eta_f}. \tag{11.1} \]
Then for $0\le t<t+h\le T$,
\[ |E[f(X(t+h)) - f(X(t))|\mathcal F_t]| = \Big|E\Big[\int_{F\times(t,t+h]} Af(X(s),u)\,\Lambda(du\times ds)\Big|\mathcal F_t\Big]\Big| \]
\[ \le E\Big[\int_t^{t+h}\Big(a_f + b_f\Big(\int_F c(X(s),u)\,\lambda_s(du)\Big)^{\eta_f}\Big)ds\Big|\mathcal F_t\Big] \le a_fh + b_fh^{1-\eta_f}E\Big[\Big(\int_0^T\int_F c(X(s),u)\,\lambda_s(du)\,ds\Big)^{\eta_f}\Big|\mathcal F_t\Big] \]

Compactness lemma 294

Lemma 11.4 Let
\[ \Gamma^k_\nu = \Big\{P\in\Gamma_\nu : E^P\Big[\int_0^\infty e^{-\alpha t}\int_F c(X(t),u)\,\Lambda(du\times dt)\Big] \le k\Big\}. \]
Suppose $\mathcal D(A)$ is closed under multiplication and separates points. If $E$ and $F$ are compact (e.g., $E = \mathbb R^d\cup\{\infty\}$) and (11.1) holds for all $f\in\mathcal D(A)$, then $\Gamma^k_\nu$ is compact in $\mathcal P(D_E[0,\infty)\times\mathcal L_m(F))$.

Bellman principle and the Nisio semigroup 295

Lemma 11.5 Suppose that $c$ and $g$ are lower semicontinuous and bounded below. Define
\[ S(t)g(\nu) = \inf_{(X,\Lambda)\in\Gamma_\nu} E\Big[\int_0^t\int_F c(X(s),u)\,\lambda_s(du)\,ds + g(X(t))\Big]. \tag{11.2} \]
Under the conditions of the compactness lemma, the infimum is achieved, and setting $S(t)g(x) = S(t)g(\delta_x)$,
\[ S(t)g(\nu) = \int_E S(t)g(x)\,\nu(dx). \]
Proof. Check that for $0\le a\le 1$,
\[ S(t)g(a\nu_1 + (1-a)\nu_2) = aS(t)g(\nu_1) + (1-a)S(t)g(\nu_2). \]

Semigroup property 296

\[ S(t+r)g(\nu) = \inf_{P\in\Gamma_\nu} E^P\Big[\int_0^t\int_F c(X(s),u)\,\lambda_s(du)\,ds + \int_t^{t+r}\int_F c(X(s),u)\,\lambda_s(du)\,ds + g(X(t+r))\Big] \]
\[ \ge \inf_{P\in\Gamma_\nu} E^P\Big[\int_0^t\int_F c(X(s),u)\,\lambda_s(du)\,ds + S(r)g(X(t))\Big] = S(t)S(r)g(\nu), \]
\[ S(t)S(r)g(\nu) = \inf_{Q\in\Gamma_\nu} E^Q\Big[\int_0^t\int_F c(X(s),u)\,\lambda_s(du)\,ds + S(r)g(X(t))\Big] \]
\[ \ge \inf_{Q\in\Gamma_\nu}\Big(E^Q\Big[\int_0^t\int_F c(X(s),u)\,\lambda_s(du)\,ds\Big] + \inf_{P\in\Gamma_{QX(t)^{-1}}} E^P\Big[\int_0^r\int_F c(X(s),u)\,\lambda_s(du)\,ds + g(X(r))\Big]\Big) \]
\[ \ge \inf_{Q\in\Gamma_\nu} E^Q\Big[\int_0^t\int_F c(X(s),u)\,\lambda_s(du)\,ds + \int_t^{t+r}\int_F c(X(s),u)\,\lambda_s(du)\,ds + g(X(t+r))\Big] = S(t+r)g(\nu) \]

Properties of the semigroup 298

$S(t)$ is a nonlinear contraction semigroup:
\[ S(t+r)g(\nu) = S(t)S(r)g(\nu), \qquad \|S(t)g - S(t)h\|_\infty \le \|g-h\|_\infty. \]

Nonlinear semigroup generators 299

If $E$ and $F$ are compact, $c$ is continuous, and $g\in\mathcal D(A)$, then
\[ S(t)g(x) - g(x) = \inf_{(X,\Lambda)\in\Gamma_x} E\Big[\int_0^t\int_F (c(X(s),u) + Ag(X(s),u))\,\lambda_s(du)\,ds\Big], \]
and hence
\[ \lim_{t\to 0}\frac{S(t)g(x)-g(x)}{t} = \mathbf Cg(x) = \inf_u (c(x,u) + Ag(x,u)), \]
so $S(t)g$ should satisfy the Hamilton-Jacobi-Bellman equation
\[ \frac{d}{dt}S(t)g = \mathbf CS(t)g. \]

Hamilton-Jacobi-Bellman equation (discounted case) 300

Define
\[ R_\alpha h(\nu) = \inf_{P\in\Gamma_\nu} E^P\Big[\int_0^\infty e^{-\alpha s}\Big(\int_F c(X(s),u)\,\lambda_s(du) + h(X(s))\Big)ds\Big]. \]
Again $R_\alpha h(\nu) = \int_E R_\alpha h(x)\,\nu(dx)$.
\[ S(t)R_\alpha h(\nu) = \inf_{P\in\Gamma_\nu} E^P\Big[\int_0^t\int_F c(X(s),u)\,\lambda_s(du)\,ds + R_\alpha h(X(t))\Big] \]
\[ = \inf_{P\in\Gamma_\nu} E^P\Big[\int_0^t\int_F c(X(s),u)\,\lambda_s(du)\,ds + \int_t^\infty e^{-\alpha(s-t)}\Big(\int_F c(X(s),u)\,\lambda_s(du) + h(X(s))\Big)ds\Big] \]
\[ = \inf_{P\in\Gamma_\nu}\Big\{E^P\Big[\int_0^t (1-e^{-\alpha s})\int_F c(X(s),u)\,\lambda_s(du)\,ds\Big] + (e^{\alpha t}-1)E^P\Big[\int_t^\infty e^{-\alpha s}\Big(\int_F c(X(s),u)\,\lambda_s(du) + h(X(s))\Big)ds\Big] \]
\[ \quad - E^P\Big[\int_0^t e^{-\alpha s}h(X(s))\,ds\Big] + E^P\Big[\int_0^\infty e^{-\alpha s}\Big(\int_F c(X(s),u)\,\lambda_s(du) + h(X(s))\Big)ds\Big]\Big\} \]
\[ \lim_{t\to 0} t^{-1}(S(t)R_\alpha h(x) - R_\alpha h(x)) = \alpha R_\alpha h(x) - h(x) \]

For $h\equiv 0$, set $V_\alpha(x) = R_\alpha h(x)$. Then we should have
\[ \alpha V_\alpha(x) - \mathbf CV_\alpha(x) = \alpha V_\alpha(x) - \inf_u(c(x,u) + AV_\alpha(x,u)) = 0. \]

Linear programming formulation 302

If $(X,\Lambda)$ is a solution of the controlled martingale problem and $\nu_t\in\mathcal P(E\times F)$ is defined by
\[ \nu_th \equiv \int h\,d\nu_t = E\Big[\int_F h(X(t),u)\,\lambda_t(du)\Big], \]
then
\[ \nu_tf = \nu_0f + \int_0^t \nu_sAf\,ds, \quad f\in\mathcal D(A). \tag{11.3} \]
In the discounted case, the goal is to minimize
\[ \int_0^\infty e^{-\alpha t}\nu_tc\,dt = E\Big[\int_0^\infty e^{-\alpha t}\int_F c(X(t),u)\,\lambda_t(du)\,dt\Big]. \]
(11.3) implies
\[ e^{-\alpha t}\nu_tf = \nu_0f + \int_0^t e^{-\alpha s}(\nu_sAf - \alpha\nu_sf)\,ds = \nu_0f + \int_0^t e^{-\alpha s}\nu_s(Af-\alpha f)\,ds, \]
and hence
\[ \alpha\nu_0f = \alpha\int_0^\infty e^{-\alpha s}\nu_s(\alpha f - Af)\,ds \equiv \pi(\alpha f - Af). \]

Linear Program 303

Minimize
\[ \pi c = \int_{E\times F} c(x,u)\,\pi(dx\times du) \]
subject to
\[ \pi(\alpha f - Af) = \alpha\nu_0f, \quad \forall f\in\mathcal D(A). \]
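For a finite state space and finite control space, taking the indicator functions as a basis for $\mathcal D(A)$ turns this into an ordinary finite linear program in the variables $\pi(x,u)\ge 0$; note that total mass one follows automatically from the constraint with $f\equiv 1$. A sketch with a hypothetical two-state, two-control model:

```python
import numpy as np
from scipy.optimize import linprog

# hypothetical controlled chain: Q[u] is the generator of X under control u
Q = [np.array([[-1.0, 1.0], [1.0, -1.0]]),
     np.array([[-3.0, 3.0], [0.2, -0.2]])]
c = np.array([[1.0, 2.0],                 # running cost c(x, u);
              [4.0, 2.5]])                # the faster control is "expensive"
alpha, nu0 = 0.5, np.array([1.0, 0.0])
nx, nu = 2, 2

# variables pi(x, u), flattened; for each basis function f = e_z:
#   sum_{x,u} pi(x,u) (alpha e_z(x) - Q[u][x,z]) = alpha nu0(z)
A_eq = np.zeros((nx, nx * nu))
for z in range(nx):
    for x in range(nx):
        for u in range(nu):
            A_eq[z, x * nu + u] = alpha * (x == z) - Q[u][x, z]
b_eq = alpha * nu0

res = linprog(c.reshape(-1), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (nx * nu))
pi = res.x.reshape(nx, nu)
print("optimal pi:\n", pi)
print("pi c (= alpha times the minimal discounted cost):", res.fun)
```

Since $\pi = \alpha\int_0^\infty e^{-\alpha s}\nu_s\,ds$, the optimal value $\pi c$ is $\alpha$ times the minimal discounted cost.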

Relationship of linear program to HJB equation 304

\[ \int c(x,u)\,\pi(dx\times du) = \int (c(x,u) + Af(x,u) - \alpha f(x))\,\pi(dx\times du) + \alpha\nu_0f \ge \int \Big[\inf_u (c(x,u) + Af(x,u) - \alpha f(x))\Big]\pi(dx\times du) + \alpha\nu_0f. \]
Consequently, if $V_\alpha$ is a solution of the HJB equation,
\[ \int c(x,u)\,\pi(dx\times du) \ge \alpha\nu_0V_\alpha. \]
Let
\[ \Gamma = \{(x,u) : c(x,u) + AV_\alpha(x,u) - \alpha V_\alpha(x) = 0\}. \]
If there exists a solution of the controlled martingale problem for $(A,\nu_0)$ with
\[ \int_0^t\int_F \mathbf 1_\Gamma(X(s),u)\,\lambda_s(du)\,ds = t, \quad t\ge 0, \]
then
\[ E\Big[\int_0^\infty e^{-\alpha t}\int_F c(X(t),u)\,\lambda_t(du)\,dt\Big] = \int V_\alpha(y)\,\nu_0(dy), \]
proving that $\int V_\alpha\,d\nu_0$ is the minimal cost.

Conditions on generators 305

$A \subset B(E)\times B(E)$ is a pre-generator if $A$ is dissipative and there are sequences of functions $\mu_n : E\to\mathcal P(E)$ and $\lambda_n : E\to[0,\infty)$ such that for each $(f,g)\in A$,
\[ g(x) = \lim_{n\to\infty}\lambda_n(x)\int_E (f(y)-f(x))\,\mu_n(x,dy) \]
for each $x\in E$.

$A$ is bp-separable if there exists a countable subset $\{g_k\}\subset\mathcal D(A)\cap C(E)$ such that the graph of $A$ is contained in the bounded, pointwise closure of $\{(g_k,Ag_k)\}$.

i) $A : \mathcal D(A)\subset C(E)\to C(E\times F)$, $1\in\mathcal D(A)$, and $A1 = 0$.

ii) There exist $\psi_A\in C(E\times F)$, $\psi_A\ge 1$, and constants $a_f$, $f\in\mathcal D(A)$, such that
\[ |Af(x,u)| \le a_f\psi_A(x,u), \quad \forall (x,u)\in E\times F. \]

iii) Defining $A_0 = \{(f,\psi_A^{-1}Af) : f\in\mathcal D(A)\}$, $A_0$ is bp-separable and for each $u\in F$, the operator $A_u$ defined by $A_uf(x) = A_0f(x,u)$ is a pre-generator.

iv) $\mathcal D(A)$ is closed under multiplication and separates points.

Existence of solutions of the martingale problem 306

Theorem 11.6 Suppose that $A$ satisfies the conditions above. Let $\nu_0\in\mathcal P(E)$ and $\pi\in\mathcal P(E\times F)$ satisfy
\[ \int_{E\times F}\psi_A\,d\pi < \infty \]
and
\[ \pi(\alpha f - Af) = \alpha\nu_0f, \quad f\in\mathcal D(A). \tag{11.4} \]
Then there exists a solution of the controlled martingale problem for $(A,\nu_0)$ such that
\[ E\Big[\alpha\int_0^\infty e^{-\alpha t}\int_F h(X(t),u)\,\lambda_t(du)\,dt\Big] = \int_{E\times F} h(x,u)\,\pi(dx\times du). \]

Existence of optimal solution 307

Theorem 11.7 Suppose that $c\ge 0$ satisfies $\{(x,u) : c(x,u)\le k\}$ is compact for every $0\le k<\infty$ and that there exists $\rho : [0,\infty)\to[0,\infty)$ satisfying $\lim_{r\to\infty} r^{-1}\rho(r) = \infty$ such that
\[ \rho(\psi_A(x,u)) \le c(x,u). \]
If there exists $\pi\in\mathcal P(E\times F)$ satisfying (11.4) and $\int_{E\times F}c\,d\pi < \infty$, then there exists $\pi_0\in\mathcal P(E\times F)$ satisfying (11.4) such that $\pi_0$ achieves the minimum in the linear program.

Viscosity solutions 308

12. Miscellaneous

• Feynman-Kac formula

• Duality

• Stochastic equations for spin-flip models


Feynman-Kac formula 310

$X$ a solution of the martingale problem for $A$,
\[ M_f(t) = f(X(t)) - f(X(0)) - \int_0^t Af(X(s))\,ds. \]
Then
\[ f(X(t))e^{\int_0^t\beta(X(s))\,ds} = f(X(0)) + \int_0^t e^{\int_0^s\beta(X(r))\,dr}\,df(X(s)) + \int_0^t e^{\int_0^s\beta(X(r))\,dr}\beta(X(s))f(X(s))\,ds \]
\[ = f(X(0)) + \int_0^t e^{\int_0^s\beta(X(r))\,dr}\,dM_f(s) + \int_0^t e^{\int_0^s\beta(X(r))\,dr}(\beta(X(s))f(X(s)) + Af(X(s)))\,ds \]

Corresponding semigroup 311

Define
\[ T(t)f(x) = E[f(X(t))e^{\int_0^t\beta(X(s))\,ds}|X(0)=x]. \]
\[ T(t+r)f(x) = E[f(X(t+r))e^{\int_0^{t+r}\beta(X(s))\,ds}|X(0)=x] = E[E[f(X(t+r))e^{\int_0^{t+r}\beta(X(s))\,ds}|\mathcal F_t]|X(0)=x] \]
\[ = E[E[f(X(t+r))e^{\int_t^{t+r}\beta(X(s))\,ds}|\mathcal F_t]\,e^{\int_0^t\beta(X(s))\,ds}|X(0)=x] = E[T(r)f(X(t))e^{\int_0^t\beta(X(s))\,ds}|X(0)=x] = T(t)T(r)f(x), \]
and
\[ T(t)f(x) = f(x) + \int_0^t T(s)(\beta f + Af)(x)\,ds. \]
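For a finite-state chain the Feynman-Kac semigroup is a matrix exponential, $T(t) = e^{t(A + \mathrm{diag}\,\beta)}$, which can be checked against a Monte Carlo evaluation of the defining expectation. A sketch with a hypothetical two-state generator and potential:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(7)

# hypothetical two-state chain with generator A and potential beta
A = np.array([[-1.0, 1.0], [2.0, -2.0]])
beta = np.array([0.3, -0.5])
G = A + np.diag(beta)                     # generator of T(t)
f = np.array([1.0, 2.0])
t = 1.5
print("T(t)f via expm:      ", expm(G * t) @ f)

# Monte Carlo: E[f(X(t)) exp(int_0^t beta(X(s)) ds) | X(0) = 0]
vals = []
for _ in range(100000):
    s, x, acc = 0.0, 0, 0.0
    while True:
        hold = rng.exponential(1.0 / -A[x, x])   # sojourn in state x
        if s + hold >= t:
            acc += beta[x] * (t - s)
            break
        acc += beta[x] * hold
        s += hold
        x = 1 - x
    vals.append(f[x] * np.exp(acc))
print("Monte Carlo, X(0)=0: ", np.mean(vals))    # matches the first component
```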

Contact process 312

$E$ space of counting measures $\eta$ on a countable set $S$ with $\eta_x\in\{0,1\}$; $\lambda_{yx}\ge 0$.
\[ Af(\eta) = \sum_x\sum_{y\ne x}\lambda_{yx}(1-\eta_x)\eta_y(f(\eta+\delta_x)-f(\eta)) + \sum_y \mu_y\eta_y(f(\eta-\delta_y)-f(\eta)) \]
\[ H_\Gamma(\eta) = H(\eta,\Gamma) = \prod_{x\in\Gamma}(1-\eta_x) \]
\[ AH_\Gamma(\eta) = -\sum_{x\in\Gamma}\sum_{y\notin\Gamma}\lambda_{yx}(1-\eta_x)\eta_yH(\eta,\Gamma) + \sum_{y\in\Gamma}\mu_y\eta_yH(\eta,\Gamma\setminus\{y\}) \]
\[ = -\sum_{x\in\Gamma}\sum_{y\notin\Gamma}\lambda_{yx}\eta_yH(\eta,\Gamma) + \sum_{y\in\Gamma}\mu_y(H(\eta,\Gamma\setminus\{y\}) - H(\eta,\Gamma)) \]
\[ = \sum_{x\in\Gamma}\sum_{y\notin\Gamma}\lambda_{yx}(H(\eta,\Gamma\cup\{y\}) - H(\eta,\Gamma)) + \sum_{y\in\Gamma}\mu_y(H(\eta,\Gamma\setminus\{y\}) - H(\eta,\Gamma)) = BH_\eta(\Gamma) \]

Duality 313

$X$ and $Y$ independent, $X$ has generator $A$, $Y$ has generator $B$.
\[ \frac{d}{ds}E[H(X(s),Y(t-s))] = E\Big[\frac{d}{ds}T(s)S(t-s)H(X(0),Y(0))\Big] = E[T(s)S(t-s)AH(X(0),Y(0))] - E[T(s)S(t-s)BH(X(0),Y(0))], \]
which is zero if $AH(x,y) = BH(x,y)$. But then
\[ E[H(X(t),Y(0))] = E[H(X(0),Y(t))]. \]
In particular,
\[ E[H(\eta_t,\Gamma_0)] = E[H(\eta_0,\Gamma_t)], \qquad P\{\eta_t(x) = 0,\ x\in\Gamma_0\} = P\{\Gamma_t \subset \{y : \eta_0(y) = 0\}\}. \]
Existence of the finite process implies uniqueness of the infinite process.

Spin-flip models 314

A spin-flip model on $\mathbb Z^d$ is a stochastic process whose state $\eta = \{\eta_i : i\in\mathbb Z^d\}$ assigns to each lattice point $i\in\mathbb Z^d$ the value $\pm 1$.

For each $i\in\mathbb Z^d$, specify a flip rate $c_i$ which determines the rate at which the associated state variable $\eta_i$ changes sign. The rates $c_i$ may depend on the full configuration $\eta$.

$X_i$ counts the cumulative number of sign changes:
\[ \eta_i(t) = \eta_i(0)(-1)^{X_i(t)-X_i(0)}. \]
The specification of the rates $c_i$ corresponds to the requirement that there exist a filtration $\{\mathcal F_t\}$ such that for each $i\in\mathbb Z^d$,
\[ M_i(t) = X_i(t) - \int_0^t c_i(X(s))\,ds \]
is an $\mathcal F_t$-martingale. Assume no simultaneous jumps.

Generator for spin-flip model 315

Suppose $f$ only depends on finitely many coordinates $J_f$. Then
\[ f(X(t)) = f(X(0)) + \sum_{j\in J_f}\int_0^t (f(X(s-)+e_j) - f(X(s-)))\,dX_j(s) \]
\[ = f(X(0)) + \sum_{j\in J_f}\int_0^t (f(X(s-)+e_j) - f(X(s-)))\,dM_j(s) + \int_0^t \sum_{j\in J_f} c_j(X(s))(f(X(s-)+e_j) - f(X(s-)))\,ds \]

Stochastic equation for spin-flip model 316

Let $\{\xi_i : i\in\mathbb Z^d\}$ be independent Poisson random measures on $[0,\infty)\times[0,\infty)$ with Lebesgue mean measure.
\[ X_i(t) = X_i(0) + \int_{[0,\infty)\times[0,t]} \mathbf 1_{[0,c_i(X(s-))]}(u)\,\xi_i(du\times ds) \]
Let $J = \{\{k_i, i\in\mathbb Z^d\} : k_i\in\mathbb Z_+\}$. Assume
\[ |c_i(x+e_j) - c_i(x)| \le a_{ij}, \qquad |c_i(x)| \le b_i \]
for all $x\in J$ and $i,j\in\mathbb Z^d$, where $e_j$ is the element of $J$ such that $e_{ji} = 0$ for $i\ne j$ and $e_{jj} = 1$.

Uniqueness 317

Suppose $X$ and $Y$ are solutions of the equation. Then
\[ |X_i(t) - Y_i(t)| \le |X_i(0)-Y_i(0)| + \int_{[0,\infty)\times[0,t]} |\mathbf 1_{[0,c_i(X(s-))]}(u) - \mathbf 1_{[0,c_i(Y(s-))]}(u)|\,\xi_i(du\times ds) \]
and
\[ E[|X_i(t)-Y_i(t)|] \le E[|X_i(0)-Y_i(0)|] + \int_0^t E[|c_i(X(s)) - c_i(Y(s))|]\,ds, \]
and hence
\[ \alpha_iE[|X_i(t)-Y_i(t)|] \le \alpha_iE[|X_i(0)-Y_i(0)|] + \int_0^t\sum_j \alpha_ia_{ij}E[|X_j(s)-Y_j(s)|]\,ds \]
\[ \le \alpha_iE[|X_i(0)-Y_i(0)|] + \int_0^t\sum_j \frac{\alpha_ia_{ij}}{\alpha_j}\,\alpha_jE[|X_j(s)-Y_j(s)|]\,ds. \]

Conditions 318

Setting $\rho_i(t) = E[|X_i(t)-Y_i(t)|]$ and $K = \sup_i\sum_j \frac{\alpha_ia_{ij}}{\alpha_j}$,
\[ \sup_i \alpha_i\rho_i(t) \le \sup_i\alpha_i\rho_i(0) + K\int_0^t \sup_j\alpha_j\rho_j(s)\,ds. \tag{12.1} \]
Alternatively, set $K = \sup_j\sum_i \frac{\alpha_ia_{ij}}{\alpha_j}$. Then
\[ \sum_i \alpha_i\rho_i(t) \le \sum_i\alpha_i\rho_i(0) + K\int_0^t\sum_j \alpha_j\rho_j(s)\,ds. \tag{12.2} \]
Warning: To apply Gronwall to (12.1), we need to know that $\sup_{s\le t}\sup_j\alpha_j\rho_j(s) < \infty$ (true if $\sup_j\alpha_jb_j < \infty$), and similarly for (12.2), that $\sup_{s\le t}\sum_j\alpha_j\rho_j(s) < \infty$ (true if $\sum_j\alpha_jb_j < \infty$).

Example 319

Let $a_{ij} = \rho(j-i)$, where $\rho$ has bounded support, and $b_i \equiv b$.

Take $\alpha_i = \frac{1}{1+|i|^{d+1}}$, so that
\[ \sup_i\sum_j \frac{(1+|j|^{d+1})\rho(j-i)}{1+|i|^{d+1}} \le \sup_i\sum_j \frac{(1+|j+i|^{d+1})\rho(j)}{1+|i|^{d+1}} \le \sup_i\sup_{k\in\mathrm{supp}\,\rho}\frac{1+|k+i|^{d+1}}{1+|i|^{d+1}}\,\sum_j\rho(j) < \infty. \]

Exercises 320

1. Show that $\mathcal F_\tau$ is a $\sigma$-algebra.

2. Show that for $\mathcal F_t$-stopping times $\sigma,\tau$, $\sigma\le\tau$ implies that $\mathcal F_\sigma\subset\mathcal F_\tau$. In particular, $\mathcal F_{\tau\wedge t}\subset\mathcal F_t$.

3. Let $\tau$ be a discrete $\mathcal F_t$-stopping time satisfying $\{\tau<\infty\} = \cup_{k=1}^\infty\{\tau = t_k\} = \Omega$. Show that $\mathcal F_\tau = \sigma\{A\cap\{\tau=t_k\} : A\in\mathcal F_{t_k},\ k=1,2,\ldots\}$.

4. Let $\tau$ be a discrete stopping time with values $\{t_i\}$. Show that
\[ E[Z|\mathcal F_\tau] = \sum_i E[Z|\mathcal F_{t_i}]\mathbf 1_{\{\tau=t_i\}}. \]

5. Show that the minimum of two stopping times is a stopping time and that the maximum of two stopping times is a stopping time.

6. Let $N$ be a Poisson process with parameter $\lambda$. Then $M(t) = N(t)-\lambda t$ is a martingale. Compute $[M]_t$.

7. Show that if $Y_1$ is cadlag and $Y_2$ is finite variation, then
\[ [Y_1,Y_2]_t = \sum_{s\le t}\Delta Y_1(s)\Delta Y_2(s). \]

8. Using the fact that martingales have finite quadratic variation, show that semimartingales have finite quadratic variation.

9. Using the above results, show that the covariation of two semimartingales exists.

10. Consider the stochastic differential equation
\[ X(t) = X(0) + \int_0^t aX(s)\,dW(s) + \int_0^t bX(s)\,ds. \]
Find $\alpha$ and $\beta$ so that
\[ X(t) = X(0)\exp\{\alpha W(t) + \beta t\} \]
is a solution.

11. Show that $\lim_{n\to\infty} nP\{N(\frac{b-a}{n}) > 1\} = 0$.

12. Let $N$ be a stochastically continuous counting process with independent increments. Show that the increments of $N$ are Poisson distributed.

13. Verify the statements regarding the mappings on page 71.

14. Let $N_n$ and $N$ be counting processes. Suppose that $(N_n(t_1),\ldots,N_n(t_m)) \Rightarrow (N(t_1),\ldots,N(t_m))$ for all choices of $t_1,\ldots,t_m$ in $T_0$, where $T_0$ is dense in $[0,\infty)$. Show that $N_n\Rightarrow N$ under the Skorohod topology.

15. Let $N$ be Poisson distributed with parameter $\lambda$, and let $Y_1,Y_2,\ldots$ be independent Bernoulli random variables with $P\{Y_i=1\} = 1 - P\{Y_i=0\} = p$. Define $Z = \sum_{i=1}^N Y_i$. Compute the characteristic function for $Z$.

16. Suppose that $Y_1,\ldots,Y_m$ are independent Poisson random variables with $E[Y_i] = \lambda_i$. Let $Y = \sum_{i=1}^m Y_i$. Compute $P\{Y_i=1|Y=1\}$. More generally, compute $P\{Y_i=k|Y=m\}$.

17. Let $\xi$ be a Poisson random measure on $(U,d_U)$ with mean measure $\nu$. For $f\in M(U)$, $f\ge 0$, show that
\[ E[\exp\{-\int_U f(u)\,\xi(du)\}] = \exp\{-\int (1-e^{-f})\,d\nu\}. \]

18. Give an example of a right continuous process $X$ such that $\int_{U\times[0,t]} |X(u,s)|\wedge 1\,\nu(du)\,ds < \infty$ a.s. but $\int_{U\times[0,t]} |X(u,s)|\,\xi(du\times ds) = \infty$ a.s.

19. Let $Y$, $\xi_1$, and $\xi_2$ be independent random variables with values in complete, separable metric spaces $E$, $S_1$, and $S_2$. Suppose that $G : E\times S_1\to E_0$ and $H : E\times S_2\to E_0$ are Borel measurable functions and that $G(Y,\xi_1) = H(Y,\xi_2)$ a.s. Show that there exists a Borel measurable function $F : E\to E_0$ such that $F(Y) = G(Y,\xi_1) = H(Y,\xi_2)$ a.s.

Appendix 1: Proof of Lemma 4.25 323

Proof. For $D\in\mathcal P_U$, define $\nu(D) = E[\int_{U\times[0,\infty)} I_D(x,s)H(x,s)\,\Gamma(dx\times ds)]$, and for $C\in\mathcal P$, define $\nu_0(C) = \nu(U\times C)$. Since $U$ is Polish, there exists a transition function $\gamma_0$ from $([0,\infty)\times\Omega,\mathcal P)$ into $U$ such that
\[ \nu(D) = \int_{[0,\infty)\times\Omega} \gamma_0(s,\omega,D_{(s,\omega)})\,\nu_0(ds\times d\omega), \]
where $D_{(s,\omega)} = \{x : (x,s,\omega)\in D\}$. In particular, for each $G\in\mathcal B(U)$, $\gamma_0(\cdot,\cdot,G)$ is $\mathcal P$-measurable. Let $A_0(t) = \int_{U\times[0,t]} H(x,s)\,\Gamma(dx\times ds)$, and note that
\[ \nu_0(C) = E\Big[\int_{[0,\infty)\times U} I_C(s)H(x,s)\,\Gamma(dx\times ds)\Big] = E\Big[\int_0^\infty I_C(s)\,dA_0(s)\Big], \quad C\in\mathcal P, \]
so
\[ \nu(D) = \int_\Omega\int_0^\infty \gamma_0(s,\omega,D_{(s,\omega)})\,dA_0(s,\omega)\,P(d\omega) = E\Big[\int_0^\infty \gamma_0(s,\cdot,D_{(s,\cdot)})\,dA_0(s)\Big] \]
for every $D\in\mathcal P_U$, which implies
\[ E\Big[\int_{U\times[0,\infty)} Z(x,s)H(x,s)\,\Gamma(dx\times ds)\Big] = E\Big[\int_0^\infty\int_U Z(x,s)\,\gamma_0(s,\cdot,dx)\,dA_0(s)\Big] \]
for every bounded, predictable $Z$.

There exists a nondecreasing, right-continuous, predictable process $A$ such that $A_0 - A$ is a martingale and hence
\[ \nu_0(C) = E\Big[\int_0^\infty I_C(s)\,dA(s)\Big], \quad C\in\mathcal P, \]
and
\[ \nu(D) = E\Big[\int_0^\infty \gamma_0(s,\omega,D_{(s,\omega)})\,dA(s)\Big]. \]
For $G\in\mathcal B(U\times[0,\infty))$, define
\[ \widehat\Gamma(G,\omega) = \int_0^\infty\int_{G_s}\frac{1}{H(x,s)}\,\gamma_0(s,\omega,dx)\,dA(s), \]
and observe that
\[ E\Big[\int_{U\times[0,t]} Z(x,s)H(x,s)\,\widehat\Gamma(dx\times ds)\Big] = E\Big[\int_0^t\int_U Z(x,s)\,\gamma_0(s,dx)\,dA(s)\Big] = E\Big[\int_0^t\int_U Z(x,s)\,\gamma_0(s,dx)\,dA_0(s)\Big] = E\Big[\int_{U\times[0,t]} Z(x,s)H(x,s)\,\Gamma(dx\times ds)\Big]. \]

Let $t, r>0$, and let $R$ be bounded and $\mathcal F_t$-measurable. Note that if $Z$ is predictable, then $\widetilde Z(x,s) = R\,I_{(t,t+r]}(s)Z(x,s)$ is predictable. It follows that
\[ E[(M_Z(t+r) - M_Z(t))R] = E\Big[\Big(E\Big[\int_{U\times[0,t+r]} Z(x,s)H(x,s)\,\widehat\Gamma(dx\times ds)\Big|\mathcal F_{t+r}\Big] - E\Big[\int_{U\times[0,t]} Z(x,s)H(x,s)\,\widehat\Gamma(dx\times ds)\Big|\mathcal F_t\Big]\Big)R\Big] - E\Big[\int_{U\times(t,t+r]} Z(x,s)H(x,s)\,\Gamma(dx\times ds)\,R\Big] \]
\[ = E\Big[\Big(\int_{U\times[0,t+r]} Z(x,s)H(x,s)\,\widehat\Gamma(dx\times ds) - \int_{U\times[0,t]} Z(x,s)H(x,s)\,\widehat\Gamma(dx\times ds)\Big)R\Big] - E\Big[\int_{U\times(t,t+r]} Z(x,s)H(x,s)\,\Gamma(dx\times ds)\,R\Big] \]
\[ = E\Big[\int_{U\times(t,t+r]} R\,Z(x,s)H(x,s)\,\widehat\Gamma(dx\times ds)\Big] - E\Big[\int_{U\times(t,t+r]} R\,Z(x,s)H(x,s)\,\Gamma(dx\times ds)\Big] = 0, \]
so $M_Z$ is a martingale.


Glossary

A directed set is a set A together with a binary relation ≤ having the following properties:

– (reflexivity) a ≤ a for all a in A.

– (transitivity) If a ≤ b and b ≤ c, then a ≤ c.

– (directedness) For a, b ∈ A, there exists a c ∈ A with a ≤ c and b ≤ c.
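For example, [0,∞) with the usual order is a directed set, as is the collection of finite subsets of any set, ordered by inclusion.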

