Transcript of 18.175: Lecture 18, More on martingales (math.mit.edu/~sheffield/2016175/Lecture18.pdf)

18.175: Lecture 18

More on martingales

Scott Sheffield

MIT

18.175 Lecture 18


Outline

Conditional expectation

Regular conditional probabilities

Martingales

Arcsin law, other SRW stories


Recall: conditional expectation

- Say we're given a probability space (Ω, F0, P), a σ-field F ⊂ F0, and a random variable X measurable w.r.t. F0 with E|X| < ∞. The conditional expectation of X given F is a new random variable, which we can denote by Y = E(X|F).

- We require that Y is F-measurable and that for all A in F we have

  ∫_A X dP = ∫_A Y dP.

- Any Y satisfying these properties is called a version of E(X|F).

- Theorem: Up to redefinition on a measure zero set, the random variable E(X|F) exists and is unique.

- This follows from the Radon-Nikodym theorem.
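On a finite probability space the defining property can be checked by hand. A minimal sketch (the die-and-parity example is hypothetical, chosen for illustration): F is generated by a two-cell partition, and the version Y of E(X|F) averages X over each cell.

```python
from fractions import Fraction

# Hypothetical example: Omega = {1,...,6}, uniform P, X(w) = w.
# F is generated by the partition {odd, even}; on each cell,
# E(X|F) is the average of X over that cell.
omega = [1, 2, 3, 4, 5, 6]
P = {w: Fraction(1, 6) for w in omega}
X = {w: w for w in omega}
partition = [{1, 3, 5}, {2, 4, 6}]

Y = {}
for cell in partition:
    mass = sum(P[w] for w in cell)
    avg = sum(X[w] * P[w] for w in cell) / mass
    for w in cell:
        Y[w] = avg  # constant on each cell, hence F-measurable

# Verify the defining property: the integrals of X and Y agree on every A in F.
for A in [set(), {1, 3, 5}, {2, 4, 6}, {1, 2, 3, 4, 5, 6}]:
    assert sum(X[w] * P[w] for w in A) == sum(Y[w] * P[w] for w in A)
print(Y[1], Y[2])  # 3 4: average of the odds, average of the evens
```

When F comes from a countable partition this averaging formula is the whole story; the Radon-Nikodym argument is what extends it to general σ-fields.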


Conditional expectation observations

- Linearity: E(aX + Y|F) = aE(X|F) + E(Y|F).

- If X ≤ Y then E(X|F) ≤ E(Y|F).

- If Xn ≥ 0 and Xn ↑ X with EX < ∞, then E(Xn|F) ↑ E(X|F) (by dominated convergence).

- If F1 ⊂ F2 then
  - E(E(X|F1)|F2) = E(X|F1).
  - E(E(X|F2)|F1) = E(X|F1).

- The second is kind of interesting: it says that after I learn F1, my best guess of what my best guess for X will be after learning F2 is simply my current best guess for X.

- Deduce that E(X|Fi) is a martingale if Fi is an increasing sequence of σ-algebras and E(|X|) < ∞.
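The tower property is easy to verify concretely on a finite example. A sketch (the two-coin setup is an assumption for illustration) checking E(E(X|F2)|F1) = E(X|F1) exactly:

```python
from fractions import Fraction

# Hypothetical two-coin example: Omega = {HH, HT, TH, TT}, uniform P.
# F1 = sigma(first flip) is coarser than F2 = sigma(both flips).
# X = number of heads. Check the tower property E(E(X|F2)|F1) = E(X|F1).
omega = ["HH", "HT", "TH", "TT"]
P = {w: Fraction(1, 4) for w in omega}
X = {w: w.count("H") for w in omega}

def cond_exp(Z, partition):
    # conditional expectation w.r.t. the sigma-field generated by a partition
    out = {}
    for cell in partition:
        avg = sum(Z[w] * P[w] for w in cell) / sum(P[w] for w in cell)
        for w in cell:
            out[w] = avg
    return out

F1 = [{"HH", "HT"}, {"TH", "TT"}]   # first flip known
F2 = [{w} for w in omega]           # both flips known
inner = cond_exp(X, F2)             # E(X|F2) is X itself here
assert cond_exp(inner, F1) == cond_exp(X, F1)   # tower property
print(cond_exp(X, F1)["HH"])        # 3/2: one head seen, half a head expected
```

As the slide says, the conditional expectations E(X|Fi) along the filtration F1 ⊂ F2 form a (two-step) martingale.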


Regular conditional probability

- Consider a probability space (Ω, F, P), a measurable map X : (Ω, F) → (S, S), and G ⊂ F a σ-field. Then µ : Ω × S → [0, 1] is a regular conditional distribution for X given G if
  - For each A, ω → µ(ω, A) is a version of P(X ∈ A|G).
  - For a.e. ω, A → µ(ω, A) is a probability measure on (S, S).

- Theorem: Regular conditional probabilities exist if (S, S) is nice.
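In the discrete case the two defining conditions are transparent. A hedged sketch (the die example and the choice of X are assumptions, not from the lecture): build µ cell by cell, then check that each µ(ω, ·) is a genuine probability measure.

```python
from fractions import Fraction
from collections import defaultdict

# Hypothetical discrete sketch: Omega = {1,...,6} with uniform P,
# X(w) = 1{w >= 5} taking values in S = {0, 1}, and G generated by the
# partition {low, high}. Build mu[w][x] = P(X = x | G)(w) cell by cell.
omega = range(1, 7)
P = {w: Fraction(1, 6) for w in omega}
X = {w: int(w >= 5) for w in omega}
G = [{1, 2, 3}, {4, 5, 6}]

mu = defaultdict(dict)   # mu[w][x]: first coordinate fixed -> a measure in x
for cell in G:
    mass = sum(P[w] for w in cell)
    for x in (0, 1):
        p = sum(P[w] for w in cell if X[w] == x) / mass
        for w in cell:
            mu[w][x] = p   # constant on each cell: omega -> mu(omega, A) is G-measurable

# For each fixed omega, mu(omega, .) sums to 1: a probability measure on S.
for w in omega:
    assert sum(mu[w].values()) == 1
print(mu[1], mu[4])   # concentrated on 0 for the low cell; (1/3, 2/3) on the high cell
```

The content of the theorem is that this measure-valued consistency can still be arranged when S is uncountable, provided (S, S) is nice (e.g., a complete separable metric space with its Borel sets).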


Martingales

- Let Fn be an increasing sequence of σ-fields (called a filtration).

- A sequence Xn is adapted to Fn if Xn ∈ Fn (i.e., Xn is Fn-measurable) for all n. If Xn is an adapted sequence (with E|Xn| < ∞) then it is called a martingale if

  E(Xn+1|Fn) = Xn

  for all n. It's a supermartingale (resp., submartingale) if the same thing holds with = replaced by ≤ (resp., ≥).
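The canonical example is simple random walk (assumed here for illustration; it is the SRW of the outline). Since the walk has only finitely many length-n prefixes, the martingale identity can be checked exactly by enumeration rather than by simulation:

```python
from itertools import product
from fractions import Fraction

# Simple random walk X_n = xi_1 + ... + xi_n with fair +/-1 steps is the
# canonical martingale: enumerate every length-4 sign path (all equally
# likely) and check E(X_{n+1} | F_n) = X_n exactly, one prefix at a time.
n = 4
for prefix in product([-1, 1], repeat=n):   # conditioning on F_n = first n steps
    Xn = sum(prefix)
    # average X_{n+1} over the two equally likely continuations
    cond = Fraction(sum(Xn + step for step in (-1, 1)), 2)
    assert cond == Xn
print("E(X_5 | F_4) = X_4 on every path")
```

Adding a drift of, say, −1/10 per step would make the same check come out ≤, i.e., a supermartingale.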


Martingale observations

- Claim: If Xn is a supermartingale then for n > m we have E(Xn|Fm) ≤ Xm.

- Proof idea: Follows if n = m + 1 by definition; take n = m + k and use induction on k.

- A similar result holds for submartingales. Also, if Xn is a martingale and n > m then E(Xn|Fm) = Xm.

- Claim: if Xn is a martingale w.r.t. Fn and φ is convex with E|φ(Xn)| < ∞ then φ(Xn) is a submartingale.

- Proof idea: Immediate from Jensen's inequality and the martingale definition.

- Example: take φ(x) = max{x, 0}.
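The Jensen claim can be confirmed exactly for the slide's example φ(x) = max{x, 0}, taking simple random walk as the martingale (an assumed illustration):

```python
from itertools import product
from fractions import Fraction

# With X_n simple random walk (a martingale) and the convex phi(x) = max(x, 0),
# check the submartingale inequality E(phi(X_{n+1}) | F_n) >= phi(X_n)
# exactly over every length-4 prefix.
phi = lambda x: max(x, 0)
for prefix in product([-1, 1], repeat=4):
    Xn = sum(prefix)
    cond = Fraction(phi(Xn + 1) + phi(Xn - 1), 2)
    assert cond >= phi(Xn)
    # strict exactly at Xn = 0: averaging phi over +/-1 gives 1/2 > 0
print("phi(X_n) is a submartingale")
```

The strict inequality at Xn = 0 is precisely where the kink of φ meets the randomness, matching Jensen's inequality.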


Predictable sequence

- Call Hn predictable if each Hn is Fn−1-measurable.

- Maybe Hn represents the number of shares of an asset an investor holds at the nth stage.

- Write (H · X)n = ∑_{m=1}^{n} Hm(Xm − Xm−1).

- Observe: If Xn is a supermartingale and the Hn ≥ 0 are bounded, then (H · X)n is a supermartingale.

- Example: take Hn = 1{N ≥ n} for a stopping time N.
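The stopping-time example has a clean closed form worth checking: betting one unit until N and nothing afterwards reproduces the stopped process. A hedged sketch (the particular stopping time, first hit of {−3, +3}, is a hypothetical choice):

```python
import random

# Sketch: take H_n = 1{N >= n} for a stopping time N (here, hypothetically,
# the first time the walk hits {-3, +3}). The event {N >= n} is decided by
# X_0, ..., X_{n-1}, so H is predictable, and the discrete integral
# (H.X)_n = sum_{m<=n} H_m (X_m - X_{m-1}) equals the stopped walk X_{n^N} - X_0.
random.seed(0)
X = [0]
for _ in range(50):
    X.append(X[-1] + random.choice([-1, 1]))

# stopping time (with a fallback cutoff in case the level is never hit)
N = next((n for n, x in enumerate(X) if abs(x) == 3), len(X) - 1)
HX = 0
for m in range(1, len(X)):
    H_m = 1 if N >= m else 0              # H_m is F_{m-1}-measurable
    HX += H_m * (X[m] - X[m - 1])
    assert HX == X[min(m, N)] - X[0]      # (H.X)_m = X_{m ^ N} - X_0
print("(H.X)_n matches the stopped walk at every n")
```

Combined with the observation above, this is why stopped supermartingales are supermartingales.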


Two big results

- Optional stopping theorem: Can't make money in expectation by timing the sale of an asset whose price is a non-negative martingale.

- Proof: Just a special case of the statement about (H · X) if the stopping time is bounded.

- Martingale convergence: A non-negative martingale almost surely has a limit.

- Idea of proof: Count upcrossings (times the martingale crosses a fixed interval) and devise a gambling strategy that makes lots of money if the number of these is not a.s. finite. Basically, you buy every time the price gets below the interval, sell each time it gets above.

- Stronger convergence statement: If Xn is a submartingale with sup E X_n^+ < ∞ then as n → ∞, Xn converges a.s. to a limit X with E|X| < ∞.
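A single sample path can illustrate (not prove) the convergence theorem. A hypothetical example, not from the slides: products of independent mean-one factors form a non-negative martingale, and here the a.s. limit happens to be 0 even though every E(M_n) = 1.

```python
import random

# Illustrative sketch: M_n = Y_1 * ... * Y_n with Y_i uniform on (0, 2)
# has E(Y_i) = 1, so M_n is a non-negative martingale and must converge a.s.
# The limit is 0, because E(log Y_i) = log 2 - 1 < 0 drags log M_n to
# -infinity, even though E(M_n) = 1 for every n.
random.seed(1)
M = 1.0
for _ in range(10_000):
    M *= random.uniform(0, 2)
print(M)   # vanishingly small: this sample path has (numerically) reached 0
```

This also shows the convergence need not be in L1: the limit 0 has expectation 0, not 1.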


Other statements

- If Xn ≥ 0 is a supermartingale then as n → ∞, Xn → X a.s. and EX ≤ EX0.

- Proof: Yn = −Xn ≤ 0 is a submartingale with sup E Y_n^+ = 0. Since EX0 ≥ EXn, the inequality follows from Fatou's lemma.

- Doob's decomposition: Any submartingale Xn can be written in a unique way as Xn = Mn + An where Mn is a martingale and An is a predictable increasing sequence with A0 = 0.

- Proof idea: Just let Mn be the sum of the "surprises" (i.e., the values Xn − E(Xn|Fn−1)).

- A martingale with bounded increments a.s. either converges to a limit or oscillates between ±∞. That is, a.s. either lim Xn < ∞ exists or lim sup Xn = +∞ and lim inf Xn = −∞.
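Doob's decomposition is concrete for the textbook submartingale X_n = S_n² with S_n simple random walk (an assumed illustration, not from this slide): each predictable drift E(X_n|F_{n−1}) − X_{n−1} equals 1, so A_n = n and M_n = S_n² − n is the martingale part. A sketch verifying this over every length-5 path:

```python
from itertools import product
from fractions import Fraction

# Doob decomposition sketch for X_n = S_n^2, S_n a simple random walk.
# A_n accumulates the predictable drifts E(X_n|F_{n-1}) - X_{n-1};
# M_n = X_n - A_n collects the "surprises".
for path in product([-1, 1], repeat=5):
    S, A, M = 0, 0, 0
    for xi in path:
        # conditional mean of the increment, computable before seeing xi:
        # E((S + xi)^2 | F) - S^2 = ((S+1)^2 + (S-1)^2)/2 - S^2 = 1
        drift = Fraction((S + 1) ** 2 + (S - 1) ** 2, 2) - S ** 2
        A += drift
        S += xi
        M = S ** 2 - A
    assert A == len(path) and M == S ** 2 - len(path)
print("S_n^2 = (S_n^2 - n) + n: martingale part plus predictable increasing part")
```

Uniqueness holds because the drift at each step is forced: A_n − A_{n−1} must equal E(X_n|F_{n−1}) − X_{n−1}.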


Problems

- How many primary candidates does one expect to ever exceed 20 percent on Intrade? (Asked by Aldous.)

- Compute the probability that a martingale price reaches a before b if martingale prices vary continuously.

- Polya's urn: start with r red and g green balls. Repeatedly sample randomly and add an extra ball of the sampled color. The fraction of red balls is a martingale, hence a.s. converges to a limit.
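The urn's convergence is easy to see in simulation. A sketch (starting counts r = g = 1 are an assumed choice; for that start the limiting fraction is known to be uniform on (0, 1)):

```python
import random

# Polya's urn: draw a ball at random, return it with an extra ball of the
# same color. The fraction of red balls is a bounded martingale, so it
# converges a.s.; a single long run settles near some random limit.
random.seed(2)
red, green = 1, 1
trace = []
for _ in range(100_000):
    if random.random() < red / (red + green):
        red += 1
    else:
        green += 1
    trace.append(red / (red + green))

# Late values barely move: each step can change the fraction by at most
# 1/(n + 2), so the tail of the trace is nearly flat.
spread = max(trace[-1000:]) - min(trace[-1000:])
print(trace[-1], spread)
```

Different runs (different seeds) settle near different limits, which is exactly what the martingale convergence theorem permits: the limit is random, not a constant.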


Problems

I How many primary candidates ever get above twenty percent in expected probability of victory? (Asked by Aldous.)

I Suppose that you expect to get married once during your life. How many people do you expect will reach the point that you would say you have a twenty-five percent chance to marry them?

I Compute the probability that a continuously updated conditional probability reaches a before b.
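For the last bullet, one standard argument (a sketch; it assumes the process is a continuous martingale starting at x with a < x < b, that optional stopping applies, and that the hitting time T of {a, b} is a.s. finite):

```latex
% X_t a continuous martingale, X_0 = x, a < x < b, T = \inf\{t : X_t \in \{a,b\}\}.
x = \mathbb{E} X_T = a\,\mathbb{P}(X_T = a) + b\,\bigl(1 - \mathbb{P}(X_T = a)\bigr)
\quad\Longrightarrow\quad
\mathbb{P}(\text{reach } a \text{ before } b) = \frac{b - x}{b - a}.
```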


Wald

I Wald’s equation: Let Xi be i.i.d. with E|Xi| < ∞. If N is a stopping time with EN < ∞ then ESN = EX1 · EN.

I Wald’s second equation: Let Xi be i.i.d. with EXi = 0 and EXi² = σ² < ∞. If N is a stopping time with EN < ∞ then ESN² = σ²EN.
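Both identities can be sanity-checked on a small assumed example (not from the slides): fair ±1 steps, so EX1 = 0 and σ² = 1, with N the first hitting time of {−1, 2} from 0. First-step analysis gives P(hit 2 first) = 1/3 and EN = 2, so ESN = 0 = EX1 · EN and ESN² = 2 = σ²EN.

```python
# First-step analysis for fair +/-1 steps started at 0, absorbed at {lo, hi}:
#   p(x) = P(hit hi before lo) satisfies p(x) = (p(x-1) + p(x+1)) / 2,
#   h(x) = E[absorption time]  satisfies h(x) = 1 + (h(x-1) + h(x+1)) / 2,
# with p(lo) = 0, p(hi) = 1, h(lo) = h(hi) = 0. Solve by Gauss-Seidel sweeps.
def hit_prob_and_time(lo=-1, hi=2, start=0, sweeps=2000):
    p = {x: 1.0 if x == hi else 0.0 for x in range(lo, hi + 1)}
    h = {x: 0.0 for x in range(lo, hi + 1)}
    for _ in range(sweeps):
        for x in range(lo + 1, hi):
            p[x] = 0.5 * (p[x - 1] + p[x + 1])
            h[x] = 1.0 + 0.5 * (h[x - 1] + h[x + 1])
    return p[start], h[start]
```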


Wald applications to SRW

I S0 = a ∈ Z and at each time step Sj independently changes by ±1 according to a fair coin toss. Fix A ∈ Z and let N = inf{k : Sk ∈ {0, A}}. What is ESN?

I What is EN?
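One standard way to answer both questions (a sketch assuming 0 ≤ a ≤ A; optional stopping is applied to the martingales Sn and Sn² − n, which is justified since EN < ∞):

```latex
% Optional stopping for S_n:
a = \mathbb{E} S_N = A\,\mathbb{P}(S_N = A)
\;\Longrightarrow\; \mathbb{P}(S_N = A) = a/A, \qquad \mathbb{E} S_N = a.
% Optional stopping for S_n^2 - n:
a^2 = \mathbb{E}\bigl(S_N^2 - N\bigr) = A^2 \cdot \frac{a}{A} - \mathbb{E} N
\;\Longrightarrow\; \mathbb{E} N = a(A - a).
```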


Outline

Conditional expectation

Regular conditional probabilities

Martingales

Arcsin law, other SRW stories


Reflection principle

I How many walks from (0, x) to (n, y) don’t cross the horizontal axis?

I Try counting walks that do cross by giving a bijection to walks from (0, −x) to (n, y).
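A brute-force sanity check of the count (a sketch; conventions assumed: n-step ±1 walks, x, y > 0, and “cross” meaning “hit 0 at some time”): reflecting the segment before the first visit to 0 gives the bijection, so the number of walks avoiding 0 is C(n, (n+y−x)/2) − C(n, (n+y+x)/2).

```python
from itertools import product
from math import comb

def c(n, k):
    """Binomial coefficient, 0 outside the valid range."""
    return comb(n, k) if 0 <= k <= n else 0

def count_walks(n, x, y, avoid_zero):
    """Count +/-1 walks of n steps from height x to height y; if avoid_zero,
    every position along the way (including the endpoint) must be nonzero."""
    total = 0
    for steps in product((-1, 1), repeat=n):
        pos, ok = x, True
        for s in steps:
            pos += s
            if avoid_zero and pos == 0:
                ok = False
                break
        total += int(ok and pos == y)
    return total

def reflection_formula(n, x, y):
    """All walks from x to y, minus those hitting 0; by reflection the latter
    equal the number of unrestricted walks from -x to y."""
    if (n + y - x) % 2:
        return 0
    return c(n, (n + y - x) // 2) - c(n, (n + y + x) // 2)
```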


Ballot Theorem

I Suppose that in an election candidate A gets α votes and B gets β < α votes. What is the probability that A is ahead throughout the counting?

I Answer: (α − β)/(α + β). Can be proved using the reflection principle.
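The formula is easy to check by enumeration for small vote counts (an illustrative sketch): list all distinct counting orders of α A-votes and β B-votes and compute the fraction in which A leads strictly after every ballot.

```python
from fractions import Fraction
from itertools import permutations

# Brute-force check of the ballot theorem: A must be strictly ahead
# after each ballot is counted.
def ballot_probability(alpha, beta):
    orders = set(permutations("A" * alpha + "B" * beta))
    good = 0
    for order in orders:
        lead = 0
        for v in order:
            lead += 1 if v == "A" else -1
            if lead <= 0:          # tied or behind at some point: discard
                break
        else:
            good += 1
    return Fraction(good, len(orders))
```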


Arcsin theorem

I Arcsin theorem for the last hitting time of zero.

I Arcsin theorem for the amount of time spent positive.
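For the last-zero statement there is a discrete version one can verify directly (a sketch; conventions assumed): for a 2n-step simple random walk from 0, P(last zero occurs at time 2k) = u2k · u2n−2k, where u2m = C(2m, m)/2^{2m} = P(S2m = 0); rescaling k/n gives the arcsin law.

```python
from fractions import Fraction
from itertools import product
from math import comb

def u(m):
    """u_{2m} = P(S_{2m} = 0) = C(2m, m) / 4^m, as an exact fraction."""
    return Fraction(comb(2 * m, m), 4 ** m)

def last_zero_distribution(n):
    """Exact distribution of the last time <= 2n at which the walk is at 0,
    by enumerating all 4^n equally likely 2n-step paths. Entry k is
    P(last zero is at time 2k)."""
    counts = [0] * (n + 1)
    for steps in product((-1, 1), repeat=2 * n):
        pos, last = 0, 0
        for t, s in enumerate(steps, start=1):
            pos += s
            if pos == 0:
                last = t       # zeros occur only at even times
        counts[last // 2] += 1
    return [Fraction(c, 4 ** n) for c in counts]
```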
