
Page 1:

Stochastic Target Games with Controlled Loss

B. Bouchard

Ceremade - Univ. Paris-Dauphine, and Crest - Ensae-ParisTech

USC, February 2013

Joint work with L. Moreau (ETH-Zürich) and M. Nutz (Columbia)

Page 2:

Plan of the talk

• Problem formulation
• Examples
• Assumptions for this talk
• Geometric dynamic programming
• Application to the monotone case


Page 7:

Problem formulation and Motivations

Page 8:

Problem formulation

Provide a PDE characterization of the viability sets

Λ(t) := {(z, m) : ∃ u ∈ U s.t. E[ℓ(Z^{u[ϑ],ϑ}_{t,z}(T))] ≥ m ∀ ϑ ∈ V}

in which:

• V is a set of admissible adverse controls
• U is a set of admissible strategies
• Z^{u[ϑ],ϑ}_{t,z} is an adapted R^d-valued process such that Z^{u[ϑ],ϑ}_{t,z}(t) = z
• ℓ is a given loss/utility function
• m is a threshold.
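To make the viability criterion concrete, here is a toy Monte Carlo sketch (not from the talk; the dynamics, loss function, and finite family of constant adverse controls are all made up for illustration). It estimates the worst expected loss over the adverse controls and checks it against a threshold m:

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_loss(u, vol, n_paths=100_000, n_steps=50, T=1.0, z0=0.0):
    """Euler-Maruyama estimate of E[loss(Z_T)] for the toy dynamics
    dZ = u dt + vol dW (constant strategy u, constant adverse control vol)."""
    dt = T / n_steps
    z = np.full(n_paths, z0)
    for _ in range(n_steps):
        z += u * dt + vol * np.sqrt(dt) * rng.standard_normal(n_paths)
    # shortfall-type loss, as in the expected-loss example: loss(z) = -max(-z, 0)
    return np.mean(-np.maximum(-z, 0.0))

# worst case over a small (finite, constant) family of adverse controls
u, adverse, m = 0.5, [0.1, 0.2, 0.4], -0.2
J = min(expected_loss(u, v) for v in adverse)
print(J >= m)  # the pair (z0, m) is viable for this u if J >= m; here True
```

In the actual problem the infimum runs over all predictable adverse controls and u is a non-anticipating strategy; the sketch only illustrates the shape of the criterion.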

Page 9:

Application in finance

□ Z^{u[ϑ],ϑ}_{t,z} = (X^{u[ϑ],ϑ}_{t,x}, Y^{u[ϑ],ϑ}_{t,x,y}) where

• X^{u[ϑ],ϑ}_{t,x} models financial assets or factors, with dynamics depending on ϑ
• Y^{u[ϑ],ϑ}_{t,x,y} models a wealth process
• ϑ is the control of the market: parameter uncertainty (e.g. volatility), adverse players, etc.
• u[ϑ] is the financial strategy given the past observations of ϑ.

□ Flexible enough to embed constraints, transaction costs, market impact, etc.

Page 10:

Examples

Λ(t) := {(z, m) : ∃ u ∈ U s.t. E[ℓ(Z^{u[ϑ],ϑ}_{t,z}(T))] ≥ m ∀ ϑ ∈ V}

□ Expected loss control for ℓ(z) = −[y − g(x)]⁻ (writing z = (x, y)).

□ Gives meaning to problems that would be degenerate under P-a.s. constraints: B. and Dang (guaranteed VWAP pricing).

Page 11:

Examples

□ Constraint in probability:

Λ(t) := {(z, m) : ∃ u ∈ U s.t. P[Z^{u[ϑ],ϑ}_{t,z}(T) ∈ O] ≥ m ∀ ϑ ∈ V}

for ℓ(z) = 1_{z∈O}, m ∈ (0, 1).
⇒ Quantile hedging in finance for O := {y ≥ g(x)}.
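The gap between quantile hedging and super-hedging can be seen in a one-period binomial toy model (no adverse control, i.e. V is a singleton; all numbers are made up for illustration, this is not a model from the talk). A grid search finds the smallest initial capital y for which some hedge ratio reaches the success probability m:

```python
import numpy as np

# one-period binomial toy: X moves from X0 = 1 to 1.2 or 0.9, each with prob 1/2;
# claim g(x) = max(x - 1, 0); terminal wealth Y_T = y + delta * (X_T - X0)
up, down, X0 = 1.2, 0.9, 1.0
payoff = {up: 0.2, down: 0.0}

def success_prob(y, delta):
    """P[Y_T >= g(X_T)] for initial capital y and hedge ratio delta."""
    return sum(0.5 for x in (up, down)
               if y + delta * (x - X0) >= payoff[x] - 1e-12)

def min_capital(m, ys=np.linspace(0, 0.3, 301), deltas=np.linspace(-2, 2, 401)):
    """Smallest y on the grid with some delta achieving P[Y_T >= g(X_T)] >= m."""
    for y in ys:
        if any(success_prob(y, d) >= m for d in deltas):
            return round(float(y), 3)

print(min_capital(1.0))  # super-hedging (m = 1): must cover both states
print(min_capital(0.5))  # quantile hedging (m = 1/2): one state suffices
```

With m = 1 both states must be covered (here the grid recovers y ≈ 1/15), while with m = 1/2 zero capital already succeeds in the down state, illustrating why the constraint-in-probability formulation is strictly cheaper.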

Page 12:

Examples

□ Matching a P&L distribution = multiple constraints in probability:

{∃ u ∈ U s.t. P[dist(Z^{u[ϑ],ϑ}_{t,z}(T), O) ≤ γ_i] ≥ m_i ∀ i ≤ I, ∀ ϑ ∈ V}

(see B. and Thanh Nam)

Page 13:

Examples

□ Almost sure constraint:

Λ(t) := {z : ∃ u ∈ U s.t. Z^{u[ϑ],ϑ}_{t,z}(T) ∈ O P-a.s. ∀ ϑ ∈ V}

for ℓ(z) = 1_{z∈O}, m = 1.
⇒ Super-hedging in finance for O := {y ≥ g(x)}.

Unfortunately not covered by our assumptions...

Page 14:

Setting for this talk (see the paper for an abstract version)

Page 15:

Brownian diffusion setting

□ State process: Z^{u[ϑ],ϑ} solves (µ and σ continuous, uniformly Lipschitz in space)

Z(s) = z + ∫_t^s µ(Z(r), u[ϑ]_r, ϑ_r) dr + ∫_t^s σ(Z(r), u[ϑ]_r, ϑ_r) dW_r.

□ Controls and strategies:

• V is the set of predictable processes with values in V ⊂ R^d.
• U is the set of non-anticipating maps u : ϑ ∈ V ↦ U, i.e.

{ϑ¹ = ϑ² on (0, s]} ⊂ {u[ϑ¹] = u[ϑ²] on (0, s]} ∀ ϑ¹, ϑ² ∈ V, s ≤ T,

where U is the set of predictable processes with values in U ⊂ R^d.

□ The loss function ℓ is continuous and has polynomial growth.
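The non-anticipativity condition on strategies can be checked numerically in a discrete-time toy (all choices below — the running-average strategy, the uniform adverse controls, the split step — are made up for illustration): two adverse controls that agree up to step s must produce strategies that agree up to step s.

```python
import numpy as np

rng = np.random.default_rng(1)

def strategy(theta_path):
    """A non-anticipating map u[theta]: the control at step k depends only on
    theta_0, ..., theta_k (here, a running average of the adverse control)."""
    return np.cumsum(theta_path) / np.arange(1, len(theta_path) + 1)

n, s = 100, 60
theta1 = rng.uniform(0.1, 0.3, n)
theta2 = theta1.copy()
theta2[s:] = rng.uniform(0.1, 0.3, n - s)  # the adverse controls split after step s

u1, u2 = strategy(theta1), strategy(theta2)
# {theta1 = theta2 on (0, s]} implies {u[theta1] = u[theta2] on (0, s]}:
print(np.allclose(u1[:s], u2[:s]))     # True: agreement up to s is preserved
print(np.array_equal(u1[s:], u2[s:]))  # False: the strategies may differ after s
```

Any strategy that peeked at future values of ϑ (e.g. a centered moving average) would violate the first check.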


Page 21:

The game problem

□ Worst expected loss for a given strategy:

J(t, z, u) := ess inf_{ϑ∈V} E[ℓ(Z^{u[ϑ],ϑ}_{t,z}(T)) | F_t].

□ The viability sets are given by

Λ(t) := {(z, m) : ∃ u ∈ U s.t. J(t, z, u) ≥ m P-a.s.}.

Compare with the formulation of games in Buckdahn and Li (2008).


Page 23:

Geometric dynamic programming principle

How are the properties

(z, m) ∈ Λ(t)   and   (Z^{u[ϑ],ϑ}_{t,z}(θ), ?) ∈ Λ(θ)

related?

Page 24:

□ First direction: Take (z, m) ∈ Λ(t) and u ∈ U such that

ess inf_{ϑ∈V} E[ℓ(Z^{u[ϑ],ϑ}_{t,z}(T)) | F_t] ≥ m.

To take care of the evolution of the worst-case-scenario conditional expectation, we introduce

S^ϑ_r := ess inf_{ϑ̃∈V} E[ℓ(Z^{u[ϑ⊕_r ϑ̃], ϑ⊕_r ϑ̃}_{t,z}(T)) | F_r].

Then

S^ϑ is a submartingale and S^ϑ_t ≥ m for all ϑ ∈ V,

and we can find a martingale M^ϑ such that

S^ϑ ≥ M^ϑ and M^ϑ_t = S^ϑ_t ≥ m.


Page 28:

□ We have, for all stopping times θ (which may depend on u and ϑ),

ess inf_{ϑ̃∈V} E[ℓ(Z^{u[ϑ⊕_θ ϑ̃], ϑ⊕_θ ϑ̃}_{t,z}(T)) | F_θ] = S^ϑ_θ ≥ M^ϑ_θ.

If the above is usc in space, uniformly in the strategies, and θ takes finitely many values, we can find a covering formed of balls B_i ∋ (t_i, z_i), i ≥ 1, such that

J(t_i, z_i; u) ≥ M^ϑ_θ − ε on A_i := {(θ, Z^{u[ϑ],ϑ}_{t,z}(θ)) ∈ B_i}.

Hence

K(t_i, z_i) ≥ M^ϑ_θ − ε on A_i,

where

K(t_i, z_i) := ess sup_{u∈U} ess inf_{ϑ∈V} E[ℓ(Z^{u[ϑ],ϑ}_{t_i,z_i}(T)) | F_{t_i}]

is deterministic.

□ If K is lsc,

K(θ(ω), Z^{u[ϑ],ϑ}_{t,z}(θ)(ω)) ≥ K(t_i, z_i) − ε ≥ M^ϑ_θ(ω) − 2ε on A_i,

so that

(Z^{u[ϑ],ϑ}_{t,z}(θ(ω)), M^ϑ_θ(ω) − 3ε) ∈ Λ(θ(ω)).


Page 33:

□ To get rid of ε, and to handle non-regular cases (in terms of K and J), we work by approximation: one needs to start from (z, m − ι) ∈ Λ(t) and obtain

(Z^{u[ϑ],ϑ}_{t,z}(θ), M^ϑ_θ) ∈ Λ̄(θ) P-a.s. ∀ ϑ ∈ V,

where

Λ̄(t) := {(z, m) : there exist (t_n, z_n, m_n) → (t, z, m) such that (z_n, m_n) ∈ Λ(t_n) and t_n ≥ t for all n ≥ 1}.

Remark: M^ϑ = M^{α^ϑ}_{t,m} := m + ∫_t^· α^ϑ_s dW_s, with α^ϑ ∈ A, the set of predictable processes such that the above is a martingale.

Remark: The bounded-variation part of S is useless: the optimal adverse-player control should turn S into a martingale.
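The processes M^{α}_{t,m} in the remark are plain stochastic integrals started at m, so their expectation never moves. A quick numerical sanity check of this (toy bounded α and all parameters made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

def martingale_paths(m, alpha, n_paths=200_000, n_steps=50, T=1.0):
    """Euler scheme for M = m + int alpha_s dW_s: a stochastic integral against
    Brownian motion, hence a martingale with E[M_s] = m at every time
    (alpha is predictable; here a bounded function of the current state)."""
    dt = T / n_steps
    M = np.full(n_paths, float(m))
    for _ in range(n_steps):
        # alpha(M) is evaluated before the increment, so it is predictable
        M += alpha(M) * np.sqrt(dt) * rng.standard_normal(n_paths)
    return M

M_T = martingale_paths(m=0.3, alpha=lambda m: np.minimum(np.abs(m), 1.0))
print(abs(M_T.mean() - 0.3) < 0.01)  # True: the expectation stays at m
```

This constant-expectation property is what lets the threshold m be carried along the dynamics as an extra controlled state.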


Page 36:

□ Reverse direction: Assume that {θ^ϑ, ϑ ∈ V} takes finitely many values and that

(Z^{u[ϑ],ϑ}_{t,z}(θ^ϑ), M^{α^ϑ}_{t,m}(θ^ϑ)) ∈ Λ(θ^ϑ) P-a.s. ∀ ϑ ∈ V.

Then

K(θ^ϑ, Z^{u[ϑ],ϑ}_{t,z}(θ^ϑ)) ≥ M^{α^ϑ}_{t,m}(θ^ϑ).

Play again with balls + concatenation of strategies (assuming smoothness) to obtain ū ∈ U such that

E[ℓ(Z^{(u⊕_{θ^ϑ} ū)[ϑ],ϑ}_{t,z}(T)) | F_{θ^ϑ}] ≥ M^{α^ϑ}_{t,m}(θ^ϑ) − ε P-a.s. ∀ ϑ ∈ V.

By taking expectations,

E[ℓ(Z^{(u⊕_{θ^ϑ} ū)[ϑ],ϑ}_{t,z}(T)) | F_t] ≥ m − ε P-a.s. ∀ ϑ ∈ V,

so that

(z, m − ε) ∈ Λ(t).


Page 41:

□ To cover the general case by approximations, we need to start with

(Z^{u[ϑ],ϑ}_{t,z}(θ^ϑ), M^{α^ϑ}_{t,m}(θ^ϑ)) ∈ Λ^ι(θ^ϑ) P-a.s. ∀ ϑ ∈ V,

where

Λ^ι(t) := {(z, m) : (t′, z′, m′) ∈ B_ι(t, z, m) implies (z′, m′) ∈ Λ(t′)}.

□ To concatenate while keeping the non-anticipative feature:

• Martingale strategies: α^ϑ is replaced by a map a : ϑ ∈ V ↦ a[ϑ] ∈ A chosen in a non-anticipating way (corresponding set A).
• Non-anticipating stopping times: θ[ϑ] (typically the first exit time of (Z^{u[ϑ],ϑ}, M^{a[ϑ]}) from a ball).


Page 43:

The geometric dynamic programming principle

(GDP1): If (z, m − ι) ∈ Λ(t) for some ι > 0, then there exist u ∈ U and {α^ϑ, ϑ ∈ V} ⊂ A such that

(Z^{u[ϑ],ϑ}_{t,z}(θ), M^{α^ϑ}_{t,m}(θ)) ∈ Λ̄(θ) P-a.s. ∀ ϑ ∈ V.

(GDP2): If (u, a) ∈ U × A and ι > 0 are such that

(Z^{u[ϑ],ϑ}_{t,z}(θ[ϑ]), M^{a[ϑ]}_{t,m}(θ[ϑ])) ∈ Λ^ι(θ[ϑ]) P-a.s. ∀ ϑ ∈ V

for some family (θ[ϑ], ϑ ∈ V) of non-anticipating stopping times, then

(z, m − ε) ∈ Λ(t) ∀ ε > 0.

Remark: Relaxed version of Soner and Touzi (2002) and B., Elie and Touzi (2009).

Page 44:

Application to the monotone case

Page 45:

□ Monotone case: Z^{u[ϑ],ϑ}_{t,x,y} = (X^{u[ϑ],ϑ}_{t,x}, Y^{u[ϑ],ϑ}_{t,x,y}) with values in R^d × R, with X^{u[ϑ],ϑ}_{t,x} independent of y and Y^{u[ϑ],ϑ}_{t,x,y} nondecreasing in y.

□ The value function is

ϖ(t, x, m) := inf{y : (x, y, m) ∈ Λ(t)}.

□ Remark:

• If ϕ is lsc and ϖ ≥ ϕ, then Λ(t) ⊂ {(x, y, m) : y ≥ ϕ(t, x, m)}.
• If ϕ is usc and ϕ ≥ ϖ, then Λ^ι(t) ⊃ {(x, y, m) : y ≥ ϕ(t, x, m) + η_ι} for some η_ι > 0.


Page 50:

Thm:

(GDP1): Let ϕ ∈ C⁰ be such that argmin(ϖ_∗ − ϕ) = (t, x, m). Assume that y > ϖ(t, x, m − ι) for some ι > 0. Then, there exists (u, a) ∈ U × A such that

Y^{u[ϑ],ϑ}_{t,x,y}(θ^ϑ) ≥ ϕ(θ^ϑ, X^{u[ϑ],ϑ}_{t,x}(θ^ϑ), M^{a[ϑ]}_{t,m}(θ^ϑ)) P-a.s. ∀ ϑ ∈ V.

(GDP2): Let ϕ ∈ C⁰ be such that argmax(ϖ^∗ − ϕ) = (t, x, m). Assume that (u, a) ∈ U × A and η > 0 are such that

Y^{u[ϑ],ϑ}_{t,x,y}(θ[ϑ]) ≥ ϕ(θ[ϑ], X^{u[ϑ],ϑ}_{t,x}(θ[ϑ]), M^{a[ϑ]}_{t,m}(θ[ϑ])) + η P-a.s. ∀ ϑ ∈ V.

Then

y ≥ ϖ(t, x, m − ε) ∀ ε > 0.

Remark: In the spirit of the weak dynamic programming principle (B. and Touzi, and B. and Nutz).


Page 54:

PDE characterization - "waving hands" version

□ Assuming smoothness, existence of optimal strategies...

□ y = ϖ(t, x, m) implies Y^{u[ϑ],ϑ}(t+) ≥ ϖ(t+, X^{u[ϑ],ϑ}(t+), M^{a[ϑ]}(t+)) for all ϑ.

This implies dY^{u[ϑ],ϑ}(t) ≥ dϖ(t, X^{u[ϑ],ϑ}(t), M^{a[ϑ]}(t)) for all ϑ.

Hence, for all ϑ,

µ_Y(x, y, u[ϑ]_t, ϑ_t) ≥ L^{u[ϑ]_t, ϑ_t, a[ϑ]_t}_{X,M} ϖ(t, x, m)

σ_Y(x, y, u[ϑ]_t, ϑ_t) = σ_X(x, u[ϑ]_t, ϑ_t) D_x ϖ(t, x, m) + a[ϑ]_t D_m ϖ(t, x, m)

with y = ϖ(t, x, m).


Page 57:

PDE characterization - "waving hands" version

□ Supersolution property:

inf_{v∈V} sup_{(u,a)∈N^v ϖ} ( µ_Y(·, ϖ, u, v) − L^{u,v,a}_{X,M} ϖ ) ≥ 0,

where

N^v ϖ := {(u, a) ∈ U × R^d : σ_Y(·, ϖ, u, v) = σ_X(·, u, v) D_x ϖ + a D_m ϖ}.

□ Subsolution property:

sup_{(u[·],a[·])∈N^{[·]} ϖ} inf_{v∈V} ( µ_Y(·, ϖ, u[v], v) − L^{u[v],v,a[v]}_{X,M} ϖ ) ≤ 0,

where

N^{[·]} ϖ := {locally Lipschitz (u[·], a[·]) s.t. (u[v], a[v]) ∈ N^v ϖ for all v ∈ V}.


Page 59:

References

□ R. Buckdahn and J. Li (2008). Stochastic differential games and viscosity solutions of Hamilton-Jacobi-Bellman-Isaacs equations. SIAM Journal on Control and Optimization, 47(1), 444-475.
□ H. M. Soner and N. Touzi (2002). Dynamic programming for stochastic target problems and geometric flows. Journal of the European Mathematical Society, 4, 201-236.
□ B., R. Elie and N. Touzi (2009). Stochastic target problems with controlled loss. SIAM Journal on Control and Optimization, 48(5), 3123-3150.
□ B. and M. Nutz (2012). Weak dynamic programming for generalized state constraints. SIAM Journal on Control and Optimization, to appear.
□ L. Moreau (2011). Stochastic target problems with controlled loss in a jump diffusion model. SIAM Journal on Control and Optimization, 49, 2577-2607.
□ B. and N. M. Dang (2010). Generalized stochastic target problems for pricing and partial hedging under loss constraints - Application in optimal book liquidation. Finance and Stochastics, online.
□ B. and V. Thanh Nam (2011). A stochastic target approach for P&L matching problems. Mathematics of Operations Research, 37(3), 526-558.