THÈSE, Université du Maine (Le Mans), cyberdoc.univ-lemans.fr/theses/2005/2005LEMA1004.pdf
UNIVERSITÉ DU MAINE
Laboratoire de Statistique et Processus

THESIS

submitted for the degree of
DOCTEUR DE L'UNIVERSITÉ DU MAINE
Discipline: Mathematics

presented and publicly defended by
Mingyu XU
on Friday, 14 October 2005

Contributions to the Study of
Reflected Backward Stochastic Differential Equations
and Applications to
Partial Differential Equations

before a jury composed of:

Vlad BALLY, Professor at the Université de Marne-la-Vallée (referee)
Nicole EL KAROUI, Professor at the École Polytechnique (referee)
Saïd HAMADÈNE, Professor at the Université du Maine (examiner)
Jean-Pierre LEPELTIER, Professor at the Université du Maine (thesis advisor)
Anis MATOUSSI, Maître de conférences at the Université du Maine (examiner)
Jean MÉMIN, Professor at the Université de Rennes I (chair)
Acknowledgements

First and foremost, I owe my deepest gratitude to my thesis advisor, Jean-Pierre Lepeltier, for his advice, his great patience, and the generosity with which he shared his work and his ideas with me. The pertinence of his questions and remarks always motivated me and guided my research. I thank him in particular for trusting me and inviting me to come to France to work with him.

I express my most sincere thanks to Shige Peng, who opened the doors of research to me. The richness of his thinking and his encouragement helped me greatly in my work. The clarity of his explanations and his rigor were invaluable throughout my work.

I am deeply touched that Vlad Bally and Nicole El Karoui did me the honor of refereeing this thesis. I am grateful for the interest they showed in my work. Their writings on the theory of backward stochastic differential equations were a precious aid.

I wish to express all my gratitude to Jean Mémin, who agreed to sit on the jury of this thesis, for his collaboration on the numerical part of my thesis. I benefited greatly from his advice and help over several years. I will never forget my stays in Bergerac.

I warmly thank Anis Matoussi, who guided me in the field of PDEs, for his enriching collaboration on that part of my thesis. I benefited enormously from his discussions, his work, and his enlightened suggestions for my research.

I also thank Saïd Hamadène and Laurent Denis for kindly agreeing to take part in the jury, and for everything they have done for me.

I thank all the members of the mathematics department of the Université du Maine for their welcome and their kindness. A very special thank-you goes to Jacqueline Alix, Ibtissam Hdhiri and Khosrow Fazli.

I salute all those with whom I had the pleasure of talking, sharing a coffee in the common room, and discovering France. Among them I think in particular of Elisabeth, Maya, Ying, Rika, Rong, Chensong and Tao. Those I may have forgotten will, I hope, forgive me.

Finally, I warmly thank my whole family, who supported, encouraged and pushed me throughout my years of study.
Table of Contents

1 Introduction
  1.1 Reflected backward stochastic differential equations with one or two discontinuous (RCLL and L²) barriers
  1.2 Reflected BSDEs with one barrier under monotonicity and general growth conditions in y and a Lipschitz condition in z
  1.3 Reflected BSDEs with one barrier under monotonicity and general growth conditions in y and non-Lipschitz conditions in z
  1.4 Reflected BSDEs with two barriers under monotonicity and general growth conditions in y and a Lipschitz condition in z
  1.5 Sobolev solutions for semilinear PDEs with obstacle under the monotonicity condition
  1.6 Simulation of BSDEs

2 The reflected BSDE with single or double discontinuous barriers
  2.1 Definitions and assumptions for reflected BSDEs with one or two RCLL barriers
  2.2 The penalized BSDEs for the RBSDE with one RCLL barrier and the optimal stopping problem
  2.3 The RBSDE with two RCLL barriers and the Dynkin game
    2.3.1 The existence and uniqueness result
    2.3.2 Application to a mixed game problem
  2.4 Statements and definitions of reflected BSDEs with L²-barriers
    2.4.1 Reflected BSDE with one L²-barrier
    2.4.2 Reflected BSDE with two L²-barriers
  2.5 A generalized monotonic limit theorem for Itô processes
  2.6 Proof of Theorem 2.4.1 through the equivalence between the smallest g-supersolution and the related RBSDE
  2.7 Penalization method for the RBSDE with two obstacles and some basic estimates
  2.8 Proof of Theorem 2.4.3: existence for the RBSDE with two obstacles
  2.9 Penalization from two sides: a convergence result
  2.10 Appendix
    2.10.1 Some remarks on the Snell envelope
    2.10.2 The stochastic game and the Dynkin game problem
    2.10.3 The Dynkin game and the penalization method for the RBSDE with two RCLL barriers

3 Reflected BSDEs under monotonicity and general increasing growth conditions
  3.1 RBSDEs on a fixed finite time interval
    3.1.1 Hypotheses and notations
    3.1.2 Uniqueness of the solution of the RBSDEs
    3.1.3 Existence of the solution of the RBSDEs (the main result; proof of Theorem 3.1.2)
    3.1.4 Some a priori estimates
    3.1.5 Properties of the solution of the RBSDEs (expression of the solution Y; when the barrier is an Itô process; the RBSDE with finite stopping time)
  3.2 Applications to finance
  3.3 RBSDEs with random terminal time
  3.4 Appendix: comparison theorems

4 Reflected BSDEs with continuity and monotonicity in y, and non-Lipschitz conditions in z
  4.1 Notations and assumptions
  4.2 A general case
  4.3 The case f(t, y, z) = |z|^p, with p ∈ [1, 2]
    4.3.1 The case p = 2
    4.3.2 The case p ∈ (1, 2)
  4.4 The case when f is linearly increasing in z
  4.5 Appendix: comparison theorems

5 Reflected backward SDEs with two barriers under a monotonicity condition
  5.1 RBSDEs with two continuous barriers
    5.1.1 Assumptions and notations
    5.1.2 Main results
    5.1.3 Proof of Theorem 5.1.2
  5.2 Appendix: comparison theorems

6 Sobolev solutions for semilinear PDEs with obstacle under a monotonicity condition
  6.1 Notations and preliminaries
  6.2 Stochastic flows and random test functions
  6.3 Solutions in Sobolev spaces for PDEs under a monotonicity condition
  6.4 Sobolev solutions for PDEs with obstacle under a monotonicity condition
  6.5 Appendix: proof of Proposition 6.2.1

7 Numerical algorithms and simulations for BSDEs and reflected BSDEs
  7.1 Discretization and algorithms for BSDEs
    7.1.1 Discretization of BSDEs and numerical schemes
    7.1.2 Convergence of algorithms for BSDEs
  7.2 Simulation results for BSDEs
  7.3 Discretization and algorithms for reflected BSDEs
    7.3.1 Algorithms for reflected BSDEs with one barrier
    7.3.2 Simulation results for reflected BSDEs with one barrier
    7.3.3 Algorithms and simulations for reflected BSDEs with two barriers
  7.4 BSDEs with a constraint on z
  7.5 Appendix: convergence of the algorithm for reflected BSDEs with one barrier
    7.5.1 Estimates for the discrete reflected BSDE with one barrier
    7.5.2 Convergence of the numerical solutions of reflected BSDEs
    7.5.3 Annex
Chapter 1

Introduction

Linear backward stochastic differential equations (BSDEs) were introduced by J.-M. Bismut in 1970. In the general case, Pardoux and Peng were the first to establish the existence and uniqueness of the solution of a BSDE associated with (f, ξ), when f is Lipschitz in (y, z) uniformly in (t, ω), under suitable integrability conditions. More precisely, there exists a unique pair (Y, Z) satisfying

Y_t = ξ + ∫_t^T f(s, Y_s, Z_s) ds − ∫_t^T Z_s dB_s,

where B is a Brownian motion on a probability space (Ω, F, P). Although the theory of BSDEs is recent, it has developed rapidly, and has found numerous applications in control, in finance, and in the theory of partial differential equations (PDEs).
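As a concrete illustration (not taken from the thesis), the backward equation can be solved numerically by backward induction once B is replaced by a binomial random walk. The sketch below uses the illustrative linear generator f(s, y, z) = -c*y and terminal value ξ = B_T², for which the closed form Y_0 = exp(-cT)·T is available to check against.

```python
import math

def solve_bsde_binomial(T=1.0, n=200, c=0.5):
    """Backward induction for Y_t = xi + int_t^T f ds - int_t^T Z dB
    on a binomial tree approximating B.

    Illustrative choices (not from the thesis): f(y) = -c*y, xi = B_T**2,
    for which the closed form Y_0 = exp(-c*T) * T is known.
    """
    dt = T / n
    sdt = math.sqrt(dt)
    # terminal layer: B_T at node j of step n equals (2j - n)*sqrt(dt)
    y = [((2 * j - n) * sdt) ** 2 for j in range(n + 1)]
    for k in range(n - 1, -1, -1):
        # conditional expectation = average of the two children,
        # then one explicit Euler step for the generator term
        y = [0.5 * (y[j] + y[j + 1]) for j in range(k + 1)]
        y = [v + (-c * v) * dt for v in y]
    return y[0]

y0 = solve_bsde_binomial()
exact = math.exp(-0.5) * 1.0
```

With n = 200 time steps the scheme reproduces the closed-form value to a few parts in ten thousand, since the averaging step computes the conditional expectation exactly on the tree.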
I now describe in detail the results obtained in this thesis.
1.1 Reflected backward stochastic differential equations with one or two discontinuous (RCLL and L²) barriers

El Karoui, Kapoudjian, Pardoux, Peng and Quenez introduced the notion of a reflected backward stochastic differential equation with one continuous barrier in 1997. The solution of such an equation, associated with a generator f, a terminal value ξ and a continuous barrier (L_t), is a triple (Y_t, Z_t, K_t)_{0≤t≤T} of square-integrable, progressively measurable processes with values in R^{1+d+1}, satisfying

Y_t = ξ + ∫_t^T f(s, Y_s, Z_s) ds + K_T − K_t − ∫_t^T Z_s dB_s,  0 ≤ t ≤ T, a.s., (1.1)

Y_t ≥ L_t a.s. for every 0 ≤ t ≤ T, where (K_t) is increasing and continuous and (B_t) is a d-dimensional Brownian motion. The role of (K_t) is to push the process Y upwards in a minimal way, so that it stays above the barrier L. It follows that one must have

∫_0^T (Y_s − L_s) dK_s = 0. (1.2)

The authors propose two methods. The first is a Picard iteration, in which each step solves an optimal stopping problem. The second constructs the solution as the limit of penalized problems; at each step, the solution (Y^n, Z^n) is that of a classical BSDE. The comparison theorem for solutions of BSDEs (Pardoux, E. and Peng, S., 1990 [58]) yields the convergence of the sequence (Y^n). For the sequence (Z^n), the continuity of the barrier (L_t) is essential in the technique used (Lemma 6.1), together with an application of Dini's theorem.
At about the same time, Cvitanic and Karatzas (Cvitanic, J. and Karatzas, I., 1996) introduced the notion of a reflected backward stochastic differential equation with two continuous barriers. In that case, a triple (Y_t, Z_t, K_t)_{0≤t≤T} is a solution of this equation, associated with a generator f, a terminal value ξ, a continuous lower barrier (L_t) and a continuous upper barrier (U_t) such that L_t ≤ U_t and L_T ≤ ξ ≤ U_T a.s., if it is square integrable and satisfies

Y_t = ξ + ∫_t^T f(s, Y_s, Z_s) ds + K_T − K_t − ∫_t^T Z_s dB_s,  0 ≤ t ≤ T, a.s., (1.3)

L_t ≤ Y_t ≤ U_t a.s. for every 0 ≤ t ≤ T, where (K_t) is a finite-variation process, K_t = K_t^+ − K_t^−, with K^+, K^− increasing and continuous. The role of (K_t) is to keep the process Y between L and U, in the sense that one must have

∫_0^T (Y_s − L_s) dK_s^+ = 0,  ∫_0^T (Y_s − U_s) dK_s^− = 0. (1.4)

The proof is again based on a Picard iteration, where each step amounts to solving a Dynkin game. A penalization method is then developed under a strong condition requiring the barriers to be uniformly approximated by Itô processes.

In 2004, J.-P. Lepeltier and J. San Martín relaxed the condition on the barriers, proving existence by a penalization method under the sole assumption that there exists, between L and U, a semimartingale with terminal value ξ.
For our part, we consider reflected BSDEs with one or two right-continuous left-limited (RCLL) barriers. The solution of such an equation is again a triple (Y_t, Z_t, K_t)_{0≤t≤T} of progressively measurable processes with values in R^{1+d+1}, satisfying (1.1) together with Y_t ≥ L_t a.s., or with L_t ≤ Y_t ≤ U_t a.s. for every 0 ≤ t ≤ T, respectively. The process Y is then only RCLL. As before, the role of (K_t) (with K = K^+ − K^− in the two-barrier case) is to keep the process Y above the barrier L (one-barrier case), or between L and U (two-barrier case), in a minimal sense. It is then natural to replace (1.2) by

∫_0^T (Y_{s−} − L_{s−}) dK_s = 0 (1.5)

in the one-barrier case, and to replace (1.4) by

∫_0^T (Y_{s−} − L_{s−}) dK_s^+ = 0  and  ∫_0^T (Y_{s−} − U_{s−}) dK_s^− = 0 (1.6)

in the two-barrier case.

In 2000, S. Hamadène, using a Picard iteration, proved the existence and uniqueness of the solution of the reflected BSDE with one barrier. Using the same technique, we obtained the analogous result for the reflected BSDE with two barriers. We then applied the penalization method in the one-barrier case. One considers the solutions (Y^n, Z^n, K^n) of the equations

Y_t^n = ξ + ∫_t^T f(s, Y_s^n, Z_s^n) ds + n ∫_t^T (Y_s^n − L_s)^− ds − ∫_t^T Z_s^n dB_s,

where K_t^n = n ∫_0^t (Y_s^n − L_s)^− ds, viewed as solutions of reflected BSDEs with barrier L_t − (Y_t^n − L_t)^−. Thanks to Peng's monotonic limit theorem (Peng, S., 1999), one obtains the strong convergence in H_d^p (for p < 2) of the sequence (Z^n), which is sufficient to pass to the limit in the penalized equations. One then shows that Y satisfies

Y_t = ess sup_{τ ∈ T_t} E[∫_t^τ f(s, Y_s, Z_s) ds + L_τ 1_{τ<T} + ξ 1_{τ=T} | F_t],

and, using the properties of the Snell envelope, that (1.5) holds. Hence the limit (Y, Z, K) is the solution of the reflected BSDE with an RCLL barrier L.
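The one-barrier penalization step can be illustrated numerically on the binomial approximation of B. This is an illustrative sketch, not the thesis's scheme: f = 0, ξ = B_T and a constant barrier L = 0 are assumed. The penalty term n(Y^n − L)^− is treated implicitly in the backward step, and the result is compared with the direct reflected (Snell envelope) recursion y = max(L, E[y']).

```python
import math

def reflected_vs_penalized(T=1.0, n=200, pen=1e4):
    """Penalization scheme vs. direct reflected (Snell) scheme on a tree.

    Illustrative data (not from the thesis): f = 0, xi = B_T, barrier L = 0.
    The penalty step solves y = E[y'] + pen*dt*(y - L)^- implicitly:
    if E[y'] < L then y = (E[y'] + pen*dt*L) / (1 + pen*dt).
    """
    dt = T / n
    sdt = math.sqrt(dt)
    term = [(2 * j - n) * sdt for j in range(n + 1)]  # B_T on the tree
    y_pen = list(term)
    y_ref = list(term)
    L = 0.0
    for k in range(n - 1, -1, -1):
        # direct reflection: Snell envelope recursion
        y_ref = [max(L, 0.5 * (y_ref[j] + y_ref[j + 1])) for j in range(k + 1)]
        nxt = []
        for j in range(k + 1):
            ey = 0.5 * (y_pen[j] + y_pen[j + 1])
            # implicit penalization: pushes y up only when it falls below L
            nxt.append(ey if ey >= L else (ey + pen * dt * L) / (1.0 + pen * dt))
        y_pen = nxt
    return y_pen[0], y_ref[0]

y_pen0, y_ref0 = reflected_vs_penalized()
```

For this data the reflected value at time 0 is close to E[B_T^+] = sqrt(T/(2π)) ≈ 0.399, and the penalized value approaches it from below as the penalty parameter grows.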
In the case of two RCLL barriers, the penalized equations are

Y_t^{m,n} = ξ + ∫_t^T f(s, Y_s^{m,n}, Z_s^{m,n}) ds + n ∫_t^T (Y_s^{m,n} − L_s)^− ds − m ∫_t^T (U_s − Y_s^{m,n})^− ds − ∫_t^T Z_s^{m,n} dB_s.

In an analogous way, the solutions of the penalized equations are viewed as solutions of reflected BSDEs with two barriers. A generalization of the monotonic limit theorem allows one to pass to the limit in the equations. The representation of the solutions via Dynkin games then yields (1.6). The limit (Y, Z, K) is therefore the solution of the equation.
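The two-sided penalization above can also be sketched numerically (illustrative data, not from the thesis: constant generator f = 1, constant barriers L < U, and terminal value ξ clipped into [L, U]). The drift pushes Y against the upper barrier, so the upper penalty, the discrete analogue of the increasing process K^−, is the active one; the result is compared with the direct projection of the continuation value onto [L, U].

```python
import math

def doubly_reflected(T=1.0, n=200, pen=1e4, lo=-0.3, up=0.3):
    """Two-sided penalization vs. direct projection between two barriers.

    Illustrative data (not from the thesis): f = 1, constant barriers
    L = lo < U = up, terminal value xi = clip(B_T) so that L <= xi <= U.
    """
    dt = T / n
    sdt = math.sqrt(dt)
    pd = pen * dt
    clip = lambda v: min(up, max(lo, v))
    y_pen = [clip((2 * j - n) * sdt) for j in range(n + 1)]
    y_ref = list(y_pen)
    viol = 0.0  # worst excursion of the penalized solution outside [lo, up]
    for k in range(n - 1, -1, -1):
        y_ref = [clip(0.5 * (y_ref[j] + y_ref[j + 1]) + dt) for j in range(k + 1)]
        nxt = []
        for j in range(k + 1):
            c = 0.5 * (y_pen[j] + y_pen[j + 1]) + dt  # E[y'] + f*dt
            if c < lo:       # implicit lower penalty  n*(y - L)^-
                c = (c + pd * lo) / (1.0 + pd)
            elif c > up:     # implicit upper penalty  m*(U - y)^-
                c = (c + pd * up) / (1.0 + pd)
            nxt.append(c)
        y_pen = nxt
        viol = max(viol, max(y_pen) - up, lo - min(y_pen))
    return y_pen[0], y_ref[0], viol

y2_pen, y2_ref, viol = doubly_reflected()
```

With pen*dt = 50 the penalized solution overshoots the barriers by at most order 1/pen, and both schemes agree at time 0 to well within one percent of the barrier width.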
In a second work, we generalized this type of result to the case where the barriers are only L², still using a penalization method, together with the theory of g-supersolutions.

The penalization method for one barrier is the subject of an article in the journal Statistics and Probability Letters (with J.-P. Lepeltier). The second work is the subject of an article accepted in the Annales de l'IHP (with S. Peng).
1.2 Reflected BSDEs with one barrier under monotonicity and general growth conditions in y and a Lipschitz condition in z

In 1999, Pardoux studied L² solutions of BSDEs when the generator f(t, ω, y, z) is continuous and satisfies a monotonicity condition and a general growth condition in y, and a Lipschitz condition in z. More precisely, there exist a continuous increasing function ϕ : R^+ → R^+ and constants µ ∈ R, C > 0 such that, for all t ∈ [0, T], y, y' ∈ R^n, z, z' ∈ R^{n×d},

|f(t, y, 0)| ≤ |f(t, 0, 0)| + ϕ(|y|), a.s.; (1.7)
⟨y − y', f(t, y, z) − f(t, y', z)⟩ ≤ µ|y − y'|², a.s.;
|f(t, y, z) − f(t, y, z')| ≤ C|z − z'|, a.s.

Briand et al. (2003) studied the L^p solutions, for p ∈ [1, ∞), of such equations under the same hypotheses.

For our part, we consider reflected BSDEs with one continuous barrier, associated with (ξ, f, L), where ξ ∈ L²(F_T) and f is a map from Ω × [0, T] × R × R^d into R such that f(t, y, z) is progressively measurable, f(t, 0, 0) is square integrable, and (1.7) holds. We assume that the barrier (L_t)_{0≤t≤T} is a continuous, progressively measurable process such that

E[ϕ²(sup_{0≤t≤T} (e^{µt} L_t^+))] < ∞, (1.8)

(L_t^+)_{0≤t≤T} ∈ S²(0, T), and L_T ≤ ξ a.s. Here L^+ denotes the positive part of the process L.
A triple (Y_t, Z_t, K_t)_{0≤t≤T} is said to be a solution of the reflected BSDE if

Y_t = ξ + ∫_t^T f(s, Y_s, Z_s) ds + K_T − K_t − ∫_t^T Z_s dB_s, (1.9)

Y_t ≥ L_t a.s. for every 0 ≤ t ≤ T, and ∫_0^T (Y_s − L_s) dK_s = 0.
We proved in particular the existence and uniqueness of the L² solution of this reflected equation with deterministic terminal time. For existence, we again use a penalization method. In the case where f does not depend on z, one considers the BSDEs

Y_t^n = ξ + ∫_t^T f(s, Y_s^n) ds + n ∫_t^T (Y_s^n − L_s)^− ds − ∫_t^T Z_s^n dB_s. (1.10)

Under the hypotheses (1.7), difficulties arise at the level of the a priori estimates. We therefore have to start under boundedness assumptions on ξ, f(t, 0) and L^+, and then relax these assumptions step by step.

The proof proceeds in four steps. The first step establishes the result under boundedness assumptions on ξ, f(t, 0) and L^+. The second step (the most delicate) relaxes the boundedness assumption on L^+; the last two steps then yield the general result by relaxing the boundedness assumptions on ξ and f(t, 0). In these two steps, the comparison theorems play an important role, allowing us to pass to the limit in the equations.

We then studied the case where the terminal time is random. In a recent work, Talay and Zheng (2002, [72]) studied the reflected BSDE with random terminal time and two continuous barriers that are Itô processes. Existence and uniqueness are proved when the generator f(t, ω, y, z) satisfies the monotonicity condition in y, is continuous, has linear growth in y, and is Lipschitz in z. Unfortunately, their method does not adapt directly to our setting: their work relies heavily on the fact that the two barriers lie in a "good" space, whereas in our case the much more general growth condition on f in y is the reason for the heavy techniques used in the proof of Section 2.

A substantial part of this work is the subject of an article with J.-P. Lepeltier and A. Matoussi in Advances in Applied Probability.
1.3 Reflected BSDEs with one barrier under monotonicity and general growth conditions in y and non-Lipschitz conditions in z

BSDEs whose generator f has quadratic growth in z and whose terminal condition ξ is bounded were studied by Kobylanski in [43]. She proved in particular an existence result when f has linear growth in y and quadratic growth in z. In 1998 ([48]), Lepeltier and San Martín generalized this result to the case of superlinear growth in y. More recently, in [50], they considered BSDEs whose generator satisfies only the monotonicity, continuity and general growth conditions in y, and quadratic or linear growth in z, i.e., for all t ∈ [0, T], y, y' ∈ R, z ∈ R^d,

(y − y')(f(t, y, z) − f(t, y', z)) ≤ µ(y − y')², a.s.;
|f(t, y, z)| ≤ ϕ(|y|) + A|z|², a.s.; (1.11)

or

|f(t, y, z)| ≤ ϕ(|y|) + A|z|, a.s. (1.12)

In the same paper, they also treated the case f(t, y, z) = |z|^p, for p ∈ (1, 2], obtained some necessary and sufficient conditions on ξ for the existence of the solution, and constructed the solution explicitly when p = 2.

For reflected BSDEs with one barrier, A. Matoussi considered the case where the generator f is continuous with linear growth in y and z, and proved the existence of a maximal solution.

Then, in [44], Kobylanski, Lepeltier, Quenez and Torres proved the existence and uniqueness of the bounded solution of reflected BSDEs when the generator f(t, ω, y, z) has superlinear growth in y and quadratic growth in z, i.e., when there exists a strictly positive function l such that

|f(t, y, z)| ≤ l(y) + A|z|²,  with  ∫_0^∞ dx/l(x) = ∫_{−∞}^0 dx/l(x) = +∞,

and when the terminal condition ξ and the continuous barrier L are uniformly bounded, which allowed them to control the approximation.

In this chapter, we study reflected BSDEs with one barrier whose generator f satisfies conditions (1.11) or (1.12), when the barrier L is uniformly bounded. Following the methods of [50], we prove the existence of a solution by approximation under these conditions. We also find a necessary and sufficient condition for the case f(t, ω, y, z) = |z|², and construct its solution explicitly. For the case f(t, ω, y, z) = |z|^p, p ∈ (1, 2), we prove a sufficient condition.
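For orientation, in the unreflected equation with generator f(t, y, z) = |z|², Itô's formula applied to e^{2Y_t} removes the quadratic term, so e^{2Y_t} is a martingale and Y_t = ½ ln E[e^{2ξ} | F_t]. This is the classical exponential transform; the choice ξ = B_T below is purely illustrative, and gives Y_0 = ½ ln E[e^{2B_T}] = T. A quick numerical check on the binomial approximation of B:

```python
import math

def quadratic_bsde_y0(T=1.0, n=200):
    """Y_0 for the unreflected BSDE with generator f = |z|^2 and xi = B_T.

    The exponential transform gives Y_0 = 0.5 * ln E[exp(2 B_T)] = T.
    On the binomial random walk approximating B (steps of +-sqrt(dt)),
    E[exp(2 B_T)] = cosh(2 sqrt(dt))^n.
    """
    dt = T / n
    return 0.5 * n * math.log(math.cosh(2.0 * math.sqrt(dt)))

yq0 = quadratic_bsde_y0()
```

With 200 steps the tree value is within about 0.5% of the exact value T = 1, the gap being the usual O(dt) discretization error of cosh(2√dt) against e^{2dt}.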
1.4 Reflected BSDEs with two barriers under monotonicity and general growth conditions in y and a Lipschitz condition in z

Cvitanic and Karatzas (1995) treated reflected BSDEs with two barriers and proved the existence and uniqueness of the solution when the generator f is uniformly Lipschitz in y and z, L < U on [0, T], and there exists a semimartingale between L and U (Mokobodski's hypothesis for Dynkin games). Then, in [38], the existence of a solution is proved when the generator f is only continuous with linear growth in (y, z); in that case, however, a regularity condition on one barrier is required.

More recently, Lepeltier and San Martín used the penalization method to prove the existence of a solution under the same hypotheses on f, but without regularity of the barriers, i.e., only when L and U are continuous, L < U on [0, T], and Mokobodski's hypothesis holds.

In this chapter, we treat reflected BSDEs with two barriers when f satisfies the monotonicity, continuity and general growth conditions in y and a Lipschitz condition in z, i.e., (1.7). For the barriers, we require that L and U be continuous, that L < U on [0, T], and that Mokobodski's hypothesis hold. We prove the existence and uniqueness of the solution of this equation.

For existence, as in Chapter 2, we start with the case where ξ, f(t, 0), L^+ and U^− are all bounded and prove an existence result by the penalization method. We then relax this boundedness condition step by step by approximation. On the one hand, the fact that the solution is controlled by L and U simplifies the a priori estimation problem. On the other hand, having two increasing processes K^+ and K^− introduces certain complications, which are resolved thanks to the comparison theorems for these processes.
1.5 Sobolev solutions for semilinear PDEs with obstacle under the monotonicity condition

An important application of BSDEs is to provide a probabilistic interpretation (a nonlinear Feynman-Kac formula) for the solutions of semilinear parabolic partial differential equations (PDEs). A PDE with a sufficiently regular generator has a classical solution, but when the generator is only continuous and Lipschitz one considers weak solutions, viscosity solutions, or solutions in Sobolev spaces.

In [60] and [59], Pardoux and Peng showed that the viscosity solution of the PDE can be interpreted through the solution of a corresponding BSDE. Later, Barles and Lesigne [6] studied the relation between the Sobolev-space solution of the PDE and the solution of the BSDE. Then V. Bally and A. Matoussi [5] treated the BSDE driven by two Brownian motions and the semilinear stochastic PDE in the Sobolev sense. They proved a theorem that allows one to use random test functions for the PDE instead of smooth ones; this theorem plays a very important role for the uniqueness of the solution of the PDE.

In a recent paper, V. Bally, M.E. Caballero, N. El Karoui and B. Fernandez [4] studied semilinear PDEs with obstacle, of the form

(∂_t + L)u + f(t, x, u, σ^*∇u) + ν = 0,  u ≥ h,  u_T = g,

where h is the obstacle. The solution of this equation is a pair (u, ν), where u is a function in L²([0, T], H) and ν is a positive measure concentrated on {u = h}. The authors proved the existence and uniqueness of the solution of this equation when the generator f is Lipschitz with linear growth in (u, ∇u); they gave the probabilistic interpretation (Feynman-Kac formula) of u and ∇u through the solution (Y, Z) of the reflected BSDE, and the probabilistic interpretation of the measure ν through the increasing process K.

On the other hand, Pardoux (1999, [57]) studied PDEs whose generator f satisfies the monotonicity and general growth conditions in u and is Lipschitz in ∇u (see (1.7)), and obtained the probabilistic interpretation of the viscosity solution u through the solution of the BSDE.

In this chapter, we apply the approximation method and the BSDE results of [57] to interpret the semilinear PDE in the Sobolev sense through the solution of the corresponding BSDE. We then use the notion of PDE with obstacle from [4]. By the same approximation as in Chapter 3, we prove the probabilistic interpretation of (u, ν) through the solution (Y, Z, K) of the reflected BSDE. Here we assume that the obstacle h has polynomial growth. We prove a theorem that allows the smooth test functions to be replaced by random test functions under the monotonicity and general growth conditions, and from this theorem we obtain the uniqueness of the solution of the PDE via the uniqueness of the solution of the BSDE or the reflected BSDE.
1.6 Simulation of BSDEs

The problem of numerically simulating nonlinear BSDEs has been studied since 1990. For classical BSDEs there are two main approaches. On the one hand, one uses numerical methods for the PDE together with the Feynman-Kac formula to obtain a numerical solution of the BSDE (for details see [3] and [17]). On the other hand, the Monte Carlo method, first introduced for linear PDEs, has been generalized: by Bouchard and Touzi [11] for forward-backward SDEs, and by Gobet, Lemor and Warin [32] and Zhang and Zheng [76] for BSDEs. Notably, they obtained the rate of convergence of the numerical solution to the true solution.

In this chapter, we study numerical solutions of BSDEs. The method is based on a time discretization, with a random walk approximating the Brownian motion. In this way we obtain a discrete BSDE, and by various methods we compute approximate numerical solutions. Then, applying a convergence result of [14] (P. Briand, B. Delyon and J. Mémin, 2001), we show that the numerical solution converges to the true solution.

In the same way, we develop numerical methods for reflected BSDEs with one or two barriers, and prove the convergence of the penalized equations. Unfortunately, we cannot obtain a rate of convergence for the numerical solutions, reflected or not, since the terminal condition ξ is only a functional of the Brownian motion, without any regularity assumption.

In conclusion, we present simulation results, and in particular apply this technique to American options.
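The American-option application can be sketched along these lines (a sketch with illustrative parameters, not the thesis's examples): the put price solves a reflected BSDE whose barrier is the intrinsic payoff, and on the random-walk approximation of B the backward step is a conditional expectation plus a generator step, followed by reflection.

```python
import math

def american_put_rbsde(S0=100.0, K=100.0, r=0.05, sig=0.2, T=1.0, n=200):
    """American put priced as a reflected BSDE on a binomial random walk.

    Illustrative parameters (not from the thesis). Backward step:
    y = E[y'] + f(y)*dt with the linear generator f = -r*y (discounting
    enters through the generator, as in the BSDE formulation), followed
    by reflection on the barrier L_t = (K - S_t)^+.
    """
    dt = T / n
    sdt = math.sqrt(dt)

    def stock(k, j):
        # S_t = S0 * exp((r - sig^2/2) t + sig B_t), B on the binomial tree
        return S0 * math.exp((r - 0.5 * sig * sig) * k * dt
                             + sig * (2 * j - k) * sdt)

    payoff = lambda s: max(K - s, 0.0)
    y = [payoff(stock(n, j)) for j in range(n + 1)]    # reflected (American)
    eur = list(y)                                      # unreflected (European)
    for k in range(n - 1, -1, -1):
        for j in range(k + 1):
            cont = 0.5 * (y[j] + y[j + 1]) * (1.0 - r * dt)
            y[j] = max(cont, payoff(stock(k, j)))      # reflection on L
            eur[j] = 0.5 * (eur[j] + eur[j + 1]) * (1.0 - r * dt)
    return y[0], eur[0]

am0, eu0 = american_put_rbsde()
```

The reflected value dominates the unreflected one, the gap being the early-exercise premium; for these parameters the European value should sit near its Black-Scholes price of about 5.57.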
Chapter 2

The Reflected BSDE with single or double discontinuous barriers

In this chapter, we first consider reflected BSDEs with right-continuous left-limit (RCLL) barriers. In the case of one RCLL barrier, an existence and uniqueness result was obtained by S. Hamadène ([33]) using a Picard iteration method. Here we consider the penalization method; the idea is to view the solution Y^n of the penalized equation as the solution of a certain reflected BSDE (RBSDE for short). For the reflected BSDE with two RCLL barriers, we obtain the existence and uniqueness results via Picard iteration and the Dynkin game problem.

When the lower boundary L is only an L²-process, Peng (1999) proved the existence of the smallest supersolution of the BSDE with prescribed terminal condition that dominates L, and then applied this result to prove a nonlinear decomposition of Doob-Meyer type, i.e., a g-supermartingale is a g-supersolution. But the formulations (1.5) and (1.6) cannot be applied to our L²-case. We therefore present a generalized formulation of the Skorohod reflecting condition, and then characterize the above smallest g-supermartingale as the unique solution of the related reflected BSDE.

We also use this formulation to characterize the problem of the BSDE with two reflecting L²-obstacles L and U. For this purpose we first need a penalization method to prove the existence of the reflected solution. This is a constructive method, in the sense that the solution of the reflected BSDE is proved to be the limit of a sequence of solutions of standard BSDEs, called penalized BSDEs. Our penalization schemes may be useful in practice, since many numerical methods have been developed for these standard BSDEs. To prove the convergence, we develop a new monotonic limit theorem, which generalizes a useful tool initially introduced in Peng (1999).

This chapter is organized as follows. After introducing the definitions and assumptions for the reflected BSDE with one or two RCLL barriers in Section 2.1, we use the penalization method to prove the existence of the solution of the reflected BSDE with one RCLL barrier in Section 2.2. In Section 2.3, we prove the existence and uniqueness results for the reflected BSDE with two RCLL barriers, and apply the result to a mixed game problem. For the reflected BSDE with L²-barriers, in Section 2.4 we present the notation and our main results for RBSDEs with single and double L²-barriers, i.e., the existence, uniqueness and continuous dependence theorems. The generalized monotonic limit theorem is proved in Section 2.5. In Section 2.6 we present the existence result for the g-supersolution dominating an L²-barrier and establish its equivalence with the corresponding RBSDE, which provides the proof of the existence of the corresponding RBSDE with a single L²-barrier. The proof of the existence theorem for the RBSDE with double reflecting barriers begins in Section 2.7 and finishes in Section 2.8. In the appendix (Section 2.10), we present some known results on the Snell envelope and the Dynkin game, and we give another penalization-based proof for the reflected BSDE with two RCLL barriers.
14 Chapitre 2. RBSDE discontinuous
2.1 Definitions and assumptions for reflected BSDE with one or two RCLL barriers
Let (Ω, F, P) be a complete probability space, and B = (B¹, B², …, B^d)′ a d-dimensional Brownian motion defined on the finite interval [0, T]. Denote by {F_t; 0 ≤ t ≤ T} the natural filtration generated by the Brownian motion B:

F_t = σ{B_s; 0 ≤ s ≤ t},

augmented with all P-null sets of F.
We will need the following notations. For any given m ∈ N, p ≥ 1 and t ∈ [0, T], let us introduce the following spaces:
– L^p_m(F_T) := {ξ : Ω → R^m; F_T-measurable random variables ξ with E[|ξ|^p] < ∞};
– H^p_m(0, T) := {ϕ : Ω × [0, T] → R^m; F_t-predictable processes with E∫_0^T |ϕ_t|^p dt < ∞};
– D^p_m(0, T) := {ϕ ∈ H^p_m(0, T); F_t-progressively measurable RCLL processes with E[sup_{0≤t≤T} |ϕ_t|^p] < ∞};
– S^p_m(0, T) := all continuous processes in D^p_m(0, T);
– A^p(0, T) := {K : Ω × [0, T] → R; F_t-progressively measurable increasing RCLL processes with K_0 = 0, E[(K_T)²] < ∞}.
In the real-valued case, i.e. m = 1, they will simply be denoted by L^p(F_t), H^p(0, t), D^p(0, t) and S^p(0, t), respectively. We are mainly interested in the case p = 2. We shall denote by P the σ-algebra of predictable sets in [0, T] × Ω. Sometimes we use L²(0, T) instead of H²(0, T), when such a replacement causes no confusion.
For the reflected BSDE with one or two RCLL barriers, we make the following assumptions.

Assumption 2.1.1. A terminal value ξ, which is a given random variable in L²(F_T).

Assumption 2.1.2. A coefficient g, a mapping g : [0, T] × Ω × R × R^d → R, which is P ⊗ B(R) ⊗ B(R^d)-measurable, and satisfies
(i)
E∫_0^T g²(t, 0, 0) dt < +∞,   (2.1)
and (ii) a Lipschitz condition in (y, z), uniformly in (t, ω):
|g(t, ω, y1, z1) − g(t, ω, y2, z2)| ≤ k(|y1 − y2| + |z1 − z2|),   (2.2)
for all (t, ω) ∈ [0, T] × Ω, y1, y2 in R, z1, z2 in R^d, and some 0 < k < ∞.
Assumption 2.1.3 (for the case of one barrier). A barrier {L_t, 0 ≤ t ≤ T}, which is a real-valued RCLL progressively measurable process satisfying
E[sup_{0≤t≤T} (L_t⁺)²] < +∞,   (2.3)
and L_T ≤ ξ a.s.
In case of reflected BSDE with two barriers, we assume
Assumption 2.1.4. Two barriers {L_t, 0 ≤ t ≤ T} and {U_t, 0 ≤ t ≤ T}, which are RCLL progressively measurable real-valued processes satisfying
E[sup_{0≤t≤T} (L_t⁺)²] < +∞,   E[sup_{0≤t≤T} (U_t⁻)²] < +∞,   (2.4)
and L_t ≤ U_t for 0 ≤ t ≤ T, with L_T ≤ ξ ≤ U_T a.s.
For the existence of the solution of reflected BSDE with two RCLL barriers, we will need :
Assumption 2.1.5. (i) There exists a process J_t = J_0 + ∫_0^t φ_s dB_s − V_t⁺ + V_t⁻, with φ ∈ H²_d(0, T) and V⁺, V⁻ ∈ A²(0, T), such that
L_t ≤ J_t ≤ U_t,   P-a.s. for 0 ≤ t ≤ T.
(ii) For t ∈ [0, T), L_t < U_t, a.s.
Now we present the definitions of the solutions of the RBSDEs with one or two RCLL barriers.
Definition 2.1.1. We say that a triple (Y, Z, K) of F_t-progressively measurable processes, where Y, K are RCLL processes with Y : [0, T] × Ω → R, Z : [0, T] × Ω → R^d and K : [0, T] × Ω → R, is a solution of the BSDE with one RCLL reflecting lower barrier L(·), terminal condition ξ and coefficient g satisfying assumptions 2.1.1 and 2.1.2, if the following hold:
(i) Y ∈ D²(0, T), Z ∈ H²_d(0, T), and K ∈ A²(0, T);
(ii) Y_t = ξ + ∫_t^T g(s, Y_s, Z_s) ds + K_T − K_t − ∫_t^T Z_s dB_s, 0 ≤ t ≤ T;
(iii) Y_t ≥ L_t, 0 ≤ t ≤ T, a.s.;
(iv) ∫_0^T (Y_{s−} − L_{s−}) dK_s = 0, a.s.
Actually, a general solution of our RBSDE would only have to satisfy (ii) to (iv) of definition 2.1.1; here we consider the solutions which in addition satisfy the integrability condition (i). The state-process Y(·) is forced to stay above the barrier L(·), thanks to the cumulative action of the reflecting process K(·), which acts only when necessary to prevent Y(·) from crossing the barrier; in this sense, its action can be considered minimal.
Remark 2.1.1. Our definition is similar to the continuous case, where the analogue of (iv) is
∫_0^T (Y_s − L_s) dK_s = 0.
The existence and uniqueness of a solution of the RBSDE with one RCLL barrier have been proved in [33], theorem 1.4. The method is based on the theory of the Snell envelope when f does not depend on (y, z), and on a Picard iteration in the general case. The definition of the solution of the RBSDE in ([33], (1.4)), which differs from (iv) of definition 2.1.1, is in fact equivalent; this is what we prove in the following proposition.
Proposition 2.1.1. If (i), (ii), (iii) of definition 2.1.1 are satisfied, then the condition: for all t ≤ T, ΔY_t = Y_t − Y_{t−} = −(L_{t−} − Y_t)⁺, is equivalent to
∫_0^T (Y_{s−} − L_{s−}) dK^d_s = 0,   (2.5)
where K = K^c + K^d is the decomposition of K, with K^c continuous and K^d a pure-jump predictable process.
Proof. First of all, we know that Y_t − Y_{t−} = −(K^d_t − K^d_{t−}); then, since (2.5) is satisfied, we have
{Y_t < Y_{t−}} = {K^d_t − K^d_{t−} > 0} ⊂ {Y_{t−} = L_{t−}}.   (2.6)
Notice that Y_t + ∫_0^t g(s, Y_s, Z_s) ds is a supermartingale, that ∫_0^t g(s, Y_s, Z_s) ds is continuous, and that from (2.6) we have {Y_{t−} ≠ L_{t−}} ⊂ {Y_t = Y_{t−}}.
a) If Y_t < L_{t−}, then Y_t < L_{t−} ≤ Y_{t−}. Thanks to (2.6), we have Y_{t−} = L_{t−}, so ΔY_t = Y_t − Y_{t−} = Y_t − L_{t−} = −(L_{t−} − Y_t)⁺.
b) If Y_t ≥ L_{t−}, then L_{t−} ≤ Y_t ≤ Y_{t−}. Suppose that Y_t < Y_{t−}; by (2.6) we get Y_{t−} = L_{t−}, so Y_{t−} = Y_t, a contradiction. Consequently Y_{t−} = Y_t, and 0 = ΔY_t = Y_t − Y_{t−} = −(L_{t−} − Y_t)⁺.
Conversely, when ΔY_t = Y_t − Y_{t−} < 0, i.e. ΔK^d_t > 0, we have Y_t − Y_{t−} = Y_t − L_{t−}, so Y_{t−} = L_{t−}. It follows that {ΔK^d_t > 0} ⊂ {Y_{t−} = L_{t−}}, so (2.5) holds. □
Definition 2.1.2. A triple (Y, Z, K) of F_t-progressively measurable processes, where Y, K are RCLL processes with Y, K : [0, T] × Ω → R and Z : [0, T] × Ω → R^d, is called a solution of the RBSDE with two RCLL reflecting barriers L(·), U(·), terminal condition ξ and coefficient g, if the following hold:
(i) Y ∈ D²(0, T), Z ∈ H²_d(0, T), and K = K⁺ − K⁻ with K⁺, K⁻ ∈ A²(0, T);
(ii) Y_t = ξ + ∫_t^T g(s, Y_s, Z_s) ds + K⁺_T − K⁺_t − (K⁻_T − K⁻_t) − ∫_t^T Z_s dB_s, 0 ≤ t ≤ T;
(iii) L_t ≤ Y_t ≤ U_t, 0 ≤ t ≤ T, a.s.;
(iv) ∫_0^T (Y_{s−} − L_{s−}) dK⁺_s = ∫_0^T (U_{s−} − Y_{s−}) dK⁻_s = 0, a.s.
Now the state-process Y(·) is forced to stay between the barriers L(·) and U(·) by the cumulative action of the reflecting processes K⁺(·) and K⁻(·), respectively; they act only when necessary to prevent Y(·) from crossing the respective barrier, and in the sense of (iv) of definition 2.1.2, their actions can be considered minimal.
Now we present a "monotonic limit" theorem, which plays an important role in the penalization method for the RBSDE with one discontinuous (RCLL or L2) barrier. It was first proved in ([65], Theorem 2.4), where the author considers the so-called g-supersolution, i.e. the solution of a BSDE with an additional increasing process, as in (2.8) below. Later, in Section 2.5, we will generalize this monotonic limit theorem and apply it to the case of the reflected BSDE with two discontinuous barriers.
Theorem 2.1.1. We assume that g satisfies assumption 2.1.2 and that (A^i) ∈ A²(0, T) for i ∈ N. Let (Y^i, Z^i) be the solution of the BSDE
Y^i_t = ξ + ∫_t^T g(s, Y^i_s, Z^i_s) ds + A^i_T − A^i_t − ∫_t^T Z^i_s dB_s,   (2.7)
with E[sup_{0≤t≤T} |Y^i_t|²] < ∞. If (Y^i) converges increasingly to Y, with E[sup_{0≤t≤T} |Y_t|²] < ∞, then there exist Z ∈ H²_d(0, T) and A ∈ A²(0, T) such that the pair (Y, Z) satisfies, for t ∈ [0, T],
Y_t = ξ + ∫_t^T g(s, Y_s, Z_s) ds + A_T − A_t − ∫_t^T Z_s dB_s,   (2.8)
where Z is the weak (resp. strong) limit of (Z^i) in H²_d(0, T) (resp. H^p_d(0, T), for p < 2), and, for each t, A_t is the weak limit of (A^i_t) in L²(F_t).
2.2 The penalized BSDEs for RBSDE with one RCLL barrier and optimal stopping problem
In the case of one barrier, we first prove that any solution of the RBSDE is the Snell envelope of an optimal stopping problem. Then we consider the solutions of the penalized equations. Using theorem 2.1.1, we prove that (Y^n, Z^n, K^n) has in some sense a limit (Y, Z, K), and that (Y, Z, K) satisfies (i) and (ii) of definition 2.1.1. The other idea is to consider Y^n as the solution of some RBSDE, and in this way, by passing to the limit as n → ∞, to prove that Y is the solution of an optimal stopping problem. Finally, by the properties of the Snell envelope and an identification argument, we get that (Y, Z, K) satisfies (iii) and (iv) of definition 2.1.1.
We first prove a result which is the analogue of Proposition 2.3 in ([28], 1997) for the continuous case. Since L is only RCLL, the proof is not exactly the same: instead of considering an optimal stopping time (which may not exist), we need to consider an ε-optimal stopping time.
Proposition 2.2.1. Let (Y, Z, K) ∈ D²(0, T) × H²_d(0, T) × A²(0, T) be the solution of the RBSDE
Y_t = ξ + ∫_t^T g(s, Y_s, Z_s) ds + K_T − K_t − ∫_t^T Z_s dB_s, 0 ≤ t ≤ T,   (2.9)
with Y_t ≥ L_t, 0 ≤ t ≤ T, and ∫_0^T (Y_{s−} − L_{s−}) dK_s = 0, a.s. Then for each t ∈ [0, T],
Y_t = ess sup_{τ∈T_t} E[∫_t^τ g(s, Y_s, Z_s) ds + L_τ 1_{τ<T} + ξ 1_{τ=T} | F_t],   (2.10)
where T_t is the set of all stopping times with values between t and T, defined as
T_t = {τ ∈ T; t ≤ τ ≤ T}.   (2.11)
Proof. Let τ ∈ T_t. Taking the conditional expectation in (2.9), between times t and τ, we get
Y_t = E[∫_t^τ g(s, Y_s, Z_s) ds + Y_τ + K_τ − K_t | F_t]   (2.12)
≥ E[∫_t^τ g(s, Y_s, Z_s) ds + L_τ 1_{τ<T} + ξ 1_{τ=T} | F_t].
Conversely, define the stopping time D^ε_t = inf{u ≥ t; Y_u ≤ L_u + ε} ∧ T. Obviously, we have Y_{D^ε_t} ≤ L_{D^ε_t} + ε on the set {D^ε_t < T}. On the set {D^ε_t = T}, by the definition of D^ε_t, we know that Y_u > L_u + ε for t ≤ u ≤ T. Then, between t and D^ε_t, Y_{s−} > L_{s−}. From (iv) of definition 2.1.1, we get ∫_t^{D^ε_t} (Y_{s−} − L_{s−}) dK_s = 0, so we deduce that K_{D^ε_t} − K_t = 0.
So it follows that
Y_t ≤ E[∫_t^{D^ε_t} g(s, Y_s, Z_s) ds + L_{D^ε_t} 1_{D^ε_t<T} + ξ 1_{D^ε_t=T} | F_t] + ε   (2.13)
holds for all ε > 0. Comparing with the inequality (2.12), the result follows. □
Now we want to prove that the penalization method allows us to obtain, in the limit, the solution of the RBSDE. In the following, C will denote a constant whose value can vary from line to line.
For each n ∈ N, consider the penalized equation
Y^n_t = ξ + ∫_t^T g(s, Y^n_s, Z^n_s) ds + n ∫_t^T (Y^n_s − L_s)⁻ ds − ∫_t^T Z^n_s dB_s.
If we define
g_n(t, y, z) = g(t, y, z) + n(y − L_t)⁻,   (2.14)
which is also Lipschitz, then by the classical BSDE theorem ([58]) these equations admit a unique solution (Y^n_t, Z^n_t)_{0≤t≤T}. Denote K^n_t = n ∫_0^t (Y^n_s − L_s)⁻ ds; then, applying Ito's formula and the same method as in Section 6 of ([28], 1997), the following holds uniformly in n ∈ N:
E[sup_{0≤t≤T} |Y^n_t|²] + E∫_0^T |Z^n_s|² ds + E[|K^n_T|²] ≤ C.   (2.15)
Now, using theorem 2.1.1, we obtain:

Lemma 2.2.1. The sequence of processes (Y^n, Z^n, K^n), n ∈ N, has a limit (Y, Z, K) ∈ D²(0, T) × H²_d(0, T) × A²(0, T) such that Y^n converges to Y in D²(0, T), Z is the weak (resp. strong) limit of Z^n in H²_d(0, T) (resp. in H^p_d(0, T) for 0 < p < 2), and, for each t ∈ [0, T], K_t is the weak limit of K^n_t in L²(F_t).
Proof. With (2.14),
g_n(t, y, z) ≤ g_{n+1}(t, y, z).
From the comparison theorem for classical BSDEs in [30] (called the classical comparison theorem in the following), we easily obtain that Y^n_t ≤ Y^{n+1}_t, 0 ≤ t ≤ T, a.s. Hence there exists a process Y_t such that
Y^n_t ↑ Y_t, 0 ≤ t ≤ T, a.s.
We deduce from (2.15) and Fatou's lemma that
E[sup_{0≤t≤T} |Y_t|²] ≤ C.   (2.16)
Now, from Lebesgue's dominated convergence theorem, it follows that
E∫_0^T (Y^n_t − Y_t)² dt → 0, as n → ∞.
Since (2.16) holds and E[|K^n_T|²] ≤ C, which comes from (2.15), the convergences of Z^n and K^n are direct consequences of Theorem 2.1.1. The proof is complete. □
Now we show the main result of this section.
Theorem 2.2.1. The limit (Y, Z, K) of (Y^n, Z^n, K^n) (Lemma 2.2.1) is the unique solution of the reflected BSDE with one RCLL lower barrier L, i.e. (i)–(iv) of definition 2.1.1 are satisfied.
Proof. The uniqueness result is found in ([33]). For the existence: first of all, by Lemma 2.2.1 and the Lipschitz property of g, we obtain that the limit (Y, Z, K) ∈ D²(0, T) × H²_d(0, T) × A²(0, T) satisfies
Y_t = ξ + ∫_t^T g(s, Y_s, Z_s) ds + K_T − K_t − ∫_t^T Z_s dB_s, 0 ≤ t ≤ T.   (2.17)
We then prove that the triple also satisfies (iii) and (iv).
For this we first observe that, for each n ∈ N, (Y^n, Z^n, K^n) is the solution of the RBSDE with the lower barrier L_t − (Y^n_t − L_t)⁻. Indeed, (i) and (ii) obviously hold, and (iii) and (iv) are satisfied in view of
Y^n_t ≥ L_t = L_t − (Y^n_t − L_t)⁻, if Y^n_t ≥ L_t,
Y^n_t = L_t − (Y^n_t − L_t)⁻, if Y^n_t < L_t,
and
∫_0^T (Y^n_t − (L_t − (Y^n_t − L_t)⁻)) dK^n_t = n ∫_0^T (Y^n_t − L_t)⁺ (Y^n_t − L_t)⁻ dt = 0.
Then, from Proposition 2.2.1,
Y^n_t = ess sup_{τ∈T_t} E[∫_t^τ g(s, Y^n_s, Z^n_s) ds + L_τ 1_{τ<T} + ξ 1_{τ=T} − (Y^n_τ − L_τ)⁻ 1_{τ<T} | F_t]
≤ ess sup_{τ∈T_t} E[∫_t^τ g(s, Y^n_s, Z^n_s) ds + L_τ 1_{τ<T} + ξ 1_{τ=T} | F_t]
≤ ess sup_{τ∈T_t} E[∫_t^τ g(s, Y_s, Z_s) ds + L_τ 1_{τ<T} + ξ 1_{τ=T} | F_t] + kE[∫_0^T (|Y^n_s − Y_s| + |Z^n_s − Z_s|) ds | F_t].
Since Y^n → Y in D²(0, T) and Z^n → Z in H^p_d(0, T) for 0 < p < 2, we can choose a subsequence along which
E[∫_0^T (|Y^n_s − Y_s| + |Z^n_s − Z_s|) ds | F_t] → 0, a.s.
So, passing to the limit on both sides of the inequality, it follows that
Y_t ≤ ess sup_{τ∈T_t} E[∫_t^τ g(s, Y_s, Z_s) ds + L_τ 1_{τ<T} + ξ 1_{τ=T} | F_t].   (2.18)
On the other hand, let us consider the following BSDE:
Ỹ^n_t = ξ + ∫_t^T g(s, Ỹ^n_s, Z̃^n_s) ds + n ∫_t^T (L_s − Ỹ^n_s) ds − ∫_t^T Z̃^n_s dB_s.   (2.19)
In the same way as in the proof of Lemma 6.1 in [28], Ỹ^n_τ → L_τ 1_{τ<T} + ξ 1_{τ=T} in mean square. Using the classical comparison theorem, it follows that Y^n_t ≥ Ỹ^n_t for 0 ≤ t ≤ T and n ∈ N, so letting n → ∞ on both sides, we get
Y_t ≥ L_t 1_{t<T} + ξ 1_{t=T}, for 0 ≤ t ≤ T.
Noticing that Y_t + ∫_0^t g(s, Y_s, Z_s) ds is a supermartingale, we have
Y_t ≥ ess sup_{τ∈T_t} E[∫_t^τ g(s, Y_s, Z_s) ds + L_τ 1_{τ<T} + ξ 1_{τ=T} | F_t].   (2.20)
Comparing (2.18) and (2.20), it follows that
Y_t = ess sup_{τ∈T_t} E[∫_t^τ g(s, Y_s, Z_s) ds + L_τ 1_{τ<T} + ξ 1_{τ=T} | F_t].   (2.21)
Now set L^ξ_t := L_t 1_{t<T} + ξ 1_{t=T}. In the notation of the Snell envelope (Appendix 2.10.1, Definition 2.10.1), we rewrite the formula in the form
Y_t − E[∫_t^T g(s, Y_s, Z_s) ds + ξ | F_t] = S_t(η),
where η_t = ∫_0^t g(s, Y_s, Z_s) ds + L^ξ_t − E[∫_0^T g(s, Y_s, Z_s) ds + ξ | F_t], so that η_T = 0. Let η* = sup_{0≤t≤T} |η_t|; with the assumptions 2.1.1, 2.1.2 and 2.1.3, we have the estimate
E[sup_{0≤t≤T} (S_t(η))²] ≤ E[sup_{0≤t≤T} (E[η* | F_t])²] ≤ 4E[|η*|²]
≤ 8(E[sup_{0≤t≤T} (L_t)²] + E[|ξ|²]) + CE[∫_0^T (g²(s, 0, 0) + |Y_s|² + |Z_s|²) ds] < ∞.
Since S_t(η) is a supermartingale, by the Doob–Meyer decomposition theorem there exist an F_t-martingale M¹ and an F_t-adapted RCLL increasing process K¹ such that
S_t(η) = M¹_t − K¹_t.
Thanks to Lemma 2.10.2 in Section 2.10.1, we know that E[(K¹_T)²] < ∞, i.e. K¹(·) ∈ A²(0, T). Then
Y_t = M¹_t − K¹_t + E[∫_t^T g(s, Y_s, Z_s) ds + ξ | F_t]   (2.22)
= Y_0 − ∫_0^t g(s, Y_s, Z_s) ds − K¹_t + ∫_0^t Z¹_s dB_s,
where Z¹(·) comes from the Ito representation theorem for the martingale M¹_t + E[∫_0^T g(s, Y_s, Z_s) ds + ξ | F_t]. We then rewrite (2.17) in the forward form
Y_t = Y_0 − ∫_0^t g(s, Y_s, Z_s) ds − K_t + ∫_0^t Z_s dB_s.   (2.23)
Now, comparing (2.22) and (2.23), we deduce
K_t − K¹_t = ∫_0^t (Z_s − Z¹_s) dB_s.
Due to the fact that a finite-variation martingale is a constant process, we have K_t − K¹_t = 0 and Z_t − Z¹_t = 0, a.s.
By the properties of the Snell envelope, we know that S_t(η) ≥ η_t, i.e.
Y_t ≥ L^ξ_t = L_t 1_{t<T} + ξ 1_{t=T} ≥ L_t,
and ∫_0^T (S_{t−}(η) − η_{t−}) dK¹_t = 0 (see Lemma 2.10.1), i.e.
0 = ∫_0^T (S_{t−}(η) − η_{t−}) dK¹_t = ∫_0^T (Y_{t−} − L_{t−}) dK_t.
So the triple (Y, Z, K) also satisfies (iii) and (iv), i.e. (Y, Z, K) is the solution of the reflected BSDE with RCLL lower barrier L. □
In [38], the comparison theorem for the differentials of the increasing processes in the RBSDE with one continuous barrier was first proved. Now we give a version for an RCLL barrier. This theorem also gives the comparison result for the solution Y of the RBSDE, which was first proved by Hamadène ([33], Theorem 1.5). Here, thanks to the penalization method, we prove it in a simpler way and obtain in addition a comparison result for the increasing process K.
Theorem 2.2.2. Suppose that ξ^i and g^i(s, y, z) (i = 1, 2) satisfy assumptions 2.1.1 and 2.1.2, and that L satisfies assumption 2.1.3. Let (Y¹, Z¹, K¹) and (Y², Z², K²) be the respective solutions of the RBSDEs
Y^i_t = ξ^i + ∫_t^T g^i(s, Y^i_s, Z^i_s) ds + K^i_T − K^i_t − ∫_t^T Z^i_s dB_s,   (2.24)
with Y^i_t ≥ L_t, 0 ≤ t ≤ T, and ∫_0^T (Y^i_{s−} − L_{s−}) dK^i_s = 0, for i = 1, 2. If, for all (y, z) ∈ R × R^d,
ξ¹ ≤ ξ² and g¹(t, y, z) ≤ g²(t, y, z), dP × dt-a.s.,
then Y¹_t ≤ Y²_t and K¹_t ≥ K²_t for t ∈ [0, T], and, for 0 ≤ s ≤ t ≤ T, K¹_t − K¹_s ≥ K²_t − K²_s.
Proof. Consider the penalized equations of (2.24): for n ∈ N and i = 1, 2,
Y^{n,i}_t = ξ^i + ∫_t^T g^i(s, Y^{n,i}_s, Z^{n,i}_s) ds + n ∫_t^T (Y^{n,i}_s − L_s)⁻ ds − ∫_t^T Z^{n,i}_s dB_s.
Set g^{n,1}(t, y, z) = g¹(t, y, z) + n(y − L_t)⁻ and g^{n,2}(t, y, z) = g²(t, y, z) + n(y − L_t)⁻; then, by the classical comparison theorem, since g^{n,1}(t, y, z) ≤ g^{n,2}(t, y, z) and ξ¹ ≤ ξ², we get Y^{n,1}_t ≤ Y^{n,2}_t for 0 ≤ t ≤ T. Letting n → ∞, by Lemma 2.2.1 it follows that Y¹_t ≤ Y²_t. Moreover, K^{n,i}_t = n ∫_0^t (Y^{n,i}_s − L_s)⁻ ds, so for 0 ≤ s ≤ t ≤ T,
K^{n,1}_t − K^{n,1}_s ≥ K^{n,2}_t − K^{n,2}_s.
Since K^{n,i}_t converges weakly to K^i_t in L²(F_t), we get the result after passing to the limit. □
2.3 The RBSDE with two RCLL barriers and Dynkin game
2.3.1 The existence and uniqueness result
For the existence and uniqueness of the solution of the RBSDE with two RCLL barriers, we need the notions of stochastic game and Dynkin game, which are described in the Appendix 2.10.2. In the following proposition, we generalize theorem 4.1 in ([19], 1996) to the case of RCLL barriers. Let T be the set of all F_t-stopping times, and for all 0 ≤ t ≤ T define
T_t = {τ ∈ T; t ≤ τ ≤ T}.   (2.25)
Proposition 2.3.1. Let (Y, Z, K), with K = K⁺ − K⁻ and K^± ∈ A²(0, T), be a solution of the RBSDE with two RCLL barriers. For any 0 ≤ t ≤ T and any stopping times σ, τ in T_t, consider the payoff
R_t(σ, τ) = ∫_t^{σ∧τ} g(s, Y_s, Z_s) ds + ξ 1_{σ∧τ=T} + L_τ 1_{τ<T, τ≤σ} + U_σ 1_{σ<τ},   (2.26)
as well as the upper and lower values, respectively,
V̄_t = ess inf_{σ∈T_t} ess sup_{τ∈T_t} E[R_t(σ, τ) | F_t],   (2.27)
V̲_t = ess sup_{τ∈T_t} ess inf_{σ∈T_t} E[R_t(σ, τ) | F_t],
of the corresponding stochastic game. This game has a value V_t, given by the state-process Y_t, solution of the RBSDE, i.e.
V_t = V̄_t = V̲_t = Y_t, a.s.   (2.28)
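Before turning to the proof, a discrete sanity check (not from the thesis). With g ≡ 0, the discrete value recursion is V_k = min(U_k, max(L_k, E[V_{k+1}])), and the proof's ε-optimal hitting rules σ^ε (first time V ≥ U − ε) and τ^ε (first time V ≤ L + ε) can be tested against arbitrary opponent rules; the barriers, terminal condition and opponent threshold rules below are illustrative choices.

```python
import numpy as np

# Discrete sketch of Proposition 2.3.1 (not from the thesis), with g = 0:
#   V_k = min(U_k, max(L_k, E[V_{k+1}])).
# Payoff R(sigma, tau): L at tau if tau <= sigma and tau < T, U at sigma if
# sigma < tau, xi if both reach T.  We check the eps-saddle inequalities
# (2.29)-(2.30) for the hitting rules sigma_eps, tau_eps against opponents.

T, N, eps = 1.0, 40, 1e-3
h = T / N
sqh = np.sqrt(h)
Lb = lambda t: -0.4 + 0.1 * t
Ub = lambda t: 0.4 - 0.1 * t
b_of = lambda k: sqh * (2.0 * np.arange(k + 1) - k)    # B values at step k
xi = np.clip(b_of(N), Lb(T), Ub(T))                    # L_T <= xi <= U_T

V = [None] * (N + 1)
V[N] = xi
for k in range(N - 1, -1, -1):
    e = 0.5 * (V[k + 1][:-1] + V[k + 1][1:])
    V[k] = np.clip(e, Lb(k * h), Ub(k * h))

def payoff(sig_rule, tau_rule):
    """E[R(sigma, tau)] for stopping rules given as node predicates."""
    P = xi.copy()
    for k in range(N - 1, -1, -1):
        e = 0.5 * (P[:-1] + P[1:])
        bk = b_of(k)
        P = np.where(tau_rule(k, bk), Lb(k * h),
                     np.where(sig_rule(k, bk), Ub(k * h), e))
    return P[0]

sig_eps = lambda k, bk: V[k] >= Ub(k * h) - eps        # minimizer's eps-rule
tau_eps = lambda k, bk: V[k] <= Lb(k * h) + eps        # maximizer's eps-rule
never = lambda k, bk: np.zeros_like(bk, dtype=bool)
opponents = [lambda k, bk, c=c: bk >= c for c in (-0.5, 0.0, 0.5)] + \
            [lambda k, bk, c=c: bk <= c for c in (-0.5, 0.0, 0.5)] + [never]
print(V[0][0])
```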
Proof. For any ε > 0, consider the stopping time σ^ε_t = inf{s ≥ t; Y_s ≥ U_s − ε} ∧ T; then Y_{σ^ε_t} ≥ U_{σ^ε_t} − ε on the set {σ^ε_t < T}, and on the set {σ^ε_t = T}, by the definition of σ^ε_t, we have Y_s < U_s − ε for t ≤ s ≤ T. So Y_{s−} < U_{s−} for t < s ≤ σ^ε_t, and with (iv) of definition 2.1.2 it follows that K⁻_{σ^ε_t} = K⁻_t. For any stopping time τ ∈ T_t, notice that {σ^ε_t = T} ⊂ {τ ≤ σ^ε_t}, so {σ^ε_t < τ} ⊂ {σ^ε_t < T}. On the set {σ^ε_t < τ} we have
R_t(σ^ε_t, τ) ≤ ∫_t^{σ^ε_t} g(s, Y_s, Z_s) ds + Y_{σ^ε_t} − (K⁻_{σ^ε_t} − K⁻_t) + ε
≤ ∫_t^{σ^ε_t} g(s, Y_s, Z_s) ds + Y_{σ^ε_t} + (K⁺_{σ^ε_t} − K⁺_t) − (K⁻_{σ^ε_t} − K⁻_t) + ε
= Y_t + ∫_t^{σ^ε_t} Z_u dB_u + ε.
On the set {τ ≤ σ^ε_t} we have
R_t(σ^ε_t, τ) = ∫_t^τ g(s, Y_s, Z_s) ds + ξ 1_{τ=T} + L_τ 1_{τ<T} − (K⁻_τ − K⁻_t)
≤ ∫_t^τ g(s, Y_s, Z_s) ds + ξ 1_{τ=T} + Y_τ 1_{τ<T} + (K⁺_τ − K⁺_t) − (K⁻_τ − K⁻_t)
= Y_t + ∫_t^τ Z_u dB_u.
Now, comparing the two inequalities, we have R_t(σ^ε_t, τ) ≤ Y_t + ∫_t^{σ^ε_t∧τ} Z_u dB_u + ε, a.s., hence
E[R_t(σ^ε_t, τ) | F_t] ≤ Y_t + ε.   (2.29)
Conversely, we consider the stopping time τ^ε_t = inf{s ≥ t; Y_s ≤ L_s + ε} ∧ T; then Y_{τ^ε_t} ≤ L_{τ^ε_t} + ε on the set {τ^ε_t < T}, and K⁺_{τ^ε_t} = K⁺_t. For an arbitrary stopping time σ ∈ T_t, a similar argument gives R_t(σ, τ^ε_t) ≥ Y_t + ∫_t^{σ∧τ^ε_t} Z_u dB_u − ε, a.s., hence
E[R_t(σ, τ^ε_t) | F_t] ≥ Y_t − ε.   (2.30)
So we deduce
E[R_t(σ^ε_t, τ) | F_t] − ε ≤ Y_t ≤ E[R_t(σ, τ^ε_t) | F_t] + ε.   (2.31)
Thanks to Lemma 2.10.3 in the Appendix 2.10.2, this stochastic game has a value, i.e. there exists V_t such that V_t = V̄_t = V̲_t. In addition, with (2.27) and (2.31), we have
V̄_t ≤ Y_t ≤ V̲_t,
i.e. V_t = V̄_t = V̲_t = Y_t. The proof is complete. □
Now we begin to prove the existence and uniqueness of the solution of the RBSDE. Since, from the previous result, we know the necessary form of the state-process Y_t, we look for Z, K⁺ and K⁻. For this we introduce the following:
N_t = E[ξ + ∫_0^T g(s) ds | F_t] − ∫_0^t g(s) ds,   (2.32)
L^ξ_t = L_t 1_{t<T} + ξ 1_{t=T},   L̃_t = L^ξ_t − N_t,
U^ξ_t = U_t 1_{t<T} + ξ 1_{t=T},   Ũ_t = U^ξ_t − N_t.
Obviously, N_t is a continuous process on [0, T] and N ∈ D²(0, T). Then L̃_t, Ũ_t are RCLL processes on [0, T], belong to D²(0, T), and satisfy
L̃_t ≤ Ũ_t, 0 ≤ t ≤ T,
L̃_{T−} ≤ L̃_T = 0 = Ũ_T ≤ Ũ_{T−}.
Then, from (2.26), we get
E[R_t(σ, τ) | F_t] = E[L̃_τ 1_{τ≤σ} + Ũ_σ 1_{σ<τ} | F_t] + N_t.
When we consider the Dynkin game problem with payoff R_t(σ, τ), with t = 0: player 1 chooses the stopping time σ, player 2 chooses the stopping time τ, and R_0(σ, τ) represents the amount paid by player 1 to player 2. Player 1 therefore tries to minimize the payoff while player 2 tries to maximize it. The game stops when one player decides to stop, that is, at the stopping time σ ∧ τ, or at T if σ = τ = T. From Proposition 2.3.1, if the value of the Dynkin game exists, then Y_t satisfies
Y_t = ess inf_{σ∈T_t} ess sup_{τ∈T_t} E[L̃_τ 1_{τ≤σ} + Ũ_σ 1_{σ<τ} | F_t] + N_t   (2.33)
= ess sup_{τ∈T_t} ess inf_{σ∈T_t} E[L̃_τ 1_{τ≤σ} + Ũ_σ 1_{σ<τ} | F_t] + N_t.
Thanks to theorem 2.10.2 in the Appendix 2.10.2, in order to study the value of the Dynkin game, we turn to the system
X⁺ = S(L̃ + X⁻),   (2.34)
X⁻ = S(−Ũ + X⁺),
where S denotes the Snell envelope (see definition 2.10.1 in the Appendix 2.10.2). This system was introduced by Bismut (1977) and was studied by him and by Alario-Nazaret (1982). In the Appendix, we recall some results of Alario-Nazaret from her thesis (1982) and from [2]. The following theorem is deduced from theorem 2.10.1 in the Appendix 2.10.2.
Theorem 2.3.1. The system (2.34) admits a solution (X⁺, X⁻) in D²(0, T) × D²(0, T).
Proof. This theorem is a direct application of theorem 2.10.1 in the Appendix 2.10.2; the only thing that we need to point out is that assumption 2.1.5 leads to
L̃ ≤ X − X′ ≤ Ũ
for some positive F_t-supermartingales (X, X′) of class D. This is easily seen if we take
X_t = J⁺_0 + ∫_0^t φ⁺_s dB_s + E[ξ⁺ + ∫_0^T g⁺(s) ds | F_t] − ∫_0^t g⁺(s) ds − V⁺_t − (J_T − ξ)⁺ 1_{t=T},
X′_t = J⁻_0 + ∫_0^t φ⁻_s dB_s + E[ξ⁻ + ∫_0^T g⁻(s) ds | F_t] − ∫_0^t g⁻(s) ds − V⁻_t − (J_T − ξ)⁻ 1_{t=T},
where J⁺, φ⁺, ξ⁺, g⁺ and (J_T − ξ)⁺ (resp. J⁻, φ⁻, ξ⁻, g⁻ and (J_T − ξ)⁻) are the positive (resp. negative) parts of J, φ, ξ, g and (J_T − ξ), respectively. Then X and X′ belong to class D by the assumptions on ξ, g, J, φ and V^±. □
With these results, we get the following theorem, which gives the method to find the processes Z, K⁺ and K⁻. The proof proceeds in the same way as in the continuous case in [19], and is even easier, since in the discontinuous case we do not need to prove the continuity of Y.
Theorem 2.3.2. Let us consider the equation
π(K⁺) = S(L̃ + π(K⁻)),   (2.35)
π(K⁻) = S(−Ũ + π(K⁺)),
where S denotes the Snell envelope and π_t(V) = E[V_T | F_t] − V_t. Under assumption 2.1.5, this equation has a solution (K⁺, K⁻) ∈ A²(0, T) × A²(0, T); then the triple (Y, Z, K), where K = K⁺ − K⁻, with
Y := N + π(K⁺) − π(K⁻)   (2.36)
and Z ∈ H²_d(0, T) uniquely determined via
E[ξ + ∫_0^T g(s) ds + K⁺_T − K⁻_T | F_t] = N_0 + E[K⁺_T] − E[K⁻_T] + ∫_0^t Z_s dB_s, 0 ≤ t ≤ T,   (2.37)
is the unique solution of the RBSDE.
Proof. Since assumption 2.1.5 is satisfied, by theorem 2.3.1 the system (2.34) admits a solution (X⁺, X⁻) ∈ D²(0, T) × D²(0, T). By Lemma 2.10.2 in the Appendix, there exists a pair (K⁺, K⁻) ∈ A²(0, T) × A²(0, T) which solves the equation (2.35): in fact, (2.35) is equivalent to (2.34) when we set X⁺ = π(K⁺), X⁻ = π(K⁻).
Then, by theorem 2.10.2 in the Appendix 2.10.2, Y = N + π(K⁺) − π(K⁻) is the value of the Dynkin game, as in (2.33); by (2.32), (2.36) and (2.37), we have
Y_t + ∫_0^t g(s) ds + K⁺_t − K⁻_t = E[ξ + ∫_0^T g(s) ds + K⁺_T − K⁻_T | F_t] = Y_0 + ∫_0^t Z_s dB_s,   (2.38)
for 0 ≤ t ≤ T, where Y_0 = N_0 + E[K⁺_T − K⁻_T]; in particular, Y_T = ξ; thus
ξ + ∫_0^T g(s) ds + K⁺_T − K⁻_T = Y_0 + ∫_0^T Z_s dB_s.   (2.39)
From (2.38) and (2.39), we deduce (ii) of definition 2.1.2:
Y_t = ξ + ∫_t^T g(s) ds + K⁺_T − K⁺_t − (K⁻_T − K⁻_t) − ∫_t^T Z_s dB_s.
From the definition of the Snell envelope and (2.35), we have
π(K⁺) ≥ L̃ + π(K⁻),   π(K⁻) ≥ −Ũ + π(K⁺).
Then, with (2.32) and (2.36), it follows that
L ≤ N + L̃ ≤ Y = N + π(K⁺) − π(K⁻) ≤ Ũ + N ≤ U.
Since the process K⁺ (resp. K⁻) is the increasing process in the Doob–Meyer decomposition of the Snell envelope S(L̃ + π(K⁻)) (resp. S(−Ũ + π(K⁺))), by Lemma 2.10.1 in the Appendix 2.10.1 we get
0 = ∫_0^T (S_{t−}(L̃ + π(K⁻)) − L̃_{t−} − π_{t−}(K⁻)) dK⁺_t = ∫_0^T (Y_{t−} − L_{t−}) dK⁺_t,
0 = ∫_0^T (S_{t−}(−Ũ + π(K⁺)) + Ũ_{t−} − π_{t−}(K⁺)) dK⁻_t = ∫_0^T (U_{t−} − Y_{t−}) dK⁻_t,
almost surely, which shows that (iii) and (iv) of definition 2.1.2 are satisfied.
Finally, for (i) of definition 2.1.2: we know that the equation (2.35) has a fixed point (K⁺, K⁻) ∈ A²(0, T) × A²(0, T), with N ∈ D²(0, T); it follows that Y ∈ D²(0, T), and Z ∈ H²_d(0, T) comes from the Ito representation of the square-integrable martingale E[ξ + ∫_0^T g(s) ds + K⁺_T − K⁻_T | F_t].
Uniqueness follows from Proposition 2.3.1. □
Finally, we get the following theorem.
Theorem 2.3.3. For a given ξ ∈ L²(F_T), a process g(t, ω) ∈ H²(0, T), and two RCLL progressively measurable real-valued processes L, U satisfying assumptions 2.1.4 and 2.1.5, there exists a unique triple (Y, Z, K), with Y ∈ D²(0, T), Z ∈ H²_d(0, T) and K = K⁺ − K⁻, K⁺, K⁻ ∈ A²(0, T), which is the solution of the RBSDE with two barriers L, U.
Now we consider the general case, that is, when g may depend on (y, z); for this we shall use a fixed point method. This method was first introduced by Pardoux and Peng ([58]), and was also used by J. Cvitanic and I. Karatzas ([19], 1996) in the case of two continuous barriers.
Theorem 2.3.4. Let ξ be a given random variable in L²(F_T), g a coefficient which satisfies assumption 2.1.2, and L, U two RCLL progressively measurable real-valued processes which satisfy assumptions 2.1.4 and 2.1.5. Then there exists a unique triple (Y, Z, K), with Y ∈ D²(0, T), Z ∈ H²_d(0, T), K = K⁺ − K⁻ and K⁺, K⁻ ∈ A²(0, T), which is the solution of the RBSDE with two barriers L, U. The uniqueness holds in the following sense: if there exists another triple (Y′, Z′, K′), with K′ = K′⁺ − K′⁻ and K′^± ∈ A²(0, T), satisfying (i)–(iv) of definition 2.1.2, then Y_t = Y′_t, Z_t = Z′_t, K_t = K′_t, for 0 ≤ t ≤ T.
Proof. Denote by S the space of progressively measurable processes (Y_t, Z_t), 0 ≤ t ≤ T, valued in R × R^d, which satisfy E∫_0^T (|Y_s|² + |Z_s|²) ds < ∞. Given (ϕ, ψ) ∈ S, we define ḡ(t, ω) by setting ḡ(t, ω) = g(t, ω, ϕ(t, ω), ψ(t, ω)); then, by theorem 2.3.3, if we consider the RBSDE with coefficient ḡ, a unique solution (Y, Z, K), with K = K⁺ − K⁻ and (Y, Z, K⁺, K⁻) ∈ D²(0, T) × H²_d(0, T) × (A²(0, T))², exists. In particular, (Y, Z) ∈ S. In this way, we construct a mapping
Φ : S → S, via (Y, Z) = Φ(ϕ, ψ).
In order to establish the unique solution of the RBSDE, it is sufficient to prove that the mapping Φ is a contraction with respect to an appropriate norm on S, defined by
‖(Y, Z)‖_β := (E[∫_0^T e^{βt}(|Y_t|² + |Z_t|²) dt])^{1/2},
for an appropriate β ∈ (0, ∞) which will be determined later.
Let (ϕ⁰, ψ⁰) be another pair in S, and let (Y⁰, Z⁰) = Φ(ϕ⁰, ψ⁰), with K⁰, be the unique solution of the RBSDE with coefficient ḡ⁰(t, ω) = g(t, ω, ϕ⁰(t, ω), ψ⁰(t, ω)). We define
ϕ̄ = ϕ − ϕ⁰, ψ̄ = ψ − ψ⁰, Ȳ = Y − Y⁰, Z̄ = Z − Z⁰, K̄ = K − K⁰.
Clearly, dȲ_t = −[g(t, ϕ_t, ψ_t) − g(t, ϕ⁰_t, ψ⁰_t)] dt − dK̄_t + Z̄_t dB_t, and Y_t − Y_{t−} = −(K_t − K_{t−}), Y⁰_t − Y⁰_{t−} = −(K⁰_t − K⁰_{t−}), so Ȳ_t − Ȳ_{t−} = −(K̄_t − K̄_{t−}). Applying Ito's formula to e^{βt}Ȳ_t² and taking
expectation on both sides, we get
E[e^{βt}Ȳ_t²] + E[∫_t^T e^{βs}(β|Ȳ_s|² + |Z̄_s|²) ds] + E[∑_{s∈(t,T]} e^{βs}(K̄_s − K̄_{s−})²]   (2.40)
= 2E∫_t^T e^{βs} Ȳ_{s−} dK̄_s − 2E∫_t^T e^{βs} Ȳ_s Z̄_s dB_s + 2E∫_t^T e^{βs} Ȳ_s [g(s, ϕ_s, ψ_s) − g(s, ϕ⁰_s, ψ⁰_s)] ds
≤ 2kE∫_t^T e^{βs} |Ȳ_s| (|ϕ̄_s| + |ψ̄_s|) ds
≤ 4k²E∫_t^T e^{βs} |Ȳ_s|² ds + (1/2) E∫_t^T e^{βs} (|ϕ̄_s|² + |ψ̄_s|²) ds,
where k is the Lipschitz constant in (2.2). For the stochastic integral in the second line, we have
E[(∫_0^T e^{2βs} (Ȳ_s)² |Z̄_s|² ds)^{1/2}] ≤ e^{βT} E[sup_{0≤t≤T} |Ȳ_t| (∫_0^T |Z̄_s|² ds)^{1/2}]
≤ (1/2) e^{βT} E[sup_{0≤t≤T} (Ȳ_t)² + ∫_0^T |Z̄_s|² ds] < ∞,
so the local martingale ∫_0^· e^{βs} Ȳ_s Z̄_s dB_s is a true martingale, and its expectation vanishes.
For the term E∫_t^T e^{βs} Ȳ_{s−} dK̄_s = E∫_t^T e^{βs} Ȳ_{s−} d(K̄⁺_s − K̄⁻_s), with K̄⁺ = K⁺ − K⁰⁺ and K̄⁻ = K⁻ − K⁰⁻, notice that, since (Y, Z, K) and (Y⁰, Z⁰, K⁰) satisfy (iii) and (iv) of definition 2.1.2, we have
∫_t^T e^{βs} Ȳ_{s−} dK̄_s = ∫_t^T e^{βs} Ȳ_{s−} dK̄⁺_s − ∫_t^T e^{βs} Ȳ_{s−} dK̄⁻_s ≤ 0,
in view of the following:
∫_t^T e^{βs} Ȳ_{s−} dK̄⁺_s = ∫_t^T e^{βs} (Y_{s−} − Y⁰_{s−}) dK⁺_s + ∫_t^T e^{βs} (Y⁰_{s−} − Y_{s−}) dK⁰⁺_s
= ∫_t^T e^{βs} (Y_{s−} − L_{s−}) dK⁺_s + ∫_t^T e^{βs} (L_{s−} − Y⁰_{s−}) dK⁺_s + ∫_t^T e^{βs} (Y⁰_{s−} − L_{s−}) dK⁰⁺_s + ∫_t^T e^{βs} (L_{s−} − Y_{s−}) dK⁰⁺_s
≤ 0,
and
∫_t^T e^{βs} Ȳ_{s−} dK̄⁻_s = ∫_t^T e^{βs} (Y_{s−} − Y⁰_{s−}) dK⁻_s + ∫_t^T e^{βs} (Y⁰_{s−} − Y_{s−}) dK⁰⁻_s
= ∫_t^T e^{βs} (Y_{s−} − U_{s−}) dK⁻_s + ∫_t^T e^{βs} (U_{s−} − Y⁰_{s−}) dK⁻_s + ∫_t^T e^{βs} (Y⁰_{s−} − U_{s−}) dK⁰⁻_s + ∫_t^T e^{βs} (U_{s−} − Y_{s−}) dK⁰⁻_s
≥ 0.
Now, if we choose β = 1 + 4k² in the definition of the norm, we deduce from the inequality (2.40) that
E[∫_0^T e^{βs}(|Ȳ_s|² + |Z̄_s|²) ds] ≤ (1/2) E∫_0^T e^{βs}(|ϕ̄_s|² + |ψ̄_s|²) ds,
i.e. the mapping Φ is a contraction. The proof is complete. □
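The fixed-point argument can be mimicked numerically. The sketch below is not from the thesis: it freezes the driver at the previous iterate, as the map Φ does, on a discrete two-barrier scheme; the driver is an arbitrary choice depending on y only, with Lipschitz constant k = 0.4, so (since projection onto [L, U] is 1-Lipschitz) successive sup-norm differences should contract by a factor of at most kT = 0.4.

```python
import numpy as np

# Sketch of the fixed-point iteration in Theorem 2.3.4 (not from the thesis).
# Phi maps a process phi to the solution of the discrete two-barrier scheme
# with frozen driver g(t, phi_t); here g depends on y only, Lipschitz k = 0.4.
# One gets  sup |Y^{m+1} - Y^m| <= k*T * sup |Y^m - Y^{m-1}|,
# a contraction for kT < 1, in the spirit of the beta-norm argument above.

T, N, klip = 1.0, 40, 0.4
h = T / N
sqh = np.sqrt(h)
g = lambda t, y: klip * np.sin(y)
Lb = lambda t: -0.5
Ub = lambda t: 0.5
b = sqh * (2.0 * np.arange(N + 1) - N)
xi = np.clip(b, -0.5, 0.5)

def Phi(phi):                       # phi: list of arrays, phi[k] has length k+1
    Y = [None] * (N + 1)
    Y[N] = xi
    for k in range(N - 1, -1, -1):
        e = 0.5 * (Y[k + 1][:-1] + Y[k + 1][1:]) + h * g(k * h, phi[k])
        Y[k] = np.clip(e, Lb(k * h), Ub(k * h))
    return Y

phi = [np.zeros(k + 1) for k in range(N + 1)]
diffs = []
for _ in range(20):
    new = Phi(phi)
    diffs.append(max(np.max(np.abs(a - c)) for a, c in zip(new[:-1], phi[:-1])))
    phi = new
print(diffs[0], diffs[-1])
```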
2.3.2 The application to the mixed game problem
Now we use the RBSDE with two barriers as a tool to solve a stochastic mixed game problem. First, let us briefly describe the setting of the problem.
Let C be the space of continuous functions from [0, T] to R^d, endowed with the uniform convergence norm, and let σ = (σ_{ij})_{i,j=1,…,d} be a map from [0, T] × C into R^{d×d}, the space of d-dimensional square matrices, such that:
Assumption 2.3.1. (i) For any ζ, continuous and P-measurable process with values in R^d, the process σ_{ij}(t, ζ) is P-measurable, 1 ≤ i, j ≤ d; here P is the σ-algebra of progressively measurable subsets of [0, T] × Ω.
(ii) For any x ∈ C, the matrix σ(t, x) is invertible and its inverse σ^{-1} is bounded.
(iii) There exists a constant c such that, for all t ∈ [0, T] and x, x′ ∈ C, ‖σ(t, x) − σ(t, x′)‖ ≤ c‖x − x′‖ and ‖σ(t, x)‖ ≤ c(1 + ‖x‖).
These assumptions on σ imply that the stochastic differential equation dX_t = σ(t, X_t) dB_t, with X_0 = 0 ∈ R^d and t ≤ T, has a unique solution X (([42], 1991), ([70], 1991)).
Let us now consider compact metric spaces W and V, and denote by 𝒲 (resp. 𝒱) the space of all P-measurable processes with values in W (resp. V). Let ϕ be a function from [0, T] × C × W × V into R^d such that:
Assumption 2.3.2. (i) ϕ is bounded and P ⊗ B(W × V)-measurable; B(W × V) is the Borel σ-algebra on W × V.
(ii) For all t ∈ [0, T] and x ∈ C, ϕ(t, x, ·, ·) is continuous on W × V.
For any (w, v) ∈ 𝒲 × 𝒱, we define a probability P^{(w,v)} on (Ω, F) by
dP^{(w,v)}/dP = exp{∫_0^T σ^{-1}(s, X_s)ϕ(s, X_s, w_s, v_s) dB_s − (1/2)∫_0^T |σ^{-1}(s, X_s)ϕ(s, X_s, w_s, v_s)|² ds}.
So, according to Girsanov's theorem (([42], 1991) and ([70], 1991)), for any (w, v) ∈ 𝒲 × 𝒱 the process B^{(w,v)} := (B_t − ∫_0^t σ^{-1}(s, X_s)ϕ(s, X_s, w_s, v_s) ds)_{t≤T} is a Brownian motion on (Ω, F, P^{(w,v)}), and X is a weak solution of the SDE
dX_t = ϕ(t, X_t, w_t, v_t) dt + σ(t, X_t) dB^{(w,v)}_t, t ≤ T, X_0 = x.
Suppose that we have a system whose evolution is described by X, and which affects the wealth of two controllers C1 and C2. On their part, the controllers have no influence on the system; they act so as to protect their advantages, which are antagonistic, by means of w ∈ 𝒲 for C1 and v ∈ 𝒱 for C2, via the probability P^{(w,v)}. The couple (w, v) ∈ 𝒲 × 𝒱 is called an admissible control for the game. Both also have the possibility to stop the game, at σ for C1 and at τ for C2, where σ and τ belong to T, the set of all F_t-stopping times; in such a case the game stops. The controlling action is not free, and to the actions of C1 and C2 corresponds a payoff
J(w, σ; v, τ) = E^{(w,v)}[∫_0^{τ∧σ} c(s, X_s, w_s, v_s) ds + L_τ 1_{τ≤σ, τ<T} + U_σ 1_{σ<τ} + ξ 1_{τ∧σ=T}],
where L, U and ξ are those of the previous sections and c(t, x, w, v) is a bounded real-valued function defined on [0, T] × C × W × V which satisfies the same hypotheses as ϕ. The action of C1 (resp. C2) is to minimize (resp. maximize) the payoff J(w, σ; v, τ), whose terms can be understood as follows:
(i) c(t, X, w, v) is the instantaneous reward (resp. cost) for C2 (resp. C1);
(ii) U_σ is the cost (resp. reward) for C1 (resp. C2) if C1 decides to stop the game first;
(iii) L_τ is the reward (resp. cost) for C2 (resp. C1) if C2 decides to stop the game first.
The problem is to find a saddle-point strategy (one should say a fair strategy) for the controllers, i.e. a strategy (w*,σ*;v*,τ*) such that
$$J(w^*,\sigma^*;v,\tau)\le J(w^*,\sigma^*;v^*,\tau^*)\le J(w,\sigma;v^*,\tau^*),$$
for any (w,σ) ∈ 𝒲 × 𝒯 and (v,τ) ∈ 𝒱 × 𝒯.
For (t,x,p,w,v) ∈ [0,T] × C × R^d × W × V we define the Hamiltonian associated with this mixed stochastic game problem by H(t,x,p,w,v) = pσ^{-1}(t,x)ϕ(t,x,w,v) + c(t,x,w,v), and we suppose the following assumption, called the Isaacs condition ([6], 1979), ([35], 1995), ([36], 1995) and ([37], 2000):
Assumption 2.3.3.
$$\inf_{w\in W}\sup_{v\in V}H(t,X,p,w,v)=\sup_{v\in V}\inf_{w\in W}H(t,X,p,w,v).$$
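The Isaacs condition can be checked numerically on a grid for a concrete Hamiltonian. The toy Hamiltonian below, H(w,v) = p·sin(w) + cos(v), is a hypothetical separable choice (any H of the form f(w) + g(v) satisfies the condition); it is not the H of the thesis.

```python
import numpy as np

# Grid check of inf_w sup_v H = sup_v inf_w H for a separable toy Hamiltonian.
p = 2.0
w = np.linspace(0.0, 2.0 * np.pi, 201)            # compact control set W
v = np.linspace(0.0, 2.0 * np.pi, 201)            # compact control set V
H = p * np.sin(w)[:, None] + np.cos(v)[None, :]   # H[i, j] = H(w_i, v_j)

inf_sup = H.max(axis=1).min()   # inf over w of sup over v
sup_inf = H.min(axis=0).max()   # sup over v of inf over w
# for separable H both sides equal min_w p*sin(w) + max_v cos(v) = -p + 1
```

For a general H the two grid values may differ; Assumption 2.3.3 is exactly the postulate of their equality.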
Under Assumption 2.3.3, by Beneš' theorem ([7], 1970) there exists a couple of 𝒫 ⊗ B(R^d)-measurable functions w*(t,X,p) and v*(t,X,p), with values respectively in W and V, such that P-a.s., for any (t,p) ∈ [0,T] × R^d, w ∈ W and v ∈ V,
$$H(t,X,p,w^*(t,X,p),v^*(t,X,p))=\inf_{w\in W}\sup_{v\in V}H(t,X,p,w,v)=\sup_{v\in V}\inf_{w\in W}H(t,X,p,w,v)\qquad(2.41)$$
and
$$H(t,X,p,w^*(t,X,p),v)\le H(t,X,p,w^*(t,X,p),v^*(t,X,p))\le H(t,X,p,w,v^*(t,X,p)).$$
On the other hand, since ϕ and σ^{-1} are bounded, the function p ↦ H(t,X,p,w,v) is uniformly Lipschitz in p, and it is easily deduced from (2.41) that the function p ↦ H(t,X,p,w*(t,X,p),v*(t,X,p)) is uniformly Lipschitz in p as well.
Then we have the following theorem, which is an application of the RBSDE with two RCLLbarriers, and generalizes the result of ([37], 2000) to the case when L and U are only RCLL.
Theorem 2.3.5. Assume Assumption 2.3.3 holds, and let (Y*,Z*,K*), with K* = K*,+ − K*,−, be the solution of the RBSDE with parameters (H(t,X,z,w*(t,X,z),v*(t,X,z)), ξ) and two RCLL barriers L, U satisfying Assumption 2.1.5. Set w* := (w*(t,X,Z*_t))_{t≤T} and v* := (v*(t,X,Z*_t))_{t≤T}. Then Y*_0 is the value of the game. Moreover, if we assume that L and −U are left upper semicontinuous, then, setting τ̂ := inf{t ∈ [0,T] : Y*_t ≤ L_t} ∧ T and σ̂ := inf{t ∈ [0,T] : Y*_t ≥ U_t} ∧ T, we have Y*_0 = J(w*,σ̂;v*,τ̂) and (w*,σ̂;v*,τ̂) is a saddle-point strategy for the mixed stochastic game problem.
Proof. Since (Y*,Z*,K*), with K* = K*,+ − K*,−, is the solution of the RBSDE with two barriers associated with (ξ, H(t,X,z,w*(t,X,z),v*(t,X,z)), L, U), for t ≤ T we have
$$Y^*_t=\xi+\int_t^T H(s,X,Z^*_s,w^*(s,X,Z^*_s),v^*(s,X,Z^*_s))\,ds+K^{*,+}_T-K^{*,+}_t-(K^{*,-}_T-K^{*,-}_t)-\int_t^T Z^*_s\,dB_s.$$
By the same method as in [37] (Hamadène, S. and Lepeltier, J.P., 2000), except that we now have to work with ε-optimal stopping times as in Proposition 3.1 (since L and U are only RCLL), we get that Y*_0 is the value of the mixed stochastic game.
Notice that if L and −U are left upper semicontinuous, then the solution Y* is continuous and τ̂, σ̂ are optimal times, i.e. K*,+_{τ̂∧σ̂} = K*,−_{τ̂∧σ̂} = 0; following the same argument as in Theorem 4 of [37], we deduce that Y*_0 = J(w*,σ̂;v*,τ̂) and that (w*,σ̂;v*,τ̂) is a saddle-point strategy for the mixed stochastic game problem. ¤
2.4 Statements and definitions of reflected BSDE with L2-barriers
2.4.1 Reflected BSDE with one L2-barrier
Now we study the reflected BSDE with one L² barrier. Consider a coefficient g : [0,T] × Ω × R × R^d → R, a given 𝒫 × B(R) × B(R^d)-measurable function satisfying Assumption 2.1.2.
We first introduce the following notion of g–supersolution, which is parallel to that of PDE theory.
Definition 2.4.1. (g–supersolution, cf. [30]) We say a triple (Y,Z,V) ∈ D²(0,T) × H²_d(0,T) × A²(0,T) is a g–supersolution (resp. g–subsolution) if
$$Y_t=Y_T+\int_t^T g(s,Y_s,Z_s)\,ds+V_T-V_t-\int_t^T Z_s\,dB_s,\quad t\in[0,T].\qquad(2.42)$$
We observe that if both (Y, Z, V ) and (Y, Z ′, V ′) satisfy (2.42), then we have Z = Z ′ and V = V ′.For this reason we often simply call Y a g–supersolution.
Remark 2.4.1. We also observe that, given ξ ∈ L²(F_T) and V ∈ A²(0,T), there exists a unique solution (Y,Z) ∈ D²(0,T) × H²_d(0,T) of (2.42) with Y_T = ξ. This is equivalent to solving for (Ȳ,Z) := (Y + V, Z) ∈ D²(0,T) × H²_d(0,T) the following standard BSDE (cf. [58]):
$$\bar Y_t=\xi+V_T+\int_t^T g(s,\bar Y_s-V_s,Z_s)\,ds-\int_t^T Z_s\,dB_s.\qquad(2.43)$$
Remark 2.4.2. In [65] (1999), Peng obtained the following result: Y is a g–supersolution if and only if it is a g–supermartingale (a g–supermartingale is defined like a classical supermartingale, with a notion of nonlinear expectations, called g–expectations, in place of the classical linear expectation). This is a nonlinear version of the Doob–Meyer decomposition theorem; the increasing process V corresponds to the one in the classical supermartingale decomposition.
We will first consider a reflected BSDE with one lower L²-obstacle L. Similarly to Section 2.2, we assume:

Assumption 2.4.1. The barrier L_t, 0 ≤ t ≤ T, is a real-valued progressively measurable process in L²(0,T) satisfying
$$E\Big[\operatorname*{ess\,sup}_{0\le t\le T}(L^+_t)^2\Big]<+\infty,\qquad(2.44)$$
and L_T ≤ ξ a.s.
Let us now introduce our generalized notion of RBSDE with one lower barrier L.
Definition 2.4.2. Let ξ be a given random variable in L²(F_T) and let the coefficient g be a given 𝒫 × B(R) × B(R^d)-measurable function satisfying Assumption 2.1.2. A triple (Y,Z,K) ∈ D²(0,T) × H²_d(0,T) × A²(0,T) is called a solution of the RBSDE associated with (ξ,g) and a lower obstacle L ∈ L²(0,T) if
(i) (Y,Z,K) is a g–supersolution on [0,T] with Y_T = ξ, i.e.
$$Y_t=\xi+\int_t^T g(s,Y_s,Z_s)\,ds+K_T-K_t-\int_t^T Z_s\,dB_s;\qquad(2.45)$$
(ii) Y dominates L, i.e. Y_t ≥ L_t, dP ⊗ dt-a.s.;
(iii) the following (generalized) Skorohod condition (cf. [71]) holds:
$$\int_0^T(Y_{s^-}-L^*_{s^-})\,dK_s=0\ \text{a.s.},\quad\forall L^*\in D^2(0,T)\ \text{such that}\ L_t\le L^*_t\le Y_t,\ dP\otimes dt\text{-a.s.}\qquad(2.46)$$
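In discrete time the Skorohod condition has a transparent counterpart: the reflecting process pushes only at times where the solution sits on the obstacle. The sketch below, with g ≡ 0, a hypothetical deterministic obstacle and the minimal-push recursion Y_k = max(Y_{k+1}, L_k), verifies the flat-off identity ∑(Y_k − L_k)ΔK_k = 0.

```python
import numpy as np

# Discrete reflected recursion with zero driver: Y_N = xi, Y_k = max(Y_{k+1}, L_k),
# and dK_k = Y_k - Y_{k+1} is the minimal push keeping Y above the obstacle.
N = 200
t = np.linspace(0.0, 1.0, N + 1)
L = np.sin(3.0 * np.pi * t) - 0.2 * t   # toy lower obstacle, with L[-1] <= xi
xi = 0.0

Y = np.empty(N + 1)
dK = np.zeros(N + 1)
Y[N] = xi
for k in range(N - 1, -1, -1):
    Y[k] = max(Y[k + 1], L[k])
    dK[k] = Y[k] - Y[k + 1]             # > 0 only at steps where Y[k] = L[k]

flat_off = float(np.sum((Y[:-1] - L[:-1]) * dK[:-1]))
# flat_off == 0.0: K increases only on {Y = L}, the Skorohod condition
```

Testing against every admissible intermediate obstacle L*, as (2.46) does, is what makes the condition robust when L is merely an L² process without path regularity.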
The difference between the above definition and those of [28], with a continuous obstacle, and of Definition 2.1.1, with a càdlàg obstacle, lies in the Skorohod condition (iii). The following simple result links these notions.
Proposition 2.4.1. If we assume furthermore that L ∈ D²(0,T), then a triple (Y,Z,K) ∈ D²(0,T) × H²_d(0,T) × A²(0,T) is a solution of the RBSDE with lower reflecting obstacle L and terminal condition ξ ∈ L²(F_T) if and only if it satisfies the above conditions (i), (ii) and the following Skorohod condition:
$$\int_0^T(Y_{s^-}-L_{s^-})\,dK_s=0,\quad\text{a.s.}\qquad(2.47)$$
Proof. (2.46)⇒(2.47) is obvious. To prove (2.47)⇒(2.46), we only need to observe that, for each L* ∈ D²(0,T) such that L_t ≤ L*_t ≤ Y_t, we have
$$0\le\int_0^T(Y_{s^-}-L^*_{s^-})\,dK_s\le\int_0^T(Y_{s^-}-L_{s^-})\,dK_s=0.\qquad ¤$$
From the above definition, Y is a g–supersolution that dominates L. Indeed, our main results show that Y is the smallest g–supersolution that dominates L.
Theorem 2.4.1. We assume that the lower obstacle L ∈ L²(0,T) satisfies Assumption 2.4.1. Then there exists a unique solution (Y,Z,K) of the RBSDE with the lower obstacle L and the terminal condition Y_T = ξ. Moreover, Y is the smallest g–supersolution that dominates L with terminal condition Y_T = ξ.
The proof of the existence will be given in Section 2.6; meanwhile, our formulation of the reflected BSDE permits us to derive easily the following continuous dependence theorem, which also yields the uniqueness part of Theorem 2.4.1.
Proposition 2.4.2. We assume that the lower obstacle L ∈ L²(0,T) satisfies Assumption 2.4.1. Let φ^i ∈ L²(0,T) and ξ^i ∈ L²(F_T), i = 1,2, be given, and let (Y^i,Z^i,K^i) be the solutions of the RBSDEs with lower obstacle L, terminal conditions ξ^i and coefficients g^i(t,y,z) = g(t,y,z) + φ^i(t), i.e., they are g^i–supersolutions of the form
$$Y^i_t=\xi^i+\int_t^T\big[g(s,Y^i_s,Z^i_s)+\varphi^i_s\big]\,ds+K^i_T-K^i_t-\int_t^T Z^i_s\,dB_s,$$
and satisfy (2.46). Then we have
$$E\Big[\sup_{0\le t\le T}|Y^1_t-Y^2_t|^2+\sup_{0\le t\le T}|K^1_t-K^2_t|^2\Big]+E\int_0^T|Z^1_t-Z^2_t|^2\,dt\le CE\Big[|\xi^1-\xi^2|^2+\int_0^T|\varphi^1_s-\varphi^2_s|^2\,ds\Big],\qquad(2.48)$$
where the constant C depends only on T and the Lipschitz constant k of g, given in (2.2).
Proof. Setting Ŷ = Y¹ − Y², Ẑ = Z¹ − Z², K̂ = K¹ − K², ξ̂ = ξ¹ − ξ², ĝ_s = g(s,Y¹_s,Z¹_s) − g(s,Y²_s,Z²_s) and φ̂ = φ¹ − φ², we have
$$\hat Y_t=\hat\xi+\int_t^T[\hat g_s+\hat\varphi_s]\,ds+\hat K_T-\hat K_t-\int_t^T\hat Z_s\,dB_s.\qquad(2.49)$$
Their jumps satisfy Ŷ_t − Ŷ_{t⁻} = −(K̂_t − K̂_{t⁻}). Applying Itô's formula to |Ŷ_t|², we have
$$|\hat Y_t|^2+\int_t^T|\hat Z_s|^2\,ds+\sum_{t\le s\le T}(\hat K_s-\hat K_{s^-})^2=\hat\xi^2+2\int_t^T\hat Y_s(\hat g_s+\hat\varphi_s)\,ds+2\int_t^T\hat Y_{s^-}\,d\hat K_s-2\int_t^T\hat Y_s\cdot\hat Z_s\,dB_s.\qquad(2.50)$$
We set L*_t := Y¹_t ∧ Y²_t. It is clear that L* ∈ D²(0,T) satisfies L_t ≤ L*_t ≤ Y^i_t, dP ⊗ dt-a.s., i = 1,2. Thanks to the generalized Skorohod condition (2.46), we have
$$\int_0^T(Y^1_{s^-}-L^*_{s^-})\,dK^1_s=\int_0^T(Y^2_{s^-}-L^*_{s^-})\,dK^2_s=0.$$
The third term on the right-hand side of (2.50) is dominated by 0, since
$$\int_0^T\hat Y_{s^-}\,d\hat K_s=\int_0^T(Y^1_{s^-}-L^*_{s^-})\,dK^1_s+\int_0^T(L^*_{s^-}-Y^2_{s^-})\,dK^1_s+\int_0^T(Y^2_{s^-}-L^*_{s^-})\,dK^2_s+\int_0^T(L^*_{s^-}-Y^1_{s^-})\,dK^2_s\le 0.$$
It follows that
$$|\hat Y_t|^2+\int_t^T|\hat Z_s|^2\,ds+\sum_{t\le s\le T}(\hat K_s-\hat K_{s^-})^2\le\hat\xi^2+2\int_t^T\hat Y_s(\hat g_s+\hat\varphi_s)\,ds-2\int_t^T\hat Y_s\cdot\hat Z_s\,dB_s.\qquad(2.51)$$
By the Lipschitz condition on g, we have |ĝ_s| ≤ k(|Ŷ_s| + |Ẑ_s|). Thus, for any α, β > 0,
$$|\hat Y_t|^2+\Big(1-\frac{1}{\alpha}\Big)\int_t^T|\hat Z_s|^2\,ds+\sum_{t\le s\le T}(\hat K_s-\hat K_{s^-})^2\le|\hat\xi|^2+(2k+\alpha k^2+\beta)\int_t^T|\hat Y_s|^2\,ds+\frac{1}{\beta}\int_t^T|\hat\varphi_s|^2\,ds-2\int_t^T\hat Y_s\cdot\hat Z_s\,dB_s.\qquad(2.52)$$
Setting α = 2, β = 1 and taking expectations, it follows that
$$E|\hat Y_t|^2\le E\hat\xi^2+(2k+2k^2+1)\,E\int_t^T|\hat Y_s|^2\,ds+E\int_t^T|\hat\varphi_s|^2\,ds.$$
It then follows from Gronwall's inequality that
$$E|\hat Y_t|^2\le C\Big(E\hat\xi^2+E\int_t^T|\hat\varphi_s|^2\,ds\Big).$$
We thus have
$$E|\hat Y_t|^2+E\int_0^T|\hat Z_s|^2\,ds\le C\Big(E\hat\xi^2+E\int_0^T|\hat\varphi_s|^2\,ds\Big).$$
With this estimate, applying the Burkholder–Davis–Gundy inequality to (2.51) we deduce the estimate for E[sup_t |Ŷ_t|²] in (2.48); then, applying the Burkholder–Davis–Gundy inequality again to (2.49), we deduce the estimate for E[sup_t |K̂_t|²]. ¤
The uniqueness part in Theorem 2.4.1 is proved by setting ξ1 = ξ2 = ξ, ϕ1 = ϕ2 = 0. We alsohave the following estimate :
Theorem 2.4.2. We assume that the lower obstacle L ∈ L²(0,T) satisfies Assumption 2.4.1. Let (Y,Z,K) be the solution of the RBSDE with coefficient g, terminal condition ξ and lower obstacle L. Then we have
$$E\Big[\sup_{0\le t\le T}|Y_t|^2+\sup_{0\le t\le T}|K_t|^2\Big]+E\int_0^T|Z_t|^2\,dt\le CE\Big[|\xi|^2+\int_0^T|g(s,0,0)|^2\,ds+\operatorname*{ess\,sup}_{0\le t\le T}(L^+_t)^2\Big].$$
Since the proof is similar to the previous one, we omit it.
2.4.2 Reflected BSDE with two L2–barriers
We now consider a BSDE reflected between a lower obstacle L and an upper obstacle U, where L and U are progressively measurable L²-processes. We still make Assumption 2.1.2 on the coefficient g. The obstacles satisfy the following assumption.
Assumption 2.4.2. L, U ∈ L²(0,T) with
$$E\Big[\operatorname*{ess\,sup}_{0\le t\le T}(L^+_t)^2\Big]+E\Big[\operatorname*{ess\,sup}_{0\le t\le T}(U^-_t)^2\Big]<+\infty,\qquad L_T\le\xi\le U_T\ \text{a.s.},\qquad(2.53)$$
and there exists a process $X_t=X_0+A^0_t-K^0_t+\int_0^t Z^0_s\,dB_s$, 0 ≤ t ≤ T, with Z⁰ ∈ H²_d(0,T) and A⁰, K⁰ ∈ A²(0,T), such that
$$L_t\le X_t\le U_t,\quad dP\otimes dt\text{-a.s.}\qquad(2.54)$$
The formulation of the RBSDE with two L2–obstacles is as follows.
Definition 2.4.3. A solution of the BSDE reflected between a lower obstacle L ∈ L²(0,T) and an upper obstacle U ∈ L²(0,T), with parameters (ξ,g), is a triple (Y,Z,K) satisfying
(i) Y ∈ D²(0,T), Z ∈ H²_d(0,T), K = K⁺ − K⁻ with K⁺, K⁻ ∈ A²(0,T);
(ii) (Y,Z) solves the following BSDE on [0,T]:
$$Y_t=\xi+\int_t^T g(s,Y_s,Z_s)\,ds+K^+_T-K^+_t-(K^-_T-K^-_t)-\int_t^T Z_s\,dB_s;\qquad(2.55)$$
(iii) L_t ≤ Y_t ≤ U_t, dP ⊗ dt-a.s.;
(iv) (generalized) Skorohod condition: for each L*, U* ∈ D²(0,T) such that L_t ≤ L*_t ≤ Y_t ≤ U*_t ≤ U_t, dP ⊗ dt-a.s., we have
$$\int_0^T(Y_{s^-}-L^*_{s^-})\,dK^+_s=\int_0^T(Y_{s^-}-U^*_{s^-})\,dK^-_s=0,\quad\text{a.s.}\qquad(2.56)$$
For this reflected BSDE, we have the following main result of existence and uniqueness.

Theorem 2.4.3. We make Assumption 2.4.2. Then there exists at least one solution (Y,Z,K), with K = K⁺ − K⁻, of the RBSDE in the sense of Definition 2.4.3. The solution is unique in the following sense: if (Y',Z',K') is another solution, then Y'_t ≡ Y_t, Z'_t ≡ Z_t and K'_t ≡ K_t, ∀t ∈ [0,T], a.s.
Example 2.4.1. The following example shows that, while uniqueness holds for (Y,Z,K), it fails for the pair (K⁺,K⁻). Take
$$L_t\equiv U_t\equiv 0,\qquad g(t,y,z)\equiv 0,\qquad\xi=0.$$
In this case it is clear that Y_t ≡ 0 is the unique g–solution such that L_t ≤ Y_t ≤ U_t, a.e., a.s., so (Y_t,Z_t,K^+_t,K^-_t) ≡ (0,0,0,0) satisfies (i)–(iv) of Definition 2.4.3. But (Y_t,Z_t,K'^+_t,K'^-_t) ≡ (0,0,t,t) also satisfies (i)–(iv).
Remark 2.4.3. It is easy to check that Assumption 2.1.3 for the RBSDE with one obstacle, as well as Assumption 2.4.2 for the RBSDE with two obstacles, is also necessary for the existence of a solution of the related RBSDE.
The existence part of the proof of Theorem 2.4.3 is given in Section 2.8, while the uniqueness part is a simple consequence of the following continuous dependence theorem, which, once more, shows that our new Skorohod condition (2.56) is a very useful formulation.
Theorem 2.4.4. We make Assumptions 2.1.1, 2.1.2 and 2.4.2. For i = 1,2, let (Y^i,Z^i,K^i), with K^i = K^{i,+} − K^{i,−}, be the solutions of the RBSDE
$$Y^i_t=\xi^i+\int_t^T\big[g(s,Y^i_s,Z^i_s)+\varphi^i_s\big]\,ds+K^i_T-K^i_t-\int_t^T Z^i_s\,dB_s,\qquad(2.57)$$
with two obstacles L, U ∈ L²(0,T), i.e., in the sense of Definition 2.4.3 (i)–(iv). Then we have
$$E\Big[\sup_{0\le t\le T}|Y^1_t-Y^2_t|^2\Big]+E\Big[\sup_{0\le t\le T}|K^1_t-K^2_t|^2\Big]+E\int_0^T|Z^1_t-Z^2_t|^2\,dt\le CE\Big[|\xi^1-\xi^2|^2+\int_0^T|\varphi^1_s-\varphi^2_s|^2\,ds\Big],\qquad(2.58)$$
where the constant C depends only on the Lipschitz constant of g and on T.
Proof. We set Ŷ = Y¹ − Y², Ẑ = Z¹ − Z², K̂ = K¹ − K², ξ̂ = ξ¹ − ξ², ĝ_s = g(s,Y¹_s,Z¹_s) − g(s,Y²_s,Z²_s) and φ̂ = φ¹ − φ², so that
$$\hat Y_t=\hat\xi+\int_t^T[\hat g_s+\hat\varphi_s]\,ds+\hat K_T-\hat K_t-\int_t^T\hat Z_s\,dB_s.\qquad(2.59)$$
Obviously Ŷ_t − Ŷ_{t⁻} = −(K̂_t − K̂_{t⁻}). Applying Itô's formula to |Ŷ_t|², we get
$$|\hat Y_t|^2+\int_t^T|\hat Z_s|^2\,ds+\sum_{t\le s\le T}(\hat K_s-\hat K_{s^-})^2=|\hat\xi|^2+2\int_t^T\hat Y_s(\hat g_s+\hat\varphi_s)\,ds+2\int_t^T\hat Y_{s^-}\,d\hat K_s-2\int_t^T\hat Y_s\cdot\hat Z_s\,dB_s.\qquad(2.60)$$
We define L*_t = Y¹_t ∧ Y²_t and U*_t = Y¹_t ∨ Y²_t. It is clear that L*, U* ∈ D²(0,T) and L_t ≤ L*_t ≤ Y^i_t ≤ U*_t ≤ U_t. By the generalized Skorohod condition (iv) of Definition 2.4.3, we have
$$\int_t^T(Y^1_{s^-}-L^*_{s^-})\,dK^{1,+}_s=\int_t^T(Y^2_{s^-}-L^*_{s^-})\,dK^{2,+}_s=0,$$
$$\int_t^T(Y^1_{s^-}-U^*_{s^-})\,dK^{1,-}_s=\int_t^T(Y^2_{s^-}-U^*_{s^-})\,dK^{2,-}_s=0.$$
Thus, for the term $\int_t^T\hat Y_{s^-}\,d\hat K_s$ in (2.60), we have
$$\int_t^T\hat Y_{s^-}\,d\hat K_s=\int_t^T\hat Y_{s^-}\,d\hat K^+_s-\int_t^T\hat Y_{s^-}\,d\hat K^-_s,$$
and
$$\int_t^T\hat Y_{s^-}\,d\hat K^+_s=\int_t^T(Y^1_{s^-}-L^*_{s^-})\,dK^{1,+}_s+\int_t^T(L^*_{s^-}-Y^2_{s^-})\,dK^{1,+}_s+\int_t^T(Y^2_{s^-}-L^*_{s^-})\,dK^{2,+}_s+\int_t^T(L^*_{s^-}-Y^1_{s^-})\,dK^{2,+}_s\le 0,$$
similarly, $\int_t^T\hat Y_{s^-}\,d\hat K^-_s\ge 0$. Applying these two inequalities to (2.60) yields
$$|\hat Y_t|^2+\int_t^T|\hat Z_s|^2\,ds+\sum_{t\le s\le T}(\Delta\hat K_s)^2\le|\hat\xi|^2+2\int_t^T\hat Y_s(\hat g_s+\hat\varphi_s)\,ds-2\int_t^T\hat Y_s\cdot\hat Z_s\,dB_s.\qquad(2.61)$$
We have now arrived at a position similar to that of (2.51) in the proof of Proposition 2.4.2. We can then analogously obtain (2.58) by using Gronwall's inequality and the Burkholder–Davis–Gundy inequality. ¤
2.5 A generalized monotonic limit theorem for Ito processes
In this section we develop a new convergence theorem for a monotonic sequence of Itô processes. It is a generalized version of the monotonic limit theorem obtained in Peng (1999) (Theorem 2.1 of [65]). In Section 2.8 we will use this result to prove the existence part of Theorem 2.4.3 for the reflected BSDE with two obstacles.
We consider the following sequence of Itô processes:
$$y^i_t=y^i_0+\int_0^t g^i_s\,ds-A^i_t+K^i_t+\int_0^t z^i_s\,dB_s,\quad i=1,2,\cdots.\qquad(2.62)$$
Here, for each i, the processes g^i ∈ H²(0,T) and A^i, K^i ∈ A²(0,T) are given. We assume

Assumption 2.5.1. The increasing processes (A^i,K^i)_{i=1}^∞ satisfy
(h1) A^i is continuous and increasing, with A^i_0 = 0 and E[(A^i_T)²] < ∞;
(h2) K^i is increasing, with K^i_0 = 0;
(h3) K^j_t − K^j_s ≥ K^i_t − K^i_s, ∀ 0 ≤ s ≤ t ≤ T, a.s., ∀ i ≤ j;
(h4) for each t ∈ [0,T], K^i_t ↗ K_t, with E[K²_T] < ∞;
and
Assumption 2.5.2. For (y^i,g^i,z^i)_{i=1}^∞:
(i) (g^i,z^i)_{i=1}^∞ converges weakly to (g⁰,z) in H²_{1+d}(0,T);
(ii) (y^i_t)_{i=1}^∞ converges increasingly to (y_t), with E[sup_{0≤t≤T}|y_t|²] < ∞.   (2.63)

It is clear that
(i) E[sup_{0≤t≤T}|y^i_t|²] ≤ C,
(ii) E∫_0^T|y^i_t − y_t|²dt → 0,   (2.64)
where the constant C is independent of i.
Remark 2.5.1. It is easy to check that the limit y of (y^i)_{i=1}^∞ is an Itô process of the form
$$y_t=y_0+\int_0^t g^0_s\,ds-A_t+K_t+\int_0^t z_s\,dB_s,\qquad(2.65)$$
where A_t is the weak limit of A^i_t in L²(F_T). In general, we cannot prove the strong convergence of (z^i)_{i=1}^∞ in H²_d(0,T). But, as in Peng (1999), we can prove that strong convergence holds in the larger spaces H^p_d(0,T): for each p ∈ [1,2), (z^i) converges strongly in H^p_d(0,T).
Our monotonic limit theorem is as follows.
Theorem 2.5.1. We assume that the sequence of Itô processes (2.62) satisfies Assumptions 2.5.1 and 2.5.2, with (2.63). Then the limit y of (y^i)_{i=1}^∞ has the form (2.65), where A and K are increasing processes in A²(0,T). Here, for each t ∈ [0,T], A_t (resp. K_t) is the weak (resp. strong) limit of (A^i_t)_{i=1}^∞ (resp. (K^i_t)_{i=1}^∞) in L²(F_t). Furthermore, for any p ∈ [1,2), (z^i)_{i=1}^∞ converges strongly to z in H^p_d(0,T), i.e.,
$$\lim_{i\to\infty}E\int_0^T|z^i_s-z_s|^p\,ds=0.\qquad(2.66)$$
If furthermore (A_t)_{t∈[0,T]} is continuous, then we have
$$\lim_{i\to\infty}E\int_0^T|z^i_s-z_s|^2\,ds=0.\qquad(2.67)$$
Remark 2.5.2. A special situation of the above theorem is when K^i_t ≡ 0, i = 1,2,⋯, and thus K_t ≡ 0. This result was obtained in [65]; this special case will also be applied in this paper.
The following lemma is Lemma 2.5 in [65]; the second one gives some regularity results for the limit of increasing processes.
Lemma 2.5.1. Let (x^i(·))_{i=1}^∞ be a sequence of (deterministic) càdlàg functions defined on [0,T] that converges increasingly, i.e., x^i(t) ≤ x^{i+1}(t) for each t ∈ [0,T] and i = 1,2,⋯, to a limit of the form x(t) = b(t) − a(t), where b(·) is a càdlàg function and a(·) is an increasing function with a(0) = 0 and a(T) < ∞. Then x(·) and a(·) are also càdlàg.
Lemma 2.5.2. Let (a^i(·))_{i=1}^∞ be a sequence of deterministic functions defined on [0,T] satisfying
(i) for each i, a^i admits left and right limits, with a^i(0) = 0; for each t ∈ [0,T], a^i(t) ↗ a(t) < ∞, where the limit function a admits left and right limits and a(T) < ∞;
(ii) a^j(t) − a^i(t) ≤ a^j(t') − a^i(t'), for all j ≥ i and 0 ≤ t ≤ t' ≤ T.
Then:
(1) if each a^i is right continuous, a is right continuous: a(t) = a(t⁺) for each t ∈ [0,T);
(2) if each a^i is left continuous, a is left continuous: a(t) = a(t⁻) for each t ∈ (0,T].
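Both hypotheses of the lemma can be illustrated on a concrete sequence. The functions below, a^i(t) = (1 − 1/i)·1_{t ≥ 1/2} on [0,1], are a hypothetical toy example: each a^i is right continuous and increasing, the differences a^j − a^i are nondecreasing in t (hypothesis (ii)), and the pointwise limit a(t) = 1_{t ≥ 1/2} is again right continuous, as conclusion (1) predicts.

```python
import numpy as np

# Toy sequence for Lemma 2.5.2(1): a_i(t) = (1 - 1/i) * 1_{t >= 1/2} on [0, 1].
ts = np.linspace(0.0, 1.0, 1001)

def a_i(i, t):
    return (1.0 - 1.0 / i) * (t >= 0.5)

# hypothesis (ii): a^j - a^i is nondecreasing on the grid for j >= i
d = a_i(7, ts) - a_i(3, ts)
monotone = bool((np.diff(d) >= 0.0).all())

def a_lim(t):
    return float(t >= 0.5)              # pointwise (increasing) limit

# conclusion (1): the limit is right continuous at the jump point t = 1/2
right_cont = a_lim(0.5) == a_lim(0.5 + 1e-9) == 1.0
```

Hypothesis (ii) is what rules out the jump of the limit drifting to the left of the jumps of the approximants, which is how right continuity could otherwise be lost.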
Proof. (1) Letting j → ∞ in (ii):
$$a(t)-a^i(t)\le a(t')-a^i(t'),\quad\forall\,0\le t\le t'\le T.\qquad(2.68)$$
We fix t < T and let t_n ∈ (t,T], n = 1,2,⋯, be such that t_n ↘ t. For each ε > 0 there exists i₀ such that, for all i ≥ i₀, a(T) − a^i(T) ≤ ε. Thus
$$0\le a(t)-a^i(t)\le a(t_n)-a^i(t_n)\le a(T)-a^i(T)\le\varepsilon.$$
Letting n → ∞ and using the right continuity of a^i,
$$0\le a(t)-a^i(t)\le a(t^+)-a^i(t)\le\varepsilon.$$
We thus have |a(t⁺) − a(t)| ≤ ε; since ε can be arbitrarily small, a(t) = a(t⁺).
(2) As in (1) we have (2.68); then, for fixed t > 0, consider a sequence t_n ∈ [0,t) such that t_n ↗ t. For each ε > 0 there exists i₀ such that, for all i ≥ i₀, a(T) − a^i(T) ≤ ε. Thus
$$0\le a(t_n)-a^i(t_n)\le a(t)-a^i(t)\le a(T)-a^i(T)\le\varepsilon.$$
Letting n → ∞ and using the left continuity of a^i,
$$0\le a(t^-)-a^i(t)\le a(t)-a^i(t)\le\varepsilon.$$
We thus have |a(t) − a(t⁻)| ≤ ε; since ε can be arbitrarily small, a(t) = a(t⁻). ¤
The following lemma is taken from [65].
Lemma 2.5.3. Let A be an increasing RCLL process defined on [0,T] with A₀ = 0 and E[A_T²] < ∞. Then, for any δ, ε > 0, there exists a finite number of pairs of stopping times (σ_n,τ_n), n = 0,1,⋯,N, with 0 < σ_n ≤ τ_n ≤ T, such that
(i) (σ_j,τ_j] ∩ (σ_n,τ_n] = ∅ for each j ≠ n;
(ii) E∑_{n=0}^N(τ_n − σ_n) ≥ T − ε;
(iii) ∑_{n=0}^N E∑_{σ_n<t≤τ_n}(ΔA_t)² ≤ δ.
After these preparations, we can prove the generalized monotonic limit theorem.
Proof of Theorem 2.5.1. Since (g^i)_{i=1}^∞ and (z^i)_{i=1}^∞ converge weakly to g⁰ and z in H²(0,T) and H²_d(0,T), respectively, and K^i_t ↗ K_t in L²(F_t), for each stopping time τ ≤ T the following weak convergences hold in L²(F_τ):
$$\int_0^\tau z^i_s\,dB_s\rightharpoonup\int_0^\tau z_s\,dB_s,\qquad\int_0^\tau g^i_s\,ds\rightharpoonup\int_0^\tau g^0_s\,ds,\qquad K^i_\tau\rightharpoonup K_\tau.$$
Since
$$A^i_\tau=-y^i_\tau+y^i_0+K^i_\tau+\int_0^\tau g^i_s\,ds+\int_0^\tau z^i_s\,dB_s,$$
we also have the weak convergence in L²(F_τ):
$$A^i_\tau\rightharpoonup A_\tau:=-y_\tau+y_0+K_\tau+\int_0^\tau g^0_s\,ds+\int_0^\tau z_s\,dB_s.$$
Obviously E[A²_T] < ∞. For any two stopping times σ ≤ τ ≤ T, we have A_σ ≤ A_τ since A^i_σ ≤ A^i_τ; it follows that A is an increasing process. Moreover, from Lemmas 2.5.1 and 2.5.2, K, A and y are càdlàg, so y has the form (2.65). Our key point is to show that (z^i)_{i=1}^∞ converges to z in the strong sense of (2.66). In order to prove this, we apply Itô's formula to (y^i_t − y_t)²
on each given subinterval (σ,τ], where 0 ≤ σ ≤ τ ≤ T are two stopping times. Observe that y_t − y_{t⁻} ≡ (K_t − K_{t⁻}) − (A_t − A_{t⁻}) and y^i_t − y^i_{t⁻} = K^i_t − K^i_{t⁻}. We have
$$E|y^i_\sigma-y_\sigma|^2+E\int_\sigma^\tau|z^i_s-z_s|^2\,ds$$
$$=E|y^i_\tau-y_\tau|^2-E\sum_{t\in(\sigma,\tau]}\big(\Delta(A_t-K_t+K^i_t)\big)^2-2E\int_\sigma^\tau(y^i_s-y_s)(g^i_s-g^0_s)\,ds$$
$$\quad+2E\int_{(\sigma,\tau]}(y^i_s-y_s)\,dA^i_s-2E\int_{(\sigma,\tau]}(y^i_{s^-}-y_{s^-})\,dA_s-2E\int_{(\sigma,\tau]}(y^i_{s^-}-y_{s^-})\,d(K^i_s-K_s)$$
$$=E|y^i_\tau-y_\tau|^2+E\sum_{t\in(\sigma,\tau]}\big[(\Delta A_t)^2-(\Delta K_t-\Delta K^i_t)^2\big]-2E\int_\sigma^\tau(y^i_s-y_s)(g^i_s-g^0_s)\,ds$$
$$\quad+2E\int_{(\sigma,\tau]}(y^i_s-y_s)\,dA^i_s-2E\int_{(\sigma,\tau]}(y^i_s-y_s)\,dA_s-2E\int_{(\sigma,\tau]}(y^i_{s^-}-y_{s^-})\,d(K^i_s-K_s).$$
Since $\int_{(\sigma,\tau]}(y^i_s-y_s)\,dA^i_s\le 0$ and $-2E\int_{(\sigma,\tau]}(y^i_{s^-}-y_{s^-})\,d(K^i_s-K_s)\le 0$, we then have
$$E\int_\sigma^\tau|z^i_s-z_s|^2\,ds\le E|y^i_\tau-y_\tau|^2+E\sum_{t\in(\sigma,\tau]}(\Delta A_t)^2+2E\int_\sigma^\tau|y^i_s-y_s||g^i_s-g^0_s|\,ds+2E\int_{(\sigma,\tau]}|y^i_s-y_s|\,dA_s.\qquad(2.69)$$
The third term on the right side tends to zero since
$$E\int_0^T|y^i_s-y_s||g^i_s-g^0_s|\,ds\le C\Big[E\int_0^T|y^i_s-y_s|^2\,ds\Big]^{\frac12}\to 0.\qquad(2.70)$$
For the last term we have, P-almost surely,
$$|y^1_s-y_s|\ge|y^i_s-y_s|\to 0,\quad\forall s\in[0,T],$$
while
$$E\int_0^T|y^1_s-y_s|\,dA_s\le\Big(E\big[\sup_s|y^1_s-y_s|^2\big]\Big)^{\frac12}\big(E(A_T)^2\big)^{\frac12}<\infty.$$
It then follows from Lebesgue's dominated convergence theorem that
$$E\int_{(0,T]}|y^i_s-y_s|\,dA_s\to 0.\qquad(2.71)$$
By the convergences (2.70) and (2.71), it is clear from the estimate (2.69) that, once A is continuous (thus ΔA_t ≡ 0) on [0,T], z^i tends to z strongly in H²_d(0,T). Thus the second assertion of the theorem, i.e. (2.67), follows.
For the general case, however, the situation becomes more complicated. Thanks to Lemma 2.5.3, for any positive δ and ε there exists a finite number of disjoint intervals (σ_n,τ_n], n = 0,1,⋯,N, where σ_n ≤ τ_n ≤ T are stopping times satisfying
(i) E∑_{n=0}^N[τ_n − σ_n] ≥ T − ε/2;
(ii) ∑_{n=0}^N∑_{σ_n<t≤τ_n}E(ΔA_t)² ≤ δε/3.   (2.72)
Now, for each σ = σ_n and τ = τ_n, we apply the estimate (2.69) and then take the sum. It follows that
$$\sum_{n=0}^N E\int_{\sigma_n}^{\tau_n}|z^i_s-z_s|^2\,ds\le\sum_{n=0}^N E|y^i_{\tau_n}-y_{\tau_n}|^2+\sum_{n=0}^N E\sum_{t\in(\sigma_n,\tau_n]}(\Delta A_t)^2+2E\int_0^T|y^i_s-y_s||g^i_s-g^0_s|\,ds+2E\int_{(0,T]}|y^i_s-y_s|\,dA_s.$$
Using the convergence results (2.70) and (2.71) and taking (2.72)-(ii) into consideration, it follows that
$$\limsup_{i\to\infty}\sum_{n=0}^N E\int_{\sigma_n}^{\tau_n}|z^i_s-z_s|^2\,ds\le\sum_{n=0}^N E\sum_{t\in(\sigma_n,\tau_n]}(\Delta A_t)^2\le\frac{\varepsilon\delta}{3}.$$
Thus there exists an integer l_{εδ} > 0 such that, whenever i ≥ l_{εδ}, we have
$$\sum_{n=0}^N E\int_{\sigma_n}^{\tau_n}|z^i_s-z_s|^2\,ds\le\frac{\varepsilon\delta}{2}.$$
Thus, in the product space ([0,T] × Ω, B([0,T]) × F, m × P) (here m stands for the Lebesgue measure on [0,T]), Chebyshev's inequality gives
$$m\times P\Big(\Big\{(s,\omega)\in\bigcup_{n=0}^N(\sigma_n(\omega),\tau_n(\omega)]\times\Omega:\ |z^i_s(\omega)-z_s(\omega)|^2\ge\delta\Big\}\Big)\le\frac{\varepsilon}{2}.$$
Together with (2.72)-(i), this implies
$$m\times P\big(\big\{(s,\omega)\in[0,T]\times\Omega:\ |z^i_s(\omega)-z_s(\omega)|^2\ge\delta\big\}\big)\le\varepsilon,\quad\forall\,i\ge l_{\varepsilon\delta}.$$
From this it follows that, for any δ > 0,
$$\lim_{i\to\infty}m\times P\big(\big\{(s,\omega)\in[0,T]\times\Omega:\ |z^i_s(\omega)-z_s(\omega)|^2\ge\delta\big\}\big)=0.$$
Thus, on [0,T] × Ω, the sequence (z^i)_{i=1}^∞ converges in measure to z. Since (z^i)_{i=1}^∞ is also bounded in H²_d(0,T), for each p ∈ [1,2) it converges strongly in H^p_d(0,T). ¤
2.6 The proof of Theorem 2.4.1 through equivalence between the smallest g–supersolution and the related RBSDE
Theorem 2.4.1 is an easy consequence of Theorem 2.6.1 of this section, in which the following equivalence is given: a triple (Y,Z,K) is the solution of the RBSDE if and only if Y is the related smallest g–supersolution. Using this result and the existence of the smallest g–supersolution given in Proposition 2.6.2, we then obtain the proof. We first claim:
Proposition 2.6.1. We assume that the lower obstacle L ∈ L²(0,T) satisfies Assumption 2.4.1, and let the function g satisfy Assumption 2.1.2. For a given process Y ∈ D²(0,T) with Y_T = ξ ∈ L²(F_T), the following claims are equivalent:
a) Y is the smallest g–supersolution that dominates L;
b) for each L* ∈ D²(0,T) such that Y_t ≥ L*_t ≥ L_t, dP ⊗ dt-a.s., Y is the smallest g–supersolution that dominates L*.
Proof. a)⇒b) is obvious.
b)⇒a): let Ȳ ∈ D²(0,T) be the smallest g–supersolution that dominates L with Ȳ_T = ξ. Then Y_t ≥ Ȳ_t ≥ L_t, dP ⊗ dt-a.s., so by b) Y is the smallest g–supersolution that dominates Ȳ; since Ȳ is itself such a supersolution, Y_t ≡ Ȳ_t, ∀t, a.s. ¤
We now give the existence theorem of the smallest g–supersolution that dominates L. This theorem was proved in [28] for the situation where L has continuous paths; the case where L ∈ L²(0,T) is a special situation of Theorem 4.2 in [65]. That theorem claims the existence of the smallest g–supersolution (Y,Z) subject to the constraint
$$\Phi(t,Y_t,Z_t)=0,\quad dP\otimes dt\text{-a.s.},\qquad(2.73)$$
where the function Φ : Ω × [0,T] × R × R^d → [0,∞) satisfies the same Assumption 2.1.2 as g. In this paper we are only interested in the constraint y ≥ L_t, or equivalently,
$$\Phi(t,y,z):=(y-L_t)^-=0.\qquad(2.74)$$
The main idea of the proof is to introduce the following penalized BSDE:
$$Y^n_t=\xi+\int_t^T g(s,Y^n_s,Z^n_s)\,ds+K^n_T-K^n_t-\int_t^T Z^n_s\,dB_s,\qquad(2.75)$$
$$K^n_t:=n\int_0^t\Phi(s,Y^n_s,Z^n_s)\,ds.$$
By the comparison theorem for BSDEs, we have Y^n_t ≤ Y^{n+1}_t, t ∈ [0,T], a.s. As n → ∞, the limit is the smallest g–supersolution:
$$Y_t=\xi+\int_t^T g(s,Y_s,Z_s)\,ds+K_T-K_t-\int_t^T Z_s\,dB_s,\qquad(2.76)$$
$$\Phi(t,Y_t,Z_t)\equiv 0,\ dP\otimes dt\text{-a.s.},\qquad K\in A^2(0,T),\ dK_t\ge 0.\qquad(2.77)$$
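The penalization mechanism can be sketched in a driverless, deterministic discretization (toy data, not the general stochastic construction). The implicit backward step y_k = y_{k+1} + n(y_k − L_k)^− Δt solves in closed form, and as n → ∞ it increases to the reflected recursion Y_k = max(Y_{k+1}, L_k), the discrete analogue of the smallest dominating supersolution.

```python
import numpy as np

# Implicit penalized backward step with g = 0 on a deterministic grid:
#   y_k = y_{k+1} + n * (y_k - L_k)^- * dt
# solves to y_k = y_{k+1} if y_{k+1} >= L_k, else (y_{k+1} + n*dt*L_k)/(1 + n*dt).
N = 100
dt = 1.0 / N
t = np.linspace(0.0, 1.0, N + 1)
L = 1.0 - 2.0 * np.abs(t - 0.5)          # toy obstacle peaking at 1, L[-1] = 0
xi = 0.0                                  # terminal value, with L[-1] <= xi

def penalized(n):
    y = xi
    for k in range(N - 1, -1, -1):
        y = y if y >= L[k] else (y + n * dt * L[k]) / (1.0 + n * dt)
    return y

Y = xi                                    # reflected (limit) recursion
for k in range(N - 1, -1, -1):
    Y = max(Y, L[k])

vals = [penalized(n) for n in (1, 10, 100, 100000)]
# vals increases with n and approaches Y = 1.0
```

The monotonicity of `vals` in n is the discrete shadow of the comparison-theorem argument Y^n ≤ Y^{n+1} above.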
More precisely, Theorem 4.2 of [65] claims :
Proposition 2.6.2. Let the function g satisfy Assumption 2.1.2. We also assume that there exists a g–supersolution (Y*,Z*) constrained by Φ(t,Y*_t,Z*_t) ≡ 0 with terminal condition ξ ∈ L²(F_T). Then the smallest g–supersolution Y ∈ D²(0,T) constrained by (2.73) with terminal condition ξ exists. It is the solution of BSDE (2.76), where K ∈ A²(0,T) is an increasing process. Moreover, (Y,Z,K) is the limit of the sequence of penalized BSDEs (2.75) in the following sense: for each fixed p ∈ [1,2),
$$E\int_0^T\big(|Y^n_t-Y_t|^2+|Z^n_t-Z_t|^p\big)\,dt\to 0,$$
$$E\int_0^T(Z^n_t-Z_t)\varphi_t\,dt\to 0,\quad\forall\varphi\in H^2_d(0,T),$$
$$E[(K^n_\tau-K_\tau)\zeta]\to 0,\quad\forall\zeta\in L^2(\mathcal F_T),\ \forall\tau\ \text{(stopping time)}.\qquad(2.78)$$
Remark 2.6.1. The above convergence also implies the boundedness
$$E\Big[\sup_{0\le t\le T}|Y^n_t|^2\Big]+E\int_0^T|Z^n_t|^2\,dt+E[(K^n_T)^2]\le C,\qquad(2.79)$$
where the constant C does not depend on n.
With this theorem we can obtain the existence of the smallest g–supersolution that dominates L.

Proposition 2.6.3. Let the terminal condition ξ and the coefficient g satisfy Assumptions 2.1.1 and 2.1.2, and let the lower obstacle L satisfy Assumption 2.4.1. Then the smallest g–supersolution Y ∈ D²(0,T) that dominates L with terminal condition ξ exists. It is the solution of BSDE (2.76) with the constraint Φ defined in (2.74), where K ∈ A²(0,T) is the corresponding increasing process. Moreover, (Y,Z,K) is the limit of the sequence of penalized BSDEs (2.75) in the sense of (2.78).
Proof. This is a simple corollary of Proposition 2.6.2 for Φ(t,y,z) = (y − L_t)^−. We only need to check the existence of a g–supersolution Y* with terminal condition Y*_T = ξ such that (Y*_t − L_t)^− ≡ 0. By (2.44), we have
$$\zeta:=\max\Big\{\operatorname*{ess\,sup}_{s\in[0,T)}L_s\mathbf{1}_{\{s<T\}},\,\xi\Big\}\in L^2(\mathcal F_T).$$
Let (Y*,Z*) be the solution of the following BSDE:
$$Y^*_t=\zeta+\int_t^T|g(s,Y^*_s,Z^*_s)|\,ds-\int_t^T Z^*_s\,dB_s.$$
It is easy to check that Y*_t ≥ E[ζ|F_t] ≥ L_t. We then define an increasing process K* ∈ A²(0,T) by
$$K^*_t:=\int_0^t\big(|g(s,Y^*_s,Z^*_s)|-g(s,Y^*_s,Z^*_s)\big)\,ds+(\zeta-\xi)\mathbf{1}_{\{t=T\}}.$$
The above Y* is a g–supersolution that dominates L:
$$Y^*_t=\xi+K^*_T-K^*_t+\int_t^T g(s,Y^*_s,Z^*_s)\,ds-\int_t^T Z^*_s\,dB_s,\quad t\in[0,T].\qquad ¤$$
With the above existence theorem of the smallest g–supersolution, the existence and uniqueness of the RBSDE with single obstacle L is merely a simple consequence of the following properties. As a main result, we will give the equivalence between the smallest g–supersolution that dominates L and the RBSDE with lower obstacle L. First we consider a simple case.
Let l ∈ D²(0,T) be a given process. For the case g⁰(t) ≡ 0, a g⁰–supersolution Y ∈ D²(0,T) that dominates l ∈ D²(0,T) with Y_T = ξ ∈ L²(F_T) is simply defined by
$$Y_t=\xi+K_T-K_t-\int_t^T Z_s\,dB_s,\qquad Y_t\ge l_t,\ \forall t\in[0,T],\ \text{a.s.},\qquad(2.80)$$
where Z ∈ H²_d(0,T) and K ∈ A²(0,T) is an increasing process with K₀ = 0. Thus Y is merely a supermartingale that dominates l on [0,T] with Y_T = ξ. We need the following result:
Lemma 2.6.1. Let Y ∈ D²(0,T) be the smallest g⁰–supersolution that dominates l with Y_T = ξ. Then for each stopping time τ ≤ T we have
$$Y_{\tau^-}=Y_\tau\vee l_{\tau^-}.\qquad(2.81)$$
Consequently,
$$\sum_{0\le t\le T}(Y_{t^-}-l_{t^-})(K_t-K_{t^-})=0,\quad\text{a.s.}\qquad(2.82)$$
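In a setting with a trivial filtration, a supermartingale is just a nonincreasing function, so the smallest g⁰-supersolution dominating l can be computed by the backward envelope Y_k = max(l_k, Y_{k+1}). The sketch below (hypothetical toy data) also checks minimality against randomly generated nonincreasing candidates dominating l.

```python
import numpy as np

# Deterministic envelope: smallest nonincreasing sequence >= l with Y_N = xi.
rng = np.random.default_rng(1)
N = 50
l = np.cos(np.linspace(0.0, 6.0, N + 1))   # toy dominated process
xi = l[-1]                                  # terminal condition, xi >= l_N

Y = np.empty(N + 1)
Y[N] = xi
for k in range(N - 1, -1, -1):
    Y[k] = max(l[k], Y[k + 1])

# Y is a "supermartingale" (nonincreasing) and dominates l
ok = (np.diff(Y) <= 1e-12).all() and (Y >= l - 1e-12).all()

# minimality: any nonincreasing S with S >= l and S_N >= xi satisfies S >= Y
minimal = True
for _ in range(100):
    S = np.maximum.accumulate((l + rng.uniform(0.0, 2.0, N + 1))[::-1])[::-1]
    minimal = minimal and bool((S >= Y - 1e-12).all())
```

This is the discrete shadow of the Snell-envelope representation Ỹ_t = ess sup_σ E[l_σ1_{σ<T} + ξ1_{σ=T} | F_t] used in the proof below.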
Proof. For any stopping times σ, τ ∈ 𝒯₀ such that σ ≤ τ, we denote by 𝒯_{σ,τ} the set of stopping times ρ ∈ 𝒯₀ such that σ ≤ ρ ≤ τ. We define
$$\tilde Y_t:=\operatorname*{ess\,sup}_{\sigma\in\mathcal T_t}E\big[l_\sigma\mathbf{1}_{\{\sigma<T\}}+\xi\mathbf{1}_{\{\sigma=T\}}\,\big|\,\mathcal F_t\big].$$
It is known that Ỹ is the smallest supermartingale that dominates l on [0,T] with Ỹ_T = ξ; thus we have Y ≡ Ỹ. Moreover, for each stopping time τ ∈ 𝒯₀, Y ∈ D²(0,T) is also the smallest g⁰–supersolution on [0,τ] that dominates l with terminal condition Y_τ. We then can derive (2.81) from
$$Y_t=\operatorname*{ess\,sup}_{\sigma\in\mathcal T_{t,\tau}}E\big[l_\sigma\mathbf{1}_{\{\sigma<\tau\}}+Y_\tau\mathbf{1}_{\{\sigma=\tau\}}\,\big|\,\mathcal F_t\big].$$
But Y_{τ⁻} > l_{τ⁻} implies Y_{τ⁻} = Y_τ, and thus K_τ = K_{τ⁻}. We then have (2.82). ¤
With the existence result of the smallest g–supersolution given in Proposition 2.6.3, the following equivalent conditions imply the existence part of Theorem 2.4.1.
Theorem 2.6.1. Let ξ and the function g satisfy Assumptions 2.1.1 and 2.1.2, and let the lower obstacle L satisfy Assumption 2.4.1. Then the following conditions are equivalent:
a) the triple (Y,Z,K) is the unique solution of the RBSDE with L²-lower barrier L;
b) Y is the smallest g–supersolution that dominates L with terminal condition Y_T = ξ;
c) Y is the smallest ḡ–supersolution that dominates L with terminal condition Y_T = ξ, where we set, for each t ∈ [0,T],
$$\bar g(t):=g(t,Y_t,Z_t),\qquad\bar Y_t:=Y_t+\int_0^t\bar g(s)\,ds,\qquad\bar L_t:=L_t+\int_0^t\bar g(s)\,ds,\qquad\bar\xi:=\xi+\int_0^T\bar g(s)\,ds;$$
d) Ȳ ∈ D²(0,T) is the smallest supermartingale that dominates L̄ such that Ȳ_T = ξ̄;
e) Ȳ ∈ D²(0,T) is a supermartingale that dominates L̄ with Ȳ_T = ξ̄, of which K is the increasing process of the Doob–Meyer decomposition, and the following reflecting condition holds: for each L* ∈ D²(0,T) such that L̄_t ≤ L*_t ≤ Ȳ_t, dP ⊗ dt-a.s.,
$$\int_0^T(\bar Y_{t^-}-L^*_{t^-})\,dK_t=0,\quad\text{a.s.}\qquad(2.83)$$
Proof of Theorem 2.6.1 and of the existence part of Theorem 2.4.1. c) ⇔ d) is easy to check.
We now prove b) ⇔ c). We stress that in ḡ(t) defined above, Y_t and Z_t are "fixed", or "frozen". We consider the solution (Ȳⁿ, Z̄ⁿ) of the following penalized BSDE:
$$\bar Y^n_t=\xi+\int_t^T\bar g(s)\,ds+n\int_t^T(\bar Y^n_s-L_s)^-\,ds-\int_t^T\bar Z^n_s\,dB_s.$$
Like g, the function ḡ(t) also satisfies Assumption 2.1.2. Thus, just as (Yⁿ,Zⁿ)_{n=1}^∞ defined in (2.75), the sequence (Ȳⁿ,Z̄ⁿ)_{n=1}^∞ converges strongly to (Ȳ,Z̄) in H²(0,T) × H^p_d(0,T) for each p ∈ [1,2), and Ȳ ∈ D²(0,T) is also the smallest ḡ–supersolution that dominates L with Ȳ_T = ξ:
$$\bar Y_t=\xi+\int_t^T\bar g(s)\,ds+\bar K_T-\bar K_t-\int_t^T\bar Z_s\,dB_s,\qquad(2.84)$$
$$\bar Y_t\ge L_t,\qquad d\bar K_t\ge 0.\qquad(2.85)$$
We now prove that (Ȳ,Z̄) = (Y,Z). Indeed, applying Itô's formula to |Yⁿ_t − Ȳⁿ_t|², we have
$$E|Y^n_t-\bar Y^n_t|^2+E\int_t^T|Z^n_s-\bar Z^n_s|^2\,ds$$
$$=2E\int_t^T(Y^n_s-\bar Y^n_s)\big(g(s,Y^n_s,Z^n_s)-\bar g(s)\big)\,ds+2nE\int_t^T(Y^n_s-\bar Y^n_s)\big[(Y^n_s-L_s)^--(\bar Y^n_s-L_s)^-\big]\,ds.$$
For the last integrand, it is easy to check that (Yⁿ_s − Ȳⁿ_s)[(Yⁿ_s − L_s)⁻ − (Ȳⁿ_s − L_s)⁻] ≤ 0. We then have
$$E|Y^n_t-\bar Y^n_t|^2+E\int_t^T|Z^n_s-\bar Z^n_s|^2\,ds\le 2E\int_t^T(Y^n_s-\bar Y^n_s)\big[g(s,Y^n_s,Z^n_s)-\bar g(s)\big]\,ds$$
$$\le 2E\int_t^T\big[|Y^n_s-Y_s|+|\bar Y^n_s-\bar Y_s|\big]\,\big|g(s,Y^n_s,Z^n_s)-\bar g(s)\big|\,ds+2E\int_t^T(Y_s-\bar Y_s)\big[g(s,Y^n_s,Z^n_s)-\bar g(s)\big]\,ds.$$
Since |Yⁿ − Y| + |Ȳⁿ − Ȳ| → 0 in H²(0,T) and |g(·,Yⁿ,Zⁿ) − ḡ(·)| is uniformly bounded in H²(0,T), the first integral on the right side converges to zero as n → ∞. For the second term: since (Yⁿ)_{n=1}^∞ converges strongly to Y in H²(0,T), (Zⁿ)_{n=1}^∞ converges strongly to Z in H^p_d(0,T) and g is Lipschitz in (y,z), the sequence (g(·,Yⁿ,Zⁿ))_{n=1}^∞ converges strongly to g(·,Y_·,Z_·) = ḡ(·) in H^p(0,T). But (g(·,Yⁿ,Zⁿ))_{n=1}^∞ is also bounded in H²(0,T), so it converges weakly to ḡ in H²(0,T), and the second integral also converges to zero. It follows that Yⁿ − Ȳⁿ and Zⁿ − Z̄ⁿ both converge to zero; thus Y ≡ Ȳ and Z ≡ Z̄.
For d) ⇔ e), we first prove d)⇒e): let Ȳ be the smallest supermartingale that dominates L̄ with Ȳ_T = ξ̄. Then (Ȳ,Z̄,K) ∈ D²(0,T) × H²_d(0,T) × A²(0,T) solves (i), (ii) in Definition 2.4.2 of the RBSDE, and we only need to prove the Skorohod condition (iii), i.e., for each L* ∈ D²(0,T) such that L̄_t ≤ L*_t ≤ Ȳ_t, dP ⊗ dt-a.s.,
$$\int_0^T(\bar Y_{t^-}-L^*_{t^-})\,dK_t=0,\quad\text{a.s.}\qquad(2.86)$$
We denote the discrete part of $A$ by $A^d$ and the continuous part by $A^c$: $A = A^c + A^d$. From (2.82), we have
\[
\sum_{0 \le t \le T} (Y_{t-} - L^*_{t-})(K_t - K_{t-}) = \int_0^T (Y_{s-} - L^*_{s-})\,dK^d_s = 0. \tag{2.87}
\]
The continuous part of $Y$ is $Y^c := Y + K^d$. Then, with $g_0(t) \equiv 0$, $Y^c$ is the smallest $g_0$-supersolution that dominates $L^c = L^* + K^d$ with terminal condition $Y^c_T = \xi + K^d_T$.

We now follow Proposition 2.6.2 to construct a penalization sequence $(Y^n, Z^n, K^n) \in D^2(0,T) \times H^2_d(0,T) \times A^2(0,T)$ as follows:
\[
Y^n_t = Y^c_T + \int_t^T n(Y^n_s - L^c_s)^-\,ds - \int_t^T Z^n_s\,dB_s, \qquad
K^n_t = \int_0^t n(Y^n_s - L^c_s)^-\,ds.
\]
According to Proposition 2.6.2, the triple $(Y^n, Z^n, K^n)$ converges to $(Y^c, Z, K^c)$ in the sense of (2.78) and, for each stopping time $\tau \le T$, as $n \to \infty$,
\[
Y^n_\tau \nearrow Y^c_\tau \ \text{a.s.}, \qquad K^n_\tau \to K^c_\tau \ \text{strongly in } L^2(\mathcal F_T). \tag{2.88}
\]
On the other hand, for each $m \le n$, since
\[
0 = (Y^m_t - L^c_t)^+ (Y^m_t - L^c_t)^- \ge (Y^m_t - L^c_t)^+ (Y^n_t - L^c_t)^-,
\]
we have
\[
\int_0^T (Y^m_t - L^c_t)^+\,dK^n_t = 0. \tag{2.89}
\]
For each $t \in [0,T]$, we define
\[
D_t := \inf\{s \ge t : (Y^c_s - L^c_s)^+ \wedge (Y^c_s - L^c_{s-})^+ = 0\} \wedge T,
\]
\[
D^m_t := \inf\{s \ge t : (Y^m_s - L^c_s)^+ \wedge (Y^m_s - L^c_{s-})^+ = 0\} \wedge T.
\]
Since $(Y^m_s - L^c_s)^+ \nearrow (Y^c_s - L^c_s)^+$, we have $D^m_t \le D^{m+1}_t \le D_t$. On the other hand, for a.s. $\omega \in \Omega$, if $D_t > t$, then for each $\bar t$ with $t < \bar t < D_t$ we have $(Y^c_s - L^c_s)^+ \ge \delta(\omega)$, $t \le s \le \bar t$, for some $\delta(\omega) > 0$. Since
\[
0 \le (Y^c_s - L^c_s)^+ - (Y^m_s - L^c_s)^+ \le Y^c_s - Y^m_s \searrow 0,
\]
for sufficiently large $m(\omega)$ we have $(Y^m_s - L^c_s)^+ > 0$, $s \in [t, \bar t]$, and thus $D^m_t > \bar t$. It follows that, for each $t$, $\lim_{m\to\infty} D^m_t = D_t$ almost surely. On the other hand, by (2.89) we have
\[
K^n_{D^m_t} - K^n_t = 0. \tag{2.90}
\]
We let $n \to \infty$. By the convergence of $K^n$ in the sense of (2.88), we derive $K^c_{D^m_t} - K^c_t = 0$. Letting $m \to \infty$, with (2.90) we get
\[
K^c_{D_t} - K^c_t = 0.
\]
Thus
\[
\int_0^T (Y^c_t - L^c_t)\,dK^c_t = \int_0^T (Y^c_t - L^c_{t-})\,dK^c_t = 0, \quad \text{a.s.}
\]
Together with (2.87), it follows that (2.86) holds.

e) ⇒ d): Since the solution $(Y, Z, K)$ of the RBSDE with lower obstacle $L$ is unique, by d) ⇒ e), $Y$ must be the smallest $g$-supersolution that dominates $L$.

Through e) ⇔ d) ⇔ c) ⇔ b), we can prove that the smallest $g$-supersolution $Y$ with $Y_T = \xi$ that dominates $L$ (given in b)) must satisfy the generalized Skorohod reflecting condition (2.46). Thus we have b) ⇒ a). Together with the existence theorem for the smallest $g$-supersolution given in b), i.e., Proposition 2.6.3, it follows that a solution $Y$ of the RBSDE of type a) exists. This proves the existence part of Theorem 2.4.1. Finally, the uniqueness of the RBSDE given in Proposition 2.4.2 gives a) ⇒ b). The proof is complete. ¤
The following comparison theorem for RBSDEs is a by-product of the above results. It will be used in the proof of the existence of the RBSDE with two reflecting barriers. This comparison theorem was introduced in [38] for the case where $L$ is continuous.
Theorem 2.6.2 (Comparison). We assume that the lower obstacle $L \in L^2(0,T)$ satisfies Assumption 2.4.1. Let $g^1, g^2$ be two BSDE coefficients satisfying the standard Assumption 2.1.2 and, for $i = 1, 2$, let $(Y^i, Z^i, K^i)$ be the solution of the RBSDE with the lower obstacle $L$:
\[
Y^i_t = \xi^i + \int_t^T g^i(s, Y^i_s, Z^i_s)\,ds + K^i_T - K^i_t - \int_t^T Z^i_s\,dB_s. \tag{2.91}
\]
Namely, the triple $(Y^i, Z^i, K^i)$ satisfies (i)-(iii) in Definition 2.4.2. Assume that
\[
g^1(t, y, z) \le g^2(t, y, z), \quad \forall (y,z) \in \mathbb R \times \mathbb R^d, \ \text{a.s.} \tag{2.92}
\]
and $\xi^1 \le \xi^2$ a.s. Then we have
\[
Y^1_t \le Y^2_t, \quad 0 \le t \le T, \ \text{a.s.} \tag{2.93}
\]
Moreover,
\[
(K^1_t - K^1_s) - (K^2_t - K^2_s) \ge 0, \quad \forall\, 0 \le s \le t \le T, \ \text{a.s.} \tag{2.94}
\]
Proof. For each $i = 1, 2$, consider the following penalized BSDE for the RBSDE (2.91):
\[
Y^{n,i}_t = \xi^i + \int_t^T g^i(s, Y^{n,i}_s, Z^{n,i}_s)\,ds + n\int_t^T (L_s - Y^{n,i}_s)^+\,ds - \int_t^T Z^{n,i}_s\,dB_s.
\]
By the comparison theorem for BSDEs we get $Y^{n,1}_t \le Y^{n,2}_t$ for all $n \in \mathbb N$. Thanks to Proposition 2.6.2, as $n \to \infty$, $Y^{n,i}$ converges to $Y^i$, the solution of the RBSDE, for $i = 1, 2$. We immediately have (2.93). Moreover, the increasing processes $K^{n,i}_t := n\int_0^t (L_s - Y^{n,i}_s)^+\,ds$ satisfy
\[
(K^{n,1}_t - K^{n,1}_s) - (K^{n,2}_t - K^{n,2}_s) \ge 0, \quad \text{for each } 0 \le s \le t \le T.
\]
Again by Proposition 2.6.2, $(K^{n,1}_t)$ and $(K^{n,2}_t)$ converge respectively to $K^1_t$ and $K^2_t$ weakly in $L^2(\mathcal F_t)$. We then have (2.94). ¤
2.7 Penalization method for RBSDEs with two obstacles and some basic estimates

In the preceding section, the existence result for the RBSDE was proved by a penalization approach. This is a constructive method, since the penalized equation (2.75) is a standard BSDE to which many existing numerical results can be applied. We now proceed to prove the existence of reflected BSDEs with two obstacles by the same approach. The penalized BSDEs we need are, for $m, n \in \mathbb N$:
\[
Y^{m,n}_t = \xi + \int_t^T g(s, Y^{m,n}_s, Z^{m,n}_s)\,ds + m\int_t^T (L_s - Y^{m,n}_s)^+\,ds - n\int_t^T (Y^{m,n}_s - U_s)^+\,ds - \int_t^T Z^{m,n}_s\,dB_s,
\]
or equivalently
\[
Y^{m,n}_t = \xi + \int_t^T g(s, Y^{m,n}_s, Z^{m,n}_s)\,ds + K^{m,n,+}_T - K^{m,n,+}_t - (K^{m,n,-}_T - K^{m,n,-}_t) - \int_t^T Z^{m,n}_s\,dB_s \tag{2.95}
\]
with
\[
K^{m,n,+}_t = m\int_0^t (L_s - Y^{m,n}_s)^+\,ds, \qquad K^{m,n,-}_t = n\int_0^t (Y^{m,n}_s - U_s)^+\,ds.
\]
Here the basic idea is simple: we first fix $m$ and let $n \to \infty$, then let $m \to \infty$. The two increasing processes $K^-$ and $K^+$, which are the limits of $K^{m,n,-}$ and $K^{m,n,+}$, will be proved to be the two increasing processes in the RBSDE (2.55) we are looking for. In Section 2.9.2, we will prove that when $m = n$ in (2.95), the quadruple $(Y^{m,m}, Z^{m,m}, K^{m,m,+}, K^{m,m,-})$ also converges to the solution $(Y, Z, K^+, K^-)$ of the RBSDE as $m \to \infty$.

We begin by establishing several basic estimates for $(Y^{m,n}, Z^{m,n}, K^{m,n,+}, K^{m,n,-})$. These estimates are useful not only for the proof of existence of the RBSDE provided in the next section, but also for the further development of numerical solutions.
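To make the penalization mechanism of (2.95) concrete, here is a minimal numerical sketch (not taken from the thesis). We take $g \equiv 0$ and drop the Brownian term, so that (2.95) degenerates into a deterministic backward ODE; the barriers $L_t = 1 - t$, $U_t = 2 - t$, the terminal value $\xi = 0$, and the explicit backward Euler step are all arbitrary illustrative assumptions.

```python
import numpy as np

def penalized_backward_ode(m, n, N=4000, T=1.0):
    """Backward Euler for the doubly penalized equation (2.95) in the
    degenerate deterministic case g = 0, Z = 0 (illustration only):
        Y_t = xi + int_t^T [ m (L_s - Y_s)^+ - n (Y_s - U_s)^+ ] ds.
    Barriers L_t = 1 - t, U_t = 2 - t and xi = 0 are arbitrary choices."""
    dt = T / N
    t = np.linspace(0.0, T, N + 1)
    L, U = 1.0 - t, 2.0 - t
    y = np.empty(N + 1)
    y[N] = 0.0  # terminal condition xi, with L_T = 0 <= xi <= U_T = 1
    for k in range(N - 1, -1, -1):  # integrate backward in time
        penalty = m * max(L[k + 1] - y[k + 1], 0.0) \
                  - n * max(y[k + 1] - U[k + 1], 0.0)
        y[k] = y[k + 1] + dt * penalty
    return t, L, U, y

# the violation of the lower barrier shrinks as the penalty parameter grows
_, L, _, y10 = penalized_backward_ode(10, 10)
_, _, _, y100 = penalized_backward_ode(100, 100)
v10 = float(np.max(np.maximum(L - y10, 0.0)))
v100 = float(np.max(np.maximum(L - y100, 0.0)))
print(v10, v100)
```

In this deterministic toy case the lag below the lower barrier is of order $1/m$, which mirrors the role of the penalty terms: they force the solution between the two obstacles as $m, n \to \infty$.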
Proposition 2.7.1. We assume Assumption 2.4.2 holds. Then there exists a constant $C$, independent of $m$ and $n$, such that the following estimate holds for (2.95):
\[
E\big[\sup_{0\le t\le T} (Y^{m,n}_t)^2\big] + E\Big[\int_0^T |Z^{m,n}_s|^2\,ds\Big] + E[(K^{m,n,+}_T)^2] + E[(K^{m,n,-}_T)^2] \le C. \tag{2.96}
\]
To prove this result, we need the following lemma.
Lemma 2.7.1. There exists a triple $(Y^*, Z^*, K^*)$, with $K^* = K^{*+} - K^{*-}$, $Y^* \in D^2(0,T)$, $Z^* \in H^2_d(0,T)$ and $K^{*+}, K^{*-} \in A^2(0,T)$, such that
\[
Y^*_t = \xi + \int_t^T g(s, Y^*_s, Z^*_s)\,ds + K^{*+}_T - K^{*+}_t - (K^{*-}_T - K^{*-}_t) - \int_t^T Z^*_s\,dB_s, \tag{2.97}
\]
and $L_t \le Y^*_t \le U_t$, $dP \otimes dt$-a.s.

Proof. For the process $X$ satisfying (2.54), we set $X^*_t = X_t + (\xi - X_T)\mathbf 1_{\{t=T\}}$. We have $L_t \le X^*_t \le U_t$ and
\[
X^*_t = X_T - \int_t^T Z^0_s\,dB_s + (\xi - X_T)\mathbf 1_{\{t=T\}} + (A^0_T - A^0_t) - (K^0_T - K^0_t)
\]
\[
= \xi + \int_t^T g(s, X^*_s, Z^0_s)\,ds + (A^0_T - A^0_t) + (\xi - X_T)^-\mathbf 1_{\{t<T\}} + \int_t^T [g(s, X^*_s, Z^0_s)]^-\,ds
\]
\[
\quad - (K^0_T - K^0_t) - (\xi - X_T)^+\mathbf 1_{\{t<T\}} - \int_t^T [g(s, X^*_s, Z^0_s)]^+\,ds - \int_t^T Z^0_s\,dB_s.
\]
We set $Z^* := Z^0$, $Y^* := X^*$ and
\[
K^{*+}_t := A^0_t + (\xi - X_T)^-\mathbf 1_{\{t=T\}} + \int_0^t [g(s, X^*_s, Z^0_s)]^-\,ds,
\]
\[
K^{*-}_t := K^0_t + (\xi - X_T)^+\mathbf 1_{\{t=T\}} + \int_0^t [g(s, X^*_s, Z^0_s)]^+\,ds.
\]
Then $(Y^*, Z^*, K^{*+}, K^{*-})$ satisfies (2.97) and $L \le Y^* \le U$, $dP \otimes dt$-a.s. ¤
Proof of Proposition 2.7.1. Let $(Y^*, Z^*, K^*)$ with $K^* = K^{*+} - K^{*-}$ be given as in Lemma 2.7.1. Since $L \le Y^* \le U$, the triple also satisfies, for every $m, n \in \mathbb N$,
\[
Y^*_t = \xi + \int_t^T g(s, Y^*_s, Z^*_s)\,ds + K^{*+}_T - K^{*+}_t - (K^{*-}_T - K^{*-}_t) + m\int_t^T (L_s - Y^*_s)^+\,ds - n\int_t^T (Y^*_s - U_s)^+\,ds - \int_t^T Z^*_s\,dB_s.
\]
Let $(\bar Y^{m,n}, \bar Z^{m,n})$ and $(\underline Y^{m,n}, \underline Z^{m,n})$ be respectively the solutions of the following equations:
\[
\bar Y^{m,n}_t = \xi + \int_t^T g(s, \bar Y^{m,n}_s, \bar Z^{m,n}_s)\,ds + K^{*+}_T - K^{*+}_t + m\int_t^T (L_s - \bar Y^{m,n}_s)^+\,ds - n\int_t^T (\bar Y^{m,n}_s - U_s)^+\,ds - \int_t^T \bar Z^{m,n}_s\,dB_s,
\]
\[
\underline Y^{m,n}_t = \xi + \int_t^T g(s, \underline Y^{m,n}_s, \underline Z^{m,n}_s)\,ds - (K^{*-}_T - K^{*-}_t) + m\int_t^T (L_s - \underline Y^{m,n}_s)^+\,ds - n\int_t^T (\underline Y^{m,n}_s - U_s)^+\,ds - \int_t^T \underline Z^{m,n}_s\,dB_s.
\]
By the comparison theorem for BSDEs we obtain, for any $m, n \in \mathbb N$, $\bar Y^{m,n}_t \ge Y^{m,n}_t \ge \underline Y^{m,n}_t$, as well as $\bar Y^{m,n}_t \ge Y^*_t \ge L_t$ and $\underline Y^{m,n}_t \le Y^*_t \le U_t$. Hence $(\bar Y^{m,n}, \bar Z^{m,n})$ is also the solution of
\[
\bar Y^{m,n}_t = \xi + \int_t^T g(s, \bar Y^{m,n}_s, \bar Z^{m,n}_s)\,ds + K^{*+}_T - K^{*+}_t - n\int_t^T (\bar Y^{m,n}_s - U_s)^+\,ds - \int_t^T \bar Z^{m,n}_s\,dB_s, \tag{2.98}
\]
and $(\underline Y^{m,n}, \underline Z^{m,n})$ is also the solution of
\[
\underline Y^{m,n}_t = \xi + \int_t^T g(s, \underline Y^{m,n}_s, \underline Z^{m,n}_s)\,ds - (K^{*-}_T - K^{*-}_t) + m\int_t^T (L_s - \underline Y^{m,n}_s)^+\,ds - \int_t^T \underline Z^{m,n}_s\,dB_s. \tag{2.99}
\]
Then consider the following BSDEs:
\[
Y^+_t = \xi + \int_t^T g(s, Y^+_s, Z^+_s)\,ds + K^{*+}_T - K^{*+}_t - \int_t^T Z^+_s\,dB_s, \tag{2.100}
\]
\[
Y^-_t = \xi + \int_t^T g(s, Y^-_s, Z^-_s)\,ds - (K^{*-}_T - K^{*-}_t) - \int_t^T Z^-_s\,dB_s. \tag{2.101}
\]
Since $\bar K^{m,n,-}_t := n\int_0^t (\bar Y^{m,n}_s - U_s)^+\,ds$ and $\underline K^{m,n,+}_t := m\int_0^t (L_s - \underline Y^{m,n}_s)^+\,ds$ are increasing processes, applying the comparison theorem to (2.98) and (2.100), to (2.99) and (2.101), and to (2.95), we get
\[
Y^+_t \ge \bar Y^{m,n}_t \ge Y^{m,n}_t \ge \underline Y^{m,n}_t \ge Y^-_t \tag{2.102}
\]
for any $m, n \in \mathbb N$ and all $t \in [0,T]$. Then we have
\[
E\big[\sup_{0\le t\le T} (Y^{m,n}_t)^2\big] \le \max\Big\{E\big[\sup_{0\le t\le T} (Y^+_t)^2\big],\ E\big[\sup_{0\le t\le T} (Y^-_t)^2\big]\Big\}. \tag{2.103}
\]
Since $K^{*\pm} \in A^2(0,T)$, Itô's formula and the BDG inequality yield
\[
E\big[\sup_{0\le t\le T} (Y^+_t)^2\big] \le c, \qquad E\big[\sup_{0\le t\le T} (Y^-_t)^2\big] \le c.
\]
Using (2.103), we conclude that there exists a constant $c$, independent of $m$ and $n$, such that
\[
E\big[\sup_{0\le t\le T} (Y^{m,n}_t)^2\big] \le c. \tag{2.104}
\]
Now we consider the last two terms of (2.96). First, since for any $m, n \in \mathbb N$ we have $Y^{m,n}_t \ge \underline Y^{m,n}_t$, it follows that $\underline K^{m,n,+}_t \ge K^{m,n,+}_t \ge 0$. So if $E[(\underline K^{m,n,+}_T)^2] \le c$, then $E[(K^{m,n,+}_T)^2] \le c$. Rewrite (2.99) in the form
\[
\underline K^{m,n,+}_t = \underline Y^{m,n}_0 - \underline Y^{m,n}_t - \int_0^t g(s, \underline Y^{m,n}_s, \underline Z^{m,n}_s)\,ds + K^{*-}_t + \int_0^t \underline Z^{m,n}_s\,dB_s. \tag{2.105}
\]
Notice that from (2.102) we have
\[
E\big[\sup_{0\le t\le T} (\underline Y^{m,n}_t)^2\big] \le \max\Big\{E\big[\sup_{0\le t\le T} (Y^+_t)^2\big],\ E\big[\sup_{0\le t\le T} (Y^-_t)^2\big]\Big\} \le c,
\]
and $E[(K^{*-}_T)^2] \le c_1$; then, using the Lipschitz property of $g$ and taking squares and expectations on both sides of (2.105), we get
\[
E[(\underline K^{m,n,+}_T)^2] \le c + c_2 E\int_0^T |\underline Z^{m,n}_s|^2\,ds. \tag{2.106}
\]
Then, applying Itô's formula to $|\underline Y^{m,n}_t|^2$, with classical techniques and (2.106), it follows that
\[
E[(\underline K^{m,n,+}_T)^2] \le c, \quad \text{hence} \quad E[(K^{m,n,+}_T)^2] \le c.
\]
In the same way we deduce that $E[(K^{m,n,-}_T)^2] \le c$. Applying Itô's formula to $|Y^{m,n}_t|^2$, we get
\[
E[|Y^{m,n}_t|^2] + E\int_t^T |Z^{m,n}_s|^2\,ds
\le c\Big(1 + E\int_t^T |Y^{m,n}_s|^2\,ds + \alpha E\int_t^T |Z^{m,n}_s|^2\,ds\Big) + E\big[\sup_{0\le t\le T}(L^+_t)^2\big] + E\big[\sup_{0\le t\le T}(U^-_t)^2\big] + E[(K^{m,n,+}_T)^2] + E[(K^{m,n,-}_T)^2].
\]
Setting $\alpha = \frac{1}{3c}$, we finally get $E\int_0^T |Z^{m,n}_s|^2\,ds \le c$. ¤

We now pass to the limit in the penalized BSDE (2.95). By the comparison theorem for BSDEs, we know that $(Y^{m,n})$ is increasing in $m$ for each fixed $n$, and decreasing in $n$ for each fixed $m$. In (2.95) we fix $m$ and set $g_m(s, y, z) = g(s, y, z) + m(L_s - y)^+$. Like $g$ itself, the function $g_m$ also satisfies Assumption 2.1.2, with Lipschitz constant $k + m$ in place of $k$. Thanks to Proposition 2.6.2, we have the following convergence:
Lemma 2.7.2. As $n \to \infty$, the triple $(Y^{m,n}, Z^{m,n}, K^{m,n,-})$ converges to $(Y^m, Z^m, K^{m,-}) \in D^2(0,T) \times H^2_d(0,T) \times A^2(0,T)$ in the following sense:
\[
E\int_0^T \big(|Y^{m,n}_t - Y^m_t|^2 + |Z^{m,n}_t - Z^m_t|^p\big)\,dt \to 0, \quad p \in [1,2),
\]
\[
E[|Y^{m,n}_t - Y^m_t|^2] \to 0, \quad \forall t \in [0,T],
\]
\[
E\int_0^T (Z^{m,n}_t - Z^m_t)\varphi_t\,dt \to 0, \quad \forall \varphi \in H^2_d(0,T),
\]
\[
E[(K^{m,n,-}_t - K^{m,-}_t)\zeta] \to 0, \quad \forall \zeta \in L^2(\mathcal F_T),\ \forall t \in [0,T]. \tag{2.107}
\]
The limit $(Y^m, Z^m, K^{m,-})$ is the solution of the following RBSDE with one upper obstacle $U$:
\[
Y^m_t = \xi + \int_t^T g(s, Y^m_s, Z^m_s)\,ds + m\int_t^T (L_s - Y^m_s)^+\,ds - (K^{m,-}_T - K^{m,-}_t) - \int_t^T Z^m_s\,dB_s. \tag{2.108}
\]
We also have, for each $i \le j$ and $0 \le t \le t' \le T$,
\[
K^{j,-}_{t'} - K^{j,-}_t \ge K^{i,-}_{t'} - K^{i,-}_t \ge 0. \tag{2.109}
\]
Moreover, with $K^{m,+}_t := m\int_0^t (L_s - Y^m_s)^+\,ds$, we have the following estimate: there exists a constant $C$, independent of $m$, such that
\[
\sup_{0\le t\le T} E(Y^m_t)^2 + E\int_0^T |Z^m_t|^2\,dt + E(K^{m,+}_T)^2 + E(K^{m,-}_T)^2 \le C. \tag{2.110}
\]
Proof. The convergence (2.107) and equation (2.108) result directly from Propositions 2.6.2 and 2.6.3, in which the coefficient $g(t,y,z)$ is replaced by $g(t,y,z) + m(L_t - y)^+$ and the lower obstacle $L$ by the upper obstacle $U$.

Observe that (2.108) can be regarded as the following RBSDE with upper obstacle $U$:
\[
Y^m_t = \xi + \int_t^T g_m(s, Y^m_s, Z^m_s)\,ds - (K^{m,-}_T - K^{m,-}_t) - \int_t^T Z^m_s\,dB_s,
\]
where $g_m(s, y, z) := g(s, y, z) + m(L_s - y)^+$. Since $g_i(t,y,z) \le g_j(t,y,z)$ for $i \le j$, (2.109) is a direct consequence of Comparison Theorem 2.6.2.

Since $(Y^{m,n}, Z^{m,n}, K^{m,n,-})$ are uniformly bounded by (2.96), their strong and weak limits in $L^2$ are also uniformly bounded. ¤
2.8 Proof of Theorem 2.4.3: the existence of the RBSDE with two obstacles

We now proceed to the proof of Theorem 2.4.3 — the existence part and some convergence results. We write equation (2.108) in the forward form:
\[
Y^m_t = Y^m_0 - \int_0^t g(s, Y^m_s, Z^m_s)\,ds + K^{m,-}_t - K^{m,+}_t + \int_0^t Z^m_s\,dB_s. \tag{2.111}
\]
Using the Burkholder-Davis-Gundy inequality and (2.110), we have
\[
E\big[\sup_{0\le t\le T} (Y^m_t)^2\big] \le C.
\]
From Comparison Theorem 2.6.2, $Y^m$ is increasing in $m$. Since moreover $Y^m \le U$, it follows that there exists a process $Y$ such that $Y^m \nearrow Y \le U$, and thus
\[
E\big[\sup_{0\le t\le T} (Y_t)^2\big] \le C. \tag{2.112}
\]
We also have the following $L^2$-convergence:
\[
E\int_0^T |Y^m_t - Y_t|^2\,dt \to 0. \tag{2.113}
\]
By Lemma 2.7.2, the sequence $(Y^m)_{m=1}^\infty$ satisfies all the conditions of the monotonic limit Theorem 2.5.1. It follows that its limit $Y$ is in $D^2(0,T)$ and has the following form:
\[
Y_t = \xi + \int_t^T g^0_s\,ds + K^+_T - K^+_t - (K^-_T - K^-_t) - \int_t^T Z_s\,dB_s,
\]
where $(g^0, Z) \in H^2_{1+d}(0,T)$ is the weak limit of $(g(\cdot, Y^m, Z^m), Z^m)_{m=1}^\infty$ in $H^2_{1+d}(0,T)$. For each $t \in [0,T]$, $K^+_t$ is a weak limit of $(K^{m,+}_t)_{m=1}^\infty$ in $L^2(\mathcal F_t)$, and $K^-_t$ is the strong limit of $(K^{m,-}_t)_{m=1}^\infty$ in $L^2(\mathcal F_t)$. $K^+$ and $K^-$ are increasing processes in $D^2(0,T)$. Furthermore, for any $p \in [1,2)$, we have
\[
\lim_{m\to\infty} E\int_0^T |Z^m_s - Z_s|^p\,ds = 0. \tag{2.114}
\]
It follows that $g(\cdot, Y^m_\cdot, Z^m_\cdot) \to g(\cdot, Y_\cdot, Z_\cdot)$ in $H^p(0,T)$, and thus
\[
Y_t = \xi + \int_t^T g(s, Y_s, Z_s)\,ds + K^+_T - K^+_t - (K^-_T - K^-_t) - \int_t^T Z_s\,dB_s, \tag{2.115}
\]
i.e. condition (ii) of Definition 2.4.3 is satisfied.

Since for each $m \in \mathbb N$, $Y^m \le U$, $dP \otimes dt$-a.s., we have $Y \le U$, $dP \otimes dt$-a.s. Notice that $E[(K^{m,+}_T)^2] \le C$. As $m \to \infty$, we have
\[
0 \le E\Big[\Big(\int_0^T (L_s - Y_s)^+\,ds\Big)^2\Big] \le E\Big[\Big(\int_0^T (L_s - Y^m_s)^+\,ds\Big)^2\Big] = \frac{E[(K^{m,+}_T)^2]}{m^2} \le \frac{C}{m^2} \to 0,
\]
and thus $Y \ge L$, $dP \otimes dt$-a.s. So (iii) of Definition 2.4.3 holds.

It remains to prove the two Skorohod reflecting conditions (iv) in Definition 2.4.3. For the upper obstacle $U$ and a process $U^* \in D^2(0,T)$ such that $Y^m \le U^* \le U$, $dP \otimes dt$-a.s., we have $\int_0^T (U^*_{t-} - Y^m_{t-})\,dK^{m,-}_t = 0$. Together with $(U^*_{t-} - Y^m_{t-}) \ge (U^*_{t-} - Y_{t-}) \ge 0$, this yields $\int_0^T (U^*_{t-} - Y_{t-})\,dK^{m,-}_t = 0$. We recall that $d(K^-_t - K^{m,-}_t) \ge 0$ and $K^{m,-}_T \nearrow K^-_T$ in $L^2(\mathcal F_T)$. It follows from
\[
0 \le \int_0^T (U^*_{t-} - Y_{t-})\,d(K^-_t - K^{m,-}_t) \le (K^-_T - K^{m,-}_T)\max_{t\in[0,T]} (U^*_{t-} - Y_{t-})
\]
that the reflecting condition in Definition 2.4.3 for the upper boundary holds:
\[
\int_0^T (U^*_{t-} - Y_{t-})\,dK^-_t = 0.
\]
We now proceed to prove the reflecting condition for the lower obstacle $L$. We consider the following BSDE:
\[
\hat Y^m_t = \xi + \int_t^T g(s, \hat Y^m_s, \hat Z^m_s)\,ds + m\int_t^T (L_s - \hat Y^m_s)^+\,ds - (K^-_T - K^-_t) - \int_t^T \hat Z^m_s\,dB_s. \tag{2.116}
\]
We denote $\check Y^m := \hat Y^m - K^-$ and rewrite the above BSDE as
\[
\check Y^m_t = \xi - K^-_T + \int_t^T g_{K^-}(s, \check Y^m_s, \hat Z^m_s)\,ds + m\int_t^T (L_s - K^-_s - \check Y^m_s)^+\,ds - \int_t^T \hat Z^m_s\,dB_s,
\]
where we set $g_{K^-}(t, y, z) := g(t, y + K^-_t, z)$. We observe that this is just the penalization equation of the form (2.75), (2.74), with $g_{K^-}$ in place of $g$ and $L - K^-$ in place of $L$. From Proposition 2.6.2, as $m \to \infty$, we have the limit
\[
\check Y_t = \xi - K^-_T + \int_t^T g_{K^-}(s, \check Y_s, Z_s)\,ds + A_T - A_t - \int_t^T Z_s\,dB_s. \tag{2.117}
\]
Here $\check Y$ is the $H^2(0,T)$-strong limit of $\check Y^m$, $Z$ is the $H^2_d(0,T)$-weak limit and $H^p_d(0,T)$-strong limit of $\hat Z^m$, and, for each $t$, $A_t$ is the $L^2(\mathcal F_t)$-weak limit of
\[
\hat K^{m,+}_t := m\int_0^t (L_s - \hat Y^m_s)^+\,ds = m\int_0^t (L_s - K^-_s - \check Y^m_s)^+\,ds.
\]
Theorem 2.6.1 also tells us that the limit $\check Y \in D^2(0,T)$ is the smallest $g_{K^-}$-supersolution with $\check Y_T = \xi - K^-_T$ that dominates $L - K^-$. On the other hand, applying the comparison theorem for BSDEs to (2.116) and (2.108), we have $Y^m_t \ge \hat Y^m_t$. Thus, for each $s \le t$,
\[
\hat K^{m,+}_t - \hat K^{m,+}_s = m\int_s^t (L_r - \hat Y^m_r)^+\,dr \ge m\int_s^t (L_r - Y^m_r)^+\,dr = K^{m,+}_t - K^{m,+}_s.
\]
Thus their weak limits satisfy $A_t - A_s \ge K^+_t - K^+_s$. Observe that, by (2.115), $Y - K^-$ is also a $g_{K^-}$-supersolution:
\[
Y_t - K^-_t = \xi - K^-_T + \int_t^T g_{K^-}(s, Y_s - K^-_s, Z_s)\,ds + K^+_T - K^+_t - \int_t^T Z_s\,dB_s.
\]
Comparing this with (2.117), we have $\check Y \le Y - K^-$; and since both equations are driven by the same $Z$ while $A_t - A_s \ge K^+_t - K^+_s$, Gronwall's lemma applied to the nonnegative difference shows that $Y - K^-$ must be $\check Y$, the smallest $g_{K^-}$-supersolution with terminal condition $\xi - K^-_T$ that dominates $L - K^-$. It follows from Theorem 2.6.1, a) ⇔ b), that $Y - K^-$ satisfies the Skorohod reflecting condition (iii) of Definition 2.4.2 with the obstacle $L^* - K^-$. This implies that, for each $L^* \in D^2(0,T)$ such that $Y \ge L^* \ge L$, $dP \otimes dt$-a.s., we have
\[
\int_0^T (Y_{t-} - L^*_{t-})\,dK^+_t = \int_0^T \big((Y_{t-} - K^-_{t-}) - (L^*_{t-} - K^-_{t-})\big)\,dK^+_t = 0,
\]
namely the reflecting condition for the lower obstacle $L$. Thus all the conditions in Definition 2.4.3 are satisfied. The proof is complete. ¤
We now give an equivalence relation between a double-obstacle reflected BSDE and the smallest $g$-supersolution. We observe that the solution $(Y, Z, K)$ with $K = K^+ - K^-$ of the reflected BSDE with double obstacles can be rewritten as
\[
Y_t - K^-_t = \xi - K^-_T + \int_t^T g_{K^-}(s, Y_s, Z_s)\,ds + K^+_T - K^+_t - \int_t^T Z_s\,dB_s, \tag{2.118}
\]
where $g_{K^-}(t, y, z) := g(t, y + K^-_t, z)$. We have

Theorem 2.8.1. The following two claims are equivalent: a) the triple $(Y, Z, K)$ with $K = K^+ - K^-$ is the solution of the reflected BSDE with double obstacles $L$ and $U$; b) $Y - K^-$ is the smallest $g_{K^-}$-supersolution that dominates $L - K^-$ with terminal condition $\xi - K^-_T$, and $Y + K^+$ is the largest $g_{-K^+}$-subsolution dominated by $U + K^+$ with terminal condition $\xi + K^+_T$.

Proof. If $(Y, Z, K)$ with $K = K^+ - K^-$ is the solution of the reflected BSDE with double obstacles $L$ and $U$, then the triple $(Y - K^-, Z, K^+)$ is clearly the solution of the reflected BSDE (2.118) with the lower obstacle $L - K^-$. But by Theorem 2.6.1, a) ⇔ b), this is equivalent to saying that $Y - K^-$ is the smallest $g_{K^-}$-supersolution that dominates $L - K^-$ with terminal condition $\xi - K^-_T$. The same argument applies to the upper obstacle. ¤
2.9 Penalization from two sides: a convergence result

In this section, we study equations that penalize from the two barriers at the same time, and prove that they converge to the solution of the RBSDE with the two barriers $L$ and $U$. This convergence will help us to find numerical algorithms for the RBSDE with two barriers.

The penalized equations with two barriers are the following BSDEs, for $m \in \mathbb N$:
\[
Y^{m,m}_t = \xi + \int_t^T g(s, Y^{m,m}_s, Z^{m,m}_s)\,ds + m\int_t^T (L_s - Y^{m,m}_s)^+\,ds - m\int_t^T (Y^{m,m}_s - U_s)^+\,ds - \int_t^T Z^{m,m}_s\,dB_s. \tag{2.119}
\]
Notice that the coefficient $g_{m,m}(t, y, z) := g(t, y, z) + m(L_t - y)^+ - m(y - U_t)^+$ satisfies the Lipschitz condition and the integrability condition, so these equations admit unique solutions $(Y^{m,m}, Z^{m,m})$. First, we need some estimates.
Proposition 2.9.1. We have
\[
E\big[\sup_{0\le t\le T} |Y^{m,m}_t|^2\big] + E\int_0^T |Z^{m,m}_s|^2\,ds + E[(K^{m,m,+}_T)^2] + E[(K^{m,m,-}_T)^2] \le C,
\]
where $K^{m,m,+}_t = m\int_0^t (L_s - Y^{m,m}_s)^+\,ds$ and $K^{m,m,-}_t = m\int_0^t (Y^{m,m}_s - U_s)^+\,ds$.

Proof. This is a corollary of Proposition 2.7.1 with $m = n$. ¤

For $m \in \mathbb N$, we consider the solution $(\bar y^m, \bar z^m, \bar k^m)$ of the following RBSDE with the single lower barrier $L$:
\[
\bar y^m_t = \xi + \int_t^T g(s, \bar y^m_s, \bar z^m_s)\,ds + (\bar k^m_T - \bar k^m_t) - m\int_t^T (\bar y^m_s - U_s)^+\,ds - \int_t^T \bar z^m_s\,dB_s,
\]
and the solution $(\underline y^m, \underline z^m, \underline k^m)$ of the following RBSDE with the single upper barrier $U$:
\[
\underline y^m_t = \xi + \int_t^T g(s, \underline y^m_s, \underline z^m_s)\,ds + m\int_t^T (L_s - \underline y^m_s)^+\,ds - (\underline k^m_T - \underline k^m_t) - \int_t^T \underline z^m_s\,dB_s.
\]
Comparing these two RBSDEs with (2.119), by the same techniques as in the proof of Proposition 2.7.1, we get
\[
\underline y^m_t \le Y^{m,m}_t \le \bar y^m_t, \quad \forall t \in [0,T], \ \text{a.s.}
\]
We have already proved that
\[
\underline y^m_t \nearrow Y_t, \qquad \bar y^m_t \searrow Y_t, \quad \forall t \in [0,T], \ \text{a.s.},
\]
where $(Y, Z, K^+, K^-)$ is the solution of the double-barrier RBSDE
\[
Y_t = \xi + \int_t^T g(s, Y_s, Z_s)\,ds + K^+_T - K^+_t - (K^-_T - K^-_t) - \int_t^T Z_s\,dB_s. \tag{2.120}
\]
Thus we have the following lemma.

Lemma 2.9.1. We have $Y^{m,m}_t \to Y_t$, $\forall t \in [0,T]$, a.s.
We thus have
Proposition 2.9.2. We have, for each $1 \le p < 2$,
\[
\lim_{m\to\infty} E\int_0^T |Z^{m,m}_s - Z_s|^p\,ds = 0. \tag{2.121}
\]
Proof. For each pair of stopping times $0 \le \sigma \le \tau \le T$, applying Itô's formula, we get
\[
E|Y^{m,m}_\sigma - Y_\sigma|^2 + E\int_\sigma^\tau |Z^{m,m}_s - Z_s|^2\,ds
= E|Y^{m,m}_\tau - Y_\tau|^2 + E\sum_{t\in(\sigma,\tau]} \big(\Delta(K^+_t - K^-_t)\big)^2 + 2E\int_\sigma^\tau (Y^{m,m}_s - Y_s)(g^m_s - g^0_s)\,ds
\]
\[
+ 2E\int_\sigma^\tau (Y^{m,m}_s - Y_s)\,dK^{m,m,+}_s - 2E\int_\sigma^\tau (Y^{m,m}_s - Y_s)\,dK^+_s
- 2E\int_\sigma^\tau (Y^{m,m}_s - Y_s)\,dK^{m,m,-}_s + 2E\int_\sigma^\tau (Y^{m,m}_s - Y_s)\,dK^-_s,
\]
where $g^m_s := g(s, Y^{m,m}_s, Z^{m,m}_s)$ and $g^0_s := g(s, Y_s, Z_s)$. Now, for any $\delta, \varepsilon > 0$, we choose stopping times $0 \le \sigma_k \le \tau_k \le T$, $k = 0, 1, \ldots, N$, such that the intervals $(\sigma_k, \tau_k]$, $k = 0, 1, \ldots, N$, are pairwise disjoint and
\[
E\sum_{k=0}^N (\tau_k - \sigma_k) \ge T - \frac{\varepsilon}{2}, \qquad
E\sum_{k=0}^N \sum_{t\in(\sigma_k,\tau_k]} \big(\Delta(K^+_t + K^-_t)\big)^2 \le \frac{\delta\varepsilon}{3}.
\]
We then have
\[
\sum_{k=0}^N E|Y^{m,m}_{\sigma_k} - Y_{\sigma_k}|^2 + E\sum_{k=0}^N \int_{\sigma_k}^{\tau_k} |Z^{m,m}_s - Z_s|^2\,ds
\le \sum_{k=0}^N E|Y^{m,m}_{\tau_k} - Y_{\tau_k}|^2 + E\sum_{k=0}^N \sum_{t\in(\sigma_k,\tau_k]} \big(\Delta(K^+_t - K^-_t)\big)^2 + 2E\int_0^T |Y^{m,m}_s - Y_s|\cdot|g^m_s - g^0_s|\,ds
\]
\[
+ 2E\int_0^T (Y^{m,m}_s - Y_s)\,dK^{m,m,+}_s + 2E\int_0^T |Y^{m,m}_s - Y_s|\,dK^+_s
- 2E\int_0^T (Y^{m,m}_s - Y_s)\,dK^{m,m,-}_s + 2E\int_0^T |Y^{m,m}_s - Y_s|\,dK^-_s.
\]
We can apply the Lebesgue dominated convergence theorem to prove that, as $m \to \infty$,
\[
2E\int_0^T |Y^{m,m}_s - Y_s|\,d(K^+_s + K^-_s) \to 0.
\]
On the other hand,
\[
2E\int_0^T (Y^{m,m}_s - Y_s)\,dK^{m,m,+}_s = 2E\int_0^T (Y^{m,m}_s - L_s)\,dK^{m,m,+}_s - 2E\int_0^T (Y_s - L_s)\,dK^{m,m,+}_s
\]
\[
\le 2mE\int_0^T (Y^{m,m}_s - L_s)(L_s - Y^{m,m}_s)^+\,ds = -2mE\int_0^T \big[(L_s - Y^{m,m}_s)^+\big]^2\,ds \le 0,
\]
and
\[
-2E\int_0^T (Y^{m,m}_s - Y_s)\,dK^{m,m,-}_s = -2E\int_0^T (Y^{m,m}_s - U_s)\,dK^{m,m,-}_s + 2E\int_0^T (Y_s - U_s)\,dK^{m,m,-}_s
\]
\[
\le -2mE\int_0^T \big[(Y^{m,m}_s - U_s)^+\big]^2\,ds \le 0.
\]
We thus have
\[
\lim_{m\to\infty} E\sum_{k=0}^N \int_{\sigma_k}^{\tau_k} |Z^{m,m}_s - Z_s|^2\,ds \le \frac{\delta\varepsilon}{3}.
\]
Then, following the same techniques as in the proof of Theorem 2.5.1 or of Theorem 2.1 in [65], we obtain (2.121). ¤
From this result and the Lipschitz property of $g$, we know that the limit of (2.119) satisfies
\[
Y_t = \xi + \int_t^T g(s, Y_s, Z_s)\,ds + \bar K^+_T - \bar K^+_t - (\bar K^-_T - \bar K^-_t) - \int_t^T Z_s\,dB_s, \tag{2.122}
\]
where $\bar K^+_t$ is the weak limit of $K^{m,m,+}_t$ in $L^2(\mathcal F_t)$ and $\bar K^-_t$ is the weak limit of $K^{m,m,-}_t$ in $L^2(\mathcal F_t)$.

Lemma 2.9.2. We have $\bar K^+_t = K^+_t$ and $\bar K^-_t = K^-_t$, $t \in [0,T]$.
Proof. Since $\underline y^m_t \le Y^{m,m}_t \le \bar y^m_t$ for all $t \in [0,T]$, a.s., we have
\[
K^{m,m,+}_t = m\int_0^t (L_s - Y^{m,m}_s)^+\,ds \le m\int_0^t (L_s - \underline y^m_s)^+\,ds =: \underline\kappa^m_t,
\]
\[
K^{m,m,-}_t = m\int_0^t (Y^{m,m}_s - U_s)^+\,ds \le m\int_0^t (\bar y^m_s - U_s)^+\,ds =: \bar\kappa^m_t,
\]
where $\underline\kappa^m$ and $\bar\kappa^m$ are the penalty processes of $\underline y^m$ and $\bar y^m$. From the proof of Theorem 2.4.3, we know that $\underline\kappa^m_t$ converges weakly to $K^+_t$ and $\bar\kappa^m_t$ converges weakly to $K^-_t$ in $L^2(\mathcal F_t)$. So
\[
\bar K^+_t \le K^+_t, \qquad \bar K^-_t \le K^-_t.
\]
On the other hand, comparing (2.120) and (2.122), we know that $\bar K^+_t - \bar K^-_t = K^+_t - K^-_t$, i.e. $K^+_t - \bar K^+_t = K^-_t - \bar K^-_t$. Since, by the Skorohod conditions, $dK^+$ charges only the set where $Y$ touches $L$ and $dK^-$ only the set where $Y$ touches $U$, the decomposition $K^+ - K^-$ is minimal; together with the above inequalities, this forces $K^+_t = \bar K^+_t$ and $K^-_t = \bar K^-_t$ a.s. ¤

Now we know that the limit of (2.119) is the solution of the RBSDE with double barriers.
2.10 Appendix
2.10.1 Some remarks about the Snell envelope
An $\mathcal F_t$-adapted RCLL process $\eta = (\eta_t)_{0\le t\le T}$ is said to be of class D[0,T] if the family $\{\eta(\tau)\}_{\tau\in\mathcal T}$ is uniformly integrable, where $\mathcal T$ is the set of all $\mathcal F_t$-stopping times $\tau$ with $0 \le \tau \le T$.
Definition 2.10.1. Let $\eta = (\eta_t)_{0\le t\le T}$ be of class D[0,T] with $\eta_T \ge 0$. Its Snell envelope $S_t(\eta)$ is defined as
\[
S_t(\eta) = \operatorname*{ess\,sup}_{\tau\in\mathcal T_t} E[\eta(\tau)\,|\,\mathcal F_t], \quad 0 \le t \le T, \tag{2.123}
\]
where $\mathcal T$ is the set of all $\mathcal F_t$-stopping times and, for all $0 \le t \le T$, $\mathcal T_t = \{\tau \in \mathcal T : t \le \tau \le T\}$.

From Theorems 2.28 and 2.29 of [27] (El Karoui, N., 1979), the Snell envelope has the following properties:

Proposition 2.10.1. $S_t(\eta)$ is a positive RCLL process and is the smallest supermartingale which dominates the process $\eta$. In addition, if $\eta$ satisfies
\[
\eta^* := \sup_{0\le t\le T} |\eta_t| \in L^1(\Omega), \tag{2.124}
\]
then $S(\eta)$ is a potential of class D[0,T]. (Indeed, it is dominated by the martingale $E[\eta^*|\mathcal F_t]$.)
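In discrete time, the Snell envelope of Definition 2.10.1 reduces to the backward recursion $S_k = \max(\eta_k, E[S_{k+1}\,|\,\mathcal F_k])$. The sketch below (not from the thesis) computes it on a binomial tree for an American put payoff; the tree parameters, the payoff, and the zero interest rate are arbitrary illustrative assumptions.

```python
def snell_envelope_binomial(S0=100.0, K=100.0, u=1.1, d=0.9, p=0.5, N=3):
    """Snell envelope S_k = max(eta_k, E[S_{k+1} | F_k]) on a binomial tree,
    illustrating Proposition 2.10.1: the smallest supermartingale dominating
    eta. Here eta_k = (K - stock_k)^+ is an American put payoff; all
    parameters are arbitrary illustrations (zero interest rate assumed)."""
    snell = [None] * (N + 1)
    prices = [S0 * u**j * d**(N - j) for j in range(N + 1)]
    snell[N] = [max(K - s, 0.0) for s in prices]     # terminal payoff eta_N
    for k in range(N - 1, -1, -1):
        prices = [S0 * u**j * d**(k - j) for j in range(k + 1)]
        # conditional expectation: up move j -> j+1 with prob p
        cont = [p * snell[k + 1][j + 1] + (1 - p) * snell[k + 1][j]
                for j in range(k + 1)]
        snell[k] = [max(max(K - s, 0.0), c) for s, c in zip(prices, cont)]
    return snell

env = snell_envelope_binomial()
print(env[0][0])
```

By construction each node value dominates both the immediate payoff and the conditional expectation of the next step, which is exactly the discrete analogue of "smallest supermartingale dominating $\eta$".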
Proposition 2.10.2. There exists a unique decomposition of the Snell envelope:
\[
S_t(\eta) = M_t - A^c_t - A^d_t, \tag{2.125}
\]
where $M_t$ is an $\mathcal F_t$-martingale, $A^c_t$ is a continuous integrable increasing process with $A^c_0 = 0$, and $A^d_t$ is a pure-jump integrable increasing predictable RCLL process with $A^d_0 = 0$.
We also need the following results.

Lemma 2.10.1. Relative to the decomposition in Proposition 2.10.2, we have
\[
\int_0^T (S_{t-}(\eta) - \eta_{t-})\,dA_t = 0, \tag{2.126}
\]
where $A_t = A^c_t + A^d_t$.
Proof. We know that $S_t(\eta) = M_t - A^c_t - A^d_t$, where $M$ and $A^c$ are both continuous processes, since $\mathcal F_t$ is generated by the Brownian motion. So, for any $0 \le t \le T$, $A^d_t - A^d_{t-} = -(S_t(\eta) - S_{t-}(\eta))$. By Theorem 2.34 in [27] or Proposition A.4 in [33], we get $\{\Delta A^d = A^d - A^d_- > 0\} \subset \{S_-(\eta) = \eta_-\}$. Then the following holds:
\[
\int_0^T (S_{t-}(\eta) - \eta_{t-})\,dA^d_t = 0. \tag{2.127}
\]
It remains to prove that $\int_0^T (S_{t-}(\eta) - \eta_{t-})\,dA^c_t = 0$. Obviously, $S_t(\eta) + A^d_t = M_t - A^c_t$ is also a supermartingale, and satisfies $S_t(\eta) + A^d_t \ge \eta_t + A^d_t$.

On the other hand, any supermartingale $R$ such that $R_t \ge \eta_t + A^d_t$ satisfies $R_t - A^d_t \ge \eta_t$. Meanwhile, since $S_t(\eta)$ is the Snell envelope of $\eta$, by Proposition 2.10.1 we get $R_t - A^d_t \ge S_t(\eta)$, i.e. $R_t \ge S_t(\eta) + A^d_t$. So $S_t(\eta) + A^d_t$ is the Snell envelope of the process $\eta_t + A^d_t$, where $\eta_T + A^d_T \ge 0$.

Now, for any $0 \le t \le T$, define the stopping time $D_t = \inf\{s \ge t : A^c_s > A^c_t\} \wedge T$; it follows that $A^c_t = A^c_{D_t}$. Thanks to Theorem 2.41 in [27] or Proposition A.4 in [33], $D_t$ is optimal after $t$, hence $S_{\cdot\wedge D_t}(\eta) + A^d_{\cdot\wedge D_t}$ is an $\mathcal F_t$-martingale (Theorem 2.12 in [27]). Consider another stopping time $D'_t = \inf\{s \ge t : S_s(\eta) + A^d_s = \eta_s + A^d_s\} \wedge T$; obviously $D'_t \le D_t$, so $A^c_t = A^c_{D'_t}$, and
\[
\int_0^T (S_{t-}(\eta) - \eta_{t-})\,dA^c_t = 0. \tag{2.128}
\]
The result then follows from (2.127) and (2.128). ¤
Lemma 2.10.2. Let $X = (X_t)_{0\le t\le T}$ be a supermartingale in the space $D^2(0,T)$, and let $K$ be the increasing process of the Doob-Meyer decomposition of $X$. Then $E[K^2_T] < \infty$.

Proof. Applying Itô's formula to $K^2_t$, we get
\[
K^2_T = \int_0^T K_t\,dK_t + \int_0^T K_{t-}\,dK_t.
\]
Noticing that $K^2_T = \int_0^T K_T\,dK_t$, we obtain
\[
E[K^2_T] = E\Big[\int_0^T (K_T - K_t)\,dK_t + \int_0^T (K_T - K_{t-})\,dK_t\Big].
\]
Using the optional projections of $(K_T - K_t)_{t\le T}$ and $(K_T - K_{t-})_{t\le T}$, we obtain
\[
E[K^2_T] = E\Big[\int_0^T E[K_T - K_t\,|\,\mathcal F_t]\,dK_t + \int_0^T E[K_T - K_{t-}\,|\,\mathcal F_t]\,dK_t\Big].
\]
By the decomposition, $E[K_T - K_t\,|\,\mathcal F_t] = X_t - E[X_T\,|\,\mathcal F_t]$ and $E[K_T - K_{t-}\,|\,\mathcal F_t] = X_{t-} - E[X_T\,|\,\mathcal F_t]$, so we have
\[
E[K^2_T] = E\Big[\int_0^T (X_t - E[X_T|\mathcal F_t])\,dK_t + \int_0^T (X_{t-} - E[X_T|\mathcal F_t])\,dK_t\Big]
\]
\[
\le E\Big[K_T \cdot \sup_{t\le T}\big(|X_t| + |X_{t-}| + 2E[|X_T|\,|\,\mathcal F_t]\big)\Big]
\le \frac12 E[K^2_T] + \frac12 E\Big[\Big(\sup_{t\le T}\big(|X_t| + |X_{t-}| + 2E[|X_T|\,|\,\mathcal F_t]\big)\Big)^2\Big].
\]
Finally,
\[
E[K^2_T] \le 3E\big[\sup_{t\le T}|X_t|^2\big] + 3E\big[\sup_{t\le T}|X_{t-}|^2\big] + 12E\big[\sup_{t\le T} E[|X_T|\,|\,\mathcal F_t]^2\big] < \infty,
\]
where the last term is finite by Doob's inequality. ¤
Corollary 2.10.1. Let $\eta = (\eta_t)_{0\le t\le T}$ be in the space $D^2(0,T)$ with $\eta_T \ge 0$, and let $A = A^c + A^d$, where $A^c, A^d$ are the increasing processes of the decomposition of the Snell envelope $S_t(\eta)$. Then $A$ satisfies $E[A^2_T] < \infty$.

Proof. By Lemma 2.10.2, it is enough to prove that $E(\sup_{0\le t\le T} |S_t(\eta)|^2) < \infty$. In fact, since $\eta_T \ge 0$, we have $S_t(\eta) \ge 0$, and thus
\[
|S_t(\eta)|^2 \le E[\eta^*\,|\,\mathcal F_t]^2,
\]
where $\eta^* := \sup_{0\le t\le T} |\eta_t|$. Then, by Doob's inequality, we obtain
\[
E\big(\sup_{0\le t\le T} |S_t(\eta)|^2\big) \le E\big(\sup_{0\le t\le T} E[\eta^*|\mathcal F_t]^2\big) \le 4E[(\eta^*)^2] < \infty. \quad ¤
\]
2.10.2 Stochastic game and the Dynkin game problem
Definition 2.10.2. For a probability space $(\Omega, \mathcal F, P)$, let $\mathcal U$ (resp. $\mathcal V$) be the set of strategies of the first (resp. second) player. We consider a family of random variables $J(u,v)$ indexed by the set $\mathcal U \times \mathcal V$. The rules of the game are the following:
(i) the first player wants to minimize $J(u,v)$ acting on $u \in \mathcal U$;
(ii) the second player wants to maximize $J(u,v)$ acting on $v \in \mathcal V$.
We call such a system a stochastic game.

Definition 2.10.3. A pair $(u^*, v^*) \in \mathcal U \times \mathcal V$ is called a saddle point of the game if, for all $(u,v) \in \mathcal U \times \mathcal V$, we have
\[
J(u^*, v) \le J(u^*, v^*) \le J(u, v^*), \quad \text{a.s.}
\]

Definition 2.10.4. We denote by $\bar V$ (resp. $\underline V$) the upper (resp. lower) value of the game, i.e.
\[
\bar V = \operatorname*{ess\,inf}_{u\in\mathcal U}\,\operatorname*{ess\,sup}_{v\in\mathcal V}\, J(u,v), \qquad
\underline V = \operatorname*{ess\,sup}_{v\in\mathcal V}\,\operatorname*{ess\,inf}_{u\in\mathcal U}\, J(u,v).
\]

Definition 2.10.5. If $\bar V = \underline V = V$ a.s., then $V$ is called the value of the stochastic game.
We now give a sufficient condition for the existence of a value in a stochastic game problem.
Lemma 2.10.3. For a stochastic game with payoff $J(u,v)$, if for all $\varepsilon > 0$ there exist $u_\varepsilon \in \mathcal U$, $v_\varepsilon \in \mathcal V$ such that
\[
J(u_\varepsilon, v) - \varepsilon \le J(u, v_\varepsilon) + \varepsilon \quad \text{a.s. for all } u \in \mathcal U,\ v \in \mathcal V, \tag{2.129}
\]
then the game has a value.

Proof. Taking the ess sup over $v \in \mathcal V$ on the left side of (2.129), we get
\[
\operatorname*{ess\,sup}_{v\in\mathcal V} J(u_\varepsilon, v) - \varepsilon \le J(u, v_\varepsilon) + \varepsilon \quad \text{a.s. for all } u \in \mathcal U.
\]
Then
\[
\operatorname*{ess\,sup}_{v\in\mathcal V} J(u_\varepsilon, v) - \varepsilon \le \operatorname*{ess\,inf}_{u\in\mathcal U} J(u, v_\varepsilon) + \varepsilon \quad \text{a.s.}
\]
Taking the ess inf over $u \in \mathcal U$ on the left side and the ess sup over $v \in \mathcal V$ on the right side, we obtain
\[
\operatorname*{ess\,inf}_{u\in\mathcal U}\,\operatorname*{ess\,sup}_{v\in\mathcal V} J(u,v) - \varepsilon \le \operatorname*{ess\,sup}_{v\in\mathcal V}\,\operatorname*{ess\,inf}_{u\in\mathcal U} J(u,v) + \varepsilon \quad \text{a.s.}, \tag{2.130}
\]
and since $\varepsilon$ is arbitrary, we have $\bar V \le \underline V$ a.s. Since we always have $\bar V \ge \underline V$ a.s., the result follows. ¤
Definition 2.10.6. The Dynkin game is a kind of stochastic game. Given a probability space equipped with a filtration $(\Omega, \mathcal F, P, \mathcal F_t)$, where $\mathcal F_t$ satisfies the usual conditions of Dellacherie, let $\mathcal T$ be the set of $\mathcal F$-stopping times dominated by a fixed time $T$, and let $L^*, U^*$ be two RCLL $\mathcal F_t$-progressive processes of class D. For any $(\tau, \sigma) \in \mathcal T \times \mathcal T$, the payoff $J(\tau, \sigma)$ is defined by
\[
J(\tau, \sigma) = E\big[L^*_\tau\mathbf 1_{\{\tau\le\sigma\}} - U^*_\sigma\mathbf 1_{\{\sigma<\tau\}}\big].
\]
The first player chooses a stopping time $\tau$ so as to maximize the payoff $J(\tau, \sigma)$ over $\mathcal T$, while the second player chooses a stopping time $\sigma$ so as to minimize the payoff $J(\tau, \sigma)$ over $\mathcal T$.
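For intuition (this is not the thesis's construction): in discrete time with a degenerate deterministic filtration, the value of the game of Definition 2.10.6 can be computed by backward induction, giving the maximizer priority on ties to match the $\mathbf 1_{\{\tau\le\sigma\}}$ convention. The recursion $V_k = \max(L_k, \min(-U^*_k, V_{k+1}))$ and the sequences below are illustrative assumptions.

```python
def dynkin_value(L, negU):
    """Backward induction for a deterministic discrete-time Dynkin game
    (degenerate filtration: conditional expectation is the identity):
        V_k = max(L_k, min(negU_k, V_{k+1})),   V_N = L_N = negU_N.
    L[k] is the payoff when the maximizer stops first at k, and
    negU[k] = -U*_k the payoff when the minimizer stops first.  We require
    L <= negU, the analogue of condition (ii) of Theorem 2.10.1.
    All data are arbitrary illustrations."""
    assert all(l <= nu for l, nu in zip(L, negU))
    V = list(L)  # V[N] = L[N]
    for k in range(len(L) - 2, -1, -1):
        V[k] = max(L[k], min(negU[k], V[k + 1]))
    return V

V = dynkin_value([-1.0, 0.5, -0.5, 0.0], [2.0, 1.0, 0.2, 0.0])
print(V)
```

The computed value at each time lies between the two "obstacles" $L_k$ and $-U^*_k$, mirroring the condition $L^* \le X - X' \le -U^*$ appearing in Theorem 2.10.1 below.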
In order to find the value of the Dynkin game, we need the following system, first introduced by Bismut ([10]) and then by Alario-Nazaret ([1]):
\[
X = S(L^* + X'), \qquad X' = S(U^* + X), \tag{2.131}
\]
where $S$ is the Snell envelope (Definition 2.10.1). In ([10], 1977), in the thesis of Alario-Nazaret (1982) and in ([2], 1982), we can find the following result, which gives conditions for the existence of a solution of this system.

Theorem 2.10.1. There exists a pair $(X, X')$ of positive $\mathcal F_t$-supermartingales of class D with $X_T = X'_T = 0$ satisfying the system (2.131), provided that:
(i) $L^*_T = U^*_T = 0$;
(ii) there exist two positive $\mathcal F_t$-supermartingales $(\bar X, \bar X')$ of class D such that $L^* \le \bar X - \bar X' \le -U^*$.

The detailed proof can be found in [10] and [2], so we omit it. The following theorem gives the relation between this system and the value of the Dynkin game.
Theorem 2.10.2. Suppose that $(X, X')$ is a solution of the system (2.131), and consider, for any $0 \le t \le T$, the stochastic game with payoff
\[
R_t(\tau, \sigma) = E\big[L^*(\tau)\mathbf 1_{\{\tau\le\sigma,\,\tau\le T\}} - U^*(\sigma)\mathbf 1_{\{\sigma<\tau\}}\,\big|\,\mathcal F_t\big]
\]
as well as its upper and lower values
\[
\bar V_t = \operatorname*{ess\,inf}_{\sigma\in\mathcal T_t}\,\operatorname*{ess\,sup}_{\tau\in\mathcal T_t} R_t(\tau,\sigma), \qquad
\underline V_t = \operatorname*{ess\,sup}_{\tau\in\mathcal T_t}\,\operatorname*{ess\,inf}_{\sigma\in\mathcal T_t} R_t(\tau,\sigma),
\]
where $\mathcal T_t = \{\tau \in \mathcal T : t \le \tau \le T\}$. Then we have, almost surely,
\[
X_t - X'_t = \bar V_t = \underline V_t. \tag{2.132}
\]
In the special case $t = 0$, we get the existence of the value for the classical Dynkin game problem.
Proof. For any $t \in [0,T]$ and $\varepsilon > 0$, consider the stopping time $\tau^\varepsilon_t = \inf\{s \ge t : X_s \le X'_s + L^*_s + \varepsilon\} \wedge T$; then, by the theory of the Snell envelope, $X_{\cdot\wedge\tau^\varepsilon_t}$ is a martingale ([27], 2.16 and 2.17). Notice that $X'$ is a supermartingale and $X' \ge X + U^*$. Then, for any stopping time $\sigma \in \mathcal T_t$, noticing that $\{\tau^\varepsilon_t < \sigma\} \subset \{\tau^\varepsilon_t < T\}$, we have
\[
X_t - X'_t \le E[X_{\sigma\wedge\tau^\varepsilon_t} - X'_{\sigma\wedge\tau^\varepsilon_t}\,|\,\mathcal F_t]
\le E\big[(X - X')_{\tau^\varepsilon_t}\mathbf 1_{\{\tau^\varepsilon_t<\sigma\}} + (X - X')_\sigma\mathbf 1_{\{\sigma\le\tau^\varepsilon_t\}}\,\big|\,\mathcal F_t\big]
\]
\[
\le E\big[(L^*_{\tau^\varepsilon_t} + \varepsilon)\mathbf 1_{\{\tau^\varepsilon_t<\sigma\}} - U^*_\sigma\mathbf 1_{\{\sigma\le\tau^\varepsilon_t\}}\,\big|\,\mathcal F_t\big]
\le E\big[L^*_{\tau^\varepsilon_t}\mathbf 1_{\{\tau^\varepsilon_t<\sigma\}} - U^*_\sigma\mathbf 1_{\{\sigma\le\tau^\varepsilon_t\}}\,\big|\,\mathcal F_t\big] + \varepsilon = R_t(\tau^\varepsilon_t, \sigma) + \varepsilon \quad \text{a.s.}
\]
On the other hand, consider the stopping time $\sigma^\varepsilon_t = \inf\{s \ge t : X'_s \le X_s + U^*_s + \varepsilon\} \wedge T$; then $X'_{\cdot\wedge\sigma^\varepsilon_t}$ is a martingale, and $X$ is a supermartingale such that $X \ge X' + L^*$. Similarly, for any stopping time $\tau \ge t$, we get
\[
X_t - X'_t \ge E\big[L^*_\tau\mathbf 1_{\{\tau\le\sigma^\varepsilon_t\}} - U^*_{\sigma^\varepsilon_t}\mathbf 1_{\{\sigma^\varepsilon_t<\tau\}}\,\big|\,\mathcal F_t\big] - \varepsilon = R_t(\tau, \sigma^\varepsilon_t) - \varepsilon \quad \text{a.s.}
\]
Then from Lemma 2.10.3 we deduce the result:
\[
X_t - X'_t = \bar V_t = \underline V_t \quad \text{a.s.}
\]
The proof is complete. ¤
2.10.3 Dynkin game and the penalization method for the RBSDE with two RCLL barriers

In this section, we give another proof of the existence result for reflected BSDEs with two RCLL barriers (Theorem 2.3.4), based on an approximation via penalization. For each $m, n \in \mathbb N$, since $g(s,y,z) + n(y - L_s)^- - m(U_s - y)^-$ is Lipschitz in $(y,z)$, the following classical BSDE (cf. [58]) admits a unique solution $(Y^{m,n}, Z^{m,n})$:
\[
Y^{m,n}_t = \xi + \int_t^T g(s, Y^{m,n}_s, Z^{m,n}_s)\,ds + n\int_t^T (Y^{m,n}_s - L_s)^-\,ds - m\int_t^T (U_s - Y^{m,n}_s)^-\,ds - \int_t^T Z^{m,n}_s\,dB_s, \tag{2.133}
\]
when $\xi$ and $g$ satisfy assumptions (i) and (ii), and $L$ and $U$ satisfy (iv) and (v). We set $K^{m,n,+}_t = n\int_0^t (L_s - Y^{m,n}_s)^+\,ds$ and $K^{m,n,-}_t = m\int_0^t (U_s - Y^{m,n}_s)^-\,ds$.

Following the same method as in the proof of Proposition 2.7.1 in Section 2.7 for the RBSDE with two $L^2(0,T)$-barriers, we have a priori estimates, uniform in $n$ and $m$, on the sequence $(Y^{m,n}, Z^{m,n}, K^{m,n,+}, K^{m,n,-})$, that is,
\[
E\big[\sup_{0\le t\le T} (Y^{m,n}_t)^2\big] + E\Big[\int_0^T |Z^{m,n}_s|^2\,ds\Big] + E[(K^{m,n,+}_T)^2] + E[(K^{m,n,-}_T)^2] \le C. \tag{2.134}
\]
In (2.133), for fixed $m$, we set $g_m(s,y,z) = g(s,y,z) - m(U_s - y)^-$; obviously, it has the Lipschitz property and
\[
E\int_0^T (g_m(s,0,0))^2\,ds \le 2E\int_0^T (g(s,0,0))^2\,ds + 2m^2 T\, E\sup_{0\le t\le T} (U^-_t)^2 < \infty.
\]
By the classical comparison theorem for BSDEs, we know that $(Y^{m,n})$ is increasing in $n$ for any fixed $m$. Thanks to the results on the RBSDE with one RCLL barrier obtained in Section 2.1, as $n \to \infty$ we have $Y^{m,n} \nearrow Y^{m,\infty}$ in $H^2(0,T)$, $Z^{m,n} \to Z^{m,\infty}$ weakly in $H^2_d(0,T)$, $K^{m,n,+}_t \to K^{m,\infty,+}_t$ weakly in $L^2(\mathcal F_t)$, and $(Y^{m,\infty}, Z^{m,\infty}, K^{m,\infty,+})$ is the solution of the following RBSDE with one lower barrier $L$:
\[
Y^{m,\infty}_t = \xi + \int_t^T g(s, Y^{m,\infty}_s, Z^{m,\infty}_s)\,ds + K^{m,\infty,+}_T - K^{m,\infty,+}_t - m\int_t^T (U_s - Y^{m,\infty}_s)^-\,ds - \int_t^T Z^{m,\infty}_s\,dB_s, \tag{2.135}
\]
with $Y^{m,\infty}_t \ge L_t$, $0 \le t \le T$, and $\int_0^T (Y^{m,\infty}_t - L_t)\,dK^{m,\infty,+}_t = 0$ a.s. Then, setting $K^{m,\infty,-}_t = m\int_0^t (U_s - Y^{m,\infty}_s)^-\,ds$, with (2.134) we have the following lemma.
Lemma 2.10.4. There exists a constant C independent of m such that
sup0≤t≤T
E(Y m,∞t )2 + E
∫ T
0|Zm,∞
t |2 dt + E(Km,∞,+T )2 + E(Km,∞,−
T )2 ≤ C. (2.136)
Using the BDG inequality, it follows that
\[
E\Big(\sup_{0\le t\le T}(Y^{m,\infty}_t)^2\Big)\le C.
\]
From the comparison theorem 2.2.2, $Y^{m,\infty}_t\ge Y^{m+1,\infty}_t$; we conclude that there exists a process $Y$ such that $Y^{m,\infty}\searrow Y$, and using Fatou's lemma we get
\[
E\Big(\sup_{0\le t\le T}(Y_t)^2\Big)\le C.
\tag{2.137}
\]
By the dominated convergence theorem, it follows that $Y^{m,\infty}\to Y$ as $m\to\infty$ in $\mathbf H^2(0,T)$. Using Theorem 2.2.2 again, we know that
\[
K^{m,\infty,+}_t\ge K^{m+1,\infty,+}_t,\qquad
K^{m,\infty,+}_t-K^{m,\infty,+}_s\ge K^{m+1,\infty,+}_t-K^{m+1,\infty,+}_s,
\]
for $0\le s\le t\le T$. With (2.136), we deduce that there exists a process $K^+$ such that, for $t\in[0,T]$, $K^{m,\infty,+}_t\searrow K^+_t$ in $L^2(\mathcal F_t)$. Obviously $K^+$ is an increasing process and $E[(K^+_T)^2]\le c$. So the assumptions of Theorem 2.5.1 in Section 2.5 are satisfied, and we deduce that the limit $Y$ satisfies
\[
Y_t=\xi+\int_t^T g(s,Y_s,Z_s)\,ds+K^+_T-K^+_t-(K^-_T-K^-_t)-\int_t^T Z_s\,dB_s,
\tag{2.138}
\]
where $K^-_t$ is the weak limit of $K^{m,\infty,-}_t$ in $L^2(\mathcal F_t)$, and $Z^{m,\infty}$ converges strongly to $Z$ in $\mathbf H^p_d(0,T)$ for $p<2$.

Similarly, $(Y^{m,n})$ is decreasing in $m$ for any fixed $n$; letting $m\to\infty$, by the result of Section 1.2, $Y^{m,n}\searrow Y^{\infty,n}$ in $\mathbf H^2(0,T)$, $Z^{m,n}\to Z^{\infty,n}$ weakly in $\mathbf H^2_d(0,T)$, $K^{m,n,-}_t\to K^{\infty,n,-}_t$ weakly in $L^2(\mathcal F_t)$, and $(Y^{\infty,n},Z^{\infty,n},K^{\infty,n,-})$ is the solution of the following RBSDE with one upper barrier $U$:
\[
Y^{\infty,n}_t=\xi+\int_t^T g(s,Y^{\infty,n}_s,Z^{\infty,n}_s)\,ds+n\int_t^T(Y^{\infty,n}_s-L_s)^-\,ds-K^{\infty,n,-}_T+K^{\infty,n,-}_t-\int_t^T Z^{\infty,n}_s\,dB_s,
\tag{2.139}
\]
with $Y^{\infty,n}_t\le U_t$, $0\le t\le T$, and $\int_0^T(Y^{\infty,n}_t-U_t)\,dK^{\infty,n,-}_t=0$. Setting $K^{\infty,n,+}_t=n\int_0^t(Y^{\infty,n}_s-L_s)^-\,ds$, we then have
\[
\sup_{0\le t\le T}E(Y^{\infty,n}_t)^2+E\int_0^T|Z^{\infty,n}_t|^2\,dt+E(K^{\infty,n,-}_T)^2+E(K^{\infty,n,+}_T)^2\le C.
\tag{2.140}
\]
Then, by the comparison theorem 2.2.2 in Section 2.2 and the above estimate, there exists a process $Y'\in\mathbf S^2(0,T)$ such that $Y^{\infty,n}\nearrow Y'$, the convergence also holding in $\mathbf H^2(0,T)$. With Theorem 2.5.1, the limit $Y'$ satisfies
\[
Y'_t=\xi+\int_t^T g(s,Y'_s,Z'_s)\,ds+K'^+_T-K'^+_t-(K'^-_T-K'^-_t)-\int_t^T Z'_s\,dB_s.
\tag{2.141}
\]
Here $Z^{\infty,n}$ converges strongly to $Z'$ in $\mathbf H^p_d(0,T)$ for $p<2$, and $K'^-_t$ (resp. $K'^+_t$) is the strong (resp. weak) limit of $K^{\infty,n,-}_t$ (resp. $K^{\infty,n,+}_t$) in $L^2(\mathcal F_t)$. Now we want to prove that the two limits are equal.
Lemma 2.10.5. The two limits Y and Y ′ are equal.
Proof. Since $Y^{m,n}\nearrow Y^{m,\infty}$ (in $n$) and $Y^{m,n}\searrow Y^{\infty,n}$ (in $m$), for all $m,n\in\mathbb N$ we have $Y^{\infty,n}\le Y^{m,n}\le Y^{m,\infty}$. Then from $Y^{m,\infty}\searrow Y$ and $Y^{\infty,n}\nearrow Y'$, it follows that $Y\ge Y'$. On the other hand, comparing (2.133) and (2.139), since $Y^{\infty,n}\le Y^{m,n}$ it follows that, for $0\le s\le t\le T$,
\[
K^{m,n,+}_t-K^{m,n,+}_s\le K^{\infty,n,+}_t-K^{\infty,n,+}_s.
\]
Moreover, we know that $K^{m,n,+}_t\to K^{m,\infty,+}_t$ weakly in $L^2(\mathcal F_t)$ and $K^{\infty,n,+}_t\to K'^+_t$ weakly in $L^2(\mathcal F_t)$ as $n\to\infty$, while $K^{m,\infty,+}_t\to K^+_t$ strongly in $L^2(\mathcal F_t)$ as $m\to\infty$. In the previous inequality, letting first $n\to\infty$ and then $m\to\infty$, we get
\[
K^+_t-K^+_s\le K'^+_t-K'^+_s.
\tag{2.142}
\]
Next, comparing (2.133) and (2.135), since $Y^{m,n}\le Y^{m,\infty}$, for $0\le s\le t\le T$
\[
K^{m,n,-}_t-K^{m,n,-}_s\le K^{m,\infty,-}_t-K^{m,\infty,-}_s.
\]
Similarly, letting first $n\to\infty$ and then $m\to\infty$ in this inequality, we obtain
\[
K'^-_t-K'^-_s\le K^-_t-K^-_s.
\tag{2.143}
\]
With (2.142), it follows that for $0\le s\le t\le T$
\[
K^+_t-K^+_s-(K^-_t-K^-_s)\le K'^+_t-K'^+_s-(K'^-_t-K'^-_s),
\]
i.e. the process $K'^+_t-K'^-_t-(K^+_t-K^-_t)$ is increasing, and by the comparison theorem for BSDEs it follows that $Y'\ge Y$. Hence $Y'=Y$. $\square$

Then we get immediately $Z=Z'$ and $K^+-K^-=K'^+-K'^-$. We are now able to prove that the limit of the solutions of the penalized BSDEs is the solution of the RBSDE with two RCLL barriers.
Theorem 2.10.3. The triple $(Y,Z,K)$, with $Y\in\mathbf D^2(0,T)$, $Z\in\mathbf H^2_d(0,T)$, $K=K^+-K^-$, $K^+,K^-\in\mathbf A^2(0,T)$, is the unique solution of the RBSDE with two RCLL barriers $L,U$.

Proof. Recall that from Theorem 2.3.4 we have uniqueness. By the discussion above, we know that $(Y^{m,\infty}_t,Z^{m,\infty}_t,K^{m,\infty,+}_t)$ is the solution of the RBSDE with the lower barrier $L_t$. In (2.135), set $K^{m,\infty}_t=K^{m,\infty,+}_t-K^{m,\infty,-}_t$; then $(Y^{m,\infty}_t,Z^{m,\infty}_t,K^{m,\infty}_t)$ can be considered as the solution of the RBSDE with the two barriers $L$ and $U+(U-Y^{m,\infty})^-$. Indeed, it is easy to see that
\[
L\le Y^{m,\infty}\le U+(U-Y^{m,\infty})^-,
\]
and
\[
\int_0^T(Y^{m,\infty}_t-L_t)\,dK^{m,\infty,+}_t=0,
\]
\[
\int_0^T\big(Y^{m,\infty}_t-U_t-(U-Y^{m,\infty})^-_t\big)\,dK^{m,\infty,-}_t
=-m\int_0^T(Y^{m,\infty}_t-U_t)^-(U_t-Y^{m,\infty}_t)^-\,dt=0.
\]
So by Proposition 2.3.1, we get
\[
\begin{aligned}
Y^{m,\infty}_t&=\operatorname*{ess\,inf}_{\sigma\in\mathcal T_t}\operatorname*{ess\,sup}_{\tau\in\mathcal T_t}E\Big[\int_t^{\sigma\wedge\tau}g(s,Y^{m,\infty}_s,Z^{m,\infty}_s)\,ds+\xi\mathbf 1_{\{\sigma\wedge\tau=T\}}\\
&\qquad\qquad+L_\tau\mathbf 1_{\{\tau<T,\,\tau\le\sigma\}}+U_\sigma\mathbf 1_{\{\sigma<\tau\}}+(U_\sigma-Y^{m,\infty}_\sigma)^-\mathbf 1_{\{\sigma<\tau\}}\,\Big|\,\mathcal F_t\Big]\\
&\ge\operatorname*{ess\,inf}_{\sigma\in\mathcal T_t}\operatorname*{ess\,sup}_{\tau\in\mathcal T_t}E\Big[\int_t^{\sigma\wedge\tau}g(s,Y^{m,\infty}_s,Z^{m,\infty}_s)\,ds+\xi\mathbf 1_{\{\sigma\wedge\tau=T\}}+L_\tau\mathbf 1_{\{\tau<T,\,\tau\le\sigma\}}+U_\sigma\mathbf 1_{\{\sigma<\tau\}}\,\Big|\,\mathcal F_t\Big]\\
&\ge\operatorname*{ess\,inf}_{\sigma\in\mathcal T_t}\operatorname*{ess\,sup}_{\tau\in\mathcal T_t}E\Big[\int_t^{\sigma\wedge\tau}g(s,Y_s,Z_s)\,ds+\xi\mathbf 1_{\{\sigma\wedge\tau=T\}}+L_\tau\mathbf 1_{\{\tau<T,\,\tau\le\sigma\}}+U_\sigma\mathbf 1_{\{\sigma<\tau\}}\,\Big|\,\mathcal F_t\Big]\\
&\qquad-kE\Big[\int_0^T\big(|Y^{m,\infty}_s-Y_s|+|Z^{m,\infty}_s-Z_s|\big)\,ds\,\Big|\,\mathcal F_t\Big].
\end{aligned}
\tag{2.144}
\]
Since $Y^{m,\infty}\to Y$ in $\mathbf H^2(0,T)$ and $Z^{m,\infty}\to Z$ in $\mathbf H^p_d(0,T)$ for $p<2$ as $m\to\infty$, we can choose a subsequence which satisfies $E[\int_0^T|Z^{m_j,\infty}_s-Z_s|\,ds\,|\,\mathcal F_t]\to0$ a.s., so we deduce
\[
E\Big[\int_0^T\big(|Y^{m,\infty}_s-Y_s|+|Z^{m,\infty}_s-Z_s|\big)\,ds\,\Big|\,\mathcal F_t\Big]\to0,\quad\text{a.s.}
\]
Letting $m\to\infty$ in (2.144), we obtain
\[
Y_t\ge\operatorname*{ess\,inf}_{\sigma\in\mathcal T_t}\operatorname*{ess\,sup}_{\tau\in\mathcal T_t}E\Big[\int_t^{\sigma\wedge\tau}g(s,Y_s,Z_s)\,ds+\xi\mathbf 1_{\{\sigma\wedge\tau=T\}}+L_\tau\mathbf 1_{\{\tau<T,\,\tau\le\sigma\}}+U_\sigma\mathbf 1_{\{\sigma<\tau\}}\,\Big|\,\mathcal F_t\Big].
\tag{2.145}
\]
On the other side, in the same way, we know that $(Y^{\infty,n},Z^{\infty,n},K^{\infty,n,-})$ is the solution of the RBSDE (2.139) with the upper barrier $U_t$. Set $K^{\infty,n}_t=K^{\infty,n,+}_t-K^{\infty,n,-}_t$; then $(Y^{\infty,n}_t,Z^{\infty,n}_t,K^{\infty,n}_t)$ is the solution of the RBSDE with the two barriers $L-(Y^{\infty,n}-L)^-$ and $U$. Similarly, by Proposition 2.3.1 we deduce that
\[
\begin{aligned}
Y^{\infty,n}_t&=\operatorname*{ess\,sup}_{\tau\in\mathcal T_t}\operatorname*{ess\,inf}_{\sigma\in\mathcal T_t}E\Big[\int_t^{\sigma\wedge\tau}g(s,Y^{\infty,n}_s,Z^{\infty,n}_s)\,ds+\xi\mathbf 1_{\{\sigma\wedge\tau=T\}}\\
&\qquad\qquad+L_\tau\mathbf 1_{\{\tau<T,\,\tau\le\sigma\}}+U_\sigma\mathbf 1_{\{\sigma<\tau\}}-(Y^{\infty,n}_\tau-L_\tau)^-\mathbf 1_{\{\tau<T,\,\tau\le\sigma\}}\,\Big|\,\mathcal F_t\Big]\\
&\le\operatorname*{ess\,sup}_{\tau\in\mathcal T_t}\operatorname*{ess\,inf}_{\sigma\in\mathcal T_t}E\Big[\int_t^{\sigma\wedge\tau}g(s,Y^{\infty,n}_s,Z^{\infty,n}_s)\,ds+\xi\mathbf 1_{\{\sigma\wedge\tau=T\}}+L_\tau\mathbf 1_{\{\tau<T,\,\tau\le\sigma\}}+U_\sigma\mathbf 1_{\{\sigma<\tau\}}\,\Big|\,\mathcal F_t\Big]\\
&\le\operatorname*{ess\,sup}_{\tau\in\mathcal T_t}\operatorname*{ess\,inf}_{\sigma\in\mathcal T_t}E\Big[\int_t^{\sigma\wedge\tau}g(s,Y_s,Z_s)\,ds+\xi\mathbf 1_{\{\sigma\wedge\tau=T\}}+L_\tau\mathbf 1_{\{\tau<T,\,\tau\le\sigma\}}+U_\sigma\mathbf 1_{\{\sigma<\tau\}}\,\Big|\,\mathcal F_t\Big]\\
&\qquad+kE\Big[\int_0^T\big(|Y^{\infty,n}_s-Y_s|+|Z^{\infty,n}_s-Z_s|\big)\,ds\,\Big|\,\mathcal F_t\Big].
\end{aligned}
\tag{2.146}
\]
Since $Y^{\infty,n}\to Y$ in $\mathbf H^2(0,T)$ and $Z^{\infty,n}\to Z$ in $\mathbf H^p_d(0,T)$ for $p<2$ as $n\to\infty$, as above, letting $n\to\infty$ in (2.146) we get
\[
Y_t\le\operatorname*{ess\,sup}_{\tau\in\mathcal T_t}\operatorname*{ess\,inf}_{\sigma\in\mathcal T_t}E\Big[\int_t^{\sigma\wedge\tau}g(s,Y_s,Z_s)\,ds+\xi\mathbf 1_{\{\sigma\wedge\tau=T\}}+L_\tau\mathbf 1_{\{\tau<T,\,\tau\le\sigma\}}+U_\sigma\mathbf 1_{\{\sigma<\tau\}}\,\Big|\,\mathcal F_t\Big].
\tag{2.147}
\]
Comparing (2.145) and (2.147), in view of $\operatorname{ess\,sup}\operatorname{ess\,inf}\le\operatorname{ess\,inf}\operatorname{ess\,sup}$, we finally deduce
\[
\begin{aligned}
Y_t&=\operatorname*{ess\,sup}_{\tau\in\mathcal T_t}\operatorname*{ess\,inf}_{\sigma\in\mathcal T_t}E\Big[\int_t^{\sigma\wedge\tau}g(s,Y_s,Z_s)\,ds+\xi\mathbf 1_{\{\sigma\wedge\tau=T\}}+L_\tau\mathbf 1_{\{\tau<T,\,\tau\le\sigma\}}+U_\sigma\mathbf 1_{\{\sigma<\tau\}}\,\Big|\,\mathcal F_t\Big]\\
&=\operatorname*{ess\,inf}_{\sigma\in\mathcal T_t}\operatorname*{ess\,sup}_{\tau\in\mathcal T_t}E\Big[\int_t^{\sigma\wedge\tau}g(s,Y_s,Z_s)\,ds+\xi\mathbf 1_{\{\sigma\wedge\tau=T\}}+L_\tau\mathbf 1_{\{\tau<T,\,\tau\le\sigma\}}+U_\sigma\mathbf 1_{\{\sigma<\tau\}}\,\Big|\,\mathcal F_t\Big].
\end{aligned}
\]
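The saddle-point identity above can be illustrated on a toy deterministic, discrete-time Dynkin game: with no noise, the dynamic-programming value satisfies $V_k=\min(U_k,\max(L_k,V_{k+1}))$, and when $L\le U$ this coincides with $\max(L_k,\min(U_k,V_{k+1}))$, so inf-sup and sup-inf agree and the game has a value. The data below are hypothetical choices for illustration only.

```python
import numpy as np

# Toy deterministic Dynkin game: backward induction computed in both
# orders (minimizer acting last vs. maximizer acting last).  When the
# barriers satisfy L <= U, the two value processes coincide.
rng = np.random.default_rng(1)
N = 50
L = rng.normal(size=N + 1)
U = L + np.abs(rng.normal(size=N + 1)) + 0.1   # ensures L < U everywhere
xi = 0.5 * (L[N] + U[N])                        # terminal value between barriers

v_inf_sup = np.empty(N + 1)
v_sup_inf = np.empty(N + 1)
v_inf_sup[N] = v_sup_inf[N] = xi
for k in range(N - 1, -1, -1):
    v_inf_sup[k] = min(U[k], max(L[k], v_inf_sup[k + 1]))  # inf-sup order
    v_sup_inf[k] = max(L[k], min(U[k], v_sup_inf[k + 1]))  # sup-inf order
assert np.allclose(v_inf_sup, v_sup_inf)
```

The equality $\min(U,\max(L,v))=\max(L,\min(U,v))$ for $L\le U$ can be checked case by case ($v<L$, $L\le v\le U$, $v>U$); it is the elementary counterpart of the commutation of the two reflections.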
Using (2.32) in Section 2.3, we can rewrite $Y$ in the following form:
\[
\begin{aligned}
Y_t&=\operatorname*{ess\,inf}_{\sigma\in\mathcal T_t}\operatorname*{ess\,sup}_{\tau\in\mathcal T_t}E[\bar L_\tau\mathbf 1_{\{\tau\le\sigma\}}+\bar U_\sigma\mathbf 1_{\{\sigma<\tau\}}\,|\,\mathcal F_t]+N_t\\
&=\operatorname*{ess\,sup}_{\tau\in\mathcal T_t}\operatorname*{ess\,inf}_{\sigma\in\mathcal T_t}E[\bar L_\tau\mathbf 1_{\{\tau\le\sigma\}}+\bar U_\sigma\mathbf 1_{\{\sigma<\tau\}}\,|\,\mathcal F_t]+N_t,
\end{aligned}
\]
where $N_t=E[\xi+\int_0^T g(s)\,ds\,|\,\mathcal F_t]-\int_0^t g(s)\,ds$, $\bar L_t=L_t\mathbf 1_{\{t<T\}}+\xi\mathbf 1_{\{t=T\}}-N_t$ and $\bar U_t=U_t\mathbf 1_{\{t<T\}}+\xi\mathbf 1_{\{t=T\}}-N_t$. That is, the process $Y_t-N_t$ is the value of the stochastic game problem whose payoff is $J_t(\sigma,\tau)=E[\bar L_\tau\mathbf 1_{\{\tau\le\sigma\}}+\bar U_\sigma\mathbf 1_{\{\sigma<\tau\}}\,|\,\mathcal F_t]$. To go further, we need to check that $\bar L$ and $\bar U$ are also in $\mathbf D^2(0,T)$, which can easily be seen by using Doob's inequality. In fact,
\[
\begin{aligned}
E\Big[\sup_{0\le t\le T}(N_t)^2\Big]&\le 2E\Big[\sup_{0\le t\le T}\Big(E\Big[\xi+\int_0^T g(s,Y_s,Z_s)\,ds\,\Big|\,\mathcal F_t\Big]\Big)^2+\Big(\int_0^T g(s,Y_s,Z_s)\,ds\Big)^2\Big]\\
&\le 16E[(\xi)^2]+18E\Big[\int_0^T g^2(s,Y_s,Z_s)\,ds\Big]\\
&\le C\Big(1+E\int_0^T|Y_s|^2\,ds+E\int_0^T\|Z_s\|^2\,ds\Big)<\infty,
\end{aligned}
\]
\[
E\Big[\sup_{0\le t\le T}(\bar L_t)^2\Big]\le 3E\Big[\sup_{0\le t\le T}(L_t)^2\Big]+3E\Big[\sup_{0\le t\le T}(N_t)^2\Big]+3E[(\xi)^2]<\infty,
\]
\[
E\Big[\sup_{0\le t\le T}(\bar U_t)^2\Big]\le 3E\Big[\sup_{0\le t\le T}(U_t)^2\Big]+3E\Big[\sup_{0\le t\le T}(N_t)^2\Big]+3E[(\xi)^2]<\infty.
\]
Thanks to Theorem 2.10.2 in the Appendix, we know that $Y_t-N_t=X^+_t-X^-_t$, where $(X^+,X^-)$ is a pair of supermartingales in $\mathbf D^2(0,T)\times\mathbf D^2(0,T)$, solution of the system
\[
X^+=\mathcal S(\bar L+X^-),\qquad X^-=\mathcal S(-\bar U+X^+)
\tag{2.148}
\]
(notice that $\bar L_T=\bar U_T=0$). Then, by the Doob-Meyer decomposition theorem, we get
\[
X^+_t=E[K^{+,1}_T\,|\,\mathcal F_t]-K^{+,1}_t,\qquad
X^-_t=E[K^{-,1}_T\,|\,\mathcal F_t]-K^{-,1}_t,
\]
where $K^{+,1}_t,K^{-,1}_t$ are predictable increasing processes and, by Lemma 2.10.2, $K^{\pm,1}\in\mathbf A^2(0,T)$. With the representation theorem for the martingale part, it follows that
\[
\begin{aligned}
Y_t&=N_t+X^+_t-X^-_t\\
&=E\Big[\xi+\int_0^T g(s,Y_s,Z_s)\,ds+K^{+,1}_T-K^{-,1}_T\,\Big|\,\mathcal F_t\Big]-\int_0^t g(s,Y_s,Z_s)\,ds-K^{+,1}_t+K^{-,1}_t\\
&=Y_0+\int_0^t Z^1_s\,dB_s-\int_0^t g(s,Y_s,Z_s)\,ds-K^{+,1}_t+K^{-,1}_t.
\end{aligned}
\tag{2.149}
\]
Rewriting (2.138) in forward form and comparing with (2.149), similarly to the case of the RBSDE with one RCLL barrier, we get $Z_t-Z^1_t=0$ and $K^-_t-K^+_t=K^{-,1}_t-K^{+,1}_t$. Then by (2.148) and the properties of the Snell envelope, since $X^+\ge\bar L+X^-$ and $X^-\ge-\bar U+X^+$, we easily see that
\[
L\le N+\bar L\le N+X^+-X^-=Y\le N+\bar U\le U,
\]
so (iii) of Definition 2.1.2 is satisfied.

Finally, (iv) of Definition 2.1.2 also comes from the theory of the Snell envelope, Lemma 2.10.1. Indeed
\[
0=\int_0^T\big(X^+-(\bar L+X^-)\big)_{t^-}\,dK^{+,1}_t
=\int_0^T\big(X^+-X^--L+N\big)_{t^-}\,dK^+_t
=\int_0^T(Y_{t^-}-L_{t^-})\,dK^+_t,
\]
and
\[
0=\int_0^T\big(X^--(-\bar U+X^+)\big)_{t^-}\,dK^{-,1}_t
=\int_0^T\big(X^--X^++U-N\big)_{t^-}\,dK^-_t
=\int_0^T(U_{t^-}-Y_{t^-})\,dK^-_t.
\]
The proof is complete. $\square$
Chapitre 3

Reflected BSDEs under monotonicity and general increasing growth conditions
In this chapter we study reflected BSDEs with one continuous barrier, under monotonicity and general increasing growth conditions in $y$ and a Lipschitz condition in $z$. The chapter is organized as follows. In Section 3.1 we prove the existence and uniqueness of the solution of the RBSDE with deterministic terminal time. After presenting the notations and assumptions in the first subsection, we prove uniqueness in the second subsection. Then, in Subsection 3.1.3, we prove existence in four steps: in the first step, using the penalization method, we obtain existence under the condition
\[
|\xi|+\sup_{0\le t\le T}|f(t,0)|+\sup_{0\le t\le T}L^+_t\le c,
\tag{3.1}
\]
where $f$ is the coefficient of the BSDE; we then relax this boundedness assumption by successive approximations in the three remaining steps.

We then give an application of this reflected BSDE to finance, and prove the existence and uniqueness of the solution of the RBSDE with random terminal time in a later section. Finally, in the appendix, several comparison theorems for BSDEs and RBSDEs, which are used intensively in the existence proof, are presented.
3.1 RBSDE’s on a fixed finite time interval
3.1.1 Hypotheses and Notations
Let $(\Omega,\mathcal F,P)$ be a complete probability space, equipped with a $d$-dimensional Brownian motion $(B_t)_{0\le t\le T}=(B^1_t,B^2_t,\dots,B^d_t)'_{0\le t\le T}$ defined on a finite interval $[0,T]$, $0<T<+\infty$. Denote by $\{\mathcal F_t;\ 0\le t\le T\}$ the natural filtration generated by the Brownian motion $B$: $\mathcal F_t=\sigma\{B_s;\ 0\le s\le t\}$, where $\mathcal F_0$ contains all $P$-null sets of $\mathcal F$. We denote by $\mathcal P$ the $\sigma$-algebra of predictable sets on $[0,T]\times\Omega$.

We recall the notations of the spaces from Chapter 1: $L^2(\mathcal F_t)$, $\mathbf H^2_d(0,T)$, $\mathbf S^2(0,T)$ and $\mathbf A^2(0,T)$.

In the following, we work under these assumptions:
Assumption 3.1.1. A terminal condition $\xi\in L^2(\mathcal F_T)$.

Assumption 3.1.2. A coefficient $f:\Omega\times[0,T]\times\mathbb R\times\mathbb R^d\to\mathbb R$ such that, for some continuous increasing function $\varphi:\mathbb R^+\to\mathbb R^+$ and real numbers $\mu\in\mathbb R$ and $k>0$: for all $t\in[0,T]$, $y,y'\in\mathbb R$, $z,z'\in\mathbb R^d$,

(i) $f(\cdot,y,z)$ is progressively measurable;
(ii) $|f(t,y,z)|\le|f(t,0,z)|+\varphi(|y|)$, a.s.;
(iii) $E\int_0^T|f(t,0,0)|^2\,dt<\infty$;
(iv) $|f(t,y,z)-f(t,y,z')|\le k|z-z'|$, a.s.;
(v) $(y-y')(f(t,y,z)-f(t,y',z))\le\mu(y-y')^2$, a.s.;
(vi) $y\mapsto f(t,y,z)$ is continuous, a.s.

Assumption 3.1.3. A barrier $(L_t)_{0\le t\le T}$, which is a continuous progressively measurable real-valued process satisfying
\[
E\Big[\varphi^2\Big(\sup_{0\le t\le T}(e^{\mu t}L^+_t)\Big)\Big]<\infty,
\]
with $(L^+_t)_{0\le t\le T}\in\mathbf S^2(0,T)$ and $L_T\le\xi$, a.s.
Remark 3.1.1. The condition on the barrier $(L_t)_{0\le t\le T}$ that $(L^+_t)_{0\le t\le T}\in\mathbf S^2(0,T)$, used when $f$ is Lipschitz, or monotone and of linear growth in $y$, is a special case of our Assumption 3.1.3. Indeed, when the coefficient $f$ is Lipschitz or of linear growth in $y$, i.e. $\varphi(y)=y$, then
\[
E\Big[\varphi^2\Big(\sup_{0\le t\le T}e^{\mu t}L^+_t\Big)\Big]=E\Big[\Big(\sup_{0\le t\le T}e^{\mu t}L^+_t\Big)^2\Big]\le(e^{2\mu T}\vee1)\,E\Big[\Big(\sup_{0\le t\le T}L^+_t\Big)^2\Big],
\]
so Assumption 3.1.3 is equivalent to the classical condition on the barrier, $E[\sup_{0\le t\le T}(L^+_t)^2]<\infty$, as in El Karoui et al. (1997, [28]), where this case has already been studied.

Now we introduce the definition of a solution of the RBSDE with parameters satisfying Assumptions 3.1.1, 3.1.2 and 3.1.3, which is the same as in El Karoui et al. (1997, [28]).

Definition 3.1.1. We say that the triple $(Y_t,Z_t,K_t)_{0\le t\le T}$ of progressively measurable processes is a solution of the reflected backward stochastic differential equation with one continuous reflecting lower barrier $L(\cdot)$, terminal condition $\xi$ and coefficient $f$ (in short, RBSDE$(\xi,f,L)$) if the following hold:

(1) $(Y_t)_{0\le t\le T}\in\mathbf S^2(0,T)$, $(Z_t)_{0\le t\le T}\in\mathbf H^2_d(0,T)$ and $(K_t)_{0\le t\le T}\in\mathbf A^2(0,T)$;
(2) $Y_t=\xi+\int_t^T f(s,Y_s,Z_s)\,ds+K_T-K_t-\int_t^T Z_s\,dB_s$, $0\le t\le T$, a.s.;
(3) $Y_t\ge L_t$, $0\le t\le T$;
(4) $\int_0^T(Y_s-L_s)\,dK_s=0$, a.s.
3.1.2 Uniqueness of the solution of the RBSDEs
We first study the uniqueness of the solution of the RBSDE$(\xi,f,L)$, under Assumptions 3.1.1, 3.1.2 and 3.1.3.
Theorem 3.1.1. Under Assumptions 3.1.1, 3.1.2 and 3.1.3, the RBSDE$(\xi,f,L)$ has at most one solution $(Y_t,Z_t,K_t)_{0\le t\le T}$.
Proof. Suppose that $(Y_t,Z_t,K_t)_{0\le t\le T}$ and $(Y'_t,Z'_t,K'_t)_{0\le t\le T}$ are two solutions of the RBSDE$(\xi,f,L)$. Set $\Delta Y=Y-Y'$, $\Delta Z=Z-Z'$, $\Delta K=K-K'$. Applying Itô's formula to $\Delta Y^2$ on the interval $[t,T]$ and taking expectations on both sides, it follows that
\[
\begin{aligned}
E|\Delta Y_t|^2+E\int_t^T|\Delta Z_s|^2\,ds
&=2E\int_t^T\Delta Y_s\big(f(s,Y_s,Z_s)-f(s,Y'_s,Z'_s)\big)\,ds+2E\int_t^T\Delta Y_s\,d\Delta K_s\\
&\le 2kE\int_t^T|\Delta Y_s||\Delta Z_s|\,ds+2\mu E\int_t^T\Delta Y_s^2\,ds\\
&\le(2k^2+2\mu)E\int_t^T\Delta Y_s^2\,ds+\frac12E\int_t^T|\Delta Z_s|^2\,ds.
\end{aligned}
\]
Here we have used the monotonicity assumption in $y$, the Lipschitz assumption in $z$, and
\[
\begin{aligned}
\int_t^T\Delta Y_s\,d\Delta K_s
&=\int_t^T(Y_s-L_s)\,dK_s+\int_t^T(Y'_s-L_s)\,dK'_s\\
&\quad-\int_t^T(Y_s-L_s)\,dK'_s-\int_t^T(Y'_s-L_s)\,dK_s\\
&\le 0.
\end{aligned}
\]
We get
\[
E|\Delta Y_t|^2\le(2k^2+2\mu)E\int_t^T\Delta Y_s^2\,ds.
\]
From Gronwall's inequality, it follows that $E|\Delta Y_t|^2=E|Y_t-Y'_t|^2=0$, $0\le t\le T$, i.e. $Y_t=Y'_t$ a.s.; then we also have $E\int_0^T|\Delta Z_s|^2\,ds=E\int_0^T|Z_s-Z'_s|^2\,ds=0$, and $K_t=K'_t$ follows. $\square$
3.1.3 Existence of the solution of the RBSDEs
We will prove existence in several steps, as described in the following theorem. Compared with the Lipschitz case and the monotone, linear-growth case, see El Karoui et al. (1997, [28]) and Matoussi (1997, [52]) respectively, it requires considerably more technical work.
The Main result
First we note that $(Y_t,Z_t,K_t)_{0\le t\le T}$ solves the RBSDE$(\xi,f,L)$ if and only if
\[
(\bar Y_t,\bar Z_t,\bar K_t):=\Big(e^{\lambda t}Y_t,\ e^{\lambda t}Z_t,\ \int_0^t e^{\lambda s}\,dK_s\Big)
\tag{3.2}
\]
solves the RBSDE$(\bar\xi,\bar f,\bar L)$, where
\[
\bar\xi=\xi e^{\lambda T},\qquad
\bar f(t,y,z)=e^{\lambda t}f(t,e^{-\lambda t}y,e^{-\lambda t}z)-\lambda y,\qquad
\bar L_t=e^{\lambda t}L_t.
\]
If we choose $\lambda=\mu$, then the coefficient $\bar f$ satisfies the same assumptions in 3.1.2 as $f$, but with 3.1.2-(v) replaced by

(v') $(y-y')(f(t,y,z)-f(t,y',z))\le0$.

Since we are in the one-dimensional case, (v') means that $f$ is decreasing in $y$. On the other hand, the barrier $\bar L$ satisfies:

Assumption 3.1.4.
\[
E\Big[\sup_{0\le t\le T}(\bar L^+_t)^2\Big]<\infty,\qquad
E\Big[\varphi^2\Big(\sup_{0\le t\le T}\bar L^+_t\Big)\Big]=E\Big[\varphi^2\Big(\sup_{0\le t\le T}e^{\mu t}L^+_t\Big)\Big]<\infty.
\]
In the following, we shall work with Assumption 3.1.2', which is 3.1.2 with (v) replaced by (v'), and Assumption 3.1.4 instead of 3.1.3. We first present the following existence theorem for the case where $f$ does not depend on $z$; it will be proved later.
Theorem 3.1.2. For any process $(V_t)_{0\le t\le T}\in\mathbf H^2_d(0,T)$, suppose that $f$ satisfies Assumption 3.1.2' and $(L_t)_{0\le t\le T}$ satisfies Assumption 3.1.4; then there exists $(Y_t,Z_t,K_t)_{0\le t\le T}$ which satisfies (1), (3), (4) of Definition 3.1.1 and
\[
Y_t=\xi+\int_t^T f(s,Y_s,V_s)\,ds+K_T-K_t-\int_t^T Z_s\,dB_s,\quad 0\le t\le T.
\]
With this result, we can present the existence of a solution as follows.

Theorem 3.1.3. Suppose Assumptions 3.1.1, 3.1.2' and 3.1.4 hold; then there exists $(Y_t,Z_t,K_t)_{0\le t\le T}$ which is the solution of the RBSDE$(\xi,f,L)$.

Proof. After the transformation (3.2) of $(Y_t,Z_t,K_t)_{0\le t\le T}$, we consider the RBSDE$(\xi,f,L)$ whose parameters satisfy Assumptions 3.1.1, 3.1.2' and 3.1.4. Thanks to Theorem 3.1.2, we can construct a mapping $\Phi$ from $\mathcal S$ into itself, where $\mathcal S$ is defined as the space of progressively measurable processes $(Y_t,Z_t)_{0\le t\le T}$ valued in $\mathbb R\times\mathbb R^d$, normed by
\[
\|(Y,Z)\|_\gamma:=\Big(E\Big[\int_0^T e^{\gamma t}(|Y_t|^2+|Z_t|^2)\,dt\Big]\Big)^{1/2},
\tag{3.3}
\]
for an appropriate $\gamma\in(0,\infty)$ which will be determined later.

Given $(U,V)\in\mathcal S$, $(Y,Z)=\Phi(U,V)$ is the unique solution of the following RBSDE:
\[
Y_t=\xi+\int_t^T f(s,Y_s,V_s)\,ds+K_T-K_t-\int_t^T Z_s\,dB_s,
\]
i.e., if we define the process
\[
K_t=Y_0-Y_t-\int_0^t f(s,Y_s,V_s)\,ds+\int_0^t Z_s\,dB_s,\quad 0\le t\le T,
\]
then $(Y,Z,K)$ satisfies (1)-(4) of Definition 3.1.1, with $\tilde f(s,y,z)=f(s,y,V_s)$.

Consider another element $(U',V')$ of $\mathcal S$ and define $(Y',Z')=\Phi(U',V')$; set
\[
\Delta U=U-U',\quad\Delta V=V-V',\quad\Delta Y=Y-Y',\quad\Delta Z=Z-Z'.
\]
Then we apply Itô's formula to $e^{\gamma t}|\Delta Y_t|^2$ on the interval $[t,T]$:
\[
\begin{aligned}
e^{\gamma t}E|\Delta Y_t|^2+E\int_t^T e^{\gamma s}\big(\gamma|\Delta Y_s|^2+|\Delta Z_s|^2\big)\,ds
&=2E\int_t^T e^{\gamma s}\Delta Y_s\big(f(s,Y_s,V_s)-f(s,Y'_s,V'_s)\big)\,ds+2E\int_t^T e^{\gamma s}\Delta Y_s\,d\Delta K_s\\
&\le 2k^2E\int_t^T e^{\gamma s}|\Delta Y_s|^2\,ds+\frac12E\int_t^T e^{\gamma s}|\Delta V_s|^2\,ds,
\end{aligned}
\]
since
\[
\begin{aligned}
\int_t^T e^{\gamma s}\Delta Y_s\,d\Delta K_s
&=\int_t^T e^{\gamma s}(Y_s-L_s)\,dK_s+\int_t^T e^{\gamma s}(Y'_s-L_s)\,dK'_s\\
&\quad-\int_t^T e^{\gamma s}(Y_s-L_s)\,dK'_s-\int_t^T e^{\gamma s}(Y'_s-L_s)\,dK_s\\
&\le 0.
\end{aligned}
\]
Hence, if we choose $\gamma=1+2k^2$, it follows that
\[
E\int_t^T e^{\gamma s}\big(|\Delta Y_s|^2+|\Delta Z_s|^2\big)\,ds
\le\frac12E\int_t^T e^{\gamma s}|\Delta V_s|^2\,ds
\le\frac12E\int_t^T e^{\gamma s}\big(|\Delta U_s|^2+|\Delta V_s|^2\big)\,ds.
\]
Consequently, $\Phi$ is a strict contraction on $\mathcal S$ with the norm (3.3) and has a fixed point, which is the unique solution of the RBSDE$(\xi,f,L)$. $\square$
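The fixed-point scheme behind this proof can be sketched on a deterministic toy version: iterate $V\mapsto Y$ with $Y_t=\xi+\int_t^T f(s,V_s)\,ds$ for a Lipschitz coefficient. This is a simplified Picard-type iteration with no Brownian part (so $Z$ drops out); $f$ and $\xi$ below are hypothetical choices, not from the text.

```python
import numpy as np

# Deterministic toy version of the iteration Y = Phi(V): one Picard step
# computes Y_t = xi + \int_t^T f(u, V_u) du with a right Riemann sum.
# Repeated composition converges to the fixed point for any Lipschitz f.
N, T = 1000, 1.0
dt = T / N
s = np.linspace(0.0, T, N + 1)
xi = 1.0
f = lambda t, v: np.cos(t) - 2.0 * v   # Lipschitz in v (constant 2), hypothetical

def Phi(V):
    """One Picard step: Y_t = xi + integral over [t, T] of f(u, V_u)."""
    incr = f(s[1:], V[1:]) * dt
    Y = np.empty_like(V)
    Y[N] = xi
    Y[:N] = xi + np.cumsum(incr[::-1])[::-1]   # backward cumulative sum
    return Y

V = np.zeros(N + 1)
for _ in range(40):
    V = Phi(V)
# After enough iterations, successive iterates agree to high accuracy:
assert np.max(np.abs(Phi(V) - V)) < 1e-8
```

The $k$-th iterate's error contracts like $(\mathrm{Lip}\cdot T)^k/k!$, which is the deterministic counterpart of the weighted-norm contraction with $\gamma=1+2k^2$ used above.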
3.1. RBSDE’s on[0, T ] 67
Proof of Theorem 3.1.2

Let us recall the assumptions on the coefficient $f$:

Assumption 3.1.5. We write $f(s,y)$ for $f(s,y,V_s)$, so that $f(s,0)=f(s,0,V_s)$, which is in $\mathbf H^2(0,T)$, and:

(ii') $|f(s,y)|\le|f(s,0)|+\varphi(|y|)$;
(iii') $E\int_0^T|f(t,0)|^2\,dt<\infty$;
(v'') $(y-y')(f(s,y)-f(s,y'))\le0$;
(vi') $y\mapsto f(s,y)$ is continuous, for all $s\in[0,T]$, a.s.

We point out that we always denote by $c>0$ a constant whose value can change from line to line. The proof is carried out in the following four steps.

– Using a penalization method, we prove existence under the assumption
\[
|\xi|^2+\sup_{0\le t\le T}|f(t,0)|^2+\sup_{0\le t\le T}L^+_t\le c.
\tag{3.4}
\]
– Approximating the barrier $L$, we prove existence under Assumption 3.1.4 and the boundedness assumption on $\xi$ and $f(t,0)$, i.e.
\[
|\xi|^2+\sup_{0\le t\le T}|f(t,0)|^2\le c.
\tag{3.5}
\]
– By approximation, we prove existence of the solution under the assumption $\xi\ge c$, $\inf_{0\le t\le T}f(t,0)\ge c$.
– Finally, we prove existence of the solution under the assumptions $\xi\in L^2(\mathcal F_T)$, $f(t,0)\in\mathbf H^2(0,T)$, by approximation.
Before beginning Step 1, we prove the following lemma, which will be used for the estimates.

Lemma 3.1.1. Suppose that $f$ satisfies Assumption 3.1.5 and that (3.4) holds; then there exists a triple $(Y^*_t,Z^*_t,K^*_t)_{0\le t\le T}$ which satisfies
\[
Y^*_t=\xi+\int_t^T f(s,Y^*_s)\,ds+K^*_T-K^*_t-\int_t^T Z^*_s\,dB_s,
\tag{3.6}
\]
with $Y^*_t\ge L_t$, $0\le t\le T$, a.s., $\sup_{0\le t\le T}|Y^*_t|\le c$, $Z^*\in\mathbf H^2_d(0,T)$, and $K^*$ increasing with $K^*_0=0$, $K^*_T\le c$.

Proof. Consider the random variable $\bar\xi:=\max\{\sup_{0\le t\le T}L^+_t,\ \xi\}\ge0$; by (3.4), it follows that $|\bar\xi|\le c$. Set $\bar L_t=E[\bar\xi\,|\,\mathcal F_t]$. The process $\bar L_t$ is a bounded martingale, and by the Itô representation theorem there exists a process $\bar Z\in\mathbf H^2_d(0,T)$ such that
\[
\begin{aligned}
\bar L_t&=\bar L_0+\int_0^t\bar Z_s\,dB_s=\bar\xi-\int_t^T\bar Z_s\,dB_s\\
&=\bar\xi+\int_t^T f(s,\bar L_s)\,ds-\int_t^T f(s,\bar L_s)\,ds-\int_t^T\bar Z_s\,dB_s.
\end{aligned}
\tag{3.7}
\]
The process $K^*_t=\int_0^t f^-(s,\bar L_s)\,ds+(\bar\xi-\xi)\mathbf 1_{\{t=T\}}$ is a uniformly bounded increasing process, since
\[
K^*_T\le T\Big(\sup_{0\le t\le T}|f(t,0)|+\varphi\Big(\sup_{0\le t\le T}\bar L_t\Big)\Big)+|\bar\xi|+|\xi|\le c.
\]
Then we consider the following BSDE:
\[
\tilde Y_t=\xi+K^*_T+\int_t^T\tilde f(s,\tilde Y_s)\,ds-\int_t^T\tilde Z_s\,dB_s,
\tag{3.8}
\]
where $\tilde f(t,y)=f(t,y-K^*_t)$. Since
\[
|\xi+K^*_T|+\sup_{0\le t\le T}|\tilde f(t,0)|\le|\xi|+K^*_T+\sup_{0\le t\le T}|f(t,0)|+\varphi(|K^*_T|)\le c,
\]
from the proof of Step 1 of Proposition 2.4 in Pardoux (1999, [57]), the BSDE (3.8) has a unique solution $(\tilde Y_t,\tilde Z_t)_{0\le t\le T}$, and $\tilde Y$ is uniformly bounded. Now if we set $Y^*_t=\tilde Y_t-K^*_t$, $Z^*_t=\tilde Z_t$, it is easy to check that $(Y^*,Z^*)$ satisfies
\[
Y^*_t=\xi+\int_t^T f(s,Y^*_s)\,ds+K^*_T-K^*_t-\int_t^T Z^*_s\,dB_s,
\]
with
\[
\sup_{0\le t\le T}|Y^*_t|\le\sup_{0\le t\le T}|\tilde Y_t|+K^*_T\le c.
\]
On the other hand, (3.7) can be rewritten as
\[
\bar L_t+K^*_t=\xi+K^*_T+\int_t^T\tilde f(s,\bar L_s+K^*_s)\,ds-\int_t^T f^+(s,\bar L_s)\,ds-\int_t^T\bar Z_s\,dB_s.
\tag{3.9}
\]
Since $\int_0^t f^+(s,\bar L_s)\,ds$ is an increasing process, by the generalized comparison theorem 3.4.1, we have $Y^*_t+K^*_t\ge\bar L_t+K^*_t$, $0\le t\le T$, so
\[
Y^*_t\ge\bar L_t\ge L_t,\quad 0\le t\le T.
\]
$\square$
Step 1. We start by proving existence under the assumption (3.4): for some $c>0$,
\[
|\xi|^2+\sup_{0\le t\le T}|f(t,0)|^2+\sup_{0\le t\le T}L^+_t\le c.
\]
Consider the penalized BSDEs, for $n\in\mathbb N$,
\[
Y^n_t=\xi+\int_t^T f(s,Y^n_s)\,ds+n\int_t^T(Y^n_s-L_s)^-\,ds-\int_t^T Z^n_s\,dB_s.
\tag{3.10}
\]
From Proposition 2.4 in Pardoux (1999, [57]), these penalized BSDEs admit a unique solution $(Y^n_t,Z^n_t)_{0\le t\le T}$. Indeed, set $f_n(s,y)=f(s,y)+n(y-L_s)^-$; we only need to check that $f_n$ satisfies the conditions of Proposition 2.4. First,
\[
\begin{aligned}
|f_n(s,y)|&=|f(s,y)+n(y-L_s)^-|\\
&\le|f(s,0,0)|+k|V_s|+\varphi(|y|)+n|y|+nc\\
&\le|f(s,0,0)|+k|V_s|+\varphi_n(|y|),
\end{aligned}
\]
where $\varphi_n(|y|)=\varphi(|y|)+nc+n|y|$ is again a continuous increasing function from $\mathbb R^+$ to $\mathbb R^+$.

Then the square integrability follows easily from
\[
E\int_0^T|f_n(s,0)|^2\,ds\le 2E\int_0^T|f(t,0,0)|^2\,dt+2n^2TE\Big[\sup_{0\le t\le T}|L^+_t|^2\Big]<\infty.
\]
Also, $f_n$ still keeps the monotonicity condition:
\[
\begin{aligned}
(y-y')(f_n(s,y)-f_n(s,y'))&=(y-y')(f(s,y)-f(s,y'))+(y-y')\big(n(y-L_s)^--n(y'-L_s)^-\big)\\
&\le n(y-y')^2.
\end{aligned}
\]
Finally, it is obvious that $y\mapsto f_n(s,y)$ is still continuous, for all $s\in[0,T]$, a.s.

Denote $K^n_t=n\int_0^t(Y^n_s-L_s)^-\,ds$. Let us now prove the a priori estimates on $(Y^n_t,Z^n_t,K^n_t)_{0\le t\le T}$, uniformly in $n$. For this, consider the BSDE with coefficient $f(t,\cdot)$,
\[
\underline Y_t=\xi+\int_t^T f(s,\underline Y_s)\,ds-\int_t^T\underline Z_s\,dB_s.
\]
By the result of Step 1 of Proposition 2.4 in Pardoux (1999, [57]), $\sup_{0\le t\le T}|\underline Y_t|\le c$. Obviously, for $(s,y)\in[0,T]\times\mathbb R$, $f_n(s,y)\ge f(s,y)$; then by the comparison theorem 2.4 in Pardoux (1999, [57]), we obtain
\[
Y^n_t\ge\underline Y_t,\quad 0\le t\le T,\ \text{a.s.}
\tag{3.11}
\]
On the other hand, by Lemma 3.1.1 there exists a triple $(Y^*_t,Z^*_t,K^*_t)_{0\le t\le T}$ which satisfies equation (3.6), with $Y^*_t\ge L_t$, $0\le t\le T$, and $\sup_{0\le t\le T}|Y^*_t|\le c$; moreover, the triple $(Y^*,Z^*,K^*)$ satisfies
\[
\begin{aligned}
Y^*_t&=\xi+\int_t^T f(s,Y^*_s)\,ds+n\int_t^T(Y^*_s-L_s)^-\,ds+K^*_T-K^*_t-\int_t^T Z^*_s\,dB_s\\
&=\xi+\int_t^T f_n(s,Y^*_s)\,ds+K^*_T-K^*_t-\int_t^T Z^*_s\,dB_s.
\end{aligned}
\tag{3.12}
\]
Using the generalized comparison theorem 3.4.1, we get $Y^*_t\ge Y^n_t$, $0\le t\le T$. With (3.11),
\[
Y^*_t\ge Y^n_t\ge\underline Y_t,\quad 0\le t\le T,
\]
follows, and since $Y^*$ and $\underline Y$ are uniformly bounded on the interval $[0,T]$,
\[
\sup_{0\le t\le T}|Y^n_t|\le\max\Big\{\sup_{0\le t\le T}|Y^*_t|,\ \sup_{0\le t\le T}|\underline Y_t|\Big\}\le c,
\tag{3.13}
\]
where $c$ is a constant independent of $n$. Furthermore, for each $n\in\mathbb N$,
\[
|f(s,Y^n_s)|\le|f(s,0)|+\varphi\Big(\sup_{0\le t\le T}|Y^n_t|\Big)\le c.
\tag{3.14}
\]
Now we apply Itô's formula to $|Y^n_t|^2$ on the interval $[t,T]$ and take expectations:
\[
\begin{aligned}
E|Y^n_t|^2+E\int_t^T|Z^n_s|^2\,ds
&=E|\xi|^2+2E\int_t^T Y^n_s f(s,Y^n_s)\,ds+2E\int_t^T Y^n_s\,dK^n_s\\
&\le E|\xi|^2+E\int_t^T|Y^n_s|^2\,ds+E\int_t^T|f(s,0)|^2\,ds+\alpha E\Big[\sup_{0\le t\le T}(L^+_t)^2\Big]+\frac1\alpha E[(K^n_T-K^n_t)^2],
\end{aligned}
\tag{3.15}
\]
where $\alpha$ is a positive number. We rewrite the BSDE$(\xi,f_n)$ as
\[
K^n_T-K^n_t=Y^n_t-\xi-\int_t^T f(s,Y^n_s)\,ds+\int_t^T Z^n_s\,dB_s.
\]
Hence, by (3.4), (3.13) and (3.14),
\[
E(K^n_T-K^n_t)^2\le 4E|Y^n_t|^2+4E|\xi|^2+4TE\int_t^T|f(s,Y^n_s)|^2\,ds+4E\int_t^T|Z^n_s|^2\,ds
\le c+4E\int_t^T|Z^n_s|^2\,ds.
\tag{3.16}
\]
Substituting (3.16) into (3.15), taking $\alpha=8$, and using (3.4) and (3.13), it follows that
\[
E\int_0^T|Z^n_s|^2\,ds\le c.
\tag{3.17}
\]
Using (3.16) again, we get
\[
E[(K^n_T)^2]\le c.
\tag{3.18}
\]
Notice that for all $n\in\mathbb N$ and $(s,y)\in[0,T]\times\mathbb R$,
\[
f_n(s,y)\le f_{n+1}(s,y).
\]
Then, by the comparison theorem 3.4.1, we get $Y^n_t\le Y^{n+1}_t$, $0\le t\le T$, a.s. Hence
\[
Y^n_t\nearrow Y_t,\quad 0\le t\le T,\ \text{a.s.}
\tag{3.19}
\]
In view of (3.13), we have
\[
\sup_{0\le t\le T}|Y_t|\le c,
\tag{3.20}
\]
and by the dominated convergence theorem
\[
E\int_0^T(Y^n_t-Y_t)^2\,dt\to0,\quad\text{as }n\to\infty.
\]
We will now prove that the sequence $Y^n$ converges in the space $\mathbf S^2(0,T)$. Applying Itô's formula to $|Y^n_t-Y^p_t|^2$, for $n,p\in\mathbb N$, on the interval $[t,T]$, we have
\[
\begin{aligned}
E|Y^n_t-Y^p_t|^2+E\int_t^T|Z^n_s-Z^p_s|^2\,ds
&=2E\int_t^T\big(f(s,Y^n_s)-f(s,Y^p_s)\big)(Y^n_s-Y^p_s)\,ds+2E\int_t^T(Y^n_s-Y^p_s)\,d(K^n_s-K^p_s)\\
&\le 2E\int_t^T(Y^n_s-L_s)^-\,dK^p_s+2E\int_t^T(Y^p_s-L_s)^-\,dK^n_s.
\end{aligned}
\tag{3.21}
\]
Let us state the following lemma; its proof, which follows easily from the boundedness of $f(s,Y^n_s)$ in (3.14) and is similar to that of Lemma 6.1 in El Karoui et al. (1997, [28]), is omitted.

Lemma 3.1.2. The limit satisfies $Y_t\ge L_t$ for $0\le t\le T$, a.s., and
\[
E\Big[\sup_{0\le t\le T}\big((Y^n_t-L_t)^-\big)^2\Big]\to0,\quad\text{as }n\to\infty.
\]
3.1. RBSDE’s on[0, T ] 71
Then like in [28], from this lemma and (3.18), for the first term in (3.21), we deduce
E
∫ T
t(Y n
s − Ls)−dKps ≤ [E[ sup
0≤t≤T(|Y n
t − Lt|−)2]]12 · [E[(Kp
T )2]]12 → 0,
as n, p →∞. Similarly,
E
∫ T
t(Y p
s − Ls)−dKns → 0, as n, p →∞.
Hence
E
∫ T
0|Zn
s − Zps |2 ds → 0, as n, p →∞. (3.22)
So there exists a process Z ∈ H2d(0, T ), s.t.
E[∫ T
0|Zn
s − Zs|2 ds] → 0, as n →∞. (3.23)
Then, by the Burkholder-Davis-Gundy (BDG) inequality, it follows that
\[
E\sup_{0\le t\le T}|Y^n_t-Y^p_t|^2
\le 4E\int_0^T(Y^n_s-L_s)^-\,dK^p_s+4E\int_0^T(Y^p_s-L_s)^-\,dK^n_s+cE\int_0^T|Z^n_s-Z^p_s|^2\,ds\to0,
\]
as $n,p\to\infty$. Finally
\[
E\sup_{0\le t\le T}|Y^n_t-Y_t|^2\to0,\quad\text{as }n\to\infty.
\tag{3.24}
\]
By (3.19) and the fact that $f(s,y)$ is continuous and decreasing in $y$, we get $f(s,Y^n_s)\searrow f(s,Y_s)$, $0\le s\le T$. Moreover $|f(s,Y^n_s)|\le c$. Using the dominated convergence theorem, we deduce that
\[
E\int_0^T\big[f(t,Y^n_t)-f(t,Y_t)\big]^2\,dt\to0.
\tag{3.25}
\]
Now let us consider the convergence of the sequence $K^n$; for $n,p\in\mathbb N$, we have
\[
K^n_t=Y^n_0-Y^n_t-\int_0^t f(s,Y^n_s)\,ds+\int_0^t Z^n_s\,dB_s,
\]
\[
K^p_t=Y^p_0-Y^p_t-\int_0^t f(s,Y^p_s)\,ds+\int_0^t Z^p_s\,dB_s.
\]
Using the BDG inequality, we obtain
\[
\begin{aligned}
E\sup_{0\le t\le T}|K^n_t-K^p_t|^2
&\le 4|Y^n_0-Y^p_0|^2+4E\sup_{0\le t\le T}|Y^n_t-Y^p_t|^2+4E\sup_{0\le t\le T}\Big(\int_0^t\big(f(s,Y^n_s)-f(s,Y^p_s)\big)\,ds\Big)^2\\
&\quad+4E\Big(\sup_{0\le t\le T}\Big|\int_0^t(Z^n_s-Z^p_s)\,dB_s\Big|\Big)^2\\
&\le 4|Y^n_0-Y^p_0|^2+4E\sup_{0\le t\le T}|Y^n_t-Y^p_t|^2+4TE\int_0^T\big(f(s,Y^n_s)-f(s,Y^p_s)\big)^2\,ds\\
&\quad+cE\int_0^T|Z^n_s-Z^p_s|^2\,ds.
\end{aligned}
\]
By (3.24), (3.22) and (3.25), it follows that
\[
E\sup_{0\le t\le T}|K^n_t-K^p_t|^2\to0,\quad\text{as }n,p\to\infty,
\tag{3.26}
\]
so there exists an increasing process $K$ in $\mathbf A^2(0,T)$ such that
\[
E\sup_{0\le t\le T}|K^n_t-K_t|^2\to0,\quad\text{as }n\to\infty,
\tag{3.27}
\]
and $(Y_t,Z_t,K_t)_{0\le t\le T}\in\mathbf S^2(0,T)\times\mathbf H^2_d(0,T)\times\mathbf A^2(0,T)$ satisfies property (2) of Definition 3.1.1. From Lemma 3.1.2, we know that (3) of Definition 3.1.1 is true. It remains to check (4). Since $(Y^n_t,K^n_t)_{0\le t\le T}$ tends to $(Y_t,K_t)_{0\le t\le T}$ uniformly in $t$ in probability, the measure $dK^n$ converges to $dK$ weakly in probability, so $\int_0^T(Y^n_t-L_t)\,dK^n_t\to\int_0^T(Y_t-L_t)\,dK_t$ in probability as $n\to\infty$. Obviously $\int_0^T(Y_t-L_t)\,dK_t\ge0$. On the other hand, for each $n\in\mathbb N$, $\int_0^T(Y^n_t-L_t)\,dK^n_t\le0$. Hence
\[
\int_0^T(Y_t-L_t)\,dK_t=0,\quad\text{a.s.}
\]
Consequently, $(Y,Z,K)$ is a solution of the RBSDE$(\xi,f,L)$ under the assumption (3.4). $\square$
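As an illustration of Step 1 (not part of the argument): with $f$ replaced by $0$, the penalized equation (3.10) becomes a backward ODE, and one can observe numerically that $Y^n$ is increasing in $n$ while the barrier violation $(L-Y^n)^+$ shrinks, in line with (3.19) and Lemma 3.1.2. The barrier, terminal value and grid below are hypothetical choices.

```python
import numpy as np

# Illustrative one-barrier penalization with f = 0: the equation
#   Y^n_t = xi + n \int_t^T (Y^n_s - L_s)^- ds
# is solved with an implicit Euler step.  Hypothetical data only.
N, T = 2000, 1.0
dt = T / N
t = np.linspace(0.0, T, N + 1)
L = 1.0 - 2.0 * t          # decreasing barrier with L_T = -1 <= xi
xi = 0.0

def penalized(n):
    Y = np.empty(N + 1)
    Y[N] = xi
    for k in range(N - 1, -1, -1):
        yhat = Y[k + 1]
        # implicit step solving y = yhat + n*dt*(L_k - y)^+
        Y[k] = (yhat + n * dt * L[k]) / (1 + n * dt) if yhat < L[k] else yhat
    return Y

Y10, Y100, Y1000 = penalized(10), penalized(100), penalized(1000)
# Y^n is increasing in n, and the violation (L - Y^n)^+ decreases with n.
assert np.all(Y100 >= Y10 - 1e-12) and np.all(Y1000 >= Y100 - 1e-12)
assert np.max(np.maximum(L - Y1000, 0.0)) < np.max(np.maximum(L - Y10, 0.0))
```

Here the maximal violation behaves roughly like $|L'|/n$, mirroring the role of the penalty term as an approximation of the reflecting process $K$.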
Step 2. Now we consider the case of a barrier $L$ which satisfies Assumption 3.1.4:
\[
E\Big[\varphi^2\Big(\sup_{0\le t\le T}L^+_t\Big)\Big]<\infty,
\]
$L^+\in\mathbf S^2(0,T)$, $L_T\le\xi$, still in the case when $\xi$ and $f(t,0)$ are uniformly bounded.

Under Assumption 3.1.5 and (3.5), we know that there exist constants $c_1$ and $c_2$ such that $\xi\le c_1$, $f(t,0)\le c_2$; set $c'=\max\{c_1,c_2T\}$. Then the triple $(Y_t,Z_t,K_t)_{0\le t\le T}$ is the solution of the RBSDE$(\xi,f,L)$ if and only if $(Y'_t,Z'_t,K'_t)_{0\le t\le T}$ is the solution of the RBSDE$(\xi',f',L')$, where
\[
(Y'_t,Z'_t,K'_t)=(Y_t+c_2t-2c',\ Z_t,\ K_t),
\tag{3.28}
\]
and
\[
\xi'=\xi+c_2T-2c',\qquad
f'(t,y)=f(t,y-(c_2t-2c'))-c_2,\qquad
L'_t=L_t+c_2t-2c'.
\]
In fact, the triple $(Y'_t,Z'_t,K'_t)_{0\le t\le T}$ satisfies (2) of Definition 3.1.1 for the RBSDE$(\xi',f',L')$:
\[
Y'_t=Y_t+c_2t-2c'
=\xi'+\int_t^T f'(s,Y'_s)\,ds+K'_T-K'_t-\int_t^T Z'_s\,dB_s,
\]
and $Y'_t=Y_t+c_2t-2c'\ge L_t+c_2t-2c'=L'_t$, with
\[
\int_0^T(Y'_t-L'_t)\,dK'_t=\int_0^T(Y_t-L_t)\,dK_t=0.
\]
Now we consider the RBSDE$(\xi',f',L')$; obviously (3.5) also holds for $\xi'$ and $f'$. In fact,
\[
\begin{aligned}
|\xi'|+\sup_{0\le t\le T}|f'(t,0)|
&\le|\xi|+|c_2T-2c'|+\sup_{0\le t\le T}\big(|f(t,0)|+\varphi(2c'-c_2t)+c_2\big)\\
&\le|\xi|+|c_2T-2c'|+\sup_{0\le t\le T}|f(t,0)|+\varphi(2c')+c_2\le c.
\end{aligned}
\]
3.1. RBSDE’s on[0, T ] 73
It follows directly that E∫ T0 |f ′(t, 0)|2 dt ≤ c. Also f ′ is decreasing and continuous in y. For as-
sumption 3.1.5-(ii’), since c2t ≤ c′, 2c′ − c2t ≥ 0, for 0 ≤ t ≤ T , then∣∣f ′(t, y)
∣∣ =∣∣f(t, y − (c2t− 2c′))− c2
∣∣≤ ∣∣f ′(t, 0)
∣∣ + |f(t, 0)|+ c2 + ϕ(|y|+ 2c′ − c2t)≤ ∣∣f ′(t, 0)
∣∣ + ϕ′(|y|)where ϕ′(y) = |f(t, 0)| + c2 + ϕ(|y| + 2c′), which is still a continuous increasing positive function.Moreover, since f is decreasing on y, and 2c′ − c2t ≥ 0, we have
ξ′ = ξ + c2T − 2c′ ≤ ξ − c′ ≤ 0,
f ′(t, 0) = f(t, 0− (c2t− 2c′))− c2 ≤ f(t, 0)− c2 ≤ 0;
and for the barrier
E[ϕ2( sup0≤t≤T
(L′t)
+)] = E[ϕ2( sup0≤t≤T
(Lt + c2t− 2c′)+)] ≤ E[ϕ2( sup0≤t≤T
L+t )] < ∞,
andE[ sup
0≤t≤T((L′t)
+)2] = E[ sup0≤t≤T
((Lt + c2t− 2c′)+)2] ≤ E[ sup0≤t≤T
(L+t )2] < ∞.
We need the following Lemma 3.1.3, which will be proved a few later.
Lemma 3.1.3. Assume that f satisfies the assumption 3.1.5 and (5.6) holds, and that the barrierL satisfies assumption 3.1.4. Moreover, we suppose
ξ ≤ 0, f(t, 0) ≤ 0.
Then there exists (Yt, Zt,Kt)0≤t≤T which is the solution of the RBSDE(ξ, f, L).
So by this lemma there exists a unique (Y ′t , Z ′t, K ′
t)0≤t≤T which is solution of the RBSDE(ξ′, f ′, L′).Then from (3.28), we know that the RBSDE(ξ, f, L) has the unique solution (Yt, Zt,Kt)0≤t≤T . ¤
Proof of Lemma 3.1.3. For $n\in\mathbb N$, set $L^n=L\wedge n$; then $\sup_{0\le t\le T}(L^n_t)^+\le n$. By Step 1, we know that there exists $(Y^n_t,Z^n_t,K^n_t)_{0\le t\le T}$ which satisfies
\[
Y^n_t=\xi+\int_t^T f(s,Y^n_s)\,ds+K^n_T-K^n_t-\int_t^T Z^n_s\,dB_s,
\tag{3.29}
\]
with $Y^n_t\ge L^n_t$, $0\le t\le T$, and
\[
\int_0^T(Y^n_t-L^n_t)\,dK^n_t=0.
\]
For $n\in\mathbb N$, notice that $K^n$ is an increasing process; by the generalized comparison theorem 3.4.1, $Y^n_t\ge\underline Y_t$, $0\le t\le T$, where $(\underline Y_t,\underline Z_t)_{0\le t\le T}\in\mathbf S^2(0,T)\times\mathbf H^2_d(0,T)$ is the solution of the classical BSDE$(\xi,f)$
\[
\underline Y_t=\xi+\int_t^T f(s,\underline Y_s)\,ds-\int_t^T\underline Z_s\,dB_s.
\tag{3.30}
\]
On the other hand, let us consider the RBSDE$(\xi^+,0,L^+)$; by Proposition 2.3 in El Karoui et al. (1997, [28]), the Snell envelope of $L^+_t\mathbf 1_{\{t<T\}}+\xi^+\mathbf 1_{\{t=T\}}$ is the solution of this linear RBSDE, so
\[
\overline Y_t=\operatorname*{ess\,sup}_{\tau\in\mathcal T_{t,T}}E[L^+_\tau\mathbf 1_{\{\tau<T\}}+\xi^+\mathbf 1_{\{\tau=T\}}\,|\,\mathcal F_t]=\operatorname*{ess\,sup}_{\tau\in\mathcal T_{t,T}}E[L^+_\tau\,|\,\mathcal F_t]
=\overline K_T-\overline K_t-\int_t^T\overline Z_s\,dB_s,
\]
in view of $L^+_T=\xi^+=0$. The processes $\overline K$, $\overline Z$ come from the Doob-Meyer decomposition of the Snell envelope and the Itô representation of the martingale part.
Since $\overline Y_t\ge L^+_t\ge0$ and $f$ is decreasing, $f(t,\overline Y_t)\le f(t,0)\le0$, which implies $f^+(t,\overline Y_t)=0$. So $(\overline Y_t,\overline Z_t,\overline K_t)_{0\le t\le T}$ is also the solution of the RBSDE$(\xi^+,f^+,L^+)$. Moreover, notice that the Snell envelope is the smallest supermartingale which dominates the process $L^+$, and it is positive, so by Doob's inequality we have
\[
E\Big[\sup_{0\le t\le T}(\overline Y_t)^2\Big]\le E\Big[\sup_{0\le t\le T}\Big(E\Big[\sup_{0\le s\le T}L^+_s\,\Big|\,\mathcal F_t\Big]\Big)^2\Big]
\le 4E\Big[\sup_{0\le t\le T}(L^+_t)^2\Big],
\]
so $(\overline Y_t)_{0\le t\le T}\in\mathbf S^2(0,T)$, since $(L^+_t)_{0\le t\le T}\in\mathbf S^2(0,T)$.

Notice that $\xi^+\ge\xi$, $f^+(t,y)\ge f(t,y)$ for $(t,y)\in[0,T]\times\mathbb R$, and $L^+_t\ge L_t\ge L^n_t$, $0\le t\le T$, for $n\in\mathbb N$. Then by the comparison theorem 3.4.2, we get $Y^n_t\le\overline Y_t$, $0\le t\le T$, and consequently
\[
E\Big[\sup_{0\le t\le T}(Y^n_t)^2\Big]\le\max\Big\{E\Big[\sup_{0\le t\le T}(\underline Y_t)^2\Big],\ E\Big[\sup_{0\le t\le T}(\overline Y_t)^2\Big]\Big\}\le c.
\tag{3.31}
\]
Since $L^n_t\le L^{n+1}_t$, $0\le t\le T$, thanks to the comparison theorem 3.4.2, $Y^n_t\nearrow Y_t$, $0\le t\le T$. From (3.31) and Fatou's lemma, we get
\[
E\Big[\sup_{0\le t\le T}(Y_t)^2\Big]\le c,
\tag{3.32}
\]
and
\[
E\int_0^T|Y^n_t-Y_t|^2\,dt\to0,\quad\text{as }n\to\infty,
\tag{3.33}
\]
follows from the dominated convergence theorem.

In order to prove the convergence of $(Z^n,K^n)$, we first need a priori estimates. Applying Itô's formula to $|Y^n_t|^2$ with the usual calculations, we get
\[
E|Y^n_t|^2+E\int_t^T|Z^n_s|^2\,ds
\le E|\xi|^2+E\int_t^T|Y^n_s|^2\,ds+E\int_t^T|f(s,0)|^2\,ds+\alpha E\Big[\sup_{0\le t\le T}|Y^n_t|^2\Big]+\frac1\alpha E[(K^n_T-K^n_t)^2],
\tag{3.34}
\]
where $\alpha$ is a positive number. We rewrite the RBSDE$(\xi,f,L^n)$ as
\[
K^n_T-K^n_t=Y^n_t-\xi-\int_t^T f(s,Y^n_s)\,ds+\int_t^T Z^n_s\,dB_s,
\]
hence
\[
E(K^n_T-K^n_t)^2\le 4E|Y^n_t|^2+4E|\xi|^2+4E\Big(\int_t^T f(s,Y^n_s)\,ds\Big)^2+4E\int_t^T|Z^n_s|^2\,ds.
\tag{3.35}
\]
Since
\[
\overline Y_t\ge Y^n_t\ge\underline Y_t,\quad 0\le t\le T,
\tag{3.36}
\]
with the monotonicity of $f(t,y)$ it follows that
\[
f(t,\overline Y_t)\le f(t,Y^n_t)\le f(t,\underline Y_t).
\]
3.1. RBSDE’s on[0, T ] 75
Then from (3.30)
E[(∫ T
0f(t, Yt)dt)2] ≤ 3E |ξ|2 + 3(Y0)2 + 3E
∫ T
t
∣∣∣Zs
∣∣∣2ds ≤ c.
On the other hand, since \overline{Y} is the Snell envelope of L^+, we have \sup_{0\le t\le T}\overline{Y}_t \ge \sup_{0\le t\le T} L^+_t. The process \widetilde{L}_t = E[\sup_{0\le s\le T} L^+_s \mid \mathcal{F}_t] is a martingale which dominates L^+, so \overline{Y}_t \le \widetilde{L}_t. Notice that

    E[\sup_{0\le t\le T}\overline{Y}_t] \le E[\sup_{0\le t\le T}\widetilde{L}_t] = E[\sup_{0\le t\le T} E[\sup_{0\le s\le T} L^+_s \mid \mathcal{F}_t]]
        \le E[E[\sup_{0\le s\le T} L^+_s \mid \mathcal{F}_t]] = E[\sup_{0\le t\le T} L^+_t].

It follows that \sup_{0\le t\le T}\overline{Y}_t = \sup_{0\le t\le T} L^+_t. Then from assumption 3.1.4, it follows that

    E\Big[\Big(\int_0^T f(t,\overline{Y}_t)\,dt\Big)^2\Big] \le E\Big[\int_0^T \big(2f^2(t,0) + 2\varphi^2(\sup_{0\le s\le T} L^+_s)\big)\,dt\Big]
        \le c + 2T\,E[\varphi^2(\sup_{0\le t\le T} L^+_t)] \le c.
So we have

    E\Big[\Big(\int_0^T f(t, Y^n_t)\,dt\Big)^2\Big] \le \max\Big\{E\Big[\Big(\int_0^T f(t,\overline{Y}_t)\,dt\Big)^2\Big],\ E\Big[\Big(\int_0^T f(t,\underline{Y}_t)\,dt\Big)^2\Big]\Big\} \le c,

and from (3.35),

    E(K^n_T - K^n_t)^2 \le c + 4E\int_t^T |Z^n_s|^2\,ds.   (3.37)

Substituting (3.37) into (3.34), setting α = 8, and using (5.6) and (3.31), it follows that

    E\int_0^T |Z^n_s|^2\,ds \le c.   (3.38)

Using (3.37) again, we get

    E[(K^n_T)^2] \le c.   (3.39)
Now for n, p ∈ N with n ≥ p, so that L^n_t ≥ L^p_t, 0 ≤ t ≤ T, we apply the Itô formula to |Y^n_t - Y^p_t|^2 and remember that f satisfies assumption 3.1.5-(v''):

    E[|Y^n_t - Y^p_t|^2] + E\int_t^T |Z^n_s - Z^p_s|^2\,ds
      = 2E\int_t^T [f(s,Y^n_s) - f(s,Y^p_s)](Y^n_s - Y^p_s)\,ds + 2E\int_t^T (Y^n_s - Y^p_s)\,d(K^n_s - K^p_s)
      \le 2E\int_t^T (Y^n_s - L^n_s)\,dK^n_s + 2E\int_t^T (Y^p_s - L^p_s)\,dK^p_s - 2E\int_t^T (Y^n_s - L^n_s)\,dK^p_s
         - 2E\int_t^T (Y^p_s - L^p_s)\,dK^n_s + 2E\int_t^T (L^n_s - L^p_s)\,d(K^n_s - K^p_s)
      \le 2E\int_t^T (L^n_s - L^p_s)\,dK^n_s - 2E\int_t^T (L^n_s - L^p_s)\,dK^p_s
      \le 2E\int_t^T (L^n_s - L^p_s)\,dK^n_s.
76 Chapitre 3. RBSDE under general increasing growth condition
Since L_t - L^n_t ↓ 0 for each t ∈ [0,T], and L_t - L^n_t is continuous, by Dini's theorem the convergence holds uniformly on the interval [0,T] a.s.; hence, by dominated convergence,

    E[\sup_{0\le t\le T}(L_t - L^n_t)^2] \to 0, \quad \text{as } n \to \infty.   (3.40)
Then with (3.39),

    E\int_0^T |Z^n_s - Z^p_s|^2\,ds \le 2E[\sup_{0\le s\le T}(L^n_s - L^p_s)\,K^n_T]
      \le 2\big(E[\sup_{0\le s\le T}(L^n_s - L^p_s)^2]\big)^{1/2}\big(E[(K^n_T)^2]\big)^{1/2}
      \le c\big(E[\sup_{0\le s\le T}(L^n_s - L^p_s)^2]\big)^{1/2} \to 0,

as n, p → ∞; so there exists a process (Z_t)_{0≤t≤T} ∈ H^2_d(0,T) such that, as n → ∞,

    E\int_0^T |Z^n_s - Z_s|^2\,ds \to 0.   (3.41)
Furthermore, by the Itô formula,

    \sup_{0\le t\le T}|Y^n_t - Y^p_t|^2 \le 2\sup_{0\le t\le T}\int_t^T (L^n_s - L^p_s)\,d(K^n_s - K^p_s) + 2\sup_{0\le t\le T}\Big|\int_t^T (Y^n_s - Y^p_s)(Z^n_s - Z^p_s)\,dB_s\Big|.

Taking the expectation on both sides, by the BDG inequality and (3.39), we get

    E\sup_{0\le t\le T}|Y^n_t - Y^p_t|^2
      \le 2\big(E[\sup_{0\le s\le T}(L^n_s - L^p_s)^2]\big)^{1/2}\big(E[(K^n_T)^2]\big)^{1/2} + cE\Big(\sup_{0\le s\le T}|Y^n_s - Y^p_s|^2 \int_0^T |Z^n_s - Z^p_s|^2\,ds\Big)^{1/2}
      \le c\big(E[\sup_{0\le s\le T}(L^n_s - L^p_s)^2]\big)^{1/2} + \frac{1}{2}E\sup_{0\le s\le T}|Y^n_s - Y^p_s|^2 + cE\int_0^T |Z^n_s - Z^p_s|^2\,ds.

Hence, by (3.40) and (3.41), as n, p → ∞,

    E\sup_{0\le t\le T}|Y^n_t - Y^p_t|^2 \to 0,   (3.42)

which implies that there exists a process (Y_t)_{0≤t≤T} ∈ S^2(0,T) such that, as n → ∞,

    E\sup_{0\le t\le T}|Y^n_t - Y_t|^2 \to 0.   (3.43)
Moreover, since f is continuous and decreasing in y, and Y^n_t ↗ Y_t, 0 ≤ t ≤ T,

    f(t, Y^n_t) - f(t, Y_t) \searrow 0, \quad 0 \le t \le T.

By the monotonic limit theorem, we get \int_0^T [f(t,Y^n_t) - f(t,Y_t)]\,dt \searrow 0. From (3.36) and the convergence of Y^n_t, we get \overline{Y}_t \ge Y_t \ge \underline{Y}_t, 0 \le t \le T; with the monotonicity of f, it follows that

    E\Big[\Big(\int_0^T f(t,Y_t)\,dt\Big)^2\Big] \le \max\Big\{E\Big[\Big(\int_0^T f(t,\overline{Y}_t)\,dt\Big)^2\Big],\ E\Big[\Big(\int_0^T f(t,\underline{Y}_t)\,dt\Big)^2\Big]\Big\} \le c.
Since E[(\int_0^T f(t,Y^n_t)\,dt)^2] \le c, we deduce that

    E\Big[\int_0^T (f(t,Y^n_t) - f(t,Y_t))\,dt\Big]^2 \to 0, \quad \text{as } n \to \infty.   (3.44)

Since E[(K^n_T)^2] ≤ c, for each t ∈ [0,T] we have E[(K^n_t)^2] ≤ c. The sequence (K^n_t) has a weak limit K_t in L^2(\mathcal{F}_t), with E[(K_t)^2] \le c. Then for 0 ≤ t ≤ T, (Y_t, Z_t, K_t)_{0≤t≤T} satisfies

    Y_t = \xi + \int_t^T f(s,Y_s)\,ds + K_T - K_t - \int_t^T Z_s\,dB_s.   (3.45)
We need to prove the convergence of K^n in a stronger sense. For this we rewrite (3.29) and (5.37) in the forward form:

    K^n_t = Y^n_0 - Y^n_t - \int_0^t f(s,Y^n_s)\,ds + \int_0^t Z^n_s\,dB_s,
    K_t = Y_0 - Y_t - \int_0^t f(s,Y_s)\,ds + \int_0^t Z_s\,dB_s,

so

    \sup_{0\le t\le T}|K^n_t - K_t|^2 \le 4|Y^n_0 - Y_0|^2 + 4\sup_{0\le t\le T}|Y^n_t - Y_t|^2
      + 4\sup_{0\le t\le T}\Big|\int_0^t (f(s,Y^n_s) - f(s,Y_s))\,ds\Big|^2 + 4\sup_{0\le t\le T}\Big|\int_0^t (Z^n_s - Z_s)\,dB_s\Big|^2.
Taking expectation on both sides, using the BDG inequality, and since f(s,Y^n_s) ≥ f(s,Y_s), it follows that

    E[\sup_{0\le t\le T}|K^n_t - K_t|^2] \le 4|Y^n_0 - Y_0|^2 + 4E[\sup_{0\le t\le T}|Y^n_t - Y_t|^2]
      + 4E\Big(\int_0^T [f(s,Y^n_s) - f(s,Y_s)]\,ds\Big)^2 + cE\int_0^T |Z^n_s - Z_s|^2\,ds.

Then by (3.43), (3.44) and (3.41), we deduce that, as n → ∞,

    E[\sup_{0\le t\le T}|K^n_t - K_t|^2] \to 0.
The last thing to check is that (Y, Z, K) also satisfies (3) and (4) of definition 3.1.1. Since for each n ∈ N and 0 ≤ t ≤ T, Y^n_t ≥ L^n_t a.s., with Y^n_t ↗ Y_t and L^n_t ↗ L_t, we get Y_t ≥ L_t a.s. On the other hand, the processes K^n are increasing, so the limit K is also increasing. Notice that (Y^n_t, K^n_t)_{0≤t≤T} tends to (Y_t, K_t)_{0≤t≤T} uniformly in t in probability, so the measure dK^n converges to dK weakly in probability, and (L^n_t)_{0≤t≤T} converges to (L_t)_{0≤t≤T} in S^2(0,T), as n → ∞; hence

    \int_0^T (Y_t - L_t)\,dK^n_t \to \int_0^T (Y_t - L_t)\,dK_t \quad \text{in probability, as } n \to \infty.
Then we have, as n → ∞,

    E\int_0^T (Y_t - L_t)\,dK_t - E\int_0^T (Y^n_t - L^n_t)\,dK^n_t
      = E\int_0^T (Y_t - Y^n_t)\,dK^n_t + E\int_0^T (Y_t - L_t)\,d(K_t - K^n_t) + E\int_0^T (L^n_t - L_t)\,dK^n_t
      \le \big(E[\sup_{0\le t\le T}(Y^n_t - Y_t)^2]\big)^{1/2}\big(E[(K^n_T)^2]\big)^{1/2} + E\int_0^T (Y_t - L_t)\,d(K_t - K^n_t)
         + \big(E[\sup_{0\le t\le T}(L_t - L^n_t)^2]\big)^{1/2}\big(E[(K^n_T)^2]\big)^{1/2}
      \to 0.

Since E\int_0^T (Y^n_t - L^n_t)\,dK^n_t = 0, we get E\int_0^T (Y_t - L_t)\,dK_t = 0; since Y_t ≥ L_t, we have \int_0^T (Y_t - L_t)\,dK_t \ge 0, so \int_0^T (Y_t - L_t)\,dK_t = 0, i.e. (Y, Z, K) is the solution of the RBSDE(ξ, f, L). ¤
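The Dini-theorem step used in this proof (monotone pointwise convergence of continuous barriers on a compact interval implies uniform convergence) can be illustrated numerically. The barriers L^n below are hypothetical, chosen only to exhibit the phenomenon; this is an illustration, not part of the proof:

```python
import numpy as np

# Hypothetical barriers L^n(t) = t^(1 + 1/n) increase pointwise to L(t) = t
# on the compact interval [0, 1]; by Dini's theorem the convergence is uniform,
# i.e. sup_t (L - L^n) decreases to 0.
t = np.linspace(0.0, 1.0, 10_001)
L = t
sup_gap = [float(np.max(L - t ** (1.0 + 1.0 / n))) for n in (1, 10, 100, 1000)]
# sup_gap is a decreasing sequence tending to 0.
```

For n = 1 the supremum of t - t^2 on [0, 1] is 1/4, and the gap shrinks below 10^-3 by n = 1000.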
Step 3. In this step, we partly relax the assumption (5.6), which was widely used in steps 1 and 2. We now suppose only that

    \xi \ge c \quad \text{and} \quad \inf_{0\le t\le T} f(t,0) \ge c,   (3.46)

where c is a constant. We approximate ξ and f(t,0) by sequences whose elements satisfy the bounds assumed in step 2, as follows: for each n ∈ N, let

    \xi_n = \xi \wedge n, \qquad f_n(t,y) = f(t,y) - f(t,0) + f(t,0)\wedge n.

Obviously, (ξ_n, f_n) satisfies the assumptions of step 2, and since ξ ∈ L^2(\mathcal{F}_T) and f(·,0) ∈ H^2(0,T),

    E[|\xi_n - \xi|^2] \to 0, \qquad E\int_0^T |f(t,0) - f_n(t,0)|^2\,dt \to 0, \quad \text{as } n \to \infty.   (3.47)
From the results of step 2, for each n ∈ N there exists (Y^n_t, Z^n_t, K^n_t)_{0≤t≤T} ∈ S^2(0,T) × H^2_d(0,T) × A^2(0,T), the unique solution of the RBSDE(ξ_n, f_n, L). By the comparison theorem 3.4.2, since for all (s,y) ∈ [0,T] × R and n ∈ N, ξ_n ≤ ξ_{n+1} and f_n(s,y) ≤ f_{n+1}(s,y), we have Y^n_t ≤ Y^{n+1}_t, 0 ≤ t ≤ T, a.s. Hence

    Y^n_t \nearrow Y_t, \quad 0 \le t \le T, \text{ a.s.}   (3.48)
Applying the Itô formula to |Y^n_t - Y^p_t|^2, for n, p ∈ N, n ≥ p, on [t,T], we get

    E|Y^n_t - Y^p_t|^2 + E\int_t^T |Z^n_s - Z^p_s|^2\,ds
      \le E|\xi_n - \xi_p|^2 + E\int_t^T |Y^n_s - Y^p_s|^2\,ds + E\int_t^T |f_n(s,0) - f_p(s,0)|^2\,ds,

since

    \int_t^T (Y^n_s - Y^p_s)\,d(K^n_s - K^p_s)
      = \int_t^T (Y^n_s - L_s)\,dK^n_s + \int_t^T (Y^p_s - L_s)\,dK^p_s - \int_t^T (Y^n_s - L_s)\,dK^p_s - \int_t^T (Y^p_s - L_s)\,dK^n_s \le 0.
Hence from Gronwall's inequality and (3.47), we deduce

    \sup_{0\le t\le T} E|Y^n_t - Y^p_t|^2 \to 0, \qquad E\int_0^T |Z^n_s - Z^p_s|^2\,ds \to 0.   (3.49)

Consequently there exists (Z_t)_{0≤t≤T} ∈ H^2_d(0,T) such that

    E\int_0^T |Z^n_s - Z_s|^2\,ds \to 0.   (3.50)
Using the Itô formula again, taking the supremum and the expectation, in view of the BDG inequality, Y^n_t ≥ Y^p_t, assumption 3.1.5-(v'') and f_n(t,0) ≥ f_p(t,0), we get

    E[\sup_{0\le t\le T}|Y^n_t - Y^p_t|^2]
      \le E|\xi_n - \xi_p|^2 + 2E\Big[\sup_{0\le t\le T}\int_t^T (Y^n_s - Y^p_s)(f_n(s,0) - f_p(s,0))\,ds\Big] + 2E\Big[\sup_{0\le t\le T}\Big|\int_t^T (Y^n_s - Y^p_s)(Z^n_s - Z^p_s)\,dB_s\Big|\Big]
      \le E|\xi_n - \xi_p|^2 + 4T\,E\int_0^T |f_n(s,0) - f_p(s,0)|^2\,ds + \frac{1}{4}E\sup_{0\le s\le T}|Y^n_s - Y^p_s|^2 + \frac{1}{4}E[\sup_{0\le t\le T}|Y^n_t - Y^p_t|^2] + cE\int_0^T |Z^n_s - Z^p_s|^2\,ds.

From (3.47) and (3.49), it follows that E[\sup_{0\le t\le T}|Y^n_t - Y^p_t|^2] \to 0 as n, p → ∞, i.e. (Y^n) is a Cauchy sequence in the space S^2(0,T). Consequently, with (3.48), we have Y ∈ S^2(0,T) and

    E[\sup_{0\le t\le T}|Y^n_t - Y_t|^2] \to 0.   (3.51)
By the comparison theorem 3.4.4, since for all (s,y) ∈ [0,T] × R and n ∈ N, ξ_n ≤ ξ_{n+1} and f_n(s,y) ≤ f_{n+1}(s,y), we have K^n_t ≥ K^{n+1}_t ≥ 0 for 0 ≤ t ≤ T, so

    K^n_t \searrow K_t,   (3.52)

with E[(K^n_t)^2] < ∞; by the monotonic limit theorem, it follows that K^n_t → K_t in L^2(\mathcal{F}_t), with E[(K_t)^2] < ∞, and (K_t)_{0≤t≤T} is increasing.

Notice that, since f(t,y) is decreasing and continuous in y, and Y^n_t ↗ Y_t, we have f(t,Y^n_t) ↘ f(t,Y_t). Then by the monotonic limit theorem, \int_0^t f(s,Y^n_s)\,ds \searrow \int_0^t f(s,Y_s)\,ds. Since (Y^n, Z^n, K^n) is the solution of the RBSDE(ξ_n, f_n, L), it also satisfies

    Y^n_t = Y^n_0 - K^n_t - \int_0^t f(s,Y^n_s)\,ds - \int_0^t (f_n(s,0) - f(s,0))\,ds + \int_0^t Z^n_s\,dB_s,   (3.53)

and with (3.47), (3.48), (3.50) and (3.52), we get that (Y, Z, K) satisfies

    Y_t = Y_0 - K_t - \int_0^t f(s,Y_s)\,ds + \int_0^t Z_s\,dB_s.   (3.54)

So

    Y_t = \xi + \int_t^T f(s,Y_s)\,ds + K_T - K_t - \int_t^T Z_s\,dB_s.   (3.55)
Since (Y^n_t, Z^n_t, K^n_t)_{0≤t≤T} ∈ S^2(0,T) × H^2_d(0,T) × A^2(0,T), we get, for 0 ≤ t ≤ T,

    E\Big(\int_0^t f_n(s,Y^n_s)\,ds\Big)^2 \le 4E(Y^n_t)^2 + 4(Y^n_0)^2 + 4E(K^n_t)^2 + 4E\int_0^t (Z^n_s)^2\,ds < \infty.

With the definition of f_n(s,y), it follows that, for n ∈ N,

    E\Big(\int_0^t f(s,Y^n_s)\,ds\Big)^2 \le 2E\Big(\int_0^t f_n(s,Y^n_s)\,ds\Big)^2 + 2E\Big(\int_0^t (f(s,0) - f_n(s,0))\,ds\Big)^2 < \infty.

Then from (3.54), for 0 ≤ t ≤ T,

    E\Big(\int_0^t f(s,Y_s)\,ds\Big)^2 \le 4E(Y_t)^2 + 4(Y_0)^2 + 4E(K_t)^2 + 4E\int_0^t (Z_s)^2\,ds < \infty.

It follows that \int_0^t f(s,Y^n_s)\,ds \to \int_0^t f(s,Y_s)\,ds in L^2(\mathcal{F}_t), as n → ∞.
Now we need to prove that the convergence of K^n holds in a stronger sense. Let us rewrite the approximation equation for n ∈ N in forward form, as in (3.54), and consider the difference. By the BDG inequality and f(s,Y^n_s) ≥ f(s,Y_s), f(s,0) ≥ f_n(s,0), we deduce

    E\sup_{0\le t\le T}|K^n_t - K_t|^2 \le 5|Y^n_0 - Y_0|^2 + 5E\sup_{0\le t\le T}|Y^n_t - Y_t|^2 + 5E\Big(\int_0^T [f(s,Y^n_s) - f(s,Y_s)]\,ds\Big)^2
      + 5E\Big(\int_0^T [f(s,0) - f_n(s,0)]\,ds\Big)^2 + 5E\int_0^T (Z^n_s - Z_s)^2\,ds.

It follows that E\sup_{0\le t\le T}|K^n_t - K_t|^2 \to 0 as n → ∞, i.e. the convergence holds in S^2(0,T).
It remains to check that (Y_t, Z_t, K_t)_{0≤t≤T} satisfies (3) and (4) of definition 3.1.1. Since Y^n_t ≥ L_t, 0 ≤ t ≤ T, then Y_t ≥ L_t, 0 ≤ t ≤ T, a.s. Furthermore (Y^n, K^n) tends to (Y, K) uniformly in t in probability as n → ∞; then, as at the end of step 2, we get \int_0^T (Y_t - L_t)\,dK_t = 0, i.e. the triple (Y_t, Z_t, K_t)_{0≤t≤T} is the solution of the RBSDE(ξ, f, L) under the assumption (3.46). ¤
Step 4. Now we consider a terminal condition ξ ∈ L^2(\mathcal{F}_T) and a coefficient f which satisfies assumption 3.1.5; set

    \xi_n = \xi \vee (-n), \qquad f_n(t,y) = f(t,y) - f(t,0) + f(t,0)\vee(-n),

for n ∈ N. It is clear that (ξ_n, f_n) satisfies the assumptions of step 3, and

    E[|\xi_n - \xi|^2] \to 0, \qquad E\int_0^T |f(t,0) - f_n(t,0)|^2\,dt \to 0.

By the results of step 3, there exists a triple (Y^n_t, Z^n_t, K^n_t)_{0≤t≤T} ∈ S^2(0,T) × H^2_d(0,T) × A^2(0,T) which is the solution of the RBSDE(ξ_n, f_n, L); thanks to the comparison theorem 3.4.2, as n → ∞, Y^n_t ↘ Y_t, 0 ≤ t ≤ T, a.s. Then, as in step 3, we get that (Y^n_t)_{0≤t≤T} → (Y_t)_{0≤t≤T} in S^2(0,T) and (Z^n_t)_{0≤t≤T} → (Z_t)_{0≤t≤T} in H^2_d(0,T).

Then we need to prove the convergence of (K^n_t)_{0≤t≤T}. Set

    \xi_{n,m} = \xi_n \wedge m = (\xi \vee (-n)) \wedge m,
    f_{n,m}(t,y) = f_n(t,y) - f_n(t,0) + f_n(t,0)\wedge m = f(t,y) - f(t,0) + (f(t,0)\vee(-n))\wedge m,
for m ∈ N; then

    |\xi_{n,m}| + \sup_{0\le t\le T}|f_{n,m}(t,0)| \le c,

and

    \xi_{n,m} \ge \xi_{n+1,m}, \qquad f_{n,m}(t,y) \ge f_{n+1,m}(t,y).

From the comparison theorem 3.4.4, considering the solutions (Y^{n,m}_t, Z^{n,m}_t, K^{n,m}_t) of the RBSDE(ξ_{n,m}, f_{n,m}, L), we get

    K^{n,m}_t \le K^{n+1,m}_t, \quad \text{for } t \in [0,T];

thanks to the convergence results of step 3, we know that as m → ∞, K^{n,m}_t → K^n_t and K^{n+1,m}_t → K^{n+1}_t in L^2(\mathcal{F}_t). So K^n_t ≤ K^{n+1}_t for t ∈ [0,T], with E[(K^n_t)^2] < ∞, and by the monotonic limit theorem it follows that

    K^n_t \nearrow K_t, \quad \text{a.s.}

Notice that, since f(t,y) is decreasing and continuous in y, and Y^n_t ↘ Y_t, we have f(t,Y^n_t) ↗ f(t,Y_t). Then by the monotonic limit theorem, \int_0^t f(s,Y^n_s)\,ds \nearrow \int_0^t f(s,Y_s)\,ds. Notice that now we have

    \underline{Y}_t \le Y_t \le Y^1_t,

where (\underline{Y}, \underline{Z}) is the solution of the BSDE(ξ, f). From the monotonicity of f in y, we get

    f(t, \underline{Y}_t) \ge f(t, Y_t) \ge f(t, Y^1_t),

which implies, for each t ∈ [0,T],

    E\Big[\Big(\int_0^t f(s,Y_s)\,ds\Big)^2\Big] \le \max\Big\{E\Big[\Big(\int_0^t f(s,\underline{Y}_s)\,ds\Big)^2\Big],\ E\Big[\Big(\int_0^t f(s,Y^1_s)\,ds\Big)^2\Big]\Big\} < \infty.
It follows immediately that \int_0^t f(s,Y^n_s)\,ds \to \int_0^t f(s,Y_s)\,ds in L^2(\mathcal{F}_t). Since (Y^n, Z^n, K^n) is the solution of the RBSDE(ξ_n, f_n, L), it also satisfies

    Y^n_t = Y^n_0 - K^n_t - \int_0^t f(s,Y^n_s)\,ds - \int_0^t (f_n(s,0) - f(s,0))\,ds + \int_0^t Z^n_s\,dB_s,

and with the convergences obtained above, letting n → ∞, it follows that (Y, Z, K) satisfies

    Y_t = Y_0 - K_t - \int_0^t f(s,Y_s)\,ds + \int_0^t Z_s\,dB_s.

So

    Y_t = \xi + \int_t^T f(s,Y_s)\,ds + K_T - K_t - \int_t^T Z_s\,dB_s.

Moreover,

    E[(K_T)^2] \le 4E\Big[|Y_T|^2 + |Y_0|^2 + \Big(\int_0^T f(s,Y_s)\,ds\Big)^2 + \int_0^T |Z_s|^2\,ds\Big] < \infty.

By the same method as in step 3, we can prove that K^n → K in the strong sense. Then we deduce that the limit (Y_t, Z_t, K_t)_{0≤t≤T} ∈ S^2(0,T) × H^2_d(0,T) × A^2(0,T) is the solution of the RBSDE(ξ, f, L). Now we have completed the proof of theorem 3.1.2, i.e. the RBSDE(ξ, f, L) under the assumptions 3.1.1, 3.1.2' and 3.1.4 has a unique solution (Y_t, Z_t, K_t)_{0≤t≤T}. ¤
3.1.4 Some a priori estimates
In this subsection, we consider the RBSDE(ξ, f, L) under the monotonicity condition, and give some estimates of the solution (Y, Z, K) with respect to the terminal condition ξ, the coefficient f and the barrier L.

First, we prove an a priori estimate of the norm of (Y, Z, K). Unlike the Lipschitz case, we have in addition the term E[φ²(sup_{0≤t≤T} L^+_t)] and a constant which only depends on φ, μ, k and T. We then estimate the variation of the solution induced by a variation of the data; the uniqueness of the solution can be considered as a corollary of this result.

Suppose that (Y, Z, K) is the solution of the following reflected BSDE:

    Y_t = \xi + \int_t^T f(s,Y_s,Z_s)\,ds + K_T - K_t - \int_t^T Z_s\,dB_s,
    Y_t \ge L_t, \qquad \int_0^T (Y_s - L_s)\,dK_s = 0.
Then we have
Theorem 3.1.4. There exists a constant C, which depends only on T, μ and k, such that

    E\Big(\sup_{0\le t\le T}|Y_t|^2 + \int_0^T |Z_s|^2\,ds + |K_T|^2\Big)
      \le C\,E\Big[\xi^2 + \int_0^T f^2(t,0,0)\,dt + \varphi^2(\sup_{0\le t\le T} L^+_t) + \sup_{0\le t\le T}(L^+_t)^2 + 1 + \varphi^2(2T)\Big].
Proof. Applying Itô's formula to |Y_t|^2 and taking expectation,

    E\Big[|Y_t|^2 + \int_t^T |Z_s|^2\,ds\Big] = E\Big[|\xi|^2 + 2\int_t^T Y_s f(s,Y_s,Z_s)\,ds + 2\int_t^T L_s\,dK_s\Big]
      \le E\Big[|\xi|^2 + 2\int_t^T Y_s f(s,0,0)\,ds + 2\int_t^T (\mu|Y_s|^2 + k|Y_s||Z_s|)\,ds + 2\int_t^T L_s\,dK_s\Big].

It follows that

    E\Big[|Y_t|^2 + \frac{1}{2}\int_t^T |Z_s|^2\,ds\Big] \le E\Big[|\xi|^2 + 2\int_t^T f^2(s,0,0)\,ds + (2\mu + 1 + 2k^2)\int_t^T |Y_s|^2\,ds + 2\int_t^T L_s\,dK_s\Big].

Then by Gronwall's inequality, we have

    E|Y_t|^2 \le C\,E\Big[|\xi|^2 + \int_0^T f^2(s,0,0)\,ds + \int_0^T L_s\,dK_s\Big],   (3.56)

and then

    E\int_0^T |Z_s|^2\,ds \le C\,E\Big[|\xi|^2 + \int_0^T f^2(s,0,0)\,ds + \int_0^T L_s\,dK_s\Big],   (3.57)

where C is a constant which depends only on μ, k and T; in the following this constant may change from line to line.
Now we estimate K by approximation, following steps 3 and 4 of the proof of theorem 3.1.2. By the existence of the solution (theorem 3.1.3), we may take the process Z as a known process. Without loss of generality we write f(t,y) for f(t,y,Z_t); here f(t,0) = f(t,0,Z_t) is a process in H^2(0,T). Set

    \xi_{m,n} = (\xi \vee (-n)) \wedge m,
    f_{m,n}(t,y) = f(t,y) - f(t,0) + (f(t,0)\vee(-n))\wedge m.

For m, n ∈ N, ξ_{m,n} and \sup_{0\le t\le T} f_{m,n}(t,0) are uniformly bounded. Consider the RBSDE(ξ_{m,n}, f_{m,n}, L):

    Y^{m,n}_t = \xi_{m,n} + \int_t^T f_{m,n}(s,Y^{m,n}_s)\,ds + K^{m,n}_T - K^{m,n}_t - \int_t^T Z^{m,n}_s\,dB_s,
    Y^{m,n}_t \ge L_t, \qquad \int_0^T (Y^{m,n}_s - L_s)\,dK^{m,n}_s = 0.

If we recall the transformation in step 2 of the proof of theorem 3.1.2, since ξ_{m,n}, f_{m,n}(t,0) ≤ m, we know that (Y^{m,n}_t, Z^{m,n}_t, K^{m,n}_t) is the solution of this RBSDE if and only if (Y'^{m,n}, Z'^{m,n}, K'^{m,n}) is the solution of the RBSDE(ξ'_{m,n}, f'_{m,n}, L'), where

    Y'^{m,n}_t = Y^{m,n}_t + m(t - 2(T\vee 1)), \qquad Z'^{m,n}_t = Z^{m,n}_t, \qquad K'^{m,n}_t = K^{m,n}_t,

and

    \xi'_{m,n} = \xi_{m,n} + m(T - 2(T\vee 1)),
    f'_{m,n}(t,y) = f_{m,n}(t, y - m(t - 2(T\vee 1))) - m,
    L'_t = L_t + m(t - 2(T\vee 1)).
Without loss of generality, we assume T ≥ 1. Then ξ'_{m,n} ≤ 0 and f'_{m,n}(t,0) ≤ 0. Since (Y'^{m,n}, Z'^{m,n}, K'^{m,n}) is the solution of the RBSDE(ξ'_{m,n}, f'_{m,n}, L'), we have

    K'^{m,n}_T = Y'^{m,n}_0 - \xi'_{m,n} - \int_0^T f'_{m,n}(s,Y'^{m,n}_s)\,ds + \int_0^T Z'^{m,n}_s\,dB_s,

from which it follows that

    E[(K'^{m,n}_T)^2] \le 4E\Big[|Y'^{m,n}_0|^2 + |\xi'_{m,n}|^2 + \Big(\int_0^T f'_{m,n}(s,Y'^{m,n}_s)\,ds\Big)^2 + \int_0^T |Z'^{m,n}_s|^2\,ds\Big].   (3.58)
Applying Itô's formula to |Y^{m,n}|^2, as for (3.56) and (3.57), we have

    E|Y^{m,n}_t|^2 + E\int_t^T |Z^{m,n}_s|^2\,ds \le C\,E\Big[|\xi_{m,n}|^2 + \int_t^T (f_{m,n}(s,0))^2\,ds + \int_t^T L_s\,dK^{m,n}_s\Big].

So

    E\Big[|Y'^{m,n}_0|^2 + \int_0^T |Z'^{m,n}_s|^2\,ds\Big] \le 2|Y^{m,n}_0|^2 + 8m^2T^2 + E\int_0^T |Z^{m,n}_s|^2\,ds
      \le C\,E\Big[|\xi_{m,n}|^2 + \int_0^T (f_{m,n}(s,0))^2\,ds + \int_0^T L_s\,dK^{m,n}_s\Big] + 8m^2T^2.
For the third term on the right-hand side of (3.58), from Lemma 3.1.3 we recall that

    \Big(\int_0^T f'_{m,n}(s,Y'^{m,n}_s)\,ds\Big)^2 \le \max\Big\{\Big(\int_0^T f'_{m,n}(s,\underline{Y}^{m,n}_s)\,ds\Big)^2,\ \Big(\int_0^T f'_{m,n}(s,\overline{Y}^{m,n}_s)\,ds\Big)^2\Big\},   (3.59)

where (\underline{Y}^{m,n}, \underline{Z}^{m,n}) is the solution of the following BSDE:

    \underline{Y}^{m,n}_t = \xi'_{m,n} + \int_t^T f'_{m,n}(s,\underline{Y}^{m,n}_s)\,ds - \int_t^T \underline{Z}^{m,n}_s\,dB_s,   (3.60)

and

    \overline{Y}^{m,n}_t = \operatorname{ess\,sup}_{\tau\in\mathcal{T}_{t,T}} E[(L'_\tau)^+ 1_{\{\tau<T\}} + (\xi'_{m,n})^+ 1_{\{\tau=T\}} \mid \mathcal{F}_t] = \operatorname{ess\,sup}_{\tau\in\mathcal{T}_{t,T}} E[(L'_\tau)^+ \mid \mathcal{F}_t],

with

    \sup_{0\le t\le T}\overline{Y}^{m,n}_t = \sup_{0\le t\le T}(L'_t)^+.

From (3.60),

    E\Big(\int_0^T f'_{m,n}(s,\underline{Y}^{m,n}_s)\,ds\Big)^2 \le 3E\Big[|\xi'_{m,n}|^2 + \int_0^T |\underline{Z}^{m,n}_s|^2\,ds + |\underline{Y}^{m,n}_0|^2\Big];

then by proposition 2.2 in [57], we have

    E\Big(\int_0^T f'_{m,n}(s,\underline{Y}^{m,n}_s)\,ds\Big)^2 \le C\,E\Big[|\xi'_{m,n}|^2 + \Big(\int_0^T f'_{m,n}(s,0)\,ds\Big)^2\Big]
      = C\,E\Big[|\xi_{m,n} - mT|^2 + \Big(\int_0^T (f_{m,n}(s,-m(s-2T)) - m)\,ds\Big)^2\Big]
      \le C\,E\Big[|\xi_{m,n}|^2 + \int_0^T (f_{m,n}(s,0))^2\,ds\Big] + C\varphi^2(2mT) + Cm^2.
Here C is a constant depending only on μ, k and T; in the following it may change from line to line. For the other term in (3.59),
    E\Big(\int_0^T f'_{m,n}(s,\overline{Y}^{m,n}_s)\,ds\Big)^2
      \le E\Big[\int_0^T 2(f'_{m,n}(s,0))^2\,ds + 2T\varphi^2\big(\sup_{0\le t\le T}(L'_t)^+\big)\Big]
      = E\Big[2\int_0^T (f_{m,n}(s,-m(s-2T)) - m)^2\,ds + 2T\varphi^2\big(\sup_{0\le t\le T}(L'_t)^+\big)\Big]
      \le E\Big[4\int_0^T (f_{m,n}(s,0))^2\,ds + 2T\varphi^2\big(\sup_{0\le t\le T}(L_t)^+\big)\Big] + 2m^2T + 4T\varphi^2(2mT).

Consequently, we deduce that

    E[(K^{m,n}_T)^2] = E[(K'^{m,n}_T)^2]
      \le C\,E\Big[|\xi_{m,n}|^2 + \int_0^T (f_{m,n}(s,0))^2\,ds + \int_0^T L_s\,dK^{m,n}_s + \varphi^2\big(\sup_{0\le t\le T}(L_t)^+\big) + m^2 + \varphi^2(2mT)\Big]
      \le C\,E\Big[|\xi|^2 + \int_0^T (f(s,0,Z_s))^2\,ds + \varphi^2\big(\sup_{0\le t\le T}(L_t)^+\big) + \sup_{0\le t\le T}((L_t)^+)^2\Big] + \frac{1}{2}E[(K^{m,n}_T)^2] + C(m^2 + \varphi^2(2mT)),
and with (3.57) and the fact that f is Lipschitz in z, it follows that

    E[(K^{m,n}_T)^2] \le C\,E\Big[|\xi|^2 + \int_0^T (f(s,0,0))^2\,ds + \varphi^2\big(\sup_{0\le t\le T}(L_t)^+\big) + \sup_{0\le t\le T}((L_t)^+)^2 + \int_0^T L_s\,dK_s\Big] + C(m^2 + \varphi^2(2mT)).
Letting m → ∞, we have

    E[|\xi_{m,n} - \xi_n|^2] \to 0, \qquad E\int_0^T |f_{m,n}(t,0) - f_n(t,0)|^2\,dt \to 0,

where

    \xi_n = \xi \vee (-n), \qquad f_n(t,y) = f(t,y) - f(t,0) + f(t,0)\vee(-n).

Thanks to the convergence result of step 3, we know that (Y^{m,n}, Z^{m,n}, K^{m,n}) → (Y^n, Z^n, K^n) in S^2(0,T) × H^2_d(0,T) × A^2(0,T), where (Y^n, Z^n, K^n) is the solution of the RBSDE(ξ_n, f_n, L). Moreover K^{m,n}_T ↘ K^n_T in L^2(\mathcal{F}_T), so we have K^n_T ≤ K^{1,n}_T, which implies that for each n ∈ N,

    E[(K^n_T)^2] \le E[(K^{1,n}_T)^2] \le C\,E\Big[|\xi|^2 + \int_0^T (f(s,0,0))^2\,ds + \varphi^2\big(\sup_{0\le t\le T}(L_t)^+\big) + \sup_{0\le t\le T}((L_t)^+)^2 + \int_0^T L_s\,dK_s\Big] + C(1 + \varphi^2(2T)).   (3.61)
Then, letting n → ∞, by step 4, since

    E[|\xi_n - \xi|^2] \to 0, \qquad E\int_0^T |f_n(t,0) - f(t,0)|^2\,dt \to 0,

the sequence (Y^n, Z^n, K^n) → (Y, Z, K) in S^2(0,T) × H^2_d(0,T) × A^2(0,T), where (Y, Z, K) is the solution of the RBSDE(ξ, f, L). From (3.61),

    E[(K_T)^2] \le C\,E\Big[|\xi|^2 + \int_0^T (f(s,0,0))^2\,ds + \varphi^2\big(\sup_{0\le t\le T}(L_t)^+\big) + \sup_{0\le t\le T}((L_t)^+)^2 + \int_0^T L_s\,dK_s\Big] + C(1 + \varphi^2(2T))
      \le C\,E\Big[|\xi|^2 + \int_0^T (f(s,0,0))^2\,ds + \varphi^2\big(\sup_{0\le t\le T}(L_t)^+\big) + \sup_{0\le t\le T}((L_t)^+)^2\Big] + \frac{1}{2}E[(K_T)^2] + C(1 + \varphi^2(2T)).

Then it follows that, for each t ∈ [0,T],

    E\Big[|Y_t|^2 + \int_0^T |Z_s|^2\,ds + (K_T)^2\Big] \le C\,E\Big[|\xi|^2 + \int_0^T (f(s,0,0))^2\,ds + \varphi^2\big(\sup_{0\le t\le T}(L_t)^+\big) + \sup_{0\le t\le T}((L_t)^+)^2\Big] + C(1 + \varphi^2(2T)).

Finally we get the result by applying the BDG inequality. ¤

Now we estimate the variation of the solution induced by a variation of the data.
Proposition 3.1.1. Consider the RBSDE(ξ¹, f¹, L¹) and the RBSDE(ξ², f², L²), and let (Y¹, Z¹, K¹) and (Y², Z², K²) be their respective solutions. Define

    \Delta\xi = \xi^1 - \xi^2, \quad \Delta f = f^1 - f^2, \quad \Delta L = L^1 - L^2,
    \Delta Y = Y^1 - Y^2, \quad \Delta Z = Z^1 - Z^2, \quad \Delta K = K^1 - K^2.

Then there exists a constant C, which depends only on μ, k and T, such that

    E\Big(\sup_{0\le t\le T}|\Delta Y_t|^2 + \int_0^T |\Delta Z_s|^2\,ds\Big) \le C\,E\Big[|\Delta\xi|^2 + \int_0^T |\Delta f(t,Y^1_t,Z^1_t)|^2\,dt\Big] + C\big(E[\sup_{0\le t\le T}(\Delta L_t)^2]\big)^{1/2}\,\Psi_T^{1/2},

where

    \Psi_T = E\sum_{i=1,2}\Big[|\xi^i|^2 + \int_0^T (f^i(s,0,0))^2\,ds + \varphi^2\big(\sup_{0\le t\le T}(L^i_t)^+\big) + \sup_{0\le t\le T}((L^i_t)^+)^2\Big] + 1 + \varphi^2(2T).
Proof. Since

    \Delta Y_t = \Delta\xi + \int_t^T (f^1(s,Y^1_s,Z^1_s) - f^2(s,Y^2_s,Z^2_s))\,ds + \Delta K_T - \Delta K_t - \int_t^T \Delta Z_s\,dB_s,

applying Itô's formula, we get

    E\Big[|\Delta Y_t|^2 + \int_t^T |\Delta Z_s|^2\,ds\Big]
      = E|\Delta\xi|^2 + 2E\int_t^T (f^1(s,Y^1_s,Z^1_s) - f^2(s,Y^2_s,Z^2_s))\Delta Y_s\,ds + 2E\int_t^T \Delta Y_s\,d\Delta K_s
      \le E|\Delta\xi|^2 + E\int_t^T |\Delta f(s,Y^1_s,Z^1_s)|^2\,ds + 2(\mu + k^2 + 1)E\int_t^T |\Delta Y_s|^2\,ds + \frac{1}{2}E\int_t^T |\Delta Z_s|^2\,ds + 2E\int_t^T \Delta Y_s\,d\Delta K_s.

Notice that

    \int_t^T \Delta Y_s\,d\Delta K_s = \int_t^T (Y^1_s - Y^2_s)\,d(K^1_s - K^2_s) \le \int_t^T |\Delta L_s|\,d(K^1_s + K^2_s).

So from Gronwall's inequality, we get, for each t ∈ [0,T],

    E\Big[|\Delta Y_t|^2 + \int_t^T |\Delta Z_s|^2\,ds\Big]
      \le C\,E\Big[|\Delta\xi|^2 + \int_0^T |\Delta f(t,Y^1_t,Z^1_t)|^2\,dt + \int_0^T |\Delta L_s|\,d(K^1_s + K^2_s)\Big]
      \le C\,E\Big[|\Delta\xi|^2 + \int_0^T |\Delta f(t,Y^1_t,Z^1_t)|^2\,dt\Big] + C\big(E[\sup_{0\le t\le T}|\Delta L_t|^2]\big)^{1/2}\big(E[(K^1_T + K^2_T)^2]\big)^{1/2}.

With the previous theorem, the result follows. ¤
3.1.5 Properties of the solution of the RBSDEs
The first two results are the analogues, for our setting, of Proposition 2.3 and Proposition 4.2 in El Karoui et al. (1997, [28]) for the Lipschitz case.
Expression of the solution Y
The square-integrable solution (Y_t)_{0≤t≤T} of the RBSDE(ξ, f, L) corresponds to the value of an optimal stopping problem.
Proposition 3.1.2. Let (Y_t, Z_t, K_t), 0 ≤ t ≤ T, be a solution of the RBSDE(ξ, f, L). Then for each t ∈ [0,T],

    Y_t = \operatorname{ess\,sup}_{\tau\in\mathcal{T}_t} E\Big[\int_t^\tau f(s,Y_s,Z_s)\,ds + L_\tau 1_{\{\tau<T\}} + \xi 1_{\{\tau=T\}} \,\Big|\, \mathcal{F}_t\Big],

where \mathcal{T} is the set of all stopping times dominated by T, and \mathcal{T}_t = \{\tau \in \mathcal{T};\ t \le \tau \le T\}.

Proof. Let τ ∈ \mathcal{T}_t; taking the conditional expectation in (2) of definition 3.1.1 between times t and τ, we get

    Y_t = E\Big[\int_t^\tau f(s,Y_s,Z_s)\,ds + Y_\tau + K_\tau - K_t \,\Big|\, \mathcal{F}_t\Big] \ge E\Big[\int_t^\tau f(s,Y_s,Z_s)\,ds + L_\tau 1_{\{\tau<T\}} + \xi 1_{\{\tau=T\}} \,\Big|\, \mathcal{F}_t\Big].

Then we exhibit an optimal stopping time in \mathcal{T}_t in order to get the reverse inequality. Set

    D_t = \inf\{u \ge t;\ Y_u = L_u\} \wedge T.

By the integral condition (4), \int_0^T (Y_s - L_s)\,dK_s = 0, and the continuity of K, it follows that K_{D_t} - K_t = 0. We deduce that

    Y_t = E\Big[\int_t^{D_t} f(s,Y_s,Z_s)\,ds + L_{D_t} 1_{\{D_t<T\}} + \xi 1_{\{D_t=T\}} \,\Big|\, \mathcal{F}_t\Big],

and the result follows. ¤
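In discrete time, the representation above reduces to the classical backward induction for the Snell envelope. A minimal numerical sketch (not from the text), assuming a recombining binomial model, a zero generator f ≡ 0, and hypothetical parameters u, d, p:

```python
import numpy as np

def snell_value(payoff, n_steps, u=1.1, d=0.9, p=0.5, s0=100.0):
    """Backward induction Y_i = max(L_i, E[Y_{i+1} | F_i]) on a binomial tree,
    i.e. the Snell envelope of the obstacle L_i = payoff(S_i), generator f = 0."""
    j = np.arange(n_steps + 1)                 # number of up-moves
    y = payoff(s0 * u**j * d**(n_steps - j))   # terminal condition xi
    for i in range(n_steps - 1, -1, -1):
        j = np.arange(i + 1)
        s = s0 * u**j * d**(i - j)
        cont = p * y[1:] + (1 - p) * y[:-1]    # continuation value E[Y_{i+1} | F_i]
        y = np.maximum(payoff(s), cont)        # reflection on the obstacle
    return float(y[0])
```

The value dominates the obstacle at every node, and stopping the first time Y hits the obstacle is optimal, mirroring the stopping time D_t of the proof.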
When the barrier is an Itô process

Now we suppose that the barrier L is an Itô process which satisfies assumption 3.1.3, of the form

    L_t = L_0 + \int_0^t l_s\,ds + \int_0^t \sigma_s\,dB_s,   (3.62)

for 0 ≤ t ≤ T, where l and σ are respectively R- and R^d-valued, \mathcal{F}_t-progressively measurable, and satisfy

    E\int_0^T (|l_s| + |\sigma_s|^2)\,ds < \infty.   (3.63)

Then we have the following proposition, which gives an explicit expression of the increasing process K of the solution of the RBSDE(ξ, f, L).
Proposition 3.1.3. Assume that assumptions 3.1.1 and 3.1.2 are satisfied, and moreover that the barrier L is a semimartingale of the form (3.62) satisfying (3.63). Let (Y_t, Z_t, K_t) be a solution of the RBSDE(ξ, f, L). Then Z_t = σ_t, dP × dt-a.e. on the set {Y_t = L_t}, and

    0 \le dK_t \le 1_{\{Y_t = L_t\}}[f(t,L_t,\sigma_t) + l_t]^-\,dt.

Proof. From (2) of definition 3.1.1 and (3.62),

    d(Y_t - L_t) = -(f(t,Y_t,Z_t) + l_t)\,dt - dK_t + (Z_t - \sigma_t)\,dB_t.

If we denote by V_t, 0 ≤ t ≤ T, the local time at 0 of the continuous semimartingale Y_t - L_t, it follows from the Itô–Tanaka formula that

    d(Y_t - L_t)^+ = -1_{\{Y_t > L_t\}}(f(t,Y_t,Z_t) + l_t)\,dt + 1_{\{Y_t > L_t\}}(Z_t - \sigma_t)\,dB_t + \frac{1}{2}dV_t.

Since from (3), Y_t ≥ L_t, we get (Y_t - L_t)^+ = Y_t - L_t. Hence the two differentials coincide, and so do the martingale parts and the bounded variation parts. Consequently,

    1_{\{Y_t = L_t\}}(Z_t - \sigma_t)\,dB_t = 0,

so the first statement follows. Moreover,

    dK_t + \frac{1}{2}dV_t = -1_{\{Y_t = L_t\}}(f(t,Y_t,Z_t) + l_t)\,dt = -1_{\{Y_t = L_t\}}(f(t,L_t,\sigma_t) + l_t)\,dt,

hence

    dK_t + \frac{1}{2}dV_t = 1_{\{Y_t = L_t\}}(f(t,L_t,\sigma_t) + l_t)^-\,dt.   (3.64)

The second result then follows from the fact that K is an increasing process. Note that we have also proved that the local time at 0 is absolutely continuous. ¤
Remark 3.1.2. From this proposition, we know that there exists a predictable process α, with 0 ≤ α_t ≤ 1 a.s., 0 ≤ t ≤ T, such that

    dK_t = \alpha_t 1_{\{Y_t = L_t\}}[f(t,L_t,\sigma_t) + l_t]^-\,dt.   (3.65)

Remark 3.1.3. This proposition can easily be generalized to an obstacle L_t which is a semimartingale satisfying assumption 3.1.3 and

    L_t = L_0 + \int_0^t l_s\,ds + \int_0^t \sigma_s\,dB_s + V_t,

where V is a continuous process of integrable variation such that the measure dV_t is singular with respect to dt and admits a decomposition V_t = V^+_t - V^-_t, where V^+ and V^- are both increasing processes satisfying V^+_T + V^-_T < ∞ a.s. Then we still have Z_t = σ_t, dP × dt-a.e. on the set {Y_t = L_t}, and there exists a predictable process α, with 0 ≤ α_t ≤ 1 a.s., 0 ≤ t ≤ T, such that

    dK_t = \alpha_t 1_{\{Y_t = L_t\}}\big[(f(t,L_t,\sigma_t) + l_t)^-\,dt + dV^-_t\big].   (3.66)
The RBSDE with finite stopping time
We prove the following property for a finite stopping time τ. This result will be used in section 3.3.

Proposition 3.1.4. Let (Y_t, Z_t, K_t)_{0≤t≤T} be the solution of the RBSDE(ξ, f, L) with terminal time T. Assume that for some stopping time τ with 0 ≤ τ ≤ T:
(a) ξ is \mathcal{F}_τ-measurable;
(b) f(t,y,z) = 0 on the interval (τ, T];
(c) L_t = L_{t∧τ}, for 0 ≤ t ≤ T.
Then on the interval (τ, T], Y_t = Y_τ, Z_t = 0, K_t = K_τ.

Proof. From proposition 3.1.2, we have

    Y_\tau = \operatorname{ess\,sup}_{v\in\mathcal{T}_\tau} E\Big[\int_\tau^v f(s,Y_s,Z_s)\,ds + \xi 1_{\{v=T\}} + L_v 1_{\{v<T\}} \,\Big|\, \mathcal{F}_\tau\Big]
           = \operatorname{ess\,sup}_{v\in\mathcal{T}_\tau} E[\xi 1_{\{v=T\}} + L_\tau 1_{\{v<T\}} \mid \mathcal{F}_\tau] \le \xi.

On the other hand,

    Y_\tau = \xi + K_T - K_\tau - \int_\tau^T Z_s\,dB_s,

hence Y_\tau = \xi + E[K_T - K_\tau \mid \mathcal{F}_\tau] \ge \xi. So Y_τ = ξ and E[K_T - K_τ | \mathcal{F}_τ] = 0. Since K is an increasing process, it follows that K_t = K_τ for t ∈ (τ, T]. Applying the Itô formula to |Y_t|^2 on the interval (τ, T],

    |Y_\tau|^2 + \int_\tau^T |Z_s|^2\,ds = |\xi|^2 + 2\int_\tau^T Y_s\,dK_s - 2\int_\tau^T Y_s Z_s\,dB_s = |\xi|^2 - 2\int_\tau^T Y_s Z_s\,dB_s,

hence

    |Y_\tau|^2 + E\Big[\int_\tau^T |Z_s|^2\,ds \,\Big|\, \mathcal{F}_\tau\Big] = |\xi|^2.

Consequently \int_\tau^T |Z_s|^2\,ds = 0 a.s., i.e. Z_t = 0 on the interval (τ, T]. ¤
3.2 Applications to Finance
We follow the ideas of El Karoui et al. (1997, [30]). In some constrained cases we consider the wealth–portfolio strategy (X_t, π_t) as a pair of adapted processes in H^2(0,T) × H^2_d(0,T) which satisfies the following BSDE:

    -dX_t = b(t,X_t,\pi_t)\,dt - \pi_t^*\sigma_t\,dB_t,

where b is R-valued, convex with respect to (x, π), and satisfies (H2). We suppose that the volatility matrix σ of the n risky assets is invertible and that (σ_t)^{-1} is bounded. Without loss of generality, we take σ_t = I_d.

We are concerned with the problem of pricing an American contingent claim at each time t, which consists of the selection of a stopping time τ ∈ \mathcal{T}_t (the set of stopping times valued in [t,T]) and a payoff S_τ on exercise if τ < T, and ξ if τ = T. Here (S_t) satisfies the condition (H3). We set

    \tilde{S}_s = \xi 1_{\{s=T\}} + S_s 1_{\{s<T\}}.
Fix t ∈ [0,T] and τ ∈ \mathcal{T}_t; then there exists a unique strategy (X_s(\tau,\tilde{S}_\tau), \pi_s(\tau,\tilde{S}_\tau)) ∈ H^2(0,T) × H^2_d(0,T), still denoted by (X^\tau_s, \pi^\tau_s), which replicates \tilde{S}_\tau, i.e. is the solution of the classical BSDE associated with terminal time τ, terminal condition \tilde{S}_\tau and generator b:

    -dX^\tau_s = b(s,X^\tau_s,\pi^\tau_s)\,ds - (\pi^\tau_s)^*\,dB_s, \quad 0 \le s \le \tau, \qquad X^\tau_\tau = \tilde{S}_\tau.   (3.67)

Then the price of the American contingent claim (\tilde{S}_s, 0 ≤ s ≤ T) at time t is given by

    X_t = \operatorname{ess\,sup}_{\tau\in\mathcal{T}_t} X_t(\tau,\tilde{S}_\tau).

Applying the previous results on RBSDEs, it follows that the price (X_t, 0 ≤ t ≤ T) corresponds to the unique solution of the RBSDE associated with terminal condition ξ, generator b and obstacle S, i.e. there exist (π_t) ∈ H^2_d(0,T) and an increasing adapted continuous process (K_t) with K_0 = 0 such that

    -dX_t = b(t,X_t,\pi_t)\,dt + dK_t - \pi_t^*\,dB_t, \qquad X_T = \xi,   (3.68)
    X_t \ge S_t, \quad 0 \le t \le T, \qquad \int_0^T (X_t - S_t)\,dK_t = 0.

Furthermore, the stopping time D_t = \inf\{t \le s \le T \mid X_s = S_s\} \wedge T is optimal, that is,

    X_t = X_t(D_t, \tilde{S}_{D_t}).
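As a numerical illustration of (3.68), here is a discrete-time sketch: a reflected backward scheme on a Cox–Ross–Rubinstein tree. The linear generator (plain risk-neutral discounting) and all parameter values are illustrative assumptions, not taken from the text; the reflection increment dK is charged only where the price sits on the obstacle, mimicking the condition (X - S) dK = 0:

```python
import numpy as np
from math import comb

def american_price_rbsde(payoff, r=0.05, sigma=0.2, s0=100.0, T=1.0, n=200):
    """Reflected backward scheme X_i = max(payoff(S_i), discounted E_Q[X_{i+1}]);
    dK_i = X_i - continuation >= 0 is nonzero only on the obstacle (Skorokhod)."""
    dt = T / n
    u, d = np.exp(sigma * np.sqrt(dt)), np.exp(-sigma * np.sqrt(dt))
    q = (np.exp(r * dt) - d) / (u - d)        # risk-neutral up probability
    disc = np.exp(-r * dt)
    j = np.arange(n + 1)
    x = payoff(s0 * u**j * d**(n - j))        # terminal condition xi
    e_kT = 0.0                                # accumulates E[K_T]
    for i in range(n - 1, -1, -1):
        j = np.arange(i + 1)
        s = s0 * u**j * d**(i - j)
        cont = disc * (q * x[1:] + (1 - q) * x[:-1])
        x = np.maximum(payoff(s), cont)
        dk = x - cont                         # > 0 only where x == payoff(s)
        w = np.array([comb(i, k) for k in j]) * q**j * (1 - q)**(i - j)
        e_kT += float(w @ dk)                 # E[dK_i] under the tree measure
    return float(x[0]), e_kT
```

For an American put the reflection term E[K_T] is positive (the early-exercise premium), while for an American call on a non-dividend asset the obstacle never binds and K ≡ 0, so the price coincides with the European one.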
3.3 RBSDE’s with random terminal time
For the RBSDE with random terminal time, we shall need the following norms, indexed by a real number p. Given a stopping time τ, for every progressively measurable process (X_t)_{t≥0} taking values in R, we define

    \|X\|^{p,\tau}_\infty = \Big(E\big[\sup_{0\le t\le \tau} e^{pt} X_t^2\big]\Big)^{1/2}.

For every progressively measurable process (X_t)_{t≥0} taking values in R^n, n ∈ N, we define

    \|X\|^{p,\tau}_2 = \Big(E\Big[\int_0^\tau e^{pt} |X_t|^2\,dt\Big]\Big)^{1/2}.
Now we are given four objects τ, f, L and ξ, which satisfy:

Assumption 3.3.1. The final time τ is a stopping time.

Assumption 3.3.2. The coefficient f : Ω × R_+ × R × R^d → R satisfies, for some continuous increasing function φ : R_+ → R_+ and real numbers μ, λ, k, C' with k, C' > 0 and 2μ + k² < λ: for all t ∈ R_+, y, y' ∈ R, z, z' ∈ R^d,
(i) f(·, y, z) is progressively measurable;
(ii) |f(t,y,z)| ≤ |f(t,0,0)| + φ(|y|) + C'|z|, a.s.;
(iii) E\int_0^\tau e^{\lambda t}|f(t,0,0)|^2\,dt < \infty;
(iv) |f(t,y,z) - f(t,y,z')| ≤ k|z - z'|, a.s.;
(v) (y - y')(f(t,y,z) - f(t,y',z)) ≤ μ(y - y')², a.s.;
(vi) y ↦ f(t,y,z) is continuous, a.s.

Assumption 3.3.3. The barrier (L_t)_{t≥0} is a continuous, progressively measurable, real-valued process defined on R_+, satisfying L_t = L_{t∧τ}, \|L^+\|^{0,\tau}_\infty < ∞, and

    E\big[\varphi^2\big(\sup_{t\ge 0}(e^{\lambda t}L^+_t)\big)\big] < \infty.

Assumption 3.3.4. The final condition ξ is an \mathcal{F}_τ-measurable, R-valued random variable such that:
(i) ξ = M_τ 1_{\{\tau<\infty\}}, where (M_t)_{t≥0} is a continuous Itô process such that L_t ≤ M_t, a.s., t ≥ 0;
(ii) \|M\|^{\lambda,\tau}_\infty < ∞, and M satisfies

    dM_t = m_t 1_{[0,\tau]}(t)\,dt + \sigma^M_t 1_{[0,\tau]}(t)\,dB_t,

where (m_t) and (σ^M_t) are progressively measurable and such that \|m\|^{\lambda,\tau}_2 + \|\sigma^M\|^{\lambda,\tau}_2 < ∞;
(iii) E(e^{\lambda\tau}|\xi|^2) < ∞ and E\int_0^\tau e^{\lambda t}|f(t,M_t,\sigma^M_t)|^2\,dt < ∞.
We now introduce the definition of the solution of the RBSDE with random terminal time.

Definition 3.3.1. A solution of the RBSDE(ξ, f, L) with random terminal time τ is a triple (Y_t, Z_t, K_t)_{t≥0} of progressively measurable processes taking values in R, R^d and R, respectively, such that:
(1') \|Y\|^{\lambda,\tau}_\infty < ∞, \|Z\|^{\lambda,\tau}_2 < ∞, and for any 0 ≤ T < ∞, \|K\|^{\min(\lambda,0),T\wedge\tau}_\infty < ∞.
(2') For all t, T such that 0 ≤ t ≤ T,

    Y_t = Y_T + \int_{t\wedge\tau}^{T\wedge\tau} f(s,Y_s,Z_s)\,ds + K_{T\wedge\tau} - K_{t\wedge\tau} - \int_{t\wedge\tau}^{T\wedge\tau} Z_s\,dB_s.

(3') Y_t = ξ, Z_t = 0, K_t = K_τ on the set {t ≥ τ}.
(4') (Y_t)_{t≥0} is a continuous process such that Y_t ≥ L_t a.s. for all t ≥ 0, and (K_t) is an increasing continuous process with K_0 = 0.
(5') \int_0^\tau (Y_s - L_s)\,dK_s = 0, a.s.

Remark 3.3.1. If τ < ∞ a.s., we are solving the RBSDE

    Y_t = \xi + \int_{t\wedge\tau}^\tau f(s,Y_s,Z_s)\,ds + K_\tau - K_{t\wedge\tau} - \int_{t\wedge\tau}^\tau Z_s\,dB_s, \quad t \ge 0,

such that Y_t ≥ L_t for 0 ≤ t ≤ τ, and \int_0^\tau (Y_t - L_t)\,dK_t = 0. If τ = ∞ a.s., then the set {t ≥ τ} = ∅ and (3') is meaningless; instead of it, we require

    \lim_{T\to\infty} Y_T = \xi.   (3.69)

In this chapter we require λ > 0 when τ ≡ ∞ a.s. In this case, by assumption 3.3.4-(iii) we have ξ = 0, and since the solution (Y_t)_{t≥0} satisfies \|Y\|^{\lambda,\tau}_\infty < ∞, we get directly \lim_{T\to\infty} Y_T = 0 = ξ, i.e. (3.69) is satisfied.

Remark 3.3.2. If we choose λ ≤ 0 when τ ≡ ∞ a.s., then in view of assumption 3.3.4-(iii) the spaces of ξ and f are enlarged. By the existence and uniqueness theorem for the RBSDE with random terminal time, which will be proved as theorem 3.3.1, there exists a unique (Y, Z, K) satisfying (1'), (2'), (4') and (5') of definition 3.3.1. But the relation between the terminal condition ξ and the solution (Y_t)_{t≥0} is then unclear: for different values of ξ, (Y_t)_{t≥0} may be the same process. Even \lim_{T\to\infty} Y_T = \xi does not necessarily hold, as the following example shows.
Example 3.3.1. Assume τ = +∞, f(t,y,z) = -y, L ≡ 0, M ≡ 0, ξ = c ≥ 0, where c is a non-random constant. It is easy to see that (Y_t, Z_t, K_t)_{t≥0} ≡ (0, 0, 0) satisfies (1'), (2'), (4') and (5') of definition 3.3.1. For f, the parameters of assumption 3.3.2 are φ(y) = y, k = C' = 0, μ = -1, so if we choose a negative number λ with -2 < λ < 0, then λ > 2μ + k², and every constant c ∈ R_+ satisfies assumption 3.3.4-(iii). If we take ξ = c > 0, then \lim_{T\to\infty} Y_T = 0 < c = ξ.
Our main result is the following.
Theorem 3.3.1. Under assumptions 3.3.1–3.3.4, there exists a unique solution (Y_t, Z_t, K_t)_{t≥0} of the RBSDE(τ, ξ, f, L).
Proof. Uniqueness. Let (Y_t, Z_t, K_t)_{t≥0} and (Y'_t, Z'_t, K'_t)_{t≥0} be two solutions of the RBSDE(τ, ξ, f, L). Denote ΔY = Y - Y', ΔZ = Z - Z', ΔK = K - K'. Obviously, ΔY_t = 0, ΔZ_t = 0, ΔK_t = 0 on the set {t ≥ τ}. Applying the Itô formula to e^{λt}|ΔY_t|² on the interval [t∧τ, T∧τ], with assumption 3.3.2, it follows that

    e^{\lambda(t\wedge\tau)}|\Delta Y_t|^2 + \int_{t\wedge\tau}^{T\wedge\tau} e^{\lambda s}(\lambda|\Delta Y_s|^2 + |\Delta Z_s|^2)\,ds   (3.70)
      = e^{\lambda(T\wedge\tau)}|\Delta Y_T|^2 + 2\int_{t\wedge\tau}^{T\wedge\tau} e^{\lambda s}(f(s,Y_s,Z_s) - f(s,Y'_s,Z'_s))\Delta Y_s\,ds + 2\int_{t\wedge\tau}^{T\wedge\tau} e^{\lambda s}\Delta Y_s\,d\Delta K_s - 2\int_{t\wedge\tau}^{T\wedge\tau} e^{\lambda s}\Delta Y_s\Delta Z_s\,dB_s
      \le e^{\lambda(T\wedge\tau)}|\Delta Y_T|^2 + \int_{t\wedge\tau}^{T\wedge\tau} e^{\lambda s}\big((2\mu + k^2)|\Delta Y_s|^2 + |\Delta Z_s|^2\big)\,ds - 2\int_{t\wedge\tau}^{T\wedge\tau} e^{\lambda s}\Delta Y_s\Delta Z_s\,dB_s,

since

    \int_{t\wedge\tau}^{T\wedge\tau} e^{\lambda s}\Delta Y_s\,d\Delta K_s = \int_{t\wedge\tau}^{T\wedge\tau} e^{\lambda s}(Y_s - L_s)\,dK_s + \int_{t\wedge\tau}^{T\wedge\tau} e^{\lambda s}(Y'_s - L_s)\,dK'_s - \int_{t\wedge\tau}^{T\wedge\tau} e^{\lambda s}(Y'_s - L_s)\,dK_s - \int_{t\wedge\tau}^{T\wedge\tau} e^{\lambda s}(Y_s - L_s)\,dK'_s \le 0.

Since λ > 2μ + k², we deduce that for t < T,

    E(e^{\lambda(t\wedge\tau)}|\Delta Y_t|^2) \le E(e^{\lambda(T\wedge\tau)}|\Delta Y_T|^2).

The same result holds with λ replaced by λ', where 2μ + k² < λ' < λ, hence

    E(e^{\lambda'(t\wedge\tau)}|\Delta Y_t|^2) \le e^{(\lambda'-\lambda)T} E(e^{\lambda(T\wedge\tau)}|\Delta Y_T|^2 1_{\{T<\tau\}}).

From definition 3.3.1-(1'), \|Y\|^{\lambda,\tau}_\infty < ∞, which implies that the second factor on the right-hand side remains bounded as T → ∞, while the first factor tends to 0 as T → ∞. So ΔY_t = 0 for t ∈ R_+. Coming back to (3.70), from the first equality with ΔY = 0 we get

    E\int_0^{T\wedge\tau} e^{\lambda s}|\Delta Z_s|^2\,ds \le E[e^{\lambda(T\wedge\tau)}|\Delta Y_T|^2],

and since ΔZ_t = 0 on the set {t ≥ τ}, it follows that ΔZ_t = 0 for t ∈ R_+; then ΔK_t = 0 follows from (2'). Uniqueness is proved.
Existence. We prove the existence in three steps.
– First, we construct a sequence of processes $(Y^n_t, Z^n_t, K^n_t)_{t\geq 0}$ by considering RBSDEs with deterministic terminal time $n$, and establish estimates.
– Then we prove that $(Y^n_t, Z^n_t)_{t\geq 0}$ is a Cauchy sequence under the norms $\|\cdot\|^{\lambda,\tau}_\infty$ and $\|\cdot\|^{\lambda,\tau}_2$.
– Finally, we use the limit $(Y_t, Z_t)_{t\geq 0}$ of $(Y^n_t, Z^n_t)_{t\geq 0}$ to construct the increasing process $(K_t)_{t\geq 0}$, and prove that the triple is the solution of the RBSDE$(\tau, \xi, f, L)$.
In contrast with the proofs of Theorem 4.1 in Pardoux (1999, [57]) and Theorems 2.2 and 2.3 in Talay and Zheng (2002, [72]), we cannot prove uniform estimates for $(Y^n_t, Z^n_t, K^n_t)_{t\geq 0}$, but we can still prove that $(Y^n_t, Z^n_t)_{t\geq 0}$ is a Cauchy sequence.
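The finite-horizon building blocks used in this construction can be imitated numerically by dynamic programming on a random-walk tree, projecting the value onto the barrier at each backward step. The sketch below is purely illustrative and not part of the thesis; the function name, the constant barrier, and the explicit Euler treatment of the driver are our own choices.

```python
import math

def reflected_bsde_binomial(xi, f, barrier, T=1.0, N=100):
    """Backward dynamic programming for a reflected BSDE with horizon T.

    One backward step: y = max(barrier, E[y_next] + f(t, E[y_next], z) * dt),
    where z approximates the martingale increment of Y against the walk.
    xi is the terminal payoff as a function of the terminal walk value.
    """
    dt = T / N
    sq = math.sqrt(dt)
    # terminal layer: node j at level N carries walk value (2*j - N)*sqrt(dt)
    y = [xi((2 * j - N) * sq) for j in range(N + 1)]
    for k in range(N - 1, -1, -1):
        y = [max(barrier,
                 0.5 * (y[j] + y[j + 1])                      # E[y_{k+1}]
                 + f(k * dt, 0.5 * (y[j] + y[j + 1]),
                     (y[j + 1] - y[j]) / (2.0 * sq)) * dt)    # driver term
             for j in range(k + 1)]
    return y[0]
```

With $f(t, y, z) = -y$, an inactive barrier and $\xi \equiv 1$, the scheme recovers the linear-BSDE value $e^{-T}$; raising the barrier forces $Y_0$ up to it, mimicking the reflection.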
Step 1. We first construct a sequence of processes $(Y^n_t, Z^n_t, K^n_t)_{t\geq 0}$ by considering RBSDEs with deterministic terminal time $n$.

For each $n \in \mathbb{N}$, by the existence and uniqueness theorem of Section 3.2, under Assumptions 3.3.2 and 3.3.3, there exists a unique solution $(\tilde Y^n_t, \tilde Z^n_t, \tilde K^n_t)_{0\leq t\leq n}$ to the RBSDE$(M_n, 1_{[0,\tau]}f, L)$ on $[0, n]$, in the following sense: $(\tilde Y^n_t)_{0\leq t\leq n}$ is a continuous process such that $\tilde Y^n_t \geq L_t$ for $0 \leq t \leq n$,
\[
\tilde Y^n_t = M_n + \int_t^n 1_{[0,\tau]}(s) f(s, \tilde Y^n_s, \tilde Z^n_s)\,ds + \tilde K^n_n - \tilde K^n_t - \int_t^n \tilde Z^n_s\,dB_s,
\]
for $n \geq T \geq 0$, $\int_0^T (\tilde Y^n_s - L_s)\,d\tilde K^n_s = 0$, and we have the following estimate on the solutions:
\[
E\Big[\sup_{0\leq t\leq n} |\tilde Y^n_t|^2 + \int_0^n |\tilde Z^n_s|^2\,ds + |\tilde K^n_n|^2\Big] \leq c(n) < \infty.
\]
In the following, $c(n)$ is a constant which depends on $n$ and may vary from line to line. We first remark that, due to Assumption 3.3.4-(ii), $M_t = M_{\tau\wedge t}$ for all $t \in \mathbb{R}_+$. Since $n\wedge\tau$ is a finite stopping time and $n\wedge\tau \leq n$, by Proposition 3.1.4, on the set $\{n\wedge\tau \leq t \leq n\}$ we have $\tilde Y^n_t = \tilde Y^n_{n\wedge\tau}$, $\tilde Z^n_t = 0$ and $\tilde K^n_t = \tilde K^n_{n\wedge\tau}$. Applying Itô's formula to $e^{\lambda t}|\tilde Y^n_t|^2$, we get
\[
e^{\lambda (t\wedge\tau)}|\tilde Y^n_{t\wedge\tau}|^2 + \int_{t\wedge\tau}^{n\wedge\tau} e^{\lambda s}\big(\lambda|\tilde Y^n_s|^2 + |\tilde Z^n_s|^2\big)\,ds
= e^{\lambda (n\wedge\tau)}(M_{n\wedge\tau})^2 + 2\int_{t\wedge\tau}^{n\wedge\tau} e^{\lambda s}\tilde Y^n_s f(s, \tilde Y^n_s, \tilde Z^n_s)\,ds + 2\int_{t\wedge\tau}^{n\wedge\tau} e^{\lambda s}\tilde Y^n_s\,d\tilde K^n_s - 2\int_{t\wedge\tau}^{n\wedge\tau} e^{\lambda s}\tilde Y^n_s \tilde Z^n_s\,dB_s
\]
\[
\leq e^{\lambda (n\wedge\tau)}(M_{n\wedge\tau})^2 + 2\int_{t\wedge\tau}^{n\wedge\tau} e^{\lambda s}\big(\mu|\tilde Y^n_s|^2 + k|\tilde Y^n_s||\tilde Z^n_s| + |\tilde Y^n_s||f(s, 0, 0)|\big)\,ds + 2\int_{t\wedge\tau}^{n\wedge\tau} e^{\lambda s}L^+_s\,d\tilde K^n_s - 2\int_{t\wedge\tau}^{n\wedge\tau} e^{\lambda s}\tilde Y^n_s \tilde Z^n_s\,dB_s
\]
\[
\leq e^{\lambda (n\wedge\tau)}(M_{n\wedge\tau})^2 + \Big(2\mu + \frac{k^2}{\rho} + \frac{1}{\alpha}\Big)\int_{t\wedge\tau}^{n\wedge\tau} e^{\lambda s}|\tilde Y^n_s|^2\,ds + \rho\int_{t\wedge\tau}^{n\wedge\tau} e^{\lambda s}|\tilde Z^n_s|^2\,ds + \alpha\int_{t\wedge\tau}^{n\wedge\tau} e^{\lambda s}|f(s, 0, 0)|^2\,ds + \sup_{0\leq t\leq n\wedge\tau} e^{\lambda t}L^+_t\,\big(\tilde K^n_{n\wedge\tau} - \tilde K^n_{t\wedge\tau}\big) - 2\int_{t\wedge\tau}^{n\wedge\tau} e^{\lambda s}\tilde Y^n_s \tilde Z^n_s\,dB_s.
\]
Taking expectation on both sides, by the Schwarz inequality, and setting $\bar\lambda = \lambda - 2\mu - \frac{k^2}{\rho} - \frac{1}{\alpha} > 0$, $\bar\rho = 1 - \rho > 0$, it follows that
\[
E\big[e^{\lambda (t\wedge\tau)}|\tilde Y^n_{t\wedge\tau}|^2\big] + E\int_{t\wedge\tau}^{n\wedge\tau} e^{\lambda s}\big(\bar\lambda|\tilde Y^n_s|^2 + \bar\rho|\tilde Z^n_s|^2\big)\,ds \leq E\big[e^{\lambda (n\wedge\tau)}(M_{n\wedge\tau})^2\big] + \alpha E\int_{t\wedge\tau}^{n\wedge\tau} e^{\lambda s}|f(s, 0, 0)|^2\,ds + e^{2\lambda n}E\big[\sup_{0\leq t\leq n\wedge\tau}(L^+_t)^2\big] + E\big[(\tilde K^n_{n\wedge\tau})^2\big] \leq c(n).
\]
Then, by the BDG inequality, we have
\[
E\Big[\sup_{0\leq t\leq n\wedge\tau} e^{\lambda t}|\tilde Y^n_t|^2 + \int_0^{n\wedge\tau} e^{\lambda s}\big(|\tilde Y^n_s|^2 + |\tilde Z^n_s|^2\big)\,ds\Big] \leq c(n) < \infty. \tag{3.71}
\]
With these preliminaries, we define a process $(Y^n_t, Z^n_t, K^n_t)_{t\geq 0}$ by
\[
(Y^n_t, Z^n_t, K^n_t) =
\begin{cases}
(\tilde Y^n_t, \tilde Z^n_t, \tilde K^n_t), & t \leq n\wedge\tau,\\[2pt]
(M_t,\ \sigma^M_t 1_{[0,\tau]}(t),\ \tilde K^n_{n\wedge\tau}), & t > n\wedge\tau.
\end{cases}
\]
By the estimate (3.71) and Assumption 3.3.4-(ii), it follows that
\[
E\Big[\sup_{0\leq t\leq\tau} e^{\lambda t}|Y^n_t|^2 + \int_0^\tau e^{\lambda s}\big(|Y^n_s|^2 + |Z^n_s|^2\big)\,ds + (K^n_\tau)^2\Big] \leq c(n),
\]
and on the set $\{t \geq \tau\}$, $Y^n_t = M_\tau$, $Z^n_t = 0$, $K^n_t = K^n_\tau$.

Step 2. We now prove that $(Y^n_t, Z^n_t)_{t\geq 0}$ is a Cauchy sequence under $\|\cdot\|^{\lambda,\tau}_\infty$ and $\|\cdot\|^{\lambda,\tau}_2$. For $n > p$, $n, p \in \mathbb{N}$, set $\overline Y_t = Y^n_t - Y^p_t$, $\overline Z_t = Z^n_t - Z^p_t$, $\overline K_t = K^n_t - K^p_t$.

For $t > n > p$, by the definition we know that $Y^n_t = Y^p_t = M_t$ and $Z^n_t = Z^p_t = \sigma^M_t 1_{[0,\tau]}(t)$, so $\overline Y_t = 0$, $\overline Z_t = 0$.

For $n \geq t \geq p$, we have that on the interval $(n\wedge\tau, n]$, $Y^n_t = Y^p_t = M_t$, $Z^n_t = Z^p_t = 0$, so $\overline Y_t = 0$, $\overline Z_t = 0$; and on the interval $[p, n\wedge\tau]$,
\[
Y^p_t = M_t = M_{\tau\wedge t} = M_n - \int_{t\wedge\tau}^{n\wedge\tau} m_s\,ds - \int_{t\wedge\tau}^{n\wedge\tau} \sigma^M_s 1_{[0,\tau]}(s)\,dB_s, \qquad Z^p_t = \sigma^M_t 1_{[0,\tau]}(t),
\]
and
\[
Y^n_t = M_n + \int_{t\wedge\tau}^{n\wedge\tau} f(s, Y^n_s, Z^n_s)\,ds + K^n_n - K^n_t - \int_{t\wedge\tau}^{n\wedge\tau} Z^n_s\,dB_s.
\]
It follows from Itô's formula that
\[
e^{\lambda (t\wedge\tau)}|\overline Y_{t\wedge\tau}|^2 + \int_{t\wedge\tau}^{n\wedge\tau} e^{\lambda s}\big(\lambda|\overline Y_s|^2 + |\overline Z_s|^2\big)\,ds
= 2\int_{t\wedge\tau}^{n\wedge\tau} e^{\lambda s}\overline Y_s\big(f(s, Y^n_s, Z^n_s) + m_s\big)\,ds + 2\int_{t\wedge\tau}^{n\wedge\tau} e^{\lambda s}\overline Y_s\,dA^n_s - 2\int_{t\wedge\tau}^{n\wedge\tau} e^{\lambda s}\overline Y_s\overline Z_s\,dB_s
\]
\[
= 2\int_{t\wedge\tau}^{n\wedge\tau} e^{\lambda s}\overline Y_s\big(f(s, Y^n_s, Z^n_s) - f(s, Y^p_s, Z^p_s) + f(s, M_s, \sigma^M_s) + m_s\big)\,ds + 2\int_{t\wedge\tau}^{n\wedge\tau} e^{\lambda s}(Y^n_s - L_s)\,dA^n_s - 2\int_{t\wedge\tau}^{n\wedge\tau} e^{\lambda s}(M_s - L_s)\,dA^n_s - 2\int_{t\wedge\tau}^{n\wedge\tau} e^{\lambda s}\overline Y_s\overline Z_s\,dB_s
\]
\[
\leq \int_{t\wedge\tau}^{n\wedge\tau} e^{\lambda s}\Big(\Big(2\mu + \frac{k^2}{\rho} + \frac{1}{\alpha}\Big)|\overline Y_s|^2 + \rho|\overline Z_s|^2 + 2\alpha|f(s, M_s, \sigma^M_s)|^2 + 2\alpha|m_s|^2\Big)\,ds - 2\int_{t\wedge\tau}^{n\wedge\tau} e^{\lambda s}\overline Y_s\overline Z_s\,dB_s.
\]
Then, taking expectation on both sides and choosing numbers $\rho$ and $\alpha$ such that $\lambda - 2\mu - \frac{k^2}{\rho} - \frac{1}{\alpha} > 0$ and $1 - \rho > 0$, it follows that
\[
E\Big[e^{\lambda (t\wedge\tau)}|\overline Y_{t\wedge\tau}|^2 + \int_{t\wedge\tau}^{n\wedge\tau} e^{\lambda s}\big(|\overline Y_s|^2 + |\overline Z_s|^2\big)\,ds\Big] \leq cE\int_{p\wedge\tau}^{\tau} e^{\lambda s}\big(|f(s, M_s, \sigma^M_s)|^2 + |m_s|^2\big)\,ds.
\]
With the BDG inequality and $\overline Y_t = 0$, $\overline Z_t = 0$ on the interval $(n\wedge\tau, n]$, we get
\[
E\Big[\sup_{p\leq t\leq n} e^{\lambda t}|\overline Y_t|^2 + \int_p^n e^{\lambda s}\big(|\overline Y_s|^2 + |\overline Z_s|^2\big)\,ds\Big] \leq cE\int_{p\wedge\tau}^{\tau} e^{\lambda s}\big(|f(s, M_s, \sigma^M_s)|^2 + |m_s|^2\big)\,ds. \tag{3.72}
\]
Next, for $t \leq p$: on the interval $(p\wedge\tau, p]$, $Y^n_t = Y^p_t = M_t$ and $Z^n_t = Z^p_t = 0$, so $\overline Y_t = 0$, $\overline Z_t = 0$; and on the interval $[0, p\wedge\tau]$,
\[
\overline Y_t = \overline Y_p + \int_{t\wedge\tau}^{p\wedge\tau}\big(f(s, Y^n_s, Z^n_s) - f(s, Y^p_s, Z^p_s)\big)\,ds + \overline K_p - \overline K_t - \int_{t\wedge\tau}^{p\wedge\tau}\overline Z_s\,dB_s.
\]
Applying Itô's formula to $e^{\lambda t}|\overline Y_t|^2$, we obtain
\[
e^{\lambda (t\wedge\tau)}|\overline Y_{t\wedge\tau}|^2 + \int_{t\wedge\tau}^{p\wedge\tau} e^{\lambda s}\big(\lambda|\overline Y_s|^2 + |\overline Z_s|^2\big)\,ds
= e^{\lambda (p\wedge\tau)}|\overline Y_p|^2 + 2\int_{t\wedge\tau}^{p\wedge\tau} e^{\lambda s}\overline Y_s\big(f(s, Y^n_s, Z^n_s) - f(s, Y^p_s, Z^p_s)\big)\,ds + 2\int_{t\wedge\tau}^{p\wedge\tau} e^{\lambda s}\overline Y_s\,d\overline K_s - 2\int_{t\wedge\tau}^{p\wedge\tau} e^{\lambda s}\overline Y_s\overline Z_s\,dB_s
\]
\[
\leq e^{\lambda (p\wedge\tau)}|\overline Y_p|^2 + \int_{t\wedge\tau}^{p\wedge\tau} e^{\lambda s}\Big(\Big(2\mu + \frac{k^2}{\rho} + \frac{1}{\alpha}\Big)|\overline Y_s|^2 + \rho|\overline Z_s|^2\Big)\,ds - 2\int_{t\wedge\tau}^{p\wedge\tau} e^{\lambda s}\overline Y_s\overline Z_s\,dB_s,
\]
since
\[
\int_{t\wedge\tau}^{p\wedge\tau} e^{\lambda s}\overline Y_s\,d\overline K_s = \int_{t\wedge\tau}^{p\wedge\tau} e^{\lambda s}(Y^n_s - L_s)\,dK^n_s + \int_{t\wedge\tau}^{p\wedge\tau} e^{\lambda s}(Y^p_s - L_s)\,dK^p_s - \int_{t\wedge\tau}^{p\wedge\tau} e^{\lambda s}(Y^p_s - L_s)\,dK^n_s - \int_{t\wedge\tau}^{p\wedge\tau} e^{\lambda s}(Y^n_s - L_s)\,dK^p_s \leq 0.
\]
Taking the expectation, we get
\[
E\Big[e^{\lambda (t\wedge\tau)}|\overline Y_{t\wedge\tau}|^2 + \int_{t\wedge\tau}^{p\wedge\tau} e^{\lambda s}\big(\bar\lambda|\overline Y_s|^2 + \bar\rho|\overline Z_s|^2\big)\,ds\Big] \leq E\big[e^{\lambda p}|\overline Y_p|^2\big],
\]
where $\bar\lambda = \lambda - 2\mu - \frac{k^2}{\rho} - \frac{1}{\alpha} > 0$, $\bar\rho = 1 - \rho > 0$. Then, with the BDG inequality, $\overline Y_t = 0$, $\overline Z_t = 0$ on the interval $(p\wedge\tau, p]$, and (3.72), we deduce that
\[
E\Big[\sup_{0\leq t\leq p} e^{\lambda t}|\overline Y_t|^2 + \int_0^p e^{\lambda s}\big(|\overline Y_s|^2 + |\overline Z_s|^2\big)\,ds\Big] \leq cE\big[e^{\lambda p}|\overline Y_p|^2\big] \leq cE\int_{p\wedge\tau}^{\tau} e^{\lambda s}\big(|f(s, M_s, \sigma^M_s)|^2 + |m_s|^2\big)\,ds.
\]
So, recalling (3.72) and $\overline Y_t = 0$, $\overline Z_t = 0$ on the set $\{t \geq \tau\}$, we deduce that
\[
E\Big[\sup_{0\leq t\leq\tau} e^{\lambda t}|\overline Y_t|^2 + \int_0^\tau e^{\lambda s}\big(|\overline Y_s|^2 + |\overline Z_s|^2\big)\,ds\Big] \leq cE\int_{p\wedge\tau}^{\tau} e^{\lambda s}\big(|f(s, M_s, \sigma^M_s)|^2 + |m_s|^2\big)\,ds.
\]
Due to Assumptions 3.3.4-(ii) and (iii), the right-hand side tends to $0$ as $p$ tends to infinity. So the sequence $(Y^n_t, Z^n_t)_{t\geq 0}$ is a Cauchy sequence under this norm, i.e. there exists a pair $(Y_t, Z_t)_{t\geq 0}$ of progressively measurable processes such that
\[
E\Big[\sup_{0\leq t\leq\tau} e^{\lambda t}|Y_t|^2 + \int_0^\tau e^{\lambda s}\big(|Y_s|^2 + |Z_s|^2\big)\,ds\Big] < +\infty,
\]
and $(Y^n_t, Z^n_t)_{t\geq 0} \to (Y_t, Z_t)_{t\geq 0}$ under this norm. Moreover, since on the set $\{t \geq \tau\}$ we have $Y^n_t = M_\tau = \xi$ and $Z^n_t = 0$ for every $n \in \mathbb{N}$, it follows that $Y_t = \xi$ and $Z_t = 0$ on the set $\{t \geq \tau\}$.

Step 3. Now we use this pair of processes to construct the continuous increasing process $K$, and prove that the triple $(Y_t, Z_t, K_t)_{t\geq 0}$ is the solution of the RBSDE$(\tau, \xi, f, L)$. For any $T > 0$, by the result for the RBSDE with a deterministic terminal time, under Assumptions 3.3.2 and 3.3.3 there exists a unique solution $(\tilde Y^T_t, \tilde Z^T_t, \tilde K^T_t)_{0\leq t\leq T} \in \mathbf{S}^2(0, T)\times\mathbf{H}^2_d(0, T)\times\mathbf{A}^2(0, T)$ of the RBSDE$(T, Y_T, 1_{[0,\tau]}f, L)$, i.e.
\[
\tilde Y^T_t = Y_T + \int_t^T 1_{[0,\tau]}(s) f(s, \tilde Y^T_s, \tilde Z^T_s)\,ds + \tilde K^T_T - \tilde K^T_t - \int_t^T \tilde Z^T_s\,dB_s,
\]
with $\tilde Y^T_t \geq L_t$, $0 \leq t \leq T$, and $\int_0^T (\tilde Y^T_t - L_t)\,d\tilde K^T_t = 0$.

For $t \leq T$, with the same calculation as in the proof of uniqueness,
\[
E\big[e^{\lambda (t\wedge\tau)}(\tilde Y^T_t - Y_t)^2\big] = \lim_{n\to\infty} E\big[e^{\lambda (t\wedge\tau)}(\tilde Y^T_t - Y^n_t)^2\big] \leq \lim_{n\to\infty} E\big[e^{\lambda (T\wedge\tau)}(Y_T - Y^n_T)^2\big] = 0.
\]
So $\tilde Y^T_t = Y_t$ for $t \leq T$. Similarly,
\[
E\Big[\int_0^T e^{\lambda s}|\tilde Z^T_s - Z_s|^2\,ds\Big] = \lim_{n\to\infty} E\Big[\int_0^T e^{\lambda s}|\tilde Z^T_s - Z^n_s|^2\,ds\Big] \leq \lim_{n\to\infty} cE\big[e^{\lambda (T\wedge\tau)}(Y_T - Y^n_T)^2\big] = 0,
\]
hence $\tilde Z^T_t = Z_t$. If we consider two different terminal times $T_1, T_2$, then by the uniqueness of the solution of the RBSDE with deterministic terminal time, for $t \leq T_1\wedge T_2$ we have $\tilde Y^{T_1}_t = Y_t = \tilde Y^{T_2}_t$ and $\tilde Z^{T_1}_t = Z_t = \tilde Z^{T_2}_t$, i.e. the processes $(\tilde Y^{T_1}_t, \tilde Z^{T_1}_t)_{0\leq t\leq T_1}$ and $(\tilde Y^{T_2}_t, \tilde Z^{T_2}_t)_{0\leq t\leq T_2}$ coincide on the interval $[0, T_1\wedge T_2]$; then so do the processes $(\tilde K^{T_1}_t)_{0\leq t\leq T_1}$ and $(\tilde K^{T_2}_t)_{0\leq t\leq T_2}$, i.e. $\tilde K^{T_1}_t = \tilde K^{T_2}_t$ for $t \leq T_1\wedge T_2$. So there is no ambiguity if we define $(K_t)_{t\geq 0}$ as follows: for $0 \leq t < \infty$, $K_t = \tilde K^t_t$. Then $(K_t)_{t\geq 0}$ is a continuous increasing process which satisfies $E[K^2_t] < +\infty$ for $0 \leq t < \infty$, and $K_t = \tilde K^T_t$ for $t \leq T$. So $(Y_t, Z_t, K_t)_{t\geq 0}$ satisfies
\[
Y_t = Y_T + \int_t^T 1_{[0,\tau]}(s) f(s, Y_s, Z_s)\,ds + K_T - K_t - \int_t^T Z_s\,dB_s. \tag{3.73}
\]
By Proposition 3.1.4, noticing that $T\wedge\tau$ is a finite stopping time, for $t \in (T\wedge\tau, T]$ we have $Y_t = Y_\tau$, $Z_t = 0$ and $K_t = K_\tau$. So (3.73) can be rewritten as in Definition 3.3.1-(2'): for $0 \leq t \leq T$,
\[
Y_t = Y_T + \int_{t\wedge\tau}^{T\wedge\tau} f(s, Y_s, Z_s)\,ds + K_{T\wedge\tau} - K_{t\wedge\tau} - \int_{t\wedge\tau}^{T\wedge\tau} Z_s\,dB_s.
\]
In addition, we know that $Y_t \geq L_t$ for $0 \leq t \leq T$ and $\int_0^{T\wedge\tau}(Y_s - L_s)\,dK_s = 0$, i.e. $(Y_t, Z_t, K_t)_{t\geq 0}$ is the solution of the RBSDE. □

If the barrier $(L_t)_{t\geq 0}$ is an Itô process, then we have a better estimate of $(K_t)_{t\geq 0}$, and $(K_t)_{t\geq 0}$ can be expressed explicitly.
Proposition 3.3.1. Let us assume that the barrier $(L_t)_{t\geq 0}$ is an Itô process, i.e.
\[
L_t = L_0 + \int_0^t l_s 1_{[0,\tau]}(s)\,ds + \int_0^t \sigma^L_s 1_{[0,\tau]}(s)\,dB_s,
\]
and $E\int_0^\tau \big(|f(t, L_t, \sigma^L_t)|^2 + |l_t|^2\big)\,dt < \infty$. Then $\|K\|^{\min(\lambda,0),\tau}_2 < \infty$, and there exists a progressively measurable process $(\alpha_t)_{t\geq 0}$ with $0 \leq \alpha_t \leq 1$ such that
\[
dK_t = \alpha_t 1_{\{Y_t = L_t\}}\big[f(t, L_t, \sigma^L_t) + l_t\big]^-\,dt.
\]

Proof. As in Step 3 of the proof of Theorem 3.3.1, for $T > 0$ we consider the RBSDE$(T, Y_T, 1_{[0,\tau]}f, L)$; since $L$ is an Itô process, by Proposition 3.1.3 there exists a progressively measurable process $(\alpha^T_t)_{t\geq 0}$ with $0 \leq \alpha^T_t \leq 1$ such that
\[
d\tilde K^T_t = \alpha^T_t 1_{\{\tilde Y^T_t = L_t\}}\big[f(t, L_t, \sigma^L_t) + l_t\big]^-\,dt.
\]
Since, for different $T_1, T_2 > 0$, the solutions $(\tilde Y^{T_1}_t, \tilde Z^{T_1}_t, \tilde K^{T_1}_t)_{0\leq t\leq T_1}$ and $(\tilde Y^{T_2}_t, \tilde Z^{T_2}_t, \tilde K^{T_2}_t)_{0\leq t\leq T_2}$ coincide on the interval $[0, T_1\wedge T_2]$, the processes $\alpha^{T_1}$ and $\alpha^{T_2}$ are equal on $[0, T_1\wedge T_2]$. Define $\alpha_t = \alpha^t_t$ for $t > 0$; then $0 \leq \alpha_t \leq 1$, and with $K_t = \tilde K^T_t$ as in Step 3 of the proof of Theorem 3.3.1, we have
\[
dK_t = \alpha_t 1_{\{Y_t = L_t\}}\big[f(t, L_t, \sigma^L_t) + l_t\big]^-\,dt.
\]
Obviously $K$ is increasing; then we get the estimate for $K$:
\[
\|K\|^{0,\tau}_2 = E[K^2_\tau] = E\Big[\Big(\int_0^\tau \alpha_t 1_{\{Y_t = L_t\}}\big[f(t, L_t, \sigma^L_t) + l_t\big]^-\,dt\Big)^2\Big] \leq cE\int_0^\tau \big(|f(t, L_t, \sigma^L_t)|^2 + |l_t|^2\big)\,dt < \infty. \qquad\Box
\]
3.4 Appendix : Comparison theorems

In this section, we present several comparison theorems, which are used in the proof of Theorem 3.1.2. The first one is a generalized version of the comparison theorem of Pardoux (1999, [57]) in the one-dimensional case.
Theorem 3.4.1 (A generalized case for BSDEs). Suppose that $f^1(s, y, z)$, $f^2(s, y, z)$ satisfy Assumption 3.1.2, $\xi^1, \xi^2 \in L^2(\mathcal{F}_T)$, and $K^1$ and $K^2$ are two increasing processes with $E[(K^i_T)^2] \leq c$, $i = 1, 2$. Suppose there exists a pair $(Y^1_t, Z^1_t)_{0\leq t\leq T}$ (resp. $(Y^2_t, Z^2_t)_{0\leq t\leq T}$) satisfying the equation
\[
Y^1_t = \xi^1 + \int_t^T f^1(s, Y^1_s, Z^1_s)\,ds + K^1_T - K^1_t - \int_t^T Z^1_s\,dB_s \tag{3.74}
\]
\[
\Big(\text{resp. } Y^2_t = \xi^2 + \int_t^T f^2(s, Y^2_s, Z^2_s)\,ds + K^2_T - K^2_t - \int_t^T Z^2_s\,dB_s\Big).
\]
Moreover, if for any $0 \leq t \leq T$,
\[
f^1(t, Y^1_t, Z^1_t) \leq f^2(t, Y^1_t, Z^1_t), \quad\text{and}\quad \xi^1 \leq \xi^2, \tag{3.75}
\]
and if $K^2 - K^1$ is an increasing process, then $Y^1_t \leq Y^2_t$, $0 \leq t \leq T$, a.s.
Proof. Define
\[
\alpha_t =
\begin{cases}
\dfrac{f^2(t, Y^2_t, Z^2_t) - f^2(t, Y^1_t, Z^2_t)}{Y^2_t - Y^1_t} & \text{if } Y^2_t \neq Y^1_t,\\[8pt]
0 & \text{if } Y^2_t = Y^1_t,
\end{cases}
\qquad
\beta^i_t =
\begin{cases}
\dfrac{f^2(t, Y^1_t, \tilde Z^{i-1}_t) - f^2(t, Y^1_t, \tilde Z^i_t)}{Z^{2,i}_t - Z^{1,i}_t} & \text{if } Z^{2,i}_t \neq Z^{1,i}_t,\\[8pt]
0 & \text{if } Z^{2,i}_t = Z^{1,i}_t.
\end{cases}
\]
Here $\tilde Z^i$ is the vector whose first $i$ components are equal to those of $Z^1$ and whose $n - i$ last components are equal to those of $Z^2$, that is $\tilde Z^i_t = (Z^{1,1}_t, \dots, Z^{1,i}_t, Z^{2,i+1}_t, \dots, Z^{2,n}_t)$. Obviously $\alpha_t$ and $\beta_t$ are progressively measurable, and by Assumption 3.1.2-(v) and (iv), $\alpha_t \leq \mu$, $|\beta_t| \leq k$.

For $0 \leq s \leq t \leq T$, set $\Gamma_{s,t} = \exp\big[\int_s^t (\alpha_r - \frac12|\beta_r|^2)\,dr + \int_s^t \beta_r\,dB_r\big]$. Consider the difference of the two solutions of the BSDEs: $\Delta Y_t = Y^2_t - Y^1_t$, $\Delta Z_t = Z^2_t - Z^1_t$, $\Delta\xi = \xi^2 - \xi^1$, $U_t = f^2(t, Y^1_t, Z^1_t) - f^1(t, Y^1_t, Z^1_t)$, $\Delta K_t = K^2_t - K^1_t$; we know that $\Delta\xi \geq 0$, $U_t \geq 0$, $d\Delta K_t \geq 0$.

Then $(\Delta Y_t, \Delta Z_t)$ solves the equation
\[
\Delta Y_t = \Delta\xi + \int_t^T (\alpha_s\Delta Y_s + \beta_s\Delta Z_s)\,ds + \int_t^T U_s\,ds + \Delta K_T - \Delta K_t - \int_t^T \Delta Z_s\,dB_s.
\]
Applying Itô's formula to $\Delta Y_s\Gamma_{s,t}$, we get
\[
\Delta Y_s = \Gamma_{s,t}\Delta Y_t + \int_s^t \Gamma_{s,r}U_r\,dr + \int_s^t \Gamma_{s,r}\,d\Delta K_r - \int_s^t \Gamma_{s,r}(\Delta Z_r + \Delta Y_r\beta_r)\,dB_r.
\]
So, taking the conditional expectation, it follows that
\[
\Delta Y_s = E\Big[\Gamma_{s,t}\Delta Y_t + \int_s^t \Gamma_{s,r}U_r\,dr + \int_s^t \Gamma_{s,r}\,d\Delta K_r \,\Big|\, \mathcal{F}_s\Big].
\]
In particular,
\[
\Delta Y_t = E\Big[\Gamma_{t,T}\Delta\xi + \int_t^T \Gamma_{t,r}U_r\,dr + \int_t^T \Gamma_{t,r}\,d\Delta K_r \,\Big|\, \mathcal{F}_t\Big] \geq 0,
\]
using the positivity of $\Delta\xi$, $U$ and $d\Delta K$. □

We next prove a comparison theorem for the solution of the RBSDE in the general case, which is similar to that in El Karoui et al. (1997, [28]).
Theorem 3.4.2 (General case for RBSDEs). Suppose that the parameters $(\xi^1, f^1, L^1)$ and $(\xi^2, f^2, L^2)$ satisfy Assumptions 3.1.1, 3.1.2 and 3.1.3. Let $(Y^1_t, Z^1_t, K^1_t)_{0\leq t\leq T}$ (resp. $(Y^2_t, Z^2_t, K^2_t)_{0\leq t\leq T}$) be the solution of the RBSDE$(\xi^1, f^1, L^1)$ (resp. RBSDE$(\xi^2, f^2, L^2)$). Assume in addition that, for all $t \in [0, T]$,
\[
\xi^1 \leq \xi^2 \ \text{a.s.}, \qquad f^1(t, Y^1_t, Z^1_t) \leq f^2(t, Y^1_t, Z^1_t) \ \text{a.s.}, \qquad L^1_t \leq L^2_t \ \text{a.s.}; \tag{3.76}
\]
then $Y^1_t \leq Y^2_t$, for $t \in [0, T]$, a.s.
Proof. Applying Itô's formula to $[(Y^1 - Y^2)^+]^2$ on the interval $[t, T]$, and taking expectation on both sides, we get immediately
\[
E\big[((Y^1_t - Y^2_t)^+)^2\big] + E\int_t^T 1_{\{Y^1_s > Y^2_s\}}|Z^1_s - Z^2_s|^2\,ds
= E\big[((\xi^1 - \xi^2)^+)^2\big] + 2E\int_t^T (Y^1_s - Y^2_s)^+ 1_{\{Y^1_s > Y^2_s\}}\big(f^1(s, Y^1_s, Z^1_s) - f^2(s, Y^2_s, Z^2_s)\big)\,ds + 2E\int_t^T (Y^1_s - Y^2_s)^+\,d(K^1_s - K^2_s).
\]
Since on the set $\{Y^1_s > Y^2_s\}$ we have $Y^1_s > Y^2_s \geq L^2_s \geq L^1_s$, we get
\[
\int_t^T (Y^1_s - Y^2_s)^+\,d(K^1_s - K^2_s) \leq \int_t^T (Y^1_s - L^1_s)\,dK^1_s - \int_t^T (Y^2_s - L^1_s)\,dK^1_s - \int_t^T (Y^1_s - Y^2_s)^+\,dK^2_s \leq -\int_t^T (Y^1_s - Y^2_s)^+\,dK^2_s \leq 0.
\]
So, by (3.76) and the Lipschitz and monotonicity conditions on $f^2$, it follows that
\[
E\big[((Y^1_t - Y^2_t)^+)^2\big] + E\int_t^T 1_{\{Y^1_s > Y^2_s\}}|Z^1_s - Z^2_s|^2\,ds
\leq 2E\int_t^T 1_{\{Y^1_s > Y^2_s\}}(Y^1_s - Y^2_s)\big(f^1(s, Y^1_s, Z^1_s) - f^2(s, Y^1_s, Z^1_s) + f^2(s, Y^1_s, Z^1_s) - f^2(s, Y^2_s, Z^2_s)\big)\,ds
\]
\[
\leq 2E\int_t^T 1_{\{Y^1_s > Y^2_s\}}(Y^1_s - Y^2_s)\big(f^2(s, Y^1_s, Z^1_s) - f^2(s, Y^2_s, Z^2_s)\big)\,ds
\leq 2\mu E\int_t^T 1_{\{Y^1_s > Y^2_s\}}(Y^1_s - Y^2_s)^2\,ds + 2kE\int_t^T 1_{\{Y^1_s > Y^2_s\}}(Y^1_s - Y^2_s)|Z^1_s - Z^2_s|\,ds
\]
\[
\leq \frac12 E\int_t^T 1_{\{Y^1_s > Y^2_s\}}|Z^1_s - Z^2_s|^2\,ds + (2\mu + 4k^2)E\int_t^T \big((Y^1_s - Y^2_s)^+\big)^2\,ds.
\]
Hence
\[
E\big[((Y^1_t - Y^2_t)^+)^2\big] \leq (2\mu + 4k^2)E\int_t^T \big((Y^1_s - Y^2_s)^+\big)^2\,ds,
\]
and from Gronwall's inequality we deduce $(Y^1_t - Y^2_t)^+ = 0$, $0 \leq t \leq T$. □

In this comparison theorem, we can only compare the two solution components $Y$ of RBSDEs with different coefficients: since the barriers $L^1$, $L^2$ are different, we cannot compare the increasing processes of the solutions. But the following comparison theorem shows that, if the two barriers are the same and satisfy Assumption 3.1.3, then we can also compare the increasing processes $K$. In the following we first prove a comparison theorem under a boundedness condition on $\xi$, $f$ and $\sup_{0\leq t\leq T} L^+_t$, and then relax it step by step.
Theorem 3.4.3 (Special case for RBSDEs). Suppose that $f^1(s, y)$, $f^2(s, y)$ satisfy Assumption 3.1.5, and that $\xi^i$, $f^i(\cdot, 0)$, $L$, $i = 1, 2$, satisfy for some constant $c$,
\[
|\xi^i| + \sup_{0\leq t\leq T}|f^i(t, 0)| + \sup_{0\leq t\leq T} L^+_t \leq c.
\]
Denote by $(Y^i_t, Z^i_t, K^i_t)_{0\leq t\leq T}$ the solution of the RBSDE$(\xi^i, f^i, L)$. If, for all $(t, y) \in [0, T]\times\mathbb{R}$,
\[
f^1(t, y) \leq f^2(t, y) \ \text{a.s.}, \qquad \xi^1 \leq \xi^2 \ \text{a.s.},
\]
then $Y^1_t \leq Y^2_t$, $K^1_t \geq K^2_t$, for $t \in [0, T]$ a.s., and for $0 \leq s \leq t \leq T$, $K^1_t - K^1_s \geq K^2_t - K^2_s$, a.s.

Proof. We consider the penalized equations relative to the RBSDE$(\xi^i, f^i, L)$: for $i = 1, 2$, $n \in \mathbb{N}$,
\[
Y^{n,i}_t = \xi^i + \int_t^T f^i(s, Y^{n,i}_s)\,ds + n\int_t^T (Y^{n,i}_s - L_s)^-\,ds - \int_t^T Z^{n,i}_s\,dB_s.
\]
For each $n \in \mathbb{N}$,
\[
f^{n,1}(s, y) = f^1(s, y) + n(y - L_s)^- \leq f^{n,2}(s, y) = f^2(s, y) + n(y - L_s)^-.
\]
So, by the comparison theorem in Pardoux (1999, [57]), we get
\[
Y^{n,1}_t \leq Y^{n,2}_t, \quad 0 \leq t \leq T.
\]
Since $K^{n,i}_t = n\int_0^t (Y^{n,i}_s - L_s)^-\,ds$, we deduce, for $0 \leq s \leq t \leq T$,
\[
K^{n,1}_t \geq K^{n,2}_t, \qquad K^{n,1}_t - K^{n,1}_s \geq K^{n,2}_t - K^{n,2}_s.
\]
By the convergence results of Step 1, $Y^{n,1}_t \nearrow Y^1_t$, $Y^{n,2}_t \nearrow Y^2_t$, $K^{n,1}_t \to K^1_t$, $K^{n,2}_t \to K^2_t$, a.s.; then the inequalities
\[
Y^1_t \leq Y^2_t, \qquad K^1_t \geq K^2_t, \qquad K^1_t - K^1_s \geq K^2_t - K^2_s
\]
hold for $0 \leq s \leq t \leq T$. □

Then we relax the boundedness condition on the barrier $L^+$.
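The penalization $n(y - L_s)^-$ used in the proof above can be watched at work in a stripped-down deterministic example (no Brownian part, zero driver): the penalized value increases with $n$ toward the reflected value. This toy sketch, with function names of our own choosing, is not from the thesis.

```python
def penalized_value(n, xi, barrier, T=1.0, steps=2000):
    """Backward Euler for Y_t = xi + int_t^T n*(Y_s - barrier)^- ds.

    The penalty n*(y - barrier)^- pushes the solution up toward the barrier;
    as n grows, Y_0 increases to max(xi, barrier), mirroring the monotone
    limit in the penalization argument.
    """
    dt = T / steps
    y = xi
    for _ in range(steps):
        y += n * max(0.0, barrier - y) * dt  # (y - barrier)^- = max(0, barrier - y)
    return y
```

For $\xi = 0$ below a barrier at $1$, the exact penalized value at time $0$ is $1 - e^{-nT}$, increasing to the reflected value $1$ as $n \to \infty$.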
Theorem 3.4.4 (Special case for RBSDEs). Suppose that $f^1(s, y)$, $f^2(s, y)$ satisfy Assumption 3.1.5, the barrier $L$ satisfies Assumption 3.1.4, and $\xi^i$, $f^i(\cdot, 0)$, $i = 1, 2$, satisfy for some constant $c$,
\[
|\xi^i| + \sup_{0\leq t\leq T}|f^i(t, 0)| \leq c.
\]
Let $(Y^i_t, Z^i_t, K^i_t)_{0\leq t\leq T}$ be the solution of the RBSDE$(\xi^i, f^i, L)$. If, for all $(t, y) \in [0, T]\times\mathbb{R}$,
\[
f^1(t, y) \leq f^2(t, y) \ \text{a.s.}, \qquad \xi^1 \leq \xi^2 \ \text{a.s.},
\]
then $Y^1_t \leq Y^2_t$, $K^1_t \geq K^2_t$, for $t \in [0, T]$ a.s., and for $0 \leq s \leq t \leq T$, $K^1_t - K^1_s \geq K^2_t - K^2_s$, a.s.

Proof. As in Step 2, there exist constants $c_1$ and $c_2$ such that $\xi^i \leq c_1$ and $f^i(t, 0) \leq c_2$ for $i = 1, 2$; set $c' = \max\{c_1, c_2 T\}$. Then, for $i = 1, 2$, $(Y^i_t, Z^i_t, K^i_t)_{0\leq t\leq T}$ is the solution of the RBSDE$(\xi^i, f^i, L)$ if and only if $(Y^{i\prime}_t, Z^{i\prime}_t, K^{i\prime}_t)_{0\leq t\leq T}$ is the solution of the RBSDE$(\xi^{i\prime}, f^{i\prime}, L')$, where
\[
(Y^{i\prime}_t, Z^{i\prime}_t, K^{i\prime}_t) = (Y^i_t + c_2 t - 2c',\ Z^i_t,\ K^i_t),
\]
and
\[
(\xi^{i\prime}, f^{i\prime}(t, y), L'_t) = \big(\xi^i + c_2 T - 2c',\ f^i(t, y - (c_2 t - 2c')) - c_2,\ L_t + c_2 t - 2c'\big).
\]
From this transformation, we know that the result is equivalent to: for $0 \leq s \leq t \leq T$,
\[
Y^{1\prime}_t \leq Y^{2\prime}_t, \qquad K^{1\prime}_t \geq K^{2\prime}_t, \qquad K^{1\prime}_t - K^{1\prime}_s \geq K^{2\prime}_t - K^{2\prime}_s, \ \text{a.s.} \tag{3.77}
\]
Set $L^n = L'\wedge n$, and consider the solution $(Y^{n,i\prime}_t, Z^{n,i\prime}_t, K^{n,i\prime}_t)_{0\leq t\leq T}$ of the RBSDE$(\xi^{i\prime}, f^{i\prime}, L^n)$. Since $\sup_{0\leq t\leq T}(L^n_t)^+$ is bounded by a constant, and
\[
\xi^{1\prime} \leq \xi^{2\prime}, \qquad f^{1\prime}(t, y) \leq f^{2\prime}(t, y), \quad \forall (t, y) \in [0, T]\times\mathbb{R},
\]
then by Theorem 3.4.3 we have
\[
Y^{n,1\prime}_t \leq Y^{n,2\prime}_t, \qquad K^{n,1\prime}_t \geq K^{n,2\prime}_t, \qquad K^{n,1\prime}_t - K^{n,1\prime}_s \geq K^{n,2\prime}_t - K^{n,2\prime}_s, \ \text{a.s., for } 0 \leq s \leq t \leq T.
\]
Notice that, for $i = 1, 2$, $\xi^{i\prime}$ and $f^{i\prime}(t, y)$ are nonpositive; by the convergence results in Step 2, we deduce that $Y^{n,i\prime}_t \nearrow Y^{i\prime}_t$ a.s. and $K^{n,i\prime}_t \to K^{i\prime}_t$ in $L^2(\mathcal{F}_t)$, for $0 \leq t \leq T$, $i = 1, 2$. So, letting $n \to \infty$, (3.77) follows, and the proof is complete. □

For the general case, we have the following comparison theorem.
Theorem 3.4.5. Suppose that the parameters $(\xi^i, f^i, L)$ satisfy Assumptions 3.1.1–3.1.3, and let $(Y^1_t, Z^1_t, K^1_t)_{0\leq t\leq T}$ (resp. $(Y^2_t, Z^2_t, K^2_t)_{0\leq t\leq T}$) be the solution of the RBSDE$(\xi^1, f^1, L)$ (resp. RBSDE$(\xi^2, f^2, L)$). If, for $(t, y) \in [0, T]\times\mathbb{R}$, we have
\[
f^1(t, y, Z^1_t) \leq f^2(t, y, Z^2_t), \qquad f^1(t, 0, Z^1_t) = f^2(t, 0, Z^2_t), \qquad \xi^1 \leq \xi^2 \ \text{a.s.}, \tag{3.78}
\]
then
\[
Y^1_t \leq Y^2_t, \qquad K^1_t - K^1_s \geq K^2_t - K^2_s, \quad 0 \leq s \leq t \leq T. \tag{3.79}
\]

Proof. As in Section 3.3, for $i = 1, 2$, set
\[
(\overline Y^i_t, \overline Z^i_t, \overline K^i_t) := \Big(e^{\lambda t}Y^i_t,\ e^{\lambda t}Z^i_t,\ \int_0^t e^{\lambda s}\,dK^i_s\Big).
\]
Then it is easy to check that, for $i = 1, 2$, $(\overline Y^i_t, \overline Z^i_t, \overline K^i_t)_{0\leq t\leq T}$ is the solution of the RBSDE$(\overline\xi^i, \overline f^i, \overline L)$, where
\[
(\overline\xi^i, \overline f^i(t, y, z), \overline L_t) = \big(e^{\lambda T}\xi^i,\ e^{\lambda t}f^i(t, e^{-\lambda t}y, e^{-\lambda t}z) - \lambda y,\ e^{\lambda t}L_t\big).
\]
If we take $\lambda = \mu$, then $(\overline\xi^i, \overline f^i, \overline L)$ satisfies Assumptions 3.1.1, 3.1.2' and 3.1.4. Since the transformation preserves monotonicity, (3.79) is equivalent to
\[
\overline Y^1_t \leq \overline Y^2_t, \qquad \overline K^1_t - \overline K^1_s \geq \overline K^2_t - \overline K^2_s, \quad 0 \leq s \leq t \leq T. \tag{3.80}
\]
Define, for $i = 1, 2$, the mapping
\[
F^i(s, y) = \overline f^i(s, y, \overline Z^i_s),
\]
and consider the RBSDE$(\overline\xi^i, F^i, \overline L)$ with its solution $(\hat Y^i_t, \hat Z^i_t, \hat K^i_t)_{0\leq t\leq T}$; then $\overline\xi^i$, $F^i(s, y)$, $\overline L$ satisfy Assumptions 3.1.1, 3.1.5 and 3.1.4, and for $(t, y) \in [0, T]\times\mathbb{R}$,
\[
\overline\xi^1 \leq \overline\xi^2, \qquad F^1(t, y) \leq F^2(t, y), \qquad F^1(t, 0) = F^2(t, 0).
\]
As in Steps 3 and 4 of the proof of Theorem 3.1.2, we make the approximations
\[
\overline\xi^{m,n,i} := \overline\xi^{n,i}\wedge m := (\overline\xi^i\vee(-n))\wedge m,
\]
\[
F^i_{m,n}(t, y) := F^i_n(t, y) - F^i_n(t, 0) + F^i_n(t, 0)\wedge m := F^i(t, y) - F^i(t, 0) + (F^i(t, 0)\vee(-n))\wedge m.
\]
Let, for $i = 1, 2$, $(\hat Y^{m,n,i}_t, \hat Z^{m,n,i}_t, \hat K^{m,n,i}_t)_{0\leq t\leq T}$ be the solution of the RBSDE$(\overline\xi^{m,n,i}, F^i_{m,n}, \overline L)$; then $\overline\xi^{m,n,i}$, $F^i_{m,n}$ satisfy
\[
|\overline\xi^{m,n,i}| + \sup_{0\leq t\leq T}|F^i_{m,n}(t, 0)| \leq m\vee n,
\]
and
\[
F^1_{m,n}(t, y) \leq F^2_{m,n}(t, y) \ \text{ for } (t, y) \in [0, T]\times\mathbb{R}, \qquad \overline\xi^{m,n,1} \leq \overline\xi^{m,n,2} \ \text{a.s.}
\]
Using the comparison theorem 3.4.4, we have, for $0 \leq s \leq t \leq T$,
\[
\hat Y^{m,n,1}_t \leq \hat Y^{m,n,2}_t, \qquad \hat K^{m,n,1}_t - \hat K^{m,n,1}_s \geq \hat K^{m,n,2}_t - \hat K^{m,n,2}_s.
\]
By the convergence results in Step 3 of the proof of Theorem 3.1.2, letting $m \to \infty$ we get, for $i = 1, 2$,
\[
(\hat Y^{m,n,i}_t)_{0\leq t\leq T} \to (\hat Y^{n,i}_t)_{0\leq t\leq T} \ \text{in } \mathbf{S}^2(0, T), \qquad
(\hat Z^{m,n,i}_t)_{0\leq t\leq T} \to (\hat Z^{n,i}_t)_{0\leq t\leq T} \ \text{in } \mathbf{H}^2_d(0, T), \qquad
(\hat K^{m,n,i}_t)_{0\leq t\leq T} \to (\hat K^{n,i}_t)_{0\leq t\leq T} \ \text{in } \mathbf{A}^2(0, T),
\]
where $(\hat Y^{n,i}_t, \hat Z^{n,i}_t, \hat K^{n,i}_t)_{0\leq t\leq T}$ is the solution of the RBSDE$(\overline\xi^{n,i}, F^i_n, \overline L)$, and for $0 \leq s \leq t \leq T$,
\[
\hat Y^{n,1}_t \leq \hat Y^{n,2}_t, \qquad \hat K^{n,1}_t - \hat K^{n,1}_s \geq \hat K^{n,2}_t - \hat K^{n,2}_s.
\]
Then, by the convergence in Step 4 of the proof of Theorem 3.1.2, for $i = 1, 2$, $(\hat Y^{n,i}_t, \hat Z^{n,i}_t, \hat K^{n,i}_t)_{0\leq t\leq T} \to (\hat Y^i_t, \hat Z^i_t, \hat K^i_t)_{0\leq t\leq T}$ in $\mathbf{S}^2(0, T)\times\mathbf{H}^2_d(0, T)\times\mathbf{A}^2(0, T)$ as $n \to \infty$, and the limit is the solution of the RBSDE$(\overline\xi^i, F^i, \overline L)$. Finally we get, for $0 \leq s \leq t \leq T$,
\[
\hat Y^1_t \leq \hat Y^2_t, \qquad \hat K^1_t - \hat K^1_s \geq \hat K^2_t - \hat K^2_s.
\]
Applying Itô's formula to $|\hat Y^i_t - \overline Y^i_t|^2$, for $i = 1, 2$, we get
\[
|\hat Y^i_t - \overline Y^i_t|^2 + \int_t^T |\hat Z^i_s - \overline Z^i_s|^2\,ds
= 2\int_t^T (\hat Y^i_s - \overline Y^i_s)\big(F^i(s, \hat Y^i_s) - \overline f^i(s, \overline Y^i_s, \overline Z^i_s)\big)\,ds + 2\int_t^T (\hat Y^i_s - \overline Y^i_s)\,d(\hat K^i_s - \overline K^i_s) - 2\int_t^T (\hat Y^i_s - \overline Y^i_s)(\hat Z^i_s - \overline Z^i_s)\,dB_s
\]
\[
\leq -2\int_t^T (\hat Y^i_s - \overline Y^i_s)(\hat Z^i_s - \overline Z^i_s)\,dB_s;
\]
taking the expectation on both sides, it follows that $E\int_0^T |\hat Z^i_s - \overline Z^i_s|^2\,ds = 0$; then, with the BDG inequality, we obtain
\[
E\sup_{0\leq t\leq T}|\hat Y^i_t - \overline Y^i_t|^2 \leq cE\int_0^T |\hat Z^i_s - \overline Z^i_s|^2\,ds = 0.
\]
Thanks to the uniqueness of the solution, we get $\hat Y^i_t = \overline Y^i_t$ and $\hat Z^i_t = \overline Z^i_t$, so $\hat K^i_t = \overline K^i_t$, $0 \leq t \leq T$. So (3.80) holds, and (3.79) follows. □
Chapitre 4

Reflected BSDE with continuity and monotonicity in y, and non-Lipschitz conditions in z
In this chapter, we study the case when $f$ satisfies monotonicity and general increasing conditions in $y$ and a non-Lipschitz condition in $z$. The chapter is organized as follows: in Section 4.1, we present the basic assumptions and the definition of the RBSDE; in Section 4.2, we prove the existence of a solution when $f(t, \omega, y, z)$ satisfies the conditions (1.11) and $\xi$ and $L$ are bounded; in the following section, we consider the case $f(t, \omega, y, z) = |z|^p$, for $p \in (1, 2]$, where $\xi$ is not necessarily bounded. In Subsection 4.3.1, we give a necessary and sufficient condition on the terminal condition $\xi$ for $p = 2$; then, under the assumption $\xi \geq a$ for some $a \in \mathbb{R}$, we give a sufficient condition on $\xi$ for the existence of a solution when $1 < p < 2$. In Section 4.4, we study the RBSDE under condition (1.12) and prove, under an additional technical condition, the existence of a solution. Finally, in Appendix 4.5, we generalize the comparison theorem of [44] and obtain some comparison theorems, which help us to pass to the limit in the approximations.
4.1 Notations and assumptions
Let $(\Omega, \mathcal{F}, P)$ be a complete probability space, equipped with a $d$-dimensional Brownian motion $(B_t)_{0\leq t\leq T} = (B^1_t, B^2_t, \dots, B^d_t)'_{0\leq t\leq T}$, defined on a finite interval $[0, T]$, $0 < T < +\infty$. Denote by $\{\mathcal{F}_t;\ 0 \leq t \leq T\}$ the natural filtration generated by the Brownian motion $B$: $\mathcal{F}_t = \sigma\{B_s;\ 0 \leq s \leq t\}$, where $\mathcal{F}_0$ contains all $P$-null sets of $\mathcal{F}$. We denote by $\mathcal{P}$ the $\sigma$-algebra of predictable sets on $[0, T]\times\Omega$.

We recall the notations of spaces from Chapter 1: $L^2(\mathcal{F}_t)$, $\mathbf{H}^2_d(0, T)$, $\mathbf{S}^2(0, T)$ and $\mathbf{A}^2(0, T)$.
In the following, we work under these assumptions:

Assumption 4.1.1. a final condition $\xi \in L^2(\mathcal{F}_T)$;

Assumption 4.1.2. a coefficient $f : \Omega\times[0, T]\times\mathbb{R}\times\mathbb{R}^d \to \mathbb{R}$ such that, for some continuous increasing function $\varphi : \mathbb{R}_+ \to \mathbb{R}_+$, real numbers $\mu \in \mathbb{R}$ and $A > 0$, and for all $t \in [0, T]$, $y, y' \in \mathbb{R}$ and $z \in \mathbb{R}^d$:
(i) $f(\cdot, y, z)$ is progressively measurable;
(ii) $|f(t, y, z)| \leq \varphi(|y|) + A|z|^2$, a.s.;
(iii) $(y - y')(f(t, y, z) - f(t, y', z)) \leq \mu(y - y')^2$, a.s.;
(iv) $y \mapsto f(t, y, z)$ is continuous, a.s.
Assumption 4.1.3. a barrier $(L_t)_{0\leq t\leq T}$, which is a bounded continuous progressively measurable real-valued process, with $b := \sup_{0\leq t\leq T}|L_t| < +\infty$ and $L_T \leq \xi$, a.s.

Now we introduce the definition of the solution of the RBSDE with parameters satisfying Assumptions 4.1.1, 4.1.2 and 4.1.3, which is the same as in El Karoui et al. (1997, [28]).

Definition 4.1.1. We say that the triple $(Y_t, Z_t, K_t)_{0\leq t\leq T}$ of progressively measurable processes is a solution of the reflected backward stochastic differential equation with one continuous reflecting lower barrier $L(\cdot)$, terminal condition $\xi$ and coefficient $f$ (in short, RBSDE$(\xi, f, L)$), if the following hold:
(1) $(Y_t)_{0\leq t\leq T} \in \mathbf{S}^2(0, T)$, $(Z_t)_{0\leq t\leq T} \in \mathbf{H}^2_d(0, T)$, and $(K_t)_{0\leq t\leq T} \in \mathbf{A}^2(0, T)$;
(2) $Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,ds + K_T - K_t - \int_t^T Z_s\,dB_s$, $0 \leq t \leq T$, a.s.;
(3) $Y_t \geq L_t$, $0 \leq t \leq T$;
(4) $\int_0^T (Y_s - L_s)\,dK_s = 0$, a.s.
4.2 A general case
Theorem 4.2.1. Under Assumptions 4.1.2 and 4.1.3, if $\xi$ is bounded, then the RBSDE$(\xi, f, L)$ admits a maximal bounded solution.

Proof. First, notice that $(Y, Z, K)$ is a solution of the RBSDE$(\xi, f, L)$ if and only if $(Y^b, Z^b, K^b)$ is a solution of the RBSDE$(\xi^b, f^b, L^b)$, where
\[
(Y^b, Z^b, K^b) = (Y - b,\ Z,\ K),
\]
and
\[
(\xi^b, f^b(t, y, z), L^b) = (\xi - b,\ f(t, y + b, z),\ L - b).
\]
Then $\xi^b$ is bounded, $f^b$ satisfies Assumption 4.1.2, and $-2b \leq L^b \leq 0$. So in the following we may assume that the barrier $L$ is a negative bounded process.

For $C > 0$, let $g_C : \mathbb{R} \to \mathbb{R}$ be a continuous function such that $0 \leq g_C(y) \leq 1$ for all $y \in \mathbb{R}$, and
\[
g_C(y) = 1 \ \text{if } |y| \leq C, \qquad g_C(y) = 0 \ \text{if } |y| \geq 2C. \tag{4.1}
\]
Denote $f_C(t, y, z) = g_C(y)f(t, y, z)$; then
\[
|f_C(t, y, z)| \leq g_C(y)\big(\varphi(|y|) + A|z|^2\big) \leq 1_{[-2C, 2C]}(y)\big(\varphi(|y|) + A|z|^2\big) \leq \varphi(2C) + A|z|^2.
\]
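For concreteness, one admissible choice of the cutoff $g_C$ is piecewise linear; this is our own instance (any continuous function with the stated properties works equally well):

```python
def g_C(y, C):
    """Continuous cutoff: equals 1 on [-C, C], 0 outside [-2C, 2C],
    and interpolates linearly in between; always in [0, 1]."""
    return min(1.0, max(0.0, (2.0 * C - abs(y)) / C))
```

Multiplying the driver by this cutoff, $f_C(t, y, z) = g_C(y) f(t, y, z)$, bounds the $y$-dependence exactly as in the display above.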
From Theorem 1 in [44], there exists a maximal solution $(Y^C, Z^C, K^C)$ to the RBSDE$(\xi, f_C, L)$:
\[
Y^C_t = \xi + \int_t^T g_C(Y^C_s)f(s, Y^C_s, Z^C_s)\,ds - \int_t^T Z^C_s\,dB_s + K^C_T - K^C_t, \tag{4.2}
\]
\[
Y^C_t \geq L_t, \qquad \int_0^T (Y^C_t - L_t)\,dK^C_t = 0, \ \text{a.s.}
\]
We choose $n \geq 2$ even, and $a \in \mathbb{R}$; using Itô's formula with $e^{at}(Y^C_t)^n$, we have
\[
e^{at}(Y^C_t)^n = e^{aT}\xi^n + n\int_t^T e^{as}(Y^C_s)^{n-1}g_C(Y^C_s)f(s, Y^C_s, Z^C_s)\,ds - n\int_t^T e^{as}(Y^C_s)^{n-1}Z^C_s\,dB_s \tag{4.3}
\]
\[
- \frac{n(n-1)}{2}\int_t^T e^{as}(Y^C_s)^{n-2}|Z^C_s|^2\,ds + n\int_t^T e^{as}(Y^C_s)^{n-1}\,dK^C_s - a\int_t^T e^{as}(Y^C_s)^n\,ds.
\]
From Assumption 4.1.2 and since $n$ is even, we have
\[
yf(s, y, z) \leq yf(s, 0, z) + \mu y^2, \qquad y^{n-1}f(s, y, z) \leq y^{n-1}f(s, 0, z) + \mu y^n.
\]
With $0 \leq g_C(y) \leq 1$, we get
\[
g_C(y)y^{n-1}f(s, y, z) \leq g_C(y)|y|^{n-1}|f(s, 0, z)| + \mu y^n \leq g_C(y)|y|^{n-1}\big(\varphi(0) + A|z|^2\big) + \mu y^n
\]
\[
\leq \Big(\frac{1}{n} + \frac{n-1}{n}|y|^n\Big)\varphi(0) + A|z|^2 g_C(y)|y|^{n-1} + \mu y^n \leq (1 + y^n)\varphi(0) + 2CA|z|^2 y^{n-2} + \mu y^n.
\]
Substituting this into (4.3), we obtain
\[
e^{at}(Y^C_t)^n \leq e^{aT}\xi^n + \frac{n\varphi(0)}{a}\big(e^{aT} - e^{at}\big) + \big(n\varphi(0) + n\mu - a\big)\int_t^T e^{as}(Y^C_s)^n\,ds
\]
\[
+ \Big(2nCA - \frac{n(n-1)}{2}\Big)\int_t^T e^{as}(Y^C_s)^{n-2}|Z^C_s|^2\,ds + n\int_t^T e^{as}(L_s)^{n-1}\,dK^C_s - n\int_t^T e^{as}(Y^C_s)^{n-1}Z^C_s\,dB_s.
\]
Notice that, since $K^C$ is an increasing process, $n$ is even and $L \leq 0$, we get immediately
\[
\int_t^T e^{as}(L_s)^{n-1}\,dK^C_s \leq 0.
\]
If we choose $n$ and $a$ such that
\[
n - 1 \geq 4CA, \qquad a = n(\varphi(0) + \mu),
\]
we obtain
\[
e^{at}(Y^C_t)^n \leq e^{aT}\xi^n + \frac{n\varphi(0)}{a}\big(e^{aT} - e^{at}\big) - n\int_t^T e^{as}(Y^C_s)^{n-1}Z^C_s\,dB_s,
\]
then
\[
e^{at}(Y^C_t)^n \leq E\Big[e^{aT}\Big(\xi^n + \frac{n\varphi(0)}{a}\Big)\Big|\mathcal{F}_t\Big] \leq e^{aT}\big(\|\xi\|^n_\infty + 1\big),
\]
which implies that
\[
(Y^C_t)^n \leq e^{a(T-t)}\big(\|\xi\|^n_\infty + 1\big) \leq (e^{aT}\vee 1)\big(\|\xi\|^n_\infty + 1\big).
\]
Since $a = n(\varphi(0) + \mu)$, it follows that
\[
|Y^C_t| \leq (e^{(\varphi(0)+\mu)T}\vee 1)\big(\|\xi\|^n_\infty + 1\big)^{\frac{1}{n}} \leq (e^{(\varphi(0)+\mu)T}\vee 1)\big(\|\xi\|_\infty + 1\big).
\]
If $C$ is chosen to satisfy $C \geq (e^{(\varphi(0)+\mu)T}\vee 1)(\|\xi\|_\infty + 1)$, then we have $|Y^C_t| \leq C$, which implies $g_C(Y^C_t) = 1$ for $0 \leq t \leq T$. So, for this $C$, $(Y^C, Z^C, K^C)$ is the maximal solution of the RBSDE$(\xi, f, L)$. □
4.3 The case $f(t, y, z) = |z|^p$, with $p \in [1, 2]$

First, if $p = 1$ we are in the classical Lipschitz case; so, following [28], $\xi \in L^2(\mathcal{F}_T)$ and $E[\sup_{0\leq t\leq T}(L^+_t)^2] < +\infty$ is a sufficient condition to get a unique solution of the reflected BSDE$(\xi, f, L)$.
4.3.1 The case p = 2
In this section we consider the case $f(t, y, z) = |z|^2$, which corresponds to the RBSDE
\[
Y_t = \xi + \int_t^T |Z_s|^2\,ds + K_T - K_t - \int_t^T Z_s\,dB_s, \tag{4.4}
\]
\[
Y_t \geq L_t, \qquad \int_0^T (Y_t - L_t)\,dK_t = 0.
\]
The main result is:

Theorem 4.3.1. Assume $E(\sup_{0\leq t\leq T} e^{2L_t}) < +\infty$. Then the RBSDE$(\xi, f, L)$ (4.4) admits a solution if and only if $E(e^{2\xi}) < +\infty$.
Proof. For the necessity, let $(Y, Z, K)$ be a solution of the RBSDE (4.4). By Itô's formula, we get
\[
e^{2Y_t} = e^{2\xi} + 2\int_t^T e^{2Y_s}\,dK_s - 2\int_t^T e^{2Y_s}Z_s\,dB_s \tag{4.5}
\]
\[
= e^{2Y_0} + 2\int_0^t e^{2Y_s}Z_s\,dB_s - 2\int_0^t e^{2Y_s}\,dK_s.
\]
For each $n$, let $\tau_n = \inf\{t : Y_t \geq n\}\wedge T$; then $M_{t\wedge\tau_n} = 2\int_0^{t\wedge\tau_n} e^{2Y_s}Z_s\,dB_s$ is a martingale, and we have
\[
E[e^{2Y_{\tau_n}}] = E\Big[e^{2Y_0} - 2\int_0^{\tau_n} e^{2Y_s}\,dK_s\Big] \leq E[e^{2Y_0}],
\]
in view of $2\int_0^{\tau_n} e^{2Y_s}\,dK_s \geq 0$. Finally, since $\tau_n \nearrow T$ as $n \to \infty$,
\[
E\big[\lim_n e^{2Y_{\tau_n}}\big] = E[e^{2\xi}] \leq E[e^{2Y_0}] < \infty
\]
follows from Fatou's lemma.

Now we suppose $E(e^{2\xi}) < +\infty$. Set $\tilde L_t = L_t 1_{\{t<T\}} + \xi 1_{\{t=T\}}$ and
\[
N_t = S_t(e^{2\tilde L}) = \operatorname*{ess\,sup}_{\tau\in\mathcal{T}_{t,T}} E\big[e^{2\tilde L_\tau}\big|\mathcal{F}_t\big],
\]
where $S_t(\eta)$ denotes the Snell envelope of $\eta$ (see Definition 2.10.1 in the appendix of Chapter 1, or El Karoui [27]), and $\mathcal{T}_{t,T}$ is the set of all stopping times $\tau$ such that $t \leq \tau \leq T$. Since
\[
E\big[\sup_{0\leq t\leq T} e^{2\tilde L_t}\big] \leq E\big[\sup_{0\leq t\leq T} e^{2L_t} + e^{2\xi}\big] < +\infty,
\]
applying the results on the Snell envelope, we know that $N$ is a supermartingale, so it admits the following decomposition, for an increasing integrable process $\overline K$:
\[
N_t = N_0 + \int_0^t \overline Z_s\,dB_s - \overline K_t.
\]
Using Itô's formula with $\log N_t$, we get
\[
\frac12\log N_t = \frac12\log N_0 + \frac12\int_0^t \frac{\overline Z_s}{N_s}\,dB_s - \frac14\int_0^t \Big(\frac{\overline Z_s}{N_s}\Big)^2\,ds - \frac12\int_0^t \frac{1}{N_s}\,d\overline K_s.
\]
Set $Y_t = \frac12\log N_t$, $Z_t = \frac{\overline Z_t}{2N_t}$, $K_t = \frac12\int_0^t \frac{1}{N_s}\,d\overline K_s$; then the triple satisfies
\[
Y_t = \xi + \int_t^T Z_s^2\,ds + K_T - K_t - \int_t^T Z_s\,dB_s. \tag{4.6}
\]
Thanks to the results on the Snell envelope, we know that $N_t \geq e^{2\tilde L_t}$ and $\int_0^T (N_t - e^{2\tilde L_t})\,d\overline K_t = 0$. The first property implies
\[
Y_t \geq \tilde L_t \geq L_t.
\]
Obviously $N_t > 0$, $0 \leq t \leq T$, so $K$ is increasing. Consider the stopping time $D_t := \inf\{t \leq u \leq T ;\ Y_u = \tilde L_u\}\wedge T$; it satisfies $D_t = \inf\{t \leq u \leq T ;\ N_u = e^{2\tilde L_u}\}\wedge T$. By the continuity of $\overline K$, we get $\overline K_{D_t} - \overline K_t = 0$, which implies $K_{D_t} - K_t = 0$. It follows that
\[
\int_0^T (Y_t - L_t)\,dK_t = 0.
\]
It remains to prove that $Y \in \mathbf{S}^2(0, T)$, $Z \in \mathbf{H}^2_d(0, T)$ and $K \in \mathbf{A}^2(0, T)$. With Jensen's inequality,
\[
Y_t = \frac12\log N_t = \frac12\log\Big[\operatorname*{ess\,sup}_{\tau\in\mathcal{T}_{t,T}} E\big[e^{2\tilde L_\tau}\big|\mathcal{F}_t\big]\Big] \geq \frac12\log\Big[\exp\Big(\operatorname*{ess\,sup}_{\tau\in\mathcal{T}_{t,T}} E\big[2\tilde L_\tau\big|\mathcal{F}_t\big]\Big)\Big] = \operatorname*{ess\,sup}_{\tau\in\mathcal{T}_{t,T}} E\big[\tilde L_\tau\big|\mathcal{F}_t\big] \geq E[\xi|\mathcal{F}_t] \geq U_t,
\]
where $U_t = -E[\xi^-|\mathcal{F}_t]$. For all $a > 0$, define
\[
\tau_a = \inf\Big\{t ;\ |N_t| > a, \ \int_0^t \Big(\frac{\overline Z_s}{N_s}\Big)^2\,ds > a, \ \Big|\int_0^t \frac{\overline Z_s}{N_s}\,dB_s\Big| > a\Big\}\wedge T.
\]
From (4.6), we get for $0 \leq t \leq T$
\[
0 \leq \int_0^t Z_s^2\,ds = Y_0 - Y_t + \int_0^t Z_s\,dB_s - K_t \leq Y_0 - U_t + \int_0^t Z_s\,dB_s.
\]
Then
\[
\Big(\int_0^{\tau_a} Z_s^2\,ds\Big)^2 \leq 3(Y_0)^2 + 3(U_{\tau_a})^2 + 3\Big(\int_0^{\tau_a} Z_s\,dB_s\Big)^2.
\]
Taking the expectation, using Jensen's inequality and $3x \leq \frac{x^2}{2} + \frac92$, we obtain
\[
E\Big(\int_0^{\tau_a} Z_s^2\,ds\Big)^2 \leq \frac34(\log N_0)^2 + 3E(\xi^-)^2 + \frac12\Big(E\int_0^{\tau_a} Z_s^2\,ds\Big)^2 + \frac92
\leq \frac34(\log N_0)^2 + 3E(\xi^-)^2 + \frac12 E\Big(\int_0^{\tau_a} Z_s^2\,ds\Big)^2 + \frac92,
\]
so
\[
E\Big(\int_0^{\tau_a} Z_s^2\,ds\Big)^2 \leq \frac32(\log N_0)^2 + 6E(\xi^-)^2 + 9 \leq C.
\]
Since $\tau_a \nearrow T$ when $a \to +\infty$, we can pass to the limit, and with the Schwarz inequality
\[
E\int_0^T Z_s^2\,ds \leq \Big(E\Big(\int_0^T Z_s^2\,ds\Big)^2\Big)^{\frac12} \leq C.
\]
So $Z \in \mathbf{H}^2_d(0, T)$. From (4.6), we get for $0 \leq t \leq T$
\[
0 \leq K_t = Y_0 - Y_t + \int_0^t Z_s\,dB_s - \int_0^t Z_s^2\,ds \leq Y_0 - Y_t + \int_0^t Z_s\,dB_s.
\]
Notice that $K$ is increasing, so it is sufficient to prove $E[K_T^2] < +\infty$. Squaring the inequality on both sides and taking expectation, we obtain
\[
E[(K_T)^2] \leq 3Y_0^2 + 3E[\xi^2] + 3E\int_0^T Z_s^2\,ds \leq C.
\]
We consider now $Y$; again from (4.6),
\[
Y_t = Y_0 - K_t + \int_0^t Z_s\,dB_s - \int_0^t Z_s^2\,ds,
\]
so
\[
(Y_t)^2 \leq 4(Y_0)^2 + 4(K_t)^2 + 4\Big(\int_0^t Z_s\,dB_s\Big)^2 + 4\Big(\int_0^t Z_s^2\,ds\Big)^2.
\]
Then, by the Burkholder-Davis-Gundy inequality, we get
\[
E\big[\sup_{0\leq t\leq T}(Y_t)^2\big] \leq 4(Y_0)^2 + 4E[K_T^2] + 4E\Big[\sup_{0\leq t\leq T}\Big(\int_0^t Z_s\,dB_s\Big)^2\Big] + 4E\Big(\int_0^T Z_s^2\,ds\Big)^2
\leq 4(Y_0)^2 + 4E[K_T^2] + CE\int_0^T Z_s^2\,ds + 4E\Big(\int_0^T Z_s^2\,ds\Big)^2 \leq C,
\]
i.e. $Y \in \mathbf{S}^2(0, T)$. □
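The construction in the sufficiency part (Snell envelope $N$ of $e^{2\tilde L}$, then $Y = \frac12\log N$) can be imitated in discrete time by backward dynamic programming on a random-walk filtration. The sketch below is our own illustration, with hypothetical names; it is not part of the thesis.

```python
import math

def y_from_snell(xi, L, T=1.0, N=200):
    """Return Y_0 = 0.5 * log(N_0), where N is the discrete Snell envelope
    of exp(2*Ltilde): obstacle exp(2*L(w)) before T and exp(2*xi(w)) at T.
    xi and L are functions of the random-walk value w."""
    dt = T / N
    sq = math.sqrt(dt)
    # terminal layer of the envelope: the obstacle at time T is exp(2*xi)
    v = [math.exp(2.0 * xi((2 * j - N) * sq)) for j in range(N + 1)]
    for k in range(N - 1, -1, -1):
        # Snell recursion: envelope = max(obstacle, conditional expectation)
        v = [max(math.exp(2.0 * L((2 * j - k) * sq)), 0.5 * (v[j] + v[j + 1]))
             for j in range(k + 1)]
    return 0.5 * math.log(v[0])
```

With a constant terminal value and an inactive barrier, $Y_0 = \xi$; with a barrier above the terminal value, $Y_0$ sits on the barrier, reflecting the constraint $Y \geq L$.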
4.3.2 The case p ∈ (1, 2)
In this section, $f(t, y, z) = |z|^p$, where $p \in (1, 2)$. We assume that Assumption 4.1.3 holds, and:

Assumption 4.3.1. There exist $a \in \mathbb{R}$ and $\varepsilon > 0$ such that $\xi \geq a$ and $E[e^{\varepsilon\xi}] < +\infty$.
First, we notice that if $(Y, Z, K)$ is a solution of the RBSDE$(\xi, f, L)$, i.e.
\[
Y_t = \xi + \int_t^T |Z_s|^p\,ds + K_T - K_t - \int_t^T Z_s\,dB_s,
\]
\[
Y_t \geq L_t, \qquad \int_0^T (Y_t - L_t)\,dK_t = 0,
\]
then for any constant $d$, $(Y + d, Z, K)$ is a solution of the RBSDE$(\xi + d, f, L + d)$; indeed,
\[
Y_t + d = \xi + d + \int_t^T |Z_s|^p\,ds + K_T - K_t - \int_t^T Z_s\,dB_s,
\]
\[
Y_t + d \geq L_t + d, \qquad \int_0^T \big((Y_t + d) - (L_t + d)\big)\,dK_t = 0.
\]
If we set $d = a^-\vee b$, we can replace Assumptions 4.3.1 and 4.1.3 by:

Assumption 4.3.2. $\xi \geq 0$, and there exists $\varepsilon > 0$ such that $E[e^{\varepsilon\xi}] < +\infty$.

Assumption 4.3.3. $L_t \geq 0$ and $\sup_{0\leq t\leq T} L_t \leq b$, for some $b \in \mathbb{R}_+$.

The main result of this section is the following.

Theorem 4.3.2. Assume that $\xi$ and $L$ satisfy Assumptions 4.3.2 and 4.3.3; then there exists a solution to the RBSDE$(\xi, f, L)$ with $f(t, y, z) = |z|^p$, where $p \in (1, 2)$.

We prove it by approximation of the terminal value. For $n \in \mathbb{N}$, set $\xi_n = \xi\wedge n$, so that $0 \leq \xi_n \leq n$. There exists $(Y^n, Z^n, K^n)$ which is the maximal solution of the RBSDE$(\xi_n, f, L)$, i.e.
\[
Y^n_t = \xi_n + \int_t^T |Z^n_s|^p\,ds + K^n_T - K^n_t - \int_t^T Z^n_s\,dB_s, \tag{4.7}
\]
\[
Y^n_t \geq L_t, \qquad \int_0^T (Y^n_t - L_t)\,dK^n_t = 0,
\]
by Theorem 1 in [44], in view of $|z|^p \leq 1 + |z|^2$. Furthermore, since $\xi_n \leq \xi_{n+1}$, by the comparison theorem 4.5.1, $Y^n_t \leq Y^{n+1}_t$ and $dK^n_t \leq dK^{n+1}_t$, $0 \leq t \leq T$. Consequently $Y^n_t \nearrow Y_t$, a.s., so $Y_t \geq Y^n_t \geq L_t \geq 0$. Before the proof of the theorem, we need the three following lemmas.
Lemma 4.3.1. For all k ≥ 2, E[sup0≤t≤T (Yt)k] < +∞.
Proof. Let (Xn, Λn, Jn) be the maximal bounded solution of the following RBSDE
Xnt = ξn +
∫ T
t(A +
ε
4|Λn
s |2)ds + JnT − Jn
t −∫ T
tΛn
s dBs,
Xnt ≥ Lt,
∫ T
0(Xn
t − Lt)dJnt = 0,
where A = A(ε, p) satisfies xp ≤ A + ε4 |x|2, for x ≥ 0. Again from the comparison theorem 4.5.1,
we have Y nt ≤ Xn
t , a.s. 0 ≤ t ≤ T . It is easy to notice that (Xn + At,Λn, Jn) is a solution of theRBSDE(ξn + AT, ε
4 |z|2 , L + At), i.e.
X^n_t + At = ξ_n + AT + (ε/4)∫_t^T |Λ^n_s|² ds + J^n_T − J^n_t − ∫_t^T Λ^n_s dB_s,
X^n_t + At ≥ L_t + At,  ∫_0^T ((X^n_t + At) − (L_t + At)) dJ^n_t = 0.
If we apply Itô's formula to exp((ε/2)(X^n_t + At)) between t and T, we deduce that
e^{(ε/2)(X^n_t + At)} = e^{(ε/2)(ξ_n + AT)} + (ε/2)∫_t^T e^{(ε/2)(X^n_s + As)} dJ^n_s − (ε/2)∫_t^T e^{(ε/2)(X^n_s + As)} Λ^n_s dB_s
= e^{(ε/2)(ξ_n + AT)} + (ε/2)∫_t^T e^{(ε/2)(L_s + As)} dJ^n_s − (ε/2)∫_t^T e^{(ε/2)(X^n_s + As)} Λ^n_s dB_s.
Chapitre 4. RBSDE under non-Lipschitz
Taking the conditional expectation with respect to F_t, we have
e^{(ε/2)(X^n_t + At)} = E[e^{(ε/2)(ξ_n + AT)} + (ε/2)∫_t^T e^{(ε/2)(L_s + As)} dJ^n_s | F_t]
≤ E[e^{(ε/2)(ξ_n + AT)} + (ε/2)e^{(ε/2)(b + AT)}(J^n_T − J^n_t) | F_t],
while
J^n_T − J^n_t = (X^n_t + At) − (ξ_n + AT) − (ε/4)∫_t^T |Λ^n_s|² ds + ∫_t^T Λ^n_s dB_s
≤ (X^n_t + At) + ∫_t^T Λ^n_s dB_s.
Then we get
e^{(ε/2)(X^n_t + At)} ≤ E[e^{(ε/2)(ξ_n + AT)}|F_t] + (ε/2)e^{(ε/2)(b + AT)}(X^n_t + At)
≤ E[e^{(ε/2)(ξ + AT)}|F_t] + c_1(X^n_t + At),
where c_1 = (ε/2)e^{(ε/2)(b + AT)}. Since e^{(ε/2)x}/(c_1 x) → +∞ as x → +∞, there exists a constant c_2, depending only on ε, such that when x ≥ c_2, c_1 x ≤ (1/2)e^{(ε/2)x}. So we have
1_{X^n_t + At ≥ c_2} e^{(ε/2)(X^n_t + At)} ≤ 2E[e^{(ε/2)(ξ + AT)}|F_t].
On the other hand,
1_{X^n_t + At ≤ c_2} e^{(ε/2)(X^n_t + At)} ≤ e^{(ε/2)c_2}.
So
e^{(ε/2)(X^n_t + At)} ≤ 2E[e^{(ε/2)(ξ + AT)}|F_t] + e^{(ε/2)c_2},
which implies
e^{(ε/2)X^n_t} ≤ 2e^{(ε/2)A(T−t)} E[e^{(ε/2)ξ}|F_t] + e^{(ε/2)(c_2 − At)}.
Since X^n_t ≥ Y^n_t, 0 ≤ t ≤ T, then for all n ∈ N,
e^{(ε/2)Y^n_t} ≤ e^{(ε/2)X^n_t} ≤ 2e^{(ε/2)AT} E[e^{(ε/2)ξ}|F_t] + e^{(ε/2)c_2}.
While Y^n_t ↗ Y_t, with Doob's inequality we have
E[sup_{0≤t≤T} e^{εY_t}] ≤ 8e^{εAT} E[sup_{0≤t≤T} (E[e^{(ε/2)ξ}|F_t])²] + 2e^{εc_2}
≤ 32e^{εAT} E[e^{εξ}] + 2e^{εc_2} < +∞.
Since for k ≥ 2, e^{εx}/x^k → +∞ as x → +∞, the final result follows. □
Lemma 4.3.2. There exists a constant C_0 > 0 such that for all n ≥ 0,
E[∫_0^T |Z^n_s|² ds] ≤ C_0.
Proof. If we apply Itô's formula to (Y^n)², we have
(Y^n_t)² + ∫_t^T |Z^n_s|² ds = (ξ_n)² + 2∫_t^T Y^n_s |Z^n_s|^p ds + 2∫_t^T Y^n_s dK^n_s − 2∫_t^T Y^n_s Z^n_s dB_s
= (ξ_n)² + 2∫_t^T Y^n_s |Z^n_s|^p ds + 2∫_t^T L_s dK^n_s − 2∫_t^T Y^n_s Z^n_s dB_s,
which implies
∫_0^T |Z^n_s|² ds ≤ (ξ_n)² + 2∫_0^T Y^n_s |Z^n_s|^p ds + 2bK^n_T − 2∫_0^T Y^n_s Z^n_s dB_s.  (4.8)
Notice that
K^n_T = Y^n_0 − ξ_n − ∫_0^T |Z^n_s|^p ds + ∫_0^T Z^n_s dB_s ≤ Y^n_0 + ∫_0^T Z^n_s dB_s.  (4.9)
Taking expectation in (4.8), we get
E∫_0^T |Z^n_s|² ds ≤ E(ξ_n)² + 2E∫_0^T Y^n_s |Z^n_s|^p ds + 2bY^n_0
≤ E(ξ)² + b² + (Y^n_0)² + 2(E∫_0^T |Z^n_s|² ds)^{p/2} (E∫_0^T |Y^n_s|^q ds)^{1/q},
where p/2 + 1/q = 1, so q = 2/(2 − p) ≥ 2, by Hölder's inequality. From Lemma 4.3.1 and 0 ≤ Y^n_t ≤ Y_t, we get
E∫_0^T |Z^n_s|² ds ≤ C_1 + C_2 (E∫_0^T |Z^n_s|² ds)^{p/2}.
Since p/2 ∈ (1/2, 1), for x > 0 there exists a constant A_p such that x^{p/2} ≤ A_p + (1/(2C_2))x, so we get
E∫_0^T |Z^n_s|² ds ≤ (C_1 + C_2 A_p) + (1/2)E∫_0^T |Z^n_s|² ds,
and finally,
E∫_0^T |Z^n_s|² ds ≤ C_0,
where C_0 = 2(C_1 + C_2 A_p). □
Lemma 4.3.3. For all n ≥ 0, there exists a constant C_3 such that E[(K^n_T)²] ≤ C_3.
Proof. From (4.9), we get
0 ≤ K^n_T = Y^n_0 − ξ_n − ∫_0^T |Z^n_s|^p ds + ∫_0^T Z^n_s dB_s ≤ Y^n_0 + ∫_0^T Z^n_s dB_s.
So
E[(K^n_T)²] ≤ 2(Y^n_0)² + 2E∫_0^T |Z^n_s|² ds ≤ C_3,
in view of Lemma 4.3.1 and Lemma 4.3.2. □
Now we can prove Theorem 4.3.2.
Proof of Theorem 4.3.2. For m, n ∈ N, if we apply Itô's formula to |Y^n − Y^m|², we have
|Y^n_t − Y^m_t|² + ∫_t^T |Z^n_s − Z^m_s|² ds  (4.10)
= |ξ_n − ξ_m|² + 2∫_t^T (Y^n_s − Y^m_s)(|Z^n_s|^p − |Z^m_s|^p) ds + 2∫_t^T (Y^n_s − Y^m_s) d(K^n_s − K^m_s) − 2∫_t^T (Y^n_s − Y^m_s)(Z^n_s − Z^m_s) dB_s
≤ |ξ_n − ξ_m|² + 2∫_t^T (Y^n_s − Y^m_s)(|Z^n_s|^p − |Z^m_s|^p) ds − 2∫_t^T (Y^n_s − Y^m_s)(Z^n_s − Z^m_s) dB_s,
since
∫_t^T (Y^n_s − Y^m_s) d(K^n_s − K^m_s) = ∫_t^T (Y^n_s − L_s) dK^n_s + ∫_t^T (Y^m_s − L_s) dK^m_s − ∫_t^T (Y^n_s − L_s) dK^m_s − ∫_t^T (Y^m_s − L_s) dK^n_s ≤ 0.
Taking expectation and setting t = 0, with Hölder's inequality, we get
E∫_0^T |Z^n_s − Z^m_s|² ds ≤ E|ξ_n − ξ_m|² + 2E∫_0^T (Y^n_s − Y^m_s)|Z^n_s|^p ds
≤ E|ξ_n − ξ_m|² + 2(E∫_0^T (Y^n_s − Y^m_s)^{2/(2−p)} ds)^{(2−p)/2} (E∫_0^T |Z^n_s|² ds)^{p/2}.
From Lemma 4.3.1, since 2/(2 − p) ≥ 2, the Lebesgue dominated convergence theorem implies that Y^n → Y in H^{2/(2−p)}(0, T), i.e.
E∫_0^T (Y^n_s − Y^m_s)^{2/(2−p)} ds → 0, as n, m → ∞.
Thanks to Lemma 4.3.2, there exists a constant C_0 independent of n such that E∫_0^T |Z^n_s|² ds ≤ C_0. So we have, as n, m → ∞,
E∫_0^T |Z^n_s − Z^m_s|² ds → 0.
For the convergence of Y^n, without loss of generality we take n ≥ m, so Y^n_t ≥ Y^m_t. From (4.10),
sup_{0≤t≤T} |Y^n_t − Y^m_t|² ≤ |ξ_n − ξ_m|² + 2 sup_{0≤t≤T} ∫_t^T (Y^n_s − Y^m_s)|Z^n_s|^p ds + 2 sup_{0≤t≤T} |∫_t^T (Y^n_s − Y^m_s)(Z^n_s − Z^m_s) dB_s|,
taking expectation, by the BDG inequality,
E sup_{0≤t≤T} |Y^n_t − Y^m_t|² ≤ E|ξ_n − ξ_m|² + 2E∫_0^T (Y^n_s − Y^m_s)|Z^n_s|^p ds + (1/2)E sup_{0≤t≤T} |Y^n_t − Y^m_t|² + CE∫_0^T |Z^n_s − Z^m_s|² ds.
Thanks to the convergence of Z^n, we deduce that as n, m → ∞,
E sup_{0≤t≤T} |Y^n_t − Y^m_t|² → 0,
i.e. Y^n → Y in S²(0, T), and Y is a continuous process. Since Z^n → Z in H²_d(0, T), there exist Z′ ∈ H²_d(0, T) and a subsequence, still denoted by Z^n, such that |Z^n_s|^p ≤ |Z′_s|^p, so by the Lebesgue dominated convergence theorem we get that, for all 0 ≤ t ≤ T,
E∫_t^T |Z^n_s|^p ds → E∫_t^T |Z_s|^p ds,
when n → ∞. For the term K^n, without loss of generality we take n ≥ m. By the comparison theorem 4.5.1, since ξ_n ≥ ξ_m, we get dK^n_t ≤ dK^m_t. Then for 0 ≤ t ≤ T,
0 ≤ K^m_t − K^n_t ≤ K^m_T − K^n_T.
It follows that
E[sup_{0≤t≤T} (K^m_t − K^n_t)²] ≤ E[(K^m_T − K^n_T)²] → 0,
as n, m → ∞, i.e. there exists K ∈ A²(0, T) such that K^n → K in A²(0, T).
It is easy to check that (Y, Z, K) satisfies the equation
Y_t = ξ + ∫_t^T |Z_s|^p ds + K_T − K_t − ∫_t^T Z_s dB_s.
Obviously Y_t ≥ L_t, 0 ≤ t ≤ T, in view of Y^n_t ≥ L_t. The last thing is to check the integral condition. The sequence (Y^n, K^n) tends to (Y, K) uniformly in t in probability as n → ∞, so the measure dK^n → dK weakly in probability as n → ∞, i.e.
∫_0^T (Y^n_t − L_t) dK^n_t → ∫_0^T (Y_t − L_t) dK_t, in probability.
While Y_t ≥ L_t, 0 ≤ t ≤ T, so ∫_0^T (Y_t − L_t) dK_t ≥ 0. On the other hand, ∫_0^T (Y^n_t − L_t) dK^n_t = 0, so ∫_0^T (Y_t − L_t) dK_t = 0, i.e. the triple (Y, Z, K) is a solution of the RBSDE(ξ, f, L). □
4.4 The case when f is linear increasing in z
In this section, we consider the case when the coefficient f is at most linearly increasing in z and generally increasing in y, i.e.
Assumption 4.4.1. For all (t, ω), f(t, ω, ·, ·) is continuous, and there exist a strictly positive continuous function ϕ and real numbers µ and A > 0 such that, for any t ∈ [0, T], y, y′ ∈ R, z ∈ R^d:
(i) f(·, y, z) is progressively measurable;
(ii) |f(t, y, z)| ≤ ϕ(|y|) + A|z|, a.s.;
(iii) (y − y′)(f(t, y, z) − f(t, y′, z)) ≤ µ(y − y′)², a.s.
If ϕ(x) = |x|, then f is linearly increasing in y and z. Matoussi proved in [52] that when ξ ∈ L²(F_T) and L ∈ S²(0, T), there exists a triple (Y, Z, K) which is a solution of the RBSDE(ξ, f, L). In this general case, we need the following additional assumptions on f and on the terminal condition ξ:
(iv) For t ∈ [0, T], y ∈ R, z, z′ ∈ R^d, there exists a constant A_0 such that
|f(t, y, z) − f(t, y, z′)| ≤ A_0(|z| + |z′|);
Assumption 4.4.2. E[ξ²(log ξ²)⁺] < +∞.
First we note that a triple (Y, Z, K) solves the RBSDE(ξ, f, L) if and only if
(Ȳ_t, Z̄_t, K̄_t) := (e^{λt}Y_t, e^{λt}Z_t, ∫_0^t e^{λs} dK_s)  (4.11)
solves the RBSDE(ξ̄, f̄, L̄), where
ξ̄ = ξe^{λT},  f̄(t, y, z) = e^{λt}f(t, e^{−λt}y, e^{−λt}z) − λy,  L̄_t = e^{λt}L_t.
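The equivalence behind this change of variables is a one-line Itô computation, which we spell out here (our addition, not in the original text):

```latex
d\bigl(e^{\lambda t}Y_t\bigr)
 = \lambda e^{\lambda t}Y_t\,dt + e^{\lambda t}\,dY_t
 = -\bigl[e^{\lambda t}f\bigl(t,e^{-\lambda t}\bar Y_t,e^{-\lambda t}\bar Z_t\bigr)
     - \lambda \bar Y_t\bigr]dt - e^{\lambda t}\,dK_t + \bar Z_t\,dB_t
 = -\bar f(t,\bar Y_t,\bar Z_t)\,dt - d\bar K_t + \bar Z_t\,dB_t .
```

The obstacle and Skorokhod conditions are preserved because Ȳ_t − L̄_t = e^{λt}(Y_t − L_t) and ∫_0^T (Ȳ_t − L̄_t) dK̄_t = ∫_0^T e^{2λt}(Y_t − L_t) dK_t.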
If we choose λ = µ, then the coefficient f̄ satisfies the same assumptions as in 4.4.1, but with 4.4.1-(iii) replaced by
(iii′) (y − y′)(f(t, y, z) − f(t, y′, z)) ≤ 0.
Since we are in the 1-dimensional case, (iii′) means that f is decreasing in y. On the other hand, the terminal condition ξ̄ and the barrier L̄ still satisfy the assumptions 4.4.2 and 4.1.3. In the following, we shall work with the assumption 4.4.1 with (iii) replaced by (iii′).
The main result of this section is the following :
Theorem 4.4.1. Suppose that ξ, f and L satisfy the assumptions 4.4.2, 4.4.1-(i), (ii), (iii′), (iv) and 4.1.3. Then the RBSDE(ξ, f, L) has a solution.
Proof. We will prove the result in three steps.
Step 1. Suppose that ξ ≥ a a.s. for some a ∈ R, and L_t ≤ 0 a.s. with sup_{0≤t≤T} |L_t| ≤ b.
Set ξ_n = ξ ∧ n for n ∈ N; then ξ_n is bounded. By Theorem 4.2.1, we know that the RBSDE(ξ_n, f, L) has a maximal solution (Y^n, Z^n, K^n), i.e.
Y^n_t = ξ_n + ∫_t^T f(s, Y^n_s, Z^n_s) ds + K^n_T − K^n_t − ∫_t^T Z^n_s dB_s,  (4.12)
and Y^n_t ≥ L_t, ∫_0^T (Y^n_t − L_t) dK^n_t = 0. We first need some estimates. Applying Itô's formula to e^{αt}(Y^n_t)², then
e^{αt}(Y^n_t)² = e^{αT}(ξ_n)² + 2∫_t^T e^{αs}Y^n_s f(s, Y^n_s, Z^n_s) ds − 2∫_t^T e^{αs}Y^n_s Z^n_s dB_s − ∫_t^T e^{αs}(Z^n_s)² ds − α∫_t^T e^{αs}(Y^n_s)² ds + 2∫_t^T e^{αs}L_s dK^n_s.
Notice that L ≤ 0, so ∫_t^T e^{αs}L_s dK^n_s ≤ 0. Moreover,
2Y^n_s f(s, Y^n_s, Z^n_s) ≤ 2|Y^n_s||f(s, 0, Z^n_s)| + 2µ|Y^n_s|²
≤ 2|Y^n_s|(ϕ(0) + A|Z^n_s|) + 2µ|Y^n_s|²
≤ ϕ²(0) + (1 + 2µ + A/k)|Y^n_s|² + Ak|Z^n_s|².
We choose k < 1/A and α ≥ 1 + 2µ + A/k; then
e^{αt}(Y^n_t)² + (1 − Ak)∫_t^T e^{αs}(Z^n_s)² ds ≤ e^{αT}(ξ_n)² + ∫_t^T e^{αs}ϕ²(0) ds − 2∫_t^T e^{αs}Y^n_s Z^n_s dB_s.
Taking the conditional expectation with respect to F_t, with the fact that (ξ_n)² ≤ ξ², we have
e^{αt}(Y^n_t)² ≤ e^{αT} E[ξ² + (1/α)ϕ²(0) | F_t].  (4.13)
Denote N_t = E[ξ² + (1/α)ϕ²(0) | F_t], which is a continuous martingale. By the assumption 4.4.2 (see Jacod [41] or Revuz and Yor [70]), it follows that E(sup_{0≤t≤T} N_t) < +∞. With (4.13), we have
E[sup_{0≤t≤T} (Y^n_t)²] ≤ e^{αT} E(sup_{0≤t≤T} N_t) < +∞,  (4.14)
and, since 1 − Ak > 0,
E∫_0^T (Z^n_s)² ds ≤ (1/(1 − Ak)) e^{αT} N_0 ≤ C,  (4.15)
where C is a constant which does not depend on n. Now from the comparison theorem 4.5.2, since ξ_n ≤ ξ_{n+1}, we get Y^n_t ≤ Y^{n+1}_t, K^n_t ≥ K^{n+1}_t and dK^n_t ≥ dK^{n+1}_t, 0 ≤ t ≤ T. Hence Y^n_t ↗ Y_t and K^n_t ↘ K_t, 0 ≤ t ≤ T, a.s. From (4.14) and Fatou's lemma we obtain E[sup_{0≤t≤T}(Y_t)²] < +∞. Then, by the dominated convergence theorem, it follows that as n → ∞,
E∫_0^T (Y^n_s − Y_s)² ds → 0.  (4.16)
For K^n, it is obvious that 0 ≤ K_T ≤ K^n_T ≤ K¹_T, so
E[(K_T)²] ≤ E[(K¹_T)²] ≤ C.
For n, m ∈ N such that n > m, applying Itô's formula to |Y^n − Y^m|², we obtain
|Y^n_t − Y^m_t|² + ∫_t^T |Z^n_s − Z^m_s|² ds  (4.17)
= |ξ_n − ξ_m|² + 2∫_t^T (Y^n_s − Y^m_s)(f(s, Y^n_s, Z^n_s) − f(s, Y^m_s, Z^m_s)) ds + 2∫_t^T (Y^n_s − Y^m_s) d(K^n_s − K^m_s) − 2∫_t^T (Y^n_s − Y^m_s)(Z^n_s − Z^m_s) dB_s
≤ |ξ_n − ξ_m|² + 2∫_t^T (Y^n_s − Y^m_s)(f(s, Y^n_s, Z^n_s) − f(s, Y^m_s, Z^m_s)) ds − 2∫_t^T (Y^n_s − Y^m_s)(Z^n_s − Z^m_s) dB_s,
since
∫_t^T (Y^n_s − Y^m_s) d(K^n_s − K^m_s) = ∫_t^T (Y^n_s − L_s) dK^n_s + ∫_t^T (Y^m_s − L_s) dK^m_s − ∫_t^T (Y^n_s − L_s) dK^m_s − ∫_t^T (Y^m_s − L_s) dK^n_s ≤ 0.
Using the assumption 4.4.1-(iii) and (iv), we get
|Y^n_t − Y^m_t|² + ∫_t^T |Z^n_s − Z^m_s|² ds  (4.18)
≤ |ξ_n − ξ_m|² + 2µ∫_t^T |Y^n_s − Y^m_s|² ds + 2A_0∫_t^T (Y^n_s − Y^m_s)(|Z^n_s| + |Z^m_s|) ds − 2∫_t^T (Y^n_s − Y^m_s)(Z^n_s − Z^m_s) dB_s.
Then, taking expectation and setting t = 0, with Hölder's inequality, we get
E∫_0^T |Z^n_s − Z^m_s|² ds ≤ E[|ξ_n − ξ_m|²] + 2µE∫_0^T |Y^n_s − Y^m_s|² ds + 4A_0(E∫_0^T (Y^n_s − Y^m_s)² ds)^{1/2} (E∫_0^T (|Z^n_s|² + |Z^m_s|²) ds)^{1/2}.
So Z^n is a Cauchy sequence in H²_d(0, T), in view of (4.16) and (4.15). It follows that there exists Z ∈ H²_d(0, T) such that Z^n → Z in H²_d(0, T) as n → ∞. Finally, by the continuity of f in y and z, we get
f(s, Y^n_s, Z^n_s) → f(s, Y_s, Z_s), a.s.,
and, by the assumption 4.4.1-(ii) and (4.13), it follows that
|f(s, Y^n_s, Z^n_s)| ≤ ϕ(e^{αT/2}√N_s) + A|Z′_s|,
where |Z^n_s| ≤ |Z′_s| for a subsequence, with Z′ ∈ H²_d(0, T). Then, by the dominated convergence theorem,
∫_t^T f(s, Y^n_s, Z^n_s) ds → ∫_t^T f(s, Y_s, Z_s) ds, a.s., when n → ∞.
So, by passing to the limit in (4.12), we get
Y_t = ξ + ∫_t^T f(s, Y_s, Z_s) ds + K_T − K_t − ∫_t^T Z_s dB_s,
and Y_t ≥ L_t follows directly from Y^n_t ≥ L_t, 0 ≤ t ≤ T.
We still need to prove the integral condition. For this, we first prove that Y^n and K^n converge in some stronger sense. First, since dK^n_t ≥ dK^{n+1}_t, we have for 0 ≤ t ≤ T,
0 ≤ K^n_t − K^{n+1}_t ≤ K^n_T − K^{n+1}_T.
It follows that
E[sup_{0≤t≤T} (K^n_t − K^{n+1}_t)²] ≤ E[(K^n_T − K^{n+1}_T)²] → 0,
as n → ∞. So K^n → K in A²(0, T). For n, m ∈ N with n > m, which implies Y^n_t ≥ Y^m_t, we obtain from (4.18) that
sup_{0≤t≤T} |Y^n_t − Y^m_t|² ≤ |ξ_n − ξ_m|² + 2A_0∫_0^T (Y^n_s − Y^m_s)(|Z^n_s| + |Z^m_s|) ds + 2µ∫_0^T |Y^n_s − Y^m_s|² ds + 2 sup_{0≤t≤T} |∫_t^T (Y^n_s − Y^m_s)(Z^n_s − Z^m_s) dB_s|.
Then, by Hölder's inequality and the BDG inequality, it follows that
E sup_{0≤t≤T} |Y^n_t − Y^m_t|² ≤ E|ξ_n − ξ_m|² + 4A_0(E∫_0^T (Y^n_s − Y^m_s)² ds)^{1/2} (E∫_0^T (|Z^n_s|² + |Z^m_s|²) ds)^{1/2} + 2µE∫_0^T |Y^n_s − Y^m_s|² ds + (1/2)E sup_{0≤t≤T} |Y^n_t − Y^m_t|² + CE∫_0^T |Z^n_s − Z^m_s|² ds.
So, letting n, m → ∞, we get that
E sup_{0≤t≤T} |Y^n_t − Y^m_t|² → 0,
i.e. Y^n converges to Y in S²(0, T), so Y is continuous. Since K^n tends to K uniformly in t in probability as n → ∞, the measure dK^n → dK weakly in probability as n → ∞, i.e.
∫_0^T (Y^n_t − L_t) dK^n_t → ∫_0^T (Y_t − L_t) dK_t, in probability.
While Y_t ≥ L_t, 0 ≤ t ≤ T, so ∫_0^T (Y_t − L_t) dK_t ≥ 0. On the other hand, ∫_0^T (Y^n_t − L_t) dK^n_t = 0, so ∫_0^T (Y_t − L_t) dK_t = 0.
Step 2. Let us now assume that ξ only satisfies 4.4.2, but L is still a non-positive process with sup_{0≤t≤T} |L_t| ≤ b.
Denote ξ_n = ξ ∨ (−n); then (ξ_n, f, L) satisfies the conditions of Step 1. So for all n, there exists a triple (Y^n, Z^n, K^n) which is a maximal solution of the RBSDE(ξ_n, f, L), i.e.
Y^n_t = ξ_n + ∫_t^T f(s, Y^n_s, Z^n_s) ds + K^n_T − K^n_t − ∫_t^T Z^n_s dB_s,  (4.19)
and Y^n_t ≥ L_t, ∫_0^T (Y^n_s − L_s) dK^n_s = 0. With the same argument as in Step 1, we deduce that
E[sup_{0≤t≤T} (Y^n_t)²] ≤ e^{αT} E(sup_{0≤t≤T} N_t) < +∞,  E∫_0^T (Z^n_s)² ds ≤ C.  (4.20)
Thanks to Proposition 4.5.3, we get
Y^n_t ≥ Y^{n+1}_t,  K^n_t ≤ K^{n+1}_t,  dK^n_t ≤ dK^{n+1}_t.
For the estimate of K^n, we rewrite (4.19) in forward form, square both sides and take expectation, to get
E[(K^n_T)²] ≤ 4(Y^n_0)² + 4E[(ξ_n)²] + 4E[(∫_0^T f(s, Y^n_s, Z^n_s) ds)²] + 4E∫_0^T (Z^n_s)² ds.
By the assumptions 4.4.1-(iii′) and (iv), with Y¹_t ≥ Y^n_t ≥ L_t, it follows that
f(s, Y^n_s, Z^n_s) = f(s, Y^n_s, Z^n_s) − f(s, Y^n_s, 0) + f(s, Y^n_s, 0)
≤ A_0|Z^n_s| + f(s, L_s, 0)
≤ A_0|Z^n_s| + ϕ(b),
and
f(s, Y^n_s, Z^n_s) = f(s, Y^n_s, Z^n_s) − f(s, Y¹_s, Z¹_s) + f(s, Y¹_s, Z¹_s)
≥ −A_0(|Z^n_s| + |Z¹_s|) + f(s, Y¹_s, Z¹_s).
So
E[(K^n_T)²] ≤ 4(Y^n_0)² + 4E[ξ²] + 8Tϕ²(b) + CE∫_0^T (Z^n_s)² ds + 8E[(∫_0^T f(s, Y¹_s, Z¹_s) ds)²] + CE∫_0^T (Z¹_s)² ds ≤ C.
Then we know that K^n_t ↗ K_t in L²(F_t). Now we are in the same situation as in Step 1, and we can prove that
E∫_0^T (Y^n_s − Y_s)² ds → 0,  E∫_0^T |Z^n_s − Z_s|² ds → 0,  (4.21)
and that the triple (Y, Z, K) satisfies the equation
Y_t = ξ + ∫_t^T f(s, Y_s, Z_s) ds + K_T − K_t − ∫_t^T Z_s dB_s.
The fact that Y_t ≥ L_t follows directly from Y^n_t ≥ L_t, 0 ≤ t ≤ T.
It remains to prove the integral condition. From dK^n_t ≤ dK^{n+1}_t, we still have
0 ≤ K^{n+1}_t − K^n_t ≤ K^{n+1}_T − K^n_T,
so by the same method we get ∫_0^T (Y_t − L_t) dK_t = 0.
Step 3. Now we consider the case where the barrier L only satisfies the assumption 4.1.3, i.e. sup_{0≤t≤T} |L_t| < +∞.
Set ξ̃ = ξ − b, f̃(t, y, z) = f(t, y + b, z), L̃_t = L_t − b; then f̃ satisfies the assumption 4.4.1, L̃_t ≤ 0, and ξ̃ satisfies the assumption 4.4.2. Indeed, if we set
g_1(x) = (x − b)²(log(x − b)²)⁺,  g_2(x) = x²(log x²)⁺,
then we know that g_1(x)/g_2(x) → 1 as x → ∞. So from the assumption 4.4.2, we deduce that
E[ξ̃²(log ξ̃²)⁺] = ∫_R (x − b)²(log(x − b)²)⁺ ν_ξ(dx) < +∞.
Then, using Step 2, we consider (Ỹ, Z̃, K̃), a maximal solution of the RBSDE(ξ̃, f̃, L̃). Let Y_t = Ỹ_t + b, Z_t = Z̃_t, K_t = K̃_t; it is easy to check that (Y, Z, K) is a solution of the RBSDE(ξ, f, L), i.e.
Y_t = Ỹ_t + b = ξ̃ + b + ∫_t^T f̃(s, Ỹ_s, Z̃_s) ds + K̃_T − K̃_t − ∫_t^T Z̃_s dB_s
= ξ + ∫_t^T f(s, Y_s, Z_s) ds + K_T − K_t − ∫_t^T Z_s dB_s,
and Y_t = Ỹ_t + b ≥ L̃_t + b = L_t,
∫_0^T (Y_t − L_t) dK_t = ∫_0^T ((Ỹ_t + b) − (L̃_t + b)) dK̃_t = ∫_0^T (Ỹ_t − L̃_t) dK̃_t = 0. □
4.5 Appendix : Comparison theorems
We first generalize the comparison theorem for RBSDEs with superlinear quadratic coefficient (cf. Proposition 3.2 in [44]) so as to compare the increasing processes. Assume that the assumptions 4.1.1 and 4.1.3 hold, and that the coefficient f satisfies:
Assumption 4.5.1. For all (t, ω), f(t, ω, ·, ·) is continuous and there exists a strictly positive function l such that
f(t, y, z) ≤ l(y) + A|z|²,  with ∫_0^∞ dx/l(x) = +∞.
Proposition 4.5.1. Suppose that ξ^i, f^i(s, y, z) and L^i, i = 1, 2, satisfy the assumptions 4.1.1, 4.5.1 and 4.1.3, and let the two triples (Y¹, Z¹, K¹), (Y², Z², K²) be respectively the maximal (resp. minimal) solutions of the RBSDE(ξ¹, f¹, L¹) and the RBSDE(ξ², f², L²). If, for all (t, y, z) ∈ [0, T] × R × R^d,
f¹(t, y, z) ≤ f²(t, y, z),  ξ¹ ≤ ξ²,  L¹_t ≤ L²_t,
then Y¹_t ≤ Y²_t. Moreover, if L¹ = L², then K¹_t ≥ K²_t and dK¹_t ≥ dK²_t, for t ∈ [0, T].
Proof. We consider first maximal solutions. From the proof of Theorem 1 in [44], we know that for i = 1, 2, (Y^i, Z^i, K^i) is the maximal solution of the RBSDE(ξ^i, f^i, L^i) if and only if (θ^i, J^i, Λ^i) is the maximal solution of the RBSDE(η^i, F^i, L̄^i), where
(θ^i, J^i, Λ^i) = (exp(2AY^i), 2∫_0^· A exp(2AY^i_s) dK^i_s, 2AZ^iθ^i)  (4.22)
and
η^i = exp(2Aξ^i),  L̄^i_t = exp(2AL^i_t),
F^i(t, x, λ) = 2Ax[f^i(t, (log x)/(2A), λ/(2Ax)) − |λ|²/(4Ax²)].
Then we use an approximation to construct a solution. For p ∈ N, we consider the RBSDE(η^i, F^i_p, L̄^i), where
F^i_p(s, x, λ) = g(ρ(x))(1 − κ_p(λ)) + κ_p(λ)F^i(s, ρ(x), λ).
Here g(x) = 2Ax l((log x)/(2A)), and κ_p(λ) and ρ(x) are smooth functions such that κ_p(λ) = 1 if |λ| ≤ p, κ_p(λ) = 0 if |λ| ≥ p + 1, and ρ(x) = x if x ∈ [r, R], ρ(x) = r/2 if x ∈ (0, r/2), ρ(x) = R if x ∈ (2R, +∞), where r and R are two constants. Since the F^i_p are bounded and continuous functions of (x, λ), the RBSDE(η^i, F^i_p, L̄^i) admits a bounded maximal solution (θ^{i,p}, J^{i,p}, Λ^{i,p}), with m ≤ θ^{i,p}_t ≤ V_0. Here
m and V_0 are the constants given in Theorem 2 in [44]. We know that F^i_p ↓ F̃^i as p → ∞, where F̃^i(s, x, λ) = F^i(s, ρ(x), λ). Thanks to the proof of Theorem 1 in [44], it follows that θ^{i,p}_t ↓ θ̃^i_t, J^{i,p}_t ↑ J̃^i_t, 0 ≤ t ≤ T, Λ^{i,p} → Λ̃^i in H²_d(0, T), and (θ̃^i, J̃^i, Λ̃^i) is the maximal solution of the RBSDE(η^i, F̃^i, L̄^i). In addition, m ≤ θ̃^i_t ≤ V_0. So, if we choose 0 < r < m and V_0 < R, then F̃^i = F^i. It follows that (θ̃^i, J̃^i, Λ̃^i) satisfies the RBSDE(η^i, F^i, L̄^i), i.e. (θ̃^i, J̃^i, Λ̃^i) = (θ^i, J^i, Λ^i).
Since f¹(t, y, z) ≤ f²(t, y, z), for (t, x, λ) ∈ [0, T] × R_+ × R^d we have F¹(t, x, λ) ≤ F²(t, x, λ). Then, for p ∈ N, F¹_p(s, x, λ) ≤ F²_p(s, x, λ). Notice that F^i_p is bounded and continuous in (x, λ) and θ^{i,p}_t > 0; with η¹ ≤ η² and L̄¹_t ≤ L̄²_t, by Lemma 2.1 in [44] it follows that θ^{1,p}_t ≤ θ^{2,p}_t. Thanks to the convergence results, we get that
θ¹_t ≤ θ²_t.
From (4.22), we know that
Y^i_t = (log θ^i_t)/(2A),
so Y¹_t ≤ Y²_t. Moreover, if L¹ = L², using again Lemma 2.1 in [44], we get that J^{1,p}_t ≥ J^{2,p}_t and dJ^{1,p}_t ≥ dJ^{2,p}_t, 0 ≤ t ≤ T. It follows that for 0 ≤ s ≤ t ≤ T, J^{1,p}_t − J^{1,p}_s ≥ J^{2,p}_t − J^{2,p}_s. Letting p → ∞, thanks to the convergence results, we get that
J¹_t ≥ J²_t,  J¹_t − J¹_s ≥ J²_t − J²_s,
which implies dJ¹_t ≥ dJ²_t. By (4.22), we obtain that
dK^i_t = dJ^i_t/(2Aθ^i_t),
so dK¹_t ≥ dK²_t, which implies K¹_t ≥ K²_t in view of K¹_0 = K²_0 = 0.
For the minimal solutions (Y^i, Z^i, K^i) of the RBSDE(ξ^i, f^i, L^i), we consider the transform used in the proof of Theorem 3 in [44]:
(θ̄^i, J̄^i, Λ̄^i) = (exp(−2AY^i), −2∫_0^· A exp(−2AY^i_s) dK^i_s, −2AZ^iθ̄^i),  (4.23)
where (θ̄^i, J̄^i, Λ̄^i) is the maximal solution of the reflected BSDE(η̄^i, F̄^i) with the upper barrier L^{i∗}. Here
η̄^i = exp(−2Aξ^i),  L^{i∗}_t = exp(−2AL^i_t),
F̄^i(t, x, λ) = 2Ax[−f^i(t, −(log x)/(2A), −λ/(2Ax)) − |λ|²/(4Ax²)].
Then, with similar arguments, we use (θ̄^{i,p}, J̄^{i,p}, Λ̄^{i,p}) to approximate (θ̄^i, J̄^i, Λ̄^i), where (θ̄^{i,p}, J̄^{i,p}, Λ̄^{i,p}) is the maximal solution of the RBSDE(η̄^i, F̄^i_p, L^{i∗}), with
F̄^i_p(s, x, λ) = g(ρ(x))(1 − κ_p(λ)) + κ_p(λ)F̄^i(s, ρ(x), λ).
Here g(x) = 2Ax l(−(log x)/(2A)). Since f¹(t, y, z) ≤ f²(t, y, z), we get F̄¹(t, x, λ) ≥ F̄²(t, x, λ), and then F̄¹_p(t, x, λ) ≥ F̄²_p(t, x, λ). With η̄¹ ≥ η̄² and L^{1∗}_t ≥ L^{2∗}_t, by Lemma 2.1 in [44] we get θ̄^{1,p}_t ≥ θ̄^{2,p}_t. Then, letting p → ∞, we get
θ̄¹_t ≥ θ̄²_t,
so
Y¹_t ≤ Y²_t.
Notice that
dK^i_t = dJ̄^i_t/(−2Aθ̄^i_t);
we obtain the comparison result for K^i from that for J̄^i when L¹ = L². □
From this result, we deduce the following comparison theorem when the coefficient f satisfies a monotonicity and general increasing condition in y and is quadratically increasing in z.
Proposition 4.5.2. Suppose that ξ^i and f^i(s, y, z), i = 1, 2, satisfy the assumptions 4.1.1 and 4.1.2, and that L satisfies the assumption 4.1.3. Let the two triples (Y¹, Z¹, K¹), (Y², Z², K²) be respectively the solutions of the RBSDE(ξ¹, f¹, L) and the RBSDE(ξ², f², L). If, for all (t, y, z) ∈ [0, T] × R × R^d,
f¹(t, y, z) ≤ f²(t, y, z),  ξ¹ ≤ ξ²,
then Y¹_t ≤ Y²_t, K¹_t ≥ K²_t and dK¹_t ≥ dK²_t, for t ∈ [0, T].
Proof. First, with the change of variables
(Y^b, Z^b, K^b) = (Y − b, Z, K),
we may work with a barrier L^b = L − b ≤ 0. Since this transformation does not change the monotonicity, in the following we assume that the barrier L is a negative bounded process. As in the proof of Theorem 4.2.1, for C ∈ R_+, let g_C : R → [0, 1] be continuous and satisfy (4.1). Set f^C_i(t, y, z) = g_C(y)f^i(t, y, z), i = 1, 2, which satisfies the assumption 4.5.1 with l_i(y) = ϕ_i(|2C|). We consider the solutions (Y^{i,C}, Z^{i,C}, K^{i,C}) of the RBSDE(ξ^i, f^C_i, L^b) respectively. Using Proposition 4.5.1, since
f^C_1(t, y, z) ≤ f^C_2(t, y, z),  ξ¹ ≤ ξ²,
we get, for t ∈ [0, T],
Y^{1,C}_t ≤ Y^{2,C}_t,  dK^{1,C}_t ≥ dK^{2,C}_t.
Then, by the boundedness of Y^i, we choose C big enough, as in the proof of Theorem 4.2.1, and it follows immediately that
Y¹_t ≤ Y²_t,  dK¹_t ≥ dK²_t,  ∀t ∈ [0, T]. □
Proposition 4.5.3. Suppose that ξ^i and f^i(s, y, z), i = 1, 2, satisfy the assumptions 4.4.2 and 4.4.1, and that L satisfies the assumption 4.1.3. In addition, assume that there exists a real number a ∈ R such that ξ^i ≥ a, and that L ≤ 0. Let the two triples (Y¹, Z¹, K¹), (Y², Z², K²) be respectively solutions of the RBSDE(ξ¹, f¹, L) and the RBSDE(ξ², f², L). If, for all (t, y, z) ∈ [0, T] × R × R^d,
f¹(t, y, z) ≤ f²(t, y, z),  ξ¹ ≤ ξ²,
then Y¹_t ≤ Y²_t, K¹_t ≥ K²_t and dK¹_t ≥ dK²_t, for t ∈ [0, T].
Proof. Set ξ^{i,n} = ξ^i ∧ n for n ∈ N, i = 1, 2; then ξ^{i,n} is bounded. By Theorem 4.2.1, we know that the RBSDE(ξ^{i,n}, f^i, L) has a bounded maximal solution (Y^{i,n}, Z^{i,n}, K^{i,n}). From Proposition 4.5.2, we know that Y^{1,n}_t ≤ Y^{2,n}_t, K^{1,n}_t ≥ K^{2,n}_t and dK^{1,n}_t ≥ dK^{2,n}_t, which implies, for 0 ≤ s ≤ t ≤ T, K^{1,n}_t − K^{1,n}_s ≥ K^{2,n}_t − K^{2,n}_s. Thanks to the convergence results in Step 1 of the proof of Theorem 4.4.1, we know that Y^{i,n} → Y^i in S²(0, T) and K^{i,n} → K^i in A²(0, T), where (Y^i, Z^i, K^i) is the solution of the RBSDE(ξ^i, f^i, L). It follows that, for 0 ≤ s ≤ t ≤ T,
Y¹_t ≤ Y²_t,  K¹_t ≥ K²_t,  K¹_t − K¹_s ≥ K²_t − K²_s. □
Proposition 4.5.4. Suppose that ξ^i and f^i(s, y, z), i = 1, 2, satisfy the assumptions 4.4.2 and 4.4.1, and that L satisfies the assumption 4.1.3. Let the two triples (Y¹, Z¹, K¹), (Y², Z², K²) be respectively solutions of the RBSDE(ξ¹, f¹, L) and the RBSDE(ξ², f², L). If, for all (t, y, z) ∈ [0, T] × R × R^d,
f¹(t, y, z) ≤ f²(t, y, z),  ξ¹ ≤ ξ²,
then Y¹_t ≤ Y²_t, K¹_t ≥ K²_t and dK¹_t ≥ dK²_t, for t ∈ [0, T].
Proof. Set ξ^{i,n} = ξ^i ∨ (−n) for n ∈ N, i = 1, 2; then ξ^{i,n} satisfies the assumptions of Proposition 4.5.3. By Step 1 of the proof of Theorem 4.4.1, we know that the RBSDE(ξ^{i,n}, f^i, L) has a maximal solution (Y^{i,n}, Z^{i,n}, K^{i,n}). From Proposition 4.5.3, we know that Y^{1,n}_t ≤ Y^{2,n}_t, K^{1,n}_t ≥ K^{2,n}_t and dK^{1,n}_t ≥ dK^{2,n}_t, which implies, for 0 ≤ s ≤ t ≤ T, K^{1,n}_t − K^{1,n}_s ≥ K^{2,n}_t − K^{2,n}_s. Thanks to the convergence results in Step 2 of the proof of Theorem 4.4.1, we know that Y^{i,n} → Y^i in S²(0, T) and K^{i,n} → K^i in A²(0, T), where (Y^i, Z^i, K^i) is the solution of the RBSDE(ξ^i, f^i, L). It follows that, for 0 ≤ s ≤ t ≤ T,
Y¹_t ≤ Y²_t,  K¹_t ≥ K²_t,  K¹_t − K¹_s ≥ K²_t − K²_s. □
Chapitre 5
Reflected backward SDEs with two barriers under monotonicity condition
In this chapter, we consider the reflected BSDE with two continuous barriers, with f under monotonicity and general increasing conditions in y, and prove the existence and uniqueness of the solution. This chapter is organized as follows: in subsection 5.1.1, we present notation and assumptions; then we prove the main results of this chapter, the existence and uniqueness of the solution, in subsection 5.1.2; in subsection 5.1.3 we prove an important theorem for the existence in five steps. Finally, in section 5.2, we prove several comparison theorems for RBSDEs with one or two barriers, which are used in the proof of existence.
5.1 RBSDE’s with two continuous barriers
5.1.1 Assumptions and notations
Assume that (Ω, F, P) is a complete probability space, equipped with a d-dimensional Brownian motion (B_t)_{0≤t≤T} = (B¹_t, B²_t, ..., B^d_t)′_{0≤t≤T}, defined on a finite interval [0, T], 0 < T < +∞. Denote by {F_t; 0 ≤ t ≤ T} the natural filtration generated by the Brownian motion B: F_t = σ{B_s; 0 ≤ s ≤ t}, where F_0 contains all P-null sets of F. We denote by P the σ-algebra of predictable sets on [0, T] × Ω.
We recall the notations of the spaces from Chapter 1: L²(F_t), H²_d(0, T), S²(0, T) and A²(0, T). Let us consider the reflected backward stochastic differential equation with monotonicity condition in y on a fixed time interval; we need the following assumptions:
Assumption 5.1.1. a final condition ξ ∈ L2(FT ),
Assumption 5.1.2. a coefficient f : Ω × [0, T] × R × R^d → R such that, for some continuous increasing function ϕ : R_+ → R_+ and real numbers µ ∈ R and k > 0, ∀t ∈ [0, T], y, y′ ∈ R, z, z′ ∈ R^d:
(i) f(·, y, z) is progressively measurable;
(ii) |f(t, y, z)| ≤ |f(t, 0, z)| + ϕ(|y|), a.s.;
(iii) E∫_0^T |f(t, 0, 0)|² dt < ∞;
(iv) |f(t, y, z) − f(t, y, z′)| ≤ k|z − z′|, a.s.;
(v) (y − y′)(f(t, y, z) − f(t, y′, z)) ≤ µ|y − y′|², a.s.;
(vi) y → f(t, y, z) is continuous, a.s.
Assumption 5.1.3. two barriers L_t, U_t, which are F_t-progressively measurable continuous processes defined on the interval [0, T], satisfying
(i) E[ϕ²(sup_{0≤t≤T}(e^{µt}(L_t)⁺))] + E[ϕ²(sup_{0≤t≤T}(e^{µt}(U_t)⁻))] < ∞,
and (L)⁺, (U)⁻ ∈ S²(0, T), L_T ≤ ξ ≤ U_T, a.s., where (L)⁺ (resp. (U)⁻) is the positive (resp. negative) part of L (resp. U);
(ii) there exists a process J_t = J_0 + ∫_0^t φ_s dB_s − V⁺_t + V⁻_t, J_T = ξ, with φ ∈ H²_d(0, T) and V⁺, V⁻ ∈ A²(0, T), such that
L_t ≤ J_t ≤ U_t, for 0 ≤ t ≤ T;
(iii) L_t < U_t, a.s., for 0 ≤ t < T.
Now we introduce the definition of the solution of RBSDE with two barriers L and U .
Definition 5.1.1. We say that (Y, Z, K) is a solution of the backward stochastic differential equation with two continuous reflecting barriers L(·) and U(·), terminal condition ξ and coefficient f, denoted RBSDE(ξ, f, L, U), if the following hold:
(1) Y ∈ S²(0, T), Z ∈ H²_d(0, T), and K = K⁺ − K⁻, where K± ∈ A²(0, T);
(2) Y_t = ξ + ∫_t^T f(s, Y_s, Z_s) ds + K⁺_T − K⁺_t − (K⁻_T − K⁻_t) − ∫_t^T Z_s dB_s, 0 ≤ t ≤ T, a.s.;
(3) L_t ≤ Y_t ≤ U_t, 0 ≤ t ≤ T, a.s.;
(4) ∫_0^T (Y_s − L_s) dK⁺_s = ∫_0^T (Y_s − U_s) dK⁻_s = 0, a.s.
Actually, a general solution of our RBSDE(ξ, f, L, U) satisfies conditions (1) to (4). The state process Y(·) is forced to stay between the barriers L(·) and U(·), thanks to the cumulative action of the reflection processes K⁺(·) and K⁻(·) respectively, which act only when necessary to prevent Y(·) from crossing the respective barrier; in this sense their action can be considered minimal, which is the meaning of the integral condition (4). From the fact that K± ∈ A²(0, T) is continuous, together with (2), it follows that Y is continuous.
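To make the reflection mechanism concrete, here is a toy deterministic discretization (our own illustration; the function names and the scheme are ours, not the thesis's): taking Z ≡ 0 and a generator f(t, y), each backward Euler step is followed by a projection onto [L_t, U_t], and the upward (resp. downward) correction is recorded as an increment of K⁺ (resp. K⁻), so the discrete analogues of conditions (3) and (4) of Definition 5.1.1 hold at every grid point by construction.

```python
import numpy as np

def solve_two_barrier(xi, f, L, U, T, n):
    """Backward Euler step + projection onto [L, U] (toy sketch, Z = 0).

    Returns the time grid, Y, and the per-step increments of the
    increasing reflection processes K+ and K-.
    """
    ts = np.linspace(0.0, T, n + 1)
    dt = T / n
    Y = np.empty(n + 1)
    dKp = np.zeros(n + 1)   # increments of K+ (pushes Y up onto L)
    dKm = np.zeros(n + 1)   # increments of K- (pushes Y down onto U)
    Y[n] = xi
    for i in range(n - 1, -1, -1):
        y_free = Y[i + 1] + f(ts[i], Y[i + 1]) * dt    # unreflected step
        Y[i] = min(max(y_free, L(ts[i])), U(ts[i]))    # projection onto [L, U]
        dKp[i] = max(L(ts[i]) - y_free, 0.0)           # > 0 only if Y[i] == L
        dKm[i] = max(y_free - U(ts[i]), 0.0)           # > 0 only if Y[i] == U
    return ts, Y, dKp, dKm

# With f = 1, Y grows backward from xi = 0 until it reaches the upper
# barrier 0.5, after which K- absorbs the excess drift.
ts, Y, dKp, dKm = solve_two_barrier(
    xi=0.0, f=lambda t, y: 1.0, L=lambda t: -1.0, U=lambda t: 0.5, T=1.0, n=10)
```

The minimality of the reflection shows up as the complementarity of condition (4): a positive increment of K⁺ (resp. K⁻) is generated exactly when the projection lands on L (resp. U), so (Y_t − L_t)dK⁺_t = (Y_t − U_t)dK⁻_t = 0 at every grid point.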
We first present a result which is the analogue of Proposition 4.1 in [19] for the Lipschitz case. Precisely, we show that the square-integrable solution Y of the RBSDE(ξ, f, L, U) corresponds to the value of a Dynkin game problem.
Proposition 5.1.1. Let (Y_t, Z_t, K_t), 0 ≤ t ≤ T, be a solution of the RBSDE(ξ, f, L, U). For each t ∈ [0, T] and any stopping times σ, τ in T_t, where T_t = {τ : τ is an F_t-stopping time such that t ≤ τ ≤ T}, consider the payoff
R_t(σ, τ) = ∫_t^{σ∧τ} f(s, Y_s, Z_s) ds + ξ1_{σ∧τ=T} + L_τ1_{τ<T, τ≤σ} + U_σ1_{σ<τ},
as well as the upper and lower values, respectively,
V̄_t = ess inf_{σ∈T_t} ess sup_{τ∈T_t} E[R_t(σ, τ)|F_t],
V̲_t = ess sup_{τ∈T_t} ess inf_{σ∈T_t} E[R_t(σ, τ)|F_t],
of the corresponding stochastic game. This game has a value V_t, given by the state process Y_t, solution of the RBSDE, i.e.
V_t = V̄_t = V̲_t = Y_t, a.s.,
as well as a saddle point (σ̂_t, τ̂_t) ∈ T_t × T_t given by
σ̂_t = inf{s ∈ [t, T); Y_s = U_s} ∧ T,
τ̂_t = inf{s ∈ [t, T); Y_s = L_s} ∧ T,
namely
E[R_t(σ̂_t, τ)|F_t] ≤ E[R_t(σ̂_t, τ̂_t)|F_t] = Y_t ≤ E[R_t(σ, τ̂_t)|F_t],  (5.1)
a.s., for any (σ, τ) ∈ T_t × T_t.
Proof. It suffices to prove (5.1). We first take σ = σ̂_t and an arbitrary τ ∈ T_t; then Y_{σ̂_t} = U_{σ̂_t} and K⁻_{σ̂_t} = K⁻_t on the event {σ̂_t < τ}. Thus, on this event, we have
R_t(σ̂_t, τ) ≤ ∫_t^{σ̂_t} f(s, Y_s, Z_s) ds + Y_{σ̂_t} − (K⁻_{σ̂_t} − K⁻_t)
≤ ∫_t^{σ̂_t} f(s, Y_s, Z_s) ds + Y_{σ̂_t} + (K⁺_{σ̂_t} − K⁺_t) − (K⁻_{σ̂_t} − K⁻_t)
= Y_t + ∫_t^{σ̂_t} Z_u dB_u,
with equality if τ = τ̂_t. On the set {τ ≤ σ̂_t}, we have
R_t(σ̂_t, τ) = ∫_t^τ f(s, Y_s, Z_s) ds + ξ1_{τ=T} + L_τ1_{τ<T} − (K⁻_τ − K⁻_t)
≤ ∫_t^τ f(s, Y_s, Z_s) ds + ξ1_{τ=T} + Y_τ1_{τ<T} + (K⁺_τ − K⁺_t) − (K⁻_τ − K⁻_t)
= Y_t + ∫_t^τ Z_u dB_u,
with equality if τ = τ̂_t. Now we take the conditional expectation with respect to F_t; then, ∀τ ∈ T_t,
E[R_t(σ̂_t, τ)|F_t] ≤ Y_t = E[R_t(σ̂_t, τ̂_t)|F_t], a.s.  (5.2)
Conversely, we consider the stopping time τ = τ̂_t and an arbitrary σ ∈ T_t; by the same arguments as before, we get
R_t(σ, τ̂_t) ≥ Y_t + ∫_t^{σ∧τ̂_t} Z_u dB_u,
with equality if σ = σ̂_t, and
E[R_t(σ, τ̂_t)|F_t] ≥ Y_t = E[R_t(σ̂_t, τ̂_t)|F_t], a.s.,  (5.3)
by taking conditional expectation as before. Now (5.1) follows from (5.2) and (5.3). □
5.1.2 Main results
Our main result in this chapter is the following:
Theorem 5.1.1. Under the assumptions 5.1.1, 5.1.2 and 5.1.3, the RBSDE(ξ, f, L, U) has a unique solution (Y, Z, K), which satisfies Definition 5.1.1 (1)-(4).
Proof. Uniqueness. Suppose that the triples (Y, Z, K) and (Y′, Z′, K′) are two solutions of the RBSDE(ξ, f, L, U), i.e. they satisfy (1)-(4). Set ∆Y = Y − Y′, ∆Z = Z − Z′, ∆K = ∆K⁺ − ∆K⁻,
with ∆K⁺ = K⁺ − K⁺′ and ∆K⁻ = K⁻ − K⁻′. Applying Itô's formula to ∆Y² on the interval [t, T] and taking expectation on both sides, it follows that
E|∆Y_t|² + E∫_t^T |∆Z_s|² ds = 2E∫_t^T ∆Y_s(f(s, Y_s, Z_s) − f(s, Y′_s, Z′_s)) ds + 2E∫_t^T ∆Y_s d∆K_s
≤ 2kE∫_t^T |∆Y_s||∆Z_s| ds + 2µE∫_t^T ∆Y_s² ds
≤ 2(k² + µ)E∫_t^T ∆Y_s² ds + (1/2)E∫_t^T |∆Z_s|² ds,
using the monotonicity assumption in y, the Lipschitz assumption in z, and
∫_t^T ∆Y_s d∆K_s = ∫_t^T ∆Y_s d∆K⁺_s − ∫_t^T ∆Y_s d∆K⁻_s
= ∫_t^T (Y_s − L_s) dK⁺_s + ∫_t^T (Y′_s − L_s) dK⁺′_s − ∫_t^T (Y_s − L_s) dK⁺′_s − ∫_t^T (Y′_s − L_s) dK⁺_s − ∫_t^T (Y_s − U_s) dK⁻_s − ∫_t^T (Y′_s − U_s) dK⁻′_s + ∫_t^T (Y_s − U_s) dK⁻′_s + ∫_t^T (Y′_s − U_s) dK⁻_s
≤ 0.
We get
E|∆Y_t|² ≤ 2(k² + µ)E∫_t^T ∆Y_s² ds.
From Gronwall's inequality, it follows that E|∆Y_t|² = E|Y_t − Y′_t|² = 0, 0 ≤ t ≤ T, i.e. Y_t = Y′_t a.s.; then we also have E∫_0^T |∆Z_s|² ds = E∫_0^T |Z_s − Z′_s|² ds = 0, and K_t = K′_t.
Existence. We first present the following existence theorem for the case when f does not depend on z, which will be proved a little later.
Theorem 5.1.2. Suppose that ξ, f and L, U satisfy the assumptions 5.1.1, 5.1.2 and 5.1.3. Then, for any process Q ∈ H²_d(0, T), there exists a unique triple of progressively measurable processes (Y_t, Z_t, K_t)_{0≤t≤T} such that Y ∈ S²(0, T), Z ∈ H²_d(0, T), K = K⁺ − K⁻ with K± ∈ A²(0, T), which satisfies Definition 5.1.1 (1), (3), (4) and
Y_t = ξ + ∫_t^T f(s, Y_s, Q_s) ds + K⁺_T − K⁺_t − (K⁻_T − K⁻_t) − ∫_t^T Z_s dB_s, 0 ≤ t ≤ T.
Thanks to Theorem 5.1.2, we can construct a mapping Φ from S into itself, where S is defined as the space of progressively measurable processes {(Y_t, Z_t); 0 ≤ t ≤ T}, valued in R × R^d, which satisfy (1), as follows.
Given (P, Q) ∈ S, (Y, Z) = Φ(P, Q) is the unique solution of the following RBSDE:
Y_t = ξ + ∫_t^T f(s, Y_s, Q_s) ds + K_T − K_t − ∫_t^T Z_s dB_s,
i.e., if we define the process
K_t = Y_0 − Y_t − ∫_0^t f(s, Y_s, Q_s) ds + ∫_0^t Z_s dB_s, 0 ≤ t ≤ T,
then the triple (Y, Z, K) satisfies Definition 5.1.1 (1)-(4), with f(s, y, z) = f(s, y, Q_s).
Consider another element (P′, Q′) of S, and define (Y′, Z′) = Φ(P′, Q′); set
∆P = P − P′, ∆Q = Q − Q′, ∆Y = Y − Y′, ∆Z = Z − Z′,
∆K = ∆K⁺ − ∆K⁻, ∆K⁺ = K⁺ − K⁺′, ∆K⁻ = K⁻ − K⁻′.
Then we apply Itô's formula to e^{γt}|∆Y_t|² on the interval [t, T], for γ > 0:
e^{γt}E|∆Y_t|² + E∫_t^T e^{γs}(γ|∆Y_s|² + |∆Z_s|²) ds
= 2E∫_t^T e^{γs}∆Y_s(f(s, Y_s, Q_s) − f(s, Y′_s, Q′_s)) ds + 2E∫_t^T e^{γs}∆Y_s d∆K_s
≤ 2(k² + µ)E∫_t^T e^{γs}|∆Y_s|² ds + (1/2)E∫_t^T e^{γs}|∆Q_s|² ds,
since
∫_t^T e^{γs}∆Y_s d∆K_s = ∫_t^T e^{γs}∆Y_s d∆K⁺_s − ∫_t^T e^{γs}∆Y_s d∆K⁻_s
= ∫_t^T e^{γs}(Y_s − L_s) dK⁺_s + ∫_t^T e^{γs}(Y′_s − L_s) dK⁺′_s − ∫_t^T e^{γs}(Y_s − L_s) dK⁺′_s − ∫_t^T e^{γs}(Y′_s − L_s) dK⁺_s − ∫_t^T e^{γs}(Y_s − U_s) dK⁻_s − ∫_t^T e^{γs}(Y′_s − U_s) dK⁻′_s + ∫_t^T e^{γs}(Y_s − U_s) dK⁻′_s + ∫_t^T e^{γs}(Y′_s − U_s) dK⁻_s
≤ 0.
Hence, if we choose γ = 1 + 2(k² + µ), it follows that
E∫_t^T e^{γs}(|∆Y_s|² + |∆Z_s|²) ds ≤ (1/2)E∫_t^T e^{γs}|∆Q_s|² ds
≤ (1/2)E∫_t^T e^{γs}(|∆P_s|² + |∆Q_s|²) ds.
Consequently, Φ is a strict contraction on S equipped with the norm
‖(Y, Z)‖γ =[E
∫ T
0eγs(|Ys|2 + |Zs|2)ds
] 12
,
and has a fixed point, which is the unique solution of the RBSDE(ξ, f, L, U). ¤
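The fixed-point argument can be illustrated numerically. The sketch below is a deterministic toy (no Brownian part, Z ≡ 0, no barriers), so the map reduces to Φ(p)(t) = ξ + ∫_t^T f(s, p(s)) ds. The generator f(s,y) = sin(y), the horizon T = 0.5 and the terminal value ξ = 1 are illustrative choices, not data from the thesis; with Lipschitz constant 1 and T = 0.5 the map is a 1/2-contraction in the sup norm, so Picard iterates converge geometrically.

```python
import numpy as np

# Deterministic toy version of the map Phi from the proof:
# Phi(p)(t) = xi + integral_t^T f(s, p(s)) ds, here with f(s, y) = sin(y).
T, xi, N = 0.5, 1.0, 1000
t = np.linspace(0.0, T, N + 1)
dt = T / N

def phi(p):
    f = np.sin(p)
    # tail integral int_t^T f(s, p(s)) ds via a right-point rule
    tail = np.concatenate([np.cumsum((f[1:] * dt)[::-1])[::-1], [0.0]])
    return xi + tail

p = np.zeros(N + 1)
diffs = []
for _ in range(8):
    q = phi(p)
    diffs.append(np.max(np.abs(q - p)))
    p = q

# successive sup-norm differences shrink roughly by the contraction factor
print(diffs[:4])
```

The weighted norm ‖·‖_γ in the proof plays the same role as shrinking T here: it makes the contraction factor strictly less than 1 on the whole interval.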
5.1.3 Proof of theorem 5.1.2

We now prove theorem 5.1.2 in several steps, establishing the existence of the solution. We write f(s,y) for f(s,y,Q_s). First we note that the triple (Y,Z,K) solves the RBSDE(ξ, f, L, U), K = K⁺ − K⁻, if and only if

(Ȳ_t, Z̄_t, K̄⁺_t, K̄⁻_t) := (e^{λt} Y_t, e^{λt} Z_t, ∫_0^t e^{λs} dK⁺_s, ∫_0^t e^{λs} dK⁻_s)   (5.4)

solves the RBSDE(ξ̄, f̄, L̄, Ū), where

(ξ̄, f̄(t,y), L̄_t, Ū_t) = (ξ e^{λT}, e^{λt} f(t, e^{−λt} y) − λy, e^{λt} L_t, e^{λt} U_t).
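The equivalence claimed in (5.4) is a short application of Itô's product rule; a verification, using only the equation satisfied by (Y, Z, K⁺, K⁻):

```latex
\begin{aligned}
d\bar Y_t &= \lambda e^{\lambda t} Y_t\,dt + e^{\lambda t}\,dY_t
  = \lambda \bar Y_t\,dt
    + e^{\lambda t}\bigl(-f(t,Y_t)\,dt - dK^+_t + dK^-_t + Z_t\,dB_t\bigr)\\
 &= -\bigl(e^{\lambda t} f(t, e^{-\lambda t}\bar Y_t) - \lambda \bar Y_t\bigr)\,dt
    - d\bar K^+_t + d\bar K^-_t + \bar Z_t\,dB_t ,
\end{aligned}
```

with Ȳ_T = e^{λT}ξ = ξ̄; moreover Ȳ_t ≥ L̄_t ⟺ Y_t ≥ L_t, and ∫_0^T (Ȳ_t − L̄_t) dK̄⁺_t = ∫_0^T e^{2λt}(Y_t − L_t) dK⁺_t = 0, so the flat-off conditions are preserved (and similarly for the upper barrier).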
128 Chapitre 5. RBSDE with two barriers
If we choose λ = µ, then the coefficient f̄ satisfies the same hypotheses in assumption 5.1.2 as f, but with (v) replaced by

(v′) (y − y′)(f̄(t,y,z) − f̄(t,y′,z)) ≤ 0.

Since we are in the 1-dimensional case, (v′) means that f̄ is decreasing in y. On the other hand, the barriers L̄, Ū satisfy:

Assumption 5.1.3 (i′):

E[ sup_{0≤t≤T} (L̄_t)⁺ ] < ∞,  E[ sup_{0≤t≤T} (Ū_t)⁻ ] < ∞,
E[ϕ²( sup_{0≤t≤T} (L̄_t)⁺ )] = E[ϕ²( sup_{0≤t≤T} (e^{µt}(L_t)⁺) )] < ∞,
E[ϕ²( sup_{0≤t≤T} (Ū_t)⁻ )] = E[ϕ²( sup_{0≤t≤T} (e^{µt}(U_t)⁻) )] < ∞.
In the following, we shall work with assumption 5.1.2′, which is 5.1.2 with (v) replaced by (v′).

Proof of theorem 5.1.2: First, let us recall the assumptions on the coefficient f:

Assumption 5.1.2′
(ii′) |f(s,y)| ≤ |f(s,0,0)| + k|Q_s| + ϕ(|y|);
(iii′) E ∫_0^T |f(t,0)|² dt < ∞;
(v″) (y − y′)(f(s,y) − f(s,y′)) ≤ 0;
(vi′) y → f(s,y) is continuous, ∀s ∈ [0,T], a.s.

We point out that we always denote by c > 0 a constant whose value can change from line to line. The proof proceeds in the following five steps.
– Using a penalization method, we prove existence under the assumption

|ξ| + sup_{0≤t≤T} |f(t,0)| + sup_{0≤t≤T} L⁺_t + sup_{0≤t≤T} U⁻_t ≤ c.   (5.5)

– Approximating the lower barrier L, we prove existence when L satisfies assumption 5.1.3-(i′) and the boundedness assumption on ξ, f(t,0) and sup_{0≤t≤T} U⁻_t holds.
– As in the preceding step, we approximate the upper barrier U to prove existence under assumption 5.1.3-(i′) when ξ and f(t,0) satisfy

|ξ|² + sup_{0≤t≤T} |f(t,0)|² ≤ c.   (5.6)

– By approximation, we prove existence of the solution under the assumption ξ ≥ c, inf_{0≤t≤T} f(t,0) ≥ c.
– Finally, we prove existence of the solution under the assumption ξ ∈ L²(F_T), f(t,0) ∈ H²(0,T), by approximation.
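Before entering the proof, the effect of the double penalization used in step 1 can be seen on a deterministic toy model (Z ≡ 0): the two penalty terms of (5.7) push the solution back inside the band [L, U], with a violation of the constraint of order 1/n. All concrete values below (f(t,y) = 2 − y, constant barriers L = −0.5, U = 0.5, terminal value ξ = 0.4) are illustrative choices, not data from the thesis.

```python
import numpy as np

def penalized_backward(m, n, T=1.0, N=2000, xi=0.4, L=-0.5, U=0.5):
    """Explicit backward Euler for the deterministic analogue of (5.7):
    Y_t = xi + int_t^T [ f(s,Y_s) + m (Y_s - L)^- - n (U - Y_s)^- ] ds,
    with f(t, y) = 2 - y (decreasing in y, as required by (v')).
    The term -n (U - Y)^- = -n (Y - U)^+ pulls Y down when Y > U;
    +m (Y - L)^- = +m (L - Y)^+ pushes Y up when Y < L."""
    dt = T / N
    Y = np.empty(N + 1)
    Y[N] = xi
    for i in range(N - 1, -1, -1):
        y = Y[i + 1]
        drift = (2.0 - y) + m * max(L - y, 0.0) - n * max(y - U, 0.0)
        Y[i] = y + dt * drift
    return Y

# Without the upper penalty, the drift f = 2 - y drags Y above U = 0.5;
# with penalty n, the excursion above U is of order 1/n.
Y_50 = penalized_backward(m=50, n=50)
Y_500 = penalized_backward(m=500, n=500)
print(Y_50.max(), Y_500.max())
```

Increasing m and n shrinks the constraint violation, which mirrors the convergence K^{m,n,±} → K^± of the reflecting processes proved below.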
Step 1. Consider the penalization equations with respect to the two barriers L, U: for m, n ∈ N,

Y^{m,n}_t = ξ + ∫_t^T f(s, Y^{m,n}_s) ds + m ∫_t^T (Y^{m,n}_s − L_s)⁻ ds − n ∫_t^T (U_s − Y^{m,n}_s)⁻ ds − ∫_t^T Z^{m,n}_s dB_s.   (5.7)

Set f^{m,n}(s,y) = f(s,y) + m(y − L_s)⁻ − n(U_s − y)⁻; we need to check that f^{m,n} satisfies the conditions of Proposition 2.4 in [57]. First,

|f^{m,n}(s,y)| = |f(s,y) + m(y − L_s)⁻ − n(U_s − y)⁻|
≤ |f^{m,n}(s,0)| + |f(s,y)| + |f(s,0)| + m|(y − L_s)⁻ − L⁺_s| + n|U⁻_s − (U_s − y)⁻|
≤ |f^{m,n}(s,0)| + 2|f(s,0)| + k|Q_s| + ϕ(|y|) + 2(m+n)|y|
≤ |f^{m,n}(s,0)| + ϕ^{m,n}(|y|),
where ϕ^{m,n} is an increasing continuous function from R⁺ to R⁺. Then the integrability condition follows easily from

E ∫_0^T |f^{m,n}(t,0)|² dt ≤ 2E ∫_0^T |f(t,0)|² dt + 2m²T E[ sup_{0≤t≤T} (L⁺_t)² ] + 2n²T E[ sup_{0≤t≤T} (U⁻_t)² ] < ∞,

and we have the monotonicity condition

(y − y′)(f^{m,n}(s,y) − f^{m,n}(s,y′))
= (y − y′)(f(s,y) − f(s,y′)) + m(y − y′)((y − L_s)⁻ − (y′ − L_s)⁻) − n(y − y′)((U_s − y)⁻ − (U_s − y′)⁻)
≤ 0,

since y → (y − L_s)⁻ is non-increasing and y → (U_s − y)⁻ is non-decreasing. Obviously, y → f^{m,n}(s,y) is continuous for s ∈ [0,T]. So by Proposition 2.4 in [57], there exists (Y^{m,n}_t, Z^{m,n}_t)_{0≤t≤T}, which is the solution of (5.7). Denote K^{m,n,+}_t = m ∫_0^t (Y^{m,n}_s − L_s)⁻ ds, K^{m,n,−}_t = n ∫_0^t (U_s − Y^{m,n}_s)⁻ ds.
Now let us establish uniform a priori estimates for (Y^{m,n}, Z^{m,n}, K^{m,n,+}, K^{m,n,−}). Consider the RBSDE(ξ, f, L) with one lower barrier L; due to theorem 3.1.2 in section 3.1, it admits a unique solution (Ȳ_t, Z̄_t, K̄_t)_{0≤t≤T} ∈ S²(0,T) × H²_d(0,T) × A²(0,T) which satisfies

Ȳ_t = ξ + ∫_t^T f(s, Ȳ_s) ds + K̄_T − K̄_t − ∫_t^T Z̄_s dB_s,   (5.8)

Ȳ_t ≥ L_t, 0 ≤ t ≤ T, ∫_0^T (Ȳ_s − L_s) dK̄_s = 0. In order to compare (5.8) and (5.7), we consider the penalization equation associated with the RBSDE (5.8): for m ∈ N,

Ȳ^m_t = ξ + ∫_t^T f(s, Ȳ^m_s) ds + m ∫_t^T (L_s − Ȳ^m_s)⁺ ds − ∫_t^T Z̄^m_s dB_s.   (5.9)

Comparing (5.7) and (5.9), we get Y^{m,n}_t ≤ Ȳ^m_t, ∀t ∈ [0,T], n ∈ N. Thanks to the convergence results of steps 1 and 2 in the proof of theorem 3.1.2 in chapter 3, Ȳ^m ↗ Ȳ in S²(0,T). So we get, for any m, n ∈ N and t ∈ [0,T], Y^{m,n}_t ≤ Ȳ_t.
Similarly, we consider the RBSDE(ξ, f, U) with one upper barrier U. There exists (Y̲_t, Z̲_t, K̲_t)_{0≤t≤T} ∈ S²(0,T) × H²_d(0,T) × A²(0,T) which satisfies

Y̲_t = ξ + ∫_t^T f(s, Y̲_s) ds − (K̲_T − K̲_t) − ∫_t^T Z̲_s dB_s,   (5.10)

Y̲_t ≤ U_t, 0 ≤ t ≤ T, ∫_0^T (Y̲_s − U_s) dK̲_s = 0. In the same way, by the penalization equation associated with (5.10) and the comparison theorem, we deduce that Y^{m,n}_t ≥ Y̲_t, for any m, n ∈ N, t ∈ [0,T]. Then, with the results of step 1 in the proof of theorem 3.1.2 in section 3.1, we get

sup_{0≤t≤T} |Y^{m,n}_t| ≤ max{ sup_{0≤t≤T} |Ȳ_t|, sup_{0≤t≤T} |Y̲_t| } ≤ C.   (5.11)

In the following, notice that assumption 5.1.2′-(v″) implies that f is decreasing in y for s ∈ [0,T], so f(s, Y̲_s) ≥ f(s, Y^{m,n}_s) ≥ f(s, Ȳ_s); with the estimates for (5.8) and (5.10) under the bounded condition, it follows that

|f(s, Y^{m,n}_s)| ≤ max{ |f(s, Ȳ_s)|, |f(s, Y̲_s)| } ≤ C.   (5.12)
To estimate (K^{m,n,+}, K^{m,n,−}, Z^{m,n}), apply Itô's formula to (Y^{m,n})²; then

E(Y^{m,n}_t)² + E ∫_t^T |Z^{m,n}_s|² ds
= E(ξ)² + 2E ∫_t^T Y^{m,n}_s f(s, Y^{m,n}_s) ds + 2E( ∫_t^T Y^{m,n}_s m(L_s − Y^{m,n}_s)⁺ ds − ∫_t^T Y^{m,n}_s n(U_s − Y^{m,n}_s)⁻ ds )
≤ E(ξ)² + E ∫_t^T |Y^{m,n}_s|² ds + E ∫_t^T |f(s,0)|² ds + (1/α) E[ sup_{0≤t≤T} (L⁺_t)² ] + (1/α) E[ sup_{0≤t≤T} (U⁻_t)² ]
+ αE( m∫_t^T (L_s − Y^{m,n}_s)⁺ ds )² + αE( n∫_t^T (U_s − Y^{m,n}_s)⁻ ds )²,

for some α > 0, so

E ∫_t^T |Z^{m,n}_s|² ds ≤ C + α( E( m∫_t^T (L_s − Y^{m,n}_s)⁺ ds )² + E( n∫_t^T (U_s − Y^{m,n}_s)⁻ ds )² ).   (5.13)
We need to prove that there exists a constant C independent of m, n such that for any 0 ≤ t ≤ T,

E( m∫_t^T (L_s − Y^{m,n}_s)⁺ ds )² + E( n∫_t^T (U_s − Y^{m,n}_s)⁻ ds )² ≤ C + 8E ∫_t^T |Z^{m,n}_s|² ds.
In fact, let us consider the stopping times

τ₁ = inf{ r ≥ t : Y^{m,n}_r ≥ U_r } ∧ T,
σ₁ = inf{ r ≥ τ₁ : Y^{m,n}_r = L_r } ∧ T,
τ₂ = inf{ r ≥ σ₁ : Y^{m,n}_r = U_r } ∧ T,

and so on. Since L < U on [0,T), and L and U are continuous, when j → ∞ we have τ_j ↗ T, σ_j ↗ T. Obviously Y^{m,n} ≥ L on the interval [τ_j, σ_j], so we get

Y^{m,n}_{τ_j} = Y^{m,n}_{σ_j} + ∫_{τ_j}^{σ_j} f(s, Y^{m,n}_s) ds − n ∫_{τ_j}^{σ_j} (Y^{m,n}_s − U_s)⁺ ds − ∫_{τ_j}^{σ_j} Z^{m,n}_s dB_s.
On the other hand,

Y^{m,n}_{τ_j} ≥ J_{τ_j} on {τ_j < T},  Y^{m,n}_{τ_j} = J_{τ_j} = ξ on {τ_j = T},
Y^{m,n}_{σ_j} ≤ J_{σ_j} on {σ_j < T},  Y^{m,n}_{σ_j} = J_{σ_j} = ξ on {σ_j = T},

and these inequalities imply that for all j the following holds:

n ∫_{τ_j}^{σ_j} (Y^{m,n}_s − U_s)⁺ ds ≤ J_{σ_j} − J_{τ_j} + ∫_{τ_j}^{σ_j} f(s, Y^{m,n}_s) ds − ∫_{τ_j}^{σ_j} Z^{m,n}_s dB_s
≤ ∫_{τ_j}^{σ_j} (φ_s − Z^{m,n}_s) dB_s − V⁺_{σ_j} + V⁺_{τ_j} + V⁻_{σ_j} − V⁻_{τ_j} + ∫_{τ_j}^{σ_j} f(s, Y^{m,n}_s) ds
≤ ∫_{τ_j}^{σ_j} (φ_s − Z^{m,n}_s) dB_s + V⁻_{σ_j} − V⁻_{τ_j} + ∫_{τ_j}^{σ_j} |f(s, Y^{m,n}_s)| ds.
Notice that on the interval [σ_j, τ_{j+1}], Y^{m,n}_s ≤ U_s; summing in j, we obtain

n ∫_t^T (Y^{m,n}_s − U_s)⁺ ds ≤ ∫_t^T (φ_s − Z^{m,n}_s)( Σ_j 1_{[τ_j,σ_j)}(s) ) dB_s + V⁻_T + ∫_t^T |f(s, Y^{m,n}_s)| ds.
Squaring and taking expectations, with (5.12) we get

E( n∫_t^T (Y^{m,n}_s − U_s)⁺ ds )²   (5.14)
≤ 4E ∫_t^T |φ_s|² ds + 4E ∫_t^T |Z^{m,n}_s|² ds + 4E[(V⁻_T)²] + 4E( ∫_t^T |f(s, Y^{m,n}_s)| ds )²
≤ C + 4E ∫_t^T |Z^{m,n}_s|² ds.

In the same way, we obtain

E( m∫_t^T (L_s − Y^{m,n}_s)⁺ ds )² ≤ C + 4E ∫_t^T |Z^{m,n}_s|² ds.   (5.15)
By (5.14), (5.15) and (5.13), with α = 1/16, it follows that

E ∫_t^T |Z^{m,n}_s|² ds ≤ C,   (5.16)

and then

E[(K^{m,n,+}_T)² + (K^{m,n,−}_T)²] ≤ C.   (5.17)
Let m → ∞; due to the convergence results in step 1 of the proof of theorem 3.1.2 in chapter 3, Y^{m,n} → Y^n in S²(0,T), K^{m,n,+} → K^{n,+} in A²(0,T), and Z^{m,n} → Z^n in H²_d(0,T), where (Y^n, Z^n, K^{n,+}) is the solution of the one lower barrier RBSDE(ξ, f_n, L), with f_n(s,y) = f(s,y) − n(y − U_s)⁺. So

Y^n_t = ξ + ∫_t^T f(s, Y^n_s) ds + K^{n,+}_T − K^{n,+}_t − n ∫_t^T (Y^n_s − U_s)⁺ ds − ∫_t^T Z^n_s dB_s,

Y^n_t ≥ L_t, 0 ≤ t ≤ T, ∫_0^T (Y^n_s − L_s) dK^{n,+}_s = 0. Thanks to the uniform estimates obtained above, we know that there exists a constant C independent of n and t such that

sup_{0≤t≤T} [ (Y^n_t)² + |f(t, Y^n_t)| ] ≤ C,   (5.18)
and

E ∫_0^T |Z^n_s|² ds + E[(K^{n,+}_T)²] + E[(K^{n,−}_T)²] ≤ C,   (5.19)

where K^{n,−}_t = n ∫_0^t (Y^n_s − U_s)⁺ ds. Then by the comparison theorem 3.4.2 in chapter 3, we deduce that Y^n_t ↘ Y_t, for t ∈ [0,T], as n → ∞, and by the dominated convergence theorem

E ∫_0^T (Y^n_s − Y_s)² ds → 0, as n → ∞.   (5.20)

Next we want to prove the convergence of (Z^n) in H²_d(0,T). For this we need the following lemma, which will be proved later.
Lemma 5.1.1.

lim_{n→∞} E[ sup_{0≤t≤T} ((Y^n_t − U_t)⁺)² ] = 0.   (5.21)
For n, p ∈ N, applying Itô's formula to |Y^n − Y^p|² and taking the expectation,

E(Y^n_t − Y^p_t)² + E ∫_t^T |Z^n_s − Z^p_s|² ds
= 2E ∫_t^T (Y^n_s − Y^p_s)(f(s,Y^n_s) − f(s,Y^p_s)) ds + 2E ∫_t^T (Y^n_s − Y^p_s) d(K^{n,+}_s − K^{p,+}_s) − 2E ∫_t^T (Y^n_s − Y^p_s) d(K^{n,−}_s − K^{p,−}_s)
≤ 2E ∫_t^T (Y^n_s − U_s)⁺ dK^{p,−}_s + 2E ∫_t^T (Y^p_s − U_s)⁺ dK^{n,−}_s
≤ 2( E[ sup_{0≤t≤T} ((Y^n_t − U_t)⁺)² ] )^{1/2} ( E(K^{p,−}_T)² )^{1/2} + 2( E[ sup_{0≤t≤T} ((Y^p_t − U_t)⁺)² ] )^{1/2} ( E(K^{n,−}_T)² )^{1/2},

since

∫_t^T (Y^n_s − Y^p_s) d(K^{n,+}_s − K^{p,+}_s) = ∫_t^T (Y^n_s − L_s) dK^{n,+}_s − ∫_t^T (Y^n_s − L_s) dK^{p,+}_s + ∫_t^T (Y^p_s − L_s) dK^{p,+}_s − ∫_t^T (Y^p_s − L_s) dK^{n,+}_s
≤ 0.
So by (5.21) and (5.19), as n, p → ∞,

E ∫_t^T |Z^n_s − Z^p_s|² ds → 0,

and there exists a process Z ∈ H²_d(0,T) such that, as n → ∞,

E ∫_t^T |Z^n_s − Z_s|² ds → 0.
Moreover,

|Y^n_t − Y^p_t|² + ∫_t^T |Z^n_s − Z^p_s|² ds
= 2∫_t^T (Y^n_s − Y^p_s)(f(s,Y^n_s) − f(s,Y^p_s)) ds + 2∫_t^T (Y^n_s − Y^p_s) d(K^{n,+}_s − K^{p,+}_s)
− 2∫_t^T (Y^n_s − Y^p_s) d(K^{n,−}_s − K^{p,−}_s) − 2∫_t^T (Y^n_s − Y^p_s)(Z^n_s − Z^p_s) dB_s,

so

E[ sup_{0≤t≤T} |Y^n_t − Y^p_t|² ] ≤ 2E ∫_0^T (Y^n_s − U_s)⁺ dK^{p,−}_s + 2E ∫_0^T (Y^p_s − U_s)⁺ dK^{n,−}_s + 2E[ sup_{0≤t≤T} | ∫_t^T (Y^n_s − Y^p_s)(Z^n_s − Z^p_s) dB_s | ].
By the Burkholder-Davis-Gundy inequality and (5.21), we get, as n, p → ∞,

E[ sup_{0≤t≤T} |Y^n_t − Y^p_t|² ] → 0,

i.e. Y^n → Y in S²(0,T). By the convergence of Y^n_t, i.e. Y^n_t ↘ Y_t, 0 ≤ t ≤ T, and the fact that f(s,y) is continuous and decreasing in y, we get f(s,Y^n_s) ↗ f(s,Y_s), 0 ≤ s ≤ T. Moreover |f(s,Y^n_s)| ≤ C. Using the monotone convergence theorem, we deduce that

E ∫_0^T [f(t,Y^n_t) − f(t,Y_t)]² dt → 0,   (5.22)

i.e. the sequence f(·, Y^n_·) is a Cauchy sequence in H²(0,T).

Now we consider the convergence of the increasing processes (K^{n,+}) and (K^{n,−}). By the comparison theorem 3.4.3 in chapter 3, we get, for n ≥ p, K^{n,+}_t ≥ K^{p,+}_t and K^{n,+}_t − K^{n,+}_s ≥ K^{p,+}_t − K^{p,+}_s, for 0 ≤ s ≤ t ≤ T. So for 0 ≤ t ≤ T, K^{n,+}_t ↗ K⁺_t, and with E[(K^{n,+}_t)²] ≤ C we get E[(K⁺_t)²] ≤ C. Furthermore, K^{n,+}_T − K^{p,+}_T ≥ K^{n,+}_t − K^{p,+}_t, from which it follows that

E[ sup_{0≤t≤T} (K^{n,+}_t − K^{p,+}_t)² ] ≤ E[(K^{n,+}_T − K^{p,+}_T)²] → 0,
so K^{n,+} → K⁺ in A²(0,T). On the other hand, since (Y^n, Z^n, K^{n,+}, K^{n,−}) satisfies

Y^n_t = ξ + ∫_t^T f(s,Y^n_s) ds + K^{n,+}_T − K^{n,+}_t − (K^{n,−}_T − K^{n,−}_t) − ∫_t^T Z^n_s dB_s,

we rewrite it in the forward form

K^{n,−}_t = Y^n_t − Y^n_0 + ∫_0^t f(s,Y^n_s) ds + K^{n,+}_t − ∫_0^t Z^n_s dB_s.

Without loss of generality, set p < n; then

K^{p,−}_t = Y^p_t − Y^p_0 + ∫_0^t f(s,Y^p_s) ds + K^{p,+}_t − ∫_0^t Z^p_s dB_s,
so with the BDG inequality, we get

E[ sup_{0≤t≤T} (K^{n,−}_t − K^{p,−}_t)² ]
≤ 5E[ sup_{0≤t≤T} (Y^n_t − Y^p_t)² ] + 5(Y^n_0 − Y^p_0)² + 5T E ∫_0^T (f(s,Y^n_s) − f(s,Y^p_s))² ds
+ 5E[(K^{n,+}_T − K^{p,+}_T)²] + CE ∫_0^T (Z^n_s − Z^p_s)² ds
→ 0,

i.e. there exists a process K⁻ ∈ A²(0,T) such that K^{n,−} → K⁻ in A²(0,T), and the limit (Y, Z, K⁺, K⁻) satisfies

Y_t = ξ + ∫_t^T f(s,Y_s) ds + K⁺_T − K⁺_t − (K⁻_T − K⁻_t) − ∫_t^T Z_s dB_s.
Since for each n ∈ N, Y^n_t ≥ L_t, 0 ≤ t ≤ T, we have Y_t ≥ L_t. It remains to check (4). Since (Y^n, K^{n,+}, K^{n,−}) tends to (Y, K⁺, K⁻) uniformly in t in probability, the measure dK^{n,+} (resp. dK^{n,−}) converges to dK⁺ (resp. dK⁻) weakly in probability, so

∫_0^T (Y^n_t − L_t) dK^{n,+}_t → ∫_0^T (Y_t − L_t) dK⁺_t,  ∫_0^T (Y^n_t − U_t) dK^{n,−}_t → ∫_0^T (Y_t − U_t) dK⁻_t,

in probability as n → ∞. Obviously

∫_0^T (Y_t − L_t) dK⁺_t ≥ 0,  ∫_0^T (Y_t − U_t) dK⁻_t ≤ 0.

On the other hand, for each n ∈ N,

∫_0^T (Y^n_t − L_t) dK^{n,+}_t = 0,  ∫_0^T (Y^n_t − U_t) dK^{n,−}_t ≥ 0.

Hence

∫_0^T (Y_t − L_t) dK⁺_t = ∫_0^T (Y_t − U_t) dK⁻_t = 0, a.s.

Consequently the quadruple (Y, Z, K⁺, K⁻) is a solution of the RBSDE(ξ, f, L, U) under assumption (5.5). □
To complete step 1, we give the proof of Lemma 5.1.1, which is analogous to Lemma 4 in [49]; with (5.12), (5.11) and (5.5), we can obtain it easily.

Proof of Lemma 5.1.1: Since Y̲_t ≤ Y^{m,n}_t ≤ Ȳ_t, letting m → ∞, by the convergence result, Y̲_t ≤ Y^n_t ≤ Ȳ_t. Without loss of generality, we replace L_t by L_t ∨ Y̲_t and U_t by U_t ∧ Ȳ_t; that is, we may assume that sup_{0≤t≤T} |L_t| + sup_{0≤t≤T} |U_t| ≤ C. The one lower barrier RBSDE(ξ, f̃_n, L), where f̃_n(t,y) = f(t, Y^n_t) − n(y − U_t), admits the unique solution (Ỹ^n, Z̃^n, K̃^{n,+}), i.e.

Ỹ^n_t = ξ + ∫_t^T f(s, Y^n_s) ds − n ∫_t^T (Ỹ^n_s − U_s) ds + K̃^{n,+}_T − K̃^{n,+}_t − ∫_t^T Z̃^n_s dB_s,

Ỹ^n_t ≥ L_t, 0 ≤ t ≤ T, and ∫_0^T (Ỹ^n_t − L_t) dK̃^{n,+}_t = 0. Let (Ỹ^{m,n}, Z̃^{m,n}) be the solution of the penalized
BSDE

Ỹ^{m,n}_t = ξ + ∫_t^T f(s, Y^{m,n}_s) ds + m ∫_t^T (Ỹ^{m,n}_s − L_s)⁻ ds − n ∫_t^T (Ỹ^{m,n}_s − U_s) ds − ∫_t^T Z̃^{m,n}_s dB_s.

Comparing it with (5.7), it follows that Y^{m,n}_t ≤ Ỹ^{m,n}_t. Since Ỹ^{m,n}_t → Ỹ^n_t and Y^{m,n}_t → Y^n_t as m → ∞, we get Y^n_t ≤ Ỹ^n_t, 0 ≤ t ≤ T. On the other hand, we have

e^{−nt} Ỹ^n_t = e^{−nT} ξ + ∫_t^T e^{−ns} f(s, Y^n_s) ds + n ∫_t^T e^{−ns} U_s ds + ∫_t^T e^{−ns} dK̃^{n,+}_s − ∫_t^T e^{−ns} Z̃^n_s dB_s,
e^{−nt} Ỹ^n_t ≥ e^{−nt} L_t, 0 ≤ t ≤ T, and ∫_0^T (e^{−nt} Ỹ^n_t − e^{−nt} L_t) e^{−nt} dK̃^{n,+}_t = 0. So we know that (e^{−nt} Ỹ^n_t, e^{−nt} Z̃^n_t, ∫_0^t e^{−ns} dK̃^{n,+}_s) is the solution of the lower barrier RBSDE(e^{−nT}ξ, e^{−nt}f(t,Y^n_t) + ne^{−nt}U_t, e^{−nt}L_t). Then by Proposition 2.3 in [28], for any stopping time ν such that 0 ≤ ν ≤ T, with L_t ≤ J_t ≤ U_t, we deduce

Ỹ^n_ν = ess sup_{τ≥ν} E[ ∫_ν^τ e^{−n(s−ν)} f(s,Y^n_s) ds + e^{−n(τ−ν)} ξ 1_{τ=T} + ∫_ν^τ n e^{−n(s−ν)} U_s ds + e^{−n(τ−ν)} L_τ 1_{τ<T} | F_ν ]
≤ E[ n ∫_ν^T e^{−n(s−ν)} (U_s − J_s) ds + ∫_ν^T e^{−n(s−ν)} |f(s,Y^n_s)| ds | F_ν ]
+ ess sup_{τ≥ν} E[ ∫_ν^τ n e^{−n(s−ν)} J_s ds + e^{−n(τ−ν)} J_τ 1_{τ<T} + e^{−n(τ−ν)} ξ 1_{τ=T} | F_ν ].
Obviously, when n → ∞,

n ∫_ν^T e^{−n(s−ν)} (U_s − J_s) ds → (U_ν − J_ν) 1_{ν<T},

a.s. and in L²(F_ν), so the conditional expectation also converges in L²(F_ν). Moreover, since |f(s,Y^n_s)| ≤ C, with

∫_ν^T e^{−n(s−ν)} |f(s,Y^n_s)| ds ≤ (1/√(2n)) ( ∫_0^T f²(s,Y^n_s) ds )^{1/2},

we get that E[ ∫_ν^T e^{−n(s−ν)} |f(s,Y^n_s)| ds | F_ν ] converges to zero in L²(F_ν) as n → ∞.

Now consider the last term of the inequality; since

∫_ν^τ n e^{−n(s−ν)} J_s ds + e^{−n(τ−ν)} J_τ = J_ν + ∫_ν^τ e^{−n(s−ν)} dJ_s,
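The convergence used above is the standard approximation-of-the-identity property of the kernel n e^{−n(s−ν)}; for a bounded continuous g (here g(s) = U_s − J_s), the substitution u = n(s−ν) makes it explicit:

```latex
n\int_\nu^T e^{-n(s-\nu)}\, g(s)\,ds
  = \int_0^{\,n(T-\nu)} e^{-u}\, g\!\left(\nu + \tfrac{u}{n}\right)\,du
  \;\xrightarrow[n\to\infty]{}\; g(\nu)\int_0^\infty e^{-u}\,du = g(\nu)
  \qquad\text{on } \{\nu < T\},
```

by dominated convergence; on {ν = T} the integral vanishes.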
we have

ess sup_{τ≥ν} E[ ∫_ν^τ n e^{−n(s−ν)} J_s ds + e^{−n(τ−ν)} J_τ 1_{τ<T} + e^{−n(τ−ν)} ξ 1_{τ=T} | F_ν ]
= ess sup_{τ≥ν} E[ J_ν + ∫_ν^τ e^{−n(s−ν)} dJ_s | F_ν ]
≤ J_ν + E[ ∫_ν^T e^{−n(s−ν)} d(V⁺_s + V⁻_s) | F_ν ].

Since E[(V⁺_T)²] ≤ C and E[(V⁻_T)²] ≤ C, E[ ∫_ν^T e^{−n(s−ν)} d(V⁺_s + V⁻_s) | F_ν ] → 0 in L²(F_ν) as n → ∞. It follows that

Y_ν ≤ lim inf_{n→∞} Ỹ^n_ν ≤ U_ν 1_{ν<T} + ξ 1_{ν=T}, a.s.

By the section theorem of Dellacherie and Meyer, for 0 ≤ t ≤ T,

Y_t ≤ U_t.

Hence (Y^n_t − U_t)⁺ ↓ 0, and by Dini's theorem this convergence is uniform in t. The result then follows from the dominated convergence theorem, since (Y^n_t − U_t)⁺ ≤ (Y¹_t − U_t)⁺. The proof is complete. □
Step 2. In this step, we consider the general case of a barrier L which satisfies assumption 5.1.3-(i′):

E[ϕ²( sup_{0≤t≤T} (L_t)⁺ )] < ∞,

and L⁺ ∈ S²(0,T), but we still assume that for some C > 0,

|ξ| + sup_{0≤t≤T} |f(t,0)| + sup_{0≤t≤T} (U_t)⁻ ≤ C.   (5.23)

For n ∈ N, set Lⁿ = L ∧ n; then sup_{0≤t≤T} (L^n_t)⁺ ≤ n and L^n_t ≤ L_t, so assumptions 5.1.3-(ii), (iii) are satisfied, and by step 1 we know that there exists a triple (Y^n, Z^n, K^n), with K^n = K^{n,+} − K^{n,−}, which satisfies
Y^n_t = ξ + ∫_t^T f(s,Y^n_s) ds + K^{n,+}_T − K^{n,+}_t − (K^{n,−}_T − K^{n,−}_t) − ∫_t^T Z^n_s dB_s,   (5.24)

L^n_t ≤ Y^n_t ≤ U_t, 0 ≤ t ≤ T, and ∫_0^T (Y^n_t − L^n_t) dK^{n,+}_t = ∫_0^T (Y^n_t − U_t) dK^{n,−}_t = 0.
Consider the solution (Ȳ, Z̄, K̄) of the one lower barrier RBSDE(ξ, f, L) and the solution (Y̲, Z̲, K̲) of the one upper barrier RBSDE(ξ, f, U); in fact these two equations can be considered as the two-barrier RBSDE(ξ, f, L, Ū) and RBSDE(ξ, f, L̲, U), where Ū ≡ +∞ and L̲ ≡ −∞. By the comparison theorem 5.2.2, it follows that

Y̲_t ≤ Y^n_t ≤ Ȳ_t, 0 ≤ t ≤ T.

So

E[ sup_{0≤t≤T} |Y^n_t|² ] ≤ max{ E[ sup_{0≤t≤T} |Ȳ_t|² ], E[ sup_{0≤t≤T} |Y̲_t|² ] } ≤ C.   (5.25)

Since L^n_t ≤ L^{n+1}_t, 0 ≤ t ≤ T, thanks to the comparison theorem 5.2.2, Y^n_t ↗ Y_t, 0 ≤ t ≤ T. From the above estimate and Fatou's lemma, we get

E[ sup_{0≤t≤T} (Y_t)² ] ≤ C,   (5.26)

and

E ∫_0^T |Y^n_t − Y_t|² dt → 0, as n → ∞,   (5.27)

follows from the dominated convergence theorem. Notice that f is decreasing in y; then f(t, Ȳ_t) ≤ f(t, Y^n_t) ≤ f(t, Y̲_t), 0 ≤ t ≤ T, and with the integrability properties of Ȳ and Y̲ we have

E[( ∫_0^t f(s,Y^n_s) ds )²] ≤ max{ E[( ∫_0^t f(s,Ȳ_s) ds )²], E[( ∫_0^t f(s,Y̲_s) ds )²] } ≤ C.   (5.28)
In order to prove the convergence of (Z^n, K^n), we first need a priori estimates. We apply Itô's formula to |Y^n_t|² on the interval [t,T]:

E|Y^n_t|² + E ∫_t^T |Z^n_s|² ds   (5.29)
= E|ξ|² + 2E ∫_t^T Y^n_s f(s,Y^n_s) ds + 2E ∫_t^T Y^n_s dK^n_s
≤ E|ξ|² + E ∫_t^T |Y^n_s|² ds + E ∫_t^T |f(s,0)|² ds + (α + β) E[ sup_{0≤t≤T} |Y^n_t|² ]
+ (1/α) E[(K^{n,+}_T − K^{n,+}_t)²] + (1/β) E[(K^{n,−}_T − K^{n,−}_t)²],
where K^n = K^{n,+} − K^{n,−}. We first use the comparison theorem to estimate K^{n,−}. Consider the linear RBSDE(ξ, f(s,(L_s)⁻)) with two barriers L and U. By the existence results of [19], we know there exists (Ỹ, Z̃, K̃⁺, K̃⁻) ∈ S²(0,T) × H²_d(0,T) × A²(0,T) × A²(0,T) satisfying

Ỹ_t = ξ + ∫_t^T f(s,(L_s)⁻) ds + K̃⁺_T − K̃⁺_t − (K̃⁻_T − K̃⁻_t) − ∫_t^T Z̃_s dB_s,

L_t ≤ Ỹ_t ≤ U_t,  ∫_0^T (Ỹ_t − L_t) dK̃⁺_t = ∫_0^T (Ỹ_t − U_t) dK̃⁻_t = 0.
Then we have the following lemma, which will be proved later.
Lemma 5.1.2. For 0 ≤ s ≤ t ≤ T, K^{n,−}_t − K^{n,−}_s ≤ K̃⁻_t − K̃⁻_s, and K^{n,−}_T ≤ K̃⁻_T.

Now we have

E[(K^{n,−}_T)²] ≤ E[(K̃⁻_T)²] ≤ C.
We rewrite the RBSDE(ξ, f, Lⁿ, U) (5.24) as

K^{n,+}_T − K^{n,+}_t = Y^n_t − ξ − ∫_t^T f(s,Y^n_s) ds + (K^{n,−}_T − K^{n,−}_t) + ∫_t^T Z^n_s dB_s,

hence

E(K^{n,+}_T − K^{n,+}_t)² ≤ 5E|Y^n_t|² + 5E|ξ|² + 5E( ∫_t^T f(s,Y^n_s) ds )² + 5E[(K^{n,−}_T − K^{n,−}_t)²] + 5E ∫_t^T |Z^n_s|² ds   (5.30)
≤ C + 5E ∫_t^T |Z^n_s|² ds.
Then we substitute (5.30) into (5.29); taking α = 10, β = 1, and using (5.23) and (5.25), it follows that

E(K^{n,+}_T)² + E ∫_0^T |Z^n_s|² ds ≤ C.   (5.31)
Now for n, p ∈ N with n ≥ p, we have L^n_t ≥ L^p_t, 0 ≤ t ≤ T. We apply Itô's formula to |Y^n_t − Y^p_t|² on the interval [t,T] and take expectations:

E[|Y^n_t − Y^p_t|²] + E ∫_t^T |Z^n_s − Z^p_s|² ds
= 2E ∫_t^T [f(s,Y^n_s) − f(s,Y^p_s)](Y^n_s − Y^p_s) ds + 2E ∫_t^T (Y^n_s − Y^p_s) d(K^{n,+}_s − K^{p,+}_s) − 2E ∫_t^T (Y^n_s − Y^p_s) d(K^{n,−}_s − K^{p,−}_s)
≤ 2E ∫_t^T (L^n_s − L^p_s) dK^{n,+}_s − 2E ∫_t^T (L^n_s − L^p_s) dK^{p,+}_s
≤ 2E ∫_t^T (L^n_s − L^p_s) dK^{n,+}_s,

in view of

∫_t^T (Y^n_s − Y^p_s) d(K^{n,−}_s − K^{p,−}_s) = ∫_t^T (Y^n_s − U_s) dK^{n,−}_s + ∫_t^T (Y^p_s − U_s) dK^{p,−}_s − ∫_t^T (Y^n_s − U_s) dK^{p,−}_s − ∫_t^T (Y^p_s − U_s) dK^{n,−}_s
≥ 0.
Since L_t − L^n_t ↓ 0 for each t ∈ [0,T], and L_t − L^n_t is continuous, by Dini's theorem the convergence holds uniformly on the interval [0,T], i.e.

E[ sup_{0≤t≤T} (L_t − L^n_t)² ] → 0, as n → ∞.   (5.32)
Then with (5.31),

E ∫_0^T |Z^n_s − Z^p_s|² ds ≤ 2E[ sup_{0≤t≤T} (L^n_t − L^p_t) K^{n,+}_T ]
≤ 2( E[ sup_{0≤t≤T} (L^n_t − L^p_t)² ] )^{1/2} ( E[(K^{n,+}_T)²] )^{1/2}
≤ C( E[ sup_{0≤t≤T} (L^n_t − L^p_t)² ] )^{1/2} → 0,

as n, p → ∞, i.e. (Z^n) is a Cauchy sequence in the space H²_d(0,T), and there exists a process Z ∈ H²_d(0,T) such that, as n → ∞,

E ∫_0^T |Z^n_s − Z_s|² ds → 0.   (5.33)
Furthermore,

|Y^n_t − Y^p_t|² + ∫_t^T |Z^n_s − Z^p_s|² ds
= 2∫_t^T [f(s,Y^n_s) − f(s,Y^p_s)](Y^n_s − Y^p_s) ds + 2∫_t^T (Y^n_s − Y^p_s) d(K^{n,+}_s − K^{p,+}_s)
− 2∫_t^T (Y^n_s − Y^p_s) d(K^{n,−}_s − K^{p,−}_s) − 2∫_t^T (Y^n_s − Y^p_s)(Z^n_s − Z^p_s) dB_s,

so

sup_{0≤t≤T} |Y^n_t − Y^p_t|² ≤ 2 sup_{0≤t≤T} ∫_t^T (L^n_s − L^p_s) d(K^{n,+}_s − K^{p,+}_s) + 2 sup_{0≤t≤T} | ∫_t^T (Y^n_s − Y^p_s)(Z^n_s − Z^p_s) dB_s |.
Taking expectations on both sides, by the BDG inequality and (5.31) we get

E sup_{0≤t≤T} |Y^n_t − Y^p_t|²
≤ 2E[ ∫_0^T (L^n_s − L^p_s) dK^{n,+}_s ] + CE[ ( ∫_0^T (Y^n_s − Y^p_s)²(Z^n_s − Z^p_s)² ds )^{1/2} ]
≤ 2( E[ sup_{0≤t≤T} (L^n_t − L^p_t)² ] )^{1/2} ( E[(K^{n,+}_T)²] )^{1/2} + CE[ ( sup_{0≤t≤T} |Y^n_t − Y^p_t|² ∫_0^T |Z^n_s − Z^p_s|² ds )^{1/2} ]
≤ C( E[ sup_{0≤t≤T} (L^n_t − L^p_t)² ] )^{1/2} + (1/2) E sup_{0≤t≤T} |Y^n_t − Y^p_t|² + CE ∫_0^T |Z^n_s − Z^p_s|² ds.

Hence, by (5.33) and (5.32), as n, p → ∞,

E sup_{0≤t≤T} |Y^n_t − Y^p_t|² → 0,   (5.34)

i.e. (Y^n) is a Cauchy sequence in the space S²(0,T), which implies that there exists a process Y ∈ S²(0,T) such that, as n → ∞,

E sup_{0≤t≤T} |Y^n_t − Y_t|² → 0.   (5.35)
Moreover, since f is continuous and decreasing in y, with Y^n_t ↗ Y_t, 0 ≤ t ≤ T,

f(t,Y^n_t) − f(t,Y_t) ↘ 0, 0 ≤ t ≤ T.

By the monotone limit theorem, we get ∫_0^T [f(t,Y^n_t) − f(t,Y_t)] dt ↘ 0, and with (5.28) it follows that E[( ∫_0^T f(t,Y_t) dt )²] ≤ C; then

E[( ∫_0^T (f(t,Y^n_t) − f(t,Y_t)) dt )²] → 0,   (5.36)

as n → ∞. From corollary 5.2.1, we know that for all t ∈ [0,T], K^{n,−}_t is increasing with respect to n, and with E[(K^{n,−}_t)²] ≤ C, there exists K⁻_t such that K^{n,−}_t ↗ K⁻_t in L²(F_t). Since for each t ∈ [0,T], E[(K^{n,+}_t)²] ≤ C, the sequence (K^{n,+}_t) has a weak limit K⁺_t in L²(F_t), with E[(K⁺_t)²] ≤ C. Then for 0 ≤ t ≤ T, (Y, Z, K⁺, K⁻) satisfies
Y_t = ξ + ∫_t^T f(s,Y_s) ds + K⁺_T − K⁺_t − (K⁻_T − K⁻_t) − ∫_t^T Z_s dB_s.   (5.37)
We will now prove that the convergence of K^{n,+} and K^{n,−} also holds in the strong sense. First, we consider K^{n,−}: for n, p ∈ N with n ≥ p, since L^n_t ≥ L^p_t, by corollary 5.2.1 we have, for 0 ≤ s ≤ t ≤ T, K^{n,−}_t − K^{n,−}_s ≥ K^{p,−}_t − K^{p,−}_s. So 0 ≤ K^{n,−}_t − K^{p,−}_t ≤ K^{n,−}_T − K^{p,−}_T, and it follows immediately, letting n → ∞, that

0 ≤ K⁻_t − K^{p,−}_t ≤ K⁻_T − K^{p,−}_T.

This inequality yields, as p → ∞,

E sup_{0≤t≤T} |K⁻_t − K^{p,−}_t|² ≤ E|K⁻_T − K^{p,−}_T|² → 0.   (5.38)
Then we consider the term K^{n,+}. For this we rewrite (5.24) and (5.37) in the forward form:

K^{n,+}_t = Y^n_0 − Y^n_t − ∫_0^t f(s,Y^n_s) ds + K^{n,−}_t + ∫_0^t Z^n_s dB_s,
K⁺_t = Y_0 − Y_t − ∫_0^t f(s,Y_s) ds + K⁻_t + ∫_0^t Z_s dB_s.

Considering the difference and taking expectations on both sides, by the BDG inequality and f(s,Y^n_s) ≥ f(s,Y_s), it follows that

E[ sup_{0≤t≤T} |K^{n,+}_t − K⁺_t|² ] ≤ 5|Y^n_0 − Y_0|² + 5E[ sup_{0≤t≤T} |Y^n_t − Y_t|² ] + 5E( ∫_0^T [f(s,Y^n_s) − f(s,Y_s)] ds )²
+ 5E sup_{0≤t≤T} |K^{n,−}_t − K⁻_t|² + CE ∫_0^T |Z^n_s − Z_s|² ds.
Then by (5.35), (5.36), (5.38) and (5.33), we deduce that

E[ sup_{0≤t≤T} |K^{n,+}_t − K⁺_t|² ] → 0.
The last thing to check is that (3) and (4) are also satisfied. Since for each n ∈ N, L^n_t ≤ Y^n_t ≤ U_t, 0 ≤ t ≤ T, with Y^n_t ↗ Y_t and L^n_t ↗ L_t, we get L_t ≤ Y_t ≤ U_t. On the other hand, the processes K^{n,+}
and K^{n,−} are increasing, so the limits K⁺ and K⁻ are also increasing. Notice that (Y^n, K^{n,+}, K^{n,−}) tends to (Y, K⁺, K⁻) uniformly in t in probability, so the measure dK^{n,+} (resp. dK^{n,−}) converges to dK⁺ (resp. dK⁻) weakly in probability. So

∫_0^T (Y_t − L_t) dK^{n,+}_t → ∫_0^T (Y_t − L_t) dK⁺_t,  ∫_0^T (Y^n_t − U_t) dK^{n,−}_t → ∫_0^T (Y_t − U_t) dK⁻_t,

in probability as n → ∞. Obviously ∫_0^T (Y_t − U_t) dK⁻_t ≤ 0. On the other hand, for each n ∈ N, ∫_0^T (Y^n_t − U_t) dK^{n,−}_t = 0. Hence

∫_0^T (Y_t − U_t) dK⁻_t = 0, a.s.
For the lower barrier, since L^n converges to L in S²(0,T) as n → ∞, we have

E ∫_0^T (Y^n_t − L^n_t) dK^{n,+}_t − E ∫_0^T (Y_t − L_t) dK⁺_t
= E ∫_0^T (Y^n_t − Y_t) dK^{n,+}_t + E ∫_0^T (Y_t − L_t) d(K^{n,+}_t − K⁺_t) + E ∫_0^T (L_t − L^n_t) dK^{n,+}_t
≤ ( E[ sup_{0≤t≤T} (Y^n_t − Y_t)² ] )^{1/2} ( E[(K^{n,+}_T)²] )^{1/2} + E ∫_0^T (Y_t − L_t) d(K^{n,+}_t − K⁺_t) + ( E[ sup_{0≤t≤T} (L_t − L^n_t)² ] )^{1/2} ( E[(K^{n,+}_T)²] )^{1/2}
→ 0.

Since Y_t ≥ L_t, ∫_0^T (Y_t − L_t) dK⁺_t ≥ 0, while E ∫_0^T (Y^n_t − L^n_t) dK^{n,+}_t = 0, so E ∫_0^T (Y_t − L_t) dK⁺_t = 0, and then ∫_0^T (Y_t − L_t) dK⁺_t = 0. □

Proof of Lemma 5.1.2: Obviously f(s,(L_s)⁻) ∈ H²(0,T), in view of assumption 5.1.3.
Consider, for m, n ∈ N, the following RBSDEs with one lower barrier:

Ỹ^m_t = ξ + ∫_t^T f(s,(L_s)⁻) ds − m ∫_t^T (Ỹ^m_s − U_s)⁺ ds + K̃^{m,+}_T − K̃^{m,+}_t − ∫_t^T Z̃^m_s dB_s,
Ỹ^m_t ≥ L_t,  ∫_0^T (Ỹ^m_t − L_t) dK̃^{m,+}_t = 0,

and

Y^{m,n}_t = ξ + ∫_t^T f(s,Y^{m,n}_s) ds − m ∫_t^T (Y^{m,n}_s − U_s)⁺ ds + K^{m,n,+}_T − K^{m,n,+}_t − ∫_t^T Z^{m,n}_s dB_s,
Y^{m,n}_t ≥ L^n_t,  ∫_0^T (Y^{m,n}_t − L^n_t) dK^{m,n,+}_t = 0.
Since Y^{m,n}_t ≥ L^n_t ≥ (L_t)⁻, we get f(t,Y^{m,n}_t) ≤ f(t,L^n_t) ≤ f(t,(L_t)⁻). Then for m, n ∈ N, ∀t ∈ [0,T],

f(t,Y^{m,n}_t) − m(Y^{m,n}_t − U_t)⁺ ≤ f(t,(L_t)⁻) − m(Y^{m,n}_t − U_t)⁺,
L^n_t ≤ L_t.

By the general comparison theorem for RBSDEs with one barrier, theorem 3.4.2 in chapter 3, it follows that Y^{m,n}_t ≤ Ỹ^m_t, ∀t ∈ [0,T]. Denote K̃^{m,−}_t = m ∫_0^t (Ỹ^m_s − U_s)⁺ ds and K^{m,n,−}_t = m ∫_0^t (Y^{m,n}_s − U_s)⁺ ds; then we get, for 0 ≤ s ≤ t ≤ T,

K^{m,n,−}_t − K^{m,n,−}_s ≤ K̃^{m,−}_t − K̃^{m,−}_s.   (5.39)
Thanks to the convergence results in step 1 and in [49], and noticing that (L^n)⁺ is bounded, we know that as m → ∞,

K̃^{m,−}_t → K̃⁻_t,  K^{m,n,−}_t → K^{n,−}_t, in L²(F_t).

Here K̃⁻_t and K^{n,−}_t are the increasing processes with respect to the upper barrier U of the solutions of the RBSDE(ξ, f(t,(L_t)⁻), L, U) and the RBSDE(ξ, f, L^n, U), respectively. Then from (5.39) we deduce that, for 0 ≤ s ≤ t ≤ T,

K^{n,−}_t − K^{n,−}_s ≤ K̃⁻_t − K̃⁻_s.

It follows immediately that K^{n,−}_T ≤ K̃⁻_T. □
Step 3. In this step we study the general case for L and U, when assumption 5.1.3 is satisfied:

E[ϕ²( sup_{0≤t≤T} (L_t)⁺ )] + E[ϕ²( sup_{0≤t≤T} (U_t)⁻ )] < ∞,

L⁺, U⁻ ∈ S²(0,T). But we still assume that for some C > 0,

|ξ| + sup_{0≤t≤T} |f(t,0)| ≤ C.   (5.40)

For n ∈ N, set Uⁿ = U ∨ (−n); then sup_{0≤t≤T} (U^n_t)⁻ ≤ n and Uⁿ ≥ U, so assumptions 5.1.3-(ii), (iii) are satisfied, and by step 2 we know that there exists a triple (Y^n, Z^n, K^n), with K^n = K^{n,+} − K^{n,−}, which satisfies
Y^n_t = ξ + ∫_t^T f(s,Y^n_s) ds + K^{n,+}_T − K^{n,+}_t − (K^{n,−}_T − K^{n,−}_t) − ∫_t^T Z^n_s dB_s,   (5.41)

L_t ≤ Y^n_t ≤ U^n_t, 0 ≤ t ≤ T, and ∫_0^T (Y^n_t − L_t) dK^{n,+}_t = ∫_0^T (Y^n_t − U^n_t) dK^{n,−}_t = 0.

As in step 2, we consider the solution (Ȳ, Z̄, K̄) of the one lower barrier RBSDE(ξ, f, L) and the solution (Y̲, Z̲, K̲) of the one upper barrier RBSDE(ξ, f, U). Then by the comparison theorem 5.2.2, it follows that

Y̲_t ≤ Y^n_t ≤ Ȳ_t, 0 ≤ t ≤ T.

So

E[ sup_{0≤t≤T} |Y^n_t|² ] ≤ max{ E[ sup_{0≤t≤T} |Ȳ_t|² ], E[ sup_{0≤t≤T} |Y̲_t|² ] } ≤ C.   (5.42)
Since U^{n+1}_t ≤ U^n_t, 0 ≤ t ≤ T, thanks to the comparison theorem 5.2.2, Y^n_t ↘ Y_t, 0 ≤ t ≤ T. From (5.42) and Fatou's lemma, we get

E[ sup_{0≤t≤T} (Y_t)² ] ≤ C,   (5.43)

and

E ∫_0^T |Y^n_t − Y_t|² dt → 0, as n → ∞,   (5.44)

follows from the dominated convergence theorem. Notice that f is decreasing in y; then f(t, Ȳ_t) ≤ f(t, Y^n_t) ≤ f(t, Y̲_t), 0 ≤ t ≤ T, and with the integrability properties of Ȳ and Y̲ we have

E[( ∫_0^t f(s,Y^n_s) ds )²] ≤ max{ E[( ∫_0^t f(s,Ȳ_s) ds )²], E[( ∫_0^t f(s,Y̲_s) ds )²] } ≤ C.   (5.45)
Then we again use the comparison theorem, now to estimate K^{n,+}_t. Consider the linear RBSDE(ξ, f(s,(U_s)⁺)) with two barriers L and U. By the results of [19], we know that there exists (Ỹ, Z̃, K̃⁺, K̃⁻) ∈ S²(0,T) × H²_d(0,T) × A²(0,T) × A²(0,T) satisfying

Ỹ_t = ξ + ∫_t^T f(s,(U_s)⁺) ds + K̃⁺_T − K̃⁺_t − (K̃⁻_T − K̃⁻_t) − ∫_t^T Z̃_s dB_s,

L_t ≤ Ỹ_t ≤ U_t,  ∫_0^T (Ỹ_t − L_t) dK̃⁺_t = ∫_0^T (Ỹ_t − U_t) dK̃⁻_t = 0.
We admit for the moment the following lemma, which will be proved later.

Lemma 5.1.3. For 0 ≤ s ≤ t ≤ T, K^{n,+}_t − K^{n,+}_s ≤ K̃⁺_t − K̃⁺_s, and K^{n,+}_T ≤ K̃⁺_T.

Now we have

E[(K^{n,+}_T)²] ≤ E[(K̃⁺_T)²] ≤ C.

Then, applying Itô's formula to |Y^n_t|² on the interval [t,T], by the same method as in step 2 we obtain the estimates

E[(K^{n,−}_T)²] + E ∫_0^T |Z^n_s|² ds ≤ C.   (5.46)
Since U^n_t − U_t ↓ 0 for each t ∈ [0,T], and U^n_t − U_t is continuous, by Dini's theorem again the convergence holds uniformly on the interval [0,T], i.e.

E[ sup_{0≤t≤T} (U^n_t − U_t)² ] → 0, as n → ∞.   (5.47)

Now we are in the same situation as in step 2. With the same arguments, we deduce that there exist processes Y ∈ S²(0,T), Z ∈ H²_d(0,T), K⁺ ∈ A²(0,T), K⁻ ∈ A²(0,T) such that, as n → ∞,

E[ sup_{0≤t≤T} |Y^n_t − Y_t|² + ∫_0^T |Z^n_s − Z_s|² ds + sup_{0≤t≤T} |K^{n,+}_t − K⁺_t|² + sup_{0≤t≤T} |K^{n,−}_t − K⁻_t|² ] → 0.   (5.48)
Moreover, since f is continuous and decreasing in y, with Y^n_t ↘ Y_t, 0 ≤ t ≤ T, we have f(t,Y_t) − f(t,Y^n_t) ↘ 0. By the monotone limit theorem and estimate (5.45), we get, as n → ∞,

E[( ∫_0^T (f(t,Y^n_t) − f(t,Y_t)) dt )²] → 0.   (5.49)

Then for 0 ≤ t ≤ T, (Y, Z, K⁺, K⁻) satisfies

Y_t = ξ + ∫_t^T f(s,Y_s) ds + K⁺_T − K⁺_t − (K⁻_T − K⁻_t) − ∫_t^T Z_s dB_s.   (5.50)
The last thing to check is that (3) and (4) of definition 5.1.1 are satisfied. Since for each n ∈ N, L_t ≤ Y^n_t ≤ U^n_t, 0 ≤ t ≤ T, with Y^n_t ↘ Y_t and U^n_t ↘ U_t, we get L_t ≤ Y_t ≤ U_t. On the other hand, the processes K^{n,+} and K^{n,−} are increasing, so the limits K⁺ and K⁻ are also increasing. Notice that (Y^n, K^{n,+}, K^{n,−}) tends to (Y, K⁺, K⁻) uniformly in t in probability, and U^n converges to U in S² as n → ∞; similarly to step 2, we get

∫_0^T (Y_t − L_t) dK⁺_t = ∫_0^T (Y_t − U_t) dK⁻_t = 0, a.s.
□

To complete the proof of step 3, we need to prove the following.

Proof of Lemma 5.1.3: Obviously f(s,(U_s)⁺) ∈ H²(0,T), in view of assumption 5.1.3-(i). Consider the following RBSDEs with one upper barrier: for n, m, p ∈ N,

Ŷ^{n,m,p}_t = ξ + ∫_t^T f(s,(U_s)⁺) ds + m ∫_t^T (Ŷ^{n,m,p}_s − L^p_s)⁻ ds − (K̂^{n,m,p,−}_T − K̂^{n,m,p,−}_t) − ∫_t^T Ẑ^{n,m,p}_s dB_s,
Ŷ^{n,m,p}_t ≤ U^n_t,  ∫_0^T (Ŷ^{n,m,p}_t − U^n_t) dK̂^{n,m,p,−}_t = 0,

and

Y^{n,m,p}_t = ξ + ∫_t^T f(s,Y^{n,m,p}_s) ds + m ∫_t^T (Y^{n,m,p}_s − L^p_s)⁻ ds − (K^{n,m,p,−}_T − K^{n,m,p,−}_t) − ∫_t^T Z^{n,m,p}_s dB_s,
Y^{n,m,p}_t ≤ U^n_t,  ∫_0^T (Y^{n,m,p}_t − U^n_t) dK^{n,m,p,−}_t = 0.
Since Y^{n,m,p}_t ≤ U^n_t ≤ (U_t)⁺, by the monotonicity of f we get f(t,Y^{n,m,p}_t) ≥ f(t,(U_t)⁺). So

f(t,(U_t)⁺) + m(y − L^p_t)⁻ ≤ f(t,Y^{n,m,p}_t) + m(y − L^p_t)⁻,

and from the general comparison theorem for RBSDEs with one barrier, theorem 3.4.2, we have Y^{n,m,p}_t ≥ Ŷ^{n,m,p}_t. Set

K^{n,m,p,+}_t = m ∫_0^t (Y^{n,m,p}_s − L^p_s)⁻ ds,  K̂^{n,m,p,+}_t = m ∫_0^t (Ŷ^{n,m,p}_s − L^p_s)⁻ ds;

then, for 0 ≤ s ≤ t ≤ T,

K^{n,m,p,+}_t − K^{n,m,p,+}_s ≤ K̂^{n,m,p,+}_t − K̂^{n,m,p,+}_s.
Notice that (L^p)⁺ and (U^n)⁻ are bounded; by the convergence results in step 1, we know that as m → ∞, K^{n,m,p,+}_t → K^{n,p,+}_t in L²(F_t), where K^{n,p,+}_t is the increasing process corresponding to the lower barrier L^p for the RBSDE(ξ, f, L^p, U^n). On the other hand, by the convergence result in [49], as m → ∞, K̂^{n,m,p,+}_t → K̂^{n,p,+}_t in L²(F_t), where K̂^{n,p,+}_t is the increasing process with respect to the barrier L^p for the RBSDE(ξ, f(t,(U_t)⁺), L^p, U^n). Consequently, for 0 ≤ s ≤ t ≤ T,

K^{n,p,+}_t − K^{n,p,+}_s ≤ K̂^{n,p,+}_t − K̂^{n,p,+}_s.

Then, thanks to the convergence result in step 2 for the approximation of the lower barrier L, we have that, as p → ∞,

K^{n,p,+}_t → K^{n,+}_t and K̂^{n,p,+}_t → K̂^{n,+}_t in L²(F_t).

Here K^{n,+} (resp. K̂^{n,+}) is the increasing process corresponding to the lower barrier L for the RBSDE(ξ, f, L, U^n) (resp. RBSDE(ξ, f(t,(U_t)⁺), L, U^n)). It follows that, for 0 ≤ s ≤ t ≤ T,

K^{n,+}_t − K^{n,+}_s ≤ K̂^{n,+}_t − K̂^{n,+}_s.

Finally, by the comparison theorem 5.2.1, since U_t ≤ U^n_t, ∀t ∈ [0,T], we get

K̂^{n,+}_t − K̂^{n,+}_s ≤ K̃⁺_t − K̃⁺_s,

where K̃⁺_t is the increasing process corresponding to the lower barrier L for the RBSDE(ξ, f(t,(U_t)⁺), L, U). So for 0 ≤ s ≤ t ≤ T,

K^{n,+}_t − K^{n,+}_s ≤ K̃⁺_t − K̃⁺_s.

In particular, K^{n,+}_T ≤ K̃⁺_T. □
Step 4. In this step, we partly relax the boundedness assumption on ξ and f(t,0). We only suppose that, for a constant c,

ξ ≥ c and inf_{0≤t≤T} f(t,0) ≥ c.   (5.51)

We approximate ξ and f(t,0) by sequences whose elements satisfy the boundedness assumption of step 3, as follows: for n ∈ N, set

ξⁿ = ξ ∧ n,  fⁿ(t,y) = f(t,y) − f(t,0) + f(t,0) ∧ n.

Obviously, (ξⁿ, fⁿ) satisfies the assumptions of step 3, and since ξ ∈ L²(F_T), f(t,0) ∈ H²(0,T),

E[|ξⁿ − ξ|²] → 0,  E ∫_0^T |f(t,0) − fⁿ(t,0)|² dt → 0,   (5.52)
as n → ∞. From the results of step 3, for each n ∈ N there exists (Y^n_t, Z^n_t, K^n_t)_{0≤t≤T} such that Y^n ∈ S²(0,T), Z^n ∈ H²_d(0,T), with K^n = K^{n,+} − K^{n,−} and K^{n,±} ∈ A²(0,T), which is the unique solution of the RBSDE(ξⁿ, fⁿ, L, U), i.e.

Y^n_t = ξⁿ + ∫_t^T fⁿ(s,Y^n_s) ds + K^{n,+}_T − K^{n,+}_t − (K^{n,−}_T − K^{n,−}_t) − ∫_t^T Z^n_s dB_s,   (5.53)

L_t ≤ Y^n_t ≤ U_t,  ∫_0^T (Y^n_t − L_t) dK^{n,+}_t = ∫_0^T (Y^n_t − U_t) dK^{n,−}_t = 0.
As in step 3, we consider the solution (Ȳ, Z̄, K̄) of the one lower barrier RBSDE(ξ, f, L) and the solution (Y̲, Z̲, K̲) of the one upper barrier RBSDE(ξ⁻, f̲, U), where ξ⁻ is the negative part of ξ and

f̲(t,y) = f(t,y) − f(t,0) + (f(t,0))⁻.

Then we can view the RBSDE(ξ, f, L) (resp. RBSDE(ξ⁻, f̲, U)) as a two-barrier RBSDE with parameters (ξ, f, L, Ū) (resp. (ξ⁻, f̲, L̲, U)), where Ū ≡ +∞ and L̲ ≡ −∞. By the comparison theorem 5.2.2, since

ξ ≥ ξⁿ ≥ ξ⁻,  f(t,y) ≥ fⁿ(t,y) ≥ f̲(t,y),

it follows that

Y̲_t ≤ Y^n_t ≤ Ȳ_t, 0 ≤ t ≤ T.

So

E[ sup_{0≤t≤T} |Y^n_t|² ] ≤ max{ E[ sup_{0≤t≤T} |Ȳ_t|² ], E[ sup_{0≤t≤T} |Y̲_t|² ] } ≤ C.   (5.54)
Then by the comparison theorem 5.2.5, since for all (s,y) ∈ [0,T] × R, n ∈ N, ξ¹ ≤ ξⁿ and f¹(s,y) ≤ fⁿ(s,y), we have K^{1,+}_t ≥ K^{n,+}_t ≥ 0 for 0 ≤ t ≤ T, so E[(K^{n,+}_t)²] ≤ E[(K^{1,+}_t)²] ≤ C. Following the same steps, we deduce that

E[( ∫_0^t f(s,Y^n_s) ds )²] + E ∫_0^T |Z^n_s|² ds + E[(K^{n,−}_t)²] + E[(K^{n,+}_t)²] ≤ C.   (5.55)

Due to the comparison theorem 5.2.2, since for all (s,y) ∈ [0,T] × R, n ∈ N, ξⁿ ≤ ξ^{n+1} and fⁿ(s,y) ≤ f^{n+1}(s,y), we have Y^n_t ≤ Y^{n+1}_t, 0 ≤ t ≤ T, a.s. Hence

Y^n_t ↗ Y_t, 0 ≤ t ≤ T, a.s.   (5.56)
5.1. Proof of theorem 5.1.2 145
Applying Itô's formula to $|Y^n_t - Y^p_t|^2$, for $n, p \in \mathbb{N}$, $n \ge p$, on $[t,T]$, we get
$$E|Y^n_t - Y^p_t|^2 + E\int_t^T |Z^n_s - Z^p_s|^2\,ds = E|\xi_n - \xi_p|^2 + 2E\int_t^T (Y^n_s - Y^p_s)(f_n(s, Y^n_s) - f_p(s, Y^p_s))\,ds$$
$$\qquad + 2E\int_t^T (Y^n_s - Y^p_s)\,d(K^{n,+}_s - K^{p,+}_s) - 2E\int_t^T (Y^n_s - Y^p_s)\,d(K^{n,-}_s - K^{p,-}_s)$$
$$\le E|\xi_n - \xi_p|^2 + E\int_t^T |Y^n_s - Y^p_s|^2\,ds + E\int_t^T |f_n(s,0) - f_p(s,0)|^2\,ds,$$
since
$$\int_t^T (Y^n_s - Y^p_s)\,d(K^{n,+}_s - K^{p,+}_s) - \int_t^T (Y^n_s - Y^p_s)\,d(K^{n,-}_s - K^{p,-}_s)$$
$$= \int_t^T (Y^n_s - L_s)\,dK^{n,+}_s + \int_t^T (Y^p_s - L_s)\,dK^{p,+}_s - \int_t^T (Y^n_s - L_s)\,dK^{p,+}_s - \int_t^T (Y^p_s - L_s)\,dK^{n,+}_s$$
$$\quad - \int_t^T (Y^n_s - U_s)\,dK^{n,-}_s - \int_t^T (Y^p_s - U_s)\,dK^{p,-}_s + \int_t^T (Y^n_s - U_s)\,dK^{p,-}_s + \int_t^T (Y^p_s - U_s)\,dK^{n,-}_s$$
$$\le 0.$$
Hence from Gronwall's inequality and (5.52), we deduce
$$\sup_{0\le t\le T} E|Y^n_t - Y^p_t|^2 \to 0, \qquad E\int_0^T |Z^n_s - Z^p_s|^2\,ds \to 0. \tag{5.57}$$
Consequently there exists $(Z_t)_{0\le t\le T} \in H^2_d(0,T)$ such that
$$E\int_0^T |Z^n_s - Z_s|^2\,ds \to 0. \tag{5.58}$$
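For the reader's convenience, the Gronwall step can be spelled out as follows (our own elaboration of the argument above):

```latex
% Set a_{n,p}(t) := E|Y^n_t - Y^p_t|^2 and
% \varepsilon_{n,p} := E|\xi_n - \xi_p|^2 + E\int_0^T |f_n(s,0) - f_p(s,0)|^2\,ds.
% The Ito estimate above gives the backward integral inequality
a_{n,p}(t) \;\le\; \varepsilon_{n,p} + \int_t^T a_{n,p}(s)\,ds ,
\qquad 0 \le t \le T,
% so Gronwall's inequality (applied backward in time) yields
a_{n,p}(t) \;\le\; \varepsilon_{n,p}\, e^{T-t} \;\le\; \varepsilon_{n,p}\, e^{T}.
% Since \varepsilon_{n,p} \to 0 as n,p \to \infty by (5.52), the first limit
% in (5.57) follows; plugging it back into the same estimate gives the second.
```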
Using Itô's formula again, taking the supremum and the expectation, and in view of the BDG inequality, $Y^n_t \ge Y^p_t$, assumption 5.1.2'-(v'') and $f_n(t,0) \ge f_p(t,0)$, we get
$$E[\sup_{0\le t\le T} |Y^n_t - Y^p_t|^2] \le E|\xi_n - \xi_p|^2 + 2E\Big[\sup_{0\le t\le T}\int_t^T (Y^n_s - Y^p_s)(f_n(s,0) - f_p(s,0))\,ds\Big]$$
$$\qquad + 2E\Big[\sup_{0\le t\le T}\Big|\int_t^T (Y^n_s - Y^p_s)(Z^n_s - Z^p_s)\,dB_s\Big|\Big]$$
$$\le E|\xi_n - \xi_p|^2 + 4TE\int_0^T |f_n(s,0) - f_p(s,0)|^2\,ds + \frac{1}{4}E[\sup_{0\le s\le T} |Y^n_s - Y^p_s|^2]$$
$$\qquad + \frac{1}{4}E[\sup_{0\le t\le T} |Y^n_t - Y^p_t|^2] + cE\int_0^T |Z^n_s - Z^p_s|^2\,ds.$$
From (5.52) and (5.57), it follows that $E[\sup_{0\le t\le T} |Y^n_t - Y^p_t|^2] \to 0$ as $n, p \to \infty$, i.e. the sequence $(Y^n)$ is a Cauchy sequence in the space $S^2(0,T)$. Consequently, with (5.56), we have $Y \in S^2(0,T)$ and
$$E[\sup_{0\le t\le T} |Y^n_t - Y_t|^2] \to 0. \tag{5.59}$$
By the comparison theorem 5.2.5, since for all $(s,y) \in [0,T]\times\mathbb{R}$ and $n \in \mathbb{N}$, $\xi_n \le \xi_{n+1}$ and $f_n(s,y) \le f_{n+1}(s,y)$, we have $K^{n,+}_t \ge K^{n+1,+}_t \ge 0$ and $0 \le K^{n,-}_t \le K^{n+1,-}_t$ for $0 \le t \le T$, so
$$K^{n,+}_t \searrow K^+_t, \qquad K^{n,-}_t \nearrow K^-_t. \tag{5.60}$$
With (5.55), by the monotone limit theorem it follows that $K^{n,+}_t \to K^+_t$ and $K^{n,-}_t \to K^-_t$ in $L^2(\mathcal{F}_t)$, and $E[(K^+_t)^2 + (K^-_t)^2] < \infty$; moreover $(K^+_t)_{0\le t\le T}$ and $(K^-_t)_{0\le t\le T}$ are increasing.

Notice that since $f(t,y)$ is decreasing and continuous in $y$, and $Y^n_t \nearrow Y_t$, we have $f(t, Y^n_t) \searrow f(t, Y_t)$. Then by the monotone limit theorem, $\int_0^t f(s, Y^n_s)\,ds \searrow \int_0^t f(s, Y_s)\,ds$. With (5.55), it follows that $\int_0^t f(s, Y^n_s)\,ds \to \int_0^t f(s, Y_s)\,ds$ in $L^2(\mathcal{F}_t)$, as $n \to \infty$.
Now we need to prove that the convergence of $K^{n,+}$ and $K^{n,-}$ holds in a stronger sense. Using again the comparison theorem 5.2.5, since for all $(s,y) \in [0,T]\times\mathbb{R}$ and $n, p \in \mathbb{N}$ with $n \ge p$, $\xi_p \le \xi_n$ and $f_p(s,y) \le f_n(s,y)$, we have for $0 \le s \le t \le T$,
$$K^{p,+}_t - K^{p,+}_s \ge K^{n,+}_t - K^{n,+}_s \ge 0,$$
$$K^{n,-}_t - K^{n,-}_s \ge K^{p,-}_t - K^{p,-}_s \ge 0.$$
Letting $n \to \infty$, for $t \in [0,T]$,
$$K^{p,+}_T - K^+_T \ge K^{p,+}_t - K^+_t \ge 0,$$
$$K^-_T - K^{p,-}_T \ge K^-_t - K^{p,-}_t \ge 0.$$
So as $p \to \infty$,
$$E\sup_{0\le t\le T} \big|K^{p,+}_t - K^+_t\big|^2 \le E\big|K^{p,+}_T - K^+_T\big|^2 \to 0,$$
$$E\sup_{0\le t\le T} \big|K^-_t - K^{p,-}_t\big|^2 \le E\big|K^-_T - K^{p,-}_T\big|^2 \to 0.$$
It remains to check that $(Y_t, Z_t, K_t)_{0\le t\le T}$ satisfies (3) and (4) of definition 4.1.1. Since $L_t \le Y^n_t \le U_t$, $0 \le t \le T$, letting $n \to \infty$ gives $L_t \le Y_t \le U_t$, $0 \le t \le T$, a.s. Furthermore $(Y^n, K^n)$ tends to $(Y, K)$ uniformly in $t$ in probability as $n \to \infty$, so the measures $dK^n \to dK$ weakly in probability as $n \to \infty$, i.e.
$$\int_0^T (Y^n_t - L_t)\,dK^{n,+}_t \to \int_0^T (Y_t - L_t)\,dK^+_t \quad\text{and}\quad \int_0^T (Y^n_t - U_t)\,dK^{n,-}_t \to \int_0^T (Y_t - U_t)\,dK^-_t,$$
in probability. Since $L_t \le Y_t \le U_t$, $0 \le t \le T$, we have $\int_0^T (Y_t - L_t)\,dK^+_t \ge 0$ and $\int_0^T (Y_t - U_t)\,dK^-_t \le 0$. On the other hand $\int_0^T (Y^n_t - L_t)\,dK^{n,+}_t = \int_0^T (Y^n_t - U_t)\,dK^{n,-}_t = 0$, so $\int_0^T (Y_t - L_t)\,dK^+_t = \int_0^T (Y_t - U_t)\,dK^-_t = 0$, i.e. the triple $(Y_t, Z_t, K_t)_{0\le t\le T}$ is the solution of the RBSDE$(\xi, f, L, U)$ under the assumption (5.51). $\Box$
Step 5. Now we consider a terminal condition $\xi \in L^2(\mathcal{F}_T)$ and a coefficient $f$ which satisfies assumption 5.1.2'. For $n \in \mathbb{N}$, set
$$\xi_n = \xi \vee (-n), \qquad f_n(t,y) = f(t,y) - f(t,0) + f(t,0) \vee (-n).$$
Obviously, $(\xi_n, f_n)$ satisfies the assumptions of step 4, and since $\xi \in L^2(\mathcal{F}_T)$ and $f(\cdot,0) \in H^2(0,T)$, we have, as $n \to \infty$,
$$E[|\xi_n - \xi|^2] \to 0, \qquad E\int_0^T |f(t,0) - f_n(t,0)|^2\,dt \to 0. \tag{5.61}$$
From the results of step 4, for each $n \in \mathbb{N}$ there exists a triple $(Y^n_t, Z^n_t, K^n_t)_{0\le t\le T}$, with $Y^n \in S^2(0,T)$, $Z^n \in H^2_d(0,T)$, $K^n = K^{n,+} - K^{n,-}$ and $K^{n,\pm} \in A^2(0,T)$, which is the unique solution of the RBSDE$(\xi_n, f_n, L, U)$. As in step 4, we consider the solution $(\overline{Y}, \overline{Z}, \overline{K})$ of the RBSDE$(\xi^+, \overline{f}, L)$ with one lower barrier, where $\xi^+$ is the positive part of $\xi$ and
$$\overline{f}(t,y) = f(t,y) - f(t,0) + (f(t,0))^+,$$
and the solution $(\underline{Y}, \underline{Z}, \underline{K})$ of the RBSDE$(\xi, f, U)$ with one upper barrier. Then we can regard the RBSDE$(\xi^+, \overline{f}, L)$ (resp. the RBSDE$(\xi, f, U)$) as an RBSDE with two barriers associated with the parameters $(\xi^+, \overline{f}, L, U)$ (resp. $(\xi, f, L, U)$), where $U = \infty$ and $L = -\infty$. By the comparison theorem 5.2.2, since $\forall (t,y) \in [0,T]\times\mathbb{R}$,
$$\xi^+ \ge \xi_n \ge \xi, \qquad \overline{f}(t,y) \ge f_n(t,y) \ge f(t,y),$$
it follows that
$$\underline{Y}_t \le Y^n_t \le \overline{Y}_t, \qquad 0 \le t \le T.$$
So
$$E[\sup_{0\le t\le T} |Y^n_t|^2] \le \max\Big\{ E[\sup_{0\le t\le T} |\underline{Y}_t|^2],\ E[\sup_{0\le t\le T} |\overline{Y}_t|^2] \Big\} \le C. \tag{5.62}$$
For $n, p \in \mathbb{N}$ with $n \ge p$, we have $\xi_n \le \xi_p$ and $f_n(t,y) \le f_p(t,y)$, $\forall (t,y) \in [0,T]\times\mathbb{R}$. We approximate $\xi_n$, $\xi_p$, $f_n(t,y)$ and $f_p(t,y)$ as follows:
$$\xi_{n,m} := \xi_n \wedge m, \qquad \xi_{p,m} := \xi_p \wedge m,$$
$$f_{n,m}(t,y) := f_n(t,y) - f_n(t,0) + f_n(t,0) \wedge m = f(t,y) - f(t,0) + (f(t,0) \vee (-n)) \wedge m,$$
$$f_{p,m}(t,y) := f_p(t,y) - f_p(t,0) + f_p(t,0) \wedge m = f(t,y) - f(t,0) + (f(t,0) \vee (-p)) \wedge m.$$
Then the parameters satisfy the assumptions of theorem 5.2.5, and
$$\xi_{n,m} \le \xi_{p,m}, \qquad f_{n,m}(t,y) \le f_{p,m}(t,y).$$
Consider the solution $(Y^{n,m}, Z^{n,m}, K^{n,m})$ (resp. $(Y^{p,m}, Z^{p,m}, K^{p,m})$) of the RBSDE$(\xi_{n,m}, f_{n,m}, L, U)$ (resp. RBSDE$(\xi_{p,m}, f_{p,m}, L, U)$); by the comparison theorem 5.2.5, for $0 \le s \le t \le T$ we have
$$Y^{n,m}_t \le Y^{p,m}_t, \quad K^{n,m,+}_t - K^{n,m,+}_s \ge K^{p,m,+}_t - K^{p,m,+}_s, \quad K^{n,m,-}_t - K^{n,m,-}_s \le K^{p,m,-}_t - K^{p,m,-}_s.$$
Then by the convergence results of step 4, letting $m \to \infty$ we get
$$Y^n_t \le Y^p_t, \quad K^{n,+}_t - K^{n,+}_s \ge K^{p,+}_t - K^{p,+}_s, \quad K^{n,-}_t - K^{n,-}_s \le K^{p,-}_t - K^{p,-}_s.$$
So we have $0 \le K^{n,-}_t \le K^{1,-}_t$, hence $E[(K^{n,-}_t)^2] \le E[(K^{1,-}_t)^2] \le C$. By the same method as in the previous step, we deduce that
$$E\Big[\Big(\int_0^t f(s, Y^n_s)\,ds\Big)^2\Big] + E[(K^{n,+}_T)^2] + E\int_0^T |Z^n_s|^2\,ds \le C. \tag{5.63}$$
Now we are in the same situation as in step 4; following the same method, we get that the sequence $(Y^n_t, Z^n_t, K^{n,+}_t, K^{n,-}_t)$ converges to $(Y_t, Z_t, K^+_t, K^-_t)$ as $n \to \infty$, in $S^2(0,T)\times H^2_d(0,T)\times A^2(0,T)\times A^2(0,T)$, and $(Y_t, Z_t, K^+_t, K^-_t)$ is the solution of the RBSDE$(\xi, f, L, U)$. $\Box$

We next present a property of the RBSDE with two barriers which characterizes where the increasing processes act.
Corollary 5.1.1. Assume, as in step 1, that
$$|\xi|^2 + \sup_{0\le t\le T} |f(t,0)|^2 + \sup_{0\le t\le T} L^+_t + \sup_{0\le t\le T} U^-_t \le c,$$
and let $(Y, Z, K^+, K^-)$ be the unique solution of the RBSDE$(\xi, f, L, U)$. Then $dK^+$ is supported on the random set $\{s : Y_s = L_s\}$, and $dK^-$ is supported on the random set $\{s : Y_s = U_s\}$.
Proof. The proof is similar to the one in [49]. Consider the solutions $(Y^n, Z^n, K^{n,+})$ of the RBSDE$(\xi, f_n, L)$, where $f_n(s,y) = f(s,y) - n(y - U_s)^+$, and set $K^{n,-}_t = n\int_0^t (Y^n_s - U_s)^+\,ds$. Then their limit $(Y, Z, K^+, K^-)$ in $S^2(0,T)\times H^2(0,T)\times A^2(0,T)\times A^2(0,T)$ is the solution of the RBSDE$(\xi, f, L, U)$. For each $\varepsilon > 0$, define the sequence of stopping times
$$\sigma_0 = 0; \quad \tau_1 = \inf\{s \ge \sigma_0 : Y_s = L_s\} \wedge T; \quad \sigma_1 = \inf\{s > \tau_1 : Y_s = L_s + \varepsilon\} \wedge T; \quad \tau_2 = \inf\{s > \sigma_1 : Y_s = L_s\} \wedge T;$$
and so on. Since $Y$ and $L$ are continuous, only a finite number of these stopping times are strictly smaller than $T$, from which we deduce
$$\sum_i K^+_{\tau_{i+1}} - \sum_i K^+_{\sigma_i} = \lim_{n\to\infty}\Big(\sum_i K^{n,+}_{\tau_{i+1}} - \sum_i K^{n,+}_{\sigma_i}\Big).$$
On the other hand, on the interval $[\sigma_i, \tau_{i+1})$ we have $Y_t > L_t$. Since $Y^n_t \ge Y_t$, we get on $[\sigma_i, \tau_{i+1})$ the strict inequality $Y^n_t > L_t$, therefore $K^{n,+}_{\tau_{i+1}} - K^{n,+}_{\sigma_i} = 0$. From this we deduce that the support of the random measure $dK^+$ is contained in
$$\cup_i [\tau_i, \sigma_i] \subseteq \{s : Y_s \le L_s + \varepsilon\}.$$
So the support of $dK^+$ is contained in the set $\{s : Y_s \le L_s + \varepsilon\}$. Since $\varepsilon$ is arbitrary, it follows that $dK^+$ is supported on $\{s : Y_s = L_s\}$.
We now prove in a similar way that $dK^-$ is supported on the set $\{s : Y_s = U_s\}$. Take $0 < \delta < \varepsilon$. Let $\rho = \inf\{s \ge 0 : Y_s = U_s - \delta\} \wedge T$ and define the sequence of stopping times $\overline{\sigma}_0 = 0$ if $Y_0 < U_0 - \delta$, $\overline{\sigma}_0 = \rho$ if $Y_0 \ge U_0 - \delta$, and
$$\overline{\tau}_1 = \inf\{s \ge \overline{\sigma}_0 : Y_s = U_s - \delta\} \wedge T; \quad \overline{\sigma}_1 = \inf\{s > \overline{\tau}_1 : Y_s = U_s - \varepsilon\} \wedge T; \quad \overline{\tau}_2 = \inf\{s > \overline{\sigma}_1 : Y_s = U_s - \delta\} \wedge T;$$
and so on. Again these stopping times are finite in number, so
$$\sum_i K^-_{\overline{\tau}_{i+1}} - \sum_i K^-_{\overline{\sigma}_i} = \lim_{n\to\infty}\Big(\sum_i K^{n,-}_{\overline{\tau}_{i+1}} - \sum_i K^{n,-}_{\overline{\sigma}_i}\Big).$$
Between $\overline{\sigma}_i$ and $\overline{\tau}_{i+1}$ we have $Y_t \le U_t - \delta$. From the uniform convergence of $Y^n$ to $Y$, for all $n$ large enough we have, between $\overline{\sigma}_i$ and $\overline{\tau}_{i+1}$, $Y^n_t < U_t - \frac{\delta}{2}$, and therefore $K^{n,-}_{\overline{\tau}_{i+1}} - K^{n,-}_{\overline{\sigma}_i} = 0$. So we get that the support of $dK^-$ is contained in
$$[0, \overline{\sigma}_0) \cup_i [\overline{\tau}_i, \overline{\sigma}_i] \subseteq \{s : Y_s \ge U_s - \varepsilon\}.$$
Since $\varepsilon$ is arbitrarily small, the support of $dK^-$ is contained in $\{s : Y_s = U_s\}$. $\Box$
5.2 Appendix : Comparison theorems
First we prove a comparison theorem for the increasing processes, under a Lipschitz assumption on $f$, via the penalization method of [49].
Theorem 5.2.1. Suppose that the parameters $(\xi^1, f^1, L^1, U^1)$ and $(\xi^2, f^2, L^2, U^2)$ satisfy the following conditions: for $i = 1, 2$,
(i) $\xi^i \in L^2(\mathcal{F}_T)$;
(ii) $f^i$ satisfies assumption 5.1.2 (i), (iii), (vi) and a Lipschitz condition in $(y,z)$, uniformly in $(t,\omega)$: there exists a constant $k$ such that, for $y, y' \in \mathbb{R}$, $z, z' \in \mathbb{R}^d$,
$$|f^i(t,y,z) - f^i(t,y',z')| \le k(|y - y'| + |z - z'|);$$
(iii) $L^i$ and $U^i$ are real-valued, progressively measurable and continuous, with $(L^i)^+, (U^i)^- \in S^2(0,T)$.
Let $(Y^i, Z^i, K^{i,+}, K^{i,-})$ be the solution of the RBSDE$(\xi^i, f^i, L^i, U^i)$, i.e.
$$Y^i_t = \xi^i + \int_t^T f^i(s, Y^i_s, Z^i_s)\,ds + K^{i,+}_T - K^{i,+}_t - (K^{i,-}_T - K^{i,-}_t) - \int_t^T Z^i_s\,dB_s,$$
with $L^i_t \le Y^i_t \le U^i_t$, $0 \le t \le T$, and $\int_0^T (Y^i_s - L^i_s)\,dK^{i,+}_s = \int_0^T (Y^i_s - U^i_s)\,dK^{i,-}_s = 0$. Moreover, we assume that $\forall (t,y,z) \in [0,T]\times\mathbb{R}\times\mathbb{R}^d$,
$$\xi^1 \le \xi^2, \qquad f^1(t,y,z) \le f^2(t,y,z).$$
Then we have:
(i) If $L^1 = L^2$ and $U^1 = U^2$, then $Y^1_t \le Y^2_t$, $K^{1,+}_t \ge K^{2,+}_t$, $K^{1,-}_t \le K^{2,-}_t$, for $t \in [0,T]$, and for $0 \le s \le t \le T$,
$$K^{1,+}_t - K^{1,+}_s \ge K^{2,+}_t - K^{2,+}_s, \qquad K^{1,-}_t - K^{1,-}_s \le K^{2,-}_t - K^{2,-}_s;$$
(ii) If $L^1_t \le L^2_t$ and $U^1_t = U^2_t$ for $0 \le t \le T$, then $Y^1_t \le Y^2_t$, and for $0 \le s \le t \le T$,
$$K^{1,-}_t - K^{1,-}_s \le K^{2,-}_t - K^{2,-}_s;$$
(iii) If $L^1_t = L^2_t$ and $U^1_t \le U^2_t$ for $0 \le t \le T$, then $Y^1_t \le Y^2_t$, and for $0 \le s \le t \le T$,
$$K^{1,+}_t - K^{1,+}_s \ge K^{2,+}_t - K^{2,+}_s.$$
Proof. (i) Set $L := L^1 = L^2$, $U := U^1 = U^2$, and consider the penalized equations: for $m, n \in \mathbb{N}$, $i = 1, 2$,
$$Y^{m,n,i}_t = \xi^i + \int_t^T f^i(s, Y^{m,n,i}_s, Z^{m,n,i}_s)\,ds + m\int_t^T (Y^{m,n,i}_s - L_s)^-\,ds - n\int_t^T (Y^{m,n,i}_s - U_s)^+\,ds - \int_t^T Z^{m,n,i}_s\,dB_s.$$
By the comparison theorem for BSDEs, since
$$\xi^1 \le \xi^2, \qquad f^1(t,y,z) + m(y - L_t)^- - n(y - U_t)^+ \le f^2(t,y,z) + m(y - L_t)^- - n(y - U_t)^+,$$
we have $Y^{m,n,1}_t \le Y^{m,n,2}_t$, $\forall t \in [0,T]$. Denote $K^{m,n,i,+}_t = m\int_0^t (Y^{m,n,i}_s - L_s)^-\,ds$ and $K^{m,n,i,-}_t = n\int_0^t (Y^{m,n,i}_s - U_s)^+\,ds$; then for $0 \le s \le t \le T$,
$$K^{m,n,1,+}_t - K^{m,n,1,+}_s \ge K^{m,n,2,+}_t - K^{m,n,2,+}_s,$$
$$K^{m,n,1,-}_t - K^{m,n,1,-}_s \le K^{m,n,2,-}_t - K^{m,n,2,-}_s.$$
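The passage from the comparison of the $Y$'s to the comparison of the increments of the $K$'s can be made explicit (our elaboration): since $y \mapsto (y - L_s)^-$ is nonincreasing and $y \mapsto (y - U_s)^+$ is nondecreasing,

```latex
Y^{m,n,1}_r \le Y^{m,n,2}_r
\;\Longrightarrow\;
(Y^{m,n,1}_r - L_r)^- \ge (Y^{m,n,2}_r - L_r)^- ,
\qquad
(Y^{m,n,1}_r - U_r)^+ \le (Y^{m,n,2}_r - U_r)^+ ,
% and integrating over [s,t]:
K^{m,n,1,+}_t - K^{m,n,1,+}_s
  = m\int_s^t (Y^{m,n,1}_r - L_r)^-\,dr
  \ge m\int_s^t (Y^{m,n,2}_r - L_r)^-\,dr
  = K^{m,n,2,+}_t - K^{m,n,2,+}_s ,
% and similarly for the K^{m,n,i,-} increments, with the inequality reversed.
```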
From the convergence results in [49], which also hold for Lipschitz coefficients,
$$\lim_{n\to\infty}\lim_{m\to\infty} Y^{m,n,i}_t = Y^i_t, \quad \lim_{n\to\infty}\lim_{m\to\infty} K^{m,n,i,+}_t = K^{i,+}_t, \quad \lim_{n\to\infty}\lim_{m\to\infty} K^{m,n,i,-}_t = K^{i,-}_t,$$
in $L^2(\mathcal{F}_t)$, where $Y^i$, $K^{i,+}$, $K^{i,-}$ are elements of the solution of the RBSDE$(\xi^i, f^i, L, U)$. Consequently, for $0 \le s \le t \le T$,
$$Y^1_t \le Y^2_t, \quad K^{1,+}_t - K^{1,+}_s \ge K^{2,+}_t - K^{2,+}_s, \quad K^{1,-}_t - K^{1,-}_s \le K^{2,-}_t - K^{2,-}_s;$$
setting $s = 0$, we get in particular $K^{1,+}_t \ge K^{2,+}_t$, $K^{1,-}_t \le K^{2,-}_t$.
(ii) Set $U := U^1 = U^2$ and consider the penalized reflected BSDEs: for $n \in \mathbb{N}$, $i = 1, 2$,
$$Y^{n,i}_t = \xi^i + \int_t^T f^i(s, Y^{n,i}_s, Z^{n,i}_s)\,ds + K^{n,i,+}_T - K^{n,i,+}_t - n\int_t^T (Y^{n,i}_s - U_s)^+\,ds - \int_t^T Z^{n,i}_s\,dB_s,$$
$$Y^{n,i}_t \ge L^i_t, \qquad \int_0^T (Y^{n,i}_t - L^i_t)\,dK^{n,i,+}_t = 0.$$
Since $\forall t \in [0,T]$,
$$\xi^1 \le \xi^2, \qquad f^1(t,y,z) - n(y - U_t)^+ \le f^2(t,y,z) - n(y - U_t)^+, \qquad L^1_t \le L^2_t,$$
by the comparison theorem for RBSDEs with one barrier, we have $Y^{n,1}_t \le Y^{n,2}_t$. Let $K^{n,i,-}_t = n\int_0^t (Y^{n,i}_s - U_s)^+\,ds$; then for $0 \le s \le t \le T$,
$$K^{n,1,-}_t - K^{n,1,-}_s \le K^{n,2,-}_t - K^{n,2,-}_s.$$
Thanks to the convergence result in [49], which still works for Lipschitz coefficients, $Y^{n,i} \to Y^i$ in $S^2(0,T)$ and $K^{n,i}_t \to K^i_t$ in $L^2(\mathcal{F}_t)$ as $n \to \infty$. So we have, for $0 \le s \le t \le T$,
$$Y^1_t \le Y^2_t, \qquad K^{1,-}_t - K^{1,-}_s \le K^{2,-}_t - K^{2,-}_s;$$
setting $s = 0$, we get in particular $K^{1,-}_t \le K^{2,-}_t$.
(iii) The proof is the same as for (ii), so we omit it. $\Box$

We next prove a comparison theorem for RBSDEs with two barriers in a general case.
Theorem 5.2.2 (General case for RBSDEs). Suppose that the parameters $(\xi^1, f^1, L^1, U^1)$ and $(\xi^2, f^2, L^2, U^2)$ satisfy assumptions 5.1.1, 5.1.2 and 5.1.3. Let $(Y^1, Z^1, K^{1,+}, K^{1,-})$ and $(Y^2, Z^2, K^{2,+}, K^{2,-})$ be the solutions of the RBSDE$(\xi^1, f^1, L^1, U^1)$ and the RBSDE$(\xi^2, f^2, L^2, U^2)$ respectively, i.e.
$$Y^i_t = \xi^i + \int_t^T f^i(s, Y^i_s, Z^i_s)\,ds + K^{i,+}_T - K^{i,+}_t - (K^{i,-}_T - K^{i,-}_t) - \int_t^T Z^i_s\,dB_s,$$
with $L^i_t \le Y^i_t \le U^i_t$, $0 \le t \le T$, and $\int_0^T (Y^i_s - L^i_s)\,dK^{i,+}_s = \int_0^T (Y^i_s - U^i_s)\,dK^{i,-}_s = 0$, $i = 1, 2$. Assume in addition that $\forall t \in [0,T]$,
$$\xi^1 \le \xi^2, \qquad f^1(t, Y^1_t, Z^1_t) \le f^2(t, Y^1_t, Z^1_t), \tag{5.64}$$
$$L^1_t \le L^2_t, \qquad U^1_t \le U^2_t.$$
Then $Y^1_t \le Y^2_t$, for $t \in [0,T]$.
Proof. Applying Itô's formula to $[(Y^1 - Y^2)^+]^2$ on the interval $[t,T]$, and taking expectations on both sides, we get immediately
$$E[((Y^1_t - Y^2_t)^+)^2] + E\int_t^T \mathbf{1}_{\{Y^1_s > Y^2_s\}} |Z^1_s - Z^2_s|^2\,ds$$
$$= E[((\xi^1 - \xi^2)^+)^2] + 2E\int_t^T (Y^1_s - Y^2_s)^+ \mathbf{1}_{\{Y^1_s > Y^2_s\}} \big(f^1(s, Y^1_s, Z^1_s) - f^2(s, Y^2_s, Z^2_s)\big)\,ds$$
$$\quad + 2E\int_t^T (Y^1_s - Y^2_s)^+\,d(K^{1,+}_s - K^{2,+}_s) - 2E\int_t^T (Y^1_s - Y^2_s)^+\,d(K^{1,-}_s - K^{2,-}_s).$$
Since on the set $\{Y^1_s > Y^2_s\}$ we have $Y^1_s > Y^2_s \ge L^2_s \ge L^1_s$ and $U^2_s \ge U^1_s \ge Y^1_s > Y^2_s$, we get
$$\int_t^T (Y^1_s - Y^2_s)^+\,d(K^{1,+}_s - K^{2,+}_s) = -\int_t^T (Y^1_s - Y^2_s)^+\,dK^{2,+}_s \le 0,$$
$$\int_t^T (Y^1_s - Y^2_s)^+\,d(K^{1,-}_s - K^{2,-}_s) = \int_t^T (Y^1_s - Y^2_s)^+\,dK^{1,-}_s \ge 0.$$
So by (5.64) and the Lipschitz and monotonicity conditions on $f^2$, it follows that
$$E[((Y^1_t - Y^2_t)^+)^2] + E\int_t^T \mathbf{1}_{\{Y^1_s > Y^2_s\}} |Z^1_s - Z^2_s|^2\,ds$$
$$\le 2E\int_t^T \mathbf{1}_{\{Y^1_s > Y^2_s\}} (Y^1_s - Y^2_s)\big(f^1(s, Y^1_s, Z^1_s) - f^2(s, Y^1_s, Z^1_s) + f^2(s, Y^1_s, Z^1_s) - f^2(s, Y^2_s, Z^2_s)\big)\,ds$$
$$\le 2E\int_t^T \mathbf{1}_{\{Y^1_s > Y^2_s\}} (Y^1_s - Y^2_s)\big(f^2(s, Y^1_s, Z^1_s) - f^2(s, Y^2_s, Z^2_s)\big)\,ds$$
$$\le 2\mu E\int_t^T \mathbf{1}_{\{Y^1_s > Y^2_s\}} (Y^1_s - Y^2_s)^2\,ds + 2kE\int_t^T \mathbf{1}_{\{Y^1_s > Y^2_s\}} (Y^1_s - Y^2_s)|Z^1_s - Z^2_s|\,ds$$
$$\le \frac{1}{2}E\int_t^T \mathbf{1}_{\{Y^1_s > Y^2_s\}} |Z^1_s - Z^2_s|^2\,ds + (2\mu + 4k^2)E\int_t^T [(Y^1_s - Y^2_s)^+]^2\,ds.$$
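The last line uses Young's inequality; for completeness (our elaboration), with $a = |Z^1_s - Z^2_s|$ and $b = (Y^1_s - Y^2_s)^+$ on $\{Y^1_s > Y^2_s\}$,

```latex
2k\,ab \;\le\; \tfrac{1}{2}a^2 + 2k^2 b^2 ,
% which follows from
0 \;\le\; \Big(\tfrac{1}{\sqrt{2}}\,a - \sqrt{2}\,k\,b\Big)^2
  \;=\; \tfrac{1}{2}a^2 - 2k\,ab + 2k^2 b^2 ;
% the |Z^1 - Z^2|^2 part is then absorbed into the left-hand side, and
% 2k^2 \le 4k^2 gives the (slightly larger) constant 2\mu + 4k^2 used above.
```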
Hence
$$E[((Y^1_t - Y^2_t)^+)^2] \le (2\mu + 4k^2)E\int_t^T [(Y^1_s - Y^2_s)^+]^2\,ds,$$
and from Gronwall's inequality we deduce $(Y^1_t - Y^2_t)^+ = 0$, $0 \le t \le T$. $\Box$

From the convergence of the penalization equations, we get the following comparison theorem.
Theorem 5.2.3 (Special case). Suppose that $f^1(s,y)$ and $f^2(s,y)$ satisfy assumption 5.1.2', and that $\xi^i$, $f^i(\cdot,0)$, $L$, $U$, $i = 1, 2$, satisfy (5.5). Let the two triples $(Y^1, Z^1, K^1)$ and $(Y^2, Z^2, K^2)$ be the solutions of the RBSDE$(\xi^1, f^1, L, U)$ and the RBSDE$(\xi^2, f^2, L, U)$ respectively, i.e.
$$Y^i_t = \xi^i + \int_t^T f^i(s, Y^i_s)\,ds + K^{i,+}_T - K^{i,+}_t - (K^{i,-}_T - K^{i,-}_t) - \int_t^T Z^i_s\,dB_s,$$
with $L_t \le Y^i_t \le U_t$, $0 \le t \le T$, $\int_0^T (Y^i_s - L_s)\,dK^{i,+}_s = \int_0^T (Y^i_s - U_s)\,dK^{i,-}_s = 0$ and $K^i = K^{i,+} - K^{i,-}$, $i = 1, 2$. If
$$\xi^1 \le \xi^2 \quad\text{and}\quad f^1(t,y) \le f^2(t,y), \ \forall (t,y) \in [0,T]\times\mathbb{R},$$
then $Y^1_t \le Y^2_t$, $K^{1,+}_t \ge K^{2,+}_t$, $K^{1,-}_t \le K^{2,-}_t$, for $t \in [0,T]$, and for $0 \le s \le t \le T$,
$$K^{1,+}_t - K^{1,+}_s \ge K^{2,+}_t - K^{2,+}_s, \qquad K^{1,-}_t - K^{1,-}_s \le K^{2,-}_t - K^{2,-}_s.$$
Proof. We consider the penalized equations relative to the RBSDE$(\xi^i, f^i, L, U)$: for $i = 1, 2$ and $m, n \in \mathbb{N}$,
$$Y^{m,n,i}_t = \xi^i + \int_t^T f^i(s, Y^{m,n,i}_s)\,ds + n\int_t^T (Y^{m,n,i}_s - L_s)^-\,ds - m\int_t^T (Y^{m,n,i}_s - U_s)^+\,ds - \int_t^T Z^{m,n,i}_s\,dB_s.$$
For each $m, n \in \mathbb{N}$,
$$f^{m,n,1}(s,y) := f^1(s,y) + n(y - L_s)^- - m(y - U_s)^+ \le f^{m,n,2}(s,y) := f^2(s,y) + n(y - L_s)^- - m(y - U_s)^+,$$
and $\xi^1 \le \xi^2$. So by the comparison theorem in [57], we get
$$Y^{m,n,1}_t \le Y^{m,n,2}_t, \qquad 0 \le t \le T.$$
Since $K^{m,n,i,+}_t = n\int_0^t (Y^{m,n,i}_s - L_s)^-\,ds$ and $K^{m,n,i,-}_t = m\int_0^t (Y^{m,n,i}_s - U_s)^+\,ds$, we deduce, for $0 \le s \le t \le T$,
$$K^{m,n,1,+}_t - K^{m,n,1,+}_s \ge K^{m,n,2,+}_t - K^{m,n,2,+}_s,$$
$$K^{m,n,1,-}_t - K^{m,n,1,-}_s \le K^{m,n,2,-}_t - K^{m,n,2,-}_s.$$
By the convergence results of step 1, letting $m \to \infty$ and then $n \to \infty$, we have $Y^{m,n,i}_t \to Y^i_t$, $K^{m,n,i,+}_t \to K^{i,+}_t$ and $K^{m,n,i,-}_t \to K^{i,-}_t$ a.s., for $i = 1, 2$; hence the inequalities hold for $0 \le s \le t \le T$:
$$Y^1_t \le Y^2_t, \quad K^{1,+}_t - K^{1,+}_s \ge K^{2,+}_t - K^{2,+}_s, \quad K^{1,-}_t - K^{1,-}_s \le K^{2,-}_t - K^{2,-}_s.$$
In particular, setting $s = 0$, we get $K^{1,+}_t \ge K^{2,+}_t$, $K^{1,-}_t \le K^{2,-}_t$. $\Box$
Corollary 5.2.1. Suppose that $f^1(s,y)$ and $f^2(s,y)$ satisfy assumption 5.1.2', and that $\xi^i$, $f^i(\cdot,0)$, $L^i$, $U^i$, $i = 1, 2$, satisfy (5.5). Let the two triples $(Y^1, Z^1, K^1)$ and $(Y^2, Z^2, K^2)$ be the solutions of the RBSDE$(\xi^1, f^1, L^1, U^1)$ and the RBSDE$(\xi^2, f^2, L^2, U^2)$ respectively, with $K^i = K^{i,+} - K^{i,-}$, $i = 1, 2$. In addition, assume
$$\xi^1 \le \xi^2 \quad\text{and}\quad f^1(t,y) \le f^2(t,y), \ \forall (t,y) \in [0,T]\times\mathbb{R}.$$
Then:
(i) If $L^1_t \le L^2_t$ and $U^1_t = U^2_t$ for $0 \le t \le T$, then $Y^1_t \le Y^2_t$, and for $0 \le s \le t \le T$,
$$K^{1,-}_t - K^{1,-}_s \le K^{2,-}_t - K^{2,-}_s;$$
(ii) If $L^1_t = L^2_t$ and $U^1_t \le U^2_t$ for $0 \le t \le T$, then $Y^1_t \le Y^2_t$, and for $0 \le s \le t \le T$,
$$K^{1,+}_t - K^{1,+}_s \ge K^{2,+}_t - K^{2,+}_s.$$
Proof. (i) To simplify notation, we denote $U = U^1 = U^2$. For $n \in \mathbb{N}$, we consider the following RBSDE with the single lower barrier $L^i$, $i = 1, 2$:
$$Y^{n,i}_t = \xi^i + \int_t^T f^i(s, Y^{n,i}_s)\,ds + K^{n,i,+}_T - K^{n,i,+}_t - n\int_t^T (Y^{n,i}_s - U_s)^+\,ds - \int_t^T Z^{n,i}_s\,dB_s,$$
$$Y^{n,i}_t \ge L^i_t, \qquad \int_0^T (Y^{n,i}_t - L^i_t)\,dK^{n,i,+}_t = 0.$$
Since $\xi^1 \le \xi^2$, $f^1(t,y) \le f^2(t,y)$ and $L^1_t \le L^2_t$, by the general comparison theorem for RBSDEs with one barrier we know that $Y^{n,1}_t \le Y^{n,2}_t$. Denote $K^{n,1,-}_t = n\int_0^t (Y^{n,1}_s - U_s)^+\,ds$ and $K^{n,2,-}_t = n\int_0^t (Y^{n,2}_s - U_s)^+\,ds$; then for $0 \le s \le t \le T$,
$$K^{n,1,-}_t - K^{n,1,-}_s \le K^{n,2,-}_t - K^{n,2,-}_s.$$
Thanks to the convergence result of step 1 of theorem 5.1.2, we know that as $n \to \infty$, for $i = 1, 2$,
$$Y^{n,i} \to Y^i \ \text{in } S^2(0,T) \quad\text{and}\quad K^{n,i,-}_t \to K^{i,-}_t \ \text{in } L^2(\mathcal{F}_t).$$
It follows immediately that for $0 \le s \le t \le T$,
$$Y^1_t \le Y^2_t \quad\text{and}\quad K^{1,-}_t - K^{1,-}_s \le K^{2,-}_t - K^{2,-}_s.$$
In particular, with $s = 0$, we get $K^{1,-}_t \le K^{2,-}_t$.
(ii) follows similarly to (i), so we omit it. $\Box$
Theorem 5.2.4. Suppose that $f^1(s,y)$ and $f^2(s,y)$ satisfy assumption 5.1.2', that $\xi^i$, $f^i(\cdot,0)$, $U^i$, $i = 1, 2$, satisfy (5.23), and that $L^i$ satisfies assumption 5.1.3'-(i). Let the two triples $(Y^1, Z^1, K^1)$ and $(Y^2, Z^2, K^2)$ be the solutions of the RBSDE$(\xi^1, f^1, L^1, U^1)$ and the RBSDE$(\xi^2, f^2, L^2, U^2)$ respectively, i.e.
$$Y^i_t = \xi^i + \int_t^T f^i(s, Y^i_s)\,ds + K^{i,+}_T - K^{i,+}_t - (K^{i,-}_T - K^{i,-}_t) - \int_t^T Z^i_s\,dB_s,$$
with $L^i_t \le Y^i_t \le U^i_t$, $0 \le t \le T$, and $\int_0^T (Y^i_s - L^i_s)\,dK^{i,+}_s = \int_0^T (Y^i_s - U^i_s)\,dK^{i,-}_s = 0$, $i = 1, 2$. Moreover, assume
$$\xi^1 \le \xi^2, \qquad f^1(t,y) \le f^2(t,y), \ \forall (t,y) \in [0,T]\times\mathbb{R}.$$
Then:
(i) If $L^1 = L^2$ and $U^1 = U^2$, then $Y^1_t \le Y^2_t$, $K^{1,+}_t \ge K^{2,+}_t$, $K^{1,-}_t \le K^{2,-}_t$, for $t \in [0,T]$, and for $0 \le s \le t \le T$,
$$K^{1,+}_t - K^{1,+}_s \ge K^{2,+}_t - K^{2,+}_s, \qquad K^{1,-}_t - K^{1,-}_s \le K^{2,-}_t - K^{2,-}_s;$$
(ii) If $L^1_t \le L^2_t$ and $U^1_t = U^2_t$ for $0 \le t \le T$, then $Y^1_t \le Y^2_t$, and for $0 \le s \le t \le T$,
$$K^{1,-}_t - K^{1,-}_s \le K^{2,-}_t - K^{2,-}_s;$$
(iii) If $L^1_t = L^2_t$ and $U^1_t \le U^2_t$ for $0 \le t \le T$, then $Y^1_t \le Y^2_t$, and for $0 \le s \le t \le T$,
$$K^{1,+}_t - K^{1,+}_s \ge K^{2,+}_t - K^{2,+}_s.$$
Proof. As in step 2 of the proof of theorem 5.1.2, we approximate the barrier $L^i$ by the barrier $L^{n,i} = L^i \wedge n$, which is bounded from above.
(i) Set $L := L^1 = L^2$, $U := U^1 = U^2$, and $L^n = L \wedge n$. Then consider the RBSDE$(\xi^i, f^i, L^n, U)$, for $i = 1, 2$:
$$Y^{n,i}_t = \xi^i + \int_t^T f^i(s, Y^{n,i}_s)\,ds + K^{n,i,+}_T - K^{n,i,+}_t - (K^{n,i,-}_T - K^{n,i,-}_t) - \int_t^T Z^{n,i}_s\,dB_s,$$
$$L^n_t \le Y^{n,i}_t \le U_t, \qquad \int_0^T (Y^{n,i}_s - L^n_s)\,dK^{n,i,+}_s = \int_0^T (Y^{n,i}_s - U_s)\,dK^{n,i,-}_s = 0.$$
Since
$$\xi^1 \le \xi^2, \qquad f^1(t,y) \le f^2(t,y),$$
from comparison theorem 5.2.3 we have, for $0 \le s \le t \le T$,
$$Y^{n,1}_t \le Y^{n,2}_t, \quad K^{n,1,+}_t - K^{n,1,+}_s \ge K^{n,2,+}_t - K^{n,2,+}_s, \quad K^{n,1,-}_t - K^{n,1,-}_s \le K^{n,2,-}_t - K^{n,2,-}_s.$$
Thanks to the convergence results in step 2 of the proof of theorem 5.1.2, for $i = 1, 2$, $Y^{n,i} \to Y^i$ in $S^2(0,T)$, and $K^{n,i,+}_t \to K^{i,+}_t$, $K^{n,i,-}_t \to K^{i,-}_t$ in $L^2(\mathcal{F}_t)$, where $Y^i$, $K^{i,+}$ and $K^{i,-}$ are elements of the solution of the RBSDE$(\xi^i, f^i, L, U)$. Passing to the limit in the inequalities, we get, for $0 \le s \le t \le T$,
$$Y^1_t \le Y^2_t, \quad K^{1,+}_t - K^{1,+}_s \ge K^{2,+}_t - K^{2,+}_s, \quad K^{1,-}_t - K^{1,-}_s \le K^{2,-}_t - K^{2,-}_s.$$
In particular, with $s = 0$, we get $K^{1,+}_t \ge K^{2,+}_t$ and $K^{1,-}_t \le K^{2,-}_t$, for $t \in [0,T]$.
(ii) Set $U := U^1 = U^2$ and $L^{n,i} = L^i \wedge n$. Then we consider the solutions $(Y^{n,i}, Z^{n,i}, K^{n,i})$ of the RBSDE$(\xi^i, f^i, L^{n,i}, U)$, for $i = 1, 2$. Since
$$\xi^1 \le \xi^2, \qquad f^1(t,y) \le f^2(t,y), \qquad L^{n,1}_t \le L^{n,2}_t,$$
from corollary 5.2.1 we have, for $0 \le s \le t \le T$, $Y^{n,1}_t \le Y^{n,2}_t$ and $K^{n,1,-}_t - K^{n,1,-}_s \le K^{n,2,-}_t - K^{n,2,-}_s$. Then, by the convergence results in step 2 of the proof of theorem 5.1.2, it follows that for $0 \le s \le t \le T$,
$$Y^1_t \le Y^2_t, \qquad K^{1,-}_t - K^{1,-}_s \le K^{2,-}_t - K^{2,-}_s.$$
In particular, with $s = 0$, we get $K^{1,-}_t \le K^{2,-}_t$, for $t \in [0,T]$.
(iii) The proof is similar to (ii), following from corollary 5.2.1 and the convergence results in step 2 of the proof of theorem 5.1.2, so we omit it. $\Box$
Theorem 5.2.5. Suppose that $f^1(s,y)$ and $f^2(s,y)$ satisfy assumption 5.1.2, that $\xi^i$, $f^i(\cdot,0)$, $i = 1, 2$, satisfy (5.40), and that $L^i$ and $U^i$ satisfy 5.1.3'. Let the two triples $(Y^1, Z^1, K^1)$ and $(Y^2, Z^2, K^2)$ be the solutions of the RBSDE$(\xi^1, f^1, L^1, U^1)$ and the RBSDE$(\xi^2, f^2, L^2, U^2)$ respectively. Moreover, assume
$$\xi^1 \le \xi^2, \qquad f^1(t,y) \le f^2(t,y), \ \forall (t,y) \in [0,T]\times\mathbb{R}.$$
Then:
(i) If $L^1 = L^2$ and $U^1 = U^2$, then $Y^1_t \le Y^2_t$, $K^{1,+}_t \ge K^{2,+}_t$, $K^{1,-}_t \le K^{2,-}_t$, for $t \in [0,T]$, and for $0 \le s \le t \le T$,
$$K^{1,+}_t - K^{1,+}_s \ge K^{2,+}_t - K^{2,+}_s, \qquad K^{1,-}_t - K^{1,-}_s \le K^{2,-}_t - K^{2,-}_s;$$
(ii) If $L^1_t \le L^2_t$ and $U^1_t = U^2_t$ for $0 \le t \le T$, then $Y^1_t \le Y^2_t$, and for $0 \le s \le t \le T$,
$$K^{1,-}_t - K^{1,-}_s \le K^{2,-}_t - K^{2,-}_s;$$
(iii) If $L^1_t = L^2_t$ and $U^1_t \le U^2_t$ for $0 \le t \le T$, then $Y^1_t \le Y^2_t$, and for $0 \le s \le t \le T$,
$$K^{1,+}_t - K^{1,+}_s \ge K^{2,+}_t - K^{2,+}_s.$$
Proof. As in theorem 5.2.4, we approximate the barrier $U^i$ by the barrier $U^{n,i} = U^i \vee (-n)$, which is bounded from below; the results then follow from the comparison theorem 5.2.4 and the convergence results of step 3 in the proof of theorem 5.1.2, so we omit the details. $\Box$
Theorem 5.2.6. Suppose that for $i = 1, 2$, $\xi^i$ satisfies assumption 5.1.1, $f^i$ does not depend on $z$ and satisfies 5.1.2, and $L^i$ and $U^i$ satisfy 5.1.3. Let $(Y^1, Z^1, K^{1,+}, K^{1,-})$ and $(Y^2, Z^2, K^{2,+}, K^{2,-})$ be the solutions of the RBSDE$(\xi^1, f^1, L^1, U^1)$ and the RBSDE$(\xi^2, f^2, L^2, U^2)$, respectively. Moreover, assume that for $(t,y) \in [0,T]\times\mathbb{R}$,
$$\xi^1 \le \xi^2, \qquad f^1(t,y) \le f^2(t,y), \qquad f^1(t,0) = f^2(t,0).$$
Then:
(i) If $L^1 = L^2$ and $U^1 = U^2$, then $Y^1_t \le Y^2_t$, $K^{1,+}_t \ge K^{2,+}_t$, $K^{1,-}_t \le K^{2,-}_t$, for $t \in [0,T]$, and for $0 \le s \le t \le T$,
$$K^{1,+}_t - K^{1,+}_s \ge K^{2,+}_t - K^{2,+}_s, \qquad K^{1,-}_t - K^{1,-}_s \le K^{2,-}_t - K^{2,-}_s;$$
(ii) If $L^1_t \le L^2_t$ and $U^1_t = U^2_t$ for $0 \le t \le T$, then $Y^1_t \le Y^2_t$, and for $0 \le s \le t \le T$,
$$K^{1,-}_t - K^{1,-}_s \le K^{2,-}_t - K^{2,-}_s;$$
(iii) If $L^1_t = L^2_t$ and $U^1_t \le U^2_t$ for $0 \le t \le T$, then $Y^1_t \le Y^2_t$, and for $0 \le s \le t \le T$,
$$K^{1,+}_t - K^{1,+}_s \ge K^{2,+}_t - K^{2,+}_s.$$
Proof. (i) Set $L := L^1 = L^2$, $U := U^1 = U^2$. As in the proof of theorem 5.1.2, for $i = 1, 2$ set
$$(\bar{Y}^i_t, \bar{Z}^i_t, \bar{K}^{i,+}_t, \bar{K}^{i,-}_t) := \Big(e^{\lambda t} Y^i_t,\ e^{\lambda t} Z^i_t,\ \int_0^t e^{\lambda s}\,dK^{i,+}_s,\ \int_0^t e^{\lambda s}\,dK^{i,-}_s\Big).$$
Then it is easy to check that, for $i = 1, 2$, $(\bar{Y}^i_t, \bar{Z}^i_t, \bar{K}^{i,+}_t, \bar{K}^{i,-}_t)_{0\le t\le T}$ is the solution of the RBSDE$(\bar{\xi}^i, \bar{f}^i, \bar{L}, \bar{U})$, where
$$(\bar{\xi}^i, \bar{f}^i(t,y), \bar{L}_t, \bar{U}_t) = (e^{\lambda T}\xi^i,\ e^{\lambda t} f^i(t, e^{-\lambda t} y) - \lambda y,\ e^{\lambda t} L_t,\ e^{\lambda t} U_t).$$
If we take $\lambda = \mu$, then $(\bar{\xi}^i, \bar{f}^i, \bar{L}, \bar{U})$ satisfies assumptions 5.1.1, 5.1.2' and 5.1.3'. Since the transform preserves monotonicity, the results are equivalent to
$$\bar{Y}^1_t \le \bar{Y}^2_t, \quad \bar{K}^{1,+}_t - \bar{K}^{1,+}_s \ge \bar{K}^{2,+}_t - \bar{K}^{2,+}_s, \quad \bar{K}^{1,-}_t - \bar{K}^{1,-}_s \le \bar{K}^{2,-}_t - \bar{K}^{2,-}_s, \tag{5.65}$$
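That the transformed quadruple indeed solves the RBSDE$(\bar{\xi}^i, \bar{f}^i, \bar{L}, \bar{U})$ can be checked by integration by parts (our own elaboration of the "easy to check" step above):

```latex
% From the RBSDE, dY^i_t = -f^i(t,Y^i_t)\,dt - dK^{i,+}_t + dK^{i,-}_t + Z^i_t\,dB_t, so
d\bar{Y}^i_t = \lambda e^{\lambda t}Y^i_t\,dt + e^{\lambda t}\,dY^i_t
  = -\big(e^{\lambda t}f^i(t, e^{-\lambda t}\bar{Y}^i_t) - \lambda\bar{Y}^i_t\big)dt
    - d\bar{K}^{i,+}_t + d\bar{K}^{i,-}_t + \bar{Z}^i_t\,dB_t
  = -\bar{f}^i(t,\bar{Y}^i_t)\,dt - d\bar{K}^{i,+}_t + d\bar{K}^{i,-}_t + \bar{Z}^i_t\,dB_t .
% Moreover, the monotonicity constant of \bar{f}^i in y is \mu - \lambda:
\langle y - y',\ \bar{f}^i(t,y) - \bar{f}^i(t,y')\rangle \le (\mu - \lambda)\,|y - y'|^2 ,
% so the choice \lambda = \mu makes \bar{f}^i nonincreasing in y, as required
% by assumption 5.1.2'.
```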
for $0 \le s \le t \le T$. We then make the approximations
$$\bar{\xi}^{m,n,i} := \bar{\xi}^{n,i} \wedge m := (\bar{\xi}^i \vee (-n)) \wedge m,$$
$$\bar{f}^i_{m,n}(t,y) := \bar{f}^i_n(t,y) - \bar{f}^i_n(t,0) + \bar{f}^i_n(t,0) \wedge m := \bar{f}^i(t,y) - \bar{f}^i(t,0) + (\bar{f}^i(t,0) \vee (-n)) \wedge m.$$
For $i = 1, 2$, let $(\bar{Y}^{m,n,i}_t, \bar{Z}^{m,n,i}_t, \bar{K}^{m,n,i,+}_t, \bar{K}^{m,n,i,-}_t)_{0\le t\le T}$ be the solution of the RBSDE$(\bar{\xi}^{m,n,i}, \bar{f}^i_{m,n}, \bar{L}, \bar{U})$; then $\bar{\xi}^{m,n,i}$ and $\bar{f}^i_{m,n}$ satisfy
$$|\bar{\xi}^{m,n,i}| + \sup_{0\le t\le T} |\bar{f}^i_{m,n}(t,0)| \le c,$$
and
$$\bar{\xi}^{m,n,1} \le \bar{\xi}^{m,n,2} \quad\text{and}\quad \bar{f}^1_{m,n}(t,y) \le \bar{f}^2_{m,n}(t,y), \ \text{for } (t,y) \in [0,T]\times\mathbb{R},$$
in view of $\bar{f}^1(t,0) = \bar{f}^2(t,0)$, which follows from $f^1(t,0) = f^2(t,0)$. Using the comparison theorem 5.2.5-(i), we have, for $0 \le s \le t \le T$,
$$\bar{Y}^{m,n,1}_t \le \bar{Y}^{m,n,2}_t,$$
$$\bar{K}^{m,n,1,+}_t - \bar{K}^{m,n,1,+}_s \ge \bar{K}^{m,n,2,+}_t - \bar{K}^{m,n,2,+}_s, \qquad \bar{K}^{m,n,1,-}_t - \bar{K}^{m,n,1,-}_s \le \bar{K}^{m,n,2,-}_t - \bar{K}^{m,n,2,-}_s.$$
By the convergence results of step 4 of the proof of theorem 5.1.2, letting $m \to \infty$ we get, for $i = 1, 2$,
$$(\bar{Y}^{m,n,i}_t)_{0\le t\le T} \to (\bar{Y}^{n,i}_t)_{0\le t\le T} \ \text{in } S^2(0,T),$$
$$(\bar{Z}^{m,n,i}_t)_{0\le t\le T} \to (\bar{Z}^{n,i}_t)_{0\le t\le T} \ \text{in } H^2_d(0,T),$$
$$(\bar{K}^{m,n,i,+}_t, \bar{K}^{m,n,i,-}_t)_{0\le t\le T} \to (\bar{K}^{n,i,+}_t, \bar{K}^{n,i,-}_t)_{0\le t\le T} \ \text{in } A^2(0,T)\times A^2(0,T),$$
where $(\bar{Y}^{n,i}_t, \bar{Z}^{n,i}_t, \bar{K}^{n,i,+}_t, \bar{K}^{n,i,-}_t)_{0\le t\le T}$ is the solution of the RBSDE$(\bar{\xi}^{n,i}, \bar{f}^i_n, \bar{L}, \bar{U})$, and for $0 \le s \le t \le T$,
$$\bar{Y}^{n,1}_t \le \bar{Y}^{n,2}_t, \quad \bar{K}^{n,1,+}_t - \bar{K}^{n,1,+}_s \ge \bar{K}^{n,2,+}_t - \bar{K}^{n,2,+}_s, \quad \bar{K}^{n,1,-}_t - \bar{K}^{n,1,-}_s \le \bar{K}^{n,2,-}_t - \bar{K}^{n,2,-}_s.$$
Then by the convergence in step 5, for $i = 1, 2$, $(\bar{Y}^{n,i}_t, \bar{Z}^{n,i}_t, \bar{K}^{n,i,+}_t, \bar{K}^{n,i,-}_t)_{0\le t\le T} \to (\bar{Y}^i_t, \bar{Z}^i_t, \bar{K}^{i,+}_t, \bar{K}^{i,-}_t)_{0\le t\le T}$ in $S^2(0,T)\times H^2_d(0,T)\times A^2(0,T)\times A^2(0,T)$, as $n \to \infty$, which is the solution of the RBSDE$(\bar{\xi}^i, \bar{f}^i, \bar{L}, \bar{U})$. Finally, we get, for $0 \le s \le t \le T$,
$$\bar{Y}^1_t \le \bar{Y}^2_t, \quad \bar{K}^{1,+}_t - \bar{K}^{1,+}_s \ge \bar{K}^{2,+}_t - \bar{K}^{2,+}_s, \quad \bar{K}^{1,-}_t - \bar{K}^{1,-}_s \le \bar{K}^{2,-}_t - \bar{K}^{2,-}_s.$$
In particular, with $s = 0$, it follows that $\bar{K}^{1,+}_t \ge \bar{K}^{2,+}_t$ and $\bar{K}^{1,-}_t \le \bar{K}^{2,-}_t$.
(ii) and (iii) follow from comparison theorem 5.2.5-(ii) and (iii), with the same approximation as in (i), so we omit them. $\Box$
Chapitre 6

Sobolev solution for semilinear PDE with obstacle under monotonicity condition
In this chapter, we study Sobolev solutions of semilinear PDEs, and of PDEs with a continuous obstacle, under the monotonicity condition (1.7). By approximation, we prove the existence of the solution and give the probabilistic interpretation of the solution $u$ and of $\nabla u$ (resp. of $(u, \nabla u, \nu)$) via the solution $(Y, Z)$ of a backward SDE (resp. the solution $(Y, Z, K)$ of a reflected backward SDE).

This chapter is organized as follows: in section 6.1 we present the basic assumptions and the definitions of the solutions of the PDE and of the PDE with obstacle; in section 6.2 we recall some useful results of [5] on stochastic flows. We then prove the existence of the Sobolev solution and give its probabilistic interpretation, for the PDE and for the PDE with continuous obstacle under the monotonicity condition, in sections 6.3 and 6.4 respectively.
6.1 Notations and preliminaries
Let $(\Omega, \mathcal{F}, P)$ be a complete probability space, and let $B = (B^1, B^2, \cdots, B^d)^*$ be a $d$-dimensional Brownian motion defined on a finite interval $[0,T]$, $0 < T < +\infty$. Denote by $\{\mathcal{F}^t_s;\ t \le s \le T\}$ the natural filtration generated by the Brownian motion $B$:
$$\mathcal{F}^t_s = \sigma\{B_r - B_t;\ t \le r \le s\} \cup \mathcal{F}_0,$$
where $\mathcal{F}_0$ contains all $P$-null sets of $\mathcal{F}$.
We will need the following spaces for the study of BSDEs and reflected BSDEs. For any given $n \in \mathbb{N}$:
– $L^2_n(\mathcal{F}^t_s)$: the set of $n$-dimensional $\mathcal{F}^t_s$-measurable random variables $\xi$ such that $E(|\xi|^2) < +\infty$;
– $H^2_{n\times m}(t,T)$: the set of $\mathbb{R}^{n\times m}$-valued $\mathcal{F}^t_s$-predictable processes $\psi$ on the interval $[t,T]$ such that $E\int_t^T \|\psi(s)\|^2\,ds < +\infty$;
– $S^2_n(t,T)$: the set of $n$-dimensional $\mathcal{F}^t_s$-progressively measurable processes $\psi$ on the interval $[t,T]$ such that $E(\sup_{t\le s\le T} \|\psi(s)\|^2) < +\infty$.
Finally, we denote by $\mathcal{P}$ the $\sigma$-algebra of predictable sets on $[0,T]\times\Omega$. In the real-valued case, i.e. when $n = 1$, these spaces will simply be denoted by $L^2(\mathcal{F}^t_s)$, $H^2(t,T)$ and $S^2(t,T)$, respectively.
For the Sobolev solution of the PDE, the following notation is needed:
– $C^m_b(\mathbb{R}^d, \mathbb{R}^n)$: the set of $C^m$-functions $f : \mathbb{R}^d \to \mathbb{R}^n$ whose partial derivatives of order less than or equal to $m$ are bounded (the functions themselves need not be bounded);
– $C^{1,m}_c([0,T]\times\mathbb{R}^d, \mathbb{R}^n)$: the set of continuous functions $f : [0,T]\times\mathbb{R}^d \to \mathbb{R}^n$ with compact support, whose first partial derivative with respect to $t$ and whose partial derivatives of order less than or equal to $m$ with respect to $x$ exist;
– $\rho : \mathbb{R}^d \to \mathbb{R}$: the weight function, a continuous positive function satisfying $\int_{\mathbb{R}^d} \rho(x)\,dx < \infty$;
– $L^2(\mathbb{R}^d, \rho(x)dx)$: the weighted $L^2$-space with weight function $\rho$, endowed with the norm
$$\|u\|^2_{L^2(\mathbb{R}^d,\rho)} = \int_{\mathbb{R}^d} |u(x)|^2 \rho(x)\,dx.$$
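As a concrete illustration (our own example; the text does not fix a particular $\rho$), any polynomially decaying weight qualifies:

```latex
% Take \rho(x) = (1 + |x|)^{-q} with q > d. Then \rho is continuous, positive, and
\int_{\mathbb{R}^d} (1 + |x|)^{-q}\,dx
  = c_d \int_0^\infty \frac{r^{d-1}}{(1 + r)^{q}}\,dr < \infty
  \quad\Longleftrightarrow\quad q > d ,
% where c_d is the surface area of the unit sphere in \mathbb{R}^d;
% hence \int_{\mathbb{R}^d}\rho(x)\,dx < \infty, as required of a weight function.
```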
We assume:

Assumption 6.1.1. $g(\cdot) \in L^2(\mathbb{R}^d, \rho(x)dx)$.

Assumption 6.1.2. $f : [0,T]\times\mathbb{R}^d\times\mathbb{R}^n\times\mathbb{R}^{n\times d} \to \mathbb{R}^n$ is measurable in $(t,x,y,z)$ and
$$\int_0^T \int_{\mathbb{R}^d} |f(t,x,0,0)|^2 \rho(x)\,dx\,dt < \infty.$$

Assumption 6.1.3. $f$ satisfies a growth and monotonicity condition in $y$: for some continuous increasing function $\varphi : \mathbb{R}^+ \to \mathbb{R}^+$ and real numbers $k > 0$, $\mu \in \mathbb{R}$, we have, $\forall (t,x,y,y',z,z') \in [0,T]\times\mathbb{R}^d\times\mathbb{R}^n\times\mathbb{R}^n\times\mathbb{R}^{n\times d}\times\mathbb{R}^{n\times d}$:
(i) $|f(t,x,y,z)| \le |f(t,x,0,z)| + \varphi(|y|)$;
(ii) $|f(t,x,y,z) - f(t,x,y,z')| \le k|z - z'|$;
(iii) $\langle y - y', f(t,x,y,z) - f(t,x,y',z)\rangle \le \mu|y - y'|^2$;
(iv) $y \mapsto f(t,x,y,z)$ is continuous.

For the PDE with obstacle, we consider that $f$ satisfies assumptions 6.1.2 and 6.1.3 with $n = 1$.

Assumption 6.1.4. The obstacle function $h \in C([0,T]\times\mathbb{R}^d, \mathbb{R})$ satisfies the following conditions: there exist $\kappa \in \mathbb{R}$ and $\beta > 0$ such that, $\forall (t,x) \in [0,T]\times\mathbb{R}^d$,
(i) $\varphi(e^{\mu t} h^+(t,x)) \in L^2(\mathbb{R}^d; \rho(x)dx)$;
(ii) $|h(t,x)| \le \kappa(1 + |x|^\beta)$;
here $h^+$ is the positive part of $h$.

Assumption 6.1.5. $b : [0,T]\times\mathbb{R}^d \to \mathbb{R}^d$ and $\sigma : [0,T]\times\mathbb{R}^d \to \mathbb{R}^{d\times d}$ satisfy
$$b \in C^2_b(\mathbb{R}^d; \mathbb{R}^d) \quad\text{and}\quad \sigma \in C^3_b(\mathbb{R}^d; \mathbb{R}^{d\times d}).$$
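For orientation, here is an illustrative coefficient (our own example, with $n = 1$) that satisfies assumption 6.1.3 without being Lipschitz in $y$:

```latex
f(t, x, y, z) = -y^3 + \sin y + k\sin z .
% (i)   |f(t,x,y,z)| \le |f(t,x,0,z)| + \varphi(|y|) with \varphi(r) = r^3 + r;
% (ii)  |f(t,x,y,z) - f(t,x,y,z')| \le k|z - z'|, since \sin is 1-Lipschitz;
% (iii) (y - y')(f(t,x,y,z) - f(t,x,y',z))
%         = -(y - y')(y^3 - y'^3) + (y - y')(\sin y - \sin y')
%         \le |y - y'|^2, i.e. \mu = 1, because y \mapsto y^3 is nondecreasing;
% (iv)  y \mapsto f(t,x,y,z) is continuous, with cubic (non-Lipschitz) growth.
```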
We first study the following PDE:
$$(\partial_t + \mathcal{L})u + F(t, x, u, \nabla u) = 0, \quad \forall (t,x) \in [0,T]\times\mathbb{R}^d,$$
$$u(x, T) = g(x), \quad \forall x \in \mathbb{R}^d,$$
where $F : [0,T]\times\mathbb{R}^d\times\mathbb{R}^n\times\mathbb{R}^{n\times d} \to \mathbb{R}^n$ is given by
$$F(t, x, u, p) = f(t, x, u, \sigma^* p)$$
and
$$\mathcal{L} = \sum_{i=1}^d b_i \frac{\partial}{\partial x_i} + \frac{1}{2}\sum_{i,j=1}^d a_{i,j} \frac{\partial^2}{\partial x_i \partial x_j},$$
with $a := \sigma\sigma^*$; here $\sigma^*$ is the transpose of $\sigma$.
In order to study the weak solution of the PDE, we introduce the space
$$\mathcal{H} := \{u \in L^2([0,T]\times\mathbb{R}^d, ds\otimes\rho(x)dx) \mid \sigma^*\nabla u \in L^2([0,T]\times\mathbb{R}^d, ds\otimes\rho(x)dx)\},$$
endowed with the norm
$$\|u\|^2 := \int_{\mathbb{R}^d}\int_0^T \big[|u(s,x)|^2 + |(\sigma^*\nabla u)(s,x)|^2\big]\rho(x)\,ds\,dx.$$

Definition 6.1.1. We say that $u \in \mathcal{H}$ is a weak solution of the PDE associated to $(g, f)$ if
(i) $\|u\|^2 < \infty$;
(ii) for every $\phi \in C^{1,\infty}_c([0,T]\times\mathbb{R}^d)$,
$$\int_t^T (u_s, \partial_s\phi_s)\,ds + (u(t,\cdot), \phi(t,\cdot)) - (g(\cdot), \phi(T,\cdot)) + \int_t^T \mathcal{E}(u_s, \phi_s)\,ds = \int_t^T (f(s,\cdot,u_s,\sigma^*\nabla u_s), \phi_s)\,ds, \tag{6.1}$$
where $(\phi, \psi) = \int_{\mathbb{R}^d} \phi(x)\psi(x)\,dx$ denotes the scalar product in $L^2(\mathbb{R}^d, dx)$ and
$$\mathcal{E}(\psi, \phi) = \int_{\mathbb{R}^d} \Big((\sigma^*\nabla\psi)(\sigma^*\nabla\phi) + \phi\,\nabla\big((\tfrac{1}{2}\sigma^*\nabla\sigma + b)\psi\big)\Big)\,dx$$
is the energy of the system of our PDE, which corresponds to the Dirichlet form associated with the operator $\mathcal{L}$ when it is symmetric. Indeed $\mathcal{E}(\psi, \phi) = -(\phi, \mathcal{L}\psi)$.
The probabilistic interpretation of the solution of the PDE associated with $g$ and $f$ satisfying assumptions 6.1.1–6.1.3 was first studied by Pardoux [57], who proved the existence of a viscosity solution of this PDE and gave its probabilistic interpretation. In section 6.3, we consider the weak solution of PDE (6.1) in a Sobolev space, and we prove the existence and uniqueness of the solution as well as its probabilistic interpretation.
In the second part of this chapter, we consider the obstacle problem associated with the PDE (6.1), with obstacle function $h$, where we restrict our study to the one-dimensional case ($n = 1$). Formally, $u$ is a solution of the PDE with obstacle $h$ if it satisfies, $\forall (t,x) \in [0,T]\times\mathbb{R}^d$:
(i) $(\partial_t + \mathcal{L})u + F(t, x, u, \nabla u) \le 0$, on $\{u(t,x) \ge h(t,x)\}$;
(ii) $(\partial_t + \mathcal{L})u + F(t, x, u, \nabla u) = 0$, on $\{u(t,x) > h(t,x)\}$;
(iii) $u(x, T) = g(x)$;
where $\mathcal{L} = \sum_{i=1}^d b_i \frac{\partial}{\partial x_i} + \frac{1}{2}\sum_{i,j=1}^d a_{i,j}\frac{\partial^2}{\partial x_i\partial x_j}$ and $a = \sigma\sigma^*$. In fact, we adopt the following formulation of the PDE with obstacle.

Definition 6.1.2. We say that $(u, \nu)$ is a weak solution of the PDE with obstacle associated to $(g, f, h)$ if
(i) $\|u\|^2 < \infty$, $u \ge h$, and $u(T, x) = g(x)$;
(ii) $\nu$ is a positive Radon measure such that $\int_0^T\int_{\mathbb{R}^d} \rho(x)\,d\nu(t,x) < \infty$;
(iii) for every $\phi \in C^{1,\infty}_c([0,T]\times\mathbb{R}^d)$,
$$\int_t^T (u_s, \partial_s\phi_s)\,ds + (u(t,\cdot), \phi(t,\cdot)) - (g(\cdot), \phi(T,\cdot)) + \int_t^T \mathcal{E}(u_s, \phi_s)\,ds \tag{6.2}$$
$$= \int_t^T (f(s,\cdot,u_s,\sigma^*\nabla u_s), \phi_s)\,ds + \int_t^T\int_{\mathbb{R}^d} \phi(s,x)\mathbf{1}_{\{u=h\}}\,d\nu(x,s).$$
6.2 Stochastic flows and random test functions
Let $(X^{t,x}_s)_{t \le s \le T}$ be the solution of
$$dX^{t,x}_s = b(s, X^{t,x}_s)\, ds + \sigma(s, X^{t,x}_s)\, dB_s, \qquad X^{t,x}_t = x,$$
where $b : [0, T] \times \mathbb{R}^d \to \mathbb{R}^d$ and $\sigma : [0, T] \times \mathbb{R}^d \to \mathbb{R}^{d \times d}$ satisfy Assumption 6.1.5. Then $\{X^{t,x}_s,\ x \in \mathbb{R}^d,\ t \le s \le T\}$ is the stochastic flow associated to the diffusion $X^{t,x}_s$, and we denote by $\{\hat{X}^{t,x}_s,\ t \le s \le T\}$ the inverse flow. It is known that $x \to \hat{X}^{t,x}_s$ is differentiable (Ikeda and Watanabe [40]). We denote by $J(\hat{X}^{t,x}_s)$ the determinant of the Jacobian matrix of $\hat{X}^{t,x}_s$, which is positive, and obviously $J(\hat{X}^{t,x}_t) = 1$.

For $\phi \in \mathcal{C}^\infty_c(\mathbb{R}^d)$ we define a process $\phi_t : \Omega \times [0, T] \times \mathbb{R}^d \to \mathbb{R}$ by
$$\phi_t(s, x) := \phi(\hat{X}^{t,x}_s) J(\hat{X}^{t,x}_s).$$
Following Kunita [45], we know that for $v \in L^2(\mathbb{R}^d)$ the composition of $v$ with the stochastic flow satisfies
$$(v \circ X^{t,\cdot}_s, \phi) := (v, \phi_t(s, \cdot)).$$
Indeed, by a change of variable we have
$$(v \circ X^{t,\cdot}_s, \phi) = \int_{\mathbb{R}^d} v(y) \phi(\hat{X}^{t,y}_s) J(\hat{X}^{t,y}_s)\, dy = \int_{\mathbb{R}^d} v(X^{t,x}_s) \phi(x)\, dx.$$
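As an aside, the forward flow defined above is easy to simulate with an Euler–Maruyama scheme. The sketch below is purely illustrative: the drift `b`, the diffusion `sigma` and all parameters are stand-ins chosen for the example (an Ornstein–Uhlenbeck-type SDE), not the coefficients of the thesis.

```python
import numpy as np

# Hedged sketch: Euler-Maruyama simulation of the flow X^{t,x}_s for a
# one-dimensional SDE dX = b(s,X)ds + sigma(s,X)dB.  Drift and diffusion
# below are illustrative choices, not taken from the thesis.

def b(s, x):
    return -0.5 * x                   # illustrative linear drift

def sigma(s, x):
    return 0.3 * np.ones_like(x)      # illustrative constant diffusion

def euler_flow(x0, t, T, n_steps, n_paths, rng):
    """Simulate X^{t,x}_s on [t, T] for a vector of starting points x0."""
    dt = (T - t) / n_steps
    x = np.tile(np.asarray(x0, dtype=float), (n_paths, 1))
    s = t
    for _ in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt), size=x.shape)
        x = x + b(s, x) * dt + sigma(s, x) * dB
        s += dt
    return x                          # shape (n_paths, len(x0))

rng = np.random.default_rng(0)
xT = euler_flow([0.0, 1.0], t=0.0, T=1.0, n_steps=200, n_paths=5000, rng=rng)
print(xT.mean(axis=0))  # E[X^{0,x}_1] ~ x * exp(-0.5) for this linear drift
```

The same loop, run over a fine grid of starting points, gives a Monte Carlo picture of the map $x \to X^{t,x}_s$.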
The main idea in Bally and Matoussi [5] and Bally et al. [4] is to use $\phi_t$ as a test function in (6.1) and (6.2). The problem is that $s \to \phi_t(s, x)$ is not differentiable, so that $\int_t^T (u_s, \partial_s \phi)\, ds$ makes no sense. However, $\phi_t(s, x)$ is a semimartingale, and they proved the following semimartingale decomposition of $\phi_t(s, x)$:
Lemma 6.2.1. For every function $\phi \in \mathcal{C}^2_c(\mathbb{R}^d)$,
$$\phi_t(s, x) = \phi(x) - \sum_{j=1}^d \int_t^s \left( \sum_{i=1}^d \frac{\partial}{\partial x_i} \big( \sigma_{ij}(x) \phi_t(r, x) \big) \right) dB^j_r + \int_t^s \mathcal{L}^* \phi_t(r, x)\, dr, \qquad (6.3)$$
where $\mathcal{L}^*$ is the adjoint operator of $\mathcal{L}$. So
$$d\phi_t(r, x) = - \sum_{j=1}^d \left( \sum_{i=1}^d \frac{\partial}{\partial x_i} \big( \sigma_{ij}(x) \phi_t(r, x) \big) \right) dB^j_r + \mathcal{L}^* \phi_t(r, x)\, dr. \qquad (6.4)$$
Then, in (6.1), we may replace $\partial_s \phi\, ds$ by the Itô stochastic integral with respect to $d\phi_t(s, x)$, and we have the following proposition, which allows us to use $\phi_t$ as a test function. The proof will be given in the appendix.
Proposition 6.2.1. Assume that Assumptions 6.1.1, 6.1.2 and 6.1.3 hold, and let $u \in \mathcal{H}$ be a weak solution of PDE (6.1). Then, for $s \in [t, T]$ and $\phi \in \mathcal{C}^2_c(\mathbb{R}^d)$,
$$\int_{\mathbb{R}^d} \int_s^T u(r, x)\, d\phi_t(r, x)\, dx - (g(\cdot), \phi_t(T, \cdot)) + (u(s, \cdot), \phi_t(s, \cdot)) - \int_s^T \mathcal{E}(u(r, \cdot), \phi_t(r, \cdot))\, dr$$
$$= \int_{\mathbb{R}^d} \int_s^T f(r, x, u(r, x), \sigma^* \nabla u(r, x))\, \phi_t(r, x)\, dr\, dx, \quad a.s. \qquad (6.5)$$
Remark 6.2.1. Here $\phi_t(r, x)$ is $\mathbb{R}$-valued; in (6.5), the equality is understood to hold for each component of $u$.

Remark 6.2.2. In fact, this proposition was first proved by Bally and Matoussi in the linear case $f(t, x, y, z) = F(t, x) + c(t)y + \bar{c}(t)z$, where $c, \bar{c} : [0, T] \to \mathbb{R}$ are bounded functions and $F \in L^2([0, T] \times \mathbb{R}^d, \rho(x)dx \otimes dt)$ (Proposition 2.3 in [5]). Then, in the proof of Theorem 3.1 of the same paper, it is proved that the proposition remains true for $f$ satisfying the Lipschitz condition in $y$ and $z$.
We shall need an equivalence-of-norms result, which plays an important role in the existence proof for the PDE under monotonicity conditions. The equivalence between the functional norm and the stochastic norm was first proved by Barles and Lesigne [6] for $\rho = 1$; Bally and Matoussi [5] proved the same result for weighted integrable functions by a probabilistic method. Let $\rho$ be a weight function of the form $\rho(x) := \exp(F(x))$, where $F : \mathbb{R}^d \to \mathbb{R}$ is continuous. Moreover, we assume that there exists a constant $R > 0$ such that, for $|x| > R$, $F \in \mathcal{C}^2_b(\mathbb{R}^d, \mathbb{R})$. For instance, we can take $\rho(x) = (1 + |x|)^q$ or $\rho(x) = \exp(\alpha |x|)$, with $q > d + 1$ and $\alpha \in \mathbb{R}$.
Proposition 6.2.2 (Equivalence of norms). Suppose that Assumption 6.1.5 holds. Then there exist two constants $k_1, k_2 > 0$ such that, for every $t \le s \le T$ and every $\phi \in L^1(\mathbb{R}^d, \rho(x)dx)$,
$$k_2 \int_{\mathbb{R}^d} |\phi(x)|\, \rho(x)\, dx \le \int_{\mathbb{R}^d} E\big( |\phi(X^{t,x}_s)| \big)\, \rho(x)\, dx \le k_1 \int_{\mathbb{R}^d} |\phi(x)|\, \rho(x)\, dx. \qquad (6.6)$$
Moreover, for every $\psi \in L^1([0, T] \times \mathbb{R}^d, dt \otimes \rho(x)dx)$,
$$k_2 \int_{\mathbb{R}^d} \int_t^T |\psi(s, x)|\, \rho(x)\, ds\, dx \le \int_{\mathbb{R}^d} \int_t^T E\big( |\psi(s, X^{t,x}_s)| \big)\, \rho(x)\, ds\, dx \le k_1 \int_{\mathbb{R}^d} \int_t^T |\psi(s, x)|\, \rho(x)\, ds\, dx, \qquad (6.7)$$
where the constants $k_1, k_2$ depend only on $T$, $\rho$ and the bounds of the first (resp. first and second) derivatives of $b$ (resp. $\sigma$).
This proposition follows easily from the following lemma (see Lemma 5.1 in Bally and Matoussi [5]).

Lemma 6.2.2. There exist two constants $c_1 > 0$ and $c_2 > 0$ such that, for all $x \in \mathbb{R}^d$ and $0 \le t \le T$,
$$c_1 \le E\left( \frac{\rho(\hat{X}^{0,x}_t)\, J(\hat{X}^{0,x}_t)}{\rho(x)} \right) \le c_2.$$
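A small numerical sanity check of the equivalence of norms (6.6) is possible in the simplest setting. The sketch below assumes the trivial flow $X^{t,x}_s = x + (B_s - B_t)$ (i.e. $b = 0$, $\sigma = 1$, which satisfies the assumptions), with an arbitrary test function $\phi$ and the weight $\rho(x) = (1+|x|)^{-2}$; it merely illustrates that the two integrals are comparable, it proves nothing.

```python
import numpy as np

# Hedged illustration of the norm equivalence (6.6) for the flow
# X^{t,x}_s = x + (B_s - B_t), weight rho(x) = (1+|x|)^{-2}, and an
# arbitrary choice of test function phi.

rho = lambda x: (1.0 + np.abs(x)) ** (-2)
phi = lambda x: np.cos(x)

xs = np.linspace(-20, 20, 401)           # quadrature grid in x
dx = xs[1] - xs[0]
norm_phi = np.sum(np.abs(phi(xs)) * rho(xs)) * dx

rng = np.random.default_rng(1)
B = rng.normal(0.0, 1.0, size=(5000, 1))  # B_s - B_t with s - t = 1
# E|phi(X^{t,x}_s)| estimated by Monte Carlo at every grid point x
expected = np.abs(phi(xs[None, :] + B)).mean(axis=0)
norm_flow = np.sum(expected * rho(xs)) * dx

ratio = norm_flow / norm_phi
print(norm_phi, norm_flow, ratio)   # the two weighted norms are comparable
```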
6.3 Solutions in Sobolev spaces for PDE's with monotonicity condition
In this section we study the solution of the PDE whose coefficient $f$ satisfies the monotonicity condition. To this end, we introduce the BSDE associated with $(g, f)$: for $t \le s \le T$,
$$Y^{t,x}_s = g(X^{t,x}_T) + \int_s^T f(r, X^{t,x}_r, Y^{t,x}_r, Z^{t,x}_r)\, dr - \int_s^T Z^{t,x}_r\, dB_r. \qquad (6.8)$$
Thanks to Assumptions 6.1.1 and 6.1.2 and the equivalence-of-norms results, the compositions $g(X^{t,x}_T)$ and $f(s, X^{t,x}_s, 0, 0)$ make sense in the BSDE (6.8); moreover,
$$g(X^{t,x}_T) \in L^2(\mathcal{F}_T) \quad \text{and} \quad f(s, X^{t,x}_s, 0, 0) \in \mathbf{H}^2(0, T).$$
It follows from the results on BSDEs (Pardoux [57]) that for each $(t, x)$ there exists a unique pair $(Y^{t,x}, Z^{t,x}) \in \mathbf{S}^2(t, T) \times \mathbf{H}^2_{n \times d}(t, T)$ of $\mathcal{F}^t_s$-progressively measurable processes which solves this BSDE$(g, f)$. The main result of this section is
Theorem 6.3.1. Suppose that Assumptions 6.1.1–6.1.3 and 6.1.4 hold. Then there exists a unique weak solution $u \in \mathcal{H}$ of the PDE (6.1), and we have the probabilistic interpretation of the solution: $u(t, x) = Y^{t,x}_t$, $(\sigma^* \nabla u)(t, x) = Z^{t,x}_t$. Moreover, $Y^{t,x}_s = u(s, X^{t,x}_s)$ and $Z^{t,x}_s = (\sigma^* \nabla u)(s, X^{t,x}_s)$, $dt \otimes dx$-a.s.
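The probabilistic interpretation in the theorem is a nonlinear Feynman–Kac formula; in the degenerate case of a vanishing driver it reduces to the classical one, which can be checked numerically. The sketch below assumes $f \equiv 0$, the trivial flow $X^{t,x}_s = x + (B_s - B_t)$ and the terminal condition $g(x) = x^2$, all illustrative choices, so that $Y^{t,x}_t = E[g(X^{t,x}_T)] = x^2 + (T - t)$ in closed form.

```python
import numpy as np

# Hedged sketch of the probabilistic interpretation u(t,x) = Y^{t,x}_t.
# With driver f = 0 the BSDE solution is Y^{t,x}_t = E[g(X^{t,x}_T)]
# (classical Feynman-Kac).  We take the simplest flow X = x + B and
# g(x) = x^2, for which E[g(X^{t,x}_T)] = x^2 + (T - t) exactly.

def u_monte_carlo(t, x, T, n_paths, rng):
    increments = rng.normal(0.0, np.sqrt(T - t), size=n_paths)
    return np.mean((x + increments) ** 2)

rng = np.random.default_rng(2)
approx = u_monte_carlo(t=0.25, x=1.5, T=1.0, n_paths=200000, rng=rng)
exact = 1.5 ** 2 + (1.0 - 0.25)
print(approx, exact)   # the Monte Carlo value approaches the exact one
```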
Proof. We prove the existence in three steps.

Existence. By integration by parts, $u$ solves (6.1) if and only if
$$\bar{u}(t, x) = e^{\mu t} u(t, x) \qquad (6.9)$$
is a solution of the PDE$(\bar{g}, \bar{f})$, where
$$\bar{g}(x) = e^{\mu T} g(x), \qquad \bar{f}(t, x, y, z) = e^{\mu t} f(t, x, e^{-\mu t} y, e^{-\mu t} z) - \mu y. \qquad (6.10)$$
The coefficient $\bar{f}$ satisfies Assumption 6.1.3 as $f$ does, except that Assumption 6.1.3-(iii) is replaced by
$$(y - y')\big( \bar{f}(t, x, y, z) - \bar{f}(t, x, y', z) \big) \le 0. \qquad (6.11)$$
In the first two steps we consider the case where $f$ does not depend on $\nabla u$, and write $f(t, x, y)$ for $f(t, x, y, v(t, x))$, where $v$ is in $L^2([0, T] \times \mathbb{R}^d, dt \otimes \rho(x)dx)$.

Suppose that $f(t, x, y)$ satisfies Assumption 6.1.3': for all $(t, x, y, y') \in [0, T] \times \mathbb{R}^d \times \mathbb{R}^n \times \mathbb{R}^n$,

(i) $|f(t, x, y)| \le |f(t, x, 0)| + \varphi(|y|)$,
(ii) $\langle y - y', f(t, x, y) - f(t, x, y') \rangle \le 0$,
(iii) $y \to f(t, x, y)$ is continuous, for all $(t, x) \in [0, T] \times \mathbb{R}^d$.
Step 1. Suppose that $g(x)$ and $f(t, x, 0)$ are uniformly bounded, i.e. there exists a constant $C$ such that
$$|g(x)| + \sup_{0 \le t \le T} |f(t, x, 0)| \le C. \qquad (6.12)$$
In the following, $C$ denotes a constant which may change from line to line. Define
$$f_n(t, y) := (\theta_n * f(t, \cdot))(y),$$
where $\theta_n : \mathbb{R}^n \to \mathbb{R}^+$ is a sequence of smooth functions with compact support which approximate the Dirac distribution at $0$ and satisfy $\int \theta_n(z)\, dz = 1$. We consider the BSDE$(g(X^{t,x}_T), f_n)$ and denote its solution by $(Y^{n,t,x}_s, Z^{n,t,x}_s)_{t \le s \le T}$, i.e.
$$Y^{n,t,x}_s = g(X^{t,x}_T) + \int_s^T f_n(r, X^{t,x}_r, Y^{n,t,x}_r)\, dr - \int_s^T Z^{n,t,x}_r\, dB_r, \quad \forall s \in [t, T]. \qquad (6.13)$$
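The mollification $f_n = \theta_n * f(t, \cdot)$ above can be visualized numerically. The sketch below is illustrative only: the driver $f(y) = -\sqrt[3]{y}$ is an arbitrary continuous decreasing function standing in for the thesis's $f$, and $\theta_n$ is the standard smooth bump supported in $[-1/n, 1/n]$.

```python
import numpy as np

# Hedged sketch of the mollification f_n = theta_n * f used in Step 1:
# convolving a merely continuous decreasing driver with a smooth bump
# theta_n of width 1/n gives a smooth f_n close to f.  The driver
# f(y) = -cbrt(y) is an illustrative choice, not the thesis's f.

f = lambda y: -np.cbrt(y)

def mollify(f, y, n, half_width=1.0, n_grid=2001):
    """(theta_n * f)(y), with theta_n a normalized bump on [-1/n, 1/n]."""
    z = np.linspace(-half_width, half_width, n_grid)
    dz = z[1] - z[0]
    u = n * z
    bump = np.zeros_like(z)
    mask = np.abs(u) < 1.0
    bump[mask] = np.exp(-1.0 / (1.0 - u[mask] ** 2))
    bump /= bump.sum() * dz                  # normalize: integral = 1
    return np.sum(f(y - z) * bump) * dz

ys = np.linspace(-2, 2, 9)
err = max(abs(mollify(f, y, n=50) - f(y)) for y in ys)
print(err)   # f_50 is uniformly close to f on this grid
```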
Moreover, for each $n$, by Step 1 of the proof of Proposition 2.4 in [57], we have
$$|Y^{n,t,x}_s| \le e^T C,$$
and
$$|f_n(s, X^{t,x}_s, Y^{n,t,x}_s)|^2 \le 2 |f_n(s, X^{t,x}_s, 0)|^2 + 2 \psi^2(e^T C),$$
where $\psi(r) := \sup_n \sup_{|y| \le r} \int_{\mathbb{R}^n} \varphi(|z|)\, \theta_n(y - z)\, dz$. So there exists a constant $C > 0$ such that
$$\sup_n \int_{\mathbb{R}^d} E \int_t^T \big( |Y^{n,t,x}_s|^2 + |f_n(s, X^{t,x}_s, Y^{n,t,x}_s)|^2 + |Z^{n,t,x}_s|^2 \big)\, \rho(x)\, ds\, dx \le C. \qquad (6.14)$$
Letting $n \to \infty$ on both sides of (6.13), we get that the weak limit $(Y^{t,x}_s, Z^{t,x}_s)$ of $(Y^{n,t,x}_s, Z^{n,t,x}_s)$ satisfies
$$Y^{t,x}_s = g(X^{t,x}_T) + \int_s^T f(r, X^{t,x}_r, Y^{t,x}_r)\, dr - \int_s^T Z^{t,x}_r\, dB_r, \quad a.s. \qquad (6.15)$$
Define $u(t, x) := Y^{t,x}_t$ and $v(t, x) := Z^{t,x}_t$. By the flow property $X^{s, X^{t,x}_s}_r = X^{t,x}_r$, $t \le s \le r$, and the fact that $(Y^{t,x}_s, Z^{t,x}_s) \in \mathbf{S}^2_n(t, T) \times \mathbf{H}^2_{n \times d}(t, T)$ is the unique solution of (6.15), we have $Y^{t,x}_s = u(s, X^{t,x}_s)$ and $Z^{t,x}_s = v(s, X^{t,x}_s)$, a.s., a.e. Thanks to the equivalence of norms and (6.14), we have
$$\int_{\mathbb{R}^d} \int_t^T \big( |u(s, x)|^2 + |v(s, x)|^2 \big)\, \rho(x)\, ds\, dx \le \frac{1}{k_2} \int_{\mathbb{R}^d} \int_t^T E\big( |u(s, X^{t,x}_s)|^2 + |v(s, X^{t,x}_s)|^2 \big)\, \rho(x)\, ds\, dx$$
$$= \frac{1}{k_2} \int_{\mathbb{R}^d} \int_t^T E\big( |Y^{t,x}_s|^2 + |Z^{t,x}_s|^2 \big)\, \rho(x)\, ds\, dx < \infty,$$
i.e. $u, v \in L^2([0, T] \times \mathbb{R}^d, dt \otimes \rho(x)dx)$. Let $F(r, x) := f(r, x, u(r, x))$; then $F \in L^2([0, T] \times \mathbb{R}^d, dt \otimes \rho(x)dx)$, in view of
$$\int_{\mathbb{R}^d} \int_t^T |F(s, x)|^2\, \rho(x)\, ds\, dx \le \frac{1}{k_2} \int_{\mathbb{R}^d} \int_t^T E |F(s, X^{t,x}_s)|^2\, \rho(x)\, ds\, dx = \frac{1}{k_2} \int_{\mathbb{R}^d} \int_t^T E |f(s, X^{t,x}_s, Y^{t,x}_s)|^2\, \rho(x)\, ds\, dx < \infty.$$
From Theorem 2.1 in [5], we get that $v = \sigma^* \nabla u$ and that $u \in \mathcal{H}$ solves the PDE$(g, f)$ under the boundedness assumption, i.e. for every $\phi \in \mathcal{C}^{1,\infty}_c([0, T] \times \mathbb{R}^d)$,
$$\int_t^T (u_s, \partial_s \phi)\, ds + (u(t, \cdot), \phi(t, \cdot)) - (g(\cdot), \phi(\cdot, T)) + \int_t^T \mathcal{E}(u_s, \phi_s)\, ds = \int_t^T (f(s, \cdot, u_s), \phi_s)\, ds.$$
Step 2. We now assume only that $g \in L^2(\mathbb{R}^d, \rho(x)dx)$, $f$ satisfies Assumption 6.1.3', and $f(t, x, 0) \in L^2([0, T] \times \mathbb{R}^d, dt \otimes \rho(x)dx)$. We approximate $g$ and $f$ by bounded functions as follows:
$$g_n(x) = \Pi_n(g(x)), \qquad f_n(t, x, y) = f(t, x, y) - f(t, x, 0) + \Pi_n(f(t, x, 0)), \qquad (6.16)$$
where
$$\Pi_n(y) := \frac{\min(n, |y|)}{|y|}\, y.$$
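The truncation $\Pi_n$ just defined simply clips a vector to the ball of radius $n$ without changing its direction, as a few lines of code make concrete (a minimal sketch; the convention $\Pi_n(0) = 0$ is assumed for the removable singularity at $y = 0$):

```python
import numpy as np

# Hedged sketch of the radial truncation Pi_n(y) = (min(n, |y|)/|y|) y
# used to approximate g and f(t,x,0) by bounded functions: it clips a
# vector to the ball of radius n while keeping its direction.

def Pi(n, y):
    y = np.asarray(y, dtype=float)
    norm = np.linalg.norm(y)
    if norm == 0.0:
        return y                      # convention Pi_n(0) = 0
    return min(n, norm) / norm * y

print(Pi(2, [3.0, 4.0]))   # norm 5 clipped to 2: [1.2, 1.6]
print(Pi(2, [0.3, 0.4]))   # inside the ball: left unchanged
```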
Clearly, the pair $(g_n, f_n)$ satisfies the assumption (6.12) of Step 1, and
$$g_n \to g \ \text{in } L^2(\mathbb{R}^d, \rho(x)dx), \qquad f_n(t, x, 0) \to f(t, x, 0) \ \text{in } L^2([0, T] \times \mathbb{R}^d, dt \otimes \rho(x)dx). \qquad (6.17)$$
Denote by $(Y^{n,t,x}_s, Z^{n,t,x}_s) \in \mathbf{S}^2_n(t, T) \times \mathbf{H}^2_{n \times d}(t, T)$ the solution of the BSDE$(\xi_n, f_n)$, where $\xi_n = g_n(X^{t,x}_T)$, i.e.
$$Y^{n,t,x}_s = g_n(X^{t,x}_T) + \int_s^T f_n(r, X^{t,x}_r, Y^{n,t,x}_r)\, dr - \int_s^T Z^{n,t,x}_r\, dB_r.$$
Then, from the results in Step 1, $u_n(t, x) = Y^{n,t,x}_t$ belongs to $\mathcal{H}$ and is the weak solution of the PDE$(g_n, f_n)$, with
$$Y^{n,t,x}_s = u_n(s, X^{t,x}_s), \quad Z^{n,t,x}_s = (\sigma^* \nabla u_n)(s, X^{t,x}_s), \quad a.s. \qquad (6.18)$$
For $m, n \in \mathbb{N}$, applying Itô's formula to $|Y^{m,t,x}_s - Y^{n,t,x}_s|^2$, we get
$$E |Y^{m,t,x}_s - Y^{n,t,x}_s|^2 + E \int_s^T |Z^{m,t,x}_r - Z^{n,t,x}_r|^2\, dr \qquad (6.19)$$
$$\le E |g_m(X^{t,x}_T) - g_n(X^{t,x}_T)|^2 + E \int_s^T |Y^{m,t,x}_r - Y^{n,t,x}_r|^2\, dr + E \int_s^T |f_m(r, X^{t,x}_r, 0) - f_n(r, X^{t,x}_r, 0)|^2\, dr.$$
From the equivalence of norms (6.6) and (6.7), it follows that
$$\int_{\mathbb{R}^d} E |Y^{m,t,x}_s - Y^{n,t,x}_s|^2\, \rho(x)\, dx$$
$$\le \int_{\mathbb{R}^d} E |g_m(X^{t,x}_T) - g_n(X^{t,x}_T)|^2\, \rho(x)\, dx + \int_{\mathbb{R}^d} E \int_s^T |Y^{m,t,x}_r - Y^{n,t,x}_r|^2\, dr\, \rho(x)\, dx + \int_{\mathbb{R}^d} E \int_s^T |f_m(r, X^{t,x}_r, 0) - f_n(r, X^{t,x}_r, 0)|^2\, dr\, \rho(x)\, dx$$
$$\le \int_{\mathbb{R}^d} E \int_s^T |Y^{m,t,x}_r - Y^{n,t,x}_r|^2\, dr\, \rho(x)\, dx + k_1 \int_{\mathbb{R}^d} |g_m(x) - g_n(x)|^2\, \rho(x)\, dx + k_1 \int_{\mathbb{R}^d} \int_t^T |f_m(r, x, 0) - f_n(r, x, 0)|^2\, \rho(x)\, dr\, dx,$$
and by Gronwall's inequality and (6.17), we get, as $m, n \to \infty$,
$$\sup_{t \le s \le T} \int_{\mathbb{R}^d} E |Y^{m,t,x}_s - Y^{n,t,x}_s|^2\, \rho(x)\, dx \to 0.$$
It follows immediately that, as $m, n \to \infty$,
$$\int_{\mathbb{R}^d} E \int_s^T |Y^{m,t,x}_r - Y^{n,t,x}_r|^2\, \rho(x)\, dr\, dx + \int_{\mathbb{R}^d} E \int_s^T |Z^{m,t,x}_r - Z^{n,t,x}_r|^2\, \rho(x)\, dr\, dx \to 0.$$
Using again the equivalence of norms (6.7), we get
$$\int_t^T \int_{\mathbb{R}^d} \big( |u_m(s, x) - u_n(s, x)|^2 + |\sigma^* \nabla u_m(s, x) - \sigma^* \nabla u_n(s, x)|^2 \big)\, \rho(x)\, dx\, ds$$
$$\le \frac{1}{k_2} \int_t^T \int_{\mathbb{R}^d} E\big( |u_m(s, X^{t,x}_s) - u_n(s, X^{t,x}_s)|^2 + |\sigma^* \nabla u_m(s, X^{t,x}_s) - \sigma^* \nabla u_n(s, X^{t,x}_s)|^2 \big)\, \rho(x)\, ds\, dx$$
$$= \frac{1}{k_2} \int_t^T \int_{\mathbb{R}^d} E\big( |Y^{m,t,x}_s - Y^{n,t,x}_s|^2 + |Z^{m,t,x}_s - Z^{n,t,x}_s|^2 \big)\, \rho(x)\, ds\, dx \to 0$$
as $m, n \to \infty$, i.e. $(u_n)$ is a Cauchy sequence in $\mathcal{H}$. Denote its limit by $u$; then $u \in \mathcal{H}$ and satisfies, for every $\phi \in \mathcal{C}^{1,\infty}_c([0, T] \times \mathbb{R}^d)$,
$$\int_t^T (u_s, \partial_s \phi)\, ds + (u(t, \cdot), \phi(t, \cdot)) - (g(\cdot), \phi(\cdot, T)) + \int_t^T \mathcal{E}(u_s, \phi_s)\, ds = \int_t^T (f(s, \cdot, u_s), \phi_s)\, ds. \qquad (6.20)$$
On the other hand, $(Y^{n,t,x}_\cdot, Z^{n,t,x}_\cdot)$ converges to $(Y^{t,x}_\cdot, Z^{t,x}_\cdot)$ in $\mathbf{S}^2_n(0, T) \times \mathbf{H}^2_{n \times d}(0, T)$, which is the solution of the BSDE with parameters $(g(X^{t,x}_T), f)$; by the equivalence of norms, we deduce that
$$Y^{t,x}_s = u(s, X^{t,x}_s), \quad Z^{t,x}_s = \sigma^* \nabla u(s, X^{t,x}_s), \quad a.s.,\ \forall s \in [t, T],$$
and in particular $Y^{t,x}_t = u(t, x)$, $Z^{t,x}_t = \sigma^* \nabla u(t, x)$. It is then easy to generalize the result to the case when $f$ satisfies Assumption 6.1.2.
Step 3. In this step we consider the case where $f$ depends on $\nabla u$. Assume that $g, f$ satisfy Assumptions 6.1.1–6.1.3, with Assumption 6.1.3-(iii) replaced by (6.11). By the result of Step 2, for any given $n \times d$-matrix-valued function $v \in L^2([0, T] \times \mathbb{R}^d, dt \otimes \rho(x)dx)$, $f(t, x, u, v(t, x))$ satisfies the assumptions of Step 2, so the PDE$(g, f(t, x, u, v(t, x)))$ admits a unique solution $u \in \mathcal{H}$ satisfying (i) and (ii) of Definition 6.1.1.

Set $V^{t,x}_s = v(s, X^{t,x}_s)$; then $V^{t,x}_s \in \mathbf{H}^2_{n \times d}(0, T)$ in view of the equivalence of norms. We consider the following BSDE with solution $(Y^{t,x}_\cdot, Z^{t,x}_\cdot)$:
$$Y^{t,x}_s = g(X^{t,x}_T) + \int_s^T f(r, X^{t,x}_r, Y^{t,x}_r, V^{t,x}_r)\, dr - \int_s^T Z^{t,x}_r\, dB_r;$$
then $Y^{t,x}_s = u(s, X^{t,x}_s)$, $Z^{t,x}_s = \sigma^* \nabla u(s, X^{t,x}_s)$, a.s., for all $s \in [t, T]$.

Now we can construct a mapping $\Psi$ from $\mathcal{H}$ into itself: for any $\bar{u} \in \mathcal{H}$, $u = \Psi(\bar{u})$ is the weak solution of the PDE with parameters $g(x)$ and $f(t, x, u, \sigma^* \nabla \bar{u})$. Symmetrically, we introduce a mapping $\Phi$ from $\mathbf{H}^2_n(t, T) \times \mathbf{H}^2_{n \times d}(t, T)$ into itself: for any $(U^{t,x}, V^{t,x}) \in \mathbf{H}^2_n(t, T) \times \mathbf{H}^2_{n \times d}(t, T)$, $(Y^{t,x}, Z^{t,x}) = \Phi(U^{t,x}, V^{t,x})$ is the solution of the BSDE with parameters $g(X^{t,x}_T)$ and $f(s, X^{t,x}_s, Y^{t,x}_s, V^{t,x}_s)$. Setting $V^{t,x}_s = \sigma^* \nabla \bar{u}(s, X^{t,x}_s)$, we have $Y^{t,x}_s = u(s, X^{t,x}_s)$, $Z^{t,x}_s = \sigma^* \nabla u(s, X^{t,x}_s)$, a.s., a.e.

Let $\bar{u}_1, \bar{u}_2 \in \mathcal{H}$, and $u_1 = \Psi(\bar{u}_1)$, $u_2 = \Psi(\bar{u}_2)$; we consider the differences $\Delta u := u_1 - u_2$ and $\Delta \bar{u} := \bar{u}_1 - \bar{u}_2$. Set $V^{t,x,1}_s := \sigma^* \nabla \bar{u}_1(s, X^{t,x}_s)$, $V^{t,x,2}_s := \sigma^* \nabla \bar{u}_2(s, X^{t,x}_s)$. We denote by $(Y^{t,x,1}, Z^{t,x,1})$ (resp. $(Y^{t,x,2}, Z^{t,x,2})$) the solution of the BSDE with parameters $g(X^{t,x}_T)$ and $f(s, X^{t,x}_s, Y^{t,x}_s, V^{t,x,1}_s)$ (resp. $f(s, X^{t,x}_s, Y^{t,x}_s, V^{t,x,2}_s)$); then, for all $s \in [t, T]$,
$$Y^{t,x,1}_s = u_1(s, X^{t,x}_s), \quad Z^{t,x,1}_s = \sigma^* \nabla u_1(s, X^{t,x}_s), \qquad Y^{t,x,2}_s = u_2(s, X^{t,x}_s), \quad Z^{t,x,2}_s = \sigma^* \nabla u_2(s, X^{t,x}_s).$$
Denote $\Delta Y^{t,x}_s := Y^{t,x,1}_s - Y^{t,x,2}_s$, $\Delta Z^{t,x}_s := Z^{t,x,1}_s - Z^{t,x,2}_s$, $\Delta V^{t,x}_s := V^{t,x,1}_s - V^{t,x,2}_s$. Applying Itô's formula to $e^{\gamma s} |\Delta Y^{t,x}_s|^2$, for some $\alpha$ and $\gamma \in \mathbb{R}$ we have
$$e^{\gamma s} E |\Delta Y^{t,x}_s|^2 + E \int_s^T e^{\gamma r} \big( \gamma |\Delta Y^{t,x}_r|^2 + |\Delta Z^{t,x}_r|^2 \big)\, dr \le E \int_s^T e^{\gamma r} \Big( \frac{k^2}{\alpha} |\Delta Y^{t,x}_r|^2 + \alpha |\Delta V^{t,x}_r|^2 \Big)\, dr,$$
where $k$ is the Lipschitz constant of $f$ in $z$.
Using the equivalence of norms, we deduce that
$$\int_{\mathbb{R}^d} \int_t^T e^{\gamma s} \big( \gamma |\Delta u(s, x)|^2 + |\sigma^* \nabla (\Delta u)(s, x)|^2 \big)\, \rho(x)\, ds\, dx$$
$$\le \frac{1}{k_2} \int_{\mathbb{R}^d} \int_t^T e^{\gamma r} E\big( \gamma |\Delta Y^{t,x}_r|^2 + |\Delta Z^{t,x}_r|^2 \big)\, \rho(x)\, dr\, dx$$
$$\le \frac{1}{k_2} \int_{\mathbb{R}^d} \int_t^T e^{\gamma r} E\Big( \frac{k^2}{\alpha} |\Delta Y^{t,x}_r|^2 + \alpha |\Delta V^{t,x}_r|^2 \Big)\, \rho(x)\, dr\, dx$$
$$\le \frac{k_1}{k_2} \int_{\mathbb{R}^d} \int_t^T e^{\gamma s} \Big( \frac{k^2}{\alpha} |\Delta u(s, x)|^2 + \alpha |\sigma^* \nabla (\Delta \bar{u})(s, x)|^2 \Big)\, \rho(x)\, ds\, dx.$$
Setting $\alpha = \frac{k_2}{2 k_1}$ and $\gamma = 1 + \frac{2 k_1^2 k^2}{k_2^2}$, we then get
$$\int_{\mathbb{R}^d} \int_t^T e^{\gamma s} \big( |\Delta u(s, x)|^2 + |\sigma^* \nabla (\Delta u)(s, x)|^2 \big)\, \rho(x)\, ds\, dx \le \frac{1}{2} \int_{\mathbb{R}^d} \int_t^T e^{\gamma s} |\sigma^* \nabla (\Delta \bar{u})(s, x)|^2\, \rho(x)\, ds\, dx$$
$$\le \frac{1}{2} \int_{\mathbb{R}^d} \int_t^T e^{\gamma s} \big( |\Delta \bar{u}(s, x)|^2 + |\sigma^* \nabla (\Delta \bar{u})(s, x)|^2 \big)\, \rho(x)\, ds\, dx,$$
where $\Delta u := u_1 - u_2$ and $\Delta \bar{u} := \bar{u}_1 - \bar{u}_2$.
Consequently, $\Psi$ is a strict contraction on $\mathcal{H}$ equipped with the norm
$$\|u\|^2_\gamma := \int_{\mathbb{R}^d} \int_t^T e^{\gamma s} \big( |u(s, x)|^2 + |\sigma^* \nabla u(s, x)|^2 \big)\, \rho(x)\, ds\, dx.$$
So it has a fixed point, which is the solution of the PDE$(g, f)$. Denoting it by $u$, we have $u \in \mathcal{H}$ and, for every $\phi \in \mathcal{C}^{1,\infty}_c([0, T] \times \mathbb{R}^d)$,
$$\int_t^T (u_s, \partial_s \phi)\, ds + (u(t, \cdot), \phi(t, \cdot)) - (g(\cdot), \phi(\cdot, T)) + \int_t^T \mathcal{E}(u_s, \phi_s)\, ds = \int_t^T (f(s, \cdot, u_s, \sigma^* \nabla u_s), \phi_s)\, ds. \qquad (6.21)$$
Moreover, for $t \le s \le T$,
$$Y^{t,x}_s = u(s, X^{t,x}_s), \quad Z^{t,x}_s = \sigma^* \nabla u(s, X^{t,x}_s), \quad a.s.,\ a.e.,$$
and in particular $Y^{t,x}_t = u(t, x)$, $Z^{t,x}_t = \sigma^* \nabla u(t, x)$.
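The fixed-point step can be made concrete with a toy Picard iteration. The sketch below is an arbitrary scalar stand-in for $\Psi$ (not the PDE mapping itself): any map with global Lipschitz constant $\le 1/2$ has, by the Banach fixed-point theorem, a unique fixed point reached at geometric rate $1/2$, exactly the mechanism used above with the norm $\|\cdot\|_\gamma$.

```python
import numpy as np

# Hedged illustration of the contraction argument: the scalar map below
# is an arbitrary stand-in for Psi, with |T'(u)| <= 1/2 everywhere, so
# Picard iterates converge geometrically to the unique fixed point.

T = lambda u: 0.5 * np.cos(u)       # a strict 1/2-contraction on R

u = 0.0
gaps = []                           # successive increments |u_{k+1} - u_k|
for _ in range(30):
    u_next = T(u)
    gaps.append(abs(u_next - u))
    u = u_next

print(u, abs(T(u) - u))  # fixed point of u = cos(u)/2, residual ~ 0
```

The increments contract at least by the factor $1/2$ at every step, mirroring the estimate $\|\Delta u\|_\gamma^2 \le \frac{1}{2} \|\Delta \bar{u}\|_\gamma^2$.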
Uniqueness. Let $u_1$ and $u_2 \in \mathcal{H}$ be two solutions of the PDE$(g, f)$. From Proposition 6.2.1, for $\phi \in \mathcal{C}^2_c(\mathbb{R}^d)$ and $i = 1, 2$,
$$\int_{\mathbb{R}^d} \int_s^T u_i(r, x)\, d\phi_t(r, x)\, dx + (u_i(s, \cdot), \phi_t(s, \cdot)) - (g(\cdot), \phi_t(\cdot, T)) - \int_s^T \mathcal{E}(u_i(r, \cdot), \phi_t(r, \cdot))\, dr$$
$$= \int_s^T \int_{\mathbb{R}^d} \phi_t(r, x)\, f(r, x, u_i(r, x), \sigma^* \nabla u_i(r, x))\, dr\, dx. \qquad (6.22)$$
By (6.3), we have
$$\int_{\mathbb{R}^d} \int_s^T u_i\, d\phi_t(r, x)\, dx = \int_s^T \Big( \int_{\mathbb{R}^d} (\sigma^* \nabla u_i)(r, x)\, \phi_t(r, x)\, dx \Big)\, dB_r + \int_s^T \int_{\mathbb{R}^d} \Big( (\sigma^* \nabla u_i)(\sigma^* \nabla \phi_r) + \phi_r \nabla\big( (\tfrac{1}{2} \sigma^* \nabla \sigma + b)\, u_i \big) \Big)\, dx\, dr.$$
Substituting this in (6.22), we get
$$\int_{\mathbb{R}^d} u_i(s, x)\, \phi_t(s, x)\, dx = (g(\cdot), \phi_t(\cdot, T)) - \int_s^T \int_{\mathbb{R}^d} (\sigma^* \nabla u_i)(r, x)\, \phi_t(r, x)\, dx\, dB_r + \int_s^T \int_{\mathbb{R}^d} \phi_t(r, x)\, f(r, x, u_i(r, x), \sigma^* \nabla u_i(r, x))\, dr\, dx.$$
Then, by the change of variable $y = \hat{X}^{t,x}_r$, we obtain
$$\int_{\mathbb{R}^d} u_i(s, X^{t,y}_s)\, \phi(y)\, dy = \int_{\mathbb{R}^d} g(X^{t,y}_T)\, \phi(y)\, dy + \int_s^T \int_{\mathbb{R}^d} \phi(y)\, f(r, X^{t,y}_r, u_i(r, X^{t,y}_r), \sigma^* \nabla u_i(r, X^{t,y}_r))\, dy\, dr - \int_s^T \int_{\mathbb{R}^d} (\sigma^* \nabla u_i)(r, X^{t,y}_r)\, \phi(y)\, dy\, dB_r.$$
Since $\phi$ is arbitrary, this holds for $\rho(y)dy$-almost every $y$, so $(u_i(s, X^{t,y}_s), (\sigma^* \nabla u_i)(s, X^{t,y}_s))$ solves the BSDE$(g(X^{t,y}_T), f)$, i.e., $\rho(y)dy$-a.s.,
$$u_i(s, X^{t,y}_s) = g(X^{t,y}_T) + \int_s^T f(r, X^{t,y}_r, u_i(r, X^{t,y}_r), \sigma^* \nabla u_i(r, X^{t,y}_r))\, dr - \int_s^T (\sigma^* \nabla u_i)(r, X^{t,y}_r)\, dB_r.$$
Then, by the uniqueness of the solution of the BSDE, we know that $u_1(s, X^{t,y}_s) = u_2(s, X^{t,y}_s)$ and $(\sigma^* \nabla u_1)(s, X^{t,y}_s) = (\sigma^* \nabla u_2)(s, X^{t,y}_s)$. Taking $s = t$, we deduce that $u_1(t, y) = u_2(t, y)$, $\rho(y)dy$-a.s. □
6.4 Sobolev's solution for PDE with obstacle under monotonicity condition
In this section we study the PDE with obstacle associated with $(g, f, h)$ satisfying Assumptions 6.1.1–6.1.4 for $n = 1$. We will prove the existence and uniqueness of a weak solution to the obstacle problem. We restrict our study to the case when $\varphi$ has polynomial growth in $y$, i.e.:

Assumption 6.4.1. We assume that, for some $\kappa_1 \in \mathbb{R}$ and $\beta_1 > 0$, $\forall y \in \mathbb{R}$,
$$|\varphi(y)| \le \kappa_1 (1 + |y|^{\beta_1}).$$
For the study of the PDE with obstacle, we introduce the reflected BSDE associated with $(g, f, h)$, as in El Karoui et al. [28]:
$$Y^{t,x}_s = g(X^{t,x}_T) + \int_s^T f(r, X^{t,x}_r, Y^{t,x}_r, Z^{t,x}_r)\, dr + K^{t,x}_T - K^{t,x}_s - \int_s^T Z^{t,x}_r\, dB_r, \qquad (6.23)$$
$$Y^{t,x}_s \ge L^{t,x}_s, \qquad \int_0^T (Y^{t,x}_s - L^{t,x}_s)\, dK^{t,x}_s = 0,$$
where $L^{t,x}_s = h(s, X^{t,x}_s)$ is a continuous process. We have
$$E\Big[ \sup_{t \le s \le T} \varphi^2\big( e^{\mu t} (L^{t,x}_s)^+ \big) \Big] = E\Big[ \sup_{t \le s \le T} \varphi^2\big( e^{\mu t} h(s, X^{t,x}_s)^+ \big) \Big] \le C e^{2 \beta_1 \mu T} E\Big[ \sup_{t \le s \le T} \big( 1 + |X^{t,x}_s|^{2 \beta_1 \beta} \big) \Big] \le C \big( 1 + |x|^{2 \beta_1 \beta} \big),$$
where $C$ is a constant which may change from line to line. By Assumption 6.1.4-(ii), the same technique gives, for $x \in \mathbb{R}^d$, $E[\sup_{t \le s \le T} \varphi^2((L^{t,x}_s)^+)] < +\infty$. Thanks to Assumptions 6.1.1 and 6.1.2, by the equivalence of norms (6.6) and (6.7) we have
$$g(X^{t,x}_T) \in L^2(\mathcal{F}_T) \quad \text{and} \quad f(s, X^{t,x}_s, 0, 0) \in \mathbf{H}^2(0, T).$$
By the existence and uniqueness theorem 3.1.3 for the RBSDE in Section 3.1, for each $(t, x)$ there exists a unique triple $(Y^{t,x}, Z^{t,x}, K^{t,x}) \in \mathbf{S}^2(t, T) \times \mathbf{H}^2_d(t, T) \times \mathbf{A}^2(t, T)$ of $\mathcal{F}^t_s$-progressively measurable processes which is the solution of the reflected BSDE with parameters $(g(X^{t,x}_T), f(s, X^{t,x}_s, y, z), h(s, X^{t,x}_s))$. As in Bally et al. [4], we have the probabilistic interpretation of the solution of the PDE with obstacle. The main result of this section is

Theorem 6.4.1. Assume that Assumptions 6.1.1–6.1.5 and Assumption 6.4.1 hold, and that $\rho(x) = (1 + |x|)^{-p}$ with $p \ge \gamma$, where $\gamma = \beta_1 \beta + \beta + d + 1$. Then there exists a pair $(u, \nu)$ which is the solution of the PDE$(g, f, h)$ with obstacle, i.e. $(u, \nu)$ satisfies Definition 6.1.2-(i)–(iii). Moreover, the solution is given by $u(t, x) = Y^{t,x}_t$, where $(Y^{t,x}_s, Z^{t,x}_s, K^{t,x}_s)_{t \le s \le T}$ is the solution of the RBSDE (6.23), and
$$Y^{t,x}_s = u(s, X^{t,x}_s), \quad Z^{t,x}_s = (\sigma^* \nabla u)(s, X^{t,x}_s), \quad ds \otimes dx\text{-a.s.} \qquad (6.24)$$
Moreover, for every measurable, bounded and positive functions $\phi$ and $\psi$,
$$\int_{\mathbb{R}^d} \int_t^T \phi(s, \hat{X}^{t,x}_s)\, J(\hat{X}^{t,x}_s)\, \psi(s, x)\, 1_{\{u=h\}}(s, x)\, d\nu(s, x) = \int_{\mathbb{R}^d} \int_t^T \phi(s, x)\, \psi(s, X^{t,x}_s)\, dK^{t,x}_s\, dx, \quad a.s. \qquad (6.25)$$
If $(\bar{u}, \bar{\nu})$ is another solution of the PDE (6.2) such that $\bar{\nu}$ satisfies (6.25) with some $\bar{K}$ instead of $K$, where $\bar{K}$ is a continuous process in $\mathbf{A}^2(t, T)$, then $u = \bar{u}$ and $\nu = \bar{\nu}$.
Remark 6.4.1. Formula (6.25) gives the probabilistic interpretation (Feynman–Kac formula) for the measure $\nu$ via the increasing process $K^{t,x}$ of the RBSDE. This formula was first introduced in Bally et al. [4], where the authors prove (6.25) when $f$ is Lipschitz in $y$ and $z$ uniformly in $(t, \omega)$. Here we generalize their result to the case when $f$ is monotonic in $y$ and Lipschitz in $z$.
Proof. As in the proof of Theorem 6.3.1, we first notice that $(u, \nu)$ solves (6.2) if and only if
$$(\bar{u}(t, x), d\bar{\nu}(t, x)) = \big( e^{\mu t} u(t, x),\ e^{\mu t}\, d\nu(t, x) \big)$$
is the solution of the PDE with obstacle $(\bar{g}, \bar{f}, \bar{h})$, where $\bar{g}, \bar{f}$ are defined as in (6.10) and
$$\bar{h}(t, x) = e^{\mu t} h(t, x).$$
Then the coefficient $\bar{f}$ satisfies the same assumptions as in Assumption 6.1.3 with (iii) replaced by (6.11), which means that $\bar{f}$ is decreasing in $y$ in the one-dimensional case. The obstacle $\bar{h}$ still satisfies Assumption 6.1.4, with $\mu = 0$. In the following we write $(g, f, h)$ instead of $(\bar{g}, \bar{f}, \bar{h})$, and suppose that $(g, f, h)$ satisfies Assumptions 6.1.1, 6.1.2, 6.1.4, 6.1.5 and 6.1.3 with (iii) replaced by (6.11).

Existence. The existence of a solution will be proved in four steps. From Step 1 to Step 3 we suppose that $f$ does not depend on $\nabla u$ but satisfies Assumption 6.1.3' for $n = 1$, and that $f(t, x, 0) \in L^2([0, T] \times \mathbb{R}^d, dt \otimes \rho(x)dx)$. In Step 4 we study the case when $f$ depends on $\nabla u$.
Step 1. Suppose that $g(x)$, $f(t, x, 0)$ and $h^+(t, x)$ are uniformly bounded, i.e. there exists a constant $C$ such that
$$|g(x)| + \sup_{0 \le t \le T} |f(t, x, 0)| + \sup_{0 \le t \le T} h^+(t, x) \le C.$$
We will use the penalization method. For $n \in \mathbb{N}$, we consider, for all $s \in [t, T]$,
$$Y^{n,t,x}_s = g(X^{t,x}_T) + \int_s^T f(r, X^{t,x}_r, Y^{n,t,x}_r)\, dr + n \int_s^T \big( Y^{n,t,x}_r - h(r, X^{t,x}_r) \big)^-\, dr - \int_s^T Z^{n,t,x}_r\, dB_r.$$
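The mechanism of the penalization scheme can be seen in a noise-free toy problem. The sketch below is illustrative only: with driver $f = 0$, terminal value $g = 0$ and the arbitrary obstacle $h(t) = \sin(2\pi t)$, the deterministic reflected equation has the closed-form limit $Y_t = \max(0, \sup_{t \le s \le T} h(s))$, which the penalized backward scheme approaches as the penalty parameter grows.

```python
import numpy as np

# Hedged sketch of the penalization method: the reflection is replaced
# by the penalty term n (y - h)^- in the driver.  Noise-free toy with
# f = 0, g = 0 and the illustrative obstacle h(t) = sin(2*pi*t), whose
# deterministic limit is Y_t = max(0, sup_{t<=s<=T} h(s)).

T_end, n_steps, n_penalty = 1.0, 1000, 1e5
dt = T_end / n_steps
t_grid = np.linspace(0.0, T_end, n_steps + 1)
h = np.sin(2 * np.pi * t_grid)

y = np.zeros(n_steps + 1)
y[-1] = 0.0                                   # terminal condition g = 0
for k in range(n_steps - 1, -1, -1):
    # implicit backward Euler step for  -y' = n (y - h)^-
    y[k] = y[k + 1]
    if y[k] < h[k]:
        y[k] = (y[k + 1] + dt * n_penalty * h[k]) / (1.0 + dt * n_penalty)

print(y[0])   # close to sup h = 1; y stays (almost) above the obstacle
```

The penalized solution increases with the penalty and is pushed just above the obstacle wherever the unconstrained dynamics would dip below it, which is exactly the role of $K^{n,t,x}$ below.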
From Theorem 6.3.1 in Section 6.3, we know that $u_n(t, x) := Y^{n,t,x}_t$ is the solution of the PDE$(g, f_n)$, where $f_n(t, x, y) = f(t, x, y) + n(y - h(t, x))^-$, i.e. for every $\phi \in \mathcal{C}^{1,\infty}_c([0, T] \times \mathbb{R}^d)$,
$$\int_t^T (u^n_s, \partial_s \phi)\, ds + (u_n(t, \cdot), \phi(t, \cdot)) - (g(\cdot), \phi(\cdot, T)) + \int_t^T \mathcal{E}(u^n_s, \phi_s)\, ds = \int_t^T (f(s, \cdot, u^n_s), \phi_s)\, ds + n \int_t^T \big( (u_n - h)^-(s, \cdot), \phi_s \big)\, ds. \qquad (6.26)$$
Moreover,
$$Y^{n,t,x}_s = u_n(s, X^{t,x}_s), \quad Z^{n,t,x}_s = \sigma^* \nabla u_n(s, X^{t,x}_s). \qquad (6.27)$$
Set $K^{n,t,x}_s = n \int_t^s (Y^{n,t,x}_r - h(r, X^{t,x}_r))^-\, dr$; by (6.27), we know that $K^{n,t,x}_s = n \int_t^s (u_n - h)^-(r, X^{t,x}_r)\, dr$. By Step 1 of the proof of Theorem 3.1.2, in Chapter 3.1.3, we know that
$$E \sup_{t \le s \le T} |K^{n,t,x}_s - K^{m,t,x}_s|^2 \to 0$$
as $m, n \to \infty$, and
$$\sup_n \sup_x E\big[ (K^{n,t,x}_T)^2 \big] \le C. \qquad (6.28)$$
Following the convergence and estimation results for $(Y^{n,t,x}, Z^{n,t,x})$ in the proof of Theorem 3.1.2, Step 1, in Chapter 3.1.3, for $m, n \in \mathbb{N}$ we have
$$E \int_t^T |Y^{n,t,x}_s - Y^{m,t,x}_s|^2\, ds + E \int_t^T |Z^{n,t,x}_s - Z^{m,t,x}_s|^2\, ds \to 0, \quad \text{as } m, n \to \infty,$$
and
$$\sup_n E \int_0^T \big( |Y^{n,t,x}_s|^2 + |Z^{n,t,x}_s|^2 \big)\, ds \le C.$$
By the equivalence of norms (6.7) and the dominated convergence theorem, we get
$$\int_{\mathbb{R}^d} \int_t^T \rho(x) \big( |u_n(s, x) - u_m(s, x)|^2 + |\sigma^* \nabla u_n(s, x) - \sigma^* \nabla u_m(s, x)|^2 \big)\, ds\, dx \le \frac{1}{k_2} \int_{\mathbb{R}^d} \rho(x)\, E \int_t^T \big( |Y^{n,t,x}_s - Y^{m,t,x}_s|^2 + |Z^{n,t,x}_s - Z^{m,t,x}_s|^2 \big)\, ds\, dx \to 0.$$
So $(u_n)$ is a Cauchy sequence in $\mathcal{H}$, and its limit $u = \lim_{n \to \infty} u_n$ belongs to $\mathcal{H}$. Denote $\nu_n(dt, dx) = n (u_n - h)^-(t, x)\, dt\, dx$ and $\pi_n(dt, dx) = \rho(x)\, \nu_n(dt, dx)$; then by (6.6),
$$\pi_n([0, T] \times \mathbb{R}^d) = \int_{\mathbb{R}^d} \int_0^T \rho(x)\, \nu_n(dt, dx) = \int_{\mathbb{R}^d} \int_0^T \rho(x)\, n (u_n - h)^-(t, x)\, dt\, dx \le \frac{1}{k_2} \int_{\mathbb{R}^d} \rho(x)\, E |K^{n,0,x}_T|\, dx \le C \int_{\mathbb{R}^d} \rho(x)\, dx < \infty.$$
It follows that
$$\sup_n \pi_n([0, T] \times \mathbb{R}^d) < \infty. \qquad (6.29)$$
In the same way as in Step 2 of the existence proof of Theorem 14 in [4], the sequence $(\pi_n)$ is tight. So we may pass to a subsequence and get $\pi_n \to \pi$, where $\pi$ is a positive measure. Define $\nu = \rho^{-1} \pi$; then $\nu$ is a positive measure such that $\int_0^T \int_{\mathbb{R}^d} \rho(x)\, d\nu(t, x) < \infty$,
and so, for $\phi \in \mathcal{C}^{1,\infty}_c([0, T] \times \mathbb{R}^d)$ with compact support in $x$, we have
$$\int_{\mathbb{R}^d} \int_t^T \phi\, d\nu_n = \int_{\mathbb{R}^d} \int_t^T \frac{\phi}{\rho}\, d\pi_n \to \int_{\mathbb{R}^d} \int_t^T \frac{\phi}{\rho}\, d\pi = \int_{\mathbb{R}^d} \int_t^T \phi\, d\nu.$$
Now, passing to the limit in the PDE$(g, f_n)$, we check that $(u, \nu)$ satisfies the PDE with obstacle $(g, f, h)$, i.e. for every $\phi \in \mathcal{C}^{1,\infty}_c([0, T] \times \mathbb{R}^d)$,
$$\int_t^T (u_s, \partial_s \phi)\, ds + (u(t, \cdot), \phi(t, \cdot)) - (g(\cdot), \phi(\cdot, T)) + \int_t^T \mathcal{E}(u_s, \phi_s)\, ds = \int_t^T (f(s, \cdot, u_s), \phi_s)\, ds + \int_t^T \int_{\mathbb{R}^d} \phi(s, x)\, 1_{\{u=h\}}(s, x)\, d\nu(x, s). \qquad (6.30)$$
It remains to prove that $\nu$ satisfies the probabilistic interpretation (6.25). Since $K^{n,t,x}$ converges to $K^{t,x}$ uniformly in $t$, the measure $dK^{n,t,x}$ converges to $dK^{t,x}$ weakly in probability. Fix two continuous functions $\phi, \psi : [0, T] \times \mathbb{R}^d \to \mathbb{R}^+$ with compact support in $x$, and a continuous function $\theta : \mathbb{R}^d \to \mathbb{R}^+$ with compact support. We have
$$\int_{\mathbb{R}^d} \int_t^T \phi(s, \hat{X}^{t,x}_s)\, J(\hat{X}^{t,x}_s)\, \psi(s, x)\, \theta(x)\, d\nu(s, x)$$
$$= \lim_{n \to \infty} \int_{\mathbb{R}^d} \int_t^T \phi(s, \hat{X}^{t,x}_s)\, J(\hat{X}^{t,x}_s)\, \psi(s, x)\, \theta(x)\, n (u_n - h)^-(s, x)\, ds\, dx$$
$$= \lim_{n \to \infty} \int_{\mathbb{R}^d} \int_t^T \phi(s, x)\, \psi(s, X^{t,x}_s)\, \theta(X^{t,x}_s)\, n (u_n - h)^-(s, X^{t,x}_s)\, ds\, dx$$
$$= \lim_{n \to \infty} \int_{\mathbb{R}^d} \int_t^T \phi(s, x)\, \psi(s, X^{t,x}_s)\, \theta(X^{t,x}_s)\, dK^{n,t,x}_s\, dx$$
$$= \int_{\mathbb{R}^d} \int_t^T \phi(s, x)\, \psi(s, X^{t,x}_s)\, \theta(X^{t,x}_s)\, dK^{t,x}_s\, dx.$$
Taking $\theta = \theta_R$ to be a regularization of the indicator function of the ball of radius $R$ and passing to the limit as $R \to \infty$, it follows that
$$\int_{\mathbb{R}^d} \int_t^T \phi(s, \hat{X}^{t,x}_s)\, J(\hat{X}^{t,x}_s)\, \psi(s, x)\, d\nu(s, x) = \int_{\mathbb{R}^d} \int_t^T \phi(s, x)\, \psi(s, X^{t,x}_s)\, dK^{t,x}_s\, dx. \qquad (6.31)$$
Since $(Y^{n,t,x}_s, Z^{n,t,x}_s, K^{n,t,x}_s)$ converges to $(Y^{t,x}_s, Z^{t,x}_s, K^{t,x}_s)$ as $n \to \infty$ in $\mathbf{S}^2(t, T) \times \mathbf{H}^2_d(t, T) \times \mathbf{A}^2(t, T)$, and $(Y^{t,x}_s, Z^{t,x}_s, K^{t,x}_s)$ is the solution of the RBSDE$(g(X^{t,x}_T), f, h)$, we have
$$\int_t^T (Y^{t,x}_s - L^{t,x}_s)\, dK^{t,x}_s = \int_t^T (u - h)(s, X^{t,x}_s)\, dK^{t,x}_s = 0, \quad a.s.,$$
and it follows that $dK^{t,x}_s = 1_{\{u=h\}}(s, X^{t,x}_s)\, dK^{t,x}_s$. In (6.31), setting $\psi = 1_{\{u=h\}}$ yields
$$\int_{\mathbb{R}^d} \int_t^T \phi(s, \hat{X}^{t,x}_s)\, J(\hat{X}^{t,x}_s)\, 1_{\{u=h\}}(s, x)\, d\nu(s, x) = \int_{\mathbb{R}^d} \int_t^T \phi(s, \hat{X}^{t,x}_s)\, J(\hat{X}^{t,x}_s)\, d\nu(s, x), \quad a.s.$$
Note that the family of functions $\mathcal{A}(\omega) = \{ (s, x) \to \phi(s, \hat{X}^{t,x}_s) : \phi \in \mathcal{C}^\infty_c \}$ is an algebra which separates points (because $x \to \hat{X}^{t,x}_s$ is a bijection). Given a compact set $G$, $\mathcal{A}(\omega)$ is dense in $\mathcal{C}([0, T] \times G)$. It follows that $J(\hat{X}^{t,x}_s)\, 1_{\{u=h\}}(s, x)\, d\nu(s, x) = J(\hat{X}^{t,x}_s)\, d\nu(s, x)$ for almost every $\omega$, and since $J(\hat{X}^{t,x}_s) > 0$ for almost every $\omega$, we get $d\nu(s, x) = 1_{\{u=h\}}(s, x)\, d\nu(s, x)$, and (6.25) follows.

Then we easily get that $Y^{t,x}_s = u(s, X^{t,x}_s)$ and $Z^{t,x}_s = \sigma^* \nabla u(s, X^{t,x}_s)$, in view of the convergence results for $(Y^{n,t,x}_s, Z^{n,t,x}_s)$ and the equivalence of norms. So $u(s, X^{t,x}_s) = Y^{t,x}_s \ge h(s, X^{t,x}_s)$; in particular, for $s = t$, we have $u(t, x) \ge h(t, x)$.
Step 2. As in the proof for the RBSDE in Theorem 3.1.2, we now relax the boundedness condition on the barrier $h$ of Step 1, and prove the existence of the solution under Assumption 6.1.4. Similarly to Step 2 of the proof of Theorem 3.1.2, after a transformation it is sufficient to prove the existence of the solution for the PDE with obstacle $(g, f, h)$, where $(g, f, h)$ satisfies
$$g(x),\ f(t, x, 0) \le 0.$$
This is proved using the convergence result of Lemma 3.1.3. Let $h(t, x)$ satisfy Assumption 6.1.4 for $\mu = 0$, i.e., for all $(t, x) \in [0, T] \times \mathbb{R}^d$,
$$\varphi(h(t, x)^+) \in L^2(\mathbb{R}^d; \rho(x)dx) \quad \text{and} \quad |h(t, x)| \le \kappa (1 + |x|^\beta).$$
Set
$$h_n(t, x) = h(t, x) \wedge n;$$
then the functions $h_n(t, x)$ are continuous, $\sup_{0 \le t \le T} h^+_n(t, x) \le n$, and $h_n(s, X^{t,x}_s) \to h(s, X^{t,x}_s)$ in $\mathbf{S}^2(t, T)$, in view of Dini's theorem and the dominated convergence theorem.

We consider the PDE with obstacle associated with $(g, f, h_n)$. By the results of Step 1, there exists $(u_n, \nu_n)$, the solution of the PDE with obstacle associated to $(g, f, h_n)$, where $u_n \in \mathcal{H}$ and $\nu_n$ is a positive measure such that $\int_0^T \int_{\mathbb{R}^d} \rho(x)\, d\nu_n(t, x) < \infty$. Moreover,
$$Y^{n,t,x}_s = u_n(s, X^{t,x}_s), \quad Z^{n,t,x}_s = \sigma^* \nabla u_n(s, X^{t,x}_s),$$
$$\int_{\mathbb{R}^d} \int_t^T \phi(s, \hat{X}^{t,x}_s)\, J(\hat{X}^{t,x}_s)\, \psi(s, x)\, 1_{\{u_n = h_n\}}(s, x)\, d\nu_n(s, x) = \int_{\mathbb{R}^d} \int_t^T \phi(s, x)\, \psi(s, X^{t,x}_s)\, dK^{n,t,x}_s\, dx. \qquad (6.32)$$
Here $(Y^{n,t,x}, Z^{n,t,x}, K^{n,t,x})$ is the solution of the RBSDE$(g(X^{t,x}_T), f, h_n)$. Thanks to Theorem 3.1.4 in Section 3.1.4 and the boundedness of $g$ and $f$, we know that
$$E\Big[ \int_t^T \big( |Y^{n,t,x}_s|^2 + |Z^{n,t,x}_s|^2 \big)\, ds + (K^{n,0,x}_T)^2 \Big] \le C\Big( 1 + E\Big[ \varphi^2\Big( \sup_{0 \le t \le T} h^+(t, X^{0,x}_t) \Big) + \sup_{0 \le t \le T} \big( h^+(t, X^{0,x}_t) \big)^2 \Big] \Big) \le C \big( 1 + |x|^{2\beta_1\beta} + |x|^{2\beta} \big). \qquad (6.33)$$
By Lemma 3.1.3, $Y^{n,t,x}_s \to Y^{t,x}_s$ in $\mathbf{S}^2(0, T)$, $Z^{n,t,x}_s \to Z^{t,x}_s$ in $\mathbf{H}^2_d(0, T)$ and $K^{n,t,x}_s \to K^{t,x}_s$ in $\mathbf{A}^2(0, T)$, as $n \to \infty$. Moreover, $(Y^{t,x}_s, Z^{t,x}_s, K^{t,x}_s)$ is the solution of the RBSDE$(g(X^{t,x}_T), f, h)$.
By the convergence of $(Y^{n,t,x}_s, Z^{n,t,x}_s)$ and the equivalence of norms (6.7), together with the dominated convergence theorem and (6.34), we get
$$\int_{\mathbb{R}^d} \rho(x) \int_t^T \big( |u_n(s, x) - u_m(s, x)|^2 + |\sigma^* \nabla u_n(s, x) - \sigma^* \nabla u_m(s, x)|^2 \big)\, ds\, dx \le \frac{1}{k_2} \int_{\mathbb{R}^d} \rho(x)\, E \int_t^T \big( |Y^{n,t,x}_s - Y^{m,t,x}_s|^2 + |Z^{n,t,x}_s - Z^{m,t,x}_s|^2 \big)\, ds\, dx \to 0.$$
So $(u_n)$ is a Cauchy sequence in $\mathcal{H}$ and admits a limit $u \in \mathcal{H}$. Moreover, $Y^{t,x}_s = u(s, X^{t,x}_s)$, $Z^{t,x}_s = \sigma^* \nabla u(s, X^{t,x}_s)$; in particular $u(t, x) = Y^{t,x}_t \ge h(t, x)$. Set $\pi_n = \rho \nu_n$; as in Step 1, we first need to prove that $\pi_n([0, T] \times \mathbb{R}^d)$ is uniformly bounded.
t ≥ h(t, x).Set πn = ρνn, like in step 1, we first need to prove that πn([0, T ] × Rd) is uniformly bounded.
In (6.32), let φ = ρ, ψ = 1, then we have∫
Rd
∫ T
0ρ(X0,x
s )J(X0,xs )dνn(s, x) =
∫
Rd
∫ T
0ρ(x)dKn,0,x
s dx.
From proposition , and the bounded assumption of g and f , we know that
E[(Kn,0,xT )2] ≤ C(1 + E[ϕ2( sup
0≤t≤Th+(t,X0,x
t )) + sup0≤t≤T
(h+(t,X0,xt ))2]) (6.34)
≤ C(1 + |x|2β1β + |x|2β).
Recall Lemma 6.2.2 : there exist two constants c1 > 0 and c2 > 0 such that ∀x ∈ Rd, 0 ≤ t ≤ T
c1 ≤ E
(ρ(t, X0,x
t )J(X0,xt )
ρ(x)
)≤ c2.
Applying Hölder's inequality and the Cauchy–Schwarz inequality, we have
$$\pi_n([0, T] \times \mathbb{R}^d) = \int_{\mathbb{R}^d} \int_0^T \rho(x)\, \nu_n(dt, dx) = \int_{\mathbb{R}^d} \int_0^T \frac{\rho^{1/2}(x)}{\rho^{1/2}(\hat{X}^{0,x}_t)\, J^{1/2}(\hat{X}^{0,x}_t)}\, \rho^{1/2}(x)\, \rho^{1/2}(\hat{X}^{0,x}_t)\, J^{1/2}(\hat{X}^{0,x}_t)\, \nu_n(dt, dx)$$
$$\le E\left[ \left( \int_{\mathbb{R}^d} \int_0^T \frac{\rho(x)}{\rho(\hat{X}^{0,x}_t)\, J(\hat{X}^{0,x}_t)}\, \rho(x)\, \nu_n(dt, dx) \right)^{1/2} \left( \int_{\mathbb{R}^d} \int_0^T \rho(\hat{X}^{0,x}_t)\, J(\hat{X}^{0,x}_t)\, \nu_n(dt, dx) \right)^{1/2} \right]$$
$$\le \left( E \int_{\mathbb{R}^d} \int_0^T \frac{\rho(x)}{\rho(\hat{X}^{0,x}_t)\, J(\hat{X}^{0,x}_t)}\, \rho(x)\, \nu_n(dt, dx) \right)^{1/2} \left( E \int_{\mathbb{R}^d} \int_0^T \rho(\hat{X}^{0,x}_t)\, J(\hat{X}^{0,x}_t)\, \nu_n(dt, dx) \right)^{1/2}$$
$$= \left( \int_{\mathbb{R}^d} \int_0^T E\left( \frac{\rho(x)}{\rho(\hat{X}^{0,x}_t)\, J(\hat{X}^{0,x}_t)} \right) \rho(x)\, \nu_n(dt, dx) \right)^{1/2} \left( \int_{\mathbb{R}^d} E \int_0^T dK^{n,0,x}_t\, \rho(x)\, dx \right)^{1/2}$$
$$\le \left( \frac{1}{c_1} \int_{\mathbb{R}^d} \int_0^T \rho(x)\, \nu_n(dt, dx) \right)^{1/2} \left( \int_{\mathbb{R}^d} \rho(x)\, E[K^{n,0,x}_T]\, dx \right)^{1/2}.$$
So by (6.34) we get
$$\sup_n \pi_n([0, T] \times \mathbb{R}^d) \le C \int_{\mathbb{R}^d} \rho(x)\, E[K^{n,0,x}_T]\, dx \le C \int_{\mathbb{R}^d} \rho(x) \big( 1 + |x|^{\beta_1\beta} + |x|^\beta \big)\, dx < \infty. \qquad (6.35)$$
Using the same arguments as in Step 1, we deduce that $(\pi_n)$ is tight, so we may pass to a subsequence and get $\pi_n \to \pi$, where $\pi$ is a positive measure. Define $\nu = \rho^{-1} \pi$; then $\nu$ is a positive measure such that $\int_0^T \int_{\mathbb{R}^d} \rho(x)\, d\nu(t, x) < \infty$, and for $\phi \in \mathcal{C}([0, T] \times \mathbb{R}^d)$ with compact support in $x$ we have, as $n \to \infty$,
$$\int_t^T \int \phi\, d\nu_n = \int_t^T \int \frac{\phi}{\rho}\, d\pi_n \to \int_t^T \int \frac{\phi}{\rho}\, d\pi = \int_t^T \int \phi\, d\nu.$$
Now, passing to the limit in the PDE$(g, f, h_n)$, we check that $(u, \nu)$ satisfies the PDE with obstacle associated to $(g, f, h)$, i.e. for every $\phi \in \mathcal{C}^{1,\infty}_c([0, T] \times \mathbb{R}^d)$,
$$\int_t^T (u_s, \partial_s \phi)\, ds + (u(t, \cdot), \phi(t, \cdot)) - (g(\cdot), \phi(\cdot, T)) + \int_t^T \mathcal{E}(u_s, \phi_s)\, ds = \int_t^T (f(s, \cdot, u_s), \phi_s)\, ds + \int_t^T \int_{\mathbb{R}^d} \phi(s, x)\, 1_{\{u=h\}}\, d\nu(x, s). \qquad (6.36)$$
Then we check that the probabilistic interpretation (6.25) still holds. Fix two continuous functions $\phi, \psi : [0, T] \times \mathbb{R}^d \to \mathbb{R}^+$ with compact support in $x$. With the convergence of $K^{n,t,x}$, which implies $dK^{n,t,x} \to dK^{t,x}$ weakly in probability, passing to the limit in (6.32) as in Step 1 we obtain
$$\int_{\mathbb{R}^d} \int_t^T \phi(s, \hat{X}^{t,x}_s)\, J(\hat{X}^{t,x}_s)\, \psi(s, x)\, d\nu(s, x) = \int_{\mathbb{R}^d} \int_t^T \phi(s, x)\, \psi(s, X^{t,x}_s)\, dK^{t,x}_s\, dx.$$
Since $(Y^{t,x}_s, Z^{t,x}_s, K^{t,x}_s)$ is the solution of the RBSDE$(g(X^{t,x}_T), f, h)$, the integral condition gives $dK^{t,x}_s = 1_{\{u=h\}}(s, X^{t,x}_s)\, dK^{t,x}_s$. Setting $\psi = 1_{\{u=h\}}$ in the identity above yields
$$\int_{\mathbb{R}^d} \int_t^T \phi(s, \hat{X}^{t,x}_s)\, J(\hat{X}^{t,x}_s)\, 1_{\{u=h\}}(s, x)\, d\nu(s, x) = \int_{\mathbb{R}^d} \int_t^T \phi(s, \hat{X}^{t,x}_s)\, J(\hat{X}^{t,x}_s)\, d\nu(s, x).$$
With the same arguments, we get that $d\nu(s, x) = 1_{\{u=h\}}(s, x)\, d\nu(s, x)$, and (6.25) holds for $\nu$ and $K$.
Step 3. Now we relax the boundedness condition on $g(x)$ and $f(t, x, 0)$. For $m, n \in \mathbb{N}$, let
$$g_{m,n}(x) = (g(x) \wedge n) \vee (-m), \qquad f_{m,n}(t, x, y) = f(t, x, y) - f(t, x, 0) + (f(t, x, 0) \wedge n) \vee (-m).$$
So $g_{m,n}(x)$ and $f_{m,n}(t, x, 0)$ are bounded, and for fixed $m \in \mathbb{N}$, as $n \to \infty$, we have
$$g_{m,n}(x) \to g_m(x) \ \text{in } L^2(\mathbb{R}^d, \rho(x)dx), \qquad f_{m,n}(t, x, 0) \to f_m(t, x, 0) \ \text{in } L^2([0, T] \times \mathbb{R}^d, dt \otimes \rho(x)dx),$$
where
$$g_m(x) = g(x) \vee (-m), \qquad f_m(t, x, y) = f(t, x, y) - f(t, x, 0) + f(t, x, 0) \vee (-m).$$
Then, as $m \to \infty$, we have
$$g_m(x) \to g(x) \ \text{in } L^2(\mathbb{R}^d, \rho(x)dx), \qquad f_m(t, x, 0) \to f(t, x, 0) \ \text{in } L^2([0, T] \times \mathbb{R}^d, dt \otimes \rho(x)dx),$$
in view of Assumption 6.1.1 and $f(t, x, 0) \in L^2([0, T] \times \mathbb{R}^d, dt \otimes \rho(x)dx)$. Now we consider the PDE with obstacle associated to $(g_{m,n}, f_{m,n}, h)$; by the result of Step 2, there exists a pair $(u_{m,n}, \nu_{m,n})$, the solution of the PDE with obstacle associated to $(g_{m,n}, f_{m,n}, h)$, where $u_{m,n} \in \mathcal{H}$ and $\nu_{m,n}$ is a positive measure such that $\int_0^T \int_{\mathbb{R}^d} \rho(x)\, d\nu_{m,n}(t, x) < \infty$. Moreover, (6.24) and (6.25) are satisfied by $(u_{m,n}, \nu_{m,n})$ and $(Y^{m,n,t,x}, Z^{m,n,t,x}, K^{m,n,t,x})$, where $(Y^{m,n,t,x}, Z^{m,n,t,x}, K^{m,n,t,x})$ is the solution of the RBSDE$(g_{m,n}(X^{t,x}_T), f_{m,n}, h)$. Recalling the convergence results in Step 3 of Theorem 3.1.2, we know that for fixed $m \in \mathbb{N}$, as $n \to \infty$, $(Y^{m,n,t,x}_s, Z^{m,n,t,x}_s, K^{m,n,t,x}_s) \to (Y^{m,t,x}_s, Z^{m,t,x}_s, K^{m,t,x}_s)$ in $\mathbf{S}^2(0, T) \times \mathbf{H}^2_d(0, T) \times \mathbf{A}^2(0, T)$, and that $(Y^{m,t,x}_s, Z^{m,t,x}_s, K^{m,t,x}_s)$ is the solution of the RBSDE$(g_m(X^{t,x}_T), f_m, h)$.

By Itô's formula, we have, for $n, p \in \mathbb{N}$,
$$E \int_t^T \big( |Y^{m,n,t,x}_s - Y^{m,p,t,x}_s|^2 + |Z^{m,n,t,x}_s - Z^{m,p,t,x}_s|^2 \big)\, ds \le C E |g_{m,n}(X^{t,x}_T) - g_{m,p}(X^{t,x}_T)|^2 + C E \int_t^T |f_{m,n}(s, X^{t,x}_s, 0) - f_{m,p}(s, X^{t,x}_s, 0)|^2\, ds,$$
so by the equivalence of norms (6.6) and (6.7) it follows that, as $n, p \to \infty$,
$$\int_{\mathbb{R}^d} \int_t^T \rho(x) \big( |u_{m,n}(s, x) - u_{m,p}(s, x)|^2 + |\sigma^* \nabla u_{m,n}(s, x) - \sigma^* \nabla u_{m,p}(s, x)|^2 \big)\, ds\, dx$$
$$\le \frac{C k_1}{k_2} \int_{\mathbb{R}^d} \rho(x)\, |g_{m,n}(x) - g_{m,p}(x)|^2\, dx + \frac{C k_1}{k_2} \int_{\mathbb{R}^d} \int_t^T \rho(x)\, |f_{m,n}(s, x, 0) - f_{m,p}(s, x, 0)|^2\, ds\, dx \to 0,$$
i.e., for each fixed $m \in \mathbb{N}$, $(u_{m,n})_n$ is a Cauchy sequence in $\mathcal{H}$ and admits a limit $u_m \in \mathcal{H}$. Moreover, $Y^{m,t,x}_s = u_m(s, X^{t,x}_s)$, $Z^{m,t,x}_s = \sigma^* \nabla u_m(s, X^{t,x}_s)$, a.s.; in particular $u_m(t, x) = Y^{m,t,x}_t \ge h(t, x)$. We then obtain the measure $\nu_m$ from the sequence $(\nu_{m,n})_n$. Set $\pi_{m,n} = \rho \nu_{m,n}$; by Theorem 3.1.4 in Chapter 3.1.4 we have, for each $m, n \in \mathbb{N}$ and $0 \le t \le T$,
$$E\big( |K^{m,n,t,x}_T|^2 \big) \le C E\Big[ g^2_{m,n}(X^{t,x}_T) + \int_0^T f^2_{m,n}(s, X^{t,x}_s, 0, 0)\, ds + \varphi^2\Big( \sup_{t \le s \le T} h^+(s, X^{t,x}_s) \Big) + \sup_{t \le s \le T} \big( h^+(s, X^{t,x}_s) \big)^2 + 1 + \varphi^2(2T) \Big]$$
$$\le C E\Big[ g(X^{t,x}_T)^2 + \int_0^T f^2(s, X^{t,x}_s, 0, 0)\, ds + \varphi^2\Big( \sup_{0 \le s \le T} h^+(s, X^{t,x}_s) \Big) + \sup_{0 \le s \le T} \big( h^+(s, X^{t,x}_s) \big)^2 + 1 + \varphi^2(2T) \Big]$$
$$\le C \big( 1 + |x|^{2\beta_1\beta} + |x|^{2\beta} \big). \qquad (6.37)$$
In the same way as in Step 2, we deduce that for each fixed $m \in \mathbb{N}$ the sequence $(\pi_{m,n})_n$ is tight; we may pass to a subsequence and get $\pi_{m,n} \to \pi_m$, where $\pi_m$ is a positive measure. Defining $\nu_m = \rho^{-1} \pi_m$, $\nu_m$ is a positive measure such that $\int_0^T \int_{\mathbb{R}^d} \rho(x)\, d\nu_m(t, x) < \infty$. So we have, for all $\phi \in \mathcal{C}([0, T] \times \mathbb{R}^d)$ with compact support in $x$,
$$\int_t^T \int \phi\, d\nu_{m,n} = \int_t^T \int \frac{\phi}{\rho}\, d\pi_{m,n} \to \int_t^T \int \frac{\phi}{\rho}\, d\pi_m = \int_t^T \int \phi\, d\nu_m.$$
Now, for each fixed $m \in \mathbb{N}$, letting $n \to \infty$ in the PDE$(g_{m,n}, f_{m,n}, h)$, we check that $(u_m, \nu_m)$ satisfies the PDE with obstacle associated to $(g_m, f_m, h)$, and by the weak convergence of $dK^{m,n,t,x}$ we easily obtain that the probabilistic interpretation (6.25) holds for $\nu_m$ and $K^{m,t,x}$.
Then let $m\to\infty$; by the convergence results of step 4 of Theorem 3.1.2, we apply the same method as before. We deduce that $\lim_{m\to\infty}u_m=u$ in $\mathcal{H}$ and $Y^{t,x}_s=u(s,X^{t,x}_s)$, $Z^{t,x}_s=\sigma^*\nabla u(s,X^{t,x}_s)$ a.s., where $(Y^{t,x},Z^{t,x},K^{t,x})$ is the solution of the RBSDE$(g,f,h)$; in particular, setting $s=t$, $u(t,x)=Y^{t,x}_t\ge h(t,x)$. From (6.37), it follows that

$$
E\big[(K^{m,t,x}_T)^2\big]\le C\big(1+|x|^{2\beta\beta_1}+|x|^{2\beta}\big).
$$
By the same arguments, we can construct the measure $\nu$ from the sequence $(\nu_m)_m$, which satisfies, for all $\phi$ and $\psi$ with compact support,

$$
\int_{\mathbb{R}^d}\int_t^T\phi(s,X^{t,x}_s)J(X^{t,x}_s)\psi(s,x)\mathbf{1}_{\{u=h\}}(s,x)\,d\nu(s,x)=\int_{\mathbb{R}^d}\int_t^T\phi(s,x)\psi(s,X^{t,x}_s)\,dK^{t,x}_s\,dx.
$$
Finally we obtain a solution $(u,\nu)$ to the PDE with obstacle $(g,f,h)$ when $f$ does not depend on $\nabla u$: for every $\phi\in C^{1,\infty}_c([0,T]\times\mathbb{R}^d)$,

$$
\int_t^T(u_s,\partial_s\phi)\,ds+(u(t,\cdot),\phi(t,\cdot))-(g(\cdot),\phi(\cdot,T))+\int_t^T\mathcal{E}(u_s,\phi_s)\,ds
=\int_t^T(f(s,\cdot,u_s),\phi_s)\,ds+\int_t^T\int_{\mathbb{R}^d}\phi(s,x)\mathbf{1}_{\{u=h\}}\,d\nu(x,s). \tag{6.38}
$$
Step 4. Finally we study the case when $f$ depends on $\nabla u$ and satisfies a Lipschitz condition in $\nabla u$.

We construct a mapping $\Psi$ from $\mathcal{H}$ into itself. For $\overline{u}\in\mathcal{H}$, define

$$
u(t,x)=\Psi(\overline{u}(t,x)),
$$

where $(u,\nu)$ is a weak solution of the PDE with obstacle $(g,f(t,x,u,\sigma^*\nabla\overline{u}),h)$. By iterating this mapping we define a sequence $(u^n)$ in $\mathcal{H}$, beginning with a function $v^0\in L^2([0,T]\times\mathbb{R}^d,dt\otimes\rho(x)dx)$. Since $f(t,x,u,v^0(t,x))$ satisfies the assumptions of step 3, the PDE$(g,f(t,x,u,v^0(t,x)),h)$ admits a solution $(u^1,\nu^1)$ with $u^1\in\mathcal{H}$. For $n\in\mathbb{N}$, set $u^n(t,x)=\Psi(u^{n-1}(t,x))$.
Symmetrically, we introduce a mapping $\Phi$ from $H^2(t,T)\times H^2_d(t,T)$ into itself. Set $V^{t,x,0}_s=v^0(s,X^{t,x}_s)$; then $V^{t,x,0}\in H^2_d(t,T)$ in view of the equivalence of the norms. Set

$$
(Y^{t,x,n},Z^{t,x,n})=\Phi(Y^{t,x,n-1},Z^{t,x,n-1}),
$$

where $(Y^{t,x,n},Z^{t,x,n},K^{t,x,n})$ is the solution of the RBSDE with parameters $g(X^{t,x}_T)$, $f(s,X^{t,x}_s,y,Z^{t,x,n-1}_s)$ and $h(s,X^{t,x}_s)$. Then $Y^{t,x,n}_s=u^n(s,X^{t,x}_s)$, $Z^{t,x,n}_s=\sigma^*\nabla u^n(s,X^{t,x}_s)$ a.s., and

$$
\int_{\mathbb{R}^d}\int_t^T\phi(s,X^{t,x}_s)J(X^{t,x}_s)\psi(s,x)\mathbf{1}_{\{u^n=h\}}(s,x)\,d\nu^n(s,x)=\int_{\mathbb{R}^d}\int_t^T\phi(s,x)\psi(s,X^{t,x}_s)\,dK^{t,x,n}_s\,dx.
$$
Set $\overline{u}^n(t,x):=u^n(t,x)-u^{n-1}(t,x)$. To deal with the difference $\overline{u}^n$, we need the difference of the corresponding RBSDEs: denote $\overline{Y}^{t,x,n}_s:=Y^{t,x,n}_s-Y^{t,x,n-1}_s$, $\overline{Z}^{t,x,n}_s:=Z^{t,x,n}_s-Z^{t,x,n-1}_s$, $\overline{K}^{t,x,n}_s:=K^{t,x,n}_s-K^{t,x,n-1}_s$. It follows from Itô's formula that, for some $\alpha$ and $\gamma\in\mathbb{R}$,

$$
e^{\gamma s}E\big|\overline{Y}^{t,x,n}_s\big|^2+E\int_s^T e^{\gamma r}\Big(\gamma\big|\overline{Y}^{t,x,n}_r\big|^2+\big|\overline{Z}^{t,x,n}_r\big|^2\Big)\,dr
\le E\int_s^T e^{\gamma r}\Big(\frac{k^2}{\alpha}\big|\overline{Y}^{t,x,n}_r\big|^2+\alpha\big|\overline{Z}^{t,x,n-1}_r\big|^2\Big)\,dr,
$$
since

$$
\int_s^T e^{\gamma r}\overline{Y}^{t,x,n}_r\,d\overline{K}^{t,x,n}_r
=\int_s^T e^{\gamma r}\big(Y^{t,x,n}_r-h(r,X^{t,x}_r)\big)\,dK^{t,x,n}_r+\int_s^T e^{\gamma r}\big(Y^{t,x,n-1}_r-h(r,X^{t,x}_r)\big)\,dK^{t,x,n-1}_r
$$
$$
-\int_s^T e^{\gamma r}\big(Y^{t,x,n}_r-h(r,X^{t,x}_r)\big)\,dK^{t,x,n-1}_r-\int_s^T e^{\gamma r}\big(Y^{t,x,n-1}_r-h(r,X^{t,x}_r)\big)\,dK^{t,x,n}_r\le 0.
$$
Then, by the equivalence of the norms, for $\gamma=1+\frac{2k_1^2}{k_2^2}k^2$ we have

$$
\int_{\mathbb{R}^d}\int_t^T e^{\gamma s}\big(|\overline{u}^n(s,x)|^2+|\sigma^*\nabla\overline{u}^n(s,x)|^2\big)\rho(x)\,ds\,dx
\le\Big(\frac{1}{2}\Big)^{n-1}\int_{\mathbb{R}^d}\int_t^T e^{\gamma s}\big(|\overline{u}^2(s,x)|^2+|\sigma^*\nabla\overline{u}^2(s,x)|^2\big)\rho(x)\,ds\,dx
$$
$$
\le\Big(\frac{1}{2}\Big)^{n-1}\big(\|u^1\|_\gamma^2+\|u^2\|_\gamma^2\big),
$$

where $\|u\|_\gamma^2:=\int_{\mathbb{R}^d}\int_t^T e^{\gamma s}\big(|u(s,x)|^2+|\sigma^*\nabla u(s,x)|^2\big)\rho(x)\,ds\,dx$, which is equivalent to the norm $\|\cdot\|$
of $\mathcal{H}$. So $(u^n)$ is a Cauchy sequence in $\mathcal{H}$ and admits a limit $u\in\mathcal{H}$, which is the solution of the PDE with obstacle (6.1). Then, considering $\sigma^*\nabla u$ as a known function, by the result of step 3 there exists a positive measure $\nu$ such that $\int_0^T\int_{\mathbb{R}^d}\rho(x)\,d\nu(t,x)<\infty$ and, for every $\phi\in C^{1,\infty}_c([0,T]\times\mathbb{R}^d)$,

$$
\int_t^T(u_s,\partial_s\phi)\,ds+(u(t,\cdot),\phi(t,\cdot))-(g(\cdot),\phi(\cdot,T))+\int_t^T\mathcal{E}(u_s,\phi_s)\,ds
=\int_t^T(f(s,\cdot,u_s,\sigma^*\nabla u_s),\phi_s)\,ds+\int_t^T\int_{\mathbb{R}^d}\phi(s,x)\mathbf{1}_{\{u=h\}}\,d\nu(x,s). \tag{6.39}
$$
Moreover, for $t\le s\le T$,

$$
Y^{t,x}_s=u(s,X^{t,x}_s),\qquad Z^{t,x}_s=\sigma^*\nabla u(s,X^{t,x}_s),\quad\text{a.s., a.e.,}
$$

and

$$
\int_{\mathbb{R}^d}\int_t^T\phi(s,X^{t,x}_s)J(X^{t,x}_s)\psi(s,x)\mathbf{1}_{\{u=h\}}(s,x)\,d\nu(s,x)=\int_{\mathbb{R}^d}\int_t^T\phi(s,x)\psi(s,X^{t,x}_s)\,dK^{t,x}_s\,dx.
$$
Uniqueness. Let $(\overline{u},\overline{\nu})$ be another solution of the PDE with obstacle (6.2) associated to $(g,f,h)$, where $\overline{\nu}$ verifies (6.25) for an increasing process $\overline{K}$. We fix $\phi:\mathbb{R}^d\to\mathbb{R}$, a smooth function in $C^2_c(\mathbb{R}^d)$ with compact support, and denote $\phi_t(s,x)=\phi(X^{t,x}_s)J(X^{t,x}_s)$. From Proposition 6.2.1, one may use $\phi_t(s,x)$ as a test function in the PDE$(g,f,h)$, with $\partial_s\phi(s,x)\,ds$ replaced by a stochastic integral with respect to the semimartingale $\phi_t(s,x)$. Then we get, for $t\le s\le T$,
$$
\int_{\mathbb{R}^d}\int_s^T\overline{u}(r,x)\,d\phi_t(r,x)\,dx+(\overline{u}(s,\cdot),\phi_t(s,\cdot))-(g(\cdot),\phi_t(\cdot,T))+\int_s^T\mathcal{E}(\overline{u}_r,\phi_r)\,dr \tag{6.40}
$$
$$
=\int_s^T\int_{\mathbb{R}^d}f(r,x,\overline{u}(r,x),\sigma^*\nabla\overline{u}(r,x))\phi_t(r,x)\,dx\,dr+\int_s^T\int_{\mathbb{R}^d}\phi_t(r,x)\mathbf{1}_{\{\overline{u}=h\}}\,d\overline{\nu}(x,r).
$$
By (6.4) in Lemma 6.2.1, we have

$$
\int_{\mathbb{R}^d}\int_s^T\overline{u}\,d_r\phi_t(r,x)\,dx=\int_s^T\Big(\int_{\mathbb{R}^d}(\sigma^*\nabla\overline{u})(r,x)\phi_t(r,x)\,dx\Big)\,dB_r
+\int_s^T\int_{\mathbb{R}^d}\Big((\sigma^*\nabla\overline{u})(\sigma^*\nabla\phi_r)+\phi_t\,\nabla\big((\tfrac{1}{2}\sigma^*\nabla\sigma+b)\overline{u}\big)\Big)\,dx\,dr.
$$
Substituting this equality into (6.40), we get

$$
\int_{\mathbb{R}^d}\overline{u}(s,x)\phi_t(s,x)\,dx=(g(\cdot),\phi_t(\cdot,T))-\int_s^T\Big(\int_{\mathbb{R}^d}(\sigma^*\nabla\overline{u})(r,x)\phi_t(r,x)\,dx\Big)\,dB_r
$$
$$
+\int_{\mathbb{R}^d}\int_s^T f(r,x,\overline{u}(r,x),\sigma^*\nabla\overline{u}(r,x))\phi_t(r,x)\,dr\,dx+\int_s^T\int_{\mathbb{R}^d}\phi_t(r,x)\mathbf{1}_{\{\overline{u}=h\}}\,d\overline{\nu}(x,r).
$$
Then, by the change of variable $y=X^{t,x}_r$ and applying (6.25) for $\overline{\nu}$, we obtain

$$
\int_{\mathbb{R}^d}\overline{u}(s,X^{t,y}_s)\phi(y)\,dy
=\int_{\mathbb{R}^d}g(X^{t,y}_T)\phi(y)\,dy+\int_s^T\int_{\mathbb{R}^d}\phi(y)\,f\big(r,X^{t,y}_r,\overline{u}(r,X^{t,y}_r),\sigma^*\nabla\overline{u}(r,X^{t,y}_r)\big)\,dy\,dr
$$
$$
+\int_s^T\int_{\mathbb{R}^d}\phi(y)\mathbf{1}_{\{\overline{u}=h\}}(r,X^{t,y}_r)\,d\overline{K}^{t,y}_r\,dy-\int_s^T\Big(\int_{\mathbb{R}^d}(\sigma^*\nabla\overline{u})(r,X^{t,y}_r)\phi(y)\,dy\Big)\,dB_r.
$$
Since $\phi$ is arbitrary, we can prove that for $\rho(y)dy$-almost every $y$, $\big(\overline{u}(s,X^{t,y}_s),(\sigma^*\nabla\overline{u})(s,X^{t,y}_s),\widetilde{K}^{t,y}_s\big)$ solves the RBSDE$(g(X^{t,y}_T),f,h)$, where $\widetilde{K}^{t,y}_s=\int_t^s\mathbf{1}_{\{\overline{u}=h\}}(r,X^{t,y}_r)\,d\overline{K}^{t,y}_r$. Then, by the uniqueness of the solution of the reflected BSDE, we know that $\overline{u}(s,X^{t,y}_s)=Y^{t,y}_s=u(s,X^{t,y}_s)$, $(\sigma^*\nabla\overline{u})(s,X^{t,y}_s)=Z^{t,y}_s=(\sigma^*\nabla u)(s,X^{t,y}_s)$ and $\widetilde{K}^{t,y}_s=K^{t,y}_s$. Taking $s=t$, we deduce that $\overline{u}(t,y)=u(t,y)$, $\rho(y)dy$-a.s., and by the probabilistic interpretation (6.25) we obtain

$$
\int_s^T\int\phi_t(r,x)\mathbf{1}_{\{\overline{u}=h\}}(r,x)\,d\overline{\nu}(x,r)=\int_s^T\int\phi_t(r,x)\mathbf{1}_{\{u=h\}}(r,x)\,d\nu(x,r).
$$

So $\mathbf{1}_{\{\overline{u}=h\}}(r,x)\,d\overline{\nu}(x,r)=\mathbf{1}_{\{u=h\}}(r,x)\,d\nu(x,r)$. $\Box$
6.5 Appendix : Proof of Proposition 6.2.1

First we consider the case when $f$ does not depend on $z$ and satisfies Assumption 6.1.3'. As in step 2, we approximate $g$ and $f$ as in (6.16); then $g_n\to g$ in $L^2(\mathbb{R}^d,\rho(x)dx)$ and $f_n(t,x,0)\to f(t,x,0)$ in $L^2([0,T]\times\mathbb{R}^d,dt\otimes\rho(x)dx)$ as $n\to\infty$.
Since for each $n\in\mathbb{N}$, $|g_n|\le n$ and $|f_n(t,x,0)|\le n$, by the result of step 1 of Theorem 6.3.1 the PDE$(g_n,f_n)$ admits a weak solution $u_n\in\mathcal{H}$ with $\sup_{0\le t\le T}|u_n(t,x)|\le C_n$. So we know that

$$
|f_n(t,x,u_n(t,x))|^2\le|f_n(t,x,0)|^2+\varphi\Big(\sup_{0\le t\le T}|u_n(t,x)|\Big)\le C(n).
$$

Set $F_n(t,x):=f_n(t,x,u_n(t,x))$; then $F_n\in L^2([0,T]\times\mathbb{R}^d,dt\otimes\rho(x)dx)$. By Proposition 2.3 in Bally and Matoussi [5], we get, for $\phi\in C^2_c(\mathbb{R}^d)$ and $t\le s\le T$,

$$
\int_{\mathbb{R}^d}\int_s^T u_n(r,x)\,d\phi_t(r,x)\,dx+(u_n(s,\cdot),\phi_t(s,\cdot))-(g_n(\cdot),\phi_t(\cdot,T))+\int_s^T\mathcal{E}(u_n(r,\cdot),\phi_t(r,\cdot))\,dr
$$
$$
=\int_{\mathbb{R}^d}\int_s^T f_n(r,x,u_n(r,x))\phi_t(r,x)\,dr\,dx \tag{6.41}
$$
$$
=\int_{\mathbb{R}^d}\int_s^T f(r,x,u_n(r,x))\phi_t(r,x)\,dr\,dx+\int_{\mathbb{R}^d}\int_s^T\big(f_n(r,x,0)-f(r,x,0)\big)\phi_t(r,x)\,dr\,dx.
$$
From step 2, we know that as $n\to\infty$, $u_n\to u$ in $\mathcal{H}$, where $u$ is a weak solution of the PDE$(g,f)$, i.e.

$$
u_n\to u\ \text{in}\ L^2([0,T]\times\mathbb{R}^d,dt\otimes\rho(x)dx),\qquad
\sigma^*\nabla u_n\to\sigma^*\nabla u\ \text{in}\ L^2([0,T]\times\mathbb{R}^d,dt\otimes\rho(x)dx).
$$

Then there exists a function $u^*$ in $L^2([0,T]\times\mathbb{R}^d,dt\otimes\rho(x)dx)$ such that, along a subsequence $(u_{n_k})$, $|u_{n_k}|\le|u^*|$ and $u_{n_k}\to u$, $dt\otimes dx$-a.e. Since Assumption 6.1.3'-(iii) says that $y\mapsto f(t,x,y)$ is continuous for all $(t,x)\in[0,T]\times\mathbb{R}^d$, we have $f(r,x,u_n(r,x))\to f(r,x,u(r,x))$, $dt\otimes dx$-a.e. For every compactly supported $\phi\in C^2_c(\mathbb{R}^d)$, passing to the limit in (6.41), the convergence of the left side of (6.41) and of the second term on its right side shows that $\lim_{n\to\infty}\int_{\mathbb{R}^d}\int_s^T f(r,x,u_n(r,x))\phi_t(r,x)\,dr\,dx$ exists. Then we get

$$
\int_{\mathbb{R}^d}\int_s^T u(r,x)\,d\phi_t(r,x)\,dx+(u(s,\cdot),\phi_t(s,\cdot))-(g(\cdot),\phi_t(\cdot,T))+\int_s^T\mathcal{E}(u(r,\cdot),\phi_t(r,\cdot))\,dr
=\int_{\mathbb{R}^d}\int_s^T f(r,x,u(r,x))\phi_t(r,x)\,dr\,dx,\quad dt\otimes dx\text{-a.s.}
$$
Now we consider the case when $f$ depends on $\nabla u$ and satisfies Assumption 6.1.3 with (iii) replaced by (6.11). As in step 3, we construct a mapping $\Psi$ from $\mathcal{H}$ into itself. By this mapping we define a sequence $(u^n)$ in $\mathcal{H}$, beginning with a matrix-valued function $v^0\in L^2([0,T]\times\mathbb{R}^{n\times d},dt\otimes\rho(x)dx)$. Since $f(t,x,u,v^0(t,x))$ satisfies the assumptions of step 2, the PDE$(g,f(t,x,u,v^0(t,x)))$ admits a unique solution $u^1\in\mathcal{H}$. For $n\in\mathbb{N}$, denote

$$
u^n(t,x)=\Psi(u^{n-1}(t,x)),
$$

i.e. $u^n$ is the weak solution of the PDE$(g,f(t,x,u,\sigma^*\nabla u^{n-1}(t,x)))$. Set $\overline{u}^n(t,x):=u^n(t,x)-u^{n-1}(t,x)$. In order to estimate this difference, we introduce, for $n\ge 1$, the corresponding BSDE$(g,f^n)$, where $f^n(t,x,u)=f(t,x,u,\sigma^*\nabla u^{n-1}(t,x))$. So we have $Y^{n,t,x}_s=u^n(s,X^{t,x}_s)$, $Z^{n,t,x}_s=\sigma^*\nabla u^n(s,X^{t,x}_s)$. Then we apply Itô's formula to $|\overline{Y}^{n,t,x}|^2$, where $\overline{Y}^{n,t,x}_s=Y^{n,t,x}_s-Y^{n-1,t,x}_s$. With the equivalence of the norms, similarly as in step 3, for $\gamma=1+\frac{2k_1^2}{k_2^2}k^2$ we
have

$$
\int_{\mathbb{R}^d}\int_t^T e^{\gamma s}\big(|\overline{u}^n(s,x)|^2+|\sigma^*\nabla\overline{u}^n(s,x)|^2\big)\rho(x)\,ds\,dx
\le\Big(\frac{1}{2}\Big)^{n-1}\int_{\mathbb{R}^d}\int_t^T e^{\gamma s}\big(|\overline{u}^2(s,x)|^2+|\sigma^*\nabla\overline{u}^2(s,x)|^2\big)\rho(x)\,ds\,dx
\le\Big(\frac{1}{2}\Big)^{n-1}\big(\|u^1\|_\gamma^2+\|u^2\|_\gamma^2\big),
$$

where $\|u\|_\gamma^2:=\int_{\mathbb{R}^d}\int_t^T e^{\gamma s}\big(|u(s,x)|^2+|\sigma^*\nabla u(s,x)|^2\big)\rho(x)\,ds\,dx$ is equivalent to the norm $\|\cdot\|$ of $\mathcal{H}$. So $(u^n)$ is a Cauchy sequence in $\mathcal{H}$; it admits a limit $u$ in $\mathcal{H}$, and by the fixed point theorem $u$ is a solution of the PDE$(g,f)$.
Then for each $n\in\mathbb{N}$ we have, for $\phi\in C^2_c(\mathbb{R}^d)$,

$$
\int_{\mathbb{R}^d}\int_s^T u^n(r,x)\,d\phi_t(r,x)\,dx+(u^n(s,\cdot),\phi_t(s,\cdot))-(g(\cdot),\phi_t(\cdot,T))+\int_s^T\mathcal{E}(u^n(r,\cdot),\phi_t(r,\cdot))\,dr
$$
$$
=\int_{\mathbb{R}^d}\int_s^T f(r,x,u^n(r,x),\sigma^*\nabla u^{n-1}(r,x))\phi_t(r,x)\,dr\,dx
$$
$$
=\int_{\mathbb{R}^d}\int_s^T f(r,x,u^n(r,x),\sigma^*\nabla u(r,x))\phi_t(r,x)\,dr\,dx
+\int_{\mathbb{R}^d}\int_s^T\big[f(r,x,u^n(r,x),\sigma^*\nabla u^{n-1}(r,x))-f(r,x,u^n(r,x),\sigma^*\nabla u(r,x))\big]\phi_t(r,x)\,dr\,dx.
$$
Noticing that $f$ is Lipschitz in $z$, we get

$$
\big|f(r,x,u^n(r,x),\sigma^*\nabla u^{n-1}(r,x))-f(r,x,u^n(r,x),\sigma^*\nabla u(r,x))\big|\le k\,\big|\sigma^*\nabla u^{n-1}(r,x)-\sigma^*\nabla u(r,x)\big|.
$$

So the last term on the right side converges to 0, since $\sigma^*\nabla u^n$ converges to $\sigma^*\nabla u$ in $L^2([0,T]\times\mathbb{R}^d,dt\otimes\rho(x)dx)$. Now we are in the same situation as in the first part of the proof, and in the same way we deduce that, for $\phi\in C^2_c(\mathbb{R}^d)$,

$$
\int_{\mathbb{R}^d}\int_s^T u(r,x)\,d\phi_t(r,x)\,dx+(u(s,\cdot),\phi_t(s,\cdot))-(g(\cdot),\phi_t(\cdot,T))+\int_s^T\mathcal{E}(u(r,\cdot),\phi_t(r,\cdot))\,dr
=\int_{\mathbb{R}^d}\int_s^T f(r,x,u(r,x),\sigma^*\nabla u(r,x))\phi_t(r,x)\,dr\,dx,\quad dt\otimes dx\text{-a.s.}
$$
Now, if $f$ satisfies Assumption 6.1.3, we know from (6.9) and (6.10) that $u$ is a solution of the PDE$(g,f)$ if and only if $\widetilde{u}=e^{\mu t}u$ is a solution of the PDE$(\widetilde{g},\widetilde{f})$, where

$$
\widetilde{g}(x)=e^{\mu T}g(x),\qquad \widetilde{f}(t,x,y,z)=e^{\mu t}f(t,x,e^{-\mu t}y,e^{-\mu t}z)-\mu y,
$$

and $\widetilde{f}$ satisfies Assumption 6.1.3 with (iii) replaced by (6.11). So we know now: for $\phi\in C^2_c(\mathbb{R}^d)$,

$$
\int_{\mathbb{R}^d}\int_s^T\widetilde{u}(r,x)\,d\phi_t(r,x)\,dx+(\widetilde{u}(s,\cdot),\phi_t(s,\cdot))-(\widetilde{g}(\cdot),\phi_t(\cdot,T))+\int_s^T\mathcal{E}(\widetilde{u}(r,\cdot),\phi_t(r,\cdot))\,dr
=\int_{\mathbb{R}^d}\int_s^T\widetilde{f}(r,x,\widetilde{u}(r,x),\nabla\widetilde{u}(r,x))\phi_t(r,x)\,dr\,dx,\quad dt\otimes dx\text{-a.s.}
$$
Notice that

$$
d(e^{\mu r}u(r,x))=\mu e^{\mu r}u(r,x)\,dr+e^{\mu r}\,d(u(r,x)),
$$

where $e^{\mu r}$ is deterministic. By integration by parts for stochastic processes, we get

$$
\int_{\mathbb{R}^d}\int_s^T u(r,x)\,d\phi_t(r,x)\,dx=\int_{\mathbb{R}^d}\int_s^T e^{-\mu r}\widetilde{u}(r,x)\,d\phi_t(r,x)\,dx
$$
$$
=e^{-\mu T}(\widetilde{g}(\cdot),\phi_t(\cdot,T))-e^{-\mu s}(\widetilde{u}(s,\cdot),\phi_t(s,\cdot))+\mu\int_s^T e^{-\mu r}\int_{\mathbb{R}^d}\widetilde{u}(r,x)\phi_t(r,x)\,dx\,dr
$$
$$
-\int_s^T\int_{\mathbb{R}^d}e^{-\mu r}\phi_t(r,x)\big[\mathcal{L}\widetilde{u}(r,x)+\widetilde{f}(r,x,\widetilde{u}(r,x),\nabla\widetilde{u}(r,x))\big]\,dr\,dx.
$$
Using (6.10), we get that for $\phi\in C^2_c(\mathbb{R}^d)$,

$$
\int_{\mathbb{R}^d}\int_s^T u(r,x)\,d\phi_t(r,x)\,dx=(g(\cdot),\phi_t(\cdot,T))-(u(s,\cdot),\phi_t(s,\cdot))-\int_s^T\int_{\mathbb{R}^d}\phi_t(r,x)\mathcal{L}u(r,x)\,dr\,dx
+\int_s^T\int_{\mathbb{R}^d}\phi_t(r,x)f(r,x,u(r,x),\nabla u(r,x))\,dr\,dx
$$
$$
=(g(\cdot),\phi_t(\cdot,T))-(u(s,\cdot),\phi_t(s,\cdot))-\int_s^T\mathcal{E}(u(r,\cdot),\phi_t(r,\cdot))\,dr
+\int_s^T\int_{\mathbb{R}^d}\phi_t(r,x)f(r,x,u(r,x),\nabla u(r,x))\,dr\,dx,
$$

and finally the result follows. $\Box$
Chapitre 7

Numerical algorithms and simulations for BSDEs and reflected BSDEs
In this chapter, we study algorithms for different kinds of BSDEs. The numerical solution of a BSDE is essentially different from that of a classical (forward) SDE; in some sense it is rather a generalization of a PDE. Some linear BSDEs can be solved by the Monte Carlo method with the help of duality techniques. For nonlinear cases, by the nonlinear Feynman-Kac formula, numerical methods for PDEs can be applied to solve certain BSDEs; see [3], [17] for BSDEs and [24] for forward-backward SDEs. More recently, Bouchard and Touzi [11], Gobet, Lemor and Warin [32], and Zhang and Zheng [76] have studied Monte Carlo methods for nonlinear BSDEs.
Here we develop a new numerical method for nonlinear BSDEs and reflected BSDEs, designed from the point of view of stochastic analysis. We use a scaled random walk to approximate the corresponding Brownian motion and obtain a sequence of discrete versions of the BSDE. We then develop implicit and explicit schemes. The convergence of the implicit scheme was proved by Philippe Briand, Bernard Delyon and Jean Mémin (2001) in [14] for BSDEs; we prove the convergence of the explicit scheme in this chapter. More recently, we have found numerical schemes for reflected BSDEs: a reflected scheme and a penalized scheme. Here we mainly discuss the simulation results; the proofs of convergence of these schemes can be found in [54].
This chapter is organized as follows. In the first section, we introduce the discretization of the BSDE and present the implicit and explicit schemes. In Section 2, we show some numerical simulations. In Section 3, we consider the reflected BSDE with one or two barriers which are Itô processes, via the reflected scheme and the penalized scheme. In Section 4, we apply the penalized scheme to BSDEs with a constraint on $z$. In the last section, we give the proof of convergence of the numerical solution of the reflected BSDE with one barrier.
7.1 Discretization and algorithms of BSDEs
Let $(\Omega,\mathcal{F},P)$ be a probability space and $(B_s)_{s\ge 0}$ a $d$-dimensional Brownian motion defined on this space. Let $\{\mathcal{F}_t;\,0\le t\le T\}$ be the natural filtration generated by the Brownian motion, $\mathcal{F}_t=\sigma\{B_s;\,0\le s\le t\}$, where $\mathcal{F}_0$ contains all $P$-null sets of $\mathcal{F}$. We are interested in the behavior of processes on a given interval $[0,T]$, and we recall the spaces $L^2_d(\mathcal{F}_t)$, $H^2_d(0,T)$, $S^2_d(0,T)$, $D^2_d(0,T)$, $A^2(0,T)$. In the real-valued case, i.e. $d=1$, they will simply be denoted by $H^2(0,T)$, $L^2(\mathcal{F}_t)$ and $S^2(0,T)$.
Let $\xi$ be the terminal condition of the BSDE and $f$ its coefficient; we assume:

Assumption 7.1.1. $\xi\in L^2_n(\mathcal{F}_T)$.
Assumption 7.1.2. $f$ is a mapping $f(s,y,z):\Omega\times[0,T]\times\mathbb{R}^n\times\mathbb{R}^{n\times d}\to\mathbb{R}^n$ with $f(\cdot,0,0)\in H^2_n(0,T)$, satisfying the Lipschitz condition: for all $(y_1,z_1),(y_2,z_2)\in\mathbb{R}^n\times\mathbb{R}^{n\times d}$, there exists a constant $k>0$, uniform in $(t,\omega)$, such that

$$
|f(t,y_1,z_1)-f(t,y_2,z_2)|\le k\big(|y_1-y_2|+|z_1-z_2|\big). \tag{7.1}
$$

Definition 7.1.1. A solution of the BSDE is a couple $(Y_t,Z_t)$ in $L^2_n(0,T)\times H^2_{n\times d}(0,T)$ satisfying

$$
Y_t=\xi+\int_t^T f(r,Y_r,Z_r)\,dr-\int_t^T Z_r\,dB_r. \tag{7.2}
$$
We then have existence and uniqueness of the solution of the BSDE, first proved by Pardoux and Peng in 1990.

Theorem 7.1.1. Assume Assumptions 7.1.1 and 7.1.2 hold. Then there exists a unique pair $(Y_t,Z_t)\in L^2_{\mathcal{F}}(0,T;\mathbb{R}^n)\times L^2_{\mathcal{F}}(0,T;\mathbb{R}^{n\times d})$ solving the BSDE associated to $\xi$ and $f$, i.e. satisfying (7.2).
Here we mainly study the case when the Brownian motion is 1-dimensional. Without loss of generality, we assume $T=1$. We consider the 1-dimensional BSDE

$$
y_t=\xi+\int_t^1 f(s,y_s,z_s)\,ds-\int_t^1 z_s\,dB_s, \tag{7.3}
$$

with terminal condition $y_1=\xi=\Phi((B_s)_{0\le s\le 1})$, where $\Phi(\cdot)$ is a functional of the Brownian motion $(B_s)_{0\le s\le 1}$ such that $\xi\in L^2(\mathcal{F}_1)$. We suppose that Assumption 7.1.2 holds for $f$.
7.1.1 Discretization of BSDEs and numerical schemes
For $n\in\mathbb{N}$ large enough, we divide the time interval $[0,1]$ into $n$ parts: $0=t_0<t_1<\cdots<t_n=1$, with $\delta:=t_j-t_{j-1}=\frac{1}{n}$ for $1\le j\le n$. Consider a Bernoulli sequence $(\varepsilon_m)_{1\le m\le n}$, with $\varepsilon_0=0$, i.e. i.i.d. random variables satisfying

$$
P(\varepsilon_m=+1)=P(\varepsilon_m=-1)=\tfrac{1}{2}.
$$

Now we define the scaled random walk $B^n_\cdot$ by setting $B^n_0=0$ and

$$
B^n_t=\sqrt{\delta}\sum_{m=0}^{[t/\delta]}\varepsilon_m,\qquad 0\le t\le 1. \tag{7.4}
$$

Obviously, $B^n_t$ is an $\mathcal{F}_t$-measurable process taking discrete values; denoting $B^n_j=B^n_{t_j}$, we get $B^n_j=\sqrt{\delta}\sum_{m=1}^{j}\varepsilon_m$. We define the discrete filtration $\mathcal{F}^n_j:=\sigma\{\varepsilon_m;\,0\le m\le j\}=\sigma\{B^n_t;\,0\le t\le t_j\}$, for $1\le j\le n$.
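The construction (7.4) is immediate to implement. A minimal sketch in Python (our toolbox, presented later, is written in MATLAB; the function name here is our own):

```python
import numpy as np

def scaled_random_walk(n, rng=None):
    """One path of the scaled random walk B^n of (7.4) on [0, 1].

    Returns the n+1 values B^n_{t_j} = sqrt(delta) * (eps_1 + ... + eps_j),
    where delta = 1/n and the eps_m are i.i.d. with P(+1) = P(-1) = 1/2.
    """
    rng = np.random.default_rng() if rng is None else rng
    delta = 1.0 / n
    eps = rng.choice([-1.0, 1.0], size=n)  # Bernoulli increments
    return np.concatenate(([0.0], np.sqrt(delta) * np.cumsum(eps)))
```

By Donsker's theorem, these paths converge in law to Brownian motion as $n$ grows.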
Then, on the small interval $[t_j,t_{j+1}]$, the equation

$$
y_{t_j}=y_{t_{j+1}}+\int_{t_j}^{t_{j+1}}f(s,y_s,z_s)\,ds-\int_{t_j}^{t_{j+1}}z_s\,dB_s \tag{7.5}
$$

can be approximated by the discrete equation

$$
y^n_j=y^n_{j+1}+f(t_j,y^n_j,z^n_j)\,\delta-z^n_j\,(B^n_{j+1}-B^n_j)=y^n_{j+1}+f(t_j,y^n_j,z^n_j)\,\delta-z^n_j\sqrt{\delta}\,\varepsilon_{j+1}. \tag{7.6}
$$
7.1. Discretization and algorithms of BSDEs 183
Lemma 7.1.1. If $f(t,y,z)$ satisfies the Lipschitz condition (7.1) with constant $k$, then for $\delta$ small enough, such that $\delta k<1$, there exists a unique couple $(y^n_j,z^n_j)$ satisfying equation (7.6).

Proof. From (7.6), in view of $B^n_{j+1}-B^n_j=\sqrt{\delta}\,\varepsilon_{j+1}$ and $E[\varepsilon_{j+1}|\mathcal{F}^n_j]=0$, we get immediately

$$
z^n_j=\frac{1}{\sqrt{\delta}}E[y^n_{j+1}\varepsilon_{j+1}|\mathcal{F}^n_j]. \tag{7.7}
$$

Then, taking the conditional expectation in (7.6), it follows that

$$
y^n_j=E[y^n_{j+1}|\mathcal{F}^n_j]+f(t_j,y^n_j,z^n_j)\,\delta. \tag{7.8}
$$

Consider the mapping $\Theta(y)=y-f(t_j,y,z^n_j)\,\delta$; from the Lipschitz property of $f$ we obtain

$$
\langle\Theta(y)-\Theta(y'),y-y'\rangle\ge(1-\delta k)|y-y'|^2>0,
$$

which implies that $\Theta$ is a monotone mapping. So there exists a unique value $y$ such that $\Theta(y)=E[y^n_{j+1}|\mathcal{F}^n_j]$, i.e. $y^n_j=\Theta^{-1}(E[y^n_{j+1}|\mathcal{F}^n_j])$. $\Box$
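Numerically, $\Theta^{-1}$ can be evaluated by Picard (fixed-point) iteration, since $y\mapsto E[y^n_{j+1}|\mathcal{F}^n_j]+f(t_j,y,z^n_j)\delta$ is a $\delta k$-contraction when $\delta k<1$. A sketch, with our own naming:

```python
def implicit_step(ybar, z, f, t, delta, tol=1e-12, max_iter=100):
    """Solve y = ybar + f(t, y, z) * delta, i.e. evaluate Theta^{-1}(ybar).

    The right-hand side is a (delta*k)-contraction in y when f is
    k-Lipschitz in y and delta*k < 1, so Picard iteration converges.
    """
    y = ybar  # initial guess: the conditional expectation E[y_{j+1} | F_j]
    for _ in range(max_iter):
        y_new = ybar + f(t, y, z) * delta
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    return y
```

For a linear driver the fixed point can be checked by hand: with $f(t,y,z)=-y$ and $\delta=0.1$, the equation $y=\bar{y}-0.1y$ gives $y=\bar{y}/1.1$.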
Remark 7.1.1. The existence of the solution of the discrete BSDE only depends on the Lipschitz condition of $f$ in $y$. In fact, if $f$ does not depend on $y$, we easily get $\Theta^{-1}(y)=y+f(t_j,z^n_j)\,\delta$.

Remark 7.1.2. In general, if $f$ depends nonlinearly on $y$, then $\Theta^{-1}$ cannot be computed explicitly, so sometimes we shall use $(\overline{y}^n_j,\overline{z}^n_j)$, where

$$
\overline{y}^n_j=E[\overline{y}^n_{j+1}|\mathcal{F}^n_j]+f\big(t_j,E[\overline{y}^n_{j+1}|\mathcal{F}^n_j],\overline{z}^n_j\big)\,\delta,\qquad
\overline{z}^n_j=\frac{1}{\sqrt{\delta}}E[\overline{y}^n_{j+1}\varepsilon_{j+1}|\mathcal{F}^n_j], \tag{7.9}
$$

to approximate the solution of $\Theta(y)=E[y^n_{j+1}|\mathcal{F}^n_j]$. (7.9) is called the explicit scheme for the BSDE, while (7.8) is called the implicit scheme.
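For a terminal condition of the form $\xi=\Phi(B^n_1)$ (the path-independent case used in the simulations of Section 7.2), the explicit scheme reduces to a backward recursion over the levels of a binomial tree. A hedged Python sketch (function names are ours):

```python
import numpy as np

def explicit_bsde(phi, f, n):
    """Explicit scheme (7.9) on the binomial tree, for xi = phi(B^n_1).

    phi : terminal function of B^n_1;  f(t, y, z) : the driver.
    Returns y^n_0, the approximation of Y_0.
    """
    delta = 1.0 / n
    sd = np.sqrt(delta)
    # states of B^n at level j are sd * (2*i - j), i = 0..j
    y = phi(sd * (2.0 * np.arange(n + 1) - n))     # terminal values y^n_n
    for j in range(n - 1, -1, -1):
        y_minus, y_plus = y[:-1], y[1:]            # down / up successor values
        z = (y_plus - y_minus) / (2.0 * sd)        # z^n_j = E[y eps | F]/sqrt(delta)
        ybar = 0.5 * (y_plus + y_minus)            # E[y^n_{j+1} | F^n_j]
        y = ybar + f(j * delta, ybar, z) * delta   # explicit step (7.9)
    return y[0]
```

For instance, with $f\equiv 0$ and $\xi=B_1$ the scheme returns $E[B^n_1]=0$, and with $f(t,y,z)=y$ and $\xi\equiv 1$ it returns $(1+\delta)^n\to e$.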
7.1.2 Convergence of Algorithms of BSDEs
In the discrete case we take the discrete terminal condition $y^n_n:=\xi^n=\Phi((B^n_j)_{0\le j\le n})$, an $\mathcal{F}^n_n$-measurable random variable. First, for the implicit scheme, if we construct the process

$$
y^n_t=y^n_{[t/\delta]},\qquad 0\le t\le 1,
$$

then the convergence of $y^n_t$ to $y_t$ follows from the "Donsker-type theorem for BSDEs" of P. Briand, B. Delyon and J. Mémin (2001) [14]. Consider the following assumptions.

Assumption 7.1.3. $\xi$ is $\mathcal{F}_1$-measurable and, for all $n$, $\xi^n$ is $\mathcal{F}^n_n$-measurable, with

$$
E[\xi^2]+\sup_n E[(\xi^n)^2]<\infty.
$$

Assumption 7.1.4. $\xi^n$ converges to $\xi$ in $L^1$ as $n\to\infty$.

Then we have

Theorem 7.1.2. Assume that Assumptions 7.1.2, 7.1.3 and 7.1.4 hold. Consider the scaled random walks $B^n$; if $B^n\to B$ uniformly on the interval $[0,1]$ in probability, then $(y^n,z^n)\to(y,z)$ in the following sense:

$$
\sup_{0\le t\le 1}|y^n_t-y_t|^2+\int_0^1|z^n_s-z_s|^2\,ds\to 0,\quad\text{as }n\to\infty,\text{ in probability.} \tag{7.10}
$$
It is easy to check that for the implicit scheme Assumptions 7.1.3 and 7.1.4 hold. From Donsker's theorem, we know that $B^n_t$ in (7.4) converges uniformly on $[0,1]$ to the Brownian motion $B_t$ in probability. Then, as $n\to\infty$, $(y^n,z^n)$ converges to $(y,z)$ in the sense of (7.10).
For the explicit scheme, from the terminal condition $\overline{y}^n_n=\Phi((B^n_j)_{0\le j\le n})$, for $0\le j\le n-1$ the backward recursion at step $j$ is given by

$$
\overline{y}^n_j=E[\overline{y}^n_{j+1}|\mathcal{F}^n_j]+f\big(t_j,E[\overline{y}^n_{j+1}|\mathcal{F}^n_j],\overline{z}^n_j\big)\,\delta, \tag{7.11}
$$
$$
\overline{z}^n_j=\frac{1}{\sqrt{\delta}}E[\overline{y}^n_{j+1}\varepsilon_{j+1}|\mathcal{F}^n_j].
$$

Then set $\overline{y}^n_t=\overline{y}^n_{[t/\delta]}$, $\overline{z}^n_t=\overline{z}^n_{[t/\delta]}$, for $0\le t\le 1$. For the convergence of this scheme, we first need estimates of $(\overline{y}^n_j,\overline{z}^n_j)$; for this, we need the following Gronwall-type lemma, which is proved in the Appendix.
Lemma 7.1.2. Let $a$, $b$, $\alpha$ be positive constants with $\delta b<1$, and let $(v_j)_{j=1,\dots,n}$ be a sequence of positive numbers such that, for every $j$,

$$
v_j+\alpha\le a+b\,\delta\sum_{i=1}^{j}v_i.
$$

Then

$$
\sup_{j\le n}v_j+\alpha\le a\,E_\delta(b),
$$

where $E_\delta(b)$ is the convergent series

$$
E_\delta(b)=1+\sum_{p=1}^{\infty}\frac{b^p}{p!}(1+\delta)\cdots(1+(p-1)\delta),
$$

which is increasing in $\delta$ and tends to $e^b$ as $\delta\to 0$.
Lemma 7.1.3. For $(1+2k+3k^2)\delta<1$, we have

$$
E\Big[\sup_j\big|\overline{y}^n_j\big|^2\Big]+E\Big[\sum_{j=0}^{n}\big|\overline{z}^n_j\big|^2\Big]\delta\le c\,C_{\xi^n,f^n}, \tag{7.12}
$$

where $c$ is a constant depending only on $k$, and $C_{\xi^n,f^n}=E\big[|\xi^n|^2+\sum_{j=0}^{n}|f(t_j,0,0)|^2(\delta^2+\delta)\big]$.
Proof. By the explicit scheme, we have

$$
\overline{y}^n_j=\overline{y}^n_{j+1}+f\big(t_j,E[\overline{y}^n_{j+1}|\mathcal{F}^n_j],\overline{z}^n_j\big)\,\delta-\overline{z}^n_j\sqrt{\delta}\,\varepsilon_{j+1}.
$$

Squaring both sides, taking expectations and using the Lipschitz condition on $f$, we get

$$
E\big|\overline{y}^n_j\big|^2+2\delta E\big|\overline{z}^n_j\big|^2\le E\big|\overline{y}^n_{j+1}\big|^2+(\delta^2+\delta)E|f(t_j,0,0)|^2+3k^2\delta^2 E\big|\overline{z}^n_j\big|^2+\delta(1+k+k^2+3k^2\delta)E\big|\overline{y}^n_{j+1}\big|^2.
$$

Noticing that $3k^2\delta<1$ and $3k^2\delta^2<\delta$ (since $\delta<1$), it follows by recurrence that

$$
E\Big[\big|\overline{y}^n_j\big|^2+\delta\sum_{i=j}^{n}|\overline{z}^n_i|^2\Big]\le E|\xi^n|^2+(\delta^2+\delta)\sum_{i=j}^{n}E|f(t_i,0,0)|^2+\delta(2+k+k^2)\sum_{i=j}^{n}E\big|\overline{y}^n_{i+1}\big|^2.
$$
Then, by Lemma 7.1.2, we obtain

$$
E\Big[\sup_j\big|\overline{y}^n_j\big|^2\Big]\le C_{\xi^n,f^n}E_\delta(2+k+k^2),
$$

and the result follows. $\Box$

Theorem 7.1.3. Assume that Assumptions 7.1.2, 7.1.3 and 7.1.4 hold, and that for every $(y,z)$, $t\mapsto f(t,y,z)$ is continuous. If the scaled random walks $B^n\to B$ uniformly on $[0,1]$ in probability, then $(\overline{y}^n,\overline{z}^n)\to(y,z)$ in the following sense:

$$
\sup_{0\le t\le 1}|\overline{y}^n_t-y_t|^2+\int_0^1|\overline{z}^n_s-z_s|^2\,ds\to 0,\quad\text{as }n\to\infty,\text{ in probability.}
$$
Proof. From Theorem 7.1.2, $(y^n,z^n)\to(y,z)$ in the sense of (7.10), so it is sufficient to prove that the difference between $(y^n,z^n)$ and $(\overline{y}^n,\overline{z}^n)$ is small. From (7.6) and (7.11), we apply Itô's formula to $|y^n_j-\overline{y}^n_j|^2$ and take expectations; noticing that $\xi^n-\overline{\xi}^n=0$, we get

$$
E\big|y^n_j-\overline{y}^n_j\big|^2+E\Big[\delta\sum_{i=j}^{n}|z^n_i-\overline{z}^n_i|^2\Big]\le 2\sum_{i=j}^{n}E\Big[(y^n_i-\overline{y}^n_i)\Big(f(t_i,y^n_i,z^n_i)-f\big(t_i,E[\overline{y}^n_{i+1}|\mathcal{F}^n_i],\overline{z}^n_i\big)\Big)\Big]\delta
$$
$$
\le(2k+2k^2)\delta E\Big[\sum_{i=j}^{n}|y^n_i-\overline{y}^n_i|^2\Big]+\frac{1}{2}E\Big[\delta\sum_{i=j}^{n}|z^n_i-\overline{z}^n_i|^2\Big]+2k\delta\sum_{i=j}^{n}E\Big[(y^n_i-\overline{y}^n_i)\big(\overline{y}^n_i-E[\overline{y}^n_{i+1}|\mathcal{F}^n_i]\big)\Big].
$$

By (7.11), $\overline{y}^n_i-E[\overline{y}^n_{i+1}|\mathcal{F}^n_i]=f\big(t_i,E[\overline{y}^n_{i+1}|\mathcal{F}^n_i],\overline{z}^n_i\big)\,\delta$; with the Lipschitz property of $f$ and (7.12), we get

$$
E\big|y^n_j-\overline{y}^n_j\big|^2+\frac{1}{2}E\Big[\delta\sum_{i=j}^{n}|z^n_i-\overline{z}^n_i|^2\Big]
\le(2k+2k^2+k^2\delta^2)\delta E\Big[\sum_{i=j}^{n}|y^n_i-\overline{y}^n_i|^2\Big]+\delta^2 E\sum_{i=j}^{n}\Big[|f(t_i,0,0)|^2+3k^2\big|\overline{y}^n_{i+1}\big|^2+3k^2|\overline{z}^n_i|^2\Big]
$$
$$
\le(2k+2k^2+1)\delta E\Big[\sum_{i=j}^{n}|y^n_i-\overline{y}^n_i|^2\Big]+\delta\,c_k\,C_{\xi^n,f^n},
$$

where $c_k$ is a constant depending only on $k$. Then, by Lemma 7.1.2, we get

$$
E\sup_{j\le n}\big|y^n_j-\overline{y}^n_j\big|^2\le\delta\,c_k\,C_{\xi^n,f^n}E_\delta(2k+2k^2+1),
$$

where $E_\delta(2k+2k^2+1)=1+\sum_{p=1}^{\infty}\frac{(2k+2k^2+1)^p}{p!}(1+\delta)\cdots(1+(p-1)\delta)$, which is increasing in $\delta$ and tends to $\exp(2k+2k^2+1)$ as $\delta\to 0$. It follows immediately that

$$
E\sup_{j\le n}\big|y^n_j-\overline{y}^n_j\big|^2+\frac{1}{2}E\Big[\delta\sum_{i=0}^{n}|z^n_i-\overline{z}^n_i|^2\Big]\le\delta\,c_k\,C_{\xi^n,f^n}E_\delta(2k+2k^2+1).
$$
So as $n\to\infty$, i.e. $\delta\to 0$, we get

$$
E\sup_{0\le t\le 1}|y^n_t-\overline{y}^n_t|^2+E\int_0^1|z^n_s-\overline{z}^n_s|^2\,ds\to 0,
$$

and the result follows. $\Box$
7.2 Simulation results for BSDEs
The most important characteristic of this scheme is that we calculate $y_t$ backwards as a function of $(t,B_t)$, i.e. we solve all the values as $y^n_j=\Phi^n_j(t_j,B^n_j)$. When we consider the general case,

$$
y^n_n=\xi^n=\Phi((B^n_j)_{0\le j\le n})=\Phi^n(\varepsilon_1,\varepsilon_2,\cdots,\varepsilon_n),
$$

if we divide the time period $[0,1]$ into $n$ parts, then there are $2^n$ possible states for $y^n_n$; likewise, $y^n_{n-1}$ has $2^{n-1}$ states, and so on. So in the end, to store $\{y^n_j,z^n_j\}_{0\le j\le n}$, we need an array of $(2\times 2^n-1)\times 2$ elements. Even for a modest value such as $n=20$, the computation becomes unbearable: the array would consist of about $4.2\times 10^6$ numbers.

So in practice, when we do simulations, we consider terminal conditions of the form

$$
y^n_n=\xi^n=\Phi(B^n_n).
$$

In this case we adopt the binomial tree scheme for the discrete Brownian motion: if we divide the time period $[0,1]$ into $n$ parts, $y^n_n$ has $n+1$ different states, $y^n_{n-1}$ has $n$ states, and so on. In the end we only need an array of $n(n+1)$ elements for $\{y^n_j,z^n_j\}_{0\le j\le n-1}$, which is far more tractable than the general case.
Using MATLAB, we have developed a toolbox for calculating and simulating the solution of BSDEs, for a given analytic generator and terminal condition. We use some examples to show the results of this toolbox, and we hope it will be useful for understanding and analyzing BSDEs in both practical and theoretical aspects. We now present some simulation results, with BSDE parameters $f(t,y,z)=-z^2$, $\xi=\sin(|B_T|)$.
[Figure 2.1: The solution surface with one trajectory.]
In this figure, the coloured surface is the solution $y$, showing its values on the time interval $[0,1]$ and the state interval $[-4,4]$. The line on the surface is one trajectory of $y$, while the line at the bottom is the corresponding trajectory of the Brownian motion. With the same parameters, the following figures show the trajectories of $y$ and $z$, in both 2 and 3 dimensions:
[Figure 2.2: The trajectories of the solution — panels show the simulated $y(t)$ and $z(t)$ in 3-D (over $(t,x)$) and in 2-D.]
Now, for cases where the BSDE has an explicit solution, we list our numerical results for different values of $n$.

Case I. The coefficient $f$ is linear in $y$ and $z$: we set $f(s,y,z)=b_sy+c_sz+r_s$ with $b_s\equiv b$, $c_s\equiv c$, $r_s\equiv r$. Then the solution $Y_t$ of the BSDE is

$$
Y_t=E\Big[\xi\exp\Big(\int_t^T b_s\,ds-\frac{1}{2}\int_t^T c_s^2\,ds+\int_t^T c_s\,dB_s\Big)
+\int_t^T r_s\exp\Big(\int_t^s b_u\,du-\frac{1}{2}\int_t^s c_u^2\,du+\int_t^s c_u\,dB_u\Big)\,ds\,\Big|\,\mathcal{F}_t\Big]
$$
$$
=\exp\Big(\big(b-\tfrac{1}{2}c^2\big)(T-t)\Big)E\big[\xi\exp(c(B_T-B_t))\,\big|\,\mathcal{F}_t\big]+\frac{r}{b}\big[\exp(b(T-t))-1\big];
$$

in particular,

$$
Y_0=\exp\Big(\big(b-\tfrac{1}{2}c^2\big)T\Big)E\big[\xi\exp(cB_T)\big]+\frac{r}{b}\big[\exp(bT)-1\big].
$$
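This closed-form value can be checked independently by plain Monte Carlo. The helper below is our own sketch (not part of the toolbox) and assumes constant coefficients $b\ne 0$, $c$, $r$ and a terminal condition of the form $\xi=\xi(B_T)$:

```python
import numpy as np

def linear_bsde_y0(b, c, r, xi, T=1.0, n_samples=1_000_000, seed=0):
    """Monte Carlo evaluation of the closed-form value
    Y_0 = exp((b - c^2/2) T) E[xi(B_T) exp(c B_T)] + (r/b)(exp(bT) - 1)."""
    rng = np.random.default_rng(seed)
    BT = np.sqrt(T) * rng.standard_normal(n_samples)  # B_T ~ N(0, T)
    mc = np.mean(xi(BT) * np.exp(c * BT))
    return np.exp((b - 0.5 * c**2) * T) * mc + (r / b) * np.expm1(b * T)
```

With $b=c=r=1$ and $\xi=\sin(|B_1|)$ this gives a value near 3.485, in agreement with the table of Example 7.2.1.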
Example 7.2.1. Set $b=c=r=1$, $\xi=\sin(|B_T|)$. With the implicit algorithm, our results are shown in the following table:

n    | 100    | 800    | 1000   | 2000   | 5000
Y_0  | 3.5106 | 3.4916 | 3.4879 | 3.4866 | 3.4859

The explicit solution is $Y_0=\exp(\frac{1}{2})E[\sin(|B_1|)\exp(B_1)]+\exp(1)-1$; approximating this expectation with 10,000,000 samples gives $Y_0=3.4850$.
Example 7.2.2. Set $b=c=1$, $r=0$, $\xi=|B_T|$. The simulation results are:

n    | 100    | 500    | 1000   | 2000   | 5000
Y_0  | 3.1806 | 3.1731 | 3.1722 | 3.1719 | 3.1714

The explicit solution is $Y_0=\exp(\frac{1}{2})E[|B_1|\exp(B_1)]$; with 10,000,000 samples we get $Y_0=3.1710$.
Case II. The coefficient $f$ has quadratic growth in $z$: set $f(t,y,z)=\frac{1}{2}z^2$. Setting $\widetilde{y}_t=\exp(Y_t)$, it is easy to check via Itô's formula that $\widetilde{y}_t$ satisfies the BSDE

$$
\widetilde{y}_t=\exp(\xi)-\int_t^T\widetilde{z}_s\,dB_s,
$$

where $\widetilde{z}_t=\widetilde{y}_tZ_t$. So $\widetilde{y}_t=E[\exp(\xi)|\mathcal{F}_t]$, hence $Y_t=\ln(E[\exp(\xi)|\mathcal{F}_t])$; in particular,

$$
Y_0=\ln(E[\exp(\xi)]).
$$
Example 7.2.3. Set $\xi=\sin(|B_1|)$. The simulation results are:

n    | 100    | 400    | 800    | 1000   | 2000
Y_0  | 0.6249 | 0.6253 | 0.6254 | 0.6254 | 0.6255

Since $Y_0=\ln(E[\exp(\sin(|B_1|))])$, taking 10,000,000 samples gives $Y_0=0.6255$.
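The value $Y_0=\ln E[\exp(\xi)]$ is again easy to check by Monte Carlo; a short sketch with our own naming:

```python
import numpy as np

def quadratic_bsde_y0(xi, n_samples=1_000_000, seed=0):
    """Monte Carlo value of Y_0 = ln E[exp(xi(B_1))] for f(t,y,z) = z^2/2."""
    rng = np.random.default_rng(seed)
    B1 = rng.standard_normal(n_samples)  # B_1 ~ N(0, 1)
    return np.log(np.mean(np.exp(xi(B1))))
```

With $\xi=\sin(|B_1|)$ this returns a value close to the 0.6255 of the table above.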
7.3 Discretization and Algorithms for Reflected BSDEs
Now we study different algorithms for reflected BSDEs and their simulation results. Without loss of generality, we set $T=1$ in the rest of this section.

We divide the time interval $[0,1]$ into $n$ parts, for $n\in\mathbb{N}$ large enough: $0=t_0<t_1<\cdots<t_n=1$, and set $\delta:=t_j-t_{j-1}=\frac{1}{n}$ for $1\le j\le n$. As in Section 7.1, we define the scaled random walk $B^n_t$ as in (7.4), $B^n_t=\sqrt{\delta}\sum_{m=0}^{[t/\delta]}\varepsilon_m$, $0\le t\le 1$, from a Bernoulli sequence $(\varepsilon_m)_{1\le m\le n}$ with $\varepsilon_0=0$, i.e. i.i.d. random variables satisfying

$$
P(\varepsilon_m=1)=P(\varepsilon_m=-1)=0.5.
$$

Obviously, $B^n_t$ is an $\mathcal{F}_t$-measurable process taking discrete values. If we denote $B^n_j=B^n_{t_j}$, we have $B^n_j=\sqrt{\delta}\sum_{m=1}^{j}\varepsilon_m$. We denote the discrete filtration, for $1\le j\le n$,

$$
\mathcal{F}^n_j:=\sigma\{\varepsilon_m;\,0\le m\le j\}=\sigma\{B^n_t;\,0\le t\le t_j\}.
$$
7.3.1 Algorithms for Reflected BSDEs with one barrier
Consider the reflected BSDE on $[0,1]$ associated to $(\xi,f,L)$, where $\xi$ and $f$ satisfy Assumptions 7.1.1 and 7.1.2, $E[\sup_{0\le t\le 1}((L_t)^+)^2]<\infty$ and $L_1\le\xi$. Then there exists a unique triple $(Y,Z,K)$ satisfying

$$
Y_t=\xi+\int_t^1 f(s,Y_s,Z_s)\,ds+K_1-K_t-\int_t^1 Z_s\,dB_s, \tag{7.13}
$$

with $Y_t\ge L_t$, $0\le t\le 1$, and $\int_0^1(Y_t-L_t)\,dK_t=0$. Here $\xi=\Phi((B_s)_{0\le s\le 1})$ and $B$ is a 1-dimensional Brownian motion; we suppose that $L$ is an Itô process.

After discretizing the time interval, we first present the reflected scheme. On the small interval $[t_j,t_{j+1}]$, equation (7.13) can be approximated by the discrete equation

$$
y^n_j=y^n_{j+1}+f(t_j,y^n_j,z^n_j)\,\delta+d^n_j-z^n_j\sqrt{\delta}\,\varepsilon_{j+1}, \tag{7.14}
$$

where $d^n_j$ approximates $K_{t_{j+1}}-K_{t_j}$, with $y^n_j\ge L^n_j=L_{t_j}$ and $(y^n_j-L^n_j)d^n_j=0$. Equation (7.14) is called the discrete reflected equation in [54].
The numerical calculation begins at the discrete terminal condition $y^n_n:=\xi^n=\Phi((B^n_j)_{0\le j\le n})=\Phi^n((\varepsilon_j)_{0\le j\le n})$. Suppose that $(y^n_{j+1},z^n_{j+1},d^n_{j+1})$ is already solved; we then study (7.14) to solve $(y^n_j,z^n_j,d^n_j)$. Since $y^n_{j+1}$ has the form $y^n_{j+1}=\Phi_{j+1}(\varepsilon_1,\cdots,\varepsilon_{j+1})$, set

$$
y^+_{j+1}=\Phi_{j+1}(\varepsilon_1,\cdots,1),\qquad y^-_{j+1}=\Phi_{j+1}(\varepsilon_1,\cdots,-1).
$$

From (7.14), we get immediately $z^n_j=\frac{1}{\sqrt{\delta}}E[y^n_{j+1}\varepsilon_{j+1}|\mathcal{F}^n_j]=\frac{1}{2\sqrt{\delta}}(y^+_{j+1}-y^-_{j+1})$. Substituting this into the equation, the problem becomes finding $(y^n_j,d^n_j)$ satisfying

$$
y^n_j=E[y^n_{j+1}|\mathcal{F}^n_j]+f(t_j,y^n_j,z^n_j)\,\delta+d^n_j, \tag{7.15}
$$
$$
y^n_j\ge L^n_j,\qquad (y^n_j-L^n_j)d^n_j=0.
$$

To solve for $y^n_j$ and $d^n_j$, we have two different ways:
Implicit reflected scheme. Consider the mapping $\Theta(y):=y-\big(f(t_j,y,z^n_j)-f(t_j,L^n_j,z^n_j)\big)\,\delta$; for $\delta$ small enough we have

$$
\langle\Theta(y)-\Theta(y'),y-y'\rangle\ge(1-\delta k)|y-y'|^2>0,
$$

i.e. $\Theta$ is strictly increasing, with $\Theta(L^n_j)=L^n_j$, so

$$
\Theta(y)\ge L^n_j\iff y\ge L^n_j.
$$

Noticing that $E[y^n_{j+1}|\mathcal{F}^n_j]=\frac{1}{2}(y^+_{j+1}+y^-_{j+1})$, we get

$$
y^n_j=\Theta^{-1}\Big(\frac{1}{2}(y^+_{j+1}+y^-_{j+1})+f(t_j,L^n_j,z^n_j)\,\delta+d^n_j\Big),\qquad
d^n_j=\Big(\frac{1}{2}(y^+_{j+1}+y^-_{j+1})+f(t_j,L^n_j,z^n_j)\,\delta-L^n_j\Big)^-.
$$

The proof of convergence of this implicit reflected scheme is rather long, so we defer it to the appendix of this chapter.
Explicit reflected scheme. Instead of inverting $\Theta$, we use $E[y^n_{j+1}|\mathcal{F}^n_j]$ to approximate $y^n_j$ on the right side of (7.15); then

$$
y^n_j=\frac{1}{2}(y^+_{j+1}+y^-_{j+1})+f\Big(t_j,\frac{1}{2}(y^+_{j+1}+y^-_{j+1}),z^n_j\Big)\,\delta+d^n_j,
$$
$$
d^n_j=\Big(\frac{1}{2}(y^+_{j+1}+y^-_{j+1})+f\Big(t_j,\frac{1}{2}(y^+_{j+1}+y^-_{j+1}),z^n_j\Big)\,\delta-L^n_j\Big)^-.
$$
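For a terminal condition $\xi=\phi(B^n_1)$ and a barrier of the form $L_t=\psi(t,B_t)$ (as in the simulations of Section 7.3.2), the explicit reflected scheme is a backward recursion on the binomial tree with a projection step at each node. A hedged Python sketch (our naming):

```python
import numpy as np

def explicit_reflected_bsde(phi, f, barrier, n):
    """Explicit reflected scheme for the RBSDE with lower barrier
    L^n_j = barrier(t_j, B^n_j) and terminal condition xi = phi(B^n_1).
    Returns y^n_0."""
    delta = 1.0 / n
    sd = np.sqrt(delta)
    states = lambda j: sd * (2.0 * np.arange(j + 1) - j)  # B^n states at level j
    y = phi(states(n))
    for j in range(n - 1, -1, -1):
        y_minus, y_plus = y[:-1], y[1:]
        z = (y_plus - y_minus) / (2.0 * sd)
        ybar = 0.5 * (y_plus + y_minus)
        y_free = ybar + f(j * delta, ybar, z) * delta     # unreflected value
        L = barrier(j * delta, states(j))
        d = np.maximum(L - y_free, 0.0)                   # d^n_j = (y_free - L^n_j)^-
        y = y_free + d                                    # i.e. y^n_j = max(y_free, L^n_j)
    return y[0]
```

When the barrier lies far below the solution, $d^n_j\equiv 0$ and the scheme reduces to the unreflected explicit scheme.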
The other natural numerical method for reflected BSDEs goes through penalization. From Section 6 of [28], we know that the solution of the reflected BSDE can be approximated by the penalization equations, which are solutions of classical BSDEs: for $p\in\mathbb{N}$, the penalization equation is

$$
Y^p_t=\xi+\int_t^T f(s,Y^p_s,Z^p_s)\,ds+p\int_t^T(Y^p_s-L_s)^-\,ds-\int_t^T Z^p_s\,dB_s. \tag{7.16}
$$

Denote $K^p_t=p\int_0^t(Y^p_s-L_s)^-\,ds$. Then the following holds:

Theorem 7.3.1. As $p\to\infty$, $Y^p\to Y$ in $S^2(0,T)$, $Z^p\to Z$ in $H^2(0,T)$ and $K^p\to K$ in $A^2(0,T)$, where $(Y,Z,K)$ is the solution (7.13) of the reflected BSDE associated to $(\xi,f,L)$.
Numerical penalization scheme. For $p$ large enough, on the small interval $[t_j,t_{j+1}]$ we consider the discrete penalized BSDE

$$
y^{p,n}_j=y^{p,n}_{j+1}+f(t_j,y^{p,n}_j,z^{p,n}_j)\,\delta+p\,(y^{p,n}_j-L^n_j)^-\,\delta-z^{p,n}_j\sqrt{\delta}\,\varepsilon_{j+1},
$$

with discrete terminal condition $y^{p,n}_n:=\xi^n=\Phi((B^n_j)_{0\le j\le n})=\Phi^n((\varepsilon_j)_{0\le j\le n})$. If we have already solved $(y^{p,n}_{j+1},z^{p,n}_{j+1})$, which has the form $y^{p,n}_{j+1}=\Phi^p_{j+1}(\varepsilon_1,\cdots,\varepsilon_{j+1})$, then to solve $(y^{p,n}_j,z^{p,n}_j)$ we set

$$
y^{p,+}_{j+1}=\Phi^p_{j+1}(\varepsilon_1,\cdots,1),\qquad y^{p,-}_{j+1}=\Phi^p_{j+1}(\varepsilon_1,\cdots,-1),
$$

and it is easy to get $z^{p,n}_j=\frac{1}{\sqrt{\delta}}E[y^{p,n}_{j+1}\varepsilon_{j+1}|\mathcal{F}^n_j]=\frac{1}{2\sqrt{\delta}}(y^{p,+}_{j+1}-y^{p,-}_{j+1})$. So for $y^{p,n}_j$ we have

$$
y^{p,n}_j=E[y^{p,n}_{j+1}|\mathcal{F}^n_j]+f(t_j,y^{p,n}_j,z^{p,n}_j)\,\delta+p\,(y^{p,n}_j-L^n_j)^-\,\delta. \tag{7.17}
$$

There are again two ways to find the solution. One is to solve the equation

$$
y^{p,n}_j=(\Theta^p)^{-1}\big(E[y^{p,n}_{j+1}|\mathcal{F}^n_j]\big)=(\Theta^p)^{-1}\Big(\frac{1}{2}(y^{p,+}_{j+1}+y^{p,-}_{j+1})\Big),
$$

where $\Theta^p$ is the mapping $\Theta^p(y)=y-\big(f(t_j,y,z^{p,n}_j)+p\,(y-L^n_j)^-\big)\,\delta$. The other is partly explicit: we replace $y^{p,n}_j$ by $E[y^{p,n}_{j+1}|\mathcal{F}^n_j]$ in $f$ only, then solve the remaining quasi-linear equation in closed form:

$$
y^{p,n}_j=E[y^{p,n}_{j+1}|\mathcal{F}^n_j]+f\big(t_j,E[y^{p,n}_{j+1}|\mathcal{F}^n_j],z^{p,n}_j\big)\,\delta
+\frac{p\delta}{1+p\delta}\Big(E[y^{p,n}_{j+1}|\mathcal{F}^n_j]+f\big(t_j,E[y^{p,n}_{j+1}|\mathcal{F}^n_j],z^{p,n}_j\big)\,\delta-L^n_j\Big)^-.
$$

Substituting $E[y^{p,n}_{j+1}|\mathcal{F}^n_j]=\frac{1}{2}(y^{p,+}_{j+1}+y^{p,-}_{j+1})$, the result follows easily.
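The partly explicit penalization step has a closed form: writing $y_{\mathrm{free}}$ for the unpenalized value, the equation $y=y_{\mathrm{free}}+p\delta(y-L)^-$ is solved by $y=y_{\mathrm{free}}$ when $y_{\mathrm{free}}\ge L$ and by $y=(y_{\mathrm{free}}+p\delta L)/(1+p\delta)$ otherwise, which is exactly the $\frac{p\delta}{1+p\delta}$ formula above. A hedged Python sketch on the binomial tree (our naming):

```python
import numpy as np

def penalized_bsde(phi, f, barrier, n, p):
    """Explicit penalization scheme for the RBSDE, penalty parameter p.
    Returns y^{p,n}_0 on the binomial tree with xi = phi(B^n_1)."""
    delta = 1.0 / n
    sd = np.sqrt(delta)
    states = lambda j: sd * (2.0 * np.arange(j + 1) - j)
    y = phi(states(n))
    for j in range(n - 1, -1, -1):
        y_minus, y_plus = y[:-1], y[1:]
        z = (y_plus - y_minus) / (2.0 * sd)
        ybar = 0.5 * (y_plus + y_minus)
        y_free = ybar + f(j * delta, ybar, z) * delta
        L = barrier(j * delta, states(j))
        # solve y = y_free + p*delta*(y - L)^- in closed form
        y = np.where(y_free >= L, y_free,
                     (y_free + p * delta * L) / (1.0 + p * delta))
    return y[0]
```

For large $p$ this approximates the reflected scheme, in line with Theorem 7.3.1.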
7.3.2 Simulation results of reflected BSDE with one barrier
Now we can run simulations for the reflected BSDE. We start from the terminal time 1 with $y^n_n=\xi^n$ and repeat the procedure of the reflected scheme or of the numerical penalization scheme, backwardly solving $(y^n_j,z^n_j,d^n_j)$ for $j=n-1,\cdots,1,0$.

On account of the computation cost, we only treat a very simple situation: $\xi=\phi(B_1)$, $L_t=\psi(t,B_t)$, where $\phi$ and $\psi$ are regular real functions defined on $\mathbb{R}$ and $[0,1]\times\mathbb{R}$ respectively. We set $f(y,z)=-10|y+z|-1$, $\xi=\Phi(B_1)=|B_1|$, $L_t=\Psi(t,B_t)=-3(B_t-1)^2+1$ and $n=400$.

After inputting the parameters, we run the calculation program, here using the implicit reflected scheme, and obtain the result surface $y(t,x)$. Figure 3.1 shows the surface of the solution $y$ and one of its trajectories.
[Figure 3.1: The solution surface — upper panel: the barrier and solution surfaces with two sample trajectories; lower-left panel: the increasing processes $K_t$; lower-right panel: $Y_t-L_t$.]
In the above window, the lower surface is for the barrier L, as well the upper one for the valueof y. Then we generate two trajectories Bn,i
j , i = 1, 2 of the discrete Brownian motion Bn which are
192 Chapitre 7. Numerical algorithms and simulations
drawn below. The values $y^{n,i}_j$ ($i = 1, 2$) along these Brownian paths are shown on the surface, and a fine line marks the correspondence between $y$ and $B$. Meanwhile, in the lower-left window the lines represent the trajectories of the reflecting force $K^{n,i}_j = \sum_{k=0}^{j} d^{n,i}_k$ ($i = 1, 2$) associated with the values $y^{n,i}_j$, and in the lower-right window we plot $y^{n,i}_j - L^{n,i}_j$ ($i = 1, 2$). The two groups of lines are drawn in different colors; in the following we use the same convention to distinguish groups of lines.

From the upper window we see that part of the two surfaces (solution surface and barrier surface) stick together. When a trajectory of the solution $y^n_j$ enters this area, the force $K^{n,i}_j$ pushes it upward. Indeed, without the barrier the solution $y^{n,i}_j$ would tend to go down, so the force $K^{n,i}_j$ is needed to keep $y^{n,i}_j$ above the reflecting barrier $L^{n,i}_j$. Comparing the two trajectories, we see that the force $K^{n,1}_j$ of the first trajectory $y^{n,1}_j$ acts much more than the second one $K^{n,2}_j$, since $y^{n,1}_j$ enters the sticking area much more than $y^{n,2}_j$. Comparing the two lower pictures, we easily see that $K^{n,i}_j$ increases only at indices $j$ where $y^{n,i}_j - L^{n,i}_j$ takes the value $0$; conversely, when $j$ satisfies $y^{n,i}_j - L^{n,i}_j = 0$, $K^{n,i}_j$ does not necessarily increase.
Figure 3.2 : Simulation of $y$, $z$, $K$ (trajectories of $y(t)$ and $z(t)$ in 3-D and 2-D, and of $K_t$ and $y_t - L_t$).
This point can be seen more clearly in Figure 3.2, where we again generate two trajectories $B^{n,i}_j$, $i = 1, 2$. The two windows on the left show the trajectories of $y^{n,i}_j$ ($i = 1, 2$): the upper one in three dimensions, with lines for the respective discrete Brownian motions $B^{n,i}_j$, and the lower one in two dimensions. The two trajectories of $z^{n,i}_j$ ($i = 1, 2$) are displayed in the two middle windows with the same layout as for $y^{n,i}_j$. The upper-right window shows the trajectories $K^{n,i}_j$ ($i = 1, 2$), and the lower-left window shows $y^{n,i}_j - L^{n,i}_j$ ($i = 1, 2$); comparing the two, we again see the relation between $K^{n,i}_j$ and $y^{n,i}_j - L^{n,i}_j$, as in Figure 3.1.
We now list some numerical results for the reflected scheme and the explicit penalization scheme; they show that, as the penalization parameter $p$ tends to infinity, $y^{p,n}_0$ converges to $y^n_0$. Consider the same parameters as above: $f(y,z) = -10\,|y+z| - 1$, $\xi = \Phi(B_1) = |B_1|$, $L_t = \Psi(t, B_t) = -3(B_t - 1)^2 + 1$. The results are as follows:
n = 400, reflected scheme: y^n_0 = -1.2977; penalization scheme:

    p           10^3       5 x 10^3   10^4       5 x 10^4
    y^{p,n}_0   -1.3021    -1.2987    -1.2982    -1.2978

n = 1000, reflected scheme: y^n_0 = -1.2685; penalization scheme:

    p           10^3       5 x 10^3   10^4       5 x 10^4
    y^{p,n}_0   -1.2739    -1.2696    -1.2691    -1.2686

n = 2000, reflected scheme: y^n_0 = -1.2595; penalization scheme:

    p           10^3       5 x 10^3   10^4       5 x 10^4
    y^{p,n}_0   -1.2649    -1.2607    -1.2601    -1.2595
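For comparison, a full backward induction over the Bernoulli tree can be sketched as below. This sketch uses the explicit reflected scheme (a one-step projection onto the barrier), whereas the tables above were produced with the implicit scheme; all names are illustrative.

```python
import math

def reflected_bsde_tree(phi, f, barrier, n, T=1.0):
    """Explicit reflected scheme for a reflected BSDE with one lower barrier
    on the binomial tree of the scaled random walk (a sketch).
    phi(x): terminal condition xi = phi(B_T); barrier(t, x): L_t = psi(t, x).
    Returns the approximation y^n_0."""
    delta = T / n
    sq = math.sqrt(delta)
    # nodes of B^n at level j take the values (2k - j) * sqrt(delta), k = 0..j
    y = [phi((2 * k - n) * sq) for k in range(n + 1)]
    for j in range(n - 1, -1, -1):
        t, new_y = j * delta, []
        for k in range(j + 1):
            x = (2 * k - j) * sq
            y_up, y_dn = y[k + 1], y[k]
            ey = 0.5 * (y_up + y_dn)
            z = (y_up - y_dn) / (2.0 * sq)
            cand = ey + f(t, ey, z) * delta
            new_y.append(max(cand, barrier(t, x)))  # reflect on the barrier
        y = new_y
    return y[0]
```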
Application to American options. We now discuss some applications of reflected BSDEs. In mathematical finance, American options can be described by linear reflected BSDEs. For an American call option, the wealth $y_t$ satisfies the BSDE
\[
y_t = (x_1 - k)^+ - \int_t^1 [r y_s + (\mu - r) z_s]\,ds + K_1 - K_t - \int_t^1 \sigma z_s\,dB_s, \qquad 0 \le t \le 1,
\]
together with
\[
y_t \ge L_t \quad \text{for } 0 \le t \le \tau, \qquad \text{where } L_t = (x_t - k)^+,
\]
where $x_t$ is the price of the underlying stock, a geometric Brownian motion:
\[
x_t = x_0 + \int_0^t \mu x_s\,ds + \int_0^t \sigma x_s\,dB_s, \tag{7.18}
\]
and $\tau$ is the stopping time $\tau = \inf\{t : y_t - L_t < 0\}$. In finance, $y_t$ is the wealth process and $\tau$ stands for the investor's exit time from the market, at which he buys or sells the stock. In this problem we are interested in $y_t$, $z_t$ and $\tau$. To solve it, we regard it as a reflected BSDE with barrier $L_t$; then $\tau = \inf\{t : K_t > 0\}$. The parameters are
\[
r = 0.1, \quad \mu = 0.5, \quad \sigma = 1, \quad x_0 = 100, \quad k = 100.
\]
Since $f$ is linear in $y$, we use the implicit reflected scheme, generating two trajectories of the discrete Brownian motion to simulate the wealth processes. The results are shown in Figure 3.3. From this figure we find that the exit time is always the terminal time $\tau = 1$, which agrees with the theory of mathematical finance. Here the $x$-axis stands for $x$, the price of the stock.
Figure 3.3 : Solution of the American call option (trajectories of $y$, $K_t$ and $y_t - L_t$; the blue point marks the exit time).
The reflected BSDE applied to an American put option corresponds to the case
\[
y_1 = \xi = (k - x_1)^+, \qquad L_t = (k - x_t)^+.
\]
The investor's wealth $y_t$ satisfies the RBSDE
\[
y_t = (k - x_1)^+ - \int_t^1 [r y_s + (\mu - r) z_s]\,ds + K_1 - K_t - \int_t^1 z_s\,dB_s, \qquad 0 \le t \le 1, \tag{7.19}
\]
\[
y_t \ge L_t, \qquad dK_t \ge 0, \qquad \int_0^1 (y_t - L_t)\,dK_t = 0,
\]
where $x_t$ is the price of the underlying stock as in (7.18). At the stopping time $\tau = \inf\{t : K_t > 0\}$ the investor exercises the contract; his return is $(k - x_\tau)^+$. We simulate this reflected BSDE with the values
\[
r = 0.1, \quad \mu = 0.5, \quad \sigma = 1, \quad x_0 = 100, \quad k = 100.
\]
With the implicit reflected scheme, the numerical calculation for the put option is similar to that for the American call option. The corresponding simulations are shown in Figures 3.4 and 3.5. In the upper window of Figure 3.4, the axes $(t, B, y)$ give the positions of time, Brownian motion and solution $y$ respectively, while in the two 3-dimensional windows of Figure 3.5, the $x$-axis stands for $x$, the price of the stock. The exit time $\tau$ is the debut of the set $\{(\omega, t) : K(\omega, t) > 0\}$; this area can be seen in the upper window of Figure 3.4. From these figures we observe that sometimes the investor quits the market early, and sometimes he quits at the terminal time $\tau = 1$. In these figures, the exit time is marked by a big point.
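Given simulated increments $d^n_j$ of the reflecting process $K^n$, reading off the discrete exit time is immediate (a sketch; the names are illustrative):

```python
def exit_time(d_increments, delta):
    """Discrete exit time tau = inf{ t : K_t > 0 }: the first index j with a
    positive reflection increment d_j; returns the terminal time if K never
    increases (a sketch)."""
    for j, d in enumerate(d_increments):
        if d > 0.0:
            return j * delta
    return len(d_increments) * delta
```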
Figure 3.4 : The solution of the American put option (7.19) (trajectories of $y$, $K_t$ and $y_t - L_t$; the blue point marks the exit time).
Figure 3.5 : The trajectories of $y$, $z$, $K$ of (7.19) (panels: $Y(t)$ and $z(t)$ against $t$ and $x$ and in 2-D, $K_t$ and $y_t - L_t$; the blue point marks the exit time).
7.3.3 Algorithms and Simulations of Reflected BSDEs with two barriers
For the RBSDE with two barriers on $[0,1]$ associated with $(\xi, f, L, U)$, assume $(\xi, f)$ satisfy assumptions 7.1.1 and 7.1.2, together with the following assumption:

Assumption 7.3.1. $L$ and $U$ are continuous $\mathcal F_t$-progressively measurable processes with
\[
E\big[\sup_{0\le t\le 1} ((L_t)^+)^2\big] < \infty \quad \text{and} \quad E\big[\sup_{0\le t\le 1} ((U_t)^-)^2\big] < \infty.
\]
Moreover,
(i) there exists a process $X_t = X_0 - \int_0^t J_s\,dB_s + V^+_t - V^-_t$, $X_1 = \xi$, with $J \in \mathbf H^2_d(0,1)$, $V^+, V^- \in \mathbf A^2(0,1)$, such that
\[
L_t \le X_t \le U_t \quad \text{P-a.s. for } 0 \le t \le 1;
\]
(ii) $L_t < U_t$ for $0 \le t \le 1$.

Then there exists a unique quadruple $(Y, Z, K^+, K^-)$ satisfying
\[
Y_t = \xi + \int_t^1 f(s, Y_s, Z_s)\,ds + K^+_1 - K^+_t - (K^-_1 - K^-_t) - \int_t^1 Z_s\,dB_s, \tag{7.20}
\]
with $L_t \le Y_t \le U_t$, $0 \le t \le 1$, and $\int_0^1 (Y_t - L_t)\,dK^+_t = \int_0^1 (Y_t - U_t)\,dK^-_t = 0$. Here we consider $\xi = \Phi((B_s)_{0\le s\le 1})$. As before, we also have the convergence of the penalized solutions (cf. [49]).
Theorem 7.3.2. Consider the penalization equations for the reflected BSDE with two barriers: for $p \in \mathbb N$,
\[
Y^p_t = \xi + \int_t^1 f(s, Y^p_s, Z^p_s)\,ds + p\int_t^1 (Y^p_s - L_s)^-\,ds - p\int_t^1 (Y^p_s - U_s)^+\,ds - \int_t^1 Z^p_s\,dB_s. \tag{7.21}
\]
Then, as $p \to \infty$, $Y^p_t \to Y_t$ in $\mathbf S^2(0,1)$, $Z^p_t \to Z_t$ in $\mathbf H^2_d(0,1)$ and $K^{\pm,p}_t \to K^\pm_t$ in $\mathbf A^2(0,1)$, where $(Y_t, Z_t, K^+_t, K^-_t)$ is the solution of the reflected BSDE (7.20).
As for the numerical solution of reflected BSDEs with one barrier, there are two main simulation schemes: one reflects the solution on the barriers directly, the other considers the penalized solutions instead of the reflected ones.
Reflected scheme. After discretizing time, on the small interval $[t_j, t_{j+1}]$ equation (7.20) can be approximated by
\[
y^n_j = y^n_{j+1} + f(t_j, y^n_j, z^n_j)\,\delta + d^n_j - a^n_j - z^n_j \sqrt{\delta}\,\varepsilon_{j+1}, \tag{7.22}
\]
where $d^n_j = K^+_{t_{j+1}} - K^+_{t_j}$ and $a^n_j = K^-_{t_{j+1}} - K^-_{t_j}$, with $L^n_j \le y^n_j \le U^n_j$, $(y^n_j - L^n_j)\,d^n_j = (y^n_j - U^n_j)\,a^n_j = 0$ and $d^n_j \cdot a^n_j = 0$. Here $L^n_j = L_{t_j}$, $U^n_j = U_{t_j}$, and the discrete terminal condition is $y^n_n := \xi^n = \Phi((B^n_j)_{0\le j\le n}) = \Phi^n((\varepsilon_j)_{0\le j\le n})$.
The key point is how to solve $(y^n_j, z^n_j, d^n_j, a^n_j)$ from (7.22) given $(y^n_{j+1}, z^n_{j+1}, d^n_{j+1})$. Since $y^n_{j+1}$ has the form $y^n_{j+1} = \Phi_{j+1}(\varepsilon_1, \dots, \varepsilon_{j+1})$, set
\[
y^+_{j+1} = \Phi_{j+1}(\varepsilon_1, \dots, 1), \qquad y^-_{j+1} = \Phi_{j+1}(\varepsilon_1, \dots, -1).
\]
By (7.22) we immediately get $z^n_j = E[y^n_{j+1}\varepsilon_{j+1}\,|\,\mathcal F^n_j] = \frac{1}{2\sqrt{\delta}}(y^+_{j+1} - y^-_{j+1})$. Substituting this into (7.22), the problem becomes finding $(y^n_j, d^n_j, a^n_j)$ satisfying
\[
y^n_j = E[y^n_{j+1}\,|\,\mathcal F^n_j] + f(t_j, y^n_j, z^n_j)\,\delta + d^n_j - a^n_j, \tag{7.23}
\]
\[
L^n_j \le y^n_j \le U^n_j, \qquad (y^n_j - L^n_j)\,d^n_j = (y^n_j - U^n_j)\,a^n_j = 0.
\]
Since $d^n_j$ (resp. $a^n_j$) acts only on $\{y^n_j = L^n_j\}$ (resp. $\{y^n_j = U^n_j\}$), and since $L^n_j < U^n_j$ means that $y^n$ cannot reach both barriers at the same time, i.e. $\{y^n_j = L^n_j\} \cap \{y^n_j = U^n_j\} = \emptyset$, it follows naturally that $d^n_j \cdot a^n_j = 0$. In other words, instead of (7.23) we need to solve either
\[
y^n_j = E[y^n_{j+1}\,|\,\mathcal F^n_j] + f(t_j, y^n_j, z^n_j)\,\delta + d^n_j, \qquad
L^n_j \le y^n_j < U^n_j, \quad (y^n_j - L^n_j)\,d^n_j = a^n_j = 0;
\]
or
\[
y^n_j = E[y^n_{j+1}\,|\,\mathcal F^n_j] + f(t_j, y^n_j, z^n_j)\,\delta - a^n_j, \qquad
L^n_j < y^n_j \le U^n_j, \quad (y^n_j - U^n_j)\,a^n_j = d^n_j = 0.
\]
Now consider the mappings $\underline\Theta(y) := y - (f(t_j, y, z^n_j) - f(t_j, L^n_j, z^n_j))\,\delta$ and $\overline\Theta(y) := y - (f(t_j, y, z^n_j) - f(t_j, U^n_j, z^n_j))\,\delta$. For $\delta$ small enough, $\underline\Theta$ and $\overline\Theta$ are strictly increasing in $y$, in view of
\[
\langle \underline\Theta(y) - \underline\Theta(y'), y - y' \rangle \ge (1 - \delta k)\,|y - y'|^2 > 0, \qquad
\langle \overline\Theta(y) - \overline\Theta(y'), y - y' \rangle \ge (1 - \delta k)\,|y - y'|^2 > 0.
\]
Moreover $\underline\Theta(L^n_j) = L^n_j$ and $\overline\Theta(U^n_j) = U^n_j$, so
\[
\underline\Theta(y) \ge L^n_j \iff y \ge L^n_j \qquad \text{and} \qquad \overline\Theta(y) \le U^n_j \iff y \le U^n_j.
\]
Finally, we get
\[
y^n_j =
\begin{cases}
\underline\Theta^{-1}\big(\tfrac12(y^+_{j+1} + y^-_{j+1}) + f(t_j, L^n_j, z^n_j)\,\delta + d^n_j\big), & \text{if } a^n_j = 0,\\[4pt]
\overline\Theta^{-1}\big(\tfrac12(y^+_{j+1} + y^-_{j+1}) + f(t_j, U^n_j, z^n_j)\,\delta - a^n_j\big), & \text{if } a^n_j > 0,
\end{cases} \tag{7.24}
\]
where
\[
d^n_j = \big(E[y^n_{j+1}\,|\,\mathcal F^n_j] + f(t_j, L^n_j, z^n_j)\,\delta - L^n_j\big)^-, \qquad
a^n_j = \big(E[y^n_{j+1}\,|\,\mathcal F^n_j] + f(t_j, U^n_j, z^n_j)\,\delta - U^n_j\big)^+.
\]
This is called the reflected implicit scheme. Similarly, we have the reflected explicit scheme:
\[
y^n_j = E[y^n_{j+1}\,|\,\mathcal F^n_j] + f\big(t_j, E[y^n_{j+1}\,|\,\mathcal F^n_j], z^n_j\big)\,\delta + d^n_j - a^n_j,
\]
\[
d^n_j = \big(E[y^n_{j+1}\,|\,\mathcal F^n_j] + f\big(t_j, E[y^n_{j+1}\,|\,\mathcal F^n_j], z^n_j\big)\,\delta - L^n_j\big)^-, \qquad
a^n_j = \big(E[y^n_{j+1}\,|\,\mathcal F^n_j] + f\big(t_j, E[y^n_{j+1}\,|\,\mathcal F^n_j], z^n_j\big)\,\delta - U^n_j\big)^+.
\]
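One step of the reflected explicit scheme can be sketched as follows; since $L^n_j < U^n_j$, the increments $d^n_j$ and $a^n_j$ produced below automatically satisfy the complementarity conditions (the names are illustrative):

```python
import math

def two_barrier_explicit_step(y_up, y_dn, t, L, U, f, delta):
    """One backward step of the reflected explicit scheme with two barriers
    (a sketch). Returns (y_j, z_j, d_j, a_j) with
    (y - L) d = (y - U) a = 0 and d * a = 0 by construction."""
    ey = 0.5 * (y_up + y_dn)
    z = (y_up - y_dn) / (2.0 * math.sqrt(delta))
    cand = ey + f(t, ey, z) * delta
    d = max(L - cand, 0.0)   # (cand - L)^-
    a = max(cand - U, 0.0)   # (cand - U)^+
    y = cand + d - a         # equals min(max(cand, L), U) because L < U
    return y, z, d, a
```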
Penalization scheme. The numerical calculation and simulation of reflected BSDEs with two barriers is based on our two-sided penalization method. By Theorem 7.3.2, the solution of (7.20) can be approximated by the penalization equations (7.21) for some large $p$. On the small interval $[t_j, t_{j+1}]$ we then get the discrete penalized BSDE
\[
y^{p,n}_j = y^{p,n}_{j+1} + f(t_j, y^{p,n}_j, z^{p,n}_j)\,\delta + p\,(y^{p,n}_j - L^n_j)^-\,\delta - p\,(y^{p,n}_j - U^n_j)^+\,\delta - z^{p,n}_j \sqrt{\delta}\,\varepsilon_{j+1}. \tag{7.25}
\]
The discrete terminal condition is $y^n_n := \xi^n = \Phi((B^n_j)_{0\le j\le n}) = \Phi^n((\varepsilon_j)_{0\le j\le n})$. Assume that $(y^{p,n}_{j+1}, z^{p,n}_{j+1})$ has already been obtained; we need to solve $(y^{p,n}_j, z^{p,n}_j)$ from (7.25). Since $y^{p,n}_{j+1}$ has the form $y^{p,n}_{j+1} = \Phi^p_{j+1}(\varepsilon_1, \dots, \varepsilon_{j+1})$, we set
\[
y^{p,+}_{j+1} = \Phi^p_{j+1}(\varepsilon_1, \dots, 1), \qquad y^{p,-}_{j+1} = \Phi^p_{j+1}(\varepsilon_1, \dots, -1).
\]
It is easy to get $z^{p,n}_j = E[y^{p,n}_{j+1}\varepsilon_{j+1}\,|\,\mathcal F^n_j] = \frac{1}{2\sqrt\delta}(y^{p,+}_{j+1} - y^{p,-}_{j+1})$. For $y^{p,n}_j$, we get the equation
\[
y^{p,n}_j = E[y^{p,n}_{j+1}\,|\,\mathcal F^n_j] + f(t_j, y^{p,n}_j, z^{p,n}_j)\,\delta + p\,(y^{p,n}_j - L^n_j)^-\,\delta - p\,(y^{p,n}_j - U^n_j)^+\,\delta.
\]
Solving this equation, i.e.
\[
y^{p,n}_j = (\Theta^p)^{-1}\big(E[y^{p,n}_{j+1}\,|\,\mathcal F^n_j]\big) = (\Theta^p)^{-1}\Big(\tfrac12(y^{p,+}_{j+1} + y^{p,-}_{j+1})\Big),
\]
gives $y^{p,n}_j$, where $\Theta^p$ is the mapping $\Theta^p(y) = y - \big(f(t_j, y, z^{p,n}_j) + p\,(y - L^n_j)^- - p\,(y - U^n_j)^+\big)\delta$. The other way is partly explicit: we replace $y^{p,n}_j$ by $E[y^{p,n}_{j+1}\,|\,\mathcal F^n_j]$ in $f$, then solve a quasi-linear equation. Notice that $L^n_j < U^n_j$, so for $y \in \mathbb R$, $(y - L^n_j)^-$ and $(y - U^n_j)^+$ cannot both be nonzero at the same time. We then obtain
\[
y^{p,n}_j = E[y^{p,n}_{j+1}\,|\,\mathcal F^n_j] + f\big(t_j, E[y^{p,n}_{j+1}\,|\,\mathcal F^n_j], z^{p,n}_j\big)\,\delta
+ \frac{p\delta}{1+p\delta}\Big(E[y^{p,n}_{j+1}\,|\,\mathcal F^n_j] + f\big(t_j, E[y^{p,n}_{j+1}\,|\,\mathcal F^n_j], z^{p,n}_j\big)\,\delta - L^n_j\Big)^-
- \frac{p\delta}{1+p\delta}\Big(E[y^{p,n}_{j+1}\,|\,\mathcal F^n_j] + f\big(t_j, E[y^{p,n}_{j+1}\,|\,\mathcal F^n_j], z^{p,n}_j\big)\,\delta - U^n_j\Big)^+.
\]
With $E[y^{p,n}_{j+1}\,|\,\mathcal F^n_j] = \tfrac12(y^{p,+}_{j+1} + y^{p,-}_{j+1})$, the result follows easily.
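The closed-form update above translates directly into code (a sketch; the names are illustrative):

```python
import math

def two_barrier_penalized_step(y_up, y_dn, t, L, U, f, p, delta):
    """One backward step of the partly explicit penalization scheme for the
    two-barrier reflected BSDE (a sketch). Since L < U, at most one of the
    two penalty terms is nonzero at any step."""
    ey = 0.5 * (y_up + y_dn)
    z = (y_up - y_dn) / (2.0 * math.sqrt(delta))
    base = ey + f(t, ey, z) * delta
    w = p * delta / (1.0 + p * delta)
    # (base - L)^- pushes y up towards L, (base - U)^+ pushes y down towards U
    y = base + w * max(L - base, 0.0) - w * max(base - U, 0.0)
    return y, z
```

For $p\delta$ large, $w \approx 1$ and the step approaches a projection of the unconstrained value onto $[L^n_j, U^n_j]$, i.e. the reflected explicit scheme.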
Theorem 7.3.3 (Convergence). Define $y^{p,n}_t = y^{p,n}_{[t/\delta]}$, $z^{p,n}_t = z^{p,n}_{[t/\delta]}$, $K^{p,n,+}_t = \sum_{m=0}^{[t/\delta]} d^{p,n}_m$ and $K^{p,n,-}_t = \sum_{m=0}^{[t/\delta]} a^{p,n}_m$, where $y^{p,n}_j$, $0 \le j \le n$, may come from either the implicit or the explicit scheme. Then $(y^{p,n}_t, z^{p,n}_t)$ converges to $(y_t, z_t)$ in the following sense:
\[
E\Big[\sup_{0\le t\le 1} |y^n_t - y_t|^2 + \int_0^1 |z^n_s - z_s|^2\,ds\Big] \to 0,
\]
as $p, n \to \infty$. Moreover $K^{p,n,+}_t \to K^+_t$ and $K^{p,n,-}_t \to K^-_t$ in $L^2(\mathcal F_t)$.
Sketch of the proof. Since $B^n$ converges uniformly to $B$ and $\xi^n$ converges to $\xi$ in $L^2(\mathcal F_T)$, the result follows from the convergence of numerical solutions for BSDEs and of the penalization method for reflected BSDEs, Theorems 7.3.2 and 7.1.2. ¤
Simulation results. As in the case of reflected BSDEs with one barrier, to keep the amount of computation reasonable we only treat a very simple situation: $\xi = \phi(B_1)$, $L_t = \psi_1(t, B_t)$, $U_t = \psi_2(t, B_t)$, where $\phi$, $\psi_1$ and $\psi_2$ are regular real functions defined on $\mathbb R$ and $[0,1]\times\mathbb R$ respectively. The calculation begins from the terminal time $1$ with $y^n_n = \xi^n$, repeating the reflected scheme or the numerical penalization scheme to solve backwards for $(y^n_j, z^n_j, d^n_j, a^n_j)$, $j = n-1, \dots, 1, 0$.

We set $f(y,z) = -5\,|y+z| - 1$, $\xi = \Phi(B_1) = |B_1|$, $L_t = \Psi_1(t, B_t) = -3(B_t - 2)^2 + 3$, $U_t = \Psi_2(t, B_t) = (B_t + 1)^2 + 3(t-1)$, and $n = 400$. After inputting the parameters, we run our programs, here with the reflected scheme. Figure 3.6 below shows the surface of the solution $y^n$ and one trajectory of $(y^n, K^{n,+}, K^{n,-})$, where $K^{n,+}_j = \sum_{i=1}^{j} d^n_i$ and $K^{n,-}_j = \sum_{i=1}^{j} a^n_i$.
Figure 3.6 : The solution surface of reflected BSDE with two barriers
In Figure 3.6, three windows show the surface of $y^n$ and the trajectories of $K^{n,+}$ and $K^{n,-}$, respectively. In the upper window, the three surfaces correspond, from top to bottom, to the upper barrier $U$, the solution $y^n$ and the lower barrier $L$. The line on the solution surface is one trajectory $y^n_j$, while the line at the bottom is the corresponding trajectory $B^n_j$ of the Brownian motion. From this figure we see that the increasing process $K^{n,+}_j$ does not act on this trajectory, since $y^n_j$ shows no tendency to go down and cross the lower barrier $L$, while the increasing process $K^{n,-}_j$ acts on $y^n_j$ whenever it tends to go up and cross the upper barrier $U$.

We then present another figure (Figure 3.7), which shows only the trajectories of $(y^n, z^n, K^{n,+}, K^{n,-})$. There are six windows in this figure. The first column shows the trajectory of $y^n$ in 3 and 2 dimensions, and the second column the corresponding trajectory of $z^n$, also in 3 and 2 dimensions. As in Figure 3.2, in each 3-dimensional window the upper trajectory is $y^n$ (resp. $z^n$), while the lower curve is the Brownian motion $B^n$. The last two windows show the trajectories of $K^{n,+}$ and $K^{n,-}$. From this figure we see that $K^{n,+}$ and $K^{n,-}$ influence the solution $y^n$ in turn.
Figure 3.7 : The trajectories of solutions of (7.20) (panels: $y(t)$ and $z(t)$ in 3-D and 2-D, $K^+_t$ and $K^-_t$).
We now present some numerical results for the reflected scheme and the penalization scheme, with the same parameters as above:

n = 1000, reflected scheme: y^n_0 = -2; penalization scheme:

    p           10^3       5 x 10^3   10^4       5 x 10^4
    y^{p,n}_0   -1.9820    -1.9974    -1.9997    -1.9999
7.4 BSDEs with constraint on z

In this section we consider BSDEs with a constraint on $z$, of the form
\[
Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,ds + A_T - A_t - \int_t^T Z_s\,dB_s, \tag{7.26}
\]
with
\[
\Psi(Z_t) = 0, \quad \text{a.e., a.s.},
\]
where $\Psi$ is a nonnegative, measurable Lipschitz function with $\Psi(\cdot, z) \in \mathbf H^2(0,T)$. The problem is to find the smallest solution $Y \in \mathbf D^2(0,T)$, with $(Z_\cdot, A_\cdot) \in \mathbf H^2(0,T) \times \mathbf A^2(0,T)$, satisfying (7.26).
Remark 7.4.1. For convenience of presentation we use the function $\Psi$ to express the constraint on $z$; this is equivalent to the usual form $z \in \Gamma$, where $\Gamma$ is a closed set in $\mathbb R$. For example, if $\Gamma = [a, \infty)$ then $\Psi(z) = (z-a)^-$; if $\Gamma = [a,b]$ then correspondingly $\Psi(z) = (z-a)^- + (z-b)^+$.
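The two examples of $\Psi$ in the remark are one-liners (a sketch; the function names are illustrative):

```python
def psi_halfline(z, a):
    """Penalty for the constraint z in [a, +infinity): Psi(z) = (z - a)^-."""
    return max(a - z, 0.0)

def psi_interval(z, a, b):
    """Penalty for the constraint z in [a, b]: Psi(z) = (z - a)^- + (z - b)^+."""
    return max(a - z, 0.0) + max(z - b, 0.0)
```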
Assume that $\xi$ and $f$ satisfy assumptions 7.1.1 and 7.1.2; then from [65] we have the existence of the smallest solution of (7.26):

Theorem 7.4.1. If there exist processes $Y_\cdot \in \mathbf D^2(0,T)$ and $(Z_\cdot, A_\cdot) \in \mathbf H^2(0,T) \times \mathbf A^2(0,T)$, with terminal condition $Y_T = \xi$, satisfying the backward equation (7.26) and the constraint $\Psi(Z_t) = 0$ a.e., a.s., then the BSDE$(\xi, f)$ with constraint $\Psi$ admits a smallest solution $(Y, Z, A)$. Moreover, $Y$ is the limit in $\mathbf H^2(0,T)$, as $p \to \infty$, of the sequence $\{Y^p\}_{p\in\mathbb N}$, where $Y^p$ is the solution of the BSDE penalized by $\Psi$,
\[
Y^p_t = \xi + \int_t^T f(s, Y^p_s, Z^p_s)\,ds + p\int_t^T \Psi(Z^p_s)\,ds - \int_t^T Z^p_s\,dB_s. \tag{7.27}
\]
As $p \to \infty$, $Z^p_s$ converges to $Z_s$ weakly in $\mathbf H^2(0,T)$ (resp. strongly in $\mathbf H^\beta(0,T)$ for $\beta < 2$), and $A^p_t \to A_t$ weakly in $L^2(\mathcal F_t)$, where $A^p_t = p\int_0^t \Psi(Z^p_s)\,ds$.
Thanks to this theorem, we can simulate BSDEs with constraint $\Psi$ on $z$ via the penalized BSDEs. Here we consider $\xi = \Phi((B_s)_{0\le s\le 1})$. We divide the time interval $[0,1]$ into $n$ parts, for $n \in \mathbb N$ large enough, and set $\delta := t_j - t_{j-1} = \frac1n$ for $1 \le j \le n$. As in the previous section, we define the scaled random walk $B^n_t$ as in (7.4): $B^n_t = \sqrt\delta\,\sum_{m=0}^{[t/\delta]} \varepsilon_m$, $0 \le t \le 1$, where $(\varepsilon_m)_{1\le m\le n}$ is a Bernoulli sequence with $\varepsilon_0 = 0$. Then $B^n_t$ is an $\mathcal F_t$-measurable process taking discrete values; we write $B^n_j = B^n_{t_j}$, so that $B^n_{j+1} - B^n_j = \sqrt\delta\,\varepsilon_{j+1}$, and denote the discrete filtration, for $1 \le j \le n$,
\[
\mathcal F^n_j := \sigma\{\varepsilon_m;\ 0 \le m \le j\} = \sigma\{B^n_t;\ 0 \le t \le t_j\}.
\]
For some $p \in \mathbb N$ large enough, we consider (7.27) on the small interval $[t_j, t_{j+1}]$, where it can be approximated by
\[
y^{p,n}_j = y^{p,n}_{j+1} + f(t_j, y^{p,n}_j, z^{p,n}_j)\,\delta + p\,\Psi(z^{p,n}_j)\,\delta - z^{p,n}_j \sqrt\delta\,\varepsilon_{j+1}, \tag{7.28}
\]
with the discrete terminal condition $y^n_n := \xi^n = \Phi((B^n_j)_{0\le j\le n}) = \Phi^n((\varepsilon_j)_{0\le j\le n})$. We now need a way to solve $(y^{p,n}_j, z^{p,n}_j)$ from $(y^{p,n}_{j+1}, z^{p,n}_{j+1})$, which is assumed to be known. Since $y^{p,n}_{j+1}$ has the form $y^{p,n}_{j+1} = \Phi^p_{j+1}(\varepsilon_1, \dots, \varepsilon_{j+1})$, we set
\[
y^{p,+}_{j+1} = \Phi^p_{j+1}(\varepsilon_1, \dots, 1), \qquad y^{p,-}_{j+1} = \Phi^p_{j+1}(\varepsilon_1, \dots, -1).
\]
From (7.28) it is easy to get $z^{p,n}_j = E[y^{p,n}_{j+1}\varepsilon_{j+1}\,|\,\mathcal F^n_j] = \frac{1}{2\sqrt\delta}(y^{p,+}_{j+1} - y^{p,-}_{j+1})$. We then get an equation for $y^{p,n}_j$:
\[
y^{p,n}_j = E[y^{p,n}_{j+1}\,|\,\mathcal F^n_j] + f(t_j, y^{p,n}_j, z^{p,n}_j)\,\delta + p\,\Psi(z^{p,n}_j)\,\delta.
\]
Applying the numerical results of Section 7.1 for classical BSDEs, the implicit scheme gives
\[
y^{p,n}_j = \Theta^{-1}\big(E[y^{p,n}_{j+1}\,|\,\mathcal F^n_j] + p\,\Psi(z^{p,n}_j)\,\delta\big),
\]
where $\Theta(y) = y - f(t_j, y, z^{p,n}_j)\,\delta$, while the explicit scheme gives
\[
y^{p,n}_j = E[y^{p,n}_{j+1}\,|\,\mathcal F^n_j] + f\big(t_j, E[y^{p,n}_{j+1}\,|\,\mathcal F^n_j], z^{p,n}_j\big)\,\delta + p\,\Psi(z^{p,n}_j)\,\delta.
\]
The interesting point here is that the penalization of $z^{p,n}$ does not act on $z^{p,n}$ directly: it acts on $y^{p,n}$ and influences $z^{p,n}$ only at the following (earlier) step.
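One explicit backward step with the constraint penalty can be sketched as follows (illustrative names). Note that $z$ is fixed by the two branch values, so the penalty modifies $y$ only, and acts on $z$ through the tree values at the next (earlier) step:

```python
import math

def z_constrained_explicit_step(y_up, y_dn, t, f, psi, p, delta):
    """One backward step of the explicit penalization scheme for a BSDE with
    the constraint Psi(z) = 0 (a sketch)."""
    ey = 0.5 * (y_up + y_dn)
    z = (y_up - y_dn) / (2.0 * math.sqrt(delta))  # z is fixed by the branches
    y = ey + f(t, ey, z) * delta + p * psi(z) * delta
    return y, z
```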
Theorem 7.4.2 (Convergence). Define $y^{p,n}_t = y^{p,n}_{[t/\delta]}$ and $z^{p,n}_t = z^{p,n}_{[t/\delta]}$, where $y^{p,n}_j$, $0 \le j \le n$, may come from either the implicit or the explicit scheme. Then, as $n, p \to \infty$,
\[
E\Big[\sup_{0\le t\le 1} |y^n_t - y_t|^2 + \int_0^1 |z^n_s - z_s|^2\,ds\Big] \to 0.
\]
Sketch of the proof. Since $\xi^n$ converges to $\xi$ in $L^1$ as $n \to \infty$ and the scaled random walks $B^n \to B$ uniformly on $[0,1]$ in probability, the result follows from the convergence results for penalized BSDEs, Theorem 7.1.2 and Theorem 7.4.1. ¤
For the simulations, we consider the case where $z$ is bounded between two values, i.e. $a \le z_t \le b$ for $a, b \in \mathbb R$, so that $\Psi(z) = (z-a)^- + (z-b)^+$. We set $f(y,z) = -2\,|y+z| - 1$, $\xi = |B_1|$, with $a = -0.5$, $b = 0.8$ and $p = 20$. After the calculation with discretization $n = 400$, we obtain the surfaces of the solutions $y^{p,n}$ and $z^{p,n}$ shown in Figure 4.1.

There are three windows in Figure 4.1. The first one on top is the solution surface of $y^{p,n}$, with one trajectory on it. The second one on top is the solution surface of $z^{p,n}$, also with the trajectory on it. The big window below shows the influence of the penalization; in fact it is $A^{p,n}_j = p \sum_{i=0}^{j} \Psi(z^{p,n}_i)\,\delta$. Comparing the lower window with the window of $z^{p,n}$, we see that the penalization takes effect when $z^{p,n}$ leaves the interval $[-0.5, 0.8]$.

Figure 4.2 shows one trajectory of the solution $(y^{p,n}, z^{p,n})$. The first window is for $y^{p,n}$, the second for $z^{p,n}$, and the third for $A^{p,n}_j$. Comparing the last two windows, we see that at $t \approx 0.6$ this trajectory of $z^{p,n}$ drops below $-0.5$; at the same time $A^{p,n}_j$ begins to act until $z^{p,n}$ is again bigger than $-0.5$. After that, $z^{p,n}$ stays quietly inside the interval $[-0.5, 0.8]$, so $A^{p,n}_j$ remains constant. At $t \approx 0.85$, $z^{p,n}$ goes beyond $0.8$, and $A^{p,n}_j$ acts again to force $z^{p,n}$ back into the domain.

Unlike the penalization scheme for reflected BSDEs in Section 3, here the parameters $p$ and $\delta$ must be chosen carefully. Notice that in (7.28) there is no control of the penalization term $p\,\Psi(z^{p,n}_j)\,\delta$: if $p\sqrt\delta > 1$, the numerical solution will explode.
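This suggests choosing $p$ no larger than the stability constraint allows, e.g. $p \le 1/\sqrt\delta$. A small helper based on this heuristic (a sketch, not a result proved in this chapter):

```python
import math

def stable_penalty(n, safety=1.0):
    """Largest penalization parameter p with p * sqrt(delta) <= safety,
    where delta = 1/n, to avoid blow-up of the scheme (heuristic)."""
    delta = 1.0 / n
    return safety / math.sqrt(delta)
```

With $n = 400$ this gives $p = 20$, which is the value used in the simulation above.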
Figure 4.1 : The solution surface of BSDE (7.27) (surfaces of $y$ and $z$, and the penalization term $\int_0^t p\,\Psi(z_s)\,ds$).
This point is shown more clearly in the following figure:
Figure 4.2 : The trajectory of solutions of (7.27) (panels: $y$, $z$ and $A^p$).
7.5 Appendix : Convergence of the Algorithm for Reflected BSDEs with one barrier

We now mainly consider solutions of the following reflected BSDE on the interval $[0,1]$.

Definition 7.5.1. A solution of the reflected backward SDE driven by the pair $(f, \xi)$ is a triple $(y, z, K)$ with $y \in \mathbf S^2(0,1)$, $z \in \mathbf H^2_d(0,1)$ and $K$ an adapted process, satisfying
\[
y_t = \xi + \int_t^1 f(s, y_s, z_s)\,ds + K_1 - K_t - \int_t^1 z_s\,dB_s, \tag{7.29}
\]
\[
y_t \ge 0 \ \text{on } [0,1]. \tag{7.30}
\]
Here $K$ is a continuous increasing process with $K_0 = 0$, $K_1 \in L^2(\mathcal F_1)$ and
\[
\int_0^1 y_t\,dK_t = 0. \tag{7.31}
\]
Remark 7.5.1. We consider the formulation where $y$ is forced to be non-negative. A more general obstacle case can be reduced to this one; the formulation is the following: the triple $(y, z, K)$ has to satisfy (7.29) and
\[
y_t \ge L_t, \qquad \int_0^1 (y_t - L_t)\,dK_t = 0,
\]
with
\[
L_t = L_0 + \int_0^t u_s\,ds + \int_0^t v_s\,dB_s, \qquad \int_0^\cdot u_s\,ds \in \mathbf S^2(0,1).
\]
Setting
\[
\bar y_t := y_t - L_t, \qquad \bar z_t := z_t - v_t, \qquad \bar\xi := \xi - L_1,
\]
we have
\[
\bar y_t = \bar\xi + \int_t^1 \bar f(s, \bar y_s, \bar z_s)\,ds + \int_t^1 dK_s - \int_t^1 \bar z_s\,dB_s,
\qquad \bar y_t \ge 0, \qquad \int_0^1 \bar y_t\,dK_t = 0,
\]
where $\bar f(s, y, z) := f(s, y + L_s, z + v_s) + u_s$.
With Definition 7.5.1, the following results, Theorem 7.5.1 and Theorem 7.5.2, are proved in [28].

Theorem 7.5.1. Under assumptions 7.1.1 and 7.1.2, there exists a unique triple $(y, z, K)$ solving the reflected BSDE (7.29), (7.30), (7.31). Moreover, $y_t$ is the smallest solution of (7.29) and (7.30): if $(y', z', K')$ satisfies (7.29), (7.30) and (7.31), then $y_t \le y'_t$ a.s. Furthermore, there exists an adapted process $(\alpha_t)_{t\in[0,1]}$ with $0 \le \alpha_t \le 1$ for all $t$ such that $dK_t = \alpha_t\,[f(t, y_t, z_t)]^-\,dt$.
Approximation by Penalization. Consider the sequence $(y^{(p)}, z^{(p)})$ of solutions of the BSDEs
\[
y^{(p)}_t = \xi + \int_t^1 \big[f(s, y^{(p)}_s, z^{(p)}_s) + p\,(y^{(p)}_s)^-\big]\,ds - \int_t^1 z^{(p)}_s\,dB_s. \tag{7.32}
\]
By the comparison theorem ([62]), for $p = 1, 2, \dots$,
\[
y^{(p)}_t \le y^{(p+1)}_t \le y_t,
\]
and the following holds.
Theorem 7.5.2. As $p \to \infty$, we have
\[
E\big[\sup_{0\le t\le 1} |y^{(p)}_t - y_t|^2\big] \to 0, \qquad
E\int_0^1 |z^{(p)}_t - z_t|^2\,dt \to 0,
\]
\[
E\Big[\sup_{0\le t\le 1} \Big|K_t - p\int_0^t (y^{(p)}_s)^-\,ds\Big|^2\Big] \to 0.
\]
Moreover, there exists a positive constant $c$ such that
\[
E\big[\sup_{0\le t\le 1} |y^{(p)}_t - y_t|^2\big] + E\int_0^1 |z^{(p)}_t - z_t|^2\,dt \le \frac{c}{\sqrt p}.
\]
7.5.1 Estimates for the Discrete Reflected BSDE with one barrier

As for classical BSDEs (driven by Brownian motion), we are given a terminal condition $\xi^n$ and a "coefficient" $g^n_j$. Consider the following assumptions:

Assumption 7.5.1. $\xi^n$ is $\mathcal F^n_n$-measurable: there exists $\Phi : \{1, -1\}^n \to \mathbb R$ such that $\xi^n = \Phi(\varepsilon^n_1, \dots, \varepsilon^n_n)$.

Assumption 7.5.2. For every $j = 0, \dots, n-1$, $g^n_j : \Omega \times \mathbb R \times \mathbb R \to \mathbb R$ is $\mathcal F^n_j$-measurable, and for every $(\omega, z)$, $g^n_j$ is $k$-Lipschitz in $y$, with $n > k$.

Assumption 7.5.3. For every $(\omega, y)$, $g^n_j$ is $k$-Lipschitz in $z$, with $k^2 < n$.
Let $q \ge 0$ be a given constant. We consider, for $j = n-1, \dots, 1, 0$, the backward equation
\[
y^{n,q}_j = y^{n,q}_{j+1} + g^n_j(y^{n,q}_j, z^{n,q}_j)\,\frac1n + \frac{q}{n}\,(y^{n,q}_j)^- - z^{n,q}_j\,\frac{1}{\sqrt n}\,\varepsilon^n_{j+1} \tag{7.33}
\]
with the terminal condition $y^{n,q}_n = \xi^n$.

Theorem 7.5.3 (Existence, Uniqueness and Comparison). Let $(g^n_j, \xi^n)$ satisfy assumptions 7.5.1 and 7.5.2. Then there exists a unique $\mathcal F^n_j$-adapted pair $(y^{n,q}_\cdot, z^{n,q}_\cdot)$ solving (7.33).

Moreover, let $(y'^{n,q'}_\cdot, z'^{n,q'}_\cdot)$ be the solution of (7.33) corresponding to $(\xi'^n, g'^n, q')$ with $q' \ge q$, and assume that $g^n_j$ and $g'^n_j$ satisfy assumption 7.5.3 and
\[
g'^n_j(\omega, y, z) \ge g^n_j(\omega, y, z), \qquad \xi'^n \ge \xi^n.
\]
Then the corresponding solution $(y'^{n,q'}_\cdot, z'^{n,q'}_\cdot)$ satisfies, for $j = 0, 1, \dots, n-1$,
\[
y'^{n,q'}_j \ge y^{n,q}_j.
\]
Proof. Assume that $y^{n,q}_{j+1}$ has been found; we solve for $(y^{n,q}_j, z^{n,q}_j)$ in (7.33). Since $y^{n,q}_{j+1}$ has the form $y^{n,q}_{j+1} = \Phi_{j+1}(\varepsilon^n_1, \dots, \varepsilon^n_{j+1})$, we write
\[
y^{(+)}_{j+1} := \Phi_{j+1}(\varepsilon^n_1, \dots, \varepsilon^n_j, 1), \qquad
y^{(-)}_{j+1} := \Phi_{j+1}(\varepsilon^n_1, \dots, \varepsilon^n_j, -1). \tag{7.34}
\]
Both $y^{(+)}_{j+1}$ and $y^{(-)}_{j+1}$ are $\mathcal F^n_j$-measurable. With $\varepsilon^n_{j+1} = \pm 1$, (7.33) gives
\[
y^{n,q}_j = y^{(+)}_{j+1} + g^n_j(y^{n,q}_j, z^{n,q}_j)\,\frac1n + \frac{q}{n}(y^{n,q}_j)^- - z^{n,q}_j\,\frac{1}{\sqrt n},
\]
\[
y^{n,q}_j = y^{(-)}_{j+1} + g^n_j(y^{n,q}_j, z^{n,q}_j)\,\frac1n + \frac{q}{n}(y^{n,q}_j)^- + z^{n,q}_j\,\frac{1}{\sqrt n}.
\]
Hence $z^{n,q}_j$ is uniquely determined by
\[
z^{n,q}_j = \frac{y^{(+)}_{j+1} - y^{(-)}_{j+1}}{2}\,\sqrt n.
\]
Then $y^{n,q}_j$ solves the equation
\[
y^{n,q}_j - q\,((y^{n,q}_j)^-)\,\frac1n - g^n_j(y^{n,q}_j, z^{n,q}_j)\,\frac1n = \frac{y^{(+)}_{j+1} + y^{(-)}_{j+1}}{2}. \tag{7.35}
\]
When $n > k$ and $q \ge 0$, the mapping
\[
y \mapsto G(y) := y - q\,(y^-)\,\frac1n - g^n_j(y, z^{n,q}_j)\,\frac1n
\]
is strictly increasing with $G(y) \to \pm\infty$ as $y \to \pm\infty$. Thus (7.35) has a unique solution
\[
y^{n,q}_j = G^{-1}\Big(\frac{y^{(+)}_{j+1} + y^{(-)}_{j+1}}{2}\Big).
\]
The comparison assertion results from the classical linearization technique and the following lemma. ¤
Lemma 7.5.1. If $a > -n$ and $|b| < \sqrt n$, then for each $\phi \ge 0$ and $\psi \ge 0$, the solution $(y, z)$ of the linear algebraic system
\[
y + (ay + bz)\,\frac1n = \phi - z\,\frac{1}{\sqrt n}, \qquad
y + (ay + bz)\,\frac1n = \psi + z\,\frac{1}{\sqrt n}
\]
satisfies $y \ge 0$.
It is interesting to notice that the following estimates are uniform in $q$. For the estimation we need the discrete Gronwall inequality of Section 7.1.2.
Lemma 7.5.2. Under assumptions 7.5.1, 7.5.2, 7.5.3 and the following

Assumption 7.5.4. $\xi^n \in L^2(\Omega, \mathcal F^n_n, P)$ and $E[\sum_{j=0}^{n-1} |g^n_j(0,0)|^2] < \infty$,

and for $n > 1 + 2k + 2k^2$, we have
\[
E\big[\sup_i |y^{n,q}_i|^2\big] + E\Big[\sum_{j=0}^{n-1} |z^{n,q}_j|^2\Big]\,\frac1n + 2\,\frac{q}{n}\,E\Big[\sum_{i=0}^{n-1} ((y^{n,q}_i)^-)^2\Big] \le a\,c_{\xi^n, g^n},
\]
where
\[
c_{\xi^n, g^n} = c\,E[|\xi^n|^2] + \frac1n\,E\Big[\sum_{j=0}^{n-1} |g^n_j(0,0)|^2\Big]
\]
and the constant $a$ depends only on $k$, the Lipschitz constant of $g^n$ in $y$ and $z$.
Proof. Applying Itô's formula to $(y^{n,q}_i)^2$,
\[
E[(y^{n,q}_j)^2] + E\Big[\sum_{i=j}^{n-1} (z^{n,q}_i)^2\Big]\frac1n
\le E[(\xi^n)^2] + 2\sum_{i=j}^{n-1} E\big[y^{n,q}_i\,g^n_i(y^{n,q}_i, z^{n,q}_i)\big]\frac1n
+ 2\sum_{i=j}^{n-1} E\Big[y^{n,q}_i\,\frac{q}{n}\,(y^{n,q}_i)^-\Big].
\]
Since $y^{n,q}_i\,\frac{q}{n}(y^{n,q}_i)^- = -\frac{q}{n}((y^{n,q}_i)^-)^2$, we get
\[
E[(y^{n,q}_j)^2] + E\Big[\sum_{i=j}^{n-1} (z^{n,q}_i)^2\Big]\frac1n + 2\,\frac{q}{n}\sum_{i=j}^{n-1} E[((y^{n,q}_i)^-)^2]
\le E[(\xi^n)^2] + \sum_{i=j}^{n-1} E[(g^n_i(0,0))^2]\frac1n + (1+2k)\sum_{i=j}^{n-1} E[(y^{n,q}_i)^2]\frac1n + 2k\sum_{i=j}^{n-1} E[|y^{n,q}_i z^{n,q}_i|]\frac1n.
\]
Finally,
\[
E[(y^{n,q}_j)^2] + \frac12\,E\Big[\sum_{i=j}^{n-1} (z^{n,q}_i)^2\Big]\frac1n + 2\,\frac{q}{n}\sum_{i=j}^{n-1} E[((y^{n,q}_i)^-)^2]
\le E[(\xi^n)^2] + \sum_{i=j}^{n-1} E[(g^n_i(0,0))^2]\frac1n + (1+2k+2k^2)\sum_{i=j}^{n-1} E[(y^{n,q}_i)^2]\frac1n.
\]
By Lemma 7.1.2 we get
\[
E[(y^{n,q}_j)^2] + \frac12\,E\Big[\sum_{i=j}^{n-1} (z^{n,q}_i)^2\Big]\frac1n + 2\,\frac{q}{n}\sum_{i=j}^{n-1} E[((y^{n,q}_i)^-)^2]
\le \Big(E[(\xi^n)^2] + \sum_{i=j}^{n-1} E[(g^n_i(0,0))^2]\frac1n\Big)\,E_n(1 + 2k + 2k^2).
\]
Noticing that $E_n$ is decreasing in $n$, we can express the right-hand side of this inequality as $a\,c_{\xi^n, g^n}$. Then, using the Burkholder-Davis-Gundy inequality, we obtain the desired estimate. ¤
Solutions of the Discrete Reflected BSDE. Analogously to the continuous situation, we have the following definition.

Definition 7.5.2. A solution of a discrete reflected backward SDE driven by $(g^n, \xi^n)$ is a triple of adapted processes $(y^n, z^n, d^n)$ satisfying the relations
\[
y^n_j = y^n_{j+1} + g^n_j(y^n_j, z^n_j)\,\frac1n + d^n_j - z^n_j\,\frac{1}{\sqrt n}\,\varepsilon^n_{j+1}, \qquad y^n_n = \xi^n, \tag{7.36}
\]
\[
y^n_j \ge 0, \qquad d^n_j \ge 0, \qquad y^n_j\,d^n_j = 0. \tag{7.37}
\]
Theorem 7.5.4 (Existence, Uniqueness and Comparison). Under assumptions 7.5.1 and 7.5.2, there exists a unique $\mathcal F^n_j$-adapted triple $(y^n_\cdot, z^n_\cdot, d^n_\cdot)$ solving (7.36) and (7.37). Moreover, $y^n_j$ is the smallest solution of (7.36) and (7.37).

Proof. Since $z^n_j$ is uniquely determined by $z^n_j = \frac{y^{(+)}_{j+1} - y^{(-)}_{j+1}}{2}\sqrt n$, where $y^{(+)}_{j+1}$ and $y^{(-)}_{j+1}$ are defined as in (7.34), equations (7.36) and (7.37) are equivalent to
\[
y^n_j = E[y^n_{j+1}\,|\,\mathcal F^n_j] + g^n_j(y^n_j, z^n_j)\,\frac1n + d^n_j, \tag{7.38}
\]
with
\[
y^n_j \ge 0, \qquad d^n_j \ge 0, \qquad y^n_j\,d^n_j = 0.
\]
But
\[
\Phi^n_j(y^n_j) = \eta^n_j + d^n_j,
\]
where we denote
\[
\Phi^n_j(y) := y - \big[g^n_j(y, z^n_j) - g^n_j(0, z^n_j)\big]\,\frac1n, \qquad
\eta^n_j := E[y^n_{j+1}\,|\,\mathcal F^n_j] + g^n_j(0, z^n_j)\,\frac1n. \tag{7.39}
\]
Here $\eta^n_j$ is known, and $\Phi^n_j : \mathbb R \to \mathbb R$ is strictly increasing with $\Phi^n_j(0) = 0$. Then
\[
(\Phi^n_j)^{-1}(y) > 0\ (= 0) \iff y > 0\ (= 0).
\]
Finally, the unique (and smallest) solution is
\[
d^n_j = (\eta^n_j)^-, \qquad y^n_j = (\Phi^n_j)^{-1}\big((\eta^n_j)^+\big). \tag{7.40}
\]
¤
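Since $\Phi^n_j$ is strictly increasing, the inversion in (7.40) can be carried out numerically, e.g. by bisection (a sketch; the search interval and tolerance are illustrative choices):

```python
def discrete_rbsde_step(eta, g, z, n, lo=-1e6, hi=1e6, tol=1e-9):
    """Solve one step of the discrete reflected BSDE (y >= 0 formulation):
    d = eta^- and y solves Phi(y) = y - (g(y, z) - g(0, z)) / n = eta^+.
    Phi is strictly increasing for n > k (Lipschitz constant of g in y),
    so it can be inverted by bisection (a sketch)."""
    d = max(-eta, 0.0)
    target = max(eta, 0.0)
    phi = lambda v: v - (g(v, z) - g(0.0, z)) / n
    a, b = lo, hi
    while b - a > tol:
        m = 0.5 * (a + b)
        if phi(m) < target:
            a = m
        else:
            b = m
    return 0.5 * (a + b), d
```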
Proposition 7.5.1. Under assumptions 7.5.1-7.5.3 and 7.5.4, we have
\[
E\big[\sup_j |y^n_j|^2\big] + \frac1n \sum_{j=0}^{n-1} E[|z^n_j|^2] \le a\,c_{\xi^n, g^n}, \tag{7.41}
\]
where $a$ is a constant depending only on $k$, the Lipschitz constant of $g^n$ in $y$ and $z$, and
\[
d^n_j \le \Big(g^n_j(0, z^n_j)\,\frac1n\Big)^-. \tag{7.42}
\]
Proof. (7.42) is a consequence of the equality
\[
d^n_j = (\eta^n_j)^- = \Big(E[y^n_{j+1}\,|\,\mathcal F^n_j] + g^n_j(0, z^n_j)\,\frac1n\Big)^-,
\]
where $y^n_{j+1} \ge 0$. Now we have
\[
y^n_j = y^n_{j+1} + g^n_j(y^n_j, z^n_j)\,\frac1n + d^n_j - z^n_j\,\frac{1}{\sqrt n}\,\varepsilon^n_{j+1}.
\]
Let $y^{n-}_j$, $y^{n+}_j$ be the respective solutions of the discrete BSDEs
\[
y^{n+}_j = y^{n+}_{j+1} + \big[g^n_j(y^{n+}_j, z^{n+}_j) + g^n_j(0, z^{n+}_j)^-\big]\,\frac1n - z^{n+}_j\,\frac{1}{\sqrt n}\,\varepsilon^n_{j+1},
\]
\[
y^{n-}_j = y^{n-}_{j+1} + g^n_j(y^{n-}_j, z^{n-}_j)\,\frac1n - z^{n-}_j\,\frac{1}{\sqrt n}\,\varepsilon^n_{j+1},
\]
with the same terminal condition $\xi^n$. Taking into account Lemma 7.5.1, the classical linearization technique yields the inequalities
\[
y^{n-}_j \le y^n_j \le y^{n+}_j.
\]
We can then apply Lemma 7.5.2, hence $E[\sup_j |y^n_j|^2] \le a\,c_{\xi^n, g^n}$.

Since $y^n_i\,d^n_i = 0$, we have
\[
E[(y^n_j)^2] + E\Big[\sum_{i=j}^{n-1} (z^n_i)^2\Big]\frac1n
\le E[(\xi^n)^2] + 2\sum_{i=j}^{n-1} E\big[y^n_i\,|g^n_i(y^n_i, z^n_i)|\big]\frac1n
\]
\[
\le E[(\xi^n)^2] + 2\sum_{i=j}^{n-1} E\big[y^n_i\,\big(k(|y^n_i| + |z^n_i|) + |g^n_i(0,0)|\big)\big]\frac1n
\le E[(\xi^n)^2] + \sum_{i=j}^{n-1} E\big[(1+2k+2k^2)|y^n_i|^2 + |g^n_i(0,0)|^2 + \tfrac12 |z^n_i|^2\big]\frac1n.
\]
Thus
\[
E\Big[\sum_{i=0}^{n-1} (z^n_i)^2\Big]\frac1n \le a\,c_{\xi^n, g^n},
\]
and (7.41) is satisfied. ¤
Penalization. Let us come back to the discrete BSDE, where $p$ is a positive constant:
\[
y^{n,p}_j = y^{n,p}_{j+1} + \frac1n\big[g^n_j(y^{n,p}_j, z^{n,p}_j) + p\,(y^{n,p}_j)^-\big] - z^{n,p}_j\,\frac{1}{\sqrt n}\,\varepsilon^n_{j+1}
\]
with terminal condition $y^{n,p}_n = \xi^n$.

Lemma 7.5.3. Under assumptions 7.5.1-7.5.3 and 7.5.4, for every $j$ and $p$ we have the inequalities
\[
y^{n,p}_j \le y^{n,p+1}_j \le y^n_j.
\]
These are direct consequences of the comparison theorem; the second inequality is clear since $(y^n_j)^- = 0$.
Lemma 7.5.4. Denote
\[
\bar y^{n,p}_j := y^n_j - y^{n,p}_j, \qquad \bar z^{n,p}_j := z^n_j - z^{n,p}_j.
\]
Under assumptions 7.5.1-7.5.3 and 7.5.4, we have, for every $j = 0, \dots, n-1$, the estimates
\[
E[(\bar y^{n,p}_j)^2] + E\Big[\sum_{i=j}^{n-1} (\bar z^{n,p}_i)^2\Big]\frac1n \le a\,c_{\xi^n, g^n}\,p^{-\frac12}
\]
and
\[
E\big[\sup_j (\bar y^{n,p}_j)^2\big] + E\Big[\sum_{i=0}^{n-1} (\bar z^{n,p}_i)^2\Big]\frac1n \le a\,c_{\xi^n, g^n}\,p^{-\frac12}.
\]
Proof. Let us consider the first estimate, we observe that
(yni − yn,p
i )(dni −
p
n· (yn,p)−) ≤ (yn,p
i )−dni ≤ (yn,p
i )− · (gni )−(0, zn
i )1n
.
We apply “Ito’s formula” to (yn,pi )2,
E[(yn,pj )2] + E[
n−1∑
i=j
(zn,pi )2]
1n
≤ 2n−1∑
i=j
E[yn,pi (gn
i (yni , zn
i )− gni (yn,p
i , zn,pi ))]
1n
+ 2n−1∑
i=j
E[(yni − yn,p
i )(dni −
p
n(yn,p
i )−) ]
≤ 2n
n−1∑
i=j
E[k|yn,pi |(|yn,p
i |+ |zn,pi |) + (yn,p
i )−(gni )−(0, zn
i )]
≤ 1n
n−1∑
i=j
E[2(k + k2)|yn,pi |2 +
12|zn,p
i |2] + 2
E[
1n
n−1∑
i=1
((yn,pi )−)2]
12
E[1n
n−1∑
i=1
((gni )−(0, zn
i ))2]
12
.
Thus, from Lemma 7.1.2 and Lemma 7.5.2
E[(yn,pj )2] + E[
n−1∑
i=0
(zn,pi )2]
1n≤ acξn,gnp−
12 .
Using Burkholder-Davis-Gundy inequality, we get easily the second estimate as well. ¤
7.5.2 Convergence of the numerical solutions of Reflected BSDEs
We consider a Brownian motion $B$ and a sequence $(\varepsilon_j)$ of i.i.d. symmetric Bernoulli random variables defined on the same probability space $(\Omega, \mathcal F, P)$. Let us also consider a pair $(f, \xi)$ satisfying the assumptions 7.1.1 and 7.1.2. Moreover, we suppose the following.

Assumption 7.5.5. For every $(y, z)$, the map $t \to f(t, y, z)$ is continuous.
For every $n$, we consider a pair $(\xi^n, g^n)$ where $\xi^n$ is an $\mathcal F^n_n$-measurable random variable with $E[(\xi^n)^2] < \infty$, and we set, for $j = 0,\dots,n-1$, $g^n_j(y, z) = f(\frac{j}{n}, y, z)$.

It is clear that $(\xi^n, g^n)$ satisfies the assumptions 7.5.1-7.5.3 and 7.5.4 for $n$ large enough, so we can define the pair $(y^{n,q}, z^{n,q})$ solution of the discrete BSDE (7.33). Let us associate the following cadlag processes: for $s < 1$,
$$B^n_s := \sum_{j=0}^{n} \frac{\varepsilon_j}{\sqrt n}\, \mathbf 1_{[0,s)}\Big(\frac{j}{n}\Big), \qquad B^n_1 = \sum_{j=0}^{n} \frac{\varepsilon_j}{\sqrt n},$$
$$f^n(s, y, z) := \sum_{j=0}^{n-1} g^n_j(y, z)\, \mathbf 1_{[\frac{j}{n}, \frac{j+1}{n})}(s),$$
$$y^{n,q}_s := \sum_{j=0}^{n-1} y^{n,q}_j\, \mathbf 1_{[\frac{j}{n}, \frac{j+1}{n})}(s), \qquad z^{n,q}_s := \sum_{j=0}^{n-1} z^{n,q}_j\, \mathbf 1_{[\frac{j}{n}, \frac{j+1}{n})}(s).$$
Equation (7.33) is written as
$$-dy^{n,q}_s = \big[f^n(s^-, y^{n,q}_{s^-}, z^{n,q}_{s^-}) + q\,(y^{n,q}_{s^-})^-\big]\, d\langle B^n\rangle_s - z^{n,q}_{s^-}\, dB^n_s,$$
with $y^{n,q}_1 = \xi^n$. Here $\langle B^n\rangle$ denotes the predictable quadratic variation of $B^n$.

We also have, for every $s, y, z$, the convergence
$$f^n(s, y, z) \to f(s, y, z).$$
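As an illustration of the discrete objects above, here is a minimal Python sketch (with an assumed indexing convention; the helper names `bernoulli_walk` and `B_n` are ours) that builds the scaled Bernoulli walk and evaluates its cadlag step interpolation, constant on each interval $[\frac{j}{n}, \frac{j+1}{n})$.

```python
import numpy as np

# Minimal sketch of the scaled random walk B^n and its cadlag step
# interpolation (indexing convention assumed, not taken from the text).

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def bernoulli_walk(n):
    """Partial sums sum_{i<=j} eps_i / sqrt(n), j = 0..n (path[0] = 0)."""
    eps = rng.choice([-1.0, 1.0], size=n)  # i.i.d. symmetric Bernoulli
    return np.concatenate(([0.0], np.cumsum(eps) / np.sqrt(n)))

def B_n(path, s):
    """Step-process value at s in [0, 1): the walk after floor(n*s) steps."""
    n = len(path) - 1
    return path[int(np.floor(s * n))]
```

Each increment has size $1/\sqrt n$, so the terminal value $B^n_1$ has variance one, in line with the Donsker-type convergence $B^n \to B$ invoked below.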
Theorem 7.5.5. ([15]) Let us assume
$$B^n_\cdot \to B_\cdot \ \text{in } S^2(0,T), \qquad \lim_n E[|\xi^n - \xi|^2] = 0.$$
Then, under the assumptions 7.1.1, 7.1.2 and 7.5.5, we have, for every positive constant $q$,
$$\lim_n E\Big[\sup_t |y^{n,q}_t - y^{(q)}_t|^2\Big] = 0, \qquad \lim_n E\int_0^1 |z^{n,q}_s - z^{(q)}_s|^2\, ds = 0.$$
Theorem 7.5.6. Under the above assumptions,
$$E\Big[\sup_t |y^n_t - y_t|^2\Big] + E\int_0^1 |z^n_t - z_t|^2\, dt \to 0, \qquad n \to +\infty.$$
Proof. We use Theorem 7.5.2 and Lemma 7.5.4 to get
$$\begin{aligned}
E\Big[\sup_t |y^n_t - y_t|^2\Big] + E\int_0^1 |z^n_t - z_t|^2\, dt
&\le 3\Big(E\Big[\sup_t |y^n_t - y^{n,p}_t|^2\Big] + E\Big[\sup_t |y^{n,p}_t - y^{(p)}_t|^2\Big] + E\Big[\sup_t |y^{(p)}_t - y_t|^2\Big]\Big)\\
&\quad + 3\, E\int_0^1 \big[|z^{n,p}_t - z^n_t|^2 + |z^{n,p}_t - z^{(p)}_t|^2 + |z^{(p)}_t - z_t|^2\big]\, dt\\
&\le 3\, a\, c_{\xi^n,g^n}\, p^{-\frac12} + 3\, E\Big[\sup_t |y^{n,p}_t - y^{(p)}_t|^2\Big] + 3\, E\int_0^1 |z^{n,p}_t - z^{(p)}_t|^2\, dt + 3\, a\, p^{-\frac12}.
\end{aligned}$$
But for each fixed $p > 0$, by Theorem 7.5.5 the terms involving $y^{n,p} - y^{(p)}$ and $z^{n,p} - z^{(p)}$ tend to 0, and for $n$ large enough we have $a\, c_{\xi^n,g^n} \le 2\, a\, c_{\xi,f}$, with the right-hand side independent of $n$. $\square$
7.5.3 Annex
Proof of Lemma 7.1.2 (discrete Gronwall lemma).

Lemma 7.1.2. Let us consider positive constants $a, b, \alpha$ with $n > b$, and a sequence $(v_k)_{k=1,\dots,n}$ of positive numbers such that, for every $k$,
$$v_k + \alpha \le a + \frac{b}{n}\sum_{i=1}^{k} v_i.$$
Then
$$\sup_{k\le n} v_k + \alpha \le a\, E_n(b),$$
where $E_n(b)$ is the convergent series
$$E_n(b) = 1 + \sum_{p=1}^{\infty} \frac{b^p}{p!}\Big(1+\frac{1}{n}\Big)\cdots\Big(1+\frac{p-1}{n}\Big).$$
Proof. We have
$$v_k + \alpha \le a + \frac{b}{n}\sum_{i=1}^{k} v_i.$$
Noticing the inequality
$$v_i \le a + \frac{b}{n}\sum_{j=1}^{i} v_j$$
and iterating the previous one, we get
$$\begin{aligned}
v_k + \alpha &\le a + \frac{b}{n}\sum_{i=1}^{k}\Big(a + \frac{b}{n}\sum_{j=1}^{i} v_j\Big)\\
&\le a + \frac{ab}{n}k + \frac{b^2}{n^2}\sum_{j=1}^{k} v_j(k - j + 1)\\
&\le a + \frac{ab}{n}k + \frac{ab^2}{n^2}\sum_{j=1}^{k} j + \frac{b^3}{n^3}\sum_{j=1}^{k}(k - j + 1)\sum_{i=1}^{j} v_i\\
&\le a + \frac{ab}{n}k + \frac{ab^2}{2n^2}k(k+1) + \frac{b^3}{n^3}\sum_{i=1}^{k} v_i \sum_{j=i}^{k}(k - j + 1)\\
&\le a + \frac{ab}{n}k + \frac{ab^2}{2n^2}k(k+1) + \frac{b^3}{2n^3}\sum_{i=1}^{k} v_i (k - i + 1)(k - i + 2).
\end{aligned}$$
Now we use the elementary inequality: for $p \ge 1$,
$$\sum_{i=1}^{k} i(i+1)\cdots(i+p-1) \le \frac{1}{p+1}\, k(k+1)\cdots(k+p),$$
and we get
$$v_k + \alpha \le a + \frac{ab}{n}k + \cdots + \frac{ab^p}{n^p}\frac{1}{p!}\, k(k+1)\cdots(k+p-1) + \frac{b^{p+1}}{n^{p+1}}\frac{1}{p!}\sum_{i=1}^{k} v_i (k - i + 1)\cdots(k - i + p).$$
It is clear that under the assumption $n > b$ the last term tends to 0 as $p \to \infty$, and we are done. We can notice that $E_n(b)$ is decreasing in $n$ and tends to $e^b$ when $n \to \infty$. $\square$
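As a sanity check (ours, not part of the proof), one can compare a truncated version of the series $E_n(b)$ with the sequence that satisfies the hypothesis with equality (taking $\alpha = 0$): from $v_k = a + \frac{b}{n}\sum_{i \le k} v_i$ one gets $v_k = v_{k-1}/(1 - b/n)$, so the bound of the lemma is attained. A short Python sketch, with our own function names and truncation level:

```python
import math

# Numerical illustration of the discrete Gronwall lemma: E_n(b) approximated
# by a truncated series, and the "extremal" sequence satisfying the
# hypothesis with equality, v_k = v_{k-1} / (1 - b/n) (requires n > b).

def E_n(b, n, terms=80):
    """Truncated series E_n(b) = 1 + sum_p (b^p/p!) (1+1/n)...(1+(p-1)/n)."""
    total, prod = 1.0, 1.0
    for p in range(1, terms):
        prod *= 1 + (p - 1) / n
        total += b ** p / math.factorial(p) * prod
    return total

def extremal_sequence(a, b, n):
    """v_k = a + (b/n) * sum_{i=1}^k v_i taken with equality (alpha = 0)."""
    v, out = a, []
    for _ in range(n):
        v /= 1 - b / n
        out.append(v)
    return out
```

Summing the series in closed form gives $E_n(b) = (1 - b/n)^{-n}$, which the extremal sequence attains and which indeed decreases to $e^b$ as $n \to \infty$.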
Bibliographie
[1] Alario Nazaret, M. (1982). Jeux de Dynkin. Ph.D. dissertation, Univ. Franche-Comte, Besancon.
[2] Alario Nazaret, M., Lepeltier, J.P. and Marchal, B. (1982). Dynkin games. Lecture Notes in Control and Inform. Sci. 43. Springer, Berlin, 23-42.
[3] Bally, V. (1995). An approximation scheme for BSDEs and applications to control and nonlinear PDEs. In: El Karoui, N. and Mazliak, L. (Eds.), Backward Stochastic Differential Equations. Pitman Research Notes in Mathematics Series 364, 177-193.
[4] Bally, V., Caballero, E., El Karoui, N. and Fernandez, B. (2004). Reflected BSDEs, PDEs and variational inequalities. Preprint.
[5] Bally, V. and Matoussi, A. (2001). Weak solutions for SPDEs and backward doubly stochastic differential equations. Journal of Theoretical Probability, Vol. 14, No. 1, 125-164.
[6] Barles, G. and Lesigne, L. (1997). SDE, BSDE and PDE. In: El Karoui, N. and Mazliak, L. (Eds.), Backward Stochastic Differential Equations. Pitman Research Notes in Mathematics Series 364, 47-80.
[7] Benes, V.E. (1970). Existence of optimal strategies based on specified information for a class of stochastic decision problems. SIAM J. Control Optim. 8, 179-188.
[8] Bensoussan, A. and Friedman, A. (1974). Non-linear variational inequalities and differential games with stopping times. J. Funct. Anal. 16, 305-352.
[9] Bensoussan, A. and Lions, J.L. (1979). Applications des Inequations Variationnelles en Controle Stochastique. Dunod, Paris.
[10] Bismut, J.M. (1977). Sur un probleme de Dynkin. Z. Wahrsch. Verw. Gebiete 39, 31-53.
[11] Bouchard, B. and Touzi, N. (2002). Discrete time approximation and Monte-Carlo simulation of backward stochastic differential equations. Preprint.
[12] Briand, Ph. and Carmona, R. (2000). BSDEs with polynomial growth generators. J. Appl. Math. Stochastic Anal. 13, 207-238.
[13] Briand, Ph., Delyon, B., Hu, Y., Pardoux, E. and Stoica, L. (2003). Lp solutions of BSDEs. Stochastic Process. Appl. 108, 109-129.
[14] Briand, Ph., Delyon, B. and Memin, J. (2001). Donsker-type theorem for BSDEs. Electron. Comm. Probab. 6, 1-14.
[15] Briand, Ph., Delyon, B. and Memin, J. (2002). On the robustness of backward stochastic differential equations. Stochastic Process. Appl. 97, no. 2, 229-253.
[16] Briand, Ph. and Hu, Y. (1998). Stability of BSDEs with random terminal time and homogenization of semilinear elliptic PDEs. J. Funct. Anal. 155, 455-494.
[17] Chevance, D. (1997). Numerical methods for backward stochastic differential equations. Numerical Methods in Finance, Publ. Newton Inst. Cambridge Univ. Press, Cambridge, 232-244.
[18] Coquet, F., Hu, Y., Memin, J. and Peng, S. (2002). Filtration-consistent nonlinear expectations and related g-expectations. Probab. Theory Relat. Fields 123, 1-27.
[19] Cvitanic, J. and Karatzas, I. (1996). Backward stochastic differential equations with reflection and Dynkin games. Ann. Probab. 24, no. 4, 2024-2056.
[20] Cvitanic, J., Karatzas, I. and Soner, M. (1998). Backward stochastic differential equations with constraints on the gains-process. Ann. Probab. 26, no. 4, 1522-1551.
[21] Darling, R.W.R. and Pardoux, E. (1997). Backward SDE with random terminal time and applications to semilinear elliptic PDE. Ann. Probab. 25, no. 3, 1135-1159.
[22] Dellacherie, C. and Meyer, P.A. (1975). Probabilites et Potentiel, I-IV. Hermann, Paris.
[23] Dellacherie, C. and Meyer, P.A. (1980). Probabilites et Potentiel, V-VIII. Hermann, Paris.
[24] Douglas, J., Jr., Ma, J. and Protter, P. (1996). Numerical methods for forward-backward stochastic differential equations. Ann. Appl. Probab. 6, no. 3, 940-968.
[25] Duffie, D. and Epstein, L. (1992). Stochastic differential utility. Econometrica 60, no. 2, 353-394.
[26] Dynkin, E.B. and Yushkevich, A.A. (1968). Theorems and Problems in Markov Processes. Plenum Press, New York.
[27] El Karoui, N. (1979). Les aspects probabilistes du controle stochastique. In: Hennequin, P.L. (Ed.), Ecole d'ete de Saint-Flour, Lecture Notes in Math., Vol. 876. Springer, Berlin, 73-238.
[28] El Karoui, N., Kapoudjian, C., Pardoux, E., Peng, S. and Quenez, M.C. (1997). Reflected solutions of backward SDE and related obstacle problems for PDEs. Ann. Probab. 25, no. 2, 702-737.
[29] El Karoui, N., Pardoux, E. and Quenez, M.C. (1997). Reflected backward SDEs and American options. Numerical Methods in Finance, Publ. Newton Inst. Cambridge Univ. Press, Cambridge, 215-231.
[30] El Karoui, N., Peng, S. and Quenez, M.C. (1997). Backward stochastic differential equations in finance. Math. Finance 7, 1-71.
[31] El Karoui, N. and Quenez, M.C. (1995). Dynamic programming and pricing of contingent claims in an incomplete market. SIAM J. Control Optim. 33, 29-66.
[32] Gobet, E., Lemor, J.P. and Warin, X. (2004). A regression-based Monte-Carlo method to solve backward stochastic differential equations. Preprint, Ecole Polytechnique, Centre de Mathematiques Appliquees.
[33] Hamadene, S. (2002). Reflected BSDEs with discontinuous barrier and application. Stochastics and Stochastics Reports 74, no. 3-4, 571-596.
[34] Hamadene, S. and Hassani, M. (2003). BSDEs with two reflecting barriers: the general result. Preprint.
[35] Hamadene, S. and Lepeltier, J.P. (1995). Zero-sum stochastic differential games and backward equations. Systems Control Lett. 24, 259-263.
[36] Hamadene, S. and Lepeltier, J.P. (1995). Backward equations, stochastic control and zero-sum stochastic differential games. Stochastics and Stochastics Reports 54, 221-231.
[37] Hamadene, S. and Lepeltier, J.P. (2000). Reflected BSDEs and mixed game problem. Stochastic Processes Appl. 85, 177-188.
[38] Hamadene, S., Lepeltier, J.P. and Matoussi, A. (1997). Double barrier backward SDEs with continuous coefficient. In: El Karoui, N. and Mazliak, L. (Eds.), Backward Stochastic Differential Equations. Pitman Research Notes in Mathematics Series 364, 161-177.
[39] Hamadene, S., Lepeltier, J.P. and Peng, S. (1997). BSDEs with continuous coefficients and stochastic differential games. In: El Karoui, N. and Mazliak, L. (Eds.), Backward Stochastic Differential Equations. Pitman Research Notes in Mathematics Series 364, 115-128.
[40] Ikeda, N. and Watanabe, S. (1989). Stochastic Differential Equations and Diffusion Processes, 2nd ed. North-Holland, Kodansha.
[41] Jacod, J. (1979). Calcul stochastique et problemes de martingales. Springer-Verlag, Berlin, New York.
[42] Karatzas, I. and Shreve, S.E. (1991). Brownian Motion and Stochastic Calculus. Springer, New York.
[43] Kobylanski, M. (2000). Backward stochastic differential equations and partial differential equations with quadratic growth. Ann. Probab. 28, 558-602.
[44] Kobylanski, M., Lepeltier, J.P., Quenez, M.C. and Torres, S. (2002). Reflected BSDE with superlinear quadratic coefficient. Probability and Mathematical Statistics, Vol. 22, 51-83.
[45] Kunita, H. (1982). Stochastic differential equations and stochastic flows of diffeomorphisms. Ecole d'ete de Probabilites de Saint-Flour, Lect. Notes Math. 1097, 143-303.
[46] Lepeltier, J.P. and Maingueneau, M.A. (1984). Le jeu de Dynkin en theorie generale sans l'hypothese de Mokobodski. Stochastics 13, 25-44.
[47] Lepeltier, J.P., Matoussi, A. and Xu, M. (2004). Reflected BSDEs under monotonicity and general increasing growth conditions. Advances in Applied Probability, March 2005, 1-26.
[48] Lepeltier, J.P. and San Martin, J. (1998). Existence for BSDE with superlinear-quadratic coefficient. Stochastics and Stochastics Reports 63, 227-240.
[49] Lepeltier, J.P. and San Martin, J. (2004). Backward SDEs with two barriers and continuous coefficient: an existence result. Journal of Applied Probability, Vol. 41, no. 1, 162-175.
[50] Lepeltier, J.P. and San Martin, J. (2004). BSDEs with continuous, monotone and non-Lipschitz-in-z coefficient. Submitted to Journal of Applied Probability.
[51] Lepeltier, J.P. and Xu, M. (2004). Penalization method for reflected backward stochastic differential equations with one r.c.l.l. barrier. To appear in Statistics and Probability Letters.
[52] Matoussi, A. (1997). Reflected solutions of backward stochastic differential equations with continuous coefficient. Statistics & Probability Letters 34, 347-354.
[53] Matoussi, A. and Xu, M. (2005). Sobolev solution for semilinear PDE with obstacle under monotonicity conditions. Preprint.
[54] Memin, J., Peng, S. and Xu, M. (2002). Convergence of solutions of discrete reflected backward SDEs and simulations. Preprint.
[55] Morimoto, H. (1984). Dynkin games and martingale methods. Stochastics 13, 213-228.
[56] Neveu, J. (1975). Discrete-Parameter Martingales. North-Holland, Amsterdam.
[57] Pardoux, E. (1999). BSDEs, weak convergence and homogenization of semilinear PDEs. In: Nonlinear Analysis, Differential Equations and Control, F.H. Clarke and R.J. Stern (Eds.), 503-549. Kluwer Acad. Pub.
[58] Pardoux, E. and Peng, S. (1990). Adapted solutions of backward stochastic differential equations. Systems Control Lett. 14, 51-61.
[59] Pardoux, E. and Peng, S. (1992). Backward stochastic differential equations and quasilinear parabolic partial differential equations. In: Rozovskii, B. and Sowers, R. (Eds.), Stochastic Differential Equations and their Applications, Lecture Notes in Control and Inform. Sci. 186. Springer, Berlin, 200-217.
[60] Peng, S. (1991). Probabilistic interpretation for systems of quasilinear parabolic partial differential equations. Stochastics and Stochastics Reports, Vol. 37, 61-74.
[61] Peng, S. (1992). A generalized dynamic programming principle and Hamilton-Jacobi-Bellman equation. Stochastics 38, 119-134.
[62] Peng, S. (1992). Stochastic Hamilton-Jacobi-Bellman equations. SIAM J. Control Optim. 30, 284-304.
[63] Peng, S. (1997). BSDE and stochastic optimizations (Chinese version). In: Yan, J., Peng, S., Fang, S. and Wu, L., Topics in Stochastic Analysis. Science Publication, 85-138.
[64] Peng, S. (1997). Backward SDE and related g-expectation. In: El Karoui, N. and Mazliak, L. (Eds.), Backward Stochastic Differential Equations. Pitman Research Notes in Mathematics Series 364, 141-159.
[65] Peng, S. (1999). Monotonic limit theory of BSDE and nonlinear decomposition theorem of Doob-Meyer's type. Probab. Theory and Related Fields 113, 473-499.
[66] Peng, S. (2002). Nonlinear expectations and nonlinear Markov chains. In: Proceedings of the 3rd Colloquium on "Backward Stochastic Differential Equations and Applications", Weihai, 2002.
[67] Peng, S. (2003). Dynamically consistent nonlinear evaluations and expectations. Preprint.
[68] Peng, S. (2004). Nonlinear expectations, nonlinear evaluations and risk measures. Lecture Notes in CIME-EMS, Bressanone, Italy, July 2003. LNM, Springer.
[69] Peng, S. and Xu, M. (2005). Smallest g-supermartingales and related reflected BSDEs. Annales de l'I.H.P., Vol. 41, no. 3, 605-630.
[70] Revuz, D. and Yor, M. (1991). Continuous Martingales and Brownian Motion. Springer, New York.
[71] Skorokhod, A.V. (1965). Studies in the Theory of Random Processes. Addison-Wesley, New York.
[72] Talay, D. and Zheng, Z. (2002). Reflected backward stochastic differential equations with random terminal time and applications. Part I: existence and uniqueness. Preprint.
[73] Xu, M. (2004). Reflected BSDE with continuity and monotonicity in y, and non-Lipschitz conditions in z. Preprint.
[74] Xu, M. (2005). Reflected backward SDEs with two barriers under monotonicity conditions. Preprint.
[75] Yong, J. and Zhou, X. (1999). Stochastic Controls: Hamiltonian Systems and HJB Equations. Springer.
[76] Zhang, Y. and Zheng, W. (2002). Discretizing a backward stochastic differential equation. Int. J. Math. Math. Sci. 32, no. 2, 103-116.