FEYNMAN-KAC FORMULAS, BACKWARD STOCHASTIC DIFFERENTIAL EQUATIONS AND MARKOV PROCESSES

J. VAN CASTEREN

This article is written in honor of G. Lumer whom I consider as my semi-group teacher

Abstract. In this paper we explain the notion of backward stochastic differential equations and its relationship with classical (backward) parabolic differential equations of second order. The paper contains a mixture of stochastic processes, like Markov processes and martingale theory, and semi-linear partial differential equations of parabolic type. Some emphasis is put on the fact that the whole theory generalizes Feynman-Kac formulas. A new method of proof of the existence of solutions is given. All the existence arguments are based on rather precise quantitative estimates.

Date: January 10, 2007.
2000 Mathematics Subject Classification. 60H99, 35K20.
Key words and phrases. Backward Stochastic Differential Equation, Markov process, Parabolic equations of second order.
The author is obliged to the University of Antwerp and FWO Flanders (Grant number 1.5051.04N) for their financial and material support. He was also very fortunate to have been able to discuss part of this material with Karel in't Hout (University of Antwerp), who provided some references with a crucial result about a surjectivity property of one-sided Lipschitz mappings: see Theorem 1 in Crouzeix et al [9]. Some aspects concerning this paper were at issue during a conversation with Étienne Pardoux (CMI, Université de Provence, Marseille); the author is grateful for his comments and advice.

1. Introduction

Backward stochastic differential equations, in short BSDE's, have been well studied during the last ten years or so. They were introduced by Pardoux and Peng [19], who proved existence and uniqueness of adapted solutions, under suitable square-integrability assumptions on the coefficients and on the terminal condition. They provide probabilistic formulas for solutions of systems of semi-linear partial differential equations, both of parabolic and elliptic type. The interest in this kind of stochastic equations has increased steadily; this is due to the strong connections of these equations with mathematical finance and the fact that they give a generalization of the well-known Feynman-Kac formula to semi-linear partial differential equations. In the present paper we will concentrate on the relationship between time-dependent strong Markov processes and abstract backward stochastic differential equations. The equations are phrased in terms of a martingale problem, rather than a stochastic differential equation. They could be called weak backward stochastic differential equations. Emphasis is put on existence and uniqueness of solutions. The paper [25] deals with the same subject, but it concentrates on comparison theorems and viscosity solutions. The proof of the existence result is based on a theorem which is related to a homotopy argument, as pointed out by the authors of [9]. It is more direct than the usual approach, which uses, among other things, regularizing by convolution products. It also gives rather precise quantitative estimates.

For examples of strong solutions which are driven by Brownian motion the reader is referred to e.g. Section 2 in Pardoux [18]. If the coefficients $x \mapsto b(s,x)$ and $x \mapsto \sigma(s,x)$ of the underlying (forward) stochastic differential equation are linear in $x$, then the corresponding forward-backward stochastic differential equation is related to option pricing in financial mathematics. The backward stochastic differential equation may serve as a model for a hedging strategy. For more details on this interpretation see e.g. El Karoui and Quenez [15], pp. 198–199. E. Pardoux and S. Zhang [20] use BSDE's to give a probabilistic formula for the solution of a system of parabolic or elliptic semi-linear partial differential equations with Neumann boundary condition. In this paper we want to consider the situation where the family of operators $L(s)$, $0 \le s \le T$, generates a time-inhomogeneous Markov process
$$\left(\Omega, \mathcal{F}^\tau_T, \mathbb{P}_{\tau,x}\right),\ \left(X(t) : T \ge t \ge 0\right),\ (E, \mathcal{E}) \qquad (1.1)$$
in the sense of Definition 1.4. The Markov property and a Markov process are defined in Definition 1.3. We consider the operators $L(s)$ as operators on (a subspace of) the space of bounded continuous functions on $E$, i.e. on $C_b(E)$ equipped with the supremum norm: $\|f\|_\infty = \sup_{x \in E}|f(x)|$, $f \in C_b(E)$.

1.1. Definition. With the family of operators $\{L(s) : s \in [0,T]\}$ the squared gradient operator $\Gamma_1$, defined by
$$\Gamma_1(f,g)(\tau,x) = \lim_{s \downarrow \tau} \frac{1}{s-\tau}\,\mathbb{E}_{\tau,x}\left[\left(f(X(s)) - f(X(\tau))\right)\left(g(X(s)) - g(X(\tau))\right)\right], \qquad (1.2)$$
for $f$, $g \in D(\Gamma_1)$, is associated. A function $f$ is said to belong to $D(\Gamma_1)$ if $\Gamma_1(f,f)(\tau,x)$ exists for all pairs $(\tau,x) \in [0,T] \times E$, and if the resulting function is a member of $C_b([0,T] \times E)$.

It is assumed that $D(\Gamma_1)$ is dense in $C_b([0,T] \times E)$ for the topology of uniform convergence on compact subsets. These squared gradient operators are also called energy operators: see e.g. Barlow, Bass and Kumagai [5]. In the sequel it is assumed that the family of operators $\{L(s) : 0 \le s \le T\}$ possesses the property that the space of functions $u : [0,T] \times E \to \mathbb{R}$ such that the function $(s,x) \mapsto \dfrac{\partial u}{\partial s}(s,x) + L(s)u(s,\cdot)(x)$ belongs to $C_0([0,T] \times E) := C_0([0,T] \times E; \mathbb{R})$ is dense in the space $C_0([0,T] \times E)$. This subspace of functions is denoted by $D(L)$, and the operator $L$ is defined by $Lu(s,x) = L(s)u(s,\cdot)(x)$, $u \in D(L)$. It is also assumed that the family $\mathcal{A}$ is a core for the operator $L$. We assume that the operator $L$, or equivalently the family of operators $\{L(s) : 0 \le s \le T\}$, generates a diffusion in the sense of the following definition.

1.2. Definition. A family of operators $\{L(s) : 0 \le s \le T\}$ is said to generate a diffusion if for every $C^\infty$-function $\Phi : \mathbb{R}^n \to \mathbb{R}$, with $\Phi(0,\ldots,0) = 0$, and every pair $(s,x) \in [0,T] \times E$ the following identity is valid:
$$L(s)\left(\Phi(f_1,\ldots,f_n)(s,\cdot)\right)(x) = \sum_{j=1}^n \frac{\partial\Phi}{\partial x_j}(f_1,\ldots,f_n)(x)\, L(s)f_j(x) + \frac{1}{2}\sum_{j,k=1}^n \frac{\partial^2\Phi}{\partial x_j\,\partial x_k}(f_1,\ldots,f_n)(x)\,\Gamma_1(f_j,f_k)(s,x) \qquad (1.3)$$
for all functions $f_1,\ldots,f_n$ in an algebra of functions $\mathcal{A}$, contained in the domain of the operator $L$, which forms a core for $L$.

Generators of diffusions for single operators are described in Bakry's lecture notes [1]. For more information on the squared gradient operator see e.g. [3] and [2] as well. Put $\Phi(f,g) = fg$. Then (1.3) implies
$$L(s)\left((fg)(s,\cdot)\right)(x) = L(s)f(s,\cdot)(x)\, g(s,x) + f(s,x)\, L(s)g(s,\cdot)(x) + \Gamma_1(f,g)(s,x),$$
provided that the three functions $f$, $g$ and $fg$ belong to $\mathcal{A}$. Instead of using the full strength of (1.3), i.e. with a general function $\Phi$, we just need it for the product $(f,g) \mapsto fg$: see Proposition 1.12.
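As a simple illustration (a standard fact, not stated explicitly in the paper): if $E = \mathbb{R}^d$ and $L(s) = \frac{1}{2}\Delta$ for every $s$, then for smooth compactly supported $f$ and $g$ the product rule above gives
$$\Gamma_1(f,g)(s,x) = \tfrac{1}{2}\Delta(fg)(x) - f(x)\,\tfrac{1}{2}\Delta g(x) - \tfrac{1}{2}\Delta f(x)\, g(x) = \nabla f(x)\cdot\nabla g(x),$$
which explains the name "squared gradient operator": $\Gamma_1(f,f) = |\nabla f|^2$.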

By definition the gradient of a function $u \in D(\Gamma_1)$ in the direction of $v \in D(\Gamma_1)$ is the function $(\tau,x) \mapsto \Gamma_1(u,v)(\tau,x)$. For given $(\tau,x) \in [0,T] \times E$ the functional $v \mapsto \Gamma_1(u,v)(\tau,x)$ is linear: its action is denoted by $\nabla^L_u(\tau,x)$. Hence, for $(\tau,x) \in [0,T] \times E$ fixed, we can consider $\nabla^L_u(\tau,x)$ as an element in the dual of $D(\Gamma_1)$. The pair
$$(\tau,x) \mapsto \left(u(\tau,x),\, \nabla^L_u(\tau,x)\right)$$
may be called an element in the phase space of the family $L(s)$, $0 \le s \le T$ (see Pruss [21]), and the process $s \mapsto \left(u(s,X(s)),\, \nabla^L_u(s,X(s))\right)$ will be called an element of the stochastic phase space. Next let $f : [0,T] \times E \times \mathbb{R} \times D(\Gamma_1)^* \to \mathbb{R}$ be a "reasonable" function, and consider, for $0 \le s_1 < s_2 \le T$, the expression:
$$\begin{aligned}
& u(s_2, X(s_2)) - u(s_1, X(s_1)) + \int_{s_1}^{s_2} f\left(s, X(s), u(s,X(s)), \nabla^L_u(s,X(s))\right) ds \\
& \quad - u(s_2, X(s_2)) + u(s_1, X(s_1)) + \int_{s_1}^{s_2}\left(L(s)u(s,X(s)) + \frac{\partial u}{\partial s}(s,X(s))\right) ds \qquad (1.4) \\
&= u(s_2, X(s_2)) - u(s_1, X(s_1)) + \int_{s_1}^{s_2} f\left(s, X(s), u(s,X(s)), \nabla^L_u(s,X(s))\right) ds - M_u(s_2) + M_u(s_1), \qquad (1.5)
\end{aligned}$$
where
$$M_u(s_2) - M_u(s_1) = u(s_2, X(s_2)) - u(s_1, X(s_1)) - \int_{s_1}^{s_2}\left(L(s)u(s,X(s)) + \frac{\partial u}{\partial s}(s,X(s))\right) ds \qquad (1.6)$$
is a martingale difference, provided the family $L(s)$, $0 \le s \le T$, generates a Markov process: see Definitions 1.3 and 1.4, and Proposition 1.5 below.

1.3. Definition. The family of probability spaces and state variables
$$\left(\Omega, \mathcal{F}^\tau_T, \mathbb{P}_{\tau,x}\right)_{(\tau,x)\in[0,T]\times E},\ \left(X(t) : T \ge t \ge 0\right),\ (E, \mathcal{E}) \qquad (1.7)$$
is called a time-inhomogeneous Markov family or Markov process if
$$\mathbb{E}_{\tau,x}\left[f(X(t)) \mid \mathcal{F}^\tau_s\right] = \mathbb{E}_{s,X(s)}\left[f(X(t))\right], \quad \mathbb{P}_{\tau,x}\text{-almost surely.} \qquad (1.8)$$
Here $f$ is a bounded Borel measurable function defined on the state space $E$ and $\tau \le s \le t \le T$.

Suppose that the process $X(t)$ in (1.7) has paths which are right-continuous and have left limits in $E$. Then it can be shown that the Markov property for fixed times carries over to stopping times in the sense that (1.8) may be replaced with
$$\mathbb{E}_{\tau,x}\left[Y \mid \mathcal{F}^\tau_S\right] = \mathbb{E}_{S,X(S)}\left[Y\right], \quad \mathbb{P}_{\tau,x}\text{-almost surely.} \qquad (1.9)$$
Here $S : \Omega \to [\tau,T]$ is an $\mathcal{F}^\tau_t$-adapted stopping time and $Y$ is a bounded stochastic variable which is measurable with respect to the future (or terminal) $\sigma$-field after $S$, i.e. the one generated by $\{X(t \vee S) : \tau \le t \le T\}$. For this type of result the reader is referred to Chapter 2 in Gulisashvili et al [11]. Markov processes for which (1.9) holds are called strong Markov processes.

1.4. Definition. The family of operators $L(s)$, $0 \le s \le T$, is said to generate a time-inhomogeneous Markov process, as described in (1.7) in Definition 1.3,
$$\left(\Omega, \mathcal{F}^\tau_T, \mathbb{P}_{\tau,x}\right),\ \left(X(t) : T \ge t \ge 0\right),\ (E, \mathcal{E}) \qquad (1.10)$$
if for all functions $u \in D(L)$, for all $x \in E$, and for all pairs $(\tau,s)$ with $0 \le \tau \le s \le T$ the following equality holds:
$$\frac{d}{ds}\mathbb{E}_{\tau,x}\left[u(s,X(s))\right] = \mathbb{E}_{\tau,x}\left[\frac{\partial u}{\partial s}(s,X(s)) + L(s)u(s,\cdot)(X(s))\right]. \qquad (1.11)$$

In the following proposition we write $\mathcal{F}^t_s$, $s \in [t,T]$, for the $\sigma$-field generated by $X(\rho)$, $\rho \in [t,s]$. It shows that under rather general conditions the process $s \mapsto M_u(s) - M_u(t)$, $t \le s \le T$, as defined in (1.5) is a $\mathbb{P}_{t,x}$-martingale. The proof is left to the reader.

1.5. Proposition. Fix $t \in [\tau, T)$. Let the function $u : [t,T] \times E \to \mathbb{R}$ be such that $(s,x) \mapsto \dfrac{\partial u}{\partial s}(s,x) + L(s)u(s,\cdot)(x)$ belongs to $C_0([t,T] \times E) := C_0([t,T] \times E; \mathbb{R})$. Then the process $s \mapsto M_u(s) - M_u(t)$ is adapted to the filtration of $\sigma$-fields $\left(\mathcal{F}^t_s\right)_{s\in[t,T]}$. Moreover, it is a $\mathbb{P}_{t,x}$-martingale.

As explained in Definition 1.2 it is assumed that the subspace $D(L)$ contains an algebra of functions which forms a core for the operator $L$.

1.6. Proposition. Let the family of operators $L(s)$, $0 \le s \le T$, generate a time-inhomogeneous Markov process in the sense of Definition 1.4: see equality (1.11). Then the process $X(t)$ has a modification which is right-continuous and has left limits.

In view of Proposition 1.6 we will assume that our Markov process has left limits and is continuous from the right.

Proof. Let the function $u : [0,T] \times E \to \mathbb{R}$ belong to the space $D(L)$. Then the process $s \mapsto M_u(s) - M_u(t)$, $t \le s \le T$, is a $\mathbb{P}_{t,x}$-martingale. Let $D[0,T]$ be the set of numbers of the form $k2^{-n}T$, $k = 0, 1, 2, \ldots, 2^n$, $n \in \mathbb{N}$. By a classical martingale convergence theorem (see e.g. Chapter II in Revuz and Yor [22]) it follows that the limit $\lim_{s \uparrow t,\, s \in D[0,T]} u(s,X(s))$ exists $\mathbb{P}_{\tau,x}$-almost surely for all $0 \le \tau < t \le T$ and for all $x \in E$. In the same reference it is also shown that the limit $\lim_{s \downarrow t,\, s \in D[0,T]} u(s,X(s))$ exists $\mathbb{P}_{\tau,x}$-almost surely for all $0 \le \tau \le t < T$ and for all $x \in E$. Since the locally compact space $[0,T] \times E$ is second countable it follows that the exceptional sets may be chosen to be independent of $(\tau,x) \in [0,T] \times E$, of $t \in [\tau,T]$, and of the function $u \in D(L)$. Since by hypothesis the subspace $D(L)$ is dense in $C_0([0,T] \times E)$ it follows that the left-hand limit at $t$ of the process $s \mapsto X(s)$, $s \in D[0,T] \cap [\tau, t]$, exists $\mathbb{P}_{\tau,x}$-almost surely for all $(t,x) \in (\tau,T] \times E$. It also follows that the right-hand limit at $t$ of the process $s \mapsto X(s)$, $s \in D[0,T] \cap (t,T]$, exists $\mathbb{P}_{\tau,x}$-almost surely for all $(t,x) \in [\tau,T) \times E$. Then we modify $X(t)$ by replacing it with $X(t+) = \lim_{s \downarrow t,\, s \in D[0,T] \cap (t,T]} X(s)$, $t \in [0,T)$, and $X(T+) = X(T)$. It also follows that the process $t \mapsto X(t+)$ has left limits in $E$. $\square$

The hypotheses in Proposition 1.7 below are the same as those in Proposition 1.6.

1.7. Proposition. Let the continuous function $u : [0,T] \times E \to \mathbb{R}$ be such that for every $s \in [t,T]$ the function $x \mapsto u(s,x)$ belongs to $D(L(s))$ and suppose that the function $(s,x) \mapsto [L(s)u(s,\cdot)](x)$ is bounded and continuous. In addition suppose that the function $s \mapsto u(s,x)$ is continuously differentiable for all $x \in E$. Then the process $s \mapsto M_u(s) - M_u(t)$ is an $\mathcal{F}^t_s$-martingale with respect to the probability $\mathbb{P}_{t,x}$. If $v$ is another such function, then the (right) derivative of the quadratic co-variation of the martingales $M_u$ and $M_v$ is given by:
$$\frac{d}{dt}\langle M_u, M_v\rangle(t) = \Gamma_1(u,v)(t,X(t)).$$
In fact the following identity holds as well:
$$M_u(t)M_v(t) - M_u(0)M_v(0) = \int_0^t M_u(s)\, dM_v(s) + \int_0^t M_v(s)\, dM_u(s) + \int_0^t \Gamma_1(u,v)(s,X(s))\, ds. \qquad (1.12)$$
Here $\mathcal{F}^t_s$, $s \in [t,T]$, is the $\sigma$-field generated by the state variables $X(\rho)$, $t \le \rho \le s$. Instead of $\mathcal{F}^0_s$ we usually write $\mathcal{F}_s$, $s \in [0,T]$. The formula in (1.12) is known as the integration by parts formula for stochastic integrals.

Proof. We outline a proof of the equality in (1.12). So let the functions $u$ and $v$ be as in Proposition 1.7. Then we have
$$\begin{aligned}
M_u(t)M_v(t) - M_u(0)M_v(0) &= \sum_{k=0}^{2^n-1} M_u\left(k2^{-n}t\right)\left(M_v\left((k+1)2^{-n}t\right) - M_v\left(k2^{-n}t\right)\right) \\
&\quad + \sum_{k=0}^{2^n-1}\left(M_u\left((k+1)2^{-n}t\right) - M_u\left(k2^{-n}t\right)\right) M_v\left(k2^{-n}t\right) \qquad (1.13)\\
&\quad + \sum_{k=0}^{2^n-1}\left(M_u\left((k+1)2^{-n}t\right) - M_u\left(k2^{-n}t\right)\right)\left(M_v\left((k+1)2^{-n}t\right) - M_v\left(k2^{-n}t\right)\right).
\end{aligned}$$
As $n \to \infty$, the first term on the right-hand side of (1.13) converges to $\int_0^t M_u(s)\, dM_v(s)$, and the second term converges to $\int_0^t M_v(s)\, dM_u(s)$. Using the identity in (1.6) for the function $u$ and a similar identity for $v$ we see that the third term on the right-hand side of (1.13) converges to $\int_0^t \Gamma_1(u,v)(s,X(s))\, ds$. This completes the proof of Proposition 1.7. $\square$

1.1. Remark. The quadratic variation process of the (local) martingale $s \mapsto M_u(s)$ is given by the process $s \mapsto \Gamma_1(u(s,\cdot), u(s,\cdot))(X(s))$, and therefore
$$\mathbb{E}_{s_1,x}\left[\left|\int_{s_1}^{s_2} dM_u(s)\right|^2\right] = \mathbb{E}_{s_1,x}\left[\int_{s_1}^{s_2}\Gamma_1(u(s,\cdot), u(s,\cdot))(X(s))\, ds\right] < \infty$$
under appropriate conditions on the function $u$. Informally we may think of the following representation for the martingale difference:
$$M_u(s_2) - M_u(s_1) = \int_{s_1}^{s_2}\nabla^L_u(s,X(s))\, dW(s). \qquad (1.14)$$
Here we still have to give a meaning to the stochastic integral in the right-hand side of (1.14). If $E$ is an infinite-dimensional Banach space, then $W(t)$ should be some kind of a cylindrical Brownian motion. It is closely related to a formula which occurs in Malliavin calculus: see Nualart [16] and [17].

1.2. Remark. It is perhaps worthwhile to observe that for standard Brownian motion $(W(s), \mathbb{P}_x)$ the martingale difference $M_u(s_2) - M_u(s_1)$, $s_1 \le s_2 \le T$, is given by a stochastic integral:
$$M_u(s_2) - M_u(s_1) = \int_{s_1}^{s_2}\nabla u(\tau, W(\tau))\, dW(\tau).$$
Its increment of the quadratic variation process is given by
$$\langle M_u, M_u\rangle(s_2) - \langle M_u, M_u\rangle(s_1) = \int_{s_1}^{s_2}\left|\nabla u(\tau, W(\tau))\right|^2 d\tau.$$
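This representation can be checked by a one-line Itô computation (a sketch, assuming $u$ is smooth enough for Itô's formula, with $L(s) = \frac{1}{2}\Delta$ as in this remark):
$$du(s,W(s)) = \frac{\partial u}{\partial s}(s,W(s))\, ds + \tfrac{1}{2}\Delta u(s,W(s))\, ds + \nabla u(s,W(s))\cdot dW(s),$$
so subtracting the absolutely continuous part, as in (1.6), leaves exactly $M_u(s_2) - M_u(s_1) = \int_{s_1}^{s_2}\nabla u(\tau,W(\tau))\, dW(\tau)$, and the displayed quadratic variation follows from $d\langle M_u, M_u\rangle(s) = |\nabla u(s,W(s))|^2\, ds$.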

Next suppose that the function $u$ solves the equation:
$$f\left(s, x, u(s,x), \nabla^L_u(s,x)\right) + L(s)u(s,x) + \frac{\partial}{\partial s}u(s,x) = 0. \qquad (1.15)$$
If moreover $u(T,x) = \varphi(T,x)$, $x \in E$, is given, then we have
$$u(t,X(t)) = \varphi(T,X(T)) + \int_t^T f\left(s, X(s), u(s,X(s)), \nabla^L_u(s,X(s))\right) ds - \int_t^T dM_u(s), \qquad (1.16)$$
with $M_u(s)$ as in (1.6). From (1.16) we get
$$\begin{aligned}
u(t,x) &= \mathbb{E}_{t,x}\left[u(t,X(t))\right] \qquad (1.17)\\
&= \mathbb{E}_{t,x}\left[\varphi(T,X(T))\right] + \int_t^T\mathbb{E}_{t,x}\left[f\left(s, X(s), u(s,X(s)), \nabla^L_u(s,X(s))\right)\right] ds.
\end{aligned}$$

1.8. Theorem. Let $u : [0,T] \times E \to \mathbb{R}$ be a continuous function with the property that for every $(t,x) \in [0,T] \times E$ the function $s \mapsto \mathbb{E}_{t,x}\left[u(s,X(s))\right]$ is differentiable and that
$$\frac{d}{ds}\mathbb{E}_{t,x}\left[u(s,X(s))\right] = \mathbb{E}_{t,x}\left[L(s)u(s,X(s)) + \frac{\partial}{\partial s}u(s,X(s))\right], \quad t < s < T.$$
Then the following assertions are equivalent:

(a) The function $u$ satisfies the following differential equation:
$$L(t)u(t,x) + \frac{\partial}{\partial t}u(t,x) + f\left(t, x, u(t,x), \nabla^L_u(t,x)\right) = 0. \qquad (1.18)$$

(b) The function $u$ satisfies the following type of Feynman-Kac integral equation:
$$u(t,x) = \mathbb{E}_{t,x}\left[u(T,X(T)) + \int_t^T f\left(\tau, X(\tau), u(\tau,X(\tau)), \nabla^L_u(\tau,X(\tau))\right) d\tau\right]. \qquad (1.19)$$

(c) For every $t \in [0,T]$ the process
$$s \mapsto u(s,X(s)) - u(t,X(t)) + \int_t^s f\left(\tau, X(\tau), u(\tau,X(\tau)), \nabla^L_u(\tau,X(\tau))\right) d\tau$$
is an $\mathcal{F}^t_s$-martingale with respect to $\mathbb{P}_{t,x}$ on the interval $[t,T]$.

(d) For every $s \in [0,T]$ the process
$$t \mapsto u(T,X(T)) - u(t,X(t)) + \int_t^T f\left(\tau, X(\tau), u(\tau,X(\tau)), \nabla^L_u(\tau,X(\tau))\right) d\tau$$
is an $\mathcal{F}^t_T$-backward martingale with respect to $\mathbb{P}_{s,x}$ on the interval $[s,T]$.

1.3. Remark. Suppose that the function $u$ is a solution to the following terminal value problem:
$$\begin{cases} L(s)u(s,\cdot)(x) + \dfrac{\partial}{\partial s}u(s,x) + f\left(s, x, u(s,x), \nabla^L_u(s,x)\right) = 0; \\ u(T,x) = \varphi(T,x). \end{cases} \qquad (1.20)$$
Then the pair $\left(u(s,X(s)), \nabla^L_u(s,X(s))\right)$ can be considered as a weak solution to a backward stochastic differential equation. More precisely, for every $s \in [0,T]$ the process
$$t \mapsto u(T,X(T)) - u(t,X(t)) + \int_t^T f\left(\tau, X(\tau), u(\tau,X(\tau)), \nabla^L_u(\tau,X(\tau))\right) d\tau$$
is an $\mathcal{F}^t_T$-backward martingale with respect to $\mathbb{P}_{s,x}$ on the interval $[s,T]$. The symbol $\nabla^L_u v(s,x)$ stands for the functional $v \mapsto \nabla^L_u v(s,x) = \Gamma_1(u,v)(s,x)$, where $\Gamma_1$ is the squared gradient operator defined in Definition 1.1. Possible choices for the function $f$ are
$$f\left(s, x, y, \nabla^L_u\right) = -V(s,x)\,y \quad \text{and} \qquad (1.21)$$
$$f\left(s, x, y, \nabla^L_u\right) = \frac{1}{2}\left|\nabla^L_u(s,x)\right|^2 - V(s,x) = \frac{1}{2}\Gamma_1(u,u)(s,x) - V(s,x). \qquad (1.22)$$

The choice in (1.21) turns equation (1.20) into the following heat equation:
$$\begin{cases} \dfrac{\partial}{\partial s}u(s,x) + L(s)u(s,\cdot)(x) - V(s,x)u(s,x) = 0; \\ u(T,x) = \varphi(T,x). \end{cases} \qquad (1.23)$$
The function $v(s,x)$ defined by the Feynman-Kac formula
$$v(s,x) = \mathbb{E}_{s,x}\left[e^{-\int_s^T V(\rho, X(\rho))\, d\rho}\,\varphi(T,X(T))\right] \qquad (1.24)$$
is a solution candidate to equation (1.23). In the next example we see how the classical Feynman-Kac formula is related to backward stochastic differential equations.

1.4. Example. This example is copied from Remark 2.5 in Pardoux [18]. An important example is one in which the function $f$ is linear: $f(t,x,r,z) = c(t,x)r + h(t,x)$ and $X(s) = X^{t,x}(s)$ is a solution to a stochastic differential equation of the form below:
$$\begin{cases} X^{t,x}(s) - X^{t,x}(t) = \displaystyle\int_t^s b\left(\tau, X^{t,x}(\tau)\right) d\tau + \int_t^s \sigma\left(\tau, X^{t,x}(\tau)\right) dW(\tau), & t \le s \le T; \\ X^{t,x}(s) = x, & 0 \le s \le t. \end{cases}$$
In this case the linear BSDE
$$Y^{t,x}(s) = g\left(X^{t,x}(T)\right) + \int_s^T\left[c\left(r, X^{t,x}(r)\right)Y^{t,x}(r) + h\left(r, X^{t,x}(r)\right)\right] dr - \int_s^T Z^{t,x}(r)\, dW(r)$$
has an explicit solution. From an extension of the classical "variation of constants formula" (see the argument in the proof of the comparison theorem 1.6 in Pardoux [18]) or by direct verification we get:
$$Y^{t,x}(s) = g\left(X^{t,x}(T)\right)e^{\int_s^T c(r, X^{t,x}(r))\, dr} + \int_s^T h\left(r, X^{t,x}(r)\right)e^{\int_s^r c(\alpha, X^{t,x}(\alpha))\, d\alpha}\, dr - \int_s^T e^{\int_s^r c(\alpha, X^{t,x}(\alpha))\, d\alpha}\, Z^{t,x}(r)\, dW(r).$$
Hence $Y^{t,x}(t) = \mathbb{E}\left[Y^{t,x}(t)\right]$, so that
$$Y^{t,x}(t) = \mathbb{E}\left[g\left(X^{t,x}(T)\right)e^{\int_t^T c(s, X^{t,x}(s))\, ds} + \int_t^T h\left(s, X^{t,x}(s)\right)e^{\int_t^s c(r, X^{t,x}(r))\, dr}\, ds\right],$$
which is the well-known Feynman-Kac formula. For more details and explicit formulas see Remark 2.5 in Pardoux [18].
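To make the explicit formula above concrete, the following numerical sketch (not part of the paper; all names and coefficients below are illustrative choices) estimates $Y^{t,x}(t)$ by combining an Euler-Maruyama discretisation of the forward equation with a Monte Carlo average of the Feynman-Kac expression.

```python
import numpy as np

def feynman_kac_linear(x, t, T, b, sigma, c, h, g, n_steps=200, n_paths=20_000, seed=0):
    """Monte Carlo estimate of
       Y(t) = E[ g(X_T) exp(int_t^T c ds) + int_t^T h(s, X_s) exp(int_t^s c dr) ds ],
    for a one-dimensional forward diffusion dX = b dt + sigma dW starting at X_t = x."""
    rng = np.random.default_rng(seed)
    dt = (T - t) / n_steps
    X = np.full(n_paths, float(x))
    int_c = np.zeros(n_paths)      # running integral of c along each path
    running = np.zeros(n_paths)    # accumulates int_t^s h(...) exp(int c) ds
    s = t
    for _ in range(n_steps):
        running += h(s, X) * np.exp(int_c) * dt
        int_c += c(s, X) * dt
        dW = rng.normal(scale=np.sqrt(dt), size=n_paths)
        X += b(s, X) * dt + sigma(s, X) * dW   # Euler-Maruyama step
        s += dt
    return np.mean(g(X) * np.exp(int_c) + running)

# Illustrative coefficients (hypothetical, Ornstein-Uhlenbeck-type forward process):
if __name__ == "__main__":
    est = feynman_kac_linear(
        x=1.0, t=0.0, T=1.0,
        b=lambda s, x: -x, sigma=lambda s, x: 0.5 + 0 * x,
        c=lambda s, x: -0.1 + 0 * x, h=lambda s, x: 0.2 + 0 * x,
        g=lambda x: x ** 2,
    )
    print(f"Monte Carlo estimate of Y(0): {est:.4f}")
```

With these illustrative coefficients the routine simply evaluates the last displayed expectation; comparing its output with a direct BSDE solver for the same linear driver is a convenient consistency check.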

The choice in (1.22) turns equation (1.20) into the following Hamilton-Jacobi-Bellman equation of Riccati type:
$$\begin{cases} \dfrac{\partial}{\partial s}u(s,x) + L(s)u(s,\cdot)(x) - \dfrac{1}{2}\Gamma_1(u,u)(s,x) + V(s,x) = 0; \\ u(T,x) = -\log\varphi(T,x), \end{cases} \qquad (1.25)$$
where $-\log\varphi(T,x)$ replaces $\varphi(T,x)$. The function $S_L$ defined by the genuine non-linear Feynman-Kac formula
$$S_L(s,x) = -\log\mathbb{E}_{s,x}\left[e^{-\int_s^T V(\rho, X(\rho))\, d\rho}\,\varphi(T,X(T))\right] \qquad (1.26)$$
is a solution candidate to (1.25). Often these "solution candidates" are viscosity solutions; however, this will be the main topic of [25]. For more details on the equation in (1.25) in case of a diffusion on a manifold see Theorem 2.4 in Zambrini [26]. The result in [26] is put in the (present) framework of diffusions with $L(s) = L$ in [23].
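The link between (1.23) and (1.25) can be made explicit by a logarithmic (Hopf-Cole type) transformation. The following short computation is a sketch, using the diffusion property (1.3) with $\Phi(y) = e^{-y} - 1$, assuming enough smoothness and $L(s)1 = 0$:
$$L(s)e^{-u}(s,\cdot)(x) = e^{-u(s,x)}\left(-L(s)u(s,\cdot)(x) + \tfrac{1}{2}\Gamma_1(u,u)(s,x)\right), \qquad \frac{\partial}{\partial s}e^{-u}(s,x) = -e^{-u(s,x)}\frac{\partial u}{\partial s}(s,x).$$
Hence, if $v = e^{-u}$ satisfies the linear equation (1.23), then dividing by $e^{-u}$ shows that $u = -\log v$ satisfies (1.25); applied to $v$ from (1.24) this is precisely why $S_L$ in (1.26) is the natural solution candidate.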

1.5. Remark. Let $u(t,x)$ satisfy one of the equivalent conditions in Theorem 1.8. Put $Y(\tau) = u(\tau, X(\tau))$, and let $M(s)$ be the martingale determined by $M(0) = Y(0) = u(0, X(0))$ and by
$$M(s) - M(t) = Y(s) - Y(t) + \int_t^s f\left(\tau, X(\tau), Y(\tau), \nabla^L_u(\tau, X(\tau))\right) d\tau.$$
Then the expression $\nabla^L_u(\tau, X(\tau))$ only depends on the martingale part $M$ of the process $s \mapsto Y(s)$. This entitles us to write $Z_M(\tau)$ instead of $\nabla^L_u(\tau, X(\tau))$. The interpretation of $Z_M(\tau)$ is then the linear functional $N \mapsto \dfrac{d}{d\tau}\langle M, N\rangle(\tau)$, where $N$ is a $\mathbb{P}_{\tau,x}$-martingale in $M^2\left(\Omega, \mathcal{F}^0_T, \mathbb{P}_{t,x}\right)$. Here a process $N$ belongs to $M^2\left(\Omega, \mathcal{F}^0_T, \mathbb{P}_{t,x}\right)$ whenever $N$ is a martingale in $L^2\left(\Omega, \mathcal{F}^0_T, \mathbb{P}_{t,x}\right)$. Notice that the functional $Z_M(\tau)$ is known as soon as the martingale $M \in M^2\left(\Omega, \mathcal{F}^0_T, \mathbb{P}_{t,x}\right)$ is known. From our definitions it also follows that
$$M(T) = Y(T) + \int_0^T f\left(\tau, X(\tau), Y(\tau), Z_M(\tau)\right) d\tau,$$
where we used the fact that $Y(0) = M(0)$.

1.6. Remark. Let the notation be as in Remark 1.5. Then the variables $Y(t)$ and $Z_M(t)$ only depend on the space variable $X(t)$, and as a consequence the martingale increments $M(t_2) - M(t_1)$, $0 \le t_1 < t_2 \le T$, only depend on $\mathcal{F}^{t_1}_{t_2} = \sigma\left(X(s) : t_1 \le s \le t_2\right)$. In Section 2 we give Lipschitz type conditions on the function $f$ in order that the BSDE
$$Y(t) = Y(T) + \int_t^T f\left(s, X(s), Y(s), Z_M(s)\right) ds + M(t) - M(T), \quad \tau \le t \le T, \qquad (1.27)$$
possesses a unique pair of solutions
$$(Y, M) \in L^2\left(\Omega, \mathcal{F}^\tau_T, \mathbb{P}_{\tau,x}\right) \times M^2\left(\Omega, \mathcal{F}^\tau_T, \mathbb{P}_{\tau,x}\right).$$
Here $M^2\left(\Omega, \mathcal{F}^t_T, \mathbb{P}_{t,x}\right)$ stands for the space of all $\left(\mathcal{F}^t_s\right)_{s\in[t,T]}$-martingales which belong to $L^2\left(\Omega, \mathcal{F}^t_T, \mathbb{P}_{t,x}\right)$. Of course instead of writing "BSDE" it would be better to write "BSIE" for Backward Stochastic Integral Equation. However, since in the literature people write "BSDE" even if they mean integral equations, we also stick to this terminology. Suppose that the $\sigma(X(T))$-measurable variable $Y(T) \in L^2\left(\Omega, \mathcal{F}^\tau_T, \mathbb{P}_{\tau,x}\right)$ is given. In fact we will prove that the solution $(Y,M)$ of the equation in (1.27) belongs to the space $S^2\left(\Omega, \mathcal{F}^\tau_T, \mathbb{P}_{\tau,x}; \mathbb{R}^k\right) \times M^2\left(\Omega, \mathcal{F}^\tau_T, \mathbb{P}_{\tau,x}; \mathbb{R}^k\right)$. For more details see the definitions 1.9 and 3.1, and Theorem 4.1.

1.7. Remark. Let $M$ and $N$ be two martingales in $M^2[0,T]$. Then, for $0 \le s < t \le T$,
$$\left|\langle M, N\rangle(t) - \langle M, N\rangle(s)\right|^2 \le \left(\langle M, M\rangle(t) - \langle M, M\rangle(s)\right)\left(\langle N, N\rangle(t) - \langle N, N\rangle(s)\right),$$
and consequently
$$\left|\frac{d}{ds}\langle M, N\rangle(s)\right|^2 \le \frac{d}{ds}\langle M, M\rangle(s)\;\frac{d}{ds}\langle N, N\rangle(s).$$
Hence, the inequality
$$\int_0^T\left|\frac{d}{ds}\langle M, N\rangle(s)\right| ds \le \int_0^T\left(\frac{d}{ds}\langle M, M\rangle(s)\right)^{1/2}\left(\frac{d}{ds}\langle N, N\rangle(s)\right)^{1/2} ds \qquad (1.28)$$
follows. The inequality in (1.28) says that the quantity $\int_0^T\left|\frac{d}{ds}\langle M, N\rangle(s)\right| ds$ is dominated by the Hellinger integral $H(M,N)$ defined by the right-hand side of (1.28). For a proof we refer the reader to [25].

1.8. Remark. Instead of considering $\nabla^L_u(s,x)$ we will also consider the bilinear mapping $Z(s)$ which associates with a pair of local semi-martingales $(Y_1, Y_2)$ a process which is to be considered as the right derivative of the covariation process $\langle Y_1, Y_2\rangle(s)$. We write
$$Z_{Y_1}(s)(Y_2) = Z(s)(Y_1, Y_2) = \frac{d}{ds}\langle Y_1, Y_2\rangle(s).$$
The function $f$ (i.e. the generator of the backward differential equation) will then be of the form $f\left(s, X(s), Y(s), Z_Y(s)\right)$; the deterministic phase $\left(u(s,x), \nabla^L_u(s,x)\right)$ is replaced with the stochastic phase $(Y(s), Z_Y(s))$. We should find an appropriate stochastic phase $s \mapsto (Y(s), Z_Y(s))$, which we identify with the process $s \mapsto (Y(s), M_Y(s))$ in the stochastic phase space $S^2 \times M^2$, such that
$$Y(t) = Y(T) + \int_t^T f\left(s, X(s), Y(s), Z_Y(s)\right) ds - \int_t^T dM_Y(s), \qquad (1.29)$$
where the quadratic variation of the martingale $M_Y(s)$ is given by
$$d\langle M_Y, M_Y\rangle(s) = Z_Y(s)(Y)\, ds = Z(s)(Y,Y)\, ds = d\langle Y, Y\rangle(s).$$
This stochastic phase space $S^2 \times M^2$ plays a role in stochastic analysis very similar to the role played by the first Sobolev space $H^{1,2}$ in the theory of deterministic partial differential equations.

1.9. Remark. In case we deal with strong solutions driven by standard Brownian motion the martingale difference $M_Y(s_2) - M_Y(s_1)$ can be written as $\int_{s_1}^{s_2} Z_Y(s)\, dW(s)$, provided that the martingale $M_Y(s)$ belongs to $M^2\left(\Omega, \mathcal{G}^0_T, \mathbb{P}\right)$. Here $\mathcal{G}^0_T$ is the $\sigma$-field generated by $W(s)$, $0 \le s \le T$. If $Y(s) = u(s,X(s))$, then this stochastic integral satisfies:
$$\int_{s_1}^{s_2} Z_Y(s)\, dW(s) = u(s_2, X(s_2)) - u(s_1, X(s_1)) - \int_{s_1}^{s_2}\left(L(s) + \frac{\partial}{\partial s}\right)u(s, X(s))\, ds. \qquad (1.30)$$
Such stochastic integrals are for example defined if the process $X(t)$ is a solution to a stochastic differential equation (in the Itô sense):
$$X(s) = X(t) + \int_t^s b(\tau, X(\tau))\, d\tau + \int_t^s \sigma(\tau, X(\tau))\, dW(\tau), \quad t \le s \le T. \qquad (1.31)$$
Here the matrix $\left(\sigma_{jk}(\tau,x)\right)_{j,k=1}^d$ is chosen in such a way that
$$a_{jk}(\tau,x) = \sum_{\ell=1}^d \sigma_{j\ell}(\tau,x)\,\sigma_{k\ell}(\tau,x) = \left(\sigma(\tau,x)\sigma^*(\tau,x)\right)_{jk}.$$
The process $W(\tau)$ is a Brownian motion or Wiener process. It is assumed that the operator $L(\tau)$ has the form
$$L(\tau)u(x) = b(\tau,x)\cdot\nabla u(x) + \frac{1}{2}\sum_{j,k=1}^d a_{jk}(\tau,x)\,\frac{\partial^2}{\partial x_j\,\partial x_k}u(x). \qquad (1.32)$$
Then from Itô's formula together with (1.30), (1.31) and (1.32) it follows that the process $Z_Y(s)$ has to be identified with $\sigma(s,X(s))^*\nabla u(s,\cdot)(X(s))$. For more details see e.g. Pardoux and Peng [19] and Pardoux [18]. In this case the squared gradient operator is given by
$$\Gamma_1(u,v)(s,x) = \sum_{j,k=1}^d a_{jk}(s,x)\,\frac{\partial u}{\partial x_j}(s,x)\,\frac{\partial v}{\partial x_k}(s,x).$$
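For the reader's convenience, here is the short formal computation behind this identification (a sketch, assuming $u$ is smooth enough for Itô's formula):
$$du(s,X(s)) = \left(\frac{\partial u}{\partial s} + L(s)u\right)(s,X(s))\, ds + \left(\sigma(s,X(s))^*\nabla u(s,\cdot)(X(s))\right)\cdot dW(s),$$
so the martingale part of $Y(s) = u(s,X(s))$ is $\int \sigma^*\nabla u\cdot dW$, and therefore
$$\frac{d}{ds}\langle M_Y, M_Y\rangle(s) = \left|\sigma(s,X(s))^*\nabla u(s,\cdot)(X(s))\right|^2 = \sum_{j,k=1}^d a_{jk}(s,X(s))\,\frac{\partial u}{\partial x_j}\,\frac{\partial u}{\partial x_k}(s,X(s)),$$
in agreement with Proposition 1.7 and the expression for $\Gamma_1$ above.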

1.10. Remark. Backward doubly stochastic differential equations (BDSDEs) could have been included in the present paper: see Boufoussi, Mrhardy and Van Casteren [7]. In our notation a BDSDE may be written in the form:
$$\begin{aligned}
Y(t) - Y(T) &= \int_t^T f\left(s, X(s), Y(s), N \mapsto \frac{d}{ds}\langle M, N\rangle(s)\right) ds \\
&\quad + \int_t^T g\left(s, X(s), Y(s), N \mapsto \frac{d}{ds}\langle M, N\rangle(s)\right) d\overleftarrow{B}(s) + M(t) - M(T). \qquad (1.33)
\end{aligned}$$
Here the expression
$$\int_t^T g\left(s, X(s), Y(s), N \mapsto \frac{d}{ds}\langle M, N\rangle(s)\right) d\overleftarrow{B}(s)$$
represents a backward Itô integral. The symbol $\langle M, N\rangle$ stands for the covariation process of the (local) martingales $M$ and $N$; it is assumed that this process is absolutely continuous with respect to Lebesgue measure. Moreover,
$$\left(\Omega, \mathcal{F}^\tau_T, \mathbb{P}_{\tau,x}\right),\ \left(X(t) : T \ge t \ge 0\right),\ (E, \mathcal{E})$$
is a Markov process generated by a family of operators $L(s)$, $0 \le s \le T$, and $\mathcal{F}^\tau_t = \sigma\{X(s) : \tau \le s \le t\}$. The process $X(t)$ could be the (unique) weak or strong solution to a (forward) stochastic differential equation (SDE):
$$X(t) = x + \int_\tau^t b(s, X(s))\, ds + \int_\tau^t \sigma(s, X(s))\, dW(s). \qquad (1.34)$$
Here the coefficients $b$ and $\sigma$ have certain continuity or measurability properties, and $\mathbb{P}_{\tau,x}$ is the distribution of the process $X(t)$, defined as being the unique weak solution to the equation in (1.34). We want to find a pair $(Y, M) \in S^2\left(\Omega, \mathcal{F}^\tau_t, \mathbb{P}_{\tau,x}\right) \times M^2\left(\Omega, \mathcal{F}^\tau_t, \mathbb{P}_{\tau,x}\right)$ which satisfies (1.33).

We first give some definitions. Fix $(\tau,x) \in [0,T] \times E$. In the definitions 1.9 and 1.10 the probability measure $\mathbb{P}_{\tau,x}$ is defined on the $\sigma$-field $\mathcal{F}^\tau_T$. In Definition 3.1 we return to these notions. The following definition, and the results implicitly described therein, show that, under certain conditions, by enlarging the sample space a family of processes may be reduced to just one process without losing the $S^2$-property.

1.9. Definition. Fix $(\tau,x) \in [0,T] \times E$. An $\mathbb{R}^k$-valued process $Y$ is said to belong to the space $S^2\left(\Omega, \mathcal{F}^\tau_T, \mathbb{P}_{\tau,x}; \mathbb{R}^k\right)$ if $Y(t)$ is $\mathcal{F}^\tau_t$-measurable ($\tau \le t \le T$) and if
$$\mathbb{E}_{\tau,x}\left[\sup_{\tau \le t \le T}|Y(t)|^2\right] < \infty.$$
It is assumed that, for $s \in [0,\tau]$, $Y(s) = Y(\tau)$, $\mathbb{P}_{\tau,x}$-almost surely. The process $Y(s)$, $s \in [0,T]$, is said to belong to the space $S^2_{\mathrm{unif}}\left(\Omega, \mathcal{F}^\tau_T, \mathbb{P}_{\tau,x}; \mathbb{R}^k\right)$ if
$$\sup_{(\tau,x)\in[0,T]\times E}\mathbb{E}_{\tau,x}\left[\sup_{\tau \le t \le T}|Y(t)|^2\right] < \infty,$$
and it belongs to $S^2_{\mathrm{loc,unif}}\left(\Omega, \mathcal{F}^\tau_T, \mathbb{P}_{\tau,x}; \mathbb{R}^k\right)$ provided that
$$\sup_{(\tau,x)\in[0,T]\times K}\mathbb{E}_{\tau,x}\left[\sup_{\tau \le t \le T}|Y(t)|^2\right] < \infty$$
for all compact subsets $K$ of $E$.

If the $\sigma$-field $\mathcal{F}^\tau_t$ and $\mathbb{P}_{\tau,x}$ are clear from the context we write $S^2\left([0,T], \mathbb{R}^k\right)$ or sometimes just $S^2$. A similar convention is used for the space $M^2$.

1.10. Definition. Let the process $M$ be such that the process $t \mapsto M(t) - M(\tau)$, $t \in [\tau,T]$, is a $\mathbb{P}_{\tau,x}$-martingale with the property that the stochastic variable $M(T) - M(\tau)$ belongs to $L^2\left(\Omega, \mathcal{F}^\tau_T, \mathbb{P}_{\tau,x}\right)$. Then it is said that $M$ is a member of the space $M^2\left(\Omega, \mathcal{F}^\tau_T, \mathbb{P}_{\tau,x}; \mathbb{R}^k\right)$. By the Burkholder-Davis-Gundy inequality (see inequality (3.5) below) it follows that $\mathbb{E}_{\tau,x}\left[\sup_{\tau \le t \le T}|M(t) - M(\tau)|^2\right]$ is finite if and only if $M(T) - M(\tau)$ belongs to the space $L^2\left(\Omega, \mathcal{F}^\tau_T, \mathbb{P}_{\tau,x}\right)$. Here an $\mathcal{F}^\tau_t$-adapted process $M(\cdot) - M(\tau)$ is called a $\mathbb{P}_{\tau,x}$-martingale provided that $\mathbb{E}_{\tau,x}\left[|M(t) - M(\tau)|\right] < \infty$ and
$$\mathbb{E}_{\tau,x}\left[M(t) - M(\tau) \mid \mathcal{F}^\tau_s\right] = M(s) - M(\tau), \quad \mathbb{P}_{\tau,x}\text{-almost surely, for } T \ge t \ge s \ge \tau.$$
The martingale difference $s \mapsto M(s) - M(0)$, $s \in [0,T]$, is said to belong to the space $M^2_{\mathrm{unif}}\left(\Omega, \mathcal{F}^\tau_T, \mathbb{P}_{\tau,x}; \mathbb{R}^k\right)$ if
$$\sup_{(\tau,x)\in[0,T]\times E}\mathbb{E}_{\tau,x}\left[\sup_{\tau \le t \le T}|M(t) - M(\tau)|^2\right] < \infty,$$
and it belongs to $M^2_{\mathrm{loc,unif}}\left(\Omega, \mathcal{F}^\tau_T, \mathbb{P}_{\tau,x}; \mathbb{R}^k\right)$ provided that
$$\sup_{(\tau,x)\in[0,T]\times K}\mathbb{E}_{\tau,x}\left[\sup_{\tau \le t \le T}|M(t) - M(\tau)|^2\right] < \infty$$
for all compact subsets $K$ of $E$. From the Burkholder-Davis-Gundy inequality (see inequality (3.5) below) it follows that the process $M(s) - M(0)$ belongs to the space $M^2_{\mathrm{unif}}\left(\Omega, \mathcal{F}^\tau_T, \mathbb{P}_{\tau,x}; \mathbb{R}^k\right)$ if and only if
$$\sup_{(\tau,x)\in[0,T]\times E}\mathbb{E}_{\tau,x}\left[|M(T) - M(\tau)|^2\right] = \sup_{(\tau,x)\in[0,T]\times E}\mathbb{E}_{\tau,x}\left[\langle M, M\rangle(T) - \langle M, M\rangle(\tau)\right] < \infty.$$
Here $\langle M, M\rangle$ stands for the quadratic variation process of the process $t \mapsto M(t) - M(0)$.

The notions in the definitions 1.9 and 1.10 will exclusively be used in case the family of measures $\left\{\mathbb{P}_{\tau,x} : (\tau,x) \in [0,T] \times E\right\}$ constitutes the distributions of a Markov process as defined in Definition 1.3.

Again let the Markov process, with right-continuous sample paths and with left limits, be generated by the family of operators $\{L(s) : 0 \le s \le T\}$: see Definition 1.3, equality (1.8), and Definition 1.4, equality (1.10).

Next we define the family of operators $\{Q(t_1, t_2) : 0 \le t_1 \le t_2 \le T\}$ by
$$Q(t_1, t_2)f(x) = \mathbb{E}_{t_1,x}\left[f(X(t_2))\right], \quad f \in C_0(E),\ 0 \le t_1 \le t_2 \le T. \qquad (1.35)$$
Fix $\varphi \in D(L)$. Since the process $t \mapsto M_\varphi(t) - M_\varphi(s)$, $t \in [s,T]$, is a $\mathbb{P}_{s,x}$-martingale with respect to the filtration $\left(\mathcal{F}^s_t\right)_{t\in[s,T]}$, and $X(t) = x$ $\mathbb{P}_{t,x}$-almost surely, the following equality follows:
$$\int_s^t\mathbb{E}_{s,x}\left[L(\rho)\varphi(\rho,\cdot)(X(\rho))\right] d\rho + \mathbb{E}_{t,x}\left[\varphi(t,X(t))\right] - \mathbb{E}_{s,x}\left[\varphi(t,X(t))\right] = \varphi(t,x) - \varphi(s,x) - \int_s^t\mathbb{E}_{s,x}\left[\frac{\partial\varphi}{\partial\rho}(\rho,X(\rho))\right] d\rho. \qquad (1.36)$$
The fact that a process of the form $t \mapsto M_\varphi(t) - M_\varphi(s)$, $t \in [s,T]$, is a $\mathbb{P}_{s,x}$-martingale follows from Proposition 1.5. In terms of the family of operators $\{Q(t_1,t_2) : 0 \le t_1 \le t_2 \le T\}$ the equality in (1.36) can be rewritten as
$$\int_s^t Q(s,\rho)L(\rho)\varphi(\rho,\cdot)(x)\, d\rho + Q(t,t)\varphi(t,\cdot)(x) - Q(s,t)\varphi(t,\cdot)(x) = \varphi(t,x) - \varphi(s,x) - \int_s^t Q(s,\rho)\frac{\partial\varphi}{\partial\rho}(\rho,\cdot)(x)\, d\rho. \qquad (1.37)$$
From (1.37) we infer that
$$L(s)\varphi(s,\cdot)(x) = -\lim_{t\downarrow s}\frac{Q(t,t)\varphi(t,\cdot)(x) - Q(s,t)\varphi(t,\cdot)(x)}{t-s}. \qquad (1.38)$$
Equality (1.37) also yields the following result. If $\varphi \in D(L)$ is such that
$$L(\rho)\varphi(\rho,\cdot)(y) = -\frac{\partial\varphi}{\partial\rho}(\rho,y),$$
then
$$\varphi(s,x) = Q(s,t)\varphi(t,\cdot)(x) = \mathbb{E}_{s,x}\left[\varphi(t,X(t))\right]. \qquad (1.39)$$
Since $0 \le s \le t \le T$ are arbitrary, from (1.39) we see
$$Q(s,t')\varphi(t',\cdot)(x) = Q(s,t)Q(t,t')\varphi(t',\cdot)(x), \quad 0 \le s \le t \le t' \le T,\ x \in E. \qquad (1.40)$$
If in (1.40) we (may) choose the function $\varphi(t',y)$ arbitrarily, then the family $Q(s,t)$, $0 \le s \le t \le T$, automatically is a propagator in the space $C_0(E)$ in the sense that $Q(s,t)Q(t,t') = Q(s,t')$, $0 \le s \le t \le t' \le T$. For details on propagators or evolution families see [11].
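For instance (a standard observation, not taken from the paper): in the time-homogeneous case $L(s) = L$ for all $s$, one has $Q(s,t) = e^{(t-s)L}$, and the propagator identity $Q(s,t)Q(t,t') = Q(s,t')$ reduces to the usual semigroup property $e^{(t-s)L}e^{(t'-t)L} = e^{(t'-s)L}$.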

1.11. Remark. In the sequel we want to discuss solutions to equations of the form:
$$\frac{\partial}{\partial t}u(t,x) + L(t)u(t,\cdot)(x) + f\left(t, x, u(t,x), \nabla^L_u(t,x)\right) = 0. \qquad (1.41)$$
For a preliminary discussion on this topic see Theorem 1.8. Under certain hypotheses on the function $f$ we will give some existence and uniqueness results. Let $m$ be (equivalent to) the Lebesgue measure in $\mathbb{R}^d$. In a concrete situation where every operator $L(t)$ is a genuine diffusion operator in $L^2\left(\mathbb{R}^d, m\right)$ we consider the following backward stochastic differential equation
$$u(s,X(s)) = Y(T,X(T)) + \int_s^T f\left(\rho, X(\rho), u(\rho,X(\rho)), \nabla^L_u(\rho,X(\rho))\right) d\rho - \int_s^T\nabla^L_u(\rho,X(\rho))\, dW(\rho). \qquad (1.42)$$
Here we suppose that the process $t \mapsto X(t)$ is a solution to a genuine stochastic differential equation driven by Brownian motion and with one-dimensional distribution $u(t,x)$ satisfying $L(t)u(t,\cdot)(x) = \dfrac{\partial u}{\partial t}(t,x)$. In fact in that case we will not consider the equation in (1.42), but we will try to find an ordered pair $(Y,Z)$ such that
$$Y(s) = Y(T) + \int_s^T f\left(\rho, X(\rho), Y(\rho), Z(\rho)\right) d\rho - \int_s^T\left\langle Z(\rho), dW(\rho)\right\rangle. \qquad (1.43)$$
If the pair $(Y,Z)$ satisfies (1.43), then $u(s,x) = \mathbb{E}_{s,x}\left[Y(s)\right]$ satisfies (1.41). Moreover $Z(s) = \nabla^L_u(s,X(s)) = \nabla^L_u(s,x)$, $\mathbb{P}_{s,x}$-almost surely. For more details see Section 2 in Pardoux [18].

1.12. Remark. Some comments follow:

(a) In Section 2 weak solutions to BSDEs are studied.
(b) In Section 7 of [25] and in Section 2 of Pardoux [18] strong solutions to BSDEs are discussed.
(c) BSDEs go back to Bismut [6].

As a corollary to theorems 1.8 and 3.6 we have the following result.

1.11. Corollary. Suppose that the function $u$ solves the following terminal value problem:
$$\begin{cases} \dfrac{\partial u}{\partial s}(s,y) + L(s)u(s,\cdot)(y) + f\left(s, y, u(s,y), \nabla^L_u(s,y)\right) = 0; \\ u(T,X(T)) = \xi \in L^2\left(\Omega, \mathcal{F}^\tau_T, \mathbb{P}_{\tau,x}\right). \end{cases} \qquad (1.44)$$
Let the pair $(Y,M)$ be a solution to
$$Y(t) = \xi + \int_t^T f\left(s, X(s), Y(s), Z_M(s)\right) ds + M(t) - M(T), \qquad (1.45)$$
with $M(\tau) = 0$. Then $(Y(t), M(t)) = \left(u(t,X(t)), M_u(t)\right)$, where
$$M_u(t) = u(t,X(t)) - u(\tau,X(\tau)) - \int_\tau^t L(s)u(s,\cdot)(X(s))\, ds - \int_\tau^t\frac{\partial u}{\partial s}(s,X(s))\, ds.$$

Notice that the processes $s \mapsto \nabla^L_u(s,X(s))$ and $s \mapsto Z_{M_u}(s)$ may be identified and that $Z_{M_u}(s)$ only depends on $(s,X(s))$. The decomposition
$$u(t,X(t)) - u(\tau,X(\tau)) = \int_\tau^t\left(\frac{\partial u}{\partial s}(s,X(s)) + L(s)u(s,\cdot)(X(s))\right) ds + M_u(t) - M_u(\tau) \qquad (1.46)$$
splits the process $t \mapsto u(t,X(t)) - u(\tau,X(\tau))$ into a part which is of bounded variation (i.e. the part which is absolutely continuous with respect to Lebesgue measure on $[\tau,T]$) and a $\mathbb{P}_{\tau,x}$-martingale part $M_u(t) - M_u(\tau)$ (which in fact is a martingale difference part).

If $L(s) = \frac{1}{2}\Delta$, then $X(s) = W(s)$ (standard Wiener process or Brownian motion) and (1.46) can be rewritten as
$$u(t,W(t)) - u(\tau,W(\tau)) = \int_\tau^t\left(\frac{\partial u}{\partial s}(s,W(s)) + \frac{1}{2}\Delta u(s,\cdot)(W(s))\right) ds + \int_\tau^t\left\langle\nabla u(s,\cdot)(W(s)), dW(s)\right\rangle, \qquad (1.47)$$
where $\int_\tau^t\left\langle\nabla u(s,\cdot)(W(s)), dW(s)\right\rangle$ is to be interpreted as an Itô integral.

1.13. Remark. Suggestions for further research:

(a) Find "explicit solutions" to BSDEs with a linear drift part. This should be a type of Cameron-Martin formula or Girsanov transformation.
(b) Treat weak (and strong) solutions to BDSDEs in a manner similar to what is presented here for BSDEs.
(c) Treat weak (strong) solutions to BSDEs generated by a function $f$ which is not necessarily of linear growth but for example of quadratic growth in one or both of its entries $Y(t)$ and $Z_M(t)$.
(d) Can anything be done if $f$ depends not only on $s$, $x$, $u(s,x)$, $\nabla u(s,x)$, but also on $L(s)u(s,\cdot)(x)$?

1.12. Proposition. Let the functions $f$, $g \in D(L)$ be such that their product $fg$ also belongs to $D(L)$. Then $\Gamma_1(f,g)$ is well defined and for $(s,x) \in [0,T] \times E$ the following equality holds:
$$L(s)(fg)(s,\cdot)(x) - f(s,x)L(s)g(s,\cdot)(x) - L(s)f(s,\cdot)(x)\, g(s,x) = \Gamma_1(f,g)(s,x). \qquad (1.48)$$

Proof. Let the functions $f$ and $g$ be as in Proposition 1.12. For $h > 0$ we have:
$$\begin{aligned}
&\left(f(X(s+h)) - f(X(s))\right)\left(g(X(s+h)) - g(X(s))\right) \\
&\quad = f(X(s+h))\,g(X(s+h)) - f(X(s))\,g(X(s)) \qquad (1.49)\\
&\qquad - f(X(s))\left(g(X(s+h)) - g(X(s))\right) - \left(f(X(s+h)) - f(X(s))\right)g(X(s)).
\end{aligned}$$
Then we take expectations with respect to $\mathbb{E}_{s,x}$, divide by $h > 0$, and pass to the limit as $h \downarrow 0$ to obtain equality (1.48) in Proposition 1.12. $\square$


2. A probabilistic approach: weak solutions

In this section and also in Section 3 we will study BSDE's on a single probability space. In Section 4 we will consider Markov families of probability spaces. In the present section we write $\mathbb{P}$ instead of $\mathbb{P}_{0,x}$, and similarly for the expectations $\mathbb{E}$ and $\mathbb{E}_{0,x}$. Here we work on the interval $[0,T]$. Since we are discussing the martingale problem and basically only the distributions of the process $t \mapsto X(t)$, $t \in [0,T]$, the solutions we obtain are of weak type. In case we consider strong solutions we apply a martingale representation theorem (in terms of Brownian motion). In Section 4 we will also use this result for probability measures of the form $\mathbb{P}_{\tau,x}$ on the interval $[\tau,T]$. In this section we consider a pair of $\mathcal{F}_t = \mathcal{F}^0_t$-adapted processes $(Y,M) \in L^2\left(\Omega, \mathcal{F}_T, \mathbb{P}; \mathbb{R}^k\right) \times L^2\left(\Omega, \mathcal{F}_T, \mathbb{P}; \mathbb{R}^k\right)$ such that $Y(0) = M(0)$ and such that
$$Y(t) = Y(T) + \int_t^T f\left(s, X(s), Y(s), Z_M(s)\right) ds + M(t) - M(T), \qquad (2.1)$$
where $M$ is a $\mathbb{P}$-martingale with respect to the filtration $\mathcal{F}_t = \sigma\left(X(s) : s \le t\right)$. In [25] we will employ the results of the present section with $\mathbb{P} = \mathbb{P}_{\tau,x}$, where $(\tau,x) \in [0,T] \times E$.

2.1. Proposition. Let the pair $(Y,M)$ be as in (2.1), and suppose that $Y(0) = M(0)$. Then
$$Y(t) = M(t) - \int_0^t f\left(s, X(s), Y(s), Z_M(s)\right) ds, \qquad (2.2)$$
$$Y(t) = \mathbb{E}\left[Y(T) + \int_t^T f\left(s, X(s), Y(s), Z_M(s)\right) ds \,\Big|\, \mathcal{F}_t\right], \quad \text{and} \qquad (2.3)$$
$$M(t) = \mathbb{E}\left[Y(T) + \int_0^T f\left(s, X(s), Y(s), Z_M(s)\right) ds \,\Big|\, \mathcal{F}_t\right]. \qquad (2.4)$$
The equality in (2.2) shows that the process $M$ is the martingale part of the semi-martingale $Y$.

Proof. The equality in (2.3) follows from (2.1) and from the fact that $M$ is a martingale. A calculation, in which (2.1) is used, then implies (2.4). Since
$$M(T) = Y(T) + \int_0^T f\left(s, X(s), Y(s), Z_M(s)\right) ds,$$
the equality in (2.2) follows. $\square$

In the following theorem we write $z$ for $Z_M(s)$; the pair $(s,x)$ belongs to $[0,T] \times E$, and $y$ to $\mathbb{R}^k$.

2.2. Theorem. Suppose that there exist finite constants $C_1$ and $C_2$ such that
$$\langle y_2 - y_1, f(s,x,y_2,z) - f(s,x,y_1,z)\rangle \le C_1|y_2 - y_1|^2; \qquad (2.5)$$
$$\left|f\left(s, x, y, Z_{M_2}(s)\right) - f\left(s, x, y, Z_{M_1}(s)\right)\right|^2 \le C_2^2\,\frac{d}{ds}\left\langle M_2 - M_1, M_2 - M_1\right\rangle(s). \qquad (2.6)$$
Then there exists a unique pair of adapted processes $(Y,M)$ such that $Y(0) = M(0)$ and such that the process $M$ is the martingale part of the semi-martingale $Y$:
$$Y(t) = M(t) - M(T) + Y(T) + \int_t^T f\left(s, X(s), Y(s), Z_M(s)\right) ds = M(t) - \int_0^t f\left(s, X(s), Y(s), Z_M(s)\right) ds. \qquad (2.7)$$

Proof. The uniqueness follows from Corollary 3.4 of Theorem 3.3 below. In the existence part of the proof of Theorem 2.2 we will approximate the function $f$ by Lipschitz continuous functions $f_\delta$, $0 < \delta < (2C_1)^{-1}$, where each function $f_\delta$ has Lipschitz constant $\delta^{-1}$, but at the same time inequality (2.6) remains valid for fixed second variable (in an appropriate sense). It follows that for the functions $f_\delta$ (2.6) remains valid and that (2.5) is replaced with
$$\left|f_\delta(s,x,y_2,z) - f_\delta(s,x,y_1,z)\right| \le \frac{1}{\delta}|y_2 - y_1|. \qquad (2.8)$$
In the uniqueness part of the proof it suffices to assume that (2.5) holds. In Theorem 3.6 we will see that the monotonicity condition (2.5) also suffices to prove the existence. For details the reader is referred to the propositions 3.7 and 3.8, Corollary 3.9, and to Proposition 3.10. In fact for $M \in M^2$ fixed, and the function $y \mapsto f\left(s, x, y, Z_M(s)\right)$ satisfying (2.5), the function $y \mapsto y - \delta f\left(s, x, y, Z_M(s)\right)$ is surjective as a mapping from $\mathbb{R}^k$ to $\mathbb{R}^k$ and its inverse exists and is Lipschitz continuous with constant $\dfrac{1}{1 - \delta C_1}$. The Lipschitz continuity is part of Proposition 3.8. The surjectivity of this mapping is a consequence of Theorem 1 in [9]. As pointed out by Crouzeix et al the result follows from a non-trivial homotopy argument. A relatively elementary proof of Theorem 1 in [9] can be found for a continuously differentiable function in Hairer and Wanner [12]: see Theorem 14.2 in Chapter IV. For a few more details see Remark 3.2. Let $f_{s,x,M}$ be the mapping $y \mapsto f\left(s, x, y, Z_M(s)\right)$, and put
$$f_\delta\left(s, x, y, Z_M(s)\right) = f\left(s, x, \left(I - \delta f_{s,x,M}\right)^{-1}(y), Z_M(s)\right). \qquad (2.9)$$
Then the functions $f_\delta$, $0 < \delta < (2C_1)^{-1}$, are Lipschitz continuous with constant $\delta^{-1}$. Proposition 3.10 treats the transition from solutions of BSDE's with generator or coefficient $f_\delta$ with fixed martingale $M \in M^2$ to solutions of BSDE's driven by $f$ with the same fixed martingale $M$. Proposition 3.7 contains the passage from solutions $(Y,N) \in S^2 \times M^2$ of BSDE's with generators of the form $(s,y) \mapsto f\left(s, y, Z_M(s)\right)$, for any fixed martingale $M \in M^2$, to solutions of BSDE's of the form (2.7) where the pair $(Y,M)$ belongs to $S^2 \times M^2$. By hypothesis the process $s \mapsto f\left(s, x, Y(s), Z_M(s)\right)$ satisfies (2.5) and (2.6). Essentially speaking, a combination of these observations shows the result in Theorem 2.2. $\square$

2.1. Remark. In the literature functions with the monotonicity property are also called one-sided Lipschitz functions. In fact Theorem 2.2, with $f(t,x,\cdot,\cdot)$ Lipschitz continuous in both variables, will be superseded by Theorem 3.5 in the Lipschitz case and by Theorem 3.6 in case of monotonicity in the second variable and Lipschitz continuity in the third variable. The proof of Theorem 2.2 is part of the results in Section 3. Theorem 4.1 contains a corresponding result for a Markov family of probability measures. Its proof is omitted; it follows the same lines as the proof of Theorem 3.6.

3. Existence and Uniqueness of solutions to BSDE’s

The equation in (1.41) can be phrased in a semi-linear setting as follows. Find a function $u(t,x)$ which satisfies the following partial differential equation:
$$\begin{cases} \dfrac{\partial u}{\partial s}(s,x) + L(s)u(s,x) + f\left(s, x, u(s,x), \nabla^L_u(s,x)\right) = 0; \\ u(T,x) = \varphi(T,x), \quad x \in E. \end{cases} \qquad (3.1)$$
Here $\nabla^L_{f_2}(s,x)$ is the linear functional $f_1 \mapsto \Gamma_1(f_1,f_2)(s,x)$ for smooth enough functions $f_1$ and $f_2$. For $s \in [0,T]$ fixed the symbol $\nabla^L_{f_2}$ stands for the linear mapping $f_1 \mapsto \Gamma_1(f_1,f_2)(s,\cdot)$. One way to treat this kind of equation is to consider the following backward problem. Find a pair of adapted processes $(Y, Z_Y)$ satisfying
$$Y(t) - Y(T) - \int_t^T f\left(s, X(s), Y(s), Z(s)(\cdot, Y)\right) ds = M(t) - M(T), \qquad (3.2)$$
where $M(s)$, $t_0 < t \le s \le T$, is a forward local $\mathbb{P}_{t,x}$-martingale (for every $T > t > t_0$). The symbol $Z_{Y_1}$, $Y_1 \in S^2\left([0,T], \mathbb{R}^k\right)$, stands for the functional
$$Z_{Y_1}(Y_2)(s) = Z(s)\left(Y_1(\cdot), Y_2(\cdot)\right) = \frac{d}{ds}\left\langle Y_1(\cdot), Y_2(\cdot)\right\rangle(s), \quad Y_2 \in S^2\left([0,T], \mathbb{R}^k\right). \qquad (3.3)$$
If the pair $(Y, Z_Y)$ satisfies (3.2), then $Z_Y = Z_M$. Instead of trying to find the pair $(Y, Z_Y)$ we will try to find a pair $(Y,M) \in S^2\left([0,T], \mathbb{R}^k\right) \times M^2\left([0,T], \mathbb{R}^k\right)$ such that
$$Y(t) = Y(T) + \int_t^T f\left(s, X(s), Y(s), Z_M(s)\right) ds + M(t) - M(T).$$
In fact in the present section we will also suppress the dependence of the generator $f$ as a function of the Markov process $X$. Instead of a family of measure spaces $\left(\Omega, \mathcal{F}^\tau_T, \mathbb{P}_{\tau,x}\right)$ we will consider a single measure space $(\Omega, \mathcal{F}, \mathbb{P})$ with a filtration $\left(\mathcal{F}_t\right)_{0\le t\le T} = \left(\mathcal{F}^0_t\right)_{0\le t\le T}$. In Section 4 we will employ the results of this Section 3 to obtain results for Markov processes. As a consequence we write $f\left(s, Y(s), Z_M(s)\right)$ instead of $f\left(s, X(s), Y(s), Z_M(s)\right)$. Next we define the spaces $S^2\left([0,T], \mathbb{R}^k\right)$ and $M^2\left([0,T], \mathbb{R}^k\right)$: compare with the definitions 1.9 and 1.10.

3.1. Definition. Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space, and let $\mathcal{F}_t$, $t \in [0,T]$, be a filtration on $\mathcal{F}$. Let $t \mapsto Y(t)$ be a stochastic process with values in $\mathbb{R}^k$ which is adapted to the filtration $\mathcal{F}_t$ and which is $\mathbb{P}$-almost surely continuous. Then $Y$ is said to belong to $S^2\left([0,T], \mathbb{R}^k\right)$ provided that $\mathbb{E}\left[\sup_{t\in[0,T]}|Y(t)|^2\right] < \infty$.

3.2. Definition. The space of $\mathbb{R}^k$-valued martingales in $L^2\left(\Omega, \mathcal{F}, \mathbb{P}; \mathbb{R}^k\right)$ is denoted by $M^2\left([0,T], \mathbb{R}^k\right)$, so that a continuous martingale $t \mapsto M(t) - M(0)$ belongs to $M^2\left([0,T], \mathbb{R}^k\right)$ if $\mathbb{E}\left[|M(T) - M(0)|^2\right] < \infty$.

Since the process
$$t \mapsto |M(t)|^2 - |M(0)|^2 - \langle M, M\rangle(t) + \langle M, M\rangle(0)$$
is a martingale difference we see that
$$\mathbb{E}\left[|M(T) - M(0)|^2\right] = \mathbb{E}\left[\langle M, M\rangle(T) - \langle M, M\rangle(0)\right], \qquad (3.4)$$
and hence a martingale difference $t \mapsto M(t) - M(0)$ in $L^2\left(\Omega, \mathcal{F}, \mathbb{P}; \mathbb{R}^k\right)$ belongs to $M^2\left([0,T], \mathbb{R}^k\right)$ if and only if $\mathbb{E}\left[\langle M, M\rangle(T) - \langle M, M\rangle(0)\right]$ is finite. By the Burkholder-Davis-Gundy inequality this is the case if and only if $\mathbb{E}\left[\sup_{0<t<T}|M(t) - M(0)|^2\right] < \infty$. To be precise, let $M(s)$, $t \le s \le T$, be a continuous local $L^2$-martingale taking values in $\mathbb{R}^k$. Put $M^*(s) = \sup_{t\le\tau\le s}|M(\tau)|$. Fix $0 < p < \infty$. The Burkholder-Davis-Gundy inequality says that there exist universal finite and strictly positive constants $c_p$ and $C_p$ such that
$$c_p\,\mathbb{E}\left[\left(M^*(s)\right)^{2p}\right] \le \mathbb{E}\left[\left\langle M(\cdot), M(\cdot)\right\rangle^p(s)\right] \le C_p\,\mathbb{E}\left[\left(M^*(s)\right)^{2p}\right], \quad t \le s \le T. \qquad (3.5)$$
If $p = 1$, then $c_p = \frac{1}{4}$, and if $p = \frac{1}{2}$, then $c_p = \frac{1}{8}\sqrt{2}$. For more details and a proof see e.g. Ikeda and Watanabe [13].

The following theorem will be employed to prove continuity of solutions to BSDE's. It also implies that BSDE's as considered by us possess at most unique solutions. The variables $(Y,M)$ and $(Y',M')$ attain their values in $\mathbb{R}^k$ endowed with its Euclidean inner product $\langle y', y\rangle = \sum_{j=1}^k y'_j y_j$, $y', y \in \mathbb{R}^k$. Processes of the form $s \mapsto f\left(s, Y(s), Z_M(s)\right)$ are progressively measurable processes whenever the pair $(Y,M)$ belongs to the space mentioned in (3.6) in the next theorem.

3.3. Theorem. Let the pairs $(Y,M)$ and $(Y',M')$, which belong to the space
$$L^2\left([0,T]\times\Omega, \mathcal{F}^0_T, dt\times\mathbb{P}\right) \times M^2\left(\Omega, \mathcal{F}^0_T, \mathbb{P}\right), \qquad (3.6)$$
be solutions to the following BSDE's:
$$Y(t) = Y(T) + \int_t^T f\left(s, Y(s), Z_M(s)\right) ds + M(t) - M(T), \quad \text{and} \qquad (3.7)$$
$$Y'(t) = Y'(T) + \int_t^T f'\left(s, Y'(s), Z_{M'}(s)\right) ds + M'(t) - M'(T) \qquad (3.8)$$
for $0 \le t \le T$. In particular this means that the processes $(Y,M)$ and $(Y',M')$ are progressively measurable and are square integrable. Suppose that the coefficient $f'$ satisfies the following monotonicity and Lipschitz condition. There exist some positive and finite constants $C'_1$ and $C'_2$ such that the following inequalities hold for all $0 \le t \le T$:
$$\left\langle Y'(t) - Y(t),\, f'\left(t, Y'(t), Z_{M'}(t)\right) - f'\left(t, Y(t), Z_{M'}(t)\right)\right\rangle \le \left(C'_1\right)^2|Y'(t) - Y(t)|^2, \qquad (3.9)$$
and
$$\left|f'\left(t, Y(t), Z_{M'}(t)\right) - f'\left(t, Y(t), Z_M(t)\right)\right|^2 \le \left(C'_2\right)^2\,\frac{d}{dt}\left\langle M' - M, M' - M\right\rangle(t). \qquad (3.10)$$
Then the pair $(Y' - Y, M' - M)$ belongs to $S^2\left(\Omega, \mathcal{F}^0_T, \mathbb{P}; \mathbb{R}^k\right) \times M^2\left(\Omega, \mathcal{F}^0_T, \mathbb{P}; \mathbb{R}^k\right)$, and there exists a constant $C'$ which depends on $C'_1$, $C'_2$ and $T$ such that
$$\mathbb{E}\left[\sup_{0<t<T}|Y'(t) - Y(t)|^2 + \left\langle M' - M, M' - M\right\rangle(T)\right] \le C'\,\mathbb{E}\left[|Y'(T) - Y(T)|^2 + \int_0^T\left|f'\left(s, Y(s), Z_M(s)\right) - f\left(s, Y(s), Z_M(s)\right)\right|^2 ds\right]. \qquad (3.11)$$

3.1. Remark. From the proof it follows that for $C'$ we may choose $C' = 260\,e^{\gamma T}$, where $\gamma = 1 + 2\left(C'_1\right)^2 + 2\left(C'_2\right)^2$.

By taking $Y(T) = Y'(T)$ and $f\left(s, Y(s), Z_M(s)\right) = f'\left(s, Y(s), Z_M(s)\right)$, Theorem 3.3 also implies that BSDE's as considered by us possess at most unique solutions. A precise formulation reads as follows.

3.4. Corollary. Suppose that the coefficient $f$ satisfies the monotonicity condition (3.9) and the Lipschitz condition (3.10). Then there exists at most one pair $(Y,M) \in L^2\left([0,T]\times\Omega, \mathcal{F}^0_T, dt\times\mathbb{P}\right) \times M^2\left(\Omega, \mathcal{F}^0_T, \mathbb{P}\right)$ which satisfies the backward stochastic differential equation (3.7).

Proof of Theorem 3.3. Put $\overline{Y} = Y' - Y$ and $\overline{M} = M' - M$. The proof follows from Itô's formula applied to the process $\left|\overline{Y}(t)\right|^2 - \left\langle\overline{M}, \overline{M}\right\rangle(t)$ in conjunction with the Burkholder-Davis-Gundy inequality (3.5) for $p = \frac{1}{2}$. For more details on the Burkholder-Davis-Gundy inequality, see e.g. Ikeda and Watanabe [13]. $\square$

In the definitions 3.1 and 3.2 the spaces $S^2\left([0,T], \mathbb{R}^k\right)$ and $M^2\left([0,T], \mathbb{R}^k\right)$ are defined. In Theorem 3.6 we will replace the Lipschitz condition (3.12) in Theorem 3.5 for the function $Y(s) \mapsto f\left(s, Y(s), Z_M(s)\right)$ with the (weaker) monotonicity condition (3.18). Here we write $y$ for the variable $Y(s)$ and $z$ for $Z_M(s)$. It is noticed that we consider a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ with a filtration $\left(\mathcal{F}_t\right)_{t\in[0,T]} = \left(\mathcal{F}^0_t\right)_{t\in[0,T]}$ where $\mathcal{F}_T = \mathcal{F}$.

3.5. Theorem. Let $f : [0,T] \times \mathbb{R}^k \times \left(M^2\right)^* \to \mathbb{R}^k$ be Lipschitz continuous in the sense that there exist finite constants $C_1$ and $C_2$ such that for any two pairs of processes $(Y,M)$ and $(U,N) \in S^2\left([0,T], \mathbb{R}^k\right) \times M^2\left([0,T], \mathbb{R}^k\right)$ the following inequalities hold for all $0 \le s \le T$:
$$\left|f\left(s, Y(s), Z_M(s)\right) - f\left(s, U(s), Z_M(s)\right)\right| \le C_1|Y(s) - U(s)|, \quad \text{and} \qquad (3.12)$$
$$\left|f\left(s, Y(s), Z_M(s)\right) - f\left(s, Y(s), Z_N(s)\right)\right| \le C_2\left(\frac{d}{ds}\left\langle M - N, M - N\right\rangle(s)\right)^{1/2}. \qquad (3.13)$$
Suppose that $\mathbb{E}\left[\int_0^T|f(s,0,0)|^2\, ds\right] < \infty$. Then there exists a unique pair $(Y,M) \in S^2\left([0,T], \mathbb{R}^k\right) \times M^2\left([0,T], \mathbb{R}^k\right)$ such that
$$Y(t) = \xi + \int_t^T f\left(s, Y(s), Z_M(s)\right) ds + M(t) - M(T), \qquad (3.14)$$
where $Y(T) = \xi \in L^2\left(\Omega, \mathcal{F}_T, \mathbb{P}; \mathbb{R}^k\right)$ is given and where $Y(0) = M(0)$.

For brevity we write
$$S^2 \times M^2 = S^2\left([0,T], \mathbb{R}^k\right) \times M^2\left([0,T], \mathbb{R}^k\right) = S^2\left(\Omega, \mathcal{F}^0_T, \mathbb{P}; \mathbb{R}^k\right) \times M^2\left(\Omega, \mathcal{F}^0_T, \mathbb{P}; \mathbb{R}^k\right).$$
In fact we employ this theorem with the function $f$ replaced with $f_\delta$, $0 < \delta < (2C_1)^{-1}$, where $f_\delta$ is defined by
$$f_\delta\left(s, y, Z_M(s)\right) = f\left(s, \left(I - \delta f_{s,M}\right)^{-1}(y), Z_M(s)\right). \qquad (3.15)$$
Here $f_{s,M}(y) = f\left(s, y, Z_M(s)\right)$. If the function $f$ is monotone (or one-sided Lipschitz) in the second variable with constant $C_1$, and Lipschitz in the third variable with constant $C_2$, then the function $f_\delta$ is Lipschitz in $y$ with constant $\delta^{-1}$.

Proof. The proof of the uniqueness part follows from Corollary 3.4.

In order to prove existence we proceed as follows. By induction we define a sequence $(Y_n, M_n)$ in the space $S^2 \times M^2$ as follows:
$$Y_{n+1}(t) = \mathbb{E}\left[\xi + \int_t^T f\left(s, Y_n(s), Z_{M_n}(s)\right) ds \,\Big|\, \mathcal{F}_t\right], \quad \text{and} \qquad (3.16)$$
$$M_{n+1}(t) = \mathbb{E}\left[\xi + \int_0^T f\left(s, Y_n(s), Z_{M_n}(s)\right) ds \,\Big|\, \mathcal{F}_t\right]. \qquad (3.17)$$
Then, since the process $s \mapsto f\left(s, Y_n(s), Z_{M_n}(s)\right)$ is adapted and the pair $(Y_n, M_n)$ belongs to $S^2 \times M^2$, it can be proved, using for example Itô calculus and the Burkholder-Davis-Gundy inequality with $p = \frac{1}{2}$ (see (3.5)), that the sequence $(Y_n, M_n)$ converges in the space $S^2 \times M^2$. $\square$
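The convergence mechanism in (3.16)-(3.17) can also be illustrated numerically. The sketch below is not from the paper; the driver and terminal function are illustrative choices, and Brownian motion is replaced by a binomial-tree approximation so that the conditional expectations are exact. It performs the Picard iteration for a scalar BSDE driven by Brownian motion.

```python
import numpy as np

def bsde_picard(g, f, T=1.0, N=50, n_iter=20):
    """Picard iteration (3.16)-(3.17) for the scalar BSDE
       Y(t) = g(W(T)) + int_t^T f(s, W(s), Y(s), Z(s)) ds - int_t^T Z(s) dW(s),
    with W approximated by a symmetric binomial tree (N time steps).
    Returns the approximation of Y(0)."""
    dt = T / N
    sq = np.sqrt(dt)
    # node j at time step i corresponds to W = (2*j - i) * sqrt(dt), j = 0, ..., i
    W = [np.array([(2.0 * j - i) * sq for j in range(i + 1)]) for i in range(N + 1)]
    Y = [np.zeros(i + 1) for i in range(N + 1)]
    Z = [np.zeros(i + 1) for i in range(N + 1)]
    Y[N] = g(W[N])
    for _ in range(n_iter):
        # driver evaluated along the previous iterate (Y_n, Z_n), cf. (3.16)
        F = [f(i * dt, W[i], Y[i], Z[i]) for i in range(N + 1)]
        newY = [None] * (N + 1)
        newZ = [None] * (N + 1)
        newY[N] = g(W[N])
        newZ[N] = np.zeros(N + 1)
        for i in range(N - 1, -1, -1):
            up, down = newY[i + 1][1:], newY[i + 1][:-1]  # children of node j: j+1 (up), j (down)
            cond = 0.5 * (up + down)                      # conditional expectation given F_{t_i}
            newY[i] = cond + F[i] * dt                    # one-step discrete version of (3.16)
            newZ[i] = (up - down) / (2.0 * sq)            # discrete analogue of d<M, W>/ds at t_i
        Y, Z = newY, newZ
    return Y[0][0]
```

For example, with the hypothetical choices `f = lambda t, w, y, z: -0.5 * y` and `g = lambda w: w ** 2` on $[0,1]$, the iteration converges quickly and `bsde_picard` returns a value close to $e^{-1/2}$, the exact value of $Y(0)$ for this linear driver.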

In the following theorem we replace the Lipschitz condition (3.12) in Theorem 3.5 for the function $Y(s) \mapsto f\left(s, Y(s), Z_M(s)\right)$ with the (weaker) monotonicity condition (3.18). Here we write $y$ for the variable $Y(s)$ and $z$ for $Z_M(s)$.

3.6. Theorem. Let $f : [0,T] \times \mathbb{R}^k \times \left(M^2\right)^* \to \mathbb{R}^k$ be monotone in the variable $y$ and Lipschitz in $z$. More precisely, suppose that there exist finite constants $C_1$ and $C_2$ such that for any two pairs of processes $(Y,M)$ and $(U,N) \in S^2\left([0,T], \mathbb{R}^k\right) \times M^2\left([0,T], \mathbb{R}^k\right)$ the following inequalities hold for all $0 \le s \le T$:
$$\left\langle Y(s) - U(s),\, f\left(s, Y(s), Z_M(s)\right) - f\left(s, U(s), Z_M(s)\right)\right\rangle \le C_1|Y(s) - U(s)|^2, \qquad (3.18)$$
$$\left|f\left(s, Y(s), Z_M(s)\right) - f\left(s, Y(s), Z_N(s)\right)\right| \le C_2\left(\frac{d}{ds}\left\langle M - N, M - N\right\rangle(s)\right)^{1/2}, \qquad (3.19)$$
and
$$\left|f\left(s, Y(s), 0\right)\right| \le \overline{f}(s) + K|Y(s)|. \qquad (3.20)$$
If $\mathbb{E}\left[\int_0^T\left|\overline{f}(s)\right|^2 ds\right] < \infty$, then there exists a unique pair $(Y,M) \in S^2\left([0,T], \mathbb{R}^k\right) \times M^2\left([0,T], \mathbb{R}^k\right)$ such that
$$Y(t) = \xi + \int_t^T f\left(s, Y(s), Z_M(s)\right) ds + M(t) - M(T), \qquad (3.21)$$
where $Y(T) = \xi \in L^2\left(\Omega, \mathcal{F}_T, \mathbb{P}; \mathbb{R}^k\right)$ is given and where $Y(0) = M(0)$.

In order to prove Theorem 3.6 we need the next proposition, the proof of which uses the monotonicity condition (3.18) in an explicit manner.

3.7. Proposition. Suppose that for every $\xi \in L^2(\Omega, \mathcal F^0_T, \mathbb P)$ and every $M \in M^2$ there exists a pair $(Y,N) \in S^2 \times M^2$ such that
\[
Y(t) = \xi + \int_t^T f(s, Y(s), Z_M(s))\,ds + N(t) - N(T). \tag{3.22}
\]
Then for every $\xi \in L^2(\Omega, \mathcal F^0_T, \mathbb P)$ there exists a unique pair $(Y,M) \in S^2 \times M^2$ which satisfies (3.21).

The following proposition can be viewed as a consequence of Theorem 12.4 in [12]. The result is due to Burrage and Butcher [8] and Crouzeix [10]. The constants obtained there are somewhat different from ours; if $C_1 = 0$, then they agree. The proof is omitted. The surjectivity of the mapping $y \mapsto y - \delta f(t, y, Z_M(t))$ is a consequence of Theorem 1 in Crouzeix et al [9].

3.8. Proposition. Fix a martingale $M \in M^2$, and choose $\delta > 0$ in such a way that $\delta C_1 < 1$. Here $C_1$ is the constant which occurs in inequality (3.18). Choose, for given $y \in \mathbb R^k$, the stochastic variable $Y(t) \in \mathbb R^k$ in such a way that $y = Y(t) - \delta f(t, Y(t), Z_M(t))$. Then the mapping $y \mapsto f(t, Y(t), Z_M(t))$ is Lipschitz continuous with a Lipschitz constant equal to $\dfrac{1}{\delta}\max\left(1, \dfrac{\delta C_1}{1 - \delta C_1}\right)$. Moreover, the mapping $y \mapsto y - \delta f(t, y, Z_M(t))$ is surjective and has a Lipschitz continuous inverse with Lipschitz constant $\dfrac{1}{1 - \delta C_1}$.

3.9. Corollary. For $\delta > 0$ such that $\delta C_1 < 1$ there exist processes $\widetilde Y_\delta$ and $Y_\delta \in S^2$ and a martingale $M_\delta \in M^2$ such that the following equalities are satisfied:
\[
\widetilde Y_\delta(t) = Y_\delta(t) - \delta f\bigl(t, Y_\delta(t), Z_M(t)\bigr) = \widetilde Y_\delta(T) + \int_t^T f\bigl(s, Y_\delta(s), Z_M(s)\bigr)\,ds + M_\delta(t) - M_\delta(T). \tag{3.23}
\]

Proof. From Theorem 1 (page 87) in Crouzeix et al [9] it follows that the mapping $y \mapsto y - \delta f(t, y, Z_M(t))$ is a surjective map from $\mathbb R^k$ onto itself, provided $0 < \delta C_1 < 1$. If $y_2$ and $y_1$ in $\mathbb R^k$ are such that $y_2 - \delta f(t, y_2, Z_M(t)) = y_1 - \delta f(t, y_1, Z_M(t))$, then
\[
|y_2 - y_1|^2 = \langle y_2 - y_1, \delta f(t, y_2, Z_M(t)) - \delta f(t, y_1, Z_M(t))\rangle \le \delta C_1\,|y_2 - y_1|^2,
\]
and hence $y_2 = y_1$. It follows that the continuous mapping $y \mapsto y - \delta f(t, y, Z_M(t))$ has a continuous inverse. Denote this inverse by $(I - \delta f_{t,M})^{-1}$. Moreover, for $0 < 2\delta C_1 < 1$, the mapping $y \mapsto f\bigl(t, (I - \delta f_{t,M})^{-1}y, Z_M(t)\bigr)$ is Lipschitz continuous with Lipschitz constant $\delta^{-1}$, which follows from Proposition 3.8. The remaining assertions in Corollary 3.9 are consequences of Theorem 3.5, where the Lipschitz condition in (3.12) is used with $\delta^{-1}$ instead of $C_1$. This establishes the proof of Corollary 3.9. $\square$


3.2. Remark. The surjectivity property of the mapping $y \mapsto y - \delta f(s, y, Z_M(s))$ follows from Theorem 1 in [9]. The authors use a homotopy argument to prove this theorem for $C_1 = 0$; upon replacing $f(t, y, Z_M(t))$ with $f(t, y, Z_M(t)) - C_1 y$ our version of the result follows. An elementary proof of Theorem 1 in [9], for a continuously differentiable function, can be found in Hairer and Wanner [12]: see Theorem 14.2 in Chapter IV. The author is grateful to Karel in't Hout (University of Antwerp) for pointing out these Runge-Kutta type results and references.

Proof of Proposition 3.7. The proof of the uniqueness part follows from Corollary 3.4.

Fix $\xi \in L^2(\Omega, \mathcal F^0_T, \mathbb P)$, and let the martingale $M_{n-1} \in M^2$ be given. Then by hypothesis there exists a pair $(Y_n, M_n) \in S^2 \times M^2$ which satisfies
\[
Y_n(t) = \xi + \int_t^T f\bigl(s, Y_n(s), Z_{M_{n-1}}(s)\bigr)\,ds + M_n(t) - M_n(T). \tag{3.24}
\]
Another use of this hypothesis yields the existence of a pair $(Y_{n+1}, M_{n+1}) \in S^2 \times M^2$ which again satisfies (3.24) with $n+1$ instead of $n$. Then it can be proved that the sequence $(Y_n, M_n)$ is a Cauchy sequence in the space $S^2 \times M^2$. Again the Burkholder-Davis-Gundy inequality with $p = \tfrac{1}{2}$ (see (3.5)) is required. $\square$

3.10. Proposition. Let the notation and hypotheses be as in Theorem 3.6. For $\delta > 0$ with $2\delta C_1 < 1$ let the processes $\widetilde Y_\delta$, $Y_\delta \in S^2$ and the martingale $M_\delta \in M^2$ be such that the equalities of (3.23) in Corollary 3.9 are satisfied. Then the family
\[
\left\{(Y_\delta, M_\delta) : 0 < \delta < \frac{1}{2C_1}\right\}
\]
converges in the space $S^2 \times M^2$ as $\delta$ decreases to $0$, provided that the terminal value $\xi = Y_\delta(T)$ is given.

Let $(Y, M)$ be the limit in the space $S^2 \times M^2$. In fact from the proof of Proposition 3.10 it follows that
\[
\left\| \begin{pmatrix} Y_\delta - Y \\ M_\delta - M \end{pmatrix} \right\|_{S^2 \times M^2} = O(\delta) \tag{3.25}
\]
as $\delta \downarrow 0$, provided that $\|Y_{\delta_2}(T) - Y_{\delta_1}(T)\|_{L^2(\Omega, \mathcal F^0_T, \mathbb P)} = O(|\delta_2 - \delta_1|)$.

Proof of Proposition 3.10. Let $C_1$ be the constant which occurs in inequality (3.18) in Theorem 3.6, and fix $0 < \delta_2 < \delta_1 < (2C_1)^{-1}$. Our estimates give quantitative bounds in case we restrict the parameters $\delta$, $\delta_1$ and $\delta_2$ to the interval $(0, (4C_1+4)^{-1})$. From the equalities in (3.23) we infer
\[
\widetilde Y_\delta(t) = Y_\delta(t) - \delta f_\delta(t) = \widetilde Y_\delta(T) + \int_t^T f_\delta(s)\,ds + M_\delta(t) - M_\delta(T), \tag{3.26}
\]
where $f_\delta(s)$ abbreviates $f\bigl(s, Y_\delta(s), Z_M(s)\bigr)$. First we prove that the family $\bigl\{(Y_\delta, M_\delta) : 0 < \delta < (4C_1+4)^{-1}\bigr\}$ is bounded in the space $S^2 \times M^2$. Then it can also be shown that this family converges in the space $S^2 \times M^2$ as $\delta \downarrow 0$; here the continuity of the functions $y \mapsto f(s, y, Z_M(s))$, $y \in \mathbb R^k$, is used. The fact that the convergence of the family $(Y_\delta, M_\delta)$, $0 < \delta \le (4C_1+4)^{-1}$, is of order $\delta$ as $\delta \downarrow 0$ can be shown as well. $\square$
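The order-$\delta$ statement (3.25) can be observed numerically in the simplest degenerate case, namely a trivial filtration, where (3.21) reduces to a backward ODE and (3.23) to its $\delta$-regularisation. The Python sketch below is our own toy check, not taken from the paper, with the hypothetical driver $f(t,y) = -y^3$ and terminal value $\xi = 2$; the error at $t = 0$ decays proportionally to $\delta$.
\begin{verbatim}
import numpy as np

# Deterministic toy check of the O(delta) rate in (3.25): with a trivial
# filtration and M = 0, (3.21) is the backward ODE Y'(t) = -f(t, Y(t)),
# Y(T) = xi, and (3.23) reads
#   Ytilde_delta(t) = Y_delta(t) - delta*f(t, Y_delta(t)),
#   Ytilde_delta'(t) = -f(t, Y_delta(t)),   Y_delta(T) = xi.
# Driver and data are hypothetical choices.

T, xi = 1.0, 2.0
n_steps = 20_000
h = T / n_steps

def f(t, y):
    return -y ** 3            # monotone in y with C1 = 0

def resolvent(v, delta):
    # solve y - delta*f(t, y) = y + delta*y**3 = v by bisection
    lo, hi = -(abs(v) + 1.0), abs(v) + 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mid + delta * mid ** 3 < v:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def y_at_zero(delta):
    """Euler integration of Ytilde_delta backward from t = T to t = 0."""
    ytilde = xi - delta * f(T, xi)           # since Y_delta(T) = xi
    for k in range(n_steps, 0, -1):
        t = k * h
        y = resolvent(ytilde, delta) if delta > 0 else ytilde
        ytilde = ytilde + h * f(t, y)        # step from t to t - h
    return resolvent(ytilde, delta) if delta > 0 else ytilde

y_limit = y_at_zero(0.0)
for delta in (0.2, 0.1, 0.05, 0.025):
    print(f"delta = {delta:5.3f}: |Y_delta(0) - Y(0)| = "
          f"{abs(y_at_zero(delta) - y_limit):.5f}")
\end{verbatim}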


Proof of Theorem 3.6. The proof of the uniqueness part follows from Corollary 3.4. The existence is a consequence of Theorem 3.5, Corollary 3.9, Proposition 3.10 and Proposition 3.7. $\square$

The following result shows that in the monotonicity condition the constant $C_1$ may be chosen as we like, provided that in Theorem 3.11 the equation in assertion (1) is replaced by the one in assertion (2) and the solution is adapted accordingly.

3.11. Theorem. Let the pair $(Y,M)$ belong to $S^2([0,T],\mathbb R^k) \times M^2([0,T],\mathbb R^k)$. Fix $\lambda \in \mathbb R$ and put
\[
(Y_\lambda(t), M_\lambda(t)) = \left(e^{\lambda t}Y(t),\; Y(0) + \int_0^t e^{\lambda s}\,dM(s)\right).
\]
Then the pair $(Y_\lambda, M_\lambda)$ belongs to $S^2 \times M^2$. Moreover, the following assertions are equivalent:

(1) The pair $(Y,M) \in S^2 \times M^2$ satisfies $Y(0) = M(0)$ and
\[
Y(t) = Y(T) + \int_t^T f(s, Y(s), Z_M(s))\,ds + M(t) - M(T).
\]

(2) The pair $(Y_\lambda, M_\lambda)$ satisfies $Y_\lambda(0) = M_\lambda(0)$ and
\[
Y_\lambda(t) - Y_\lambda(T) = \int_t^T e^{\lambda s} f\bigl(s, e^{-\lambda s}Y_\lambda(s), e^{-\lambda s}Z_{M_\lambda}(s)\bigr)\,ds - \lambda \int_t^T Y_\lambda(s)\,ds + M_\lambda(t) - M_\lambda(T).
\]

3.3. Remark. Put $f_\lambda(s, y, z) = e^{\lambda s} f\bigl(s, e^{-\lambda s}y, e^{-\lambda s}z\bigr) - \lambda y$. If the function $y \mapsto f(s, y, z)$ has monotonicity constant $C_1$, then the function $y \mapsto f_\lambda(s, y, z)$ has monotonicity constant $C_1 - \lambda$. It follows that by reformulating the problem one may always assume that the monotonicity constant is $0$.
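For completeness, here is the short verification behind Remark 3.3 (our own computation, spelling out what the remark asserts): for $y, u \in \mathbb R^k$,
\begin{align*}
\langle y - u,\, f_\lambda(s,y,z) - f_\lambda(s,u,z)\rangle
&= e^{\lambda s}\bigl\langle y - u,\, f(s, e^{-\lambda s}y, e^{-\lambda s}z) - f(s, e^{-\lambda s}u, e^{-\lambda s}z)\bigr\rangle - \lambda\,|y-u|^2\\
&= e^{2\lambda s}\bigl\langle e^{-\lambda s}y - e^{-\lambda s}u,\, f(s, e^{-\lambda s}y, e^{-\lambda s}z) - f(s, e^{-\lambda s}u, e^{-\lambda s}z)\bigr\rangle - \lambda\,|y-u|^2\\
&\le e^{2\lambda s} C_1 \bigl|e^{-\lambda s}(y-u)\bigr|^2 - \lambda\,|y-u|^2 = (C_1 - \lambda)\,|y-u|^2,
\end{align*}
so $f_\lambda$ indeed has monotonicity constant $C_1 - \lambda$.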

Proof of Theorem 3.11. First notice the equality $e^{-\lambda s}Z_{M_\lambda}(s) = Z_M(s)$: see Remark 1.5. The equivalence of (1) and (2) follows by considering the equalities in (1) and (2) in differential form. $\square$

4. Backward stochastic differential equations and Markov processes

In this section the coefficient $f$ of our BSDE is a mapping from $[0,T] \times E \times \mathbb R^k \times (M^2)^*$ to $\mathbb R^k$. Theorem 4.1 below is the analogue of Theorem 3.6 with a Markov family of measure spaces $\{\mathbb P_{\tau,x} : (\tau,x) \in [0,T] \times E\}$ instead of a single measure space. Put
\[
f_n(s) = f\bigl(s, X(s), Y_n(s), Z_{M_n}(s)\bigr),
\]
and suppose that the processes $Y_n(s)$ and $Z_{M_n}(s)$ only depend on the space-time variable $(s, X(s))$. Put $Y(\tau,t)f(x) = \mathbb E_{\tau,x}[f(X(t))]$, $f \in C_0(E)$, and suppose that for every $f \in C_0(E)$ the function $(\tau,x,t) \mapsto Y(\tau,t)f(x)$ is continuous on the set $\{(\tau,x,t) \in [0,T] \times E \times [0,T] : 0 \le \tau \le t \le T\}$. Then it can be proved that the Markov process
\[
\bigl\{(\Omega, \mathcal F^\tau_T, \mathbb P_{\tau,x}),\; (X(t) : T \ge t \ge 0),\; (E, \mathcal E)\bigr\} \tag{4.1}
\]
has left limits and is right-continuous: see e.g. Theorem 2.22 in [11], and Proposition 1.6 in Section 1. Suppose that the $\mathbb P_{\tau,x}$-martingale $t \mapsto N(t) - N(\tau)$, $t \in [\tau, T]$,


belongs to the space $M^2([\tau,T], \mathbb P_{\tau,x}, \mathbb R^k)$ (see Definition 1.10). It follows that the quantity $Z_M(s)(N)$ is measurable with respect to $\sigma\bigl(\mathcal F^s_{s+}, N(s+)\bigr)$: see the equalities (4.4), (4.5) and (4.6) below. The following iteration formulas play an important role:

\begin{align*}
Y_{n+1}(t) &= \mathbb E_{t,X(t)}[\xi] + \int_t^T \mathbb E_{t,X(t)}[f_n(s)]\,ds, \quad\text{and}\\
M_{n+1}(t) &= \mathbb E_{t,X(t)}[\xi] + \int_0^t f_n(s)\,ds + \int_t^T \mathbb E_{t,X(t)}[f_n(s)]\,ds.
\end{align*}

Then the processes $Y_{n+1}$ and $M_{n+1}$ are related as follows:
\[
Y_{n+1}(T) + \int_t^T f_n(s)\,ds + M_{n+1}(t) - M_{n+1}(T) = Y_{n+1}(t).
\]
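A finite toy version of the iteration above (our own illustration, not taken from the paper): take for $X$ a Markov chain on finitely many states with one-step transition matrix $P$, so that $\mathbb E_{t,X(t)}[\,\cdot\,]$ is computed by repeated multiplication with $P$. For simplicity the hypothetical driver below does not depend on $Z_{M_n}$ (that is, $C_2 = 0$), and the time integral is replaced by a Riemann sum.
\begin{verbatim}
import numpy as np

# Finite toy model of the iteration above: X is a 3-state Markov chain, so
# E_{t,X(t)}[h(X(s))] is computed by repeated multiplication with the
# transition matrix P.  The chain, the driver f and the terminal function
# are hypothetical choices; the driver does not depend on Z_M (C2 = 0).

d, N, T = 3, 40, 1.0
dt = T / N
P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])          # one-step transition matrix of X
states = np.arange(d, dtype=float)
xi = np.cos(states)                      # terminal value xi = g(X(T))

def f(t, x, y):
    # Lipschitz in y; plays the role of f(s, X(s), Y(s), .)
    return -y + np.sin(x + t)

# Y_n(t_i, x) stored as an array of shape (N+1, d); start from Y_0 = 0
y = np.zeros((N + 1, d))
for n in range(10):
    g = np.empty_like(y)
    g[N] = xi
    for i in range(N - 1, -1, -1):
        # Y_{n+1}(t_i) = E_{t_i,X(t_i)}[xi] + sum_{j>=i} dt*E_{t_i,X(t_i)}[f_n(t_j)]
        g[i] = dt * f(i * dt, states, y[i]) + P @ g[i + 1]
    print(f"iteration {n + 1}: sup-change = {np.max(np.abs(g - y)):.3e}")
    y = g
\end{verbatim}
In this discrete picture $Y_{n+1}(t)$ is, as in the text, a deterministic function of the space-time variable $(t, X(t))$, here a vector indexed by the state.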

Moreover, by the Markov property, the process
\begin{align*}
t \mapsto\ & M_{n+1}(t) - M_{n+1}(\tau)\\
&= \mathbb E_{\tau,X(\tau)}\bigl[\xi \,\big|\, \mathcal F^\tau_t\bigr] - \mathbb E_{\tau,X(\tau)}[\xi] + \mathbb E_{\tau,X(\tau)}\left[\int_\tau^T f_n(s)\,ds \,\Big|\, \mathcal F^\tau_t\right] - \mathbb E_{\tau,X(\tau)}\left[\int_\tau^T f_n(s)\,ds\right]\\
&= \mathbb E_{\tau,X(\tau)}\left[\xi + \int_\tau^T f_n(s)\,ds \,\Big|\, \mathcal F^\tau_t\right] - \mathbb E_{\tau,X(\tau)}\left[\xi + \int_\tau^T f_n(s)\,ds\right]
\end{align*}
is a $\mathbb P_{\tau,x}$-martingale on the interval $[\tau, T]$ for every $(\tau, x) \in [0,T] \times E$.

In Theorem 4.1 below we replace the Lipschitz condition (3.12) in Theorem 3.5 for the function $Y(s) \mapsto f(s, Y(s), Z_M(s))$ with the (weaker) monotonicity condition (4.7) for the function $Y(s) \mapsto f(s, X(s), Y(s), Z_M(s))$. Sometimes we write $y$ for the variable $Y(s)$ and $z$ for $Z_M(s)$. Notice that the functional $Z_{M_n}(t)$ only depends on $\mathcal F^t_{t+} := \bigcap_{h:\,T \ge t+h > t} \sigma(X(t+h))$ and that this $\sigma$-field belongs to the $\mathbb P_{t,x}$-completion of $\sigma(X(t))$ for every $x \in E$. This is the case because, by assumption, the process $s \mapsto X(s)$ is right-continuous at $s = t$: see Proposition 1.7. In order to show this we have to prove equalities of the following type:

\[
\mathbb E_{s,x}\bigl[Y \,\big|\, \mathcal F^s_{t+}\bigr] = \mathbb E_{t,X(t)}[Y], \quad \mathbb P_{s,x}\text{-almost surely}, \tag{4.2}
\]
for all bounded stochastic variables $Y$ which are $\mathcal F^t_T$-measurable. By the monotone class theorem and density arguments the proof of (4.2) reduces to showing these equalities for $Y = \prod_{j=1}^n f_j(t_j, X(t_j))$, where $t = t_1 < t_2 < \cdots < t_n \le T$, and the functions $x \mapsto f_j(t_j, x)$, $1 \le j \le n$, belong to the space $C_0(E)$. Next suppose that the bounded stochastic variable $Y$ is measurable with respect to $\mathcal F^t_{t+}$. From (4.2) with $s = t$ it follows that $Y = \mathbb E_{t,X(t)}[Y]$, $\mathbb P_{t,x}$-almost surely. Hence, essentially speaking, such a variable $Y$ only depends on the space-time variable $(t, X(t))$. Since $X(t) = x$ $\mathbb P_{t,x}$-almost surely, it follows that the variable $\mathbb E_{t,x}\bigl[Y \,\big|\, \mathcal F^t_{t+}\bigr]$ is $\mathbb P_{t,x}$-almost surely equal to the deterministic constant $\mathbb E_{t,x}[Y]$. A similar argument shows the following result. Let $0 \le s < t \le T$, and let $Y$ be a bounded $\mathcal F^s_T$-measurable stochastic variable. Then the following equality holds $\mathbb P_{s,x}$-almost surely:
\[
\mathbb E_{s,x}\bigl[Y \,\big|\, \mathcal F^s_{t+}\bigr] = \mathbb E_{s,x}\bigl[Y \,\big|\, \mathcal F^s_t\bigr]. \tag{4.3}
\]
In particular it follows that an $\mathcal F^s_{t+}$-measurable bounded stochastic variable coincides with the $\mathcal F^s_t$-measurable variable $\mathbb E_{s,x}\bigl[Y \,\big|\, \mathcal F^s_t\bigr]$ $\mathbb P_{s,x}$-almost surely for all $x \in E$. Hence


(4.3) implies that the $\sigma$-field $\mathcal F^s_{t+}$ is contained in the $\mathbb P_{s,x}$-completion of the $\sigma$-field $\mathcal F^s_t$.

In addition, notice that the functional $Z_M(s)$ is defined by
\[
Z_M(s)(N) = \lim_{t \downarrow s} \frac{\langle M, N\rangle(t) - \langle M, N\rangle(s)}{t - s}, \tag{4.4}
\]
where
\[
\langle M, N\rangle(t) - \langle M, N\rangle(s) = \lim_{n \to \infty} \sum_{j=0}^{2^n - 1} \bigl(M(t_{j+1,n}) - M(t_{j,n})\bigr)\bigl(N(t_{j+1,n}) - N(t_{j,n})\bigr). \tag{4.5}
\]
For this the reader is referred to the remarks 1.5, 1.6, 1.8, and to formula (3.3). The symbol $t_{j,n}$ represents the real number $t_{j,n} = s + j2^{-n}(t-s)$. The limit in (4.5) exists $\mathbb P_{\tau,x}$-almost surely for all $\tau \in [0,s]$. As a consequence the process $Z_M(s)$ is $\mathcal F^\tau_{s+}$-measurable for all $\tau \in [0,s]$. It follows that the process $N \mapsto Z_M(s)(N)$ is $\mathbb P_{\tau,x}$-almost surely equal to the functional $N \mapsto \mathbb E_{\tau,x}\bigl[Z_M(s)(N) \,\big|\, \sigma(\mathcal F^\tau_s, N(s))\bigr]$, provided that $Z_M(s)(N)$ is $\sigma\bigl(\mathcal F^\tau_{s+}, N(s+)\bigr)$-measurable. If the martingale $M$ is of the form $M(s) = u(s, X(s)) + \int_0^s f(\rho)\,d\rho$, then the functional $Z_M(s)(N)$ is automatically $\sigma\bigl(\mathcal F^s_{s+}, N(s+)\bigr)$-measurable. It follows that, for every $\tau \in [0,s]$, the following equality holds $\mathbb P_{\tau,x}$-almost surely:
\[
\mathbb E_{\tau,x}\bigl[Z_M(s)(N) \,\big|\, \sigma(\mathcal F^\tau_{s+}, N(s+))\bigr] = \mathbb E_{\tau,x}\bigl[Z_M(s)(N) \,\big|\, \sigma(\mathcal F^\tau_s, N(s+))\bigr]. \tag{4.6}
\]
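The dyadic sums in (4.5) and the difference quotient in (4.4) are easy to check numerically in the simplest possible case (a sketch of ours, not from the paper): for $M = aW$ and $N = bW$ with a single Brownian motion $W$ one has $\langle M,N\rangle(t) = ab\,t$, so the two quantities should be close to $ab\,(t-s)$ and $ab$ respectively.
\begin{verbatim}
import numpy as np

# Numerical check of (4.4)-(4.5) in the simplest case: M = a*W, N = b*W
# for one Brownian motion W, so that <M,N>(t) - <M,N>(s) = a*b*(t-s).
# Entirely illustrative; a, b, s, t and the grid are hypothetical choices.

rng = np.random.default_rng(2)
a, b = 1.5, -0.7
s, t = 0.3, 0.5
n_max = 14                                   # finest dyadic level 2**n_max
grid = s + (t - s) * np.arange(2 ** n_max + 1) / 2 ** n_max
W = np.concatenate(([0.0], np.cumsum(rng.normal(scale=np.sqrt(np.diff(grid))))))
M_path, N_path = a * W, b * W

for n in (4, 8, 12, n_max):
    step = 2 ** (n_max - n)                  # subsample at the points t_{j,n}
    dM = np.diff(M_path[::step])
    dN = np.diff(N_path[::step])
    bracket = np.sum(dM * dN)                # dyadic approximation of <M,N>(t) - <M,N>(s)
    print(f"n = {n:2d}: dyadic sum = {bracket:+.4f}, "
          f"difference quotient = {bracket / (t - s):+.4f}, a*b = {a * b:+.4f}")
\end{verbatim}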

Moreover, in the next Theorem 4.1 the filtered probability measure space $\bigl(\Omega, \mathcal F, (\mathcal F^0_t)_{t\in[0,T]}, \mathbb P\bigr)$ is replaced with a Markov family of measure spaces
\[
\bigl(\Omega, \mathcal F^\tau_T, (\mathcal F^\tau_t)_{\tau \le t \le T}, \mathbb P_{\tau,x}\bigr), \quad (\tau, x) \in [0,T] \times E.
\]
Its proof follows the lines of the proof of Theorem 3.6: it will not be repeated here. At the relevant places the measure $\mathbb P_{\tau,x}$ replaces $\mathbb P$ and the coefficient $f(s, Y(s), Z_M(s))$ is replaced with $f(s, X(s), Y(s), Z_M(s))$.

4.1. Theorem. Let $f : [0,T] \times E \times \mathbb R^k \times (M^2)^* \to \mathbb R^k$ be monotone in the variable $y$ and Lipschitz in $z$. More precisely, suppose that there exist finite constants $C_1$ and $C_2$ such that for any two pairs of processes $(Y,M)$ and $(U,N) \in S^2([0,T],\mathbb R^k) \times M^2([0,T],\mathbb R^k)$ the following inequalities hold for all $0 \le s \le T$:
\begin{align}
\langle Y(s) - U(s), f(s, X(s), Y(s), Z_M(s)) - f(s, X(s), U(s), Z_M(s))\rangle &\le C_1\,|Y(s) - U(s)|^2, \tag{4.7}\\
|f(s, X(s), Y(s), Z_M(s)) - f(s, X(s), Y(s), Z_N(s))| &\le C_2\left(\frac{d}{ds}\langle M-N, M-N\rangle(s)\right)^{1/2}, \tag{4.8}
\end{align}
and
\[
|f(s, X(s), Y(s), 0)| \le \overline f(s, X(s)) + K\,|Y(s)|. \tag{4.9}
\]


Fix $(\tau, x) \in [0,T] \times E$ and let $Y(T) = \xi \in L^2(\Omega, \mathcal F^\tau_T, \mathbb P_{\tau,x}; \mathbb R^k)$ be given. In addition, suppose $\mathbb E_{\tau,x}\left[\int_\tau^T |\overline f(s, X(s))|^2\,ds\right] < \infty$. Then there exists a unique pair
\[
(Y, M) \in S^2([\tau,T], \mathbb P_{\tau,x}, \mathbb R^k) \times M^2([\tau,T], \mathbb P_{\tau,x}, \mathbb R^k)
\]
with $Y(\tau) = M(\tau)$ such that
\[
Y(t) = \xi + \int_t^T f(s, X(s), Y(s), Z_M(s))\,ds + M(t) - M(T). \tag{4.10}
\]
Next let $\xi = \mathbb E_{T,X(T)}[\xi] \in \bigcap_{(\tau,x) \in [0,T] \times E} L^2(\Omega, \mathcal F^\tau_T, \mathbb P_{\tau,x})$ be given. Suppose that the functions $(\tau,x) \mapsto \mathbb E_{\tau,x}\bigl[|\xi|^2\bigr]$ and $(\tau,x) \mapsto \mathbb E_{\tau,x}\left[\int_\tau^T |\overline f(s,X(s))|^2\,ds\right]$ are locally bounded. Then there exists a unique pair
\[
(Y, M) \in S^2_{\mathrm{loc,unif}}([\tau,T], \mathbb R^k) \times M^2_{\mathrm{loc,unif}}([\tau,T], \mathbb R^k)
\]
with $Y(0) = M(0)$ such that equation (4.10) is satisfied.

Again let $\xi = \mathbb E_{T,X(T)}[\xi] \in \bigcap_{(\tau,x) \in [0,T] \times E} L^2(\Omega, \mathcal F^\tau_T, \mathbb P_{\tau,x})$ be given. Suppose that the functions $(\tau,x) \mapsto \mathbb E_{\tau,x}\bigl[|\xi|^2\bigr]$ and $(\tau,x) \mapsto \mathbb E_{\tau,x}\left[\int_\tau^T |\overline f(s,X(s))|^2\,ds\right]$ are uniformly bounded. Then there exists a unique pair
\[
(Y, M) \in S^2_{\mathrm{unif}}([\tau,T], \mathbb R^k) \times M^2_{\mathrm{unif}}([\tau,T], \mathbb R^k)
\]
with $Y(0) = M(0)$ such that equation (4.10) is satisfied.

The notations
\[
S^2([\tau,T], \mathbb P_{\tau,x}, \mathbb R^k) = S^2(\Omega, \mathcal F^\tau_T, \mathbb P_{\tau,x}; \mathbb R^k) \quad\text{and}\quad M^2([\tau,T], \mathbb P_{\tau,x}, \mathbb R^k) = M^2(\Omega, \mathcal F^\tau_T, \mathbb P_{\tau,x}; \mathbb R^k)
\]
are explained in the definitions 1.9 and 1.10 respectively. The same is true for the notions
\begin{align*}
S^2_{\mathrm{loc,unif}}([0,T], \mathbb R^k) &= S^2_{\mathrm{loc,unif}}(\Omega, \mathcal F^\tau_T, \mathbb P_{\tau,x}; \mathbb R^k), &
M^2_{\mathrm{loc,unif}}([0,T], \mathbb R^k) &= M^2_{\mathrm{loc,unif}}(\Omega, \mathcal F^\tau_T, \mathbb P_{\tau,x}; \mathbb R^k),\\
S^2_{\mathrm{unif}}([0,T], \mathbb R^k) &= S^2_{\mathrm{unif}}(\Omega, \mathcal F^\tau_T, \mathbb P_{\tau,x}; \mathbb R^k), \quad\text{and} &
M^2_{\mathrm{unif}}([0,T], \mathbb R^k) &= M^2_{\mathrm{unif}}(\Omega, \mathcal F^\tau_T, \mathbb P_{\tau,x}; \mathbb R^k).
\end{align*}

The probability measure $\mathbb P_{\tau,x}$ is defined on the $\sigma$-field $\mathcal F^\tau_T$. Since the existence properties of the solutions to backward stochastic equations are based on explicit inequalities, the proofs carry over to Markov families of measures. Ultimately these inequalities imply that boundedness and continuity properties of the function $(\tau,x) \mapsto \mathbb E_{\tau,x}[Y(t)]$, $0 \le \tau \le t \le T$, depend on the continuity of the function $x \mapsto \mathbb E_{T,x}[\xi]$, where $\xi$ is a terminal value function which is supposed to be $\sigma(X(T))$-measurable. In addition, in order to be sure that the function $(\tau,x) \mapsto \mathbb E_{\tau,x}[Y(t)]$ is continuous, functions of the form $(\tau,x) \mapsto \mathbb E_{\tau,x}\bigl[f(t, u(t,X(t)), Z_M(t))\bigr]$ have to be continuous, whenever the following mappings
\[
(\tau,x) \mapsto \mathbb E_{\tau,x}\left[\int_\tau^T |u(s,X(s))|^2\,ds\right] \quad\text{and}\quad (\tau,x) \mapsto \mathbb E_{\tau,x}\bigl[\langle M, M\rangle(T) - \langle M, M\rangle(\tau)\bigr]
\]
represent finite and continuous functions.


Acknowledgement. Part of this work was presented at a Colloquium at the University of Gent, October 14, 2005, on the occasion of the 65th birthday of Richard Delanghe, and appeared in a preliminary form in [24]. Some results were also presented at the University of Clausthal, on the occasion of Michael Demuth's 60th birthday, September 10-11, 2006, and at the "Marrakesh World Conference on Differential Equations and Applications", Marrakesh, Morocco, June 15-20, 2006. This work was also part of a Conference on "The Feynman Integral and Related Topics in Mathematics and Physics: In Honor of the 65th Birthdays of Gerry Johnson and David Skoug", Lincoln, Nebraska, May 12-14, 2006. Finally, another preliminary version was presented during a Conference on Evolution Equations, in memory of G. Lumer, at the Universities of Mons and Valenciennes, August 28-September 1, 2006.

References

1. D. Bakry, L'hypercontractivité et son utilisation en théorie des semigroupes, Lecture Notes in Math., vol. 1581, pp. 1-114, Springer-Verlag, Berlin, 1994, P. Bernard (editor).
2. ———, Functional inequalities for Markov semigroups, Probability measures on groups: recent directions and trends, Tata Inst. Fund. Res., Mumbai, 2006, pp. 91-147.
3. D. Bakry and M. Ledoux, A logarithmic Sobolev form of the Li-Yau parabolic inequality, Revista Mat. Iberoamericana (2005), to appear.
4. V. Bally, E. Pardoux, and L. Stoica, Backward stochastic differential equations associated to a symmetric Markov process, Potential Analysis 22 (2005), no. 1, 17-60.
5. M. T. Barlow, R. F. Bass, and T. Kumagai, Note on the equivalence of parabolic Harnack inequalities and heat kernel estimates, http://www.math.ubc.ca/~barlow/preprints/, 2005.
6. Jean-Michel Bismut, Mécanique aléatoire, Lecture Notes in Mathematics, vol. 866, Springer-Verlag, Berlin, 1981, With an English summary.
7. B. Boufoussi, J. A. Van Casteren, and N. Mrhardy, Generalized backward doubly stochastic differential equations and SPDEs with nonlinear Neumann boundary conditions, to appear in Bernoulli, March 2006.
8. K. Burrage and J. C. Butcher, Stability criteria for implicit Runge-Kutta methods, SIAM J. Numer. Anal. 16 (1979), no. 1, 46-57.
9. M. Crouzeix, W. H. Hundsdorfer, and M. N. Spijker, On the existence of solutions to the algebraic equations in implicit Runge-Kutta methods, BIT 23 (1983), no. 1, 84-91.
10. Michel Crouzeix, Sur la B-stabilité des méthodes de Runge-Kutta, Numer. Math. 32 (1979), no. 1, 75-82.
11. A. Gulisashvili and J. A. van Casteren, Non-autonomous Kato classes and Feynman-Kac propagators, World Scientific, Singapore, 2006.
12. E. Hairer and G. Wanner, Solving ordinary differential equations. II, Springer Series in Computational Mathematics, vol. 14, Springer-Verlag, Berlin, 1991, Stiff and differential-algebraic problems.
13. N. Ikeda and S. Watanabe, Stochastic differential equations and diffusion processes, 2nd ed., North-Holland Mathematical Library, vol. 24, North-Holland, Amsterdam, 1998.
14. N. El Karoui, E. Pardoux, and M. C. Quenez, Reflected backward SDEs and American options, Numerical methods in finance (L. C. G. Rogers and D. Talay, eds.), Publ. Newton Inst., Cambridge Univ. Press, Cambridge, 1997, pp. 215-231.
15. N. El Karoui and M. C. Quenez, Imperfect markets and backward stochastic differential equations, Numerical methods in finance, Publ. Newton Inst., Cambridge Univ. Press, Cambridge, 1997, pp. 181-214.
16. D. Nualart, The Malliavin calculus and related topics, Probability and its Applications (New York), Springer-Verlag, New York, 1995.
17. ———, Analysis on Wiener space and anticipating stochastic calculus, Lectures on probability theory and statistics (Saint-Flour, 1995), Lecture Notes in Math., vol. 1690, Springer, Berlin, 1998, pp. 123-227.
18. E. Pardoux, Backward stochastic differential equations and viscosity solutions of systems of semilinear parabolic and elliptic PDEs of second order, Stochastic analysis and related topics, VI (Geilo, 1996), Progr. Probab., vol. 42, Birkhäuser Boston, Boston, MA, 1998, pp. 79-127. MR 99m:35279
19. E. Pardoux and S. G. Peng, Adapted solution of a backward stochastic differential equation, Systems Control Lett. 14 (1990), no. 1, 55-61.
20. E. Pardoux and S. Zhang, Generalized BSDEs and nonlinear Neumann boundary value problems, Probab. Theory Related Fields 110 (1998), no. 4, 535-558.
21. Jan Prüss, Maximal regularity for evolution equations in Lp-spaces, Conf. Semin. Mat. Univ. Bari (2002), no. 285, 1-39 (2003).
22. D. Revuz and M. Yor, Continuous martingales and Brownian motion, third ed., Springer-Verlag, Berlin, 1999.
23. Jan A. Van Casteren, The Hamilton-Jacobi-Bellman equation and the stochastic Noether theorem, Proceedings Conference Evolution Equations 2000: Applications to Physics, Industry, Life Sciences and Economics-EVEQ2000 (M. Iannelli and G. Lumer, eds.), Birkhäuser, 2003, pp. 381-408.
24. ———, Backward stochastic differential equations and Markov processes, Liber Amicorum, Richard Delanghe: een veelzijdig wiskundige (Gent) (F. Brackx and H. De Schepper, eds.), Academia Press, University of Gent, 2005, pp. 199-239.
25. ———, Viscosity solutions, backward stochastic differential equations and Markov processes, Preprint University of Antwerp, 2007.
26. J.-C. Zambrini, A special time-dependent quantum invariant and a general theorem on quantum symmetries, Proceedings of the second International Workshop Stochastic Analysis and Mathematical Physics: ANESTOC'96 (Singapore) (R. Rebolledo, ed.), World Scientific, 1998, Workshop: Viña del Mar, Chile, 16-20 December 1996, pp. 197-210.

Department of Mathematics and Computer Science, University of Antwerp, Middelheimlaan 1, 2020 Antwerp, Belgium

E-mail address: [email protected]