Continuous-Time Principal-Agent Problems
with Hidden Action: The Weak Formulation
Jaksa Cvitanic
Xuhu Wan
Jianfeng Zhang
July 26, 2005
Abstract
We consider the problem of optimal contracts in continuous time, when the agent's
actions are unobservable by the principal. We apply the stochastic maximum principle
to give necessary conditions for optimal contracts, both for the case of the utility from
the payoff being separable and not separable from the cost of the agent's effort. The
necessary conditions are shown also to be sufficient for the case of quadratic cost
and separable utility, for general utility functions. The solution for the latter case
is almost explicit, and the optimal contract is a function of the final outcome only,
a fact that was previously known only for exponential and linear utility functions. For
general non-separable utility functions, sufficient conditions are hard to establish, but
we suggest a way to check sufficiency using non-convex optimization. Unlike previous
work on the subject, we use the weak formulation both for the agent's problem and
the principal's problem, thus avoiding tricky measurability issues.
Key Words and Phrases: Principal-Agent problems, hidden action, optimal contracts
and incentives, stochastic maximum principle, Forward-Backward SDEs.
AMS (2000) Subject Classifications: 91B28, 93E20; JEL Classification: C61, C73, D82,
J33, M52
Research supported in part by NSF grants DMS 00-99549 and DMS 04-03575.
Caltech, M/C 228-77, 1200 E. California Blvd., Pasadena, CA 91125. Ph: (626) 395-1784. E-mail: [email protected]
Department of Information and Systems Management, HKUST Business School, Clear Water Bay, Kowloon, Hong Kong. Ph: +852 2358-7731. Fax: +852 2359-1908. E-mail: [email protected]
Department of Mathematics, USC, 3620 S Vermont Ave, MC 2532, Los Angeles, CA 90089-1113. Ph: (213) 740-9805. E-mail: [email protected]
1 Introduction
This paper is a continuation of our work Cvitanic, Wan and Zhang (2004) on principal-
agent problems in continuous time, in which the principal hires the agent to control a given
stochastic process. The problem of optimal compensation of portfolio managers and the
problem of optimal compensation of company executives are important applications of this
theory.
In the previous paper we studied the case in which the actions of the agent are observed
by the principal. Here, we consider the case of hidden actions. More precisely, the
agent's control of the drift of the process is unobserved by the principal. For example, the
controlled part of the drift can be due to the agent's superior abilities or effort, unknown to
the principal.
The seminal paper in the continuous-time framework is Holmstrom and Milgrom (1987), which showed that if both the principal and the agent have exponential utilities, then the
optimal contract is linear. Schattler and Sung (1993) generalized those results using a
dynamic programming and martingales approach of Stochastic Control Theory, and Sung
(1995) showed that the linearity of the optimal contract still holds even if the agent can
control the volatility, too. A nice survey of the literature is provided by Sung (2001). More
complex models are considered in Detemple, Govindaraj and Loewenstein (2001), Hugonnier
and Kaniel (2001), Ou-Yang (2003), Sannikov (2004), and DeMarzo and Sannikov (2004). The
paper closest to ours is Williams (2004). That paper uses the stochastic maximum principle
to characterize the optimal contract in principal-agent problems with hidden information, in the case of the penalty on the agent's effort being separate from (outside of) his utility, and
without volatility control. Williams (2004) focuses on the case of a continuously paid reward
to the agent, while we study the case when the reward is paid once, at the end of the contract.
Moreover, we prove our results from scratch, thus obtaining them under weaker conditions.
(Williams (2004) also deals with the so-called hidden states case, which we do not discuss
here.) In addition, unlike the existing literature, we study the principals problem in the
same formulation as the agents problem, so-called weak formulation, thus avoiding the
problem of having to check whether the principals strong solution is adapted to the
appropriate -algebra. We also provide a detailed discussion on how to check whether the
controls satisfying necessary conditions are also sufficient.
In particular, under the assumption of a quadratic cost function on the agent's effort and
separable utility, we find an almost explicit solution in a framework with general utility
functions. To the best of our knowledge, this is the first time that the solution is explicitly
described in a continuous-time Principal-Agent problem with hidden action, other than for
exponential and linear utilities. Moreover, it turns out that the solution depends only on
the final outcome and not on the history of the controlled process, a fact which was known
before for exponential and linear utilities.
The general case of non-separable utility functions is much harder. If the necessary
conditions determine a unique control process, then proving the existence of an optimal
control would imply that the necessary conditions are also sufficient. Proving the existence of an optimal control is hard because, in general, the problem is not concave. It is related to the
existence of a solution to Forward-Backward Stochastic Differential Equations (FBSDEs),
possibly fully coupled. However, it is not known under which general conditions these
equations have a solution. The stochastic maximum principle is covered in the book Yong
and Zhou (1999), while FBSDEs are studied in the monograph Ma and Yong (1999).
The paper is organized as follows: In Section 2 we set up the model and its weak
formulation. In Section 3, we find necessary conditions for the agent's problem and the
principal's problem. In Section 4 we discuss how to establish sufficiency. We present some
important examples in Section 5. We conclude in Section 6, mentioning possible further
research topics.
2 The model
2.1 Optimization Problems
Let $\{W_t\}_{t\ge 0}$ be a standard Brownian motion on a probability space $(\Omega, \mathcal{F}, P)$ and denote
by $\mathbb{F} := \{\mathcal{F}_t\}_{0\le t\le T}$ its augmented filtration on the interval $[0, T]$. The controlled state process
is denoted $X = X^{u,v}$ and its dynamics are given by
$$dX_t = u_t v_t\,dt + v_t\,dW_t, \qquad X_0 = x. \tag{2.1}$$
Here for simplicity we assume all the processes are one-dimensional. We note that in the
literature the process $X$ usually takes the form
$$dX_t = b(t, X_t, \tilde u_t, \tilde v_t)\,dt + \sigma(t, X_t, \tilde v_t)\,dW_t. \tag{2.2}$$
When $\sigma$ is nondegenerate, one can always set
$$v_t = \sigma(t, X_t, \tilde v_t), \qquad u_t = b(t, X_t, \tilde u_t, \tilde v_t)\,\sigma^{-1}(t, X_t, \tilde v_t).$$
Then (2.2) becomes (2.1). Moreover, under some monotonicity conditions on $b$ and $\sigma$, one can
write $\tilde u, \tilde v$ as functions of $(X, u, v)$. In this sense, (2.1) and (2.2) are equivalent. In the
following we shall always consider (2.1).
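As a quick numerical illustration of (2.1), the following sketch simulates the state process by an Euler-Maruyama scheme under the simplifying assumption of constant controls; the parameter values ($x = 1$, $u = 0.5$, $v = 0.2$, $T = 1$) are ours, chosen purely for illustration.

```python
import numpy as np

# Euler-Maruyama simulation of dX_t = u_t v_t dt + v_t dW_t, X_0 = x  (cf. (2.1)),
# for constant controls u, v; all parameter values are illustrative assumptions.
rng = np.random.default_rng(0)
x, u, v, T = 1.0, 0.5, 0.2, 1.0
n_steps, n_paths = 100, 200_000
dt = T / n_steps

X = np.full(n_paths, x)
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(n_paths)
    X += u * v * dt + v * dW   # drift u*v, volatility v

# For constant controls, E[X_T] = x + u*v*T = 1.1 exactly,
# so the sample mean should match it up to Monte Carlo error.
print(X.mean())
```

For constant coefficients the scheme is exact in distribution at each grid point, so the only error in the sample mean of $X_T$ is Monte Carlo error.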
The full information case, in which the principal observes $X, u, v$, and thus also $W$,
was studied in Cvitanic, Wan and Zhang (2004). In the so-called hidden action case,
the principal can only observe the controlled process $X_t$, but not the underlying Brownian
motion $W$ or the agent's control $u$ (so the agent's action $u_t$ is hidden from the principal). We
note that the principal observes the process $X$ continuously, which implies that the volatility
control $v$ can also be observed through the quadratic variation of $X$, under the assumption
$v > 0$. In this paper we investigate the principal-agent problem with hidden actions. At time
$T$, the principal gives the agent compensation in the form of a payoff $C_T = F(X)$, where
$F : C[0, T] \to \mathbb{R}$ is a (deterministic) mapping. For a given process $v$, the principal can
design the payoff $F$ in order to induce (or force) the agent to implement it. In this sense,
we may consider $v$ as a control chosen by the principal instead of by the agent, as is usual
in the literature. We say that the pair $(F, v)$ is a contract.
The agent's problem is that, given a contract $(F, v)$, the agent chooses the control $u$ (over
some admissible set which will be specified later) in order to maximize his utility
$$V_1(F, v) = \sup_u V_1(u; F, v) = \sup_u E\big[U_1\big(F(X^{u,v}), G^{u,v}_T\big)\big].$$
Here,
$$G^{u,v}_t = \int_0^t g(s, X_s, u_s, v_s)\,ds \tag{2.3}$$
is the accumulated cost of the agent, and with a slight abuse of notation we use $V_1$ both for
the objective function and its maximum. We say a contract $(F, v)$ is implementable if there
exists a unique $u^{F,v}$ such that
$$V_1(u^{F,v}; F, v) = V_1(F, v).$$
The principal maximizes her utility
$$V_2 = \max_{F,v} E\big[U_2\big(X^{u^{F,v},v}_T - F(X^{u^{F,v},v})\big)\big],$$
where the maximum is over all implementable contracts $(F, v)$ such that the following participation constraint or individual rationality (IR) constraint holds:
$$V_1(F, v) \ge \underline{R}. \tag{2.4}$$
Function $g$ is a cost or penalty function of the agent's effort. The constant $\underline{R}$ is the reservation
utility of the agent and represents the value of the agent's outside opportunities, the minimum
value he requires to accept the job. Functions $U_1$ and $U_2$ are utility functions of the agent
and the principal. The typical cases studied in the literature are the separable utility case
with $U_1(x, y) = U_1(x) - y$, and the non-separable case with $U_1(x, y) = U_1(x - y)$, where,
with a slight abuse of notation, we use the same notation $U_1$ also for the function of one
argument only. We could also have the same generality for $U_2$, but this makes less sense
from the economics point of view.
2.2 The weak formulation
A very important assumption we make, which is standard in the literature, is that $u$ depends
on $X$, rather than on $W$. That is, $u_t \in \mathcal{F}^X_t$. In other words, for any $t$, $u_t = u_t(X)$ where $u_t(\cdot)$
is a (deterministic) mapping from $C[0, t]$ to $\mathbb{R}$, and this can be interpreted to mean that the
agent chooses his action based on the performance of the process $X$ up to the current time. We
note again that $v_t$ is always determined by $X$ and thus $v_t = v_t(X)$ for some deterministic
functional $v_t(\cdot)$. We will consider only those $v$ such that the SDE $X_t = x + \int_0^t v_s(X)\,dW_s$ has
a strong solution. We also note that in Section 2.1 a contract would be more precisely defined as
$(F, v(\cdot))$ instead of $(F, v)$, and an action should be $u(\cdot)$ instead of $u$.
The above assumption enables us to use the weak formulation approach. We note that
in the literature it is standard to use the weak formulation for the agent's problem and the
strong formulation (the original formulation in Section 2.1) for the principal's problem. However,
when one applies the optimal action $u^{F,v}$, which is obtained from the agent's problem by using the weak formulation, back to the strong formulation of the principal's problem, there
is a subtle measurability issue which seems to be ignored in the literature. In this paper
that problem is non-existent, as we use the weak formulation for both the agent's problem and
the principal's problem.
We now reformulate the problem. Let $B$ be a standard Brownian motion on some
probability space with probability measure $Q$, and let $\mathbb{F}^B$ be the filtration generated by $B$. For
any $\mathbb{F}^B$-adapted process $v > 0$, let
$$X_t = x + \int_0^t v_s\,dB_s. \tag{2.5}$$
Then $v_t = v_t(X)$ and obviously it holds that
$$\mathcal{F}^X_t = \mathcal{F}^B_t, \qquad \forall t.$$
Now for any functional $u_t(\cdot)$, let
$$u_t = u_t(X); \qquad B^u_t = B_t - \int_0^t u_s\,ds; \qquad M^u_t = \exp\Big\{\int_0^t u_s\,dB_s - \frac12\int_0^t |u_s|^2\,ds\Big\}; \tag{2.6}$$
and $\frac{dQ^u}{dQ} = M^u_T$. Then we know by the Girsanov Theorem that, under certain conditions, $B^u$ is
a $Q^u$-Brownian motion and
$$dX_t = v_t\,dB_t = (u_t v_t)(X)\,dt + v_t(X)\,dB^u_t.$$
That is, $(X, B^u, Q^u)$ is a weak solution to (2.1) corresponding to the actions $(u, v)$.
For any $C_T \in \mathcal{F}^B_T$, there exists some functional $F$ such that $C_T = F(X)$. So a contract
$(F, v)$ is equivalent to a random variable $C_T \in \mathcal{F}^B_T$ and a process $v \in \mathbb{F}^B$. Also, an action $u$
is equivalent to a process $u \in \mathbb{F}^B$. For simplicity, in the following we abuse the notation by
writing $u_t = u_t(X)$ and $v_t = v_t(X)$ when there is no danger of confusion.
Now given a contract $C_T \in \mathcal{F}^B_T$ and $v \in \mathbb{F}^B$, the agent's problem is to find an optimal
control $u^{C_T,v} \in \mathbb{F}^B$ such that
$$V_1(u^{C_T,v}; C_T, v) = V_1(C_T, v) = \sup_u V_1(u; C_T, v),$$
where, recalling (2.3),
$$V_1(u; C_T, v) = E^{Q^u}\{U_1(C_T, G_T)\} = E^{Q}\{M^u_T U_1(C_T, G_T)\}. \tag{2.7}$$
For simplicity, from now on we denote $E = E^Q$ and $E^u = E^{Q^u}$. The principal's problem is
to find an optimal $(C_T, v)$ such that
$$V_2(C_T, v) = V_2 = \sup_{C_T,v} V_2(u^{C_T,v}; C_T, v),$$
where
$$V_2(u; C_T, v) = E^u\{U_2(X_T - C_T)\} = E\{M^u_T U_2(X_T - C_T)\}. \tag{2.8}$$
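A small simulation can make the change of measure in (2.6)-(2.7) concrete. The sketch below, under the simplifying assumption of a constant control $u$ (our toy choice, not from the paper), checks that the density $M^u_T$ has $Q$-expectation $1$ and that weighting by $M^u_T$ shifts the mean of $B_T$ to $uT$, i.e. $B$ acquires drift $u$ under $Q^u$.

```python
import numpy as np

# Girsanov weight M^u_T = exp(u B_T - u^2 T / 2) for a constant control u (cf. (2.6)).
# Under Q, B_T ~ N(0, T); under Q^u, B has drift u, so E^{Q^u}[B_T] = u T.
rng = np.random.default_rng(0)
u, T, n = 0.5, 1.0, 400_000
BT = np.sqrt(T) * rng.standard_normal(n)
M = np.exp(u * BT - 0.5 * u**2 * T)

print(M.mean())          # theory: E[M^u_T] = 1 (M^u is a Q-martingale)
print((M * BT).mean())   # theory: E^{Q^u}[B_T] = E[M^u_T B_T] = u*T = 0.5
```

The same weighting is exactly how the agent's objective (2.7) rewrites expectations under $Q^u$ as $Q$-expectations against $M^u_T$.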
2.3 Standing assumptions
First we need the following assumptions on the coefficients.
(A1.) Function $g : [0, T] \times \mathbb{R} \times \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ is continuously differentiable with respect to $x, u, v$; $g_x$ is uniformly bounded, and $g_u, g_v$ have uniform linear growth in $x, u, v$.
In addition, $g$ is jointly convex in $(x, u, v)$, $g_u > 0$, and $g_{uu} > 0$.
(A2.) (i) Functions $U_1 : \mathbb{R}^2 \to \mathbb{R}$ and $U_2 : \mathbb{R} \to \mathbb{R}$ are differentiable, with $\partial_1 U_1 > 0$, $\partial_2 U_1 < 0$; $U_1$ is jointly concave and $U_2$ is concave.
(ii) Sometimes we will also need $U_1 \le K$ for some constant $K$.
For any $p \ge 1$, denote
$$L^p_T(Q^u) = \big\{\xi \in \mathcal{F}^B_T : E^u\{|\xi|^p\} < \infty\big\}; \qquad L^p(Q^u) = \Big\{\eta \in \mathbb{F}^B : E^u\Big\{\int_0^T |\eta_t|^p\,dt\Big\} < \infty\Big\},$$
and define $L^p_T(Q)$, $L^p(Q)$ in a similar way.
We next define the admissible set for the agent's controls.
(A3.) Given a contract $(C_T, v)$, the admissible set $\mathcal{A}(C_T, v)$ of the agent's controls associated
with this contract is the set of all those $u \in \mathbb{F}^B$ such that
(i) the Girsanov Theorem holds true for $(B^u, Q^u)$;
(ii) $U_1(C_T, G_T),\ \partial_2 U_1(C_T, G_T) \in L^2_T(Q^u)$;
(iii) For any bounded $\Delta u \in \mathbb{F}^B$, there exists $\varepsilon_0 > 0$ such that for any $\varepsilon \in [0, \varepsilon_0)$,
$u^\varepsilon$ satisfies (i) and (ii) above and $|u^\varepsilon|^4, |g^\varepsilon|^4, |g^\varepsilon_u|^4, |M^\varepsilon_T|^4, |U_1|^2(C_T, G^\varepsilon_T), |\partial_2 U_1|^2(C_T, G^\varepsilon_T)$ are
uniformly integrable in $L^1(Q)$ or $L^1_T(Q)$, where
$$u^\varepsilon = u + \varepsilon\Delta u, \qquad G^\varepsilon_T = \int_0^T g^\varepsilon(t)\,dt, \qquad M^\varepsilon_t = M^{u^\varepsilon}_t, \qquad V^\varepsilon_1 = V_1(u^\varepsilon),$$
and
$$g^\varepsilon(t) = g(t, X_t, u^\varepsilon_t, v_t), \qquad g^\varepsilon_u(t) = g_u(t, X_t, u^\varepsilon_t, v_t).$$
When $\varepsilon = 0$ we omit the superscript $0$. We note that, for any $u \in \mathcal{A}(C_T, v)$ and $\Delta u, \varepsilon_0$
satisfying (A3)(iii), we have $u^\varepsilon \in \mathcal{A}(C_T, v)$ for any $\varepsilon \in [0, \varepsilon_0)$. We note also that, under
mild assumptions on $(C_T, v)$, all bounded $u$ belong to $\mathcal{A}(C_T, v)$.
The admissible set for the contracts $(C_T, v)$ is more involved. We postpone its definition to Section 3.2.
3 Necessary Conditions
Here we use the method of the so-called Stochastic Maximum Principle, as described in the
book Yong and Zhou (1999). First we establish a simple technical lemma.
Lemma 3.1 Assume $u \in \mathbb{F}^B$, the Girsanov Theorem holds true for $(B^u, Q^u)$, and $M^u_T \in L^2_T(Q)$.
Then for any $\xi \in L^2_T(Q^u)$, there exists a unique pair $(Y, Z) \in L^2(Q^u) \times L^2(Q^u)$ such that
$$Y_t = \xi - \int_t^T Z_s\,dB^u_s. \tag{3.1}$$
Obviously $Y_t = E^u_t\{\xi\}$, and uniqueness also follows immediately. But in general $\mathcal{F}^{B^u}_t \neq \mathcal{F}^B_t$, so we cannot apply the Martingale Representation Theorem directly to obtain $Z$. On
the other hand, since in general $u$ is unbounded, one cannot claim the well-posedness of the
following equivalent BSDE from the standard literature:
$$Y_t = \xi + \int_t^T u_s Z_s\,ds - \int_t^T Z_s\,dB_s.$$
We prove the existence of $Z$ below.
Proof. We first assume $\xi$ is bounded. Then $\xi M^u_T \in L^2_T(Q)$. Let $(\tilde Y, \tilde Z)$ be the unique
solution to the BSDE
$$\tilde Y_t = \xi M^u_T - \int_t^T \tilde Z_s\,dB_s.$$
Define
$$Y_t = \tilde Y_t\,[M^u_t]^{-1}, \qquad Z_t = [\tilde Z_t - u_t \tilde Y_t]\,[M^u_t]^{-1}.$$
One can check directly that
$$dY_t = Z_t\,dB^u_t, \qquad Y_T = \xi.$$
Moreover,
$$Y_t = E_t\{\xi M^u_T\}\,[M^u_t]^{-1} = E^u_t\{\xi\},$$
which implies that
$$E^u\Big\{\sup_{0\le t\le T} |Y_t|^2\Big\} \le C\,E^u\{|\xi|^2\} < \infty.$$
Then one can easily get $Z \in L^2(Q^u)$.
In general, assume the $\xi_n$ are bounded and $E^u\{|\xi_n - \xi|^2\} \to 0$. Let $(Y^n, Z^n)$ be the solution
to BSDE (3.1) with terminal condition $\xi_n$. Then
$$E^u\Big\{\sup_{0\le t\le T} |Y^n_t - Y^m_t|^2 + \int_0^T |Z^n_t - Z^m_t|^2\,dt\Big\} \le C\,E^u\{|\xi_n - \xi_m|^2\} \to 0.$$
Therefore, $(Y^n, Z^n)$ converges to some $(Y, Z)$ which satisfies (3.1).
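For completeness, the "direct check" invoked in the proof can be written out; this is a routine Ito computation (our added verification, not part of the original text), writing $(\tilde Y, \tilde Z)$ for the auxiliary pair solving the BSDE with terminal value $\xi M^u_T$. Since $dM^u_t = u_t M^u_t\,dB_t$,

```latex
\begin{aligned}
d\big[(M^u_t)^{-1}\big] &= (M^u_t)^{-1}\big(|u_t|^2\,dt - u_t\,dB_t\big),\\
dY_t = d\big(\tilde Y_t (M^u_t)^{-1}\big)
&= (M^u_t)^{-1}\tilde Z_t\,dB_t
  + \tilde Y_t (M^u_t)^{-1}\big(|u_t|^2\,dt - u_t\,dB_t\big)
  - (M^u_t)^{-1} u_t \tilde Z_t\,dt\\
&= Z_t\,dB_t - u_t Z_t\,dt \;=\; Z_t\,dB^u_t,
\end{aligned}
```

where the last $dt$ term in the middle line is the cross-variation $d\tilde Y_t\,d[(M^u_t)^{-1}]$, and $Y_T = \tilde Y_T (M^u_T)^{-1} = \xi$.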
3.1 The Agent's problem
We now fix a contract $(C_T, v)$, $u \in \mathcal{A}(C_T, v)$, and a bounded $\Delta u \in \mathbb{F}^B$. Denote, omitting
the arguments of $U_1, \partial_2 U_1$,
$$\nabla g(t) = g_u(t, X_t, u_t, v_t)\,\Delta u_t; \qquad \nabla G_t = \int_0^t \nabla g(s)\,ds;$$
$$\nabla M_t = M_t\Big[\int_0^t \Delta u_s\,dB_s - \int_0^t u_s \Delta u_s\,ds\Big] = M_t \int_0^t \Delta u_s\,dB^u_s;$$
$$\nabla V_1 = E\big\{\nabla M_T\,U_1 + M_T\,\partial_2 U_1\,\nabla G_T\big\}.$$
Moreover, for any bounded $\Delta u$ and $\varepsilon \in (0, \varepsilon_0)$ as in (A3)(iii), denote
$$\Delta g^\varepsilon(t) = \frac{g^\varepsilon(t) - g(t)}{\varepsilon}; \quad \Delta G^\varepsilon_T = \frac{G^\varepsilon_T - G_T}{\varepsilon}; \quad \Delta M^\varepsilon_T = \frac{M^\varepsilon_T - M_T}{\varepsilon}; \quad \Delta V^\varepsilon_1 = \frac{V^\varepsilon_1 - V_1}{\varepsilon}.$$
Introduce the so-called adjoint processes
$$Y^{A,1}_t = U_1(C_T, G_T) - \int_t^T Z^{A,1}_s\,dB^u_s; \qquad
Y^{A,2}_t = \partial_2 U_1(C_T, G_T) - \int_t^T Z^{A,2}_s\,dB^u_s. \tag{3.2}$$
Theorem 3.1 Under our standing assumptions, we have
$$\lim_{\varepsilon\to 0} \Delta V^\varepsilon_1 = \nabla V_1 \tag{3.3}$$
and
$$\nabla V_1 = E^u\Big\{\int_0^T \Gamma^A_t\,\Delta u_t\,dt\Big\}, \tag{3.4}$$
where
$$\Gamma^A_t = Z^{A,1}_t + g_u(t, X_t, u_t, v_t)\,Y^{A,2}_t. \tag{3.5}$$
In particular, the necessary condition for $u$ to be an optimal control is:
$$\Gamma^A_t = 0. \tag{3.6}$$
Proof: See Appendix.
We see that, given $(C_T, v)$ (and thus also $X$), the optimal $u$ should satisfy the following
Forward-Backward Stochastic Differential Equation (FBSDE):
$$\begin{cases}
G_t = \int_0^t g(s, X_s, u_s, v_s)\,ds;\\
Y^{A,1}_t = U_1(C_T, G_T) - \int_t^T Z^{A,1}_s\,dB^u_s;\\
Y^{A,2}_t = \partial_2 U_1(C_T, G_T) - \int_t^T Z^{A,2}_s\,dB^u_s;
\end{cases} \tag{3.7}$$
with the maximum condition (3.6).
Moreover, since $g_{uu} > 0$, we may assume there exists a function $h(t, x, v, z)$ such that
$$g_u\big(t, x, h(t, x, v, z), v\big) = z. \tag{3.8}$$
Note that $\partial_2 U_1 < 0$, so $Y^{A,2}_t < 0$. Thus, (3.6) is equivalent to
$$u_t = h\big(t, X_t, v_t, -Z^{A,1}_t / Y^{A,2}_t\big). \tag{3.9}$$
That is, given $(C_T, v)$ and $X$, one may solve the following (self-contained) FBSDE:
$$\begin{cases}
G_t = \int_0^t g\big(s, X_s, h(s, X_s, v_s, -Z^{A,1}_s/Y^{A,2}_s), v_s\big)\,ds;\\
Y^{A,1}_t = U_1(C_T, G_T) + \int_t^T Z^{A,1}_s\,h(s, X_s, v_s, -Z^{A,1}_s/Y^{A,2}_s)\,ds - \int_t^T Z^{A,1}_s\,dB_s;\\
Y^{A,2}_t = \partial_2 U_1(C_T, G_T) + \int_t^T Z^{A,2}_s\,h(s, X_s, v_s, -Z^{A,1}_s/Y^{A,2}_s)\,ds - \int_t^T Z^{A,2}_s\,dB_s.
\end{cases} \tag{3.10}$$
Then, as a necessary condition, the optimal control $u^{C_T,v}$ should be defined by (3.9).
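As a concrete sanity check of (3.8)-(3.9), anticipating the quadratic-cost example of Section 5 (our added remark, writing $\Gamma^A_t$ for the process in (3.5)): with $g(t,x,u,v) = u^2/2$ we have $g_u = u$, so the implicit equation $g_u(t,x,h,v) = z$ gives $h(t,x,v,z) = z$; in the separable case $Y^{A,2}_t \equiv -1$, and the first-order condition reduces to

```latex
\Gamma^A_t = Z^{A,1}_t + g_u(t, X_t, u_t, v_t)\,Y^{A,2}_t
           = Z^{A,1}_t - u_t = 0
\quad\Longrightarrow\quad
u_t = h\big(t, X_t, v_t, -Z^{A,1}_t/Y^{A,2}_t\big) = Z^{A,1}_t ,
```

which is exactly the optimal control obtained in Section 5.1.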
3.2 The Principal's problem
3.2.1 Admissible contracts
We now characterize the admissible set $\mathcal{A}$ of contracts $(C_T, v)$. Our first requirement is:
(A4.) $(C_T, v)$ is implementable. That is, (3.10) has a unique solution, and $u^{C_T,v}$ defined by (3.9) satisfies $u^{C_T,v} \in \mathcal{A}(C_T, v)$ and $V_1(u^{C_T,v}; C_T, v) = V_1(C_T, v)$.
Note that Section 3.1 gives only necessary conditions. In Section 4 there will be some discussion on
when the above $u^{C_T,v}$ is indeed the agent's optimal control.
Now an implementable contract $(C_T, v)$ uniquely determines $u^{C_T,v}$. In fact, for fixed $v$,
the correspondence between $C_T$ and $u^{C_T,v}$ is one to one, up to a constant. To see this, we
fix some $(u, v)$ and want to find some $C_T$ such that $u^{C_T,v} = u$. For notational convenience,
we denote
$$X^A = Y^{A,1}, \qquad Y^A = Y^{A,2}, \qquad Z^A = Z^{A,2}.$$
If $u = u^{C_T,v}$ for some $C_T$, then (3.6) holds true for $u$. That is,
$$Z^{A,1}_t = -g_u(t, X_t, u_t, v_t)\,Y^A_t.$$
Denote $R = X^A_0 = Y^{A,1}_0$. Then (3.7) becomes
$$\begin{cases}
G_t = \int_0^t g(s, X_s, u_s, v_s)\,ds;\\
X^A_t = R - \int_0^t g_u(s, X_s, u_s, v_s)\,Y^A_s\,dB^u_s;\\
Y^A_t = \partial_2 U_1(C_T, G_T) - \int_t^T Z^A_s\,dB^u_s;
\end{cases} \tag{3.11}$$
where
$$X^A_T = U_1(C_T, G_T). \tag{3.12}$$
Since $\partial_1 U_1 > 0$, we may assume there exists a function $H(x, y)$ such that
$$U_1(H(x, y), y) = x. \tag{3.13}$$
Then (3.12) leads to
$$C_T = H(X^A_T, G_T). \tag{3.14}$$
Plugging this into (3.11), we get
$$\begin{cases}
X_t = x + \int_0^t v_s\,dB_s;\\
G_t = \int_0^t g(s, X_s, u_s, v_s)\,ds;\\
X^A_t = R - \int_0^t g_u(s, X_s, u_s, v_s)\,Y^A_s\,dB^u_s;\\
Y^A_t = \partial_2 U_1\big(H(X^A_T, G_T), G_T\big) - \int_t^T Z^A_s\,dB^u_s.
\end{cases} \tag{3.15}$$
Now fix $(R, u, v)$. If FBSDE (3.15) is well-posed, we may define $C_T$ by (3.14), and we can
easily see that $u^{C_T,v} = u$. In this sense, for technical convenience, from now on we consider
$(R, u, v)$ (instead of $(C_T, v)$) as a contract, or say, as the principal's control. Then (A4)
should be rewritten as:
(A4'.) $(R, u, v)$ is an implementable contract. That is,
(i) FBSDE (3.15) is well-posed;
(ii) For $C_T$ defined by (3.14), $(C_T, v)$ is implementable in the sense of (A4).
We note that the theory of FBSDEs is far from complete. The well-posedness of (3.15) is
in general unclear (unless we impose strong conditions, such as $u$ being bounded), mainly due to the
introduction of $B^u$. In fact, even for linear FBSDEs there is no general result like Lemma
3.1. Instead of adopting too-strong technical conditions, in this paper we assume the well-posedness of the involved FBSDEs directly and leave the general FBSDE theory for future
research. However, in the so-called separable utility case, the corresponding FBSDEs become decoupled FBSDEs and thus we can use Lemma 3.1 to establish their well-posedness,
as we will see in Section 3.4.2.
Now for any $(u, v)$ and any bounded $(\Delta u, \Delta v)$, denote
$$u^\varepsilon_t = u_t + \varepsilon\Delta u_t; \qquad v^\varepsilon_t = v_t + \varepsilon\Delta v_t;$$
$$X^\varepsilon_t = x + \int_0^t v^\varepsilon_s\,dB_s; \tag{3.16}$$
$$G^\varepsilon_T = \int_0^T g(t, X^\varepsilon_t, u^\varepsilon_t, v^\varepsilon_t)\,dt.$$
Denote also with a superscript $\varepsilon$ all other corresponding quantities.
(A5.) The principal's admissible set $\mathcal{A}$ of controls is the set of all those contracts $(R, u, v)$
such that, for any bounded $(\Delta u, \Delta v)$, there exists a constant $\varepsilon_1 > 0$ such that for any
$\varepsilon \in [0, \varepsilon_1)$:
(i) (A4') holds true for $(R, u^\varepsilon, v^\varepsilon)$;
(ii) The FBSDEs (3.20) and (3.22) below are well-posed for $(R, u^\varepsilon, v^\varepsilon)$;
(iii) $\lim_{\varepsilon\to 0} \Delta V^\varepsilon_2 = \nabla Y^P_0$, for $\Delta V^\varepsilon_2$ and $\nabla Y^P_0$ defined in (3.19) and (3.20) below, respectively.
Note again that we will specify sufficient conditions for (A5) in the separable utility case
in Section 3.4.2. We also assume that $\mathcal{A}$ is not empty.
3.2.2 Necessary conditions
We now derive the necessary conditions for the principal's problem. Our first observation is
that
$$R = E^u\{X^A_T\} = E^u\{U_1(C_T, G_T)\}$$
is exactly the optimal utility of the agent. So condition (2.4) becomes equivalent to
$$R \ge \underline{R}.$$
Intuitively it is obvious that the principal would choose $R = \underline{R}$ in order to maximize her
utility. Again, due to the lack of a satisfactory theory of FBSDEs, here we simply assume the
optimal $R = \underline{R}$, and we will prove it rigorously in the separable utility case by using the
comparison theorem of BSDEs.
Given $(u, v)$, let $(X, G, X^A, Y^A, Z^A)$ be the solution to (3.15) with $R = \underline{R}$. Define $C_T$ by
(3.14) and let
$$Y^P_t = U_2(X_T - C_T) - \int_t^T Z^P_s\,dB^u_s. \tag{3.17}$$
By Lemma 3.1, (3.17) is well-posed. Then the principal's problem is to choose an optimal $(u, v)$ in order to maximize
$$V_2(u, v) = E^u\{Y^P_T\} = Y^P_0. \tag{3.18}$$
We now introduce more notation and adjoint processes for the principal. Similarly as
before, for any bounded $\Delta u \in \mathbb{F}^B$, $\Delta v \in \mathbb{F}^B$, and $\varepsilon \in (0, \varepsilon_1)$ as in (A5), we denote
$$\Delta X^\varepsilon = \frac{X^\varepsilon - X}{\varepsilon}; \qquad \Delta G^\varepsilon_T = \frac{G^\varepsilon_T - G_T}{\varepsilon}; \qquad \Delta V^\varepsilon_2 = \frac{V^\varepsilon_2 - V_2}{\varepsilon}. \tag{3.19}$$
Also denote, omitting the functions' arguments,
$$\nabla X_t = \int_0^t \Delta v_s\,dB_s; \qquad \nabla g_u = g_{uu}\Delta u + g_{uv}\Delta v + g_{ux}\nabla X;$$
$$\nabla G_T = \int_0^T \big[g_x \nabla X_t + g_u \Delta u_t + g_v \Delta v_t\big]\,dt.$$
Moreover, consider the following FBSDE system:
$$\begin{cases}
\nabla X^A_t = \int_0^t g_u Y^A_s \Delta u_s\,ds - \int_0^t \big[\nabla g_u\,Y^A_s + g_u\,\nabla Y^A_s\big]\,dB^u_s;\\
\nabla Y^A_t = \partial_{12} U_1\,\nabla C_T + \partial_{22} U_1\,\nabla G_T + \int_t^T Z^A_s \Delta u_s\,ds - \int_t^T \nabla Z^A_s\,dB^u_s;\\
\nabla Y^P_t = U_2'\,\big[\nabla X_T - \nabla C_T\big] + \int_t^T Z^P_s \Delta u_s\,ds - \int_t^T \nabla Z^P_s\,dB^u_s;
\end{cases} \tag{3.20}$$
where $\nabla C_T$ is defined by
$$\nabla X^A_T = \partial_1 U_1\,\nabla C_T + \partial_2 U_1\,\nabla G_T; \tag{3.21}$$
and, noting that $\partial_1 U_1 > 0$,
$$\begin{cases}
X^1_t = \int_0^t g_u Z^1_s\,ds;\\
X^2_t = \int_0^t \big[g_{ux} Z^1_s Y^A_s + g_x Y^2_s\big]\,ds;\\
Y^1_t = \dfrac{1}{\partial_1 U_1}\big[U_2' - X^1_T\,\partial_{12} U_1\big] - \int_t^T Z^1_s\,dB^u_s;\\
Y^2_t = \dfrac{\partial_2 U_1}{\partial_1 U_1}\big[U_2' - X^1_T\,\partial_{12} U_1\big] + X^1_T\,\partial_{22} U_1 - \int_t^T Z^2_s\,dB^u_s;\\
Y^3_t = X^2_T + U_2' - \int_t^T Z^3_s\,dB^u_s.
\end{cases} \tag{3.22}$$
Theorem 3.2 Under (A5), we have
$$\nabla Y^P_0 = E^u\Big\{\int_0^T \Gamma^{P,1}_t\,\Delta u_t\,dt + \int_0^T \Gamma^{P,2}_t\,\Delta v_t\,dt\Big\}, \tag{3.23}$$
where
$$\begin{cases}
\Gamma^{P,1}_t = Z^P_t - g_u Y^1_t Y^A_t + X^1_t Z^A_t + g_{uu} Z^1_t Y^A_t + g_u Y^2_t;\\
\Gamma^{P,2}_t = g_{uv} Z^1_t Y^A_t + g_v Y^2_t + Z^3_t + u_t\big(Y^3_t - X^2_t\big).
\end{cases} \tag{3.24}$$
In particular, the necessary condition for $(u, v)$ to be an optimal control is:
$$\Gamma^{P,1}_t = \Gamma^{P,2}_t = 0. \tag{3.25}$$
To prove the theorem, we first recall the following simple lemma (see, e.g., Cvitanic, Wan
and Zhang (2004)):
Lemma 3.2 Assume $W_t = \int_0^t \alpha_s\,dB_s + A_t$ is a continuous semimartingale, where $B$ is a
Brownian motion. Suppose that
1) $\int_0^T |\alpha_t|^2\,dt < \infty$, a.s.;
2) Both $W_t$ and $A_t$ are uniformly (in $t$) integrable.
Then $E[W_T] = E[A_T]$.
Proof of Theorem 3.2. The necessity of (3.25) is obvious because $(\Delta u, \Delta v)$ is arbitrary.
So it suffices to prove (3.23).
Note that
$$\nabla X_t = \int_0^t \Delta v_s\,dB^u_s + \int_0^t u_s \Delta v_s\,ds.$$
The proof is complete.
In summary, we have the following system of necessary conditions for the principal:
$$\begin{cases}
X_t = x + \int_0^t v_s\,dB_s;\\
G_t = \int_0^t g(s, X_s, u_s, v_s)\,ds;\\
X^A_t = R - \int_0^t g_u Y^A_s\,dB^u_s;\\
X^1_t = \int_0^t g_u Z^1_s\,ds;\\
X^2_t = \int_0^t \big[g_{ux} Z^1_s Y^A_s + g_x Y^2_s\big]\,ds;\\
Y^A_t = \partial_2 U_1\big(H(X^A_T, G_T), G_T\big) - \int_t^T Z^A_s\,dB^u_s;\\
Y^P_t = U_2\big(X_T - H(X^A_T, G_T)\big) - \int_t^T Z^P_s\,dB^u_s;\\
Y^1_t = \dfrac{1}{\partial_1 U_1}\big[U_2' - X^1_T\,\partial_{12} U_1\big] - \int_t^T Z^1_s\,dB^u_s;\\
Y^2_t = \dfrac{\partial_2 U_1}{\partial_1 U_1}\big[U_2' - X^1_T\,\partial_{12} U_1\big] + X^1_T\,\partial_{22} U_1 - \int_t^T Z^2_s\,dB^u_s;\\
Y^3_t = X^2_T + U_2' - \int_t^T Z^3_s\,dB^u_s;
\end{cases} \tag{3.26}$$
with the maximum condition (3.25).
In particular, if (3.25) has a unique solution
$$u_t = h_1\big(t, X_t, Y^1_t Y^A_t, Y^2_t, Z^P_t + X^1_t Z^A_t, Z^1_t Y^A_t, Z^3_t\big); \qquad
v_t = h_2\big(t, X_t, Y^1_t Y^A_t, Y^2_t, Z^P_t + X^1_t Z^A_t, Z^1_t Y^A_t, Z^3_t\big),$$
then, by plugging $(h_1, h_2)$ into (3.26), we obtain a self-contained FBSDE.
3.3 Fixed volatility case
If the principal has no control over $v$, then both $v$ and $X$ are fixed. In this case, in the
variation one can only choose $\Delta v = 0$. Then (3.26) simplifies to
$$\begin{cases}
G_t = \int_0^t g(s, X_s, u_s, v_s)\,ds;\\
X^A_t = R - \int_0^t g_u Y^A_s\,dB^u_s;\\
X^1_t = \int_0^t g_u Z^1_s\,ds;\\
Y^A_t = \partial_2 U_1\big(H(X^A_T, G_T), G_T\big) - \int_t^T Z^A_s\,dB^u_s;\\
Y^P_t = U_2\big(X_T - H(X^A_T, G_T)\big) - \int_t^T Z^P_s\,dB^u_s;\\
Y^1_t = \dfrac{1}{\partial_1 U_1}\big[U_2' - X^1_T\,\partial_{12} U_1\big] - \int_t^T Z^1_s\,dB^u_s;\\
Y^2_t = \dfrac{\partial_2 U_1}{\partial_1 U_1}\big[U_2' - X^1_T\,\partial_{12} U_1\big] + X^1_T\,\partial_{22} U_1 - \int_t^T Z^2_s\,dB^u_s;
\end{cases} \tag{3.27}$$
with the maximum condition
$$\Gamma^{P,1}_t = Z^P_t - g_u Y^1_t Y^A_t + X^1_t Z^A_t + g_{uu} Z^1_t Y^A_t + g_u Y^2_t = 0. \tag{3.28}$$
3.4 Separable Utilities
In this subsection we assume the agent has a separable utility function, namely,
$$U_1(C_T, G_T) = U_1(C_T) - G_T. \tag{3.29}$$
Here we abuse the notation $U_1$. We note that if $U_1' > 0$ and $U_1'' \le 0$, then Assumption (A2)(i) still holds true.
3.4.1 The agent's problem
In this case obviously we have
$$Y^{A,2}_t = -1; \qquad Z^{A,2}_t = 0.$$
Then (3.5) becomes
$$\Gamma^A_t = Z^{A,1}_t - g_u(t, X_t, u_t, v_t). \tag{3.30}$$
Denote $\bar Y^{A,1}_t = Y^{A,1}_t + \int_0^t g\,ds$. Then (3.7) and (3.10) become
$$\bar Y^{A,1}_t = U_1(C_T) + \int_t^T \big[u_s Z^{A,1}_s - g\big]\,ds - \int_t^T Z^{A,1}_s\,dB_s \tag{3.31}$$
and
$$\bar Y^{A,1}_t = U_1(C_T) + \int_t^T \big[Z^{A,1}_s\,h(s, X_s, v_s, Z^{A,1}_s) - g\big(s, X_s, h(s, X_s, v_s, Z^{A,1}_s), v_s\big)\big]\,ds - \int_t^T Z^{A,1}_s\,dB_s, \tag{3.32}$$
respectively.
3.4.2 The principal's problem
First one can check straightforwardly that
$$Y^A = -1; \qquad Z^A = 0; \qquad Y^2 = -Y^1; \qquad Z^2 = -Z^1. \tag{3.33}$$
Denote
$$J_1 = U_1^{-1} \tag{3.34}$$
and
$$\bar X^A_t = X^A_t + G_t; \qquad \bar Y^3_t = Y^3_t - X^2_t.$$
Then (3.14) and (3.24) become, recalling notation (3.34),
$$C_T = J_1(\bar X^A_T); \qquad
\Gamma^{P,1}_t = Z^P_t - g_{uu} Z^1_t; \qquad
\Gamma^{P,2}_t = Z^3_t + u_t \bar Y^3_t - g_v Y^1_t - g_{uv} Z^1_t. \tag{3.35}$$
Therefore, (3.26) becomes
$$\begin{cases}
X_t = x + \int_0^t v_s\,dB_s;\\
\bar X^A_t = R + \int_0^t g\,ds + \int_0^t g_u\,dB^u_s;\\
Y^P_t = U_2\big(X_T - J_1(\bar X^A_T)\big) - \int_t^T Z^P_s\,dB^u_s;\\
Y^1_t = \dfrac{U_2'\big(X_T - J_1(\bar X^A_T)\big)}{U_1'\big(J_1(\bar X^A_T)\big)} - \int_t^T Z^1_s\,dB^u_s;\\
\bar Y^3_t = U_2'\big(X_T - J_1(\bar X^A_T)\big) - \int_t^T \big[g_x Y^1_s + g_{ux} Z^1_s\big]\,ds - \int_t^T Z^3_s\,dB^u_s;
\end{cases} \tag{3.36}$$
with the maximum conditions $\Gamma^{P,1}_t = \Gamma^{P,2}_t = 0$.
As mentioned in Section 3.2, we shall specify some sufficient conditions for the well-posedness
of the FBSDEs in this case. First, under the integrability conditions in (A5') below, $X^\varepsilon$ and
$\nabla X$ are well defined. Applying Lemma 3.1 to $(Y^P, Z^P)$, $(Y^1, Z^1)$, and then to $(\bar Y^3, Z^3)$, we
see that (3.36) is well-posed. Therefore, FBSDEs (3.11), (3.20), and (3.22) are well-posed
in this case.
Recall (3.16), (3.19), and define the other $\Delta$-terms similarly. We now modify (A5) as follows.
(A5'.) The principal's admissible set $\mathcal{A}$ of controls is redefined as the set of all those
contracts $(R, u, v)$ such that, for any bounded $(\Delta u, \Delta v)$, there exists a constant $\varepsilon_1 > 0$ such
that for any $\varepsilon \in [0, \varepsilon_1)$:
(i) $u^\varepsilon, v^\varepsilon, M^\varepsilon_T, [M^\varepsilon_T]^{-1}, g^\varepsilon, g^\varepsilon_u, g^\varepsilon_v, g^\varepsilon_x, g^\varepsilon_{uu}, g^\varepsilon_{uv}, g^\varepsilon_{ux}, U^\varepsilon_1, U^\varepsilon_2, [U_2']^\varepsilon$, and $[J_1']^\varepsilon$ are uniformly integrable in $L^{p_0}(Q)$ or $L^{p_0}_T(Q)$, for some $p_0$ large enough (where $J_1 = U_1^{-1}$);
(ii) $u^\varepsilon \in \mathcal{A}(C^\varepsilon_T, v^\varepsilon)$ and $(C^\varepsilon_T, v^\varepsilon)$ is implementable in the sense of (A4), where $C^\varepsilon_T$ is defined
as in (3.35).
Note that we could specify $p_0$ precisely in (A5'). But in order to simplify the presentation and
to focus on the main ideas, we assume $p_0$ is as large as we want.
Theorem 3.3 Assume (A5'). Then (A5) holds true and the optimal $R$ is equal to $\underline{R}$.
Proof: We first show that the principal's optimal choice of $R$ is $\underline{R}$. In fact, for fixed $(u, v)$,
let a superscript $R$ denote the processes corresponding to $R$. Then obviously $\bar X^{A,R}_t \ge \bar X^{A,\underline{R}}_t$
for any $R \ge \underline{R}$. Since
$$\partial_1 H(x, y) = \frac{1}{\partial_1 U_1(H(x, y), y)} > 0, \qquad U_2' > 0,$$
we get
$$Y^{P,R}_T = U_2(X_T - C^R_T) = U_2\big(X_T - H(X^{A,R}_T, G_T)\big) \le U_2\big(X_T - H(X^{A,\underline{R}}_T, G_T)\big) = Y^{P,\underline{R}}_T.$$
Therefore,
$$Y^{P,R}_0 = E^u\{Y^{P,R}_T\} \le E^u\{Y^{P,\underline{R}}_T\} = Y^{P,\underline{R}}_0.$$
Thus, the optimal $R$ is equal to $\underline{R}$.
It remains to prove
$$\lim_{\varepsilon\to 0} \Delta V^\varepsilon_2 = \nabla Y^P_0. \tag{3.37}$$
We postpone the proof to the Appendix.
To end this subsection, for future use we note that (3.14) becomes, recalling notation
(3.34),
$$C_T = J_1\Big(R + \int_0^T g_u(t, X_t, u_t, v_t)\,dB^u_t + \int_0^T g(t, X_t, u_t, v_t)\,dt\Big).$$
This means that the principal's problem is
$$\sup_{u,v}\; E^u\bigg\{U_2\Big(x + \int_0^T u_t v_t\,dt + \int_0^T v_t\,dB^u_t
- J_1\Big(R + \int_0^T g_u(t, X_t, u_t, v_t)\,dB^u_t + \int_0^T g(t, X_t, u_t, v_t)\,dt\Big)\Big)\bigg\}. \tag{3.38}$$
3.4.3 Fixed volatility case
If we also assume $v$ (hence $X$) is fixed, then (3.36) becomes
$$\begin{cases}
\bar X^A_t = R + \int_0^t g\,ds + \int_0^t g_u\,dB^u_s;\\
Y^P_t = U_2\big(X_T - J_1(\bar X^A_T)\big) - \int_t^T Z^P_s\,dB^u_s;\\
Y^1_t = \dfrac{U_2'\big(X_T - J_1(\bar X^A_T)\big)}{U_1'\big(J_1(\bar X^A_T)\big)} - \int_t^T Z^1_s\,dB^u_s;
\end{cases} \tag{3.39}$$
with the maximum condition $\Gamma^{P,1}_t = 0$.
4 Sufficient conditions
4.1 A general result
If the necessary conditions uniquely determine a candidate for the optimal solution, then they are
also sufficient, provided an optimal solution exists. We discuss here the existence of an
optimal solution. In general, our maximization problems are non-concave, so we have to use
infinite-dimensional non-convex optimization methods.
Let $H$ be a Hilbert space with norm $\|\cdot\|$ and inner product $\langle\cdot,\cdot\rangle$. Let $F : H \to \mathbb{R}$ be a
functional with Frechet derivative $f : H \to H$. That is, for any $h, \Delta h \in H$,
$$\lim_{\varepsilon\to 0} \frac{1}{\varepsilon}\big[F(h + \varepsilon\Delta h) - F(h)\big] = \langle f(h), \Delta h\rangle.$$
The following theorem is a direct consequence of the so-called Ekeland variational
principle; see Ekeland (1974).
Theorem 4.4 Assume
(A1) $F$ is continuous;
(A2) There exists a unique $\hat h \in H$ such that $f(\hat h) = 0$;
(A3) For any $\varepsilon > 0$, there exists $\delta > 0$ such that $F(h) \le F(\hat h) + \varepsilon$ whenever $\|f(h)\| \le \delta$;
(A4) $V := \sup_{h\in H} F(h) < \infty$.
Then $\hat h$ is the maximum argument of $F$. That is, $F(\hat h) = V$.
Remark 4.1 (1) A sufficient condition for (A3) is that $f$ is invertible and $f^{-1}$ is continuous
at $0$.
(2) If $H = \mathbb{R}$ and $f$ is continuous and invertible, then $F$ is either convex or concave,
and thus the result obviously holds true.
(3) If (A4) is replaced by $\inf_{h\in H} F(h) > -\infty$, then $\hat h$ is the minimum argument of $F$.
4.2.4 Fixed volatility case
In this case $v$ is fixed. Set $H$ to be the admissible set of $u$, with inner product defined by
(4.2). The functional is $V_2(u)$, with Frechet derivative $\Gamma^{P,1}(u)$. We need the following:
(i) Considering $u$ as a parameter, FBSDE (3.27) (without assuming (3.28)) is well-posed;
(ii) FBSDE (3.27) together with (3.28) has a unique solution $\hat u$;
(iii) For any sequence $u_n$,
$$\|\Gamma^{P,1}(u_n)\| \to 0 \implies Y^{P,u_n}_0 \to Y^{P,\hat u}_0.$$
Then $\hat u$ is the optimal control of the principal's problem.
Similarly, (iii) can be replaced by the following stronger condition:
(iii') For any $\lambda$, FBSDE (3.27) together with the condition $\Gamma^{P,1}_t = \lambda_t$ is well-posed. In
particular,
$$V_2(u^\lambda) \to V_2(u^0), \qquad \text{as } \|\lambda\| \to 0.$$
5 Examples
We now present an important example, which is quite general in the choice of the utility
functions, and thus could be of use in many economic applications. The solution is
semi-explicit, as it boils down to solving a linear BSDE (not an FBSDE!), and a nonlinear deterministic equation for a constant $c$. To the best of our knowledge, this is the first explicit
description of a solution to a continuous-time Principal-Agent problem with hidden action, other than the case of exponential and linear utility functions. Moreover, as in the latter
case, the optimal contract is still a function only of the final outcome $X_T$, and not of the
history of the process $X$.
5.1 Separable utility with fixed v and quadratic g
Consider the separable utility case with fixed $v$ and
$$g(t, x, u, v) = \frac{u^2}{2}.$$
5.1.1 Necessary conditions
First, by (3.8) and noting that $g_u = u$, we get $h(t, x, v, z) = z$. Given $(C_T, v)$, the adjoint
process for the agent's problem is
$$\bar Y^{A,1}_t = U_1(C_T) + \frac12\int_t^T |Z^{A,1}_s|^2\,ds - \int_t^T Z^{A,1}_s\,dB_s; \tag{5.1}$$
and, by (3.6) and Section 4.2.1, the necessary and sufficient condition for the optimal control
is $u^{C_T,v}_t = Z^{A,1}_t$. Denote
$$\hat Y^{A,1}_t = \exp\big(\bar Y^{A,1}_t\big); \qquad \hat Z^{A,1}_t = \hat Y^{A,1}_t Z^{A,1}_t.$$
Then (5.1) is equivalent to
$$\hat Y^{A,1}_t = \exp\big(U_1(C_T)\big) - \int_t^T \hat Z^{A,1}_s\,dB_s. \tag{5.2}$$
If Assumption (A2)(ii) holds, then $\exp(U_1(C_T))$ is bounded, so (5.2) is well-posed, and so is
(5.1).
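The exponential transform behind (5.2) is easy to verify numerically. The sketch below uses the toy assumptions $U_1(c) = c$ and $C_T = B_T$ (ours, not from the paper); then $\hat Y^{A,1}_t = E_t[e^{U_1(C_T)}]$ is a martingale, so the agent's BSDE value is $\bar Y^{A,1}_0 = \log E[e^{B_T}] = T/2$.

```python
import numpy as np

# Monte Carlo check of the exponential transform of (5.1)-(5.2):
# hat{Y}_t = E_t[exp(U1(C_T))] is a martingale, so Y_0 = log E[exp(U1(C_T))].
# Toy assumptions (not from the paper): U1(c) = c and C_T = B_T.
rng = np.random.default_rng(0)
T = 1.0
BT = np.sqrt(T) * rng.standard_normal(1_000_000)
Y0 = np.log(np.mean(np.exp(BT)))  # theory: log E[exp(B_T)] = T/2 = 0.5
print(Y0)
```

In this toy case $\bar Y^{A,1}_t = B_t + (T-t)/2$, so $Z^{A,1} \equiv 1$ and the candidate optimal effort is the constant $u \equiv 1$.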
As for the principal's problem, we note that (3.39) becomes
$$\begin{cases}
\bar X^A_t = R - \dfrac12\int_0^t u_s^2\,ds + \int_0^t u_s\,dB_s;\\
Y^P_t = U_2\big(X_T - J_1(\bar X^A_T)\big) - \int_t^T Z^P_s\,dB^u_s;\\
Y^1_t = \dfrac{U_2'\big(X_T - J_1(\bar X^A_T)\big)}{U_1'\big(J_1(\bar X^A_T)\big)} - \int_t^T Z^1_s\,dB^u_s;
\end{cases}$$
and, recalling (3.35), the maximum condition becomes
$$\Gamma^{P,1}_t = Z^P_t - Z^1_t = 0.$$
Obviously we then get
$$Y^P_t - Y^1_t \equiv c,$$
where $c$ is a constant (equal to $Y^P_0 - Y^1_0$). In particular, at time $T$ we have
$$U_2\big(X_T - J_1(\bar X^A_T)\big) - \frac{U_2'\big(X_T - J_1(\bar X^A_T)\big)}{U_1'\big(J_1(\bar X^A_T)\big)} = c.$$
Denote
$$F(x, y) = U_2(x - y) - \frac{U_2'(x - y)}{U_1'(y)}.$$
By our assumptions,
$$F_y(x, y) = -U_2' + \frac{U_2''}{U_1'} + \frac{U_2' U_1''}{|U_1'|^2} < 0.$$
Thus we may assume there exists a function $I(x, c)$ such that
$$F\big(x, I(x, c)\big) = c.$$
Denote $\varphi(x, c) = U_1\big(I(x, c)\big)$. We have $\bar X^A_T = \varphi(X_T, c)$ for some constant $c$. Recall again
that $X_T \in \mathcal{F}^B_T$ is a given fixed random variable.
We would like to have a solution (X^{A,c}_t, u^c) to the quadratic BSDE

X^{A,c}_t = \varphi(X_T, c) + \int_t^T \frac{u_s^2}{2}\, ds - \int_t^T u_s\, dB_s.   (5.3)

By Itô's rule, this is equivalent to the linear BSDE

\mathcal{X}_t := e^{X^{A,c}_t} = e^{\varphi(X_T, c)} - \int_t^T u_s \mathcal{X}_s\, dB_s.

If e^{\varphi(X_T, c)} is square-integrable, there exists a solution to this BSDE (this is satisfied, in particular, if U_1 is bounded). Furthermore, assume that we can find a unique constant c = c_R so that

X^{A,c}_0 = R, \quad \text{or, equivalently,} \quad E[e^{\varphi(X_T, c)}] = e^R.   (5.4)

Then X^{A,c_R} = X^A, and u^{c_R} is a candidate for the optimal u for the principal's problem.
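Condition (5.4) is a scalar equation in c and can be solved numerically, e.g. by Monte Carlo plus bisection. A toy sketch under illustrative assumptions (U_1(x) = x and U_2(x) = -e^{-x}, which give the closed form \psi(x,c) = x + \log(-c/2) and \varphi = \psi, hence c_R = -2e^R / E[e^{X_T}] to check against):

```python
import numpy as np

# Toy assumptions (not the paper's general case): U_1(x) = x,
# U_2(x) = -exp(-x). Then F(x, y) = -2 exp(-(x - y)), so
# phi(x, c) = x + log(-c/2), and (5.4) reads (-c/2) E[exp(X_T)] = exp(R).
rng = np.random.default_rng(1)
X0, v, T, R, n = 0.0, 1.0, 1.0, 0.5, 200_000
X_T = X0 + v * np.sqrt(T) * rng.standard_normal(n)

def phi(x, c):
    return x + np.log(-c / 2.0)

# Monte Carlo bisection on c < 0: E[exp(phi(X_T, c))] is decreasing in c.
lo, hi = -100.0, -1e-8
while hi - lo > 1e-10:
    mid = 0.5 * (lo + hi)
    if np.mean(np.exp(phi(X_T, mid))) > np.exp(R):
        lo = mid    # expectation too large => root lies to the right
    else:
        hi = mid
c_mc = 0.5 * (lo + hi)
c_exact = -2.0 * np.exp(R) / np.exp(X0 + v**2 * T / 2)
print(c_mc, c_exact)
```

The bisection recovers c_R up to Monte Carlo error; the same scheme applies whenever \varphi is computable, with the inner expectation estimated from samples of X_T.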
5.1.2 Sufficiency
We next show that the above controls are indeed optimal. For the agent's problem, this follows from Section 4.2.1. We now consider the principal's problem. From (3.38), the principal's problem is

\sup_u E^u\Big[ U_2\Big( x + \int_0^T u_s v_s\, ds + \int_0^T v_s\, dB^u_s - J_1(X^A_T) \Big) \Big].

Note that

\log M^u_T = X^A_T - R.

We can thus rewrite the principal's problem as

\sup_u E[G(M^u_T)] = \sup_u E\big[ M^u_T U_2\big( X_T - J_1(R + \log M^u_T) \big) \big].
Here, G is a random function on the positive real numbers, defined by

G(x) = x\, U_2(X_T - J_1(R + \log x)).

We find

G'(x) = U_2(X_T - J_1(R + \log x)) - U_2'(X_T - J_1(R + \log x))\, J_1'(R + \log x),

G''(x) = -\frac{1}{x} U_2'(X_T - J_1(R + \log x))\, J_1'(R + \log x)
 + \frac{1}{x} [J_1'(R + \log x)]^2\, U_2''(X_T - J_1(R + \log x))
 - \frac{1}{x} U_2'(X_T - J_1(R + \log x))\, J_1''(R + \log x).

Note that G''(x) \le 0 (since U_2' > 0, U_2'' \le 0, and J_1' > 0, J_1'' \ge 0 by the concavity of U_1), so that G is a concave function, for every fixed X_T(\omega).
We define the dual function, for y > 0,

\tilde G(y) = \max_{x > 0} [G(x) - xy].

The maximum is attained at

\hat x = (G')^{-1}(y).

Thus, we get the following upper bound on the principal's problem, for any constant c > 0:

E[G(M^u_T)] \le E[\tilde G(c)] + c\, E[M^u_T] = E[\tilde G(c)] + c.

The upper bound will be attained if

M^u_T = (G')^{-1}(\hat c)

and \hat c is chosen such that

E[(G')^{-1}(\hat c)] = 1.

This means that if u satisfies the Backward SDE

M^u_t = (G')^{-1}(\hat c) - \int_t^T u_s M^u_s\, dB_s,

then u is optimal.
We now show that

(G')^{-1}(c) = e^{-R} e^{\varphi(X_T, c)},

in the notation introduced before (5.3). Indeed,

G'\big( e^{-R} e^{\varphi(X_T, c)} \big) = U_2(X_T - J_1(\varphi(X_T, c))) - U_2'(X_T - J_1(\varphi(X_T, c)))\, J_1'(\varphi(X_T, c)) = c,

where the last equality comes from the definition of \varphi(X_T, c). Thus the sufficient Backward SDE for u to be optimal becomes

M^u_t = e^{-R} e^{\varphi(X_T, c)} - \int_t^T u_s M^u_s\, dB_s,

where

E[e^{\varphi(X_T, c)}] = e^R.

We see from (5.4) that we can take c = c_R. Also, because \log M_t = X^A_t - R, the sufficient BSDE is the same as the necessary BSDE (5.3).
Note that the agent wants to maximize Y^{A,1}_0 = Y_0. By the BSDE Comparison Theorem, the latter is maximized if the drift in (5.7) is maximized. We see that this will be true if condition (5.5) is satisfied, which is then a sufficient condition.

Denote

J_1(y) := U_1^{-1}(y) = -\log(-y)/\gamma_1.

The principal's problem is then to maximize

E^{Q^u}\big[ U_2\big( X_T - J_1(Y^{A,1}_T(u)) - G_T \big) \big].   (5.8)
We now impose the assumption (with a slight abuse of notation) that

g(t,x,u,v) = \lambda_t x + g(t,u,v),   (5.9)

for some deterministic function \lambda_t. Integrating by parts, we get the following representation for the first part of the cost G_T:

\int_0^T \lambda_s X_s\, ds = X_T \int_0^T \lambda_s\, ds - \int_0^T \Big( \int_0^s \lambda_r\, dr \Big) [u_s v_s\, ds + v_s\, dB^u_s].   (5.10)
If we substitute this into G_T = \int_0^T \lambda_s X_s\, ds + \int_0^T g(s, u_s, v_s)\, ds, and plug the expression for X_T and the expression (5.6) for Y^A into (5.8), with U_2(x) = -e^{-\gamma_2 x}, we get that we need to minimize

E^u \exp\Big\{ -\gamma_2 \Big[ 1 - \int_0^T \lambda_s\, ds \Big] \Big[ X_0 + \int_0^T u_t v_t\, dt \Big] + \gamma_2 \gamma_1 \int_0^T \frac{g_u^2(s, u_s, v_s)}{2}\, ds
+ \gamma_2 \int_0^T g(s, u_s, v_s)\, ds - \gamma_2 \int_0^T \Big[ \int_0^s \lambda_r\, dr \Big] u_s v_s\, ds - \gamma_2 \Big[ 1 - \int_0^T \lambda_s\, ds \Big] \int_0^T v_s\, dB^u_s
+ \gamma_2 \int_0^T g_u(s, u_s, v_s)\, dB^u_s - \gamma_2 \int_0^T \Big[ \int_0^s \lambda_r\, dr \Big] v_s\, dB^u_s \Big\}.   (5.11)
This is a standard stochastic control problem, for which the solution turns out to be given by deterministic processes u, v (as can be verified, once the solution is found, either by verifying the Hamilton-Jacobi-Bellman equation or by verifying the corresponding maximum principle). Assuming that u, v are deterministic, the expectation above can be computed by using the fact that

E^u\Big[ \exp\Big( \int_0^T f_s\, dB^u_s \Big) \Big] = \exp\Big( \frac{1}{2} \int_0^T f_s^2\, ds \Big)

for a given square-integrable deterministic function f. Then the minimization can be done inside the integral in the exponent, and boils down to minimizing over (u_t, v_t) the expression

-\Big[ 1 - \int_t^T \lambda_s\, ds \Big] u_t v_t + \gamma_1 \frac{g_u^2(t, u_t, v_t)}{2} + g(t, u_t, v_t)
+ \frac{\gamma_2}{2} \Big[ \Big( 1 - \int_t^T \lambda_s\, ds \Big) v_t - g_u(t, u_t, v_t) \Big]^2.   (5.12)
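The Gaussian exponential-moment identity invoked above can itself be checked by simulation. A minimal sketch (the choice f_s = \sin s is an arbitrary toy integrand; W plays the role of B^u, which is a Brownian motion under P^u):

```python
import numpy as np

# Monte Carlo check of E[exp(int_0^T f_s dW_s)] = exp(0.5 * int_0^T f_s^2 ds)
# for a deterministic square-integrable f. Toy choice: f_s = sin(s) on [0, T].
rng = np.random.default_rng(2)
T, m, n = 1.0, 200, 400_000
t = np.linspace(0.0, T, m + 1)
dt = T / m
f = np.sin(t[:-1])                              # f on the time grid
dW = np.sqrt(dt) * rng.standard_normal((n, m))  # Brownian increments, per path
stoch_int = dW @ f                              # int_0^T f_s dW_s, per path
lhs = np.mean(np.exp(stoch_int))
rhs = np.exp(0.5 * np.sum(f**2) * dt)
print(lhs, rhs)
```

The two quantities agree up to Monte Carlo and time-discretization error, which is what allows the exponent in (5.11) to be reduced to the deterministic expression (5.12).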
The optimal contract is found from (3.14) as

C_T = G_T - \frac{1}{\gamma_1} \log(-Y^{A,1}_T),

where Y^{A,1} should be written not in terms of the Brownian motion B^u, but in terms of the process X. Since we have

Y^{A,1}_t = R \exp\Big\{ -\int_0^t \gamma_1^2 \frac{g_u^2(s, u_s, v_s)}{2}\, ds + \int_0^t \gamma_1 u_s g_u(s, u_s, v_s)\, ds - \int_0^t \gamma_1 g_u(s, u_s, v_s)\, dB_s \Big\},   (5.13)

we get that the optimal contract can be written (assuming the optimal v_t is never equal to zero) as

C_T = c + \int_0^T \frac{g_u(s, u_s, v_s)}{v_s}\, dX_s

for some constant c. If g_u(s, u_s, v_s)/v_s is a constant, then we get a linear contract.
Let us consider the special case of Holmstrom-Milgrom (1987), with

v \equiv 1, \qquad g(t,x,u,v) = u^2/2.

Then (5.12) becomes

-u_t + \gamma_1 u_t^2/2 + u_t^2/2 + \frac{\gamma_2}{2} (1 - u_t)^2.

Minimizing this, we get the constant optimal u of Holmstrom-Milgrom (1987), given by

u = \frac{1 + \gamma_2}{1 + \gamma_1 + \gamma_2}.

The optimal contract is linear and given by C_T = a + b X_T, where b = u and a is such that the IR constraint is satisfied,

a = -\frac{1}{\gamma_1} \log(-R) - b X_0 + \frac{b^2 T}{2} (\gamma_1 - 1).   (5.14)
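The closed form for u can be checked directly by minimizing the scalar expression above on a grid. A short sketch (the risk-aversion values \gamma_1 = 2, \gamma_2 = 3 are arbitrary sample parameters):

```python
import numpy as np

# Sanity check of the Holmstrom-Milgrom special case: minimize
#   -u + g1*u^2/2 + u^2/2 + (g2/2)*(1 - u)^2
# over u and compare with the closed form u* = (1 + g2) / (1 + g1 + g2).
g1, g2 = 2.0, 3.0                     # sample risk-aversion parameters
u = np.linspace(0.0, 2.0, 200_001)
obj = -u + g1 * u**2 / 2 + u**2 / 2 + (g2 / 2) * (1 - u)**2
u_grid = u[np.argmin(obj)]
u_exact = (1 + g2) / (1 + g1 + g2)
print(u_grid, u_exact)
```

The grid minimizer matches the closed form to grid resolution, confirming the first-order condition -1 + \gamma_1 u + u - \gamma_2(1-u) = 0.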
5.3 Separable Exponential/Linear Utility
Example 5.1 Let us consider the separable utility case with U_1(x) = U_2(x) = x; that is, both the principal and the agent are risk-neutral. Also assume that g(t,x,u,v) = g(t,u,v). (The case with a linear cost in x could be handled as in the previous example.) From (3.38) it is clear that we have to maximize

E^u\Big[ \int_0^T [u_s v_s - g(s, u_s, v_s)]\, ds \Big].
Assume that g is such that there is a pair (u, v) which maximizes

u_s v_s - g(s, u_s, v_s).

Then (u, v) is the optimal pair, and it is deterministic. The optimal contract is

C_T = J_1(Y^{A,1}_T + G_T) = R + \int_0^T g_u(s, u_s, v_s)\, dB^u_s + \int_0^T g(s, u_s, v_s)\, ds,

which is equivalent to

C_T = R + \int_0^T \frac{g_u(s, u_s, v_s)}{v_s}\, dX_s + \int_0^T [g(s, u_s, v_s) - u_s g_u(s, u_s, v_s)]\, ds.

Note that if g_u is not a function of t, then u, v are constant and the contract is linear in X_T.
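The pointwise maximization of u_s v_s - g(s, u_s, v_s) is an ordinary static optimization. A toy sketch with a hypothetical cost g(u,v) = u^2/2 + v^4/4 (an assumption for illustration, not a case treated in the paper), for which the first-order conditions v = u and u = v^3 give u = v = 1 with value 1/4:

```python
import numpy as np

# Pointwise maximization from Example 5.1 for an assumed cost
# g(u, v) = u^2/2 + v^4/4: maximize u*v - u^2/2 - v^4/4 by grid search.
# FOCs: v = u and u = v^3, so u = v = 1 with maximal value 1/4.
uu, vv = np.meshgrid(np.linspace(0, 2, 401), np.linspace(0, 2, 401))
objective = uu * vv - uu**2 / 2 - vv**4 / 4
i, j = np.unravel_index(np.argmax(objective), objective.shape)
u_star, v_star = uu[i, j], vv[i, j]
print(u_star, v_star, objective[i, j])
```

Here g_u(u,v)/v = u/v = 1 at the optimum, so this toy cost also happens to produce a linear contract, consistent with the remark above.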
Example 5.2 Let us consider now the case U_1(x) = x, U_2(x) = -e^{-\gamma_2 x}. Also assume that g(t,x,u,v) = g(t,u,v). From (3.38) it is clear that we have to minimize

E^u \exp\Big\{ -\gamma_2 \Big( \int_0^T [u_s v_s - g(s, u_s, v_s)]\, ds + \int_0^T [v_s - g_u(s, u_s, v_s)]\, dB^u_s \Big) \Big\}.

As in (5.11), the solution will be deterministic, and it minimizes

-u_s v_s + g(s, u_s, v_s) + \frac{\gamma_2}{2} [v_s - g_u(s, u_s, v_s)]^2,
and the optimal contract is of the same form as in the previous example:

C_T = R + \int_0^T \frac{g_u(s, u_s, v_s)}{v_s}\, dX_s + \int_0^T [g(s, u_s, v_s) - u_s g_u(s, u_s, v_s)]\, ds.

For example, if v \equiv 1 and g(t,u,v) = u^2/2, it is easy to see that u \equiv 1 and C_T = R - \frac{T}{2} + X_T - X_0.
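The closing claim u \equiv 1 can be verified by minimizing the pointwise expression numerically for several values of \gamma_2 (the values 0.5, 1, 4 below are arbitrary sample parameters):

```python
import numpy as np

# Check of Example 5.2's special case: with v = 1 and g(u) = u^2/2
# (so g_u = u), the pointwise minimizer of
#   -u + u^2/2 + (g2/2)*(1 - u)^2
# is u = 1 for every g2 > 0, since the FOC -1 + u - g2*(1 - u) = 0
# gives u*(1 + g2) = 1 + g2.
u = np.linspace(0.0, 2.0, 200_001)
minimizers = []
for g2 in (0.5, 1.0, 4.0):
    obj = -u + u**2 / 2 + (g2 / 2) * (1 - u)**2
    minimizers.append(u[np.argmin(obj)])
print(minimizers)
```

The minimizer is 1 independently of the principal's risk aversion, which is why the contract reduces to the fixed form C_T = R - T/2 + X_T - X_0.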
6 Conclusion
We provide a general theory for Principal-Agent problems with hidden action in models driven by Brownian Motion. We analyze both the agent's and the principal's problem in the weak formulation, thus having a consistent framework. In the case of separable utility and quadratic cost function, we find the optimal solution for general utility functions. In general, however, the question of the existence of an optimal solution remains open.

An important application of the Principal-Agent theory is the problem of optimal compensation of executives. For that application, other, more general forms of compensation should be considered, such as the possibility for the agent to cash in the contract at a random time. We leave these problems for future research.
Then

E\{ |\nabla M^\varepsilon_T - \nabla M_T|^2 \}
= E\Big\{ \Big| \int_0^1 \Big[ M^{\lambda\varepsilon}_T \int_0^T \Delta u_t\, dB_t - M_T \int_0^T \Delta u_t\, dB_t - M^{\lambda\varepsilon}_T \int_0^T (u_t + \lambda\varepsilon\Delta u_t)\Delta u_t\, dt + M_T \int_0^T u_t \Delta u_t\, dt \Big] d\lambda \Big|^2 \Big\}
\le C \int_0^1 E\Big\{ |M^{\lambda\varepsilon}_T - M_T|^2 \Big| \int_0^T \Delta u_t\, dB_t \Big|^2 + |M^{\lambda\varepsilon}_T - M_T|^2 \Big| \int_0^T (u_t + \lambda\varepsilon\Delta u_t)\Delta u_t\, dt \Big|^2 + |M^{\lambda\varepsilon}_T|^2 \Big| \int_0^T [ (u_t + \lambda\varepsilon\Delta u_t)\Delta u_t - u_t\Delta u_t ]\, dt \Big|^2 \Big\} d\lambda
\le C \int_0^1 \Big( \big( E\{ |M^{\lambda\varepsilon}_T - M_T|^4 \} \big)^{\frac12} + \big( E\{ |M^{\lambda\varepsilon}_T - M_T|^4 \} \big)^{\frac12} \Big( \int_0^T E\{ |(u_t + \lambda\varepsilon\Delta u_t)\Delta u_t|^4 \}\, dt \Big)^{\frac12} + \varepsilon^2 \big( E\{ |M^{\lambda\varepsilon}_T|^4 \} \big)^{\frac12} \Big) d\lambda.

Then by (7.2) and Assumption A3 (iii) we prove the result.
We now show (3.3); that is, we show that

\lim_{\varepsilon \to 0} \nabla V_1^\varepsilon = E\{ \nabla M_T U_1 + M_T \partial_2 U_1 \nabla G_T \}.

Note that we have

\nabla V_1^\varepsilon = \frac{V_1^\varepsilon - V_1}{\varepsilon} = E\Big\{ \nabla M^\varepsilon_T U_1^\varepsilon + M_T \frac{U_1^\varepsilon - U_1}{\varepsilon} \Big\}.   (7.4)

As for the limit of the first term on the right-hand side, we can write

\nabla M^\varepsilon_T U_1^\varepsilon - \nabla M_T U_1 = [\nabla M^\varepsilon_T - \nabla M_T] U_1^\varepsilon + \nabla M_T [U_1^\varepsilon - U_1].

By Assumption A3 (iii) and the above L^2 bounds on \nabla M^\varepsilon_T, this is integrable uniformly with respect to \varepsilon, so the expected value (under Q) converges to zero, which is what we need. As for the limit of the second term on the right-hand side of (7.4), notice that we have

M_T \lim_{\varepsilon \to 0} \frac{U_1^\varepsilon - U_1}{\varepsilon} = M_T \partial_2 U_1 \nabla G_T.   (7.5)
We want to prove the uniform integrability again. We note that

\frac{U_1^\varepsilon - U_1}{\varepsilon} = \int_0^1 \partial_2 U_1\big( C_T, G_T + \lambda(G^\varepsilon_T - G_T) \big)\, d\lambda \; \nabla G^\varepsilon_T \le \big\{ |\partial_2 U_1(C_T, G_T)| + |\partial_2 U_1(C_T, G^\varepsilon_T)| \big\} |\nabla G^\varepsilon_T|,
where the last inequality is due to the monotonicity of \partial_2 U_1. Therefore, we get

\Big| M_T \frac{U_1^\varepsilon - U_1}{\varepsilon} \Big| \le C \big[ |\partial_2 U_1(C_T, G_T)|^2 + |\partial_2 U_1(C_T, G^\varepsilon_T)|^2 + |\nabla G^\varepsilon_T|^4 + |M_T|^4 \big].

Thus, from Assumption A3 (iii), the left-hand side is uniformly integrable, the expectations of the terms in (7.5) converge as well, and we finish the proof of (3.3).
We now want to prove (3.4). We have

\nabla V_1 = E\{ \nabla M_T U_1 + M_T \partial_2 U_1 \nabla G_T \}
= E\Big\{ M_T U_1 \int_0^T \Delta u_t\, dB^u_t + M_T \partial_2 U_1 \int_0^T g_u \Delta u_t\, dt \Big\}
= E^u\Big\{ Y^{A,1}_T \int_0^T \Delta u_t\, dB^u_t + Y^{A,2}_T \int_0^T g_u \Delta u_t\, dt \Big\}
= E^u\Big\{ \int_0^T \Gamma^A_t \Delta u_t\, dt + \int_0^T \Gamma^B_t\, dB^u_t \Big\},   (7.6)

where

\Gamma^A_t := Z^{A,1}_t + g_u(t, X_t, u_t, v_t) Y^{A,2}_t, \qquad
\Gamma^B_t := Y^{A,1}_t \Delta u_t + Z^{A,1}_t \int_0^t \Delta u_s\, dB^u_s + Z^{A,2}_t \nabla G_t,

and the last equality is obtained from Itô's rule and the definitions of Y^{A,i}, Z^{A,i}. We need to show that

E^u \int_0^T \Gamma^B_t\, dB^u_t = 0.
We want to use Lemma 3.2 in the last two lines of (7.6), with the integrand \Gamma^B and

W_t = Y^{A,1}_t \int_0^t \Delta u_s\, dB^u_s + Y^{A,2}_t \int_0^t g_u(s) \Delta u_s\, ds, \qquad A_t = \int_0^t \Gamma^A_s \Delta u_s\, ds.

From the BSDE theory and our assumptions we have

E^u\Big\{ \sup_{0\le t\le T} \big( |Y^{A,1}_t|^2 + |Y^{A,2}_t|^2 \big) + \int_0^T \big( |Z^{A,1}_t|^2 + |Z^{A,2}_t|^2 \big) dt \Big\} < \infty.   (7.7)

From this it is easily verified that

\int_0^T |\Gamma^B_t|^2\, dt < \infty,
so that condition 1) of the lemma is satisfied. Next, we have

E^u\Big\{ \sup_{0\le t\le T} |W_t| \Big\} \le C E^u\Big\{ \sup_{0\le t\le T} \big[ |Y^{A,1}_t|^2 + |Y^{A,2}_t|^2 \big] + \int_0^T |g_u(t)|^2\, dt \Big\} \le C + C E\Big\{ M_T^2 + \int_0^T |g_u(t)|^4\, dt \Big\} < \infty,
thanks to (7.7) and (7.1). Moreover,

E^u\Big\{ \sup_{0\le t\le T} |A_t| \Big\} = E^u\Big\{ \sup_{0\le t\le T} \Big| \int_0^t \big[ Z^{A,1}_s + g_u(s) Y^{A,2}_s \big] \Delta u_s\, ds \Big| \Big\}
\le C E\Big\{ M_T \int_0^T \big| Z^{A,1}_t + g_u(t) Y^{A,2}_t \big|\, dt \Big\}
\le C E\Big\{ |M_T|^4 + \int_0^T \big[ |Z^{A,1}_t|^2 + |g_u(t)|^4 + |Y^{A,2}_t|^2 \big] dt \Big\} < \infty.

The last two bounds ensure that condition 2) of the lemma is satisfied, so that the last term in (7.6) is zero, and we finish the proof of (3.4).

Finally, (3.6) follows directly from (3.4) if u is optimal, since \Delta u_t is arbitrary.
7.2 Proof of Theorem 3.3
We have already shown that we can set \tilde R = R. Recall that

X_t = x + \int_0^t v_s\, dB_s;
M_t = \exp\Big\{ \int_0^t u_s\, dB_s - \frac{1}{2} \int_0^t |u_s|^2\, ds \Big\};
X^A_t = R + \int_0^t g\, ds - \int_0^t u_s g_u\, ds + \int_0^t g_u\, dB_s;
Y^P_t = U_2(X_T - J_1(X^A_T)) + \int_t^T u_s Z^P_s\, ds - \int_t^T Z^P_s\, dB_s;

and that

\nabla X_t = \int_0^t \Delta v_s\, dB_s;
\nabla M_t = M_t \Big[ \int_0^t \Delta u_s\, dB_s - \int_0^t u_s \Delta u_s\, ds \Big];
\nabla \varphi = \varphi_u \Delta u_t + \varphi_v \Delta v_t + \varphi_x \nabla X_t, \quad \varphi = g, g_u;
\nabla X^A_t = \int_0^t \nabla g\, ds - \int_0^t [ g_u \Delta u_s + u_s \nabla g_u ]\, ds + \int_0^t \nabla g_u\, dB_s;
\nabla Y^P_t = U_2'(X_T - J_1(X^A_T)) \Big[ \nabla X_T - \frac{\nabla X^A_T}{U_1'(J_1(X^A_T))} \Big] + \int_t^T [ Z^P_s \Delta u_s + u_s \nabla Z^P_s ]\, ds - \int_t^T \nabla Z^P_s\, dB_s.
To prove (3.37), we need the following result: for any random variable \xi and any p > 0,

E^u\{|\xi|^p\} = E\{ M^u_T |\xi|^p \} \le \sqrt{E\{|M^u_T|^2\}} \sqrt{E\{|\xi|^{2p}\}} \le C \sqrt{E\{|\xi|^{2p}\}};
E\{|\xi|^p\} = E^u\{ [M^u_T]^{-1} |\xi|^p \} \le \sqrt{E^u\{[M^u_T]^{-2}\}} \sqrt{E^u\{|\xi|^{2p}\}} = \sqrt{E\{[M^u_T]^{-1}\}} \sqrt{E^u\{|\xi|^{2p}\}} \le C \sqrt{E^u\{|\xi|^{2p}\}}.   (7.8)
Proof of (3.37): In this proof we use a generic constant p \ge 1 to denote the powers, which may vary from line to line. We assume all the involved powers are always less than or equal to the p_0 in (A5).

First, one can easily show that

\lim_{\varepsilon\to0} E\Big\{ \sup_{0\le t\le T} \big[ |X^\varepsilon_t - X_t|^p + |M^\varepsilon_t - M_t|^p + |X^{A,\varepsilon}_t - X^A_t|^p \big] + \int_0^T \big[ |g^\varepsilon - g|^p + |g^\varepsilon_u - g_u|^p \big] dt \Big\} = 0.
Using the arguments in Lemma 3.1 we have

E^{u^\varepsilon}\Big\{ \Big[ \int_0^T |Z^{P,\varepsilon}_t|^2\, dt \Big]^p \Big\} \le C < \infty,

which, by applying (7.8) twice, implies that

E^u\Big\{ \Big[ \int_0^T |Z^{P,\varepsilon}_t|^2\, dt \Big]^p \Big\} \le C < \infty.

Note that

Y^{P,\varepsilon}_t - Y^P_t = U_2^\varepsilon - U_2 + \int_t^T \big( \varepsilon\Delta u_s Z^{P,\varepsilon}_s + u_s [ Z^{P,\varepsilon}_s - Z^P_s ] \big)\, ds - \int_t^T [ Z^{P,\varepsilon}_s - Z^P_s ]\, dB_s
= U_2^\varepsilon - U_2 + \int_t^T \varepsilon\Delta u_s Z^{P,\varepsilon}_s\, ds - \int_t^T [ Z^{P,\varepsilon}_s - Z^P_s ]\, dB^u_s.
Using the arguments in Lemma 3.1 again we get

\lim_{\varepsilon\to0} E^u\Big\{ \sup_{0\le t\le T} |Y^{P,\varepsilon}_t - Y^P_t|^p + \Big[ \int_0^T |Z^{P,\varepsilon}_t - Z^P_t|^2\, dt \Big]^p \Big\} = 0,

which, together with (7.8), implies that

\lim_{\varepsilon\to0} E\Big\{ \sup_{0\le t\le T} |Y^{P,\varepsilon}_t - Y^P_t|^p + \Big[ \int_0^T |Z^{P,\varepsilon}_t - Z^P_t|^2\, dt \Big]^p \Big\} = 0.   (7.9)

Next, recalling (3.19), one can easily show that

\lim_{\varepsilon\to0} E\Big\{ \sup_{0\le t\le T} \big[ |\nabla X^\varepsilon_t - \nabla X_t|^p + |\nabla M^\varepsilon_t - \nabla M_t|^p + |\nabla X^{A,\varepsilon}_t - \nabla X^A_t|^p \big] + \int_0^T \big[ |\nabla g^\varepsilon - \nabla g|^p + |\nabla g^\varepsilon_u - \nabla g_u|^p \big] dt \Big\} = 0.
Then, similarly to (7.9), one can prove that

\lim_{\varepsilon\to0} E\Big\{ \sup_{0\le t\le T} |\nabla Y^{P,\varepsilon}_t - \nabla Y^P_t|^p \Big\} = 0.

In particular,

\lim_{\varepsilon\to0} \nabla V_2^\varepsilon = \lim_{\varepsilon\to0} \nabla Y^{P,\varepsilon}_0 = \nabla Y^P_0.

The proof is complete.
References
[1] J. Cvitanic, X. Wan and J. Zhang, First-Best Contracts for Continuous-Time
Principal-Agent Problems. Working paper, University of Southern California (2004).
[2] P. DeMarzo and Y. Sannikov, A Continuous-Time Agency Model of Optimal Contracting and Capital Structure, working paper, Stanford University.
[3] J. Detemple, S. Govindaraj, M. Loewenstein, Optimal Contracts and Intertemporal
Incentives with Hidden Actions, working paper, Boston University, (2001).
[4] I. Ekeland, On the variational principle, J. Math. Anal. Appl. 47 (1974), 324-353.
[5] B. Holmstrom and P. Milgrom, Aggregation and Linearity in the Provision of Intertemporal Incentives, Econometrica 55 (1987), 303-328.
[6] J. Hugonnier and R. Kaniel, Mutual Fund Portfolio Choice in the Presence of Dynamic
Flows. Working paper, University of Lausanne, (2001).
[7] D. G. Luenberger, Optimization by Vector Space Methods. Wiley-Interscience, (1997).
[8] J. Ma, and J. Yong, Forward-Backward Stochastic Differential Equations and Their
Applications. Lecture Notes in Math. 1702. Springer, Berlin, (1999).
[9] H. Ou-Yang, Optimal Contracts in a Continuous-Time Delegated Portfolio Management Problem, Review of Financial Studies 16 (2003), 173-208.
[10] Y. Sannikov, A Continuous-Time Version of the Principal-Agent Problem, working
paper, UC Berkeley.
[11] H. Schattler and J. Sung, The First-Order Approach to the Continuous-Time Principal-Agent Problem with Exponential Utility, J. of Economic Theory 61 (1993), 331-371.
34
-
7/30/2019 Cw z Submittedgjjjh Jul 05
36/36
[12] J. Sung, Lectures on the theory of contracts in corporate finance, preprint, University
of Illinois at Chicago (2001).
[13] J. Sung, Linearity with project selection and controllable diffusion rate in continuous-time principal-agent problems, RAND J. of Economics 26 (1995), 720-743.
[14] N. Williams, On Dynamic Principal-Agent Problems in Continuous Time. Working
paper, Princeton University, (2004).
[15] J. Yong, Completeness of Security Markets and Solvability of Linear Backward
Stochastic Differential Equations. Working paper, University of Central Florida,
(2004).
[16] J. Yong and X.Y. Zhou, Stochastic Controls: Hamiltonian Systems and HJB Equations, Springer-Verlag, New York, (1999).