
Journal of Economic Dynamics and Control 21 (1997) 723-737

Chow’s method of optimal control

Michael Reiter

Department of Economics, University of Munich, Ludwigstr. 33/IV, D-80539 München, Germany

(Received 11 May 1995; Final version received 31 July 1996)

Abstract

The note investigates the numerical properties of the method proposed by Chow (1993) to solve stochastic dynamic optimization problems. The results are compared to the standard method of linear quadratic approximation. The properties of these approximation schemes under heteroskedastic noise are examined.

Keywords: Dynamic programming; Linear quadratic approximations; Growth model

JEL classification: C61

1. Chow’s method

The majority of problems in dynamic programming can only be solved numerically. Numerical procedures, however, suffer from the curse of dimensionality: in general, the complexity of the problem grows exponentially with the dimension of the underlying state space. If higher-dimensional problems are to be treated, there is an urgent need for approximate methods that escape the curse of dimensionality. Chow (1993) has proposed a new approximate method to solve stochastic dynamic optimization problems. It is based on first-order conditions and avoids the global calculation of the value function. Since the statement of the method is very general, and no limits to its application are given, it seems necessary to characterize the situations in which the method can be applied successfully, and to compare it with other approximation schemes.

Chow considers dynamic problems of the form (equation numbers beginning with ‘Ch’ refer to the numbering in Chow’s paper)

max_u E_t ∫_t^∞ e^{-β(τ-t)} r(x(τ), u(τ)) dτ    (Ch1)

Helpful comments of three anonymous referees are gratefully acknowledged.



subject to

dx = f(x, u) dt + S(x, u) dw,    (Ch2)

where x is a vector of state variables and u is a vector of controls. Eq. (Ch2) is understood as an Ito differential equation with driving Brownian motion w(t). Chow's method starts from the first-order conditions of this problem

∂r/∂u_i + λ'(∂f/∂u_i) + (1/2) tr[(∂λ/∂x)(∂Σ/∂u_i)] = 0,   i = 1, ..., q,    (Ch9)

βλ_i = ∂r/∂x_i + λ'(∂f/∂x_i) + (1/dt) E_t dλ_i + (1/2) tr[(∂λ/∂x)(∂Σ/∂x_i)],   i = 1, ..., p,    (Ch10)

where λ is the vector of shadow prices of x, Σ is the covariance matrix SS', and the prime denotes transposition. To find the optimal control u for a given state vector x, the method then proceeds by linearizing the function f and the derivatives of r that are required in (Ch9) and (Ch10). This is equivalent to a linear-quadratic approximation of the above programming problem. A special feature of this method is that it allows for heteroskedasticity: the partial derivatives of Σ in (Ch9) and (Ch10) are also linearized in x and u (cf. (Ch12)). λ is approximated by

λ(x) = Hx + h,    (Ch11)

where H and h are unknown, but are to be recovered in the process of the solution. The approximation is taken about x and initial guesses of u, H and h. By an iterative method (Eqs. (Ch15)-(Ch18)) we obtain new estimates of u, H and h. We repeat the approximation with the new values, and iterate the whole process until convergence is achieved.

The method consists of a sequence of linear-quadratic approximations, each taken at different points in (x, u)-space. Since linear-quadratic approximations (about a stationary state) play a prominent role in many economic applications, notably in the recent real business cycle literature, this note compares Chow’s method with the standard method in those cases where it is usually applied. Since Chow (1993) seems to claim a more universal applicability, the present note also evaluates the method in cases where the standard approach cannot be used.

A first important difference between the standard approach of approximation about the stationary state and Chow’s method is that the former gives a linear decision rule u(x), which can be used to analyse the simplified model. In contrast to that, Chow’s method does not give a general formula for u(x), but computes

M. Reiter I Journal of Economic Dynamics and Control 21 (1997) 723- 737 725

the optimal control u for single points x, and the resulting decision rule u(x) is non-linear in x. The method is therefore less convenient for the dynamic analysis of economic models.

To investigate the numerical properties of Chow's method, I will apply it to several variants of a standard model, which is presented in Section 2. The results are given in Section 3. Most of the time I deal with deterministic systems, since they illustrate the main issues more clearly. The stochastic case is treated in Section 3.3. Section 4 discusses general properties of Chow's algorithm such as uniqueness of the solution and convergence, and proposes some modifications.

2. Test model

Numerical methods to solve dynamic programming problems in economics have been given a lot of attention recently. A large part of this research has chosen the standard stochastic growth model as the exemplary application. Taylor and Uhlig (1990) give a detailed comparison of the different ways to solve the discrete time version of this model. Since Chow (1993) gives more room to the continuous time case, I will apply Chow's method to a continuous time version of the stochastic growth model. More concretely, I will concentrate on variants of the problem

max E_t ∫_t^∞ e^{-β(τ-t)} U(C(τ)) dτ    (1)

subject to

dK = I dt + σK dw,    (2)

C = f(K) - I - Φ(K, I),    (3)

where K, C and I denote capital, consumption and investment, respectively, and where Φ(K, I) is a convex adjustment cost function.

3. Numerical results

3.1. Nonlinear dynamics

Chow's method is based on a linear approximation of the dynamic equation and a quadratic approximation of the return function (this has also been proposed by McGrattan, 1990). In the RBC literature (Kydland and Prescott, 1982; Christiano, 1990), on the other hand, it has become common practice first to eliminate the nonlinearity in the dynamic equation by rewriting the return function in terms of state variables and their derivatives (in discrete time: next-period values).


For example, the problem

max E_t ∫_t^∞ e^{-β(τ-t)} U(C(τ)) dτ    (4)

s.t. dK = (f(K) - C) dt + σ dw

would be rewritten as

max E_t ∫_t^∞ e^{-β(τ-t)} U(f(K(τ)) - I(τ)) dτ    (5)

s.t. dK = I dt + σ dw,

so that the dynamic equation is already linear. While the above literature does not cite a specific reason for this procedure, it is important to note that the reformulation brings a significant increase in accuracy, at least in the case of deterministic systems. Linear-quadratic approximations give linear decision rules; using familiar methods of phase space analysis, I will show that they give a correct linear approximation to the exact decision rule about the stationary state if we choose form (5), but not form (4).

Let us first consider the deterministic version (σ = 0) of (4) with logarithmic utility. The corresponding Euler equation is DC = C(f'(K) - β), where D denotes differentiation with respect to time, and we have the dynamic system

DK = f(K) - C,

DC = C(f'(K) - β).

Linearizing this system about the stationary state f'(K*) = β, we get

( DK̃ )   ( f'(K*)         -1 ) ( K̃ )
( DC̃ ) = ( f(K*)f''(K*)    0 ) ( C̃ )    (6)

where the tilde signifies deviations from the stationary state. The system matrix has a positive and a negative eigenvalue, since the system is saddle-point stable. The eigenvector corresponding to the negative eigenvalue (1/2)(f' - √((f')² - 4 f f'')) is

( 1,  (1/2)(f' + √((f')² - 4 f f'')) )',

so that

C̃ = (1/2)( f'(K*) + √(f'(K*)² - 4 f(K*) f''(K*)) ) K̃    (7)

is the linearization of the optimal control about the stationary state.
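The saddle-point structure in (6) and the stable-branch slope (7) can be checked numerically. The following sketch is illustrative: α = 0.4 and K* = 32 follow the calibration of Section 3.2, while β = 0.05 and the Cobb-Douglas technology level θ are assumed values, not taken from the paper.

```python
import math

# Assumed calibration: f(K) = theta*K**alpha with f'(K*) = beta at K* = 32.
beta, alpha, Kstar = 0.05, 0.4, 32.0
theta = beta * Kstar ** (1 - alpha) / alpha       # pins down f'(K*) = beta

f = theta * Kstar ** alpha                        # f(K*)
fp = beta                                         # f'(K*)
fpp = alpha * (alpha - 1) * theta * Kstar ** (alpha - 2)   # f''(K*) < 0

# Eigenvalues of [[f', -1], [f*f'', 0]] solve lam^2 - f'*lam + f*f'' = 0.
disc = math.sqrt(fp * fp - 4 * f * fpp)
lam_minus = (fp - disc) / 2    # negative (stable) root, since f*f'' < 0
lam_plus = (fp + disc) / 2     # positive (unstable) root

# Slope of the saddle path, Ctilde/Ktilde, as in (7):
slope = (fp + disc) / 2

# Eigenvector check: (1, slope)' must satisfy both rows of the system matrix.
row1 = fp * 1 - slope          # should equal lam_minus * 1
row2 = f * fpp * 1             # should equal lam_minus * slope
```

With these numbers the stable eigenvalue is negative, the unstable one positive, and the consumption slope of the saddle path exceeds β, illustrating the gap between (7) and the linearized-dynamics result discussed next.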


The same procedure applied to the linear-quadratic approximation of (4) gives

( DK̃ )   ( f'(K*)   -1 ) ( K̃ )
( DC̃ ) = (   0       0 ) ( C̃ )    (8)

The term f(K*)f''(K*) in (6) has disappeared because of the linearization of the dynamic equation. Looking at the eigenvector to the zero eigenvalue we see that C̃ = f'(K*)K̃ = βK̃, which is different from (7); this is an example where the solution to the linear-quadratic approximation does not give the correct linear approximation of the decision rule about the stationary state. (The above procedure actually is an unusual way of finding the solution of a linear-quadratic programming problem, in case there is a stationary state.)

On the other hand, if we treat the model in the form (5), we find that it is a special case of the general problem max ∫ e^{-βt} R(x, u) dt s.t. Dx = u. It is then easily found that the original as well as the approximated problem lead to

( Dx̃ )   ( 0                          1 ) ( x̃ )
( Dũ ) = ( R_uu^{-1}(R_xx + βR_xu)    β ) ( ũ )    (9)

This shows that the linear-quadratic approximation leads to a first-order approximation of the decision rule about the stationary state. Of course, this is true not only of the new control variable (change in capital, in our example), but also of the control variable of the original problem (consumption), since one is a differentiable function of the other.

The first conclusion is therefore that we should apply Chow's method only after a transformation of the problem into a form with a linear dynamic equation. This will be done in all the examples below.

3.2. Deterministic system with stationary state

In this subsection, I discuss the case that is most favourable to a linear-quadratic approximation, the case of a deterministic model with a stationary state. The outcome of Chow's method will be compared with the standard quadratic approximation about the stationary state.

Consider problem (1)-(3) with utility function (C^γ - 1)/γ, Cobb-Douglas production function f(K) = θK^α, zero variance and zero adjustment costs (σ = Φ(K, I) = 0). The parameter θ is always chosen so that the stationary state value of capital is 32. Since the solution to this problem cannot be found analytically, the 'exact solution' was obtained by a discrete element method on a very fine grid.

Figs. 1 and 2 display the exact solution, the standard quadratic approximation about the stationary state, and Chow's method for values of α = 0.4 and α = 0.9 with logarithmic utility (γ = 0).
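For readers who want to reproduce a benchmark of this kind, the sketch below solves a discrete-time analogue of the model by value function iteration on a grid. This is not the paper's continuous-time discrete element method; the calibration is illustrative and chosen so that, under log utility and full depreciation, the policy has the known closed form K' = αδθK^α against which the grid solution can be checked.

```python
import math

# Discrete-time analogue: V(K) = max_{K'} log(theta*K**alpha - K') + delta*V(K'),
# log utility, full depreciation. Closed-form policy: K' = alpha*delta*theta*K**alpha.
alpha, delta, theta = 0.3, 0.95, 1.0

grid = [0.02 + i * (0.5 - 0.02) / 119 for i in range(120)]   # capital grid
V = [0.0] * len(grid)

for _ in range(150):                       # value function iteration
    V_new = []
    for K in grid:
        best = -float("inf")
        for j, Kp in enumerate(grid):
            c = theta * K ** alpha - Kp    # consumption implied by choice K'
            if c <= 0:
                break                      # grid is increasing: no feasible K' beyond
            best = max(best, math.log(c) + delta * V[j])
        V_new.append(best)
    V = V_new

def policy(K):
    """Greedy policy K'(K) implied by the converged value function."""
    feasible = [(math.log(theta * K ** alpha - Kp) + delta * V[j], Kp)
                for j, Kp in enumerate(grid) if theta * K ** alpha - Kp > 0]
    return max(feasible)[1]

K_test = 0.2
K_closed_form = alpha * delta * theta * K_test ** alpha   # analytical policy
```

The grid policy matches the closed form up to the grid spacing, which is the sense in which a fine-grid solution can serve as an 'exact' benchmark.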


[Figure] Fig. 1. Deterministic model, logarithmic utility, α = 0.4. (Plotted against capital; legend: exact solution, standard approximation, Chow's approximation.)

[Figure] Fig. 2. Deterministic model, logarithmic utility, α = 0.9.

Both approximations are rather accurate near the stationary state (K = 32). The relative performance of the two methods, however, depends on the parameter values. This is further illustrated by Table 1, which provides the second derivative of investment with respect to capital at the steady state, for Chow's approximation


Table 1
Second derivative of control in Chow's approximation (exact solution in parentheses), times 1000, for different parameter values

           γ = 0               γ = -1              γ = -3              γ = 0.5
α = 0.2    -2.3817 (0.1658)    -1.5625 (0.1563)    -1.0569 (0.0897)    -3.8386 (-0.0360)
α = 0.4    -1.6806 (-0.3491)   -1.0156 (-0.1875)   -0.6123 (-0.1134)   -2.8467 (-0.7290)
α = 0.9    -0.1944 (-0.1417)   -0.0966 (-0.0738)   -0.0481 (-0.0379)   -0.3909 (-0.2680)

and for the exact solution.¹ Since both approximations give the correct first derivative of investment, Chow's method is closer to the exact solution in a neighborhood of the stationary state if it approximates the second derivative of investment better than does the standard approximation (in which the second derivative is zero, of course). In Table 1 this is the case only for α = 0.9. Since the standard procedure is much simpler to implement and yields a linear decision rule, it seems preferable for everyday use. The application of Chow's method could be recommended only if there is a strong presumption that it gives the closer approximation. So far there are no results of this kind.

3.3. Stochastic systems

The previous sections have shown that the deterministic growth model can be approximated very well by a linear-quadratic model. This subsection investigates whether the accuracy of the approximation is significantly reduced if the dynamic equation is affected by noise. It is well known that homoskedastic noise has no impact on the control of linear-quadratic models. In that case, therefore, the question reduces to what effect noise has on the control in the exact solution. More interesting for us is the case of heteroskedastic noise, where we can investigate whether it pays to account for heteroskedasticity in the way of (Ch12). To answer these questions, we consider the same model as in the previous subsection, modifying the dynamic equation to

dK = I dt + σK dw

with I = θK^α - C. The error variance depends on capital in the way allowed in Eq. (Ch12), so that no approximation is needed. Figs. 3 and 4 compare the exact solution with the standard quadratic approximation (which is not affected by noise because of certainty equivalence) and the outcome of Chow's method.
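As an aside, a controlled diffusion of this kind is easy to simulate with an Euler-Maruyama scheme. The sketch below is illustrative only (the parameter values and the constant-investment policy are hypothetical, not taken from the paper) and is not part of the author's solution method.

```python
import math
import random

def simulate_capital(K0, theta, alpha, sigma, consumption, T=10.0, dt=0.01, seed=0):
    """Euler-Maruyama simulation of dK = (theta*K**alpha - C) dt + sigma*K dw.

    `consumption` is a policy function C(K); sigma = 0 gives the deterministic path.
    """
    rng = random.Random(seed)
    K = K0
    steps = int(round(T / dt))
    for _ in range(steps):
        I = theta * K ** alpha - consumption(K)   # investment I = f(K) - C
        dw = rng.gauss(0.0, math.sqrt(dt))        # Brownian increment over dt
        K = K + I * dt + sigma * K * dw
    return K

# Deterministic check: with sigma = 0 and a policy implying constant investment I0
# (i.e. C(K) = theta*K**alpha - I0), the Euler scheme gives exactly K0 + I0*T.
I0 = 0.5
K_end = simulate_capital(K0=32.0, theta=1.0, alpha=0.4, sigma=0.0,
                         consumption=lambda K: 1.0 * K ** 0.4 - I0, T=10.0)
```

The multiplicative form σK dw keeps the simulated capital stock positive for the moderate noise levels considered in this section.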

¹ The second derivative of the exact solution can be obtained analytically by differentiating the Bellman equation three times and using the first-order condition. The calculations are lengthy but can be left to algebraic packages such as Mathematica, Maple or MuPAD (which I have used). For Chow's method, the numerical calculation of the second derivative turned out to be stable.


[Figure] Fig. 3. Stochastic model, logarithmic utility, α = 0.4, σ = 0.05. (Plotted against capital.)

[Figure] Fig. 4. Stochastic model, logarithmic utility, α = 0.4, σ = 0.2.

Fig. 3 represents a case with noise of considerable magnitude (σ = 0.05, which means that the standard deviation per period is 5%), Fig. 4 represents a high-noise environment (σ = 0.2).

Comparing the true solution with the standard approximation, we see that the noise has a depressing effect on capital formation. Allowing for heteroskedasticity


in the way of (Ch12), however, greatly overstates this negative effect; this may appear surprising, so a brief explanation is in order.

First it should be mentioned that the poor approximation is not due to the fact that Chow's method consists of a sequence of linear-quadratic problems. The one-step quadratic approximation about the stationary state leads to a very similar picture if it allows for heteroskedastic noise. To examine the effect of noise on investment, let us write investment as a function of K and σ². From Eq. (A.3) in the appendix we then have the analytical result that the derivative of investment w.r.t. noise at the stationary state value of capital K* is given by

∂I(K; σ²)/∂σ² |_{K=K*, σ²=0} = - [ ∂³V(K)/∂(ln K)³ - ∂²V(K)/∂(ln K)² ] / [ 2K* (∂²U(θK^α - I)/∂I²) (β - ∂I(K; σ²)/∂K) ],

where V(K) is the value function of the deterministic model. We see that the effect of noise on the change in capital depends crucially on the difference of the third and the second derivative of the value function w.r.t. the logarithm of capital. While the quadratic approximation gives all the first and second derivatives correctly, it may obviously provide a very poor estimate of the third derivative, since in a quadratic model, ∂³V(K)/∂K³ = 0 by construction. In our model, the exact value of ∂³V(K)/∂(ln K)³ equals 0.212, while the quadratic approximation gives -9.500. Subtracting ∂²V(K)/∂(ln K)² = 2.167, we see that the quadratic approximation overestimates ∂I(K; σ²)/∂σ² by a factor of about five, as observed

in the pictures.

This problem is not confined to our specific test model. The Appendix shows for general one-dimensional control problems that heteroskedastic noise of the form (Ch12) in the quadratic approximation always has a negative influence on the optimal change in the state variable at a stable stationary-state point. Therefore, allowing for heteroskedasticity in the linear-quadratic approximation seems to give a very biased picture of the effect of uncertainty on the optimal control, and should only be used if we know that the third derivative of the value function is close to zero. On the other hand, ignoring noise and working with the standard deterministic approximation still yields a good approximation in the sense that the deviation from the true solution is approximately a constant.

3.4. Model without stationary state

The last subsections have dealt with the standard case of linear-quadratic approximations about a stationary state. The standard approach needs a stationary state; otherwise it is not clear about what point the approximations should be taken. Chow's method, on the other hand, can be applied to a model without stationary state, since it iteratively determines the point about which to approximate. The question is how well this approximation might work.


[Figure] Fig. 5. Deterministic linear model, logarithmic utility. (Plotted against capital productivity.)

A simple example is given by the deterministic growth model with linear production function f(K) = θK, where θ ≠ β. The system has a nonstationary solution, as will be shown below. First, the case without adjustment costs illustrates an interesting pitfall: while the original programming problem is perfectly well defined, the quadratic approximation is not! For a quadratic utility function, it is easy to see that, if θ > β, we can always increase utility by postponing consumption. If θ < β, we can always increase utility by consuming earlier. (I have assumed that the budget constraint does not allow consuming perpetually at the maximum utility level; this would not give an interesting programming problem.) Of course, the algebraic equations still yield a solution, but since it does not conform to the solution of an optimization problem, it is not clear how to interpret the result.

To circumvent this problem, we now introduce the quadratic adjustment cost function Φ(K, I) = φI²/K, which is linearly homogeneous in K and I. Then the quadratic approximation is strictly concave in K as well as in I, and the maximization problem is well defined.

With utility function (C^γ - 1)/γ, the linear model can be solved analytically: optimal investment is given by

I = [ γ - 2βφ - 1 + √( (1 - γ + 2βφ)² + 4(θ - β)φ(1 - 2γ) ) ] / ( 2φ(1 - 2γ) ) · K.

The exact decision rule, as well as the outcome of Chow’s method, are linear in capital. We can therefore characterize them by a single number, the investment-


[Figure] Fig. 6. Deterministic linear model, utility = (C^γ - 1)/γ, θ = 0.07. (Plotted against γ.)

capital ratio. Figs. 5 and 6 investigate the accuracy of the approximation along two dimensions.

In Fig. 5, the logarithmic utility function is used, and the control I is plotted as a function of capital productivity θ. We see that the approximation is exact for θ = β. In that case, the change in capital is zero and we have a stationary state. The greater the difference between β and θ, the greater is the change of the true solution over time, and the smaller is the relevance of a quadratic approximation at the given point K. It is therefore not surprising that the quality of the approximation rapidly deteriorates. For values of θ that are too far apart from β, the method does not

even converge (see Section 4).

Fig. 6 uses the value θ = 0.07, and plots the exact and the approximate solution as a function of the parameter γ in the utility function U(C) = (C^γ - 1)/γ. We see that the approximation consistently underestimates the true solution, but the dependence of the solution on γ is qualitatively correct.

The results of this subsection are easily understood. If the model is formulated in a way that the state variable is close to stationarity, the approximation may turn out relatively well: the situation is then similar to the case already investigated in Section 3.2. In other cases, the quadratic approximation at a given point has not much significance, since the solution leaves the neighbourhood of this point very quickly, so that the outcome is very inaccurate or even meaningless. There is of course an easy remedy to this problem in most cases. It is a standard procedure to rewrite models with a nonstationary steady state by using suitably discounted variables, so that the solution is stationary in the new variables. If a model does not even have a steady state, we can at least reduce the average drift component of the solution, and thereby increase the precision of the solution.
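The detrending step just described can be illustrated with a toy calculation (the growth rate and variable names are hypothetical, chosen only for illustration): a path growing at a constant rate g becomes stationary once it is deflated by e^{gt}.

```python
import math

g = 0.02          # hypothetical trend growth rate
K0 = 32.0         # initial capital

# A nonstationary path K(t) = K0 * e^{g t} ...
K = [K0 * math.exp(g * t) for t in range(50)]

# ... becomes stationary in the discounted variable K_hat(t) = K(t) * e^{-g t},
# which is the form in which a local quadratic approximation makes sense.
K_hat = [Kt * math.exp(-g * t) for t, Kt in enumerate(K)]

drift = max(K_hat) - min(K_hat)   # zero (up to rounding) for an exact trend
```

For a solution that only approximately follows a trend, the same deflation does not remove the movement entirely, but it shrinks the drift that the local approximation has to cope with.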


4. Properties of the algorithm

Chow's method is iterative in two ways. In the 'outer loop', we have a sequence of linear-quadratic approximations, each about (x, u_k), where u_k is the control obtained from the last iteration. In the 'inner loop', for each of these quadratic approximation problems, the optimal control should be found by iterations (Ch15)-(Ch18). For both types of iterations, we have to ask about existence and uniqueness of the solution, and convergence of the algorithm.

Let us begin with the inner loop. If we ignore heteroskedasticity, as recommended in Section 3.3, we can apply standard value iteration algorithms that converge to the unique solution of the maximization problem (see, for example, Kushner, 1971, Ch. 11). On the other hand, Chow's equations (Ch15)-(Ch18) have multiple solutions, since inserting (Ch15) into (Ch17) gives a quadratic matrix equation in H.
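The multiplicity of solutions to a quadratic matrix equation is easiest to see in the scalar case. The sketch below uses a standard scalar linear-quadratic regulator, not Chow's equations (Ch15)-(Ch18), which are not reproduced here: its algebraic Riccati equation has two roots, and only one of them stabilizes the closed loop.

```python
import math

# Scalar LQ regulator: minimize ∫ (q*x^2 + r*u^2) dt subject to Dx = a*x + b*u.
# The algebraic Riccati equation (b^2/r)*p^2 - 2*a*p - q = 0 is quadratic in p,
# so it has two roots; only one of them yields a stable closed loop.
a, b, q, r = 1.0, 1.0, 1.0, 1.0   # illustrative values

disc = math.sqrt(a * a + q * b * b / r)
p_plus = r * (a + disc) / (b * b)     # stabilizing root
p_minus = r * (a - disc) / (b * b)    # non-stabilizing root

def riccati_residual(p):
    """Residual of the scalar algebraic Riccati equation; zero at both roots."""
    return (b * b / r) * p * p - 2.0 * a * p - q

def closed_loop(p):
    """Closed-loop coefficient a - (b^2/r)*p under the feedback u = -(b/r)*p*x."""
    return a - (b * b / r) * p
```

Both roots satisfy the equation exactly, but only `p_plus` makes the closed-loop coefficient negative; picking the stabilizing solution is what dedicated Riccati solvers (such as those in the Schur and symplectic QR literature cited in the references) do automatically.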

² Details are contained in an unpublished appendix which is available from the author (send an e-mail to [email protected]).

³ For example, consider the model of Section 3.4 with φ = 1, β = 0.05, θ = 0.075 and a strictly concave utility function. For K = 16, u = 0.1279 as well as u = 0.2239 form a solution.

⁴ As an example, take the model of Section 3.4 with logarithmic utility and parameters θ = 0.12 and φ = 1. Starting from the correct solution u = 0.06033, after several hundred steps the iterations bounce back and forth between u = 0.1037 and u = -0.05336.


happened in circumstances such as those of Section 3.4 where it cannot be expected to give a good approximation anyway.

5. Conclusions

The linear-quadratic approximation performs well in one type of situation: approximation about a stationary state with relatively low variance. In this case, sometimes the standard method, sometimes Chow's method yields the more accurate result. Given the simplicity of the standard approach, and the fact that it yields a linear decision rule that is very convenient for theoretical analyses, the standard approach seems preferable to the method of Chow (1993) in most cases. In nonstationary models, Chow's method must be applied with caution, since the outcomes can be very inaccurate. A general result of this note is that allowing for heteroskedasticity in linear-quadratic approximations should be avoided, unless one knows that the exact value function is almost quadratic.

Finally, one should be aware that most of the results of this note relate to one particular model. The stochastic growth model is well-behaved and uses very smooth functional forms. So it is certainly very favourable for linear-quadratic approximations. Any positive results have to be interpreted with caution, while negative results are probably more conclusive.

Appendix. Effect of noise on control at stationary state

Consider the problem of maximizing

max_u ∫_0^∞ e^{-βt} R(x, u) dt,    (A.1)

where we assume w.l.o.g. that the scalar control u is the derivative of the scalar state variable x, so that

dx = u dt + σx dw.

Writing the value function V as well as the optimal control u as functions of x and σ², the HJB equation for this problem says that

βV(x; σ²) = R(x, u(x; σ²)) + V_x(x; σ²) u(x; σ²) + (1/2) V_xx(x; σ²) σ²x².

Defining y = ln x, w = u/x, W(y) = V(e^y) = V(x) and R*(y, w) = R(x, u), we get

βW(y; σ²) = R*(y, w(y; σ²)) + W_y(y; σ²) w(y; σ²) + (1/2)(W_yy(y; σ²) - W_y(y; σ²)) σ².


Differentiating w.r.t. σ² at σ² = 0 and using the envelope theorem we find

β ∂W(y; σ²)/∂σ² = (∂W_y(y; σ²)/∂σ²) w(y; σ²) + (1/2)(W_yy(y; σ²) - W_y(y; σ²)).

After differentiating w.r.t. y we have at the point u = w = 0

(β - w_y(y; σ²)) ∂W_y(y; σ²)/∂σ² = (1/2)(W_yyy(y; σ²) - W_yy(y; σ²)).    (A.2)

Using (A.2) and the first-order condition R_u(x, u(x; σ²)) + V_x(x; σ²) = 0 we obtain

∂u(x; σ²)/∂σ² = (-1/R_uu) ∂V_x(x; σ²)/∂σ² = (-1/R_uu)(1/x) ∂W_y(y; σ²)/∂σ²

= - (W_yyy(y; σ²) - W_yy(y; σ²)) / (2xR_uu (β - w_y(y; σ²))).    (A.3)

Assuming R_uu(x*, 0) < 0 and using (A.3), we find for a stable stationary state x* > 0 that

sign ∂u/∂σ² = sign (W_yyy(y; σ²) - W_yy(y; σ²)),

where we have used the fact that stability requires u_x = w_y ≤ 0. If we now consider a quadratic approximation to (A.1), the value function is of the form V(x) = v_0 + v_1 x + v_2 x² with v_2 < 0, and we have

W_yyy(y; σ²) - W_yy(y; σ²) = ∂³V(x; σ²)/∂(ln x)³ - ∂²V(x; σ²)/∂(ln x)² = 4v_2 x² < 0.    (A.4)

Eqs. (A.3) and (A.4) imply that in a quadratic model, noise always has a negative effect on the control at a stable stationary state x* > 0 (at a stable stationary state x* < 0, noise always has a positive effect on the control).
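The identity (A.4) is easy to check numerically. The sketch below uses arbitrary illustrative values for v_1, v_2 and x (not taken from the paper) and compares finite-difference derivatives of W(y) = V(e^y) in log-capital with the closed form 4v_2 x².

```python
import math

# Quadratic value function V(x) = v0 + v1*x + v2*x^2 with v2 < 0, as in (A.4).
v0, v1, v2 = 1.0, 0.5, -0.3   # illustrative values, not from the paper
x = 2.0
y = math.log(x)

def W(y):
    """Value function expressed in log-capital: W(y) = V(e^y)."""
    ex = math.exp(y)
    return v0 + v1 * ex + v2 * ex * ex

h = 1e-2
# Central finite differences for the second and third derivatives of W.
W_yy = (W(y + h) - 2 * W(y) + W(y - h)) / h ** 2
W_yyy = (W(y + 2 * h) - 2 * W(y + h) + 2 * W(y - h) - W(y - 2 * h)) / (2 * h ** 3)

diff = W_yyy - W_yy            # finite-difference value of the left side of (A.4)
closed_form = 4 * v2 * x * x   # right side of (A.4): negative whenever v2 < 0
```

The difference W_yyy - W_yy comes out negative, matching the sign result that drives the negative noise effect in (A.3)-(A.4).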

References

Bunse-Gerstner, A. and V. Mehrmann, 1986, A symplectic QR like algorithm for the solution of the real algebraic Riccati equation, IEEE Transactions on Automatic Control AC-31, 1104-1113.

Chow, G.C., 1993, Optimal control without solving the Bellman equation, Journal of Economic Dynamics and Control 17, 621-630.

Christiano, L.J., 1990, Solving the stochastic growth model by linear-quadratic approximation and by value-function iteration, Journal of Business and Economic Statistics 8(1), 23-26.

Kushner, H., 1971, Introduction to Stochastic Control, Holt, Rinehart and Winston, New York.

Kydland, F. and E. Prescott, 1982, Time to build and aggregate fluctuations, Econometrica 50(6), 1345-1370.

Laub, A., 1979, A Schur method for solving algebraic Riccati equations, IEEE Transactions on Automatic Control AC-24, 913-925.

McGrattan, E.R., 1990, Solving the stochastic growth model by linear-quadratic approximation, Journal of Business and Economic Statistics 8(1), 41-44.

Taylor, J.B. and H. Uhlig, 1990, Solving nonlinear stochastic growth models: A comparison of alternative solution methods, Journal of Business and Economic Statistics 8(1), 1-17.