Senior Paper Rough Draft


Description

This is a working draft of my senior paper entitled "Numerical Solution of Parabolic Partial Differential Equations with Applications in Finance." Because it is a draft, there will be grammatical errors, missing details, and unreferenced material, so I ask for your forbearance on these.

Transcript of Senior Paper Rough Draft

Page 1: Senior Paper Rough Draft

Corey Becker, 5/19/08

Numerical Solution of

Parabolic Partial Differential Equations

with Applications in Finance

Corey Becker

Project Advisor: Dr. Daniel Willis

May 19, 2009

In the world of quantitative finance, there is a wide array of advanced mathematical tools used to help model the various markets. Math and finance collide on a daily basis on Wall Street. One of the countless applications has received Nobel Prize recognition. The work of Fischer Black and Myron Scholes has been at the forefront of mathematical finance since 1973. While some will always debate it, the impact it had on the investment world is indisputable. In order to better understand how one mathematical model could change the world, we will look at the mathematics behind it. The power of the Black-Scholes equation lies in the mathematics that support it.

Taylor Series

The majority of functions f(x) found in mathematics cannot be evaluated exactly by simple means. For example, calculating the everyday functions f(x) = cos(x), e^x, or √x can be problematic without technology. Asking a person to calculate e^x without a calculator would be an awfully challenging assignment. However, if another function can be used to approximate the original function, then one can develop an efficient estimate much more easily. The most common approximating functions are polynomials, and the most common of these is the Taylor polynomial. The Taylor polynomial is used often due to its easy construction and its ability to be utilized by more accurate methods. In order to construct an equation that we can use to accurately calculate approximate values of the original, less tractable, equations, we must generate a polynomial that mimics the behavior of the initial function. One way to ensure this is to set their characteristics equal to each other. By characteristics I mean the position f(a), velocity f'(a), acceleration f''(a), and the value of every other derivative at the point a. Given a position and information as to how that position will change (derivatives), we can develop a function to fit these conditions. The Taylor polynomial is developed from this idea (Atkinson and Han, pp. 1-8):

p_n(x) = f(a) + (x - a) f'(a) + ((x - a)^2 / 2!) f''(a) + ... + ((x - a)^n / n!) f^(n)(a)

       = Σ_{j=0}^{n} ((x - a)^j / j!) f^(j)(a)
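To make this concrete, here is a small sketch in Python (the paper's own programs are in MATLAB; the helper name taylor_poly is our own) that evaluates the partial sum p_n(x) from a list of derivative values. For f(x) = e^x expanded about a = 0, every derivative equals e^0 = 1:

```python
import math

def taylor_poly(derivs, a, x):
    # p_n(x) = sum over j of (x - a)^j / j! * f^(j)(a),
    # where derivs = [f(a), f'(a), ..., f^(n)(a)]
    return sum((x - a)**j / math.factorial(j) * d
               for j, d in enumerate(derivs))

# For f(x) = e^x about a = 0, every derivative is 1, so an
# 8th-order polynomial already approximates e^1 quite well.
approx_e = taylor_poly([1.0] * 9, 0.0, 1.0)
```

Nine derivative values already pin e down to four decimal places, which illustrates why Taylor polynomials are such convenient approximators.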

Difference Method Approximations

The basic idea, as stated above, is that if we know all the information about how the value f(a) changes with the independent variable, then we can create a function that corresponds to the original function but can be used much more easily in computations. It seems reasonable to assume that the more derivatives, or the more information, we have about a function, the more accurate our representation will be. The Taylor series above can be truncated at any derivative n. For example, for a linear approximation we can set p_1(x) = f(a) + (x - a) f'(a), where we have the initial position and the slope but nothing else (thus linear). This can be accurate for points near a; however, assuming that f(x) is not a line, it will not be accurate as we move further from a. Therefore, we can take the linear approximations at several points to extrapolate more values, but they will never be as accurate as higher orders since we have less knowledge about the function's movement. When calculating these linear approximations we make a few alterations to arrive at a very recognizable formula (Brandimarte 295). Since we are modeling the function f(x) we can say:

f(x) = f(a) + (x - a) f'(a) + ((x - a)^2 / 2!) f''(a) + ... + ((x - a)^n / n!) f^(n)(a)

If we set h equal to (x - a) then we get:

f(a + h) = f(a) + h f'(a) + (h^2 / 2!) f''(a) + ... + (h^n / n!) f^(n)(a)

For simplicity's sake we can substitute the variable x for a and truncate the series after the first-derivative term:

f(x + h) = f(x) + h f'(x) + O(h^2)

From this we can perform some algebra to arrive at a familiar formula:

f'(x) = (f(x + h) - f(x)) / h + O(h)

This is the forward approximation for the derivative (often referred to as Newton's difference quotient), which any calculus student should recognize. If we were to add a limit for h and drop the error term O(h), we would see the formal definition of the derivative. We can also formulate the backward difference approximation by taking the step h in the opposite direction (replace h with (-h), then solve for f'(x)):

f(x - h) = f(x) - h f'(x) + (h^2 / 2!) f''(x) - (h^3 / 3!) f'''(x) + ...

f'(x) = (f(x) - f(x - h)) / h + O(h)

The O(h) term in these two equations represents the error incurred when we truncate the approximation at n = 1. It signifies all the information that we are missing; therefore, it indicates the accuracy of our numerical solutions. From here on, one of our objectives will be to reduce this error as much as possible.
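These two one-sided formulas are easy to check numerically. Here is a Python sketch (the paper's programs are in MATLAB; the function names are our own), applied to e^x, whose derivative at x = 1 is e:

```python
import math

def forward_diff(f, x, h):
    # f'(x) ≈ (f(x + h) - f(x)) / h, with O(h) error
    return (f(x + h) - f(x)) / h

def backward_diff(f, x, h):
    # f'(x) ≈ (f(x) - f(x - h)) / h, with O(h) error
    return (f(x) - f(x - h)) / h

# The derivative of e^x at x = 1 is e ≈ 2.71828.
fwd = forward_diff(math.exp, 1.0, 1e-3)
bwd = backward_diff(math.exp, 1.0, 1e-3)
```

Because e^x is convex, the forward estimate overshoots and the backward one undershoots, which is exactly the cancellation the central difference method exploits.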

A third difference method can reduce the error to O(h^2). Taking the difference between the forward and backward expansions and dividing by two steps gives us the following:

f(x + h) = f(x) + h f'(x) + (h^2 / 2!) f''(x) + (h^3 / 3!) f'''(x) + ...

-[ f(x - h) = f(x) - h f'(x) + (h^2 / 2!) f''(x) - (h^3 / 3!) f'''(x) + ... ]

f(x + h) - f(x - h) = 2h f'(x) + (2h^3 / 3!) f'''(x) + ...

Or

f'(x) = (f(x + h) - f(x - h)) / (2h) + O(h^2)

This is more accurate because it takes the average of the first two approximations (Brandimarte 295). Typically, if the forward approximation results in an overestimate, the backward approximation will output an underestimate and vice versa; therefore, averaging the two seems logical.
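The O(h^2) behavior can be verified directly: halving h should cut the central-difference error roughly by a factor of four. A Python sketch (names are our own):

```python
import math

def central_diff(f, x, h):
    # f'(x) ≈ (f(x + h) - f(x - h)) / (2h), with O(h^2) error
    return (f(x + h) - f(x - h)) / (2 * h)

# Error in approximating d/dx e^x at x = 1, for h and h/2:
err1 = abs(central_diff(math.exp, 1.0, 0.1) - math.e)
err2 = abs(central_diff(math.exp, 1.0, 0.05) - math.e)
ratio = err1 / err2   # should be close to 4
```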

Eventually, we will need to approximate the second derivative for our final equation, so we will derive it here. We would prefer an approximation of the second derivative using only function values, without having to calculate first derivatives. If we add the forward and backward difference expansions we arrive at the following:

f(x + h) = f(x) + h f'(x) + (h^2 / 2!) f''(x) + (h^3 / 3!) f'''(x) + ...

+[ f(x - h) = f(x) - h f'(x) + (h^2 / 2!) f''(x) - (h^3 / 3!) f'''(x) + ... ]

f(x + h) + f(x - h) = 2 f(x) + h^2 f''(x) + ...

Or

f''(x) = (f(x + h) - 2 f(x) + f(x - h)) / h^2 + O(h^2)

Here we have a second derivative approximation of f(x) with second order of accuracy (Brandimarte 295). Refer back to this equation when we need to estimate the second derivative of a function.
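As a quick check of this formula (a Python sketch; the paper's own code is MATLAB), we can approximate the second derivative of sin(x), whose exact value is -sin(x):

```python
import math

def second_diff(f, x, h):
    # f''(x) ≈ (f(x + h) - 2 f(x) + f(x - h)) / h^2, with O(h^2) error
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

approx = second_diff(math.sin, 0.5, 1e-3)
exact = -math.sin(0.5)
```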

Direction Fields

One way we can see a graphical representation of these linear approximations is a direction field. This graphic plots line segments with the slope of the function at corresponding values of y. For example, if we wanted to plot a direction field of e^x, it would look something like this:

As you can see, depending on initial conditions, these lines will match up with the instantaneous slope at the respective y-values. Once we are provided with initial conditions, we can isolate the specific set of points/slopes and draw an integral curve like those seen to the right. These graphical representations are not incredibly helpful in the formulation of a numerical solution, but they provide an excellent view of what is really going on.


Euler’s Method

One specific formula that arises from the Taylor expansion is known as Euler's Method. Essentially this method is a first-order Taylor approximation with some different notation, seen below:

For y'(x) = f(x, y), y(x_0) = y_0:

y'(x_n) ≈ (y(x_n + h) - y(x_n)) / h   or   y'(x_n) ≈ (y_{n+1} - y_n) / h

Then, reversing our algebra from earlier, we get:

y_{n+1} = y_n + h y'(x_n)

Using our conditions above we substitute:

y_{n+1} = y_n + h f(x_n, y_n)

To demonstrate this approximation we will take the function y = tan(x), where y' = y^2 + 1 and y(0) = 0. Using a program written in MATLAB with h = 0.2, we are given the following estimates:

x        Actual Value   Euler Approx.   Error
0        0              0               0
0.2000   0.2027         0.2000          -0.0027
0.4000   0.4228         0.4080          -0.0148
0.6000   0.6841         0.6413          -0.0428
0.8000   1.0296         0.9235          -0.1061
1.0000   1.5574         1.2941          -0.2633
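The same table can be regenerated with a short sketch of the forward Euler update y_{n+1} = y_n + h f(x_n, y_n) (written here in Python rather than the paper's MATLAB):

```python
def euler(f, x0, y0, h, steps):
    # Forward Euler: step the solution with the slope at the current point.
    xs, ys = [x0], [y0]
    for _ in range(steps):
        ys.append(ys[-1] + h * f(xs[-1], ys[-1]))
        xs.append(xs[-1] + h)
    return xs, ys

# y' = y^2 + 1 with y(0) = 0 has exact solution y = tan(x).
xs, ys = euler(lambda x, y: y**2 + 1, 0.0, 0.0, 0.2, 5)
# ys reproduces the table: 0, 0.2000, 0.4080, 0.6413, 0.9235, 1.2941
```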

These values are obviously a very rough estimate, but they provide a fine building block. Now, instead of the forward difference approximation, we can use the backward difference approximation, which will give us the backward Euler method. We will start by performing some algebra as before:

f'(x) = (f(x) - f(x - h)) / h

f(x) = f(x - h) + h f'(x)

Yielding the backward Euler method:

y_{n+1} = y_n + h f(x_{n+1}, y_{n+1})

The backward Euler method appears to be a simple change to the Euler equation, but it brings about a much different process and thus a much different result. Difference methods can be classified into two categories, implicit and explicit. Euler's method is explicit because it uses the information at the current point to project forward and estimate the next point n + 1. On the other hand, the backward Euler equation is implicit because it uses both the current solution (y_n) and the slope at the next solution, f(x_{n+1}, y_{n+1}) (usually found using iterative methods), to approximate the differential equation. As one may infer, this extra step of approximating an additional solution at x_{n+1} can be costly in terms of computational effort. However, in some cases implicit methods are the only way to avoid unfortunate situations that arise in equations that are known to be "stiff." These equations will be explained in further detail in the section regarding stability.
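To see what the implicit step involves, here is a Python sketch (the paper's code is MATLAB) that solves y_{n+1} = y_n + h f(x_{n+1}, y_{n+1}) by simple fixed-point iteration. This is only one possible inner solver: it converges only when h|∂f/∂y| < 1, so stiff problems generally need Newton's method instead.

```python
def backward_euler(f, x0, y0, h, steps, inner_iters=50):
    # Implicit Euler: each step solves y_next = y_n + h f(x_next, y_next).
    xs, ys = [x0], [y0]
    for _ in range(steps):
        x_next = xs[-1] + h
        y_next = ys[-1] + h * f(xs[-1], ys[-1])   # explicit Euler as first guess
        for _ in range(inner_iters):
            y_next = ys[-1] + h * f(x_next, y_next)
        xs.append(x_next)
        ys.append(y_next)
    return xs, ys

# Same test problem as before: y' = y^2 + 1, y(0) = 0, h = 0.2.
_, ys = backward_euler(lambda x, y: y**2 + 1, 0.0, 0.0, 0.2, 3)
```

Note that each step now costs many evaluations of f, which is the extra computational effort discussed above.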

Second Order Taylor Series Approximation

As we were able to see in the previous table, there is much room for improvement. One way to make the approximation more accurate, mentioned earlier, is to add another derivative, thus increasing the "order" of the Taylor expansion. Continuing the previous example using tan(x), we will need to compute just one more element:

y''_n = d/dx [y^2 + 1] = d/dx [tan(x)^2 + 1] = 2 tan(x) [tan^2(x) + 1] = 2y(y^2 + 1)

Using this second derivative we revise our estimate:

y_{n+1} = y_n + h (y_n^2 + 1) + (h^2 / 2) · 2 y_n (y_n^2 + 1)

Plugging this equation into the MATLAB program generates the following results:

x        Actual Value   Taylor (2) Approx.   Error
0        0              0                    0
0.2000   0.2027         0.2000               -0.0027
0.4000   0.4228         0.4163               -0.0065
0.6000   0.6841         0.6705               -0.0136
0.8000   1.0296         0.9993               -0.0303
1.0000   1.5574         1.4789               -0.0785
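The second-order update can be sketched in Python as well; the term h^2 y_n (y_n^2 + 1) is (h^2/2) y''_n with y'' = 2y(y^2 + 1) substituted:

```python
def taylor2(y0, h, steps):
    # y_{n+1} = y_n + h y'_n + (h^2 / 2) y''_n for y' = y^2 + 1,
    # using y'' = 2 y (y^2 + 1).
    ys = [y0]
    for _ in range(steps):
        y = ys[-1]
        ys.append(y + h * (y**2 + 1) + h**2 * y * (y**2 + 1))
    return ys

ys = taylor2(0.0, 0.2, 5)
# ys reproduces the table: 0, 0.2000, 0.4163, 0.6705, 0.9993, 1.4789
```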

As was expected, this method was much more accurate but still has unacceptable errors. Therefore, we will look at other ways to create the most accurate approximation possible.

Runge-Kutta and Heun Methods

An idea that was touched on with the central difference method was taking an average of two slopes near x, so now we will test its efficiency. The Runge-Kutta approach to approximating differential equations uses another method to predict f(x_{n+1}, y_{n+1}) and then averages it with f(x_n, y_n) by adding the two and dividing by 2. The specific Runge-Kutta formula we will be using utilizes Euler's method as the predictor for f(x_{n+1}, y_{n+1}). This approach is known as Heun's method and is also a type of trapezoidal method. Using the tan(x) example equation again, Heun's approximation looks like the following:

y_{n+1} = y_n + h [ (f(x_n, y_n) + f(x_{n+1}, y_{n+1})) / 2 ]

Replacing f(x_{n+1}, y_{n+1}) with the Euler approximation and plugging our example in for f(x_n, y_n), we form the following:

y_{n+1} = y_n + (h / 2) [ (y_n^2 + 1) + ((y_n + h (y_n^2 + 1))^2 + 1) ]

Using this formula and the same MATLAB program from before, we generate the following values:

x        Actual Value   Heun Approx.   Error
0        0              0              0
0.2000   0.2027         0.2040         -0.0013
0.4000   0.4228         0.4252         -0.0024
0.6000   0.6841         0.6870         -0.0028
0.8000   1.0296         1.0305         -0.0008
1.0000   1.5574         1.5448         -0.0126
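In Python, Heun's predictor-corrector step can be sketched as follows (the paper's actual program is MATLAB):

```python
def heun(f, x0, y0, h, steps):
    # Average the slope at (x_n, y_n) with the slope at the
    # Euler-predicted point (x_{n+1}, y_n + h f(x_n, y_n)).
    xs, ys = [x0], [y0]
    for _ in range(steps):
        x, y = xs[-1], ys[-1]
        k1 = f(x, y)
        k2 = f(x + h, y + h * k1)
        ys.append(y + (h / 2) * (k1 + k2))
        xs.append(x + h)
    return xs, ys

_, ys = heun(lambda x, y: y**2 + 1, 0.0, 0.0, 0.2, 5)
# ys begins 0, 0.2040, 0.4252, ... as in the table
```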

Below is a graph of the errors to graphically demonstrate the behavior of the numerical method as x increases.

Here is a comparison of the three methods' errors as calculated by the MATLAB program:

x        Euler      2nd Order Taylor   Heun
0        0          0                  0
0.2000   -0.0027    -0.0027            -0.0013
0.4000   -0.0148    -0.0065            -0.0024
0.6000   -0.0428    -0.0136            -0.0028
0.8000   -0.1061    -0.0303            -0.0008
1.0000   -0.2633    -0.0785            -0.0126

After a brief analysis of the errors, we can see we are on the right track. Considering the size of h = 0.2, these numbers are fairly impressive. Once we decrease the increment size we should be able to have a very accurate approximation for our purposes. A further analysis of the errors demonstrates the advantage of the O(h^2) approximation. If we compare the errors of Euler's and Heun's methods in a different way, the significance of h becomes clearer. Calculating e^8 starting with x_0 = 0 (actual value 2980.96):

Method   h = .1                     h = .05
Euler    2048.4 (error = 932.558)   2456.3 (error = 524.658)
Heun     2944.3 (error = 36.658)    2971.4 (error = 9.558)

This output data confirms the premise behind O(h) notation. Assuming h is a very small number, h^2 will be an even smaller number. As in the Euler approximation above, a method with O(h) error will see its error decrease linearly with the size of h. If h is cut in half, then O(h) will likewise be halved. In the case of Heun's method, however, since the error is O(h^2), each decrease in h produces a quadratic decrease in the error. As h is halved above, the error of Heun's method is cut into fourths. One more detail that should be mentioned is the efficiency of these methods. A disadvantage of Heun's method and the backward Euler method pertains to the extra calculations needed to approximate the slopes. Heun's requires an Euler approximation within each step, so naturally it will entail more computational effort. The backward Euler equation requires the user to implement another method to approximate y_{n+1}. The "elapsed time" values reported later give evidence to this fact. These should both be considered in cases that involve complicated and time-consuming functions. Accuracy and efficiency must be considered together when working with more advanced applications and limited computational power, but another aspect that demands even more attention in solving equations numerically is known as stability.
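The halving behavior described above is easy to reproduce. In this Python sketch (our own small test, not the paper's e^8 run) we integrate y' = y from y(0) = 1 to x = 1, whose exact answer is e, with h and h/2; the Euler error falls by about a factor of 2 and the Heun error by about a factor of 4:

```python
import math

def euler_final(f, y0, h, steps):
    # Forward Euler, returning only the final value
    x, y = 0.0, y0
    for _ in range(steps):
        y += h * f(x, y)
        x += h
    return y

def heun_final(f, y0, h, steps):
    # Heun's method, returning only the final value
    x, y = 0.0, y0
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + h, y + h * k1)
        y += (h / 2) * (k1 + k2)
        x += h
    return y

f = lambda x, y: y
euler_errs = [abs(euler_final(f, 1.0, h, round(1 / h)) - math.e)
              for h in (0.1, 0.05)]
heun_errs = [abs(heun_final(f, 1.0, h, round(1 / h)) - math.e)
             for h in (0.1, 0.05)]
```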


Stability

The errors from these methods may be acceptable for some applications, but as we apply them to more demanding functions, we need to have confidence that they will maintain their accuracy. As was mentioned earlier, there are certain functions that do not conform well to explicit methods such as Heun's and Euler's formulas. These models' errors will begin to grow rapidly or "blow up," as it is often described. Functions that do not conform to explicit methods are often called "stiff" functions. The easiest way to understand the behavior of these stiff functions is to plot them. A common equation used to study stability is:

Y' = λY,   Y(0) = 1

Y(x) = e^(λx), and for λ < 0 the numerical solution should likewise decay: y_h(x_n) → 0 as x_n → ∞.

Plugging the function into the Euler model we get the following:

y_{n+1} = y_n + hλ y_n = (1 + hλ) y_n,   n ≥ 0,   y_0 = 1

This would intuitively equate to:

y_n = (1 + hλ)^n,   n ≥ 0

We use MATLAB to execute this method with the following values of λ and h:

A. λ = -1 and h = 0.1
B. λ = -100 and h = 0.1

Output (only the first few values are needed to see the trend):

A. n = 0, 1, 2, 3, 4 → y_n = 1, 0.9, 0.81, 0.729, 0.6561
B. n = 0, 1, 2, 3, 4 → y_n = 1, -9, 81, -729, 6561
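Since the Euler iterates have the closed form y_n = (1 + hλ)^n, both runs can be reproduced in a line or two of Python:

```python
def euler_model(lam, h, n):
    # Closed form of Euler applied to Y' = lam * Y, Y(0) = 1
    return (1 + h * lam)**n

case_a = [euler_model(-1.0, 0.1, n) for n in range(5)]    # 1, 0.9, 0.81, ...
case_b = [euler_model(-100.0, 0.1, n) for n in range(5)]  # 1, -9, 81, -729, 6561
```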

If the magnitude of λ is very large, the value of h has to compensate by being very small. Using the equation y_n = (1 + hλ)^n, n ≥ 0, we can see that if |1 + hλ| is greater than one, then the solution will grow exponentially with the value of n. This is the opposite of how the function should behave. However, if |1 + hλ| is less than one, then raising it to a positive integer n will give us the desired decay effect. So for λ = -100 we need h ≤ 0.02. If h is any bigger than 0.02, example B will not behave properly. There are some cases, however, in which such a small step size would be impractical. Implicit methods give us a versatile alternative to the unstable explicit methods (Atkinson & Han). We can see why with some basic algebra performed on the backward Euler method:

y_{n+1} = y_n + hλ y_{n+1},   n ≥ 0,   y_0 = 1

y_{n+1} = y_n / (1 - hλ)

y_n = (1 - hλ)^(-n)

This relationship makes for a flexible model. For any positive value of h (with λ < 0), |1 - hλ| > 1; therefore, the model will decay as expected. Implicit methods are preferred when it comes to stiff problems for this reason. They may take more computational effort, but in these cases it is definitely worth it. We could take steps with h = 1, use one-fiftieth as many steps as Euler would need, and easily make up for the iterative portion of the backward Euler program (Atkinson & Han). We will keep this in mind as we delve into the final component, partial differential equations.
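The same closed-form check applied to backward Euler shows that the decay survives even a very coarse step (here h = 1 with λ = -100):

```python
def backward_euler_model(lam, h, n):
    # Closed form of backward Euler applied to Y' = lam * Y, Y(0) = 1
    return (1 - h * lam)**(-n)

big_step = [backward_euler_model(-100.0, 1.0, n) for n in range(5)]
# 1, 1/101, 1/101^2, ... : monotone decay despite the huge step
```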

Partial Differential Equations

Since the conception of the Black-Scholes equation in 1973, mathematically inclined financial professionals have used partial differential equations (PDEs) extensively in many areas of analysis and valuation. PDEs have proven over the years to be an effective instrument for the financial engineers seeking a consistent method to value complex financial derivatives. They can model the fluctuation of many variables and how the fluctuations behave over time. This applies well to the field of finance with its numerous and non-trivial variables. Some of these models have analytic solutions, such as the Black-Scholes equation, but primarily the financial engineers, or "academics" as they are often called by "traditionalists," must resort to numerical methods to solve the PDEs. Since the Black-Scholes equation has a known solution, we will use it to demonstrate the process of numerically solving the model for the price of stock options over time.

∂V/∂t + (1/2) σ² S² ∂²V/∂S² + r S ∂V/∂S - r V = 0

Numerically solving an equation like Black-Scholes often begins with classifying it based on certain characteristics. The choice of numerical method to approximate the PDE generally depends on these classifications. The Black-Scholes equation is a second-order, linear, parabolic partial differential equation. The most important aspect to recognize is that through a tedious sequence of algebraic manipulations and substitutions, this model can be simplified into the familiar heat equation:

∂φ/∂t = ∂²φ/∂x²,   x ∈ (0, 1) and t ∈ (0, T)

We will attempt to model the equation using explicit and implicit approximations. With grid steps δt in time and δx in space, the explicit scheme will look like the following:

(φ_{i,j+1} - φ_{i,j}) / δt = (φ_{i+1,j} - 2 φ_{i,j} + φ_{i-1,j}) / (δx)²

Solving for φ_{i,j+1} we get:

φ_{i,j+1} = ρ φ_{i-1,j} + (1 - 2ρ) φ_{i,j} + ρ φ_{i+1,j},   ρ = δt / (δx)²

Given the following initial data, we can begin solving this heat equation numerically:

φ(x, 0) = f(x) = { 2x, 0 ≤ x ≤ 0.5;  2(1 - x), 0.5 ≤ x ≤ 1 }

And boundary conditions:

φ(0, t) = φ(1, t) = 0 for all t

This means the "temperature" at the points x = 0 and x = 1 will be held fixed at 0 (Brandimarte 305). The MATLAB implementation of this model is attached. The numerical solution of the heat equation with δx = 0.1 and δt = 0.001, by the explicit method for t = 0, t = 0.01, t = 0.05, t = 0.1:
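The attached programs are in MATLAB; the explicit update can also be sketched in Python. With δx = 0.1 and δt = 0.001 we have ρ = 0.1 ≤ 1/2, so the scheme stays stable and the triangular profile decays smoothly toward zero:

```python
def heat_explicit(dx, dt, tmax):
    # Explicit scheme:
    # phi[i] at step j+1 = rho*phi[i-1] + (1 - 2*rho)*phi[i] + rho*phi[i+1]
    N = round(1 / dx)
    M = round(tmax / dt)
    rho = dt / dx**2
    # Triangular initial data: phi(x, 0) = 2x for x <= 0.5, else 2(1 - x)
    phi = [2 * i * dx if i * dx <= 0.5 else 2 * (1 - i * dx)
           for i in range(N + 1)]
    for _ in range(M):
        interior = [rho * phi[i - 1] + (1 - 2 * rho) * phi[i] + rho * phi[i + 1]
                    for i in range(1, N)]
        phi = [0.0] + interior + [0.0]   # boundaries held at 0
    return phi

sol = heat_explicit(0.1, 0.001, 0.1)   # rho = 0.1: stable
```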

As was discussed earlier, explicit methods have odd tendencies that are sensitive to changes in step sizes. We would assume that decreasing the size of δx would increase the accuracy of the approximation, but as the output below demonstrates, this is not necessarily always the case. The numerical solution of the heat equation with δx = 0.01 and δt = 0.001, by the explicit method for t = 0, t = 0.01, t = 0.05, t = 0.1:

Obviously, this is a case where the explicit method "blows up." If we implement the implicit method (code attached) with the same parameters, we should maintain a smooth solution. The numerical solution of the heat equation with δx = 0.01 and δt = 0.001, by the implicit method for t = 0, t = 0.01, t = 0.05, t = 0.1:


EDU>> EXPLICITHEAT

Elapsed time is 0.000385 seconds.

EDU>> IMPLICITHEAT

Elapsed time is 0.006542 seconds.

Now that we have seen the heat equation at work using numerical methods, we can understand how it may apply to similar models in finance. The Black-Scholes equation plots the behavior of a stock option's price over time depending on the underlying stock's current price. As we explained above, the variables in the Black-Scholes equation can be reduced to a linear heat equation and the model solved similarly to how we solved the above problem. The ability to reduce such a complicated formula to the heat equation is what makes Black and Scholes' work so popular today. It had remarkable simplicity then and even more so today. It will never claim to be the perfect solution to option pricing (history will prevent that), but the math used to develop it and utilize it will continue to make it one of the most famous financial equations of all time.


Works Cited

Atkinson, Kendall, and Han, Weimin. Elementary Numerical Analysis. 2002.

Brandimarte, Paolo. Numerical Methods in Finance and Economics. Hoboken: John Wiley & Sons, 2006.

Cheney, Ward, and Kincaid, David. Numerical Mathematics and Computing. Belmont: Thomson, 2008.

Code:

% Euler Method applied to y' = -100*y
h = input('value of h: ');
min = 0; max = 4;
steps = (max - min)/h;
x = (min:h:max);
y = ones(1, steps + 1);
for i = 1:steps
    y(i+1) = y(i) + h*(-100*y(i));
end
% establish true values
true = exp(-100*x);
% calculate error
error = true - y;
diary stability
disp('   x      exp(x)   numerical   err1')
[x', true', y', error']
% plot numerical solution
hold off
plot(x, y)
hold on
% plot actual solution
plot(x, true)
diary off

% Heat Explicit Test
tic
dx = .1; dt = .001; tmax = dt*100;
sol = HeatExpl(dx, dt, tmax);
toc
subplot(2,2,1); plot(0:dx:1, sol(:,1));   axis([0 1 0 1])
subplot(2,2,2); plot(0:dx:1, sol(:,11));  axis([0 1 0 1])
subplot(2,2,3); plot(0:dx:1, sol(:,51));  axis([0 1 0 1])
subplot(2,2,4); plot(0:dx:1, sol(:,101)); axis([0 1 0 1])

% Heat Explicit Method
function sol = HeatExpl(deltax, deltat, tmax)
N = round(1/deltax);
M = round(tmax/deltat);
sol = zeros(N+1, M+1);
rho = deltat/(deltax^2);
rho2 = 1 - 2*rho;
vetx = 0:deltax:1;
% triangular initial condition, symmetric about x = 0.5
for i = 2:ceil((N+1)/2)
    sol(i,1) = 2*vetx(i);
    sol(N+2-i,1) = sol(i,1);
end
% explicit time stepping
for j = 1:M
    for i = 2:N
        sol(i,j+1) = rho*sol(i-1,j) + rho2*sol(i,j) + rho*sol(i+1,j);
    end
end

% Heat Implicit Test
tic
dx = .1; dt = .001; tmax = dt*100;
sol = HeatImpl(dx, dt, tmax);
toc
subplot(2,2,1); plot(0:dx:1, sol(:,1));   axis([0 1 0 1])
subplot(2,2,2); plot(0:dx:1, sol(:,11));  axis([0 1 0 1])
subplot(2,2,3); plot(0:dx:1, sol(:,51));  axis([0 1 0 1])
subplot(2,2,4); plot(0:dx:1, sol(:,101)); axis([0 1 0 1])

% Heat Implicit Method
function sol = HeatImpl(deltax, deltat, tmax)
N = round(1/deltax);
M = round(tmax/deltat);
sol = zeros(N+1, M+1);
rho = deltat/(deltax^2);
% tridiagonal system matrix for the implicit step
B = diag((1+2*rho)*ones(N-1,1)) - diag(rho*ones(N-2,1),1) - ...
    diag(rho*ones(N-2,1),-1);
vetx = 0:deltax:1;
% triangular initial condition, symmetric about x = 0.5
for i = 2:ceil((N+1)/2)
    sol(i,1) = 2*vetx(i);
    sol(N+2-i,1) = sol(i,1);
end
% implicit time stepping: solve B * sol(:,j+1) = sol(:,j)
for j = 1:M
    sol(2:N,j+1) = B\sol(2:N,j);
end