Henderson D., Plaschko P. Stochastic Differential Equations in Science and Engineering

STOCHASTIC DIFFERENTIAL EQUATIONS IN SCIENCE AND ENGINEERING

Douglas Henderson • Peter Plaschko


Douglas Henderson, Brigham Young University, USA

Peter Plaschko, Universidad Autonoma Metropolitana, Mexico

World Scientific
NEW JERSEY • LONDON • SINGAPORE • BEIJING • SHANGHAI • HONG KONG • TAIPEI • CHENNAI


Published by

World Scientific Publishing Co. Pte. Ltd.

5 Toh Tuck Link, Singapore 596224

USA office: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601

UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE

British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library.

STOCHASTIC DIFFERENTIAL EQUATIONS IN SCIENCE AND ENGINEERING (With CD-ROM)

Copyright © 2006 by World Scientific Publishing Co. Pte. Ltd.

All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the Publisher.

For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy is not required from the publisher.

ISBN 981-256-296-6

Printed in Singapore by World Scientific Printers (S) Pte Ltd


To Rose-Marie Henderson A good friend and spouse


PREFACE

This book arose from a friendship formed when we were both faculty members of the Department of Physics, Universidad Autonoma Metropolitana, Iztapalapa Campus, in Mexico City. Plaschko was teaching an intermediate to advanced course in mathematical physics. He had written, with Klaus Brod, a book entitled "Hoehere Mathematische Methoden fuer Ingenieure und Physiker", which Henderson admired and suggested be translated into English, updated, and perhaps expanded somewhat.

However, we both prefer new projects, and so it was suggested instead that a book on stochastic differential equations be written; thus this project was born. This is an important emerging field. From its inception with Newton, physical science was dominated by the idea of determinism. Everything was thought to be determined by a set of second order differential equations, Newton's equations, from which everything could be determined, at least in principle, if the initial conditions were known. To be sure, an actual analytic solution would not be possible for a complex system, since the number of dynamical equations would be enormous; even so, determinism prevailed. This idea took hold even to the point that some philosophers began to speculate that humans have no free will; our lives are determined entirely by some set of initial conditions. In this view, even before the authors started to write, the contents of this book were determined by a set of initial conditions in the distant past. Dogmatic Marxism endorsed such ideas, although perhaps not so extremely.

Deterministic Newtonian mechanics yielded brilliant successes. Most astronomical events could be predicted with great accuracy.



Even in the case of a few difficulties, such as the orbit of Mercury, Newtonian mechanics could be replaced satisfactorily by equally deterministic general relativity. A little more than a century ago, the case for determinism was challenged. The seemingly random Brownian motion of suspended particles was observed, as was the sudden transition of the flow of a fluid past an object or obstacle from laminar flow to chaotic turbulence. Recent studies have shown that some seemingly chaotic motion is not necessarily inconsistent with determinism (we can call this quasi-chaos). Even so, such problems are best studied using probabilistic notions. Quantum theory has shown that the motion of particles at the atomic level is fundamentally nondeterministic. Heisenberg showed that there were limits to the precision with which physical properties could be determined. One can only assign a probability for the value of a physical quantity. The consequences of this idea can be manifest even on a macroscopic scale. The third law of thermodynamics is an example.

Stochastic differential equations, the subject of this monograph, are an interesting extension of deterministic differential equations that can be applied to Brownian motion as well as to other problems. They arose from the work of Einstein and Smoluchowski, among others. Recent years have seen rapid advances due to the development of the calculi of Ito and Stratonovich.

We were both trained as mathematicians and scientists, and our goal is to present the ideas of stochastic differential equations in a short monograph in a manner that is useful for scientists and engineers rather than mathematicians, and without overpowering mathematical rigor. We presume that the reader has some, but not extensive, knowledge of probability theory. Chapter 1 provides a reminder of, and an introduction to, some fundamental ideas and quantities, including the ideas of Ito and Stratonovich. Stochastic differential equations and the Fokker-Planck equation are presented in Chapters 2 and 3. More advanced applications follow in Chapter 4. The book concludes with a presentation of some numerical routines for the solution of ordinary stochastic differential equations. Each chapter contains a set of exercises whose purpose is to aid the reader in understanding the material. A CD-ROM that provides


MATHEMATICA and FORTRAN programs to assist the reader with the exercises, numerical routines and generating figures accompanies the text.

Douglas Henderson, Provo, Utah, USA
Peter Plaschko, Mexico City DF, Mexico
June 2006


CONTENTS

Preface vii

Introduction xv

Glossary xxi

1. Stochastic Variables and Stochastic Processes 1

1.1. Probability Theory 1
1.2. Averages 4
1.3. Stochastic Processes, the Kolmogorov Criterion and Martingales 9
1.4. The Gaussian Distribution and Limit Theorems 14
1.4.1. The central limit theorem 16
1.4.2. The law of the iterated logarithm 17
1.5. Transformation of Stochastic Variables 17
1.6. The Markov Property 19
1.6.1. Stationary Markov processes 20
1.7. The Brownian Motion 21
1.8. Stochastic Integrals 28
1.9. The Ito Formula 38
Appendix 45
Exercises 49

2. Stochastic Differential Equations 55

2.1. One-Dimensional Equations 56
2.1.1. Growth of populations 56
2.1.2. Stratonovich equations 58
2.1.3. The problem of Ornstein-Uhlenbeck and the Maxwell distribution 59
2.1.4. The reduction method 63
2.1.5. Verification of solutions 65
2.2. White and Colored Noise, Spectra 67
2.3. The Stochastic Pendulum 70
2.3.1. Stochastic excitation 72
2.3.2. Stochastic damping (β = γ = 0; α ≠ 0) 73
2.4. The General Linear SDE 76
2.5. A Class of Nonlinear SDE 79
2.6. Existence and Uniqueness of Solutions 84
Exercises 87

3. The Fokker-Planck Equation 91

3.1. The Master Equation 91
3.2. The Derivation of the Fokker-Planck Equation 95
3.3. The Relation Between the Fokker-Planck Equation and Ordinary SDE's 98
3.4. Solutions to the Fokker-Planck Equation 104
3.5. Lyapunov Exponents and Stability 107
3.6. Stochastic Bifurcations 110
3.6.1. First order SDE's 110
3.6.2. Higher order SDE's 112
Appendix A. Small Noise Intensities and the Influence of Randomness on Limit Cycles 117
Appendix B.1 The method of Lyapunov functions 124
Appendix B.2 The method of linearization 128
Exercises 130

4. Advanced Topics 135

4.1. Stochastic Partial Differential Equations 135
4.2. Stochastic Boundary and Initial Conditions 141
4.2.1. A deterministic one-dimensional wave equation 141
4.2.2. Stochastic initial conditions 144
4.3. Stochastic Eigenvalue Equations 147
4.3.1. Introduction 147
4.3.2. Mathematical methods 148
4.3.3. Examples of exactly soluble problems 152
4.3.4. Probability laws and moments of the eigenvalues 156
4.4. Stochastic Economics 160
4.4.1. Introduction 160
4.4.2. The Black-Scholes market 162
Exercises 164

5. Numerical Solutions of Ordinary Stochastic Differential Equations 167

5.1. Random Number Generators and Applications 167
5.1.1. Testing of random numbers 168
5.2. The Convergence of Stochastic Sequences 173
5.3. The Monte Carlo Integration 175
5.4. The Brownian Motion and Simple Algorithms for SDE's 179
5.5. The Ito-Taylor Expansion of the Solution of a 1D SDE 181
5.6. Modified 1D Milstein Schemes 187
5.7. The Ito-Taylor Expansion for N-dimensional SDE's 189
5.8. Higher Order Approximations 193
5.9. Strong and Weak Approximations and the Order of the Approximation 196
Exercises 201

References 205

Fortran Programs 211

Index 213


INTRODUCTION

The theory of deterministic chaos has enjoyed a rapidly increasing audience of mathematicians, physicists, engineers, biologists, economists, etc. during the last three decades. However, this type of "chaos" can be understood only as quasi-chaos, in which all states of a system can be predicted and reproduced by experiments.

Meanwhile, many experiments in the natural sciences have produced hard evidence of stochastic effects. The best known example is perhaps the Brownian motion, where pollen grains submerged in a fluid experience collisions with the molecules of the fluid and thus exhibit random motions. Other familiar examples come from fluid or plasma dynamic turbulence, optics, motions of ions in crystals, filtering theory, the problem of optimal pricing in economics, etc. The study of stochasticity was initiated in the early years of the 1900's. Einstein [1], Smoluchowski [2] and Langevin [3] wrote pioneering investigations. This work was later resumed and extended by Ornstein and Uhlenbeck [4]. But the investigation of stochastic effects in natural science became popular only in the last three decades. Meanwhile, studies are undertaken to calculate, or at least approximate, the effect of stochastic forces on otherwise deterministic oscillators, and to investigate the stability of, or the transition to stochastic chaos of, the latter oscillators.

To motivate the following considerations of stochastic differential equations (SDE) we introduce a few examples from natural sciences.

(a) Pendulum with Stochastic Excitations

We study the linearized pendulum motion x(t) subjected to a stochastic effect, called white noise

\ddot{x} + x = \beta \xi_t,


where β is an intensity constant, t is the time and ξ_t stands for the white noise, with a single frequency and constant spectrum. For β = 0 we obtain the homogeneous deterministic (non-stochastic) traditional pendulum motion. We can expect that the stochastic effect disturbs this motion and destroys the periodicity of the motion in the phase space (x, ẋ). The latter has closed solutions called limit cycles. It is an interesting task to investigate whether the solutions disintegrate into scattered points (stochastic chaos). We will cover this problem later in Section 2.3 and find that the average motion (in a sense to be defined in Section 1.2 of Chapter 1) of the pendulum is determined by the deterministic limit (β = 0) of the stochastic pendulum equation.
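The claim about the average motion can be checked numerically. The following sketch (Python, with illustrative parameter values of our own choosing, not taken from the text) integrates the white-noise-forced linear pendulum with the Euler-Maruyama scheme and compares the ensemble average at the final time with the deterministic solution x(t) = cos t for x(0) = 1, ẋ(0) = 0:

```python
import numpy as np

rng = np.random.default_rng(0)

def pendulum_endpoints(beta=0.5, x0=1.0, v0=0.0, t_end=10.0, dt=2e-3, n_paths=4000):
    """Euler-Maruyama for dx = v dt, dv = -x dt + beta dW (the linearized
    stochastic pendulum); returns the final positions of all sample paths."""
    n_steps = int(t_end / dt)
    x = np.full(n_paths, x0)
    v = np.full(n_paths, v0)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)   # Brownian increment
        x, v = x + v * dt, v - x * dt + beta * dw
    return x

x_end = pendulum_endpoints()
# Ensemble average vs. the deterministic (beta = 0) value x(10) = cos(10).
print(x_end.mean(), np.cos(10.0))
```

For β = 0 the scheme reduces to the deterministic oscillator; increasing `n_paths` shrinks the statistical error of the ensemble mean at the usual 1/√n rate.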

(b) Stochastic Growth of Populations

N(t) is the number of members of a population at the time t, α is the constant of the deterministic growth, and β is again a constant characterizing the intensity of the white noise. Thus we study the growth problem in terms of the linear scenario

\dot{N} = N(\alpha + \beta \xi_t).

The deterministic limit (β = 0) of this equation describes the growth of a population living in an unrestricted area with an unrestricted food supply. Its solution (the size of such a population) grows exponentially. The stochastic effect, the white noise, describes a stochastically varying food supply that influences the growth of the population. We will consider this problem in Section 2.1.1 and find again that the average of the population is given by the deterministic limit.
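This can again be illustrated numerically. The sketch below (Python; the values of α, β and N₀ are illustrative assumptions, not from the text) samples the exact Ito solution N(t) = N₀ exp[(α − β²/2)t + βW_t] of the linear growth equation and confirms that the ensemble average follows the deterministic exponential N₀ e^{αt}:

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_population(alpha=0.5, beta=0.3, n0=100.0, t=2.0, n_paths=200_000):
    """Sample the exact Ito solution N(t) = n0*exp((alpha - beta^2/2)*t + beta*W_t)
    of dN = N*(alpha dt + beta dW) and return the ensemble average of N(t)."""
    w = rng.normal(0.0, np.sqrt(t), n_paths)         # W_t ~ N(0, t)
    return np.mean(n0 * np.exp((alpha - 0.5 * beta**2) * t + beta * w))

est = mean_population()
exact = 100.0 * np.exp(0.5 * 2.0)   # deterministic growth n0*exp(alpha*t)
print(est, exact)
```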

(c) Diffraction of Optical Waves

The transfer function T(ω); ω = (ω₁, ω₂) of a two-dimensional optical device is defined by

T(\omega) = \frac{1}{N} \int_{-\infty}^{\infty} dx \int_{-\infty}^{\infty} dy\, F(x,y)\, F^*(x - \omega_1, y - \omega_2);
\quad N = \int_{-\infty}^{\infty} dx \int_{-\infty}^{\infty} dy\, |F(x,y)|^2,


where F is a complex wave amplitude and F* = cc(F) is its complex conjugate. The parameter N denotes the normalization of |F(x,y)|² and the variables x and y stand for the coordinates of the image plane. In a simplified treatment, we assume that the wave form is given by

F = |F| \exp(-ikA); \quad |F|, k = \text{const},

where k and A stand for the wave number and the phase of the waves, respectively. We suppose that the wave emerging from the optical instrument (e.g. a lens) exhibits a phase with two different deviations from a spherical structure, A = A_c + A_r, with a controlled or deterministic phase A_c(x,y) and a random phase A_r(x,y) that arises from polishing the optical device or from atmospheric influences. Thus, we obtain

T(\omega) = \frac{1}{K} \int_{-\infty}^{\infty} dx \int_{-\infty}^{\infty} dy\, \exp\{ik[A(x - \omega_1, y - \omega_2) - A(x,y)]\},

where K is used to include the normalization. In simple applications we can model the random phase using white noise with a Gaussian probability density. To evaluate the average of the transfer function ⟨T(ω)⟩ we need to calculate the quantity

\langle \exp\{ik[A_r(x - \omega_1, y - \omega_2) - A_r(x,y)]\} \rangle.

We will study the Gaussian probability density and complete the task of determining the average written in the last line in Section 1.3 of Chapter 1. An introduction to random effects in optics can be found in O'Neill [5].
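For a zero-mean Gaussian phase difference the average of the exponential in the last line has the closed form exp(−k²σ²/2), where σ² is the variance of the phase difference; this is the kind of result derived in Section 1.3. A quick Monte Carlo check (Python; the values of k and σ are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo check of <exp(ik*D)> = exp(-k^2 sigma^2 / 2) for a zero-mean
# Gaussian phase difference D with variance sigma^2 (illustrative values).
k, sigma = 2.0, 0.4
d = rng.normal(0.0, sigma, 500_000)
mc = np.mean(np.exp(1j * k * d))          # sample average of exp(ikD)
exact = np.exp(-0.5 * k**2 * sigma**2)    # closed-form Gaussian average
print(mc.real, exact)
```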

(d) Filtering Problems

Suppose that we have performed experiments on a stochastic problem such as the one in (a) in an interval t ∈ [0, u] and we obtain as a result, say, A(v), v ∈ [0, u]. To improve our knowledge about the solution we repeat the experiments for t ∈ [u, T] and we obtain A(t), t ∈ [u, T]. Yet due to inevitable experimental errors we do not obtain A(t) but a result that includes an error: A(t) + 'noise'. The question is now: how can we filter the noise away? A filter is thus an instrument to


clean a result and remove the noise that arises during the observation. A typical problem is one where a signal with unknown frequency is transmitted (e.g. by an electronic device) and suffers during the transmission the addition of noise. If the transmitted signal is itself stochastic (as in the case of music) we need to develop a non-deterministic model for the signal with the aid of a stochastic differential equation. To study the basic ideas of filtering problems the reader is referred to the book of Stremler [6].

(e) Fluidmechanical Turbulence

This is perhaps the most challenging and most intricate application of statistical science. We consider here the continuum dynamics of a flow field influenced by stochastic effects. The latter arise from initial conditions (e.g. at the nozzle of a jet flow, or at the entry region of a channel flow) and/or from background noise (e.g. acoustic waves). In the simplest case, incompressible two-dimensional flows, there are three characteristic variables (two velocity components and the pressure). These variables are governed by the Navier-Stokes equations (NSEs). The latter are a set of three nonlinear partial differential equations that include a parameter, the Reynolds number R. The inverse of R is the coefficient of the highest derivatives of the NSEs. Since turbulence occurs at intermediate to high values of R, this phenomenon is the rule and not the exception in fluid dynamics, and it occurs in parameter regions where the NSEs are singular. Nonlinear SDEs, such as the NSEs, lead additionally to the problem of closure, where the equation governing the statistical moment of nth order contains moments of (n + 1)th order.

Hopf [7] was the first to try to find a theoretical approach to solve the problem for the idealized case of isotropic homogeneous turbulence, a flow configuration that can be approximately realized in grid flows. Hopf assumed that the turbulence is Gaussian, an assumption that facilitates the calculation of higher statistical moments of the distribution (see Section 1.3 in Chapter 1). However, later measurements showed that the assumption of a Gaussian distribution was rather unrealistic. Kraichnan [8] studied the problem again in


the 60's and 70's with the direct triad interaction theory in the idealized configuration of homogeneous isotropic turbulence. However, this rather involved analysis could only be applied to calculate the spectrum of very small eddies, where the viscosity dominates the flow. Somewhat more progress was achieved by the investigation of Rudenko and Chirin [9]. The latter predicted, with the aid of stochastic initial conditions with random phases, broad-banded spectra of a nonlinear model equation. During the last two decades intensive work was done to investigate the Burgers equation, and this research is summarized in part by Wojczinsky [10]. The Burgers equation is supposed to be a reasonable one-dimensional model of the NSEs. We will give a short account of the work done in [9] in Chapter 4.


GLOSSARY

AC almost certainly

BC boundary condition

dB_t = dW_t = ξ_t dt differential of the Brownian motion (or equivalently Wiener process)

cc(a) = a* complex conjugate of a

D dimension or dimensional

DF distribution function

DOF degrees of freedom

δ_ij Kronecker delta function

δ(x) Dirac delta function

EX exercise at the end of a chapter

FPE Fokker-Planck equation

Γ(x) gamma function

GD Gaussian distribution

GPD Gaussian probability distribution

HPP homogeneous Poisson process

Hn(x) Hermite polynomial of order n

IC initial condition

IID identically independently distributed


IFF if and only if

IMSL international mathematical science library

ℒ Laplace transform

M master, as in master equation

MCM Monte Carlo method

NSE Navier-Stokes equation

NIGD normal inverted GD

N(μ, σ) normal distribution with μ as mean and σ as variance

∘ Stratonovich theory

ODE ordinary differential equation

PD probability distribution

PDE partial differential equation

PDF probability distribution function

PSDE partial SDE

R Reynolds number

RE random experiment

RN random number

RV random variable

Re(a) real part of a complex number

R, C sets of real and complex numbers, respectively

S Prandtl number

SF stochastic function

SI stochastic integral

SDE stochastic differential equation

SLNN strong law of large numbers


TPT transition probability per unit time

WP Wiener process

WS Wiener sheet

WKB Wentzel, Kramers, Brillouin

WRT with respect to

W(t) Wiener white (single frequency) noise

⟨a⟩ average of a stochastic variable a

σ² = ⟨a²⟩ − ⟨a⟩² variance

⟨x | y⟩, ⟨x, u | y, v⟩ conditional averages

s ∧ t minimum of s and t

∀ for all values of

∈ element of

∫ f(x)dx shorthand for ∫_{−∞}^{∞} f(x)dx

X end of an example

• end of definition

$ end of theorem


CHAPTER 1

STOCHASTIC VARIABLES AND STOCHASTIC PROCESSES

1.1. Probability Theory

An experiment (or a trial of some process) is performed whose outcome (result) is uncertain: it depends on chance. A collection of all possible elementary (or individual) outcomes is called the sample space (or phase space, or range) and is denoted by Ω. If the experiment is tossing a pair of distinguishable dice, then Ω = {(i, j) | 1 ≤ i, j ≤ 6}. For the case of an experiment with a fluctuating pressure, Ω is the set of all real functions Ω = (0, ∞). An observable event A is a subset of Ω; this is written in the form A ⊂ Ω. In the dice example we could choose an event, for example, as A = {(i, j) | i + j = 4}. For the case of fluctuating pressures we could use the subset A = (p₀ > 0, ∞).

Not every subset of Ω is observable (or interesting). An example of a non-observable event appears when a pair of dice is tossed and only their spots are counted, Ω = {(i, j), 2 ≤ i + j ≤ 12}. Then elementary outcomes like (1, 2), (2, 1) or (3, 1), (2, 2), (1, 3) are not distinguished.

Let Γ be the set of observable events for one single experiment. Then Γ must include the certain event Ω and the impossible event ∅ (the empty set). For every A ∈ Γ, the complement A^c of A satisfies A^c ∈ Γ, and for every B ∈ Γ the union and the intersection of events, A ∪ B and A ∩ B, must also pertain to Γ. Γ is called an algebra of events. In many cases there are countable unions and intersections in Γ. Then it is sufficient to assume that

\bigcup_{n=1}^{\infty} A_n \in \Gamma, \quad \text{if } A_n \in \Gamma.


An algebra with this property is called a sigma algebra. In measure theory, the elements of Γ are called measurable sets and the pair (Γ, Ω) is called a measurable space.

A finite measure Pr(A) defined on Γ with

0 ≤ Pr(A) ≤ 1; \quad Pr(∅) = 0; \quad Pr(Ω) = 1,

is called the probability, and the triple (Γ, Ω, Pr) is referred to as the probability space. The set function Pr assigns to every event A the real number Pr(A). The rules for this set function are, along with the formula above,

Pr(A^c) = 1 - Pr(A);
Pr(A) ≤ Pr(B); \quad Pr(B \setminus A) = Pr(B) - Pr(A) \quad \text{for } A ⊂ B ∈ Γ.

The probability measure Pr on Ω is thus a function Γ → [0, 1], and it is generally derived with Lebesgue integrations that are defined on Borel sets.

We introduced this formal concept because it can be used as the most general way to introduce the probability theory axiomatically (see e.g. Chung, [1.1]). We will not follow this procedure; instead we will introduce stochastic variables and their probabilities heuristically.

Definition 1.1. (Stochastic variables) A random (or stochastic) variable X(ω), ω ∈ Ω, is a real valued function defined on the sample space Ω. In the following we omit the parameter ω whenever no confusion is possible. •

Definition 1.2. (Probability of an event) The probability of an event equals the number of elementary outcomes divided by the total number of all elementary outcomes, provided that all cases are equally likely. •

Example For the case of a discrete sample space with a finite number of elementary outcomes we have Ω = {ω₁, . . . , ω_n} and an event is given by A = {ω₁, . . . , ω_k}, 1 ≤ k ≤ n. The probability of the event A is then Pr(A) = k/n. *
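Definition 1.2 can be checked directly by enumeration for the dice example above. A short Python sketch:

```python
from itertools import product

# Pr(A) = (# elementary outcomes in A) / (# all outcomes), all equally likely.
omega = list(product(range(1, 7), repeat=2))    # two distinguishable dice
a = [(i, j) for (i, j) in omega if i + j == 4]  # the event i + j = 4
print(len(a), len(omega), len(a) / len(omega))  # 3 of 36 outcomes, Pr = 1/12
```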


Definition 1.3. (Probability distribution function and probability density) In the continuous case, the probability distribution function (PDF) F_X(x) of a vectorial stochastic variable X = (X_1, . . . , X_n) is defined by the monotonically increasing real function

F_X(x_1, . . . , x_n) = Pr(X_1 ≤ x_1, . . . , X_n ≤ x_n), \quad (1.1)

where we used the convention that the variable itself is written in upper case letters, whereas the actual values that this variable assumes are denoted by lower case letters.

The probability density (PD) p_X(x_1, . . . , x_n) of the random variable is then defined by

F_X(x_1, . . . , x_n) = \int_{-\infty}^{x_1} \cdots \int_{-\infty}^{x_n} p_X(u_1, . . . , u_n)\, du_1 \cdots du_n \quad (1.2)

and this leads to

\frac{\partial^n F_X}{\partial x_1 \cdots \partial x_n} = p_X(x_1, . . . , x_n). \quad (1.3)

Note that we can express (1.1) and (1.2) alternatively if we put

Pr(x_{11} ≤ X_1 ≤ x_{12}, . . . , x_{n1} ≤ X_n ≤ x_{n2}) = \int_{x_{11}}^{x_{12}} \cdots \int_{x_{n1}}^{x_{n2}} p_X(x_1, . . . , x_n)\, dx_1 \cdots dx_n. \quad (1.1a)

The conditions to be imposed on the PD are given by the positiveness and the normalization condition

p_X(x_1, . . . , x_n) ≥ 0; \quad \int \cdots \int p_X(x_1, . . . , x_n)\, dx_1 \cdots dx_n = 1. \quad (1.4)

In the latter equation we used the convention that integrals without explicitly given limits refer to integrals extending from the lower boundary −∞ to the upper boundary ∞. •

In a continuous phase space the PD may contain Dirac delta functions

p(x) = \sum_k q(k)\, \delta(x - k) + \hat{p}(x); \quad q(k) = Pr(x = k), \quad (1.5)


where q(k) represents the probability that the variable x of the discrete set equals the integer value k. We also dropped the index X in the latter formula. We can interpret it to correspond to a PD of a set of discrete states of probabilities q(k) that are embedded in a continuous phase space S. The normalization condition (1.4) now yields

\sum_k q(k) + \int_S \hat{p}(x)\, dx = 1.

Examples (discrete Bernoulli and Poisson distributions) First we consider the Bernoulli distribution

(i) \quad q_k^{(n)} = Pr(x = k) = b(k, n, p) = \binom{n}{k} p^k (1-p)^{n-k}; \quad k = 0, 1, . . . , n,

and then we introduce the Poisson distribution

(ii) \quad \pi_k(\lambda t) = Pr(x = k) = \frac{(\lambda t)^k}{k!} \exp(-\lambda t); \quad k = 0, 1, . . . .

In the appendix of this chapter we will give more details about the Poisson distribution. We derive there the Poisson distribution as a limit of the Bernoulli distribution

\pi_k(\lambda t) = \lim_{n \to \infty} b(k, n, p = \lambda t / n). *
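The limit can be observed numerically. A minimal Python sketch, evaluating b(k, n, p = λt/n) for growing n against π_k(λt) (the values of k and λt are illustrative):

```python
from math import comb, exp, factorial

def bernoulli_b(k, n, p):
    """Binomial (the text's Bernoulli) probability b(k, n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pi(k, lam_t):
    """Poisson probability pi_k(lambda*t)."""
    return lam_t**k * exp(-lam_t) / factorial(k)

# b(k, n, p = lam_t/n) approaches pi_k(lam_t) as n grows.
lam_t = 2.0
for n in (10, 100, 10_000):
    print(n, bernoulli_b(3, n, lam_t / n), poisson_pi(3, lam_t))
```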

In the following we will consider in almost all cases only continuous sets.

1.2. Averages

The sample space and the PD together completely define a stochastic variable. To introduce observable quantities we now consider averages. The expectation value (or the average, or the mean value) of a function G(x_1, . . . , x_n) of the stochastic variables x_1, . . . , x_n is defined by

\langle G(x_1, . . . , x_n) \rangle = \int \cdots \int G(x_1, . . . , x_n)\, p_X(x_1, . . . , x_n)\, dx_1 \cdots dx_n. \quad (1.6)

In the case of a discrete variable we must replace the integral in (1.6) by a summation. We then obtain, with the use of (1.5) for p(x),

\langle G(x_1, . . . , x_n) \rangle = \sum_{k_1} \cdots \sum_{k_n} G(k_1, . . . , k_n)\, q(k_1, . . . , k_n). \quad (1.7)


There are two rules for the application of the averages:

(i) If a and b are two deterministic constants and G(x_1, . . . , x_n) and H(x_1, . . . , x_n) are two functions of the random variables x_1, . . . , x_n, then we have

\langle a G(x_1, . . . , x_n) + b H(x_1, . . . , x_n) \rangle = a \langle G(x_1, . . . , x_n) \rangle + b \langle H(x_1, . . . , x_n) \rangle, \quad (1.8a)

and

(ii)

\langle \langle G(x_1, . . . , x_n) \rangle \rangle = \langle G(x_1, . . . , x_n) \rangle. \quad (1.8b)

Now we consider two scalar random variables x and y with joint PD p(x, y). If we do not have more information (observed values) of y, we introduce the two marginal PD's p_X(x) and p_Y(y) of the single variables x and y,

p_X(x) = \int_{S_y} p(x, y)\, dy; \quad p_Y(y) = \int_{S_x} p(x, y)\, dx, \quad (1.9a)

where we integrate over the phase space S_y (S_x) of the variable y (x). The normalization condition (1.4) yields

\int p_X(x)\, dx = \int p_Y(y)\, dy = 1. \quad (1.9b)

Definition 1.4. (Independence of variables) We consider n random variables x_1, . . . , x_n; x_1 is said to be independent of the other variables x_2, . . . , x_n if

\langle x_1 x_2 \cdots x_n \rangle = \langle x_1 \rangle \langle x_2 \cdots x_n \rangle. \quad (1.10a)

We see easily that a sufficient condition to satisfy (1.10a) is

p(x_1, . . . , x_n) = p_1(x_1)\, p_{n-1}(x_2, . . . , x_n), \quad (1.10b)

where p_k(. . .), k < n, denotes the marginal probability distribution of the corresponding variables. •


The moments of a PD of a scalar variable x are given by

\langle x^n \rangle = \int x^n p(x)\, dx; \quad n \in \mathbb{N},

where n denotes the order of the moment. The first order moment \langle x \rangle is the average of x, and we introduce the variance \sigma^2 by

\sigma^2 = \langle (x - \langle x \rangle)^2 \rangle = \langle x^2 \rangle - \langle x \rangle^2 \geq 0. \quad (1.11)

The random variable x − ⟨x⟩ is called the deviation; σ itself is the standard deviation. The average of the Fourier transform of a PD is called the characteristic function

G(k_1, . . . , k_n) = \langle \exp(i k_r x_r) \rangle = \int p(x_1, . . . , x_n) \exp(i k_r x_r)\, dx_1 \cdots dx_n, \quad (1.12)

where we applied the summation convention k_r x_r = \sum_{j=1}^{n} k_j x_j. This function has the properties G(0, . . . , 0) = 1; |G(k_1, . . . , k_n)| \leq 1.

Example

The Gaussian (or normal) PD of a scalar variable x is given by

p(x) = (2\pi)^{-1/2} \exp(-x^2/2); \quad -\infty < x < \infty. \quad (1.13a)

Hence we obtain (see also EX 1.1)

\langle x^{2n} \rangle = \frac{(2n)!}{2^n n!}; \quad \sigma^2 = 1; \quad \langle x^{2n+1} \rangle = 0. \quad (1.13b)

A stochastic variable characterized by N(m, s) is a normally distributed variable with the average m and the variance s. The variable x distributed with the PD (1.13a) is thus called a normally distributed variable with N(0, 1).
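Formula (1.13b) is easy to verify by sampling. The Python sketch below compares the closed-form moments (2n)!/(2ⁿ n!) of N(0, 1) with sample averages over a large number of draws:

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(3)

def gaussian_moment(n):
    """Exact moment <x^n> of N(0,1): (2m)!/(2^m m!) for n = 2m, and 0 for odd n."""
    if n % 2:
        return 0.0
    m = n // 2
    return factorial(2 * m) / (2**m * factorial(m))

x = rng.standard_normal(1_000_000)
for n in range(1, 7):
    print(n, gaussian_moment(n), np.mean(x**n))
```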


A Taylor expansion of the characteristic function G(k) of (1.13a) yields with (1.12)

G(k) = \sum_{n=0}^{\infty} \frac{(ik)^n}{n!} \langle x^n \rangle. \quad (1.14a)

We define the cumulants \kappa_m by

\ln G(k) = \sum_{m=1}^{\infty} \frac{(ik)^m}{m!} \kappa_m. \quad (1.14b)

A comparison of equal powers of k gives

\kappa_1 = \langle x \rangle; \quad \kappa_2 = \langle x^2 \rangle - \langle x \rangle^2 = \sigma^2; \quad \kappa_3 = \langle x^3 \rangle - 3 \langle x^2 \rangle \langle x \rangle + 2 \langle x \rangle^3; \; . . . . \quad (1.14c)

*

Definition 1.5. (Conditional probability) We assume that A, B ∈ Γ are two random events of the set of observable events Γ. The conditional probability of A given B (or knowing B, or under the hypothesis of B) is defined by

Pr(A | B) = Pr(A ∩ B) / Pr(B); \quad Pr(B) > 0.

Thus only events that occur simultaneously in A and B contribute to the conditional probability.

Now we consider n random variables x1, ..., xn with the joint PD pn(x1, ..., xn). We select a subset of variables x1, ..., xs and we define a conditional PD of the latter variables, knowing the remaining subset x(s+1), ..., xn, in the form

p_{s|n−s}(x1, ..., xs | x(s+1), ..., xn) = pn(x1, ..., xn)/p_{n−s}(x(s+1), ..., xn). (1.15)

Equation (1.15) is called Bayes's rule and we use the marginal PD

p_{n−s}(x(s+1), ..., xn) = ∫ pn(x1, ..., xn) dx1 ··· dxs, (1.16)

where the integration is over the phase space of the variables x1, ..., xs. Sometimes it is useful to write Bayes's rule (1.15) in the form

pn(x1, ..., xn) = p_{n−s}(x(s+1), ..., xn) p_{s|n−s}(x1, ..., xs | x(s+1), ..., xn). (1.15')


We can also rearrange (1.15') and we obtain

pn(x1, ..., xn) = ps(x1, ..., xs) p_{n−s|s}(x(s+1), ..., xn | x1, ..., xs). (1.15'') •

Definition 1.6. (Conditional averages) The conditional average of the random variable x\, knowing x2, • • •, xn, is defined by

⟨x1 | x2, ..., xn⟩ = ∫ x1 p_{1|n−1}(x1 | x2, ..., xn) dx1

= ∫ x1 pn(x1, ..., xn) dx1 / p_{n−1}(x2, ..., xn). (1.17)

Note that (1.17) is a random variable.

The rules for this average are analogous to (1.8)

⟨ax1 + bx2 | y⟩ = a⟨x1 | y⟩ + b⟨x2 | y⟩, ⟨⟨x | y⟩⟩ = ⟨x⟩. (1.18)

D

Example We consider a scalar stochastic variable x with its PD p(x). An event A is given by x ∈ [a, b]. Hence we have

p(x | A) = 0 ∀ x ∉ [a, b],

and

p(x | A) = p(x) / ∫_a^b p(s) ds; x ∈ [a, b].

The conditional average is thus given by

⟨x | A⟩ = ∫_a^b x p(x) dx / ∫_a^b p(s) ds.

For an exponentially distributed variable x in [0, ∞) we have p(x) = λ exp(−λx). Thus we obtain for a > 0 the result

⟨x | x > a⟩ = ∫_a^∞ x exp(−λx) dx / ∫_a^∞ exp(−λx) dx = a + 1/λ.
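The result ⟨x | x > a⟩ = a + 1/λ (the memoryless property of the exponential PD) is easy to confirm by sampling. The sketch below is illustrative; the values of λ and a and the sample size are arbitrary choices, not from the text.

```python
import random

# Monte Carlo check of <x | x > a> = a + 1/lam for the exponential PD
# p(x) = lam*exp(-lam*x); lam = 0.5 and a = 2.0 are illustrative values.
random.seed(1)
lam, a = 0.5, 2.0
draws = [random.expovariate(lam) for _ in range(400_000)]
tail = [x for x in draws if x > a]          # condition on the event x > a
cond_mean = sum(tail) / len(tail)
assert abs(cond_mean - (a + 1 / lam)) < 0.05
```

The conditional distribution of x − a, given x > a, is again exponential with the same λ, which is exactly why the conditional mean is a + 1/λ.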


1.3. Stochastic Processes, the Kolmogorov Criterion and Martingales

In many applications (e.g. in irregular phenomena like blood flow, capital investment, or motions of molecules, etc.) one encounters a family of random variables that depend on continuous or discrete parameters like the time or positions. We refer to {X(t, ω), t ∈ I, ω ∈ Ω}, where I is a set of (continuous or discrete) parameters and X(t, ω) ∈ R^n, as a stochastic process (random process or stochastic (random) function). If I is a discrete set it is more convenient to call X(t, ω) a time series and to use the phrase process only for continuous sets. If the parameter is the time t then we use I = [t0, T], where t0 is an initial instant. For a fixed value of t ∈ I, X(t, ω) is a random variable, and for every fixed value of ω ∈ Ω (hence for every observation) X(t, ω) is a real valued function. Any observation of this process is called a sample function (realization, trajectory, path or orbit) of the process.

We consider now a finite variate PD of a process and we define the time dependent probability density functions (PDF) in analogy to (1.1) in the form

F_X(x, t) = Pr(X(t) ≤ x);

F_{X,Y}(x, t; y, s) = Pr(X(t) ≤ x, Y(s) ≤ y);

F_{X1,...,Xn}(x1, t1; ...; xn, tn) = Pr(X1(t1) ≤ x1, ..., Xn(tn) ≤ xn),

where we omit the dependence of the process X(t) on the chance variable ω whenever no confusion is possible. The system of PDF's satisfies two classes of conditions:

(i) Symmetry. If {k1, ..., kn} is a permutation of 1, ..., n then we obtain

F_{X1,...,Xn}(x_{k1}, t_{k1}; ...; x_{kn}, t_{kn}) = F_{X1,...,Xn}(x1, t1; ...; xn, tn). (1.19a)

(ii) Compatibility

F_{X1,...,Xn}(x1, t1; ...; xr, tr; ∞, t(r+1); ...; ∞, tn) = F_{X1,...,Xr}(x1, t1; ...; xr, tr). (1.19b)


The rules to calculate averages are still given by (1.6) where the corresponding PD is derived by (1.3) and where the PDF's of (1.19) are used

p(x1, t1; ...; xn, tn) = ∂^n/(∂x1 ··· ∂xn) F_{X1,...,Xn}(x1, t1; ...; xn, tn).

One would expect that a stochastic process with a high rate of irregularity (expressed e.g. by high values of intensity constants, see Chapter 2) would exhibit sample functions (SF) with a high degree of irregularity like jumps or singularities. However, Kolmogorov's criterion gives a condition for continuous SF:

Theorem 1.1. (Kolmogorov's criterion) A bivariate distribution is needed to give information about the possibility of continuous SF. If and only if (IFF)

⟨|X(t1) − X(t2)|^a⟩ ≤ c |t1 − t2|^(1+b); a, b, c > 0; t1, t2 ∈ [t0, T], (1.20)

then the stochastic process X(t) possesses almost certainly (AC, this symbol is discussed in Chapter 5) continuous SF. However, the latter are in general nowhere differentiable.

We will use Kolmogorov's criterion later to investigate SF of Brownian motions and of stochastic integrals.

Definition 1.7. (Stationary process) A process x(t) is stationary if its PD is independent of a time shift τ

p(x1, t1 + τ; ...; xn, tn + τ) = p(x1, t1; ...; xn, tn). (1.21a)

Equation (1.21a) implies that all moments are also independent of the time shift

⟨x(t1 + τ) x(t2 + τ) ··· x(tk + τ)⟩ = ⟨x(t1) x(t2) ··· x(tk)⟩; for k = 1, 2, .... (1.21b)

A consequence of (1.21a) is given by

⟨x(t)⟩ = ⟨x⟩, independent of t; (1.21c)

⟨x(t) x(t + τ)⟩ = ⟨x(0) x(τ)⟩ = g(τ). •


The correlation matrix is defined by

c_{ik} = ⟨z_i(t1) z_k(t2)⟩; z_i(t_i) = x_i(t_i) − ⟨x_i(t_i)⟩. (1.22)

Thus, we have

c_{ik} = ⟨x_i(t1) x_k(t2)⟩ − ⟨x_i(t1)⟩⟨x_k(t2)⟩. (1.23)

The diagonal elements of this matrix are called autocorrelation functions (we do not employ a summation convention)

c_{ii} = ⟨z_i(t1) z_i(t2)⟩.

The nondiagonal elements are referred to as cross-correlation functions. The correlation coefficient (the nondimensional correlation) is defined by

r_{ik} = [⟨x_i(t1) x_k(t2)⟩ − ⟨x_i(t1)⟩⟨x_k(t2)⟩] / √{[⟨x_i²(t1)⟩ − ⟨x_i(t1)⟩²][⟨x_k²(t2)⟩ − ⟨x_k(t2)⟩²]}. (1.24)

For stationary processes we have

c_{ik}(t1, t2) = ⟨z_i(0) z_k(t2 − t1)⟩ = c_{ik}(t2 − t1); (1.25)

c_{ki}(t1, t2) = ⟨z_k(t1) z_i(t2)⟩ = ⟨z_k(t1 − t2) z_i(0)⟩ = c_{ik}(t1 − t2).

A stochastic function with c_{ik} = 0 is called an uncorrelated function and we obtain

⟨x_i(t1) x_k(t2)⟩ = ⟨x_i(t1)⟩⟨x_k(t2)⟩. (1.26)

Note that the condition of noncorrelation (1.26) is weaker than the condition of statistical independence.

Example We consider the process X(t) = U1 cos t + U2 sin t, where U1 and U2 are stochastic variables independent of each other and of the time. The moments of the latter are given by ⟨U_k⟩ = 0, ⟨U_k²⟩ = a = const; k = 1, 2, ⟨U1 U2⟩ = 0. Hence we obtain ⟨X⟩ = 0; c_xx(s, t) = a cos(t − s).
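The stationary autocorrelation a cos(t − s) of this example can be verified by simulation. In the sketch below the U_k are taken to be N(0, a) (any zero-mean PD with variance a would do); a, s, t and the sample size are illustrative choices.

```python
import math
import random

# Sampling check of the example: X(t) = U1*cos(t) + U2*sin(t) with
# <Uk> = 0, <Uk^2> = a, <U1 U2> = 0 gives <X(s) X(t)> = a*cos(t - s).
random.seed(2)
a, s, t = 2.0, 0.3, 1.1
n = 200_000
acc = 0.0
for _ in range(n):
    u1 = random.gauss(0.0, math.sqrt(a))
    u2 = random.gauss(0.0, math.sqrt(a))
    xs = u1 * math.cos(s) + u2 * math.sin(s)
    xt = u1 * math.cos(t) + u2 * math.sin(t)
    acc += xs * xt
assert abs(acc / n - a * math.cos(t - s)) < 0.05
```

Note that the estimate depends on s and t only through t − s, in agreement with the stationarity of the process.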

Remark (Statistical mechanics and stochastic differential equations) In Chapter 2 we will see that stochastic differential equations or "stochastic mechanics" can be used to investigate a single mechanical system in the presence of stochastic influences (white or colored noise). We use concepts that are similar to those developed in statistical mechanics such as probability distribution functions, moments, Markov properties, ergodicity, etc. We solve the stochastic differential equation (analytically, but in most cases numerically) and one solution represents a realization of the system. Repeating the solution process we obtain another realization and in this way we are able to calculate the moments of the system. An alternative way to calculate the moments would be to solve the Fokker-Planck equation (see Chapter 3) and then use the corresponding solution to determine the moments. To establish the Fokker-Planck equation we will use again the coefficients of the stochastic differential equation.

Statistical mechanics works with ensemble averages. Rather than defining a single quantity (e.g. a particle) with a PD p(x), one introduces a fictitious set of an arbitrarily large number M of quantities (e.g. particles or thermodynamic systems) and these M non-interacting quantities define the ensemble. In the case of interacting particles, the ensemble is made up of M different realizations of the N particles. In general, these quantities have different characteristic values (temperature, or energy, or values of N) x in a common range. The number of quantities having a characteristic value between x and x + dx defines the PD. Therefore, the PD is replaced by a density function for a large number of samples. One observes a large number of quantities and averages the results. Since, by definition, the quantities do not interact, one obtains in this way a physical realization of the ensemble. The averages calculated with this density function are referred to as ensemble averages, and a system where ensemble averages equal time averages is called an ergodic system. In stochastic mechanics we say that a process with the property that the averages defined in accordance with (1.6) equal the time averages represents an ergodic process.

Another stochastic process that possesses SF of some regularity is called a martingale. This name is related to "fair games" and we will discuss this expression in a moment.

In everyday language, we can state that the best prediction of a martingale process X(t), conditional on the path of all Brownian motions up to s < t, is given by the previous value X(s). To make this idea precise we formulate the following theorem:

Theorem 1.2. (Adapted process) We consider a probability space with an increasing family of sigma algebras of events T_s ⊂ T_t, 0 ≤ s < t (see Section 1.1). A process X(s, ω), ω ∈ Ω, s ∈ [0, ∞), is called T_s-adapted if it is T_s-measurable. A T_s-adapted process can be expanded into (the limit of) a sequence of Brownian motions B_u(ω) with u ≤ s (but not u > s).

Example

For n = 2, 3, ...; 0 < λ < t we see that the processes

(i) G1(t, ω) = B_{t/n}(ω), G2(t, ω) = B_{t−λ}(ω),
(ii) G3(t, ω) = B_{nt}(ω), G4(t, ω) = B_{t+λ}(ω),

are T_t-adapted ((i)) and not adapted ((ii)), respectively.

Theorem 1.3. (Martingale process) A process X(t) is called a martingale IFF it is adapted and the condition

⟨X_t | T_s⟩ = X_s ∀ 0 ≤ s < t < ∞, (1.27)

is almost certainly (AC) satisfied. If we replace the equality sign in (1.27) by ≤ (≥) we obtain a super (sub) martingale. We note that martingales have no other discontinuities than at worst finite jumps (see Arnold [1.2]).

Note that (1.27) defines a stochastic process. Its expectation ⟨⟨X_t | T_s⟩⟩ = ⟨X_s⟩; s < t is a deterministic function.

An interesting property of a martingale is expressed by

Pr(sup |X(t)| ≥ c) ≤ ⟨|X(b)|^p⟩/c^p; c > 0; p ≥ 1, (1.28)

where sup is the supremum of the embraced process in the interval [a, b]. (1.28) is a particular version of the Chebyshev inequality that will be derived in EX 1.2. We apply later the concept of martingales to Wiener processes and to stochastic integrals.

Finally we give an explanation of the phrase "martingale". A gambler is involved in a fair game and has at the start the capital X(s). Then he should possess, in the mean, at the instant t > s the original capital X(s). This is expressed in terms of the conditional mean value ⟨X_t | X_s⟩ = X_s. Etymologically, this term comes from French and means a system of betting which sets the amount to be wagered after each win or loss.

1.4. The Gaussian Distribution and Limit Theorems

In relation (1.13) we have already introduced a special case of the Gaussian (normally distributed) PD (GD) for a scalar variable. A generalization of (1.13) is given by the N(m, σ²) PD

p(x) = (2πσ²)^(−1/2) exp[−(x − m)²/(2σ²)]; ∀ x ∈ (−∞, ∞), (1.29)

where m is the average and σ² = ⟨x²⟩ − m² is the variance. The multivariate form of the Gaussian PD for the set of variables x1, ..., xn

has the form

p(x1, ..., xn) = N exp(−(1/2) A_{ik} x_i x_k − b_k x_k), (1.30a)

where we use a summation convention. The normalization constant N is given by

N = (2π)^(−n/2) [Det(A)]^(1/2) exp(−(1/2) A^(−1)_{ik} b_i b_k). (1.30b)

The characteristic function of (1.30) has the form

G(k1, ..., kn) = exp(−i A^(−1)_{uv} b_u k_v − (1/2) A^(−1)_{uv} k_u k_v). (1.31)

An expansion of (1.31) with respect to (WRT) powers of k yields the moments

⟨x_i⟩ = −A^(−1)_{ik} b_k, (1.32a)

and the covariance is given by

C_{ik} = ⟨(x_i − ⟨x_i⟩)(x_k − ⟨x_k⟩)⟩ = A^(−1)_{ik}. (1.32b)


This indicates that the GD is completely determined once the mean value and the covariance matrix are evaluated. The n variables are uncorrelated, and thus independent, if A^(−1) and hence A itself are diagonal.

The higher moments of an n-variate GD with zero mean are particularly easy to calculate. To show this, we recall that for zero mean we have b_k = 0 and we obtain the characteristic function with the use of (1.31) and (1.32) in the form

G = exp(−(1/2)⟨x_u x_v⟩ k_u k_v)

= 1 − (1/2)⟨x_u x_v⟩ k_u k_v + (1/8)⟨x_u x_v⟩⟨x_p x_q⟩ k_u k_v k_p k_q − ···; u, v, p, q = 1, 2, .... (1.33)

A comparison of equal powers of k in (1.33) and in a Taylor expansion of the exponential in (1.31) shows that all odd moments vanish,

⟨x_a x_b x_c⟩ = ⟨x_a x_b x_c x_d x_e⟩ = ··· = 0.

We also obtain with restriction to n = 2 (bivariate GD)

⟨x_i^4⟩ = 3⟨x_i²⟩²; ⟨x_i³ x_p⟩ = 3⟨x_i²⟩⟨x_i x_p⟩, i, p = 1, 2; (1.34)

⟨x_1² x_2²⟩ = ⟨x_1²⟩⟨x_2²⟩ + 2⟨x_1 x_2⟩².

In the case of a trivariate PD we face additional terms of the type ⟨x_k² x_p x_r⟩ = ⟨x_k²⟩⟨x_p x_r⟩ + 2⟨x_k x_p⟩⟨x_k x_r⟩. The higher order variates and higher order moments can be calculated in analogy to the results (1.34).
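The mixed fourth moment in (1.34) can be checked by sampling a correlated zero-mean bivariate GD. In the sketch below x2 is built from two independent N(0, 1) variables so that ⟨x1 x2⟩ = r; the value r = 0.6 and the sample size are illustrative choices.

```python
import random

# Sampling check of (1.34): <x1^2 x2^2> = <x1^2><x2^2> + 2<x1 x2>^2
# for a zero-mean bivariate GD with unit variances and correlation r.
random.seed(3)
r, n = 0.6, 300_000
m22 = 0.0   # estimate of <x1^2 x2^2>
m11 = 0.0   # estimate of <x1 x2>
for _ in range(n):
    g1 = random.gauss(0.0, 1.0)
    g2 = random.gauss(0.0, 1.0)
    x1 = g1
    x2 = r * g1 + (1 - r * r) ** 0.5 * g2   # gives <x1 x2> = r exactly
    m22 += x1 * x1 * x2 * x2
    m11 += x1 * x2
m22 /= n
m11 /= n
# here <x1^2> = <x2^2> = 1, so the exact value is 1 + 2r^2
assert abs(m22 - (1.0 + 2 * r * r)) < 0.05
```

The same construction extended to three variables confirms the trivariate term of the type ⟨x_k² x_p x_r⟩ quoted above.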

We give also the explicit formula of the bivariate Gaussian (see also EX 1.3)

p(x, y) = (1/N2) exp{ −[ ξ²/a − 2rξη/√(ab) + η²/b ] / [2(1 − r²)] }, (1.35a)

with

ξ = x − ⟨x⟩, η = y − ⟨y⟩; N2 = 2π √[ab(1 − r²)]; a = σ_x², b = σ_y², (1.35b)

and where r = r_12 is defined as the cross correlation coefficient (1.24). For σ_x = σ_y = 1 and ⟨x⟩ = ⟨y⟩ = 0 in (1.35) we can expand the latter formula and we obtain

p(x, y) = (2π)^(−1) exp[−(x² + y²)/2] Σ_{k=0}^∞ (r^k/k!) H_k(x) H_k(y), (1.36)

where H_k(x) is the k-th order Hermite polynomial (see Abramowitz and Stegun [1.3]). Equation (1.36) is the basis of the "Hermitian-chaos" expansion in the theory of stochastic partial differential equations.

In EX 1.3 we show that conditional probabilities of the GD (1.35a) are Gaussian themselves.

Now we consider two limit theorems. The first of them is related to GD and we introduce the second one for later use.

1.4.1. The central limit theorem

We consider the random variable

U = (x1 + ··· + xn)/√n; ⟨x_k⟩ = 0, (1.37)

where the x_k are independent identically distributed (IID) (but not necessarily normal) variables with zero mean and variance σ² = ⟨x_k²⟩. We find easily ⟨U⟩ = 0 and ⟨U²⟩ = σ².

The central limit theorem says that U tends in the limit n → ∞ to an N(0, σ²) variable with a PD given by (1.13a). To prove this we use the independence of the variables x_k and we perform the calculation of the characteristic function of the variable U with the aid of (1.12)

G_U(k) = ∫ dx1 p(x1) ··· ∫ dxn p(xn) exp[ik(x1 + ··· + xn)/√n]

= [G_x(k/√n)]^n = [1 − k²σ²/(2n) + O(n^(−3/2))]^n → exp(−k²σ²/2) for n → ∞. (1.38)

We introduced in the second line of (1.38) the characteristic function of one of the individual random variables according to (1.14a); (1.38) is the characteristic function of a GD that corresponds indeed to


N(0, σ²). Note that this result is independent of the particular form of the individual PD's p(x). It is only required that p(x) has finite moments. The central limit theorem explains why the Gaussian PD plays a prominent role in probability and stochastics.

1.4.2. The law of the iterated logarithm

We give here only this theorem and refer the reader for its derivation to the book of Chow and Teicher [1.4]. y_n is the partial sum of n IID variables

y_n = x1 + ··· + xn; ⟨x_n⟩ = β, ⟨(x_n − β)²⟩ = σ². (1.39)

The theorem of the iterated logarithm states that there exists AC an asymptotic limit

−σ ≤ lim_{n→∞} (y_n − nβ)/√(2n ln[ln(n)]) ≤ σ. (1.40)

Equation (1.40) is particularly valuable in the case of estimates of stochastic functions and we will use it later to investigate Brownian motions. We will give a numerical verification of (1.40) in program F18.

1.5. Transformation of Stochastic Variables

We consider transformations of an n-dimensional set of stochastic variables x1, ..., xn with the PD p_{x1···xn}(x1, ..., xn). First we introduce the PD of a linear combination of random variables

z = Σ_{k=1}^n α_k x_k, (1.41a)

where the α_k are deterministic constants. The PD of the stochastic variable z is then defined by

p_z(z) = ∫ dx1 ··· ∫ dxn δ(z − Σ_k α_k x_k) p_{x1···xn}(x1, ..., xn). (1.41b)

Now we investigate transformations of the stochastic variables x1, ..., xn. The new variables are defined by

u_k = u_k(x1, ..., xn), k = 1, ..., n. (1.42)


The inversion of this transformation and the Jacobian are

x_k = g_k(u1, ..., un), J = ∂(x1, ..., xn)/∂(u1, ..., un). (1.43)

We infer from an expansion of the probability measure (1.1a) that

dp_{x1···xn} = Pr(x1 ≤ X1 ≤ x1 + dx1, ..., xn ≤ Xn ≤ xn + dxn)

= p_{x1···xn}(x1, ..., xn) dx1 ··· dxn for dx_k → 0, k = 1, ..., n. (1.44a)

Equation (1.44a) represents the elementary probability measure that the variables are located in the hyper rectangle

∏_{k=1}^n [x_k, x_k + dx_k].

The principle of invariant elementary probability measure states that this measure is invariant under transformations of the coordinate system. Thus, we obtain the transformation

dp_{u1···un} = dp_{x1···xn}. (1.44b)

This yields the transformation rule for the PD's

p_{u1···un}(u1(x1, ..., xn), ..., un(x1, ..., xn)) = |det(J)| p_{x1···xn}(x1, ..., xn). (1.45)

Example (The Box-Muller method) As an application we introduce the transformation method of Box-Muller to generate a GD. There are two stochastic variables given in an elementary cube

p(x1, x2) = 1 for 0 ≤ x1 ≤ 1, 0 ≤ x2 ≤ 1; p(x1, x2) = 0 elsewhere. (1.46)

Note that the bivariate PD is already normalized. Now we introduce the new variables

y1 = √(−2 ln x1) cos(2πx2), (1.47)

y2 = √(−2 ln x1) sin(2πx2).

The inversion of (1.47) is

x1 = exp[−(y1² + y2²)/2]; x2 = (1/2π) arctan(y2/y1).


According to (1.45) we obtain the new bivariate PD

p(y1, y2) = p(x1, x2) |∂(x1, x2)/∂(y1, y2)| = (1/2π) exp[−(y1² + y2²)/2], (1.48)

and this is the PD of two independent N(0, 1) variables. Until now we have only covered stochastic variables that are time-independent, or stochastic processes for the case that all variables belong to the same instant. In the next section we discuss a property that is rather typical for stochastic processes.

1.6. The Markov Property

A process is called a Markov (or Markovian) process if the conditional PD at a given time tn depends only on the immediately prior time t(n−1). This means that for t1 < t2 < ··· < tn

p_{1|n−1}(y_n, t_n | y1, t1; ...; y_{n−1}, t_{n−1}) = p_{1|1}(y_n, t_n | y_{n−1}, t_{n−1}), (1.49)

and the quantity p_{1|1}(y_n, t_n | y_{n−1}, t_{n−1}) is referred to as the transition probability distribution (TPD).

A Markov process is thus completely defined if we know the two functions

p_1(y1, t1) and p_{1|1}(y2, t2 | y1, t1) for t1 < t2.

Thus, we obtain for t1 < t2 (see (1.15'') and note that we use a semicolon to separate coordinates that belong to different instants)

p_2(y1, t1; y2, t2) = p_1(y1, t1) p_{1|1}(y2, t2 | y1, t1), (1.50.1)

and for t1 < t2 < t3

p_3(y1, t1; y2, t2; y3, t3) = p_1(y1, t1) p_{1|1}(y2, t2 | y1, t1) p_{1|1}(y3, t3 | y2, t2). (1.50.2)

We integrate equation (1.50.2) over the variable y2 and we obtain

p_2(y1, t1; y3, t3) = p_1(y1, t1) ∫ p_{1|1}(y2, t2 | y1, t1) p_{1|1}(y3, t3 | y2, t2) dy2. (1.51)


Now we use

p_{1|1}(y3, t3 | y1, t1) = p_2(y1, t1; y3, t3)/p_1(y1, t1),

and we obtain from (1.51) the Chapman-Kolmogorov equation

p_{1|1}(y3, t3 | y1, t1) = ∫ p_{1|1}(y2, t2 | y1, t1) p_{1|1}(y3, t3 | y2, t2) dy2. (1.52)

It is easy to verify that a particular solution of (1.52) is given by

p_{1|1}(y2, t2 | y1, t1) = [2π(t2 − t1)]^(−1/2) exp{−(y2 − y1)²/[2(t2 − t1)]}. (1.53)
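That the Gaussian kernel (1.53) satisfies the Chapman-Kolmogorov equation (1.52) can be checked numerically by carrying out the y2 integration on a grid. The times, evaluation points and grid parameters below are illustrative choices.

```python
import math

# Numerical check of the Chapman-Kolmogorov equation (1.52) for the
# Gaussian transition density (1.53): integrating the product of the
# kernels (t1 -> t2) and (t2 -> t3) over y2 reproduces the kernel t1 -> t3.
def kernel(y, t, y0, t0):
    return math.exp(-(y - y0) ** 2 / (2 * (t - t0))) / math.sqrt(2 * math.pi * (t - t0))

t1, t2, t3 = 0.0, 0.7, 1.5
y1, y3 = 0.2, -0.4

# trapezoidal integration over y2 on a wide grid (tails are negligible)
m, lo, hi = 4000, -15.0, 15.0
h = (hi - lo) / m
integral = 0.0
for i in range(m + 1):
    y2 = lo + i * h
    w = 0.5 if i in (0, m) else 1.0
    integral += w * kernel(y2, t2, y1, t1) * kernel(y3, t3, y2, t2) * h

assert abs(integral - kernel(y3, t3, y1, t1)) < 1e-4
```

The identity holds exactly; the small tolerance only absorbs the quadrature error of the grid.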

We give in EX 1.4 hints how to verify (1.53). We can also integrate the identity (1.50.1) over y1 and we obtain

p_1(y2, t2) = ∫ p_1(y1, t1) p_{1|1}(y2, t2 | y1, t1) dy1. (1.54)

The latter relation is an integral equation for the function p_1(y2, t2). EX 1.5 gives hints to show that the solution to (1.54) is the Gaussian PD

p_1(y, t) = (2πt)^(−1/2) exp[−y²/(2t)]; lim_{t→0+} p_1(y, t) = δ(y). (1.55)

In Chapter 3 we use the Chapman-Kolmogorov equation (1.52) to derive the master equation that is in turn applied to deduce the Fokker-Planck equation.

1.6.1. Stationary Markov processes

Stationary Markovian processes are defined by a PD and transition probabilities that depend only on time differences. The most important example is the Ornstein-Uhlenbeck process that we will treat in Sections 2.1.3 and 3.4. There we will prove the formulas for its PD

p_1(y) = (2π)^(−1/2) exp(−y²/2), (1.56.1)


and the transition probability

p_{1|1}(y2, t2 | y1, t1) = [2π(1 − u²)]^(−1/2) exp[−(y2 − u y1)²/(2(1 − u²))]; (1.56.2)

u = exp(−τ), τ = t2 − t1; p_{1|1}(y2, t1 | y1, t1) = δ(y2 − y1).

The Ornstein-Uhlenbeck process is thus stationary, Gaussian and Markovian. A theorem of Doob [1.5] states that it is, apart from the trivial process where all variables are independent, the only process that satisfies all three properties listed above. We continue to consider stationary Markov processes in Section 3.1.

1.7. The Brownian Motion

Brown discovered in the year 1828 that pollen grains submerged in fluids show, under collisions with the fluid molecules, a completely irregular movement. This process is labeled y := B_t(ω), where the subscript is the time. It is also called a Wiener (white noise) process and labeled with the symbol W_t (WP); it is identical to the Brownian motion: W_t = B_t. The WP is a Gaussian [it has the PD (1.55)] and a Markov process.

Note also that the PD of the Wiener process (WP), given by (1.55), satisfies a parabolic partial differential equation (called the Fokker-Planck equation, see Section 3.2)

∂p/∂t = (1/2) ∂²p/∂y². (1.57)

We calculate the characteristic function G(u) and we obtain according to (1.12)

G(u) = ⟨exp(iuW_t)⟩ = exp(−u²t/2), (1.58a)

and we obtain the moments in accordance with (1.13b)

⟨W_t^(2k)⟩ = (2k)! t^k/(2^k k!); ⟨W_t^(2k+1)⟩ = 0; k ∈ N_0. (1.58b)

We use the Markovian properties now to prove the independence of Brownian increments. The latter are defined by

y1, y2 − y1, ..., yn − y_{n−1} with y_k := W_{t_k}; t1 < ··· < tn. (1.59)


We calculate explicitly the joint distribution given by (1.50) and we obtain with the use of (1.53) and (1.55)

p_2(y1, t1; y2, t2) = [(2π)² t1(t2 − t1)]^(−1/2) exp{−y1²/(2t1) − (y2 − y1)²/[2(t2 − t1)]}, (1.60)

and

p_3(y1, t1; y2, t2; y3, t3) = [(2π)³ t1(t2 − t1)(t3 − t2)]^(−1/2) exp{−y1²/(2t1) − (y2 − y1)²/[2(t2 − t1)] − (y3 − y2)²/[2(t3 − t2)]}, (1.61)

p_4(y1, t1; y2, t2; y3, t3; y4, t4) = [2π(t4 − t3)]^(−1/2) p_3(y1, t1; y2, t2; y3, t3) exp{−(y4 − y3)²/[2(t4 − t3)]}.

We see that the joint PD's of the variables y1, y2 − y1, y3 − y2, y4 − y3 are given in (1.60) and (1.61) in a factorized form and this implies the independence of these variables. To prove the independence of the remaining variables y4 − y3, ..., yn − y_{n−1} we would only have to continue the process of constructing joint PD's with the aid of (1.49).

In EX 1.6 we prove the following property

⟨y(t1) y(t2)⟩ = min(t1, t2) = t1 ∧ t2. (1.62)

Equation (1.62) also demonstrates that the Brownian motion is not a stationary process, since the autocorrelation does not depend on the time difference τ = t2 − t1 but depends on t2 ∧ t1.
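The covariance (1.62) is easy to verify with a discretized Brownian motion built from independent N(0, Δt) increments. The step size, times and path count below are illustrative choices.

```python
import math
import random

# Simulation check of (1.62): <W(t1) W(t2)> = min(t1, t2) for a
# Brownian motion discretized with independent N(0, dt) increments.
random.seed(7)
dt, steps, paths = 0.01, 150, 20_000
k1 = 80                      # t1 = 0.8; the final step gives t2 = 1.5
acc = 0.0
for _ in range(paths):
    w = 0.0
    w1 = 0.0
    for k in range(1, steps + 1):
        w += random.gauss(0.0, math.sqrt(dt))
        if k == k1:
            w1 = w           # record W(t1) along the path
    acc += w1 * w            # w is now W(t2)
assert abs(acc / paths - 0.8) < 0.05
```

Writing W(t2) = W(t1) + [W(t2) − W(t1)] with an independent second bracket makes the result transparent: only the ⟨W(t1)²⟩ = t1 term survives.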

To apply Kolmogorov's criterion (1.20) we choose a = 4 and we obtain with (1.58b) and (1.62) ⟨[y(t1) − y(t2)]^4⟩ = 3|t2 − t1|². Thus we can conclude with the choice b = 1, c = 3 that the SF of the WP are AC continuous functions. The two graphs, Figures 1(a) and 1(b), are added in this section to indicate the continuous SF.

We apply also the law of the iterated logarithm to the WP. To this end we consider the independent increments y_k − y_{k−1}, where t_k = kΔt with a finite time increment Δt. This yields for the partial sum in (1.39)

Σ_{k=1}^n (y_k − y_{k−1}) = y_n = y_{nΔt}; β = ⟨y_k − y_{k−1}⟩ = 0; ⟨(y_k − y_{k−1})²⟩ = Δt.


Fig. 1(a). The Brownian motion Bt versus the time axis. Included is a graph of the numerically determined temporal evolution of the mean value and the variance.


Fig. 1(b). The planar Brownian motion with x = B_t^1 and y = B_t^2, where B_t^k, k = 1, 2, are independent Brownian motions.

We substitute the results of the last line into (1.40) and we obtain

−√Δt ≤ lim_{n→∞} y_{nΔt}/√(2n ln[ln(n)]) ≤ √Δt.


The assignment of t := nΔt into the last line and the approximation ln(t/Δt) → ln(t) for t → ∞ gives the desired result for the AC asymptotic behavior of the WP

−1 ≤ lim_{t→∞} W_t/√(2t ln[ln(t)]) ≤ 1. (1.63)

We will verify (1.63) in Chapter 5 numerically. There are various equivalent definitions of a Wiener process. We use the following:

Definition 1.8. (Wiener process) A WP has an initial value W_0 = 0 and its increments W_t − W_s, t > s, satisfy three conditions. They are

(i) independent, (ii) stationary (the PD depends only on t − s) and (iii) N(0, t − s) distributed.

As a consequence of these three conditions the WP exhibits continuous sample functions with probability 1. •

There are also WP's that do not start at zero. There is also a generalization of the WP with discontinuous SF. We will return to this point at the end of Section 1.7.

Now we show that a WP is a martingale

⟨B_s | B_u⟩ = B_u; s > u. (1.64)

We prove (1.64) with the application of the Markovian property (1.53). We use (1.17) and write

⟨B_s | B_u⟩ = ⟨y2, s | y1, u⟩ = ∫ y2 p_{1|1}(y2, s | y1, u) dy2

= [2π(s − u)]^(−1/2) ∫ y2 exp{−(y2 − y1)²/[2(s − u)]} dy2 = y1 = B_u.

This concludes the proof of (1.64). A WP has also the following properties: the translated quantity W̃_t and the scaled quantity W̄_t defined for t, a > 0 by

W̃_t = W_{t+a} − W_a and W̄_t = B_{a²t}/a (1.65)


are also Brownian motions. To prove (1.65) we note first that the averages of both variables are zero, ⟨W̃_t⟩ = ⟨W̄_t⟩ = 0. Now we have to show that both variables also satisfy the condition for the autocorrelation. We prove this only for the variable W̄_t and leave the second part for EX 1.7. Thus, we put

⟨W̄_t W̄_s⟩ = ⟨B_{a²t} B_{a²s}⟩/a² = (a²t ∧ a²s)/a² = t ∧ s.
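The scaling property can also be seen in a short simulation: sampling B at the times a²t < a²s from independent increments and rescaling by 1/a reproduces the Brownian covariance t ∧ s. The values of a, t, s and the sample size are illustrative choices.

```python
import math
import random

# Check of the scaling in (1.65): <(B_{a^2 t}/a)(B_{a^2 s}/a)> = t ^ s.
random.seed(8)
a, t, s, n = 2.0, 0.5, 1.2, 200_000
acc = 0.0
for _ in range(n):
    # B at the ordered times a^2*t < a^2*s, built from independent increments
    b_t = random.gauss(0.0, math.sqrt(a * a * t))
    b_s = b_t + random.gauss(0.0, math.sqrt(a * a * (s - t)))
    acc += (b_t / a) * (b_s / a)
assert abs(acc / n - min(t, s)) < 0.02
```

The same two-increment construction with shifted times illustrates the translation W_{t+a} − W_a in the first part of (1.65).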

So far, we considered exclusively scalar WP's. In the study of partial differential equations we need to introduce a set of n independent WP's. Thus, we generalize the WP to the case of n independent WP's that define a vector of stochastic processes

x1(t1), ..., xn(tn); t_k ≥ 0. (1.66)

The corresponding PD is then

p(x1, ..., xn) = p_{x1}(x1) ··· p_{xn}(xn) = (2π)^(−n/2) ∏_{k=1}^n t_k^(−1/2) exp[−x_k²/(2t_k)]. (1.67)

We have assumed independent stochastic variables (like the orthogonal basic vectors in the case of deterministic variables) and this independence is expressed by the factorized multivariate PD (1.67).

We define an n-dimensional WP (or a Wiener sheet (WS)) by

M_t^(n) = ∏_{k=1}^n x_k(t_k); t = (t1, ..., tn). (1.68)

Now we find how we can generalize Definition 1.8 to the case of n stochastic processes. First, we prove easily that the variable (1.68) has a zero mean

⟨M_t^(n)⟩ = 0. (1.69)

Thus, it remains to calculate the autocorrelation (1.62). We use the independence of the set of variables x_k(t_k), k = 1, ..., n and we obtain, with the use of the bivariate PD (1.60) with y1 = x_k(t_k), y2 = x_k(s_k), a factorized result for the independent variables. Hence, we obtain

⟨M_t^(n) M_s^(n)⟩ = ∏_{k=1}^n ⟨x_k(t_k) x_k(s_k)⟩; t = (t1, ..., tn); s = (s1, ..., sn).

The evaluation of the last line yields with (1.62)

⟨M_t^(n) M_s^(n)⟩ = ∏_{k=1}^n t_k ∧ s_k. (1.70)

The relations (1.69) and (1.70) show now that the process (1.68) is an n-WP.

In analogy to deterministic variables we can now construct with stochastic variables curves, surfaces and hyper surfaces. Thus, a curve on the 2-dimensional WS and a surface on the 3-dimensional WS are given by

C_t = M^(2)_{t, f(t)}; S_{t1,t2} = M^(3)_{t1, t2, g(t1,t2)}.

We give here only two interesting examples.

Example 1 Here we put

K_t = M^(2)_{a,b}; a = exp(t), b = exp(−t); −∞ < t < ∞.

This defines a stochastic hyperbola with zero mean and with the autocorrelation

⟨K_t K_s⟩ = ⟨x1(e^t) x1(e^s)⟩ ⟨x2(e^(−t)) x2(e^(−s))⟩ = (e^t ∧ e^s)(e^(−t) ∧ e^(−s)) = exp(−|t − s|). (1.71)

The property (1.71) shows that this process is not only a WS but also a stationary Ornstein-Uhlenbeck process (see Section 1.6.1).
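The autocorrelation (1.71) can be confirmed by sampling the two independent Brownian motions at the required (ordered) times. The values of t and s and the sample size below are illustrative choices.

```python
import math
import random

# Sampling check of (1.71): K_t = x1(e^t) * x2(e^-t), with x1 and x2
# independent Brownian motions, has autocorrelation exp(-|t - s|).
random.seed(9)
t, s, n = 0.4, 1.0, 200_000
acc = 0.0
for _ in range(n):
    # x1 at the times e^t < e^s, x2 at the times e^-s < e^-t,
    # both built from independent Gaussian increments
    x1_t = random.gauss(0.0, math.sqrt(math.exp(t)))
    x1_s = x1_t + random.gauss(0.0, math.sqrt(math.exp(s) - math.exp(t)))
    x2_s = random.gauss(0.0, math.sqrt(math.exp(-s)))
    x2_t = x2_s + random.gauss(0.0, math.sqrt(math.exp(-t) - math.exp(-s)))
    acc += (x1_t * x2_t) * (x1_s * x2_s)
assert abs(acc / n - math.exp(-abs(t - s))) < 0.05
```

By independence the expectation factorizes into (e^t ∧ e^s)(e^(−t) ∧ e^(−s)), which is exactly the product evaluated in (1.71).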

Example 2 Here we define the process

K_t = exp[−(1 + c)t] M^(2)_{a,b}; a = exp(2t), b = exp(2ct); c > 0. (1.72)


Again we see that the stochastic variable defined in (1.72) has zero mean, and the calculation of its autocorrelation yields

⟨K_t K_s⟩ = exp[−(1 + c)(t + s)] ⟨x1(e^(2t)) x1(e^(2s))⟩ ⟨x2(e^(2ct)) x2(e^(2cs))⟩

= exp[−(1 + c)(t + s)] (e^(2t) ∧ e^(2s))(e^(2ct) ∧ e^(2cs))

= exp[−(1 + c)|t − s|]. (1.73)

The latter equation means that the process (1.72) is again an Ornstein-Uhlenbeck process. Note also that because of c > 0 there is no possibility to use (1.73) to reproduce the result of the previous example.

Just as in the case of one parameter, there exist for WS's also scaling and translation relations. Thus, the stochastic variables

H_{u,v} = (1/(ab)) M^(2)_{a²u, b²v};

L_{u,v} = M^(2)_{u+a, v+b} − M^(2)_{u+a, b} − M^(2)_{a, v+b} + M^(2)_{a, b}, (1.74)

are also WS's. The proof of (1.74) is left for EX 1.8. We give in Figures 1(a) and 1(b) two graphs of the Brownian motion.

At the end of this section we wish to mention that the WP is a subclass of a Levy process L(t). The latter complies with the first two conditions of Definition 1.8. However, it does not possess normally distributed increments. A particular feature of a normally distributed process x is the vanishing of the skewness ⟨x³⟩/⟨x²⟩^(3/2). However, many statistical phenomena (like hydrodynamic turbulence, the market values of stocks, etc.) show remarkable values of the skewness. This means that a GD (with only two parameters) is not flexible enough to describe such phenomena and it must be replaced by a PD that contains a sufficient number of parameters. An appropriate choice is the normal inverse Gaussian distribution (NIGD) (see Section 4.4). The NIGD does not satisfy the Kolmogorov criterion. This means that the Levy process L(t) is equipped with SF that jump up and down at arbitrary instants t. To get more information about the Levy process we refer the reader to the work of Ikeda & Watanabe [1.6] and of Rydberg


[1.7]. In Section 4.4 we will give a short description of the application of the NIGD in economics theories.

1.8. Stochastic Integrals

We need stochastic integrals (SI) when we attempt to solve a stochastic differential equation (SDE). Hence we introduce a simple first order ordinary SDE

\frac{dX}{dt} = a(X(t), t) + b(X(t), t)\,\xi_t;   X, a, b, t \in \mathbb{R}.   (1.75)

We use in (1.75) the deterministic functions a and b. The symbol \xi_t indicates the only stochastic term in this equation. We assume

\langle \xi_t \rangle = 0;   \langle \xi_t \xi_s \rangle = \delta(t - s).   (1.76)

The spectrum of the autocorrelation in (1.76) is constant (see Section 2.2); in view of this, \xi_t is referred to as white noise, and any term proportional to \xi_t is called a noisy term. These assumptions are based on a great variety of physical phenomena that are met in many experimental situations.

Now we replace (1.75) by a discretization and we put

\Delta t_k = t_{k+1} - t_k > 0;   X_k = X(t_k);   \Delta X_k = X_{k+1} - X_k;   k = 0, 1, \ldots

The substitution into (1.75) yields

\Delta X_k = a(X_k, t_k)\,\Delta t_k + b(X_k, t_k)\,\Delta B_k;   \Delta B_k = B_{k+1} - B_k;   k = 0, 1, \ldots   (1.77)

where we used the Brownian increment \Delta B_k in place of \xi_{t_k}\,\Delta t_k. A precise derivation of (1.77) is given in Section 2.2. Thus we can write (1.75) in terms of

X_n = X_0 + \sum_{s=0}^{n-1} \left[ a(X_s, t_s)\,\Delta t_s + b(X_s, t_s)\,\Delta B_s \right];   X_0 = X(t_0).   (1.78)
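The iteration (1.78) is precisely the Euler–Maruyama scheme for the SDE (1.75). A minimal sketch (the function name and parameter values are ours, chosen for illustration):

```python
import numpy as np

def euler_maruyama(a, b, x0, T, n, rng):
    """Iterate (1.78): X_{k+1} = X_k + a(X_k, t_k) dt + b(X_k, t_k) dB_k,
    with independent increments dB_k ~ N(0, dt)."""
    dt = T / n
    t = 0.0
    x = x0
    path = [x0]
    for _ in range(n):
        dB = rng.normal(0.0, np.sqrt(dt))
        x = x + a(x, t)*dt + b(x, t)*dB
        t += dt
        path.append(x)
    return np.array(path)

rng = np.random.default_rng(1)
# with b = 0 the noise drops out and the scheme reduces to the classical
# Euler method for dX/dt = a(X, t); here dX/dt = X gives X(1) = e
path = euler_maruyama(lambda x, t: x, lambda x, t: 0.0, 1.0, 1.0, 10_000, rng)
```

The deterministic limit b = 0 provides a simple sanity check of the discretization.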


What happens in the limit \Delta t_k \to 0? If there is a "reasonable" limit of the last term in (1.78) we obtain as solution of the SDE (1.75)

X(t) = X(0) + \int_0^t a(X(s), s)\,ds + \text{``} \int_0^t b(X(s), s)\,dB_s \text{''}.   (1.79)

The first integral in (1.79) is a conventional integral of Riemann's type and we put the stochastic (noisy) integral into inverted commas. The irregularity of the noise does not allow us to calculate the stochastic integral in terms of a Riemann integral. This is caused by the fact that the paths of the WP are nowhere differentiable. Thus we find that a SI depends crucially on the decomposition of the integration interval.

We assumed in (1.75) to (1.79) that b(X,t) is a deterministic function. We generalize the problem of the calculation of a SI and we consider a stochastic function

I = \int_0^t f(\omega, s)\,dB_s.   (1.80)

We recall that Riemann integrals of the type (g(s) is a differentiable function)

\int_0^T f(s)\,dg(s) = \int_0^T f(s) g'(s)\,ds,

are discretized in the following manner

\int_0^T f(s)\,dg(s) = \lim_{n \to \infty} \sum_{k=0}^{n-1} f(s_k) [g(s_{k+1}) - g(s_k)].

Thus, it is plausible to introduce a discretization of (1.80) that takes the form

I = \sum_k f(s_k, \omega)(B_{k+1} - B_k).   (1.81)

In Equation (1.81) we used s_k as the time argument of the integrand f. This is the value of s that corresponds to the left endpoint of the discretization interval and we say that this decomposition does not


look into the future. We call this type of integral an Ito integral and write

I_I = \int_0^t f(s, \omega)\,dB_s.   (1.82)

Another possible choice is to use the midpoint of the interval and with this we obtain the Stratonovich integral

I_S = \int_0^t f(s, \omega) \circ dB_s = \sum_k f(\bar{s}_k, \omega)(B_{k+1} - B_k);   \bar{s}_k = \frac{1}{2}(t_{k+1} + t_k).   (1.83)

Note that the symbol "\circ" between the integrand and the stochastic differential is used to indicate Stratonovich integrals.

There are, of course, an uncountable infinity of other decompositions of the integration interval that lead to different definitions of a SI. It is, however, convenient to consider only the Ito and the Stratonovich integrals. We will discuss their properties and find out which type of integral is more appropriate for use in the analysis of stochastic differential equations.
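The dependence of a SI on the decomposition can be seen directly by evaluating the two sums (1.81) and (1.83) for the integrand f = B_s on one and the same sample path. Anticipating the results (1.89) and (1.96) derived below in the text, the Ito sum approaches (B_t^2 - t)/2, while the Stratonovich sum (written with the mean of the endpoint values of B) telescopes to B_t^2/2:

```python
import numpy as np

rng = np.random.default_rng(2)
T, n = 1.0, 200_000
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), size=n)
B = np.concatenate(([0.0], np.cumsum(dB)))   # B_0, ..., B_n on the grid

# Ito sum: integrand at the left endpoint, Eq. (1.81)
ito = np.sum(B[:-1] * dB)
# Stratonovich sum: mean of the endpoint values of B, cf. (1.83)/(1.96)
strat = np.sum(0.5 * (B[:-1] + B[1:]) * dB)

BT = B[-1]
```

The two sums differ by half the accumulated squared increments, which is the source of the non-classical term discussed below.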

Properties of the Ito integral

(a) We have for deterministic integration limits a < b and constants \alpha, \beta \in \mathbb{R}

\int_a^b [\alpha f_1(s, \omega) + \beta f_2(s, \omega)]\,dB_s = \alpha I_1 + \beta I_2;   I_k = \int_a^b f_k(s, \omega)\,dB_s.   (1.84)

Note that (1.84) remains valid for Stratonovich integrals as well. The proof of (1.84) is trivial.

In the following we give non-trivial properties that apply, however, exclusively to Ito integrals. Now we need a definition:

Definition 1.9. (non-anticipative or adapted functions) The function f(t, B_s) is said to be non-anticipative (or adapted, see also Theorem 1.2) if it depends only on stochastic variables of the past: B_s appears only for arguments s \le t. Examples of non-anticipative functions are

f(s, \omega) = \int_0^s g(u)\,dB_u;   f(s, \omega) = B_s.


Now we list further properties of the Ito integrals that involve non-anticipative functions f(s, B_s) and g(s, B_s).

(b) M_1 = \left\langle \int_0^t f(s, B_s)\,dB_s \right\rangle = 0.   (1.85)

Proof. We use (1.81) and obtain

M_1 = \left\langle \sum_k f(s_k, B_k)(B_{k+1} - B_k) \right\rangle.

But we know that B_k is independent of B_{k+1} - B_k. The function f(s_k, B_k) is thus also independent of B_{k+1} - B_k. Hence we obtain

M_1 = \sum_k \langle f(s_k, B_k) \rangle \langle B_{k+1} - B_k \rangle = 0.

This concludes the proof of (1.85).

(c) Here we study the average of a product of integrals and we show that

M_2 = \left\langle \int_0^t f(s, B_s)\,dB_s \int_0^t g(u, B_u)\,dB_u \right\rangle = \int_0^t \langle f(s, B_s)\,g(s, B_s) \rangle\,ds.   (1.86)

Proof.

M_2 = \sum_{m,n} \langle f(s_m, B_m)(B_{m+1} - B_m)\,g(s_n, B_n)(B_{n+1} - B_n) \rangle.

We have to distinguish three subclasses: (i) n > m, (ii) n < m and (iii) n = m. Taking into account the independence of the increments of WP's we see that only case (iii) contributes non-trivially to M_2. This yields

M_2 = \sum_n \langle f(s_n, B_n)\,g(s_n, B_n)(B_{n+1} - B_n)^2 \rangle.

But we know that f(s_n, B_n)\,g(s_n, B_n) is again a function that is independent of (B_{n+1} - B_n)^2. We use (1.62) and obtain

\langle (B_{n+1} - B_n)^2 \rangle = \langle B_{n+1}^2 - 2 B_{n+1} B_n + B_n^2 \rangle = t_{n+1} - t_n = \Delta t_n,


and thus we get

M_2 = \sum_n \langle f(s_n, B_n)\,g(s_n, B_n) \rangle \langle (B_{n+1} - B_n)^2 \rangle = \sum_n \langle f(s_n, B_n)\,g(s_n, B_n) \rangle\,\Delta t_n.

The last relation tends for \Delta t_n \to 0 to (1.86).

(d) A generalization of the property (c) is given by

M_3 = \left\langle \int_0^a f(s, B_s)\,dB_s \int_0^b g(u, B_u)\,dB_u \right\rangle = \int_0^{a \wedge b} \langle f(s, B_s)\,g(s, B_s) \rangle\,ds.   (1.87)

To prove (1.87) we must distinguish two subclasses: (i) b = a + c > a and (ii) a = b + c > b; c > 0. We consider only case (i); the proof for case (ii) is done by analogy. We derive from (1.86) and (1.87)

M_3 = M_2 + \left\langle \int_0^a f(s, B_s)\,dB_s \int_a^b g(u, B_u)\,dB_u \right\rangle
    = M_2 + \sum_n \sum_{m > n} \langle f(s_n, B_n)\,g(s_m, B_m)\,\Delta B_n\,\Delta B_m \rangle.

But we see that f(s_n, B_n) and \Delta B_n are independent of g(s_m, B_m) and \Delta B_m. Hence, we obtain

M_3 = M_2 + \sum_n \langle f(s_n, B_n)\,\Delta B_n \rangle \sum_{m > n} \langle g(s_m, B_m)\,\Delta B_m \rangle = M_2,

where we used (1.85). This concludes the proof of (1.87) for case (i). Now we calculate an example

I(t) = \int_0^t B_s\,dB_s.   (1.88a)

First of all we obtain with the use of (1.85) and (1.86) the moments of the stochastic variable (1.88a)

\langle I(t) \rangle = 0;   \langle I(t)\,I(t + \tau) \rangle = \int_0^{\gamma} \langle B_s^2 \rangle\,ds = \int_0^{\gamma} s\,ds = \frac{\gamma^2}{2};   \gamma = t \wedge (t + \tau).   (1.88b)


We calculate the integral with an Ito decomposition

I = \sum_k B_k (B_{k+1} - B_k).

But we have

\Delta B_k^2 = B_{k+1}^2 - B_k^2 = (B_{k+1} - B_k)^2 + 2 B_k (B_{k+1} - B_k) = (\Delta B_k)^2 + 2 B_k (B_{k+1} - B_k).

Hence we obtain

I(t) = \frac{1}{2} \sum_k \left[ \Delta(B_k^2) - (\Delta B_k)^2 \right].

We now calculate the two sums in the last line separately. In the first place we obtain

I_1(t) = \sum_k \Delta(B_k^2) = (B_1^2 - B_0^2) + (B_2^2 - B_1^2) + \cdots + (B_N^2 - B_{N-1}^2) = B_N^2 \to B_t^2,

where we used B_0 = 0. The second sum and its average are given by

I_2(t) = \sum_k (\Delta B_k)^2 = \sum_k (B_{k+1}^2 - 2 B_{k+1} B_k + B_k^2);   \langle I_2(t) \rangle = \sum_k \Delta t_k = t.

The relation \langle I_2(t) \rangle = t gives not only the average but also the integral I_2(t) itself. However, the direct calculation of I_2(t) is impractical and we refer the reader to the book of Øksendal [1.8], where the corresponding algebra is performed. We use instead an indirect proof and show that the quantity z (the deviation of I_2(t) from its mean t) is a deterministic function with the value zero. Thus, we put z = I_2(t) - t. The mean value is clearly \langle z \rangle = 0 and we obtain

\langle z^2 \rangle = \langle I_2^2(t) - 2 t\,I_2(t) + t^2 \rangle = \langle I_2^2(t) \rangle - t^2.


But we have

\langle I_2^2(t) \rangle = \sum_k \sum_m \langle (\Delta B_k)^2 (\Delta B_m)^2 \rangle.   (1.88c)

The independence of the increments of the WP yields

\langle (\Delta B_k)^2 (\Delta B_m)^2 \rangle = \langle (\Delta B_k)^2 \rangle \langle (\Delta B_m)^2 \rangle + 2\,\delta_{km} \langle (\Delta B_k)^2 \rangle^2,

hence we obtain with the use of the results of EX 1.6

\langle I_2^2(t) \rangle = \left( \sum_k \langle (\Delta B_k)^2 \rangle \right)^2 + 2 \sum_k (t_{k+1} - t_k)^2 = t^2 + 2 \sum_k (t_{k+1} - t_k)^2.

However, we have (for an equidistant decomposition)

\sum_k (t_{k+1} - t_k)^2 = \sum_k (\Delta t)^2 = t\,\Delta t \to 0 \quad \text{for } \Delta t \to 0,

and this indicates that \langle z^2 \rangle = 0. This procedure can be pursued to higher orders and we obtain the result that all moments of z are zero; thus we obtain I_2(t) = t. Thus, we obtain finally

I(t) = \int_0^t B_s\,dB_s = \frac{1}{2}(B_t^2 - t).   (1.89)
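The argument that \langle z^2 \rangle = 2\sum(\Delta t)^2 \to 0 can be illustrated numerically: over many paths, the sample variance of I_2(t) = \sum_k (\Delta B_k)^2 shrinks like 2t^2/n as the decomposition is refined, while its mean stays at t. A sketch (the grid sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
t, n_paths = 1.0, 2000

def i2_samples(n_steps):
    # I_2 = sum_k (Delta B_k)^2, computed for n_paths independent paths
    dB = rng.normal(0.0, np.sqrt(t / n_steps), size=(n_paths, n_steps))
    return np.sum(dB**2, axis=1)

results = {}
for n_steps in (10, 100, 1000):
    I2 = i2_samples(n_steps)
    # predicted spread: <z^2> = 2 * sum_k (Delta t)^2 = 2 t^2 / n_steps
    results[n_steps] = (I2.mean(), I2.var())
```

As n_steps grows the variance collapses toward zero, which is the numerical counterpart of I_2(t) becoming the deterministic value t.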

There is a generalization of the previous results with respect to higher order moments. We consider here moments of a stochastic integral with a deterministic integrand

J_k(t) = \int_0^t f_k(s)\,dB_s;   k \in \mathbb{N}.   (1.90)

These integrals are a special case of the ones in (1.82) and we know from (1.85) that the mean value of (1.90) is zero. The covariance of (1.90) is given by (see (1.86))

\langle J_k(t)\,J_m(t) \rangle = \int_0^t f_k(s) f_m(s)\,ds.

But we can obtain formally the same result if we put

\langle dB_s\,dB_u \rangle = \delta(s - u)\,ds\,du.   (1.91)

A formal justification of (1.91) is given in Chapter 2 in connection with formula (2.41). Here we show that (1.91) leads to a result that


is identical to the consequences of (1.86)

\langle J_k(t)\,J_m(t) \rangle = \int_0^t f_k(s) \int_0^t f_m(u) \langle dB_s\,dB_u \rangle = \int_0^t f_k(s) \int_0^t f_m(u)\,\delta(s - u)\,ds\,du = \int_0^t f_k(s) f_m(s)\,ds.

We also know that B_t and hence dB_t are Gaussian and Markovian. This means that all odd moments of the integral (1.90) must vanish

\langle J_k(t)\,J_m(t)\,J_r(t) \rangle = \cdots = 0.   (1.92a)

To calculate higher order moments we use the properties of the multivariate GD and we put for the 4th order moment of the differentials

\langle dB_p\,dB_q\,dB_u\,dB_v \rangle = \langle dB_p\,dB_q \rangle \langle dB_u\,dB_v \rangle + \langle dB_p\,dB_u \rangle \langle dB_q\,dB_v \rangle + \langle dB_p\,dB_v \rangle \langle dB_q\,dB_u \rangle
    = [\delta(p - q)\delta(u - v) + \delta(p - u)\delta(q - v) + \delta(p - v)\delta(q - u)]\,dp\,dq\,du\,dv.

Note that the 4th order moment of the differentials of the WP has a form similar to an isotropic 4th order tensor. Hence, we obtain

\langle J_j(t)\,J_m(t)\,J_r(t)\,J_s(t) \rangle = \int_0^t f_j(\alpha) f_m(\alpha)\,d\alpha \int_0^t f_r(\beta) f_s(\beta)\,d\beta
    + \int_0^t f_j(\alpha) f_r(\alpha)\,d\alpha \int_0^t f_m(\beta) f_s(\beta)\,d\beta
    + \int_0^t f_j(\alpha) f_s(\alpha)\,d\alpha \int_0^t f_m(\beta) f_r(\beta)\,d\beta.

This leads in a special case to

\langle J_k^4(t) \rangle = 3 \langle J_k^2(t) \rangle^2.   (1.92b)

Again, this procedure can be carried out also for higher order moments and we obtain

\langle J_k^{2\mu+1}(t) \rangle = 0;   \langle J_k^{2\mu}(t) \rangle = 1 \cdot 3 \cdots (2\mu - 1)\,\langle J_k^2(t) \rangle^{\mu};   \mu \in \mathbb{N}.   (1.92c)
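The moment relations (1.92b,c) can be checked by Monte Carlo for a concrete deterministic integrand; f(s) = s is our illustrative choice, for which \int_0^1 f^2(s)\,ds = 1/3:

```python
import numpy as np

rng = np.random.default_rng(4)
t, n_steps, n_paths = 1.0, 200, 50_000
dt = t / n_steps
s = np.arange(n_steps) * dt        # left endpoints of the decomposition
f = s                              # deterministic integrand f(s) = s

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
J = dB @ f                         # J(t) = sum_k f(s_k)(B_{k+1} - B_k), one value per path

var_discrete = np.sum(f**2) * dt   # discrete version of int_0^t f^2(s) ds = 1/3
m2, m3, m4 = np.mean(J**2), np.mean(J**3), np.mean(J**4)
```

The second moment matches the variance predicted by (1.86), the third moment vanishes per (1.92a), and the fourth moment obeys (1.92b).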

Equation (1.92) signifies that the stochastic Ito integral (1.90) with the deterministic integrand f_k(s) is N[0, \int_0^t f_k^2(s)\,ds] distributed. However, one can also show that the Ito integral with the non-anticipative


integrand

K(t) = \int_0^t g(s, B_s)\,dB_s   (1.93a)

is, in analogy to the stochastic integral with the deterministic integrand,

N[0, r(t)];   r(t) = \int_0^t \langle g^2(u, B_u) \rangle\,du,   (1.93b)

distributed (see Arnold [1.2]). The variable r(t) is referred to as the intrinsic time of the stochastic integral (1.93a). We use this variable to show with Kolmogorov's Theorem (1.20) that (1.93a) possesses continuous SF. The Ito integral

x_k = \int_0^{t_k} g(u, B_u)\,dB_u;   t_1 = t > t_2 = s,

with

\langle x_k \rangle = 0;   \langle x_k^2 \rangle = r(t_k) = r_k;   k = 1, 2;   \langle x_1 x_2 \rangle = r_2,

has according to (1.35a) the joint PD

p_2(x_1, x_2) = [(2\pi)^2 r_2 (r_1 - r_2)]^{-1/2} \exp\left[ -\frac{x_2^2}{2 r_2} - \frac{(x_1 - x_2)^2}{2(r_1 - r_2)} \right].

Yet, the latter line is identical with the bivariate PD of the Wiener process (1.60) if we replace in the latter equation the t_k by r_k. Hence, we obtain from Kolmogorov's criterion \langle [x_1(r_1) - x_2(r_2)]^2 \rangle = |r_1 - r_2| and this guarantees the continuity of the SF of the Ito integral (1.93a). A further important feature of Ito integrals is their martingale property. We verify this now for the case of the integral (1.89). To achieve this, we generalize the martingale formula (1.64) to the case of arbitrary functions of the Brownian motion

\langle f(y_2, s) \mid f(y_1, t) \rangle = \int f(y_2, s)\,p_{1|1}(y_2, s \mid y_1, t)\,dy_2 = f(y_1, t);   y_k = B_{t_k};   \forall s > t,   (1.94)


where p_{1|1} is given by (1.53). To verify now the martingale property of the integral (1.89) we specify (1.94) to

\langle I(y_2, s) \mid I(y_1, t) \rangle = \frac{1}{2\sqrt{\pi \beta}} \int (y_2^2 - s) \exp[-(y_2 - y_1)^2 / \beta]\,dy_2;   \beta = 2(s - t).

The application of the standard substitution (see EX 1.1) yields

\langle I(y_2, s) \mid I(y_1, t) \rangle = \frac{1}{2\sqrt{\pi}} \int (y_1^2 - s + 2 y_1 \sqrt{\beta}\,z + \beta z^2) \exp(-z^2)\,dz = \frac{1}{2}(y_1^2 - s + \beta/2) = \frac{1}{2}(y_1^2 - t) = I(y_1, t).   (1.95)

This concludes the proof that the Ito integral (1.89) is a martingale. The general proof that all Ito integrals are martingales is given by Øksendal [1.8]. However, we will encounter the martingale property for a particular class of Ito integrals in the next section.
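The conditional-expectation computation (1.95) can be checked by sampling B_s given B_t = y_1 from the transition density (1.53) and averaging I(y_2, s) = (y_2^2 - s)/2; the same samples show that the classical value y_2^2/2 overshoots its present value, anticipating that the Stratonovich integral is not a martingale (the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
t, s, y1, n = 1.0, 1.5, 0.7, 1_000_000   # condition on B_t = y1; look ahead to s > t

# transition density (1.53): B_s | (B_t = y1) is N(y1, s - t)
y2 = y1 + np.sqrt(s - t) * rng.standard_normal(n)

ito_now = 0.5 * (y1**2 - t)              # I(y1, t) = (B_t^2 - t)/2
ito_cond = np.mean(0.5 * (y2**2 - s))    # estimate of <I(y2, s) | I(y1, t)>
strat_cond = np.mean(0.5 * y2**2)        # classical value y^2/2, cf. (1.96)
```

The Ito conditional mean reproduces the present value I(y_1, t), while the classical value drifts upward by (s - t)/2.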

To conclude this example we add here also the Stratonovich version of the integral (1.89). This yields (the subscript S indicates a Stratonovich integral)

I_S(t) = \int_0^t B_s \circ dB_s = \frac{1}{2} \sum_k (B_{k+1} + B_k)(B_{k+1} - B_k) = \frac{1}{2} \sum_k (B_{k+1}^2 - B_k^2) = \frac{1}{2} B_t^2.   (1.96)

The result (1.96) is the "classical" value of the integral whereas the Ito integral gives a non-classical result. Note also the significant differences between the Ito and Stratonovich integrals. Even the moments do not coincide, since we infer from (1.96)

\langle I_S(t) \rangle = \frac{t}{2} \quad \text{and} \quad \langle I_S(t)\,I_S(u) \rangle = \frac{1}{4}\left[ t u + 2 (t \wedge u)^2 \right].

It is now easy to show that the Stratonovich integral I_S is not a martingale. We obtain this result if we drop the term s in the second line of (1.95)

\langle I_S(y_2, s) \mid I_S(y_1, t) \rangle = \frac{1}{2}(y_1^2 + \beta/2) \ne I_S(y_1, t). X

Hence, we may summarize the properties of the Ito and Stratonovich integrals. The Stratonovich concept uses all the transformation rules of classical integration theory and thus leads in many applications to an easy way of performing the integration. Deviating from the Ito integral, the Stratonovich integral does not, however, possess the effective rules for calculating averages such as (1.85) to (1.87), and it does not have the martingale property. In the following we will consider both integration concepts and their application in the solution of SDE.

We have calculated so far only one stochastic integral and we continue in the next section with helpful rules to perform the stochastic integration.

1.9. The Ito Formula

We begin with the differential of a function \Phi(B_t, t). Its Ito differential takes the form

d\Phi(B_t, t) = \Phi_t\,dt + \Phi_{B_t}\,dB_t + \frac{1}{2} \Phi_{B_t B_t} (dB_t)^2.   (1.97.1)

Formula (1.97.1) contains the non-classical term that is proportional to the second derivative WRT B_t. We must supplement (1.97.1) by a further non-classical relation

(dB_t)^2 = dt.   (1.97.2)

Thus, we infer from (1.97.1,2) the final form of this differential

d\Phi(B_t, t) = \left( \Phi_t + \frac{1}{2} \Phi_{B_t B_t} \right) dt + \Phi_{B_t}\,dB_t.   (1.98)

Next we derive the Ito differential of the function Y = g(x, t) where x is the solution of the SDE

dx = a(x,t)dt + b(x,t)dBt. (1.99.1)

In analogy to (1.97.1) we include a non-classical term and put

dY = g_t\,dt + g_x\,dx + \frac{1}{2} g_{xx} (dx)^2.

We substitute dx from (1.99.1) into the last line and apply the non-classical formula

(dx)^2 = (a\,dt + b\,dB_t)^2 = b^2\,dt;   (dt)^2 = dt\,dB_t = 0;   (dB_t)^2 = dt,   (1.99.2)


and this yields

dY = \left( g_t + a g_x + \frac{1}{2} b^2 g_{xx} \right) dt + b g_x\,dB_t.   (1.99.3)

The latter equation is called the Ito formula for the total differential of the function Y = g(x, t), given the SDE (1.99.1). (1.99.3) contains the non-classical term b^2 g_{xx}/2 and it differs thus from the classical (or Stratonovich) total differential

dY_c = (g_t + a g_x)\,dt + b g_x\,dB_t.   (1.100)

Note that both the Ito and the Stratonovich differentials coincide if g(x, t) is a first order polynomial of the variable x.

We postpone a sketch of the proof of (1.99) for a moment and give an example of the application of this formula. We use (1.99.1) in the form

dx = dB_t, \quad \text{or} \quad x = B_t \quad \text{with } a = 0,\ b = 1,   (1.101a)

and we consider the function

Y = g(x) = x^2/2;   g_t = 0;   g_x = x;   g_{xx} = 1.   (1.101b)

Thus we obtain from (1.99.3) and (1.101b)

dY = d(x^2/2) = dt/2 + B_t\,dB_t,

and the integration of this total differential yields

\int_0^t d(B_s^2/2) = B_t^2/2 = t/2 + \int_0^t B_s\,dB_s,

and the last line reproduces (1.89). X

We give now a sketch of the proof of the Ito formula (1.99) and we follow in part considerations of Schuss [1.9]. It is instructive to perform this in detail and we do it in four consecutive steps labeled S1 to S4.


S1. We begin with the consideration of the stochastic function x(t) given by

x(v) - x(u) = \int_u^v a(x(s), s)\,ds + \int_u^v b(x(s), s)\,dB_s,   (1.102)

where a and b are two differentiable functions. Thus, we obtain the differential of x(t) if we put in (1.102) v = u + dt and let dt \to 0

dx(u) = a(x(u), u)\,du + b(x(u), u)\,dB_u.   (1.103)

Before we pass to the next step we consider two important examples

Example 1. (integration by parts) Here we consider a deterministic function f and a stochastic function Y and we put

Y(B_t, t) = g(B_t, t) = f(t) B_t.   (1.104a)

The total differential is in both (Ito and Stratonovich) cases (see (1.98) with \Phi_{B_t B_t} = 0) given by the exact formula

dY = d[f(t) B_t] = f(t)\,dB_t + f'(t) B_t\,dt.   (1.104b)

The integration of this differential yields

f(t) B_t = \int_0^t f'(s) B_s\,ds + \int_0^t f(s)\,dB_s.   (1.105a)

Subtracting the last line for t = u from the same relation for t = v yields

f(v) B_v - f(u) B_u = \int_u^v f'(s) B_s\,ds + \int_u^v f(s)\,dB_s.   (1.105b)

Example 2. (Martingale property) We consider a particular class of Ito integrals

I(t) = \int_0^t f(u)\,dB_u,   (1.106)


and show that I(t) is a martingale. First we realize that the integral I(t) is a particular case of the class (1.93a) with g(u, B_u) = f(u). Hence we know that the variable (1.106) is normally distributed and possesses the intrinsic time given by (1.93b). Its transition probability p_{1|1} is defined by (1.53) with the t_j replaced by r(t_j); y_j = I(t_j); j = 1, 2. This concludes the proof that the integral (1.106) obeys a martingale property like (1.27) or (1.64). X

S2. Here we consider the product of two stochastic functions subjected to two SDEs with constant coefficients

dx_k(t) = a_k\,dt + b_k\,dB_t;   a_k, b_k = \text{const};   k = 1, 2,   (1.107)

with the solutions

x_k(t) = a_k t + b_k B_t;   x_k(0) = 0.   (1.108)

The task to evaluate d(x_1 x_2) is outlined in EX 1.9 and we obtain with the aid of (1.89)

d(x_1 x_2) = x_2\,dx_1 + x_1\,dx_2 + b_1 b_2\,dt.   (1.109)

The term proportional to b_1 b_2 in (1.109) is non-classical and it is a mere consequence of the non-classical term in (1.89).

The relation (1.109) was derived for constant coefficients in (1.107). One may derive (1.109) under the assumption of step functions for the coefficients a and b, and with these one can approximate differentiable functions (see Schuss [1.9]).

We consider now two examples

Example 1. We put x_1 = B_t; x_2 = B_t^2. Thus, we obtain with an application of (1.101b) and (1.109)

dB_t^3 = B_t\,dB_t^2 + B_t^2\,dB_t + 2 B_t\,dt = 3(B_t\,dt + B_t^2\,dB_t).

The use of the induction rule yields the generalization

dB_t^k = k B_t^{k-1}\,dB_t + \frac{k(k-1)}{2} B_t^{k-2}\,dt.   (1.110)
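Relation (1.110) can be checked pathwise for k = 3: summing 3 B_k^2 \Delta B_k + 3 B_k \Delta t along one discretized path should reproduce B_t^3 up to a discretization error that vanishes with \Delta t. A sketch:

```python
import numpy as np

rng = np.random.default_rng(6)
T, n = 1.0, 200_000
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), size=n)
B = np.concatenate(([0.0], np.cumsum(dB)))   # discretized path, B_0 = 0

# right-hand side of (1.110) for k = 3, accumulated along the path
rhs = np.sum(3.0 * B[:-1]**2 * dB + 3.0 * B[:-1] * dt)
lhs = B[-1]**3
```

A classical (Stratonovich-style) integration without the 3 B_t dt term would miss the non-classical drift and fail this comparison.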


Example 2. Here we consider polynomials of the Brownian motion

P_n(B_t) = c_0 + c_1 B_t + \cdots + c_n B_t^n;   c_k = \text{const}.   (1.111)

The application of (1.110) to (1.111) leads to

dP_n(B_t) = P_n'(B_t)\,dB_t + \frac{1}{2} P_n''(B_t)\,dt;   {}' = d/dB_t.   (1.112)

The relation (1.112) is also valid for all functions that can be expanded in the form of polynomials. X

S3. Here we consider the product

\Phi(B_t, t) = \varphi(B_t)\,g(t),   (1.113)

where g is a deterministic function. The use of (1.109) yields

d\Phi(B_t, t) = g(t)\,d\varphi(B_t) + \varphi(B_t)\,g'(t)\,dt
    = \left[ \varphi'\,dB_t + \frac{1}{2} \varphi''\,dt \right] g + \varphi\,g'(t)\,dt
    = \left[ \varphi g' + \frac{1}{2} g \varphi'' \right] dt + g \varphi'\,dB_t.   (1.114)

But we also have

\left( \frac{\partial}{\partial t} + \frac{1}{2} \frac{\partial^2}{\partial B_t^2} \right) \Phi = g' \varphi + \frac{1}{2} g \varphi''.   (1.115)

Thus, we obtain

d\Phi = \left( \frac{\partial}{\partial t} + \frac{1}{2} \frac{\partial^2}{\partial B_t^2} \right) \Phi\,dt + \frac{\partial \Phi}{\partial B_t}\,dB_t.   (1.116)

Equation (1.116) applies, in the first place, only to the function (1.113). However, the use of the expansion

\Phi(B_t, t) = \sum_{k=1}^{\infty} \varphi_k(B_t)\,g_k(t),   (1.117)

shows that (1.116) is valid for arbitrary functions and this proves (1.98).


S4. In this last step we do not apply the separation (1.113) or (1.117) but we use a differentiable function of the variables (x, t), where x satisfies a SDE of the type (1.107)

\Phi(B_t, t) = g(x, t) = g(at + b B_t, t);   dx = a\,dt + b\,dB_t;   a, b = \text{const};
\Phi_t = a g_x + g_t;   \Phi_{B_t} = b g_x;   \Phi_{B_t B_t} = b^2 g_{xx}.   (1.118)

Thus we obtain with (1.116)

dY = \left( g_t + a g_x + \frac{1}{2} b^2 g_{xx} \right) dt + b g_x\,dB_t.   (1.119)

The relation (1.119) represents the Ito formula (1.99.3) (for constant coefficients a and b). As before, we can generalize the proof and (1.119) is valid for arbitrary coefficients a(x, t) and b(x, t).

We generalize now the Ito formula to the case of a multivariate process. First we consider K functions of the type

Y_k = Y_k(B_t^1, \ldots, B_t^M, t);   k = 1, 2, \ldots, K,

where B_t^1, \ldots, B_t^M are M independent Brownian motions. We take advantage of the summation convention and obtain the generalization of (1.97.1)

dY_k(B_t^1, \ldots, B_t^M, t) = \frac{\partial Y_k}{\partial t}\,dt + \frac{\partial Y_k}{\partial B_t^r}\,dB_t^r + \frac{1}{2} \frac{\partial^2 Y_k}{\partial B_t^r \partial B_t^s}\,dB_t^r\,dB_t^s;   (1.120)
k = 1, \ldots, K;   r, s = 1, \ldots, M.

We generalize (1.97.2) and put

dB_t^r\,dB_t^s = \delta_{rs}\,dt,   (1.121)

and we obtain (see (1.98))

dY_k(B_t^1, \ldots, B_t^M, t) = \left( \frac{\partial Y_k}{\partial t} + \frac{1}{2} \frac{\partial^2 Y_k}{\partial B_t^r \partial B_t^r} \right) dt + \frac{\partial Y_k}{\partial B_t^r}\,dB_t^r.   (1.122)

Now we consider a set of n SDEs

dX_k = a_k(X_1, \ldots, X_n, t)\,dt + b_{kr}(X_1, \ldots, X_n, t)\,dB_t^r;   k = 1, 2, \ldots, n;   r = 1, 2, \ldots, R.   (1.123)


We wish to calculate the differential of the function

Z_k = Z_k(X_1, \ldots, X_n, t);   k = 1, \ldots, K.   (1.124)

The differential reads

dZ_k = \frac{\partial Z_k}{\partial t}\,dt + \frac{\partial Z_k}{\partial X_m}\,dX_m + \frac{1}{2} \frac{\partial^2 Z_k}{\partial X_m \partial X_u}\,dX_m\,dX_u
    = \frac{\partial Z_k}{\partial t}\,dt + \frac{\partial Z_k}{\partial X_m} (a_m\,dt + b_{mr}\,dB_t^r)
    + \frac{1}{2} \frac{\partial^2 Z_k}{\partial X_m \partial X_u} (a_m\,dt + b_{mr}\,dB_t^r)(a_u\,dt + b_{us}\,dB_t^s);   (1.125)
m, u = 1, 2, \ldots, n;   r, s = 1, 2, \ldots, R.

The n-dimensional generalization of the rule (1.99.2) is given by

dB_t^r\,dB_t^u = \delta_{ru}\,dt;   (dt)^2 = dB_t^r\,dt = 0.   (1.126)

Thus, we obtain the differential of the vector valued function (1.124)

dZ_k = \left( \frac{\partial Z_k}{\partial t} + a_m \frac{\partial Z_k}{\partial X_m} + \frac{1}{2} b_{mr} b_{ur} \frac{\partial^2 Z_k}{\partial X_m \partial X_u} \right) dt + b_{mr} \frac{\partial Z_k}{\partial X_m}\,dB_t^r.   (1.127)

Now we conclude this section with two examples.

Example 1. A stochastic process is given by

Y_1 = B_t^1 + B_t^2 + B_t^3;   Y_2 = (B_t^2)^2 - B_t^1 B_t^3.

We obtain for the SDEs in the form (1.120) corresponding to the last line

dY_1 = dB_t^1 + dB_t^2 + dB_t^3;
dY_2 = dt + 2 B_t^2\,dB_t^2 - (B_t^3\,dB_t^1 + B_t^1\,dB_t^3). X


Example 2. Here we study a single stochastic process under the influence of two independent Brownian motions

dx = \alpha(x, t)\,dt + \beta(x, t)\,dB_t^1 + \gamma(x, t)\,dB_t^2.   (1.128)

The differential of the function Y = g(x, t) has the form

dY = \left[ g_t + \alpha g_x + \frac{1}{2}(\beta^2 + \gamma^2) g_{xx} \right] dt + g_x (\beta\,dB_t^1 + \gamma\,dB_t^2).

We consider now the special case

g = \ln x;   \alpha = r x;   \beta = u x;   \gamma = \sigma x;   r, u, \sigma = \text{const},

and we obtain

d(\ln x) = [r - (u^2 + \sigma^2)/2]\,dt + (u\,dB_t^1 + \sigma\,dB_t^2).   (1.129)

We will use (1.129) in Section 2.1 of the next chapter. X
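Since the coefficients in (1.129) are constants, \ln x(t) is Gaussian with mean \ln x_0 + [r - (u^2 + \sigma^2)/2]t and variance (u^2 + \sigma^2)t. A direct Euler simulation of (1.129) (the parameter values are arbitrary illustrative choices) reproduces both:

```python
import numpy as np

rng = np.random.default_rng(7)
r, u, sigma = 0.05, 0.3, 0.2               # illustrative constants
x0, T, n_steps, n_paths = 1.0, 1.0, 200, 50_000
dt = T / n_steps

lnx = np.full(n_paths, np.log(x0))
for _ in range(n_steps):
    dB1 = rng.normal(0.0, np.sqrt(dt), n_paths)
    dB2 = rng.normal(0.0, np.sqrt(dt), n_paths)
    # Euler step for d(ln x), Eq. (1.129)
    lnx += (r - (u**2 + sigma**2) / 2) * dt + u * dB1 + sigma * dB2

mean_exact = np.log(x0) + (r - (u**2 + sigma**2) / 2) * T
var_exact = (u**2 + sigma**2) * T
```

The non-classical drift correction -(u^2 + \sigma^2)/2 is exactly what distinguishes the Ito result from a naive classical calculation.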

We introduced in this chapter some elements of probability theory and added the basic ideas about SDE. Readers who wish to get more deeply involved in the abstract theory of probability, and in particular with measure theory, may consider the following books: Chung & AitSahlia [1.10], Ross [1.11], Malliavin [1.12], Pitman [1.13] and Shiryaev [1.14].

Appendix: Poisson Processes

In many applications there appears a random set of countable points driven by some stochastic system. Typical examples are arrival times of customers (at the desk of an office, at the gate of an airport, etc.), the birth process of an organism, or the number of competing building projects for a state budget. The randomness in such phenomena is conveniently described by Poisson distributed variables.

First we verify that the Poisson distribution is the limit of the Bernoulli distribution. We substitute for the argument p in the Bernoulli distribution of Section 1.1 the value p = a/n and this


yields

b(k, n, a/n) = \binom{n}{k} \left( \frac{a}{n} \right)^k \left( 1 - \frac{a}{n} \right)^{n-k};
b(0, n, a/n) = \left( 1 - \frac{a}{n} \right)^n \to \exp(-a) \quad \text{for } n \to \infty.

Now we put

\frac{b(k+1, n, a/n)}{b(k, n, a/n)} = \frac{n - k}{k + 1} \frac{a}{n} \left( 1 - \frac{a}{n} \right)^{-1} \to \frac{a}{k + 1},   (A.1)

and this yields

b(1, n, a/n) \to a \exp(-a);   b(2, n, a/n) \to \frac{a^2}{2} \exp(-a); \ldots;
b(k, n, a/n) \to \frac{a^k}{k!} \exp(-a) = \pi_k(a).

Definition. (Homogeneous Poisson process (HPP)) A random point process N(t), t \ge 0 on the real axis is a HPP with a constant intensity \lambda if it satisfies the three conditions

(a) N(0) = 0.
(b) The random increments N(t_k) - N(t_{k-1}); k = 1, 2, \ldots are for any sequence of times 0 \le t_0 < t_1 < \cdots < t_n < \cdots mutually independent.
(c) The random increments defined in condition (b) are Poisson distributed of the form

\Pr([N(t_{r+1}) - N(t_r)] = k) = \frac{(\lambda \tau_r)^k}{k!} \exp(-\lambda \tau_r);   \tau_r = t_{r+1} - t_r;   k = 0, 1, \ldots;   r = 1, 2, \ldots.   (A.2)

To analyze the sample paths we consider the increment \Delta N(t) = N(t + \Delta t) - N(t). Its probability has, for small values of \Delta t, the form

\Pr(\Delta N(t) = k) = \frac{(\lambda \Delta t)^k}{k!} \exp(-\lambda \Delta t) =
\begin{cases}
1 - \lambda \Delta t & \text{for } k = 0, \\
\lambda \Delta t & \text{for } k = 1, \\
O(\Delta t^2) & \text{for } k \ge 2.
\end{cases}   (A.3)

Equation (A.3) means that for \Delta t \to 0 the value of N(t + \Delta t) is most likely the one of N(t) (\Pr([N(t + \Delta t) - N(t)] = 0) \approx 1).


However, the part of (A.3) with \Pr([N(t + \Delta t) - N(t)] = 1) \approx \lambda \Delta t indicates that there is a small chance for a jump of height unity. The probability of jumps of greater heights k = 2, 3, \ldots corresponding to the third part of (A.3) is subdominantly small and such jumps do not appear.

We calculate the moments of the HPP in two alternative ways. (i) We use (1.5) with (A.2) to obtain

\langle x^m \rangle = \int_{-\infty}^{\infty} p(x)\,x^m\,dx = \sum_{k=0}^{\infty} k^m \Pr(x = k) = \exp(-\alpha) \sum_{k=0}^{\infty} k^m \alpha^k / k!;   \alpha = \lambda t,   (A.4)

or we apply (ii) the concept of the generating function defined by

g(z) = \sum_{k=0}^{\infty} z^k \Pr(x = k), \quad \text{with} \quad g'(1) = \langle x \rangle;   g''(1) = \langle x^2 \rangle - \langle x \rangle; \ldots   (A.5)

This leads in the case of an HPP to

g(z) = \sum_{k=0}^{\infty} z^k \alpha^k \exp(-\alpha) / k! = \exp(-\alpha) \sum_{k=0}^{\infty} (z \alpha)^k / k! = \exp[\alpha(z - 1)].   (A.6)

In either case we obtain

\langle N(t) \rangle = \lambda t;   \langle N^2(t) \rangle = (\lambda t)^2 + \lambda t.   (A.7)
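The moments (A.7) and the generating function (A.6) can be checked directly with a library Poisson sampler (the values of \lambda and t are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(8)
lam, t, n = 2.0, 3.0, 1_000_000

N = rng.poisson(lam * t, size=n)          # samples of N(t) for a HPP of intensity lam
m1 = N.mean()                             # estimate of <N(t)> = lam * t
m2 = np.mean(N.astype(float)**2)          # estimate of <N^2(t)> = (lam t)^2 + lam t
gz = np.mean(0.5**N)                      # generating function g(z) at z = 1/2, cf. (A.6)
```

The empirical g(1/2) matches \exp[\lambda t (1/2 - 1)], confirming (A.6).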

We calculate now the PD of the sum x_1 + x_2 of two independent HPP's. By definition this yields

\Pr([x_1 + x_2] = k) = \Pr\left( \bigcup_{j=0}^{k} [x_1 = j,\ x_2 = k - j] \right)
    = \sum_{j=0}^{k} \Pr(x_1 = j,\ x_2 = k - j)
    = \sum_{j=0}^{k} \exp(-\theta_1) \frac{\theta_1^j}{j!} \exp(-\theta_2) \frac{\theta_2^{k-j}}{(k-j)!}
    = \exp[-(\theta_1 + \theta_2)] \sum_{j=0}^{k} \theta_1^j \theta_2^{k-j} \binom{k}{j} \Big/ k!
    = \exp[-(\theta_1 + \theta_2)] (\theta_1 + \theta_2)^k / k!.   (A.8)

If the two variables are IID (\theta = \theta_1 = \theta_2), (A.8) reduces to

\Pr([x_1 + x_2] = k) = \exp(-2\theta)(2\theta)^k / k!.   (A.9)

HPP's play important roles in Markov processes (see Bremaud [1.16]). In many applications these Markov chains are iterations driven by "white noise" modeled by HPP's. Such iterations arise in the study of the stability of continuous periodic phenomena, in biology and economics, etc. We consider iterations of the form

x(t + s) = F(x(s), Z(t + s));   s, t \in \mathbb{N}_0,   (A.10)

where t, s are discrete variables and x(t) is a discrete random variable driven by the white noise Z(t + s). An important particular case is Z(t + s) := N(t + s) with a PD

\Pr(N(t + s) = k) = \exp(-u)\,u^k / k!;   u = \theta(t + s).

The transition probability is the matrix governing the transition from state i to state k.

Examples

(i) Random walk. This is an iteration of a discrete random variable x(t)

x(t) = x(t - 1) + N(t);   x(0) = x_0 \in \mathbb{N}.   (A.11)

N(t) is a HPP with \Pr([N(t) = k]) = \exp(-\lambda t)(\lambda t)^k / k!. Hence, we obtain the transition probability

p_{ij} = \Pr(x(t) = j \mid x(t - 1) = i) = \Pr([i + N(t)] = j) = \Pr(N(t) = j - i).


(ii) Flip-flop processes. The iteration takes here the form

x(t) = (-1)^{N(t)}.   (A.12)

The transition matrix takes the form

p_{-1,1} = \Pr(x(t + s) = 1 \mid x(s) = -1) = \Pr(N(t) = 2k + 1) = \alpha;
p_{1,1} = \Pr(x(t + s) = 1 \mid x(s) = 1) = \Pr(N(t) = 2k) = \beta,

with

\alpha = \sum_{k=0}^{\infty} \exp(-\lambda t)(\lambda t)^{2k+1} / (2k + 1)! = \exp(-\lambda t) \sinh(\lambda t);
\beta = \sum_{k=0}^{\infty} \exp(-\lambda t)(\lambda t)^{2k} / (2k)! = \exp(-\lambda t) \cosh(\lambda t). X
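The flip-flop transition probabilities satisfy \alpha + \beta = e^{-\lambda t}(\sinh \lambda t + \cosh \lambda t) = 1, and \alpha can be checked against the parity of Poisson samples (parameter values arbitrary):

```python
import numpy as np

rng = np.random.default_rng(9)
lam, t, n = 1.3, 0.8, 1_000_000

alpha = np.exp(-lam * t) * np.sinh(lam * t)   # Pr(N(t) odd): state flips
beta = np.exp(-lam * t) * np.cosh(lam * t)    # Pr(N(t) even): state persists

N = rng.poisson(lam * t, size=n)
p_odd = np.mean(N % 2 == 1)                   # empirical flip probability
```

The empirical odd-count frequency agrees with the closed form e^{-\lambda t}\sinh(\lambda t).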

Another important application of the HPP is given by a 1D approach to turbulence elaborated by Kerstein [1.17] and [1.18]. This model is based on turbulent advection by a random map: a triplet map is applied to a shear flow velocity profile, and an individual event is represented by a mapping that results in a new velocity profile. As a statistical hypothesis the author assumes that the temporal rate of the events is governed by a Poisson process, and the parameters of the map can be sampled from a given PD. Although this model was applied to 1D turbulence, its results go beyond this limit and the model has a remarkable power of predicting experimental data.

Exercises

EX 1.1. Calculate the mean value M_n(s, t) = \langle (B_t - B_s)^n \rangle, n \in \mathbb{N}.

Hint: Use (1.60) and the standard substitution y_2 = y_1 + z\sqrt{2(t_2 - t_1)}, where z is a new variable. Show that this yields

M_n = \frac{[2(t_2 - t_1)]^{n/2}}{\pi} \int_{-\infty}^{\infty} \exp(-v^2)\,dv \int_{-\infty}^{\infty} \exp(-z^2)\,z^n\,dz.

The gamma function is defined by (see Ryshik & Gradstein [1.15])

\int_{-\infty}^{\infty} \exp(-z^2)\,z^n\,dz =
\begin{cases}
\Gamma((n + 1)/2) & \forall n = 2k; \\
0 & \forall n = 2k + 1;   k \in \mathbb{N},
\end{cases}
\Gamma(1/2) = \sqrt{\pi};   \Gamma(n + 1) = n \Gamma(n).


Verify the result

M_{2n} = \pi^{-1/2} [2(t_2 - t_1)]^n\,\Gamma((2n + 1)/2).

EX 1.2. We consider a 1D random variable X with the mean \mu and the variance \sigma^2. Show that the latter can be written in the form (f_X(x) is the PD of X; \varepsilon > 0)

\sigma^2 \ge \left( \int_{\mu + \varepsilon}^{\infty} + \int_{-\infty}^{\mu - \varepsilon} \right) f_X(x)(x - \mu)^2\,dx.

For x < \mu - \varepsilon and x > \mu + \varepsilon we have (x - \mu)^2 > \varepsilon^2; this yields

\sigma^2 \ge \varepsilon^2 \left( 1 - \int_{\mu - \varepsilon}^{\mu + \varepsilon} f_X(x)\,dx \right) = \varepsilon^2 \Pr(|X - \mu| \ge \varepsilon),

and this gives the Chebyshev inequality in its final form

\Pr(|X - \mu| \ge \varepsilon) \le \sigma^2 / \varepsilon^2.

The inequality governing martingales (1.28) is obtained with considerations similar to the derivation of the Chebyshev inequality.

EX 1.3.

(a) Show that we can factorize the bivariate GD (1.35a) with zero mean and equal variances (\langle x \rangle = \langle y \rangle = 0; \sigma^2 = a = b) in the form

p(x, y) = \gamma^{-1/2}\,p(x)\,p((y - r x)/\sqrt{\gamma});   \gamma = 1 - r^2,

where p(x) is the univariate GD (1.29).

(b) Calculate the conditional distribution (see (1.17)) of the bivariate GD (1.35a).

Hint: (c is the covariance matrix; D = \det c)

p_{1|1}(x \mid y) = \sqrt{c_{yy}/(2\pi D)} \exp[-c_{yy}(x - y c_{xy}/c_{yy})^2 / (2D)].

Verify that the latter line corresponds to a N[y c_{xy}/c_{yy},\ c_{xx} - c_{xy}^2/c_{yy}] distribution.

EX 1.4. Prove that (1.53) is a solution of the Chapman-Kolmogorov equation (1.52).


Stochastic Variables and Stochastic Processes 51

Hint: The integrand in (1.52) is given by

T = p_{1|1}(y_2, t_2 | y_1, t_1) p_{1|1}(y_3, t_3 | y_2, t_2).

Use the substitution

u = t_2 - t_1 > 0, v = t_3 - t_2 > 0; t_3 - t_1 = v + u > 0,

introduce (1.53) into (1.52) and put

T = (4π² u v)^{-1/2} exp(-A);

A = (y_3 - y_2)²/(2v) + (y_2 - y_1)²/(2u) = a_2 y_2² + a_1 y_2 + a_0,

with a_k = a_k(y_1, y_3), k = 0, 1, 2. Use the standard substitution (see EX 1.1) to obtain

∫ T dy_2 = (4π² u v)^{-1/2} exp[-F(y_3, y_1)] ∫ exp(-K) dy_2;

F = (4 a_0 a_2 - a_1²)/(4 a_2); K = a_2 [y_2 + a_1/(2 a_2)]²,

and compare the result of the integration with the right hand side of (1.52).

EX 1.5. Verify that the solution of (1.54) is given by (1.55), and verify also that it satisfies its initial condition.

Hint: To verify the initial condition use the integral

∫_{-∞}^{∞} exp[-y²/(2t)] H(y) dy,

where H(y) is a continuous function. Use the standard substitution in the form y = √(2t) z.

To verify the solution (1.55) use the same substitution as in EX 1.4.

EX 1.6. Calculate the average ⟨y_1^n(t_1) y_2^m(t_2)⟩; y_k = B_{t_k}, k = 1, 2; n, m ∈ N, with the use of the Markovian bivariate PD (1.60).

Hint: Use the standard substitution of the type given in EX 1.1.

EX 1.7. Verify that the variable B_t defined in (1.65) has the autocorrelation ⟨B_t B_s⟩ = t ∧ s. To perform this task we calculate for a


fixed value of a > 0

⟨B_t B_s⟩ = ⟨B_{t+a} B_{s+a}⟩ - ⟨B_{t+a} B_a⟩ - ⟨B_{s+a} B_a⟩ + ⟨B_a²⟩
= (s ∧ t + a) - a - a + a = s ∧ t.

EX 1.8. Prove that the scaled and translated WS's defined in (1.74) are WS's.

Hint: To cover the scaled WS's, put

H_{u,v} = (ab)^{-1} x_1(a²u) x_2(b²v).

Because of ⟨x_1(α) x_2(β)⟩ = 0 we have ⟨H_{u,v}⟩ = 0. Its autocorrelation is given by

⟨H_{u,v} H_{p,q}⟩ = (ab)^{-2} ⟨x_1(a²u) x_1(a²p)⟩ ⟨x_2(b²v) x_2(b²q)⟩
= [(a²u) ∧ (a²p)][(b²v) ∧ (b²q)]/(ab)² = (u ∧ p)(v ∧ q).

For the case of the translated quantity use the consideration of EX 1.7.

EX 1.9. Verify the differential (1.109) of two linear stochastic functions.

Hint: According to (1.89) we have dB_t² = 2 B_t dB_t + dt.

EX 1.10. Show that the "inverted" stochastic variables

Z_t = t B_{1/t}; H_{s,t} = s t W_{1/s, 1/t},

are also a WP (Z_t) and a WS (H_{s,t}).

EX 1.11. Use the bivariate PD (1.60) for a Markov process to calculate the two-variable characteristic function of a Brownian motion. Verify the result

G(u, v) = ⟨exp[i(u B_1 + v B_2)]⟩ = exp{-[u² t_1 + v² t_2 + 2uv(t_1 ∧ t_2)]/2}; B_k = B_{t_k},

and compare its 1D limit with (1.58a).

EX 1.12. Calculate the probability P of a particle to stay in the interior of the circle D = {(x, y) ∈ R² | x² + y² < R²}.


Hint: Assume that the components of the vector (x, y) are statistically independent and use the bivariate GD (1.35) with zero mean to calculate

P[B_t ∈ D] = ∬_D p(x, y) dx dy.

EX 1.13. Consider the Brownian motion on the perimeter of ellipses and hyperbolas

(i) ellipses

x(t) = cos(B_t), y(t) = sin(B_t),

(ii) hyperbolas

x(t) = cosh(B_t), y(t) = sinh(B_t).

Use the Ito formula to obtain the corresponding SDEs and calculate ⟨x(t)⟩ and ⟨y(t)⟩.

EX 1.14. Given the variables

Z_1 = (B_t^1 - B_t^2)⁴ + (B_t^1)⁵; Z_2 = (B_t^1 - B_t^2)³ + (B_t^1)⁶,

where B_t^1 and B_t^2 are independent WP's. Find the SDE's governing dZ_1 and dZ_2.

EX 1.15. The random function

R(t) = [(B_t^1)² + ... + (B_t^n)²]^{1/2},

is considered as the distance of an n-dimensional vector of independent WP's from the origin. Verify that its differential has the form

dR(t) = Σ_{k=1}^n B_t^k dB_t^k / R + [(n - 1)/(2R)] dt.
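By the multivariate Ito formula the dt-coefficient of dR is one half of the Laplacian of R(x) = [(x_1)² + ... + (x_n)²]^{1/2}, which should equal (n - 1)/(2R). A small finite-difference sketch (function names are ours) confirms this:

```python
import math

def radius(x):
    return math.sqrt(sum(xi * xi for xi in x))

def half_laplacian(f, x, h=1e-4):
    # 0.5 * sum_k d^2 f / dx_k^2 via central second differences
    total = 0.0
    for k in range(len(x)):
        xp = list(x); xp[k] += h
        xm = list(x); xm[k] -= h
        total += (f(xp) - 2 * f(x) + f(xm)) / h ** 2
    return 0.5 * total

x = [1.0, 2.0, 2.0]                 # n = 3, R = 3
print(half_laplacian(radius, x))    # ~ (n - 1)/(2R) = 1/3
```

The same check in any dimension reproduces (n - 1)/(2R), the drift of the Bessel-type process R(t).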

EX 1.16. Consider the stochastic function

x(t) = exp(a B_t - a² t/2); a = const.

(a) Show that

x(t) = x(t — s)x(s).

Hint: Use (1.65). (b) Show that x(t) is a martingale.


EX 1.17. The Wiener-Levy Theorem is given by

B_t = Σ_{k=1}^∞ A_k ∫_0^t ψ_k(z) dz, (E.1)

where the A_k are a set of IID N(0,1) variables and the ψ_k; k = 1, 2, ... are a set of orthonormal functions in [0, 1]:

∫_0^1 ψ_k(z) ψ_m(z) dz = δ_km.

Show that (E.1) defines a WP. Hint: The autocorrelation is given by ⟨B_t B_s⟩ = t ∧ s. Show that

(∂/∂t) ⟨B_t B_s⟩ = (∂/∂t)(t ∧ s) = Σ_k ψ_k(t) ∫_0^s ψ_k(z) dz.

Multiply the last line by ψ_m(t) and integrate the resulting equation from zero to unity.

EX 1.18. A bivariate PD of two variables x,y is given by p(x,y).

(a) Calculate the PD of the "new" variable z and its average for (i) z = x ± y; (ii) z = xy.

Hint: Use (1.41b). (b) Find the PD f_uv(u, v) for the "new" variables u = x + y; v = x - y.

EX 1.19. The Ito representation of a given stochastic process F(t,ω) has the form

F(t,ω) = ⟨F(t,ω)⟩ + ∫_0^t f(s,ω) dB_s,

where f(s,ω) is another stochastic process. Find f(s,ω) for the particular cases

(i) F(t,ω) = const; (ii) F(t,ω) = B_t^n; n = 1, 2, 3; (iii) F(t,ω) = exp(B_t).

EX 1.20. Calculate the probability of n independent identically distributed HPP's [see (A.8)].


CHAPTER 2

STOCHASTIC DIFFERENTIAL EQUATIONS

There are two classes of ordinary differential equations that contain stochastic influences:

(i) Ordinary differential equations (ODE) with stochastic coefficient functions and/or random initial or boundary conditions that contain no stochastic differentials. We consider this type of ODE in Chapter 4.3, where we will analyze eigenvalue problems. For these ODE's we can take advantage of all traditional methods of analysis.

Here we give only the simple example of a linear 1st order ODE

dx/dt = -p x; p = p(ω), x(0) = x_0(ω),

where the coefficient function p and the initial condition are x-independent random variables. The solution is x(t) = x_0 exp(-pt) and we obtain the moments of this solution in the form ⟨x^m⟩ = ⟨x_0^m exp(-pmt)⟩. Assuming that the initial condition and the parameter p are independent and identically N(0, a) distributed, this yields

⟨x^{2m}⟩ = [(2m)!/(2^m m!)] a^m exp(2a m² t²); ⟨x^{2m+1}⟩ = 0. *

(ii) We focus in this book — with a few exceptions in Chapter 4 — exclusively on initial value problems for ordinary SDE's of the type (1.123) that contain stochastic differentials of the Brownian motions. The initial values may also vary randomly, x_n(0) = x_n(ω). In this chapter we introduce the analytical tools to reach this goal. However, in many cases we have to resort to numerical procedures and we perform this task in Chapter 5.

The primary questions are:

(i) How can we solve the equations or at least approximate the solutions and what are the properties of the latter?



(ii) Can we derive criteria for the existence and uniqueness of the solutions?

The theory is, however, only in a state of infancy and we will be happy if we are able to answer these questions for the simplest problems. The majority of the knowledge pertains to linear ordinary SDE's; nonlinear problems are covered only in examples. Partial stochastic differential equations (PSDE) will be covered in Chapter 4 of this book.

2.1. One-Dimensional Equations

To introduce the ideas we begin with two simple problems.

2.1.1. Growth of populations

We consider here the growth of an isolated population. N(t) is the number of members of the population at the instant t. The growth (or decay) rate is proportional to the number of members and this growth is, in the absence of stochastic effects, exponential. We introduce additionally a stochastic term that is also proportional to N. We write the SDE first in the traditional way

dN/dt = rN + uW(t)N; r, u = const, (2.1)

where W(t) stands for the white noise. It is, however, convenient to write Equation (2.1) in a form analogous to (1.99.1). Thus, we obtain

dN = a dt + b dB_t; a = rN, b = uN; dB_t = W(t) dt. (2.2)

Equation (2.2) is a first order homogeneous ordinary SDE for the desired solution N(B_t, t). We call the function a(N, t) (the coefficient of dt) the drift coefficient and the function b(N, t) (the coefficient of dB_t) the diffusion coefficient. SDE's with drift coefficients that are at most first order polynomials in N and diffusion coefficients that are independent of N are called linear equations. Equation (2.2) is hence a nonlinear SDE. We solve the problem with the use of the

Page 81: Henderson d., plaskho p.   stochastic differential equations in science and engineering

Stochastic Differential Equations 57

Ito formula. Thus we introduce the function Y = g(N) = ln N and apply (1.99.3) (see also (1.129))

dY = d(ln N) = (a g_N + b² g_NN/2) dt + b g_N dB_t
= (r - u²/2) dt + u dB_t. (2.3)

Equation (2.3) is now a SDE with constant coefficients. Thus we can directly integrate (2.3) and we obtain its solution in the form

N = N_0 exp[(r - u²/2)t + u B_t]; N_0 = N(t = 0). (2.4)

There are two classes of initial conditions (ICs):

(i) The initial condition (here the initial population N_0) is a deterministic quantity.

(ii) The initial condition is a stochastic variable. In this case we assume that N_0 is independent of the Brownian motion.

The relation (2.4) is only a formal solution and does not offer much information about the properties of the solutions. We obtain more insight from the lowest moments of the formal solution (2.4). Thus we calculate the mean and the variance of N and we obtain

⟨N(t)⟩ = ⟨N_0⟩ exp[(r - u²/2)t] ⟨exp(u B_t)⟩ = ⟨N_0⟩ exp(rt), (2.5)

where we used the characteristic function (1.58a). We see that the mean or average (2.5) represents the deterministic limit solution (u = 0) of the SDE. We calculate the variance with the use of (1.20) and we obtain

Var(N) = exp(2rt) [⟨N_0²⟩ exp(u²t) - ⟨N_0⟩²]. (2.6)

An important special case is given by the combination of the parameters r = u2/2. This leads to

N(t) = N_0 exp(u B_t); ⟨N(t)⟩ = ⟨N_0⟩ exp(u²t/2); (2.7)
Var(N) = ⟨N_0²⟩ exp(2u²t) - ⟨N_0⟩² exp(u²t).


We generalize now the SDE (2.2) and we introduce the effect of two independent white noise processes. Thus, we put (see also (1.129))

dN = rN dt + (u dB_t^1 + v dB_t^2) N; u, v = const. (2.8)

We obtain now in lieu of (2.4) the solution

N = N_0 exp[(r - (u² + v²)/2)t + u B_t^1 + v B_t^2]. (2.9)

Taking into account the independence of the two Brownian motions yields again the mean value of the exponential growth (2.5) and we obtain for the variance

Var(N) = exp(2rt){⟨N_0²⟩ exp[(u² + v²)t] - ⟨N_0⟩²}. (2.10)
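The mean (2.5) and the variance (2.6) can be checked by sampling the exact solution (2.4) directly, since B_t is N(0, t). A minimal Monte Carlo sketch (parameter values are arbitrary):

```python
import math, random

def gbm_moments(r=0.5, u=0.3, t=1.0, n0=1.0, samples=200_000, seed=3):
    # Sample N(t) = N0 exp[(r - u^2/2) t + u B_t] with B_t ~ N(0, t)
    rng = random.Random(seed)
    sd = math.sqrt(t)
    vals = [n0 * math.exp((r - 0.5 * u * u) * t + u * rng.gauss(0.0, sd))
            for _ in range(samples)]
    mean = sum(vals) / samples
    var = sum((v - mean) ** 2 for v in vals) / samples
    return mean, var

mean, var = gbm_moments()
# (2.5): <N> = N0 exp(r t); (2.6) with deterministic N0: Var = e^{2rt}(e^{u^2 t} - 1)
print(mean)   # ~ exp(0.5) = 1.6487...
```

Both sampled moments agree with (2.5) and (2.6) within the Monte Carlo error.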

2.1.2. Stratonovich equations

Now we compare the results of the Ito theory in Section 2.1.1 with the one derived by the concept of Stratonovich. We use here the classical total differential (1.100). To indicate that we are considering a Stratonovich SDE we rewrite (2.2) in the form

dN = rN dt + uN ∘ dB_t, (2.11)

where the symbol "∘" is used again [see (1.83) or (1.96)]. To calculate the solution of (2.11) we use again the function g = ln(N) and we obtain

d(ln(N)) = dN/N = r dt + u ∘ dB_t. (2.12)

We can directly integrate (2.12) and we obtain

N = N_0 exp(rt + u B_t) with ⟨N⟩ = ⟨N_0⟩ exp[(r + u²/2)t]. (2.13)

The calculation of the variance gives

Var(N) = exp(2rt){⟨N_0²⟩ exp(2u²t) - ⟨N_0⟩² exp(u²t)}.

Note that the result of the Stratonovich concept is obtained with the aid of a conventional integration.
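The different means produced by the Ito solution (2.4) and the Stratonovich solution (2.13) can be seen numerically by sampling both exact solutions with the same Brownian values. A sketch (parameters arbitrary; N_0 = 1):

```python
import math, random

def growth_means(r=0.1, u=0.5, t=1.0, samples=200_000, seed=5):
    # Ito solution (2.4):          exp[(r - u^2/2) t + u B_t]
    # Stratonovich solution (2.13): exp(r t + u B_t)
    rng = random.Random(seed)
    sd = math.sqrt(t)
    ito = strat = 0.0
    for _ in range(samples):
        b = rng.gauss(0.0, sd)
        ito += math.exp((r - 0.5 * u * u) * t + u * b)
        strat += math.exp(r * t + u * b)
    return ito / samples, strat / samples

m_ito, m_strat = growth_means()
print(m_ito)    # ~ exp(r t)           = exp(0.1)
print(m_strat)  # ~ exp((r + u^2/2) t) = exp(0.225)
```

The Ito mean reproduces the deterministic growth exp(rt) of (2.5), while the Stratonovich mean carries the extra factor exp(u²t/2) of (2.13).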


2.1.3. The problem of Ornstein-Uhlenbeck and the Maxwell distribution

We consider here the SDE

dX = (m - X) dt + b dB_t; m, b = const. (2.14)

Equation (2.14) is a first order inhomogeneous ordinary SDE for the desired solution X(B_t, t). Its diffusion coefficient is independent of the variable X and this variable appears only linearly in the drift coefficient. Therefore, we classify (2.14) as a linear equation.

We use the relation

dX + (X - m) dt = e^{-t} d[e^t (X - m)], (2.15)

to define an integrating factor. Thus, we obtain from (2.14) d[e^t(X - m)] = b e^t dB_t.

Now we have a relation between just two differentials and we can integrate

X(t) = m + e^{-t}(X_0 - m) + b ∫_0^t exp(s - t) dB_s. (2.16)

Relation (2.16) is the formal solution of the SDE (2.14). To obtain more information we continue again with the calculation of the first moments. The integral on the right hand side of (2.16) is a non anticipative function. The mean value is hence given by (m and X_0 are supposed to be deterministic quantities)

⟨X(t)⟩ = m + e^{-t}(X_0 - m); ⟨X(0)⟩ = X_0, ⟨X(∞)⟩ = m, (2.17a)

and it varies monotonically between X_0 and m; it represents the deterministic limit of the solution (2.16). The calculation of the variance yields

Var(X) = b² ⟨I²⟩; I = ∫_0^t exp(s - t) dB_s.

We use (1.86) to calculate ⟨I²⟩ and thus we obtain

Var(X) = b² ∫_0^t exp[2(s - t)] ds = (b²/2)(1 - e^{-2t}). (2.17b)

In Figures 2.1 and 2.2 we compare the theoretical predictions of average (2.17a) and the variance (2.17b) with numerical simulations.


Fig. 2.1. Theoretical prediction (2.17a) and numerical simulation of the average of the Ornstein-Uhlenbeck process. The irregular curve belongs to a particular simulation of (2.14). Parameters: m = 3; b = 1; X_0 = 0. Numerical parameters: step width h = 0.005, and a number Ensem = 200 of repetitions of individual realizations.

Fig. 2.2. Theoretical prediction (2.17b) and numerical simulation of the variance of the Ornstein-Uhlenbeck process. Parameters as in Figure 2.1.

The numerical computations of the moments are performed by repeating individual realizations (Ensem times). We see from Figure 2.1 that the theoretical and numerical averages almost coincide. The values of the moments are calculated from averages of the random


numbers x_j, ⟨x⟩ = Ensem^{-1} Σ_{j=1}^{Ensem} x_j. However, we find that in the case of the variance a difference arises between numerics and theory and this discrepancy grows with increasing values of the time. In the latter case it is advisable to use a higher value of the parameter Ensem, or to change the routine and use higher order techniques (see Sections 5.5 and 5.6).

More details about the Ornstein-Uhlenbeck problem can be found in Risken [2.1].
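The numerical experiment described above can be reproduced with a few lines of the simplest explicit scheme (Euler-Maruyama, discussed in Chapter 5). A sketch; step width and ensemble size follow Figure 2.1:

```python
import math, random

def ou_ensemble(m=3.0, b=1.0, x0=0.0, t_end=1.0, h=0.005, ensem=200, seed=11):
    # Euler-Maruyama for dX = (m - X) dt + b dB_t over an ensemble of realizations
    rng = random.Random(seed)
    steps = int(t_end / h)
    finals = []
    for _ in range(ensem):
        x = x0
        for _ in range(steps):
            x += (m - x) * h + b * rng.gauss(0.0, math.sqrt(h))
        finals.append(x)
    mean = sum(finals) / ensem
    var = sum((v - mean) ** 2 for v in finals) / ensem
    return mean, var

mean, var = ou_ensemble()
# (2.17a): <X(1)> = m + e^{-1}(X0 - m) ~ 1.896
# (2.17b): Var(X(1)) = b^2 (1 - e^{-2})/2 ~ 0.432
print(mean, var)
```

As noted in the text, the ensemble average matches (2.17a) closely already for Ensem = 200, while the sampled variance scatters more and improves with a larger ensemble.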

Now we consider the Maxwell distribution. We investigate the three-dimensional SDE

du = -u dt + √(2b) dB_t, b = const;
u = (u_1, u_2, u_3); dB_t = (dB_t^1, dB_t^2, dB_t^3), (2.18a)

where u stands for a velocity field and B_t^k, k = 1, 2, 3 are independent WP's. Equation (2.18a) degenerates into three individual SDE's of the Ornstein-Uhlenbeck type (2.14). The individual solutions are

u_k = u_k0 exp(-t) + √(2b) exp(-t) ∫_0^t exp(s) dB_s^k; u_k0 = u_k(0).

We obtain the mean value ⟨u_k(t)⟩ = u_k0 exp(-t), and the deviation from it has the form

V_k(t) = u_k(t) - u_k0 exp(-t) = √(2b) exp(-t) ∫_0^t exp(s) dB_s^k. (2.18b)

We know that the integral in (2.18b) has a GD (see (1.90)). We calculate the moments of V_k and we obtain (see (2.17b))

⟨V_k²(t)⟩ = σ² = b(1 - A²); A = exp(-t). (2.18c)

Now we use (1.92) and this yields

⟨V_k⁴(t)⟩ = 3⟨V_k²(t)⟩²; ⟨V_k⁶(t)⟩ = 15⟨V_k²(t)⟩³; ... .

Clearly, the moments are in accordance with those of a GD (see (1.93)). Hence we obtain from (1.29)

p(V_k) = [2πb(1 - A²)]^{-1/2} exp{-V_k²/[2b(1 - A²)]}.


The substitution of (2.18b) into the last line leads to

p(u_k, t | u_k0, 0) = [2πb(1 - A²)]^{-1/2} exp{-(u_k - u_k0 A)²/[2b(1 - A²)]}. (2.19a)

This is the transition probability to pass from t = 0 and u_k0 to the instant t and u_k. Furthermore it reproduces for b = 1 the stationary Ornstein-Uhlenbeck transition probability (1.56.2).

Next we derive the Maxwell distribution. We realize that all three velocity components have the same variance (2.18c) and hence the same distribution. Furthermore, the individual components are produced by independent WP's. The velocities are hence statistically independent and the transition probability of the three components (u_1, u_2, u_3) factorizes; this yields

p(u_1, u_2, u_3, t | u_10, u_20, u_30, 0) = [2πb(1 - A²)]^{-3/2} exp{-Σ_{k=1}^3 (u_k - u_k0 A)²/[2b(1 - A²)]}. (2.19b)

Now we choose for b the Langevin parameter

b = kT/m, (2.19c)

where m, k and T stand for the mass of the particle, the Boltzmann constant and the absolute temperature, respectively. The substitution of (2.19c) into (2.19b) yields the non stationary Maxwell distribution. Better known is, however, the stationary version of the formula that we obtain from (2.19b) and (2.19c) in the limit t → ∞. Thus, we obtain the final form of the stationary Maxwell distribution

p(u_1, u_2, u_3) = [m/(2πkT)]^{3/2} exp[-m u²/(2kT)]; u² = u_1² + u_2² + u_3². (2.19d)

This distribution was an important result of the theoretical physics of the 19th century. It describes the velocity distribution of an ideal monatomic gas with no internal degrees of freedom and no intermolecular forces. It was found to agree well with experiments (see McQuarrie [2.2]). The Maxwell distribution has been derived in a number of different ways, such as a stationary solution of the Boltzmann transport equation.


2.1.4. The reduction method

We consider again the first order nonlinear SDE

dy = a(y, t) dt + b(y, t) dB_t. (2.20)

We elucidate now conditions that allow us to recast (2.20), with the aid of the transformation

x = U(y, t), (2.21)

as a reduced, linear SDE

dx = α(t) dt + β(t) dB_t, (2.22)

where the new drift and diffusion coefficients are functions of the time variable only. This property of (2.22) allows us to integrate (2.22) directly. We can write (2.22) in the form

d[x - ∫_0^t α(s) ds] = β(t) dB_t, (2.23)

where the term x - ∫_0^t α(s) ds is the integrating factor. There are only two differentials in (2.23). Thus, we obtain by integration

x(t) = x_0 + ∫_0^t α(s) ds + ∫_0^t β(s) dB_s; x_0 = x(0). (2.24)

Now we want to find the desired condition that allows us to transform (2.20) into (2.22). First we obtain from (1.99) the differential

dU = (U_t + a U_y + ½ b² U_yy) dt + b U_y dB_t.

A comparison with (2.22) leads to

α(t) = U_t + a U_y + ½ b² U_yy; β(t) = b U_y. (2.25)

The derivative ∂/∂y of the first part of (2.25) yields (α is independent of y)

0 = U_ty + ∂(a U_y)/∂y + ½ ∂(b² U_yy)/∂y. (2.26)


The derivatives ∂/∂y and ∂/∂t applied to the second part of (2.25) lead to

b_y U_y = -b U_yy; β'(t) = b_t U_y - b ∂/∂y(a U_y + ½ b² U_yy). (2.27)

We integrate the first part of (2.27) and we obtain

U_y = β(t)/b and U_yy = -β(t) b_y/b². (2.28)

The substitution of (2.28) into the second part of (2.27) yields

β'(t) = β(t) γ(t); γ(t) = b_t/b - b ∂/∂y(a/b - ½ b_y). (2.29)

Equation (2.29) represents now a sufficient condition that allows us to reduce the original SDE (2.20) to the directly integrable form (2.22). The integration of (2.29) gives

β(t) = β_0 exp[∫_0^t γ(s) ds]; β_0 = const. (2.30)

The substitution of (2.30) into the first part of (2.28) yields the transformation function

U(y, t) = β_0 exp[∫_0^t γ(s) ds] ∫^y du/b(u, t). (2.31)

Finally we obtain from the first part of (2.25)

α(t) = U_t + β(t)[a/b - ½ b_y]. (2.32)

We also mention a condition for the special case where the drift and diffusion coefficients of the original Equation (2.20) are time-independent, a = a(y); b = b(y). In this case we obtain from (2.29)

(d/dy){b (d/dy)(a/b - ½ b_y)} = 0. (2.33)

Example 1.

A simple solution of (2.33) is

a = ¼ d(b²)/dy. (2.34a)


In this case we obtain α = 0, β = β_0. The solution of the reduced equation is x = β_0 B_t + const. The transformation U(y, t) is given by the integration of (2.31) and we obtain the solution of the original problem (2.20) with (2.34a) in the implicit form

∫_{y_0}^{y} du/b(u) = B_t; y(t = 0) = y_0. (2.34b)

Example 2.

We consider again the Ornstein-Uhlenbeck problem (2.14). We have a = m - y; b = const. These coefficients comply with (2.33) and this yields with (2.29) (and β_0 = 1)

γ = 1; α = m exp(t)/b; β = exp(t); x = y exp(t)/b; x_0 = y_0/b.

The reduced SDE (2.22) has the form dx = exp(t)[(m/b) dt + dB_t] and its solution is

x = x_0 + (m/b)[exp(t) - 1] + ∫_0^t exp(s) dB_s.

We apply the inversion of the transformation and we obtain the solution (2.16). We consider further applications of the reduction method in EX 2.1. *
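For time-independent coefficients the function γ of (2.29) reduces to -b ∂/∂y(a/b - ½ b_y), and for this example it should equal 1 for every y. A small numerical sketch (function names are ours; the derivative is taken by central differences):

```python
def gamma_time_independent(a, b, db_dy, y, h=1e-5):
    # gamma = -b(y) * d/dy [ a(y)/b(y) - 0.5 * db/dy ]   (b_t = 0, see (2.29))
    g = lambda s: a(s) / b(s) - 0.5 * db_dy(s)
    return -b(y) * (g(y + h) - g(y - h)) / (2 * h)

m = 3.0
b_const = 1.5
gamma = gamma_time_independent(lambda y: m - y,      # Ornstein-Uhlenbeck drift
                               lambda y: b_const,    # constant diffusion
                               lambda y: 0.0,        # b_y = 0
                               y=0.7)
print(gamma)   # ~ 1, independently of y, m and b
```

Since γ is constant in y, condition (2.33) holds and the reduction of the Ornstein-Uhlenbeck equation goes through as shown in the example.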

2.1.5. Verification of solutions

We add here comments on how to verify given solutions with the aid of the Ito formula and demonstrate the techniques with examples. Basically there are two types of solutions of SDE's:

(1) The solution depends explicitly on the Brownian motion. An example is the solution of the population growth problem (2.4), which should be written as N = N(B_t, t). The verification of such a solution is done with the use of the Ito formula (1.98). We leave this verification for EX 2.2 and look at a generalized growth problem defined by the following SDE.

Example. (Generalized population growth problem)

We consider the SDE:

dX = rX(k - X) dt + uX dB_t; k, r, u = const. (2.35a)


The solution is given by

X(B_t, t) = F(B_t, t)/[a + r I(t)]; F = exp[(rk - u²/2)t + u B_t]; (2.35b)

I(t) = ∫_0^t F(B_s, s) ds; 1/a = X(t = 0).

We use (1.98) with B_t in the role of the independent stochastic variable. Thus we obtain in the first place

X_t = (rk - u²/2)X - rX²; X_{B_t} = uX; X_{B_t B_t} = u²X,

and the introduction of these derivatives into (1.98) reproduces the SDE (2.35a). *

(2) The solution of the SDE does not depend explicitly on the Brownian motion and is given by X = X(t). An example of this type is the solution (2.16) of the Ornstein-Uhlenbeck problem. To verify such a solution we need to differentiate integrals of the type

J(t) = ∫_0^t H(s) dB_s, (2.36)

where H(s) is a deterministic function. The derivative of the integral in (2.36) is given by

dJ/dt = (d/dt) ∫_0^t H(s) dB_s = H(t) dB_t/dt. (2.37)

To prove the relation (2.37) we only need to integrate this formula.

Example. (The Brownian bridge)

We consider here the linear SDE

dX = [(r - X)/(1 - t)] dt + dB_t; X(t = 0) = X_0. (2.38)

Its solution (see Gard [2.3]) has the form

X(t) = X_0(1 - t) + rt + (1 - t) ∫_0^t dB_s/(1 - s). (2.39)

The use of (2.37) yields

dX = [r - X_0 - ∫_0^t dB_s/(1 - s)] dt + dB_t. (2.40)


Indeed, solving (2.39) for the integral on the right hand side gives

∫_0^t dB_s/(1 - s) = (X - rt)/(1 - t) - X_0,

and the substitution of the last line into (2.40) reproduces again the SDE (2.38). *

The verification of the solutions of higher order SDE's that are analyzed in the following sections is performed with the use of the multivariate Ito formula (1.122) (see EX 2.4).

2.2. White and Colored Noise, Spectra

We begin with the calculation of the white noise autocorrelation function. We start from (1.77) and obtain

r(t, s) = ⟨W(t) W(s)⟩ = ⟨(dB_t/dt)(dB_s/ds)⟩ = ∂²/∂t∂s ⟨B_t B_s⟩ = ∂²/∂t∂s (t ∧ s) = δ(s - t), (2.41)

where δ(s - t) is the Dirac delta function. One equivalent form of (2.41) is

r(t = s + τ, s) = r(τ) = δ(τ). (2.42)

The autocorrelation function r(t, s) depends only on the time difference and we say that the white noise is delta-correlated. We can also use (2.41) to justify (1.91) empirically: multiplying (2.41) by the deterministic differentials dt ds yields (1.91).

The spectrum S(ω) is defined as the Fourier transform of r(τ) and we obtain

S(ω) = ∫_{-∞}^{∞} exp(iωτ) r(τ) dτ. (2.43)

Thus, we obtain for the case of white noise the constant ("white") spectrum

S(ω) = 1. (2.44)

The relation (2.44) means that all frequency components of the spectrum have the same importance. This result is analogous to the case of white light, where all frequency components of the light occur with equal weight factors. In many stochastic processes (such as in


the case of turbulent flows) we face, however, a banded spectrum and the white spectrum (2.44) is a non physical idealization.

To construct a more realistic spectrum we modify the Ornstein-Uhlenbeck process (2.14) and use the linear first order SDE

dv = -b² v dt + a b² dB_t; a, b = const; v_0 = v(t = 0), (2.45)

where the parameter b is independent of the other parameter a. Equation (2.45) is a simple model to calculate the velocity v of a sphere submerged in a viscous fluid. The term -b²v represents the hydrodynamic resistance, which is according to the law of Stokes given by b² = 6πRη/m (R, m are the radius and the mass of the sphere, η is the viscosity of the fluid). The second term on the right hand side of (2.45) represents the force on the sphere that is caused by collisions with molecules of the fluid. We use an integrating factor and we obtain

v = v_0 exp(-b²t) + a b² ∫_0^t exp[b²(s - t)] dB_s. (2.46)

The mean value of (2.46) is ⟨v⟩ = v_0 exp(-b²t). We calculate the autocorrelation r(t, τ) = ⟨Z(t + τ/2) Z(t - τ/2)⟩; Z(t) = v(t) - ⟨v(t)⟩.

The solution (2.46) includes an integral of a non anticipative function and we obtain with (1.87)

r(t, τ) = a²b⁴ ∫_0^γ exp[2b²(u - t)] du; γ = (t + τ/2) ∧ (t - τ/2) = t - |τ|/2. (2.47)

The evaluation of (2.47) leads to

r(t, τ) = A[exp(-b²|τ|) - exp(-2b²t)]; A = a²b²/2. (2.48)

The asymptotic limit of (2.48) is given by

t → ∞: r(t, τ) → A exp(-b²|τ|). (2.49)

Thus, the spectrum of the asymptotic autocorrelation has the form

S(ω) = 2A ∫_0^∞ exp(-b²τ) cos(ωτ) dτ = a²b⁴/(b⁴ + ω²). (2.50)

This spectrum is not delta-shaped, but declines monotonically from its maximum at ω = 0 with S(0) = a² to S(±b²) = a²/2. It


is thus the spectrum of a colored process with frequency components of different importance.
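The cosine transform in (2.50) can be verified by elementary numerical quadrature. A sketch with the trapezoidal rule (parameters arbitrary):

```python
import math

def spectrum_numeric(a, b, omega, tau_max=40.0, n=400_000):
    # S(omega) = 2A * integral_0^inf exp(-b^2 tau) cos(omega tau) dtau, A = a^2 b^2 / 2
    h = tau_max / n
    s = 0.5 * (1.0 + math.exp(-b * b * tau_max) * math.cos(omega * tau_max))
    for k in range(1, n):
        tau = k * h
        s += math.exp(-b * b * tau) * math.cos(omega * tau)
    return a * a * b * b * s * h

def spectrum_exact(a, b, omega):
    # closed form (2.50)
    return a ** 2 * b ** 4 / (b ** 4 + omega ** 2)

print(spectrum_exact(1.0, 1.0, 0.0))   # S(0) = a^2 = 1
print(spectrum_exact(1.0, 1.0, 1.0))   # S(b^2) = a^2/2 = 0.5
```

The quadrature reproduces the Lorentzian shape of (2.50) to high accuracy; the tail beyond tau_max is exponentially negligible.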

However, there are alternative ways to derive a colored spectrum and we follow here the ideas of Stratonovich [2.4]. We introduce a stochastic function

f(t) = A_1(t) cos(λt) + A_2(t) sin(λt), (2.51)

where λ is a given deterministic frequency and the functions A_k are amplitude functions that satisfy the SDEs

dA_k(t) = -a² A_k dt + a dB_t^k; A_k(0) = 0; k = 1, 2; a > 0. (2.52)

In Equation (2.52) we use two independent Brownian motions B_t^k, k = 1, 2. The solution of (2.52) has the form

A_k(t) = a exp(-a²t) ∫_0^t exp(a²s) dB_s^k. (2.53)

The substitution of (2.53) into (2.51) leads to ⟨f(t)⟩ = 0. To calculate the autocorrelation function we put

f(t + τ/2) f(t - τ/2) = a² exp(-2a²t)[cos(λa_+) J_+^1 + sin(λa_+) J_+^2] × [cos(λa_-) J_-^1 + sin(λa_-) J_-^2], (2.54)

with

J_±^k = ∫_0^{a_±} exp(a²s) dB_s^k; a_± = t ± τ/2; k = 1, 2. (2.55)

Now we take advantage of the independence of B^1 and B^2 and we obtain the autocorrelation function

r(t, τ) = ⟨f(t + τ/2) f(t - τ/2)⟩
= a² exp(-2a²t)[cos(λa_+) cos(λa_-) ⟨J_+^1 J_-^1⟩ + sin(λa_+) sin(λa_-) ⟨J_+^2 J_-^2⟩], (2.56)

and we obtain with the use of the integral in Equation (2.47)

r(t, τ) = ½ cos(λτ)[exp(-a²|τ|) - exp(-2a²t)]. (2.57)


Finally, we perform again the limit t → ∞ and we obtain the spectrum

S(ω) = ∫_{-∞}^{∞} r(∞, τ) exp(iωτ) dτ = a²(a⁴ + ω² + λ²)/[a⁸ + 2a⁴(ω² + λ²) + (ω² - λ²)²]. (2.58)

This spectrum has, for all values of a and λ, a stationary point at ω = 0. However, there is a limit curve λ_c(a) that separates different features. For λ < λ_c the maximum at ω = 0 is the only stationary point and (2.58) behaves qualitatively like (2.50). For λ > λ_c the spectrum has a minimum at ω = 0 and a maximum at the position ω_m(a, λ), and it decreases to zero for ω > ω_m(a, λ). Note also that (2.58) degenerates in the limit λ → 0 to the spectrum (2.50). From a practical point of view, we can conclude that the determination of a colored spectrum requires — additionally to the Brownian motion — the solution of at least one SDE (see (2.46) or (2.52)), whereas the construction of a white noise needs only the use of the Brownian motion.

2.3. The Stochastic Pendulum

We return here to the problem mentioned in the introduction. We generalize this linearized problem (the nonlinear case is treated in Section 2.5) and introduce a stochastic damping (intensity coefficient α), a stochastic frequency (intensity coefficient β) and a stochastic excitation (intensity coefficient γ). Thus, we analyze the solutions of the second order inhomogeneous SDE

d²x/dt² + α W(t) dx/dt + [1 + β W(t)] x = γ W(t); α, β, γ = const, (2.59)

The stochastic (or noise) terms that influence the damping and the frequency are multiplied by the variable x or its derivative. These stochastic quantities are therefore referred to as multiplicative noise. By contrast, we see that the excitation in (2.59) is independent of the variable x and its derivatives. This type of stochastic influence is hence called additive noise. Equation (2.59) is hence in general a nonlinear two-dimensional SDE.


We write now the SDE (2.59) as a canonical system of two first order SDEs

dx_1 = x_2 dt, (2.60)
dx_2 = -x_1 dt + (γ - α x_2 - β x_1) dB_t.

To solve (2.60) we write it in its vectorial form

dx = A x dt + b dB_t; A = ((0, 1), (-1, 0)); b = (0, γ - α x_2 - β x_1)^T. (2.61)

We introduce an integrating factor and set

dx - A x dt = exp(At) d[exp(-At) x], (2.62)

where the exponential matrix is defined by a Taylor expansion

exp(At) = Σ_{k=0}^∞ A^k t^k/k! = I + At + ½ A²t² + ... ; d[exp(At)] = A exp(At) dt, (2.63)

where I is the identity matrix. Thus, we obtain the SDE

d[exp(-At) x] = exp(-At) b dB_t, (2.64)

where the term exp(-At) represents the integrating factor. An integration yields

x = exp(At) x_0 + ∫_0^t exp[A(t - s)] b(s) dB_s; x_0 = x(t = 0). (2.65)

We assume that the initial condition x_0 is a deterministic quantity and we obtain from (2.65) the mean value

⟨x(t)⟩ = exp(At) x_0; x_0 = (x_0, y_0)^T, (2.66)

that is again the solution of the deterministic limit of the SDE (2.59). We obtain from (2.63)

exp(At) = ((cos t, sin t), (-sin t, cos t)). (2.67)
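The rotation form (2.67) follows from the series (2.63) with the system matrix A = ((0, 1), (-1, 0)) of the canonical form (2.60). A small sketch that sums the series term by term (2×2 matrices as nested lists):

```python
import math

def mat_mul(x, y):
    # product of two 2x2 matrices
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm_series(a, t, terms=30):
    # exp(A t) = sum_k (A t)^k / k!  (truncated Taylor series (2.63))
    result = [[1.0, 0.0], [0.0, 1.0]]   # identity I
    power = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    at = [[a[i][j] * t for j in range(2)] for i in range(2)]
    for k in range(1, terms):
        power = mat_mul(power, at)
        fact *= k
        result = [[result[i][j] + power[i][j] / fact for j in range(2)]
                  for i in range(2)]
    return result

A = [[0.0, 1.0], [-1.0, 0.0]]
E = expm_series(A, 0.8)
print(E[0][0], E[0][1])   # ~ cos(0.8), sin(0.8)
print(E[1][0], E[1][1])   # ~ -sin(0.8), cos(0.8)
```

The truncated series reproduces the rotation matrix of (2.67) to machine precision for moderate t.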


Thus, we infer from (2.65)

x_1 = x_0 cos t + y_0 sin t + ∫_0^t sin(t - s)[γ - α x_2(s) - β x_1(s)] dB_s, (2.68.1)

x_2 = -x_0 sin t + y_0 cos t + ∫_0^t cos(t - s)[γ - α x_2(s) - β x_1(s)] dB_s. (2.68.2)

Thus we can infer from (2.68) that the solution of the stochastic pendulum is given in general by a set of two coupled Volterra integral equations.

2.3.1. Stochastic excitation

Here we assume α = β = 0; γ ≠ 0, so equation (2.59) contains only additive noise and the problem is linear. The solution (2.68) is thus not an integral equation anymore; it involves only a stochastic integral. The mean values of the two components are

⟨x_1⟩ = x_0 cos t + y_0 sin t; ⟨x_2⟩ = -x_0 sin t + y_0 cos t, (2.69)

and we calculate the autocorrelation of the x_1 component

r_11(t, τ) = ⟨z(t + τ/2) z(t - τ/2)⟩;   z(t) = x_1(t) - ⟨x_1(t)⟩.

The application of the rule (1.87) to the last line yields (see (2.47))

r_11(t, τ) = γ² ∫_0^a sin(t + τ/2 - u) sin(t - τ/2 - u) du;   a = t - |τ|/2.   (2.70)

The evaluation of this integral leads to

r_11(t, τ) = (γ²/4){2(t - |τ|/2) cos τ + sin|τ| - sin 2t}.   (2.71)
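The closed form (2.71) can be checked against direct quadrature of (2.70). A small sketch, assuming NumPy (the parameter values are illustrative):

```python
import numpy as np

gamma, t, tau = 0.5, 2.0, 0.6
a = t - abs(tau) / 2.0

# quadrature of (2.70) by the trapezoidal rule
u = np.linspace(0.0, a, 20001)
f = np.sin(t + tau/2 - u) * np.sin(t - tau/2 - u)
h = u[1] - u[0]
r11_quad = gamma**2 * h * (f.sum() - 0.5 * (f[0] + f[-1]))

# closed form (2.71)
r11_closed = (gamma**2 / 4.0) * (2*a*np.cos(tau) + np.sin(abs(tau)) - np.sin(2*t))
```

Both evaluations agree to quadrature accuracy.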


2.3.2. Stochastic damping (β = γ = 0; α ≠ 0)

We assume here β = γ = 0; α ≠ 0, and the SDE (2.59) contains a multiplicative noise term. We obtain from (2.68)

x_1 = x_0 cos t + y_0 sin t - α ∫_0^t sin(t-s) x_2(s) dB_s,   (2.72.1)

x_2 = -x_0 sin t + y_0 cos t - α ∫_0^t cos(t-s) x_2(s) dB_s.   (2.72.2)

We recall that the solution of the problem with additive stochastic excitation was governed only by a stochastic integral. We see that (2.72) represents an integral equation. This means that multiplicative noise terms influence a process in a much more drastic way than additive stochastic terms do. The mean value is again given by (2.69). However, the determination of the variance is an interesting problem. We use

z_k = x_k - ⟨x_k⟩;   V_jk = ⟨z_j(t) z_k(t)⟩;   j, k = 1, 2.   (2.73)

Thus we obtain from (2.72)

V_11(t) = s ∫_0^t sin²(t-u) K(u) du;   K(u) = ⟨x_2²(u)⟩;   s = α²;

V_12(t) = V_21(t) = (s/2) ∫_0^t sin[2(t-u)] K(u) du;   (2.74)

V_22(t) = s ∫_0^t cos²(t-u) K(u) du.

We must determine all four components of the variance matrix, and we begin with

V_22(t) = K(t) - ⟨x_2⟩² = s ∫_0^t cos²(t-u) K(u) du.   (2.75)

Equation (2.75) is a convolution integral equation that governs the function K(t). It is convenient to solve (2.75) with an application of the Laplace transform

H(p) = L{K(t)} = ∫_0^∞ exp(-pt) K(t) dt.   (2.76)


The Laplace transform (2.76) and the use of the convolution theorem

L{∫_0^t A(t-s)B(s) ds} = L{A(t)} L{B(t)}   (2.77)

lead to

H(p) = L{(-x_0 sin t + y_0 cos t)²} + sH(p) L{cos²t}
     = x_0² L{sin²t} - x_0 y_0 L{sin 2t} + [y_0² + sH(p)] L{cos²t}.

To reduce the algebra we consider only the special case x_0 = 0, y_0 ≠ 0. This yields

H(p) = [y_0² + sH(p)] (p² + 2)/[p(p² + 4)],   (2.78)

and solving this for the function H(p) we obtain

H(p) = (p² + 2)y_0²/F(p, s);   F(p, s) = p(p² + 4) - s(p² + 2).   (2.79)

To invert the Laplace transform we need to know the zeros of the function F(p, s). In EX 2.5 we give a simple special case of the parameter s that allows inverting (2.79). However, our focus is the determination of weak noisy damping. Thus, we consider the limit s → 0+. In this case we can approximate the zeros of F(p, s), and this leads to

F(p, s) = [(p - s/4)² + 4](p - s/2) + O(s²);   s = α² → 0.   (2.80)

A decomposition of (2.79) into partial fractions yields with (2.80)

H(p) = y_0² { (1/2)/(p - s/2) + [(p - s/4)/2 + 5s/8]/[(p - s/4)² + 4] }.   (2.81)

With (2.81) and (2.75) we obtain the inversion of the Laplace transform (2.76)

V_22(t) = K(t) - y_0² cos²t
        = (y_0²/2) exp(st/2){1 + exp(-st/4)[cos 2t + (5s/8) sin 2t]} - y_0² cos²t.   (2.82)

Note that the variance (2.82) is of order unity for st = O(1).
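A numerical consistency sketch (assuming NumPy; not from the book): for small s, the function K(t) read off from (2.82) should satisfy the convolution equation (2.75) up to O(s²), with x_0 = 0 so that ⟨x_2⟩ = y_0 cos t.

```python
import numpy as np

y0, s, t = 1.0, 0.01, 2.0   # small s, per the limit used in (2.80)

def K(u):
    # K(u) implied by (2.82): K = V22 + y0^2 cos^2 u
    return 0.5 * y0**2 * np.exp(s*u/2) * (
        1.0 + np.exp(-s*u/4) * (np.cos(2*u) + 5*s*np.sin(2*u)/8))

# right hand side of (2.75) by the trapezoidal rule
u = np.linspace(0.0, t, 40001)
f = np.cos(t - u)**2 * K(u)
h = u[1] - u[0]
rhs = s * h * (f.sum() - 0.5 * (f[0] + f[-1]))

# left hand side of (2.75)
lhs = K(t) - (y0 * np.cos(t))**2
```

The residual lhs - rhs is of order s², confirming the small-s approximation.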


We give in Figures 2.3 and 2.4 a comparison of the theoretically calculated average (2.69) and the variance (2.82) with numerical simulations. We note again that the theoretically predicted and the numerical averages almost coincide. However, we encounter a discrepancy between the theoretical and numerical values of the variance. The latter

Fig. 2.3. Theoretical prediction (2.69) and numerical simulation of the average of Equation (2.72). Parameters: α = 0.3; x_0 = 0; y_0 = 1. Numerical parameters: step width h = 0.005 and a number Ensem = 200 of repetitions of individual realizations.

Fig. 2.4. Theoretical prediction (2.82) and numerical simulation of the variance of Equation (2.72). Parameters as in Figure 2.3.


might be caused by a value of α that violates the condition s → 0 used in Equation (2.80). However, we will repeat the computation of the variance in Chapter 5 with the use of a different numerical routine.

We assign the calculation of the two other components of the variance matrix to EX 2.6. We also shift the solution of the pendulum problem with a noisy frequency to EX 2.7.

2.4. The General Linear SDE

We consider here the general linear inhomogeneous n-dimensional SDE

dx_j = [A_jk(t) x_k + a_j(t)] dt + b_jr(t) dB_t^r;   j, k = 1, ..., n;   r = 1, ..., m,   (2.83)

where the vector a_j(t) represents an inhomogeneous term. Note that the diffusion coefficients are only time-dependent; (2.83) thus represents a problem with additive noise. Multiplicative noise problems are (for reasons discussed in Section 2.3) included in the category of nonlinear SDEs. To introduce a strategy to solve (2.83) we consider for the moment its one-dimensional limit. Thus we analyze the SDE

dx = [A(t)x + a(t)]dt + b(t)dBt. (2.84)

To find an integrating factor we use

I(t) = exp[-∫_0^t A(s) ds].   (2.85)

We find with (2.85) d[x(t)I(t)] = I(t)[a(t) dt + b(t) dB_t], and after an integration we obtain

x(t) = {x_0 + ∫_0^t I(s)a(s) ds + ∫_0^t I(s)b(s) dB_s}/I(t);   x_0 = x(t = 0).   (2.86)

Thus, we determine the mean value

m(t) = {x_0 + ∫_0^t I(s)a(s) ds}/I(t),   (2.87)

Page 101: Henderson d., plaskho p.   stochastic differential equations in science and engineering

Stochastic Differential Equations 77

and this is again the solution of the deterministic limit of the SDE (2.84). The covariance has the form

c(t, u) = ⟨[x(t) - m(t)][x(u) - m(u)]⟩
        = [1/(I(t)I(u))] ∫_0^{t∧u} I²(s) b²(s) ds;   t∧u = min(t, u).   (2.88)
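The mean formula (2.87) can be verified numerically, since m(t) solves the deterministic ODE dm/dt = A(t)m + a(t). A sketch assuming NumPy; the coefficient functions are illustrative, not from the book.

```python
import numpy as np

A = lambda t: -1.0 + 0.5 * np.sin(t)   # illustrative A(t)
a = lambda t: np.cos(t)                # illustrative a(t)
x0, T, n = 1.0, 2.0, 20000
h = T / n

# direct Euler integration of dm/dt = A(t) m + a(t)
m, t = x0, 0.0
for _ in range(n):
    m += h * (A(t) * m + a(t))
    t += h

# formula (2.87) with I(t) = exp(-int_0^t A(s) ds), via cumulative sums
ts = np.linspace(0.0, T, n + 1)
I = np.exp(-np.cumsum(np.concatenate(([0.0], A(ts[:-1]) * h))))
integral = np.sum(I[:-1] * a(ts[:-1]) * h)
m_formula = (x0 + integral) / I[-1]
```

Both approximations converge to the same exact mean as h → 0.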

To solve the n-dimensional SDE we start with the homogeneous deterministic limit of (2.83)

ẋ_j = A_jk(t) x_k;   x_m(0) = x_{0m}.   (2.89)

We use the matrix of the fundamental solutions, <&, and we can generalize (2.89)

dΦ_jm/dt = A_jk(t) Φ_km;   Φ_km(0) = δ_km.   (2.90)

To compare the solutions of (2.90) with the one-dimensional case, we rewrite the solution (2.86) in the form

x(t) = Φ_11(t){x_0 + ∫_0^t Φ_11^{-1}(s)[a(s) ds + b(s) dB_s]};   Φ_11^{-1}(s) = I(s).   (2.86')

Hence, the n-dimensional solution to (2.83) is

x_m = Φ_mk(t){x_{0k} + ∫_0^t Φ_kr^{-1}(s)[a_r(s) ds + b_ru(s) dB_s^u]}.   (2.91)

We prove (2.91) in EX 2.9 with the use of the Ito formula (1.127). The mean value is again the solution of the deterministic problem. It is given by the vector

m_s(t) = Φ_sk(t){x_{0k} + ∫_0^t Φ_kr^{-1}(s) a_r(s) ds},   (2.92)

and the covariance matrix takes the symbolic form

c_mn(t, u) = Φ_mk(t) Φ_nv(u) ∫_0^{t∧u} Φ_kr^{-1}(s) b_rσ(s) Φ_vl^{-1}(s) b_lσ(s) ds,   (2.93)

where the upper limit t∧u = min(t, u) has the same meaning as in (2.47). More details about linear SDEs can be found in Kloeden et al. [2.5].


It was shown in Section 1.8 that the stochastic integral K(t) = ∫_0^t G(s) dB_s is N[0, ∫_0^t G²(s) ds] distributed. A simple generalization to the stochastic vector integral

K_j(t) = ∫_0^t b_jr(s) dB_s^r;   j = 1, ..., n;   r = 1, ..., m,

shows that the latter integral is N[0, ∫_0^t b(s)b^T(s) ds] distributed (b^T is the transpose of b, see EX 2.8). Thus, the solution of the general linear SDE (2.83) can be written in the symbolic form

Y(t) = x(t) - Φ(t){x_0 + ∫_0^t Φ^{-1}(s)a(s) ds} = Φ(t) ∫_0^t Φ^{-1}(s) b(s) dB_s.   (2.94)

The left hand side of (2.94) is Gaussian distributed, and this implies that the right hand side has the same distribution.

An additional point of interest is the question whether this solution exhibits a stationary normal distribution. It was proved by Arnold [1.2] that this distribution is stationary, provided that the equation is homogeneous (a = 0), the matrices A and b are time-independent, all eigenvalues of the matrix A have negative real parts, and the initial value vector is constant or normally distributed. Then we obtain for the mean value and the covariance in symbolic form

⟨x(t)⟩ = const;   ⟨x²(t)⟩ = ∫_0^∞ exp(At) b b^T exp(A^T t) dt.   (2.95)

In anticipation of Chapter 3, we use as an example the stationary Fokker-Planck equation for the 1D SDE

dx = -Ax dt + b dB_t;   A, b = const > 0,

where the negative sign of the drift coefficient is the 1D remnant of the condition of negative real parts of the eigenvalues of A. The stationary Fokker-Planck equation (3.43a) governing the PD p(x) reads

A(xp)' + b²p''/2 = 0;   ' = d/dx;   ∫_{-∞}^∞ p(x) dx = 1.


A first integral leads to

d 2A — p = - T Q - ^ P + T; 7 = const. da; tr

We put γ = 0 and this leads to

p = (2πσ²)^{-1/2} exp[-x²/(2σ²)];   σ² = b²/(2A),

where we applied the normalization condition. This is a normal distribution with zero mean and variance σ², and these values coincide with the 1D limit of (2.95).
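The stationary variance σ² = b²/(2A) can be observed by direct sampling. A minimal sketch assuming NumPy, using the exact Ornstein-Uhlenbeck transition update (parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
A, b = 1.5, 0.8
h, steps, paths = 0.05, 400, 20000   # final time t = 20 >> 1/A

# exact OU update: x(t+h) = e^{-Ah} x(t) + N(0, b^2/(2A) (1 - e^{-2Ah}))
x = np.zeros(paths)
decay = np.exp(-A * h)
noise_sd = np.sqrt(b**2 / (2*A) * (1.0 - np.exp(-2*A*h)))
for _ in range(steps):
    x = decay * x + noise_sd * rng.normal(size=paths)

var_est = x.var()
var_theory = b**2 / (2*A)   # stationary Fokker-Planck variance
```

After a time long compared with 1/A the sample variance settles at b²/(2A).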

2.5. A Class of Nonlinear SDE

We begin with a class of multiplicative one-dimensional problems

dx = a(x, t) dt + h(t) x dB_t.   (2.96)

Typical for this SDE is a nonlinear drift coefficient and a diffusion coefficient proportional to x: b(x, t) = h(t)x. We define an integrating factor with the function

G(B_t, t) = exp[-∫_0^t h(s) dB_s + (1/2) ∫_0^t h²(s) ds].   (2.97a)

An integration by parts in the exponent gives with (1.105)

G(B_t, t) = exp{-h(t)B_t + ∫_0^t [h'(s)B_s + (1/2)h²(s)] ds}.   (2.97b)

The differential dG is computed with the Ito formula (1.98): with G_t = (1/2)h²G, G_{B_t} = -hG and G_{B_tB_t} = h²G we obtain

dG = G(h² dt - h dB_t),   (2.98a)

and the application of (1.109) yields

d(xG) = G(a dt + hx dB_t) + Gx(h² dt - h dB_t) - Gxh² dt = Ga dt.   (2.98b)

Now we introduce the new variable

Y = xG.   (2.99)


The substitution into (2.98) yields

dY/dt = G a(x = Y/G, t).   (2.100)

It is important to realize that (2.100) is a deterministic equation, since the differential dB_t is absent.

Example 1.

dx = (1/x) dt + αx dB_t;   α = const.

With a = 1/x and h = α this example belongs to the class (2.96). The function G is given by G = exp(α²t/2 - αB_t). Thus we obtain from (2.100)

dY/dt = G²/Y   ⟹   Y² = const + 2 ∫_0^t exp(α²s - 2αB_s) ds.

Finally we obtain after the inversion of (2.99)

x = exp(-α²t/2 + αB_t) √D;   D = x_0² + 2 ∫_0^t exp(α²s - 2αB_s) ds.

Example 2.

We consider a generalization of the problem of population growth (2.2)

dx = rx(1 - x) dt + αx dB_t;   α, r = const.

Note that this SDE coincides with the SDE (2.35a) for k = 1. The function G is defined as in the previous example and (2.100) yields

dY/dt = rY(G - Y)/G.

This is a Bernoulli differential equation. We solve it with the substitution Y = 1/z. Thus we obtain the linear problem ż = -rz + r/G, where dots indicate time derivatives. An integration yields

x = 1/(zG) = U(t)/[1/x_0 + rI(t)],

with

U(t) = exp[(r - α²/2)t + αB_t];   I(t) = ∫_0^t U(s) ds.
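The closed form of Example 2 can be checked pathwise against an Euler-Maruyama integration driven by the same Brownian increments. A sketch assuming NumPy; parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
r, alpha, x0, T, n = 1.0, 0.2, 0.5, 1.0, 100000
h = T / n
dB = rng.normal(0.0, np.sqrt(h), n)

x, B, I = x0, 0.0, 0.0
for k in range(n):
    t = k * h
    U = np.exp((r - alpha**2/2) * t + alpha * B)   # U(t) along this path
    I += U * h                                     # I(t) = int_0^t U(s) ds
    x += r * x * (1 - x) * h + alpha * x * dB[k]   # Euler-Maruyama step
    B += dB[k]

U_T = np.exp((r - alpha**2/2) * T + alpha * B)
x_closed = U_T / (1.0 / x0 + r * I)                # closed-form solution
```

With a fine step the two paths agree up to the strong discretization error of the scheme.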


The class (2.96) of one-dimensional SDEs is of importance for the analysis of bifurcation problems, and we will use the technique given here in Chapter 3. It is, however, easy to generalize the ideas to n-dimensional SDEs. Thus we introduce the problem

dx_k = a_k(x, t) dt + h_k(t) x_k dB_t;   k = 1, ..., n.   (2.101)

Note that we do not use the summation convention. We introduce the integrating factor (see (2.97a))

G_k(B_t, t) = exp[-∫_0^t h_k(s) dB_s + (1/2) ∫_0^t h_k²(s) ds],   (2.102)

and we obtain the differential dG_k(B_t, t) = (h_k² dt - h_k dB_t)G_k. Thus we have

d(G_k x_k) = a_k G_k dt.   (2.103)

Thus, we obtain the system of n deterministic differential equations

dY_k/dt = G_k a_k(x_1 = Y_1/G_1, ..., x_n = Y_n/G_n, t);   Y_k = G_k x_k.   (2.104)

Example: The Nonlinear Pendulum. We consider nonlinear pendulum oscillations in the presence of stochastic damping. The corresponding SDE is (compare the linear counterpart (2.59))

d²x/dt² + αξ_t dx/dt + sin x = 0.

Passing to a system of first order equations leads to

dx_1 = x_2 dt;   dx_2 = -αx_2 dB_t - sin x_1 dt.   (2.105)

First, we find that the fixed points (FPs) of the deterministic problem (α = 0), given by P_1 = (0, 0) (center) and P_{2,3} = (±π, 0) (saddle points), are also FPs of (2.105). To find the type of the FPs of the SDE we linearize (2.105) and obtain

dξ_1 = ξ_2 dt;
dξ_2 = -αξ_2 dB_t - cos(x_1^f) ξ_1 dt;   x_1^f = 0, ±π.   (2.106)


In the case of the deterministic center (x_1^f = 0) we rediscover the linear problem (2.59) with β = γ = 0. Its variance V_22 is already given by (2.82). To be complete we should also calculate the variances V_11 and V_12. An inspection of the Laplace transforms of the first and second integral equations (2.74) shows

L{V_11} = α² H(p) L{sin²t};   L{V_12} = (α²/2) H(p) L{sin 2t}.

These additional variances exhibit the same structure as (2.82). This means that for α²t ≤ O(1) the perturbations remain small and the FP (the deterministic center) is still an elliptic FP.

It is also clear that the deterministic saddles become even more unstable under stochastic influences. It is convenient to study the possibility of the occurrence of heteroclinic orbits. To consider this problem we first observe that (2.105) belongs to the class (2.101) of SDEs.

We obtain G_1 = 1; G_2 = G = exp(αB_t + α²t/2), and we use the new variables

y_1 = x_1;   y_2 = Gx_2.   (2.107)

This yields

ẏ_1 = y_2/G;   ẏ_2 = -G sin y_1.   (2.108)

These are the canonical equations of a one-degree-of-freedom non-autonomous system with the Hamilton function

H = p²/(2G) - G cos q;   q = y_1,   p = y_2.   (2.109)

In the deterministic limit (α = 0) we obtain G = 1 and (2.109) reduces to the deterministic Hamilton function H = p²/2 - cos q, where the deterministic paths are given by H = H(q_0, p_0) with q_0, p_0 as initial coordinates (see e.g. Hale and Kocak [2.6]).

We return now to the stochastic problem. There, it could be argued that we may return to the "old" coordinates

p = Gx_2;   q = x_1,   (2.110)


the substitution of which into (2.109) gives the transformed Hamiltonian (the Kamiltonian)

K = G(x_2²/2 - cos x_1).   (2.111)

However, it is clear that (2.108) are not the canonical equations corresponding to the Kamiltonian (2.111). The reason for this inconsistency is that (2.110) is not a canonical transform. To see this we write the old coordinates in the form x_1 = x_1(q, p) = q; x_2 = x_2(q, p) = p/G. A transform is canonical if its Poisson bracket has the value unity. Yet we obtain in our case

(∂x_1/∂q)(∂x_2/∂p) - (∂x_1/∂p)(∂x_2/∂q) = 1/G ≠ 1.

Hence, we return to the Hamiltonian (2.109) and continue with the investigation of its heteroclinic orbits. The latter are defined by H(q, p, t) = H(±π, 0, t) and this leads to

p = ±2G cos(q/2).   (2.112)

In the stochastic case we must in general evaluate the heteroclinic orbits numerically. However, it is interesting to note that we can also generalize the concept of a parametrization of the heteroclinic path, which in the case of the stochastic pendulum is given by

p = ±2G sech t;   q = 2 arctan(sinh t).   (2.113)
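A quick numerical check (assuming NumPy; not from the book) that the parametrization (2.113) satisfies (2.112): with q = 2 arctan(sinh t) one has cos(q/2) = sech t, so p = ±2G sech t equals ±2G cos(q/2) identically, and the factor G cancels.

```python
import numpy as np

t = np.linspace(-5.0, 5.0, 101)
q = 2.0 * np.arctan(np.sinh(t))   # the q of (2.113)
cos_half_q = np.cos(q / 2.0)
sech_t = 1.0 / np.cosh(t)         # cos(q/2) should equal sech t
```

The identity follows from cos(arctan(x)) = 1/√(1 + x²) with x = sinh t.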

The variables in Equation (2.113) comply with (2.112). To verify that p and q tend to the saddle points we note that q → π sign(t) for |t| → ∞, which gives the correct saddle coordinates. As to the momentum we obtain

p → ±4 exp(αB_t + α²t/2 - |t|)   for |t| → ∞.   (2.114)

Using (1.58a) we obtain the mean of the momentum

⟨p⟩ → ±4 exp(α²t - |t|)   for |t| → ∞.   (2.115)

To reach the saddle (π, 0) in the mean we must require |α| < 1. If we consider the variance of p we obtain

⟨p²⟩ → 16 exp(3α²t - 2|t|).


Fig. 2.5. The motion of the oscillator (2.105) in the phase space. The non-noisy curve is the corresponding deterministic limit cycle. Parameters: α = 0.3, (x_0, y_0) = (π/3, 1), h = 0.005.

The latter relation tells us that to reach the saddle (π, 0) we need to require α² < 2/3. Thus, we can conclude that the higher the order of the considered stochastic moment, the lower we have to choose the value of the intensity parameter α that allows the heteroclinic orbit to reach, for t → ∞, the right hand side saddle. Finally, we show in Figure 2.5, for one specific choice of initial conditions, a particular numerical realization of the solution of (2.105) in the phase space. We juxtapose this stochastic solution with the corresponding deterministic solution. We see that the stochastic solution remains close to the deterministic limit cycle only in the very first phase of its evolution from the initial conditions. For later times the destructive influence of the multiplicative noise becomes dominant and any kind of regularity of the stochastic solution disappears, no matter how weak the intensity of the noise.

2.6. Existence and Uniqueness of Solutions

The subject considered in this section is still under intensive research and we present here only a few available results for one-dimensional


SDEs. First, we consider a deterministic ODE

ẋ = a(x);   x, a ∈ R.

A sufficient condition for the existence and uniqueness of its solution is the Lipschitz condition |a(x) - a(y)| ≤ K|x - y|; K > 0, where K is the Lipschitz constant. There is, however, a serious deficiency, because the solutions may become unbounded after a small elapse of time. A simple demonstration is given by an inspection of the solutions of the equation ẋ = x². Here, we have |x² - y²| ≤ K|x - y|, where K is the maximum of |x + y| in the (x, y)-plane under consideration. The solution of this ODE is x(t) = x_0/(1 - x_0 t), which becomes unbounded after the elapse of the blow-up time t_b = 1/x_0.

To ensure the global existence of the solutions of an ODE (the existence for all times after the initial time) we need, in addition to the Lipschitz condition, the growth bound condition

|a(x)| ≤ L(1 + |x|);   L > 0;   ∀ t ≥ t_0,

where L is a constant. Now, we consider as an example ẋ = x^k, and we obtain from the growth bound condition |x|^k ≤ L(1 + |x|); we can satisfy this condition only for k ≤ 1.
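The blow-up of ẋ = x² can be illustrated numerically. A sketch (plain Python; parameter values are illustrative): the closed-form solution x(t) = x_0/(1 - x_0 t) is evaluated directly, and a crude Euler march shows the runaway growth as t approaches t_b = 1/x_0.

```python
x0, h = 1.0, 1e-4
tb = 1.0 / x0                      # blow-up time of x' = x^2

# Euler march: x grows without bound as t -> tb
x, t = x0, 0.0
while x < 1e6 and t < tb:
    x += h * x * x
    t += h

# closed form at t = 0.9 < tb
exact_at_09 = x0 / (1.0 - x0 * 0.9)   # = 10.0
```

The Euler iterate has grown by several orders of magnitude by the time t reaches t_b, mirroring the divergence of the exact solution.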

We turn to the general class of one-dimensional SDEs given by (2.20). Now, the coefficients a and b must satisfy the Lipschitz and growth bound conditions to guarantee existence, uniqueness and boundedness. This means that the drift and diffusion coefficients must satisfy the stochastic Lipschitz condition

|a(y, t) - a(z, t)| + |b(y, t) - b(z, t)| ≤ K|y - z|;   K > 0,   (y, z) ∈ R²,   (2.116)

and the stochastic growth bound condition

|a(y, t)| + |b(y, t)| ≤ L(1 + |y|);   L > 0;   ∀ t ≥ t_0,   y ∈ R.   (2.117)

Example 1.

dy = -(1/2) exp(-2y) dt + exp(-y) dB_t.

This SDE does not satisfy the growth bound condition for y < 0. Thus we have to expect that the solution will blow up. To verify this we use the reduction method. The SDE complies with


(2.34a) and we obtain the solution y = ln[B_t + exp(y_0)]. This solution blows up once the condition B_t = -exp(y_0) is met for the first time.

Example 2. As a counter-example where the SDE satisfies the growth bound condition, we reconsider EX 2.1(iii) and we obtain

(1/r) ∫_{y_0}^y du/cos u = (1/r) ln[tan(y/2 + π/4)] - (1/r) ln[tan(y_0/2 + π/4)] = B_t,

and this yields

y(t) = -π/2 + 2 arctan[z exp(rB_t)];   z = tan(y_0/2 + π/4).

These examples suggest that we should eliminate the drift coefficient. To achieve this we start from (2.20) (written for the variable x); we use the transformation y = g(x), and the application of Ito's formula (1.99.3) gives

dy = [a g'(x) + b²(x) g''(x)/2] dt + b(x) g'(x) dB_t.   (2.118)

Hence, we can eliminate the drift coefficient of the transformed SDE if we put g''(x)/g'(x) = -2a(x)/b²(x), and the integration of this equation leads to the transformation

g(x) = ∫_c^x exp{-2 ∫_c^u [a(v)/b²(v)] dv} du,   (2.119)

where the parameter c is appropriately selected. Thus we obtain from (2.118) the transformed SDE dy = b(x) g'(x) dB_t; x = g^{-1}(y), where we have replaced x by the inverse of the function g. Hence, we obtain

y = y_0 + ∫_0^t g'(x_s) b(x_s) dB_s;   x_s = x(s).   (2.120)

Karatzas and Shreve [2.7] showed that although the original process x may explode in time, the process y will not explode.

Example (Bessel process). We consider the SDE

dx = [(a - 1)/(2x)] dt + dB_t;   a = 2, 3, ....   (2.121)


Thus (2.119) yields with c = 1

g(x) = ln x for a = 2   and   g(x) = (x^{2-a} - 1)/(2 - a) for a ≥ 3.

Hence, we obtain for a = 2: x = exp(y); g' = exp(-y). Thus we get the solution

y = y_0 + ∫_0^t exp(-y_s) dB_s;   y_s = y(s).
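The evaluation of (2.119) for the Bessel drift can be checked numerically. A sketch assuming NumPy (the helper name g_numeric is ours): with a(x) = (a-1)/(2x), b = 1 and c = 1, the inner integral is -(a-1) ln u, so g(x) = ∫_1^x u^{1-a} du, which should reduce to ln x for a = 2 and (x^{2-a} - 1)/(2-a) otherwise.

```python
import numpy as np

def g_numeric(x, a, m=200001):
    """Trapezoidal evaluation of (2.119) for the Bessel process, c = 1."""
    u = np.linspace(1.0, x, m)
    # exp(-2 int_1^u (a-1)/(2v) dv) = exp(-(a-1) ln u) = u^(1-a)
    f = np.exp(-(a - 1.0) * np.log(u))
    h = u[1] - u[0]
    return h * (f.sum() - 0.5 * (f[0] + f[-1]))

x = 3.0
g2 = g_numeric(x, 2)   # expect ln 3
g3 = g_numeric(x, 3)   # expect (3^(-1) - 1)/(-1) = 2/3
```

Both quadratures reproduce the closed forms quoted above.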

Exercises

EX 2.1. We consider the SDE (2.20). Find which of the following cases is amenable to the reduction technique of Section 2.1.4

(i) a = ry; b = u,   (ii) a = r; b = uy^γ; γ = 1/2, 1, 2,

(iii) a = -(r²/4) sin(2y); b = r cos(y);   r, u = const.

EX 2.2. Verify the solutions (2.4) and (2.17) with the use of the Ito formula (1.98) and (2.37).

EX 2.3. Verify that the SDE

dx = -β²x(1 - x²) dt + β(1 - x²) dB_t

has the solution

x(t) = [a exp(z) + a - 2]/[a exp(z) + 2 - a];   a = 1 + x_0;   z = 2βB_t.

EX 2.4. Verify the solution of the stochastic pendulum (2.65). Use the Ito formula (1.122) and a generalization of (2.37).

EX 2.5. Find a parameter s that allows a simple determination of the zeros of (2.79) and thus an inversion of (2.79).

EX 2.6. Determine the components V_11, V_12 of the variance matrix for the pendulum with stochastic damping. Use (2.74) and (2.82).

EX 2.7. Solve the problem of the pendulum with a stochastic frequency. Use (2.68) with the parameters α = γ = 0; β ≠ 0 to calculate the variance V_11.


We obtain from (2.68)

x_1 = x_0 cos t + y_0 sin t - β ∫_0^t sin(t-s) x_1(s) dB_s,

x_2 = -x_0 sin t + y_0 cos t - β ∫_0^t cos(t-s) x_1(s) dB_s.

We introduce here (with x_0 = 0) the variance V_11(t) in terms of

V_11(t) = K(t) - y_0² sin²t = β² ∫_0^t sin²(t-u) K(u) du;   K(t) = ⟨x_1²⟩.

The determination of V_11 is performed in analogy to the calculations in Section 2.3.2 and we obtain

H(p) = y_0² { (1/2)/(p - λ/2) - [(p + λ/4)/2 + 3λ/8]/[(p + λ/4)² + 4] };   λ = β².

Finally, we obtain the variance in the form

V_11(t) = (y_0²/2){exp(β²t/2) - exp(-β²t/4)[cos 2t + (3β²/8) sin 2t] - 2 sin²t}.

Note that the variance has the same structure as (2.82) and it is of order unity for β²t = O(1).

EX 2.8. Calculate the mean and the covariance function of the stochastic vectorial integral

K_j(t) = ∫_0^t b_jr(s) dB_s^r;   j = 1, ..., n;   r = 1, ..., m.

EX 2.9. Take advantage of the multivariate Ito formula (1.127) to prove (2.91).

EX 2.10. Show that the ODE ẏ = √y satisfies the growth bound condition, but not the Lipschitz condition. Solve this ODE and discuss the singularities of its solution in connection with the mentioned conditions.


EX 2.11. Solve the SDE

dx = y dt + a dB_t^1,
dy = ±x dt + b dB_t^2;   a, b = const,

where B_t^k; k = 1, 2 are independent WPs. The case with the negative sign of the drift coefficient is a generalization of the linearized pendulum.

EX 2.12. Solve the SDE and find the positions where the solution blows up

dx = x^k dt + bx dB_t;   b, k = const;   k > 1;   x(0) > 0.

EX 2.13. Solve the generalized population growth model SDE

dx = rx dt + x Σ_{k=1}^K b_k dB_t^k;   r, b_k = const.

EX 2.14. Solve the damped oscillation SDE

ẍ + aẋ + ω²x = b ξ_t(t);   a, ω = const.

Hint: Write the SDE in the form of scalar equations and use the matrix

A = (  0    1 )
    ( -ω²  -a ),

exp(At) = [exp(-ut)/v][I v cos(vt) + (Iu + A) sin(vt)];   u = a/2;   v = √(ω² - (a/2)²),

where I is the 2D unit matrix. See also the undamped case (2.59) with (2.61).


CHAPTER 3

THE FOKKER-PLANCK EQUATION

In Section 3.1 we derive the master equation, and we use this equation in Section 3.2, where we focus on the derivation of the Fokker-Planck equation, a PDE governing the distribution function of the solution to a SDE. In the following sections we treat important applications of this equation in the field of bifurcations and limit cycles of SDEs.

3.1. The Master Equation

We consider a Markovian, stationary process; our goal is to reduce the Chapman-Kolmogorov integral equation (1.52) to a more useful equation called the master equation. We follow here in part the ideas of Van Kampen [3.1] and first simplify the notation. The transition probability depends only on the time difference, and we can write

p_{1|1}(y_2, t_2 | y_1, t_1) = T(y_2|y_1; τ);   τ = t_2 - t_1.   (3.1)

An example was already given for the Ornstein-Uhlenbeck process (1.56.2). There we also obtained the generally valid initial condition

T(y_2|y_1; 0) = δ(y_2 - y_1).   (3.2)

Since the transition from (y_1, t_1) to (y_2, t_2) occurs almost certainly for some value of y_2, we have the normalization condition

∫ T(y_2|y_1; τ) dy_2 = 1.   (3.3)

We define now u = t_2 - t_1, v = t_3 - t_2; this means that u + v = t_3 - t_1. Thus, we obtain from the Chapman-Kolmogorov equation (1.52)


the relation

T(y_3|y_1; u + v) = ∫ T(y_2|y_1; u) T(y_3|y_2; v) dy_2.   (3.4)

Next we try to find a Taylor expansion of T(y_2|y_1; u) with respect to the variable u. To achieve this we put

T(y_2|y_1; u) = δ(y_2 - y_1) + k(y_2, y_1)u + O(u²).   (3.5)

The application of (3.3) yields ∫ k(y_2, y_1) dy_2 = 0. To comply with this relation we use the form

k(y_2, y_1) = -a(y_1)δ(y_2 - y_1) + W(y_2|y_1);   W(y_2|y_1) ≥ 0,
a(y_1) = ∫ W(y_2|y_1) dy_2,   (3.6)

where W(y_2|y_1) is the transition probability per unit time (TPT) for the passage from y_1 to y_2. Thus, we obtain the first two terms of the Taylor series in (3.5) and we can write

T(y_2|y_1; u) = [1 - a(y_1)u]δ(y_2 - y_1) + uW(y_2|y_1) + O(u²).   (3.7)

We use now (3.7) for the transition probability T(y_3|y_2; v) and obtain

T(y_3|y_2; v) = [1 - a(y_2)v]δ(y_3 - y_2) + vW(y_3|y_2),

and substitute the last line into (3.4). Thus, we obtain

T(y_3|y_1; u + v) = [1 - a(y_3)v]T(y_3|y_1; u) + v ∫ W(y_3|y_2) T(y_2|y_1; u) dy_2.

We rearrange this relation and use the limit v → 0. This leads to

∂T(y_3|y_1; u)/∂u = -a(y_3)T(y_3|y_1; u) + ∫ W(y_3|y_2) T(y_2|y_1; u) dy_2.   (3.8)

To obtain the desired master equation we see from (3.6) that a(y_3) = ∫ W(y_2|y_3) dy_2. Hence we obtain from (3.8)

∂T(y_3|y_1; u)/∂u = ∫ W(y_3|y_2) T(y_2|y_1; u) dy_2 - T(y_3|y_1; u) ∫ W(y_2|y_3) dy_2.   (3.9)


The integral equation (3.9) contains as kernel the TPT W(y_3|y_2). We note first that all transition probabilities in (3.9) have, in contrast to (3.4), exclusively y_1 as initial point. Hence, we simplify the nomenclature by using

Z(y, t) = T(y|y_1; t);   Z(y, t = 0) = δ(y - y_1),   (3.10)

and we note that Z(y, t) does not mean a single-time distribution. Equations (3.9) and (3.10) lead now to

∂Z(y, t)/∂t = ∫ dy' [W(y|y')Z(y', t) - W(y'|y)Z(y, t)].   (3.11)

The integral equation (3.11) is now the master equation (or M-equation). Note that for the derivation of this equation it was sufficient to use the two-term expansion (3.7).

It is instructive to specify (3.11) for the case of a discrete set of variables characterized by an integer number. Here we consider the discrete M-equation

dZ_n(t)/dt = Σ_m [W_nm Z_m(t) - W_mn Z_n(t)];   W_nm ≥ 0;   W_nn = 0.   (3.12)

This equation shows the structure of a gain-loss relation for the probabilities of the individual states Z_n(t). The first term on the right hand side of (3.12) represents the gain of the state n from all the other states m, and the second term gives the loss from n to m.

Example (Random walk). We consider the random walk on the one-dimensional x-axis, where jumps between adjacent sites with equal probability are permitted. The quantity W_nm is the transition probability between the sites n and m. Since we consider only jumps between adjacent sites, the state n (with the position x_n) gains with equal unit-probability from the states n-1 (position x_{n-1}) and n+1 (position x_{n+1}) and loses with the same probability to the same states. Hence, we have W_nm = δ_{n,m-1} + δ_{n,m+1} and the M-equation reads

Ż_n(t) = Z_{n-1}(t) + Z_{n+1}(t) - 2Z_n(t).   (3.13)

We give more details about the solution to (3.13) in EX 3.1.
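The M-equation (3.13) can be integrated directly on a finite lattice. A minimal sketch assuming NumPy (lattice size and step are illustrative): starting from Z_n(0) = δ_{n0}, probability is conserved and the variance grows as ⟨n²⟩ = 2t.

```python
import numpy as np

M, dt, steps = 60, 1e-3, 1000           # lattice n = -M..M, final time t = 1
Z = np.zeros(2*M + 1)
Z[M] = 1.0                              # delta at n = 0

for _ in range(steps):
    gain = np.roll(Z, 1) + np.roll(Z, -1)   # Z_{n-1} + Z_{n+1}
    gain[0] -= Z[-1]                        # remove wrap-around at the
    gain[-1] -= Z[0]                        # lattice ends
    Z = Z + dt * (gain - 2.0 * Z)           # Euler step of (3.13)

n = np.arange(-M, M + 1)
total = Z.sum()                # should stay 1 (gain-loss balance)
var = (Z * n**2).sum()         # should equal 2t = 2
```

The gain-loss structure of (3.12) is visible directly: each step moves probability between neighboring sites without creating or destroying it.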


Generally we can state that the physics of the process under consideration determines the structure of the transition probabilities per unit time W(y|y'). The substitution of the latter into the M-equation leads eventually to the determination of the transition probability Z(y, t). As an application of the continuous M-equation (3.11) we consider now the jump moments, defined by

a_k(y) = ∫ (y' - y)^k W(y'|y) dy';   k = 0, 1, ....   (3.14)

Note that the coefficient a(y) in (3.6) to (3.8) satisfies a(y) = a_0(y). We analyze now the average of the variable y that can be reached by the transition probability. Hence, we put

ȳ(t) = ⟨y⟩ = ∫ y Z(y, t) dy,   (3.15)

and this is the conditional average of the variable y that starts with y = y_1 at t = 0 with the transition probability δ(y - y_1). We calculate the time derivative of (3.15),

dȳ/dt = ∫ y [∂Z(y, t)/∂t] dy,

and this gives with (3.11)

dȳ/dt = ∫ dy' ∫ dy y [W(y|y')Z(y', t) - W(y'|y)Z(y, t)].

We can split the right hand side of the latter line into two double integrals. We interchange in the first of those integrals the variables y and y'. This yields

dȳ/dt = ∫ dy' ∫ dy (y' - y) W(y'|y) Z(y, t).   (3.16)

The substitution of (3.14) into (3.16) leads to

dȳ(t)/dt = ∫ dy a_1(y) Z(y, t) = ⟨a_1(y)⟩,   (3.17)

and we recall that the right hand side of (3.17) is a conditional average. We use now the expansion

a_1(y) = a_1(⟨y⟩) + (y - ⟨y⟩) a_1'(⟨y⟩) + (1/2)(y - ⟨y⟩)² a_1''(⟨y⟩) + ···;   ' = d/dy.   (3.18)


In the special case where a_1(y) is a linear function of y we have a_1(y) = by; b = const, and (3.17) leads to

dȳ/dt = b⟨y⟩ = bȳ(t).   (3.19)

This is a closed equation, in the sense that only the quantity ⟨y⟩ (and not higher order moments) is involved in Equation (3.19). However, if a_1(y) is not a linear function, then we obtain from (3.17) and (3.18)

(d/dt)⟨y⟩ = a_1(⟨y⟩) + (1/2)(⟨y²⟩ - ⟨y⟩²) a_1''(⟨y⟩) + ···.   (3.20)

Equation (3.20) governs the quantity ⟨y⟩; there appear, along with ⟨y⟩, higher order moments such as ⟨y²⟩. This equation is, hence, not a closed relation. We need additional equations to determine the higher order moments. This dilemma is called the closure problem.

In EX 3.2 we calculate an evolution-equation for the variance. We will reconsider the jump moments in the next section when we derive the Fokker-Planck equation.

3.2. The Derivation of the Fokker-Planck Equation

This equation governs in its simplest version the probability function of the solution to a one-dimensional SDE. There are several alternatives to derive this equation. Planck himself used a physically motivated approach and introduced short range atomic forces to approximate terms in the M-equation.

Since we are not primarily interested in atomic physics, we use here a more formal approach. We start from (3.4) and we apply the nomenclature

y_3 = z;\quad y_2 = w;\quad y_1 = x;\quad u = t;\quad v = \Delta t,\quad 0 < \Delta t \ll 1, (3.21)

and we note that the variable w in (3.4) is a dummy integration variable, whereas z and x are variables that belong to more and less advanced values of the time and are called forward and backward


variables, respectively. Thus we write (3.4) in the form

T(z|x;t+\Delta t) = \int dw\,T(w|x;t)\,T(z|w;\Delta t). (3.22)

We now multiply (3.22) by an arbitrary function \psi(z,t) that vanishes sufficiently rapidly at z = \pm\infty, and we integrate the resulting equation from z = -\infty to z = \infty. Thus, we obtain in the first place

LHS = RHS. (3.23)

We approximate the left hand side of (3.22) by a Taylor series. This yields

\mathrm{LHS} = \int dz\,\psi(z,t)\,T(z|x;t+\Delta t) = \int dz\,\psi(z,t)\,T(z|x;t) + \Delta t\int dz\,\psi(z,t)\,\frac{\partial}{\partial t}T(z|x;t) + O((\Delta t)^2). (3.24)

The right hand side of (3.22) can be written in the form

\mathrm{RHS} = \int dw\,T(w|x;t)\,J(w,t);\qquad J(w,t) = \int dz\,T(z|w;\Delta t)\,\psi(z,t). (3.25)

We approximate now also ip(z,t) by a Taylor series and this leads to

J(w,t) = \psi(w,t)\int dz\,T(z|w;\Delta t) + \Delta t\Big[A(w,t)\psi'(w,t) + \frac{1}{2}B(w,t)\psi''(w,t) + \cdots\Big];\quad \psi^{(k)}(\lambda,t) = \partial^k\psi(\lambda,t)/\partial\lambda^k. (3.26)

We use in (3.26) the abbreviations

A{w,t)At= dzT(z\w;At)(z-w);

B(w,t)At= dzT(z\w;At)(z-w)2.

Note that the normalization condition of T(z|u;; At) gives the first integral on the left hand side of (3.26) the value unity.

(3.27)


The substitution of (3.24) to (3.27) into (3.23) leads now to

\int dz\,\psi(z,t)\,T(z|x;t) + \Delta t\int dz\,\psi(z,t)\frac{\partial}{\partial t}T(z|x;t) + O((\Delta t)^2)

= \int dw\,\psi(w,t)\,T(w|x;t) + \Delta t\int dw\,\Big[A(w,t)\psi'(w,t) + \frac{1}{2}B(w,t)\psi''(w,t)\Big]T(w|x;t).

The first terms on both sides cancel and we obtain in the limit \Delta t \to 0

\int dz\,\Big\{\psi(z,t)\frac{\partial}{\partial t}T(z|x;t) - \Big[A(z,t)\psi'(z,t) + \frac{1}{2}B(z,t)\psi''(z,t)\Big]T(z|x;t)\Big\} = 0, (3.28)

where we replaced the dummy integration variable w by z. As a further manipulation we integrate by parts and we put

A(z,t)\psi'(z,t)T(z|x;t) = [A(z,t)\psi(z,t)T(z|x;t)]' - \psi(z,t)[A(z,t)T(z|x;t)]',

and another relation for the term proportional to B(z, t) (see EX 3.3). Thus, we obtain

\int dz\,\psi(z,t)\Big\{\frac{\partial}{\partial t}T(z|x;t) + [A(z,t)T(z|x;t)]' - \frac{1}{2}[B(z,t)T(z|x;t)]''\Big\} = 0.

The expression in braces in the last line must vanish, since \psi(z,t) is an arbitrary function. This yields

\frac{\partial}{\partial t}T(z|x;t) = -[A(z,t)T(z|x;t)]' + \frac{1}{2}[B(z,t)T(z|x;t)]''. (3.29)

Equation (3.29) is the Fokker-Planck equation (FPE). Since the spatial derivatives correspond to the forward spatial variable z, (3.29) is sometimes also called the "forward Chapman-Kolmogorov" equation. In EX 3.4 we derive from (3.21) also a "backward Chapman-Kolmogorov" equation with spatial derivatives with respect to the backward variable x. It is also more convenient to use


the nomenclature (3.10) and we obtain in this way the FPE in its usual form

\frac{\partial P(y,t)}{\partial t} = -\frac{\partial}{\partial y}[A(y,t)P(y,t)] + \frac{1}{2}\frac{\partial^2}{\partial y^2}[B(y,t)P(y,t)];\quad P(y,t) = T(y|x;t). (3.30)

To apply the FPE we introduce boundary conditions, an initial condition and the normalization [(\alpha, \beta) are the boundaries of the interval]

\frac{\partial^k P(y,t)}{\partial y^k}\Big|_{y=\lambda} = 0;\quad k = 0, 1;\quad \lambda = \alpha\ \text{and}\ \lambda = \beta, (3.31a)

P(y,0) = \delta(y), (3.31b)

and

\int_\alpha^\beta P(y,t)\,dy = 1. (3.31c)

There are several generalizations of the FPE. First we can include all terms of the Taylor expansion (3.26). This yields the Kramers-Moyal equation

\frac{\partial P(y,t)}{\partial t} = \sum_{n=1}^{\infty}\frac{(-1)^n}{n!}\frac{\partial^n}{\partial y^n}[a_n(y)P(y,t)]. (3.32)

Equation (3.32) is more general than the Fokker-Planck equation, yet it is identical with the M-equation and it is not easier to use. An example of the use of (3.32) is given in EX 3.5.

3.3. The Relation Between the Fokker-Planck Equation and Ordinary SDE's

To establish a relation between the Fokker-Planck equation and the solutions (or rather the moments of the solutions) of a first order SDE we need to calculate the coefficients A(z,t) and B(z,t) in (3.29) or (3.30). According to Salinas [3.2], this task can be performed for rather simple examples of SDE's by a direct inspection of (3.27). However, it is more convenient to compare the moments calculated from the FPE with the moments derived from solving the SDE.


To perform this task we recall that the variable t appearing in the function P(y,t) is the time needed to reach the position y from a starting point at t = 0 (where P satisfies (3.31b)). We multiply (3.30) by

(\Delta y)^k = [y(t) - y_0]^k,\quad k = 1, 2, \dots;\quad y_0 = \text{const.}, (3.33a)

and we obtain

\frac{\partial[(\Delta y)^kP]}{\partial t} = (\Delta y)^k\Big[-\frac{\partial(AP)}{\partial y} + \frac{1}{2}\frac{\partial^2(BP)}{\partial y^2}\Big], (3.33b)

where we are able to put (\Delta y)^k inside the partial time derivative since y depends only implicitly on the time.

Next we rearrange the derivatives and we calculate the first three resulting moments, for k = 1, 2, 3 (higher orders are considered in EX 3.6):

\frac{\partial[(\Delta y)P]}{\partial t} = -\Big\{\frac{\partial}{\partial y}[(\Delta y)AP] - AP\Big\} + \frac{1}{2}\Big\{\frac{\partial^2}{\partial y^2}[(\Delta y)BP] - 2\frac{\partial}{\partial y}(BP)\Big\}, (3.33c)

\frac{\partial[(\Delta y)^2P]}{\partial t} = -\Big\{\frac{\partial}{\partial y}[(\Delta y)^2AP] - 2(\Delta y)AP\Big\} + \frac{1}{2}\Big\{\frac{\partial^2}{\partial y^2}[(\Delta y)^2BP] - 4\frac{\partial}{\partial y}[(\Delta y)BP] + 2BP\Big\}, (3.33d)

and

\frac{\partial[(\Delta y)^3P]}{\partial t} = -\Big\{\frac{\partial}{\partial y}[(\Delta y)^3AP] - 3(\Delta y)^2AP\Big\} + \frac{1}{2}\Big\{\frac{\partial^2}{\partial y^2}[(\Delta y)^3BP] - 6\frac{\partial}{\partial y}[(\Delta y)^2BP] + 6(\Delta y)BP\Big\}. (3.33e)

We integrate (3.33c) through (3.33e) over y and we apply the boundary condition (3.31a) for (\alpha, \beta) = (-\infty, \infty). This means that


all terms arising from

\frac{\partial^s}{\partial y^s}\big[(\Delta y)^k\,\mathrm{K}_m(y,t)\big];\quad s = 1, 2;\quad k = 1, 2, 3, \dots;\quad \mathrm{K}_1 = A(y,t);\ \mathrm{K}_2 = B(y,t),

vanish. The remaining terms lead to

\frac{d}{dt}\langle\Delta y\rangle = \langle A(y,t)\rangle,

\frac{d}{dt}\langle(\Delta y)^2\rangle = 2\langle(\Delta y)A(y,t)\rangle + \langle B(y,t)\rangle, (3.34)

\frac{d}{dt}\langle(\Delta y)^3\rangle = 3\langle(\Delta y)^2A(y,t)\rangle + 3\langle(\Delta y)B(y,t)\rangle,

with

\int(\Delta y)^\lambda\,\mathrm{K}_m(y,t)P(y,t)\,dy = \langle(\Delta y)^\lambda\,\mathrm{K}_m(y,t)\rangle;\quad \lambda = 1, 2, \dots. (3.35)

Now we consider the one-dimensional SDE

dy = a(y,t)dt + b(y,t)dBt; y(t = 0) = y0, (3.36a)

where y_0 stands for a deterministic initial condition. The formal solution is

\Delta y = y - y_0 = \int_0^t a(y(s),s)\,ds + \int_0^t b(y(s),s)\,dB_s. (3.36b)

We know the moments of this process

\langle\Delta y\rangle = \mathrm{J}(t) = \int_0^t\langle a(y(s),s)\rangle\,ds;\qquad \langle(\Delta y)^2\rangle = \mathrm{J}^2(t) + \int_0^t\langle b^2(y(s),s)\rangle\,ds. (3.37)

We give in EX 3.7 the reason that the average of the product between the two integrals in (3.36b) vanishes under the influence of the


independent Brownian movements. Hence, we obtain

\frac{d\langle\Delta y\rangle}{dt} = \langle a(y,t)\rangle,

\frac{d\langle(\Delta y)^2\rangle}{dt} = 2\langle(\Delta y)a(y,t)\rangle + \langle b^2(y,t)\rangle, (3.38)

\frac{d\langle(\Delta y)^3\rangle}{dt} = 3\langle(\Delta y)^2a(y,t)\rangle + 3\langle(\Delta y)b^2(y,t)\rangle.

In the derivation of the third line of (3.38) we used again the independence of the Brownian movements.

A comparison of (3.38) with (3.34) yields for the two lowest orders (the third order is treated later)

\langle a(y,t)\rangle = \langle A(y,t)\rangle;\quad \langle(\Delta y)a(y,t)\rangle = \langle(\Delta y)A(y,t)\rangle;\quad \langle b^2(y,t)\rangle = \langle B(y,t)\rangle. (3.39)

The simplest way to solve (3.39) is given by

A(y,t) = a(y,t);\qquad B(y,t) = b^2(y,t). (3.40)

Note also that with (3.40) we can automatically satisfy the relation for the third order moments that is given by the third equation of (3.38).

Note also that the one-dimensional SDE (3.36a) has only the two scalar coefficient functions a(y,t) and b(y,t). Moments of higher order (n > 3) can therefore not lead to additional independent relations for A(y,t) and B(y,t).

It is now time for an example.

Example (The Ornstein-Uhlenbeck problem) We know from (2.19a) that the transition probability has in the nomenclature of the FPE (3.30) the form

P(y,t) = p_{1|1}(y;t|y_0;0) = [2\pi b(1-\lambda^2)]^{-1/2}\exp\{-(y - \lambda y_0)^2/[2b(1-\lambda^2)]\}, (2.19a')

with \lambda = \exp(-t). This transition probability corresponds to the Ornstein-Uhlenbeck equation with m = 0,

dy = -y\,dt + \sqrt{2b}\,dB_t;\quad b = \text{const.}, (2.14')


with the solution

y(t) = \lambda y_0 + \sqrt{2b}\int_0^t \exp(s - t)\,dB_s. (2.16')

We calculate with (2.19a') the moments and this yields

T_k = \langle(\Delta y)^k\rangle = \int dy\,(\Delta y)^k\,p_{1|1}(y;t|y_0;0) = [2\pi b(1-\lambda^2)]^{-1/2}\int dy\,(y - y_0)^k\exp(-z^2);\quad y - \lambda y_0 = \sqrt{2b(1-\lambda^2)}\,z.

Thus, we obtain

T_k = \frac{1}{\sqrt{\pi}}\int dz\,\big[(\lambda - 1)y_0 + \sqrt{2b(1-\lambda^2)}\,z\big]^k\exp(-z^2).

The calculation of the moments is performed separately and this yields

T_0 = 1\ \text{(normalization)};\qquad T_1 = (\lambda - 1)y_0;

T_2 = (\lambda - 1)^2y_0^2 + b(1 - \lambda^2);

T_3 = (\lambda - 1)^3y_0^3 + 3by_0(1 - \lambda^2)(\lambda - 1).

The time derivatives of these moments coincide with the ones calculated directly from the SDE (2.14')

\frac{d}{dt}T_1 = \langle a(y,t)\rangle = -\langle y\rangle = -\lambda y_0,

\frac{d}{dt}T_2 = 2\langle(y - y_0)a(y,t)\rangle + \langle b^2(y,t)\rangle = -2\int y(y - y_0)P(y,t)\,dy + 2b\int P(y,t)\,dy = 2b\lambda^2 - 2\lambda(\lambda - 1)y_0^2.

We treat the third order moment problem in EX 3.8.
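The agreement between the FPE moments and the SDE moments can also be checked by direct simulation. The following sketch (all parameter values are assumptions for illustration only) integrates (2.14') with the Euler-Maruyama scheme and compares the sample mean and variance at t = 1 with the values implied by T_1 and T_2:

```python
import numpy as np

# Euler-Maruyama for dy = -y dt + sqrt(2b) dB_t (assumed values of b, y0, t_end)
rng = np.random.default_rng(1)
b, y0, t_end = 0.5, 1.0, 1.0
n_steps, n_paths = 2000, 40000
dt = t_end / n_steps

y = np.full(n_paths, y0)
for _ in range(n_steps):
    y += -y*dt + np.sqrt(2.0*b)*rng.normal(0.0, np.sqrt(dt), n_paths)

lam = np.exp(-t_end)            # lambda = exp(-t)
mean_exact = lam*y0             # <y> = y0 + T1 = lambda*y0
var_exact = b*(1.0 - lam**2)    # T2 - T1^2 = b(1 - lambda^2)
print(y.mean(), y.var())
```

With 40000 sample paths the statistical error of both estimates is well below one percent of the exact values.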

A further generalization of (3.30) is the Fokker-Planck equation for a multivariate SDE [see (1.123) in the presence of just one


Brownian motion]

dy_m = a_m(\mathbf{y},t)\,dt + b_m(\mathbf{y},t)\,dB_t;\quad \mathbf{y} = (y_1, \dots, y_n);\quad m = 1, \dots, n. (3.41a)

The corresponding multivariate Fokker-Planck equation has the form

\frac{\partial P(\mathbf{y},t)}{\partial t} = -\frac{\partial}{\partial y_k}[a_k(\mathbf{y},t)P] + \frac{1}{2}\frac{\partial^2}{\partial y_k\,\partial y_r}[b_k(\mathbf{y},t)b_r(\mathbf{y},t)P], (3.41b)

where summation over the repeated indices k and r is implied.

The proof of (3.41b) can be found in the book of Risken [2.1].

Example We studied in Section 2.1.1 the nonlinear population growth SDE. Its solution is given by (2.4). Solving the logarithm of this solution for the Brownian motion leads to

B_t = [Z + (u^2/2 - r)t]/u;\quad Z = \ln(N/N_0). (3.42a)

We infer from (1.55) that B_t is N(0,t) distributed and consequently Z is N(\beta, \Delta) distributed, with \beta = (r - u^2/2)t and \Delta = u^2t. We will verify this result with an explicit solution of the FPE. A differentiation of (3.42a) yields a linear first order SDE with constant coefficients

dZ = (r - u^2/2)dt + u\,dB_t.

The corresponding FPE takes the form

P_t = (u^2/2 - r)P_Z + (u^2/2)P_{ZZ}. (3.42b)

Motivated by the solution of the FPE (1.55), we try a similarity form to solve (3.42b). Thus, we use

P(Z,t) = t^{-1/2}A(s);\quad s = (Z + \alpha t)^2/t.

We substitute the latter line into (3.42b) and we must comply with conditions in the orders O(t^{-(1+n/2)}), n = 1, 2. This leads to

\alpha = u^2/2 - r;\qquad A(s) = \exp[-s/(2u^2)];

P(Z,t) = (2\pi u^2t)^{-1/2}\exp[-(Z + \alpha t)^2/(2u^2t)].
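As a sanity check, one can verify numerically that this similarity solution satisfies (3.42b); the following sketch uses central finite differences at an arbitrarily chosen point (the values of u, r, Z0 and t0 are assumptions):

```python
import math

u, r = 0.7, 0.3                 # assumed illustrative parameters
a = u**2/2 - r                  # alpha in the similarity solution

def P(Z, t):
    # P(Z,t) = (2*pi*u^2*t)^(-1/2) * exp(-(Z + a*t)^2 / (2*u^2*t))
    return math.exp(-(Z + a*t)**2/(2*u**2*t)) / math.sqrt(2*math.pi*u**2*t)

Z0, t0, h = 0.4, 1.3, 1e-5
Pt  = (P(Z0, t0 + h) - P(Z0, t0 - h)) / (2*h)
Pz  = (P(Z0 + h, t0) - P(Z0 - h, t0)) / (2*h)
Pzz = (P(Z0 + h, t0) - 2*P(Z0, t0) + P(Z0 - h, t0)) / h**2

# residual of P_t = (u^2/2 - r) P_Z + (u^2/2) P_ZZ
residual = Pt - (u**2/2 - r)*Pz - (u**2/2)*Pzz
print(residual)
```

The residual is zero up to finite-difference truncation and round-off error.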


We give in the Appendix A an application of the FPE that concerns an oscillator that has a limit cycle without noise. It serves also to introduce the WKB-method to approximate solutions of the FPE in the limit of small additive noise.

3.4. Solutions to the Fokker-Planck Equation

We emphasize the connection of the FPE (3.30) to the SDE (3.36a) and we write the one-dimensional FPE from now on in the form

\frac{\partial P}{\partial t} = -\frac{\partial}{\partial y}[a(y,t)P] + \frac{1}{2}\frac{\partial^2}{\partial y^2}[b^2(y,t)P];\quad P = P(y,t), (3.43a)

where a and b are the drift and diffusion coefficients of the SDE (3.36a).

Let us focus now on the calculation of stationary solutions of (3.43a). A stationary solution P_s(y) exists only if the coefficients are time-independent: a = a(y), b = b(y). We use the boundary conditions for \alpha = -\infty, \beta = \infty and we obtain a first integral in the form d(b^2P_s)/dy = 2aP_s. A further integration yields the stationary solution

P_s(y) = \frac{C}{b^2(y)}\exp\Big[2\int_0^y\frac{a(u)}{b^2(u)}\,du\Big]. (3.43b)

The constant C in (3.43b) serves to normalize the distribution.

Example We consider the Ornstein-Uhlenbeck problem (2.14), for which (3.43a) reduces to

\frac{\partial P}{\partial t} = \frac{\partial^2P}{\partial y^2} - \frac{\partial[(m - y)P]}{\partial y};\quad b = \sqrt{2}.

The stationary solution is governed by dP/dy - (m - y)P = 0 and we obtain

P = C\exp(my - y^2/2);\quad C = \text{const.}

We get the normalization from (3.31c) and this yields C = \exp(-m^2/2)/\sqrt{2\pi} (see also (1.56.1) for m = 0).
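The stationary solution of this example is simply the N(m, 1) density, so the constant C and the mean can be checked by quadrature; the value of m below is an assumption for illustration:

```python
import math

m = 0.8                                         # assumed drift parameter
C = math.exp(-m**2/2) / math.sqrt(2*math.pi)    # claimed normalization constant

# trapezoidal quadrature of Ps(y) = C exp(m y - y^2/2) on a wide interval
N, lo, hi = 100000, -12.0, 12.0
h = (hi - lo)/N
Ps = [C*math.exp(m*(lo + k*h) - (lo + k*h)**2/2) for k in range(N + 1)]
norm = sum(0.5*h*(Ps[k] + Ps[k+1]) for k in range(N))
mean = sum(0.5*h*((lo + k*h)*Ps[k] + (lo + (k+1)*h)*Ps[k+1]) for k in range(N))
print(norm, mean)
```

The total probability comes out as 1 and the mean as m, consistent with a Gaussian centered at m with unit variance.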


We conclude now this section with the derivation of nonstationary solutions of autonomous SDE's with

a = a(y), b = b(y). (3.44)

We solve the Fokker-Planck equation with the use of a separation of variables. Thus we put (the function A(y) should not be confused with the coefficient A(z,t) in (3.27) through (3.30))

P(y,t) = \exp(-\lambda t)A(y). (3.45)

The substitution of (3.44) and (3.45) into (3.43a) gives

[b^2A]'' - 2[aA]' + 2\lambda A = 0;\quad {}' = d/dy;\quad A(\pm\infty) = 0. (3.46)

The ODE (3.46) and the BC's are homogeneous, hence (3.46) constitutes an eigenvalue problem. We will satisfy the initial condition (3.31b) later when we replace (3.45) by an expansion of eigenfunctions and apply orthogonality relations. The rest of the algebra depends on the particular problem. We will consider, however, only problems that satisfy the conditions of the Sturm-Liouville problem (see e.g. Bender and Orszag [3.3]). The latter boundary value problem has the form

[p(y)\,l_n'(y)]' + \mu q(y)\,l_n(y) = 0;\quad l_n(y_1) = l_n(y_2) = 0. (3.47)

The eigenfunction is denoted by l_n(y) and \mu is the eigenvalue. The orthogonality relation for the eigenfunctions is expressed by

\int q(z)\,l_n(z)\,l_m(z)\,dz = N_m\delta_{nm}, (3.48)

where N_m and \delta_{nm} stand for the norm of the eigenfunctions and the Kronecker delta. To illustrate these ideas consider the following example.

Example We consider an SDE with the coefficients

a(y) = -\beta y;\quad b(y) = \sqrt{K};\quad \beta, K = \text{const.} (3.49)

The coefficients in (3.49) correspond to an Ornstein-Uhlenbeck problem with a scaled Brownian movement, see EX 3.9.


The separated equation (3.46) reads KA'' + 2\beta(yA)' + 2\lambda A = 0. We use a scale transformation and this yields

\frac{d^2A}{dz^2} + \frac{d(zA)}{dz} + \lambda^*A = 0;\quad z = y\sqrt{2\beta/K};\quad \lambda^* = \frac{\lambda}{\beta};\quad A(z = \pm\infty) = 0. (3.50)

We transform (3.50) into the Sturm-Liouville form (3.47) and we obtain

\frac{d}{dz}\Big[p(z)\frac{d}{dz}A_n(z)\Big] + (\lambda^* + 1)\,p(z)A_n(z) = 0;\quad p(z) = q(z) = \exp(z^2/2);\quad \mu = \lambda^* + 1. (3.51)

The eigenvalues and eigenfunctions of (3.51) have the form

\lambda_n^* = n;\quad n = 0, 1, \dots;\qquad A_n(z) = \frac{d^n}{dz^n}\exp(-z^2/2). (3.52)
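Reading the eigenfunctions as these Hermite-type functions, a quick finite-difference check of (3.50) for the n = 1 eigenfunction can be sketched as follows (the evaluation point z0 is an arbitrary assumption):

```python
import math

def A1(z):
    # n = 1 eigenfunction: A_1(z) = d/dz exp(-z^2/2) = -z exp(-z^2/2)
    return -z*math.exp(-z*z/2)

z0, h = 0.9, 1e-4
App = (A1(z0 + h) - 2*A1(z0) + A1(z0 - h))/h**2            # A''
zAp = ((z0 + h)*A1(z0 + h) - (z0 - h)*A1(z0 - h))/(2*h)    # d(zA)/dz

# residual of A'' + (zA)' + lambda* A = 0 with lambda* = n = 1
residual = App + zAp + 1.0*A1(z0)
print(residual)
```

The residual vanishes up to discretization error, confirming the eigenvalue \lambda^* = 1.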

Now we use an eigenfunction expansion and replace (3.45) by

P(y,t) = \sum_{n=0}^{\infty}\exp(-n\beta t)\,B_nA_n(y\sqrt{2\beta/K});\quad B_n = \text{const.} (3.53)

We can satisfy the initial condition (3.31b) and we obtain

\sum_{n=0}^{\infty}B_nA_n(y\sqrt{2\beta/K}) = \delta(y). (3.54)

The evaluation of the B_n is performed with the aid of (3.48) and we obtain

B_m = \sqrt{2\beta/K}\,A_m(0)/N_m.

The explicit evaluation of the eigenfunctions and the calculation of the norm is assigned to EX 3.10.


3.5. Lyapunov Exponents and Stability

To explain the meaning of the Lyapunov exponents, we consider an n-dimensional system of deterministic ODE's

\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x},t);\quad \mathbf{x}(t = 0) = \mathbf{x}_0;\quad \mathbf{x}, \mathbf{x}_0, \mathbf{f} \in \mathbb{R}^n. (3.55)

We suppose that we have found a solution \bar{\mathbf{x}}(t) with \bar{\mathbf{x}}(0) = \mathbf{x}_0. Now we study the stability of this solution. To perform this task we are interested in the dynamics of a nearby trajectory. Thus we use the linearization

\mathbf{x}(t) = \bar{\mathbf{x}}(t) + \varepsilon\mathbf{y}(t);\quad 0 < \varepsilon \ll 1,\ \mathbf{y} \in \mathbb{R}^n, (3.56)

where \varepsilon is a small formal parameter. The substitution of (3.56) into (3.55) leads, after a Taylor expansion, to

\dot{\mathbf{y}} = \mathrm{J}(\bar{\mathbf{x}},t)\,\mathbf{y};\quad \mathrm{J}_{pq} = \partial f_p(x_1, \dots, x_n, t)/\partial x_q;\quad p, q = 1, \dots, n, (3.57)

where the matrix J is the Jacobian and (3.57) is the linearized system. We can replace the vector-ODE (3.57) by a matrix-ODE for the fundamental matrix \Phi_{km}(t) that takes the form

\dot{\Phi}_{km}(t) = \mathrm{J}_{ks}(\bar{\mathbf{x}},t)\,\Phi_{sm}(t);\quad \Phi_{sm}(t = 0) = \delta_{sm}. (3.58)

The solution to the linearized problem (3.57) is now given by

y_k(t) = \Phi_{km}(t)\,y_m(0). (3.59)

We define now the coefficient of expansion of the solution y(t) in the direction of y(0),

e(t, \mathbf{y}(0)) = \frac{|\mathbf{y}(t)|}{|\mathbf{y}(0)|}.

In the last line we use the symbol |\cdots| to label the norm in \mathbb{R}^n. The Lyapunov exponent corresponding to the direction y(0) is then


defined by

\Lambda = \lim_{t\to\infty}\frac{1}{t}\ln[e(t, \mathbf{y}(0))]. (3.60)

The solution \bar{\mathbf{x}}(t) is therefore stable (in the sense of Lyapunov) if the condition Re(\Lambda) < 0 is met. Note also that in the case of a one-dimensional ODE (n = 1 in (3.55)) we obtain

\Lambda = \lim_{t\to\infty}\frac{1}{t}\ln[y(t)];\quad y(t) \in \mathbb{R}. (3.61)

Equation (3.61) is sometimes called the Lyapunov coefficient of a function.
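For a linear system the limit (3.60) can be estimated directly by integrating (3.57) over a long time interval; the sketch below does this for an assumed damped-oscillator Jacobian and compares the estimate with the largest real part of its eigenvalues:

```python
import numpy as np

# Jacobian of the damped linear oscillator x'' + 0.3 x' + x = 0
# (an assumed example system); eigenvalues are -0.15 +/- 0.989i
J = np.array([[0.0, 1.0],
              [-1.0, -0.3]])
dt, t_end = 1e-3, 200.0
n = int(t_end/dt)

# apply the explicit Euler propagator n times via repeated squaring
M = np.linalg.matrix_power(np.eye(2) + dt*J, n)
y = M @ np.array([1.0, 0.0])
lyap = np.log(np.linalg.norm(y))/t_end
print(lyap)   # close to the largest real part of the eigenvalues, -0.15
```

The finite-time estimate carries an O(1/t) oscillatory error and a small Euler discretization bias, both negligible for this choice of dt and t_end.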

The stability method of Lyapunov can be generalized to stochastic differential equations. We will see that it is plausible to calculate the Lyapunov exponents with the use of the stationary FPE of the corresponding SDE. A rigorous treatment of the calculation of the Lyapunov exponents is given by Arnold [3.4]. To illustrate this idea we consider a class of first order SDE's

dx = \Big[f(x) + \frac{1}{2}g(x)g'(x)\Big]dt + g(x)\,dB_t;\quad {}' = d/dx, (3.62)

where f and g are arbitrary but differentiable functions. The stationary solution (3.43b) takes the form

P_s(x) = \frac{C}{g(x)}\exp\Big[2\int_0^x f(s)\,g^{-2}(s)\,ds\Big];\quad C = \text{const.} (3.63)
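The stationary density (3.63) satisfies the first integral d(g^2 P_s)/dx = 2(f + \frac12 gg')P_s of the stationary FPE, which can be verified numerically; the functions f and g below are arbitrary illustrative assumptions:

```python
import math

def f(x):  return -x                 # assumed drift part
def g(x):  return 1.0 + 0.1*x*x      # assumed diffusion part
def gp(x): return 0.2*x              # g'(x)

def Ps(x, n=4000):
    # unnormalized stationary density (3.63): exp(2*int_0^x f/g^2 ds) / g(x)
    step = x/n
    total = 0.5*(f(0.0)/g(0.0)**2 + f(x)/g(x)**2)
    for k in range(1, n):
        s = k*step
        total += f(s)/g(s)**2
    return math.exp(2.0*step*total)/g(x)

x0, h = 0.7, 1e-4
lhs = (g(x0+h)**2*Ps(x0+h) - g(x0-h)**2*Ps(x0-h))/(2*h)
rhs = 2.0*(f(x0) + 0.5*g(x0)*gp(x0))*Ps(x0)
print(lhs, rhs)
```

Both sides agree up to quadrature and finite-difference error.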

Now we suppose that we know the solution z(t) to (3.62). We study its stability with the linearization x(t) = z(t) + ey(t). The substitution into (3.62) leads to

dy = \Big(\Big\{f'(z) + \frac{1}{2}[g(z)g'(z)]'\Big\}dt + g'(z)\,dB_t\Big)\,y. (3.64)

The SDE (3.64) is analogous to the one for the population growth (2.1). Thus we use again the function ln(y) and we obtain

d\ln(y) = \Big(f' + \frac{1}{2}gg''\Big)dt + g'\,dB_t. (3.65)


We integrate (3.65) and this yields

\ln[y(t)] = \int_0^t\Big(f' + \frac{1}{2}gg''\Big)ds + \int_0^t g'\,dB_s = \int_0^t\Big(f' + \frac{1}{2}gg'' + g'\xi_s\Big)ds, (3.66)

where \xi_s is the Wiener white noise. Now we calculate the Lyapunov exponent of the one-dimensional SDE (3.62). Thus we use (3.61) and obtain

\Lambda = \lim_{t\to\infty}\frac{1}{t}\int_0^t\Big(f' + \frac{1}{2}gg'' + g'\xi_s\Big)ds. (3.67)

Equation (3.67) is the temporal average of the function f' + \frac{1}{2}gg'' + g'\xi_s. For stationary processes we can replace this temporal average by the probabilistic average that is performed with the stationary solution of the FPE (3.63). Thus, we obtain

\Lambda = \Big\langle f' + \frac{1}{2}gg'' + g'\xi_s\Big\rangle = \int P_s(z)\Big(f' + \frac{1}{2}gg''\Big)dz, (3.68)

where we used the fact that \langle g'\xi_s\rangle = 0. We substitute P_s from (3.63). An integration by parts yields

\Lambda = [P_sf]_{-\infty}^{\infty} - \int\Big(fP_s' - \frac{1}{2}gg''P_s\Big)dx. (3.69)

The first term of the right hand side of (3.69) vanishes. We give in EX 3.11 hints how we can rearrange the integral (3.69) with the use of (3.63). This yields finally (P_s > 0)

\Lambda = -2\int(f/g)^2P_s\,dx < 0. (3.70)

Equation (3.70) means that every solution of (3.62) is stable in the sense of Lyapunov.


3.6. Stochastic Bifurcations

There are two different cases of bifurcations for SDE's:

(a) P-bifurcations (or bifurcations of the PD) These bifurcations are characterized by qualitative changes of the probability density. In many cases there arise changes of the maxima or minima of the PD. The univariate Gaussian PD (1.29) has for m = 0 a maximum at x = 0. A bifurcation of this PD leads generically to a PD with a minimum at x = 0 and two maxima at the positions \pm u.

(b) D-bifurcations (or deterministic bifurcations) The scenario is that a solution loses its stability and bifurcates, as in the deterministic case, to a new stable branch.

3.6.1. First order SDE's

We consider now three SDE's that belong to the class (3.62), with solutions that never lose their stability,

dx = \Big[a(x,\alpha) + \frac{1}{2}\sigma^2h(x)h'(x)\Big]dt + \sigma h(x)\,dB_t. (3.71)

The real constants \sigma and \alpha are the intensity constant of the stochasticity (\sigma = 0 indicates the deterministic limit) and the bifurcation parameter of the deterministic case, respectively. We choose the drift coefficient a(x, \alpha) such that the deterministic limit of (3.71) coincides with one of the three normal forms of one-dimensional ODE's (see Wiggins [3.5]). The fact that the presence of a stochastic effect leads to stable solutions means that the randomness destroys the bifurcation (see Crauel and Flandoli [3.6]).

We investigate now these three cases separately:

(i) The pitchfork case

Here we consider the SDE

dx = (\alpha x - x^3)dt + \sigma\,dB_t. (3.72)

We obtain from (3.63) the stationary PD

Z_s(x) = C\exp[(\alpha x^2 - x^4/2)/\sigma^2];\quad C = \text{const.}, (3.73)

and we determine the constant using (3.31c). We infer from (3.70) that every stochastic solution of (3.72) is stable for all values of \alpha


Fig. 3.1. The P-bifurcation of the PD (3.73). The upper, middle and lower curves belong to \alpha = 1, 0 and -1, respectively; C = \sigma = 1.

and \sigma \neq 0. But we see from (3.73) that there arises at the critical point \alpha = 0 a P-bifurcation (see Fig. 3.1). The PD (3.73) has for \alpha < 0 only a maximum at x = 0. By contrast we face for \alpha > 0 a minimum at x = 0 and two maxima at the locations \pm\sqrt{\alpha}.
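The change of shape at \alpha = 0 can be exhibited numerically; the sketch below locates the maxima of the (unnormalized) density (3.73) on a grid for \alpha = \pm 1 with \sigma = 1 (assumed values):

```python
import math

def Zs(x, alpha, sigma=1.0):
    # unnormalized stationary density (3.73); C = 1 for the shape comparison
    return math.exp((alpha*x*x - x**4/2)/sigma**2)

xs = [i/1000.0 - 2.0 for i in range(4001)]     # grid on [-2, 2]

vals_pos = [Zs(x, 1.0) for x in xs]            # alpha = 1 (after bifurcation)
x_peak = xs[vals_pos.index(max(vals_pos))]
print(x_peak)                                  # a maximum near +/- sqrt(alpha)

vals_neg = [Zs(x, -1.0) for x in xs]           # alpha = -1 (before bifurcation)
x_peak_neg = xs[vals_neg.index(max(vals_neg))]
print(x_peak_neg)                              # single maximum at x = 0
```

For \alpha = 1 the grid maximum lies at |x| = 1 = \sqrt{\alpha} and x = 0 is a local minimum; for \alpha = -1 the only maximum is at the origin.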

(ii) The transcritical case

The SDE under consideration is

dx = \Big(\alpha x - x^2 + \frac{1}{2}\sigma^2x\Big)dt + \sigma x\,dB_t. (3.74)

This SDE is a member of the class (2.35). Its solution is calculated in EX 3.12. We obtain from (3.63) the stationary PD

Z_s(x) = \begin{cases} Cx^\Lambda\exp(-2x/\sigma^2) & \forall\, x > 0,\\ 0 & \forall\, x < 0,\end{cases}\qquad \Lambda = 2\alpha/\sigma^2 - 1. (3.75)

The determination of the norm yields C = (\sigma^2/2)^{-\beta}/\Gamma(\beta);\ \beta = 2\alpha/\sigma^2. There is neither a D- nor a P-bifurcation.
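The normalization constant can be confirmed by quadrature; the sketch below (with assumed values of \sigma and \alpha) integrates the density (3.75) and checks that the total probability is one:

```python
import math

sigma, alpha = 1.0, 0.8                    # assumed parameter values
beta = 2.0*alpha/sigma**2                  # beta = 2*alpha/sigma^2
C = (sigma**2/2.0)**(-beta)/math.gamma(beta)
lam = beta - 1.0                           # exponent Lambda in (3.75)

def Zs(x):
    # stationary density (3.75), supported on x > 0
    return C*x**lam*math.exp(-2.0*x/sigma**2) if x > 0 else 0.0

# trapezoidal quadrature over (0, 30]; the exponential tail beyond is negligible
N, x_max = 100000, 30.0
h = x_max/N
total = sum(0.5*h*(Zs(k*h) + Zs((k+1)*h)) for k in range(N))
print(total)
```

The integral reproduces the gamma-function normalization \int_0^\infty x^{\beta-1}e^{-2x/\sigma^2}dx = (\sigma^2/2)^\beta\Gamma(\beta).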

(iii) The saddle-node case

Here we put

dx = (\alpha - x^2 + \sigma^2/4)dt + \sigma\sqrt{x}\,dB_t. (3.76)

We obtain the PD

Z_s(x) = Cx^\lambda\exp(-x^2/\sigma^2);\quad \lambda = 2\alpha/\sigma^2 - 1/2. (3.77)


Note that the PD exists in x \in (-\infty, \infty) for \alpha = (n + 1/2)\sigma^2/2, n = 1, 2, \dots, yet these are not critical locations. Again, there arises neither a D- nor a P-bifurcation.

3.6.2. Higher order SDE's

There exists no general theory to cover bifurcations of higher order SDE's. A good review of this subject is given in the book of Arnold [3.4]. We limit our attention, however, to just one instructive example of a second order SDE that is linear in the deterministic limit. We consider in this example the

Stochastic Mathieu equation

We study here a stochastic generalization of the Mathieu equation that plays an important role in stability studies of excited oscillators. The stochastic problem is governed by the SDE

\ddot{x} + \varepsilon\beta\dot{x} + x = -\varepsilon x\xi(t);\quad \varepsilon, \beta = \text{const.};\quad 0 < \varepsilon \ll 1, (3.78)

where dots stand for time derivatives. Equation (3.78) describes a linear oscillator that is slightly damped (\beta is the damping coefficient) and affected by a colored noise described by \xi(t). We follow the ideas of Rong et al. [3.7] and use the noisy term

\xi(t) = h\cos(\Omega t + \gamma B_{\varepsilon t});\quad h, \Omega, \gamma = \text{const.}, (3.79)

where \Omega and \gamma stand for a deterministic frequency and the noise intensity, respectively. Furthermore, we use a slowly varying noise expressed in the form of a slowly scaled Brownian movement.

The deterministic case (\gamma = 0) of (3.78) is called the Mathieu equation. Its stability with respect to weak deterministic perturbations [\xi(t) = \cos(\Omega t)] was studied by Nayfeh [3.8] with the aid of various asymptotic methods. We investigate here the stability of the solutions of (3.78) with the use of the method of multiple scales.

Before we perform this task we calculate the spectrum of the colored noise term in (3.78). We introduce the autocorrelation function [Re(a) denotes the real part of the complex variable a]

\langle\xi(t)\xi(t+\tau)\rangle = \frac{h^2}{2}\,\mathrm{Re}(V), (3.80a)


with

V = \exp\{i[\Omega(2t+\tau)]\}\langle T_+\rangle + \exp(i\Omega\tau)\langle T_-\rangle;\qquad T_\pm = \exp[i\gamma(B_{\varepsilon(t+\tau)} \pm B_{\varepsilon t})]. (3.80b)

We can evaluate (3.80b) with the use of EX 1.11. We find that the first term on the right hand side of (3.80b) vanishes in the limit t \to \infty. Hence, we obtain in this limit

\langle\xi(t)\xi(t+\tau)\rangle = \frac{h^2}{2}\cos(\Omega\tau)\exp(-\gamma^2\varepsilon|\tau|/2). (3.81a)

Equation (3.81a) coincides, apart from a different notation, with the limit of (2.57) for t \to \infty. Thus we obtain the spectrum of this noise term (see also Wedig [3.9])

S(\omega) = \frac{h^2}{4}\Big(\frac{\varepsilon\gamma^2}{2}\Big)\frac{\omega^2 + \Omega^2 + \alpha^2}{\alpha^4 + (\omega^2 - \Omega^2)^2 + 2\alpha^2(\omega^2 + \Omega^2)};\quad \alpha = \frac{\varepsilon\gamma^2}{2}. (3.81b)

To approximate the solutions to the SDE (3.78) we use the two-variable version of the multiple scale theory. The latter routine was developed first for deterministic oscillation problems amongst others by Nayfeh [3.8] and later extended to problems with random excitations by Rajan and Davies [3.10] and by Nayfeh and Serban [3.11]. We use two independent variables

r = t and T = et, (3.82)

where \tau and T are referred to as fast and slow variables, respectively. The time derivatives are

\frac{d}{dt} = \frac{\partial}{\partial\tau} + \varepsilon\frac{\partial}{\partial T};\qquad \frac{d^2}{dt^2} = \frac{\partial^2}{\partial\tau^2} + 2\varepsilon\frac{\partial^2}{\partial\tau\,\partial T} + \varepsilon^2\frac{\partial^2}{\partial T^2}. (3.83)

Furthermore, we use also an expansion of the dependent variable and we put

x(t,\varepsilon) = \sum_{n=0}^{\infty}\varepsilon^nx_n(\tau, T) = x_0(\tau, T) + \varepsilon x_1(\tau, T) + O(\varepsilon^2). (3.84)

Thus, we obtain a hierarchy of second order equations

\varepsilon^0:\ Lx_0 = 0,\quad L = \frac{\partial^2}{\partial\tau^2} + 1;\qquad \varepsilon^n:\ Lx_n = \mathrm{RHS}_n(x_0, \dots, x_{n-1});\quad n \ge 1. (3.85)


Equation (3.85) means that the leading order equation (the first equation for e°) is homogeneous. The higher order (or correction order) equations are inhomogeneous with right hand sides RHSn that depend in general on all solutions of the previous problems.

We apply this procedure now to (3.78) and we obtain in the first place

\mathrm{RHS}_1 = -2\frac{\partial^2x_0}{\partial\tau\,\partial T} - \beta\frac{\partial x_0}{\partial\tau} - \xi(t)\,x_0. (3.86)

The leading order solution is the one of a harmonic oscillator and we write it in complex form

x_0 = A(T)\exp(i\tau) + A^*(T)\exp(-i\tau) = A(T)\exp(i\tau) + \mathrm{cc}, (3.87)

where an asterisk denotes the complex conjugate and "cc" stands for the complex conjugate of the preceding term. The constants of the \tau-integration depend on the slow variable T and we write them in the form of a slowly varying amplitude A(T) and its complex conjugate. These functions are determined in the next order of the expansion.

Now we reformulate the right hand side defined by (3.86) with the leading order solution (3.87). Thus, we obtain

\mathrm{RHS}_1 = -i\exp(i\tau)\Big(2\frac{dA}{dT} + \beta A\Big) + \mathrm{cc} - \frac{h}{2}\{A^*\exp[i(\Omega - 1)\tau + i\gamma B_T] + \mathrm{cc}\} - \frac{h}{2}\{A\exp[i(\Omega + 1)\tau + i\gamma B_T] + \mathrm{cc}\}. (3.88)

The inhomogeneity (3.88) causes resonance with solutions of the type \tau a\exp(i\tau);\ a \neq 0, and such terms are no longer periodic solutions. Hence, we put all terms in (3.88) having the structure \exp(i\tau)F(T) and \exp(-i\tau)F^*(T) to zero. This yields the non-resonance equation that allows the calculation of periodic solutions and gives a condition to calculate the slowly varying amplitude function A(T). We introduce a detuning parameter

\Omega - 1 = 1 + \varepsilon\sigma;\quad \sigma = O(1). (3.89)


Equation (3.88) leads with (3.89) to

\mathrm{RHS}_1 = \exp(i\tau)\Big\{-i\Big(2\frac{dA}{dT} + \beta A\Big) - \frac{h}{2}A^*\exp[i(\sigma T + \gamma B_T)]\Big\} + \mathrm{cc} - \frac{h}{2}\exp(3i\tau)A(T)\exp[i(\sigma T + \gamma B_T)] + \mathrm{cc}. (3.90)

Thus, we obtain the non-resonance condition if we put the coefficient of \exp(i\tau) in (3.90) to zero. This yields

2\frac{dA}{dT} + \beta A - \frac{ih}{2}A^*\exp[i(\sigma T + \gamma B_T)] = 0, (3.91)

and we note that the complex conjugate of (3.91) yields a differential equation that has the same solution as (3.91).

To separate (3.91) into real and imaginary parts we use the polar form

A(T) = R(T)\exp[i\psi(T)];\quad R, \psi \in \mathbb{R}. (3.92)

Equation (3.92) yields

\frac{dA}{dT} = (R' + iR\psi')\exp[i\psi(T)];\quad A^* = R\exp(-i\psi);\quad {}' = d/dT.

Hence, we obtain

-i[2(R' + iR\psi') + \beta R] = \frac{h}{2}R\exp(i\eta);\quad \eta = \sigma T + \gamma B_T - 2\psi. (3.93)

Comparing real and imaginary parts in (3.93) leads to

2R' + \beta R = -\frac{h}{2}R\sin\eta, (3.94)

and

\frac{d\eta}{dT} = \sigma + \gamma\frac{dB_T}{dT} - \frac{h}{2}\cos\eta. (3.95)

We note that (3.94) is not an SDE, while (3.95) is an SDE that is written in the usual form

d\eta = \Big(\sigma - \frac{h}{2}\cos\eta\Big)dT + \gamma\,dB_T. (3.95')

To investigate the stability of the solutions to (3.94) and (3.95) we need to calculate the solution to the stationary FPE of (3.95).


This leads with (3.43a) to

\frac{d^2P}{d\eta^2} - \frac{d}{d\eta}[(u - v\cos\eta)P] = 0;\quad P = P(\eta);\quad u = \frac{2\sigma}{\gamma^2},\ v = \frac{h}{\gamma^2}. (3.96)

Since \eta is an angle we use instead of the conditions (3.31b, c) the periodicity condition

P(\eta + 2\pi) = P(\eta), (3.97)

and we apply the normalization condition in the form

\int_0^{2\pi}P(\eta)\,d\eta = 1. (3.98)

We obtain easily a first integral of (3.96),

\frac{dP}{d\eta} = (u - v\cos\eta)P + C;\quad C = \text{const.} (3.99)

To solve (3.99) we use the variation of constants and we obtain

P(\eta) = \exp(u\eta - v\sin\eta)\Big[D + C\int_0^\eta F(x)\,dx\Big];\qquad F(x) = \exp(-ux + v\sin x);\quad D = \text{const.} (3.100)

The rest of the procedure to calculate the solution to the stationary probability function is tedious and we refer the interested reader to the original article [3.7].
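A normalized periodic solution of the form (3.100) can nevertheless be constructed numerically: the periodicity condition (3.97) fixes the ratio of D and C, and (3.98) fixes the overall scale. The sketch below does this for assumed values of u and v:

```python
import math

u, v = 0.4, 0.9            # assumed values of u = 2*sigma/gamma^2, v = h/gamma^2
N = 20000
step = 2.0*math.pi/N

def U(x):                  # antiderivative of u - v*cos(x)
    return u*x - v*math.sin(x)

# cumulative trapezoid of F(x) = exp(-U(x)) on [0, 2*pi]
cum = [0.0]
for k in range(N):
    cum.append(cum[-1] + 0.5*step*(math.exp(-U(k*step)) + math.exp(-U((k+1)*step))))

# periodicity P(0) = P(2*pi) fixes the constants (up to normalization)
C = -(1.0 - math.exp(-2.0*math.pi*u))
D = cum[-1]

P = [math.exp(U(k*step))*(D + C*cum[k]) for k in range(N + 1)]
norm = sum(0.5*step*(P[k] + P[k+1]) for k in range(N))
P = [p/norm for p in P]
print(P[0] - P[-1], min(P))    # periodic and positive
```

With this choice of C and D one can check directly that P(2\pi) = P(0) holds identically, so the construction is consistent with (3.97).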

We return now to the calculation of the stability of the solution. We note that (3.94) has the trivial solution R = 0. Its stability is governed by (3.61) with y(t) = R(T); t = T. Hence, we obtain from (3.94)

\ln[R(T)] = -\frac{\beta}{2}T - \frac{h}{4}\int_0^T\sin[\eta(z)]\,dz. (3.101)

Thus we obtain the Lyapunov coefficient in the form

\Lambda = -\frac{\beta}{2} - \frac{h}{4}\lim_{T\to\infty}\frac{1}{T}\int_0^T\sin[\eta(z)]\,dz. (3.102)


We replace again the temporal average in (3.102) by the probability average [see also (3.67) and (3.68)] and this yields

\Lambda = -\frac{\beta}{2} - \frac{h}{4}\langle\sin(\eta)\rangle. (3.103)

The detailed stability calculation can be found in [3.7]. The numerical results show that an increase of the parameters h and |\sigma| causes the trivial solution to lose stability. The interpretation of this result is simple, since the parameter h controls the amplitude of the noise and the parameter |\sigma| is the detuning.

Appendix A. Small Noise Intensities and the Influence of Randomness on Limit Cycles

We consider a second order SDE

\ddot{x} + ng(x)\dot{x} + f(x) = \sqrt{2nT}\,\xi_t;\quad 0 < n, T \ll 1, (A.1)

where \xi_t is a white noise term, n is a small damping factor and T a small noise intensity; we will specify the functions g(x) and f(x) in Equation (A.19). We suppose that Equation (A.1) possesses in the deterministic limit (T = 0) a limit cycle L_0 with a periodicity \Gamma of the solution: x(t + \Gamma) = x(t). We rewrite (A.1) in the form of (1.123) and we obtain from (3.41b) the corresponding bivariate FPE (we use a summation convention)

\frac{\partial P}{\partial t} = -\frac{\partial}{\partial x_k}(a_kP) + nT\frac{\partial^2P}{\partial x_2^2};\quad P = P(x_1, x_2, t, n, T). (A.2)

The boundary conditions are given such that on the limit cycle L_0, P has the same periodicity \Gamma as the limit cycle solution.

Our aim is now the determination of the stationary solution of (A.2). The coefficient of the highest derivative in (A.2) is given by the small parameter nT. This means that the problem is singular (see Van Dyke [3.12]) and we may apply an asymptotic expansion to approximate the solution of (A.2). This routine is the WKB method that is well known in many disciplines of applied mathematics. Its general theory can be found e.g. in the book of Bender and


Orszag [3.3]. Thus, we use, as an asymptotic expansion to calculate the stationary probability function, the form

P_s(x, y, n, T) = exp[−W(x, y)/T] Σ_{k=0}^∞ p_k(x, y) T^k; x = x₁, y = x₂. (A.3)

The functions W(x, y) and p_k(x, y) are called the Boltzmann energy function and the expansion coefficient functions, respectively.

The substitution of (A.3) into (A.2) yields a hierarchy of equations. We use only the first two members of this hierarchy and we obtain

y ∂W/∂x − [n g(y)y + f(x)] ∂W/∂y + n (∂W/∂y)² = 0 (A.4)

and

−y ∂p₀/∂x + [n g(y)y + f(x) − 2n ∂W/∂y] ∂p₀/∂y + n{[y g(y)]′ − ∂²W/∂y²} p₀ = 0. (A.5)

Note that the leading order given by (A.4) represents a first order nonlinear PDE called the eikonal equation. By contrast, we see that the correction order equation (A.5) is a second order linear PDE. We obtain from (A.5), for small values of n, the periodic solution

n → 0 : p₀ = const. (A.6)

Next, we investigate the boundary condition on the limit cycle L₀. In the first place we obtain, using the deterministic limit of (A.1),

on L₀ : dW/dt = (∂W/∂x)ẋ + (∂W/∂y)ẏ = y ∂W/∂x − [n g(y)y + f(x)] ∂W/∂y,

which with a consideration of (A.4) leads to

on L₀ : dW/dt = −n (∂W/∂y)² ≤ 0. (A.7)

Since a periodically varying function cannot decrease monotonically, we obtain from (A.7)

on L₀ : ∂W/∂y = 0, (A.8)


and the substitution of (A.8) into (A.4) also yields

on L₀ : ∂W/∂x = 0.

Hence, we finally obtain

on L₀ : grad W = 0 or W = const. (A.9)

On the other hand, the parametric lines of W = const. satisfy W_x x′ + W_y y′ = 0 (we use subscripts to indicate partial derivatives), and with the use of (A.4) this leads to

x′ = y; y′ = −[n g(y)y + f(x)] + n W_y. (A.10)

The determination of the lines given by (A.10) is now our primary target. We follow here the considerations of Ben-Jacob et al. [3.14]. First, we note that the term W_y appears in (A.10) and this term seems to be nontraditional. We continue now to calculate W_y. To perform this task we determine the characteristic lines of the eikonal equation (A.4). The general theory of characteristic lines corresponding to first order PDE's is developed by Zauderer [3.13] and we give here only a brief outline of how to calculate these lines for a PDE of the class (A.4). Thus we focus on the problem

F(x, y, W, p_x, p_y) = 0; p_x = W_x, p_y = W_y;

F_x = ∂F/∂x, F_y = ∂F/∂y, F_{px} = ∂F/∂p_x, F_{py} = ∂F/∂p_y. (A.11)

The characteristic lines of (A.11) are given by the five individual ODE's that we can obtain from

dx/F_{px} = dy/F_{py} = dW/(p_x F_{px} + p_y F_{py}) = −dp_x/F_x = −dp_y/F_y = dt. (A.12)

Returning to (A.4) we have

F_x = −f′(x) W_y; F_y = W_x − n W_y S(y); S(y) = [y g(y)]′;

F_{px} = y; F_{py} = −[n y g(y) + f(x)] + 2n W_y.


Hence, we obtain the parametric equations for the characteristic curves of (A.4)

x′ = y; y′ = −[n y g(y) + f(x)] + 2n W_y;

dW/dt = y W_x − [n y g(y) + f(x)] W_y + 2n W_y² = n W_y²; (A.13)

dp_x/dt = f′(x) W_y; dp_y/dt = −W_x + n W_y S(y),

where we used in the second line again (A.4). Now we are ready to calculate W_y. We introduce a generalized Hamilton function

H(x, y) = y²/2 + ∫_z^x f(u) du + n ∫_z^x [g(y(u)) y(u) − W_y(u, y(u))] du. (A.14)

In Equation (A.14) we use an integral along the lines W = const. that passes from an initial point z to (x, y). (A.14) implies that H = const. along the parametric lines W = const. (see EX 3.13). We use (A.13) to calculate the time derivative of the Hamilton function

dH/dt = n y W_y + O(n²). (A.15)

Finally we suppose that

H = H(W). (A.16)

This yields with the second line of (A.13)

dH/dW = y/W_y + O(n), or W_y = y K(W) + O(n). (A.17)

Equation (A.17) defines the function K(W) and serves now to determine, for small values of n, the quantity W_y. We substitute the result (A.17) into the parametric lines of W = const. that are given by (A.10). Thus, we obtain

x′ = y; y′ = −n y[g(y) − K(W)] − f(x). (A.18)

Equation (A.18) has the same form as the deterministic limit of (A.1). The only difference is that the function g(y) in (A.1) is in (A.18)


replaced by g(y) − K(W). This means that the curves W = const. are a family of limit cycles of (A.18) parametrized with K(W) around K = 0.

We use now an example from the field of fluid dynamics. A flexible cylinder oscillates in the presence of an array of fixed cylinders under the influence of the lift forces of a homogeneous incoming flow (see Plaschko [3.15]). In this case we specify (A.18) to

x′ = y; y′ = −n y {[s₀ − K(W) − qσ₂] + s₂ y²} − (1 + n q σ₁) x. (A.19)

In Equation (A.19) n stands for the mass ratio (the mass of the fluid that replaces the cylinder over the mass of the cylinder; in aerodynamics this parameter has an order of magnitude n = O(10⁻³)). The parameters s₀, s₂, σ₁, σ₂ and q represent the linear and cubic damping terms, memory terms and a fluid load term. The deterministic problem is governed by (A.19) with K(W) = 0. Its limit cycle was determined in [3.15] and it has the form

L₀ : x² + y² = 4R² + O(n); R² = (qσ₂ − s₀)/(3s₂), (A.20)

where 2R denotes the radius of the limit cycle. Equation (A.19) can be understood as a problem with a modified linear damping term.

Now we obtain from (A.19)

x² + y² = 4R² + 4K(W)/(3s₂). (A.21)

We obtain from (A.21)

dK(W)/dy = 3 s₂ y / 2. (A.22)

We use now (A.17) and this yields with (A.22)

dW/dK = (∂W/∂y)(dy/dK) = W_y/(dK/dy) = yK/(3s₂y/2) = 2K/(3s₂).

An integration of the last line leads to

W(K) = K²/(3s₂), (A.23)


where we set the constant of the integration on the limit cycle (K = 0) to zero. Thus we obtain from (A.21) and (A.23) the final result for the Boltzmann energy function

W(x, y) = 3s₂ (r²/4 − R²)²; r² = x² + y², (A.24)

and the approximation to the FPE is given by

P_s(x, y) = A₀ exp[−A(r² − 4R²)²] {1 + O(T)}, A = 3s₂/(16T), (A.25)

where A₀ is a constant. The normalization is performed if we integrate (A.25) over the entire (x, y)-plane:

1 = 2π A₀ ∫₀^∞ r exp[−A(r² − 4R²)²] dr. (A.26)

It is now convenient to pass to polar coordinates (r, φ). First we note that the independence of (A.25) from φ indicates that the azimuth distribution function Z(φ) satisfies

dZ(φ)/dφ = 0. (A.27)

The normalized radial distribution function G(r) is derived from (A.25) and (A.26). It has the form

G(r) = 2π A₀ r exp[−A(r² − 4R²)²]. (A.28)

We give in Figures 3.2 and 3.3 a comparison of the theoretical predictions of the radial and azimuth distribution functions with numerical simulations of the process (A.1). The corresponding numerical routines will be covered in Chapter 5. Finally we present in Figure 3.4 a graph that gives an impression of the way the noise destroys the limit cycle curve of the deterministic oscillator and produces a banded diagram whose bandwidth increases with growing noise intensity T.
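The qualitative behavior just described can already be reproduced with a direct Euler-Maruyama integration of (A.1). The sketch below uses our own illustrative parameter values (a larger damping n than in the figures, so that the statistics settle quickly) and the specification g(y) = s₀ + s₂y², f(x) = x, i.e. (A.19) with q = 0; it checks that the sample mean radius scatters around the deterministic value 2R of (A.20).

```python
import math, random

random.seed(1)

# Illustrative parameters (not those of Figs. 3.2-3.4); s0 < 0 gives a
# self-excited limit cycle with (2R)^2 = -4*s0/(3*s2) from (A.20), q = 0.
n, T = 0.1, 0.01          # damping and noise-intensity parameters of (A.1)
s0, s2 = -1.0, 1.0        # linear and cubic damping coefficients
dt, steps = 0.01, 200_000

two_R = math.sqrt(-4.0 * s0 / (3.0 * s2))

x, y = 1.0, 0.0
radii = []
for k in range(steps):
    # Euler-Maruyama step for x' = y, y' = -n*y*(s0 + s2*y^2) - x + sqrt(2nT)*xi
    x, y = (x + y * dt,
            y + (-n * y * (s0 + s2 * y * y) - x) * dt
              + math.sqrt(2.0 * n * T * dt) * random.gauss(0.0, 1.0))
    if k > steps // 2:                      # discard the transient
        radii.append(math.hypot(x, y))

mean_r = sum(radii) / len(radii)
print(mean_r, two_R)   # the sample mean radius scatters around 2R
```

For small T the sampled radii form a narrow band around the deterministic limit cycle; increasing T widens the band, as in Figure 3.4.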


Fig. 3.2. A comparison of the numerically computed and the theoretically predicted radial distribution function (A.28); n = δ = 0.001, T = 0.05.


Fig. 3.3. A comparison of the numerically computed and the theoretically predicted azimuth distribution function (A.27); parameters as in Fig. 3.2.


Fig. 3.4. A single realization of the noisy oscillation phase-diagram; n = δ = 0.001, T = 0.05; IC's: x(0) = 0, y(0) = −0.2.

Appendix B

We discuss here two different concepts to study the stability of fixed points (FP) of SDE's. Both strategies have their merits in the limit of deterministic problems and we give a brief introduction to the corresponding theories for ODE's. The stability of the FP's is investigated in the sense of Lyapunov, or we concentrate on asymptotic stability.

B.1. The method of Lyapunov functions

This method is well established in investigations of the stability of deterministic ODE's. We present here a brief survey of these deterministic problems. Given a system of n autonomous first order ODE's

dx_j/dt = F_j(x); x = (x₁, x₂, . . . , x_n);

F_j(0) = 0; j = 1, 2, . . . , n; t > 0. (B.1)

A FP can always be moved to the origin (x = 0) and this is expressed in (B.1) by F_j(0) = 0.



We introduce now a positive definite function, the Lyapunov function, L(x), that satisfies the following three conditions

(i) L(0) = 0; (ii) L(x) > 0; (iii) dL/dt = F_p(x) ∂L/∂x_p ≤ 0, (B.2)

where the conditions (ii) and (iii) hold in an open neighborhood (|x| > 0) of the FP. If we can find such a Lyapunov function then the FP is stable in the sense of Lyapunov [see (3.60)].

In the stochastic case we wish to investigate the stability of the FP's of the autonomous SDE's

dx_j = a_j(x) dt + b_{jm}(x) dB_t^m; j = 1, 2, . . . , n; m = 1, 2, . . . , K. (B.3a)

A FP of (B.3) is defined by

dx_j = 0 ⟹ a_j(x₀) = 0; b_{jm}(x₀) = 0; ∀ j = 1, 2, . . . , n; m = 1, 2, . . . , K. (B.3b)

FP's of 1-D SDE's studied in Section 3.6.1 are analyzed in EX 3.14. Lyapunov functions can, however, also be used in the case of higher order SDE's. To verify this we introduce a Lyapunov function that satisfies the conditions (i) and (ii) of (B.2). Its differential [see (1.127)] is given by

dL = (a_m ∂L/∂x_m + ½ b_{mr} b_{kr} ∂²L/∂x_m∂x_k) dt + b_{mr} (∂L/∂x_m) dB_t^r. (B.4)

A stable system has now the property that L does not increase, i.e.

dL < 0. (B.5)

The latter equation means that the deterministic definition of the stability holds also for every single trajectory x(t, ω). However, we cannot investigate every trajectory and this condition is thus impractical. Hence, we require only that the average of dL is negative definite

⟨dL⟩ = ⟨a_m ∂L/∂x_m + ½ b_{mr} b_{kr} ∂²L/∂x_m∂x_k⟩ dt + ⟨b_{mr} (∂L/∂x_m) dB_t^r⟩ ≤ 0.


We see that the second term on the right hand side of the latter line vanishes. Thus we derive in the stochastic case as the third stability criterion

U(L) = a_m ∂L/∂x_m + ½ b_{mr} b_{kr} ∂²L/∂x_m∂x_k ≤ 0. (B.6)

Example 1. We consider the 1-D population growth model (2.1) with the FP in the origin

dx = r x dt + u x dB_t; r, u = const.

Its exact solution can be written in the form

x(t) = x(0) exp((r − u²/2) t {1 + u B_t/[t(r − u²/2)]}). (B.7)

To study its stability, we use the relation

B_t/t → 0 for t → ∞. (B.8)

To prove (B.8) we can use the law of the iterated logarithm (1.63), or we can alternatively apply the way of EX 3.15, showing that all moments of the right hand side vanish. (B.7) and (B.8) now imply that the FP is stable for

r − u²/2 < 0, (B.9)

and unstable elsewhere.

Now we apply the Lyapunov function approach and use as a trial function

L(x) = |x|^s; s > 0. (B.10)

It is easy to verify that this choice of the Lyapunov function satisfies the first and second condition of (B.2). The substitution into (B.6) yields

U(|x|^s) = 0.5 s |x|^s u² [(s − 1) + 2r/u²]. (B.11)

If we choose

0 < s < 1 − 2r/u², (B.12)

we prove, with the choice of the Lyapunov function (B.10) and compliance with (B.9), that (B.6) is satisfied. Hence, we conclude that the FP is stable, provided that Equation (B.9) is fulfilled.


Example 2. We consider a damped nonlinear pendulum where the damping as well as the frequency are stochastically perturbed

ẍ + δ(1 + αξ_t)ẋ + (1 + βξ_t) sin x = 0, (B.13)

where δ is the damping and α > 0, β > 0 are intensity constants, respectively. The deterministic and the stochastic FP lie in the origin, and the linearized theory of deterministic problems shows that this FP is stable for δ > 0 and that at δ = 0 a Hopf bifurcation appears. We study now the stability of the stochastic FP and rewrite (B.13) in the standard form

dx = y dt; dy = −(δy + sin x) dt − (αδy + β sin x) dB_t. (B.14)

With (B.6) we obtain the operator

U = y ∂/∂x − (δy + sin x) ∂/∂y + ½ (αδy + β sin x)² ∂²/∂y².

We use now as trial Lyapunov function

L = A x² + B x y + y² + 2D sin²(x/2), (B.15)

where the constants A, B and D are as yet unknown. (B.15) is a quadratic form plus a multiple of an integral of the nonlinear term

∫₀^x sin(z) dz.

This leads to

U = xy(2A − Bδ) + y²(B − 2δ + α²δ²) + y sin(x)(D − 2 + 2αβδ) − x sin(x) [B − β² sin(x)/x].

To comply with the condition U ≤ 0 we use the relations between the constants

A = Bδ/2; D = 2(1 − αβδ); B − 2δ + α²δ² < 0; B/β² > 1. (B.16)

The Lyapunov function now takes the form

L = (y + Bx/2)² + (B/2)(δ − B/2) x² + 4(1 − αβδ) sin²(x/2). (B.17)


To achieve the condition L > 0 for (x, y) ≠ (0, 0) we require that

0 < B < 2δ; αβδ < 1. (B.18)

A recombination of the conditions (B.16) finally yields

α² < 2δ/(δ² + γ²) with β = αγ, γ > 0; β² < B < 2δ − α²δ². (B.19)

With constants complying with (B.19) we can conclude now that the FP in the origin is stable.

B.2. The method of linearization

We begin with a linear nth order ODE with constant coefficients

y^(n)(x) + b₁ y^(n−1)(x) + ··· + b_{n−1} y′(x) + b_n y(x) = 0; b_j = const.; ∀ j = 1, 2, . . . , n, (B.20)

where primes denote derivatives with respect to the variable x. (B.20) has the FP

(y(0), y′(0), . . . , y^(n−1)(0)) = 0. (B.21)

The Routh-Hurwitz criterion (see [1.2]) states that this FP is asymptotically (for t → ∞) stable if and only if the following determinants satisfy the relations (see also EX 3.16)

Δ₁ = b₁ > 0; Δ₂ = b₁b₂ − b₃ > 0; Δ₃ = det of the matrix with rows (b₁, b₃, 0), (1, b₂, 0), (0, b₁, b₃), i.e. Δ₃ = b₃Δ₂ > 0. (B.22)

We generalize now the concept of the Routh-Hurwitz criterion for the following class of second order SDE's

y″ + [a₁ + ζ₁(t)] y′ + [a₂ + ζ₂(t)] y = 0; a_j = const.; ζ_j(t) dt = const. dB_t^j; j = 1, 2, (B.23)

with the two white noise processes [see (2.41)]

⟨ζ_j(t) ζ_k(s)⟩ = δ(t − s) Q_{jk}. (B.24)

This stochastic extension was derived by Khasminskiy [3.17]. It states that the FP at the origin is asymptotically stable in the mean square if and only if the following conditions are met

a₁ > 0; a₂ > 0; 2 a₁ a₂ > Q₁₁ a₂ + Q₂₂. (B.25)


Example (Stochastic Lorenz equations). We consider a Lorenz attractor under the influence of a stochastic perturbation. The SDE has the form

dx = s(y − x) dt,

dy = (r x − y − x z) dt + σ y dB_t, (B.26)

dz = (x y − b z) dt; b, s, r > 0.

This system of SDE's is a model for the development of fluid dynamical disturbances and their passage to chaos in the process of heat transfer. The dimensionless parameters in (B.26) are given by s (the Prandtl number), r (proportional to the Rayleigh number) and b (a geometric factor). (B.26) has the deterministic FP's

(0, 0, 0); (±√(b(r − 1)), ±√(b(r − 1)), r − 1).

However, in the stochastic case (σ ≠ 0) only the FP in the origin survives.

We use now the linearization routine to study the stability of the FP in the origin. The linearization of (B.26) leads to

du = s(v − u) dt,

dv = (r u − v) dt + σ v dB_t, (B.27)

dw = −b w dt.

We infer from (B.27) that the component w is decoupled from the rest of the system and because of w(t) = w(0) exp(−bt) this disturbance is stable for b > 0. Hence we will investigate only the stability of the (u, v) system, taking advantage of the first two equations of (B.27). However, this system is not a member of the class (B.23). To achieve a transformation we eliminate the variable v by use of v = u + (du/dt)/s. This yields

ü + [(1 + s) − σζ(t)] u̇ + [s(1 − r) − σ s ζ(t)] u = 0. (B.28)

A comparison with (B.23) and (B.24) then leads to

a₁ = 1 + s; a₂ = s(1 − r); ζ₁ = −σζ(t); ζ₂ = −σ s ζ(t); (B.29)

Q₁₁ = σ²; Q₁₂ = Q₂₁ = σ²s; Q₂₂ = σ²s².


The Khasminskiy criterion applied to our example now reads

1 + s > 0; s(1 − r) > 0; 2(1 + s)(1 − r) > σ²(1 − r + s). (B.30)

Since we have by definition s > 0, the first part of (B.30) is automatically satisfied, while the second part gives r < 1. Finally, the third part of (B.30) yields for small intensity constants

r < 1 − σ²s/[2(1 + s)] + O(σ⁴). (B.31)

The origin is stable in the deterministic limit for r ∈ [0, 1). Hence, equation (B.31) tells us how the stability is reduced by a small intensity noise σ ∈ [0, 1).
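The chain of conditions (B.25), (B.29) and (B.30) can be wrapped into a few lines of code; the function names below are ours, and σ denotes the noise intensity of (B.26):

```python
def khasminskiy_stable(a1, a2, Q11, Q22):
    # mean-square stability criterion (B.25) for the class (B.23)
    return a1 > 0 and a2 > 0 and 2 * a1 * a2 > Q11 * a2 + Q22

def lorenz_origin_stable(s, r, sigma):
    # coefficients (B.29) of the reduced equation (B.28)
    a1, a2 = 1 + s, s * (1 - r)
    Q11, Q22 = sigma ** 2, (sigma * s) ** 2
    return khasminskiy_stable(a1, a2, Q11, Q22)

s = 10.0
print(lorenz_origin_stable(s, 0.5, 0.1))   # weak noise: origin stays stable
print(lorenz_origin_stable(s, 0.9, 1.0))   # stronger noise destabilizes r = 0.9
print(1 - 1.0 ** 2 * s / (2 * (1 + s)))    # small-noise threshold of (B.31)
```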

The study of Hopf bifurcations is beyond the scope of this book. The interested reader can find, however, pertinent literature in [3.4], Ebeling et al. [3.18], Arnold et al. [3.19], Schenk-Hoppe [3.20], [3.21], [3.22] and Keller and Ochs [3.23].

Exercises

EX 3.1. (a) To find the solution of the random walk problem (3.13), use the initial condition Z_n(0) = δ_{n0} and introduce the probability generating function (u is an auxiliary variable)

G(u, t) = Σ_{n=−∞}^∞ uⁿ Z_n(t).

Show that

G(1, t) = 1; G′(1, t) = ⟨n(t)⟩, G″(1, t) = ⟨n²(t)⟩ − ⟨n(t)⟩; ′ = ∂/∂u.

(b) Consider a random walk on a straight line with step size Δx and a time Δt between two steps. Equation (1.54) can be reduced to

p₁(nΔx, sΔt) = Σ_m p₁(mΔx, (s − 1)Δt) p_{1|1}(nΔx, sΔt | mΔx, (s − 1)Δt).

Assuming equal probability of the steps to the right and to the left, p_{1|1} = 0.5(δ_{m,n+1} + δ_{m,n−1}), show that we obtain

p₁(nΔx, sΔt) = 0.5[p₁((n + 1)Δx, (s − 1)Δt) + p₁((n − 1)Δx, (s − 1)Δt)].


Derive from the latter equation the quantity

[p₁(nΔx, sΔt) − p₁(nΔx, (s − 1)Δt)]/Δt

and apply the limit

x = nΔx; t = sΔt; Δx → 0; Δt → 0; D = (Δx)²/(2Δt) = O(1)

to derive a diffusion PDE

∂p₁/∂t = D ∂²p₁/∂x².

This is the FPE with A = 0; B = 2D.

EX 3.2. Calculate the expression for d⟨y²⟩/dt and use, in analogy to the derivation of (3.17), the relation

(d/dt)⟨y²⟩ = ∫dy ∫dy′ [(y − y′)² + 2y(y′ − y)] W(y′|y) Z(y, t).

Verify that the evaluation of the last line leads to

(d/dt)⟨y²⟩ = ⟨a₂(y)⟩ + 2⟨y a₁(y)⟩.

The variance σ satisfies σ²′ = ⟨y²⟩′ − 2⟨y⟩⟨y⟩′. Verify that this yields

σ²′ = ⟨a₂(y)⟩ + 2⟨(y − ⟨y⟩) a₁(y)⟩.

EX 3.3. To calculate the transformation of the third term in (3.28) use the relation

(BTv)″ = (BT)″v + 2(BT)′v′ + (BT)v″,

to verify that

∫ (BT) v″ dz = ∫ (BT)″ v dz.

EX 3.4. Derive the backward Chapman-Kolmogorov equation. Use in (3.4) u = Δt and v = t, y₃ = z; y₂ = w and y₁ = x (x, w and z are backward, intermediate and forward spatial variables, respectively).


Hint: This yields

T(z|x; t + Δt) = ∫ dw T(w|x; Δt) T(z|w; t).

Multiply this equation by an arbitrary function ψ(x, t) and note that x < w. Apply the other considerations that lead from (3.21) to (3.29). The result is

∂T(z|x; t)/∂t = A(x, t) ∂T(z|x; t)/∂x + ½ B(x, t) ∂²T(z|x; t)/∂x².

EX 3.5. Consider the three-term Kramers-Moyal equation [see (3.32)]

∂P/∂t = [−(∂/∂y) A + ½ (∂²/∂y²) B − (1/6)(∂³/∂y³) C] P.

Multiply this formula by Δy and (Δy)² (see (3.33a)). Show that the third term in this Kramers-Moyal equation does not affect the first and second order terms in (3.34). Hint:

y ∂³v/∂y³ = (yv)‴ − 3v″; y² ∂³v/∂y³ = (y²v)‴ − 6(yv)″ + 6v′.

EX 3.6. Continue the calculation of (3.33b) for k = 4 and 5 and find formulas in analogy to (3.33c) to (3.33e).

EX 3.7. Consider the average

S = ⟨∫ a(y(s), s) ds ∫ b(y(w), w) dB_w⟩ = Σ_{n,m} ⟨a(y_m, s_m)(t_{m+1} − t_m) b(y_n, s_n)(B_{n+1} − B_n)⟩.

The integrand is a non-anticipative function. Thus, we obtain from the independence of the increments of the Brownian motion that the average written above vanishes: S = 0.

EX 3.8. Use the third line of (3.38) to calculate dT₃/dt; T₃ = ⟨(Δy)³⟩. Take advantage of (2.14′) and compare the results with T₃ calculated from (2.19a′).


EX 3.9. Compare the Ornstein-Uhlenbeck problem (2.14) with the SDE that arises from the coefficients (3.49). Use a scaled Brownian movement [see (1.65)] to perform this juxtaposition.

EX 3.10. Consider (3.54) and verify

B_m = (1/N_m) ∫ δ[(2β/κ)^{1/2} z] exp(z²/2) A_m(z) dz = (2β/κ)^{−1/2} A_m(0)/N_m.

The symmetry of the eigenfunctions (3.50) leads to B_{2m+1} = 0. Eventually we obtain

p(x, t) = Σ_{n=0}^∞ exp(−2nt) B_{2n} A_{2n}(x).

We observe that this series tends for t → ∞ to its asymptotic limit. This is the stationary solution lim_{t→∞} p(x, t) = B₀ exp(−βx²/κ).

Note also that we can determine the normalization constant B₀ in the form

1 = ∫ p(x, ∞) dx ⟹ B₀ = √(β/(πκ)).

EX 3.11. To prove (3.70) we put the first term in (3.69) to zero. We obtain, with the use of the first integral of the PD that leads to (3.63),

∫ [f(2f/g² − g′/g) − ½ g g″] P_s dx + ∫ (f g′/g) P_s dx = 0.

The first part of the last line gives Equation (3.70). In the second integral we apply an integration by parts to the second term, where we substitute (3.63). This shows that the second integral vanishes.

EX 3.12. (a) Show that (3.62) is equivalent to dx = f(x) dt + g(x)∘dB_t. (b) Solve with the use of (2.35) the transcritical case

dx = (ax − x²) dt + σx∘dB_t.


EX 3.13. The Hamilton function H is defined by (A.14). Using (A.10) show that H = const. along W = const.

EX 3.14. Find possible FP's of the 1-D SDE's (3.72), (3.74) and (3.76) of Section 3.6.1.

EX 3.15. Verify that all moments of B_t/t vanish for t → ∞.

EX 3.16. The Routh-Hurwitz matrices Δ_n = (a_{ij}) are given in (B.22). They are constructed in the following way. First we set the coefficient of y^(n) to b₀ = 1. Then we put together the first row as a sequential array of the odd coefficients

b₁, b₃, . . . , b_{2k+1}, 0, . . . , ∀ 1 ≤ 2k + 1 ≤ n.

Each subsequent row is then built according to

a_{ij} = b_{2j−i} ∀ i ≥ 2, 0 ≤ 2j − i ≤ n and a_{ij} = 0 otherwise.

(i) Construct in this way Δ₂, Δ₃ and Δ₄.
(ii) Verify the stability criterion for the case of a deterministic linear damped and/or excited pendulum b₁ ≠ 0, b₂ ≠ 0.

EX 3.17. Study the stability of the FP's of the pendulum (B.14) with a linearization routine.


CHAPTER 4

ADVANCED TOPICS

In the first section we treat stochastic partial differential equations (SPDE). The second section is devoted to an example of the influence of stochastic initial conditions on the evolution of a deterministic partial differential equation. In the third section we give a brief introduction to stochastic eigenvalue problems.

4.1. Stochastic Partial Differential Equations

We investigate here a SPDE that includes an additive stochastic term

LΦ = a W_{xt}; Φ = Φ(x, t), (4.1)

with a linear operator containing one space (x) and one time variable (t)

L = Σ_{k,m=1}² a_{km} ∂²/(∂ξ_k ∂ξ_m) + Σ_{k=1}² b_k ∂/∂ξ_k + c; x₀ ≤ x ≤ x₁; t₀ ≤ t ≤ t₁, (4.2)

(ξ₁, ξ₂) = (x, t); a_{km} = a_{km}(ξ₁, ξ₂); b_k = b_k(ξ₁, ξ₂); c = c(ξ₁, ξ₂). (4.3)

In this SPDE a stands for an intensity constant and W_{xt} denotes the additive stochastic term, a two-dimensional Wiener sheet [see (1.66) to (1.70)]. We have to supply boundary and initial values to the problem (4.1), such as

Φ(x₀, t) = u₀(t); Φ(x₁, t) = u₁(t), Φ(x, t₀) = T₀(x). (4.4)

We solve (4.1) with the method of Green's functions and we obtain the solution to (4.1)

Φ(x, t) = Φ₀(x, t) + a ∫_{x₀}^x dy ∫_{t₀}^t ds G(x, t, y, s) W_{ys}. (4.5)


The homogeneous solution satisfies LΦ₀ = 0. The inverse operator of L is labeled L⁻¹ and we define the Green's function by

G(x, t, y, s) = L⁻¹ δ(y − x) δ(s − t), (4.6)

and we note that the Green's function G is a deterministic function. Now we specify the stochastic term W_{xt}. We recall that for an ordinary SDE we use dB_t = ξ_t dt [see (1.77)], where ξ_t is the white noise and B_t is the Brownian movement. In a two-dimensional (2-D) setting we have

W_{xt} = ∂²W/(∂x ∂t). (4.7)

Equation (4.7) characterizes the most general 2D white noise. The space (x or y) and the time (t or s) variables are, however, independent and we can write [see (1.68)]

W_{ys} = W(y) W(s); W(y) = dB_y/dy; W(s) = dB_s/ds, (4.8)

where B_k is the Brownian movement for the kth coordinate. The calculation of the delta functions and the determination of the Green's functions is performed for a finite region with the initial-boundary conditions (4.4) and the use of an eigenfunction expansion.

To determine the statistical moments of the solution (4.5), where y and s are independent variables, we use

⟨W(y) W(s)⟩ = ⟨W(y)⟩⟨W(s)⟩ = 0;

⟨W(y₁) W(y₂) W(s₁) W(s₂)⟩ = ⟨W(y₁) W(y₂)⟩⟨W(s₁) W(s₂)⟩ = (y₁ ∧ y₂)(s₁ ∧ s₂). (4.9)

Using (4.9) we rewrite (4.5) in the form

Φ(x, t) = Φ₀(x, t) + a ∫_{x₀}^x dB_y ∫_{t₀}^t dB_s G(x, t, y, s). (4.5′)

We perform the average and we take advantage of

⟨dB_y dB_s⟩ = ⟨dB_y⟩⟨dB_s⟩ = 0.

Thus, we obtain the mean value of the solution

⟨Φ(x, t)⟩ = Φ₀(x, t), (4.10)


Advanced Topics 137

and this is, in analogy to linear ordinary SDE, the deterministic limit solution (a = 0) of the problem. Now we calculate the variance of the solution (4.5) and this yields in the first place

Var(Φ) = a² ⟨∫_{x₀}^x dB_y ∫_{x₀}^x dB_z ∫_{t₀}^t dB_s ∫_{t₀}^t dB_u G(x, t, y, s) G(x, t, z, u)⟩. (4.11)

To simplify this integral we use the Dirac-function expansion of the quantity

⟨dB_y dB_z dB_s dB_u⟩ = ⟨dB_y dB_z⟩⟨dB_s dB_u⟩ = δ(y − z) δ(s − u),

where we used again the independence of the space (y, z) and the time (s, u) variables. Thus, we obtain

Var(Φ) = a² ∫_{x₀}^x dy ∫_{t₀}^t ds G²(x, t, y, s). (4.12)

We can infer from (4.12) that the variance exists only for square-integrable Green's functions. We will find later in this section that this is a general criterion for the existence of stochastic solutions to SPDE.

Now we calculate an example:

Example (The stochastic cable equation)

Φ_t = Φ_xx − Φ + a W(x) W(t); 0 ≤ x ≤ π; t ≥ 0; (4.13)

Φ_x(0, t) = Φ_x(π, t) = Φ(x, 0) = 0.

We use the deterministic eigenfunctions that are given by

V_t = V_xx − V; V_x(0, t) = V_x(π, t) = 0, (4.14a)

and the Green's function takes the form

G(x, t, y, s) = Σ_{k=0}^∞ V_k(x, t) V_k(y, −s), (4.14b)


where V_k(x, t) and V_k(y, s) are the eigenfunctions satisfying (4.14a) for the pairs of variables (x, t) and (y, s), respectively.

The separation V(x, t) = exp(−λt) U(x) leads to U″ − (1 − λ)U = 0 and the compliance with the boundary conditions yields

V_k = exp[−(1 + k²)t] U_k(x); k = 0, 1, . . . ; (4.15)

U₀ = 1/√π; U_k = √(2/π) cos(kx) ∀ k ≥ 1.

The eigenfunctions are orthonormalized and the orthogonality relation is given by

∫₀^π U_k(x) U_m(x) dx = δ_{km}.

Thus we obtain the Green's function

G(x, t, y, s) = Σ_{k=0}^∞ U_k(x) U_k(y) exp[−λ_k(t − s)]; λ_k = 1 + k².

The application of (4.5) leads to

Φ = a ∫₀^π dy ∫₀^t ds G(x, t, y, s) W_{ys} = a Σ_{k=0}^∞ U_k(x) exp(−λ_k t) ∫₀^π dB_y U_k(y) ∫₀^t dB_s exp(λ_k s). (4.16)

Note that we set Φ₀ = 0 since (4.16) already satisfies the initial condition Φ(x, t = 0) = 0. We can put (4.16) into a more elegant form if we define

Z_t^k = ∫₀^t dB_s ∫₀^π dB_y U_k(y) = B_t β_k; β_k = ∫₀^π dB_y U_k(y), (4.17a)


and

A_k(t) = ∫₀^t dB_s exp[−λ_k(t − s)] ∫₀^π dB_y U_k(y) = β_k ∫₀^t dB_s exp[−λ_k(t − s)]; A_k(0) = 0. (4.17b)

The differential of (4.17a) is dZ_t^k = β_k dB_t and the substitution of this line into (4.17b) yields

A_k = ∫₀^t dZ_s^k exp[−λ_k(t − s)].

Yet the differential of the latter line takes the form

dA_k = dZ_t^k − λ_k dt ∫₀^t dZ_s^k exp[−λ_k(t − s)],

and we obtain the ordinary SDE (we do not use a summation convention)

dA_k = dZ_t^k − λ_k A_k dt = −λ_k A_k dt + β_k dB_t. (4.18)

Equation (4.18) represents a special case of the Ornstein-Uhlenbeck SDE (2.14) and we obtain as solution

A_k(t) = β_k ∫₀^t exp[−λ_k(t − s)] dB_s. (4.19)

We use the variable A_k to simplify the formula for the solution (4.16) and this yields

Φ(x, t) = a Σ_{k=0}^∞ U_k(x) A_k(t). (4.20)

The mean value is ⟨A_k(t)⟩ = 0 and the autocorrelation function has the form

C(A_k) = ⟨A_k(t − τ/2) A_k(t + τ/2)⟩ = ⟨β_k²⟩ ∫₀^γ exp[−2λ_k(t − s)] ds,

where the time γ was defined in (2.47). Thus we obtain an autocorrelation function that is, in analogy to (2.48), given by

C(A_k) = ⟨β_k²⟩ [exp(−λ_k|τ|) − exp(−2λ_k t)]/(2λ_k). (4.21)


The relation (4.20) is used in EX 4.1 to calculate the variance of the solution Φ(x, t) and to compare this result with the prediction of the general variance formula (4.12).

We continue now to investigate SPDE in higher space dimensions. We do this for the stochastic Poisson equation. Thus we investigate the n-D problem

Δ_n Φ = a U(x) W_n; Φ = Φ(x); x = (x₁, . . . , x_n); (4.22)

Δ_n = ∂²/∂x₁² + ··· + ∂²/∂x_n²; W_n = (dB_{x₁}/dx₁) ··· (dB_{x_n}/dx_n).

The function U(x) is a deterministic function and we specify the domain of (4.22) as the entire Rn.

The 2D and 3D Green's functions are well known (see Morse & Feshbach [4.1]) and we obtain the 2D Green's function

G₂(x, x′, y, y′) = (1/4π) ln[(x − x′)² + (y − y′)²], (4.23a)

and the 3D Green's function has the form

G₃(x, x′, y, y′, z, z′) = −1/(4πR); R² = (x − x′)² + (y − y′)² + (z − z′)². (4.23b)

Thus, we obtain the 2D solution

Φ(x, y) = (a/4π) ∫ dB_u ∫ dB_v U(u, v) ln[(x − u)² + (y − v)²], (4.24a)

where we put the homogeneous solution, and with it the average of Φ, to zero. The variance is calculated with (4.12) and we obtain

Var(Φ) = (a/4π)² ∫ du ∫ dv U²(u, v) ln²[(x − u)² + (y − v)²]. (4.24b)

We recall the criterion of square-integrable Green's functions. We extend this criterion now to the square integrability of the product of


the Green's function and the coefficient function U(x). As an example we set

U(u, v) = √(δ(u) δ(v)). (4.25)

The substitution of (4.25) into (4.24) leads to the variance

Var(Φ) = (a/4π)² ln²(x² + y²). (4.26)

It is instructive to compare (4.26) with the result of the deterministic 2D Poisson equation

(∂²/∂x² + ∂²/∂y²) Φ_d = a δ(x) δ(y),

with the solution

Φ_d = (a/4π) ln(x² + y²).

Thus, we see that the stochastic 2D Poisson equation (4.22) with the "charge" (4.25) leads to a zero mean solution and a variance (4.26) that is the square of the deterministic solution. A similar result is obtained in EX 4.2 for the case of the 3D Poisson equation.
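A short finite-difference cross-check of these two statements (the evaluation point and the constant a below are arbitrary choices of ours): Φ_d is harmonic away from the origin, and the variance (4.26) is exactly Φ_d².

```python
import math

a = 2.0

def phi_d(x, y):
    # deterministic solution Phi_d = (a/4pi) ln(x^2 + y^2)
    return (a / (4.0 * math.pi)) * math.log(x * x + y * y)

def laplacian(f, x, y, h=1e-4):
    # five-point stencil for the 2D Laplacian
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4.0 * f(x, y)) / (h * h)

x, y = 1.3, -0.7
lap = laplacian(phi_d, x, y)
var = (a / (4.0 * math.pi)) ** 2 * math.log(x * x + y * y) ** 2
print(lap, var, phi_d(x, y) ** 2)   # lap ~ 0; var equals phi_d^2
```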

In the case of SPDE in dimensions higher than three we cannot represent the solutions with stochastic functions but only in the form of distributions. This subject is beyond the scope of this book and we refer the reader for investigations in this field to the article of Walsh [4.2] and the books of Holden et al. [4.3], Krylov et al. [4.4] and Wojczinsky [1.10].

4.2. Stochastic Boundary and Initial Conditions

4.2.1. A deterministic one-dimensional wave equation

An interesting subject is the study of the evolution of deterministic PDE subjected to stochastic boundary or initial conditions. We investigate in this section the solution of a simple deterministic wave equation that is of importance in the study of nonlinear acoustics (see Whitham [4.5]) and we will introduce a stochastic initial condition in the next section.


The problem is to find a solution of the PDE that governs the space (x) and time (t) evolution of the wave velocity

∂v/∂x = (λ/c²) v ∂v/∂τ; v = v(x, t); τ = t − x/c. (4.27)

The variable τ is the convection time and we use the initial condition

v(0, t) = v₀ sin(ωt). (4.28)

In Equation (4.27) the parameter λ is given by λ = (1 + γ)/2 (γ is the ratio of the specific heats) and c is the sound speed at rest. To construct the solution to (4.27) we introduce the function

Φ(x, t) = v₀ sin{ω[t − x/(c + λΦ)]}. (4.29)

First we can simply find that Φ(0, t) complies with the initial condition (4.28). We rewrite (4.29) with the use of the Mach number M₀ = v₀/c and obtain

Φ(x, t) = v₀ sin{ω[t − (x/c)/(1 + λM₀Φ/v₀)]}. (4.30)

Equation (4.30) defines the function Φ only implicitly. We can, however, solve (4.30) explicitly for the convection time and we obtain

ωτ = arcsin(Φ/v₀) − (ωx/c)(λM₀Φ/v₀)/(1 + λM₀Φ/v₀). (4.31)

We differentiate (4.31) and this yields

∂Φ/∂τ = ω/G; ∂Φ/∂x = (ω/(cG)) (λM₀Φ/v₀)/(1 + λM₀Φ/v₀), (4.32)

where G is a certain function. Without giving the details of G, which we determine in EX 4.3, we eliminate G from the two equations in (4.32) and this yields

∂Φ/∂x = (1/c) [(λM₀Φ/v₀)/(1 + λM₀Φ/v₀)] ∂Φ/∂τ. (4.33)

Many applications of acoustics are, however, characterized by small values of the Mach number. Hence we obtain with an expansion from (4.33)

∂Φ/∂x = (λ/c²) Φ ∂Φ/∂τ,

and this is the PDE (4.27). Since Φ(0, t) also satisfies the initial condition, we can infer that the function that arises from an expansion of (4.30),

Φ(x, t) = v₀ sin{ω[τ + (x/c)λM₀Φ/v₀]}, (4.34)

is the solution of the PDE (4.27) and satisfies the initial condition. We now replace Φ by v and use a Fourier series expansion to investigate the frequency components of its discrete spectrum. We write (4.34) in the form

v(x, t) = v₀ sin(ωτ + βv/v₀); β = ωλM₀x/c. (4.35)

We obtain a Fourier series from

v/v₀ = Σ_{n=1}^∞ A_n(β) sin(nωτ). (4.36)

The coefficients of this series are given by

A_n(β) = (2/π) ∫₀^π sin(ωτ + βv/v₀) sin(nωτ) d(ωτ). (4.37)

We substitute ξ = ωτ + βv/v₀, v/v₀ = sin(ξ), and we obtain from (4.37)

A_n(β) = (2/π) ∫₀^π sin(ξ) sin[nξ − nβ sin(ξ)][1 − β cos(ξ)]dξ. (4.38)

To evaluate (4.38) we take advantage of the theorems of the harmonic functions and of the recurrence formulas of the Bessel functions (see Abramowitz and Stegun [1.3]). We use the integral representation of the Bessel function

J_k(nβ) = (1/π) ∫₀^π cos[kα − nβ sin(α)]dα,

Page 168: Henderson d., plaskho p.   stochastic differential equations in science and engineering

144 Stochastic Differential Equations in Science and Engineering

where J_n(z) denotes the Bessel function of order n. Thus, we obtain from (4.38)

A_n(β) = J_{n−1}(z) − J_{n+1}(z) − (β/2)[J_{n−2}(z) − J_{n+2}(z)] = 2J_n(z)/z; z = nβ. (4.39)

The substitution of (4.39) into (4.36) yields

v/v₀ = Σ_{n=1}^∞ (E_n^d)^{1/2} sin(nωτ); (E_n^d)^{1/2} = 2J_n(nβ)/(nβ). (4.40)

The relation (4.40), that is sometimes called the Bessel-Fubini formula, represents now the explicit solution to the PDE (4.27) for small values of the Mach number M₀. The latter condition limits the validity of (4.40) to a space variable of order x = O(1).
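The reduction of (4.38) to the Bessel-Fubini coefficient 2J_n(nβ)/(nβ) can be checked numerically. The following is a minimal sketch (standard library only; the step counts and the sample value of β are illustrative choices, not part of the text) that evaluates the Fourier integral (4.38) by the trapezoidal rule and compares it with the closed form, computing J_n from the integral representation quoted above.

```python
import math

def bessel_j(n, x, m=2000):
    # J_n(x) = (1/pi) * integral_0^pi cos(n*t - x*sin(t)) dt  (integer n)
    h = math.pi / m
    s = 0.0
    for i in range(m + 1):
        t = i * h
        s += (0.5 if i in (0, m) else 1.0) * math.cos(n * t - x * math.sin(t))
    return s * h / math.pi

def fourier_coeff(n, beta, m=4000):
    # A_n(beta) directly from the integral (4.38)
    h = math.pi / m
    s = 0.0
    for i in range(m + 1):
        xi = i * h
        f = (math.sin(xi) * math.sin(n * xi - n * beta * math.sin(xi))
             * (1.0 - beta * math.cos(xi)))
        s += (0.5 if i in (0, m) else 1.0) * f
    return 2.0 * s * h / math.pi

beta = 0.4
for n in (1, 2, 3):
    # both columns should agree to quadrature accuracy
    print(n, fourier_coeff(n, beta), 2.0 * bessel_j(n, n * beta) / (n * beta))
```

The agreement of the two columns illustrates the Bessel recurrence manipulations behind (4.39).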

4.2 .2 . Stochastic initial conditions

Now we concentrate on a stochastic wave problem where the stochasticity is introduced by the application of a stochastic initial condition that includes slowly varying amplitudes and phases. We follow here the ideas of Rudenko & Soluyan [4.6], Akhmanov & Chirkin [4.7] and Rudenko & Chirkin [1.9]. First we apply the initial condition

v(x = 0, t) = v₀(Ωt) sin[ωt + ψ(Ωt)]; Ω = εω, 0 < ε ≪ 1. (4.41)

The relation (4.41) represents a randomly modulated wave with a stochastic phase ψ. In the following we will assume that the parameter ε is sufficiently small such that we can use the solution

v(x, τ) = v₀(Ωτ) sin[ωτ + (λ/c²)ωv(x, τ)x + ψ(Ωτ)], (4.42)

where τ is again the convection time defined in (4.27). (4.42) complies with the PDE (4.27) and with the initial condition (4.41). We rewrite (4.42) with the use of new variables

V(z, θ) = A(θ) sin[θ + zV(z, θ) + φ(θ)]; θ = ωτ; V = v/a;
A(θ) = v₀(Ωτ)/a; φ(θ) = ψ(Ωτ); z = λωax/c²; a² = ⟨v₀²⟩, (4.42')

where z is the nondimensional position of the wave.


To find the explicit solution of (4.42') we take again advantage of the Bessel-Fubini formula and we put

V(z, θ) = A(θ) sin[θ + zV + φ(θ)] = Σ_{n=1}^∞ B_n(z) sin{n[θ + φ(θ)]}.

The inverse of this relation is given by

B_n(z) = (2/π) ∫₀^π A(θ) sin(ξ) sin[n(θ + φ)] d(θ + φ),

with ξ = θ + zV + φ; θ + φ = ξ − zA sin(ξ). Thus we obtain

B_n(z) = (2/π) A(θ) ∫₀^π sin(ξ) sin[n(ξ − zA sin(ξ))][1 − zA cos(ξ)]dξ.

The evaluation of this integral leads in analogy to (4.40) to the explicit solution with the series expansion

V(z, θ) = 2 Σ_{n=1}^∞ [J_n(nzA(θ))/(nz)] sin{n[θ + φ(θ)]}. (4.43)

It is easy to verify that (4.43) tends in the limit z → 0 to the initial condition (4.41). We use (4.43) to calculate the correlation function of the velocity V. We employ P₄(A, A', φ, φ') as the tetravariate distribution function of a stationary normal process. The amplitudes have zero mean and unit variance and they vary in the interval (−∞, ∞) whereas the phases vary in [0, 2π]. We put θ' = θ + ωτ; A = A(θ), A' = A(θ'), φ = φ(θ), φ' = φ(θ') ((A, φ) and (A', φ') are two pairs of cylindrical coordinates) and it was shown in [4.6] that this leads to

C(z, η) = ⟨V(z, θ)V(z, θ')⟩
= Σ_{n,m=1}^∞ [4/(nmz²)] ∫₀^∞ dA A J_n(nzA) ∫₀^∞ dA' A' J_m(mzA')
× ∫₀^{2π} dφ sin[n(φ + θ)] ∫₀^{2π} dφ' sin[m(φ' + θ')]
× P₄(A, A', φ, φ'). (4.44)

The four-dimensional distribution was derived in [4.6]

P₄(A, A', φ, φ') = N exp{−p(η)[A² + A'² − 2AA'b cos(φ' − φ − η)]};
b = b(η), η = ωτ, b(0) = 1; N = AA'/[4π²(1 − b²)]; p = 1/[2(1 − b²)]. (4.45)


Note that the function b(η) is the envelope of the input-signal correlation

C(z = 0, η) = b(η) cos(η). (4.46)

We begin the evaluation of (4.44) with the determination of the phase integrals. To achieve this goal we must use the expansion (I_n(z) is the modified Bessel function of order n (see [1.3]))

exp[T cos(φ' − φ − η)] = Σ_{k=0}^∞ s_k I_k(T) cos[k(φ' − φ − η)];
T = 2AA'bp(η); s₀ = 1, s_k = 2 ∀ k ≥ 1. (4.47)

The evaluation of the amplitude integrals leads eventually to the correlation function (see [4.6])

C(z, η) = 2 Σ_{n=1}^∞ [exp(−(nz)²)/(nz)²] I_n[b(η)(nz)²] cos(nη). (4.48)

We compare now the growth (or decay) of the harmonic intensities for the deterministic (E_n^d) and the stochastic case (E_n^s). We have according to (4.40) and (4.48)

E_n^d = (4/a) J_n²(√a) and E_n^s = (2/a) exp(−a) I_n(a); a = (nz)². (4.49)

We obtain E_n^s from (4.48) for η = 0, where b(0) = 1. A graphical result of this comparison is given in Figure 4.1.


Fig. 4.1. Variation of the harmonic (continuous lines) and stochastic (broken lines) intensities with the nondimensional position of the wave z. Upper pair of lines: fundamental mode (n = 1); lower pair of curves: second harmonic mode (n = 2).
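The comparison of Figure 4.1 can be reproduced with a short stdlib-only sketch of the two intensity laws (4.49); the integral representations of J_n and I_n used below and the sample values of z are assumptions of this illustration.

```python
import math

def Jn(n, x, m=2000):
    # J_n(x) = (1/pi) * integral_0^pi cos(n*t - x*sin(t)) dt
    h = math.pi / m
    s = 0.0
    for i in range(m + 1):
        t = i * h
        s += (0.5 if i in (0, m) else 1.0) * math.cos(n * t - x * math.sin(t))
    return s * h / math.pi

def In(n, x, m=2000):
    # I_n(x) = (1/pi) * integral_0^pi exp(x*cos(t)) * cos(n*t) dt
    h = math.pi / m
    s = 0.0
    for i in range(m + 1):
        t = i * h
        s += (0.5 if i in (0, m) else 1.0) * math.exp(x * math.cos(t)) * math.cos(n * t)
    return s * h / math.pi

def intensities(n, z):
    # E_n^d and E_n^s of Eq. (4.49) with a = (n z)^2
    a = (n * z) ** 2
    return 4.0 * Jn(n, n * z) ** 2 / a, 2.0 * math.exp(-a) * In(n, a) / a

for z in (0.25, 0.5, 1.0):
    print(z, intensities(1, z), intensities(2, z))
```

Both fundamental intensities start near 1 at small z, and the stochastic intensity falls below the deterministic one as z grows, as in the figure.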


4.3. Stochastic Eigenvalue Equations

4.3.1. Introduction

To consider the nature of random eigenvalue equations we consider a boundary value problem for an ODE that includes a parameter λ

Lu = λu; u = u(x, λ); x₁ < x < x₂; (4.50)

L = dⁿ/dxⁿ + a_{n−1}(x, λ)d^{n−1}/dx^{n−1} + ... + a₁(x, λ)d/dx + a₀(x, λ),

where L is a linear nth order differential operator. The boundary conditions (BC's) are given by

l_i(u) = 0; i = 1, ..., n, (4.51)

and they involve conditions at the endpoints x₁ and x₂. We assume that u₁(x, λ), ..., u_n(x, λ) is a fundamental set of (4.50). This leads to the general solution

u(x, λ) = Σ_{j=1}^n K_j u_j(x, λ); K_j = const. (4.52)

The application of the BC's yields

Σ_{j=1}^n K_j l_i(u_j(ξ, λ)) = 0; ξ = x₁ or ξ = x₂; i = 1, 2, ..., n. (4.53)

Equation (4.53) forms a set of n linear homogeneous algebraic equations for the coefficients K_j; j = 1, ..., n. Hence, non-trivial solutions exist only if the determinant vanishes and this gives the eigenvalue relation

Φ(λ) = det[l_i(u_j(ξ, λ))] = 0. (4.54)

Solutions to (4.54) exist only for a parameter λ, called the eigenvalue, that satisfies certain conditions. There exists in particular a discrete spectrum if the eigenvalues can be labeled with an integer number k and comply with λ₀ < λ₁ < ⋯ < λ_n < ⋯. By contrast we obtain a continuous spectrum if the eigenvalue is defined on a continuous interval a < λ < b. There exists also the combination of an eigenvalue spectrum that contains both a discrete as well as a continuous part. Functions corresponding to the kth eigenvalue, u_k = u(x, λ_k), are called eigenvectors or eigenfunctions.

A stochastic eigenvalue problem arises when either the differential equation or the BC's contain a stochastic parameter θ. This means that we consider in Section 4.3 only ODEs with random coefficient functions but not SDEs. The main goal is the determination of the PD of the eigenvalues, p_λ(λ). The ODE (4.50) is linear and we can in principle calculate its solutions. However, we encounter in many applications a nonlinear problem with a nonlinear operator or nonlinear BC's, or both. This means that the exact solution of (4.50) is inaccessible and exact solutions for the PD p_λ(λ) are impossible. Furthermore, there is in various applications in chemistry and physics the problem that one must measure the coefficients of the operator L or the coefficients of the BC's. The latter measurements introduce random effects. Geometrical dimensions can be measured with rather high precision but other quantities such as the position or the velocity of a particle are less accessible to exact measurements. This means that the coefficients in the operator or in the BC's introduce a randomness. Thus, we must consider more modest goals and restrict the considerations to the calculation of the lowest moments of the PD p_λ(λ). In practical cases one would already be satisfied with the calculation of the mean and the variance of the eigenvalue λ.

4.3.2. Mathematical methods

Every procedure to calculate the eigenvalues consists of two different parts: a deterministic and a stochastic part. Accordingly there are two categories of methods.

In the first method we try to calculate or approximate the first eigenvalues and finally apply stochastic routines to calculate the PD p_λ(λ) or at least the lowest order moments of this PD. Such an approach was called by Keller [4.8], [4.9] an "honest" procedure.

By contrast, we use in the second method statistical methods such as the calculation of the moments of the ODE and apply approximate methods such as the assumption of stationary processes or weak correlations. A typical approach of this kind is used in the fluid dynamic theory of turbulence (see e.g. Frisch [4.10]). This type of procedure was named by Keller "dishonest". However, we have to keep in mind that an "honest" approach does not imply more accuracy for its solutions or approximations than the results of a "dishonest" routine.

The methods that are used in both types of approaches are frequently one of the following routines or a combination thereof

- Variational principles,
- Perturbation expansions,
- Green's function and transformation to an integral equation,
- Asymptotic theory for high order eigenvalues,
- Asymptotic theory for singular differential operators,
- Iterations.

Furthermore, one introduces elements of statistical theory and statistical assumptions like independence of variables, stationary processes, weak correlations, etc. A survey of these methods is given by Haines [4.11] and Boyce [4.12]. Random boundary problems are studied in detail by Bass and Fuks [4.13].

Here we use an asymptotic theory to treat the problem of higher-order eigenvalues.

Higher order eigenvalues

We consider the Sturm-Liouville problem (see also (3.47)) and investigate the eigenvalue problem

y'' + p'y'/p − (r/p − λq/p)y = 0;
y = y(x, λ); p = p(x); q = q(x); r = r(x); (4.55)
' = d/dx; 0 < x < 1,

with the BC's

y'(0, λ) + Cy(0, λ) = 0; y'(1, λ) + Dy(1, λ) = 0; C, D = const. (4.56)

We use a transformation to put (4.55) into a form amenable to the WKB routine (see also the appendix A of Chapter 3). We must (i) eliminate the first derivative and (ii) obtain a constant as the coefficient of λ. To this end we scale both the eigenfunction and the independent variable

y(x, λ) = E(x)u(ξ, λ); ξ = H(x)/K; H(0) = 0; H(1) = K, 0 ≤ ξ ≤ 1, (4.57)

where E(x) and H(x) are two unknown functions to be determined such that the conditions (i) and (ii) are met. The derivatives of y(x, λ) and u(ξ, λ) are calculated in EX 4.5 and the substitution into (4.55) leads to

(EH'²/K²)u_ξξ + (1/K)[E'H' + (EH')' + (p'/p)EH']u_ξ
+ [λ(q/p)E + E'' + (p'/p)E' − (r/p)E]u = 0. (4.58)

Hence we can satisfy the conditions (i) and (ii) if we put

2E'/E + H''/H' + p'/p = 0; H(x) = ∫₀ˣ √(q(z)/p(z)) dz, (4.59)

where the first part of (4.59) leads to

E = (pq)^{−1/4}. (4.60)

Now we can write (4.55) in the form

u_ξξ + [μ² − R(ξ)]u = 0; μ² = K²λ ≫ 1;
R(ξ) = [r/p − E''/E − p'E'/(pE)] K²/H'². (4.61)

We suppose that the eigenvalues increase with an order such that μ² is a large parameter. Thus we can take advantage of the WKB routine and we propose the asymptotic expansion

u(ξ, μ) = exp[μu₀(ξ)][G₀(ξ) + (1/μ)G₁(ξ) + O(μ⁻²)]. (4.62)

This yields in leading order (μ²) the eikonal equation

u₀ξ² = −1 or u₀ = ±iξ, (4.63)

and the first correction order (μ¹) leads to

G₀ξ/G₀ = −(1/2)u₀ξξ/u₀ξ, or G₀ = const. (4.64)


Thus we find that the expansion (4.62) has the form

u(ξ, μ) = A₁ cos(μξ) + A₂ sin(μξ) + O(μ⁻¹). (4.65)

We see that u is of O(1) whereas the derivative u_ξ is of order O(μ). Using the results of EX 4.5 we find that the BC's (4.56) are in leading order independent of the constants C and D; (4.56) reduces to

u_ξ(0, μ) = u_ξ(1, μ) = 0. (4.66)

The application of (4.66) to (4.65) leads to the eigenvalue and the eigenfunction

μ_k = πk; k ∈ ℕ; k → ∞; u(ξ, μ) = A₁ cos(kπξ), (4.67)

where k is an integer number. The eigenvalue λ of the original problem (4.55) is then given by

λ = (kπ/K)² + o(k). (4.68)
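The estimate above can be illustrated for deterministic coefficients. The following sketch (all choices are illustrative assumptions: p = 1, q = 1 + x, r = 0, C = D = 1 in (4.56)) locates a high-order eigenvalue by shooting and compares it with (kπ/K)².

```python
import math

def residual(lam, C=1.0, D=1.0, n=1000):
    # integrate y'' = -(q/p)*lam*y with y(0) = 1, y'(0) = -C (classical RK4)
    # and return the boundary residual y'(1) + D*y(1)
    h = 1.0 / n
    y, yp = 1.0, -C
    rhs = lambda x, y, yp: (yp, -(1.0 + x) * lam * y)
    for i in range(n):
        x = i * h
        k1 = rhs(x, y, yp)
        k2 = rhs(x + h/2, y + h/2*k1[0], yp + h/2*k1[1])
        k3 = rhs(x + h/2, y + h/2*k2[0], yp + h/2*k2[1])
        k4 = rhs(x + h, y + h*k3[0], yp + h*k3[1])
        y += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        yp += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return yp + D * y

K = (2.0/3.0) * (2.0**1.5 - 1.0)      # K = integral_0^1 sqrt(q/p) dx
k = 20
pred = (k * math.pi / K) ** 2
# scan around the prediction for sign changes of the residual, then bisect
lams = [0.9 * pred + 0.2 * pred * i / 40 for i in range(41)]
vals = [residual(l) for l in lams]
roots = []
for (lo, flo), (hi, fhi) in zip(zip(lams, vals), zip(lams[1:], vals[1:])):
    if flo * fhi < 0:
        for _ in range(50):
            mid = 0.5 * (lo + hi)
            fm = residual(mid)
            if flo * fm <= 0:
                hi = mid
            else:
                lo, flo = mid, fm
        roots.append(0.5 * (lo + hi))
print(pred, roots)
```

An eigenvalue is found within a few percent of the WKB prediction, consistent with the o(k) error term in (4.68).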

Now we make assumptions about the randomness of the problem. We introduce stochastic functions for the coefficients

q(x) = 1 + αu(x); p = 1 + αv(x); α = const., (4.69)

where u(x) and v(x) are random functions and α is an intensity parameter. To find the moments of the eigenvalue λ we must calculate the moments of K². Hence, we obtain

⟨K^{2m}⟩ = (kπ)^{2m}⟨λ^{−m}⟩. (4.70)

Example

Here we use a Brownian motion for the coefficients (4.69)

q(x) = 1 + αB_x; p = 1 + αβB_x; 0 < α ≪ 1; 0 < β < 1, (4.71)

where B_x is the Brownian motion with a spatial argument. The corresponding series expansion of K² and K⁴ for α → 0 is given in EX 4.6 and this yields

⟨K²⟩ = 1 + α²(β − 1)[(1 + 3β)/2 + 2(β − 1)/3]/8 + O(α⁴). (4.72)


4.3.3. Examples of exactly soluble problems

We consider here three examples of simple problems with exact solutions. In all of them we use the nomenclature that 9 and A represent a stochastic parameter and the eigenvalue, respectively.

Example 1

We consider the linear problem

u'' + λu = 0; ' = d/dx; 0 < x < 1;
u(0, λ) = 0; u'(1, λ) + θu(1, λ) = 0, (4.73)

with θ ∈ [0, ∞) appearing only in one of the BC's. We solve the ODE using u(x) = exp(σx) and we obtain σ² + λ = 0. The solutions of (4.73) that satisfy u(0) = 0 are given by

(i) λ > 0: σ = ±i√λ and u(x) = C sin(√λ x); C = const.,

or alternatively

(ii) λ < 0: σ = ±√|λ| and u(x) = D sinh(√|λ| x); D = const.

The application of the second BC leads to

(i) C[θ sin(√λ) + √λ cos(√λ)] = 0, (4.74)

and

(ii) D[θ sinh(√|λ|) + √|λ| cosh(√|λ|)] = 0. (4.75)

Thus we obtain in case (i) the eigenvalue equation

θ = −√λ cot(√λ) = G(λ) > 0, (4.76)

whereas there is no eigenvalue problem for the other alternative (ii) (see EX 4.7). The latter statement means that we have to comply with

λ > 0. (4.77)

Equations (4.76) and (4.77) give for θ > 0 the eigenvalue relation

(2n + 1)²π²/4 < λ_n < (n + 1)²π²; n = 0, 1, .... (4.78)

We display the eigenvalues λ₀ and λ₁ in Figures 4.2(a) and 4.2(b). We see from these figures that the eigenvalues are monotonically increasing functions of θ and we can uniquely calculate the


Fig. 4.2. The zero and first order eigenvalues λ₀ and λ₁ of the problem (4.73) plotted versus the stochastic parameter θ.

inverse function of G(λ). Thus,

λ_n = G_n^{−1}(θ); n = 0, 1, ..., (4.79)

where the subscript of the function G^{−1} refers to the number of the eigenvalue. We continue with this problem in the next section.
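A small stdlib sketch of this inversion: since G(λ) = −√λ cot(√λ) increases monotonically on each bracket given by (4.78), λ_n = G_n^{−1}(θ) can be computed by bisection (the sample values of θ are illustrative).

```python
import math

def G(lam):
    # G(lambda) = -sqrt(lambda)*cot(sqrt(lambda)), Eq. (4.76)
    s = math.sqrt(lam)
    return -s / math.tan(s)

def eigenvalue(theta, n, it=100):
    # invert G on the nth bracket (4.78) by bisection
    lo = (2*n + 1)**2 * math.pi**2 / 4 + 1e-9
    hi = (n + 1)**2 * math.pi**2 - 1e-9
    for _ in range(it):
        mid = 0.5 * (lo + hi)
        if G(mid) < theta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for theta in (0.5, 1.0, 2.0):
    # eigenvalues grow monotonically with theta, as in Figure 4.2
    print(theta, eigenvalue(theta, 0), eigenvalue(theta, 1))
```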

Example 2

We consider a problem in polar coordinates

r²u'' + ru' + (λr² − θ²)u = 0; u = u(r, λ); ' = d/dr, (4.80)

with the BC's

|u(0, λ)| < ∞; u(1, λ) = 0. (4.81)

The solution to the ODE (4.80) takes the form

u(r, λ) = C J_θ(r√λ) + D Y_θ(r√λ); C, D = const., (4.82)


where J_θ(x) (Y_θ(x)) denote the Bessel functions of the first (second) kind of order θ with the argument x, and θ ≥ 0 stands again for the random parameter. The function Y_θ diverges at r = 0, and we put in accordance with the first BC D = 0. Thus, we obtain from the second BC the eigenvalue equation

J_θ(√λ) = 0. (4.83)

We give in EX 4.8 hints how to solve (4.83) numerically and we display the results in Figure 4.3. These results show an approximately parabolic variation of the eigenvalues. However, we can also confirm this variation if we use for large values of √λ the asymptotic behavior of the Bessel functions. This yields with (4.83) (see [1.4])

λ_k ≈ (π/4)²(3 + 4k + 2θ)². (4.84)

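For an integer value of the random parameter the characteristic equation (4.83) can be checked against the asymptotic estimate (4.84) with the standard library alone; the choice θ = 1, k = 0 below is an illustrative assumption.

```python
import math

def J(nu, x, m=4000):
    # integral representation of the Bessel function (valid for integer nu)
    h = math.pi / m
    s = 0.0
    for i in range(m + 1):
        t = i * h
        s += (0.5 if i in (0, m) else 1.0) * math.cos(nu * t - x * math.sin(t))
    return s * h / math.pi

def first_zero(nu, lo, hi, it=40):
    # bisect a bracketed zero of J_nu
    for _ in range(it):
        mid = 0.5 * (lo + hi)
        if J(nu, lo) * J(nu, mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

theta, k = 1, 0
z = first_zero(theta, 3.0, 4.5)        # first positive zero of J_1
lam_exact = z * z                      # from J_theta(sqrt(lambda)) = 0
lam_asym = (math.pi / 4)**2 * (3 + 4*k + 2*theta)**2
print(lam_exact, lam_asym)
```

As Figure 4.3 suggests, the asymptotic value overshoots the exact lowest eigenvalue only by a few percent.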

Example 3

A quantum-mechanical particle moves on the u-axis in a potential that increases linearly with u. The separated Schrödinger equation


Fig. 4.3. The zero and the first order eigenvalues λ₀ (lower pair) and λ₁ (upper pair) of the problem (4.80) are plotted against the stochastic parameter θ. The continuous (broken) lines correspond to the numerical solutions of (4.83) (asymptotic approximations (4.84)).


has in this case the form

ψ''(u) = uψ(u); ψ(∞) = 0. (4.85)

The solution to this equation is

ψ(u) = C Ai(u); C = const.; Ai(∞) = 0, (4.86)

where Ai(u) stands for the Airy function that oscillates for u < 0. After this physical motivation, we formulate a stochastic boundary value problem:

y''(x) = θⁿ(x + λθ^{−n/3})y(x); ' = d/dx; y(−1) = y(∞) = 0. (4.87)

We can transform (4.87) into a Schrödinger equation with a stochastic potential if we put

u = λ + xθ^{n/3}. (4.88)

Indeed, using (4.88) we find that the solution to (4.87) is

y(u) = C Ai(λ + xθ^{n/3}), (4.89)

and (4.89) already satisfies the BC at infinity. Complying with the second BC yields

x = −1: Ai(λ − θ^{n/3}) = 0. (4.90)

In EX 4.9 it is shown that the Ai-function in (4.90) exhibits the discrete spectrum

λ_k − θ^{n/3} = −|u_k|; k = 0, 1, 2, ...
with |u_k| = {2.33811, 4.08795, 5.52056, ...}. (4.91)

We may now solve (4.91) for the variable θ and this yields in the special case n = 1

n = 1: θ = G(λ_k) = (λ_k + |u_k|)³; λ_k ∈ [−|u_k|, ∞). (4.92)

We continue in the next section with the calculation of the moments of the eigenvalues and we will use the Examples 1 through 3 to illustrate these ideas.


4.3.4. Probability laws and moments of the eigenvalues

In Section 4.3.2 we could introduce random functions at will for the coefficient functions p and q (see (4.69)). We then calculated the moments of the eigenvalues, given the probability laws of the random functions used [like the Brownian motion in (4.71)].

Here we assume that the task of establishing the eigenvalue relation is already performed. We write the latter equation again in the form θ = G(λ). To determine the PD p_λ(λ) we use the following theorem:

Theorem 4.1. We recall the definition of the 1D probability distribution function (PDF) of two continuous random variables X and Y: F_X(x) = Pr(X ≤ x); F_Y(y) = Pr(Y ≤ y) [see (1.1)]. We transform the variables X and Y into each other with the aid of a strictly increasing function G: y = G(x); x = G^{−1}(y), where the monotonic properties of G ensure the existence of the inverse function G^{−1}. It is now easy to find (see also EX 4.10) that X ≤ x implies Y ≤ y. This yields

Pr(Y ≤ G(x)) = Pr(X ≤ x), (4.93a)

and this is equivalent to the relation for the PDF's

F_X(x) = F_Y(G(x)) or F_Y(y) = F_X(G^{−1}(y)). (4.93b)

Equation (4.93b) also implies the rule for the PD's [see (1.3)]

p_Y(y) = dF_Y/dy = p_X(G^{−1}(y)) dG^{−1}(y)/dy. (4.94a)

Note that if the function G(x) were strictly decreasing we would obtain on the right hand side of (4.94a) a minus sign (see EX 4.10). Hence we obtain in conclusion

p_Y(y) = p_X(G^{−1}) |dG^{−1}/dy|. (4.94b)

Note also that (4.94b) is the 1D version of the law of transformation of stochastic variables derived in Section 1.5.


The application of Theorem 4.1 to the characteristic equation θ = G(λ), with G a strictly increasing function, leads to p_λ(λ) = p_θ(θ)(dθ/dλ). Thus, we obtain for the nth moment of the kth eigenvalue

⟨λ_k^n⟩ = ∫_u^v λ_k^n p_λ(λ_k)dλ_k = ∫_u^v λ_k^n p_θ(θ)(dθ/dλ_k)dλ_k; u = a_k, v = b_k, (4.95)

where we took the kth eigenvalue that is defined in the interval a_k ≤ λ_k ≤ b_k.

We apply now (4.95) to the Examples 1 to 3 of Section 4.3.3.

Example 1

We use G(λ_k) given by (4.76) and we obtain the limits of the integral (4.95) for the first two eigenvalues: a₀ = (π/2)², a₁ = (3π/2)²; b₀ = π², b₁ = 4π². We introduce two different types of PD's. (a) We suppose that the random variable θ is uniformly distributed in the interval 0 ≤ θ ≤ β and we put

p_θ(θ) = 1/β for θ ∈ [0, β]; p_θ(θ) = 0 elsewhere. (4.96)

After the substitution z = √λ_k we obtain the integrand

λ_k^n p_θ(θ)(dθ/dλ_k)dλ_k = (1/β) z^{2n} [z − sin(z)cos(z)]/sin²(z) dz; √a_k ≤ z ≤ z_{u,k}. (4.97)

The upper limit of the integration, z_{u,k}, is the solution of β = −z_{u,k} cot(z_{u,k}) in the kth interval. Hence, we obtain the moments with (4.95) and (4.97) in the form

⟨λ_k^n⟩ = (1/β) ∫_{√a_k}^{z_{u,k}} z^{2n} [z − sin(z)cos(z)]/sin²(z) dz. (4.98)

Note that as β → 0+ we have z_{u,k} → √a_k. The corresponding integral is an undetermined form and we obtain (see EX 4.11)

⟨λ_k⟩(β = 0+) = (2k + 1)²π²/4. (4.99)
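The moment integral can be evaluated with a few lines of stdlib code; this sketch assumes the forms θ = −z cot(z) and the integrand of (4.97)-(4.98), and the quadrature and bisection settings are illustrative.

```python
import math

def z_upper(beta, k, it=80):
    # solve beta = -z*cot(z) on ((2k+1)pi/2, (k+1)pi) by bisection
    lo = (2*k + 1) * math.pi / 2 + 1e-12
    hi = (k + 1) * math.pi - 1e-12
    for _ in range(it):
        mid = 0.5 * (lo + hi)
        if -mid / math.tan(mid) < beta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def moment(beta, k=0, n=1, m=20000):
    # <lambda_k^n> for theta uniform on [0, beta], by the trapezoidal rule
    a, b = (2*k + 1) * math.pi / 2, z_upper(beta, k)
    h = (b - a) / m
    s = 0.0
    for i in range(m + 1):
        z = a + i * h
        f = z**(2*n) * (z - math.sin(z) * math.cos(z)) / math.sin(z)**2
        s += (0.5 if i in (0, m) else 1.0) * f
    return s * h / beta

print(moment(1.0), moment(0.001))   # compare with the Table 4.1 entries
```

For β = 1 the result should lie close to the tabulated ⟨λ₀⟩, and for β → 0+ it approaches the limit π²/4 of (4.99).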


Table 4.1. The moments of the zero and first order eigenvalues and the corresponding variances.

β        ⟨λ₀⟩     σ₀       ⟨λ₁⟩     σ₁
0.001    2.4687   0.       22.208   0.
0.25     2.7092   0.1373   22.456   0.1433
0.5      2.9354   0.2613   22.702   0.2846
1        3.3460   0.4750   23.859   0.5585
2        4.0290   0.7948   24.107   1.0628
3        4.5715   1.0135   24.954   1.5012
4        5.0118   1.1658   25.726   1.8742
8        6.1721   1.4456   28.151   2.8504
10       6.5456   1.4939   29.056   3.1293

We give the corresponding results in terms of Table 4.1, where we use

σ_k² = ⟨λ_k²⟩ − ⟨λ_k⟩²; k = 0, 1.

We calculate the integral (4.98) with the Mathematica program M41, which is given on the attached CD.

Note that the moments for β = 0+ given by (4.99) almost coincide with the values given for β = 0.001 in Table 4.1. (b) Here we apply the normal PD with zero mean and we obtain

p_λ(λ) = p_θ(θ)(dθ/dλ) = (2πσ)^{−1/2} exp[−θ²/(2σ)](dθ/dλ). (4.100)

We use again (4.97) and we obtain the moments in the form

⟨λ_k^n⟩ = (2πσ)^{−1/2} ∫_{√a_k}^{√b_k} z^{2n} [z − sin(z)cos(z)]/sin²(z) exp[−z² cot²(z)/(2σ)] dz. (4.101)

The numerical results for the eigenvalues are calculated with the program M42 and they are displayed in Table 4.2.

Example 2

We calculate here numerically the moments of the eigenvalues of the characteristic equation (4.83).


Table 4.2. The moments of the zero order eigenvalue and the variances for the normal PD.

σ      ⟨λ₀⟩      √(⟨λ₀²⟩ − ⟨λ₀⟩²)
0.2    2.38242   0.92072
1      2.33368   1.64598
5      2.58837   2.38830

Table 4.3. The moments of the eigenvalues with the characteristic equation (4.83).

σ      ⟨λ₀⟩      √(⟨λ₀²⟩ − ⟨λ₀⟩²)
0.05   3.5919    3.6754
0.2    4.3650    4.6818
1      6.5718    7.9691

Note that Figure 4.3 indicates only minor variations of the eigenvalues with the stochastic parameter. Hence we can use a polynomial fit to approximate the eigenvalues with sufficient accuracy. We use again the PD (4.100) and we obtain the results disclosed in Table 4.3. The corresponding Mathematica program is M43.

Example 3

We apply (4.92) and this yields

⟨λ_k^n⟩ = ∫ λ_k^n p_θ(G(λ_k)) (dG/dλ_k) dλ_k; a = |u_k|. (4.102)

We employ the PD (4.96) and we obtain with (4.102)

⟨λ_k⟩ = (1/β) ∫_{−a}^{b} λ_k · 3(λ_k + a)² dλ_k
= (3/β) ∫₀^{β^{1/3}} z²(z − a) dz; b = β^{1/3} − a, (4.103)

with z = λ_k + a. We can calculate the integral (4.103) analytically and we obtain (see also EX 4.12)

⟨λ_k⟩ = −a + (3/4)β^{1/3}, (4.104)


and this leads to

⟨λ_k⟩ = −|u_k| + O(β^{1/3}). (4.105)
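A quick Monte Carlo sketch of (4.102)-(4.104): with θ uniform on [0, β] and θ = (λ + a)³ from (4.92), λ = θ^{1/3} − a, so the sample mean of λ should approach −a + (3/4)β^{1/3}. The value of β and the sample size are illustrative assumptions.

```python
import math
import random

random.seed(1)
a = 2.33811                     # |u_0|, the first Airy zero from (4.91)
beta = 0.1
n = 200_000
# sample theta ~ U(0, beta) and map through the inverse of (4.92)
mc = sum(random.uniform(0.0, beta)**(1.0 / 3.0) - a for _ in range(n)) / n
exact = -a + 0.75 * beta**(1.0 / 3.0)
print(mc, exact)
```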

4.4. Stochastic Economics

4.4.1. Introduction

The London (FTSE) and the New York (NASDAQ) stock exchange data for about 13 years were fitted by Benth [4.14]. This leads to considerable differences with respect to the predictions of the theoretical approach of Black & Scholes [4.15] based on a GPD. Particularly unsatisfying is the skewness and tail heaviness of the empirical data, which cannot be predicted by a GPD. Thus the normal inverse Gaussian distribution (NIGD) (see e.g. Barndorff-Nielsen [4.16] and Eberlein & Keller [4.17]) was invented to fit the stock data. As opposed to the GPD with two parameters, the NIGD has the advantage of being equipped with four parameters:

α: tail heaviness, 0 ≤ |β| < α,
β: skewness; β > 0 (β < 0) positive (negative) skewness, β = 0 symmetric NIGD,
δ > 0: scale parameter,
μ: position on the real axis.

With these parameters we can construct the NIGD

p(x; α, β, μ, δ) = (k/ρ) exp[β(x − μ)] K₁(αρ);
ρ = √(δ² + (x − μ)²); k = (αδ/π) exp[δ(α² − β²)^{1/2}], (4.106)

where K₁ is the modified Bessel function (see [1.3])

K₁(u) = (1/2) ∫₀^∞ exp[−u(z + 1/z)/2] dz. (4.107)

The NIGD governs the stochastic Levy process L(t), and its mean and variance have the form

⟨L(t)⟩ = μ + βδ(α² − β²)^{−1/2}; Var(L(t)) = α²δ(α² − β²)^{−3/2}. (4.108)
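The relations (4.106)-(4.108) can be checked numerically with the standard library; the truncation limits, grid sizes and parameter values in this sketch are assumptions of the illustration.

```python
import math

def k1(u, m=2000, zmax=40.0):
    # K_1(u) = (1/2) * integral_0^inf exp[-u(z + 1/z)/2] dz, Eq. (4.107)
    h = zmax / m
    s = 0.0
    for i in range(1, m + 1):        # the integrand vanishes at z -> 0
        z = i * h
        s += (0.5 if i == m else 1.0) * math.exp(-0.5 * u * (z + 1.0 / z))
    return 0.5 * s * h

def nig_pdf(x, alpha, beta, mu, delta):
    # NIG density, Eq. (4.106)
    rho = math.sqrt(delta**2 + (x - mu)**2)
    k = (alpha * delta / math.pi) * math.exp(delta * math.sqrt(alpha**2 - beta**2))
    return (k / rho) * math.exp(beta * (x - mu)) * k1(alpha * rho)

alpha, beta, mu, delta = 2.0, 0.5, 0.0, 1.0
xs = [-12.0 + 0.05 * i for i in range(481)]
ps = [nig_pdf(x, alpha, beta, mu, delta) for x in xs]
mass = sum(ps) * 0.05
mean = sum(x * p for x, p in zip(xs, ps)) * 0.05
print(mass, mean, mu + beta * delta / math.sqrt(alpha**2 - beta**2))
```

The density integrates to one and its numerical mean agrees with the first formula of (4.108).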


It was shown in [4.14] that approaches based on Levy processes fit the stock data with respect to skewness and tail heaviness, and a simple price model can be written as

S(t) = S(0) exp[L(t)]. (4.109)

However, Equation (4.109) is still of a preliminary nature and in the next section we will discuss a more realistic price model. We complete now this section by explaining some elements of the financial terminology.

Assume that an investor makes two types of investments: a risky stock and a comparatively low risk bond. The investor's aim is to obtain a fair price when he sells his investments. To achieve this he makes an option contract, where the financial asset depends on another asset. Option means that the contract contains specific choices or alternatives. Such contracts are also referred to as derivatives, because their value is derived from an underlying asset. There exists a great variety of derivatives; we mention only the call option and alternatively the put option. A comprehensive survey of options is given by Hull [4.18]. The problem we wish to solve is now: what profit should the seller accept at the time of the sale. The time of the sale (the strike) is called the exercise time and the price of the sale is denoted the strike price. Therefore we have for a call option the price P (the payoff)

P = (max(0, S(T) - K)), (4.109a)

where K is the agreed strike price, and we perform an average since the stock price S(T) varies randomly. On the other hand we obtain for a put option

P = (max(0, K - S(T))). (4.109b)

Finally we give a heuristic explanation of self-financing portfolios. We assume again that an investor made two investments: a number β(t) of stocks with the price S(t) and a number γ(t) of bonds with the price R(t). The value of his portfolio is therefore

H(t) = β(t)S(t) + γ(t)R(t). (4.110)

We assume that there is no information about future stock prices such that S(t) is adapted. A portfolio is self-financing if the investor


does not withdraw funds or buy additional ones. He starts with an initial capital, and from this moment on all gains or losses result from increases or decreases of the stock and bond prices. This means that the differential of the portfolio takes the form

dH(t) = β(t)dS(t) + γ(t)dR(t). (4.111)

We emphasize that (4.111) is valid only for self-financing portfolios. It contradicts the general formula (1.109) and its validity is restricted to semi-martingales (see [4.14]).

Finally, we mention that a contingent claim gives a holder a random amount at the exercise time T.

After these preliminaries we describe a stochastic theory for market prices.

4.4.2. The Black Scholes market

A financial system (a market) consists of a stock with price S(t) and a bond with a price R(t). The price dynamics takes the form

dS(t) = (a dt + σdB_t)S(t); a, σ = const., (4.112)

and

dR(t) = rR(t)dt; r = const., R(0) = 1. (4.113)

Equation (4.112) is an SDE of the population growth type (2.1), whereas (4.113) is an ODE with the solution R(t) = exp(rt). We also introduce the portfolio of the value H(t) given by (4.110).

A claim that pays ξ = f(S(T)) at the exercise time T has a price P(t) at time t that is a function of the corresponding stock price x = S(t). Hence, we can write

P(t) = C(S(t), t) = C(x, t); x = S(t). (4.114)

We apply Ito's formula (1.127) to the function C(x,t) and we use the underlying SDE (4.112). This yields

dP(t) = [C_t + axC_x + (1/2)(σx)²C_xx]dt + σxC_x dB_t. (4.115)

For non-arbitrage conditions (see [4.14]) we must set

P(t) = H(t). (4.116)


Equation (4.110) combines with (4.116) to

γ = [C(x, t) − βS(t)]/R(t). (4.117)

An insertion of (4.111) to (4.114) into (4.115), with a comparison of the coefficients of dB_t, yields

β(t) = C_x(x, t), (4.118)

and the same procedure for the coefficient of dt gives rise to a PDE called the Black-Scholes equation

C_t + rS(t)C_x + (1/2)[σS(t)]²C_xx = rC. (4.119)

Equation (4.119) is subjected to the terminal condition

C(x, T) = f(x). (4.120)

The solution to (4.119) that satisfies (4.120) has the form

C(x, t) = exp[−r(T − t)]⟨f(z^{x,t}(T))⟩, (4.121)

with

z^{x,t}(s) = x exp{(r − σ²/2)(s − t) + σ[B(s) − B(t)]}; s ≥ t. (4.122)

The stochastic function (4.122) is the solution of a population growth SDE (2.1) that starts at time t from the position x: z^{x,t}(t) = x.

Note that the logarithm of (4.122) can be written as

B(s) − B(t) = [ln z^{x,t}(s) − ln x − (r − σ²/2)(s − t)]/σ. (4.123)

Since the right hand side of (4.123) is N(0, s − t) distributed we infer that k(s, x, t) = ln z^{x,t}(s) is

N(μ, Σ); μ = ln x + (r − σ²/2)(s − t), Σ = (s − t)σ² (4.124a)

distributed and this gives its distribution the form

p_K(k, t) = (2πΣ)^{−1/2} exp[−(k − μ)²/(2Σ)]. (4.124b)

We focus now on the verification of the Black-Scholes solution (4.121); with (1.124) the latter equation takes the form

C(x, t) = exp[−r(T − t)] ∫ f(z(T))p_z(z, t)dz, (4.125)

where we use the shorthand z = z^{x,t}(s). We see immediately from (4.125) that this solution satisfies the terminal condition (4.120).


Now we perform the transformation z = exp(k) and this gives (4.125) the form [see (1.44)]

C(x, t) = exp[−r(T − t)] ∫ f(e^k)p_K(k, t)dk, (4.126)

where the PD p_K(k, t) is given by (4.124). Now we apply the operator of the PDE (4.119) to (4.126) and we obtain in the first place

0 = ∫ dk f(e^k)(∂/∂t + rx ∂/∂x + (x²σ²/2) ∂²/∂x²)p_K(k, t).

The integrand can be reformulated in terms of the diffusion equation (1.57): after the substitution k = ln x the operator reduces to that of the diffusion equation, which the Gaussian density (4.124b) satisfies, so that

(∂/∂t + rx ∂/∂x + (x²σ²/2) ∂²/∂x²)p_K(k, t) = 0,

and this verifies that (4.121) is the solution of the Black-Scholes PDE (4.119).

It remains to say that Scholes and Merton (with his investigation [4.19]) received the Nobel Prize in economics in 1997; Black had died in 1995.
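The verification above can also be checked by simulation: since (4.121) is a discounted average over the lognormal variable (4.122), a Monte Carlo estimate must reproduce the closed-form price. A minimal Python sketch (the book's own programs are FORTRAN; the parameter values below are illustrative, not from the text):

```python
import math, random

def bs_call(x, K, r, sigma, tau):
    """Closed-form Black-Scholes call price, time to maturity tau = T - t."""
    d1 = (math.log(x / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    N = lambda d: 0.5 * (1.0 + math.erf(d / math.sqrt(2.0)))
    return x * N(d1) - K * math.exp(-r * tau) * N(d2)

def mc_call(x, K, r, sigma, tau, n=200_000, seed=1):
    """(4.121): discounted average payoff over samples of (4.122) with B ~ N(0, tau)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = x * math.exp((r - 0.5 * sigma**2) * tau
                         + sigma * math.sqrt(tau) * rng.gauss(0.0, 1.0))
        total += max(0.0, z - K)
    return math.exp(-r * tau) * total / n

exact = bs_call(100.0, 100.0, 0.05, 0.2, 1.0)
approx = mc_call(100.0, 100.0, 0.05, 0.2, 1.0)
print(round(exact, 2))  # → 10.45
```

With 200,000 samples the Monte Carlo value agrees with the closed form to a few cents.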

Exercises

EX 4.1. Use (4.20) to calculate the variance of the solution (4.16) and compare the result with the formula (4.5), where we take into account the orthogonality of the functions u_k(x).

EX 4.2. Calculate the variance of the 3D stochastic Poisson equation. The Green's function is defined by (4.23b). Use the "charge" u²(x) = δ(x) and compare the variance of the stochastic PDE with the corresponding deterministic solution Φ_d(x) = a/(4π|x|).

EX 4.3. Determine the function G in (4.32).

EX 4.4. Solve the stochastic heat transport equation

∂T/∂t = (a + b dB_t/dt)∂²T/∂x² + ∂T/∂x; a, b = const.; T = T(x, t),


with the use of the separation

T(x, t) = A₁(t) cos(ωx) + A₂(t) sin(ωx).

Show that the resultant ordinary SDE can be transformed into a deterministic ODE involving the two functions A_{1,2}(t).

EX 4.5. Calculate the derivatives of the functions y(x, λ) and u(x, λ) defined in (4.57). Hint: Verify the expressions for y_x and y_xx obtained by differentiating the product representation (4.57).

EX 4.6. Referring to (4.71), calculate the series expansions of the moments ⟨K²⟩ and ⟨K⁴⟩.

Hint: Verify

⟨K²⟩ = ∫₀¹dx ∫₀¹dy {1 + a²(β − 1)[(1 + 3β)(x + y) + 2(β − 1)(x ∧ y)]/8 + O(a⁴)}
     = 1 + a²(β − 1)[(1 + 3β)/2 + 2(β − 1)/3] + O(a⁴).

EX 4.7. Explain why no eigenvalue problem arises for (4.76) in the case of λ < 0.

EX 4.8. To solve (4.80), plot numerically (with the aid of "Mathematica") J_ϑ(√λ) in the region 0 ≤ ϑ ≤ 5; 0 ≤ λ ≤ 150 and use the plotted zeros as initial values for an iteration to compute the numerical values of λ_n. Using "Mathematica" you can take advantage of the routine FindRoot. Explain why the asymptotic order relation (see [1.3])

J_ν(z) ≈ (2πν)^{−1/2}(ez/(2ν))^ν; ν → ∞, z < ∞,

cannot be used to compute the eigenvalues.

EX 4.9. Apply the Mathematica routines AiryAi and FindRoot to verify the zeros of the Ai-functions given by (4.91).


EX 4.10. Derive by inspection of a strictly decreasing function y = G(x) the relation for the PDF's Pr(X ≤ x) = Pr(Y ≥ G(x)) and use this to verify (4.94b).

EX 4.11. Apply the rule of l'Hospital to verify (4.99). Hint: Show that the PD (4.96) gives rise to

lim ∫ p_ε(θ)H(θ)dθ = H(0).

EX 4.12. We consider the 4th order boundary value problem

y⁗ − [9/(1 + λ)]²y = 0; y(0, λ) = y″(0, λ) = y(1, λ) = y″(1, λ) = 0.

Verify that the eigenvalue equation has the form sin[√(9/(1 + λ))] = 0. Use the PD (4.96) to find numerically the mean and the standard deviation for λ₁ and λ₂.


CHAPTER 5

NUMERICAL SOLUTIONS OF ORDINARY STOCHASTIC DIFFERENTIAL EQUATIONS

In this chapter we will familiarize the reader with some numerical routines to solve ordinary SDE's and we shall introduce these routines with the FORTRAN programs that are included on the attached CD. The latter programs are labeled with an "F". So, for example, F3 is the third FORTRAN program.

5.1. Random Number Generators and Applications

A prerequisite to solve SDE's are random numbers (RN's). There are various techniques to construct "random numbers". In fact, these numbers repeat after a period P. As a result, we should call them pseudo-random numbers.

To find an algorithm for a random number generator, we recall that ever since the work of Feigenbaum [5.1] it has been known that the iteration of numbers on the real axis (1D maps) can lead to irregular outcomes. It is convenient to apply a congruential iteration

N_{k+1} = (aN_k + u) mod v; k = 0, 1, ...,

where a, u, v, N₀ are given positive integers and the symbol u mod v (the modulo) denotes the remainder of u after dividing u by v. The choice of v is determined by the capacity of the computer and there are various ways to select a and u. The period P of this iteration was investigated by Knuth [5.2], who found that choosing u and v relatively prime is important to increase the value of P.

The IMSL (international mathematical scientific library) proposes for RN's with a uniform PD in the interval (0, 1) the iteration

N_{k+1} = (16807 N_k) mod (2³¹ − 1); k = 0, 1, ...,



where N₀ is an initial positive integer and the factor 2³¹ − 1 is a prime number. The numbers N_k in the above formula are not RN's in (0, 1), but the sequence

R_{k+1} = N_{k+1}/(2³¹ − 1),

serves to produce a sequence of RN's, R_{k+1}, in (0, 1). Other techniques are discussed by Press et al. [5.3] and we recommend in particular the routines called ran1, ran2 and ran3, which are based on congruential iterations.

The question of a uniform distribution and the independence of the RN's is investigated in the next section. RN's that vary in arbitrary intervals are obtained with transformations (see Section 1.5, where also Gaussian distributed numbers are introduced).
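As an illustration of the congruential iteration, here is a minimal Python sketch of the IMSL (Park-Miller) generator N_{k+1} = (16807 N_k) mod (2³¹ − 1); the seed value is arbitrary:

```python
M = 2**31 - 1  # prime modulus
A = 16807      # multiplier of the IMSL iteration

def lcg(seed, n):
    """Produce n uniform pseudo-random numbers in (0, 1) from the iteration
    N_{k+1} = (16807 N_k) mod (2^31 - 1), R_{k+1} = N_{k+1}/(2^31 - 1)."""
    nk = seed
    out = []
    for _ in range(n):
        nk = (A * nk) % M
        out.append(nk / M)
    return out

sample = lcg(seed=12345, n=10_000)
assert all(0.0 < r < 1.0 for r in sample)
```

The sample mean of these numbers should be close to 1/2, which is one of the tests discussed in the next section.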

5.1.1. Testing of random numbers

We will investigate here the property of uniformity and the independence of stochastic variables. To perform this task we introduce a few elements of the testing of stochastic (or probabilistic) models H₀ on random variables X.

The basic idea is that we perform a random experiment (RE) of interest and group the outcomes into individual categories k, with K as the total number of categories. One particular outcome of the RE is given by X_i and this particular outcome is called a test statistic. We categorize the predictions of an application of the model H₀ in the same way. If, under a given model H₀, values of the test statistic appear that are likely (unlikely), we are inclined to accept (reject) the model H₀.

An important statistical test is the χ²-statistic. The quantity X is a χ²-random variable of m degrees of freedom (DOF) with the PD

p_X(x, m) = [2^s Γ(s)]^{−1} x^{s−1} exp(−x/2); s = m/2; x > 0. (5.1a)

The mean and variance of (5.1a) are calculated in EX 5.1. It can be shown that (5.1a) is the PD of a set of m independent identically N(a, σ) distributed RV's, X₁, ..., X_m, with

X = Σ_{p=1}^m (X_p − a)²/σ². (5.1b)


We assume now that under a given hypothesis H₀ (e.g. the uniformity of RN's) one particular simulation falls with the probability p_k in the category k. We perform N simulations and the expected number of events under H₀ in the category k has the value Np_k. This contrasts with the outcome of a RE where Z_k is the number of events in the category k of a particular simulation. Now we calculate the sum of the K squares of the deviations of Z_k from Np_k and divide each term by Np_k. This test statistic yields the RV

D_{K−1} = Σ_{k=1}^K (Z_k − Np_k)²/(Np_k). (5.2)

It can be shown that for N ≫ 1 the RV in (5.2), X = D_{K−1}, approximately has the PD (5.1a) with the DOF m = K − 1. To verify this value of the DOF we note that we can sum the RV's in (5.2) but we must consider the compatibility relation Σ_{k=1}^K Z_k = N.

A close agreement between the observed values Z_k and the values Np_k predicted under H₀ yields (Z_k − Np_k)² ≪ 1. However, a large number of categories K leads to finite values of D_{K−1}. This can be inferred from the mean value of (5.1a), ⟨X⟩ = K − 1, signifying that the mean of D_{K−1} increases linearly with K. Hence we introduce a critical value of the χ²-statistic, χ²_{c,K−1}, defined by

Pr(X > b) = 1 − ∫₀^b p_X(x, K − 1)dx = a; b = χ²_{c,K−1}. (5.3)

In Equation (5.3) we must substitute the PD (5.1a) and use a given small level a ≪ 1. Conveniently one chooses a = O(10⁻²) and solves (5.3) numerically to obtain χ²_{c,K−1}. As a goodness criterion for H₀ we employ now the condition that D_{K−1} must not exceed the critical value. Thus, we accept (reject) the hypothesis for D_{K−1} ≤ χ²_{c,K−1} (D_{K−1} > χ²_{c,K−1}).

Example (Uniformity test): As an application of the χ²-test statistic we investigate now the hypothesis H₀ of the uniformity of RN's. We categorize by dividing the interval (0, 1) into K subintervals of equal length 1/K. Then we generate N RN's and count the numbers that fall into the kth


subinterval. Now we assume that

Z_k RN's in ((k − 1)/K, k/K); k = 1, ..., K.

Under H₀, the probability of simulating RN's in each subinterval is p_k = 1/K. Thus, we infer from (5.2)

D_{K−1} = (K/N) Σ_{k=1}^K (Z_k − N/K)². (5.4)

We investigate in program F1 the uniformity of RN's using 10, 30 and 50 subintervals and applying various RN generators. The results are given in Table 5.1.
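The test statistic (5.4) is easy to reproduce outside FORTRAN; the following Python sketch (program F1 itself is not reproduced here) computes D_{K−1} for a given sample:

```python
import random

def chi2_uniformity(rns, K):
    """D_{K-1} from (5.4): (K/N) * sum_k (Z_k - N/K)^2 over K equal subintervals."""
    N = len(rns)
    Z = [0] * K
    for r in rns:
        Z[min(int(r * K), K - 1)] += 1  # bin index, guarding against r == 1.0
    return (K / N) * sum((z - N / K) ** 2 for z in Z)

# A perfectly stratified sample has Z_k = N/K exactly, so D_{K-1} = 0.
stratified = [(i + 0.5) / 1000 for i in range(1000)]
print(chi2_uniformity(stratified, 10))  # → 0.0

# For genuine pseudo-random numbers we accept uniformity (a = 0.05, K = 10)
# if the statistic stays below the critical value 16.92 of Table 5.1.
rng = random.Random(7)
d = chi2_uniformity([rng.random() for _ in range(10_000)], 10)
```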

Now we discuss independence tests. Suppose that we have obtained a random sample of the size of n pairs of variables (x_j, y_j); j = 1, ..., n. To investigate the independence of the two sets of variables X = (x₁, ..., x_n) and Y = (y₁, ..., y_n) we calculate their covariance

R = Σ_{p=1}^n (x_p − ⟨x⟩)(y_p − ⟨y⟩)/D;

D² = Σ_{p=1}^n (x_p − ⟨x⟩)² Σ_{p=1}^n (y_p − ⟨y⟩)². (5.5)

If the variables X and Y are independent we would find R = 0 (note that the converse does not hold in general).

We focus now on a test of the independence of n RN's x₁, ..., x_n and we group them into consecutive pairs (x₁, x₂), (x₂, x₃), ..., (x_{n−1}, x_n). We regard the first member of such a pair as the first

Table 5.1. The evaluation of the uniformity test with the ran1, ran3, the IMSL routine and a congruential method (CM) for a = 0.05.

K    χ²_{c,K−1}   D_{K−1} (ran1)   D_{K−1} (ran3)   D_{K−1} (IMSL)   D_{K−1} (CM)
10   16.92        9.240            9.200            9.300            9.300
30   42.56        37.100           23.900           47.780           47.780
50   66.34        44.200           33.900           43.100           43.340


variable (say X) and the second member of the pair as another variable (say Y). The correlation coefficient R₁ of these consecutive pairs is called the correlation coefficient with lag 1:

R₁ = Σ_{p=1}^{n−1} (x_p − ⟨x^{(1)}⟩)(x_{p+1} − ⟨x^{(2)}⟩)/D;

D² = Σ_{p=1}^{n−1} (x_p − ⟨x^{(1)}⟩)² Σ_{p=1}^{n−1} (x_{p+1} − ⟨x^{(2)}⟩)²; (5.6)

⟨x^{(1)}⟩ = (1/(n − 1)) Σ_{p=1}^{n−1} x_p; ⟨x^{(2)}⟩ = (1/(n − 1)) Σ_{p=1}^{n−1} x_{p+1}.

When n is sufficiently large we can simplify (5.6) and this yields

⟨x^{(1)}⟩ ≈ ⟨x^{(2)}⟩ ≈ ⟨x⟩ = (1/n) Σ_{p=1}^n x_p, (5.7a)

R₁ = Σ_{p=1}^{n−1} (x_p − ⟨x⟩)(x_{p+1} − ⟨x⟩) / Σ_{p=1}^n (x_p − ⟨x⟩)². (5.7b)

It can be shown (see e.g. Chatfield [5.4]) that for independent RN's and n ≫ 1, R₁ is N(0, 1/n) distributed. Under these conditions we expect |R₁| < 1.96 n^{−1/2} with a confidence of 95%.

However, there are cases where the pairs (x₁, x₂), (x₂, x₃), ... are independent but pairs such as (x₁, x₃), (x₂, x₄), ... are not independent. Hence, we compute the correlation coefficient with time lag k (see Tuckwell [5.5])

R_k = Σ_{p=1}^{n−k} (x_p − ⟨x⟩)(x_{p+k} − ⟨x⟩) / Σ_{p=1}^n (x_p − ⟨x⟩)², (5.8)

where the mean ⟨x⟩ is defined as in (5.7a). We investigate the question of the independence of RN's in program F2.
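A Python sketch of the lag-k statistic (5.8); the alternating test sequence is an illustration, not from the text:

```python
import random

def lag_correlation(x, k):
    """R_k from (5.8): sample autocorrelation of the sequence x with time lag k."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[p] - mean) * (x[p + k] - mean) for p in range(n - k))
    den = sum((xp - mean) ** 2 for xp in x)
    return num / den

# A strictly alternating sequence is maximally anti-correlated at lag 1.
print(round(lag_correlation([0, 1] * 500, 1), 3))  # → -0.999

# For independent RN's, R_1 is approximately N(0, 1/n), so we expect
# |R_1| < 1.96/sqrt(n) with 95% confidence.
rng = random.Random(11)
x = [rng.random() for _ in range(10_000)]
r1 = lag_correlation(x, 1)
```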

Finally we explain the term confidence interval. We know from the discussion of the central limit theorem (Section 1.4.1) that a sum of IID RV's converges to a normal distribution. Hence, it is of


practical importance to find probabilistic limits of the normal distributions N(0, 1) and N(μ, σ).

We start with a N(0,1) distributed variable and define the confidence parameter a by

N(0, 1): Pr(Z > z_{a/2}) = a/2. (5.9)

Equivalently we can rewrite (5.9) in the form

a/2 = 1 − (2π)^{−1/2} ∫_{−∞}^b exp(−u²/2)du; b = z_{a/2}. (5.10)

There is no analytic expression for the inverse function of the integral in (5.10). Alternatively we take in EX 5.2 advantage of an asymptotic method to approximate (5.10) and we use there the program F3. However, it is easy to find numerically for every value of z_{a/2} the confidence parameter a.

Now we study a N(μ, σ) distributed variable X with

f_X(x) = (2πσ²)^{−1/2} exp[−(x − μ)²/(2σ²)],

and we consider that it lies with the probability K = Pr(x₁ < X < x₂); x_{1,2} = μ ∓ σz_{a/2}, in the indicated interval.

Thus, we infer from (1.1a)

K = (2πσ²)^{−1/2} ∫_{x₁}^{x₂} exp[−(x − μ)²/(2σ²)]dx

= (2π)^{−1/2} ∫_{−b}^{b} exp(−y²/2)dy = 1 − a; b = z_{a/2}. (5.11)

Equation (5.11) means that

Pr(μ − σz_{a/2} < X < μ + σz_{a/2}) = 1 − a, (5.12)

and (5.12) defines a

100(1 — a)% confidence interval. (5.13)

Example: A 95% confidence interval is given by a = 0.05 and this implies with (5.10) z_{a/2} = 1.96. Numerically computed and approximate values of z_{a/2} are calculated in EX 5.2 and in the program F3.
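Since (5.10) has no analytic inverse, z_{a/2} can be found numerically, e.g. by bisection on the normal CDF; a Python sketch:

```python
import math

def phi(z):
    """Standard normal CDF, expressed via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def z_quantile(a, tol=1e-10):
    """Solve Pr(Z > z) = a/2, i.e. phi(z) = 1 - a/2, by bisection."""
    lo, hi = 0.0, 10.0
    target = 1.0 - a / 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(z_quantile(0.05), 2))  # → 1.96
```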


Detailed studies of stochastic model testing can be found in the books of Walpole and Myers [5.6] and Overbeck-Larisch and Dolejsky [5.7].

5.2. The Convergence of Stochastic Sequences

In Chapter 1 we already introduced the term almost certain (AC). We focus in this section on the convergence of sequences of RV's or stochastic processes. A classification of the convergence of the sequence of RV's

{X_n(ω)}; n ∈ ℕ, ω ∈ Ω,

is given by strong and weak modes of convergence.

(i) Strong convergence. There are two subclasses of convergence modes:

(i-1) AC convergence (or equivalently, convergence with probability 1):

Pr(lim_{n→∞} |X(ω) − X_n(ω)| = 0) = 1. (5.14)

(i-2) Convergence in the kth mean:

lim_{n→∞} ⟨|X(ω) − X_n(ω)|^k⟩ = 0, k ∈ ℕ. (5.15)

For k = 2 this means convergence in the mean square. By contrast, for k = 1 (5.15) reduces to the mean (or strong in the narrow sense) convergence. Note also that (5.15) is a straightforward extension of the deterministic convergence concept. We also note that the positiveness of the variance of a RV Y, with Y ≥ 0, leads to the inequality

⟨Y⟩ ≤ √⟨Y²⟩,

and this means that mean square convergence implies mean convergence.

(ii) For the weaker modes of convergence we do not need the behavior of the RV's themselves; we focus only on the sequences of PD's {p_n(x)} and the way they tend to their limit p(x).


(ii-1) The convergence in distribution is defined by

lim_{n→∞} p_n(x) = p(x), (5.16)

where the PD's are supposed to be continuous.

(ii-2) The weak convergence in the narrow sense is expressed by

lim_{n→∞} ∫ g(x)p_n(x)dx = ∫ g(x)p(x)dx, (5.17)

where we include in the integrals classes of test functions g(x) (e.g. polynomials).

An important application of the concept of strong convergence is given in the following example:

Strong law of large numbers (SLLN). We consider a sequence of RV's x₁, x₂, ... with finite means μ_k = ⟨x_k⟩, k = 1, 2, ..., and we define a new RV

A_n = (1/n) Σ_{k=1}^n x_k with M_n = (1/n) Σ_{k=1}^n μ_k. (5.18)

Then, provided that

lim_{n→∞} S_n = 0 with S_n = (1/n²) Var(Σ_{k=1}^n x_k), (5.19)

the SLLN says that

lim_{n→∞} ⟨|A_n − M_n|⟩ = 0. (5.20)

Note that for IID RV's we have

μ_k = μ = const. ⇒ M_n = lim_{n→∞} A_n = μ. (5.21)

Proof. First we note that

⟨A_n⟩ = M_n, Var(A_n) = S_n.

Now we apply the Chebyshev inequality (see EX 1.2) and we obtain

Pr(|A_n − M_n| ≥ ε) ≤ Var(A_n)/ε² = S_n/ε² → 0 for n → ∞.

This proof can be made more rigorous (see Bremaud [1.16]). The convergence in (5.20) is of mean square type.

An interesting application of the SLLN is given in the following:

Example (Estimation of the probability distribution function, PDF): It is required to find (or numerically approximate) the PDF defined by (1.1) of the RV X. To this end we generate a sequence of independent samples X₁, X₂, ... of the RV and we define the indicator function

ξ_k(x) = 1 if X_k ≤ x; ξ_k(x) = 0 otherwise. (5.22)

Thus we obtain with the use of the SLLN (5.20)

⟨ξ(x)⟩ = lim_{n→∞} (1/n)(ξ₁ + ⋯ + ξ_n) = F_X(x). (5.23)

We will use this method in program F4 to estimate a Gaussian PDF.
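A Python sketch of this estimator (program F4 is not reproduced; sample size and seed are arbitrary):

```python
import random

def empirical_cdf(samples, x):
    """(5.22)-(5.23): the fraction of samples with X_k <= x estimates F_X(x)."""
    return sum(1 for s in samples if s <= x) / len(samples)

rng = random.Random(3)
samples = [rng.gauss(0.0, 1.0) for _ in range(50_000)]
# For a N(0,1) variable the exact values are F_X(0) = 0.5 and F_X(1) ~ 0.8413.
f0, f1 = empirical_cdf(samples, 0.0), empirical_cdf(samples, 1.0)
print(abs(f0 - 0.5) < 0.02, abs(f1 - 0.8413) < 0.02)  # → True True
```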

5.3. The Monte Carlo Integration

Here we make a slight digression from our path to the calculation of numerical solutions of SDE's and use RN's to perform a stochastic type of numerical integration. This method is widely used in integration and statistics. It is based on our ability to generate RN's and it could be done in principle with dice or a roulette wheel. For this reason the method is called the Monte Carlo method (MCM).

As a typical application we might consider the MCM to calculate the value of the area A between a curve f(x) and the x-axis, see Fig. 5.1. The curve is located in a rectangular box of the area B = ab and we consider a pair of two independent RV's (x, y) that are uniformly distributed in 0 ≤ x ≤ a, 0 ≤ y ≤ b. The probability that a random point (x, y) falls between the x-axis and the curve f(x) is given by

Pr{(x, y) ∈ A} = (1/(ab)) ∫₀^a f(x)dx. (5.24)

We use a collection of n random points (x_k, y_k), k = 1, 2, ..., n and define for the kth member of this collection the indicator variable


Fig. 5.1. The function f(x) whose integral is calculated.

[see also (5.22)]

z_k = 1 for (x_k, y_k) ∈ A; z_k = 0 otherwise. (5.25)

Thus, we infer from the SLLN (5.20) that

⟨Z⟩ = (1/n) Σ_{k=1}^n z_k → Pr{(x, y) ∈ A} for n → ∞, (5.26)

or equivalently

∫₀^a f(x)dx ≈ (ab/n) Σ_{k=1}^n z_k for n → ∞. (5.27)

Example (Volume of spheres): We calculate the area F of a unit circle that is enclosed by a square of area B = 4. Clearly this example is a stochastic method of computing the value of π. Here we infer from (5.25)

z_k = 1 for x_k² + y_k² ≤ 1; z_k = 0 otherwise. (5.28)

In (5.28) we use independent RN's that vary uniformly in the interval (0, 1). The area of the unit circle is then by symmetry four times


the one that would be obtained by the application of (5.28) to the first quadrant of the (x, y)-plane. Hence, we obtain from (5.27)

F = (4/n) Σ_{k=1}^n z_k → π. (5.29)

However, it is easy to generalize this example to the calculation of the volume of the 3D sphere, and more interestingly, to the calculation of the volume of an m-D sphere with m ≫ 1. This is where the MCM is superior to traditional numerical calculations. Consider the computation of a volume in a 10D space. If we employ a minimum of 100 points in a single direction we would need for the traditional computation 10²⁰ points. By contrast, the MCM would need according to (5.28) 10 RN's for one indicator variable, and a collection of say 10⁷ points would be sufficient for an accurate computation. Thus we generalize the present example and calculate the volume of an m-D unit sphere. The indicator variable is now given by

z_k = 1 for Σ_{i=1}^m x_{i,k}² ≤ 1; z_k = 0 otherwise.

Thus, the MCM gives the volume V_m of the m-D unit sphere the value (2^m is the number of sectors that make up the m-D unit sphere: 4 quadrants in the case of a circle, 8 octants in the case of a 3D sphere, ...)

(2^m/n) Σ_{k=1}^n z_k → V_m. (5.30)

Note that the exact value V_m^ex is given by (see Günther and Kusmin [5.8])

V_m = π^{m/2}/Γ(1 + m/2). (5.31)

The numerical computation is made using program F5. We summarize the results of the computation of a 10-D sphere in Table 5.2. The error of the computation is defined by

err = 100 |(V₁₀^ex − V₁₀)/V₁₀^ex| %.


Table 5.2. The results of a MCM calculation of the volume of a 10-D unit sphere using ran3.

n          V₁₀       err
60000      2.54123   0.35048
1100000    2.54203   0.31884
1150000    2.55029   0.00508
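A Python sketch of the computation behind Table 5.2, using the standard library generator instead of ran3:

```python
import math, random

def mc_unit_sphere_volume(m, n, seed=5):
    """(5.30): V_m ~ (2^m / n) * sum z_k, with z_k the first-orthant indicator."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        # one point uniform in the first orthant of the m-D unit cube
        if sum(rng.random() ** 2 for _ in range(m)) <= 1.0:
            hits += 1
    return (2 ** m / n) * hits

def exact_volume(m):
    """(5.31): V_m = pi^(m/2) / Gamma(1 + m/2)."""
    return math.pi ** (m / 2) / math.gamma(1 + m / 2)

v_mc, v_ex = mc_unit_sphere_volume(10, 200_000), exact_volume(10)
err = 100 * abs((v_ex - v_mc) / v_ex)  # percentage error as in the text
print(round(v_ex, 5))  # → 2.55016
```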

There are several improvements of the MCM for multiple integrals. In the case of the determination of the area between an irregularly shaped curve f(x) and the x-axis we can write

∫_a^b f(x)dx = ∫_a^b dx p(x)f(x)/p(x), (5.32a)

where p(x) is an arbitrary PD defined on x ∈ [a, b]. If n RN's ξ_k, k = 1, ..., n are generated from the PD p(x), then

∫_a^b f(x)dx ≈ (1/n) Σ_{k=1}^n f(ξ_k)/p(ξ_k). (5.32b)

One of the most widely used applications is in statistical mechanics. The PD of a statistical mechanical state with energy E is given by the Gibbs distribution function

p(E) = exp(−βE)/∫ exp(−βE)dE,

where β is the inverse temperature. The computation proceeds by starting with a finite collection of molecules in a box. It can be made to imitate an infinite system by employing periodic BC's. Algorithms exist to generate molecular configurations that are consistent with p(E). The most popular one is the Metropolis algorithm. A presentation of this algorithm would take us too far afield. Details are given in Metropolis et al. [5.9]. Once a chain of configurations has been generated, the PD is determined and various thermodynamic functions can be obtained by suitable averages. The simplest example is

⟨E⟩ = Σ_k E_k p(E_k).
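A Python sketch of the importance-sampling formula (5.32); the integrand and the sampling density below are illustrative choices, not from the text:

```python
import math, random

def importance_sample(f, p_pdf, p_sampler, n, seed=9):
    """(5.32): estimate int f(x) dx as the average of f(xi_k)/p(xi_k), xi_k ~ p."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        xi = p_sampler(rng)
        total += f(xi) / p_pdf(xi)
    return total / n

# Example: int_0^1 x^2 dx = 1/3, sampled from the linear density p(x) = 2x
# on (0, 1); p is sampled by inversion, xi = sqrt(U) for uniform U.
est = importance_sample(lambda x: x * x,
                        lambda x: 2.0 * x,
                        lambda rng: math.sqrt(rng.random()),
                        n=100_000)
print(round(est, 2))  # → 0.33
```

Choosing p(x) large where f(x) is large reduces the variance of the estimator; this is the idea behind the Gibbs-weighted sampling used in statistical mechanics.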


5.4. The Brownian Motion and Simple Algorithms for SDE's

We recall that the Brownian motion is a non-stationary Markovian N(0, t) distributed process. We remind the reader also of the Box-Muller method (1.46) to (1.48) of generating a N(0, 1) distributed variable. The latter method involves the computer time-consuming use of trigonometric functions [see (1.47)]. Hence, it is more convenient to use the polar Marsaglia method. We start from a variable X that is uniformly distributed in (0, 1) and use the transformation V = 2X − 1 to obtain a variable V that is uniform in (−1, 1). Then we apply two variables of the latter type, V₁ and V₂, with

W = V₁² + V₂² ≤ 1, (5.33)

so that W is distributed again in (0, 1), and the angle θ = arctan(V₂/V₁) varies in (0, 2π). The area ratio between the inscribed circle (5.33) and the surrounding square has the value π/4. For this reason a point (V₁, V₂) has the probability π/4 of falling into this circle. We consider only these points and disregard the others. We put now cos θ = V₁/√W; sin θ = V₂/√W and we obtain as in (1.47)

y₁ = V₁√(−2 ln(W)/W); y₂ = V₂√(−2 ln(W)/W). (5.34)

This yields, in analogy to (1.48), the PD

p(y₁, y₂) = (2π)^{−1} exp[−(y₁² + y₂²)/2]. (5.35)

We calculate with this generation of the Brownian (or Wiener) processes in program F6 the numerical data pertaining to Figures 1.1 and 1.2 of Chapter 1. We present in Figure 5.2 the graph of the solution to the population growth problem (2.4) and calculate the corresponding numerical data in F7.
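A Python sketch of the polar Marsaglia rejection step (5.33)-(5.34):

```python
import math, random

def polar_marsaglia(rng):
    """(5.33)-(5.34): two independent N(0,1) RV's without trigonometric functions."""
    while True:
        v1 = 2.0 * rng.random() - 1.0  # uniform in (-1, 1)
        v2 = 2.0 * rng.random() - 1.0
        w = v1 * v1 + v2 * v2
        if 0.0 < w < 1.0:  # accept points inside the unit circle (probability pi/4)
            factor = math.sqrt(-2.0 * math.log(w) / w)
            return v1 * factor, v2 * factor

rng = random.Random(2)
ys = [y for _ in range(20_000) for y in polar_marsaglia(rng)]
mean = sum(ys) / len(ys)
var = sum(y * y for y in ys) / len(ys) - mean * mean
print(round(abs(mean), 1), round(var, 1))  # → 0.0 1.0
```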

We give now a simple difference method to approximate solutions of the 1D deterministic ODE

dx/dt = a(x, t); x(t = 0) = x₀: (5.36)

the Euler method. The latter discretizes the time into finite steps

t₀ = 0 < t₁ < ⋯ < t_n < t_{n+1}; Δ_k = t_{k+1} − t_k. (5.37)


Fig. 5.2. A graphical evaluation of the results of the population growth model (2.4), r = 1, u = 0.2. The figure reveals one realization and the predicted mean (2.5), which coincides with the numerically calculated mean using 50 realizations.

This procedure transforms the ODE (5.36) into a difference equation

x_{n+1} = x_n + a(x_n, t_n)Δ_n; x_n = x(t_n). (5.38)

The difference equation must be solved successively in the following way

x₁ = x₀ + a(x₀, t₀)Δ₀ = x₀ + a(x₀, 0)t₁;

x₂ = x₁ + a(x₁, t₁)Δ₁ = x₀ + a(x₀, 0)t₁ + a(x₁, t₁)(t₂ − t₁); (5.39)

a(x₁, t₁) = a(x₀ + a(x₀, 0)t₁, t₁).

Equation (5.39) represents a recursion. Given the initial value x₀ we obtain with the application of (5.39) the value of x_k for every following time t_k.

Now we propose heuristically the Euler method for the 1D autonomous SDE

dx = a(x)dt + b(x)dB_t; x(t = 0) = x₀. (5.40)

In analogy to, and as a generalization of, (5.39) we propose the stochastic Eulerian difference equation

x_{n+1} = x_n + a(x_n)Δ_n + b(x_n)ΔB_n; ΔB_n = B_{n+1} − B_n; B_n = B_{t_n}. (5.41)


In the majority of the examples in this chapter we will use equidistant step widths,

t_n = t₀ + nΔ = nΔ; Δ_n = Δ = const.

This yields

⟨ΔB_n⟩ = 0 and ⟨(ΔB_n)²⟩ = Δ.

In the next section we shall derive (5.41) as the lowest order term of the Ito-Taylor expansion of the solutions to the SDE (5.40).
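A Python sketch of the stochastic Euler recursion (5.41), applied to the population growth SDE as an illustration; the step count and ensemble size are arbitrary:

```python
import math, random

def euler_maruyama(a, b, x0, T, n, seed=4):
    """(5.41): x_{n+1} = x_n + a(x_n)*dt + b(x_n)*dB_n with dB_n ~ N(0, dt)."""
    rng = random.Random(seed)
    dt = T / n
    x = x0
    for _ in range(n):
        x += a(x) * dt + b(x) * rng.gauss(0.0, math.sqrt(dt))
    return x

# Population growth dS = r*S dt + u*S dB [cf. (2.1)]; the average of many
# realizations should approach S0*exp(r*T).
r, u, S0, T = 1.0, 0.2, 1.0, 1.0
mean = sum(euler_maruyama(lambda x: r * x, lambda x: u * x, S0, T, 200, seed=s)
           for s in range(2000)) / 2000
print(abs(mean - S0 * math.exp(r * T)) < 0.1)  # → True
```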

5.5. The Ito-Taylor Expansion of the Solution of a 1D SDE

We start here from the simplest case of an autonomous 1D SDE (5.40). In the case of a non-autonomous N-dimensional SDE, which is covered in Section 5.7, we need only generalize the ideas of the present derivations.

We integrate (5.40) over one step width and we obtain

x(t_{n+1}) = x(t_n) + ∫_{t_n}^{t_{n+1}} a(x_s)ds + ∫_{t_n}^{t_{n+1}} b(x_s)dB_s; x₀ = x(0); x_s = x(s). (5.42)

Now we apply Ito's formula (1.99.3) to the function f(x), where the variable x satisfies the SDE (5.40). This leads to

df(x) = L⁰f(x)dt + L¹f(x)dB_t;

L⁰f(x) = a(x)f′(x) + (1/2)b²(x)f″(x); L¹f(x) = b(x)f′(x). (5.43)

The integration of the last line leads to

f(x_{n+1}) = f(x_n) + ∫_{t_n}^{t_{n+1}} [L⁰f(x_s)ds + L¹f(x_s)dB_s]. (5.44)

If we specify (5.44) for the case f(x) = x, this equation reduces again to (5.42).

The next step consists in the application of (5.44) for f(x_s) := a(x_s) and f(x_s) := b(x_s) and the substitution of the corresponding results into (5.42). Hence, we put

g(x_s) = g(x_n) + ∫_{t_n}^{s} [L⁰g(x_u)du + L¹g(x_u)dB_u]; g = a or g = b.

Proceeding along these lines we find

x_{n+1} = x_n + a(x_n)Δ_n + b(x_n)ΔB_n + R₁;

R₁ = ∫_{t_n}^{t_{n+1}} ∫_{t_n}^{s} [L⁰a(x_u)du ds + L¹a(x_u)dB_u ds + L⁰b(x_u)du dB_s + L¹b(x_u)dB_u dB_s]. (5.45)

Equation (5.45) is the simplest nontrivial Ito-Taylor expansion of the solutions of (5.40). Neglecting the (first order) remainder R₁ we obtain the stochastic Euler formula (5.41).

We repeat this procedure and this leads, with the application of the Ito formula to the function f(x_u) = L¹b(x_u) = b(x_u)b′(x_u), to

x(t_{n+1}) = x(t_n) + a(x_n)Δ_n + b(x_n)ΔB_n + b(x_n)b′(x_n)I_{1,1} + R₂;

I_{1,1} = ∫_{t_n}^{t_{n+1}} dB_s ∫_{t_n}^{s} dB_u = ∫_{t_n}^{t_{n+1}} dB_s (B_s − B_{t_n}) = (1/2)[(ΔB_n)² − Δ_n], (5.46)

where we have used the Ito integral (1.89). If we proceed to second order, the remainder R₂ contains two second order and one third order multiple Ito integrals that are based on the following integrals

I_{1,0} = ∫_{t_n}^{t_{n+1}} ∫_{t_n}^{s} du dB_s = ∫_{t_n}^{t_{n+1}} (s − t_n)dB_s,

I_{0,1} = ∫_{t_n}^{t_{n+1}} ∫_{t_n}^{s} dB_u ds = ∫_{t_n}^{t_{n+1}} (B_s − B_{t_n})ds, (5.47)

with

I_{0,1} + I_{1,0} = Δ_nΔB_n, (5.48)


and

I_{1,1,1} = ∫_{t_n}^{t_{n+1}} ∫_{t_n}^{s} ∫_{t_n}^{u} dB_v dB_u dB_s = ∫_{t_n}^{t_{n+1}} dB_s ∫_{t_n}^{s} dB_u (B_u − B_{t_n}) = (1/6)[(ΔB_n)² − 3Δ_n]ΔB_n. (5.49)

We will verify (5.48) and (5.49) in EX 5.3.

The application of (5.46) with R₂ = 0 yields the Milstein scheme

x_{n+1} = x_n + M(x_n, Δ_n, ΔB_n), (5.50a)

with the Milstein operator

M(x_n, Δ_n, ΔB_n) = a(x_n)Δ_n + b(x_n)ΔB_n + (1/2)b(x_n)b′(x_n)[(ΔB_n)² − Δ_n]. (5.50b)

Note that the Milstein scheme (5.50) is again a recursive method and can be considered as a generalization of the Eulerian scheme (5.41).

The numerical data revealed in Figures 2.1 and 2.2 for the Ornstein-Uhlenbeck problem (2.14) are computed with the Milstein routine (5.50) and the corresponding program is F8. The latter is a general solver for first order autonomous SDE's. We calculate in particular the solution of the generalized population problem (2.35a). A sample path of the numerical solution of this equation is plotted in Figure 5.3.
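A Python sketch of the Milstein recursion (5.50); program F8 is not reproduced here, and the test problem below is the population growth SDE:

```python
import math, random

def milstein(a, b, b_prime, x0, T, n, seed=8):
    """(5.50): an Euler step plus the correction (1/2) b b' [(dB)^2 - dt]."""
    rng = random.Random(seed)
    dt = T / n
    x = x0
    for _ in range(n):
        dB = rng.gauss(0.0, math.sqrt(dt))
        x += a(x) * dt + b(x) * dB + 0.5 * b(x) * b_prime(x) * (dB * dB - dt)
    return x

# Geometric Brownian motion dx = r x dt + u x dB, for which b(x)b'(x) = u^2 x.
r, u = 1.0, 0.2
x_T = milstein(lambda x: r * x, lambda x: u * x, lambda x: u, 1.0, 1.0, 1000)
print(x_T > 0.0)  # → True (the Milstein factor for this SDE stays positive)
```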

With the inclusion of more stochastic integrals, we obtain more information about the sample paths of a solution. Hence we include more terms of the expansions of a(x) and b(x) in the remainder R₂ of (5.46). We postpone this cumbersome algebra to treat the general case of non-autonomous N-D SDE's. We can, however, infer from (5.45) that one term resulting from the expansion of L⁰a(x_u) must be given by

∫_w^v ds ∫_w^s du L⁰a(x_u) ≈ (1/2)[aa′ + (1/2)b²a″]Δ_n²; v = t_{n+1}, w = t_n. (5.51)


184 Stochastic Differential Equations in Science and Engineering

Fig. 5.3. Lower line: numerical average of the solution to (2.35a); middle (irregular) line: an SF of (2.35a); upper line: graph of the exponential exp(krt). Parameters: r = 0.01, k = 100, u = 0.2, Ensem = 200, h = 0.001.

Proceeding along these lines, we end up with the scheme

x_{n+1} = x_n + M(x_n, \Delta_n, \Delta B_n) + H(x_n, \Delta_n, \Delta B_n, \Delta Z_n),   (5.52a)

with the operator

H(x_n, \Delta_n, \Delta B_n, \Delta Z_n) = a'b\,\Delta Z_n + \frac{1}{2}\left(aa' + \frac{1}{2}b^2 a''\right)\Delta_n^2 + \left(ab' + \frac{1}{2}b^2 b''\right)(\Delta B_n \Delta_n - \Delta Z_n) + \frac{1}{6}b\,(bb'' + b'^2)[(\Delta B_n)^2 - 3\Delta_n]\Delta B_n,   (5.52b)

where the drift and the diffusion coefficients depend on the argument x_n. In Equation (5.52b) a new RV appears that is defined by the first of the integrals in (5.47),

\Delta Z_n = I_{1,0} = \int_w^v (s - w)\,dB_s,   (5.53)

where the integration boundaries are defined in (5.51).


We easily find the moments ⟨ΔZ_n⟩ = 0 and

\mathrm{Var}(\Delta Z_n) = \left\langle \int_w^v (s - w)\,dB_s \int_w^v (r - w)\,dB_r \right\rangle = \int_w^v (s - w)^2\,ds = \frac{1}{3}\Delta_n^3,   (5.54a)

and the covariance

\langle \Delta Z_n \Delta B_n \rangle = \left\langle \int_w^v (s - w)\,dB_s \int_w^v dB_u \right\rangle = \int_w^v (s - w)\,ds = \frac{1}{2}\Delta_n^2.   (5.54b)

Note also that Equation (1.93) states that the variable ΔZ_n is N(0, Δ_n^3/3) distributed.

In order to perform a numerical simulation of the RV ΔZ_n we formulate the following theorem:

Theorem 5.2 (Numerical simulation of the RV ΔZ_n)
First we note that we can generate the Wiener increment ΔB_n as an N(0, Δ_n) distributed variable. We obtain, from the polar Marsaglia (or the Box-Muller) method (5.34), two independent N(0,1) distributed RV's, y_1 and y_2. Now we put

\Delta B_n = \sqrt{\Delta_n}\,y_1; \qquad \Delta Z_n = \frac{1}{2}\Delta_n^{3/2}\left(y_1 + \frac{y_2}{\sqrt{3}}\right).   (5.55)

We obtain from (5.55)

\langle \Delta Z_n \rangle = 0; \qquad \langle (\Delta Z_n)^2 \rangle = \frac{1}{3}\Delta_n^3; \qquad \langle \Delta Z_n \Delta B_n \rangle = \frac{1}{2}\Delta_n^2,

and this proves (5.54).
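The simulation rule (5.55) is easy to check by Monte Carlo; this sketch (our notation, not the book's program F9A) estimates the moments (5.54):

```python
import math
import random

def delta_B_Z(dt, rng):
    """Generate the correlated pair (Delta B_n, Delta Z_n) according to (5.55)."""
    y1, y2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    dB = math.sqrt(dt) * y1
    dZ = 0.5 * dt ** 1.5 * (y1 + y2 / math.sqrt(3.0))
    return dB, dZ

# Monte Carlo estimate of <dZ^2> and <dZ dB> for Delta_n = 0.1
rng = random.Random(0)
dt, n = 0.1, 200_000
pairs = [delta_B_Z(dt, rng) for _ in range(n)]
var_Z = sum(dZ * dZ for _, dZ in pairs) / n      # should approach Delta^3/3
cov   = sum(dB * dZ for dB, dZ in pairs) / n     # should approach Delta^2/2
```

Both estimates converge to the theoretical values (5.54a) and (5.54b) as n grows.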

We verify (5.52) to (5.54) numerically in program F9A. There we use the traditional mean of a RV x, averaging over K realizations of this variable:

\langle x \rangle = \lim_{K \to \infty} \frac{1}{K}\sum_{k=1}^{K} x_k,   (5.56a)

where the RV's x_k are individual samples of x. In program F9B we contrast this average with the batch average. There we calculate the mean of x by computing K batches, each of which consists of M samples of x and has a different seed of the random generator. This yields

\langle x \rangle_b = \frac{1}{KM}\sum_{k=1}^{K}\sum_{m=1}^{M} x_m^{(k)},   (5.56b)

where x_m^{(k)} represents the mth realization of the variable x in the kth batch. Note that (5.56a) does not coincide with (5.56b), not even for identical seeds of the individual batches. We propose to use F9B with K = M = 25 and compare the results with those of F9A with K = 625.
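The difference between (5.56a) and (5.56b) is only in the bookkeeping of seeds and samples; a toy sketch (with a hypothetical Gaussian RV standing in for the SDE output) illustrates both averages:

```python
import random

def plain_mean(samples):
    """(5.56a): arithmetic mean over K realizations."""
    return sum(samples) / len(samples)

def batch_mean(K, M, seeds, draw):
    """(5.56b): K batches of M samples, each batch with its own seed."""
    total = 0.0
    for k in range(K):
        rng = random.Random(seeds[k])           # a fresh generator per batch
        total += sum(draw(rng) for _ in range(M))
    return total / (K * M)

# Toy example: x ~ N(1, 0.5); K = M = 25 batches versus 625 plain samples
draw = lambda rng: rng.gauss(1.0, 0.5)
rng = random.Random(123)
m_plain = plain_mean([draw(rng) for _ in range(625)])
m_batch = batch_mean(25, 25, range(25), draw)
```

Both estimators target the same mean but use different random number streams, so they agree only statistically.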

The numerical data for Figure 5.3 were calculated with the Milstein scheme (5.50) as well as with the higher order routine (5.52); the SF's as well as the averages coincide up to the first three digits. We also tried to compare the growth of the solution of the generalized population problem (2.35a) with the solutions of the usual growth problem (2.1). To accomplish this we use in (2.35a) rk = 1 and compare the corresponding numerical results with those of (2.1) with r = 1. We also note that (2.35a) can be written as

dX = rkX(1 - X/k)dt + uX\,dB_t,

and the deterministic limit of this SDE has a stationary point at X = k. The latter property prevents the solutions from tending asymptotically to each other. We can only expect an approximate coincidence in the first phase of the growth.

Example. The application of the routine (5.52) to the Ornstein-Uhlenbeck problem (2.14) with a = m - x, b = const yields

H(x_n, \Delta_n, \Delta B_n, \Delta Z_n) = -b\,\Delta Z_n - \frac{1}{2}(m - x_n)\Delta_n^2,

and hence

x_{n+1} = x_n + (m - x_n)\left(\Delta_n - \frac{1}{2}\Delta_n^2\right) + b(\Delta B_n - \Delta Z_n).   (5.57)

The schemes discussed in this section belong to the class of explicit schemes, where the advanced solution x_{n+1} is given by functions of the solution x_n. Explicit routines have the disadvantage of including a great variety of functions and their derivatives. An additional disadvantage of these schemes is their instability. Hence, we replace these routines in the next section with modified routines. A particular modification is given by an implicit scheme, which is known from deterministic problems to be less unstable.

However, a general comment about the use of numerical routines to approximate the solutions of SDE's must be made. To eliminate the possibility of spurious results, it is always advisable to confirm the numerical results of a particular routine with the numerical data obtained from another method.

We note also that we introduced at the beginning of this section the Ito-Taylor expansion that is based on the application of the Ito formula (1.99.3). In the case of Stratonovich SDE's there is also a Stratonovich-Taylor expansion that relies on the Stratonovich differential (1.100). The latter approach is not covered in this book and the interested reader should consult the book of Kloeden et al. [2.5].

5.6. Modified 1D Milstein Schemes

We used the Milstein scheme in program F8 to solve the population growth problem (2.1) in the regime of the variable 0 < x < 10 and depicted the results in Figures 2.1 and 2.2. There we found that the numerically calculated moments agree very well with the theoretical ones. Nor did we encounter any problems with divergence of the numerical solution. However, the situation changes dramatically if we extend the regime of this solution beyond x > 10. The Milstein routine is unstable in this regime and produces a function that diverges rapidly.

The question arises of how to improve the stability of the Milstein scheme. Basically, there are two possibilities to modify a given routine, and we exemplify this for the Milstein routine. In the first case (i) we perturb the drift coefficient. In the second alternative (ii) we modify the terms that are based on the diffusion coefficient. We begin with the first alternative.


(i) An implicit 1D scheme
Here, we replace the Milstein routine (5.50) by

x_{n+1} = x_n + [\alpha a(x_{n+1}) + (1 - \alpha)a(x_n)]\Delta_n + b(x_n)\Delta B_n + \frac{1}{2}b(x_n)b'(x_n)[(\Delta B_n)^2 - \Delta_n].   (5.58)

Note that (5.58) is, for α ∈ [0,1], a family of schemes. We call (5.58) in the limit α = 1 (α = 0) a fully implicit (an explicit) scheme, and α is the degree of implicitness. Note also that implicit effects refer exclusively to the drift coefficient, whereas the diffusion coefficient is always taken at x_n.

In an implicit scheme we must calculate the solution x_{n+1} as the numerical solution of the (generally nonlinear) Equation (5.58) with given x_n. To solve this part of the problem one usually uses a Newton iteration. However, convergence problems arise in this iteration, in particular if the stochastic influences are too strong. Thus, despite the popularity of this method in deterministic problems, it is convenient to use it only for SDE's with a linear drift coefficient, where no iterations are needed.
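For a linear drift, (5.58) can indeed be solved for x_{n+1} in closed form, so no Newton iteration is needed. The following sketch (parameter values are ours) does this for the Ornstein-Uhlenbeck drift a(x) = m - x with constant b, in which case the Milstein correction vanishes:

```python
import math
import random

def implicit_ou_step(x, m, b, dt, dB, alpha):
    """One step of (5.58) for a(x) = m - x, b = const, solved in closed form:
    x_{n+1}(1 + a*dt) = x_n(1 - (1-a)dt) + m*dt + b*dB."""
    return (x * (1.0 - (1.0 - alpha) * dt) + m * dt + b * dB) / (1.0 + alpha * dt)

# Fully implicit (alpha = 1) path of the OU process with m = 1, b = 0.2
rng = random.Random(0)
dt, x = 0.01, 0.0
for _ in range(1000):                           # integrate to t = 10
    x = implicit_ou_step(x, 1.0, 0.2, dt, rng.gauss(0.0, math.sqrt(dt)), 1.0)
```

Averaged over many paths, x relaxes toward m, in agreement with the exact mean m - (m - x_0)exp(-t).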

In EX 5.4 we use (5.58) for the Ornstein-Uhlenbeck SDE (2.14). However, the employment of the corresponding program F11 shows that the scheme (5.58) again leads to unstable results in the regime x > 10. This contrasts with the result of the higher order explicit scheme (5.52) for the same problem, and we must conclude that the implicit routine (5.58) is inconsistent with a 1D autonomous Ito SDE.

(ii) Modification of the diffusive terms
Here we use, for an SDE with a variable drift coefficient, the scheme proposed by Kloeden (see [2.5])

x_{n+1} = x_n + a(x_n)\Delta_n + b(x_n)\Delta B_n + \frac{1}{2}[(\Delta B_n)^2 - \Delta_n][b(S_n) - b(x_n)]/\sqrt{\Delta_n},   (5.59)

with the support value

S_n = x_n + a(x_n)\Delta_n + b(x_n)\sqrt{\Delta_n}.   (5.60)

To motivate (5.59) we note that a Taylor expansion of b(S_n) for the usually small parameter Δ_n reproduces (5.50) to leading order.
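A sketch of the derivative-free step (5.59) with the support value (5.60), applied to the test SDE (5.61) with the parameters of Figure 5.4; the code is our minimal stand-in for program F12:

```python
import math
import random

def kloeden_step(a, b, x, dt, dB):
    """One step of the derivative-free scheme (5.59) with support value (5.60)."""
    s = x + a(x) * dt + b(x) * math.sqrt(dt)          # support value S_n
    return (x + a(x) * dt + b(x) * dB
            + 0.5 * (dB * dB - dt) * (b(s) - b(x)) / math.sqrt(dt))

# SDE (5.61): dx = (m - x)dt + beta*sin(x) dB_t with m = 3, beta = 1
m, beta = 3.0, 1.0
a = lambda x: m - x
b = lambda x: beta * math.sin(x)
rng = random.Random(0)
dt, x = 0.005, 0.0
for _ in range(2000):                                  # integrate to t = 10
    x = kloeden_step(a, b, x, dt, rng.gauss(0.0, math.sqrt(dt)))
```

Averaging an ensemble of such paths reproduces the exact mean ⟨x⟩ = m - (m - x_0)exp(-t) quoted below.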


Fig. 5.4. A sample function and the mean value of the SDE (5.61); m = 3, β = 1, Δ_n = 0.005, Ensem = 200.

We test (5.59) and (5.60) successfully in program F12 for the SDE with variable drift coefficient

dx = (m - x)dt + \beta\sin(x)\,dB_t; \qquad m, \beta = const.   (5.61)

A particular test criterion for applicability is the agreement with respect to the average, since we obtain from (5.61) [in analogy to (2.17a)] ⟨x⟩ = m - (m - x_0)exp(-t). In Figure 5.4 we show a graph of a particular SF and depict the averages, where the exact and the numerical one almost coincide. Thus, we may conclude that the routine (5.59) represents a stable modification of the Milstein scheme.

5.7. The Ito-Taylor Expansion for N-dimensional SDE's

We consider now the general case of non-autonomous N-dimensional SDE's with R ≥ 1 independent Wiener processes. We introduce here the nomenclature

g_k(\xi_t, t) = g_k(x_1, \ldots, x_N, t); \qquad \xi_t = (x_1(t), \ldots, x_N(t)),

for the kth component of the vector function g at the space-time position (\xi_t, t).


We write now the N-dimensional SDE (1.123) in the form

dx_k(t) = a_k(\xi_t, t)dt + b_{kr}(\xi_t, t)dB_t^r; \qquad k = 1, \ldots, N; \; r = 1, \ldots, R.   (5.62)

For the differential of the vector function f_k(\xi_t, t) we find now, with (1.127), the expression

df_k(\xi_t, t) = [L_0(\xi_t, t)f_k(\xi_t, t)]dt + [L_r(\xi_t, t)f_k(\xi_t, t)]dB_t^r,   (5.63)

with

L_0 = \frac{\partial}{\partial t} + a_m\frac{\partial}{\partial x_m} + \frac{1}{2}b_{mr}b_{nr}\frac{\partial^2}{\partial x_m \partial x_n}; \qquad L_r = b_{mr}\frac{\partial}{\partial x_m},   (5.64)

where the functions a_k, b_{mr}, f_k are taken at the space-time position (\xi_t, t) and B_t^1, \ldots, B_t^R represent a set of R independent Brownian motions.

We can integrate (5.63) formally and the result is

f_k(\xi_t, t) = f_k(\xi_0, a) + \int_a^t \{L_0(\xi_s, s)f_k(\xi_s, s)\,ds + L_r(\xi_s, s)f_k(\xi_s, s)\,dB_s^r\},   (5.65)

t_0 = a; \qquad \xi_0 = (x_1(a), \ldots, x_N(a)).

We apply now (5.65) to the functions x_k, obtaining L_0 x_k = a_k and L_r x_k = b_{kr}, and we infer from (5.65)

x_k(t) = x_k(a) + \int_a^t \{a_k(\xi_s, s)\,ds + b_{kr}(\xi_s, s)\,dB_s^r\}.   (5.66)

However, we could obtain Equation (5.66) more simply as the result of a formal integration of (5.62).

We apply (5.65) to calculate the functions a_k(\xi_s, s) and b_{kw}(\xi_s, s) and substitute the results into (5.66). Proceeding in this way we obtain

a_k(\xi_s, s) = a_k(\xi_0, a) + \int_a^s \{L_0(\xi_u, u)a_k(\xi_u, u)\,du + L_r(\xi_u, u)a_k(\xi_u, u)\,dB_u^r\},   (5.67a)

and

b_{kw}(\xi_s, s) = b_{kw}(\xi_0, a) + \int_a^s \{L_0(\xi_u, u)b_{kw}(\xi_u, u)\,du + L_r(\xi_u, u)b_{kw}(\xi_u, u)\,dB_u^r\}.   (5.67b)

The substitution of (5.67) into (5.66) yields

x_k(t) = x_k(a) + (t - a)a_k(\xi_0, a) + (B_t^r - B_a^r)b_{kr}(\xi_0, a) + R_k^{(1)},   (5.68)

with

R_k^{(1)} = \int_a^t \int_a^s \{(L_0 a_k)\,du\,ds + (L_r a_k)\,dB_u^r\,ds + (L_0 b_{kr})\,du\,dB_s^r + (L_w b_{kr})\,dB_u^w\,dB_s^r\},   (5.69)

where all functions and operators are taken at the space-time location (\xi_u, u). Equation (5.68) with R_k^{(1)} = 0 represents the non-autonomous N-dimensional Euler scheme [see (5.45) for the 1D autonomous case].

To obtain the N-dimensional Milstein routine we expand the 3rd order tensor L_w b_{kr} in (5.69) into an Ito-Taylor series and truncate after the first term. This leads to

T_{rw}^k(\xi_u, u) = b_{mw}(\xi_u, u)\frac{\partial}{\partial x_m}b_{kr}(\xi_u, u) \approx T_{rw}^k(\xi_0, a) = b_{mw}(\xi_0, a)\frac{\partial}{\partial x_m}b_{kr}(\xi_0, a).

Hence, we obtain from the last term of (5.69)

R_k^{(2)} = T_{rw}^k(\xi_0, a)\int_a^t dB_s^r \int_a^s dB_u^w = T_{rw}^k(\xi_0, a)\int_a^t dB_s^r\,(B_s^w - B_a^w).   (5.70)

The integral on the right hand side of (5.70) is again a new RV,

K_r^w = \int_a^t dB_s^r\,(B_s^w - B_a^w).   (5.71a)


We know only its diagonal elements, such as [see (5.46)]

K_1^1 = \int_a^t dB_s^1\,(B_s^1 - B_a^1) = I_{1,1}.   (5.71b)

For the sake of simplicity, we reduce our goal to the investigation of an N-D non-autonomous SDE in the presence of only one Wiener process [this means (5.62) with R = 1, B_t^1 = B_t], and we refer the reader for more details of the tensorial RV (5.71a) to the book of Kloeden et al. [2.5].

Now we can formulate the N-dimensional Milstein scheme for the case R = 1; this yields

x_k(t_{n+1}) = x_k(t_n) + M_k(\xi_{t_n}, \Delta_n, \Delta B_n),   (5.72)

with the N-D Milstein operator

M_k(\xi_{t_n}, \Delta_n, \Delta B_n) = a_k\Delta_n + b_k\Delta B_n + I_{1,1}\,b_m\frac{\partial b_k}{\partial x_m},   (5.73)

where the coefficients a_k and b_k refer to the space-time position (\xi_{t_n}, t_n).

We now apply the Milstein routine (5.73) in the following example.

Example 1 (The linear pendulum)
The linear pendulum with stochastic damping was analyzed in Section 2.3.2. We use for a 2D SDE the nomenclature

x_1(t_s) = x_s; \qquad x_2(t_s) = y_s.   (5.74)

We infer from (2.60) with \alpha \neq 0, \beta = \gamma = 0

dx = y\,dt; \qquad dy = -x\,dt - \alpha y\,dB_t,   (5.75)

and this means

a_1 = y, \; b_1 = 0; \qquad a_2 = -x, \; b_2 = -\alpha y.   (5.76)

With (5.72) to (5.76) we find

x_{n+1} = x_n + y_n\Delta_n,

y_{n+1} = y_n - x_n\Delta_n - \alpha y_n\Delta B_n + \frac{1}{2}\alpha^2 y_n[(\Delta B_n)^2 - \Delta_n],   (5.77)

where the summation convention is not used.
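The recursion (5.77) translates directly into code; this sketch (ours, not the book's program F13) advances one sample path of the damped linear pendulum (5.75):

```python
import math
import random

def pendulum_milstein(x0, y0, alpha, dt, K, rng):
    """Iterate (5.77) for dx = y dt, dy = -x dt - alpha*y dB_t over K steps."""
    x, y = x0, y0
    for _ in range(K):
        dB = rng.gauss(0.0, math.sqrt(dt))
        x, y = (x + y * dt,
                y - x * dt - alpha * y * dB
                  + 0.5 * alpha ** 2 * y * (dB * dB - dt))
    return x, y

rng = random.Random(0)
x10, y10 = pendulum_milstein(1.0, 0.0, 0.1, 0.005, 2000, rng)   # state at t = 10
```

In the deterministic limit alpha = 0 the iteration reduces to the explicit Euler rotation, whose energy x^2 + y^2 grows by the exact factor (1 + dt^2) per step; this provides a convenient sanity check.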


We solve the iteration problem (5.77) for an equidistant decomposition of the time-axis, Δ_n = Δ = const, in program F13. The corresponding initial condition reads x_0 = x(0), y_0 = y(0). The results for the mean and variance of this process were already shown in Figures 2.3 and 2.4 in the regime 0 ≤ t ≤ 10. If we wish to extend this regime, we find that the scheme (5.77) is not stable, and we will use an improved routine in the next section.

We also solve the nonlinear pendulum (2.105) with the Milstein scheme. This SDE is incorporated in program F13 as well.

5.8. Higher Order Approximations

A higher order scheme for the non-autonomous N-D SDE is derived in EX 5.5. We give here only the result

x_{n+1}^k = x_n^k + a_k\Delta_n + b_k\Delta B_n + \frac{1}{2}(L_0 a_k)\Delta_n^2 + (L_0 b_k)I_{0,1} + (L_1 a_k)I_{1,0} + (L_1 b_k)I_{1,1} + (L_1 L_1 b_k)I_{1,1,1},   (5.78a)

where we use the nomenclature [see (5.74)]

x_s^k = x_k(t_s), \qquad k = 1, 2, \ldots, N; \; s = 0, 1, \ldots,   (5.78b)

and where all coefficient functions and operators, a_k, b_k, L_0, L_1, and all integrals (5.46)-(5.49) and (5.53) depend on the space-time variable (x_n^1, \ldots, x_n^N, t_n). Equation (5.78) is the N-D generalization of (5.52) for a non-autonomous SDE. We employ also (5.48) and (5.53) to obtain the relation

I_{0,1} = \Delta B_n \Delta_n - \Delta Z_n.   (5.79)

We can easily write the compressed equation (5.78) in a clean-cut formulation if we use the definitions of the operators (5.64).

Example (The linear and the nonlinear pendulum)
Using (5.76) we infer from (5.64)

L_0 = y\frac{\partial}{\partial x} - \sin(x)\frac{\partial}{\partial y} + \frac{1}{2}(\alpha y)^2\frac{\partial^2}{\partial y^2}; \qquad L_1 = -\alpha y\frac{\partial}{\partial y}.   (5.80)


Thus, we obtain from (5.78)

x_{n+1} = x_n + y_n(\Delta_n - \alpha I_{1,0}) - \frac{1}{2}\sin(x_n)\Delta_n^2,

y_{n+1} = y_n - \sin(x_n)\Delta_n - \frac{1}{2}y_n\cos(x_n)\Delta_n^2 - \alpha y_n\Delta B_n + \frac{1}{2}\alpha^2 y_n[(\Delta B_n)^2 - \Delta_n] + \alpha\sin(x_n)I_{0,1} - \frac{1}{6}\alpha^3 y_n[(\Delta B_n)^2 - 3\Delta_n]\Delta B_n.   (5.81)

Equation (5.81) applies to the nonlinear case. In the linear case we must replace cos(x_n) by 1 and sin(x_n) by x_n in (5.81). The corresponding numerical work is done in program F13.

Finally, we mention a modified Milstein scheme for autonomous systems that generalizes the 1D scheme (5.60). With the use of the definitions (5.78b) and the comments that follow, we write for N-dimensional systems

x_{n+1}^k = x_n^k + a_k\Delta_n + b_k\Delta B_n + \frac{1}{2}[b_k(Y_+^1, \ldots, Y_+^N) - b_k(Y_-^1, \ldots, Y_-^N)]\,I_{1,1}/\Delta_n^{1/2},

Y_\pm^k = x_n^k + a_k\Delta_n \pm b_k\Delta_n^{1/2}, \qquad k = 1, 2, \ldots, N.   (5.82)

The motivation for this scheme comes again from a Taylor expansion of the last term in (5.82).

We now compare the numerical data obtained for the variance V_{22}(t) of the linearized pendulum with the analytic formula (2.82). We apply (i) the Milstein scheme (5.72), (ii) the higher order scheme (5.78), (iii) the modified Milstein scheme (5.82), and (iv) an implicit scheme. The latter is established in analogy to (5.58) and is given for a general 2D SDE in the form

x_{n+1} = x_n + [\gamma a_1(\xi_{n+1}) + (1 - \gamma)a_1(\xi_n)]\Delta_n + Z_x,

y_{n+1} = y_n + [\gamma a_2(\xi_{n+1}) + (1 - \gamma)a_2(\xi_n)]\Delta_n + Z_y,   (5.83)

where (Z_x, Z_y) are the components of the stochastic terms and γ is the implicitness parameter. For the same reasons as discussed in connection with (5.58), we use (5.83) only for SDE's with linear drift coefficients a_1, a_2.

We specialize (5.83) to the case of the linear pendulum and this yields

x_{n+1} = (1 + \gamma^2\Delta_n^2)^{-1}\{x_n + (1 - \gamma)y_n\Delta_n + Z_x + \gamma\Delta_n[y_n + (\gamma - 1)\Delta_n x_n + Z_y]\},

y_{n+1} = (1 + \gamma^2\Delta_n^2)^{-1}\{y_n + (\gamma - 1)x_n\Delta_n + Z_y + \gamma\Delta_n[-x_n + (\gamma - 1)\Delta_n y_n - Z_x]\}.   (5.84)

The numerical data are produced, in the case of (i) to (iii), with the program F13 and the implicit scheme is executed with the aid of program F14.

The comparison of the numerical data shows that, at least for the variance of the linear pendulum, the Milstein routine (5.72) and the modified Milstein routine (5.82) fail to give sufficient accuracy. By contrast, we see that the higher order scheme (5.78) and the implicit routine (5.84) reproduce satisfactorily the theoretical prediction (2.82) in the regime 30 ≤ t ≤ 50. The corresponding numerical solutions of the routine (5.78) and the implicit scheme (5.84) are almost identical. Thus, in Figure 5.5 a comparison of the variance produced by (5.78) with the analytic values (2.82) is shown.

Fig. 5.5. A comparison of the analytic variance (2.82) (continuous line) and the numerical data obtained by the routine (5.78); α = 0.1, Δ_n = 0.005.


5.9. Strong and Weak Approximations and the Order of the Approximation

We begin with a definition.

Definition 5.1 (Absolute, global and local errors)
We consider an n-D SDE for the variable X(t) ∈ R^n with the IC X(t_0) = X_0 ∈ R^n in the interval t ∈ [0, T]. A discrete approximation Y(t) ∈ R^n of the solution X(t) of this SDE has a local error defined by

e_l = \langle |X(t_j) - Y(t_j)| \rangle; \qquad 0 \le t_j \le T.   (5.85)

The global error is given by

e_g = \sum_{k=1}^{n} \langle |X(t_k) - Y(t_k)| \rangle; \qquad t_0 = 0 < t_1 < \cdots < t_n = T,   (5.86)

whereas the absolute error is defined by

e_a = \langle |X(T) - Y(T)| \rangle.   (5.87)

In (5.85) to (5.87) we used the symbol |·|, which is an abbreviation for the norm of a vector function. Thus, for instance, the absolute error is written at full length as

e_a = \left\langle \sqrt{\sum_{k=1}^{n} [X_k(T) - Y_k(T)]^2} \right\rangle,   (5.87')

where the subscript k labels the vector components. The local error (5.85) measures the closeness of the solution and its discretization at just one internal point. The global and the absolute error measure the pathwise closeness of the exact solution and the approximation as a sum over the internal points and at the end-point of the interval, respectively.

In the following we will investigate the dependence of the absolute error on the step width of a particular discretization routine. To realize this task with computer experiments, we choose a particular SDE with a known exact solution and approximate the solution with a given routine. Then we choose M batches of N simulations and we denote the trajectories of the kth simulation of the jth batch at the end of the interval by X_{j,k}(T) and Y_{j,k}(T). The average values (batch averages of the absolute errors)

\eta_j = \frac{1}{N}\sum_{k=1}^{N} |X_{j,k}(T) - Y_{j,k}(T)|, \qquad j = 1, 2, \ldots, M,   (5.88)

are for N ≫ 1 independent Gaussian RV's. The batches are chosen to construct, with the Student t-distribution (a distribution that we will discuss in a moment), confidence intervals for a sum of approximately Gaussian RV's with unknown variance. In this way we compute the batch average

\bar{\eta} = \frac{1}{M}\sum_{j=1}^{M}\eta_j = \frac{1}{MN}\sum_{j=1}^{M}\sum_{k=1}^{N} |X_{j,k}(T) - Y_{j,k}(T)|.   (5.89)

We take advantage of the latter average to obtain the estimate of the variance of the batch averages

\hat{\sigma}^2 = \frac{1}{M - 1}\sum_{j=1}^{M}(\eta_j - \bar{\eta})^2.   (5.90)

It is convenient to apply the Student distribution with M - 1 DOF to achieve a 100(1 - α)% confidence interval for \bar{\eta} in the form

(\bar{\eta} - \Delta\eta,\; \bar{\eta} + \Delta\eta), \qquad \Delta\eta = \kappa_{1-\alpha,M-1}\,\hat{\sigma}/\sqrt{M},   (5.91)

where the parameter \kappa_{1-\alpha,M-1} is calculated from the Student distribution. As an example we list the following data: for α = 0.05 and M = 20 we obtain \kappa_{0.95,19} = 2.09, and the absolute error will lie in the interval (5.91) with the probability 1 - α = 0.95.

The Student (or t-) distribution arises as follows: let X be an N(0,1) distributed RV and let χ² be a RV following the chi-squared distribution (5.1a) with DOF-parameter s. Then we consider the ratio X/\sqrt{\chi^2/s}. The latter variable has the PDF [see (1.1)]

F(X/\sqrt{\chi^2/s}) = \Pr(X/\sqrt{\chi^2/s} \le t).   (5.92)


We give here the following representation of this PDF for even values of the DOF (see Abramowitz & Stegun [1.3])

F(X/\sqrt{\chi^2/s}) = \sin\theta\left\{1 + \frac{1}{2}\cos^2\theta + \frac{1\cdot 3}{2\cdot 4}\cos^4\theta + \cdots + \frac{1\cdot 3\cdots(s-3)}{2\cdot 4\cdots(s-2)}\cos^{s-2}\theta\right\},   (5.93)

\theta = \arctan(t/\sqrt{s}), \qquad (s \text{ even}),

with an analogous formula for odd values of s.

Definition 5.2 (Strong convergence and order of convergence)
Strong convergence is a pathwise convergence characterized by the error criterion (5.87). A discrete approximation scheme with a maximum step size δ converges with the order γ > 0 at a time T if there is a constant C independent of δ such that

e_a = \langle |X(T) - Y(T)| \rangle \le C\delta^\gamma.   (5.94)

In the framework of weak convergence we are not interested in a pathwise approximation of the stochastic process. We focus there on the calculation of the PD, or its lowest moments, or functionals of the PD. Hence we do not compare the difference between the exact solution and the approximation as in (5.87), but concentrate on the errors of the moments. For the lowest moment, the mean, we introduce the mean error

\varepsilon_m = |\langle X(T) \rangle - \langle Y(T) \rangle|.   (5.95)

Definition 5.3 (Order of weak convergence)
Using the weak convergence criterion (5.95), we define the order γ > 0 by

|\langle g(X(T)) \rangle - \langle g(Y(T)) \rangle| \le C\delta^\gamma,   (5.96)

where g can be a class of polynomials. Clearly, the simplest definition of the order comes with the application of the linear polynomial g(X) = X.


To perform numerical experiments on strong and weak convergence, we use in program F15 the population growth problem (2.1). To this end we take the logarithm of (5.94) and (5.96). The slope of the corresponding curve then equals the order of convergence γ. The results are shown in Figures 5.6 and 5.7, where we plot the logarithm of the errors versus the time steps. It is convenient to apply negative powers of 2 for the time steps and to use the log_2 of the error.
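The slope measurement can be sketched as follows: we integrate dX = rX dt + uX dB_t, whose exact endpoint X(T) = X_0 exp[(r - u^2/2)T + u B_T] is built from the same Brownian increments, with the Euler scheme (5.41) at several step sizes, and fit the log2-error slope. This is our minimal stand-in for program F15, not the program itself:

```python
import math
import random

def euler_endpoint_error(r, u, x0, T, K, n_paths, rng):
    """Mean absolute endpoint error of the Euler scheme against the exact
    solution driven by the same Brownian increments."""
    dt, err = T / K, 0.0
    for _ in range(n_paths):
        x, bT = x0, 0.0
        for _ in range(K):
            dB = rng.gauss(0.0, math.sqrt(dt))
            x += r * x * dt + u * x * dB
            bT += dB
        exact = x0 * math.exp((r - 0.5 * u * u) * T + u * bT)
        err += abs(x - exact)
    return err / n_paths

rng = random.Random(0)
errs = [euler_endpoint_error(1.0, 0.5, 1.0, 1.0, 2 ** p, 2000, rng)
        for p in (4, 5, 6, 7)]
# least-squares slope of log2(err) versus log2(dt), an estimate of gamma
xs = [-p for p in (4, 5, 6, 7)]
ys = [math.log2(e) for e in errs]
mx, my = sum(xs) / 4, sum(ys) / 4
slope = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
         / sum((a - mx) ** 2 for a in xs))
```

For the Euler scheme the fitted slope should come out near the strong order γ = 0.5 quoted below.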

Fig. 5.6. Absolute error for the case of the population growth problem (2.1). Upper curve: Euler scheme; lower curve: scheme (5.52). Parameters: r = 1, u = 0.1, N(0) = 1, T = 1.

Fig. 5.7. Mean error for the case of the population growth problem (2.1). Curves and parameters as in Figure 5.6.


However, there is an ambiguity in the calculation of the strong order of convergence γ in (5.94), related to the computation of the exact solution X(T). We discuss this problem only for a 1-D SDE and calculate the numerical solution Y(T) alternatively by (5.41), (5.50) or (5.52) (the Euler, Milstein or higher order scheme). Starting from the initial condition Y_0 we obtain after K equidistant steps Y_K(T), T = KΔ, where we need for each individual step the quantity ΔB_k = (z)_k√Δ, k = 1, 2, ..., K, with (z)_k as the kth simulation of an N(0,1) RV. At the end of the interval t = T we have for the WP

B_T = \sum_{k=1}^{K}\Delta B_k = \sqrt{\Delta}\sum_{k=1}^{K}(z)_k.   (5.97)

To determine the strong order of convergence we need next the exact solution X(T), which is in many cases a function of B_T. There are different ways to determine the latter quantity. We propose the following two alternatives:

(i) As in program F15 we calculate X(T) with the aid of

B_T = \sqrt{T}(z)_{K+1},

where we use the (K + 1)th simulation of the RV z. This is the simulation taken right after the computation of (5.97). This concept was employed in the determination of the data for Figures 5.6 and 5.7.

(ii) Here we use (5.97) and we easily derive

\langle B_T \rangle = 0; \qquad \langle B_T^2 \rangle = \Delta\sum_{k,m=1}^{K}\langle (z)_m(z)_k \rangle = \Delta\sum_{k=1}^{K}\langle (z)_k^2 \rangle = \Delta K = T.

This concept has the methodological advantage of assigning the value γ = ∞ to the strong order of convergence for the exact 1-D SDE

dx = \alpha\,dt + \beta\,dB_t; \qquad \alpha, \beta = const.   (5.98)

Furthermore, we obtain with this concept trends of the order of convergence that apply to a general class of 1-D SDE's. The Euler scheme leads, for instance, to γ = 0.5 and γ = 1 for strong and weak convergence, respectively.


We will give more hints on the determination of the order of convergence in EX 5.10.

The important question is, however, what the goal of the numerical simulation is. Is a good pathwise approximation required, or is an approximation of some functional of the PD, such as the moments, sufficient?

Finally, we wish to add some important references from the vast literature about strong and weak convergence schemes and implicit routines. We mention here only the following articles: Milstein [5.10], [5.11], Drummond and Mortiner [5.12], Kloeden and Platen [5.13], [5.14], Saito and Mitsui [5.15] and Newton [5.16].

Exercises

EX 5.1. Given the DOF m, calculate the mean value and the variance of the PD (5.1a). Verify (x) = m; Var(x) = 2m.

EX 5.2. Use the asymptotic method of integration by parts to approximate (5.10). Using this method and applying subsequently Newton's iteration, determine the parameter b(a).

Hint: An integral such as the one in (5.10) can be transformed to

J = \int_a^\infty \exp(-x^2/\lambda)\,dx = -\frac{\lambda}{2}\int_a^\infty dx\,\frac{1}{x}\frac{d}{dx}\exp(-x^2/\lambda); \qquad 0 < a, \lambda < \infty.

Use integration by parts to find an asymptotic expansion of this integral. The subsequent Newton iteration is performed in program F3.

EX 5.3. Verify (5.48) and (5.49).

Hint: Use (1.89), (1.105b), and (1.110).

EX 5.4. Find an implicit solver for the Ornstein-Uhlenbeck problem (2.14) based on the Milstein routine and compare your program with F11.

Hint: Use (5.58) to formulate the solver.


EX 5.5. Derive the higher order scheme for a non-autonomous N-D SDE (5.62).

Hint: Expand the error term (5.69).

EX 5.6. Find a numerical solution of the SDE (A.1) with λ := n (why?) of the appendix of Chapter 3 based on the routine (5.78). Here we obtain

a_1 = y, \; b_1 = 0; \qquad a_2 = -\lambda g(y)y - f(x), \; b_2 = \sqrt{2\lambda T} = const,

L_0 = y\frac{\partial}{\partial x} + a_2\frac{\partial}{\partial y} + \lambda T\frac{\partial^2}{\partial y^2}; \qquad L_1 = \sqrt{2\lambda T}\frac{\partial}{\partial y},

and hence (L_0 b_k) = (L_1 b_k) = (L_1 L_1 b_k) = 0, k = 1, 2. Equation (5.78) leads to

x_{n+1} = x_n + y_n\Delta_n + \frac{1}{2}a_2\Delta_n^2 + \sqrt{2\lambda T}\,\Delta Z_n,

y_{n+1} = y_n + a_2\Delta_n + \sqrt{2\lambda T}\,\Delta B_n + \frac{1}{2}(L_0 a_2)\Delta_n^2 + (L_1 a_2)\Delta Z_n.

Find L_0 a_2 and L_1 a_2 and solve the numerical system using the parameters given in Figure 3.4 with step sizes Δ_n = 0.01 and 0.005. Plot the results and compare them with the phase diagram in Figure 3.4.

EX 5.7. Apply the scheme (5.78) to solve the problem of the Brownian bridge (2.38). Use the step sizes Δ_n = 0.01 and 0.005 and the parameters r = 1 and x(0) = 0.

Hint: In the case of a 1D system we have

L_0 = \frac{\partial}{\partial t} + a\frac{\partial}{\partial x} + \frac{1}{2}b^2\frac{\partial^2}{\partial x^2}; \qquad L_1 = b\frac{\partial}{\partial x}.

EX 5.8. Use the Milstein (5.73) and the higher order scheme (5.78) to approximate the solution of the stochastic Brusselator problem

dx = a(x, y)dt + \alpha c(x)dB_t,

dy = -[x + a(x, y)]dt - \alpha c(x)dB_t,

a(x, y) = (\beta - 1)x + \beta x^2 + (x + 1)^2 y; \qquad c(x) = x(1 + x); \qquad \alpha, \beta = const.


The deterministic Brusselator equation (α = 0) was developed on the occasion of a scientific congress in Brussels, Belgium, as a simple model for bifurcations in chemical reactions.

Solve this problem by developing the corresponding user-supplied subroutines for the program F13. Use the step widths Δ_n = 0.005, 0.01, the IC's x(0) = -0.1, y(0) = 0, and the parameters α = 0.1, β = 2. Plot the results in the form of a phase diagram (see Figure 3.4) and find traces of the bifurcations.

EX 5.9. Solve numerically the stochastic Lorenz equations (B.26) of Chapter 3. To achieve this goal, generalize F13 to the case of 3D SDE's. Use the step widths and IC's of EX 5.8 with z(0) = 0 and apply the parameters b = 1, r = 0.2, s = 0.1, α = 0.1. Compare your program with F17.

EX 5.10. Modify the program F15 to employ (5.97) for the determination of the exact solution X(T). Use this concept after the computation of each individual batch in F15. With this modification, find:

a) The strong and weak convergence orders that replace the data of Figures 5.6 and 5.7 in the case of the population growth problem that was originally considered in F15.

b) Using this alternative of F15 with the Euler routine (5.41), calculate the strong order of convergence γ for the case of the SDE (5.98).

c) Another 1-D SDE with a simple exact solution is given in EX 2.1 (iii). Determine again the strong and weak order of convergence.


REFERENCES

Introduction

[1.1] A. Einstein, Ann. Physik, 17, 549 (1905).
[1.2] M. von Smoluchowsky, Ann. Physik, 21, 576 (1906).
[1.3] P. Langevin, Comptes Rendus Acad. Sci. (Paris), 146, 530 (1908).
[1.4] G. E. Uhlenbeck and L. S. Ornstein, Phys. Rev., 34, 823 (1930).
[1.5] E. L. O'Neill, Introduction to Statistical Optics (Dover Publications, 1991).
[1.6] F. G. Stremler, Introduction to Communication Systems (Addison-Wesley Publishing Company, 1990).
[1.7] L. Hopf, Commun. Appl. Math., 1, 53 (1948).
[1.8] R. H. Kraichnan, J. Fluid Mech., 67, 155 (1975).
[1.9] O. V. Rudenko and A. S. Chirkin, Soviet Phys. Acoustics, 19, 64 (1974).
[1.10] W. A. Wojczinsky, Burgers-KPZ Turbulence, Lecture Notes in Mathematics 1700 (Springer-Verlag, 1998).

Chapter 1

[1.1] K. L. Chung, Elementary Probability Theory with Stochastic Processes (Springer-Verlag, New York, 1979).
[1.2] L. Arnold, Stochastic Differential Equations (J. Wiley & Sons, New York, 1974).
[1.3] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions (Dover, 1964).
[1.4] Y. S. Chow and H. Teicher, Probability Theory (Springer-Verlag, New York, 1978).
[1.5] J. Doob, Stochastic Processes (John Wiley, 1953).
[1.6] N. Ikeda and S. Watanabe, Stochastic Differential Equations and Diffusion Processes (North-Holland, Kodansha, 1981).
[1.7] T. H. Rydberg, The normal inverse Gaussian Lévy process: Simulation and approximation, Commun. Statist.-Stoch. Models, 13, 887-910 (1997).
[1.8] B. Øksendal, Stochastic Differential Equations. An Introduction with Applications (Springer-Verlag, 1998).
[1.9] Z. Schuss, Theory and Applications of Stochastic Differential Equations (John Wiley, 1980).
[1.10] K. L. Chung and F. AitSahlia, Elementary Probability (Springer-Verlag, 2003).
[1.11] S. Ross, Probability Models (Academic Press, 2003).
[1.12] P. Malliavin, Stochastic Analysis (Springer-Verlag, New York, 1997).
[1.13] J. Pitman, Probability (Springer-Verlag, New York, 1997).
[1.14] A. N. Shiryaev, Probability (Springer-Verlag, New York, 1996).
[1.15] I. M. Ryshik and I. S. Gradstein, Tables of Sums, Products and Integrals (VEB Deutscher Verlag der Wissenschaften, Berlin, 1963).
[1.16] P. Brémaud, Markov Chains, Gibbs Fields, Monte Carlo Simulation and Queues (Springer-Verlag, New York, 1999).
[1.17] A. R. Kerstein, Linear-eddy modelling of turbulent transport, Part 6. Microstructure of diffusive scalar mixing fields, J. Fluid Mech., 231, 361-394 (1991).
[1.18] A. R. Kerstein, One-dimensional turbulence: Model formulation and application to homogeneous turbulence, shear flows and buoyant stratified flows, J. Fluid Mech., 392, 277-334 (1999).

Chapter 2

[2.1] H. Risken, The Fokker-Planck Equation (Springer-Verlag, 1984).
[2.2] D. A. McQuarrie, Statistical Mechanics (Harper and Row, New York, 1973).
[2.3] T. C. Gard, Introduction to Stochastic Differential Equations (Dekker, 1988).
[2.4] R. L. Stratonovich, Topics in the Theory of Random Noise, Vol. 1 (Gordon and Breach, 1963).
[2.5] P. E. Kloeden, E. Platen and H. Schurz, Numerical Solution of Stochastic Differential Equations (Springer-Verlag, 1994).
[2.6] J. K. Hale and H. Koçak, Dynamics and Bifurcation (Springer-Verlag, 1991).
[2.7] I. Karatzas and S. E. Shreve, Brownian Motion and Stochastic Calculus (Springer-Verlag, 1997).

Chapter 3

[3.1] N. G. van Kampen, Stochastic Processes in Physics and Chemistry (North-Holland, Amsterdam, 1992).
[3.2] S. R. Salinas, Introduction to Statistical Physics (Springer-Verlag, 2001).
[3.3] C. M. Bender and S. A. Orszag, Advanced Mathematical Methods for Scientists and Engineers (McGraw-Hill, New York, 1978).
[3.4] L. Arnold, Random Dynamical Systems (Springer-Verlag, Berlin, 1998).
[3.5] S. Wiggins, Introduction to Nonlinear Dynamical Systems and Chaos (Springer-Verlag, New York, 1990).
[3.6] H. Crauel and F. Flandoli, Additive noise destroys a pitchfork bifurcation, J. Dynamics Differential Equations, 10, 259-274 (1998).


[3.7] H. Rong, G. Meng, X. Wang, W. Xu and T. Fang, Invariant measures and Lyapunov exponents for stochastic Mathieu system, Nonlinear Dynamics, 30, 313-321 (2002).

[3.8] A. H. Nayfeh, Perturbation Methods (John Wiley and Sons, New York, 1973).

[3.9] W. V. Wedig, Invariant measures and Lyapunov exponents for stochastic generalized parameter fluctuations, Structural Safety, 8, 13-25 (1990).

[3.10] S. Rajan and H. G. Davies, Multiple time scaling and the response of a duffing oscillator to narrow band excitations, J. Sound Vibrations, 123, 497-506 (1988).

[3.11] A. H. Nayfeh and S. J. Serban, Response statistics to combined deterministic and random excitations, Int. J. Nonlinear Mechanics, 25, 493-509 (1990).

[3.12] M. Van Dyke, Perturbation Methods in Fluid Mechanics (Parabolic Press, Stanford CA, 1975).

[3.13] E. Zauderer, Partial Differential Equations (John Wiley and Sons, New York, 1989).

[3.14] E. Ben-Jacob, D. J. Bergman, B. J. Matkowsky and Z. Schuss, Thermal shot effects and nonlinear oscillations, Annals of the New York Academy of Science, 410, 323-337 (1983).

[3.15] P. Plaschko, Deterministic and stochastic oscillations of a flexible cylinder in arrays of static tubes, Nonlinear Dynamics, 30, 337-355 (2002).

[3.16] P. Glendinning, Stability, Instability and Chaos (Cambridge University Press, Cambridge UK, 1994).

[3.17] R. Z. Khasminskiy, Stability of Systems of Differential Equations in the Presence of Random Noise (Nauka Press, Moscow USSR, 1969).

[3.18] W. Ebeling, H. Herzel, W. Richert and L. Schimansky-Geier, Influence of noise on Duffing-van der Pol oscillators, Zeitschrift f. Angew. Math. u. Mechanik, 66, 141-146 (1986).

[3.19] L. Arnold, N. Sri Namachchivaya and K. R. Schenk-Hoppe, Towards an understanding of the stochastic Hopf bifurcation: A case study, Int. J. Bifurcation Chaos, 6, 1947-1975 (1996).

[3.20] K. R. Schenk-Hoppe, Stochastic Hopf bifurcation: An example, Int. J. Non-Linear Mechanics, 31, 685-692 (1996).

[3.21] K. R. Schenk-Hoppe, Deterministic and stochastic Duffing-van der Pol Oscillators are non-explosive, ZAMP, 47, 740-759 (1996).

[3.22] K. R. Schenk-Hoppe, The stochastic Duffing-van der Pol equation, Ph.D. Thesis, Univ. Bremen, Germany (1996).

[3.23] H. Keller and G. Ochs, Numerical approximation of random attractors, Inst. of Dynamical Systems, Univ. Bremen, Germany, Rep. Nr. 431 (1998).

Chapter 4

[4.1] P. M. Morse and H. Feshbach, Methods of Theoretical Physics, Vol. I (McGraw-Hill, New York, 1963).


[4.2] J. B. Walsh, An Introduction to Stochastic Partial Differential Equations, Lecture Notes in Mathematics 1180 (Springer-Verlag, 1986), pp. 265-439.
[4.3] H. Holden, B. Øksendal, J. Ubøe and T. Zhang, Stochastic Partial Differential Equations (Birkhäuser, Boston, 1996).
[4.4] N. V. Krylov, M. Röckner and J. Zabczyk, Stochastic PDE's and Kolmogorov Equations in Infinite Dimensions, Lecture Notes in Mathematics 1715 (Springer-Verlag, 1999).
[4.5] G. B. Whitham, Linear and Nonlinear Waves (J. Wiley and Sons, New York, 1974).
[4.6] O. V. Rudenko and S. I. Soluyan, Theoretical Foundations of Nonlinear Acoustics (Consultants Bureau, New York, 1977) [translated from the Russian by R. T. Beyer].
[4.7] S. A. Akhmanov and A. S. Chirkin, Statistical Phenomena in Nonlinear Physics (Izd. GMU, 1971).
[4.8] J. B. Keller, Wave propagation in random media, Proc. Symp. Appl. Math. Am. Math. Soc., Providence, Rhode Island, 13, 227-246 (1960).
[4.9] J. B. Keller, Stochastic equations and wave propagation in random media, Proc. Symp. Appl. Math. Am. Math. Soc., Providence, Rhode Island, 16, 145-170 (1963).
[4.10] U. Frisch, Turbulence (Cambridge University Press, 1996).
[4.11] C. W. Haines, An analysis of stochastic eigenvalue problems, Ph.D. Thesis, Rensselaer Polytechnic Institute, Troy, New York (1964).
[4.12] W. E. Boyce, in Probabilistic Methods in Applied Mathematics I, ed. A. T. Bharucha-Reid (Academic Press, New York, 1968).
[4.13] F. G. Bass and I. M. Fuks, Wave Scattering from Statistically Rough Surfaces (Pergamon Press, Oxford, 1979).
[4.14] F. E. Benth, Option Theory with Stochastic Analysis (Springer-Verlag, Berlin, 2004).
[4.15] F. Black and M. Scholes, The pricing of options and corporate liabilities, J. Polit. Economy, 81, 637-654 (1973).
[4.16] O. E. Barndorff-Nielsen, Processes of normal inverse Gaussian type, Finance Stoch., 2, 41-68 (1998).
[4.17] E. Eberlein and U. Keller, Hyperbolic distributions in finance, Bernoulli, 1, 281-299 (1995).
[4.18] J. C. Hull, Options, Futures and Other Derivative Securities (Prentice Hall, Englewood Cliffs, 1993).
[4.19] R. C. Merton, Theory of rational option pricing, Bell J. Econom. Manag. Sci., 4, 141-183 (1973).

Chapter 5

[5.1] M. J. Feigenbaum, Quantitative universality for a class of nonlinear transformations, J. Statistical Physics, 19, 25-52 (1975).

[5.2] D. E. Knuth, The Art of Computer Programming, Vol. 2 (Addison-Wesley, Reading, Massachusetts, 1981).


[5.3] W. H. Press, B. P. Flannery, S. A. Teukolsky and W. T. Vetterling, Numerical Recipes (Cambridge Univ. Press, New York, 1992).

[5.4] C. Chatfield, The Analysis of Time Series (Chapman and Hall, London, 1980).

[5.5] H. C. Tuckwell, Elementary Applications of Probability Theory (Chapman and Hall, London, 1988).

[5.6] R. E. Walpole and R. H. Myers, Probability and Statistics for Engineers and Scientists (Macmillan, New York, 1986).

[5.7] M. Overbeck-Larisch and W. Dolejsky, Stochastic with Mathematica (Vieweg-Verlag, Braunschweig, Germany, 1998) (in German).

[5.8] N. M. Giinther and R. O. Kusmin, A Collection of Examples of Higher Mathematics (VEB Deutscher Verlag d. Wissenschaften, Berlin, 1975) (in German).

[5.9] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller and E. Teller, Equation of state calculations by fast computing machines, J. Chem. Phys., 21, 1087-1092 (1953).

[5.10] G. N. Milstein, The Numerical Integration of Stochastic Differential Equations (Urals Univ. Press, Sverdlovsk USSR, 1988) (in Russian) [English Translation: Kluwer (1995)].

[5.11] G. N. Milstein, A theorem on the order of convergence of mean square approximations of systems of stochastic differential equations, Theor. Prob. Appl., 32, 738-741 (1988).

[5.12] P. D. Drummond and I. K. Mortimer, Computer simulation of multiplicative stochastic differential equations, J. Comput. Phys., 93, 144-170 (1991).

[5.13] P. E. Kloeden and E. Platen, Numerical Solution of Stochastic Differential Equations, Applications of Mathematics, Vol. 23 (Springer, 1992).
[5.14] P. E. Kloeden and E. Platen, Higher-order implicit strong numerical schemes for stochastic differential equations, J. Statist. Physics, 68, 283-314 (1992).

[5.15] Y. Saito and T. Mitsui, Discrete approximations for stochastic differential equations, Trans. Japan SIAM, 2, 1-16 (1992).

[5.16] N. J. Newton, Variance reduction for simulated diffusion, SIAM J. Appl. Math., 54, 1780-1805 (1994).


FORTRAN PROGRAMS

F1.f Independence test: chi-squared criterion
F2.f Independence test: Eqs. (5.5)-(5.8)
F3.f The Newton iteration of the integral Eq. (5.10)
F4.f Estimation of the PDF F(k) according to Eq. (5.23)
F5.f Volume of the D-dim unit sphere
F6.f 1-D Brownian motion for Figs. 1.1 and 1.2
F7.f Population growth, solution (2.4), data for Fig. 5.2
F8.f Milstein scheme (5.50) for 1-D SDE's
F9A.f Numerical generation of the RV DZ, Eq. (5.55) (no batches)
F9B.f N batches of M samples to verify Eq. (5.55)
F10.f Higher-order scheme (5.52) for a 1st-order SDE
F11.f Implicit Milstein scheme (5.58) for a 1st-order autonomous SDE
F12.f Modified Milstein scheme (5.59) for a 1st-order autonomous SDE
F13.f 2-D schemes: Milstein routine (5.73), higher-order routine (5.78) and modified routine (5.82)
F14.f 2-D implicit scheme applied to the linear pendulum
F15.f Weak and absolute error, confidence intervals
F16.f Double-precision version of F15.f
F17.f Version of F15.f for the problem EX 5.11
F18.f Discretization for a 3-D SDE
F19.f Numerical verification of the law of the iterated logarithm (1.40)
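The one-step logic of the Milstein scheme (5.50) implemented in F8 can be sketched in Python; the general step X_{n+1} = X_n + a h + b ΔW + ½ b b' (ΔW² − h) follows the scheme, while the geometric-Brownian-motion test problem and all parameter values below are assumptions for illustration, not the book's Fortran code.

```python
import numpy as np

def milstein_step(x, h, dW, a, b, db):
    """One Milstein step for dX = a(X) dt + b(X) dW:
    X_{n+1} = X_n + a h + b dW + (1/2) b b' (dW^2 - h)."""
    return x + a(x) * h + b(x) * dW + 0.5 * b(x) * db(x) * (dW * dW - h)

# geometric Brownian motion as an assumed test problem
r, sigma, x0, T, n = 1.0, 0.4, 1.0, 1.0, 512
rng = np.random.default_rng(2)
h = T / n
x, W = x0, 0.0
for _ in range(n):
    dW = rng.normal(0.0, np.sqrt(h))
    x = milstein_step(x, h, dW,
                      a=lambda u: r * u,
                      b=lambda u: sigma * u,
                      db=lambda u: sigma)
    W += dW
# exact solution along the same Brownian path, for comparison
x_exact = x0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * W)
```

Because the Milstein scheme has strong order 1, the single-path error against the exact solution is of the order of the step width h.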


INDEX

adapted process, 13
additive noise, 70
algebra of events, 1
almost coincide, 75
asymptotic theory, 149
autocorrelation function, 11, 67, 139
batch average, 185
Bayes's rule, 7
Bernoulli differential equation, 80
Bernoulli distribution, 4
Bessel process, 86
Bessel-Fubini formula, 144
bifurcation, 81, 110
bivariate Gaussian, 15
bivariate GD, 15
bivariate PD, 19
Black-Scholes market, 162
Boltzmann transport equation, 62
Borel set, 2
Box-Muller method, 18
Brownian bridge, 66
Brownian motion, viii, xi, 21, 179
Burgers equation, xv
call option, 161
central limit theorem, 16
Chapman-Kolmogorov equation, 20, 50, 91
  backward equation, 97
  forward equation, 97
characteristic function, 6, 16
Chebyshev inequality, 13, 50, 174
coefficient with time lag, 171
colored noise, 67
compatibility, 9
conditional probability, 7
confidence interval, 171
convergence
  strong, 173, 198
  weak, 173, 174, 198
convergence in the mean square, 173
correlation coefficient, 11
cross-correlation function, 11
differences between the Ito and Stratonovich integrals, 37
Dirac delta function, 3
Dirac function, 67
eigenfunction, 136, 148
eigenvalue moment, 156
eigenvector, 148
eikonal equation, 118
ensemble average, 12
equivalent definitions of a Wiener process, 24
ergodic system, 12
error
  absolute, 196
  global, 196
  local, 196
Euler method, 179
existence theorem, 84
expectation value, 4
flip-flop process, 49
fluidmechanical turbulence, xiv
Fokker-Planck equation, 12, 21, 78, 91, 95


Gaussian (or normal), 6
Gaussian probability density, xiii, 6
Green's function, 136
growth bound condition, 85
Hamilton function, 82
Hermitian-chaos, 16
Hermite polynomial, 16
heteroclinic orbit, 82
homogeneous Poisson process, 46
Hopf bifurcation, 127
independence, 5, 25
independence test, 170
inhomogeneous ordinary SDE, 59
investment, 161
Ito equation, 57
Ito formula, 38
  the case of a multivariate process, 43
Ito integral, 30
  properties, 37
  is a martingale, 37
Ito-Taylor expansion, 181, 189
Jacobian, 18
Kamiltonian, 83
Khasminskiy criterion, 128
Kolmogorov's criterion, 10
Kramers-Moyal equation, 98
Langevin parameter, 62
law of the iterated logarithm, 17
Lebesgue integration, 2
Levy process, 27, 160
linear equation
  homogeneous ordinary SDE, 56
  nonlinear SDE, 56
Lipschitz condition, 85
Lyapunov function, 124
Lyapunov method, 108
Mach number, 142
marginal PD, 5
Markov or Markovian process, 19, 91, 179
  stationary, 20
Markov property, 12
martingale, 12, 13
  Ito integral is not a martingale, 37
  Wiener process is a martingale, 24
master equation, 20, 91
Maxwell distribution, 61, 62
mean, 57
Metropolis algorithm, 178
Milstein method, 183
  modified, 187
moments of a PD, 6
Monte Carlo integration, 175
multiplicative noise, 70
multivariate form of the Gaussian PD, 14
Navier-Stokes equations, xiv
non-anticipative or adapted functions, 30
normal distributed variable, 6
normal inverted Gaussian distribution (NIGD), 27, 160
normalization, 5
numerical solution, 167
optical diffraction, xii
ordinary differential equation (ODE), 55
Ornstein-Uhlenbeck, 26
  problem, 59, 101, 186
  process, 20, 27
  SDE, 139
  transition probability, 62
orthogonal, 138
pendulum, stochastic, xi, 70, 81
Poisson bracket, 83
Poisson distributed variables, 45
Poisson distribution, 4
polar Marsaglia method, 179


polynomials of the Brownian motion, 42
population growth, 56, 65
principle of invariant elementary probability measure states, 18
probability density, 3
probability distribution function, 3
probability of an event, 2
probability space, 2
put option, 161
quasi-chaos, viii
random (or stochastic) variable, 2
random number, 167
random walk, 48
reduction method, 63
Reynolds number, xiv
Riemann integral, 29
Routh-Hurwitz criterion, 128
sample function, 9
sample space, 1
sigma algebra, 2
stability of SDE's, 125
standard deviation, 6, 61
stationary normal distribution, 78
stationary point, 70
stationary process, 10
statistical mechanics, 11
stochastic effects, xi
stochastic
  boundary condition, 141
  boundary value problem, 155
  cable equation, 137
  damping, 73
  differential equation (SDE), xi
    ordinary SDE, 55
    partial SDE, 56
  economics, 160
  eigenvalue, 135
  eigenvalue equation, 147
  eigenvalue problem, 148
  excitation, 72
  growth, xii
  hyperbola, 26
  initial condition, 141
  Lorentz equation, 129
  Mathieu equation, 112
  partial differential equation, 135
  pendulum, xi
  Poisson equation, 140
  process, 9
Stokes law, 68
Stratonovich equation, 58
Stratonovich integral, 30, 37
  not a martingale, 37
  properties, 37
strong law of large numbers, 174
Student distribution, 197
Sturm-Liouville problem, 149
symmetry, 9
Taylor expansion, 7
transformation of stochastic variables, 17
transition probability distribution, 19
trivariate PD, 15
turbulence advection by a random map, 49
two independent Brownian motions, 45
uncorrelated function, 11
uniformity test, 169
uniqueness theorem, 84
variance, 6, 57
Volterra integral equation, 72
white noise, xi, xii, 21, 56, 57
WP or Wiener process, 14, 21, 25
  is a martingale, 24
Wiener sheet, 25, 135
Wiener-Levy theorem, 54
WKB method, 117, 149


Traditionally, non-quantum physics has been concerned with deterministic equations where the dynamics of the system are completely determined by initial conditions. A century ago the discovery of Brownian motion showed that nature need not be deterministic. However, it is only recently that there has been broad interest in nondeterministic and even chaotic systems, not only in physics but in ecology and economics. On a short term basis, the stock market is nondeterministic and often chaotic. Despite its significance, there are few books available that introduce the reader to modern ideas in stochastic systems. This book provides an introduction to this increasingly important field and includes a number of interesting applications.

World Scientific

ISBN 981-256-296-6

www.worldscientific.com