FS Chapter 6: Time Series


Imperial College London Business School

FINANCIAL STATISTICS, LECTURE 6: LINEAR TIME SERIES

Paolo Zaffaroni

October, 2014


AGENDA

1 Preliminaries

2 ARMA: Linear process, MA processes, AR processes, ARMA(p, q), Nonstationarity, Regression-based tests of non-stationarity, Prediction of ARMA


Linear Time Series: stationarity and lag operator

When observing (y_1, ..., y_T) we now consider y_t as part of a stochastic process, that is, a collection of random variables, one for each t.

[Figure: time series plot of the 1-month UK interest rate.]


Linear Time Series: stationarity and lag operator

This means that we have one observation per random variable: how can we estimate parameters?

It is useful to define a covariance stationary stochastic process: {y_t, t = 0, ±1, ...} is covariance stationary if, for any t, s,

E y_t ≡ µ_t = µ < ∞,   cov(y_t, y_s) ≡ γ(t, s) = γ(t − s) with |γ(t − s)| < ∞.

This implies var(y_t) = γ(0) < ∞, constant and finite.

The graph of (u, γ(u)) defines the autocovariance function, hereafter ACF.

Often one uses the autocorrelation function (scale free):

ρ(u) = γ(u)/γ(0), u = 0, ±1, ..., where −1 ≤ ρ(u) ≤ 1.


Linear Time Series: stationarity and lag operator

When (y_{t_1}, ..., y_{t_s}) has a multivariate normal distribution for any integer s and any t_1, ..., t_s, we speak of a Gaussian stochastic process.

Example. y_t = x (constant in time) for some r.v. x is covariance stationary if E x² < ∞.

Example. y_t = a + bt (a, b constant numbers) is not covariance stationary.

Example. y_t satisfying, for any t,

E y_t = 0,   γ(0) = σ² < ∞,   γ(t − s) = 0 for t ≠ s,

defines a covariance stationary process called white noise.


Linear Time Series: stationarity and lag operator

We have defined the population quantities. The sample analogues are, for a given sample (y_1, ..., y_T),

µ̂ = (1/T) Σ_{t=1}^{T} y_t   (sample mean),

γ̂(0) = (1/T) Σ_{t=1}^{T} (y_t − µ̂)² = (1/T) Σ_{t=1}^{T} y_t² − µ̂²   (sample variance),

γ̂(u) = (1/T) Σ_{t=1}^{T−u} (y_t − µ̂)(y_{t+u} − µ̂)   (sample autocovariance),

for u = 1, ..., T − 1, setting γ̂(−u) = γ̂(u), an even function of the lag u.
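A minimal numpy sketch of these sample quantities (using the 1/T convention above); the helper name sample_acf is illustrative and is reused in later sketches.

```python
import numpy as np

def sample_acf(y, max_lag):
    """Sample mean, autocovariances and autocorrelations, 1/T convention."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    mu_hat = y.mean()                              # sample mean
    d = y - mu_hat
    gamma = np.array([np.sum(d[: T - u] * d[u:]) / T for u in range(max_lag + 1)])
    rho = gamma / gamma[0]                         # sample autocorrelations
    return mu_hat, gamma, rho

# Example: for white noise the sample autocorrelations should be close to zero
rng = np.random.default_rng(0)
mu_hat, gamma_hat, rho_hat = sample_acf(rng.standard_normal(300), max_lag=10)
print(round(float(mu_hat), 3), np.round(rho_hat, 3))
```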


Linear Time Series: stationarity and lag operator

Lag operator: for any time series (it need not be stochastic) the lag operator L is defined by:

L y_t = y_{t−1}.

In general, for an integer m ≥ 1, applying L m times,

L^m y_t = y_{t−m}.

Applying L to a constant has no effect; for instance, L 2 = 2.


Linear Time Series: stationarity and lag operator

Lag operator properties: it commutes with multiplication by a constant and is additive over sums:

L(2 y_t) = 2 L y_t = 2 y_{t−1},

L(y_t + x_t) = L y_t + L x_t = y_{t−1} + x_{t−1}.


Linear Time Series: stationarity and lag operator

For example, for constants λ_1, λ_2,

(1 − λ_1 L)(1 − λ_2 L) y_t = (1 + λ_1 λ_2 L² − λ_1 L − λ_2 L) y_t
                           = (1 − (λ_1 + λ_2) L + λ_1 λ_2 L²) y_t
                           = y_t − (λ_1 + λ_2) y_{t−1} + λ_1 λ_2 y_{t−2}.

An expression like (for constants λ_0, λ_1, λ_2)

λ_0 + λ_1 L + λ_2 L²

is referred to as a polynomial in the lag operator (in this case of order 2).
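A quick numerical check of this expansion, multiplying the two lag polynomials coefficient by coefficient (coefficients ordered by ascending power of L; the λ values are arbitrary):

```python
import numpy as np
from numpy.polynomial import polynomial as P

lam1, lam2 = 0.5, -0.3
prod = P.polymul([1.0, -lam1], [1.0, -lam2])        # (1 - lam1*L)(1 - lam2*L)
print(prod)                                         # [1, -(lam1 + lam2), lam1*lam2]
print(np.array([1.0, -(lam1 + lam2), lam1 * lam2]))
```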


AGENDA

1 Preliminaries

2 ARMA: Linear process, MA processes, AR processes, ARMA(p, q), Nonstationarity, Regression-based tests of non-stationarity, Prediction of ARMA


Linear Time Series: linear process

An important class of stochastic processes is given by the linear process:

y_t = µ + ε_t + Ψ_1 ε_{t−1} + Ψ_2 ε_{t−2} + ... = µ + Σ_{i=0}^{∞} Ψ_i ε_{t−i}.

In this case we say that y_t is a moving average of the innovations ε_t with coefficients Ψ_i.

The sequence of non-random coefficients {Ψ_i} must satisfy Σ_{i=1}^{∞} Ψ_i² < ∞.

µ is a constant coefficient; the ε_t are i.i.d. white noise.


Linear Time Series: linear process

Is the linear process a covariance stationary process? Let us check.

Mean: by linearity, E y_t = µ + Σ_{i=0}^{∞} Ψ_i E ε_{t−i} = µ + 0 = µ, so µ is the mean.

Variance: by independence (why?),

var(y_t) = var(µ) + Σ_{i=0}^{∞} Ψ_i² var(ε_{t−i}) = 0 + σ² Σ_{i=0}^{∞} Ψ_i² < ∞.

Do you see why the Ψ_i² must be summable? Examples...


Linear Time Series: linear process

Autocovariance (ACF): for u > 0,

E(y_t − µ)(y_{t+u} − µ) = Σ_{i=0}^{∞} Ψ_i Σ_{j=0}^{∞} Ψ_j E ε_{t−j} ε_{t+u−i} = σ² Σ_{j=0}^{∞} Ψ_j Ψ_{j+u} ≡ γ(u),

because

E ε_{t−j} ε_{t+u−i} = E ε²_{t−j} = σ² when t − j = t + u − i, and 0 when t − j ≠ t + u − i,

where t − j = t + u − i is equivalent to i = j + u.

A very general expression, valid for any Ψ_i!


Linear Time Series: empirical example

[Figure: sample autocorrelation coefficients of S&P 500 returns (top panel) and of squared S&P 500 returns (bottom panel), plotted against the lag k.]


Linear Time Series: Moving Average (MA)

A Moving Average of order q (MA(q)), with q an integer:

y_t = µ + ε_t + α_1 ε_{t−1} + α_2 ε_{t−2} + ... + α_q ε_{t−q},

where α_1, ..., α_q are constant coefficients.

A special case of the linear process, obtained by setting Ψ_0 = 1, Ψ_i = α_i for 1 ≤ i ≤ q and Ψ_i = 0 for i > q.

Square summability is satisfied and the mean is constant, so the MA(q) is stationary. Use the general formula for the autocovariance function (ACF).

Mean: from linearity, E(y_t) = µ + E(ε_t + α_1 ε_{t−1} + α_2 ε_{t−2} + ... + α_q ε_{t−q}) = µ.

Variance: γ(0) = σ²(1 + α_1² + ... + α_q²).


Linear Time Series: Moving Average (MA)

ACF: for 1 ≤ u ≤ q,

γ(u) = σ² Σ_{j=0}^{∞} Ψ_j Ψ_{j+u} = σ² Σ_{j=0}^{q−u} α_j α_{j+u}   (with α_0 = 1),

and when u > q, γ(u) = 0. (Why?)

The ACF is truncated: MA processes have little memory.

Stationarity is achieved for any value of the coefficients α_1, ..., α_q, however large!
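A small simulation sketch (reusing the sample_acf helper defined earlier) comparing the sample ACF of a simulated MA(1) with the theoretical values ρ(1) = α_1/(1 + α_1²) and ρ(u) = 0 for u > 1:

```python
import numpy as np

rng = np.random.default_rng(1)
T, alpha1 = 5000, 0.6
eps = rng.standard_normal(T + 1)
y = eps[1:] + alpha1 * eps[:-1]                    # MA(1): y_t = eps_t + alpha1 * eps_{t-1}

_, _, rho_hat = sample_acf(y, max_lag=5)           # helper from the earlier sketch
rho_theory = np.array([1.0, alpha1 / (1 + alpha1**2), 0, 0, 0, 0])
print(np.round(rho_hat, 3))
print(np.round(rho_theory, 3))                     # sample ACF should cut off after lag 1
```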


Linear Time Series: Moving Average (MA)

An important case is the MA(1):

y_t = µ + ε_t + α_1 ε_{t−1}.

ACF: ρ(1) = α_1/(1 + α_1²) and ρ(u) = 0 for u = ±2, ±3, .... Limited memory!

Positive autocorrelation for α_1 > 0 and negative autocorrelation for α_1 < 0.

Replacing α_1 by 1/α_1 in ρ(1) we get

(1/α_1) / (1 + 1/α_1²) = (1/α_1) / ((α_1² + 1)/α_1²) = α_1/(α_1² + 1) = ρ(1),

so no difference!
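A one-line numerical illustration of this identification problem (the value of α_1 is arbitrary):

```python
alpha1 = 0.4
rho1 = lambda a: a / (1 + a**2)
print(rho1(alpha1), rho1(1 / alpha1))   # the two MA(1) coefficients give identical rho(1)
```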


Linear Time Series: invertibility of Moving Average (MA)

If we are to estimate α_1 from the sample ACF ρ̂(u), which one shall we choose: the big one or the small one?

This is related to the invertibility condition, meaning the possibility to 'invert' the process and write y_t as a linear function of its past values y_{t−j}, j = 1, 2, ....

Invertibility: writing y_t = (1 + α_1 L) ε_t, we can ask when the expression

1/(1 + α_1 L)

makes sense.

This requires (treat L as a variable now!) |α_1 L| < 1, and a sufficient condition for this is |α_1| < 1, taking |L| ≤ 1.


Linear Time Series: invertibility of Moving Average (MA)

In this case the expansion

1/(1 + α_1 L) = Σ_{i=0}^{∞} (−α_1 L)^i = 1 − α_1 L + α_1² L² − α_1³ L³ + ....

Then (1/(1 + α_1 L)) y_t = y_t − α_1 y_{t−1} + α_1² y_{t−2} − .... = ε_t, rewritten as

y_t = α_1 y_{t−1} − α_1² y_{t−2} + .... + ε_t.

This is invertibility.

Of course, to every MA(1) with |α_1| < 1 there corresponds another MA(1) with the same ACF but with coefficient 1/|α_1| > 1, which is not invertible.


Linear Time Series: invertibility of Moving Average (MA)

The extension to MA(q) is based on looking at the roots of the polynomial

α(L) = 1 + α_1 L + ... + α_q L^q = 0.

The MA is invertible when all the roots are greater than one in modulus: there are 2^q MA(q) models sharing the same ACF, but only one of them is invertible.

Which one do we take? Invertibility is used as a selection criterion.


Linear Time Series: empirical example of MA(1)

[Figure: sample autocorrelation coefficients of a simulated MA(1), plotted against the lag k.]


Linear Time Series: autoregressive (AR)

Autoregressive process of order p (AR(p)), with p an integer:

y_t = µ + φ_1 y_{t−1} + .... + φ_p y_{t−p} + ε_t,

where φ_1, ..., φ_p are constant autoregressive coefficients.

The AR(p) is a stochastic difference equation of order p (what happens if ε_t = 0?).

The simplest case is the AR(1):

y_t = µ + φ_1 y_{t−1} + ε_t.


Linear Time Series: autoregressive (AR)

To derive the statistical properties one needs the solution of the difference equation: use the lag operator!

Write (1 − φ_1 L) y_t = µ + ε_t. When |φ_1| < 1 we can write 1/(1 − φ_1 L) = 1 + φ_1 L + φ_1² L² + .... = Σ_{i=0}^{∞} φ_1^i L^i, so that

y_t = µ/(1 − φ_1 L) + Σ_{i=0}^{∞} φ_1^i ε_{t−i} = µ′ + Σ_{i=0}^{∞} φ_1^i ε_{t−i},

setting µ′ = µ/(1 − φ_1).


Linear Time Series: autoregressive (AR)

A particular case of a linear process: set Ψ_i = φ_1^i, i = 0, 1, ..., and note that Σ_{i=0}^{∞} φ_1^{2i} = 1/(1 − φ_1²) < ∞.

Mean: E y_t = µ′ + Σ_{i=0}^{∞} φ_1^i E ε_{t−i} = µ′.

Variance: γ(0) = σ² Σ_{i=0}^{∞} φ_1^{2i} = σ²/(1 − φ_1²) < ∞.

ACF: for u > 0,

γ(u) = σ² Σ_{i=0}^{∞} φ_1^i φ_1^{i+u} = σ² φ_1^u/(1 − φ_1²),

and the autocorrelation function is ρ(u) = φ_1^u, which lies between −1 and 1.
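A simulation sketch for the AR(1) (again reusing the sample_acf helper from the earlier sketch), comparing the sample ACF with the theoretical ρ(u) = φ_1^u:

```python
import numpy as np

rng = np.random.default_rng(2)
T, phi1 = 5000, 0.7
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi1 * y[t - 1] + rng.standard_normal()   # AR(1) recursion with mu = 0

_, _, rho_hat = sample_acf(y, max_lag=5)             # helper from the earlier sketch
print(np.round(rho_hat, 3))
print(np.round(phi1 ** np.arange(6), 3))             # theoretical rho(u) = phi1**u
```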


Linear Time Series: autoregressive (AR)

The ACF is never zero at any finite lag, but it decays to zero exponentially fast.

Positive autocorrelation at all lags for φ_1 > 0, and autocorrelation alternating in sign for φ_1 < 0.

Since −1 ≤ ρ(1) ≤ 1, any level of memory can be attained.

The condition |φ_1| < 1 is the stationarity condition.


Linear Time Series: simulated AR stationary (red) and non-stationary (blue)

[Figure: simulated paths of a stationary and a non-stationary AR(1) process, returns against time.]


Linear Time Series: ACF of AR with positive φ1

[Figure: sample autocorrelation coefficients of an AR(1) with positive coefficient, plotted against the lag k.]


Linear Time Series: ACF of AR with negative φ1

[Figure: sample autocorrelation coefficients of an AR(1) with negative coefficient, plotted against the lag k.]


Linear Time Series: autoregressive (AR)

Generalizations to higher-order AR(p) are cumbersome.

Let us consider the AR(2)

y_t = µ + φ_1 y_{t−1} + φ_2 y_{t−2} + ε_t.   (1)

Re-write it as (1 − φ_1 L − φ_2 L²) y_t = µ + ε_t.

The statistical properties of y_t are determined by the nature of the roots of the equation in L

1 − φ_1 L − φ_2 L² = 0.


Linear Time Series: autoregressive (AR)

Let ρ_1, ρ_2 be the inverses of the roots of that second-order equation.

Then rewrite (1 − φ_1 L − φ_2 L²) = (1 − ρ_1 L)(1 − ρ_2 L), where three possibilities arise: ρ_1, ρ_2 real and distinct, complex conjugates, or real and equal.

The stationarity condition is |ρ_1| < 1, |ρ_2| < 1.

In terms of the coefficients this condition is

φ_1 + φ_2 < 1,   −φ_1 + φ_2 < 1,   |φ_2| < 1.
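A hedged sketch of the equivalent root check: 1 − φ_1 z − φ_2 z² should have all roots outside the unit circle (equivalently |ρ_1|, |ρ_2| < 1 for the inverse roots); the same check on 1 + α_1 z + ... + α_q z^q gives MA invertibility.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def ar2_is_stationary(phi1, phi2):
    """True when 1 - phi1*z - phi2*z**2 has all roots outside the unit circle."""
    roots = P.polyroots([1.0, -phi1, -phi2])   # coefficients by ascending power of z
    return bool(np.all(np.abs(roots) > 1.0))

print(ar2_is_stationary(0.5, 0.3))   # True: phi1+phi2 < 1, -phi1+phi2 < 1, |phi2| < 1
print(ar2_is_stationary(0.9, 0.2))   # False: phi1+phi2 > 1
```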


Linear Time Series: autoregressive (AR)

There are several ways of getting the solution of the AR(2). One is to consider a geometric expansion (see the analogy with the AR(1) case) when the roots are real:

y_t = µ′ + 1/((1 − ρ_1 L)(1 − ρ_2 L)) ε_t,

setting

µ′ = µ/(1 − φ_1 − φ_2).


Linear Time Series: autoregressive (AR)

When the roots are real and distinct:

1/((1 − ρ_1 L)(1 − ρ_2 L)) = Σ_{i,j=0}^{∞} ρ_1^i ρ_2^j L^{i+j} = Σ_{u=0}^{∞} ( Σ_{i=0}^{u} ρ_1^i ρ_2^{u−i} ) L^u
                           = Σ_{u=0}^{∞} ρ_2^u (1 − (ρ_1/ρ_2)^{u+1})/(1 − ρ_1/ρ_2) L^u
                           = Σ_{u=0}^{∞} (ρ_2^{u+1} − ρ_1^{u+1})/(ρ_2 − ρ_1) L^u
                           = Σ_{u=0}^{∞} Ψ_u L^u,

so the AR(2) is a linear process with coefficients Ψ_u = (ρ_2^{u+1} − ρ_1^{u+1})/(ρ_2 − ρ_1).


Linear Time Series: autoregressive (AR)

When the roots are coincident (ρ_1 = ρ_2 = ρ) one gets

Ψ_u = Σ_{j=0}^{u} ρ^j ρ^{u−j} = (u + 1) ρ^u,   u = 0, 1, ...

Just apply l'Hôpital's rule to the previous formula.


Linear Time Series: autoregressive (AR)

When the roots are complex conjugates, ρ_1 = ρ e^{iθ}, ρ_2 = ρ e^{−iθ}, one gets (i is the imaginary unit, i = √−1)

Ψ_u = Σ_{j=0}^{u} ρ_1^j ρ_2^{u−j} = ρ^u e^{−iθu} (1 − e^{2iθ(u+1)})/(1 − e^{2iθ})
    = ρ^u (e^{−iθ(u+1)} − e^{iθ(u+1)})/(e^{−iθ} − e^{iθ}) = ρ^u sin((u+1)θ)/sin(θ),   u = 0, 1, ...,

using

sin x = (1/(2i))(e^{ix} − e^{−ix}).

In the case of complex roots, the Ψ_i display oscillatory behaviour.


Linear Time Series: mean, variance and ACF of autoregressive (AR)

Mean:

E y_t = µ′ + Σ_{u=0}^{∞} Ψ_u E ε_{t−u} = µ′.

ACF: instead of using the general formula for linear processes, rewrite

y_t − µ′ = φ_1(y_{t−1} − µ′) + φ_2(y_{t−2} − µ′) + ε_t.

Pre-multiplying both sides by y_{t−j} − µ′ with j ≥ 1 and taking expectations,

E(y_t − µ′)(y_{t−j} − µ′) = φ_1 E(y_{t−1} − µ′)(y_{t−j} − µ′) + φ_2 E(y_{t−2} − µ′)(y_{t−j} − µ′) + E ε_t (y_{t−j} − µ′).


Linear Time Series: mean, variance and ACF of autoregressive (AR)

Assuming stationarity,

γ(j) = φ_1 γ(j − 1) + φ_2 γ(j − 2),   j = 1, 2, ...,

so the ACF satisfies a second-order difference equation. We need two initial conditions.

In terms of autocorrelations (divide all terms by γ(0)), we get the so-called Yule-Walker equations:

ρ(j) = φ_1 ρ(j − 1) + φ_2 ρ(j − 2).

Here, for given φ_1, φ_2, we determine the ρ(j)!
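A small sketch of this direction of the Yule-Walker equations: given φ_1, φ_2, start from ρ(0) = 1 and ρ(1) = φ_1/(1 − φ_2) (the j = 1 equation with ρ(−1) = ρ(1)), then recurse.

```python
import numpy as np

def ar2_acf(phi1, phi2, max_lag):
    """Autocorrelations of a stationary AR(2) from the Yule-Walker recursion."""
    rho = np.empty(max_lag + 1)
    rho[0] = 1.0
    rho[1] = phi1 / (1.0 - phi2)     # j = 1 equation, using rho(-1) = rho(1)
    for j in range(2, max_lag + 1):
        rho[j] = phi1 * rho[j - 1] + phi2 * rho[j - 2]
    return rho

print(np.round(ar2_acf(0.5, 0.3, 6), 3))
```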


Linear Time Series: autoregressive moving average (ARMA)

Autoregressive moving average process of orders p, q (ARMA(p, q)), with p, q integers:

y_t = µ + φ_1 y_{t−1} + .... + φ_p y_{t−p} + ε_t + α_1 ε_{t−1} + ... + α_q ε_{t−q}.

Stationarity if the roots of

1 − φ_1 L − ... − φ_p L^p = 0

lie outside the unit circle in the complex plane.

Invertibility if the roots of

1 + α_1 L + ... + α_q L^q = 0

lie outside the unit circle in the complex plane.


Linear Time Series: autoregressive moving average (ARMA)

To get the properties of the ARMA(p, q) we need its solution. Consider the ARMA(1, 1):

y_t = φ_1 y_{t−1} + ε_t + α_1 ε_{t−1}.

Rewrite it as

(1 − φ_1 L) y_t = (1 + α_1 L) ε_t.

Comparing it with y_t = (Σ_{i=0}^{∞} Ψ_i L^i) ε_t yields

Σ_{i=0}^{∞} Ψ_i L^i = (1 + α_1 L)/(1 − φ_1 L) = (1 + α_1 L) Σ_{i=0}^{∞} φ_1^i L^i.


Linear Time Series: autoregressive moving average (ARMA)

By equating the coefficients of the same power of L:

Ψ_0 = 1,
Ψ_1 = φ_1 + α_1,
...
Ψ_j = (α_1 + φ_1) φ_1^{j−1}.

We can then get the mean, the variance and the ACF using the general formula.
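A sketch computing the Ψ_j weights of an ARMA(1,1) and then its autocorrelations through the general linear-process formula γ(u) = σ² Σ_j Ψ_j Ψ_{j+u}, truncating the infinite sums at a large lag (the parameter values are arbitrary):

```python
import numpy as np

phi1, alpha1, sigma2 = 0.7, 0.4, 1.0
N = 200                                                   # truncation point for the sums
psi = np.empty(N)
psi[0] = 1.0
psi[1:] = (alpha1 + phi1) * phi1 ** np.arange(N - 1)      # psi_j = (alpha1 + phi1) * phi1**(j-1)

gamma = np.array([sigma2 * np.sum(psi[: N - u] * psi[u:]) for u in range(4)])
rho = gamma / gamma[0]
print(np.round(rho, 3))
print(np.round(rho[2] / rho[1], 3), phi1)                 # for j >= 2, rho(j) = phi1 * rho(j-1)
```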


But for the ACF we can also use Yule-Walker equations: in general,

γ(j) = φ_1 γ(j − 1) + E ε_t y_{t−j} + α_1 E ε_{t−1} y_{t−j}.

Then

γ(j) = φ_1 γ(j − 1),   j ≥ 2.

The ACF is a function of both the AR and the MA coefficients at lags 0 and 1, but depends only on the AR coefficients at lags greater than 1.

The same behaviour is observed for the ARMA(p, q).


Linear Time Series: autoregressive moving average (ARMA)

[Figure: sample autocorrelation coefficients of a simulated ARMA(1,1), plotted against the lag k.]


Linear Time Series: autoregressive moving average (ARMA)

Three steps:

1 Model selection (how to choose p and q).
2 Estimation (as efficiently as possible).
3 Diagnostic testing (to check that the result is satisfactory, using the estimated residuals).

If diagnostic checking is not satisfactory, start again with a new choice of p, q.


Linear Time Series: model selection of ARMA

Model selection: choose p, q based on the correlogram.

When ρ(u) = 0 for u > q,

T^{1/2} (ρ̂(u) − ρ(u)) →_d N(0, 1 + 2 Σ_{j=1}^{q} ρ²(j)),   T → ∞.

This works fine for MA but not for AR, where there is no cut-off point.


Linear Time Series: model selection of ARMA

For AR, use the partial ACF, since in

y_t = φ_1 y_{t−1} + ... + φ_p y_{t−p} + ε_t,

the last coefficient φ_p defines the partial ACF at lag p. It is equal to zero for lags above p: cut-off!

The estimated partial ACF π̂(u), for u > p, satisfies

T^{1/2} π̂(u) →_d N(0, 1),   T → ∞,

whenever π(u) = 0 for u > p.
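A hedged numpy sketch of the regression view of the partial ACF: for each lag k, regress y_t on (y_{t−1}, ..., y_{t−k}) and keep the last estimated coefficient.

```python
import numpy as np

def pacf_by_regression(y, max_lag):
    """Partial ACF: the last OLS coefficient of an AR(k) autoregression, k = 1, ..., max_lag."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    pacf = []
    for k in range(1, max_lag + 1):
        target = y[k:]
        X = np.column_stack([y[k - j: len(y) - j] for j in range(1, k + 1)])  # lags 1..k
        coef, *_ = np.linalg.lstsq(X, target, rcond=None)
        pacf.append(coef[-1])                      # coefficient on y_{t-k}
    return np.array(pacf)

# For a simulated AR(1), the partial ACF should be near zero for all lags u > 1
rng = np.random.default_rng(3)
y = np.zeros(3000)
for t in range(1, len(y)):
    y[t] = 0.7 * y[t - 1] + rng.standard_normal()
print(np.round(pacf_by_regression(y, 5), 3))
```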


Linear Time Series: estimation of AR

Estimation: consider first the AR(p) case. It can be seen as a dynamic regression (autoregression). This suggests OLS:

φ̂ = argmin_{φ∈Φ} Σ_{t=p+1}^{T} (y_t − φ_1 y_{t−1} − ... − φ_p y_{t−p})²,

with φ = (φ_1, ..., φ_p)′.

Here OLS is called the conditional sum of squares (CSS) estimator, as y_1, ..., y_p are taken as given.
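A minimal CSS/OLS sketch for an AR(p) without intercept, as in the display above (a column of ones could be added to X to estimate µ as well):

```python
import numpy as np

def fit_ar_css(y, p):
    """Conditional sum of squares (OLS) estimate of an AR(p) without intercept."""
    y = np.asarray(y, dtype=float)
    target = y[p:]
    X = np.column_stack([y[p - j: len(y) - j] for j in range(1, p + 1)])  # lags 1..p
    phi_hat, *_ = np.linalg.lstsq(X, target, rcond=None)
    return phi_hat

rng = np.random.default_rng(4)
y = np.zeros(2000)
for t in range(2, len(y)):
    y[t] = 0.5 * y[t - 1] + 0.3 * y[t - 2] + rng.standard_normal()
print(np.round(fit_ar_css(y, 2), 3))      # should be close to (0.5, 0.3)
```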


Linear Time Series: estimation of AR

An alternative estimator is the Yule-Walker estimator, based on the Yule-Walker equations:

⎡ 1         ρ(1)      ...  ρ(p−1) ⎤ ⎡ φ_1 ⎤   ⎡ ρ(1) ⎤
⎢ ρ(1)      1         ...  ρ(p−2) ⎥ ⎢ φ_2 ⎥ = ⎢ ρ(2) ⎥
⎢ ...       ...       ...  ...    ⎥ ⎢ ... ⎥   ⎢ ...  ⎥
⎣ ρ(p−1)    ρ(p−2)    ...  1      ⎦ ⎣ φ_p ⎦   ⎣ ρ(p) ⎦

Plugging in the empirical ACF ρ̂(u) yields a linear system of p equations in the p unknowns φ_1, ..., φ_p. The opposite of how we used Yule-Walker before!
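A sketch of the Yule-Walker estimator, building the Toeplitz system from the sample autocorrelations (the sample_acf helper from the earlier sketch is assumed):

```python
import numpy as np
from scipy.linalg import toeplitz

def fit_ar_yule_walker(y, p):
    """Yule-Walker estimate of AR(p) coefficients from sample autocorrelations."""
    _, _, rho_hat = sample_acf(y, max_lag=p)   # helper from the earlier sketch
    R = toeplitz(rho_hat[:p])                  # p x p matrix with entries rho(|i-j|)
    return np.linalg.solve(R, rho_hat[1: p + 1])

rng = np.random.default_rng(5)
y = np.zeros(2000)
for t in range(2, len(y)):
    y[t] = 0.5 * y[t - 1] + 0.3 * y[t - 2] + rng.standard_normal()
print(np.round(fit_ar_yule_walker(y, 2), 3))   # close to (0.5, 0.3), as with CSS
```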


Linear Time Series: estimation of AR

It turns out that the CSS and Yule-Walker estimators are asymptotically equivalent, and in turn equivalent to the Gaussian MLE.

MLE: taking the AR(1) for simplicity, with Gaussian ε_t,

p(y_t | y_{t−1}) = (1/(σ √(2π))) exp( −(y_t − µ − φ_1 y_{t−1})²/(2σ²) ),   t ≥ 2,

and

p(y_1) = (√(1 − φ_1²)/(σ √(2π))) exp( −(1 − φ_1²)(y_1 − µ/(1 − φ_1))²/(2σ²) ).

Then the log-likelihood is

l(θ) = constants − (T/2) log σ² + (1/2) log(1 − φ_1²) − ((1 − φ_1²)/(2σ²)) (y_1 − µ/(1 − φ_1))² − (1/(2σ²)) Σ_{t=2}^{T} (y_t − µ − φ_1 y_{t−1})².
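A direct transcription of this exact Gaussian AR(1) log-likelihood (dropping the additive constant); it could be handed to a numerical optimiser, but this is only a sketch of the criterion itself:

```python
import numpy as np

def ar1_exact_loglik(theta, y):
    """Exact Gaussian AR(1) log-likelihood, up to an additive constant.

    theta = (mu, phi1, sigma2), with |phi1| < 1 and sigma2 > 0 assumed.
    """
    mu, phi1, sigma2 = theta
    y = np.asarray(y, dtype=float)
    T = len(y)
    resid = y[1:] - mu - phi1 * y[:-1]
    return (-T / 2 * np.log(sigma2)
            + 0.5 * np.log(1 - phi1**2)
            - (1 - phi1**2) / (2 * sigma2) * (y[0] - mu / (1 - phi1))**2
            - np.sum(resid**2) / (2 * sigma2))
```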


Linear Time Series: estimation of MA and ARMA

Can you estimate an MA by OLS,

y_t = µ + ε_t + θ_1 ε_{t−1}?

We use MLE, with log-likelihood (where θ = (σ², µ, θ_1))

l(θ) = const − (T/2) log σ² − (1/(2σ²)) Σ_{t=1}^{T} (y_t − µ − θ_1 ε*_{t−1}(θ))²,

where we use the recursion ε*_t(θ) = y_t − µ − θ_1 ε*_{t−1}(θ), starting from ε*_0(θ) = 0.

There is no closed form, but we know all about MLE!

For ARMA, the same idea using the recursion, etc.
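A sketch of this recursion-based likelihood for the MA(1), maximised numerically (scipy.optimize.minimize on the negative criterion; the simulated data and starting values are arbitrary):

```python
import numpy as np
from scipy.optimize import minimize

def ma1_neg_loglik(theta, y):
    """Negative MA(1) log-likelihood built on the residual recursion eps*_t."""
    sigma2, mu, theta1 = theta
    if sigma2 <= 0:
        return np.inf
    eps, ssr = 0.0, 0.0
    for yt in y:
        eps = yt - mu - theta1 * eps       # eps*_t = y_t - mu - theta1 * eps*_{t-1}
        ssr += eps**2
    return 0.5 * len(y) * np.log(sigma2) + ssr / (2 * sigma2)

rng = np.random.default_rng(6)
e = rng.standard_normal(2001)
y = 0.2 + e[1:] + 0.5 * e[:-1]             # simulated MA(1) with mu = 0.2, theta1 = 0.5
res = minimize(ma1_neg_loglik, x0=np.array([1.0, 0.0, 0.0]), args=(y,), method="Nelder-Mead")
print(np.round(res.x, 3))                  # roughly (1.0, 0.2, 0.5)
```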


Linear Time Series: empirical example on estimation of ARMA

Use the Excel worksheet to:

estimate the AR(1) by OLS; apply this to 100 samples and plot a histogram of the estimates (a Python analogue is sketched below);

estimate the MA(1) by MLE; apply this to 100 samples and plot a histogram of the estimates.
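The worksheet itself is not reproduced here; the following is a hypothetical Python analogue of the first exercise, assuming numpy and matplotlib, simulating 100 AR(1) samples with φ = 0.7 and plotting the histogram of the OLS estimates:

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
phi, T, n_rep = 0.7, 300, 100
estimates = np.empty(n_rep)
for r in range(n_rep):
    eps = rng.standard_normal(T)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = phi * y[t - 1] + eps[t]             # simulate a zero-mean AR(1)
    x, target = y[:-1], y[1:]
    estimates[r] = (x @ target) / (x @ x)          # OLS slope through the origin

plt.hist(estimates, bins=20)
plt.title("OLS estimates of phi across 100 simulated AR(1) samples")
plt.show()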


Linear Time Series: diagnostic checking of ARMA

Diagnostic checking: having chosen p, q and estimated the model, use the residuals ε̂_t to check whether they appear to be white noise and/or i.i.d.

Portmanteau test: look at the first R sample autocorrelations of the residuals,

Q = T ∑_{u=1}^{R} ρ̂_ε(u)² →_d χ²_{R−p−q},

under the hypothesis of correct specification, with R large (say √T) compared with p + q.

Idea: if ε̂_t is white noise then Q is small.

If some ρ̂_ε(u) are non-zero, Q is large.
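A sketch of the Q statistic exactly as stated above (this is the Box-Pierce form; the Ljung-Box variant additionally weights each term by T/(T − u)), assuming scipy for the χ² tail probability:

import numpy as np
from scipy.stats import chi2

def portmanteau(resid, R, p, q):
    """Box-Pierce statistic Q = T * sum_{u=1}^{R} rho_hat(u)^2 and its chi-square p-value."""
    e = np.asarray(resid, dtype=float) - np.mean(resid)
    T = len(e)
    gamma0 = e @ e / T
    rho = np.array([(e[:T - u] @ e[u:] / T) / gamma0 for u in range(1, R + 1)])
    Q = T * np.sum(rho**2)
    p_value = 1 - chi2.cdf(Q, df=R - p - q)        # small p-value: reject correct specification
    return Q, p_value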


Linear Time Series: nonstationarity

Simplest case: random walk (with no drift),

y_t = y_{t−1} + ε_t.

By substitution, y_t = y_0 + ∑_{j=0}^{t−1} ε_{t−j}.

Let Y_t denote the sigma-algebra induced by y_s, s ≤ t. Then:

E(y_t | Y_0) = y_0 (conditional mean),
var(y_t | Y_0) = tσ² (conditional variance).

Notice that var(y_t | Y_0) → ∞.
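A quick simulation check of var(y_t | Y_0) = tσ², assuming numpy: across many simulated paths the sample variance at time t grows roughly linearly in t.

import numpy as np

rng = np.random.default_rng(1)
sigma, T, n_paths = 1.0, 200, 5000
eps = sigma * rng.standard_normal((n_paths, T))
y = np.cumsum(eps, axis=1)                          # y_t = eps_1 + ... + eps_t, with y_0 = 0
print(np.var(y[:, [9, 49, 199]], axis=0))           # roughly 10, 50, 200, i.e. t * sigma^2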


In terms of the ACF, for s > 0 one gets cov(y_{t+s}, y_t | Y_0) = tσ².

The random walk is nonstationary! However, ∆y_t = ε_t is stationary.

In general, the ARIMA(p, d, q), viz. the autoregressive integrated moving average process of order (p, d, q), is

φ(L)∆^d y_t = θ(L)ε_t

with φ(L) = 1 − φ_1 L − ... − φ_p L^p and θ(L) = 1 + θ_1 L + ... + θ_q L^q.

For d = 1, 2, ... the model is nonstationary.
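Differencing in practice is just ∆y_t = y_t − y_{t−1} applied d times; a small numpy illustration, with a simulated random walk standing in for an I(1) series:

import numpy as np

y = np.cumsum(np.random.default_rng(2).standard_normal(500))   # a simulated I(1) series
dy = np.diff(y, n=1)        # Delta y_t: one difference removes the unit root
d2y = np.diff(y, n=2)       # Delta^2 y_t, the d = 2 case
print(np.var(y), np.var(dy))                        # the level is far more dispersed than the difference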


Model selection of an ARIMA proceeds as follows:

1 Plot the data in levels and in differences.
2 Empirical correlogram: it does not decline fast for a nonstationary series.
3 Formal tests of nonstationarity (see below).
4 Goodness-of-fit statistic (Akaike): choose the p, d, q for which

AIC = −2 lnL(Ψ̂) + 2(p + q)

is minimized, where L(Ψ̂) is the likelihood function, T the sample size, and Ψ̂ the MLE of the parameters (a grid-search sketch follows this list).
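A hypothetical grid search along these lines, assuming the statsmodels package is available; note that its reported AIC penalises all estimated parameters, so it differs slightly from the 2(p + q) penalty written above.

import itertools
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def select_arima(y, max_p=3, max_d=1, max_q=3):
    """Fit every (p, d, q) on a small grid and keep the specification with the lowest AIC."""
    best_aic, best_order = np.inf, None
    for p, d, q in itertools.product(range(max_p + 1), range(max_d + 1), range(max_q + 1)):
        try:
            res = ARIMA(y, order=(p, d, q)).fit()
            if res.aic < best_aic:
                best_aic, best_order = res.aic, (p, d, q)
        except Exception:
            continue                                 # skip orders where estimation fails
    return best_order, best_aic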


Linear Time Series: test of nonstationarity

Most basic approach: the Dickey-Fuller test.

We need to understand the consequences of non-stationarity when we do not know whether it is present; otherwise we would simply first-difference the data.

For the CSS estimator (essentially OLS, right?) when |φ| < 1,

T^{1/2}(φ̂ − φ) →_d N(0, 1 − φ²).

Setting φ² = 1 would then suggest T^{1/2}(φ̂ − φ) →_d N(0, 0) = 0.

What is going on? φ̂ − φ converges to zero faster than 1/√T!


Fuller (1976) suggested that one can test

H_0 : φ = 1

against

H_1 : φ < 1

using the usual t-statistic

τ = (φ̂ − 1) / (avar(φ̂))^{1/2}

where avar(φ̂) = s² / (∑_{t=2}^{T} y²_{t−1}) and s² is the sample variance of the residuals y_t − φ̂ y_{t−1}.

τ does not converge to N(0, 1) but to the Dickey-Fuller distribution.
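A minimal sketch of the τ statistic for the no-constant case, assuming numpy; the value is compared with tabulated Dickey-Fuller critical values (around −1.95 at the 5% level for this case), not with normal quantiles:

import numpy as np

def dickey_fuller_tau(y):
    """Dickey-Fuller tau statistic for H0: phi = 1 in y_t = phi*y_{t-1} + eps_t (no constant)."""
    y = np.asarray(y, dtype=float)
    y_lag, y_now = y[:-1], y[1:]
    phi_hat = (y_lag @ y_now) / (y_lag @ y_lag)      # CSS/OLS estimate of phi
    resid = y_now - phi_hat * y_lag
    s2 = resid @ resid / (len(resid) - 1)            # residual variance (one common divisor)
    se = np.sqrt(s2 / (y_lag @ y_lag))               # (avar(phi_hat))^{1/2}
    return (phi_hat - 1.0) / se                      # compare with Dickey-Fuller critical values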


Linear Time Series: prediction of ARMA

At time T + L,

y_{T+L} = φ_1 y_{T+L−1} + ... + φ_p y_{T+L−p} + ε_{T+L} + α_1 ε_{T+L−1} + ... + α_q ε_{T+L−q}.

Setting X_{T+L|T} = E(X_{T+L} | Y_T) for any r.v. X_t, with Y_T = {y_1, ..., y_T}, we get the recursive expression

y_{T+L|T} = φ_1 y_{T+L−1|T} + ... + φ_p y_{T+L−p|T} + ε_{T+L|T} + α_1 ε_{T+L−1|T} + ... + α_q ε_{T+L−q|T}, L = 1, 2, ...

with y_{T+j|T} = y_{T+j} for j ≤ 0 and

ε_{T+j|T} = 0 for j > 0, ε_{T+j|T} = ε_{T+j} for j ≤ 0,

by the i.i.d.-ness of ε_t.


For the AR(1) this simplifies to

y_{T+L|T} = φ_1 y_{T+L−1|T}, L = 1, 2, ...

Starting from y_{T|T} = y_T yields y_{T+L|T} = φ_1^L y_T, L = 1, 2, ...

For the MA(1), y_{T+1|T} = α_1 ε_T and y_{T+L|T} = 0 for L ≥ 2, assuming we observe ε_T (invertibility).
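These recursions are easy to code directly; the sketch below handles the ARMA(1, 1) case (so α_1 = 0 gives the AR(1) and φ_1 = 0 gives the MA(1)), assuming numpy and that the last innovation ε_T has already been recovered:

import numpy as np

def forecast_arma11(y_T, eps_T, phi1, alpha1, L):
    """L-step-ahead forecasts y_{T+1|T}, ..., y_{T+L|T} for an ARMA(1,1) without intercept."""
    preds = np.empty(L)
    preds[0] = phi1 * y_T + alpha1 * eps_T    # one step ahead: the future shock is replaced by 0
    for h in range(1, L):
        preds[h] = phi1 * preds[h - 1]        # the MA term drops out beyond one step ahead
    return preds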


How to assess the precision of the prediction?

Use the mean square error. For any stationary ARMA,

y_{T+L} = ∑_{j=1}^{L} Ψ_{L−j} ε_{T+j} + ∑_{j=0}^{∞} Ψ_{j+L} ε_{T−j}.

For L ≥ 0,

y_{T+L|T} = ∑_{j=0}^{∞} Ψ_{j+L} ε_{T−j}.

The forecasting error made is

y_{T+L} − y_{T+L|T} = ∑_{j=1}^{L} Ψ_{L−j} ε_{T+j}.


The MSE is the variance of the forecast error:

MSE(y_{T+L|T}) = σ²(1 + Ψ_1² + ... + Ψ_{L−1}²).

As L → ∞ this goes to the unconditional variance!

AR(1) example:

y_{T+L|T} = ∑_{j=0}^{∞} φ_1^{j+L} ε_{T−j} = φ_1^L y_T and MSE(y_{T+L|T}) = σ² (1 − φ_1^{2L}) / (1 − φ_1²).

If ε_t Gaussian, the CI of the prediction is

y_{T+L|T} ± 1.96 σ (1 + Ψ_1² + ... + Ψ_{L−1}²)^{1/2}.
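For the AR(1) the point forecast, its MSE, and the Gaussian prediction interval can be assembled directly; a small numpy sketch (the 1.96 multiplier is the 95% normal quantile):

import numpy as np

def ar1_forecast_interval(y_T, phi1, sigma2, L, z=1.96):
    """Point forecast, MSE and 95% Gaussian prediction interval for an AR(1), L steps ahead."""
    point = phi1**L * y_T
    mse = sigma2 * (1 - phi1**(2 * L)) / (1 - phi1**2)   # = sigma^2 (1 + phi^2 + ... + phi^(2(L-1)))
    half = z * np.sqrt(mse)
    return point, mse, (point - half, point + half)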


How do we estimate the ε_t? We assumed we know ε_t for t ≤ T, but we do not!

Use invertibility: recall that

ε_t = y_t − η_1 y_{t−1} − η_2 y_{t−2} − ... − η_{t−1} y_1 − η_t y_0 − η_{t+1} y_{−1} − ...

Since we do not observe all the y_t but only the sample, we approximate ε_t with

ε̂_t = y_t − η_1 y_{t−1} − η_2 y_{t−2} − ... − η_{t−1} y_1.


MA(1) example:

y_{T+1|T} = α_1 ε̂_T = α_1 ∑_{j=0}^{T−1} (−α_1)^j y_{T−j},

with

ε̂_t = y_t − α_1 y_{t−1} + α_1² y_{t−2} + ... + (−α_1)^{t−1} y_1.
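The truncated inversion can be computed recursively rather than by summing the powers explicitly; a sketch assuming numpy (a mean µ is included for generality, and µ = 0 reproduces the formula above):

import numpy as np

def ma1_innovations(y, mu, alpha1):
    """Truncated inversion of an MA(1): eps_hat_t = sum_{j=0}^{t-1} (-alpha1)^j (y_{t-j} - mu)."""
    e = np.empty(len(y))
    prev = 0.0
    for t, yt in enumerate(y):
        e[t] = (yt - mu) - alpha1 * prev             # recursive form of the truncated sum
        prev = e[t]
    return e

# one-step-ahead forecast: y_{T+1|T} = mu + alpha1 * ma1_innovations(y, mu, alpha1)[-1]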


Linear Time Series: empirical example of prediction of AR(1)

[Figure: out-of-sample forecasting of the 1-month interest rate with an ARMA(1,1); the plot shows the actual series together with the 1-step-ahead and 12-step-ahead forecasts, with the number of observations on the horizontal axis and the 1-month interest rate on the vertical axis.]


Linear Time Series: analytical questions

1 Derive the ACF for the MA(2) scheme:

y_t = ε_t + θ_1 ε_{t−1} + θ_2 ε_{t−2}.

2 Derive the mean, variance and ACF of ∆y_t where

y_t = δ_0 + δ_1 t + u_t,

with u_t = αu_{t−1} + ε_t, 0 < α < 1.


3 A macroeconomist postulates that the log of US real GNP can be represented by

A(L)(y_t − δ_0 − δ_1 t) = ε_t, A(L) = 1 − α_1 L − α_2 L².

An OLS fit yields

y_t = −0.321 + 0.003t + 1.335 y_{t−1} − 0.401 y_{t−2} + ε̂_t.

Derive the values of α_1, α_2, δ_0, δ_1. Compute the roots of the characteristic equation. What is the estimated value of A(1)? An alternative specification fitted to the same data yields

∆y_t = 0.003 + 0.369 ∆y_{t−1} + v̂_t.

What are the roots of this equation?


4 Find the optimal MMSE prediction for ARMA(1, 1).

5 Derive the partial autocorrelation function for the MA(1) and for the AR(1).


Linear Time Series: Summary

Preliminaries.

ARMA: linear process.

ARMA: MA process.

ARMA: AR process.

ARMA(p, q) process.

ARMA: nonstationarity.

ARMA: prediction.
