Autocorrelation x 4


Transcript of Autocorrelation x 4


    Autocorrelation Function Properties and Examples

$$\rho_x(\ell) = \frac{\gamma_x(\ell)}{\gamma_x(0)} = \frac{\gamma_x(\ell)}{\sigma_x^2}$$

    The ACF has a number of useful properties

•   Bounded:   −1 ≤ ρx(ℓ) ≤ 1

•   White noise, x(n) ∼ WN(µx, σ²x):   ρx(ℓ) = δ(ℓ)

•  These enable us to assign meaning to estimated values from signals

•  For example,

    –   If ρ̂x(ℓ) ≈ δ(ℓ), we can conclude that the process consists of nearly uncorrelated samples (a quick check is sketched below)
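As a quick MATLAB illustration of this point (a sketch, not from the slides; assumes the Signal Processing Toolbox function xcov is available):

    % Estimated ACF of white noise is approximately delta(l)
    N = 1000;
    x = randn(N,1);              % white Gaussian noise realization
    rho = xcov(x, 20, 'coeff');  % normalized sample ACF for lags -20..20
    stem(-20:20, rho);
    xlabel('Lag (l)'); ylabel('\rho(l)');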


    Partial and Autocorrelation Functions Overview

    •  Definitions

    •   Properties

    •   Yule-Walker Equations

    •  Levinson-Durbin recursion

    •  Biased and unbiased estimators


    Example 1: 1st Order Moving Average

Find the autocorrelation function of a 1st order moving average process, MA(1):

    x(n) = w(n) + b1w(n − 1)

    where w(n) ∼ WN(0, σ2w).
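A sketch of the solution, assuming real-valued b1 and using µx = 0 (this derivation is not reproduced from the slides):

$$
\begin{aligned}
\gamma_x(0) &= E[x(n)^2] = \sigma_w^2\,(1 + b_1^2)\\
\gamma_x(\pm 1) &= b_1\,\sigma_w^2, \qquad \gamma_x(\ell) = 0 \text{ for } |\ell| > 1\\
\rho_x(\ell) &= \begin{cases} 1 & \ell = 0\\ b_1/(1+b_1^2) & |\ell| = 1\\ 0 & |\ell| > 1 \end{cases}
\end{aligned}
$$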


    Autocorrelation Function Defined

Normalized Autocorrelation, also known as the Autocorrelation Function (ACF), is defined for a WSS signal as

$$\rho_x(\ell) = \frac{\gamma_x(\ell)}{\gamma_x(0)} = \frac{\gamma_x(\ell)}{\sigma_x^2}$$

where γx(ℓ) is the autocovariance of x(n),

$$\gamma_x(\ell) = E\left[[x(n+\ell) - \mu_x][x(n) - \mu_x]^*\right] = r_x(\ell) - |\mu_x|^2$$



    All-Pole Models

$$H(z) = \frac{b_0}{A(z)} = \frac{b_0}{1 + \sum_{k=1}^{P} a_k z^{-k}}$$

•  All-pole models are especially important because they can be estimated by solving a set of linear equations

•   Partial autocorrelation can also be best understood within the context of all-pole models (my motivation)

•  Recall that an AZ(Q) model can be expressed as an AP(∞) model if the AZ(Q) model is minimum phase

•  Since the coefficients at large lags tend to be small, this can often be well approximated by an AP(P) model


    Example 2: 1st Order Autoregressive

Find the autocorrelation function of a 1st order autoregressive process, AR(1):

    x(n) = −a1x(n − 1) + w(n)

where w(n) ∼ WN(0, σ²w). Hint:

$$-\alpha^n u(-n-1) \;\overset{Z}{\longleftrightarrow}\; \frac{1}{1-\alpha z^{-1}} \quad \text{for an ROC of } |z| < |\alpha|$$
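A sketch of the resulting ACF, assuming |a1| < 1 (consistent with the recursion used in the Example 4 code later in these slides):

$$
\gamma_x(0) = \frac{\sigma_w^2}{1-a_1^2}, \qquad \gamma_x(\ell) = (-a_1)^{|\ell|}\,\gamma_x(0), \qquad \rho_x(\ell) = (-a_1)^{|\ell|}
$$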


    AP Equations

    Let us consider a  causal   AP(P ) model:

$$
\begin{aligned}
H(z) + \sum_{k=1}^{P} a_k H(z)\,z^{-k} &= b_0\\
h(n) + \sum_{k=1}^{P} a_k h(n-k) &= b_0\,\delta(n)\\
\sum_{k=0}^{P} a_k h(n-k)\,h^*(n-\ell) &= b_0\,h^*(n-\ell)\,\delta(n)\\
\sum_{n=-\infty}^{\infty}\sum_{k=0}^{P} a_k h(n-k)\,h^*(n-\ell) &= \sum_{n=-\infty}^{\infty} b_0\,h^*(n-\ell)\,\delta(n)\\
\sum_{k=0}^{P} a_k r_h(\ell-k) &= b_0\,h^*(-\ell)
\end{aligned}
$$


    Autocorrelation Function Properties

$$\rho_x(\ell) = \frac{\gamma_x(\ell)}{\gamma_x(0)} = \frac{\gamma_x(\ell)}{\sigma_x^2}$$

•   In general, the ACF of an AR(P) process decays as a sum of damped exponentials (infinite extent)

•   If the AR(P) coefficients are known, the ACF can be determined by solving a set of linear equations

•  The ACF of an MA(Q) process is finite:   ρx(ℓ) = 0 for ℓ > Q

•  Thus, if the estimated ACF is very small for large lags, an MA(Q) model may be appropriate

•  The ACF of an ARMA(P, Q) process is also a sum of damped exponentials (infinite extent)

•  It is difficult to solve for in general



    Solving the AP Equations

If we know the autocorrelation, we can solve these equations for a and b0

$$
\begin{bmatrix}
r_h(0) & r_h^*(1) & \cdots & r_h^*(P)\\
r_h(1) & r_h(0) & \cdots & r_h^*(P-1)\\
\vdots & \vdots & \ddots & \vdots\\
r_h(P) & r_h(P-1) & \cdots & r_h(0)
\end{bmatrix}
\begin{bmatrix} 1\\ a_1\\ \vdots\\ a_P \end{bmatrix}
=
\begin{bmatrix} |b_0|^2\\ 0\\ \vdots\\ 0 \end{bmatrix}
$$


    AP Equations Continued

Since AP(P) is causal, h(0) = b0, h*(0) = b0*, and

$$
\sum_{k=0}^{P} a_k r_h(-k) = |b_0|^2 \quad (\ell = 0)
\qquad\qquad
\sum_{k=0}^{P} a_k r_h(\ell-k) = 0 \quad (\ell > 0)
$$

This has several important consequences. One is that the autocorrelation can be expressed as a recursive relation for ℓ > 0, since a0 = 1:

$$
\sum_{k=0}^{P} a_k r_h(\ell-k) = 0
\quad\Longrightarrow\quad
r_h(\ell) = -\sum_{k=1}^{P} a_k r_h(\ell-k), \qquad \ell > 0
$$
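A minimal MATLAB sketch of this recursion (hypothetical example values; r(i) stores rh(i − 1) and a = [a1; . . . ; aP]):

    a = [0.5; -0.2];                    % hypothetical AP(2) coefficients [a1; a2]
    P = length(a);
    r = zeros(11,1);
    r(1:P+1) = [1.0; -0.4; 0.1];        % hypothetical known values rh(0)..rh(P)
    for l = P+1:10
        r(l+1) = -a.'*r(l:-1:l-P+1);    % rh(l) = -sum_k a_k rh(l-k)
    end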


    Solving for   a

The last P rows do not involve b0:

$$
\begin{bmatrix}
r_h(1) & r_h(0) & \cdots & r_h^*(P-1)\\
\vdots & \vdots & \ddots & \vdots\\
r_h(P) & r_h(P-1) & \cdots & r_h(0)
\end{bmatrix}
\begin{bmatrix} 1\\ a_1\\ \vdots\\ a_P \end{bmatrix}
=
\begin{bmatrix} 0\\ \vdots\\ 0 \end{bmatrix}
$$

$$
\begin{bmatrix} r_h(1)\\ \vdots\\ r_h(P) \end{bmatrix}
+
\begin{bmatrix}
r_h(0) & \cdots & r_h^*(P-1)\\
\vdots & \ddots & \vdots\\
r_h(P-1) & \cdots & r_h(0)
\end{bmatrix}
\begin{bmatrix} a_1\\ \vdots\\ a_P \end{bmatrix}
=
\begin{bmatrix} 0\\ \vdots\\ 0 \end{bmatrix}
$$

$$
\mathbf{r}_h + \mathbf{R}_h\,\mathbf{a} = \mathbf{0}
\qquad\Longrightarrow\qquad
\mathbf{a} = -\mathbf{R}_h^{-1}\,\mathbf{r}_h
$$

    These are called the  Yule-Walker  equations
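A minimal MATLAB sketch of this solve for the real-valued case (hypothetical autocorrelation values; rh0P holds rh(0)..rh(P)):

    rh0P = [1.0; -0.4; 0.1];        % hypothetical rh(0), rh(1), rh(2)
    P = length(rh0P) - 1;
    Rh = toeplitz(rh0P(1:P));       % P x P symmetric Toeplitz matrix
    rh = rh0P(2:P+1);               % [rh(1); ...; rh(P)]
    a = -Rh\rh;                     % Yule-Walker solution for the coefficients
    b0 = sqrt(rh0P(1) + a.'*rh);    % gain (the sign of b0 is not determined)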


    AP Equations in Matrix Form

    We can collect the first P  + 1  of these terms in a matrix

$$
\begin{bmatrix}
r_h(0) & r_h(-1) & \cdots & r_h(-P)\\
r_h(1) & r_h(0) & \cdots & r_h(-P+1)\\
\vdots & \vdots & \ddots & \vdots\\
r_h(P) & r_h(P-1) & \cdots & r_h(0)
\end{bmatrix}
\begin{bmatrix} 1\\ a_1\\ \vdots\\ a_P \end{bmatrix}
=
\begin{bmatrix} |b_0|^2\\ 0\\ \vdots\\ 0 \end{bmatrix}
$$

$$
\begin{bmatrix}
r_h(0) & r_h^*(1) & \cdots & r_h^*(P)\\
r_h(1) & r_h(0) & \cdots & r_h^*(P-1)\\
\vdots & \vdots & \ddots & \vdots\\
r_h(P) & r_h(P-1) & \cdots & r_h(0)
\end{bmatrix}
\begin{bmatrix} 1\\ a_1\\ \vdots\\ a_P \end{bmatrix}
=
\begin{bmatrix} |b_0|^2\\ 0\\ \vdots\\ 0 \end{bmatrix}
$$

•  The autocorrelation matrix is Hermitian, Toeplitz, and positive definite.



    Yule-Walker Equation Comments Continued

$$\mathbf{a} = -\mathbf{R}_h^{-1}\,\mathbf{r}_h \qquad\qquad b_0 = \pm\sqrt{r_h(0) + \mathbf{a}^T\mathbf{r}_h}$$

•  Thus the two are equivalent, reversible, and unique characterizations of the model

$$\{r_h(0), \ldots, r_h(P)\} \leftrightarrow \{b_0, a_1, \ldots, a_P\}$$

•  The rest of the sequence can then be determined by symmetry and the recursive relation given earlier

$$r_h(\ell) = -\sum_{k=1}^{P} a_k r_h(\ell-k), \quad \ell > 0 \qquad\qquad r_h(-\ell) = r_h^*(\ell)$$


    Solving for  b0

$$
\begin{bmatrix}
r_h(0) & r_h^*(1) & \cdots & r_h^*(P)\\
r_h(1) & r_h(0) & \cdots & r_h^*(P-1)\\
\vdots & \vdots & \ddots & \vdots\\
r_h(P) & r_h(P-1) & \cdots & r_h(0)
\end{bmatrix}
\begin{bmatrix} 1\\ a_1\\ \vdots\\ a_P \end{bmatrix}
=
\begin{bmatrix} |b_0|^2\\ 0\\ \vdots\\ 0 \end{bmatrix}
$$

The first row gives

$$
\begin{bmatrix} r_h(0) & r_h^*(1) & \cdots & r_h^*(P) \end{bmatrix}
\begin{bmatrix} 1\\ a_1\\ \vdots\\ a_P \end{bmatrix}
= |b_0|^2
$$

$$
b_0 = \pm\sqrt{\sum_{k=0}^{P} a_k r_h(k)} = \pm\sqrt{r_h(0) + \mathbf{a}^T\mathbf{r}_h}
$$


    AR Processes versus AP Models

    Concisely, we can write the Yule-Walker Equations as

$$\mathbf{R}_h\,\mathbf{a} = -\mathbf{r}_h$$

If we have an AR(P) process, then we know rx(ℓ) = σ²w rh(ℓ) and we can equivalently write

$$\mathbf{R}_x\,\mathbf{a} = -\mathbf{r}_x$$

•  Thus, the following two problems are equivalent

    –  Find the parameters of an AR process, {a1, . . . , aP, σ²w}, given rx(ℓ)

    –  Find the parameters of an AP model, {a1, . . . , aP, b0}, given rh(ℓ)

•  To accommodate both in a common notation, I will write the Yule-Walker equations as simply

    Ra = −r


    Yule-Walker Equation Comments

$$\mathbf{a} = -\mathbf{R}_h^{-1}\,\mathbf{r}_h \qquad\qquad b_0 = \pm\sqrt{r_h(0) + \mathbf{a}^T\mathbf{r}_h}$$

•  The matrix inverse exists because, unless h(n) = 0, Rh is positive definite

•  Note that we cannot determine the sign of b0 = h(0) from rh(ℓ)

•  Thus, the first P + 1 terms of the autocorrelation completely determine the model parameters

•  A similar relation exists for the first P + 1 elements of the autocorrelation sequence in terms of the model parameters by solving a set of linear equations (Problem 4.6)

•  This is not true for AZ or PZ models



    Partial Autocorrelation: Alternative Definition

Define P[x(n)|x(1), . . . , x(n − 1)] as the minimum mean square error linear predictor of x(n) given {x(1), . . . , x(n − 1)}

$$\hat{x}(n) = P[x(n)\,|\,x(n-1),\ldots,x(1)] = \sum_{k=1}^{n-1} c_k\,x(n-k)$$

where

$$c_k = \underset{c_k}{\arg\min}\; E\left[(x(n) - \hat{x}(n))^2\right]$$

Similarly define P[x(0)|x(1), . . . , x(n − 1)] as the minimum mean square error linear predictor of x(0) given {x(1), . . . , x(n − 1)},

$$\hat{x}(0) = P[x(0)\,|\,x(n-1),\ldots,x(1)] = \sum_{k=1}^{n-1} d_k\,x(n-k)$$


    Solving for   a  Recursively

    We can write the Yule-Walker equations as

$$
\begin{bmatrix}
r(0) & r^*(1) & \cdots & r^*(\ell-1)\\
r(1) & r(0) & \cdots & r^*(\ell-2)\\
\vdots & \vdots & \ddots & \vdots\\
r(\ell-1) & r(\ell-2) & \cdots & r(0)
\end{bmatrix}
\begin{bmatrix} a_1^{(\ell)}\\ a_2^{(\ell)}\\ \vdots\\ a_\ell^{(\ell)} \end{bmatrix}
= -
\begin{bmatrix} r(1)\\ r(2)\\ \vdots\\ r(\ell) \end{bmatrix}
$$

$$\mathbf{R}\mathbf{a} = -\mathbf{r} \qquad\qquad \mathbf{a} = -\mathbf{R}^{-1}\mathbf{r}$$

•  We can recursively solve for the model coefficients $\mathbf{a} = [a_1^{(\ell)}, a_2^{(\ell)}, \ldots, a_\ell^{(\ell)}]^T$ for increasing model orders ℓ

•  Levinson-Durbin algorithm (one common form of the recursion is sketched below)
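A sketch of one common form of the Levinson-Durbin recursion, using the sign convention of the Yule-Walker equations above (initialized with P0 = r(0); this particular presentation is not reproduced from the slides):

$$
\begin{aligned}
k_m &= -\frac{r(m) + \sum_{k=1}^{m-1} a_k^{(m-1)}\,r(m-k)}{P_{m-1}}\\
a_m^{(m)} &= k_m, \qquad a_k^{(m)} = a_k^{(m-1)} + k_m\,a_{m-k}^{(m-1)*}, \quad k = 1,\ldots,m-1\\
P_m &= P_{m-1}\left(1 - |k_m|^2\right)
\end{aligned}
$$

Each order-m solve reuses the order-(m − 1) solution, so the full recursion costs O(ℓ²) operations rather than the O(ℓ³) of a direct matrix solve.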


    Partial Autocorrelation: Alternative Definition & Properties

    Then the PACF can be defined as the correlation between the residuals

$$
\begin{aligned}
\tilde{x}_n(n) &\triangleq x(n) - \hat{x}_{1:n-1}(n) = x(n) - P[x(n)\,|\,x(n-1),\ldots,x(1)]\\
\tilde{x}_n(0) &\triangleq x(0) - \hat{x}_{1:n-1}(0) = x(0) - P[x(0)\,|\,x(n-1),\ldots,x(1)]\\
\alpha(n) &\triangleq \frac{E\left[(x(n) - \hat{x}_n(n))\,(x(0) - \hat{x}_n(0))\right]}{E\left[(x(0) - \hat{x}_n(0))^2\right]}
= \frac{E\left[(x(n) - \hat{x}_n(n))\,(x(0) - \hat{x}_n(0))\right]}{E\left[(x(n) - \hat{x}_n(n))^2\right]}
\end{aligned}
$$

    •  One can think of the PACF as a measure of the correlation of what has not already been explained (the residuals)

    •  Like the ACF, it depends only on second order properties


    Partial Autocorrelation

The Partial Autocorrelation Function (PACF), also known as the partial autocorrelation sequence (PACS), is defined as

$$
\alpha(\ell) \triangleq
\begin{cases}
1 & \ell = 0\\
a_\ell^{(\ell)} & \ell > 0\\
\alpha^*(-\ell) & \ell < 0
\end{cases}
$$


    Example 3: Relevant MATLAB Code Continued

l = 0:L;
h = stem(l, pc);
set(h(1), 'MarkerFaceColor', 'b');
set(h(1), 'MarkerSize', 4);
ylabel('\alpha(l)');
xlabel('Lag (l)');
xlim([0 L]);
ylim([-1 1]);
box off;


    Example 3: MA(1) PACF

[Figure: stem plot of the MA(1) PACF, α(ℓ) versus lag ℓ = 0 to 10; vertical axis from −1 to 1.]


    Example 4: AR(1) ACF and PACF

Plot the ACF and PACF of an AR(1) process with a = [1 0.9].


    Example 3: Relevant MATLAB Code

% Levinson-Durbin recursion operating on the autocovariance sequence ac,
% where ac(1) = gamma(0). pc holds the partial correlations (PACF),
% mc the current model coefficients, and pv the prediction error variance.
pc = zeros(L+1,1);
mc = zeros(L+1,1);
pv = zeros(L+1,1);
pc(1) = 1;
mc(1) = 1;
pv(1) = ac(1);
pc(2) = ac(2)/ac(1);
mc(2) = pc(2);
pv(2) = ac(1)*(1-pc(2).^2);
for c1 = 3:L+1,
    pc(c1) = (ac(c1) - mc(2:c1-1).'*ac((c1-1):-1:2))/pv(c1-1); % reflection coefficient
    mc(2:c1-1) = mc(2:c1-1) - pc(c1)*mc(c1-1:-1:2);            % update coefficients
    mc(c1) = pc(c1);
    pv(c1) = pv(c1-1)*(1-pc(c1).^2);                           % update error variance
end;



    Example 4: AR(1) PACF

[Figure: stem plot of the AR(1) PACF, α(ℓ) versus lag ℓ = 0 to 10; vertical axis from −1 to 1.]


    Example 4: AR(1) ACF

[Figure: stem plot of the AR(1) ACF, ρ(ℓ) versus lag ℓ = 0 to 10; vertical axis from −1 to 1.]


    Autocovariance Estimation

•  We've seen that the second-order statistics are a handy, though incomplete, characterization of WSS stochastic processes

•  We would like to estimate these properties from realizations

    –   Single signal:   γx(ℓ), rx(ℓ), αx(ℓ), Rx(e^{jω})

    –  Two or more signals:   γyx(ℓ), ryx(ℓ), Ryx(e^{jω}), G²yx(e^{jω})

    •  What are the best estimators?


    Example 4: Relevant MATLAB Code

L = 10;    % Length of autocorrelation calculated
a1 = 0.9;  % Coefficients
sw = 1;    % White noise power
ac = zeros(L+1,1);
ac(1) = sw/(1-a1^2);        % gamma(0) = sigma_w^2/(1 - a1^2)
for c1 = 2:L+1,
    ac(c1) = -a1*ac(c1-1);  % gamma(l) = -a1*gamma(l-1) for l > 0
end;



    Unbiased Autocovariance Estimation

$$\hat{\gamma}_u(\ell) \triangleq \frac{1}{N-|\ell|} \sum_{n=0}^{N-1-|\ell|} \left[x(n+|\ell|) - \hat{\mu}_x\right]\left[x(n) - \hat{\mu}_x\right]$$

•   If we used the true mean µx instead of µ̂x, γ̂u(ℓ) would be unbiased

•  When we use µ̂x the estimate is asymptotically unbiased

•  The bias is O(1/N)

•  Much smaller than the variance, so it may be ignored


    Autocovariance Estimation Options

In practical applications, we only have a real finite data record {x(n)}, n = 0, . . . , N − 1. There are two popular estimators of the autocovariance worth considering: "unbiased" and biased.

"Unbiased"

$$\hat{\gamma}_u(\ell) \triangleq \frac{1}{N-|\ell|} \sum_{n=0}^{N-1-|\ell|} \left[x(n+|\ell|) - \hat{\mu}_x\right]\left[x(n) - \hat{\mu}_x\right], \qquad |\ell| < N$$

and γ̂u(ℓ) = 0 for |ℓ| ≥ N. Here µ̂x is the sample average of the sequence, defined as

$$\hat{\mu}_x \triangleq \frac{1}{N} \sum_{n=0}^{N-1} x(n)$$
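A minimal MATLAB sketch of both estimators (a hypothetical helper function; the Signal Processing Toolbox function xcov computes the same quantities via its 'biased' and 'unbiased' options):

    function g = sample_acov(x, maxlag, biased)
    % Sample autocovariance for lags 0..maxlag.
    % biased = true divides by N; biased = false divides by N - l.
    x = x(:) - mean(x);              % remove the sample mean mu_hat
    N = length(x);
    g = zeros(maxlag+1,1);
    for l = 0:maxlag
        s = x(1+l:N).'*x(1:N-l);     % sum of lagged products
        if biased
            g(l+1) = s/N;            % biased: divisor N
        else
            g(l+1) = s/(N - l);      % "unbiased": divisor N - l
        end
    end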


    Biased Autocovariance Estimation

$$\hat{\gamma}_b(\ell) \triangleq \frac{1}{N} \sum_{n=0}^{N-1-|\ell|} \left[x(n+|\ell|) - \hat{\mu}_x\right]\left[x(n) - \hat{\mu}_x\right] = \frac{N-|\ell|}{N}\,\hat{\gamma}_u(\ell), \qquad |\ell| < N$$

•  Our book (and most other books) lists a different estimate

•  This estimate uses a divisor of N rather than (N − |ℓ|)

•  If we ignore the effect of estimating µx, this bias is obvious:

$$E[\hat{\gamma}_b(\ell)] = \frac{N-|\ell|}{N}\,\gamma(\ell)$$

•  The bias of this estimator is larger than that of the "unbiased" estimator

•  Some claim that in general, the "biased" estimator has a smaller MSE

•  The variance must therefore be much smaller


    Unbiased Autocovariance Estimation

$$\hat{\gamma}_u(\ell) \triangleq \frac{1}{N-|\ell|} \sum_{n=0}^{N-1-|\ell|} \left[x(n+|\ell|) - \hat{\mu}_x\right]\left[x(n) - \hat{\mu}_x\right]$$

•  Discussed briefly in the book

•  The estimate has even symmetry:   γ̂u(ℓ) = γ̂u(−ℓ)

•  At longer lags, we have fewer terms to estimate the autocovariance

•  We have no way to estimate γ(ℓ) for |ℓ| ≥ N

•  We know that each pair {x(n + |ℓ|), x(n)} for all n has the same distribution because the process is assumed WSS and ergodic

•  This is a natural estimator that we know converges asymptotically (N → ∞)



    Biased versus Unbiased Estimators

$$\hat{\gamma}_b(\ell) = \frac{N-|\ell|}{N}\,\hat{\gamma}_u(\ell) \propto \sum_{n=0}^{N-1-|\ell|} \left[x(n+|\ell|) - \hat{\mu}_x\right]\left[x(n) - \hat{\mu}_x\right]$$

•  Although γ̂b(ℓ) is biased,

    –  The bias is small at small lags

    –  For large lags, the bias is towards 0:   γ̂b(ℓ) → 0 as ℓ → ∞

    –  This is also a property of the true autocorrelation

•   If γ(ℓ) is small for large lags, then the bias is also small

•  The biased estimator has considerably less variance at large lags (the tail)

$$\text{Biased:}\quad \mathrm{var}\{\hat{\gamma}_b(\ell)\} = O(1/N) \qquad\qquad \text{Unbiased:}\quad \mathrm{var}\{\hat{\gamma}_u(\ell)\} = O\!\left(\frac{1}{N-|\ell|}\right)$$


    Biased versus Unbiased Estimators

$$\hat{\gamma}_b(\ell) = \frac{N-|\ell|}{N}\,\hat{\gamma}_u(\ell) \propto \sum_{n=0}^{N-1-|\ell|} \left[x(n+|\ell|) - \hat{\mu}_x\right]\left[x(n) - \hat{\mu}_x\right]$$

•  The estimators are often called the sample autocovariance functions

•  Most software and books prefer the biased estimate

•  Why prefer a biased estimate to an unbiased estimate?

•  Our goal is to estimate the sequence, not just γ(ℓ) for a specific lag ℓ


    Biased is Better?

$$\hat{\gamma}_b(\ell) = \frac{N-|\ell|}{N}\,\hat{\gamma}_u(\ell) \propto \sum_{n=0}^{N-1-|\ell|} \left[x(n+|\ell|) - \hat{\mu}_x\right]\left[x(n) - \hat{\mu}_x\right]$$

•   In general,

    –  At small lags, there is little difference between the two estimators

    –  At large lags, the larger bias of the biased estimator is favorably traded for reduced variance

•  In most cases, the biased estimator has smaller MSE, though this has not been proven rigorously

•  For the remainder of the class we will use the biased estimator, unless otherwise noted:   γ̂(ℓ) = γ̂b(ℓ)


    Biased versus Unbiased Estimators

$$\hat{\gamma}_b(\ell) = \frac{N-|\ell|}{N}\,\hat{\gamma}_u(\ell) \propto \sum_{n=0}^{N-1-|\ell|} \left[x(n+|\ell|) - \hat{\mu}_x\right]\left[x(n) - \hat{\mu}_x\right]$$

•  The key advantage of γ̂b(ℓ) is that it is positive semi-definite (i.e., nonnegative definite); a numerical check is sketched below

•  There are many reasons why this property is important

    –  We know the true autocovariance has this property

    –  Autoregressive models built with positive-definite estimates of γ(ℓ) are stable

    –  Most estimators of the power spectral density R(e^{jω}) are nonnegative if they are based on a positive-definite estimate of γ(ℓ)

•   γ̂u(ℓ) may be positive definite for a particular sequence, but it is not guaranteed in general
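A small MATLAB sketch of a numerical check of this property (assumes xcov from the Signal Processing Toolbox):

    % The Toeplitz matrix built from the biased sample autocovariance
    % should have no eigenvalues below zero (up to round-off).
    N = 64;
    x = randn(N,1);
    gb = xcov(x, N-1, 'biased');  % lags -(N-1)..(N-1); gb(N) is lag 0
    G = toeplitz(gb(N:end));      % N x N matrix from lags 0..N-1
    min(eig(G))                   % >= 0 up to numerical precision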



    Estimated Autocorrelation Variance Continued

If the process is Gaussian,

$$\mathrm{var}\{\hat{r}_b(\ell)\} = \frac{1}{N^2} \sum_{m=-(N-\ell)+1}^{N-\ell-1} \left(N - |m| - \ell\right)\left[r^2(m) + r(m+\ell)\,r(m-\ell)\right]$$

•  The same applies to the unbiased estimate with a divisor of (N − |ℓ|)² instead of N²

•  This is still problematic because we don't know the true r(ℓ) in most applications

•  If we did, we wouldn't need to estimate it!

•  This is often what prevents us from making desired inferences about our estimators:

    –  Desired properties of the sampling distribution depend on unknown properties of the random process


    Estimated Autocorrelation Covariance

$$\hat{\gamma}_b(\ell) = \frac{N-|\ell|}{N}\,\hat{\gamma}_u(\ell) \propto \sum_{n=0}^{N-1-|\ell|} \left[x(n+|\ell|) - \hat{\mu}_x\right]\left[x(n) - \hat{\mu}_x\right]$$

•  As with all estimators, we would like to have confidence intervals

•  These are hard to obtain, in general

•  We need more assumptions

    –  Stationary up to order four:   E[x(n)x(n + k)x(n + ℓ)x(n + m)] = f(k, ℓ, m)

    –  Mean is zero, µx = 0, so it does not need to be estimated


    Estimated ACF

    The natural estimate of the ACF is

$$\hat{\rho}_b(\ell) \triangleq \frac{\hat{\gamma}_b(\ell)}{\hat{\gamma}(0)} \qquad\qquad \hat{\rho}_u(\ell) \triangleq \frac{\hat{\gamma}_u(\ell)}{\hat{\gamma}(0)}$$

•  The same tradeoffs exist between the biased and unbiased estimates

•   Also,

    –   |ρ̂b(ℓ)| ≤ 1 for all ℓ

    –  This is not true in general for ρ̂u(ℓ)

•  They are the same at ℓ = 0

•   Often called the sample autocorrelation function

•  Again, the bias, covariance, and variance of these estimators are complicated and based on unknown properties


    Estimated Autocorrelation Variance

$$\hat{\gamma}_b(\ell) = \hat{r}_b(\ell) = \frac{1}{N} \sum_{n=0}^{N-1-|\ell|} x(n+|\ell|)\,x(n)
\qquad\qquad
\hat{\gamma}_u(\ell) = \hat{r}_u(\ell) = \frac{1}{N-|\ell|} \sum_{n=0}^{N-1-|\ell|} x(n+|\ell|)\,x(n)$$

•  The bias is

$$E[\hat{r}_b(\ell)] = \frac{N-|\ell|}{N}\,r(\ell) \qquad\qquad E[\hat{r}_u(\ell)] = r(\ell)$$

•  The covariance of r̂(ℓ) is complicated and not usable in practice

    –  Depends on the fourth joint cumulant of {x(n), x(n + k), x(n + ℓ), x(n + m)}

    –  Depends on the true unknown autocorrelation

•  If the process is Gaussian, then the fourth joint cumulant is zero



    Confidence Intervals

•   If N is large enough, the central limit theorem applies and ρ̂b(ℓ) is approximately normal

•  In this case, we can use the Normal cdf to plot confidence intervals of an IID sequence

•  These are proportional to ±√(var{ρ̂b(ℓ)}); a plotting sketch follows below
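A minimal MATLAB sketch of plotting these bounds for an IID test sequence (assumes norminv from the Statistics Toolbox and xcov from the Signal Processing Toolbox):

    N = 100;
    x = randn(N,1);                       % IID (white) sequence
    rho = xcov(x, 20, 'coeff');           % normalized sample ACF, lags -20..20
    cl = 95;                              % confidence level in percent
    b = -norminv((1-cl/100)/2)/sqrt(N);   % uses var{rho_hat} ~ 1/N for IID data
    stem(0:20, rho(21:end)); hold on;
    plot([0 20], [b b], 'r--', [0 20], [-b -b], 'r--'); hold off;
    xlabel('Lag (l)'); ylabel('\rho(l)');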


    Estimated ACF Variance

Again, if x(n) is a Gaussian process, then

$$\mathrm{var}\{\hat{\rho}_b(\ell)\} \approx \frac{1}{N} \sum_{m=-\infty}^{\infty} \left[\rho^2(m) + \rho(m+\ell)\rho(m-\ell) + 2\rho^2(\ell)\rho^2(m) - 4\rho(\ell)\rho(m)\rho(m-\ell)\right]$$

•  The fourth cumulant is also absent if x(n) is generated by a linear process with independent inputs

•  The sample ACF ρ̂(ℓ) will generally have more correlation than the true ρ(ℓ)

•  It will generally be less damped and decay more slowly than ρ(ℓ)

•  This applies to the estimated autocovariance and autocorrelations as well


    Partial Autocorrelation Estimation

•  There are similar issues surrounding partial autocorrelation

•  However, in this case we always use the biased estimate of the autocorrelation to estimate the PACF

•  This is necessary, in this case, to ensure that the AR models are bounded

•  Less is known about the statistics of the PACF (mean, variance, and confidence intervals)

•  However, for reasons similar to those for the ACF, for a WN process the CLT applies and we can use the same confidence intervals as for the ACF


    Confidence Intervals

    Let x(n) be an IID sequence. Then

$$
\begin{aligned}
\rho(0) &= 1\\
\rho(\ell) &= 0, \quad |\ell| > 0\\
\mathrm{cov}\{\hat{\rho}(\ell), \hat{\rho}(\ell+m)\} &\approx 0, \quad m \neq 0\\
\mathrm{var}\{\hat{\rho}_b(\ell)\} &\approx \frac{1}{N}, \quad |\ell| > 0\\
\mathrm{var}\{\hat{\rho}_u(\ell)\} &\approx \frac{N}{(N-|\ell|)^2}, \quad |\ell| > 0
\end{aligned}
$$

•  In general, it is not possible to obtain confidence intervals for the estimated ACF because the variance of the estimator depends on the true ACF

•  Instead, it is common practice to plot the confidence intervals of a purely random process



    Example 5: AR(1) ACF

[Figure: estimated ACF, ρ(ℓ) versus lag ℓ = 0 to 90, for N = 100 and a1 = −0.9; vertical axis from −1 to 1.]


    Example 5: 1st Order Autoregressive

Find the autocorrelation function of a 1st order autoregressive process, AR(1):

x(n) = −a1x(n − 1) + w(n)

where w(n) ∼ WN(0, σ²w). Estimate the ACF using the biased and unbiased estimates for N = 100. Do so several times for different values of a1.
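A minimal MATLAB sketch of one such trial (assumes xcov from the Signal Processing Toolbox; the slides' own code for this example appears later):

    N = 100; a1 = -0.9;
    w = randn(N,1);                 % unit-variance white noise
    x = filter(1, [1 a1], w);       % AR(1): x(n) = -a1*x(n-1) + w(n)
    gb = xcov(x, N-1, 'biased');    % biased autocovariance estimate
    gu = xcov(x, N-1, 'unbiased');  % unbiased autocovariance estimate
    plot(0:N-1, gb(N:end), 0:N-1, gu(N:end));
    legend('Biased', 'Unbiased'); xlabel('Lag (l)');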


    Example 5: AR(1) ACF

[Figure: estimated ACF, ρ(ℓ) versus lag ℓ = 0 to 90, for N = 100 and a1 = −0.9; vertical axis from −1 to 1.]


    Example 5: AR(1) Signal

[Figure: AR(1) signal realization, x(n) versus sample n = 0 to 100, for N = 100 and a1 = −0.9.]



    Example 5: AR(1) ACF

[Figure: estimated ACF, ρ(ℓ) versus lag ℓ = 0 to 90, for N = 100 and a1 = 0.9; vertical axis from −1 to 1.]


    Example 5: AR(1) PACF

[Figure: estimated PACF, α(ℓ) versus lag ℓ = 0 to 90, for N = 100 and a1 = 0.5; vertical axis from −1 to 1.]


    Example 5: AR(1) ACF

[Figure: estimated ACF, ρ(ℓ) versus lag ℓ = 0 to 90, for N = 100 and a1 = 0.9; vertical axis from −1 to 1.]


    Example 5: AR(1) Signal

[Figure: AR(1) signal realization, x(n) versus sample n = 0 to 100, for N = 100 and a1 = 0.9.]



    Summary

•  The ACF and PACF are useful characterizations of WSS random processes

•  They can help select an appropriate model

    –   MA: finite ACF

    –  AR: finite PACF

•  AP/AR are often the preferred characterizations because we can solve/estimate the model parameters by solving a set of linear equations (Yule-Walker)

•   Biased estimates of r(ℓ), ρ(ℓ), and/or γ(ℓ) are generally preferred to the unbiased estimates

    –  Less variance (always), and lower MSE (sometimes)

    –  Positive definite (the PSD estimate is therefore also nonnegative)

•  The bias is known, but the variance of the estimates is generally unknown

•  Loosely, confidence intervals for WN are used instead


    Example 5: AR(1) PACF

[Figure: estimated PACF, α(ℓ) versus lag ℓ = 0 to 90, for N = 100 and a1 = 0.9; vertical axis from −1 to 1.]


    Example 5: MATLAB Code

L = 90;    % Length of autocorrelation calculated
a1 = 0.9;  % Coefficients
sw = 1;    % White noise power
ac = zeros(L+1,1);
N = 100;
cl = 99;   % Confidence level
np = norminv((1-cl/100)/2);  % Find corresponding lower percentile
ac(1) = sw/(1-a1^2);         % True gamma(0)
for c1 = 2:L+1,
    ac(c1) = -a1*ac(c1-1);   % True gamma(l) via the AR(1) recursion
end;
acf = ac/ac(1);              % True ACF
w = randn(N,1);              % White noise realization
a = [1 a1];
x = filter(1,a,w);           % Generate the AR(1) signal
