Topic 5 Energy & Power Signals, Correlation & Spectral Density (dwm/Courses/2TF/N56.pdf · 2020-02-10)


Topic 5

Energy & Power Signals, Correlation & Spectral Density

In Topic 1 we reviewed the notion of average signal power in a periodic signal and related it to the An and Bn coefficients of a Fourier series, giving a method of calculating power in the domain of discrete frequencies. In this topic we want to revisit power for the continuous time domain, with a view to expressing it in terms of the frequency spectrum.

5.1 Discrete Parseval for the Complex Fourier Series

Let’s remind ourselves about energy and power by reviewing the derivation of average power using the complex Fourier series. You did this as a part of the 1st tute sheet.

First, remember that f*(t)f(t) = |f(t)|² is a power. The average power in a periodic signal with period T0 = 2π/ω0 is therefore

$$ \text{Ave sig pwr} = \frac{1}{T_0}\int_{-T_0/2}^{+T_0/2} |f(t)|^2\,\mathrm{d}t = \frac{1}{T_0}\int_{-T_0/2}^{+T_0/2} f^*(t)\,f(t)\,\mathrm{d}t. \tag{5.1} $$

Now replace f (t) with its complex Fourier series

$$ f(t) = \sum_{n=-\infty}^{\infty} C_n e^{in\omega_0 t}. \tag{5.2} $$

It follows that (and do note the need for different indices m and n)

$$ \text{Ave sig pwr} = \frac{1}{T_0}\int_{-T_0/2}^{+T_0/2} \left(\sum_{n=-\infty}^{\infty} C_n e^{in\omega_0 t}\right)\left(\sum_{m=-\infty}^{\infty} C_m^* e^{-im\omega_0 t}\right)\mathrm{d}t = \sum_{n=-\infty}^{\infty} C_n C_n^* \quad\text{(because of orthogonality)} $$
$$ = \sum_{n=-\infty}^{\infty} |C_n|^2 = |C_0|^2 + 2\sum_{n=1}^{\infty} |C_n|^2, \quad\text{using } C_n = C_{-n}^*\ \text{for real } f(t). $$
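This discrete Parseval result is easy to sanity-check numerically. The sketch below is not part of the notes; the test signal f(t) = 1 + cos(ω0 t), the grid size, and the use of NumPy's FFT to obtain the Cn are illustrative choices. For that signal C0 = 1 and C±1 = 1/2, so both sides should equal 3/2.

```python
import numpy as np

# Average power of a periodic signal two ways: time average of |f|^2 versus
# the sum of |C_n|^2 (discrete Parseval).  Test signal: f(t) = 1 + cos(w0 t).
T0 = 2 * np.pi                       # one period, so w0 = 1
N = 4096
t = np.linspace(0, T0, N, endpoint=False)
f = 1 + np.cos(t)

# Time domain: (1/T0) * integral of |f|^2 over one period
p_time = np.mean(np.abs(f) ** 2)

# Frequency domain: for a periodic signal sampled over one period,
# fft(f)/N returns the complex Fourier coefficients C_n
C = np.fft.fft(f) / N
p_freq = np.sum(np.abs(C) ** 2)

print(p_time, p_freq)                # both ≈ 1.5 = |C_0|^2 + 2|C_1|^2
```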



5.1.1 A quick check

It is worth checking this using the relationships found in Topic 1:

$$ C_m = \begin{cases} \tfrac{1}{2}(A_m - iB_m) & m > 0 \\ A_0/2 & m = 0 \\ \tfrac{1}{2}\left(A_{|m|} + iB_{|m|}\right) & m < 0 \end{cases} \tag{5.3} $$

For n ≥ 0 the quantities are

$$ |C_0|^2 = \left(\tfrac{1}{2}A_0\right)^2, \qquad 2|C_n|^2 = 2\cdot\tfrac{1}{2}(A_n - iB_n)\cdot\tfrac{1}{2}(A_n + iB_n) = \tfrac{1}{2}\left(A_n^2 + B_n^2\right), \tag{5.4} $$

in agreement with the expression in Topic 1.

5.2 Energy signals vs Power signals

Now move on to general, not necessarily periodic, signals. When considering signals in the continuous time domain, it is necessary to distinguish between finite energy signals, or “energy signals” for short, and finite power signals. But let us be absolutely clear that all signals f(t) are such that |f(t)|² is a power.

An energy signal is one where the total energy is finite:

$$ E_{\mathrm{Tot}} = \int_{-\infty}^{\infty} |f(t)|^2\,\mathrm{d}t, \qquad 0 < E_{\mathrm{Tot}} < \infty. \tag{5.5} $$

It is said that f(t) is “square integrable”. As ETot is finite, dividing by the infinite duration indicates that energy signals have zero average power.

To summarize before knowing what all these terms mean: an energy signal f(t)

• always has a Fourier transform F(ω)
• always has an energy spectral density (ESD) given by Eff(ω) = |F(ω)|²
• always has an autocorrelation Rff(τ) = ∫_{−∞}^{∞} f*(t)f(t+τ)dt
• always has an ESD which is the FT of the autocorrelation: Rff(τ) ⇔ Eff(ω)
• always has total energy ETot = Rff(0) = (1/2π)∫_{−∞}^{∞} Eff(ω)dω
• always has an ESD which transfers as Egg(ω) = |H(ω)|²Eff(ω)


A power signal is one where the total energy is infinite, and we consider average power

$$ P_{\mathrm{Ave}} = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} |f(t)|^2\,\mathrm{d}t, \qquad 0 < P_{\mathrm{Ave}} < \infty. \tag{5.6} $$

A power signal f(t)

• may have a Fourier transform F(ω)
• may have a power spectral density (PSD) given by Sff(ω) = |F(ω)|²
• always has an autocorrelation Rff(τ) = lim_{T→∞} (1/2T)∫_{−T}^{T} f*(t)f(t+τ)dt
• always has a PSD which is the FT of the autocorrelation: Rff(τ) ⇔ Sff(ω)
• always has integrated average power PAve = Rff(0)
• always has a PSD which transfers through a system as Sgg(ω) = |H(ω)|²Sff(ω)

The distinction is all to do with avoiding infinities, but it results in the autocorrelation having different dimensions. Instinct tells you this is going to be a bit messy. We discuss finite energy signals first.

5.3 Parseval’s theorem revisited

Let us assume an energy signal, and recall a general result from Topic 3:

$$ f(t)g(t) \;\Leftrightarrow\; \frac{1}{2\pi}F(\omega)*G(\omega), \tag{5.7} $$

where F(ω) and G(ω) are the Fourier transforms of f(t) and g(t).

Writing the Fourier transform and the convolution integral out fully gives

$$ \int_{-\infty}^{\infty} f(t)g(t)e^{-i\omega t}\,\mathrm{d}t = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(p)\,G(\omega - p)\,\mathrm{d}p, \tag{5.8} $$

where p is a dummy variable used for integration.

Note that ω is not involved in the integrations above — it is just a free variable on both the left and right of the above equation — and we can give it any value we wish to. Choosing ω = 0, it must be the case that

$$ \int_{-\infty}^{\infty} f(t)g(t)\,\mathrm{d}t = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(p)\,G(-p)\,\mathrm{d}p. \tag{5.9} $$


Now suppose g(t) = f*(t). We know that

$$ \int_{-\infty}^{\infty} f(t)e^{-i\omega t}\,\mathrm{d}t = F(\omega) \;\Rightarrow\; \int_{-\infty}^{\infty} f^*(t)e^{+i\omega t}\,\mathrm{d}t = F^*(\omega) \;\Rightarrow\; \int_{-\infty}^{\infty} f^*(t)e^{-i\omega t}\,\mathrm{d}t = F^*(-\omega). $$

This is, of course, a quite general result which could have been stuck in Topic 2, and which is worth highlighting. The Fourier Transform of a complex conjugate is

$$ \int_{-\infty}^{\infty} f^*(t)e^{-i\omega t}\,\mathrm{d}t = F^*(-\omega) \tag{5.10} $$

Take care with the −ω.
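The discrete analogue of this conjugation rule can be checked in a few lines. This sketch is not part of the notes; the sequence length and random test data are illustrative. For any complex sequence f, DFT(conj(f))[k] = conj(DFT(f)[−k]) with indices taken mod N, mirroring F*(−ω).

```python
import numpy as np

# Check: the DFT of a conjugated sequence is the conjugated,
# frequency-reversed DFT of the original (discrete version of F*(-w)).
rng = np.random.default_rng(2)
N = 32
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)

F = np.fft.fft(f)
lhs = np.fft.fft(np.conj(f))            # DFT of the conjugate
rhs = np.conj(F[(-np.arange(N)) % N])   # conj(F[-k]), indices mod N

assert np.allclose(lhs, rhs)
print("DFT of conjugate equals conjugated, frequency-reversed DFT")
```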

Back to the argument. In the earlier expression we had

$$ \int_{-\infty}^{\infty} f(t)g(t)\,\mathrm{d}t = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(p)G(-p)\,\mathrm{d}p \;\;\Rightarrow\;\; \int_{-\infty}^{\infty} f(t)f^*(t)\,\mathrm{d}t = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(p)F^*(p)\,\mathrm{d}p. $$

Now p is just any parameter, so it is possible to tidy the expression by replacing it with ω. Then we arrive at the following important result.

Parseval’s Theorem: The total energy in a signal is

$$ E_{\mathrm{Tot}} = \int_{-\infty}^{\infty} |f(t)|^2\,\mathrm{d}t = \frac{1}{2\pi}\int_{-\infty}^{\infty} |F(\omega)|^2\,\mathrm{d}\omega = \int_{-\infty}^{\infty} |F(\omega)|^2\,\mathrm{d}f \tag{5.11} $$

NB! The df = dω/2π, and is nothing to do with the signal being called f (t).

5.4 The Energy Spectral Density

If the integral gives the total energy, it must be that |F(ω)|² is the energy per Hz. That is:

The ENERGY Spectral Density of a signal f(t) ⇔ F(ω) is defined as

$$ E_{ff}(\omega) = |F(\omega)|^2 \tag{5.12} $$


5.5 ♣ Example

[Q] Determine the total energy in the signal f(t) = u(t)e^{−t} (i) in the time domain, and (ii) by determining the energy spectral density and integrating over frequency.

[A] Part (i): Using the time domain, ETot is the power integrated over all time:

f (t) = u(t) exp(−t)

⇒ETot =

∫ ∞−∞|f (t)|2dt =

∫ ∞0

exp(−2t)dt

=

[exp(−2t)

−2

∣∣∣∣∞0

dt

= 0−1

−2=

1

2

Part (ii): Using the frequency domain,

$$ F(\omega) = \int_{-\infty}^{\infty} u(t)\exp(-t)\exp(-i\omega t)\,\mathrm{d}t = \int_{0}^{\infty} \exp(-t(1+i\omega))\,\mathrm{d}t = \left[-\frac{\exp(-t(1+i\omega))}{1+i\omega}\right]_0^{\infty} = \frac{1}{1+i\omega}. $$

Hence the energy spectral density is

$$ E_{ff}(\omega) = |F(\omega)|^2 = \frac{1}{1+\omega^2}. \tag{5.13} $$

Integration over all frequency f (not ω, remember!) gives the total energy as

$$ E_{\mathrm{Tot}} = \int_{-\infty}^{\infty} |F(\omega)|^2\,\mathrm{d}f = \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{1}{1+\omega^2}\,\mathrm{d}\omega. \tag{5.14} $$

Substitute tan θ = ω:

$$ \int_{-\infty}^{\infty} |F(\omega)|^2\,\mathrm{d}f = \frac{1}{2\pi}\int_{-\pi/2}^{\pi/2} \frac{1}{1+\tan^2\theta}\,\sec^2\theta\,\mathrm{d}\theta = \frac{1}{2\pi}\int_{-\pi/2}^{\pi/2} \mathrm{d}\theta = \frac{1}{2\pi}\,\pi = \frac{1}{2}, $$

which is nice.
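The same answer can be recovered by brute-force numerics, which is a useful habit for checking Parseval calculations. This sketch is not part of the notes; the truncation of both integrals and the grid spacings are illustrative choices, so both estimates are only close to the exact value of 1/2.

```python
import numpy as np

# Numerical cross-check of the example: energy of f(t) = u(t) e^{-t}
# computed in the time domain and via the energy spectral density.
dt = 1e-3
t = np.arange(0.0, 20.0, dt)            # e^{-t} is negligible beyond t = 20
f = np.exp(-t)                          # u(t) e^{-t} for t >= 0

E_time = np.sum(f**2) * dt              # approx integral of |f(t)|^2

# Brute-force F(omega) on a truncated grid, then integrate |F|^2 df = dw/2pi
w = np.linspace(-200.0, 200.0, 2001)
F = np.array([np.sum(f * np.exp(-1j * wk * t)) * dt for wk in w])
E_freq = np.sum(np.abs(F)**2) * (w[1] - w[0]) / (2 * np.pi)

print(E_time, E_freq)                   # both close to the exact value 1/2
```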


5.6 Correlation

Correlation is a tool for analysing whether processes considered random a priori are in fact related. In signal processing, the cross-correlation Rfg is used to assess how similar two different signals f(t) and g(t) are. Rfg is found by multiplying one signal, f(t) say, with time-shifted values of the other, g(t+τ), then summing up the products. In the example in Figure 5.1 the cross-correlation will be low if the shift τ = 0, and high if τ = 2 or τ = 5.


Figure 5.1: The signal f (t) would have a higher cross-correlation with parts of g(t) that look similar.

One can also ask how similar a signal is to itself. Self-similarity is described by the auto-correlation Rff, again a sum of products of the signal f(t) and a copy of the signal at a shifted time, f(t+τ).

An auto-correlation with a high magnitude means that the value of the signal f(t) at one instant has a strong bearing on the value at the next instant. Correlation can be used for both deterministic and random signals. We will explore random processes in Topic 6.

The cross- and auto-correlations can be derived for both finite energy and finite power signals, but they have different dimensions (energy and power respectively) and differ in other more subtle ways.

We continue by looking at the auto- and cross-correlations of finite energy signals.

5.7 The Auto-correlation of a finite energy signal

The auto-correlation of a finite energy signal is defined as follows. Most signals f(t) are real, so often you will see the conjugate omitted.


The auto-correlation of a signal f(t) of finite energy is defined as

$$ R_{ff}(\tau) = \int_{-\infty}^{\infty} f^*(t)f(t+\tau)\,\mathrm{d}t = \int_{-\infty}^{\infty} f(t)f^*(t-\tau)\,\mathrm{d}t \tag{5.15} $$
$$ \stackrel{\text{(for real signals)}}{=} \int_{-\infty}^{\infty} f(t)f(t+\tau)\,\mathrm{d}t \tag{5.16} $$

The result has units of power × time — it’s an energy.

There are two ways of envisaging the process, as shown in Figure 5.2. One is to shift a copy of the signal and multiply vertically (so to speak). For positive τ this is a shift to the “left”. This is most useful when calculating analytically.


Figure 5.2: f (t) and f (t + τ) for a positive shift τ .

5.7.1 Basic properties of auto-correlation

1. Symmetry. First, let’s show the equivalence in the definition. Substitute p = t + τ into the first definition, and you will get

$$ R_{ff}(\tau) = \int_{-\infty}^{\infty} f^*(t)f(t+\tau)\,\mathrm{d}t = \int_{-\infty}^{\infty} f^*(p-\tau)f(p)\,\mathrm{d}p = \int_{-\infty}^{\infty} f(p)f^*(p-\tau)\,\mathrm{d}p = \int_{-\infty}^{\infty} f(t)f^*(t-\tau)\,\mathrm{d}t. \tag{5.17} $$


It is then obvious that in general:

If f is complex, Rff(τ) = R*ff(−τ) — a Hermitian function.
If f is real, Rff(τ) = Rff(−τ) — an even function. (5.18)

2. For a non-zero signal, Rff(0) is real and positive.
Proof: For any non-zero signal there is at least one instant t1 for which f(t1) ≠ 0, and f*(t1)f(t1) > 0. Hence ∫_{−∞}^{∞} f*(t)f(t)dt > 0.

3. The magnitude of the value at τ = 0 is largest: ∀τ, |Rff(0)| ≥ |Rff(τ)|.
Proof: Consider any pair of numbers a1 and a2. As |a1 − a2|² ≥ 0, we know that |a1|² + |a2|² ≥ a*1a2 + a1a*2. Now take the pairs of numbers at random from the function f(t). Our result shows that there is no rearrangement, random or ordered, of the function values into φ(t) that would make ∫f*(t)φ(t)dt > ∫|f(t)|²dt. Using φ(t) = f(t+τ) is an ordered rearrangement, and so for any τ

$$ \int_{-\infty}^{\infty} |f(t)|^2\,\mathrm{d}t \;\geq\; \int_{-\infty}^{\infty} f^*(t)f(t+\tau)\,\mathrm{d}t. \tag{5.19} $$

Hence |Rff(0)| ≥ |Rff(τ)|.
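All three properties can be observed numerically for an arbitrary finite-energy test signal. This sketch is not from the notes; the random decaying signal and the discrete approximation Rff[k] ≈ Σ f[t]f[t+k]·dt are illustrative choices, with NumPy's `correlate` doing the lag-by-lag products.

```python
import numpy as np

# Discrete check of the three auto-correlation properties for a real,
# decaying (hence finite energy) random signal.
rng = np.random.default_rng(0)
dt = 0.01
f = rng.standard_normal(500) * np.exp(-np.linspace(0, 5, 500))

R = np.correlate(f, f, mode="full") * dt    # lags k = -(N-1) ... (N-1)
mid = len(R) // 2                           # index of lag 0

assert np.isclose(R[mid], np.sum(f**2) * dt)    # R_ff(0) = total energy > 0
assert np.all(np.abs(R) <= R[mid] + 1e-12)      # peak magnitude at zero lag
assert np.allclose(R, R[::-1])                  # even function for real f
print("all three properties hold")
```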

5.8 ♣ Applications

5.8.1 ♣ Useful: Synchronising to heartbeats in an ECG

DIY search and read ...

5.8.2 ♣ Bonkers: The search for Extra Terrestrial Intelligence

Figure 5.3: Chatty aliens

For several decades, the SETI organization have been looking for extraterrestrial intelligence by examining the autocorrelation of signals from radio telescopes. One project scans the sky around nearby (200 light years) sun-like stars, chopping up the bandwidth between 1–3 GHz into 2 billion channels, each 1 Hz wide. (It is assumed that an attempt to communicate would use a single-frequency, highly tuned signal.) They determine the autocorrelation of each channel’s signal. If the channel is noise, one would observe a very low autocorrelation for all non-zero τ. (See white noise in Topic 6.) But if there is, say, a repeated message, one would observe a periodic rise in the autocorrelation.



Figure 5.4: Rff at τ = 0 is always large, but will drop to zero if the signal is noise. If the messages align, the autocorrelation will rise.

5.8.3 ♣ Useful: temperature cycles in a building

This is an example from the Matlab documentation where the temperature inside a building is measured over 120 days.


Figure 5.5: (a) The structure in the raw signal is not immediately evident. (b) The autocorrelation Rff shows immediately that there is both daily and weekly structure.

5.9 The Wiener-Khinchin Theorem

Let us take the Fourier transform of the cross-correlation ∫f*(t)g(t+τ)dt, then switch the order of integration:

$$ \mathrm{FT}\!\left[\int_{-\infty}^{\infty} f^*(t)g(t+\tau)\,\mathrm{d}t\right] = \int_{\tau=-\infty}^{\infty}\int_{t=-\infty}^{\infty} f^*(t)g(t+\tau)\,\mathrm{d}t\; e^{-i\omega\tau}\,\mathrm{d}\tau \tag{5.20} $$
$$ = \int_{t=-\infty}^{\infty} f^*(t)\int_{\tau=-\infty}^{\infty} g(t+\tau)\,e^{-i\omega\tau}\,\mathrm{d}\tau\,\mathrm{d}t \tag{5.21} $$


Notice that t is a constant for the integration wrt τ (that’s how f*(t) floated through the integral sign). Substitute p = t + τ into it, and the integrals become separable:

$$ \mathrm{FT}\!\left[\int_{-\infty}^{\infty} f^*(t)g(t+\tau)\,\mathrm{d}t\right] = \int_{t=-\infty}^{\infty} f^*(t)\int_{p=-\infty}^{\infty} g(p)\,e^{-i\omega p}e^{+i\omega t}\,\mathrm{d}p\,\mathrm{d}t \tag{5.22} $$
$$ = \int_{-\infty}^{\infty} f^*(t)e^{i\omega t}\,\mathrm{d}t \int_{-\infty}^{\infty} g(p)\,e^{-i\omega p}\,\mathrm{d}p \tag{5.23} $$
$$ = F^*(\omega)\,G(\omega). \tag{5.24} $$

If we specialize this to the auto-correlation, G(ω) gets replaced by F (ω). Then

For a finite energy signal, the Wiener-Khinchin Theorem¹ says that the FT of the Auto-Correlation is the Energy Spectral Density:

$$ \mathrm{FT}[R_{ff}(\tau)] = |F(\omega)|^2 = E_{ff}(\omega) \tag{5.25} $$

¹Norbert Wiener (1894–1964) and Aleksandr Khinchin (1894–1959)

(This method of proof is valid only for finite energy signals, and rather trivializes the Wiener-Khinchin theorem. The fundamental derivation lies in the theory of stochastic processes.)
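The discrete analogue of Wiener-Khinchin is exact and makes a quick sanity check. This sketch is not part of the notes; the random test sequence is an illustrative choice, and "autocorrelation" here is the circular version that pairs naturally with the DFT.

```python
import numpy as np

# Discrete Wiener-Khinchin: the DFT of the circular autocorrelation of a
# sequence equals |F[k]|^2, its (discrete) energy spectral density.
rng = np.random.default_rng(1)
N = 64
f = rng.standard_normal(N)

F = np.fft.fft(f)
# Circular autocorrelation computed directly in the time domain:
# R[k] = sum_n f[n] f[n+k mod N]
R = np.array([np.sum(f * np.roll(f, -k)) for k in range(N)])

assert np.allclose(np.fft.fft(R), np.abs(F) ** 2)
print("FT of the autocorrelation equals the energy spectral density")
```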

5.10 Corollary of Wiener-Khinchin

This corollary just confirms a result obtained earlier. We have just shown that Rff(τ) ⇔ Eff(ω). That is

$$ R_{ff}(\tau) = \frac{1}{2\pi}\int_{-\infty}^{\infty} E_{ff}(\omega)\,e^{i\omega\tau}\,\mathrm{d}\omega \tag{5.26} $$

where τ is used by convention. Now set τ = 0. The auto-correlation at τ = 0 is

$$ R_{ff}(0) = \frac{1}{2\pi}\int_{-\infty}^{\infty} E_{ff}(\omega)\,\mathrm{d}\omega = E_{\mathrm{Tot}} \tag{5.27} $$

But this is exactly as expected! Earlier we defined the energy spectral density through

$$ E_{\mathrm{Tot}} = \frac{1}{2\pi}\int_{-\infty}^{\infty} E_{ff}(\omega)\,\mathrm{d}\omega, \tag{5.28} $$

and we know that for a finite energy signal

$$ R_{ff}(0) = \int_{-\infty}^{\infty} |f(t)|^2\,\mathrm{d}t = E_{\mathrm{Tot}}. \tag{5.29} $$


5.11 How is the ESD affected by passing through a system?

If f(t) and g(t) are the input and output of a system with transfer function H(ω), then

$$ G(\omega) = H(\omega)F(\omega). \tag{5.30} $$

But Eff(ω) = |F(ω)|², and so the transfer of ESD is

$$ E_{gg}(\omega) = |H(\omega)|^2|F(\omega)|^2 = |H(\omega)|^2 E_{ff}(\omega) \tag{5.31} $$

5.12 Cross-correlation

The cross-correlation describes the similarity between two different signals:

$$ R_{fg}(\tau) = \int_{-\infty}^{\infty} f^*(t)g(t+\tau)\,\mathrm{d}t \tag{5.32} $$

5.12.1 Basic properties

1. Symmetries. It is straightforward to show that R*fg(τ) = Rgf(−τ).

2. Independent signals. The auto-correlation of even white noise has a non-zero value at τ = 0. This is not the case for the cross-correlation. If Rfg(τ) = 0 for all τ, the signals f(t) and g(t) have no similarity or dependence on one another.

5.13 ♣ Example

[Q] Determine the cross-correlation Rfg(τ) of the signals f(t) and g(t) shown.


The non-zero ranges are f(t) = t/2 for 0 ≤ t ≤ 2 and g(t) = 1 − t² for −1 ≤ t ≤ 1.


[A] It will be obvious that the cross-correlation will have a maximum at τ somewhat more negative than −1. But to consider in detail, first plot f(t) and g(t+τ) against t. Function g(t) = 1 − t², so that g(t+τ) = 1 − (t+τ)², and for any τ, g(t+τ) is non-zero between t = (−1−τ) and t = (1−τ).

Range 1: The first range of interest is for (1−τ) ≤ 0 — that is, for τ ≥ 1 — and it is obvious that Rfg(τ) = 0.

Range 2: This is for 0 < (1−τ) < 2 — that is, −1 ≤ τ ≤ 1. Then

$$ R_{fg}(\tau) = \int_{0}^{1-\tau} \left(\frac{t}{2}\right)\left(1-(t+\tau)^2\right)\mathrm{d}t $$

Range 3: This is for 0 < (−1−τ) < 2 — that is, −3 ≤ τ ≤ −1. In this range

$$ R_{fg}(\tau) = \int_{-1-\tau}^{2} \left(\frac{t}{2}\right)\left(1-(t+\tau)^2\right)\mathrm{d}t $$

Range 4: This is valid for −1−τ ≥ 2, or τ ≤ −3, and the cross-correlation is again zero: Rfg(τ) = 0.


We could bash out the integrals for Ranges 2 and 3, but let’s cheat by setting up the functions numerically in Matlab and using the xcorr function. The resulting Rfg(τ) shows that the non-zero range is correct, and that there is a maximum at τ ≈ −1.37 (yes, somewhat more negative than −1).

We can overlay the functions at the position that gives the maximum by setting the upper “end” of the parabola at 1−τ ≈ 1−(−1.37) = 2.37. Plotting the functions makes us realise this is a sensible outcome.
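If Matlab is not to hand, the same numerical cheat works in NumPy, with `np.correlate` playing the role of xcorr. This sketch is not part of the notes; the grid spacing and the sampling window are illustrative choices.

```python
import numpy as np

# Numerical R_fg(tau) = integral f(t) g(t+tau) dt for the example signals.
# np.correlate(g, f) computes sum_n g[n+k] f[n], i.e. the lag tau = k*dt.
dt = 1e-3
t = np.arange(-4.0, 6.0, dt)
f = np.where((t >= 0) & (t <= 2), t / 2, 0.0)        # f(t) = t/2 on [0, 2]
g = np.where(np.abs(t) <= 1, 1 - t**2, 0.0)          # g(t) = 1 - t^2 on [-1, 1]

R = np.correlate(g, f, mode="full") * dt
lags = (np.arange(len(R)) - (len(t) - 1)) * dt

tau_max = lags[np.argmax(R)]
print(round(tau_max, 2))        # maximum near tau ≈ -1.37, as in the notes
```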


2D Application of cross-correlation: visual template tracking

It is obvious enough that cross-correlation is useful for detecting occurrences of a “model” signal f(t) in another signal g(t). This is a 2D example where the model signal f(x, y) is the back view of a footballer, and the test signals g(x, y) are images from a match. The cross-correlation is shown in the middle.

Figure 5.6:


5.14 Cross-Energy Spectral Density

Earlier we derived the Wiener-Khinchin result for the cross-correlation, and then specialized it for the auto-correlation. For finite energy signals, the Wiener-Khinchin Theorem indicates that the FT of the cross-correlation is the cross-ESD:

$$ \mathrm{FT}[R_{fg}(\tau)] = F^*(\omega)\,G(\omega) = E_{fg}(\omega) \tag{5.33} $$

5.15 Finite Power Signals

We turn now to consider finite power signals — those for which the integral ∫_{−∞}^{∞} |f(t)|²dt is infinite.

5.15.1 Definition of autocorrelation for a finite power signal

We define the autocorrelation as the limit of an average power in a time window T that expands to infinity. The autocorrelation of a finite power signal is

$$ R_{ff}(\tau) = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} f^*(t)f(t+\tau)\,\mathrm{d}t. \tag{5.34} $$

NB: that T is nothing to do with periodicity!

All periodic signals are finite power (infinite energy) signals, so let’s use f(t) = sin ω0t to motivate our discussion. (It is obvious enough that ∫_{−∞}^{∞} |sin ω0t|²dt is infinite; but we could also note that the average power in sin ω0t is 1/2, so that the energy in one period is T0/2, and in infinite time there are an infinite number of periods.)

By sketching the curve (as in Fig. 5.7) and using the notion of self-similarity, one would expect its auto-correlation to be positive but decreasing for small but increasing τ; then negative as the curves move into anti-phase and become dissimilar in an “organized” way; then to return to being similar. The autocorrelation should have the same period as its parent function, and be large when τ = 0. In short, Rff being proportional to cos(ω0τ) would seem sensible.

Because this particular f (t) is periodic, to work out the auto-correlation we need only


Figure 5.7: Power signal sinω0t used to think about the autocorrelation of power signals.

make the infinite time window one period long! That is,

$$ R_{ff}(\tau) = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} \sin(\omega_0 t)\sin(\omega_0(t+\tau))\,\mathrm{d}t $$
$$ \to \frac{1}{2(T_0/2)}\int_{-T_0/2}^{T_0/2} \sin(\omega_0 t)\sin(\omega_0(t+\tau))\,\mathrm{d}t $$
$$ = \frac{\omega_0}{2\pi}\int_{-\pi/\omega_0}^{\pi/\omega_0} \sin(\omega_0 t)\sin(\omega_0(t+\tau))\,\mathrm{d}t $$
$$ = \frac{1}{2\pi}\int_{-\pi}^{\pi} \sin(p)\sin(p+\omega_0\tau)\,\mathrm{d}p $$
$$ = \frac{1}{2\pi}\int_{-\pi}^{\pi} \left[\sin^2(p)\cos(\omega_0\tau) + \sin(p)\cos(p)\sin(\omega_0\tau)\right]\mathrm{d}p = \frac{1}{2}\cos(\omega_0\tau) $$

So Rf f is indeed proportional to cos(ω0τ) as we expected.
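A finite but long time window approximates the T → ∞ limit well, which gives a direct numerical check of this result. The sketch below is not part of the notes; the window length, sample spacing, tolerance and choice of ω0 are illustrative.

```python
import numpy as np

# Check R_ff(tau) = (1/2) cos(w0 tau) for f(t) = sin(w0 t), approximating
# the infinite-window average by a long (but finite) window.
w0 = 3.0
dt = 1e-3
t = np.arange(-500.0, 500.0, dt)      # 1000 s window >> one period 2*pi/w0

for tau in [0.0, 0.3, 1.0]:
    R = np.mean(np.sin(w0 * t) * np.sin(w0 * (t + tau)))
    assert abs(R - 0.5 * np.cos(w0 * tau)) < 1e-3
print("R_ff(tau) matches cos(w0 tau)/2")
```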

For a finite energy signal, the Fourier Transform of the autocorrelation was the energy spectral density. What is the analogous result now? For any POWER signal, the FT of the autocorrelation function is the POWER Spectral Density:

$$ R_{ff}(\tau) \Leftrightarrow S_{ff}(\omega) \tag{5.35} $$

For our example,

$$ S_{ff}(\omega) = \mathrm{FT}[R_{ff}] = \mathrm{FT}\!\left[\frac{1}{2}\cos(\omega_0\tau)\right] = \frac{\pi}{2}\left[\delta(\omega+\omega_0) + \delta(\omega-\omega_0)\right] \tag{5.36} $$


As this is the power spectral density of sin ω0t, if we integrate over frequency f we should end up with the power:

$$ \int_{-\infty}^{\infty} S_{ff}(\omega)\,\mathrm{d}f = \int_{-\infty}^{\infty} \frac{\pi}{2}\left[\delta(\omega+\omega_0) + \delta(\omega-\omega_0)\right]\mathrm{d}f = \int_{-\infty}^{\infty} \frac{\pi}{2}\left[\delta(\omega+\omega_0) + \delta(\omega-\omega_0)\right]\frac{\mathrm{d}\omega}{2\pi} = \frac{1}{4}[1+1] = \frac{1}{2}, $$

as expected for a sine signal. We can use Fourier Series to conclude that this result must also hold for any periodic function, but you should note that the result applies to any infinite energy, “non-square-integrable” function. The periodic signal was just an example. We will justify this a little more in Topic 6.¹

To finish off, we need only state the analogies to the finite energy formulae, replacing Energy Spectral Density with Power Spectral Density, and replacing Total Energy with Average Power.

The average power is

$$ P_{\mathrm{Ave}} = R_{ff}(0) \tag{5.37} $$

The power spectrum transfers across a system as

$$ S_{gg}(\omega) = |H(\omega)|^2 S_{ff}(\omega) \tag{5.38} $$

This result is proved in the next topic.

5.16 Cross-correlation and power signals

Two power signals can be cross-correlated, using a similar definition:

$$ R_{fg}(\tau) = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} f^*(t)g(t+\tau)\,\mathrm{d}t \tag{5.39} $$
$$ R_{fg}(\tau) \Leftrightarrow S_{fg}(\omega) \tag{5.40} $$

5.17 Input and Output from a system

One very last thought. If one applies a finite power signal to a system, it cannot be converted into a finite energy signal — or vice versa.

¹To really nail it would require us to understand Wiener-Khinchin in too much depth.


5.18 Summary

All signals f(t) are such that |f(t)|² is a power. An energy signal is one where the total energy is finite:

$$ E_{\mathrm{Tot}} = \int_{-\infty}^{\infty} |f(t)|^2\,\mathrm{d}t, \qquad 0 < E_{\mathrm{Tot}} < \infty. $$

An energy signal f(t):

• always has a Fourier transform F(ω)
• always has an energy spectral density (ESD) given by Eff(ω) = |F(ω)|²
• always has an autocorrelation Rff(τ) = ∫_{−∞}^{∞} f*(t)f(t+τ)dt
• always has an ESD which is the FT of the autocorrelation: Rff(τ) ⇔ Eff(ω)
• always has total energy ETot = Rff(0) = (1/2π)∫_{−∞}^{∞} Eff(ω)dω
• always has an ESD which transfers as Egg(ω) = |H(ω)|²Eff(ω)

Two energy signals f, g:

• have a cross-correlation Rfg(τ) = ∫_{−∞}^{∞} f*(t)g(t+τ)dt
• have a cross-ESD Rfg(τ) ⇔ Efg(ω)


A power signal is one where the total energy is infinite, and we consider average power

$$ P_{\mathrm{Ave}} = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} |f(t)|^2\,\mathrm{d}t, \qquad 0 < P_{\mathrm{Ave}} < \infty. $$

Note that ±T defines a time window, and does NOT imply f(t) is periodic!

A power signal f(t):

• may have a Fourier transform F(ω), but don’t rely on it
• may have a power spectral density (PSD) given by Sff(ω) = |F(ω)|², but don’t rely on it
• always has an autocorrelation Rff(τ) = lim_{T→∞}(1/2T)∫_{−T}^{T} f*(t)f(t+τ)dt
• always has a PSD which is the FT of the autocorrelation: Rff(τ) ⇔ Sff(ω)
• always has integrated average power PAve = Rff(0)
• always has a PSD which transfers through a system as Sgg(ω) = |H(ω)|²Sff(ω)

Two power signals f(t), g(t):

• have a cross-correlation Rfg(τ) = lim_{T→∞}(1/2T)∫_{−T}^{T} f*(t)g(t+τ)dt
• have a cross-PSD Rfg(τ) ⇔ Sfg(ω)


Topic 6

Random Processes and Signals

Note: to avoid clutter, in this Topic we will assume that all time signals are real.

Recall our description in Topic 1 of random vs deterministic signals.

A deterministic signal is one that can be described by a function, mapping, or some other recipe or algorithm. If you know t, you can work out f(t). On the other hand, a random signal is determined by some underlying random process. Although its statistical properties might be known (e.g. you might know its mean and variance) you cannot evaluate its value at time t. You might be able to say something about the likelihood of its taking some value at time t, but not more.

Why are we particularly bothered with random signals?

First, what we regard as noise signals are usually random signals. Being able to determine their properties in the frequency domain can be very useful in diminishing their effect on information-bearing signals.

Second, the signals from a random process might be the information-bearing signals themselves. For example, signals generated from random solar activity might be regarded as noise by the satellite communications engineer, but might tell the solar physicist about underlying nuclear processes in the sun.

A difficulty that might immediately occur to you is that most of our analysis so far has involved integration over all time — ∫_{−∞}^{∞} f(t)dt abounds. While this is fine for a deterministic signal f(t), for a random signal with no clean f(t) it implies having to wait for all time, collecting the random signal, before we can work out, say, its autocorrelation. Fortunately there is a way around this difficulty.

6.1 Review of probability

Consider a random variable x. The probability distribution function underlying the values of x is defined by

Px(ξ) = Prob[x ≤ ξ] (6.1)



Key characteristics of a distribution function are (i) that it varies from 0 at −∞ to 1 at ∞, and (ii) that it has a non-negative gradient.

The probability density function (pdf) px(ξ) is

$$ p_x(\xi)\,\mathrm{d}\xi = \mathrm{Prob}[\xi < x \leq \xi + \mathrm{d}\xi] \tag{6.2} $$

or

$$ p_x(\xi) = \frac{\mathrm{d}}{\mathrm{d}\xi}P_x(\xi). \tag{6.3} $$

For all ξ, px(ξ) ≥ 0 and ∫_{−∞}^{∞} px(ξ)dξ = 1.


Figure 6.1: (a) A probability distribution function. (b) A probability density function or pdf.

It is often convenient to write p(x) rather than px(ξ). As a concrete example, a 1-d Gaussian pdf with mean µ and variance σ² is

$$ p(x) = \frac{1}{\sigma\sqrt{2\pi}}\exp\!\left(\frac{-(x-\mu)^2}{2\sigma^2}\right) \tag{6.4} $$

and its underlying probability distribution function is expressible using the error function erf(),

$$ P(x) = \int_{-\infty}^{x} p(X)\,\mathrm{d}X = \frac{1}{2}\left[1 + \mathrm{erf}\!\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right]. \tag{6.5} $$
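Numerically integrating the pdf gives a quick check of this erf-based closed form. The sketch below is not from the notes; the choice of µ, σ, the evaluation point, and the trapezoidal grid are illustrative.

```python
import numpy as np
from math import erf, sqrt

# Compare a direct numerical integral of the Gaussian pdf up to x = 3 with
# the closed form P(x) = (1/2)(1 + erf((x - mu)/(sigma*sqrt(2)))).
mu, sigma = 1.0, 2.0
xs = np.linspace(-20.0, 3.0, 200001)    # lower limit is ~10 sigma below mu
p = np.exp(-(xs - mu)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

dx = xs[1] - xs[0]
P_numeric = float(np.sum((p[:-1] + p[1:]) / 2) * dx)   # trapezoidal rule
P_closed = 0.5 * (1 + erf((3.0 - mu) / (sigma * sqrt(2))))

print(P_numeric, P_closed)              # both ≈ 0.8413 (i.e. Phi(1))
```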

6.2 Random processes and ensembles

The random variable x might be regarded as random in itself, or regarded as some deterministic mapping from the output θ of a random experiment, x ≡ x(θ).

We will be concerned with random processes which are functions of time: x(t) ≡ x(θ, t). When we refer to x(t1) or x(t1, θ) it is important to realize that this is not the unique value of the random variable, but an ensemble of values that could be obtained at time t1 in a set of repeated trials, as sketched in Figure 6.2.


Figure 6.2: Ensembles

Thus x(t1) has an expected value of

$$ E[x(t_1)] = \int_x x(t_1)\,p(x)\,\mathrm{d}x = \int_\theta x(t_1,\theta)\,p(\theta)\,\mathrm{d}\theta. \tag{6.6} $$

This is called an ensemble average.

Just suppose that the ensemble average E[x(t)] was the same for any value of t. It might be re-written as just E[x]. A very reasonable question to ask is whether this is the same as the temporal average in a window that expands to cover all time:

$$ \lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2} x(t)\,\mathrm{d}t. \tag{6.7} $$

We will return to this shortly.

6.3 The ensemble autocorrelationAlthough a signal is random, it is still reasonable to ask how “self-similar” it is through itsautocorrelation. However, just as the signal average can be re-worked as an ensembleaverage, so can the autocorrelation.

The “ensemble” definition of the autocorrelation of a random process involves samplingthe process at times t1 and t2 and forming

R^E_xx(t1, t2) = E[x(t1)x(t2)] .   (6.8)


6.3.1 Stationarity

If it is found that

• the expectation value E[x(t)] is independent of time; and

• the ensemble value for the autocorrelation depends only on the difference between t2 and t1,

then the random process is said to be wide-sense stationary. ("Wide sense" here means in a weak sense.)

For a (wide-sense) stationary process, the ensemble autocorrelation is

R^E_xx(τ) = E[x(t)x(t + τ)] = ∫_{−∞}^{∞} x(t, θ) x(t + τ, θ) p(θ) dθ .   (6.9)

Many engineering processes are stationary.

Notice anything about the ensemble autocorrelation? It has units of power, and the proper comparison now is with the definition of autocorrelation for power signals — those which do not necessarily have a Fourier transform, but do have a Power Spectral Density.

E[x(t)x(t + τ)] is a POWER

So do stationary random processes always give rise to power signals? They must! If the random signal is behaving (statistically) in a certain way at time zero, stationarity indicates it will be behaving in the same way as t → ∞, and so |x(t)|² is non-integrable.

6.3.2 Ergodicity

We now have two viable definitions of the autocorrelation function of a stationary random process. One is the ensemble autocorrelation

R^E_xx(τ) = E[x(t)x(t + τ)] ,   (6.10)


and the other is the temporal autocorrelation, the integral over time

R^T_xx(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x(t) x(t + τ) dt .   (6.11)

Both are perfectly proper things to evaluate. In the first case, you are taking a snapshot at one time of a whole ensemble of processes, and in the second you are looking at the behaviour of one process over all time.

If the results are the same, the process is ergodic. An ergodic random process is a stationary random process whose ensemble autocorrelation is identical to its temporal autocorrelation.
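A quick numerical check of this idea, using an assumed MA(1) process x[k] = w[k] + w[k−1] built from unit-variance white noise (so its true autocorrelation at lag 1 is 1): estimating R(1) across an ensemble and along one long realization should give the same answer.

```python
import numpy as np

rng = np.random.default_rng(1)

def ma1(nsamples, trials):
    # x[k] = w[k] + w[k-1] with unit-variance white w:
    # true autocorrelation R(0) = 2, R(1) = 1, R(|lag| > 1) = 0
    w = rng.standard_normal((trials, nsamples + 1))
    return w[:, 1:] + w[:, :-1]

lag = 1

# Ensemble estimate: average x(t1) x(t1 + lag) over many independent trials.
ens = ma1(lag + 1, trials=200_000)
R_ens = np.mean(ens[:, 0] * ens[:, lag])

# Temporal estimate: average over time along one long realization.
x = ma1(1_000_000, trials=1)[0]
R_tmp = np.mean(x[:-lag] * x[lag:])

# R_ens and R_tmp both approach 1: the two definitions agree, so the
# process behaves ergodically.
```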

6.3.3 Ergodicity exemplified

A village has two pubs. Suppose you wished to know which was the more popular. The ensemble approach would be to go into both on the same evening and count the number of people in each. The temporal approach would be to follow one villager for many evenings and log which (s)he chose to go to.

If you reach the same result, you will have shown that the drinking preferences of villagers are an ergodic process. Of course, humans rarely behave ergodically, but engineering systems do.

6.4 Power Spectral Density

As a power signal, the random process should possess a power spectral density.

The power spectral density S(ω) of a stationary random process is defined exactly as in Topic 5. That is,

Sxx(ω) = FT[Rxx] .   (6.12)

We will be concerned exclusively with ergodic processes, so that the above statement does not give rise to ambiguity.
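In discrete time the statement Sxx(ω) = FT[Rxx] can be checked directly: the FFT of the (circular) autocorrelation of a sampled signal reproduces its periodogram. A sketch, using the circular form so the identity is exact:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4096
x = rng.standard_normal(n)

# Periodogram estimate of the PSD: |X(w_k)|^2 / n on the FFT grid.
psd = np.abs(np.fft.fft(x)) ** 2 / n

# Circular autocorrelation computed directly in the time domain.
Rxx = np.array([np.mean(x * np.roll(x, -k)) for k in range(n)])

# Wiener-Khinchin, discrete circular form: FT of the autocorrelation = PSD.
psd_from_R = np.real(np.fft.fft(Rxx))

max_err = np.max(np.abs(psd - psd_from_R))   # numerically zero
```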


6.5 Descriptors of a random process

We are about at the point where we can discuss how to analyze the response of systems to random processes. However, it is evident that we cannot give "exact" results for such processes in the time domain.

We will be able to make statements about the autocorrelation and power spectral density, and in addition have access to other statistical descriptors, such as the mean and variance.

As reminders, and for completeness, the mean is

µ = E[x] = ∫_θ x(θ) p(θ) dθ .   (6.13)

The variance is

σ² = E[(x − µ)²] = E[x²] − (E[x])² ,   (6.14)

where

E[x²] = ∫_θ x²(θ) p(θ) dθ .   (6.15)

6.5.1 Application to a white noise process

White noise describes a random process whose mean is zero and whose autocorrelation is a delta-function. Can we understand why this is so?

Figure 6.3: White noise: a zero-mean random signal plotted against time (milliseconds).

Page 25: Topic5 Energy&PowerSignals, Correlation&SpectralDensitydwm/Courses/2TF/N56.pdf · 2020-02-10 · 5/2 TOPIC5. ENERGY&POWERSIGNALS, CORRELATION&SPECTRALDENSITY 5.1.1 Aquickcheck ItisworthcheckingthisusingtherelationshipsfoundinTopic1:

6.5. DESCRIPTORS OF A RANDOM PROCESS 6/7

First, because

Rxx(0) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} |x(t)|² dt   (6.16)

it is obvious that Rxx(0) for any non-zero signal is finite and positive.

Now white noise has the property that it is equally likely to take positive or negative values, large or small, from instant to instant. So when you shift the signal by even an infinitesimal amount to find

Rxx(∆τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x(t) x(t + ∆τ) dt   (6.17)

the individual products x(t)x(t + ∆τ) being integrated are equally likely to be positive and negative, large and small. Summing these by integration must give zero. Thus Rxx(τ) = Aδ(τ).

The question now is: what is A? The earlier expression indicates Rxx(0) = E[x²]. But for a process with µ = 0,

σ² = E[x²] − µ² = E[x²] .   (6.18)

So A = σ2.

For zero-mean white noise with variance σ²,

Rxx(τ) = σ²δ(τ) ⇔ Sxx(ω) = σ² .   (6.19)

The power spectral density, which is the Fourier transform of the autocorrelation function, is uniform and has the value of the signal's variance.

A constant power spectral density makes physical sense. As white noise is equally likely to take positive or negative values, large or small, from instant to instant, its rate of change with respect to time can take any value, large or small. The signal needs power at all frequencies to achieve this. Having the power diminish at high frequencies would not achieve this, and having it increase makes no physical sense; ergo, the spectrum must be flat.
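This can be checked numerically. A sketch (sampled white noise with an assumed σ = 1.5, so σ² = 2.25): the lag-zero autocorrelation estimates σ², while any non-zero lag estimates zero.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 1.5
x = sigma * rng.standard_normal(1_000_000)  # zero-mean white noise, var sigma^2

R0 = np.mean(x * x)            # Rxx(0) -> sigma^2 = 2.25
R1 = np.mean(x[:-1] * x[1:])   # Rxx at lag 1 -> 0
R5 = np.mean(x[:-5] * x[5:])   # Rxx at lag 5 -> 0
```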

Page 26: Topic5 Energy&PowerSignals, Correlation&SpectralDensitydwm/Courses/2TF/N56.pdf · 2020-02-10 · 5/2 TOPIC5. ENERGY&POWERSIGNALS, CORRELATION&SPECTRALDENSITY 5.1.1 Aquickcheck ItisworthcheckingthisusingtherelationshipsfoundinTopic1:

6/8 TOPIC 6. RANDOM PROCESSES AND SIGNALS

6.6 Response of system to random signals

The final part of the story is to work out how systems affect the descriptors.

6.6.1 Mean

Figure 6.4: Random input x(t) and random output y(t) of a system with impulse response h(t) and transfer function H(ω). The system, however, is deterministic!

Given a temporal input x(t) the output y(t) is, as ever,

y(t) = x(t) ∗ h(t) = ∫_{−∞}^{∞} x(τ) h(t − τ) dτ .   (6.20)

The expectation value (mean) of the output is

E[y] = E[ ∫_{−∞}^{∞} x(τ) h(t − τ) dτ ]   (6.21)
    = ∫_{θ=−∞}^{∞} ∫_{τ=−∞}^{∞} x(τ) h(t − τ) dτ p(θ) dθ
    = ∫_{τ=−∞}^{∞} [ ∫_{θ=−∞}^{∞} x(τ) p(θ) dθ ] h(t − τ) dτ
    = ∫_{τ=−∞}^{∞} E[x(τ)] h(t − τ) dτ .

But for a stationary function E[x(τ)] is independent of τ — it is the constant E[x]. Taking this outside the integral, and then substituting p = t − τ, we find that

E[y] = E[x] ∫_{−∞}^{∞} h(t − τ) dτ = −E[x] ∫_{∞}^{−∞} h(p) dp = E[x] ∫_{−∞}^{∞} h(p) dp .   (6.22)

For a hint as to how to proceed, we might guess how the mean should transform. As it is an unchanging d.c. value, we might expect E[y] = E[x]H(0) ...

But we know that FT[h] = H(ω), so that

∫_{−∞}^{∞} h(p) e^{−iωp} dp = H(ω)  ⇒  ∫_{−∞}^{∞} h(p) e^{−i0p} dp = H(0) ,   (6.23)

that is, ∫_{−∞}^{∞} h(p) dp = H(0).

For a stationary random input x(t) the mean of the output is

E[y ] = E[x ]H(0) . (6.24)

which also means that the output is stationary.
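A discrete-time sanity check of E[y] = E[x]H(0), using an assumed (hypothetical) FIR impulse response whose d.c. gain H(0) is simply the sum of its taps:

```python
import numpy as np

rng = np.random.default_rng(4)

# A hypothetical FIR system; its d.c. gain is H(0) = sum(h) = 0.5.
h = np.array([0.2, 0.2, 0.1])
H0 = h.sum()

# Stationary random input with mean E[x] = 3.0.
x = 3.0 + rng.standard_normal(500_000)
y = np.convolve(x, h, mode="valid")

mean_y = y.mean()   # predicted: E[y] = E[x] H(0) = 3.0 * 0.5 = 1.5
```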


6.6.2 Variance

To recover the variance of a signal after passing through a system we take a different approach.

First recall that for an ergodic signal, Rxx(0) = E[x²]. As σ² = E[x²] − (E[x])², knowledge of the mean and autocorrelation of x(t) allows the variance of the input to be found.

Now recall the Wiener-Khinchin Theorem for infinite-energy signals. It said: the Fourier Transform of the autocorrelation is the Power Spectral Density,

FT[Rxx(τ)] = Sxx(ω) .   (6.25)

If the random process x(t) is stationary and ergodic, this statement must still hold good. It must hold good for the output y(t) too:

FT[Ryy(τ)] = Syy(ω) .   (6.26)

Hence, as proved at the end of this lecture, for any power signal

The Power Spectral Densities of output and input of a system with transfer function H(ω) are related by

Syy(ω) = |H(ω)|² Sxx(ω) , where, NB, |H(ω)|² = H(ω)H∗(ω).   (6.27)

As

Ryy(τ) = (1/2π) ∫_{−∞}^{∞} Syy(ω) e^{iωτ} dω ,   (6.28)

it follows immediately that

Ryy(0) = (1/2π) ∫_{−∞}^{∞} Syy(ω) dω = (1/2π) ∫_{−∞}^{∞} |H(ω)|² Sxx(ω) dω .   (6.29)

So, if we know the autocorrelation of the input and the transfer function, we can find the autocorrelation and E[y²] of the output. With knowledge of the mean E[y] derived in the previous subsection, we can determine the variance of the output.

Page 28: Topic5 Energy&PowerSignals, Correlation&SpectralDensitydwm/Courses/2TF/N56.pdf · 2020-02-10 · 5/2 TOPIC5. ENERGY&POWERSIGNALS, CORRELATION&SPECTRALDENSITY 5.1.1 Aquickcheck ItisworthcheckingthisusingtherelationshipsfoundinTopic1:

6/10 TOPIC 6. RANDOM PROCESSES AND SIGNALS

6.7 The complete recipe

Here is the recipe, given mean µx and autocorrelation Rxx of the input:

1. use the mean and autocorrelation to find the variance of the input;
2. take the FT of the autocorrelation to find the PSD Sxx of the input;
3. multiply by |H(ω)|² to find the PSD Syy of the output;
4. find the mean of the output, µy = H(0)µx;
5. take the IFT at τ = 0 to find Ryy(0);
6. find the variance σ²y = Ryy(0) − µ²y.
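The recipe can be run end-to-end numerically. A discrete-time sketch (an assumed zero-mean white input of variance 2 and an arbitrary hypothetical FIR system): steps 2 to 6 are carried out on a frequency grid, and the resulting variance is cross-checked against a direct simulation.

```python
import numpy as np

rng = np.random.default_rng(5)

sigma2, mu_x = 2.0, 0.0               # input: zero-mean white noise, variance 2
h = np.array([0.5, 0.3, 0.2, -0.1])   # hypothetical impulse response

# Steps 2-3: the input PSD is flat, Sxx = sigma2; multiply by |H|^2 on a grid.
N = 1024
H = np.fft.fft(h, N)
Syy = sigma2 * np.abs(H) ** 2

# Step 4: mean of the output, mu_y = H(0) mu_x.
mu_y = h.sum() * mu_x

# Steps 5-6: Ryy(0) is the average of Syy over the frequency grid
# (the discrete analogue of (1/2 pi) times the integral of Syy).
Ryy0 = Syy.mean()
var_y = Ryy0 - mu_y ** 2              # predicted output variance

# Cross-check against a direct time-domain simulation.
x = mu_x + np.sqrt(sigma2) * rng.standard_normal(1_000_000)
y = np.convolve(x, h, mode="valid")
sim_var = y.var()
```

For a white input the prediction collapses to var_y = σ² Σ h[k]², which the simulation reproduces.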

6.8 ♣ Example

[Q] White noise with zero mean and variance σ² is input to the circuit in the Figure. Sketch the input's autocorrelation and power spectral density, derive the mean and variance of the output, and sketch the output's autocorrelation and power spectral density.

Figure 6.5: An RC low-pass circuit with input x(t) and output y(t).


[A] The input x(t) is zero-mean white noise, with a variance of σ². Thus

Rxx(τ) = σ²δ(τ) ⇔ Sxx(ω) = FT[σ²δ(τ)] = σ² .   (6.30)

The transfer function of the RC circuit in the Figure is

H(ω) = (1/jωC) / (R + 1/jωC) = 1/(1 + jωRC) .   (6.31)

The mean of the output is

E[y] = E[x]H(0) = 0 × 1 = 0 .   (6.32)

The power spectral density of the output is

Syy(ω) = Sxx(ω) |H(ω)|² = σ² / (1 + ω²(RC)²) .   (6.33)

Variance only ... If we were interested only in the variance of y(t) we might proceed by writing

E[y²] = Ryy(0) = (1/2π) ∫_{−∞}^{∞} Syy(ω) dω = (σ²/2π) ∫_{−∞}^{∞} 1/(1 + ω²(RC)²) dω .   (6.34)

Substituting ωRC = tan θ, so that dω = (1/RC) sec²θ dθ,

E[y²] = (σ²/2π)(1/RC) ∫_{−π/2}^{π/2} (sec²θ/sec²θ) dθ = σ²/2RC .

The variance of y is therefore

σ²y = E[y²] − (E[y])² = σ²/2RC .   (6.35)
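The integral in equation (6.34) can also be checked numerically. A sketch with assumed example values σ² = 4 and RC = 10⁻³ s, approximating the integral by a finite sum over a wide frequency span:

```python
import numpy as np

sigma2, R, C = 4.0, 1.0e3, 1.0e-6   # example values: RC = 1 ms
RC = R * C

# E[y^2] = (1/2 pi) * integral of sigma2 / (1 + (w RC)^2) dw,
# approximated by a Riemann sum over |w RC| <= 2000.
w = np.linspace(-2000.0 / RC, 2000.0 / RC, 400_001)
dw = w[1] - w[0]
Eyy = np.sum(sigma2 / (1.0 + (w * RC) ** 2)) * dw / (2.0 * np.pi)

exact = sigma2 / (2.0 * RC)   # the closed-form answer, 2000.0 here
```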


Autocorrelation function ... But to work out the complete autocorrelation we can use the standard FT pair from HLT and set a = 1:

e^{−a|t|} ⇔ 2a/(a² + ω²)  ⇒  (1/2) e^{−|t|} ⇔ 1/(1 + ω²) .   (6.36)

Now use the parameter scaling or similarity property from Topic 2 with α = 1/RC. It gives

f(t/RC) ⇔ |RC| F(ωRC) ,   (6.37)

so that

(1/2|RC|) e^{−|t/RC|} ⇔ 1/(1 + ω²(RC)²) .   (6.38)

Finally, given that RC > 0,

Ryy(τ) = (σ²/2RC) e^{−|τ|/RC} ⇔ σ²/(1 + ω²(RC)²) = Syy(ω) .   (6.39)

Of course it is a relief to see that Ryy(0) = σ²/2RC, as we found earlier.

Figure 6.6: (a) Autocorrelation and PSD of input x(t): Rxx(τ) is a delta of weight σ² and Sxx(ω) is flat at σ². (b) Autocorrelation and PSD of output y(t): Ryy(τ) decays exponentially from σ²/2RC, and Syy(ω) rolls off from σ².


Can we understand the curves? The power spectral density looks sensible. The circuit was a low-pass filter with 1st-order roll-off. The filter will greatly diminish higher frequencies, but not cut them off completely. Note that the power at zero frequency remains at σ².

The auto-correlation needs a little more thought. We argued that the autocorrelation of white noise was a delta-function because the next instant's value was equally likely to be positive or negative, large or small — in other words there was no restriction on the rate of change of the signal. But the low-pass filter prevents such rapid change, so the next instant's value is more likely to be similar to this instant's. However, the correlation will fade away with time. This is exactly what you see. Moreover, the decay in the autocorrelation has a time constant of RC.
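A simulation supports this picture. The sketch below uses a forward-Euler discretization of RC y′ + y = x (an assumed discretization convention: sampled white noise of PSD σ² has per-sample variance σ²/∆t); the estimated output autocorrelation should start near σ²/2RC and decay by a factor of e over a lag of RC.

```python
import numpy as np

rng = np.random.default_rng(6)

RC, dt, sigma2 = 1.0, 0.01, 1.0
n = 500_000

# Sampled "continuous" white noise of PSD sigma2: per-sample variance sigma2/dt.
x = np.sqrt(sigma2 / dt) * rng.standard_normal(n)

# Forward-Euler discretization of RC y' + y = x.
y = np.empty(n)
y[0] = 0.0
a = 1.0 - dt / RC
for k in range(1, n):
    y[k] = a * y[k - 1] + (dt / RC) * x[k]

def Ryy(lag):
    return np.mean(y[:n - lag] * y[lag:])

R0 = Ryy(0)              # theory: sigma2 / 2RC = 0.5
Re = Ryy(int(RC / dt))   # theory: 0.5 / e, about 0.184, at lag tau = RC
```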

6.9 ♣ Example

[Q] Each pulse in a continuous binary pulse train has a fixed duration, T, but takes the value 0 or 1 with equal probability (P = 1/2), independently of the previous pulse. An example of part of such a train is sketched below.

Figure 6.7: Part of a binary pulse train taking values 0 and 1.

1. If xi is the random variable representing the height of pulse i, calculate E[xi], E[xi²], and E[xixj] for i ≠ j.

2. Find and sketch the auto-correlation function Rxx(τ) of the pulse train.

3. Hence find and sketch the power spectral density Sxx(ω) of the pulse train.

[A]

1. The pulse heights xi are 0 and 1, each with a probability of Pi = 1/2.

E[xi] = Σ_i xi Pi = 0 × 1/2 + 1 × 1/2 = 1/2   (6.40)

E[xi²] = Σ_i xi² Pi = 0² × 1/2 + 1² × 1/2 = 1/2   (6.41)

Each pulse value xi is independent of any other, so

E[xixj] = E[xi] × E[xj] = 1/2 × 1/2 = 1/4 .   (6.42)


Figure 6.8: Repeated trials of the pulse train, each sampled at times t and t + τ.

2. To find the ensemble average, take the product x(t)x(t + τ) and average over many random trials or experiments:

R^E_xx = ∫ x(t, θ) x(t + τ, θ) p(θ) dθ   (6.43)
    = (0 × 0) p(x(t) = 0, x(t + τ) = 0) + (0 × 1) p(x(t) = 0, x(t + τ) = 1)
    + (1 × 0) p(x(t) = 1, x(t + τ) = 0) + (1 × 1) p(x(t) = 1, x(t + τ) = 1)
    = p(x(t) = 1, x(t + τ) = 1) .

When τ > T there is always an independent transition during the interval τ, so that

R^E_xx = p(x(t) = 1, x(t + τ) = 1)   (6.44)
    = p(x(t) = 1) × p(x(t + τ) = 1)
    = 1/2 × 1/2 = 1/4 .


Figure 6.9: The autocorrelation Rxx(τ): 1/2 at τ = 0, falling linearly to 1/4 at τ = ±T, and constant at 1/4 beyond.

When 0 ≤ τ ≤ T,

R^E_xx = p(x(t) = 1, x(t + τ) = 1)   (6.45)
    = p(x(t) = 1, no ↓ transition during following τ)
    = p(x(t) = 1) × p(no ↓ transition during following τ)
    = p(x(t) = 1) × (1 − p(↓ transition during following τ))
    = (1/2) (1 − (1/2) p(ANY transition during following τ))
    = (1/2) (1 − (1/2)(τ/T)) .

The last step multiplies the frequency of transitions, 1/T, by the time interval τ to get the probability of a transition.

Finally we use the even symmetry property to complete the function:

Rxx(τ) = 1/4 + (1/4) Λ_T(τ) ,   (6.46)

where Λ_T(τ) is a triangle of unit height and half-width T.

3. The FT of the triangle of half-width T (Lec 2) is T sin²(ωT/2)/(ωT/2)², and the FT of unity (Lec 3) is FT[1] = 2πδ(ω), so that the power spectral density is

Sxx(ω) = FT[Rxx] = (T/4) sin²(ωT/2)/(ωT/2)² + (π/2) δ(ω) .   (6.47)


Figure 6.10: The power spectral density Sxx(ω) of the pulse train.
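The triangular autocorrelation of part 2 is easy to confirm by simulation. A sketch (assumed pulse duration of T = 50 samples): Rxx should be 1/2 at zero lag, 3/8 at lag T/2, and 1/4 for lags of T and beyond.

```python
import numpy as np

rng = np.random.default_rng(7)

T = 50                                  # pulse duration in samples
bits = rng.integers(0, 2, 100_000)      # each pulse is 0 or 1 with P = 1/2
x = np.repeat(bits, T).astype(float)    # the binary pulse train

def Rxx(lag):
    return np.mean(x[:x.size - lag] * x[lag:])

# Theory: Rxx(tau) = 1/4 + (1/4) * (unit triangle of half-width T), so
# Rxx(0) = 1/2, Rxx(T/2) = 3/8, and Rxx(tau >= T) = 1/4.
R_0, R_half, R_T, R_far = Rxx(0), Rxx(T // 2), Rxx(T), Rxx(4 * T)
```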

6.10 Appendix: A proof of the transfer of the Power Spectrum

Given power signals x(t) and y(t) we must derive the result Syy(ω) = |H(ω)|²Sxx(ω) without assuming the existence of X(ω) and Y(ω).

Start by writing the autocorrelation of the output:

Ryy(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} y(t) y(t + τ) dt .   (6.48)

As the output y(t) is the convolution of the input with the impulse response function,

y(t) = ∫_{−∞}^{∞} x(t − p) h(p) dp ,  y(t + τ) = ∫_{−∞}^{∞} x(t + τ − q) h(q) dq .   (6.49)

Inserting these into Ryy and changing the order of integration,

Ryy(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} [ ∫_{p=−∞}^{∞} x(t − p) h(p) dp ] [ ∫_{q=−∞}^{∞} x(t + τ − q) h(q) dq ] dt   (6.50)
    = ∫_{p=−∞}^{∞} ∫_{q=−∞}^{∞} [ lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t − p) x(t + τ − q) dt ] h(q) h(p) dq dp .


Change the variable from t to t′ = t − p:

Ryy(τ) = ∫_{p=−∞}^{∞} ∫_{q=−∞}^{∞} [ lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t′) x(t′ + τ + p − q) dt′ ] h(q) h(p) dq dp   (6.51)
    = ∫_{p=−∞}^{∞} { ∫_{q=−∞}^{∞} Rxx(τ + p − q) h(q) dq } h(p) dp
    = ∫_{p=−∞}^{∞} [Rxx ∗ h](τ + p) h(p) dp .

The quantity in braces should be read as the convolution of Rxx and h evaluated at (τ + p). Now replace p by −p′:

Ryy(τ) = −∫_{p′=∞}^{−∞} [Rxx ∗ h](τ − p′) h(−p′) dp′ = [Rxx ∗ h](τ) ∗ h(−τ) .   (6.52)

Taking Fourier Transforms of both sides we find

Syy(ω) = FT[Ryy(τ)]   (6.53)
    = FT[Rxx ∗ h] FT[h(−τ)]
    = FT[Rxx(τ)] FT[h(τ)] FT[h(−τ)]
    = Sxx(ω) H(ω) H∗(ω) = |H(ω)|² Sxx(ω) .