Chapter 3: Probability, Random Variables, Random Processes
A First Course in Digital Communications, Ha H. Nguyen and E. Shwedyk
February 2009
Introduction
The main objective of a communication system is the transfer of information over a channel.

A message signal is best modeled by a random signal: any signal that conveys information must have some uncertainty in it, otherwise its transmission is of no interest.

Two types of imperfections in a communication channel:
Deterministic imperfections, such as linear and nonlinear distortions, inter-symbol interference, etc.
Nondeterministic imperfections, such as the addition of noise, interference, multipath fading, etc.

We are concerned with the methods used to describe and characterize a random signal, generally referred to as a random process (also commonly called a stochastic process).

In essence, a random process is a random variable evolving in time.
Sample Space and Probability
Random experiment: its outcome, for some reason, cannot be predicted with certainty. Examples: throwing a die, flipping a coin, and drawing a card from a deck.

Sample space: the set of all possible outcomes, denoted by $\Omega$. Outcomes are denoted by $\omega$'s, and each $\omega$ lies in $\Omega$, i.e., $\omega \in \Omega$. A sample space can be discrete or continuous.

Events are subsets of the sample space for which measures of their occurrences, called probabilities, can be defined or determined.
Three Axioms of Probability
For a discrete sample space $\Omega$, define a probability measure $P$ on $\Omega$ as a set function that assigns nonnegative values to all events, denoted by $E$, in $\Omega$ such that the following conditions are satisfied:

Axiom 1: $0 \le P(E) \le 1$ for all $E \subseteq \Omega$ (on a percentage scale, probability ranges from 0 to 100%; despite popular sports lore, it is impossible to give more than 100%).

Axiom 2: $P(\Omega) = 1$ (when an experiment is conducted, there has to be an outcome).

Axiom 3: For mutually exclusive events $E_1, E_2, E_3, \ldots$ we have $P\left(\bigcup_{i=1}^{\infty} E_i\right) = \sum_{i=1}^{\infty} P(E_i)$.

(The events $E_1, E_2, E_3, \ldots$ are mutually exclusive if $E_i \cap E_j = \varnothing$ for all $i \ne j$, where $\varnothing$ is the null set.)
Important Properties of the Probability Measure
1. $P(E^c) = 1 - P(E)$, where $E^c$ denotes the complement of $E$. This property implies that $P(E^c) + P(E) = 1$, i.e., something has to happen.
2. $P(\varnothing) = 0$ (again, something has to happen).
3. $P(E_1 \cup E_2) = P(E_1) + P(E_2) - P(E_1 \cap E_2)$. Note that if two events $E_1$ and $E_2$ are mutually exclusive then $P(E_1 \cup E_2) = P(E_1) + P(E_2)$; otherwise the nonzero common probability $P(E_1 \cap E_2)$ needs to be subtracted off.
4. If $E_1 \subseteq E_2$ then $P(E_1) \le P(E_2)$. This says that if event $E_1$ is contained in $E_2$, then occurrence of $E_1$ implies that $E_2$ has occurred, but the converse is not true.
Conditional Probability
We observe or are told that event $E_1$ has occurred but are actually interested in event $E_2$: knowledge that $E_1$ has occurred changes the probability of $E_2$ occurring.

If it was $P(E_2)$ before, it now becomes $P(E_2|E_1)$, the probability of $E_2$ occurring given that event $E_1$ has occurred.

This conditional probability is given by
$$P(E_2|E_1) = \begin{cases} \dfrac{P(E_2 \cap E_1)}{P(E_1)}, & \text{if } P(E_1) \ne 0, \\ 0, & \text{otherwise}. \end{cases}$$

If $P(E_2|E_1) = P(E_2)$, or equivalently $P(E_2 \cap E_1) = P(E_1)P(E_2)$, then $E_1$ and $E_2$ are said to be statistically independent.

Bayes' rule:
$$P(E_2|E_1) = \frac{P(E_1|E_2)P(E_2)}{P(E_1)}.$$
Total Probability Theorem
The events $\{E_i\}_{i=1}^{n}$ partition the sample space $\Omega$ if:

(i) $\bigcup_{i=1}^{n} E_i = \Omega$;
(ii) $E_i \cap E_j = \varnothing$ for all $1 \le i, j \le n$ and $i \ne j$.

If for an event $A$ we have the conditional probabilities $\{P(A|E_i)\}_{i=1}^{n}$, then $P(A)$ can be obtained as
$$P(A) = \sum_{i=1}^{n} P(E_i)P(A|E_i).$$

Bayes' rule:
$$P(E_i|A) = \frac{P(A|E_i)P(E_i)}{P(A)} = \frac{P(A|E_i)P(E_i)}{\sum_{j=1}^{n} P(A|E_j)P(E_j)}.$$
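As a quick numerical illustration of the total probability theorem and Bayes' rule, the following Python sketch uses a hypothetical two-event partition (the priors and conditional probabilities below are made-up values, not from the text):

```python
# Total probability and Bayes' rule for a hypothetical partition {E1, E2}.
P_E = [0.6, 0.4]             # assumed priors P(E1), P(E2); they sum to 1
P_A_given_E = [0.9, 0.2]     # assumed conditionals P(A|E1), P(A|E2)

# Total probability theorem: P(A) = sum_i P(Ei) P(A|Ei)
P_A = sum(p_e * p_ae for p_e, p_ae in zip(P_E, P_A_given_E))

# Bayes' rule: P(Ei|A) = P(A|Ei) P(Ei) / P(A)
posteriors = [p_ae * p_e / P_A for p_e, p_ae in zip(P_E, P_A_given_E)]

print(f"P(A) = {P_A:.3f}")                              # 0.62
print("P(Ei|A) =", [round(p, 3) for p in posteriors])   # [0.871, 0.129], sums to 1
```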
Random Variables
(Figure: a random variable as a mapping; outcomes $\omega_1, \omega_2, \omega_3, \omega_4$ in the sample space $\Omega$ are mapped to the real numbers $\mathbf{x}(\omega_1), \mathbf{x}(\omega_2), \mathbf{x}(\omega_3), \mathbf{x}(\omega_4)$ on the real line $R$.)
A random variable is a mapping from the sample space $\Omega$ to the set of real numbers.

We shall denote random variables by boldface, i.e., $\mathbf{x}$, $\mathbf{y}$, etc., while individual or specific values of the mapping $\mathbf{x}$ are denoted by $\mathbf{x}(\omega)$.
Cumulative Distribution Function (cdf)
The cdf gives a complete description of the random variable. It is defined as:
$$F_{\mathbf{x}}(x) = P(\omega \in \Omega : \mathbf{x}(\omega) \le x) = P(\mathbf{x} \le x).$$

The cdf has the following properties:
1. $0 \le F_{\mathbf{x}}(x) \le 1$ (this follows from Axiom 1 of the probability measure).
2. $F_{\mathbf{x}}(x)$ is nondecreasing: $F_{\mathbf{x}}(x_1) \le F_{\mathbf{x}}(x_2)$ if $x_1 \le x_2$ (this is because the event $\{\mathbf{x}(\omega) \le x_1\}$ is contained in the event $\{\mathbf{x}(\omega) \le x_2\}$).
3. $F_{\mathbf{x}}(-\infty) = 0$ and $F_{\mathbf{x}}(+\infty) = 1$ ($\{\mathbf{x}(\omega) \le -\infty\}$ is the empty set, hence an impossible event, while $\{\mathbf{x}(\omega) \le \infty\}$ is the whole sample space, i.e., a certain event).
4. $P(a < \mathbf{x} \le b) = F_{\mathbf{x}}(b) - F_{\mathbf{x}}(a)$.
Typical Plots of cdf I
A random variable can be discrete, continuous, or mixed.
(Figure (a): a typical cdf $F_{\mathbf{x}}(x)$ versus $x$, ranging from 0 to 1.)
Typical Plots of cdf II
(Figures (b) and (c): two more typical cdfs $F_{\mathbf{x}}(x)$ versus $x$, each ranging from 0 to 1.)
Probability Density Function (pdf)
The pdf is defined as the derivative of the cdf:
$$f_{\mathbf{x}}(x) = \frac{\mathrm{d}F_{\mathbf{x}}(x)}{\mathrm{d}x}.$$

It follows that:
$$P(x_1 < \mathbf{x} \le x_2) = P(\mathbf{x} \le x_2) - P(\mathbf{x} \le x_1) = F_{\mathbf{x}}(x_2) - F_{\mathbf{x}}(x_1) = \int_{x_1}^{x_2} f_{\mathbf{x}}(x)\,\mathrm{d}x.$$

Basic properties of the pdf:
1. $f_{\mathbf{x}}(x) \ge 0$.
2. $\int_{-\infty}^{\infty} f_{\mathbf{x}}(x)\,\mathrm{d}x = 1$.
3. In general, $P(\mathbf{x} \in A) = \int_{A} f_{\mathbf{x}}(x)\,\mathrm{d}x$.

For discrete random variables, it is more common to define the probability mass function (pmf): $p_i = P(\mathbf{x} = x_i)$. Note that, for all $i$, one has $p_i \ge 0$ and $\sum_i p_i = 1$.
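The relation $P(x_1 < \mathbf{x} \le x_2) = F_{\mathbf{x}}(x_2) - F_{\mathbf{x}}(x_1) = \int_{x_1}^{x_2} f_{\mathbf{x}}(x)\,\mathrm{d}x$ is easy to check numerically. The sketch below uses a standard Gaussian density purely as an illustrative choice (this density is introduced formally a few slides later):

```python
# Check that the cdf difference equals the integral of the pdf over (x1, x2].
import math
import numpy as np

def gaussian_pdf(x, mu=0.0, sigma=1.0):
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

def gaussian_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

x1, x2 = -1.0, 2.0
grid = np.linspace(x1, x2, 10_001)
pdf_vals = gaussian_pdf(grid)

# trapezoidal approximation of the integral of f_x over (x1, x2]
prob_from_pdf = np.sum(0.5 * (pdf_vals[1:] + pdf_vals[:-1]) * np.diff(grid))
prob_from_cdf = gaussian_cdf(x2) - gaussian_cdf(x1)   # F_x(x2) - F_x(x1)

print(f"integral of pdf: {prob_from_pdf:.6f}")        # both print ~0.8186
print(f"cdf difference : {prob_from_cdf:.6f}")
```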
Bernoulli Random Variable
(Figure: Bernoulli pdf $f_{\mathbf{x}}(x)$ with masses $1-p$ at $x = 0$ and $p$ at $x = 1$, and the corresponding staircase cdf $F_{\mathbf{x}}(x)$.)
A discrete random variable that takes two values, 1 and 0, with probabilities $p$ and $1 - p$.

A good model for a binary data source whose output is 1 or 0.

Can also be used to model channel errors.
Binomial Random Variable
(Figure: binomial pmf $f_{\mathbf{x}}(x)$ for a representative $n$ and $p$, plotted versus $x$.)
A discrete random variable that gives the number of 1s in a sequence of $n$ independent Bernoulli trials:
$$f_{\mathbf{x}}(x) = \sum_{k=0}^{n} \binom{n}{k} p^k (1-p)^{n-k}\,\delta(x-k), \quad \text{where } \binom{n}{k} = \frac{n!}{k!(n-k)!}.$$
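A short Python sketch of the binomial pmf; the values $n = 8$ and $p = 0.3$ are illustrative choices, not from the slides:

```python
# Evaluate the binomial pmf P(x = k) = C(n, k) p^k (1-p)^(n-k) and sanity-check it.
from math import comb

def binomial_pmf(k, n, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

n, p = 8, 0.3
pmf = [binomial_pmf(k, n, p) for k in range(n + 1)]

print([round(v, 4) for v in pmf])
print("sum of pmf =", round(sum(pmf), 12))                                # 1.0
print("mean = n*p =", round(sum(k * v for k, v in enumerate(pmf)), 12))   # 2.4
```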
Uniform Random Variable
(Figure: uniform pdf $f_{\mathbf{x}}(x) = \frac{1}{b-a}$ on $[a, b]$ and the corresponding cdf $F_{\mathbf{x}}(x)$, which rises linearly from 0 to 1 over $[a, b]$.)
A continuous random variable that takes values between $a$ and $b$ with equal probabilities over intervals of equal length.

The phase of a received sinusoidal carrier is usually modeled as a uniform random variable between 0 and $2\pi$. Quantization error is also typically modeled as uniform.
Gaussian (or Normal) Random Variable
(Figure: Gaussian pdf $f_{\mathbf{x}}(x)$, a bell-shaped curve centered at $\mu$, and the corresponding cdf $F_{\mathbf{x}}(x)$, which passes through $\tfrac{1}{2}$ at $x = \mu$.)
A continuous random variable whose pdf is:
$$f_{\mathbf{x}}(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right),$$
where $\mu$ and $\sigma^2$ are parameters. Usually denoted as $\mathcal{N}(\mu, \sigma^2)$.

This is the most important and most frequently encountered random variable in communications.
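As a quick empirical check of the $\mathcal{N}(\mu, \sigma^2)$ model, the sketch below draws samples and measures how much probability falls within $\mu \pm k\sigma$; the values $\mu = 1$, $\sigma = 2$ are illustrative choices:

```python
# Fraction of Gaussian samples within mu +/- k*sigma for k = 1, 2, 3.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 2.0
x = rng.normal(mu, sigma, size=1_000_000)

for k in (1, 2, 3):
    frac = np.mean(np.abs(x - mu) <= k * sigma)
    print(f"P(|x - mu| <= {k}*sigma) ~ {frac:.4f}")   # ~0.683, 0.954, 0.997
```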
Functions of A Random Variable
The function $\mathbf{y} = g(\mathbf{x})$ is itself a random variable.

From the definition, the cdf of $\mathbf{y}$ can be written as
$$F_{\mathbf{y}}(y) = P(\omega \in \Omega : g(\mathbf{x}(\omega)) \le y).$$

Assume that for all $y$, the equation $g(x) = y$ has a countable number of solutions and that at each solution point $\mathrm{d}g(x)/\mathrm{d}x$ exists and is nonzero. Then the pdf of $\mathbf{y} = g(\mathbf{x})$ is:
$$f_{\mathbf{y}}(y) = \sum_i \frac{f_{\mathbf{x}}(x_i)}{\left|\dfrac{\mathrm{d}g(x)}{\mathrm{d}x}\right|_{x=x_i}},$$
where $\{x_i\}$ are the solutions of $g(x) = y$.

A linear function of a Gaussian random variable is itself a Gaussian random variable.
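The transformation formula can be verified by Monte Carlo simulation. The sketch below uses $\mathbf{y} = g(\mathbf{x}) = \mathbf{x}^2$ with $\mathbf{x} \sim \mathcal{N}(0, 1)$ (our illustrative choice): $g(x) = y$ has the two solutions $x_{1,2} = \pm\sqrt{y}$, each with $|\mathrm{d}g/\mathrm{d}x| = 2\sqrt{y}$, so the formula gives $f_{\mathbf{y}}(y) = f_{\mathbf{x}}(\sqrt{y})/\sqrt{y}$ for $y > 0$:

```python
# Monte Carlo check of the pdf of y = x^2 against the change-of-variable formula.
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(2_000_000)
y = x ** 2

edges = np.linspace(0.05, 4.0, 80)
hist, edges = np.histogram(y, bins=edges, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

f_x = lambda v: np.exp(-v ** 2 / 2) / np.sqrt(2 * np.pi)   # N(0,1) pdf
analytic = f_x(np.sqrt(centers)) / np.sqrt(centers)        # f_y from the formula

# small discrepancy, dominated by binning and Monte Carlo noise
print("max |histogram - formula| =", np.max(np.abs(hist - analytic)).round(3))
```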
Expectation of Random Variables I
Statistical averages, or moments, play an important role in the characterization of the random variable.

The expected value (also called the mean value, or first moment) of the random variable $\mathbf{x}$ is defined as
$$m_{\mathbf{x}} = E\{\mathbf{x}\} \triangleq \int_{-\infty}^{\infty} x f_{\mathbf{x}}(x)\,\mathrm{d}x,$$
where $E$ denotes the statistical expectation operator.

In general, the $n$th moment of $\mathbf{x}$ is defined as
$$E\{\mathbf{x}^n\} \triangleq \int_{-\infty}^{\infty} x^n f_{\mathbf{x}}(x)\,\mathrm{d}x.$$

For $n = 2$, $E\{\mathbf{x}^2\}$ is known as the mean-squared value of the random variable.
Expectation of Random Variables II
The $n$th central moment of the random variable $\mathbf{x}$ is:
$$E\{\mathbf{y}\} = E\{(\mathbf{x} - m_{\mathbf{x}})^n\} = \int_{-\infty}^{\infty} (x - m_{\mathbf{x}})^n f_{\mathbf{x}}(x)\,\mathrm{d}x.$$

When $n = 2$ the central moment is called the variance, commonly denoted as $\sigma_{\mathbf{x}}^2$:
$$\sigma_{\mathbf{x}}^2 = \mathrm{var}(\mathbf{x}) = E\{(\mathbf{x} - m_{\mathbf{x}})^2\} = \int_{-\infty}^{\infty} (x - m_{\mathbf{x}})^2 f_{\mathbf{x}}(x)\,\mathrm{d}x.$$

The variance provides a measure of the variable's randomness.

The mean and variance of a random variable give a partial description of its pdf.
Expectation of Random Variables III
Relationship between the variance and the first and second moments:
$$\sigma_{\mathbf{x}}^2 = E\{\mathbf{x}^2\} - [E\{\mathbf{x}\}]^2 = E\{\mathbf{x}^2\} - m_{\mathbf{x}}^2.$$

An electrical engineering interpretation: the AC power equals the total power minus the DC power.

The square root of the variance is known as the standard deviation, and can be interpreted as the root-mean-squared (RMS) value of the AC component.
The Gaussian Random Variable
(Figure (a): a muscle (EMG) signal, signal amplitude in volts versus time in seconds.)
(Figure (b): histogram of the EMG signal amplitude together with Gaussian and Laplacian pdf fits, $f_{\mathbf{x}}(x)$ in 1/volts versus $x$ in volts.)

$$f_{\mathbf{x}}(x) = \frac{1}{\sqrt{2\pi\sigma_{\mathbf{x}}^2}}\, e^{-\frac{(x - m_{\mathbf{x}})^2}{2\sigma_{\mathbf{x}}^2}} \quad \text{(Gaussian)},$$
$$f_{\mathbf{x}}(x) = \frac{a}{2}\, e^{-a|x|} \quad \text{(Laplacian)}.$$
Gaussian Distribution (Univariate)
(Figure: univariate Gaussian pdfs $f_{\mathbf{x}}(x)$ with $\sigma_{\mathbf{x}} = 1$, $\sigma_{\mathbf{x}} = 2$, and $\sigma_{\mathbf{x}} = 5$.)

Probability of falling within $\pm k\sigma_{\mathbf{x}}$ of the mean (the standard Gaussian values): $k = 1$: $P(m_{\mathbf{x}} - k\sigma_{\mathbf{x}} < \mathbf{x} \le m_{\mathbf{x}} + k\sigma_{\mathbf{x}}) \approx 0.683$; $k = 2$: $\approx 0.955$; $k = 3$: $\approx 0.997$; $k = 4$: $\approx 0.9999$.
Multiple Random Variables I
Often encountered when dealing with combined experiments or repeated trials of a single experiment.

Multiple random variables are basically multidimensional functions defined on a sample space of a combined experiment.

Let $\mathbf{x}$ and $\mathbf{y}$ be two random variables defined on the same sample space $\Omega$. The joint cumulative distribution function is defined as
$$F_{\mathbf{x},\mathbf{y}}(x, y) = P(\mathbf{x} \le x, \mathbf{y} \le y).$$

Similarly, the joint probability density function is:
$$f_{\mathbf{x},\mathbf{y}}(x, y) = \frac{\partial^2 F_{\mathbf{x},\mathbf{y}}(x, y)}{\partial x\,\partial y}.$$
Multiple Random Variables II
When the joint pdf is integrated over one of the variables, one obtains the pdf of the other variable, called the marginal pdf:
$$\int_{-\infty}^{\infty} f_{\mathbf{x},\mathbf{y}}(x, y)\,\mathrm{d}x = f_{\mathbf{y}}(y), \qquad \int_{-\infty}^{\infty} f_{\mathbf{x},\mathbf{y}}(x, y)\,\mathrm{d}y = f_{\mathbf{x}}(x).$$

Note that:
$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{\mathbf{x},\mathbf{y}}(x, y)\,\mathrm{d}x\,\mathrm{d}y = F_{\mathbf{x},\mathbf{y}}(\infty, \infty) = 1,$$
$$F_{\mathbf{x},\mathbf{y}}(-\infty, -\infty) = F_{\mathbf{x},\mathbf{y}}(-\infty, y) = F_{\mathbf{x},\mathbf{y}}(x, -\infty) = 0.$$
Multiple Random Variables III
The conditional pdf of the random variable $\mathbf{y}$, given that the value of the random variable $\mathbf{x}$ is equal to $x$, is defined as
$$f_{\mathbf{y}}(y|x) = \begin{cases} \dfrac{f_{\mathbf{x},\mathbf{y}}(x, y)}{f_{\mathbf{x}}(x)}, & f_{\mathbf{x}}(x) \ne 0, \\ 0, & \text{otherwise.} \end{cases}$$

Two random variables $\mathbf{x}$ and $\mathbf{y}$ are statistically independent if and only if
$$f_{\mathbf{y}}(y|x) = f_{\mathbf{y}}(y) \quad \text{or, equivalently,} \quad f_{\mathbf{x},\mathbf{y}}(x, y) = f_{\mathbf{x}}(x)f_{\mathbf{y}}(y).$$

The joint moments are defined as
$$E\{\mathbf{x}^j \mathbf{y}^k\} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x^j y^k f_{\mathbf{x},\mathbf{y}}(x, y)\,\mathrm{d}x\,\mathrm{d}y.$$
Multiple Random Variables IV
The joint central moment is
$$E\{(\mathbf{x} - m_{\mathbf{x}})^j (\mathbf{y} - m_{\mathbf{y}})^k\} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (x - m_{\mathbf{x}})^j (y - m_{\mathbf{y}})^k f_{\mathbf{x},\mathbf{y}}(x, y)\,\mathrm{d}x\,\mathrm{d}y,$$
where $m_{\mathbf{x}} = E\{\mathbf{x}\}$ and $m_{\mathbf{y}} = E\{\mathbf{y}\}$.

The most important moments are
$$E\{\mathbf{x}\mathbf{y}\} \triangleq \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} xy\, f_{\mathbf{x},\mathbf{y}}(x, y)\,\mathrm{d}x\,\mathrm{d}y \quad \text{(correlation)},$$
$$\mathrm{cov}\{\mathbf{x},\mathbf{y}\} \triangleq E\{(\mathbf{x} - m_{\mathbf{x}})(\mathbf{y} - m_{\mathbf{y}})\} = E\{\mathbf{x}\mathbf{y}\} - m_{\mathbf{x}} m_{\mathbf{y}} \quad \text{(covariance)}.$$
Multiple Random Variables V
Let $\sigma_{\mathbf{x}}^2$ and $\sigma_{\mathbf{y}}^2$ be the variances of $\mathbf{x}$ and $\mathbf{y}$. The covariance normalized w.r.t. $\sigma_{\mathbf{x}}\sigma_{\mathbf{y}}$ is called the correlation coefficient:
$$\rho_{\mathbf{x},\mathbf{y}} = \frac{\mathrm{cov}\{\mathbf{x},\mathbf{y}\}}{\sigma_{\mathbf{x}}\sigma_{\mathbf{y}}}.$$

$\rho_{\mathbf{x},\mathbf{y}}$ indicates the degree of linear dependence between two random variables.

It can be shown that $|\rho_{\mathbf{x},\mathbf{y}}| \le 1$. $\rho_{\mathbf{x},\mathbf{y}} = \pm 1$ implies an increasing/decreasing linear relationship. If $\rho_{\mathbf{x},\mathbf{y}} = 0$, $\mathbf{x}$ and $\mathbf{y}$ are said to be uncorrelated.

It is easy to verify that if $\mathbf{x}$ and $\mathbf{y}$ are independent, then $\rho_{\mathbf{x},\mathbf{y}} = 0$: independence implies lack of correlation.

However, lack of correlation (no linear relationship) does not in general imply statistical independence.
Examples of Uncorrelated Dependent Random Variables
Example 1: Let $\mathbf{x}$ be a discrete random variable that takes on $\{-1, 0, 1\}$ with probabilities $\{\tfrac{1}{4}, \tfrac{1}{2}, \tfrac{1}{4}\}$, respectively. The random variables $\mathbf{y} = \mathbf{x}^3$ and $\mathbf{z} = \mathbf{x}^2$ are uncorrelated but dependent.

Example 2: Let $\mathbf{x}$ be a uniformly distributed random variable over $[-1, 1]$. Then the random variables $\mathbf{y} = \mathbf{x}$ and $\mathbf{z} = \mathbf{x}^2$ are uncorrelated but dependent.

Example 3: Let $\mathbf{x}$ be a Gaussian random variable with zero mean and unit variance (standard normal distribution). The random variables $\mathbf{y} = \mathbf{x}$ and $\mathbf{z} = |\mathbf{x}|$ are uncorrelated but dependent.

Example 4: Let $\mathbf{u}$ and $\mathbf{v}$ be two random variables (discrete or continuous) with the same probability density function. Then $\mathbf{x} = \mathbf{u} - \mathbf{v}$ and $\mathbf{y} = \mathbf{u} + \mathbf{v}$ are uncorrelated dependent random variables.
Example 1
$\mathbf{x} \in \{-1, 0, 1\}$ with probabilities $\{1/4, 1/2, 1/4\}$;
$\mathbf{y} = \mathbf{x}^3 \in \{-1, 0, 1\}$ with probabilities $\{1/4, 1/2, 1/4\}$;
$\mathbf{z} = \mathbf{x}^2 \in \{0, 1\}$ with probabilities $\{1/2, 1/2\}$.

$m_{\mathbf{y}} = (-1)\tfrac{1}{4} + (0)\tfrac{1}{2} + (1)\tfrac{1}{4} = 0$; $\quad m_{\mathbf{z}} = (0)\tfrac{1}{2} + (1)\tfrac{1}{2} = \tfrac{1}{2}$.

The joint pmf (similar to the pdf) of $\mathbf{y}$ and $\mathbf{z}$:
$P(\mathbf{y} = -1, \mathbf{z} = 0) = 0$; $\quad P(\mathbf{y} = -1, \mathbf{z} = 1) = P(\mathbf{x} = -1) = 1/4$;
$P(\mathbf{y} = 0, \mathbf{z} = 0) = P(\mathbf{x} = 0) = 1/2$; $\quad P(\mathbf{y} = 0, \mathbf{z} = 1) = 0$;
$P(\mathbf{y} = 1, \mathbf{z} = 0) = 0$; $\quad P(\mathbf{y} = 1, \mathbf{z} = 1) = P(\mathbf{x} = 1) = 1/4$.

Therefore, $E\{\mathbf{y}\mathbf{z}\} = (-1)(1)\tfrac{1}{4} + (0)(0)\tfrac{1}{2} + (1)(1)\tfrac{1}{4} = 0$, and
$$\mathrm{cov}\{\mathbf{y}, \mathbf{z}\} = E\{\mathbf{y}\mathbf{z}\} - m_{\mathbf{y}} m_{\mathbf{z}} = 0 - (0)\tfrac{1}{2} = 0!$$
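The same conclusion can be reached by simulation; the sketch below (with an arbitrary seed) estimates the covariance and also exposes the dependence between $\mathbf{y}$ and $\mathbf{z}$:

```python
# Simulate Example 1: y = x^3 and z = x^2 are uncorrelated yet dependent.
import numpy as np

rng = np.random.default_rng(3)
x = rng.choice([-1, 0, 1], p=[0.25, 0.5, 0.25], size=1_000_000)
y, z = x ** 3, x ** 2

cov_yz = np.mean(y * z) - y.mean() * z.mean()
print(f"cov(y, z) ~ {cov_yz:+.4f}")                          # ~0 (uncorrelated)
print(f"P(z = 1 | y = 1) = {np.mean(z[y == 1] == 1):.3f}")   # = 1.000
print(f"P(z = 1)         = {np.mean(z == 1):.3f}")           # = 0.5, so y and z are dependent
```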
Jointly Gaussian Distribution (Bivariate)
$$f_{\mathbf{x},\mathbf{y}}(x, y) = \frac{1}{2\pi\sigma_{\mathbf{x}}\sigma_{\mathbf{y}}\sqrt{1 - \rho_{\mathbf{x},\mathbf{y}}^2}} \exp\left\{-\frac{1}{2(1 - \rho_{\mathbf{x},\mathbf{y}}^2)}\left[\frac{(x - m_{\mathbf{x}})^2}{\sigma_{\mathbf{x}}^2} - \frac{2\rho_{\mathbf{x},\mathbf{y}}(x - m_{\mathbf{x}})(y - m_{\mathbf{y}})}{\sigma_{\mathbf{x}}\sigma_{\mathbf{y}}} + \frac{(y - m_{\mathbf{y}})^2}{\sigma_{\mathbf{y}}^2}\right]\right\},$$
where $m_{\mathbf{x}}$, $m_{\mathbf{y}}$, $\sigma_{\mathbf{x}}^2$, $\sigma_{\mathbf{y}}^2$ are the means and variances.

$\rho_{\mathbf{x},\mathbf{y}}$ is indeed the correlation coefficient.

The marginal densities are Gaussian: $f_{\mathbf{x}}(x) \sim \mathcal{N}(m_{\mathbf{x}}, \sigma_{\mathbf{x}}^2)$ and $f_{\mathbf{y}}(y) \sim \mathcal{N}(m_{\mathbf{y}}, \sigma_{\mathbf{y}}^2)$.

When $\rho_{\mathbf{x},\mathbf{y}} = 0$, one has $f_{\mathbf{x},\mathbf{y}}(x, y) = f_{\mathbf{x}}(x)f_{\mathbf{y}}(y)$, i.e., the random variables $\mathbf{x}$ and $\mathbf{y}$ are statistically independent. Thus uncorrelatedness means that jointly Gaussian random variables are statistically independent. The converse is not true.

A weighted sum of two jointly Gaussian random variables is also Gaussian.
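A common way to generate such a pair in simulation is to form weighted sums of independent $\mathcal{N}(0, 1)$ variables; since weighted sums of jointly Gaussian variables are Gaussian, the result is jointly Gaussian with the prescribed $\rho_{\mathbf{x},\mathbf{y}}$. The means, variances, and $\rho$ below are illustrative choices:

```python
# Generate jointly Gaussian (x, y) with prescribed means, variances, and correlation.
import numpy as np

rng = np.random.default_rng(4)
m_x, m_y, sig_x, sig_y, rho = 1.0, -2.0, 1.0, 3.0, 0.7

u = rng.standard_normal(1_000_000)
v = rng.standard_normal(1_000_000)
x = m_x + sig_x * u
y = m_y + sig_y * (rho * u + np.sqrt(1 - rho ** 2) * v)   # weighted sum => Gaussian

print("sample means:", x.mean().round(3), y.mean().round(3))   # ~1, ~-2
print("sample stds :", x.std().round(3), y.std().round(3))     # ~1, ~3
print("sample rho  :", np.corrcoef(x, y)[0, 1].round(3))       # ~0.7
```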
Joint pdf and Contours for $\sigma_{\mathbf{x}} = \sigma_{\mathbf{y}} = 1$ and $\rho_{\mathbf{x},\mathbf{y}} = 0$
(Figure: surface plot of $f_{\mathbf{x},\mathbf{y}}(x, y)$ and its contours, which are circles centered at the origin.)
Joint pdf and Contours for $\sigma_{\mathbf{x}} = \sigma_{\mathbf{y}} = 1$ and $\rho_{\mathbf{x},\mathbf{y}} = 0.3$
(Figure: surface plot of $f_{\mathbf{x},\mathbf{y}}(x, y)$, a cross-section, and its elliptical contours.)
Joint pdf and Contours for $\sigma_{\mathbf{x}} = \sigma_{\mathbf{y}} = 1$ and $\rho_{\mathbf{x},\mathbf{y}} = 0.7$
(Figure: surface plot of $f_{\mathbf{x},\mathbf{y}}(x, y)$, a cross-section, and its elliptical contours.)
Joint pdf and Contours for $\sigma_{\mathbf{x}} = \sigma_{\mathbf{y}} = 1$ and $\rho_{\mathbf{x},\mathbf{y}} = 0.95$
(Figure: surface plot of $f_{\mathbf{x},\mathbf{y}}(x, y)$ and its elliptical contours, which become narrower as $\rho_{\mathbf{x},\mathbf{y}}$ approaches 1.)
Random Processes I
(Figure: a random process as an ensemble of time functions; outcomes $\omega_1, \omega_2, \ldots, \omega_M$ map to sample functions $x_1(t, \omega_1), x_2(t, \omega_2), \ldots, x_M(t, \omega_M)$, and at a fixed time $t_k$ the collection of values $x(t_k, \omega)$ across the ensemble is a real number, i.e., a random variable.)
A mapping from a sample space to a set of time functions.
Examples of Random Processes I
(Figures: (a) a thermal noise waveform and (b) a sinusoid with uniform random phase, each plotted versus time $t$.)
Examples of Random Processes II
(Figures: (c) a Rayleigh fading process and (d) binary random data taking values $+V$ and $-V$ with bit duration $T_b$, each plotted versus time $t$.)
Classification of Random Processes
Based on whether its statistics change with time, a process is non-stationary or stationary.

Different levels of stationarity:

Strictly stationary: the joint pdf of any order is independent of a shift in time.

$N$th-order stationarity: the joint pdf does not depend on the time shift, but only on the time spacings:
$$f_{\mathbf{x}(t_1),\mathbf{x}(t_2),\ldots,\mathbf{x}(t_N)}(x_1, x_2, \ldots, x_N; t_1, t_2, \ldots, t_N) = f_{\mathbf{x}(t_1+t),\mathbf{x}(t_2+t),\ldots,\mathbf{x}(t_N+t)}(x_1, x_2, \ldots, x_N; t_1+t, t_2+t, \ldots, t_N+t).$$

First- and second-order stationarity:
$$f_{\mathbf{x}(t_1)}(x; t_1) = f_{\mathbf{x}(t_1+t)}(x; t_1+t) = f_{\mathbf{x}(t)}(x),$$
$$f_{\mathbf{x}(t_1),\mathbf{x}(t_2)}(x_1, x_2; t_1, t_2) = f_{\mathbf{x}(t_1+t),\mathbf{x}(t_2+t)}(x_1, x_2; t_1+t, t_2+t) = f_{\mathbf{x}(t_1),\mathbf{x}(t_2)}(x_1, x_2; \tau), \quad \tau = t_2 - t_1.$$
Statistical Averages or Joint Moments
Consider $N$ random variables $\mathbf{x}(t_1), \mathbf{x}(t_2), \ldots, \mathbf{x}(t_N)$. The joint moments of these random variables are
$$E\{\mathbf{x}^{k_1}(t_1)\,\mathbf{x}^{k_2}(t_2)\cdots\mathbf{x}^{k_N}(t_N)\} = \int_{x_1=-\infty}^{\infty}\!\!\cdots\!\int_{x_N=-\infty}^{\infty} x_1^{k_1} x_2^{k_2}\cdots x_N^{k_N}\, f_{\mathbf{x}(t_1),\mathbf{x}(t_2),\ldots,\mathbf{x}(t_N)}(x_1, x_2, \ldots, x_N; t_1, t_2, \ldots, t_N)\,\mathrm{d}x_1\,\mathrm{d}x_2\cdots\mathrm{d}x_N,$$
for all integers $k_j \ge 1$ and $N \ge 1$.

We shall only consider the first- and second-order moments, i.e., $E\{\mathbf{x}(t)\}$, $E\{\mathbf{x}^2(t)\}$, and $E\{\mathbf{x}(t_1)\mathbf{x}(t_2)\}$. They are the mean value, the mean-squared value, and the (auto)correlation.
Mean Value or the First Moment
The mean value of the process at time $t$ is
$$m_{\mathbf{x}}(t) = E\{\mathbf{x}(t)\} = \int_{-\infty}^{\infty} x f_{\mathbf{x}(t)}(x; t)\,\mathrm{d}x.$$

The average is across the ensemble, and if the pdf varies with time then the mean value is a (deterministic) function of time.

If the process is stationary then the mean is independent of $t$, i.e., a constant:
$$m_{\mathbf{x}} = E\{\mathbf{x}(t)\} = \int_{-\infty}^{\infty} x f_{\mathbf{x}}(x)\,\mathrm{d}x.$$
Mean-Squared Value or the Second Moment
This is defined as
$$\mathrm{MSV}_{\mathbf{x}}(t) = E\{\mathbf{x}^2(t)\} = \int_{-\infty}^{\infty} x^2 f_{\mathbf{x}(t)}(x; t)\,\mathrm{d}x \quad \text{(non-stationary)},$$
$$\mathrm{MSV}_{\mathbf{x}} = E\{\mathbf{x}^2(t)\} = \int_{-\infty}^{\infty} x^2 f_{\mathbf{x}}(x)\,\mathrm{d}x \quad \text{(stationary)}.$$

The second central moment (the variance) is:
$$\sigma_{\mathbf{x}}^2(t) = E\left\{[\mathbf{x}(t) - m_{\mathbf{x}}(t)]^2\right\} = \mathrm{MSV}_{\mathbf{x}}(t) - m_{\mathbf{x}}^2(t) \quad \text{(non-stationary)},$$
$$\sigma_{\mathbf{x}}^2 = E\left\{[\mathbf{x}(t) - m_{\mathbf{x}}]^2\right\} = \mathrm{MSV}_{\mathbf{x}} - m_{\mathbf{x}}^2 \quad \text{(stationary)}.$$
Correlation
The autocorrelation function completely describes the power spectral density of the random process.

It is defined as the correlation between the two random variables $\mathbf{x}_1 = \mathbf{x}(t_1)$ and $\mathbf{x}_2 = \mathbf{x}(t_2)$:
$$R_{\mathbf{x}}(t_1, t_2) = E\{\mathbf{x}(t_1)\mathbf{x}(t_2)\} = \int_{x_1=-\infty}^{\infty}\int_{x_2=-\infty}^{\infty} x_1 x_2\, f_{\mathbf{x}_1,\mathbf{x}_2}(x_1, x_2; t_1, t_2)\,\mathrm{d}x_1\,\mathrm{d}x_2.$$

For a stationary process:
$$R_{\mathbf{x}}(\tau) = E\{\mathbf{x}(t)\mathbf{x}(t+\tau)\} = \int_{x_1=-\infty}^{\infty}\int_{x_2=-\infty}^{\infty} x_1 x_2\, f_{\mathbf{x}_1,\mathbf{x}_2}(x_1, x_2; \tau)\,\mathrm{d}x_1\,\mathrm{d}x_2.$$

Wide-sense stationary (WSS) process: $E\{\mathbf{x}(t)\} = m_{\mathbf{x}}$ for any $t$, and $R_{\mathbf{x}}(t_1, t_2) = R_{\mathbf{x}}(\tau)$ for $\tau = t_2 - t_1$.
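For a concrete check, consider the random-phase sinusoid $\mathbf{x}(t) = A\cos(2\pi f_0 t + \boldsymbol{\Theta})$ with $\boldsymbol{\Theta}$ uniform on $[0, 2\pi]$ (it reappears as Example 3.4 later). Its autocorrelation is $R_{\mathbf{x}}(\tau) = \frac{A^2}{2}\cos(2\pi f_0\tau)$, and the ensemble-average estimate below should depend only on $\tau$, not on the absolute time $t_1$; the values of $A$, $f_0$, and $\tau$ are illustrative:

```python
# Ensemble estimate of R_x(t1, t1 + tau) for a random-phase sinusoid (a WSS process).
import numpy as np

rng = np.random.default_rng(5)
A, f0 = 2.0, 5.0
theta = rng.uniform(0.0, 2 * np.pi, size=500_000)   # one phase per ensemble member

def R_est(t1, t2):
    return np.mean(A * np.cos(2 * np.pi * f0 * t1 + theta)
                   * A * np.cos(2 * np.pi * f0 * t2 + theta))

tau = 0.03
for t1 in (0.0, 0.4, 1.7):                           # different absolute times, same tau
    print(f"t1 = {t1:.1f}: R ~ {R_est(t1, t1 + tau):+.4f}")
print("theory (A^2/2)cos(2*pi*f0*tau) =",
      round((A ** 2 / 2) * np.cos(2 * np.pi * f0 * tau), 4))
```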
Properties of the Autocorrelation Function
1. $R_{\mathbf{x}}(\tau) = R_{\mathbf{x}}(-\tau)$. It is an even function of $\tau$ because the same set of product values is averaged across the ensemble, regardless of the direction of translation.
2. $|R_{\mathbf{x}}(\tau)| \le R_{\mathbf{x}}(0)$. The maximum always occurs at $\tau = 0$, though there may be other values of $\tau$ for which it is as big. Further, $R_{\mathbf{x}}(0)$ is the mean-squared value of the random process.
3. If for some $\tau_0$ we have $R_{\mathbf{x}}(\tau_0) = R_{\mathbf{x}}(0)$, then for all integers $k$, $R_{\mathbf{x}}(k\tau_0) = R_{\mathbf{x}}(0)$.
4. If $m_{\mathbf{x}} \ne 0$ then $R_{\mathbf{x}}(\tau)$ will have a constant component equal to $m_{\mathbf{x}}^2$.
5. Autocorrelation functions cannot have an arbitrary shape. The restriction on the shape arises from the fact that the Fourier transform of an autocorrelation function must be greater than or equal to zero, i.e., $\mathcal{F}\{R_{\mathbf{x}}(\tau)\} \ge 0$.
Power Spectral Density of a Random Process (I)
Taking the Fourier transform of the random process does not work.
(Figure: a time-domain ensemble of sample functions $x_1(t, \omega_1), x_2(t, \omega_2), \ldots, x_M(t, \omega_M)$ and the corresponding frequency-domain ensemble of magnitude spectra $|X_1(f, \omega_1)|, |X_2(f, \omega_2)|, \ldots, |X_M(f, \omega_M)|$.)
Power Spectral Density of a Random Process (II)
We need to determine how the average power of the process is distributed in frequency.

Define a truncated process:
$$\mathbf{x}_T(t) = \begin{cases} \mathbf{x}(t), & -T \le t \le T, \\ 0, & \text{otherwise.} \end{cases}$$

Consider the Fourier transform of this truncated process:
$$\mathbf{X}_T(f) = \int_{-\infty}^{\infty} \mathbf{x}_T(t)\, e^{-j2\pi f t}\,\mathrm{d}t.$$

Average the energy over the total time $2T$:
$$\mathbf{P} = \frac{1}{2T}\int_{-T}^{T} \mathbf{x}_T^2(t)\,\mathrm{d}t = \frac{1}{2T}\int_{-\infty}^{\infty} |\mathbf{X}_T(f)|^2\,\mathrm{d}f \quad \text{(watts)}.$$
Power Spectral Density of a Random Process (III)
Find the average value of $\mathbf{P}$:
$$E\{\mathbf{P}\} = E\left\{\frac{1}{2T}\int_{-T}^{T} \mathbf{x}_T^2(t)\,\mathrm{d}t\right\} = E\left\{\frac{1}{2T}\int_{-\infty}^{\infty} |\mathbf{X}_T(f)|^2\,\mathrm{d}f\right\}.$$

Take the limit as $T \to \infty$:
$$\lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} E\{\mathbf{x}_T^2(t)\}\,\mathrm{d}t = \lim_{T\to\infty} \frac{1}{2T}\int_{-\infty}^{\infty} E\{|\mathbf{X}_T(f)|^2\}\,\mathrm{d}f.$$

It follows that
$$\mathrm{MSV}_{\mathbf{x}} = \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} E\{\mathbf{x}_T^2(t)\}\,\mathrm{d}t = \int_{-\infty}^{\infty} \lim_{T\to\infty} \frac{E\{|\mathbf{X}_T(f)|^2\}}{2T}\,\mathrm{d}f \quad \text{(watts)}.$$
Power Spectral Density of a Random Process (IV)
Finally,
$$S_{\mathbf{x}}(f) = \lim_{T\to\infty} \frac{E\{|\mathbf{X}_T(f)|^2\}}{2T} \quad \text{(watts/Hz)}$$
is the power spectral density of the process.

It can be shown that the power spectral density and the autocorrelation function are a Fourier transform pair:
$$R_{\mathbf{x}}(\tau) \longleftrightarrow S_{\mathbf{x}}(f) = \int_{\tau=-\infty}^{\infty} R_{\mathbf{x}}(\tau)\, e^{-j2\pi f\tau}\,\mathrm{d}\tau.$$
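A discrete-time analogue of $S_{\mathbf{x}}(f) = \lim_{T\to\infty} E\{|\mathbf{X}_T(f)|^2\}/(2T)$ is to average the periodogram $|X[k]|^2/N$ over many independent realizations. For white noise of variance $\sigma^2$ this average flattens out at the constant level $\sigma^2$, consistent with a flat PSD; the parameters below are illustrative:

```python
# Averaged periodogram of white noise as an estimate of its (flat) power spectral density.
import numpy as np

rng = np.random.default_rng(6)
sigma, N, trials = 1.5, 256, 4000

psd_accum = np.zeros(N)
for _ in range(trials):
    x = rng.normal(0.0, sigma, size=N)     # one realization of the process
    X = np.fft.fft(x)
    psd_accum += np.abs(X) ** 2 / N        # periodogram of this realization

psd_est = psd_accum / trials
print("expected level sigma^2 =", sigma ** 2)                  # 2.25
print("estimated PSD: mean =", psd_est.mean().round(3),
      "  std across bins =", psd_est.std().round(3))           # mean ~2.25, small spread
```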
Time Averaging and Ergodicity
Ergodicity: a process where any member of the ensemble exhibits the same statistical behavior as that of the whole ensemble.

All time averages on a single ensemble member are equal to the corresponding ensemble average:
$$E\{\mathbf{x}^n(t)\} = \int_{-\infty}^{\infty} x^n f_{\mathbf{x}}(x)\,\mathrm{d}x = \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} [x_k(t, \omega_k)]^n\,\mathrm{d}t, \quad \forall\, n, k.$$

For an ergodic process: to measure various statistical averages, it is sufficient to look at only one realization of the process and find the corresponding time average.

For a process to be ergodic it must be stationary. The converse is not true.
Examples of Random Processes
(Example 3.4) $\mathbf{x}(t) = A\cos(2\pi f_0 t + \boldsymbol{\Theta})$, where $\boldsymbol{\Theta}$ is a random variable uniformly distributed on $[0, 2\pi]$. This process is both stationary and ergodic.

(Example 3.5) $\mathbf{x}(t) = \mathbf{x}$, where $\mathbf{x}$ is a random variable uniformly distributed on $[-A, A]$, where $A > 0$. This process is WSS, but not ergodic.

(Example 3.6) $\mathbf{x}(t) = \mathbf{A}\cos(2\pi f_0 t + \boldsymbol{\Theta})$, where $\mathbf{A}$ is a zero-mean random variable with variance $\sigma_{\mathbf{A}}^2$, and $\boldsymbol{\Theta}$ is uniform in $[0, 2\pi]$. Furthermore, $\mathbf{A}$ and $\boldsymbol{\Theta}$ are statistically independent. This process is not ergodic, but strictly stationary.
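The contrast between Examples 3.4 and 3.5 is easy to see numerically: a time average over one realization matches the ensemble average only for the ergodic process. The sketch below uses illustrative values $A = 1$, $f_0 = 10$ Hz and compares mean-squared values:

```python
# Time-averaged mean-squared value of a single realization vs. the ensemble value.
import numpy as np

rng = np.random.default_rng(7)
A, f0 = 1.0, 10.0
t = np.linspace(0.0, 100.0, 200_001)          # long observation window

# Example 3.4: x(t) = A*cos(2*pi*f0*t + Theta), Theta ~ U[0, 2*pi]  -> ergodic
theta = rng.uniform(0.0, 2 * np.pi)
x34 = A * np.cos(2 * np.pi * f0 * t + theta)
print("3.4 time-avg MSV:", np.mean(x34 ** 2).round(4), " ensemble MSV:", A ** 2 / 2)

# Example 3.5: x(t) = x for all t, x ~ U[-A, A]  -> WSS but not ergodic
x_rv = rng.uniform(-A, A)
x35 = np.full_like(t, x_rv)
print("3.5 time-avg MSV:", np.mean(x35 ** 2).round(4), " ensemble MSV:", round(A ** 2 / 3, 4))
```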
Random Processes and LTI Systems
(Figure: a linear, time-invariant (LTI) system with impulse response $h(t)$ and frequency response $H(f)$; the input $\mathbf{x}(t)$ is described by $m_{\mathbf{x}}$, $R_{\mathbf{x}}(\tau)$, $S_{\mathbf{x}}(f)$, the output $\mathbf{y}(t)$ by $m_{\mathbf{y}}$, $R_{\mathbf{y}}(\tau)$, $S_{\mathbf{y}}(f)$, and the input-output cross-correlation is $R_{\mathbf{x},\mathbf{y}}(\tau)$.)

$$m_{\mathbf{y}} = E\{\mathbf{y}(t)\} = E\left\{\int_{-\infty}^{\infty} h(\lambda)\,\mathbf{x}(t - \lambda)\,\mathrm{d}\lambda\right\} = m_{\mathbf{x}} H(0),$$
$$S_{\mathbf{y}}(f) = |H(f)|^2\, S_{\mathbf{x}}(f), \qquad R_{\mathbf{y}}(\tau) = h(\tau) * h(-\tau) * R_{\mathbf{x}}(\tau).$$
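Two of these relations are easy to confirm in a discrete-time simulation with an FIR filter standing in for the LTI system; the 4-tap impulse response, the input mean, and the unit-variance white noise below are all illustrative choices:

```python
# Check m_y = m_x * H(0) and S_y(f) = |H(f)|^2 * S_x(f) for an FIR filter.
import numpy as np

rng = np.random.default_rng(8)
h = np.array([0.5, 0.3, 0.2, 0.1])            # FIR impulse response; H(0) = h.sum() = 1.1
N, segments = 512, 2000

# Mean relation: filter a white sequence that has mean m_x.
m_x = 0.5
x_dc = m_x + rng.standard_normal(200_000)
y_dc = np.convolve(x_dc, h)[: len(x_dc)]
print("m_y ~", y_dc[10:].mean().round(3), " vs m_x*H(0) =", m_x * h.sum())   # ~0.55

# PSD relation: average periodograms of the filtered output; S_x = 1 for unit-variance input.
Sy_accum = np.zeros(N)
for _ in range(segments):
    x = rng.standard_normal(N)
    y = np.convolve(x, h)[:N]
    Sy_accum += np.abs(np.fft.fft(y)) ** 2 / N
Sy_est = Sy_accum / segments
Sy_theory = np.abs(np.fft.fft(h, N)) ** 2     # |H(f)|^2 * S_x(f) with S_x = 1
print("mean |S_y estimate - theory| =", np.mean(np.abs(Sy_est - Sy_theory)).round(3))
```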
Thermal Noise in Communication Systems
A natural noise source is thermal noise, whose amplitude statistics are well modeled as Gaussian with zero mean.

The autocorrelation and PSD are well modeled as:
$$R_{\mathbf{w}}(\tau) = kTG\,\frac{e^{-|\tau|/t_0}}{t_0} \quad \text{(watts)},$$
$$S_{\mathbf{w}}(f) = \frac{2kTG}{1 + (2\pi f t_0)^2} \quad \text{(watts/Hz)},$$
where $k = 1.38 \times 10^{-23}$ joule/K is Boltzmann's constant, $G$ is the conductance of the resistor (mhos), $T$ is the temperature in degrees Kelvin, and $t_0$ is the statistical average of time intervals between collisions of free electrons in the resistor (on the order of $10^{-12}$ sec).
(Figure: (a) the power spectral density $S_{\mathbf{w}}(f)$ in watts/Hz versus $f$ in GHz, and (b) the autocorrelation $R_{\mathbf{w}}(\tau)$ in watts versus $\tau$ in picoseconds, comparing thermal noise with the white-noise idealization $S_{\mathbf{w}}(f) = N_0/2$ and $R_{\mathbf{w}}(\tau) = \frac{N_0}{2}\delta(\tau)$.)
Example
Suppose that a (WSS) white noise process, $\mathbf{x}(t)$, of zero mean and power spectral density $N_0/2$ is applied to the input of the filter.

(a) Find and sketch the power spectral density and autocorrelation function of the random process $\mathbf{y}(t)$ at the output of the filter.

(b) What are the mean and variance of the output process $\mathbf{y}(t)$?

(Figure: the filter is an RL lowpass circuit with input $\mathbf{x}(t)$ and output $\mathbf{y}(t)$: a series inductor $L$ followed by a shunt resistor $R$.)
$$H(f) = \frac{R}{R + j2\pi f L} = \frac{1}{1 + j2\pi f L/R},$$
$$S_{\mathbf{y}}(f) = \frac{N_0}{2}\,\frac{1}{1 + \left(\frac{2\pi L}{R}\right)^2 f^2}, \qquad R_{\mathbf{y}}(\tau) = \frac{N_0 R}{4L}\, e^{-(R/L)|\tau|}.$$
(Figure: sketches of $S_{\mathbf{y}}(f)$ in watts/Hz versus $f$ in Hz, with peak value $N_0/2$ at $f = 0$, and of $R_{\mathbf{y}}(\tau)$ in watts versus $\tau$ in seconds, with peak value $N_0 R/(4L)$ at $\tau = 0$.)
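For part (b), the output mean is $m_{\mathbf{y}} = m_{\mathbf{x}} H(0) = 0$, so the variance equals $R_{\mathbf{y}}(0) = N_0 R/(4L)$, which is also the total area under $S_{\mathbf{y}}(f)$. The sketch below checks this numerically for assumed element values (the values of $N_0$, $R$, and $L$ are ours, not from the text):

```python
# Numerically integrate S_y(f) and compare with the closed form N0*R/(4*L).
import numpy as np

N0, R, L = 2e-8, 50.0, 1e-3                        # assumed: watts/Hz, ohms, henries
f = np.linspace(-1e8, 1e8, 2_000_001)              # Hz; wide enough for the 1/f^2 tails
Sy = (N0 / 2) / (1 + (2 * np.pi * f * L / R) ** 2)

# trapezoidal approximation of the integral of S_y(f) over all frequencies
var_from_psd = np.sum(0.5 * (Sy[1:] + Sy[:-1]) * np.diff(f))   # = R_y(0), since m_y = 0
var_closed_form = N0 * R / (4 * L)

print(f"integral of S_y(f) df = {var_from_psd:.3e} watts")      # ~2.500e-04
print(f"N0*R/(4L)             = {var_closed_form:.3e} watts")   # 2.500e-04
```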