Chapter 3 Pulse Modulation - Website of Professor Po-Ning...
Chapter 3 Pulse Modulation
We migrate from analog modulation (continuous in both time and value) to digital modulation (discrete in both time and value) through pulse modulation (discrete in time but could be continuous in value).
© Po-Ning [email protected] Chapter 3-2
3.1 Pulse Modulation
o Families of pulse modulation
  n Analog pulse modulation
    o A periodic pulse train is used as the carrier (similar to sinusoidal carriers).
    o Some characteristic feature of each pulse, such as amplitude, duration, or position, is varied in a continuous manner in accordance with the sampled message signal.
  n Digital pulse modulation
    o Some characteristic feature of the carrier is varied in a digital manner in accordance with the sampled, digitized message signal.
3.2 Sampling Theorem
$$g_\delta(t) = \sum_{n=-\infty}^{\infty} g(nT_s)\,\delta(t - nT_s)$$

$$G_\delta(f) = \sum_{n=-\infty}^{\infty}\int_{-\infty}^{\infty} g(nT_s)\,\delta(t - nT_s)\exp(-j2\pi ft)\,dt = \sum_{n=-\infty}^{\infty} g(nT_s)\exp(-j2\pi f\, nT_s)$$

o T_s: sampling period
o f_s = 1/T_s: sampling rate
3.2 Sampling Theorem
o Given: $G_\delta(f) = \sum_{n=-\infty}^{\infty} g(nT_s)\exp(-j2\pi f\, nT_s)$

o Claim: $G_\delta(f) = f_s \sum_{m=-\infty}^{\infty} G(f - mf_s)$

In this figure, f_s = 2W.
3.2 Spectrum of Sampled Signal

Let $L(f) = f_s \sum_{m=-\infty}^{\infty} G(f - mf_s)$, and notice that it is periodic with period $f_s$. By Fourier series expansion,
$$L(f) = \sum_{n=-\infty}^{\infty} c_n \exp\!\left(\frac{j2\pi n f}{f_s}\right), \quad \text{where } c_n = \frac{1}{f_s}\int_{-f_s/2}^{f_s/2} L(f)\exp\!\left(-\frac{j2\pi n f}{f_s}\right)df.$$

Proof:
$$c_n = \int_{-f_s/2}^{f_s/2} \sum_{m=-\infty}^{\infty} G(f - mf_s)\exp\!\left(-\frac{j2\pi n f}{f_s}\right)df \qquad (\text{substitute } s = f - mf_s)$$
$$= \sum_{m=-\infty}^{\infty}\int_{-f_s/2 - mf_s}^{f_s/2 - mf_s} G(s)\exp\!\left(-\frac{j2\pi n (s + mf_s)}{f_s}\right)ds$$
$$= \sum_{m=-\infty}^{\infty}\int_{-f_s/2 - mf_s}^{f_s/2 - mf_s} G(s)\exp\!\left(-\frac{j2\pi n s}{f_s}\right)ds$$
$$= \int_{-\infty}^{\infty} G(s)\exp\!\left(-\frac{j2\pi n s}{f_s}\right)ds = g(-nT_s)$$

$$\Rightarrow\; L(f) = \sum_{n=-\infty}^{\infty} g(-nT_s)\exp\!\left(\frac{j2\pi n f}{f_s}\right) = \sum_{m=-\infty}^{\infty} g(mT_s)\exp(-j2\pi m T_s f) = G_\delta(f), \quad \text{where } m = -n.$$

Q.E.D.
3.2 First Important Conclusion from Sampling

o Uniform sampling in the time domain results in a periodic spectrum with a period equal to the sampling rate.

$$g_\delta(t) = \sum_{n=-\infty}^{\infty} g(nT_s)\,\delta(t - nT_s) \;\Rightarrow\; G_\delta(f) = f_s \sum_{m=-\infty}^{\infty} G(f - mf_s)$$

In this figure, f_s = 2W.
3.2 Reconstruction from Sampling

Take $f_s = 2W$. Ideal lowpass filter:
$$G(f) = \frac{1}{2W}\, G_\delta(f) \quad \text{for } |f| \le W.$$
3.2 Aliasing due to Sampling

When $f_s < 2W$, $G(f)$ cannot be reconstructed from the undersampled samples.
3.2 Second Important Conclusion from Sampling

o A band-limited signal of finite energy with bandwidth W can be completely described by its samples taken at a sampling rate f_s ≥ 2W.
  n 2W is commonly referred to as the Nyquist rate.
o How can we reconstruct a band-limited signal from its samples?
$$\begin{aligned}
g(t) &= \int_{-\infty}^{\infty} G(f)\exp(j2\pi ft)\,df \\
&= \int_{-W}^{W} G(f)\exp(j2\pi ft)\,df \\
&= \frac{1}{f_s}\int_{-W}^{W} \sum_{n=-\infty}^{\infty} g(nT_s)\exp(-j2\pi f\, nT_s)\exp(j2\pi ft)\,df \\
&= \frac{1}{f_s}\sum_{n=-\infty}^{\infty} g(nT_s)\int_{-W}^{W} \exp\!\big(j2\pi f(t - nT_s)\big)\,df \\
&= \frac{1}{f_s}\sum_{n=-\infty}^{\infty} g(nT_s)\,\frac{\sin\!\big(2\pi W(t - nT_s)\big)}{\pi (t - nT_s)} \\
&= 2WT_s\sum_{n=-\infty}^{\infty} g(nT_s)\,\mathrm{sinc}\big(2W(t - nT_s)\big)
\end{aligned}$$

$2WT_s\,\mathrm{sinc}[2W(t - nT_s)]$ plays the role of an interpolation function for the samples.
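The interpolation formula above can be checked numerically. The sketch below (names are my own) reconstructs a strictly band-limited test signal, built from two shifted sinc pulses so that only two of its samples are nonzero and the finite sum is exact:

```python
import numpy as np

def sinc_reconstruct(samples, Ts, t):
    # g(t) = sum_n g(nTs) * sinc((t - nTs)/Ts): the interpolation formula,
    # with fs = 2W so that 2WTs = 1 and sinc(2W(t - nTs)) = sinc((t - nTs)/Ts)
    n = np.arange(len(samples))
    return (samples[:, None] * np.sinc((t[None, :] - n[:, None] * Ts) / Ts)).sum(axis=0)

Ts = 0.125                       # fs = 8 Hz
n = np.arange(64)
# Band-limited test signal: a sum of two shifted sinc pulses, so its only
# nonzero samples are g(10 Ts) = 2 and g(15 Ts) = -1.
g = lambda t: 2.0 * np.sinc((t - 10 * Ts) / Ts) - np.sinc((t - 15 * Ts) / Ts)

t = np.linspace(0, 8, 201)
print(np.allclose(sinc_reconstruct(g(n * Ts), Ts, t), g(t)))   # True
```

Note that `np.sinc(x)` is the normalized sinc, $\sin(\pi x)/(\pi x)$, matching the sinc used in these slides.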
$$G(f) = \frac{1}{f_s}\, G_\delta(f) \quad \text{for } |f| \le W,$$
$$G_\delta(f) = L(f) = f_s\sum_{m=-\infty}^{\infty} G(f - mf_s) = \sum_{m=-\infty}^{\infty} g(mT_s)\exp(-j2\pi mT_s f)$$

See Slides 3-4 ~ 3-6.
3.2 Band-Unlimited Signals

o The signals encountered in practice are often not strictly band-limited.
o Hence, there is always "aliasing" after sampling.
o To combat the effects of aliasing, a lowpass anti-aliasing filter is used to attenuate the frequency components outside [-f_s/2, f_s/2].
o In this case, the signal after passing through the anti-aliasing filter is often treated as band-limited with bandwidth f_s/2 (i.e., f_s = 2W). Hence,

$$g(t) = \sum_{n=-\infty}^{\infty} g(nT_s)\,\mathrm{sinc}\!\left[\frac{t - nT_s}{T_s}\right]$$
3.2 Interpolation in terms of Filtering

o Observe that
$$g(t) = \sum_{n=-\infty}^{\infty} g(nT_s)\,\mathrm{sinc}\!\left[\frac{t - nT_s}{T_s}\right]$$
is indeed a convolution between $g_\delta(t)$ and $\mathrm{sinc}(t/T_s)$:

$$\begin{aligned}
g_\delta(t) * \mathrm{sinc}\!\left(\frac{t}{T_s}\right) &= \int_{-\infty}^{\infty} g_\delta(\tau)\,\mathrm{sinc}\!\left(\frac{t-\tau}{T_s}\right)d\tau \\
&= \int_{-\infty}^{\infty} \sum_{n=-\infty}^{\infty} g(nT_s)\,\delta(\tau - nT_s)\,\mathrm{sinc}\!\left(\frac{t-\tau}{T_s}\right)d\tau \\
&= \sum_{n=-\infty}^{\infty} g(nT_s)\int_{-\infty}^{\infty} \delta(\tau - nT_s)\,\mathrm{sinc}\!\left(\frac{t-\tau}{T_s}\right)d\tau \\
&= \sum_{n=-\infty}^{\infty} g(nT_s)\,\mathrm{sinc}\!\left(\frac{t - nT_s}{T_s}\right)
\end{aligned}$$

⇒ Reconstruction filter (interpolation filter): $h(t) = \mathrm{sinc}\!\left(\dfrac{t}{T_s}\right)$
⇒ $H(f) = T_s\,\mathrm{rect}(T_s f)$

[Figure: $g_\delta(t) \to H(f) \to g(t)$, where $H(f)$ is an ideal lowpass filter over $[-f_s/2, f_s/2]$.]
3.2 Physical Realization of Reconstruction Filter
o An ideal lowpass filter is not physically realizable.
o Instead, we can use an anti-aliasing filter of bandwidth W and a sampling rate f_s > 2W. Then the spectrum of the reconstruction filter can be shaped like:

[Figure: signal spectrum with bandwidth W; signal spectrum after sampling with f_s > 2W; the physically realizable reconstruction filter; the ideal filter of bandwidth f_s/2.]

$$g_\delta(t) * h_{\text{realizable}}(t) \;\leftrightarrow\; G_\delta(f)H_{\text{realizable}}(f) \equiv G_\delta(f)H_{\text{ideal}}(f) \;\leftrightarrow\; g_\delta(t) * h_{\text{ideal}}(t)$$
3.3 Pulse-Amplitude Modulation (PAM)

o PAM
  n The amplitude of regularly spaced pulses is varied in proportion to the corresponding sample values of a continuous message signal. Notably, the top of each pulse is maintained flat. This is what distinguishes PAM from natural sampling, in which the message signal is directly multiplied by a periodic train of rectangular pulses.
3.3 Pulse-Amplitude Modulation (PAM)
o The operation of generating a PAM-modulated signal is often referred to as "sample and hold."
o This "sample and hold" process can also be analyzed through the "filtering" technique:

$$s(t) = \sum_{n=-\infty}^{\infty} m(nT_s)\,h(t - nT_s) = m_\delta(t) * h(t),$$
where
$$h(t) = \begin{cases} 1, & 0 < t < T \\ 1/2, & t = 0,\ t = T \\ 0, & \text{otherwise} \end{cases}
\quad\text{and}\quad m_\delta(t) = \sum_{n=-\infty}^{\infty} m(nT_s)\,\delta(t - nT_s).$$
3.3 Pulse-Amplitude Modulation (PAM)
o From the "filtering" standpoint, the spectrum S(f) can be derived as:

$$S(f) = M_\delta(f)\,H(f) = f_s\sum_{k=-\infty}^{\infty} M\!\left(f - \frac{k}{T_s}\right)H(f) = f_s\sum_{k=-\infty}^{\infty} M(f - kf_s)\,H(f)$$

  n M(f) is the message signal's spectrum with bandwidth W (or having passed through an anti-aliasing filter of bandwidth W).
  n f_s ≥ 2W.
3.3 Pulse-Amplitude Modulation (PAM)

$$S(f) = f_s\sum_{k=-\infty}^{\infty} M(f - kf_s)\,H(f) = f_s M(f)H(f) + f_s\sum_{|k|\ge 1} M(f - kf_s)\,H(f)$$

The reconstruction filter removes the $k \ne 0$ images, and the equalizer $\dfrac{1}{H(f)}$ then recovers $M(f)$ (over the range $[-W, W]$ of $M(f)$):

$$f_s M(f)H(f) \;\xrightarrow{\text{Reconstruction Filter}}\; M(f)H(f) \;\xrightarrow{\text{Equalizer } 1/H(f)}\; M(f)$$
3.3 Feasibility of Equalizer Filter

o The distortion of M(f) is due to the factor H(f) in M(f)H(f), where
$$h(t) = \begin{cases} 1, & 0 < t < T \\ 1/2, & t = 0,\ t = T \\ 0, & \text{otherwise} \end{cases}
\quad\text{or}\quad H(f) = T\,\mathrm{sinc}(fT)\exp(-j\pi fT)$$

$$\Rightarrow\; E(f) = \frac{1}{H(f)} = \begin{cases} \dfrac{1}{T\,\mathrm{sinc}(fT)}\exp(j\pi fT), & |f| \le W \\ 0, & \text{otherwise} \end{cases}$$

Question: Is the above E(f) feasible, i.e., realizable?

Note that $\dfrac{1}{T} > \dfrac{1}{T_s} = f_s > 2W.$
$$\tilde{E}(f) = \begin{cases} \dfrac{1}{T\,\mathrm{sinc}(fT)}, & |f| \le W \\ 0, & \text{otherwise} \end{cases}$$

[Figure: plot of $\tilde{E}(f)$ for T = 1, W = 1/8.]

This gives an equalizer consisting of a lowpass filter $\tilde{E}(f)$ followed by a time advance of T/2 (i.e., $\exp(j\pi fT)$, or $\delta(t + T/2)$):

$$i(t) \;\to\; \tilde{E}(f) \;\to\; o_1(t) \;\to\; \delta(t + T/2) \;\to\; o(t)$$

Non-realizable! Why? Because "$o_1(t) = 0$ for $t < 0$" does not imply "$o(t) = 0$ for $t < 0$."
3.3 Feasibility of Equalizer Filter

o Causal
  n A reasonable assumption for a feasible linear filter system $i(t) \to h(t) \to o(t)$ is that: for any $i(t)$ satisfying $i(t) = 0$ for $t < 0$, we have $o(t) = 0$ for $t < 0$.
  n A necessary and sufficient condition for the above assumption to hold is that $h(t) = 0$ for $t < 0$.
  n Simplified proof:

(Sufficiency)
$$\begin{cases} h(t) = 0 \text{ for } t < 0 \\ i(t) = 0 \text{ for } t < 0 \end{cases}
\;\Rightarrow\; o(t) = \int_{-\infty}^{\infty} h(\tau)\, i(t - \tau)\, d\tau = \int_{0}^{t} h(\tau)\, i(t - \tau)\, d\tau
\;\Rightarrow\; o(t) = 0 \text{ for } t < 0.$$

(Necessity) If $\int_{-\infty}^{-a} h(t)\,dt \ne 0$ for some $a > 0$, take
$$i(t) = \begin{cases} 0, & t < 0 \\ 1, & t \ge 0. \end{cases}$$
Then
$$o(-a) = \int_{-\infty}^{-a} h(\tau)\,d\tau \ne 0,$$
which means there will be a nonzero output due to a completely zero input! Therefore
$$\lambda(a) \triangleq \int_{-\infty}^{-a} h(\tau)\,d\tau = 0 \text{ for every } a > 0$$
$$\Rightarrow\; \frac{\partial \lambda(a)}{\partial a} = -h(-a) = 0 \text{ for every } a > 0 \;\Rightarrow\; h(t) = 0 \text{ for } t < 0.$$
3.3 Aperture Effect

o The distortion of M(f) due to M(f)H(f), where
$$h(t) = \begin{cases} 1, & 0 < t < T \\ 1/2, & t = 0,\ t = T \\ 0, & \text{otherwise} \end{cases}
\quad\text{or}\quad H(f) = T\,\mathrm{sinc}(fT)\exp(-j\pi fT),$$
is very similar to the distortion caused by the finite size of the scanning aperture in television. Hence it is named the aperture effect.
o If T/T_s ≤ 0.1, the amplitude distortion is less than 0.5%; hence, the equalizer may not be necessary.

[Figure: plot of H(f) magnitude near f = 0.]
With $\dfrac{1}{T} > \dfrac{1}{T_s} = f_s > 2W$,
$$\tilde{E}(f) = \begin{cases} \dfrac{1}{T\,\mathrm{sinc}(fT)}, & |f| \le W \\ 0, & \text{otherwise.} \end{cases}$$

E.g., for T = 1, T_s = 10, W = 0.04:
$$\tilde{E}(f) = \begin{cases} \dfrac{1}{\mathrm{sinc}(f)}, & |f| \le 0.04 \\ 0, & \text{otherwise,} \end{cases}$$
whose maximum value is $\tilde{E}(0.04) \approx 1.00264$.
3.3 Pulse-Amplitude Modulation

o Final notes on PAM
  n PAM is rather stringent in its system requirements, such as a short pulse duration.
  n Also, the noise performance of PAM may not be sufficient for long-distance transmission.
  n Accordingly, PAM is often used as a means of message processing for time-division multiplexing, from which conversion to some other form of pulse modulation is subsequently made. Details will be discussed in Section 3.9.
3.4 Other Forms of Pulse Modulation

o Pulse-Duration Modulation (or Pulse-Width Modulation)
  n Samples of the message signal are used to vary the duration of the pulses.
o Pulse-Position Modulation
  n The position of a pulse relative to its unmodulated time of occurrence is varied in accordance with the message signal.

[Figure: pulse trains, and the corresponding PDM and PPM waveforms.]
3.4 Other Forms of Pulse Modulation

o Comparisons between PDM and PPM
  n PPM is more power-efficient, because excessive pulse duration consumes considerable power.
o Final note
  n PPM is expected to be immune to additive noise, since additive noise only perturbs the amplitude of the pulses rather than their positions.
  n However, since pulses cannot be made perfectly rectangular in practice (namely, there exists a non-zero transition time at the pulse edges), the detection of pulse positions is still somewhat affected by additive noise.
3.5 Bandwidth-Noise Trade-Off

o PPM seems to be the best form of analog pulse modulation from the noise-performance standpoint. However, its noise performance is very similar to that of (analog) FM:
  n Its figure of merit is proportional to the square of the transmission bandwidth (i.e., 1/T) normalized with respect to the message bandwidth W (i.e., $B_n = B_T/W$).
  n There exists a threshold effect as the SNR is reduced.
o Question: Can we do better than the "square law" in figure-of-merit improvement? Answer: Yes. By means of digital communication, we can realize an "exponential law"!

See Slide 2-162: figure of merit $\propto D^2 = \left(\dfrac{B_{T,\text{Carson}}}{2W} - 1\right)^2 = \left(\dfrac{1}{2}B_{n,\text{Carson}} - 1\right)^2$.
3.6 Quantization Process

o Transform the continuous-amplitude sample m = m(nT_s) into a discrete approximate amplitude v = v(nT_s).
o Such a discrete approximation is adequately good in the sense that the human ear or eye can only detect finite intensity differences.
3.6 Quantization Process

o We may drop the time instant nT_s for convenience when the quantization process is memoryless and instantaneous (hence, the quantization at time nT_s is not affected by earlier or later samples of the message signal).
o Types of quantization
  n Uniform
    o Quantization step sizes are of equal length.
  n Non-uniform
    o Quantization step sizes are not of equal length.
o An alternative classification of quantization
  n Midtread
  n Midrise

[Figure: midtread and midrise quantizer characteristics.]
3.6 Quantization Noise
Uniform midtread quantizer
3.6 Quantization Noise
o Define the quantization noise to be Q = M − V = M − g(M), where g(·) is the quantizer.
o Let the message M be uniformly distributed in (−m_max, m_max), so M has zero mean.
o Assume g(·) is symmetric and of midrise type; then V = g(M) also has zero mean, and so does Q = M − V.
o The step size of the quantizer is then given by
$$\Delta = \frac{2\,m_{\max}}{L},$$
where L is the total number of representation levels.

Example: m_max = 1, L = 4 ⇒ Δ = 1/2.
3.6 Quantization Noise
o Assume that g(·) assigns the midpoint of each step interval as the representation level. Then

$$\Pr\{Q \le q\} = \Pr\left\{ (M \bmod \Delta) - \frac{\Delta}{2} \le q \right\}
= \begin{cases} 0, & q < -\dfrac{\Delta}{2} \\[4pt] \dfrac{1}{2} + \dfrac{q}{\Delta}, & -\dfrac{\Delta}{2} \le q < \dfrac{\Delta}{2} \\[4pt] 1, & q \ge \dfrac{\Delta}{2} \end{cases}$$

Or (pdf): $f_Q(q) = \dfrac{1}{\Delta}\cdot 1\!\left\{-\dfrac{\Delta}{2} \le q < \dfrac{\Delta}{2}\right\}$

Example: m_max = 1, L = 4, Δ = 1/2.
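A small Monte-Carlo check of this pdf (an illustrative sketch of my own): for a uniform input and a midrise quantizer with midpoint levels, Q is uniform on $[-\Delta/2, \Delta/2)$, so its variance is $\Delta^2/12$.

```python
import numpy as np

rng = np.random.default_rng(0)
m_max, L = 1.0, 4
delta = 2 * m_max / L                        # step size = 2*m_max/L = 0.5

M = rng.uniform(-m_max, m_max, 1_000_000)
V = delta * (np.floor(M / delta) + 0.5)      # midrise: midpoint of the step containing M
Q = M - V                                    # quantization noise

print(np.abs(Q).max() <= delta / 2)          # True: |Q| <= Delta/2
print(abs(np.var(Q) - delta**2 / 12) < 1e-4) # True: Var(Q) ~= Delta^2/12
```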
3.6 Quantization Noise
o So, the output signal-to-noise ratio is equal to:

$$\mathrm{SNR}_O = \frac{P}{\displaystyle\int_{-\Delta/2}^{\Delta/2} q^2\,\frac{1}{\Delta}\,dq} = \frac{P}{\Delta^2/12} = \frac{12P}{(2m_{\max}/L)^2} = \frac{3P}{m_{\max}^2}\,L^2$$

o The transmission bandwidth of a quantization system is conceptually proportional to the number of bits required per sample, i.e., R = log₂(L).
o We then conclude that SNR_O ∝ 4^R, which increases exponentially with transmission bandwidth.
Example 3.1 Sinusoidal Modulating Signal
o Let m(t) = A_m cos(2πf_c t). Then $P = \dfrac{A_m^2}{2}$ and $m_{\max} = A_m$, so

$$\mathrm{SNR}_O = \frac{3(A_m^2/2)}{A_m^2}\,L^2 = \frac{3}{2}\cdot 4^R,$$
or, in decibels,
$$10\log_{10}(3/2) + R\cdot 10\log_{10}(4)\ \mathrm{dB} \approx (1.8 + 6R)\ \mathrm{dB}.$$

L     R    SNR_O (dB)
32    5    31.8
64    6    37.8
128   7    43.8
256   8    49.8

* Note that in this example, we assume a full-load quantizer, in which no quantization loss is encountered due to saturation.
3.6 Quantization Noise
o In the previous analysis of quantization error, we assumed the quantizer assigns the midpoint of each step interval as the representative level.
o Questions:
  n Can the quantization noise power be further reduced by adjusting the representative levels?
  n Can the quantization noise power be further reduced by adopting a non-uniform quantizer?
3.6 Optimality of Scalar Quantizers

Representation levels: v₁ v₂ … v_{L−1} v_L
Partitions: I₁ I₂ … I_{L−1} I_L, with $\bigcup_{k=1}^{L} I_k = [-A, A)$. Notably, interval I_k may not be a "consecutive" single interval.

o Let d(m, v_k) be the distortion incurred by representing m by v_k.
o Goal: find {I_k} and {v_k} such that the average distortion D = E[d(M, g(M))] is minimized.
3.6 Optimality of Scalar Quantizers
o Solution:

$$\min_{\{I_k\},\{v_k\}} D = \min_{\{I_k\},\{v_k\}} \sum_{k=1}^{L}\int_{I_k} d(m, v_k)\, f_M(m)\, dm$$

(I) For fixed {v_k}, determine the optimal {I_k}.
(II) For fixed {I_k}, determine the optimal {v_k}.

(I) If d(m, v_k) ≤ d(m, v_j), then m should be assigned to I_k rather than I_j.
$$\Rightarrow\; I_k = \{\, m \in [-A, A) : d(m, v_k) \le d(m, v_j) \text{ for all } 1 \le j \le L \,\}$$
(II) For fixed {I_k}, determine the optimal {v_k}:

$$\min_{\{v_k\}} \sum_{k=1}^{L}\int_{I_k} d(m, v_k)\, f_M(m)\, dm$$

Since
$$\frac{\partial}{\partial v_j}\left(\sum_{k=1}^{L}\int_{I_k} d(m, v_k)\, f_M(m)\, dm\right) = \frac{\partial}{\partial v_j}\left(\int_{I_j} d(m, v_j)\, f_M(m)\, dm\right) = \int_{I_j} \frac{\partial\, d(m, v_j)}{\partial v_j}\, f_M(m)\, dm,$$
a necessary condition for the optimal $v_j$ is:
$$\int_{I_j} \frac{\partial\, d(m, v_j)}{\partial v_j}\, f_M(m)\, dm = 0.$$
The Lloyd-Max algorithm repetitively applies (I) and (II) to search for the optimal quantizer.
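A minimal sketch of the Lloyd-Max iteration for mean-square distortion (function and initialization are my own choices, not from the slides). For a uniformly distributed source, it should settle at the uniform quantizer's levels:

```python
import numpy as np

def lloyd_max(samples, L, iters=50):
    """Alternate step (I), nearest-level partition, and step (II),
    centroid (conditional mean) of each cell, for squared-error distortion."""
    v = np.quantile(samples, (np.arange(L) + 0.5) / L)   # initial levels
    for _ in range(iters):
        # (I) assign each sample to its nearest representation level
        idx = np.abs(samples[:, None] - v[None, :]).argmin(axis=1)
        # (II) move each level to the mean of the samples assigned to it
        for k in range(L):
            if np.any(idx == k):
                v[k] = samples[idx == k].mean()
    return np.sort(v)

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 200_000)
print(lloyd_max(x, L=4))   # close to the uniform-quantizer levels [-0.75, -0.25, 0.25, 0.75]
```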
Example: Mean-Square Distortion
o d(m, v_k) = (m − v_k)²

(I) $I_k = \{\, m \in [-A, A) : (m - v_k)^2 \le (m - v_j)^2 \text{ for all } 1 \le j \le L \,\}$ should be a consecutive interval.

Representation levels: v₁ v₂ … v_{L−1} v_L; Partitions: I₁ I₂ … I_{L−1} I_L.
Example: Mean-Square Distortion (II) :is optimal for the conditionnecessary A jv
]|[)(
)(1optimal, 1
1
+<≤==⇒
∫
∫+
+
jjm
m M
m
m M
j mMmMEdmmf
dmmmfv
j
j
j
j
Exercise: What is the best {mk} and {vk} if M is uniformly distributed over [-A,A).
Z mj+1
mj
@(m� vj)2
@vjfM (m)dM = �2
Z mj+1
mj
(m� vj)fM (m)dm = 0.
Hint: min{Ik}
min{vk}
D =1
2Amin{mk}
LX
k=1
Z mk+1
mk
✓m� mk +mk+1
2
◆2
dm.
© Po-Ning [email protected] Chapter 3-46
3.7 Pulse-Code Modulation
[Figure: block diagram of a PCM system, with an anti-aliasing filter at the front end.]
3.7 Pulse-Code Modulation
o Non-uniform quantizers used for telecommunication (ITU-T G.711)
  n ITU-T G.711: Pulse Code Modulation (PCM) of Voice Frequencies (1972)
    o It consists of two laws: the A-law (mainly used in Europe) and the µ-law (mainly used in the US and Japan).
  n This design helps to protect weak signals, which occur more frequently in, say, human voice.
3.7 Laws
o Quantization laws
  n A-law
    o 13-bit uniformly quantized input
    o Conversion to an 8-bit code
  n µ-law
    o 14-bit uniformly quantized input
    o Conversion to an 8-bit code
  n These two are referred to as compression laws since they use 8 bits to (lossily) represent 13- (or 14-) bit information.
3.7 A-law in G.711
o A-law (A = 87.6)

$$F_{\text{A-law}}(m) = \begin{cases} \dfrac{A|m|}{1 + \log A}\,\mathrm{sgn}(m), & |m| \le \dfrac{1}{A} \quad \text{(linear mapping)} \\[8pt] \dfrac{1 + \log(A|m|)}{1 + \log A}\,\mathrm{sgn}(m), & \dfrac{1}{A} \le |m| \le 1 \quad \text{(logarithmic mapping)} \end{cases}$$

[Figure: output $F_{\text{A-law}}(m)$ versus input m over [−1, 1].]
[Figure: the piecewise linear approximation to the A-law, mapping 13-bit uniform quantization (inputs −4096 … 4096) to the 8-bit PCM code (outputs −128 … 128).]
Compressor of A-law (assume nonnegative m)

Input Values (Bits 11 10 9 8 7 6 5 4 3 2 1 0)   Compressed Code Word (Chord Step, Bits 6 5 4 3 2 1 0)
0 0 0 0 0 0 0 a b c d x                         0 0 0 a b c d
0 0 0 0 0 0 1 a b c d x                         0 0 1 a b c d
0 0 0 0 0 1 a b c d x x                         0 1 0 a b c d
0 0 0 0 1 a b c d x x x                         0 1 1 a b c d
0 0 0 1 a b c d x x x x                         1 0 0 a b c d
0 0 1 a b c d x x x x x                         1 0 1 a b c d
0 1 a b c d x x x x x x                         1 1 0 a b c d
1 a b c d x x x x x x x                         1 1 1 a b c d

E.g., (3968)₁₀ → (1111,1000,0000)₂ → (111,1111)₂ → (127)₁₀
E.g., (2176)₁₀ → (1000,1000,0000)₂ → (111,0001)₂ → (113)₁₀
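Both tables above reduce to a few lines of bit manipulation: the chord is the position of the input's leading 1, and the step is the four bits a b c d that follow it. A sketch (my own function names, nonnegative magnitudes only, sign bit handled separately), using the two slide examples as test cases:

```python
def alaw_compress(m):
    """7-bit chord/step code for a nonnegative 12-bit magnitude, per the
    compressor table (sign bit omitted)."""
    assert 0 <= m < 4096
    chord = max(0, m.bit_length() - 5)     # chords 0 and 1 share shift 1
    step = (m >> max(1, chord)) & 0xF      # the four bits a b c d
    return (chord << 4) | step

def alaw_expand(code):
    """Invert the expander table: reinsert the leading 1 and set the
    half-step bit below the a b c d field."""
    chord, step = code >> 4, code & 0xF
    if chord < 2:                          # linear segment rows
        return (chord << 5) | (step << 1) | 1
    return (1 << (chord + 4)) | (step << chord) | (1 << (chord - 1))

print(alaw_compress(3968), alaw_compress(2176))   # 127 113
print(alaw_expand(113))                           # 2240
```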
Expander of A-law (assume nonnegative m)

Compressed Code Word (Chord Step, Bits 6 5 4 3 2 1 0)   Raised Output Values (Bits 11 10 9 8 7 6 5 4 3 2 1 0)
0 0 0 a b c d                                           0 0 0 0 0 0 0 a b c d 1
0 0 1 a b c d                                           0 0 0 0 0 0 1 a b c d 1
0 1 0 a b c d                                           0 0 0 0 0 1 a b c d 1 0
0 1 1 a b c d                                           0 0 0 0 1 a b c d 1 0 0
1 0 0 a b c d                                           0 0 0 1 a b c d 1 0 0 0
1 0 1 a b c d                                           0 0 1 a b c d 1 0 0 0 0
1 1 0 a b c d                                           0 1 a b c d 1 0 0 0 0 0
1 1 1 a b c d                                           1 a b c d 1 0 0 0 0 0 0

E.g., (113)₁₀ → (111,0001)₂ → (1000,1100,0000)₂ → (2240)₁₀.

In other words,
$$\frac{(1001,0000,0000)_2 + (1000,1000,0000)_2}{2} = \frac{(2304)_{10} + (2176)_{10}}{2} = (2240)_{10}.$$
3.7 µ-law in G.711
o µ-law (µ = 255)

$$F_{\mu\text{-law}}(m) = \mathrm{sgn}(m)\,\frac{\log(1 + \mu|m|)}{\log(1 + \mu)} \quad \text{for } |m| \le 1.$$

  n It is approximately linear for small |m|.
  n It is approximately logarithmic for large |m|.
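The formula above is a one-liner in code (a sketch of my own; `log1p(x)` computes the natural log of 1 + x):

```python
import math

def mu_law(m, mu=255.0):
    # F(m) = sgn(m) * log(1 + mu*|m|) / log(1 + mu),  |m| <= 1
    return math.copysign(math.log1p(mu * abs(m)) / math.log1p(mu), m)

print(mu_law(1.0))                     # 1.0: full scale maps to full scale
print(round(mu_law(0.01), 3))          # 0.228: small amplitudes are strongly boosted
print(mu_law(-0.5) == -mu_law(0.5))    # True: odd symmetry
```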
[Figure: $F_{\mu\text{-law}}(m)$ versus m, and the piecewise linear approximation to the µ-law mapping 14-bit uniform quantization ($2^{13}$ = 8192 magnitude levels, inputs −8159 … 8159) to the 8-bit PCM code (outputs −128 … 128).]
Compressor of µ-law (assume nonnegative m)

Raised Input Values (Bits 12 11 10 9 8 7 6 5 4 3 2 1 0)   Compressed Code Word (Chord Step, Bits 6 5 4 3 2 1 0)
0 0 0 0 0 0 0 1 a b c d x                                 0 0 0 a b c d
0 0 0 0 0 0 1 a b c d x x                                 0 0 1 a b c d
0 0 0 0 0 1 a b c d x x x                                 0 1 0 a b c d
0 0 0 0 1 a b c d x x x x                                 0 1 1 a b c d
0 0 0 1 a b c d x x x x x                                 1 0 0 a b c d
0 0 1 a b c d x x x x x x                                 1 0 1 a b c d
0 1 a b c d x x x x x x x                                 1 1 0 a b c d
1 a b c d x x x x x x x x                                 1 1 1 a b c d

Raised Input = Input + 33 = Input + 21H. (For negative m, the raised input becomes (input − 33).) An additional 7th bit is used to indicate whether the input signal is positive (1) or negative (0).
Expander of µ-law (assume nonnegative m)

Compressed Code Word (Chord Step, Bits 6 5 4 3 2 1 0)   Raised Output Values (Bits 12 11 10 9 8 7 6 5 4 3 2 1 0)
0 0 0 a b c d                                           0 0 0 0 0 0 0 1 a b c d 1
0 0 1 a b c d                                           0 0 0 0 0 0 1 a b c d 1 0
0 1 0 a b c d                                           0 0 0 0 0 1 a b c d 1 0 0
0 1 1 a b c d                                           0 0 0 0 1 a b c d 1 0 0 0
1 0 0 a b c d                                           0 0 0 1 a b c d 1 0 0 0 0
1 0 1 a b c d                                           0 0 1 a b c d 1 0 0 0 0 0
1 1 0 a b c d                                           0 1 a b c d 1 0 0 0 0 0 0
1 1 1 a b c d                                           1 a b c d 1 0 0 0 0 0 0 0

Output = Raised Output − 33. Note that the combination of a compressor and an expander is called a compander.
[Figure: comparison of the A-law and µ-law companding curves specified in G.711.]
3.7 Coding
o After the quantizer provides a symbol representing one of 256 possible levels (8 bits of information) at each sampled time, the encoder will transform the symbol (or several symbols) into a code character (or code word) that is suitable for transmission over a noisy channel.
o Example. Binary code.

1 1 1 0 0 1 0 0

(0 = change, 1 = no change)
3.7 Coding
o Example. Ternary code (pseudo-binary code), with symbols A, B, C:

00011011 → ACABBCBB

Through the help of coding, the receiver may be able to detect (or even correct) transmission errors due to noise. For example, it is impossible to receive ABABBABB, since this is not a legitimate code word (character).
3.7 Coding
o Example of an error-correcting code: the three-times repetition code (used to protect the Bluetooth packet header).

00011011 → 000,000,000,111,111,000,111,111

Then the so-called majority law can be applied at the receiver to correct one-bit errors.

o Channel (error-correcting) codes are designed to compensate for channel noise, while line codes are simply the electrical representation of a binary data stream over the electrical line.
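The repetition code and majority-law decoder fit in two lines each (a sketch with my own function names):

```python
def encode3(bits):
    """Three-times repetition, as in the Bluetooth header example."""
    return [b for b in bits for _ in range(3)]

def decode3(coded):
    """Majority law: each triple decodes to its most frequent bit,
    so any single-bit error per triple is corrected."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0 for i in range(0, len(coded), 3)]

data = [0, 0, 0, 1, 1, 0, 1, 1]
tx = encode3(data)
tx[4] ^= 1                     # flip one bit inside the second triple
print(decode3(tx) == data)     # True: the error is corrected
```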
3.7 Line Codes
(a) Unipolar nonreturn-to-zero (NRZ) signaling
(b) Polar nonreturn-to-zero (NRZ) signaling
(c) Unipolar return-to-zero (RZ) signaling
(d) Bipolar return-to-zero (BRZ) signaling
(e) Split-phase (Manchester code)
3.7 Derivation of PSD
o From Slide 1-116, the general formula for the PSD is:

$$\mathrm{PSD} = \lim_{T\to\infty}\frac{1}{2T}E\big[S_T(f)\,S_T^*(f)\big], \quad \text{where } S_T(f) = \mathcal{F}\{s(t)\cdot 1\{|t| \le T\}\}.$$

For a line-coded signal, $s(t) = \sum_{n=-\infty}^{\infty} a_n\, g(t - nT_b)$, where g(t) = 0 outside [0, T_b). Hence

$$S(f) = G(f)\sum_{n=-\infty}^{\infty} a_n\, e^{-j2\pi f nT_b} \quad\text{and}\quad S_{2NT_b}(f) = G(f)\sum_{n=-N}^{N-1} a_n\, e^{-j2\pi f nT_b}.$$

$$\Rightarrow\; \mathrm{PSD} = \lim_{N\to\infty}\frac{1}{2NT_b}|G(f)|^2\sum_{n=-N}^{N-1}\sum_{m=-N}^{N-1}E[a_n a_m^*]\,e^{-j2\pi f(n-m)T_b}.$$
$$\begin{aligned}
\mathrm{PSD} &= \lim_{N\to\infty}\frac{1}{2NT_b}|G(f)|^2\sum_{n=-N}^{N-1}\sum_{m=-N}^{N-1}E[a_n a_m^*]\,e^{-j2\pi f(n-m)T_b} \\
&= |G(f)|^2\lim_{N\to\infty}\frac{1}{2NT_b}\sum_{m=-N}^{N-1}\sum_{k=-\infty}^{\infty}\phi_a(k)\,e^{-j2\pi f kT_b} \qquad (k = n - m) \\
&= |G(f)|^2\lim_{N\to\infty}\frac{2N}{2NT_b}\sum_{k=-\infty}^{\infty}\phi_a(k)\,e^{-j2\pi f kT_b} \\
&= \frac{|G(f)|^2}{T_b}\sum_{k=-\infty}^{\infty}\phi_a(k)\,e^{-j2\pi f kT_b},
\end{aligned}$$
where $\phi_a(k) = E[a_n a_{n+k}^*]$ is the autocorrelation of $\{a_n\}$.
For i.i.d. {a_n},

$$\frac{1}{T_b}\sum_{k=-\infty}^{\infty}\phi_a(k)\,e^{-j2\pi f kT_b} = \frac{\sigma_a^2}{T_b} + \frac{\mu_a^2}{T_b}\sum_{k=-\infty}^{\infty}e^{-j2\pi f kT_b} = \frac{\sigma_a^2}{T_b} + \frac{\mu_a^2}{T_b^2}\sum_{k=-\infty}^{\infty}\delta(f - k/T_b)$$

(See Slide 3-4.) (i.i.d. = independent and identically distributed)
3.7 Power Spectral of Line Codes
o Unipolar nonreturn-to-zero (NRZ) signaling
  n Also named on-off signaling.
  n Disadvantage: waste of power due to the non-zero-mean nature (i.e., the PSD does not approach zero at zero frequency).

$$s(t) = \sum_{n=-\infty}^{\infty} a_n\, g(t - nT_b), \quad \text{where } \{a_n\}_{n=-\infty}^{\infty} \text{ is zero/one i.i.d. and } g(t) = \begin{cases} A, & 0 \le t < T_b \\ 0, & \text{otherwise.} \end{cases}$$
3.7 Power Spectral of Line Codes
n PSD of Unipolar NRZ (here $\mu_a = 1/2$ and $\sigma_a^2 = 1/4$):

$$\begin{aligned}
\mathrm{PSD}_{\text{U-NRZ}} &= |G(f)|^2\left(\frac{\sigma_a^2}{T_b} + \frac{\mu_a^2}{T_b^2}\sum_{k=-\infty}^{\infty}\delta(f - k/T_b)\right) \\
&= A^2T_b^2\,\mathrm{sinc}^2(fT_b)\left(\frac{\sigma_a^2}{T_b} + \frac{\mu_a^2}{T_b^2}\sum_{k=-\infty}^{\infty}\delta(f - k/T_b)\right) \\
&= \frac{A^2T_b}{4}\,\mathrm{sinc}^2(fT_b)\left(1 + \frac{1}{T_b}\sum_{k=-\infty}^{\infty}\delta(f - k/T_b)\right) \\
&= \frac{A^2T_b}{4}\,\mathrm{sinc}^2(fT_b)\left(1 + \frac{1}{T_b}\delta(f)\right)
\end{aligned}$$

(the $k \ne 0$ delta functions vanish since $\mathrm{sinc}^2(k) = 0$ for integer $k \ne 0$).
3.7 Power Spectral Density of Line Codes

o Polar nonreturn-to-zero (NRZ) signaling
  n The previous PSD of unipolar NRZ suggests that a zero-mean data sequence is preferred.

$$s(t) = \sum_{n=-\infty}^{\infty} a_n\, g(t - nT_b), \quad \text{where } \{a_n\} \text{ is } \pm 1 \text{ i.i.d. and } g(t) = \begin{cases} A, & 0 \le t < T_b \\ 0, & \text{otherwise.} \end{cases}$$

$$\mathrm{PSD}_{\text{P-NRZ}} = |G(f)|^2\left(\frac{\sigma_a^2}{T_b} + \frac{\mu_a^2}{T_b^2}\sum_{k=-\infty}^{\infty}\delta(f - k/T_b)\right) = A^2T_b\,\mathrm{sinc}^2(fT_b)$$

(here $\mu_a = 0$ and $\sigma_a^2 = 1$).
© Po-Ning [email protected] Chapter 3-69
3.7 Power Spectral of Line Codes
o Unipolar return-to-zero (RZ) signaling
  n An attractive feature of this line code is the presence of delta functions at f = −1/T_b, 0, 1/T_b in the PSD, which can be used for bit-timing recovery at the receiver.
  n Disadvantage: it requires 3 dB more power than polar return-to-zero signaling.

$$s(t) = \sum_{n=-\infty}^{\infty} a_n\, g(t - nT_b), \quad \text{where } \{a_n\} \text{ is zero/one i.i.d. and } g(t) = \begin{cases} A, & 0 \le t < T_b/2 \\ 0, & \text{otherwise.} \end{cases}$$
3.7 Power Spectral of Line Codes
n PSD of Unipolar RZ:

$$\begin{aligned}
\mathrm{PSD}_{\text{U-RZ}} &= |G(f)|^2\left(\frac{\sigma_a^2}{T_b} + \frac{\mu_a^2}{T_b^2}\sum_{k=-\infty}^{\infty}\delta(f - k/T_b)\right) \\
&= \frac{A^2T_b^2}{4}\,\mathrm{sinc}^2\!\left(\frac{fT_b}{2}\right)\left(\frac{\sigma_a^2}{T_b} + \frac{\mu_a^2}{T_b^2}\sum_{k=-\infty}^{\infty}\delta(f - k/T_b)\right) \\
&= \frac{A^2T_b}{16}\,\mathrm{sinc}^2\!\left(\frac{fT_b}{2}\right)\left(1 + \frac{1}{T_b}\sum_{k=-\infty}^{\infty}\delta(f - k/T_b)\right)
\end{aligned}$$

(only the deltas at k = 0 and odd k survive, since $\mathrm{sinc}^2(k/2) = 0$ for even $k \ne 0$).
3.7 Power Spectral of Line Codes
o Bipolar return-to-zero (BRZ) signaling
  n Also named alternate mark inversion (AMI) signaling.
  n No DC component and relatively insignificant low-frequency components in the PSD.

$$s(t) = \sum_{n=-\infty}^{\infty} a_n\, g(t - nT_b), \quad \text{where } g(t) = \begin{cases} A, & 0 \le t < T_b/2 \\ 0, & \text{otherwise.} \end{cases}$$
3.7 Power Spectral of Line Codes
n PSD of BRZ
  o {a_n} is no longer i.i.d.: $a_n \in \{+1, 0, -1\}$, where successive "marks" (data 1s) alternate in sign.

$$E[a_n^2] = (1)^2\cdot\frac{1}{4} + (-1)^2\cdot\frac{1}{4} + (0)^2\cdot\frac{1}{2} = \frac{1}{2}$$
$$E[a_n a_{n+1}] = (-1)\cdot\frac{1}{4} = -\frac{1}{4}$$
$$E[a_n a_{n+m}] = (1)(1)\frac{1}{16} + (1)(-1)\frac{1}{16} + (-1)(1)\frac{1}{16} + (-1)(-1)\frac{1}{16} + 0 = 0 \quad \text{for } m > 1.$$
3.7 Power Spectral of Line Codes
$$\begin{aligned}
\mathrm{PSD}_{\text{BRZ}} &= |G(f)|^2\cdot\frac{1}{T_b}\sum_{k=-\infty}^{\infty}\phi_a(k)\,e^{-j2\pi f kT_b} \\
&= \frac{A^2T_b^2}{4}\,\mathrm{sinc}^2\!\left(\frac{fT_b}{2}\right)\cdot\frac{1}{T_b}\left(-\frac{1}{4}e^{j2\pi fT_b} + \frac{1}{2} - \frac{1}{4}e^{-j2\pi fT_b}\right) \\
&= \frac{A^2T_b^2}{4}\,\mathrm{sinc}^2\!\left(\frac{fT_b}{2}\right)\cdot\frac{1}{T_b}\left(\frac{1}{2} - \frac{1}{2}\cos(2\pi fT_b)\right) \\
&= \frac{A^2T_b}{4}\,\mathrm{sinc}^2\!\left(\frac{fT_b}{2}\right)\sin^2(\pi fT_b)
\end{aligned}$$
3.7 Power Spectral of Line Codes
o Split-phase (Manchester code)
  n This signaling suppresses the DC component and has relatively insignificant low-frequency components, regardless of the signal statistics.
  n Notably, for P-NRZ and BRZ, the DC component is suppressed only when the signal has the right statistics.

$$s(t) = \sum_{n=-\infty}^{\infty} a_n\, g(t - nT_b), \quad \text{where } \{a_n\} \text{ is } \pm 1 \text{ i.i.d. and } g(t) = \begin{cases} A, & 0 \le t < T_b/2 \\ -A, & T_b/2 \le t < T_b \\ 0, & \text{otherwise.} \end{cases}$$
3.7 Power Spectral of Line Codes
n PSD of Manchester code (here $\mu_a = 0$ and $\sigma_a^2 = 1$):

$$\begin{aligned}
\mathrm{PSD}_{\text{Manchester}} &= |G(f)|^2\left(\frac{\sigma_a^2}{T_b} + \frac{\mu_a^2}{T_b^2}\sum_{k=-\infty}^{\infty}\delta(f - k/T_b)\right) \\
&= A^2T_b^2\,\mathrm{sinc}^2\!\left(\frac{fT_b}{2}\right)\sin^2\!\left(\frac{\pi fT_b}{2}\right)\left(\frac{\sigma_a^2}{T_b} + \frac{\mu_a^2}{T_b^2}\sum_{k=-\infty}^{\infty}\delta(f - k/T_b)\right) \\
&= A^2T_b\,\mathrm{sinc}^2\!\left(\frac{fT_b}{2}\right)\sin^2\!\left(\frac{\pi fT_b}{2}\right)
\end{aligned}$$
Let T_b = 1 and adjust A such that the total power of each line code is 1. This gives a fair comparison among the line codes.

$$\mathrm{PSD}_{\text{U-NRZ}} = \frac{1}{2}\,\mathrm{sinc}^2(f) + \frac{1}{2}\,\delta(f) \qquad \left(A = \sqrt{2},\ \text{power} = \tfrac{1}{2} + \tfrac{1}{2} = 1\right)$$

$$\mathrm{PSD}_{\text{P-NRZ}} = \mathrm{sinc}^2(f) \qquad (A = 1,\ \text{power} = 1)$$

$$\mathrm{PSD}_{\text{U-RZ}} = \frac{1}{4}\,\mathrm{sinc}^2\!\left(\frac{f}{2}\right) + \frac{1}{4}\sum_{k=-\infty}^{\infty}\mathrm{sinc}^2\!\left(\frac{k}{2}\right)\delta(f - k) \qquad \left(A = 2,\ \text{power} = \tfrac{1}{2} + \tfrac{1}{2} = 1\right)$$

$$\mathrm{PSD}_{\text{BRZ}} = \mathrm{sinc}^2\!\left(\frac{f}{2}\right)\sin^2(\pi f) \qquad (A = 2,\ \text{power} = 1)$$

$$\mathrm{PSD}_{\text{Manchester}} = \mathrm{sinc}^2\!\left(\frac{f}{2}\right)\sin^2\!\left(\frac{\pi f}{2}\right) \qquad (A = 1,\ \text{power} = 1)$$
[Figure: the normalized PSDs of the U-NRZ, P-NRZ, U-RZ, BRZ, and Manchester codes plotted over 0 ≤ f ≤ 2.]
3.7 Differential Encoding with Unipolar NRZ Line Coding
o 1 = no change and 0 = change.
[Figure: differential encoder block diagram; $o_n$ denotes the data bit and $d_n$ the encoded bit.]

$$d_n = \overline{d_{n-1} \oplus o_n},$$
i.e., $d_n = d_{n-1}$ when $o_n = 1$ (no change) and $d_n = \overline{d_{n-1}}$ when $o_n = 0$ (change).
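A minimal sketch of this rule (function names and the initial reference bit are my own choices). A useful property of differential encoding is that decoding survives a polarity inversion of the whole stream, since the decoder only looks at transitions:

```python
def diff_encode(bits, d0=0):
    """1 = no change, 0 = change: d_n = NOT(d_{n-1} XOR o_n)."""
    d, out = d0, []
    for o in bits:
        d = 1 - (d ^ o)           # XNOR with the previous encoded bit
        out.append(d)
    return out

def diff_decode(encoded, d0=0):
    """A repeated level decodes to 1, a transition decodes to 0."""
    prev, out = d0, []
    for d in encoded:
        out.append(1 - (prev ^ d))
        prev = d
    return out

msg = [1, 1, 1, 0, 0, 1, 0]
print(diff_decode(diff_encode(msg)) == msg)   # True
```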
3.7 Regeneration
o Regenerative repeater for PCM systems
  n It can completely remove the distortion, provided the decision-making device makes the right decision (on 1 or 0).
3.7 Decoding & Filtering
o After regenerating the received pulse for the last time, the receiver decodes it and reconstructs the original message signal (with acceptable quantization error).
o Finally, a lowpass reconstruction filter, whose cutoff frequency is equal to the message bandwidth W, is applied at the end (to remove the unnecessary high-frequency components due to "quantization").
3.8 Noise Consideration in PCM Systems
o Two major noise sources in PCM systems:
  n (Message-independent) channel noise
  n (Message-dependent) quantization noise
o The quantization noise is often under the designer's control, and can be made negligible by taking an adequate number of quantization levels.
3.8 Noise Consideration in PCM Systems
o The main effect of channel noise is to introduce bit errors.
  n Notably, the symbol error rate is quite different from the bit error rate.
  n A symbol error may be caused by a one-bit error, a two-bit error, a three-bit error, and so on; so in general, one cannot derive the symbol error rate from the bit error rate (or vice versa) unless some special assumption is made.
  n Considering the reconstruction of the original analog signal, a bit error in the most significant bit is more harmful than a bit error in the least significant bit.
3.8 Error Threshold
o E_b/N_0
  n E_b: transmitted signal energy per information bit.
    o E.g., an information bit is encoded using the three-times repetition code, in which each code bit is transmitted using one BPSK symbol with symbol energy E_c.
    o Then E_b = 3E_c.
  n N_0: one-sided noise spectral density.
o The bit error rate (BER) is a function of E_b/N_0 and the transmission speed (and implicitly the bandwidth, etc.).
3.8 Error Threshold
o Influence of E_b/N_0 on the BER at 10^5 bits per second (bps):

E_b/N_0 (dB)   BER      About one bit error in every …
4.3            10^-2    10^-3 second
8.4            10^-4    10^-1 second
10.6           10^-6    10 seconds
12.0           10^-8    20 minutes
13.0           10^-10   1 day
14.0           10^-12   3 months

  n The usual BER requirement in practice is 10^-5.
3.8 Error Threshold
o Error threshold
  n The minimum E_b/N_0 required to achieve the required BER.
o Knowing the error threshold, one can always add a regenerative repeater before E_b/N_0 drops below the threshold; hence, long-distance transmission becomes feasible.
  n This is unlike analog transmission, in which distortion accumulates over a long-distance link.
3.9 Time-Division Multiplexing
o An important feature of the sampling process is the conservation of time.
  n In principle, the communication link is used only at the sampling time instants.
o Hence, it may be feasible to put other messages' samples between adjacent samples of this message on a time-shared basis.
o This forms the time-division multiplexing (TDM) system.
  n A joint utilization of a common communication link by a plurality of independent message sources.
3.9 Time-Division Multiplexing
o The commutator (1) takes a narrow sample of each of the N input messages at a rate fs slightly higher than 2W, where W is the cutoff frequency of the anti-aliasing filter, and (2) interleaves these N samples inside the sampling interval Ts.
3.9 Time-Division Multiplexing
o The price we pay for TDM is that N samples must be squeezed into a time slot of duration T_s.
3.9 Time-Division Multiplexing
o Synchronization is essential for satisfactory operation of the TDM system.
  n One possible procedure to synchronize the transmitter and receiver clocks is to set aside a code element or pulse at the end of a frame, and to transmit this pulse in every other frame only.
Example 3.2 The T1 System
o T1 system n Carries 24 64kbps voice channels with regenerative
repeaters spaced at approximately 2-km intervals. n Each voice signal is essentially limited to a band from
300 to 3100 Hz. o Anti-aliasing filter with W = 3.1 KHz o Sampling rate = 8 KHz (> 2W = 6.2 KHz)
n ITU G.711 µ-law is used with µ = 255. n Each frame consists of 24 × 8 + 1 = 193 bits, where a
single bit is added at the end of the frame for the purpose of synchronization.
Example 3.2 The T1 System
n In addition to the 193 bits per frame (i.e., 1.544 Megabits per second), a telephone system must also pass signaling information such as “dial pulses” and “on/off-hook.” o The least significant bit of each voice channel is
deleted in every sixth frame, and a signaling bit is inserted in its place.
193 bits/frame × (1 frame per sample, with one sample from each of the 24 voice channels) × 8000 samples/sec = 1.544 Megabits/sec

(DS = Digital Signal)
3.10 Digital Multiplexers
o The introduction of digital multiplexers enables us to combine digital signals of various natures, such as computer data, digitized voice signals, and digitized facsimile and television signals.
3.10 Digital Multiplexers
o The multiplexing of digital signals is accomplished by using a bit-by-bit interleaving procedure with a selector switch that sequentially takes one bit (or more) from each incoming line and then applies it to the high-speed common line.
(3 channels/frame)
3.10 Digital Multiplexers
o Digital multiplexers are categorized into two major groups. 1. 1st Group: Multiplex digital computer data for TDM transmission over the public switched telephone network. n Requires the use of modem technology. 2. 2nd Group: Multiplex low-bit-rate digital voice data into a high-bit-rate voice stream. n Fits into a multiplexing hierarchy that varies from one country to another. n Usually, the hierarchy starts at 64 kbps, named digital signal zero (DS0).
3.10 North American Digital TDM Hierarchy
o The first level hierarchy n Combine 24 DS0 to obtain a primary rate DS1 at 1.544
Mb/s (T1 transmission) o The second-level multiplexer
n Combine 4 DS1 to obtain a DS2 with rate 6.312 Mb/s o The third-level multiplexer
n Combine 7 DS2 to obtain a DS3 at 44.736 Mb/s o The fourth-level multiplexer
n Combine 6 DS3 to obtain a DS4 at 274.176 Mb/s o The fifth-level multiplexer
n Combine 2 DS4 to obtain a DS5 at 560.160 Mb/s
3.10 North American Digital TDM Hierarchy
n The combined bit rate is higher than the multiple of the incoming bit rates because of the addition of bit stuffing and control signals.
3.10 North American Digital TDM Hierarchy
o Basic problems involved in the design of a multiplexing system n Synchronization should be maintained to properly recover the interleaved digital signals. n Framing should be designed so that individual channels can be identified at the receiver. n Variation in the bit rates of incoming signals should be considered in the design. o For example, a 0.01% variation in the propagation delay, produced by a 1 °F decrease in temperature, results in 100 fewer pulses in a 1000-km cable in which each pulse occupies about 1 meter of the cable.
3.10 Digital Multiplexers
o Synchronization and rate variation problems are resolved by bit stuffing.
o Example 3.3. AT&T M12 (second-level multiplexer) n 24 control bits are stuffed, and separated by sequences
of 48 data bits (12 from each DS1 input).
Example 3.3 AT&T M12 Multiplexer
o The control bits are labeled F, M, and C. n Frame markers: in the sequence F0F1F0F1F0F1F0F1, where F0 = 0 and F1 = 1. n Subframe markers: in the sequence M0M1M1M1, where M0 = 0 and M1 = 1. n Stuffing indicators: in the sequence CI CI CI CII CII CII CIII CIII CIII CIV CIV CIV, where all three bits of Cj equal to 1 indicate that a stuffing bit is added in the position of the first information bit associated with the first DS1 bit stream that follows the F1 control bit in the same subframe, and three 0's in Cj Cj Cj imply no stuffing. o The receiver should use the majority rule to decide whether a stuffing bit was added.
Example 3.3 AT&T M12 Multiplexer
o These stuffed bits can be used to balance (or maintain) the nominal input bit rates and nominal output bit rates. n S = nominal bit stuffing rate
o The rate at which stuffing bits are inserted when both the input and output bit rates are at their nominal values.
n fin = nominal input bit rate n fout = nominal output bit rate n M = number of bits in a frame n L = number of information bits (input bits) for one input
stream in a frame
Example 3.3 AT&T M12 multiplexer
o For M12 framing,

fin = 1.544 Mbps
fout = 6.312 Mbps
M = 288 × 4 + 24 = 1176 bits
L = 288 bits

Duration of a frame = M/fout = S·(L−1)/fin + (1−S)·L/fin
(when stuffing occurs, one information bit is replaced by a stuffed bit)

⇒ S = L − fin·M/fout = 288 − 1.544 × 1176/6.312 = 0.334601

Indeed, (L−1)/fin = 185.88082902 µs ≤ M/fout = 186.31178707 µs ≤ L/fin = 186.52849741 µs.
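The M12 stuffing-rate numbers above can be reproduced numerically (a sketch; the variable names are mine):

```python
# Sketch: reproduce the M12 stuffing-rate numbers from the slide.
f_in, f_out = 1.544e6, 6.312e6     # DS1 input rate, DS2 output rate (b/s)
M = 288 * 4 + 24                   # bits per M12 frame -> 1176
L = 288                            # information bits per DS1 per frame

# Frame duration M/f_out = S*(L-1)/f_in + (1-S)*L/f_in
# => nominal stuffing rate S = L - f_in*M/f_out
S = L - f_in * M / f_out           # ~0.334601

frame_us = M / f_out * 1e6         # ~186.3118 us, between (L-1)/f_in and L/f_in
```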
Example 3.3 AT&T M12 Multiplexer
o Allowable tolerances to maintain nominal output bit rates n A sufficient condition for the existence of S such that
the nominal output bit rate can be matched.
max over S ∈ [0,1] of [ S·(L−1)/fin + (1−S)·L/fin ] ≥ M/fout ≥ min over S ∈ [0,1] of [ S·(L−1)/fin + (1−S)·L/fin ]

⇔ L/fin ≥ M/fout ≥ (L−1)/fin

⇔ L·fout/M ≥ fin ≥ (L−1)·fout/M

⇒ 1.5458 = 288 × 6.312/1176 ≥ fin ≥ 287 × 6.312/1176 = 1.54043 (Mbps)
Example 3.3 AT&T M12 Multiplexer
n This results in an allowable tolerance range:

1.5458 − 1.54043 = 6.312/1176 = 5.36735 kbps

n In terms of ppm (parts per million),

1.54043 × 10⁶ = 1.544 × 10⁶ × (1 − b × 10⁻⁶) and 1.5458 × 10⁶ = 1.544 × 10⁶ × (1 + a × 10⁻⁶)
⇒ b = 2312.18 ppm and a = 1164.8 ppm

o This tolerance is already much larger than the expected change in the bit rate of the incoming DS1 bit stream.
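The tolerance range can be checked numerically (a sketch; the names are mine). The ppm figures come out close to, but not identical with, the slide's 2312.18 and 1164.8, since the slide computes them from the rounded endpoint rates:

```python
# Sketch: the allowable DS1 input-rate tolerance for the M12 multiplexer.
f_out, M, L = 6.312e6, 1176, 288
hi = L * f_out / M            # ~1.545796 Mb/s upper endpoint
lo = (L - 1) * f_out / M      # ~1.540429 Mb/s lower endpoint
tol = hi - lo                 # = f_out/M ~ 5367 b/s ~ 5.367 kb/s

nominal = 1.544e6
ppm_low = (nominal - lo) / nominal * 1e6   # ~2313 ppm below nominal
ppm_high = (hi - nominal) / nominal * 1e6  # ~1163 ppm above nominal
```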
3.11 Virtues, Limitations, and Modifications of PCM
o Virtues of PCM systems n Robustness to channel noise and interference n Efficient regeneration of coded signal along the transmission path n Efficient exchange of increased channel bandwidth for improved
signal-to-noise ratio, obeying an exponential law. n Uniform format for different kinds of baseband signal transmission; hence, their integration into a common network is facilitated.
n Message sources are easily dropped or reinserted in a TDM system.
n Secure communication through the use of encryption/decryption.
3.11 Virtues, Limitations, and Modifications of PCM
o Two limitations of PCM systems (in the past) n Complexity n Bandwidth
o Nowadays, with the advance of VLSI technology, and with the availability of wideband communication channels (such as fiber) and compression techniques (to reduce the bandwidth demand), the above two limitations are greatly relaxed.
3.12 Delta Modulation
o Delta Modulation (DM) n The message is oversampled (at a rate much higher than
the Nyquist rate) to purposely increase the correlation between adjacent samples.
n Then, the difference between adjacent samples is encoded instead of the sample value itself.
3.12 Math Analysis of Delta Modulation
Let m[n] = m(nTs).
Let mq[n] be the DM approximation of m(t) at time t = nTs. Then

mq[n] = mq[n−1] + eq[n] = Σ_{j=−∞}^{n} eq[j],

where eq[n] = Δ · sgn(m[n] − mq[n−1]).

The transmitted codeword is {(eq[n]/Δ + 1)/2}_{n=−∞}^{∞}.
Chapter 3-110
o The principal virtue of delta modulation is its simplicity. n It only requires the use of a comparator, a quantizer, and an accumulator.

(Figure: DM transmitter and receiver, where m(t) has bandwidth W.)

3.12 Delta Modulation

mq[n] = mq[n−1] + eq[n], where eq[n] = Δ · sgn(m[n] − mq[n−1]).

(Since m[n] = mq[n−1] + e[n] and mq[n] = mq[n−1] + eq[n], it follows that mq[n] − m[n] = eq[n] − e[n]. See Slide 3-131.)
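The DM loop above can be sketched in a few lines (fixed step size; the function names are mine, not from the slides):

```python
# Sketch of the delta-modulation loop (fixed step size, i.e. "linear" DM).
def dm_encode(m, delta):
    mq_prev, bits = 0.0, []
    for sample in m:
        eq = delta if sample >= mq_prev else -delta  # 1-bit quantizer
        bits.append(1 if eq > 0 else 0)              # codeword (eq/delta + 1)/2
        mq_prev += eq                                # accumulator
    return bits

def dm_decode(bits, delta):
    mq, out = 0.0, []
    for b in bits:
        mq += delta if b else -delta                 # accumulator only
        out.append(mq)
    return out

ramp = [0.5 * n for n in range(8)]   # slowly rising input
bits = dm_encode(ramp, 1.0)
approx = dm_decode(bits, 1.0)        # staircase tracks the ramp within one step
```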
3.12 Delta Modulation
o Distortions due to delta modulation n Slope overload distortion n Granular noise
3.12 Delta Modulation
o Slope overload distortion n To eliminate slope overload distortion, it is required that

Δ/Ts ≥ max over t of |dm(t)/dt|   (otherwise, the slope overload condition occurs)

n So, increasing the step size Δ can reduce the slope-overload distortion. n An alternative solution is to use a dynamic Δ. (Often, a delta modulator with a fixed step size is referred to as a linear delta modulator due to its fixed slope, a basic feature of linearity.)
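For a sinusoidal test tone the condition is easy to evaluate; the following sketch uses illustrative values of my own choosing:

```python
import math

# Sketch: checking the slope-overload condition delta/Ts >= max|dm/dt|
# for a test tone m(t) = A*sin(2*pi*f0*t), where max|dm/dt| = 2*pi*f0*A.
A, f0 = 1.0, 1000.0                 # 1 V amplitude, 1 kHz tone
fs = 64000.0                        # DM sampling rate (oversampled)
Ts = 1.0 / fs
max_slope = 2 * math.pi * f0 * A    # ~6283 V/s
delta_min_required = max_slope * Ts # smallest step avoiding overload ~0.098
ok = 0.1 / Ts >= max_slope          # a step of 0.1 would (just) suffice
```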
3.12 Delta Modulation
o Granular noise n mq[n] will hunt around a relatively flat segment of m(t). n A remedy is to reduce the step size.
o A tradeoff in choosing the step size therefore results between slope overload distortion and granular noise.
3.12 Delta-Sigma Modulation
o Delta-sigma modulation n In fact, the delta modulation distortion can be reduced
by increasing the correlation between samples. n This can be achieved by integrating the message signal
m(t) prior to delta modulation. n The “integration” process is equivalent to a pre-
emphasis of the low-frequency content of the input signal.
3.12 Delta-Sigma Modulation
n A side benefit of “integration-before-delta-modulation,” which is named delta-sigma modulation, is that the receiver design is further simplified (at the expense of a more complex transmitter).
Move the accumulator to the transmitter.
3.12 Delta-Sigma Modulation
A straightforward structure
Since integration is a linear operation, the two integrators before comparator can be combined into one after comparator.
3.12 Math Analysis of Delta-Sigma Modulation
Let i[n] = ∫_{−∞}^{nTs} m(τ) dτ.
Let iq[n] be the DM approximation of ∫_{−∞}^{t} m(τ) dτ at time t = nTs.
Then iq[n] = iq[n−1] + σq[n], where σq[n] = Δ · sgn(i[n] − iq[n−1]).

The transmitted codeword is {(σq[n]/Δ + 1)/2}_{n=−∞}^{∞}.

Since

σq[n] = iq[n] − iq[n−1] ≈ i[n] − i[n−1] = ∫_{(n−1)Ts}^{nTs} m(t) dt ≈ m(nTs) · Ts,

we only need a lowpass filter to smooth out the received signal at the receiver end. (See the previous slide.)
3.12 Delta Modulation
o Final notes n Delta(-sigma) modulation trades channel bandwidth
(e.g., much higher sampling rate) for reduced system complexity (e.g., the receiver only demands a lowpass filter).
n Can we trade increased system complexity for a reduced channel bandwidth? Yes, by means of prediction technique.
n In Section 3.13, we will introduce the basics of prediction technique. Its application will be addressed in subsequent sections.
3.13 Linear Prediction
o Consider a finite-duration impulse response (FIR) discrete-time filter, where p is the prediction order, with linear prediction

x̂[n] = Σ_{k=1}^{p} wk x[n−k]
3.13 Linear Prediction
o Design objective n To find the filter coefficient w1, w2, …, wp so as to
minimize index of performance J:
J = E[e²[n]], where e[n] = x[n] − x̂[n].
Let {x[n]} be stationary with autocorrelation function RX(k). Then

J = E[ ( x[n] − Σ_{k=1}^{p} wk x[n−k] )² ]

  = E[x²[n]] − 2 Σ_{k=1}^{p} wk E[ x[n] x[n−k] ] + Σ_{k=1}^{p} Σ_{j=1}^{p} wk wj E[ x[n−k] x[n−j] ]

  = RX[0] − 2 Σ_{k=1}^{p} wk RX[k] + 2 Σ_{k=1}^{p} Σ_{j>k} wk wj RX[k−j] + Σ_{k=1}^{p} wk² RX[0]

Setting each partial derivative to zero,

∂J/∂wi = −2 RX[i] + 2 Σ_{j=1}^{i−1} wj RX[j−i] + 2 Σ_{j=i+1}^{p} wj RX[j−i] + 2 wi RX[0]

       = −2 RX[i] + 2 Σ_{j=1}^{p} wj RX[i−j] = 0

(using RX[j−i] = RX[i−j]).
Σ_{j=1}^{p} wj RX[i−j] = RX[i]   for 1 ≤ i ≤ p.
The above optimality equations are called the Wiener–Hopf equations for linear prediction. They can be rewritten in matrix form as:

⎡ RX[0]     RX[1]     …  RX[p−1] ⎤ ⎡ w1 ⎤   ⎡ RX[1] ⎤
⎢ RX[1]     RX[0]     …  RX[p−2] ⎥ ⎢ w2 ⎥ = ⎢ RX[2] ⎥
⎢   ⋮         ⋮       ⋱     ⋮    ⎥ ⎢  ⋮ ⎥   ⎢   ⋮   ⎥
⎣ RX[p−1]   RX[p−2]   …  RX[0]   ⎦ ⎣ wp ⎦   ⎣ RX[p] ⎦

or RX w = rX ⇒ optimal solution wo = RX⁻¹ rX.
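The Wiener–Hopf system is small enough to solve by hand for p = 2. The sketch below uses an AR(1)-like autocorrelation RX[k] = a^|k| (example values are mine), for which the optimal predictor turns out to need only one tap:

```python
# Sketch: solving the Wiener-Hopf equations R_X w = r_X for p = 2,
# for an AR(1)-like autocorrelation R_X[k] = a**|k|.
def solve2(A, b):
    """Cramer's rule for a 2x2 system A w = b."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (b[1] * A[0][0] - b[0] * A[1][0]) / det]

a = 0.9
R = [[1.0, a], [a, 1.0]]      # Toeplitz autocorrelation matrix R_X
r = [a, a * a]                # right-hand side [R_X[1], R_X[2]]
w = solve2(R, r)              # optimal taps: w1 = a, w2 = 0
```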
3.13 Toeplitz (Square) Matrix
o Any square matrix of the form

⎡ a0       a1       …  a_{p−1} ⎤
⎢ a1       a0       …  a_{p−2} ⎥
⎢  ⋮        ⋮       ⋱     ⋮    ⎥
⎣ a_{p−1}  a_{p−2}  …  a0      ⎦  (p × p)

is said to be Toeplitz.
o A Toeplitz matrix (such as RX) can be uniquely determined by p elements, [a0, a1, …, a_{p−1}].
3.13 Linear Adaptive Predictor
o The optimal wo can only be obtained with knowledge of the autocorrelation function.
o Question: What if the autocorrelation function is unknown? o Answer: Use linear adaptive predictor.
3.13 Idea Behind Linear Adaptive Predictor
o To minimize J, we should update wi toward the bottom of the J-bowl. Let

gi ≡ ∂J/∂wi

n So when gi > 0, wi should be decreased. n On the contrary, wi should be increased if gi < 0. n Hence, we may define the update rule as:

ŵi[n+1] = ŵi[n] − (1/2) µ gi[n]

where µ is a chosen constant step size, and 1/2 is included only for convenience of analysis.
o gi[n] can be approximated by:
gi[n] ≡ ∂J/∂wi = −2 RX[i] + 2 Σ_{j=1}^{p} wj RX[i−j]

≈ −2 x[n] x[n−i] + 2 Σ_{j=1}^{p} ŵj[n] x[n−j] x[n−i]

= −2 x[n−i] ( x[n] − Σ_{j=1}^{p} ŵj[n] x[n−j] )
⇒ wi[n+1] = wi[n] + µ · x[n−i] ( x[n] − Σ_{j=1}^{p} wj[n] x[n−j] )

          = wi[n] + µ · x[n−i] e[n]
3.13 Structure of Linear Adaptive Predictor
3.13 Least Mean Square
o The pair below results in the popular least-mean-square (LMS) algorithm for linear adaptive prediction:

e[n] = x[n] − Σ_{j=1}^{p} wj[n] x[n−j]

wj[n+1] = wj[n] + µ · x[n−j] e[n]
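The LMS pair above can be run directly; in the following sketch I drive it with an AR(1) signal x[n] = 0.9 x[n−1] + v[n] (all parameter values are illustrative), and the single predictor tap settles near 0.9 without any knowledge of the autocorrelation function:

```python
import random

# Sketch: running the LMS pair on an AR(1) signal; the one tap
# should drift toward the true regression coefficient 0.9.
random.seed(0)
x = [0.0]
for _ in range(5000):
    x.append(0.9 * x[-1] + random.gauss(0.0, 1.0))

p, mu = 1, 0.01
w = [0.0] * p
for n in range(p, len(x)):
    e = x[n] - sum(w[j] * x[n - 1 - j] for j in range(p))  # prediction error
    for j in range(p):
        w[j] += mu * x[n - 1 - j] * e                      # LMS update
```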
3.14 Differential Pulse-Code Modulation (DPCM)
o Basic idea behind differential pulse-code modulation n Adjacent samples are often found to exhibit a high
degree of correlation. n If we can remove this redundancy between adjacent samples before encoding, a more efficient coded signal results. n One way to remove the redundancy is to use linear prediction. n Specifically, we encode e[n] instead of m[n], where e[n] = m[n] − m̂[n] and m̂[n] is the linear prediction of m[n].
3.14 DPCM
o For DPCM, the quantization error is on e[n], rather than on m[n] as for PCM.
o So the quantization error q[n] is expected to be smaller.
© Po-Ning [email protected] Chapter 3-131
3.14 DPCM
o Derive:

eq[n] = e[n] + q[n]

⇒ mq[n] = m̂[n] + eq[n]
        = m̂[n] + e[n] + q[n]
        = m[n] + q[n]

So we have the same relation between mq[n] and m[n] (as in Slide 3-110) but with a smaller q[n].
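The derivation can be verified step by step in code. This sketch uses a first-order predictor m̂[n] = mq[n−1] and a uniform quantizer on e[n] (step size and samples are made up), and checks mq[n] = m[n] + q[n] at every iteration:

```python
# Sketch of the DPCM relation m_q[n] = m[n] + q[n], where q[n] is the
# quantization error incurred on e[n] rather than on m[n].
def quantize(v, step=0.25):
    return step * round(v / step)

m = [0.0, 0.3, 0.7, 1.2, 1.4, 1.1]
mq = 0.0
for sample in m:
    m_hat = mq                    # prediction from the previous output
    e = sample - m_hat            # prediction error
    eq = quantize(e)              # DPCM quantizes e[n], not m[n]
    q = eq - e                    # quantization error on e[n]
    mq = m_hat + eq               # reconstructed sample
    assert abs(mq - (sample + q)) < 1e-12   # m_q[n] = m[n] + q[n]
```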
3.14 DPCM
o Notes n A DM system can be treated as a special case of DPCM: the prediction filter reduces to a single delay element, and the quantizer is single-bit.
3.14 DPCM
o Distortions due to DPCM n Slope overload distortion
o The input signal changes too rapidly for the prediction filter to track it.
n Granular noise
3.14 Processing Gain
o The DPCM system can be described by:

mq[n] = m[n] + q[n]

o So the output signal-to-noise ratio is:

SNRO = E[m²[n]] / E[q²[n]]

o We can rewrite SNRO as:

SNRO = ( E[m²[n]] / E[e²[n]] ) · ( E[e²[n]] / E[q²[n]] ) = Gp · SNRQ,

where e[n] = m[n] − m̂[n] is the prediction error.
3.14 Processing Gain
o In terminology,

Gp = E[m²[n]] / E[e²[n]]   (processing gain)

SNRQ = E[e²[n]] / E[q²[n]]   (signal-to-quantization-noise ratio)

Notably, SNRQ can be treated as the SNR for the system eq[n] = e[n] + q[n].
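For a concrete feel of the processing gain, consider an ideal one-tap predictor on an AR(1) source (a sketch of my own; not a slide example). With the optimal tap w1 = a, the prediction error reduces to the innovation v[n], so Gp = var(m)/var(v) = 1/(1 − a²):

```python
import math

# Sketch: processing gain of an ideal one-tap predictor on an AR(1)
# source m[n] = a*m[n-1] + v[n].  With w1 = a, e[n] = v[n], hence
#   G_p = E[m^2]/E[e^2] = 1/(1 - a^2).
a = 0.9
Gp = 1.0 / (1.0 - a * a)        # ~5.26
Gp_dB = 10 * math.log10(Gp)     # ~7.2 dB, within the 4-11 dB range quoted below
```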
3.14 Processing Gain
o Usually, the contribution of SNRQ to SNRO is fixed and limited. n One additional bit in quantization results in a 6-dB improvement. o Gp is the processing gain due to a good "prediction."
n The better the prediction is, the larger Gp is.
3.14 DPCM
o Final notes on DPCM n Comparing DPCM with PCM in the case of voice
signals, the improvement is around 4-11 dB, depending on the prediction order.
n The greatest improvement occurs in going from no prediction to first-order prediction, with some additional gain resulting from increasing the prediction order up to 4 or 5, after which little additional gain is obtained.
n For the same sampling rate (8 kHz) and signal quality, DPCM may provide a saving of about 8–16 kbps compared to standard PCM (64 kbps).
Speech Quality (Mean Opinion Scores)
Source: IEEE Communications Magazine, September 1997.
(Figure: mean opinion score, from "Unacceptable" to "Excellent," versus bit rate (2–64 kb/s) for various speech coders: FS-1015, MELP 2.4, FS-1016, JDC2, G.723.1, GSM/2, JDC, IS-54, IS-96, GSM, G.729, G.728, G.726, G.727, IS-641, and G.711; PCM and ADPCM appear at the high-rate end.)
3.14 DPCM
IS = Interim Standard GSM = Global System for Mobile Communications JDC = Japanese Digital Cellular FS = Federal Standard MELP = Mixed-Excitation Linear Prediction
3.15 Adaptive Differential Pulse-Code Modulation
o Adaptive prediction is used in DPCM. o Can we also combine adaptive quantization into DPCM to yield voice quality comparable to PCM at a 32 kbps bit rate? From the previous figure, the answer is YES. n 32 kbps: 4 bits per sample, at an 8 kHz sampling rate n 64 kbps: 8 bits per sample, at an 8 kHz sampling rate o So, "adaptive" in ADPCM means being responsive to the changing level and spectrum of the input speech signal.
3.15 Adaptive Quantization
o Adaptive quantization refers to a quantizer that operates with a time-varying step size Δ[n].
o Δ[n] is adjusted according to the power of input sample m[n]. n Power = variance, if m[n] is zero-mean.
n In practice, we can only obtain an estimate of E[m²[n]]:

Δ[n] = φ · √(E[m²[n]])
3.15 Adaptive Quantization
o The estimate of E[m2[n]] can be done in two ways: n Adaptive quantization with forward estimation (AQF)
o Estimate based on unquantized samples of the input signals.
n Adaptive quantization with backward estimation (AQB) o Estimate based on quantized samples of the input
signals.
3.15 AQF
o AQF is in principle a more accurate estimator. However it requires n an additional buffer to store unquantized samples for the
learning period. n explicit transmission of level information to the receiver
(the receiver, even without noise, only has the quantized samples).
n a processing delay (around 16 ms for speech) due to buffering and other operations for AQF.
o The above requirements can be relaxed by using AQB.
3.15 AQB
A possible drawback of a feedback system is its potential instability. However, stability in this system can be guaranteed if mq[n] is bounded.
3.15 APF and APB
o Likewise, the prediction approach used in ADPCM can be classified into: n Adaptive prediction with forward estimation (APF)
o Prediction based on unquantized samples of the input signals.
n Adaptive prediction with backward estimation (APB) o Prediction based on quantized samples of the input
signals. o The pros and cons of APF/APB are the same as those of AQF/AQB. o APB/AQB is the preferred combination in practical
applications.
3.15 ADPCM
Adaptive prediction with backward estimation (APB).
3.16 Computer Experiment: Adaptive Delta Modulation
o In this section, the simplest form of adaptive delta modulation (ADM) with AQB is simulated.
o A comparison with LDM (i.e., linear DM), where the step size is fixed, will also be performed.
(Block-diagram labels: e[n], eq[n], eq[n−1].) This figure may be incorrect.
3.16 Computer Experiment: Adaptive Delta Modulation
(Block-diagram labels: e[n], eq[n], eq[n−1].) I thus fix the figure in this slide.
3.16 Computer Experiment: Adaptive Delta Modulation
Δ[n] = Δ[n−1] × (1 + (1/2) eq[n] eq[n−1]),  if Δ[n−1] ≥ Δmin
Δ[n] = Δmin,                                if Δ[n−1] < Δmin

where Δ[n] is the step size at iteration n, and eq[n] is the 1-bit quantizer output, which equals ±1.

Setting: m(t) = 10 sin(2π (fs/100) t), ΔLDM = 1 and Δmin = 1/8.
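The ADM recursion above can be sketched as follows: the step size grows by 3/2 after two like-signed outputs and shrinks by 1/2 after a sign change, floored at Δmin (the flooring is a slight simplification of the slide's reset rule; function and variable names are mine):

```python
import math

# Sketch of the ADM encoder with eq[n] in {+1, -1} and an
# adaptive step size per the recursion above.
def adm_encode(m, delta_min=0.125):
    delta, eq_prev, mq, bits = delta_min, 1, 0.0, []
    for sample in m:
        eq = 1 if sample >= mq else -1                        # 1-bit quantizer
        delta = max(delta * (1 + 0.5 * eq * eq_prev), delta_min)
        mq += eq * delta                                      # accumulator
        bits.append(eq)
        eq_prev = eq
    return bits

fs = 1000
m = [10 * math.sin(2 * math.pi * (fs / 100) * n / fs) for n in range(200)]
bits = adm_encode(m)   # the step size adapts to track the 10-V tone
```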
3.16 Computer Experiment: Adaptive Delta Modulation
(Figure: LDM vs. ADM waveform tracking. LDM exhibits a fixed slope; ADM exhibits exponentially increasing and decreasing slopes.)

Observation: ADM achieves performance comparable to LDM at a much lower bit rate.
3.17 MPEG Audio Coding Standard
o The ADPCM and various voice coding techniques introduced above did not consider the human auditory perception.
o In practice, a consideration on human auditory perception can further improve the system performance (from the human standpoint).
o The MPEG-1 standard is capable of achieving transparent, "perceptually lossless" compression of stereophonic audio signals at a high sampling rate. n Human subjective tests show that a 6-to-1 compression ratio is "perceptually indistinguishable" from the original to human listeners.
3.17 Characteristics of Human Auditory System
o Psychoacoustic characteristics of human auditory system n Critical band
o The inner ear will scale the power spectra of incoming signals non-linearly in the form of limited frequency bands called the critical bands.
o Roughly, the inner ear can be modeled as 25 selective overlapping band-pass filters with bandwidth < 100Hz for the lowest audible frequencies, and up to 5kHz for the highest audible frequencies.
3.17 Characteristics of Human Auditory System
n Auditory masking o When a low-(power-)level signal (i.e., the maskee)
and a high-(power-)level signal (i.e., the masker) occur simultaneously in the same critical band, and are close to each other in frequency, the low-(power-)level signal will be made inaudible (i.e., masked) by the high-(power-)level signal, if the low-(power-)level one lies below a masking threshold.
(Figure: SMR and SNR for an R-bit quantizer within a critical band.)
3.17 Characteristics of Human Auditory System
o Masking threshold is frequency-dependent.
NMR (noise-to-mask ratio) = SMR – SNR
(in the gray-color critical band)
Within a critical band, the quantization noise is inaudible as long as the NMR for the pertinent quantizer is negative.
3.17 MPEG Audio Coding Standard
3.17 MPEG Audio Coding Standard
o Time-to-frequency mapping network n Divide the audio signal into a proper number of subbands, which is
a compromise design for computational efficiency and perceptual performance.
o Psychoacoustic model n Analyze the spectral content of the input audio signal and thereby
compute the signal-to-mask ratio. o Quantizer-coder
n Decide how to apportion the available number of bits for the quantization of the subband signals.
o Frame packing unit n Assemble the quantized audio samples into a decodable bit stream.
3.18 Summary and Discussion
o Sampling – transform analog waveform to discrete-time continuous wave n Nyquist rate
o Quantization – transform discrete-time continuous wave to discrete data. n Humans can only detect finite intensity differences.
o PAM, PDM and PPM o TDM (Time-Division Multiplexing) o PCM, DM, DPCM, ADPCM o Additional consideration in MPEG audio coding