
SHREE SATHYAM COLLEGE OF ENGINEERING AND TECHNOLOGY

DEPARTMENT OF ECE

EC 6501- DIGITAL COMMUNICATION

ALL UNITS NOTES

SEM/YEAR: V/ III

UNIT -1 SAMPLING & QUANTIZATION

PART –A

1. Explain the process of quantization and obtain the expression for the signal-to-quantization-noise ratio in the case of a uniform quantizer. (Nov/Dec 2016, April/May 2017)

The conversion of an analog sample of the signal into a digital form is called the quantizing process. The quantizing process has a twofold effect:

1) The peak-to-peak range of the input sample values is subdivided into a finite set of decision levels (or decision thresholds) that are aligned with the 'risers' of the staircase, and

2) the output is assigned a discrete value selected from a finite set of representation levels (or reconstruction levels) that are aligned with the 'treads' of the staircase.

There are 2 types of quantizers:

1) Uniform quantizer

2) Non-uniform quantizer

1) Uniform quantizer:

In uniform quantization, as in Figure 1(a), the separation between the decision thresholds and the separation between the representation levels of the quantizer have a common value called the step size.

There are 2 types of uniform quantizers:

1) Symmetric quantizer of midtread type.

2) Symmetric quantizer of midriser type.

1. Midtread type:

According to the staircase-like transfer characteristic of Figure 1(a), the decision thresholds of the quantizer are located at ±Δ/2, ±3Δ/2, ±5Δ/2, ... and the representation levels are located at 0, ±Δ, ±2Δ, ..., where Δ is the step size. Since the origin lies in the middle of a tread of the staircase, it is referred to as a symmetric quantizer of the midtread type.


2. Midriser type:

Figure 2(a) shows the staircase-like transfer characteristic in which the decision thresholds of the quantizer are located at 0, ±Δ, ±2Δ, ... and the representation levels are located at ±Δ/2, ±3Δ/2, ±5Δ/2, ..., where Δ is the step size.


Since in this case the origin lies in the middle of a riser of the staircase, it is referred to as a symmetric quantizer of the midriser type.

Both quantizers, midtread and midriser, are memoryless; that is, the quantizer output is determined only by the value of the corresponding input sample.

Overload level:

The overload level is the input level whose absolute value is one half of the peak-to-peak range of input sample values.

Quantization noise:

The use of quantization introduces an error, defined as the difference between the input signal m and the output signal v. This error is called quantization noise.

Let the quantizer input m be the sample value of a zero-mean random variable M. A quantizer g(·) maps the input random variable M of continuous amplitude into a discrete random variable V; their respective sample values are related by the equation

v = g(m)

Let the quantization error be denoted by the random variable Q of sample value q:

q = m − v, or Q = M − V

With the input M having zero mean, and the quantizer assumed to be symmetric, the quantizer output V and therefore the quantization error Q also have zero mean.

Quantization error Q:

Consider an input m of continuous amplitude in the range (−mmax, mmax). Assuming a uniform quantizer of the midriser type, we find that the step size of the quantizer is given by

Δ = 2mmax / L

where L is the total number of representation levels.

For a uniform quantizer, the quantization error Q will have its sample values bounded by −Δ/2 ≤ q ≤ Δ/2. If the step size Δ is sufficiently small, it is reasonable to assume that the quantization error Q is a uniformly distributed random variable.

Now express the probability density function of the quantization error Q:

fQ(q) = 1/Δ for −Δ/2 ≤ q ≤ Δ/2, and 0 otherwise ------------(1)

For this to be true, we must ensure that the incoming signal does not overload the quantizer. Then, with the mean of the quantization error being zero, its variance σQ² is the same as the mean-square value:

σQ² = E[Q²] ------------(2)

Substituting equation (1) into (2) and evaluating the expectation, we get

σQ² = Δ²/12 ------------(3)

Typically, the L-ary number k, denoting the kth representation level of the quantizer, is transmitted to the receiver in binary form.

Let R denote the number of bits per sample used in the construction of the binary code. Therefore,

L = 2^R --------(4)

Substituting the value of L from equation (4) into the expression for the step size gives

Δ = 2mmax · 2^−R

Now,

σQ² = Δ²/12 = (1/3) mmax² · 2^−2R

Let P be the average power of the message signal m(t). The output signal-to-noise ratio of the uniform quantizer is then

(SNR)o = P / σQ²

= (3P / mmax²) · 2^2R

The above equation shows that the output signal-to-noise ratio of the quantizer increases exponentially with increasing number of bits per sample R.
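The derivation above can be checked numerically. The following Python sketch (illustrative only and not part of the original notes; the function name midrise_quantize and the chosen values of mmax and R are our own) quantizes a full-load sinusoid, for which P = mmax²/2 and hence (SNR)o = 1.5 · 2^2R, i.e., about 6.02R + 1.76 dB:

```python
import numpy as np

def midrise_quantize(m, m_max, R):
    """Uniform mid-riser quantizer: L = 2**R levels over (-m_max, m_max)."""
    L = 2 ** R
    delta = 2 * m_max / L                     # step size: 2*m_max / L
    k = np.floor(m / delta)                   # decision-interval index
    v = (k + 0.5) * delta                     # representation level (mid-riser)
    return np.clip(v, -m_max + delta / 2, m_max - delta / 2)

m_max, R = 1.0, 8
t = np.arange(100_000)
m = m_max * np.sin(2 * np.pi * 0.01234 * t)   # full-load sinusoid, P = m_max**2 / 2
q = m - midrise_quantize(m, m_max, R)         # quantization error samples
snr_db = 10 * np.log10(np.mean(m ** 2) / np.mean(q ** 2))
print(snr_db)   # close to 6.02*R + 1.76 dB for R = 8
```

Each extra bit per sample raises the measured ratio by about 6 dB, as the exponential law predicts.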


2. Describe the PCM waveform coder and decoder with a neat sketch and list the merits compared with analog coders. (Nov/Dec 2015, April/May 2017)

Pulse-Code Modulation

PCM is a discrete-time, discrete-amplitude waveform-coding process, by means

of which an analog signal is directly represented by a sequence of coded pulses.

Specifically, the transmitter consists of two components: a pulse-amplitude

modulator followed by an analog-to-digital (A/D) converter. The latter component itself

embodies a quantizer followed by an encoder. The receiver performs the inverse of

these two operations: digital-to- analog (D/A) conversion followed by pulse-amplitude

demodulation. The communication channel is responsible for transporting the encoded

pulses from the transmitter to the receiver. Figure 3, a block diagram of the PCM,

shows the transmitter, the transmission path from the transmitter output to the receiver

input, and the receiver. It is important to realize, however, that once distortion in the

form of quantization noise is introduced into the encoded pulses, there is absolutely

nothing that can be done at the receiver to compensate for that distortion.

Sampling in the Transmitter

The incoming message signal is sampled with a train of rectangular pulses. To

ensure perfect reconstruction of the message signal at the receiver, the sampling rate

must be greater than twice the highest frequency component W of the message signal, in accordance with the sampling theorem.

A low-pass anti-aliasing filter is used at the front end of the pulse-amplitude

modulator to exclude frequencies greater than W before sampling. Thus, the

application of sampling permits the reduction of the continuously varying message signal to a limited number of discrete values per second.

Quantization in the Transmitter

The PAM representation of the message signal is then quantized in the analog-to-digital converter, thereby providing a new representation of the signal that is discrete in both time and amplitude.

By using a non-uniform quantizer with the feature that the step size increases as the separation from the origin of the input–output amplitude


characteristic of the quantizer is increased, the large end-steps of the quantizer can take care of possible excursions of the voice signal into the large-amplitude ranges that occur relatively infrequently.

Encoding in the Transmitter

Through the combined use of sampling and quantization, the analog message signal becomes limited to a discrete set of values, but not in the form best suited to transmission over a telephone line or radio link.

The last signal-processing operation in the transmitter is that of line coding, the

purpose of which is to represent each binary codeword by a sequence of pulses; for example, symbol 1 is represented by the presence of a pulse and symbol 0 is represented by the absence of a pulse.

Inverse Operations in the PCM Receiver

The first operation in the receiver of a PCM system is to regenerate (i.e.,

reshape and clean up) the received pulses. These clean pulses are then regrouped into code words and decoded (i.e., mapped back) into a quantized pulse-amplitude-modulated signal.

PCM Regeneration along the Transmission Path

The most important feature of a PCM system is its ability to control the effects of distortion and noise produced by transmitting a PCM signal through the channel connecting the receiver to the transmitter. This capability is accomplished by

reconstructing the PCM signal through a chain of regenerative repeaters, located at

sufficiently close spacing along the transmission path.

Three basic functions are performed in a regenerative repeater: equalization,

timing, and decision making. The equalizer shapes the received pulses so as to

compensate for the effects of amplitude and phase distortions produced by the non-ideal

transmission characteristics of the channel. The timing circuitry provides a periodic pulse

train, derived from the received pulses, for sampling the equalized pulses at the instants of time where the signal-to-noise ratio is a maximum. Each sample so extracted is compared with a predetermined threshold in the decision-making device. In each bit interval, a decision is then made on whether the received symbol is a 1 or a 0 by observing whether the threshold is exceeded. If the threshold is exceeded, a clean new pulse representing symbol 1 is transmitted to the next repeater; otherwise, a clean new pulse representing symbol 0 is transmitted.


3. Describe the process of sampling and how the message is reconstructed from its samples. Also illustrate the effect of aliasing with a neat sketch. (Nov/Dec 2015)

Sampling Theory

The sampling process is usually described in the time domain; it is an operation that is basic to digital signal processing and digital communications. Through use of the sampling process, an analog signal is converted into a corresponding sequence of samples that are usually spaced uniformly in time. The sampling rate must be properly chosen in relation to the bandwidth of the message signal, so that the sequence of samples uniquely defines the original analog signal.

Frequency-Domain Description of Sampling

Consider an arbitrary signal g(t) of finite energy, which is specified for all time t.

A segment of the signal g(t) is shown in Figure 6(a). Suppose that we sample the signal g(t) instantaneously and at a uniform rate, once every Ts seconds. Consequently, we obtain an infinite sequence of samples spaced Ts seconds apart and denoted by {g(nTs)}, where n takes on all possible integer values, positive as well as negative. We refer to Ts as the sampling period, and to its reciprocal fs = 1/Ts as the sampling rate.

For obvious reasons, this ideal form of sampling is called instantaneous sampling.

Let gδ(t) denote the signal obtained by individually weighting the elements of a periodic sequence of delta functions spaced Ts seconds apart by the sequence of numbers {g(nTs)}, as shown by

gδ(t) = Σn g(nTs) δ(t − nTs)

The Fourier transform of the delta function δ(t − nTs) is equal to exp(−j2πnfTs). Letting Gδ(f) denote the Fourier transform of gδ(t), we may write

Gδ(f) = fs Σm G(f − m·fs)

where G(f) is the Fourier transform of the original signal g(t) and fs is the sampling rate.


The process of uniformly sampling a continuous-time signal of finite energy

results in a periodic spectrum with a frequency equal to the sampling rate.

The Sampling Theorem

The sampling theorem for strictly band-limited signals of finite energy may be stated in two equivalent parts:

1. A band-limited signal of finite energy that has no frequency components higher than W hertz is completely described by specifying the values of the signal at instants of time separated by 1/2W seconds.

2. A band-limited signal of finite energy that has no frequency components higher than W hertz is completely recovered from a knowledge of its samples taken at the rate of 2W samples per second.
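The second part of the theorem can be demonstrated with the ideal interpolation formula g(t) = Σn g(nTs) sinc((t − nTs)/Ts). The Python sketch below is illustrative only: the tone frequency, the sampling rate of 2.5W, and the truncation of the infinite sum to a finite window are all our own choices.

```python
import numpy as np

def sinc_reconstruct(samples, t_samples, Ts, t):
    # g(t) = sum_n g(nTs) * sinc((t - nTs) / Ts), truncated to the samples given
    return np.sum(samples[:, None] * np.sinc((t[None, :] - t_samples[:, None]) / Ts),
                  axis=0)

W = 1.0                       # signal band-limited to W hertz
fs = 2.5 * W                  # sampling rate above the Nyquist rate 2W
Ts = 1.0 / fs
n = np.arange(-200, 200)
t_n = n * Ts                  # sampling instants
g_n = np.cos(2 * np.pi * 0.8 * t_n)          # 0.8 Hz tone, within the band W
t = np.linspace(-5.0, 5.0, 101)              # evaluation grid near the center
g_hat = sinc_reconstruct(g_n, t_n, Ts, t)
err = np.max(np.abs(g_hat - np.cos(2 * np.pi * 0.8 * t)))
print(err)   # small; limited only by truncating the infinite interpolation sum
```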

Aliasing Phenomenon

Aliasing refers to the phenomenon of a high-frequency component in the spectrum of the signal seemingly taking on the identity of a lower frequency in the spectrum of its sampled version, as illustrated in Figure 8. The aliased spectrum, shown by the solid curve in Figure 8(b), pertains to the undersampled version of the message signal represented by the spectrum of Figure 8(a). To combat the effects of aliasing in practice, we may use two corrective measures:

1. Prior to sampling, a low-pass anti-aliasing filter is used to attenuate those high-frequency components of the signal that are not essential to the information being conveyed by the message signal g(t).

2. The filtered signal is sampled at a rate slightly higher than the Nyquist rate.
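The impersonation effect can be seen directly in the sample values. In this Python sketch (the specific frequencies are our own illustrative numbers), a 6 Hz tone sampled at only 8 Hz produces exactly the same samples as a 2 Hz tone, so the two are indistinguishable after sampling:

```python
import numpy as np

fs = 8.0                 # sampling rate (Hz), below the Nyquist rate for 6 Hz
f_high = 6.0             # undersampled component
f_alias = fs - f_high    # 2 Hz: the lower frequency it "impersonates"

n = np.arange(64)
x_high = np.cos(2 * np.pi * f_high * n / fs)
x_alias = np.cos(2 * np.pi * f_alias * n / fs)
same = np.allclose(x_high, x_alias)
print(same)   # True: cos(2*pi*6n/8) and cos(2*pi*2n/8) agree at every sample
```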


The use of a sampling rate higher than the Nyquist rate also has the beneficial effect of easing the design of the reconstruction filter used to recover the original signal from its sampled version. Consider the example of a message signal that has been anti-alias (low-pass) filtered, resulting in the spectrum shown in Figure 9(a). The corresponding spectrum of the instantaneously sampled version of the signal is shown in Figure 9(b), assuming a sampling rate higher than the Nyquist rate. According to Figure 9(b), we readily see that the design of the reconstruction filter may be specified as follows:

1. The reconstruction filter is low-pass with a passband extending from −W to W, which is itself determined by the anti-aliasing filter.

2. The reconstruction filter has a transition band extending (for positive frequencies) from W to (fs − W), where fs is the sampling rate.

4. Explain TDM and logarithmic companding of speech signals. (May/June 2016, Nov/Dec 2016, April/May 2017)

The time-division multiplex (TDM) system enables the joint utilization of a common channel by a plurality of independent message signals without mutual interference. The concept of TDM is illustrated by the block diagram shown in Fig. 10.

Each input message signal is first restricted in bandwidth by a low-pass pre-alias filter to remove the frequencies that are nonessential to an adequate signal representation. The pre-alias filter outputs are then applied to a commutator, which is usually implemented using electronic switching circuitry. The function of the commutator is two-fold:

(1) to take a narrow sample of each of the N input messages at a rate fs that is slightly higher than 2W, where W is the cutoff frequency of the pre-alias filter, and


(2) to sequentially interleave these N samples inside a sampling interval Ts = 1/fs. Indeed, this latter function is the essence of the time-division multiplexing operation.

Following the commutation process, the multiplexed signal is applied to a

pulse-amplitude modulator, the purpose of which is to transform the multiplexed

signal into a form suitable for transmission over the communication channel.

The N message signals to be multiplexed have similar spectral properties. Then the sampling rate for each message signal is determined in accordance with the sampling theorem. Let Ts denote the sampling period so determined for each message signal, and let τ denote the time spacing between adjacent samples in the time-multiplexed signal. It is rather obvious that we may set

τ = Ts / N

Hence, the use of time-division multiplexing introduces a bandwidth expansion factor N, because the scheme must squeeze N samples derived from N independent message signals into a time slot equal to one sampling interval.

At the receiving end of the system, the received signal is applied to a pulse

amplitude demodulator, which performs the reverse operation of the pulse

amplitude modulator. The short pulses produced at the pulse demodulator output

are distributed to the appropriate low-pass reconstruction filters by means of a

decommutator, which operates in synchronism with the commutator in the

transmitter. This synchronization is essential for a satisfactory operation of the TDM system, and provisions have to be made for it.
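The commutator/decommutator pair described above amounts to interleaving and de-interleaving sample streams. A minimal Python sketch (the function names are our own; synchronization is implicit in both sides sharing the channel count N):

```python
import numpy as np

def tdm_commutator(signals):
    """Interleave N equally-sampled message streams into one frame sequence."""
    # frame n carries sample n of every channel: s0[n], s1[n], ..., sN-1[n]
    return np.stack(signals, axis=1).reshape(-1)

def tdm_decommutator(multiplexed, n_channels):
    """Recover the N streams; must operate in synchronism with the commutator."""
    return multiplexed.reshape(-1, n_channels).T

a = np.array([1.0, 2.0, 3.0])
b = np.array([10.0, 20.0, 30.0])
mux = tdm_commutator([a, b])           # [1, 10, 2, 20, 3, 30]
a_hat, b_hat = tdm_decommutator(mux, 2)
print(mux)
```

Note that the multiplexed stream carries N samples per sampling interval, which is exactly the source of the bandwidth expansion factor N.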

Non-uniform quantization (robust quantization):

In telephonic communication, it is preferable to use variable separation between the representation levels. For example, the range of voltages covered by voice signals, from the peaks of loud talk to the weak passages of weak talk, is on the order of 1000 to 1. The use of a non-uniform quantizer is equivalent to passing the baseband signal through a compressor and then applying the compressed signal to a uniform quantizer.


Logarithmic companding of speech signals:

A particular form of compression law that is used in practice is the µ-law, which is defined by

|v| = ln(1 + µ|m|) / ln(1 + µ)

where m and v are the normalized input and output voltages and µ is a positive constant. In Figure 11(a), the µ-law is plotted for 3 different values of µ. The case of uniform quantization corresponds to µ = 0.

For a given value of µ, the reciprocal slope of the compression curve, which defines the quantum steps, is given by the derivative of |m| with respect to |v|, that is,

d|m| / d|v| = (ln(1 + µ) / µ) (1 + µ|m|)

A second compression law used in practice is the A-law, defined by

|v| = A|m| / (1 + ln A) for 0 ≤ |m| ≤ 1/A

|v| = (1 + ln(A|m|)) / (1 + ln A) for 1/A ≤ |m| ≤ 1

The reciprocal slope of this second compression curve is given by the derivative of |m| with respect to |v|, that is,

d|m| / d|v| = (1 + ln A) / A for 0 ≤ |m| ≤ 1/A

d|m| / d|v| = (1 + ln A)|m| for 1/A ≤ |m| ≤ 1

To restore the signal samples to their correct relative level, we use a device in the receiver with a characteristic complementary to the compressor; this device is called an expander.


Model of non-uniform quantizer:

Input → Compressor → Uniform Quantizer → Expander → Output

The combination of a compressor and an expander is called a compander. Since the compression and expansion laws are inverses, the expander output is equal to the compressor input. Figure 12 depicts the transfer characteristics of the compressor, uniform quantizer, and expander.
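The µ-law pair can be sketched in a few lines of Python (µ = 255 is the value standardized for telephony; the function names are our own). The check at the end confirms that the expander is the exact inverse of the compressor, as required of a compander:

```python
import numpy as np

MU = 255.0   # mu value standardized for telephony (ITU-T G.711)

def mu_compress(m):
    """mu-law compressor: |v| = ln(1 + mu|m|) / ln(1 + mu), sign preserved."""
    return np.sign(m) * np.log1p(MU * np.abs(m)) / np.log1p(MU)

def mu_expand(v):
    """Expander: exact inverse of the compressor."""
    return np.sign(v) * ((1.0 + MU) ** np.abs(v) - 1.0) / MU

m = np.linspace(-1.0, 1.0, 1001)        # normalized input voltages
err = np.max(np.abs(mu_expand(mu_compress(m)) - m))
print(err)   # essentially zero: expander output equals compressor input
```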

UNIT-II WAVEFORM CODING

PART –B

1. Describe and illustrate delta modulation and its quantization error. [NOV/DEC 2015, 2016]

Delta modulation (DM) is the one-bit version of DPCM. Delta modulation transmits only one bit per sample. Delta modulation provides a staircase approximation to the oversampled version of an input baseband signal. The difference between the input and the approximation is quantized into two levels, namely ±δ, corresponding to positive and negative differences. If the approximation falls below the signal at any sample, it is increased by δ; on the other hand, if the approximation lies above the signal, it is diminished by δ. Provided the signal does not change too rapidly from sample to sample, we find that the staircase approximation remains within ±δ of the input signal.

The step size ∆ of the quantizer is related to δ by

∆ = 2δ


Denote the input signal as x(t) and the staircase approximation to it as u(t). The basic principle of delta modulation may be formalized in the following set of discrete-time relations:

e(nTs) = x(nTs) − u(nTs − Ts)

b(nTs) = δ sgn[e(nTs)]

u(nTs) = u(nTs − Ts) + b(nTs)

where Ts is the sampling period, e(nTs) is the prediction error (the difference between the present sample x(nTs) and the latest approximation to it), and x(nTs) is the sampled version of x(t).

DM TRANSMITTER

It consists of a summer, a two-level quantizer, and an accumulator interconnected as shown in the figure. Assume that the accumulator is initially set to zero. In the summer, the accumulator adds the quantizer output (±δ) to the previous sample approximation:

u(nTs) = u(nTs − Ts) ± δ

At each sampling instant the accumulator increments the approximation to the input signal by ±δ, depending upon the binary output of the modulator. The accumulator does the best it can to track the input by an increment of +δ or −δ at a time.

DM RECEIVER

The staircase approximation u(t) is reconstructed by passing the incoming

sequence of positive and negative pulses through an accumulator, in a manner similar to that used in the transmitter. The out-of-band quantization noise in the high-frequency staircase waveform u(t) is rejected by passing it through a low-pass filter with a bandwidth equal to the original signal bandwidth.


Delta modulation offers two unique features: (1) a one-bit codeword for the output, which eliminates the need for word framing, and (2) simplicity of design for both the transmitter and receiver.
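A minimal Python sketch of the DM transmitter and receiver described above (illustrative parameter values; the low-pass post-filter at the receiver is omitted). With a slowly varying input and a step δ larger than the per-sample slope, the decoded staircase stays close to the signal:

```python
import numpy as np

def dm_encode(x, delta):
    """Delta modulation: one bit per sample; the staircase tracks the input."""
    u_prev, bits = 0.0, []
    for sample in x:
        b = 1 if sample >= u_prev else 0        # sign of the prediction error
        u_prev += delta if b else -delta        # accumulator: u(n) = u(n-1) +/- delta
        bits.append(b)
    return np.array(bits)

def dm_decode(bits, delta):
    """Receiver accumulator, mirroring the transmitter."""
    steps = np.where(bits == 1, delta, -delta)
    return np.cumsum(steps)

t = np.arange(200)
x = np.sin(2 * np.pi * t / 100)                 # max slope ~0.063 per sample
u = dm_decode(dm_encode(x, delta=0.1), delta=0.1)
print(np.max(np.abs(u - x)))   # staircase hugs the signal (no slope overload)
```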

QUANTIZATION ERROR

Delta modulation is subject to two types of error:

1) slope-overload distortion 2) granular noise

SLOPE OVERLOAD DISTORTION

Let q(nTs) denote the quantizing error:

u(nTs) = x(nTs) + q(nTs)

Eliminating u(nTs − Ts), we may express the prediction error e(nTs) as

e(nTs) = x(nTs) − x(nTs − Ts) − q(nTs − Ts)

We find that when the step size ∆ = 2δ is too small for the staircase approximation u(t) to follow a steep segment of the input waveform x(t), u(t) falls behind x(t). This condition is called slope overload, and the resulting quantization error is called slope-overload distortion.

GRANULAR NOISE

In contrast to slope-overload distortion, granular noise occurs when the step size ∆ is too large relative to the local slope characteristics of the input waveform x(t), thereby causing the staircase approximation u(t) to hunt around a relatively flat segment of the input waveform.

2. Explain how adaptive delta modulation performs better and gains more SNR than delta modulation. [Nov/Dec 2016, April/May 2017]

The performance of the delta modulator can be improved significantly by making the step size of the modulator time-varying. During a steep segment of the input signal the step size is increased; conversely, when the input signal is varying slowly, the step size is reduced. In this way, the step size is adapted to the level of the input signal; the resulting scheme is called adaptive delta modulation (ADM).

There are several ADM schemes to adjust the step size:

1) A discrete set of values is provided for the step size.

2) A continuous range of step-size variation is provided.

In the summer, the accumulator adds the quantizer output (±δ) to the previous sample approximation.


At each sampling instant the accumulator increments the approximation to the input signal by ±δ, depending upon the binary output of the modulator. The accumulator can track the input by an increment of +δ or −δ at a time. In practical implementations of the system, the step size ∆(nTs) = 2δ(nTs) is constrained to lie between minimum and maximum values:

δmin ≤ δ(nTs) ≤ δmax

The upper limit δmax controls the amount of slope-overload distortion; the lower limit δmin controls the amount of idle channel noise. The adaptation rule for δ(nTs) can be generally expressed as

δ(nTs) = g(nTs) δ(nTs − Ts)

where g(nTs) is a time-varying gain.

The transmitted output is applied to the receiver input, where the step size is recovered from the received bit sequence. The staircase approximation u(t) is reconstructed by passing the incoming sequence of positive and negative pulses through an accumulator in a manner similar to that used in the transmitter. The out-of-band quantization noise in the high-frequency staircase waveform u(t) is rejected by passing it through a low-pass filter with a bandwidth equal to the original signal bandwidth.
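One common discrete adaptation rule multiplies the step by a constant when successive bits agree (a steep segment) and divides it when they alternate (a flat segment), keeping δ within [δmin, δmax]. The Python sketch below uses that rule; the factor k = 1.5 and the limit values are arbitrary illustrative choices, not from the notes:

```python
import numpy as np

def adm_encode(x, d_min=0.01, d_max=1.0, k=1.5):
    """Adaptive DM sketch: step grows by k on repeated bits, shrinks by 1/k
    on alternating bits, constrained to d_min <= delta <= d_max."""
    u, d, prev_b = 0.0, d_min, 1
    bits, steps = [], []
    for s in x:
        b = 1 if s >= u else 0
        # adapt the step size before applying it
        d = min(d * k, d_max) if b == prev_b else max(d / k, d_min)
        u += d if b else -d            # accumulator with the adapted step
        prev_b = b
        bits.append(b)
        steps.append(d)
    return np.array(bits), np.array(steps)

t = np.arange(400)
x = np.sin(2 * np.pi * t / 200)
bits, steps = adm_encode(x)
print(steps.min(), steps.max())   # every step stays inside [d_min, d_max]
```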

3. Explain the DPCM transmitter and receiver. (April/May 2017)

For the digitization of a voice or video signal, the signal is sampled at a rate slightly higher than the Nyquist rate. The resulting sampled signal is then found to exhibit a high correlation between adjacent samples.

The meaning of this high correlation is that, in an average sense, the signal does not change rapidly from one sample to the next, with the result that the difference between adjacent samples has a variance that is smaller than the variance of the signal itself.

When these highly correlated samples are encoded, as in a standard PCM system, the resulting encoded signal contains redundant information: symbols that are not absolutely essential to the transmission of information are generated as a result of the encoding process. It is advantageous to remove this redundancy before encoding.


DPCM TRANSMITTER

DPCM works on the principle of prediction; i.e., the value of the present sample is predicted from the previous samples.

A baseband signal x(t) is sampled at the rate fs = 1/Ts to produce a sequence of correlated samples spaced Ts apart. Let the sequence be denoted by x(nTs), where n takes integer values. To exploit the correlation between adjacent samples, a differential quantization scheme is used, in which the quantizer input is

e(nTs) = x(nTs) − x̂(nTs)

The predicted value x̂(nTs) is produced by the predictor, whose input consists of the quantized version of the input signal x(nTs); the difference signal e(nTs) is called the prediction error.

By encoding the quantizer output we obtain an important variation of PCM known as differential pulse-code modulation (DPCM). The quantizer output may be represented as

u(nTs) = Q[e(nTs)]

= e(nTs) + q(nTs) --------(1)

where q(nTs) is the quantization error. The quantizer output u(nTs) is added to the predicted value x̂(nTs) to produce the predictor input

v(nTs) = x̂(nTs) + u(nTs)

= x(nTs) + q(nTs)

That is, the predictor input differs from the original input signal x(nTs) only by the quantization error q(nTs). If the prediction is good, the variance of the prediction error e(nTs) will be smaller than the variance of x(nTs).

DPCM RECEIVER

The receiver reconstructs the quantized version of the input. It consists of a decoder to reconstruct the quantized error signal. The quantized version of the original input is reconstructed from the decoder output using the same predictor as used in the transmitter.


In the absence of channel noise, the encoded signal at the receiver input is identical to the encoded signal at the transmitter output. The receiver output then differs from the original input x(nTs) only by the quantizing error q(nTs) incurred as a result of quantizing the prediction error e(nTs). The transmitter and receiver thus operate on the same sequence of samples u(nTs).

SNR IN DPCM:

The output signal-to-quantization-noise ratio of a signal coder is

(SNR)o = σx² / σq²

where σx² is the variance of the original input x(nTs) and σq² is the variance of the quantization error q(nTs). This ratio may be rewritten as

(SNR)o = (σx² / σE²)(σE² / σq²)

(SNR)o = Gp (SNR)p

where (SNR)p is the prediction-error-to-quantization-noise ratio,

(SNR)p = σE² / σq²

and Gp is the prediction gain produced by the differential quantization scheme, defined by

Gp = σx² / σE²

where σE² is the variance of the prediction error e(nTs).
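The prediction gain can be demonstrated with a first-order DPCM loop in Python. This is an illustrative sketch: the predictor coefficient a, the step δ, and the AR(1) test source are our own choices. Because the source is highly correlated, the variance of the prediction error comes out much smaller than the variance of the input:

```python
import numpy as np

def dpcm(x, a=0.95, delta=0.05):
    """First-order DPCM: predict x_hat = a * v_prev, quantize the error."""
    v_prev = 0.0
    errs, recon = [], []
    for sample in x:
        pred = a * v_prev
        e = sample - pred                    # prediction error e(nTs)
        u = delta * np.round(e / delta)      # uniform quantizer output u(nTs)
        v_prev = pred + u                    # predictor input v(nTs) = x_hat + u
        errs.append(e)
        recon.append(v_prev)
    return np.array(errs), np.array(recon)

# highly correlated test source: first-order autoregressive process
rng = np.random.default_rng(0)
N = 50_000
w = 0.1 * rng.standard_normal(N)
x = np.empty(N)
x[0] = 0.0
for n in range(1, N):
    x[n] = 0.95 * x[n - 1] + w[n]

errs, recon = dpcm(x)
G_p = np.var(x) / np.var(errs)    # prediction gain: sigma_x^2 / sigma_E^2
print(G_p)                        # well above 1 for this correlated source
```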

4. Explain ADPCM and illustrate adaptive quantization with forward estimation (AQF) and adaptive quantization with backward estimation (AQB).

(or)

Illustrate how the adaptive time-domain coder codes speech at a low bit rate and compare it with the frequency-domain coder. [NOV/DEC 2015]

Reduction in the number of bits per sample from 8 to 4 involves the combined use of adaptive quantization and adaptive prediction. Adaptive means being responsive to the changing level and spectrum of the input speech signal.


The variation of performance with speakers and speech material, together with variations in signal level, is inherent in the speech communication process. A digital coding scheme that uses both adaptive quantization and adaptive prediction is called adaptive differential pulse-code modulation (ADPCM).

The term "adaptive quantization" refers to a quantizer that operates with a time-varying step size ∆(nTs), where Ts is the sampling period. The step size ∆(nTs) is varied so as to match the variance of the input signal x(nTs); we write this as

∆(nTs) = Φ X(nTs)

where Φ is a constant and X(nTs) is an estimate of the standard deviation of the input. The problem of adaptive quantization, according to the above equation, is one of estimating X(nTs) continuously. To proceed with the application of the above equation, we may compute the estimate in one of two ways:

1. Unquantized samples of the input signal are used to derive forward estimates of X(nTs).

2. Samples of the quantizer output are used to derive backward estimates of X(nTs).

ADAPTIVE QUANTIZATION

The respective quantization schemes are referred to as adaptive quantization with forward estimation (AQF) and adaptive quantization with backward estimation (AQB).

ADAPTIVE QUANTIZATION WITH FORWARD ESTIMATION

The AQF scheme first goes through a learning period by buffering unquantized samples of the input speech signal. The samples are released after the estimate X(nTs) has been obtained; this estimate is obviously independent of quantizing noise. Therefore we find that the step size ∆(nTs) obtained from AQF is more reliable than that from AQB. However, the use of AQF requires the explicit transmission of level information to a remote decoder.


This burdens the system with additional side information that has to be transmitted to the receiver. A processing delay in the encoding operation also results from the use of AQF. The problems of level transmission, buffering, and delay intrinsic to AQF are all avoided in the AQB scheme by using the quantizer output to extract information for the computation of the step size ∆(nTs).

ADAPTIVE QUANTIZATION WITH BACKWARD ESTIMATION

An adaptive quantizer with backward estimation represents a nonlinear feedback system, so it is not obvious that the system will be stable. The system is indeed stable in the sense that if the quantizer input x(nTs) is bounded, then so are the backward estimate X(nTs) and the corresponding step size ∆(nTs).

ADAPTIVE PREDICTION

The use of adaptive prediction in ADPCM is justified because speech signals are inherently nonstationary. The two schemes for performing adaptive prediction are: 1) adaptive prediction with forward estimation (APF), and 2) adaptive prediction with backward estimation (APB).

ADAPTIVE PREDICTION WITH FORWARD ESTIMATION (APF)

In APF, unquantized samples of the input signal are used to derive estimates of the predictor coefficients. In the APF scheme, N unquantized samples of the input speech are first buffered and then released after the computation of M predictor coefficients that are optimized for the buffered segment of input samples.


The choice of M involves a compromise between an adequate prediction gain and an acceptable amount of side information. Likewise, the choice of learning period (buffer length) N involves a compromise between the rate at which information on the predictor coefficients must be updated and transmitted to the receiver.

ADAPTIVE PREDICTION WITH BACKWARD ESTIMATION (APB)

APF suffers from the same intrinsic disadvantages as AQF; these disadvantages are eliminated by using the APB scheme shown in the figure below. Since the optimum predictor coefficients are estimated on the basis of quantized and transmitted data, they can be updated as frequently as desired, e.g., from sample to sample; APB is therefore the preferred method of prediction. The adaptive predictor represents the mechanism for updating the predictor coefficients.

Let y(nTs) denote the quantizer output, where Ts is the sampling period and n is the time index. Then

u(nTs) = x̂(nTs) + y(nTs)

where x̂(nTs) is the prediction of the speech input sample x(nTs). The above equation can be rewritten as

y(nTs) = u(nTs) - x̂(nTs)

Here u(nTs) represents a sample value of the predictor input, x̂(nTs) a sample value of the predictor output, and y(nTs) the prediction error.

The structure of the predictor is assumed to be a tapped delay line of order M, with the coefficients updated according to

ĥk((n+1)Ts) = ĥk(nTs) + µ y(nTs) u(nTs - kTs),  k = 1, 2, . . . , M

where µ is the adaptation constant.

We set all of the predictor coefficients equal to zero at n = 0.

The correction term consists of the product y(nTs) u(nTs - kTs) scaled by the adaptation constant µ. Since µ is small, the correction term decreases with the number of iterations n for stationary speech inputs and small quantization effects.
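The coefficient update described above can be sketched in Python. This is a minimal illustration, not the ADPCM standard: the quantizer is omitted, so the raw prediction error stands in for the quantizer output y(nTs), and the test signal, the order M and the adaptation constant µ are arbitrary choices of mine.

```python
import numpy as np

def lms_update(h, u_past, x, mu):
    """One adaptation step: predict x from past samples, form the
    prediction error, and apply the correction term mu * error * past sample."""
    x_hat = np.dot(h, u_past)      # predictor output
    e = x - x_hat                  # prediction error (stands in for y(nTs))
    return h + mu * e * u_past, e

# Stationary AR(1) test signal; all coefficients start at zero (n = 0).
rng = np.random.default_rng(0)
N, M, mu = 5000, 4, 0.01
x = np.zeros(N)
for n in range(1, N):
    x[n] = 0.9 * x[n - 1] + rng.standard_normal()

h = np.zeros(M)
err = np.zeros(N - M)
for n in range(M, N):
    h, err[n - M] = lms_update(h, x[n - M:n][::-1], x[n], mu)
```

After adaptation the leading coefficient approaches the AR parameter 0.9 and the error power drops well below the signal power, illustrating why the correction term shrinks as the predictor converges.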


5.Explain Linear Predictive Coding.

Linear prediction provides the basis of an important source coding technique for the digitization of speech signals. This technique, known as linear predictive vocoding, relies on a parametric, physical model of the speech production process.

The model assumes that the sound-generating mechanism is linearly separable from the intelligence-modulation mechanism. The precise form of the excitation depends on whether the speech sound is voiced or unvoiced.

Voiced sounds are produced by forcing air through the glottis with the tension of the vocal cords adjusted so that they vibrate in a relaxation oscillation, thereby producing quasi-periodic pulses of air that excite the vocal tract.

Unvoiced sounds are generated by forming a constriction at some point in the vocal tract and forcing air through the constriction at a high enough velocity to produce turbulence.

Examples of voiced and unvoiced sounds are A and S.

The speech waveform in figure (a) below is the result of the utterance "every salt breeze comes from the sea" by a male subject.

The waveform of fig (b) corresponds to the "A" segment in the word "salt" and fig (c) corresponds to the "S" segment.

The generation of a voiced sound is modeled as the response of the vocal tract filter excited with a periodic sequence of impulses spaced by a fundamental period.


A linear predictive vocoder consists of a transmitter and a receiver, with the block diagram shown in the figure below.

The transmitter first performs analysis on the input speech signal, block by block. Each block is 10-30 ms long, over which the speech production process may be treated as essentially stationary.

The parameters resulting from the analysis, namely the prediction-error filter (analyzer) coefficients, a voiced/unvoiced parameter, and the pitch period, provide a complete description of the particular segment of the input speech signal. A digital representation of these parameters constitutes the transmitted signal.

The receiver first performs decoding, followed by synthesis of the speech signal. The standard result of this analysis/synthesis is an artificial-sounding reproduction of the original speech signal.

6.Briefly explain about Prediction Filter.

Prediction constitutes a special form of estimation: the requirement is to use a finite set of present and past samples of a stationary process to predict a sample of the process in the future.

The prediction is linear if it is a linear combination of the given samples of the process; attention here is confined to linear predictors. The filter designed to perform the prediction is called a predictor. The difference between the actual sample of the process at the (future) time of interest and the predictor output is called the prediction error.

Consider the random samples Xn-1, Xn-2, . . . , Xn-M drawn from a stationary process X(t); the requirement is to make a prediction of the sample Xn. Let X̂n denote the random variable resulting from this prediction:

X̂n = Σ (k=1 to M) h0k Xn-k

where h01, h02, . . . , h0M are the optimum predictor coefficients and M is the number of delay elements employed in the predictor.


By minimizing the mean square value of the prediction error, as a special case of the Wiener filter, we proceed as follows.

The variance of the sample Xn, viewed as the desired response, equals

σ²X = E[Xn²] = RX(0)

where it is assumed that Xn has zero mean.

The cross-correlation between Xn, acting as the desired response, and Xn-k, acting as the kth tap input of the predictor, is given by

E[XnXn-k] = RX(k),  k = 1, 2, . . . , M

The autocorrelation of the predictor's tap input Xn-k with another tap input Xn-m is given by

E[Xn-kXn-m] = RX(m-k),  k, m = 1, 2, . . . , M

The normal equations fitted to the linear prediction problem are as follows:

Σ (m=1 to M) h0m RX(m-k) = RX(k),  k = 1, 2, . . . , M

Therefore we need only know the autocorrelation function of the signal for different lags in order to solve the normal equations for the predictor coefficients.

PREDICTION ERROR PROCESS

The prediction error, denoted by εn, is defined by

εn = Xn - X̂n

The prediction error εn is computed from the present and past samples of a stationary process, namely Xn, Xn-1, . . . , Xn-M, and the predictor coefficients h01, h02, . . . , h0M, by using the structure called the prediction error filter, as shown in the figure.


The first structure performs the prediction-error operation and the second performs the inverse operation; the latter is called the inverse filter.

The impulse response of the inverse filter has infinite duration because of the feedback present in the filter, whereas the impulse response of the prediction error filter has finite duration.

It follows from these structures that there is a one-to-one correspondence between the samples of a stationary process and those of the prediction error, in that given one we can compute the other by means of a linear filtering operation.

The reason for representing samples of a stationary process (Xn) by samples of the corresponding prediction error is that the prediction error variance is less than σ²X.
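The normal equations above need only the autocorrelation values RX(0), . . . , RX(M). A small sketch solving them numerically (the function name and the AR(1) example are mine, not from the notes):

```python
import numpy as np

def predictor_coefficients(r):
    """Solve the normal equations  sum_m h0m RX(m-k) = RX(k),  k = 1..M,
    given autocorrelation values r = [RX(0), RX(1), ..., RX(M)]."""
    M = len(r) - 1
    # Toeplitz coefficient matrix built from |m - k| lags
    R = np.array([[r[abs(m - k)] for m in range(M)] for k in range(M)])
    return np.linalg.solve(R, np.array(r[1:]))

# For an AR(1) process with RX(k) = rho**|k| the optimum predictor of any
# order reduces to h = [rho, 0, 0, ...].
rho = 0.8
r = [rho ** k for k in range(4)]            # RX(0..3)
h = predictor_coefficients(r)
# Prediction error variance RX(0) - sum_k h0k RX(k): less than sigma^2_X
err_var = r[0] - np.dot(h, r[1:])
```

The resulting error variance (0.36 here) being smaller than RX(0) = 1 is exactly the property that motivates transmitting the prediction error instead of the signal.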

UNIT-III BASEBAND TRANSMISSION

PART-A

1.What are line codes? Name some popular line codes. (MAY/JUNE 2016)

Line coding refers to the process of representing the bit stream (1s and 0s) in the form of voltage or current variations optimally tuned for the specific properties of the physical channel being used.

Unipolar (Unipolar NRZ and Unipolar RZ)

Polar (Polar NRZ and Polar RZ)

Non-Return-to-Zero, Inverted (NRZI)

Manchester encoding

2.What is ISI? What are the causes of ISI? (MAY/JUNE 2016)

The transmitted signal undergoes dispersion and gets broadened during its transmission through the channel, so the symbols overlap with adjacent symbols in the transmission. This overlapping is called Inter Symbol Interference (ISI).

Pulse shaping limits the bandwidth of the data pulse to a value only slightly greater than the Nyquist minimum, so that the pulse does not spread in time and degrade the system's error performance through increased ISI.


3.List the properties of syndrome. (NOV/DEC 2015)

The syndrome depends only on the error pattern and not on the transmitted code word.

All error patterns differing by a code word have the same syndrome.

The syndrome is the sum of those columns of the matrix H corresponding to the error locations.

With syndrome decoding, an (n,k) linear block code can correct up to t errors per code word if n and k satisfy the Hamming bound.

4. Compare M-ary PSK and M-ary QAM. (NOV/DEC 2015)

1. M-ary PSK: the in-phase and quadrature components are interrelated; the phase of the carrier takes one of M possible values 2πi/M, where i = 0, 1, . . . , M-1. M-ary QAM: the in-phase and quadrature components are independent, which enables the transmission of M = L² independent symbols over the same channel bandwidth.

2. M-ary PSK has a circular constellation, whereas M-ary QAM has a rectangular constellation.

5.Define the following terms:

NRZ unipolar format

NRZ polar format

NRZ bipolar format

Manchester format

NRZ unipolar format - Binary 0 is represented by no pulse and binary 1 is represented by a positive pulse.

NRZ polar format - Binary 1 is represented by a positive pulse and binary 0 is represented by a negative pulse.

NRZ bipolar format - Binary 0 is represented by no pulse and binary 1 is represented by alternating positive and negative pulses.

Manchester format - Binary 0: a negative pulse for the first half of the bit duration and a positive pulse for the second half. Binary 1: a positive pulse for the first half of the bit duration and a negative pulse for the second half.
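The four formats defined above can be sketched as a small encoder that emits two half-bit amplitude samples per bit (the function and format names are mine; amplitudes are in units of the pulse height):

```python
def encode(bits, fmt):
    """Two half-bit samples per bit, following the definitions above."""
    out = []
    last_one = -1                     # for bipolar alternation of 1s
    for b in bits:
        if fmt == "nrz_unipolar":     # 1 -> pulse, 0 -> no pulse
            out += [b, b]
        elif fmt == "nrz_polar":      # 1 -> +1, 0 -> -1
            out += [1, 1] if b else [-1, -1]
        elif fmt == "nrz_bipolar":    # 1s alternate +1/-1, 0 -> no pulse
            if b:
                last_one = -last_one
                out += [last_one, last_one]
            else:
                out += [0, 0]
        elif fmt == "manchester":     # 1 -> +,- ; 0 -> -,+
            out += [1, -1] if b else [-1, 1]
    return out
```

Using two samples per bit lets the same function also express RZ and Manchester variants, where the level changes within a bit.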


6. Define the following terms (Nov/Dec 2006, May/June 2009, April/May 2017)

i) Eye pattern
ii) Width of the eye
iii) Sensitivity of an eye
iv) Margin over noise
v) Applications of the eye pattern

Eye pattern - used to study the effect of intersymbol interference.

Width of the eye - defines the time interval over which the received waveform can be sampled without error from intersymbol interference.

Sensitivity of an eye - the sensitivity of the system to timing error is determined by the rate of closure of the eye as the sampling time is varied.

Margin over noise - the height of the eye opening at a specified sampling time defines the margin over noise.

Applications of the eye pattern:

Used to study the effect of ISI

Eye opening - additive noise in the signal

Eye overshoot/undershoot - peak distortion due to interruptions in the signal path

Eye width - timing synchronization and jitter effects

7.State Nyquist criterion for Zero ISI.

The weighted pulse contribution akp(iTb - kTb) for k ≠ i must be free from ISI at the sampling times t = iTb of the received signal.

The frequency function P(f) eliminates ISI for samples taken at intervals Tb provided that it satisfies

Σ (n=-∞ to ∞) P(f - n/Tb) = Tb

8.List the properties of line codes (April/May 2017)

Transmission bandwidth: as small as possible

Transmitted power: as small as possible for a given bandwidth and probability of error

Error detection and correction capability: e.g., bipolar

Favorable power spectral density: dc component equal to zero

9. What is correlative coding? (Nov/Dec 2016)

It is a technique by which a transmission speed of 2W symbols per second is achieved on a channel of bandwidth W by introducing controlled ISI. Duobinary signaling is a particular form of correlative coding. It gives the Nyquist speed of transmission but suffers from the following disadvantages: i) nonzero PSD at f = 0, ii) error propagation.

10. A 64 kbps binary PCM polar NRZ signal is passed through a communication system with a raised cosine filter with roll-off factor 0.25. Find the bandwidth of the filtered PCM signal. [NOV 12]

Rb = 64 kbps

B0 = Rb/2 = 32 kHz

α = 0.25

B = B0(1 + α) = 32 × 10³ × (1 + 0.25) = 40 kHz
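The same arithmetic can be checked with a small helper (the function name is mine):

```python
def raised_cosine_bandwidth(bit_rate_hz, rolloff):
    """Transmission bandwidth B = B0 (1 + alpha), where B0 = bit rate / 2
    is the Nyquist bandwidth of the binary PCM wave."""
    b0 = bit_rate_hz / 2.0
    return b0 * (1.0 + rolloff)

bw = raised_cosine_bandwidth(64e3, 0.25)   # the worked problem above
```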

PART-B

1.Derive and explain Nyquist first criterion to minimize ISI. [Nov 16, April 17]

The transfer function of the channel and the transmitted pulse shape are specified, and the problem is to determine the transfer functions of the transmitting and receiving filters so as to reconstruct the transmitted data sequence {bk}.

The receiver extracts and then decodes the corresponding sequence of weights {ak} from the output y(t). The extraction involves sampling the output y(t) at some time t = iTb. The decoding requires that the weighted pulse contribution akp(iTb - kTb) for k ≠ i be free from ISI.

The received pulse must therefore satisfy

p(iTb - kTb) = 1 for i = k, and 0 for i ≠ k

with the normalization p(0) = 1. If p(t) satisfies this condition, the receiver output simplifies to y(ti) = µai. This implies zero intersymbol interference and assures perfect reception in the absence of noise.

Consider the sequence of samples {p(nTb)}, where n = 0, ±1, ±2, . . . Sampling in the time domain produces periodicity in the frequency domain:

Pδ(f) = Rb Σ (n=-∞ to ∞) P(f - nRb)

where Rb = 1/Tb is the bit rate, and Pδ(f) is the Fourier transform of p(t) multiplied by an infinite periodic sequence of delta functions of period Tb:

Pδ(f) = Σ (m=-∞ to ∞) p(mTb) exp(-j2πmfTb)

Let the integer m = i - k. Then i = k corresponds to m = 0 and i ≠ k corresponds to m ≠ 0.

Imposing the condition of zero ISI on the sample values of p(t), only the m = 0 term survives, so that

Pδ(f) = p(0)

by using the sifting property of the delta function. As p(0) = 1 by normalization, the condition for zero ISI is satisfied if

Σ (n=-∞ to ∞) P(f - nRb) = Tb

Thus the Nyquist criterion for distortionless baseband transmission is formulated in terms of the time function p(t) and the frequency function P(f).

IDEAL SOLUTION:

A frequency function P(f) is obtained by permitting only one nonzero component in the series for each f in the range from -B0 to B0, where B0 denotes half the bit rate:

B0 = Rb/2

P(f) = (1/2B0) rect(f/2B0)

In this solution no frequencies of absolute value exceeding half the bit rate are needed. Hence one signal waveform that produces zero ISI is defined by the sinc function:

p(t) = sin(2πB0t)/(2πB0t) = sinc(2B0t)


The function p(t) is the impulse response of an ideal low-pass filter with passband amplitude response 1/(2B0) and bandwidth B0.

The function p(t) has its peak value at the origin and goes through zero at integer multiples of the bit duration Tb.

If the received waveform y(t) is sampled at the instants of time t = 0, ±Tb, ±2Tb, . . . , then the pulses defined by µp(t - iTb) with arbitrary amplitude µ and index i = 0, ±1, ±2, . . . will not interfere with each other.

Two practical difficulties make this an undesirable objective for system design:

1) The amplitude characteristic of P(f) must be flat from -B0 to B0 and zero elsewhere. This is physically unrealizable because of the abrupt transitions at ±B0.

2) The function p(t) decreases as 1/|t| for large |t|, producing a slow rate of decay.

To evaluate the effect of timing error, consider a sampling-time error Δt relative to the intended instant ti = 0. In the absence of noise

y(Δt) = µ Σ (i=-∞ to ∞) ai sinc(2B0(Δt - iTb))

As 2B0Tb = 1, the above equation can be rewritten as

y(Δt) = µa0 sinc(2B0Δt) + (µ sin(2πB0Δt)/π) Σ (i≠0) ((-1)^i ai)/(2B0Δt - i)

The first term on the right side defines the desired symbol, and the remaining series represents the intersymbol interference caused by the timing error Δt in sampling the output y(t).

PRACTICAL SOLUTION:

The practical difficulties of the ideal solution are overcome by extending the

bandwidth from B0 = Rb/2 to a value between B0 and 2B0.

A particular form of P(f) is the raised cosine spectrum. The frequency characteristic consists of a flat portion and a roll-off portion of sinusoidal form:

P(f) = 1/(2B0) for 0 ≤ |f| < f1
P(f) = (1/4B0){1 + cos[π(|f| - f1)/(2B0 - 2f1)]} for f1 ≤ |f| < 2B0 - f1
P(f) = 0 for |f| ≥ 2B0 - f1

The frequency f1 and the bandwidth B0 are related by α = 1 - f1/B0. The parameter α is called the roll-off factor.

Fig: Response for different roll-off factors. (a) Frequency response. (b) Time response.

The frequency response P(f), normalized by multiplying it by 2B0, is shown for three values of α, namely 0, 0.5 and 1. For α = 0.5 or 1, the roll-off characteristic of P(f) cuts off gradually compared with the ideal low-pass filter.

The time response p(t) is the inverse Fourier transform of P(f):

p(t) = sinc(2B0t) cos(2παB0t)/(1 - 16α²B0²t²)


2.Explain correlative coding in detail.

Or

Describe the modified duobinary coding technique and its performance by illustrating its frequency and impulse response. [NOV/DEC 2015]

Definition: Correlative coding is a scheme used to add intersymbol interference to the transmitted signal in a controlled manner to achieve a bit rate of 2B0 bits per second in a channel of bandwidth B0 Hz. It is also called partial response signaling.

Correlative coding is a practical means of achieving the theoretical maximum signaling rate of 2B0 bits per second in a bandwidth of B0 Hz using realizable and perturbation-tolerant filters.

Duobinary signaling:

Duo implies doubling the transmission capacity of a straight binary system.

Consider a binary input sequence {bk} consisting of uncorrelated binary digits with duration Tb seconds, with symbol 1 represented by a pulse of amplitude +1 volt and symbol 0 by a pulse of amplitude -1 volt.

When this sequence is applied to a duobinary encoder, it is converted into a three-level output, namely -2, 0 and +2 volts.

The binary sequence {bk} is first passed through a simple filter involving a single delay element. For every unit impulse applied to the input of the filter we get two unit impulses spaced Tb seconds apart at the filter output.

The digit ck at the duobinary coder output is expressed as the sum of the present binary digit bk and its previous value bk-1:

ck = bk + bk-1

This transformation changes the input sequence {bk} of uncorrelated binary digits into a sequence {ck} of correlated digits. The correlation between adjacent transmitted levels introduces intersymbol interference into the transmitted signal in an artificial manner.

An ideal delay element with delay Tb seconds has the transfer function exp(-j2πfTb), so the transfer function of the simple filter is 1 + exp(-j2πfTb). The overall transfer function of this filter connected in cascade with the ideal channel Hc(f) is

H(f) = Hc(f)[1 + exp(-j2πfTb)]
     = Hc(f)[exp(jπfTb) + exp(-jπfTb)] exp(-jπfTb)
     = 2Hc(f) cos(πfTb) exp(-jπfTb)


For an ideal channel of bandwidth B0 = Rb/2,

Hc(f) = 1 for |f| ≤ B0, and 0 otherwise.

The impulse response consists of two sinc pulses, time-displaced by Tb seconds:

h(t) = Tb² sin(πt/Tb) / [πt(Tb - t)]

The original data {bk} may be detected from the duobinary-coded sequence {ck} by subtracting the previous decoded binary digit from the currently received digit. Let b̂k represent the estimate of the original binary digit bk made by the receiver at time t = kTb:

b̂k = ck - b̂k-1

If ck is received without error and the previous estimate b̂k-1 at time t = (k-1)Tb corresponds to a correct decision, then the current estimate will be correct. The technique of using a stored estimate of the previous symbol is called decision feedback.

A drawback of this detection process is that once errors are made, they tend to propagate, because a decision on the current binary digit bk depends on the correctness of the decision made on the previous binary digit bk-1.

Error propagation can be avoided by using precoding before the duobinary coding. The precoding operation performed on the input binary sequence {bk} converts it into another binary sequence {ak} defined by

ak = bk + ak-1 (modulo-2)

The modulo-2 operation is equivalent to the exclusive-or operation: the output of an exclusive-or gate is 1 if exactly one input is a 1; otherwise the output is 0.

The resulting precoder output {ak} is next applied to the duobinary coder, producing the sequence {ck} related to {ak} as follows:

ck = ak + ak-1

The precoding is a nonlinear operation. Assume that symbol 1 at the precoder output is represented by +1 volt and symbol 0 by -1 volt. Then

ck = 0 volt if bk is symbol 1, and ck = ±2 volts if bk is symbol 0.


The detector consists of a rectifier, the output of which is compared with a threshold of 1 volt, and the original binary sequence is thereby detected: if |ck| > 1 volt, decide bk = 0; otherwise decide bk = 1.

Modified duobinary technique:

It involves a correlation span of two binary digits. This is achieved by subtracting input binary digits spaced 2Tb seconds apart.

The output of the modified duobinary conversion filter is related to the sequence {ak} at its input by

ck = ak - ak-2

A three-level signal is generated. If ak = ±1 volt, ck takes on one of three values: +2, 0 and -2 volts.

Generalized form of correlative coding:

The duobinary and modified duobinary techniques have correlation spans of 1 binary digit and 2 binary digits respectively. These two techniques are generalized in the correlative coding scheme, which uses a tapped delay line filter with tap weights w0, w1, . . . , wN-1. The correlative sample ck is obtained as a superposition of N successive input sample values bk:

ck = Σ (n=0 to N-1) wn bk-n

By choosing various combinations of integer values for the wn, we obtain different forms of correlative coding schemes. In the duobinary case, w0 = +1, w1 = +1 and wn = 0 for n ≥ 2. In the modified duobinary case, w0 = +1, w1 = 0, w2 = -1 and wn = 0 for n ≥ 3.
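The precoded duobinary scheme described above can be sketched end to end. This is a minimal illustration (function names are mine; the initial reference bit a0 is assumed to be 0):

```python
def duobinary_with_precoding(bits):
    """Precode (a_k = b_k XOR a_{k-1}), map to +/-1 V, and duobinary-code
    (c_k = a_k + a_{k-1}); returns the three-level sequence in volts."""
    a_prev = 0                              # assumed initial reference bit
    prev_level = 1 if a_prev else -1
    levels = []
    for b in bits:
        a = b ^ a_prev
        level = 1 if a else -1
        levels.append(level + prev_level)   # c_k in {-2, 0, +2}
        a_prev, prev_level = a, level
    return levels

def duobinary_detect(levels):
    """Rectify-and-threshold detector: |c_k| > 1 V -> 0, otherwise -> 1.
    No decision feedback is needed, so a single error cannot propagate."""
    return [0 if abs(c) > 1 else 1 for c in levels]

bits = [0, 0, 1, 0, 1, 1, 0]
received = duobinary_detect(duobinary_with_precoding(bits))
```

Because each decision depends only on the current received level, this detector avoids the error propagation of the decision-feedback detector.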


3.Explain the modes of operation of an adaptive equalizer. [NOV/DEC 2015]

Definition: Equalization is the process of correcting channel-induced distortion. To realize the full transmission capability of a telephone channel, adaptive equalization is needed. An equalizer is said to be adaptive when it adjusts itself continuously during data transmission by operating on the input signal.

Prechannel equalization is used at the transmitter and postchannel equalization is used at the receiver. As prechannel equalization requires a feedback channel, adaptive equalization at the receiving side is considered here.

Equalization can be achieved before data transmission by training the filter with a suitable training sequence transmitted through the channel, so as to adjust the filter parameters to optimal values.

The adaptive equalizer consists of a tapped delay line filter with 100 taps or more, whose coefficients are updated according to the LMS algorithm. The adjustments to the filter coefficients are made in a step-by-step fashion, synchronously with the incoming data.

Modes of operation:

(i) Training mode (ii) Decision-directed mode

Training mode:

During the training period, a known sequence is transmitted and a synchronized version of the signal is generated in the receiver, where it is applied to the adaptive equalizer as the desired response. The training sequence may be a pseudo-noise (PN) sequence, and its length may be equal to or greater than the length of the adaptive equalizer.

When the training period is completed, the adaptive equalizer is switched to the decision-directed mode.


Decision-directed mode:

The error signal equals e(nT) = b(nT) - y(nT), where y(nT) is the equalizer output and b(nT) is the final correct estimate of the transmitted symbol.

In normal operation the decisions made by the receiver are correct with high probability, which means that the error estimates are correct most of the time, so the equalizer can continue to adapt from its own decisions.

Fig: An adaptive equalizer operating in the decision-directed mode.
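The training mode described above can be sketched as an LMS loop. This is a minimal illustration under assumptions of mine: a short 11-tap equalizer instead of 100+, a simple two-tap ISI channel, and a random ±1 sequence standing in for the PN training sequence.

```python
import numpy as np

def train_equalizer(received, desired, n_taps=11, mu=0.02):
    """Training mode: the known transmitted sequence serves as the desired
    response, and the tap weights are adjusted step by step via LMS."""
    w = np.zeros(n_taps)
    sq_err = []
    for n in range(n_taps - 1, len(received)):
        x = received[n - n_taps + 1:n + 1][::-1]   # current and past inputs
        e = desired[n] - np.dot(w, x)              # e(nT) = b(nT) - y(nT)
        w = w + mu * e * x                         # LMS coefficient update
        sq_err.append(e * e)
    return w, np.array(sq_err)

rng = np.random.default_rng(1)
train_seq = rng.choice([-1.0, 1.0], size=4000)     # PN-like training sequence
channel = np.array([1.0, 0.45])                    # assumed mild-ISI channel
rx = np.convolve(train_seq, channel)[:len(train_seq)]
w, sq_err = train_equalizer(rx, train_seq)
```

After training, the squared error has dropped far below its initial value, at which point a real receiver would switch to the decision-directed mode and replace `desired` with its own symbol decisions.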


4. a) Determine the power spectral density of NRZ polar and unipolar data formats. [NOV/DEC 2015, April/May 2017]

Unipolar format (on-off signaling):

Symbol 1 is represented by transmitting a pulse, whereas symbol 0 is represented by switching off the pulse. When the pulse occupies the full duration of a symbol, the unipolar format is said to be of the non-return-to-zero (NRZ) type. When it occupies a fraction (usually one half) of the symbol duration, it is said to be of the return-to-zero (RZ) type.

Polar format:

A positive pulse is transmitted for symbol 1 and a negative pulse for symbol 0. It can be of the NRZ or RZ type. A polar waveform has no dc component provided that the 0s and 1s in the input data occur in equal proportion.

Power spectra of discrete PAM signals:

Consider a random process X(t) defined by

X(t) = Σ (k=-∞ to ∞) Ak v(t - kT)    ……(1)


where the coefficient Ak is a discrete random variable, v(t) is a basic pulse shape and T is the symbol duration. The basic pulse v(t) is centered at the origin t = 0 and normalized such that v(0) = 1.

The data signaling rate is defined as the rate, measured in bits per second, at which data are transmitted. It is also common practice to refer to the data signaling rate as the bit rate. This rate is denoted by Rb = 1/Tb, where Tb is the bit duration. For an M-ary format, the symbol duration T is related to the bit duration Tb by T = Tb log2 M; correspondingly, one baud equals log2 M bits per second.

The source is characterized by the ensemble-averaged autocorrelation function

RA(n) = E[AkAk-n]

where E is the expectation operator. The power spectral density of the discrete PAM signal X(t) defined in equation (1) is given by

Sx(f) = (|V(f)|²/T) Σ (n=-∞ to ∞) RA(n) exp(-j2πnfT)    ……(2)

where V(f) is the Fourier transform of the basic pulse v(t).

i) NRZ unipolar format:

If the 0s and 1s of a random binary sequence occur with equal probability, then for a unipolar format of the NRZ type we have

P(Ak = 0) = P(Ak = a) = 1/2

Hence for n = 0 we may write

E[Ak²] = 0 · P(Ak = 0) + a² · P(Ak = a) = a²/2

Consider next the product AkAk-n for n ≠ 0. This product has four possible values, namely 0, 0, 0 and a². Assuming that successive symbols in the binary sequence are statistically independent, these four values occur with a probability of 1/4 each. Hence, for n ≠ 0, we may write

E[AkAk-n] = 0(3/4) + a²(1/4) = a²/4,  n ≠ 0

We may express the autocorrelation function RA(n) as follows:

RA(n) = a²/2 for n = 0, and a²/4 for n ≠ 0    ……(3)

For the basic pulse v(t) we have a rectangular pulse of unit amplitude and duration Tb; hence the Fourier transform of v(t) equals

V(f) = Tb sinc(fTb)    ……(4)

The use of equations (3) and (4) in (2), with T = Tb, yields the following result for the power spectral density of the NRZ unipolar format:

Sx(f) = (a²Tb/4) sinc²(fTb) [1 + Σ (n=-∞ to ∞) exp(-j2πnfTb)]    ……(5)

We next use Poisson's formula, written in the form

Σ (n=-∞ to ∞) exp(-j2πnfTb) = (1/Tb) Σ (n=-∞ to ∞) δ(f - n/Tb)    ……(6)

where δ(f) denotes a Dirac delta function at f = 0. Substituting equation (6) in (5) and recognizing that the sinc function sinc(fTb) has nulls at f = ±1/Tb, ±2/Tb, . . . , we may simplify the expression for the power spectral density Sx(f) as

Sx(f) = (a²Tb/4) sinc²(fTb) + (a²/4) δ(f)    ……(7)

The presence of the Dirac delta function δ(f) accounts for one half of the power contained in the unipolar waveform. Curve (a) shows a normalized plot of equation (7): the power spectral density Sx(f) is normalized with respect to a²Tb and f is normalized with respect to the bit rate 1/Tb. The power of the NRZ unipolar format lies between dc and the bit rate of the input data.

ii) NRZ polar format:

Consider a polar format of the NRZ type for which the binary data consist of independent and equally likely symbols. Then

RA(n) = a² for n = 0, and 0 for n ≠ 0    ……(8)

The basic pulse v(t) for the polar format is the same as that for the unipolar format. Hence the use of equations (4) and (8) in equation (2), with the symbol period T = Tb, yields the power spectral density of the NRZ polar format as

Sx(f) = a²Tb sinc²(fTb)    ……(9)

The normalized form of this equation is plotted in curve (b). The power of the NRZ polar format lies inside the main lobe of the sinc-shaped curve, which extends up to the bit rate 1/Tb.
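The two spectra just derived can be written as small helpers (function names are mine), which makes the 6 dB (factor of 4) gap between the continuous parts of the polar and unipolar formats easy to verify:

```python
import numpy as np

def psd_nrz_polar(f, a=1.0, tb=1.0):
    """Equation (9): Sx(f) = a^2 Tb sinc^2(f Tb)."""
    return a**2 * tb * np.sinc(f * tb)**2

def psd_nrz_unipolar_continuous(f, a=1.0, tb=1.0):
    """Continuous part of equation (7): (a^2 Tb / 4) sinc^2(f Tb); the dc
    impulse (a^2/4) delta(f) must be accounted for separately."""
    return (a**2 * tb / 4.0) * np.sinc(f * tb)**2
```

Both functions have their first null at f = 1/Tb, consistent with the main lobe extending up to the bit rate.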


4 b) Determine the power spectral density of NRZ and RZ bipolar and unipolar data formats.

iii) Bipolar format (pseudoternary signaling):

Positive and negative pulses are used alternately for the transmission of 1s, and no pulse for the transmission of 0s. It can be of the NRZ or RZ type. In this representation there are three levels: +1, 0 and -1. An attractive feature of this format is the absence of a dc component, even though the input binary data may contain long strings of 0s and 1s. This property does not hold for the unipolar and polar formats.

NRZ bipolar format:

The bipolar format has three levels a, 0 and -a. Assuming that 1s and 0s occur with equal probability, the probabilities of the three levels are as follows:

P(Ak = a) = 1/4
P(Ak = 0) = 1/2
P(Ak = -a) = 1/4

Hence for n = 0 we may write

E[Ak²] = a² P(Ak = a) + 0 · P(Ak = 0) + a² P(Ak = -a) = a²/2

For n = 1 the dibit represented by the sequence (Ak-1, Ak) can assume only four possible forms: (0,0), (0,1), (1,0) and (1,1). The respective values of the product AkAk-1 are 0, 0, 0 and -a²; the last value results from the fact that successive 1s alternate in polarity. Each of the dibits occurs with probability 1/4, on the assumption that successive symbols in the binary sequence occur with equal probability. Hence we may write

E[AkAk-1] = -a²/4

and RA(n) = 0 for |n| ≥ 2, where we have made use of the fact that RA(-n) = RA(n).

The basic pulse v(t) for the NRZ bipolar format has the Fourier transform V(f) = Tb sinc(fTb); hence, substituting the corresponding equations with T = Tb, the power spectral density of the NRZ bipolar format is given by

Sx(f) = a²Tb sinc²(fTb) [1/2 - (1/4)(exp(j2πfTb) + exp(-j2πfTb))]
      = (a²Tb/2) sinc²(fTb) [1 - cos(2πfTb)]
      = a²Tb sinc²(fTb) sin²(πfTb)


The normalized form of this equation is plotted in curve (c). The power lies inside a bandwidth equal to the bit rate 1/Tb, and the spectral content of the NRZ bipolar format is relatively small around zero frequency.

iv) Manchester format (biphase baseband signaling):

Symbol 1 is represented by transmitting a positive pulse for one half of the symbol duration followed by a negative pulse for the remaining half of the symbol duration; for symbol 0, these two pulses are transmitted in reverse order.

The autocorrelation function RA(n) for the Manchester format is the same as that for the NRZ polar format. The basic pulse v(t) for the Manchester format consists of a doublet pulse of unit amplitude and total duration Tb. Hence the Fourier transform of the pulse equals

V(f) = jTb sinc(fTb/2) sin(πfTb/2)

Thus, substituting the corresponding equations, we find that the power spectral density of the Manchester format is given by

Sx(f) = a²Tb sinc²(fTb/2) sin²(πfTb/2)

The normalized form of this equation is plotted in curve (d). The power lies inside a bandwidth equal to 2/Tb, twice the bit rate.

5.Write short notes on Eye pattern & Intersymbol interference. [Nov/Dec 2015, 2016]

Eye pattern:

Eye patterns can be observed using an oscilloscope. The received wave is applied to the vertical deflection plates of the oscilloscope and a sawtooth wave, at a rate equal to the transmitted symbol rate, is applied to the horizontal deflection plates. The resulting display is called an eye pattern, as it resembles a human eye. The interior region of the eye pattern is called the eye opening.

The width of the eye opening defines the time interval over which the received wave can be sampled without error from ISI. It is apparent that the preferred time for sampling is the instant at which the eye is open widest.

The sensitivity of the system to timing error is determined by the rate of closure of the eye as the sampling time is varied.

The height of the eye opening at a specified sampling time is a measure of the margin over channel noise.


Fig: Interpretation of the eye pattern

Intersymbol interference:

When the dispersed pulses originating from different symbol intervals overlap, and the channel bandwidth is close to the signal bandwidth, the spreading of the signal exceeds a symbol duration and causes the signals to overlap or interfere with each other. This is known as Inter Symbol Interference (ISI).

The incoming binary sequence {bk} consists of symbols 0 and 1, each with duration Tb.

The pulse amplitude modulator converts the binary sequence into a sequence of short pulses of amplitude ak.

The signal is applied to the transmit filter of impulse response g(t). The transmitted signal is

s(t) = Σ (k) ak g(t - kTb)

The transmitted signal is modified as it passes through the channel, which has impulse response h(t). In addition, the channel adds random noise to the signal at the receiver input. This signal is then passed through the receiver filter. The resultant signal is sampled synchronously with the transmitter; the sampling instants are determined by a clock or timing signal.

If the sample value is greater than the threshold, the decision is made in favor of 1. If the sample value is less than the threshold, the decision is made in favor of 0. If the sample value equals the threshold, the receiver makes a random guess about which symbol was transmitted. The receiver filter output is

y(t) = µ Σ (k) ak p(t - kTb) + n(t)

where µ is a scaling factor and p(t) is the pulse to be defined below.

The delay t0 due to transmission through the system should be included in the argument of the pulse, but for simplicity we set t0 to zero. The scaled pulse p(t) is obtained by the double convolution of the impulse response of the transmit


filter g(t), impulse response h(t) of the channel, the impulse response of c(t) of the

receiverfilter.

p(t) = g(t) * h(t) * c(t)

Convolution of time domain will be equal to the multiplication in frequency domain

P(f) = G(f) . H(f) . C(f)

where n(t) is the noise produced at the output of the receive filter due to the channel noise w(t); w(t) is white Gaussian noise with zero mean. The receive filter output y(t) is sampled at times ti = iTb:

y(ti) = μ Σk ak p[(i - k)Tb] + n(ti)

      = μ ai + μ Σ(k ≠ i) ak p[(i - k)Tb] + n(ti)

where p(0) is normalized to 1. The first term μai is produced by the ith transmitted bit. The second term represents the residual effect of all the other transmitted bits on the decoding of the ith bit; this residual effect is called inter symbol interference. The last term n(ti) represents the noise sample at ti. In the absence of noise and ISI,

y(ti) = μai
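The decomposition above can be sketched numerically; the symbol values and residual pulse samples below are illustrative assumptions, not values from the notes:

```python
# Noise-free sampled receiver output y(ti) = mu*ai + ISI term.
mu = 1.0
a = [+1, -1, +1, -1, +1]           # polar symbols a_k for bits 1,0,1,0,1
p = {0: 1.0, -1: 0.1, 1: 0.1}      # assumed pulse: p(0)=1 plus small tails

def y_sample(i):
    """Desired term mu*a_i*p(0) plus the residual effect of the other bits."""
    desired = mu * a[i] * p[0]
    isi = sum(mu * a[k] * p.get(i - k, 0.0) for k in range(len(a)) if k != i)
    return desired + isi

# With an ideal zero-ISI pulse the middle sample would be exactly mu*a_2 = +1;
# the tails of the neighbouring symbols pull it down to 0.8 here.
print(y_sample(2))
```

A Nyquist pulse makes every residual sample p[(i - k)Tb], k ≠ i, equal to zero, which is what eliminates the ISI term.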

UNIT-IV DIGITAL MODULATION SCHEME

PART-A

1. Distinguish BPSK, QAM and QPSK techniques. Write the expression for the signal set of QPSK. (MAY/JUNE 2016), (NOV/DEC 2015), (April/May 2017)

BPSK - The phase of the carrier is shifted between two values according to the input bit sequence (1, 0).

QAM - The information carried is contained in both the amplitude and phase of the transmitted carrier. Signals from two separate information sources modulate the same carrier frequency at the same time. It conserves the bandwidth.

QPSK - The information carried by the transmitted wave is contained in the phase. The phase of the carrier takes on one of four values [π/4, 3π/4, 5π/4, 7π/4]. Two successive bits of the data sequence are grouped together. The signal set is

Si(t) = √(2E/T) cos[2πfct + (2i - 1)π/4],  i = 1, 2, 3, 4


2. Distinguish coherent and non-coherent reception.(May/June16,Nov/Dec16)

Coherent detection: the local carrier generated at the receiver is phase locked with the carrier at the transmitter.

Non-coherent detection: the local carrier generated at the receiver need not be phase locked with the carrier at the transmitter.

3. Give the two basic operations of a DPSK transmitter.

Differential encoding of the input binary wave.

Phase shift keying; hence the name differential phase shift keying.

4. Why is synchronization required, and what are the three broad types of

synchronization?

The signals from various sources are transmitted on a single channel by multiplexing. This requires synchronization between transmitter and receiver. Special synchronization bits are added in the transmitted signal for this purpose. Synchronization is also required for the detectors to recover the digital data properly from the modulated signal.

Types

Carrier synchronization

Symbol & Bit synchronization

Frame synchronization.

5. Define BER. [MAY 14]

The signal gets contaminated by several undesired waveforms in the channel. The net effect of all these degradations is errors in detection. The performance measure of these errors is called the bit error rate (BER).

6. How can the BER of a system be improved? [NOV 12]

Increasing transmitted signal power

Improving frequency filtering techniques

Proper modulation & demodulation techniques

Proper coding and decoding methods


PART-B

1. Explain the transmitter, receiver and signal space diagram of BPSK [May/June 2016, April/May 2017]

In a coherent binary PSK system the pair of signals S1(t) and S2(t) used to represent binary symbols 1 and 0 are defined by

S1(t) = √(2Eb/Tb) cos(2πfct)   (1)

S2(t) = √(2Eb/Tb) cos(2πfct + π) = -√(2Eb/Tb) cos(2πfct)   (2)

where 0 ≤ t < Tb and Eb is the transmitted signal energy per bit. A pair of sinusoidal waves that differ only in a relative phase shift of 180 degrees, as defined above, are referred to as antipodal signals.

From equations 1 and 2 there is only one basis function of unit energy, namely

Φ1(t) = √(2/Tb) cos(2πfct),  0 ≤ t < Tb   (3)

The transmitted signals S1(t) and S2(t) are expanded in terms of Φ1(t) as

S1(t) = √Eb Φ1(t) and S2(t) = -√Eb Φ1(t)

A coherent binary PSK system therefore has a signal space that is one dimensional (N = 1) with two message points (M = 2), as shown in figure 1, located at

S11 = +√Eb and S21 = -√Eb

The signal space of figure 1 is partitioned into two regions:

The set of points closest to the message point at +√Eb.

The set of points closest to the message point at -√Eb.

The decision regions are marked as Z1 and Z2.

The decision rule is to guess signal S1(t) or binary symbol 1 was transmitted if the received signal point falls in region Z1, and guess signal S2(t) or binary symbol 0 was transmitted if the received signal point falls in region Z2.

Two kinds of erroneous decisions may be made.

Signal S2(t) is transmitted but the noise is such that the received signal point falls inside region Z1, and so the receiver decides in favor of signal S1(t). Alternatively, signal S1(t) is transmitted but the noise is such that the received signal point falls inside region Z2, and so the receiver decides in favor of signal S2(t).

To calculate the probability of error, the decision region associated with symbol 1 or signal S1(t) is given by

Z1: 0 < x1 < ∞

where x1 is the observation scalar

x1 = ∫(0 to Tb) x(t) Φ1(t) dt   (8)

and x(t) is the received signal.

The likelihood function when symbol 0 or signal S2(t) is transmitted is defined by

f(x1|0) = (1/√(πN0)) exp[-(x1 + √Eb)^2 / N0]

The conditional probability of the receiver deciding in favor of symbol 1 given that symbol 0 was transmitted is therefore

P10 = ∫(0 to ∞) f(x1|0) dx1

Putting z = (x1 + √Eb)/√N0 and changing the variable of integration from x1 to z, we may rewrite the equation as

P10 = (1/2) erfc(√(Eb/N0))

By symmetry P01 has the same value, so the average probability of symbol error for coherent binary PSK is

Pe = (1/2) erfc(√(Eb/N0))

Binary PSK Transmitter

To generate a binary PSK wave, represent the input binary sequence in polar form, with symbols 1 and 0 represented by constant amplitude levels of +√Eb and -√Eb respectively. This binary wave and a sinusoidal carrier wave Φ1(t) are applied to the product modulator, as shown in figure 2.


The carrier and the timing pulses used to generate the binary wave are usually extracted from a common master clock. The desired PSK wave is obtained at the modulator output.


Binary PSK Receiver:

To detect the original binary sequence of 1s and 0s, apply the noisy PSK wave x(t) to a correlator which is also supplied with a locally generated coherent reference signal Φ1(t).

The correlator output x1 is compared with a threshold of zero volts. If x1 > 0, the receiver decides in favor of symbol 1. On the other hand, if x1 < 0, it decides in favor of symbol 0.
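The correlator operation described above can be sketched numerically; the carrier frequency, bit duration, energy and sample count below are illustrative assumptions (chosen so the carrier has an integer number of cycles per bit):

```python
import math

# Coherent BPSK over one bit: correlate the received wave with the locally
# generated basis function phi1(t) and compare the output with a zero threshold.
Eb, Tb, fc, N = 1.0, 1.0, 5.0, 1000     # assumed illustrative parameters
dt = Tb / N
phi1 = [math.sqrt(2 / Tb) * math.cos(2 * math.pi * fc * k * dt) for k in range(N)]

def correlate(x):
    """x1 = integral of x(t)*phi1(t) dt over one bit interval (Riemann sum)."""
    return sum(xk * pk for xk, pk in zip(x, phi1)) * dt

s1 = [math.sqrt(Eb) * pk for pk in phi1]    # symbol 1: s1(t) = +sqrt(Eb)*phi1(t)
s2 = [-v for v in s1]                       # symbol 0: antipodal waveform

bit_for_s1 = 1 if correlate(s1) > 0 else 0
bit_for_s2 = 1 if correlate(s2) > 0 else 0
print(bit_for_s1, bit_for_s2)               # noise-free detection: 1 and 0
```

With noise added to s1 or s2 the same threshold test produces the error probability derived above.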

2. Explain the transmitter, receiver and signal space diagram of BFSK

In a binary FSK system, symbols 1 and 0 are distinguished from each other by

transmitting one of two sinusoidal waves that differ in frequency by a fixed amount.

A typical pair of sinusoidal waves is described by

Si(t) = √(2Eb/Tb) cos(2πfit),  0 ≤ t ≤ Tb

      = 0 elsewhere   (1)

where i = 1, 2 and Eb is the transmitted signal energy per bit, and the transmitted frequency equals

fi = (nc + i)/Tb for some fixed integer nc and i = 1, 2   (2)

Thus symbol 1 is represented by S1(t) and symbol 0 by S2(t).

From equation (1) it is observed that the signals S1(t) and S2(t) are orthogonal but not normalized to have unit energy. The orthonormal basis functions are

Φi(t) = √(2/Tb) cos(2πfit),  0 ≤ t ≤ Tb, i = 1, 2

Thus a coherent binary FSK system has a signal space that is two dimensional (N = 2) with two message points (M = 2), as in figure 1.


The two message points are defined by the signal vectors

S1 = [√Eb, 0]^T and S2 = [0, √Eb]^T   (6)

The distance between the two message points is equal to √(2Eb).

The observation vector x has two elements x1 and x2, defined by

x1 = ∫(0 to Tb) x(t) Φ1(t) dt and x2 = ∫(0 to Tb) x(t) Φ2(t) dt

where x(t) is the received signal, the form of which depends on which symbol was transmitted. Given that symbol 1 was transmitted, x(t) equals s1(t) + w(t), where w(t) is the sample function of a white Gaussian noise process of zero mean and power spectral density N0/2. If symbol 0 was transmitted, x(t) equals s2(t) + w(t).

After applying the decision rule, the observation space is partitioned into two decision regions, labeled Z1 and Z2, as shown in figure 1.

Accordingly, the receiver decides in favor of symbol 1 if the received signal point represented by the observation vector x falls inside region Z1. This occurs when x1 > x2. If instead x1 < x2, the received signal point falls inside region Z2 and the receiver decides in favor of symbol 0. The decision boundary separating region Z1 from region Z2 is defined by x1 = x2.

Define a new random variable L = X1 - X2. If symbol 0 was transmitted, the corresponding value of the conditional probability density function of L equals

f_L(l|0) = (1/√(2πN0)) exp[-(l + √Eb)^2 / 2N0]   (13)

Since the condition x1 > x2, or equivalently l > 0, corresponds to the receiver making a decision in favor of symbol 1, we deduce that the conditional probability of error given that symbol 0 was transmitted is given by

P10 = P(l > 0 | symbol 0 was sent) = (1/2) erfc(√(Eb/2N0))


P01 is the conditional probability of error given that symbol 1 was transmitted, and it has the same value. Averaging P10 and P01, we find the average probability of symbol error for coherent binary FSK is

Pe = (1/2) erfc(√(Eb/2N0))

In a binary PSK system the distance between the two message points is equal to 2√Eb, whereas in a binary FSK system the corresponding distance is √(2Eb). This shows that in an AWGN channel the detection performance of equal-energy binary signals depends only on the distance between the two pertinent message points in the signal space.
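The distance comparison can be checked against the two error-probability expressions; the Eb/N0 value used is an assumed illustration:

```python
import math

# Pe for coherent BPSK and coherent BFSK as functions of Eb/N0 (linear ratio).
def pe_bpsk(ebn0):
    return 0.5 * math.erfc(math.sqrt(ebn0))

def pe_bfsk(ebn0):
    return 0.5 * math.erfc(math.sqrt(ebn0 / 2))

ebn0 = 4.0                                  # assumed Eb/N0 (about 6 dB)
print(pe_bpsk(ebn0), pe_bfsk(ebn0))         # BPSK gives the lower error rate
# Doubling Eb/N0 for BFSK matches BPSK, reflecting the sqrt(2) distance ratio:
print(pe_bfsk(2 * ebn0) == pe_bpsk(ebn0))
```

The factor of two inside the square root is exactly the 3 dB penalty implied by the shorter message-point distance of binary FSK.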

Binary FSK Transmitter

The input binary sequence is represented in its on-off form, with symbol 1 represented by a constant amplitude of √Eb volts and symbol 0 represented by zero volts. When symbol 1 is applied at the input, the oscillator with frequency f1 in the upper channel is switched on while the oscillator with frequency f2 in the lower channel is switched off (by means of an inverter in the lower channel), with the result that frequency f1 is transmitted.

Suppose instead we have symbol 0 at the input. Then the oscillator in the upper channel is switched off while the oscillator in the lower channel is switched on, with the result that frequency f2 is transmitted.

In the transmitter we assume that the two oscillators are synchronized, so that their outputs satisfy the requirements of the two orthonormal basis functions Φ1(t) and Φ2(t), as in equation (4).

To detect the original binary sequence given the noisy received wave x(t), the receiver shown in figure 3 is used.

BFSK Receiver

It consists of two correlators with a common input, which are supplied with locally generated coherent reference signals Φ1(t) and Φ2(t). The correlator outputs are then subtracted, one from the other, and the resulting difference l is compared with a threshold of zero volts. If l > 0 the receiver decides in favor of 1. If l < 0 it decides in favor of 0.


3. Explain the transmitter, receiver and signal space diagram of QPSK [Nov/Dec 2015, 2016]

As with binary PSK, QPSK is characterized by the fact that the information carried by the transmitted wave is contained in the phase. In quadriphase shift keying (QPSK) the phase of the carrier takes on one of four equally spaced values, such as π/4, 3π/4, 5π/4 and 7π/4, as shown by

Si(t) = √(2E/T) cos[2πfct + (2i - 1)π/4],  0 ≤ t ≤ T

      = 0 elsewhere   (1)

where i = 1, 2, 3, 4, E is the transmitted signal energy per symbol, T is the symbol duration, and the carrier frequency fc equals nc/T for some fixed integer nc.

Each possible value of the phase corresponds to a unique pair of bits called a dibit. For example, the foregoing set of phase values represents the Gray encoded set of dibits 10, 00, 01 and 11. Using a trigonometric identity we may rewrite (1) in the equivalent form:

Si(t) = √E cos[(2i - 1)π/4] Φ1(t) - √E sin[(2i - 1)π/4] Φ2(t)

A QPSK signal has a two dimensional signal constellation (N = 2) and four message points (M = 4), as illustrated in figure 1.

To realize the decision rule for the detection of the transmitted data sequence, the signal space is partitioned into four regions, each region being the set of points closest to the message point associated with the corresponding signal vector. The received signal x(t) is defined by

x(t) = si(t) + w(t),  0 ≤ t ≤ T,  i = 1, 2, 3, 4

The elements of the observation vector are

x1 = ∫(0 to T) x(t) Φ1(t) dt = √E cos[(2i - 1)π/4] + w1

and

x2 = ∫(0 to T) x(t) Φ2(t) dt = -√E sin[(2i - 1)π/4] + w2


The mean values of x1 and x2 equal √E cos[(2i - 1)π/4] and -√E sin[(2i - 1)π/4] respectively, with common variance equal to N0/2.

The decision rule is to guess that s1(t) was transmitted if the received signal point associated with the observation vector x falls inside region Z1, that s2(t) was transmitted if the received signal point falls inside region Z2, and so on.

The probability of correct decision Pc equals the conditional probability of the joint event x1 > 0 and x2 > 0 given that signal s1(t) was transmitted. Since the random variables x1 and x2 are independent,

Pc = P(x1 > 0 | s1) P(x2 > 0 | s1)

where the first factor on the right side is the conditional probability of the event x1 > 0 and the second factor is the conditional probability of the event x2 > 0, both given that signal s1(t) was transmitted.

From the definition of the complementary error function,

Pc = [1 - (1/2) erfc(√(E/2N0))]^2

The average probability of symbol error for coherent QPSK is therefore

Pe = 1 - Pc = erfc(√(E/2N0)) - (1/4) erfc^2(√(E/2N0))   (14)

In the region where E/2N0 >> 1 we may ignore the second term on the right side of equation (14), and so approximate the formula for the average probability of symbol error for coherent QPSK as

Pe ≈ erfc(√(E/2N0))

In a QPSK system there are two bits per symbol. This means that the transmitted signal energy per symbol is twice the signal energy per bit, that is

E = 2Eb   (16)

Thus, expressing the average probability of symbol error in terms of the ratio Eb/N0, we may write

Pe ≈ erfc(√(Eb/N0))

QPSK transmitter.

The input binary sequence is represented in polar form, with symbols 1 and 0 represented by +√Eb and -√Eb volts respectively. This binary wave is divided by means of a demultiplexer into two separate binary waves consisting of the odd and even numbered input bits, denoted by a1(t) and a2(t).

In any signaling interval the amplitudes of a1(t) and a2(t) equal Si1 and Si2 respectively, depending on the particular dibit that is being transmitted.

The two binary waves a1(t) and a2(t) are used to modulate a pair of quadrature carriers or orthonormal basis functions:

Φ1(t) = √(2/T) cos(2πfct) and Φ2(t) = √(2/T) sin(2πfct)

The result is a pair of binary PSK waves, which may be detected independently due to the orthogonality of Φ1(t) and Φ2(t). Finally the two binary PSK waves are added to produce the desired QPSK wave. Note that the symbol duration T of a QPSK wave is twice as long as the bit duration Tb of the input binary wave.

That is, for a given bit rate a QPSK wave requires half the transmission bandwidth of the corresponding binary PSK wave. Equivalently, for a given transmission bandwidth a QPSK wave carries twice as many bits of information as the corresponding binary PSK wave.

QPSK Receiver

The QPSK receiver consists of a pair of correlators with a common input, supplied with a locally generated pair of coherent reference signals Φ1(t) and Φ2(t).


The correlator outputs x1 and x2 are each compared with a threshold of zero volts.

If x1 > 0, a decision is made in favor of symbol 1 for the upper or in-phase channel output, but if x1 < 0, a decision is made in favor of symbol 0.

Similarly, if x2 > 0, a decision is made in favor of symbol 1 for the lower or quadrature channel output, but if x2 < 0, a decision is made in favor of symbol 0.

Finally, these two binary sequences at the in-phase and quadrature channel outputs are combined in a multiplexer to reproduce the original binary sequence at the transmitter input with the minimum probability of symbol error.
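The dibit-to-phase mapping above can be sketched as follows; the dictionary encodes the Gray assignment 10, 00, 01, 11 to i = 1, 2, 3, 4 listed in the text:

```python
import math

# Map each Gray-coded dibit to its carrier phase (2i-1)*pi/4 and to the
# corresponding message-point coordinates in the (phi1, phi2) signal space.
DIBIT_TO_I = {'10': 1, '00': 2, '01': 3, '11': 4}

def qpsk_phase(dibit):
    """Carrier phase in radians for one dibit."""
    return (2 * DIBIT_TO_I[dibit] - 1) * math.pi / 4

def qpsk_point(dibit, E=1.0):
    """Message point (sqrt(E)cos(theta), -sqrt(E)sin(theta))."""
    th = qpsk_phase(dibit)
    return math.sqrt(E) * math.cos(th), -math.sqrt(E) * math.sin(th)

print(qpsk_phase('10'))     # pi/4 radians
print(qpsk_point('10'))     # each coordinate has magnitude sqrt(E/2)
```

Because adjacent phases differ in only one bit of the Gray code, a nearest-neighbour symbol error causes only a single bit error.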

4. Explain the transmitter, receiver and signal space diagram of DPSK

Differential phase shift keying is the non-coherent version of PSK. It eliminates the need for a coherent reference signal at the receiver by combining two basic operations at the transmitter:

Differential encoding of the input binary wave, and

Phase shift keying

Hence the name differential phase shift keying (DPSK). To send symbol 0 we phase advance the current signal waveform by 180° and to send symbol 1 we leave the phase of the current signal waveform unchanged. The receiver is equipped with storage capability so that it can measure the relative phase difference between the waveforms received during two successive bit intervals.

DPSK is another example of non-coherent orthogonal modulation, when it is considered over two-bit intervals. Suppose the transmitted DPSK signal equals

√(2Eb/Tb) cos(2πfct) for 0 ≤ t ≤ Tb

where Tb is the bit duration and Eb is the signal energy per bit. Let S1(t) denote the transmitted DPSK signal for 0 ≤ t ≤ 2Tb for the case when we have binary symbol 1 at the transmitter input for the second part of this interval, namely Tb ≤ t ≤ 2Tb. The transmission of symbol 1 leaves the carrier phase unchanged, and so we define S1(t) as

S1(t) = √(2Eb/Tb) cos(2πfct),  0 ≤ t ≤ 2Tb

Let S2(t) denote the transmitted DPSK signal for 0 ≤ t ≤ 2Tb for the case when we have binary symbol 0 at the transmitter input for Tb ≤ t ≤ 2Tb. The transmission of 0 advances the carrier phase by 180°, and so we define S2(t) as

S2(t) = √(2Eb/Tb) cos(2πfct) for 0 ≤ t ≤ Tb, and √(2Eb/Tb) cos(2πfct + π) for Tb ≤ t ≤ 2Tb

In other words, DPSK is a special case of non-coherent orthogonal modulation with

T = 2Tb and E = 2Eb

We find that the average probability of error for DPSK is given by

Pe = (1/2) exp(-Eb/N0)

The next issue is the generation and demodulation of DPSK. The differential encoding process at the transmitter input starts with an arbitrary first bit serving as reference, and thereafter the differentially encoded sequence {dk} is generated by using the logical equation

dk = bk dk-1 + b̄k d̄k-1

where bk is the input binary digit at time kTb and dk-1 is the previous value of the differentially encoded digit; the overbar denotes logical inversion. The following table illustrates the logical operation involved, assuming that the reference bit added to the differentially encoded sequence {dk} is 1. The differentially encoded sequence {dk} thus generated is used to phase shift key a carrier with the phase angles 0 and π radians.

If the correlator output is positive, the phase difference between the waveforms received during the pertinent pair of bit intervals lies inside the range -π/2 to π/2, and a decision is made in favour of symbol 1. If the correlator output is negative, the phase difference lies outside the range -π/2 to π/2, and a decision is made in favour of symbol 0.
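The two transmitter operations (differential encoding, then mapping to phase 0 or π) can be sketched in a few lines, assuming the reference bit d0 = 1 used in the table:

```python
# Differential encoding d_k = XNOR(b_k, d_{k-1}): symbol 1 leaves the phase
# unchanged, symbol 0 flips it. The reference bit d_0 = 1 is an assumption
# matching the notes' table.
def dpsk_encode(bits, ref=1):
    d = [ref]
    for b in bits:
        d.append(d[-1] if b == 1 else 1 - d[-1])
    return d

def dpsk_decode(d):
    # Compare successive bits: equal phases -> 1, different phases -> 0.
    return [1 if d[k] == d[k - 1] else 0 for k in range(1, len(d))]

msg = [1, 0, 0, 1, 0, 0, 1, 1]
enc = dpsk_encode(msg)
print(enc)                          # encoded sequence, reference bit first
print(dpsk_decode(enc) == msg)      # decoding recovers the message
```

Decoding needs only the relative phase of successive bits, which is why no coherent carrier reference is required.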

5. Explain the transmitter and receiver of QAM.

In an M-ary PSK system, the in-phase and quadrature components of the modulated signal are interrelated in such a way that the envelope is constrained to remain constant. This constraint manifests itself in a circular constellation for the message points. However, if this constraint is removed so that the in-phase and quadrature components are permitted to be independent, we get a new modulation scheme called M-ary quadrature amplitude modulation (QAM).

The signal constellation of M-ary QAM consists of a square lattice of message points, for example M = 16. The corresponding signal constellations for the in-phase and quadrature components of the amplitude-phase modulated wave are as shown. The transmitted signal is

Si(t) = √(2E0/T) ai cos(2πfct) + √(2E0/T) bi sin(2πfct),  0 ≤ t ≤ T


where E0 is the energy of the signal with the lowest amplitude, and ai and bi are a pair of independent integers chosen in accordance with the location of the pertinent message point. The signal Si(t) consists of two phase-quadrature carriers, each of which is modulated by a set of discrete amplitudes; hence the name quadrature amplitude modulation. The basis functions are

Φ1(t) = √(2/T) cos(2πfct) and Φ2(t) = √(2/T) sin(2πfct)

To calculate the probability of symbol error for M-ary QAM:

Since the in-phase and quadrature components of M-ary QAM are independent, the probability of correct detection for such a scheme may be written as

Pc = (1 - Pe')^2

where Pe' is the probability of symbol error for either component alone. The signal constellation for the in-phase or quadrature component has a geometry similar to that of discrete pulse amplitude modulation (PAM) with a corresponding number of amplitude levels, so

Pe' = (1 - 1/L) erfc(√(E0/N0))

where L is the square root of M.

The probability of symbol error for M-ary QAM is given by

Pe = 1 - Pc = 1 - (1 - Pe')^2 ≈ 2Pe'

where it is assumed that Pe' is small compared to unity. We thus find that the probability of symbol error for M-ary QAM is given by

Pe = 2(1 - 1/L) erfc(√(E0/N0))

The transmitted energy in M-ary QAM is variable, in that its instantaneous value depends on the particular symbol transmitted. It is therefore logical to express Pe in terms of the average value Eav of the transmitted energy rather than E0. Assuming that the L amplitude levels of the in-phase or quadrature component are equally likely, we have

Eav = 2 [(2/L) Σ(i = 1 to L/2) (2i - 1)^2 E0]

where the multiplying factor 2 accounts for the equal contributions made by the in-phase and quadrature components. The limits of the summation take account of the symmetric nature of the pertinent amplitude levels around zero. Summing the series we get

Eav = 2(L^2 - 1) E0 / 3

Substituting this value of E0 in Pe we get

Pe = 2(1 - 1/L) erfc(√(3Eav / (2(L^2 - 1) N0)))

The serial to parallel converter accepts a binary sequence at a bit rate Rb = 1/Tb and produces two parallel binary sequences whose bit rates are Rb/2 each. The 2-to-L level converters, where L = √M, generate polar L-level signals in response to the respective in-phase and quadrature channel inputs. Quadrature-carrier multiplexing of the two polar L-level signals so generated produces the desired M-ary QAM signal.

Decoding of each baseband channel is accomplished at the output of the pertinent decision circuit, which is designed to compare the L-level signals against L - 1 decision thresholds. The two binary sequences so detected are combined in the parallel to serial converter to reproduce the original binary sequence.
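The final formula can be evaluated numerically; the 16-QAM case (L = 4) and the Eav/N0 values are assumed illustrations:

```python
import math

# Symbol-error probability of square M-ary QAM in terms of average energy:
# Pe = 2*(1 - 1/L)*erfc(sqrt(3*Eav/(2*(L**2 - 1)*N0))), with L = sqrt(M).
def qam_pe(M, eav_over_n0):
    L = int(math.isqrt(M))
    arg = 3 * eav_over_n0 / (2 * (L * L - 1))
    return 2 * (1 - 1 / L) * math.erfc(math.sqrt(arg))

for db in (10, 15, 20):                     # assumed Eav/N0 values in dB
    print(db, qam_pe(16, 10 ** (db / 10)))  # Pe falls sharply with Eav/N0
```

The (L^2 - 1) factor in the denominator shows why denser constellations (larger M) need more average energy for the same error rate.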

UNIT-5 ERROR CONTROL CODING

PART-A

1. What is a linear code? List its properties. (MAY/JUNE 2016)

A code is linear if the modulo-2 sum of any two code vectors produces another code vector. This means any code vector can be expressed as a linear combination of other code vectors.

Properties:

The sum of two code words belonging to the code is also a code word.

The all-zero word is always a code word.

The minimum distance between two code words of a linear code is equal to the minimum weight of the code.

2. What is meant by constraint length of a convolutional encoder? (MAY/JUNE 2016)

Constraint length is the number of shifts over which a single message bit can influence the encoder output. It is expressed in terms of message bits.

3. State the channel coding theorem. (NOV/DEC 2015, Nov/Dec 2016, April/May 2017)

The channel coding theorem states that if a discrete memoryless channel has capacity C and a source generates information at a rate less than C, then there exists a coding technique such that the output of the source may be transmitted over the channel with an arbitrarily low probability of symbol error.

For a binary symmetric channel, if the code rate r is less than the channel capacity C, it is possible to find a code with error free transmission. If the code rate r is greater than the channel capacity, it is not possible to find such a code.

greater than the channel capacity it is not possible to find thecode. 4.What is cyclic code and List the properties of cyclic codes. (NOV/DEC2015)

A linear code is cyclic if every cyclic shift of the code vector produces some other

valid code vector. Linearity Property: the sum of two code word is also a codeword Cyclic property: Any cyclic shift of a code word is also a codeword 5.What is hamming distance and Write itscondition

The hamming distance .between two code vectors is equal to the

number of Elements in which they differ. For example, let the two code words

be, X = (101) and Y= (110)

These two code words differ in second and third bits. .Therefore the hamming distance

between X and Y is two.

Conditions (for a Hamming code):

Number of check bits q ≥ 3

Block length n = 2^q - 1

Number of message bits k = n - q

Minimum distance dmin = 3


6.Define code efficiency, code, block rate, Hamming weight and minimum

distance

Code Efficiency

The code efficiency is the ratio of the message bits in a block to the transmitted bits for that block:

Code efficiency = k/n, where k = number of message bits and n = number of transmitted bits.

Code:

In an (n,k) block code, the channel coder accepts information in k-bit blocks and adds n-k redundant bits to form an n-bit block. This n-bit block is called the code word.

Block rate:

The channel encoder produces bits at the rate Ro = (n/k) Rs.

Hamming weight:

Hamming weight w(x) of a code vector x is defined as the number of non-zero elements in the code vector.

Minimum distance:

The minimum distance dmin of a linear block code is defined as the smallest Hamming distance between any pair of code vectors; equivalently, it is the smallest Hamming weight of the non-zero code vectors.

7. What is meant by systematic and non-systematic codes?

hamming weight of the non-zero code vectors. 7.What is meant by systematic and non-systematiccodes?

In a systematic block code, the message bits appear first and then the check bits. In a non-systematic code, message and check bits cannot be identified in the code vector.

8. List the applications of error control codes.

Compact disc players provide a growing application area for FECC. In CD applications the powerful Reed-Solomon code is used, since it works at a symbol level rather than at a bit level, and is very effective against burst errors.

The Reed-Solomon code is also used in computers for data storage and retrieval.

Digital audio and video systems are also areas in which FEC is applied.

Error control coding, generally, is applied widely in control and communications systems for aerospace applications, in mobile (GSM) cellular telephony, and for enhancing security in banking and bar code readers.


9. Find the hamming distance between 101010 and 010101. If the minimum

hamming distance of a (n, k) linear block code is 3, what is its minimum

hamming weight? [NOV12]

Hamming Distance Calculation:

Codeword 1: 101010 and Codeword 2: 010101

d(x, y) = 6

For a linear block code:

minimum Hamming distance = minimum Hamming weight

Given minimum Hamming distance = 3, hence minimum Hamming weight = 3.
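A minimal sketch of the Hamming-distance computation used in this answer:

```python
# Hamming distance: the number of positions in which two code words differ.
def hamming_distance(x, y):
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

print(hamming_distance('101010', '010101'))   # all six positions differ -> 6
print(hamming_distance('101', '110'))         # the X, Y example above -> 2
```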

10. State the significance of the minimum distance of a block code. [MAY 13]

dmin ≥ S + 1, i.e. S ≤ dmin - 1: the code can detect S errors.

dmin ≥ 2t + 1, i.e. t ≤ (dmin - 1)/2: the code can correct t errors.

PART-B

1. Describe the steps involved in the generation of linear block codes; define and explain the properties of the syndrome.

Linear Block Codes

Consider an (n, k) linear block code. It is a systematic code, since the message and parity bits occupy separate parts of the code word:

x = [b0, b1, ..., bn-k-1, m0, m1, ..., mk-1]

Message vector: m = [m0, m1, ..., mk-1], a 1 × k vector

Parity bits: b = [b0, b1, ..., bn-k-1], a 1 × (n-k) vector

Code word: x = [x0, x1, ..., xn-1], a 1 × n vector

Coefficient Matrix


P is the k × (n-k) coefficient matrix, with

b = mP

Identity matrix:

Ik = the k × k identity matrix

Generator matrix:

x = [b | m] = [mP | m] = m[P | Ik]

x = mG

G = [P | Ik], a k × n matrix

Parity check matrix:

H = [In-k | P^T], an (n-k) × n matrix

To prove the use of the parity check matrix:

HG^T = P^T + P^T = 0 (modulo 2)

We know that x = mG, so

xH^T = mGH^T = 0

Syndrome Decoding

y = x + e

y - received vector

e - error pattern

S = yH^T

Important properties of syndrome

Property 1:


The syndrome depends only on the error pattern and not on the transmitted code word:

S = yH^T

S = (x + e)H^T

  = xH^T + eH^T

  = 0 + eH^T

S = eH^T

Property 2

All error patterns that differ at most by a code word have the same syndrome:

ei = e + xi,  i = 0, 1, 2, ...

Multiplying by H^T:

ei H^T = eH^T + xi H^T = eH^T + 0 = eH^T

Property 3

The syndrome S is the sum of those columns of the matrix H corresponding to the error locations. Writing H in terms of its columns,

H = [h0, h1, ..., hn-1]

S = eH^T = Σ(i = 0 to n-1) ei hi^T

so only the columns hi at the error locations (ei = 1) contribute to the sum.

Property 4

With syndrome decoding an (n, k) linear block code can correct up to t errors per code word, provided n and k satisfy the Hamming bound

2^(n-k) ≥ Σ(i = 0 to t) nCi

where nCi = n! / (i! (n-i)!)
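The Hamming bound can be checked directly; the (7,4), t = 1 parameters below correspond to the single-error-correcting code worked out in the problems that follow:

```python
from math import comb

# Hamming bound: 2**(n-k) >= sum over i = 0..t of C(n, i).
def satisfies_hamming_bound(n, k, t):
    return 2 ** (n - k) >= sum(comb(n, i) for i in range(t + 1))

print(satisfies_hamming_bound(7, 4, 1))   # 8 >= 1 + 7: True (a perfect code)
print(satisfies_hamming_bound(7, 4, 2))   # 8 >= 1 + 7 + 21: False
```

Equality, as in the (7,4) case with t = 1, identifies a perfect code: every syndrome corresponds to exactly one correctable error pattern.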


Minimum Distance Considerations

Hamming distance - the number of locations in which the respective elements of two code words differ.

Hamming weight - the number of non-zero elements in the code vector.

Minimum distance (dmin) - the smallest Hamming weight of the non-zero code vectors.

Error detection - the code can detect S errors, provided dmin ≥ S + 1.

Error correction - the code can correct t errors, provided dmin ≥ 2t + 1.

2. Explain the channel coding theorem

Shannon's Second Theorem (or Channel Coding Theorem):

For a relatively noisy channel, if the probability of error is 10^-2, then 99 out of 100 transmitted bits are received correctly. This level of reliability is inadequate; indeed, a probability of error equal to 10^-6 or less is often necessary. In order to achieve such a high level of performance, we may have to resort to the use of channel coding.

Aim

It is used to increase the resistance of a digital communication system to channel noise.

Channel coding consists of:

Mapping the incoming data sequence into a channel input sequence, and

Inverse mapping the channel output sequence into an output data sequence, in such a way that the overall effect of channel noise on the system is minimized.

The mapping operation is performed in the transmitter by means of an encoder, whereas the inverse mapping operation is performed in the receiver by means of a decoder.


Channel coding introduces controlled redundancy to improve reliability, whereas source coding reduces redundancy to improve efficiency.

The message sequence is subdivided into sequential blocks, each k bits long.

Each k-bit block is mapped into an n-bit block, where n > k; the number of redundant bits added by the encoder to each transmitted block is n - k bits.

The ratio k/n is called the code rate:

r = k/n, where r is less than unity.

Statement: The channel coding theorem for a discrete memoryless channel is stated in two parts as follows.

Let a discrete memoryless source with an alphabet ζ have entropy H(ζ) and produce symbols once every Ts seconds. Let a discrete memoryless channel have capacity C and be used once every Tc seconds. Then, if

H(ζ)/Ts ≤ C/Tc

there exists a coding scheme for which the source output can be transmitted over the channel and be reconstructed with an arbitrarily small probability of error. The parameter C/Tc is called the critical rate.

Conversely, if H(ζ)/Ts > C/Tc, it is not possible to transmit information over the channel and reconstruct it with an arbitrarily small probability of error.

NOTE:

The channel coding theorem does not show us how to construct a good code.

Rather, it tells us that if the condition is satisfied, then good codes do exist.

Application of the Channel Coding Theorem to Binary Symmetric Channels:

Consider a discrete memoryless source that emits equally likely binary symbols (0s and 1s) once every Ts seconds, so the source entropy equals one bit per source symbol. The information rate of the source is (1/Ts) bits per second. The source sequence is applied to a binary channel encoder with code rate r.

The encoder produces a symbol once every Tc seconds. Hence the encoded symbol transmission rate is (1/Tc) symbols per second. The encoder engages the use of a binary symmetric channel once every Tc seconds. Hence the channel capacity per unit time is (C/Tc) bits per second.


The channel coding theorem implies that if

(1/Ts) ≤ (C/Tc)

the probability of error can be made arbitrarily low by the use of a suitable encoding scheme. But the ratio Tc/Ts equals the code rate r of the encoder, so the condition can also be written as

r ≤ C
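For the binary symmetric channel the capacity per channel use is C = 1 - H(p), where H is the binary entropy of the crossover probability p. A sketch of checking r ≤ C follows (the value of p is an assumption for illustration):

```python
import math

# Capacity of a binary symmetric channel with crossover probability p.
def bsc_capacity(p):
    if p in (0.0, 1.0):
        return 1.0                     # noiseless (or deterministically inverted)
    h = -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return 1.0 - h

p = 0.01                               # assumed crossover probability
r = 1 / 2                              # e.g. a rate-1/2 code
print(bsc_capacity(p), r <= bsc_capacity(p))   # r <= C: reliable coding exists
```

At p = 0.5 the capacity drops to zero, so no code rate however small permits reliable transmission.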

3. For a (6,3) systematic linear block code, the code word comprises I1, I2, I3, P1, P2, P3, where the three parity check bits P1, P2 and P3 are formed from the information bits as follows:

P1 = I1 ⊕ I2

P2 = I1 ⊕ I3

P3 = I2 ⊕ I3

Find

i. The parity check matrix

ii. The generator matrix

iii. All possible code words

iv. The minimum weight and minimum distance, and

v. The error detecting and correcting capability of the code.

vi. If the received sequence is 101000, calculate the syndrome and decode the received sequence. (16)

[DEC 10]

Solution:

Parity Check Matrix:

Given: n = 6, k = 3

H = [P^T | I3] =

1 1 0 1 0 0
1 0 1 0 1 0
0 1 1 0 0 1

Generator Matrix:

G = [I3 | P] =

1 0 0 1 1 0
0 1 0 1 0 1
0 0 1 0 1 1

All Possible Code Words:

b = mP, where b denotes the parity bits and m the message bits.

Number of parity bits = n - k = 6 - 3 = 3; number of message bits = k = 3.

000 -> 000000, 001 -> 001011, 010 -> 010101, 011 -> 011110,
100 -> 100110, 101 -> 101101, 110 -> 110011, 111 -> 111000

Minimum weight & minimum distance:

Minimum weight = 3

Minimum distance dmin = 3

Error Detection & Error Correction:

Error detection: since dmin = 3 ≥ S + 1, the code can detect up to S = 2 errors.

Error correction: since dmin = 3 ≥ 2t + 1, the code can correct up to t = 1 error.

Syndrome:

Received sequence r = 101000

S = rH^T = 101

SYNDROME TABLE:

SYNDROME  ERROR PATTERN
0 0 0     0 0 0 0 0 0
1 1 0     1 0 0 0 0 0
1 0 1     0 1 0 0 0 0
0 1 1     0 0 1 0 0 0
1 0 0     0 0 0 1 0 0
0 1 0     0 0 0 0 1 0
0 0 1     0 0 0 0 0 1

The syndrome 101 corresponds to the error pattern 010000, so the corrected code word is 101000 + 010000 = 111000.
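The whole example can be verified with a short sketch. The parity rules used (P1 = I1+I2, P2 = I1+I3, P3 = I2+I3, modulo 2) are the ones consistent with the syndrome table and the corrected code word 111000 given in the solution:

```python
# (6,3) systematic code: encode, build the single-error syndrome table,
# then correct the received sequence 101000 by syndrome look-up.
P = [[1, 1, 0],        # message bit I1 feeds parities P1 and P2
     [1, 0, 1],        # I2 feeds P1 and P3
     [0, 1, 1]]        # I3 feeds P2 and P3

def encode(m):
    b = [sum(m[i] * P[i][j] for i in range(3)) % 2 for j in range(3)]
    return m + b                          # code word [I1 I2 I3 P1 P2 P3]

def syndrome(y):
    # S = y * H^T with H = [P^T | I3]
    return tuple((sum(y[i] * P[i][j] for i in range(3)) + y[3 + j]) % 2
                 for j in range(3))

table = {}                                # syndrome -> single-error position
for pos in range(6):
    e = [0] * 6
    e[pos] = 1
    table[syndrome(e)] = pos

r = [1, 0, 1, 0, 0, 0]                    # received sequence 101000
s = syndrome(r)
if s != (0, 0, 0):
    r[table[s]] ^= 1                      # flip the bit the syndrome points at
print(s, r)                               # syndrome 101, corrected 111000
```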

4. Consider a (7, 4) linear block code whose parity check matrix is given by

Find the generator matrix

How many errors can this code detect

How many errors can this code correct

Draw the circuit for encoder and syndrome computation. [MAY 12]

Solution:

a. Generator Matrix:

Given k = 4 and

H = [Pᵀ | I3] =
[1 1 1 0 1 0 0]
[1 1 0 1 0 1 0]
[1 0 1 1 0 0 1]

so

Pᵀ =
[1 1 1 0]
[1 1 0 1]
[1 0 1 1]

P =
[1 1 1]
[1 1 0]
[1 0 1]
[0 1 1]

G = [I4 | P] =
[1 0 0 0 1 1 1]
[0 1 0 0 1 1 0]
[0 0 1 0 1 0 1]
[0 0 0 1 0 1 1]

b. Error Detection:

To find dmin, we write the table of codewords using b = mP.

No. of parity bits = n − k = 7 − 4 = 3

No. of message bits = k = 4

b1 = m1 ⊕ m2 ⊕ m3
b2 = m1 ⊕ m2 ⊕ m4
b3 = m1 ⊕ m3 ⊕ m4

dmin = 3

It can detect up to dmin − 1 = 2 errors.

c. Error Correction:

It can correct up to 1 error.
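Enumerating all 16 codewords from the parity equations b1 = m1 ⊕ m2 ⊕ m3, b2 = m1 ⊕ m2 ⊕ m4, b3 = m1 ⊕ m3 ⊕ m4 confirms dmin = 3; a small illustrative sketch:

```python
from itertools import product

# Enumerate every codeword (m1 m2 m3 m4 b1 b2 b3) of the (7,4) code.
codewords = []
for m1, m2, m3, m4 in product([0, 1], repeat=4):
    b1 = m1 ^ m2 ^ m3
    b2 = m1 ^ m2 ^ m4
    b3 = m1 ^ m3 ^ m4
    codewords.append((m1, m2, m3, m4, b1, b2, b3))

# For a linear code, d_min equals the minimum weight of a nonzero codeword.
d_min = min(sum(c) for c in codewords if any(c))
print(d_min)   # 3 -> detects d_min - 1 = 2 errors, corrects 1 error
```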


5. Determine the generator polynomial g(x) for a (7, 4) cyclic code, and find the code vectors for the following data vectors: 1010, 1111 and 1000. (8) [NOV 11, MAY 14]

Given: n = 7, k = 4

To find the generator polynomial:

g(x) is a factor of (x^n + 1). Here n = 7:

x^7 + 1 = (1 + x)(1 + x + x^3)(1 + x^2 + x^3)

The generator must have degree n − k = 7 − 4 = 3.

So either (1 + x + x^3) or (1 + x^2 + x^3) can be used as the generator. Assume (1 + x + x^3) is the generator:

g(x) = 1 + x + x^3

Consider data vector 1010: m1 = 1010

m1(x) = 1 + x^2

Step 1: Multiply m1(x) by x^(n−k)

x^(n−k) = x^(7−4) = x^3

x^3 m1(x) = x^3 (1 + x^2) = x^3 + x^5

Step 2: Divide x^3 m1(x) by g(x)

(x^3 + x^5) ÷ (1 + x + x^3)

Quotient q(x) = x^2

Remainder R(x) = x^2


Step 3: Add the remainder R(x) to x^3 m1(x)

C1(x) = x^2 + (x^3 + x^5) = x^2 + x^3 + x^5

C1 = 0011010

Consider data vector 1111: m2 = 1111

m2(x) = 1 + x + x^2 + x^3

Step 1: Multiply m2(x) by x^(n−k)

x^(n−k) = x^(7−4) = x^3

x^3 m2(x) = x^3 (1 + x + x^2 + x^3) = x^3 + x^4 + x^5 + x^6

Step 2: Divide x^3 m2(x) by g(x)

(x^3 + x^4 + x^5 + x^6) ÷ (1 + x + x^3)

Quotient q(x) = 1 + x^2 + x^3

Remainder R(x) = 1 + x + x^2

Step 3: Add the remainder R(x) to x^3 m2(x)

C2(x) = (1 + x + x^2) + (x^3 + x^4 + x^5 + x^6) = 1 + x + x^2 + x^3 + x^4 + x^5 + x^6

C2 = 1111111


Consider data vector 1000: m3 = 1000

m3(x) = 1

Step 1: Multiply m3(x) by x^(n−k)

x^(n−k) = x^(7−4) = x^3

x^3 m3(x) = x^3 (1) = x^3

Step 2: Divide x^3 m3(x) by g(x)

x^3 ÷ (1 + x + x^3)

Quotient q(x) = 1

Remainder R(x) = 1 + x

Step 3: Add the remainder R(x) to x^3 m3(x)

C3(x) = (1 + x) + x^3 = 1 + x + x^3

C3 = 1101000
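All three encodings above apply the same recipe: multiply by x^(n−k), divide by g(x), and append the remainder. A minimal illustrative Python sketch, representing GF(2) polynomials as integers with bit i holding the coefficient of x^i:

```python
def poly_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division (polynomials as bit-mask ints)."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        shift = dividend.bit_length() - dlen
        dividend ^= divisor << shift
    return dividend

def cyclic_encode(msg_bits, g, n, k):
    """Systematic (n,k) cyclic encoding: c(x) = x^(n-k) m(x) + remainder."""
    m = 0
    for i, bit in enumerate(msg_bits):       # bit i -> coefficient of x^i
        m |= bit << i
    shifted = m << (n - k)                   # x^(n-k) * m(x)
    c = shifted | poly_mod(shifted, g)       # append parity (remainder) bits
    return [(c >> i) & 1 for i in range(n)]  # coefficients of x^0 .. x^(n-1)

g = 0b1011                                   # g(x) = 1 + x + x^3
print(cyclic_encode([1, 0, 1, 0], g, 7, 4))  # C1 = 0011010
print(cyclic_encode([1, 1, 1, 1], g, 7, 4))  # C2 = 1111111
print(cyclic_encode([1, 0, 0, 0], g, 7, 4))  # C3 = 1101000
```

The three printed codewords reproduce C1, C2 and C3 from the worked solution.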


6. Consider a (7,4) linear block code with the parity check matrix

H =
[1 0 1 1 1 0 0]
[1 1 0 1 0 1 0]
[0 1 1 1 0 0 1]

Construct the coefficient matrix

Find the generator matrix

Construct all possible codewords

Minimum weight and minimum distance

Error detection and error correction capabilities

Check whether it is a Hamming code

If the received sequence is [0101100], calculate the syndrome and decode the received sequence.

Illustrate the relation between the minimum distance and the structure of the parity check matrix H by considering the codeword [0101100].

Solution:

Coefficient Matrix:

Given H as above. We know that H = [Pᵀ | I_(n−k)], so

Pᵀ =
[1 0 1 1]
[1 1 0 1]
[0 1 1 1]

COEFFICIENT MATRIX:

P =
[1 1 0]
[0 1 1]
[1 0 1]
[1 1 1]


GENERATOR MATRIX:

Given n = 7, k = 4

G = [I4 | P] =
[1 0 0 0 1 1 0]
[0 1 0 0 0 1 1]
[0 0 1 0 1 0 1]
[0 0 0 1 1 1 1]

All possible codewords:

b = mP, where b = parity bits and m = message bits.

No. of parity bits = n − k = 7 − 4 = 3

No. of message bits = k = 4

b1 = m1 ⊕ m3 ⊕ m4
b2 = m1 ⊕ m2 ⊕ m4
b3 = m2 ⊕ m3 ⊕ m4

MINIMUM WEIGHT & MINIMUM DISTANCE:

From the codeword table, choosing the smallest weight other than zero:

Minimum weight = 3

In a linear block code, minimum distance = minimum weight.

Therefore minimum distance dmin = 3.

ERROR DETECTION & ERROR CORRECTION:

Error detection: it can detect up to dmin − 1 = 2 errors.

Error correction: it can correct up to 1 error.

TO CHECK WHETHER IT IS A HAMMING CODE:

1) No. of parity bits q = n − k = 7 − 4 = 3

2) Block length n = 2^q − 1:

7 = 2^3 − 1 = 8 − 1 = 7. Yes.

3) No. of message bits k = 2^q − q − 1:

4 = 2^3 − 3 − 1 = 8 − 3 − 1 = 4. Yes.

4) dmin = 3. Yes.

Since it satisfies all the conditions, it is a Hamming code.

SYNDROME TABLE:

Syndrome    Error pattern
0 0 0       0 0 0 0 0 0 0
1 1 0       1 0 0 0 0 0 0
0 1 1       0 1 0 0 0 0 0
1 0 1       0 0 1 0 0 0 0
1 1 1       0 0 0 1 0 0 0
1 0 0       0 0 0 0 1 0 0
0 1 0       0 0 0 0 0 1 0
0 0 1       0 0 0 0 0 0 1


For r = 0101100, the syndrome is s = rHᵀ = 000, so the error pattern is e = 0000000.

Correct codeword = r ⊕ e = 0101100 ⊕ 0000000 = 0101100

0101100 is the correct codeword.

RELATION BETWEEN dmin AND H:

dmin = 3. The smallest number of columns of H that sum to zero is 3; for a linear block code, dmin equals this number.
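Both the zero syndrome and the column-sum interpretation of dmin can be checked numerically. The sketch below assumes H = [Pᵀ | I3] built from the parity equations b1 = m1 ⊕ m3 ⊕ m4, b2 = m1 ⊕ m2 ⊕ m4, b3 = m2 ⊕ m3 ⊕ m4:

```python
import numpy as np
from itertools import combinations

# Parity-check matrix of the (7,4) code (an assumption from the parity equations).
H = np.array([[1, 0, 1, 1, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

r = np.array([0, 1, 0, 1, 1, 0, 0])
print(np.dot(H, r) % 2)    # all-zero syndrome -> r is a valid codeword

# d_min = smallest number of columns of H that sum to zero (mod 2).
d_min = min(k for k in range(1, 8)
            for cols in combinations(range(7), k)
            if not np.any(H[:, list(cols)].sum(axis=1) % 2))
print(d_min)               # 3
```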

7. For a convolutional encoder of constraint length 3 and rate 1/2:

Draw the encoder diagram for generator vectors g1 = (1, 1, 1) and g2 = (1, 0, 1)

Find the dimension of the code

Code rate

Constraint length

Obtain the encoded output for the input message 10011, using the transform domain approach.

Solution:

Given:

Rate = 1/2

Constraint length = 3

Generator vectors g1 = (1, 1, 1) and g2 = (1, 0, 1)

Input message m = 10011

Encoder:

Rate = 1/2, so there is 1 input and 2 outputs.


Dimension of the code:

The encoder takes 1 input bit at a time, so k = 1. It generates 2 output bits, so n = 2.

Dimension = (n, k) = (2, 1)

Code rate: r = k/n = 1/2

Constraint length:

Definition: the number of shifts over which a message bit can influence the encoder output.

Here it is 3.

Output sequence:

Given generator vectors g1 = (1, 1, 1) and g2 = (1, 0, 1), and input message m = 10011.

In polynomial representation:

g1(D) = 1 + D + D^2

g2(D) = 1 + (0)D + D^2 = 1 + D^2

m(D) = 1 + (0)D + (0)D^2 + D^3 + D^4 = 1 + D^3 + D^4

Output of upper path:

x1(D) = m(D) g1(D)
      = (1 + D^3 + D^4)(1 + D + D^2)
      = 1 + D + D^2 + D^3 + D^4 + D^5 + D^4 + D^5 + D^6
      = 1 + D + D^2 + D^3 + D^6

x1 = {1 1 1 1 0 0 1}

Output of lower path:

x2(D) = m(D) g2(D)
      = (1 + D^3 + D^4)(1 + D^2)
      = 1 + D^2 + D^3 + D^5 + D^4 + D^6
      = 1 + D^2 + D^3 + D^4 + D^5 + D^6

x2 = {1 0 1 1 1 1 1}

Overall output:

The switch moves between the upper and lower paths alternately.

Code word = {11 10 11 11 01 01 11}
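The transform-domain products above are GF(2) polynomial multiplications, which can be reproduced with a short illustrative sketch:

```python
def gf2_poly_mul(a, b):
    """Multiply two GF(2) polynomials given as coefficient lists [c0, c1, ...]."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj   # mod-2 accumulation
    return out

m  = [1, 0, 0, 1, 1]        # m(D)  = 1 + D^3 + D^4
g1 = [1, 1, 1]              # g1(D) = 1 + D + D^2
g2 = [1, 0, 1]              # g2(D) = 1 + D^2

x1 = gf2_poly_mul(m, g1)    # upper-path output sequence
x2 = gf2_poly_mul(m, g2)    # lower-path output sequence

# Interleave the two streams to form the transmitted codeword.
codeword = [bit for pair in zip(x1, x2) for bit in pair]
print(codeword)             # 11 10 11 11 01 01 11
```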

8. For a convolutional encoder of constraint length 3 and rate 1/2:

Draw the encoder diagram for generator vectors g1 = (1, 1, 1) and g2 = (1, 0, 1)

Find the dimension of the code

Code rate

Constraint length

Obtain the encoded output for the input message 10011, using the time domain approach.

Solution:

Given:

Rate = 1/2

Constraint length = 3

Generator vectors g1 = (1, 1, 1) and g2 = (1, 0, 1)

Input message m = 10011

Dimension of the code:

The encoder takes 1 input bit at a time, so k = 1. It generates 2 output bits, so n = 2.

Dimension = (n, k) = (2, 1)

Code rate: r = k/n = 1/2

Constraint length: the number of shifts over which a message bit can influence the encoder output.

Here it is 3.


The top branch output is x(1)_i = Σ_l g(1)_l m_(i−l) (mod 2), for i = 0, 1, 2, 3, 4, 5, 6 and l = 0, 1, 2, with m = (1, 0, 0, 1, 1) and g(1) = (1, 1, 1):

i = 0: 1·m0 = 1·1 = 1

i = 1: 1·m1 ⊕ 1·m0 = 0 ⊕ 1 = 1

i = 2: 1·m2 ⊕ 1·m1 ⊕ 1·m0 = 0 ⊕ 0 ⊕ 1 = 1

i = 3: 1·m3 ⊕ 1·m2 ⊕ 1·m1 = 1 ⊕ 0 ⊕ 0 = 1

i = 4: 1·m4 ⊕ 1·m3 ⊕ 1·m2 = 1 ⊕ 1 ⊕ 0 = 0

i = 5: 1·m5 ⊕ 1·m4 ⊕ 1·m3 = 0 ⊕ 1 ⊕ 1 = 0

i = 6: 1·m6 ⊕ 1·m5 ⊕ 1·m4 = 0 ⊕ 0 ⊕ 1 = 1

The top branch output sequence is x(1) = {1 1 1 1 0 0 1}.


The bottom branch output sequence is computed similarly with g(2) = (1, 0, 1), for i = 0, 1, 2, 3, 4, 5, 6 and l = 0, 1, 2:

i = 0: 1·m0 = 1·1 = 1

i = 1: 1·m1 ⊕ 0·m0 = 0 ⊕ 0 = 0

i = 2: 1·m2 ⊕ 0·m1 ⊕ 1·m0 = 0 ⊕ 0 ⊕ 1 = 1

i = 3: 1·m3 ⊕ 0·m2 ⊕ 1·m1 = 1 ⊕ 0 ⊕ 0 = 1

i = 4: 1·m4 ⊕ 0·m3 ⊕ 1·m2 = 1 ⊕ 0 ⊕ 0 = 1

i = 5: 1·m5 ⊕ 0·m4 ⊕ 1·m3 = 0 ⊕ 0 ⊕ 1 = 1

i = 6: 1·m6 ⊕ 0·m5 ⊕ 1·m4 = 0 ⊕ 0 ⊕ 1 = 1

The bottom branch output sequence is x(2) = {1 0 1 1 1 1 1}.

Overall output:

The switch moves between the upper and lower paths alternately.

Code word = {11 10 11 11 01 01 11}
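Each output stream above is just the mod-2 convolution of the message with a generator sequence, so the result can be checked with numpy.convolve (an illustrative check, assuming numpy is available):

```python
import numpy as np

m  = np.array([1, 0, 0, 1, 1])   # input message 10011
g1 = np.array([1, 1, 1])         # top-branch generator
g2 = np.array([1, 0, 1])         # bottom-branch generator

# Mod-2 convolution gives each branch's output stream.
x1 = np.convolve(m, g1) % 2
x2 = np.convolve(m, g2) % 2

# Interleave the streams (the commutator switch alternates between branches).
codeword = np.ravel(np.column_stack([x1, x2]))
print(codeword)                  # 11 10 11 11 01 01 11
```

This matches the transform-domain result of problem 7, as expected, since multiplication of polynomials corresponds to convolution of their coefficient sequences.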

9. A rate-1/3 convolutional encoder has generator vectors g1 = (1, 0, 0), g2 = (1, 1, 1) and g3 = (1, 0, 1). Draw the encoder circuit. [April/May 2017]

Draw the code tree, state diagram and trellis diagram.

Decode the given sequence 111 011 010 100 using the Viterbi algorithm.

Solution:

Given: k = 1, n = 3

Using the generator vectors, the outputs are:

x1 = m

x2 = m ⊕ m1 ⊕ m2

x3 = m ⊕ m2

Encoder:


Code Tree, Trellis & State Diagram:

Assume the states (m2 m1):

a = 0 0

b = 0 1

c = 1 0

d = 1 1

State Table:

The outputs are x1 = m, x2 = m ⊕ m1 ⊕ m2, x3 = m ⊕ m2.

S.No  Current state (m2 m1)  Input m  Output (x1 x2 x3)  Next state (m2 m1)
1     a = 0 0                0        0 0 0               0 0 = a
                             1        1 1 1               0 1 = b
2     b = 0 1                0        0 1 0               1 0 = c
                             1        1 0 1               1 1 = d
3     c = 1 0                0        0 1 1               0 0 = a
                             1        1 0 0               0 1 = b
4     d = 1 1                0        0 0 1               1 0 = c
                             1        1 1 0               1 1 = d
1 1 1 0 1 1 =d