
    AN INTRODUCTION TO DIGITAL COMMUNICATIONS Part 1

Figure 1 shows the block diagram of a digital communication system. In this diagram, three basic signal-processing operations are identified: source coding, channel coding, and modulation. It is assumed that the source of information is digital by nature or converted into digital form by design (i.e. by sampling, quantising and digital coding into PCM, for example).

The objective of source encoding is to reduce the number of bits transmitted for each message by taking advantage of any redundancy in the message. For example, if one can convey a message accurately by transmitting a certain number of bits, then it is inefficient to transmit that same message using a larger number of bits. Using more bits increases the number of bits to be transmitted; to transmit this information in a given amount of time using the larger number of bits, one needs to speed up the transmission. This increase in transmission speed requires a wider bandwidth, and as we know bandwidth is a very expensive resource.

In source coding, the source encoder maps the digital message signal generated at the source output into another signal in digital form. The mapping is one-to-one (i.e. for each source message there is only one unique source encoder output), and the objective is to eliminate or reduce redundancy so as to provide an efficient representation of the source output. Since the source encoder mapping is one-to-one, the source decoder simply performs the inverse mapping and thereby delivers to the destination a reproduction of the original digital source output. The primary benefit thus gained from the application of source coding is a reduced bandwidth requirement. (All compression techniques used in transmitting data use some form of source encoding method.)

The objective of channel coding is to reduce the effect of noise on the signal transmitted through the channel. In channel coding, the channel encoder maps the incoming digital signal into a channel input, and the channel decoder maps the channel output into an output digital signal, in such a way that the effect of channel noise is minimised. That is, the combined role of the channel encoder and decoder is to provide for reliable communication over a noisy channel. This provision is satisfied by introducing redundancy to the signal representing the message, in a prescribed fashion, in the channel encoder (before transmitting it into the channel), and by exploiting this redundancy in the channel decoder to reconstruct the original channel encoder input as accurately as possible.

Thus, in source coding, we remove redundancy, whereas in channel coding, we introduce controlled redundancy.
Clearly, we may perform source coding alone, channel coding alone, or include the two in the same system. In the latter case, naturally, the source encoding is performed first, followed by channel encoding, in the transmitter, as illustrated in Figure 1. In the receiver we proceed in the reverse order: channel decoding is performed first, followed by source decoding. Whichever combination is used, the resulting improvement in system performance is achieved at the cost of increased circuit complexity.

Modulation is performed with the purpose of providing for the efficient transmission of the signal over the channel. In particular, the modulator (constituting the last stage of the transmitter in Fig. 1) uses digital modulation techniques such as amplitude-shift keying, frequency-shift keying, phase-shift keying, or quadrature amplitude modulation. The demodulator performs the demodulation (the inverse of modulation), thereby producing a signal that follows the time variations in the channel encoder output (except for the effects of noise). Modulation in general is a process designed:

- to match the properties of the transmitted signal to the channel through the use of a carrier wave
- to reduce the effect of noise and interference
- to simultaneously transmit several signals over a single channel
- to overcome some equipment limitations (e.g. antennae and RF circuits)

The combination of modulator, channel, and demodulator, enclosed inside the dashed rectangle shown in Fig. 1, is sometimes called a discrete channel. It is so called since both its input and output signals are in discrete form.
Traditionally, channel coding and modulation are performed as separate operations, and the introduction of redundant symbols by the channel encoder appears to imply an increased transmission bandwidth. In some applications, however, these two operations (channel coding and modulation) are performed as one function in such a way that the transmission bandwidth need not be increased.

Channels for Digital Communications
The details of modulation and coding used in a digital communications system depend on the characteristics of the channel and the application of interest.
Two characteristics, bandwidth and power, constitute the primary communication resources available to the designer. In addition to bandwidth, channel characteristics of particular concern are the amplitude and phase responses of the channel and how they affect the signal, whether the channel is linear or non-linear, and how free the channel is from external interference.
There are six specific types of channels used in communications: telephone channels, coaxial cables, optical fibres, microwave radio, cellular mobile radio and satellite channels.
The channel provides the connection between the information source and the user. Regardless of its type, the channel degrades the transmitted signal in a number of ways. The degradation is a result of signal distortion due to the imperfect response of the channel, and of undesirable electrical signals (noise) and interference.

Noise and signal distortion are two basic problems of electrical communications.


ELEMENTS OF PROBABILITY THEORY

Introduction:
The word "random" is used to describe apparently unpredictable variations of an observed signal. Random signals (of one sort or another) are encountered in every practical communications system. (Information-carrying signals are unpredictable; the noise that perturbs information signals is also unpredictable. These unpredictable signal and noise waveforms are examples of outcomes of random processes.)
The exact value of these random signals cannot be predicted. Nevertheless the received signal can be described in terms of its statistical properties, such as its average power or the spectral distribution of its average power.
The mathematical discipline that deals with the statistical characterisation of random signals is probability theory.
Probability theory is also used in measuring information, which helps in defining the ideal channel from the point of view of reliable maximum information transmission, as well as in estimating the probability of transmission error and in finding ways to combat the effect of noise in communications, especially in digital transmission.

Probability theory is rooted in real-life situations which result in outcomes that cannot be predicted accurately beforehand. These situations and their outcomes are akin to random experiments. For example the experiment may be the tossing of a fair coin, in which the possible outcomes of trials are "heads" or "tails".

By a random experiment is meant one which has the following three features:
1- The experiment is repeatable under identical conditions.
2- On any trial of the experiment, the outcome is unpredictable.
3- For a large number of trials of the experiment, the outcomes exhibit statistical regularity. That is, a definite average pattern of outcomes is observed if the experiment is repeated a large number of times.

    "onsider an e$periment involving the tossing of a fair coin .8very time the coin is tossed is considered as an event."onsider a specific event, +, among all the possible events of the e$periment under consideration( 9heads9 or 9tails9 in the coin tossing case% .If the e$periment is repeated 7 times during which the event +, ( the occurrence of a 9heads9 fore$ample% has appeared n+times.

    &he ratio (n+57% is called the relati&e 're$uenc!of occurrence of the event + .

    +s 7 becomes large we find that this relative freuency number converges to the same limit, every timewe repeat this e$periment.

When N tends to infinity, the limit of (n_A / N) is called (or defined as) the probability p(A) of the event A, i.e.

p(A) = lim (n_A / N)  as N → ∞

Thus we can write:

0 ≤ n_A ≤ N,  or  0 ≤ n_A / N ≤ 1,  and

0 ≤ lim (n_A / N) ≤ 1,  or  0 ≤ p(A) ≤ 1.

So we can see that the value of the probability p(x) of occurrence of any event x ranges between zero and one (i.e. it is a non-negative real number less than or equal to 1), where p(x) = 0 means that the event x never occurs and p(x) = 1 means the event x always occurs.

[Although the outcome of a random experiment is unpredictable, there is a statistical regularity about the outcomes. For example, if a coin is tossed a large number of times, about half the time the outcome will be "heads", and the remaining half of the time it will be "tails". We may say that the relative frequency (and in this case the probability) of the outcome "heads" or "tails" is one half. Observe that if the experiment is repeated only a small number of times, the relative frequency of an outcome may vary widely from its probability of occurrence. Because of the statistical regularity of the random process, the relative frequency converges toward the probability when the experiment is repeated a large number of times.]
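As an illustration of this convergence, here is a short Python sketch (the helper name relative_frequency is just an illustrative choice, not something from the notes) that tosses a simulated fair coin N times and prints n_A/N for increasing N; the ratio settles near p(A) = 1/2:

    import random

    def relative_frequency(num_trials, seed=0):
        """Toss a fair coin num_trials times and return the relative
        frequency of "heads" (the ratio n_A / N from the text)."""
        rng = random.Random(seed)
        heads = sum(rng.random() < 0.5 for _ in range(num_trials))
        return heads / num_trials

    # As N grows, n_A/N settles near the probability p(A) = 1/2.
    for n in (10, 100, 10_000, 1_000_000):
        print(f"N = {n:>9}: n_A/N = {relative_frequency(n):.4f}")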

We are now ready to make a formal definition of probability. A probability system consists of the following three features:
1- A sample space S of all events (outcomes).
2- At least one class or set of events A that are subsets of S.
3- A probability p(A) assigned to each event A in the class A and having the following properties:

(i) p(S) = 1
(ii) 0 ≤ p(A) ≤ 1
(iii) if A ∪ B is the union of two mutually exclusive events in the classes A and B, then p(A ∪ B) = p(A) + p(B)

Properties (i), (ii), (iii) are known as the axioms of probability.
Axiom (i) expresses the certainty that every outcome of the experiment is an event in the sample space S; axiom (ii) has been explained above; axiom (iii) states that the probability of the union of two mutually exclusive events is the sum of the probabilities of the individual events.
In the next two pages these axioms will be explained further.

In the definition of a probability system above we used a few terms that need explanation and definition. To do this we will use a few simple and basic concepts from what is called Set Theory.


We define the sample space S as the collection of all possible separately identifiable outcomes of a random experiment. Each outcome is an element, or sample point, of this space and can be represented by a point in the sample space.
In the random experiment of rolling a die, for example, the sample space consists of six elements represented by six sample points s1, s2, s3, s4, s5 and s6.


1- Only A has occurred, i.e. AB′
2- Only B has occurred, i.e. A′B
3- Both A and B have occurred, i.e. AB
4- Neither A nor B has occurred, i.e. A′B′

FIG 2. Venn diagram representing the union and intersection of events A and B (regions AB′, AB, A′B and A′B′).

FIG 3. Venn diagram representing two mutually exclusive events A and B (regions AB′, A′B and A′B′).

A set is a collection of items which have a common characteristic. For example, the left-hand circle shown above denotes the set of events where the experiment resulted in the occurrence of A. The right-hand circle represents the set showing all the occurrences of B.

The intersection of set A and set B, i.e. of the two circles, denoted by AB, is the set representing the occurrences of both A and B together; note that the event AB is the same as the event BA.

(Mathematicians use the notation A ∩ B to represent the intersection of sets A and B, which is analogous to the logical AND operation of digital circuits.)

The union of set A and set B, denoted by A + B, is the set that contains all the elements of A, or all the elements of B, or both. (Mathematicians use the notation A ∪ B.)
The events A and B are called simple events, and the events C = AB and D = A + B are called compound or joint events, since they are functions of simple events which are related or which occur together.

Two events A and B are said to be disjoint, or mutually exclusive, if they cannot occur simultaneously, i.e. AB = ∅ (for example, in figure 4 the events Ao and Ae are mutually exclusive).

Looking at the above diagrams:

A = AB′ + AB (or AB′ ∪ AB),  B = A′B + AB (or A′B ∪ AB)
A ∪ B = AB′ + AB + A′B (or AB′ ∪ AB ∪ A′B)

If the number of events in each of the categories 1, 2, 3 and 4 listed above is denoted by n1, n2, n3, n4, then n1 + n2 + n3 + n4 = n,

the total number of all events.

f[A], the relative frequency of occurrence of A independently of B, = (n1 + n3) / n

f[B], the relative frequency of occurrence of B independently of A, = (n2 + n3) / n

f[AB], the relative frequency of occurrence of A and B together, = n3 / n

f[A ∪ B], the relative frequency of occurrence of either A or B or both, = (n1 + n2 + n3) / n

f[A|B], the relative frequency of occurrence of A under the condition that B has occurred, = n3 / (n2 + n3)

f[B|A], the relative frequency of occurrence of B under the condition that A has occurred, = n3 / (n1 + n3)

In the limit, as n tends to infinity, f[·] becomes the probability p(x) of occurrence of the event x (as was defined earlier); thus

f[A] → p(A),  f[B] → p(B),  f[AB] → p(AB),  f[A ∪ B] → p(A ∪ B),  f[A|B] → p(A|B),  f[B|A] → p(B|A)

    4oint Pro"a"ilit!

    efinition: &he probability of a 'oint event, +, is

    p(+% > lim ( )n

    ABn

    n

    where n+ is the number of times that the event + occurs out of n trials.

    In addition, two events + and , are said to be utuall! e2clusi&eif the event

    + never occurs, which implies that p(+% >;


The probability of the union of two events (i.e. a joint event, represented by A + B or A ∪ B) may be evaluated by measuring the compound (or joint) event directly, or it may be evaluated by measuring the probabilities of the simple events, as given in the following theorem.

Theorem: Let the event E be the union of the two events A and B (A ∪ B), i.e. E = A ∪ B; then

p(E) = p(A ∪ B) = p(A) + p(B) - p(AB)

Proof:
Let the event A-only occur n_A times out of n trials, the event B-only occur n_B times out of n trials, and the event AB occur n_AB times. Thus

p(A ∪ B) = lim (n_A + n_B + n_AB) / n
         = lim (n_A / n) + lim (n_B / n) + lim (n_AB / n)   as n → ∞

which is identical to p(E) = p(A) + p(B) - p(AB), since

p(A) = lim (n_A + n_AB) / n,   p(B) = lim (n_B + n_AB) / n,   and   p(AB) = lim (n_AB / n)

(The proof of this theorem could also have been deduced directly from the Venn diagram in figure 2.)

    "onsider the two events + and of a random e$periment. uppose we conduct 7 independent trialsof this e$periment and events + and occur in n+and ntrials, respectively.

    If + and are mutually e$clusive (or dis'oint%, then if + occurs, cannot occur, and vice versa. Kence

    the event + occurs in n+A ntrials and

p(A ∪ B) = lim (n_A + n_B) / N = lim (n_A / N) + lim (n_B / N) = p(A) + p(B)   as N → ∞

where in this case the event AB = ∅.
(Again, the proof could have been deduced directly from the Venn diagram in figure 3.)

To understand the probability of mutually exclusive events a little more, let us consider the coin tossing experiment. The outcomes "heads" (H) and "tails" (T) cannot happen together; they are mutually exclusive, and thus their joint probability p(HT), the probability of a "head" and a "tail" occurring together, is zero (p(HT) = 0). The probability of occurrence of a "head" or a "tail", i.e. the union of the two events, would be equal to p(H) + p(T) - p(HT) = p(H) + p(T) = 1/2 + 1/2 = 1.
This result stands to reason since the outcome of tossing a coin has to be either a "heads" or a "tails" (assuming the coin doesn't come to rest on its edge, or get lost).


As another example of mutually exclusive events consider the throwing of one die.
The probability of occurrence of any number from 1 to 6 is equal to
p(1) + p(2) + p(3) + p(4) + p(5) + p(6) = 1/6 + 1/6 + 1/6 + 1/6 + 1/6 + 1/6 = 6/6 = 1.
The occurrence of any of these numbers is mutually exclusive to the occurrence of any other one of these numbers (the numbers being 1 to 6), and their joint probability is zero.


    Conditional Pro"a"ilit!:&he probability that an event + occurs given that an event has also occurred is denoted by p(+G%and is defined by

    p A Bn

    ( / ) lim ( )=

    n

    n

    AB

    B

    p(+G% is read 9 the probability of + given 9.

The probability of the intersection of two events may be evaluated by measuring the compound event directly, or it may be evaluated from the probability of either of the simple events and the conditional probability of the other simple event.

Theorem: Let the event E be the intersection of the two events A and B, i.e. E = AB; then

p(AB) = p(A) p(B|A) = p(B) p(A|B)

Proof:

p(AB) = lim (n_AB / n) = lim [ (n_AB / n_A)(n_A / n) ] = p(B|A) p(A)
      = lim [ (n_AB / n_B)(n_B / n) ] = p(A|B) p(B)      as n becomes large

Definition: Two events A and B are said to be independent if either

p(A|B) = p(A)

or

p(B|A) = p(B)

It is easy to show that if a set of events A1, A2, ...., An are independent, then

p(A1, A2, ...., An) = p(A1) p(A2) ..... p(An)

Note that for mutually independent events: p(AB) = p(A) p(B)

When the problem discussed is such that the outcome of each experiment remains independent of the previous experiment's outcome, in engineering terminology such experiments (or processes) are said to lack memory. For these experiments the probability of any outcome is always the same (i.e. an outcome of the nth trial has exactly the same probability of occurrence as in the kth trial). This type of experiment leads to the concept of so-called independent stochastic (or random) processes, and is often called a memoryless process.
In certain types of problems an outcome may be influenced by the past history or "memory" of the experiment. Such experiments are termed "dependent stochastic processes". The simplest of these are those experiments in which the probability of an outcome of a trial depends on the outcome of the immediately preceding trial. These types of experiments are called "Markov processes".


COMBINATIONS

If we tossed a coin four times in succession and wanted to determine the probability of obtaining exactly two heads, we might have to list all the 16 possible outcomes to identify the ones we are interested in and count them (see the problem below).
This method of listing all possible outcomes quickly becomes unwieldy as the number of tosses increases. For example, if a coin is tossed 10 times in succession, the total number of outcomes is 2^10 = 1024. Obviously listing all these outcomes is not to be recommended.
A more convenient approach would be to use the results of combinatorial analysis.
If a coin is tossed k times, the number of ways in which j heads can occur is the same as the number of combinations of k things taken j at a time.

This is given by (k choose j), where

(k choose j) = k! / [ j! (k - j)! ]

This can be proved as follows. Consider an urn containing k distinguishable balls marked 1, 2, ....., k. Suppose we draw j balls from this urn without replacement. The first ball could be any one of the k balls, the second ball could be any one of the remaining (k - 1) balls, and so on. Hence the total number of ways in which j balls can be drawn is

k (k - 1)(k - 2) ......... (k - j + 1) = k! / (k - j)!

Next, consider any one set of the j balls drawn. These balls can be ordered in different ways: we could choose any one of the j balls for number 1, and any one of the remaining (j - 1) balls for number 2, and so on. This will give a total of j (j - 1)(j - 2) ..... 1 = j! distinguishable patterns formed from the j balls.
The total number of ways in which j things can be taken from k things is k! / (k - j)!, but many of these ways will use the same j things, merely arranged in a different order.
The number of ways in which j things can be taken from k things without regard to order is therefore k! / (k - j)! divided by j!. This is precisely (k choose j).

Thus the number of ways in which two heads can occur in four tosses is

(4 choose 2) = 4! / (2! 2!) = 6
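A small Python sketch of the same calculation, using the factorial formula above and checking it against the standard library's math.comb (the helper name n_choose_j is just an illustrative choice):

    from math import comb, factorial

    # (k choose j) = k! / (j! (k-j)!)
    def n_choose_j(k, j):
        return factorial(k) // (factorial(j) * factorial(k - j))

    print(n_choose_j(4, 2))   # 6 ways of placing two heads in four tosses
    print(comb(4, 2))         # the same value from the standard library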


    Pro"les:

    1- +ssign probabilities to each of the si$ outcomes in Figure 1.

    ? ecause each of the si$ outcomes is eually likely in a large number of independent

    trials, each outcome will appear in one-si$th of the trials . Kencep(si% > 15B for i > 1, 4, 2, , (s4ssB% where s4, s, and sBare mutually e$clusive, p(+e% > p(s4% A p(s% A p( (sB% > 15B A 15B A 15B > 154 , similarly

    p(+o% > 154 and p(% > 452

    +e > (s4 s% , and p(+e% > p(s4% A !(s% > 152, similarly p(+o% > 152@

3- What is the probability of an even integer showing on
a) a roll of one honest die?
b) the sum of two honest dice?
c) what would you expect for the sum of three honest dice?
[Answer: 1/2, 1/2, 1/2.]

- Two dice are thrown together. Determine the probability that a seven is thrown.

[For this experiment, the sample space contains 36 sample points, because 36 possible outcomes exist. All the outcomes are equally likely; hence the probability of each outcome is 1/36.
A sum of seven can be obtained by the six combinations (1, 6), (2, 5), (3, 4), (4, 3), (5, 2) and (6, 1), so p("seven") = 6/36 = 1/6.]

We can also consider the solution in the following manner: out of the 36 possible outcomes, 6 outcomes will give us the "seven", so the "seven" will occur with probability 6/36 = 1/6.


- A coin is tossed four times in succession. Determine the probability of obtaining exactly two heads.

[Since each toss has only two possible outcomes, i.e. a heads (H) or a tails (T), a total of 2^4 = 16 distinct outcomes are possible (remember the number of all possible combinations of a four-digit binary code), all of which are equally likely. Hence the sample space consists of 16 points, each with probability 1/16.

The sixteen outcomes are listed below:

TTTT  HTTT
TTTH  HTTH
TTHT  HTHT
TTHH  HTHH
THTT  HHTT
THTH  HHTH
THHT  HHHT
THHH  HHHH

Six of these outcomes are favourable to the event "obtaining two heads" (TTHH, THTH, THHT, HTTH, HTHT, HHTT).
Because all of these six outcomes are mutually exclusive (disjoint), p("obtaining two heads") = 6/16 = 3/8.]
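The same result can be checked by brute-force enumeration; a short Python sketch (assuming nothing beyond the standard library):

    from itertools import product

    # All 2**4 = 16 equally likely outcomes of four tosses.
    outcomes = ["".join(seq) for seq in product("TH", repeat=4)]
    favourable = [o for o in outcomes if o.count("H") == 2]

    print(len(outcomes), len(favourable))      # 16 6
    print(len(favourable) / len(outcomes))     # 0.375 = 3/8
    print(favourable)   # TTHH, THTH, THHT, HTTH, HTHT, HHTT (in some order)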

- A long binary message contains 126 binary 1's and 266 binary 0's. What is the probability of obtaining a binary 1 in any received bit? (ansr: 126/392 ≈ 0.32)

- An urn contains two white balls and three black balls. Two balls are drawn in succession, the first one not being replaced. What is the chance of picking two white balls in succession? (ansr: 1/10)

Again we can solve this problem in several ways:
a) There are 20 different permutations of 2 balls drawn out of the 5. Each one of the 20 permutations is equally likely, and they are all mutually exclusive; only the two permutations consisting of the two white balls (in either order) give the desired result, so the probability is 2/20 = 1/10.

b) The probability of drawing two white balls = the probability of drawing white ball 1 then white ball 2 + the probability of drawing white ball 2 then white ball 1, i.e. p(w w) = p(w1 w2) + p(w2 w1),

but p(w1 w2) = p(w1) p(w2|w1) = (1/5)(1/4) = 1/20, and similarly p(w2 w1) = 1/20,

so p(w w) = 1/20 + 1/20 = 1/10.

c) Directly: p(w w) = p(first ball white) p(second ball white | first ball white) = (2/5)(1/4) = 1/10.


10- Two urns contain white and black balls. Urn A contains two black balls and one white ball; urn B contains three black balls and two white balls. One of the urns is selected at random, and one of the balls in it is chosen. What is the probability p(W) of drawing a white ball?

There are two ways of drawing one white ball:
a) pick urn A, draw W
b) pick urn B, draw W
These two events are mutually exclusive, so p(W) = p(A W) + p(B W),

but p(A W) = p(A) p(W|A) = (1/2)(1/3) = 1/6
and p(B W) = p(B) p(W|B) = (1/2)(2/5) = 1/5, so p(W) = 1/6 + 1/5 = 11/30.
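A hedged check of this result: the short Python sketch below computes the same total probability and also estimates it by simulating the experiment (the urn strings and the function name simulate are illustrative choices):

    import random

    # p(W) = p(A) p(W|A) + p(B) p(W|B) = (1/2)(1/3) + (1/2)(2/5) = 11/30
    exact = 0.5 * (1 / 3) + 0.5 * (2 / 5)

    def simulate(trials, seed=1):
        rng = random.Random(seed)
        urns = {"A": "WBB", "B": "WWBBB"}   # urn contents as in the problem
        hits = sum(rng.choice(urns[rng.choice("AB")]) == "W" for _ in range(trials))
        return hits / trials

    print(exact, simulate(200_000))   # both close to 11/30 = 0.3667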


INFORMATION THEORY

Information theory deals with mathematical modelling and analysis of a communications system, rather than with physical sources and physical channels.
Specifically, given an information source and a noisy channel, information theory provides limits on:
1- The minimum number of bits per symbol required to fully represent the source
(i.e. the efficiency with which information from a given source can be represented).

2- The maximum rate at which reliable (error-free) communications can take place over the noisy channel.

Since the whole purpose of a communications system is to transport information from a source to a destination, the question arises as to how much information can be transmitted in a given time. (Normally the goal would be to transmit as much information as possible in as small a time as possible, such that this information can be correctly interpreted at the destination.)
This of course leads to the next question, which is: how can information be measured, and how do we measure the rate at which information is emitted from a source?

Suppose that we observe the output emitted by a discrete source every unit interval (or signalling interval).
The source output can be considered as a set, S, of discrete random events (or outcomes). These events are symbols from a fixed finite alphabet.
(For example, the set or alphabet can be the numbers 1 to 6 on a die, and each roll of the die outputs a symbol, namely the number on the die's upper face when the die comes to rest.
Another example is a digital binary source, where the alphabet is the digits "0" and "1", and the source outputs a symbol of either "0" or "1" at random.)

If, in general, we consider a discrete random source which outputs symbols from a fixed finite alphabet which has k symbols, then the set S contains all the k symbols and we can write

S = { s0, s1, s2, ......., s(k-1) }   and

sum over i from 0 to (k-1) of p(si) = 1        (3.1)

In addition we assume that the symbols emitted by the source during successive signalling intervals are statistically independent, i.e. the probability of any symbol being emitted at any signalling interval does not depend on the probability of occurrence of previous symbols; i.e. we have what is called a discrete memoryless source.

Can we find a measure of how much "information" is produced by this source?

The idea of information is closely related to that of "uncertainty" and "surprise".

If the source emits an output si which has a probability of occurrence p(si) = 1, then all other symbols of the alphabet have a zero probability of occurrence and there is really no "uncertainty", "surprise", or information, since we already know beforehand (a priori) what the output symbol will be.


If on the other hand the source symbols occur with different probabilities, and the probability p(si) is low, then there is more "uncertainty", "surprise" and therefore "information" when the symbol si is emitted by the source, rather than another one with higher probability.

Thus the words "uncertainty", "surprise", and "information" are all closely related.
- Before the output si occurs, there is an amount of "uncertainty".
- When the output si occurs, there is an amount of "surprise".
- After the occurrence of the output si, there is a gain in the amount of "information".

All three amounts are really the same, and we can see that the amount of information is related to the inverse of the probability of occurrence of the symbol.

Definition:
The amount of information gained after observing the event si, which occurs with probability p(si), is

I(si) = log2 [ 1 / p(si) ]  bits,   for i = 0, 1, 2, ..., (k-1)        (3.2)

The unit of information is called the "bit", a contraction of "binary digit".

This definition exhibits the following important properties, which are intuitively satisfying:

1- I(si) = 0 for p(si) = 1
i.e. if we are absolutely certain of the output of the source even before it occurs (a priori), then there is no information gained.

2- I(si) ≥ 0 because 0 ≤ p(si) ≤ 1 for the symbols of the alphabet.
i.e. the occurrence of an output sj either provides some information or no information, but never brings about a loss of information (unless it is a severe blow to the head, which is highly unlikely from a discrete source).

3- I(sj) > I(si) for p(sj) < p(si)
i.e. the lower the probability of occurrence of an output, the more information we gain when it occurs.

4- I(sj si) = I(sj) + I(si) if the outputs sj and si are statistically independent.

The use of the logarithm to the base 2 (instead of to the base 10 or to the base e) has been adopted in the measure of information because usually we are dealing with digital binary sources (however, it is useful to remember that log2(a) = 3.322 log10(a)). Thus if the source alphabet were the binary set of


symbols, i.e. "0" or "1", and each symbol was equally likely to occur, i.e. s0 having p(s0) = 1/2 and s1 having p(s1) = 1/2,

we have:

I(si) = log2 [ 1 / p(si) ] = log2 [ 1 / (1/2) ] = log2 (2) = 1 bit

Hence "one bit" is the amount of information that is gained when one of two possible and equally likely (equiprobable) outputs occurs.

[Note that a "bit" is also used to refer to a binary digit when dealing with the transmission of a sequence of 1's and 0's.]

The amount of information, I(si), associated with the symbol si emitted by the source during a signalling interval depends on the symbol's probability of occurrence. In general, each source symbol has a different probability of occurrence. Since the source can emit any one of the symbols of its alphabet, a measure of the average information content per source symbol was defined and called the entropy of the discrete source, H (i.e. taking all the discrete source symbols into account).

Definition
The entropy, H, of a discrete memoryless source with source alphabet composed of the set S = { s0, s1, s2, ......., s(k-1) } is a measure of the average information content per source symbol,

and is given by:

H = sum over i from 0 to (k-1) of p(si) I(si) = sum over i from 0 to (k-1) of p(si) log2 [ 1 / p(si) ]   bits/symbol        (3.3)

We note that the entropy, H, of a discrete memoryless source is bounded as follows:
0 ≤ H ≤ log2 k, where k is the number of source symbols.

Furthermore, we may state that:
1- H = 0, if and only if the probability p(si) = 1 for some symbol si, and the remaining

source symbol probabilities are all zero. This lower bound on entropy corresponds to no uncertainty and no information.

2- H = log2 k bits/symbol, if and only if p(si) = 1/k for all the k source symbols (i.e. they are

all equiprobable). This upper bound on entropy corresponds to maximum uncertainty and maximum information.

Example:

Calculate the entropy of a discrete memoryless source with source alphabet S = { s0, s1, s2 } with

probabilities p(s0) = 1/4, p(s1) = 1/4, p(s2) = 1/2.


H = p(s0) log2 [ 1/p(s0) ] + p(s1) log2 [ 1/p(s1) ] + p(s2) log2 [ 1/p(s2) ]

  = (1/4) log2 4 + (1/4) log2 4 + (1/2) log2 2

  = 0.5 + 0.5 + 0.5 = 1.5 bits/symbol

Information Rate
If we consider that the symbols are emitted from the source at a fixed time rate of rs symbols/second (one symbol per signalling interval), we can define the average source information rate R, in bits per second, as the product of the average information content per symbol, H, and the symbol rate rs:

R = rs H   bits/sec        (3.4)

Example
A discrete source emits one of five symbols once every millisecond. The symbol probabilities are 1/2, 1/4, 1/8, 1/16 and 1/16 respectively.
Find the source entropy and information rate.

H = sum over i of p(si) log2 [ 1/p(si) ]   bits, where in this case k = 5

  = (1/2) log2 2 + (1/4) log2 4 + (1/8) log2 8 + (1/16) log2 16 + (1/16) log2 16

  = 0.5 + 0.5 + 0.375 + 0.25 + 0.25 = 1.875 bits/symbol

R = rs H bits/sec

The information rate R = (1 / 10^-3) x 1.875 = 1875 bits/second.
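The same figures can be reproduced with a short Python sketch (variable names such as probs and rs are illustrative):

    from math import log2

    def entropy(probs):
        return sum(p * log2(1 / p) for p in probs if p > 0)

    probs = [1/2, 1/4, 1/8, 1/16, 1/16]   # the five symbol probabilities
    rs = 1_000                            # one symbol per millisecond
    H = entropy(probs)
    R = rs * H                            # R = rs * H

    print(H, R)   # 1.875 bits/symbol, 1875 bits/second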

Entropy of a Binary Memoryless Source:
To illustrate the properties of H, let us consider a memoryless digital binary source for which symbol 0 occurs with probability p0 and symbol 1 with probability p1 = (1 - p0).


The entropy of such a source equals:

H = p0 log2 [ 1/p0 ] + p1 log2 [ 1/p1 ] = p0 log2 [ 1/p0 ] + (1 - p0) log2 [ 1/(1 - p0) ]   bits

We note that

1- When p0 = 0, the entropy H = 0. This follows from the fact that x log x → 0 as x → 0.
2- When p0 = 1, the entropy H = 0.

3- The entropy H attains its maximum value, Hmax = 1 bit, when p0 = p1 = 1/2, that is, symbols

0 and 1 are equally probable (i.e. H = log2 k = log2 2 = 1).

(Hmax = 1 can be verified by differentiating H with respect to p0 and equating to zero.)


CHANNEL CAPACITY

In Information Theory, the transmission medium is treated as an abstract and noisy filter called the channel. The limiting rate of information transmission through a channel is called the channel capacity, C.

Channel Coding Theorem
Shannon showed that, if the information rate R [remember that R = rs H bits/sec] is equal to or less than C, i.e. R ≤ C, then there exists a coding technique which enables transmission over the noisy channel with an arbitrarily small frequency of errors.
[A converse to this theorem states that it is not possible to transmit messages without error if R > C.]

Thus the channel capacity is defined as the maximum rate of reliable (error-free) information transmission through the channel.

Now consider a binary source with an available alphabet of k discrete messages (or symbols) which are equiprobable and statistically independent (these messages could be either single-digit symbols or could be composed of several digits each, depending on the situation). We assume that each message sent can be identified at the receiver; therefore this case is often called the "discrete noiseless channel". The maximum entropy of the source is log2 k bits, and if T is the transmission time of each message (i.e. rs = 1/T symbols/sec), the channel capacity is

C = R = rs H = (1/T) log2 k   bits per second.

To attain this maximum the messages must be equiprobable and statistically independent. These conditions form a basis for the coding of the information to be transmitted over the channel.
In the presence of noise, the capacity of this discrete channel decreases as a result of the errors made in transmission.

In making comparisons between various types of communications systems, it is convenient to consider a channel which is described in terms of bandwidth and signal-to-noise ratio.

Review of Signal-to-Noise Ratio

The analysis of the effect of noise on digital transmission will be covered later in this course, but before proceeding we will review the definition of signal-to-noise ratio. It is defined as the ratio of signal power to noise power at the same point in a system. It is normally measured in decibels.


Signal-to-Noise Ratio (dB) = 10 log10 (S/N)   dB

Noise is any unwanted signal. In electrical terms it is any unwanted introduction of energy tending to interfere with the proper reception and reproduction of transmitted signals.

Channel Capacity Theorem

Bit errors and signal bandwidths are of prime importance when designing a communications system. In digital transmission systems noise may change the value of the transmitted digit during transmission (e.g. change a high voltage to a low voltage or vice versa).

This raises the question: is it possible to invent a system with no bit errors at the output even when we have noise introduced into the channel? Shannon's Channel Capacity Theorem (also called the Shannon-Hartley Theorem) answers this question:

C = B log2 (1 + S/N)   bits per second,

where C is the channel capacity, B is the channel bandwidth in hertz and S/N is the signal-to-noise power ratio (watts/watts, not dB).

Although this formula is restricted to certain cases (in particular certain types of random noise), the result is of widespread importance to communication systems because many channels can be modelled by random noise.

From the formula, we can see that the channel capacity, C, decreases as the available bandwidth decreases. C is also proportional to the log of (1 + S/N), so as the signal-to-noise level decreases, C also decreases.

The channel capacity theorem is one of the most remarkable results of information theory. In a single formula, it highlights most vividly the interplay between three key system parameters: channel bandwidth, average transmitted power (or, equivalently, average received power), and noise at the channel output.

The theorem implies that, for a given average transmitted power and channel bandwidth B, we can transmit information at the rate C bits per second, with arbitrarily small probability of error, by employing sufficiently complex encoding systems. It is not possible to transmit at a rate higher than C bits per second by any encoding system without a definite probability of error. Hence, the channel capacity theorem defines the fundamental limit on the rate of error-free transmission for a power-limited, band-limited Gaussian channel. To approach this limit, however, the transmitted signal must have statistical properties approximating those of white Gaussian noise.

    Pro"les7

    1. + voice-grade channel of the telephone network has a bandwidth of 2. kK=.

    (+% "alculate the channel capacity of the telephone channel for a signal-to-noise ratio of 2;d.

    0. egree &elecoms 2 3ecture notes 4B 1451512

  • 8/13/2019 An Intro to Digital Comms - Part 1

    27/56

    (% "alculate the minimum signal-to-noise ratio reuired to support information transmissionthrough the telephone channel at the rate of ;; bits5sec.("% "alculate the minimum signal-to-noise ratio reuired to support information transmissionthrough the telephone channel at the rate of B;; bits5sec.(8H+#:San


The Binary Symmetric Channel

Usually when a "1" or a "0" is sent it is received as a "1" or a "0", but occasionally a "1" will be received as a "0" or a "0" will be received as a "1".
Let's say that on average 1 out of 100 digits will be received in error, i.e. there is a probability p = 1/100 that the channel will introduce an error.

This is called a Binary Symmetric Channel (BSC), and is represented by the following diagram.

Figure: Representation of the Binary Symmetric Channel with an error probability of p. Each transmitted digit (0 or 1) is received correctly with probability (1 - p) and is received as the other digit with probability p.

Now let us consider the use of this BSC model.
Say we transmit one information digit coded with a single even-parity bit. This means that if the information digit is 0 then the codeword will be 00, and if the information digit is a 1 then the codeword will be 11.
As the codeword is transmitted through the channel, the channel may (or may not) introduce an error according to the following error patterns:
E = 00  i.e. no errors
E = 01  i.e. a single error in the last digit
E = 10  i.e. a single error in the first digit
E = 11  i.e. a double error

The probability of no error is the probability of receiving the second transmitted digit correctly on condition that the first transmitted digit was received correctly.
Here we have to remember our discussion on joint probability:

p(AB) = p(A) p(B|A) = p(A) p(B) when the occurrence of either of the two outcomes is independent of the occurrence of the other.
Thus the probability of no error is equal to the probability of receiving each digit correctly.
This probability, according to the BSC model, is equal to (1 - p) per digit, where p is the probability of one digit

being received incorrectly.

Thus the probability of no error = (1 - p)(1 - p) = (1 - p)^2.

Similarly, the probability of a single error in the first digit = p(1 - p), and the probability of a single error in the second digit = (1 - p)p; i.e. the probability of a single error is equal to the sum of the above two probabilities (since the two events are mutually exclusive), i.e.

the probability of a single error (when a code with block length n = 2 is used, as in this case) is equal to 2p(1 - p).

Similarly, the probability of a double error in the above example (i.e. the error pattern E = 11)

is equal to p^2.
In summary these probabilities would be


p(E = 00) = (1 - p)^2
p(E = 01) = (1 - p) p
p(E = 10) = p (1 - p)
p(E = 11) = p^2

and if we substitute p = 0.01 (given in the above example) we find that

p(E = 00) = (1 - p)^2 = 0.9801
p(E = 01) = (1 - p) p = 0.0099
p(E = 10) = p (1 - p) = 0.0099
p(E = 11) = p^2 = 0.0001

This shows that if p < 1/2, then the probability of no error is higher than the probability of a single error occurring, which in turn is higher than the probability of a double error.

Again, if we consider a block code with block length n = 3, then the

probability of no error p(E = 000) = (1 - p)^3,

probability of an error in the first digit p(E = 100) = p (1 - p)^2,

probability of a single error per codeword p(1e) = 3 p (1 - p)^2,
probability of a double error per codeword p(2e) = (3 choose 2) p^2 (1 - p) = 3 p^2 (1 - p),

probability of a triple error per codeword p(3e) = p^3.

And again, if we have a code with block length n = 4, then the

probability of no error p(E = 0000) = (1 - p)^4,

probability of an error in the first digit p(E = 1000) = p (1 - p)^3,

probability of a single error per codeword p(1e) = 4 p (1 - p)^3,

probability of a double error per codeword p(2e) = (4 choose 2) p^2 (1 - p)^2 = 6 p^2 (1 - p)^2,

probability of a triple error per codeword p(3e) = (4 choose 3) p^3 (1 - p) = 4 p^3 (1 - p),

probability of four errors per codeword p(4e) = p^4.

And again, if we have a code with block length n = 5, then the

probability of no error p(E = 00000) = (1 - p)^5,

probability of an error in the first digit p(E = 10000) = p (1 - p)^4,

probability of a single error per codeword p(1e) = 5 p (1 - p)^4,

probability of a double error per codeword p(2e) = (5 choose 2) p^2 (1 - p)^3 = 10 p^2 (1 - p)^3,

probability of a triple error per codeword p(3e) = (5 choose 3) p^3 (1 - p)^2 = 10 p^3 (1 - p)^2,

probability of four errors per codeword p(4e) = (5 choose 4) p^4 (1 - p) = 5 p^4 (1 - p),

probability of five errors per codeword p(5e) = p^5.
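The pattern above is the binomial distribution of errors on a BSC; a minimal Python sketch of the general term (the helper name p_m_errors is illustrative):

    from math import comb

    def p_m_errors(n, m, p):
        """Probability of exactly m digit errors in a block of n digits
        on a BSC with digit error probability p."""
        return comb(n, m) * p**m * (1 - p)**(n - m)

    p = 0.01
    for n in (2, 3, 4, 5):
        dist = [p_m_errors(n, m, p) for m in range(n + 1)]
        print(n, [round(x, 6) for x in dist])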

Figure: The communications system from the channel coding theorem point of view (source -> encoder -> decoder -> user).

CHANNEL CODING

Suppose that we wish to transmit a sequence of binary digits across a noisy channel. If we send a one, a one will probably be received; if we send a zero, a zero will probably be received. Occasionally, however, the channel noise will cause a transmitted one to be mistakenly interpreted as a zero, or a transmitted zero to be mistakenly interpreted as a one. Although we are

unable to prevent the channel from causing such errors, we can reduce their undesirable effects with the use of coding.
The basic idea is simple. We take a set of k information digits which we wish to transmit, annex to them r check digits, and transmit the entire block of n = k + r channel digits. Assuming that the channel noise changes sufficiently few of these transmitted channel digits, the r check digits may

provide the receiver with sufficient information to enable it to detect and/or correct the channel errors.
(The detection and/or correction capability of a channel code will be discussed at some length in the following pages.)
Given any particular sequence of k message digits, the transmitter must have some rule for selecting the r check digits. This is called channel encoding.
Any particular sequence of n digits which the encoder might transmit is called a codeword.

Although there are 2^n different binary sequences of length n, only 2^k of these sequences are codewords, because the r check digits within any codeword are completely determined by the k

information digits. The set consisting of these 2^k codewords, of length n each, is called a code (sometimes referred to as a code book).

No matter which codeword is transmitted, any of the 2^n possible binary sequences of length n may be received if the channel is sufficiently noisy. Given the n received digits, the decoder must attempt to

decide which of the 2^k possible codewords was transmitted.

Repetition codes and single-parity-check codes

Among the simplest examples of binary codes are the repetition codes, with k = 1, r arbitrary, and n = k + r = 1 + r. The code contains two codewords: the sequence of n zeros and the sequence of n ones. We may call the first digit the information digit and the other r digits check digits. The value of each check digit (each 0 or 1) in a repetition code is identical to the value of the information digit. The decoder might use the following rule:
Count the number of zeros and the number of ones in the received bits. If there are more received zeros than ones, decide that the all-zero codeword was sent; if there are more ones than zeros, decide that the all-one codeword was sent. If the number of ones equals the number of zeros, do not decide (just flag the error).
This decoding rule will decode correctly in all cases when the channel noise changes fewer than half the digits in any one block. If the channel noise changes exactly half of the digits in any one block, the decoder will be faced with a decoding failure (i.e. it will not decode the received word into any of the

possible transmitted codewords), which could result in an ARQ (automatic request to repeat the message). If the channel noise changes more than half of the digits in any one block, the decoder will commit a decoding error, i.e. it will decode the received word into the wrong codeword.
If channel errors occur infrequently, the probability of a decoding failure or a decoding error for a repetition code of long block length is very small indeed. However, repetition codes are not very useful: they have only two codewords and a very low information rate R = k/n (also called the code rate); all but one of the digits are check digits.
We are usually more interested in codes which have a higher information rate.
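As a sketch of the repetition code and the majority-vote decoding rule just described (the function names rep_encode and rep_decode are illustrative, and the example assumes n = 5):

    def rep_encode(info_bit, n):
        """Repetition code: repeat the single information bit n times."""
        return [info_bit] * n

    def rep_decode(received):
        """Majority-vote decoding; None signals a decoding failure (a tie)."""
        ones = sum(received)
        zeros = len(received) - ones
        if ones == zeros:
            return None          # flag the error (e.g. request a repeat, ARQ)
        return 1 if ones > zeros else 0

    word = rep_encode(1, 5)          # [1, 1, 1, 1, 1]
    word[0] ^= 1; word[3] ^= 1       # two channel errors
    print(rep_decode(word))          # still decodes to 1 (3 ones vs 2 zeros)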

Extreme examples of codes with a very high information rate are those which use a single parity-check digit. This check digit is taken to be the modulo-2 sum (exclusive-OR) of the codeword's (n - 1) information digits.
(The information digits are added according to the exclusive-OR binary operation: 0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1, 1 + 1 = 0.) If the number of ones in the information word is even, the modulo-2 sum of all

the information digits will be equal to zero; if the number of ones in the information word is odd, their modulo-2 sum will be equal to one.
Even parity means that the total number of ones in the codeword is even; odd parity means that the total number of ones in the codeword is odd. Accordingly the parity bit (or digit) is calculated and appended to the information digits to form the codeword.
This type of code can only detect errors. A single digit error (or any odd number of digit errors) will

be detected, but any combination of two digit errors (or any even number of digit errors) will cause a decoding error. Thus the single-parity-check type of code cannot correct errors.
These two examples, the repetition codes and the single-parity-check codes, provide the extreme, relatively trivial, cases of binary block codes. (Although relatively trivial, single parity checks are used quite often because they are simple to implement.)
The repetition codes have enormous error-correction capability but only one information bit per block.
The single-parity-check codes have a very high information rate but, since they contain only one check digit per block, they are unable to do more than detect an odd number of channel errors.
There are other codes which have moderate information rate and moderate error-correction/detection capability, and we will study a few of them.
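A minimal Python sketch of even-parity encoding and checking as described above (the helper names add_even_parity and parity_ok are illustrative); note how a double error goes undetected:

    def add_even_parity(info_bits):
        """Append one check digit: the modulo-2 sum (XOR) of the information digits."""
        parity = 0
        for b in info_bits:
            parity ^= b
        return info_bits + [parity]

    def parity_ok(codeword):
        """True if the received word still has even parity."""
        check = 0
        for b in codeword:
            check ^= b
        return check == 0

    cw = add_even_parity([1, 0, 1, 1])   # [1, 0, 1, 1, 1]
    print(parity_ok(cw))                 # True
    cw[2] ^= 1                           # single error: detected
    print(parity_ok(cw))                 # False
    cw[0] ^= 1                           # a second error: parity looks fine again
    print(parity_ok(cw))                 # True  (double errors go undetected)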

These codes are classified into two major categories: block codes and convolutional codes.

In block codes, a block of k information digits is encoded to a codeword of n digits (n > k). For each sequence of k information digits there is a distinct codeword of n digits.

In convolutional codes, the coded sequence of n digits depends not only on the k information digits but also on the previous N - 1 information digits (N > 1). Hence the coded sequence for a certain k information digits is not unique but depends on N - 1 earlier information digits.

In block codes, k information digits are accumulated and then encoded into an n-digit codeword. In convolutional codes, the coding is done on a continuous, or running, basis rather than by accumulating k information digits.

We will start by studying block codes (and if there is time we might come back to study convolutional codes).

BLOCK CODES
The block encoder input is a stream of information digits. The encoder segments the input information digit stream into blocks of k information digits, and for each block it calculates a number r of check digits and outputs a codeword of n digits, where n = k + r (or r = n - k).
The code efficiency (also known as the code rate) is k/n.
Such a block code is denoted as an (n, k) code.
Block codes in which the k information digits are transmitted unaltered first, followed by the transmission of the r check digits, are called systematic codes, as shown in Figure 1 below.
Since systematic block codes simplify implementation of the decoder and are always used in practice, we will consider only systematic codes in our studies.
(A non-systematic block code is one which has the check digits interspersed between the information digits. For linear block codes it can be shown that a non-systematic block code can always be transformed into a systematic one.)


Figure 1: An (n, k) block codeword in systematic form (digit positions 1 to n: the k information digits followed by the r check digits).

LINEAR BLOCK CODES

Linear block codes are a class of parity-check codes that can be characterised by the (n, k) notation described earlier.
The encoder transforms a block of k information digits (an information word) into a longer

block of n codeword digits, constructed from a given alphabet of elements. When the alphabet consists

of two elements (0 and 1), the code is a binary code comprised of binary digits (bits). Our discussion of linear block codes is restricted to binary codes.

Again, the k-bit information words form 2^k distinct information sequences, referred to as k-tuples (sequences of k digits).

An n-bit block can form as many as 2^n distinct sequences, referred to as n-tuples.

The encoding procedure assigns to each of the 2^k information k-tuples one of the 2^n n-tuples. A block

code represents a one-to-one assignment, whereby the 2^k information k-tuples are uniquely mapped

into a new set of 2^k codeword n-tuples; the mapping can be accomplished via a look-up table, or via some encoding rules that we will study shortly.

Definition:

An (n, k) binary block code is said to be linear if, and only if, the modulo-2 addition (Ci + Cj) of any two codewords, Ci and Cj, is also a codeword. This property thus means that (for a linear block code)

the all-zero n-tuple must be a member of the code book (because the modulo-2 addition of a codeword with itself results in the all-zero n-tuple).
A linear block code, then, is one in which n-tuples outside the code book cannot be created by the modulo-2 addition of legitimate codewords (members of the code book).

For example, the set of all 2^4 = 16 4-tuples (or 4-bit sequences) is shown below:

0000 0001 0010 0011 0100 0101 0110 0111

1000 1001 1010 1011 1100 1101 1110 1111

An example of a block code (which is really a subset of the above set) that forms a linear code is

0000 0101 1010 1111

It is easy to verify that the modulo-2 addition of any two of these codewords in the code book can only yield one of the other members of the code book, and since the all-zero n-tuple is a codeword, this code is a

linear binary block code.
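This closure property can be verified mechanically; a short Python sketch (assuming only the code book given above):

    from itertools import combinations

    code = {"0000", "0101", "1010", "1111"}

    def xor(a, b):
        """Modulo-2 (bitwise) addition of two equal-length binary strings."""
        return "".join("1" if x != y else "0" for x, y in zip(a, b))

    closed = all(xor(a, b) in code for a, b in combinations(code, 2))
    print(closed and "0000" in code)     # True: the code is linear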


In fact, if one wishes the coding system to correct the occurrence of e errors, then it is both necessary and sufficient that each codeword sequence differs from every other codeword in at least (2e + 1) positions.

DEFINITION
The number of positions in which any two codewords differ from each other is called the Hamming

distance, and is normally denoted by d.

    For e$ample:3ooking at the (n,k% > (,4% binary linear block code, mentioned earlier, which has the followingcodewords:

    C1 ;;;;

    C> ;1;1

    C? 1;1;

    CB 1111

we see that the Hamming distance, d:
between C2 and C3 is equal to 4
between C2 and C4 is equal to 2
between C3 and C4 is equal to 2
We also observe that the Hamming distance between C1 and any of the other codewords is equal to the "weight", that is the number of ones, in each of the other codewords.
We can also see that the minimum Hamming distance (i.e. the smallest Hamming distance between any pair of the codewords), denoted by dmin, of this code is equal to 2.

(The minimum Hamming distance of a binary linear block code is simply equal to the minimum weight of its non-zero codewords. This is due to the fact that the code is linear, meaning that if any two codewords are added together modulo-2 the result is another codeword; thus to find the minimum Hamming distance of a linear block code all we need to do is find the minimum-weight non-zero codeword.)
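As an illustration, the distances quoted above can be checked with a few lines of Python (a minimal sketch; the helper name and the way the codewords are stored are our own choices):

    from itertools import combinations

    def hamming_distance(a, b):
        # number of positions in which the two codewords differ
        return sum(x != y for x, y in zip(a, b))

    code = ["0000", "0101", "1010", "1111"]
    for a, b in combinations(code, 2):
        print(a, b, hamming_distance(a, b))

    # the minimum Hamming distance is the smallest distance over all pairs;
    # for a linear code it also equals the smallest weight of a non-zero codeword
    d_min = min(hamming_distance(a, b) for a, b in combinations(code, 2))
    print("dmin =", d_min)   # prints 2 for this code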

Looking at the above code again, keep in mind what we said earlier about the "Hamming distance" property required of the codewords for a code to correct a single error. We said that, to correct a single error, the code must have each of its codewords differing from every other codeword in at least (2e + 1) positions, where e in our case is 1 (i.e. a single error). That is, the minimum Hamming distance of the code must be at least 3. Therefore the above-mentioned code cannot correct the occurrence of a single error (since its dmin = 2), but it can detect it.

To explain this further, let us consider the diagrams in figure 2.


Figure 2: Hamming spheres of radius e = 1 drawn around codewords C1 and C2.
(a) Hamming distance between the codewords = 3; the code can correct a single error since d = 2e + 1.
(b) Hamming distance between the codewords = 2; the code can only detect an e = 1 error but cannot correct it, because d = e + 1 (i.e. d < 2e + 1).

Imagine that we draw a sphere (called a Hamming sphere) of radius e = 1 around each codeword. This sphere contains all n-tuples which are at a distance of 1 from that codeword (i.e. all n-tuples which differ from the codeword in one position).

If the minimum Hamming distance of the code is dmin < 2e + 1 (as in figure 2b, where d = 2), the occurrence of a single error will change the codeword into the next n-tuple and the decoder does not have enough information to decide whether codeword C1 or C2 was transmitted. The decoder can, however, detect that an error has occurred.
If we look at figure 2a we see that the code has dmin = 2e + 1 and that the occurrence of a single error results in the next n-tuple being received; in this case the decoder can make an unambiguous decision, based on what is called the nearest-neighbour decoding rule, as to which of the two codewords was transmitted.


If the corrupted received n-tuple is not too unlike (not too distant from) a valid codeword, the decoder can decide that the transmitted codeword was the codeword "nearest in distance" to the received word.

Thus, in general, we can say that a binary linear code will correct e errors:

if dmin = 2e + 1 (for odd dmin)
if dmin = 2e + 2 (for even dmin)
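A minimal Python sketch of these two ideas, the number of correctable errors implied by dmin and the nearest-neighbour decoding rule (function names are our own):

    def correctable_errors(d_min):
        # e errors are correctable when d_min >= 2e + 1, i.e. e = floor((d_min - 1) / 2)
        return (d_min - 1) // 2

    def nearest_neighbour_decode(received, code_book):
        # pick the codeword closest in Hamming distance to the received n-tuple
        return min(code_book, key=lambda c: sum(x != y for x, y in zip(c, received)))

    print(correctable_errors(2))   # 0: detect only, as for the (4,2) code above
    print(correctable_errors(3))   # 1: single-error correction
    # returns "0000", but "0101" is equally close: with dmin = 2 the choice is ambiguous
    print(nearest_neighbour_decode("0100", ["0000", "0101", "1010", "1111"]))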

A (6, 3) Linear Block Code Example

Examine the following coding assignment that describes a (6, 3) code. There are 2^k = 2^3 = 8 information words, and therefore eight codewords.

There are 2^n = 2^6 = 64 (sixty-four) 6-tuples in the total 6-tuple set (or vector space).

Information word        Codeword
c1 c2 c3                c1 c2 c3 c4 c5 c6
0  0  0                 0 0 0 0 0 0
1  0  0                 1 0 0 1 1 0
0  1  0                 0 1 0 1 0 1
1  1  0                 1 1 0 0 1 1
0  0  1                 0 0 1 0 1 1
1  0  1                 1 0 1 1 0 1
0  1  1                 0 1 1 1 1 0
1  1  1                 1 1 1 0 0 0

where the check digits are formed as c4 = c1 + c2, c5 = c1 + c3, c6 = c2 + c3 (all additions modulo-2), and its parity-check matrix H is

    H = | 1 1 0 1 0 0 |
        | 1 0 1 0 1 0 |
        | 0 1 1 0 0 1 |

It is easy to check that the eight codewords shown above form a linear code (the all-zeros codeword is present, and the modulo-2 sum of any two codewords is another codeword of the code). Therefore, these codewords represent a linear binary block code.

It is also easy to check that the minimum Hamming distance of the code is dmin = 3;

thus we conclude that this code is a single-error-correcting code, since dmin = 2e + 1 (for odd dmin).
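The codeword table above can be regenerated directly from the check equations. A short Python sketch (illustrative only; the function name is ours):

    from itertools import product

    def encode_63(c1, c2, c3):
        # systematic (6,3) encoding: the information digits followed by three check digits
        c4 = (c1 + c2) % 2
        c5 = (c1 + c3) % 2
        c6 = (c2 + c3) % 2
        return (c1, c2, c3, c4, c5, c6)

    for info in product([0, 1], repeat=3):
        codeword = encode_63(*info)
        print("".join(map(str, info)), "->", "".join(map(str, codeword)))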

In the simple case of single-parity-check codes, the single parity digit was chosen to be the modulo-2 sum of all the information digits. Linear block codes contain several check digits, and each check digit is the modulo-2 sum of some (or all) of the information digits.


Let us consider the (6, 3) code, i.e. n = 6, k = 3, and there are r = n - k = 3 check digits.

We shall label the three information digits C1, C2, C3 and the three check digits C4, C5, C6, where

    C4 = C1 + C2
    C5 = C1 + C3
    C6 = C2 + C3    (all additions modulo-2)

or in matrix notation

    H C^T = 0

where C = [C1, C2, C3, C4, C5, C6] is the codeword and

    H = | 1 1 0 1 0 0 |
        | 1 0 1 0 1 0 |
        | 0 1 1 0 0 1 |

is the parity-check matrix of the code.

The eight codewords in the code are those tabulated in the example above (the three information digits C1, C2, C3 followed by the three check digits C4, C5, C6).
Suppose now that the transmitted codeword was C = [110011] and that the received word was R = [110111]. We can say that the error pattern was E = [000100].

If we multiply the parity-check matrix by the transpose of the received word, what do we get?

    H R^T = H (C + E)^T = H C^T + H E^T = H E^T = S^T    (since H C^T = 0 for any codeword)

The r-tuple S = [s1, s2, s3] is called the syndrome. This shows that the syndrome test, whether performed on the corrupted received word or on the error pattern that caused it, yields the same syndrome.


Since the syndrome digits are defined by the same equations as the parity-check equations, the syndrome digits reveal the parity-check failures on the received codeword. (This happens because the code is linear. An important property of linear block codes, fundamental to the decoding process, is that the mapping between correctable error patterns and syndromes is one-to-one; this means that we can not only detect an error but also correct it.)

For example, using the received word given above, R = [110111]:

    H R^T = | 1 1 0 1 0 0 |   | 1 |     | 1 |
            | 1 0 1 0 1 0 | x | 1 |  =  | 0 |  = S^T
            | 0 1 1 0 0 1 |   | 0 |     | 0 |
                              | 1 |
                              | 1 |
                              | 1 |

where S = [s1, s2, s3] = [100],

and as we can see this is the fourth column of H, pointing to the fourth bit being in error. Now all the decoder has to do (after calculating the syndrome) is to invert the fourth bit position in the received word to produce the codeword that was sent, i.e. C = [110011].
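The complete syndrome-decoding procedure just described can be sketched in a few lines of Python (a toy implementation for this (6,3) code only; variable and function names are our own):

    H = [  # parity-check matrix of the (6,3) code, one row per parity-check equation
        [1, 1, 0, 1, 0, 0],
        [1, 0, 1, 0, 1, 0],
        [0, 1, 1, 0, 0, 1],
    ]

    def syndrome(word):
        # S = H . word^T, all arithmetic modulo-2
        return tuple(sum(h * b for h, b in zip(row, word)) % 2 for row in H)

    def correct_single_error(received):
        s = syndrome(received)
        if s == (0, 0, 0):
            return received                      # no detectable error
        # the syndrome equals the column of H corresponding to the bit in error
        for pos in range(len(received)):
            if tuple(row[pos] for row in H) == s:
                corrected = list(received)
                corrected[pos] ^= 1              # invert the offending bit
                return corrected
        return None                              # not a single-error pattern

    R = [1, 1, 0, 1, 1, 1]                       # received word from the example
    print(syndrome(R))                           # (1, 0, 0): the fourth column of H
    print(correct_single_error(R))               # [1, 1, 0, 0, 1, 1]: the transmitted codeword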

Having obtained a feel for what channel coding and decoding are about, let us apply this knowledge to a particular type of linear binary block code called the Hamming codes.

HAMMING CODES

These are linear binary single-error-correcting codes having the property that the columns of the parity-check matrix, H, consist of all the distinct non-zero sequences of r binary digits. Thus a Hamming code has as many parity-check matrix columns as there are single-error sequences; these codes will correct all patterns of single errors in any transmitted codeword.

These codes have n = k + r, where n = 2^r - 1 and k = 2^r - 1 - r. These codes have a guaranteed minimum Hamming distance dmin = 3.

For example, the parity-check matrix for the (7,4) Hamming code is

    H = | 1 1 1 0 1 0 0 |
        | 1 1 0 1 0 1 0 |
        | 1 0 1 1 0 0 1 |

a) Determine the codeword for the information sequence 0011.
b) If the received word, R, is 1000010, determine if an error has occurred. If it has, find the correct codeword.

Solution:

a) Since H C^T = 0, we can use this equation to calculate the check digits for the given


codeword as follows:

    | 1 1 1 0 1 0 0 |   | 0  |     | 0 |
    | 1 1 0 1 0 1 0 | x | 0  |  =  | 0 |
    | 1 0 1 1 0 0 1 |   | 1  |     | 0 |
                        | 1  |
                        | C5 |
                        | C6 |
                        | C7 |

    by multiplying out the left hand side we get

    1·0 + 1·0 + 1·1 + 0·1 + 1·C5 + 0·C6 + 0·C7 = 0
    0 + 0 + 1 + 0 + C5 = 0
    i.e. 1 + C5 = 0, and therefore C5 = 1

Similarly, by multiplying out the second row of the H matrix by the transpose of the codeword, we obtain

    1·0 + 1·0 + 0·1 + 1·1 + 0·C5 + 1·C6 + 0·C7 = 0
    0 + 0 + 0 + 1 + C6 = 0
    i.e. 1 + C6 = 0, and therefore C6 = 1

Similarly, by multiplying out the third row of the H matrix by the transpose of the codeword, we obtain

    1·0 + 0·0 + 1·1 + 1·1 + 0·C5 + 0·C6 + 1·C7 = 0
    0 + 0 + 1 + 1 + C7 = 0
    i.e. 1 + 1 + C7 = 0, and therefore C7 = 0

so that the codeword is C = [C1, C2, C3, C4, C5, C6, C7] = [0011110].


b) To determine whether the received word R = [1000010] contains an error, we calculate its syndrome:

    H R^T = | 1 1 1 0 1 0 0 |   | 1 |     | 1 |
            | 1 1 0 1 0 1 0 | x | 0 |  =  | 0 |  = S^T
            | 1 0 1 1 0 0 1 |   | 0 |     | 1 |
                                | 0 |
                                | 0 |
                                | 1 |
                                | 0 |

Because the syndrome S = [101] is the third column of the parity-check matrix, the third position of the received word is in error and the correct codeword is 1010010.
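The whole (7,4) example is easy to reproduce in software. The sketch below (our own bit ordering and function names, following the parity equations used in part a) encodes an information word and corrects a single error by matching the syndrome against the columns of H:

    H = [  # parity-check matrix of the (7,4) Hamming code used above
        [1, 1, 1, 0, 1, 0, 0],
        [1, 1, 0, 1, 0, 1, 0],
        [1, 0, 1, 1, 0, 0, 1],
    ]

    def encode_74(m):
        # systematic encoding: information digits m1..m4 followed by checks C5, C6, C7
        c5 = (m[0] + m[1] + m[2]) % 2
        c6 = (m[0] + m[1] + m[3]) % 2
        c7 = (m[0] + m[2] + m[3]) % 2
        return m + [c5, c6, c7]

    def decode_74(r):
        s = tuple(sum(h * b for h, b in zip(row, r)) % 2 for row in H)
        if s != (0, 0, 0):
            pos = [tuple(row[i] for row in H) for i in range(7)].index(s)
            r = r[:pos] + [r[pos] ^ 1] + r[pos + 1:]   # invert the erroneous bit
        return r

    print(encode_74([0, 0, 1, 1]))            # [0, 0, 1, 1, 1, 1, 0]
    print(decode_74([1, 0, 0, 0, 0, 1, 0]))   # [1, 0, 1, 0, 0, 1, 0]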


The Generator Matrix of a linear binary block code

We saw above that the parity-check matrix of a systematic linear binary block code can be written in the following (n - k) by n matrix form:

    H = [ P   I_(n-k) ]

The generator matrix of this same code is written in the following k by n matrix form:

    G = [ I_k   P^T ]

where P is the (n - k) by k sub-matrix of parity-check coefficients and I denotes an identity matrix.

The generator matrix is useful in obtaining the codeword from the information sequence according to the following formula:

    C = m G

where C is the codeword [C1, C2, ........., Cn-1, Cn] and m is the k-digit information word.

For example, for the (7,4) Hamming code discussed previously, its parity-check matrix was

    H = | 1 1 1 0 1 0 0 |
        | 1 1 0 1 0 1 0 |
        | 1 0 1 1 0 0 1 |

and thus its generator matrix would be

    G = | 1 0 0 0 1 1 1 |
        | 0 1 0 0 1 1 0 |
        | 0 0 1 0 1 0 1 |
        | 0 0 0 1 0 1 1 |

Now, if we had an information sequence given by the digits 0011, the codeword would be given by C = m G, i.e.

    C = [0 0 1 1] x | 1 0 0 0 1 1 1 |  =  [0 0 1 1 1 1 0]
                    | 0 1 0 0 1 1 0 |
                    | 0 0 1 0 1 0 1 |
                    | 0 0 0 1 0 1 1 |

which agrees with the codeword obtained earlier from the parity-check equations.
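The multiplication C = m G over the binary field can be written out as a small Python sketch (an illustrative helper of our own; each codeword digit is simply a modulo-2 sum):

    G = [  # generator matrix of the (7,4) Hamming code above
        [1, 0, 0, 0, 1, 1, 1],
        [0, 1, 0, 0, 1, 1, 0],
        [0, 0, 1, 0, 1, 0, 1],
        [0, 0, 0, 1, 0, 1, 1],
    ]

    def encode(m, G):
        # C = m G with all arithmetic modulo-2
        n = len(G[0])
        return [sum(m[i] * G[i][j] for i in range(len(m))) % 2 for j in range(n)]

    print(encode([0, 0, 1, 1], G))   # [0, 0, 1, 1, 1, 1, 0]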


SAMPLING & PCM

INTRODUCTION
The trend in the design of new communication systems has been toward increasing the use of digital techniques. Digital communications offer several important advantages compared to analogue communications, for example higher performance in the presence of noise, greater versatility, easier processing, and higher security.
To transmit analogue message signals, such as voice and video signals, by digital means, the signal has to be converted to a digital signal. This process is known as analogue-to-digital conversion. Two important techniques of analogue-to-digital conversion are pulse code modulation (PCM) and delta modulation (DM).

PULSE CODE MODULATION
The essential processes of PCM are frequency bandlimiting (i.e. filtering), sampling, quantising, and encoding, as shown below in Fig 1.

Figure 1: PCM essential processes: m(t) -> Filter -> Sampler -> Quantiser -> Encoder.

Sampling is the process in which a continuous-time signal is sampled by measuring its amplitude at discrete instants. Representing the sampled values of the amplitude by a finite set of levels is called quantising. Designating each quantised level by a code is called encoding.
While sampling converts a continuous-time signal to a discrete-time signal, quantising converts a continuous-amplitude sample to a discrete-amplitude sample. Thus the sampling and quantising operations transform an analogue signal into a digital signal.
The quantising and encoding operations are usually performed in the same circuit, which is called an analogue-to-digital (A/D) converter. The combined use of quantising and encoding distinguishes PCM from analogue pulse modulation techniques.
In the following sections, we discuss the operations of sampling and frequency bandlimiting, and Pulse Amplitude Modulation (PAM), before discussing quantising, encoding and PCM.

SAMPLING THEOREM
Digital transmission of analogue signals is possible by virtue of the sampling theorem, and the sampling operation is performed in accordance with the sampling theorem.

A. Band-Limited Signals:
A band-limited signal is a signal m(t) for which the Fourier transform M(ω) of m(t) is identically zero above a certain frequency ωM:

    M(ω) = 0   for |ω| > ωM = 2π fM        (1)

B. Sampling Theorem:
If a signal m(t) is a real-valued band-limited signal satisfying condition (1), then m(t) can be


uniquely determined from its values m(nTs) sampled at uniform intervals Ts [Ts ≤ 1/(2fM)]. In fact, the reconstructed m(t) is given by

    m(t) = Σ (from n = -∞ to +∞) m(nTs) · sin[ωM(t - nTs)] / [ωM(t - nTs)]

Ts is called the sampling period and fs = 1/Ts is called the sampling rate (i.e. fs samples per second).

Thus, the sampling theorem states that a band-limited signal which has no frequency components higher than fM Hz can be recovered completely from a set of samples taken at the rate of fs ≥ 2 fM samples per second.
The above sampling theorem is often called the uniform sampling theorem for baseband or low-pass signals.
The minimum sampling rate, 2 fM samples per second, is called the Nyquist sampling rate (also called the Nyquist frequency); its reciprocal 1/(2 fM) (measured in seconds) is called the Nyquist interval.
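The reconstruction formula can be tried out numerically. The sketch below (assuming numpy is available; the tone frequency, rates and window size are arbitrary choices of our own) samples a 60 Hz tone at the Nyquist rate for fM = 100 Hz and interpolates its value at an instant between two sampling times:

    import numpy as np

    f_M = 100.0                        # highest frequency in the band-limited signal (Hz)
    f_s = 2 * f_M                      # Nyquist sampling rate
    T_s = 1 / f_s                      # sampling period

    n = np.arange(-200, 200)           # a finite window of the (infinite) sample sequence
    samples = np.cos(2 * np.pi * 60.0 * n * T_s)      # samples of a 60 Hz tone

    t = 0.0123                         # an arbitrary instant between sampling times
    # interpolation kernel sin[wM(t - nTs)] / [wM(t - nTs)], with wM = pi / Ts
    kernel = np.sinc((t - n * T_s) / T_s)
    m_rec = np.sum(samples * kernel)

    # the two values agree closely; the small difference comes from truncating the sum
    print(m_rec, np.cos(2 * np.pi * 60.0 * t))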

SAMPLING
A sampled signal consists of a train of pulses, where each pulse corresponds to the amplitude of the signal m(t) at the corresponding sampling time. Thus the signal is modulated in amplitude, hence the name Pulse Amplitude Modulation (PAM).
Several PAM signals can be multiplexed together as long as they are kept distinct and are recoverable at the receiving end. This system is one example of Time Division Multiplexed (TDM) transmission (although, in practice, PAM TDM is not really used nowadays).

A. Instantaneous Sampling:
Suppose we sample an arbitrary signal m(t) [Fig. 2a], whose frequency spectrum is M(ω) [Fig. 2b], instantaneously and at a uniform rate, once every Ts seconds. Then we obtain an infinite sequence of samples {m(nTs)}, where n takes on all possible integer values. This ideal form of sampling is called instantaneous sampling (also called impulse sampling).

B. Ideal Sampled Signals:
When the signal m(t) is sampled by a unit impulse train δTs(t) [figure 2c], the sampled signal is represented mathematically by

    ms(t) = m(t) δTs(t) = Σ (from n = -∞ to +∞) m(nTs) δ(t - nTs)

and the sampled signal ms(t) [figure 2d] is called the ideal sampled signal. This sampled signal has a spectrum as shown in figure 2e, where it is seen that the baseband spectrum of the signal is repeated (unattenuated) periodically and appears around all multiples of the sampling frequency. (This is because the impulse train has a constant-amplitude line spectrum repeating at the harmonics, or multiples, of the sampling frequency.)

C. Practical Sampling:
1. Natural sampling:
Although instantaneous ideal sampling is a convenient model, a more practical way of sampling a band-limited analogue signal m(t) is performed by high-speed switching


circuits. An equivalent circuit using a mechanical switch and the resulting sampled signal are shown in figure 3a & b. Figure 5 shows the effect that sampling with pulses of finite width (as opposed to theoretical impulses) has on the frequency spectrum of the resulting sampled signal. As can be seen, the baseband is still repeated at multiples of the sampling frequency, but now these repeated baseband spectra are attenuated by the sin(x)/x factor which results from the spectrum of the train of finite-width pulses.
The sampled signal xns(t) can be written as xns(t) = m(t) xp(t), where xp(t) is the periodic train of rectangular pulses with period Ts, and each rectangular pulse in xp(t) has width d and unit amplitude.
The sampling here is termed natural sampling, since the top of each pulse in xns(t) retains the shape of its corresponding analogue segment during the pulse interval.

2. Flat-Top Sampling:
The simplest, and thus the most popular, practical sampling method is actually performed by a functional block termed the sample-and-hold (S/H) circuit. This circuit produces a flat-top sampled signal xs(t). The resulting sampled signal has approximately the same repeated frequency spectrum as was discussed for natural sampling.


A band-limiting (anti-aliasing) filter must therefore be used before sampling, to remove frequency terms greater than fs/2, even if these frequency terms are ignored (i.e., are inaudible) at the destination.

Aliasing problems are not confined to speech digitisation processes. The potential for aliasing is present in any sampled-data system. Motion picture taking, for example, is another sampling system that can produce aliasing. A common example occurs when filming moving stagecoaches in old Westerns. Often the sampling process (the frame rate) is too slow to keep up with the stagecoach wheel movements, and spurious rotational rates are produced. If the wheel rotates ...


QUANTISATION in PULSE CODE MODULATION
The preceding section describes pulse amplitude modulation, which uses discrete sample times with analogue sample amplitudes to extract the information in a continuously varying analogue signal. Pulse code modulation (PCM) is an extension of PAM wherein each analogue sample value is quantised into a discrete value for representation as a digital codeword.
Thus a PAM system can be converted into a PCM system by adding a suitable analogue-to-digital (A/D) converter at the source and a digital-to-analogue (D/A) converter at the destination. Figure 12 depicts a typical quantisation process in which a set of quantisation intervals is associated in a one-to-one fashion with a binary codeword. All sample values falling in a particular quantisation interval are represented by a single discrete value located at the centre of the quantisation interval. In this manner the quantisation process introduces a certain amount of error or distortion into the signal samples. This error, known as "quantisation noise", is minimised by establishing a large number of small quantisation intervals. Of course, as the number of quantisation intervals increases, so must the number of bits increase to uniquely identify the quantisation intervals.

QUANTISATION

A. Uniform Quantisation:
An example of the quantising operation is shown in figure 12. We assume that the amplitude of the signal m(t) is confined to the range (-mp, +mp), i.e. mp is the peak voltage value of the signal. As illustrated in figure 12, this range is divided into L zones, each of step size Δ, given by

    Δ = 2 mp / L        (1)

A sample amplitude value is approximated by the midpoint of the interval in which it lies. The input-output characteristics of a uniform quantiser are shown in figure 13.

B. Quantisation noise:
The difference between the input and output signals of the quantiser is the quantisation error, or quantisation noise. It is apparent that with a random input signal, the quantising error eq varies randomly in the interval -Δ/2 ≤ eq ≤ +Δ/2.
It can be shown that, assuming the quantising error is equally likely to lie anywhere in the range (-Δ/2 to +Δ/2), the mean square quantising error (or the mean square noise power) is given by

    <eq²> = Δ² / 12        (2)

Thus by substituting from equation 1 into 2 we get the quantisation noise power as

    <eq²> = mp² / (3 L²)        (3)

The quantisation error or distortion created by digitising an analogue signal is customarily expressed as the ratio of average signal power to average quantisation noise power. Thus the signal-to-quantisation-noise ratio SQR (also called signal-to-distortion ratio or signal-to-noise ratio) can be determined as


    SQR (dB) = 10 log10 [ (rms signal voltage)² / <eq²> ]        (4)

Considering a sine wave with peak voltage mp and using equation (3) we get:

    SQR (dB) = 10 log10 [ (mp²/2) / (mp²/(3L²)) ] = 10 log10 [ (3/2) L² ] = 1.76 + 20 log10 L

    SQR (dB) = 1.76 + 20 log10 L        (5)

If L = 2^n levels, where each level can be represented by n binary digits, then

    SQR (dB) = 1.76 + 6n        (6)

(This last formula tells us that each time we increase the codeword length by one bit we gain 6 dB in the SQR.)
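Formula (6) is easy to verify numerically. The sketch below (numpy assumed; the quantiser construction, test frequency and sample count are our own illustrative choices) uniformly quantises a full-range sine wave with n-bit codewords and measures the resulting SQR:

    import numpy as np

    def measured_sqr_db(n_bits, m_p=1.0, num_samples=100000):
        L = 2 ** n_bits
        delta = 2 * m_p / L                              # step size, equation (1)
        t = np.arange(num_samples)
        signal = m_p * np.sin(2 * np.pi * 0.0123 * t)    # full-range sine wave
        # uniform quantiser: replace each sample by the centre of its interval
        quantised = (np.floor(signal / delta) + 0.5) * delta
        quantised = np.clip(quantised, -m_p + delta / 2, m_p - delta / 2)
        noise = signal - quantised
        return 10 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

    for n in (4, 8, 12):
        # the measured values come out close to the 1.76 + 6.02 n prediction
        print(n, round(measured_sqr_db(n), 2), round(1.76 + 6.02 * n, 2))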

Example:
A sine wave with 1 volt maximum amplitude is to be digitised with a minimum signal-to-quantisation-noise ratio (SQR) of 30 dB. How many bits are needed to encode each sample?

Since <eq²> = mp²/(3L²) and

    SQR = 30 dB = 10 log10 [ (mp²/2) / (mp²/(3L²)) ]

    30 dB = 10 log10 [ (3/2) L² ] = 1.76 + 20 log10 L

but since L = 2^n, where n is the number of binary digits that can represent L levels,

    30 dB = 1.76 + 20 log10 2^n = 1.76 + 20 n log10 2 = 1.76 + 6n

thus n = 4.7, i.e. n must be 5 bits.

Example:
A ...-Mbyte hard disk is used to store PCM data. Suppose that a VF (voice-frequency) signal is sampled at 8000 samples/sec and the encoded PCM is to have an SQR of 30 dB. How many minutes of VF conversation can be stored on this hard disk?

If all quantisation intervals have equal lengths (uniform quantisation), the quantisation noise is independent of the sample values and the signal-to-quantisation-noise ratio is determined by

    SQR (dB) = 10 log10 ( 12 m² / Δ² )

    SQR (dB) = 10.8 + 20 log10 ( m / Δ )


where m is the rms amplitude of an arbitrary input signal waveform (not necessarily a sine wave).

In particular, if the input is a sine wave with peak amplitude mp, then m = mp/√2, and

    SQR (dB) = 10 log10 [ 12 mp² / (2 Δ²) ] = 7.78 + 20 log10 ( mp / Δ )

Again considering equation 4, if the input is a sine wave with peak amplitude less than mp, say equal to Vp, then

    SQR (dB) = 10 log10 [ (Vp²/2) / (mp²/(3L²)) ] = 10 log10 [ (3/2) L² (Vp/mp)² ]

    SQR (dB) = 20 log10 ( Vp / mp ) + 1.76 + 6n        (7)

The last two terms of this equation provide the SQR when encoding a full-range sine wave. The first term indicates a loss in SQR when encoding a lower-level signal.
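The loss predicted by the first term of equation (7) can be tabulated quickly (a small Python sketch of our own, evaluated here for an assumed 8-bit coder):

    import math

    def sqr_db(n_bits, vp_over_mp):
        # equation (7): SQR = 20 log10(Vp/mp) + 1.76 + 6.02 n
        return 20 * math.log10(vp_over_mp) + 1.76 + 6.02 * n_bits

    for ratio in (1.0, 0.5, 0.1, 0.01):
        print(ratio, round(sqr_db(8, ratio), 1))
    # a signal 20 dB below full scale (Vp/mp = 0.1) loses 20 dB of SQR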


Again, in relation to SQR:

We assume that the amplitude of the signal m(t) is confined to the range (-mp, +mp), i.e. mp is the peak voltage value of the signal. As illustrated in figure 12, this range is divided into L zones, each of step size Δ, given by

    Δ = 2 mp / L        (1)

It can be shown that, assuming the quantising error is equally likely to lie anywhere in the range (-Δ/2 to +Δ/2), the mean square quantising error (or the mean square noise power) is given by

    <eq²> = Δ² / 12        (2)

Thus by substituting from equation 1 into 2 we get the quantisation noise power as

    <eq²> = mp² / (3 L²)        (3)

Thus the signal-to-quantisation-noise ratio SQR can be determined as

    SQR (dB) = 10 log10 ( signal power / <eq²> )        (4)

    and we could consider the following three cases:

1) If we consider the maximum signal and the maximum signal power to quantisation noise ratio, then in this case

    SQR (dB) = 4.8 + 6n


As shown below:

    SQR (dB) = 10 log10 [ mp² / <eq²> ] = 10 log10 [ mp² / (mp²/(3L²)) ] = 10 log10 [ 3 L² ]
             = 10 log10 3 + 20 log10 L = 10 log10 3 + 20 log10 2^n = 10 log10 3 + 20 n log10 2
             = 4.8 + 6n

2) If we consider the average signal and the average signal power to quantisation noise ratio, then in this case

    SQR (dB) = 6n

As shown below:
The average power of a signal which can take any value between +mp and -mp with equal probability can be shown to be equal to mp²/3, thus

    SQR (dB) = 10 log10 [ (mp²/3) / (mp²/(3L²)) ] = 10 log10 [ L² ]
             = 20 log10 L = 20 log10 2^n = 20 n log10 2 = 6n

3) If we consider the specific case of a sine wave, then the average signal power to quantisation noise ratio in this case will be

    SQR (dB) = 1.76 + 6n

As shown below:
The root mean square value of a sine wave of peak amplitude mp is mp/√2, thus

    SQR (dB) = 10 log10 [ (mp²/2) / (mp²/(3L²)) ] = 10 log10 [ (3/2) L² ]
             = 10 log10 (3/2) + 20 log10 L = 1.76 + 20 log10 2^n = 1.76 + 20 n log10 2
             = 1.76 + 6n


Example:
An audio signal with spectral components limited to the frequency band 300 to 3300 Hz is sampled at 8000 samples per second, uniformly quantised and binary coded. If the required output maximum signal-to-quantisation-noise ratio is 30 dB, calculate:
a) the minimum number of uniform quantisation levels needed,
b) the number of bits per sample needed,
c) the system output bit rate.

a) Using the maximum-signal formula: 30 = 4.8 + 20 log10 L, so L = antilog[(30 - 4.8)/20] = 18.2.
Thus the minimum number of required levels is 19.
b) 19 ≤ 2^n, thus log10 19 = n log10 2 gives n = (log10 19)/(log10 2) = 4.25,
thus the minimum number of bits per sample is 5 (giving 32 levels).
c) 8000 x 5 = 40000 bits/sec.
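The same arithmetic can be expressed as a short calculation in Python (a sketch; only the standard math module is used):

    import math

    sqr_required = 30.0                        # dB, maximum-signal SQR requirement
    L = 10 ** ((sqr_required - 4.8) / 20)      # from SQR = 4.8 + 20 log10 L  ->  18.2
    levels = math.ceil(L)                      # minimum whole number of levels -> 19
    bits = math.ceil(math.log2(levels))        # bits per sample                -> 5
    bit_rate = 8000 * bits                     # samples/sec x bits/sample      -> 40000 bit/s
    print(L, levels, bits, bit_rate)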

Problem:
It is required to design a new PCM system which would improve on the SQR of an existing PCM system by 2 dB. If the existing PCM system uses 10 bits to represent each sample, how many bits must the new PCM system use in order to satisfy the requirement?

Problem:
If the input signal to a PCM system is a sine wave, and given that the mean square quantising error (or the mean square quantising noise power) is

    <eq²> = Δ² / 12

prove that for a sine wave the ratio of average signal power to average quantisation noise power is given by SQR (dB) = 1.76 + 6n.

Problem:
Show how an increase or decrease of 1 bit affects the PCM system SQR.


Companding:
In a uniform PCM system the size of every quantisation interval is determined by the SQR requirement of the lowest signal level to be encoded. Larger signals are also encoded with the same quantisation interval. As indicated in equation 7 and figure 13c, the SQR increases with the signal amplitude. For example, a 26 dB SQR for small signals and a 30 dB dynamic range produces a ...


In particular, the A-law characteristic can also be well approximated by straight-line segments to facilitate direct digital companding, and can be easily converted to and from a linear format. The normalised A-law compression characteristic is defined as:

    F(x) = sgn(x) · A|x| / (1 + ln A),                 0 ≤ |x| ≤ 1/A

    F(x) = sgn(x) · (1 + ln(A|x|)) / (1 + ln A),       1/A ≤ |x| ≤ 1

and it also has an even more cumbersome formula for its inverse, which we will not mention here.
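A direct transcription of this characteristic into Python (a sketch; the function name is ours, and A = 87.6 is the value commonly used in practice, which these notes do not specify here):

    import math

    def a_law_compress(x, A=87.6):
        # normalised A-law compression characteristic, defined for |x| <= 1
        sgn = 1 if x >= 0 else -1
        ax = abs(x)
        if ax <= 1 / A:
            return sgn * (A * ax) / (1 + math.log(A))
        return sgn * (1 + math.log(A * ax)) / (1 + math.log(A))

    for x in (0.001, 0.01, 0.1, 1.0):
        # small inputs are boosted, large inputs are compressed
        print(x, round(a_law_compress(x), 3))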

DIFFERENTIAL PULSE CODE MODULATION
Differential pulse code modulation (DPCM) is designed specifically to take advantage of the sample-to-sample redundancies in a typical speech waveform. Since the range of sample differences is less than the range of individual amplitude samples, fewer bits are needed to encode difference samples. The sampling rate is often the same as for a comparable PCM system. Thus the bandlimiting filter in the encoder and the smoothing filter in the decoder are basically identical to those used in conventional PCM systems.
The simplest means of generating the difference samples for a DPCM coder is to store the previous input sample directly in a sample-and-hold circuit and use an analogue subtractor to measure the change. The change in the signal is then quantised and encoded for transmission. The DPCM structure shown in Figure 15 is more complicated, however, because the previous input value is reconstructed by a feedback loop that integrates the encoded sample differences. In essence, the feedback signal is an estimate of the input signal as obtained by integrating the encoded sample differences. Thus the feedback signal is obtained in the same manner used to reconstruct the waveform in the decoder.
The advantage of the feedback implementation is that quantisation errors do not accumulate

    indefinitely. If the f