
A Guide to Digital Television Systems and Measurements


Copyright © 1997, Tektronix, Inc. All rights reserved.


A Guide to Digital Television Systems and Measurements

Acknowledgment

Authored by David K. Fibush, with contributions from Bob Elkind and Kenneth Ainsworth.

Some text and figures used in this booklet are reprinted with permission from Designing Digital Systems, published by Grass Valley Products.


Contents

1. Introduction
2. Digital Basics
   Component and Composite (analog)
   Sampling and Quantizing
   Digital Video Standards
   Serial Digital Video
   Rate Conversion – Format Conversion
3. Digital Audio
   AES/EBU Audio Data Format
   Embedded Audio
4. System Hardware and Issues
   Cable Selection
   Connectors
   Patch Panels
   Terminations and Loop-throughs
   Digital Audio Cable and Connector Types
   Signal Distribution, Reclocking
   System Timing
5. Monitoring and Measurements
6. Measuring the Serial Signal
   Waveform Measurements
   Measuring Serial Digital Video Signal Timing
7. Definition and Detection of Errors
   Definition of Errors
   Quantifying Errors
   Nature of the Studio Digital Video Transmission Systems
   Measuring Bit Error Rates
   An Error Measurement Method for Television
   System Considerations
8. Jitter Effects and Measurements
   Jitter in Serial Digital Signals
   Measuring Jitter
9. System Testing
   Stress Testing
   SDI Check Field
10. Bibliography
   Articles and Books
   Standards
11. Glossary


1. Introduction

The following major topics are covered in this booklet:

1. A definition of digital television, including digital basics, video standards, conversions between video signal forms, and digital audio formats and standards.

2. Considerations in selecting transmission system passive components such as cable, connectors, and related impedance matching concerns, as well as continued use of the passive loop-through concept. Much of the transmission system hardware and many of the methods already in place may be used for serial digital.

3. New and old system issues that are particularly important for digital video, including equalization, reclocking, and system timing.

4. Test and measurement of serial digital signals, which can be characterized in three different aspects: usage, methods, and operational environment. Types of usage include the designer's workbench, manufacturing quality assurance, user equipment evaluation, system installation and acceptance testing, system and equipment maintenance, and, perhaps most important, operation.

5. Several measurement methods covered in detail, including serial waveform measurements, definition and detection of errors, jitter effects and measurements, and system testing with special test signals.

New methods of test and measurement are required to determine the health of the digital video transmission system and the signals it carries. Analysis methods for the program video and audio signals are well known. However, measurements for the serial form of the digitized signal differ greatly from those used to analyze the familiar baseband video signal.

This guide addresses many topics of interest to designers and users of digital television systems. You'll find information on embedded audio (Section 3), system timing issues (Section 4), and system timing measurements (Section 6). Readers who are already familiar with digital television basics may wish to start with Section 4, System Hardware and Issues. Discussion of specific measurement topics begins in Section 6.



2. Digital Basics

Figure 2-2. Composite encoding. (The R-Y and B-Y color difference signals, each 0-1.5 MHz, quadrature modulate the subcarrier to produce chroma at 2-5 MHz, which is summed with the 0-5 MHz Y signal to form 0-5 MHz composite video.)

Component and Composite (analog)

Image origination sources, such as cameras and telecines, internally produce color pictures as three full-bandwidth signals – one each for Green, Blue, and Red. Capitalizing on the fact that human vision is not as acute for color as it is for brightness, television signals are generally transformed into luminance and color difference signals, as shown in Figure 2-1. Y, the luminance signal, is derived from the GBR component colors based on an equation such as:

Y = 0.59G + 0.30R + 0.11B
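As a quick numerical illustration of this matrix, here is a minimal Python sketch. It only applies the luminance equation above and forms the raw B-Y and R-Y differences; the bandwidth reduction and the scaling into CB and CR described later in this section are not modeled.

```python
def gbr_to_luma_color_diff(g, b, r):
    """Convert GBR levels (0.0 to 1.0) to Y, B-Y, R-Y using Y = 0.59G + 0.30R + 0.11B."""
    y = 0.59 * g + 0.30 * r + 0.11 * b
    return y, b - y, r - y

# 100% yellow (G and R at full level, B at zero) gives a high luminance level:
y, b_minus_y, r_minus_y = gbr_to_luma_color_diff(g=1.0, b=0.0, r=1.0)
print(y, b_minus_y, r_minus_y)   # 0.89, -0.89, 0.11
```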

The color difference signals operate at reduced bandwidth, typically one-half of the luminance bandwidth. In some systems, specifically NTSC, the color difference signals have even lower and unequal bandwidths. Component signal formats and voltage levels are not standardized for 525-line systems, whereas there is an EBU document (EBU N-10) for 625-line systems. Details of component analog systems are fully described in another booklet from Tektronix, Solving the Component Puzzle.

It's important to note that signal level values in the color difference domain allow combinations of Y, B-Y, R-Y that will be out of legal gamut range when converted to GBR. Therefore, there's a need for gamut checking of color difference signals when making operational adjustments to either the analog or digital form of these signals.

Most television signals today are two-field-per-frame "interlaced." A frame contains all the scan lines for a picture: 525 lines in 30 frame/second applications and 625 lines in 25 frame/second applications. Interlacing increases the temporal resolution by providing twice as many fields per second, each one with only half the lines. Every other picture line is represented in the first field and the others are filled in by the second field.

Another way to look at interlace is as bandwidth reduction. Display rates of 50 or more presentations per second are required to eliminate perceived flicker. By using two-field interlace, the bandwidth for transmission is reduced by a factor of two. Interlace does produce artifacts in pictures with high vertical information content. However, for television viewing this isn't generally considered a problem when contrasted with the cost of twice the bandwidth. In computer display applications, the artifacts of interlace are unacceptable, so progressive scanning is used, often with frame rates well above 60 Hz. As television (especially high definition) and computer applications become more integrated, there will be a migration from interlace to progressive scanning for all applications.

Further bandwidth reduction of the television signal occurs when it's encoded into composite PAL or NTSC, as shown in Figure 2-2. NTSC is defined for studio operation by SMPTE 170M, which formalizes and updates the never fully approved RS-170A. Definitions for PAL and NTSC (as well as SECAM) can be found in ITU-R (formerly CCIR) Report 624. Where each GBR signal may have a 6 MHz bandwidth, color difference signals would typically have Y at 6 MHz and each color difference signal at 3 MHz; however, a composite signal is one channel of 6 MHz or less. The net result is a composite 6 MHz channel carrying color fields at a 60 per second rate that, in its uncompressed, progressive scan form, would have required three 12 MHz channels for a total bandwidth of 36 MHz. So, data compression is nothing new; digital just makes it easier.


Figure 2-1. Component signals.


For composite NTSC signals, there are additional gamut considerations when converting from the color difference domain. NTSC transmitters will not allow 100 percent color amplitude on signals with a high luminance level (such as yellow). This is because the transmitter carrier will go to zero for signals greater than about 15 percent above 1 volt. Hence, there's a lower gamut limit for some color difference signals to be converted to NTSC for RF transmission.

As part of the encoding process, sync and burst are added as shown in Figure 2-3. Burst provides a reference for decoding back to components. The phase of burst with respect to the sync edge is called SCH phase, which must be carefully controlled in most studio applications. For NTSC, 7.5 IRE units of setup is added to the luminance signal. This poses some conversion difficulties, particularly when decoding back to component. The problem is that it's relatively easy to add setup, but removing it when the amplitudes and timing of setup in the composite domain are not well controlled can result in black level errors and/or signal disturbances at the ends of the active line.

Sampling and Quantizing

The first step in the digitizing process is to "sample" the continuous variations of the analog signal, as shown in Figure 2-4. By looking at the analog signal at discrete time intervals, a sequence of voltage samples can be stored, manipulated, and later reconstructed.

In order to recover the analog signal accurately, the sample rate must be rapid enough to avoid missing important information. Generally, this requires the sampling frequency to be at least twice the highest analog frequency. In the real world, the frequency is a little higher than twice. (The Nyquist Sampling Theorem says that the interval between successive samples must be equal to or less than one-half of the period of the highest frequency present in the signal.)

The second step in digitizing video is to "quantize" by assigning a digital number to the voltage levels of the sampled analog signal – 256 levels for 8-bit video, 1024 for 10-bit video, and up to several thousand for audio.

To get the full benefit of digital, 10-bit processing is required. While most current tape machines are 8-bit, SMPTE 125M calls out 10 bits as the interface standard. Processing at less than 10 bits may cause truncation and rounding artifacts, particularly in electronically generated pictures. Visible defects will be revealed in the picture if the quantization levels are too coarse (too few levels). These defects can appear as "contouring" of the picture. However, the good news is that random noise and picture details present in most live video signals actually help conceal these contouring effects by adding a natural randomness to them. Sometimes the number of quantizing levels must be reduced; for example, when the output of a 10-bit processing device feeds an 8-bit recorder. In this case, contouring effects are minimized by deliberately adding a small amount of random noise (dither) to the signal. This technique is known as randomized rounding.
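Randomized rounding is simple to express in code. Here is a minimal sketch, assuming 10-bit input codes and an 8-bit target; the dither amplitude (uniform over the two bits being discarded) is an illustrative choice, not a value taken from any standard.

```python
import random

def truncate_10_to_8(code10):
    """Plain truncation: drop the two LSBs. Prone to contouring on smooth ramps."""
    return code10 >> 2

def randomized_round_10_to_8(code10):
    """Add dither spanning the two discarded LSBs before truncating, so the
    average output tracks the 10-bit value and contouring is broken up."""
    dithered = code10 + random.randint(0, 3)
    return min(dithered, 1023) >> 2   # clip so peak codes do not overflow 8 bits

ramp = range(512, 524)               # a gentle 10-bit ramp
print([truncate_10_to_8(c) for c in ramp])          # steps of four identical values
print([randomized_round_10_to_8(c) for c in ramp])  # values vary between adjacent codes
```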

Digital Video Standards

Although early experiments with digital technology were based on sampling the composite (NTSC or PAL) signal, it was realized that for the highest quality operation, component processing was necessary. The first digital standards were component. Interest in composite digital was revived when Ampex and Sony announced a composite digital recording format, which became known as D-2.

Figure 2-3. Color difference and composite waveforms.

Figure 2-4. Sampling an analog signal.


Initially, these machines were projected as analog in/out devices for use in existing analog NTSC and PAL environments; digital inputs and outputs were provided for machine-to-machine dubs. However, the post-production community recognized that greater advantage could be taken of the multi-generation capability of these machines if they were used in an all-digital environment.

Rec. 601. Recommendation ITU-R BT.601 (formerly CCIR Recommendation 601) is not a video interface standard, but a sampling standard. Rec. 601 evolved out of a joint SMPTE/EBU task force to determine the parameters for digital component video for the 525/59.94 and 625/50 television systems. This work culminated in a series of tests sponsored by SMPTE in 1981, and resulted in the well-known CCIR Recommendation 601. This document specified the sampling mechanism to be used for both 525- and 625-line signals. It specified orthogonal sampling at 13.5 MHz for luminance, and 6.75 MHz for the two color difference signals CB and CR, which are scaled versions of the signals B-Y and R-Y.

The sampling structure defined is known as "4:2:2." This nomenclature is derived from the days when multiples of NTSC subcarrier were being considered for the sampling frequency. This approach was abandoned, but the use of "4" to represent the luminance sampling frequency was retained. The task force mentioned above examined luminance sampling frequencies from 12 MHz to 14.3 MHz. It selected 13.5 MHz as a compromise because the submultiple 2.25 MHz is a factor common to both 525- and 625-line systems. Some extended definition TV systems use a higher resolution format called 8:4:4, which has twice the bandwidth of 4:2:2.

Parallel component digital. Rec. 601 described the sampling of the signal. Electrical interfaces for the data produced by this sampling were standardized separately by SMPTE and the EBU. The parallel interface for 525/59.94 was defined by SMPTE as SMPTE Standard 125M (a revision of the earlier RP-125), and for 625/50 by EBU Tech 3267 (a revision of the earlier EBU Tech 3246). Both of these were adopted by CCIR and are included in Recommendation 656, the document defining the hardware interface.

The parallel interface uses eleven twisted pairs and 25-pin "D" connectors. (The early documents specified slide locks on the connectors; later revisions changed the retention mechanism to 4/40 screws.) This interface multiplexes the data words in the sequence CB, Y, CR, Y, CB…, resulting in a data rate of 27 Mwords/s. The timing sequences SAV and EAV were added to each line to represent Start of Active Video and End of Active Video. The digital active line contains 720 luminance samples, and includes space for representing analog blanking within the active line.

Rec. 601 specified eight bits of precision for the data words representing the video. At the time the standards were written, some participants suggested that this might not be adequate, and provision was made to expand the interface to 10-bit precision. Ten-bit operation has indeed proved beneficial in many circumstances, and the latest revisions of the interface standard provide for a 10-bit interface, even if only eight bits are used. The digital-to-analog conversion range is chosen to provide headroom above peak white and footroom below black, as shown in Figure 2-5. Quantizing levels for black and white are selected so the 8-bit levels with two "0"s added will have the same values as the 10-bit levels. Values 000 to 003 and 3FF to 3FC are reserved for synchronizing purposes.

Figure 2-5. Luminance quantizing.

  Level      Voltage     10-bit hex   10-bit binary
  Excluded   766.3 mV    3FF          11 1111 1111
  Peak       700.0 mV    3AC          11 1010 1100
  Black        0.0 mV    040          00 0100 0000
  Excluded   -51.1 mV    000          00 0000 0000
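The quantizing levels in Figure 2-5 follow from mapping the luminance range onto 10-bit codes with black at 040h (64) and peak white at 3ACh (940). A minimal sketch of that mapping, consistent with the figure; the step size of 700/876 mV per code is implied by those two anchor points rather than quoted in the text:

```python
def luma_code_to_mv(code10):
    """Map a 10-bit luminance code to millivolts: black = 64 (040h) at 0 mV,
    peak white = 940 (3ACh) at 700 mV, i.e. 700/876 mV per step."""
    return (code10 - 64) * 700.0 / 876.0

def is_reserved(code10):
    """Codes 000-003 and 3FC-3FF are reserved for synchronizing purposes."""
    return code10 <= 0x003 or code10 >= 0x3FC

for code in (0x3FF, 0x3AC, 0x040, 0x000):
    print(f"{code:03X}: {luma_code_to_mv(code):6.1f} mV  reserved={is_reserved(code)}")
# 3FF:  766.3 mV  reserved=True
# 3AC:  700.0 mV  reserved=False
# 040:    0.0 mV  reserved=False
# 000:  -51.1 mV  reserved=True
```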


Similar factors determine the quantizing values for color difference signals, as shown in Figure 2-6. Figure 2-7 shows the location of samples and digital words with respect to an analog horizontal line. Because the timing information is carried by EAV and SAV, there is no need for conventional synchronizing signals, and the horizontal intervals (and the active line periods during the vertical interval) may be used to carry ancillary data. The most obvious application for this data space is to carry digital audio, and documents are being prepared by SMPTE to standardize the format and distribution of the audio data packets.

Rec. 601/656 is a well-proven technology with a full range of equipment available for production and post-production. Generally, the parallel interface has been superseded by a serial implementation, which is far more practical in larger installations. Rec. 601 provides all of the advantages of both digital and component operation. It's the system of choice for the highest possible quality in 525- or 625-line systems.

Parallel composite digital. The composite video signal is sampled at four times the (NTSC or PAL) subcarrier frequency, giving nominal sampling rates of 14.3 MHz for NTSC and 17.7 MHz for PAL. The interface is standardized for NTSC as SMPTE 244M and, at the time of writing, EBU documentation is in process for PAL. Both interfaces specify ten-bit precision, although D-2 and D-3 machines record only eight bits to tape. Quantizing of the NTSC signal (shown in Figure 2-8) is defined with a modest amount of headroom above 100% bars, a small footroom below sync tip, and the same excluded values as for component.

Figure 2-7. Digital horizontal line. (COMPONENT: only the active line is digitized, and EAV/SAV timing references are added. The last active picture sample, Y, is followed by the four-word EAV sequence 3FF 000 000 XYZ; the four-word SAV sequence 3FF 000 000 XYZ occupies word numbers 1712-1715 and is followed by the first active picture samples Cb, Y, Cr, Y in word numbers 0-3. COMPOSITE: the full analog line is digitized.)

Figure 2-6. Color difference quantizing.

  Level          Voltage      10-bit hex   10-bit binary
  Excluded       399.2 mV     3FF          11 1111 1111
  Max Positive   350.0 mV     3C0          11 1100 0000
  Black            0.0 mV     200          10 0000 0000
  Max Negative  -350.0 mV     040          00 0100 0000
  Excluded      -400.0 mV     000          00 0000 0000

Figure 2-8. NTSC quantizing.

  Level          Voltage      10-bit hex   10-bit binary
  Excluded       998.7 mV     3FF          11 1111 1111
  100% Chroma    937.7 mV     3CF          11 1100 1111
  Peak White     714.3 mV     320          11 0010 0000
  Blanking         0.0 mV     0F0          00 1111 0000
  Sync Tip      -285.7 mV     010          00 0001 0000
  Excluded      -306.1 mV     000          00 0000 0000


PAL composite digital has been defined to minimize quantizing noise by using a maximum amount of the available digital range. As can be seen in Figure 2-9, the peak analog values actually exceed the digital dynamic range, which might appear to be a mistake. Because of the specified sampling axis, referenced to subcarrier, and the phase of the highest-luminance-level bars (such as yellow), the samples never exceed the digital dynamic range. The values involved are shown in Figure 2-10.

Like the component interface, the composite digital active line is long enough to accommodate the analog active line and the analog blanking edges. Unlike the component interface, the composite interface transmits a digital representation of conventional sync and burst during the horizontal blanking interval. A digital representation of vertical sync and equalizing pulses is also transmitted over the composite interface.

Composite digital installations provide the advantages of digital processing and interface, and particularly the multi-generation capabilities of digital recording. However, there are some limitations. The signal is composite and does bear the footprint of NTSC or PAL encoding, including the narrowband color information inherent in these coding schemes. Processes such as chroma key are generally not satisfactory for high-quality work, and separate provision must be made for a component-derived keying signal. Some operations, such as digital effects, require that the signal be converted to component form for processing and then re-encoded to composite. Also, the full excursion of the composite signal has to be represented by 256 levels on 8-bit recorders. Nevertheless, composite digital provides a more powerful environment than analog for NTSC and PAL installations and is a very cost-effective solution for many users.

As with component digital, the parallel composite interface uses a multi-pair cable and 25-pin "D" connectors. Again, this has proved to be satisfactory for small and medium-sized installations, but practical implementation of a large system requires a serial interface.

16:9 widescreen component digital signals. Many television markets around the world are seeing the introduction of delivery systems for widescreen (16:9 aspect ratio) pictures. Some of these systems, such as MUSE in Japan and the future ATV system in the USA, are intended for high-definition images. Others, such as PALplus in Europe, will use 525- or 625-line pictures. Domestic receivers with 16:9 displays have been introduced in many markets, and penetration will likely increase significantly over the next few years. For most broadcasters, this creates the need for 16:9 program material.

Two approaches have been proposed for the digital representation of 16:9 525- and 625-line video. The first method retains the sampling frequencies of the Rec. 601 standard for 4:3 images (13.5 MHz for luminance). This "stretches" the pixel represented by each data word by a factor of 1.33 horizontally, and results in a loss of 25 percent of the horizontal spatial resolution when compared to Rec. 601 pictures. For some applications, this method still provides adequate resolution, and it has the big advantage that much existing Rec. 601 equipment can be used.

Figure 2-9. PAL quantizing.

  Level         Voltage      10-bit hex   10-bit binary
  Excluded      913.1 mV     3FF          11 1111 1111
  Peak White    700.0 mV     34C          11 0100 1100
  Blanking        0.0 mV     100          01 0000 0000
  Sync Tip     -300.0 mV     004          00 0000 0100
  Excluded     -304.8 mV     000          00 0000 0000

Figure 2-10. PAL yellow bar sampling. (The 100% yellow bar has a luminance level of 0.620 V, and its chroma vector lies at about +167° to reference subcarrier. The analog peak reaches 0.934 V, above the excluded values at 0.9095 to 0.9131 V, but because samples are taken on the -U+V, U+V, -U-V, and U-V axes, the highest sample is 0.886 V and stays below the excluded range.)


A second method maintains spatial resolution by adding pixels (and data words) to represent the additional width of the image. This approach results in 960 luminance samples for the digital active line (compared to 720 samples for a 4:3 picture, and to 1920 samples for a 16:9 high-definition picture). The resulting sampling rate for luminance is 18 MHz. This system provides the same spatial resolution as Rec. 601 4:3 images, but existing equipment designed only for 13.5 MHz cannot be used. The additional resolution of this method may be advantageous when some quality overhead is desired for post production or when up-conversion to a high-definition system is contemplated.

SMPTE 267M provides for both the 13.5 MHz and 18 MHz systems. Because 16:9 signals using 13.5 MHz sampling are electrically indistinguishable from 4:3 signals, they can be conveyed by the SMPTE 259M serial interface at 270 Mb/s. It's planned that SMPTE 259M (discussed below) will be revised to provide for serial transmission of 18 MHz sampled signals, using the same algorithm, at a data rate of 360 Mb/s.

Serial Digital Video

Parallel connection of digital equipment is practical only for relatively small installations, and there's a clear need for transmission over a single coaxial cable. This is not simple, as the data rate is high, and if the signal were transmitted serially without modification, reliable recovery would be very difficult. The serial signal must be modified prior to transmission to ensure that there are sufficient edges for reliable clock recovery, to minimize the low-frequency content of the transmitted signal, and to spread the transmitted energy spectrum so that radio-frequency emission problems are minimized.

In the early 1980s, a serial interface for Rec. 601 signals was recommended by the EBU. This interface used 8/9 block coding and resulted in a bit rate of 243 Mb/s. It did not support ten-bit precision signals, and there were some difficulties in producing reliable, cost-effective integrated circuits. The block-coding-based interface was abandoned and has been replaced by an interface with channel coding that uses scrambling and conversion to NRZI. The serial interface has been standardized as SMPTE 259M and EBU Tech. 3267, and is defined for both component and composite signals, including embedded digital audio.

Conceptually, the serial digital interface is much like a carrier system for studio applications. Baseband audio and video signals are digitized and combined on the serial digital "carrier" as shown in Figure 2-11. (It's not strictly a carrier system, in that it's a baseband digital signal, not a signal modulated on a carrier.) The bit rate (carrier frequency) is determined by the clock rate of the digital data: 143 Mb/s for NTSC, 177 Mb/s for PAL, and 270 Mb/s for Rec. 601 component digital. The widescreen (16:9) component system defined in SMPTE 267M will produce a bit rate of 360 Mb/s.

Parallel data representing the samples of the analog signal is processed as shown in Figure 2-12 to create the serial digital data stream. The parallel clock is used to load sample data into a shift register, and a times-ten multiple of the parallel clock shifts the bits out, LSB (least significant bit) first, for each 10-bit data word. If only 8 bits of data are available at the input, the serializer places zeros in the two LSBs to complete the 10-bit word.

Figure 2-11. The carrier concept. (Video and audio pass through analog-to-digital conversion and digital data formatting onto the serial digital video "carrier": 143 Mb/s for NTSC, 177 Mb/s for PAL, 270 Mb/s for CCIR 601, and 360 Mb/s for 16:9 component.)
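Each of these bit rates is simply the sampling word rate times the 10 bits per word. A quick arithmetic check in Python; the subcarrier frequencies below are the standard nominal NTSC and PAL values, which are not quoted in the text, and the 4x-subcarrier and component word rates are as described above:

```python
F_SC_NTSC = 3.579545e6   # nominal NTSC color subcarrier, Hz (assumed value)
F_SC_PAL  = 4.433619e6   # nominal PAL color subcarrier, Hz (assumed value)
BITS_PER_WORD = 10

word_rates = {
    "Composite NTSC (4 x fsc)":        4 * F_SC_NTSC,
    "Composite PAL (4 x fsc)":         4 * F_SC_PAL,
    "Rec. 601 4:2:2 (13.5 + 2 x 6.75)": (13.5 + 2 * 6.75) * 1e6,
    "16:9 component (18 + 2 x 9)":     (18.0 + 2 * 9.0) * 1e6,
}

for name, words_per_second in word_rates.items():
    print(f"{name}: {words_per_second * BITS_PER_WORD / 1e6:.0f} Mb/s")
# Composite NTSC (4 x fsc): 143 Mb/s
# Composite PAL (4 x fsc): 177 Mb/s
# Rec. 601 4:2:2 (13.5 + 2 x 6.75): 270 Mb/s
# 16:9 component (18 + 2 x 9): 360 Mb/s
```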

Figure 2-12. Parallel-to-serial conversion. (Parallel 10-bit data, MSB to LSB, is loaded into a shift register by the parallel clock Fclock and shifted out LSB first at 10 x Fclock as serial NRZ data. The NRZ stream is scrambled using G1(x) = x^9 + x^4 + 1 and then converted to NRZI by the encoder G2(x) = x + 1.)


Component signals do not need further processing, as the SAV and EAV sequences on the parallel interface provide unique patterns that can be identified in the serial domain to permit word framing. If ancillary data such as audio has been inserted into the parallel signal, this data will be carried by the serial interface. The serial interface may be used with normal video coaxial cable.

The conversion from parallel to serial for composite signals is somewhat more complex. As mentioned above, the SAV and EAV signals on the parallel component interface provide unique sequences that can be identified in the serial domain. The parallel composite interface does not have such signals, so it is necessary to insert a suitable timing reference signal (TRS) into the parallel signal before serialization. A diagram of the serial digital NTSC horizontal interval is shown in Figure 2-13. The horizontal interval for serial digital PAL would be similar, except that the sample location is slightly different on each line, building up two extra samples per field. A three-word TRS is inserted in the sync tip to enable word framing at the serial receiver, which should also remove the TRS from the received serial signal.

Figure 2-13. NTSC horizontal interval.

Figure 2-14. NRZ and NRZI relationship. (NRZ data of all "0"s or all "1"s is a constant high or low level. After NRZI coding, all "1"s produce a transition on every clock and become a square wave at one-half the clock frequency, of either polarity, while all "0"s still produce a constant high or low level. Data is detected on the rising edge of a clock-frequency reference.)

The composite parallel interface does not provide for transmission of ancillary data, and the transmission of sync and burst means that less room is available for insertion of data. Upon conversion from parallel to serial, the sync tips may be used; the data space in NTSC is still sufficient for four channels of AES/EBU digital audio. Ancillary data such as audio may be added prior to serialization, and this would normally be performed by the same co-processor that inserts the TRS.

Following the serialization of the parallel information, the data stream is scrambled by a mathematical algorithm and then encoded into NRZI (non-return to zero inverted) by a concatenation of the following two functions:

G1(x) = x^9 + x^4 + 1
G2(x) = x + 1

At the receiver, the inverse of this algorithm is used in the deserializer to recover the correct data. In the serial digital transmission system, the clock is contained in the data, as opposed to the parallel system, where there's a separate clock line. By scrambling the data, an abundance of transitions is assured, as required for clock recovery. The mathematics of scrambling and descrambling lead to some specialized test signals for serial digital systems that are discussed later in this guide.

Encoding into NRZI makes the serial data stream polarity insensitive. NRZ (non-return to zero, an old digital data tape term) is the familiar circuit-board logic level: high is a "1" and low is a "0." For a transmission system it's convenient not to require a certain polarity of the signal at the receiver. As shown in Figure 2-14, a data transition is used to represent each "1" and there is no transition for a data "0."



and there’s no transition for adata “0.” The result’s that it’sonly necessary to detect tran-sitions; this means eitherpolarity of the signal may beused. Another result of NRZIencoding is that a signal ofall “1”s now produces a tran-sition every clock intervaland results in a square waveat one-half the clock fre-quency. However, “0”s pro-duce no transition, whichleads to the need for scram-bling. At the receiver, the ris-ing edge of a square wave atthe clock frequency would beused for data detection.

Rate Conversion – Format Conversion

When moving between component digital and composite digital in either direction, there are two steps: the actual encoding or decoding, and the conversion of the sample rate from one standard to the other. The digital sample rates for these two formats are different: 13.5 MHz for component digital and 14.3 MHz for NTSC composite digital (17.7 MHz for PAL). This second step is called "rate conversion." Often the term rate conversion is used to mean both encoding/decoding and resampling of digital rates. Strictly speaking, rate conversion is taking one sample rate and making another sample rate out of it. For our purposes, we'll use the term "format conversion" to mean both the encode/decode step and resampling of digital rates. The format conversion sequence depends on direction. For component to composite, the usual sequence is rate conversion followed by encoding. For composite to component, the sequence is decoding followed by rate conversion. See Figure 2-15.

It's much easier to do production in component because it isn't necessary to wait for every color framing boundary (four fields for NTSC and eight for PAL) to match two pieces of video together; instead, they can be matched every two fields (motion frame). This provides four times the opportunity. Additionally, component is a higher quality format, since luminance and chrominance are handled separately. To the extent possible, origination for component-environment production should be accomplished in component. A high-quality composite-to-component format converter provides a reasonable alternative.

After post-production work, component digital often needs to be converted to composite digital. Sources that are component digital may be converted for input to a composite digital switcher or effects machine. Or a source from a component digital telecine may be converted for input to a composite digital suite. Furthermore, tapes produced in component digital may need to be distributed or archived in composite digital.

The two major contributors to the quality of this process are the encoding or decoding process and the sample rate conversion. If either one is faulty, the quality of the final product suffers. In order to accurately change the digital sample rate, computations must be made between two different sample rates, and interpolations must be calculated between the physical location of the source pixel data and the physical location of the destination pixel data. Of the 709,379 pixel locations in a PAL composite digital frame, all except one must be mapped (calculated). In order to accomplish this computationally intense conversion, an extremely accurate algorithm must be used. If the algorithm is accurate enough to deal with PAL conversions, NTSC conversions using the same algorithm are simple. To ensure quality video, a sophisticated algorithm must be employed, and the hardware must produce precise coefficients and minimize rounding errors.

Figure 2-15. Format conversion.
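The 709,379 figure quoted above is easy to reproduce. A small sketch, assuming the nominal 1135 words per line for 4x-subcarrier PAL, with the two extra samples per field mentioned earlier giving four extra words per frame (these line-length values are assumptions, not quoted in the text):

```python
LINES_PER_FRAME = 625
WORDS_PER_LINE = 1135        # assumed nominal 4 x subcarrier PAL line length
EXTRA_WORDS_PER_FRAME = 4    # two extra samples per field, two fields per frame

pal_frame_words = LINES_PER_FRAME * WORDS_PER_LINE + EXTRA_WORDS_PER_FRAME
print(pal_frame_words)       # 709379 pixel locations in a PAL composite digital frame
```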


3. Digital Audio

AES/EBU Audio Data Format

AES digital audio (also known as AES/EBU) conforms to the specification AES3 (ANSI 4.40) titled "AES recommended practice for digital audio engineering – Serial transmission format for two-channel linearly represented digital audio data." AES/EBU digital audio is the result of cooperation between the Audio Engineering Society and the European Broadcasting Union.

When discussing digital audio, one of the important considerations is the number of bits per sample. Where video operates with 8 or 10 bits per sample, audio implementations range from 16 to 24 bits to provide the desired dynamic range and signal-to-noise ratio (SNR). The basic formula for determining the SNR for digital audio is:

SNR = (6.02 × n) + 1.76 dB

where "n" is the number of bits per sample. For a 16-bit system, the maximum theoretical SNR would be (6.02 × 16) + 1.76 = 98.08 dB; for an 18-bit system, the SNR would be about 110.1 dB; and for a 20-bit device, 122.16 dB. A well-designed 20-bit ADC probably offers a value between 100 and 110 dB. Using the above formula, an SNR of 110 dB corresponds to an equivalent resolution of about 18 bits.
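The formula is worth having as a one-liner; this sketch simply evaluates it in both directions (the results match the figures quoted above):

```python
def snr_db(bits):
    """Theoretical SNR of an ideal linear PCM quantizer with the given word length."""
    return 6.02 * bits + 1.76

def equivalent_bits(snr):
    """Invert the formula: how many bits would deliver a given SNR?"""
    return (snr - 1.76) / 6.02

print(snr_db(16), snr_db(18), snr_db(20))   # 98.08, 110.12, 122.16 dB
print(round(equivalent_bits(110), 1))       # about 18.0 bits for a 110 dB converter
```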

An AES digital audio signal always consists of two channels, which may be distinctly separate audio material or stereophonic audio. There's also a provision for single-channel (monophonic) operation, where the second digital data channel is either identical to the first or has its data set to logical "0." Formatting of the AES data is shown in Figure 3-1. Each sample is carried by a subframe containing 20 bits of sample data, 4 bits of auxiliary data (which may be used to extend the sample to 24 bits), 4 other bits of data, and a preamble. Two subframes make up a frame, which contains one sample from each of the two channels.

Frames are further grouped into 192-frame blocks, which define the limits of user data and channel status data blocks. A special preamble indicates the channel identity for each sample (X or Y preamble) and the start of a 192-frame block (Z preamble). To minimize the direct-current (DC) component on the transmission line, facilitate clock recovery, and make the interface polarity insensitive, the data is channel coded as biphase-mark. The preambles specifically violate the biphase-mark rules for easy recognition and to ensure synchronization. When digital audio is embedded in the serial digital video data stream, the start of the 192-frame block is indicated by the so-called "Z" bit, which corresponds to the occurrence of the Z-type preamble.

The validity bit indicates whether the audio sample bits in the subframe are suitable for conversion to an analog audio signal. User data is provided to carry other information, such as time code. Channel status data contains information associated with each audio channel. There are three levels of implementation of the channel status data: minimum, standard, and enhanced. The standard implementation is recommended for use in professional television applications; hence the channel status data will contain information about signal emphasis, sampling frequency, channel mode (stereo, mono, etc.), use of the auxiliary bits (to extend the audio data to 24 bits or for other use), and a CRC (cyclic redundancy code) for error checking of the total channel status block.

Embedded Audio

One of the important advantages of SDI (serial digital interface) is the ability to embed (multiplex) several channels of digital audio in the digital video. This is particularly useful in large systems, where separate routing of digital audio becomes a cost consideration and the assurance that the audio is associated with the appropriate video is an advantage. In smaller systems, such as a post-production suite, it's generally more economical to maintain separate audio, thus eliminating the need for numerous mux (multiplexer) and demux (demultiplexer) modules.


Figure 3-1. AES audio data formatting. (Subframe structure: 32 bits comprising a 4-bit preamble (X, Y, or Z), 4 bits of auxiliary data, 20 bits of sample data, and the audio sample validity, user data, audio channel status, and subframe parity bits. Frame structure: each frame holds a channel A and a channel B subframe; frames 0 through 191 form a block, with the Z preamble marking the start of the 192-frame block.)
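To make the subframe layout of Figure 3-1 concrete, here is a minimal packing sketch. The field widths are those listed above; the slot ordering (preamble first, then auxiliary data, sample, V, U, C, and parity last) follows the usual AES3 description, and the preamble patterns themselves are omitted because they deliberately violate the channel coding and cannot be represented as ordinary data bits.

```python
def pack_aes_subframe(sample_20bit, v=0, u=0, c=0, aux_4bit=0):
    """Return the 28 data-carrying time slots of an AES subframe (slots 4-31),
    preamble excluded: 4 aux bits, 20 sample bits (LSB first), V, U, C, and even parity."""
    bits = []
    bits += [(aux_4bit >> i) & 1 for i in range(4)]        # slots 4-7: auxiliary data
    bits += [(sample_20bit >> i) & 1 for i in range(20)]   # slots 8-27: sample data
    bits += [v, u, c]                                      # slots 28-30: validity, user, status
    bits.append(sum(bits) & 1)                             # slot 31: even parity over slots 4-30
    return bits

subframe = pack_aes_subframe(sample_20bit=0x7FFFF, c=1)
print(len(subframe), subframe[-1])   # 28 slots; the parity bit keeps the ones count even
```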


In developing the SDI standard, SMPTE 259M, one of the key considerations was the ability to embed at least four channels of audio in the composite serial digital signal, which has limited ancillary data space. A basic form of embedded audio for composite video is documented in SMPTE 259M, and there's a significant amount of equipment in the field using this method for both composite and component signals. As engineers were using the SMPTE 259M specifications for their designs, it became clear that a more definitive document was needed. A draft proposed standard is being developed for embedded audio that includes such things as the distribution of audio samples within the available composite digital ancillary data space, the ability to carry 24-bit audio, methods to handle non-synchronous clocking and clock frequencies other than 48 kHz, and specifications for embedding audio in component digital video.

Ancillary data space. Composite digital video allows ancillary data only in its serial form, and then only in the tips of synchronizing signals. Ancillary data is not carried in parallel composite digital, as parallel data is simply a digital representation of the analog signal. Figures 3-2 and 3-3 show the ancillary data space available in composite digital signals, which includes horizontal sync tips, vertical broad pulses, and vertical equalizing pulses.

A small amount of the horizontal sync tip space is reserved for TRS-ID (timing reference signal and identification), which is required for digital word framing in the deserialization process. Since composite digital ancillary data is only available in the serial domain, it consists of 10-bit words, whereas both composite and component parallel interconnects may be either 8 or 10 bits. A summary of composite digital ancillary data space for NTSC is shown in Table 3-1. Thirty frames per second will provide 9.87 Mb/second of ancillary data, which is enough for four channels of embedded audio with a little capacity left over. PAL has slightly more ancillary data space due to the higher sample rate; however, embedded audio for digital PAL is rarely used.

Figure 3-2. Composite ancillary data space. (Ancillary data space, including the TRS-ID, lies in the horizontal sync tips between active video and in the vertical blanking intervals; not to scale.)

Figure 3-3. NTSC available ancillary data words. (TRS-ID occupies words 790-794 of each sync tip. Horizontal sync tips on all lines except vertical sync carry 55 words of ancillary data (words 795-849); vertical sync lines carry 376 words in each broad pulse (words 795-260 and 340-715); vertical equalizing pulses carry 21 words each (words 795-815 and 340-360). Not to scale.)

Table 3-1. NTSC Digital Ancillary Data Space

  Pulse        Tip Words   Per Frame   Total
  Horizontal   55          507         27,885
  Equalizing   21          24          504
  Broad        376         12          4,512

  Total 10-bit words/frame: 32,901
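Table 3-1 leads directly to the 9.87 Mb/s figure quoted above, and a rough payload estimate shows why four channels of 20-bit audio fit comfortably. A quick check in Python; the per-sample cost of three 10-bit words per channel comes from the embedded audio formatting described later, and packet overhead is ignored here:

```python
# Ancillary capacity from Table 3-1
words_per_frame = 55 * 507 + 21 * 24 + 376 * 12        # 32,901 ten-bit words
capacity_bps = words_per_frame * 10 * 30               # at 30 frames/s
print(capacity_bps / 1e6)                              # ~9.87 Mb/s

# Approximate payload for four channels of 20-bit, 48 kHz embedded audio:
# each sample occupies three 10-bit words (packet overhead not counted)
audio_bps = 4 * 48000 * 3 * 10
print(audio_bps / 1e6)                                 # ~5.76 Mb/s, leaving capacity to spare
```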



There's considerably more ancillary data space available in component digital video, as shown in Figure 3-4. All of the horizontal and vertical blanking intervals are available except for the small amount required for the EAV (end of active video) and SAV (start of active video) synchronizing words.

The ancillary data space has been divided into two types, HANC (horizontal ancillary data) and VANC (vertical ancillary data), with SMPTE defining the use of HANC and the EBU defining the use of VANC. Word lengths are 10-bit for HANC, specifically so it can carry embedded audio in the same format as for composite digital, and 8-bit for VANC, specifically so it can be carried by D-1 VTRs (on those vertical interval lines that are recorded). Total data space is shown in Table 3-2. Up to 16 channels of embedded audio are specified for HANC in the proposed standard, and there's room for significantly more data.

Not all of the VANC data space is available. For instance, the luminance samples on one line per field are reserved for DVITC (digital vertical interval time code), and the chrominance samples on that line may be devoted to video index. Also, it would be wise to avoid using the vertical interval switch line and perhaps the subsequent line, where data might be lost due to relock after the switch occurs.

Ancillary data formatting. Ancillary data is formatted into packets prior to multiplexing it into the video data stream, as shown in Figure 3-5. Each data block may contain up to 255 user data words, provided there's enough total data space available to include five (composite) or seven (component) words of overhead. For composite digital, only the vertical sync (broad) pulses have enough room for the full 255 words. Multiple data packets may be placed in individual ancillary data spaces, thus providing a rather flexible data communications channel.

At the beginning of each data packet is a header using word values that are excluded for digital video data and reserved for synchronizing purposes. For composite video, a single header word of 3FCh is used, whereas for component video a three-word header, 000h 3FFh 3FFh, is used. Each type of data packet is identified with a different Data ID word. Several different Data ID words are defined to organize the various data packets used for embedded audio. The Data Block Number (DBN) is an optional counter that can be used to provide sequential order to ancillary data packets, allowing a receiver to determine if there is missing data.

Table 3-2. Component Digital Ancillary Data Space

  525 line      Words   Per Frame   Total     Rate Mb/s
  HANC          268     525         140,700   42.2
  VANC          1440    39          56,160    13.5
  Total words                       196,860   55.7

  625 line      Words   Per Frame   Total     Rate Mb/s
  HANC          282     625         176,250   44.1
  VANC          1440    49          70,560    14.1
  Total words                       246,810   58.2

Figure 3-5. Ancillary data formatting. (Each packet consists of a data header (one word, 3FC, for composite; three words, 000 3FF 3FF, for component), followed by a Data ID word, a Data Block Number word, a Data Count word, up to 255 user data words, and a checksum word.)

Figure 3-4. Component ancillary data space.


As an example, with embedded audio the DBN may be used to detect the occurrence of a vertical interval switch, thereby allowing the receiver to process the audio data to remove the likely transient "click" or "pop." Just prior to the data is the Data Count word, indicating the amount of data in the packet. Finally, following the data is a checksum, which is used to detect errors in the data packet.

Basic embedded audio. Embedded audio defined in SMPTE 259M provides four channels of 20-bit audio data sampled at 48 kHz, with the sample clock locked to the television signal. Although specified in the composite digital part of the standard, the same method is also used for component digital video. This basic embedded audio corresponds to Level A in the proposed embedded audio standard. Other levels of operation provide more channels, other sampling frequencies, and additional information about the audio data. Basic embedded audio data packet formatting derived from AES audio is shown in Figure 3-6.

Two AES channel-pairs are shown as the source; however, it's possible for each of the four embedded audio channels to come from a different AES signal (specifically implemented in some D-1 digital VTRs). The Audio Data Packet contains one or more audio samples from up to four audio channels. 23 bits (20 audio bits plus the C, U, and V bits) from each AES subframe are mapped into three 10-bit video words (X, X+1, X+2) as shown in Table 3-3.

Bit 9 is always the complement of bit 8, to ensure that none of the excluded word values (3FFh-3FCh or 003h-000h) are used. The Z-bit is set to "1" corresponding to the first frame of the 192-frame AES block. Channels of embedded audio are essentially independent (although they're always transmitted in pairs), so the Z-bit is set to "1" in each channel even if both are derived from the same AES source. C, U, and V bits are mapped from the AES signal; however, the parity bit is not the AES parity bit. Bit 8 in word X+2 is even parity for bits 0-8 in all three words.

There are several restrictions regarding distribution of the audio data packets, although there's a "grandfather clause" in the proposed standard to account for older equipment that may not observe all the restrictions. Audio data packets are not transmitted in the horizontal ancillary data space following the normal vertical interval switch as defined in RP 168. They're also not transmitted in the ancillary data space designated for error detection checkwords defined in RP 165.

Figure 3-6. Basic embedded audio. (Two AES channel-pairs feed the audio data packet: for each AES subframe, the 20 sample bits plus the C, U, and V bits are mapped into three ancillary data words, x, x+1, x+2. AES 1 channel A becomes embedded channel 1, AES 1 channel B channel 2, AES 2 channel A channel 3, and AES 2 channel B channel 4. The packet itself carries the data header (3FC for composite; 000 3FF 3FF for component), Data ID, data block number, data count, the mapped audio words, and a checksum.)

Table 3-3. Embedded Audio Bit Distribution

  Bit   X          X+1      X+2
  b9    not b8     not b8   not b8
  b8    aud 5      aud 14   Parity
  b7    aud 4      aud 13   C
  b6    aud 3      aud 12   U
  b5    aud 2      aud 11   V
  b4    aud 1      aud 10   aud 19 (msb)
  b3    aud 0      aud 9    aud 18
  b2    ch bit-1   aud 8    aud 17
  b1    ch bit-2   aud 7    aud 16
  b0    Z-bit      aud 6    aud 15
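The mapping in Table 3-3 is mechanical enough to write down directly. A minimal sketch, assuming the channel number is carried as a two-bit field in ch bit-1 and ch bit-2 (exactly how those two bits encode channels 1-4 is not spelled out above, so the split below is illustrative):

```python
def embed_audio_sample(aud20, z=0, ch=0, c=0, u=0, v=0):
    """Map one 20-bit AES sample plus the Z, channel, C, U, V bits into the three
    10-bit ancillary words X, X+1, X+2 following Table 3-3."""
    x  = z | (ch & 3) << 1 | (aud20 & 0x3F) << 3            # b0=Z, b1-2=ch, b3-8=aud 0-5
    x1 = (aud20 >> 6) & 0x1FF                                # b0-8 = aud 6-14
    x2 = (aud20 >> 15) & 0x1F | v << 5 | u << 6 | c << 7     # b0-4=aud 15-19, b5-7=V,U,C
    parity = bin((x & 0x1FF) ^ (x1 & 0x1FF) ^ (x2 & 0xFF)).count("1") & 1
    x2 |= parity << 8                                        # b8 of X+2: even parity over bits 0-8 of all words
    # b9 of every word is the complement of its b8, keeping codes out of the excluded ranges
    return tuple(w | ((~w >> 8) & 1) << 9 for w in (x, x1, x2))

print([hex(w) for w in embed_audio_sample(0xABCDE, z=1, ch=0, c=1)])
```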


For composite digital video, audio data packets are not transmitted in equalizing pulses. Taking into account these restrictions, "data should be distributed as evenly as possible throughout the video field." The reason for this last statement is to minimize receiver buffer size, which is an important issue for transmitting 24-bit audio in composite digital systems. For basic, Level A operation, this results in either three or four audio samples per channel in each audio data packet.

Extended embedded audio. Full-featured embedded audio defined in the proposed standard includes:

• Carrying the 4 AES auxiliary bits (which may be used to extend the audio samples to 24 bits)

• Allowing non-synchronous clock operation

• Allowing sampling frequencies other than 48 kHz

• Providing audio-to-video delay information for each channel

• Documenting Data IDs to allow up to 16 channels of audio in component digital systems

• Counting "audio frames" for 525-line systems.

To provide these features, two additional data packets are defined. Extended Data Packets carry the 4 AES auxiliary bits, formatted such that one video word contains the auxiliary data for two audio samples, as shown in Figure 3-7. Extended data packets must be located in the same ancillary data space as the associated audio data packets and must follow the audio data packets.

The Audio Control Packet (shown in Figure 3-8) is transmitted once per field, in the second horizontal ancillary data space after the vertical interval switch point. It contains information on audio frame number, sampling frequency, active channels, and the relative audio-to-video delay of each channel. Transmission of audio control packets is optional for 48 kHz synchronous operation and required for all other modes of operation (since it contains the information as to what mode is being used).

Audio frame numbers are an artifact of 525-line, 29.97 frame/second operation. In that system there are exactly 8008 audio samples in exactly five frames, which means there's a non-integer number of samples per frame. An audio frame sequence is the number of frames for an integer number of samples (in this case five), and the audio frame number indicates where in the sequence a particular frame belongs. This is important when switching between sources, because certain equipment (most notably digital VTRs) requires consistent synchronous operation to prevent buffer over/underflow.
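The five-frame audio frame sequence falls out of the 8008-in-5 relationship. A small sketch that distributes the samples as evenly as possible across the sequence; the resulting 1602/1601 pattern is a consequence of the arithmetic, not a quoted specification:

```python
SAMPLES_PER_SEQUENCE = 8008   # 48 kHz audio samples in five 525/29.97 frames
FRAMES_PER_SEQUENCE = 5

def samples_in_frame(audio_frame_number):
    """Samples carried by a given frame (numbered 0-4) of the audio frame sequence,
    spreading 8008 samples as evenly as possible over five frames."""
    n = audio_frame_number
    start = round(n * SAMPLES_PER_SEQUENCE / FRAMES_PER_SEQUENCE)
    end = round((n + 1) * SAMPLES_PER_SEQUENCE / FRAMES_PER_SEQUENCE)
    return end - start

print([samples_in_frame(n) for n in range(5)])   # [1602, 1601, 1602, 1601, 1602] -> 8008 total
```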

Figure 3-7. Extended embedded audio. (The audio data packet (Data ID 1FF; 1FD, 1FB, 2F9 for additional groups) carries the embedded channels in word triplets x, x+1, x+2: AES 1 channel A as channel 1, AES 1 channel B as channel 2, AES 2 channel A as channel 3, and so on. It is followed in the same ancillary data space by the extended data packet (Data ID 1FE; 2FC, 2FA, 1F8 for additional groups), whose AUX words each hold the 4 auxiliary bits of two audio samples. Both packets begin with the data header (3FC for composite; 000 3FF 3FF for component), data block number, and data count, and end with a checksum.)

Figure 3-8. Audio control packet formatting. (After the data header (3FC for composite; 000 3FF 3FF for component) come the Data ID, data block number, and data count, followed by the audio frame number words for channels 1 & 2 and 3 & 4, the sampling frequency (RATE), the active channels (ACT), the ± audio-to-video delay words DELA, DELB, DELC, and DELD (three words each; DELC carries channel 2 when it differs from channel 1, and DELD carries channel 4 when it differs from channel 3), reserved words, and the checksum.)

Table 3-4. Data IDs For Up To 16-channel Operation

            Audio      Audio Data   Extended      Audio Control
            Channels   Packet       Data Packet   Packet
  Group 1   1-4        1FF          1FE           1EF
  Group 2   5-8        1FD          2FC           2EE
  Group 3   9-12       1FB          2FA           2ED
  Group 4   13-16      2F9          1F8           1EC


Where frequent switching is planned, receiving equipment can be designed to add or drop a sample following a switch in the four out of five cases where the sequence is broken. The challenge in such a system is to detect that a switch has occurred. This can be facilitated by use of the data block number in the ancillary data format structure and by including an optional frame counter with the unused bits in the audio frame number word of the audio control packet.

Audio delay information contained in the audio control packet uses a default channel-pair mode. That is, delay-A (DELA0-2) is for both channel 1 and channel 2 unless the delay for channel 2 is not equal to that of channel 1. In that case, the delay for channel 2 is located in delay-C. Sampling frequency must be the same for each channel in a pair, hence the data in "RATE" provides only two values, one for channels 1 and 2 and the other for channels 3 and 4.

In order to provide for up to 16 channels of audio in component digital systems, the embedded audio is divided into audio groups corresponding to the basic four-channel operation. Each of the three data packet types is assigned four Data IDs as shown in Table 3-4.

Receiver buffer size. In component digital video, the receiver buffer in an audio demultiplexer is not a critical issue, since there's much ancillary data space available and few lines excluding audio ancillary data. The case is considerably different for composite digital video due to the exclusion of data in equalizing pulses and, even more important, the data packet distribution required for extended audio. For this reason the proposed standard requires a receiver buffer of 64 samples per channel, with a grandfather clause of 48 samples per channel to warn designers of the limitations in older equipment. In the proposed standard, Level A defines a sample distribution allowing use of a 48 sample-per-channel receiver buffer, while other levels generally require the use of the specified 64 sample buffer. Determination of buffer size is analyzed in this section for digital NTSC systems; however, the result is much the same for digital PAL systems.

Synchronous sampling at 48 kHz provides exactly 8008 samples in five frames, which is 8008 ÷ (5 x 525) = 3.051 samples per line. Therefore most lines may carry three samples per channel while some should carry four samples per channel. For digital NTSC, each horizontal sync tip has space for 55 ancillary data words. Table 3-5 shows how those words are used for four-channel, 20-bit and 24-bit audio (overhead includes header, data ID, data count, etc.). Since it would require 24 more video words to carry one additional sample per channel for four channels of 24-bit audio, and that would exceed the 55-word space, there can be no more than three samples per line of 24-bit audio.
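The sketch below reproduces the samples-per-line arithmetic. It deliberately ignores the excluded lines around vertical sync, which is exactly what makes the real distribution, and the receiver buffer sizing discussed next, more involved:

samples_per_5_frames = 8008
lines_per_5_frames = 5 * 525

samples_per_line = samples_per_5_frames / lines_per_5_frames
print(f"{samples_per_line:.3f} samples per channel per line")   # 3.051

# Most lines carry 3 samples per channel; enough lines must carry a 4th
# sample to keep the long-term average at 3.051 (excluded lines ignored here).
lines_with_4 = samples_per_5_frames - 3 * lines_per_5_frames
print(lines_with_4, "of", lines_per_5_frames, "lines need a 4th sample")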

A receiver sample buffer is needed to store the samples necessary to supply the continuous output required during the time of the equalizing pulses and other excluded lines (the vertical interval switch line and the subsequent line). When only 20-bit audio is required, the "even distribution of samples" dictates that some horizontal lines carry four samples per channel, resulting in a rather modest buffer size of 48 samples or less depending on the type of buffer. Using only the first broad pulse of each vertical sync line, four or five samples in each broad pulse will provide a satisfactory, even distribution. This is known as Level A. However, if 24-bit audio is used, or if the sample distribution is to allow for 24-bit audio even if not used, there can be no more than three samples per horizontal ancillary data space for a four-channel system. The first broad pulse of each vertical sync line is required to carry 16 or 17 samples per channel to get the best possible "even distribution." This results in a buffer requirement of 80 samples per channel, or less, depending on the type of buffer. To meet the 64 sample-per-channel buffer, a smart buffer is required.

There are two common types of buffer that are equivalent for the purpose of understanding requirements to meet the proposed standard. One is the FIFO (first in, first out) buffer and the other is a circular buffer. Smart buffers have the following synchronized load abilities (a minimal sketch of the circular case follows the list):

• FIFO Buffer – hold off "reads" until a specific number of samples are in the buffer (requires a counter) AND neither read nor write until a specified time (requires vertical sync).

• Circular Buffer – set the read address to be a certain number of samples after the write address at a specified time (requires vertical sync).
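The following minimal Python sketch shows the circular-buffer idea from the list above. The 64-word size and the 32-sample offset are illustrative choices only, not values taken from the proposed standard:

class SmartCircularBuffer:
    def __init__(self, size=64):
        self.buf = [0] * size
        self.size = size
        self.wr = 0
        self.rd = None            # reads held off until synchronize() is called

    def write(self, sample):
        # Writes are free-running at the incoming (video-locked) sample rate.
        self.buf[self.wr % self.size] = sample
        self.wr += 1

    def synchronize(self, offset=32):
        # Called at a specified time (e.g., vertical sync): start reading a
        # known number of samples behind the most recent write.
        self.rd = self.wr - offset

    def read(self):
        # Reads run at the continuous 48 kHz output rate.
        if self.rd is None or self.rd >= self.wr:
            return None           # not yet synchronized, or buffer underflow
        sample = self.buf[self.rd % self.size]
        self.rd += 1
        return sample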

Table 3-5. Maximum Samples Per NTSC Sync Tip

                                 20-bit Audio            24-bit Audio
4 Channels                       3 samples   4 samples   3 samples
Audio Data      Overhead         5           5           5
                Sample data      36          48          36
Extended Data   Overhead         –           –           5
                Auxiliary data   –           –           6
TOTAL                            41          53          47


Using a smart buffer, the minimum sizes are 24 samples per channel for 20-bit operation and 40 samples per channel for 24-bit operation. Because of the different requirements for the two types of sample distribution, the minimum smart buffer size for either number of bits per sample is 57 samples. Therefore, a 64 sample-per-channel smart buffer should be adequate.

In the case of a not-so-smart buffer, the read and write addresses for a circular buffer must be far enough apart to handle either a build-up of 40 samples or a depletion of 40 samples. This means the buffer size must be 80 samples for 24-bit audio or for automatic operation with both sample sizes. Therefore, a 64 sample-per-channel smart buffer is required to meet the specifications of the proposed standard.

Systemizing AES/EBU audio. Serial digital video and audio are becoming commonplace in production and post-production facilities as well as television stations. In many cases, the video and audio are married sources, and it may be desirable to keep them together and treat them as a single serial digital data stream. This has, for one example, the advantage of being able to keep the signals in the digital domain and switch them together with a serial digital video routing switcher. In the occasional instances where it's desirable to break away some of the audio sources, the digital audio can be demultiplexed and switched separately via an AES/EBU digital audio routing switcher.

At the receiving end, after the multiplexed audio has passed through a serial digital routing switcher, it may be necessary to extract the audio from the video so that editing, audio sweetening, or other processing can be accomplished. This requires a demultiplexer that strips off the AES/EBU audio from the serial digital video. The output of a typical demultiplexer has a serial digital video BNC as well as connectors for the two-stereo-pair AES/EBU digital audio signals.


4. System Hardware and Issues

Cable Selection
The best grades of precision analog video cables exhibit low losses from very low frequencies (near DC) to around 10 MHz. In the serial digital world, cable losses in this portion of the spectrum are of less consequence but still important. It's in the higher frequencies associated with transmission rates of 143, 177, 270, or 360 Mb/s where losses are considerable. Fortunately, the robustness of the serial digital signal makes it possible to equalize these losses quite easily. So when converting from analog to digital, the use of existing, quality cable runs should pose no problem.

The most important characteristic of coaxial cable to be used for serial digital is its loss at 1/2 the clock frequency of the signal to be transmitted. That value will determine the maximum cable length that can be equalized by a given receiver. It's also important that the frequency response loss in dB be approximately proportional to 1/√f down to frequencies below 5 MHz. Some of the common cables in use are the following:

PSF 2/3 – UK
Belden 8281 – USA and Japan
F&G 1.0/6.6 – Germany

These cables all have excellent performance for serial digital signals. Cable manufacturers are seizing the opportunity to introduce new, low-loss, foam-dielectric cables specifically designed for serial digital. An example of an alternative video cable is Belden 1505A, which, although thinner, more flexible, and less expensive than 8281, has higher performance at the frequencies that are more critical for serial digital signals. A recent development specifically for serial digital video is Belden 1694A, with lower loss than 1505A.
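As a rough planning aid, the sketch below estimates how much cable a receiver can equalize from a datasheet loss figure. The 5 dB per 100 m at 50 MHz value is only a placeholder to be replaced with the manufacturer's number, the square-root-of-frequency scaling follows the cable characteristic described above, and the 30 dB budget is the receiver capability discussed in Section 7:

import math

def loss_at_half_clock(loss_per_100m_db, ref_mhz, clock_mhz):
    # Scale a datasheet loss figure to 1/2 the serial clock, assuming the
    # loss in dB grows with the square root of frequency.
    return loss_per_100m_db * math.sqrt((clock_mhz / 2) / ref_mhz)

EQUALIZER_BUDGET_DB = 30.0                         # typical receiver capability

loss_100m = loss_at_half_clock(5.0, 50.0, 270.0)   # placeholder datasheet value
max_length_m = 100.0 * EQUALIZER_BUDGET_DB / loss_100m
print(f"~{loss_100m:.1f} dB/100 m at 135 MHz -> roughly {max_length_m:.0f} m")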

Connectors
Until recently, all BNC connectors used in television had a characteristic impedance of 50 ohms. BNC connectors of the 75-ohm variety were available, but were not physically compatible with 50-ohm connectors. The impedance "mismatch" to coax is of little consequence at analog video frequencies because the wavelength of the signals is many times longer than the length of the connector. But with serial digital's high data rate (and attendant short wavelengths), the impedance of the connector must be considered. In general, the length of a simple BNC connector doesn't have a significant effect on the serial digital signal. Several inches of 50-ohm connection, such as a number of barrels or a short length of coax, would have to be used for any noticeable effect. In the specific case of a serial transmitter or receiver, the active devices associated with the chassis connector need significant impedance matching. This could easily include making a 50-ohm connector look like 75 ohms over the frequency band of interest. Good engineering practice tells us to avoid impedance mismatches and use 75-ohm components wherever possible, and that rule is being followed in the development of new equipment for serial digital.

Patch Panels
The same holds true for other "passive" elements in the system, such as patch panels. In order to avoid reflections caused by impedance discontinuities, these elements should also have a characteristic impedance of 75 ohms. Existing 50-ohm patch panels will probably be adequate in many instances, but new installations should use 75-ohm patch panels. Several patch panel manufacturers now offer 75-ohm versions designed specifically for serial digital applications.

Terminations and Loop-throughs
The serial digital standard specifies transmitter and receiver return loss to be greater than 15 dB up to 270 MHz. That is, terminations should be 75 ohms with no significant reactive component to 270 MHz. Clearly this frequency is related to component digital, hence a lower frequency might be suitable for NTSC, and that may be considered in the future. As can be seen from the rather modest 15 dB specification, return loss is not a critical issue in serial digital video systems. It's more important at short cable lengths, where the reflection could distort the signal, than at long lengths, where the reflected signal receives more attenuation.

Most serial digital receivers are directly terminated in order to avoid return loss problems. Since the receiver active circuits don't look like 75 ohms, circuitry must be included to provide a good resistive termination and low return loss. Some of today's equipment does not provide return loss values meeting the specification at the higher frequencies. This hasn't caused any known system problems; however, good engineering practice should prevail here as well.

Active loop-throughs are very common in serial digital equipment because they're relatively simple and have signal regeneration qualities similar to a reclocking distribution amplifier. Also, they provide isolation between input and output. However, if the equipment power goes off for any reason, the connection is broken. Active loop-throughs also have the same need for care in circuit impedance matching as mentioned above.



Passive loop-throughs are both possible and practical. Implementations used in serial digital waveform monitors have return loss greater than 25 dB up to the clock frequency. This makes it possible to monitor the actual signal being received by the operational equipment or unit under test, rather than substituting the monitor's own receiver. The most important use of passive loop-throughs is for system diagnostics and fault finding, where it's necessary to observe the signal in the troubled path. If a loop-through is to be used for monitoring, it's important that it be a passive loop-through for two reasons. First, serial transmitters with multiple outputs usually have separate active devices on each output; therefore monitoring one output doesn't necessarily indicate the quality of another output, as it did in the days of resistive fan-out of analog signals. Second, when an active loop-through is used, loss of power in the monitoring device will turn off the signal. This would be disastrous in an operational situation. If an available passive loop-through isn't being used, it's important to make sure the termination is 75 ohms with no significant reactive component to at least the clock frequency of the serial signal. That old, perhaps "precision," terminator in your tool box may not do the job for serial digital.
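To put the 15 dB and 25 dB figures in perspective, the sketch below computes return loss for a purely resistive termination on a 75-ohm line using the standard reflection-coefficient relationship; note that a plain 50-ohm termination comes in at roughly 14 dB, just short of the 15 dB specification:

import math

def return_loss_db(z_load, z0=75.0):
    # Reflection coefficient magnitude for a resistive load, then return loss.
    gamma = abs(z_load - z0) / (z_load + z0)
    return float("inf") if gamma == 0 else 20 * math.log10(1 / gamma)

for z in (75.0, 80.0, 100.0, 50.0):
    print(f"{z:5.1f} ohm termination on a 75 ohm line -> "
          f"{return_loss_db(z):5.1f} dB return loss")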

Digital Audio Cable and Connector Types
AES/EBU digital audio has raised some interesting questions because of its characteristics. In professional applications, balanced audio has traditionally been deemed necessary in order to avoid hum and other artifacts. Usually twisted, shielded, multi-conductor audio cable is used. The XLR connector was selected as the connector of choice and is used universally in almost all professional applications. When AES/EBU digital audio evolved, it was natural to assume that the traditional analog audio transmission cable and connectors could still be used. The scope of AES3 covers digital audio transmission of up to 100 meters, which can be handled adequately with shielded, balanced twisted-pair interconnection.

Because AES/EBU audio has a much wider bandwidth than analog audio, cable must be selected with care. Meeting the AES3 specification requires 110-ohm source and load impedances. The standard has no definition for a bridging load, although loop-through inputs, as used in analog audio, are theoretically possible. Improperly terminated cables may cause signal reflections and subsequent data errors.

The relatively high frequencies of the AES/EBU signals cannot travel over twisted-pair cables as easily as analog audio. Capacitance and high-frequency losses cause high-frequency rolloff. Eventually, signal edges become so rounded and amplitudes so low that the receivers can no longer tell the "1"s from the "0"s. This makes the signals undetectable. Typically, cable lengths are limited to a few hundred feet. XLR connectors are also specified.

Since AES/EBU digital audio has frequencies up to about 6 MHz, there've been suggestions to use unbalanced coaxial cable with BNC connectors for improved performance in existing video installations and for transmissions beyond 100 meters. There are two committees defining the use of BNC connectors for AES/EBU digital audio. The Audio Engineering Society has developed AES3-ID and SMPTE is developing a similar recommended practice. These recommendations use a 75-ohm unbalanced signal instead of the 110-ohm balanced signal and reduce the 3- to 10-volt signal to 1 volt. The resulting signal has the same characteristics as analog video, so traditional analog video distribution amplifiers, routing switchers, and video patch panels can be used. The cost of making up coaxial cable with BNCs is also less than that of multiconductor cable with XLRs. Tests have indicated that EMI emissions are also reduced with coaxial distribution.

Signal Distribution, Reclocking
Although a video signal may be digital, the real world through which that signal passes is analog. Consequently, it's important to consider the analog distortions that affect a digital signal. These include frequency response rolloff caused by cable attenuation, phase distortion, noise, clock jitter, and baseline shift due to AC coupling. While a digital signal will retain the ability to communicate its data despite a certain degree of distortion, there's a point beyond which the data will not be recoverable. Long cable runs are the main cause of signal distortion. Most digital equipment provides some form of equalization and regeneration at all inputs in order to compensate for cable runs of varying lengths. Considering the specific case of distribution amplifiers and routing switchers, there are several approaches that can be used: wideband amplifier, wideband digital amplifier, and regenerating digital amplifier. Taking the last case first, regeneration of the digital signal generally means recovering data from an incoming signal and retransmitting it with a clean waveform using a stable clock source. Regeneration of a digital signal allows it to be transmitted farther and sustain more analog degradation than a signal which has already accumulated some analog distortions.


Regeneration generally uses the characteristics of the incoming signal, such as the extracted clock, to produce the output. In serial digital video, there are two kinds of regeneration: serial and parallel.

Serial regeneration is simplest. It consists of cable equalization (necessary even for cable lengths of a few meters or less), clock recovery, data recovery, and retransmission of the data using the recovered clock. A phase locked loop (PLL) with an LC (inductor/capacitor) or RC (resistor/capacitor) oscillator regenerates the serial clock frequency, a process called reclocking.

Parallel regeneration is more complex. It involves three steps: deserialization; parallel reclocking, usually using a crystal-controlled timebase; and serialization.

Each form of regeneration can reduce jitter outside its PLL bandwidth, but jitter within the loop bandwidth will be reproduced and may accumulate significantly with each regeneration. (For a complete discussion of jitter effects and measurements see Section 8.) A serial regenerator will have a loop bandwidth on the order of several hundred kilohertz to a few megahertz. A parallel regenerator will have a much narrower bandwidth, on the order of several Hertz. Therefore, a parallel regenerator can reduce jitter to a greater extent than a serial regenerator, at the expense of greater complexity. In addition, the inherent jitter in a crystal-controlled timebase (parallel regenerator) is much less than that of an LC or RC oscillator timebase (serial regenerator). Serial regeneration obviously cannot be performed an unlimited number of times because of cumulative PLL oscillator jitter and the fact that the clock is extracted from the incoming signal. Serial regeneration can typically be performed several dozen times before a parallel regeneration is necessary. Excessive jitter buildup eventually causes a system to fail; jitter acceptable to a serial receiver must still be considered and handled from a system standpoint, which will be discussed later.

Regeneration where a reference clock is used to produce the output can be performed an unlimited number of times and will eliminate all the jitter built up in a series of regeneration operations. This type of regeneration happens in major operational equipment such as VTRs, production switchers (vision mixers), or special effects units that use external references to perform their digital processing. So the bottom line is, regeneration (or reclocking) can be perfect, limited only by economic considerations.

A completely different approach to signal distribution is the use of wideband analog routers, but there are many limitations. It's true that wideband analog routers (100 MHz plus) will pass a 143 Mb/s serial digital signal. However, they're unlikely to have the performance required for 270 or 360 Mb/s. With routers designed for wideband analog signals, the 1/√f frequency response characteristics will adversely affect the signal. Receivers intended to work with SMPTE 259M signals expect to see a signal transmitted from a standard source and attenuated by coaxial cable with its frequency response losses. Any deviation from 6 dB/octave rolloff over a bandwidth of 1 MHz up to the clock frequency will cause improper operation of the automatic equalizer in the serial receiver. In general, this will be a significant problem because the analog router is designed to have a flat response up to a certain bandwidth and then roll off at some slope that may, or may not, be 6 dB/octave. It's the difference between the in-band and out-of-band response that causes the problem.

In between these two extremes are wideband serial digital routers that don't reclock the signal. Such non-reclocking routers will generally perform adequately at all clock frequencies for the amount of coax attenuation given in their specifications. However, that specification will usually be shorter than for reclocking routers.

System Timing
The need to understand, plan, and measure signal timing has not been eliminated in digital video; it has just moved to a different set of values and parameters. In most cases, precise timing requirements will be relaxed in a digital environment and will be measured in microseconds, lines, and frames rather than nanoseconds. In many ways, distribution and timing are becoming simpler with digital processing because devices that have digital inputs and outputs can lend themselves to automatic input timing compensation. However, mixed analog/digital systems place additional constraints on the handling of digital signal timing.

Relative timing of multiple signals with respect to a reference is required in most systems. At one extreme, inputs to a composite analog production switcher must be timed to the nanosecond range so there will be no subcarrier phase errors. At the other extreme, most digital production switchers allow relative timing between input signals in the range of one horizontal line. Signal-to-signal timing requirements for inputs to digital video equipment can be placed in three categories:

1. Digital equipment with automatic input timing compensation.


2. Digital equipment without automatic input timing compensation.

3. Digital-to-analog converters without automatic timing compensation.

In the first case, the signals must be in the range of the automatic compensation, generally about one horizontal line. For system installation and maintenance it's important to measure the relative time of such signals to ensure they're within the automatic timing range with enough headroom to allow for any expected changes. Automatic timing adjustments are a key component of digital systems, but they cannot currently be applied to a serial data stream due to the high digital frequencies involved. A conversion back to the parallel format for the adjustable delay is necessary due to the high data rates. Therefore, automatic input timing is not available in all equipment.

For the second case, signals are nominally in-time, as would be expected at the input to a routing switcher. SMPTE recommended practice RP 168, which applies to both 525 and 625 line operation, defines a timing window on a specific horizontal line where a vertical interval switch is to be performed. See Figure 4-1, which shows the switching area for field 1. (Field 2 switches on line 273 for 525 and line 319 for 625.) The window size is 10 µs, which would imply an allowable difference in time between the two signals of a few microseconds. A preferred timing difference is generally determined by the reaction of downstream equipment to the vertical interval switch. In the case of analog routing switchers, a very tight tolerance is often maintained based on subcarrier phase requirements. For serial digital signals, microseconds of timing difference may be acceptable. At the switch point, clock lock of the serial signal will be recovered in one or two digital words (10 to 20 serial clock periods). However, word framing will most likely be lost, causing the signal to assume some random value until the next synchronizing data (EAV) resets the word framing and produces full synchronization of the signal at the receiver. A signal with a random value for part of a line is clearly not suitable on-line in a program feed; however, downstream equipment can be designed to blank the undesired portion of the line. In the case where the undesired signal is not eliminated, the serial signal timing to the routing switcher will have to be within one or two nanoseconds so the disturbance doesn't occur. Clearly, the downstream blanking approach is preferred.

Finally, the more difficult case of inputs to DACs without timing compensation may require the same accuracy as today's analog systems – in the nanosecond range. This will be true where further analog processing is required, such as in analog production switchers (vision mixers). Alternately, if the analog signal is simply a studio output, this high accuracy may not be required.

There's a need for a variety of adjustable delay devices to address these new timing requirements. These include a multiple-line delay device and a frame delay device. A frame delay device, of course, would result in a video delay that could potentially create some annoying problems with regard to audio timing and the passing of time code through the system. Whenever audio and video signals are processed through different paths, a potential exists for differential audio-to-video delay. The differential delay caused by embedding audio is insignificant (under 1 ms). However, potential problems occur where frame delays are used and the video and audio follow separate paths.
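The sketch below puts rough numbers on that differential delay for whole-frame video delays, assuming nominal frame periods and an audio path with no added delay:

def video_delay_ms(frames, system="525"):
    # Nominal frame periods: 1001/30 ms (525/59.94) or 40 ms (625/50).
    frame_ms = 1001 / 30 if system == "525" else 40.0
    return frames * frame_ms

for frames in (1, 2, 3):
    print(frames, "frame(s):",
          f"{video_delay_ms(frames, '525'):.1f} ms (525) /",
          f"{video_delay_ms(frames, '625'):.1f} ms (625)")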

Figure 4-1. Vertical interval switching area.

(Diagram: the field-1 switching areas for 525-line and 625-line systems, a window extending from about 25 µs to 35 µs into the switching line.)


5. Monitoring and Measurements

Test and measurement of serial digital signals can be characterized in three different aspects: usage, methods, and operational environment. Types of usage would include: designer's workbench, manufacturing quality assurance, user equipment evaluation, system installation and acceptance testing, system and equipment maintenance, and, perhaps most important, operation. Requirements for a basic operational monitor include display of the program signals carried by the digital signal with features and accuracy consistent with today's analog baseband signal monitors. An operational monitor should also include information about the serial digital signal itself, such as data availability, bit errors, and data formatting errors. A display of the actual serial waveform is not required. Due to the robust nature of digital video signals, a reduction in the number of operational monitors may be possible. However, monitoring of the program signal waveform is required at all locations where an operator or equipment has the ability to change program parameters. The Tektronix WFM 601 Series of serial-component monitors is shown in Figure 5-1.

Methods for technical evaluation cover several usage areas that have overlapping requirements. In addition to the traditional television system measurements there's a new dimension for test and measurement – quantifying the various parameters associated directly with the serial waveform. The result is several categories of monitoring and measurement methods to be considered: program signal analysis, data analysis, format verification, transmitter/receiver operation, transmission hardware, and fault reporting. An equivalent-time sampling display of the serial waveform is shown on the Tektronix WFM601M Serial Component Monitor in Figure 5-2.

Program signal measurements are essentially the baseband video and audio measurements that have been used for years. An important aspect of these measurements is that the accuracy of signal representation is limited by the number of bits per sample. In analog systems there's been a small amount of digital data present in video signals in the form of Vertical


Figure 5-1. The WFM 601 Series of serial-component monitors.

Figure 5-2. Eye-pattern display on WFM601M.


Interval Time Code (VITC); but the serial data stream has much more capacity for data other than video. Hence, more complete measurement methods are desirable.

Digital signal waveform analysis and its effect on transmitter/receiver operation is a new requirement in test and measurement for television facilities. Rather than acquiring special equipment for interpreting this high-speed waveform, appropriate capabilities are being added to traditional television test equipment, helping in the economic transition to serial digital television. Testing of passive transmission components (coax, patch panels, etc.) is similar to that used with baseband systems, except that much wider bandwidths must be considered.

The third aspect of test and measurement is whether it's in-service or out-of-service. All operational monitoring must be in-service, which means that monitors must be able to give the operator information about the digital signal that carries active program material as well as about the program signal itself. If there are problems to be solved, discovering those with an intermittent nature will also require in-service measurements.

Because of the well-known crash-point type of failure for digital systems, out-of-service testing is also especially important. In order to know how much headroom is available, it's necessary to add stressing parameters to the digital signal in measured amounts until it does crash, which is certainly not acceptable in operational situations.

The next few sections deal with the details of specific measurement techniques covering monitoring, measurements, in-service testing, and out-of-service testing.


6. Measuring the Serial Signal

Waveform Measurements
When viewed on an appropriate scope or monitor, several time sweeps (overlaid by the CRT persistence or digital sample memory) produce a waveform that follows a number of different paths across the screen. The different paths are due to the fact that the digits in the serial stream vary based on the data (high or low states, with or without change at possible transition times). The waveform that results is known as an eye pattern (with two "eyes" shown in Figure 6-1 and displayed on a monitor as shown on the WFM601M Serial Component Monitor in Figure 5-2).

Analog measurements of the serial digital waveform start with the specifications of the transmitter output as shown in Figure 6-2. Specifications to be measured are amplitude, risetime, and jitter, which are defined in the serial standard, SMPTE 259M. Frequency, or period, is determined by the television sync generator developing the source signal, not the serialization process. A unit interval (UI) is defined as the time between two adjacent signal transitions, which is the reciprocal of the clock frequency. The unit interval is 7.0 ns for NTSC, 5.6 ns for PAL, and 3.7 ns for component 525 or 625.

A serial receiver determines whether the signal is a "high" or a "low" in the center of each eye, thereby detecting the serial data. As noise and jitter in the signal increase through the transmission channel, certainly the best decision point is in the center of the eye (as shown in Figure 6-3), although some receivers select a point at a fixed time after each transition. Any effect which closes the eye may reduce the usefulness of the received signal.

In a communications system with forward error correction, accurate data recovery can be made with the eye nearly closed. With the very low error rates required for correct transmission of serial digital video (see Section 7), a rather large and clean eye opening is required after receiver equalization. This is because the random processes that close the eye have statistical "tails" that would cause an occasional, but unacceptable, error. Jitter effects that close the eye are discussed in Section 8.

Figure 6-3. Data recovery.

Figure 6-2. Serial signal specifications.

(Key specifications annotated on the waveform: amplitude 0.8 V ±10%; 20% to 80% risetime 0.4 to 1.5 ns; jitter <0.2 UI p-p with a 10 Hz high-pass filter; one unit interval between transitions.)

Figure 6-1. Eye pattern display.



Amplitude is important because it affects maximum transmission distance; too large can be a problem as well as the more obvious too small. Some receiver equalizers depend on amplitude to estimate cable length, and the setting of the equalizer significantly affects the resulting noise and jitter.

Precise measurement of the serial transmitter waveform requires a scope with a 1 GHz bandwidth because of the serial signal's 1 ns risetime. However, amplitude measurements may be made with lower bandwidth scopes (300 to 500 MHz bandwidth). Monitoring-quality risetime measurements can be made with television test equipment using equivalent-time sampling at the lower bandwidths. Risetime measurements are made from the 20% to 80% points as appropriate for ECL logic devices. Since the serial signal has approximately a 1 ns risetime, the measurement must be adjusted per the following formula:

Ta = √(Tm² – 0.5 Ts²)

Where:
Ta = actual risetime
Tm = measured risetime
Ts = risetime of the scope

The factor of 0.5 adjusts for the fact that the scope risetime specification is for the 10% to 90% points. As an example: with a scope 10 to 90% risetime of 1.0 ns, a measured risetime of 1.2 ns would indicate an actual serial waveform 20 to 80% risetime of 0.97 ns, and a measured 1.6 ns would indicate a 1.44 ns actual risetime.
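The correction is easily captured in a small helper; the examples below reproduce the figures quoted above:

import math

def actual_risetime(measured_ns, scope_risetime_ns):
    # Ta = sqrt(Tm^2 - 0.5 * Ts^2), with the 0.5 factor adjusting for the
    # 10-90% scope specification vs. the 20-80% measurement points.
    return math.sqrt(measured_ns**2 - 0.5 * scope_risetime_ns**2)

print(round(actual_risetime(1.2, 1.0), 2))   # 0.97 ns
print(round(actual_risetime(1.6, 1.0), 2))   # 1.44 ns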

Measuring Serial Digital Video Signal Timing
Relative timing between serial digital video signals that are within an operational range for use in studio equipment may vary from several nanoseconds to a few television lines. Measurement of the timing differences in operational signal paths may be accomplished using the Active Picture Timing Test Signal available from the TSG 422 Digital Component Generator, in conjunction with the timing cursors and line select of the WFM 601 series of serial component waveform monitors.

Figure 6-4 shows a representation of a picture monitor display of the Active Picture Timing Test Signal (previously known as the Digital Blanking Test Signal). The first and last full active analog lines have a white (luminance-only) bar with nominal blanking edges.

Figure 6-4. Picture monitor display of the Active Picture Timing Test signal.

(Labeled in the figure: the first and last full lines in an analog field, the first and last full-amplitude samples allowed by nominal analog blanking, and the first and last digital active picture samples.)


Figure 6-7. Timing measurement, small time difference.

(Screen sketch: Digital Signal A and Digital Signal B shown against the Analog Reference on a WFM 601 one-line mag sweep, with the T1/T2 cursor delta-T readout resolving the small time difference between the pulses.)

Depending on the field, these may or may not be the first and last full digital active lines. Use of the specific full lines shown below ensures that the complete signal will be visible after digital-to-analog conversion. It's the responsibility of the converter to make half-lines where appropriate. The lines with the luminance white bar are:

525-line signals: Lines 21, 262, 284, and 525
625-line signals: Lines 24, 310, 336, and 622

All other active picture lines have a one-sample, half-amplitude word in four locations: the first and last active digital sample, and the first and last sample at the 100% point of a nominal white bar. Figure 6-5 shows the specific location of each half-amplitude word. Note: the last active chrominance sample on each line is co-sited with luminance sample #718, whereas the last luminance sample is #719.

To make a measurement of large relative timing between two digital signals, set up the WFM 601 as shown in Figure 6-6. Alternately, select each input in the line select mode and find the 50% point of the white bar. The line number and delta-T measurement give the total time difference. For higher resolution measurement of small time differences, use the same setup to view the time of the double pulses at the start of the active line, as shown in Figure 6-7. Although the figure shows nice clean pulses, considerable ringing will be seen due to the non-Nyquist nature of the single-word excursion. As with other frequently used setups, these can be stored as one of the nine WFM 601 presets, to be recalled instantly when needed.
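For reference, the line-number difference and the cursor delta-T combine as in the sketch below; nominal line durations are assumed, and the 1.05 µs value is just an illustrative cursor reading:

def total_timing_us(line_difference, delta_t_us, system="525"):
    # Nominal line durations: ~63.556 us (525/59.94) or 64 us (625/50).
    line_us = 63.556 if system == "525" else 64.0
    return line_difference * line_us + delta_t_us

# e.g., white bars found one line apart, with 1.05 us between the pulses:
print(f"{total_timing_us(1, 1.05):.2f} us total time difference")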

Figure 6-5. Locations of half-amplitude samples.

(The figure tabulates, for 525- and 625-line luminance and chrominance, the word numbers and line times of the first and last digital active samples and of the first and last full-amplitude samples of the nominal analog line; drawings are not to scale.)

Figure 6-6. Timing measurement, large time difference.

(Screen sketch: Digital Signal A and Digital Signal B shown against the Analog Reference on a WFM 601 one-line sweep in line-select mode, with the line number and the T1/T2 cursor delta-T readout giving the total time difference.)


7. Definition and Detection of Errors

In our application of computers and digital techniques to automate many tasks, we've grown to expect the hardware and software to report problems to an operator. This is particularly important as systems become larger, more complex, and more sophisticated. It's reasonable to expect digital television systems to provide similar fault reporting capabilities. Television equipment has traditionally had varying levels of diagnostic capabilities. With the development of digital audio and video signals it's now possible to add a vital tool to system fault reporting: confirmation of signal integrity in studio interconnection systems. To understand and use this tool, it's necessary to delve into the technology of digital error detection. The combination of the serial-digital video interconnect method, the nature of television systems, and typical digital equipment design leads to the use of error measurement methods that are economical yet specialized for this application. Serial digital video operates in a basically noise-free environment, so traditional methods of measuring random error rates are not particularly useful. Examination of the overall system characteristics leads to the conclusion that a specialized burst error measurement system will provide the television studio engineer with an effective tool to monitor and evaluate the error performance of serial-digital video systems.

Definition of Errors
In the testing of in-studio transmission links and operational equipment, a single error is defined as one data word whose digital value changes between the signal source and the measuring receiver. This is significantly different from the analog case, where a range of received values could be considered to be correct. Application of this definition to transmission links is straightforward. If any digital data word values change between transmitter and receiver, there's an error. For operational studio equipment, the situation is a little more complicated and needs further definition. Equipment such as routing switchers, distribution amplifiers and patch panels should, in general, not change the data carried by the signal; hence, the basic definition applies. However, when a routing switcher changes the source selection there will be a short disturbance. This should not be considered an error, although the length of the disturbance is subject to evaluation and measurement. SMPTE Recommended Practice RP 168 specifies the line in the vertical interval where switching is to take place, which allows error measurement equipment to ignore this acceptable error.

There are various types of studio equipment that would not normally be expected to change the signal. Examples are frame synchronizers with no proc amp controls, production switchers in a straight-through mode, and digital VTRs in the E-to-E mode. All have the capability (and are likely) to replace all of the horizontal and part of the vertical blanking intervals. In general, replacing part of the television signal will destroy the error-free integrity of the signal. This is true for both component and composite signals, as there could be ancillary data in the replaced sections of the signal. There's an even stronger case for composite signals, as the sync areas are not a strictly defined set of digits and hence may vary within the limits of the analog signal standard.

Considering the possible replacement of blanking areas by some equipment, and knowing that the active picture sample locations are well defined by the various digital standards, it's possible to measure Active Picture Errors separately from Full Field Errors. The Active Picture Error concept takes care of blanking area replacements. However, there is a more insidious possibility based on the television design engineer's historical right to modify the blanking edges. Most digital standards were designed with digital blanking narrower than analog blanking to ensure appropriate edge transitions in the analog domain. (Large-amplitude edge transitions within one sample period, caused by time-truncating the blanking waveform, can create out-of-band spectral components that can show up as excessive ringing after digital-to-analog conversion and filtering.) What happens in some equipment is that engineers have modified the samples representing the analog blanking edge in order to provide a desired transition; therefore, error measurements through such equipment, even using the Active Picture concept, will not work.

With VTRs, there's a further limit to the measurement of errors. The original full-bit-rate VTR formats (D-1, D-2, and D-3) are designed with significant amounts of forward error correction and work very well. However, it's the nature of the tape recording and reproduction process that some errors will not be corrected. Generally the error correction system identifies the uncorrected data, and sophisticated error concealment systems can be applied to make a virtually perfect picture. The systems are so good that tens of generations are possible with no human-noticeable defect. Although



the error concealment is excellent, it's almost certain that error measurement systems (which by definition require perfect data reproduction) would find errors in many completely acceptable fields. New VTR formats that use bit-rate reduction will have an additional potential source of small but acceptable data value changes: the data compression and decompression schemes are not absolutely lossless, resulting in some minor modification of the data. Therefore, error measurement methods external to a digital VTR are not meaningful because of potential blanking edge adjustment, error concealment methods and, in some equipment, the use of data-rate reduction. The good news is that various types of error rate and concealment rate data are available inside the VTR, and standards for reporting that information are being developed.

Quantifying Errors
Most engineers are familiar with the concept of Bit Error Rate (BER), which is the ratio of bits in error to total bits. As an example, the 10-bit digital component data rate is 270 Mb/s. If there were one error per frame, the BER would be 30/(270 x 10⁶) = 1.11 x 10⁻⁷ for 525-line systems or 25/(270 x 10⁶) = 0.93 x 10⁻⁷ for 625-line systems. Table 7-1 shows the BER for one error over different lengths of time for various television systems. BER is a useful measure of system performance where the signal-to-noise ratio at the receiver is such that noise-produced random errors occur.

As part of the serial digital interconnect system, scrambling is used to lower the DC content of the signal and provide sufficient zero crossings for reliable clock recovery. It's the nature of the descrambler that a single bit error absolutely causes an error in two words (samples), with a 50-percent probability that the error in one of the words is in the most, or next to the most, significant bit. Therefore, an error rate of 1 error/frame will be noticeable by a reasonably patient observer. If it's noticeable, it's unacceptable; but it's even more unacceptable because of what it tells us about the operation of the serial transmission system.
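The same arithmetic can be checked with a few lines of Python; the function simply mirrors the errors-per-frame-over-bit-rate calculation above (Table 7-1 rounds these values to one significant figure):

def ber_one_error_per_frame(frames_per_second, bit_rate):
    # One errored word per frame, expressed against the total serial bit rate.
    return frames_per_second / bit_rate

print(f"{ber_one_error_per_frame(30, 270e6):.2e}")   # ~1.1e-07, 525-line component
print(f"{ber_one_error_per_frame(25, 270e6):.2e}")   # ~9.3e-08, 625-line component
print(f"{ber_one_error_per_frame(30, 143e6):.2e}")   # ~2.1e-07, NTSC composite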

Nature of Studio Digital Video Transmission Systems
Specifications for sources of digital video signals are defined by SMPTE 259M. Although the specifications do not include a signal-to-noise ratio (SNR), typical values would be 40 dB or greater at the transmitter. Errors will occur if the SNR at some location in the system reaches a low enough value, generally in the vicinity of 20 dB. Figure 7-1 is a block diagram of the basic serial transmitter and receiver system. An intuitive method of testing the serial


Table 7-1. Error Frequency and Bit Error Rates

Time Between Errors     NTSC           PAL            Component
                        143 Mb/s       177 Mb/s       270 Mb/s
1 television frame      2 x 10⁻⁷       2 x 10⁻⁷       1 x 10⁻⁷
1 second                7 x 10⁻⁹       6 x 10⁻⁹       4 x 10⁻⁹
1 minute                1 x 10⁻¹⁰      9 x 10⁻¹¹      6 x 10⁻¹¹
1 hour                  2 x 10⁻¹²      2 x 10⁻¹²      1 x 10⁻¹²
1 day                   8 x 10⁻¹⁴      7 x 10⁻¹⁴      4 x 10⁻¹⁴
1 week                  1 x 10⁻¹⁴      9 x 10⁻¹⁵      6 x 10⁻¹⁵
1 month                 3 x 10⁻¹⁵      2 x 10⁻¹⁵      1 x 10⁻¹⁵
1 year                  2 x 10⁻¹⁶      2 x 10⁻¹⁶      1 x 10⁻¹⁶
1 decade                2 x 10⁻¹⁷      2 x 10⁻¹⁷      1 x 10⁻¹⁷
1 century               2 x 10⁻¹⁸      2 x 10⁻¹⁸      1 x 10⁻¹⁸

Table 7-2. Error Rate as a Function of SNR for Serial Digital NTSC

Time Between Errors     BER           SNR (dB)    SNR (volts ratio)
1 microsecond           7 x 10⁻³      10.8        12
1 millisecond           7 x 10⁻⁶      15.8        38
1 television frame      2 x 10⁻⁷      17.1        51
1 second                7 x 10⁻⁹      18.1        64
1 minute                1 x 10⁻¹⁰     19.0        80
1 day                   8 x 10⁻¹⁴     20.4        109
1 month                 3 x 10⁻¹⁵     20.9        122
1 century               2 x 10⁻¹⁸     21.8        150

Figure 7-1. Serial transmission system.

(Block diagram: a serial source drives coax cable; the 0.8 V transmitted signal is attenuated to roughly 0.03 V at the end of the cable and restored by the receiver's equalizing amplifier.)


Figure 7-2. Calculated NTSC bit-error rate.

system is to add cable – a straightforward method of lowering the SNR. Since coax itself is not a significant noise source, it's the noise figure of the receiver that determines the operating SNR. Assuming an automatic equalizer in the receiver, eventually, as more cable is added, the signal level lost to coax attenuation causes the SNR in the receiver to be such that errors occur.

Based on the scrambled NRZI channel code used and assuming gaussian-distributed noise, a calculation using the error function gives the theoretical values shown in Table 7-2. The calibration point for this calculation is based on the capabilities of the serial-digital interface. In the proposed serial digital standard it's stated that the expected operational distance is through a length of coax that attenuates a frequency of 1/2 the clock rate by up to 30 dB. That is, receivers may be designed with less or more capability, but the 30 dB value is considered to be realizable. The data in Table 7-2 for NTSC serial digital transmission shows that a 4.7 dB increase in SNR changes the result from 1 error per frame to 1 error per century. For NTSC, the calibration point for the calculation is 400 meters of Belden 8281 coax. (Various types of coax can be used provided they have a frequency response reasonably meeting the √f characteristic described in the proposed standard. It's the 30 dB-of-loss point that's critical.)

This same theoretical data can be expressed in a different manner to show error rates as a function of cable length, as demonstrated in Table 7-3 and shown graphically in Figure 7-2. The graph makes it very apparent that there's a sharp knee in the cable length vs. error rate curve. Eighteen additional meters of cable (5 percent of the total) moves operation from the knee to completely unacceptable, while 50 fewer meters of cable (12 percent of the total length) moves operation to a reasonably safe 1 error/month. Similar results will be obtained for other standards, where the calibration point for the calculation is 360 meters for PAL and 290 meters for component. The cable length change required to maintain headroom scales proportionally, as shown in Figure 7-3.

Good engineering practice would suggest a 6-dB margin, or 80 meters of cable, hence a maximum operating length of about 320 meters in an NTSC system where the knee of the curve is at 400 meters. At that operating level, there should never be any errors (at least not in our lifetime). Practical systems include equipment that doesn't necessarily completely reconstitute the signal in terms of SNR. That is, sending the signal through a distribution amplifier or routing switcher may result in a completely useful, but not completely standard, signal being sent to a receiving device. The non-standardness could be both jitter and noise, but the sharp-knee characteristic of the system would remain, occurring at a different amount of signal attenuation. Use of properly equalized and reclocked distribution and routing equipment at intervals with adequate headroom provides virtually unlimited total transmission distances.

Table 7-3. Error Rate as a Function of Cable Length Using 8281 Coax for Serial Digital NTSC

Time Between Errors     BER           Cable Length    Attenuation (dB)
                                      (meters)        at 1/2 Clock Freq
1 microsecond           7 x 10⁻³      484             36.3
1 millisecond           7 x 10⁻⁶      418             31.3
1 television frame      2 x 10⁻⁷      400             30.0
1 second                7 x 10⁻⁹      387             29.0
1 minute                1 x 10⁻¹⁰     374             28.1
1 day                   8 x 10⁻¹⁴     356             26.7
1 month                 3 x 10⁻¹⁵     350             26.2
1 century               2 x 10⁻¹⁸     ≤338            25.3
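As a rough check on the headroom arithmetic above, the sketch below uses the 400 m / 30 dB calibration point for Belden 8281 at half the NTSC clock to convert a desired margin into a maximum operating length (illustrative only):

LOSS_DB_PER_M = 30.0 / 400.0        # ~0.075 dB per metre at 71.5 MHz (8281)
KNEE_DB = 30.0                      # attenuation at the knee of the curve

def length_for_margin(margin_db):
    # Operating length that leaves `margin_db` of headroom below the knee.
    return (KNEE_DB - margin_db) / LOSS_DB_PER_M

print(f"6 dB margin -> {length_for_margin(6.0):.0f} m maximum operating length")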


Measuring Bit Error Rates (BER)
BER measurements may be made directly using equipment specifically designed for that purpose. Unfortunately, in a properly operating system, say with 6 dB of headroom, there's no BER to measure. This is because the serial digital television system generally operates in an environment that is free from random errors. If errors are never going to occur (as implied by Table 7-2) with 6 dB of headroom, what's there to measure? A more common problem will be burst errors due to some sort of interfering signal, such as a noise spike that occurs at intermittent intervals spaced far apart in time. Another source could be crosstalk that might come and go depending on what other signals are being used at a particular time. Also, there's the poor electrical connection at an interface that would cause noise only when it's mechanically disturbed.

Because of the intermittent nature of burst errors, data recording and communications engineers have defined another error measurement – the Errored Second. The following example demonstrates the benefits of the errored second. Suppose a burst error causes 10,000 errors in two frames of video. A BER measurement made for 1 minute would indicate a 1 x 10⁻⁶ BER, and a measurement made for 1 day would indicate an 8 x 10⁻⁹ BER, whereas an errored second measurement could indicate that there was one second in error at a time 3 hours, 10 minutes and 5 seconds ago. The errored second method is clearly a more useful measurement in this case.

A significant advantage of errored seconds vs. a straight BER measurement is that it's a better measure of fitness for service of links that are subject to burst errors. The serial digital video system fits this category because television images are greatly disturbed by momentary loss of synchronization. A BER measurement could give the same value for a single, large burst as it does for several shorter scattered bursts. But if several of the shorter bursts each result in momentary sync failure, the subjective effect will be more damage to the viewed picture than that caused by the single burst. Errored seconds, and its inverse, error-free seconds, do a good job of quantifying this.

For use in serial-digital television systems, there are several disadvantages to direct measurement of BER:

1. It must be an out-of-service test, because traditional BER measurements use one of several defined pseudo-random sequences at various bit rates.

2. None of the sequences are particularly similar to the serial-digital video bit stream; hence, some television equipment will not process the test set bit patterns.

3. Consider a system operating with a reasonable 6 dB of SNR margin with respect to 1 error/frame. Errors due to random noise will occur once every hundred years or so. A BER measurement will not be possible.

4. BER measurements do not provide meaningful data when the system under test is basically noise-free but potentially subject to burst errors.

5. Test sets for BER measurement are expensive considering their limited application in a television system.

An Error Measurement Method for Television
Tektronix has developed an error detection system for digital television signals and has placed the technical details in the public domain to encourage other manufacturers to use the method. This method has proven to be a sensitive and accurate way to determine if the system is operating correctly, and it has been approved for standardization by SMPTE as Recommended Practice RP 165. Briefly, the Error Detection and Handling (EDH) concept is based on making Cyclic Redundancy Code (CRC) calculations for each field of video at the serializer, as shown in Figure 7-4.



Figure 7-3. Calculated 525/625 component bit-error rate.


Separate CRCs for the full field and active picture, along with status flags, are then sent with the other serial data through the transmission system. The CRCs are recalculated at the deserializer and, if not identical to the transmitted values, an error is indicated. Typical error detection data will be presented as errored seconds over a period of time, and time since the last errored second. (A minimal sketch of the idea follows the list below.)

In normal operation of the serial-digital interface, there will be no errors to measure. What's of interest to the television engineer is the amount of headroom that's available in the system. That is, how much stressing could be added to the system before the knee of the error rate curve, or crash point, is reached. As an out-of-service test, this can be determined by adding cable or another stressing method until the onset of errors. Since it's an out-of-service test, either a BER test set or RP 165 could be used. There are, however, many advantages to using the RP 165 system:

1. The CRC data is part of the serial-digital television signal, thereby providing a meaningful measure of system performance.

2. RP 165 can be used as an in-service test to automatically and electronically pinpoint any system failures.

3. For out-of-service testing, RP 165 is sufficiently sensitive to accurately define the knee of the error rate curve during stressing tests.

4. Where there are errors present, RP 165 provides the information necessary to determine errored seconds, which is more useful than bit-error rate.

5. Facility is provided for measuring both full-field and active-picture errors. Optional status flags are also available to facilitate error reporting.

6. CRC calculation can be built into all serial transmitters and receivers for a very small incremental cost. With error information available from a variety of television equipment, the results can then be routed to a central collection point for overall system diagnostics.
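The sketch below illustrates the EDH idea described above: compute a CRC over a field's worth of sample words at the source, carry it with the signal, recompute it at the destination, and flag any mismatch. The generator polynomial x^16 + x^12 + x^5 + 1 is the CRC-16 commonly cited for RP 165, but the word handling and bit ordering here are simplified for illustration and are not the standard's exact formulation:

def field_crc(words, poly=0x11021, width=16):
    # Bit-serial CRC over 10-bit video words, MSB first (illustrative only).
    crc = 0
    for word in words:
        for bit in range(9, -1, -1):
            crc = (crc << 1) | ((word >> bit) & 1)
            if crc & (1 << width):
                crc ^= poly
    return crc & ((1 << width) - 1)

source_field = [0x200, 0x2D0, 0x040, 0x3AC] * 4    # stand-in sample data
received_field = list(source_field)
received_field[5] ^= 0x004                         # a single corrupted bit

# A mismatch between transmitted and recalculated CRCs indicates an error.
assert field_crc(source_field) != field_crc(received_field)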

Composite digital systems already must have a coprocessor to handle special timing signals required in the serial data and not allowed in the parallel data. Adding basic CRC capabilities to the coprocessor costs close to nothing. For component systems, which do not have these differences between serial and parallel, there's a small incremental cost, which should be a fraction of a percent of the total cost in equipment where it's appropriate to use this method.

System Considerations

In a large serial digital installation, the need for traditional waveform monitoring can be reduced by the systematic application of error detection methods. At signal source locations, such as cameras, DVEs, and VTRs, where operational controls can affect the program signal, it will continue to be important to verify the key program signal parameters using waveform monitors with serial digital input capabilities. The results of certain technical operations, such as embedded audio mux/demux, serial link transmission, and routing switcher I/O, can be monitored with less sophisticated digital-only equipment provided the signal data integrity is verified. It's the function of the RP 165 system to provide that verification.

For equipment where the digital signal remains in the serial domain, such as routing switchers or distribution amplifiers, it's not economical to provide the CRC calculation function. Therefore, a basic error detection system will be implemented as shown in Figure 7-5.

Figure 7-4. Error detection concept.



Figure 7-7. System fault reporting.

Equipment that processes the signal in the parallel domain, such as VTRs, DVEs, or production switchers, should provide the transmit and receive CRC calculation necessary for error detection. That equipment can then report errors locally and/or to a central diagnostics computer. Routing switchers and other equipment operating in the serial domain will have their signal path integrity verified by the error detection system.

To emphasize the importance of including the CRC calculation in parallel-domain processing equipment, consider the block diagram in Figure 7-6. Since the signal source doesn't have an internal CRC data generator, one might consider a black box to provide that function. Unfortunately, the cost of deserializing and reserializing is prohibitively high, so such a system is not recommended. At a receiver that doesn't calculate the CRC, it's possible, and not unreasonable, to provide a monitor with either passive or active loop-through that determines the signal integrity. The drawback of this system is that it's the monitor's receiver that's being used to process the signal, not the actual destination equipment. It's possible that different receiver characteristics of the two pieces of equipment would provide misleading results. If the receiver being tested has an active loop-through, it's useful to place the error detecting monitor at that output. Errors detected would most likely be due to the first receiver, that is, the unit being tested, eliminating the difference in receiver sensitivities.

To extend the idea of system diagnostics beyond simple digital data error detection, it would be useful to provide an equipment fault reporting system. SMPTE 269M documents a single contact closure fault reporting system, and discussions have begun on computer data communications protocols for more sophisticated fault reporting. Error detection is an important part of system diagnostics, as shown in Figure 7-7. Information relating to signal transmission errors is combined with other equipment internal diagnostics to be sent to a central computer. A large routing switcher can economically support internal diagnostics. Using two serial receivers, a polling of serial-data errors could be implemented. The normal internal bus structure could be used to send serial data to a receiver to detect input errors, and a special output bus could be used to ensure that no errors were created within the large serial-domain routing switcher.

Figure 7-6. Using error detection (alternate).

Figure 7-5. Using error detection (basic). (Block diagram: signal processing and/or storage equipment in the parallel domain at each end, linked through signal routing equipment in the serial domain, with error reporting. Tx = deserialize, CRC calculation, reserialize; Rx = monitor with active or passive loop-thru, CRC calculation and compare.)



8. Jitter Effects and Measurements

Digital transmission systems can operate with a considerable amount of jitter; in fact, more jitter than would be tolerated in the program signal represented by the digits. This section looks at the reasons behind this separation of jitter effects – the amount of jitter that may be found in different digital transmission systems and how to measure the jitter in serial transmission systems.

The jitter specification in the standard for the serial-digital video signal is "the timing of the rising edges of the data signal shall be within ±0.25 ns of the average timing of rising edges, as determined over a period of one line." As of the first publication of the standard in February 1992, there's a note to the specification which states "this specification is tentative with further work in progress to determine the measurement method." In fact, both the specification and its measurement method are the subject of SMPTE engineering committee work.

In parallel component transmission, the jitter specification for the 27 MHz clock signal states that "the peak-to-peak jitter between rising edges shall be within 3 ns of the average time of the rising edge computed over at least one field." For NTSC composite, the specification is 5 ns.

Digital audio per the AES standard is a serial signal that is a multiplex of samples from two audio channels and some ancillary data (see digital audio formats in Section 3). For television, the preferred audio sampling rate is 48 kHz, clock-locked to video as required for digital VTRs. There are 32 bits of data associated with each audio sample, which produces a 2-channel (one serial stream) data rate of 3.07 Mb/s. Since the channel coding scheme (biphase mark) inserts a transition for each clock, the resulting transition rate of the serial signal can be 6.14 MHz, giving an eye pattern unit interval of 163 ns. For AES audio, the jitter specification is "data transitions shall occur within ±20 ns of an ideal jitter-free clock."

Jitter In Serial Digital Signals

Jitter, noise, amplitude changes, and other distortions to the serial-digital signal can occur as it's processed by distribution amplifiers, routing switchers, and other equipment that operates on the signal exclusively in its serial form. Figure 8-1 shows how the recovered signal can be as perfect as the original if the data is detected with a jitter-free clock. As long as the noise and jitter don't exceed the threshold of the detection circuits (that is, the eye is sufficiently open), the data will be perfectly reconstructed.

In a practical system, the clock is extracted from the serial bit stream and contains some of the jitter present in the signal. Jitter in the clock can be a desirable feature to the extent that the jitter helps position the clock edge in the middle of the eye. Large amounts of low-frequency jitter can be tolerated in serial receivers if the clock follows the eye opening location as its position varies with time. An example of receiver jitter sensitivity is shown in Figure 8-2, where the solid line represents the amount of jitter that would cause data errors at the receiver.
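The AES/EBU figures quoted above follow directly from the frame structure; a quick arithmetic check (illustrative only):

    # Quick arithmetic behind the AES/EBU figures quoted above.
    sample_rate = 48_000          # Hz, preferred television audio sampling rate
    bits_per_subframe = 32        # bits carried per audio sample (subframe)
    channels = 2                  # one AES/EBU stream multiplexes two channels

    data_rate = sample_rate * bits_per_subframe * channels
    print(data_rate)              # 3_072_000 b/s, i.e. about 3.07 Mb/s

    # Biphase-mark coding guarantees a transition for every data bit and can add
    # a mid-bit transition, so the cell (transition) rate is twice the data rate.
    cell_rate = 2 * data_rate
    print(cell_rate)              # 6_144_000 Hz, about 6.14 MHz
    print(1e9 / cell_rate)        # unit interval of about 162.8 ns, the "163 ns" above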

Figure 8-1. Data recovery with a noise-free clock.

Figure 8-2. Receiver jitter sensitivity.

(Figure 8-2 plots receiver jitter tolerance, in clock unit intervals, against jitter frequency from 10 Hz to 2 MHz: roughly 2 unit intervals of jitter is tolerated at low frequencies, falling to about 0.25 unit intervals at high frequencies. Jitter below the break point appears in the extracted clock; jitter above it does not. Break points are marked for crystal (Xtal) and LC oscillator clock-extraction circuits, in the 10 kHz to 250 kHz region.)



For low jitter frequencies, eye location variations of up to several unit intervals will not defeat data extraction. As the jitter frequency increases, the receiver tolerance decreases to a value of about 0.25 unit intervals. The break points for this tolerance curve depend on the type of phase-locked loop circuitry used in clock extraction.

Based on the acceptance of some jitter in the extracted clock used to recover the data, two types of jitter are defined:

1. Timing Jitter is defined as the variation in time of the significant instants (such as zero crossings) of a digital signal relative to a clock with no jitter above some low frequency (about 10 Hz).

2. Alignment Jitter (or relative jitter) is defined as the variation in time of the significant instants (such as zero crossings) of a digital signal relative to a clock recovered from the signal itself. (This clock will have jitter components above 10 Hz but none above a higher frequency in the 1 kHz to 10 kHz range.)
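A small numerical sketch may help separate the two definitions. It models the recovered clock as a first-order low-pass tracking of the edge positions; the corner frequency, jitter amplitudes, and edge rate are illustrative assumptions, not values from any standard:

    import math

    # Edge stream with slow wander plus fast jitter; the "recovered clock" is a
    # first-order low-pass tracking of the edge offsets.
    ui = 3.7e-9                              # serial component unit interval, seconds
    f_wander, a_wander = 50.0, 2.0 * ui      # slow wander: 2 UI at 50 Hz
    f_jitter, a_jitter = 100e3, 0.1 * ui     # fast jitter: 0.1 UI at 100 kHz
    f_corner = 1e3                           # assumed clock-recovery loop bandwidth, Hz
    edge_rate = 1e6                          # model edges at 1 MHz for simplicity

    dt = 1.0 / edge_rate
    alpha = 1.0 - math.exp(-2.0 * math.pi * f_corner * dt)

    timing, alignment, clock = [], [], 0.0
    for n in range(200_000):
        t = n * dt
        offset = (a_wander * math.sin(2 * math.pi * f_wander * t) +
                  a_jitter * math.sin(2 * math.pi * f_jitter * t))
        clock += alpha * (offset - clock)    # recovered clock follows the slow wander
        timing.append(offset)                # deviation from an ideal clock
        alignment.append(offset - clock)     # deviation from the recovered clock

    print("timing jitter (p-p, UI):    %.2f" % ((max(timing) - min(timing)) / ui))
    print("alignment jitter (p-p, UI): %.2f" % ((max(alignment) - min(alignment)) / ui))
    # Timing jitter shows the full swing of several UI; alignment jitter is only a
    # few tenths of a UI because the recovered clock tracks the low-frequency wander.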

Since the data will generally be recovered using a clock with jitter, the resulting digital information may have jitter on its transition edges, as shown in Figure 8-3. This is still completely valid information, since digital signal processing will only consider the high/low value in the middle of the clock period. However, if the same clock with jitter (or a simple sub-multiple thereof) is used to convert the digital information to analog, errors may occur, as shown in Figure 8-4. Sample values converted to analog at exactly the correct times produce a straight line, whereas use of a clock with jitter produces an incorrect analog waveform. The relative effect on the analog waveform produced using the same clock for digital-to-analog conversion and for data extraction from the serial signal depends on the amount of jitter (in unit intervals) and the number of quantizing levels in the digitized signal. Where reduced jitter is required for the digital-to-analog converter (DAC) clock, a circuit such as shown in Figure 8-5 may be used, where a high-Q (crystal)

Figure 8-4. Signal errors due to clock jitter.

Figure 8-3. Data recovery with extracted clock.

Figure 8-5. DAC with a low-jitter clock.



phase-locked loop produces a clock with much lower jitter.

Typical jitter specifications for serial audio and serial/parallel video signals are shown in Table 8-1. The AES/EBU standard for serial digital audio allows ±20 ns of jitter, which is appropriate as the peak-to-peak value of 40 ns is about 1/4 of a unit interval. As explained above, the DAC clock jitter requirements are considerably tighter. A draft AES/EBU standard specifies the DAC clock at 1 ns jitter; however, a theoretical value for 16-bit audio could be as small as 0.1 ns. The important point is that receivers for AES/EBU serial audio should be designed to handle up to 40 ns of jitter, whereas the internal DAC circuitry of such receivers must reduce the clock jitter to an acceptable level.

The situation for parallel digital video is similar; however, practical implementations have generally avoided the whole issue. That is, most or all parallel digital video clocks have jitter in the 1 ns or smaller region, and most or all parallel receivers do not allow or account for clocks with more than that small amount of jitter. For 525/625 video signals, input to a 10-bit full-range D/A converter, the allowable clock jitter is based on a burst-frequency, 25% amplitude signal (approximately burst amplitude) causing less than one LSB of error. For high-definition signals, a 20 MHz signal is used. In the author's opinion, designers of new equipment would do well to follow this tradition and not produce parallel clock jitter with values in the region of the published specification. For serial digital systems, the engineering community is still considering whether low-frequency jitter greater than the nominal 0.5 ns should be allowed. That is, should clocks extracted from all serial-digital signals be appropriate as inputs to D/A converters without using jitter reduction, or should D/A converters be expected to provide the jitter reduction circuitry?

Measuring Jitter

There are three oscilloscope-related methods of measuring jitter in a serial digital signal. Measurement of timing jitter requires a jitter-free reference clock, as shown in the top part of Figure 8-6. Alignment (relative) jitter is measured using a clock extracted from the serial signal being evaluated, as shown in the bottom part of this figure. Timing jitter measurements include essentially all frequency components of the jitter, as indicated by the dashed line leading into the solid line in Figure 8-7. The jitter frequency components measured using the alignment-jitter method will depend on the bandwidth of the clock extraction circuit. Low-frequency components of jitter will not be included because the extracted clock follows the serial signal jitter. These low-frequency jitter components are not significant for data recovery provided the measurement clock extraction system has the same bandwidth as the data recovery clock extraction system. All jitter frequencies above a certain value will be measured, with break points between the two areas at frequencies appropriate for the bandwidth of the clock extraction system used. When deriving the scope trigger, it's important to consider the frequency division that usually takes place in the clock extractor.
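As a rough check of the one-LSB criterion mentioned above, the following sketch estimates the allowable DAC clock jitter for a 10-bit, 25% amplitude signal at NTSC burst frequency. Whether the 25% figure is peak or peak-to-peak is an assumption here, so treat the result only as an order-of-magnitude illustration:

    import math

    bits = 10
    full_range = 2 ** bits                 # 1024 LSBs of full-range conversion
    f_burst = 3.579545e6                   # Hz, NTSC color subcarrier frequency

    amp_pp = 0.25 * full_range             # assume 25% of the range, peak-to-peak
    amp_peak = amp_pp / 2                  # 128 LSB peak

    max_slope = 2 * math.pi * f_burst * amp_peak       # LSB per second, worst case
    dt_for_1lsb = 1.0 / max_slope
    print("clock jitter for 1 LSB of error: %.2f ns" % (dt_for_1lsb * 1e9))
    # Comes out around 0.3 to 0.4 ns, consistent with the sub-nanosecond DAC clock
    # requirements listed in Table 8-1.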

Figure 8-6. Measurements using a clock reference.

(Figure 8-6 shows two setups: for the timing jitter measurement, the serial signal drives the oscilloscope vertical input while a jitter-free reference clock provides the horizontal trigger; for the alignment jitter measurement, the horizontal trigger comes instead from a clock-extraction and phase-locked-loop circuit driven by the serial signal itself. In both cases the jitter is read from the oscilloscope presentation of the eye.)

Table 8-1. Typical Jitter Specifications

Standard             Clock Period   Jitter Spec   Spec as % of Clock   DAC Clock Requirement
AES/EBU Audio        163 ns         40.0 ns       25%                  1.0 ns spec (0.1 ns for 16-bit)
Serial NTSC          7 ns           0.5 ns        7%                   0.5 ns
Serial PAL           6 ns           0.5 ns        9%                   0.5 ns
Serial Component     4 ns           0.5 ns        14%                  0.5 ns
Parallel NTSC        70 ns          10.0 ns       15%                  0.5 ns
Parallel PAL         56 ns          10.0 ns       20%                  0.5 ns
Parallel Component   37 ns          6.0 ns        16%                  0.5 ns
Parallel HD          7 ns           1.0 ns        14%                  0.1 ns



If it's a divide-by-ten, then word-synchronized jitter may or may not be observed, depending on whether the sweep time of the display covers exactly ten zero crossings. Division by a number other than the word length will ensure that all jitter is measured.

The third method of observing, but not measuring, jitter is the simple but deceptive self-triggered scope measurement depicted in Figure 8-8. By internally triggering the scope on the rising (or falling) edge of the waveform, it's possible to display an eye pattern at a delayed time after the trigger point. Using modern digital sampling scopes, long delays may be obtained with extremely small amounts of jitter attributable to the delay circuits in the scope. At first glance, this looks like a straightforward way to evaluate the eye pattern, since a clock extraction circuit is not required. The problem is that the components of jitter frequency being measured are a function of the sweep delay used, and that function is a comb filter. As a result, there are some jitter frequencies that will not be measured and others that will indicate twice as high a value as would be expected. Although there's a low-pass effect with this type of measurement, the shape of the filter is much different from that obtained with an extracted-clock measurement; hence, the results may not be a good indication of the ability of a receiver to recover the data from the serial bit stream.

As an example, component serial digital video has a unit interval of 3.7 ns. If the sweep delay is 37 ns, the tenth zero crossing will be displayed. In many of today's serializers there's a strong jitter component at one tenth of the clock frequency (due in part to the times-10 multiplier in the serializer). A self-triggered measurement at the tenth zero crossing will not show that jitter. Alternately, a measurement at other zero crossings may show as much as two times too high a value of jitter. To further complicate the matter, if the one-tenth frequency jitter were exactly symmetrical (such as a sine wave), the fifth zero crossing would also have no jitter. In practice, the fifth crossing also shows a lot of jitter; only the tenth, twentieth, thirtieth, etc., crossings appear to be nearly jitter free. In troubleshooting a system, the self-triggered method can be useful in tracking down a source of jitter. However, proper jitter measurements to meet system specifications should be made with a clock extraction method. A SMPTE ad-hoc group is developing methods to measure and specify jitter for serial digital video, most likely incorporating a defined clock-extraction system.
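The comb-filter behavior of the self-triggered method can be shown with a few lines of arithmetic. The sketch below models sinusoidal jitter and computes the ratio of observed to actual jitter at several delayed crossings; the model is illustrative, not a description of any particular instrument:

    import math

    # For sinusoidal jitter at frequency f, the displacement seen at a crossing
    # delayed by T from the trigger crossing is j(t+T) - j(t), whose amplitude is
    # 2*|sin(pi*f*T)| times the true jitter amplitude.

    ui = 3.7e-9                      # component serial digital unit interval, ~3.7 ns
    f_jitter = 27e6                  # jitter at one tenth of the 270 MHz clock

    for crossings in (1, 2, 5, 10, 20):
        delay = crossings * ui
        gain = 2 * abs(math.sin(math.pi * f_jitter * delay))
        print("crossing %2d (delay %5.1f ns): observed/actual = %.2f"
              % (crossings, delay * 1e9, gain))

    # The 10th and 20th crossings read essentially zero for this jitter component,
    # while other crossings can read up to twice the true value, which is the
    # behavior described in the text above.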

Figure 8-7. Measured jitter components.

(Figure 8-7 plots measured jitter, as a relative value, against jitter frequency from 10 Hz to 2 MHz: total jitter, measured with a reference clock, is shown at full value across the band; jitter measured with an extracted clock falls to zero below the clock-extraction break point, which lies in the 10 kHz to 250 kHz region depending on the clock-extraction oscillator, crystal (Xtal) or LC.)

Figure 8-8. Self-triggered jitter observations.



9. System Testing

Stress Testing

Unlike analog systems that tend to degrade gracefully, digital systems tend to work without fault until they crash. This characteristic is most clearly illustrated by the discussion of error detection in Section 7. However, other stressing functions will produce much the same result. When operating a digital system, it's desirable to know how much headroom is available; that is, how far the system is from the crash point. To date, there are no in-service tests that will measure the headroom, but research is being pursued in this area. Therefore, out-of-service stress tests are required to evaluate system operation.

Stress testing consists of changing one or more parameters of the digital signal until failure occurs. The amount of change required to produce a failure is a measure of the headroom. Starting with the specifications in the serial digital video standard (SMPTE 259M), the most intuitive way to stress the system is to add cable until the onset of errors. Other tests would be to change amplitude or risetime, or to add noise and/or jitter to the signal. Each of these tests evaluates one or more aspects of receiver performance, specifically automatic equalizer range and accuracy and receiver noise characteristics. Experimental results indicate that cable-length testing is the most meaningful stress test because it represents real operation. Stress testing the receiver's ability to handle amplitude changes and added jitter is useful in evaluating and accepting equipment, but not too meaningful in system operation. (Measuring the signal amplitude at the transmitter and measuring jitter at various points in the system is important in operational testing, as discussed in Sections 6 and 8, but not as stress testing.) Addition of noise or change in risetime (within reasonable bounds) has little effect on digital systems and is not important in stress tests.

Cable-length stress testing can be done using actual coax or a cable simulator. Coax is the real world and the most accurate method. The key parameter to be measured is the onset of errors, because that defines the crash point as described in Section 7. With an error measurement method in place, the quality of the measurement will be determined by the sharpness of the knee of the error curve. As an example, using 8281 coax, a five-meter change in length will go from no errors in one minute to more than one error per second.

To evaluate a cable simulator, the natural approach is to compare its loss curve with coax using a network analyzer. Certainly that's an appropriate criterion, but the most important is still sharpness of the error curve at various simulated lengths. Experiments have shown that good cable simulators require a 10- to 15-meter change in added 8281 coax to produce the no-errors to more-than-one-error-per-second change. Simulators that require a longer change in coax (a less sharp knee) are still useful for comparison testing but should be avoided in equipment evaluation.

SDI Check Field

The SDI (serial digital interface) Check Field (also known as the "pathological signal") is not a stress test; but because it's a full-field test signal, it must be an out-of-service test. It's a difficult signal for the serial digital system to handle and is a very important test to perform. The characteristic of the SDI Check Field is that it has a maximum amount of low-frequency energy in two separate signals. One signal tests equalizer operation and the other tests phase-locked loop operation. It's a fully valid signal for component digital and was originally developed to test D-1 recorders. In the composite domain, it's not a valid signal (hence it may be considered a stress test); however, it's also a good test to perform. The SDI Check Field is defined in proposed SMPTE Recommended Practice RP 178.

Scrambled NRZI. It's the mathematics of the scrambling process that produces the pathological signal. Although the quantizing levels 3FF and 000 are excluded from the active picture and ancillary data, the ratio of "1"s to "0"s may be quite uneven for some values of a flat color field. This is one of the main reasons for scrambling the signal. Conversion to NRZI eliminates the polarity sensitivity of the signal but has little effect on the ratio of "1"s to "0"s. Scrambling does indeed break up long runs of "1"s or "0"s and produces the desired spectrum with minimum low-frequency content, while providing the maximum zero crossings needed for clock extraction at the receiver. However, there are certain combinations of data input and scrambler state that will, occasionally, cause long runs (20 to 40) of "0"s. These long runs of "0"s create no zero crossings and are the source of low-frequency content in the signal. (All "1"s is not a problem because this produces an NRZI signal at one-half the clock frequency, which is right in the middle of the frequency band occupied by the overall signal.) The scrambler is a nine-cell shift register as shown in Figure 9-1.



For each input bit, there is one output bit sent to the one-cell NRZI encoder. An output bit is determined by the state of the nine cells and the input bit. A typical distribution of runs of "1"s and "0"s is shown in Table 9-1 for 100 frames of component black. (For the purpose of this analysis the "1"s are high and the "0"s are low, as opposed to the transition/no-transition data definition.) The longer runs (16, 19, 22, 32, 33, 34, and 39) occur as single events that the transmission system amplifiers and receivers can handle with relative ease because of their very short duration.

Multiple long-run events can be encouraged with certain input signals, which is the basis for the generation of pathological signals. When the input words alternate between certain digital values, the multi-event pathological signal will occur for one specific scrambler state, generally all "0"s; hence the sequence usually starts at an SAV. The multi-event sequence ends at the next EAV, which breaks the input sequence. Since the scrambler state is more or less a random number, the multi-event sequence of long runs of "0"s happens about once per frame (9 bits, 512 possible states, 525 or 625 lines per frame).

Pathological Signals, the SDI Check Field. There are two forms of the pathological signal, as shown in Figure 9-2. For testing the automatic equalizer, a run of 19 "0"s followed by two "1"s produces a signal with large DC content. Remember, a "0" is no transition and a "1" is represented by a transition in the NRZI domain. In addition, the switching on and off of this high-DC-content signal will stress the linearity of the analog capabilities of the serial transmission system and equipment. Poor analog amplifier linearity results in errors at the point of transition, where peak signal amplitude is the greatest. It's important to produce both polarities of this signal for full system testing. Phase-locked loop testing is accomplished with a signal consisting of 20 NRZI "0"s followed by a single NRZI "1". This provides the minimum number of zero crossings for clock extraction.

The SDI Check Field consists of one-half field of each signal, as shown in Figure 9-3. Equalizer testing is based on the values 300h and 198h, while phase-locked loop testing is based on the values 200h and 110h.

Table 9-1. 100 Frames of Component Black

Run Length   Runs of "1"s   Runs of "0"s
 1           112615438      112610564
 2            56303576       56305219
 3            28155012       28154963
 4            14076558       14077338
 5             7037923        7038595
 6             3518756        3518899
 7             1759428        1760240
 8             1059392         699666
 9              610241         970950
10                1169           1060
11                  14            156
12                  40
13                 156            114
16               13713          73865
19                 101             94
22               73759          13579
32                  55             51
33                  49
34                   3
39                  50

Figure 9-1. Serial scrambler and encoder.

Figure 9-2. Signal forms.

(Figure 9-2 waveforms: the signal for testing automatic cable equalization stays at one level for 19 bits, changes for 1 bit, and repeats; the signal for testing PLL lock-in is a square wave of 20 bits at each level.)



In the C-Y order shown, the top half of the field will be a purple hue and the bottom half a shade of gray. Some test signal generators use the other order, Y-C, giving two different shades of green. Either one works, but the C-Y order is specified in the proposed recommended practice. One word with a single bit of "1" is placed in each frame to ensure both polarities of the equalizer stress signal for the defined SDI Check Field. The single "1" inverts the polarity of the NRZ-to-NRZI function, because the field (for component video) would otherwise have an even number of "1"s and the phase would never change.

Statistics for 100 frames of the SDI Check Field are shown in Table 9-2. It's the runs of 19 and 20 that represent the low-frequency stressing of the system. When they happen (about once per frame) they occur for a full active television line and cause a significant low-frequency disturbance. Since the PLL stress signal is a 20 "1"s, 20 "0"s squarewave, you would expect an equal number of each type of run. The equalizer stress signal is based on runs of 19 "1"s followed by a "0" or 19 "0"s followed by a "1". As described above, an extra "1" in each frame forces the two polarities to happen, and both are represented in the data.
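For readers who want to experiment with run-length statistics like those in Tables 9-1 and 9-2, here is a Python sketch of a scrambled-NRZI coder and run-length counter. The generator polynomials x^9 + x^4 + 1 and x + 1 follow the text; the tap arrangement, bit order, and initial state are simplifying assumptions, so this demonstrates the bookkeeping rather than bit-exact SMPTE 259M behavior:

    import random

    def scramble_nrzi(bits):
        """Self-synchronizing scrambler (x^9 + x^4 + 1) followed by NRZI (x + 1)."""
        sreg = [0] * 9          # scrambler shift register, assumed to start at zero
        level = 0               # current line level out of the NRZI stage
        out = []
        for b in bits:
            s = b ^ sreg[3] ^ sreg[8]      # taps at x^4 and x^9 (assumed ordering)
            sreg = [s] + sreg[:-1]
            level ^= s                     # NRZI: a scrambled "1" toggles the level
            out.append(level)
        return out

    def run_lengths(levels):
        """Histogram of consecutive identical levels, as in Tables 9-1 and 9-2."""
        hist, run = {}, 1
        for prev, cur in zip(levels, levels[1:]):
            if cur == prev:
                run += 1
            else:
                hist[run] = hist.get(run, 0) + 1
                run = 1
        hist[run] = hist.get(run, 0) + 1
        return hist

    random.seed(1)
    data = [random.randint(0, 1) for _ in range(100_000)]
    hist = run_lengths(scramble_nrzi(data))
    for length in sorted(hist):
        print(length, hist[length])
    # For scrambled data the counts roughly halve with each extra bit of run length,
    # which is the shape of Table 9-1; the long 19- and 20-bit runs of the SDI Check
    # Field appear only for the specific word values and scrambler state described above.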

Figure 9-3. SDI check field.

Table 9-2. 100 Frames of SDI Check Field

Run Length   Runs of "1"s   Runs of "0"s
 1           112595598      112595271
 2            56291947       56292583
 3            28146431       28147229
 4            14073813       14072656
 5             7035240        7035841
 6             3518502        3518856
 7             1758583        1758969
 8              879203         879359
 9              706058         706029
10               51596          51598
11                 146            154
12               16858          16817
13               34405          34374
14               17445          17442
16                9558           9588
18                  29             27
19               20944          19503
20               19413          19413
22                9344           9425
32                  49             49
33                  35             32
34                   3              4
39                  13             14



10. Bibliography

Articles and Books

Ainsworth, Kenneth. "Monitoring the Serial Digital Signal," Broadcast Engineering, Nov. 1991. Tektronix literature number 25W-7156.

Ainsworth, Kenneth and Andrews, Gary. "Evaluating Serial Digital Video Systems," IEEE 4th International Montreux Symposium, Montreux, Switzerland, June 1991.

Ainsworth, Kenneth and Andrews, Gary. "Comprehending the Video Digital Serial," Broadcast Engineering, Spanish language edition, Sept. 1992, p. 34-44. Tektronix literature number 25W-7153.

Avis, Richard. Sony Broadcast & Pro-Audio Division, Basingstoke, U.K. "Serial Digital Interface," booklet. Sony Broadcast & Communications, Basingstoke, England, June 1991.

Elkind, Bob, Ainsworth, Kenneth, Fibush, David, Lane, Richard, and Overton, Michael. "Error Measurements and Analysis of Serial CCIR-601 Equipment," International Broadcaster's Convention, Amsterdam, Holland, June 1992.

Elkind, Bob and Fibush, David. "Proposal for Error Detection and Handling in Studio Equipment," 25th SMPTE Television Conference, Detroit, MI, Feb. 1991. SMPTE Journal, Dec. 1991. Tektronix literature number 25W-7128-1.

Eguchi, Takeo. Sony Corporation, Japan. "Pathological Check Codes for Serial Digital Interface Systems," 133rd SMPTE Technical Conference, Los Angeles, CA, Oct. 1991. SMPTE Journal, Aug. 1992, p. 553-558.

Fibush, David and Elkind, Bob. "Test and Measurement of Serial Digital Television Systems," 133rd SMPTE Technical Conference, Los Angeles, CA, Oct. 1991. SMPTE Journal, Sept. 1992, p. 622-631. Tektronix literature number 25W-7154.

Fibush, David. "Understanding the Serial Digital Video Interface," In View, Midwest Publishing, Winter/Spring 1991, p. 16-18.

Fibush, David. "Error Detection in Serial Digital Systems," NAB, Las Vegas, NV, Apr. 1993. SMPTE Journal, Aug. 1993, p. 688-692. Tektronix literature number 25W-7198.

Fibush, David. "Jitter Effects and Measurements in Serial Digital Television Signals," 26th SMPTE Conference, Feb. 1993. SMPTE Journal, Aug. 1993, p. 693-698. Tektronix literature number 25W-7179.

Fibush, David. "Error Measurements in Studio Digital Video Systems," 134th SMPTE Technical Conference, Toronto, Canada, Nov. 1992. SMPTE Journal, Aug. 1993, p. 688-692. Tektronix literature number 25W-7185.

Fibush, David. "Test and Measurement of Serial Digital Signals: Error Detection and Jitter," SBE Convention, San Jose, CA, Oct. 1992.

Grass Valley Group. "Digital Television: The Transition," Grass Valley Group, Inc., Grass Valley, CA, 1992.

Huckfield, Dave. Sony Broadcast & Communications Ltd. "Serial Digital Interfacing," International Broadcast Engineer, Sept. 1990.

Mendelson, Philip. Digital Magic, Santa Monica, CA. "Building a Serial Component Facility," Broadcast Engineering, Mar. 1992, p. 38, 42, 44, 48, 50.

Martin, Bruno. Advanced Audio Visual Systems, Montreuil, France. "A Comprehensive Method of Digital Video Signal Analysis," 135th SMPTE Technical Conference, Los Angeles, California, Nov. 1993.

Reynolds, Keith. Grass Valley Group, Grass Valley, CA. "The Transition Process: Getting from A to D," Broadcast Engineering, Mar. 1992, p. 86-90.

Robin, Michael and Poulin, Michel. CBC, Montreal, Canada. "Performance Evaluation and Acceptance Testing of Bit-Serial Digital Video Equipment and Systems at the CBC," 134th SMPTE Technical Conference, Toronto, Canada, Nov. 1992.

Robin, Michael. CBC, Montreal, Canada. "Bit-serial Digital Signal Jitter Causes, Effects, Remedies and Measurement," 135th SMPTE Technical Conference, Los Angeles, California, Nov. 1993.

Stremler, Ferrel G. "Introduction to Communications Systems," Addison-Wesley Series in Electrical Engineering, December 1982.

Smith, David R. Digital Transmission Systems, Van Nostrand Reinhold Company, New York, 1985.

Stone, David. Pro-Bel Ltd. "Digital Signal Analysis," International Broadcast Engineer, Nov. 1992, p. 28-31.



Tancock, M.B. Sony Broadcast & Communications, Basingstoke, U.K. "The Digital Evolution: The Serial Digital Interface," Sony, Basingstoke, U.K. Booklet released at IBC Convention, Brighton, England, Sept. 1990.

Walker, Marc S. BTS, Salt Lake City, UT. "Distributing Serial Digital Video," Broadcast Engineering, Feb. 1992, p. 46-50.

Ward, Ron. Utah Scientific, Salt Lake City, UT. "Avoiding the Pitfalls in Serial Digital Signal Distribution," SMPTE Journal, Jan. 1993, p. 14-23.

Watkinson, John. The Art of Digital Video, Focal Press, London & Boston.

Watkinson, John and Rumsey, Francis. The Digital Interface Handbook, Focal Press, London & Boston.

Wonsowicz, John. CBC, Toronto, Canada. "Design Considerations for Television Facilities in the CBC Toronto Broadcast Center," 134th SMPTE Technical Conference, Toronto, Canada, Nov. 1992.

Standards

AES 3 (ANSI S4.40), "AES Recommended Practice for Digital Audio Engineering, Serial Transmission Format for Linearly Represented Digital Audio Data"

AES 11 (Draft ANSI S4.44), "AES Recommended Practice for Digital Audio Engineering, Synchronization of Digital Audio Equipment in Studio Operations"

Recommendation ITU-R BT.601, "Encoding Parameters of Digital Television for Studios"

Recommendation ITU-R BT.656, "Interfaces for Digital Component Video Signals in 525-line and 625-line Television Systems Operating at the 4:2:2 Level of Recommendation 601"

Recommendation ITU-R BT.711, "Synchronizing Reference Signals for the Component Digital Studio"

Report ITU-R BT.624, "Characteristics of Television Systems"

Report ITU-R BT.1212, "Measurements and Test Signals for Digitally Encoded Color Television Signals"

EBU Tech 3246, "EBU Parallel Interface for 625-line Digital Video Signals"

EBU N-10, "Parallel Component Video Interface for Non-composite ENG Signals"

SMPTE 125M, "Bit-Parallel Digital Interface, Component Video Signal 4:2:2"

SMPTE 170M, "Composite Analog Video Signal, NTSC for Studio Applications"

SMPTE 244M, "Digital Representation of NTSC Encoded Video Signal"

SMPTE 259M, "Serial Digital Interface for 10-bit 4:2:2 Component and 4ƒsc NTSC Composite Digital Signals"

SMPTE 267M, "Bit-Parallel Digital Interface, Component Video Signal 4:2:2, 16x9 Aspect Ratio"

SMPTE 269M, "Fault Reporting in Television Systems"

SMPTE RP 154, "Reference Signals for the Synchronization of 525-line Video Equipment"

SMPTE RP 168, "Definition of Vertical Interval Switching Point for Synchronous Video Switching"

SMPTE RP 165, "Error Detection Checkwords and Status Flags for Use in Bit-serial Digital Television Interfaces"

SMPTE RP 178, "Serial Digital Interface Check Field for 10-bit 4:2:2 Component and 4ƒsc Composite Digital Signals"

Note: Former CCIR recommendations and reports are now ITU-R documents. The International Telecommunication Union, Radiocommunication Sector has replaced the CCIR.



11. Glossary

125M – See SMPTE 125M.

4:2:2 – A commonly used term for a component digital video format. The details of the format are specified in the CCIR-601 standard document. The numerals 4:2:2 denote the ratio of the sampling frequencies of the single luminance channel to the two color-difference channels. For every four luminance samples, there are two samples of each color-difference channel. See CCIR-601.

4ƒsc – Four times subcarrier sampling rate used in composite digital systems. In NTSC, this is 14.3 MHz. In PAL, this is 17.7 MHz.

AES/EBU – Informal name for a digital audio standard established jointly by the Audio Engineering Society and European Broadcasting Union organizations.

algorithm – A set of rules or processes for solving a problem in a finite number of steps.

aliasing – Defects in the picture typically caused by insufficient sampling or poor filtering of digital video. Defects are typically seen as jaggies on diagonal lines and twinkling or brightening in picture detail.

analog – An adjective describing any signal that varies continuously, as opposed to a digital signal that contains discrete levels representing the binary digits 0 and 1.

asynchronous – A transmission procedure that is not synchronized by a clock.

A-to-D converter (analog-to-digital) – A circuit that uses digital sampling to convert an analog signal into a digital representation of that signal.

bandwidth – 1. The difference between the upper and lower limits of a frequency band, often measured in megahertz (MHz). 2. The complete range of frequencies over which a circuit or electronic system can function with less than a 3 dB signal loss. 3. The information-carrying capability of a particular television channel.

baseline shift – A form of low-frequency distortion resulting in a shift in the DC level of the signal.

bit – A binary representation of 1 or 0. One of the quantized levels of a pixel.

bit parallel – Byte-wise transmission of digital video down a multi-conductor cable where each pair of wires carries a single bit. This standard is covered under SMPTE 125M, EBU 3267-E and CCIR 656.

bit serial – Bit-wise transmission of digital video down a single conductor such as coaxial cable. May also be sent through fiber optics. This standard is covered under CCIR 656.

bit slippage – 1. Occurs when word framing is lost in a serial signal so the relative value of a bit is incorrect. This is generally reset at the next serial signal, TRS-ID for composite and EAV/SAV for component. 2. The erroneous reading of a serial bit stream when the recovered clock phase drifts enough to miss a bit. 3. A phenomenon which occurs in parallel digital data buses when one or more bits gets out of time in relation to the rest. The result is erroneous data. Differing cable lengths is the most common cause.

bit stream – A continuous series of bits transmitted on a line.

blocking – Occurs in a multistage routing system when a destination requests a source and finds that source unavailable. In a tie line system, this means that a destination requests a tie line and receives a "tie line busy" message, indicating that all tie lines are in use.

BNC – Abbreviation of "baby N connector." A cable connector used extensively in television.

byte – A complete set of quantized levels containing all of the bits. Bytes consisting of 8 to 10 bits per sample are typical.

cable equalization – The process of altering the frequency response of a video amplifier to compensate for high-frequency losses in coaxial cable.

CCIR – International Radio Consultative Committee, an international standards committee, now replaced by ITU-R.

CCIR-601 – (See ITU-R BT.601.)

CCIR-656 – The physical parallel and serial interconnect scheme for CCIR-601. CCIR 656 defines the parallel connector pinouts as well as the blanking, sync, and multiplexing schemes used in both parallel and serial interfaces. Reflects definitions in EBU Tech 3267 (for 625-line signals) and in SMPTE 125M (parallel 525) and SMPTE 259M (serial 525).

channel coding – Describes the way in which the 1s and 0s of the data stream are represented on the transmission path.

character generator (CG) – A computer used to generate text and sometimes graphics for video titles or captions.

clock jitter – Timing uncertainty of the data cell edges in a digital signal.

Page 51: A Guide to Digital Television Systems and Measurements · 2017. 8. 7. · (Section 3), system timing issues (Section 4), and sys-tem timing measurements (Section 6). Readers who are

page 45

clock recovery – The reconstruction of timing information from digital data.

coaxial cable – A transmission line with a concentric pair of signal-carrying conductors. There's an inner conductor and an outer conductive metallic sheath. The sheath aids in preventing external radiation from affecting the signal on the inner conductor and minimizes signal radiation from the transmission line.

coding – Representing each level of a video signal as a number, usually in binary form.

coefficients – A number (often a constant) that expresses some property of a physical system in a quantitative way.

component analog – The unencoded output of a camera, videotape recorder, etc., consisting of three primary color signals: green, blue, and red (GBR) that together convey all necessary picture information. In some component video formats, these three components have been translated into a luminance signal and two color-difference signals, for example Y, B-Y, R-Y.

component digital – A digital representation of a component analog signal set, most often Y, B-Y, R-Y. The encoding parameters are specified by CCIR 601. The parallel interface is specified by CCIR 656 and SMPTE 125M (1991).

composite analog – An encoded video signal, such as NTSC or PAL video, that includes horizontal and vertical synchronizing information.

compression artifacts – Compacting of a digital signal, particularly when a high compression ratio is used, may result in small errors in the decompressed signal. These errors are known as "artifacts," or unwanted defects. The artifacts may resemble noise (or edge "busyness") or may cause parts of the picture, particularly fast-moving portions, to be displayed with the movement distorted or missing.

composite digital – A digitally encoded video signal, such as NTSC or PAL video, that includes horizontal and vertical synchronizing information.

contouring – Video picture defect due to quantizing at too coarse a level.

D1 – A component digital video recording format that uses data conforming to the CCIR-601 standard. Records on 19 mm magnetic tape. (Often used incorrectly to indicate component digital video.)

D2 – A composite digital video recording format that uses data conforming to SMPTE 244M. Records on 19 mm magnetic tape. (Often used incorrectly to indicate composite digital video.)

D3 – A composite digital video recording format that uses data conforming to SMPTE 244M. Records on 1/2" magnetic tape.

delay – The time required for a signal to pass through a device or conductor.

demultiplexer (demux) – A device used to separate two or more signals that were previously combined by a compatible multiplexer and transmitted over a single channel.

deserializer – A device that converts serial digital information to parallel.

differential gain – A change in chrominance amplitude of a video signal caused by a change in luminance level of the signal.

differential phase – A change in chrominance phase of a video signal caused by a change in luminance level of the signal.

digital word – The number of bits treated as a single entity by the system.

discrete – Having an individual identity. An individual circuit component.

dither – Typically a random, low-level signal (oscillation) which may be added to an analog signal prior to sampling. Often consists of white noise of one quantizing level peak-to-peak amplitude.

dither component encoding – A slight expansion of the analog signal levels so that the signal comes in contact with more quantizing levels. The results are smoother transitions. This is done by adding white noise (which is at the amplitude of one quantizing level) to the analog signal prior to sampling.

drift – Gradual shift or change in the output over a period of time due to change or aging of circuit components. Change is often caused by thermal instability of components.

D-to-A converter (digital-to-analog) – A device that converts digital signals to analog signals.

DVTR – Abbreviation of digital videotape recorder.

EAV – End of active video in component digital systems.

EBU – European Broadcasting Union. An organization of European broadcasters that, among other activities, produces technical statements and recommendations for the 625/50 line television system.

EBU TECH 3267-E – The EBU recommendation for the parallel interface of 625-line digital video signals. A revision of the earlier EBU Tech 3246-E, which in turn was derived from CCIR-601 and contributed to CCIR-656 standards.



EDH (error detection and handling) – Proposed SMPTE RP 165 for recognizing inaccuracies in the serial digital signal. It may be incorporated into serial digital equipment and employ a simple LED error indicator.

equalization (EQ) – Process of altering the frequency response of a video amplifier to compensate for high-frequency losses in coaxial cable.

embedded audio – Digital audio that is multiplexed onto a serial digital data stream.

encoder – In video, a device that forms a single, composite color signal from a set of component signals.

error concealment – A technique used when error correction fails (see error correction). Erroneous data is replaced by data synthesized from surrounding pixels.

error correction – A scheme that adds overhead to the data to permit a certain level of errors to be detected and corrected.

eye pattern – A waveform used to evaluate channel performance.

field-time (linear) distortion – An unwarranted change in video signal amplitude that occurs in a time frame of 16 ms.

format conversion – The process of both encoding/decoding and resampling of digital rates.

frequency modulation – Modulation of a sine wave or "carrier" by varying its frequency in accordance with amplitude variations of the modulating signal.

frequency response rolloff – A distortion in a transmission system where the higher frequency components are not conveyed at their original full amplitude, with possible loss of color saturation.

gain – Any increase or decrease in strength of an electrical signal. Gain is measured in terms of decibels or number of times of magnification.

group delay – A signal defect caused by different frequencies having differing propagation delays (delay at 1 MHz is different from delay at 5 MHz).

horizontal interval (horizontal blanking interval) – The time period between lines of active video.

interpolation – In digital video, the creation of new pixels in the image by some method of mathematically manipulating the values of neighboring pixels.

I/O – Abbreviation of input/output. Typically refers to sending information or data signals to and from devices.

ITU-R – The International Telecommunication Union, Radiocommunication Sector (replaces the CCIR).

ITU-R BT.601 – An international standard for component digital television from which were derived the SMPTE 125M (was RP 125) and EBU 3246-E standards. It defines the sampling systems, matrix values, and filter characteristics for both Y, B-Y, R-Y and GBR component digital television.

jaggies – Slang for the stair-step aliasing that appears on diagonal lines. Caused by insufficient filtering, violation of the Nyquist Theory, and/or poor interpolation.

jitter – An undesirable random signal variation with respect to time.

MAC – Multiplexed Analog Component video. This is a means of time multiplexing component analog video down a single transmission channel such as coax, fiber, or a satellite channel. Usually involves digital processes to achieve the time compression.

microsecond (µs) – One millionth of a second: 1 x 10^-6 or 0.000001 second.

Miller squared coding – A DC-free channel coding scheme used in D-2 VTRs.

MPEG-2 – Moving Picture Experts Group. An international group of industry experts set up to standardize compressed moving pictures and audio.

multi-layer effects – A generic term for a mix/effects system that allows multiple video images to be combined into a composite image.

multiplexer (mux) – Device for combining two or more electrical signals into a single, composite signal.

nanosecond (ns) – One billionth of a second: 1 x 10^-9 or 0.000000001 second.

NICAM (near instantaneous companded audio multiplex) – A digital audio coding system originally developed by the BBC for point-to-point links. A later development, NICAM 728, is used in several European countries to provide stereo digital audio to home television receivers.

nonlinear encoding – Relatively more levels of quantization are assigned to small amplitude signals, relatively fewer to the large signal peaks.

nonlinearity – Having gain vary as a function of signal amplitude.

NRZ – Non-return to zero. A coding scheme that is polarity sensitive. 0 = logic low; 1 = logic high.

NRZI – Non-return to zero inverse. A video data scrambling scheme that is polarity insensitive. 0 = no change in logic; 1 = a transition from one logic level to the other.



NTSC (National Television Systems Committee) – Organization that formulated standards for the NTSC television system. Now describes the American system of color telecasting, which is used mainly in North America, Japan, and parts of South America.

Nyquist sampling theorem – Intervals between successive samples must be equal to or less than one-half the period of the highest frequency.

orthogonal sampling – Sampling of a line of repetitive video signal in such a way that samples in each line are in the same horizontal position.

PAL (Phase Alternate Line) – The name of the color television system in which the V component of burst is inverted in phase from one line to the next in order to minimize hue errors that may occur in color transmission.

parallel cable – A multi-conductor cable carrying simultaneous transmission of data bits. Analogous to the rows of a marching band passing a review point.

patch panel – A manual method of routing signals using a panel of receptacles for sources and destinations and wire jumpers to interconnect them.

peak to peak – The amplitude (voltage) difference between the most positive and the most negative excursions (peaks) of an electrical signal.

phase distortion – A picture defect caused by unequal delay (phase shifting) of different frequency components within the signal as they pass through different impedance elements – filters, amplifiers, ionospheric variations, etc. The defect in the picture is "fringing," like diffraction rings, at edges where the contrast changes abruptly.

phase error – A picture defect caused by the incorrect relative timing of a signal in relation to another signal.

phase shift – The movement in relative timing of a signal in relation to another signal.

pixel – The smallest distinguishable and resolvable area in a video image. A single point on the screen. In digital video, a single sample of the picture. Derived from the words picture element.

PRBS – Pseudo-random binary sequence.

production switcher (vision mixer) – A device that allows transitions between different video pictures. Also allows keying and matting (compositing).

propagation delay (path length) – The time it takes for a signal to travel through a circuit, piece of equipment, or a length of cable.

quantization – The process of converting a continuous analog input into a set of discrete output levels.

quantizing noise – The noise (deviation of a signal from its original or correct value) which results from the quantization process. In serial digital, a granular type of noise only present in the presence of a signal.

rate conversion – 1. Technically, the process of converting from one sample rate to another. The digital sample rate for the component format is 13.5 MHz; for the composite format it's either 14.3 MHz for NTSC or 17.7 MHz for PAL. 2. Often used incorrectly to indicate both resampling of digital rates and encoding/decoding.

Rec. 601 – See CCIR-601.

reclocking – The process of clocking the data with a regenerated clock.

resolution – The number of bits (four, eight, ten, etc.) determines the resolution of the digital signal. 4 bits = a resolution of 1 in 16; 8 bits = a resolution of 1 in 256; 10 bits = a resolution of 1 in 1024. Eight bits is the minimum acceptable for broadcast TV.

RP 125 – See SMPTE 125M.

routing switcher – An electronic device that routes a user-supplied signal (audio, video, etc.) from any input to any user-selected output(s).

sampling – Process where analog signals are measured, often millions of times per second for video.

sampling frequency – The number of discrete sample measurements made in a given period of time. Often expressed in megahertz for video.

SAV – Start of active video in component digital systems.

scope – Short for oscilloscope (waveform monitor) or vectorscope, devices used to measure the television signal.

scrambling – 1. To transpose or invert digital data according to a prearranged scheme in order to break up the low-frequency patterns associated with serial digital signals. 2. The digital signal is shuffled to produce a better spectral distribution.

serial digital – Digital information that is transmitted in serial form. Often used informally to refer to serial digital television signals.

serializer – A device that converts parallel digital information to serial digital.

SMPTE (Society of Motion Picture and Television Engineers) – A professional organization that recommends standards for the television and film industries.



SMPTE 125M (was RP 125) – The SMPTE recommended practice for the bit-parallel digital interface for component video signals. SMPTE 125M defines the parameters required to generate and distribute component video signals on a parallel interface.

SMPTE 244M – The SMPTE recommended practice for the bit-parallel digital interface for composite video signals. SMPTE 244M defines the parameters required to generate and distribute composite video signals on a parallel interface.

SMPTE 259M – The SMPTE recommended practice for 525-line serial digital component and composite interfaces.

still store – Device for storage of specific frames of video.

synchronous – A transmission procedure by which the bit and character stream are slaved to accurately synchronized clocks, both at the receiving and sending end.

sync word – A synchronizing bit pattern, differentiated from the normal data bit patterns, used to identify reference points in the television signal; also to facilitate word framing in a serial receiver.

telecine – A device for capturing movie film as a video signal.

temporal aliasing – A visual defect that occurs when the image being sampled moves too fast for the sampling rate. A common example is wagon wheels that appear to rotate backwards.

time base corrector – Device used to correct for time base errors and stabilize the timing of the video output from a tape machine.

TDM (time division multiplex) – The management of multiple signals on one channel by alternately sending portions of each signal and assigning each portion to particular blocks of time.

time-multiplex – In the case of CCIR-601, a technique for transmitting three signals at the same time on a group of parallel wires (parallel cable).

TRS – Timing reference signals in composite digital systems (four words long).

TRS-ID (timing reference signal identification) – A reference signal used to maintain timing in composite digital systems. It's four words long.

truncation – Deletion of lower significant bits on a digital system. Usually results in digital noise.

VTR (video tape recorder) – A device which permits audio and video signals to be recorded on magnetic tape.

waveform – The shape of an electromagnetic wave. A graphical representation of the relationship between voltage or current and time.

word – See byte.


9/97 TD/XBS 25W–7203–3

Copyright © 1997, Tektronix, Inc. All rights reserved. Tektronix products are covered by U.S. and foreign patents, issued and pending. Information in this publication supersedes that in all previously published material. Specification and price change privileges reserved. TEKTRONIX and TEK are registered trademarks of Tektronix, Inc. All other trade names referenced are the service marks, trademarks, or registered trademarks of their respective companies.

For further information, contact Tektronix:
World Wide Web: http://www.tek.com; ASEAN Countries (65) 356-3900; Australia & New Zealand 61 (2) 888-7066; Austria, Eastern Europe, & Middle East 43 (1) 701 77-261; Belgium 32 (2) 725-96-10; Brazil and South America 55 (11) 3741 8360; Canada 1 (800) 661-5625; Denmark 45 (44) 850700; Finland 358 (9) 4783 400; France & North Africa 33 (1) 69 86 81 08; Germany 49 (221) 94 77-400; Hong Kong (852) 2585-6688; India 91 (80) 2275577; Italy 39 (2) 250861; Japan (Sony/Tektronix Corporation) 81 (3) 3448-4611; Mexico, Central America, & Caribbean 52 (5) 666-6333; The Netherlands 31 23 56 95555; Norway 47 (22) 070700; People's Republic of China (86) 10-62351230; Republic of Korea 82 (2) 528-5299; Spain & Portugal 34 (1) 372 6000; Sweden 46 (8) 629 6500; Switzerland 41 (41) 7119192; Taiwan 886 (2) 722-9622; United Kingdom & Eire 44 (1628) 403300; USA 1 (800) 426-2200

From other areas, contact: Tektronix, Inc. Export Sales, P.O. Box 500, M/S 50-255, Beaverton, Oregon 97077-0001, USA (503) 627-1916