
3rd August 2006

The CMS Tracker: Front End Readout and Control Systems & DAQ

MPhil to PhD Transfer Report

M. Pesaresi

Imperial College, London

Abstract

The CMS Silicon Strip Tracker front-end readout system utilises 73,000 APV25 ASICs to provide low noise signal amplification, analogue buffering and processing in a harsh radiation environment on-detector, and 440 Front End Drivers (FEDs) to digitise, buffer and process the signals off-detector so that the data rate is reduced. The APV buffers cannot be monitored directly on-detector, so an APV Emulator (APVE) has been designed and built to calculate the exact buffer occupancies of the APV25s for preventing buffer overflows. The APVE is also able to provide information to the FEDs in order to check that synchronisation between the APVE and the APV25s has been maintained. The majority of the author's work concerns the commissioning of the APVE within the final Data Acquisition (DAQ) System. The APVE has been fully tested and integration with the tracker readout and DAQ is almost complete. The author has written an application based on the integrated CMS DAQ framework (XDAQ) for the purpose of monitoring the APVE during the testing and operational phases of the CMS detector.


Contents

1 Introduction
  1.1 The Large Hadron Collider
2 The Compact Muon Solenoid Experiment
  2.1 Magnet
  2.2 Muon System
  2.3 Calorimetry
    2.3.1 ECAL
    2.3.2 HCAL
  2.4 Tracker
    2.4.1 Silicon Strip Tracker
    2.4.2 Pixel Detector
  2.5 Trigger System & DAQ
    2.5.1 Level 1 Trigger (L1T)
    2.5.2 High Level Trigger (HLT) & DAQ
    2.5.3 XDAQ
  2.6 CMS Timing & Control Systems
    2.6.1 Trigger, Timing & Control System (TTC)
    2.6.2 Trigger Throttling System (TTS)
3 The Silicon Strip Tracker Readout System
  3.1 The APV25
  3.2 The Front End Driver (FED)
  3.3 The Front-End Readout Link (FRL) & DAQ
4 The Silicon Strip Tracker Control System
  4.1 Front End Module Control
  4.2 APV Readout Throttling
  4.3 Front End Driver Readout Throttling
  4.4 The APV Emulator (APVE)
    4.4.1 Hardware
    4.4.2 Firmware
    4.4.3 Software
  4.5 Tracker DAQ & APVE Integration
5 Summary


1 Introduction

1.1 The Large Hadron Collider

The Large Hadron Collider (LHC) and its experiments represent one of the largest technological challenges ever undertaken in high energy physics. Their primary motivation is to extend our knowledge of physics beyond the Standard Model and in particular to confirm the existence of the long theorised Higgs boson. The LHC will collide protons at a centre of mass energy of 14 TeV, 70 times the energy of its predecessor LEP and a significant increase in energy over the Tevatron hadron collider currently in operation. Bunches of around 10^11 protons will be collided every 25 ns (40 MHz rate) resulting in a machine luminosity of 10^34 cm^-2 s^-1, two orders of magnitude greater than anything previously achieved. During the pilot and startup runs from 2007-8 however, it is estimated that the machine luminosity will begin at around 10^30 cm^-2 s^-1 and gradually ramp up to full luminosity by increasing the number of bunches, protons per bunch and bunch crossing rate up until 2010[1].

The LHC and in particular its accompanying experiments face a number of technological challenges. The high operating luminosity and beam crossing rate put an enormous strain on the LHC detectors, including CMS and ATLAS. During peak operation, the detectors will observe up to 1 billion inelastic collisions per second or on average 20 interactions per bunch crossing event. With a charged particle multiplicity of ∼1000 per event, pile up can become a significant problem, especially if detector components have time resolutions of greater than 25 ns. High granularity detectors with fast response times are therefore required to minimise channel occupancy and hence pileup. As a consequence however, a larger number of channels will increase the on-detector power consumption due to the extra associated readout electronics.
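As a rough cross-check of these figures (the inelastic cross section of ∼80 mb assumed here is not quoted in this report), the average pile-up follows directly from the luminosity and the crossing rate:

\[
R_{\mathrm{inel}} = \mathcal{L}\,\sigma_{\mathrm{inel}} \approx 10^{34}\,\mathrm{cm^{-2}\,s^{-1}} \times 8\times10^{-26}\,\mathrm{cm^{2}} \approx 8\times10^{8}\,\mathrm{s^{-1}}, \qquad
\langle N_{\mathrm{int}}\rangle \approx \frac{8\times10^{8}\,\mathrm{s^{-1}}}{4\times10^{7}\,\mathrm{crossings\,s^{-1}}} \approx 20\ \text{per crossing.}
\]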

The bunch crossing rate places strict requirements on response and signal times as well as on the speed of associated readout electronics. The high rate also has an important bearing on the design of the detector readout and trigger systems. Since only approximately 100 events/s can be stored for later analysis offline, an online trigger system must be in place to reduce the data rate by selecting the most 'interesting' events through reconstruction of various track and event parameters. As the selection of these events cannot take place within a 25 ns period, the system must then be implemented with a pipelined processing stage to buffer data before readout. As a result, maintaining synchronisation between the detector channels becomes more of an issue.

The detectors must also be capable of surviving the harsh radiation environment the LHC provides. Detector elements and front end readout electronics must be able to withstand the high particle fluences and radiation doses expected in its 10 year lifetime with minimal degradation to signal/noise ratios and response times. Readout electronics must also be immune to single event upsets whereby passing energetic particles (e.g. hadrons) deposit large amounts of ionisation charge near sensitive circuit nodes causing the state of a single bit within a logic cell to flip.

2 The Compact Muon Solenoid Experiment

The Compact Muon Solenoid (CMS) is one of four detectors under construction for the Large Hadron Collider (LHC) at CERN. This general purpose detector consists of a central Pixel Detector, a Silicon Strip Tracker and Electromagnetic and Hadronic Calorimeters all contained within a powerful 4 Tesla superconducting solenoidal magnet. The CMS also houses three types of Muon Detectors within the iron return yoke. The barrel region, coaxial to the beam pipe, is complemented by endcaps at each end to ensure detector hermeticity (Figure 1).

The primary goal of the CMS experiment is to find evidence for the Higgs boson. As a consequence the detector has been designed to be sensitive to the various Higgs decay channels over the mass range 100 GeV/c^2 < M_H < 1 TeV/c^2. The detector has been optimised for the identification of muons, electrons, photons and jets as well as for the accurate measurement of their momenta over a large energy range within a high luminosity environment. In addition, the CMS detector will be used to perform precise measurements of Standard Model physics, and will also search for Supersymmetric particles, including the Supersymmetric Higgs bosons, and for evidence of extra dimensions[1].


Figure 1: Schematic of the CMS Detector[1].

2.1 Magnet

Fundamental to the choice of detector layout is the 13 m long, 4 Tesla solenoidal magnet used for providing the central tracking region with the ability to measure the momenta of charged particles with excellent resolution. The superconducting coil rests inside a vacuum chamber and is cooled to 4.5 K using liquid Helium. The magnet structure is currently fully assembled and at operating temperature ready for the forthcoming Magnet Test & Cosmic Challenge (MTCC - see Section 4.5), when the coil will be taken up to full current (19.5 kA). Surrounding the solenoid lies the iron yoke for the return and containment of the magnetic field. The return field in the barrel region will be large enough (∼2 Tesla) for muon tracking, so the iron yoke is integrated with 4 muon detection layers, both around the barrel and in the endcaps. Efficient muon triggering is an important requirement at hadron colliders, hence the large bending power of the magnet provides the ∆p/p ≈ 10% momentum resolution needed at p = 1 TeV/c without placing excessive demands on muon chamber alignment and resolution[1]. Unambiguous determination of the charge of high energy (p ≈ 1 TeV/c) muons is also possible with a high field configuration, further enhancing muon triggering efficiencies. The bore of the magnet measures 5.9 m and houses the detector inner tracking and calorimetry elements. The large field in this region assists the calorimeters by keeping low transverse momentum charged particles within the Tracker, improving isolation efficiency and energy resolution[2].

2.2 Muon System

Whereas hadronic background is mainly contained within the calorimeters, muons are able to propagate past the magnet with minimal interaction with the detector. Muons therefore provide a strong indication of a signal event over the minimum bias background and are prime candidates for Level 1 (L1) triggering purposes. In addition, dimuon events are important signatures for 'golden' mode Higgs decay channels such as H → ZZ → 4l and H → WW → llνν. As a consequence, the CMS detector employs a high performance, robust muon system for fast identification of muons with good momentum resolution over a large range of momenta.

Figures 2 and 3 describe the performance of the system with respect to momentum resolution in the barrel


Figure 2: Muon momentum resolution as a function of momentum, using either the muon system only, the tracker only or the combined muon-tracker system in the region 0 < |η| < 0.2[1].

Figure 3: Muon momentum resolution as a function of momentum, using either the muon system only, the tracker only or the combined muon-tracker system in the region 1.8 < |η| < 2.0[1].

and the endcaps respectively. The momentum is determined by measuring the muon bending angle after the solenoid, using the interaction point as the muon origin. Low pT tracks are affected by multiple Coulomb scattering effects within the detector so tracking information is used in conjunction with the muon system to improve resolution by up to an order of magnitude (Figures 2, 3; Full system). The use of the tracker is also important for increasing the resolution of high momentum muons, as the sagitta of the muon track can be measured both before and after the solenoid with minimal interference from scattering and energy loss, especially within the barrel region.
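The scaling of the resolution with momentum can be seen from the standard sagitta estimate (a textbook formula; the lever arm of roughly 1 m used in the numerical example is an assumption made here, not a value from this report):

\[
s \approx \frac{0.3\,B\,L^{2}}{8\,p_{T}},
\]

with B in Tesla, L in metres and p_T in GeV/c. For B = 4 T and L ≈ 1.1 m this gives a sagitta of only ∼0.2 mm at p_T = 1 TeV/c, so the relative momentum resolution grows linearly with p_T and alignment and single-point resolution become the limiting factors at high momentum.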

The muon system employs three types of detector. Drift Tube (DT) gas chambers are interleaved with the iron return yoke in the barrel section and provide (r, φ, z) measurements of passing muons with point resolutions of ≈200 µm and a direction in φ to within 1 mrad[1]. DT chambers in the barrel region are complemented by Cathode Strip Chambers (CSCs) in the endcaps where background rates are much larger and the magnetic field is highly inhomogeneous. A CSC consists of 6 planes of radial cathode strips with the gap between planes filled by an inert gas. Each gas gap contains a plane of anode wires orientated perpendicularly to the cathode strips providing a set of up to 6 (r, φ, z) measurements. Spatial resolution is improved by interpolating hits with Gaussian fits so that each chamber attains a point resolution of 100-200 µm and an angular precision to within 10 mrad[1]. Resistive Plate Chambers (RPCs) are present throughout the muon system to achieve a fast position measurement for use in the L1 trigger. RPCs are formed from two closely spaced resistive plates with a high electric field applied across the gas gap. Ionisation charge caused by passing muons rapidly undergoes avalanche charge multiplication. This allows RPCs to operate at flux rates of up to 10 kHz/cm^2[1].

2.3 Calorimetry

The CMS detector calorimetry is provided by a high performance crystal Electromagnetic Calorimeter (ECAL) and a sampling Hadronic Calorimeter (HCAL). Energy measurements benefit from the placement of the calorimeters within the coil since energy loss due to interactions with the magnet system is eliminated. The calorimeters provide full geometric coverage up to |η| < 5 in order to maintain hermeticity for the accurate calculation of missing transverse energy.


2.3.1 ECAL

The ECAL (Figure 4) consists of over 75,000 lead tungstate (PbWO4) crystals present both in the barrel and in the endcaps. The design of the ECAL has been optimised for achieving the excellent energy resolutions required for a Higgs search using the H → γγ decay mode. The background for this channel is particularly large and so demands a mass resolution of ∼1% at M_H ≈ 130 GeV, necessitating a highly granular and well calibrated ECAL. The crystals have a short radiation length (0.89 cm) and small Moliere radius (2.2 cm) so that most of the energy from electromagnetic showers is collected within a few crystals (each measuring 22 x 22 x 230 mm in the barrel region)[1]. Passing photons, pions and electrons deposit energy in the crystals resulting in the production of scintillation light with a typical response of less than 25 ns. This light is collected by two types of photodetector glued to the crystal ends. Silicon avalanche photodiodes (APDs) convert photons with high gains of ∼50 and are used in the barrel region. Vacuum phototriodes (VPTs) provide a more radiation tolerant option in the endcaps even though they operate with lower gains. The crystals themselves are extremely radiation hard, with good uniformity over doses of up to 10 Mrad. Energy resolution is expected to be on the order of σ(E)/E < 0.6% for photons and electrons of 100 GeV depending on the intercalibration of crystal cells.
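The quoted resolution is consistent with the usual three-term parametrisation of a crystal calorimeter; the coefficients below are indicative design values and are an assumption made here rather than numbers taken from this report:

\[
\frac{\sigma(E)}{E} = \frac{a}{\sqrt{E}} \oplus \frac{b}{E} \oplus c
\approx \frac{2.8\%}{\sqrt{E/\mathrm{GeV}}} \oplus \frac{12\%}{E/\mathrm{GeV}} \oplus 0.3\%,
\]

which at E = 100 GeV evaluates to roughly 0.28% ⊕ 0.12% ⊕ 0.3% ≈ 0.4%, in line with the < 0.6% figure above.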

Figure 4: Transverse section of the CMS ECAL. Coverage is provided up to pseudorapidities of 1.48 in the barrel and 3.0 in the endcaps. The endcap Preshower is also indicated. The preshower is responsible for rejecting the large π0 → γγ background that would otherwise compromise the Higgs signal. Only the endcap region is instrumented with a preshower since pion energies are generally much higher, resulting in an angular separation between the two photons of less than the granularity of the ECAL[1].

2.3.2 HCAL

The HCAL is a brass/steel-scintillator sampling calorimeter for the measurement of energy from strongly interacting particles. Its primary requirement is to provide good containment of showers for the measurement of missing energy and to protect the muon system from contamination. In this respect, it has been designed to maximise the amount of absorber material before the magnet coil whilst providing good jet and missing transverse energy resolution. Plastic scintillator tiles are sandwiched between absorber layers with wavelength shifting fibres to collect and channel the generated light to hybrid photodiodes (HPDs)[1]. The barrel HCAL is complemented by an outer hadronic calorimeter situated immediately outside the magnet so that showers can be sampled by almost 11 hadronic interaction lengths before the first muon chambers. HCAL endcaps provide geometrical coverage up to |η| < 3, while two forward steel/quartz fibre hadronic calorimeters fulfil the requirement for hermeticity by sampling showers up to |η| < 5.

2.4 Tracker

The CMS Tracker (Figure 5) comprises a Pixel Detector for track vertexing located near the interaction point and a Silicon Microstrip Tracker (SST) for the calculation of particle momenta within the magnetic field. The tracker design and performance is motivated by its requirements to provide high spatial resolution tracking of charged particles for accurate momentum measurements of signal decay products.


Good momentum resolution is also needed for the rejection of a large minimum bias background. The CMS tracker must also minimise its material budget as multiple scattering and Bremsstrahlung processes reduce the performance of the Tracker, Muon and ECAL systems. Radiation levels are also extremely high in this region with particle fluxes of up to 10^7 cm^-2 s^-1, requiring a tracker that can survive the harsh LHC environment for the duration of the experiment.

Figure 5: Schematic of the CMS Tracker. A quarter section is displayed with the interaction point at the bottom-left corner. The SST is made up of individual modules - indicated here by the straight red and blue lines. Each module consists of either 1 or 2 silicon sensors as well as associated readout electronics. The blue modules are double sided so that a stereo r-φ and r-z measurement can be performed. The red modules are single sided. Scale is in mm, with units of pseudorapidity indicating the angular coverage[1].

2.4.1 Silicon Strip Tracker

Figure 5 demonstrates the layout of the SST[4, 5]. With a total length of 5.4 m and a diameter of 2.4 m, the tracker is divided into four subsections: the Tracker Outer Barrel (TOB), Tracker Inner Barrel (TIB), Tracker Inner Disks (TID) and Tracker Endcaps (TEC). Subsections are comprised of layers of modules with each module containing a set of silicon strip sensors, a mechanical support structure and readout electronics. A summary of each tracker section with a parameter description of the sensors employed is given in Table 1. The sensors themselves are single sided silicon wafers, with p+-type strips of varying pitch on an n-type bulk back layer operated under reverse bias. The voltage is enough to fully deplete the bulk layer so that charged particles passing through the sensors maximise the number of electron-hole pairs generated[7]. The p+-type silicon strips are capacitively coupled to aluminium readout strips above a thin layer of SiO2. Signals from 128 strips are recorded by an APV25[17] chip (see Section 3.1) which buffers and processes data from each channel before L1 readout. Each module employs between 2 and 12 APV25s, whose data is multiplexed and sent off detector via optical links using on-board Analogue Optohybrid (AOH) laser drivers[6].

The TIB comprises 4 layers, 2 of which use 'stereo' or double sided modules for the provision of both r-φ and r-z coordinates. These modules are made from two back to back sensors aligned with a relative angle of 100 mrad so that a point resolution of between 23-34 µm in r-φ and 230 µm in z is possible. The strip pitch within the TIB is 80 µm for the inner 2 layers and 120 µm for the outer layers while sensor thickness is 320 µm throughout. The strip length is ∼12 cm. The fine sensor pitch employed within the TIB allows more strips per sensor and hence reduces the occupancy per strip. In order to keep a similar occupancy in the outer tracker layers where the radiation levels are lower, strip length and pitch are increased. Modules within the TOB use sensors with strip lengths of ∼18 cm and pitches of 183 µm for the inner 4 layers and 122 µm for the outer 2 layers. Due to the increased noise from the resultant higher interstrip capacitance, a sensor thickness of 500 µm is employed to maintain good signal/noise ratio in the TOB. The Outer Barrel also uses two layers of double sided modules allowing point resolutions of 35-52 µm in r-φ and 530 µm in z.


Table 1: Summary of sensor parameters in each of the Silicon Strip Tracker subsections.

Section | No. Layers/Rings    | No. Detectors | Sensor Thickness (µm) | Pitch/Pitch Range (µm)
TIB     | 2 (2 stereo layers) | 1536          | 320                   | 80
        | 2                   | 1188          | 320                   | 120
TOB     | 4 (2 stereo layers) | 3528          | 500                   | 183
        | 2                   | 1680          | 500                   | 122
TID     | 3 (2 stereo rings)  | 816           | 320                   | 81-112, 113-143, 123-158
TEC     | 4 (3 stereo rings)  | 2512          | 320                   | 81-112, 113-143, 123-158, 113-159
        | 3 (1 stereo ring)   | 3888          | 500                   | 126-156, 163-205, 140-172

To avoid resolution deterioration caused by tracks crossing sensors at shallow angles, Tracker End Caps consisting of 9 disks each are used between |z| = 120 cm and |z| = 280 cm. At smaller radii, an Inner Detector comprised of 3 small disks is used and lies in the space between the TIB and the TEC. Sensors in the TID and the TEC are trapezoidal and have strips that point towards the beam axis. Pitches in the tracker end caps vary between 81-205 µm while sensor thicknesses increase from 320 µm before the fourth TEC layer to 500 µm at higher z. Both the TID and the TEC utilise double sided modules. The final system will comprise almost 10 million silicon strips covering an area of over 200 m^2, making it the largest silicon tracker ever constructed.

Figure 6: Transverse momentum resolution of single muons (at pT = 1, 10, 100 GeV/c) in the SST as a function of pseudorapidity[1].

Figure 7: Track reconstruction efficiency for muons (at pT = 1, 10, 100 GeV/c) as a function of pseudorapidity[1].

Due to its proximity to the interaction point, the CMS Tracker will suffer the effects of an extremely hostile radiation environment (Table 2). The silicon sensors and readout electronics within the Tracker must be radiation hard enough to survive the 10 year operational lifetime of the experiment with negligible degradation of performance. The sensors use ⟨100⟩ oriented silicon crystals to minimise the effects of surface damage caused by ionising radiation. However, bulk damage due to the significant hadron fluence through the sensors results in a reduced signal/noise ratio and an increase in leakage current[3]. This will in turn cause a significant increase in power dissipation throughout the tracker and may lead to a runaway leakage current effect. To reduce the effect of radiation induced power dissipation and to extract the ∼60 kW generated by the front end readout electronics and cables, the Tracker will be housed inside a temperature controlled outer support tube with the sensors operating at around -10 °C. Bulk damage will also cause the depletion voltage of the silicon sensors to change due to radiation induced doping concentration changes.


At some point, the bulk type will 'invert' to p-type and a rapidly increasing voltage will be required to fully deplete the sensor. With this in consideration, the sensors have been designed to withstand a high biasing voltage (>450 V) without breaking down.

A highly granular and precise tracker is required to minimise pileup and occupancy for efficient track reconstruction. With ∼10^7 readout channels, the SST provides the necessary granularity and keeps the strip occupancy between 1% and 3% throughout the tracker. The tracker must be capable of reconstructing charged particle tracks with good efficiencies over a wide range of momenta and pseudorapidity. A single hit finding efficiency of close to 100% is possible, even after irradiation, as the tracker has been designed to keep the signal/noise ratio above 10[1]. Muon tracks should be reconstructed with a 98% efficiency and a pT resolution of < 3% (pT < 100 GeV/c) for |η| < 2 (see Figures 6 and 7). At high pT, muon momentum resolution can be improved by including information from the Muon system. Figure 8 demonstrates that the track reconstruction efficiency of pions within the tracker is slightly worse than that of muons, especially at |η| ≈ 1.5. This is due to the larger pion interaction cross section and increased amount of tracker material between |η| = 1 and |η| = 2 (see Figure 9). It is important that the amount of material within the tracker is minimised to reduce the number of secondary interactions before the calorimeters. To maintain sensitivity to the H → γγ decay channel, no more than 50% of Higgs photons should be allowed to convert in the tracker.

Figure 8: Track reconstruction efficiency for pions (at pT = 1, 10, 100 GeV/c) as a function of pseudorapidity[1]. The lower efficiencies at higher η are a combination of increased tracker material and fewer reconstruction points.

Figure 9: Tracker material budget in interaction lengths as a function of pseudorapidity[1]. The majority of the budget is taken up by cabling, cooling equipment, electronics and support structures.

2.4.2 Pixel Detector

The pixel detector (Figure 10) is situated within the Silicon Strip Tracker and instruments the region closest to the interaction point. The pixel detector comprises three barrel layers and two endcap layers using pixellated silicon to cover an active area of 1 m^2. Each pixel measures 100 × 150 µm and uses n+-type implants on a 250 µm layer of n-type silicon substrate in contrast to the p+-on-n sensors employed by the SST[7]. As a result, the pixel detector has a higher sensitivity than the Tracker to the Lorentz drift of charge carriers due to the presence of the magnetic field. This is a consequence of the fact that electrons provide most of the induced signal current instead of holes, hence their higher mobility induces charge sharing across neighbouring pixels.


Table 2: Radiation & Fluence Levels within the CMS detector (barrel) at different radial lengths from the interaction point after 10 years of operation (500 fb^-1 integrated luminosity)[1].

Radius (cm) | Fluence of Fast Hadrons (10^14 cm^-2) | Dose (Mrad) | Charged Particle Flux (cm^-2 s^-1)
4           | 32                                    | 84          | 10^8
11          | 4.6                                   | 19          | -
22          | 1.6                                   | 7           | 6 x 10^6
75          | 0.3                                   | 0.7         | -
115         | 0.2                                   | 0.18        | 3 x 10^5

The pixel detector uses this to its advantage since the hit position within active pixel clusters can be interpolated. This can improve the hit resolution to better than the dimensions of the pixel. In the endcaps, where the magnetic field is perpendicular to the pixel plane, the endcap blades are rotated by 20° to benefit from the Lorentz drift. In this fashion, the pixel detector will be able to provide single point hits with spatial resolutions of ∼10 µm in r-φ and ∼15-20 µm in z[1]. The pixel sensor layer is bump-bonded to ∼16000 analogue readout chips capable of amplifying and buffering signals before L1 readout. The final detector will consist of about 66 million readout channels.
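As an illustration of how charge sharing improves the hit position beyond the pixel pitch, the sketch below computes a charge-weighted centroid across a one-dimensional pixel cluster. It is a toy example; the cluster structure, names and numbers are assumptions made here and do not come from the CMS reconstruction software.

    #include <vector>
    #include <cstdio>

    // Toy 1-D pixel cluster: charge collected on consecutive pixels of a given pitch.
    struct PixelHit { int index; double charge; };   // charge in arbitrary units

    // Charge-weighted centroid: returns the interpolated hit position in microns.
    double clusterCentroid(const std::vector<PixelHit>& cluster, double pitchUm) {
        double sumQ = 0.0, sumQX = 0.0;
        for (const auto& hit : cluster) {
            sumQ  += hit.charge;
            sumQX += hit.charge * (hit.index + 0.5) * pitchUm;  // pixel centre
        }
        return (sumQ > 0.0) ? sumQX / sumQ : 0.0;
    }

    int main() {
        // A track crossing near the boundary of pixels 3 and 4 (150 um pitch):
        std::vector<PixelHit> cluster = { {3, 12000.0}, {4, 18000.0} };
        std::printf("interpolated hit position: %.1f um\n", clusterCentroid(cluster, 150.0));
        return 0;
    }

With the charges above the centroid lands at 615 µm, i.e. between the two pixel centres and weighted towards the pixel that collected more charge.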

The pixel detector will provide extremely high resolution 3-dimensional measurements of charged particle tracks near the interaction point. This allows a precise calculation of track impact parameters and secondary vertex positions for efficient tagging of τ's and b-jets. However, due to its proximity to the primary vertex where particle fluxes will be at their highest, radiation damage mainly to the silicon sensors and readout chips will be substantial (Table 2). Although the pixel detector has been designed to radiation hard criteria, maintaining acceptable signal/noise ratios, response times and positional resolutions will require the replacement of the system at least once during the experiment lifetime.

Figure 10: Layout of the CMS Pixel Detector[1].

2.5 Trigger System & DAQ

At design luminosity the LHC will be able to provide a tremendous amount of data at an interaction rate of 1 GHz, requiring a fast, highly performant online data acquisition system (DAQ). Since data from approximately 100 collisions may be stored to mass media every second, a trigger system with a rejection factor of 10^6 is required. CMS achieves this using two physical trigger levels to select events deemed physically significant out of the billion background events that occur every second. Fortunately such 'interesting' events are rare, which allows an efficient trigger system that retains as many signal events as possible whilst rejecting a large background.


2.5.1 Level 1 Trigger (L1T)

The Level 1 Trigger provides a quick online decision using primitive reconstructed objects as to whether an event has provided physically interesting decay products. A Level 1 Accept (L1A) is generated if trigger primitives such as photons, electrons, muons and jets with transverse energies or momenta above preset values are found within regional trigger systems (Figure 11). The regional systems use information from the ECAL, HCAL and Muon Chambers. Global triggers correlate the trigger primitives between the regional systems and also take global sums of transverse and missing transverse energy to achieve a rejection rate of 1000[9].

Figure 11: Control flow diagram of the L1 Trigger System[9].

Since a L1 decision is required in less than 1 µs, this places a phenomenal challenge on the readout and trigger electronics[1]. Low resolution and easily accessible data is read out through custom hardware processors and high speed links to achieve the high rejection rates required within a limited time period. The total L1 trigger latency period on detector amounts to 3.2 µs, allowing for the transit time for signals to reach the trigger logic in the services cavern 50-100 m away and a decision to return back to the detector. During this time, the high resolution data on detector must remain buffered in pipelined memories until the latency time has passed or an L1A is returned, upon which the data is read out. The expected L1 event rate will be on average 100 kHz.

2.5.2 High Level Trigger (HLT) & DAQ

After L1 readout, data from each subdetector is processed by front end electronics and buffered for access by the DAQ (Figure 12). Event fragments are buffered in up to 512 Reader Units (RUs) before a network switch, or Event Builder, capable of operating at transfer rates of up to 800 Gb/s, collates the event fragments. The fragments are transferred to the Filter Systems, of which there are around 500, for event reconstruction through the HLT[10]. Each event, totalling around 1.5 MB, is constructed from data fragments on its own Builder Unit (BU) for processing using dedicated Filter Units (FUs). A single FU may contain multiple processors for running High Level Trigger algorithms in order to reduce the event rate from 100 kHz to the 100 Hz required for storage. All nodes on the Filter Unit farm use the same CMS software to reconstruct the event within the detector framework and check if the event passes selection. Although time is less of a constraint in the HLT than in the L1T, events are rejected as soon as possible using only partial reconstruction[1]. Computing Services collect the data output from the Filter System, including events selected for analysis and calibration, for storage to mass media. Data for online monitoring is also obtained from the Filter System and may be processed by the computing services. The Event Manager is responsible for controlling the flow of events through the DAQ and interfaces with the Reader Units, Builder Units and the Global Trigger Processor. The Run Control & Monitor System (RCMS) interacts with the full DAQ and Detector Control System (DCS) to control and monitor the experiment during data taking and to provide easy configuration and running of the DAQ systems with an Internet-accessible user interface[14].



Figure 12: Overview of the CMS Trigger and Data Acquisition System[24]. Indicated are the Reader Units (RUs), Builder Units (BUs) and Filter Units (FUs) as well as the Detector Control Network (DCN) and DAQ Service Network (DSN) used by the Run Control/Monitor System (RCMS) for managing the DAQ. On the left, the 'side' view illustrates the principle of DAQ staging where 'slices' of the 'front view' DAQ will be gradually added until the full system is realised. This is explained further in Section 3.3.

2.5.3 XDAQ

XDAQ is a generic software package aimed at improving system-wide integration throughout the CMS DAQ and readout systems[15, 16]. Open protocols and standard libraries are used to build a common and simple framework for data acquisition systems operating in a distributed processing network. XDAQ provides a platform independent runtime environment for control of the central DAQ, local subdetector DAQ and readout systems as well as for the trigger and control systems. In addition, the XDAQ suite will aid the configuration, monitoring and calibration of the detector during the commissioning, debugging and operational phases of CMS.

XDAQ is managed by an executive which provides an XDAQ application written in C++ with all the tools needed for control and communication within a distributed processing environment. A copy of the executive may run on multiple processing nodes over a network and communicate using Simple Object Access Protocol (SOAP) over HTTP, ensuring platform independence between applications. An application is loaded into the executive at runtime and is configurable at any point by sending it a user-defined XML file containing an application specific list of parameters over SOAP. Multiple instances of the same application may be configured to operate on a single node. In this way, it becomes easy to accommodate and configure clusters of equivalent hardware with minimal input from the end user. XDAQ also provides a number of generic functions and components which may be useful in DAQ applications. Many data structures are available and can be passed between applications using SOAP. Communication through Intelligent Input/Output (I2O) binary messaging is also possible, allowing simple peer to peer messages to be sent over the network. Tools are provided to implement Finite State Machines (FSMs), applications whose behaviour is defined by a finite set of states and the transition functions between them, within the DAQ system so that a central controller can manage all DAQ processes using predefined transitions (e.g. "Configure", "Enable", "Halt" etc). The RCMS will issue these commands over SOAP for the quick startup and control of the detector and DAQ. XDAQ also offers a web application interface (HyperDAQ) for easy monitoring or configuration of DAQ systems over the Internet. The XDAQ tools enable the application to dynamically change the web page interface in response to user inputs or state changes.
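The finite state machine idea can be sketched in a few lines of C++. This is an illustration of the concept only: the state and command names follow the text, but the class and its interface are assumptions made here and are not the real XDAQ API.

    #include <map>
    #include <stdexcept>
    #include <string>
    #include <iostream>

    // Minimal FSM in the spirit of the DAQ applications described above.
    class DaqStateMachine {
    public:
        DaqStateMachine() : state_("Halted") {
            // transitions_[current state][command] = next state
            transitions_["Halted"]["Configure"]  = "Configured";
            transitions_["Configured"]["Enable"] = "Enabled";
            transitions_["Configured"]["Halt"]   = "Halted";
            transitions_["Enabled"]["Halt"]      = "Halted";
        }

        // Apply a command (e.g. received as a SOAP message from run control).
        void fireEvent(const std::string& command) {
            auto allowed = transitions_.find(state_);
            if (allowed == transitions_.end() ||
                allowed->second.find(command) == allowed->second.end()) {
                throw std::runtime_error("illegal transition '" + command +
                                         "' from state '" + state_ + "'");
            }
            state_ = allowed->second.at(command);
        }

        const std::string& state() const { return state_; }

    private:
        std::string state_;
        std::map<std::string, std::map<std::string, std::string>> transitions_;
    };

    int main() {
        DaqStateMachine fsm;
        for (const std::string cmd : {"Configure", "Enable", "Halt"}) {
            fsm.fireEvent(cmd);
            std::cout << cmd << " -> " << fsm.state() << "\n";
        }
        return 0;
    }

Illegal commands (for example "Enable" while Halted) throw, which is how a central controller can detect that an application has fallen out of step with the run.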

A set of generic applications and libraries is also included in the XDAQ suite.


These include applications that provide DAQ graphical displays, data streaming from DAQ components, power supply and detector monitoring, alarming and logging, trigger system management packages and an Event Builder that operates as described in Section 2.5.2. The Event Builder framework interfaces with the Readout Unit, Builder Unit and Event Manager applications using XDAQ tools such as I2O for transferring event data and SOAP for passing finite state transition messages under the supervision of RCMS.

2.6 CMS Timing & Control Systems

The Trigger Control System (TCS) provides the LHC clock, controlled L1As and maintains synchronisation between the detector readout systems and DAQ. The TCS must primarily control the rate of L1 Accepts generated by the Global Trigger before distribution to the subsystems over the Trigger, Timing & Control (TTC) network by monitoring the status of the front end systems. Front end status is derived from the buffer occupancies and internal states of detector readout electronics. This information is transmitted back through the Trigger Throttling System (TTS), upon which the TCS may temporarily lower the trigger rate or inhibit L1As completely. The percentage of L1As lost due to throttling, or deadtime, is calculated by the TCS. Front end buffers are required to be large enough so that the DAQ inefficiency due to deadtime is less than 1%. Trigger rules are implemented by the TCS to reduce the likelihood of buffer overflows in the readout systems, although these must also yield a deadtime of less than the nominal value[11]. The first rule is that consecutive L1As must be separated by at least 3 bunch crossings as required by the tracker (see Section 3.1). The TCS must also broadcast fast reset commands (B-Go commands) to the front end systems in order to maintain synchronisation throughout the entire detector. The most important B-Go commands employed by CMS are listed in Table 3.
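The first trigger rule amounts to a simple filter on the stream of candidate L1As, indexed by bunch crossing number. The sketch below is illustrative only (the real TCS enforces this in hardware, and the function name and structure are assumptions made here):

    #include <cstdint>
    #include <vector>
    #include <iostream>

    // Enforce the trigger rule: consecutive L1As must be separated by at least
    // 'minSpacing' bunch crossings. Returns only the accepted triggers.
    std::vector<uint64_t> applyTriggerRule(const std::vector<uint64_t>& candidateBx,
                                           uint64_t minSpacing = 3) {
        std::vector<uint64_t> accepted;
        for (uint64_t bx : candidateBx) {
            if (accepted.empty() || bx - accepted.back() >= minSpacing)
                accepted.push_back(bx);        // spacing satisfied: issue the L1A
            // otherwise the candidate is suppressed (and counted as deadtime)
        }
        return accepted;
    }

    int main() {
        // Candidate triggers at these bunch crossings; 101 and 202 violate the rule.
        std::vector<uint64_t> candidates = {100, 101, 104, 200, 202, 203};
        for (uint64_t bx : applyTriggerRule(candidates))
            std::cout << "L1A at BX " << bx << "\n";
        return 0;
    }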

Table 3: Sample of B-Go Commands generated by the TCS including counter definitions. Modified from [11].

B-Go | Command                   | Description
1    | Bunch Crossing Zero (BC0) | Bunch Crossing Counter corresponds to number of LHC clock cycles since last BC0. Is synchronous with the LHC orbit signal
5    | ReSync                    | Clears buffers etc. to resynchronise subsystems after buffer overflow
6    | HardReset                 | Resets readout electronics
7    | Event Counter Reset (ECR) | Event Counter corresponds to number of L1As received by front end system since last ECR. Reset sent after 2^24 transmitted L1As
8    | Orbit Counter Reset (OCR) | Orbit Counter corresponds to number of LHC orbits (BC0s) since last OCR. Reset sent at the start of a new run
9    | Start                     | Start taking data with next orbit (BC0)
10   | Stop                      | Stop taking data with next orbit (BC0)

The TCS organises CMS subdetector components into partitions, of which there are 30 (maximum 32). Each partition corresponds to a major element in the detector; for example there are 4 Tracker partitions comprising the Inner Barrel, the Outer Barrel and two Endcaps. The TCS interacts with each partition using a dedicated TTC network and receives feedback from the TTS belonging to that partition (Figure 13). Control of the TCS is carried out by the Central or Global Trigger Controller (GTC)[12]. The GTC controls 30 subdetector partitions via individual TTC/TTS interfaces. The GTC must also receive L1As from the Global Trigger processor and obtain the LHC clock and orbit signals from the LHC control room via the TTC Machine Interface (TTCmi) card. In addition, the GTC is required to send L1A information to the DAQ Event Manager and receive throttling and synchronisation commands from the Run Control Monitor System in the case of a DAQ error. Individual partitions can be controlled by a Local Trigger Controller (LTC) which provides the same functionality as the GTC except it can obtain triggers from the relevant local subdetector. In this way, partitions can be operated separately if the central TCS is down for maintenance.

2.6.1 Trigger, Timing & Control System (TTC)

The TTC system provides the CMS detector and readout electronics with the clock and control signals required for synchronous operation. A TTC distribution tree exists for each detector subpartition consisting of a TTC CMS Interface (TTCci) module, a TTC Encoder/Transmitter (TTCex) and possibly a TTC Optical Coupler/Splitter (TTCoc)[11, 12].


Figure 13: An overview of the Trigger Control System including the Trigger, Timing & Control (TTC) network and Trigger Throttling System (sTTS)[11].

Information from the Local or Central TCS and the LHC clock and orbit signals from the TTCmi crate are passed on to the TTCci for distribution over the TTC network using two channels. Channel A is dedicated to the transmission of fast signals: L1 trigger decisions and the LHC clock. Channel B broadcasts B-Go commands from the TCS including Event, Bunch and Orbit Counter Resets as well as L1 Reset (Resync) and Hard Reset signals. The B channel is also used to transmit individually addressed synchronous and asynchronous framed commands and data to detector components. Both A and B channels are BiPhase Mark encoded with the clock and Time Division Multiplexed together for distribution to the detector components over a single fibre. This is performed by the TTCex which then outputs the encoded signal using a laser transmitter with sufficient power to drive the optical splitters. Each channel has an available bandwidth of 40 Mb/s.
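To make the encoding concrete, the sketch below models the time-division multiplexing of the A and B bits followed by biphase mark coding, in which every bit cell begins with a level transition and a '1' carries an additional mid-cell transition. This is a conceptual software illustration of the coding scheme only, not the TTCex hardware implementation; the interleaving order and function names are assumptions made here.

    #include <cstddef>
    #include <vector>
    #include <iostream>

    // TDM the A and B channel bits (A then B per clock period), then biphase-mark
    // encode: a transition at every bit boundary, plus an extra mid-bit transition
    // when the bit is 1. Each input bit becomes two half-bit line levels.
    std::vector<int> ttcEncode(const std::vector<int>& chanA, const std::vector<int>& chanB) {
        std::vector<int> halfBits;
        int level = 0;                                   // current line level
        auto encodeBit = [&](int bit) {
            level ^= 1;                                  // transition at bit boundary
            halfBits.push_back(level);                   // first half of the bit cell
            if (bit) level ^= 1;                         // mid-bit transition for a '1'
            halfBits.push_back(level);                   // second half of the bit cell
        };
        for (std::size_t i = 0; i < chanA.size() && i < chanB.size(); ++i) {
            encodeBit(chanA[i]);                         // A bit of this clock period
            encodeBit(chanB[i]);                         // B bit of this clock period
        }
        return halfBits;
    }

    int main() {
        std::vector<int> a = {1, 0, 0, 1};               // e.g. an L1A pattern on channel A
        std::vector<int> b = {0, 0, 1, 0};               // e.g. part of a B-Go frame
        for (int halfBit : ttcEncode(a, b)) std::cout << halfBit;
        std::cout << "\n";
        return 0;
    }

Because the encoded stream has a transition every bit boundary regardless of the data, a receiver such as the TTCrx can recover the 40 MHz clock from the transition spacing alone, as described in the next paragraph.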

TTC signals are received and decoded by TTC Receiver (TTCrx) chips which are integrated on the CMS front end electronics. Even though the clock is not explicitly encoded in the signal, it can be recovered from the time difference between the regular transitions on the TTC input. The resulting clock can be output raw or deskewed with fine tuning in steps of 104 ps. The A channel is identified by the constraints set on L1 Accepts, i.e. a minimum trigger spacing of 3 bunch crossings. The TTCrx also contains a number of registers which are controlled by TTC B channel signals. These registers can be accessed by front end electronics via an Inter Integrated Circuit (I2C) standard interface[18].

2.6.2 Trigger Throttling System (TTS)

Since data rates from the detector are variable, a number of buffers are used throughout the front end readout systems to de-randomise the flow of data. This however introduces the risk of buffer overflows leading to loss of synchronisation and data through the whole DAQ chain[13]. The CMS detector requires that all readout buffers are monitored through the Trigger Throttling System (TTS) to ensure that buffer overflows are avoided. The TTS provides the Trigger Control System (TCS) with the front end status from which the Level 1 Accept (L1A) rate can be controlled. Buffer overflows can occur in the front end system due to the variation in both the data throughput and the L1 trigger rate. Although the L1 trigger rate is on average 100 kHz, trigger occurrences in fact follow a Poisson distribution. There is therefore a finite probability that an excess of L1 triggers is sent within a 10 µs period.
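As an illustration of the size of these fluctuations, take the nominal 100 kHz average rate over a 10 µs window (the choice of five triggers as the "excess" is arbitrary and made here only for the example):

\[
\lambda = 100\,\mathrm{kHz}\times 10\,\mu\mathrm{s} = 1, \qquad
P(N\ge 5) = 1 - e^{-\lambda}\sum_{k=0}^{4}\frac{\lambda^{k}}{k!} \approx 3.7\times10^{-3},
\]

so although five or more triggers in any given 10 µs window is unlikely, with ∼10^5 such windows every second the situation arises many times per second, and the front end buffers must be deep enough, or the trigger throttled, to absorb it.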

The TTS feedback signals are listed in Table 4. In the event that readout buffers are close to overflowing, local monitoring will generate a Warning signal over the TTS line. The TCS should respond by lowering the L1A rate, however further L1As may fill the buffers past a certain threshold at which point a Busy signal should be sent back. This forces the TCS to inhibit L1As so that the buffer occupancy can fall below an acceptable level and a Ready signal can be asserted. Since the TTS-TCS-TTC feedback loop can be up to 10 bunch crossings, L1As may be sent out after a Busy signal has been transmitted. In this case, corresponding data payloads should be dropped and empty events (Event ID header with buffer overflow flag) should be forwarded to the DAQ until the buffers recover.


Table 4: TTS status signals & descriptions. Modified from [11].

Ready        | Ready to receive triggers. Applied continuously if connected and working
Warning      | Buffers close to overflow
Busy         | Temporarily busy/buffers full and cannot receive triggers
Out of Sync  | Event fragments do not correspond to the same front-end pipeline position or have different Event IDs, and/or synchronous trigger data is out of sync. Requires a resync
Error        | An error external to TTS. System requires a reset
Disconnected | Setup not done, cables removed, or power-off

If the readout electronics generate an Out-Of-Sync (OOS) error, the TCS must wait for the partitions to empty their buffers before sending a Resync command over the TTCci B-channel. Once all the partitions report back Ready, an Event Counter Reset is broadcast and a BC0 is sent at the beginning of the next orbit so that L1As may resume[11]. If the TCS ever receives an Error signal, then it is required to broadcast a Hard Reset to the detector electronics. The reset is implemented in the individual electronics.

The Fast Merging Module (FMM)[26] is a generic board used within the TTS to merge the status of detector Front End Drivers and Trigger Boards for input into the TCS. A total of ∼46 FMMs will be used, up to 32 of which will produce a single TTS state per partition. Each FMM is capable of receiving up to 20 status inputs[27]. The FMM may either merge signals using a logical OR so that the most significant error is output, or it may perform an arithmetic sum up to a threshold to calculate a majority decision. The output status can be determined within 4 bunch crossings. The majority decision procedure is important, since otherwise an Out-Of-Sync status from a single board would cause the TCS to generate a Resync to the entire detector, resulting in a significant deadtime.
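The two merging strategies can be sketched as follows. The severity ordering, threshold value and function names are assumptions made here for illustration; they are not the actual FMM firmware.

    #include <vector>
    #include <algorithm>
    #include <iostream>

    // Illustrative TTS state ordering from least to most severe (assumed here).
    enum class TtsState { Ready, Warning, OutOfSync, Busy, Error };

    // "Logical OR" style merge: report the most severe state seen on any input.
    TtsState mergeMostSevere(const std::vector<TtsState>& inputs) {
        TtsState worst = TtsState::Ready;
        for (TtsState s : inputs) worst = std::max(worst, s);
        return worst;
    }

    // Majority-style merge: only escalate to OutOfSync if at least 'threshold'
    // inputs report it, so a single faulty board cannot force a detector-wide Resync.
    TtsState mergeMajority(const std::vector<TtsState>& inputs, long threshold) {
        long nOutOfSync = std::count(inputs.begin(), inputs.end(), TtsState::OutOfSync);
        if (nOutOfSync >= threshold) return TtsState::OutOfSync;
        TtsState worst = TtsState::Ready;              // ignore isolated OutOfSync inputs
        for (TtsState s : inputs)
            if (s != TtsState::OutOfSync) worst = std::max(worst, s);
        return worst;
    }

    int main() {
        std::vector<TtsState> inputs(20, TtsState::Ready);
        inputs[7] = TtsState::OutOfSync;               // a single board out of sync
        std::cout << "most-severe merge escalates: "
                  << (mergeMostSevere(inputs) == TtsState::OutOfSync) << "\n"
                  << "majority merge stays Ready:  "
                  << (mergeMajority(inputs, 10) == TtsState::Ready) << "\n";
        return 0;
    }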

3 The Silicon Strip Tracker Readout System

The CMS Tracker utilises an on-detector analogue readout design. One benefit of this approach is the improved position resolution obtained by interpolating track hits between adjacent channels. An analogue readout system is also advantageous for CMS as on-detector digitisation would require power and therefore extra cooling, increasing the material budget of the tracker. Channel noise can also be improved upon since pedestals may be subtracted just after digitisation off detector, reducing noise contributions from external sources. However, the readout system must be able to cope with the extremely high data rates from the front end, which requires extra cabling.

The SST readout system (Figure 14) is based on the APV25 chip which is capable of recording and buffering analogue data from the silicon sensors before L1 triggering. Each Front End Module (FEM) comprises a number of APV25 ASICs and APVMUX chips which time domain multiplex data from 2 APV25s onto a single differential line. Data are then converted to optical signals using on-board Linear Laser Drivers (LLDs) for transmission to the Front End Drivers (FEDs). The optical signals are driven at 40 MHz over ∼60-100 m of optical fibre providing the high data rate required whilst minimising the contribution to the material budget. Each Front End Module also houses a Detector Control Unit (DCU) for module monitoring and a Phase Locked Loop (PLL) chip for maintaining synchronisation of clock and control signals to the FEM[6].

3.1 The APV25

Signals from the tracker silicon strips need to be amplified, processed and buffered before L1 readout. This is achieved with the APV25 ASIC which utilises radiation hard 0.25 µm CMOS technology to obtain both the low noise and low power the CMS detector requires[8]. Radiation hardness is achieved through the small feature size and the thin layer of transistor gate oxide used. This is because electron tunnelling through the oxide counters the effects of surface damage due to trapped holes near interface layers. The APV25 also employs enclosed gate transistors to reduce the increase in leakage current during irradiation. Each APV chip may be exposed to more than the 10 Mrad of radiation expected within the lifetime of the SST without degradation in performance. The expected gain of each APV25 is around 100 mV per minimum ionising particle.


Figure 14: The CMS Tracker Front End control and readout architecture[19].

An APV25 is capable of reading 128 detector channels and sampling them at 25 ns intervals. Each channel consists of a low-noise charge sensitive preamplifier followed by a pulse shaping CR-RC filter with a peaking time of 50 ns[17]. Channels are then sampled onto a 192 cell analogue pipeline memory at 40 MHz before awaiting a L1 trigger decision. The pipeline allows up to a 4 µs L1 latency corresponding to 160 memory locations. This leaves a maximum of 32 cells for use as a FIFO to flag data triggered for readout. The APV can also be operated in different modes depending on the luminosity experienced by the detector. Peak mode is used at lower luminosities where the APV selects a single sample from each channel at the peak of the 50 ns CR-RC pulse. In deconvolution mode the APV takes a weighted sum of 3 consecutive pipeline samples using an analogue pulse shape processor (APSP) before readout. This is in order to resolve bunch crossings when occupancies are high, as the preamplifier has a slow response time. In multi mode, three consecutive pipeline samples are read out without the APSP taking a weighted sum. As a result, the APV can only buffer up to 10 L1 events when it is operating in deconvolution or multi mode.
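Because this buffer occupancy cannot be read back from the detector, the APVE (Section 4.4) reproduces the bookkeeping off-detector. A much-simplified software model of the accounting described above might look as follows; the timing granularity, class and method names are assumptions made here for illustration, while the real APVE reproduces the chip logic exactly in firmware.

    #include <iostream>

    // Much-simplified model of APV25 readout-buffer occupancy.
    class ApvBufferModel {
    public:
        explicit ApvBufferModel(bool deconvolution)
            : cellsPerTrigger_(deconvolution ? 3 : 1) {}

        // Advance one bunch crossing (25 ns). 'l1a' indicates a trigger this BX.
        // Returns false if the trigger could not be stored (buffer overflow).
        bool clock(bool l1a) {
            // Drain: one output frame (~7 us, i.e. ~280 BX) frees its pipeline cells.
            if (usedCells_ > 0 && ++readoutBx_ >= kFrameLengthBx) {
                usedCells_ -= cellsPerTrigger_;
                readoutBx_ = 0;
            }
            if (l1a) {
                if (usedCells_ + cellsPerTrigger_ > kFifoCells) return false; // overflow
                usedCells_ += cellsPerTrigger_;
            }
            return true;
        }

        int occupancy() const { return usedCells_; }

    private:
        static constexpr int kFifoCells     = 32;   // 192-cell pipeline minus 160 latency cells
        static constexpr int kFrameLengthBx = 280;  // ~7 us output frame at 25 ns per BX
        const int cellsPerTrigger_;                 // 1 in peak mode, 3 in deconvolution/multi
        int usedCells_ = 0;
        int readoutBx_ = 0;
    };

    int main() {
        ApvBufferModel apv(/*deconvolution=*/true);     // at most 10 buffered events
        for (int bx = 0; bx < 2000; ++bx) {
            bool l1a = (bx % 40 == 0);                  // a deliberately high trigger rate
            if (!apv.clock(l1a)) {
                std::cout << "buffer overflow at BX " << bx
                          << " (occupancy " << apv.occupancy() << " cells)\n";
                break;
            }
        }
        return 0;
    }

Driving the model at an artificially high trigger rate, as in the main() above, shows how quickly the 10-event limit is reached in deconvolution mode when the trigger is not throttled.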

APV control registers for configuration settings and biases are accessible via I2C[18] transfers. The APV is operated using a single control line, on which a "1" signals an L1A, "101" a Resync and "11" a Calibration Request to generate a calibrated signal in the pipeline for readout. Due to the possible ambiguity between the L1A and Resync signals, it takes two clock cycles for the APV to register the trigger and thus requires that there is a minimum spacing between triggers of at least 2 bunch crossings.

Following a L1 trigger, the analogue data from each channel are multiplexed into a single output frame of length 7 µs. The frame consists of a 12 bit digital header followed by the 128 channels of analogue APV data. The APV also transmits 'tick marks' every 35 bunch crossings which are used to help the FEDs synchronise the front end readout system. The header contains 3 start bits, an 8 bit pipeline address indicating the location of the signal data in the pipeline and an error bit signifying the status of the chip[2, 3]. The example APV data frame in Figure 15 illustrates this. A deviation from the baseline level in each of the detector channels is also noticeable. These levels, or pedestals, remain fixed over time and must be subtracted by the Front End Drivers along with the common mode level originating from external sources.
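As a concrete illustration of the header layout described above (3 start bits, an 8-bit pipeline address and an error bit), the snippet below unpacks a 12-bit header word. The bit ordering chosen here is an assumption made for illustration, not the APV25 specification.

    #include <cstdint>
    #include <cstdio>

    // Decoded fields of the 12-bit APV25 frame header described in the text.
    struct ApvHeader {
        uint8_t startBits;        // 3 start bits
        uint8_t pipelineAddress;  // 8-bit pipeline column of the triggered sample
        bool    errorBit;         // chip status flag
    };

    // Unpack a 12-bit header word (start bits in the MSBs, error bit in the LSB
    // is assumed here).
    ApvHeader decodeApvHeader(uint16_t word) {
        ApvHeader h;
        h.startBits       = (word >> 9) & 0x7;   // bits 11..9
        h.pipelineAddress = (word >> 1) & 0xFF;  // bits 8..1
        h.errorBit        = word & 0x1;          // bit 0
        return h;
    }

    int main() {
        uint16_t word = (0x7 << 9) | (0xA5 << 1) | 0x1;  // start=111, address=0xA5, error set
        ApvHeader h = decodeApvHeader(word);
        std::printf("start=%u address=0x%02X error=%d\n",
                    (unsigned)h.startBits, (unsigned)h.pipelineAddress, (int)h.errorBit);
        return 0;
    }

The pipeline address extracted here is exactly the quantity the FEDs compare against the APVE prediction to verify that the front end has stayed in synchronisation.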


Figure 15: An example single APV data frame[2]. The frame is started with a 12 bit header followed by 128 channels of unordered analogue data and ended with a tick mark. Pedestals are clearly visible on each APV channel and the common mode noise is denoted by CMmin.

3.2 The Front End Driver (FED)

The CMS silicon tracker Front End Drivers are required to digitise and process the raw data from the detector before it is acquired by the Data Acquisition system (DAQ). Each FED is responsible for taking the raw data from 96 optical inputs, amounting to 192 APVs, and digitising it with 10 bit analogue to digital converters at 40 MHz[21]. The FEDs must then buffer and process the digitised data. Delay Field Programmable Gate Arrays (FPGAs) provide independently skewable clocks to each of the ADCs in order to tune the timings of individual channels to within 781 ps[2]. In total, 24 Xilinx Virtex-II FPGAs[22] are used for this purpose, each FPGA using in-chip Digital Clock Managers (DCMs) to provide 4 configurable clocks. Data processing is then handled by 8 Front End (FE) FPGAs, each one merging the data from 24 APVs. The FE FPGA checks synchronisation within the APV channels using the APV tick marks before processing begins. If an APV header is detected from a synchronised channel, the data is processed under one of three modes. Virgin raw mode outputs the full APV data load with no processing. In processed raw mode, the data is reordered and pedestals are subtracted before output. Zero suppressed mode reorders the APV data and performs both pedestal and common mode noise subtraction. Cluster finding, where algorithms sort through the data to pick out clusters of hit strips, is also performed in this mode, cutting the initial data rate of 3.4 GB/s by more than an order of magnitude. The pipeline address and error bit from the APV25 header are also recorded by the FE FPGA. If some channels are not synchronised or a header was not detected, data processing will still continue but the status of the event will be forwarded with an error. The same is also true if the pipeline addresses for each event do not match or if the APV presents an error; the Trigger Throttling System must decide how to proceed if synchronisation is lost.
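A software sketch of the zero-suppression path (pedestal subtraction, common mode subtraction, then a simple threshold-based cluster search) is given below. The threshold value and the use of the median as the common mode estimator are assumptions made here for illustration, not the FED firmware algorithm.

    #include <vector>
    #include <algorithm>
    #include <cstddef>
    #include <iostream>

    // Toy zero suppression for one APV's worth of strips (128 channels):
    // subtract per-strip pedestals, estimate and subtract the common mode,
    // then keep only contiguous groups of strips above threshold.
    struct Cluster { std::size_t firstStrip; std::vector<double> adc; };

    std::vector<Cluster> zeroSuppress(std::vector<double> adc,
                                      const std::vector<double>& pedestals,
                                      double seedThreshold) {
        // 1. Pedestal subtraction per strip.
        for (std::size_t i = 0; i < adc.size(); ++i) adc[i] -= pedestals[i];

        // 2. Common mode: use the median of the pedestal-subtracted strips.
        std::vector<double> sorted = adc;
        std::nth_element(sorted.begin(), sorted.begin() + sorted.size() / 2, sorted.end());
        double commonMode = sorted[sorted.size() / 2];
        for (double& x : adc) x -= commonMode;

        // 3. Cluster finding: contiguous strips above threshold.
        std::vector<Cluster> clusters;
        for (std::size_t i = 0; i < adc.size(); ++i) {
            if (adc[i] < seedThreshold) continue;
            Cluster c{i, {}};
            while (i < adc.size() && adc[i] >= seedThreshold) c.adc.push_back(adc[i++]);
            clusters.push_back(std::move(c));
        }
        return clusters;
    }

    int main() {
        std::vector<double> adc(128, 100.0), pedestals(128, 98.0);  // flat baseline
        adc[41] += 60.0; adc[42] += 90.0;                            // an injected two-strip hit
        for (const Cluster& c : zeroSuppress(adc, pedestals, 15.0))
            std::cout << "cluster starting at strip " << c.firstStrip
                      << " with " << c.adc.size() << " strips\n";
        return 0;
    }

Only the two injected strips survive, which is the mechanism by which zero suppression reduces the FED output rate by more than an order of magnitude.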

After buffering in the FE FPGA, the Back End (BE) FPGA merges the data fragments from each of the FE FPGAs over 80 MB/s links and buffers them using 2 MB of Quad Data Rate (QDR) SRAM memory. The FED data load is prepended with a header containing an event number, orbit number and bunch crossing number of the associated event. An S-LINK transition card interfaces directly with the FED and provides the pathway to the Front-End Readout Links (FRLs) and the rest of the DAQ. The transition card can transmit the event data fragment from the QDR buffer at a rate of 80 MHz providing a theoretical FED data transfer rate of 640 MB/s[20].

Clock and Control signals from the TTCci are received by the FED via an on board TTCrx and then processed by the BE FPGA. The transition card is responsible for sending the front end status to the TTS via the Fast Merging Module (FMM). This allows the FED buffers to be monitored and throttled if data rates are too high.


3.3 The Front-End Readout Link (FRL) & DAQ

Event fragments from the FEDs are passed on to the FRLs[24] via a short (<15 m) S-LINK interface[23]. In total, there are a maximum of 512 FRLs for use with the CMS DAQ, ∼260 of them for the SST alone. Each FRL is capable of reading out data from two FEDs and merging them to produce a single event fragment, implying that almost all the tracker FEDs must share an FRL. A tracker FED outputs around 1-2 kB of data per event resulting in an average data rate of ∼170 MB/s compared with the Readout Link bandwidth of around 200 MB/s. Consequently, the FRLs may exert back pressure on the FEDs if the data rate increases due to random fluctuations in the L1 trigger rate and the event data size. Back pressure and buffer overflows are however alleviated by pairing FEDs with high and low average data rates, according to the tracker region they read out, on the same FRL.
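
The figures quoted above can be checked with a simple back-of-envelope calculation, sketched below in C++; the 1.7 kB fragment size is an assumed value within the 1-2 kB range given in the text.

    // Back-of-envelope: per-FED data rate at the quoted figures.
    #include <cstdio>

    int main() {
        const double eventSizeKB   = 1.7;    // assumed per-FED fragment size (1-2 kB quoted)
        const double triggerRateHz = 100e3;  // mean L1 rate
        const double linkMBps      = 200.0;  // approximate Readout Link bandwidth

        const double fedMBps = eventSizeKB * 1e3 * triggerRateHz / 1e6;
        std::printf("per-FED rate  : ~%.0f MB/s\n", fedMBps);
        std::printf("two such FEDs : ~%.0f MB/s against a ~%.0f MB/s link\n", 2 * fedMBps, linkMBps);
        return 0;
    }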

The FRLs are designed to transmit the data fragments they receive through an on board network interface card and along 200 m of duplex optical fibre running at 2.5 Gb/s per link. The data from 8 FRLs are sent from the services cavern underground to the surface counting room where a pair of 8x8 network switches, known as FED Builders, merge the event fragments together to form a super-fragment. A total of 64 FED Builders will be deployed in the final DAQ system to merge the fragments from around 630 data sources (FEDs & Trigger Boards) through 512 FRLs[10, 24]. Each FED Builder can connect up to 8 Reader Units (RUs) which buffer the super-fragments before event building. In this way, the DAQ system becomes scalable since the number of RUs connected to FED Builders determines the number of DAQ ‘slices’ and hence influences the maximum possible readout rate. Only two DAQ slices are planned to be available at LHC startup when luminosity and data rates are low. A full DAQ system should be in place however by the following year[1].

4 The Silicon Strip Tracker Control System

The CMS Tracker control system (Figure 16) is based around the Front End Controller (FEC) for the distribution of TTC commands. Timing synchronisation, system monitoring and configuration is carried out by the FEC using a token ring control network to communicate with the detector front end modules. Readout buffers and front end status are monitored over the TTS, allowing the TCS to maintain synchronisation and control over the entire Tracker system.

4.1 Front End Module Control

The FEM devices, including the APV25, the APVMUX, DCU, PLL and LLD are all accessible and configurable via the Inter Integrated Circuit (I2C)[18] bus standard. Each device must be configured prior to data taking, this being achieved through the off-detector Front End Controller (FEC) and an on-detector token control ring network (Figure 14). The FEC also receives clock and control signals from the Trigger, Timing & Control (TTC) interface which it encodes and redistributes to the ∼300 control rings around the detector.

Each FEC is able to control up to 8 control rings via digital optical links operating at 40 Mb/s[25]. The control ring is served by a single optical receiver/transmitter which passes the optical data onto the ring nodes as electrical signals. Nodes on the ring are known as Communication Control Units (CCUs) and forward the slow-control commands over the ring network to the next CCU until the addressed unit is reached and an operation is carried out. The result of the operation is forwarded through the same ring network back to the FEC. A CCU is capable of communicating with the Front End Modules via I2C for configuration of the module devices as well as for system monitoring via the DCU. The DCU provides temperature readout of the sensor, module and of itself, as well as a measurement of the sensor leakage current and voltage lines.

The CCU also distributes the clock and trigger signals on a single channel around the ring to the PLL chips located on each FEM. The Phase Locked Loop allows precise timing synchronisation between the modules throughout the detector and the LHC clock by adding fixed delays and phase corrections to the encoded signal. The delay due to time of flight of particles passing through the detector must also be taken into account by the PLL. The PLL is able to regenerate the adjusted clock and L1 trigger signals from the encoded input for use in the FEM electronics such as the APV25.


4.2 APV Readout Throttling

In deconvolution mode, the APV has only 10 buffers available for data awaiting readout following a trigger. It takes the APV 7µs to read out a L1 trigger, so a random increase in the rate of Level 1 Accepts will quickly fill up the APV buffers. If more than 10 L1As are awaiting readout at any one time, the APV buffer becomes insufficient to hold all the L1 triggers and data would be lost. A buffer overflow will require all the APVs on detector to be reset and resynchronised with the rest of the front end, resulting in a significant deadtime in data taking. As a result, the APV pipeline must be monitored during operation and triggers must be halted if the buffers fill up. The APV buffers however, cannot be monitored directly on detector with a fast enough response, since the signal latency between the detector and the TCS, located in the counting room, would exceed the length of the APV buffer. By the time a warning signal had arrived at the TCS, further L1As may have been sent out causing an overflow of the buffers.

It should be noted however, that the occupancy of all of the APV buffers is entirely determined by the L1A rate and the APV data readout rate alone. Since all the APVs behave identically, a state machine can be used to emulate the behaviour of the APV and calculate the exact status of its buffers. In the CMS Tracker this is carried out by the APV Emulator (APVE - see Section 4.4). The APVE is able to determine the APV25 buffer status within 2 bunch crossings and send fast status signals to the TCS nearby. In this way, the TCS can temporarily stop issuing L1As before the APV buffers overflow. The APVE also supplies the “Golden” pipeline address which indicates the location of the L1 trigger data within the APV pipeline. This allows a verification with the APV header pipeline address that synchronisation has been maintained throughout the front end system.
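
A minimal sketch of this bookkeeping is shown below in C++: a counter of free APV buffers is decremented on each L1A, incremented when a frame readout completes, and Busy is flagged once the count reaches a configurable threshold. The class and method names and the threshold handling are illustrative, not the APVE firmware interface.

    // Minimal sketch of the APVE buffer bookkeeping: a count of free APV buffers
    // is decremented on each L1A and incremented when a frame readout completes;
    // Busy is raised when the count reaches a configurable threshold.
    class ApvBufferEmulator {
    public:
        ApvBufferEmulator(int totalBuffers, int busyThreshold)
            : free_(totalBuffers), total_(totalBuffers), busyThreshold_(busyThreshold) {}

        void onL1Accept()     { if (free_ > 0) --free_; }      // TCS issues a Level-1 Accept
        void onFrameReadOut() { if (free_ < total_) ++free_; } // one APV frame has been read out

        // Busy is asserted while the number of free buffers is at or below the
        // threshold, so the TCS can inhibit L1As before an overflow occurs.
        bool busy() const     { return free_ <= busyThreshold_; }

    private:
        int free_;
        int total_;          // 10 buffers in deconvolution mode, as noted above
        int busyThreshold_;  // tuned to the length of the TCS-APVE control loop
    };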

4.3 Front End Driver Readout Throttling

Each FED contains 96 front end buffers, a back end buffer and a buffer for the Trigger Timing & Control header containing event identification data. Buffers may fill up due to fluctuations in the input data rate, corresponding to the mean strip occupancy in the Front End and the APV output rate. Even at a sustained maximum rate of 140 kHz (back-to-back APV data frames) with occupancies greater than the maximum 3% expected in the Tracker, however, the FED would be able to cope without any of its buffers overflowing when operating under zero suppression mode. The limiting factor in the transfer rate however is the S-LINK interface to the FRLs. The S-LINK will exert back pressure on the FED if sustained data rates of >200 MB/s occur, although typical data rates at full luminosity will average 170 MB/s with a 100 kHz Poisson trigger and 0.5-3% strip occupancy. If the FED is close to overflowing its buffers, it must assert the Busy status. If L1As are still received due to the latency in the TTS loop, the FED is required to drop the event data and only forward the event header with an error code. In this way, it is protected from losing synchronisation with the rest of the front end system, especially if there is still data buffered earlier on in the readout chain.
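
The drop-on-Busy behaviour can be sketched as follows; the structure and the error code are illustrative assumptions, not the FED data format.

    // Sketch of the drop-on-Busy behaviour: when Busy is asserted but an L1A
    // still arrives, only the event header is kept, marked with an error code,
    // so synchronisation with the rest of the front end is preserved.
    #include <cstdint>
    #include <vector>

    struct FedEvent {
        uint32_t eventNumber;
        uint16_t errorFlags;            // non-zero marks a dropped or faulty event
        std::vector<uint16_t> payload;  // processed channel data
    };

    FedEvent buildEvent(uint32_t eventNumber, bool busyAsserted,
                        std::vector<uint16_t> processedData) {
        FedEvent ev{eventNumber, 0, {}};
        if (busyAsserted) {
            ev.errorFlags = 0x1;        // illustrative "payload dropped" code
        } else {
            ev.payload = std::move(processedData);
        }
        return ev;
    }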

The FED must also perform a check that synchronisation has been maintained within the APV buffers. It obtains the “Golden” pipeline address from the APVE via the TTCci B-channel which it must then compare against the majority pipeline address from the APV frame headers. The FED buffer and synchronisation status are transmitted to the TCS via a Fast Merging Module (FMM). Since each FMM is capable of receiving up to 20 status inputs, or can be split to receive two sets of 10 inputs, all 440 Tracker FEDs can be serviced by 25 physical FMMs (or 50 split FMMs); 5 each for the endcaps, 6 for the TIB and 7 for the TOB. For the four tracker partitions, 2 FMM layers are used to obtain a merged state per partition. The second layer is implemented by 2 physical FMMs, both split to provide four TTS partition statuses[27].

4.4 The APV Emulator (APVE)

In order to emulate the front end APVs precisely, the APVE must receive clock and control signals from the TCS including L1As. The APVE can switch between the central and local TCS systems, allowing tracker partitions to operate independently if parts of the detector are undergoing maintenance[28].

The APVE receives 5 signals encoded on a 3 bit differential line including BC0, L1A, ReSync and Event & Orbit Counter Resets from the TCS as well as the LHC clock. In order to calculate the buffer occupancy within the APVs, the APVE can employ either a real APV25 chip or an FPGA emulation of the APV on board. Both methods keep a counter of the number of available buffers within the front end APVs, so that the APVE can check that this value does not fall below a threshold value.


Figure 16: The Silicon Strip Tracker Readout and Control System and its interaction with the TCS (one partition). The marked region consisting of the FEDs, FMMs and FRLs is repeated 3-5 times depending on the partition size. There will be approximately 96-134 FEDs, 11 FECs, 6-8 FMMs and 50-70 FRLs per Tracker partition in the final system. All off detector components are accessible via VMEbus and can therefore be controlled by RCMS and DCS through XDAQ.

The simulated APV offers increased efficiency over the real APV since it has complete knowledge of the internal buffer state. Depending on the APV mode of operation, the counter is decreased on receipt of a L1A and increased when an APV readout occurs. A Busy status signal is sent to the TCS if the APVE determines that the number of available buffers has fallen below a set threshold. The TCS response will be to inhibit further L1As until more buffers are available.

The APVE is also required to monitor the merged FED status signal obtained from the Fast Merging Module (FMM). This status is combined with that of the APV buffers to form a tracker status signal which is received by the TCS. The status can be either Ready, Warning-Overflow, Busy, Out-Of-Sync (OOS) or Error (Table 4), where the most significant signal from the FMM and APV is output. The APVE will latch on to any status signal presented on the FMM input, even if the event is transient, and will hold OOS or Error signals from the FMM until the signal is cleared and the TCS has issued a ReSync. The APVE may also become Out-Of-Sync if a BC0 is not received every orbit, the APVE configuration has been changed or if a buffer overflow has occurred due to incorrect setting of threshold values (see below). APVE errors are rare but may be caused by FMM errors, unknown input signal codes from the TCS or FMM or incompatible configuration settings.
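
The combination rule can be illustrated with a few lines of C++; the relative significance assumed here simply follows the order in which the states are listed above and should be checked against Table 4.

    // Illustrative status combination: the APVE outputs the more significant of
    // the APV buffer status and the merged FED status from the FMM. The ordering
    // below is an assumption based on the order the states are listed in the text.
    enum class TtsState { Ready = 0, WarningOverflow = 1, Busy = 2, OutOfSync = 3, Error = 4 };

    TtsState combineStatus(TtsState apvBuffers, TtsState fmm) {
        return static_cast<int>(fmm) > static_cast<int>(apvBuffers) ? fmm : apvBuffers;
    }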

The APVE will also record the address of the APV pipeline cell used to buffer the L1 event. This “Golden Pipeline Address” is sent to the TTCci card. The APVE has a dedicated TTCci B channel on which the address is sent before it is forwarded onto the Front End Drivers. The FEDs are required to check the pipeline address with those from the front end APVs to ensure synchronisation has been maintained throughout the tracker. A diagram of the CMS Tracker Control System and its interaction with the APVE is shown in Figure 16.

In order to reduce the tracker deadtime caused by blocked L1As, the number of filled buffers allowed before a Busy status is asserted must be as high as possible. The APVE Busy threshold is however entirely determined by the size of the control loop: the TCS issuing a L1A to the APVE, the APVE determining the buffer status as Busy, the return of this signal to the TCS and the TCS responding with a L1A inhibit.

During this time the TCS may issue further L1As and hence reserve buffer space is needed to stop the buffers overflowing within this period. Therefore, for maximum efficiency, this control loop must be as small as possible.


Figure 17: Tracker deadtime as a function of the control loop size and the maximum number of APV buffers available when operating in deconvolution mode. The deadtime calculated when the real APV is used is given by the dashed line. The solid line indicates the deadtime when the simulated APV is used[28].


The TCS trigger rules state that no more than 1 L1 trigger can be sent within 3 bunch crossings, so a control loop of 3 bunch crossings would make maximum use of the APV buffers. However, the control loop at CMS is expected to be between 3 and 9 bunch crossings, increasing the deadtime slightly as illustrated by Figure 17. If the APVs are operated in deconvolution mode, a maximum of 10 buffers are available for storage of trigger events. A control loop of this expected size will reduce this to between 8 and 10 buffers, corresponding to a deadtime of <0.25% if a real APV is used or <0.13% if the simulated APV is used[28]. This fulfils the detector requirements for a total deadtime of less than ∼1%; in comparison the first trigger rule yields a deadtime of ∼0.5%. The thresholds are variable through the software and can be tuned after the control loop size has been determined.

4.4.1 Hardware

The APVE hardware design is implemented by the IDAQ (Figure 18), a generic data acquisition board which incorporates features specifically used for the APVE. At the heart of the IDAQ is a Xilinx Virtex II Pro FPGA[22], providing the processing power and flexibility for different IDAQ applications including the APVE. The FPGA is booted off an on-board Compact Flash Card, facilitating a quick and easy method of swapping the functionality of the IDAQ. The IDAQ is also equipped with 128 MB of Double Data Rate (DDR) SDRAM memory which can be operated at up to 200 MHz (400 MHz DDR) on a 32 bit wide bus. Although the DDR memory is most beneficial in DAQ applications, it also provides the APVE with a significant amount of external storage for history recording at fast read/write speeds. The IDAQ also implements a number of interfaces. Up to 300 single ended I/O connections are available either through 5 standard RJ45 jacks on the front panel or through mounting an additional card on the IDAQ. A USB 2.0 interface and a 10/100 Mbit/s Ethernet link are also provided. The IDAQ is implemented as a standard 6U VME board. The APVE will interface with controller PCs via the VME bus standard, common to all the electronics boards used in CMS. VME will be used to configure, control and monitor the APVE through custom built software when it is powered up.

A real APV25 chip is mounted on the APVE and connected to the FPGA for the buffer occupancy calculation. The on-board APV is operated as it would be on the detector and is also configurable via I2C. The FPGA will increase an internal counter whenever a L1A arrives from the TCS input and will decrease the counter if an APV frame is detected. In addition to this, an FPGA based simulation of the APV is also implemented in the firmware.


Figure 18: Layout of the APVE IDAQ card.

4.4.2 Firmware

The APVE is operated over two clock domains. The local clock uses an on board 40 MHz oscillator for the control of the APVE via VME. The TCS clock is derived from the clock originating from the TCS. Operations relating to buffer occupancy calculations and interfaces to the TTS are run under the TCS domain. Digital Clock Managers (DCMs) within the FPGA are used to lock on to the external clock signals from either a global or local TCS source (GTCS/LTCS) and provide a method of protecting the APVE functionality if the TCS clock is lost. Firmware components interface with each other using the WISHBONE bus standard. In the local clock domain, the bus bridges the interfaces to VME and the TCS clock domain and provides access to the clock managers and GTCS/LTCS switch. The APVE firmware also implements a simulation of the TCS and FMM to output signals through a daughter card mounted on the board. These signals include TCS control commands (BC0, ReSync), L1As generated either repetitively or pseudo-randomly with Poisson occurrences, and TTS signal statuses (Table 4) emulating the FMM. By routing the signals back to the APVE inputs, the APVE may undergo testing independent of external electronics. The WISHBONE interface also allows access to the APVE pipeline and status history for debugging, TCS event counters and binners and the real APV25 via I2C for configuration of APV parameters.

4.4.3 Software

The APVE can be controlled from a PC via VMEbus using the Hardware Access Libraries (HAL)[29]. An address table provides the HAL with the register space for the control of the APVE. Low level control of the software is performed by the ApveObject class which uses the HAL libraries and address table to set parameters for control and configuration and get parameters such as temperatures and board status. Higher level control is implemented by the ApveApplication class and provides user-friendly access and control functions that wrap the functions found in the ApveObject class. The ApveApplication object requires an XML file to configure the APVE parameters on startup. The ApveDescription class parses this file before handing the data to the ApveApplication. The APVE classes are written entirely in C++.
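
A declaration-only C++ skeleton of this layering is given below. The method names and signatures are assumptions made for illustration; only the class names and their roles are taken from the text.

    // Declaration-only skeleton of the software layering (illustrative methods;
    // only the class names and their roles come from the text).
    #include <string>

    class ApveDescription {                 // parses the XML configuration file at startup
    public:
        explicit ApveDescription(const std::string& xmlFile);
        int  busyThreshold() const;         // hypothetical parameter accessor
        bool useSimulatedApv() const;       // hypothetical parameter accessor
    };

    class ApveObject {                      // low-level access via HAL and the address table
    public:
        void     writeRegister(const std::string& name, unsigned value);
        unsigned readRegister(const std::string& name) const;
        double   boardTemperature() const;
    };

    class ApveApplication {                 // user-friendly wrapper around ApveObject
    public:
        explicit ApveApplication(const ApveDescription& config);
        void configure();                   // apply the XML parameters through ApveObject
        void selectSimulatedApv(bool simulated);
        void setBusyThreshold(int nBuffers);
    private:
        ApveObject hw_;
    };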

An XDAQ[15] layer for the APVE has been written and tested by the author and is now in use for the tracker DAQ integration tests and forthcoming Magnet Test & Cosmic Challenge (MTCC). As a consequence, the APVE is now accessible through the XDAQ interfaces such as a web application and SOAP communication. The XDAQ layer, the ApveSupervisor, implements all the basic command sequences to operate the APVE in a full DAQ system and also provides monitoring and history reports to the end user. The main configuration options available through the web application are the selection of the external clock source as either GTCS or LTCS, the selection of either the real or simulated APV, the mode of operation (Peak, Deconvolution, Multi) and the setting of values for the busy and warning thresholds.


The web pages also allow the configuration and operation of the simulated TCS and FMM. The user may start and stop the generation of cyclic BC0s, issue a ReSync command, change the FMM state and generate L1As according to the distribution specified in the XML file. Information from the APVE is also displayed on the web interface. This includes board and FPGA temperatures, VME address, DCM status as well as the current APV buffer, FMM and APVE combined statuses. Using the large amount of external memory available on the IDAQ, the logging of pipeline, status and TCS events is also performed. For the verification of TCS throttling, counters are used to record the number of BC0s, L1As, L1Resets and Warns/Busys from the APV, FMM and APVE. A histogram of the trigger distribution is also implemented in XDAQ in order to examine the TCS trigger generation profile.

Communication with the APVE may also be achieved using SOAP messaging so as to allow control under a central XDAQ application running on a separate node. The APVE will be managed by the TrackerSupervisor (via the ApveSupervisor), using system wide commands such as “Configure” and “Enable” as well as APVE specific commands, to set the TCS source for example. Almost all the configuration options available on the web application can be called over SOAP commands. The TrackerSupervisor, and hence the APVE, is controlled using SOAP by the XDAQ implemented Run Control/Monitoring System (RCMS), responsible for managing the entire DAQ, Timing & Trigger systems, DCS and configuration databases in CMS.
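
The command handling can be pictured with the hedged sketch below, which maps command names onto a simple state machine. This is not the XDAQ or SOAP API: the dispatch mechanism, state names and handler bodies are purely illustrative.

    // Hedged sketch of supervisor-style command handling: command names such as
    // "Configure" and "Enable" (arriving over SOAP or from the web interface) are
    // mapped onto a simple state machine. Not the XDAQ API; everything here is
    // illustrative.
    #include <functional>
    #include <map>
    #include <stdexcept>
    #include <string>

    class ApveSupervisorSketch {
    public:
        ApveSupervisorSketch() {
            handlers_["Configure"] = [this] { configure(); state_ = "Configured"; };
            handlers_["Enable"]    = [this] { enable();    state_ = "Enabled";    };
            handlers_["Halt"]      = [this] { halt();      state_ = "Halted";     };
        }

        // Entry point for a decoded command name.
        void onCommand(const std::string& name) {
            auto it = handlers_.find(name);
            if (it == handlers_.end()) throw std::runtime_error("unknown command: " + name);
            it->second();
        }

        const std::string& state() const { return state_; }

    private:
        void configure() { /* load XML parameters, select GTCS/LTCS, set thresholds */ }
        void enable()    { /* allow the APVE to act on TCS signals */ }
        void halt()      { /* return to a safe idle state */ }

        std::string state_ = "Halted";
        std::map<std::string, std::function<void()>> handlers_;
    };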

Figure 19: APVE XDAQ Web Interface real-time histogram of LTC generated triggers following a Poisson distribution at 100 kHz (trigger rate scale is logarithmic). The rates are representative of the time difference between consecutive triggers.

4.5 Tracker DAQ & APVE Integration

The following section summarises the author's work concerning the integration and commissioning of the APVE within the Tracker DAQ, in collaboration with the people listed in the Acknowledgements.

Production of DAQ components for the Tracker is almost complete and integration is progressing. All 440 Front End Drivers required for the final system have been produced and are all now at CERN. Although over 95% of boards have already passed verification and testing, an additional 50 FEDs are expected to be manufactured for contingency. Production of the S-LINK transition cards for the FEDs is also complete, with over 210 cards in use at CERN and ∼290 cards undergoing testing at RAL. Around 20 FRLs and 3 FMMs are available for the present Tracker DAQ tests. The full number will be supplied during the installation of the DAQ in the service cavern underground. Five APVEs are available for use at CERN including two prototype versions.


More IDAQ boards will be supplied later in the year when the DAQ partitions approach full size. All TTC components, including LTCs and TTCci/ex/oc cards, are available and fully functional except for the GTC which is still under development and testing. Since the LTC has the same functionality as the GTC, it has been used in the recent DAQ tests and will emulate a GTC in the forthcoming MTCC.

Recent tests with the LTC have verified that the APVE is throttling L1As according to the APV buffer occupancy. During a test with the GTC, it was observed that the APVE occasionally lost synchronisation (OOS) with the trigger controller. It was found that when a L1A and a BC0 occurred at the same time, the GTC was forwarding the L1A to the APVE instead of the BC0. Since the APVE requires a BC0 signal every 3564 clock cycles (1 LHC orbit) from the TCS, this forced the APVE to enter the OOS state. The resolution of this problem requires a GTC firmware update.

Transmission of the front end pipeline address from the APVE to the TTCci under the control of the LTC has also been verified. The address is transferred to an auxiliary B channel input on the TTCci through an 8 bit data and strobe LVDS link. The TTCci will allow pipeline forwarding by transmitting the address between synchronous commands. This is possible since the pipeline address is 42 clock cycles long and the minimum gap between command data is 44 clock cycles. Forwarding of the address from the TTCci to the FEDs has taken place successfully. Verification of alignment between the pipeline address from the APVE and that from the APV header has not yet been performed. Initial tests showed that the addresses did not match within the FED. It is likely that this is because L1 Resets (ReSync) and L1As follow different paths to each other on the way to the front end APV. Before reaching the FECs, L1 Resets are transmitted over the B channel pathway as opposed to L1As which are sent promptly over the A channel. This has no consequence for the APVs on detector as the difference between a ReSync and a L1A will be equal for all APVs, hence readout frames should all contain the same pipeline address. However, since the APVE receives both L1 Resets and L1As promptly from the TCS, the difference between them is likely to be longer. This would result in a L1A marking a different pipeline column address in comparison to the detector APVs. Importantly however, the phase between the two addresses and hence the difference in address (±pipeline length) should always be constant. One option would be to measure the difference and implement a delay in the APVE before executing an internal ReSync. This capability has been added to the firmware in the event that the difference is constant throughout a partition. A more comprehensive option would be to histogram the difference in the FED, where it can check both addresses on a regular basis, and report any change in the value as a loss of synchronisation. This has yet to be verified as the sole cause of the address mismatch however.
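
The more comprehensive option could look something like the following sketch, which learns the offset between the golden address and the APV majority address on the first event and flags any later change as a loss of synchronisation. The pipeline length parameter and all names are assumptions for illustration.

    // Sketch of the histogram/monitor option: learn the offset between the golden
    // pipeline address and the APV majority address on the first event, then flag
    // any change as a loss of synchronisation.
    #include <cstdint>
    #include <optional>

    class PipelineAddressMonitor {
    public:
        explicit PipelineAddressMonitor(int pipelineLength) : length_(pipelineLength) {}

        // Returns true while the offset between the two addresses is unchanged.
        bool check(uint8_t goldenAddress, uint8_t apvMajorityAddress) {
            const int offset =
                (static_cast<int>(apvMajorityAddress) - goldenAddress + length_) % length_;
            if (!expectedOffset_) { expectedOffset_ = offset; return true; }  // learn on first event
            return offset == *expectedOffset_;
        }

    private:
        int length_;                        // number of pipeline cells
        std::optional<int> expectedOffset_; // fixed phase between APVE and detector APVs
    };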

Figure 20: Scoped output from the Fast Merging Module indicating the skew over the differential lines during a Ready to Warn transition. Taken with permission from J. Jones[30].

Tests in the last six months have been directed towards integrating and verifying the functionality and control of a full Tracker DAQ partition in the run up to the Magnet Test & Cosmic Challenge (MTCC). The MTCC will primarily check the correct operation of the magnet system including the coil and yoke as well as the cooling, control, power and alignment systems. In addition, commissioning and alignment of subdetector components will also be performed using cosmic muons, with readout through a full chain DAQ and subsequent high level reconstruction.


Subdetectors, including elements of the Silicon Strip Tracker, are already in place within the closed CMS detector and will take muon data under different magnetic field conditions. The principal goal of the MTCC will be to identify any issues concerning detector integration before final installation of CMS underground.

Initial tests involved using a cosmic telescope to detect muons passing through a number of APV25s under the control of FECs. Data was successfully read out by the FEDs even though throughput rates were low. High rate tests have since been performed and a number of integration issues have been solved. For the high rate tests, an LTC was used to generate L1As which were successfully throttled by the APVE. The LTC trigger distribution can now be verified using the APVE XDAQ web application as illustrated in Figure 19. Clock and control commands were passed over the TTCci/ex/oc to the FEDs which were configured to run using random fake event data. FMMs have been used to merge the FED statuses for input to the APVE and a number of FRLs were available for the readout of FEDs. The integration of FMMs within the DAQ system initially caused a problem for the APVE. This was due to a significant skew on the differential FED/FMM output during status transitions (Figure 20). The transition would present a temporary state to the APVE which it would lock to, so the APVE firmware was modified to change the time period over which it monitored the FMM input. The cause of the skew on the FED was traced to a BE FPGA test register which was subsequently removed along with the APVE workaround. When zero suppression on the FEDs was disabled and data rates of more than 200 MB/s per FED were tested, it was observed that the FEDs would stop operating. Using the APVE XDAQ web interface to examine the status history of the APV and FMM, it was discovered that the FEDs would lock up in the Warn state. This was traced to a bug in the FED firmware where reading certain FE FPGA registers during running would cause the FED to hang. These issues have since been resolved.

Figure 21: Results of the high rate DAQ test, where the LTC was required to generate 140 kHz Poisson distributed triggers. The throttled rate due to the data throughput in the DAQ chain as a function of mean strip occupancy is indicated. At the maximum expected strip occupancy of 3%, a trigger rate limit of 102 kHz was achieved. Results from J. Fulcher[31].

Most recently, a full luminosity CMS scenario was tested in the tracker DAQ in order to qualify the chain for MTCC readout. Up to 30 FEDs were operated in zero suppression mode with fake data over a range of occupancies. The FEDs were read out by 15 FRLs so that the links merged the event fragments from two FEDs while limiting data throughput at 230 MB/s. The LTC was configured to generate triggers with a Poisson distribution while the APVE was set to throttle the trigger rates for the FEDs through two FMMs. With the LTC generating an average trigger rate of 140 kHz, a throttled rate of 102 kHz was achieved with event data at strip occupancies of 3%. This indicates that the FEDs are able to cope with back to back APV frames (140 kHz) at the maximum occupancies expected for proton collisions at the LHC without reducing the mean L1 trigger rate to below 100 kHz. The setup was extended to a 24 hour test at high occupancy and rate so that over 6 billion events were read out through the DAQ without issues. More tests are required to check trigger throttling at different rates and to optimise thresholds according to modes of operation in order to minimise deadtime. The 25% test, where a section of the Tracker will be read out by a full DAQ partition, and the MTCC will present an opportunity for this.


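
As a rough consistency check of these numbers, assuming the bottleneck is the 230 MB/s cap shared by the two FEDs on each FRL and an assumed per-FED fragment size of about 1.1 kB at 3% occupancy (within the 1-2 kB range quoted earlier), the sustainable trigger rate comes out close to the measured 102 kHz:

    // Rough consistency check (C++). The 1.1 kB fragment size is an assumption.
    #include <cstdio>

    int main() {
        const double frlLimitMBps = 230.0;  // throughput cap per FRL in the test
        const double fedsPerFrl   = 2.0;    // two FEDs merged per FRL
        const double fragmentKB   = 1.1;    // assumed per-FED fragment size at 3% occupancy

        const double perFedMBps = frlLimitMBps / fedsPerFrl;
        const double maxRateKHz = perFedMBps * 1e6 / (fragmentKB * 1e3) / 1e3;
        std::printf("sustainable L1 rate ~ %.0f kHz (measured throttled rate: 102 kHz)\n", maxRateKHz);
        return 0;
    }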

5 Summary

The high luminosity and bunch crossing rates at the LHC place significant requirements on the CMS detector and especially the Tracker. The Silicon Strip Tracker has been designed to survive 10 years within the harsh radiation environment and the front end readout systems are able to cope with the extreme timings and large data rates. In particular the APV25 provides a low noise, low power and radiation hard solution to limit the amount of readout electronics within the detector. As a consequence the analogue data output rate from the tracker is high, although the optical links are successfully able to transfer the data to the Front End Drivers for digitisation, buffering and processing to reduce this rate. The APVE successfully prevents buffer overflows from occurring within the Tracker by emulating the APV buffer occupancies and quickly reporting the status to the Trigger Control System. The APVE is also capable of transmitting the Golden Pipeline Address to the TTCci so that synchronisation within the whole tracker is verified. The deadtime associated with occupied APV buffers is minimal and is expected to be <0.13% if the simulated APV is used.

Integration and testing of the APVE within the DAQ system is nearing completion, and the emphasis is now on running and testing the full DAQ chain before installation. The APVE is able to receive commands and triggers from the LTC and throttle the rate according to the occupancy of the APV buffers. It is also successfully forwarding the Front End Driver status from the FMMs and transmitting the “Golden” pipeline address to the TTCci. Further tests are required to ensure that the pipeline address matches that from the front end APVs within the FEDs. APVE and FED throttling tests at different rates and modes of operation are needed to verify and optimise the running of the DAQ. Final measurements of latencies and channel data rates are also required as the system is installed underground so that status thresholds may be tuned to minimise the trigger deadtime and the number of dropped events.

References

[1] The CMS Collaboration, "The CMS Physics Technical Design Report, Volume I, Detector Performance & Software", CERN/LHCC 2006-001.

[2] M. Noy, "Development and Characterisation of the Compact Muon Solenoid Silicon Microstrip Tracker Front End Driver", PhD Thesis, London Univ. 2004.

[3] R. Bainbridge, "Influence of Highly Ionising Events on the CMS APV25 Readout Chip", PhD Thesis, London Univ. 2004.

[4] The CMS Collaboration, “The Tracker Project Technical Design Report”, CERN/LHCC 98-6.

[5] The CMS Collaboration, "Addendum to the CMS Tracker TDR", CERN/LHCC 2000-016, CMS TDR 5 Addendum 1 (2000).

[6] J. Fernandez, "The CMS Silicon Strip Tracker", Seventh International Conference on Position Sensitive Detectors, CMS-CR-2006-007.

[7] J. D’Hondt, "The CMS Tracker System", Nuclear Science Symposium Conference Record, 2005 IEEE, Volume 2, p1084.

[8] L. Jones et al., "The APV25 Deep Submicron Readout Chip for CMS Detectors", Fifth Workshop on Electronics for LHC Experiments, CERN/LHCC 99-09.

[9] The CMS Collaboration, "The Trigger and Data Acquisition (TriDAS) Project Technical Design Report, Volume 1, The Level-1 Trigger", CERN/LHCC 2000-038.

[10] The CMS Collaboration, "The Trigger and Data Acquisition (TriDAS) Project Technical Design Report, Volume II, Data Acquisition & High-Level Trigger", CERN/LHCC 2002-26.


[11] J. Varela, “CMS L1 Trigger Control System”, CMS NOTE 2002/033.

[12] T. Christiansen, E. Corrin, "CMS TTC Homepage", http://cmsdoc.cern.ch/cms/TRIDAS/ttc/modules/ttc.html

[13] A. Racz et al., "Trigger Throttling System for CMS DAQ", Sixth Workshop on Electronics for LHC Experiments, CERN/LHCC 2000-041.

[14] V. Brigljevic et al., "Run Control and Monitor System for the CMS Experiment", Computing in High Energy and Nuclear Physics, 2003 CHEP.

[15] L. Orsini, J. Gutleber, "The XDAQ Framework", http://xdaqwiki.cern.ch/

[16] V. Brigljevic et al., "Using XDAQ in Application Scenarios of the CMS Experiment", Computing in High Energy and Nuclear Physics, 2003 CHEP.

[17] M. Raymond et al., "The CMS Tracker APV25 0.25µm CMOS Readout Chip", Sixth Workshop on Electronics for LHC Experiments, CERN/LHCC 2000-041.

[18] Philips Semiconductors, "The I2C Specification", http://www.semiconductors.philips.com/products/interface control/i2c/index.html

[19] J. Troska et al., "Optical Readout and Control Systems for the CMS Tracker", IEEE Transactions on Nuclear Science, 50 (2003) 1067-1072.

[20] C. Foudas et al., "The CMS Tracker Readout Front End Driver", IEEE Transactions on Nuclear Science, 52 (2005) 2836-2840.

[21] G. Iles et al., "Performance of the CMS Silicon Tracker Front-End Driver", Tenth Workshop on Electronics for LHC Experiments and Future Experiments, CERN 2004-010, p222.

[22] Xilinx, "Virtex-II Platform FPGA Handbook", http://www.xilinx.com

[23] S. Haas, E. van de Bij, "S-LINK Specification", http://www.cern.ch/HSI/s-link/

[24] A. Racz et al., "CMS Data to Surface Transportation Architecture", Eighth Workshop on Electronics for LHC Experiments, CERN 2002-003, CERN/LHCC 2002-034.

[25] F. Drouhin et al., "The CERN CMS Tracker Control System", Nuclear Science Symposium Conference Record, 2004 IEEE, p1196.

[26] E. Cano et al., "The Fast Merging Module (FMM) for Readout Status Processing in CMS DAQ", Ninth Workshop on Electronics for LHC Experiments and Future Experiments, CMS-CR-2003-050.

[27] A. Racz et al., "The Final Prototype of the Fast Merging Module (FMM) for Readout Status Processing in CMS DAQ", Tenth Workshop on Electronics for LHC Experiments and Future Experiments, CERN 2004-010, p316.

[28] G. Iles, "The APV Emulator to Prevent Front-End Buffer Overflows Within the CMS Silicon Strip Tracker", Eighth Workshop on Electronics for LHC Experiments, CERN 2003-003, CERN/LHCC 2002-034.

[29] C. Schwick, "The Hardware Access Libraries", http://cmsdoc.cern.ch/~cschwick/software/documentation/HAL/index.html

[30] J. Jones, "APVe/Tracker Throttle Status", Tracker Meeting 04/06, http://indico.cern.ch/getFile.py/access?contribId=32&sessionId=11&resId=2&materialId=slides&confId=1782

[31] S. Tkaczyk, "Tracker Electronics, DAQ and Power Systems, Slow Controls", Tracker Meeting 07/06, http://indico.cern.ch/getFile.py/access?contribId=104&sessionId=1&resId=0&materialId=slides&confId=4046


Acknowledgements

With thanks to the Tracker DAQ integration team including John Jones, Jonathan Fulcher, Emlyn Corrin, Laurent Mirabito, Roberta Arcidiacono, Ozman Zorba, Kostas Kloukinas, Saeed Taghavi, John Coughlan, Jan Troska, Frederic Drouhin, Slawek Tkaczyk, Jo Cole, Hannes Sakulin, Tim Christiansen.

And special thanks to John Jones, James Leaver and Greg Iles in the development and testing of the APVE.
