Predictions and PDFs for the LHC
J. Huston, Michigan State University ([email protected])
Sparty (we’ll talk more about Sparty tomorrow)
Two advertisements
Excitement about my visit
Understanding cross sections at the LHC
We’re all looking for BSM physics at the LHC
Before we publish BSM discoveries from the early running of the LHC, we want to make sure that we measure/understand SM cross sections
◆ detector and reconstruction algorithms operating properly
◆ SM backgrounds to BSM physics correctly taken into account
◆ and, in particular, that QCD at the LHC is properly understood
Cross sections at the LHC
Experience at the Tevatron is very useful, but scattering at the LHC is not necessarily just “rescaled” scattering at the Tevatron
Small typical momentum fractions x for the quarks and gluons in many key searches
◆ dominance of gluon and sea-quark scattering
◆ large phase space for gluon emission, and thus for production of extra jets
◆ intense QCD backgrounds
◆ or, to summarize: lots of Standard Model to wade through to find the BSM pony
Cross sections at the LHC
Note that the data from HERA and fixed-target experiments cover only part of the kinematic range accessible at the LHC
We will access pdf’s down to x ~ 10^-6 (crucial for the underlying event) and Q^2 up to 100 TeV^2
We can use the DGLAP equations to evolve to the relevant x and Q^2 range, but…
◆ we’re somewhat blind in extrapolating to lower x values than are present in the HERA data, so the uncertainty may be larger than currently estimated
◆ we’re assuming that DGLAP is all there is; at low x, BFKL-type logarithms may become important (more later about DGLAP and BFKL)
[Figure: kinematic plane, showing the DGLAP evolution direction and the low-x region where BFKL effects may enter]
Understanding cross sections at the LHC
◆ PDF’s, PDF luminosities and PDF uncertainties
◆ Sudakov form factors
◆ underlying event and minimum-bias events
◆ LO, NLO and NNLO calculations; K-factors
◆ jet algorithms and jet reconstruction
◆ benchmark cross sections and pdf correlations
…but to understand cross sections, we have to understand QCD (at the LHC)
Factorization
Factorization is the key to perturbative QCD
◆ the ability to separate the short-distance physics and the long-distance physics
In pp collisions at the LHC, the hard-scattering cross sections are the result of collisions between a quark or gluon in one proton and a quark or gluon in the other proton
The remnants of the two protons also undergo collisions, but of a softer nature, described by semi-perturbative or non-perturbative physics
The calculation of hard-scattering processes at the LHC requires:
(1) knowledge of the distributions of the quarks and gluons inside the proton, i.e. what fraction of the momentum of the parent proton they carry -> parton distribution functions (pdf’s)
(2) knowledge of the hard-scattering cross sections of the quarks and gluons, at LO, NLO, or NNLO in the strong coupling constant αs
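Schematically, factorization for a hard process X reads (standard notation; µF and µR are the factorization and renormalization scales):

$$\sigma_{pp\to X} \;=\; \sum_{a,b} \int_0^1 dx_a\, dx_b\; f_{a/p}(x_a,\mu_F^2)\; f_{b/p}(x_b,\mu_F^2)\; \hat{\sigma}_{ab\to X}(x_a,x_b;\mu_R^2,\mu_F^2)$$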
Parton distribution functions and global fits
Calculation of production cross sections at the LHC relies upon knowledge of pdf’s in the relevant kinematic region
Pdf’s are determined by global analyses of data from DIS, DY and jet production
Three major groups provide semi-regular updates to parton distributions when new data/theory become available
◆ MRS -> MRST98 -> MRST99 -> MRST2001 -> MRST2002 -> MRST2003 -> MRST2004 -> MSTW2008
◆ CTEQ -> CTEQ5 -> CTEQ6 -> CTEQ6.1 -> CTEQ6.5 -> CTEQ6.6 -> CT09 -> CT10
◆ NNPDF1.0 -> NNPDF1.1 -> NNPDF1.2 -> NNPDF2.0
Global fits
With the DGLAP equations, we know how to evolve pdf’s from a starting scale Q0 to any higher scale (the DGLAP equations are written out below)
…but we can’t calculate what the pdf’s are ab initio
◆ that is one of the goals of lattice QCD
We have to determine them from a global fit to data
◆ the factorization theorem tells us that pdf’s determined for one process are applicable to another
So what do we need?
◆ a value of Q0 (1.3 GeV for CTEQ, 1 GeV for MSTW) lower than the scale of the data used in the fit (or of any prediction)
◆ a parametrization for the pdf’s
◆ a scheme for the pdf’s
◆ hard-scattering calculations at the order being considered in the fit
◆ pdf evolution at the order being considered in the fit
◆ a world-average value for αs
◆ a lot of data
▲ with appropriate kinematic cuts
◆ a treatment of the errors for the experimental data
◆ MINUIT
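For reference, the DGLAP evolution equations referred to above take the standard form (the P_ij are the splitting functions):

$$\frac{\partial f_i(x,Q^2)}{\partial \ln Q^2} \;=\; \frac{\alpha_s(Q^2)}{2\pi} \sum_j \int_x^1 \frac{dz}{z}\; P_{ij}(z)\, f_j\!\left(\frac{x}{z},\,Q^2\right)$$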
Orders and Schemes
Fits are available at
◆ LO
▲ CTEQ6L1 or CTEQ6L (1-loop or 2-loop αs)
▲ in common use with parton shower Monte Carlos
▲ poor fit to data due to deficiencies of LO ME’s
◆ LO*
▲ better for parton shower Monte Carlos
◆ NLO
▲ CTEQ6.1, CTEQ6.6, CT09
▲ precision level: error pdf’s defined at this order
◆ NNLO
▲ more accurate, but not all processes known
At NLO and NNLO, one needs to specify a scheme or convention for subtracting the divergent terms
Basically, the scheme specifies how much of the finite corrections to subtract along with the divergent pieces
◆ most widely used is the modified minimal subtraction scheme (MSbar)
◆ used with dimensional regularization: subtract the pole terms and the accompanying log 4π and Euler-constant terms
◆ one may also find pdf’s in the DIS scheme, where the full order-αs correction for F2 in DIS is absorbed into the quark pdf’s
LO->NLO->NNLO
There is a big change in general for PDFs in going from LO to NLO, but not from NLO to NNLO
Scales and Masses
Processes used in global fits are characterized by a single large scale
◆ DIS: Q^2
◆ lepton pair production: M^2
◆ vector boson production: M_V^2
◆ jet production: pT^jet
By choosing the factorization and renormalization scales to be of the same order as the characteristic scale
◆ we can avoid some large logarithms in the hard-scattering cross section
◆ some large logarithms in the running coupling and the pdf’s are resummed
There are different treatments of quark masses and thresholds
◆ zero-mass variable flavor number scheme (ZM-VFNS)
◆ fixed flavor number scheme (FFNS)
◆ general-mass variable flavor number scheme (GM-VFNS)
Data sets used in global fits (CTEQ6.6)
1. BCDMS F2 proton (339 data points)
2. BCDMS F2 deuteron (251 data points)
3. NMC F2 (201 data points)
4. NMC F2d/F2p (123 data points)
5. CDHSW F2 (85 data points)
6. CDHSW F3 (96 data points)
7. CCFR F2 (69 data points)
8. CCFR F3 (86 data points)
9. H1 NC e-p (126 data points; 1998-99 reduced cross section)
10. H1 NC e-p (13 data points; high-y analysis)
11. H1 NC e+p (115 data points; reduced cross section; 1996-97)
12. H1 NC e+p (147 data points; reduced cross section; 1999-00)
13. ZEUS NC e-p (92 data points; 1998-99)
14. ZEUS NC e+p (227 data points; 1996-97)
15. ZEUS NC e+p (90 data points; 1999-00)
16. H1 F2^c e+p (8 data points; 1996-97)
17. H1 reduced cross section for ccbar e+p (10 data points; 1996-97)
18. H1 reduced cross section for bbbar e+p (10 data points; 1999-00)
19. ZEUS F2^c e+p (18 data points; 1996-97)
20. ZEUS F2^c e+p (27 data points; 1998-00)
21. H1 CC e-p (28 data points; 1998-99)
22. H1 CC e+p (25 data points; 1994-97)
23. H1 CC e+p (28 data points; 1999-00)
24. ZEUS CC e-p (26 data points; 1998-99)
25. ZEUS CC e+p (29 data points; 1994-97)
26. ZEUS CC e+p (30 data points; 1999-00)
27. NuTeV neutrino dimuon cross section (38 data points)
28. NuTeV anti-neutrino dimuon cross section (33 data points)
29. CCFR neutrino dimuon cross section (40 data points)
30. CCFR anti-neutrino dimuon cross section (38 data points)
31. E605 dimuon (199 data points)
32. E866 dimuon (13 data points)
33. Lepton asymmetry from CDF (11 data points)
34. CDF Run 1B jet cross section (33 data points)
35. D0 Run 1B jet cross section (90 data points)
2794 data points from DIS, DY, jet production
All with (correlated) systematic errors that must be treated correctly in the fit
Note that DIS is the 800-pound gorilla of the global fit, with many data points and small statistical and systematic errors
◆ and fixed-target DIS data still have a significant impact on the global fitting, even with an abundance of HERA data
To avoid non-perturbative effects, kinematic cuts are placed on the DIS data
◆ Q^2 > 5 GeV^2
◆ W^2 (= m^2 + Q^2(1-x)/x) > 12.25 GeV^2
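A minimal sketch of these cuts (cut values as quoted above; the function name and the nucleon-mass default are mine):

```python
def passes_dis_cuts(x, q2, m2=0.880, q2_min=5.0, w2_min=12.25):
    """Standard DIS kinematic cuts, all in GeV^2.
    W^2 = m^2 + Q^2 (1 - x) / x is the squared invariant mass
    of the hadronic final state (m2 ~ proton mass squared)."""
    w2 = m2 + q2 * (1.0 - x) / x
    return q2 > q2_min and w2 > w2_min
```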
Global fitting: best fit
Using our 2794 data points, we perform the global fit by χ^2 minimization, where
◆ D_i are the data points and T_i are the theoretical predictions; we allow a normalization shift f_N for each experimental data set
▲ but we impose a quadratic penalty for any normalization shift
◆ there are k systematic errors β for each data point in a particular data set
▲ we allow the data points to be shifted by the systematic errors, with the shifts given by the parameters s_j
▲ but we impose a quadratic penalty for non-zero values of the shifts s_j
◆ σ_i is the statistical error for data point i
For each data set, we calculate

$$\chi_k^2 \;=\; \sum_i \frac{\left( f_N D_i - \sum_{j=1}^{k} \beta_{ij}\, s_j - T_i \right)^2}{\sigma_i^2} \;+\; \sum_{j=1}^{k} s_j^2$$

For a given set of theory parameters it is possible to solve analytically for the shifts s_j, and therefore to continually update them as the fit proceeds
To make matters more complicated, we may give additional weights to some experiments due to the utility of the data in those experiments (e.g. NA-51), so we adjust the χ^2 to be

$$\chi^2 \;=\; \sum_k w_k\, \chi_k^2 \;+\; \sum_k w_{N,k} \left( \frac{1 - f_N}{\sigma_N^{\rm norm}} \right)^2$$

where w_k is a weight given to the experimental data and w_{N,k} is a weight given to the normalization
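Since this χ^2 is quadratic in the shifts, setting ∂χ^2/∂s_k = 0 gives a small linear system that can be solved at each step of the fit. A minimal numpy sketch with toy inputs (all names and numbers are illustrative, not from an actual fit):

```python
import numpy as np

def chi2_with_shifts(D, T, sigma, beta, fN=1.0):
    """chi^2 for one data set with correlated systematic shifts.

    D, T, sigma : length-n arrays (data, theory, statistical errors)
    beta        : n x k matrix of systematic errors beta_ij
    Setting d(chi^2)/ds_k = 0 gives the linear system
    (1 + beta^T W beta) s = beta^T W r, with W = diag(1/sigma_i^2).
    Returns (chi^2 at the optimal shifts, the shifts s)."""
    r = fN * D - T                      # residuals before shifts
    w = 1.0 / sigma**2                  # statistical weights
    A = np.eye(beta.shape[1]) + beta.T @ (w[:, None] * beta)
    b = beta.T @ (w * r)
    s = np.linalg.solve(A, b)           # optimal nuisance shifts
    shifted = r - beta @ s
    return np.sum(w * shifted**2) + np.sum(s**2), s

# toy example: 5 data points, 2 correlated systematics
rng = np.random.default_rng(0)
D = np.array([1.00, 1.10, 0.90, 1.05, 0.98])
T = np.ones(5)
sigma = 0.05 * np.ones(5)
beta = 0.02 * rng.standard_normal((5, 2))
print(chi2_with_shifts(D, T, sigma, beta))
```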
Minimization and errors
Free parameters in the fit are the parameters of the quark and gluon distributions
There are too many parameters to allow all to remain free
◆ some are fixed at reasonable values or determined by sum rules
20 free parameters for CTEQ6.1, 22 for CTEQ6.6, 24 for CT09
The result is a global χ^2/dof on the order of 1
◆ for an NLO fit
◆ worse for a LO fit, since the LO pdf’s cannot make up for the deficiencies in the LO matrix elements
A typical input parametrization has the form

$$f(x) \;=\; x^{a_1 - 1}\,(1-x)^{a_2}\, e^{a_3 x}\, \left[\,1 + e^{a_4}\, x\,\right]^{a_5}$$
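A direct transcription of this parametrization in code (the parameter values one would insert are fit outputs, not shown here):

```python
import numpy as np

def f0(x, a1, a2, a3, a4, a5):
    """Input parametrization at Q0:
    f(x) = x^(a1-1) (1-x)^a2 exp(a3*x) [1 + exp(a4)*x]^a5"""
    return (x**(a1 - 1.0) * (1.0 - x)**a2 * np.exp(a3 * x)
            * (1.0 + np.exp(a4) * x)**a5)
```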
PDF Errors
So we have optimal values (minimum χ^2) for the d = 20 (22, 24) free pdf parameters in the global fit: {a_µ}, µ = 1, …, d
Varying any of the free parameters from its optimal value will increase the χ^2
It is much easier to work in an orthonormal eigenvector space, determined by diagonalizing the Hessian matrix obtained in the fitting process

$$H_{\mu\nu} \;=\; \frac{1}{2}\, \frac{\partial^2 \chi^2}{\partial a_\mu\, \partial a_\nu}$$
To estimate the error on an observable X(a), due to the experimental uncertainties of the data used in the fit, we use the Master Formula

$$\left(\Delta X\right)^2 \;=\; \Delta\chi^2 \sum_{\mu,\nu} \frac{\partial X}{\partial a_\mu} \left(H^{-1}\right)_{\mu\nu} \frac{\partial X}{\partial a_\nu}$$
PDF Errors
Recap: 20 (22, 24) eigenvectors, with eigenvalues spanning a range of more than 10^6
The largest eigenvalues (low-number eigenvectors) correspond to the best-determined directions; the smallest eigenvalues (high-number eigenvectors) correspond to the worst-determined directions
It is easiest to use the Master Formula in the eigenvector basis, where it becomes

$$\left(\Delta X\right)^2 \;=\; \frac{1}{4} \sum_i \left( X_i^{+} - X_i^{-} \right)^2$$

where X_i^+ and X_i^- are the values of the observable X when traversing a distance corresponding to the tolerance T (= sqrt(Δχ^2)) along the ith eigenvector direction
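A minimal sketch of this recipe using the LHAPDF 6 Python bindings (the set name and the toy observable are illustrative; note that for CTEQ eigenvector sets this ΔX corresponds to 90% CL, as discussed below):

```python
import lhapdf  # LHAPDF 6 Python bindings

def hessian_uncertainty(setname, observable):
    """Master Formula in the eigenvector basis:
    (DX)^2 = (1/4) sum_i (X_i+ - X_i-)^2,
    where members 2i-1 and 2i are the +/- displacements along
    eigenvector direction i (member 0 is the central pdf)."""
    pdfs = lhapdf.mkPDFs(setname)
    X = [observable(p) for p in pdfs]
    npairs = (len(pdfs) - 1) // 2
    dX2 = sum((X[2*i - 1] - X[2*i])**2 for i in range(1, npairs + 1))
    return X[0], 0.5 * dX2**0.5

# toy observable: the gluon density at x = 0.01, Q = 100 GeV
gluon = lambda p: p.xfxQ2(21, 0.01, 100.0**2)
central, err = hessian_uncertainty("cteq66", gluon)  # set name assumed
```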
PDF Errors
What is the tolerance T? This is one of the most controversial questions in global pdf fitting
We have 2794 data points in the CTEQ6.6 data set (on the order of 2000 for CTEQ6.1)
Technically speaking, a 1-sigma error corresponds to a tolerance T (= sqrt(Δχ^2)) = 1
This results in far too small an uncertainty from the global fit
◆ with data from a variety of processes, from a variety of experiments, from a variety of accelerators
For CTEQ6.1, we chose Δχ^2 = 100 to correspond to a 90% CL limit
MSTW in the past has chosen a smaller Δχ^2 criterion; as we will see, MSTW and CTEQ now agree well
The treatment for new PDFs is more sophisticated for both groups than shown here
What do the eigenvectors mean?
Each eigenvector corresponds to a linear combination of all 20 (22, 24) pdf parameters, so in general an individual eigenvector doesn’t mean anything by itself
However, with 20 (22, 24) dimensions, an eigenvector will often have a large component from a particular direction
Take eigenvector 1 (for CTEQ6.1), i.e. error pdf’s 1 and 2
It has a large component sensitive to the small-x behavior of the u-quark valence distribution
Not surprising, since this is the best-determined direction
Aside: PDF re-weighting
Any physical cross section at a hadron-hadron collider depends on the product of the two pdf’s for the partons participating in the collision, convoluted with the hard partonic cross section
Nominally, if one wants to evaluate the pdf uncertainty for a cross section, this convolution should be carried out 41 times (for CTEQ6.1): once for the central pdf and 40 times for the error pdf’s
However, the partonic cross section is not changing, only the product of the pdf’s
So one can evaluate the full cross section for one pdf (the central pdf) and then evaluate the pdf uncertainty for a particular cross section by taking the ratio of the product of the pdf’s (the pdf luminosity) for each of the error pdf’s to that of the central pdf
This works exactly for fixed-order calculations and works well enough (see later) for parton shower Monte Carlo calculations
Most experiments now have code to do this easily… and many programs will do it for you (MCFM)
The re-weighting factor is

$$\frac{f_{i,a/A}(x_a, Q^2)\; f_{i,b/B}(x_b, Q^2)}{f_{0,a/A}(x_a, Q^2)\; f_{0,b/B}(x_b, Q^2)}$$

where f_i is the error pdf and f_0 the central pdf
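A minimal sketch of such re-weighting with the LHAPDF 6 Python bindings; the event attributes (parton ids, momentum fractions, scale) are assumed to be stored with each event, and the set name is illustrative:

```python
import lhapdf

pdfs = lhapdf.mkPDFs("cteq66")  # member 0 = central, 1..40 = error pdf's (assumed set)

def pdf_weight(member, id_a, id_b, xa, xb, Q2):
    """Re-weighting factor for error pdf 'member':
    w = f_i(xa) f_i(xb) / [ f_0(xa) f_0(xb) ]
    (xfxQ2 returns x*f(x); the x factors cancel in the ratio)."""
    num = pdfs[member].xfxQ2(id_a, xa, Q2) * pdfs[member].xfxQ2(id_b, xb, Q2)
    den = pdfs[0].xfxQ2(id_a, xa, Q2) * pdfs[0].xfxQ2(id_b, xb, Q2)
    return num / den
```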
A very useful tool
Allows easy calculation and comparison of pdf’s
Uncertainties
◆ uncertainties get large at high x
◆ the uncertainty for the gluon is larger than that for the quarks
◆ pdf’s from one group don’t necessarily fall into the uncertainty band of another… it would be nice if they did
Uncertainties and parametrizations
Beware of extrapolations to x values smaller than those of the data available in the fits, especially at low Q^2
The parametrization may artificially reduce the apparent size of the uncertainties
Compare, for example, the uncertainty for the gluon at low x from the recent neural-net global fit to that from global fits using a parametrization
(At Q^2 = 2 GeV^2; note the gluon can range negative at low x)
Correlations
Consider a cross section X(a); the ith component of the gradient of X is ∂X/∂a_i
Now take two cross sections X and Y
◆ or one or both can be pdf’s
Consider the projection of the gradients of X and Y onto a circle of radius 1 in the plane of the gradients in the parton parameter space
The circle maps onto an ellipse in the XY plane
The angle φ between the gradients of X and Y is given by the correlation cosine written below; the ellipse itself has a corresponding parametric form
• If two cross sections/pdf’s are very correlated, then cos φ ~ 1; uncorrelated, cos φ ~ 0; anti-correlated, cos φ ~ -1
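In the eigenvector basis, the correlation cosine takes the standard Hessian form used by CTEQ:

$$\cos\varphi \;=\; \frac{\vec{\nabla}X\cdot\vec{\nabla}Y}{\Delta X\,\Delta Y} \;=\; \frac{1}{4\,\Delta X\,\Delta Y}\sum_i \left(X_i^{+}-X_i^{-}\right)\left(Y_i^{+}-Y_i^{-}\right)$$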
PDF uncertainties at the LHC
[Figure: pdf luminosity uncertainties vs mass for gg, gq and qQ initial states]
Note that for much of the SM/discovery range, the pdf luminosity uncertainty is small
We need a similar level of precision in the theory calculations
It will be a while, i.e. not in the first fb^-1, before the LHC data start to constrain pdf’s
NB I: the errors are determined using the Hessian method for a Δχ^2 of 100, using only experimental uncertainties, i.e. no theory uncertainties
NB II: the pdf uncertainties for W/Z cross sections are not the smallest
NB III: the tT uncertainty is of the same order as that for W/Z production
[Figure: pdf uncertainties for W/Z, tT and Higgs production]
Ratios: LHC to Tevatron pdf luminosities
Processes that depend on qQ initial states (e.g. chargino pair production) have small enhancements
Most backgrounds have gg or gq initial states and thus large enhancement factors (500 for W + 4 jets, for example, which is primarily gq) at the LHC
W + 4 jets is a background to tT production both at the Tevatron and at the LHC
tT production at the Tevatron proceeds largely through a qQ initial state, and so qQ -> tT has an enhancement factor at the LHC of only ~10
Luckily, tT has a gg initial state as well as qQ, so the total enhancement at the LHC is a factor of ~100
◆ but the increased W + jets background means that a higher jet cut is necessary at the LHC
◆ known known: jet cuts have to be higher at the LHC than at the Tevatron
[Figure: LHC/Tevatron pdf luminosity ratios for qQ, gq and gg]
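As an illustration, a rough numerical sketch of a pdf luminosity with LHAPDF; the definition dL/dτ = ∫_τ^1 (dx/x) f_a(x) f_b(τ/x) is the common one, but the set name, grid density and crude trapezoidal integration are my choices (for a gg luminosity the pp vs p-pbar distinction is irrelevant):

```python
import numpy as np
import lhapdf

pdf = lhapdf.mkPDF("cteq66")  # assumed set name

def dlumi_dtau(tau, mu2, pid_a=21, pid_b=21, npts=400):
    """Parton luminosity dL/dtau = int_tau^1 dx/x f_a(x) f_b(tau/x),
    integrated on a log-x grid (xfxQ2 returns x*f, hence the divisions)."""
    lx = np.linspace(np.log(tau), 0.0, npts)
    x = np.exp(lx)
    vals = [pdf.xfxQ2(pid_a, xi, mu2) / xi
            * pdf.xfxQ2(pid_b, tau / xi, mu2) / (tau / xi)
            for xi in x]
    return np.trapz(vals, lx)

# gg luminosity ratio, LHC (14 TeV) over Tevatron (1.96 TeV), at s_hat = (200 GeV)^2
shat = 200.0**2
ratio = dlumi_dtau(shat / 14000.0**2, shat) / dlumi_dtau(shat / 1960.0**2, shat)
```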
Ratios: LHC to Tevatron pdf luminosities
But wait, we’re not running at 14 TeV, but at 10 TeV (if we’re lucky)
In fact, we’ll probably never reach 14 TeV, just as we never reached 2 TeV at the Tevatron
[Figure: the same luminosity ratios for gg, gq and qqbar at the reduced energy]
Precision benchmarks: W/Z cross sections at the LHC
The CTEQ (CTEQ6.1) and MRST (MRST2004) NLO predictions are in good agreement with each other
The NNLO corrections are small and negative
◆ NB: this is a specific statement for the MRST2004 PDF’s, i.e. not true for the MSTW2008 PDFs
NNLO is mostly a shift in normalization (K-factor); NLO predictions are adequate for most predictions at the LHC
Rapidity distributions and NNLO
The effect of NNLO is just a small normalization factor over the full rapidity range
NNLO predictions using NLO pdf’s are close to the full NNLO results, but outside of the (very small) NNLO error band
Correlations
As expected, the W and Z cross sections are highly correlated
There is an anti-correlation between the tT and W cross sections
◆ more glue for tT production (at higher x) means fewer anti-quarks (at lower x) for W production
◆ there is mostly no correlation between the H and W cross sections
Heavy quark mass effects in global fits
CTEQ6.1 (and previous generations of global fits) used the zero-mass VFNS scheme
With the newer sets of pdf’s (CTEQ6.5 onwards), heavy-quark mass effects are consistently taken into account, both in the global fitting cross sections and in the pdf evolution
In most cases, the resulting pdf’s are within the CTEQ6.1 pdf error bands
But not at low x (in the range of W and Z production at the LHC)
Heavy-quark mass effects are only appreciable near threshold
◆ e.g. the prediction for F2 at low x and Q at HERA is smaller if the masses of the c and b quarks are taken into account
◆ thus, the quark pdf’s have to be bigger in this region to give an equivalent fit to the HERA data
Implications for LHC phenomenology: the inclusion of heavy-quark mass effects affects the DIS data in the x range appropriate for W/Z production at the LHC, improving the CTEQ6.5(6) vs CTEQ6.1 W/Z agreement
…but MSTW2008 also has increased W/Z cross sections at the LHC, due at least partially to improvements in their heavy-quark scheme
◆ so CTEQ6.6 and MSTW2008 are now in good agreement
Correlations with Z, tT
Define a correlation cosine between two quantities
• If two cross sections are very correlated, then cos φ ~ 1; uncorrelated, cos φ ~ 0; anti-correlated, cos φ ~ -1
• Note that the correlation curves to Z and to tT are mirror images of each other
• By knowing the pdf correlations, one can reduce the uncertainty for a given cross section by taking its ratio to a benchmark cross section iff cos φ > 0; e.g. Δ(σW+/σZ) ~ 1%
• If cos φ < 0, the pdf uncertainty for one cross section normalized to a benchmark cross section is larger
• So, for gg -> H (500 GeV): the pdf uncertainty is 4%, but Δ(σH/σZ) ~ 8%
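In the Gaussian/Hessian approximation, the effect on a ratio follows from standard error propagation with the correlation cosine:

$$\left(\frac{\Delta(X/Y)}{X/Y}\right)^2 \;\simeq\; \left(\frac{\Delta X}{X}\right)^2 + \left(\frac{\Delta Y}{Y}\right)^2 - 2\,\frac{\Delta X}{X}\,\frac{\Delta Y}{Y}\,\cos\varphi$$

so the pdf uncertainty of the ratio shrinks for cos φ > 0 and grows for cos φ < 0, as in the examples above.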
Modified LO pdf’s (LO*)
What about pdf’s for parton shower Monte Carlos?
◆ the standard has been to use LO pdf’s, most commonly CTEQ5L/CTEQ6L, in Pythia, Herwig, Sherpa, ALPGEN/Madgraph+…
…but
◆ LO pdf’s can create LHC cross sections/acceptances that differ in both shape and normalization from NLO
▲ due to the influence of HERA data
▲ and the lack of ln(1/x) and ln(1-x) terms in leading-order pdf’s and evolution
◆ …and they are often outside the NLO error bands
◆ experimenters use the NLO error pdf’s in combination with the central LO pdf even with this mismatch
▲ this causes an error in pdf re-weighting, due to the non-matching of the Sudakov form factors
◆ predictions for inclusive observables from LO matrix elements for many of the collider processes that we want to calculate are not so different from those from NLO matrix elements (aside from a reasonably constant K-factor)
Modified LO pdf’s (LO*)
…but
◆ we (and in particular Torbjorn Sjostrand) like the low-x behavior of LO pdf’s and rely upon them for our models of the underlying event at the Tevatron and its extrapolation to the LHC
◆ as well as for calculating low-x cross sections at the LHC
◆ and no one listened to me when I urged the use of NLO pdf’s
Thus, the need for modified LO pdf’s
Parton showers and PDFs
There’s a difference between tree-level fixed-order predictions and those from parton shower Monte Carlos
The incoming partons can radiate hard gluons, pushing the incoming partons off mass shell
We expect that there will be a kinematic suppression of distributions formed by parton showers compared to those generated by tree-level calculations
Although this is a sizeable effect for low-Q^2 processes, the effect is diminished for most processes we want to calculate at the LHC
CTEQ talking points
LO* pdf’s should behave as LO as x -> 0, and be as close to NLO as possible as x -> 1
LO* pdf’s should be universal, i.e. results should be reasonable run on any platform with nominal physics scales
It should be possible to produce error pdf’s with
◆ similar Sudakov form factors
◆ similar UE
◆ so that pdf re-weighting makes sense
LO* pdf’s should describe the underlying event at the Tevatron with a tune similar to CTEQ6L (for convenience) and extrapolate to a reasonable UE at the LHC
Where are the differences between LO and NLO partons?
◆ at low x and high x for the up quark
▲ missing ln(1-x) terms in the LO ME (high x)
▲ missing ln(1/x) terms in the LO ME (low x)
◆ everywhere for the gluon
LO and NLO distributions
The shapes of the cross sections shown to the right are well described by LO matrix elements using NLO PDFs, but distortions are evident when LO PDFs are used
The normalizations are not fully described using LO matrix elements (K-factor)
CTEQ mod LO PDFs
Include in the LO* fit (weighted) pseudo-data for characteristic LHC processes, produced using CTEQ6.6 NLO pdf’s with NLO matrix elements (using MCFM), along with the full CTEQ6.6 dataset (2885 points)
◆ low-mass bB
▲ fixes the low-x gluon for the UE
◆ tT over the full mass range
▲ higher-x gluon
◆ W+, W-, Z0 rapidity distributions
▲ quark distributions
◆ gg -> H (120 GeV) rapidity distribution
Allow the total momentum in the proton to exceed 1.00 if needed to fit the real and pseudo-data (the momentum sum rule is written out below)
◆ other sum rules remain intact
Use 1-loop or 2-loop αs
◆ two different fits and thus two different PDFs
◆ will concentrate on the 2-loop results here
◆ keep αs(mZ) fixed at the 1-loop (2-loop) world averages, for better connection to other CTEQ PDFs
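For reference, the momentum sum rule that is being relaxed is, in its standard form,

$$\int_0^1 dx\; x\left[\,g(x,Q^2) + \sum_q \left(q(x,Q^2) + \bar{q}(x,Q^2)\right)\right] \;=\; 1$$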
There is also another technique, involving the use of scales
Some observations
The χ^2 improves with the momentum sum rule left free
◆ without pseudo-data in the fit, the momentum sum increases by ~3-4%
The pseudo-data has conflicts with the global data set
◆ that’s the motivation for the modified pdf’s
Requiring a better fit to the pseudo-data increases the χ^2 of the LO fit to the global data set by about 10-20% (although this is not the primary concern; the fit to the pseudo-data is)
◆ the fit prefers more momentum (1.10 for 1-loop and 1.14 for 2-loop); it mostly goes into the gluon distribution
No strong preference for 1-loop or 2-loop αs that I can see with fits containing weighted pseudo-data; without pseudo-data, the fit prefers 2-loop
The normalization of the pseudo-data (the needed K-factor) gets closer to 1
◆ 1.00 for W production (instead of 1.15)
◆ ~1.1 for tT production (instead of 1.4)
◆ ~1.4 for Higgs (120 GeV) (instead of 1.7)
Results
The mod-LO W+ rapidity distribution agrees better with the NLO prediction in both magnitude and shape
The agreement at 7 and 10 TeV (not in the fit) is even better
Cross sections and uncertainties
In the ATLAS Higgs group, we’ve just gone through an exercise of compiling predictions for Higgs production at LO/NLO/NNLO at a number of LHC center-of-mass energies
This has involved a comparison of competing programs for some processes, a standardization of inputs, and a calculation of uncertainties, including those from PDF’s and the variation of αs
◆ from eigenvectors in CTEQ/MSTW
◆ using the NNPDF approach
This is an exercise that other physics groups will be going through as well, both in ATLAS and in CMS
◆ the ATLAS Standard Model group now, for example
There are a lot of tools/procedures out there now, and a lot of room for confusion
For example
Use the Durham PDF plotter for the gluon uncertainties at the LHC, comparing CTEQ6.6 and MSTW2008
The MSTW uncertainty appears substantially less than that of CTEQ6.6
◆ as was typical in the past, where the Δχ^2 used by CTEQ was ~100 and that of MRST/MSTW ~50
But the CTEQ uncertainty plotted is 90% CL; the MSTW2008 one is 68% CL
I’ve often seen plots in presentations showing PDF uncertainties without specifying whether they are at 68% or 90% CL (or sometimes even what PDF is plotted!)
…always compare on a consistent basis
See A. Vicini’s talk at the last PDF4LHC meeting
PDF errors
So now, seemingly, we have more consistency in the size of PDF errors, at least for this particular example
The eigenvector sets represent the PDF uncertainty due to the experimental errors in the datasets used in the global fitting process
Another uncertainty is that due to the variation in the value of αs
It has been traditional in the past for the PDF groups to publish PDF sets for variant values of αs, typically over a fairly wide range
◆ experiments always like to demonstrate that they can reject a value of αs(mZ) of 0.128
MSTW has recently tried to better quantify the uncertainty due to the variation of αs by performing global fits over a finer range, taking into account any correlations between the value of αs and the PDF errors
…more recent studies by CTEQ and NNPDF have shown that for their PDF’s the correlation between the αs errors and the PDF errors is small enough that the two sources can be added in quadrature
αs(mZ) and its uncertainty
But of course life is not that simple: different values of αs, and of its uncertainty, are in use
CTEQ and NNPDF use the world average (actually 0.118 for CTEQ and 0.119 for NNPDF), whereas MSTW2008 uses 0.120, as determined from their fit
The latest world average (from Siggi Bethke -> PDG) is
◆ αs(mZ) = 0.1184 +/- 0.0007
What does the error represent?
◆ Siggi said that only one of the results included in his world average was outside this range
◆ suppose we’re conservative and say that +/- 0.002 is a 90% CL
◆ a joint CTEQ/NNPDF study using this premise will appear in the Les Houches proceedings
Could it be possible for all global PDF groups to use the world-average value of αs in their fits, plus a prescribed range for its uncertainty (if not 0.002, then perhaps another acceptable value)?
I told Albert that if he could persuade everyone of this, I personally would nominate him for the Nobel Peace Prize
New CTEQ6.6 αs series
The CTEQ6.6 central value is αs(mZ) = 0.118
Error PDFs are provided with αs values of
◆ 0.116
◆ 0.117
◆ 0.118
◆ 0.119
◆ 0.120
◆ available in the next version of LHAPDF
The αs error is roughly half that of the PDF error
(My) interim recommendation for ATLAS
Cross sections should be calculated with MSTW2008 and CTEQ6.6
The upper range of the prediction should be given by the upper limit of the error prediction, using the prescription for combining the αs uncertainty with the error PDFs
◆ in quadrature for CTEQ6.6
◆ using the αs eigenvector sets for MSTW2008
Ditto for the lower limit
So for a Higgs mass of 120 GeV at 14 TeV, the gg cross section limits would be 34.9 pb (defined by the CTEQ6.6 lower limit, αs = 0.120) and 41.4 pb (defined by the MSTW2008 upper limit)
◆ note that the central predictions for CTEQ6.6 (35.74 pb) and MSTW2008 (38.45 pb) differ not because of the gluon distributions (which coincide very closely in the relevant x range), but because of the different values of αs used
Where possible, NNPDF predictions (and uncertainties) should be used as well in the comparisons
PDF Benchmarking 2010
Benchmark processes, all to be calculated:
(i) at NLO (in the MSbar scheme)
(ii) in 5-flavour quark schemes (definition of scheme to be specified)
(iii) at the 7 TeV [and 14 TeV] LHC
(iv) for central-value predictions and +-68% CL [and +-90% CL] pdf uncertainties
(v) with +- αs uncertainties
(vi) repeated with αs(mZ) = 0.119 (prescription for combining with pdf errors to be specified)
Using (where the processes are available) MCFM 5.7
◆ a gzipped version prepared by John Campbell using the specified parameters (and the new CTEQ6.6 αs series)
◆ this will be shipped out this week
Cross Sections
1. W+, W-, and Z total cross sections and rapidity distributions; total cross section ratios W+/W- and (W+ + W-)/Z; rapidity distributions at y = -4, -3, …, +4; and also the W asymmetry A_W(y) = (dW+/dy - dW-/dy)/(dW+/dy + dW-/dy), using the following parameters taken from PDG 2009:
◆ MZ = 91.188 GeV
◆ MW = 80.398 GeV
◆ zero-width approximation
◆ GF = 1.16637 x 10^-5 GeV^-2
◆ other EW couplings derived using tree-level relations
◆ BR(Z -> ll) = 0.03366
◆ BR(W -> lnu) = 0.1080
◆ CKM mixing parameters from eq. (11.27) of the PDG2009 CKM review:

          | 0.97419  0.2257   0.00359  |
  V_CKM = | 0.2256   0.97334  0.0415   |
          | 0.00874  0.0407   0.999133 |

◆ scales: µR = µF = MZ or MW
Cross Sections
2. gg -> H total cross sections at NLO
◆ MH = 120, 180 and 240 GeV
◆ zero-Higgs-width approximation, no BR
◆ top loop only, with mtop = 171.3 GeV in sigma_0
◆ scales: µR = µF = MH
3. ttbar total cross section at NLO
◆ mtop = 171.3 GeV
◆ zero-top-width approximation, no BR
◆ scales: µR = µF = mtop
The following are optional:
4. inclusive jet cross section distribution at NLO
◆ use the FastNLO "fnl0004" option with sqrt(s) = 14 TeV, D = 0.7 kT algorithm
◆ jet rapidity bin: 0 < |yJ| < 0.8
◆ scales: µR = µF = pT^jet
5. Drell-Yan NLO dσ/dM/dy at y = 0, for e.g. M = 7 and 14 GeV
◆ scales: µR = µF = M
◆ coupling: alpha_em(0) = 1/137.036
The “Future”
How well do we know PDFs going into the start of LHC running?
For much of the kinematic region, the uncertainty is pretty small
Primarily because of the precision data that came from HERA
And the precision will improve as the final HERA data sets are released to the public
[Figure: pdf luminosity uncertainties for gg, gq and qqbar]
The Future, continued
Of course, as the LHC data come in, we will use them in future PDF fits
But in order to be useful, the precision has to be high, and most early data will not fulfil that requirement
The global fits are dominated not by statistical errors but by systematic errors… and their correlations
Normalization
It may be useful at the beginning (and actually throughout the running) of the LHC to normalize cross sections to the W/Z cross sections
One does have to worry about correlations/anti-correlations with the W/Z cross sections, though
What about Z production at high pT?
◆ unlike inclusive Z production, which is dominated by qqbar initial states, Z production at high pT proceeds primarily through gq initial states
[Figure: % pdf uncertainty for Z production with pT^Z > 25 GeV/c]
For pT^Z > 200 GeV/c, not only does the Z uncertainty become small, but a correlation also develops with the tT cross section, because the gluon is probed in a similar x range
…and finally
http://www.youtube.com/watch?v=ADcf3cOpiT8