RISK-1027: Estimate Accuracy: Dealing with Reality
John K. Hollmann PE CCE CEP
John K. Hollmann Biography
• Degree: BS Mining Engineering and MBA
• University: Penn State University (BS) and Indiana University of Pennsylvania (MBA)
• Years of Experience: 35 years (10 in Mining Engineering; 25 in Cost Engineering)
• Professional Field: Owner of Validation Estimating LLC; helps owner companies improve their Cost Engineering capabilities and competencies (including decision and risk management)
• Something you do not know about me: Editor/Lead Author of AACE’s Total Cost Management Framework; AACE Fellow and Life Member; outstanding senior art student in high school (which raises the question: is estimating an art or a science?)
Abstract
• Reviews the findings of over 50 years of empirical cost estimate accuracy research and compares this reality to unrealistic theoretical depictions and prevailing biased management expectations.
• Highlights risk analysis methods documented in recent AACE Recommended Practices that produce outputs based on and comparable to empirical reality.
• The presentation is both a fundamental reference on the topic of accuracy and a call for our industry to use reliable practices and speak the truth to management.
• Attendees will gain an understanding of estimate accuracy reality, the risks that drive it, management’s biases about it, and methods that analyze risks and address the biases in a way that results in more realistic accuracy forecasts, better contingency estimates and more profitable investments.
Outline
• Empirical Estimate Accuracy Studies
• Challenges to the Data: Nowhere To Hide
• Flawed Targets and Misguided Practices
• A Brief History of How We Got Where We Are
• A Note About Small Projects
• Recommended Risk Analysis Methods
• Conclusion
Empirical Estimate Accuracy Studies
• Accuracy is a measure of how a cost estimate will differ from the final actual outcome
– A measure of cost uncertainty or risk (which are synonymous in TCM)
• Examined numerous published studies of actual project cost variation from estimates (i.e., accuracy)
• Focused on process plant scope (i.e., oil, gas, chemical, mining, power, etc.) and infrastructure (e.g., transport) often associated with plants
– These projects are characterized by a fair amount of complexity and a tendency for the work scopes to be fairly unique, subject to change and sometimes with new technology
– Excluded commercial, aerospace, IT, weapons, etc., although they have analogous estimate accuracy research (with more severe outcomes)
• Studies are from a variety of academic, research, consulting and industry practitioner sources
Study and Dataset Characteristics
• Selected 12 studies that included over 1,000 projects, with sample sets ranging from about 20 to 250 projects each
– Among these are the work of Hackney, Merrow, and Flyvbjerg
• On average the projects were large enough to be of relevance to the success of the enterprise (i.e., a few million $US up to megaprojects)
• Most sought to answer “what is the accuracy of our estimates and why?” in response to a perceived preponderance of overruns
• The project samples are not purely random, but the authors generally considered their samples to be somewhat representative of their class
• The quality of the datasets varied, but in general the authors lament the poor state of historical project records and data
– In many cases, the only reliable data was the cost at the time of project funding approval and the cost at completion
– However, a number of the studies were corrected for major scope changes and escalation impacts
Selected Accuracy Studies of Actual Projects
From Authorization (typically between AACE Class 3 and 4)

| Study | Projects | Reference Point | Adjusted? | P10 or similar | P50 or mean (μ) | P90 or similar | Source Info |
|---|---|---|---|---|---|---|---|
| Biery, IPA | 100 Mining | From Authorized Feasibility | Yes, scope & time | -15% | 0% | +43% | May 2009 CIMM p8 |
| Bertisen & Davis | 63 Mining & Metals | From Bankable Feasibility | No | ~-3% | +16% | ~+70% | Figure 1 |
| Merrow, IPA | 56 Hydro | From “Appraisal” | Yes, time | -15% | μ = +24% | +65% | 1991 AACE |
| Flyvbjerg | 258 Transport | From estimate at “Decision” | Yes, time | ~-15% | ~+15% | ~+100% | Figure 1 |
| Lundberg, et al | 167 Road/Rail | Varied reference | No | ~-32% | μ = +15% | ~+62% | Table 2 |
| Merrow, RAND | 56 Mega Process Plant | From start of “Detailed Engineering” | Yes, time | -14% | μ = +88% | +190% | Table 4.1 |
| Oil & Gas Journal | 188 US Pipeline 2000-2008 | From FERC filing | No | -21% | 0% | +34% | 2009 OGJ database |
| Hackney | 22 Process Plants | <500 Rating | Yes, scope & time | +2% | +10% | +39% | Figure 18.1 |
| Merrow, RAND | 30 Newer Technology | RAND Class 2 | Yes, scope & time | +7% | μ = +28% | +59% | Table 4.3 |
| Pincock, Allen and Holt | 21 Mining/Metals | From Feasibility | Yes, time | 3 of 21 underran | μ = +17% | 2 worst μ = +55% | Pincock Perspectives, Nov 2000, Table 1 |
| Deloitte | Aus Water Projects for 5 states | From Budget | No | — | μ for best state = +8% | μ for worst state = +80% | Figure 3 |
| Schroeder | 36 Complex Refinery Turnarounds; 2005 | From Budget | Uncertain | +8% | μ = +23% | +38% | Figure 2 |
| RANGE | | | | -32% to +8% | 0% to +88% | +34% to +190% | |
Accuracy Reality As a Frequency Diagram
The p10/p50/p90 averages of empirical data were plotted as log-normal distributions (in “3-point” terms, this approximates to about -20/+20/+120)
Technical Notes Regarding the Studies
• The “accuracy” shown is the percentage variation of the final actual cost from the reference estimate
– The name of the reference point varies, but it is usually the funded amount
– Estimate names are industry specific; e.g., in mining, “Feasibility” means the funding estimate, but in other industries that implies an earlier estimate
• The reference estimates usually included contingency, so the overrun percentages shown are understated with respect to the base estimates
– From experience, the contingency at sanction is typically 5 to 15 percent
• For comparison, the accuracy values were approximated as p10/p50/p90 confidence levels (each study provided different descriptive statistics)
– If a mean (μ) was provided, it is shown as such in the p50 column.
– If a confidence p-value was not provided, it was estimated from the mean and standard deviation (the ~ symbol indicates an approximation) assuming a normal distribution (i.e., p90 = mean + 1.28 x standard deviation). This probably underestimates the high range given that actual/estimate accuracy data is typically skewed to the high side (i.e., it is not Normally distributed).
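The Normal-assumption approximation described above can be sketched in a few lines; the mean and standard deviation below are hypothetical illustration values, not figures from any of the studies:

```python
# Approximate p10/p50/p90 from a study's reported mean and standard
# deviation, assuming a Normal distribution as described in the text.
# The mean and sd here are hypothetical, for illustration only.
Z90 = 1.2816                    # standard normal 90th-percentile z-score
mean_overrun = 0.15             # e.g., a reported mean overrun of +15%
sd_overrun = 0.30               # e.g., a reported standard deviation of 30%

p10 = mean_overrun - Z90 * sd_overrun   # about -23%
p50 = mean_overrun                      # Normal: median equals the mean
p90 = mean_overrun + Z90 * sd_overrun   # about +53%
print(f"~p10 {p10:+.0%}  ~p50 {p50:+.0%}  ~p90 {p90:+.0%}")

# Caveat from the text: actual accuracy data is skewed to the high side
# (roughly log-normal), so this Normal-based p90 likely understates the
# true high range; the ~ prefix flags the approximation.
```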
Key Observations From The Studies
• In summary, the approximate range of ranges for accuracy around the funded amount is as follows:
– P10: -32% to +8%
– P50 or mean: 0% to +88%
– P90: +34% to +190%
• In NO case was the high (nominal p90) range ever less than +34% of the project estimates including contingency at the time of funding
– This is +40 to 50% over the base estimate excluding assumed contingency
• Overruns are usually the mean or median outcomes
• This is the reality for typical large process and infrastructure industry projects, with all their many imperfections and risks
• The typical Level of Scope Definition is from AACE Class 4 to Class 3
– We know that the Level of Scope Definition at the time of authorization is a key risk driver, so we will examine that issue next
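The step from +34% over the funded amount to +40-50% over the base estimate is simple arithmetic once contingency is backed out; a quick sketch, using the typical 5 to 15 percent contingency range mentioned above:

```python
# Re-express the lowest observed p90 overrun (+34% versus the funded
# amount) as an overrun versus the base estimate, for several assumed
# contingency percentages: funded = base * (1 + contingency).
overrun_vs_funded = 0.34

for contingency in (0.05, 0.10, 0.15):
    actual_vs_base = (1 + contingency) * (1 + overrun_vs_funded) - 1
    print(f"contingency {contingency:.0%}: overrun vs base = {actual_vs_base:+.0%}")

# With 5-15% contingency, +34% over the funded amount works out to roughly
# +41% to +54% over the base estimate, consistent with the +40-50% cited.
```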
Accuracy Progression with Scope Development
HACKNEY (1965, 28 Estimates)
• Graph shows p10-mean-p90
• Conventional process plant
• Range around the Base
• Adjusted for scope and time
[Chart: Percent that Actual Varies from BASE Estimate vs. Hackney’s Definition Rating; derived from data in Figure 18.1]
Accuracy Progression with Scope Development
RAND/Merrow (1981, 106 Estimates)
• Graph shows p10-mean-p90
• Process plant with non-standard technology
• Range around the Funded Amount
• Adjusted for scope and time
[Chart: Percent that Actual Varies from Funding Estimate; derived from data in Table 4.1]
Accuracy Progression with Scope Development
Harbuck (2007, 3 Transport Studies)
• Average of 3 Transport Project Study Averages
• Range around the Construction Estimate
• No stated adjustment for scope and time
[Chart: Percent that Actual Varies from Construction Estimate (0 to 60%) vs. Project Development Stage (Planning, Preliminary, Final Design, Construction); derived from data in Figure 2. The P10/P90 range of one study was about +/-50% for this phase.]
Key Observations From Progression Studies
• Each study had different characteristics, which are shown in the charts
• Each used a different method of rating the scope definition
– In general, de-facto investment decisions are made in the second or third phase illustrated (i.e., approximately AACE Class 4 to Class 3)
• In summary, the key observation from these accuracy progression figures is that even projects funded on better scope definition (i.e., AACE Class 3) tend to overrun
• Also, there is obviously huge uncertainty and huge potential for overruns if the scope is more poorly defined

Note: the Hackney and RAND models based on this data are available as working Excel files from www.aacei.org!
Overruns: Poor Practices or Lying?
• Dr. Bent Flyvbjerg: owners recognize risk but ignore it financially
– “We conclude that the cost estimates used in public debates, media coverage, and decision making for transportation infrastructure development are highly, systematically and significantly deceptive.”
– “those…who value honest numbers should not trust the cost estimates presented by infrastructure promoters and forecasters…institutional checks and balances—including financial, professional, or even criminal penalties for consistent or foreseeable estimation errors—should be developed to ensure the production of less deceptive cost estimates.”
• “Underestimating Costs in Public Works Projects, Error or Lie?”, Bent Flyvbjerg, Mette Skamris Holm, and Søren Buhl, APA Journal, Summer 2002.
• Edward Merrow: Poor recognition of risk and poor practices to deal with it
– “There is widely held belief that large public sector projects tend to overrun because the estimates are deliberately low-balled. Our (IPA’s) analysis of large private sector projects suggests that no Machiavellian explanation is required. Large projects have a dismal track record because we have not adjusted our practices to fit the difficulty that the projects present.”
• “Why Large Projects Fail More Often: Megaproject Failures: Understanding the Effects of Size”, Edward W. Merrow, IPA, Inc. at Joint Meeting of the AACEI National Capital Section and American Society of Mechanical Engineers Section, Wednesday, April 20, 2011
Fooling Nobody: Management Observations
While we want to focus on the facts, management opinion of the situation is important. Here are a couple of observations from a CEO and a financier:
• “..while not wishing to embarrass any of my customers, I would add that many greenfield (upstream oil) projects suffer significant cost overruns. Indeed, as a general rule 30 percent of such projects experience budget overruns of 50 percent” Schlumberger CEO Andrew Gould in a speech to the Sanford Bernstein Strategic Decisions Conference, June 2, 2011
• “the vast majority of mining projects have been coming in way over budget for the past couple of decades. As a result, RCF (his firm) now automatically factors in an average cost overrun of 25% when it considers the cost of mining projects.” Resource Capital Funds‘ (RCF) Jasper Bertisen at a speech at the Northwest Mining Association, Reno, Nov 30, 2011.
Outline
• Empirical Estimate Accuracy Studies
• Challenges to the Data: Nowhere To Hide
• Flawed Targets and Misguided Practices
• A Brief History of How We Got Where We Are
• A Note About Small Projects
• Recommended Risk Analysis Methods
• Conclusion
Challenges to the Data and Response
• Some may argue that these studies are unfair depictions of the accuracy situation because they show the effects of major scope changes, economic volatility and/or black swan events (i.e., unknown unknowns), which management cannot expect teams and risk analysts to capture.
• The evidence suggests otherwise; we cannot stare reality in the face and say it is “unknown”:
– 7 of the 12 studies corrected for price changes over time
– 3 of the studies corrected for major scope changes (i.e., changes to basic project premises such as plant location or capacity)
• In my benchmarking experience with process plants, major scope changes (as opposed to design or plan changes, which owner contingency must cover) are not common. This may be less true for public projects.
• Furthermore, the risk of project systems with weak change management cultures and discipline can be rated and quantified
• Corrected studies show wide accuracy ranges and overruns
Challenges to the Data and Response
• In regards to economic volatility and “black swans”, the evidence suggests that the accuracy findings hold for all time periods and regions (i.e., black swans are the norm for large, long projects)
• Examples from mining of how the findings hold up in good times and bad:
The 2009 Recession?
– “Of the companies that reported project overruns publicly (between Oct 2010 and March 2011), the average overrun was about 71% of the original project cost estimate.” Ernst & Young, “Effective Capital Project Execution: Mining and Metals”, 2011
• Note: the 19 mining projects during these 6 months had overruns from 20% to 120% (presumably overruns <20% were not materially important enough for public reporting!)
For the 50 years before that?
– A study of 18 mining projects covering the period 1965 to 1981 showed an average cost overrun of 33 per cent compared to feasibility study estimates (Castle, 1985).
– A study of 60 mining projects covering the period from 1980 to 2001 showed average cost overruns of 22 per cent, with almost half of the projects reporting overruns of more than 20 per cent (Gypton, 2002).
– A review of 16 mining projects carried out in the 1990s showed an average cost overrun of 25 per cent (Anon, 2000).
D J Noort and C Adams, International Mine Management Conference, Melbourne, Vic, October 2006
It’s Not Our Job? We Can’t Do It? Nonsense
• Some owner estimators and their contractors argue that analyzing all project cost risks is not their job, or that analyzing what AACE calls systemic and economic (escalation) risks is somehow new, mysterious, or not expected or required in analyses
• Still others say that we do not have the tools to quantify “unknown-unknowns” (as this paper shows, we have “known” for 50 years)
• Regardless of the risk breakdown or names used, it is my opinion that not quantifying all risk is a cop-out for owners (Flyvbjerg may be right)
• In contingency estimates that I review, I rarely see documentation that says “we have excluded the most significant risks” or “we ignored past experience in similar projects”, but this is exactly what is occurring.
• Risk is risk, and it is always quantifiable, albeit imperfectly.
• Saying risk is unknowable or dropping the ball on responsibility for its quantification is a recipe for the forecasting failure we see.
• Own It!
Outline
• Empirical Estimate Accuracy Studies
• Challenges to the Data: Nowhere To Hide
• Flawed Targets and Misguided Practices
• A Brief History of How We Got Where We Are
• A Note About Small Projects
• Recommended Risk Analysis Methods
• Conclusion
What Do Our Forecasts Look Like?
• We could take comfort as a profession if we could state that at least the overrunning projects are coming within the cost range of our risk analyses. Unfortunately, our forecast ranges are not even close.
• The author has reviewed many industry risk analyses by owner companies, their EPC contractors and consultants for 15 years, and the p90 value of the risk analyses around the base estimate at the time of funding is rarely >30% over the Base Estimate, let alone +40 to 50%.
• The next slide provides a representative sample of the risk analysis outcomes seen. The typical p90 value, regardless of the project definition, complexity or technology (see the “Notes” column), is about +20 to 25% above the Base Estimate; i.e., only half of the way to the lowest bounds of reality.
• Even in the single case where a p90 value of +45% was derived (after validating the data with experience), that range was exceeded by the next phase estimate, let alone the final cost.
What is Your P90? (Probably Too Low)

| Project Type | Estimate Class | Preparer | P10 | P50 | P90 | Method | Notes |
|---|---|---|---|---|---|---|---|
| Mega, Expansion/Revamp, Refining | Class 4 | Owner 1 | -15% | +13% | +45% | Line Item Ranging w/M-C (validated) | Exceeded P90 at next estimate |
| Large, New, Mining | Class 4 | EPC 1 | +7% | +13% | +19% | Line Item Ranging w/M-C | Remote, developing country |
| Small, Revamp, Refining | Class 3 | Owner 2 | -3% | +5% | +13% | Line Item Ranging w/M-C | Plant-based project |
| Mega, New, Metals | Class 4 | EPC 2 | +0% | +5% | +13% | Line Item Ranging w/M-C | Low wage country |
| Mega, Expansion, Refining | Class 4 | EPC 3 | +2% | +10% | +18% | Line Item Ranging w/M-C | Largest in region, low wage country |
| Mega, Upgrader | Class 4 | EPC 4 | -3% | +12% | +28% | Line Item Ranging w/M-C | Remote, severe winter |

Percent values are over the Base Estimate excluding contingency!
We Are Getting What We Ask For
• Arguably, we are not doing quantitative risk analyses at all. We use seemingly sophisticated methods to reproduce a myopic risk perspective, which seems to be what the owner or client wants to hear.
• So what do owners and clients want to hear? The next slide provides a sample of owner accuracy range expectations as stated in their phase-gate project scope development processes.
• These are compared to AACE’s range-of-ranges in Recommended Practice 18R-97.
• Is it a coincidence that the typical p90 “target” is about the same p90 as teams are producing in their risk analyses (about half of reality)?
Wishful Thinking? (Reality vs. Target Disconnect)

AACE quotes a “range-of-ranges” in 18R-97 (Nov 2011); AACE does NOT provide targets, and the ranges are limited to “moderate” risks (e.g., no new technology, etc.). AACE’s range-of-ranges is shown first, followed by typical owner “targets”:

| Source | Class 5 | Class 4 | Class 3 |
|---|---|---|---|
| AACE 18R-97 (Nov 2011) | -20/-50% to +30/+100% | -15/-30% to +20/+50% | -10/-20% to +10/+30% |
| Oil Sands | -30 to +50% | -20 to +30% | +/-10% |
| Power | -30 to +50% | -15 to +30% | -5 to +15% |
| NOC Oil | -/+50% | -/+30% | -/+15% |
| Mining | -/+50% | -/+25% | -/+10% |
| Integrated Oil | -15 to +50% | -10 to +30% | -10 to +25% |

Owner Targets as stated are typically Statistically Meaningless:
• Assume risk-free projects (other than scope definition)
• No basis; mis-quote AACE (AACE has not quoted target ranges for 15 years)
• NONE state the confidence interval nor what reference the values are around
Owner Delusions or Wishful Thinking
• By quoting accuracy range targets in their processes, companies display a dangerous misunderstanding of risk and estimating.
– Tight targets may result from viewing accuracy as how an estimate will differ from a presumptive lump sum bid rather than from the final actual outcome. This assumes the owner will not have to pay for ANY later systemic risk or event impacts. However, in practical reality, there is no such thing as a “fixed price” on total project costs.
• Once a project plan reaches the target level of scope definition (e.g., Class 3), the residual risk and its potential impact is what it is (more “detail” will not make it go away)
– i.e., no estimator can make the accuracy around the base estimate appreciably better or worse by doing a “better estimate”
• If the project is in a remote region, with a complex process, a complex strategy (i.e., most megaprojects), a weak team and/or newer technology, or has any significant risk events, the company target accuracy ranges have no relevance; they invite “disaster” and drive behavior (you get what you ask for).
• Nassim Taleb (re: The Black Swan) calls this “Tunneling”: the neglect of sources of uncertainty
Owner Delusions or Wishful Thinking
• Owner accuracy targets also have no statistical meaning.
– Owners mis-quote AACE; AACE has not quoted specific target ranges for 15 years
– Few state what confidence interval the targets represent
– Few bother to state what reference the range is around (base or funded amount?)
• In summary, owner accuracy targets recognize nothing but “estimator’s risk” and have no valid research, standard or statistical grounding. We can only conclude that company targets are just wishful thinking.
In Summary (Reality vs. Risk Analyses)
The p10/p50/p90 averages of empirical data were plotted as log-normal distributions (in “3-point” terms, this approximates -10/+10/+30 versus -20/+20/+120)
Our risk analyses reflect an estimator’s perspective wherein the only risks are uncertainty in the estimate and takeoff assumptions and math (i.e., “estimating risks”).
A high range of +30% reflects uncertainty around quantities, rates, price and productivity, assuming the scope is fixed, the execution strategy and plan are not changed, no risk events occur, and if they do, risk responses are always effective. Such analyses are pointless if not dangerous.
Our Failure Has Become Institutionalized
• The prior chart makes the point that arguably we are not doing risk analyses at all, and why research has shown our prevailing practices to be a “disaster”.
• Worse, these flawed analyses have seriously damaged our collective credibility. This damage will be difficult to repair because poor practice and distorted behaviors have become institutionalized.
• For example, in the mining industry, it is now the norm to fund projects at a p80 level of confidence. This has evolved because managers intuitively understand that our p50 “As Estimated” levels in the previous chart are ridiculously low (typically less than 10% on risky projects).
• The p80 “As Estimated” is typically about 15 to 20%; this is more realistic because in fact this “As Estimated” p80 is the “Reality” p50!
• If an analyst applies AACE Recommended Practices covering all project risks (excluding escalation), the p80 value is more likely to be >50%.
• An analyst presenting a +50% p80 value shortly before the authorization Board review meeting is likely to be treated like Cassandra; no one will believe the truth because management has never been told the truth before.
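The p80 coincidence above can be illustrated numerically; the mean and spread below are hypothetical values chosen only to mimic a narrow line-item-ranging output, and the Normal assumption is a simplification:

```python
# Compare a p80 drawn from a narrow "As Estimated" distribution with the
# p50 (median) of the much wider empirical one. All parameters here are
# hypothetical, for illustration only.
Z80 = 0.8416   # standard normal 80th-percentile z-score

# Narrow line-item-ranging output (Normal assumption for simplicity):
as_est_mean, as_est_sd = 0.10, 0.09
p80_as_estimated = as_est_mean + Z80 * as_est_sd
print(f'"As Estimated" p80: {p80_as_estimated:+.0%}')

# Empirical reality from the studies puts p50 (median) outcomes on the
# order of +15 to +20% over the funded amount, so a narrow analysis's p80
# lands near the realistic p50 - the coincidence described in the text.
```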
Outline
• Empirical Estimate Accuracy Studies
• Challenges to the Data: Nowhere To Hide
• Flawed Targets and Misguided Practices
• A Brief History of How We Got Where We Are
• A Note About Small Projects
• Recommended Risk Analysis Methods
• Conclusion
How Did We Dig This Hole?
• In order to provide clarity on how we got into the credibility crisis, we need to review the history of risk analysis and contingency estimating in the process industry. The next slide illustrates a history of systemic risk analysis and empiricism being discovered, lost and rediscovered.
• We also find the valuable introduction of Monte Carlo simulation (MCS) modeling followed by its corruption through “Line-Item Ranging”.
– MCS is typically being applied in a failed method I call “line-item ranging” (as opposed to validated Range Estimating).
– In line-item ranging, the team takes their estimate, assigns ranges to the line items based on brainstorming, and runs the MCS (usually without considering line-item dependency).
– This is the method that research has shown to be a “disaster” for projects with systemic risks. It is the prevailing method used in the “As Estimated” cases.
– It fails in part because of faulty application (i.e., no dependencies are established) but also because team brainstorming is unable to comprehend the impacts of overarching systemic risks (e.g., poor scope definition, complexity, new technology, competency, etc.) on any particular line item.
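The range compression caused by ignoring line-item dependency can be demonstrated with a small Monte Carlo sketch; the item count, the triangular ranges and the all-or-nothing dependency model are hypothetical simplifications, not a depiction of any real analysis:

```python
import random

random.seed(7)

# Hypothetical estimate: 20 equal line items, each given a team-assigned
# triangular range of -10%/+15% around its base value.
N_ITEMS, N_TRIALS = 20, 20_000
LOW, MODE, HIGH = 0.90, 1.00, 1.15

def total_independent():
    # Line-item ranging as commonly practiced: every item varies
    # independently, so high and low draws cancel across items.
    return sum(random.triangular(LOW, HIGH, MODE) for _ in range(N_ITEMS))

def total_dependent():
    # Crude systemic-risk model: one shared factor (e.g., poor scope
    # definition) drives every item the same way, so nothing cancels.
    return random.triangular(LOW, HIGH, MODE) * N_ITEMS

def p10_p90(samples):
    s = sorted(samples)
    return s[int(0.10 * len(s))], s[int(0.90 * len(s))]

base = float(N_ITEMS)  # base estimate, with each item normalized to 1.0
for label, fn in (("independent items", total_independent),
                  ("shared systemic factor", total_dependent)):
    lo, hi = p10_p90([fn() for _ in range(N_TRIALS)])
    print(f"{label:>22}: p10 {lo / base - 1:+.1%}, p90 {hi / base - 1:+.1%}")

# The independent case collapses to a few percent around the mean, while
# the shared-factor case preserves the full item-level range - one reason
# line-item ranging without dependencies understates total cost risk.
```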
The Rediscovery of Empiricism

[Timeline chart: relative contingency method usage and empirically valid methods over time, in three eras — Empiricism Found, Empiricism Lost, Empiricism Rediscovered]
– 1958: AACE, first Estimate Classification standard with accuracy ranges
– 1965: John Hackney CCE, Definition Rating method & accuracy forecast
– 1981-Today: Edward Merrow, Front-End Loading index & contingency analysis model
– 1987: Palisade Corp, @Risk for Lotus 1-2-3; proliferation of “Line-Item Ranging” & downsizing
– 2002: Bent Flyvbjerg, “Lying” & Reference Class Forecasting
– 2004: Juntima (IPA), “Disaster”
– 2006: Kevin Curran, “Handling the Truth” & VBRM benchmarking
– 2009: Glenn Butts, “Denial” & Joint Confidence Level
Relative usage derived from Juntima study (2004)
Outline
• Empirical Estimate Accuracy Studies
• Challenges to the Data: Nowhere To Hide
• Flawed Targets and Misguided Practices
• A Brief History of How We Got Where We Are
• A Note About Small Projects
• Recommended Risk Analysis Methods
• Conclusion
Small Projects Are Different
• Small project systems are dominated by punitive underrun cultures (this is only possible with smaller projects where costs can be hidden in the base)
– They combine undisciplined practice with biased, risk-loaded base estimates
– There is less incentive to study these “gift-horse” cultures because overruns are punished and underruns rewarded, despite being unambiguously associated with wasteful capital spending
– “when a project team sets a soft target, about half of the unneeded funds are usually spent”…“about 70% of small projects underrun” (IPA, Inc. “InSites” 9/7/11)
• I call this the “cresting wave syndrome”. Actual example from one year’s small project portfolio in a plant-based project system: no project overran by more than 10%
Combining Underrun Cultures with Large Projects
• Schizophrenia
– Many owner companies have separate “Major Project” groups to manage large projects that combine engrained risk-ignorant practices with externally prepared, risk-free base estimates from major EPCs (too big to hide the risk with overfunding); a recipe for overruns on the large projects!
– Interestingly, when you look at ALL the projects together, they look serenely “Normal”. Looking at data for populations of projects of all sizes, the P10/P90 range is about +/-20% around the authorization estimate.
Outline
• Empirical Estimate Accuracy Studies
• Challenges to the Data: Nowhere To Hide
• Flawed Targets and Misguided Practices
• A Brief History of How We Got Where We Are
• A Note About Small Projects
• Recommended Risk Analysis Methods
• Conclusion
Recommended Practices - Principles
• AACE has published RP 40R-08 to guide the selection and development of risk quantification and contingency estimating methods
• This RP provides “principles”, among which are that any method to be recommended should:
– start with identifying risk drivers,
– link risk drivers and cost/schedule outcomes, and
– employ empiricism
• Note that “line-item ranging” is not explicitly in accordance with any of the above principles and is therefore NOT recommended by AACE
• To cover the whole scope of risks, AACE has defined a risk breakdown with respect to quantification methods that includes:
– Project-Specific Risk: risk affecting the specific project and plan
– Systemic Risk: artifacts of the system, enterprise or strategy
– Escalation Risk: driven by economics or market context
Recommended Practices
• Project-Specific Risk (e.g., operational, project, etc.):
– 41R-08: Risk Analysis and Contingency Determination Using Range Estimating
– 44R-08: Risk Analysis and Contingency Determination Using Expected Value
– 57R-09: Integrated Cost and Schedule Risk Analysis Using Monte Carlo Simulation of a CPM Model
– 65R-11: Integrated Cost and Schedule Risk Analysis and Contingency Determination Using Expected Value
• Systemic Risk (e.g., strategic, enterprise, inherent, background, etc.):
– 42R-08: Risk Analysis and Contingency Determination Using Parametric Estimating
– 43R-08: Risk Analysis and Contingency Determination Using Parametric Estimating – Example Models as Applied for the Process Industries
• Includes functional Excel-based John Hackney and RAND models
• Escalation Risk (e.g., economic, contextual, etc.):
– 58R-10: Escalation Principles and Methods Using Indices
– 68R-11: Escalation Estimating Using Indices and Monte Carlo Simulation

These methods can be, and have been, integrated to generate a “universal” cost risk profile to support decision making. AACE also recommends that cost and schedule risk analysis be integrated, and has additional RPs focused on schedule risk analysis.
Outline
• Empirical Estimate Accuracy Studies
• Challenges to the Data: Nowhere To Hide
• Flawed Targets and Misguided Practices
• A Brief History of How We Got Where We Are
• A Note About Small Projects
• Recommended Risk Analysis Methods
• Conclusion
An Ethical Challenge
• As a student of Cost Engineering and a proponent of TCM, I have become ashamed by the situation described in this paper
– There is a prevailing, ongoing failure to effectively address the real uncertainty in our cost estimates!
• It is a credibility crisis that also raises an ethical question: What does it mean if our profession understands reality, but continues to use failed methods known to be contrary to experience, to the potential detriment of our employers and clients?
– AACE Canon of Ethics item 2g states: “When, as a result of their studies, members believe a project(s) might not be successful, or if their cost engineering and cost management or economic judgment is overruled, they should so advise their employer or client.”
– In respect to large process industry projects, consider ourselves so advised!
Conclusion
• This paper references and summarizes over 50 years of empirical cost estimate accuracy research on large projects in the process industries
• It shows how reality compares (or does not) to what we say and do
• Recommended methods are highlighted; failed methods are exposed
• It is hoped that the facts and observations brought together here will serve as a fundamental reference on the topic of accuracy so that we can better speak the truth among ourselves and to management
• The way to more realistic accuracy forecasts, better contingency estimates and more profitable investments is clear and documented by AACE International
– At a minimum, teams should always test their worst-case analysis outcomes against empirical reality. Study this paper’s references and your own enterprise’s historical experience. If your p90 value is 25% over the base estimate, ask what risks you might be missing and what impacts you may have underestimated.
Questions?
• As a final note for those who apply risk analysis practices, watch for news on AACE’s upcoming Decision and Risk Management Professional (DRMP) Certification, which will hopefully contribute to improved quantitative risk analyses for industry
• Besides Hackney, Merrow, and Flyvbjerg, see the works of AACE stalwarts with a strategic view (speak truth to management) such as Butts, Curran, Gough, Jergeas, Revay, Schroeder, Schuyler, Westney, etc.
• Inquiries can be addressed to:
– [email protected] (www.validest.com)
– 1-703-945-5483