
Risk Filtering, Ranking, and Management Framework Using Hierarchical Holographic Modeling

Yacov Y. Haimes,¹ Stan Kaplan,¹ and James H. Lambert¹

This paper contributes a methodological framework to identify, prioritize, assess, and manage risk scenarios of a large-scale system. Qualitative screening of scenarios and classes of scenarios is appropriate initially, while quantitative assessments may be applied once the set of all scenarios (hundreds) has been prioritized in several phases. The eight-phase methodology is described in detail and is applied to operations other than war. The eight phases are as follows: Phase I, Scenario Identification—A hierarchical holographic model (HHM) is developed to describe the system's "as planned" or "success" scenario. Phase II, Scenario Filtering—The risk scenarios identified in Phase I are filtered according to the responsibilities and interests of the current system user. Phase III, Bi-Criteria Filtering and Ranking. Phase IV, Multi-Criteria Evaluation. Phase V, Quantitative Ranking—We continue to filter and rank scenarios based on quantitative and qualitative matrix scales of likelihood and consequence, and on ordinal ratings of each scenario's ability to defeat system resilience, robustness, and redundancy. Phase VI, Risk Management—Management options for dealing with the filtered scenarios are identified, and the cost, performance benefits, and risk reduction of each are estimated. Phase VII, Safeguarding Against Missing Critical Items—We examine the performance of the options selected in Phase VI against the scenarios previously filtered out during Phases II to V. Phase VIII, Operational Feedback—We use the experience and information gained during application to refine the scenario filtering and decision processes of earlier phases. These eight phases reflect a philosophical approach rather than a mechanical methodology. In this philosophy, the filtering and ranking of discrete scenarios is viewed as a precursor to, rather than a substitute for, consideration of the totality of all risk scenarios.

KEY WORDS: Risk filtering; risk assessment; risk management; hierarchical holographic modeling

1. INTRODUCTION

If we adopt the definition of risk as a "set of triplets" (Kaplan and Garrick 1981), then it is clear that the first and most important step in a quantitative risk analysis (QRA) is identifying the set of risk scenarios, S_i. If the number of such scenarios is large, then the second step must be to filter and rank the scenarios according to their importance, as determined by their likelihood and consequence.
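For concreteness, the triplet definition maps naturally onto a simple data structure. The sketch below is illustrative only; the type names are ours, and the example entries are drawn loosely from the case study of Section 5, not from this introduction:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskTriplet:
    """One element <s_i, l_i, x_i> of the Kaplan-Garrick set of triplets."""
    scenario: str      # s_i: what can go wrong
    likelihood: float  # l_i: probability of the scenario
    consequence: str   # x_i: the resulting damage

# Risk, under this definition, is the complete set of such triplets.
risk = {
    RiskTriplet("loss of telephone network for more than 48 h", 0.05, "loss of life"),
    RiskTriplet("loss of cellular network for more than 24 h", 0.45, "loss of life"),
}
```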

The need for such ranking arises in a variety of situations. For example: thousands of military and civilian sites have been identified as contaminated with toxic substances; myriad risk scenarios are commonly identified during the development of software-intensive engineering systems; and thousands of mechanical and electronic components of the Space Shuttle are placed on a critical item list (CIL) in an effort to reveal significant contributions to program risk. In all such risk identification procedures we must then prioritize a large number of risk scenarios according to their individual contributions to the overall system risk. A dependable and efficient ranking and filtering of identified risk elements can be an important aid toward systematic risk control and reduction.

¹ Center for Risk Management of Engineering Systems, University of Virginia.

Risk Analysis, Vol. 22, No. 2, 2002



Infrastructure operation and protection highlights the challenges of risk filtering, ranking, and management in large-scale systems. Our manmade engineered systems are becoming increasingly vulnerable to natural and willful hazards; they include telecommunications, electric power, gas and oil, transportation, water-treatment plants, water-distribution networks, dams, and levees. Fundamentally, such systems have a large number of components and subsystems. Most water-distribution systems, for example, must be addressed within a framework of large-scale systems, where a hierarchy of institutional and organizational decision-making structures (e.g., federal, state, county, and city) is often involved in their management (Haimes et al. 1997). Coupling exists among the subsystems (e.g., the budget constraint imposed on the overall system), and this further complicates their management. A better understanding of the interrelationship among natural, willful, and accidental hazards is a logical step in helping to improve the protection of critical national infrastructures. Such efforts should build on the experience gained over the years from the recovery and survival of infrastructures assailed by natural and human hazards. Furthermore, it is imperative to model critical infrastructures as dynamic systems in which current decisions have impacts on future consequences and options.

Within the activity known as total risk management of a system (Haimes 1991), the term risk assessment means identifying the "risk scenarios," i.e., determining what can go wrong in the system and all the associated consequences and likelihoods. The next steps are to generate mitigation options, evaluate each in terms of its cost, benefit, and risk tradeoffs, and then decide which options to implement and in what order. Filtering and ranking aid this decision process by focusing attention on those scenarios that contribute the most to the risk.

This article presents a methodological framework to identify, prioritize, assess, and manage scenarios of risk to a large-scale system from multiple overlapping perspectives. The organization of the article is as follows. After reviewing earlier efforts in risk filtering and ranking, we discuss hierarchical holographic modeling as a method for identification of risk scenarios. Next we describe the guiding principles and the eight phases of the developed methodological framework. This is followed by an example applying the framework to a mission in support of an operation other than war (OOTW). Finally, we offer conclusions and opportunities for future work.

2. PAST EFFORTS IN RISK FILTERING AND RANKING

Most real systems are exposed to numerous sources of risk. Over the last two decades, the problem of ranking and prioritizing these sources has challenged not only decisionmakers, but the risk analysis community as well.

Sokal (1974) discusses classification principles and procedures, distinguishing between two methods: monothetic and polythetic. A monothetic classification establishes classes that differ by at least one property that is uniform among members of each class, whereas a polythetic classification groups individuals or objects that share a large number of traits but do not necessarily agree on any one trait. Webler et al. (1995) outline a risk ranking methodology through an extensive survey example dealing with an application of sewage sludge on New Jersey farmland. Working with expert and lay community groups, they develop and categorize two perceptions of risk, and use weights to balance the concerns of the two groups. They demonstrate how discussion-oriented approaches to risk ranking can supplement current methodological approaches, and present a taxonomy that addresses the substantive need for public discussion about risk.

Morgan et al. (1999, 2000) propose a ranking methodology designed for use by federal risk management agencies, calling for interagency taskforces to define and categorize the risks to be ranked. The taskforces would identify the criteria that all agencies should use in their evaluations. The ranking would be done by four groups: federal risk managers drawn from inside and outside the concerned agency, lay people selected somewhat randomly, a group of state risk managers, and a group of local risk managers. Each ranking group would follow two different procedures: (1) a reductionist and analytic approach and (2) a holistic and impressionistic approach. The results would then be combined into a refined ranking. The four groups would meet together to discuss their findings.


In a more recent contribution in this area, Categorizing Risks for Risk Ranking, Morgan et al. (2000) discuss the problems inherent in grouping a large number of risk scenarios into easily managed categories, and argue that such risk categories must be evaluated with respect to a set of criteria. This is particularly important when hard choices must be made in comparing and ranking thousands of specific risks. The ultimate risk characterization should be logically consistent, administratively compatible, equitable, and compatible with cognitive constraints and biases. Baron et al. (2000) conducted several extensive surveys of experts and nonexperts in risk analysis to ascertain their priorities as to personal and government action for risk reduction, taking into account the severity of the risk, the number of people affected, worry, and probabilities for hazards to self and others. A major finding of these surveys "is that concern for action, both personal and government, is strongly related to worry. Worry, in turn, is affected mainly by beliefs about probability."

A risk ranking and filtering (RRF) methodology was developed for the purpose of prioritizing the results of failure modes and effects analyses (FMEAs) (CRMES 1991; Haimes 1998). This risk prioritization methodology considers multiple quantitative factors, such as reliability estimates, as well as qualitative factors, such as expert rankings of component criticality.

3. HIERARCHICAL HOLOGRAPHIC MODELING (HHM)

It is important to improve our understanding of the intricate interdependencies of our critical infrastructures. Therefore, any methodology must be comprehensive and holistic, addressing the hierarchical institutional, organizational, managerial, and functional decision-making structures, in conjunction with other determining factors. Since many organizational as well as technology-based systems are hierarchical in nature, the risk management of such systems is driven by this reality and must be responsive to it. The risks associated with each subsystem within the hierarchical structure contribute to and ultimately determine the risks to the overall system. The distribution of risks between the subsystems often plays a dominant role in the allocation of resources. The aim is to achieve a level of risk that is deemed acceptable in the judgmental decision-making process, taking into consideration the tradeoffs among all the costs, benefits, and risks.

Hierarchical holographic modeling has been extensively and successfully used for identifying the risk scenarios in numerous projects (Haimes 1981, 1998; Lambert et al. 2001). The HHM framework was developed because it is impractical to represent within a single model all the important and critical aspects of complex systems. HHM offers multiple visions and perspectives, which adds strength to a risk analysis. It has been extensively and successfully deployed to study risks for government agencies such as the President's Commission on Critical Infrastructure Protection (PCCIP), the FBI, NASA, the Virginia Department of Transportation (VDOT), and the National Ground Intelligence Center, among others. The HHM methodology/philosophy is grounded on the premise that in the process of modeling large-scale and complex systems, more than one mathematical or conceptual model is likely to emerge. Each of these models may adopt a specific point of view, yet all may be regarded as acceptable representations of the infrastructure system. Through HHM, multiple models can be developed and coordinated to capture the essence of the many dimensions, visions, and perspectives of infrastructure systems.

Perhaps one of the most valuable and critical aspects of hierarchical holographic modeling is its ability to facilitate the evaluation of the subsystem risks and their corresponding contributions to the risks in the total system. In the planning, design, or operational mode, the ability to model and quantify the risks contributed by each subsystem markedly facilitates identifying, quantifying, and evaluating risk. In particular, HHM has the ability to model the intricate relationships among the various subsystems and to account for all relevant and important elements of risk and uncertainty. This makes for a more tractable modeling process and results in a more representative and encompassing risk assessment process.

As pointed out by Kaplan et al. (2001), HHM can be regarded as a general method for identifying the set of risk scenarios. It has turned out to be particularly useful in modeling large-scale, complex, and hierarchical systems such as defense and civilian infrastructure systems. To understand HHM in this way, we first remind ourselves of the principle that the process of identifying the risk scenarios for a system of any kind should begin by laying out a diagram that represents the "success," or "as planned," scenario of the system. In the HHM method this diagram takes the form of a master chart showing different "perspectives" on the system requirements (for an example, see Fig. 1). Perspectives are portrayed by columns in the chart, each with a head topic. In Fig. 1, head topics include technological, organizational, legal, time-horizon, user-demands, and socioeconomic. Each perspective in the chart is then broken down into boxes or subtopics. Each subtopic box can then be thought of as representing a set of "success criteria," i.e., actions or results that are supposed to occur as part of the definition of the system's "success."

Consider now the set of such criteria represented by the jth box in the ith perspective. For each such box we can then generate a set of risk scenarios by asking: "What can go wrong with respect to this class of success criteria?" i.e., "How could it happen that we would fail to achieve this set of success criteria?" (More pointedly, if we wanted to identify or anticipate terrorism-type scenarios, we might ask: "If I wanted to make something go wrong with respect to this class of success criteria, how could I do it?" (Kaplan et al. 1999).)

By answering these questions we generate a set of risk scenarios associated with the jth subtopic box of the ith perspective, and it is now natural to think of this box as a "source of risk." The union of these sets of risk scenarios, over all the boxes, should now yield a complete set of risk scenarios for the system or operation as a whole.
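The union operation can be pictured concretely. In the following sketch the HHM fragment and scenario names are hypothetical, chosen only to show that the union over one perspective is a subset of the union over all boxes:

```python
# Hypothetical HHM fragment: {(perspective, subtopic): set of risk scenarios}.
hhm_scenarios = {
    ("Technological", "Telephone"): {"switch outage", "line sabotage"},
    ("Technological", "Satellite"): {"uplink jamming"},
    ("Organizational", "Coordination"): {"conflicting orders"},
}

# Union over all boxes yields the complete set of risk scenarios;
# union over the boxes of a single perspective yields only a subset.
complete = set().union(*hhm_scenarios.values())
technological = set().union(
    *(scenarios for (perspective, _), scenarios in hhm_scenarios.items()
      if perspective == "Technological")
)
assert technological <= complete  # one perspective approximates the whole
```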

Taking the union only over the boxes in one perspective would typically yield a subset—an approximation—of the complete set of risk scenarios. Similarly, the union of the sets of success criteria corresponding to one perspective yields a subset—an approximation—of the total set of success criteria of the system as a whole. No one perspective, typically, is adequate on its own to consider the welfare of all current and future stakeholders. Multiple perspectives of success are useful for developing an inclusive set of answers to "What can go wrong?"

The nature and capability of HHM is thus to identify a comprehensive, therefore large, set of risk scenarios. It does this by presenting multiple, complementary perspectives of the success scenario requirements. To deal with this large set we need a systematic process that filters and ranks the identified scenarios so that we can prioritize risk mitigation activities. The first purpose of this article is to assemble and discuss a number of published approaches toward such a systematic process.

4. RISK FILTERING, RANKING, AND MANAGEMENT (RFRM): A METHODOLOGICAL FRAMEWORK

4.1. Guiding Principles

It is constructive to identify again the two basic structural components of HHM. First are the head topics, which constitute the major visions, concepts, and perspectives of success. Second are the subtopics, which provide a more detailed classification of requirements. Each such requirement class corresponds to a class of risk scenarios, namely, those that impact upon that requirement. In this sense, each class of requirements is also considered a "source of risk."

Thus, by its nature and construction, the HHM methodology generates a comprehensive set of sources of risk, i.e., categories of risk scenarios, commonly on the order of hundreds of entries (Haimes 1998). Consequently, there is a need to discriminate among these sources as to the likelihood and severity of their consequences, and to do so systematically on the basis of principled criteria and sound premises. For this purpose, the proposed methodological framework for risk filtering and ranking is based on the following major considerations:

• It is often impractical (e.g., due to time and resource constraints) to apply quantitative risk analysis to hundreds of sources of risk. In such cases qualitative risk analysis may be adequate for decision purposes under certain conditions.

• All sources of evidence should be harnessed in the filtering and ranking process to assess the significance of the risk sources. Such evidence includes common sense, professional experience, expert knowledge, and statistical data.

• Six basic questions characterize the process of risk assessment and management and serve as the compass for the methodological approach. For the risk assessment process, there are three questions (Kaplan and Garrick 1981):

  • What can go wrong?
  • What is the likelihood of that happening?
  • What are the consequences?

There are also three questions for the risk management process (Haimes 1991, 1998):


Fig. 1. Excerpt from a hierarchical holographic model developed to identify sources of risk to operations other than war (Dombroski et al. 2002).


  • What are the available options?
  • What are the associated tradeoffs?
  • What are the impacts of current decisions on future options?

To deploy the RFRM methodology effectively, the variety of perspectives of "success" and sources of risk must be considered, including those representing hardware, software, organizational, and human failures. Risks that also must be addressed include programmatic risks, such as project-cost overrun and time delay in meeting completion schedules, and technical risks, such as not meeting performance criteria.

An integration of empirical and conceptual, descriptive and normative, quantitative and qualitative methods and approaches is always superior to an "either-or" choice. Relying, for example, on a mix of simulation and analytically based risk methodologies is superior to relying on either one alone. The tradeoffs that are inherent in the risk management process manifest themselves in the RFRM methodology as well. The multiple noncommensurate and often conflicting objectives that characterize most real systems guide the entire process of risk filtering and ranking.

The risk filtering and ranking process is aimed at establishing priorities in the scenario analysis. This does not imply that sources of risk that have been filtered out in an early phase of the methodology are ignored, only that the more urgent sources of risk or scenarios are explored first.

4.2. RFRM Phases

Eight major phases constitute the risk filtering, ranking, and management (RFRM) method. A case study in Section 5 demonstrates the efficacy of the proposed method.

4.2.1. Phase I: Identification of Risk Scenarios Through Hierarchical Holographic Modeling (HHM)

Most, if not all, sources of risk are identified through the HHM methodology as discussed earlier. In their totality, these sources of risk describe "what can go wrong" in the "as planned" or success scenario. Included are acts of terrorism, accidents, and natural hazards. Each subtopic represents a category of risk scenarios, i.e., descriptions of what can go wrong. Thus, through the HHM we generate a diagram that organizes and displays the complete set of system success criteria from multiple overlapping perspectives. Each box in the diagram represents a set of actions or results that are required for the successful operation of the system. At the same time, any failure will show up as a deficiency in one or more of the boxes. Fig. 1 is an excerpt from a hierarchical holographic model developed for characterization of support for operations other than war by the military.

It is important to note the tradeoff inherent in the construction of the HHM. A more detailed HHM will yield a more accurate picture of the success scenario, and consequently lead to a better assessment of the risk situation. In other words, an HHM that contains more levels in its hierarchy will facilitate identifying the various failure modes for the system, since the system structure is described in greater detail. A less detailed HHM, however, encapsulates a larger number of possible failure scenarios within each subtopic. This leads to less specificity in identifying failure scenarios. Of course, the more detailed HHM will be more expensive to construct in terms of time and resources. Therefore, there is a tradeoff: detail and accuracy versus time and resources. Consequently, the appropriate level of detail for an HHM is a matter of judgment, dependent on the resources available for risk management and the nature of the situation to which it is applied.

4.2.2. Phase II: Scenario Filtering Based on Scope, Temporal Domain, and Level of Decision Making

In Phase II, filtering is done at the level of "subtopics" or "sources of risk." As mentioned earlier, the plethora of sources of risk identified in Phase I can be overwhelming. The number of subtopics in the HHM may easily be in the hundreds (Haimes 1998). Clearly, not all subtopics in the HHM can be of immediate and simultaneous concern to all levels of decision making at all times. For example, in operations other than war (OOTW), three decision-making levels are identified (strategic, planning, and operational), and several temporal domains are considered (first 48 hours; short-, intermediate-, and long-term; disengagement; and postdisengagement).

At this phase of the risk filtering process, the sources of risk are filtered according to the interests and responsibilities of the individual risk manager/decisionmaker. The filtering criteria at this phase include the decision-making level, the scope (i.e., what risk scenarios are of prime importance to this manager), and the temporal domain (which time periods are important to this manager). Thus, the filtering in Phase II is achieved on the basis of expert experience and knowledge of the nature, function, and operation of the system being studied, and of the role and responsibility of the individual decisionmaker. This phase often reduces the number of risk sources from several hundred to around 50.
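Schematically, the Phase II filter is a predicate over attributes of each risk source. The sketch below is ours; the field names and example values are hypothetical, not part of the RFRM specification:

```python
from dataclasses import dataclass

@dataclass
class RiskSource:
    subtopic: str
    level: str    # e.g., "strategic", "planning", "operational"
    horizon: str  # e.g., "first 48 hours", "short-term", "long-term"

def phase2_filter(sources, levels, horizons):
    """Keep only the sources within this decisionmaker's scope."""
    return [s for s in sources if s.level in levels and s.horizon in horizons]

sources = [
    RiskSource("1.1 Telephone", "operational", "first 48 hours"),
    RiskSource("6. Regulation", "strategic", "long-term"),
]
# An operational planner focused on the first 48 hours keeps only the first source.
kept = phase2_filter(sources, {"operational"}, {"first 48 hours"})
```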

4.2.3. Phase III: Bi-Criteria Filtering and Ranking Using the Ordinal Version of the U.S. Air Force Risk Matrix

In this phase filtering is also done at the level of subtopics. However, the process moves closer to a quantitative treatment, where the joint contributions of two different types of information—the likelihood of what can go wrong and the associated consequences—are estimated on the basis of the available evidence. This phase is accomplished in the RFRM by using the ordinal version of the matrix procedure adapted from Military Standard (MIL-STD) 882, U.S. Department of Defense (DoD), cited in Roland and Moriarty (1990). With this matrix, the likelihoods and consequences are combined into a joint concept called "severity." The mapping is achieved by first dividing the likelihood of a risk source into five discrete ranges. Similarly, the consequence scale is divided into four or five ranges. The two scales are placed in matrix formation, and the cells of the matrix are assigned relative levels of risk severity.

Fig. 2 is an example of this matrix; e.g., the group of cells in the upper right indicates the highest level of risk severity. The scenario categories (subtopics) identified by the HHM are distributed to the cells of the matrix. Those falling in the low-severity boxes are filtered out and set aside for later consideration.

As a general principle, any "scenario" that we can describe with a finite number of words is actually a class of scenarios. The individual members of this class are subscenarios of the original scenario. Similarly, any subtopic from the HHM diagram to be placed into the matrix represents a class of failure scenarios. Each member of the class has its own combination of likelihood and consequence. There may be failure scenarios that are of low probability and high consequence, and scenarios that are of high probability and low consequence. In placing the subtopic into the matrix, the analyst must make a judgment as to the likelihood and consequence range that characterizes the subtopic as a whole. This judgment must be such as to avoid overlooking potentially critical failure scenarios, and at the same time avoid overstating the likelihood of such scenarios.
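The matrix lookup itself is mechanical once the judgment has been made. In the sketch below, the band labels follow the spirit of Fig. 2, but the specific cell assignments are illustrative, not the published matrix:

```python
# Ordinal likelihood bands (rows) and consequence bands (columns) in the
# style of MIL-STD 882. The severity assigned to each cell is illustrative.
LIKELIHOOD = ["unlikely", "seldom", "occasional", "likely", "frequent"]
CONSEQUENCE = ["D (negligible)", "C (marginal)", "B (critical)", "A (catastrophic)"]

SEVERITY = [
    # D           C           B                 A
    ["low",      "low",      "moderate",       "high"],            # unlikely
    ["low",      "moderate", "moderate",       "high"],            # seldom
    ["low",      "moderate", "high",           "extremely high"],  # occasional
    ["moderate", "high",     "high",           "extremely high"],  # likely
    ["moderate", "high",     "extremely high", "extremely high"],  # frequent
]

def severity(likelihood: str, consequence: str) -> str:
    return SEVERITY[LIKELIHOOD.index(likelihood)][CONSEQUENCE.index(consequence)]

# Subtopics that land in low-severity cells are filtered out for later review.
assert severity("occasional", "A (catastrophic)") == "extremely high"
```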

4.2.4. Phase IV: Multi-Criteria Evaluation

In Phase III we distributed the individual risk sources, by judgment, into the boxes defined in Fig. 2 by the consequence and likelihood categories. Those sources falling in the upper right boxes of the risk matrix were then judged to be the ones requiring priority attention.

In Phase IV we take the process one step further by reflecting on the ability of each scenario to defeat three defensive properties of the underlying system, namely, resilience, robustness, and redundancy.² As an aid to this reflection, we present a set of 11 "criteria," defined in Table I. These criteria relate to the ability of the scenarios to defeat these defensive properties.

Fig. 2. Example risk matrix for Phase III.

² Classifying the defenses of the system as resilience, robustness, and redundancy (3 Rs) is based, in part, on an earlier and related categorization of water-resources systems by Matalas and Fiering (1977), updated by Haimes et al. (1997). Redundancy refers to the ability of extra components of a system to assume the functions of failed components. Robustness refers to the insensitivity of system performance to external stresses. Resilience is the ability of a system to recover following an emergency. Scenarios able to defeat these properties are of greater concern, and thus are scored as more severe.


(These criteria are intended to be generally applicable, but the user may of course modify them to suit the specific system under study.)

As a further aid to this reflection, it may be helpful to rate the scenario of interest as "high," "medium," or "low" against each criterion (using Table II for guidance) and then to use this combination of ratings to judge the ability of the scenario to defeat the system.

The criteria of risk scenarios related to the three major defensive properties of most systems are presented in Table I. These (example) criteria are intended to be used as a basis for Phase V.

After the completion of Phase IV, the ranking of the remaining scenarios is undertaken in Phase V with quantitative assessments of likelihood and consequence. Scenarios judged less urgent (based on Phase IV) are set aside and can be returned to for later study.
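Phase IV prescribes judgment rather than a formula, but a simple tally can support that judgment. The following sketch is a hypothetical screening heuristic of ours, using the actual Table VII ratings for subtopic 1.2 Cellular; the threshold of five High ratings is an assumption, not part of the methodology:

```python
# Ratings of subtopic "1.2 Cellular" against the 11 criteria (from Table VII).
RATINGS_CELLULAR = {
    "Undetectability": "Low", "Uncontrollability": "Med",
    "Multiple paths to failure": "Med", "Irreversibility": "High",
    "Duration of effects": "High", "Cascading effects": "Med",
    "Operating environment": "High", "Wear and tear": "High",
    "HW/SW/HU/OR interfaces": "High", "Complexity/emergent behaviors": "High",
    "Design immaturity": "High",
}

def appears_critical(ratings: dict, high_threshold: int = 5) -> bool:
    """Hypothetical screen: many High ratings suggest the scenario can
    defeat the system's resilience, robustness, or redundancy."""
    return sum(1 for r in ratings.values() if r == "High") >= high_threshold

assert appears_critical(RATINGS_CELLULAR)  # 7 of the 11 criteria are rated High
```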

4.2.5. Phase V: Quantitative Ranking Using the Cardinal Version of the MIL-STD 882 Risk Matrix

In Phase V, we quantify the likelihood of each scenario³ using Bayes Theorem and all the relevant evidence available (Kaplan 1990, 1992). The value of quantification, of course, is that it clarifies the results, disciplines the thought process, and replaces opinion with evidence. More on the use of Bayes Theorem is discussed in Phase V of Section 5.

Calculating the likelihood of scenarios avoids possible miscommunication when interpreting verbal expressions such as "high," "low," and "very high." This approach yields a matrix with ranges of probability on the horizontal axis, as shown in Fig. 3. This is the "cardinal" version of the "ordinal" risk matrix first deployed in Phase III. Filtering and ranking the risk scenarios through this matrix typically reduces the number of scenarios from about 20 to about 10.

4.2.6. Phase VI: Risk Management

Having quantified the likelihood of the scenarios in Phase V, and having filtered the scenarios by likelihood and consequence in the manner of Fig. 3, we have now identified a number of scenarios, presumably small, constituting most of the risk for our subject system. We now turn our attention to risk management and ask: "What can be done, and what is cost-effective to do, about these scenarios?"

The first of these questions puts us into a creative mode. Knowing the system and the major risk scenarios, we create options for action, asking: "What design modifications or operational changes could we make that would reduce the risk from these scenarios?" Having set forth these options, we then shift back to an analytical and quantitative thought mode: "How much would it cost to implement (one or more of) these options? How much would we reduce the risk from the identified scenarios? Would these options create new risk scenarios?"
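One simple way to keep the analytical side of this dialogue honest is to record, for each option, its estimated cost, estimated risk reduction, and any new scenarios it introduces. The bookkeeping sketch below is ours; the options and numbers are invented for illustration, and the cost-effectiveness ratio is only one simplistic screening view, not the full tradeoff analysis:

```python
# Hypothetical Phase VI bookkeeping for candidate risk management options.
options = [
    {"option": "harden cellular base stations", "cost": 2.0e6,
     "risk_reduction": 0.30, "new_scenarios": []},
    {"option": "lease a redundant satellite channel", "cost": 5.0e6,
     "risk_reduction": 0.45, "new_scenarios": ["provider outage"]},
]

# Rank by risk reduction per unit cost as one coarse screening view.
for opt in sorted(options, key=lambda o: o["risk_reduction"] / o["cost"],
                  reverse=True):
    print(f"{opt['option']}: {opt['risk_reduction'] / opt['cost']:.2e} per dollar")
```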

Table I. Eleven Criteria of a Risk Scenario Relating to its Ability to Defeat the Defenses of the System

Undetectability refers to the absence of modes by which the initial events of a scenario can be discovered before harm occurs.
Uncontrollability refers to the absence of control modes that make it possible to take action or make an adjustment to prevent harm.
Multiple paths to failure indicates that there are multiple and possibly unknown ways for the events of a scenario to harm the system, for example, by circumventing safety devices.
Irreversibility indicates a scenario in which the adverse condition cannot be returned to the initial, operational (pre-event) condition.
Duration of effects indicates a scenario that would have a long duration of adverse consequences.
Cascading effects indicates a scenario where the effects of an adverse condition readily propagate to other systems or subsystems, i.e., cannot be contained.
Operating environment indicates a scenario that results from external stressors.
Wear and tear indicates a scenario that results from use, leading to degraded performance.
HW/SW/HU/OR (Hardware, Software, Human, and Organizational) interfaces indicates a scenario in which the adverse outcome is magnified by interfaces among diverse subsystems (e.g., human and hardware).
Complexity/emergent behaviors indicates a scenario in which there is a potential for system-level behaviors that are not anticipated from a knowledge of the components and the laws of their interactions.
Design immaturity indicates a scenario in which the adverse consequences are related to the newness of the system design or other lack of proof of concept.

³ The quantification of likelihood should, of course, be based on the totality of relevant evidence available, and should be done by processing the evidence items through Bayes Theorem (Kaplan 1990, 1992).


Moving back and forth between these modes of thought, we arrive at a set of cost-effective options that we now would like to recommend for implementation. However, we must remember that we have evaluated these options against the filtered set of scenarios remaining at the end of Phase V. Thus, in Phase VII we take another look at the effect these options might have on the risk scenarios previously filtered out.

4.2.7. Phase VII: Safeguarding Against Missing Critical Items

Reducing the initial large number of risk scenarios to a much smaller one at the completion of Phase V may inadvertently filter out scenarios that originally seemed minor but could become important if the proposed options were actually implemented. Also, in a dynamic world, early indicators of newly emerging critical threats and other sources of risk should not be overlooked. Following the completion of Phase VI, which generates and selects risk management policy options and their associated tradeoffs, we ask the question: "How robust has the policy selection and risk filtering/ranking process been?" Phase VII, then, is aimed at providing added assurance that the proposed RFRM methodology creates flexible reaction plans if indicators signal the emergence of new or heretofore undetected critical items. In particular, in Phase VII of the analysis, we:

1. Ascertain the extent to which the risk management options developed in Phase VI affect or are affected by any of the risk scenarios discarded in Phases II to V. That is, in light of the interdependencies within the success scenario, we evaluate the proposed management policy options against the risk scenarios previously filtered out.

2. Revise as appropriate the risk management options developed in Phase VI in light of what was learned in Step 1 above.

Thus, a purpose of Phase VII is to enable the refinement of risk management options in light of previously screened-out scenarios.

Fig. 3. Risk matrix with numerical values for use in Phase V.

Table II. Rating Risk Scenarios in Phase IV Against the 11 Criteria

Each criterion is rated High, Medium, Low, or Not Applicable:

• Undetectability: unknown or undetectable (High); late detection (Medium); early detection (Low).
• Uncontrollability: unknown or uncontrollable (High); imperfect control (Medium); easily controlled (Low).
• Multiple paths to failure: unknown or many paths to failure (High); few paths to failure (Medium); single path to failure (Low).
• Irreversibility: unknown or no reversibility (High); partial reversibility (Medium); reversible (Low).
• Duration of effects: unknown or long duration (High); medium duration (Medium); short duration (Low).
• Cascading effects: unknown or many cascading effects (High); few cascading effects (Medium); no cascading effects (Low).
• Operating environment: unknown sensitivity or very sensitive to operating environment (High); sensitive to operating environment (Medium); not sensitive to operating environment (Low).
• Wear and tear: unknown or much wear and tear (High); some wear and tear (Medium); no wear and tear (Low).
• Hardware/Software/Human/Organizational: unknown sensitivity or very sensitive to interfaces (High); sensitive to interfaces (Medium); no sensitivity to interfaces (Low).
• Complexity and emergent behaviors: unknown or high degree of complexity (High); medium complexity (Medium); low complexity (Low).
• Design immaturity: unknown or highly immature design (High); immature design (Medium); mature design (Low).


The detailed deployment of Phase VII is mostly driven by the specific characteristics of the system. The main guiding principle in this phase focuses on cascading effects due to the system's intra- and interdependencies that may have been overlooked during the filtering processes in Phases I to V. As well, the defensive properties that are addressed in Phase IV may be revisited to ensure that the system's redundancy, resilience, and robustness remain secure by the end of Phase VII.

4.2.8. Phase VIII: Operational Feedback

New methodologies and tools can be improved on the basis of the feedback accumulated during their deployment, and the proposed RFRM is no exception. Following are guiding principles for the feedback data-collection process:

• The HHM is never considered finished; new sources of risk should be added as additional categories or new topics.

• Be cognizant of all benefits, costs, revenues, and risks to human health and the environment.

In particular, no single methodology or tool can fit all cases and circumstances. Therefore, a systematic data-collection process that is cognizant of the dynamic nature of the evolving sources of risk and their criticalities can maintain the viability and effectiveness of the proposed risk filtering and ranking method.

5. DEMONSTRATION FOR AN OPERATION OTHER THAN WAR (OOTW)

To demonstrate the proposed risk filtering, ranking, and management (RFRM) methodology, we use a case study that was conducted with the National Ground Intelligence Center, U.S. Department of Defense, and with the U.S. Military Academy at West Point. The case study of operations other than war (OOTW) focuses on United States and allied operations in the Balkans (Dombroski et al. 2001). The overall aim of the case study is to ensure that the deployment of U.S. forces abroad for an OOTW would be effective and successful, with minimal casualties, losses, or surprises.

We take as our case study the following mission: U.S. and allied forces engaged in the Balkans are asked to establish and maintain security for 72 hours at a bridge crossing the Tirana River in Bosnia. The purpose is to support the exchange via the bridge of humanitarian medical and other supplies among several nongovernmental organizations and public agencies. These entities and the allied force must communicate in part over public telecommunications networks and the Internet regarding the security status of the bridge. As well, the public will need to be informed about the status of the bridge via radio, television, and the Internet. The RFRM will be used to identify, filter, and rank scenarios of risk to the mission.

5.1. Phase I: Developing the HHM

To identify risk scenarios that allied forces might encounter in this case study, the following four HHMs were developed (Haimes et al. 2001):

1. Country HHM;
2. U.S. HHM;
3. Alliance HHM; and
4. Coordination HHM.

For demonstration purposes and to limit the size of the example, the present article shows only the Telecommunications head topic of the Country HHM (see Fig. 4).

Of the subtopics shown in Fig. 4, we choose the 11 subtopics (risk scenarios) listed in Table III for input to the Phase II filtering.

5.2. Phase II: Scenario Filtering by Domain of Interest

In Phase II, we filter out all scenarios except those in the decisionmaker's domain of interest and responsibilities. In operations other than war, one may consider three levels of decisionmakers: Strategic (e.g., Chiefs of Staff), Operational (e.g., Generals and Colonels), and Tactical (e.g., Captains and Majors). The concerns with and interest in a specific subset of the risk scenarios will depend on the decision-making level and on the temporal domain under consideration. At the strategic level, Generals may not be concerned with the specific location of a Company's base and the risks associated with it, while the Company's commander would be. For this example, we assume that the risk scenarios 1.5 Technology and 6. Regulation in Table III were filtered out based on the decisionmaker's responsibilities. The surviving set of nine risk scenarios shown in Table IV becomes the input to Phase III.


5.3. Phase III: Bi-Criteria Filtering

To further reduce the number of risk scenarios, in Phase III we subject the remaining nine subtopics (risk scenarios) to the qualitative severity-scale matrix shown in Fig. 5. We have assumed that evidence for the evaluations shown in Fig. 5 came from reliable intelligence sources providing knowledge about the telecommunications infrastructure in Bosnia. Also, for the purpose of this example, we further assume that the decisionmaker's analysis of the subtopics (risk scenarios) results in removing from the subtopic set the risk scenarios that received a moderate or low risk valuation. In this example, the subtopics 1.3 Radio, 1.4 Television, and 3.2 Management Information Systems (MIS) attained a moderate valuation and are removed. The remaining set of six risk scenarios is shown in Table V.

5.4. Phase IV: Multi-Criteria Filtering

Now that the decisionmaker has narrowed the set of risk scenarios to a more manageable one, the user can perform a more thorough analysis on each subtopic. Table VI lists the remaining six subtopics (risk scenarios) and gives each a more specific definition.

Table III. List of 11 Scenarios to be Filtered in Phase II

Subtopic

1.1 Telephone
1.2 Cellular
1.3 Radio
1.4 Television
1.5 Technology
2. Cable
3.1 Computer Information Systems (CIS)
3.2 Management Information Systems (MIS)
4. Satellite
5. International
6. Regulation

Fig. 4. Telecommunications head topic of OOTW HHM.

Table IV. List of Nine Scenarios to be Filtered in Phase III

Subtopic

1.1 Telephone
1.2 Cellular
1.3 Radio
1.4 Television
2. Cable
3.1 Computer Information Systems (CIS)
3.2 Management Information Systems (MIS)
4. Satellite
5. International



In Phase IV, the user assesses each of these remaining subtopics in terms of the 11 criteria identified in Table I. Table VII summarizes these assessments.

As part of our example, we assume that these assessments result from analyzing each of the subtopics (risk scenarios) against the criteria, using intelligence data and expert analysis.

5.5. Phase V: Quantitative Ranking

The user has thus far narrowed the important scenario list from 11 to six. Employing the quantitative severity-scale matrix and the criteria assessments in Phase IV, the user will now reduce the set further. In Phase V the same severity-scale index introduced in Phase III is used, except that the likelihood is now expressed quantitatively, as shown in Fig. 6.

5.5.1. Telephone

Likelihood of Failure = 0.05; Effect = A (Loss of life); Risk = Extremely High.

This failure will cause loss of life and incapacitate the mission. Based on intelligence reports, however, enemy forces operating in Bosnia do not appear to be preparing for an attack against the telephone network. Therefore, we assign only 5% probability to this scenario.⁴ Should such an attack occur, a failure would be detectable.

5.5.2. Cellular

Likelihood of Failure = 0.45; Effect = A (Loss of life); Risk = Extremely High.

U.S. forces will be dependent on cellular communications; thus this failure could cause loss of mission and loss of life. Intelligence reports and expert analysis show that insurgent forces may be preparing for an attack on the cellular network, knowing that coalition forces are utilizing it. Therefore, we assign a 45% likelihood that the risk scenario will occur during the operation, as assessed by this intelligence. Analysis also shows that an attack's effects will be difficult to reverse.

Fig. 5. Qualitative severity scale matrix.

Table V. List of Six Scenarios to be Evaluated in Phases IV and V

Subtopic

1.1 Telephone
1.2 Cellular
2. Cable
3.1 Computer Information Systems (CIS)
4. Satellite
5. International

Table VI. Risk Scenarios for the Six Remaining Subtopics

Subtopic: Risk Scenario

1.1 Telephone: Failure of any portion of the telephone network for more than 48 hours
1.2 Cellular: Failure of any portion of the cellular network for more than 24 hours
2. Cable: Failure of any portion of the coaxial and/or fiber optic cable networks for more than 12 hours
3.1 CIS: Loss of access to the Internet throughout the entire country for more than 48 hours
4. Satellite: Failure of the satellite network for more than 12 hours throughout the region
5. International: Failure of the international communications network for more than six hours

⁴ The Bayesian reasoning behind this assignment is as follows: Let A denote an enemy attack against the phone network. Let E denote the relevant evidence, namely that the intelligence reports no preparations for an attack. By Bayes Theorem,

P(A|E) = P0(A) × P(E|A) / P0(E), where
P0(E) = P(E|A) × P0(A) + P(E|not A) × P0(not A).

Our prior state of knowledge about A, before receiving the evidence, is P0(A) = 0.5 = P0(not A). The probability of intelligence seeing evidence E, i.e., no preparations, if the enemy is going to attack is small. We take it as P(E|A) = 0.05. (This is our appraisal of the effectiveness of our intelligence.) The probability of intelligence not seeing preparations given that the enemy is not going to attack is high: P(E|not A) = 0.99. (This expresses our confidence that the enemy would not make preparations as a deceptive maneuver.) Therefore

P0(E) = 0.05 × 0.5 + 0.99 × 0.5 = 0.025 + 0.495 = 0.52
P(A|E) = 0.5 × 0.05 / 0.52 ≈ 0.05
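The footnote's arithmetic can be checked in a few lines; this sketch simply reproduces the numbers above:

```python
# Bayes update for the telephone-attack scenario, as in footnote 4.
p_A = 0.5              # prior P(A): enemy attacks the phone network
p_E_given_A = 0.05     # P(E|A): no preparations seen even though an attack is coming
p_E_given_notA = 0.99  # P(E|not A): no preparations seen, and none are underway

p_E = p_E_given_A * p_A + p_E_given_notA * (1 - p_A)  # 0.52
p_A_given_E = p_E_given_A * p_A / p_E                 # ~0.048, rounded to 0.05
print(round(p_E, 2), round(p_A_given_E, 2))
```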



5.5.3. Computer Information Systems (CIS)

Likelihood of Failure = 0.015; Effect = C (Loss of some capability with compromise of some mission objectives); Risk = Moderate.

U.S. forces would not be immediately dependent on the CIS network, so this failure may cause some loss of capability but should not cause the mission to fail. Detailed analysis of the CIS network shows that if an attack occurs against the existing Bosnian network, its effects may be severe, but with a low likelihood (about 0.015).

5.5.4. Cable

Likelihood of Failure = 0.3; Effect = B (Loss of mission); Risk = High.

U.S. forces utilize the existing fiber optic and coaxial cable networks to communicate across the region. However, the network is not a primary communications platform. Intelligence on insurgent and enemy activity shows that forces are preparing for an attack on the cable network due to its vulnerability across the country. Therefore, we assign a likelihood of 0.3 to this risk scenario, given the current security over the network.

5.5.5. Satellite

Likelihood of Failure = 0.55; Effect = A (Loss of life); Risk = Extremely High.

Because U.S. forces are strongly dependent on satellite communications, any loss for 12 hours or more can result in a loss of life and mission. An intelligence analysis of the satellite network shows that the network is protected throughout Bosnia, but not enough to ensure that forces opposing the operation will not succeed in attacking it. Due to the criticality of the network, enemy forces will likely target it. Based on this assessment, the likelihood of the failure scenario occurring is high (0.55).

5.5.6. International

Likelihood of Failure = 0.15; Effect = A (Loss of life); Risk = Extremely High.

Here we assume that any loss of international communications for six hours or longer throughout the region would cut off U.S. forces from other countries and from U.S. strategic decisionmakers. Therefore, this is a very high-risk failure. According to expert analysis of the forces opposing the operation, an attack against international communications would be difficult but fairly likely. Therefore, we assign a likelihood of 0.15 to this scenario. Even if it did occur, its effects may be somewhat reversible within six hours.

Table VII. Scoring of Subtopics for OOTW Using the Criteria Hierarchy

Criteria | 1.1 Telephone | 1.2 Cellular | 2. Cable | 3.1 CIS | 4. Satellite | 5. International
Undetectability | Low | Low | Med | High | Low | High
Uncontrollability | Med | Med | High | High | Med | High
Multiple Paths to Failure | High | Med | High | High | Med | High
Irreversibility | Med | High | Med | High | High | Low
Duration of Effects | High | High | High | High | High | High
Cascading Effects | Med | Med | Low | Low | High | High
Operating Environment | High | High | High | High | Med | High
Wear and Tear | Med | High | Low | High | Med | High
Hardware/Software/Human/Organizational | High | High | Med | High | High | High
Complexity and Emergent Behaviors | Med | High | Low | High | High | High
Design Immaturity | Med | High | Med | High | High | Med

Fig. 6. Quantitative severity scale matrix.



Assuming that we filter out all subtopics (risk scenarios) attaining a risk valuation of moderate or low, CIS is filtered out. Therefore, the remaining five critical risk scenarios are: Telephone, Cellular, Cable, Satellite, and International Communications. Based on the assessments shown above and in Fig. 6, planners of the operation would surely want to concentrate resources and personnel on ensuring that the cellular, cable, satellite, telephone, and international communications networks are well protected and guarded.
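The Phase V screen applied above can be summarized compactly. In the sketch below, the assessed likelihoods and effect classes are those of the case study, but the band boundaries inside risk_level are illustrative stand-ins for the Fig. 6 matrix, not its published cells:

```python
# Phase V inputs from the case study: (assessed likelihood, effect class).
assessments = {
    "Telephone":     (0.05,  "A"),
    "Cellular":      (0.45,  "A"),
    "Cable":         (0.30,  "B"),
    "CIS":           (0.015, "C"),  # moderate risk: filtered out
    "Satellite":     (0.55,  "A"),
    "International": (0.15,  "A"),
}

def risk_level(likelihood: float, effect: str) -> str:
    """Illustrative stand-in for the Fig. 6 cardinal matrix."""
    if effect == "A":
        return "extremely high" if likelihood >= 0.05 else "high"
    if effect == "B":
        return "high" if likelihood >= 0.1 else "moderate"
    return "moderate"

survivors = [s for s, (p, e) in assessments.items()
             if risk_level(p, e) not in ("moderate", "low")]
# survivors: Telephone, Cellular, Cable, Satellite, International
```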

5.6. Phase VI: Risk Management

In Phase VI, a complete quantitative decision analysis is performed, involving estimates of cost, performance benefits, and risk reduction, and of management options for dealing with the most urgent remaining scenarios.

Examples for Phases VI to VIII are beyond the scope of the risk filtering and ranking aspects of this article. Readers who are interested in the deployment of these phases may consult the following three sources: Dombroski (2001), Lamm (2001), and Mahoney (2001).

5.7. Phase VII: Safeguarding Against Missing Critical Items

In Phase VII, we examine the performance of the options selected in Phase VI against the scenarios that were filtered out during Phases II to V.

5.8. Phase VIII: Operational Feedback

Phase VIII represents the operational phase of the underlying system, during which the experience and information gained is used to continually update the scenario filtering and decision processes of Phases II to VII.

6. CONCLUSIONS AND FUTURE WORK

Needless to say, any military operation, even one other than war, is a matter of great seriousness. Once undertaken, it is important to ensure its success. Just as the enemy will probe for weak spots, so the planners of the operation must identify the risks, impose priorities, and take appropriate actions to minimize them. The risk filtering, ranking, and management methodological framework presented here addresses this process. The eight phases of risk filtering, ranking, and management reflect a philosophical approach rather than a mechanical methodology. The philosophy can be specialized to particular contexts, e.g., operations other than war, an aerospace system, contamination of drinking water, or the physical security of an embassy. In this philosophy, filtering and ranking discrete classes of scenarios is viewed as a precursor to, rather than a substitute for, analysis of the totality of all risk scenarios.

ACKNOWLEDGMENTS

The research documented in this article was supported in part by the Virginia Transportation Research Council. We wish to thank our graduate students: Matthew Dombroski for his significant contributions to Section 5, "A Demonstration Problem"; Ruth Y. Dicdican, Mike Diehl, Gregory Lamm, Maria (Peach) Leung, Brian Mahoney, Mike Pennock, and Joost Santos for their helpful comments and suggestions; Grace Zisk for her editorial assistance; and Della Dirickson for her administrative assistance.

REFERENCES

Baron, J., J. C. Hershey, and H. Kunreuther, "Determinants of priority for risk reduction: The role of worry." Risk Analysis, 20(4), 413–427, 2000.

CRMES, "Ranking of space shuttle FMEA/CIL items: The risk ranking and filtering (RRF) method." Center for Risk Management of Engineering Systems, University of Virginia, Charlottesville, VA, 1991.

Dombroski, M., Y. Y. Haimes, J. H. Lambert, K. Schlussel, and M. Sulcoski, "Risk-based methodology for the characterization and support for operations other than war." To appear in Military Operations Research Journal, 2002.

Dombroski, M., "A risk-based decision support methodology for operations other than war." Master of Science thesis, Department of Systems and Information Engineering, University of Virginia, 2001.

Haimes, Y. Y., "Hierarchical holographic modeling." IEEE Transactions on Systems, Man, and Cybernetics, 11(9), 606–617, 1981.

Haimes, Y. Y., "Total risk management." Risk Analysis, 11(2), 169–171, 1991.

Haimes, Y. Y., Risk modeling, assessment, and management. New York: John Wiley & Sons, 1998.

Haimes, Y. Y., N. C. Matalas, J. H. Lambert, B. A. Jackson, and J. F. R. Fellows, "Reducing the vulnerability of water supply systems to attack." Journal of Infrastructure Systems, American Society of Civil Engineers, 4(4), 164–177, 1997.

Kaplan, S., "On inclusion of precursor and near miss events in quantitative risk assessments: A Bayesian point of view and a space shuttle example." Journal of Reliability Engineering and System Safety, 27, 103–115, 1990.

Kaplan, S., "'Expert information' vs. 'expert opinion': Another approach to the problem of eliciting/combining/using expert knowledge in PRA." Journal of Reliability Engineering and System Safety, 35, 61–72, 1992.

Kaplan, S., and B. J. Garrick, "On the quantitative definition of risk." Risk Analysis, 1(1), 11–27, 1981.

Kaplan, S., Y. Y. Haimes, and B. J. Garrick, "Fitting hierarchical holographic modeling (HHM) into the theory of scenario structuring and a refinement to the quantitative definition of risk." Risk Analysis, 21(5), 807–819, 2001.

Kaplan, S., S. Vishnepolschi, B. Zlotin, and A. Zusman, "New tools for failure and risk analysis: Anticipatory failure determination (AFD) and the theory of scenario structuring." Monograph published by Ideation International Inc., Southfield, MI, 1999.

Lambert, J. H., Y. Y. Haimes, D. Li, R. Schooff, and V. Tulsiani, "Identification, ranking, and management of risks in a major system acquisition." Reliability Engineering and System Safety, 72(3), 315–325, 2001.

Lamm, G., "Assessing and managing risks to information assurance: A methodological approach." Master of Science thesis, Department of Systems and Information Engineering, University of Virginia, 2001.

Mahoney, B., "Quantitative risk analysis of GPS as a critical infrastructure for civilian transportation applications." Master of Science thesis, Department of Systems and Information Engineering, University of Virginia, 2001.

Matalas, N. C., and M. B. Fiering, "Water-resource systems planning." In Climate, climate change, and water supply (Studies in geophysics), pp. 99–109, National Research Council, National Academy of Sciences, Washington, DC, 1977.

Morgan, M. G., B. Fischhoff, L. Lave, and P. Fischbeck, "A proposal for risk ranking within federal agencies." In Comparing environmental risks: Tools for setting government priorities, J. Clarence Davies (Ed.), Resources for the Future, Washington, DC, 1999.

Morgan, M. G., H. K. Florig, M. L. DeKay, and P. Fischbeck, "Categorizing risks for risk ranking." Risk Analysis, 20(1), 49, 2000.

Roland, H. E., and B. Moriarty, System safety engineering and management, 2nd ed. New York: John Wiley & Sons, 1990.

Sokal, R. R., "Classification: Purposes, principles, progress, prospects." Science, September 27, 1974.

Webler, T., H. Rakel, O. Renn, and B. Johnson, "Eliciting and classifying concerns: A methodological critique." Risk Analysis, 15(3), 421, 1995.
