Copyright by Xiaomin You 2010


Copyright

by

Xiaomin You

2010


The Dissertation Committee for Xiaomin You certifies that this is the approved

version of the following dissertation:

Risk Analysis in Tunneling with Imprecise Probabilities

Committee:

Fulvio Tonon, Supervisor

Robert B. Gilbert

Lance Manuel

Timothy P. Smirnoff

Ellen M. Rathje


Risk Analysis in Tunneling with Imprecise Probabilities

by

Xiaomin You, B.E., M.E.

Dissertation

Presented to the Faculty of the Graduate School of

The University of Texas at Austin

in Partial Fulfillment

of the Requirements

for the Degree of

Doctor of Philosophy

The University of Texas at Austin

August 2010


Dedication

This thesis is dedicated to my parents.


Acknowledgements

First, I would like to thank my advisor, Dr. Fulvio Tonon, for giving me this

opportunity and supporting me during the entire process. I appreciate his guidance,

patience, and insightful discussion. Without him this thesis would not have been a reality.

Sincere thanks are extended to Drs. Robert Gilbert, Lance Manuel, Timothy

Smirnoff, and Ellen Rathje for being on my committee, and for valuable comments and

advice. Special thanks are due to Dr. Timothy Smirnoff for his continued support and

traveling thousands of miles to attend my comprehensive exam and the final defense.

I want to thank my labmates: Ran, Sang Yeon, Seung Han, Yuannian, Pooyan,

and Heejung, for the collaboration and inspiring discussion on lab tests and coursework.

I also thank my friends: Songcheng, Jiabei, Rui, Manxiang, Tommy, Lu, Jianli,

and Weihong. Thank you all for making my life at Austin more fun.

I especially thank my parents for their continuous support and unconditional love

throughout my life. I would also like to thank my parents-in-law for their love and care.

Finally, very special thanks to my husband, Wen, for his enthusiasm and support. Thank

you for loving me since I was just a girl.

This work was supported by the International Tunneling Consortium (ITC).

Thanks are due to all ITC members.


Risk Analysis in Tunneling with Imprecise Probabilities

Publication No._____________

Xiaomin You, Ph.D.

The University of Texas at Austin, 2010

Supervisor: Fulvio Tonon

Due to the inherent uncertainties in ground and groundwater conditions, tunnel

projects often have to face potential risks of cost overrun or schedule delay. Risk analysis

has become a required tool (by insurers, Federal Transit Administration, etc.) to identify

and quantify risk, as well as visualize causes and effects, and the course (chain) of events.

Various efforts have been made in risk assessment and analysis using conventional

methodologies with precise probabilities. However, because of limited information or

experience in similar tunnel projects, available evidence in risk assessment and analysis

usually relies on judgments from experienced engineers and experts. As a result,

imprecision is involved in probability evaluations. The intention of this study is to

explore the use of the theory of imprecise probability as applied to risk analysis in

tunneling. The goal of the methodologies proposed in this study is to deal with imprecise

information without forcing the experts to commit to assessments that they do not feel

comfortable with or the analyst to pick a single distribution when the available data does

not warrant such precision.


After a brief introduction to the theory of imprecise probability, different types of

interaction between variables are studied, including unknown interaction, different types

of independence, and correlated variables. Various algorithms aiming at achieving upper

and lower bounds on previsions and conditional probabilities with assumed interaction

type are proposed. Then, methodologies are developed for risk registers, event

trees, fault trees, and decision trees, i.e., the standard tools in risk assessment for

underground projects. Corresponding algorithms are developed and illustrated by

examples. Finally, several case histories of risk analysis in tunneling are revisited by

using the methodologies developed in this study. All results obtained based on imprecise

probabilities are compared with the results from precise probabilities.


Table of Contents

List of Tables ......................................................................................................... xi

List of Figures ....................................................................................................... xv

Chapter 1 Introduction.................................................................................. 1

1.1 Research Motivation ............................................................................. 1

1.2 Literature Review.................................................................................. 5

1.2.1 Risk Analysis in Tunneling.......................................................... 5

1.2.2 Imprecise Probabilities................................................................. 7

1.3 Research Objectives and Plans ............................................................. 8

1.4 Dissertation Outline ............................................................................ 10

Chapter 2 A Short Introduction to Imprecise Probabilities ........................ 12

2.1 Basic Concepts.................................................................................... 13

2.1.1 Gamble....................................................................................... 13

2.1.2 Lower and Upper Previsions...................................................... 14

2.1.3 Geometrical Interpretation of Previsions ................................... 16

2.1.4 Lower and Upper Probabilities .................................................. 19

2.2 Two Properties of Rational previsions................................................ 20

2.2.1 Avoiding Sure Loss.................................................................... 20

2.2.2 Coherence .................................................................................. 22

2.3 Set Ψ Constructed on the basis of Specified Previsions..................... 24

2.4 Natural Extension................................................................................ 28

2.5 Decision making ................................................................................. 29

2.6 Why Use Imprecise Probabilities?...................................................... 32

Chapter 3 Different types of interaction between variables ....................... 37

3.1 Constraints on marginals..................................................................... 37

3.2 Problem Formulation .......................................................................... 39

3.2.1 Unknown interaction.................................................................. 39

3.2.2 Analysis with independent variables ......................................... 40


3.2.2.1 Epistemic Irrelevance/Independence........................... 41

3.2.2.2 Conditional Epistemic Irrelevance/Independence....... 46

3.2.2.3 Strong Independence ................................................... 47

3.2.2.4 Conditional strong Independence................................ 48

3.2.3 Analysis with uncertain correlation ........................................... 49

3.3 Algorithms .......................................................................................... 50

3.3.1 Previsions and conditional probability in joint distributions ..... 51

3.3.2 Unknown Interaction ................................................................. 53

3.3.3 Independence ............................................................................. 59

3.3.3.1 Epistemic Irrelevance/Independence........................... 59

3.3.3.2 Conditional Epistemic Irrelevance/Independence....... 77

3.3.3.3 Strong Independence ................................................... 81

3.3.3.4 Conditional strong Independence................................ 86

3.3.4 Uncertain Correlation................................................................. 88

3.4 Summary ............................................................................................. 90

Chapter 4 Failure and Decision Analysis ................................................... 94

4.1 Introduction......................................................................................... 94

4.2 Failure analysis with Imprecise Probability........................................ 98

4.2.1 Event Tree Analysis (ETA) ....................................................... 98

4.2.1.1 ETA with conditional probabilities ............................. 98

4.2.1.2 ETA with total probabilities ...................................... 107

4.2.1.3 ETA with Combination of conditional probabilities and total probabilities ............................................................ 115

4.2.2 Fault Tree Analysis .................................................................. 117

4.2.3 Combination of Event Tree Analysis and Fault Tree Analysis ...... ......................................................................................... 126

4.3 Decision Analysis with Imprecise Probabilities ............................... 127

4.3.1 Standard form of decision tree................................................. 128

4.3.2 Algorithm of decision analysis with imprecise probabilities... 130

4.3.3 Decision analysis with uncertain new information.................. 136

4.3.3.1 Input Data .................................................................. 136


4.3.3.2 Algorithm to calculate E_LOW(X†|Y) − s_i ................ 138

4.3.3.3 Discussion ................................................................. 149

4.3.4 Lower and upper values of information................................... 155

Chapter 5 Case Histories .......................................................................... 169

5.1 ETA applied to the design of an underwater tunnel ........................... 169

5.2 FTA applied to the Stockholm Ring Road Tunnels ........................... 175

5.3 Decision analysis: the optimal exploration plan for the Sucheon Tunnel........................................................................................................... 184

5.4 Risk register for the East Side CSO project...................................... 203

Chapter 6 Summary and Future Work...................................................... 206

6.1 Summary ........................................................................................... 206

6.1.1 Algorithms for different types of interaction in imprecise probabilities.............................................................................. 206

6.1.2 Application to the standard tools in risk analysis .................... 207

6.1.2.1 Event tree analysis..................................................... 207

6.1.2.2 Fault tree analysis...................................................... 207

6.1.2.3 Decision tree analysis ................................................ 208

6.1.2.4 Risk register............................................................... 209

6.2 Future Work...................................................................................... 209

6.2.1 Elicitation and Assessment with Imprecise Probabilities ........ 209

6.2.2 Improvement on algorithms for different types of interaction. 210

6.2.3 Cost/Contingency and Schedule Estimation............................ 210

Appendix A Explicit Formulation for Optimization Problems in Section 4.2.1.2 .................................................................................................. 211

Appendix B Input Data and Results of Optimal Exploration Plan........... 213

Appendix C Risk register for East Side CSO Project, Portland, Oregon. 218

Reference ............................................................................................................ 229

Vita .................................................................................................................... 233


List of Tables

Table 1-1 The Intervals for the frequency of occurrence in ITA Guidelines ...... 3

Table 1-2 An Example of Risk Matrix (Pennington et al., 2006)........................ 6

Table 3-1a Example 3-1: Solutions of the linear programming problems (3.45) for

the lower and upper probabilities for T............................................ 56

Table 3-2 Example 3-1: Extreme distribution of Ψ for the linear programming

problems (3.45) ................................................................................ 57

Table 3-3a Example 3-2: Solutions of the optimization problems (3.17) for lower

and upper probabilities for T............................................................ 65

Table 3-4a Example 3-2: Solutions of the optimization problems (3.17) for upper

and lower conditional probabilities that the bolt is Type B given the type

of the nut. ......................................................................................... 67

Table 3-5 Example 3-2: Solutions of the optimization problems (3.63) for lower and

upper probabilities for T................................................................... 68

Table 3-6 Example 3-2: Solutions of the optimization problems (3.63) for upper and

lower conditional probabilities that the bolt is Type B given the type of

the nut............................................................................................... 68

Table 3-7 Example 3-3: Solutions of the optimization problems (3.64) for lower and

upper probabilities for T. (same solutions for (3.65))...................... 71

Table 3-8a Example 3-3: Solutions of the optimization problems (3.64) for upper

and lower conditional probabilities that the bolt is Type B given the type

of the nut. ......................................................................................... 72

Table 3-9 Example 3-3: Solutions of the optimization problems (3.66) for lower and

upper probabilities for T................................................................... 74


Table 3-10 Example 3-3: Solutions of the optimization problems (3.66) for upper

and lower conditional probabilities that the bolt is Type B given the type

of the nut. ......................................................................................... 74

Table 3-11 Example 3-3: Extreme distribution of Ψ for the linear programming

problem (3.66).................................................................................. 75

Table 3-12 Example: Solutions of the optimization problems (3.71) for upper and

lower conditional probabilities P(1,2,3) + P(1,2̄,3̄) + P(1̄,2,3̄). ........ 81

Table 3-13 Example 3-5: Lower and upper probabilities for T and lower and upper

conditional probability P(S1=B|S2=B) on all extreme distributions

of Ψ (optimal solutions are highlighted.) ......................................... 85

Table 3-14 Example: Solutions of the optimization problems (3.79) for upper and

lower conditional probabilities P(1,2,3) + P(1,2̄,3̄) + P(1̄,2,3̄). ........ 87

Table 3-15 Example: Solutions of the optimization problems (3.81) for upper and

lower probabilities for event T = {(A, A), (B, B), (C, C)}. ............. 89

Table 3-16 Example: Solutions of the optimization problems (3.81) for upper and

lower probabilities for P(S1=B|S2=B). ............................................. 90

Table 3-17 Example: Solutions of the optimization problems (3.81) for upper and

lower probabilities for P(S2=B|S1=B). ............................................. 90

Table 3-18 Summary of all algorithms for different types of independence........ 93

Table 4-1: Solutions for the optimization problems (19) for the upper and lower

probabilities of failure.................................................................... 113

Table 4-2: Extreme points for the set of joint distributions of E1 and E2. ....... 114

Table 4-3: Extreme points for the set Ψ of joint distributions for Scomb and S3.

........................................................................................................ 114


Table 4-4: Solutions for the optimization problems (4.32) for the upper and lower

probabilities of Event E1,1. ............................................................. 124

Table 4-5: Solutions for the optimization problems (4.33) for the upper and lower

probabilities of Event E1,2. ............................................................. 124

Table 4-6: Solutions for the optimization problems (4.34) for the upper and lower

probabilities of Event E1. ............................................................... 125

Table 4-7: Extreme joint distributions of E2,1 and E2,2. ................................... 126

Table 4-8: Extreme joint distributions of E1 and E2. ....................................... 126

Table 4-9: Construction Cost Matrix............................................................... 132

Table 4-10: Exploration Reliability Matrix....................................................... 143

Table 4-11: Set of extreme joint probabilities XEXT . ....................................... 145

Table 4-12: Extreme probabilities conditional to the exploration results. ........ 145

Table 4-13: Exploration Reliability Matrix....................................................... 148

Table 4-14: Set of extreme joint probabilities, XEXT . ...................................... 149

Table 4-15: Extreme probabilities conditional to the exploration results. ........ 149

Table 4-16: Set of extreme joint probabilities XEXT . ....................................... 164

Table 4-17: Extreme probabilities conditional to the exploration results. ........ 165

Table 4-18: Extreme marginals on exploration results and real geological states ...

........................................................................................................ 167

Table 4-19: Unique optimal construction strategies and value of information . 168

Table 5-1 Success probabilities of safety measures......................................... 170

Table 5-2 Probabilities of criticality and occurrence of accident .................... 173

Table 5-3 Probabilities of accident at different risk levels .............................. 173

Table 5-4 Success probabilities of safety measures in imprecise probabilities ......

........................................................................................................ 174


Table 5-5 Probabilities of criticality and occurrence of accident .................... 175

Table 5-6 Probabilities of accident at different risk levels .............................. 175

Table 5-7 Occurrence probabilities of events at the bottom of fault-trees ...... 178

Table 5-8 Calculated occurrence probabilities of events................................. 183

Table 5-9 Description of Geologic States........................................................ 185

Table 5-10 Description of Construction Strategies.......................................... 185

Table 5-11 Construction Cost (per meter) ....................................................... 186

Table 5-12 Tunnel Section and Precise Prior Probabilities ........................... 186

Table 5-13 Exploration Reliability Matrix ...................................................... 187

Table 5-14 Imprecise Prior Probabilities ......................................................... 188

Table 5-15 Imprecise Exploration Reliability Matrix...................................... 189

Table 5-16 Optimal Construction Strategies Obtained.................................... 192

Table 5-17 Optimal Exploration Plans and the Corresponding Savings. ........ 203

Table 5-18 Description of occurrence probability in the East Side CSO project. ..

........................................................................................................ 204

Table B-1 Extreme Prior Probabilities for each tunnel section ....................... 213

Table B-2 Extreme conditional probabilities of exploration result given real

geological states ............................................................................. 215

Table B-3 Value of information (with relaxed constraints)............................. 215

Table B-4 Value of information (with strict constraints)................................. 216

Table B-5 Value of information (with precise probabilities)........................... 217


List of Figures

Figure 2-1 Gamble X, the mapping from Ω to R............................................ 13

Figure 2-2 (a) 3-D space of probability of singletons; (b) probability simplex in a

3-D space ......................................................................................... 17

Figure 2-3 Geometrical Interpretation of Previsions ......................................... 18

Figure 2-4 Three lower previsions which are incurring sure loss...................... 21

Figure 2-5 Three coherent lower previsions ...................................................... 23

Figure 2-6 Three incoherent lower previsions ................................................... 24

Figure 2-7 Probability simplex .......................................................................... 27

Figure 2-8 The natural extension lower prevision of the new gamble X1 + X2 ....

.......................................................................................................... 29

Figure 2-9 Relaxed constraint............................................................................ 31

Figure 2-10 Strict constraint .............................................................................. 32

Figure 3-1 Example 3-1: marginal sets Ψi in the 3-dimensional spaces (n1 = n2 = 3)

.......................................................................................................... 55

Figure 3-2 Set of extreme points for the case of epistemic independence......... 63

Figure 3-3 An example of Credal network ........................................................ 78

Figure 3-4 Non-convex set of joint distributions, ΨS, under strong independence .... 82

Figure 4-1: Example of Event Tree. ................................................................... 95

Figure 4-2: Example of Fault Tree. .................................................................... 96

Figure 4-3: Example of Decision Tree. .............................................................. 97

Figure 4-4: Event-tree with N levels................................................................... 99

Figure 4-5: Event-tree in Example 4-1. ............................................................ 106


Figure 4-6: Example 4-1: sets Ψ1,0 and Ψ1,2 in the 3-dimensional spaces of the

probability of the singletons........................................................... 106

Figure 4-7: Event Tree when total probabilities are assigned to all events. ..... 111

Figure 4-8: Event Tree with mixed information consisting of conditional

probabilities and total probabilities................................................ 116

Figure 4-9: Event Tree equivalent to the tree in Figure 4-8 that contains only

probabilities conditional to the upper level events. ....................... 117

Figure 4-10: Sub-tree with OR-gate. .................................................................. 118

Figure 4-11: Sub-tree with AND-gate. ............................................................... 119

Figure 4-12: Fault tree analysis for the failure probability of sub-sea tunnel project

with imprecise probabilities........................................................... 122

Figure 4-13: (a) Decision tree, (b) its standard form, and (c) reduced decision tree

with only the optimal choices. ....................................................... 129

Figure 4-14: Decision tree for the tunnel............................................................ 132

Figure 4-15: Decision tree for the tunnel with exploration. ............................... 143

Figure 4-16: Effect of imprecision...................................................................... 152

Figure 4-17: Effect of reliability......................................................................... 152

Figure 4-18: Case (1): Intersection point in set Ψ ............................................ 154

Figure 4-19: Case (2): No intersection point in set Ψ ...................................... 155

Figure 4-20: Decision tree for the tunnel exploration......................................... 160

Figure 5-1 Construction site plan .................................................................... 170

Figure 5-2 Event tree for initiating event of poor ground conditions .............. 171

Figure 5-3 Stockholm Ring Road project plan in 1992 ...................................... 176

Figure 5-4 Fault tree for damage to lime trees due to tunneling activities ...... 179

Figure 5-5 Fault tree for Branch A .................................................................. 180


Figure 5-6 Fault tree for Branch B................................................................... 181

Figure 5-7 Fault tree for Branch C................................................................... 182

Figure 5-8 Geological profile and layout of the Sucheon Tunnel ................... 184

Figure 5-9 Decision tree for Section i of the Sucheon Tunnel without additional

exploration ..................................................................................... 190

Figure 5-10 Decision tree for Section i of the Sucheon Tunnel with imperfect

additional exploration .................................................................... 191

Figure 5-11 Decision tree for determining the value of information for Section 1 of

the Sucheon Tunnel: No additional exploration branch ................ 194

Figure 5-12 Decision tree for determining the value of information for Section 1 of

the Sucheon Tunnel: Imperfect additional exploration branch...... 195

Figure 5-13 Decision tree for determining the value of information for Section 1 of

the Sucheon Tunnel: Perfect additional exploration branch .......... 196

Figure 5-14 Value of imperfect exploration (with relaxed constraints) .......... 198

Figure 5-15 Value of imperfect exploration (with strict constraints) .............. 198

Figure 5-16 Value of perfect exploration (with relaxed constraints)............... 199

Figure 5-17 Value of perfect exploration (with strict constraints) .................. 199

Figure 5-18 Value of updating to perfect exploration (with relaxed constraints)...

........................................................................................................ 200

Figure 5-19 Value of updating to perfect exploration (with strict constraints) ......

........................................................................................................ 200


Chapter 1 Introduction

1.1 RESEARCH MOTIVATION

The inherent uncertainties in ground and groundwater conditions expose tunnel

projects to potential risks of cost overrun or schedule delay. Between 1994 and 2005,

catastrophic accidents in tunneling projects caused more than $600 million in losses

(Wannick, 2006). In March 2009, the subway tunnel collapse in Cologne caused at least

€400 million in losses in addition to two fatalities (TunnelTalk, 2009). Risk analysis, an

important tool to identify and quantify risk and to visualize causes, effects, and the course

(chain) of events, is becoming more and more important.

Because of these large losses, tunneling insurance has become “notoriously

unprofitable” (Wannick, 2006), and insurers have not been willing to enter the tunnel

insurance market. Given this background, the International Tunnel Insurance Group

(2006) published The Code of Practice for Risk Management of Tunnel Works (ITIG

Code) to improve the insurability of tunnel projects. The ITIG code advocates a

systematic risk management approach in all phases of tunnel construction.

The International Tunnelling Association has published guidelines for tunneling

risk management (Eskesen et al., 2004). Referred to as “ITA Guidelines,” they aim to

improve present risk management significantly. Risk management is an overall term that

includes risk identification, risk assessment, risk analysis, risk elimination, and risk

mitigation and control. The ITA Guidelines stress the importance of risk analysis and

state that risk analysis should be an integral part of most underground construction

projects. In the ITA Guidelines, risk assessment and analysis are required as important

techniques to quantify risks and help to make decisions.


In conventional risk assessment and analysis, uncertainties are measured by

precise probabilities. (An example can be found in Whitman, 1984.) However,

uncertainties in the real world include two parts: stochastic uncertainties and epistemic

uncertainties. The first arise from the variance of the event itself and the second from

ignorance about the subject matter. Stochastic uncertainties are the inherent physical

property of a specific event, and we are able to perform a quantitative assessment of

variance by using classic theories of precise probabilities, such as the estimation of wind

load by a probabilistic model (Simiu et al, 2001). Epistemic uncertainties about the

subject matter can be reduced if the amount of information is expanded. Because of

limited information, assigning a precise value to the probability of an event may not be

practical; probability is often evaluated imprecisely. Available evidence in risk

assessment and analysis usually relies on judgments from experienced engineers and

experts. As a result, imprecision is involved in probability evaluations. For instance, the

frequency of occurrence should be evaluated using five predefined intervals according to the

ITA Guidelines, as shown in Table 1-1. Given these predefined intervals, experts’

evaluations are estimates or imprecise judgments, for example, ‘likely to happen.’ When

using precise probabilities, the central value ‘0.1’ is used in the later analysis.

Consequently, the question arises of why the single value ‘0.1’ should represent the whole

interval ‘0.03 – 0.3’. This simplification is not defensible in risk analysis.
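To make the point concrete, the short sketch below (an illustration added here, not an analysis from this dissertation; the conditional bounds are hypothetical numbers) propagates the ITA interval ‘0.03 – 0.3’ through a simple two-event chain and compares the result with using only the central value ‘0.1’:

```python
# Illustrative sketch (not from the dissertation): propagating the ITA
# interval "likely" = [0.03, 0.3] through a two-event chain, compared
# with collapsing it to the central value 0.1. The conditional bounds
# below are hypothetical, chosen only for illustration.

def interval_product(a, b):
    """Product of two independent interval probabilities (lo, hi), each in [0, 1]."""
    return (a[0] * b[0], a[1] * b[1])

p_initiating = (0.03, 0.3)           # "likely to happen" on the interval scale
p_failure_given_init = (0.1, 0.2)    # hypothetical conditional probability bounds

lo, hi = interval_product(p_initiating, p_failure_given_init)
print((lo, hi))                      # the resulting interval spans a factor of 20
print(0.1 * 0.15)                    # central values alone hide that spread
```

The width of the expert’s original interval survives into the result; replacing it with a single number discards exactly the information the expert declined to commit to.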


Table 1-1 The Intervals for the frequency of occurrence in ITA Guidelines (Eskesen et al., 2004).

The author has participated in several risk analysis workshops for tunneling

projects. In these workshops, at the very beginning almost all probability evaluations

were given by implicit judgment such as “event A is of very high probability for

occurrence.” Then a crisp number such as “90%” was proposed to describe the “very

high probability.” If an expert expressed discomfort with this number, then it was

decreased to 80% or lower. Such experience illustrates the subjectivity involved in

evaluating probabilities; nor can experts distinguish between, say, 82% and 85%

on the basis of experience. Subjectivity is a consequence of ignorance of the subject

matter itself and lack of information. Precision may not be necessary in such situations.

Various efforts have been made to consider the imprecision in probabilities.

Resulting theories include random sets (Matheron, 1975), fuzzy sets (Zadeh, 1965), and

probability bounds—also referred to as p-boxes (Ferson et al., 2002). These approaches

have been applied in risk analysis (Tonon et al. 2000, Huang et al. 2001). In fuzzy sets, a membership function must be pre-defined; however, this concept may be unfamiliar to the tunneling industry. Normalized fuzzy sets (interpreted in a probabilistic sense) are consonant random sets; also, equivalent random sets can be derived from p-boxes (Bernardini and Tonon 2010, Pages 51-52 and 88-89). Indeed, random sets are


special cases of imprecise probabilities. Imprecise probabilities fully capture the imprecision through upper and lower previsions (or expectations) of gambles (or bounded real-valued functions), as will be introduced in Chapter 2. The fundamental principles of this

theory are: avoiding sure loss, coherence, and natural extension. As Walley (1991, Page

2) explained:

“a probability model avoids sure loss if it cannot lead to behavior that is certain to be harmful. This is a basic principle of rationality. Coherence is a stronger principle which characterized a type of self-consistency. Coherent models can be constructed from any set of probability assessments that avoid sure loss, through a mathematical procedure of natural extension which effectively calculates the behavioral implication of the assessments.”

For example, imprecise probabilities admit imprecision in probability evaluation and assessment. Implicit judgments, like ‘A is more probable than B’, can be applied to construct sets of probability measures. Any probability distribution compatible

with given judgments should be considered in the analysis. On the other hand,

“a central assumption of the Bayesian theory is that uncertainty should always be measured by a single probability measure, and values should always be measured by a precise utility function.” (Walley, 1991, Page 3)

A thorough motivation for imprecise probability in tunneling will be given in Section 2.6.

The intention of this study is to explore the use of the theory of imprecise

probability as applied to risk analysis in tunneling. The goal of the methodologies

proposed in this study is to deal with imprecise information without forcing the experts to

commit to assessments that they do not feel comfortable with or the analyst to pick a

single distribution when the available data do not warrant such precision. The approach allows

considering different types of interaction between events, including unknown interaction,

independence, and uncertain correlation. Likewise, it allows, for example, contractors to

distinguish between the maximum buying price for additional information at bid stage


and the minimum selling price of that information. Methodologies have been developed

for risk registers, event trees, fault trees, and decision trees, i.e. the standard tools in risk

assessment for tunnels.

1.2 LITERATURE REVIEW

1.2.1 Risk Analysis in Tunneling

Although the ITIG Code stresses the significance of risk analysis, it does not

recommend a specific risk analysis method. Both qualitative and quantitative risk

analysis methods are recommended in ITA Guidelines. In a qualitative risk analysis, the

frequency and consequence of a hazard are rated based on predefined intervals (e.g.,

Table 1-1). According to the ratings, the risk level of a considered hazard is determined.

This qualitative method is called the ‘risk matrix method’. For example, Table 1-2 shows

the risk matrix for the East Side CSO project, Portland, Oregon (Pennington et al., 2006),

in which risks are divided into three levels: tolerable, as low as reasonably practicable

(ALARP), and intolerable. Risk register methodology is frequently used in risk

management for tunneling projects. It records all identified hazards with risk description,

probability rating, consequence rating, risk level determined by a risk matrix, and

suggested mitigations for high level risks.

While the qualitative method can provide a big picture of the risks, thereby

enabling the engineers to prepare pro-active mitigation measures, it is considered too

coarse to provide reliable quantitative risk estimates. Thus, the Monte-Carlo simulation is

often applied as the quantitative risk analysis method, but a probability distribution has to

be provided, even in the case of limited information. Some standard tools such as the

event tree, fault tree, and decision tree are suggested for risk analysis in ITA Guidelines.


These tools are widely used outside the tunneling industry, and they can be and

have been used for risk analysis in tunneling without major adjustments. The event tree

analysis in the design of a TBM tunnel (Hong et al., 2009), the application of fault tree

analysis to the Stockholm Ring Road Tunnels (Sturk et al., 1996), and the optimal

exploration plan for the Sucheon Tunnel by decision tree analysis (Karam et al., 2007),

all adopt precise probabilities.

Table 1-2 An Example of Risk Matrix (Pennington et al., 2006)

Besides ITA Guidelines and ITIG Code, some guidelines developed in other civil

engineering industries can be applied in tunneling projects as well. For example, the

Association for the Advancement of Cost Engineering published a Recommended

Practice (AACE International, 2008) to provide guidelines for risk analysis by using

range estimating. In the Recommended Practice, a “double-triangular” probability

distribution must be predefined to describe the uncertainties. The U.S. Federal Highway


Administration also published Risk Assessment and Allocation for Highway

Construction Management (FHWA, 2006) and recommends the risk matrix method and

Monte-Carlo simulation method, which is more frequently used in cost estimation.

The Federal Transit Administration also requires risk management for all federally-funded

projects (FTA, 2003). Similarly, the Washington State Department of Transportation

(WADOT, 2010) and California Department of Transportation (Caltrans, 2007) have

issued guidelines for risk management. Both recommend risk analysis methods similar to

those identified by the FHWA.

In summary, the qualitative method considers imprecise inputs but the analysis

results are coarse, while the quantitative method requires precise inputs to proceed with

the risk analysis. Considering the uncertainty inherent in tunneling projects, forcing

experts to commit to precise probability evaluation is not practical. Much needs to be

done in risk analysis in tunneling with imprecise inputs.

1.2.2 Imprecise Probabilities

With imprecise inputs, the question of how to generate a reasonable evaluation in

probability measures is critical in the risk analysis and decision-making process. One

solution is imprecise probabilities. Imprecise Probability, as proposed by Walley (1991),

is used as a generic term to cover all mathematical models which measure chance or

uncertainty without sharp numerical probabilities. All analysis conducted in this study is

by means of imprecise probabilities.

In the theory of imprecise probability, a basic problem in the risk analysis is how

to consider the interaction between events or variables. Usually, interactions include

unknown interaction, independence, and correlation. Because a set of probability

measures is considered in the theory of imprecise probabilities, the interaction between


events or variables in imprecise probabilities is more complex than that in precise

probabilities. Couso, Moral, and Walley (2000) first proposed and explicitly distinguished the concepts of different types of independence in imprecise probabilities; however, they did not develop systematic algorithms to deal with independence. Cano and Moral (2000) proposed algorithms for imprecise probabilities that consider different types of independence, but the concepts of independence in that reference differ from the definitions proposed by Couso, Moral, and Walley (2000), which are more generally

accepted in the field of imprecise probabilities. Campos and Cozman (2007) investigated

the computation of lower and upper previsions under epistemic independence (one type

of independence, as will be discussed in Chapter 3), and algorithms were developed to be

applied in a graphical model—Credal networks. For the interaction considering uncertain

correlation, the author is not aware of any reference describing a general algorithm

dealing with uncertain correlation in imprecise probabilities, though Ferson et al. (2004) and Berleant and Zhang (2004) considered the case in which the dependence between variables is partially determined in probability bounds analysis. Therefore, the challenge

remains in the area of algorithms in imprecise probability for dealing with different types

of interaction.

1.3 RESEARCH OBJECTIVES AND PLANS

We have identified three research objectives that will constitute novel work in the

field of risk analysis:

(1) Our first research objective comes from considering different types of

interactions between variables under imprecise probabilities. The objective is the creation

of efficient algorithms for unknown interaction, concepts of independence at different

levels, and uncertain correlation.


(2) The second research objective is the advancement of failure analysis and

decision-making with imprecise probabilities. Efficient algorithms are proposed for

failure analysis and decision analysis, respectively.

(3) The final research objective investigates the application of the proposed

methodologies to risk analysis in tunnel projects.

Our research plan consists of the sequence of actions that must be completed in

order to fulfill the three research objectives:

(1) For each type of interaction, corresponding algorithms will be developed to

calculate both prevision bounds and conditional probability bounds, subject to constraints

on marginal distributions over finite joint spaces. Two different types of constraints on

the marginals will be considered: previsions bounds and extreme distributions.

Algorithms written in terms of joint distributions or marginal distributions will be

discussed and compared. All algorithms will be justified and illustrated by simple

examples, and results will verify the influences of different interactions and the

equivalence between various algorithms.

(2) The tools adopted to improve failure analysis and decision-making are event-

tree analysis, fault-tree analysis and decision-tree analysis. In event-tree analysis,

different types of given information on probabilities will be considered; in fault-tree

analysis, various interactions between events will be taken into account; in decision-tree

analysis, we will study making decisions with imprecise probabilities and present how to

assess the value of the information. Different types of given information on probabilities and interpretations of interactions will be discussed and compared in the study.


(3) The application of the proposed methodologies on risk analysis in tunnel

projects will include risk assessment with a risk register, failure analysis, and decision-

making in tunnel engineering.

1.4 DISSERTATION OUTLINE

Including this introductory chapter, the dissertation is organized as follows.

Chapter 2 presents the basic concepts in the theory of imprecise probabilities. The basic

concepts include gambles, previsions, and two important properties in imprecise

probabilities: avoiding sure loss and coherence. At the end of this chapter, the idea of

decision-making with imprecise probabilities is introduced. All concepts and properties

are illustrated by simple examples.

Chapter 3 studies different types of interaction between variables. First, we

formulate the available information on marginals, and then introduce concepts of

interaction, including unknown interaction, different types of independence, and

correlated variables. For each type of interaction, systematic algorithms are proposed,

justified, and illustrated by examples. The algorithms aim at achieving upper and lower

bounds on previsions and conditional probabilities on joint finite spaces subject to the

constraints on marginals and the assumed interaction type.

Chapter 4 presents algorithms for failure analysis (i.e. Event-Tree Analysis and

Fault-Tree Analysis) and decision analysis with imprecise probabilities. Available

information is used to construct a convex set of probability distributions that are then

considered during failure analysis and decision making. In the failure analysis, our aim is

to determine the upper and lower bounds of a prevision (expectation of a real function) or

of the probability of failure; in the decision analysis, our objective is to determine the

optimal action(s). Corresponding algorithms are developed and illustrated by examples.


Chapter 5 revisits several case histories of risk analysis in tunneling by using the

methodologies developed in previous chapters. Section 5.1 applies event-tree analysis

with imprecise probabilities to obtain the bounds on the occurrence probability of

accidents during the construction of an underwater tunnel. Section 5.2 deals with the

probability of the environmental damage occurring due to the construction activities of

the Stockholm Ring Road Project. Section 5.3 revisits the Sucheon Tunnel, where the

imprecision of probabilities is considered and finally the optimal exploration plan is

determined. Section 5.4 introduces the application of risk register methodology in the

East Side CSO Project in Portland, Oregon. All results obtained based on imprecise

probabilities are compared with the results from precise probabilities.

Chapter 6 presents a summary of the conclusions drawn from the dissertation and

provides recommendations for future work.


Chapter 2 A Short Introduction to Imprecise Probabilities

Uncertainties in the real world arise from both the variance of the event itself and

the ignorance about the subject matter. The variance is a physical property of a specific

event, and we are able to perform a quantitative assessment of variance with the classic

theory of probability (Durrett, 1996), such as the estimation of wind load by a

probabilistic model (Simiu et al, 2001). Compared with the variance independent of

human efforts, ignorance about the subject matter can be reduced if the amount of

information is expanded. Because of the limited information, assigning a precise value as

the probability of an event is not practical. In this case, the evaluation of probability is

often presented implicitly; for example, ‘A is more probable than B’. How to combine

such implicit expressions and generate a rational evaluation is critical in a decision-

making process. One solution for this problem is Imprecise Probabilities (Walley, 1991)

which measure chance or uncertainty without sharp numerical probabilities. In the theory

of imprecise probabilities, the considered probability is not unique. All probabilities

which are compatible with available information should be taken into consideration. Such

probabilities form a convex set Ψ. The objective of this chapter is to introduce the

theory of Imprecise Probabilities. The starting point of this chapter is introducing the

concept of a gamble, followed by the upper and lower previsions of a gamble. Two

important properties of the upper and lower previsions are presented. Finally, the basic

concepts in decision-making with imprecise probabilities are introduced, where two types

of constraints, relaxed constraints and strict constraints, are considered and explained.


2.1 BASIC CONCEPTS

2.1.1 Gamble

A gamble X is a bounded real-valued function on the set of possible states of

affairs Ω (Walley, 1991). In other words, a gamble is in essence a mapping from Ω to a bounded subset of the real axis, as illustrated in Figure 2-1, where ωi denotes a possible state of affairs and ai is the corresponding real number on the real axis R.

Figure 2-1 Gamble X, the mapping from Ω to R

To illustrate, consider a tunnel boring machine (TBM) advancing in soft ground, which often faces the following problem: are there any obstructions ahead or not? Let 1 be the reward for no obstructions, 0 for small obstructions, and -1 for big obstructions. In this case, the set of possible states (also called the possibility space) is Ω = {no obstructions ahead, small obstructions ahead, big obstructions ahead}, and the gamble X is the following function:


X(ω) = 1,  if ω = ω1 (no obstructions ahead)
X(ω) = 0,  if ω = ω2 (small obstructions ahead)
X(ω) = -1, if ω = ω3 (big obstructions ahead)   (2.1)

A particular gamble that will be frequently used in the following is a 0-1 valued

gamble. Consider a set A ⊆ Ω and a gamble X such that:

X(ω) = 1 if ω ∈ A; otherwise X(ω) = 0   (2.2)

then we call this gamble X a 0-1 Valued Gamble. Since A is an event in the possibility

space Ω, a 0-1 valued gamble identifies an event in Ω because it is a characteristic

function for that event. On the other hand, a generic gamble is a generalization of an

event, and this makes imprecise probabilities very powerful, as will be explained in Sections 2.1.4 and 2.3. The 0-1 valued gamble is widely used in risk analysis, because we are often interested in whether the event will happen or not.
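In code, these notions are lightweight. A minimal Python sketch of a gamble and a 0-1 valued gamble on the TBM possibility space; the precise distribution P is a hypothetical example, not a value from the text:

```python
# Minimal sketch: a gamble is a bounded real-valued map on the possibility
# space, and a 0-1 valued gamble is the characteristic function of an event A.
# The precise distribution P below is an assumed, illustrative example.
OMEGA = ["no obstructions", "small obstructions", "big obstructions"]

# Gamble X of Eq. (2.1): rewards 1, 0, -1 on the three states
X = {"no obstructions": 1, "small obstructions": 0, "big obstructions": -1}

def indicator(A):
    """0-1 valued gamble identifying the event A (a subset of OMEGA)."""
    return {w: (1 if w in A else 0) for w in OMEGA}

def expectation(gamble, P):
    """E_P(gamble) for a precise distribution P on OMEGA."""
    return sum(gamble[w] * P[w] for w in OMEGA)

P = {"no obstructions": 0.5, "small obstructions": 0.3, "big obstructions": 0.2}
print(expectation(X, P))                                 # 0.3
print(expectation(indicator({"big obstructions"}), P))   # 0.2 = P(big obstructions)
```

The second call illustrates why a 0-1 valued gamble identifies an event: its expectation equals the event's probability.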

2.1.2 Lower and Upper Previsions

In precise probabilities, only one probability distribution P is defined on Ω, thus

the expectation of a gamble X can be obtained precisely. However, when a convex set of

probability distributions, Ψ, is considered in the theory of imprecise probabilities, for a

given gamble X there is not a unique expectation (also called prevision): there will be as

many expectation values as there are distributions in Ψ. For a given gamble X, let P be a distribution in Ψ and let E_P(X) be the expectation of X calculated with distribution P. The lower prevision E_LOW is defined as follows:

E_LOW(X) = inf_{P∈Ψ} E_P(X)   (2.3)

and the upper prevision E_UPP is defined as follows:


E_UPP(X) = -E_LOW(-X) = -inf_{P∈Ψ} E_P(-X) = sup_{P∈Ψ} E_P(X)   (2.4)

Actually, the Separation Lemma (Walley, 1991, Theorem 3.3.2, Page 133) assures that, if Ψ is convex, then

E_LOW(X) = min_{P∈Ψ} E_P(X)  and  E_UPP(X) = -min_{P∈Ψ} E_P(-X) = max_{P∈Ψ} E_P(X)   (2.5)

i.e. the infimum and supremum are achieved by some P∈Ψ.
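Eq. (2.5) suggests a simple computation when Ψ is the convex hull of finitely many extreme distributions: the bounds are attained at vertices, so it suffices to take the min and max over them. A minimal Python sketch with hypothetical vertices:

```python
# Sketch of Eq. (2.5): over a finitely generated convex set Psi, the lower and
# upper previsions are attained at extreme points. The vertices are assumed,
# illustrative distributions, not values from the text.
def prevision_bounds(gamble, extreme_points):
    """Return (E_LOW, E_UPP) of a gamble over a finitely generated convex set."""
    expectations = [sum(x * p for x, p in zip(gamble, P)) for P in extreme_points]
    return min(expectations), max(expectations)

X = [1, 0, -1]                # gamble of Eq. (2.1)
vertices = [(0.5, 0.3, 0.2),  # assumed extreme points of Psi
            (0.2, 0.5, 0.3),
            (0.3, 0.3, 0.4)]

e_low, e_upp = prevision_bounds(X, vertices)
print(e_low, e_upp)           # E_LOW is about -0.1, E_UPP is 0.3
```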

A different point of departure in the theory of imprecise probabilities is taken

when, instead of specifying Ψ, a lower prevision or a set of lower previsions is specified.

Then, the question is whether these previsions satisfy any rationality requirement and

what type of set Ψ they define. This topic is dealt with in Section 2.2.

In Imprecise Probability, the behavioral interpretation of the lower prevision E_LOW of a gamble X is used to model the decision maker’s attitude toward the gamble: the value of E_LOW reflects the decision maker’s behavioral disposition as the maximum buying price for the gamble X, while the upper prevision E_UPP is interpreted as the minimum selling price for the gamble X (Walley, 1991). There is no need for E_UPP to be the same as E_LOW; for example, bookmakers and insurers make their living on the difference E_UPP - E_LOW. Bayesians impose that E_UPP = E_LOW in all circumstances; it seems highly unlikely that the highest buying price be the same as the lowest selling price, and if it were, bookmakers and insurers would be broke!

When the available information is limited, judgments can be given in terms of

lower previsions and upper previsions of some gambles. In the previous example of TBM

advancing in the soft ground (gamble X), if 0.1 is assigned as its lower prevision and 0.4

as the upper prevision, the two pieces of information can be written as


E_LOW(X) = 0.1
E_UPP(X) = 0.4   (2.6)

In this example, the highest price for the decision maker to buy the gamble X is 0.1, and

the minimum selling price of the gamble X is 0.4.

Generally, the selling price of a gamble is higher than its buying price; were it not, a sure gain could be extracted from the transaction. In line with this, the upper prevision E_UPP(X) is no smaller than the lower prevision E_LOW(X).

Sometimes, a decision maker has to face a gamble about which he/she has no information. In this situation, the most conservative solution in the theory of imprecise probabilities is to set E_LOW(X) = inf X and, consequently, E_UPP(X) = sup X; these are called vacuous previsions. Bayesians deal with such situations by using a unique probability distribution (usually the uniform distribution), which is improper because (1) it cannot model the case of complete ignorance, and (2) it actually has strong behavioral implications by insisting that E_LOW(X) = E_UPP(X): the maximum buying price for X is always equal to its minimum selling price.

2.1.3 Geometrical Interpretation of Previsions

Consider an n-dimensional possibility space Ω = {ω1, …, ωn} and an n-dimensional space (P(ω1), …, P(ωn)) (the space of probabilities of singletons), where the i-th component P(ωi) is the probability of the i-th possible state. Notice that any probability distribution is constrained by P(ωi) ≥ 0 and Σi P(ωi) = 1, which make the space of probabilities of singletons bounded and closed. Figure 2-2(a) is an example in a 3-dimensional space, where the space of singletons’ probabilities is the intersection of the plane P(ω1) + P(ω2) + P(ω3) = 1 and the three half-spaces {P: P(ωi) ≥ 0}. Figure 2-2(b)


shows the corresponding probability simplex, which is the view of the 3-dimensional space of the singletons’ probabilities (Figure 2-2(a)) down the line connecting point (1, 1, 1) and the origin; the probability simplex is therefore an equilateral triangle. The three vertices of the triangle represent the distributions in which one possible state is certain. Every point Q inside the triangle represents a probability distribution, and the probability of ωi is measured by the perpendicular distance of the point Q from the side opposite to the vertex P(ωi) = 1. For example, the point Q = (1/3, 1/3, 1/3) in Figure 2-2(b) indicates that ω1 through ω3 have the same probability of 1/3.


Figure 2-2 (a) 3-D space of probability of singletons; (b) probability simplex in a 3-D space

The expectation of gamble Xi: ωj → a_j^(i) (Figure 2-1) defines a hyper-plane in the space of probabilities of singletons as follows:

E_P(Xi) = a_1^(i) P(ω1) + … + a_j^(i) P(ωj) + … + a_n^(i) P(ωn)   (2.7)



and therefore a lower prevision E_LOW on a gamble Xi defines a half-space Ψ^(i):

Ψ^(i) = {P: E_P(Xi) = a_1^(i) P(ω1) + … + a_j^(i) P(ωj) + … + a_n^(i) P(ωn) ≥ E_LOW(Xi)}   (2.8)

Consider again the example in Figure 2-2. Assume a gamble X1 such that X1 = 0 for ω = ω1, X1 = 1 for ω = ω2, and X1 = -1 for ω = ω3, and that the lower prevision for gamble X1 is 0. Then Ψ^(1) is {P: P(ω2) - P(ω3) ≥ 0}, as shown in Figure 2-3.


Figure 2-3 Geometrical Interpretation of Previsions

If a lower prevision E_LOW is assigned on a set of gambles K, it defines a set Ψ as the intersection of the half-spaces Ψ^(i):

Ψ = ∩_i Ψ^(i),  Xi ∈ K   (2.9)

Since each Ψ^(i) is convex, Ψ is convex as well.

If set Ψ is convex, it can also be defined by its extreme points (vertices), which

are the points that cannot be expressed as a convex combination of any other points in set

Ψ (Rockafellar 1991, Page 162). Properties of Ψ and the algorithm for calculating

extreme points are discussed in Section 2.3.
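The half-space picture of Eq. (2.8) can be sketched numerically. A minimal Python illustration using the gamble X1 and the lower prevision 0 assigned above; the two candidate distributions are assumed examples:

```python
# Illustrative sketch of Eqs. (2.7)-(2.8) with the gamble X1 of this example:
# membership in the half-space Psi^(1) is a linear inequality test on the
# singleton probabilities. The candidate distributions are assumed examples.
def in_half_space(P, gamble, e_low, tol=1e-12):
    """Check E_P(gamble) >= e_low, i.e. whether P belongs to Psi^(i) of Eq. (2.8)."""
    return sum(x * p for x, p in zip(gamble, P)) >= e_low - tol

X1 = [0, 1, -1]     # X1(w1) = 0, X1(w2) = 1, X1(w3) = -1
E_LOW_X1 = 0.0      # assigned lower prevision

print(in_half_space((1/3, 1/3, 1/3), X1, E_LOW_X1))   # True:  P(w2) - P(w3) = 0
print(in_half_space((0.2, 0.2, 0.6), X1, E_LOW_X1))   # False: P(w2) - P(w3) = -0.4
```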



2.1.4 Lower and Upper Probabilities

Compared with probabilities, previsions are more general and can capture the available information without any loss; therefore, previsions are more fundamental than probabilities (Walley 1991). Here is why.

As noted in Section 2.1.1, 0-1 valued gambles are the characteristic functions of

events. Since the expectation of an event is equal to its probability, upper and lower

previsions of 0-1 valued gambles are upper and lower probabilities of events. Assume n

possible states in Ω: ω1, …, ωn. The expectation of a 0-1 valued gamble X defined in Eq. (2.2) can be written as follows:

E_P(X) = a1 P(ω1) + … + ai P(ωi) + … + an P(ωn)   (2.10)

where ai = 1 if ωi ∈ A, and ai = 0 if ωi ∉ A.

Upper and lower probabilities give hyper-planes in the space of probabilities of singletons:

E_P(X) = a1 P(ω1) + … + an P(ωn) = E_LOW(X)   (2.11)

and

E_P(X) = a1 P(ω1) + … + an P(ωn) = E_UPP(X)   (2.12)

These hyper-planes’ normal components are either 0 or 1, whereas with previsions the normals of the hyper-planes can assume any values (Bernardini and Tonon 2010, Page 68). Upper and lower probabilities cannot represent assessments like ‘the probability of A is at least twice the probability of B’, i.e. P(A) > 2P(B), or P(A) - 2P(B) > 0, because in this case one normal component is equal to -2. Indeed, they cannot even capture the assessment ‘event A is more probable than B’, i.e. P(A) > P(B), or P(A) - P(B) > 0, because here the normal component is equal to -1. Therefore, lower and upper probabilities are only a special case of lower and upper previsions, and are not elaborate


enough to define a general convex set in the space of the singletons’ probabilities. In this sense, previsions capture the available information without any loss, as explained further in Section 2.3 (weak*-compactness theorem).
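A small sketch of the point just made, with placeholder event names: the judgment P(A) ≥ 2P(B) corresponds to a lower prevision of 0 on the gamble 1_A - 2·1_B, whose hyper-plane normal contains the component -2 that 0-1 valued normals cannot express:

```python
# Illustrative sketch (event and state names are placeholders): the judgment
# P(A) >= 2 P(B) is the lower prevision E_LOW(X) >= 0 for the gamble
# X = 1_A - 2*1_B, whose normal has a component -2, not expressible with
# 0-1 valued (probability) normals alone.
OMEGA = ["w1", "w2", "w3"]
A, B = {"w1"}, {"w2"}

X = {w: (1 if w in A else 0) - 2 * (1 if w in B else 0) for w in OMEGA}
print(X)   # {'w1': 1, 'w2': -2, 'w3': 0}

def satisfies(P):
    """E_P(X) >= 0  is equivalent to  P(A) >= 2 P(B)."""
    return sum(X[w] * P[w] for w in OMEGA) >= 0

print(satisfies({"w1": 0.5, "w2": 0.2, "w3": 0.3}))   # True  (0.5 >= 0.4)
print(satisfies({"w1": 0.3, "w2": 0.2, "w3": 0.5}))   # False (0.3 < 0.4)
```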

2.2 TWO PROPERTIES OF RATIONAL PREVISIONS

When the available information is conveyed as lower and upper previsions on a

set of gambles, K, only the previsions that are consistent and not conflicting with others

should be used. A rational prevision should avoid sure loss and be coherent as defined

below.

2.2.1 Avoiding Sure Loss

In loose terms, a lower prevision which incurs sure loss indicates irrational judgments, because:

(1) we are willing to pay more for X ∈ K than the supremum that we can get back; or

(2) the prevision of X ∈ K contradicts the prevision of another gamble Y ∈ K or of a finite linear combination of gambles in K;

in either case, the set Ψ of probability distributions is empty.

In mathematical terms, the lower prevision E_LOW avoids sure loss if

sup_{ω∈Ω} Σ_{j=1}^{n} [Xj(ω) - E_LOW(Xj)] ≥ 0   (2.13)

whenever n ≥ 1 and Xj ∈ K (Walley 1991, Page 68).

Take the TBM advancing in soft ground as an example. Consider three gambles X1, X2, and X3, defined as three pieces of information that the contractor can buy and that will give the following rewards (in units of utility):


X1(ω) = 1 if ω = ω1 (no obstructions ahead); 0 otherwise   (2.14)

X2(ω) = 1 if ω = ω2 (small obstructions ahead); 0 otherwise   (2.15)

X3(ω) = 1 if ω = ω3 (big obstructions ahead); 0 otherwise   (2.16)

Suppose that the contractor is willing to pay 0.4 units of utility for each gamble (i.e. the lower prevision of each gamble is 0.4). As a result, the total amount paid for the three gambles is 0.4 + 0.4 + 0.4 = 1.2, while the reward obtained by the contractor from the gambles is 1 no matter which state occurs. The lower prevision has therefore incurred a sure loss (-0.2). As shown in Figure 2-4, every assessment (E_LOW(Xi) = 0.4) is represented by a half-space (see Section 2.1.3), and the three half-spaces have no intersection (Ψ = ∅); thus, sure loss is incurred by the three previsions.
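The sure-loss check of Eq. (2.13) for this example can be sketched in a few lines of Python; a minimal illustration with abbreviated state labels, not the dissertation's implementation:

```python
# Sketch of the sure-loss check of Eq. (2.13) for this example: three indicator
# gambles, each bought at 0.4, lose 0.2 in every state. State labels abbreviated.
def avoids_sure_loss(gambles, lower_previsions, omega):
    """Eq. (2.13): sup over omega of sum_j [X_j(w) - E_LOW(X_j)] must be >= 0."""
    sup = max(sum(X[w] - e for X, e in zip(gambles, lower_previsions))
              for w in omega)
    return sup >= 0, sup

OMEGA = ["no", "small", "big"]
X1 = {"no": 1, "small": 0, "big": 0}   # Eq. (2.14)
X2 = {"no": 0, "small": 1, "big": 0}   # Eq. (2.15)
X3 = {"no": 0, "small": 0, "big": 1}   # Eq. (2.16)

ok, sup = avoids_sure_loss([X1, X2, X3], [0.4, 0.4, 0.4], OMEGA)
print(ok, round(sup, 10))   # False -0.2: the three previsions incur sure loss
```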

Figure 2-4 Three lower previsions which incur sure loss



2.2.2 Coherence

Once it is ensured that a lower prevision on K avoids sure loss, the next step is to work out the full implications of the initial assessment for buying prices, i.e. for the lower previsions on K, so that the prevision for each X ∈ K actually bounds (is an active constraint for) Ψ. In mathematical terms, a general definition of coherence is given by Walley (1991, Page 73): a lower prevision E_LOW is coherent on a set of gambles K if the following condition is satisfied:

sup_{ω∈Ω} { Σ_{j=1}^{n} [Xj(ω) - E_LOW(Xj)] - m [X0(ω) - E_LOW(X0)] } ≥ 0   (2.17)

whenever m and n are non-negative integers and X0 and Xj are in K. If K is a linear space, then E_LOW is coherent when the following three axioms (Walley 1991, Page 63) are satisfied:

E_LOW(X) ≥ inf X,  when X ∈ K   (2.18)

E_LOW(λX) = λ E_LOW(X),  when X ∈ K and λ > 0   (2.19)

E_LOW(X + Y) ≥ E_LOW(X) + E_LOW(Y),  when X ∈ K and Y ∈ K   (2.20)

An equivalent form of Eq. (2.20) in terms of the conjugate upper prevision E_UPP is

E_UPP(X + Y) ≤ E_UPP(X) + E_UPP(Y),  when X ∈ K and Y ∈ K   (2.21)

In loose terms, previsions are coherent when no finite combination of gambles in K, each bought at its lower prevision, can compel the decision maker to buy some gamble X0 ∈ K at a price higher than its lower prevision E_LOW(X0).

Consider again the example of TBM advancing in soft ground introduced in

Section 2.2.1. If now the lower prevision for gamble X1 (X2) is 0.3 (0.4), and the lower

prevision for gamble (X1+X2) is assigned to be 0.8, then the contractor is willing to pay

0.3 to buy gamble X1 and 0.4 for gamble X2, i.e., he is expected to accept the price of 0.7

(i.e., 0.3+0.4) for buying both gamble X1 and gamble X2. Since 0.7 is lower than the


highest price that the contractor is willing to pay for (X1+X2), i.e., 0.8, the assigned lower previsions on the three gambles are coherent. As shown in Figure 2-5, all three lines (representing the previsions of X1, X2, and X1+X2) intersect each other. If the lower prevision of gamble (X1+X2) is 0.6 (see Figure 2-6), then the line E_LOW(X1+X2) = 0.6 does not intersect the area constructed by E_LOW(X1) = 0.3 and E_LOW(X2) = 0.4, and thus the prevision is not coherent. The lower prevision E_LOW(X1+X2) = 0.6 does not contribute to defining Ψ, i.e. it is an inactive constraint. In order for it to intersect and be consistent with the lower previsions on X1 and X2, we must set E_LOW(X1+X2) = 0.7: this is the natural extension of (X1+X2) that will be introduced in Section 2.4.
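The 0.7 figure can be cross-checked numerically. A hedged Python sketch, specialized to this 3-state example: the vertices of Ψ are enumerated by a generic small-scale procedure (pairs of active constraints plus the simplex equality, solved by Cramer's rule), which is an illustrative approach and not an algorithm from the dissertation:

```python
# Illustrative sketch for the 3-state example: compute the natural extension
# E_LOW(X1+X2) by enumerating the vertices of
# Psi = {P >= 0, sum P = 1, E_P(X1) >= 0.3, E_P(X2) >= 0.4} and minimizing.
from itertools import combinations

def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
            - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
            + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def vertices(constraints):
    """Vertices of {P in the 3-D probability simplex : a.P >= b}: make two
    inequalities active, solve the 3x3 system with the simplex equality by
    Cramer's rule, and keep the feasible solutions."""
    all_cons = constraints + [((1, 0, 0), 0), ((0, 1, 0), 0), ((0, 0, 1), 0)]
    verts = []
    for (a1, b1), (a2, b2) in combinations(all_cons, 2):
        rows, rhs = [list(a1), list(a2), [1, 1, 1]], [b1, b2, 1]
        D = det3(rows)
        if abs(D) < 1e-12:
            continue                      # the two active constraints are parallel
        P = []
        for col in range(3):
            m = [r[:] for r in rows]
            for i in range(3):
                m[i][col] = rhs[i]
            P.append(det3(m) / D)
        if all(sum(a[i]*P[i] for i in range(3)) >= b - 1e-9 for a, b in all_cons):
            verts.append(tuple(P))
    return verts

# Judgments of the example: E_P(X1) >= 0.3 and E_P(X2) >= 0.4
X1, X2 = (1, 0, 0), (0, 1, 0)
V = vertices([(X1, 0.3), (X2, 0.4)])
# Natural extension: minimize E_P(X1 + X2) over the vertices of Psi
ext = min(sum((x1 + x2) * p for x1, x2, p in zip(X1, X2, v)) for v in V)
print(round(ext, 9))   # 0.7, matching E_LOW(X1 + X2) = 0.7 in the text
```

For larger spaces a linear-programming solver would replace the hand-rolled vertex enumeration; the principle (optimize over the credal set defined by the prevision constraints) is the same.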

Figure 2-5 Three coherent lower previsions



Figure 2-6 Three incoherent lower previsions

2.3 SET Ψ CONSTRUCTED ON THE BASIS OF SPECIFIED PREVISIONS

From Section 2.1.3, assessments specified as lower and upper previsions on a set of gambles K, i.e., EP(Xj) ≥ ELOW(Xj), Xj ∈ K, can be written as linear inequalities in the probabilities of the singletons P(ωi); thus each set {P : EP(Xj) ≥ ELOW(Xj)} is convex. Suppose the specified lower prevision ELOW avoids sure loss, i.e., set Ψ is not empty. Ψ is the intersection over Xj ∈ K of the convex and closed (hence compact) sets {P : EP(Xj) ≥ ELOW(Xj)}, and therefore Ψ is convex and compact.

The weak*-compactness theorem (Walley 1991, Pages 145 – 146) states that there is a one-to-one correspondence between lower previsions ELOW and compact convex sets Ψ. The proof of the theorem

“relies on the fact that any closed convex set in a locally convex topological space is the intersection of the closed half-spaces containing it.” (Walley 1991, Page 146)


Because set Ψ is convex and is the intersection of the closed half-spaces {P : EP(Xj) ≥ ELOW(Xj), Xj ∈ K}, each gamble Xj supports convex set Ψ at some point. Notice that the one-to-one correspondence between lower previsions and convex sets does not hold for probabilities, because the normal components of probability hyper-planes are restricted to 0s and 1s (Section 2.1.3), whereas a bounding plane of a generic convex set Ψ may have an arbitrary normal vector.

The following extreme point theorem (Walley 1991, Pages 146 – 147) shows that each gamble supports convex set Ψ at some extreme point. Assume that the lower prevision ELOW avoids sure loss, and let EXT denote the set of all extreme points of Ψ. The extreme point theorem states that:

(a) EXT is non-empty.

(b) Ψ is the smallest convex set containing EXT.

(c) If the lower prevision ELOW is coherent, for every gamble Xj ∈ K there is an extreme point P in set EXT such that EP(Xj) = ELOW(Xj).

Set Ψ may be constructed from assessments (lower previsions on a set of gambles K) and defined by its extreme points. Let |Ω| = n and |K| = q; the general algorithm for determining all extreme points (or vertices) is as follows:

1) In the n-dimensional space of the singletons' probabilities, Ψ is bounded by the q linear inequalities, the n non-negativity constraints on the singletons ωi, P(ωi) ≥ 0, and the constraint that ∑P(ωi) = 1. Consider (n−1) inequalities at a time in addition to the constraint that ∑P(ωi) = 1.

2) Write the (n−1) inequalities as equalities: an n×n system of linear equations is obtained. Compute its unique solution P (a singular system yields no vertex for that combination).


3) If P satisfies the remaining (n + q + 1) − n = q + 1 constraints, P is an extreme point of set Ψ; otherwise it is not.

4) Repeat 1) to 3) until all combinations of inequalities are considered.

Let us consider again the example of a TBM advancing in soft ground and construct set Ψ from the following available information:

①. The upper prevision of gamble X1 is 0.5:

EUPP(X1) = 0.5, i.e., P(ω1) ≤ 0.5 (2.22)

②. No obstructions is more probable than small obstructions:

P(ω1) ≥ P(ω2) (2.23)

③. No obstructions is more probable than big obstructions:

P(ω1) ≥ P(ω3) (2.24)

Considering that the sum of the probabilities is equal to 1, i.e., P(ω1) + P(ω2) + P(ω3) = 1, and using the algorithm stated previously, the effective combinations that determine the extreme points of set Ψ are as follows:

① & ②:  P(ω1) = 1/2;  P(ω1) = P(ω2);  P(ω1) + P(ω2) + P(ω3) = 1 (2.25)

① & ③:  P(ω1) = 1/2;  P(ω1) = P(ω3);  P(ω1) + P(ω2) + P(ω3) = 1 (2.26)

② & ③:  P(ω1) = P(ω2);  P(ω1) = P(ω3);  P(ω1) + P(ω2) + P(ω3) = 1 (2.27)

Solve the equation sets and obtain the three extreme probability distributions:


(P(ω1), P(ω2), P(ω3)) = (0.5, 0.5, 0); it satisfies ③ (2.28)

(P(ω1), P(ω2), P(ω3)) = (0.5, 0, 0.5); it satisfies ② (2.29)

(P(ω1), P(ω2), P(ω3)) = (1/3, 1/3, 1/3); it satisfies ① (2.30)

Since each of the three solutions satisfies the remaining inequality, all three solutions are

extreme points.
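The four-step algorithm can be sketched in code. The following is an illustrative implementation (not from the dissertation) applied to the constraints (2.22) through (2.24), with each inequality written generically as a·p ≥ b; it recovers the three extreme points above:

```python
# Vertex enumeration for Psi on a 3-point possibility space.
import itertools
import numpy as np

# Each inequality is written as a . p >= b.
ineqs = [(np.array([-1.0, 0.0, 0.0]), -0.5),  # P(w1) <= 0.5, constraint (1)
         (np.array([1.0, -1.0, 0.0]), 0.0),   # P(w1) >= P(w2), constraint (2)
         (np.array([1.0, 0.0, -1.0]), 0.0),   # P(w1) >= P(w3), constraint (3)
         (np.array([1.0, 0.0, 0.0]), 0.0),    # non-negativity of P(w1)
         (np.array([0.0, 1.0, 0.0]), 0.0),    # non-negativity of P(w2)
         (np.array([0.0, 0.0, 1.0]), 0.0)]    # non-negativity of P(w3)

n = 3
vertices = set()
# Steps 1-2: pick (n-1) inequalities, turn them into equalities, and solve
# them together with the normalization sum(p) = 1.
for combo in itertools.combinations(ineqs, n - 1):
    A = np.vstack([a for a, _ in combo] + [np.ones(n)])
    b = np.array([bi for _, bi in combo] + [1.0])
    try:
        p = np.linalg.solve(A, b)
    except np.linalg.LinAlgError:
        continue                               # singular system: no vertex here
    # Step 3: keep p only if it satisfies all the remaining inequalities.
    if all(a @ p >= bi - 1e-9 for a, bi in ineqs):
        vertices.add(tuple(round(float(x), 6) for x in p))

print(sorted(vertices))   # the three extreme points of Eqs. (2.28)-(2.30)
```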

Figure 2-7 Probability simplex

From Section 2.1.3, recall that the probability simplex is the view of Ψ down the line connecting (1,1,1) and the origin. The three lines shown in Figure 2-7 represent the three linear inequalities in Eqs. (2.22) through (2.24) when written as equalities; the associated half-spaces, indicated by arrows, represent the inequalities.

Finally, an intersection is obtained, which is the set of all probability distributions

reflecting the available information. The coordinates of the three extreme points

(vertices) of the intersection Ψ are the same as the ones in Eqs. (2.28) through (2.30).


2.4 NATURAL EXTENSION

Avoiding sure loss and coherence are two important properties that ensure that a prevision on a set of gambles is rational. Given a coherent prevision on a set of gambles K, when assessing the lower and upper previsions for a new gamble X, it is laborious to verify whether the prevision on X avoids sure loss and is coherent. The solution is natural extension, which enables us to construct previsions for new gambles from the given previsions for the specified gambles.

As stated by Walley (1991, Page 122), for a new gamble X, its lower prevision ELOW(X) is the maximum buying price for X that can be constructed from the specified buying prices ELOW(Xj), Xj ∈ K, through linear combination. This process is natural extension. The lower prevision for the new gamble X can be obtained as follows (Walley 1991, Page 122):

ELOW(X) = sup { α : X − α ≥ Σ_{j=1..n} λj [Xj − ELOW(Xj)] for some n ≥ 0, Xj ∈ K, λj ≥ 0, α ∈ ℝ } (2.31)

If the lower prevision is coherent on K, it is coherent on K ∪ {X}.

The natural extension theorem (Walley 1991, Page 136) states that the natural extension of X ∉ K can be obtained as

ELOW(X) = min_{P∈Ψ} EP(X) (2.32)

where set Ψ is defined in Eqs. (2.8) and (2.9).

In the example of a TBM advancing in soft ground of Section 2.2.2, let P = (P(ω1), P(ω2), P(ω3)) be a probability distribution over the possibility space Ω. Given K = {X1, X2}, ELOW(X1) = 0.3 and ELOW(X2) = 0.4, the natural extension for X = X1 + X2 is obtained as 0.7 by applying Eq. (2.32): ELOW(X) = min_{P∈Ψ} (P(ω1) + P(ω2)), where Ψ = {P : P(ω1) ≥ 0.3, P(ω2) ≥ 0.4}. As illustrated in Figure 2-8, the natural extension of the lower prevision to the new gamble X1 + X2 is achieved at the intersection point of the lower previsions on the two specified gambles X1 and X2, and thus ELOW(X1 + X2) = 0.3 + 0.4 = 0.7.

Figure 2-8 The natural extension (lower prevision) of the new gamble X = X1 + X2
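Because EP is linear, the minimum in Eq. (2.32) is attained at a vertex of Ψ. In the sketch below (illustrative code; the vertex list is worked out here rather than taken from the text), the vertices of Ψ = {P : P(ω1) ≥ 0.3, P(ω2) ≥ 0.4} are listed and the natural extension is read off as the smallest vertex value of P(ω1) + P(ω2):

```python
# Vertices of Psi = {P >= 0, sum P = 1, P(w1) >= 0.3, P(w2) >= 0.4}:
vertices = [(0.3, 0.4, 0.3),   # both lower previsions active
            (0.6, 0.4, 0.0),   # P(w2) = 0.4 and P(w3) = 0 active
            (0.3, 0.7, 0.0)]   # P(w1) = 0.3 and P(w3) = 0 active

# Natural extension of X = X1 + X2: E_P[X] = P(w1) + P(w2) at each vertex.
e_low = min(p1 + p2 for p1, p2, _ in vertices)
print(round(e_low, 2))   # 0.7, matching E_LOW(X1) + E_LOW(X2)
```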

2.5 DECISION MAKING

This section introduces two concepts, optimality (maximality) and preference, which are the basic concepts of decision-making within imprecise probabilities. Consider a set of gambles K; gamble X ∈ K is optimal (or maximal) in set K when EUPP(X − Y) ≥ 0 for all Y ∈ K (Walley 1991, page 161). Here the condition EUPP(X − Y) ≥ 0 is equivalent to ELOW(Y − X) ≤ 0, which is not a very strong condition: any gamble that satisfies it is optimal. In other words, we may have multiple optimal gambles in set K.

According to Walley (1991, page 155-156), preference is interpreted as follows:

the decision maker is disposed to choose one gamble rather than the other one, i.e. a


preferred gamble in set K is always unique. Consider two gambles X1 and X2. Given event A, gamble X1 is preferred to X2 if the following inequality is satisfied (Walley 1991, Page 156, coherence theorem):

ELOW(X1 − X2 | A) ≥ 0 (2.33)

On the other hand, X2 is preferred to X1 if the following condition is satisfied:

ELOW(X2 − X1 | A) ≥ 0 (2.34)

Otherwise, the preference (or choice) between X1 and X2 is indeterminate.

A general method to calculate the lower expectation ELOW(Xi − Xj | A) is to use the natural extension theorem (Eq. (2.32)):

ELOW(Xi − Xj | A) = min_{P∈Ψ} EP(Xi − Xj | A) (2.35)

where P is a conditional probability measure for gamble (Xi − Xj) given event A, and Ψ is the set of conditional probability measures P. Since the expectation operator E is linear, one may rewrite Eq. (2.35) as

ELOW(Xi − Xj | A) = min_{P∈Ψ} EP(Xi − Xj | A) = min_{Pi∈Ψi, Pj∈Ψj} [EPi(Xi | A) − EPj(Xj | A)] (2.36)

where Pi and Pj are conditional probability measures for gambles Xi and Xj given event A, respectively, and Ψi and Ψj are the sets of Pi and Pj, respectively.

Now ELOW(Xi − Xj | A) may be obtained by solving the optimization problem (2.36) where, besides the constraints (Pi ∈ Ψi, Pj ∈ Ψj) on the sets of probability measures, written in terms of vertices or of upper and lower previsions, there are additional constraints (called the relaxed constraint and the strict constraint), which are applied only if the two gambles do not affect the uncertainty on Ω. A simple example illustrates the situation.

Consider a decision situation in which the construction strategy must be decided

based on the rock mass condition quantified by the Q-system of rock mass classification


(Barton, 1974). For example, the construction strategies consist of gamble X1 (full face

excavation with nominal support) or gamble X2 (full face excavation with extensive

support). The uncertain geologic states are: G1 (Q value > 40) or G2 (Q value < 40),

which are the only two elements in possibility space Ω. No matter which construction

strategy is selected, the uncertainty in the geologic states is the same. In imprecise

probabilities, the same uncertainty could be interpreted in two ways:

(1) Relaxed constraint: the sets of probability measures for the geologic states should be the same no matter whether X1 or X2 is chosen, i.e.

Ψ1 = Ψ2 (2.37)

As shown in Figure 2-9, the relaxed constraint requires that sets Ψ1 and Ψ2 be the same, but two different probability distributions P1 and P2 can be selected to carry out the analysis.

Figure 2-9 Relaxed constraint

(2) Strict constraint: the probability measures for the geologic states should be the same no matter whether X1 or X2 is chosen, i.e.

P1 = P2, P1 ∈ Ψ1, P2 ∈ Ψ2 (2.38)

As illustrated by Figure 2-10, the two probability distributions P1 and P2 are enforced to be the same, which is a stronger constraint than the relaxed constraint illustrated in Figure 2-9, where the two distributions P1 and P2 are not necessarily the same.

Figure 2-10 Strict constraint

The choice between Eqs.(2.37) and (2.38) could be determined based on the context or by

the risk analyst.

However, it is worth noting that in some cases the uncertainty on the possible states could be affected by the decision made. For example, the failure probability of the tunnel will depend on the construction strategy chosen. In this case, Eq. (2.37) or (2.38) should not be applied, and the only constraints are the ones that define sets Ψ1 and Ψ2.
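The effect of the two constraints can be illustrated with a small numerical sketch (the payoffs and the probability interval P(G1) ∈ [0.4, 0.7] below are invented for illustration and are not from the text). Because the expectations are linear in P(G1), it suffices to scan the endpoints of the interval:

```python
# Hypothetical payoffs of the two construction strategies over (G1, G2).
X1 = (10.0, -5.0)   # full face excavation with nominal support
X2 = (4.0, 2.0)     # full face excavation with extensive support
p_lo, p_hi = 0.4, 0.7   # assumed bounds on P(G1); P(G2) = 1 - P(G1)

def expect(X, p):
    """Expectation of a gamble X when P(G1) = p."""
    return p * X[0] + (1.0 - p) * X[1]

# Strict constraint, Eq. (2.38): one distribution serves both gambles.
strict = min(expect(X1, p) - expect(X2, p) for p in (p_lo, p_hi))

# Relaxed constraint, Eq. (2.37): P1 and P2 may differ within the same set.
relaxed = (min(expect(X1, p) for p in (p_lo, p_hi))
           - max(expect(X2, p) for p in (p_lo, p_hi)))

print(round(strict, 2), round(relaxed, 2))   # -1.8 -2.4
```

The relaxed constraint always yields a lower (more conservative) bound on ELOW(X1 − X2), because the minimization is carried out over a larger set of pairs (P1, P2).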

Details about the methodology of decision-making with imprecise probabilities

will be discussed in Chapter 4.

2.6 WHY USE IMPRECISE PROBABILITIES?

Now that the basic knowledge about the theory of imprecise probabilities has been outlined, it is important to discuss the rationale for utilizing imprecise probabilities in this study. Arguments for imprecise probabilities are summarized in Walley (1991, pages 3-6). Here the author explains several of those arguments from the perspective of tunneling projects.

(1) Lack of information

Insufficient information is the most important inducement for choosing imprecise

probabilities. Imprecise probabilities can be applied to problems when information is

limited, either because it is impossible to collect more or because it is not practical or

economically feasible to do so. For example, we will never drill boreholes every meter

along the tunnel axis for a perfect geological investigation report. Imprecise probabilities

allow us to rationally reason with the available information, which may still be enough to

make decisions. If it is not enough to make decisions, the decision maker rests assured

that no unwanted information is unwittingly added to force him to make a decision

despite the lack of information (an example will be given in Section 4.3.3.3.1).

Indeed, the methods of imprecise probabilities yield a range of results according

to the amount of information available for analysis. Suppose minimal previous

experience is available from tunneling projects similar to the one proposed. The

difference between upper and lower probabilities is used to reflect that lack of

information. An extreme case is “complete ignorance”, for example, construction of a

tunnel with an entirely new construction method, where the vacuous previsions model

appropriately takes account of the maximal imprecision situation. No single precise

probability can do this. As experience is gained and information accumulates, upper and

lower probabilities become close to one another and imprecision decreases. The opposite

extreme case is the project in which a large amount of information is available. Then as a

special case of imprecise probabilities, precise probabilities are proper to use. For


instance, a negative exponential distribution can be used to describe the length of intact rock segments in a borehole (Harrison and Hudson, 2000, Page 93).

(2) Descriptive realism

As a result of the lack of information, beliefs about the subject matters are often

indeterminate. Recall the risk analysis workshops for tunneling projects introduced in

Chapter 1: experts cannot really differentiate between probabilities of 82% and 85%

based on their experience. Imprecise probabilities can model such indeterminacy in

beliefs.

(3) Conflicting assessments

Conflict or disagreement on probability and utility assessment may arise when

they are obtained from different sources even if each source is precise. For example,

consider the case in which two or more experts give different assessments on the

probability of the event “leakage in tunnel lining”. Imprecise probabilities can reflect the

extent of disagreement; for example, a simple technique is to use intervals.

Generally, distinctly different opinions may result when experts use different

evidence or different assessment strategies. The conflict or disagreement can be

eliminated or reduced by sharing the experts’ information, or by using a more elaborate

strategy to improve the original assessments.

(4) Elicitation

The imprecise probability model enables us to express our beliefs by probabilistic

judgments like “A is more probable than B.” Although such judgments are imprecise,

they are easier to elicit than precise models. Through the process of elicitation, a set of


rational probability measures can be constructed to properly reflect the available evidence (see the example in Section 2.3).

(5) Natural extension

In the theory of imprecise probability, a decision maker can always determine the

bounds on the prevision or probability of a new gamble based on the specified previsions.

Remember the example in Section 2.4: the lower prevision of new gamble 1 2X X+ is

determined given ( )1 0.3LOWE X = and ( )2 0.4LOWE X = .

(6) Statistical inference

Imprecision in probability assessment decreases when statistical data

accumulates. For example, before performing any test, we may roughly guess the in-situ

stresses based on the overburden, the topography and the tectonic history of a site. After a

large number of in-situ stress measurements, a more precise estimate of the in-situ stresses

can be provided.

(7) Robust results

Robust results are not affected by a small variation in the probability measures.

The theory of imprecise probability is inherently robust, i.e., the results are

“automatically robust, because they do not rely on arbitrary or doubtful assumptions” (Walley, 1991, Page 5).

(8) Indeterminacy

A strong argument in favor of precise probabilities is that the decision maker must

choose a unique, optimal option; however imprecise probabilities often result in


indeterminacy. It is difficult to assess the probabilities precisely when the information

required is not available. In such cases, if precise probabilities are imposed and a ‘best’

choice is selected, the choice may not be defensible.

Analytical techniques of imprecise probabilities may fail to yield the “best”

option. Instead, they generate multiple optimal options with indeterminate preferences.

To determine a unique optimal option, two major strategies can be adopted: (a) undertake further analysis or search for more information to determine the “best” option, for example, drill additional boreholes to collect geological data; or (b) arbitrarily select one among the optimal options (Walley, 1991, Page 239), i.e., the decision maker has the “freedom of choice,” as explained by Walley (1991, Page 241):

“Perhaps we should simply accept this degree of arbitrariness in choice, and call it ‘freedom of choice’, rather than follow rules to eliminate it. Reasoning cannot always determine a uniquely reasonable course of action, especially when there is little information to reason with.”


Chapter 3 Different types of interaction between variables

This chapter studies different types of interaction between discrete variables

within imprecise probabilities. First, we formulate the available information on

marginals, and then introduce concepts of interaction, including unknown interaction,

different types of independence, and correlated variables. For each type of interaction,

systematic algorithms are proposed, justified, and illustrated by simple examples. The

algorithms aim at achieving upper and lower bounds on previsions and conditional

probabilities on joint finite spaces subject to the constraints on marginals and the

assumed interaction. All theorems presented in this chapter are developed and proved by

the author.

3.1 CONSTRAINTS ON MARGINALS

The usual situation we face in risk analysis is to search for the upper and lower bounds of a specific real function (or gamble) of interest subject to constraints on the marginals.

There are two ways to present the constraints on marginals. One is to define upper and

lower previsions on marginals; the other is by specifying extreme distributions of

marginals.

First, let us consider the prevision bounds. We use bold letters for column vectors or matrices, and the corresponding non-bold letters for the components of a vector or matrix. An ni-column vector pi is a probability measure on the finite space Si = {si^j : j = 1,…, ni}, and its j-th component is Pi(si^j). Let {fi^k : Si → ℝ, k = 1,…, ki} be a set of bounded functions (gambles) on Si; fi^k(si^j) is the j-th component of the ni-column vector fi^k. The expectation (prevision) is:

E[fi^k] = Σ_{j=1..ni} fi^k(si^j) Pi(si^j) = (fi^k)^T pi (3.1)


If a set Ψ of distributions p is given, the upper and lower previsions of a new gamble f over Ψ are calculated as follows (Eq. (2.6) in Chapter 2):

EUPP[f] = max_{p∈Ψ} E[f]; ELOW[f] = min_{p∈Ψ} E[f] (3.2)

On the other hand, if bounds on the previsions of the marginals, ELOW[fi^k] and EUPP[fi^k], are provided, the set Ψi of distributions pi compatible with the given information is

Ψi = {pi : ELOW[fi^k] ≤ (fi^k)^T pi ≤ EUPP[fi^k], k = 1,…, ki; 1_{ni}^T pi = 1; pi^j ≥ 0} (3.3)

Alternatively, constraints may be given in terms of the extreme distributions of Ψi (Section 2.3). Let EXTi indicate the set of extreme distributions (vertices) of Ψi, and p_{EXT,i}^ξ be the ξ-th extreme distribution of variable Si, i.e., the ξ-th vertex of Ψi. Thus, Ψi is the set of convex combinations of the p_{EXT,i}^ξ:

Ψi = {pi : pi = Σ_ξ ci^ξ p_{EXT,i}^ξ, Σ_ξ ci^ξ = 1, ci^ξ ≥ 0} (3.4)

Consider a probability distribution P on a two-dimensional joint space S = {(s1^i, s2^j) : i = 1,…, n1; j = 1,…, n2}, where the (i,j)-th entry of the n1×n2 matrix P is the probability mass on the joint elementary event (s1^i, s2^j). Then the marginals can be written as p1 = P 1_{n2} and p2 = P^T 1_{n1}, where 1_n is an n-column vector of unit components. We can rewrite the constraints in Eqs. (3.3) and (3.4) as follows, respectively:

ELOW[f1^k] ≤ (f1^k)^T P 1_{n2} ≤ EUPP[f1^k], k = 1,…, k1;
ELOW[f2^k] ≤ (f2^k)^T P^T 1_{n1} ≤ EUPP[f2^k], k = 1,…, k2;
1_{n1}^T P 1_{n2} = 1; P_{i,j} ≥ 0 (3.5)

P 1_{n2} = Σ_ξ c1^ξ p_{EXT,1}^ξ; P^T 1_{n1} = Σ_ξ c2^ξ p_{EXT,2}^ξ;
Σ_ξ ci^ξ = 1, ci^ξ ≥ 0, i = 1, 2 (3.6)

The set of joint distributions Ψ is composed by all joint probabilities P compatible

with the constraints in Eq. (3.5) or (3.6) and the assumed interaction between variables.
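The two representations amount to simple matrix-vector products. The sketch below (invented numbers, illustrative only) builds a member of Ψi as a convex combination of extreme distributions, Eq. (3.4), and evaluates its prevision, Eq. (3.1); by linearity, the prevision bounds over Ψi are attained at the vertices:

```python
import numpy as np

f = np.array([0.0, 1.0, 3.0])        # a gamble f_i^k on S_i = {s^1, s^2, s^3}
p_ext = np.array([[0.5, 0.5, 0.0],   # extreme distributions of Psi_i (rows)
                  [0.5, 0.0, 0.5],
                  [1/3, 1/3, 1/3]])

c = np.array([0.2, 0.3, 0.5])        # convex weights: c >= 0, sum c = 1
p = c @ p_ext                        # a member of Psi_i, Eq. (3.4)

E_f = f @ p                          # prevision E[f] = f^T p, Eq. (3.1)
E_low, E_upp = (p_ext @ f).min(), (p_ext @ f).max()
print(E_low <= E_f <= E_upp)         # True: bounds attained at vertices
```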


3.2 PROBLEM FORMULATION

This section introduces three types of interaction between variables within imprecise probabilities: unknown interaction, independent variables, and correlated variables. Concepts of unknown interaction and notions of independence are introduced in Couso et al. (1999 and 2000) and Vicig (1999). Tonon et al. (2010) and Bernardini and Tonon (2010) further express the corresponding constraints explicitly in mathematical terms.

Unknown interaction applies to the situation in which no information is available on the interaction between S1 and S2. ΨU, the set of probability measures on the joint space S, is composed of all joint probability measures that respect the marginal rules, i.e., whose marginals are in Ψi:

P(· × S2) ∈ Ψ1; P(S1 × ·) ∈ Ψ2 (3.7)

Unknown interaction imposes no constraints other than Eq. (3.7). Consider a joint distribution P with (i,j)-th entry P_{i,j}. By expressing the prevision of interest as a linear function of the probability masses, Σ_{i=1..n1} Σ_{j=1..n2} a_{i,j} P_{i,j}, the problem under unknown interaction reads as follows (Eq. (3.5)):

Minimize (Maximize) Σ_{i=1..n1} Σ_{j=1..n2} a_{i,j} P_{i,j}

Subject to:
ELOW[f1^k] ≤ (f1^k)^T P 1_{n2} ≤ EUPP[f1^k], k = 1,…, k1
ELOW[f2^k] ≤ (f2^k)^T P^T 1_{n1} ≤ EUPP[f2^k], k = 1,…, k2
1_{n1}^T P 1_{n2} = 1
P_{i,j} ≥ 0, i = 1,…, n1; j = 1,…, n2 (3.8)

When extreme distributions are given on the marginals, problem (3.8) can be written as follows:

Minimize (Maximize) Σ_{i=1..n1} Σ_{j=1..n2} a_{i,j} P_{i,j}

Subject to:
P 1_{n2} = p1 = Σ_ξ c1^ξ p_{EXT,1}^ξ
P^T 1_{n1} = p2 = Σ_ξ c2^ξ p_{EXT,2}^ξ
Σ_ξ ci^ξ = 1, ci^ξ ≥ 0, i = 1, 2
P_{i,j} ≥ 0, i = 1,…, n1; j = 1,…, n2 (3.9)

Constraints in Eqs. (3.8) and (3.9) are linear; thus the set of joint distributions under unknown interaction, ΨU, is convex.

If the conditional probability P(S1 = s1^1 | S2 = s2^1) is of interest, replace the objective function in (3.8) and (3.9) by P_{1,1} / Σ_{i=1..n1} P_{i,1}; likewise for other conditional probabilities. Although Ψ remains convex, the objective function is no longer linear.
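For a small joint space, problem (3.8) can be solved by brute force. The sketch below (the marginal bounds are invented for illustration) scans 2×2 joint distributions P, keeps those whose marginals respect the prevision bounds, and bounds the joint mass P_{1,1}; nothing is assumed about the interaction beyond Eq. (3.7):

```python
import itertools

step = 0.02
grid = [i * step for i in range(51)]
lo1, hi1 = 0.2, 0.5   # assumed bounds on the marginal P(S1 = s1^1)
lo2, hi2 = 0.3, 0.6   # assumed bounds on the marginal P(S2 = s2^1)

best_lo, best_hi = 1.0, 0.0
for p11, p12, p21 in itertools.product(grid, repeat=3):
    p22 = 1.0 - p11 - p12 - p21
    if p22 < -1e-12:
        continue                       # not a probability distribution
    m1 = p11 + p12                     # marginal mass on s1^1 (row sum)
    m2 = p11 + p21                     # marginal mass on s2^1 (column sum)
    if lo1 - 1e-9 <= m1 <= hi1 + 1e-9 and lo2 - 1e-9 <= m2 <= hi2 + 1e-9:
        best_lo = min(best_lo, p11)
        best_hi = max(best_hi, p11)

print(round(best_lo, 2), round(best_hi, 2))   # 0.0 0.5
```

The bounds agree with the Fréchet limits max(0, m1 + m2 − 1) and min(m1, m2) optimized over the marginal intervals.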

3.2.2 Analysis with independent variables

The simplest way to consider the interaction between variables is to assume independence. Let Pi be the probability measure on the variable Si. The probability measure on the joint measurable space (S1, S2) for independent variables in precise probabilities is defined as the product measure

P := P1 ⊗ P2 : C = {U1 × U2 : Ui ⊆ Si} → [0, 1] (3.10)

where (P1 ⊗ P2)(U1 × U2) = P1(U1) P2(U2). The conditional probability measures for independent variables yield the marginal probability measures:

P(· × S2 | S1 × {s2}) = P1(·), ∀ s2 : P2(s2) > 0;
P(S1 × · | {s1} × S2) = P2(·), ∀ s1 : P1(s1) > 0 (3.11)

This means that if we learn that the actual value of S2 is s2, our knowledge about the probability measure for S1 does not change. Likewise for S1. The concept of independence is equivalent to the following two properties: 1) the probability of S1 conditional to S2 is equal to the marginal of S1, and vice versa; 2) the joint probability is equal to the product of the marginals. The equivalence exists only when each variable is subject to a single probability distribution.

Within imprecise probabilities, the two aforementioned properties and definitions

of independence are not always equivalent. As a result, notions of independence on joint

spaces are defined at different levels, including epistemic irrelevance/independence and

strong independence (see Campos et al. 1995, Couso et al. 1999 and 2000).

3.2.2.1 Epistemic Irrelevance/Independence

Consider the finite joint space S = S1 × S2. Under epistemic irrelevance of the second variable with respect to the first variable, the probability measure over S1 conditional to S2 is always in the set Ψ1, regardless of the value of S2; however, the probability measure for S1 could be different for different values of S2. The definition of epistemic irrelevance indeed uses the concept of conditional probability. The set of joint probability measures, ΨE^{|s2}, is the largest set of joint measures that are extensions of Eq. (3.11) (Bernardini and Tonon 2010):

ΨE^{|s2} = {P : P(· × S2 | S1 × {s2}) ∈ Ψ1, ∀ s2 : ∃ P2 ∈ Ψ2, P2(s2) > 0} (3.12)

where ΨE^{|s2} is called the “irrelevant natural extension (Couso et al. 1999) of the two marginals when the second experiment is epistemically irrelevant to the first” (i.e., the set of acceptable gambles concerning the first experiment does not change when we learn the outcome of the second experiment). Likewise for ΨE^{|s1}. It is important to notice that the conditional measure over S1 may be selected by a different procedure for different values of s2. Therefore, in imprecise probabilities, irrelevance of one experiment with respect to another is a directional, or asymmetric, relation. This asymmetry disappears in precise probabilities because Ψ1 and Ψ2 each contain only one probability distribution.


Let matrix P1|2 collect the probability distributions over S1 conditional to the values of S2, i.e., the i-th column of P1|2 is the conditional probability distribution over S1 given that S2 is equal to its i-th value. Following Eq. (3.12), a joint probability P in ΨE^{|s2} is

P(U1 × {s2}) = P(U1 × S2 | S1 × {s2}) P2(s2); P = P1|2 Diag(p2) (3.13)

( )( )

2 2

( )2 1 1

| ( )(1) (2)1|2 2 2 2 1|2 1 1 1: ; ...

i

s nE

n

Diag∈Ψ

⎧ ⎫⎛ ⎞⎪ ⎪⎜ ⎟⎪ ⎪Ψ = = ∈Ψ = ⎜ ⎟⎨ ⎬

⎜ ⎟⎪ ⎪⎪ ⎪⎝ ⎠⎩ ⎭p

P P p p P p p p (3.14)

where ( )iDiag p is a diagonal matrix with ip as its diagonal entries.

When the first variable is epistemically irrelevant to the second variable, a joint probability P in ΨE^{|s1} is

P({s1} × U2) = P1(s1) P(S1 × U2 | {s1} × S2); P = Diag(p1) P2|1 (3.15)

Thus the set ΨE^{|s1} is as follows (Bernardini and Tonon 2010):

ΨE^{|s1} = {P = Diag(p1) P2|1 : p1 ∈ Ψ1; P2|1 = ((p2^{(1)})^T; (p2^{(2)})^T; …; (p2^{(n1)})^T), p2^{(i)} ∈ Ψ2} (3.16)

Consequently, the constraints for problems involving epistemic irrelevance are the constraints on the marginals, i.e., Eq. (3.7), and the definition of epistemic irrelevance, i.e., Eq. (3.14) or (3.16). For example, when the second variable is epistemically irrelevant to the first variable, the complete optimization problems read as follows (Bernardini and Tonon 2010):


Minimize (Maximize) Σ_{i=1..n1} Σ_{j=1..n2} a_{i,j} P_{i,j}

Subject to:
P = (p1^{(1)} p1^{(2)} … p1^{(n2)}) Diag(p2)
ELOW[f1^k] ≤ (f1^k)^T p1^{(j)} ≤ EUPP[f1^k], k = 1,…, k1; j = 1,…, n2+1
ELOW[f2^k] ≤ (f2^k)^T p2 ≤ EUPP[f2^k], k = 1,…, k2
1_{n1}^T p1^{(j)} = 1, j = 1,…, n2+1; 1_{n2}^T p2 = 1
p1^{(j)} ≥ 0, j = 1,…, n2+1; p2 ≥ 0 (3.17)

where p1^{(n2+1)} := P 1_{n2} denotes the marginal of S1, so that the marginal rule of Eq. (3.7) is enforced together with the irrelevance constraints.

When marginals are assigned through their extreme distributions, the optimization problems are (Bernardini and Tonon 2010):

Minimize (Maximize) Σ_{i=1..n1} Σ_{j=1..n2} a_{i,j} P_{i,j}

Subject to:
P = (Σ_ξ c1^{ξ,1} p_{EXT,1}^ξ … Σ_ξ c1^{ξ,n2} p_{EXT,1}^ξ) Diag(Σ_ξ c2^ξ p_{EXT,2}^ξ)
Σ_ξ c1^{ξ,j} = 1, j = 1,…, n2; Σ_ξ c2^ξ = 1
c1^{ξ,j} ≥ 0, j = 1,…, n2; c2^ξ ≥ 0 (3.18)

When each variable is epistemically irrelevant to the other, that is, irrelevance is applied in both directions, we speak of “epistemic independence” (Campos et al. 1995,

Couso et al. 1999 and 2000). Epistemic independence is symmetric. It is the appropriate

model when we are given information on two marginal sets of probability measures and

under the assumption that the uncertainty about either variable does not change when

some information about the other variable is available. However, this assumption is not

equivalent to the assumption that variables are stochastically independent, which actually

is the concept of strong independence defined in the next sub-section.

Following Bernardini and Tonon (2010), let P be an n1×n2 matrix whose (i,j)-th entry is the probability mass on the joint elementary event (s1^i, s2^j), P(s1^i, s2^j). If T1 = U1 × S2 and T2 = S1 × {s2}, by noticing that T1 ∩ T2 = U1 × {s2}, two variables are epistemically independent if ∀ (s1, s2) ∈ S1 × S2,

P(U1 × {s2}) = P(U1 × S2 | S1 × {s2}) P2(s2); P = P1|2 Diag(p2) (3.19)

Likewise,

P({s1} × U2) = P(S1 × U2 | {s1} × S2) P1(s1); P = Diag(p1) P2|1. (3.20)

Thus, two variables are epistemically independent if ∀ (s1, s2) ∈ S1 × S2:

∃ P1^{|s2} ∈ Ψ1 : P(U1 × {s2}) = P1^{|s2}(U1) P2(s2), AND
∃ P2^{|s1} ∈ Ψ2 : P({s1} × U2) = P2^{|s1}(U2) P1(s1) (3.21)

The set of joint probability distributions, ΨE, is (Bernardini and Tonon 2010):

ΨE = {P ∈ ΨE^{|s2} ∩ ΨE^{|s1}} (3.22)

Here ΨE^{|s2} and ΨE^{|s1} are given by Eqs. (3.14) and (3.16), respectively.

In the optimization problem under epistemic independence, the constraints are

given by the marginal rules, i.e. Eq.(3.7), and by the definition of epistemic

independence, i.e. (3.21). If information on marginals is given as bounded previsions, the

optimization problems are (Bernardini and Tonon 2010):

Minimize (Maximize) Σ_{i=1..n1} Σ_{j=1..n2} a_{i,j} P_{i,j}

Subject to:
P = (p1^{(1)} p1^{(2)} … p1^{(n2)}) Diag(p2^{(n1+1)})
P = Diag(p1^{(n2+1)}) ((p2^{(1)})^T; (p2^{(2)})^T; …; (p2^{(n1)})^T)
ELOW[f1^k] ≤ (f1^k)^T p1^{(j)} ≤ EUPP[f1^k], k = 1,…, k1; j = 1,…, n2+1
ELOW[f2^k] ≤ (f2^k)^T p2^{(j)} ≤ EUPP[f2^k], k = 1,…, k2; j = 1,…, n1+1
1_{n1}^T p1^{(j)} = 1, j = 1,…, n2+1; 1_{n2}^T p2^{(j)} = 1, j = 1,…, n1+1
p1^{(j)} ≥ 0, j = 1,…, n2+1; p2^{(j)} ≥ 0, j = 1,…, n1+1 (3.23)

where p1^{(n2+1)} and p2^{(n1+1)} are the marginals of S1 and S2, respectively.

When the marginals are assigned through their extreme distributions, the optimization problems become (Bernardini and Tonon 2010):

Minimize (Maximize) $\sum_{i=1}^{n_1} \sum_{j=1}^{n_2} a_{i,j} P_{i,j}$

Subject to

$$\begin{aligned}
&\mathbf{P} = \left(\sum_{\xi=1}^{\xi_1} c_1^{\xi,1}\,\mathbf{p}_{1,EXT}^{\xi}\ \cdots\ \sum_{\xi=1}^{\xi_1} c_1^{\xi,n_2}\,\mathbf{p}_{1,EXT}^{\xi}\right)\,\mathrm{Diag}\left(\sum_{\xi=1}^{\xi_2} c_2^{\xi,n_2+1}\,\mathbf{p}_{2,EXT}^{\xi}\right);\\
&\mathbf{P} = \mathrm{Diag}\left(\sum_{\xi=1}^{\xi_1} c_1^{\xi,n_1+1}\,\mathbf{p}_{1,EXT}^{\xi}\right)\begin{pmatrix}\left(\sum_{\xi=1}^{\xi_2} c_2^{\xi,1}\,\mathbf{p}_{2,EXT}^{\xi}\right)^T\\ \vdots \\ \left(\sum_{\xi=1}^{\xi_2} c_2^{\xi,n_1}\,\mathbf{p}_{2,EXT}^{\xi}\right)^T\end{pmatrix};\\
&\sum_{\xi=1}^{\xi_1} c_1^{\xi,j} = 1,\ j = 1,\dots,n_2+1; \qquad \sum_{\xi=1}^{\xi_2} c_2^{\xi,j} = 1,\ j = 1,\dots,n_1+1;\\
&c_1^{\xi,j} \ge 0,\ \xi = 1,\dots,\xi_1;\ j = 1,\dots,n_2+1; \qquad c_2^{\xi,j} \ge 0,\ \xi = 1,\dots,\xi_2;\ j = 1,\dots,n_1+1
\end{aligned}\tag{3.24}$$

By observing Eqs. (3.17), (3.18), (3.23), and (3.24), we find that the joint probability P is the product of two matrices, which makes the constraints non-convex. Optimization problems involving non-convex constraints are known to be NP-hard (Horst et al. 2000), i.e., there is no fully polynomial-time approximation scheme to solve the problem. To reduce the computational difficulties, equivalent forms of the optimization problems and algorithms will be discussed in Section 4.3.3.

If the conditional probability $P\left(\left(s_1^1, s_2^1\right) \mid S_1 \times s_2^1\right)$ is of interest, then replace the objective functions in (3.17), (3.18), (3.23), and (3.24) by $P_{1,1} \big/ \sum_{i=1}^{n_1} P_{i,1}$; likewise for other conditional probabilities. Again, the objective function is no longer linear.


3.2.2.2 Conditional Epistemic Irrelevance/Independence

In the last sub-section, we discussed epistemic irrelevance and epistemic independence between two variables. When a third variable is involved, conditional irrelevance/independence (Campos and Cozman 2007) may need to be considered.

Similar to the definition of epistemic irrelevance in Eq. (3.12), S1 is conditionally epistemically irrelevant to S2 for a given $s_3^*$ if

$$P\left(S_2 \times s_1 \mid s_3^*\right) \in \Psi\left(S_2 \mid s_3^*\right) \tag{3.25}$$

In matrix form, the set ΨE over (S1, S2) given $s_3^*$ is

$$\Psi_E\left(S_1, S_2 \mid s_3^*\right) = \left\{ \mathbf{P}\left(S_1, S_2 \mid s_3^*\right) : \mathbf{P} = \begin{pmatrix} P\left(s_1^1 \mid s_3^*\right) & & 0 \\ & \ddots & \\ 0 & & P\left(s_1^{n_1} \mid s_3^*\right) \end{pmatrix} \begin{pmatrix} \left(\mathbf{P}^{(1)}\left(S_2 \mid s_3^*\right)\right)^T \\ \vdots \\ \left(\mathbf{P}^{(n_1)}\left(S_2 \mid s_3^*\right)\right)^T \end{pmatrix} \right\} \tag{3.26}$$

where each vector $\mathbf{P}^{(i)}\left(S_2 \mid s_3^*\right)$ is a conditional probability measure over S2 given $s_3^*$.

Accordingly, S2 is conditionally epistemically irrelevant to S1 given $s_3^*$ if

$$P\left(S_1 \times s_2 \mid s_3^*\right) \in \Psi\left(S_1 \mid s_3^*\right) \tag{3.27}$$

and the set ΨE over (S1, S2) given $s_3^*$ is

$$\Psi_E\left(S_1, S_2 \mid s_3^*\right) = \left\{ \mathbf{P}\left(S_1, S_2 \mid s_3^*\right) : \mathbf{P} = \left( \mathbf{P}^{(1)}\left(S_1 \mid s_3^*\right)\ \cdots\ \mathbf{P}^{(n_2)}\left(S_1 \mid s_3^*\right) \right) \begin{pmatrix} P\left(s_2^1 \mid s_3^*\right) & & 0 \\ & \ddots & \\ 0 & & P\left(s_2^{n_2} \mid s_3^*\right) \end{pmatrix} \right\} \tag{3.28}$$

It is easy to see that conditional epistemic irrelevance is an asymmetric, directional property as well. If both Eqs. (3.25) and (3.27) are satisfied, S1 is conditionally epistemically independent of S2 given S3.


3.2.2.3 Strong Independence

In imprecise probability, strong independence is equivalent to stochastic independence in precise probability. As a result, the set of probability measures, ΨS, is composed of all product measures:

$$\Psi_S = \left\{ P : P = P_1 \otimes P_2,\ P_1 \in \Psi_1,\ P_2 \in \Psi_2 \right\} \tag{3.29}$$

where $\left(P_1 \otimes P_2\right)\left(s_1 \times s_2\right) = P_1\left(s_1\right) P_2\left(s_2\right)$. Bernardini and Tonon (2010) pointed out that if (3.21) and the additional constraints below are satisfied, then epistemic independence becomes strong independence:

$$\forall s_2 \in S_2 : P_1^{|s_2} = P_1 \quad and \quad \forall s_1 \in S_1 : P_2^{|s_1} = P_2 \tag{3.30}$$

Strong independence is an appropriate model when the random experiments are stochastically independent. The constraints in the optimization problem under strong independence are the constraints on the marginals, i.e., Eq. (3.7), and the definition of strong independence in Eq. (3.29). When the marginals are bounded by upper and lower previsions, the complete optimization problems are (Bernardini and Tonon 2010):

Minimize (Maximize) $\sum_{i=1}^{n_1} \sum_{j=1}^{n_2} a_{i,j} P_{i,j}$

Subject to

$$\begin{aligned}
&\mathbf{P} = \mathbf{p}_1 \mathbf{p}_2^T\\
&E_{LOW}\left[f_1^k\right] \le \left(\mathbf{f}_1^k\right)^T \mathbf{p}_1 \le E_{UPP}\left[f_1^k\right], \quad k = 1,\dots,k_1;\\
&E_{LOW}\left[f_2^k\right] \le \left(\mathbf{f}_2^k\right)^T \mathbf{p}_2 \le E_{UPP}\left[f_2^k\right], \quad k = 1,\dots,k_2;\\
&\mathbf{1}_{(n_1)}^T \mathbf{p}_1 = 1; \qquad \mathbf{1}_{(n_2)}^T \mathbf{p}_2 = 1;\\
&\mathbf{p}_1 \ge \mathbf{0}; \qquad \mathbf{p}_2 \ge \mathbf{0}
\end{aligned}\tag{3.31}$$

If the extreme points are given on the convex sets of marginal probability distributions, the optimization problems under strong independence become (Bernardini and Tonon 2010):

Minimize (Maximize) $\sum_{i=1}^{n_1} \sum_{j=1}^{n_2} a_{i,j} P_{i,j}$

Subject to

$$\begin{aligned}
&\mathbf{P} = \mathbf{p}_1 \mathbf{p}_2^T; \qquad \mathbf{p}_1 = \sum_{\xi=1}^{\xi_1} c_1^{\xi}\,\mathbf{p}_{1,EXT}^{\xi}; \qquad \mathbf{p}_2 = \sum_{\xi=1}^{\xi_2} c_2^{\xi}\,\mathbf{p}_{2,EXT}^{\xi};\\
&\sum_{\xi=1}^{\xi_1} c_1^{\xi} = 1; \qquad \sum_{\xi=1}^{\xi_2} c_2^{\xi} = 1;\\
&c_1^{\xi} \ge 0,\ \xi = 1,\dots,\xi_1; \qquad c_2^{\xi} \ge 0,\ \xi = 1,\dots,\xi_2
\end{aligned}\tag{3.32}$$

Similar to the case of epistemic irrelevance/independence, the constraints in (3.31) and (3.32) under strong independence make the problems NP-hard as well. Detailed algorithms to solve this problem will be explained in Section 4.3.3.

For the conditional probability $P\left(\left(s_1^1, s_2^1\right) \mid S_1 \times s_2^1\right)$, the objective functions in (3.31) and (3.32) are replaced by $P_{1,1} \big/ \sum_{i=1}^{n_1} P_{i,1}$; likewise for other conditional probabilities.
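Because the objective $\sum a_{i,j} P_{i,j}$ with P = p1 p2^T is bilinear, for a fixed p1 it is linear in p2 and vice versa, so its extrema over the product of the two marginal polytopes are attained at pairs of marginal vertices. A brute-force enumeration sketch in pure Python (the two vertex lists are hypothetical here; they coincide with the marginal sets Ψ1 and Ψ2 used later in Example 3-1):

```python
from itertools import product

# Vertices of the two marginal credal sets (hypothetical data,
# matching the sets of Example 3-1 below).
V1 = [(0.29, 0.14, 0.57), (0.5, 0.25, 0.25), (0.2, 0.4, 0.4)]
V2 = [(0.0, 0.9, 0.1), (0.0, 1.0, 0.0), (0.1, 0.8, 0.1), (0.2, 0.8, 0.0)]

def prob_T(p1, p2):
    # P(T) for T = {(A,A),(B,B),(C,C)} under the product measure p1 p2^T
    return sum(a * b for a, b in zip(p1, p2))

# bilinear objective over a product of polytopes: extrema at vertex pairs
values = [prob_T(p1, p2) for p1, p2 in product(V1, V2)]
lower, upper = min(values), max(values)

assert abs(lower - 0.14) < 1e-9   # attained at (0.29,0.14,0.57) x (0,1,0)
assert abs(upper - 0.40) < 1e-9   # attained at (0.2,0.4,0.4) x (0,1,0)
```

With this data the strong-independence bounds on P(T) are [0.14, 0.4], narrower than the unknown-interaction interval [0, 0.6] obtained in Example 3-1, which illustrates the closing remark of that example.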

3.2.2.4 Conditional Strong Independence

Consider now strong independence conditional on a third variable. S1 is conditionally strongly independent of S2 given s3 if

$$P\left(S_1 \times S_2 \mid s_3\right) = P\left(S_1 \mid s_3\right) \times P\left(S_2 \mid s_3\right) \tag{3.33}$$

where $P\left(S_1 \mid s_3\right)$ and $P\left(S_2 \mid s_3\right)$ are given and equal to the marginal probabilities of $P\left(S_1 \times S_2 \mid s_3\right)$.

In matrix form, the set Ψ over (S1, S2) given s3 is

$$\Psi\left(S_1, S_2 \mid s_3\right) = \left\{ \mathbf{P}\left(S_1, S_2 \mid s_3\right) : \mathbf{P} = \mathbf{P}\left(S_1 \mid s_3\right)\,\mathbf{P}\left(S_2 \mid s_3\right)^T \right\} \tag{3.34}$$

where $\mathbf{P}\left(S_1 \mid s_3\right)$ and $\mathbf{P}\left(S_2 \mid s_3\right)$ are conditional probability distributions over S1 and S2 given s3, respectively.

Page 66: Copyright by Xiaomin You 2010

49

Or, equivalently,

$$P\left(S_1 \times S_2 \times s_3\right) = \frac{P\left(S_1 \times s_3\right) \times P\left(S_2 \times s_3\right)}{P\left(s_3\right)}, \quad for\ P\left(s_3\right) > 0 \tag{3.35}$$

where $P\left(S_1 \times s_3\right)$ and $P\left(S_2 \times s_3\right)$ are given and equal to the marginal probabilities of $P\left(S_1 \times S_2 \times s_3\right)$. The definition in Eq. (3.35) is the same as the 'weak conditional independence' defined in (Cano and Moral 2000), where 'strong conditional independence' is defined for another situation, which we do not discuss in this study.

In matrix form, the set ΨS over (S1, S2, s3) is

$$\Psi_S\left(S_1, S_2, s_3\right) = \left\{ \mathbf{P}\left(S_1, S_2, s_3\right) : \mathbf{P} = \frac{\mathbf{P}\left(S_1, s_3\right)\,\mathbf{P}\left(S_2, s_3\right)^T}{P\left(s_3\right)},\ P\left(s_3\right) > 0 \right\} \tag{3.36}$$

Compared with the constraints in Eq. (3.26) or (3.28), the constraints in Eq. (3.36) are stricter. Similar to the case of epistemic irrelevance and strong independence, we can expect that the set ΨS under conditional strong independence is a subset of the set ΨE under conditional epistemic irrelevance: $\Psi_S\left(S_1, S_2, s_3\right) \subseteq \Psi_E\left(S_1, S_2, s_3\right)$.

3.2.3 Analysis with uncertain correlation

The correlation coefficient ρ is often used as a measure of linear dependence between two variables S1 and S2:

$$\rho = \frac{COV\left(S_1, S_2\right)}{D_{S_1} D_{S_2}} = \frac{E\left(S_1 S_2\right) - E\left(S_1\right) E\left(S_2\right)}{D_{S_1} D_{S_2}} \tag{3.37}$$

where $D_{S_i}^2 = E\left(S_i^2\right) - \left(E\left(S_i\right)\right)^2$, i = 1, 2, and E(S) is the expected value of variable S.

In this sub-section, we discuss the situation in which the correlation coefficient ρ is given imprecisely, i.e., bounded by an interval:

$$\underline{\rho} \le \rho \le \overline{\rho} \tag{3.38}$$

To eliminate the possibility of division-by-zero errors, Eq. (3.37) is re-written as follows:

$$E\left(S_1 S_2\right) = E\left(S_1\right) E\left(S_2\right) + \rho\, D_{S_1} D_{S_2} \tag{3.39}$$

Substituting the bounds on ρ in Eq. (3.38) into Eq. (3.39), one obtains:

$$E\left(S_1\right) E\left(S_2\right) + \underline{\rho}\, D_{S_1} D_{S_2} \le E\left(S_1 S_2\right) \le E\left(S_1\right) E\left(S_2\right) + \overline{\rho}\, D_{S_1} D_{S_2} \tag{3.40}$$

Therefore, the set of joint probability measures over (S1, S2), ΨC, is obtained as follows:

$$\Psi_C = \left\{ \mathbf{P} : E\left(S_1\right) E\left(S_2\right) + \underline{\rho}\, D_{S_1} D_{S_2} \le E\left(S_1 S_2\right) \le E\left(S_1\right) E\left(S_2\right) + \overline{\rho}\, D_{S_1} D_{S_2};\ \mathbf{P}\,\mathbf{1}_{(n_2)} \in \Psi_1;\ \mathbf{P}^T \mathbf{1}_{(n_1)} \in \Psi_2 \right\} \tag{3.41}$$

where Ψi, i = 1, 2, is the given convex set of marginals of Si.

It should be noted that in precise probabilities, the covariance matrix must be positive semi-definite. Here we do not include this constraint in Eq. (3.41), because all constraints in (3.41) are written in terms of joint probability distributions; the constraint that the covariance matrix be positive semi-definite is satisfied automatically as long as the set ΨC is not empty.
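As a numerical illustration of the interval bound in Eq. (3.40), a minimal sketch with hypothetical moments (the numbers below are not from the dissertation):

```python
# Interval bound on E(S1*S2) from Eq. (3.40), with hypothetical moments.
E1, E2 = 0.5, 0.5              # expected values of S1 and S2
D1, D2 = 0.2, 0.2              # standard deviations of S1 and S2
rho_low, rho_upp = -0.3, 0.6   # imprecise correlation interval, Eq. (3.38)

lo = E1 * E2 + rho_low * D1 * D2
hi = E1 * E2 + rho_upp * D1 * D2

# E(S1*S2) is only known to lie in [lo, hi]
assert abs(lo - 0.238) < 1e-9
assert abs(hi - 0.274) < 1e-9
```

These two numbers become the lower and upper bounds on the linear functional $E(S_1 S_2)$ in the constraint set (3.41); the wider the ρ interval, the wider the resulting set ΨC.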

3.3 ALGORITHMS

Section 3.2 formulated the problems under the assumptions of unknown interaction, different types of independence, and uncertain correlation, respectively. This section focuses on the algorithms used to search for the global optimizers at which the upper and lower bounds of the objective functions are achieved.

Generally, there are two options for finding the extreme values of functions of the joint distribution:

Option (1) Using global optimization to find the point(s) at which the minimum or maximum value of the objective function is achieved.

Option (2) First finding all extreme points of Ψ as described in Section 2.3, and then restricting the search to these extreme points.


Option (1) directly focuses on the optimal solution, which is apparently more efficient if only one maximum or minimum must be calculated. Option (2) can be used only if the objective function and constraints are linear (e.g., probabilities or expectations). However, the conditional probability over joint finite spaces, which is non-linear, is an exception: in Section 3.3.1, we will prove that the minimum (or maximum) value of a conditional probability is still achieved at the extreme points of the set of joint distributions Ψ.

3.3.1 Previsions and conditional probability in joint distributions

The following two theorems make calculations on joint spaces much easier:

Theorem 3-1 The minimum and maximum values of a prevision in the joint distribution are achieved at some extreme points of the set of joint distributions Ψ.

Proof: Any prevision in the joint distribution is a linear, hence convex, function of the joint distribution.

Case 1: Ψ is convex. The global optimizer of a convex function relative to a convex set Ψ occurs at some extreme point of Ψ (Rockafellar 1997, Page 342).

Case 2: Ψ is not convex. Let Ψ* be the convex hull of Ψ, i.e., the smallest convex set that contains Ψ. Any extreme point of Ψ* is an extreme point of Ψ, and vice versa. According to Case 1, the prevision achieves its extreme values at some extreme points of Ψ*, and thus at some extreme points of Ψ. ◊

Theorem 3-2 The minimum and maximum values of a conditional probability in the joint distribution are achieved at the extreme points of the set of joint distributions Ψ.

Proof: By inserting the marginal expression for $p_2\left(s_2^j\right)$ into the expression for the conditional probability $p_{1|2}\left(s_1^i \mid s_2^j\right)$, one obtains:

$$\forall\left(s_1^i, s_2^j\right) \in S:\quad p_{1|2}\left(s_1^i \mid s_2^j\right) = p^{i,j} \Big/ \sum_{i=1}^{n_1} p^{i,j} \tag{3.42}$$

Let $\mathbf{P}_*, \mathbf{P}_{**} \in \Psi$ be two extreme points of the set Ψ. The conditional probability $p_{1|2}\left(s_1^i \mid s_2^j\right)$ on each joint distribution is $p_{1|2,*}\left(s_1^i \mid s_2^j\right) = p_*^{i,j} \big/ \sum_{i=1}^{n_1} p_*^{i,j}$ and $p_{1|2,**}\left(s_1^i \mid s_2^j\right) = p_{**}^{i,j} \big/ \sum_{i=1}^{n_1} p_{**}^{i,j}$, respectively. Assume $p_{1|2,*}\left(s_1^i \mid s_2^j\right) \ge p_{1|2,**}\left(s_1^i \mid s_2^j\right)$; then any point $\mathbf{P}_{new}$ between $\mathbf{P}_*$ and $\mathbf{P}_{**}$ may be written as $\lambda \mathbf{P}_* + \left(1-\lambda\right)\mathbf{P}_{**}$, $0 \le \lambda \le 1$, i.e., $p_{new}^{i,j} = \lambda p_*^{i,j} + \left(1-\lambda\right) p_{**}^{i,j}$. Consequently, the conditional probability based on the new joint distribution $\mathbf{P}_{new}$ is:

$$p_{1|2,new}\left(s_1^i \mid s_2^j\right) = \frac{p_{new}^{i,j}}{\sum_{i=1}^{n_1} p_{new}^{i,j}} = \frac{\lambda p_*^{i,j} + \left(1-\lambda\right) p_{**}^{i,j}}{\lambda \sum_{i=1}^{n_1} p_*^{i,j} + \left(1-\lambda\right) \sum_{i=1}^{n_1} p_{**}^{i,j}} \tag{3.43}$$

By subtracting $p_{1|2,*}\left(s_1^i \mid s_2^j\right)$ from Eq. (3.43), one obtains:

$$\begin{aligned}
p_{1|2,new}\left(s_1^i \mid s_2^j\right) - p_{1|2,*}\left(s_1^i \mid s_2^j\right)
&= \frac{\lambda p_*^{i,j} + \left(1-\lambda\right) p_{**}^{i,j}}{\lambda \sum_{i=1}^{n_1} p_*^{i,j} + \left(1-\lambda\right) \sum_{i=1}^{n_1} p_{**}^{i,j}} - \frac{p_*^{i,j}}{\sum_{i=1}^{n_1} p_*^{i,j}}\\[2pt]
&= \frac{\left(1-\lambda\right)\left( p_{**}^{i,j} \sum_{i=1}^{n_1} p_*^{i,j} - p_*^{i,j} \sum_{i=1}^{n_1} p_{**}^{i,j} \right)}{\left[\lambda \sum_{i=1}^{n_1} p_*^{i,j} + \left(1-\lambda\right) \sum_{i=1}^{n_1} p_{**}^{i,j}\right] \sum_{i=1}^{n_1} p_*^{i,j}}\\[2pt]
&= A \cdot \left[ p_{1|2,**}\left(s_1^i \mid s_2^j\right) - p_{1|2,*}\left(s_1^i \mid s_2^j\right) \right]
\end{aligned}\tag{3.44}$$

where

$$A = \frac{\left(1-\lambda\right) \sum_{i=1}^{n_1} p_{**}^{i,j}}{\lambda \sum_{i=1}^{n_1} p_*^{i,j} + \left(1-\lambda\right) \sum_{i=1}^{n_1} p_{**}^{i,j}} \ge 0.$$

Since $p_{1|2,*}\left(s_1^i \mid s_2^j\right) \ge p_{1|2,**}\left(s_1^i \mid s_2^j\right)$, one obtains $p_{1|2,new}\left(s_1^i \mid s_2^j\right) - p_{1|2,*}\left(s_1^i \mid s_2^j\right) \le 0$, i.e., $p_{1|2,*}\left(s_1^i \mid s_2^j\right) \ge p_{1|2,new}\left(s_1^i \mid s_2^j\right)$. Likewise, $p_{1|2,new}\left(s_1^i \mid s_2^j\right) \ge p_{1|2,**}\left(s_1^i \mid s_2^j\right)$.

In conclusion, given two extreme points $\mathbf{P}_*$ and $\mathbf{P}_{**}$ of the set of joint distributions with $p_{1|2,*}\left(s_1^i \mid s_2^j\right) \ge p_{1|2,**}\left(s_1^i \mid s_2^j\right)$, any point $\mathbf{P}_{new}$ between them satisfies the inequality $p_{1|2,*}\left(s_1^i \mid s_2^j\right) \ge p_{1|2,new}\left(s_1^i \mid s_2^j\right) \ge p_{1|2,**}\left(s_1^i \mid s_2^j\right)$. Therefore, the minimum and maximum values of the conditional probability are achieved at the extreme points of the set of joint distributions.

Therefore, regardless of the type of independence introduced next, the conditional upper and lower probabilities reach their minimum or maximum values at an extreme point of Ψ. ◊
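A quick numerical check of Theorem 3-2 (with two hypothetical 2×2 joint distributions, pure Python): along the segment λP* + (1−λ)P**, the conditional probability $p_{1|2}\left(s_1^1 \mid s_2^1\right)$ stays bracketed by its two endpoint values, as the sign structure of Eq. (3.44) predicts.

```python
# Theorem 3-2 check: a conditional probability evaluated along a segment
# between two joint distributions is bracketed by its endpoint values.
P_star = [[0.40, 0.10],
          [0.10, 0.40]]   # hypothetical extreme joint distribution P*
P_dstar = [[0.05, 0.25],
           [0.45, 0.25]]  # hypothetical extreme joint distribution P**

def cond(P):
    # p_{1|2}(s1^1 | s2^1) = p^{1,1} / (p^{1,1} + p^{2,1}), cf. Eq. (3.42)
    return P[0][0] / (P[0][0] + P[1][0])

lo, hi = sorted([cond(P_star), cond(P_dstar)])
for k in range(101):
    lam = k / 100
    P_new = [[lam * a + (1 - lam) * b for a, b in zip(ra, rb)]
             for ra, rb in zip(P_star, P_dstar)]
    assert lo - 1e-12 <= cond(P_new) <= hi + 1e-12
```

Here the endpoint values are 0.8 and 0.1, and the loop confirms that no interior point of the segment escapes that interval, so searching the extreme points of Ψ suffices for conditional bounds.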

3.3.2 Unknown Interaction

Because the constraints for an unknown interaction problem are linear, the set of joint distributions, ΨU, is convex. According to Theorem 3-1 and Theorem 3-2, whether the objective function is a prevision or a conditional probability on joint spaces, the bounds are achieved at extreme points of ΨU. Both Options (1) and (2) are applicable to the unknown interaction problem.


Example 3-1. Consider the case in which a bolt and its corresponding nut have to be applied in a process. Box 1 contains bolts of types A, B, and C; Box 2 contains nuts of types A, B, and C. Unfortunately, the manufacturer mixed up the labels, and the following information is available about Box 1 (i.e., S1): P(A) ≤ 2P(B); 2P(A) ≥ P(C); P(B) ≤ P(C). The set Ψ1 is depicted in Figure 3-1a, and has three vertices identified by the vectors $\mathbf{p}_{1,EXT}^{1}$ = (0.29, 0.14, 0.57)T, $\mathbf{p}_{1,EXT}^{2}$ = (0.5, 0.25, 0.25)T, and $\mathbf{p}_{1,EXT}^{3}$ = (0.2, 0.4, 0.4)T. Since bounds on expectations of general functions are given on S1, the planes bounding Ψ1 are not parallel to any coordinate plane (Figure 3-1a), and Ψ1 cannot be generated by bounds on probabilities of events (Section 2.3). The available information about Box 2 (i.e., S2) is that 10% of the nuts are either A or B, 80% are B, and the remaining 10% is indeterminate; the set Ψ2 is depicted in Figure 3-1b, and has four vertices identified by the vectors $\mathbf{p}_{2,EXT}^{1}$ = (0, 0.9, 0.1)T, $\mathbf{p}_{2,EXT}^{2}$ = (0, 1, 0)T, $\mathbf{p}_{2,EXT}^{3}$ = (0.1, 0.8, 0.1)T, and $\mathbf{p}_{2,EXT}^{4}$ = (0.2, 0.8, 0)T. A worker in the field takes one bolt from Box 1 and then one nut from Box 2; we are interested in the joint probability of the picked bolt and nut. Since we are just given the marginal probabilities, all we know is that the joint probability measure must satisfy Eq. (3.7). A pair of bolt and nut is selected from the two boxes, but we do not assume stochastic independence, and it is possible that a correlated joint procedure is used to select the bolt and nut.

Let Si = {A, B, C}, and S = {(A, A), (A, B), (A, C), (B, A), (B, B), (B, C), (C, A), (C, B), (C, C)}. Under the unknown interaction assumption, ΨU contains all joint probabilities of elementary events on S whose marginals are in Ψ1 and Ψ2. Consider the case in which the same type of bolt and nut is selected: T = {(A, A), (B, B), (C, C)}. The objective function and Constraints (3.8) are thus equal to:

Figure 3-1 Example 3-1: marginal sets Ψi in the 3-dimensional spaces (n1 = n2 = 3): (a) Ψ1; (b) Ψ2.

Minimize (Maximize) $p_{1,1} + p_{2,2} + p_{3,3}$

Subject to

$$\begin{aligned}
&-\left(p_{1,1} + p_{1,2} + p_{1,3}\right) + 2\left(p_{2,1} + p_{2,2} + p_{2,3}\right) \ge 0\\
&2\left(p_{1,1} + p_{1,2} + p_{1,3}\right) - \left(p_{3,1} + p_{3,2} + p_{3,3}\right) \ge 0\\
&-\left(p_{2,1} + p_{2,2} + p_{2,3}\right) + \left(p_{3,1} + p_{3,2} + p_{3,3}\right) \ge 0\\
&0.0 \le p_{1,1} + p_{2,1} + p_{3,1} \le 0.2\\
&0.8 \le p_{1,2} + p_{2,2} + p_{3,2} \le 1.0\\
&0.0 \le p_{1,3} + p_{2,3} + p_{3,3} \le 0.1\\
&p_{1,1} + p_{1,2} + p_{1,3} + p_{2,1} + p_{2,2} + p_{2,3} + p_{3,1} + p_{3,2} + p_{3,3} = 1\\
&p_{i,j} \ge 0
\end{aligned}\tag{3.45}$$

With all extreme points of sets Ψ1 and Ψ2, the constraints in problems (3.9) now read as follows:

$$\begin{aligned}
&p_{1,1} + p_{1,2} + p_{1,3} - 0.29\, c_1^1 - 0.5\, c_1^2 - 0.2\, c_1^3 = 0\\
&p_{2,1} + p_{2,2} + p_{2,3} - 0.14\, c_1^1 - 0.25\, c_1^2 - 0.4\, c_1^3 = 0\\
&p_{3,1} + p_{3,2} + p_{3,3} - 0.57\, c_1^1 - 0.25\, c_1^2 - 0.4\, c_1^3 = 0\\
&p_{1,1} + p_{2,1} + p_{3,1} - 0.0\, c_2^1 - 0.0\, c_2^2 - 0.1\, c_2^3 - 0.2\, c_2^4 = 0\\
&p_{1,2} + p_{2,2} + p_{3,2} - 0.9\, c_2^1 - 1.0\, c_2^2 - 0.8\, c_2^3 - 0.8\, c_2^4 = 0\\
&p_{1,3} + p_{2,3} + p_{3,3} - 0.1\, c_2^1 - 0.0\, c_2^2 - 0.1\, c_2^3 - 0.0\, c_2^4 = 0\\
&c_1^1 + c_1^2 + c_1^3 = 1; \qquad c_2^1 + c_2^2 + c_2^3 + c_2^4 = 1\\
&c_i^{\xi} \ge 0; \qquad p_{i,j} \ge 0
\end{aligned}\tag{3.46}$$



Under the unknown interaction assumption, the lower and upper probabilities for the event T are equal to 0 and 0.6, respectively; the solutions are detailed in Table 3-1a and Table 3-1b.

Another option is to use the general algorithm of Option (2) to find all extreme distributions on the joint space, and then calculate the objective function on all the extreme distributions to find the extreme values of the objective function. The constraints in Eq. (3.45) generate 60 extreme distributions, which are given in Table 3-2 together with the values of the objective function p1,1 + p2,2 + p3,3. One can check that these values are the same as the maximum and minimum in Table 3-1a and Table 3-1b.

It should be noted that the optimal solutions in unknown interaction problems are not necessarily achieved at extreme points of Ψ1 or Ψ2. The concepts of independence will provide narrower probability intervals.
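The reported optima can be sanity-checked without an LP solver. The sketch below verifies that the maximizer from Table 3-1a and the minimizer from Table 3-1b satisfy all constraints of problem (3.45) and attain the objective values 0.6 and 0:

```python
def feasible(P, tol=1e-9):
    """Check the constraints of problem (3.45) for a 3x3 joint matrix P."""
    r = [sum(row) for row in P]                               # marginal on S1
    c = [sum(P[i][j] for i in range(3)) for j in range(3)]    # marginal on S2
    return (r[0] <= 2 * r[1] + tol and          # P(A) <= 2 P(B)
            2 * r[0] >= r[2] - tol and          # 2 P(A) >= P(C)
            r[1] <= r[2] + tol and              # P(B) <= P(C)
            0.0 - tol <= c[0] <= 0.2 + tol and
            0.8 - tol <= c[1] <= 1.0 + tol and
            0.0 - tol <= c[2] <= 0.1 + tol and
            abs(sum(r) - 1.0) <= tol and
            all(P[i][j] >= -tol for i in range(3) for j in range(3)))

def objective(P):
    return P[0][0] + P[1][1] + P[2][2]   # probability of T

P_max = [[0.1, 0.1, 0.0], [0.0, 0.4, 0.0], [0.0, 0.3, 0.1]]  # Table 3-1a, Max
P_min = [[0.0, 0.4, 0.0], [0.2, 0.0, 0.0], [0.0, 0.4, 0.0]]  # Table 3-1b, Min

assert feasible(P_max) and abs(objective(P_max) - 0.6) < 1e-9
assert feasible(P_min) and abs(objective(P_min) - 0.0) < 1e-9
```

This check only confirms feasibility and attainment, not optimality; optimality follows from the vertex enumeration in Table 3-2.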

Table 3-1a Example 3-1: Solutions of the linear programming problems (3.45) for the lower and upper probabilities for T.

Min: joint P = [0.00 0.29 0.00; 0.09 0.00 0.06; 0.00 0.57 0.00] | marginal on S1: $\mathbf{p}_{1,EXT}^{1}$ = (0.29, 0.14, 0.57) | marginal on S2: (0.09, 0.84, 0.06)

Max: joint P = [0.1 0.1 0; 0 0.4 0; 0 0.3 0.1] | marginal on S1: $\mathbf{p}_{1,EXT}^{3}$ = (0.2, 0.4, 0.4) | marginal on S2: $\mathbf{p}_{2,EXT}^{3}$ = (0.1, 0.8, 0.1)

Table 3-1b Example 3-1: Solutions of the linear programming problems (3.46) for the lower and upper probabilities for T.

Min: joint P = [0 0.4 0; 0.2 0 0; 0 0.4 0] | (c1^1, c1^2, c1^3) = (0.47, 0.53, 0), marginal on S1: (0.4, 0.2, 0.4) | (c2^1, c2^2, c2^3, c2^4) = (0, 0, 0, 1), marginal on S2: $\mathbf{p}_{2,EXT}^{4}$ = (0.2, 0.8, 0)

Max: joint P = [0.2 0 0; 0 0.4 0; 0 0.4 0] | (c1^1, c1^2, c1^3) = (0, 0, 1), marginal on S1: $\mathbf{p}_{1,EXT}^{3}$ = (0.2, 0.4, 0.4) | (c2^1, c2^2, c2^3, c2^4) = (0, 0, 0, 1), marginal on S2: $\mathbf{p}_{2,EXT}^{4}$ = (0.2, 0.8, 0)

Table 3-2 Example 3-1: Extreme distributions of Ψ for the linear programming problems (3.45)

Extreme point of Ψ | p1,1 p1,2 p1,3 p2,1 p2,2 p2,3 p3,1 p3,2 p3,3 | p1,1+p2,2+p3,3 | p2,2/(p1,2+p2,2+p3,2)

1 0.00 0.40 0.00 0.20 0.00 0.00 0.00 0.40 0.00 0.00 0.00

2 0.20 0.00 0.00 0.00 0.40 0.00 0.00 0.40 0.00 0.60 0.50

3 0.00 0.27 0.00 0.20 0.00 0.00 0.00 0.53 0.00 0.00 0.00

4 0.00 0.29 0.00 0.00 0.14 0.00 0.00 0.57 0.00 0.14 0.14

5 0.00 0.29 0.00 0.14 0.00 0.00 0.00 0.57 0.00 0.00 0.00

6 0.00 0.50 0.00 0.00 0.25 0.00 0.00 0.25 0.00 0.25 0.25

7 0.00 0.20 0.00 0.00 0.40 0.00 0.00 0.40 0.00 0.40 0.40

8 0.00 0.40 0.00 0.10 0.00 0.10 0.00 0.40 0.00 0.00 0.00

9 0.00 0.27 0.00 0.10 0.00 0.10 0.00 0.53 0.00 0.00 0.00

10 0.10 0.00 0.10 0.00 0.40 0.00 0.00 0.40 0.00 0.50 0.50

11 0.00 0.29 0.00 0.14 0.00 0.00 0.00 0.51 0.06 0.06 0.00

12 0.00 0.29 0.00 0.00 0.14 0.00 0.20 0.37 0.00 0.14 0.18

13 0.00 0.29 0.00 0.14 0.00 0.00 0.06 0.51 0.00 0.00 0.00

14 0.20 0.09 0.00 0.00 0.14 0.00 0.00 0.57 0.00 0.34 0.18

15 0.00 0.23 0.06 0.14 0.00 0.00 0.00 0.57 0.00 0.00 0.00

16 0.06 0.23 0.00 0.14 0.00 0.00 0.00 0.57 0.00 0.06 0.00

17 0.00 0.29 0.00 0.00 0.14 0.00 0.00 0.47 0.10 0.24 0.16

18 0.00 0.29 0.00 0.00 0.04 0.10 0.00 0.57 0.00 0.04 0.05

19 0.00 0.29 0.00 0.04 0.00 0.10 0.00 0.57 0.00 0.00 0.00

20 0.00 0.19 0.10 0.00 0.14 0.00 0.00 0.57 0.00 0.14 0.16

21 0.00 0.50 0.00 0.00 0.25 0.00 0.20 0.05 0.00 0.25 0.31

22 0.00 0.50 0.00 0.20 0.05 0.00 0.00 0.25 0.00 0.05 0.06

23 0.20 0.30 0.00 0.00 0.25 0.00 0.00 0.25 0.00 0.45 0.31

24 0.00 0.50 0.00 0.00 0.25 0.00 0.00 0.15 0.10 0.35 0.28

25 0.00 0.50 0.00 0.00 0.15 0.10 0.00 0.25 0.00 0.15 0.17

26 0.00 0.40 0.10 0.00 0.25 0.00 0.00 0.25 0.00 0.25 0.28

27 0.00 0.20 0.00 0.00 0.40 0.00 0.20 0.20 0.00 0.40 0.50

28 0.00 0.20 0.00 0.20 0.20 0.00 0.00 0.40 0.00 0.20 0.25

29 0.00 0.20 0.00 0.00 0.40 0.00 0.00 0.30 0.10 0.50 0.44

30 0.00 0.20 0.00 0.00 0.30 0.10 0.00 0.40 0.00 0.30 0.33

31 0.00 0.10 0.10 0.00 0.40 0.00 0.00 0.40 0.00 0.40 0.44

32 0.00 0.29 0.00 0.00 0.14 0.00 0.10 0.37 0.10 0.24 0.18

33 0.00 0.29 0.00 0.10 0.00 0.04 0.00 0.51 0.06 0.06 0.00

34 0.00 0.29 0.00 0.10 0.04 0.00 0.00 0.47 0.10 0.14 0.05

35 0.10 0.19 0.00 0.00 0.14 0.00 0.00 0.47 0.10 0.34 0.18


36 0.00 0.29 0.00 0.00 0.04 0.10 0.10 0.47 0.00 0.04 0.05

37 0.00 0.29 0.00 0.04 0.00 0.10 0.06 0.51 0.00 0.00 0.00

38 0.00 0.19 0.10 0.00 0.14 0.00 0.10 0.47 0.00 0.14 0.18

39 0.10 0.19 0.00 0.00 0.04 0.10 0.00 0.57 0.00 0.14 0.05

40 0.00 0.23 0.06 0.10 0.00 0.04 0.00 0.57 0.00 0.00 0.00

41 0.06 0.23 0.00 0.04 0.00 0.10 0.00 0.57 0.00 0.06 0.00

42 0.00 0.19 0.10 0.10 0.04 0.00 0.00 0.57 0.00 0.04 0.05

43 0.10 0.09 0.10 0.00 0.14 0.00 0.00 0.57 0.00 0.24 0.18

44 0.00 0.50 0.00 0.00 0.25 0.00 0.10 0.05 0.10 0.35 0.31

45 0.00 0.50 0.00 0.10 0.15 0.00 0.00 0.15 0.10 0.25 0.19

46 0.10 0.40 0.00 0.00 0.25 0.00 0.00 0.15 0.10 0.45 0.31

47 0.00 0.50 0.00 0.00 0.15 0.10 0.10 0.15 0.00 0.15 0.19

48 0.00 0.40 0.10 0.00 0.25 0.00 0.10 0.15 0.00 0.25 0.31

49 0.00 0.50 0.00 0.10 0.05 0.10 0.00 0.25 0.00 0.05 0.06

50 0.10 0.40 0.00 0.00 0.15 0.10 0.00 0.25 0.00 0.25 0.19

51 0.00 0.40 0.10 0.10 0.15 0.00 0.00 0.25 0.00 0.15 0.19

52 0.10 0.30 0.10 0.00 0.25 0.00 0.00 0.25 0.00 0.35 0.31

53 0.00 0.20 0.00 0.00 0.40 0.00 0.10 0.20 0.10 0.50 0.50

54 0.00 0.20 0.00 0.10 0.30 0.00 0.00 0.30 0.10 0.40 0.38

55 0.10 0.10 0.00 0.00 0.40 0.00 0.00 0.30 0.10 0.60 0.50

56 0.00 0.20 0.00 0.00 0.30 0.10 0.10 0.30 0.00 0.30 0.38

57 0.00 0.10 0.10 0.00 0.40 0.00 0.10 0.30 0.00 0.40 0.50

58 0.00 0.20 0.00 0.10 0.20 0.10 0.00 0.40 0.00 0.20 0.25

59 0.10 0.10 0.00 0.00 0.30 0.10 0.00 0.40 0.00 0.40 0.38

60 0.00 0.10 0.10 0.10 0.30 0.00 0.00 0.40 0.00 0.30 0.38

Min 0.00 0.00

Max 0.60 0.50

By replacing the objective function in (3.45) or (3.46) with the conditional probability that the bolt is Type B given the type of nut, i.e., p2,2/(p1,2 + p2,2 + p3,2), the bounds of the conditional probability are 0 and 0.5; this interval is wider than [0.14, 0.4], i.e., the range of P1(B) in Ψ1.

In Table 3-2, the function p2,2/(p1,2 + p2,2 + p3,2) has been calculated at the 60 extreme points of Ψ. One can check that the objective function achieves its maximum and minimum values at extreme points of Ψ.


3.3.3 Independence

As stated in Section 3.2.2, the multiplication of two matrices makes the problems of epistemic irrelevance/independence and strong independence non-convex and consequently NP-hard. The following sub-sections present approaches that avoid such computational difficulties.

3.3.3.1 Epistemic Irrelevance/Independence

When the marginals' expectations are bounded as in Eq. (3.3), these non-convex optimization problems can be turned into linear ones by rewriting the problems in terms of the joint distribution matrix P. The derivation is presented as follows.

We start from a problem of epistemic irrelevance. Given P = Diag(p1) P2|1, the marginal on S1, $\mathbf{P} \cdot \mathbf{1}_{(n_2)}$, may be written as

$$\mathbf{P} \cdot \mathbf{1}_{(n_2)} = \mathrm{Diag}\left(\mathbf{p}_1\right)\,\mathbf{P}_{2|1} \cdot \mathbf{1}_{(n_2)} = \mathrm{Diag}\left(\mathbf{p}_1\right) \cdot \mathbf{1}_{(n_1)} = \mathbf{p}_1 \tag{3.47}$$

i.e., the marginal on S1 is the same as p1. Therefore, the constraints on p1 (i.e., $E_{LOW}\left[f_1^k\right] \le \left(\mathbf{f}_1^k\right)^T \mathbf{p}_1 \le E_{UPP}\left[f_1^k\right]$), by substituting $\mathbf{P} \cdot \mathbf{1}_{(n_2)}$ for p1, can be rewritten as

$$E_{LOW}\left[f_1^k\right] \le \left(\mathbf{f}_1^k\right)^T \left(\mathbf{P} \cdot \mathbf{1}_{(n_2)}\right) \le E_{UPP}\left[f_1^k\right] \tag{3.48}$$

By substituting $\mathbf{P} \cdot \mathbf{1}_{(n_2)}$ for p1 in P = Diag(p1) P2|1, one obtains

$$\mathbf{P} = \mathrm{Diag}\left(\mathbf{p}_1\right) \cdot \mathbf{P}_{2|1} = \mathrm{Diag}\left(\mathbf{P} \cdot \mathbf{1}_{(n_2)}\right) \cdot \mathbf{P}_{2|1} \tag{3.49}$$

Thus, the j-th row of matrix P, $\mathbf{P}^{j,\cdot}$, may be written as

$$\mathbf{P}^{j,\cdot} = \left(\sum_{m=1}^{n_2} p^{j,m}\right) \mathbf{P}_{2|1}^{j,\cdot}, \quad j = 1,\dots,n_1 \tag{3.50}$$

where $\mathbf{P}_{2|1}^{j,\cdot}$ is the j-th row of matrix $\mathbf{P}_{2|1}$.


As for the constraints on p2, i.e., $E_{LOW}\left[f_2^k\right] \le \left(\mathbf{f}_2^k\right)^T \mathbf{p}_2^{(j)} \le E_{UPP}\left[f_2^k\right]$, multiplying by $\sum_{m=1}^{n_2} p^{j,m}$ (as $\sum_{m=1}^{n_2} p^{j,m} \ge 0$), one obtains

$$E_{LOW}\left[f_2^k\right] \cdot \sum_{m=1}^{n_2} p^{j,m} \le \left(\mathbf{f}_2^k\right)^T \mathbf{p}_2^{(j)} \sum_{m=1}^{n_2} p^{j,m} \le E_{UPP}\left[f_2^k\right] \cdot \sum_{m=1}^{n_2} p^{j,m} \tag{3.51}$$

By noticing Eq. (3.50) and substituting $\left(\mathbf{P}^{j,\cdot}\right)^T$ for $\mathbf{p}_2^{(j)} \left(\sum_{m=1}^{n_2} p^{j,m}\right)$ in Eq. (3.51), the constraints on p2 are rewritten as

$$E_{LOW}\left[f_2^k\right] \cdot \sum_{m=1}^{n_2} p^{j,m} \le \mathbf{P}^{j,\cdot} \cdot \mathbf{f}_2^k \le E_{UPP}\left[f_2^k\right] \cdot \sum_{m=1}^{n_2} p^{j,m} \tag{3.52}$$

Therefore, given the epistemic irrelevance of the first experiment with respect to the second experiment, P = Diag(p1) P2|1, the optimization problem may be written in the linear form:

Minimize (Maximize) $\sum_{i=1}^{n_1} \sum_{j=1}^{n_2} a_{i,j}\, p^{i,j}$

Subject to

$$\begin{aligned}
&E_{LOW}\left[f_1^k\right] \le \left(\mathbf{f}_1^k\right)^T \left(\mathbf{P} \cdot \mathbf{1}_{(n_2)}\right) \le E_{UPP}\left[f_1^k\right], \quad k = 1,\dots,k_1;\\
&E_{LOW}\left[f_2^k\right] \cdot \sum_{m=1}^{n_2} p^{j,m} \le \mathbf{P}^{j,\cdot} \cdot \mathbf{f}_2^k \le E_{UPP}\left[f_2^k\right] \cdot \sum_{m=1}^{n_2} p^{j,m}, \quad k = 1,\dots,k_2;\ j = 1,\dots,n_1;\\
&\mathbf{1}^T \cdot \mathbf{P} \cdot \mathbf{1} = 1; \qquad \mathbf{P} \ge \mathbf{0}
\end{aligned}\tag{3.53}$$
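The two identities driving this linearization can be spot-checked numerically. A minimal sketch with a hypothetical p1 and conditional matrix P2|1: the row sums of P = Diag(p1) P2|1 reproduce p1 (Eq. 3.47), and each row of P equals its row sum times the corresponding row of P2|1 (Eq. 3.50).

```python
# Spot-check of Eqs. (3.47) and (3.50) for P = Diag(p1) * P2|1.
p1 = [0.2, 0.5, 0.3]                 # hypothetical marginal on S1
P21 = [[0.7, 0.3],                   # hypothetical conditional rows of P2|1
       [0.1, 0.9],
       [0.5, 0.5]]

# P[j][m] = p1[j] * P2|1[j][m]
P = [[p1[j] * P21[j][m] for m in range(2)] for j in range(3)]

# Eq. (3.47): P * 1 = p1  (row sums give back the marginal on S1)
for j in range(3):
    assert abs(sum(P[j]) - p1[j]) < 1e-12

# Eq. (3.50): row j of P equals (sum_m p^{j,m}) times row j of P2|1
for j in range(3):
    s = sum(P[j])
    for m in range(2):
        assert abs(P[j][m] - s * P21[j][m]) < 1e-12
```

Because both identities are linear in the entries of P, the prevision bounds on p1 and on the conditional rows translate into the linear constraints of (3.53).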

Likewise, when the second experiment is epistemically irrelevant to the first, P = P1|2 Diag(p2) and $\mathbf{1}_{(n_1)}^T \mathbf{P} = \mathbf{p}_2^T$, i.e., the marginal on S2 is equal to p2. The linear optimization problem is

Minimize (Maximize) $\sum_{i=1}^{n_1} \sum_{j=1}^{n_2} a_{i,j} P_{i,j}$

Subject to

$$\begin{aligned}
&E_{LOW}\left[f_1^k\right] \cdot \sum_{m=1}^{n_1} p^{m,j} \le \left(\mathbf{f}_1^k\right)^T \cdot \mathbf{P}^{\cdot,j} \le E_{UPP}\left[f_1^k\right] \cdot \sum_{m=1}^{n_1} p^{m,j}, \quad k = 1,\dots,k_1;\ j = 1,\dots,n_2;\\
&E_{LOW}\left[f_2^k\right] \le \left(\mathbf{1}_{(n_1)}^T \mathbf{P}\right) \cdot \mathbf{f}_2^k \le E_{UPP}\left[f_2^k\right], \quad k = 1,\dots,k_2;\\
&\mathbf{1}^T \cdot \mathbf{P} \cdot \mathbf{1} = 1; \qquad \mathbf{P} \ge \mathbf{0}
\end{aligned}\tag{3.54}$$


where $\mathbf{P}^{\cdot,j}$ is the j-th column of matrix P. The constraints in (3.54) are equivalent to the constraints in (3.17), but the computational effort is substantially reduced with respect to (3.17).

For the case of epistemic independence, the optimization problem (3.23) can be reformulated in terms of linear constraints:

Minimize (Maximize) $\sum_{i=1}^{n_1} \sum_{j=1}^{n_2} a_{i,j}\, p^{i,j}$

Subject to

$$\begin{aligned}
&E_{LOW}\left[f_1^k\right] \cdot \sum_{m=1}^{n_1} p^{m,j} \le \left(\mathbf{f}_1^k\right)^T \cdot \mathbf{P}^{\cdot,j} \le E_{UPP}\left[f_1^k\right] \cdot \sum_{m=1}^{n_1} p^{m,j}, \quad k = 1,\dots,k_1;\ j = 1,\dots,n_2;\\
&E_{LOW}\left[f_2^k\right] \cdot \sum_{m=1}^{n_2} p^{j,m} \le \mathbf{P}^{j,\cdot} \cdot \mathbf{f}_2^k \le E_{UPP}\left[f_2^k\right] \cdot \sum_{m=1}^{n_2} p^{j,m}, \quad k = 1,\dots,k_2;\ j = 1,\dots,n_1;\\
&\mathbf{1}^T \cdot \mathbf{P} \cdot \mathbf{1} = 1; \qquad \mathbf{P} \ge \mathbf{0}
\end{aligned}\tag{3.55}$$

An efficient algorithm for calculating the extreme joint distributions for epistemic irrelevance is given by the following two theorems:

Theorem 3-3 Let the extreme points of the convex sets of probability distributions on S1 and S2 be $\mathbf{p}_{1,EXT}^{\xi},\ \xi = 1,\dots,\xi_1$, and $\mathbf{p}_{2,EXT}^{\xi},\ \xi = 1,\dots,\xi_2$, respectively. If the first experiment is epistemically irrelevant to the second, the set of extreme points of the joint distributions, $\Psi_E^{|s_1}$, is

$$EXT = \left\{ \mathbf{P}_{EXT} : \mathbf{P}_{EXT} = \mathrm{Diag}\left(\mathbf{p}_{1,EXT}\right) \cdot \mathbf{P}_{2|1,EXT};\ \mathbf{p}_{1,EXT} \in EXT_1;\ \mathbf{P}_{2|1,EXT}^{i,\cdot} \in EXT_2,\ i = 1,\dots,n_1 \right\}$$

i.e.,

$$EXT = \left\{ \mathbf{P}_{EXT} : \mathbf{P}_{EXT} = \mathrm{Diag}\left(\mathbf{p}_{1,EXT}^{m}\right) \begin{pmatrix} \left(\mathbf{p}_{2,EXT}^{\eta_1}\right)^T \\ \vdots \\ \left(\mathbf{p}_{2,EXT}^{\eta_{n_1}}\right)^T \end{pmatrix};\ \mathbf{p}_{1,EXT}^{m} \in EXT_1;\ \mathbf{p}_{2,EXT}^{\eta_i} \in EXT_2,\ i = 1,\dots,n_1 \right\} \tag{3.56}$$

Proof: Any p1 ∈ Ψ1 and any conditional row of P2|1 (an element of Ψ2) can be written as a linear combination of the extreme points of Ψ1 and Ψ2, respectively:

$$\mathbf{p}_1 = \left(\mathbf{p}_{1,EXT}^{1}\ \cdots\ \mathbf{p}_{1,EXT}^{\xi_1}\right) \left(\lambda_1^{1}\ \cdots\ \lambda_1^{\xi_1}\right)^T \tag{3.57}$$

$$\mathbf{P}_{2|1} = \begin{pmatrix} \sum_{\xi=1}^{\xi_2} \lambda_2^{1,\xi} \left(\mathbf{p}_{2,EXT}^{\xi}\right)^T \\ \vdots \\ \sum_{\xi=1}^{\xi_2} \lambda_2^{n_1,\xi} \left(\mathbf{p}_{2,EXT}^{\xi}\right)^T \end{pmatrix}; \qquad
\begin{aligned}
&0 \le \lambda_1^{\xi} \le 1,\ \xi = 1,\dots,\xi_1;\ \sum_{\xi=1}^{\xi_1} \lambda_1^{\xi} = 1;\\
&0 \le \lambda_2^{i,\xi} \le 1,\ i = 1,\dots,n_1,\ \xi = 1,\dots,\xi_2;\ \sum_{\xi=1}^{\xi_2} \lambda_2^{i,\xi} = 1
\end{aligned} \tag{3.58}$$

By inserting Eqs. (3.57) and (3.58) into $\mathbf{P} = \mathrm{Diag}\left(\mathbf{p}_1\right) \cdot \mathbf{P}_{2|1}$, one obtains:

$$\mathbf{P} = \mathrm{Diag}\left(\sum_{\xi=1}^{\xi_1} \lambda_1^{\xi}\,\mathbf{p}_{1,EXT}^{\xi}\right) \begin{pmatrix} \sum_{\xi=1}^{\xi_2} \lambda_2^{1,\xi} \left(\mathbf{p}_{2,EXT}^{\xi}\right)^T \\ \vdots \\ \sum_{\xi=1}^{\xi_2} \lambda_2^{n_1,\xi} \left(\mathbf{p}_{2,EXT}^{\xi}\right)^T \end{pmatrix} \tag{3.59}$$

Extreme points of P are achieved if and only if

$$\lambda_1^{\xi} = \begin{cases} 1, & \xi = m \\ 0, & \xi \ne m \end{cases}; \qquad \lambda_2^{i,\xi} = \begin{cases} 1, & \xi = \eta_i \\ 0, & \xi \ne \eta_i \end{cases}; \qquad m = 1,\dots,\xi_1;\ \eta_i = 1,\dots,\xi_2.$$

Therefore,

$$\mathbf{P}_{EXT} = \mathrm{Diag}\left(\mathbf{p}_{1,EXT}^{m}\right) \begin{pmatrix} \left(\mathbf{p}_{2,EXT}^{\eta_1}\right)^T \\ \vdots \\ \left(\mathbf{p}_{2,EXT}^{\eta_{n_1}}\right)^T \end{pmatrix}. \ \diamond$$

Theorem 3-4 If the second experiment is epistemically irrelevant to the first, the set of extreme points of the joint distributions is

$$EXT = \left\{ \mathbf{P}_{EXT} : \mathbf{P}_{EXT} = \mathbf{P}_{1|2,EXT} \cdot \mathrm{Diag}\left(\mathbf{p}_{2,EXT}\right);\ \mathbf{P}_{1|2,EXT}^{\cdot,i} \in EXT_1;\ \mathbf{p}_{2,EXT} \in EXT_2 \right\}$$

i.e.,

$$EXT = \left\{ \mathbf{P}_{EXT} : \mathbf{P}_{EXT} = \left(\mathbf{p}_{1,EXT}^{\eta_1}\ \cdots\ \mathbf{p}_{1,EXT}^{\eta_{n_2}}\right) \mathrm{Diag}\left(\mathbf{p}_{2,EXT}^{m}\right);\ \mathbf{p}_{1,EXT}^{\eta_i} \in EXT_1,\ i = 1,\dots,n_2;\ \mathbf{p}_{2,EXT}^{m} \in EXT_2 \right\} \tag{3.60}$$


Theorem 3-3 and Theorem 3-4 enable us to efficiently find the extreme joint distributions given the extreme distributions on the marginals. The upper limit on the number of extreme joint distributions is $\xi_1 \times \xi_2^{n_1}$ when the first experiment is epistemically irrelevant to the second experiment, and $\xi_1^{n_2} \times \xi_2$ when the second experiment is epistemically irrelevant to the first one. However, the algorithms in Theorem 3-3 and Theorem 3-4 cannot be used in the case of epistemic independence because, under epistemic independence, the set of joint distributions is the intersection of the two convex sets for the epistemic irrelevance cases, i.e., $\Psi_E = \Psi_E^{|s_1} \cap \Psi_E^{|s_2}$. As illustrated in Figure 3-2, the extreme points of ΨE may not be extreme points of $\Psi_E^{|s_1}$ or $\Psi_E^{|s_2}$. The only way to determine the extreme points of ΨE is to use the linear constraints in (3.55) and apply the general algorithm for Option (2).

Figure 3-2 Set of extreme points for the case of epistemic independence
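The enumeration of Theorem 3-3 takes only a few lines: pick one vertex of Ψ1 and, independently for each row, one vertex of Ψ2. Using the vertex data of Example 3-1 (ξ1 = 3, ξ2 = 4, n1 = 3), the sketch below generates the at most ξ1·ξ2^{n1} = 192 candidate extreme joint distributions and confirms that each has its S1-marginal at a vertex of Ψ1:

```python
from itertools import product

# Extreme points of Psi_1 and Psi_2 from Example 3-1.
V1 = [(0.29, 0.14, 0.57), (0.5, 0.25, 0.25), (0.2, 0.4, 0.4)]
V2 = [(0.0, 0.9, 0.1), (0.0, 1.0, 0.0), (0.1, 0.8, 0.1), (0.2, 0.8, 0.0)]

extremes = []
for p1 in V1:                                  # one vertex of the S1 marginal
    for rows in product(V2, repeat=len(p1)):   # one Psi_2 vertex per row
        P = [[p1[i] * rows[i][j] for j in range(3)] for i in range(3)]
        extremes.append(P)

# upper limit xi1 * xi2^(n1) = 3 * 4^3 = 192 candidate extreme distributions
assert len(extremes) == 3 * 4 ** 3

# every enumerated joint has its S1-marginal at a vertex of Psi_1
vertices1 = {tuple(round(x, 10) for x in v) for v in V1}
for P in extremes:
    r = tuple(round(sum(row), 10) for row in P)
    assert r in vertices1
```

As the theorem states, these candidates cover the extreme points of $\Psi_E^{|s_1}$; by the remark above, no analogous shortcut exists for the intersection set ΨE of epistemic independence.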

Example 3-2. Consider again the situation and knowledge available in Example 3-1. Suppose that now

two additional boxes (Boxes 3 and 4) of nuts are delivered to the construction site. The content is the same

as Box 2 in Example 3-1. This time, the worker in the field picks a bolt from Box 1 (Experiment 1), and then proceeds with Experiment 2, i.e.,

If it is Type A, then he picks a nut from Box 2.

If it is Type B, then he picks a nut from Box 3.



Otherwise, he picks a nut from Box 4.

We want to write down the optimization problems for finding upper and lower expectations on the joint space, and then calculate the upper and lower probabilities for the case in which the same type of bolt and nut is selected, i.e. event T = {(A, A), (B, B), (C, C)}. Finally, we calculate upper and lower conditional probabilities that the bolt is Type B given the type of the nut, and contrast them with upper and lower conditional probabilities that the nut is Type B given the type of the bolt.

In this example, the first experiment is epistemically irrelevant to the second experiment because:

1) The set of acceptable gambles concerning the second experiment does not change when we learn the outcome of the first experiment.

2) s1 is selected according to some marginal distribution in Ψ1, and then s2 is selected according to a distribution from Ψ2 that depends on s1.

3) s2 is selected by a different procedure for different values of s1.

Let us first solve the problem with the quadratic constraints in (3.17) and (3.18), respectively. The quadratic constraints in Eq. (3.17) become:

Subject to

$$\mathbf{P} = Diag(\mathbf{p}_1)\begin{pmatrix}(\mathbf{p}_2^{(1)})^T\\ (\mathbf{p}_2^{(2)})^T\\ (\mathbf{p}_2^{(3)})^T\end{pmatrix}$$

$$(-1,2,0)\cdot\mathbf{p}_1 \ge 0;\qquad (2,0,-1)\cdot\mathbf{p}_1 \ge 0;\qquad (0,-1,1)\cdot\mathbf{p}_1 \ge 0$$

$$0.0 \le (1,0,0)\cdot\mathbf{p}_2^{(j)} \le 0.2,\quad j=1,2,3$$
$$0.8 \le (0,1,0)\cdot\mathbf{p}_2^{(j)} \le 1.0,\quad j=1,2,3$$
$$0.0 \le (0,0,1)\cdot\mathbf{p}_2^{(j)} \le 0.1,\quad j=1,2,3$$

$$\mathbf{1}^T\cdot\mathbf{p}_1 = 1;\quad \mathbf{p}_1 \ge \mathbf{0};\qquad \mathbf{1}^T\cdot\mathbf{p}_2^{(j)} = 1;\quad \mathbf{p}_2^{(j)} \ge \mathbf{0},\quad j=1,2,3 \qquad (3.61)$$


Based on the extreme distributions given in Example 3-1, Eq. (3.18) gives the following quadratic

constraints:

Subject to

$$\mathbf{P} = Diag\!\left(\sum_{i=1}^{3} c_1^{i,1}\,\mathbf{p}_1^{EXT_i}\right)
\begin{pmatrix}
\left(\sum_{j=1}^{4} c_2^{j,1}\,\mathbf{p}_2^{EXT_j}\right)^T\\
\left(\sum_{j=1}^{4} c_2^{j,2}\,\mathbf{p}_2^{EXT_j}\right)^T\\
\left(\sum_{j=1}^{4} c_2^{j,3}\,\mathbf{p}_2^{EXT_j}\right)^T
\end{pmatrix}$$

with $\mathbf{p}_1^{EXT_1} = (0.29, 0.14, 0.57)^T$, $\mathbf{p}_1^{EXT_2} = (0.5, 0.25, 0.25)^T$, $\mathbf{p}_1^{EXT_3} = (0.2, 0.4, 0.4)^T$, and $\mathbf{p}_2^{EXT_1} = (0.0, 0.9, 0.1)^T$, $\mathbf{p}_2^{EXT_2} = (0.0, 1.0, 0.0)^T$, $\mathbf{p}_2^{EXT_3} = (0.1, 0.8, 0.1)^T$, $\mathbf{p}_2^{EXT_4} = (0.2, 0.8, 0.0)^T$, and

$$\sum_{i=1}^{3} c_1^{i,1} = 1;\qquad \sum_{i=1}^{4} c_2^{i,j} = 1,\ j = 1,2,3;\qquad c_1 \ge 0;\quad c_2 \ge 0 \qquad (3.62)$$

The upper and lower probabilities for the event T = {(A, A), (B, B), (C, C)} are equal to 0.48 and 0.11, respectively. The solutions summarized in Table 3-3a and Table 3-3b indicate that p2^(1) ≠ p2^(2) ≠ p2^(3) at the minimizing solution, i.e. they do not satisfy stochastic independence. Likewise for the maximizing solution.

Table 3-3a Example 3-2: Solutions of the optimization problems (3.17) for lower and upper probabilities for T.

Min: Joint P = (0.00, 0.27, 0.01; 0.02, 0.11, 0.01; 0.06, 0.51, 0.00);
  p1^(i) = p1^EXT1 = (0.29, 0.14, 0.57); Marginal on S1: (0.29, 0.14, 0.57);
  p2^(j): (0.00, 0.95, 0.05); (0.14, 0.79, 0.07); (0.11, 0.89, 0.00); Marginal on S2: (0.07, 0.90, 0.03)

Max: Joint P = (0.04, 0.16, 0.00; 0.00, 0.40, 0.00; 0.03, 0.33, 0.04);
  p1^(i) = p1^EXT3 = (0.2, 0.4, 0.4); Marginal on S1: (0.2, 0.4, 0.4);
  p2^(j): p2^EXT4 = (0.2, 0.8, 0); p2^EXT2 = (0, 1, 0); (0.07, 0.83, 0.10); Marginal on S2: (0.08, 0.88, 0.04)


Table 3-3b Example 3-2: Solutions of the optimization problems (3.18) for lower and upper probabilities for T.

Min: Joint P = (0.00, 0.26, 0.03; 0.02, 0.11, 0.01; 0.03, 0.54, 0.00);
  p1^(i) = p1^EXT1 = (0.29, 0.14, 0.57); Marginal on S1: (0.29, 0.14, 0.57);
  p2^(j): (0, 0.91, 0.09); (0.15, 0.80, 0.05); (0.05, 0.95, 0); Marginal on S2: (0.05, 0.92, 0.03)

Max: Joint P = (0.04, 0.16, 0.00; 0.00, 0.40, 0.00; 0.02, 0.34, 0.04);
  p1^(i) = p1^EXT3 = (0.2, 0.4, 0.4); Marginal on S1: (0.2, 0.4, 0.4);
  p2^(j): p2^EXT4 = (0.2, 0.8, 0); p2^EXT2 = (0, 1, 0); (0.02, 0.88, 0.10); Marginal on S2: (0.06, 0.90, 0.04)

Our solution was based on the observation that the first experiment is epistemically irrelevant to the second experiment. As a consequence, upper and lower conditional probabilities that the nut is Type B given the type of the bolt are equal to the marginal ones, i.e. 1 and 0.8, respectively. In order to check that epistemic irrelevance is a directional, asymmetric property, let us calculate upper and lower conditional probabilities that the bolt is Type B given the type of the nut. The (non-linear) function to minimize and maximize is p2,2/(p1,2 + p2,2 + p3,2), where p1,2 + p2,2 + p3,2 is the second component of the marginal on S2. The constraints are still given by Eqs. (3.17) or (3.18). Upper and lower conditional probabilities are equal to 0.45 and 0.12, which are wider bounds than the marginal ones, i.e. 0.4 and 0.14. Since the conditional probability bounds differ from the marginal probability bounds, the second experiment is epistemically relevant to the first one. Results summarized in Table 3-4a and Table 3-4b indicate that p2^(1) ≠ p2^(2) ≠ p2^(3), i.e., the minimizing and maximizing solutions again violate stochastic independence.


Table 3-4a Example 3-2: Solutions of the optimization problems (3.17) for upper and lower conditional probabilities that the bolt is Type B given the type of the nut.

Min: Joint P = (0.00, 0.29, 0.00; 0.01, 0.11, 0.01; 0.00, 0.57, 0.00);
  p1^(i) = p1^EXT1 = (0.29, 0.14, 0.57); Marginal on S1: (0.29, 0.14, 0.57);
  p2^(j): p2^EXT2 = (0, 1, 0); (0.09, 0.81, 0.09); p2^EXT2 = (0, 1, 0); Marginal on S2: (0.01, 0.97, 0.01)

Max: Joint P = (0.02, 0.16, 0.02; 0.00, 0.40, 0.00; 0.04, 0.32, 0.04);
  p1^(i) = p1^EXT3 = (0.2, 0.4, 0.4); Marginal on S1: (0.2, 0.4, 0.4);
  p2^(j): p2^EXT3 = (0.1, 0.8, 0.1); p2^EXT2 = (0, 1, 0); p2^EXT3 = (0.1, 0.8, 0.1); Marginal on S2: (0.06, 0.88, 0.06)

Table 3-4b Example 3-2: Solutions of the optimization problems (3.18) for upper and lower conditional probabilities that the bolt is Type B given the type of the nut.

Min: Joint P = (0.00, 0.29, 0.00; 0.02, 0.11, 0.01; 0.00, 0.57, 0.00);
  p1^(i) = p1^EXT1 = (0.29, 0.14, 0.57); Marginal on S1: (0.29, 0.14, 0.57);
  p2^(j): p2^EXT2 = (0, 1, 0); (0.15, 0.80, 0.05); p2^EXT2 = (0, 1, 0); Marginal on S2: (0.02, 0.97, 0.01)

Max: Joint P = (0.04, 0.16, 0.00; 0.00, 0.40, 0.00; 0.04, 0.32, 0.04);
  p1^(i) = p1^EXT3 = (0.2, 0.4, 0.4); Marginal on S1: (0.2, 0.4, 0.4);
  p2^(j): p2^EXT4 = (0.2, 0.8, 0); p2^EXT2 = (0, 1, 0); p2^EXT3 = (0.1, 0.8, 0.1); Marginal on S2: (0.08, 0.88, 0.04)

Now we can rework Example 3-2 with the new linear algorithm (Eq. (3.53)), in which the constraints are written in terms of the joint distribution. Since the problem is linear, it may be solved with two different methods: as a linear optimization problem (Option (1)), or through Option (2). When the constraints are given in terms of the extreme points of the marginals, Theorem 3-3 applies. Let us redo Example 3-2 by rewriting the constraints as in Eq. (3.53): Subject to (3.63)


$$(-1,2,0)\cdot(\mathbf{P}\cdot\mathbf{1}) \ge 0;\qquad (2,0,-1)\cdot(\mathbf{P}\cdot\mathbf{1}) \ge 0;\qquad (0,-1,1)\cdot(\mathbf{P}\cdot\mathbf{1}) \ge 0$$
$$0.0\,(\mathbf{P}\cdot\mathbf{1})_i \le p_{i,1} \le 0.2\,(\mathbf{P}\cdot\mathbf{1})_i,\quad i=1,2,3$$
$$0.8\,(\mathbf{P}\cdot\mathbf{1})_i \le p_{i,2} \le 1.0\,(\mathbf{P}\cdot\mathbf{1})_i,\quad i=1,2,3$$
$$0.0\,(\mathbf{P}\cdot\mathbf{1})_i \le p_{i,3} \le 0.1\,(\mathbf{P}\cdot\mathbf{1})_i,\quad i=1,2,3$$
$$\mathbf{1}^T\cdot\mathbf{P}\cdot\mathbf{1} = 1;\qquad \mathbf{P} \ge \mathbf{0}$$

Option (1): a linear optimization problem

Upper and lower probabilities of T are 0.48 and 0.11, and upper and lower conditional probabilities are 0.45 and 0.12, the same as the results previously calculated in Example 3-2; they may be achieved at different solutions, listed in Table 3-5 and Table 3-6. The computational effort was reduced: the linear optimization problem (3.63) reaches the optimal solutions in around 8 iterations, whereas the optimization problem (3.61) with quadratic constraints reaches them after about 11 iterations.
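The linearized problem (3.63) is an ordinary linear program, so any LP solver reproduces the bounds for T. A minimal sketch with SciPy follows; the flattened 3x3 variable layout and helper names are ours, and the gamble rows and nut bounds follow our reading of Eq. (3.61):

```python
import numpy as np
from scipy.optimize import linprog

def row_sum(i):
    """Coefficient vector selecting row i of the flattened 3x3 joint."""
    v = np.zeros(9)
    v[3*i:3*i+3] = 1.0
    return v

A_ub, b_ub = [], []
# Gambles on the S1 marginal (P*1): g . (P*1) >= 0  ->  -g . (P*1) <= 0
for g in [(-1, 2, 0), (2, 0, -1), (0, -1, 1)]:
    A_ub.append(-sum(g[i] * row_sum(i) for i in range(3))); b_ub.append(0.0)
# Nut bounds per row: lo_j * (P*1)_i <= p_ij <= hi_j * (P*1)_i
lo_b, hi_b = [0.0, 0.8, 0.0], [0.2, 1.0, 0.1]
for i in range(3):
    for j in range(3):
        e = np.zeros(9); e[3*i + j] = 1.0
        A_ub.append(e - hi_b[j] * row_sum(i)); b_ub.append(0.0)
        A_ub.append(lo_b[j] * row_sum(i) - e); b_ub.append(0.0)

c = np.zeros(9); c[[0, 4, 8]] = 1.0        # P(T) = p11 + p22 + p33
common = dict(A_ub=np.array(A_ub), b_ub=b_ub, A_eq=[np.ones(9)], b_eq=[1.0])
low = linprog(c, **common)                 # lower probability of T
up = linprog(-c, **common)                 # upper probability of T
print(round(low.fun, 2), round(-up.fun, 2))   # 0.11 0.48
```

The default non-negativity bounds of `linprog` encode P >= 0, and the single equality row encodes the normalization of the joint.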

Table 3-5 Example 3-2: Solutions of the optimization problems (3.63) for lower and upper probabilities for T.

Min: Joint P = (0.00, 0.26, 0.03; 0.03, 0.11, 0.00; 0.11, 0.46, 0.00);
  Marginal on S1: p1^EXT1 = (0.29, 0.14, 0.57); Marginal on S2: (0.14, 0.83, 0.03)

Max: Joint P = (0.04, 0.16, 0.00; 0.00, 0.40, 0.00; 0.04, 0.32, 0.04);
  Marginal on S1: p1^EXT3 = (0.2, 0.4, 0.4); Marginal on S2: (0.08, 0.88, 0.04)

Table 3-6 Example 3-2: Solutions of the optimization problems (3.63) for upper and lower conditional probabilities that the bolt is Type B given the type of the nut.

Min: Joint P = (0.00, 0.29, 0.00; 0.01, 0.11, 0.01; 0.00, 0.57, 0.00);
  Marginal on S1: p1^EXT1 = (0.29, 0.14, 0.57); Marginal on S2: (0.01, 0.97, 0.01)

Max: Joint P = (0.02, 0.16, 0.02; 0.00, 0.40, 0.00; 0.08, 0.32, 0.00);
  Marginal on S1: p1^EXT3 = (0.2, 0.4, 0.4); Marginal on S2: (0.10, 0.88, 0.02)
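The conditional bounds above optimize the fractional objective p2,2/(p1,2 + p2,2 + p3,2). Because every inequality constraint in the linearized formulation is homogeneous in P, a Charnes-Cooper substitution y = p/(d.p), t = 1/(d.p) turns each fractional bound into a single LP. A sketch, with our variable layout and constraint rows following our reading of Eq. (3.63):

```python
import numpy as np
from scipy.optimize import linprog

def constraint_rows():
    """Homogeneous rows A with A @ p <= 0 for the constraints of Eq. (3.63)."""
    def row_sum(i):
        v = np.zeros(9)
        v[3*i:3*i+3] = 1.0
        return v
    A = []
    for g in [(-1, 2, 0), (2, 0, -1), (0, -1, 1)]:     # gambles on the S1 marginal
        A.append(-sum(g[i] * row_sum(i) for i in range(3)))
    lo_b, hi_b = [0.0, 0.8, 0.0], [0.2, 1.0, 0.1]      # nut bounds per row
    for i in range(3):
        for j in range(3):
            e = np.zeros(9); e[3*i + j] = 1.0
            A.append(e - hi_b[j] * row_sum(i))
            A.append(lo_b[j] * row_sum(i) - e)
    return np.array(A)

A = constraint_rows()
num = np.zeros(9); num[4] = 1.0          # numerator: p_{2,2}
den = np.zeros(9); den[[1, 4, 7]] = 1.0  # denominator: second column sum

# Charnes-Cooper variables x = (y, t): y = p/(den.p), t = 1/(den.p) > 0
A_ub = np.hstack([A, np.zeros((A.shape[0], 1))])
A_eq = np.array([np.append(np.ones(9), -1.0),   # sum(p) = 1  ->  1.y = t
                 np.append(den, 0.0)])          # den.y = 1
c = np.append(num, 0.0)
lo = linprog(c, A_ub=A_ub, b_ub=np.zeros(A.shape[0]), A_eq=A_eq, b_eq=[0.0, 1.0]).fun
hi = -linprog(-c, A_ub=A_ub, b_ub=np.zeros(A.shape[0]), A_eq=A_eq, b_eq=[0.0, 1.0]).fun
print(round(lo, 2), round(hi, 2))   # 0.12 0.45
```

The substitution is valid here because the denominator is bounded below by 0.8 on the feasible set, so t stays strictly positive.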

Option (2):

Use Eq. (3.56) or (3.60) to calculate the extreme points of the convex set of joint probability distributions. All 192 extreme points are found by this algorithm. Checking all values of the objective functions, one finds that the upper and lower probabilities of T are 0.48 and 0.11, and the upper and lower conditional probabilities are 0.45 and 0.12, the same as the previous results.

Theorem 3-3 can also be used to solve the problem. One may first use Eq. (3.63) to generate all 3×4³ = 192 extreme points, and check that none is duplicated. Therefore, Theorem 3-3 also generates 192 extreme points, which are the same as the extreme points found in Option (2).
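The enumeration in Option (2) can be sketched directly from Theorem 3-3: one extreme marginal on S1 combined with one extreme conditional per row gives 3 × 4³ = 192 candidate joints, and scanning the objective over them recovers the same bounds. The marginal extreme points of Example 3-1 are assumed here:

```python
import itertools
import numpy as np

EXT1 = [(2/7, 1/7, 4/7), (0.5, 0.25, 0.25), (0.2, 0.4, 0.4)]                  # bolts
EXT2 = [(0.0, 0.9, 0.1), (0.0, 1.0, 0.0), (0.1, 0.8, 0.1), (0.2, 0.8, 0.0)]   # nuts

joints = [np.diag(p1) @ np.array(rows)          # row i of P is p1[i] * p2^(i)
          for p1 in EXT1
          for rows in itertools.product(EXT2, repeat=3)]
assert len(joints) == 3 * 4**3                  # 192 extreme joint distributions

pT = [P.trace() for P in joints]                # P(T), T = {(A,A),(B,B),(C,C)}
print(round(min(pT), 2), round(max(pT), 2))     # 0.11 0.48
```

Because the objective is linear, its optimum over the convex hull is attained at one of the enumerated extreme joints, so a simple scan suffices.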

Example 3-3. Consider again the situation and knowledge available in Example 3-1. In addition to the knowledge available in Example 3-1, all we now know about the stochastic mechanism for picking the bolt and nut, i.e. the joint probability measure P, is that: (a) whatever the type of the bolt, the conditional probability that the nut is A (B or C) lies in Ψ2; and (b) whatever the type of the nut, the conditional probability that the bolt is A (B or C) lies in the convex set Ψ1.

We want to write down the optimization problems for finding upper and lower expectations on the joint space, and then calculate the upper and lower probabilities for the case in which the same type of bolt and nut is selected, i.e. event T = {(A, A), (B, B), (C, C)}. Finally, we calculate the conditional upper and lower probabilities that the bolt is Type B given the type of the nut, and contrast these values with the conditional upper and lower probabilities that the nut is Type B given the type of the bolt.

This is a case of epistemic independence, where each experiment is epistemically irrelevant to the other. As a consequence, this example is the symmetric counterpart of Example 3-2 above, where only the first experiment was epistemically irrelevant to the other. The quadratic constraints in Eqs. (3.23) and (3.24) become: Subject to

$$\mathbf{P} = Diag\!\left(\mathbf{p}_1^{(4)}\right)\begin{pmatrix}(\mathbf{p}_2^{(1)})^T\\ (\mathbf{p}_2^{(2)})^T\\ (\mathbf{p}_2^{(3)})^T\end{pmatrix};\qquad
\mathbf{P}^T = Diag\!\left(\mathbf{p}_2^{(4)}\right)\begin{pmatrix}(\mathbf{p}_1^{(1)})^T\\ (\mathbf{p}_1^{(2)})^T\\ (\mathbf{p}_1^{(3)})^T\end{pmatrix}$$

(here $\mathbf{p}_1^{(4)}$ and $\mathbf{p}_2^{(4)}$ play the role of the marginal distributions)

$$(-1,2,0)\cdot\mathbf{p}_1^{(j)} \ge 0;\qquad (2,0,-1)\cdot\mathbf{p}_1^{(j)} \ge 0;\qquad (0,-1,1)\cdot\mathbf{p}_1^{(j)} \ge 0$$
$$0.0 \le (1,0,0)\cdot\mathbf{p}_2^{(j)} \le 0.2$$
$$0.8 \le (0,1,0)\cdot\mathbf{p}_2^{(j)} \le 1.0$$
$$0.0 \le (0,0,1)\cdot\mathbf{p}_2^{(j)} \le 0.1$$
$$\mathbf{1}^T\cdot\mathbf{p}_1^{(j)} = 1;\quad \mathbf{p}_1^{(j)} \ge \mathbf{0};\qquad \mathbf{1}^T\cdot\mathbf{p}_2^{(j)} = 1;\quad \mathbf{p}_2^{(j)} \ge \mathbf{0};\qquad j = 1,2,3,4 \qquad (3.64)$$

Based on the extreme distributions given in Example 3-1, Eq. (3.23) gives the following

quadratic constraints:

Subject to

$$\mathbf{P} = Diag\!\left(\sum_{i=1}^{3} c_1^{i,1}\,\mathbf{p}_1^{EXT_i}\right)
\begin{pmatrix}
\left(\sum_{j=1}^{4} c_2^{j,1}\,\mathbf{p}_2^{EXT_j}\right)^T\\
\left(\sum_{j=1}^{4} c_2^{j,2}\,\mathbf{p}_2^{EXT_j}\right)^T\\
\left(\sum_{j=1}^{4} c_2^{j,3}\,\mathbf{p}_2^{EXT_j}\right)^T
\end{pmatrix}$$

$$\mathbf{P} = \left(\sum_{i=1}^{3} c_1^{i,2}\,\mathbf{p}_1^{EXT_i},\ \sum_{i=1}^{3} c_1^{i,3}\,\mathbf{p}_1^{EXT_i},\ \sum_{i=1}^{3} c_1^{i,4}\,\mathbf{p}_1^{EXT_i}\right)\cdot Diag\!\left(\sum_{j=1}^{4} c_2^{j,4}\,\mathbf{p}_2^{EXT_j}\right)$$

$$\sum_{i=1}^{3} c_1^{i,j} = 1,\ j = 1,2,3,4;\qquad \sum_{i=1}^{4} c_2^{i,j} = 1,\ j = 1,2,3,4;\qquad c_1 \ge 0;\quad c_2 \ge 0 \qquad (3.65)$$

where $\mathbf{p}_1^{EXT_i}$ and $\mathbf{p}_2^{EXT_j}$ are the extreme points of the marginals given in Example 3-1, as listed in Eq. (3.62).

The upper and lower probabilities for the event T = {(A, A), (B, B), (C, C)} are equal to 0.42 and 0.14, respectively. These bounds are tighter than those in Example 3-2 because additional constraints were now added on P, reflecting the fact that the second experiment is epistemically irrelevant to the first one.

The solutions are summarized in Table 3-7. They do not necessarily satisfy stochastic independence. Under epistemic independence, if we learn that the actual value of s2 is s2*, then the probability measure for s1 is again one of the probability measures in Ψ1, but in general not always the same one for different values s2*; and vice versa. Strong independence (dealt with in the next sub-section) imposes that the probability measures be the same.

Table 3-7 Example 3-3: Solutions of the optimization problems (3.64) for lower and upper probabilities for T (same solutions for (3.65)).

Min: Joint P = (0.00, 0.29, 0.00; 0.00, 0.14, 0.00; 0.00, 0.57, 0.00);
  p1^(i) = p1^EXT1 = (0.29, 0.14, 0.57), i = 1,...,4; Marginal on S1: (0.29, 0.14, 0.57);
  p2^(j) = p2^EXT2 = (0, 1, 0), j = 1,...,4; Marginal on S2: (0, 1, 0)

Max: Joint P = (0.02, 0.18, 0.02; 0.01, 0.35, 0.01; 0.01, 0.35, 0.04);
  p1^(i): (0.22, 0.37, 0.41); p1^EXT2 = (0.5, 0.25, 0.25); p1^EXT3 = (0.20, 0.40, 0.40); p1^EXT1 = (0.29, 0.14, 0.57); Marginal on S1: (0.22, 0.37, 0.41);
  p2^(j): (0.11, 0.80, 0.09); (0.03, 0.94, 0.03); (0.03, 0.87, 0.10); (0.05, 0.88, 0.07); Marginal on S2: (0.05, 0.88, 0.07)

Upper and lower conditional probabilities that the first (second) selection is Type B given the type of the second (first) selection are obtained by minimizing and maximizing p2,2/(p1,2 + p2,2 + p3,2) (respectively, p2,2/(p2,1 + p2,2 + p2,3)). The constraints are still given by Eq. (3.64) or (3.65). Results are summarized in Tables 3-8a-d. The solutions show that the upper and lower conditional probabilities that the first (second) selection is Type B given the type of the second (first) selection are equal to the marginal ones, i.e. 0.4 and 0.14 (1 and 0.8), respectively, the same as the provided bounds on the marginals. This is consistent with the definition of epistemic independence, whereby each experiment is epistemically irrelevant to the other one.

Table 3-8a Example 3-3: Solutions of the optimization problems (3.64) for upper and lower conditional probabilities that the bolt is Type B given the type of the nut.

Min: Joint P = (0.03, 0.24, 0.03; 0.01, 0.12, 0.01; 0.04, 0.47, 0.04);
  p1^(i): p1^EXT1 = (0.29, 0.14, 0.57); (0.38, 0.11, 0.52); p1^EXT1 = (0.29, 0.14, 0.57); (0.37, 0.12, 0.51); Marginal on S1: (0.29, 0.14, 0.57);
  p2^(j): (0.1, 0.8, 0.1); (0.07, 0.86, 0.07); (0.07, 0.86, 0.07); (0.07, 0.84, 0.10); Marginal on S2: (0.07, 0.84, 0.10)

Max: Joint P = (0, 0.2, 0; 0, 0.4, 0; 0, 0.4, 0);
  p1^(i) = p1^EXT3 = (0.2, 0.4, 0.4), i = 1,...,4; Marginal on S1: (0.2, 0.4, 0.4);
  p2^(j) = p2^EXT2 = (0, 1, 0), j = 1,...,4; Marginal on S2: (0, 1, 0)

Table 3-8b Example 3-3: Solutions of the optimization problems (3.64) for upper and lower conditional probabilities that the nut is Type B given the type of the bolt.

Min: Joint P = (0.04, 0.23, 0.01; 0.04, 0.18, 0.00; 0.05, 0.44, 0.01);
  p1^(i): (0.28, 0.23, 0.49); (0.33, 0.31, 0.35); (0.27, 0.21, 0.51); (0.32, 0.16, 0.53); Marginal on S1: (0.28, 0.23, 0.49);
  p2^(j): (0.16, 0.82, 0.02); (0.18, 0.80, 0.01); (0.10, 0.89, 0.02); (0.13, 0.85, 0.02); Marginal on S2: (0.13, 0.85, 0.02)

Max: Joint P = (0, 0.2, 0; 0, 0.4, 0; 0, 0.4, 0);
  p1^(i) = p1^EXT3 = (0.2, 0.4, 0.4), i = 1,...,4; Marginal on S1: (0.2, 0.4, 0.4);
  p2^(j) = p2^EXT2 = (0, 1, 0), j = 1,...,4; Marginal on S2: (0, 1, 0)

Table 3-8c Example 3-3: Solutions of the optimization problems (3.65) for upper and lower conditional probabilities that the bolt is Type B given the type of the nut.

Min: Joint P = (0.04, 0.24, 0.02; 0.02, 0.12, 0.01; 0.02, 0.48, 0.05);
  p1^(i): (0.30, 0.15, 0.55); p1^EXT2 = (0.5, 0.25, 0.25); p1^EXT1 = (0.29, 0.14, 0.57); p1^EXT1 = (0.29, 0.14, 0.57); Marginal on S1: (0.30, 0.15, 0.55);
  p2^(j): (0.12, 0.8, 0.08); (0.12, 0.8, 0.08); (0.04, 0.88, 0.08); (0.08, 0.84, 0.10); Marginal on S2: (0.08, 0.84, 0.10)

Max: Joint P = (0.02, 0.17, 0.02; 0.01, 0.35, 0.02; 0.02, 0.35, 0.03);
  p1^(i): (0.22, 0.38, 0.40); (0.45, 0.26, 0.28); p1^EXT3 = (0.2, 0.4, 0.4); (0.27, 0.27, 0.47); Marginal on S1: (0.22, 0.38, 0.40);
  p2^(j): (0.11, 0.8, 0.09); (0.04, 0.91, 0.05); (0.04, 0.88, 0.08); (0.05, 0.87, 0.07); Marginal on S2: (0.05, 0.87, 0.07)

Table 3-8d Example 3-3: Solutions of the optimization problems (3.65) for upper and lower conditional probabilities that the nut is Type B given the type of the bolt.

Min: Joint P = (0.06, 0.32, 0.01; 0.05, 0.21, 0.00; 0.05, 0.27, 0.02);
  p1^(i): (0.39, 0.27, 0.34); (0.40, 0.30, 0.30); (0.40, 0.26, 0.34); (0.29, 0.16, 0.55); Marginal on S1: (0.39, 0.27, 0.34);
  p2^(j): (0.16, 0.82, 0.02); (0.18, 0.80, 0.02); (0.15, 0.81, 0.04); (0.16, 0.81, 0.03); Marginal on S2: (0.16, 0.81, 0.03)

Max: Joint P = (0, 0.2, 0; 0, 0.4, 0; 0, 0.4, 0);
  p1^(i) = p1^EXT3 = (0.2, 0.4, 0.4), i = 1,...,4; Marginal on S1: (0.2, 0.4, 0.4);
  p2^(j) = p2^EXT2 = (0, 1, 0), j = 1,...,4; Marginal on S2: (0, 1, 0)

Let us now redo Example 3-3 under epistemic independence by using the linear optimization

algorithm in Eq.(3.55). Rewrite the constraints as:


Subject to

$$(-1,2,0)\cdot\mathbf{P}\cdot\mathbf{e}^j \ge 0,\quad j=1,2,3$$
$$(2,0,-1)\cdot\mathbf{P}\cdot\mathbf{e}^j \ge 0,\quad j=1,2,3$$
$$(0,-1,1)\cdot\mathbf{P}\cdot\mathbf{e}^j \ge 0,\quad j=1,2,3$$
$$0.0\,(\mathbf{P}\cdot\mathbf{1})_i \le p_{i,1} \le 0.2\,(\mathbf{P}\cdot\mathbf{1})_i,\quad i=1,2,3$$
$$0.8\,(\mathbf{P}\cdot\mathbf{1})_i \le p_{i,2} \le 1.0\,(\mathbf{P}\cdot\mathbf{1})_i,\quad i=1,2,3$$
$$0.0\,(\mathbf{P}\cdot\mathbf{1})_i \le p_{i,3} \le 0.1\,(\mathbf{P}\cdot\mathbf{1})_i,\quad i=1,2,3$$
$$\mathbf{1}^T\cdot\mathbf{P}\cdot\mathbf{1} = 1;\qquad \mathbf{P} \ge \mathbf{0} \qquad (3.66)$$

where $\mathbf{e}^j$ is the j-th unit column vector, so that $\mathbf{P}\cdot\mathbf{e}^j$ is the j-th column of the joint.

Option (1) consists of finding the solutions of the linear optimization problem (3.66). Solutions are detailed in Table 3-9 and Table 3-10. Here the upper and lower probabilities of T are 0.42 and 0.14, and the upper and lower conditional probabilities are 0.4 and 0.14, still the same as the previous results in Example 3-3.

Table 3-9 Example 3-3: Solutions of the optimization problems (3.66) for lower and upper probabilities for T.

Min: Joint P = (0.00, 0.29, 0.00; 0.00, 0.14, 0.00; 0.00, 0.57, 0.00);
  Marginal on S1: p1^EXT1 = (0.29, 0.14, 0.57); Marginal on S2: (0, 1, 0)

Max: Joint P = (0.02, 0.18, 0.02; 0.01, 0.35, 0.01; 0.01, 0.35, 0.04);
  Marginal on S1: (0.22, 0.37, 0.41); Marginal on S2: (0.05, 0.88, 0.07)

Table 3-10 Example 3-3: Solutions of the optimization problems (3.66) for upper and lower conditional probabilities that the bolt is Type B given the type of the nut.

Min: Joint P = (0.00, 0.26, 0.03; 0.00, 0.13, 0.01; 0.00, 0.52, 0.05);
  Marginal on S1: p1^EXT1 = (0.29, 0.14, 0.57); Marginal on S2: (0, 0.92, 0.08)

Max: Joint P = (0.02, 0.18, 0.02; 0.01, 0.35, 0.01; 0.01, 0.35, 0.04);
  Marginal on S1: (0.22, 0.37, 0.41); Marginal on S2: (0.05, 0.88, 0.07)

Option (2) consists of finding the extreme points of Ψ through the linear constraints in Eq. (3.66) and then calculating the objective functions at these extreme points. A total of 80 extreme points are found and listed in Table 3-11. By calculating the values of the objective functions, one finds the same extreme values as in the previous results, i.e. the upper and lower probabilities of T are 0.42 and 0.14, respectively, and the upper and lower conditional probabilities are 0.40 and 0.14, respectively.

As explained before, in the epistemically independent case, we cannot find all extreme distributions through Theorem 3-3 or Theorem 3-4.

Table 3-11 Example 3-3: Extreme distributions of Ψ for the linear programming problem (3.66)

Extreme point of Ψ | p1,1 p1,2 p1,3 p2,1 p2,2 p2,3 p3,1 p3,2 p3,3 | p1,1+p2,2+p3,3 | p2,2/(p1,2+p2,2+p3,2)

1  0.00 0.29 0.00 0.00 0.14 0.00 0.00 0.57 0.00 | 0.14 | 0.14
2  0.00 0.50 0.00 0.00 0.25 0.00 0.00 0.25 0.00 | 0.25 | 0.25
3  0.00 0.20 0.00 0.00 0.40 0.00 0.00 0.40 0.00 | 0.40 | 0.40
4  0.06 0.23 0.00 0.03 0.11 0.00 0.11 0.46 0.00 | 0.17 | 0.14
5  0.00 0.26 0.03 0.00 0.13 0.01 0.00 0.51 0.06 | 0.19 | 0.14
6  0.10 0.40 0.00 0.05 0.20 0.00 0.05 0.20 0.00 | 0.30 | 0.25
7  0.04 0.16 0.00 0.08 0.32 0.00 0.08 0.32 0.00 | 0.36 | 0.40
8  0.03 0.43 0.00 0.05 0.22 0.00 0.05 0.22 0.00 | 0.24 | 0.25
9  0.06 0.25 0.00 0.03 0.13 0.00 0.03 0.50 0.00 | 0.19 | 0.14
10 0.02 0.26 0.00 0.03 0.13 0.00 0.03 0.53 0.00 | 0.15 | 0.14
11 0.00 0.45 0.05 0.00 0.23 0.03 0.00 0.23 0.03 | 0.25 | 0.25
12 0.04 0.17 0.00 0.02 0.34 0.00 0.09 0.34 0.00 | 0.38 | 0.40
13 0.03 0.45 0.00 0.01 0.23 0.00 0.06 0.23 0.00 | 0.25 | 0.25
14 0.00 0.18 0.02 0.00 0.36 0.04 0.00 0.36 0.04 | 0.40 | 0.40
15 0.00 0.19 0.02 0.00 0.37 0.01 0.00 0.37 0.04 | 0.41 | 0.40
16 0.00 0.47 0.01 0.00 0.23 0.03 0.00 0.23 0.03 | 0.26 | 0.25
17 0.00 0.48 0.01 0.00 0.24 0.01 0.00 0.24 0.03 | 0.26 | 0.25
18 0.05 0.18 0.00 0.02 0.36 0.00 0.02 0.36 0.00 | 0.41 | 0.40
19 0.00 0.27 0.03 0.00 0.13 0.02 0.00 0.54 0.02 | 0.15 | 0.14
20 0.00 0.28 0.01 0.00 0.14 0.02 0.00 0.55 0.02 | 0.15 | 0.14
21 0.00 0.19 0.02 0.00 0.38 0.01 0.00 0.38 0.01 | 0.39 | 0.40
22 0.03 0.23 0.03 0.01 0.11 0.01 0.06 0.46 0.06 | 0.20 | 0.14
23 0.02 0.19 0.02 0.03 0.24 0.03 0.05 0.37 0.05 | 0.31 | 0.30
24 0.05 0.40 0.05 0.03 0.20 0.03 0.03 0.20 0.03 | 0.28 | 0.25
25 0.06 0.37 0.03 0.03 0.18 0.02 0.03 0.25 0.03 | 0.28 | 0.23
26 0.05 0.37 0.05 0.02 0.18 0.02 0.02 0.26 0.03 | 0.26 | 0.23
27 0.02 0.16 0.02 0.04 0.32 0.04 0.04 0.32 0.04 | 0.38 | 0.40
28 0.01 0.42 0.05 0.03 0.21 0.03 0.03 0.21 0.03 | 0.25 | 0.25
29 0.02 0.19 0.02 0.05 0.24 0.01 0.05 0.38 0.05 | 0.31 | 0.29
30 0.01 0.41 0.05 0.03 0.20 0.02 0.03 0.22 0.03 | 0.25 | 0.24
31 0.03 0.24 0.03 0.02 0.12 0.02 0.02 0.48 0.06 | 0.21 | 0.14
32 0.01 0.25 0.03 0.02 0.12 0.02 0.02 0.49 0.06 | 0.19 | 0.14
33 0.01 0.25 0.03 0.02 0.12 0.01 0.02 0.49 0.06 | 0.19 | 0.14
34 0.06 0.31 0.02 0.03 0.25 0.03 0.03 0.25 0.03 | 0.34 | 0.31
35 0.07 0.35 0.02 0.04 0.17 0.01 0.04 0.28 0.04 | 0.28 | 0.22
36 0.05 0.42 0.01 0.03 0.21 0.03 0.03 0.21 0.03 | 0.29 | 0.25
37 0.01 0.43 0.01 0.03 0.22 0.03 0.03 0.22 0.03 | 0.26 | 0.25
38 0.02 0.37 0.02 0.04 0.18 0.01 0.04 0.29 0.04 | 0.24 | 0.22
39 0.03 0.24 0.03 0.02 0.13 0.02 0.02 0.47 0.05 | 0.21 | 0.15
40 0.03 0.24 0.03 0.02 0.12 0.01 0.02 0.48 0.06 | 0.21 | 0.14
41 0.03 0.24 0.03 0.02 0.12 0.02 0.02 0.48 0.06 | 0.21 | 0.14
42 0.02 0.17 0.02 0.01 0.33 0.04 0.04 0.33 0.04 | 0.39 | 0.40
43 0.01 0.43 0.05 0.01 0.21 0.02 0.03 0.21 0.03 | 0.25 | 0.25
44 0.02 0.17 0.02 0.01 0.34 0.01 0.04 0.34 0.04 | 0.40 | 0.40
45 0.01 0.44 0.01 0.01 0.22 0.03 0.03 0.22 0.03 | 0.26 | 0.25
46 0.01 0.45 0.01 0.01 0.23 0.01 0.03 0.23 0.03 | 0.27 | 0.25
47 0.03 0.23 0.03 0.06 0.29 0.01 0.06 0.29 0.01 | 0.33 | 0.36
48 0.02 0.17 0.02 0.04 0.33 0.01 0.04 0.33 0.04 | 0.39 | 0.40
49 0.01 0.42 0.05 0.03 0.21 0.02 0.03 0.21 0.02 | 0.25 | 0.25
50 0.01 0.42 0.05 0.03 0.21 0.02 0.03 0.21 0.03 | 0.25 | 0.25
51 0.02 0.17 0.02 0.01 0.34 0.04 0.01 0.34 0.04 | 0.40 | 0.40
52 0.02 0.18 0.02 0.01 0.35 0.01 0.01 0.35 0.04 | 0.42 | 0.40
53 0.06 0.32 0.02 0.03 0.25 0.01 0.03 0.25 0.03 | 0.35 | 0.31
54 0.05 0.42 0.01 0.03 0.21 0.01 0.03 0.21 0.03 | 0.29 | 0.25
55 0.01 0.44 0.01 0.03 0.22 0.01 0.03 0.22 0.03 | 0.26 | 0.25
56 0.02 0.17 0.02 0.01 0.34 0.04 0.01 0.34 0.04 | 0.40 | 0.40
57 0.02 0.18 0.02 0.01 0.35 0.01 0.01 0.35 0.04 | 0.42 | 0.40
58 0.04 0.29 0.04 0.02 0.15 0.02 0.07 0.36 0.02 | 0.20 | 0.18
59 0.04 0.21 0.01 0.02 0.17 0.02 0.08 0.42 0.02 | 0.23 | 0.21
60 0.03 0.24 0.03 0.02 0.12 0.02 0.06 0.48 0.02 | 0.16 | 0.14
61 0.04 0.30 0.01 0.02 0.15 0.02 0.08 0.37 0.02 | 0.21 | 0.18
62 0.04 0.20 0.01 0.02 0.20 0.03 0.08 0.40 0.03 | 0.27 | 0.25
63 0.03 0.24 0.01 0.02 0.12 0.02 0.06 0.49 0.02 | 0.17 | 0.14
64 0.03 0.25 0.03 0.02 0.13 0.02 0.02 0.50 0.02 | 0.17 | 0.14
65 0.01 0.26 0.03 0.02 0.13 0.02 0.02 0.51 0.02 | 0.15 | 0.14
66 0.02 0.20 0.02 0.05 0.24 0.01 0.05 0.39 0.01 | 0.28 | 0.29
67 0.01 0.26 0.03 0.02 0.13 0.02 0.02 0.51 0.02 | 0.15 | 0.14
68 0.05 0.23 0.01 0.02 0.18 0.02 0.02 0.45 0.02 | 0.25 | 0.21
69 0.03 0.26 0.01 0.02 0.13 0.02 0.02 0.51 0.02 | 0.18 | 0.14
70 0.01 0.26 0.01 0.02 0.13 0.02 0.02 0.53 0.02 | 0.16 | 0.14
71 0.04 0.29 0.04 0.03 0.26 0.03 0.03 0.26 0.03 | 0.33 | 0.32
72 0.02 0.18 0.02 0.01 0.33 0.04 0.04 0.33 0.04 | 0.38 | 0.39
73 0.01 0.43 0.05 0.01 0.21 0.02 0.03 0.21 0.02 | 0.25 | 0.25
74 0.03 0.24 0.03 0.02 0.30 0.02 0.06 0.30 0.02 | 0.34 | 0.36
75 0.02 0.17 0.02 0.01 0.33 0.04 0.05 0.33 0.04 | 0.39 | 0.40
76 0.02 0.17 0.02 0.01 0.33 0.04 0.04 0.33 0.04 | 0.39 | 0.40
77 0.02 0.18 0.02 0.01 0.35 0.01 0.04 0.35 0.01 | 0.39 | 0.40
78 0.02 0.44 0.01 0.01 0.22 0.03 0.03 0.22 0.03 | 0.26 | 0.25
79 0.02 0.18 0.02 0.01 0.36 0.01 0.01 0.36 0.01 | 0.40 | 0.40
80 0.02 0.17 0.02 0.04 0.34 0.01 0.04 0.34 0.01 | 0.37 | 0.40

Min | 0.14 | 0.14
Max | 0.42 | 0.40

3.3.3.2 Conditional Epistemic Irrelevance/Independence

To deal with problems under conditional epistemic irrelevance/independence, Campos and Cozman (2007) proposed an algorithm to compute lower and upper expectations within a 'Credal network', which consists of 'parent' nodes and their 'descendant' nodes connected by directed arrows, as shown in Figure 3-3. Campos and Cozman's algorithm can deal with the unconditional epistemic irrelevance/independence problem, but it is more cumbersome than the algorithm in Section 3.3.3.1. As for conditional epistemic irrelevance/independence, Campos and Cozman's algorithm is designed for Credal networks. In this section, we expand Campos and Cozman's algorithm to a general form, which can be applied to any problem with conditional epistemic irrelevance/independence.

Figure 3-3 An example of Credal network

Let S1, S2, S3 be three random variables. Consider the case in which S1 is epistemically irrelevant to S2 given S3; Eq. (3.25) then reads as follows:

$$\mathbf{P}(S_2\,|\,s_1\times s_3^i) = \mathbf{q}(S_2\,|\,s_3^i) = \frac{\mathbf{q}(S_2\times s_3^i)}{\mathbf{1}^T\cdot\mathbf{q}(S_2\times s_3^i)} \qquad (3.67)$$

where $\left(\mathbf{q}(S_2\times s_3^1),\ldots,\mathbf{q}(S_2\times s_3^{n_3})\right) = \mathbf{q}(S_2\times S_3)$, and $\mathbf{q}(S_2\times S_3)$ is a valid measure over S2, S3, defined by

$$\mathbf{q}(S_2\times S_3) \in \Psi(S_2\times S_3) \qquad (3.68)$$


The reason why we do not write the constraints on $\mathbf{q}(S_2|s_3^i)$ in terms of $\mathbf{P}(S_2|s_1\times s_3^i)$ is that the set $\Psi(S_2|s_3^i)$ is bounded by constraints on both $\mathbf{q}(S_2|s_3^i)$ and $\mathbf{q}(S_2\times S_3)$, and the constraints on $\mathbf{q}(S_2\times S_3)$ cannot be written in terms of $\mathbf{P}(S_2|s_1\times s_3^i)$. Therefore, we create the new optimization variables $\mathbf{q}(S_2\times S_3)$, and all constraints on the sets $\Psi(S_2\times S_3)$ and $\Psi(S_2|S_3)$ can be expressed in terms of $\mathbf{q}(S_2\times S_3)$.

If S1 is epistemically independent of S2 given S3, the constraints are obtained as follows:

$$\mathbf{P}(S_2\,|\,s_1\times s_3^i) = \mathbf{q}(S_2\,|\,s_3^i) = \frac{\mathbf{q}(S_2\times s_3^i)}{\mathbf{1}^T\cdot\mathbf{q}(S_2\times s_3^i)};\qquad
\mathbf{P}(S_1\,|\,s_2\times s_3^i) = \mathbf{q}(S_1\,|\,s_3^i) = \frac{\mathbf{q}(S_1\times s_3^i)}{\mathbf{1}^T\cdot\mathbf{q}(S_1\times s_3^i)} \qquad (3.69)$$

where $\left(\mathbf{q}(S_2\times s_3^1),\ldots,\mathbf{q}(S_2\times s_3^{n_3})\right) = \mathbf{q}(S_2\times S_3)$ and $\left(\mathbf{q}(S_1\times s_3^1),\ldots,\mathbf{q}(S_1\times s_3^{n_3})\right) = \mathbf{q}(S_1\times S_3)$. $\mathbf{q}(S_1\times S_3)$ and $\mathbf{q}(S_2\times S_3)$ are defined by

$$\mathbf{q}(S_1\times S_3)\in\Psi(S_1\times S_3);\qquad \mathbf{q}(S_2\times S_3)\in\Psi(S_2\times S_3) \qquad (3.70)$$

The algorithm is illustrated by the following example, adapted from Campos and Cozman (2007).

Example 3-4. Let X1, X2 and X3 be three binary random variables. Variable Xi takes values $i$ and $\bar{i}$. The following information is available on the probability measure: 0.1 ≤ P(1) ≤ 0.3; 0.2 ≤ P(2|1) ≤ 0.5, 0.6 ≤ P(2|$\bar{1}$) ≤ 0.7; 0.7 ≤ P(3|2) ≤ 0.8, 0.3 ≤ P(3|$\bar{2}$) ≤ 0.4. Given X2, X1 is epistemically independent of X3. We are interested in the upper and lower values of P(1,2,3) + P(1,$\bar{2}$,$\bar{3}$) + P($\bar{1}$,2,$\bar{3}$).

Let P be the joint probability measure over X1, X2, X3; its eight elements are defined by $p_1 = P(1,2,3)$, $p_2 = P(1,2,\bar{3})$, $p_3 = P(1,\bar{2},3)$, $p_4 = P(1,\bar{2},\bar{3})$, $p_5 = P(\bar{1},2,3)$, $p_6 = P(\bar{1},2,\bar{3})$, $p_7 = P(\bar{1},\bar{2},3)$, $p_8 = P(\bar{1},\bar{2},\bar{3})$. Let $\mathbf{q}_3(x_1,x_2)$ and $\mathbf{q}_{\bar{3}}(x_1,x_2)$ be two probability measures over X1, X2 for X3 = 3 and X3 = $\bar{3}$, respectively, and $\mathbf{q}_1(x_2,x_3)$ and $\mathbf{q}_{\bar{1}}(x_2,x_3)$ be two probability measures over X2, X3 for X1 = 1 and X1 = $\bar{1}$, respectively. Then the complete problem can be written as

Minimize $p_1 + p_4 + p_6$

Subject to

$$0.1 \le p_1+p_2+p_3+p_4 \le 0.3$$
$$0.2 \le \frac{p_1+p_2}{p_1+p_2+p_3+p_4} \le 0.5;\qquad 0.6 \le \frac{p_5+p_6}{p_5+p_6+p_7+p_8} \le 0.7$$
$$0.7 \le \frac{p_1+p_5}{p_1+p_2+p_5+p_6} \le 0.8;\qquad 0.3 \le \frac{p_3+p_7}{p_3+p_4+p_7+p_8} \le 0.4$$
$$\frac{p_1}{p_1+p_2} = \frac{q_1(2,3)}{q_1(2,3)+q_1(2,\bar{3})};\qquad \frac{p_3}{p_3+p_4} = \frac{q_1(\bar{2},3)}{q_1(\bar{2},3)+q_1(\bar{2},\bar{3})}$$
$$\frac{p_5}{p_5+p_6} = \frac{q_{\bar{1}}(2,3)}{q_{\bar{1}}(2,3)+q_{\bar{1}}(2,\bar{3})};\qquad \frac{p_7}{p_7+p_8} = \frac{q_{\bar{1}}(\bar{2},3)}{q_{\bar{1}}(\bar{2},3)+q_{\bar{1}}(\bar{2},\bar{3})}$$
$$\frac{p_1}{p_1+p_5} = \frac{q_3(1,2)}{q_3(1,2)+q_3(\bar{1},2)};\qquad \frac{p_3}{p_3+p_7} = \frac{q_3(1,\bar{2})}{q_3(1,\bar{2})+q_3(\bar{1},\bar{2})}$$
$$\frac{p_2}{p_2+p_6} = \frac{q_{\bar{3}}(1,2)}{q_{\bar{3}}(1,2)+q_{\bar{3}}(\bar{1},2)};\qquad \frac{p_4}{p_4+p_8} = \frac{q_{\bar{3}}(1,\bar{2})}{q_{\bar{3}}(1,\bar{2})+q_{\bar{3}}(\bar{1},\bar{2})}$$
$$0.1 \le q_3(1,2)+q_3(1,\bar{2}) \le 0.3;\qquad 0.1 \le q_{\bar{3}}(1,2)+q_{\bar{3}}(1,\bar{2}) \le 0.3$$
$$0.2 \le \frac{q_3(1,2)}{q_3(1,2)+q_3(1,\bar{2})} \le 0.5;\qquad 0.2 \le \frac{q_{\bar{3}}(1,2)}{q_{\bar{3}}(1,2)+q_{\bar{3}}(1,\bar{2})} \le 0.5$$
$$0.6 \le \frac{q_3(\bar{1},2)}{q_3(\bar{1},2)+q_3(\bar{1},\bar{2})} \le 0.7;\qquad 0.6 \le \frac{q_{\bar{3}}(\bar{1},2)}{q_{\bar{3}}(\bar{1},2)+q_{\bar{3}}(\bar{1},\bar{2})} \le 0.7$$
$$0.7 \le \frac{q_1(2,3)}{q_1(2,3)+q_1(2,\bar{3})} \le 0.8;\qquad 0.7 \le \frac{q_{\bar{1}}(2,3)}{q_{\bar{1}}(2,3)+q_{\bar{1}}(2,\bar{3})} \le 0.8$$
$$0.3 \le \frac{q_1(\bar{2},3)}{q_1(\bar{2},3)+q_1(\bar{2},\bar{3})} \le 0.4;\qquad 0.3 \le \frac{q_{\bar{1}}(\bar{2},3)}{q_{\bar{1}}(\bar{2},3)+q_{\bar{1}}(\bar{2},\bar{3})} \le 0.4$$
$$\mathbf{q}_3 \ge \mathbf{0},\ \textstyle\sum q_3 = 1;\quad \mathbf{q}_{\bar{3}} \ge \mathbf{0},\ \textstyle\sum q_{\bar{3}} = 1;\quad \mathbf{q}_1 \ge \mathbf{0},\ \textstyle\sum q_1 = 1;\quad \mathbf{q}_{\bar{1}} \ge \mathbf{0},\ \textstyle\sum q_{\bar{1}} = 1$$
$$\mathbf{p} \ge \mathbf{0},\ \textstyle\sum_i p_i = 1 \qquad (3.71)$$


Since all probabilities are positive, the fraction constraints in Eq. (3.71) can be re-written as linear ones by using the techniques introduced in Section 3.3.3.1. The lower and upper probabilities of P(1,2,3) + P(1,$\bar{2}$,$\bar{3}$) + P($\bar{1}$,2,$\bar{3}$) are equal to 0.346 and 0.503, respectively. Solutions are summarized in Table 3-12.

Table 3-12 Example: Solutions of the optimization problems (3.71) for the lower and upper probabilities of $P(1,2,3)+P(1,\hat2,\hat3)+P(\hat1,2,\hat3)$.

Solution for: $p_1$  $p_2$  $p_3$  $p_4$  $p_5$  $p_6$  $p_7$  $p_8$

Min  0.014 0.006 0.032 0.048 0.432 0.108 0.113 0.247
Max  0.108 0.027 0.050 0.116 0.441 0.189 0.028 0.042

Solution for: $q_{1,1}^{x_2,x_3}$  $q_{1,\hat1}^{x_2,x_3}$  $q_{3,3}^{x_1,x_2}$  $q_{3,\hat3}^{x_1,x_2}$

Min  $\begin{pmatrix}0.15&0.15\\0.62&0.08\end{pmatrix}$  $\begin{pmatrix}0.15&0.15\\0.62&0.08\end{pmatrix}$  $\begin{pmatrix}0.09&0.21\\0.63&0.07\end{pmatrix}$  $\begin{pmatrix}0.09&0.21\\0.63&0.07\end{pmatrix}$

Max  $\begin{pmatrix}0.02&0.08\\0.62&0.28\end{pmatrix}$  $\begin{pmatrix}0.02&0.08\\0.62&0.28\end{pmatrix}$  $\begin{pmatrix}0.03&0.07\\0.54&0.36\end{pmatrix}$  $\begin{pmatrix}0.03&0.07\\0.54&0.36\end{pmatrix}$
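The linearization referred to above can be sketched in code. The `ratio_rows` helper is hypothetical (not from the dissertation): each fractional constraint lo ≤ num(p)/den(p) ≤ hi with den(p) > 0 is replaced by the two linear inequalities lo·den(p) − num(p) ≤ 0 and num(p) − hi·den(p) ≤ 0, after which a standard LP solver applies. For illustration it is run on the five probability bounds of this example alone, i.e., dropping the independence constraints, so it yields only a relaxed, unknown-interaction lower bound:

```python
import numpy as np
from scipy.optimize import linprog

# A fractional constraint  lo <= num(p)/den(p) <= hi  with den(p) > 0 is
# equivalent to two linear constraints:
#   lo*den(p) - num(p) <= 0   and   num(p) - hi*den(p) <= 0.
# Here the five probability bounds of the example are linearized this way,
# and the lower bound of p1 + p4 + p6 is computed by LP WITHOUT any
# independence constraint, i.e., an unknown-interaction relaxation.

def ratio_rows(num_idx, den_idx, lo, hi, n=8):
    """Two linear inequality rows encoding lo <= sum(num)/sum(den) <= hi."""
    num, den = np.zeros(n), np.zeros(n)
    num[num_idx] = 1.0
    den[den_idx] = 1.0
    return [lo * den - num, num - hi * den]   # each row r must satisfy r @ p <= 0

A_ub, b_ub = [], []
s = np.array([1, 1, 1, 1, 0, 0, 0, 0], float)
A_ub += [-s, s]
b_ub += [-0.1, 0.3]                           # 0.1 <= p1+p2+p3+p4 <= 0.3
for args in [([0, 1], [0, 1, 2, 3], 0.2, 0.5),
             ([4, 5], [4, 5, 6, 7], 0.6, 0.7),
             ([0, 4], [0, 1, 4, 5], 0.7, 0.8),
             ([2, 6], [2, 3, 6, 7], 0.3, 0.4)]:
    A_ub += ratio_rows(*args)
    b_ub += [0.0, 0.0]

c = np.array([1, 0, 0, 1, 0, 1, 0, 0], float)  # objective: p1 + p4 + p6
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.ones((1, 8)), b_eq=[1.0], bounds=[(0, 1)] * 8)
lower_bound = res.fun
```

Any solution satisfying an independence assumption also satisfies these linear constraints, so this relaxed minimum can only be smaller than (or equal to) the minima reported later under independence assumptions.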

If the available information is given in terms of the upper and lower probabilities of P(2), P(1|2), $P(1|\hat2)$, P(3|2), and $P(3|\hat2)$, we can alternatively use the algorithms described in Section 3.3.3.1 to find all extreme distributions of the sets $\Psi(X_1, X_3 \mid 2)$ and $\Psi(X_1, X_3 \mid \hat2)$, then obtain the extreme joint distributions on X1, X2 and X3 by using Theorem 3-5, and finally find the min/max values of the objective function on the extreme joint distributions on X1, X2 and X3.

3.3.3.3 Strong Independence

The optimization problems under strong independence in Eqs. (3.31) and (3.32)

are NP-hard. To reduce the computational effort, Theorem 3-5 below addresses the

Page 99: Copyright by Xiaomin You 2010

82

calculation of the extreme joint distributions and measures of the set of joint distributions, ΨS. When strong independence is assumed, ΨS is not a convex set, as illustrated in Figure 3-4 and explained in Theorem 3-6.

Figure 3-4 Non-convex set of joint distributions, ΨS, under strong independence

Theorem 3-5 Under strong independence, the set of extreme joint distributions (measures), EXT, is the set of product distributions (measures), each taken from the extreme distributions (measures) of the marginals, $EXT_i$:

$EXT = \left\{ \mathbf{P} = \mathbf{p}_1 \otimes \mathbf{p}_2 :\ \mathbf{p}_1 \in EXT_1,\ \mathbf{p}_2 \in EXT_2 \right\}$, i.e. (3.72)

$EXT = \left\{ \mathbf{P}_{EXT}^{\xi,\eta} = \mathbf{p}_{1EXT}^{\xi} \left(\mathbf{p}_{2EXT}^{\eta}\right)^T :\ \mathbf{p}_{1EXT}^{\xi} \in EXT_1,\ \mathbf{p}_{2EXT}^{\eta} \in EXT_2 \right\}$ (3.73)

Proof: Let the extreme points of the convex sets of probability distributions on S1 and S2 be $\mathbf{p}_{1EXT}^{\xi}$, $\xi = 1,...,\xi_1$, and $\mathbf{p}_{2EXT}^{\xi}$, $\xi = 1,...,\xi_2$, respectively. Any $\mathbf{p}_1$ and $\mathbf{p}_2$ can be written as a linear combination of extreme points:

$\mathbf{p}_1 = \left(\mathbf{p}_{1EXT}^{1} \cdots \mathbf{p}_{1EXT}^{\xi} \cdots \mathbf{p}_{1EXT}^{\xi_1}\right) \left(\lambda_1^1 \cdots \lambda_1^\xi \cdots \lambda_1^{\xi_1}\right)^T$;

$\mathbf{p}_2 = \left(\mathbf{p}_{2EXT}^{1} \cdots \mathbf{p}_{2EXT}^{\xi} \cdots \mathbf{p}_{2EXT}^{\xi_2}\right) \left(\lambda_2^1 \cdots \lambda_2^\xi \cdots \lambda_2^{\xi_2}\right)^T$

$0 \le \lambda_1^\xi \le 1,\ \xi = 1,...,\xi_1;\ \sum_{\xi=1}^{\xi_1} \lambda_1^\xi = 1$

$0 \le \lambda_2^\xi \le 1,\ \xi = 1,...,\xi_2;\ \sum_{\xi=1}^{\xi_2} \lambda_2^\xi = 1$

(3.74)



Since $\mathbf{P} = \mathbf{p}_1 \otimes \mathbf{p}_2$, any joint probability distribution may be written as follows:

$\mathbf{P} = \mathbf{p}_1 \mathbf{p}_2^T = \left(\mathbf{p}_{1EXT}^{1} \cdots \mathbf{p}_{1EXT}^{\xi_1}\right) \begin{pmatrix}\lambda_1^1\\ \vdots\\ \lambda_1^{\xi_1}\end{pmatrix} \left(\lambda_2^1 \cdots \lambda_2^{\xi_2}\right) \begin{pmatrix}\left(\mathbf{p}_{2EXT}^{1}\right)^T\\ \vdots\\ \left(\mathbf{p}_{2EXT}^{\xi_2}\right)^T\end{pmatrix}$

(3.75)

Extreme points of $\mathbf{P}$ are achieved if and only if

$\lambda_1^\xi = \begin{cases} 1, & \xi = m_1 \\ 0, & \xi \ne m_1 \end{cases}$ and $\lambda_2^\xi = \begin{cases} 1, & \xi = m_2 \\ 0, & \xi \ne m_2 \end{cases}$, $\quad m_1 = 1,...,\xi_1$, $m_2 = 1,...,\xi_2$.

(3.76)

Therefore, $EXT = \left\{ \mathbf{P}_{EXT}^{\xi,\eta} = \mathbf{p}_{1EXT}^{\xi} \left(\mathbf{p}_{2EXT}^{\eta}\right)^T :\ \mathbf{p}_{1EXT}^{\xi} \in EXT_1,\ \mathbf{p}_{2EXT}^{\eta} \in EXT_2 \right\}$. ◊

Theorem 3-6 Under strong independence, the set of joint distributions (measures), ΨS, is not convex.

Proof: Consider a counterexample with $\xi_1 = 3$, $\xi_2 = 2$ in Eq. (3.73), and let us proceed by contradiction by assuming that $\Psi_S$ is a convex set. Let $\mathbf{P}^1$ and $\mathbf{P}^2$ be two joint distributions in $\Psi_S$ such that $\mathbf{P}^1$ is generated by taking $(\lambda_1^1, \lambda_1^2, \lambda_1^3) = (1,0,0)$ and $(\lambda_2^1, \lambda_2^2) = (0,1)$, and thus

$\left(\lambda_1^1\ \lambda_1^2\ \lambda_1^3\right)^T \left(\lambda_2^1\ \lambda_2^2\right) = \begin{pmatrix} 0 & 1 \\ 0 & 0 \\ 0 & 0 \end{pmatrix}$,

and $\mathbf{P}^2$ is generated by taking $(\lambda_1^1, \lambda_1^2, \lambda_1^3) = (0,0,1)$ and $(\lambda_2^1, \lambda_2^2) = (1,0)$, and thus

$\left(\lambda_1^1\ \lambda_1^2\ \lambda_1^3\right)^T \left(\lambda_2^1\ \lambda_2^2\right) = \begin{pmatrix} 0 & 0 \\ 0 & 0 \\ 1 & 0 \end{pmatrix}$.

The mid-point, $\mathbf{P}^m$, between $\mathbf{P}^1$ and $\mathbf{P}^2$ is $(1/2)\left(\mathbf{P}^1 + \mathbf{P}^2\right)$. Consequently, $\left(\lambda_1^1\ \lambda_1^2\ \lambda_1^3\right)^T \left(\lambda_2^1\ \lambda_2^2\right)$ for $\mathbf{P}^m$ is equal to

$\begin{pmatrix} 0 & 1/2 \\ 0 & 0 \\ 1/2 & 0 \end{pmatrix}$,

which could be written in the form $\left(\lambda_1^1, \lambda_1^2, \lambda_1^3\right)^T \left(\lambda_2^1, \lambda_2^2\right)$ based on the assumption that $\mathbf{P}^m$ is in the convex set $\Psi_S$. Thus,

$\left(\lambda_1^1, \lambda_1^2, \lambda_1^3\right)^T \left(\lambda_2^1, \lambda_2^2\right) = \begin{pmatrix} \lambda_1^1\lambda_2^1 & \lambda_1^1\lambda_2^2 \\ \lambda_1^2\lambda_2^1 & \lambda_1^2\lambda_2^2 \\ \lambda_1^3\lambda_2^1 & \lambda_1^3\lambda_2^2 \end{pmatrix} = \begin{pmatrix} 0 & 1/2 \\ 0 & 0 \\ 1/2 & 0 \end{pmatrix}$,

subject to $\lambda_i^j \ge 0$, $\sum_j \lambda_i^j = 1$. Since $\lambda_1^1\lambda_2^2 = 1/2$ and $\lambda_1^3\lambda_2^1 = 1/2$, $\lambda_1^1\lambda_2^2\lambda_1^3\lambda_2^1 = 1/4$. However, we also have $\lambda_1^1\lambda_2^1 = 0$ and $\lambda_1^3\lambda_2^2 = 0$, so $\lambda_1^1\lambda_2^1\lambda_1^3\lambda_2^2 = \lambda_1^1\lambda_2^2\lambda_1^3\lambda_2^1 = 0$, which contradicts the previous result $\lambda_1^1\lambda_2^2\lambda_1^3\lambda_2^1 = 1/4$. Therefore, there are no $\lambda_i^j$ that satisfy all requirements, i.e., $\mathbf{P}^m \ne \mathbf{p}_1 \otimes \mathbf{p}_2$. Therefore, $\Psi_S$ is not convex. ◊
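Theorem 3-6's counterexample can also be checked numerically: an outer product of two probability vectors has matrix rank 1, while the mid-point constructed in the proof has rank 2 and therefore cannot be a product distribution. The degenerate marginals below are illustrative choices (not from the text) that reproduce the proof's matrix [[0, 1/2], [0, 0], [1/2, 0]]:

```python
import numpy as np

# A product distribution p1 (x) p2 is a rank-1 matrix, but the mid-point
# built in the proof of Theorem 3-6 has rank 2, so it cannot be written as
# a product.  The marginals below are illustrative degenerate choices.
P1 = np.outer([1.0, 0.0, 0.0], [0.0, 1.0])   # extreme 1 of marginal 1 with extreme 2 of marginal 2
P2 = np.outer([0.0, 0.0, 1.0], [1.0, 0.0])   # extreme 3 of marginal 1 with extreme 1 of marginal 2
Pm = 0.5 * (P1 + P2)                         # mid-point of the two joints

rank = np.linalg.matrix_rank(Pm)             # 2, whereas any product has rank 1
```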

When Option (2) is used, Eq. (3.73) provides an efficient algorithm to find all extreme distributions of ΨS, as compared to finding the extreme distributions by setting up the quadratic constraints based on the definition of strong independence in Eq. (3.29).

Although ΨS is not convex, Theorem 3-1 ensures that maxima and minima of linear functions are always achieved at extreme points of ΨS. There are several options for carrying out calculations on the joint space (Bernardini and Tonon 2010):

1. If extreme distributions are available for the marginals (Eq. (3.3)), first calculate all $\xi_1 \times \xi_2$ extreme joint distributions $\mathbf{P}_{EXT}^{\xi,\eta}$ as indicated in Eq. (3.55), and then calculate the objective function (e.g., prevision or probability) on all extreme joint distributions; this is much easier than solving the non-convex and NP-hard optimization problems that follow.

2. If expectation (prevision) bounds are given on the marginals (Eq. (3.3)), use these constraints to directly obtain quadratic constraints in the $n_1 \times n_2$ components $p_{i,j}$ and the $n_1 + n_2$ components $P_i\!\left(s_k^i\right)$. This option is obviously less efficient than the first one:

Subject to:

$\mathbf{P} - \mathbf{p}_1 \mathbf{p}_2^T = \mathbf{0}$

$E_{LOW}\!\left[f_i^k\right] \le \left(\mathbf{f}_i^k\right)^T \mathbf{p}_i \le E_{UPP}\!\left[f_i^k\right];\ k = 1,...,k_i;\ i = 1,2$

(3.77)

3. If expectation (prevision) bounds are given on the marginals (Eq. (3.3)),

first use constraints to obtain extreme distributions for the marginals, and

then proceed as stated in the first algorithm.

Example 3-5. Consider again the situation and knowledge available in Example 3-1, but now suppose that the bolt and the nut are picked from the two boxes in a stochastically independent way. We want to know the upper and lower probabilities for the event T = {(A, A), (B, B), (C, C)}, and the conditional probability that the bolt is Type B given the type of nut.

Let us solve this problem by using Theorem 3-5 and the first option above. There are 3 extreme points of Ψ1 and 4 extreme points of Ψ2; thus the number of extreme points of the joint set ΨS does not exceed 3 × 4 = 12. The extreme points, each the product of two extreme marginal distributions, are listed in Table 3-13. The lower and upper probabilities for T are 0.14 and 0.40, respectively, and the conditional probability that the bolt is Type B given the type of nut ranges from 0.14 to 0.40, which coincides with the bounds on the marginal set Ψ1.

Table 3-13 Example 3-5: Lower and upper probabilities for T, and lower and upper conditional probability $P(S_1 = B \mid S_2 = B)$, on all extreme distributions of Ψ (optimal solutions are highlighted).

Extreme point of Ψ: $p_{1,1}$ $p_{1,2}$ $p_{1,3}$ $p_{2,1}$ $p_{2,2}$ $p_{2,3}$ $p_{3,1}$ $p_{3,2}$ $p_{3,3}$ | $p_{1,1}+p_{2,2}+p_{3,3}$ | $p_{2,2}/(p_{1,2}+p_{2,2}+p_{3,2})$

1  0.03 0.23 0.03 0.01 0.11 0.01 0.06 0.46 0.06 | 0.20 | 0.14
2  0.05 0.40 0.05 0.03 0.20 0.03 0.03 0.20 0.03 | 0.28 | 0.25
3  0.02 0.16 0.02 0.04 0.32 0.04 0.04 0.32 0.04 | 0.38 | 0.40
4  0.06 0.23 0.00 0.03 0.11 0.00 0.11 0.46 0.00 | 0.17 | 0.14
5  0.10 0.40 0.00 0.05 0.20 0.00 0.05 0.20 0.00 | 0.30 | 0.25
6  0.04 0.16 0.00 0.08 0.32 0.00 0.08 0.32 0.00 | 0.36 | 0.40
7  0.00 0.26 0.03 0.00 0.13 0.01 0.00 0.51 0.06 | 0.19 | 0.14
8  0.00 0.45 0.05 0.00 0.23 0.03 0.00 0.23 0.03 | 0.25 | 0.25


9   0.00 0.18 0.02 0.00 0.36 0.04 0.00 0.36 0.04 | 0.40 | 0.40
10  0.00 0.29 0.00 0.00 0.14 0.00 0.00 0.57 0.00 | 0.14 | 0.14
11  0.00 0.50 0.00 0.00 0.25 0.00 0.00 0.25 0.00 | 0.25 | 0.25
12  0.00 0.20 0.00 0.00 0.40 0.00 0.00 0.40 0.00 | 0.40 | 0.40

Min 0.14 0.14

Max 0.40 0.40
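A minimal sketch of this enumeration follows. The marginal extreme points are not given in this excerpt; they are inferred from Table 3-13 (e.g., the fractions 2/7, 1/7, 4/7 reproduce its first rows up to rounding), so treat them as assumptions rather than the original data of Example 3-1:

```python
import itertools
import numpy as np

# Option (1) under strong independence (Theorem 3-5): enumerate the
# products of the marginal extreme points and evaluate the objective on
# each.  The marginal extreme points below are inferred from Table 3-13,
# i.e., they are assumptions, not quoted data.
EXT1 = [np.array([2 / 7, 1 / 7, 4 / 7]),
        np.array([0.50, 0.25, 0.25]),
        np.array([0.20, 0.40, 0.40])]
EXT2 = [np.array([0.1, 0.8, 0.1]),
        np.array([0.2, 0.8, 0.0]),
        np.array([0.0, 0.9, 0.1]),
        np.array([0.0, 1.0, 0.0])]

traces, conds = [], []
for u, v in itertools.product(EXT1, EXT2):
    P = np.outer(u, v)                        # one extreme joint distribution
    traces.append(np.trace(P))                # P(T) = p11 + p22 + p33
    conds.append(P[1, 1] / P[:, 1].sum())     # P(S1=B | S2=B); column sum >= 0.8

# min/max over the 12 extreme joints round to 0.14 and 0.40, as in Table 3-13
```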

3.3.3.4 Conditional Strong Independence

Let S1, S2, S3 be three random variables. Consider the case in which S1 is conditionally strongly independent of S2 given S3; Eq. (3.35) then reads as follows:

$P\left(s_i^1, s_j^2, s_l^3\right) = \dfrac{\left(\sum_{j=1}^{n_2} P\left(s_i^1, s_j^2, s_l^3\right)\right)\left(\sum_{i=1}^{n_1} P\left(s_i^1, s_j^2, s_l^3\right)\right)}{\sum_{i=1}^{n_1}\sum_{j=1}^{n_2} P\left(s_i^1, s_j^2, s_l^3\right)}, \quad \text{for } \sum_{i=1}^{n_1}\sum_{j=1}^{n_2} P\left(s_i^1, s_j^2, s_l^3\right) > 0$

(3.78)

The algorithm is illustrated by the following example:

Example 3-6 Redo Example 3-4, but with the information that, given X2, X1 is conditionally strongly independent of X3. Then the complete problem can be written as


Minimize $p_1 + p_4 + p_6$

Subject to

$0.1 \le p_1+p_2+p_3+p_4 \le 0.3$

$0.2 \le \dfrac{p_1+p_2}{p_1+p_2+p_3+p_4} \le 0.5$; $\quad 0.6 \le \dfrac{p_5+p_6}{p_5+p_6+p_7+p_8} \le 0.7$

$0.7 \le \dfrac{p_1+p_5}{p_1+p_2+p_5+p_6} \le 0.8$; $\quad 0.3 \le \dfrac{p_3+p_7}{p_3+p_4+p_7+p_8} \le 0.4$

$p_1 = \dfrac{(p_1+p_2)(p_1+p_5)}{p_1+p_2+p_5+p_6}$; $\quad p_2 = \dfrac{(p_1+p_2)(p_2+p_6)}{p_1+p_2+p_5+p_6}$;

$p_3 = \dfrac{(p_3+p_4)(p_3+p_7)}{p_3+p_4+p_7+p_8}$; $\quad p_4 = \dfrac{(p_3+p_4)(p_4+p_8)}{p_3+p_4+p_7+p_8}$;

$p_5 = \dfrac{(p_5+p_6)(p_1+p_5)}{p_1+p_2+p_5+p_6}$; $\quad p_6 = \dfrac{(p_5+p_6)(p_2+p_6)}{p_1+p_2+p_5+p_6}$;

$p_7 = \dfrac{(p_7+p_8)(p_3+p_7)}{p_3+p_4+p_7+p_8}$; $\quad p_8 = \dfrac{(p_7+p_8)(p_4+p_8)}{p_3+p_4+p_7+p_8}$;

$p_i \ge 0;\ \sum_i p_i = 1$

(3.79)

The lower and upper probabilities of $P(1,2,3)+P(1,\hat2,\hat3)+P(\hat1,2,\hat3)$ are equal to 0.172 and 0.399, respectively. Solutions are summarized in Table 3-14.

Table 3-14 Example: Solutions of the optimization problems (3.79) for the lower and upper probabilities of $P(1,2,3)+P(1,\hat2,\hat3)+P(\hat1,2,\hat3)$.

P1 P2 P3 P4 P5 P6 P7 P8 P1+P4+P6

min 0.016 0.004 0.032 0.048 0.432 0.108 0.144 0.216 0.172

max 0.042 0.018 0.072 0.168 0.441 0.189 0.021 0.049 0.399

If the available information is given in terms of the upper and lower probabilities of P(2), P(1|2), $P(1|\hat2)$, P(3|2), and $P(3|\hat2)$, we can alternatively use Theorem 3-5 to find all extreme distributions of the sets $\Psi(X_1, X_3 \mid 2)$ and $\Psi(X_1, X_3 \mid \hat2)$, then obtain the extreme joint distributions on X1, X2 and X3 by using Theorem 3-5 again, and finally find the min/max values of the objective function on the extreme joint distributions on X1, X2 and X3.
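Under the factorization in Eq. (3.79) the joint can be parametrized by the five bounded fractions appearing in the constraints; the parametrization and the variable names below are mine, not the dissertation's. Because the objective is multilinear in these five variables, its minimum over the box is attained at a vertex, and enumerating the 32 vertices reproduces the reported lower bound of 0.172:

```python
import itertools

# Parametrize Eq. (3.79) by the five bounded fractions of its constraints
# (labels t, b, bh, c, ch are mine):
#   t  = p1+p2+p3+p4,             t  in [0.1, 0.3]
#   b  = (p1+p2)/(p1+p2+p3+p4),   b  in [0.2, 0.5]
#   bh = (p5+p6)/(p5+p6+p7+p8),   bh in [0.6, 0.7]
#   c  = (p1+p5)/(p1+p2+p5+p6),   c  in [0.7, 0.8]
#   ch = (p3+p7)/(p3+p4+p7+p8),   ch in [0.3, 0.4]
# so that p1 = t*b*c, p4 = t*(1-b)*(1-ch), p6 = (1-t)*bh*(1-c).  The
# objective is multilinear, hence its minimum is at one of the 2**5 vertices.
bounds = [(0.1, 0.3), (0.2, 0.5), (0.6, 0.7), (0.7, 0.8), (0.3, 0.4)]

def objective(t, b, bh, c, ch):
    return t * b * c + t * (1 - b) * (1 - ch) + (1 - t) * bh * (1 - c)

low = min(objective(*v) for v in itertools.product(*bounds))
# round(low, 3) == 0.172, the lower bound reported in Table 3-14
```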


3.3.4 Uncertain Correlation

According to the problem formulation in Eq.(3.41), the problem under uncertain

correlation reads as follows:

Minimize (Maximize) $\sum_{i=1}^{n_1}\sum_{j=1}^{n_2} a_{i,j}\, P_{i,j}$

Subject to

$E(S_1)E(S_2) + \underline{\rho}\, D_{S_1} D_{S_2} \le E(S_1 S_2) \le E(S_1)E(S_2) + \overline{\rho}\, D_{S_1} D_{S_2}$

$E_{LOW}\!\left[f_1^{k}\right] \le \left(\mathbf{f}_1^{k}\right)^T \mathbf{p}_1 \le E_{UPP}\!\left[f_1^{k}\right];\ k = 1,...,k_1$

$E_{LOW}\!\left[f_2^{k}\right] \le \left(\mathbf{f}_2^{k}\right)^T \mathbf{p}_2 \le E_{UPP}\!\left[f_2^{k}\right];\ k = 1,...,k_2$

$\mathbf{1}_{n_1}^T \cdot \mathbf{p}_1 = 1;\ \mathbf{1}_{n_2}^T \cdot \mathbf{p}_2 = 1$

$\mathbf{p}_1 \ge \mathbf{0};\ \mathbf{p}_2 \ge \mathbf{0}$

(3.80)

where $D_{S_i}^2 = E\!\left(S_i^2\right) - \left(E(S_i)\right)^2$, $i = 1,2$, and $E(S)$ is the expected value of variable $S$.

Obviously, the constraint for uncertain correlation in (3.80) is nonlinear.

Algorithms for solving non-linear programming problems can be found in Luenberger (1984).

Example 3-7. Consider again the situation and knowledge available in Example 3-1. Suppose that now the type of the bolt picked from Box 1 and the type of the nut picked from Box 2 have some uncertain correlation: 0.1 < ρ < 0.5. We want to write down the optimization problems for finding upper and lower expectations on the joint space, and then calculate the upper and lower probabilities for the case in which the same type of bolt and nut is selected, i.e., the event T = {(A, A), (B, B), (C, C)}. Finally, we calculate the upper and lower conditional probabilities that the bolt is Type B given the type of nut, and contrast them with the upper and lower conditional probabilities that the nut is Type B given the type of bolt.

The nonlinear programming problem in Eq. (3.80) becomes:


Minimize (Maximize) $p_{1,1}+p_{2,2}+p_{3,3}$ $\quad$ (or $\dfrac{p_{2,2}}{p_{1,2}+p_{2,2}+p_{3,2}}$, or $\dfrac{p_{2,2}}{p_{2,1}+p_{2,2}+p_{2,3}}$)

Subject to

$0.1\sqrt{\left(\sum_i p_{1,i}\right)\left(1-\sum_i p_{1,i}\right)}\sqrt{\left(\sum_j p_{j,1}\right)\left(1-\sum_j p_{j,1}\right)} \le p_{1,1} - \left(\sum_i p_{1,i}\right)\left(\sum_j p_{j,1}\right) \le 0.5\sqrt{\left(\sum_i p_{1,i}\right)\left(1-\sum_i p_{1,i}\right)}\sqrt{\left(\sum_j p_{j,1}\right)\left(1-\sum_j p_{j,1}\right)}$

$0.1\sqrt{\left(\sum_i p_{2,i}\right)\left(1-\sum_i p_{2,i}\right)}\sqrt{\left(\sum_j p_{j,2}\right)\left(1-\sum_j p_{j,2}\right)} \le p_{2,2} - \left(\sum_i p_{2,i}\right)\left(\sum_j p_{j,2}\right) \le 0.5\sqrt{\left(\sum_i p_{2,i}\right)\left(1-\sum_i p_{2,i}\right)}\sqrt{\left(\sum_j p_{j,2}\right)\left(1-\sum_j p_{j,2}\right)}$

$0.1\sqrt{\left(\sum_i p_{3,i}\right)\left(1-\sum_i p_{3,i}\right)}\sqrt{\left(\sum_j p_{j,3}\right)\left(1-\sum_j p_{j,3}\right)} \le p_{3,3} - \left(\sum_i p_{3,i}\right)\left(\sum_j p_{j,3}\right) \le 0.5\sqrt{\left(\sum_i p_{3,i}\right)\left(1-\sum_i p_{3,i}\right)}\sqrt{\left(\sum_j p_{j,3}\right)\left(1-\sum_j p_{j,3}\right)}$

$-(p_{1,1}+p_{1,2}+p_{1,3}) + 2(p_{2,1}+p_{2,2}+p_{2,3}) \ge 0$

$2(p_{1,1}+p_{1,2}+p_{1,3}) - (p_{3,1}+p_{3,2}+p_{3,3}) \ge 0$

$-(p_{2,1}+p_{2,2}+p_{2,3}) + (p_{3,1}+p_{3,2}+p_{3,3}) \ge 0$

$0.0 \le p_{1,1}+p_{2,1}+p_{3,1} \le 0.2$

$0.8 \le p_{1,2}+p_{2,2}+p_{3,2} \le 1.0$

$0.0 \le p_{1,3}+p_{2,3}+p_{3,3} \le 0.1$

$p_{1,1}+p_{1,2}+p_{1,3}+p_{2,1}+p_{2,2}+p_{2,3}+p_{3,1}+p_{3,2}+p_{3,3} = 1$

$p_{i,j} \ge 0$

(3.81)

Lower and upper probabilities for the event T = {(A, A), (B, B), (C, C)} are equal to 0.178 and 0.565, respectively. Lower and upper conditional probabilities $P(S_1 = B \mid S_2 = B)$ are 0.151 and 0.438, and lower and upper conditional probabilities $P(S_2 = B \mid S_1 = B)$ are 0.849 and 1. Solutions are detailed in Table 3-15 through Table 3-17.

Table 3-15 Example: Solutions of the optimization problems (3.81) for upper and lower probabilities for event T = (A, A), (B, B), (C, C).

Solution for: $p_{1,1}$ $p_{1,2}$ $p_{1,3}$ $p_{2,1}$ $p_{2,2}$ $p_{2,3}$ $p_{3,1}$ $p_{3,2}$ $p_{3,3}$

Min 0.022 0.264 0.000 0.000 0.143 0.000 0.022 0.536 0.013
Max 0.068 0.131 0.001 0.002 0.398 0.000 0.030 0.271 0.099


Table 3-16 Example: Solutions of the optimization problems (3.81) for upper and lower probabilities for P(S1=B|S2=B).

Solution for P1,1 P1,2 P1,3 P2,1 P2,2 P2,3 P3,1 P3,2 P3,3

Min 0.042 0.244 0.000 0.000 0.143 0.000 0.001 0.557 0.013

Max 0.000 0.200 0.000 0.000 0.400 0.000 0.000 0.313 0.087

Table 3-17 Example: Solutions of the optimization problems (3.81) for upper and lower probabilities for P(S2=B|S1=B).

Solution for P1,1 P1,2 P1,3 P2,1 P2,2 P2,3 P3,1 P3,2 P3,3

Min 0.032 0.167 0.001 0.042 0.340 0.019 0.027 0.293 0.080

Max 0.032 0.168 0.000 0.000 0.400 0.000 0.068 0.254 0.078
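As a sanity check of the constraints as reconstructed in Eq. (3.81), the tabulated Min solution of Table 3-15 can be tested directly; the tolerance is generous because the table reports only three digits:

```python
import math

# Check the Min solution printed in Table 3-15 against the constraints as
# reconstructed in Eq. (3.81); generous tolerance for 3-digit rounding.
p = [[0.022, 0.264, 0.000],
     [0.000, 0.143, 0.000],
     [0.022, 0.536, 0.013]]
u = [sum(row) for row in p]               # marginal of S1 (bolt)
v = [sum(col) for col in zip(*p)]         # marginal of S2 (nut)
tol = 2e-2

assert abs(sum(u) - 1.0) < 1e-6
# marginal constraints on Box 1 and Box 2 (as reconstructed)
assert -u[0] + 2 * u[1] >= -tol and 2 * u[0] - u[2] >= -tol and -u[1] + u[2] >= -tol
assert v[0] <= 0.2 + tol and v[1] >= 0.8 - tol and v[2] <= 0.1 + tol
# each indicator-pair correlation must lie in [0.1, 0.5]
for k in range(3):
    d = math.sqrt(u[k] * (1 - u[k])) * math.sqrt(v[k] * (1 - v[k]))
    rho = (p[k][k] - u[k] * v[k]) / d
    assert 0.1 - tol <= rho <= 0.5 + tol

objective = p[0][0] + p[1][1] + p[2][2]   # P(T); rounds to the reported 0.178
```

The lower correlation bound ρ = 0.1 turns out to be essentially active at this solution, which is what one would expect at a minimizer.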

3.4 SUMMARY

The constraints that define the sets of probability measures are summarized as follows:

Unknown interaction: $\Psi_U$: (3.7)
Epistemic irrelevance: $\Psi_{E|i}$: (3.7) + (one of (3.21))
Conditional epistemic irrelevance: $\Psi_{E|i|S_3}$: (3.7) + (3.26) (or (3.28))
Epistemic independence: $\Psi_E$: (3.7) + (3.21)
Conditional epistemic independence: $\Psi_{E|S_3}$: (3.7) + (3.26) + (3.28)
Strong independence: $\Psi_S$: (3.7) + (3.21) + (3.30)
Conditional strong independence: $\Psi_{S|S_3}$: (3.7) + (3.36)
Uncertain correlation: $\Psi_C$: (3.7) + (3.40)


Since constraints are consecutively added, the sets of probability measures are nested, i.e. $\Psi_S \subseteq \Psi_E \subseteq \Psi_{E|i} \subseteq \Psi_U$; $\Psi_C \subseteq \Psi_U$; and $\Psi_{S|S_3} \subseteq \Psi_{E|S_3} \subseteq \Psi_{E|i|S_3} \subseteq \Psi_U$. As a consequence, the upper and lower probability bounds are also nested:

$P_{U,low} \le P_{E|i,low} \le P_{E,low} \le P_{S,low} \le P_{S,upp} \le P_{E,upp} \le P_{E|i,upp} \le P_{U,upp}$

$P_{U,low} \le P_{C,low} \le P_{C,upp} \le P_{U,upp}$

$P_{U,low} \le P_{E|i|S_3,low} \le P_{E|S_3,low} \le P_{S|S_3,low} \le P_{S|S_3,upp} \le P_{E|S_3,upp} \le P_{E|i|S_3,upp} \le P_{U,upp}$

(3.82)

As for conditional probabilities on the joint space, the upper and lower conditional probability bounds are:

$P_{U,low} \le P_{E|i,low} \le P_{E,low} = P_{S,low} \le P_{S,upp} = P_{E,upp} \le P_{E|i,upp} \le P_{U,upp}$

$P_{U,low} \le P_{C,low} \le P_{C,upp} \le P_{U,upp}$

$P_{U,low} \le P_{E|i|S_3,low} \le P_{E|S_3,low} = P_{S|S_3,low} \le P_{S|S_3,upp} = P_{E|S_3,upp} \le P_{E|i|S_3,upp} \le P_{U,upp}$

(3.83)

This is exemplified by the probabilities of the set T = {(A, A), (B, B), (C, C)} computed in the examples of this chapter:

Unknown interaction: $P_{U,low} = 0.0$; $P_{U,upp} = 0.60$
Epistemic irrelevance: $P_{E|1,low} = 0.11$; $P_{E|1,upp} = 0.48$
Epistemic independence: $P_{E,low} = 0.14$; $P_{E,upp} = 0.42$
Strong independence: $P_{S,low} = 0.14$; $P_{S,upp} = 0.40$
Uncertain correlation: $P_{C,low} = 0.178$; $P_{C,upp} = 0.565$

Also, the conditional probabilities that the bolt is Type B given the type of nut are:

Unknown interaction: $P_{U,low} = 0$; $P_{U,upp} = 0.50$
Epistemic irrelevance: $P_{E|1,low} = 0.12$; $P_{E|1,upp} = 0.45$
Epistemic independence: $P_{E,low} = 0.14$; $P_{E,upp} = 0.40$
Strong independence: $P_{S,low} = 0.14$; $P_{S,upp} = 0.40$
Uncertain correlation: $P_{C,low} = 0.151$; $P_{C,upp} = 0.438$
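The unknown-interaction bounds quoted above (0.0 and 0.60) follow from a plain linear program over the nine joint components, using only the marginal information of Example 3-1 as reconstructed in Eq. (3.81); the helper functions below are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

# Unknown-interaction bounds on P(T) as a plain LP over the nine joint
# components p[3*i + j] (i = bolt type, j = nut type), using only the
# marginal information of Example 3-1 as reconstructed in Eq. (3.81).
def row(pairs, n=9):
    r = np.zeros(n)
    for idx, coef in pairs:
        r[idx] = coef
    return r

def bolt(i):                       # indices with S1 = state i
    return [3 * i + j for j in range(3)]

def nut(j):                        # indices with S2 = state j
    return [3 * i + j for i in range(3)]

A_ub = [row([(k, 1) for k in bolt(0)] + [(k, -2) for k in bolt(1)]),  # P1(A) <= 2 P1(B)
        row([(k, -2) for k in bolt(0)] + [(k, 1) for k in bolt(2)]),  # P1(C) <= 2 P1(A)
        row([(k, 1) for k in bolt(1)] + [(k, -1) for k in bolt(2)]),  # P1(B) <= P1(C)
        row([(k, 1) for k in nut(0)]),                                # P2(A) <= 0.2
        row([(k, -1) for k in nut(1)]),                               # P2(B) >= 0.8
        row([(k, 1) for k in nut(2)])]                                # P2(C) <= 0.1
b_ub = [0, 0, 0, 0.2, -0.8, 0.1]

c = row([(0, 1), (4, 1), (8, 1)])  # objective: P(T) = p11 + p22 + p33
lo = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
             A_eq=np.ones((1, 9)), b_eq=[1.0], bounds=[(0, 1)] * 9)
hi = linprog(-c, A_ub=np.array(A_ub), b_ub=b_ub,
             A_eq=np.ones((1, 9)), b_eq=[1.0], bounds=[(0, 1)] * 9)
# lo.fun and -hi.fun reproduce the quoted 0.0 and 0.60
```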


Table 3-18 summarizes all algorithms discussed in this chapter and their

applications to the cases under different types of independence. The algorithms are

divided into two groups by the type of constraints: prevision bounds or extreme points of

marginals. Both Option (1) and Option (2) are applied to solve problems under each type

of constraints. Every option includes two algorithms, written in terms of the joint

probability or marginals. The applicability of each algorithm to a specified type of

independence is listed in the table. For example, if prevision bounds are given on the

marginals, consider the case in which one is interested in the extreme values of the

conditional probability and decides to use Option (1) written in terms of the joint

probability. Then one has to solve an optimization problem with linear objective

functions and linear constraints, i.e. N/L as stated in Table 3-18.

Note: E: Enumerate all joint extreme points; UI: Unknown Interaction; EIR: Epistemic Irrelevance; EIN: Epistemic Independence; SI: Strong Independence; NA: Not available; L: Linear; N: Nonlinear; Q: Quadratic (objective/constraints).


Table 3-18 Summary of all algorithms for different types of independence.

Constraints given as bounds on previsions of the marginals:

Option (1), optimization problem:
- Constraints written in terms of P; prevision on joint distribution: UI L/L, EIR L/L, EIN L/L, SI L/Q; conditional probability: UI N/L, EIR N/L, EIN N/L, SI N/Q.
- Constraints written in terms of p1 and p2; prevision on joint distribution: UI NA, EIR Q/Q, EIN Q/Q, SI Q/Q; conditional probability: UI NA, EIR N/Q, EIN N/Q, SI N/Q.

Option (2), find all extreme joint distributions:
- Constraints written in terms of P; prevision on joint distribution: UI L/L, EIR L/L, EIN L/L, SI NA; conditional probability: UI N/L, EIR N/L, EIN N/L, SI NA.
- Constraints written in terms of p1 and p2: NA (both objectives).

Constraints given as extreme points of the marginals:

Option (1), optimization problem:
- Constraints written in terms of P; prevision on joint distribution: UI L/L, EIR L/Q, EIN L/Q, SI L/Q; conditional probability: UI N/L, EIR N/Q, EIN N/Q, SI N/Q.
- Constraints written in terms of p1 and p2; prevision on joint distribution: UI L/L, EIR Q/Q, EIN Q/Q, SI Q/Q; conditional probability: UI N/L, EIR N/Q, EIN N/Q, SI N/Q.

Option (2), find all extreme joint distributions:
- Constraints written in terms of P; prevision on joint distribution: UI L/L; conditional probability: UI N/L; EIR, EIN, SI: NA.
- Combination of EXTs on p1 and p2: NA.
- Enumeration of all joint extreme points: E (EIR, EIN, SI, for both objectives).


Chapter 4 Failure and Decision Analysis

This chapter focuses on the application of Imprecise Probability to failure analysis

(i.e. Event Tree Analysis and Fault Tree Analysis) and decision analysis. As explained in

Chapter 3, the available information is used to construct a convex set of probability

distributions, which are then considered during failure analysis and decision making. In

the failure analysis, our aim is to determine the upper and lower bounds of a prevision

(expectation of a real function) or of the probability of failure; in the decision analysis,

our objective is to determine the optimal action(s) to take. Corresponding algorithms are

developed and illustrated by examples. All theorems presented in this chapter are

developed and proved by the author.

4.1 INTRODUCTION

In risk analysis, we often face the problem of evaluating the prevision or the

probability of some undesirable consequence (i.e. failure), and of making the best

decision based on the available information. To answer the first question, two common

methodologies in failure analysis are Event Tree Analysis (ETA) and Fault Tree Analysis

(FTA). As for the second question, Decision Tree Analysis is a usual technique. For

example, the International Tunnelling and Underground Space Association published

guidelines (Eskesen et al., 2004) on risk analysis and recommended using Event Tree

Analysis, Fault Tree Analysis and Decision Tree Analysis as risk analysis tools in

tunneling.

An event tree presents an inductive logical relationship, starting from a hazardous event, called the "initiating event," and following it with all the possible outcomes. By analyzing all outcomes at the bottom of the tree, the failure paths are determined. The


failure probability is obtained by summing up the probabilities calculated on each failure

path. For example, Figure 4-1 shows an event tree for the case of “Pedestrian walks

against red light”. By assigning probabilities to all outcomes, the probability of

occurrence of an accident can be evaluated quantitatively. A detailed introduction to Event Tree Analysis with precise probabilities in Civil Engineering can be found in Benjamin and Cornell (1970).

Figure 4-1: Example of Event Tree (Eskesen, 2004).

A fault tree analysis is a deductive analytical technique. It starts from a specified state of the system, called the "top event," and includes all faults that could contribute to the top event. At the bottom of the fault tree, the basic initiating faults, which cannot be further developed, are called "basic events"; they are linked by fault-tree gates, including AND-, OR-, and Exclusive-OR gates, etc. Figure 4-2 illustrates a fault tree used to analyze the failure of a toll subsea tunnel project. For further introduction and application of Fault Tree Analysis with precise probabilities in Civil Engineering, see Ang and Tang (1984).


Figure 4-2: Example of Fault Tree (Eskesen, 2004).

A decision tree presents all choices in a tree-like structure, together with information on consequences and probabilities. Figure 4-3 shows an example of a decision tree. With specific probabilities and consequences, one can first make a decision between Choices 1a and 1b. The selected action between these two becomes the representative of Choice 1. Then Choices 1 and 2 are compared based on their expected values. Finally, the one with the maximum expected value is selected as the best option. A detailed introduction to Decision Tree Analysis with precise probabilities in Civil Engineering may be found in Benjamin and Cornell (1970) and Ang and Tang (1984).


Figure 4-3: Example of Decision Tree (Eskesen, 2004).

In conventional failure analysis and decision analysis, probabilities are usually treated as precise values. By assigning a single probability distribution to all events in the tree (Whitman, 1984), the prevision or probability of some outcomes is evaluated quantitatively.

In tunneling projects, ground and groundwater conditions are affected by large

uncertainty, and thus past experience or data may not be reliably used for evaluating the

probabilities precisely. As a result, the available evidence in the event-tree analysis relies

on the judgment of experienced engineers and experts. Those judgments might be some

probability intervals, or more generally, upper and lower previsions (expectations) of

gambles (bounded real functions), as the constraints expressed in Section 2.1. In general,

these constraints generate convex sets of probability distributions. Thus, the following

sub-sections will solve event trees, fault trees, and decision trees with convex sets of

probabilities.

Several references allowed for imprecision in probabilities when dealing with

event-tree analysis, such as Tonon et al. (2000), Huang et al. (2001), and Kenarangui


(1991); random sets or interval probabilities were used to evaluate the input probabilities,

but they still did not consider the most general case, i.e. convex sets of probability

distributions.

To illustrate, a random set on a finite universe $\Omega$ is composed of $n$ non-empty subsets $A_i \subseteq \Omega$ with an associated probability assignment $m(A_i)$: $m(A_i) > 0$, $\sum_{i=1}^{n} m(A_i) = 1$. Bernardini and Tonon (2010) explain that a random set $\left\{\left(A_i, m(A_i)\right),\ i = 1,...,n\right\}$ generates a convex set of probability distributions bounded by belief (or plausibility) functions, which are special cases of upper and lower probabilities. Upper and lower probabilities are, in turn, special cases of previsions when the gamble is the characteristic function of an event (Tonon et al. 2001).
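Belief and plausibility of a random set are straightforward to compute; the focal sets and masses below are made up for illustration:

```python
# Belief and plausibility induced by a random set {(A_i, m(A_i))}: Bel(A)
# sums the masses of focal sets contained in A, Pl(A) the masses of focal
# sets intersecting A.  The focal sets and masses below are made up.
def bel(focal, A):
    return sum(m for B, m in focal if B <= A)

def pl(focal, A):
    return sum(m for B, m in focal if B & A)

focal = [(frozenset({1}), 0.50),
         (frozenset({1, 2}), 0.25),
         (frozenset({2, 3}), 0.25)]

print(bel(focal, {1, 2}), pl(focal, {1, 2}))   # 0.75 1.0
print(bel(focal, {2}), pl(focal, {2}))         # 0 0.5
```

For any event A, Bel(A) ≤ Pl(A), and the pair bounds the convex set of probability distributions the random set generates.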

4.2 FAILURE ANALYSIS WITH IMPRECISE PROBABILITY

4.2.1 Event Tree Analysis (ETA)

In event tree analysis, the available information on probabilities can be given in three different forms: (1) probabilities conditional on the event at the upper level; (2) total probabilities of occurrence, i.e., probabilities not conditional on other events; (3) a combination of the previous two types. The following sub-sections discuss the solutions in these three cases.

4.2.1.1 ETA with conditional probabilities

This sub-section deals with event-tree analysis when the probabilities conditional on the upper-level events are given. Let N be the number of levels of the event tree (see Figure 4-4). The subscript $(1, i_2, ..., i_N)$ is used to index an event in the tree: if the event is located at level k, then $i_j = 0$ for $j = k+1,...,N$. Let $S_{1,i_2,...,i_k,0,...,0}$ be an event at level k with $n_{1,i_2,...,i_k,0,...,0}$ possible states (or outcomes). It is located on the $i_k$-th branch of event $S_{1,i_2,...,i_{k-1},0,...,0}$ at level k−1, which is on the $i_{k-1}$-th branch of event $S_{1,i_2,...,i_{k-2},0,...,0}$ at level k−2, …, and so on, up to event $S_{1,i_2,0,...,0}$ on the $i_2$-th branch of the event $S_{1,0,...,0}$ at level 1. For example, if N = 4 in Figure 4-4, then $S_{1,2,1,0}$ is an event at level 3, which is located on the 1st branch of event $S_{1,2,0,0}$, which is on the 2nd branch of event $S_{1,0,0,0}$.

Figure 4-4: Event-tree with N levels.

In the middle of the tree, $\mathbf{p}_{1,i_2,...,i_k,0,...,0}$ denotes the probability vector for event $S_{1,i_2,...,i_k,0,...,0}$, with its j-th component $p_{1,i_2,...,i_k,j,0,...,0}$ being the probability of occurrence of the j-th state $s_{1,i_2,...,i_k,j,0,...,0}$ of event $S_{1,i_2,...,i_k,0,...,0}$. Let $\Omega_{1,i_2,...,i_k,0,...,0}$ be the set of outcomes of event $S_{1,i_2,...,i_k,0,...,0}$. Let $f^{m}_{1,i_2,...,i_k,0,...,0}: \Omega_{1,i_2,...,i_k,0,...,0} \to \mathbb{R}$, $m = 1,...,m_{1,i_2,...,i_k,0,...,0}$, be a set of limited functions (gambles) on $\Omega_{1,i_2,...,i_k,0,...,0}$. $\mathbf{f}^{m}_{1,i_2,...,i_k,0,...,0}$ denotes an $n_{1,i_2,...,i_k,0,...,0}$-column vector, whose j-th component is the function value for the j-th state $s_{1,i_2,...,i_k,j,0,...,0}$ of event $S_{1,i_2,...,i_k,0,...,0}$. $\mathbf{p}_{1,i_2,...,i_k,0,...,0}$ denotes one element in the set $\Psi_{1,i_2,...,i_k,0,...,0}$ of probability distributions whose expectations (previsions) for $f^{m}_{1,i_2,...,i_k,0,...,0}$ fall between assigned upper and lower bounds $E_{UPP}\!\left[f^{m}_{1,i_2,...,i_k,0,...,0}\right]$ and $E_{LOW}\!\left[f^{m}_{1,i_2,...,i_k,0,...,0}\right]$, respectively. $\Psi_{1,i_2,...,i_k,0,...,0}$ is thus defined as follows:

$\Psi_{1,i_2,...,i_k,0,...,0} = \left\{ \mathbf{p}_{1,i_2,...,i_k,0,...,0} :\ \left(\mathbf{f}^{m}_{1,i_2,...,i_k,0,...,0}\right)^T \mathbf{p}_{1,i_2,...,i_k,0,...,0} \ge E_{LOW}\!\left[f^{m}_{1,i_2,...,i_k,0,...,0}\right];\ \left(\mathbf{f}^{m}_{1,i_2,...,i_k,0,...,0}\right)^T \mathbf{p}_{1,i_2,...,i_k,0,...,0} \le E_{UPP}\!\left[f^{m}_{1,i_2,...,i_k,0,...,0}\right];\ m = 1,...,m_{1,i_2,...,i_k,0,...,0} \right\}$

(4.1)

By applying the algorithm for finding extreme points of the set $\Psi_{1,i_2,...,i_k,0,...,0}$ described in Section 2.4, the extreme distributions for $\mathbf{p}_{1,i_2,...,i_k,0,...,0}$ (vertices of $\Psi_{1,i_2,...,i_k,0,...,0}$) may be obtained:

$EXT_{1,i_2,...,i_k,0,...,0} = \left\{ \mathbf{p}^{1}_{EXT,\,1,i_2,...,i_k,0,...,0},\ \mathbf{p}^{2}_{EXT,\,1,i_2,...,i_k,0,...,0},\ \ldots,\ \mathbf{p}^{\xi_{1,i_2,...,i_k,0,...,0}}_{EXT,\,1,i_2,...,i_k,0,...,0} \right\}$

(4.2)

Let $\mathbf{a}_{1,i_2,...,i_k,0,...,0}$ be the consequence vector for event $S_{1,i_2,...,i_k,0,...,0}$, where its j-th component $a_{1,i_2,...,i_k,j,0,...,0}$ is the consequence of the occurrence of the j-th state $s_{1,i_2,...,i_k,j,0,...,0}$ of event $S_{1,i_2,...,i_k,0,...,0}$. Let $E_{1,i_2,...,i_k,i_{k+1},0,...,0}$ be the prevision (expectation) of event $S_{1,i_2,...,i_k,i_{k+1},0,...,0}$ conditional on its upper-level events. If all $E_{1,i_2,...,i_k,i_{k+1},0,...,0}$ ($i_{k+1} = 1,...,n_{1,i_2,...,i_k,0,...,0}$) are calculated, the consequence vector $\mathbf{a}_{1,i_2,...,i_k,0,...,0}$ at level k is

$\mathbf{a}_{1,i_2,...,i_k,0,...,0} = \left( E_{1,i_2,...,i_k,1,0,...,0},\ \ldots,\ E_{1,i_2,...,i_k,n_{1,i_2,...,i_k,0,...,0},0,...,0} \right)^T$

(4.3)

At the bottom of the event tree, let $s_{1,i_2,...,i_N,j}$ be the j-th possible state of event $S_{1,i_2,...,i_N}$, and let $a_{1,i_2,...,i_N,j}$ be the numeric value of the consequence of its outcome. Thus, the vector $\mathbf{a}_{1,i_2,...,i_N}$ is the consequence vector, and $a_{1,i_2,...,i_N,j}$ is its j-th component. Let $\mathbf{p}_{1,i_2,...,i_N}$ be the associated probability vector, where its j-th component $p_{1,i_2,...,i_N,j}$ is the probability of outcome $s_{1,i_2,...,i_N,j}$ conditional on the occurrence of the upper-level events, i.e.,

$\mathbf{a}_{1,i_2,...,i_N} = \left( a_{1,i_2,...,i_N,1},\ \ldots,\ a_{1,i_2,...,i_N,n_{1,i_2,...,i_N}} \right)^T$; $\quad \mathbf{p}_{1,i_2,...,i_N} = \left( p_{1,i_2,...,i_N,1},\ \ldots,\ p_{1,i_2,...,i_N,n_{1,i_2,...,i_N}} \right)^T$

(4.4)

Each outcome at the bottom of the event tree has one probability path. Assume the ζ-th outcome $s_{1,i_2,...,i_N,\zeta}$ of event $S_{1,i_2,...,i_N}$ is the j-th outcome at the bottom of the event tree. The probability path is composed of N events: $S_{1,0,...,0}$, $S_{1,i_2,0,...,0}$, …, $S_{1,i_2,...,i_N}$, with associated N conditional probabilities equal to $p_{1,i_2,0,...,0}$, $p_{1,i_2,i_3,0,...,0}$, …, and $p_{1,i_2,...,i_N,\zeta}$, respectively. The consequence for outcome $s_{1,i_2,...,i_N,\zeta}$ is $a_{1,i_2,...,i_N,\zeta}$, and thus the prevision contributed by this probability path is given by the product $a_{1,i_2,...,i_N,\zeta}\left[ p_{1,i_2,0,...,0}\, p_{1,i_2,i_3,0,...,0} \cdots p_{1,i_2,...,i_N,\zeta} \right]$. Assume that there are a total of M events at the bottom of the tree: the total prevision of the whole event tree is

$E = \sum_{j=1}^{M} \sum_{\zeta=1}^{n_{1,i_2,...,i_N}} a_{1,i_2,...,i_N,\zeta} \left[ p_{1,i_2,0,...,0}\, p_{1,i_2,i_3,0,...,0} \cdots p_{1,i_2,...,i_N,\zeta} \right].$

To calculate the upper and lower previsions from the event tree, the following constrained optimization problems must be solved:

Minimize (Maximize) $\sum_{j=1}^{M} \sum_{\zeta=1}^{n_{1,i_2,...,i_N}} a_{1,i_2,...,i_N,\zeta} \left[ p_{1,i_2,0,...,0}\, p_{1,i_2,i_3,0,...,0} \cdots p_{1,i_2,...,i_N,\zeta} \right]$

Subject to

$\left(\mathbf{f}^{m}_{1,i_2,...,i_k,0,...,0}\right)^T \mathbf{p}_{1,i_2,...,i_k,0,...,0} \ge E_{LOW}\!\left[f^{m}_{1,i_2,...,i_k,0,...,0}\right]$; $\quad \left(\mathbf{f}^{m}_{1,i_2,...,i_k,0,...,0}\right)^T \mathbf{p}_{1,i_2,...,i_k,0,...,0} \le E_{UPP}\!\left[f^{m}_{1,i_2,...,i_k,0,...,0}\right]$; $\quad m = 1,...,m_{1,i_2,...,i_k,0,...,0}$; $k = 1,...,N$ (Eq. (3.1))

$\left(\mathbf{1}_{n_{1,i_2,...,i_k,0,...,0}}\right)^T \cdot \mathbf{p}_{1,i_2,...,i_k,0,...,0} = 1$; $\quad p_{1,i_2,...,i_k,j,0,...,0} \ge 0$; $\quad k = 1,...,N$; $j = 1,...,n_{1,i_2,...,i_k,0,...,0}$ ($\mathbf{p}_{1,i_2,...,i_k,0,...,0}$ must be a probability distribution)

(4.5)


The objective function in these optimization problems is a sum of $M$ products of $N$ variables each, subject to linear constraints. The optimization problems are thus non-linear and non-convex. Besides being computationally intensive (Luenberger, 1984), non-linear and non-convex problems may yield local minima and maxima.

We now show how problem (4.5) can be efficiently solved by a sequence of linear programming problems (when the prevision bounds $E_{UPP}$ and $E_{LOW}$ are known (Eq. (3.1)), Case 1) or by enumeration (when the extreme points of $\Psi_i$ are known (Eq. (4.2)), Case 2).

At level $N$, given the probability distributions $\mathbf{p}_{1,i_2,\ldots,i_N}$ conditional to the occurrence of event $S_{1,i_2,\ldots,i_{N-1},0}$, the upper and lower previsions $\overline{E}_{1,i_2,\ldots,i_N}$ and $\underline{E}_{1,i_2,\ldots,i_N}$ of event $S_{1,i_2,\ldots,i_N}$ can be obtained by solving the following optimization problems:

$$\overline{E}_{1,i_2,\ldots,i_N}=\max\ \mathbf{a}_{1,i_2,\ldots,i_N}^{T}\,\mathbf{p}_{1,i_2,\ldots,i_N};\qquad \underline{E}_{1,i_2,\ldots,i_N}=\min\ \mathbf{a}_{1,i_2,\ldots,i_N}^{T}\,\mathbf{p}_{1,i_2,\ldots,i_N}$$

Subject to
$$E_{LOW}\big[f^{m}_{1,i_2,\ldots,i_N}\big]\le \big(\mathbf{f}^{m}_{1,i_2,\ldots,i_N}\big)^{T}\,\mathbf{p}_{1,i_2,\ldots,i_N}\le E_{UPP}\big[f^{m}_{1,i_2,\ldots,i_N}\big];\quad m=1,\ldots,\overline{m}$$
$$\mathbf{1}^{T}_{n_{1,i_2,\ldots,i_N}}\cdot\mathbf{p}_{1,i_2,\ldots,i_N}=1;\qquad p_{1,i_2,\ldots,i_N,j}\ge 0;\ j=1,\ldots,n_{1,i_2,\ldots,i_N} \qquad (4.6)$$

In Case 1, problems (4.6) can be solved as linear programming problems. In Case 2, the upper and lower previsions $\overline{E}_{1,i_2,\ldots,i_N}$ and $\underline{E}_{1,i_2,\ldots,i_N}$ of event $S_{1,i_2,\ldots,i_N}$ are determined by enumerating the extreme probability distributions of $\Psi_{1,i_2,\ldots,i_N}$, which is the set of probability distributions $\mathbf{p}_{1,i_2,\ldots,i_N}$. All extreme probability distributions in $\Psi_{1,i_2,\ldots,i_N}$ must satisfy constraints (4.6); they are calculated by the algorithm in Chapter 2, Section 2.4, and collected in the set $EXT^{1,i_2,\ldots,i_N}=\left\{\mathbf{p}^{1,i_2,\ldots,i_N}_{EXT_1},\mathbf{p}^{1,i_2,\ldots,i_N}_{EXT_2},\ldots,\mathbf{p}^{1,i_2,\ldots,i_N}_{EXT_\xi}\right\}$. Then $\overline{E}_{1,i_2,\ldots,i_N}$ and $\underline{E}_{1,i_2,\ldots,i_N}$ may be obtained by searching in $EXT^{1,i_2,\ldots,i_N}$:

$$\overline{E}_{1,i_2,\ldots,i_N}=\max\ \mathbf{a}_{1,i_2,\ldots,i_N}^{T}\,\mathbf{p}^{1,i_2,\ldots,i_N}_{EXT_\xi};\qquad \underline{E}_{1,i_2,\ldots,i_N}=\min\ \mathbf{a}_{1,i_2,\ldots,i_N}^{T}\,\mathbf{p}^{1,i_2,\ldots,i_N}_{EXT_\xi}$$
Subject to
$$\mathbf{p}^{1,i_2,\ldots,i_N}_{EXT_\xi}\in EXT^{1,i_2,\ldots,i_N} \qquad (4.7)$$
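When the set $\Psi$ is cut out of the probability simplex by linear inequalities, the extreme points needed in Case 2 are the vertices of a polytope: each vertex solves the normalization equation together with a pair of active constraints, and is kept if it satisfies all the others. A minimal sketch for a three-outcome event (the function names are ours; the general-dimension case is handled by the algorithm of Chapter 2, Section 2.4):

```python
from fractions import Fraction as F
from itertools import combinations

def solve3(rows, rhs):
    """Solve a 3x3 linear system by exact Gaussian elimination; None if singular."""
    A = [list(map(F, r)) + [F(b)] for r, b in zip(rows, rhs)]
    for col in range(3):
        piv = next((r for r in range(col, 3) if A[r][col] != 0), None)
        if piv is None:
            return None
        A[col], A[piv] = A[piv], A[col]
        A[col] = [x / A[col][col] for x in A[col]]
        for r in range(3):
            if r != col and A[r][col] != 0:
                A[r] = [x - A[r][col] * y for x, y in zip(A[r], A[col])]
    return [A[r][3] for r in range(3)]

def extreme_points(ineqs):
    """Vertices of {p >= 0, sum(p) = 1, g . p <= 0 for every g in ineqs}."""
    gs = list(ineqs) + [(-1, 0, 0), (0, -1, 0), (0, 0, -1)]  # include p_i >= 0
    verts = []
    for g1, g2 in combinations(gs, 2):        # two active constraints per vertex
        p = solve3([(1, 1, 1), g1, g2], [1, 0, 0])
        if p is None:
            continue
        if all(sum(c * x for c, x in zip(g, p)) <= 0 for g in gs) and p not in verts:
            verts.append(p)
    return verts

# e.g. the constraints p1 <= 2*p2, p3 <= 2*p1, p2 <= p3, each written as g . p <= 0
verts = extreme_points([(1, -2, 0), (-2, 0, 1), (0, 1, -1)])
```

For these illustrative constraints the sketch returns the three vertices (2/7, 1/7, 4/7), (1/2, 1/4, 1/4), and (1/5, 2/5, 2/5).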

Once the upper and lower bounds $\overline{E}_{1,i_2,\ldots,i_N}$ and $\underline{E}_{1,i_2,\ldots,i_N}$ ($i_N=1,\ldots,n_{1,i_2,\ldots,i_{N-1},0}$) are all obtained, the upper and lower consequence vectors $\overline{\mathbf{a}}_{1,i_2,\ldots,i_{N-1},0}$ and $\underline{\mathbf{a}}_{1,i_2,\ldots,i_{N-1},0}$ are written as

$$\overline{\mathbf{a}}_{1,i_2,\ldots,i_{N-1},0}=\begin{pmatrix}\overline{E}_{1,i_2,\ldots,i_{N-1},1}\\ \vdots\\ \overline{E}_{1,i_2,\ldots,i_{N-1},n_{1,i_2,\ldots,i_{N-1},0}}\end{pmatrix};\qquad \underline{\mathbf{a}}_{1,i_2,\ldots,i_{N-1},0}=\begin{pmatrix}\underline{E}_{1,i_2,\ldots,i_{N-1},1}\\ \vdots\\ \underline{E}_{1,i_2,\ldots,i_{N-1},n_{1,i_2,\ldots,i_{N-1},0}}\end{pmatrix} \qquad (4.8)$$

Now, we continue the same procedure at level $N-1$ with conditional probability vector $\mathbf{p}_{1,i_2,\ldots,i_{N-1},0}$ and consequence vectors $\overline{\mathbf{a}}_{1,i_2,\ldots,i_{N-1},0}$ and $\underline{\mathbf{a}}_{1,i_2,\ldots,i_{N-1},0}$ in Eq. (4.8). The upper and lower previsions $\overline{E}_{1,i_2,\ldots,i_{N-1},0}$ and $\underline{E}_{1,i_2,\ldots,i_{N-1},0}$ of event $S_{1,i_2,\ldots,i_{N-1},0}$ can be obtained as

$$\overline{E}_{1,i_2,\ldots,i_{N-1},0}=\max\ \overline{\mathbf{a}}_{1,i_2,\ldots,i_{N-1},0}^{T}\,\mathbf{p}_{1,i_2,\ldots,i_{N-1},0};\qquad \underline{E}_{1,i_2,\ldots,i_{N-1},0}=\min\ \underline{\mathbf{a}}_{1,i_2,\ldots,i_{N-1},0}^{T}\,\mathbf{p}_{1,i_2,\ldots,i_{N-1},0}$$

Subject to
$$E_{LOW}\big[f^{m}_{1,i_2,\ldots,i_{N-1},0}\big]\le \big(\mathbf{f}^{m}_{1,i_2,\ldots,i_{N-1},0}\big)^{T}\,\mathbf{p}_{1,i_2,\ldots,i_{N-1},0}\le E_{UPP}\big[f^{m}_{1,i_2,\ldots,i_{N-1},0}\big];\quad m=1,\ldots,\overline{m}$$
$$\mathbf{1}^{T}_{n_{1,i_2,\ldots,i_{N-1},0}}\cdot\mathbf{p}_{1,i_2,\ldots,i_{N-1},0}=1;\qquad p_{1,i_2,\ldots,i_{N-1},j,0}\ge 0;\ j=1,\ldots,n_{1,i_2,\ldots,i_{N-1},0} \qquad (4.9)$$

After this second step, the upper and lower consequence vectors $\overline{\mathbf{a}}_{1,i_2,\ldots,i_{N-2},0,0}$ and $\underline{\mathbf{a}}_{1,i_2,\ldots,i_{N-2},0,0}$ are equal to:

$$\overline{\mathbf{a}}_{1,i_2,\ldots,i_{N-2},0,0}=\begin{pmatrix}\overline{E}_{1,i_2,\ldots,i_{N-2},1,0}\\ \vdots\\ \overline{E}_{1,i_2,\ldots,i_{N-2},n_{1,i_2,\ldots,i_{N-2},0},0}\end{pmatrix};\qquad \underline{\mathbf{a}}_{1,i_2,\ldots,i_{N-2},0,0}=\begin{pmatrix}\underline{E}_{1,i_2,\ldots,i_{N-2},1,0}\\ \vdots\\ \underline{E}_{1,i_2,\ldots,i_{N-2},n_{1,i_2,\ldots,i_{N-2},0},0}\end{pmatrix} \qquad (4.10)$$

Then, one would repeat the analysis at level $N-2$ with conditional probability vector $\mathbf{p}_{1,i_2,\ldots,i_{N-2},0,0}$ and consequence vectors $\overline{\mathbf{a}}_{1,i_2,\ldots,i_{N-2},0,0}$ and $\underline{\mathbf{a}}_{1,i_2,\ldots,i_{N-2},0,0}$. After $N-1$ iterations, one obtains the upper and lower previsions $\overline{E}_{1,0,\ldots,0}$ and $\underline{E}_{1,0,\ldots,0}$ at level 1, which are the upper and lower previsions $\overline{E}$ and $\underline{E}$ of the event tree:

$$\overline{E}=\overline{E}_{1,0,\ldots,0}=\max\ \overline{\mathbf{a}}_{1,0,\ldots,0}^{T}\,\mathbf{p}_{1,0,\ldots,0};\qquad \underline{E}=\underline{E}_{1,0,\ldots,0}=\min\ \underline{\mathbf{a}}_{1,0,\ldots,0}^{T}\,\mathbf{p}_{1,0,\ldots,0}$$
Subject to
$$\mathbf{p}_{1,0,\ldots,0}\in\Psi_{1,0,\ldots,0} \qquad (4.11)$$

Let us now illustrate the above algorithm for Event Tree Analysis with imprecise probabilities by means of an example.

Example 4-1 Consider a leaking water-conveyance tunnel. The analysis is meant to evaluate the failure probability with the initiating event $S_{1,0}$: construction void behind the unreinforced concrete lining. Here 'failure' is defined as either structural collapse failure or service failure of the lining (i.e., wide cracks). Three states are considered for the construction void: (1) large void ($s_{1,1,0}$: diameter $\varnothing>3$ ft); (2) small void ($s_{1,2,0}$: diameter $\varnothing\le 1$ ft); and (3) intermediate size void ($s_{1,3,0}$: diameter 1 ft $<\varnothing\le 3$ ft). Information on the initiating event $S_{1,0}$ is as follows: (1) 10% of voids are either large ($s_{1,1,0}$) or intermediate ($s_{1,3,0}$); (2) 80% of voids are small ($s_{1,2,0}$); (3) the remaining 10% of voids is indeterminate. This information can be condensed in a random set: $(\{s_{1,1,0},s_{1,3,0}\},0.1)$, $(\{s_{1,1,0},s_{1,2,0},s_{1,3,0}\},0.1)$, $(\{s_{1,2,0}\},0.8)$, whose set $\Psi_{1,0}$ is depicted in Figure 4-6(a) and has four vertices $\mathbf{p}^{1,0}_{EXT_1}=(0,0.9,0.1)^T$, $\mathbf{p}^{1,0}_{EXT_2}=(0,1,0)^T$, $\mathbf{p}^{1,0}_{EXT_3}=(0.1,0.8,0.1)^T$, and $\mathbf{p}^{1,0}_{EXT_4}=(0.2,0.8,0)^T$ (Bernardini and Tonon, 2010).


If the void is large, it is possible that structural collapse (Event $S_{1,1}$) occurs because of non-uniform load distribution on the lining. Thus, there are only two outcomes, Yes ($s_{1,1,1}$) or No ($s_{1,1,2}$), with conditional probability evaluated as $0.8\le P(s_{1,1,1}|s_{1,1,0})\le 0.9$. Set $\Psi_{1,1}$ has two extreme points: $\mathbf{p}^{1,1}_{EXT_1}=(0.8,0.2)^T$ and $\mathbf{p}^{1,1}_{EXT_2}=(0.9,0.1)^T$.

If the void is small, although the lining will not collapse, the void may cause some cracks (Event $S_{1,2}$), which may reduce the serviceability. Given the existence of small voids, there are three outcomes for crack development (Event $S_{1,2}$): (1) $s_{1,2,1}$: wide cracks; (2) $s_{1,2,2}$: small cracks; (3) $s_{1,2,3}$: no cracks. Here we assume that only wide cracks will cause leakage through the lining. The following information is available: in the case of a small void, the probability of wide cracks ($s_{1,2,1}$) is less than twice the probability of small cracks ($s_{1,2,2}$); the probability of no cracks ($s_{1,2,3}$) is less than twice the probability of wide cracks ($s_{1,2,1}$); having small cracks ($s_{1,2,2}$) is less probable than having no cracks ($s_{1,2,3}$). This information may be expressed in terms of the following inequalities: $P(s_{1,2,1}|s_{1,2,0})\le 2P(s_{1,2,2}|s_{1,2,0})$; $P(s_{1,2,3}|s_{1,2,0})\le 2P(s_{1,2,1}|s_{1,2,0})$; $P(s_{1,2,2}|s_{1,2,0})\le P(s_{1,2,3}|s_{1,2,0})$. The resulting set $\Psi_{1,2}$ is depicted in Figure 4-6(b) and has three extreme points: $\mathbf{p}^{1,2}_{EXT_1}=(0.29,0.14,0.57)^T$, $\mathbf{p}^{1,2}_{EXT_2}=(0.5,0.25,0.25)^T$, and $\mathbf{p}^{1,2}_{EXT_3}=(0.2,0.4,0.4)^T$.

If the size of the void is intermediate, structural collapse (Event $S_{1,3}$) is still possible, i.e., two outcomes, Yes ($s_{1,3,1}$) or No ($s_{1,3,2}$), are possible with conditional probabilities assigned as $0.5\le P(s_{1,3,1}|s_{1,3,0})\le 0.6$, and $\Psi_{1,3}$ has two vertices: $\mathbf{p}^{1,3}_{EXT_1}=(0.5,0.5)^T$ and $\mathbf{p}^{1,3}_{EXT_2}=(0.6,0.4)^T$.

The event tree is depicted in Figure 4-5. Since we are interested in the probability of failure, we set $a_{1,1,1}=a_{1,2,1}=a_{1,3,1}=1$, and all other $a_{1,i_2,i_3}$ equal to 0.


Figure 4-5: Event-tree in Example 4-1.


Figure 4-6: Example 4-1: sets Ψ1,0 and Ψ1,2 in the 3-dimensional spaces of the probability of the singletons.

To determine the extreme values of the expectation, we first consider the sub-trees for $S_{1,1}$, $S_{1,2}$, and $S_{1,3}$, respectively. In set $\Psi_{1,1}$, the function $a_{1,1,1}p_{1,1,1}+a_{1,1,2}p_{1,1,2}$ achieves its maximum value 0.9 at $\mathbf{p}^{1,1}_{EXT_2}$ and its minimum value 0.8 at $\mathbf{p}^{1,1}_{EXT_1}$. For the sub-tree of $S_{1,2}$, the extreme values of the function $a_{1,2,1}p_{1,2,1}+a_{1,2,2}p_{1,2,2}+a_{1,2,3}p_{1,2,3}$ are 0.5 and 0.2, achieved at $\mathbf{p}^{1,2}_{EXT_2}$ and $\mathbf{p}^{1,2}_{EXT_3}$, respectively. Likewise for $S_{1,3}$: the maximum value of $a_{1,3,1}p_{1,3,1}+a_{1,3,2}p_{1,3,2}$ is 0.6 at $\mathbf{p}^{1,3}_{EXT_2}$ and the minimum value 0.5 at $\mathbf{p}^{1,3}_{EXT_1}$.

Next, we conduct the analysis for $S_{1,0}$. The objective function to be maximized, $0.9\,p_{1,1,0}+0.5\,p_{1,2,0}+0.6\,p_{1,3,0}$, achieves its maximum value (i.e., 0.58) at $\mathbf{p}^{1,0}_{EXT_4}$, and the objective function to be minimized, $0.8\,p_{1,1,0}+0.2\,p_{1,2,0}+0.5\,p_{1,3,0}$, achieves its minimum value 0.2 at $\mathbf{p}^{1,0}_{EXT_2}$.

Therefore, the upper and lower probabilities of leakage are equal to 0.58 and 0.2, respectively; this interval reflects the initial imprecision in the input values.
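Under the Case-2 assumptions of this section, the whole backward recursion of Eqs. (4.6)-(4.11) fits in a few lines. The sketch below (the function name and node layout are ours) reproduces the bounds just derived for Example 4-1:

```python
def tree_bounds(node):
    """Lower/upper prevision of a sub-tree by backward recursion (Eqs. 4.6-4.11).

    `node` is a dict: 'extremes' lists the extreme conditional probability
    vectors of the node's outcome set; 'children' lists, one per outcome,
    either child nodes (dicts) or terminal consequence values (numbers).
    """
    lo_a, up_a = [], []                          # consequence bounds, Eq. (4.8)
    for child in node['children']:
        lo, up = tree_bounds(child) if isinstance(child, dict) else (child, child)
        lo_a.append(lo)
        up_a.append(up)
    # enumerate the extreme conditional distributions, Eq. (4.7)
    lower = min(sum(p * a for p, a in zip(pe, lo_a)) for pe in node['extremes'])
    upper = max(sum(p * a for p, a in zip(pe, up_a)) for pe in node['extremes'])
    return lower, upper

# Example 4-1: bounds on the probability of leakage
tree = {
    'extremes': [(0, 0.9, 0.1), (0, 1, 0), (0.1, 0.8, 0.1), (0.2, 0.8, 0)],
    'children': [
        {'extremes': [(0.8, 0.2), (0.9, 0.1)], 'children': [1, 0]},           # S_{1,1}
        {'extremes': [(0.29, 0.14, 0.57), (0.5, 0.25, 0.25), (0.2, 0.4, 0.4)],
         'children': [1, 0, 0]},                                              # S_{1,2}
        {'extremes': [(0.5, 0.5), (0.6, 0.4)], 'children': [1, 0]},           # S_{1,3}
    ],
}
low, upp = tree_bounds(tree)   # low ≈ 0.2, upp ≈ 0.58, as derived above
```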

4.2.1.2 ETA with total probabilities

In the last sub-section, we discussed the case when the probabilities are given

conditional to the occurrences of upper level events. That is to say, we are confident

about how the occurrence of upper level events affects the probabilities of the lower level

events. However, sometimes this information is not available. If the information is only

given in terms of total probabilities, the analysts have to assume some interaction

between the upper and lower level events, including unknown interaction, epistemic

irrelevance, epistemic independence, strong independence, and uncertain correlation.

To set up the event-tree, one needs to consider that the interactions between events are only partially defined; therefore, the outcomes at the bottom of the event tree should include all combinations of events. Let $S_1,S_2,\ldots,S_m$ be $m$ events, where event $S_i$ has $n_i$ possible states; $S_1$ is the initiating event, and $S_m$ is the event at the bottom of the tree. Then there are a total of $n_1\times n_2\times\cdots\times n_m$ outcomes at the bottom of the event tree, collected in matrix $\mathbf{a}$, whose $(i_1,i_2,\ldots,i_m)$-th entry is $a_{i_1,i_2,\ldots,i_m}$. The probability distribution of all outcomes (i.e., the joint distribution of $S_1$ through $S_m$), $\mathbf{P}$, is also an $n_1\times n_2\times\cdots\times n_m$ matrix, whose $(i_1,i_2,\ldots,i_m)$-th entry is the probability of the $(i_1,i_2,\ldots,i_m)$-th outcome, i.e. $P_{i_1,i_2,\ldots,i_m}$.

Under the assumed interaction between events 1S and 2S , the constraints on the

probabilities of events are as follows (Section 3.3):

(1) Unknown interaction, $\Psi_U$: the marginals of $\mathbf{P}$ must belong to the marginal sets, i.e.

$$\mathbf{P}\cdot\mathbf{1}\in\Psi(S_1);\qquad \mathbf{P}^T\cdot\mathbf{1}\in\Psi(S_2) \qquad (4.12)$$

(2) Epistemic irrelevance, $\Psi_{E|s_i}$: the marginal conditions of Eq. (4.12), and, in addition,

$$\exists\, P^{|s_1}_2\in\Psi(S_2):\ P(s_1\times s_2)=P(s_1)\,P^{|s_1}_2(s_2)\ \ \forall s_1,s_2\quad (i=1\text{ irrelevant to }S_2),\ \text{OR}$$
$$\exists\, P^{|s_2}_1\in\Psi(S_1):\ P(s_1\times s_2)=P(s_2)\,P^{|s_2}_1(s_1)\ \ \forall s_1,s_2\quad (i=2\text{ irrelevant to }S_1) \qquad (4.13)$$

(3) Epistemic independence, $\Psi_E$: the marginal conditions of Eq. (4.12), and both conditions of Eq. (4.13) holding simultaneously (AND instead of OR):

$$\exists\, P^{|s_1}_2\in\Psi(S_2):\ P(s_1\times s_2)=P(s_1)\,P^{|s_1}_2(s_2)\ \ \text{AND}\ \ \exists\, P^{|s_2}_1\in\Psi(S_1):\ P(s_1\times s_2)=P(s_2)\,P^{|s_2}_1(s_1) \qquad (4.14)$$

(4) Strong independence, $\Psi_S$: the marginal conditions of Eq. (4.12), and the conditionals coincide with the marginals, i.e.

$$\forall s_2\in S_2:\ P^{|s_2}_1=P_1 \quad\text{and}\quad \forall s_1\in S_1:\ P^{|s_1}_2=P_2 \qquad (4.15)$$

(5) Uncertain correlation, $\Psi_C$: the marginal conditions of Eq. (4.12), and

$$E(S_1)E(S_2)+\underline{\rho}\,D_{S_1}D_{S_2}\ \le\ E(S_1 S_2)\ \le\ E(S_1)E(S_2)+\overline{\rho}\,D_{S_1}D_{S_2} \qquad (4.16)$$

Various algorithms for finding the extreme points of the set of joint distributions under the different interactions were proposed and exemplified in Chapter 3.

It should be noted that there are two ways to interpret the interactions between events. The first one is to interpret them as pair-wise interactions between Events $S_i$ and $S_j$ ($i\ne j$). The second way is to interpret them as interactions between Event $S_i$ and the new joint event composed of Events $S_{i-1}$ through $S_1$, which has $n_1\times n_2\times\cdots\times n_{i-1}$ possible states. For example, in the event tree shown in Figure 4-5, one may define unknown interaction between Events $S_2$ and $S_1$, and strong independence between Events $S_3$ and $S_2$. One may also define the interactions in the second way: unknown interaction between Events $S_2$ and $S_1$, and strong independence between Event $S_3$ and the joint event composed of $S_2$ and $S_1$, which has $n_2\times n_1$ possible states. These two different ways of defining interactions result in different problem formulations and final results.

If the interactions between events $S_i$ and $S_j$ are defined pair-wise, the optimization problems are rewritten in terms of $\mathbf{P}$ as follows (see Appendix A for explicit formulations):

Minimize (Maximize) $\displaystyle\sum_{i_1=1}^{n_1}\cdots\sum_{i_m=1}^{n_m} a_{i_1,i_2,\ldots,i_m}\,P_{i_1,i_2,\ldots,i_m}$

Subject to
(1) Unknown interaction between $S_i$ and $S_j$: $\mathbf{P}^{i\wedge j}\in\Psi_U$; or
(2) $S_i$ is epistemically irrelevant to $S_j$: $\mathbf{P}^{i\wedge j}\in\Psi_{E|s_i}$; or
(3) Epistemic independence between $S_i$ and $S_j$: $\mathbf{P}^{i\wedge j}\in\Psi_E$; or
(4) Strong independence between $S_i$ and $S_j$: $\mathbf{P}^{i\wedge j}\in\Psi_S$; or
(5) Uncertain correlation between $S_i$ and $S_j$: $\mathbf{P}^{i\wedge j}\in\Psi_C$. (4.17)

where $\Psi_U$, $\Psi_{E|s_i}$, $\Psi_E$, $\Psi_S$, and $\Psi_C$ are defined in Eqs. (4.12)-(4.16) and detailed in Chapter 3 (Sections 3.3.2, 3.3.3, and 3.3.4, respectively), and $\mathbf{P}^{i\wedge j}$ is the joint probability matrix for Events $S_i$ and $S_j$, which may be expressed in terms of $\mathbf{P}$ as follows:

$$P^{i\wedge j}_{\xi_i,\xi_j}=\sum_{\xi_1=1}^{n_1}\cdots\sum_{\xi_{i-1}=1}^{n_{i-1}}\ \sum_{\xi_{i+1}=1}^{n_{i+1}}\cdots\sum_{\xi_{j-1}=1}^{n_{j-1}}\ \sum_{\xi_{j+1}=1}^{n_{j+1}}\cdots\sum_{\xi_m=1}^{n_m} P_{\xi_1,\ldots,\xi_m} \qquad (4.18)$$
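Eq. (4.18) is a plain marginalization of the full joint distribution over all indices other than $i$ and $j$; a minimal sketch (storing $\mathbf{P}$ as a mapping from index tuples to probabilities; the function name is ours):

```python
from itertools import product

def pairwise_marginal(P, shape, i, j):
    """P^{i^j} of Eq. (4.18): sum the full joint P over all indices except i, j."""
    M = {}
    for idx in product(*(range(n) for n in shape)):
        key = (idx[i], idx[j])
        M[key] = M.get(key, 0.0) + P[idx]
    return M

# a uniform joint distribution over three binary events
P = {idx: 0.125 for idx in product(range(2), repeat=3)}
M = pairwise_marginal(P, (2, 2, 2), 0, 2)   # marginal of the 1st and 3rd events
```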

In this case, we solve the problem directly in terms of the $n_1\times n_2\times\cdots\times n_m$ matrix $\mathbf{P}$, and the computational difficulty increases dramatically with the dimensions of the matrix.

Alternatively, the interactions could be defined between Event $S_i$ and the joint event composed of Events $S_{i-1}$ through $S_1$. The solution for this problem proceeds in a top-to-bottom pattern. Here is the general algorithm:

1. Start the calculation from the initiating event $S_1$. Only consider the first two events, $S_1$ and $S_2$. Set $i=2$.

2. According to the assumed interaction between Events $S_{i-1}$ and $S_i$, set up the corresponding constraints for the probability distribution $\mathbf{P}_{comb}$, which is an $(n_1\times n_2\times\cdots\times n_{i-1})\times n_i$ matrix, i.e. the joint distribution of $S_{i-1}$ and $S_i$, as detailed in Chapter 3.

3. Find all extreme distributions of $\mathbf{P}_{comb}$ that satisfy the constraints set up in the last step.

4. Consider Events $S_{i-1}$ and $S_i$ as a joint event $S_{comb}$ with $n_1\times n_2\times\cdots\times n_{i-1}\times n_i$ possible states subject to the probability distribution $\mathbf{P}_{comb}$.

5. Replace $S_{i-1}$ with $S_{comb}$ and replace $S_i$ with $S_{i+1}$. Then set $i=i+1$.

6. Repeat Steps 2 through 5 until $i=m+1$. All extreme distributions of $\mathbf{P}_{comb}$ are then obtained, and $\mathbf{P}=\mathbf{P}_{comb}$.

7. Find the upper and lower bounds of the prevision of the event tree analysis by enumerating the extreme distributions of $\mathbf{P}$.

Example 4-2 illustrates the algorithm for pair-wise interactions in Event Tree Analysis. In Example 4-3, the interactions are defined in the second way, i.e. as interactions with the joint event at the upper level. Comparison of the results in Example 4-2 and Example 4-3 highlights the differences due to the two ways of defining the interaction.

Example 4-2 Consider three fire alarm systems in a tunnel: Systems I, II, and III. If none of them works, the tunnel is in a dangerous situation. Let $S_1$, $S_2$, and $S_3$ denote failure of Systems I, II, and III, respectively. The event tree for the fire alarm system is depicted in Figure 4-7. Considering the imprecision of the alarm sensors, the evacuation system will be activated only when at least two of the alarm systems are activated. The objective of the analysis is to evaluate the probability of failure, which is defined as 'not activating the evacuation system when a fire is really happening in the tunnel'.

Figure 4-7: Event Tree when total probabilities are assigned to all events.

Assume that we have no information about the correlation or interaction between the failures of

Systems I and II. Thus it is safe to model the interaction between failures of the two systems as unknown


interaction. However, we are confident that System III works stochastically independently of System II; thus failures of Systems III and II are strongly independent. As for the failure probability of each system, the available information is $0<P(S_1)<0.1$, $0.05<P(S_2)<0.1$, and $0<P(S_3)<0.15$. Thus, the extreme vertices for $\Psi_1$, $\Psi_2$, and $\Psi_3$ are $\mathbf{p}^{1}_{EXT_1}=(0,1)^T$, $\mathbf{p}^{1}_{EXT_2}=(0.1,0.9)^T$; $\mathbf{p}^{2}_{EXT_1}=(0.05,0.95)^T$, $\mathbf{p}^{2}_{EXT_2}=(0.1,0.9)^T$; $\mathbf{p}^{3}_{EXT_1}=(0,1)^T$, $\mathbf{p}^{3}_{EXT_2}=(0.15,0.85)^T$.

According to the objective of the analysis, let $a_{1,1,1}=a_{1,1,2}=a_{1,2,1}=a_{2,1,1}=1$, and all the other $a$'s equal to 0. The complete optimization problem in Eq. (4.17) reads

Minimize (Maximize) $\displaystyle\sum_{i_1=1}^{2}\sum_{i_2=1}^{2}\sum_{i_3=1}^{2} a_{i_1,i_2,i_3}\,P_{i_1,i_2,i_3}$

Subject to
$$0<\sum_{i_2=1}^{2}\sum_{i_3=1}^{2}P_{1,i_2,i_3}<0.1;\qquad 0.05<\sum_{i_1=1}^{2}\sum_{i_3=1}^{2}P_{i_1,1,i_3}<0.1;\qquad 0<\sum_{i_1=1}^{2}\sum_{i_2=1}^{2}P_{i_1,i_2,1}<0.15$$
$$\left.\begin{aligned} &P_{1,i_2,i_3}+P_{2,i_2,i_3}=\big(P_{1,i_2,1}+P_{2,i_2,1}+P_{1,i_2,2}+P_{2,i_2,2}\big)\cdot\big(P_{1,1,i_3}+P_{2,1,i_3}+P_{1,2,i_3}+P_{2,2,i_3}\big);\\ &i_2=1,2;\ i_3=1,2 \end{aligned}\right\}\ \begin{aligned}&\text{Strong independence}\\&\text{between }S_2\text{ and }S_3\end{aligned}$$
$$\sum_{i_1=1}^{2}\sum_{i_2=1}^{2}\sum_{i_3=1}^{2}P_{i_1,i_2,i_3}=1;\qquad P_{i_1,i_2,i_3}\ge 0 \qquad (4.19)$$

Under the assumptions of unknown interaction between $S_1$ and $S_2$ and strong independence between $S_2$ and $S_3$, the upper and lower failure probabilities are equal to 0.115 and 0, respectively; the solutions are detailed in Table 4-1. One can check that the joint distribution of $S_2$ and $S_3$ satisfies the assumption of strong independence, i.e., $\mathbf{P}^{2\wedge 3}=\mathbf{p}^{2}\,(\mathbf{p}^{3})^{T}$.

Page 130: Copyright by Xiaomin You 2010

113

Table 4-1: Solutions of the optimization problems (4.19) for the upper and lower probabilities of failure.

     | $\mathbf{P}=(P_{1,1,1};P_{1,1,2};P_{1,2,1};P_{1,2,2};P_{2,1,1};P_{2,1,2};P_{2,2,1};P_{2,2,2})$ | $\mathbf{p}^1$ | $\mathbf{p}^2$ | $\mathbf{p}^3$ | $\mathbf{P}^{2\wedge 3}$ | $\sum a_{i_1,i_2,i_3}P_{i_1,i_2,i_3}$
 max | (0; 0.015; 0.085; 0; 0.015; 0.12; 0; 0.765) | $(0.1,0.9)^T$ | $(0.1,0.9)^T$ | $(0.15,0.85)^T$ | $\begin{pmatrix}0.015&0.085\\0.135&0.765\end{pmatrix}$ | 0.115
 min | (0; 0; 0; 0.1; 0; 0; 0.05; 0.85) | $(0.1,0.9)^T$ | $(0.05,0.95)^T$ | $(0,1)^T$ | $\begin{pmatrix}0&0.05\\0&0.95\end{pmatrix}$ | 0

Example 4-3 Consider again the situation and knowledge available in Example 4-2. Suppose that now the interactions are defined in the second manner, i.e. as an interaction between an event and the joint event at its upper level. We still assume unknown interaction between $S_1$ and $S_2$, but we assume strong independence between $S_3$ and the joint event $S_{comb}$ composed of $S_1$ and $S_2$. We start by finding all the extreme points of the set of joint distributions for $S_1$ and $S_2$, subject to the linear constraints on $\mathbf{p}^1$ and $\mathbf{p}^2$ shown in Eq. (4.20) below. All 7 extreme distributions of $\mathbf{P}_{comb}$ are listed in Table 4-2.

$$0<P_{1,1}+P_{1,2}<0.1;\qquad 0.05<P_{1,1}+P_{2,1}<0.1;\qquad \sum_{i=1}^{2}\sum_{j=1}^{2}P_{i,j}=1;\qquad P_{i,j}\ge 0 \qquad (4.20)$$

Theorem 3-5 regarding the extreme joint distributions in Chapter 3 states: "Under strong independence, the set of extreme joint distributions (measures) is the set of product distributions (measures), each taken from the extreme distributions (measures) of the marginals". Accordingly, the extreme points for the joint distributions of $S_{comb}$ and $S_3$ are found by taking the product of one of the 7 extreme joint distributions in Table 4-2 and one of the two extreme distributions of $S_3$ at a time. A total of 14 different extreme joint distributions are obtained for $S_{comb}$ and $S_3$, as listed in Table 4-3. By enumerating all the extreme points in Table 4-3, the maximum and minimum of the objective function are found to be 0.1 and 0, respectively, which are different from the results obtained in Example 4-2 (i.e., 0.115 and 0) because of the two different interpretations of the interaction between events.

Table 4-2: Extreme points of the set of joint distributions of $S_1$ and $S_2$.

Extreme points of Ψcomb

P1,1 P1,2 P2,1 P2,2

1 0.10 0.00 0.00 0.90

2 0.00 0.00 0.05 0.95

3 0.05 0.00 0.00 0.95

4 0.00 0.00 0.10 0.90

5 0.00 0.1 0.05 0.85

6 0.05 0.05 0.00 0.90

7 0.00 0.10 0.10 0.80

Table 4-3: Extreme points of the set $\Psi$ of joint distributions for $S_{comb}$ and $S_3$.

 Extreme points of $\Psi$: $P_{1,1,1}$ $P_{1,1,2}$ $P_{1,2,1}$ $P_{1,2,2}$ $P_{2,1,1}$ $P_{2,1,2}$ $P_{2,2,1}$ $P_{2,2,2}$ | $\sum a_{i_1,i_2,i_3}P_{i_1,i_2,i_3}$
 1   0.000  0.000  0.000  0.000  0.100  0.000  0.000  0.900  |  0.100
 2   0.000  0.000  0.000  0.000  0.000  0.000  0.050  0.950  |  0.000
 3   0.000  0.000  0.000  0.000  0.050  0.000  0.000  0.950  |  0.050
 4   0.000  0.000  0.000  0.000  0.000  0.000  0.100  0.900  |  0.000
 5   0.000  0.000  0.000  0.000  0.000  0.100  0.050  0.850  |  0.000
 6   0.000  0.000  0.000  0.000  0.050  0.050  0.000  0.900  |  0.050
 7   0.000  0.000  0.000  0.000  0.000  0.100  0.100  0.800  |  0.000
 8   0.015  0.000  0.000  0.135  0.085  0.000  0.000  0.765  |  0.100
 9   0.000  0.000  0.008  0.143  0.000  0.000  0.043  0.808  |  0.008
 10  0.008  0.000  0.000  0.143  0.043  0.000  0.000  0.808  |  0.050
 11  0.000  0.000  0.015  0.135  0.000  0.000  0.085  0.765  |  0.015
 12  0.000  0.015  0.008  0.128  0.000  0.085  0.043  0.723  |  0.023
 13  0.008  0.008  0.000  0.135  0.043  0.043  0.000  0.765  |  0.058
 14  0.000  0.015  0.015  0.120  0.000  0.085  0.085  0.680  |  0.030
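Per Theorem 3-5, Table 4-3 can be generated mechanically by multiplying each extreme joint of Table 4-2 by each extreme distribution of S3 and evaluating the failure objective; a sketch (variable names are ours):

```python
ext_comb = [  # Table 4-2: extreme joints (P11, P12, P21, P22) of S1 and S2
    (0.10, 0.00, 0.00, 0.90), (0.00, 0.00, 0.05, 0.95),
    (0.05, 0.00, 0.00, 0.95), (0.00, 0.00, 0.10, 0.90),
    (0.00, 0.10, 0.05, 0.85), (0.05, 0.05, 0.00, 0.90),
    (0.00, 0.10, 0.10, 0.80),
]
ext_s3 = [(0.0, 1.0), (0.15, 0.85)]   # the two extreme distributions of S3

def objective(joint):
    """Failure value: a = 1 for outcomes (1,1,1), (1,1,2), (1,2,1), (2,1,1)."""
    p111, p112, p121, p122, p211, p212, p221, p222 = joint
    return p111 + p112 + p121 + p211

# products of extreme marginals: entries ordered (P111, P112, ..., P222)
joints = [tuple(pc * p3 for pc in comb for p3 in s3)
          for comb in ext_comb for s3 in ext_s3]
vals = [objective(j) for j in joints]
```

Enumerating `vals` over the 14 product distributions gives the maximum 0.1 and minimum 0 reported above.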


4.2.1.3 ETA with combination of conditional probabilities and total probabilities

Suppose that the information on some events in the tree is available in terms of conditional probabilities, and that the information on the remaining events is given in terms of total probabilities with partially known interaction with the upper events, i.e. unknown interaction, epistemic irrelevance, epistemic independence, strong independence, or uncertain correlation. The algorithm for this case is as follows:

1. Start the calculation from the events on which total probabilities are assigned, for example $S_i$ and $S_j$.

2. Set up the constraints based on the defined interaction, and find all extreme distributions of $\mathbf{P}_{comb,ij}$, the joint distribution of $S_i$ and $S_j$.

3. Replace Events $S_i$ and $S_j$ in the event tree with the joint event $S_{comb,ij}$, whose probability distribution is defined by $\mathbf{P}_{comb,ij}$.

4. Proceed with the analysis on the equivalent tree, where information is now only given in terms of probabilities conditional on the occurrence of upper level events.

As an example, consider the event tree shown in Figure 4-8. The available information assigns the probabilities of $S_5$ conditional to $s_{1,1}$, and the probabilities of $S_2$ through $S_4$ conditional to the occurrence of their upper level events, respectively. As for the interaction between $S_2$ and $S_5$, it is assumed that they are strongly independent. Since $S_2$ and $S_5$ are strongly independent, we first combine them into a new event $S_{comb}$, which has 4 outcomes. Then we find all extreme distributions of $\mathbf{P}_{comb}$ conditional to $S_1$, as shown in Figure 4-9.


Figure 4-8: Event Tree with mixed information consisting of conditional probabilities and total probabilities.


Figure 4-9: Event Tree equivalent to the tree in Figure 4-8 that contains only probabilities conditional to the upper level events.

4.2.2 Fault Tree Analysis

In Fault Tree Analysis, the failure event (major fault) is logically connected with the sub-events (alternative faults) by gates. Here we only consider two basic gates: OR- and AND-gates (see Figure 4-10 and Figure 4-11). Any event in the fault tree has only two possible states: occurrence or non-occurrence. The occurrence probabilities of the failure events are assumed to be given imprecisely, i.e. as interval probabilities $[P_{LOW},P_{UPP}]$.


In the probability evaluation for Figure 4-10 and Figure 4-11, let $\mathbf{P}$ be the joint probability distribution of sub-events $E_1$ through $E_n$; thus $\mathbf{P}$ is a matrix of dimensions $2\times 2\times\cdots\times 2$ ($n$ times), where the $i$-th index indicates the state of Event $E_i$; let the subscript '1' denote occurrence, and '2' denote non-occurrence. For the OR-gate in Figure 4-10, the occurrence probability of the failure event $E$ is

$$P(E)=1-P_{2,2,\ldots,2} \qquad (4.21)$$

where $P_{2,2,\ldots,2}$ is the probability that none of the $n$ events occurs, and thus $1-P_{2,2,\ldots,2}$ is the probability that the complementary event (at least one of the $n$ events) occurs.

For the AND-gate in Figure 4-11, the occurrence probability of the event $E$ is

$$P(E)=P_{1,1,\ldots,1} \qquad (4.22)$$

where $P_{1,1,\ldots,1}$ is the probability that all the $n$ events occur.

Figure 4-10: Sub-tree with OR-gate.


Figure 4-11: Sub-tree with AND-gate.

Let $\mathbf{P}^{i\wedge j}$ be the joint probability distribution of sub-events $E_i$ and $E_j$, which can be written in terms of the joint distribution $\mathbf{P}$ of $E_1$ through $E_n$:

$$P^{i\wedge j}_{\xi_i,\xi_j}=\sum_{\xi_1=1}^{2}\cdots\sum_{\xi_{i-1}=1}^{2}\ \sum_{\xi_{i+1}=1}^{2}\cdots\sum_{\xi_{j-1}=1}^{2}\ \sum_{\xi_{j+1}=1}^{2}\cdots\sum_{\xi_n=1}^{2} P_{\xi_1,\ldots,\xi_n} \qquad (4.23)$$

According to the assumed interaction between $E_i$ and $E_j$, the joint distribution $\mathbf{P}^{i\wedge j}$ should fall in one of the following cases:

1. Unknown interaction: $\mathbf{P}^{i\wedge j}\in\Psi_U$; or
2. Epistemic irrelevance: $\mathbf{P}^{i\wedge j}\in\Psi_{E|s_i}$; or
3. Epistemic independence: $\mathbf{P}^{i\wedge j}\in\Psi_E$; or
4. Strong independence: $\mathbf{P}^{i\wedge j}\in\Psi_S$; or
5. Uncertain correlation: $\mathbf{P}^{i\wedge j}\in\Psi_C$.

Take three sub-events $E_1$, $E_2$, and $E_3$ as an example. Assume epistemic independence between $E_1$ and $E_2$ and strong independence between $E_2$ and $E_3$. The lower and upper bounds on $P_{1,1,1}$ and $P_{2,2,2}$ are obtained by solving the optimization problems below.

Minimize (Maximize) $P_{1,1,1}$ (respectively, $P_{2,2,2}$)
Subject to
$$\mathbf{P}^{1\wedge 2}\in\Psi_E;\qquad \mathbf{P}^{2\wedge 3}\in\Psi_S;\qquad \sum_{i=1}^{2}\sum_{j=1}^{2}\sum_{l=1}^{2}P_{i,j,l}=1;\qquad \mathbf{P}\ge\mathbf{0} \qquad (4.24)$$

Then the lower and upper probabilities for event $E$ are obtained by inserting the lower and upper $P_{1,1,1}$ and $P_{2,2,2}$ into Eqs. (4.22) and (4.21), respectively.

The objective of Fault Tree Analysis is to determine the upper and lower bounds of the failure probability of the top event. Fault Tree Analysis with imprecise probabilities proceeds in a bottom-to-top pattern:

1. Start the analysis from the bottom of the fault tree, where the basic failure events cannot be decomposed further.

2. Set up the constraints according to the defined interaction between sub-events (Section 3.4). Express the occurrence probability $P(E)$ of the upper level event by Eq. (4.21) or (4.22).

3. Calculate $P_{1,\ldots,1}$ or $P_{2,\ldots,2}$ as described in Eq. (4.24).

4. Determine the upper and lower bounds of $P(E)$ for the upper level event.

5. Repeat Steps 2 through 4 with all events in the same sub-tree until the top event is included.

It is important to notice that epistemic irrelevance, epistemic independence, and strong independence all lead to the same results in the fault tree analysis. Here we explain this for the case of Figure 4-10. In Eq. (4.21):

$$\max P(E)=\max\big(1-P_{2,2,\ldots,2}\big)=1-\min P_{2,2,\ldots,2} \qquad (4.25)$$

and thus

$$\min P(E)=1-\max P_{2,2,\ldots,2} \qquad (4.26)$$

Page 138: Copyright by Xiaomin You 2010

121

By observing Eqs. (4.25) and (4.26), we find that the optimization problems of maximizing (and minimizing) $P(E)$ are equivalent to the problems of minimizing (and maximizing) $P_{2,2,\ldots,2}$. It should be noted that $P_{2,2,\ldots,2}$ is an element of matrix $\mathbf{P}$. Under either epistemic irrelevance or strong independence, any element of $\mathbf{P}$ can be written as a product of marginals. For example, if $n=2$ and Event $E_1$ is epistemically irrelevant to Event $E_2$, the $(2,2)$-th component of the joint distribution $\mathbf{P}$, $P_{2,2}$, is equal to $p^1_2\cdot p^2_2$. Therefore, under epistemic irrelevance or strong independence, the extreme values of $P_{2,2,\ldots,2}$ coincide and are obtained as

$$\max P_{2,2,\ldots,2}=\prod_{i=1}^{n}\max\big(p^i_2\big);\qquad \min P_{2,2,\ldots,2}=\prod_{i=1}^{n}\min\big(p^i_2\big) \qquad (4.27)$$

where $p^i_2$ is the 2nd component of the probability vector $\mathbf{p}^i$ for event $E_i$.

Since $\Psi_{E|s_i}\supseteq\Psi_E\supseteq\Psi_S$ (Section 3.4),

$$P^{|s_i}_{E,low}\le P_{E,low}\le P_{S,low}\le P_{S,upp}\le P_{E,upp}\le P^{|s_i}_{E,upp} \qquad (4.28)$$

Since Eq. (4.27) is true under epistemic irrelevance and strong independence,

$$P^{|s_i}_{E,low}=P_{S,low}\quad\text{and}\quad P^{|s_i}_{E,upp}=P_{S,upp} \qquad (4.29)$$

From Eqs. (4.28) and (4.29), one obtains:

$$P^{|s_i}_{E,low}=P_{E,low}=P_{S,low} \qquad (4.30)$$
$$P^{|s_i}_{E,upp}=P_{E,upp}=P_{S,upp} \qquad (4.31)$$

Likewise for the case of Figure 4-11 and Eq. (4.22).
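Because of Eqs. (4.27)-(4.31), gates whose sub-events are epistemically irrelevant, epistemically independent, or strongly independent can be propagated with plain interval arithmetic on the marginal bounds; a sketch (function names are ours):

```python
from math import prod

def or_gate(bounds):
    """Interval [P_low, P_upp] of P(E) for an OR-gate over independent sub-events.

    Follows Eqs. (4.21) and (4.27): P(E) = 1 - P_{2,...,2}, and the extremes of
    P_{2,...,2} are products of the marginal extremes of non-occurrence.
    """
    low = 1.0 - prod(1.0 - lo for lo, _ in bounds)
    upp = 1.0 - prod(1.0 - up for _, up in bounds)
    return low, upp

def and_gate(bounds):
    """Interval of P(E) for an AND-gate over independent sub-events (Eq. 4.22)."""
    return prod(lo for lo, _ in bounds), prod(up for _, up in bounds)

# two sub-events with hypothetical interval probabilities [0.1, 0.2] and [0.05, 0.1]
low_or, upp_or = or_gate([(0.1, 0.2), (0.05, 0.1)])
low_and, upp_and = and_gate([(0.1, 0.2), (0.05, 0.1)])
```

For these numbers the OR-gate yields [0.145, 0.28] and the AND-gate [0.005, 0.02].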

Example 4-4 Let us consider the example of the fault tree adapted from Eskesen (2004) (Fig. 2), where the failure of a toll sub-sea tunnel project (Event $E$) is caused by two sub-events: technical failure (Event $E_1$) or economical failure (Event $E_2$), which are here assumed to be strongly independent. Technical failure may happen due to the occurrence of at least one of two epistemically independent events:

(1) total collapse: seawater fills the tunnel (Event $E_{1,1}$), as a result of the occurrence of both too small rock cover (Event $E_{1,1,1}$) and insufficient investigations (Event $E_{1,1,2}$). $E_{1,1,1}$ and $E_{1,1,2}$ are here assumed to be linked by uncertain correlation with coefficient $\rho\in[0.6,0.8]$;

(2) the tunnel cannot be built (Event $E_{1,2}$) because of difficult rock conditions (Event $E_{1,2,1}$) and poor investigation (Event $E_{1,2,2}$) occurring at the same time. Events $E_{1,2,1}$ and $E_{1,2,2}$ are assumed to be linked by unknown interaction.

Figure 4-12: Fault tree analysis for the failure probability of sub-sea tunnel project with imprecise probabilities.

The economical failure is triggered by two strongly independent events: either too small toll revenue (Event $E_{2,1}$) or too high construction and maintenance costs (Event $E_{2,2}$). The structure of the fault tree is the same as in Figure 4-2, but the probabilities of the events at the bottom of the fault tree are assigned as imprecise probabilities, as shown in Figure 4-12. Our objective is to determine the upper and lower probabilities of failure of the sub-sea tunnel project, given the assumed interactions and imprecise probabilities in the fault tree.

We first consider the sub-tree for Event $E_{1,1}$ together with sub-events $E_{1,1,1}$ and $E_{1,1,2}$, and determine the bounds on the occurrence probability of Event $E_{1,1}$ by solving the optimization problems (4.32), written in terms of the joint distribution $\mathbf{P}$ of $E_{1,1,1}$ and $E_{1,1,2}$ (Section 3.2.3):

Minimize (Maximize) $P_{1,1}$
Subject to
$$0.01<P_{1,1}+P_{1,2}<0.05;\qquad 0<P_{1,1}+P_{2,1}<0.01$$
$$\sum_{i=1}^{2}\sum_{j=1}^{2}P_{i,j}=1;\qquad \mathbf{P}\ge\mathbf{0}$$
$$E\big(E_{1,1,1}E_{1,1,2}\big)\le E\big(E_{1,1,1}\big)E\big(E_{1,1,2}\big)+0.8\,D_{E_{1,1,1}}D_{E_{1,1,2}}$$
$$E\big(E_{1,1,1}\big)E\big(E_{1,1,2}\big)+0.6\,D_{E_{1,1,1}}D_{E_{1,1,2}}\le E\big(E_{1,1,1}E_{1,1,2}\big) \qquad (4.32)$$

where (Section 4.2.3):

( ) ( ) 1,1 1,21,1,1 1,1 1,2

2,1 2,21 0

P PE E P P

P P+⎛ ⎞

= = +⎜ ⎟+⎝ ⎠;

( ) ( ) 1,1 2,11,1,2 1,1 2,1

1,2 2,21 0

P PE E P P

P P+⎛ ⎞

= = +⎜ ⎟+⎝ ⎠

( )1,1,1 1,1,2 1,11 00 0

E E E P⎛ ⎞

= =⎜ ⎟⎝ ⎠

P , where , ,i j i ji j

P a= ∑∑P a ;

( )( )1,1,1 1,1 1,2 2,1 2,2D P P P P= + + ;

( )( )1,1,2 1,1 2,1 1,2 2,2D P P P P= + + .

The extreme values of the occurrence probability of E1,1 are found to be 0.01 and 0.0036. Detailed

solutions are listed in Table 4-4.


Table 4-4: Solutions for the optimization problems (4.32) for the upper and lower probabilities of Event E1,1.

        P: joint dist. of E1,1,1 and E1,1,2    P(E1,1,1)   P(E1,1,2)   P(E1,1) = P1,1
  max   (0.01    0.0055 ; 0   0.9845)          0.0155      0.01        0.01
  min   (0.0036  0.0064 ; 0   0.99)            0.01        0.0036      0.0036
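The correlation constraint in (4.32) can be checked numerically. Below is a minimal pure-Python sketch (my own illustration, not the dissertation's code) that recomputes the marginals and the correlation coefficient for the two extreme joint matrices reported in Table 4-4; both coefficients land on the bounds of [0.6, 0.8]:

```python
import math

def stats(P):
    """Marginals, covariance, and correlation of two Bernoulli events
    from a 2x2 joint distribution P[i][j] (index 0 = event occurs)."""
    p1 = P[0][0] + P[0][1]          # P(E1,1,1)
    p2 = P[0][0] + P[1][0]          # P(E1,1,2)
    cov = P[0][0] - p1 * p2         # P(both) - P(E1,1,1) P(E1,1,2)
    rho = cov / math.sqrt(p1 * (1 - p1) * p2 * (1 - p2))
    return p1, p2, rho

# Extreme joint distributions reported in Table 4-4
P_max = [[0.01, 0.0055], [0.0, 0.9845]]
P_min = [[0.0036, 0.0064], [0.0, 0.99]]

p1, p2, rho = stats(P_max)
print(round(p1, 4), round(p2, 4), round(rho, 2))   # 0.0155 0.01 0.8
p1, p2, rho = stats(P_min)
print(round(p1, 4), round(p2, 4), round(rho, 2))   # 0.01 0.0036 0.6
```

Both solutions thus sit exactly on the correlation bounds, as expected for a linear objective over this feasible set.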

Next, we calculate the bounds on the probability of Event E1,2; the optimization problems read as follows (Section 3.2.1):

Minimize (Maximize) $P_{1,1}$

Subject to:
$$0 \le P_{1,1} + P_{1,2} \le 0.02; \quad 0 \le P_{1,1} + P_{2,1} \le 0.1;$$
$$\sum_{i=1}^{2}\sum_{j=1}^{2} P_{i,j} = 1; \quad P_{i,j} \ge 0 \qquad (4.33)$$

where matrix P is now the joint distribution of E1,2,1 and E1,2,2. Solutions detailed in Table 4-5 show that the upper and lower occurrence probabilities for E1,2 are equal to 0.02 and 0, respectively.

Table 4-5: Solutions for the optimization problems (4.33) for the upper and lower probabilities of Event E1,2.

        P: joint dist. of E1,2,1 and E1,2,2    P(E1,2,1)   P(E1,2,2)   P(E1,2) = P1,1
  max   (0.02  0 ; 0  0.98)                    0.02        0.02        0.02
  min   (0     0 ; 0  1)                       0           0           0

The next step is to determine the extreme values of the occurrence probability of Event E1. Replace the probabilities of E1,1 and E1,2 with the intervals [0.0036, 0.01] and [0, 0.02], respectively. Since E1 is connected with E1,1 and E1,2 by an OR-gate in Figure 4-12, the optimization problems are

Minimize (Maximize) $1 - P_{2,2}$

Subject to:
$$0.0036 \le P_{1,1} + P_{1,2} \le 0.01; \quad 0 \le P_{1,1} + P_{2,1} \le 0.02;$$
$$\mathbf{P} \in \Psi^{E}; \quad \sum_{i=1}^{2}\sum_{j=1}^{2} P_{i,j} = 1; \quad P_{i,j} \ge 0 \qquad (4.34)$$

where matrix P is the joint distribution of E1,1 and E1,2, and $\Psi^{E}$ is the set of joint distributions consistent with epistemic independence.

The upper and lower probabilities for E1 in (4.34) are 0.0298 and 0.0036, respectively. By observing the solutions shown in Table 4-6, it is easy to check that both the maximum and the minimum solutions belong to set $\Psi^{S}$, the set of joint distributions under strong independence. For example, the maximal solution

$$\mathbf{P} = \begin{pmatrix} 0.0002 & 0.0098 \\ 0.0198 & 0.9702 \end{pmatrix} = \begin{pmatrix} 0.01 \\ 0.99 \end{pmatrix}\begin{pmatrix} 0.02 & 0.98 \end{pmatrix}$$

can be written as the product of two extreme distributions of E1,1 and E1,2, respectively. Thus, in the fault tree analysis there is no difference between epistemic independence and strong independence. This observation is consistent with the previous theoretical derivation.

Table 4-6: Solutions for the optimization problems (4.34) for the upper and lower probabilities of Event E1.

        P: joint dist. of E1,1 and E1,2       P(E1,1)   P(E1,2)   P(E1)
  max   (0.0002  0.0098 ; 0.0198  0.9702)     0.01      0.02      0.0298
  min   (0       0.0036 ; 0       0.9964)     0.0036    0         0.0036
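The strong-independence observation can be verified directly: outer products of the extreme marginal distributions reproduce the extreme joint matrices of Table 4-6. A short illustrative sketch:

```python
# Outer products of extreme marginal distributions of E1,1 and E1,2
# reproduce the extreme joint matrices of Table 4-6 (strong independence).
def outer(u, v):
    return [[round(ui * vj, 4) for vj in v] for ui in u]

P_max = outer([0.01, 0.99], [0.02, 0.98])     # upper extreme marginals
P_min = outer([0.0036, 0.9964], [0.0, 1.0])   # lower extreme marginals

print(P_max)                       # [[0.0002, 0.0098], [0.0198, 0.9702]]
print(round(1 - P_max[1][1], 4))   # 0.0298
print(round(1 - P_min[1][1], 4))   # 0.0036
```

Since the OR-gate probability is 1 − P2,2, the independence bounds follow immediately from the product structure.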

As for the sub-tree of Event E2, we determine the bounds on the occurrence probability by first multiplying all the extreme distributions of E2,1 by all the extreme probability distributions of E2,2 (Section 3.3.3), and then by calculating P(E2) = 1 − P2,2 on all the extreme joint distributions, as shown in Table 4-7. The upper and lower occurrence probabilities for E2 are 0.012 and 0, respectively.

Table 4-7: Extreme joint distributions of E2,1 and E2,2.

  Extreme points of Ψcomb
  m     P1,1    P1,2    P2,1    P2,2    P(E2)
  1     0.000   0.000   0.000   1.000   0.000
  2     0.000   0.000   0.010   0.990   0.010
  3     0.000   0.002   0.000   0.998   0.002
  4     0.000   0.002   0.010   0.988   0.012

Finally, we come to the top event E of the fault tree, i.e., the failure of the sub-sea tunnel project. The extreme values of the occurrence probability are obtained by the same procedure as for E2. Table 4-8 lists all extreme points with their values of P(E); the upper and lower occurrence probabilities for E are 0.041 and 0.004, respectively.

Table 4-8: Extreme joint distributions of E1 and E2.

  Extreme points of Ψcomb
  m     P1,1    P1,2    P2,1    P2,2    P(E)
  1     0.000   0.004   0.000   0.996   0.004
  2     0.000   0.030   0.000   0.970   0.030
  3     0.000   0.004   0.012   0.984   0.016
  4     0.000   0.029   0.012   0.959   0.041
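Because both pairs behave as under strong independence, the bounds in Table 4-8 can be reproduced by enumerating the interval endpoints of P(E1) and P(E2) and applying the OR-gate identity P(E) = 1 − (1 − P(E1))(1 − P(E2)). A minimal sketch (an illustrative shortcut, equivalent to multiplying the extreme joint distributions):

```python
# Enumerate the four endpoint combinations of P(E1) and P(E2);
# E = E1 OR E2 under strong independence.
from itertools import product

E1_bounds = (0.0036, 0.0298)
E2_bounds = (0.0, 0.012)

values = [round(1 - (1 - p1) * (1 - p2), 3)
          for p1, p2 in product(E1_bounds, E2_bounds)]
print(values)                      # [0.004, 0.016, 0.03, 0.041]
print(min(values), max(values))    # 0.004 0.041
```

The four values match the P(E) column of Table 4-8 up to rounding.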

4.2.3 Combination of Event Tree Analysis and Fault Tree Analysis

Usually, it is difficult to directly evaluate the occurrence probability of some events at the bottom of the fault tree, or such events cannot be decomposed further due to the lack of knowledge. Then, one may consider an event tree analysis to evaluate the probabilities of those events and input the calculated upper and lower probabilities into the fault tree analysis.

For example, consider Event E1,2,2 "Poor investigation" in the fault tree of Example 4-4. It might not be easy to directly evaluate the probability of occurrence of this event, but one may first consider several common initiating events for E1,2,2 and then obtain the probability bounds of E1,2,2 by an event tree analysis as explained in Section 4.2.1.

It is common to combine Fault Tree Analysis with Event Tree Analysis in risk analysis with precise probabilities, and the procedure for analysis with imprecise probabilities is not much different: one simply carries out the analysis step by step for each Fault Tree and Event Tree with imprecise probabilities as explained in the previous two sub-sections, 4.2.1 and 4.2.2.

4.3 DECISION ANALYSIS WITH IMPRECISE PROBABILITIES

In conventional decision analysis, precise values are assigned to evaluate the occurrence probabilities of events. By calculating the expected utility on each decision branch, the branch with the maximal value is chosen and kept in further analysis. However, as explained previously (Section 2.1), when the information is limited, assigning precise probabilities may not be sensible or practical, and imprecise probabilities must then be used to describe the uncertainty. In this section, the author develops a methodology for decision analysis with imprecise probabilities.


4.3.1 Standard form of decision tree

A decision tree consists of decision nodes, chance nodes, and end nodes. Typically, a decision node is represented by a square, with branches representing feasible alternatives, choices, or options; a chance node is represented by a circle, from which possible outcomes radiate; an end node indicates a solution where the decision is made and the uncertainty has been resolved. Huntley and Troffaes (2008) introduce the notation used here and suggest a standard form of decision trees, where

- A decision tree must start from a decision node;

- Successive decision nodes must be separated by a chance node;

- Successive chance nodes must be combined into one chance node;

- The number of nodes must be the same for all paths; otherwise, dummy decision and chance nodes are added to the shorter paths.

This standard form is used because it enables us to discuss algorithms and methodologies for decision analysis based on a general format of the decision tree. In the following subsections, all decision trees are in the standard form.

Below is an example of a decision tree and its standard form. In the standard form, a circle represents a chance node labeled $S$ and a square represents a decision node labeled $\varsigma$. When at a decision node, we may refer to a choice also as a gamble; for example, in Figure 4-13b at decision node $\varsigma_1^1$, if the decision maker chooses option $d_{11}^1$, then in terms of gambles, the decision maker chooses gamble $X_{11}^1$, whose value is $a_{11}^{11}$ if $E_{11}^{11}$ occurs, and $a_{11}^{12}$ if $E_{11}^{12}$ occurs.


Figure 4-13: (a) Decision tree, (b) its standard form (Huntley et al. 2008), and (c) reduced decision tree with only the optimal choices.

In Figure 4-13(b), $d_1$ and $d_2$ are the decisions at the root node, and $E_i^1$, $E_i^2$, … are the events at chance node $S_i$. In conventional decision analysis with precise probabilities, by calculating the expected utilities on each decision branch, a unique optimal choice is selected because of its maximal expected utility value. In Figure 4-13(c), the reduced decision tree shows only the optimal choices when the following conditions are satisfied in sequence: (1) $d_{11}^1$ is preferred to $d_{12}^1$ given $E_1^1$; (2) $d_{12}^2$ is preferred to $d_{11}^2$ given $E_1^2$; and (3) $d_1$ is preferred to $d_2$. However, when the input consists of imprecise probabilities, the optimal choice might not be unique, or it may even be indeterminate, as explained in Section 2.6 and illustrated by Example 4-5 in the context of decision trees.


4.3.2 Algorithm of decision analysis with imprecise probabilities

Let $\varsigma_*^*$ and $S_*^{*i}$ be a decision node and a chance node, respectively. Here we use the superscript '*' and the subscript '*' to refer to any indices, and thus $S_*^{*i}$ means the i-th chance node at decision node $\varsigma_*^*$. Let $A_*^*$ be the union of all events leading to decision node $\varsigma_*^*$ and chance node $S_*^{*i}$; let $N_*^*$ and $N_*^{*i}$ be the normal forms of the sub-trees at decision node $\varsigma_*^*$ and chance node $S_*^{*i}$, respectively; and let $E_*^{*ij}$ be the j-th event at chance node $S_*^{*i}$. Here we also use $E_*^{*ij}$ to refer to its characteristic function, i.e., $E_*^{*ij} = 1$ if the event occurs; otherwise, $E_*^{*ij} = 0$. Let opt(·) be the set of optimal choices. Huntley and Troffaes (2008) show that at a decision node $\varsigma_*^*$, the set of optimal choices (or gambles) is a subset of the union of the sets of optimal choices at its children $S_*^{*i}$:

$$opt\left(N_*^* \mid A_*^*\right) \subseteq \bigcup_i opt\left(N_*^{*i} \mid A_*^{*i}\right) \qquad (4.35)$$

and that at an intermediate chance node $S_*^{*i}$, any optimal choice can be written in the form $\sum_j E_*^{*ij} X_*^{*ij}$ for $X_*^{*ij} \in opt\left(N_*^{*ij} \mid A_*^{*ij}\right)$, where $X_*^{*ij}$ is the gamble at the j-th branch of chance node $S_*^{*i}$ and $E_*^{*ij}$ is the characteristic function of event $E_*^{*ij}$:

$$opt\left(N_*^{*i} \mid A_*^{*i}\right) \subseteq \left\{ \sum_j E_*^{*ij} X_*^{*ij} : X_*^{*ij} \in opt\left(N_*^{*ij} \mid A_*^{*ij}\right) \right\} \qquad (4.36)$$

If $S_*^{*i}$ is a final chance node, then there is only one gamble $X_*^{*i}$ at this node. Thus the optimal choice is

$$opt\left(N_*^{*i} \mid A_*^{*i}\right) = \left\{ X_*^{*i} = \sum_j E_*^{*ij} a_*^{*ij} \right\} \qquad (4.37)$$

where $a_*^{*ij}$ is the reward (or utility) at the j-th branch of chance node $S_*^{*i}$.

Given an event A and a set of gambles N, Walley (1991, page 161) and Huntley and Troffaes (2008) introduce the following formula to determine the optimal gambles in set N:

$$opt\left(N \mid A\right) = \left\{ X \in N : \forall Y \in N,\ Y \neq X,\ E_{LOW}\left(Y - X \mid A\right) \le 0 \right\} \qquad (4.38)$$


If there is only one gamble X satisfying the constraints in Eq. (4.38), then X is preferred to any other gamble in set N, and thus only gamble X is kept and considered in further analysis. If there is more than one optimal gamble in set opt(N|A), then the preference is indeterminate and all optimal gambles in opt(N|A) should be kept. Section 2.5 deals with preference between two gambles within the context of imprecise probability; it also gives algorithms to calculate E_LOW(X − Y | A) and presents the relaxed constraints and the strict constraints.

The general algorithm for decision analysis with imprecise probabilities is as follows:

(1) Determine the set of optimal choices at the final chance nodes by Eq. (4.37);

(2) At the parent decision nodes of the chance nodes in the last step, determine the set of optimal choices by Eqs. (4.35) and (4.38);

(3) At the parent chance nodes of the decision nodes in Step 2, determine the set of optimal choices by Eqs. (4.36) and (4.38);

(4) Recursively apply Step 2 and Step 3 until the root decision node is reached.
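The pruning rule of Eq. (4.38) can be sketched as a maximality filter over a finite set of gambles, with the lower prevision computed by enumerating the extreme points of the credal set. In the illustrative sketch below, the names and the third gamble X3 are hypothetical, and for simplicity a single common credal set is used for every gamble (which corresponds to the strict constraint of Section 2.5):

```python
def e_low(z, vertices):
    """Lower prevision of gamble z: minimum expectation over the extreme
    probability distributions (vertices) of the credal set."""
    return min(sum(p * zi for p, zi in zip(v, z)) for v in vertices)

def opt(gambles, vertices):
    """Eq. (4.38): keep X unless some other gamble Y has E_LOW(Y - X) > 0."""
    keep = []
    for name, x in gambles.items():
        dominated = any(
            e_low([yi - xi for yi, xi in zip(y, x)], vertices) > 0
            for other, y in gambles.items() if other != name)
        if not dominated:
            keep.append(name)
    return keep

# Illustrative credal set on two states and three gambles (hypothetical numbers)
vertices = [(0.35, 0.65), (0.45, 0.55)]
gambles = {"X1": (-1890, -1935), "X2": (-1125, -2520), "X3": (-2000, -2600)}
print(opt(gambles, vertices))   # ['X1', 'X2']  (X3 is dominated; X1, X2 incomparable)
```

Recursing this filter from the final chance nodes up to the root implements Steps (1)–(4) above.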

Let us illustrate the algorithm with an example of tunnel construction strategy selection, adapted from Karam et al. (2007), where precise probabilities were used.

Example 4-5 Consider a tunnel being constructed in a ground where two uncertain geologic states may occur: either geologic state 1G or 2G. The currently available information on the probability of geologic state 1G is: 0.35 ≤ P(1G) ≤ 0.45. Two construction strategies are considered. Table 4-9 lists the costs of construction strategies C1 and C2 in the different geologic states: construction strategy C1 costs $1,890,000 under geologic state 1G and $1,935,000 under geologic state 2G; construction strategy C2 costs $1,125,000 under geologic state 1G and $2,520,000 under geologic state 2G. As a consequence, the cost of C1 is less sensitive to the geologic state than that of C2, but C2 is much less expensive than C1 if 1G occurs. The selection of the construction strategy must be made between C1 and C2.

Table 4-9: Construction Cost Matrix.

                             Geologic state
  Construction strategy      1G (×$1,000)    2G (×$1,000)
  C1                         1,890           1,935
  C2                         1,125           2,520

Figure 4-14: Decision tree for the tunnel, adapted from Karam et al. (2007).

First, let us construct a decision tree for the problem, as shown in Figure 4-14, where $a_i^j$ is the reward if construction strategy Ci is chosen and the geologic state is jG, and $E_i^j$ is a characteristic function given that construction strategy Ci is chosen, i.e., $E_i^j = 1$ if the geologic state is jG; otherwise, $E_i^j = 0$. At the final chance nodes S1 and S2, Eq. (4.37) gives the sets of optimal solutions as

$$opt\left(N_1\right) = \left\{ E_1^1 a_1^1 + E_1^2 a_1^2 \right\} = \left\{ X_1 \right\}; \quad opt\left(N_2\right) = \left\{ E_2^1 a_2^1 + E_2^2 a_2^2 \right\} = \left\{ X_2 \right\} \qquad (4.39)$$

At the decision node $\varsigma$, the set of optimal solutions is a subset of the union of $opt\left(N_1\right)$ and $opt\left(N_2\right)$, and is obtained by Eq. (4.35):

$$opt\left(N\right) \subseteq \left\{ X_1, X_2 \right\} \qquad (4.40)$$

By using Eq. (4.38), the preference between X1 and X2 is determined by E_LOW(X1 − X2) and E_LOW(X2 − X1).


We know that the uncertainty in the geologic states does not change with the selection of the construction strategy; thus, two options may be chosen to reflect this fact: either the relaxed constraint or the strict constraint of Section 2.5.

Let $P_j^i$ be the probability of geologic state iG if construction strategy Cj is selected, i.e., if event $E_j^i$ occurs. When the relaxed constraint is applied, the following optimization problems are obtained (notice that $\Psi_{X_1} = \Psi_{X_2}$):

$$\text{Minimize } \mathbf{E}\left(X_1 - X_2\right) = \left(-1890 P_1^1 - 1935 P_1^2\right) - \left(-1125 P_2^1 - 2520 P_2^2\right)$$
$$\text{Minimize } \mathbf{E}\left(X_2 - X_1\right) = \left(-1125 P_2^1 - 2520 P_2^2\right) - \left(-1890 P_1^1 - 1935 P_1^2\right)$$

Subject to (relaxed constraint):

$$\left. \begin{array}{l} 0.35 \le P_1^1 \le 0.45; \\ P_1^1 + P_1^2 = 1; \\ P_1^i \ge 0,\ i = 1,2 \end{array} \right\} \Psi_{X_1} \qquad \left. \begin{array}{l} 0.35 \le P_2^1 \le 0.45; \\ P_2^1 + P_2^2 = 1; \\ P_2^i \ge 0,\ i = 1,2 \end{array} \right\} \Psi_{X_2} \qquad (4.41)$$

where $\mathbf{E}\left(X_1 - X_2\right) = \mathbf{E}\left(X_1\right) - \mathbf{E}\left(X_2\right) = \mathbf{E}\left(E_1^1 a_1^1 + E_1^2 a_1^2\right) - \mathbf{E}\left(E_2^1 a_2^1 + E_2^2 a_2^2\right) = \left(a_1^1 P_1^1 + a_1^2 P_1^2\right) - \left(a_2^1 P_2^1 + a_2^2 P_2^2\right)$; likewise for $\mathbf{E}\left(X_2 - X_1\right)$.

Since E_LOW(X1 − X2) = −$27,000 < 0 and E_LOW(X2 − X1) = −$117,000 < 0, the preference between X1 and X2 is indeterminate when the relaxed constraint is used.

When the strict constraint is applied, E_LOW(X1 − X2) and E_LOW(X2 − X1) are equal to −$22,500 and −$112,500, respectively, and are obtained by solving the optimization problems in Eq. (4.42) below. Note that these values are different from the results obtained with the relaxed constraint; the lower expectation values actually become higher due to the strict constraint. However, the preference between X1 and X2 is still indeterminate because both values are negative.
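Because E(X1 − X2) is linear in the marginal probabilities, its minimum over the credal set is attained at the interval endpoints, so both lower previsions can be recovered by vertex enumeration. A sketch (my own illustration; costs in ×$1,000; the relaxed constraint lets the two marginals vary independently, while the strict constraint forces them to coincide):

```python
from itertools import product

def e_diff(p1, p2):
    """E(X1 - X2) when P(1G | C1) = p1 and P(1G | C2) = p2 (costs in x $1,000)."""
    return (-1890 * p1 - 1935 * (1 - p1)) - (-1125 * p2 - 2520 * (1 - p2))

verts = [0.35, 0.45]
relaxed_12 = min(e_diff(p1, p2) for p1, p2 in product(verts, verts))
relaxed_21 = min(-e_diff(p1, p2) for p1, p2 in product(verts, verts))
strict_12 = min(e_diff(p, p) for p in verts)    # strict constraint: p1 = p2
strict_21 = min(-e_diff(p, p) for p in verts)
print(round(relaxed_12, 1), round(relaxed_21, 1))   # -27.0 -117.0
print(round(strict_12, 1), round(strict_21, 1))     # -22.5 -112.5
```

All four values are negative, which is exactly the indeterminacy discussed above.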


$$\text{Minimize } \mathbf{E}\left(X_1 - X_2\right) = \left(-1890 P_1^1 - 1935 P_1^2\right) - \left(-1125 P_2^1 - 2520 P_2^2\right)$$
$$\text{Minimize } \mathbf{E}\left(X_2 - X_1\right) = \left(-1125 P_2^1 - 2520 P_2^2\right) - \left(-1890 P_1^1 - 1935 P_1^2\right)$$

Subject to:

$$P_1^1 = P_2^1 \ \text{(strict constraint)}$$
$$\left. \begin{array}{l} 0.35 \le P_1^1 \le 0.45; \\ P_1^1 + P_1^2 = 1; \\ P_1^i \ge 0,\ i = 1,2 \end{array} \right\} \Psi_{X_1} \qquad \left. \begin{array}{l} 0.35 \le P_2^1 \le 0.45; \\ P_2^1 + P_2^2 = 1; \\ P_2^i \ge 0,\ i = 1,2 \end{array} \right\} \Psi_{X_2} \qquad (4.42)$$

Therefore, the available information, 0.35 ≤ P(1G) ≤ 0.45, is not enough to decide between construction strategies C1 and C2. As Walley (1991, page 2) has stated,

“An inevitable consequence of admitting imprecise probabilities is that probabilistic reasoning may produce indeterminate conclusions (we may be unable to determine which of two events or hypotheses is more probable), and decision analysis may produce indecision (we may be unable to determine which of two actions is better). When there is little information on which to base our conclusions, we cannot expect reasoning (no matter how clever or thorough) to reveal a most probable hypothesis or a uniquely reasonable course of action. There are limits to the power of reason.”

If further information becomes available and narrows down the interval, then a preference may be established; for example, in the original example in Karam et al. (2007), P(1G) = 0.4, and this precise information allowed a unique optimal choice to be made (C1 was preferred to C2).

Let us assume that new or additional information has been acquired that leads to a tighter bound: 0.38 ≤ P(1G) ≤ 0.42. Section 4.3.3 below discusses how to combine prior and new information; here we redo Example 4-5 to see whether the tighter bound leads to a preferred option. When the relaxed constraints are applied, problems (4.41) become:

$$\text{Minimize } \mathbf{E}\left(X_1 - X_2\right) = \left(-1890 P_1^1 - 1935 P_1^2\right) - \left(-1125 P_2^1 - 2520 P_2^2\right)$$
$$\text{Minimize } \mathbf{E}\left(X_2 - X_1\right) = \left(-1125 P_2^1 - 2520 P_2^2\right) - \left(-1890 P_1^1 - 1935 P_1^2\right)$$

Subject to (relaxed constraint):

$$\left. \begin{array}{l} 0.38 \le P_1^1 \le 0.42; \\ P_1^1 + P_1^2 = 1; \\ P_1^i \ge 0,\ i = 1,2 \end{array} \right\} \Psi_{X_1} \qquad \left. \begin{array}{l} 0.38 \le P_2^1 \le 0.42; \\ P_2^1 + P_2^2 = 1; \\ P_2^i \ge 0,\ i = 1,2 \end{array} \right\} \Psi_{X_2} \qquad (4.43)$$

By solving problems (4.43), E_LOW(X1 − X2) and E_LOW(X2 − X1) are found to be equal to $16,200 and −$73,800, respectively. Because E_LOW(X1 − X2) is positive and E_LOW(X2 − X1) is negative, X1 is preferred to X2 under the relaxed constraints, i.e., C1 is the optimal choice.

When the strict constraints are applied, problems (4.42) become

$$\text{Minimize } \mathbf{E}\left(X_1 - X_2\right) = \left(-1890 P_1^1 - 1935 P_1^2\right) - \left(-1125 P_2^1 - 2520 P_2^2\right)$$
$$\text{Minimize } \mathbf{E}\left(X_2 - X_1\right) = \left(-1125 P_2^1 - 2520 P_2^2\right) - \left(-1890 P_1^1 - 1935 P_1^2\right)$$

Subject to:

$$P_1^1 = P_2^1 \ \text{(strict constraint)}$$
$$\left. \begin{array}{l} 0.38 \le P_1^1 \le 0.42; \\ P_1^1 + P_1^2 = 1; \\ P_1^i \ge 0,\ i = 1,2 \end{array} \right\} \Psi_{X_1} \qquad \left. \begin{array}{l} 0.38 \le P_2^1 \le 0.42; \\ P_2^1 + P_2^2 = 1; \\ P_2^i \ge 0,\ i = 1,2 \end{array} \right\} \Psi_{X_2} \qquad (4.44)$$

Then E_LOW(X1 − X2) and E_LOW(X2 − X1) are found to be equal to $18,000 and −$72,000, respectively. Again, when the strict constraint is used, E_LOW(X1 − X2) is positive and E_LOW(X2 − X1) is negative; thus X1 is preferred to X2, i.e., C1 is still the optimal choice. This shows that, thanks to the additional information, the tighter bound (0.38 ≤ P(1G) ≤ 0.42) allows a unique optimal choice to be made.
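The same vertex enumeration with the tighter interval confirms these numbers (an illustrative sketch; costs in ×$1,000):

```python
from itertools import product

def e_diff(p1, p2):
    """E(X1 - X2) when P(1G | C1) = p1 and P(1G | C2) = p2 (costs in x $1,000)."""
    return (-1890 * p1 - 1935 * (1 - p1)) - (-1125 * p2 - 2520 * (1 - p2))

verts = [0.38, 0.42]                       # tighter prior bound on P(1G)
relaxed_12 = min(e_diff(p1, p2) for p1, p2 in product(verts, verts))
relaxed_21 = min(-e_diff(p1, p2) for p1, p2 in product(verts, verts))
strict_12 = min(e_diff(p, p) for p in verts)    # strict constraint: p1 = p2
strict_21 = min(-e_diff(p, p) for p in verts)
print(round(relaxed_12, 1), round(relaxed_21, 1))   # 16.2 -73.8
print(round(strict_12, 1), round(strict_21, 1))     # 18.0 -72.0
```

In both cases the first value is now positive, so the indecision disappears and C1 is selected.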


4.3.3 Decision analysis with uncertain new information

Usually, new information is obtained by sampling or experiments. However, tests and experiments are not perfect, and test results may not be accurate. With the new information, the uncertainty on the possible states changes, and thus the relevant probabilities should be updated by taking into account the probability that the test result is inaccurate. This subsection deals with decision analysis with probabilities updated by new information. Both the prior probabilities and the reliabilities of the new information are assigned as imprecise probabilities, as described next in Section 4.3.3.1. Then, Section 4.3.3.2 describes how to calculate the lower prevision of the difference between two gambles conditional on an observation result. Such a prevision is necessary to construct the set of optimal gambles in Eq. (4.38), which is used in Steps 2 and 3 of the algorithm in Section 4.3.2.

4.3.3.1 Input Data

Let S be a state variable with n possible states $s_i$, i = 1,…,n. The prior probability measure of the possible states is a vector p of size n, with the i-th entry $p_i$ denoting the probability of $s_i$. Let $\Psi_p$ be the convex set of prior probabilities p. Assume that new information is provided by a test, which predicts the realization of state variable S with reliability matrix $\mathbf{R}^X$ of size n × n, where $r_{ij}^X$, the (i, j)-th entry of $\mathbf{R}^X$, is the probability of the test result being $s_i^\dagger$ conditional on the choice X and real case $s_j$. Here we use the superscript '†' to denote the test result. A perfect test is just a special case with a unit reliability matrix.

Because each column of the reliability matrix $\mathbf{R}^X$ is a probability measure conditional on the real case, and such a conditional probability measure is given imprecisely, the j-th column of $\mathbf{R}^X$ may be written as an element of a convex set of conditional probability measures, $\Psi^j$. Let $\overline{\Psi}^j$ be the convex set of reliability matrices whose j-th column belongs to $\Psi^j$:

$$\overline{\Psi}^j = \left\{ \mathbf{R}^X = \begin{pmatrix} r_{11}^X & \cdots & r_{1j}^X & \cdots & r_{1n}^X \\ \vdots & & \vdots & & \vdots \\ r_{n1}^X & \cdots & r_{nj}^X & \cdots & r_{nn}^X \end{pmatrix} : \begin{pmatrix} r_{1j}^X \\ \vdots \\ r_{nj}^X \end{pmatrix} \in \Psi^j \right\} \qquad (4.45)$$

Let $\Psi_X^r$ be the intersection of all the convex sets $\overline{\Psi}^j$:

$$\Psi_X^r = \bigcap_{j=1}^n \overline{\Psi}^j \qquad (4.46)$$

$\Psi_X^r$ is convex because it is the intersection of convex sets, and each element of $\Psi_X^r$ is a reliability matrix $\mathbf{R}^X$:

$$\Psi_X^r = \left\{ \mathbf{R}^X : \begin{pmatrix} r_{1j}^X \\ \vdots \\ r_{nj}^X \end{pmatrix} \in \Psi^j,\ j = 1,\ldots,n \right\} \qquad (4.47)$$

Matrix $\mathbf{R}^X$ must satisfy the constraints below:

$$\mathbf{1}_n^T \mathbf{R}^X = \mathbf{1}_n^T; \quad \mathbf{R}^X \ge 0 \qquad (4.48)$$

Let $EXT^j$ be the set of extreme points of $\Psi^j$. According to the definition of set $\Psi_X^r$ in Eq. (4.47), the set of extreme points of $\Psi_X^r$, denoted $EXT_X^r$, is obtained as

$$EXT_X^r = \left\{ \mathbf{R}^X : \begin{pmatrix} r_{1j}^X \\ \vdots \\ r_{nj}^X \end{pmatrix} \in EXT^j,\ j = 1,\ldots,n \right\} \qquad (4.49)$$

i.e., $EXT_X^r$ is composed of the matrices whose j-th column is in $EXT^j$.

Let us prove the statement in Eq. (4.49) by contradiction. Assume that $\mathbf{R}^X$ is a matrix in $EXT_X^r$ whose k-th column is a point of set $\Psi^k$ but not an extreme point of $\Psi^k$, i.e.,

$$\begin{pmatrix} r_{1j}^X \\ \vdots \\ r_{nj}^X \end{pmatrix} \in EXT^j \ \text{for}\ j = 1,\ldots,k-1,k+1,\ldots,n; \qquad \begin{pmatrix} r_{1k}^X \\ \vdots \\ r_{nk}^X \end{pmatrix} \in \Psi^k \setminus EXT^k \qquad (4.50)$$

Assume m extreme points in set $\Psi^k$:

$$EXT^k = \left\{ \begin{pmatrix} r_{1k}^X \\ \vdots \\ r_{nk}^X \end{pmatrix}^{(1)}, \ldots, \begin{pmatrix} r_{1k}^X \\ \vdots \\ r_{nk}^X \end{pmatrix}^{(m)} \right\} \qquad (4.51)$$

Any point of set $\Psi^k$ that is not an extreme point can be written as a convex combination of the extreme points of $\Psi^k$:

$$\begin{pmatrix} r_{1k}^X \\ \vdots \\ r_{nk}^X \end{pmatrix} = \lambda_1 \begin{pmatrix} r_{1k}^X \\ \vdots \\ r_{nk}^X \end{pmatrix}^{(1)} + \ldots + \lambda_m \begin{pmatrix} r_{1k}^X \\ \vdots \\ r_{nk}^X \end{pmatrix}^{(m)}, \quad \lambda_1 + \ldots + \lambda_m = 1;\ \lambda_i \ge 0,\ i = 1,\ldots,m \qquad (4.52)$$

Thus, the assumed extreme point $\mathbf{R}^X$ can be written as a convex combination of m matrices of set $\Psi_X^r$, each obtained from $\mathbf{R}^X$ by replacing its k-th column with one of the extreme points in Eq. (4.51):

$$\mathbf{R}^X = \lambda_1 \begin{pmatrix} r_{11}^X & \cdots & \left(r_{1k}^X\right)^{(1)} & \cdots & r_{1n}^X \\ \vdots & & \vdots & & \vdots \\ r_{n1}^X & \cdots & \left(r_{nk}^X\right)^{(1)} & \cdots & r_{nn}^X \end{pmatrix} + \ldots + \lambda_m \begin{pmatrix} r_{11}^X & \cdots & \left(r_{1k}^X\right)^{(m)} & \cdots & r_{1n}^X \\ \vdots & & \vdots & & \vdots \\ r_{n1}^X & \cdots & \left(r_{nk}^X\right)^{(m)} & \cdots & r_{nn}^X \end{pmatrix} \qquad (4.53)$$

but this means that $\mathbf{R}^X$ cannot be an extreme point, because an extreme point cannot be expressed as a convex combination of other points of the set. Therefore, the assumption in Eq. (4.50) leads to a contradiction, and Eq. (4.49) follows.

4.3.3.2 Algorithm to calculate E_LOW(X − Y | $s_i^\dagger$)

In order to calculate E_LOW(X − Y | $s_i^\dagger$), one needs to define the feasible domain Ψ. In particular, in this case the feasible domain is identified by the extreme probabilities of the state variable conditional on the test result $s_i^\dagger$. The train of thought is as follows: the joint probability distribution over the space of test results and real cases is first defined by applying Bayes' theorem to the reliability matrix and the prior probabilities; the joint distribution is then conditioned on $s_i^\dagger$. All of these steps are carried out on the extreme distributions.

Recall that $p_j$ is the prior probability of real case $s_j$, p is a vector whose j-th entry is $p_j$, and $\Psi_p$ is a convex set composed of vectors p. Let $\mathbf{Q}^X$ be the joint probability measure over the space of test results and real cases. The (i, j)-th entry $q_{ij}^X$ = P(test = $s_i^\dagger$, real case = $s_j$ | choice X) is obtained by using Bayes' theorem:

$$q_{ij}^X = r_{ij}^X p_j \qquad (4.54)$$

In matrix form, $\mathbf{Q}^X$ is obtained as

$$\mathbf{Q}^X = \mathbf{R}^X Diag\left(\mathbf{p}\right) \qquad (4.55)$$

Let $\Psi_X$ be the set of probability measures over the joint space of test results and real cases. Theorem 4-7 below shows that $\Psi_X$ is a convex set and that its extreme points may be obtained in an efficient way.

Theorem 4-7 Let $\Psi_p$ be the set of prior probabilities p, and let $EXT_p$ be its set of extreme points. Let $\Psi_X^r$ be the set of reliability matrices and $EXT_X^r$ be the set of extreme points of $\Psi_X^r$ (Eq. (4.49)). Then the set of joint distributions $\Psi_X$ defined by Eq. (4.56) is convex:

$$\Psi_X = \left\{ \mathbf{Q}^X = \mathbf{R}^X Diag\left(\mathbf{p}\right) : \mathbf{R}^X \in \Psi_X^r,\ \mathbf{p} \in \Psi_p \right\} \qquad (4.56)$$

The set of extreme points $EXT_{Q_X}$ of $\Psi_X$ is:

$$EXT_{Q_X} = \left\{ \mathbf{Q}_{EXT}^X = \mathbf{R}_{EXT}^X Diag\left(\mathbf{p}_{EXT}\right) : \mathbf{R}_{EXT}^X \in EXT_X^r,\ \mathbf{p}_{EXT} \in EXT_p \right\} \qquad (4.57)$$

Proof: Based on Eq. (4.56), $\mathbf{R}_{\cdot,j}^X$, the j-th column of reliability matrix $\mathbf{R}^X$, may be written as

$$\mathbf{R}_{\cdot,j}^X = \frac{\mathbf{Q}_{\cdot,j}^X}{\mathbf{1}^T \mathbf{Q}_{\cdot,j}^X}, \quad j = 1,\ldots,n \qquad (4.58)$$

where $\mathbf{Q}_{\cdot,j}^X$ is the j-th column of $\mathbf{Q}^X$ and 1 is a unit vector of size n.

Since $\mathbf{R}_{\cdot,j}^X$ is an element of the convex set $\Psi^j$, there are a matrix $\mathbf{A}^j \ne 0$ and a vector $\mathbf{b}^j \ne 0$ such that set $\Psi^j$ may be constructed by linear constraints as follows:

$$\Psi^j = \left\{ \mathbf{R}_{\cdot,j}^X : \mathbf{A}^j \mathbf{R}_{\cdot,j}^X + \mathbf{b}^j \le 0 \right\}, \quad j = 1,\ldots,n \qquad (4.59)$$

By inserting Eq. (4.58) into the constraints in Eq. (4.59) and multiplying the constraints in Eq. (4.59) by $\mathbf{1}^T \mathbf{Q}_{\cdot,j}^X$, one obtains:

$$\mathbf{A}^j \mathbf{Q}_{\cdot,j}^X + \mathbf{b}^j \left(\mathbf{1}^T \mathbf{Q}_{\cdot,j}^X\right) \le 0, \quad j = 1,\ldots,n \qquad (4.60)$$

Thus, set $\Psi_X$ may be obtained as

$$\Psi_X = \left\{ \mathbf{Q}^X : \mathbf{A}^j \mathbf{Q}_{\cdot,j}^X + \mathbf{b}^j \left(\mathbf{1}^T \mathbf{Q}_{\cdot,j}^X\right) \le 0,\ j = 1,\ldots,n \right\} \qquad (4.61)$$

Since the constraints used to construct set $\Psi_X$ in Eq. (4.61) are all linear, set $\Psi_X$ is convex.

Regarding the extreme points of set $\Psi_X$, any $\mathbf{p} \in \Psi_p$ and $\mathbf{R}^X \in \Psi_X^r$ can be written as convex combinations of the extreme points of $\Psi_p$ and $\Psi_X^r$, respectively:

$$\mathbf{p} = \sum_{\xi=1}^{\xi_p} \lambda_p^{\xi}\, \mathbf{p}_{EXT}^{(\xi)}, \quad 0 \le \lambda_p^{\xi} \le 1,\ \xi = 1,\ldots,\xi_p; \quad \sum_{\xi=1}^{\xi_p} \lambda_p^{\xi} = 1 \qquad (4.62)$$

$$\mathbf{R}_{\cdot,j}^X = \sum_{\eta=1}^{\xi_j} \lambda_j^{\eta}\, \mathbf{R}_{\cdot,j}^{EXT(\eta)}, \quad 0 \le \lambda_j^{\eta} \le 1,\ \eta = 1,\ldots,\xi_j; \quad \sum_{\eta=1}^{\xi_j} \lambda_j^{\eta} = 1, \quad j = 1,\ldots,n \qquad (4.63)$$

where $\mathbf{p}_{EXT}^{(\xi)}$ is the ξ-th extreme point of $\Psi_p$, $\mathbf{R}_{\cdot,j}^{EXT(\eta)}$ is the η-th extreme point of $\Psi^j$, and $\xi_p$ and $\xi_j$ are the numbers of extreme points of $\Psi_p$ and $\Psi^j$, respectively.

By inserting Eqs. (4.62) and (4.63) into Eq. (4.56), one obtains $\mathbf{Q}^X$ as a convex combination of products of extreme reliability matrices and extreme priors:

$$\mathbf{Q}^X = \mathbf{R}^X Diag\left(\mathbf{p}\right) = \left( \sum_{\eta_1,\ldots,\eta_n} \left(\prod_{j=1}^{n} \lambda_j^{\eta_j}\right) \mathbf{R}_{EXT}^{(\eta_1,\ldots,\eta_n)} \right) Diag\left( \sum_{\xi=1}^{\xi_p} \lambda_p^{\xi}\, \mathbf{p}_{EXT}^{(\xi)} \right) \qquad (4.64)$$

where $\mathbf{R}_{EXT}^{(\eta_1,\ldots,\eta_n)}$ is the matrix whose j-th column is the $\eta_j$-th extreme point of $\Psi^j$. Extreme points of $\Psi_X$ are achieved if and only if $\lambda_p^{\xi} = 1$ for one index $\xi = m$ (and 0 otherwise) and, for each column j, $\lambda_j^{\eta} = 1$ for one index $\eta = \eta_j$ (and 0 otherwise), with $m \in \{1,\ldots,\xi_p\}$ and $\eta_j \in \{1,\ldots,\xi_j\}$. Therefore, $\mathbf{Q}_{EXT}^X = \mathbf{R}_{EXT}^{(\eta_1,\ldots,\eta_n)} Diag\left(\mathbf{p}_{EXT}^{(m)}\right)$, and the upper limit for the number of extreme joint distributions is $\xi_p \prod_{j=1}^{n} \xi_j$. ◊
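Theorem 4-7 thus licenses a direct enumeration: multiply every extreme reliability matrix (columns drawn from the per-column extreme sets) by every extreme prior. A minimal two-state sketch (my own illustration; the interval bounds happen to match the exploration example that follows):

```python
from itertools import product

def mat_diag(R, p):
    """Q = R Diag(p): q_ij = r_ij * p_j (Eqs. (4.54)-(4.55))."""
    n = len(p)
    return [[R[i][j] * p[j] for j in range(n)] for i in range(n)]

# Extreme points of each reliability-matrix column and of the prior
col1_ext = [(0.85, 0.15), (0.95, 0.05)]   # column 1: conditional on real state 1
col2_ext = [(0.35, 0.65), (0.45, 0.55)]   # column 2: conditional on real state 2
p_ext = [(0.35, 0.65), (0.45, 0.55)]      # extreme priors

ext_Q = [mat_diag([[c1[0], c2[0]], [c1[1], c2[1]]], p)
         for c1, c2, p in product(col1_ext, col2_ext, p_ext)]

print(len(ext_Q))   # 8  (= 2 x 2 column extremes times 2 priors)
print([[round(q, 4) for q in row] for row in ext_Q[0]])   # [[0.2975, 0.2275], [0.0525, 0.4225]]
```

The count 8 is exactly the bound ξ_p · ξ_1 · ξ_2 = 2 · 2 · 2 of the theorem.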


After obtaining the extreme points of the joint distributions $\mathbf{Q}^X$ at gamble X, the extreme probabilities of the state variable conditional on the test result $s_i^\dagger$, $\mathbf{P}_{EXT|s_i^\dagger}^X$, are obtained by applying Theorem 3-2, i.e.:

$$\mathbf{P}_{EXT|s_i^\dagger}^X = \frac{\mathbf{Q}_{EXT,i\cdot}^X}{\mathbf{Q}_{EXT,i\cdot}^X \mathbf{1}} \qquad (4.65)$$

where $\mathbf{Q}_{EXT,i\cdot}^X$ is the i-th row of $\mathbf{Q}_{EXT}^X$ and 1 is a unit vector of length n. Likewise for $\mathbf{P}_{EXT|s_i^\dagger}^Y$.
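Eq. (4.65) is simply a row normalization of each extreme joint distribution. A sketch with one hypothetical extreme joint distribution over (test result, real state):

```python
def condition_on_test(Q, i):
    """Posterior over real states given test result i: normalize the i-th
    row of the joint distribution Q (Eq. (4.65))."""
    total = sum(Q[i])
    return [q / total for q in Q[i]]

# Hypothetical extreme joint distribution (rows: test results, columns: real states)
Q = [[0.2975, 0.2275],
     [0.0525, 0.4225]]

post = condition_on_test(Q, 0)      # condition on test result s1
print([round(p, 3) for p in post])  # [0.567, 0.433]
```

Applying this to every extreme joint distribution yields the finite set of extreme conditional probabilities used in Eq. (4.66).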

Let $EXT_{|s_i^\dagger}^X$ and $EXT_{|s_i^\dagger}^Y$ be the sets of extreme probabilities $\mathbf{P}_{EXT|s_i^\dagger}^X$ and $\mathbf{P}_{EXT|s_i^\dagger}^Y$, respectively. The lower prevision of the gamble difference X − Y conditional on $s_i^\dagger$ can be obtained by enumerating all of the extreme conditional probabilities:

$$E_{LOW}\left(X - Y \mid s_i^\dagger\right) = \min_{\mathbf{P}^X,\ \mathbf{P}^Y} \left( \mathbf{E}_{\mathbf{P}^X}\left(X \mid s_i^\dagger\right) - \mathbf{E}_{\mathbf{P}^Y}\left(Y \mid s_i^\dagger\right) \right)$$
$$\text{Subject to } \mathbf{P}^X \in EXT_{|s_i^\dagger}^X,\ \mathbf{P}^Y \in EXT_{|s_i^\dagger}^Y \qquad (4.66)$$

If the choice between gambles X and Y does not affect the uncertainty of the state variable S, then the relaxed constraint can be applied to reflect this fact, i.e., in addition to the constraints in Eq. (4.66), one imposes that the sets of extreme conditional distributions be the same:

$$EXT_{|s_i^\dagger}^X = EXT_{|s_i^\dagger}^Y \qquad (4.67)$$

Alternatively, the strict constraint can be applied as well, i.e., in addition to the constraints in Eq. (4.66), the extreme conditional distributions are forced to be the same:

$$\mathbf{P}_{EXT|s_i^\dagger}^X = \mathbf{P}_{EXT|s_i^\dagger}^Y \qquad (4.68)$$

Example 4-6 Consider again the situation and information of Example 4-5. Suppose that we can now obtain new information by performing additional exploration, and we want to combine the prior and new information. It is assumed that the construction strategies do not affect the uncertainty in the exploration reliability, which is quantified by the reliability matrix assigned in Table 4-10. In this example, the gamble X is the construction strategy, i.e., X can be either C1 or C2. However, set $\Psi_X^r$ is unique because the reliability matrix is independent of the construction strategy. The state variable S is composed of the two geologic states: $s_1$ = 1G, $s_2$ = 2G. The extreme points of set $\Psi_X^r$ (i.e., $\Psi_{C_j}^r$, j = 1, 2) are obtained by taking the extreme distributions in each column of the reliability matrix, as described in Eq. (4.49):

$$EXT_X^r = \left\{ \begin{pmatrix} 0.85 & 0.35 \\ 0.15 & 0.65 \end{pmatrix}, \begin{pmatrix} 0.85 & 0.45 \\ 0.15 & 0.55 \end{pmatrix}, \begin{pmatrix} 0.95 & 0.35 \\ 0.05 & 0.65 \end{pmatrix}, \begin{pmatrix} 0.95 & 0.45 \\ 0.05 & 0.55 \end{pmatrix} \right\} \qquad (4.69)$$

The prior probabilities are assigned as 0.35 ≤ P(1G) ≤ 0.45, which gives the following extreme points for $\Psi_p$: $\mathbf{p}_{EXT}^{(1)}$ = (0.35, 0.65) and $\mathbf{p}_{EXT}^{(2)}$ = (0.45, 0.55). The construction strategy needs to be selected between C1 and C2 conditional on the exploration results.

Table 4-10: Exploration Reliability Matrix, where the following values are assigned: 0.85 ≤ r11 ≤ 0.95; 0.35 ≤ r12 ≤ 0.45. Hence, 0.05 ≤ r21 ≤ 0.15; 0.55 ≤ r22 ≤ 0.65.

  Exploration indicates          Real state
  geologic state                 1G      2G
  1G                             r11     r12
  2G                             r21     r22

Figure 4-15: Decision tree for the tunnel with exploration, adapted from Karam et al. (2007).


First, let us construct a decision tree for the tunnel (shown in Figure 4-15), where E^j_i is the event that the exploration result is jG, E^je_ik is the event that the real geologic state is eG given the test result jG and construction strategy Ck is chosen, and a^je_ik is the reward obtained if event E^je_ik occurs. At the final chance nodes S^1_11, S^1_12, S^2_11 and S^2_12, Eq.(4.37) gives the sets of optimal solutions as

\[
\begin{aligned}
opt(N_{11}^1) &= \{E_{11}^{11}a_{11}^{11} + E_{11}^{12}a_{11}^{12}\} = \{X_{11}^1\} \\
opt(N_{12}^1) &= \{E_{12}^{11}a_{12}^{11} + E_{12}^{12}a_{12}^{12}\} = \{X_{12}^1\} \\
opt(N_{11}^2) &= \{E_{11}^{21}a_{11}^{21} + E_{11}^{22}a_{11}^{22}\} = \{X_{11}^2\} \\
opt(N_{12}^2) &= \{E_{12}^{21}a_{12}^{21} + E_{12}^{22}a_{12}^{22}\} = \{X_{12}^2\}
\end{aligned}
\quad (4.70)
\]

At the decision nodes ς^1_1 and ς^2_1, based on Eq.(4.35), the sets of optimal solutions are subsets of the union of opt(N^1_11) and opt(N^1_12), and of the union of opt(N^2_11) and opt(N^2_12), respectively:

\[
opt(N_1^1) \subseteq \{X_{11}^1, X_{12}^1\}, \qquad opt(N_1^2) \subseteq \{X_{11}^2, X_{12}^2\} \quad (4.71)
\]

By using Eq.(4.38), the optimal decisions at nodes ς^1_1 and ς^2_1 are determined by E_LOW(X^1_11 − X^1_12), E_LOW(X^1_12 − X^1_11) and E_LOW(X^2_11 − X^2_12), E_LOW(X^2_12 − X^2_11), respectively.

With the assigned prior probabilities of geological states 0.35 ≤ P(1G) ≤ 0.45 and the exploration reliability matrix shown in Eq.(4.69), the set of extreme joint probabilities (EXT^X) can be obtained by applying Theorem 4-7 and is given in Table 4-11. For example, the first extreme joint probability distribution is obtained as

\[
\begin{pmatrix} 0.85 & 0.35 \\ 0.15 & 0.65 \end{pmatrix} Diag\begin{pmatrix} 0.35 \\ 0.65 \end{pmatrix},
\]

i.e., the product of the first extreme point of Ψ^rX and the first extreme point of Ψ^p. EXT^X has 8 elements because Ψ^p has 2 extreme points and Ψ^rX has 4 extreme points. Note that EXT^X is the same for both construction strategies because the reliability matrix is independent of the construction strategy.
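A small numerical sketch (ours; the variable names are not from the dissertation) makes Theorem 4-7 concrete here: each extreme joint distribution is the entry-wise product q_ij = r_ij · p_j of an extreme reliability matrix with an extreme prior.

```python
from itertools import product

priors = [(0.35, 0.65), (0.45, 0.55)]                 # extreme points of the prior set
rel_rows = list(product((0.85, 0.95), (0.35, 0.45)))  # extreme (r11, r12) pairs

joints = []
for (p1, p2) in priors:
    for (r11, r12) in rel_rows:
        # q_ij = P(exploration says iG and real state is jG) = r_ij * p_j
        joints.append((r11 * p1, r12 * p2, (1 - r11) * p1, (1 - r12) * p2))

for q in joints:
    print(tuple(f"{x:.3f}" for x in q))   # the eight rows of Table 4-11
```

Each row sums to 1, and the 2 × 4 = 8 products exhaust the extreme points of the joint set.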


Table 4-11: Set of extreme joint probabilities EXT^X.

Extreme point number, m | q^EXT X,m_1,1 | q^EXT X,m_1,2 | q^EXT X,m_2,1 | q^EXT X,m_2,2
1 | 0.298 | 0.228 | 0.053 | 0.423
2 | 0.298 | 0.293 | 0.053 | 0.358
3 | 0.333 | 0.228 | 0.018 | 0.423
4 | 0.333 | 0.293 | 0.018 | 0.358
5 | 0.383 | 0.193 | 0.068 | 0.358
6 | 0.383 | 0.248 | 0.068 | 0.303
7 | 0.428 | 0.193 | 0.023 | 0.358
8 | 0.428 | 0.248 | 0.023 | 0.303

Finally, Eq.(4.65) gives the extreme probabilities conditional to the exploration results, listed in Table 4-12. For example, the first extreme conditional probability distribution is obtained by normalizing the first extreme joint probability distribution row by row:

\[
\left( \frac{0.298}{0.298+0.228}, \frac{0.228}{0.298+0.228}, \frac{0.053}{0.053+0.423}, \frac{0.423}{0.053+0.423} \right).
\]

Again, these extreme points are the same for both construction strategies C1 and C2.
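The conditioning step of Eq.(4.65) is a row normalization of each extreme joint distribution; the sketch below (ours) reproduces Table 4-12 from the rounded entries of Table 4-11.

```python
# Extreme joint distributions q = (q11, q12, q21, q22) from Table 4-11.
joint_ext = [
    (0.298, 0.228, 0.053, 0.423), (0.298, 0.293, 0.053, 0.358),
    (0.333, 0.228, 0.018, 0.423), (0.333, 0.293, 0.018, 0.358),
    (0.383, 0.193, 0.068, 0.358), (0.383, 0.248, 0.068, 0.303),
    (0.428, 0.193, 0.023, 0.358), (0.428, 0.248, 0.023, 0.303),
]

def condition(q):
    """Bayes' rule applied to each exploration outcome (Eq. 4.65)."""
    q11, q12, q21, q22 = q
    n1, n2 = q11 + q12, q21 + q22    # P(result is 1G), P(result is 2G)
    return (q11 / n1, q12 / n1, q21 / n2, q22 / n2)

cond_ext = [condition(q) for q in joint_ext]
print(tuple(f"{x:.3f}" for x in cond_ext[0]))  # first row of Table 4-12
```

Because Table 4-11 is already rounded to three decimals, the reproduced conditionals agree with Table 4-12 only to about two or three decimals.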

Table 4-12: Extreme probabilities conditional to the exploration results.

Extreme point number, m | P(1G | 1G†) | P(2G | 1G†) | P(1G | 2G†) | P(2G | 2G†)
1 | 0.567 | 0.433 | 0.111 | 0.889
2 | 0.504 | 0.496 | 0.128 | 0.872
3 | 0.594 | 0.406 | 0.040 | 0.960
4 | 0.532 | 0.468 | 0.047 | 0.953
5 | 0.665 | 0.335 | 0.159 | 0.841
6 | 0.607 | 0.393 | 0.182 | 0.818
7 | 0.690 | 0.310 | 0.059 | 0.941
8 | 0.633 | 0.367 | 0.069 | 0.931


Let P^li_1j be the probability of geologic state iG given that the exploration result is s†_l = lG† and that construction strategy Cj is selected. Either the relaxed constraint in Eq.(4.67) or the strict constraint in Eq.(4.68) should be chosen to reflect the fact that the selection of a construction strategy does not change the uncertainty on the geologic states.

At decision node ς^1_1 (i.e., given exploration result 1G†), when the relaxed constraint is applied, E_LOW(X^1_11 − X^1_12) and E_LOW(X^1_12 − X^1_11) are equal to −$354,200 and $87,400, respectively, obtained by solving the following optimization problems:

\[
\begin{aligned}
\text{Minimize } \mathbf{E}(X_{11}^1 - X_{12}^1) &= \left(-1890P_{11}^{11} - 1935P_{11}^{12}\right) - \left(-1125P_{12}^{11} - 2520P_{12}^{12}\right) \\
\text{Minimize } \mathbf{E}(X_{12}^1 - X_{11}^1) &= \left(-1125P_{12}^{11} - 2520P_{12}^{12}\right) - \left(-1890P_{11}^{11} - 1935P_{11}^{12}\right)
\end{aligned}
\]

Subject to (relaxed constraint):

\[
\begin{pmatrix} P_{11}^{11} \\ P_{11}^{12} \end{pmatrix} = \sum_{m=1}^{8} \lambda_{1,m}\, {}^{m}P_{EXT|1G^{\dagger}}^{C_1}; \quad \lambda_{1,m} \ge 0; \ \sum_{m=1}^{8} \lambda_{1,m} = 1 \qquad \left(\text{i.e., } \begin{pmatrix} P_{11}^{11} \\ P_{11}^{12} \end{pmatrix} \in \Psi_{|1G^{\dagger}}^{C_1}\right)
\]
\[
\begin{pmatrix} P_{12}^{11} \\ P_{12}^{12} \end{pmatrix} = \sum_{m=1}^{8} \lambda_{2,m}\, {}^{m}P_{EXT|1G^{\dagger}}^{C_2}; \quad \lambda_{2,m} \ge 0; \ \sum_{m=1}^{8} \lambda_{2,m} = 1 \qquad \left(\text{i.e., } \begin{pmatrix} P_{12}^{11} \\ P_{12}^{12} \end{pmatrix} \in \Psi_{|1G^{\dagger}}^{C_2}\right)
\quad (4.72)
\]

Since E_LOW(X^1_11 − X^1_12) is negative and E_LOW(X^1_12 − X^1_11) is positive, gamble X^1_12 is preferred to X^1_11 under the relaxed constraint.

Alternatively, if the strict constraint is applied, E_LOW(X^1_11 − X^1_12) and E_LOW(X^1_12 − X^1_11) are obtained by solving the optimization problems in Eq.(4.73); they are equal to −$345,800 and $95,700, respectively, which are higher than the results obtained with the relaxed constraint. Thus, gamble X^1_12 is preferred to X^1_11 under the strict constraint as well.
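Because both objectives are linear in the probability measure, the optima of these linear programs are attained at extreme points of the conditional set, so the two lower previsions can be checked by scanning extreme points directly. The sketch below is ours (it recomputes the extreme conditionals exactly from the interval bounds rather than using the rounded Table 4-12 entries) and reproduces the relaxed and strict values at node ς^1_1.

```python
from itertools import product

# Extreme values of P(1G | exploration says 1G), computed exactly from the
# reliability bounds of Table 4-10 and the prior 0.35 <= P(1G) <= 0.45.
p_ext = [r11 * p1 / (r11 * p1 + r12 * (1 - p1))
         for r11, r12, p1 in product((0.85, 0.95), (0.35, 0.45), (0.35, 0.45))]

# Conditional expected costs in $1,000: X11 = strategy C1, X12 = strategy C2.
E_X11 = lambda p: -1890 * p - 1935 * (1 - p)
E_X12 = lambda p: -1125 * p - 2520 * (1 - p)

# Relaxed constraint: each gamble may use its own measure from the set, so
# E_LOW(X11 - X12) = min_p E(X11) - max_p E(X12).
relaxed = min(E_X11(p) for p in p_ext) - max(E_X12(p) for p in p_ext)

# Strict constraint: one common measure for both gambles.
strict = min(E_X11(p) - E_X12(p) for p in p_ext)

print(round(relaxed, 1), round(strict, 1))   # about -354.2 and -345.8 ($1,000)
```

The strict value is higher than the relaxed one because forcing a common measure shrinks the feasible set of the minimization.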


\[
\begin{aligned}
\text{Minimize } \mathbf{E}(X_{11}^1 - X_{12}^1) &= \left(-1890P_{11}^{11} - 1935P_{11}^{12}\right) - \left(-1125P_{12}^{11} - 2520P_{12}^{12}\right) \\
\text{Minimize } \mathbf{E}(X_{12}^1 - X_{11}^1) &= \left(-1125P_{12}^{11} - 2520P_{12}^{12}\right) - \left(-1890P_{11}^{11} - 1935P_{11}^{12}\right)
\end{aligned}
\]

Subject to the strict constraint

\[
\begin{pmatrix} P_{11}^{11} \\ P_{11}^{12} \end{pmatrix} = \begin{pmatrix} P_{12}^{11} \\ P_{12}^{12} \end{pmatrix}
\]

together with the same convex-combination constraints as in Eq.(4.72):

\[
\begin{pmatrix} P_{1j}^{11} \\ P_{1j}^{12} \end{pmatrix} = \sum_{m=1}^{8} \lambda_{j,m}\, {}^{m}P_{EXT|1G^{\dagger}}^{C_j}; \quad \lambda_{j,m} \ge 0; \ \sum_{m=1}^{8} \lambda_{j,m} = 1; \quad j = 1, 2
\quad (4.73)
\]

Likewise for the case at node ς^2_1 (i.e., given exploration result 2G†): E_LOW(X^2_11 − X^2_12) and E_LOW(X^2_12 − X^2_11) are equal to $332,300 and −$537,700 with the relaxed constraint, and to $338,700 and −$531,300 with the strict constraint, respectively. As a result, gamble X^2_11 is preferred to X^2_12 in both cases.

Therefore, with the prior probability 0.35 ≤ P(1G) ≤ 0.45 and the imprecise reliability of the exploration: (1) if the exploration predicts geologic state 1G†, construction strategy C2 (i.e., X^1_12) is always preferred to construction strategy C1 (i.e., X^1_11), no matter whether the relaxed constraint or the strict constraint is applied; (2) if the exploration predicts geologic state 2G†, construction strategy C1 (i.e., X^2_11) is always preferred to construction strategy C2 (i.e., X^2_12).

Compared to the indeterminate result in Example 4-5, the new information provided by further exploration allowed us to decide between C1 and C2, despite the fact that all information was provided in terms of imprecise probabilities. ♦

Now let us see an example with perfect information.


Example 4-7 Consider again the situation and information in Example 4-5. Suppose now that we could obtain perfect information by performing additional exploration. The exploration reliability matrix is shown in Table 4-13; in this case, it is an identity matrix because perfect information is assumed. Thus, set Ψ^rX (i.e., Ψ^r_Cj, j = 1, 2) collapses to a single point:

\[
\Psi^{rX} = \left\{ \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \right\}.
\]

As in Example 4-6, the prior probabilities are assigned as 0.35 ≤ P(1G) ≤ 0.45, which gives extreme distributions for Ψ_P: p_EXT^1 = (0.35, 0.65) and p_EXT^2 = (0.45, 0.55). A construction strategy needs to be selected between C1 and C2 given the exploration results.

Table 4-13: Exploration Reliability Matrix.

Exploration indicates | Real geologic state 1G | Real geologic state 2G
1G | 1 | 0
2G | 0 | 1

The decision tree with perfect information is the same as in Figure 4-15, and at the final chance nodes S^1_11, S^1_12, S^2_11 and S^2_12, the sets of optimal solutions are again defined as in Eq.(4.70).

Since Ψ_P contains 2 extreme points and Ψ^rX contains 1 extreme point, there are only two extreme joint probabilities in EXT^X, shown in Table 4-14. Both extreme joint probabilities yield the same extreme conditional probability, listed in Table 4-15, which means no imprecision and consequently no difference between the relaxed constraint and the strict constraint. Thus, at decision node ς^1_1 (i.e., given exploration result 1G†), E_LOW(X^1_11 − X^1_12) = E(X^1_11 − X^1_12) = (−1890×1 − 1935×0) − (−1125×1 − 2520×0) = −765 × $1,000 = −$765,000, and E_LOW(X^1_12 − X^1_11) = E(X^1_12 − X^1_11) = −E(X^1_11 − X^1_12) = $765,000, indicating that gamble X^1_12 is preferred to X^1_11; at decision node ς^2_1 (i.e., given exploration result 2G†), E_LOW(X^2_11 − X^2_12) = E(X^2_11 − X^2_12) = (−1890×0 − 1935×1) − (−1125×0 − 2520×1) = 585 × $1,000 = $585,000, and E_LOW(X^2_12 − X^2_11) = E(X^2_12 − X^2_11) = −E(X^2_11 − X^2_12) = −$585,000, indicating that gamble X^2_11 is preferred to X^2_12.
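With a degenerate conditional distribution, the lower previsions above reduce to plain arithmetic on the revealed state. A two-line check (ours; the variable names are assumed) confirms the ±$765,000 and ±$585,000 figures.

```python
# Costs in $1,000 for (real state 1G, real state 2G), from Figure 4-15.
a_C1 = (-1890, -1935)    # X11: construction strategy C1
a_C2 = (-1125, -2520)    # X12: construction strategy C2

# Perfect information makes the conditional distribution degenerate, so the
# lower prevision of a difference is just the difference of the two costs.
diff_given_1G = a_C1[0] - a_C2[0]   # E(X11 - X12 | exploration says 1G)
diff_given_2G = a_C1[1] - a_C2[1]   # E(X11 - X12 | exploration says 2G)

print(diff_given_1G, diff_given_2G)   # -765 and 585 ($1,000)
```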


Therefore, with the prior probability 0.35 ≤ P(1G) ≤ 0.45 and perfect information from the additional exploration: (1) if the exploration predicts geologic state 1G†, construction strategy C2 (i.e., X^1_12) is preferred to construction strategy C1 (i.e., X^1_11); (2) if the exploration predicts geologic state 2G†, construction strategy C1 (i.e., X^2_11) is preferred to construction strategy C2 (i.e., X^2_12). These decisions are the same as the ones in Example 4-6. ♦

Table 4-14: Set of extreme joint probabilities, EXT^X.

Extreme point number, m | q^EXT_1,1 | q^EXT_1,2 | q^EXT_2,1 | q^EXT_2,2
1 | 0.35 | 0 | 0 | 0.65
2 | 0.45 | 0 | 0 | 0.55

Table 4-15: Extreme probabilities conditional to the exploration results.

Extreme point number, m | P(1G | 1G†) | P(2G | 1G†) | P(1G | 2G†) | P(2G | 2G†)
1 | 1 | 0 | 0 | 1

4.3.3.3 Discussion

The previous Section 4.3.3.2 studied updating probability measures with

imprecise new information. In this section the effect of the uncertainty of the new

information on the probability measures is investigated. Two indeterminate cases are

explained and discussed.

4.3.3.3.1 Uncertain new information


Recall the imprecise prior information in Example 4-5 and the imprecise reliability matrix in Table 4-10. The sets of probability measures conditional to the exploration results determined in Section 4.3.3.2 (extreme points of the sets are listed in Table 4-12) are depicted in Figure 4-16a, where Ψ_p is the set of prior probability measures on the state variable S (geological states), and Ψ_S(·|1G†) and Ψ_S(·|2G†) are the sets of probability measures conditional to exploration results 1G† and 2G†, respectively. Figure 4-16a illustrates the changes in the set of probability measures caused by new information: (1) sets Ψ_S(·|1G†) and Ψ_S(·|2G†) are larger than Ψ_p; (2) sets Ψ_S(·|1G†) and Ψ_S(·|2G†) are closer to the two ends (i.e., zero uncertainty, where P(1G) = 1 and P(2G) = 1, respectively) than Ψ_p. The first observation is a result of the uncertainty of the new information, which compounds with the original imprecision. As for the second observation, the reason is that the new evidence increases the probability of observing one geological state or the other, and the increase depends on the reliability of the evidence.

Consequently, the new information assigned in terms of imprecise probabilities acts independently on the imprecision and on the probability of geological states. Although the updated probability measures are more imprecise than the prior probability measures, they may be enough to make a decision that could not have been reached with the prior information alone. For instance, the decision between construction strategies C1 and C2 is indeterminate within Ψ_p in Example 4-5 but becomes determinate within Ψ_S(·|1G†) and Ψ_S(·|2G†) in Example 4-6, even though both Ψ_S(·|1G†) and Ψ_S(·|2G†) are more imprecise than Ψ_p. On the other hand, it is also possible that the updated information is not enough to make a decision, because the increase in imprecision can lead to a negative lower value of the new information. Thus, more information may not always


be better than less. An example of such a situation will be shown and illustrated in the

case history of Section 5.3.

It is worth comparing this with the effect of additional information under precise probabilities, where imprecision cannot be taken into account. With precise probabilities, only the second phenomenon can be observed. Thus, the value of information within precise probabilities can never be negative, and more information is always better than less.

Below is a discussion of the effects of the imprecision and the reliability of the uncertain new information on the set of probability measures on the state variable S:

(1) Imprecision of new information

We construct two additional convex sets Ψ_S(·|1G†) and Ψ_S(·|2G†) (as shown in Figure 4-16b), which are obtained from a more precise exploration, to study the effect of imprecision of the new information and compare it with the original case in Table 4-10 (as shown in Figure 4-16a).

In Figure 4-16a, where the reliability interval width is 0.1, the sets of probability measures conditional to exploration results (i.e., Ψ_S(·|1G†) and Ψ_S(·|2G†)) are larger than the set of prior probability measures (i.e., Ψ_p). In Figure 4-16b, where the reliability interval width is 0.05, the size of Ψ_S(·|1G†) and Ψ_S(·|2G†) is similar to that of Ψ_p and is clearly smaller than that of the corresponding sets in Figure 4-16a. Thus, the imprecision of the probability measures on the state variable is highly correlated with the imprecision of the new information.


Figure 4-16: Effect of imprecision: a) Table 4-10: 0.85 ≤ r11 ≤ 0.95; 0.35 ≤ r12 ≤ 0.45; b) 0.85 ≤ r11 ≤ 0.90; 0.3 ≤ r12 ≤ 0.35. [Each panel plots the sets Ψ_p, Ψ_S(·|1G†), and Ψ_S(·|2G†) in the P(1G)–P(2G) plane, along the line ∑P(iG) = 1 between its end points A and B.]

Figure 4-17: Effect of reliability: (a) Table 4-10: 0.85 ≤ r11 ≤ 0.95; 0.35 ≤ r12 ≤ 0.45; (b) 0.88 ≤ r11 ≤ 0.98; 0.15 ≤ r12 ≤ 0.25. [Same layout as Figure 4-16.]


(2) Reliability of new information

Figure 4-17a reproduces Figure 4-16a, and Figure 4-17b presents a case with higher reliabilities but the same interval width, i.e., 0.1. Compared to the original case in Figure 4-17a, sets Ψ_S(·|1G†) and Ψ_S(·|2G†) in Figure 4-17b are closer to the two ends (where P(1G) = 1 and P(2G) = 1, respectively). Higher reliabilities ultimately generate a higher probability of observing one geological state or the other.

4.3.3.3.2 Indeterminate cases

Because of imprecision in probabilities, we often have indeterminate problems, such as Example 4-5, where a decision between construction strategies C1 and C2 could not be made. Generally, there are two kinds of cases in which indeterminate problems occur:

Case (1): Intersection point

In this case, the problem is indeterminate because the expected values of the utility of the two gambles attain the same value (and thus intersect) somewhere in the set of probability measures on the state variable. As a result, the preference cannot be determined. Take Example 4-5 as an instance. The horizontal axis in Figure 4-18 and Figure 4-19 is the line P(1G) + P(2G) = 1 of Figure 4-16 and Figure 4-17, with the coordinates of Figure 4-16a; the vertical axis in Figure 4-18 and Figure 4-19 is the expected utility of the gambles. The set of prior probabilities (set Ψ_p in Figure 4-18a) contains the intersection point of the two expected utility curves for gambles X1 and X2 (X1 = C1 and X2 = C2). Therefore, the preference between C1 and C2 is indeterminate based on prior information. When new evidence is provided by additional exploration, the probability measures are updated. As shown in Figure 4-18a, sets Ψ_S(·|1G†) and


Ψ_S(·|2G†) do not contain the intersection point. When the exploration result is 1G†, C2 is preferred to C1; when it is 2G†, C1 is preferred to C2 (see Case (2) for details). However, if the new information cannot move the set of probability measures away from the intersection point, the problem is still indeterminate; see, for example, set Ψ_S(·|1G†) in Figure 4-18b.

Figure 4-18: Case (1): Intersection point in set Ψ. [Panels (a) and (b) plot the expected utilities E(X_i) of X1 and X2 over ∑P(iG) = 1, with sets Ψ_p, Ψ_S(·|1G†), and Ψ_S(·|2G†) marked on the horizontal axis.]

Case (2): No intersection point

Figure 4-19 shows the case in which there is no intersection point within the set of probability measures on the state variable. Recall that strict constraints use the same probability measure for different choices (Figure 2-10). Thus, we can always select the unique option whose utility curve is higher than all the others. As shown in Figure 4-19, since the curve for X2 (i.e., construction strategy C2) is above that for X1 (i.e., construction strategy C1) in Ψ, C2 is preferred to C1 under the strict constraints.

However, the relaxed constraints take the whole set of probability measures into consideration (Figure 2-9). Even if there is no intersection point, problems may still be indeterminate. Figure 4-19 illustrates this situation: since Emin(X2) < Emax(X1) and then


Emin(X2 − X1) = Emin(X2) − Emax(X1) < 0, we cannot draw the conclusion that C2 is preferred to C1.

Figure 4-19: Case (2): No intersection point in set Ψ

To summarize, in Case (1) indeterminate problems occur regardless of the type of

constraints imposed, while indeterminate problems such as Case (2) can occur only when

relaxed constraints are applied.

4.3.4 Lower and upper values of information

Oftentimes, the decision maker wants to know the maximum cost to be allowed

for collecting additional information. In conventional decision analysis with precise

probabilities, the maximum cost is determined by the ‘value of information’ (VI); for

perfect information, it is the ‘value of perfect information’ (VPI). Let X be the choice

made without additional information. Let Y be the choice made with imperfect additional information, and Z be the choice made with perfect additional information. In precise probabilities, if the consequence is a linear function of the cost of information, the value of


imperfect information (VI) and the value of perfect information (VPI) are obtained as follows (e.g., Ang and Tang, 1984, pages 30-31):

\[
VI = \mathbf{E}(Y) - \mathbf{E}(X) \quad (4.74)
\]
\[
VPI = \mathbf{E}(Z) - \mathbf{E}(X) \quad (4.75)
\]

Accordingly, the updated value (VUP) from imperfect information to perfect information is

\[
VUP = \mathbf{E}(Z) - \mathbf{E}(Y) \quad (4.76)
\]

Because in precise probability the probability measure is unique, the value of (perfect) information is a single number: it is both the maximum cost to be allowed for collecting additional information and the minimum selling price of the (perfect) information, i.e., the price at which we would give away the new information if, having acquired it, someone were willing to buy it from us. In other words, a reasonable price for imperfect (or perfect) information should be less than VI (or VPI); on the other hand, if an amount higher than VI (or VPI) is offered to the decision maker, the decision maker is willing to sell the new information and not use it anymore.
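For contrast, the precise-probability version of Eq.(4.75) collapses to a single number. The sketch below is an illustration of ours: the precise prior P(1G) = 0.40 is an assumed midpoint value, not one used in the dissertation's examples, applied to the two construction strategies of Example 4-5.

```python
# Assumed precise prior for state 1G (midpoint of the interval [0.35, 0.45]).
p = 0.40
costs = {"C1": (-1890, -1935), "C2": (-1125, -2520)}   # $1,000 per real state

# Best choice without additional information: X.
expected = {c: p * a1 + (1 - p) * a2 for c, (a1, a2) in costs.items()}
E_X = max(expected.values())

# Best choice with perfect information: pick per revealed state, then average.
E_Z = (p * max(a[0] for a in costs.values())
       + (1 - p) * max(a[1] for a in costs.values()))

VPI = E_Z - E_X      # Eq. (4.75): a single value, never negative
print(E_X, E_Z, VPI)   # VPI of about 306, i.e., $306,000
```

With a unique measure the buying and selling prices coincide, which is exactly what breaks down once the measure ranges over a set.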

When dealing with imprecise probabilities, the probability measures are not

unique, and therefore the value of (perfect) information can no longer be a single value:

indeed, it is an interval, which is bounded by lower and upper values of (perfect)

information. The lower value of (perfect) information determines the maximum allowed

cost for collecting the information, and the upper value of (perfect) information is the

minimum selling price for the new information. There is no reason why the maximum

buying price should be the same as the minimum selling price. For example, consider a

contractor that has to bid on a project: the cost that he is willing to pay for additional

information on the project is different (much smaller) than the amount for which he


would sell this information to his competitors without using the information. Here we are going to explain how to obtain these bounds on the value of information.

In imprecise probabilities, the bounds on VI and VPI are calculated as follows:

\[
\begin{aligned}
VI_{LOW} &= \min\left(\mathbf{E}(Y) - \mathbf{E}(X)\right) = E_{LOW}(Y - X) \\
VI_{UPP} &= \max\left(\mathbf{E}(Y) - \mathbf{E}(X)\right) = -\min\left(\mathbf{E}(X) - \mathbf{E}(Y)\right) = -E_{LOW}(X - Y)
\end{aligned}
\quad (4.77)
\]
\[
\begin{aligned}
VPI_{LOW} &= \min\left(\mathbf{E}(Z) - \mathbf{E}(X)\right) = E_{LOW}(Z - X) \\
VPI_{UPP} &= \max\left(\mathbf{E}(Z) - \mathbf{E}(X)\right) = -\min\left(\mathbf{E}(X) - \mathbf{E}(Z)\right) = -E_{LOW}(X - Z)
\end{aligned}
\quad (4.78)
\]

Here VI_LOW (VPI_LOW) is the least value of the imperfect (perfect) information; therefore, VI_LOW is the maximum cost allowed for collecting the information. VI_UPP (VPI_UPP) is the maximum value of the imperfect (perfect) information; once the price offered is higher than VI_UPP (VPI_UPP), we should be willing to sell the information without using it. If any lower value of (perfect) information is found to be negative, then the preference among the involved gambles is indeterminate, indicating that it may not be worth buying the new (perfect) information even if the new information is free.

Similarly, the lower and upper updated values from imperfect information to perfect information (VUP) are as follows:

\[
VUP_{LOW} = E_{LOW}(Z - Y), \qquad VUP_{UPP} = -E_{LOW}(Y - Z) \quad (4.79)
\]

By observing Eq.(4.78), VPI_LOW = E_LOW(Z − X) = E_LOW[(Z − Y) + (Y − X)]. In imprecise probabilities, the following property holds (Walley 1991, page 76): E_LOW[(Z − Y) + (Y − X)] ≥ E_LOW(Z − Y) + E_LOW(Y − X) = VUP_LOW + VI_LOW. Therefore,

\[
VPI_{LOW} \ge VUP_{LOW} + VI_{LOW}, \quad \text{or} \quad VUP_{LOW} \le VPI_{LOW} - VI_{LOW} \quad (4.80)
\]


indicating that the maximum allowed cost (i.e., lower value) for perfect information (VPI_LOW) is no less than the sum of the lower updated value (VUP_LOW) and the lower value of imperfect information (VI_LOW); equivalently, the lower updated value (VUP_LOW) cannot exceed the difference between the lower value of perfect information (VPI_LOW) and the lower value of imperfect information (VI_LOW). Similarly,

\[
VPI_{UPP} \le VUP_{UPP} + VI_{UPP}, \quad \text{or} \quad VUP_{UPP} \ge VPI_{UPP} - VI_{UPP} \quad (4.81)
\]

which means that the upper value of perfect information (VPI_UPP) may not exceed the sum of the upper updated value (VUP_UPP) and the upper value of imperfect information (VI_UPP); also, the upper updated value (VUP_UPP) is no less than the difference between the upper value of perfect information (VPI_UPP) and the upper value of imperfect information (VI_UPP).

Let us now consider the case in which there are sets of gambles {X_i} and {Y_j}, while the problem with perfect information is always determinate and thus only contains one gamble, Z. Then, the lower and upper VI, VPI, and VUP are as follows:

\[
VI_{LOW} = \min_{i,j} E_{LOW}(Y_j - X_i), \qquad VI_{UPP} = \max_{i,j}\left(-E_{LOW}(X_i - Y_j)\right) = -\min_{i,j} E_{LOW}(X_i - Y_j) \quad (4.82)
\]
\[
VPI_{LOW} = \min_{i} E_{LOW}(Z - X_i), \qquad VPI_{UPP} = \max_{i}\left(-E_{LOW}(X_i - Z)\right) = -\min_{i} E_{LOW}(X_i - Z) \quad (4.83)
\]
\[
VUP_{LOW} = \min_{j} E_{LOW}(Z - Y_j), \qquad VUP_{UPP} = -\min_{j} E_{LOW}(Y_j - Z) \quad (4.84)
\]

The relations in Eqs.(4.80) and (4.81) still hold in this case.


Again, if the choice between no additional information (X), imperfect information

(Y) and perfect information (Z) does not affect the uncertainty on the state variables, then

both the relaxed constraint and the strict constraint could be applied.

Notice that a negative lower value of imperfect information may occur under both the relaxed constraints and the strict constraints. Given the new evidence from imperfect new information, the preference among gambles (choices) may remain indeterminate, and the indeterminacy among multiple choices may lead to a negative lower value of imperfect information (as will be exemplified in Figure 5-14 and Figure 5-15). The situation with perfect information is different: with perfect information, problems are determinate. A negative lower value of perfect information can never occur under the strict constraints; however, it is possible under the relaxed constraints. Case (2) in Section 4.3.3.3.2 (Figure 4-19) may occur, where X2 represents the case with perfect information and X1 represents no information; thus, a negative lower value of perfect information is obtained (as will be exemplified in Figure 5-17). Within precise probabilities, because perfect information is always preferred to no information, we can never obtain a negative value of information.

Here a simple example illustrates the methodology to determine lower and upper

values of information.

Example 4-8 Consider again the situation and information in Example 4-5, the imperfect exploration in

Example 4-6, and the perfect exploration in Example 4-7. We now want to determine the lower and upper

values of VI, VPI, and VUP.

The reduced decision tree for exploration is depicted in Figure 4-20, which only includes the

optimal choices obtained from Example 4-5 through Example 4-7:


(1) At chance node S1 (i.e., the choice of no exploration), the set of optimal gambles is opt(N_1) = {X^1_11, X^1_12};

(2) At chance node S2 (i.e., the choice of imperfect exploration), the set of optimal gambles is opt(N_2) = {E^1_2 X^1_22 + E^2_2 X^2_21};

(3) At chance node S3 (i.e., the choice of perfect exploration), the set of optimal gambles is opt(N_3) = {E^1_3 X^1_32 + E^2_3 X^2_31}.

Figure 4-20: Decision tree for the tunnel exploration, adapted from Karam et al. (2007).

To determine the lower and upper values of imperfect information (VI), Eq.(4.82) reads as follows:

\[
\begin{aligned}
VI_{LOW} &= \min\left( E_{LOW}\!\left(E_2^1 X_{22}^1 + E_2^2 X_{21}^2 - X_{11}^1\right),\ E_{LOW}\!\left(E_2^1 X_{22}^1 + E_2^2 X_{21}^2 - X_{12}^1\right) \right) \\
VI_{UPP} &= -\min\left( E_{LOW}\!\left(X_{11}^1 - E_2^1 X_{22}^1 - E_2^2 X_{21}^2\right),\ E_{LOW}\!\left(X_{12}^1 - E_2^1 X_{22}^1 - E_2^2 X_{21}^2\right) \right)
\end{aligned}
\quad (4.85)
\]

[The tree of Figure 4-20 branches on the exploration method (no exploration, imperfect exploration, perfect exploration), then on the exploration result, the construction strategy, and the real geologic state, with terminal costs (×$1,000) as in Figure 4-15.]


As for E_LOW(E^1_2 X^1_22 + E^2_2 X^2_21 − X^1_11), it can be rewritten as

\[
E_{LOW}\!\left(E_2^1 X_{22}^1 + E_2^2 X_{21}^2 - X_{11}^1\right) = \min\left( \mathbf{E}\!\left(E_2^1 X_{22}^1 + E_2^2 X_{21}^2\right) - \mathbf{E}\!\left(X_{11}^1\right) \right) \quad (4.86)
\]

Note that both the relaxed constraint and the strict constraint may be applied because the decision of performing exploration will not change the uncertainty on the geologic states. Since gambles X^1_22 and X^2_21 are affected by uncertainty on the geologic states, and the exploration results may not be completely reliable, calculating E(E^1_2 X^1_22 + E^2_2 X^2_21) requires the joint probability measure Q over the exploration results and the real geologic states (Section 4.3.3), which is constructed from the extreme distributions Q^m_EXT listed in Table 4-11. Let q_ij be the joint probability that the exploration result is Gi and the geologic state is Gj. Accordingly, E(E^1_2 X^1_22 + E^2_2 X^2_21) = a^11_22 q_11 + a^12_22 q_12 + a^21_21 q_21 + a^22_21 q_22. To calculate E(X^1_11), only the uncertainty on the geologic states is required, because it is under the option of no additional exploration. Let p_i be the prior probability of geologic state Gi; thus E(X^1_11) = a^11_11 p_1 + a^12_11 p_2. If the relaxed constraint is applied, E_LOW(E^1_2 X^1_22 + E^2_2 X^2_21 − X^1_11) is equal to $52,000, which is obtained by solving the following optimization problem:

\[
\text{Minimize } \mathbf{E}\!\left(E_2^1 X_{22}^1 + E_2^2 X_{21}^2\right) - \mathbf{E}\!\left(X_{11}^1\right) = \left(-1125q_{11} - 2520q_{12} - 1890q_{21} - 1935q_{22}\right) - \left(-1890p_1 - 1935p_2\right)
\]

Subject to (relaxed constraint):

\[
\mathbf{Q} = \sum_{m=1}^{8} \lambda_{1,m} \mathbf{Q}_{EXT}^{m}; \quad \lambda_{1,m} \ge 0; \ \sum_{m=1}^{8} \lambda_{1,m} = 1;
\]
\[
\begin{pmatrix} p_1 \\ p_2 \end{pmatrix} = \lambda_{2,1}\begin{pmatrix} 0.35 \\ 0.65 \end{pmatrix} + \lambda_{2,2}\begin{pmatrix} 0.45 \\ 0.55 \end{pmatrix}; \quad \lambda_{2,i} \ge 0; \ \sum_{i=1}^{2} \lambda_{2,i} = 1
\quad (4.87)
\]

where Q^m_EXT are given in Table 4-11. Notice that X1 in Eq.(2.37) (i.e., Ψ_X1 = Ψ_X2) is here E^1_2 X^1_22 + E^2_2 X^2_21, and X2 in Eq.(2.37) is here X^1_11. The equality constraint in Eq.(2.37), Ψ_X1 = Ψ_X2, is satisfied because, if Q is marginalized to the state variable, the marginal is an element of Ψ_p, the convex set of prior probabilities, i.e., Ψ_X1 = Ψ_p. On the other hand, (p_1, p_2)^T is also constrained to be an element of set Ψ_p, i.e., Ψ_X2 = Ψ_p. Thus, the relaxed constraint is applied here by enforcing Ψ_X1 = Ψ_X2 = Ψ_p.


Similarly, E_LOW(E^1_2 X^1_22 + E^2_2 X^2_21 − X^1_12) is $29,500; E_LOW(X^1_11 − E^1_2 X^1_22 − E^2_2 X^2_21) and E_LOW(X^1_12 − E^1_2 X^1_22 − E^2_2 X^2_21) are equal to −$218,900 and −$331,400, respectively. Thus, when the relaxed constraint is applied, VI_LOW = min($52,000, $29,500) = $29,500, and VI_UPP = −min(−$331,400, −$218,900) = $331,400.

If the imperfect exploration in Example 4-6 costs less than $29,500, we should perform the exploration; if we are offered more than $331,400, we are willing to give the imperfect exploration information away without using it. For any price between VI_LOW and VI_UPP, the preference between no exploration and the imperfect exploration is indeterminate.
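Since the objective of Eq.(4.87) is linear in (Q, p) and the relaxed constraint lets Q and the prior vary independently over their sets, the optimum sits at extreme points, so the relaxed VI bounds can be verified by enumeration. The sketch below is ours (variable names assumed) and reproduces VI_LOW = $29,500 and VI_UPP = $331,400.

```python
from itertools import product

# Extreme joint distributions over (exploration result, real state), built
# exactly from the Table 4-10 bounds and the prior bounds (cf. Table 4-11).
joints = [(r11 * p1, r12 * (1 - p1), (1 - r11) * p1, (1 - r12) * (1 - p1))
          for p1, r11, r12 in product((0.35, 0.45), (0.85, 0.95), (0.35, 0.45))]
priors = [0.35, 0.45]                      # extreme values of P(1G)

# Y: follow C2 if exploration says 1G, C1 if it says 2G (costs in $1,000).
E_Y   = lambda q: -1125 * q[0] - 2520 * q[1] - 1890 * q[2] - 1935 * q[3]
E_X11 = lambda p: -1890 * p - 1935 * (1 - p)    # no exploration, C1
E_X12 = lambda p: -1125 * p - 2520 * (1 - p)    # no exploration, C2

# Relaxed constraint: Q and the prior range over their sets independently.
low_Y_minus = lambda EX: min(map(E_Y, joints)) - max(map(EX, priors))
low_minus_Y = lambda EX: min(map(EX, priors)) - max(map(E_Y, joints))

VI_LOW = min(low_Y_minus(E_X11), low_Y_minus(E_X12))     # Eq. (4.82)
VI_UPP = -min(low_minus_Y(E_X11), low_minus_Y(E_X12))

print(round(VI_LOW, 1), round(VI_UPP, 1))   # about 29.5 and 331.4 ($1,000)
```

The strict-constraint values would instead require the coupled linear program of Eq.(4.88), where the marginal of Q is tied to the prior.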

Alternatively, if the strict constraint is applied, E_LOW(E^1_2 X^1_22 + E^2_2 X^2_21 − X^1_11) is calculated as $56,500 by solving the following optimization problem:

\[
\text{Minimize } \mathbf{E}\!\left(E_2^1 X_{22}^1 + E_2^2 X_{21}^2\right) - \mathbf{E}\!\left(X_{11}^1\right) = \left(-1125q_{11} - 2520q_{12} - 1890q_{21} - 1935q_{22}\right) - \left(-1890p_1 - 1935p_2\right)
\]

Subject to the strict constraint

\[
\mathbf{1}^T \mathbf{Q} = \left(p_1, p_2\right)
\]

together with the same convex-combination constraints as in Eq.(4.87):

\[
\mathbf{Q} = \sum_{m=1}^{8} \lambda_{1,m} \mathbf{Q}_{EXT}^{m}; \quad \lambda_{1,m} \ge 0; \ \sum_{m=1}^{8} \lambda_{1,m} = 1;
\]
\[
\begin{pmatrix} p_1 \\ p_2 \end{pmatrix} = \lambda_{2,1}\begin{pmatrix} 0.35 \\ 0.65 \end{pmatrix} + \lambda_{2,2}\begin{pmatrix} 0.45 \\ 0.55 \end{pmatrix}; \quad \lambda_{2,i} \ge 0; \ \sum_{i=1}^{2} \lambda_{2,i} = 1
\quad (4.88)
\]

where the strict constraint is used by imposing that the marginal of Q is the same as the prior probabilities. Similarly, E_LOW(E^1_2 X^1_22 + E^2_2 X^2_21 − X^1_12) = $125,300; E_LOW(X^1_11 − E^1_2 X^1_22 − E^2_2 X^2_21) = −$214,400, and E_LOW(X^1_12 − E^1_2 X^1_22 − E^2_2 X^2_21) = −$233,800. Thus, when the strict constraint is applied, VI_LOW = min($56,500, $125,300) = $56,500, and VI_UPP = −min(−$214,400, −$233,800) = $233,800.

Compared to the results with the relaxed constraint, we have a higher VI_LOW and a lower VI_UPP because the strict constraint reduces the imprecision.


When the relaxed constraint is applied, VPI_LOW = $240,800, VPI_UPP = $461,300, VUP_LOW = $48,800, and VUP_UPP = $292,300; when the strict constraint is used, VPI_LOW = $267,800, VPI_UPP = $380,300, VUP_LOW = $129,800, and VUP_UPP = $211,300. One may easily check that the inequalities in (4.80) and (4.81) hold under both the relaxed constraint and the strict constraint, i.e., VPI_LOW ≥ VUP_LOW + VI_LOW and VPI_UPP ≤ VUP_UPP + VI_UPP.
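The relaxed-constraint VPI and VUP bounds can be checked the same way, by scanning extreme points (the strict-constraint values would again need the coupled linear programs). The sketch below is ours and reproduces the four relaxed values quoted above.

```python
from itertools import product

priors = [0.35, 0.45]                      # extreme values of P(1G)
joints = [(r11 * p1, r12 * (1 - p1), (1 - r11) * p1, (1 - r12) * (1 - p1))
          for p1, r11, r12 in product((0.35, 0.45), (0.85, 0.95), (0.35, 0.45))]

# Costs in $1,000. Z: perfect information (choose C2 if 1G, C1 if 2G);
# Y: imperfect exploration; X11, X12: no exploration with C1, C2.
E_Z   = lambda p: -1125 * p - 1935 * (1 - p)
E_Y   = lambda q: -1125 * q[0] - 2520 * q[1] - 1890 * q[2] - 1935 * q[3]
E_X11 = lambda p: -1890 * p - 1935 * (1 - p)
E_X12 = lambda p: -1125 * p - 2520 * (1 - p)

minZ, maxZ = min(map(E_Z, priors)), max(map(E_Z, priors))
minY, maxY = min(map(E_Y, joints)), max(map(E_Y, joints))

# Relaxed constraint: independent measures for the two sides of each difference.
VPI_LOW = min(minZ - max(map(E_X11, priors)), minZ - max(map(E_X12, priors)))
VPI_UPP = -min(min(map(E_X11, priors)) - maxZ, min(map(E_X12, priors)) - maxZ)
VUP_LOW, VUP_UPP = minZ - maxY, maxZ - minY

print(VPI_LOW, VPI_UPP, VUP_LOW, VUP_UPP)
# about 240.8, 461.3, 48.8, 292.3 ($1,000)
```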

Now let us consider an example which may yield a negative lower value of information.

Example 4-9 Consider again the situation in Example 4-5 and Example 4-6. Instead of the original

imprecise reliability matrix shown in Table 4-10, here we consider another exploration with reliability

matrix such that 0.45≤ r11≤ 0.55; 0.45≤ r12≤ 0.55. Hence, 0.45≤ r21≤ 0.55; 0.45≤ r22≤ 0.55. We want to

determine the lower and upper values of information for this exploration.

Now, the extreme points of set Ψ^rX (i.e., Ψ^r_Cj, j = 1, 2) are obtained by taking extreme distributions in each column of the reliability matrix as described in Eq.(4.49):

\[
EXT^{rX} = \left\{ \begin{pmatrix} 0.45 & 0.45 \\ 0.55 & 0.55 \end{pmatrix}, \begin{pmatrix} 0.55 & 0.45 \\ 0.45 & 0.55 \end{pmatrix}, \begin{pmatrix} 0.45 & 0.55 \\ 0.55 & 0.45 \end{pmatrix}, \begin{pmatrix} 0.55 & 0.55 \\ 0.45 & 0.45 \end{pmatrix} \right\} \quad (4.89)
\]

The prior probabilities are still assigned as 0.35 ≤ P(1G) ≤ 0.45, and thus the extreme points of Ψ_P are p^1_EXT = (0.35, 0.65) and p^2_EXT = (0.45, 0.55). Construction strategies need to be selected between C1 and C2 conditional on the exploration results.

The first step is to determine the optimal construction strategies given the exploration results. The decision tree is the same as that depicted in Figure 4-15. By applying Theorem 4-7, the set of extreme joint probabilities (EXT_X) is obtained, as shown in Table 4-16. Finally, Eq. (4.65) gives the extreme probabilities conditional on the exploration results, listed in Table 4-17.
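The mechanics of this step can be sketched as follows. The snippet combines one extreme prior with one of the extreme reliability matrices from Eq. (4.89); up to rounding, it reproduces the first rows of Table 4-16 (the joint) and Table 4-17 (the conditionals). This is an illustrative sketch, not the full enumeration of Theorem 4-7:

```python
# One extreme prior and one extreme reliability matrix:
# p = (P(1G), P(2G)); r[i][g] = P(iG_dagger | gG).
p = (0.35, 0.65)
r = ((0.45, 0.55),   # P(1G† | 1G), P(1G† | 2G)
     (0.55, 0.45))   # P(2G† | 1G), P(2G† | 2G)

# Joint q[i][g] = P(iG†, gG), as in Table 4-16 (extreme point 1).
q = [[r[i][g] * p[g] for g in range(2)] for i in range(2)]

# Conditionals P(gG | iG†), as in Table 4-17 (extreme point 1).
post = [[q[i][g] / sum(q[i]) for g in range(2)] for i in range(2)]
print([[round(x, 3) for x in row] for row in post])
# → [[0.306, 0.694], [0.397, 0.603]]
```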

At decision node ς^1_1 (i.e., given exploration result 1G†), when the relaxed constraint is applied, E_LOW(X^1_{11} − X^1_{12}) and E_LOW(X^1_{12} − X^1_{11}) are equal to −$98,700 and −$180,900, respectively. Since both are negative, gambles X^1_{12} and X^1_{11} are both optimal under the relaxed constraint. Alternatively, if the strict constraint is applied, E_LOW(X^1_{11} − X^1_{12}) and E_LOW(X^1_{12} − X^1_{11}) are equal to −$90,000 and −$172,100, respectively, which are still both negative. Thus, gambles X^1_{12} and X^1_{11} are optimal under the strict constraint as well.

Likewise at node ς^2_1 (i.e., given exploration result 2G†): E_LOW(X^2_{11} − X^2_{12}) and E_LOW(X^2_{12} − X^2_{11}) are equal to −$98,700 and −$180,900 with the relaxed constraint, and to −$90,000 and −$172,100 with the strict constraint, respectively. As a result, gambles X^2_{11} and X^2_{12} are both optimal in both cases.

Therefore, with the prior probability 0.35 ≤ P(1G) ≤ 0.45 and the imprecise reliability of the exploration, the preference between construction strategies C1 and C2, given the exploration result, is still indeterminate.

Table 4-16: Set of extreme joint probabilities EXT_X.

Extreme point number, m | q^{EXT,m}_{1,1} | q^{EXT,m}_{1,2} | q^{EXT,m}_{2,1} | q^{EXT,m}_{2,2}
1 | 0.158 | 0.358 | 0.193 | 0.293
2 | 0.158 | 0.293 | 0.193 | 0.358
3 | 0.193 | 0.358 | 0.158 | 0.293
4 | 0.193 | 0.293 | 0.158 | 0.358
5 | 0.203 | 0.303 | 0.248 | 0.248
6 | 0.203 | 0.248 | 0.248 | 0.303
7 | 0.248 | 0.303 | 0.203 | 0.248
8 | 0.248 | 0.248 | 0.203 | 0.303

Page 182: Copyright by Xiaomin You 2010

165

Table 4-17: Extreme probabilities conditional to the exploration results.

Extreme point number, m | P(1G|1G†) | P(2G|1G†) | P(1G|2G†) | P(2G|2G†)
1 | 0.306 | 0.694 | 0.397 | 0.603
2 | 0.350 | 0.650 | 0.350 | 0.650
3 | 0.350 | 0.650 | 0.350 | 0.650
4 | 0.397 | 0.603 | 0.306 | 0.694
5 | 0.401 | 0.599 | 0.500 | 0.500
6 | 0.450 | 0.550 | 0.450 | 0.550
7 | 0.450 | 0.550 | 0.450 | 0.550
8 | 0.500 | 0.500 | 0.401 | 0.599

Figure 4-21: Decision tree for the tunnel exploration. [Figure: decision tree with branches for no exploration, imperfect exploration, and perfect exploration; at each terminal branch the geologic state (1G or 2G) determines the cost (×$1,000): strategy C1 costs −1890 under 1G and −1935 under 2G; strategy C2 costs −1125 under 1G and −2520 under 2G.]


The next step is to determine the lower and upper values of imperfect information (VI). The decision tree is shown in Figure 4-21. Eq. (4.82) reads as follows:

$$VI_{LOW} = \min_{i,j,k} E_{LOW}\bigl( E^1(X^1_{1i}) + E^2(X^2_{1j}) - X_k \bigr)$$

$$VI_{UPP} = -\min_{i,j,k} E_{LOW}\bigl( X_k - E^1(X^1_{1i}) - E^2(X^2_{1j}) \bigr) \qquad (4.90)$$

By alternating over the extreme distributions Q^{EXT}_X listed in Table 4-16, if the relaxed constraint is applied, VI_LOW = −$139,500 and VI_UPP = $161,600; if the strict constraint is applied, VI_LOW = −$112,500 and VI_UPP = $112,500.

As explained on Page 159, given new evidence from imprecise information, the preference among gambles may be indeterminate, and indeterminacy among multiple choices may lead to a negative lower value of information. Here, negative lower values of information are indeed obtained.

Recall that the value of information can never be negative if the inputs are precise probabilities. One may argue that the value of information should never be negative even if the inputs are imprecise; the idea would then be to solve the problem with one precise prior probability and one precise reliability matrix at a time, picked from Ψp and Ψr, respectively. We illustrate this procedure with the present example.

By marginalizing Q^{EXT}_X (Table 4-16) over exploration results and real geological states, the corresponding marginals are obtained, as listed in Table 4-18. The preference between C1 and C2 can then be determined for each set of precise inputs. For example, for the prior information, if P(1G) = 0.35, then E(X^1_{11}) = −$1,919,250 and E(X^2_{11}) = −$2,031,750, and thus C1 is preferred to C2; i.e., the expected value with no additional exploration is −$1,919,250.

When new information is provided, take the first extreme point in Table 4-17 as an example. Given that the exploration result is 1G†, P(1G) = 0.306 and P(2G) = 0.694; the expected value for C1 is (0.306  0.694)·(−1890, −1935)^T = −$1,921,240, and the expected value for C2 is (0.306  0.694)·(−1125, −2520)^T = −$2,093,370; thus, C1 is preferred to C2. Given that the exploration result is 2G†, P(1G) = 0.397 and P(2G) = 0.603; the expected value for C1 is (0.397  0.603)·(−1890, −1935)^T = −$1,917,140, and the expected value for C2 is (0.397  0.603)·(−1125, −2520)^T = −$1,966,310; thus, C1 is again preferred to C2. Since P(1G†) = 0.515 and P(2G†) = 0.485 in Table 4-18, the expected value with exploration results is (0.515  0.485)·(−1,921,240, −1,917,140)^T = −$1,919,250, which is the same as the expected value with no additional exploration (−$1,919,250). Thus, the value of information is 0. The computation is similar for all other extreme points in Table 4-17 and Table 4-18. The unique optimal construction strategies for each extreme reliability matrix and the corresponding values of information are listed in Table 4-19.
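The precise-inputs-at-a-time procedure just described can be sketched end to end. The snippet below reproduces the computation for extreme point 5 of Table 4-16 (prior P(1G) = 0.45 combined with one reliability extreme), whose value of information is the $22,050 entry of Table 4-19; it is an illustrative sketch, not the full enumeration over all extreme points:

```python
# Costs per Figure 4-21 (in $1,000): strategy -> (cost|1G, cost|2G).
cost = {"C1": (1890, 1935), "C2": (1125, 2520)}
p = (0.45, 0.55)                    # extreme prior
r = ((0.45, 0.55), (0.55, 0.45))    # r[i][g] = P(iG† | gG)

# Expected cost with no exploration: pick the cheaper strategy.
ev_no = min(sum(pg * c[g] for g, pg in enumerate(p)) for c in cost.values())

# Joint P(iG†, gG); in each exploration branch pick the cheaper strategy,
# then sum the branch contributions (already weighted by P(iG†)).
q = [[r[i][g] * p[g] for g in range(2)] for i in range(2)]
ev_expl = sum(min(sum(q[i][g] * c[g] for g in range(2))
                  for c in cost.values())
              for i in range(2))

vi = ev_no - ev_expl
print(round(vi, 2))  # → 22.05, i.e., a value of information of $22,050
```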

It should be noted that this method actually deals with a collection of problems, each with precise probabilities, so each problem is determinate and the value of information can never be negative. In the results obtained above with the algorithm developed in this study, the negative lower value of information is the result of indeterminacy: the indeterminacy between construction strategies is taken into account when calculating the value of information.

Table 4-18: Extreme marginals on exploration results and real geological states

Extreme point number, m | P(1G) | P(2G) | P(1G†) | P(2G†)
1 | 0.35 | 0.65 | 0.515 | 0.485
2 | 0.35 | 0.65 | 0.450 | 0.550
3 | 0.35 | 0.65 | 0.550 | 0.450
4 | 0.35 | 0.65 | 0.485 | 0.515
5 | 0.45 | 0.55 | 0.505 | 0.495
6 | 0.45 | 0.55 | 0.450 | 0.550
7 | 0.45 | 0.55 | 0.550 | 0.450
8 | 0.45 | 0.55 | 0.495 | 0.505


Table 4-19: Unique optimal construction strategies and value of information

Extreme point number, m | Strategy given 1G† | Strategy given 2G† | VI
1 | C1 | C1 | 0
2 | C1 | C1 | 0
3 | C1 | C1 | 0
4 | C1 | C1 | 0
5 | C1 | C2 | $22,050
6 | C2 | C2 | 0
7 | C2 | C2 | 0
8 | C2 | C1 | $22,050


Chapter 5 Case Histories

This chapter revisits several case histories of risk analysis in tunneling by using

the methodologies developed in previous chapters. Section 5.1 applies event-tree analysis

with imprecise probabilities to obtain the bounds on the occurrence probability of

accidents during the construction of an underwater tunnel. Section 5.2 deals with the

probability of the environmental damage caused by the construction activities of the

Stockholm Ring Road Project. Section 5.3 revisits the Sucheon Tunnel by introducing imprecision in the probabilities and determining the optimal exploration plan.

Section 5.4 introduces the application to the risk register of the East Side CSO Project in

Portland, Oregon. All results obtained based on imprecise probabilities are compared

with the results from precise probabilities.

5.1 ETA APPLIED TO THE DESIGN OF AN UNDERWATER TUNNEL

The event-tree analysis of Section 4.2.1 was applied to analyze the potential risks at the design stage of a 1.27 km long, 8.1 m diameter TBM tunnel crossing under the Han River in South Korea. Because the tunnel started and ended in the downtown area (see Figure 5-1), the major concerns were the potential risks to neighborhoods, local businesses, and existing structures and facilities. A risk analysis was conducted to quantify the occurrence probability of accidents (Hong et al. 2009). In this section, all precise probability inputs and the project background information are from Hong et al. (2009).


Figure 5-1 Construction site plan (Hong et al. 2009).

Three important initiating events, namely poor ground conditions, high water pressure, and heavy rainfall, were identified after an extensive analysis of the available empirical data. Without any mitigation measures, the three initiating events would lead to accidents and cause impacts on schedule and cost, or even tunnel failure. To avoid or mitigate these impacts, safety measures were proposed and classified into five categories: A. Investigation/design; B. Process planning; C. Machine type; D. Construction management; E. Reinforcement. Under each initiating event, the success probabilities of safety measures A through E are obtained by averaging the probability evaluations from four experts. The precise inputs used by Hong et al. (2009) are summarized in Table 5-1.

Table 5-1 Success probabilities of safety measures (Hong et al. 2009)

Safety measure | Poor ground conditions | High water pressure | Heavy rainfall
A. Investigation/design | 0.02 | 0.15 | 0.40
B. Process planning | 0.13 | 0.30 | 0.19
C. Machine type | 0.65 | 0.68 | 0.73
D. Construction management | 0.63 | 0.65 | 0.63
E. Reinforcement | 0.38 | 0.56 | 0.28


Figure 5-2 Event tree for initiating event of poor ground conditions (Hong et al. 2009)


The event tree with the initiating event ‘Poor ground conditions’ is depicted in

Figure 5-2. The occurrence of accidents and their consequences are identified at the end

of each probability path. The consequences are evaluated at five levels: catastrophic,

critical, serious, marginal, and negligible. Hong et al. (2009) offered the analysis results

for the three initiating events. As shown in Table 5-2, the occurrence probabilities of

accidents are 0.59, 0.56, and 0.46, respectively. Based on the consequence, accidents are

classified into three risk levels: I, II, and II, where catastrophic and critical accidents are

grouped in level I, serious and marginal accidents belong to level II, and negligible

accidents are grouped in level III. For risk level I, Hong et al. (2009) suggest to apply

significant mitigation measures to reduce the risk level or to remove the causes of risk

reasons; for risk level II, proactive mitigation measures need to be considered; risks at

level III should be taken care of in the construction management. The objective of the

risk management is to ensure that the occurrence probability of risks at level I is smaller

than 5%. The probabilities of accidents at different levels are summarized in Table 5-3,

and the probabilities that the initiating events cause a level I Risk are 0.25, 0.11, and 0.09,

respectively. Among the three initiating events, event ‘Poor ground conditions’ is far

from the objective of 5% probability. Thus, to reduce the potential of risk I, mitigation

measures should be implemented carefully and their effects on reducing the occurrence

probability or the consequence of accidents should be monitored regularly.

Page 190: Copyright by Xiaomin You 2010

173

Table 5-2 Probabilities of criticality and occurrence of accident (Hong et al. 2009)

Consequence | Poor ground conditions | High water pressure | Heavy rainfall
Catastrophic | 0.08 | 0.05 | -
Critical | 0.17 | 0.06 | 0.09
Serious | 0.34 | 0.45 | 0.36
Marginal | 0.25 | 0.19 | 0.42
Negligible | 0.16 | 0.25 | 0.13
Accident | 0.59 | 0.56 | 0.46

Table 5-3 Probabilities of accident at different risk levels (Hong et al. 2009)

Risk level | Poor ground conditions | High water pressure | Heavy rainfall
I | 0.25 | 0.11 | 0.09
II | 0.59 | 0.64 | 0.78
III | 0.16 | 0.25 | 0.13
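The grouping of consequences into risk levels can be sketched directly from Table 5-2 (the heavy-rainfall catastrophic entry, shown as "-", is taken as 0):

```python
# Consequence probabilities per initiating event (Hong et al. 2009).
events = {
    "poor ground": dict(catastrophic=0.08, critical=0.17, serious=0.34,
                        marginal=0.25, negligible=0.16),
    "high water":  dict(catastrophic=0.05, critical=0.06, serious=0.45,
                        marginal=0.19, negligible=0.25),
    "heavy rain":  dict(catastrophic=0.00, critical=0.09, serious=0.36,
                        marginal=0.42, negligible=0.13),
}
levels = {}
for name, c in events.items():
    levels[name] = {
        "I":   round(c["catastrophic"] + c["critical"], 2),  # significant mitigation
        "II":  round(c["serious"] + c["marginal"], 2),       # proactive mitigation
        "III": round(c["negligible"], 2),                    # construction management
    }
print(levels["poor ground"])  # → {'I': 0.25, 'II': 0.59, 'III': 0.16}
```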

The author redid the analyses by applying the event tree in Figure 5-2 to all three initiating events and by using the precise probabilities in Table 5-1; however, different results were obtained on several occasions, and these are highlighted in bold in Table 5-5 and Table 5-6. This is probably because the event trees for 'high water pressure' and 'heavy rainfall', which are not provided in Hong et al. (2009), differ from the event tree for 'poor ground conditions'. The author contacted the corresponding author of Hong et al. (2009); unfortunately, the corresponding author did not have time to confirm the event trees for the other two cases. Therefore, in the remainder of this section, we only compare our calculated results from precise probabilities to the results from imprecise probabilities.


Since the precise probabilities are obtained by averaging the evaluations of four experts, it would be better to admit the imprecision in the available data and use imprecise probabilities. As shown in Table 5-4, the predefined success probabilities of each safety measure are bounded by intervals, where the interval widths are all equal to 0.1 and the precise probabilities in Table 5-1 are contained in these intervals. Results obtained from the intervals in Table 5-4 are summarized in Table 5-5 and Table 5-6. One can easily check that the results from precise probabilities are bounded by the results from the imprecise probabilities.
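Since the event trees themselves are not reproduced here, the interval propagation can only be sketched generically: a failed safety measure with success probability [l, u] contributes the interval [1 − u, 1 − l] to its path, and a path probability is the product of its branch intervals, because a product of nonnegative probabilities is monotone in each factor. A minimal sketch, with a hypothetical path built from the 'poor ground conditions' intervals of Table 5-4:

```python
# Interval arithmetic for one event-tree path, a sketch; the actual
# tree of Figure 5-2 is not reproduced in this document.
def fail(iv):
    """Failure branch of a safety measure with success interval iv."""
    l, u = iv
    return (1.0 - u, 1.0 - l)

def product(ivs):
    """Bounds on a product of probability intervals (endpoint-monotone)."""
    lo, hi = 1.0, 1.0
    for l, u in ivs:
        lo *= l
        hi *= u
    return (lo, hi)

# Hypothetical path: measures A and B fail, C succeeds
# (intervals for 'poor ground conditions' from Table 5-4).
A, B, C = (0.00, 0.10), (0.10, 0.20), (0.60, 0.70)
path = product([fail(A), fail(B), C])
print(path)  # bounds on this path's occurrence probability
```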

Results from imprecise probabilities show that the case of 'poor ground conditions' is riskier than the other two cases. To ensure that the occurrence probability of risk level I is less than 5%, the upper probability of risk level I must be less than 5%. This is a stricter requirement than with precise probabilities, where the probability of risks at level I is represented by a single value. Therefore, by admitting the imprecision in the probability evaluation, more attention should be paid to risk management to ensure that the upper probability remains in the acceptable range.

Table 5-4 Success probabilities of safety measures in imprecise probabilities

Safety measure | Poor ground conditions | High water pressure | Heavy rainfall
A. Investigation/design | [0.00, 0.10] | [0.10, 0.20] | [0.35, 0.45]
B. Process planning | [0.10, 0.20] | [0.25, 0.35] | [0.10, 0.30]
C. Machine type | [0.60, 0.70] | [0.60, 0.70] | [0.70, 0.80]
D. Construction management | [0.60, 0.70] | [0.60, 0.70] | [0.60, 0.70]
E. Reinforcement | [0.30, 0.40] | [0.50, 0.60] | [0.20, 0.30]


Table 5-5 Probabilities of criticality and occurrence of accident

Consequence | Poor ground conditions (Imprecise / Precise) | High water pressure (Imprecise / Precise) | Heavy rainfall (Imprecise / Precise)
Catastrophic | [0.0540, 0.1120] / 0.08 | [0.0360, 0.0800] / 0.05 | [0.0420, 0.0960] / 0.07
Critical | [0.1112, 0.2080] / 0.17 | [0.0887, 0.1662] / 0.12 | [0.0507, 0.1187] / 0.09
Serious | [0.2658, 0.3936] / 0.34 | [0.3293, 0.4528] / 0.39 | [0.2933, 0.4266] / 0.38
Marginal | [0.2160, 0.3430] / 0.25 | [0.1440, 0.2450] / 0.19 | [0.2940, 0.4480] / 0.33
Negligible | [0.1080, 0.1960] / 0.16 | [0.1800, 0.2940] / 0.25 | [0.0840, 0.1680] / 0.13
Accident | [0.5100, 0.6400] / 0.59 | [0.5100, 0.6400] / 0.56 | [0.4400, 0.5800] / 0.54

Table 5-6 Probabilities of accident at different risk levels

Risk level | Poor ground conditions (Imprecise / Precise) | High water pressure (Imprecise / Precise) | Heavy rainfall (Imprecise / Precise)
I | [0.1807, 0.2944] / 0.25 | [0.1337, 0.2302] / 0.17 | [0.1031, 0.1949] / 0.16
II | [0.5208, 0.6819] / 0.59 | [0.5063, 0.6496] / 0.58 | [0.6488, 0.7955] / 0.71
III | [0.1080, 0.1960] / 0.16 | [0.1800, 0.2940] / 0.25 | [0.0840, 0.1680] / 0.13

5.2 FTA APPLIED TO THE STOCKHOLM RING ROAD TUNNELS

The Stockholm Ring Road project is a vast underground construction project that will provide a ring road around Stockholm to improve transportation. The majority of the alignment is in hard rock, but several sections are in soft ground (Sturk et al. 1996).

Figure 5-3 shows the project plan in 1992, based on the "Dennis Agreement", which was a political agreement on transit and highway improvements in and around Stockholm. However, the Stockholm Ring Road project has been opposed by green parties and local residents, mainly because of environmental concerns, since the permitting


stage, which led to several project halts (Tollroads Newsletter, 1997). As of 2007, about

half of the ring road was built. After a new environmental evaluation was completed, the

project was resumed and is expected to be ready for opening in 2020 (Stockholm News,

2009).

Figure 5-3 Stockholm Ring Road project plan in 1992 (Stockholm ring road, from http://en.wikipedia.org/wiki/Stockholm_ring_road)


Because of the high environmental concerns, a 'Review Team' composed of experts in geotechnical engineering was set up at the beginning of the project in 1996 to provide and ensure high-quality technical solutions, and a risk analysis was conducted by using fault trees to evaluate the environmental damage due to tunneling (Sturk et al. 1996). Figure 5-4 through Figure 5-7 show the fault trees, where the top event is 'the lime trees are damaged due to the tunneling activities'. It should be noted that all events in Sturk et al. (1996) are assumed to be independent of each other; the interactions noted in Figure 5-4 through Figure 5-7, such as 'unknown interaction', apply only when imprecise probabilities are considered later in this section. For the events at the bottom of the fault trees, Sturk et al. (1996) used precise probabilities, which are summarized in the columns under 'Precise' in Table 5-7. The resulting occurrence probability for the top event is equal to 0.105, which Sturk et al. (1996) deemed acceptable. However, the subsequent history of the project suggests that this was not a good estimate: the probability might be higher than 0.105 and thus not acceptable.


Table 5-7 Occurrence probabilities of events at the bottom of fault-trees

Event | Precise | Imprecise      Event | Precise | Imprecise
E02 | 0.50 | [0.45, 0.55]      E36 | 0.50 | [0.45, 0.55]
E08 | 0.05 | [0.01, 0.10]      E37 | 0.90 | [0.85, 0.95]
E10 | 0.05 | [0.01, 0.10]      E39 | 0.50 | [0.45, 0.55]
E12 | 0.50 | [0.45, 0.55]      E40 | 0.25 | [0.20, 0.30]
E13 | 0.25 | [0.20, 0.30]      E41 | 0.05 | [0.01, 0.10]
E14 | 0.25 | [0.20, 0.30]      E42 | 0.01 | [0.01, 0.10]
E15 | 0.05 | [0.01, 0.10]      E43 | 0.10 | [0.05, 0.15]
E16 | 0.10 | [0.05, 0.15]      E44 | 0.10 | [0.05, 0.15]
E18 | 0.25 | [0.20, 0.30]      E45 | 0.10 | [0.05, 0.15]
E23 | 0.25 | [0.20, 0.30]      E52 | 0.10 | [0.05, 0.15]
E24 | 0.01 | [0.01, 0.10]      E53 | 0.05 | [0.01, 0.10]
E25 | 0.10 | [0.05, 0.15]      E54 | 0.10 | [0.05, 0.15]
E26 | 0.417 | [0.35, 0.45]     E55 | 0.25 | [0.20, 0.30]
E27 | 0.25 | [0.20, 0.30]      E56 | 0.25 | [0.20, 0.30]
E28 | 0.207 | [0.25, 0.35]     E57 | 0.25 | [0.20, 0.30]
E29 | 0.25 | [0.20, 0.30]      E58 | 0.50 | [0.45, 0.55]
E33 | 0.01 | [0.01, 0.10]      E59 | 0.10 | [0.05, 0.15]


Figure 5-4 Fault tree for damage to lime trees due to tunneling activities, adapted from Sturk et al. (1996). [Figure: top event ET, 'The lime trees are damaged', is the AND of 'Trees exposed to damage' (E01) and 'Measures to reduce damage fail' (E02); E01 is an OR over Branch A (root damage), Branch B (chemical damage), Branch C, damage due to carelessness, trees dried up, and trees suffocating; gate interactions are marked as independence, unknown interaction, or ρ ∈ [0.5, 0.8].]


Figure 5-5 Fault tree for Branch A, adapted from Sturk et al. (1996). [Figure: Branch A covers root damage, with sub-branches for roots damaged mechanically and for minor/main roots damaged due to settlements; events E17 through E29, with interactions marked as independence or unknown interaction.]


Figure 5-6 Fault tree for Branch B, adapted from Sturk et al. (1996). [Figure: Branch B covers trees damaged by chemicals in the ground, with sub-branches for petroleum products and for chemicals from grout; events E30 through E45, with interactions marked as independence, unknown interaction, or ρ ∈ [0.5, 0.8].]


Figure 5-7 Fault tree for Branch C, adapted from Sturk et al. (1996)

As stated by Sturk et al. (1996), "the probabilities were assessed subjectively, based on expert knowledge and experience", which is a major reason to use imprecise probabilities instead of precise probabilities, as explained in Section 4.2.2. The columns headed 'Imprecise' in Table 5-7 list the imprecise probabilities that quantify the uncertainty of the events at the bottom of the fault trees; the interval widths are equal to 0.1. As shown in Figure 5-4 through Figure 5-7, the interaction between events is assumed to be 'unknown interaction', 'independence', or 'uncertain correlation' with correlation coefficient ρ ∈ [0.5, 0.8]. Table 5-8 summarizes all calculated occurrence



probabilities obtained from both the precise and the imprecise inputs. The lower and the

upper probabilities for the top event are equal to 0.0189 and 0.3116, respectively.
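The gate-level propagation behind these bounds can be sketched as follows. The independence formulas are the standard ones, and the 'unknown interaction' case uses the Fréchet bounds of Section 4.2.2; the ρ ∈ [0.5, 0.8] case is omitted from this sketch:

```python
# Interval propagation through two-input fault-tree gates, a sketch.
def and_indep(a, b):
    """AND gate under independence: endpoint products."""
    return (a[0] * b[0], a[1] * b[1])

def or_indep(a, b):
    """OR gate under independence: complement of both failing to occur."""
    return (1 - (1 - a[0]) * (1 - b[0]), 1 - (1 - a[1]) * (1 - b[1]))

def and_unknown(a, b):
    """AND gate with unknown interaction: Frechet bounds."""
    return (max(0.0, a[0] + b[0] - 1), min(a[1], b[1]))

def or_unknown(a, b):
    """OR gate with unknown interaction: Frechet bounds."""
    return (max(a[0], b[0]), min(1.0, a[1] + b[1]))

e02 = (0.45, 0.55)   # intervals from Table 5-7
e08 = (0.01, 0.10)
print(and_indep(e02, e08), and_unknown(e02, e08))
# Unknown interaction gives the wider (more conservative) interval.
```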

Comparing the two types of input, the only differences are (1) that the precise values are replaced by intervals, and (2) that the constraint of independence is relaxed by assuming different types of interaction. With these changes, we find that the probability of the top event (i.e., the lime trees are damaged by tunneling activities) can be as high as 0.3116, which is much higher than the original estimate (0.105) and might not be acceptable anymore. As a result, further proactive and effective solutions should be considered to deal with the environmental concerns.

Table 5-8 Calculated occurrence probabilities of events

Event Precise Imprecise Event Precise Imprecise

E01 2.09E-01 [4.20E-02, 5.67E-01] E22 1.04E-01 [7.00E-02, 1.35E-01]

E03 7.25E-03 [1.56E-02, 1.00E-01] E30 1.00E-02 [1.42E-02, 1.70E-01]

E04 1.83E-02 [2.80E-03, 9.17E-02] E31 1.25E-02 [4.50E-03, 2.48E-02]

E05 2.29E-02 [1.42E-02, 2.03E-01] E32 5.00E-04 [2.25E-04, 8.25E-03]

E06 6.80E-02 [0, 2.51E-01] E34 4.50E-04 [4.25E-03, 7.79E-02]

E07 6.25E-02 [0, 3.00E-01] E35 1.00E-03 [5.00E-04, 1.50E-02]

E09 1.45E-01 [5.95E-03, 2.35E-01] E38 5.00E-04 [5.00E-03, 8.20E-02]

E11 1.36E-01 [7.19E-02, 2.61E-01] E46 5.00E-03 [5.00E-04, 1.50E-02]

E17 7.73E-02 [1.40E-02, 3.06E-01] E47 1.32E-01 [7.19E-02, 2.46E-01]

E19 2.61E-02 [1.40E-02, 4.05E-02] E48 2.50E-02 [5.88E-02, 1.50E-01]

E20 1.00E-03 [5.00E-04, 1.50E-02] E49 1.09E-01 [1.40E-02, 1.13E-01]

E21 5.18E-02 [0, 2.50E-01] E50 6.25E-02 [1.20E-01, 2.58E-01]


5.3 DECISION ANALYSIS: THE OPTIMAL EXPLORATION PLAN FOR THE SUCHEON TUNNEL

The Sucheon tunnel is a 2 km long rock tunnel in micrographic granite and diorite (see Figure 5-8). Based on the investigation results (RMR, resistivity, and Q values), the geologic states were classified into five categories, as shown in Table 5-9. Five construction strategies, C1 through C5, were proposed (see Table 5-10). Costs for each construction strategy under different geologic states are listed in Table 5-11; under each geologic state iG, strategy Ci gives the minimum cost per unit length.

According to the prior information of geologic states, the tunnel is divided into 18

sections (see Table 5-12). Table 5-13 shows the reliability matrix of an imperfect

additional exploration. The selection of the construction strategy with or without the

imperfect additional exploration must be made among C1 through C5 for each tunnel

section. Moreover, the value of information and the optimal exploration plan need to be

determined.

Figure 5-8 Geological profile and layout of the Sucheon Tunnel (Min et al. 2003)


Table 5-9 Description of Geologic States (Karam et al. 2007)

Geologic state | RMR | Resistivity (Ωm) | Q value

1G > 81 > 3000 > 40

2G 60 - 80 1000 - 3000 4 - 40

3G 40 - 60 300 - 1000 1 – 4

4G 20 - 40 100 - 300 0.1 - 1

5G < 20 < 100 < 0.1

Table 5-10 Description of Construction Strategies (Karam et al. 2007)

Construction strategy Description

C1 Full face excavation with nominal support

C2 Full face excavation with extensive support

C3 Heading and bench excavation with nominal support

C4 Heading and bench excavation with extensive support

C5 Multi-heading and bench excavation


Table 5-11 Construction Cost (per meter) (Karam et al. 2007)

Construction strategy | 1G | 2G | 3G | 4G | 5G
C1 | $3,150 | $4,500 | $5,700 | $7,200 | $8,850
C2 | $4,125 | $3,525 | $5,325 | $7,350 | $9,000
C3 | $4,350 | $4,875 | $4,650 | $6,375 | $8,138
C4 | $4,650 | $4,500 | $5,400 | $6,285 | $7,875
C5 | $4,425 | $4,575 | $5,175 | $6,450 | $6,600

Table 5-12 Tunnel Section and Precise Prior Probabilities (Karam et al. 2007)

Section | Length (m) | 1G | 2G | 3G | 4G | 5G
1 | 60 | 0.51 | 0.49 | 0.00 | 0.00 | 0.00
2 | 20 | 0.15 | 0.47 | 0.38 | 0.00 | 0.00
3 | 57 | 0.15 | 0.47 | 0.38 | 0.00 | 0.00
4 | 40 | 0.00 | 0.00 | 0.47 | 0.53 | 0.00
5 | 120 | 0.00 | 0.29 | 0.64 | 0.07 | 0.00
6 | 41 | 0.15 | 0.25 | 0.53 | 0.07 | 0.00
7 | 96 | 0.14 | 0.25 | 0.56 | 0.06 | 0.00
8 | 104 | 0.47 | 0.43 | 0.10 | 0.00 | 0.00
9 | 586 | 0.48 | 0.45 | 0.07 | 0.00 | 0.00
10 | 106 | 0.49 | 0.51 | 0.00 | 0.00 | 0.00
11 | 67 | 0.51 | 0.49 | 0.00 | 0.00 | 0.00
12 | 167 | 0.50 | 0.50 | 0.00 | 0.00 | 0.00
13 | 36 | 0.78 | 0.22 | 0.00 | 0.00 | 0.00
14 | 74 | 0.10 | 0.83 | 0.07 | 0.00 | 0.00
15 | 43 | 0.19 | 0.81 | 0.00 | 0.00 | 0.00
16 | 110 | 0.67 | 0.33 | 0.00 | 0.00 | 0.00
17 | 188 | 0.86 | 0.14 | 0.00 | 0.00 | 0.00
18 | 50 | 0.13 | 0.32 | 0.54 | 0.00 | 0.00


Table 5-13 Exploration Reliability Matrix (Karam et al. 2007)

Exploration result | Reality: 1G | 2G | 3G | 4G | 5G
1G | 0.90 | 0.10 | 0.00 | 0.00 | 0.00
2G | 0.10 | 0.90 | 0.10 | 0.00 | 0.00
3G | 0.00 | 0.00 | 0.80 | 0.10 | 0.00
4G | 0.00 | 0.00 | 0.10 | 0.80 | 0.10
5G | 0.00 | 0.00 | 0.00 | 0.10 | 0.90

In Table 5-12 and Table 5-13, probabilities were assigned as precise values. With

the precise inputs, Karam et al. (2007) decided the optimal construction strategies and

exploration plan by using decision trees.

Now we redo the decision analysis with imprecise prior probabilities and an imprecise reliability matrix, as shown in Table 5-14 and Table 5-15. The extreme-point forms of Table 5-14 and Table 5-15 are given in Appendix B (Table B-1 and Table B-2). To avoid division-by-zero errors in the analysis, all zero probabilities in Table 5-12 through Table 5-15 are replaced by a very small number, 10^-9. The results obtained from the imprecise and the precise probabilities are compared later in this section.

For Section i (i = 1, ..., 18), we first use the decision trees in Figure 5-9 and Figure 5-10 to determine the optimal construction strategies in the cases of no additional exploration and imperfect additional exploration, respectively. Both cases are analyzed under the relaxed constraints and under the strict constraints. Since the precise inputs (Table 5-12 and Table 5-13) are contained in the imprecise inputs (Table 5-14 and Table 5-15),


the results obtained with the precise inputs should also be contained in those with

imprecise inputs. When imprecise inputs are used, the sets of probability measures are the same regardless of the type of constraints (relaxed or strict). However, since the strict constraints reduce uncertainty, the results generated under the strict constraints are included in those obtained by using the relaxed constraints. Accordingly, all results with precise or imprecise inputs can be presented in one table (i.e., Table 5-16). The results under the strict constraints are bolded in Table 5-16, and the underlined construction strategies are those obtained with precise inputs. For instance, in Section 1, given that the exploration result is 4G†: if the relaxed constraints are imposed, the optimal construction strategies are C1 through C5; under the strict constraints, the optimal ones are C1 and C5; if the precise inputs are used, the unique optimal construction strategy is C5.
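The Section 1 entry with no exploration ({C1, C2} in Table 5-16) can be reproduced with a pairwise-dominance (maximality) check of the kind used in Chapter 4: a strategy is discarded when some alternative has a strictly positive lower expected advantage over it. The sketch below assumes the relaxed constraints, enumerates the two extreme priors of Table 5-14, and uses the costs of Table 5-11; it is illustrative of the criterion, not the exact algorithm of this study:

```python
# Maximality check for tunnel Section 1 with no exploration, a sketch.
# P(1G) lies in [0.45, 0.55]; the two extreme priors over (1G, 2G):
priors = [(0.45, 0.55), (0.55, 0.45)]
cost = {  # strategy -> (cost per meter under 1G, under 2G), Table 5-11
    "C1": (3150, 4500), "C2": (4125, 3525), "C3": (4350, 4875),
    "C4": (4650, 4500), "C5": (4425, 4575),
}

def lower_exp_advantage(a, b):
    """Lower expectation of the advantage of a over b: the minimum over
    the extreme priors of E[cost_b - cost_a].  Positive => a dominates b."""
    return min(sum(p * (cost[b][g] - cost[a][g]) for g, p in enumerate(pr))
               for pr in priors)

# A strategy is maximal (kept as optimal) if no alternative dominates it.
maximal = [b for b in cost
           if not any(lower_exp_advantage(a, b) > 0 for a in cost if a != b)]
print(maximal)  # → ['C1', 'C2']
```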

For the case of perfect additional exploration, the optimal construction strategies

are the one with minimum cost under different geological states, as shown in Table 5-11.

Accordingly, for the case of perfect additional exploration, if the exploration result is iG,

the optimal construction strategy is Ci (i = 1, …, 5).

Table 5-14 Imprecise Prior Probabilities

Section | Length (m) | 1G | 2G | 3G | 4G | 5G
1 | 60 | [0.45, 0.55] | [0.45, 0.55] | 0.00 | 0.00 | 0.00
2 | 20 | [0.10, 0.20] | [0.40, 0.50] | [0.30, 0.40] | 0.00 | 0.00
3 | 57 | [0.10, 0.20] | [0.45, 0.50] | [0.35, 0.40] | 0.00 | 0.00
4 | 40 | 0.00 | 0.00 | [0.45, 0.55] | [0.45, 0.55] | 0.00
5 | 120 | 0.00 | [0.20, 0.30] | [0.60, 0.70] | [0.05, 0.15] | 0.00
6 | 41 | [0.10, 0.20] | [0.20, 0.30] | [0.50, 0.60] | [0.00, 0.10] | 0.00
7 | 96 | [0.10, 0.20] | [0.20, 0.30] | [0.50, 0.60] | [0.00, 0.10] | 0.00
8 | 104 | [0.40, 0.50] | [0.40, 0.50] | [0.05, 0.15] | 0.00 | 0.00
9 | 586 | [0.40, 0.50] | [0.40, 0.50] | [0.05, 0.15] | 0.00 | 0.00
10 | 106 | [0.48, 0.52] | [0.48, 0.52] | 0.00 | 0.00 | 0.00
11 | 67 | [0.48, 0.52] | [0.48, 0.52] | 0.00 | 0.00 | 0.00
12 | 167 | [0.48, 0.52] | [0.48, 0.52] | 0.00 | 0.00 | 0.00
13 | 36 | [0.75, 0.80] | [0.20, 0.25] | 0.00 | 0.00 | 0.00
14 | 74 | [0.05, 0.15] | [0.80, 0.85] | [0.05, 0.10] | 0.00 | 0.00
15 | 43 | [0.15, 0.20] | [0.80, 0.85] | 0.00 | 0.00 | 0.00
16 | 110 | [0.65, 0.70] | [0.30, 0.35] | 0.00 | 0.00 | 0.00
17 | 188 | [0.85, 0.90] | [0.10, 0.15] | 0.00 | 0.00 | 0.00
18 | 50 | [0.10, 0.15] | [0.30, 0.35] | [0.50, 0.55] | 0.00 | 0.00

Table 5-15 Imprecise Exploration Reliability Matrix

Exploration                         Reality
result       1G            2G            3G            4G            5G
1G           [0.85, 0.95]  [0.08, 0.12]  0.00          0.00          0.00
2G           [0.05, 0.15]  [0.88, 0.92]  [0.08, 0.12]  0.00          0.00
3G           0.00          0.00          [0.70, 0.85]  [0.05, 0.15]  0.00
4G           0.00          0.00          [0.05, 0.15]  [0.75, 0.90]  [0.06, 0.12]
5G           0.00          0.00          0.00          [0.08, 0.15]  [0.88, 0.94]


Figure 5-9 Decision tree for Section i of the Sucheon Tunnel without additional exploration


Figure 5-10 Decision tree for Section i of the Sucheon Tunnel with imperfect additional exploration


Table 5-16 Optimal Construction Strategies Obtained

Section  No exploration    With exploration result
                           1G      2G      3G          4G                  5G
1        C1, C2            C1      C2      C1, C2, C3  C1, C2, C3, C4, C5  C5
2        C2                C1      C2      C3          C3                  C5
3        C2                C1      C2      C3          C3                  C5
4        C3                C1, C3  C3      C3          C3, C4, C5          C4
5        C2, C3            C2      C2      C3          C2, C3, C4, C5      C4
6        C1, C2, C3, C5    C1      C2      C3          C1, C2, C3, C4, C5  C3, C4, C5
7        C1, C2, C3, C5    C1      C2      C3          C1, C2, C3, C4, C5  C3, C4, C5
8        C1, C2            C1      C2      C3          C3                  C5
9        C1, C2            C1      C2      C3          C3                  C5
10       C1, C2            C1      C2      C1, C2, C3  C1, C2, C3, C4, C5  C5
11       C1, C2            C1      C2      C1, C2, C3  C1, C2, C3, C4, C5  C5
12       C1, C2            C1      C2      C1, C2, C3  C1, C2, C3, C4, C5  C5
13       C1                C1      C2      C1, C2, C3  C1, C3, C4, C5      C5
14       C2                C1, C2  C2      C3          C3                  C5
15       C2                C1      C2      C2          C2, C3, C4, C5      C5
16       C1                C1      C2      C1, C2, C3  C1, C3, C4, C5      C5
17       C1                C1      C1, C2  C1, C3      C1, C3, C5          C5
18       C2                C1      C2      C3          C3                  C5

As shown in Table 5-16, the optimal construction strategies obtained with imprecise probabilities are not necessarily unique, while the construction strategies under precise probabilities are always unique, except in the case of tunnel section 12 with no additional exploration, where C1 and C2 are both optimal because they give exactly the same expected cost.
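The non-uniqueness under imprecise probabilities can be reproduced with a small sketch. It assumes, for illustration only, that a strategy is retained when it minimizes the expected cost for at least one extreme point of the imprecise prior (an E-admissibility-style criterion; the criteria actually used here are the relaxed and strict constraints of Chapter 4). The strategies and priors below are hypothetical.

```python
def optimal_under_extremes(cost, extreme_priors):
    """Strategies that minimize expected cost for at least one extreme prior
    point (an E-admissibility-style criterion, shown for illustration; the
    dissertation's relaxed/strict-constraint criteria are in Chapter 4)."""
    keep = set()
    for prior in extreme_priors:
        exp_cost = {s: sum(p * c for p, c in zip(prior, costs))
                    for s, costs in cost.items()}
        best = min(exp_cost.values())
        keep |= {s for s, v in exp_cost.items() if abs(v - best) < 1e-9}
    return sorted(keep)

# Two hypothetical strategies and two extreme prior points over two states:
# each strategy is best for one extreme point, so both survive.
result = optimal_under_extremes({"A": [10, 30], "B": [20, 15]},
                                [[0.5, 0.5], [0.8, 0.2]])
```

Because different extreme priors can rank the strategies differently, the set of optimal strategies is in general a set rather than a singleton, exactly as observed in Table 5-16.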

As explained in Section 4.3.3.3.1, the uncertain new information may decrease or

increase the uncertainties. This statement is verified in Table 5-16. By comparing the

results under no additional exploration and imperfect additional exploration, we cannot


draw the conclusion that the new evidence provided by imperfect exploration always helps reduce the range of optimal construction strategies. Take section 13 as an example: there is a unique optimal strategy under no exploration and when the exploration result is 1G, 2G, or 5G, while multiple optimal strategies occur when the exploration result is 3G or 4G. Only later calculations determine whether the new evidence is useful in reducing the range of optimal choices.

Uncertainty under the strict constraints is lower than that under the relaxed constraints. As shown in Table 5-16, the results obtained by using the strict constraints

are always included in the results from the relaxed constraints. In some sections, the

relaxed constraints generate multiple selections while the strict constraints give a unique

selection, like the case of section 1 given that the exploration result is 3G, where the

optimal strategies are C1, C2 and C3 under the relaxed constraints and it becomes C2

when the strict constraints are imposed. This is because the strict constraints may reduce uncertainty, as described in Case (2) in Section 4.3.3.3. However, the strict constraints sometimes give the same multiple choices as the relaxed constraints, as in the case of section 6 given that the exploration result is 5G: both the relaxed constraints and the strict constraints give multiple optimal strategies, C3, C4 and C5. This is because the utility

curves for the three strategies intersect in the set of probability measures conditional to

the exploration result, which has been explained in Case (1) in Section 4.3.3.3.2 (Figure

4-18). The third situation is the combination of the previous two situations: both the

relaxed and the strict constraints give multiple choices, but the number of choices under

strict constraints is smaller than that under the relaxed constraints. This can be explained as a combination of Cases (1) and (2) in Section 4.3.3.3.

Sections 2, 3 and 18 always have a unique optimal construction strategy,

indicating that the imprecision of the probability measures on the state variable does not make the problem indeterminate. As in the case of precise probabilities, we may expect a positive value of imperfect information for these sections in the later calculations.

Because the probability measures listed in Table 5-12 through Table 5-15 are the same for sections 2 and 3, sections 6 and 7, sections 8 and 9, and sections 10 through 12, the optimal construction strategies should be the same as well, which is confirmed by the results in Table 5-16.

After obtaining the optimal construction strategies under no additional

exploration, the imperfect additional exploration, and the perfect additional exploration,

respectively, the next step is to determine the lower and the upper values of (perfect)

information for each tunnel section. For example, the decision trees in Figure 5-11

through Figure 5-13 are used to determine the lower and the upper values of (perfect)

information in tunnel section 1. Again, the relaxed constraints and the strict constraints

are considered, respectively.

(Figure: decision tree with branches for No Exploration and Perfect Exploration.)

Figure 5-11 Decision tree for determining the value of information for Section 1 of the Sucheon Tunnel: No additional exploration branch


Figure 5-12 Decision tree for determining the value of information for Section 1 of the Sucheon Tunnel: Imperfect additional exploration branch


(Figure: decision tree with nodes for Exploration Methods (No Exploration, Imperfect Exploration, Perfect Exploration), Exploration Results, Construction Strategies C1 through C5, Geologic States 1G through 5G, and the corresponding costs in $/m.)

Figure 5-13 Decision tree for determining the value of information for Section 1 of the Sucheon Tunnel: Perfect additional exploration branch


Values of information for each section along the tunnel are depicted in Figure 5-14 through Figure 5-19; the results in tabular form are listed in Appendix B (Table B-3 and Table B-4). By comparing the results with the relaxed constraints (Figure 5-14, Figure 5-16, and Figure 5-18) and the ones with the strict constraints (Figure 5-15, Figure 5-17, and Figure 5-19), one can easily see that the former has wider bounds than the latter. The values of information obtained with precise inputs (Karam et al. 2007) are also included in the figures, represented by a thin solid line, which is always located between the lower and the upper value of information.

There are several negative lower values of imperfect information in Figure 5-14 and Figure 5-15 as a result of the imprecision in the probability measures. As illustrated in Section 4.3.4, a negative lower value of information means that buying the information is indeterminate even if it is free. Thus, the imperfect exploration is indeterminate in sections 4 – 7, 14, and 17 if the relaxed constraints are used, and in sections 4 and 17 if the strict constraints are imposed. As for perfect information, a negative lower value is obtained in section 4 under the relaxed constraints (Figure 5-16); however, it becomes positive when the strict constraints are imposed (Figure 5-17), as explained in Section 4.3.4. The same holds for the updating value (Figure 5-18 and Figure 5-19): because the relaxed constraints do not reduce uncertainty, several sections obtain negative lower updating values (Figure 5-18); under the strict constraints, perfect exploration is always better than imperfect exploration, so the updating value can never be negative (Figure 5-19).


(Figure: Value of Information vs. Distance Along Tunnel (m), 0 to 2000 m; curves for the lower, upper, and precise values of imperfect exploration; horizontal line at the cost of imperfect exploration.)

Figure 5-14 Value of imperfect exploration (with relaxed constraints)

(Figure: Value of Information vs. Distance Along Tunnel (m), 0 to 2000 m; curves for the lower, upper, and precise values of imperfect exploration; horizontal line at the cost of imperfect exploration.)

Figure 5-15 Value of imperfect exploration (with strict constraints)


(Figure: Value of Information vs. Distance Along Tunnel (m), 0 to 2000 m; curves for the lower, upper, and precise values of perfect exploration; horizontal line at the cost of perfect exploration.)

Figure 5-16 Value of perfect exploration (with relaxed constraints)

(Figure: Value of Information vs. Distance Along Tunnel (m), 0 to 2000 m; curves for the lower, upper, and precise values of perfect exploration; horizontal line at the cost of perfect exploration.)

Figure 5-17 Value of perfect exploration (with strict constraints)


(Figure: Updating Value vs. Distance Along Tunnel (m), 0 to 2000 m; curves for the lower, upper, and precise updating values; horizontal line at the cost for updating.)

Figure 5-18 Value of updating to perfect exploration (with relaxed constraints)

(Figure: Updating Value vs. Distance Along Tunnel (m), 0 to 2000 m; curves for the lower, upper, and precise updating values; horizontal line at the cost for updating.)

Figure 5-19 Value of updating to perfect exploration (with strict constraints)


Next, we are going to determine the optimal exploration plan for the Sucheon

tunnel. The costs of the imperfect exploration and the perfect exploration are assumed to

be $30,000 (Karam et al. 2007) and $50,000, respectively. Thus the cost of updating from

imperfect exploration to the perfect one is equal to $20,000. The three costs are

represented by three solid horizontal lines in Figure 5-14 through Figure 5-19. For a given tunnel section, the exploration may be warranted only if the lower value of the exploration is above the cost line, i.e. the saving is surely positive. If the upper value of the exploration is below the cost line, which indicates a surely negative saving, the exploration should not be considered. If the lower value and the upper value straddle the cost line, which means both positive and negative savings are possible, then whether or not to conduct the exploration is indeterminate.

probabilities and precise probabilities are summarized in Table 5-17. In the case of

imprecise probabilities, the relaxed constraints and the strict constraints are adopted,

respectively. For example, when the relaxed constraints are used, imperfect additional

exploration is warranted only in sections 9, 10, and 12; as for sections 2 – 4, 13 – 15, and

17 – 18, imperfect additional exploration is not warranted; for the remaining sections, decisions cannot be made with the current imprecise information.
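The three-way rule just described can be written compactly. The sketch below takes a section's lower and upper values of information (from Table B-3) and the exploration cost, and returns warranted (Y), not warranted (N), or indeterminate (I):

```python
def exploration_decision(value_low, value_upp, cost):
    """Three-way rule: warranted only if even the lower value of the
    exploration exceeds its cost; rejected if even the upper value falls
    below the cost; otherwise indeterminate."""
    if value_low > cost:
        return "Y"  # sure positive saving
    if value_upp < cost:
        return "N"  # sure negative saving
    return "I"      # indeterminate

# Relaxed constraints, imperfect exploration (values from Table B-3),
# exploration cost $30,000:
plan_9 = exploration_decision(128_000, 401_000, 30_000)  # section 9
plan_2 = exploration_decision(2_840, 10_300, 30_000)     # section 2
plan_1 = exploration_decision(17_600, 33_300, 30_000)    # section 1
```

Applying the rule to the values in Table B-3 reproduces the relaxed-constraints column of Table 5-17: section 9 is warranted, section 2 is not, and section 1 is indeterminate.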

By observing the optimal exploration plans in Table 5-17, one can see that the set of sections where the additional exploration is warranted under the relaxed constraints is a subset of the one under the strict constraints, and the set under the strict constraints is a subset of the one with precise probabilities. The same holds for the sections where the new exploration is denied. Take the case of the imperfect additional exploration as an example. The set of sections where the additional exploration is warranted under the relaxed constraints is {9, 10, 12}, as shown in Table 5-17. The set becomes {8, 9, 10, 12} under the strict


constraints, and {5, 7, 8, 9, 10, 12} with

precise probabilities. This is because the problem with imprecise probabilities under the relaxed constraints has the highest uncertainty, and the problem with precise probabilities has the lowest uncertainty; the higher the uncertainty, the more indeterminate the problem. Therefore, only sections 9, 10, and 12 are determined to be worth performing the imperfect additional exploration with imprecise probabilities under the relaxed constraints, while sections 5, 7, 8, 9, 10, and 12 are worth performing the additional exploration with precise probabilities.

The next step is to calculate the minimum and the maximum savings resulting from the optimal exploration plans. The minimum saving is the sum of the lower values of information minus the exploration costs, and the maximum saving is the sum of the upper values minus the exploration costs. For example, in

imperfect exploration under the relaxed constraints, minimum saving =

($128,000 - $30,000) + ($34,100 - $30,000) + ($53,800 - $30,000) = $125,900. Similarly,

maximum saving = ($401,000 - $30,000) + ($48,700 - $30,000) + ($76,700 - $30,000) =

$436,400. The lower and the upper values of information for each tunnel section can be

found in Table B-3 through Table B-5.
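The arithmetic for the relaxed-constraints imperfect-exploration plan can be checked directly, using the values of information for sections 9, 10, and 12 from Table B-3 and the $30,000 exploration cost:

```python
# Relaxed constraints: imperfect exploration warranted in sections 9, 10, 12.
# Lower/upper values of information taken from Table B-3; cost is $30,000
# per explored section.
lower_vi = {9: 128_000, 10: 34_100, 12: 53_800}
upper_vi = {9: 401_000, 10: 48_700, 12: 76_700}
cost = 30_000

min_saving = sum(v - cost for v in lower_vi.values())  # $125,900
max_saving = sum(v - cost for v in upper_vi.values())  # $436,400
```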


Table 5-17 Optimal Exploration Plans and the Corresponding Savings.

Relaxed Constraints
  Imperfect exploration:          Y: 9, 10, 12      N: 2-4, 13-15, 17, 18         I: 1, 5-8, 11, 16
  Perfect exploration:            Y: 9, 12          N: 1-4, 11, 13-18             I: 5-8, 10
  Update to perfect exploration:  Y: None           N: 1-4, 10, 11, 13-16, 18     I: 5-9, 12, 17
  Saving: imperfect [$125,900, $436,400]; perfect [$175,700, $424,200]; update $0

Strict Constraints
  Imperfect exploration:          Y: 8-10, 12       N: 2-4, 6, 11, 13-15, 17, 18  I: 1, 5, 7, 16
  Perfect exploration:            Y: 9, 12          N: 1-6, 11, 13-18             I: 7, 8, 10
  Update to perfect exploration:  Y: 9              N: 1-8, 10, 11, 13-16, 18     I: 12, 17
  Saving: imperfect [$201,300, $369,800]; perfect [$246,200, $333,700]; update [$15,700, $56,100]

Precise probabilities
  Imperfect exploration:          Y: 5, 7-10, 12    N: 1-4, 6, 11, 13-18          I: None
  Perfect exploration:            Y: 8-10, 12       N: 1-7, 11, 13-18             I: None
  Update to perfect exploration:  Y: 9              N: 1-8, 10-18                 I: None
  Saving: imperfect $297,605; perfect $286,755; update $35,904

Note: Y – Additional exploration is warranted;
      N – Additional exploration is not warranted;
      I – Indeterminate.

5.4 RISK REGISTER FOR THE EAST SIDE CSO PROJECT

The East Side CSO (Combined Sewer Overflow) Tunnel Project is located in

Portland, Oregon. The primary component of this project is constructed in soft ground

below the groundwater table. It has a 20-foot internal diameter and is 6 miles long. During construction, the project had to negotiate existing infrastructure, including historic structures, railroads, outfalls, and several major utilities. A comprehensive risk

analysis was conducted to reduce the potential impact to the existing infrastructure

(Pennington et al. 2006).

The risk register is one of the standard tools to control the risks. In the risk

register, risk items are identified for all construction activities. For each risk item, experts

estimate its occurrence probability and its consequence. A risk register (Gribbon, City of Portland Bureau of Environmental Services) developed by the contractor during construction is shown in Appendix C, where the occurrence probabilities of the risk items are classified into five categories and rated by a number from 1 through 5, as shown in Table 5-18. Each rating represents a corresponding probability interval; this is, in fact, an imprecise probability. For example, for a rating of '4', the occurrence probability is the

interval [51%, 70%]. In the original risk register, the contingency for mitigating the risks

was estimated as (consequence×probability rating/5), where the probability is replaced

by (probability rating/5) to calculate the expected value of the consequence. However,

since the occurrence probabilities are originally evaluated by imprecise probabilities, we

can use the imprecise probabilities to obtain the lower and the upper expected

consequence, i.e. the lower contingency and the upper contingency.

Table 5-18 Description of occurrence probability in the East Side CSO project.

Rating  Description of occurrence probability
5       Almost Certain (>71% - Expected to occur)
4       Probable (51% - 70% - Will probably occur)
3       Likely (31% - 50% - Likely to occur)
2       Unlikely (11% - 30% - May occur)
1       Rare (<10% - Will rarely occur)


Let X1, …, Xn be n risk items. Because the contingency assigned to each risk item

is equal to the expected value of consequence, the total contingency is obtained as

E(X1 + … + Xn). According to the property of expectation, E(X1 + … + Xn) = E(X1) + … + E(Xn), regardless of the type of interaction between the risk items. Then the lower contingency is obtained by using the lower values of the occurrence probabilities, and the upper contingency is obtained when the upper values of the occurrence probabilities are used. The contingency values are listed in Appendix C. The lower and the upper total contingency are equal to $19,901,475 and $32,865,750, respectively, which

are the results of the experts' original imprecise evaluations and preserve the imprecision. With the currently available information, we can conclude that the total contingency can be as low as $19,901,475 and as high as $32,865,750. However, the simplification of probabilities as (probability rating/5) can lead to a probability even higher than the upper value of the corresponding interval. Take probability rating '3' as an example: probability rating/5 gives a value of 0.6, which is larger than the upper value of the interval [31%, 50%]. As a result, this inappropriate simplification generated the original contingency of $39,688,000, which is even higher than the upper contingency estimated with imprecise probabilities. A contractor at the bidding stage may then lose the project because of this overestimated contingency.
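The comparison can be sketched with the rating intervals of Table 5-18. The risk items below are hypothetical (the actual items are in Appendix C), and the open-ended ratings are closed by assuming an upper bound of 100% for rating 5 and a lower bound of 0% for rating 1:

```python
# Rating -> occurrence-probability interval, per Table 5-18. Closing the
# open-ended ratings (5: >71%, 1: <10%) at 100% and 0% is an assumption
# of this sketch.
RATING_INTERVAL = {5: (0.71, 1.00), 4: (0.51, 0.70), 3: (0.31, 0.50),
                   2: (0.11, 0.30), 1: (0.00, 0.10)}

# (rating, consequence in $) -- hypothetical items, not those of Appendix C.
items = [(3, 1_000_000), (4, 500_000), (1, 2_000_000)]

lower = sum(RATING_INTERVAL[r][0] * c for r, c in items)  # lower contingency
upper = sum(RATING_INTERVAL[r][1] * c for r, c in items)  # upper contingency
original = sum((r / 5) * c for r, c in items)             # the register's rating/5 rule
```

Even on these made-up items, the rating/5 rule exceeds the upper contingency, reproducing the overestimation observed for the actual register.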


Chapter 6 Summary and Future Work

6.1 SUMMARY

6.1.1 Algorithms for different types of interaction in imprecise probabilities

Various algorithms for different types of interaction in imprecise probability are

proposed in this study. All algorithms were designed to accommodate two types of

constraints over marginal distributions: prevision bounds or extreme distributions. Each

algorithm was written in terms of both joint distributions and marginal distributions. All

algorithms developed in Chapter 3 have been summarized in Table 3-19.

Previsions on the joint space are linear functions of probability masses, thus

prevision bounds are always achieved at the extreme joint distributions. As for the non-

linear conditional probability on the joint space, we have shown that its upper and lower

bounds are obtained at the extreme joint distributions for all types of independence.

The constraints under unknown interaction are always linear. As a result, the set

of joint distributions on joint finite spaces ΨU is convex. In epistemic

irrelevance/independence, although the constraints stated by the definition are quadratic,

when constraints are given as bounds on marginal previsions, it is possible to rewrite

algorithms in terms of joint distributions, which turns quadratic problems into linear ones.

In strong independence, we have proved that the set of joint probability distributions ΨS is

not convex; an efficient algorithm to find all extreme distributions is presented. The

Pearson correlation coefficient is applied in the case of uncertain correlation, so the

problem becomes non-linear and non-convex. Non-linear programming techniques are

required for solving the problem.

Constraints are consecutively added as the type of interaction becomes more restrictive, and thus the sets of probability measures are nested, i.e. ΨS ⊆ ΨE ⊆ ΨE|i ⊆ ΨU and ΨC ⊆ ΨU.


6.1.2 Application to the standard tools in risk analysis

6.1.2.1 Event tree analysis

Novel methodologies for event tree analysis with imprecise probabilities are

developed in the dissertation. Three types of evidence on outcome probabilities were

considered: probabilities conditional to the occurrence of the event at the upper level,

total probabilities of occurrences, and the combination of the previous two types.

When evidence is given in terms of probabilities conditional to upper level events,

efficient recurrent algorithms were given either in terms of extreme points or as linear

programming problems.

If total probabilities are given, an assumption about the interaction between any two events must be made in the analysis; the options include unknown interaction, epistemic irrelevance, epistemic independence, strong independence, and uncertain correlation. Two different ways to

interpret the interactions are discussed: pair-wise interaction and interaction between

event Si and a new combined event composed of events at its upper level.

In the case of a combination of conditional probabilities and total probabilities,

the event tree can be converted into an equivalent event tree, where all probabilities are

conditional to the upper level events, and then the equivalent tree is used to carry out the

analysis, as in the first case.
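When all probabilities are conditional to the upper-level events, the probability of reaching a given branch end is the product of the conditional probabilities along the path. A simplified sketch of the interval version follows; it assumes the conditional bounds at different levels can be attained independently, which is a simplification of the recurrent algorithms developed in Chapter 3.

```python
def path_probability_bounds(conditional_intervals):
    """Bounds on a branch-end probability when each level of the event tree
    supplies an interval [l, u] for the conditional probability of the
    branch taken, and the bounds may be attained independently level by
    level (a simplification of the recurrent algorithms of this study)."""
    low = upp = 1.0
    for l, u in conditional_intervals:
        low *= l
        upp *= u
    return low, upp

# Two levels with conditional intervals [0.2, 0.3] and [0.5, 0.6]:
low, upp = path_probability_bounds([(0.2, 0.3), (0.5, 0.6)])
```

Because the product is increasing in each factor, the lower (upper) bound is obtained by taking every conditional probability at its lower (upper) value.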

6.1.2.2 Fault tree analysis

In this study, the major fault is logically connected with the sub-events by OR-

and AND-gates. The occurrence probabilities of the failure events are not assumed to be precisely determined but are given imprecisely. Different types of interaction between sub-events


can be taken into account, including unknown interaction, independence, and uncertain

correlation.

We have observed and proved that epistemic irrelevance, epistemic independence,

and strong independence all lead to the same results in the fault tree analysis. An example

of a fault tree application has confirmed this claim.
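For interval-valued sub-event probabilities under independence, the gate bounds follow from the monotonicity of the gate formulas. The sketch below is illustrative, assuming independent sub-events (which, per the result above, covers epistemic irrelevance, epistemic independence, and strong independence alike):

```python
def and_gate(intervals):
    """Occurrence-probability interval for an AND gate over independent
    sub-events with interval probabilities: prod(p) increases in each p."""
    low = upp = 1.0
    for l, u in intervals:
        low *= l
        upp *= u
    return low, upp

def or_gate(intervals):
    """Interval for an OR gate over independent sub-events:
    1 - prod(1 - p), also increasing in each p."""
    low = upp = 1.0
    for l, u in intervals:
        low *= 1.0 - l
        upp *= 1.0 - u
    return 1.0 - low, 1.0 - upp

# Two sub-events with occurrence probabilities [0.1, 0.2] and [0.3, 0.4]:
top_and = and_gate([(0.1, 0.2), (0.3, 0.4)])
top_or = or_gate([(0.1, 0.2), (0.3, 0.4)])
```

Since both gate formulas are increasing in every argument, the interval endpoints are obtained by setting all sub-event probabilities to their lower or upper values simultaneously.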

6.1.2.3 Decision tree analysis

This dissertation proposes algorithms for decision analysis within the standard

form of a decision tree, where information on probability evaluation is provided

imprecisely. Considering that the uncertainty may not be changed by the decision, two types of constraints, relaxed constraints and strict constraints, are proposed to deal with this issue.

Further, this study presents how to consider both prior and new imperfect information and shows how to determine the value of new information. Relaxed and strict

constraints are implemented, respectively, while relaxed constraints always provide more

imprecise results than strict constraints.

In the decision-making process, we study lower and upper values of information

within the theory of imprecise probabilities. The lower value is the maximum buying

price for the new information, and the upper value is the minimum selling price for the

new information. When the cost is less than the lower value of the new information,

indicating a positive gain, the decision maker should buy the information; when the cost

is even higher than the upper value of the new information, indicating a negative gain, the

decision maker should reject buying the new information.

The new imperfect information may increase or decrease the uncertainty on state

variables. Though the probability measures are updated by the new evidence, the problem


may become more indeterminate and may thus lead to a negative lower value of

information. As for the perfect information, although problems can never be

indeterminate, this study has shown that a negative lower value of information may occur

under relaxed constraints, and will never occur under strict constraints. Generally, more information is not always better than less information within imprecise probabilities.

6.1.2.4 Risk register

When the theory of imprecise probability is applied to the risk register and the estimated total contingency is required, the latter is obtained as the prevision of the sum of all

risk items listed in the risk register. No matter which type of interaction between the risk

items is considered, the lower contingency is obtained when all occurrence probabilities

adopt their lower values, and the upper contingency is obtained when the upper values of the occurrence probabilities are used.

6.2 FUTURE WORK

6.2.1 Elicitation and Assessment with Imprecise Probabilities

In this dissertation, we proposed algorithms and methodologies for risk analysis

with imprecise probabilities. The proposed algorithms, however, require input

information after elicitation and assessment. Future research may develop elicitation and

assessment procedures in a tunneling risk management process by using the theory of

imprecise probability. Methodologies to elicit and assess information can be developed

for typical applications: risk registers and event, fault, and decision trees. This

information is the input information needed by the algorithms developed in this


dissertation; probability intervals and risk intervals are then calculated. The

methodologies may be tested in brainstorming sessions for risk registers and trees.

6.2.2 Improvement on algorithms for different types of interaction

Different algorithms are developed to deal with unknown interaction,

independence, and uncertain correlation. By applying transformation or other

mathematical techniques, some non-linear, non-convex problems have been converted to

linear ones or more efficient algorithms are proposed. However, some problems, such as

uncertain correlation, still require non-linear programming techniques. Due to the

computational difficulty, much remains to be done to improve and simplify the

algorithms for non-linear, non-convex problems.

6.2.3 Cost/Contingency and Schedule Estimation

Cost and schedule estimation under uncertainty requires the use of continuous

variables, as opposed to discrete variables used in risk registers and trees. Similarly, when

the consequence in the risk register is provided imprecisely, the contingency estimation

should be carried out with continuous variables. Algorithms developed in this dissertation

should be extended to continuous variables and applied to cost and scheduling estimation.

This entails the extensive use of functional analysis. The methodologies should be tested

in several case histories and relative advantages/disadvantages should be evaluated.


Appendix A Explicit Formulation for Optimization Problems in Section 4.2.1.2

This appendix contains the explicit formulations for the optimization problems over $\hat{\mathbf{P}}^{ij}$ (the joint probability distribution for sub-events $E_i$ and $E_j$) in Section 4.2.2.

1. Unknown interaction: $\hat{\mathbf{P}}^{ij} \in \Psi_U$

Minimize (Maximize) $\displaystyle \sum_{\xi=1}^{2} \sum_{\eta=1}^{2} a_{\xi,\eta}\, \hat{P}^{ij}_{\xi,\eta}$

Subject to
$$
\begin{aligned}
& E_{LOW}\!\left[f_i^k\right] \le \left(\mathbf{f}_i^k\right)^T \hat{\mathbf{P}}^{ij}\, \mathbf{1} \le E_{UPP}\!\left[f_i^k\right], && k = 1, \dots, k_i \\
& E_{LOW}\!\left[f_j^k\right] \le \mathbf{1}^T \hat{\mathbf{P}}^{ij}\, \mathbf{f}_j^k \le E_{UPP}\!\left[f_j^k\right], && k = 1, \dots, k_j \\
& \mathbf{1}^T \hat{\mathbf{P}}^{ij}\, \mathbf{1} = 1 \\
& \hat{P}^{ij}_{\xi,\eta} \ge 0, && \xi = 1,2;\ \eta = 1,2
\end{aligned}
\tag{A.1}
$$
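For the special case where the objective picks out the joint-occurrence mass of two binary events and the marginal constraints are simple probability intervals, problem (A.1) reduces to the classical Fréchet bounds. The sketch below states them and verifies them against a brute-force grid enumeration of feasible 2x2 joint distributions (an illustration, not the implementation used in this study):

```python
def frechet_joint_bounds(pi, pj):
    """Bounds on P(Ei and Ej) under unknown interaction, for binary events
    with marginal probability intervals pi = (li, ui), pj = (lj, uj)."""
    (li, ui), (lj, uj) = pi, pj
    return max(0.0, li + lj - 1.0), min(ui, uj)

def enumerate_check(pi, pj, step=0.01):
    """Brute-force verification: scan feasible 2x2 joint distributions on a
    grid and record the attained range of the joint-occurrence mass."""
    n = round(1 / step)
    lo, hi = 1.0, 0.0
    for a in range(n + 1):              # a/n = P(Ei and Ej)
        for b in range(n + 1 - a):      # b/n = P(Ei and not Ej)
            for c in range(n + 1 - a - b):
                p11, p10, p01 = a / n, b / n, c / n
                mi, mj = p11 + p10, p11 + p01   # marginals of Ei, Ej
                if (pi[0] - 1e-9 <= mi <= pi[1] + 1e-9
                        and pj[0] - 1e-9 <= mj <= pj[1] + 1e-9):
                    lo, hi = min(lo, p11), max(hi, p11)
    return lo, hi
```

Because the feasible set of (A.1) is a polytope and the objective is linear, the bounds are attained at extreme joint distributions, consistent with Section 6.1.1.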

2. Epistemic irrelevance: $\hat{\mathbf{P}}^{ij} \in \Psi_{E|i}$

Minimize (Maximize) $\displaystyle \sum_{\xi=1}^{2} \sum_{\eta=1}^{2} a_{\xi,\eta}\, \hat{P}^{ij}_{\xi,\eta}$

Subject to
$$
\begin{aligned}
& E_{LOW}\!\left[f_i^k\right] \le \left(\mathbf{f}_i^k\right)^T \hat{\mathbf{P}}^{ij}\, \mathbf{1} \le E_{UPP}\!\left[f_i^k\right], && k = 1, \dots, k_i \\
& E_{LOW}\!\left[f_j^k\right] \left(\sum_{\eta=1}^{2} \hat{P}^{ij}_{\xi,\eta}\right) \le \hat{\mathbf{P}}^{ij}_{\xi,\cdot}\, \mathbf{f}_j^k \le E_{UPP}\!\left[f_j^k\right] \left(\sum_{\eta=1}^{2} \hat{P}^{ij}_{\xi,\eta}\right), && k = 1, \dots, k_j;\ \xi = 1,2 \\
& \mathbf{1}^T \hat{\mathbf{P}}^{ij}\, \mathbf{1} = 1 \\
& \hat{P}^{ij}_{\xi,\eta} \ge 0, && \xi = 1,2;\ \eta = 1,2
\end{aligned}
\tag{A.2}
$$

where $\hat{\mathbf{P}}^{ij}_{\xi,\cdot}$ is the $\xi$-th row of matrix $\hat{\mathbf{P}}^{ij}$.

3. Epistemic independence: $\hat{\mathbf{P}}^{ij} \in \Psi_E$

Minimize (Maximize) $\displaystyle \sum_{\xi=1}^{2} \sum_{\eta=1}^{2} a_{\xi,\eta}\, \hat{P}^{ij}_{\xi,\eta}$

Subject to
$$
\begin{aligned}
& E_{LOW}\!\left[f_i^k\right] \left(\sum_{\xi=1}^{2} \hat{P}^{ij}_{\xi,\eta}\right) \le \left(\mathbf{f}_i^k\right)^T \hat{\mathbf{P}}^{ij}_{\cdot,\eta} \le E_{UPP}\!\left[f_i^k\right] \left(\sum_{\xi=1}^{2} \hat{P}^{ij}_{\xi,\eta}\right), && k = 1, \dots, k_i;\ \eta = 1,2 \\
& E_{LOW}\!\left[f_j^k\right] \left(\sum_{\eta=1}^{2} \hat{P}^{ij}_{\xi,\eta}\right) \le \hat{\mathbf{P}}^{ij}_{\xi,\cdot}\, \mathbf{f}_j^k \le E_{UPP}\!\left[f_j^k\right] \left(\sum_{\eta=1}^{2} \hat{P}^{ij}_{\xi,\eta}\right), && k = 1, \dots, k_j;\ \xi = 1,2 \\
& \mathbf{1}^T \hat{\mathbf{P}}^{ij}\, \mathbf{1} = 1 \\
& \hat{P}^{ij}_{\xi,\eta} \ge 0, && \xi = 1,2;\ \eta = 1,2
\end{aligned}
\tag{A.3}
$$

where $\hat{\mathbf{P}}^{ij}_{\cdot,\eta}$ is the $\eta$-th column of matrix $\hat{\mathbf{P}}^{ij}$.

4. Strong independence: $\hat{\mathbf{P}}^{ij} \in \Psi_S$

Minimize (Maximize) $\displaystyle \sum_{\xi=1}^{2} \sum_{\eta=1}^{2} a_{\xi,\eta}\, \hat{P}^{ij}_{\xi,\eta}$

Subject to
$$
\begin{aligned}
& \hat{\mathbf{P}}^{ij} - \mathbf{p}_i\, \mathbf{p}_j^T = \mathbf{0} \\
& E_{LOW}\!\left[f_i^k\right] \le \left(\mathbf{f}_i^k\right)^T \mathbf{p}_i \le E_{UPP}\!\left[f_i^k\right], && k = 1, \dots, k_i \\
& E_{LOW}\!\left[f_j^k\right] \le \left(\mathbf{f}_j^k\right)^T \mathbf{p}_j \le E_{UPP}\!\left[f_j^k\right], && k = 1, \dots, k_j \\
& \mathbf{1}^T \mathbf{p}_i = 1; \quad \mathbf{1}^T \mathbf{p}_j = 1 \\
& \mathbf{p}_i \ge \mathbf{0}; \quad \mathbf{p}_j \ge \mathbf{0}
\end{aligned}
\tag{A.4}
$$
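Because the joint in $\Psi_S$ is the outer product $\mathbf{p}_i \mathbf{p}_j^T$ and the objective is bilinear in the marginals, the optimum is attained at combinations of the extreme points of the two marginal credal sets. A small enumeration sketch (illustrative only; the efficient algorithm for finding all extreme joint distributions is developed in Chapter 3):

```python
from itertools import product

def strong_independence_bounds(ext_i, ext_j, a):
    """Bounds of sum a[x][y]*pi[x]*pj[y] over Psi_S: the joint is the outer
    product pi pj^T, so the bilinear objective attains its bounds at
    combinations of the marginal extreme points (enumeration sketch)."""
    vals = [sum(a[x][y] * pi[x] * pj[y]
                for x in range(len(pi)) for y in range(len(pj)))
            for pi, pj in product(ext_i, ext_j)]
    return min(vals), max(vals)

# Binary events: P(Ei) in {0.2, 0.4}, P(Ej) in {0.5, 0.7}; the objective
# selects the (occur, occur) cell of the joint.
ext_i = [[0.2, 0.8], [0.4, 0.6]]
ext_j = [[0.5, 0.5], [0.7, 0.3]]
low, upp = strong_independence_bounds(ext_i, ext_j, [[1, 0], [0, 0]])
```

With the same marginal intervals, unknown interaction gives the wider Fréchet interval [0, 0.4], illustrating the nesting $\Psi_S \subseteq \Psi_U$ noted in Section 6.1.1.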

5. Uncertain correlation: $\hat{\mathbf{P}}^{ij} \in \Psi_C$

Minimize (Maximize) $\displaystyle \sum_{\xi=1}^{2} \sum_{\eta=1}^{2} a_{\xi,\eta}\, \hat{P}^{ij}_{\xi,\eta}$

Subject to
$$
E(S_i)E(S_j) + \underline{\rho}\, D_{S_i} D_{S_j} \le E(S_i S_j) \le E(S_i)E(S_j) + \overline{\rho}\, D_{S_i} D_{S_j}
$$
$$
\begin{aligned}
& \hat{\mathbf{P}}^{ij} - \mathbf{p}_i\, \mathbf{p}_j^T = \mathbf{0} \\
& E_{LOW}\!\left[f_i^k\right] \le \left(\mathbf{f}_i^k\right)^T \mathbf{p}_i \le E_{UPP}\!\left[f_i^k\right], && k = 1, \dots, k_i \\
& E_{LOW}\!\left[f_j^k\right] \le \left(\mathbf{f}_j^k\right)^T \mathbf{p}_j \le E_{UPP}\!\left[f_j^k\right], && k = 1, \dots, k_j \\
& \mathbf{1}^T \mathbf{p}_i = 1; \quad \mathbf{1}^T \mathbf{p}_j = 1 \\
& \mathbf{p}_i \ge \mathbf{0}; \quad \mathbf{p}_j \ge \mathbf{0}
\end{aligned}
\tag{A.5}
$$

where $D_{S_m}^2 = E\!\left(S_m^2\right) - \left[E(S_m)\right]^2$, $m = i, j$, $E(S)$ is the expected value of variable $S$, and $\underline{\rho}$, $\overline{\rho}$ are the lower and upper bounds on the correlation coefficient.
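The correlation constraint of (A.5) can be evaluated with a small helper that returns the admissible interval for $E(S_i S_j)$. The numerical values below are hypothetical, using indicator (0/1) variables so that $D^2 = p(1-p)$:

```python
import math

def joint_expectation_bounds(Ei, Ej, Di, Dj, rho_low, rho_upp):
    """Admissible interval for E(Si*Sj) implied by the correlation
    constraint of (A.5): E(Si)E(Sj) + rho*D_Si*D_Sj for rho in
    [rho_low, rho_upp]."""
    base = Ei * Ej
    return base + rho_low * Di * Dj, base + rho_upp * Di * Dj

# Hypothetical indicator variables with P(Si=1)=0.3, P(Sj=1)=0.6 and a
# correlation coefficient known only to lie in [0, 0.5]:
Di, Dj = math.sqrt(0.3 * 0.7), math.sqrt(0.6 * 0.4)
low, upp = joint_expectation_bounds(0.3, 0.6, Di, Dj, 0.0, 0.5)
```

This constraint is what makes the uncertain-correlation problem non-linear and non-convex in the probability masses, requiring the non-linear programming techniques mentioned in Section 6.1.1.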


Appendix B Input Data and Results of Optimal Exploration Plan

Table B-1 Extreme Prior Probabilities for each tunnel section

Section  Length (m)  No. of ext. points  Extreme distributions over (1G, 2G, 3G, 4G, 5G)
1        60          2   (0.45, 0.55, 0.00, 0.00, 0.00); (0.55, 0.45, 0.00, 0.00, 0.00)
2        20          3   (0.20, 0.40, 0.40, 0.00, 0.00); (0.10, 0.50, 0.40, 0.00, 0.00); (0.20, 0.50, 0.30, 0.00, 0.00)
3        57          4   (0.20, 0.45, 0.35, 0.00, 0.00); (0.10, 0.50, 0.40, 0.00, 0.00); (0.15, 0.50, 0.35, 0.00, 0.00); (0.15, 0.45, 0.40, 0.00, 0.00)
4        40          2   (0.00, 0.00, 0.45, 0.55, 0.00); (0.00, 0.00, 0.50, 0.50, 0.00)
5        120         6   (0.00, 0.30, 0.60, 0.10, 0.00); (0.00, 0.20, 0.70, 0.10, 0.00); (0.00, 0.30, 0.65, 0.05, 0.00); (0.00, 0.25, 0.70, 0.05, 0.00); (0.00, 0.20, 0.65, 0.15, 0.00); (0.00, 0.25, 0.60, 0.15, 0.00)
6        41          6   (0.20, 0.20, 0.60, 0.00, 0.00); (0.10, 0.30, 0.60, 0.00, 0.00); (0.20, 0.30, 0.50, 0.00, 0.00); (0.20, 0.20, 0.50, 0.10, 0.00); (0.10, 0.30, 0.50, 0.10, 0.00); (0.10, 0.20, 0.60, 0.10, 0.00)
7        96          6   (0.20, 0.20, 0.60, 0.00, 0.00); (0.10, 0.30, 0.60, 0.00, 0.00); (0.20, 0.30, 0.50, 0.00, 0.00); (0.20, 0.20, 0.50, 0.10, 0.00); (0.10, 0.30, 0.50, 0.10, 0.00); (0.10, 0.20, 0.60, 0.10, 0.00)
8        104         6   (0.50, 0.40, 0.10, 0.00, 0.00); (0.40, 0.50, 0.10, 0.00, 0.00); (0.50, 0.45, 0.05, 0.00, 0.00); (0.45, 0.50, 0.05, 0.00, 0.00); (0.40, 0.45, 0.15, 0.00, 0.00); (0.45, 0.40, 0.15, 0.00, 0.00)
9        586         6   (0.50, 0.40, 0.10, 0.00, 0.00); (0.40, 0.50, 0.10, 0.00, 0.00); (0.50, 0.45, 0.05, 0.00, 0.00); (0.45, 0.50, 0.05, 0.00, 0.00); (0.40, 0.45, 0.15, 0.00, 0.00); (0.45, 0.40, 0.15, 0.00, 0.00)
10       106         2   (0.48, 0.52, 0.00, 0.00, 0.00); (0.52, 0.48, 0.00, 0.00, 0.00)
11       67          2   (0.48, 0.52, 0.00, 0.00, 0.00); (0.52, 0.48, 0.00, 0.00, 0.00)
12       167         2   (0.48, 0.52, 0.00, 0.00, 0.00); (0.52, 0.48, 0.00, 0.00, 0.00)
13       36          2   (0.75, 0.25, 0.00, 0.00, 0.00); (0.80, 0.20, 0.00, 0.00, 0.00)
14       74          4   (0.15, 0.80, 0.05, 0.00, 0.00); (0.05, 0.85, 0.10, 0.00, 0.00); (0.10, 0.85, 0.05, 0.00, 0.00); (0.10, 0.80, 0.10, 0.00, 0.00)
15       43          2   (0.15, 0.85, 0.00, 0.00, 0.00); (0.20, 0.80, 0.00, 0.00, 0.00)
16       110         2   (0.65, 0.35, 0.00, 0.00, 0.00); (0.70, 0.30, 0.00, 0.00, 0.00)
17       188         2   (0.85, 0.15, 0.00, 0.00, 0.00); (0.90, 0.10, 0.00, 0.00, 0.00)
18       50          3   (0.15, 0.30, 0.55, 0.00, 0.00); (0.10, 0.35, 0.55, 0.00, 0.00); (0.15, 0.35, 0.50, 0.00, 0.00)


Table B-2 Extreme conditional probabilities of the exploration result given the real geological state

Columns are the extreme points (one column per extreme point of each real state); each column sums to 1 over the five possible exploration results.

Exploration result | Reality G1 (2 pts) | Reality G2 (2 pts) | Reality G3 (5 pts) | Reality G4 (5 pts) | Reality G5 (2 pts)
G1 | 0.85, 0.95 | 0.08, 0.12 | 0.00, 0.00, 0.00, 0.00, 0.00 | 0.00, 0.00, 0.00, 0.00, 0.00 | 0.00, 0.00
G2 | 0.15, 0.05 | 0.92, 0.88 | 0.08, 0.12, 0.10, 0.08, 0.12 | 0.00, 0.00, 0.00, 0.00, 0.00 | 0.00, 0.00
G3 | 0.00, 0.00 | 0.00, 0.00 | 0.85, 0.83, 0.85, 0.77, 0.73 | 0.15, 0.05, 0.15, 0.05, 0.10 | 0.00, 0.00
G4 | 0.00, 0.00 | 0.00, 0.00 | 0.07, 0.05, 0.05, 0.15, 0.15 | 0.75, 0.87, 0.77, 0.80, 0.75 | 0.06, 0.12
G5 | 0.00, 0.00 | 0.00, 0.00 | 0.00, 0.00, 0.00, 0.00, 0.00 | 0.10, 0.08, 0.08, 0.15, 0.15 | 0.94, 0.88
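Each column of Table B-2 is a conditional probability distribution over the five exploration results, so it must sum to 1. A minimal Python sanity check (the nested-list layout and function name are mine; the values are transcribed from the table):

```python
# Table B-2 rearranged by real geological state: one inner list per
# extreme point, each giving P(result = G1..G5 | reality).
conditionals = {
    "G1": [[0.85, 0.15, 0.00, 0.00, 0.00], [0.95, 0.05, 0.00, 0.00, 0.00]],
    "G2": [[0.08, 0.92, 0.00, 0.00, 0.00], [0.12, 0.88, 0.00, 0.00, 0.00]],
    "G3": [[0.00, 0.08, 0.85, 0.07, 0.00], [0.00, 0.12, 0.83, 0.05, 0.00],
           [0.00, 0.10, 0.85, 0.05, 0.00], [0.00, 0.08, 0.77, 0.15, 0.00],
           [0.00, 0.12, 0.73, 0.15, 0.00]],
    "G4": [[0.00, 0.00, 0.15, 0.75, 0.10], [0.00, 0.00, 0.05, 0.87, 0.08],
           [0.00, 0.00, 0.15, 0.77, 0.08], [0.00, 0.00, 0.05, 0.80, 0.15],
           [0.00, 0.00, 0.10, 0.75, 0.15]],
    "G5": [[0.00, 0.00, 0.00, 0.06, 0.94], [0.00, 0.00, 0.00, 0.12, 0.88]],
}

def all_columns_stochastic(table, tol=1e-9):
    """True if every extreme conditional distribution sums to 1."""
    return all(abs(sum(pt) - 1.0) <= tol
               for pts in table.values() for pt in pts)
```

Running the check confirms that every one of the 2 + 2 + 5 + 5 + 2 extreme points in the table is a valid probability distribution.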

Table B-3 Value of information (with relaxed constraints)

Section | VI LOW | VI UPP | VPI LOW | VPI UPP | VUP LOW | VUP UPP
1 | $17,600 | $33,300 | $26,100 | $37,300 | $1,590 | $10,900
2 | $2,840 | $10,300 | $4,950 | $11,600 | -$1,700 | $5,110
3 | $11,500 | $24,400 | $17,500 | $28,000 | -$1,710 | $11,400
4 | -$6,140 | $4,410 | -$1,470 | $5,250 | -$2,430 | $7,940
5 | -$16,300 | $100,000 | $14,700 | $105,000 | -$28,400 | $64,100
6 | -$6,400 | $52,600 | $4,370 | $55,000 | -$15,000 | $28,200
7 | -$15,000 | $123,000 | $10,200 | $129,000 | -$35,100 | $66,100
8 | $22,700 | $71,100 | $35,500 | $77,600 | -$9,130 | $28,400
9 | $128,000 | $401,000 | $200,000 | $437,000 | -$51,500 | $160,000
10 | $34,100 | $48,700 | $48,000 | $55,300 | $5,070 | $15,500
11 | $21,600 | $30,800 | $30,500 | $35,000 | $3,200 | $9,780
12 | $53,800 | $76,700 | $75,700 | $87,200 | $7,980 | $24,400
13 | $1,340 | $7,480 | $6,340 | $9,450 | $1,290 | $5,680
14 | -$4,340 | $12,400 | $4,160 | $17,800 | -$1,580 | $1,540
15 | $1,070 | $5,280 | $6,290 | $8,380 | $2,300 | $6,030
16 | $15,200 | $33,300 | $30,100 | $39,600 | $4,270 | $17,000
17 | -$12,700 | $21,300 | $14,800 | $31,000 | $6,190 | $31,000
18 | $15,400 | $25,600 | $20,400 | $28,700 | -$669 | $8,760


Table B-4 Value of information (with strict constraints)

Section | VI LOW | VI UPP | VPI LOW | VPI UPP | VUP LOW | VUP UPP
1 | $19,900 | $30,800 | $28,500 | $34,900 | $4,020 | $8,650
2 | $5,240 | $8,050 | $7,350 | $9,300 | $1,250 | $2,240
3 | $14,900 | $20,900 | $20,900 | $24,600 | $3,630 | $6,280
4 | -$720 | $1,210 | $1,800 | $1,980 | $774 | $2,520
5 | $25,700 | $44,900 | $33,500 | $49,700 | $4,670 | $7,810
6 | $12,400 | $24,000 | $16,400 | $26,400 | $2,370 | $4,300
7 | $29,000 | $56,100 | $38,300 | $61,900 | $5,540 | $10,100
8 | $34,600 | $55,500 | $47,600 | $62,000 | $6,340 | $13,500
9 | $195,000 | $313,000 | $268,000 | $349,000 | $35,700 | $76,100
10 | $35,600 | $47,100 | $49,600 | $53,700 | $6,660 | $14,000
11 | $22,500 | $29,800 | $31,400 | $34,000 | $4,210 | $8,860
12 | $56,100 | $74,200 | $78,200 | $84,700 | $10,500 | $22,100
13 | $1,970 | $6,760 | $7,020 | $8,780 | $1,970 | $5,050
14 | $103 | $7,960 | $8,600 | $13,300 | $4,010 | $11,100
15 | $1,070 | $5,280 | $6,290 | $8,380 | $3,100 | $5,280
16 | $17,100 | $31,000 | $32,200 | $37,500 | $6,330 | $15,100
17 | -$8,620 | $17,500 | $18,300 | $27,500 | $9,710 | $27,500
18 | $18,400 | $22,900 | $23,400 | $25,900 | $3,020 | $5,170


Table B-5 Value of information (with precise probabilities)

Section | VI | VPI | VUP
1 | $24,716 | $31,054 | $6,338
2 | $6,333 | $8,055 | $1,722
3 | $18,049 | $22,957 | $4,908
4 | $307 | $1,908 | $1,601
5 | $41,872 | $47,736 | $5,864
6 | $18,178 | $21,476 | $3,298
7 | $41,329 | $49,003 | $7,674
8 | $44,694 | $54,522 | $9,828
9 | $244,274 | $300,179 | $55,904
10 | $40,306 | $50,642 | $10,335
11 | $25,477 | $32,009 | $6,533
12 | $65,130 | $81,413 | $16,283
13 | $4,212 | $7,722 | $3,510
14 | $3,652 | $10,712 | $7,060
15 | $3,773 | $7,966 | $4,193
16 | $24,667 | $35,393 | $10,725
17 | $7,332 | $25,662 | $18,330
18 | $20,777 | $24,837 | $4,060
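The three columns of Table B-5 are linked: in every row, the tabulated values satisfy VPI = VI + VUP to within $1 of rounding. A quick Python check on a few rows (the dictionary layout and function name are mine; the figures are transcribed from the table):

```python
# (VI, VPI, VUP) for a sample of sections from Table B-5, in dollars.
table_b5 = {
    1: (24_716, 31_054, 6_338),
    4: (307, 1_908, 1_601),
    9: (244_274, 300_179, 55_904),   # off by $1 from rounding
    18: (20_777, 24_837, 4_060),
}

def vpi_identity_holds(rows, tol=1):
    """Check VPI = VI + VUP for each section, within $1 rounding."""
    return all(abs(vpi - (vi + vup)) <= tol
               for vi, vpi, vup in rows.values())
```

The same identity can be read off Tables B-3 and B-4 only loosely, since there each quantity is an interval rather than a single number.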


Appendix C Risk register for East Side CSO Project, Portland, Oregon

Note: adapted from the risk register from Gribbon, City of Portland Bureau of Environmental Services.

Probability of Risk:
5 = Expected to occur (>70%)
4 = Will probably occur (51%-70%)
3 = Likely to occur (31%-50%)
2 = May occur (10%-30%)
1 = Unlikely to occur (<10%)

Degree of Risk (cost impact - schedule impact):
5 = Very Serious (>$10m - >6 months)
4 = Major ($2m-$10m - 3-6 months)
3 = Moderate ($0.5m-$2m - 1-3 months)
2 = Minor ($0.1m-$0.5m - 1-4 weeks)
1 = Insignificant (<$0.1m - <1 week)

Risk Ratings columns for each item: Prob., Degree, Total (Prob. x Degree), Potential Cost, Precise Contingency (Cost$ X Probability/5), Lower Contingency, Upper Contingency.
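Per the legend, the total risk rating is the probability rating times the degree rating, and the precise contingency is the potential cost scaled by probability/5. A minimal sketch (function names are mine), checked against item 101.1 of the register:

```python
def risk_rating(prob, degree):
    """Total risk rating: probability rating (1-5) times degree rating (1-5)."""
    return prob * degree

def precise_contingency(cost, prob):
    """Expected-cost contingency per the legend: Cost$ X Probability/5."""
    return cost * prob / 5

# Item 101.1 (Opera Parking Lot): prob 3, degree 3, potential cost $1,000,000
assert risk_rating(3, 3) == 9
assert precise_contingency(1_000_000, 3) == 600_000
```

The lower and upper contingency columns do not follow from this point formula; they come from the imprecise-probability analysis.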

No. RISK ITEM MITIGATION

ACCESS / PERMIT RISKS

101 Delay in Obtaining Property

101.1 Opera Parking Lot

(1) Start discussion early with Portland Opera; (2) Condemnation of property; (3) Accelerate shaft / tunnel; (4) Move shaft within OMSI site; (5) Alternate mining site

3 3 9 $1,000,000 $600,000 $310,000 $500,000

101.3 Alder Shaft - Corno Building (1) Condemnation of property; (2) Multi-shift / increase work-days per week 2 4 8 $1,000,000 $400,000 $110,000 $300,000

108 Emergency Surface Access Issues

108.2 Micro-Tunnel Machine Recovery Shaft (1) Maintain reasonable bore lengths; (2) Reduce number of long bores; (3) MTBM preparation for longer drives

3 3 9 $500,000 $300,000 $155,000 $250,000

110 Crew Parking

110.2 Alder Shaft (1) Phase work to minimize workers; (2) Mandate street parking; (3) Bus crew 3 3 9 $950,000 $570,000 $294,500 $475,000


112 Barging Permit (1) Start permit during pre-construction; (2) Qualify under Slopes; (3) Perform work during first summer work window (2006)

3 3 9 $800,000 $480,000 $248,000 $400,000

114 Noise Variances

114.2 Alder Shaft (1) Apply for noise variance; (2) Equipment modifications; (3) Limit multi-shift work 4 2 8 $500,000 $400,000 $255,000 $350,000

114.3 Pipeline Shaft Sites (1) Apply for noise variance; (2) Equipment modifications; (3) Limit multi-shift work 4 2 8 $250,000 $200,000 $127,500 $175,000

114.3 Microtunnel Pipelines (1) Apply for noise variance; (2) Equipment modifications; (3) Limit multi-shift work 4 2 8 $250,000 $200,000 $127,500 $175,000

115 Other Permits

115.1 Corno's Building Demolition Permits - requirement to remove concrete basement structure

(1) Work with City permitting agency 3 3 9 $750,000 $450,000 $232,500 $375,000

TUNNEL CONSTRUCTION RISKS

202 Labor Availability (1) Monitor availability of experienced tunnel personnel (Brightwater concern); 3 3 9 $1,000,000 $600,000 $310,000 $500,000


203 TBM Initial Launch

203.1 Initial Launch (1) Redundant systems - cement wall, steel can, double seal; (2) Detailed planning; 3 4 12 $3,300,000 $1,980,000 $1,023,000 $1,650,000

203.2 Shaft Break-In / Out (1) Redundant systems - flooded shaft; (2) Detailed planning; 1 4 4 $3,200,000 $640,000 $0 $320,000

204 TBM Productivity (Discrete Items)

204.1 Longer Learning Curve (1) Detailed step-step work plan; (2) TBM supplier tech. representation; (3) Over-staff initial start-up

3 4 12 $1,600,000 $960,000 $496,000 $800,000

204.11 Electrical Plant Upgrades - PGE (1) Early discussions with PGE; (2) Detailed ramp-up analysis; (3) Peer review of PGE provided equipment

3 3 9 $500,000 $300,000 $155,000 $250,000

204.13 Main Bearing Replacement

(1) Design life at 15,000-hr; (2) Maintenance program; (3) Inspection after north drive; (4) Main bearing stored in US / under warranty

2 4 8 $5,600,000 $2,240,000 $616,000 $1,680,000

204.14 Power Drop at Opera Site (1) Continue discussions with PGE; (2) Look at alternative power access routes 5 3 15 $350,000 $350,000 $248,500 $350,000

205 TBM Productivity (excluding interventions and time through shafts)

205.2 Production at 35 feet per day (1) Increase to 6 wd per week; (2) Increase to 7 wd per week 3 5 15 $12,250,000 $7,350,000 $3,797,500 $6,125,000


206 Slurry Separation System

206.1 Difficult Separation in Sand/Silt Alluvium

(1) Truck material or pay barging premium, do not impact tunneling operation. 4 2 8 $250,000 $200,000 $127,500 $175,000

206.3 Availability of Slurry Separation Plant (1) Procure new plant; (2) Maintenance program; (3) Experienced personnel 2 4 8 $1,600,000 $640,000 $176,000 $480,000

207 Wear on TBM and Parts

207.1 River Street - Increased Maintenance in Preparation of Long North Drive

(1) Increase shaft rehab time; (2) Procure 2nd cutterhead 3 3 9 $840,000 $504,000 $260,400 $420,000

208 Interventions

208.2 Increased Duration of Interventions (1) Machine maintenance at shafts; (2) Wear detection systems; (3) TBM design 3 4 12 $1,600,000 $960,000 $496,000 $800,000

208.3 Inability to Perform Compressed Air Intervention

(1) Ensure intervention in Troutdale; (2) Jet grout zone; 3 3 9 $1,330,000 $798,000 $412,300 $665,000

209 Segmental Lining


209.4 Excessive Water Leaks (1) QC Program for segment fabrication; (2) Lining installation quality control inspection

3 3 9 $600,000 $360,000 $186,000 $300,000

210 Banfield Interchange Piles (Option B)

210.1 Underpinning Program for Bridge Bent(1) Perform pile investigation program; (2) Perform engineering analysis of down-drag on pile group

3 3 9 $750,000 $450,000 $232,500 $375,000

210.2 Jet Grouting for Pile Removal (1) Perform pile investigation program 2 3 6

210.3 Alternate Alignment - Option C (1) Perform pile investigation program; (2) Consider 210.1 and 210.2 risks 2 4 8 $2,300,000 $920,000 $253,000 $690,000

211 Port Center Tunneled Connection

211.3 Redundant Bulkhead System in Case of Early Tie-In 4 3 12 $500,000 $400,000 $255,000 $350,000

GROUND IMPROVEMENT RISKS


303 Compensation / Compaction Grouting

303.1 Structure protection grouting - Reach 2

(1) Perform detailed engineering settlement analysis; (2) Instrumentation and monitoring program; (3) Pre-drill grout holes

3 3 9 $600,000 $360,000 $186,000 $300,000

305 Scope, Complexity of Instrumentation Installation and Monitoring

4 3 12 $500,000 $400,000 $255,000 $350,000

MAIN SHAFT CONSTRUCTION RISKS

401 Slurry Wall Construction

401.1 Difficulty of Excavation in Cobbles and Boulders

(1) Reasonable GBR values; (2) Evaluate appropriate equipment; (3) Multiple shift and added work days per week

4 3 12 $525,000 $420,000 $267,750 $367,500

401.2 Difficulty of Excavation in Obstructions (Logs, Buried Objects, etc.)

(1) Reasonable GBR values; (2) Evaluate appropriate equipment; (3) Multiple shift and added work days per week

4 3 12 $525,000 $420,000 $267,750 $367,500

403 Shaft Excavation


403.1 Main Mining Shaft Not Ready Ahead of TBM Assembly

(1) Early procurement of slurry wall; (2) Multiple shift and added work days per week for slurry wall and shaft excavation

4 2 8 $100,000 $80,000 $51,000 $70,000

405 Permanent Site Restoration at Shaft Locations

(1) Complete design/drawings with defined scope of work; (2) Receive approval from O&M staff

3 3 9 $700,000 $420,000 $217,000 $350,000

406 Shaft Concrete

406.1 Scope of Structural Concrete -Quantities and Complexity

(1) Complete design/drawings with defined scope of work; (2) Design to ERC budget 4 3 12 $1,800,000 $1,440,000 $918,000 $1,260,000

MICROTUNNELING CONSTRUCTION RISKS

502 MTBM Operation Set-Up / Move

502.1 Difficulty With Set-Up and Relocation of Microtunnel TBM and Support Systems

(1) Detailed planning; (2) Utilize same crews - learning curve benefit. 4 2 8 $337,500 $270,000 $172,125 $236,250

502 MTBM Break-In / Out(s)


502.1 Shaft Break-In / Out of MTBM (1) Detailed planning; (2) Utilize same crews - learning curve benefit. 4 2 8 $315,000 $252,000 $160,650 $220,500

503 MTBM Productivity (Discrete Items)

503.1 Obstructions - logs, buried objects, etc. (1) Reasonable GBR values; (2) Evaluate appropriate equipment 4 2 8 $320,000 $256,000 $163,200 $224,000

503.2 Conflicts With Existing Utilities (1) Adequate microtunnel depth; (2) Investigation of existing utilities; (3) Locates

4 2 8 $320,000 $256,000 $163,200 $224,000

503.5 Excessive Friction Due to Drive Lengths Too Long for Ground Conditions

(1) Jacking force and use of stations; (2) Pipe lubricants; (3) Non-stop operations 4 2 8 $160,000 $128,000 $81,600 $112,000

505 Interventions

505.4 Increased Number / Duration of Interventions (MTBM Stuck Beneath Railroad Tracks)

(1) Geological investigation; (2) Proper machine design; (3) Non-stop operations 4 2 8 $320,000 $256,000 $163,200 $224,000

505.5 Rescue Shaft for MTBM (1) Maintain reasonable bore lengths; (2) Reduce number of long bores; (3) MTBM preparation for longer drives

3 3 9 $1,200,000 $720,000 $372,000 $600,000

OPEN-CUT PIPELINE CONSTRUCTION RISKS


601 Scope of Relocation/Protection of Existing Utilities

(1) Completeness of drawings; (2) Perform utility locates; (3) Test pits 5 2 10 $400,000 $400,000 $284,000 $400,000

605 Scope of Surface Restoration (1) Discussion / agreement with PDOT and businesses; (2) Reasonable quantities in ERC.

5 2 10 $250,000 $250,000 $177,500 $250,000

606 Upgrade of Surface Storm System (1) Discussion / agreement with PDOT and businesses; (2) Reasonable quantities in ERC.

2 4 8 $2,000,000 $800,000 $220,000 $600,000

607 Difficulties With Final Tie-Ins to Existing Outfall Structures

(1) Detailed planning; (2) Perform work during low flow period; (3) Ground stabilization program; (4) Complete maximum amount of work prior to final tie-in.

4 2 8 $200,000 $160,000 $102,000 $140,000

PIPELINE SHAFT CONSTRUCTION RISKS

701 Support of Excavation System -Soldier Pile Shafts Revised to Secant Pile

701.1 Structures - OF-30, OF-43, OF-46 (1) Detailed geotechnical assessment; (2) Adequate dewatering provisions; (3) Structure instrumentation

3 4 12 $2,300,000 $1,380,000 $713,000 $1,150,000

701.2 Increased Scope of Dewatering Work Anticipated for Soldier Pile Shafts

(1) Detailed geotechnical assessment; (2) Adequate dewatering scope; (3) Dewatering duration

4 2 8 $300,000 $240,000 $153,000 $210,000

704 Pipeline Shaft Excavation


704.1 Secant Pile Drilling - Encountering Cobbles and Boulders

(1) Reasonable GBR values; (2) Evaluate appropriate equipment; (3) Multiple shift and added work days per week

3 3 9 $555,000 $333,000 $172,050 $277,500

705 Pipeline Shaft Concrete

705.1 Scope of Structural Concrete -Quantities and Complexity

(1) Complete design/drawings with defined scope of work; (2) Design to ERC budget 3 3 9 $1,225,000 $735,000 $379,750 $612,500

FINANCIAL / OTHER RISKS

901 Equipment

901.1 Purchase More Equipment Than Planned

(1) Perform detailed equipment analysis; (2) Control of purchasing; (3) Sequence operations to avoid extra concurrent activities

3 3 9 $1,500,000 $900,000 $465,000 $750,000

901.2 Purchase Cost of Equipment Exceeds Budget

(1) Perform detailed equipment analysis; (2) Control of purchasing; (3) Obtain competitive pricing

3 4 12 $2,500,000 $1,500,000 $775,000 $1,250,000

901.3 Less Than Expected Equipment Salvage

(1) Control equipment hours; (2) Proper equipment maintenance program; (3) Corporate equipment marketing and sales

3 3 9 $500,000 $300,000 $155,000 $250,000


901.4 Foreign Exchange Rates (1) All purchases in US$; (2) Receive quotes in US$ or secure Euro conversion 4 3 12 $200,000 $160,000 $102,000 $140,000

902 Subcontractors

902.1 Actual Subcontract Contract Amounts Differ From Plug Prices in Final ERC

(1) Obtain quotations for ERC pricing (absent 100% design); (2) Use reasonable and actual unit (past) costs; (3) Package work to receive best value bids

3 4 12 $5,000,000 $3,000,000 $1,550,000 $2,500,000

902.2 Subcontract Change Orders Not Easily Resolved (Beyond Those Identified in This Risk Assessment)

(1) Defined Subcontract Agreements and Bid Documents; (2) Unit Price Items (where applicable); (3) Contract administration

3 3 9 $1,000,000 $600,000 $310,000 $500,000

TOTAL $39,688,000 $19,901,475 $32,865,750


References

AACE International Recommended Practice No. 41R-08, Risk Analysis And Contingency Determination Using Range Estimating, 2008. Available at http://www.aacei.org/technical/rps/41R-08.pdf.

Ang, A.H.-S. and Tang, W.H., 1984. Probability concepts in engineering planning and design. Decision, Risk and Reliability vol. II, Wiley, New York.

Barton, N.R., Lien, R. and Lunde, J. Engineering classification of rock masses for the design of tunnel support. Rock Mech. 6(4), 189-239. 1974.

Benjamin, J.R. and Cornell, C.A., 1970. Probability, Statistics, and Decision for Civil Engineers. McGraw-Hill, New York.

Berleant, D. and J. Zhang, Using Pearson correlation to improve envelopes around the distributions of functions, Reliable Computing 10(2), 139-161, 2004.

Bernardini, A. and Tonon, F. Bounding Uncertainty in Civil Engineering – Theoretical Background. Springer. 2010

Caltrans, Project Risk Management Handbook, 2007. Available at http://www.dot.ca.gov/hq/projmgmt/documents/prmhb/caltrans_project_risk_management_handbook_20070502.pdf

Campos, C.P., and F.G. Cozman, Computing lower and upper expectations under epistemic independence, International Journal of Approximate Reasoning, 44(3):244-260, 2007.

Campos, L.M. and S. Moral. Independence concepts for convex sets of probabilities. In Ph. Besnard and S. Hanks, eds, Proceedings of the 11th Conference on Uncertainty in Artificial Intelligence, 108–115. Morgan & Kaufmann, San Mateo, 1995.

Cano A., and S. Moral, Algorithms for Imprecise Probabilities, Handbook of Defeasible Reasoning and Uncertainty Management Systems, V 5, 369 - 420, Springer, 2000.

Couso, I., S. Moral, and P. Walley. (1999). Examples of independence for imprecise probabilities. ISIPTA '99 - Proceedings of the First International Symposium on Imprecise Probabilities and Their Applications, Zwijnaarde, Belgium.

Couso, I., S. Moral, and P. Walley. A survey of concepts of independence for imprecise probabilities. Risk Decision and Policy, Vol. 5, pp. 165-181, 2000.


Durrett, R., Probability: Theory and Examples, 2nd ed., Duxbury Press, Belmont, CA, 1996.

Eskesen, S.D., P. Tengborg, J. Kampmann, T.H. Veicherts, Guidelines for tunnelling risk management: International Tunnelling Association, Working Group No. 2, Tunnelling and Underground Space Technology, Volume 19, Issue 3, May 2004, Pages 217-237.

Ferson, S., R.B. Nelsen, J. Hajagos, D.J. Berleant, J. Zhang, W. T. Tucker, L. R. Ginzburg and W. L. Oberkampf, Dependence in probabilistic modeling, Dempster-Shafer theory, and probability bounds analysis, SAND2004-3072, 2004. Available at http://www.ramas.com/depend.pdf.

Ferson S., V. Kreinovich, L. Ginzburg, D. S. Myers, and K. Sentz, Constructing Probability Boxes and Dempster-Shafer Structures, Sandia National Laboratories, Technical Report SAND2002-4015, Albuquerque, New Mexico. Available at http://www.sandia.gov/epistemic/Reports/SAND2002-4015.pdf.

Fetz, T. and M. Oberguggenberger (2004). Propagation of uncertainty through multivariate functions in the framework of sets of probability measures. Reliability Engineering & System Safety 85(1-3): 73-87.

Gribbon, P. “Risk Register for East Side CSO Project, Portland”, City of Portland Bureau of Environmental Services, personal communication.

FHWA, Risk Assessment and Allocation for Highway Construction Management, 2006. Available at http://international.fhwa.dot.gov/riskassess/.

FTA, Project & Construction - Management Guidelines, 2003. Available at http://www.fta.dot.gov/publications/reports/other_reports/publications_3875.html.

Harrison, J.P., J.A. Hudson, Engineering rock mechanics: part 2: illustrative worked examples, Elsevier Science, 2000.

Hong, E. S., I. M. Lee, H. S. Shin, S. W. Nam, and J. S. Kong (2009), “Quantitative risk evaluation based on event tree analysis technique: Application to the design of shield TBM.” Tunnelling and Underground Space Technology, 24(3). 269-277.

Horst, R., P.M. Pardalos and N.V. Thoai (2000) Introduction to Global Optimization, 2nd Edn. Dortrecht: Kluwer

Huang D., Toly Chen, Mao-Jiun J. Wang, A fuzzy set approach for event tree analysis, Fuzzy Sets and Systems ,Volume 118, Issue 1, 16 February 2001, Pages 153-165


Huntley, N., Troffaes, M., An Efficient Normal Form Solution to Decision Trees with Lower Previsions, Soft Methods for Handling Variability and Imprecision, ASC 48, Page 419-426, 2008.

International Tunnel Insurance Group, The Code of Practice for Risk Management of Tunnel Works, 2006

Karam K. S., J. S. Karam, and H. H. Einstein (2007), “Decision Analysis Applied to Tunnel Exploration Planning I: Principles and Case Study.” Journal of Construction Engineering and Management, 133(5), 344-353.

Kenarangui, R., Event-tree analysis by fuzzy probability, IEEE Transactions on Reliability, 40(1), 120-124, April 1991.

Luenberger, D.G., Linear and Nonlinear Programming, 2nd ed., Addison-Wesley, Reading, MA, 1984.

Matheron G. Random sets and integral geometry. New York: Wiley, 1975.

Min, S. Y., H. H. Einstein, J. S. Lee, and T. K. Kim (2003). Application of decision aids for tunneling (DAT) to a drill & blast tunnel. KSCE Journal of Civil Engineering, 7(5). 619-628

Pennington, T.W., R.F. Cook, D.P. Richards, J. O'Carroll, and T. Cleys, A tunnel case study in Risk Management, Proceedings of North American Tunneling 2006, Chicago, IL, USA.

Rockafellar, R.T., Convex Analysis, Princeton University Press, Princeton, NJ, 1997.

Simiu E., N. A. Heckert, J. J. Filliben and S. K. Johnson, Extreme wind load estimates based on the Gumbel distribution of dynamic pressures: an assessment, Structural Safety, 23(3), 2001, Pages 221-229

Stockholm ring road. (2010). Retrieved June 6, 2010, from http://en.wikipedia.org/wiki/Stockholm_ring_road

Stockholm News, "Controversial road project to be approved", Sept. 3 2009, http://www.stockholmnews.com/more.aspx?NID=3898

Sturk, R., L. Olsson, and J. Johansson (1996), “Risk and Decision Analysis for Large Underground Projects, as Applied to the Stockholm Ring Road Tunnels.” Tunnelling and Underground Technology, 11(2), 157-164.

Tonon F., A. Bernardini, A. Mammino, Determination of parameters range in rock engineering by means of Random Set Theory, Reliability Engineering & System Safety Volume 70, Issue 3, December 2000, Pages 241-261


Tollroads Newsletter, “SETBACK FOR TOLLS Stockholm tunnel ring stopped”, issue 13, Mar 1997, http://www.tollroadsnews.com/node/1978

TunnelTalk, “Fatal collapse on Cologne’s new metro line”, Available at http://www.tunneltalk.com/Cologne-collapse-Mar09-Deadly-collapse-in-Cologne.php, retrieved on June 1, 2010.

Vicig, P. (1999). Epistemic independence for imprecise probabilities, ISIPTA '99 - Proceedings of the First International Symposium on Imprecise Probabilities and Their Applications, Zwijnaarde, Belgium.

Walley, P. Statistical reasoning with Imprecise Probabilities. London, New York, Tokyo, Melbourne, Madras, Chapman and Hall, 1991.

Wannick, H. The code of Practice for Risk Management of tunnel works - Future Tunnelling Insurance from the insurer's point of view, ITA-AITES 2006 World Tunnel Congress, Seoul, Korea, 2006.

Whitman, R.V., Evaluating calculated risk in geotechnical engineering, Journal of Geotechnical Engineering, ASCE, 110(2), 143-188, 1984.

WSDOT, Project Risk Management Guide, 2010. Available at http://www.wsdot.wa.gov/publications/fulltext/cevp/ProjectRiskManagement.pdf.

Zadeh, L.A. (1965) "Fuzzy sets". Information and Control 8 (3) 338–353. Available at http://www-bisc.cs.berkeley.edu/Zadeh-1965.pdf.


Vita

Xiaomin You was born in Nankang, Jiangxi Province, China, on December 02, 1982, the daughter of Weihua You and Xiangying Liu. After completing her work at Ganzhou No. 3 Middle School, she entered Tongji University, where she received her Bachelor's and Master's degrees in Civil Engineering in 2004 and 2007, respectively. Afterward, Xiaomin entered The University of Texas at Austin for her doctoral degree under the supervision of Dr. Fulvio Tonon. Her research interests include tunnel engineering, underground excavation, and risk analysis.

Permanent Address: 11 Dahuae Street

Ganzhou, Jiangxi 341000

China

This dissertation was typed by the author.