Société de Calcul Mathématique, S. A.
Siège social et bureaux : 111, Faubourg Saint Honoré, 75008 Paris. Tel. : 01 42 89 10 89. Fax : 01 42 89 10 69
Société Anonyme au capital de 56 200 Euros. RCS : Paris B 399 991 041. SIRET : 399 991 041 00035. APE : 731Z
Probabilistic methods for the analysis of uncertainties related to the safety of nuclear reactors
The Probabilistic Hypersurface
Explicit construction from the CATHARE code
Report no. 3 addressed to the Institut de RadioProtection et de Sûreté Nucléaire by Société de Calcul Mathématique S. A., in application of your order R50/11026029 of 29 November 2006.
Report written by Olga Zeydina, SCM SA
February 2007
Société de Calcul Mathématique, S. A. Algorithmes et Optimisation
EPH 3, IRSN - SCM SA, February 2007 2
Executive summary
In this report, we show how the Experimental Probabilistic Hypersurface (EPH), introduced by Bernard Beauzamy in [BB1] (2004), can be applied to the situation of the numerical code CATHARE. This code computes the temperature reached in a nuclear reactor in case of a severe accident; it is used for simulations. The question we answer here is: what is the probability that the final temperature will be above a given threshold (namely 1200 °C), given the results of a few hundred runs of the code?
The code CATHARE depends on 51 parameters (of various types); a list of these parameters is given in Annex 1. A set of 321 measures was performed and, for each measure, the maximal temperature reached is retained.
In fact, CATHARE indicates the evolution of temperature over time (and usually 3 peaks appear), but we are interested only in the maximum temperature reached for each experiment.
The computational code is rather slow (several hours for each measure point), so only a few computations can be performed. The general question is: what is the value of the information provided by 321 experiments, given the fact that we have 51 parameters? If each parameter took only 10 values (and in fact most of them are continuous parameters, which can take infinitely many values), then the configuration space would have 10^51 different points, and the exploration given by 321 values is quite small.
The principal aim behind the construction of the EPH is the propagation of information from each measure point to every other point. At a point where no measure has been made, the EPH gives a probability density for the result which can be expected at that point.
The whole construction of the EPH is based upon a principle of maximal entropy, so no unnecessary assumptions are made.
The construction of the EPH can be applied:
− in order to find a local probability, that is, the probability to have some value above a certain threshold at a given point in the parameter space;
− or in order to compute a global probability, that is, the local probability integrated over the whole parameter space.
So here, the EPH is built in a space of dimension 51, using 321 measure points. The computer representation is done using Excel files, and the program is written as VBA macros in Excel. The construction is made in two steps:
− General formula for the probability density above each point in the space, combining the information sent by the 321 measure points. This density of probability is exact (no approximation is made);
− Computation of the global integral of these densities over the whole space; this computation is done by Monte Carlo methods in a space of dimension 51, so it is only approximate, but we derive an estimate of the error.
The complete computation of the global probability, in the case of 321 measures in a 51-dimensional space, takes a few minutes on a PC.
The results obtained here indicate that, with the present values, the probability of being over the threshold is quite small: 2.5 × 10^(−13), with error at most 0.02. This opens the possibility of increasing the power of the reactor while still remaining under the threshold. The new value of the probability, for a different reactor, would be computed the same way.
Here, all parameters are taken with uniform laws on their respective domains: this means that no specific assumption is made about any of them. But in practice engineers consider specific laws for each parameter (normal, log-normal, and so on), reflecting specific knowledge about each of them. The present construction may easily incorporate such laws.
Table of contents
Executive summary
I. General construction
II. Normalization of the parameters
III. Presentation of the computations
A. Computing a local probability
B. Computing a global probability
1. The normalization coefficient
2. Discretization
3. Monte Carlo method for computing integrals
IV. Numerical computations connected with CATHARE
A. Description of the parameters
B. Description of the measures
C. Choice of the boundaries for temperature
D. Computing a local probability
1. Computing the local probability at a precise point X*
2. Quality of the construction
E. Computing a global probability
Annex 1: List of the parameters
Annex 2: Monte Carlo method for computing integrals
Annex 3: Example of application of the Monte Carlo method in a 50-parameter space
I. General construction
A computational code, or physical experiment, always depends on several parameters; we denote by K the number of these parameters. A set of N measures (or experiments) was made. So the computational code appears as a function defined on a subset of R^K, with real values. We call "measure point" a point in R^K where a measure has been made (that is, where these values of the parameters were introduced into the code).
In the second report for IRSN [see OZ2], we gave an extensive description of the construction of the Hypersurface in the case of K parameters and N measures.
We have a physical experiment or computational code, which returns a real value (in our case a temperature). This value lies in an interval, denoted by [t_min, t_max]. The temperatures are discretized, using a step τ (τ = 1 °C in our example). We denote by
ν = (t_max − t_min) / τ
the total number of steps in this discretization, and by
t_j = t_min + (j/ν)(t_max − t_min),  j = 0, …, ν,
the points in the interval [t_min, t_max].
We denote by x_1, …, x_K the K parameters. We write the result as:
T = CT(x_1, …, x_K)
(CT stands for "CATHARE").
If N measures were performed, we denote by ξ_k^(n) the value taken by the k-th parameter x_k for the n-th measure. We put:
A_n = (ξ_1^(n), …, ξ_K^(n)),  n = 1, …, N.
This is a point (called the "measure point") in a K-dimensional space. So there are N measure points in a K-dimensional space.
For all these measure points, we obtain the values of temperature; we denote them as:
θ_n = CT(A_n),  n = 1, …, N.
Recall (see [OZ2]) that the EPH consists of a probability density above each point in the parameter space. Each density, above a point X = (x_1, …, x_K), is a sum of contributions coming from each measure point. Such a contribution will be called an "elementary density", or "elementary contribution".
The general form of the elementary density, coming from the n-th measure point, is:
p_{n,j}(X) = (1/√(2π)) · exp(−λ(ν) d_n) · exp{ −(1/2) · exp(−2λ(ν) d_n) · ((t_j − θ_n)/τ)² },  n = 1, …, N   (1.1)
where d_n is the distance between the points X and A_n in the K-dimensional space:
d_n = d(X, A_n) = √( (x_1 − ξ_1^(n))² + … + (x_K − ξ_K^(n))² )   (1.2)
and these elementary densities are combined together, as:
p_j(X) = γ_1(X) · p_{1,j}(X) + … + γ_N(X) · p_{N,j}(X),  j = 0, …, ν   (1.3)
with:
γ_n(X) = d_n^(−K) / Σ_{i=1}^{N} d_i^(−K),  n = 1, …, N.   (1.4)
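As an illustration, the combination of formulas (1.1)–(1.4) can be sketched in a few lines of Python. This is only an illustrative reimplementation under our own naming (the report's actual computations are VBA macros in Excel):

```python
import numpy as np

def eph_density(X, A, theta, lam, tau, t_grid):
    """Density p_j(X) of the EPH above a point X of the parameter space.

    X      : evaluation point, shape (K,)
    A      : measure points A_n, shape (N, K)
    theta  : temperatures theta_n observed at the measure points, shape (N,)
    lam    : diffusion coefficient lambda(nu)
    tau    : temperature discretization step
    t_grid : discretized temperatures t_j, shape (nu + 1,)
    """
    N, K = A.shape
    d = np.linalg.norm(A - X, axis=1)                 # distances d_n of (1.2)
    # Elementary densities (1.1): Gaussians centred at theta_n whose spread
    # grows like exp(lambda * d_n), so information "diffuses" away from A_n.
    pref = np.exp(-lam * d) / np.sqrt(2.0 * np.pi)
    z = (t_grid[None, :] - theta[:, None]) / tau      # shape (N, nu + 1)
    p_elem = pref[:, None] * np.exp(-0.5 * np.exp(-2.0 * lam * d)[:, None] * z**2)
    # Weights (1.4), proportional to d_n^(-K); X must not itself be a measure point.
    w = d ** (-float(K))
    gamma = w / w.sum()
    return gamma @ p_elem                             # combination (1.3)
```

Summing the returned values over a wide enough temperature grid gives approximately 1, which is why the normalization coefficient c computed in Section III turns out to be so close to 1.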
In formula (1.1), λ(ν) is the coefficient of propagation of information (we prefer to say that it is a "speed" of diffusion of the information sent by each measure point). This coefficient depends on N and on ν.
The choice of the value for λ(ν) is based upon a general principle of maximal entropy: we require the maximal entropy at the point where we have the worst (minimal) information. This is the point where the information about possible temperatures is weakest. We now explain how this coefficient λ(ν) is computed.
Recall that for a given point X = (x_1, …, x_K), the entropy of the density function above this point is of the form:
I(X) = λ(ν) d_n
where d_n is defined in (1.2).
We have the worst (minimal) information when this point is the farthest from a measure point. We denote by d_max the biggest distance in the space from all measure points, that is:
d_max = max{ √( (x_1 − ξ_1^(n))² + … + (x_K − ξ_K^(n))² ) ; n = 1, …, N, (x_1, …, x_K) in the parameter space }   (1.5)
A point which is at distance d_max from all measure points will be the one with the weakest information. In practice, such a point must be one of the vertices of the cube defining the parameter space.
So, at this point, we have maximal entropy: I_max = λ(ν) d_max.
On the other hand, we know that the maximal entropy is equal to I_max = Log ν (see [BB1] and [OZ2]), because we assume a uniform law at this point (weakest information). We obtain the equation from which λ(ν) is defined unambiguously:
λ(ν) = Log ν / d_max   (1.6)
Also, we have to define the boundaries for each parameter. We set:
x_k ∈ [x_{k,min}, x_{k,max}] for k = 1, …, K, and θ_1, …, θ_N ∈ [t_min, t_max].
II. Normalization of the parameters
In practice, we can meet a situation where the parameters have completely different orders of magnitude.
As an example, we present the bounds for some parameters which were taken from CATHARE:
first parameter: x_1 ∈ [0.455, 1]; 10th: x_10 ∈ [−52, 71]; 16th: x_16 ∈ [4 130 000, 4 390 000]; and so on.
So, in this case, the parameter which has the largest distance between x_min and x_max will have the biggest influence on the collection of densities p_j(X). The form of the Hypersurface would strongly depend on this fact. But this cannot be correct because, in general, we do not know which parameter is more "important", and we want to consider all of them equally, at least in a preliminary stage. So, the width of the interval of variation cannot be a criterion of importance.
In order to have an equivalent influence from all parameters, we have to normalize all of them and bring the interval of variation of each x_k to [0,1], k = 1, …, K.
This can be obtained using the following formula (see [BB1]): if a parameter x_k varies in an interval [x_{k,min}, x_{k,max}], the parameter x_k − x_{k,min} varies in [0, x_{k,max} − x_{k,min}], and the parameter
(x_k − x_{k,min}) / (x_{k,max} − x_{k,min})
varies in [0,1].
In [OZ1], referring to [BB1], we raised an objection against this normalization, namely that it "shrinks" all distances, so that all points look close to one another. But our new approach, using proper bounds for the entropy, takes correct care of this situation.
Since the normalization changes nothing in the case of one parameter (see the proof below), it is meaningful to apply it to the case of several parameters, which need to be compared.
So, we are going to prove that the normalization changes nothing in the case of one parameter (K = 1). Mathematically speaking, we have to prove that the densities of probability p_j, j = 0, …, ν, have the same value at y = x ∈ [x_min, x_max] and at
y′ = x′ = (x − x_min) / (x_max − x_min) ∈ [0,1] :
p_j(x) = p_j(x′),  j = 0, …, ν.
In order to prove this, it is enough to compare everything which contains d_n (because x appears only in the formula for distances).
Let λ′ and d′ denote the values in the normalized case. We have to show that:
1) λ′(ν) · d_n′ = λ(ν) · d_n
2) (1/d_n′) / Σ_{i=1}^{N} (1/d_i′) = (1/d_n) / Σ_{i=1}^{N} (1/d_i)
The measure points ξ_k ∈ [x_{k,min}, x_{k,max}] have also to be normalized:
ξ_k′ = (ξ_k − x_{k,min}) / (x_{k,max} − x_{k,min}) ∈ [0,1].
Proof:
1) In the normalized case, d_n′ = |x′ − ξ_n′| = |x − ξ_n| / (x_max − x_min) = d_n / (x_max − x_min), and likewise the biggest distance becomes d′_max = d_max / (x_max − x_min). Hence:
λ′(ν) · d_n′ = ( Log ν / d′_max ) · d_n′ = ( Log ν · (x_max − x_min) / d_max ) · ( d_n / (x_max − x_min) ) = ( Log ν / d_max ) · d_n = λ(ν) · d_n
and similarly:
2) (1/d_n′) / Σ_{i=1}^{N} (1/d_i′) = ( (x_max − x_min)/d_n ) / Σ_{i=1}^{N} ( (x_max − x_min)/d_i ) = (1/d_n) / Σ_{i=1}^{N} (1/d_i).
So we see that, in the case of one parameter, the normalization does not change the value of the probability; and in the case K > 1 the normalization gives us the opportunity to compare all parameters correctly, despite different orders of magnitude.
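The invariance just proved can be checked numerically. The sketch below (illustrative Python with made-up sample values of ours) verifies points 1) and 2) for one parameter:

```python
import numpy as np

# One-parameter check that min-max normalization leaves the EPH unchanged:
# both lambda(nu) * d_n and the weights gamma_n are invariant under it.
rng = np.random.default_rng(0)
xi = rng.uniform(300.0, 900.0, size=8)              # hypothetical measure points
x = 550.0                                           # evaluation point
lo, hi = xi.min(), xi.max()                         # bounds x_min, x_max

d = np.abs(x - xi)                                  # distances, original scale
d_norm = np.abs((x - lo) - (xi - lo)) / (hi - lo)   # distances after normalization

nu = 400
# We take d_max here as the largest sample distance: any choice that rescales
# like the distances themselves gives the same invariance.
lam = np.log(nu) / d.max()
lam_norm = np.log(nu) / d_norm.max()

assert np.allclose(lam * d, lam_norm * d_norm)      # point 1)
g = (1.0 / d) / (1.0 / d).sum()
g_norm = (1.0 / d_norm) / (1.0 / d_norm).sum()
assert np.allclose(g, g_norm)                       # point 2)
```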
III. Presentation of the computations
In this paragraph, we present the computations which will be performed; the numerical examples will be treated in the next paragraph.
A. Computing a local probability
We already presented this application in [OZ2], where we showed how to compute the probability that, at some point X = (x_1, …, x_K), our T = CT(X) will be in the interval [T_1, T_2], where T_1 ≥ t_min and T_2 ≤ t_max.
We recall that this probability is given by the sum with step τ:
P_X{T_1 ≤ T ≤ T_2} = c · Σ_{t_j = T_1}^{T_2} p_j(X)   (2.1)
where c is a normalization coefficient:
c = 1 / Σ_{t_j = t_min}^{t_max} p_j(X)   (2.2)
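A minimal sketch of (2.1) and (2.2) in Python (illustrative only; the names are ours, not those of the report's macros):

```python
import numpy as np

def local_probability(p, t_grid, T1, T2):
    """Probability that T1 <= T <= T2 at a point X, from the density values p_j(X).

    p      : values p_j(X) on the temperature grid, shape (nu + 1,)
    t_grid : the discretized temperatures t_j
    """
    c = 1.0 / p.sum()                       # normalization coefficient (2.2)
    mask = (t_grid >= T1) & (t_grid <= T2)
    return c * p[mask].sum()                # sum (2.1)
```

By construction the result lies between 0 and 1, and taking T1 = t_min and T2 = t_max returns exactly 1.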
B. Computing a global probability
Each input parameter x_1, …, x_K lies in the interval [0,1] (since we normalized them). At this point, we assume that each parameter follows a uniform law in this interval: this means that we have no information at all on any of them, except the fact that they are between 0 and 1.
We will see later how to incorporate more precise information, for instance the fact that
a parameter follows a Gauss law, or any type of probabilistic law.
Let us define, for each parameter x_1, …, x_K, a sub-interval of variation: x_k ∈ [A_k, B_k] ⊆ [0,1]. Let [A, B] denote the product [A, B] = ∏_{k=1}^{K} [A_k, B_k]. We want to compute the probability that the temperature is in some interval [T_1, T_2] when X ∈ [A, B].
For this we have to integrate the function c · Σ_{t_j = T_1}^{T_2} p_j(X) over X ∈ [A, B] (see [BB1]). This can be written:
P_{X ∈ [A,B]}{T_1 ≤ T ≤ T_2} = ( 1/((B_1 − A_1) ⋯ (B_K − A_K)) ) ∫_{A_1}^{B_1} ⋯ ∫_{A_K}^{B_K} c · Σ_{t_j = T_1}^{T_2} p_j(X) dx_1 ⋯ dx_K
In our case, for the code CATHARE, we want to compute the probability for all X, so we take A_k = 0 and B_k = 1. We denote by GRH = [0,1]^K the global reduced hypercube.
So, the probability that CATHARE will give a result above T_0 = 1200 °C is:
P_{GRH}{T ≥ T_0} = ∫_0^1 ⋯ ∫_0^1 c · Σ_{t_j = T_0}^{t_max} p_j(x_1, …, x_K) dx_1 ⋯ dx_K   (2.3)
It is not possible to compute such an integral directly: first, we have a very complicated function to integrate, and second, we must integrate K times, once over each x_k: this makes it impossible to compute the integral using discretization. So we use an approximate method for our computations.
1. The normalization coefficient
First, we compute the normalization coefficient c. We have:
P_X{T ≥ T_0} = c · Σ_{t_j = T_0}^{t_max} p_j(x_1, …, x_K)
= c · Σ_{t_j = T_0}^{t_max} ( γ_1(X) · p_{1,j}(X) + … + γ_N(X) · p_{N,j}(X) )
= c · Σ_{n=1}^{N} γ_n(X) · Σ_{t_j = T_0}^{t_max} p_{n,j}(X),
where the sum Σ_{t_j = T_0}^{t_max} p_{n,j}(X) is computed as:
Σ_{t_j = T_0}^{t_max} p_{n,j}(X) = Σ_{t_j = T_0}^{t_max} (1/√(2π)) exp(−λ(ν) d_n) exp{ −(1/2) exp(−2λ(ν) d_n) ((t_j − θ_n)/τ)² }
= Σ_{t_j = T_0}^{t_max} ( τ / (√(2π) σ_n(X)) ) exp{ −(t_j − θ_n)² / (2 σ_n(X)²) }
where we denote by:
σ_n(X) = τ · e^{λ(ν) d_n} = τ / e^{−λ(ν) d_n}.
So we see that the above sum can be computed approximately using a Gaussian func-
tion :
Σ_{t_j = T_0}^{t_max} p_{n,j}(X) = Σ_{t_j = T_0}^{t_max} ( τ / (√(2π) σ_n(X)) ) exp{ −(t_j − θ_n)² / (2 σ_n(X)²) } ≈ ∫_{T_0}^{+∞} ( 1 / (√(2π) σ_n(X)) ) exp{ −(t − θ_n)² / (2 σ_n(X)²) } dt
So we obtain:
c = 1 / Σ_{t_j = t_min}^{t_max} p_j(X) ≈ 1 / ( Σ_{n=1}^{N} γ_n(X) ∫_{−∞}^{+∞} ( 1 / (√(2π) σ_n(X)) ) exp{ −(t − θ_n)² / (2 σ_n(X)²) } dt ) = 1 / Σ_{n=1}^{N} γ_n(X) = 1
As we said in our first report (see [OZ1]), for our technical computations we use macros in Excel, where we have a preprogrammed function F(T) which computes the distribution function of the Gauss law:
F_{θ,σ}(T) = ∫_{−∞}^{T} ( 1 / (√(2π) σ) ) exp{ −(t − θ)² / (2σ²) } dt
So, we introduce this distribution function in our equation; this gives:
P_{GRH}{T ≥ T_0} ≈ ∫_0^1 ⋯ ∫_0^1 Σ_{n=1}^{N} γ_n(X) · ( 1 − F_{θ_n, σ_n(X)}(T_0) ) dx_1 ⋯ dx_K   (2.4)
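The integrand of (2.4) can be sketched as follows, using the standard library's erf in place of Excel's preprogrammed Gaussian distribution function. This is an illustration under assumed names, not the report's actual macro:

```python
from math import erf, sqrt
import numpy as np

def tail_weight(X, A, theta, lam, tau, T0):
    """Integrand of (2.4): sum over n of gamma_n(X) * (1 - F_{theta_n, sigma_n(X)}(T0))."""
    N, K = A.shape
    d = np.linalg.norm(A - X, axis=1)            # distances d_n
    w = d ** (-float(K))
    gamma = w / w.sum()                          # weights (1.4)
    sigma = tau * np.exp(lam * d)                # sigma_n(X) = tau * e^(lambda d_n)
    # Gaussian distribution function via erf: F(T) = (1 + erf((T - theta)/(sigma*sqrt(2)))) / 2
    F = np.array([0.5 * (1.0 + erf((T0 - th) / (s * sqrt(2.0)))) for th, s in zip(theta, sigma)])
    return float(gamma @ (1.0 - F))
```

Since the weights γ_n sum to 1 and each term 1 − F lies in [0, 1], the returned value is itself a probability in [0, 1].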
Of course, this replacement makes our computation slightly faster and simpler, but we still need to integrate K times, once over each x_k, which is not simple.
2. Discretization
A simple method for computing an integral is the discretization of all the intervals [0,1] with some step (for instance, 0.01), summing the values at each point. This is quite appropriate when the number of parameters is small. But here, if we took a discretization using 100 points on each axis, we would have 100^51 values to compute. So this method cannot be applied here.
3. Monte Carlo method for computing integrals
We will use a Monte Carlo method in order to compute our integrals. It is simple to perform and gives good results. Theoretical justifications, as well as the rate of convergence, are given in Annex 2.
We set:
R(X) = Σ_{n=1}^{N} γ_n(X) ∫_{T_0}^{+∞} ( 1 / (√(2π) σ_n(X)) ) exp{ −(t − θ_n)² / (2 σ_n(X)²) } dt.
Let, to start with, M = 1 000. We choose values for each parameter x_1, …, x_K, each of them according to a uniform law in the interval [0,1], and we repeat this M times (so, in total, we have M·K samples).
When this is done, the integral of R over [0,1]^K is given (approximately) by the average of the evaluations of R at all these sampling points. Let X_1, …, X_M be the sampling points; we have:
P_{GRH}{T ≥ T_0} ≈ (1/M) Σ_{m=1}^{M} R(X_m)   (2.5)
The choice of the number of points M depends on the precision we want. We denote by I(R) the precise value of the integral:
I(R) = ∫_0^1 ⋯ ∫_0^1 R(X) dx_1 ⋯ dx_K
With probability 0.95 we have a rough estimate (see Annex 2):
| (1/M) Σ_{m=1}^{M} R(X_m) − I(R) | < 2/√M   (2.6)
If we want a more precise result, we have to take a larger number of points.
A simple example of the use of the Monte Carlo method in a 50-dimensional space is given in Annex 3.
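The estimate (2.5) together with the error indicator (2.6) can be sketched as below (illustrative Python; the 2/√M bound presumes the integrand stays within [0, 1], as a probability-valued R does):

```python
import numpy as np

def mc_estimate(R, K, M=10_000, seed=0):
    """Monte Carlo estimate of the integral of R over [0,1]^K (formula (2.5)),
    returned together with the rough 95% error bound 2/sqrt(M) of (2.6)."""
    rng = np.random.default_rng(seed)
    points = rng.uniform(0.0, 1.0, size=(M, K))     # M uniform sampling points
    estimate = float(np.mean([R(x) for x in points]))
    return estimate, 2.0 / np.sqrt(M)
```

For instance, integrating R(X) = (x_1 + … + x_K)/K over the unit cube (true value 1/2) with M = 10 000 gives an estimate well within the 0.02 bound.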
IV. Numerical computations connected with CATHARE
A. Description of the parameters
Here we have 68 parameters. The precise list of parameters is given in Annex 1. Each parameter has its respective law (it may be a uniform, normal, log-normal or constant law). At this point, we do not take into account the constant parameters and, for the others, we assume that all of them follow a uniform law. So we are left with 51 parameters, following uniform laws.
B. Description of the measures
In this experiment, 330 measures were made, of which 9 gave wrong results, namely their temperatures were zero. So we use only 321 measures, A_1, …, A_321, which give accordingly 321 values of temperature, θ_1, …, θ_321. The highest recorded temperature is 1166 °C.
C. Choice of the boundaries for temperature
All values of θ_1, …, θ_321 lie in the interval [903 °C, 1166 °C]. For t_min and t_max we have to take a wider interval.
We have to be careful with the choice of t_max, because if we choose t_max near T_0, then, of course, the probability to be above T_0 will be extremely small, which does not reflect reality.
On the other hand, if we choose t_max quite large (t_max ≫ T_0), then we penalize ourselves.
So, we define t_min = 900 °C and t_max = 1300 °C. The step of subdivision is τ = 1 °C.
D. Computing a local probability
First of all we normalize all parameters, as we said above.
1. Computing the local probability at a precise point X*
We are going to compute the probability that, at a given point X*, the temperature in the reactor will be above 1200 °C.
The probability to be above the threshold at the point X* is (see formula (2.1)):
P_{X*}{T ≥ 1200 °C} = c · Σ_{t_j = 1200}^{1300} p_j(X*)
We start our computation by computing λ(ν). For this we make 321 computations.
For the first measure point (n = 1), separately for each parameter, we compute:
max_k^(1) = max{ ξ_k^(1), 1 − ξ_k^(1) },  k = 1, …, K
In this way we find the biggest distance from the first measure point to the corners of the cube:
d_1 = √( (max_1^(1))² + … + (max_K^(1))² )
The same idea for the second measure point (n = 2):
d_2 = √( (max_1^(2))² + … + (max_K^(2))² )
where max_k^(2) = max{ ξ_k^(2), 1 − ξ_k^(2) }, k = 1, …, K, and so on.
Among all 321 distances we choose the biggest; in this way we obtain the global d_max:
d_max = max{ d_n, n = 1, …, N }
In the case of our 321 measures, we find d_max = 5.72.
From this, we deduce the value for λ(ν):
λ(ν) = Log ν / d_max = Log(400) / 5.72 = 1.047.
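This computation of d_max and λ(ν) reads, as an illustrative sketch (Python, with our own names):

```python
import numpy as np

def lambda_of_nu(Xi, nu):
    """Formulas (1.5)-(1.6) over the normalized cube [0,1]^K.

    Xi : normalized measure points, shape (N, K), entries in [0, 1].
    For each measure point, the farthest point of the cube is a vertex,
    reached coordinate-wise at max(xi_k, 1 - xi_k).
    """
    far = np.maximum(Xi, 1.0 - Xi)            # per-coordinate worst case
    d = np.sqrt((far ** 2).sum(axis=1))       # distance to the farthest vertex
    d_max = float(d.max())                    # biggest of the N distances
    return np.log(nu) / d_max, d_max
```

With the report's 321 normalized measure points this gives d_max = 5.72 and λ(ν) = Log(400)/5.72 ≈ 1.047 (Log denoting the natural logarithm, as the numerical value shows).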
As an example, we take X* = (1, …, 1). We first compute the coefficient c. We can do it here precisely, since we are dealing with a local probability. We find c = 0.99986.
Then, we compute the probability to be above 1200 °C. Following the method indicated above, we get:
P_{X*}{T ≥ 1200 °C} = 0.00001.
We present the graph of the function p_j(X*) for t_j = t_min, …, t_max:
[Graph: probability density p_j(X*) against temperature, from 900 °C to 1300 °C; the values range from 0.000 to about 0.020.]
Graph 1 : The form of the probability at the point X*
From this graph, we conclude that the most probable values of the temperature T = CT(X*) lie in the interval [960 °C, 1020 °C], with probability 60 %.
2. Quality of the construction
In order to investigate the precision of this answer, we perform the construction using 300 measures only; the last 21 will be used for comparison.
As before, we start by computing λ; for 300 measures, we get λ = 1.051.
We repeat the construction at each of the 21 points left for tests.
We take the point A_301, for which we have the temperature θ_301 = 1018 °C, calculated from the code CATHARE. Using only 300 measures, we build the density of probability for this point and we compute the probability that the temperature T = CT(A_301) will be inside the interval [θ_301 − 50 °C, θ_301 + 50 °C].
For the point A_302, we compute the probability that T = CT(A_302) will be inside the interval [θ_302 − 50 °C, θ_302 + 50 °C], where θ_302 = 1126 °C, and so on.
The results obtained for the 21 points are presented in the table below.
The quality of the construction at any point depends on two factors:
− first, the distance to the neighbouring measure points: if they are too far, then the reliability of the result is lower;
− second, the value of the temperature at the closest measure point.
Since these factors are important, we also put them in our table:
Test points | A301 | A302 | A303 | A304 | A305 | A306 | A307
Observed temperature (°C) | 1018 | 1126 | 1001 | 1068 | 999 | 1017 | 951
Interval for computing the probability (°C) | [968, 1068] | [1076, 1176] | [951, 1051] | [1018, 1118] | [949, 1049] | [967, 1067] | [901, 1001]
Probability to be inside the interval | 0.98 | 0.02 | 0.30 | 0.45 | 0.92 | 0.15 | 0.96
Distance to the closest point | 2.06 | 2.02 | 2.05 | 1.89 | 2.07 | 1.99 | 1.87
Temperature at the closest point (°C) | 982 | 1045 | 1055 | 983 | 1031 | 941 | 913

Test points | A308 | A309 | A310 | A311 | A312 | A313 | A314
Observed temperature (°C) | 993 | 1142 | 991 | 1041 | 1122 | 1072 | 1033
Interval for computing the probability (°C) | [943, 1043] | [1092, 1192] | [941, 1041] | [991, 1091] | [1072, 1172] | [1022, 1122] | [983, 1083]
Probability to be inside the interval | 0.50 | 0.01 | 0.70 | 0.96 | 0.89 | 0.11 | 0.72
Distance to the closest point | 2.00 | 2.13 | 2.03 | 2.00 | 1.94 | 2.05 | 2.02
Temperature at the closest point (°C) | 1037 | 1017 | 1028 | 1012 | 1079 | 997 | 1044

Test points | A315 | A316 | A317 | A318 | A319 | A320 | A321
Observed temperature (°C) | 977 | 1031 | 1020 | 1063 | 999 | 1033 | 1060
Interval for computing the probability (°C) | [927, 1027] | [981, 1081] | [970, 1070] | [1013, 1113] | [949, 1049] | [983, 1083] | [1010, 1110]
Probability to be inside the interval | 0.62 | 1.00 | 0.97 | 0.59 | 0.28 | 0.77 | 0.57
Distance to the closest point | 2.15 | 1.77 | 1.82 | 2.10 | 2.10 | 2.03 | 1.94
Temperature at the closest point (°C) | 1018 | 993 | 1017 | 1017 | 1052 | 1037 | 1009
So, in all cases, the distance to the nearest measure point is more than 1.77, which is not small compared with the biggest distance in the whole space: the largest distance in the hypercube [0,1]^51 is √51 ≈ 7.14.
Since the distance between each test point and all measure points is large, the interval we indicate does not always contain the value of the temperature at the closest point. For instance, for A_302, the observed temperature is 1126 °C, and the interval around it is [1076 °C, 1176 °C]. The closest point has temperature 1045 °C, and this temperature is not in the interval, which, therefore, receives only a small probability.
All these facts indicate that, evidently, 300 measures are not enough to fill the whole space. But still they provide some information, which we can use.
E. Computing a global probability
Here we take again the 321 measure points and we solve our original question: what is the probability that, over all possible X (in the 51-dimensional space), the temperature in the nuclear reactor will be above T_0 = 1200 °C?
We saw above that this probability is given by:
P_{GRH}{T ≥ T_0} ≈ ∫_0^1 ⋯ ∫_0^1 Σ_{n=1}^{N} γ_n(X) ∫_{T_0}^{+∞} ( 1 / (√(2π) σ_n(X)) ) exp{ −(t − θ_n)² / (2 σ_n(X)²) } dt dx_1 ⋯ dx_K
with:
σ_n(X) = τ · e^{λ(ν) d_n}
Using the Monte Carlo method, we can write:
P_{GRH}{T ≥ T_0} ≈ (1/M) Σ_{m=1}^{M} Σ_{n=1}^{N} γ_n(X_m) ∫_{T_0}^{+∞} ( 1 / (√(2π) σ_n(X_m)) ) exp{ −(t − θ_n)² / (2 σ_n(X_m)²) } dt
With M = 10 000 we obtain:
P_{GRH}{T ≥ T_0} ≈ 2.5 × 10^(−13), with error at most 0.02.
Even if we change the upper bound t_max and choose t_max = 2000 °C, we find:
P_{GRH}{T ≥ T_0} ≈ 7.3 × 10^(−10).
As an example, we can take another threshold, T_0 = 1100 °C. In this case the probability to be above it is:
P_{GRH}{T ≥ 1100 °C} ≈ 0.035.
We can check this result roughly: among the 321 measure points, 18 have temperatures above 1100 °C; dividing 18 by 321 we obtain 0.056, which is of the same order of magnitude.
Annex 1
List of the parameters :
Variable no. | Name | Law type | Min | Max | Mean (or Mode for log-normal) | Standard deviation
1 Coefficient DNBR assemblage chaud crayon chaud Uniforme 0,455 1 0,7275 -
2 Coefficient DNBR cœur moyen Uniforme 0,455 1 0,7275 -
3 HTC aval FDT cœur moyen Log-Normale 0,351153 5,18435 1,349259 1,566269
4 HTC aval FDT assemblage chaud crayon chaud Log-Normale 0,456265 3,161981 1,201124 1,380778
5 Température moyenne pastille crayon chaud Normale 0,865 1,135 1 0,045
6 CL branche froide Uniforme 0,8 1,9 1,35 -
7 CV branche froide Uniforme 0,8 1,9 1,35 -
8 ti grappe en décompression Log-Normale 0,235957 3,451408 0,902432 1,563846
9 QLE branche froide Uniforme 1 10 5,5 -
10 TMFS Uniforme -52 71 9,5 -
11 ti grappe aval front de trempe Log-Normale 0,001619 6,176324 0,1 3,952847
12 ti grappe amont front de trempe Log-Normale 0,228722 2,063201 0,68695 1,442798
13 Volume d'eau + ligne de décharge (m3) Uniforme 30,9 32,6 31,75 -
14 Ligne de décharge (m) Uniforme 24,5 29,8 27,15 -
15 k/a² ligne de décharge Uniforme 800 1900 1350 -
16 Pression accu (uniforme : bars) Uniforme 4130000 4390000 4260000 -
17 Enthalpie liquide (J) Uniforme 109430 213020 161225 -
18 Gamma Uniforme 1,32 1,4 1,36 -
19 FDH crayon chaud couronne 1 Constante 1,892 1,892 1,892 0
20 FQ crayon chaud couronne 1 Normale 2,265 2,793 2,529 0,088
21 Axial offset crayon chaud couronne 1 Constante 0 0 0 0
22 Cote de piquage crayon chaud couronne 1 Constante 1,8288 1,8288 1,8288 0
23 FDH assemblage chaud Constante 1,768 1,768 1,768 0
24 FQ assemblage chaud Constante 2,364 2,364 2,364 0
25 Axial offset assemblage chaud Constante 0 0 0 0
26 Cote de piquage assemblage chaud Constante 1,8288 1,8288 1,8288 0
27 FDH crayon chaud couronne 2 Constante 1,152 1,152 1,152 0
28 FQ crayon chaud couronne 2 Constante 1,555 1,555 1,555 0
29 Axial offset crayon chaud couronne 2 Constante 0 0 0 0
30 Cote de piquage crayon chaud couronne 2 Constante 1,8288 1,8288 1,8288 0
31 Pression primaire Uniforme 15300000 15700000 15500000 -
32 Niveau pressu (%) Uniforme 64,1 68,1 66,1 -
33 Température moyenne primaire (°C) Uniforme 302,4 306,8 304,6 -
34 Niveau GV (%) Uniforme 34 54 44 -
35 Puissance nominale (W) Uniforme 2770000000 2830000000 2800000000 -
36 Coefficient CV + FB en décompression Normale 0,61 1,39 1 0,13
37 ti downcomer Uniforme 1 39 20 -
38 QLE 3D Constante 1 1 1 0
39 ti branche froide Uniforme 0,01 1 0,505 -
40 Encrassement GV 1 Uniforme 1 1,15 1,075 -
41 Encrassement GV 2 Uniforme 1 1,15 1,075 -
42 Encrassement GV 3 Uniforme 1 1,15 1,075 -
43 Excentricité couronne 1 Uniforme 0,2 0,8 0,5 -
44 Excentricité couronnes 2&3 Uniforme 0,2 0,8 0,5 -
45 Pression dans la jeu pastille gaine couronne 1 Uniforme 9400000 17400000 13400000 -
46 Pression dans la jeu pastille gaine couronnes 2&3 Uniforme 9300000 9600000 9450000 -
47 Coefficient sur la loi de fluage Qa couronne 1 Normale 27745 30697 29221 492
48 Coefficient sur la loi de fluage Qaβ couronne 1 Normale 18652 21520 20086 478
49 Coefficient sur la loi de fluage Qβ couronne 1 Constante 0 0 0 0
50 Coefficient sur la loi de fluage Qa couronnes 2&3 Normale 27745 30697 29221 492
51 Coefficient sur la loi de fluage Qaβ couronnes 2&3 Normale 18652 21520 20086 478
52 Coefficient sur la loi de fluage Qβ couronnes 2&3 Constante 0 0 0 0
53 Débit primaire Uniforme 21724 22375,72 22049,86 -
54 Loi de puissance résiduelle Normale -3 3 0 1
55 Beta Constante 0,00585 0,00585 0,00585 0
56 Durée de vie Uniforme 0,000014 0,000018 0,000016 -
57 Doppler Constante 0,00752 0,00752 0,00752 0
58 Modérateur Constante 1 1 1 0
59 Température IS1 Uniforme 7 50 28,5 -
60 Température IS2 Uniforme 7 50 28,5 -
61 Débit IS 1 Uniforme -3 3 0 -
62 Débit IS 2 Uniforme -3 3 0 -
63 HTC FL côté secondaire Uniforme 1,12 1,4 1,26 -
64 HTC CNL côté secondaire Uniforme 0,46 1,85 1,155 -
65 HTC CNB côté primaire Uniforme 0,5 2 1,25 -
66 HTC CFV côté primaire Uniforme 0,5 2 1,25 -
67 Coefficient CV en régime C9 décompression Log-Normale 0,266175 3,065651 0,903327 1,50277
68 Taille de brèche Uniforme 0,5 0,9 0,7 -
Annex 2
Monte Carlo Method for computing integrals
Assume that we have some function \( R(X) = R(x_1, \dots, x_K) \) which is continuous in the region of integration \( D = [a_1, b_1] \times [a_2, b_2] \times \cdots \times [a_K, b_K] \). We want to compute an integral of the type
\[ I(R) = \int_D R(X)\, dX. \]
We choose at random \( M \) points \( X_1 = (x_{1,1}, \dots, x_{K,1}), \dots, X_M = (x_{1,M}, \dots, x_{K,M}) \) in the region \( D \): \( x_{k,m} \in [a_k, b_k] \), \( k = 1, \dots, K \), \( m = 1, \dots, M \), drawn with the uniform law. We compute the value of the function \( R(X_m) \) at each of these \( M \) points and take the average of all the values obtained.
Here we use the law of large numbers (Chebyshev's theorem), which says:

For a sequence \( \xi_1, \dots, \xi_n, \dots \) of independent and identically distributed random variables, we have the almost-sure convergence
\[ \frac{\xi_1 + \cdots + \xi_n}{n} \to E[\xi] \]
when \( n \to \infty \), where \( E[\xi] \) is the common expectation of these variables.
Since all the \( X_m \) are independent and have the same (uniform) distribution, we can apply the theorem above. In this way we obtain an approximation of the mean value of the function \( R(X) \) over the region:
\[ R_{\mathrm{average}} = \frac{1}{M} \sum_{m=1}^{M} R(X_m). \]
Then the value of the integral \( I(R) \) is approximately
\[ I(R) = \int_D R(X)\, dX \approx R_{\mathrm{average}} \int_D dX = R_{\mathrm{average}} \, (b_1 - a_1) \cdots (b_K - a_K) = \frac{(b_1 - a_1) \cdots (b_K - a_K)}{M} \sum_{m=1}^{M} R(X_m). \]
We denote by \( S(R) \) the approximate value of the integral, computed by the Monte Carlo method:
\[ S(R) := \frac{(b_1 - a_1) \cdots (b_K - a_K)}{M} \sum_{m=1}^{M} R(X_m). \]
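As an illustration (ours, not part of the original report), the estimator \( S(R) \) can be sketched in a few lines of Python; the integrand below is an arbitrary smooth test function, not the CATHARE response.

```python
import random

def monte_carlo_integral(R, bounds, M=100_000, seed=0):
    """Estimate I(R) = integral of R over the box `bounds` with the
    estimator S(R) = vol(D)/M * sum_m R(X_m), X_m uniform in D."""
    rng = random.Random(seed)
    vol = 1.0
    for a, b in bounds:
        vol *= (b - a)
    total = 0.0
    for _ in range(M):
        X = [rng.uniform(a, b) for a, b in bounds]
        total += R(X)
    return vol * total / M

# Test function R(x1, x2) = x1 + x2 on [0,1] x [0,2]; the exact integral is 3.
bounds = [(0.0, 1.0), (0.0, 2.0)]
estimate = monte_carlo_integral(lambda X: X[0] + X[1], bounds)
```

The same routine applies unchanged in dimension 51; only the list `bounds` grows.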
In order to estimate the difference between \( S(R) \) (approximate value) and \( I(R) \) (true value of the integral), we use the central limit theorem, which says:

For a sequence \( \xi_1, \dots, \xi_n, \dots \) of independent and identically distributed random variables, we have the convergence in law
\[ \frac{\sum_{i=1}^{n} \xi_i - n E[\xi]}{\sigma(\xi) \sqrt{n}} \to N(0,1), \]
where \( \sigma(\xi) \) is the standard deviation of these variables and \( N(0,1) \) is the standard Gaussian law with expectation 0 and variance 1.
Since a standard Gaussian variable exceeds 2 in absolute value with probability less than 0.05, we get:
\[ P\left( \left| \frac{1}{n} \sum_{i=1}^{n} \xi_i - E[\xi] \right| > \frac{2 \sigma(\xi)}{\sqrt{n}} \right) < 0.05. \]
We can apply this theorem in order to estimate the difference \( S(R) - I(R) \). With probability 0.95 we have:
\[ |S(R) - I(R)| \le \frac{2 \sigma(R)}{\sqrt{M}} \, \mathrm{vol}(D). \]
We return to our original problem. We denote by
\[ R(X) = \sum_{n=1}^{N} \gamma_n(X) \int_{T_0}^{+\infty} \frac{1}{\sigma_n(X) \sqrt{2\pi}} \exp\left( -\frac{(t - \theta_n(X))^2}{2 \sigma_n(X)^2} \right) dt. \]
The probability to be above the threshold is:
\[ P\{ T_{GRH} \ge T_0 \} \approx \int_{GRH} R(X)\, dX. \]
Using the Monte Carlo method, we get:
\[ P\{ T_{GRH} \ge T_0 \} \approx \frac{1}{M} \sum_{m=1}^{M} R(X_m). \]
Here the integration region is the unit hypercube, so
\[ \int_{GRH} dX = \int_0^1 \cdots \int_0^1 dX = 1. \]
The error is given by the formula:
\[ |S(R) - I(R)| \le \frac{2 \sigma(R)}{\sqrt{M}}. \]
In our case \( \sigma(R) < 1 \), because \( R \) takes its values in the interval \( [0,1] \), so finally we have:
\[ |S(R) - I(R)| < \frac{2}{\sqrt{M}}. \]
Annex 3
Example of application of the Monte Carlo method in a 50-parameter space
We take a precise function \( R \) which depends on 50 parameters:
\[ R(x_1, \dots, x_{50}) = (1 - y_1 + y_1 x_1) \cdots (1 - y_{50} + y_{50} x_{50}), \quad \text{with } y_k = \frac{1}{2^k}. \]
On the one hand, we can compute the integral \( \int_0^1 \cdots \int_0^1 R(x_1, \dots, x_{50})\, dx_1 \cdots dx_{50} \) by hand:
\[ I(R) = \int_0^1 \cdots \int_0^1 \left( 1 - \frac{1}{2} + \frac{x_1}{2} \right) \cdots \left( 1 - \frac{1}{2^{50}} + \frac{x_{50}}{2^{50}} \right) dx_1 \cdots dx_{50} = \left( 1 - \frac{1}{2^2} \right) \cdots \left( 1 - \frac{1}{2^{51}} \right) = 0.57758. \]
On the other hand, we can do it with the Monte Carlo method. We take at random \( M = 10\,000 \) points \( X_1 = (x_{1,1}, \dots, x_{K,1}), \dots, X_M = (x_{1,M}, \dots, x_{K,M}) \) with \( x_{k,m} \in [0,1] \). With the numerical values taken above, we have:
\[ S(R) = \frac{1}{M} \sum_{m=1}^{M} R(X_m) = 0.57690, \]
and:
\[ \mathrm{Error} = |S(R) - I(R)| \le \frac{2}{\sqrt{M}} = 0.02. \]
In reality: \( |I(R) - S(R)| = 0.0007 \).
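The whole Annex 3 computation can be reproduced in a few lines of Python (our sketch, with a fixed random seed; the exact value is the closed-form product derived above).

```python
import random

K, M = 50, 10_000
y = [2.0 ** -(k + 1) for k in range(K)]   # y_k = 1/2^k for k = 1..50

def R(x):
    # R(x) = product over k of (1 - y_k + y_k * x_k)
    p = 1.0
    for xk, yk in zip(x, y):
        p *= 1.0 - yk + yk * xk
    return p

# Exact value: product of (1 - 1/2^(k+1)), k = 1..50, i.e. 0.57758...
I_exact = 1.0
for yk in y:
    I_exact *= 1.0 - yk / 2.0

# Monte Carlo estimate with M uniform points in [0,1]^50
rng = random.Random(0)
S = sum(R([rng.random() for _ in range(K)]) for _ in range(M)) / M
# |S - I_exact| should land well below the bound 2/sqrt(M) = 0.02
```

Running this reproduces the order of magnitude reported above: the observed error is far smaller than the 0.02 guaranteed with probability 0.95.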
References
[BB1] Bernard Beauzamy, The Experimental Probabilistic Hypersurface, SCM SA, 2004.
[OZ1] L'Hypersurface Probabiliste, Rapport no 1, IRSN - SCM SA, 2006.
[OZ2] L'Hypersurface Probabiliste, Rapport no 2, IRSN - SCM SA, 2007.