
Chapter 3 Distributions

3.1 MOMENTS

There are a variety of interesting calculations that can be done from the models described in Chapter 2. Examples are the average amount paid on a claim that is subject to a deductible or policy limit or the average remaining lifetime of a person age 40.

Definition 3.1: The $k$th raw moment of a random variable is the expected (average) value of the $k$th power of the variable, provided it exists. It is denoted by $E[X^k]$ or by $\mu_k'$. The first raw moment is called the mean of the random variable and is usually denoted by $\mu$. Note that $\mu$ is not related to $\mu(x)$, the force of mortality. For random variables that take on only nonnegative values (i.e., $\Pr(X \ge 0) = 1$), $k$ may be any real number. When presenting formulas for calculating this quantity, a distinction between continuous and discrete variables needs to be made. Formulas will be presented for random variables that are either everywhere continuous or everywhere discrete. The formulas for the $k$th raw moment are as follows. Let $X$ be a random variable.

1. The 1st raw moment of $X$ is $\mu = E[X]$.
2. The $k$th raw moment of $X$ is
$$\mu_k' = E[X^k] = \int x^k f(x)\,dx \qquad \text{if the random variable is continuous}$$
$$\mu_k' = E[X^k] = \sum_j x_j^k \Pr(X = x_j) \qquad \text{if the random variable is discrete.} \qquad (3.1)$$

Example 3.1 : Determine the first two raw moments for each of the five models. The subscripts on the random variable X indicate which model is being used.

$$E[X_1] = \int_0^{100} 0.01x\,dx = 50, \qquad E[X_1^2] = \int_0^{100} 0.01x^2\,dx = 3{,}333.33,$$
$$E[X_2] = \int_0^{\infty} x\,\frac{3(2{,}000)^3}{(x+2{,}000)^4}\,dx = 1{,}000,$$


$$E[X_2^2] = \int_0^{\infty} x^2\,\frac{3(2{,}000)^3}{(x+2{,}000)^4}\,dx = 4{,}000{,}000,$$
$$E[X_3] = 0(0.5) + 1(0.25) + 2(0.12) + 3(0.08) + 4(0.05) = 0.93,$$
$$E[X_3^2] = 0(0.5) + 1(0.25) + 4(0.12) + 9(0.08) + 16(0.05) = 2.25,$$
$$E[X_4] = 0(0.7) + \int_0^{\infty} x\,(0.000003)e^{-0.00001x}\,dx = 30{,}000,$$
$$E[X_4^2] = 0(0.7) + \int_0^{\infty} x^2\,(0.000003)e^{-0.00001x}\,dx = 6{,}000{,}000{,}000,$$
$$E[X_5] = \int_0^{50} 0.01x\,dx + \int_{50}^{75} 0.02x\,dx = 43.75,$$
$$E[X_5^2] = \int_0^{50} 0.01x^2\,dx + \int_{50}^{75} 0.02x^2\,dx = 2{,}395.83.$$
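As a quick numerical check, the integrals above can be evaluated with SciPy. This is an illustrative sketch, not part of the original example; it assumes only the Model 1 and Model 2 densities written above.

```python
# Sketch: verify the Model 1 and Model 2 raw moments by numerical integration.
import numpy as np
from scipy import integrate

# Model 1: density f(x) = 0.01 on [0, 100]
m1_first, _ = integrate.quad(lambda x: x * 0.01, 0, 100)
m1_second, _ = integrate.quad(lambda x: x**2 * 0.01, 0, 100)

# Model 2: Pareto-type density f(x) = 3 * 2000^3 / (x + 2000)^4 on [0, inf)
f2 = lambda x: 3 * 2000**3 / (x + 2000) ** 4
m2_first, _ = integrate.quad(lambda x: x * f2(x), 0, np.inf)
m2_second, _ = integrate.quad(lambda x: x**2 * f2(x), 0, np.inf)

print(m1_first, m1_second)   # ~50.0, ~3333.33
print(m2_first, m2_second)   # ~1000.0, ~4000000.0
```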

Definition 3.2: The empirical model is a discrete distribution based on a sample of size $n$ that assigns probability $1/n$ to each data point.

Model 6: Consider a sample of size 8 in which the observed data points were 3, 5, 6, 6, 6, 7, 7, and 10. The empirical model then has probability function
$$p_6(x) = \begin{cases} 0.125, & x = 3,\\ 0.125, & x = 5,\\ 0.375, & x = 6,\\ 0.25, & x = 7,\\ 0.125, & x = 10.\end{cases}$$
The two moments for Model 6 are $E[X_6] = 6.25$ and $E[X_6^2] = 42.50$.
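The empirical probabilities and moments of Model 6 are easy to reproduce directly from the eight data points; the following sketch (added here for illustration) does so with NumPy.

```python
# Sketch: empirical probability function and first two moments for Model 6.
import numpy as np

sample = np.array([3, 5, 6, 6, 6, 7, 7, 10])
values, counts = np.unique(sample, return_counts=True)
p6 = counts / counts.sum()               # each observation carries weight 1/8

first_moment = np.sum(values * p6)       # E[X_6]   = 6.25
second_moment = np.sum(values**2 * p6)   # E[X_6^2] = 42.5
print(dict(zip(values.tolist(), p6.tolist())), first_moment, second_moment)
```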

Definition 3.3: The $k$th central moment of a random variable is the expected value of the $k$th power of the deviation of the variable from its mean. It is denoted by $E[(X-\mu)^k]$ or by $\mu_k$. The second central moment is usually called the variance and denoted $\sigma^2$, and its square root, $\sigma$, is called the standard deviation. The ratio of the standard deviation to the mean is called the coefficient of variation: $cv = \sigma/\mu$. The ratio of the third central moment to the cube of the standard deviation, $\gamma_1 = \mu_3/\sigma^3$, is called the skewness. The ratio of the fourth central moment to the fourth power of the standard deviation, $\gamma_2 = \mu_4/\sigma^4$, is called the kurtosis.

The continuous and discrete formulas for calculating central moments are


$$\mu_k = E[(X-\mu)^k] = \int (x-\mu)^k f(x)\,dx \qquad \text{if the random variable is continuous}$$
$$\mu_k = E[(X-\mu)^k] = \sum_j (x_j-\mu)^k p(x_j) \qquad \text{if the random variable is discrete.} \qquad (3.2)$$

Figure 3.1 Densities of $f(x) \sim$ gamma(0.5, 100) and $g(x) \sim$ gamma(5, 10).

There is a link between raw and central moments. The following equation indicates the connection between second moments. The development uses the continuous version from (3.1) and (3.2), but the result applies to all random variables.
$$\mu_2 = \int (x-\mu)^2 f(x)\,dx = \int (x^2 - 2x\mu + \mu^2) f(x)\,dx = E[X^2] - 2\mu E[X] + \mu^2 = \mu_2' - \mu^2. \qquad (3.3)$$
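For instance, applying (3.3) to Model 1 with the raw moments from Example 3.1 gives the variance directly; the arithmetic is shown in the short sketch below (added for illustration).

```python
# Sketch: variance of Model 1 from its raw moments via equation (3.3).
mu = 50.0              # E[X_1]
mu2_raw = 3333.33      # E[X_1^2]
variance = mu2_raw - mu**2
print(variance)        # ~833.33
```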

Example (gamma distribution): The p.d.f. of the gamma distribution with parameters $\alpha$ and $\theta$ is
$$f(x) = \frac{x^{\alpha-1} e^{-x/\theta}}{\Gamma(\alpha)\,\theta^{\alpha}},$$
and its raw moments are
$$E[X^k] = \frac{\theta^k\,\Gamma(\alpha+k)}{\Gamma(\alpha)}.$$
The first three raw moments of the gamma distribution are $\alpha\theta$, $\alpha(\alpha+1)\theta^2$, and $\alpha(\alpha+1)(\alpha+2)\theta^3$. From (3.3) the variance is $\alpha\theta^2$. Consider the following two gamma distributions. One has parameters $\alpha = 0.5$ and $\theta = 100$ while the other has $\alpha = 5$ and $\theta = 10$. These have the same mean, but their skewness coefficients are 2.83 and 0.89, respectively.
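The comparison can be confirmed with scipy.stats, which parameterizes the gamma distribution by shape a = α and scale = θ; this sketch is an added illustration, not part of the text.

```python
# Sketch: same mean, very different skewness for gamma(0.5, 100) vs gamma(5, 10).
from scipy import stats

for alpha, theta in [(0.5, 100), (5, 10)]:
    mean, var, skew = stats.gamma.stats(a=alpha, scale=theta, moments="mvs")
    print(alpha, theta, float(mean), float(var), round(float(skew), 2))
# gamma(0.5, 100): mean 50, variance 5000, skewness 2.83
# gamma(5,    10): mean 50, variance  500, skewness 0.89
```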

Definition 3.4: For a given value of $d$ with $\Pr(X > d) > 0$, the excess loss variable is $Y^P = X - d$ given that $X > d$. Its expected value,
$$e_X(d) = e(d) = E[Y^P] = E[X - d \mid X > d],$$


is called the mean excess loss function. Other names for this expectation are mean residual life

function and complete expectation of life. When the latter terminology is used, the commonly used symbol is

$\mathring{e}_d$.

This variable could also be called a left truncated and shifted variable. It is left truncated because observations below d are discarded. It is shifted because d is subtracted from the remaining values. When X is a payment variable, the mean excess loss is the expected amount paid given that there has been a payment in excess of a deductible of d . When X is the age at death, the mean excess loss is the expected remaining time until death given that the person is alive at age d .

The $k$th moment of the excess loss variable is determined from
$$e_X^k(d) = \frac{\int_d^{\infty} (x-d)^k f(x)\,dx}{1 - F(d)} \qquad \text{if the variable is continuous}$$
$$e_X^k(d) = \frac{\sum_{x_j > d} (x_j - d)^k p(x_j)}{1 - F(d)} \qquad \text{if the variable is discrete.} \qquad (3.4)$$

When $k = 1$, an integration by parts, taking the antiderivative of $f(x)$ as $-S(x)$, gives the second line of the following development:
$$e_X(d) = \frac{\int_d^{\infty} (x-d) f(x)\,dx}{1 - F(d)} = \frac{\bigl[-(x-d)S(x)\bigr]_d^{\infty} + \int_d^{\infty} S(x)\,dx}{S(d)} = \frac{\int_d^{\infty} S(x)\,dx}{S(d)}. \qquad (3.5)$$
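Formula (3.5) is easy to evaluate numerically. The sketch below (added here) assumes an exponential loss with mean θ, for which the memoryless property gives e(d) = θ for every d, and checks that the integral form agrees.

```python
# Sketch: mean excess loss e(d) from equation (3.5) for an assumed exponential.
import numpy as np
from scipy import integrate

theta = 1000.0                        # assumed exponential mean (illustration)
S = lambda x: np.exp(-x / theta)      # survival function S(x) = exp(-x/theta)

def mean_excess(d):
    """e(d) = int_d^inf S(x) dx / S(d), per (3.5)."""
    numerator, _ = integrate.quad(S, d, np.inf)
    return numerator / S(d)

print(mean_excess(0.0), mean_excess(500.0), mean_excess(2500.0))  # all ~1000
```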

Definition 3.5: The left censored and shifted variable is
$$Y^L = (X - d)_+ = \begin{cases} 0, & X < d,\\ X - d, & X \ge d.\end{cases}$$

It is left censored because values below d are not ignored but are set equal to 0. There is no standard name or symbol for the moments of this variable. For dollar events, the distinction between the excess loss variable and the left censored and shifted variable is one of per payment versus per loss. In the former situation, the variable exists only when a payment is made. The latter variable takes on the value 0 whenever a loss produces no payment. The moments can be calculated from

$$E[(X-d)_+^k] = \int_d^{\infty} (x-d)^k f(x)\,dx \qquad \text{if the variable is continuous}$$
$$E[(X-d)_+^k] = \sum_{x_j > d} (x_j - d)^k p(x_j) \qquad \text{if the variable is discrete.} \qquad (3.6)$$


It should be noted that
$$E[(X-d)_+^k] = e_X^k(d)\,[1 - F(d)]. \qquad (3.7)$$
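Relation (3.7) can be spot-checked for the Pareto density of Model 2 with a deductible of d = 500; the sketch below is an added illustration using numerical integration.

```python
# Sketch: check E[(X-d)_+] = e(d) [1 - F(d)] for the Model 2 Pareto, d = 500.
import numpy as np
from scipy import integrate

alpha, lam, d = 3.0, 2000.0, 500.0
f = lambda x: alpha * lam**alpha / (lam + x) ** (alpha + 1)   # density
S = lambda x: (lam / (lam + x)) ** alpha                      # survival function

left, _ = integrate.quad(lambda x: (x - d) * f(x), d, np.inf)  # E[(X-d)_+]
e_d = integrate.quad(S, d, np.inf)[0] / S(d)                   # e(d) via (3.5)
print(left, e_d * S(d))                                        # both ~640.0
```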

Figure 3.2 Excess loss variable. Figure 3.3 Left censored and shifted variable.

Example 3.2: Construct graphs to illustrate the difference between the excess loss variable and the left censored and shifted variable. The two graphs in Figures 3.2 and 3.3 plot the modified variable $Y$ as a function of the unmodified variable $X$. The only difference is that for $X$ values below 100 the excess loss variable is undefined, while the left censored and shifted variable is set equal to zero.

The next definition provides a complementary function to the excess loss.

Definition 3.6: The limited loss variable is
$$Y = X \wedge u = \begin{cases} X, & X < u,\\ u, & X \ge u.\end{cases}$$
Its expected value, $E[X \wedge u]$, is called the limited expected value.

Figure 3.4 Limit of 100 plus deductible of 100 equals full coverage.

This variable could also be called the right censored variable. It is right censored because values above $u$ are set equal to $u$. An insurance phenomenon that relates to this variable is the existence of a policy limit that sets a maximum on the benefit to be paid. Note that
$$(X \wedge d) + (X - d)_+ = X.$$


That is, buying one policy with a limit of d and another with a deductible of d is equivalent to buying full coverage. This is illustrated in Figure 3.4.

The most direct formulas for the $k$th moment of the limited loss variable are

$$E[(X \wedge u)^k] = \int_{-\infty}^{u} x^k f(x)\,dx + u^k\,[1 - F(u)] \qquad \text{if the random variable is continuous}$$
$$E[(X \wedge u)^k] = \sum_{x_j \le u} x_j^k p(x_j) + u^k\,[1 - F(u)] \qquad \text{if the random variable is discrete.} \qquad (3.8)$$

Another interesting formula is derived as follows:

$$E[(X \wedge u)^k] = \int_{-\infty}^{0} x^k f(x)\,dx + \int_{0}^{u} x^k f(x)\,dx + u^k\,[1 - F(u)]$$
$$= x^k F(x)\Big|_{-\infty}^{0} - \int_{-\infty}^{0} k x^{k-1} F(x)\,dx - x^k S(x)\Big|_{0}^{u} + \int_{0}^{u} k x^{k-1} S(x)\,dx + u^k S(u)$$
$$= -\int_{-\infty}^{0} k x^{k-1} F(x)\,dx + \int_{0}^{u} k x^{k-1} S(x)\,dx, \qquad (3.9)$$
where the second line follows from integration by parts. For $k = 1$, we have
$$E[X \wedge u] = -\int_{-\infty}^{0} F(x)\,dx + \int_{0}^{u} S(x)\,dx.$$
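For a nonnegative loss the limited expected value is just the integral of the survival function up to u. The sketch below (added here) checks this for the Model 2 Pareto with u = 500, where the closed form E[X ∧ u] = (λ/(α−1))[1 − (λ/(λ+u))^(α−1)] applies.

```python
# Sketch: E[X ^ u] = int_0^u S(x) dx for the Model 2 Pareto (alpha=3, lambda=2000).
import numpy as np
from scipy import integrate

alpha, lam, u = 3.0, 2000.0, 500.0
S = lambda x: (lam / (lam + x)) ** alpha

lev, _ = integrate.quad(S, 0.0, u)
closed_form = lam / (alpha - 1.0) * (1.0 - (lam / (lam + u)) ** (alpha - 1.0))
print(lev, closed_form)   # both ~360.0
```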

3.2 QUANTILES

One other value of interest that may be derived from the distribution function is the percentile function. It is the inverse of the distribution function, but because this quantity is not well defined, an arbitrary definition must be created.

Definition 3.7: The 100$p$th percentile of a random variable is any value $\pi_p$ such that $F(\pi_p^-) \le p \le F(\pi_p)$. The 50th percentile, $\pi_{0.5}$, is called the median.


Example 3.3: Determine the 50th and 80th percentiles for Model 1 and Model 3.

For Model 1, the 100$p$th percentile $\pi_p$ can be obtained from $p = F(\pi_p) = 0.01\pi_p$, and so $\pi_p = 100p$. In particular, the requested percentiles are 50 and 80 (see Figure 3.5). For Model 3, the distribution function equals 0.5 for all $0 \le x < 1$, and so any value from 0 to 1 inclusive can be the 50th percentile. For the 80th percentile, note that at $x = 2$ the distribution function jumps from 0.75 to 0.87, and so $\pi_{0.8} = 2$ (see Figure 3.6).

3.3 Model-Free estimation of distributions

To elicit some information about an unknown distribution, the statistician takes a random sample from that distribution (or population). We can assume the $n$ sample observations (items)

$X_1, X_2, \ldots, X_n$ to be independent and identically distributed; that is, if $f(x)$ is the p.d.f. of the unknown distribution, then the joint p.d.f. of the sample observations is
$$f(x_1) f(x_2) \cdots f(x_n).$$
If we assume nothing more about the underlying distribution, we would estimate the probability of an event $A$ by counting the number of items that fall in $A$. This number, in statistics, is called the frequency of $A$, and the relative frequency is an estimate of $\Pr(X \in A)$. A more formal way of thinking about this procedure is to construct an empirical distribution by assigning probability $1/n$ to each of the $n$ values $X_1, X_2, \ldots, X_n$. The corresponding (cumulative) empirical distribution function is frequently denoted by
$$F_n(x) = \frac{1}{n}\,(\text{number of observations} \le x), \qquad -\infty < x < \infty.$$
An example, with $n = 4$, is given in Fig. 3.7. The corresponding empirical p.d.f., say $f_n(x)$, is a discrete one with a weight of $1/n$ on each $X_i$. The estimate of $p = \Pr(X \in A)$ is
$$\hat{p} = \sum_{x \in A} f_n(x).$$

Fig. 3.7 Empirical distribution function

The moments: the mean and the variance of the empirical distribution are, respectively,
$$\sum_{x_i} x_i f_n(x_i) = \frac{1}{n}\sum_{i=1}^{n} x_i = \bar{X}$$
and
$$\sum_{x_i} (x_i - \bar{X})^2 f_n(x_i) = \frac{1}{n}\sum_{i=1}^{n} (x_i - \bar{X})^2 = S^2.$$


Often these characteristics of the sample are denoted by $\bar{X}$ and $S^2$, respectively. These statistics are called the sample mean and the sample variance because they are determined by the sample items. Sometimes $S^2$ is calculated using a formula that parallels $\sigma^2 = E[X^2] - \mu^2$;
$$S^2 = \frac{1}{n}\sum_{i=1}^{n} x_i^2 - \bar{X}^2.$$
The positive square root, $S$, of $S^2$ is called the sample standard deviation.
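The empirical distribution function and the sample moments can be computed in a few lines; the sketch below (added for illustration) reuses the Model 6 data and also shows the n − 1 divisor discussed in the remark that follows.

```python
# Sketch: empirical c.d.f. F_n, sample mean, and sample variance (divisor n).
import numpy as np

x = np.array([3.0, 5.0, 6.0, 6.0, 6.0, 7.0, 7.0, 10.0])   # Model 6 data
n = len(x)

def F_n(t):
    """Empirical c.d.f.: proportion of observations <= t."""
    return np.count_nonzero(x <= t) / n

x_bar = x.mean()                    # sample mean X-bar
s2 = np.mean((x - x_bar) ** 2)      # sample variance S^2 with divisor n
s2_unbiased = x.var(ddof=1)         # divisor n - 1 (see the remark below)
print(F_n(6.0), x_bar, s2, s2_unbiased)
```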

Remark: Some statisticians define the sample variance by
$$\frac{1}{n-1}\sum_{i=1}^{n} (x_i - \bar{X})^2$$
because they want an unbiased estimator of the population variance $\sigma^2$.

3.4 Parameter Estimation

We estimate distributions and their parameters after assuming a functional form for the underlying p.d.f. However, in practice, there must always be some doubt about results obtained with a particular model because it probably is not the right one. Of course, we can hope that the model selected is a reasonable approximation to the true state of affairs and thus the corresponding inferences can be used as a good guide in our decision-making process. Suppose we assume a model that depends on one or more parameters. For convenience, let us begin with a p.d.f. with one parameter $\theta$ and denote it by $f(x; \theta)$. Given a random sample $X_1, X_2, \ldots, X_n$ from the underlying distribution, we want to find an estimator, say $u(X_1, X_2, \ldots, X_n)$, of $\theta$.

The traditional estimation of parameters is done by five different methods. The first two are crude methods which have the advantage of being easy to apply; the penalty is a significant lack of accuracy. These methods are percentile matching (pm) and the method of moments (mm). The other three methods, minimum distance (md), minimum chi-square (mc), and maximum likelihood (ml), are more formal procedures with well-defined statistical properties. They produce reliable estimators but are computationally complex. The pm and mm estimators provide good starting values for these procedures. For most distributions, the pm and mm procedures are easy to implement. Recall that with no coverage modifications pm estimation is accomplished by solving the $p$ equations
$$F_X(x_i; \theta) = F_n(x_i), \qquad i = 1, 2, \ldots, p,$$
where $\theta$ is the $p$-dimensional vector of parameters of the distribution of $X$. The values $x_1, x_2, \ldots, x_p$ are arbitrarily selected from the data. The equations for mm estimation are
$$E[X^i] = \frac{1}{n}\sum_{j=1}^{n} x_j^i, \qquad i = 1, 2, \ldots, p,$$


where, of course, the expected value on the left-hand side is a function of $\theta$. When coverage modifications are in effect, the equations take on a more complex form. If truncation at $d$ is present, the percentiles for pm estimation are from the truncated distribution. The equations are of the form
$$F_Y(x_i) = \frac{F_X(x_i) - F_X(d)}{1 - F_X(d)} = F_n(x_i),$$
where $F_n(x_i)$ is the empirical distribution function of a sample from the truncated distribution. This formulation allows us to estimate the distribution of $X$, the model for untruncated losses.

3.5 Maximum likelihood estimation (MLE)

In order to find the most likely values of the parameters $\theta_j$ to have produced the observed outcomes, one maximizes the likelihood function $L$, where
$$L(\theta_j) = \prod_{i=1}^{n} f(x_i; \theta_j).$$
It is usually easier to maximize the natural logarithm of $L$, the log-likelihood function,
$$\ln L(\theta_j) = \sum_{i=1}^{n} \ln f(x_i; \theta_j).$$
Taking the partial derivative with respect to $\theta_j$ and setting it equal to zero yields
$$\frac{\partial}{\partial \theta_j} \ln L(\theta_j) = 0.$$

The parameter estimates $\hat{\theta}_j$ are obtained by solving these equations. The loss distributions considered in this study are the exponential, lognormal, inverse exponential, Pareto, gamma, and Weibull distributions. The estimated parameters are explained in the following items.

(1) Exponential distribution

The p.d.f. and c.d.f. of the exponential distribution (here parameterized by the rate $\lambda$) are
$$f(x) = \lambda e^{-\lambda x}, \qquad F(x) = 1 - e^{-\lambda x}, \qquad x \ge 0.$$
The expectation and variance of the exponential distribution are as follows:
$$E[X] = \frac{1}{\lambda}, \qquad Var[X] = \frac{1}{\lambda^2}.$$
The likelihood function is
$$L(\lambda) = \prod_{i=1}^{n} f(x_i; \lambda) = \lambda^{n} e^{-\lambda \sum_{i=1}^{n} x_i}.$$
Then
$$\ln L(\lambda) = \sum_{i=1}^{n} \ln f(x_i; \lambda) = n \ln \lambda - \lambda \sum_{i=1}^{n} x_i.$$
The estimate of the parameter can be obtained by solving the equation
$$\frac{d}{d\lambda} \ln L(\lambda) = 0.$$
We get
$$\hat{\lambda} = \frac{n}{\sum_{i=1}^{n} x_i} = \frac{1}{\bar{X}}.$$

(2) Lognormal distribution

The estimated parameters are
$$\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} \ln x_i \qquad \text{and} \qquad \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n} (\ln x_i - \hat{\mu})^2.$$
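Both estimators above have closed forms, so a sketch of the fitting step is short. The code below is an added illustration; it assumes the rate parameterization λ used for the exponential and the natural-log moments for the lognormal, and the sample shown is arbitrary demo data.

```python
# Sketch: closed-form MLEs for the exponential (rate) and lognormal parameters.
import numpy as np

def fit_exponential(x):
    """MLE of the exponential rate: lambda_hat = n / sum(x) = 1 / mean(x)."""
    return 1.0 / np.mean(np.asarray(x, dtype=float))

def fit_lognormal(x):
    """MLE of (mu, sigma^2): mean and variance (divisor n) of log(x)."""
    logs = np.log(np.asarray(x, dtype=float))
    mu_hat = logs.mean()
    return float(mu_hat), float(np.mean((logs - mu_hat) ** 2))

demo_losses = [120.0, 450.0, 80.0, 310.0, 95.0]   # illustrative values only
print(fit_exponential(demo_losses))
print(fit_lognormal(demo_losses))
```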


(3) Inverse exponential distribution

The expectation of the inverse exponential distribution takes the following form:

The estimated parameter is:

(4) Pareto distribution

The expectation and variance of the Pareto distribution are as follows:
$$E[X] = \frac{\lambda}{\alpha - 1} \quad (\alpha > 1), \qquad Var[X] = \frac{\alpha \lambda^2}{(\alpha - 1)^2 (\alpha - 2)} \quad (\alpha > 2).$$

(5) Gamma distribution

The expectation and variance of the gamma distribution are as follows:
$$E[X] = \frac{\alpha}{\beta}, \qquad Var[X] = \frac{\alpha}{\beta^2}.$$
Since the parameter estimates of $\alpha$ and $\beta$ cannot be found in closed form, we use a numerical iteration scheme (the Newton-Raphson method) for parameter estimation.

(6) Weibull distribution


The expectation and variance of the Weibull distribution are as follows:
$$E[X] = c^{-1/\tau}\,\Gamma\!\left(1 + \frac{1}{\tau}\right), \qquad Var[X] = c^{-2/\tau}\left[\Gamma\!\left(1 + \frac{2}{\tau}\right) - \Gamma\!\left(1 + \frac{1}{\tau}\right)^{2}\right].$$
The likelihood and log-likelihood functions are constructed as in the general case above. Since the parameter estimates of $c$ and $\tau$ cannot be found in closed form, we use a numerical iteration scheme (the Newton-Raphson method) for parameter estimation.

Activity: Data

No.     1    2    3    4    5    6    7    8    9      10   11   12   13   14   15
Claims  100  200  300  400  500  50   200  700  1,000  300  500  500  500  100  200

3.6 Testing the fit of models

The goodness of fit (GOF) test measures the compatibility of a random sample with a theoretical probability distribution function. The distance considered is that between the empirical distribution function, $F_n(x)$, and the distribution function of the model, $F(x; \theta)$. This is the distance used in the popular Kolmogorov-Smirnov test (K-S test), whose test statistic is defined by
$$D = \max_x \left| F_n(x) - F(x; \theta) \right|,$$
where
$$F_n(x) = \frac{1}{n}\,(\text{number of observations} \le x)$$
and $F(x; \theta)$ is the c.d.f. of the theoretical distribution of interest.

The hypotheses of the K-S test are:

$H_0$: The data follow the specified distribution.

$H_1$: The data do not follow the specified distribution.


Level critical values: The hypothesis regarding the distributional form is rejected at the chosen significance level ($\alpha$) if the test statistic $D$ is greater than the critical value obtained from Table 3.1 below.

Table 3.1 The level of significance ($\alpha$) for D

Sample size (n)   0.20      0.15      0.10      0.05      0.01
 1                0.900     0.925     0.950     0.975     0.995
 2                0.684     0.726     0.776     0.842     0.929
 3                0.565     0.597     0.642     0.708     0.828
 4                0.494     0.525     0.564     0.624     0.733
 5                0.446     0.474     0.510     0.565     0.669
 6                0.410     0.436     0.470     0.521     0.618
 7                0.381     0.405     0.438     0.486     0.577
 8                0.358     0.381     0.411     0.457     0.543
 9                0.339     0.360     0.388     0.432     0.514
10                0.322     0.342     0.368     0.410     0.490
11                0.307     0.326     0.352     0.391     0.468
12                0.295     0.313     0.338     0.375     0.450
13                0.284     0.302     0.325     0.361     0.433
14                0.274     0.292     0.314     0.349     0.418
15                0.266     0.283     0.304     0.338     0.404
16                0.258     0.274     0.295     0.328     0.392
17                0.250     0.266     0.286     0.318     0.381
18                0.244     0.259     0.278     0.309     0.371
19                0.237     0.252     0.272     0.301     0.363
20                0.231     0.246     0.264     0.294     0.356
25                0.210     0.220     0.240     0.270     0.320
30                0.190     0.200     0.220     0.240     0.290
35                0.180     0.190     0.210     0.230     0.270
Over 35           1.07/√n   1.14/√n   1.22/√n   1.36/√n   1.63/√n

The following interpretation of P-values is widely applied in many papers:

P-value               Description
P < 0.01              very strong evidence against H0
0.01 ≤ P < 0.05       moderate evidence against H0
0.05 ≤ P < 0.10       suggestive evidence against H0
P ≥ 0.10              little or no real evidence against H0

However, we use here another important measure, a form of the Cramér-von Mises statistic,
$$\frac{1}{n}\sum_{i=1}^{n} \left[ F_n(x_i) - F(x_i; \theta) \right]^2,$$
where $x_1, x_2, \ldots, x_n$ are the observations.
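Putting the pieces together, the K-S distance D and the averaged squared distance above can be computed for the Activity claim data against an exponential fitted by maximum likelihood. The sketch below is an added illustration; the critical value 0.338 is read from Table 3.1 for n = 15 and α = 0.05.

```python
# Sketch: K-S statistic D and the Cramer-von Mises-type average for the
# Activity claims versus an exponential fitted by MLE (rate = 1 / mean).
import numpy as np

claims = np.sort(np.array([100, 200, 300, 400, 500, 50, 200, 700,
                           1000, 300, 500, 500, 500, 100, 200], dtype=float))
n = len(claims)
lam = 1.0 / claims.mean()                     # fitted exponential rate
F_model = 1.0 - np.exp(-lam * claims)         # F(x_(i); lambda_hat)

F_emp_hi = np.arange(1, n + 1) / n            # F_n at each order statistic
F_emp_lo = np.arange(0, n) / n                # F_n just before each order statistic

D = max(np.max(np.abs(F_emp_hi - F_model)),
        np.max(np.abs(F_emp_lo - F_model)))   # K-S statistic
cvm_avg = np.mean((F_emp_hi - F_model) ** 2)  # statistic from the text above

print(D, cvm_avg)
print("reject at 5%?", D > 0.338)             # critical value from Table 3.1, n = 15
```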


Loss Distributions (distribution function $F$, tail $\bar F$ or density $f$, moments $E[X^n]$, and parameters)

1. Exponential
   p.d.f.: $f(x) = \beta e^{-\beta x}$
   d.f.: $F(x) = 1 - e^{-\beta x}$, tail: $\bar F(x) = e^{-\beta x}$
   Moments: $E[X^n] = n!\,\beta^{-n}$
   Parameters: $\beta > 0$; $x \ge 0$

2. Gamma
   p.d.f.: $f(x) = \dfrac{\beta^{\alpha} x^{\alpha-1} e^{-\beta x}}{\Gamma(\alpha)}$
   d.f.: $F(x) = \Gamma(\alpha; \beta x)$ (incomplete gamma function ratio)
   Moments: $E[X^n] = \beta^{-n} \prod_{i=0}^{n-1} (\alpha + i)$
   Parameters: $\alpha, \beta > 0$; $x \ge 0$

3. Weibull
   p.d.f.: $f(x) = c \tau x^{\tau-1} e^{-c x^{\tau}}$
   d.f.: $F(x) = 1 - e^{-c x^{\tau}}$, tail: $\bar F(x) = e^{-c x^{\tau}}$
   Moments: $E[X^n] = \Gamma(1 + n/\tau)\, c^{-n/\tau}$
   Parameters: $c, \tau > 0$; $x \ge 0$

4. Truncated normal
   p.d.f.: $f(x) \propto e^{-x^{2}/(2\sigma^{2})}$, a normal density truncated to $x \ge 0$
   Parameters: $\sigma > 0$; $x \ge 0$

5. Log-normal
   p.d.f.: $f(x) = \dfrac{1}{x\sigma\sqrt{2\pi}}\, e^{-(\ln x - \mu)^2/(2\sigma^2)}$
   d.f.: $F(x) = \Phi\!\left(\dfrac{\ln x - \mu}{\sigma}\right)$
   Moments: $E[X^n] = e^{n\mu + \frac{1}{2} n^2 \sigma^2}$
   Parameters: $\mu \in \mathbb{R}$, $\sigma > 0$; $x \ge 0$

6. Pareto
   p.d.f.: $f(x) = \dfrac{\alpha \lambda^{\alpha}}{(\lambda + x)^{\alpha+1}}$
   d.f.: $F(x) = 1 - \left(\dfrac{\lambda}{\lambda + x}\right)^{\alpha}$, tail: $\bar F(x) = \left(\dfrac{\lambda}{\lambda + x}\right)^{\alpha}$
   Moments: $E[X^n] = \dfrac{\lambda^n\, n!}{\prod_{i=1}^{n} (\alpha - i)}$, $n < \alpha$
   Parameters: $\alpha, \lambda > 0$; $x \ge 0$

7. Burr
   p.d.f.: $f(x) = \dfrac{\alpha \tau k^{\alpha} x^{\tau-1}}{(k + x^{\tau})^{\alpha+1}}$
   d.f.: $F(x) = 1 - \left(\dfrac{k}{k + x^{\tau}}\right)^{\alpha}$, tail: $\bar F(x) = \left(\dfrac{k}{k + x^{\tau}}\right)^{\alpha}$
   Moments: $E[X^n] = k^{n/\tau}\, \dfrac{\Gamma(1 + n/\tau)\,\Gamma(\alpha - n/\tau)}{\Gamma(\alpha)}$, $n < \alpha\tau$
   Parameters: $\alpha, k, \tau > 0$; $x \ge 0$

8. Benktander type I
   d.f.: $F(x) = 1 - \left(1 + \dfrac{2\beta}{\alpha}\ln x\right) e^{-\beta (\ln x)^2 - (\alpha+1)\ln x}$,
   tail: $\bar F(x) = \left(1 + \dfrac{2\beta}{\alpha}\ln x\right) e^{-\beta (\ln x)^2 - (\alpha+1)\ln x}$
   Parameters: $\alpha, \beta > 0$; $x \ge 1$

9. Benktander type II
   d.f.: $F(x) = 1 - e^{\alpha/\beta}\, x^{\beta - 1}\, e^{-\frac{\alpha}{\beta} x^{\beta}}$,
   tail: $\bar F(x) = e^{\alpha/\beta}\, x^{\beta - 1}\, e^{-\frac{\alpha}{\beta} x^{\beta}}$
   Parameters: $\alpha > 0$, $0 < \beta \le 1$; $x \ge 1$

10. Log-gamma
    p.d.f.: $f(x) = \dfrac{\alpha^{\beta}}{\Gamma(\beta)} (\ln x)^{\beta-1} x^{-\alpha-1}$
    d.f.: $F(x) = \Gamma(\beta; \alpha \ln x)$
    Moments: $E[X^n] = \left(1 - \dfrac{n}{\alpha}\right)^{-\beta}$, $n < \alpha$
    Parameters: $\alpha, \beta > 0$; $x \ge 1$

11. Truncated $\alpha$-stable
    d.f.: $F(x) = \Pr(X \le x)$, where $X$ is an $\alpha$-stable random variable truncated to positive values, $1 < \alpha < 2$.