
Risk Measures and Optimised Importance Sampling

    Candidate Number: 457767

    University of Oxford

    20th April 2012


    Abstract

We begin with a brief discussion of risk measures and give three particular examples of such measures. We then consider how Monte Carlo techniques can be used to estimate these risk measures and, through the use of importance sampling, how we can improve the accuracy of such techniques. We then discuss a method to optimise the choice of parameters for a normal distribution to importance sample from. Finally, we implement this method in Matlab and run it to get good estimates of the optimal sampling distribution for Value-at-Risk and rough estimates for Expected Shortfall.


    Contents

Abstract

1 Introduction

2 Risk Measures

3 Monte Carlo and Importance Sampling for Risk Measures

4 The Importance Sampling Optimisation Problem

5 Optimisation Method

6 Results

6.1 VaR Results
6.2 ES Results
6.3 Testing out the optimised importance sampling method

7 Conclusion

A Code


    Chapter 1

    Introduction

For any agent holding a portfolio the two most important factors to consider are the expected return of the portfolio and the riskiness of the portfolio. Expected return is a fairly simple concept to agree upon: it is the average return. However,

the definition of risk is far more complicated and perhaps impossible to agree upon [5]. In order to assign some quantity to risk we use so-called risk measures. Different investors have different risk profiles, different sets of clients to satisfy, differences in the markets in which they trade, etc., which is why there is no one measure of risk which is universally accepted and used. In this project we will begin by defining a coherent (well behaved) risk measure, then we will briefly discuss the merits and pitfalls of three quite different measures of risk: Value-at-Risk, Expected Shortfall and the Wang Transform measure. We will then consider how Monte Carlo techniques and importance sampling can be used to estimate risk measures. Having considered the improvements importance sampling can give to conventional Monte Carlo estimation, we shall discuss how to optimise our choice of distribution to importance sample from. We will then carry out some optimisations of our importance sampling approach for obtaining VaR estimates at both the 95% and 99% levels for two different loss distributions, and discuss why optimisation of importance sampling for Expected Shortfall is more difficult. Lastly we shall take the results of our optimisations for the VaR importance sampling method and use them to try to beat the conventional Monte Carlo estimates for accuracy.


    Chapter 2

    Risk Measures

In order to study risk we need to consider the potential loss of an investment. To do this we study the loss distribution over a period $[0, t]$ of a given portfolio with value process $V$. We define the loss as $L = -(V_t - V_0)$, where $V_0$ is the initial value of the portfolio and $V_t$ is the value of the portfolio at time $t$. Clearly positive values of $L$ indicate losses and negative values of $L$ indicate gains. We shall now define the notion of a well behaved, or rather coherent, risk measure as laid out by Artzner, Delbaen, Eber and Heath [2].

Definition 1. A map $\rho : L^\infty(\mathcal{F}) \to \mathbb{R}$ that satisfies the following properties:

1. Monotonicity: if $X \le Y$ almost surely, then $\rho(X) \ge \rho(Y)$.
2. Cash translatability: for any constant $c$, $\rho(X + c) = \rho(X) - c$.
3. Positive homogeneity: if $\lambda \ge 0$, then $\rho(\lambda X) = \lambda\rho(X)$.
4. Sub-additivity: $\rho(X + Y) \le \rho(X) + \rho(Y)$

is said to be a coherent risk measure.

We shall now go on to define the three risk measures with which the rest of the project shall deal.

Definition 2. Value-at-Risk (VaR) at level $\alpha \in (0, 1)$ of our portfolio is
$$\mathrm{VaR}_\alpha = \inf\{l \in \mathbb{R} : P(L > l) \le 1 - \alpha\} = F_L^{-1}(\alpha)$$

Remark 1. Perhaps the best way to think of VaR, especially from the perspective of capital requirements, is to consider it as the smallest amount of cash which, if added to the portfolio, would make $P(V_t < 0) \le 1 - \alpha$.

There are two major problems with VaR:

1. It is not coherent (see Definition 1): specifically, it fails on sub-additivity. This means that it is possible to find portfolios which have a higher VaR when taken as one combined portfolio than when the individual portfolio VaRs are summed. This contradicts basic diversification principles and makes it possible for institutions holding such portfolios to artificially lower their VaR. This is a major concern given that VaR is often used to compute minimum capital requirements (for instance, Basel I used VaR widely).


2. It does not consider the extent of the losses greater than the VaR value. Sure, there may be only a 5% probability that my losses are greater than 1m, say, but what about that last 5%? What if my average loss in that last 5% is 100m?

This second issue provokes the following risk measure, which is a measure of the average value at risk for levels $p \ge \alpha$.

Definition 3. Expected Shortfall (ES) at level $\alpha \in (0, 1)$ is:
$$\mathrm{ES}_\alpha = \frac{1}{1-\alpha}\int_\alpha^1 q_p(F_L)\,dp = \frac{1}{1-\alpha}\int_\alpha^1 \mathrm{VaR}_p\,dp$$
where $q_p$ is the quantile function $q_p(F) = \inf\{x \in \mathbb{R} : F(x) \ge p\}$.

Remark 2. When the loss distribution is continuous this renders the intuitive result: $\mathrm{ES}_\alpha = E[L \mid L \ge \mathrm{VaR}_\alpha]$.

Not only does Expected Shortfall take into account the extent of the losses greater than the VaR, it is also coherent, satisfying the sub-additivity requirement. However, a possible criticism of both VaR and ES is that while both give some sense of the worst case scenario, they fail to describe the risk in the majority of the loss distribution. This is perhaps reasonable given their uses in calculating appropriate capital reserves; however, it would be useful to have a risk measure which describes the more relevant day-to-day risks and provides a relevant measure of risk when comparing different strategies or portfolios. In [4] Wang argues that a risk measure should go beyond coherence and proposes a risk measure that is the expectation of the losses using distorted probabilities, to cautiously account for severe but improbable losses whilst also taking into account the larger part of the loss distribution. We conclude this section by outlining the measure.

Definition 4. Wang transform measure, $\mathrm{WT}_\alpha$: Let $\Psi(x) = \Phi(\Phi^{-1}(x) - \gamma)$, where $\gamma = \Phi^{-1}(\alpha)$ with $\alpha$ our pre-selected security level (e.g. 95% or 99%), so $\gamma$ is just the $\mathrm{VaR}_\alpha$ of a standard normal. Then with $\tilde{F}_L(x) := \Psi(F_L(x))$ we have our risk measure
$$\mathrm{WT}_\alpha = \int_{\mathbb{R}} x\,d\tilde{F}_L(x).$$


    Chapter 3

    Monte Carlo and Importance

    Sampling for Risk Measures

In most models explicit formulae are not available for the risk measures we have defined. Instead we turn to Monte Carlo simulation to approximate these values. Clearly we are interested in estimating cumulative distribution functions and quantile functions. Here we will outline our method for estimating these functions, then we will demonstrate how to use such estimates to calculate our risk measures. Finally we will look at how to use importance sampling to improve these estimates and will consequently revise our methods of estimating the risk measures.

First recall that the cdf of a random variable $X$ with density $f$ is:
$$F(x) = \int_{-\infty}^x f(y)\,dy \qquad (3.1)$$

Now, to estimate this is fairly simple. We just use the empirical distribution function of our simulated data, $\{x_1, \ldots, x_n\}$:
$$F(x) \approx \hat{F}(x) = \sum_{i : x_i \le x} \frac{1}{n} \qquad (3.2)$$
Therefore the estimate for our loss distribution is:
$$F_L(x) \approx \hat{F}_L(x) = \sum_{i : x_i \le x} \frac{1}{n} \qquad (3.3)$$

However, for the calculation of VaR and ES this is not enough. We need to estimate the quantiles of the distribution, which requires an estimate of the quantile function of $F_L$. Our estimate of the quantile function will be the quantile function of $\hat{F}_L$ and is therefore given by:
$$F_L^{-1}(p) \approx \hat{F}_L^{-1}(p) = \min\{x \in \mathbb{R} : \hat{F}_L(x) \ge p\} \qquad (3.4)$$


In order to find the minimum $x$ above we must first order the sample such that $x_1 \le \ldots \le x_n$. Then,
$$\hat{F}_L^{-1}(p) = \min\{x_i \in \{x_1, \ldots, x_n\} : \hat{F}_L(x_i) \ge p\} \qquad (3.5)$$
and $\hat{F}_L(x_i) = \frac{i}{n}$ since our sample is ordered. Thus,
$$\hat{F}_L^{-1}(p) = x_{\lceil pn \rceil} \qquad (3.6)$$
Given our sample data $\{x_1, \ldots, x_n\}$, this quantile taken at $\alpha$ gives us our estimate for $\mathrm{VaR}_\alpha$:
$$\widehat{\mathrm{VaR}}_\alpha = \hat{F}_L^{-1}(\alpha) = x_{\lceil \alpha n \rceil} \qquad (3.7)$$
Consequently, using first-difference reasoning, our estimate for $\mathrm{ES}_\alpha$ is:
$$\widehat{\mathrm{ES}}_\alpha = \frac{1}{1-\alpha}\left[\frac{1}{n}\left(\lceil \alpha n \rceil - \alpha n\right)x_{\lceil \alpha n \rceil} + \sum_{i=\lceil \alpha n \rceil + 1}^{n} \frac{x_i}{n}\right] \qquad (3.8)$$
Using a similar first-differences technique to estimate the integral, but first transforming the estimated cdf, we obtain our estimate for $\mathrm{WT}_\alpha$:

$$\widehat{\mathrm{WT}}_\alpha = \sum_{i=1}^n x_i\left(\tilde{F}_L(x_i) - \tilde{F}_L(x_{i-1})\right) = \sum_{i=1}^n x_i\left(\Psi\!\left(\tfrac{i}{n}\right) - \Psi\!\left(\tfrac{i-1}{n}\right)\right) \qquad (3.9)$$
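As a minimal Matlab sketch of estimators (3.7) and (3.8), condensed from the simpleVaR1 and simpleES1 functions in the appendix (the sample is drawn from the first loss distribution used later in the project):

alpha=0.95; n=10000;
x=sort(-1+4.*randn(1,n));     % ordered sample x_1 <= ... <= x_n
a=ceil(alpha*n);
VaR_hat=x(a);                                               % equation (3.7)
ES_hat=(1/(1-alpha))*((a/n-alpha)*x(a)+sum(x(a+1:n))/n);    % equation (3.8)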

These Monte Carlo techniques will provide useful estimates of the risk measures for large enough sample sizes; however, they are inefficient techniques as they stand. A large part of the problem is that estimating extreme quantiles like VaR is very inefficient, because only a very small portion of our MC sample falls within the quantile. This implies that importance sampling could greatly improve our estimates. We now outline the premise of importance sampling and pave the way for the rest of the project.

When we use importance sampling techniques we sample data $y_1, \ldots, y_n$ from a different distribution with a density $g(x)$, and we note that
$$F(x) = \int_{-\infty}^x \frac{f(y)}{g(y)}\,g(y)\,dy \qquad (3.10)$$
So our improved estimation is:
$$F(x) \approx \hat{F}(x) = \sum_{i : y_i \le x} \frac{f(y_i)}{g(y_i)}\,\frac{1}{n} \qquad (3.11)$$
which we then normalise using the function


$$k(x) := \frac{f(x)}{g(x)}$$
to get our improved importance sampling estimate of the cdf:
$$F(x) \approx \hat{F}(x) = \frac{\sum_{i : y_i \le x} k(y_i)}{\sum_i k(y_i)} \qquad (3.12)$$

Note: this approach is taken from [1].
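As a minimal Matlab sketch of (3.12), condensing what kdensity1 and rfinder1 in the appendix do (the evaluation point x0 and the sampling parameters mu and s are illustrative choices, not values from the project):

n=10000; mu=5; s=4;                          % sample from N(mu, s^2)
y=sort(mu+s.*randn(1,n));
f=@(t) exp(-((t+1).^2)/8)./sqrt(8*pi);       % the first loss density of Chapter 4
g=@(t) exp(-((t-mu).^2)/(2*s^2))./sqrt(2*pi*s^2);
k=f(y)./g(y);                                % weights k(y_i) = f(y_i)/g(y_i)
x0=4;
F_hat=sum(k(y<=x0))/sum(k);                  % equation (3.12)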

Applying this method to the estimation of VaR, ES and WT we obtain the following estimates, where we have ordered the $y_i$ such that $y_1 \le \ldots \le y_n$:
$$\hat{F}_L^{-1}(p) = \inf\{y_i : \hat{F}_L(y_i) \ge p\} = y_{r(p)} \qquad (3.13)$$
where $r(p) = \inf\{m \in \mathbb{N} : \sum_{i=1}^m k(y_i) \ge p\sum_i k(y_i)\}$. It is a simple task to create an algorithm that takes the $y_i$ and $p$ and finds $r(p)$. Given this, we get our improved estimate for VaR:
$$\widehat{\mathrm{VaR}}_\alpha = y_{r(\alpha)} \qquad (3.14)$$
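Continuing the sketch above, $r(\alpha)$ and the importance sampled VaR of (3.13) and (3.14) reduce to one pass over the cumulative weights (this mirrors rfinder1 in the appendix):

alpha=0.95;
r=find(cumsum(k)>=alpha*sum(k),1,'first');   % r(alpha), per (3.13)
VaR_IS=y(r);                                 % equation (3.14)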

Now by definition $\sum_{i=1}^{r(\alpha)} k(y_i) \ge \alpha \sum_i k(y_i)$, so solving
$$\sum_{i=1}^{r(\alpha)} k(y_i) = a\sum_i k(y_i) \qquad (3.15)$$
for $a$ gives us the first jump point in $\widehat{\mathrm{VaR}}_p$ for $p \in [\alpha, 1]$. Using this reasoning we get our improved estimate for ES:
$$\widehat{\mathrm{ES}}_\alpha = \frac{1}{1-\alpha}\left[(a_0 - \alpha)\,y_{r(\alpha)} + \sum_{j=1}^{n-r(\alpha)} (a_j - a_{j-1})\,y_{R(a_{j-1})}\right] \qquad (3.16)$$
where $a_0$ satisfies $\sum_{i=1}^{r(\alpha)} k(y_i) = a_0\sum_i k(y_i)$ and $a_j$ satisfies $\sum_{i=1}^{R(a_{j-1})} k(y_i) = a_j\sum_i k(y_i)$,

with $R(p) = \inf\{m \in \mathbb{N} : \sum_{i=1}^m k(y_i) > p\sum_i k(y_i)\}$.

Again using first differences to estimate the integral, but first transforming the estimated cdf, we obtain our improved estimate for $\mathrm{WT}_\alpha$:
$$\widehat{\mathrm{WT}}_\alpha = \sum_{i=1}^n y_i\left(\hat{\tilde{F}}_L(y_i) - \hat{\tilde{F}}_L(y_{i-1})\right) \qquad (3.17)$$
where $\hat{\tilde{F}}_L(y) = \Psi(\hat{F}_L(y))$ and $\hat{F}_L(y_0) = 0$.

That concludes this chapter; the next chapter will look at how to optimise our choice of the importance sampling density $g$ (and hence the weight function $k$).


    Chapter 4

    The Importance Sampling

    Optimisation Problem

Having presented the case for, and methodology behind, using importance sampling in our Monte Carlo estimates of risk measures, we now seek to optimise the process. The reasoning behind importance sampling implies that the selection of the new distribution we sample from is of great importance. Given the wrong selection the process could be less efficient, and given the right selection the process can be made more efficient. It is therefore a worthwhile endeavour, if we choose to use importance sampling, to seek to optimise the choice of distribution to sample from.

The complexity of this problem increases with the complexity of the actual distribution and the computational complexity of the risk measure being estimated. Given the complexity of real world loss distributions, and the computational complexity of some risk measures, especially when they are subjected to importance sampling (e.g. Expected Shortfall), the problem is not simple. It is for these reasons that high quality optimisation of importance sampling used on such loss distributions and risk measures is beyond the scope of this project. Instead we simply introduce the reader to the issue by attempting to optimise the selection of $\mu$ and $\sigma$ for a normal sampling distribution with mean $\mu$ and variance $\sigma$ (note: in this paper variance, because it is used more than standard deviation, is denoted $\sigma$ rather than $\sigma^2$, as is the usual approach). We do this individually for each risk measure (VaR, ES and WT), both for a simple loss distribution which is normal and for a slightly more complex loss distribution which is a mixture of three different normal distributions, intended to reflect a simplified model of negatively impacting events on an otherwise healthy portfolio. However, before we embark on this journey, let us first lay out the problem, the expected solution and the potential difficulties.

The problem is to minimise the error of the importance sampled estimate relative to the exact value. We keep everything else constant (the sample size, $n = 10000$, and the actual loss distributions) and we carry out the optimisation problem for the two loss distributions separately.


The first has density:
$$f(x) = \frac{1}{\sqrt{8\pi}}\,e^{-\frac{(x+1)^2}{8}}$$
and the second has density:
$$f(x) = \frac{0.7}{\sqrt{6\pi}}\,e^{-\frac{(x+1)^2}{6}} + \frac{0.25}{\sqrt{16\pi}}\,e^{-\frac{(x-6)^2}{16}} + \frac{0.05}{\sqrt{4\pi}}\,e^{-\frac{(x-15)^2}{4}}$$
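As a quick sanity check of these two densities (f1 and f2 are illustrative handle names; each should integrate to one, using Matlab's integral, or quadgk in older releases):

f1=@(x) exp(-((x+1).^2)/8)/sqrt(8*pi);
f2=@(x) 0.7*exp(-((x+1).^2)/6)/sqrt(6*pi) + 0.25*exp(-((x-6).^2)/16)/sqrt(16*pi) ...
      + 0.05*exp(-((x-15).^2)/4)/sqrt(4*pi);
integral(f1,-Inf,Inf)    % = 1
integral(f2,-Inf,Inf)    % = 1, since the mixture weights 0.7+0.25+0.05 sum to one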

Exact values will be calculated by a brute force method that involves the basic MC approach and a very large sample size (in the majority of estimates we use upwards of 100 million sample points). When calculating our errors it is possible that a sampling distribution that should be inaccurate gets lucky and lands on, or very near to, our exact value. To combat this we take all error estimates several times over and take an average of the error.
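For instance, the reference value 5.578 for $\mathrm{VaR}_{0.95}$ of the first loss distribution, used throughout the appendix, can be reproduced along these lines (memory permitting):

n=1e8;                        % very large sample for the brute force reference
x=sort(-1+4.*randn(1,n));     % first loss distribution, as in rgenerator1
VaR_exact=x(ceil(0.95*n))     % approximately 5.578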

Now, according to our intuition, the optimal $\mu$, for VaR and ES at least, should be nearer the tail of the estimate, or the variance should be large; that way more sample points will lie in the relevant area and there will therefore be less variation in our estimate. We counteract the fact that we have changed the distribution by multiplying our integrands by $k$, the ratio of the actual density of the loss distribution to the density of our importance sampling distribution.

As for potential difficulties, VaR will be fairly simple to apply importance sampling to. The only extra element to consider over the simple approach is the computation of $r(\alpha) = \inf\{m \in \mathbb{N} : \sum_{i=1}^m k(y_i) \ge \alpha\sum_i k(y_i)\}$, which is a one-time calculation. This is much more of an issue for the importance sampled estimate of ES, because for each sample point above the importance sampled estimate of VaR we need to find a similar $r$ (see equation 3.15); this is likely to be very costly. As for the WT measure, the transforming of $\frac{\sum_{i : y_i \le x} k(y_i)}{\sum_i k(y_i)}$ for each $y_i$, not just those in an extreme quantile, is likely to be expensive.

    We refer the reader to the appendix for the algorithms used.


    Chapter 5

    Optimisation Method

Here we explain our method for optimising the importance sampling for VaR, ES and WT. For ease of explanation we describe our method for optimising the importance sampled estimate of $\mathrm{VaR}_{0.95}$ for the first loss distribution, and note that the optimisation for the other VaR estimates, and indeed the other risk measures, is analogous. Our method for finding the optimum $\mu$ and $\sigma$ is as follows. In Optpair95VaR1plot (see appendix) we iterate through a range of $\mu$ values, and for each we use optimalsigma95VaR1 (see appendix) to search for the optimal $\sigma$ corresponding to that $\mu$. This returns the optimal $(\mu, \sigma)$ pair and an average error for the pair. Optpair95VaR1plot then plots the proportional errors against the $\mu$ values. So we use Optpair95VaR1plot over a wide range at first, and then from the plot we focus our search for the optimal $(\mu, \sigma)$ pair. By "focus our search" we mean decrease the step sizes in the $\mu$ and $\sigma$ values iterated through, but search a narrower range of values. From the plot we then, if appropriate, use Matlab's built-in quadratic best fit to get the optimal $\mu$; if not, we assess the situation with common sense. This is a faster approach than continually zooming in on our optimum, and it also takes into account the general trend towards an optimum, which is more relevant because we are, after all, dealing with random samples which are subject to random behaviour. Finally we run optimalsigma95VaR1 one last time, with the chosen $\mu$ and a small step size in the $\sigma$s, to get our optimal $(\mu, \sigma)$ pair; a condensed sketch of this search loop follows.
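A condensed Matlab sketch of the coarse-to-fine search (a compressed version of Optpair95VaR1plot and optimalsigma95VaR1 from the appendix; exact, mus, sigmas and no stand for the reference value, the current search grids and the number of repetitions):

exact=5.578; mus=-25:1:25; sigmas=1:1:20; no=10;
best=[NaN NaN Inf];                      % [mu, sigma, average error]
for mu=mus
    for sg=sigmas
        err=0;
        for t=1:no                       % average the error over repeated estimates
            err=err+abs(impVaR1(mu,sg,0.95,10000)-exact);
        end
        err=err/no;
        if err<best(3), best=[mu sg err]; end
    end
end
% then narrow mus and sigmas around best(1) and best(2) with smaller
% steps and repeat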

While the optimisation method is analogous for ES and WT, their importance sampled estimates are much more computationally expensive than importance sampled VaR, so we run a rougher optimisation for ES and leave the optimisation of WT to the interested reader, who can take the methods and code here as a starting point.


    Chapter 6

    Results

    6.1 VaR Results

Here we present the results for the optimisation of the importance sampling for four VaR estimates: $\mathrm{VaR}_{0.95}$ and $\mathrm{VaR}_{0.99}$ for the first and second loss distributions individually. Firstly $\mathrm{VaR}_{0.95}$ for the first loss distribution, which we recall has a N(-1, 4) distribution. Carrying out the method outlined in the previous chapter, we obtained the following plots:

[Plots: proportional error against $\mu$ over the initial wide search range]

Based on the above, the search was narrowed to $\mu \in [-18, 0]$, rendering the plot below:

[Plot: proportional error against $\mu$ for $\mu \in [-18, 0]$]


Based on the above, $\mu = -11$. The optimal corresponding variance was then found by optimalsigma95VaR1 to be $\sigma = 16.9$, and the corresponding absolute error was 0.0477.

Now for the $\mathrm{VaR}_{0.99}$ estimate:

[Plot: proportional error against $\mu$ for $\mathrm{VaR}_{0.99}$, first loss distribution]

From the above plot and a similar narrowing-down process we obtain $(\mu, \sigma) = (-14, 16.0)$, which gave an error of 0.0476.

Now to the VaR estimates for the second loss distribution. We obtain the following data:

[Plot: proportional error against $\mu$ for $\mathrm{VaR}_{0.95}$, second loss distribution]

Note: we also investigated the $\mu = 8$ value point and it proved to also be a one-off. So taking $\mu = 5.25$ we get $\sigma = 16.2$ with an absolute error of 0.0826. Now we try to optimise the estimate of $\mathrm{VaR}_{0.99}$ for the second loss distribution:

[Plot: proportional error against $\mu$ for $\mathrm{VaR}_{0.99}$, second loss distribution]


So from the above we take $\mu = 2$, which has a corresponding $\sigma = 16.0$ with an absolute error of 0.0220. We conclude this section by presenting the results in the following table:

Loss Dist.   Level   $\mu$    $\sigma$   actual err.   prop. err.
first        0.95    -11      16.9       0.0477        0.0086
first        0.99    -14      16.0       0.0476        0.0057
second       0.95    5.25     16.2       0.0826        0.0053
second       0.99    2        16.0       0.0220        0.0011

    6.2 ES Results

Due to the much more complicated process required to estimate ES as opposed to VaR (remember we are effectively integrating VaR over different levels), it was a lot more time consuming to optimise the importance sampling process; for instance, the first plot below took twenty-two minutes to compose in Matlab. This made obtaining good estimates for the optimal $\mu$ and $\sigma$ more difficult.


Improvements can be made by using a faster platform, e.g. C++, or failing that by vectorising the loops in the algorithms presented in the appendix. No doubt intelligent coding can improve the efficiency of these algorithms, but we leave that to the interested reader. As it is, we present here the optimisation of $\mu$ and $\sigma$ for the importance sampling estimate of the Expected Shortfall of the first loss distribution at the $\alpha = 0.95$ level, and trust the reader to follow the methods illustrated in the previous section to obtain other similar ES estimates should they so desire.

[Plot: average error against $\mu$ for $\mathrm{ES}_{0.95}$, first loss distribution]

The corresponding optimal variance is $\sigma = 2.9$, the actual error is 0.5209, and the proportional error is 0.0718, which is rather large.

6.3 Testing out the optimised importance sampling method

In this section we test out the, now optimised, importance sampling approach against the conventional Monte Carlo approach to estimate the Value-at-Risk for both loss distributions at both levels. The estimation was carried out 100 times for both the importance sampling approach and the standard approach, and then the average error in each was taken. The results are presented below:

Loss Dist.   Level   avg simple MC error   avg importance sampling error
first        0.95    0.0642                0.0540
first        0.99    0.1164                0.0589
second       0.95    0.0991                0.1285
second       0.99    0.2851                0.0346

As we can see, for the first estimate the importance sampling method is only slightly more efficient. For the second it is almost twice as efficient. For the third it is marginally less efficient; however for the final estimate, $\mathrm{VaR}_{0.99}$ of the second loss distribution, the importance sampling method is over 8 times more efficient.
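A minimal sketch of one row of this comparison, for $\mathrm{VaR}_{0.95}$ of the first loss distribution, using the appendix functions, the optimal pair from Section 6.1 and the reference value 5.578 from the appendix:

reps=100; exact=5.578;
mc_err=zeros(1,reps); is_err=zeros(1,reps);
for t=1:reps
    mc_err(t)=abs(simpleVaR1(0.95,10000)-exact);          % conventional MC
    is_err(t)=abs(impVaR1(-11,16.9,0.95,10000)-exact);    % optimised IS, (mu,sigma)=(-11,16.9)
end
fprintf('avg MC error %.4f, avg IS error %.4f\n', mean(mc_err), mean(is_err));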


    Chapter 7

    Conclusion

We have seen that, while VaR and ES are intuitive measures of risk, they are also quite flawed. The Wang Transform measure is an alternative, but a possible criticism is that it lacks an easy to understand intuition to its values. Risk measures usually require estimation, and we discussed a conventional Monte Carlo approach and an importance sampling MC approach. We then went on to find the optimal normal distribution to importance sample from for a selection of different VaR measures. Finally, we compared our importance sampling approach to the conventional MC approach and found that in most cases it out-performs the conventional approach by varying amounts. Having said that, the improvement was not, in most cases, drastic, and considering the additional computational expense of the importance sampling approach, the net gain was perhaps, in some cases, negative. The possible reasons for the slightness of the accuracy improvement could be:

- poor optimisation of the importance sampling: searching through all the $(\mu, \sigma)$ pairs is a very slow process, and inevitably compromises in the fineness of the net over which we sift the pairs have to be made;

- perhaps the normal distribution is not a good kind of distribution to use to sample the kind of extreme risk measures like VaR and ES for the specific loss distributions we used;

- rounding errors in the sums embedded in the importance sampling algorithm.

In any case, if the problem of optimising importance sampling were to be revisited, it would be sensible to use a faster platform like C++, as Matlab struggled, and any improvement in speed is an improvement in possible accuracy.

We end by considering the results we obtained. While the $\mu$ values were much lower than expected, the importance sampling distributions compensated for this by having very large variances. This ensured more of the sample was in the relevant tail part of the loss distribution. Finally, examining our optimal pairs $(\mu, \sigma)$, we can see that in general the variance should be made very large and the $\mu$ should be adjusted to capture a large part of the loss distribution.


    Appendix A

    Code

Before reading the code please note that, although this was coded in Matlab, the use of for loops in Matlab gives sub-optimal performance. The choice was made to use for loops in this project because they are more intuitive to read in someone else's code, and they make translation of the following algorithms into more powerful languages, e.g. C++, a lot easier.

The sample generating functions:

function [ X ] = rgenerator1( n )
%rgenerator1
%Generates an n-sized sample from a N(-1,4) distribution and
%sorts the elements, smallest first.
X=-1+4.*randn(1,n);
X=sort(X);
end

function [ X ] = rgenerator2( n )
%rgenerator2
%Generates an n-sized sample from a mixture of normal
%distributions and sorts them:
%with probability 0.70, N(-2,3),
%with probability 0.25, N(6,8) and,
%with probability 0.05, N(15,2).
a=0.7*n;
b=0.25*n;
c=0.05*n;
x1=-2+3.*randn(1,a);
x2=6+8.*randn(1,b);
x3=15+2.*randn(1,c);
X=[x1 x2 x3];
X=sort(X);
end


function [ Y ] = rgenerator3( n , mu , sigma )
%rgenerator3
%Takes n, mu and sigma, and returns a sorted n-sized sample
%taken from a N(mu,sigma) distribution (sigma scales randn
%directly here).
Y=mu+sigma.*randn(1,n);
Y=sort(Y);
end

The simple Monte Carlo algorithms, with no importance sampling:

function [ VaR_alpha ] = simpleVaR1( alpha , n )
%simpleVaR1
% Takes alpha and sample size n and generates a standard MC
% estimate of VaR_alpha for the 1st loss distribution, using
% rgenerator1 and a sample size of n, per equation (3.7).
sample=rgenerator1(n);
index=ceil(alpha*n);
VaR_alpha=sample(index);
end

Note: the simple MC estimates for each risk measure of the 2nd loss distribution are calculated, and named, analogously to the above (e.g. simpleVaR2 is identical to simpleVaR1 except with rgenerator1 changed to rgenerator2).

function [ ES_alpha ] = simpleES1( alpha , n )
%simpleES1
% Takes alpha and sample size n and generates a standard MC
% estimate of ES_alpha for the 1st loss distribution, using
% rgenerator1 and a sample size of n, per equation (3.8).
sample=rgenerator1(n);
a=ceil(alpha*n);
b=(1/n)*(ceil(alpha*n)-alpha*n)*sample(a);
s=(1/n).*sample(a+1:1:n);
ES_alpha=(1/(1-alpha))*(b+sum(s));
end


function [ WT_alpha ] = simpleWT1( alpha , n )
%simpleWT1
% Takes alpha and sample size n and generates a standard MC
% estimate of WT_alpha for the 1st loss distribution, using
% rgenerator1 and a sample size of n, per equation (3.9).
sample=rgenerator1(n);
v=1:n;
W=zeros(1,n+1);
for m=1:n+1 % W(m) = Psi((m-1)/n), including W(n+1) = Psi(1) = 1
    W(m)=Wangtransform((m-1)/n,alpha);
end
for m=1:n
    v(m)=sample(m)*(W(m+1)-W(m));
end
WT_alpha=sum(v);
end

    Where Wangtransform is:

function [ output ] = Wangtransform( x , alpha )
%Wangtransform
% Takes x and alpha and carries out the Wang transform on x,
% using alpha to obtain gamma as described in Definition 4.
gamma=sqrt(2)*erfinv(2*alpha-1);        % gamma = Phi^{-1}(alpha)
inside=sqrt(2)*erfinv(2*x-1)-gamma;     % Phi^{-1}(x) - gamma
output=0.5*(1+erf(inside/sqrt(2)));     % Phi(Phi^{-1}(x) - gamma)
end

    Now for the importance sampled estimates, but first:

function [ newdensity ] = kdensity1( x , mu , sigma )
%kdensity1
% Calculates the ratio k(x) = f(x)/g(x), where f is the first
% loss density and g is the density of N(mu,sigma).
% Note: sigma enters g as a variance here, while the sample
% generators above apply sigma as a scale factor on randn.
f = (1/(sqrt(8*pi)))*exp(-((x+1)^2)/8);
g = (1/(sqrt(2*sigma*pi)))*exp(-((x-mu)^2)/(2*sigma));
newdensity = f/g;
end

    Again, kdensity2 is analogous. Then,

function [ VaR_alpha ] = impVaR1( mu , sigma , alpha , n )
%impVaR1
% Given sampling distribution N(mu,sigma), alpha and the size
% of the sample, returns an importance sampled estimate of
% VaR_alpha, per equation (3.14).
sample=rgenerator3(n,mu,sigma);
VaR_alpha=sample(rfinder1(mu,sigma,alpha,sample));
end


function [ r_alpha ] = rfinder1( mu , sigma , alpha , sample )
%rfinder1
% Given alpha, a sample and the mu and sigma of the
% sampling distribution, finds r(alpha) using k,
% as required in (3.13).
n=length(sample);
temp=1:1:n;
for m=1:n
    temp(m)=kdensity1(sample(m),mu,sigma);
end
s=cumsum(temp);
t=zeros(1,n);
for m=1:n
    if (s(m)/s(n))>=alpha
        t(m)=1;
    end
end
H=find(t);      % the vector of indices where t ~= 0
r_alpha=H(1);   % return the first such index
end

function [ ES_alpha ] = impES1( mu , sigma , alpha , n )
%impES1
% Given sampling distribution N(mu,sigma), alpha and the size
% of the sample, returns an importance sampled estimate of
% ES_alpha, per equation (3.16).
sample=rgenerator3(n,mu,sigma);
temp=1:1:n;
for q=1:n
    temp(q)=kdensity1(sample(q),mu,sigma);
end
s=cumsum(temp);
r=rfinder1(mu,sigma,alpha,sample);
a0=s(r)/s(n);
t=zeros(1,n-r+1);
t(1)=a0;
v=zeros(1,n-r);
for q=2:n-r+1
    R=bigrfinder1(t(q-1),temp);
    if isempty(R) % no further jump points beyond t(q-1)
        t(q)=t(q-1);
    else
        t(q)=s(R)/s(n);
        v(q-1)=(t(q)-t(q-1))*sample(R); % each increment (a_j - a_{j-1}) is
                                        % weighted by its sample value, per (3.16)
    end
end
ES_alpha=(1/(1-alpha))*((a0-alpha)*sample(r) + sum(v));
end


function [ bigr_p ] = bigrfinder1( p , temp )
%bigrfinder1
% Given the weights temp and a level p, returns R(p)
% as defined after equation (3.16).
s=cumsum(temp);
n=length(s);
t=zeros(1,n);
for i=1:n
    if s(i)>p*s(n)
        t(i)=1;
    end
end
bigr_p=find(t,1,'first');
end

function [ WT_alpha ] = impWT1( alpha , mu , sigma , n )
%impWT1
% Given sampling distribution N(mu,sigma), alpha and the size
% of the sample, returns an importance sampled estimate of
% WT_alpha, per equation (3.17).
sample=rgenerator3(n,mu,sigma);
temp=1:1:n;
for m=1:n
    temp(m)=kdensity1(sample(m),mu,sigma);
end
s=cumsum(temp);
W=zeros(1,n);
for k=1:n
    W(k)=Wangtransform(s(k)/s(n),alpha);
end
v=1:1:n;
v(1)=sample(1)*Wangtransform(s(1)/s(n),alpha); % since F~(y_0) = 0
for m=2:n
    v(m)=sample(m)*(W(m)-W(m-1));
end
WT_alpha=sum(v);
end


function [ graph , sigmas ] = Optpair95VaR1plot( dmu , dsigma , no )
%Optpair95VaR1plot   Plots proportional errors against mu.
% dmu is the spacing between mu values
% dsigma is the spacing between sigma values
% no is the number of times we calculate the error for each
% (mu, sigma) pair
mus=-25:dmu:25;
n=length(mus);
sigmas=zeros(1,n);
errors=zeros(1,n);
for m=1:n
    temp=optimalsigma95VaR1(mus(m),dsigma,no);
    sigmas(m)=temp(1);
    errors(m)=temp(2);
end
datamix=[mus; sigmas; errors./5.578]; % dividing by the exact VaR gives proportional errors
graph=plot(datamix(1,:),datamix(3,:));
end

function [ osig_merr ] = optimalsigma95VaR1( mu , d , no )
%optimalsigma95VaR1   Returns the optimal sigma and error for mu.
% d is the spacing between sigma values
% no is the number of times we calculate the error for each
% (mu, sigma) pair
spacer=1:1:(20/d);
sigma=d.*spacer;
compar=zeros(1,(20/d));
avger=zeros(1,no);
for m=1:length(sigma)
    for k=1:no
        avger(k)=abs(impVaR1(mu,sigma(m),0.95,10000)-5.578); % 5.578 is the exact VaR_0.95
    end
    compar(m)=sum(avger)/no;
end
[minimum,index]=min(compar);
osig_merr=[sigma(index), minimum];
end


    Bibliography

[1] Cohen, S. Lectures on Risk Measures, Mathematical Institute, University of Oxford.

[2] Artzner, P., Delbaen, F., Eber, J.-M., Heath, D. (1999) Coherent Measures of Risk, Mathematical Finance, Vol. 9, No. 3.

[3] Glasserman, P., Heidelberger, P., Shahabuddin, P. (2002) Portfolio Value-at-Risk with Heavy-Tailed Risk Factors, Mathematical Finance, Vol. 12, pp. 239-269.

[4] Wang, S.S. A Risk Measure That Goes Beyond Coherence, Research Paper, Institute of Insurance and Pension Research, University of Waterloo.

[5] Delbaen, F. Risk Measures or Measures That Describe Risk?, Research Paper, Dep. Mathematics, ETH-Zurich.

[6] Muller, P. (2010) Computation of Risk Measures Using Importance Sampling, Master's Thesis, ETH-Zurich.

[7] Anderson, E.C. (1999) Monte Carlo Methods and Importance Sampling, Lecture Notes for Stat 578C, Statistical Genetics.
