
Stress Testing and Sensitivity Analysis: Requirements and Methods

Name: Ananya Bhattacharyya

Designation: Business Analyst, Genpact Analytics & Research

Business Vertical: Financial Services Analytics

E-Mail: [email protected]

Date: 31st December 2015

Genpact Analytics & Research


Table of Contents

Stress Testing
Why Stress Testing is Required: Regulatory Guidelines
How Stress Testing Helps to Assess the Financial Health of an Institution
Solvency Test
Liquidity Test
Methods of Stress Testing
Scenario Analysis
Macroeconomic Stress Testing
Approaches of Macroeconomic Stress Testing
PPNR as a Stress Testing Tool
Sensitivity Analysis
Regulatory Guidelines
Various Methods for Sensitivity Analysis
Different Methods for the Input Variation
Case Study
Result Interpretation & Conclusion
References


Abstract

The global financial crisis has brought the spotlight onto stress testing, both as a regulatory requirement and as an internal risk management tool. This paper traces the evolution of stress testing as a regulatory requirement and examines why it is considered so important as a supervisory tool. The first part of the paper discusses the different types of stress testing, classified by the ultimate objective they serve, and the various approaches followed by regulators in carrying out this exercise. It also throws light on the regulators' changing focus towards making the exercise dynamic by including pre-provision net revenue (PPNR) to capture the variability in projected losses, and discusses a few commonly used quantitative methods for sensitivity analysis along with their major advantages and disadvantages. The second part applies a logistic regression based model for estimating default probabilities on a consumer loan portfolio. A few approaches to sensitivity analysis and stress testing are applied to this model to check how it performs under stress scenarios, and the results are interpreted.

Key Words: Stress Testing, Sensitivity Analysis, Probability of Default Model, Risk, Regulations

Paper Type: Technical


1. Stress Testing:

Stress testing has been conceived as a key component of the Financial Sector Assessment Program (FSAP) and presently it is being used as an analytical tool in Global Financial Stability Reports (GFSRs). Stress testing measures the vulnerability of a financial institution or an entire system under different hypothetical scenarios and helps to chalk out proactive measures that can act as a future shock absorber.

1.2. Why Stress testing is required: Regulatory guidelines

Stress testing became a regulatory requirement only after the passing of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act or DFA) of 2010, motivated by the widely perceived success of the Federal Reserve's Supervisory Capital Assessment Program (SCAP) of 2009. Practical implementation of banks' stress testing (Dodd-Frank Act stress testing, or DFAST) began in early 2011 with the release of the Comprehensive Capital Analysis and Review (CCAR) stress scenarios by the Federal Reserve. As defined on the Fed's website, "CCAR is an annual exercise by the Federal Reserve to assess whether the largest bank holding companies (BHCs) operating in the United States have sufficient capital to continue operations throughout times of economic and financial stress and that they have robust, forward-looking capital-planning processes that account for their unique risks." The 2011 CCAR contained only one stress scenario with nine domestic variables and was more limited in scope than its successors. The 2012 CCAR expanded the number of domestic variables, added international variables and provided the series for both baseline and stressed scenarios. The standard format now includes baseline, adverse and severely adverse scenarios. DFAST is a complementary exercise to CCAR, conducted by the Fed to assess whether institutions have sufficient capital to absorb losses and support operations during adverse economic conditions.

To validate the capital calculation, the Basel Committee on Banking Supervision (2005) states that stress testing should be employed to verify that the minimum capital computed under Basel II is sufficient to protect against macroeconomic downturns. In the Basel II framework (BCBS, 2006), stress testing is part of Pillar I and Pillar II, whereby banks are asked to analyze possible future scenarios that may threaten their solvency.

Based on the regulatory guidelines discussed in section 1.2, stress testing evaluates an institution's performance along two lines: solvency and liquidity.

Table 1.2.1: Evolution of stress testing as a regulatory requirement (timeline of regulatory guidelines)

2009: SCAP
2010: Dodd-Frank reform
2011: CCAR (top 19 BHCs)
2012: CCAR ($50 billion & above)
2013: CCAR ($10 billion & above)


1.3. How stress testing helps to assess the financial health of an Institution

1.3.1. Solvency Test:

A solvency test assesses whether a financial institution will remain sufficiently solvent (the value of its assets is larger than its debt, ensuring positive equity capital) when put into a hypothetically challenging situation, by studying changes in its balance sheet variables. Solvency is measured by the capital adequacy ratio, the debt-to-equity ratio or the capital shortfall (i.e., the amount of capital needed to maintain a certain capital ratio) defined by regulatory requirements. A financial institution passes the solvency test if its capital ratio is above the "hurdle rate" (set based on the minimum regulatory requirement).
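As a tiny numeric illustration of the solvency check described above, the sketch below computes a capital ratio and the capital shortfall against a hurdle rate; the balance sheet figures and the 8% hurdle are hypothetical.

```python
def capital_shortfall(capital, risk_weighted_assets, hurdle_rate):
    """Capital ratio and the extra capital (if any) needed to reach the hurdle rate."""
    ratio = capital / risk_weighted_assets
    shortfall = max(0.0, hurdle_rate * risk_weighted_assets - capital)
    return ratio, shortfall

# Hypothetical stressed balance sheet: 60 of capital against 1,000 of
# risk-weighted assets, checked against an assumed 8% hurdle rate.
ratio, shortfall = capital_shortfall(60.0, 1000.0, 0.08)
print(f"capital ratio = {ratio:.1%}, shortfall = {shortfall:.1f}")
# capital ratio = 6.0%, shortfall = 20.0 -> the institution fails the solvency test
```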

1.3.2 Liquidity Test:

Liquidity tests examine the resilience of a financial institution or system under shocks. If, under an adverse economic scenario, a huge amount of deposits is withdrawn suddenly and the bank does not have enough cash inflows or liquid assets, it faces a liquidity crisis. The bank can withstand this crisis by selling its liquid assets or by using repos, but the value of the collateral may also fall during an adverse economic situation; in that case the liquidity crisis may actually turn into a solvency crisis. The hurdle rate for a liquidity stress test can be fixed by taking into consideration the net cash flow position of the bank or a stressed liquidity ratio.

1.4. Methods of Stress Testing

Stress tests can be divided into two categories: scenario analysis and sensitivity tests.

1.4.1. Scenario analysis:

The scenarios are based on either:

1. a portfolio-driven approach, or

2. an event-driven approach.

Table 1.4.1: Approaches towards scenario analysis
- Portfolio driven: Risk drivers of a given portfolio are identified, and then plausible scenarios are designed under which those factors are stressed.
- Event driven: This approach starts from plausible events and considers how these events might affect the relevant risk factors for a bank or a given portfolio.

Under each approach, scenarios can be developed either from historical events (past crises) or hypothetically (events that have not yet happened). Based on the ultimate objective they serve, stress tests can be of four types.

Table 1.4.1.1: Types of stress testing depending on the objective
- Internal risk management: Financial institutions use stress testing to measure and manage the risk of their investments; it serves as an input for business planning.
- Crisis management: Since the financial crisis, supervisory authorities have used stress testing to check whether key institutions need to be recapitalized.
- Macroprudential: Ensures system-wide monitoring and analyzes system-wide risk.
- Microprudential: Entails supervisory assessment of the financial health of individual institutions.


1.4.1.2. Macroeconomic Stress Testing:

A stress test can look into the impact of one source of risk or of multiple sources of risk. Risk factors can be combined or can be generated using a macroeconomic framework. Allen and Saunders (2004) provided a detailed study of the impact of cyclical effects on the major credit risk parameters (e.g., probability of default (PD), loss given default (LGD), exposure at default (EAD)) and found them to be highly exposed. For macro scenario stress tests, the empirical relationship between the key risk parameters (PD, LGD, etc.) and relevant macroeconomic variables (GDP, unemployment rate, etc.) is estimated. Macro scenario stress tests should cover at least one economic cycle; this implies that net interest income models should cover an interest rate cycle and non-interest income models should cover one business cycle. Macro stress tests are carried out taking non-performing loans (NPLs) as the dependent variable and various macroeconomic variables such as GDP growth, interest rates, inflation, real wages and oil prices as the main drivers of corporate credit risk (Virolainen, 2004). For the household sector these factors can be the unemployment rate, interest rates, etc. Suppose there are k macroeconomic risk drivers that are expected to have an impact on a bank's portfolio, and the vector of stressed macroeconomic drivers is X_stress = (X_1, ..., X_k). The model for measuring the PD of obligor j looks like

PD_j = 1 / (1 + exp(-β_0 − Σ_{i=1..n} β_i K_{j,i} − Σ_{l=1..k} γ_l X_l))

where the K_{j,i} are obligor-specific risk drivers. To compute the stressed PD we simply put X_stress into the above formula. These PDs are then used in the calculation of expected losses and regulatory capital under stress.
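To make the stressed-PD computation concrete, the following minimal Python sketch evaluates the logistic formula above for a single obligor under a baseline and a stressed macro vector; all coefficient and driver values are hypothetical and chosen only for illustration.

```python
import numpy as np

def stressed_pd(beta0, beta, K_j, gamma, x_stress):
    """PD of obligor j given obligor-specific drivers K_j and macro drivers x_stress.

    Implements PD = 1 / (1 + exp(-(beta0 + beta.K_j + gamma.x_stress))).
    """
    linear_term = beta0 + np.dot(beta, K_j) + np.dot(gamma, x_stress)
    return 1.0 / (1.0 + np.exp(-linear_term))

# Hypothetical coefficients and driver values (not from the paper).
beta0 = -4.0
beta = np.array([0.8, -0.5])      # obligor-specific drivers (e.g. leverage, coverage)
gamma = np.array([-0.6, 0.9])     # macro drivers (e.g. GDP growth, unemployment)
K_j = np.array([1.2, 0.4])

pd_base = stressed_pd(beta0, beta, K_j, gamma, x_stress=np.array([2.0, 5.0]))
pd_stress = stressed_pd(beta0, beta, K_j, gamma, x_stress=np.array([-3.0, 9.0]))
print(f"baseline PD: {pd_base:.4f}, stressed PD: {pd_stress:.4f}")
```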

Table 1.4.1.2: Approaches for translating macro scenarios into balance sheets

Bottom-up
- Description: Granular, borrower-level analysis.
- Advantages: The granular, risk-factor-driven approach leads to more precise results; it uses advanced internal models; it provides a detailed view of the risk analysis and risk management capacity of an institution.
- Disadvantages: The results are institution specific, so comparison across similar institutions can be difficult; implementing this approach is resource intensive.

Top-down
- Description: The impact is estimated using aggregated data.
- Advantages: Ensures uniformity in methodology and consistency of assumptions across all institutions; an effective tool for validating bottom-up approaches.
- Disadvantages: Applying the tests only to aggregate data can disguise concentrations of exposure to risk at the level of individual institutions; risk estimates may not be precise due to limited data coverage.


Both approaches are used by regulators as complements rather than substitutes, in order to extract the advantages of each and minimize their challenges. Liquidity tests are mostly conducted as a bottom-up exercise because they require granular, individual-level analysis.

1.4.1.3. PPNR as a Stress Testing Tool:

During the initial years of the evolution of stress testing as a regulatory measure, the major thrust was on ensuring that banks understand the potential impact of credit losses on capital. Gradually, as banks improved their credit risk modeling competency, regulators found PPNR estimation more meaningful because it captures the variability in projected losses more fully. Traditional stress testing methods focused mainly on loss calculation, but for a complete assessment of capital adequacy under stressed scenarios both the balance sheet and the income statement must be taken into consideration. SCAP changed the traditional method of stress testing and included profitability (pre-provision net revenue) to make the exercise dynamic. The historical relationship between the macroeconomic variables and the revenue components is estimated and projected into the future for BHCs.

Pre-provision net revenue (PPNR) = net interest income + non-interest income − non-interest expense
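As a rough illustration of how the historical relationship between one revenue component and macroeconomic variables can be estimated and then projected under a stressed path, here is a minimal Python sketch using ordinary least squares; the data, column names and scenario values are entirely hypothetical.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical quarterly history: one revenue component and two macro drivers.
hist = pd.DataFrame({
    "net_interest_income": [510, 495, 520, 540, 530, 555, 560, 575],
    "gdp_growth":          [2.1, 1.8, 2.4, 2.6, 2.2, 2.8, 2.9, 3.0],
    "ten_year_rate":       [2.9, 2.7, 3.0, 3.1, 3.0, 3.2, 3.3, 3.4],
})

# Estimate the historical relationship between the component and the macro drivers.
X = sm.add_constant(hist[["gdp_growth", "ten_year_rate"]])
model = sm.OLS(hist["net_interest_income"], X).fit()

# Hypothetical severely adverse macro path supplied by the scenario.
stress_path = pd.DataFrame({"gdp_growth": [-1.5, -3.0, -2.0],
                            "ten_year_rate": [1.0, 0.8, 0.9]})
projected_nii = model.predict(sm.add_constant(stress_path, has_constant="add"))

# PPNR = net interest income + non-interest income - non-interest expense;
# the other two components would be projected in the same way.
print(projected_nii)
```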

Table 1.4.1.3: An overview of PPNR modeling

Data requirements
- 10 years of monthly balance and fee data
- Portfolio-level balance histories
- Records of management actions (e.g., marketing or pricing strategies)

Advantages
- Helps to understand the variability in projected losses, thus supporting more transparent capital management and allocation
- Enhances the firm's ability to foresee and identify extreme but plausible risks at various levels
- Serves as an important tool for liquidity management by looking at enterprise-wide stress test results

Challenges
- Model granularity: if the institution models each balance sheet or P&L item separately, the exercise becomes cumbersome and the models reflect little macro sensitivity.
- Data availability: most BHCs do not have the detailed revenue-related time series of at least 8-10 years required for PPNR modeling, because of changes in internal strategy or business structure or the lack of a robust data management system. Back-testing of the models is hard with limited data, so it is difficult to check the consistency of the results.

1.4.2. Sensitivity Analysis:

One of the most important requirements for a model is that the modeler provides an evaluation of the confidence in the model. Hence, along with the quantification of uncertainty in any model, an evaluation of how much each input contributes to the uncertainty of the output is equally important, and that is where sensitivity analysis comes into play. Generally, sensitivity analyses are conducted by: (a) defining the model and its independent and dependent variables, (b) assigning probability density functions to each input parameter, (c) generating an input matrix through an appropriate random sampling method, (d) calculating an output vector, and (e) assessing the influence of each input on the output (a minimal sketch of this workflow follows the list below). Sensitivity analysis helps reduce model uncertainty and therefore improves model robustness with regard to:

Magnitude of sensitivity

Output uncertainty reduction

Model simplification (by removing unnecessary parameters)

Enhancing model transparency

Evaluation of model confidence
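The sketch below walks through steps (a) to (e) on a toy three-input model, using Monte Carlo sampling and simple correlations to rank input influence; the model and the input distributions are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# (a) A toy model: the output depends non-linearly on three inputs.
def model(x1, x2, x3):
    return 2.0 * x1 + 0.5 * x2 ** 2 + 0.1 * x3

# (b)-(c) Assign a distribution to each input and draw a sample matrix.
n = 5000
x1 = rng.normal(0.0, 1.0, n)
x2 = rng.uniform(-2.0, 2.0, n)
x3 = rng.normal(0.0, 3.0, n)

# (d) Compute the output vector.
y = model(x1, x2, x3)

# (e) Assess influence, here via the correlation of each input with the output.
for name, x in [("x1", x1), ("x2", x2), ("x3", x3)]:
    corr = np.corrcoef(x, y)[0, 1]
    print(f"{name}: correlation with output = {corr:+.3f}")
```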

Excerpt from Regulatory guidelines:

According to SR Letter 11-7 guidelines, “Banks should employ sensitivity analysis in model development and validation to check the impact of small changes in inputs and parameter values on model outputs to make sure they fall within an expected range. Unexpectedly large changes in outputs in response to small changes in inputs can indicate an unstable model. Varying several inputs simultaneously as part of sensitivity analysis can provide evidence of unexpected interactions, particularly if the interactions are complex and not intuitively clear.” To be specific, sensitivity analysis is a technique used to determine how change in the values of an independent variable can impact a particular dependent variable under a given set of assumptions. It is a way to predict the outcome of a decision if the situation turns out to be different from the one used for key prediction(s).

In sensitivity tests, risk factors are moved instantaneously by a unit amount and the source of the shock is not identified. Moreover, the time horizon for sensitivity tests is generally shorter in comparison with scenarios.

Fig. 1.4.2: Sensitivity analysis, a diagrammatic representation. The loan portfolio is disaggregated by asset type (portfolio disaggregation), PD/LGD/EAD shocks are calibrated and applied to obtain a stressed portfolio (shock calibration), and the results are interpreted in terms of the profit effect and the solvency effect (result interpretation).


1.4.2.1. Various methods for Sensitivity Analysis:

Table 1.4.2.1: Different types of sensitivity analysis and their advantages and disadvantages

One factor at a time (OFAT) / partial sensitivity analysis
- Procedure: Repeatedly vary one parameter at a time while holding the other inputs fixed, and monitor the change in the output.
- Advantage: The simplest and most common approach; best suited to linear models.
- Disadvantage: Examines only small perturbations and does not explore the full input space. It does not work well with non-linear models, since it ignores interactions with other inputs.

Multiple factors at a time
- Procedure: Examine the relationship between two or more simultaneously changing inputs and the model output. The variation could be (1) a percentage change, (2) a standard deviation, or (3) the best or worst "possible" (extreme) values of the inputs.
- Advantage: The outputs can be presented as scatterplots, tornado diagrams, etc., which help assess the rank order of the key inputs or key drivers. It considers the aggregate impact of multiple inputs and is therefore more accurate.
- Disadvantage: Typically limited to two inputs, since it is difficult to assess the impact for three or more inputs.

Regression analysis
- Procedure: Fit a relationship between the inputs and an output. The effect of the inputs on the output can be studied using the regression coefficients, their standard errors and their levels of significance.
- Advantage: Allows evaluation of the sensitivity of individual model inputs while taking into account the simultaneous impact of the other model inputs on the result.
- Disadvantage: Possible lack of robustness if key assumptions of regression are not met.

Difference in log-odds ratio (DLOR)
- Procedure: The odds ratio of an event is the ratio of the probability that the event occurs to the probability that it does not. DLOR examines the difference between the output when an input changes and when it is at its baseline value.
- Advantage: Can be used when the output is a probability (a short numeric sketch follows the table).
- Disadvantage: Cannot be used for non-linear models.

Response surface method (RSM)
- Procedure: Represents the relation between a response variable (output) and one or more explanatory inputs using a sequence of designed experiments to obtain an optimal response; it can be thought of as a "model of a model" (Frey et al., 2005). To develop a response surface, least squares regression is used to fit a first- or second-order equation to the original data. The sensitivity of the model output to the inputs can then be determined by applying regression analysis or another sensitivity analysis to the response surface (Frey et al., 2005).
- Advantage: Reduces the model to a form in which computation is much faster; can also be used when the output is a probability.
- Disadvantage: Most RSM studies are based on fewer inputs than the original model, so the effect of all original inputs on the sensitivities cannot be evaluated.

Fourier amplitude sensitivity test (FAST)
- Procedure: Estimates the expected value and variance of the output and the contribution of individual inputs to the output variance. For example, a relatively large conditional variance of the expected model output Y given a parameter x_i, i.e. V(E(Y | x_i)), indicates that a relatively large proportion of the output variance is contributed by x_i. The ratio of each input's contribution to the output variance to the total output variance gives the first-order sensitivity index.
- Advantage: Better than OFAT, as it can apportion the output variance to the variance in the inputs. First-order sensitivity indices are used to rank the inputs (Saltelli et al., 2000).
- Disadvantage: First-order indices cannot capture interactions among the inputs. Saltelli et al. (1999) developed the extended FAST method, which addresses this limitation but is a complex procedure to carry out.

Mutual information index (MII)
- Procedure: A conditional probability analysis that measures the information about the output provided by a particular input; the magnitude of the measure can be compared across inputs to determine which input provides the most useful information about the output. It involves three general steps: (1) generating an overall confidence measure of the output value, estimated from the CDF of the output; (2) obtaining a conditional confidence measure for a given value of an input by holding that input constant and varying all other inputs; and (3) calculating sensitivity indices (Critchfield and Willard, 1986):

  I_{XY} = Σ_x Σ_y P_X P_{Y|X} log_n (P_{Y|X} / P_Y)

  where P_{Y|X} is the conditional confidence, P_Y the overall confidence, P_X the probability distribution of the input, and n = 2 for a binary output. I_{XY} is always positive; if it is large, X provides a great deal of information about Y, and if X and Y are statistically independent it is zero.
- Advantage: MII includes the joint effects of all the inputs when evaluating the sensitivity of an input. Whereas the correlation coefficient of two random variables only examines the degree of their linear relatedness, MII is a more informative measure.
- Disadvantage: Calculation of the MII by Monte Carlo techniques suffers from computational complexity.

Scatter plot
- Procedure: Used for visual assessment of the influence of individual inputs on an output.
- Advantage: It is the first step in sensitivity analysis for identifying the nature of association among variables.
- Disadvantage: Tedious to generate when there are a large number of inputs and outputs.
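As a concrete, hypothetical illustration of the DLOR measure from the table above, the snippet below compares the log-odds of a baseline PD with the log-odds obtained after shifting one input; the probability values are made up for illustration.

```python
import math

def log_odds(p):
    """Log-odds of a probability p."""
    return math.log(p / (1.0 - p))

# Hypothetical PD from a model at the baseline input value,
# and after shifting one input away from its baseline.
pd_baseline = 0.021
pd_shifted = 0.035

dlor = log_odds(pd_shifted) - log_odds(pd_baseline)
print(f"Difference in log-odds ratio: {dlor:.3f}")
```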

1.4.2.2 Different methods for the input variation considering the type of model and the data available:

1. Choosing a percentage range: It is possible to select the variation range as a percentage and then change the input consequently. For example, the input variable may be varied by -20% and +20% and then the impact on the model performance is observed.

2. Choosing a standard deviation factor: The main limitation with the previous method is that it only addresses sensitivity relative to a chosen point and not for the entire parameter distribution. Here each parameter is individually increased by a factor of its standard deviation.

3. Choosing the extremes: An alternative method is to calculate a sensitivity index (SI) by checking the change in the output, under ceteris paribus conditions, between the minimum and maximum values of each input. The process has three steps (a sketch follows below):

First, the outputs corresponding to the minimum and the maximum of the selected input are computed.

Second, the resulting Out_min and Out_max values from the previous step are identified.

Third, the percentage change is calculated:

%change = (Out_max − Out_min) / Out_max
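A minimal sketch of this extremes-based sensitivity index is given below; the toy model, baseline values and input ranges are hypothetical and only illustrate the mechanics of the calculation.

```python
def sensitivity_index(model, baseline, bounds, name):
    """%change in output when one input moves over its [min, max] range,
    holding all other inputs at their baseline values (ceteris paribus)."""
    lo, hi = bounds[name]
    out_min = model(**dict(baseline, **{name: lo}))
    out_max = model(**dict(baseline, **{name: hi}))
    return (out_max - out_min) / out_max

# Toy model and hypothetical baselines and ranges for each input.
model = lambda revenue, age, employees: (0.05 - 0.002 * revenue
                                         + 0.001 * age - 0.0005 * employees)
baseline = {"revenue": 10.0, "age": 12.0, "employees": 8.0}
bounds = {"revenue": (2.0, 25.0), "age": (1.0, 40.0), "employees": (1.0, 50.0)}

for name in baseline:
    print(name, f"SI = {sensitivity_index(model, baseline, bounds, name):+.3f}")
```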

2. Case Study:

Let us look at a probability of default (PD) model used on a commercial loan portfolio. The model is a logistic regression-based model using separate dummy variables for companies belonging to different industries. An initial model is defined, based on which different scenarios are considered for the purpose of quantifying their impact on the probability of default. We studied operating revenue for the year 2010, company age, number of employees, dummies for different industries, and their impact on the default rate. The dependent variable (default flag) is restricted to the values zero or one, where one indicates a bad loan and zero indicates a good loan. After performing the regression analysis, we retain only those independent variables that are significantly related to the dependent variable. Once the model is developed, the signs of the parameter estimates associated with each variable are analyzed; each sign indicates the direction (positive/negative) of the relationship between an independent variable and the dependent variable.
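The following sketch shows how such a logistic regression PD model could be fitted; the file name, column names and significance cut-off are assumptions made for illustration, not details taken from the paper.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical loan-level data; the file and column names are illustrative only.
loans = pd.read_csv("commercial_loans_2010.csv")

features = ["operating_revenue_2010", "company_age", "num_employees_woe",
            "dummy_consumer_industry", "dummy_hotel_industry"]
X = sm.add_constant(loans[features])
y = loans["default_flag"]          # 1 = bad loan, 0 = good loan

logit_model = sm.Logit(y, X).fit()
print(logit_model.summary())       # inspect signs and significance of the estimates

# Keep only variables significantly related to the default flag (e.g. p < 0.05).
pvals = logit_model.pvalues.drop("const")
significant = pvals[pvals < 0.05].index.tolist()
print("Retained variables:", significant)
```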


Table 2.1: Final variables and their relation with the default rate

Variable | Sign
Operating revenue (2010) | Negative
Number of employees (WOE) | Negative
Company age | Positive
Dummy for companies belonging to consumer industries | Positive
Dummy for companies belonging to hotel industries | Positive

A convincing negative relation between the default rate and operating revenue was found. The relation with the number of employees is also significant. Positive relations were found with company age and with the dummies for the consumer and hotel industries. It can be concluded that if an economic downturn affects these companies and operating revenue falls, the default rate will be significantly impacted.

Fig 2.1: Portfolio distribution by industry (bar chart of the sample distribution, in %, across industries)


Fig 2.2: Industry-wise default rate (bar chart of the default rate, in %, by industry)

After model development, sensitivity analysis was conducted to see how the model performs, based on the Kolmogorov-Smirnov statistic (KS statistic), a measure of the model's discriminatory power, and Somers' D, which checks the accuracy of the prediction (a computational sketch follows the definitions below).

• KS is the measure of maximum separation between good and bad distribution. Higher value of the KS statistic reflects the higher quality of the scorecard.

• Somers' D = (number of concordant pairs − number of discordant pairs) / total number of pairs.

• The % of pairs for which the predicted probability of the event given the occurrence of the event is greater than predicted probability of the event given the non-occurrence of the event is called percentage concordant and vice-versa. Theoretically Somers’ D ranges from -1 to +1. For an accurate model, higher value of Somers’ D is preferred.
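One way to compute both measures in practice is sketched below; the predicted PDs and default flags are simulated, and Somers' D is obtained through the standard identity D = 2·AUC − 1 for a binary outcome, which is an implementation choice rather than something specified in the paper.

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.metrics import roc_auc_score

def discrimination_metrics(y_true, pd_hat):
    """KS statistic and Somers' D for a PD model.

    KS: maximum separation between the score distributions of goods and bads.
    Somers' D: computed via the identity D = 2*AUC - 1 for a binary outcome.
    """
    bads = pd_hat[y_true == 1]
    goods = pd_hat[y_true == 0]
    ks = ks_2samp(bads, goods).statistic * 100      # expressed in percentage points
    somers_d = 2.0 * roc_auc_score(y_true, pd_hat) - 1.0
    return ks, somers_d

# Hypothetical predicted PDs and observed default flags.
rng = np.random.default_rng(0)
y_true = rng.binomial(1, 0.021, size=5000)
pd_hat = np.clip(0.02 + 0.03 * y_true + rng.normal(0, 0.01, size=5000), 0.001, 0.999)

ks, d = discrimination_metrics(y_true, pd_hat)
print(f"KS = {ks:.1f}, Somers' D = {d:.3f}")
```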

For our original model, KS was reported as 32.3 and Somers' D as 0.387. Now let us check how the model performs in terms of KS and Somers' D after adopting a few approaches to sensitivity analysis.

2.1. Sensitivity Analysis:

2.1.1. One factor at a time:

The mean, standard deviation, maximum and minimum values of all the variables were noted. To check how sensitive the default rate is to each parameter, we replaced each variable in turn with its mean value and checked the movement of KS and Somers' D (a sketch of this procedure is given below). Table 2.1.1.1 displays the results.
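A possible implementation of this replace-with-the-mean OFAT exercise is sketched below; it assumes the hypothetical `loans` DataFrame and the `discrimination_metrics` helper from the earlier sketches and is illustrative only.

```python
import numpy as np
import statsmodels.api as sm

def ofat_replace_with_mean(loans, features, target="default_flag"):
    """Fit the PD model once, then replace one input at a time with its mean,
    re-score the portfolio, and recompute KS and Somers' D."""
    X = sm.add_constant(loans[features])
    fitted = sm.Logit(loans[target], X).fit(disp=0)

    results = {}
    for col in features:
        perturbed = loans[features].copy()
        perturbed[col] = perturbed[col].mean()   # hold this input at its mean
        pd_hat = fitted.predict(sm.add_constant(perturbed, has_constant="add"))
        results[col] = discrimination_metrics(loans[target].to_numpy(),
                                              np.asarray(pd_hat))
    return results   # {variable: (KS, Somers' D)}
```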

Table 2.1.1.1: Impact of changing one factor at a time on KS and Somers' D

Parameter | KS | Somers' D
Company age replaced with mean | 32 | 0.384
Operating revenue replaced with mean | 17.7 | 0.243
WOE value of number of employees replaced with minimum WOE | 30.2 | 0.383

The model was then rebuilt considering only the consumer industry, OFAT was applied, and the variation in the model's discriminatory power was noted. Table 2.1.1.2 shows the result.

Table 2.1.1.2: Impact of changing one factor at a time on KS and Somers' D, considering the consumer industry

Parameter | KS | Somers' D
Original model taking only the consumer industry | 35.9 | 0.389
Company age replaced with mean | 34.7 | 0.388
Operating revenue replaced with mean | 13.9 | 0.202
WOE value of number of employees replaced with minimum WOE | 33.8 | 0.385

Again, the model was redeveloped considering only the hotel services industry and the same procedure was applied. Table 2.1.1.3 shows the result.

Table 2.1.1.3: Impact of changing one factor at a time on KS and Somers' D, considering the hotel industry

Parameter | KS | Somers' D
Original model taking only the hotel industry | 34 | 0.382
Company age replaced with mean | 33.3 | 0.381
Operating revenue replaced with mean | 15.5 | 0.225
WOE value of number of employees replaced with minimum WOE | 30 | 0.381


2.1.2. Multiple factors at a time:

Two inputs were changed simultaneously and the model performance was checked.

Table 2.1.1.4: Impact of changing two factors simultaneously on KS and Somers' D

Parameter | KS | Somers' D
Company age replaced with mean + operating revenue replaced with mean | 18.2 | 0.223
Company age replaced with mean + WOE value of number of employees replaced with maximum WOE | 30.2 | 0.373
Operating revenue replaced with mean + WOE value of number of employees replaced with maximum WOE | 11.3 | 0.171

It can be concluded from the above sensitivity analysis methods that the variable operating revenue has a strong impact on the model's discriminatory power and the accuracy of its predictions. It is therefore the key parameter for checking the institution's financial health.

2.2. Stress Testing:

Since we do not have any macroeconomic variable in our dataset and data is available for only one year (2010), it was possible neither to carry out macroeconomic stress testing nor to examine a performance period. So we simply increased the default rate of the population to create hypothetical stressed scenarios and checked the model performance in terms of KS and Somers' D (a sketch of one possible mechanism for raising the default rate is given below).
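One way such stressed samples could be constructed is by resampling defaulted records until a target default rate is reached; the paper does not specify its exact mechanism, so the sketch below (which reuses the hypothetical `loans` DataFrame from the earlier sketches) is only one plausible reading.

```python
import pandas as pd

def stress_default_rate(loans, target_rate, seed=0):
    """Raise the portfolio default rate to target_rate by resampling defaulted
    records with replacement (one possible mechanism; the paper does not
    specify how the default rate was increased)."""
    bads = loans[loans["default_flag"] == 1]
    goods = loans[loans["default_flag"] == 0]
    n_bads_needed = int(round(target_rate * len(goods) / (1.0 - target_rate)))
    extra_bads = bads.sample(n=n_bads_needed, replace=True, random_state=seed)
    return pd.concat([goods, extra_bads], ignore_index=True)

# Stress scenarios corresponding to Table 2.2 (default rates of 2.9% to 7.7%).
for rate in [0.029, 0.035, 0.044, 0.053, 0.077]:
    stressed = stress_default_rate(loans, rate)
    # ...re-estimate or re-score the PD model on `stressed`,
    #    then recompute KS and Somers' D as in the earlier sketch.
```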

The default rate of the original data was 2.1% and the average estimated probability of default from the model was 0.021. KS for our original model was reported as 32.3 and Somers' D as 0.387; this represents the base scenario. For stress testing, the default rate was gradually increased and the corresponding stressed PDs were calculated. Table 2.2 and figure 2.2 show the behaviour of KS and Somers' D of the respective models resulting from the following stress scenarios.


Table 2.2.: Model’s performance against various stress scenarios

Stress scenario Stressed PD KS Somers’ D

Default rate=2.9% 0.029 32.2 0.395

Default rate=3.5% 0.035 31.5 0.376

Default rate=4.4% 0.044 31.3 0.376

Default rate=5.3% 0.053 31.4 0.373

Default rate=7.7% 0.1 31.7 0.368

Fig 2.2: Somers' D against various stress scenarios (bar chart of the Somers' D values reported in Table 2.2)

2.2.1. Result Interpretation & Conclusion:

From Table 2.2 and figure 2.2, it can be concluded that the model's accuracy in predicting the probability of default falls only slightly as the default rate is increased. Under stressed scenarios the model performs moderately well, implying that it is robust and predicts the probability of default with sufficient accuracy even during stress periods. It should also be noted that arbitrarily varying a model variable reveals how sensitive the portfolio is to that parameter, but it says nothing about the likelihood of such a stress or about how the portfolio might perform in an actual economic downturn.

From this case study it can be concluded that no one method is clearly best for risk assessment models. In general, combining two or more methods may be needed to increase confidence in the model and for ranking the key inputs.


Due to data limitations, it was not possible to explore all the techniques discussed in the first part of the paper. If some macroeconomic variables could have been incorporated into the model, it would have been possible to check how the model's stressed PD varies against macroeconomic stress scenarios and to calculate the associated risk. This paper recommends a future study exploring all the techniques empirically.

References:

1. Dietske Simons and Ferdinand Rolwes (February 2008), Macroeconomic Default Modelling and Stress Testing

2. D. M. Hamby, A Review of Techniques for Parameter Sensitivity Analysis of Environmental Models

3. Christopher Frey and Sumeet Patil (2005), Identification and Review of Sensitivity Analysis Methods

4. IMF (2012), Macrofinancial Stress Testing: Principles and Practices

5. IMF Working Paper, Designing Effective Macroprudential Stress Tests: Progress So Far and the Way Forward

6. IMF Working Paper, A Framework for Macroprudential Bank Solvency Stress Testing: Application to S-25 and Other G-20 Country FSAPs

7. Antonella Foglia (Bank of Italy), Stress Testing Credit Risk: A Survey of Authorities' Approaches

8. Board of Governors of the Federal Reserve System (March 2015), Dodd-Frank Act Stress Test 2015: Supervisory Stress Test Methodology and Results

9. BIS (2005), An Explanatory Note on the Basel II IRB Risk Weight Functions

10. CCAR submissions - GENPACT internal training material

11. DFAST - GENPACT internal training material

12. PD Modelling - GENPACT internal training material

13. PPNR Modelling - GENPACT internal training material