Value at Risk Model in Indian Stock Market
8/7/2019 Value at Risk Model in Indian Stock Market
1/70
M P Birla Institute of Management - 1 -
Value at Risk model in Indian stock Market
Submitted in partial fulfillment of the requirements of
the M.B.A Degree Course of Bangalore University
By
MEHTA PARTH
(REGD.NO:05XQCM 6046)
Under the Guidance
Of
DR. T.V.NARASIMHA RAO
Faculty
MPBIM
M.P.BIRLA INSTITUTE OF MANAGEMENT
Associate Bharatiya Vidya Bhavan
43, Race Course Road, Bangalore-560001
2005-2007
DECLARATION
I hereby declare that the dissertation entitled "Value at Risk Model in Indian
Stock Market" is the result of work undertaken by me, under the guidance of
Dr. T.V.N. Rao, Associate Professor, M.P. Birla Institute of Management,
Bangalore.
I also declare that this dissertation has not been submitted to any other
University/Institution for the award of any Degree or Diploma.
Place: Bangalore
Date : 13th May 2007 Mehta Parth Bharatbhai
PRINCIPAL'S CERTIFICATE
This is to certify that the Research Report entitled "Value at Risk Model in
Indian Stock Market" has been done by Mehta Parth Bharatbhai, bearing
Registration No. 05XQCM6046, under the guidance of Dr. T.V.N. Rao.
Place: Bangalore (Dr.N.S.Malavalli)
Date: 13th May 2007 Principal
MPBIM, Bangalore
GUIDE'S CERTIFICATE
This is to certify that the Research Report entitled "Value at Risk Model in
Indian Stock Market" done by Mehta Parth Bharatbhai, bearing Registration
No. 05XQCM6046, is a bona fide work carried out under my guidance during
the academic year 2006-07, in partial fulfillment of the requirements for the
award of the MBA degree by Bangalore University. To the best of my knowledge this
report has not formed the basis for the award of any other degree.
Place: Bangalore Dr.T.V.N.Rao
Date : 13th May 2007
ACKNOWLEDGEMENT
It is my special privilege to extend words of thanks to all those who have
helped and encouraged me in completing this project successfully.
I thank Dr. T.V.N. Rao for the valuable inputs required to complete this
project report. I owe him my sincere gratitude for spending his valuable time
with me and for his guidance.
It would be improper not to acknowledge the help and encouragement of my
friends and well-wishers, who always helped me directly or indirectly.
My gratitude would not be complete without thanking almighty God and my
loving parents, who have been supportive throughout the project.
Mehta Parth Bharatbhai
TABLE OF CONTENTS
CHAPTERS PARTICULARS PAGE NO.
ABSTRACT 02
1 INTRODUCTION AND THEORETICAL BACKGROUND 04
2 REVIEW OF LITERATURE 28
3 RESEARCH METHODOLOGY 32
4 PROBLEM STATEMENT 33
5 OBJECTIVE OF THE STUDY 34
6 SAMPLE SIZE AND DATA SOURCES 35
7 TEST OF STATIONARITY 36
8 AUTO-CORRELATION 38
9 LIMITATIONS OF THE RESEARCH 43
10 DATA ANALYSIS & INTERPRETATION 44
11 CONCLUSION 53
12 ANNEXURE 55
13 BIBLIOGRAPHY 64
immediate past, and autoregressive describes a feedback mechanism that
incorporates past observations into the present. GARCH then is a mechanism
that includes past variances in the explanation of future variances. More
specifically, GARCH is a time-series technique that allows users to model the
serial dependence of volatility.
The data used here are ten years of daily S&P CNX Nifty index values. First,
the stationarity of the daily returns is tested with the Augmented
Dickey-Fuller test. Then parameters for the various models are estimated.
After forecasting the monthly variance, the results of these competing models
are evaluated on the basis of two categories of evaluation measures: symmetric
and asymmetric error statistics. Based on out-of-sample forecasts and a
majority of the evaluation measures, we find that the GARCH(1,6) model leads
to better volatility forecasts in the Indian stock market. The same model also
performed better on the basis of asymmetric error statistics, while the other
models, GARCH(1,1), GARCH(1,2) and GARCH(3,1), are not able to forecast the
volatility of the Nifty index.
CHAPTER 1
INTRODUCTION AND
THEORETICAL BACKGROUND
Introduction
Modeling and forecasting volatility of financial time series has been an important
research topic for the last several years. There are two main reasons for the
strong interest in volatility estimates. First, since the prices of derivative
products depend on the volatility of the underlying instrument, any pricing of
these products requires a volatility forecast. The second reason is related to
the concept of volatility as a measure of market risk. Since the modern banking
industry requires efficient management of all risks in today's global financial
architecture, heavy emphasis must be placed on financial market risks. As a
consequence many regulatory requirements (e.g. those initiated by the Bank for
International Settlements) are by now standardized and have introduced many
novel concepts and tools into the management of market, credit and operational
risk. In the case of market risk these developments have led to a uniformly
accepted and applied risk measure called Value-at-Risk (VaR). The VaR of a
portfolio position is defined as the maximum potential loss for this position for a
given holding period and a given confidence level. Alternative specifications of
financial products, increasing availability of financial data and rapid
advances in computer technology have led to the introduction and formulation
of various VaR models that can currently be applied to measure the market risk
of a portfolio.
The VaR concept can be viewed as a generalization of the risk sensitivities
related to different risk factors. As an example let us quickly look at the market
risk of a simple European call option. If we ignore higher order approximations
the option's delta is the sensitivity of the call price with respect to the risk
resulting from a change in the price of the underlying. Hence the delta linearly
measures market risk. This measure, however, is incomplete as long as we do
not know what the volatility of the risk factor is. If we multiply the sensitivity of the
position with the volatility of the risk factor, we end up with the VaR, therefore a
comprehensive measure of market risk. This simple description points out that
the calculation of the VaR is directly related to forecasting volatility of a position.
Only if we have full knowledge of the conditional density is it unnecessary to
express percentiles of the distribution as multiples of the standard deviation;
in that case we can calculate the value at risk directly. VaR models that are based on
standard distributions (e.g. normal distribution) first estimate the standard
deviation (or covariance matrix) in order to calculate the VaR for a given
confidence level. For that reason good volatility forecasts are an integral part of
sound VaR models.
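The sensitivity-times-volatility logic above can be sketched as a short example. All numeric inputs (delta, spot price, volatility) are illustrative assumptions, not values from the text:

```python
def delta_normal_var(delta: float, spot: float, daily_vol: float,
                     z: float = 2.326) -> float:
    """One-day delta-normal VaR of an option position.

    delta * spot is the first-order (linear) exposure to the underlying;
    multiplying by the daily volatility and the standard-normal quantile z
    gives the loss threshold at the chosen confidence level.
    """
    return delta * spot * daily_vol * z

# Hypothetical call position: delta 0.6 on an underlying priced at 100,
# 2% daily volatility, 99% confidence (z ~ 2.326).
var_99 = delta_normal_var(0.6, 100.0, 0.02)
```

This is only the linear (first-order) measure; as the text notes, it ignores higher-order effects such as gamma.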
One of the most widely used volatility models is the GARCH model (Bollerslev,
1986) for which the conditional variance is governed by a linear autoregressive
process of past squared returns and variances. The standard GARCH model
based on a normal distribution captures several stylized facts of asset return
series, like heteroskedasticity (time-dependent conditional variance), volatility
clustering and excess kurtosis. Recent empirical research, however, has found
that there are additional empirical regularities in return data such as negative and
autocorrelated skewness (asymmetry), fat tails and time dependent kurtosis that
can not be described by the classical GARCH model. For that reason several
alternative specifications have been formulated in the literature.
We take into account the latest developments in conditional volatility research
and propose a generalized model that extends the existing literature in two
directions: the first one is to allow for non-linear dependencies in the conditional
mean and variance, and the second one concerns a non-standard specification
of the conditional density. To estimate nonlinear conditional second moments we
use a neural network-based approach (i.e., so-called recurrent mixture density
networks) for which the conditional mean and variance are modelled by
multi-layer perceptrons (see, e.g., Schittenkopf et al. (2000)).
With regard to the specification of the conditional distributions, we compare three
different density specifications: 1) a standard GARCH model and its non-linear
generalization with a conditional normal distribution (heteroskedastic, but neither
skewed nor leptokurtic); 2) a non-linear recurrent GARCH model with a Student's
t-distribution (heteroskedastic, not skewed but leptokurtic); and 3) linear and non-
linear recurrent mixture density models, for which the conditional distributions are
approximated by a mixture of Gaussians (two components) (heteroskedastic,
skewed and leptokurtic in a time-dependent manner).
These model specifications make clear that our point of interest in this study is
twofold. On the one hand we are interested in forecasting volatilities in order to
accurately estimate the value at risk of a portfolio. On the other hand we are
concerned with the forecast of conditional distributions that allows the calculation
of VaR directly. Based on these two objectives we empirically evaluate the
forecasting performance of alternative volatility models and apply statistical tests
to discriminate between alternative VaR models. For the latter we apply the
Basle traffic light test, the proportion of failure test and interval tests.
All these tests evaluate the accuracy of a VaR model on the basis of statistical
procedures. Since it is very likely that the statistical criteria do not single out one
model as the best, we alternatively calculate the costs of capital requirements as
induced by a specific VaR model. The rationale behind this approach is the
following. Assume that out of several competing models there are two that
perform equally well with respect to forecasting the value at risk of a portfolio
position, i.e. these two models have similar statistical characteristics.
The two models, however, can lead to very different costs, as far as the capital
requirements are concerned. From a bank's point of view it is not only necessary
to have a risk management model that correctly predicts the market risk, but one
that additionally uses the least capital possible. Since any capital requirement
incurs opportunity costs for the bank (i.e. capital that is in an unproductive,
regulatory use), it has an interest to cut this requirement down as much as
possible. Hence, VaR models should not only be judged on the basis of their
forecasting power, but also on the basis of their capital costs. This discussion
motivates the structure of our empirical analysis. It is based on return series of
stock indices from three different financial markets. We use return series of the
Dow Jones Industrial Average (USA), the FTSE 100 (Great Britain) and the
NIKKEI 225 index (Japan) over a period of more than 13 years in order to
evaluate in detail the out-of-sample predictive performance of our models. Our
empirical analysis has the following structure. We predict conditional distributions
and calculate the VaR for each of our models for three different homogeneous
portfolios based on the same stock indices. To evaluate the quality and accuracy
of the VaR models we apply a number of statistical tests specifically designed to
interval forecasts. Among those are regulatory backtesting required as a part of
the capital-adequacy framework (the Basle Committee's traffic light); exceptions
testing which examines the frequency with which losses greater than the VaR
estimate are observed together with independence of these events; statistical
test on the accuracy of point estimation of the VaR significance level. The
advantage of these tests is given by the fact that the actual loss of any portfolio
can be measured exactly and hence the VaR forecasts can be evaluated on the
basis of actual observations. As pointed out above, our central focus is also
related to the analysis of the efficiency of VaR measures, as measured by the
costs of capital associated with VaR based regulatory capital requirements
(calculation of the lost interest yield connected with the dynamically computed
model-based capital reserves).
Theoretical Background
What is VaR?
In recent years value at risk (VaR) has become a very popular measure of
market risk. It is widely used by financial institutions, fund managers, and
non-financial corporations to control the market risk in a portfolio of financial
instruments. As discussed by Jorion (1997), it has been adopted by central bank
regulators as the major determinant of the capital banks are required to keep to
cover potential losses arising from the market risks they are bearing.
The VaR of a portfolio is a function of two parameters, a time period and a
confidence level. It equals the dollar loss on the portfolio that will not be
exceeded by the end of the time period with the specified confidence level. If X%
is the confidence level and N days is the time period, the calculation of VaR is
based on the probability distribution of changes in the portfolio value over N
days. Specifically, VaR is set equal to the loss on the portfolio at the (100-X)
percentile point of the distribution. Bank regulators have chosen N equal to 10
days and X equal to 99%. They set the capital required for market risk equal to
three times the value of VaR calculated using these parameters. In practice the
VaR for N days is almost invariably assumed to be √N times the VaR for one day.
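A minimal sketch of the square-root-of-time scaling and the three-times-VaR capital rule described above; the one-day VaR figure of 50,000 is an assumed input for illustration:

```python
from math import sqrt

def var_n_days(var_1day: float, n: int) -> float:
    """Scale a one-day VaR to an N-day horizon by the square-root-of-time rule."""
    return var_1day * sqrt(n)

# Regulatory parameters from the text: N = 10 days at 99% confidence.
var_10 = var_n_days(50_000.0, 10)   # assumed one-day VaR of 50,000
capital = 3.0 * var_10              # capital = three times the 10-day VaR
```

The square-root rule is exact only for i.i.d. returns with zero mean; it is used here simply because it is the convention the text describes.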
A key task for risk managers has therefore been the development of accurate
and robust procedures for calculating a one-day VaR. One common approach to
calculating VaR involves assuming that daily percentage changes in the
underlying market variables are conditionally multivariate normal with the mean
percentage change in each market variable being zero. This is often referred to
as the model building approach. If the daily change in the portfolio value is
linearly dependent on daily changes in market variables that are normally
distributed, its probability distribution is also normal. The variance of the
probability distribution, and hence the percentile of the distribution corresponding
to VaR, can be calculated in a straightforward way from the variance-covariance
matrix for the market variables. In circumstances where the linear assumption is
inappropriate, the change in the portfolio value is often approximated as a
quadratic function of percentage changes in the market variables. This allows the
first few moments of the probability distribution of the change in the portfolio
value to be calculated analytically, so that the required percentile of the
distribution can be estimated. An alternative approach to handling non-linearity
is to use Monte Carlo simulation. On each simulation trial daily changes in the
market variables are sampled from their multivariate distribution and the portfolio
is revalued. This enables a complete probability distribution for the daily change
in the portfolio value to be determined. Many market variables have
distributions with fatter tails than the normal distribution. This has led some risk
managers to use historical simulation rather than the model building approach.
Historical simulation involves creating a database consisting of the daily
movements in all market variables over a period of time. The first simulation trial
assumes that the percentage changes in the market variables are the same as
Introduction to GARCH model
In econometrics, an autoregressive conditional heteroskedasticity (ARCH,
Engle (1982)) model considers the variance of the current error term to be a
function of the variances of the previous time period's error terms. ARCH relates
the error variance to the square of a previous period's error. It is employed
commonly in modeling financial time series that exhibit time-varying volatility.
Specifically, let ε_t denote the returns (or return residuals, net of a mean
process) and assume that ε_t = σ_t z_t, where the z_t are i.i.d. with zero mean
and unit variance, and where the series σ_t² is modeled by

σ_t² = α₀ + α₁ε²_{t−1} + … + α_q ε²_{t−q}

where α₀ > 0 and α_i ≥ 0 for i > 0.
If an autoregressive moving average model (ARMA model) is assumed for the
error variance, the model is a generalized autoregressive conditional
heteroskedasticity (GARCH, Bollerslev(1986)) model.
In that case, the GARCH(p,q) model (where p is the order of the GARCH terms σ²
and q is the order of the ARCH terms ε²) is given by

σ_t² = α₀ + α₁ε²_{t−1} + … + α_q ε²_{t−q} + β₁σ²_{t−1} + … + β_p σ²_{t−p}
IGARCH or Integrated Generalized Autoregressive Conditional
Heteroskedasticity is a restricted version of the GARCH model, in which the
persistence parameters sum to one.
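As a concrete illustration of the GARCH(p,q) recursion with p = q = 1, the following sketch simulates a path of conditional variances; the parameter values are assumptions chosen for illustration, not estimates from the text:

```python
import random

def garch11_variances(omega: float, alpha: float, beta: float,
                      n: int, seed: int = 0) -> list:
    """Simulate n conditional variances h_t from a GARCH(1,1) model:
    h_t = omega + alpha * e_{t-1}^2 + beta * h_{t-1}, with e_t = sqrt(h_t) * z_t.
    The recursion starts from the unconditional variance omega / (1 - alpha - beta).
    """
    rng = random.Random(seed)
    h = omega / (1.0 - alpha - beta)
    e = 0.0
    out = []
    for _ in range(n):
        h = omega + alpha * e * e + beta * h
        e = (h ** 0.5) * rng.gauss(0.0, 1.0)
        out.append(h)
    return out

# alpha + beta < 1 keeps the process covariance-stationary.
path = garch11_variances(omega=1e-05, alpha=0.10, beta=0.85, n=500)
```

Simulated paths like this exhibit the volatility clustering the text describes: large shocks feed back into the next period's variance.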
Generally, when testing for heteroskedasticity in econometric models, the best
test is the White test. However, when dealing with time series data, the best test
is Engle's ARCH test.
Prior to GARCH there was EWMA, which has now largely been superseded by
GARCH, though some people utilise both.
ARCH models
The autoregressive conditional heteroskedasticity model was introduced by
Engle (1982) to model the volatility of UK inflation.
As the name suggests, the model has the following properties:
1) Autoregression - Uses previous estimates of volatility to calculate subsequent
(future) values. Hence volatility values are closely related.
2) Heteroskedasticity - The probability distribution of the volatility varies
with the current value.
In order to introduce ARCH processes, let us assume that we have a time series
of asset price quotes S_i for each time step i. We calculate the fractional change
in the price of the asset between time steps i and i+1 using

u_i = (S_{i+1} − S_i) / S_i

Furthermore, we are required to determine the long-running historical volatility
(e.g. over several years), whose square (the long-run variance) we denote by V_L.
Volatility rates fluctuate about this long-running mean level; therefore, it
seems reasonable to incorporate this quantity in the ARCH model.

Formally, an ARCH(m) process may be expressed mathematically as

σ_n² = γV_L + Σ_{i=1..m} α_i u²_{n−i}

where σ_n is the volatility at the n-th time step, and γ and the α_i are
weighting factors that satisfy

γ + Σ_{i=1..m} α_i = 1
Here m denotes the number of past observations of u² used to determine σ_n². The
most common ARCH(m) process used to model asset price volatility dynamics is
the ARCH(1) model, where

σ_n² = γV_L + α₁u²_{n−1},   with γ + α₁ = 1

or

σ_n² = (1 − α₁)V_L + α₁u²_{n−1}

using the above relation.
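The ARCH(1) update, σ_n² = γV_L + (1 − γ)u²_{n−1}, is a weighted average of the long-run variance and the latest squared price change, and can be sketched directly; the numeric inputs are assumptions for illustration:

```python
def arch1_variance(v_long: float, gamma: float, u_prev: float) -> float:
    """ARCH(1) estimate: sigma_n^2 = gamma * V_L + (1 - gamma) * u_{n-1}^2,
    a weighted average of the long-run variance V_L and the latest squared
    fractional price change u_{n-1}^2 (so alpha_1 = 1 - gamma)."""
    return gamma * v_long + (1.0 - gamma) * u_prev * u_prev

# Assumed inputs: long-run variance 0.0004 (i.e. 2% daily volatility),
# weight 0.7 on the long-run level, and yesterday's return of 3%.
sigma2 = arch1_variance(0.0004, 0.7, 0.03)
vol = sigma2 ** 0.5
```

A large move like the 3% return pulls the estimate above the long-run level, which is exactly the clustering behaviour ARCH is designed to capture.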
GARCH models
Bollerslev (1986) later proposed a more generalised form of the ARCH(m) model,
appropriately termed the GARCH(p,q) (Generalised-ARCH) model. The GARCH(p,q)
model may be written as

σ_n² = γV_L + Σ_{i=1..p} α_i u²_{n−i} + Σ_{j=1..q} β_j σ²_{n−j}

The p and q denote the number of past observations of u² and σ²,
respectively, used to estimate σ_n².
The EWMA model
The Exponentially Weighted Moving Average model (EWMA) is a special case of
the GARCH(1,1) model in which γ = 0 and α₁ + β₁ = 1. Thus, writing λ = β₁
(so that α₁ = 1 − λ), we may express the EWMA model as

σ_n² = λσ²_{n−1} + (1 − λ)u²_{n−1}
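A minimal sketch of the EWMA update σ_n² = λσ²_{n−1} + (1 − λ)u²_{n−1}; the decay factor λ = 0.94 (RiskMetrics' daily value) and the starting variance are assumptions:

```python
def ewma_update(sigma2_prev: float, u_prev: float, lam: float = 0.94) -> float:
    """EWMA variance update: sigma_n^2 = lam * sigma_{n-1}^2 + (1 - lam) * u_{n-1}^2.
    There is no long-run variance term, so the estimate does not mean-revert;
    it simply tracks recent squared returns with exponentially decaying weights."""
    return lam * sigma2_prev + (1.0 - lam) * u_prev * u_prev

s2 = 0.0004                        # assumed starting variance (2% daily vol)
for u in (0.01, -0.02, 0.015):     # illustrative daily returns
    s2 = ewma_update(s2, u)
```

Because each update folds the previous estimate back in with weight λ, older observations decay geometrically, which is what "exponentially weighted" refers to.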
The EWMA model differs from ARCH and GARCH models since it does not
mean-revert. The preference between these different models is dependent upon
many factors: for example, the asset class, the forecasting time frame under
consideration, and the efficiency with which the weighting parameters may be
calibrated to the time series. Whilst the maximum likelihood estimation method
may be the most obvious method to select for calibration with empirical data,
more efficient algorithms have also been put forward.
Since these volatility forecasting models were introduced, there have been many
alternatives and modifications proposed to improve their use in volatility
forecasting.
The great workhorse of applied econometrics is the least squares model. The
basic version of the model assumes that the expected value of all error terms, in
absolute value, is the same at any given point. Thus, the expected value of any
given error term, squared, is equal to the variance of all the error terms taken
together. This assumption is called homoskedasticity. Conversely, data in which
the expected value of the error terms is not equal, in which the error terms may
reasonably be expected to be larger for some points or ranges of the data than
for others, is said to suffer from heteroskedasticity.
It has long been recognized that heteroskedasticity can pose problems in
ordinary least squares analysis. The standard warning is that in the presence of
heteroskedasticity, the regression coefficients for an ordinary least squares
regression are still unbiased, but the standard errors and confidence intervals
estimated by conventional procedures will be too narrow, giving a false sense of
precision. However, the warnings about heteroskedasticity have usually been
applied only to cross sectional models, not to time series models. For example, if
one looked at the cross-section relationship between income and consumption in
household data, one might expect to find that the consumption of low-income
households is more closely tied to income than that of high-income households,
because poor households are more likely to consume all of their income and to
be liquidity-constrained. In a cross-section regression of household consumption
on income, the error terms seem likely to be systematically larger for high-income
than for low-income households, and the assumption of homoskedasticity seems
implausible. In contrast, if one looked at an aggregate time series consumption
function, comparing national income to consumption, it seems more plausible to
assume that the variance of the error terms doesn't change much over time.
A recent development in the estimation of standard errors, known as
robust standard errors, has also reduced the concern over heteroskedasticity. If
the sample size is large, then robust standard errors give quite a good estimate
of standard errors even with heteroskedasticity. Even if the sample is small, the
need for a heteroskedasticity correction that doesn't affect the coefficients, but
only narrows the standard errors somewhat, can be debated.
However, sometimes the key issue is the variance of the error terms itself.
This question often arises in financial applications where the dependent variable
is the return on an asset or portfolio and the variance of the return represents the
risk level of those returns. These are time series applications, but it is
nonetheless likely that heteroskedasticity is an issue. Even a cursory look at
financial data suggests that some time periods are riskier than others; that is, the
expected value of error terms at some times is greater than at others. Moreover,
these risky times are not scattered randomly across quarterly or annual data.
Instead, there is a degree of autocorrelation in the riskiness of financial returns.
ARCH and GARCH models, which stand for autoregressive conditional
heteroskedasticity and generalized autoregressive conditional heteroskedasticity,
have become widespread tools for dealing with time series heteroskedastic
models. The goal of such models is to provide a volatility measure -- like a
standard deviation -- that can be used in financial
decisions concerning risk analysis, portfolio selection and derivative pricing.
ARCH/GARCH Models
Because this paper will focus on financial applications, we will use financial
notation. Let the dependent variable be labeled r_t, which could be the return on
an asset or portfolio. The mean value m and the variance h will be defined
relative to a past information set. Then, the return r in the present will be equal to
the mean value of r (that is, the expected value of r based on past information)
plus the standard deviation of r (that is, the square root of the variance) times the
error term for the present period.
The econometric challenge is to specify how the information is used to forecast
the mean and variance of the return, conditional on the past information. While
many specifications have been considered for the mean return and have been
used in efforts to forecast future returns, rather simple specifications have proven
surprisingly successful in predicting conditional variances. The most widely used
specification is the GARCH(1,1) model introduced by Bollerslev (1986) as a
generalization of Engle (1982). The (1,1) in parentheses is a standard notation in
which the first number refers to how many autoregressive lags appear in the
equation, while the second number refers to how many lags are included in the
moving average component of a variable. Thus, a GARCH (1,1) model for
variance looks like this:
h_t = ω + α(r_{t−1} − m_{t−1})² + β h_{t−1}

This model forecasts the variance of the date-t return as a weighted average of
a constant, yesterday's forecast, and yesterday's squared error. Of course, if
the mean is zero, then the surprise is simply r²_{t−1}.
Thus the GARCH models are conditionally heteroskedastic but have a constant
unconditional variance.
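The one-step forecast and the constant unconditional variance described above can be sketched as follows; the parameter values are assumptions chosen for illustration:

```python
def garch11_forecast(omega: float, alpha: float, beta: float,
                     r_prev: float, h_prev: float) -> float:
    """One-step GARCH(1,1) variance forecast for a zero-mean return:
    h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}."""
    return omega + alpha * r_prev ** 2 + beta * h_prev

def unconditional_variance(omega: float, alpha: float, beta: float) -> float:
    """Constant unconditional variance omega / (1 - alpha - beta);
    defined only when alpha + beta < 1."""
    return omega / (1.0 - alpha - beta)

# Assumed parameters; alpha + beta = 0.98 gives realistic persistence.
v_l = unconditional_variance(2e-06, 0.08, 0.90)        # -> 1% daily vol squared
h_next = garch11_forecast(2e-06, 0.08, 0.90, 0.02, v_l)
```

Starting from the unconditional level, a 2% surprise pushes the next-period variance above v_l, while the β term pulls it back toward v_l over time: conditionally heteroskedastic, constant unconditional variance.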
Possibly the most important aspect of the ARCH/GARCH model is the
recognition that volatility can be estimated based on historical data and that a
bad model can be detected directly using conventional econometric techniques.
A variety of statistical software packages, such as EViews, are available for
implementing GARCH and ARCH approaches.
Overview of VaR
VaR analysis began in the early 1990s as a way for Wall Street firms to estimate
their daily exposure to trading losses. In 1995 the Basle Capital Accord endorsed
the use of VaR in determining capital requirements for banks, lending credibility
to the practice. The Securities and Exchange Commission also forwarded VaR
as one of three possible methods for the disclosure of derivative exposure by
U.S. corporations. The goal of VaR is to calculate the expected down-side loss
over a specified time period with a specified degree of certainty. A common time
period used for VaR is one day or one month, since it has been used largely by
traders and financial institutions with multi-currency portfolios. Confidence levels
are usually calculated at the 95th and 99th percentiles. A VaR estimate must
include the time period and the degree of confidence. A trader's VaR for a
$1,000,000 portfolio of foreign currencies might look like this: the 99% VaR for
one day is $34,950 (the calculation of this number is detailed below).
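The quoted figure can be reproduced with a simple back-of-the-envelope calculation; since the text's own inputs are not shown here, the 1.5% daily volatility and z = 2.33 are assumptions chosen to match the number:

```python
# 99% one-day VaR = z * daily volatility * portfolio value.
portfolio_value = 1_000_000
z_99 = 2.33          # ~99th percentile of the standard normal (assumption)
daily_vol = 0.015    # assumed 1.5% daily volatility of the currency portfolio
var_1day = z_99 * daily_vol * portfolio_value   # about 34,950
```

That is, with these assumed inputs there is a 1% chance of losing more than roughly $34,950 in a single day.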
Part of the basic foundation for VaR comes from modern portfolio theory (MPT).
The calculations implicitly include the volatility-dampening effects of
diversification when examining a multi-asset or multi-currency portfolio. For an
international bank, this means recognizing that stocks and bonds denominated in
various currencies will not all move in the same direction (and to the same
degree) at once. It also allows a bank to summarize its risk from various assets
into one measure denominated in the bank's home currency.
VaR Methodologies
Various vendors have developed proprietary VaR methodologies. The most
widely available and publicized methodology for setting assumptions is the one
used by J.P. Morgan called RiskMetrics, which was the first major set of
standard, simplifying assumptions. RiskMetrics uses a derivation of the GARCH
(generalized autoregressive conditional heteroskedasticity) model to estimate
asset volatility and correlation. The method attempts to estimate time-varying
volatility (or correlation) by giving more weight to more recent observations. It is
common for VaR calculations to employ an expected daily return of zero (which
is not much different from the average daily return of 4 basis points over a 250
trading-day year, assuming a nominal return of 10% annually). These methods
use historical data to derive forward assumptions.
The primary disadvantage of historical data is obviously its dependence on
relationships which may change over time. Besides missing structural changes
in markets, such as the collapse of the European Exchange Rate Mechanism in
1993, the technique will also not capture the effects of short-term shocks, such
as the stock market crash of 1987. In the case of U.S. stocks, the historical
volatility was 1.05% during the 250 trading days prior to the crash on October 19.
That would make the decline one of more than 19 standard deviations. In
addition, it has become conventional wisdom since 1987 that correlations
increase during times of extreme market declines, and it may be true that
correlations are lower during positive return periods. In this case, a VaR which
uses historical correlations from a positive period will grossly underestimate the
actual VaR should markets turn lower. However, the primary advantage of
market-based estimates is that the assumed return distribution can be
non-normal. VaR models sometimes use Monte Carlo simulation (a method for
modeling random outcomes) in lieu of historical data, but usually depend on the
assumption of normally distributed returns. Unfortunately, returns are often non-
normally distributed. In the case of U.S. stocks, daily returns are commonly found
to be leptokurtic. Leptokurtic returns have a peaked mean (fortunately, to the
right of zero) and longer, fatter tails than a normal distribution. In practical
terms, observations nearer to and farther from the mean are more common than
what a normal distribution would predict.
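The fat-tail claim is easy to check on data: sample excess kurtosis is zero for a normal distribution and positive for leptokurtic returns. A stdlib-only sketch with a crude, illustrative sample:

```python
from statistics import mean

def excess_kurtosis(xs) -> float:
    """Sample excess kurtosis m4 / var^2 - 3: zero for normal data,
    positive for leptokurtic (peaked, fat-tailed) data."""
    m = mean(xs)
    dev = [x - m for x in xs]
    var = mean(d * d for d in dev)    # second central moment
    m4 = mean(d ** 4 for d in dev)    # fourth central moment
    return m4 / (var * var) - 3.0

# Mostly small moves plus two large outliers: a crude fat-tailed sample.
sample = [0.0] * 8 + [3.0, -3.0]
k = excess_kurtosis(sample)   # positive => fatter tails than normal
```

Daily equity returns run through a check like this typically show clearly positive excess kurtosis, which is why normality-based VaR understates tail risk.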
Another alternative to the standard GARCH-based volatility prediction is the use
of implied volatility. This involves backing-out the forward volatility expectation
embedded in the market price of options contracts. Unfortunately, options are not
available for all instruments over all desired time horizons (e.g., the implied
volatility from 30 or 90 day options would not be appropriate for use in estimating
a one day VaR). In practice, the use of implied volatility in VaR models is rare.
Institutional Use
Some large pension plans have established internal VaR programs using
systems developed by outside vendors. The Ontario Teachers' Pension Plan Board, for instance, uses a Reuters product to monitor its VaR in-house. Outside VaR
packages can cost around $1 million. Several large global custodians (such as
Bankers Trust and Chase) offer VaR calculation as a value-added service for
around $100,000 a year. Several large plans are said to be studying the use of
VaR, although many have expressed skepticism over the ability of custodians to
carry out the task. Plans considering VaR seem to prefer purchasing a program
to use internally, should they choose to adopt the risk measure.
VaR Concerns
While VaR can be a useful risk measure, we have some concerns with respect to
institutional use. Our primary concern is whether it is a suitable tool for long-term
investors. Researchers have shown that calculated VaRs for even short time
horizons can differ substantially based on which methodology is used to set
assumptions. However, volatility and correlation estimates can be made with a
greater degree of certainty for short-term observations, simply because these
parameters trend. GARCH models (and their cousins) have done a fairly good
job of capturing these trends over short periods of time. However, our confidence
in ten-year estimates is substantially lower, as no solid methodologies have
been developed to deal with long time periods.
Even if VaRs are calculated correctly, there is the possibility that these numbers
will be relied on too heavily. As mentioned earlier, a common misconception with
VaR is that it quantifies the maximum expected loss over a given time period.
It actually does quite the opposite. By revealing what is to the right of the value at
risk estimate 99% of the time, VaR gives the break-point for the other 1% of the
observed outcomes. A VaR that correctly predicts that a portfolio will suffer a loss
greater than $1 million during a year only 0.5% of the trading days is a success.
This is true even if the actual loss on that one day was $3 million. A single VaR
estimate should not be used in isolation. Even Philippe Jorion, one of the most
prominent proponents of VaR, suggests that estimates should be stress tested.
Also, a report by the International Securities Market Association noted both the
widespread use of VaR among banks, and the importance of stress-testing
results. The basics of stress-testing involve making VaR estimates based on
higher asset volatilities and/or correlations than those normally assumed. VaR
may also be used in conjunction with more traditional risk measures. These
measures may not be as intuitive as a VaR figure, but can help affirm or
contradict the VaR estimate. Lastly, for pension funds, VaR ignores the important
effects of capital market changes on liabilities.
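The stress-testing idea described above can be sketched as recomputing a parametric VaR under shocked volatility and correlation assumptions. The portfolio values, volatilities, and correlations below are hypothetical, and the two-asset normal formula is a simplification of what a real system would do.

```python
import math

def portfolio_var_99(values, sigmas, rho):
    """One-day 99% parametric VaR for a two-asset portfolio under
    normality, using the standard two-asset variance formula."""
    (v1, v2), (s1, s2) = values, sigmas
    dollar_var = (v1 * s1) ** 2 + (v2 * s2) ** 2 + 2 * rho * v1 * s1 * v2 * s2
    return 2.33 * math.sqrt(dollar_var)

base = portfolio_var_99((500_000, 500_000), (0.010, 0.020), rho=0.3)
# Stress test: raise both volatilities 50% and push correlation toward 1,
# as happens in extreme market declines
stressed = portfolio_var_99((500_000, 500_000), (0.015, 0.030), rho=0.8)
print(round(base), round(stressed))
```

Comparing the two figures shows how sensitive the single VaR number is to the correlation regime assumed.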
A VaR methodology for pension plans would be more useful if it targeted variables such as pension surplus or contributions, rather than simply asset values.
An Example
Assumption setting is the most difficult aspect of determining VaR. Once these
assumptions are set, the actual calculation is straightforward. To borrow the
foreign exchange example used earlier, assume that the combined portfolio of
currencies is expected to have a daily volatility of 1.5%. From basic statistics,
one-tailed critical values for 90%, 95%, and 99% confidence levels are 1.28,
1.65, and 2.33 respectively. The 99th percentile VaR of the $1,000,000 portfolio
would be found as follows:
(Portfolio Value) x (Portfolio Standard Deviation) x (Critical Value) = VaR
($1,000,000) x (0.015) x (2.33) = $34,950
The above example assumes a mean of zero, which is a standard VaR assumption.
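The calculation above reduces to one multiplication per confidence level; a minimal sketch using the same figures:

```python
def parametric_var(value, daily_sigma, z):
    """Parametric VaR with an assumed mean return of zero."""
    return value * daily_sigma * z

# One-tailed critical values from the standard normal distribution
for level, z in [("90%", 1.28), ("95%", 1.65), ("99%", 2.33)]:
    print(level, round(parametric_var(1_000_000, 0.015, z)))
```

The 99% line reproduces the $34,950 figure from the worked example.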
CHAPTER 2
REVIEW OF LITERATURE
Robert Engle, "The Use of ARCH/GARCH Models in Applied Econometrics", Journal of Economic Perspectives, Volume 15, Number 4, Fall 2001, Pages 157-168
The least squares model assumes that the expected value of all error terms,
when squared, is the same at any given point. This assumption is called
homoskedasticity, and it is this assumption that is the focus of the GARCH model.
Data in which the variances of the error terms are not equal, in which the error
terms may reasonably be expected to be larger for some points or ranges of the
data than for others, are said to suffer from heteroskedasticity. The standard
warning is that in the presence of heteroskedasticity, the regression coefficients
for an ordinary least squares regression are still unbiased, but the standard
errors and confidence intervals estimated by conventional procedures will be too
narrow, giving a false sense of precision. Instead of considering this as a
problem to be corrected, GARCH models treat heteroskedasticity as a variance
to be modeled. As a result, not only are the deficiencies of least squares
corrected, but a prediction is computed for the variance of each error term. This
prediction turns out often to be of interest, particularly in applications in finance.
EMPIRICAL ISSUES IN VALUE-AT-RISK
BY DENNIS BAMS AND JACCO L. WIELHOUWER
They compared four alternative models to calculate VaR
estimates for the value of a certain portfolio of the bank. Crucial for this
calculation is the underlying return distribution, since it reflects the probability of
extreme returns. A number of issues are important. First, the underlying
probability distribution should be able to reflect the behavior of extreme returns.
Hence, the tail of the distribution should be well modeled. We proposed adopting
a Student-t distribution, since it allows for fatter tails than a normal distribution.
Parametric and Semi-parametric Models of Value-at-Risk: On the Way to Bias Reduction
By Yan Liu and Richard Luger, Emory University
Due to the existence of the non-linear transformation bias in the VaR estimation
using the GARCH model, we propose a new two-stage VaR model based on our
generalized conditional coverage test. In order to eliminate the non-linear
transformation bias, the first-stage model starts from a parametric conditional
standard deviation model and is then tested by the generalized conditional
coverage test. If the first-stage model passes the test, we will keep using this
model since it is well specified. If the model fails to pass the test, we will
incorporate some additional variables, selected based on the test, into the
second-stage model, a semi-parametric quantile regression model.
Crucial for the determination of the extreme future market value, and hence for
the VaR, is the distribution function of the return on market value. As allowed by
the Basle Committee, a normal or lognormal distribution has usually been
assumed for the market return. Recently, alternative distributions have been
proposed that focus more on the tail behavior of the returns. See, for example,
Embrechts, Kluppelberg and Mikosch (1997), McNeil and Frey (1999) and Lucas
and Klaassen (1998) for a discussion. A normal distribution supposedly
underestimates the probability in the tail and hence the VaR result. Popular
alternatives in the financial literature include GARCH-type models which allow for
time-varying volatility, and the Student-t distribution, which allows for more
probability mass in the tail than the normal distribution. For a review of (G)ARCH
models, see Bollerslev, Engle and Nelson (1994). Other papers have focused on
different risk measures and different VaR methods. See, for example, Drudi et al.
(1997), Van Goorbergh and Vlaar (1999) and Jorion (1996).
CHAPTER 3
RESEARCH METHODOLOGY
Statement of Problem
Volatility is always taken into consideration while making investment decisions. Various models are used to forecast volatility, and GARCH is one of them. We use GARCH to forecast volatility in the Indian stock market and try to find out at what level it is useful.
OBJECTIVES
To study forecasting techniques in Indian stock markets.
To ascertain the performance of different GARCH models at different risk levels.
Study Design
a) Study Type:
The study type is analytical, quantitative and historical. Analytical because
facts and existing information is used for the analysis, Quantitative as
relationship is examined by expressing variables in measurable terms and also
Historical as the historical information is used for analysis and interpretation.
b) Study population:
Population is the daily closing prices of the NIFTY Index.
c) Sampling frame:
Sampling Frame would be monthly closing prices of NIFTY Index.
d) Sample:
Sample chosen is daily closing values of the NIFTY Index from 01-01-1997 to
31-3-2007.
e) Sampling technique:
Deliberate sampling is used because only particular units are selected
from the sampling frame. Such a selection is undertaken as these units represent
the population in a better way and reflect better relationship with the other
variable.
3.3 SAMPLE SIZE AND DATA SOURCES
In this study S&P CNX Nifty index has been considered as a proxy for the stock
market and accordingly the closing index values were collected from Jan 1, 1997 till March 30, 2007.
Here we calculate the monthly variance from the data taken from the above-mentioned period and use the first 78 months of data to forecast the volatility of the remaining 34 months.
TEST OF STATIONARITY
Dickey-fuller Test for unit root:
The Dickey-Fuller statistic tests for a unit root in the time series data: r_t is regressed against r_{t-1} to test for a unit root in a random walk model.
This is given as:
r_t = ρ·r_{t-1} + u_t
If ρ is statistically equal to 1, then the stochastic variable r_t is said to have a unit root. A series with a unit root is non-stationary and follows a random walk. There are three popular Dickey-Fuller tests for testing a unit root in a series.
The above equation can be rewritten as:
Δr_t = δ·r_{t-1} + u_t
Here δ = (ρ − 1), and it is tested whether δ is equal to zero. r_t is a random walk if δ is equal to zero. It is possible that a time series could behave as a random walk with a drift. This means that the value of r_t may not center on zero, and thus a constant should be added to the random walk equation. A linear trend could also be added along with the constant in the equation, which results in a null hypothesis reflecting stationary deviations from a trend.
The Augmented Dickey-fuller Test:
In conducting the DF test as above, it is assumed that the error term u_t is uncorrelated. For the case where the u_t are correlated, Dickey and Fuller developed a test known as the augmented Dickey-Fuller (ADF) test. The ADF test consists of estimating the following regression:
ΔY_t = β₁ + β₂·t + δ·Y_{t-1} + Σᵢ αᵢ·ΔY_{t-i} + ε_t
where ε_t is a pure white-noise term and ΔY_{t-1} = (Y_{t-1} − Y_{t-2}), ΔY_{t-2} = (Y_{t-2} − Y_{t-3}), etc.
The number of lagged difference terms to include is often determined empirically, the idea being to include enough terms so that the error term in the above equation is serially uncorrelated. In the ADF test we still test whether δ = 0, and the ADF statistic follows the same asymptotic distribution as the DF statistic, so the same critical values can be used.
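The mechanics of the DF regression can be illustrated with plain OLS on simulated data. This sketch omits the augmentation terms and the MacKinnon critical values (which a real test like the one used later in this study requires); it only shows how the test statistic arises.

```python
import random

def dickey_fuller_stat(series):
    """t-statistic on delta in the OLS regression
    diff(r)_t = c + delta * r_{t-1} + u_t.
    Strongly negative values argue against a unit root."""
    y = [series[t] - series[t - 1] for t in range(1, len(series))]
    x = series[:-1]
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    delta = sxy / sxx
    c = my - delta * mx
    resid = [yi - c - delta * xi for xi, yi in zip(x, y)]
    s2 = sum(e * e for e in resid) / (n - 2)
    return delta / (s2 / sxx) ** 0.5

random.seed(1)
noise = [random.gauss(0, 1) for _ in range(1000)]  # stationary series
walk, total = [], 0.0                              # random walk (unit root)
for e in noise:
    total += e
    walk.append(total)

print(dickey_fuller_stat(noise))  # strongly negative: reject unit root
print(dickey_fuller_stat(walk))   # typically much closer to zero
```

In practice a library routine (e.g. an ADF implementation in an econometrics package) should be used, since the statistic's distribution is non-standard.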
AUTO-CORRELATION
The term autocorrelation may be defined as correlation between members of a series of observations ordered in time or space. In the regression context, the classical linear regression model assumes that autocorrelation does not exist in the disturbances u_i.
Symbolically:
E(u_i u_j) = 0, for i ≠ j
Put simply, the classical model assumes that the disturbance term relating to any observation is not influenced by the disturbance term relating to any other observation. For example, if we are dealing with quarterly time series data involving the regression of output on labour and capital inputs and, say, a labour strike affects output in one quarter, there is no reason to believe that this disruption will be carried over to the next quarter. That is, if output is lower this quarter, there is no reason to believe it will also be lower in the next quarter.
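The no-autocorrelation assumption E(u_i u_j) = 0 can be checked empirically with the sample autocorrelation function; a minimal sketch on simulated white noise:

```python
import random

def autocorr(xs, lag):
    """Sample autocorrelation of a series at a given lag."""
    n = len(xs)
    mean = sum(xs) / n
    c0 = sum((x - mean) ** 2 for x in xs)
    ck = sum((xs[t] - mean) * (xs[t - lag] - mean) for t in range(lag, n))
    return ck / c0

random.seed(3)
white_noise = [random.gauss(0, 1) for _ in range(5000)]
print(autocorr(white_noise, 1))  # near zero: no autocorrelation
```

Applied to regression residuals rather than raw noise, the same function flags violations of the classical assumption.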
GARCH MODELS
The great workhorse of applied econometrics is the least squares model. The
basic version of the model assumes that, the expected value of all error terms, in
absolute value, is the same at any given point. Thus, the expected value of any
given error term, squared, is equal to the variance of all the error terms taken
together. This assumption is called homoskedasticity. Conversely, data in which
the expected value of the error terms is not equal, in which the error terms may
reasonably be expected to be larger for some points or ranges of the data than
for others, is said to suffer from heteroskedasticity.
It has long been recognized that heteroskedasticity can pose problems in
ordinary least squares analysis. The standard warning is that in the presence of
heteroskedasticity, the regression coefficients for an ordinary least squares
regression are still unbiased, but the standard errors and confidence intervals
estimated by conventional procedures will be too narrow, giving a false sense of
precision. However, the warnings about heteroskedasticity have usually been
applied only to cross sectional models, not to time series models. For example, if
one looked at the cross-section relationship between income and consumption in
household data, one might expect to find that the consumption of low-income
households is more closely tied to income than that of high-income households,
because poor households are more likely to consume all of their income and to
be liquidity-constrained. In a cross-section regression of household consumption
on income, the error terms seem likely to be systematically larger for high-income
than for low-income households, and the assumption of homoskedasticity seems
implausible. In contrast, if one looked at an aggregate time series consumption
function, comparing national income to consumption, it seems more plausible to
assume that the variance of the error terms doesn't change much over time.
ARCH stands for autoregressive conditional heteroskedasticity. These models are a sophisticated group of time series models initially introduced by Engle (1982), and they capture the volatility clustering phenomenon usually observed in financial time series data. In the linear ARCH(q) model the time-varying conditional variance is postulated to be a linear function of the past q squared innovations. In other words, variance is modeled as a constant plus a distributed lag on the squared residual terms from earlier periods:
r_t = μ + ε_t and σ²_t = ω + Σᵢ₌₁^q αᵢ·ε²_{t−i}
where ε_t = σ_t·z_t with z_t ~ iid N(0, 1). For stability, Σᵢ αᵢ < 1.0. Theoretically q may assume any number, but generally it is determined based on some information criterion like AIC or BIC. In financial markets the ARCH(1) model is most often used; this is a very simple model that exhibits constant unconditional variance but non-constant conditional variance. Accordingly, the conditional variance is modeled as:
σ²_t = α₀ + α₁·ε²_{t−1}
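The ARCH(1) recursion can be simulated directly. In the sketch below (the parameter values are illustrative, not estimated from data) the returns themselves are nearly uncorrelated while their squares are not, which is exactly the volatility-clustering property the model is built to capture.

```python
import random

def simulate_arch1(n, omega, alpha1, seed=7):
    """Simulate ARCH(1): sigma2_t = omega + alpha1 * eps_{t-1}^2,
    eps_t = sqrt(sigma2_t) * z_t with z_t ~ N(0, 1)."""
    random.seed(seed)
    eps = []
    sigma2 = omega / (1 - alpha1)  # start at the unconditional variance
    for _ in range(n):
        e = sigma2 ** 0.5 * random.gauss(0, 1)
        eps.append(e)
        sigma2 = omega + alpha1 * e * e
    return eps

def autocorr(xs, lag):
    n, mean = len(xs), sum(xs) / len(xs)
    c0 = sum((x - mean) ** 2 for x in xs)
    return sum((xs[t] - mean) * (xs[t - lag] - mean)
               for t in range(lag, n)) / c0

r = simulate_arch1(5000, omega=0.0001, alpha1=0.5)
print(autocorr(r, 1))                    # near zero
print(autocorr([x * x for x in r], 1))   # clearly positive: clustering
```

The contrast between the two autocorrelations is the signature heteroskedasticity that GARCH-type models exploit.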
As with simple regression, the parameters in ARCH and GARCH models (discussed next) are estimated at weekly intervals using a rolling 7-year window of weekly data. The problem with ARCH models is that they involve estimation of a large number of parameters, and if some of the parameters become negative they lead to difficulties in forecasting. Bollerslev (1986) proposed a Generalized ARCH or GARCH (p, q) model where volatility at time t depends on the observed data at t-1, t-2, t-3, ..., t-q as well as on volatilities at t-1, t-2, t-3, ..., t-p.
The advantage of the GARCH formulation is that though recent innovations enter the model, it involves estimation of only a few parameters, hence there is little chance that they will be ill-behaved. In GARCH there are two equations: the conditional mean equation given below:
r_t = μ + ε_t
and the conditional variance equation shown below:
σ²_t = ω + Σᵢ₌₁^q αᵢ·ε²_{t−i} + Σⱼ₌₁^p βⱼ·σ²_{t−j}
The parameters in both equations are estimated simultaneously using maximum likelihood methods once a distribution for the innovations ε_t has been specified; generally it is assumed that they are Gaussian.
The simplest and most commonly used member of the GARCH family is the GARCH (1, 1) model shown below:
σ²_t = ω + α·ε²_{t−1} + β·σ²_{t−1}
where,
σ²_t = variance of the current period
ω = intercept
ε²_{t−1} = lagged squared residual
α = parameter of the error term's lag variable
σ²_{t−1} = variance of the last period
β = parameter of the lag variance
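The GARCH (1,1) recursion above can be written out directly. The sketch below uses made-up parameter values (real ones come from maximum likelihood estimation, e.g. in EViews); it filters a return series into conditional variances and yields a one-step-ahead forecast.

```python
def garch11_filter(returns, omega, alpha, beta):
    """Conditional variances under GARCH(1,1):
    sigma2_t = omega + alpha * eps_{t-1}^2 + beta * sigma2_{t-1},
    with eps_t the demeaned return. The final element is the
    one-step-ahead variance forecast."""
    mu = sum(returns) / len(returns)
    eps = [r - mu for r in returns]
    sigma2 = [omega / (1 - alpha - beta)]  # long-run (unconditional) variance
    for e in eps:
        sigma2.append(omega + alpha * e * e + beta * sigma2[-1])
    return sigma2

variances = garch11_filter([0.012, -0.008, 0.020, -0.015, 0.003],
                           omega=1e-6, alpha=0.10, beta=0.85)
print(variances[-1])  # next-period variance forecast
```

Note the stability requirement α + β < 1, which keeps the long-run variance ω/(1 − α − β) finite.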
Following the Schwarz Information Criterion and Akaike Information Criterion we found that the best model in the GARCH (p, q) class for p ∈ [1, 5] and q ∈ [1, 2] was a
GARCH (1,1) in the stock market. We also tested whether the GARCH (1,1) adequately captured all the persistence in the variance of returns: the Ljung-Box Q-statistic at the 36th lag of the standardized squared residuals was 37.498 (p = 0.4), indicating that the residuals are not serially correlated.
In our forecasting exercise, first we estimated the GARCH parameters using the estimation period, i.e., 1st week of Jan 1997 to last week of March 2004 for the Nifty, and then used these parameters to obtain the forecasts for the trading days in the 1st week of April 2004; these daily forecasts were aggregated to obtain the forecast for the weeks of April 2004. Then the beginning and end observations for parameter estimation were adjusted by including the data for the 1st week of March 2004 and omitting the data pertaining to the 1st week of Jan 1997. (To conserve space and maintain the flow, the parameter values are not presented; they are available upon request.) The procedure is repeated for every week using a rolling window of 7 years.
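The rolling-window scheme just described can be sketched generically: each week, re-estimate on the most recent window and forecast one step ahead. The `fit`/`forecast` pair below is a stand-in (sample variance as the forecast), not the actual GARCH estimation used in this study, and the weekly returns are hypothetical.

```python
def rolling_forecasts(series, window, fit, forecast):
    """For each period t >= window, estimate on series[t-window:t]
    and produce a one-step-ahead forecast."""
    return [forecast(fit(series[t - window:t]))
            for t in range(window, len(series))]

def sample_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

weekly = [0.010, -0.020, 0.015, 0.000, -0.010, 0.020, 0.005]
preds = rolling_forecasts(weekly, window=4, fit=sample_variance,
                          forecast=lambda v: v)
print(preds)  # one variance forecast per out-of-sample week
```

Swapping a GARCH estimator in for `fit` reproduces the study's scheme of re-estimating every week on a fixed-length window.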
LIMITATIONS OF THE RESEARCH
1. Data are considered for ten years only.
2. The sample is restricted to the S&P CNX Nifty index.
3. The models are tested on the basis of 3 years of forecasted volatility values only.
Analysis and Interpretation
Steps followed in the analysis
The data is collected for the NIFTY Index.
Period of data collection: 1st January 1997 to 31st March 2007.
The data is converted into natural log format to weed out any spurious correlations within the data sets.
Then the data is tested for stationarity using the Augmented Dickey-Fuller test.
The monthly variance of the NIFTY daily closing prices is found.
Out of 120 months, a total of 77 months is used to find the equation through which we can forecast the values for the next 37 months.
The values are forecast, and the residuals, which show the deviation of the actual from the forecasted values, are found.
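The first two steps (natural-log transformation and within-month variance) amount to a few lines; a sketch with hypothetical closing prices:

```python
import math

def log_returns(closes):
    """Natural-log daily returns from closing prices."""
    return [math.log(closes[t] / closes[t - 1]) for t in range(1, len(closes))]

def variance(returns):
    """Sample variance of the daily log returns within one month."""
    m = sum(returns) / len(returns)
    return sum((r - m) ** 2 for r in returns) / (len(returns) - 1)

closes = [1050.0, 1062.5, 1041.0, 1055.3, 1070.1]  # hypothetical NIFTY closes
rets = log_returns(closes)
print(variance(rets))
```

Applied month by month to the actual NIFTY closes, this produces the monthly variance series the GARCH models are fitted to.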
ADF RESULT
When the daily return of the index is tested for its unit root with four lag variables, the following result is obtained:
ADF Test Statistic: -29.994
1% Critical Value*: -3.4437
5% Critical Value: -2.8667
10% Critical Value: -2.5695
*MacKinnon critical values for rejection of hypothesis of a unit root.

Augmented Dickey-Fuller Test Equation
Dependent Variable: D(SER02,2)
Method: Least Squares
Date: 04/30/07  Time: 11:22
Sample (adjusted): 4 601
Included observations: 598 after adjusting endpoints

Variable         Coefficient   Std. Error   t-Statistic   Prob.
D(SER02(-1))     -2.00722      0.066921     -29.994       0.0000
D(SER02(-1),2)    0.341221     0.038837      8.786002     0.0000
C                -1.73E-05     0.000764     -0.02267      0.9819

R-squared: 0.775773           Mean dependent var: -9.64E-05
Adjusted R-squared: 0.77502   S.D. dependent var: 0.039396
S.E. of regression: 0.018686  Akaike info criterion: -5.11703
Sum squared resid: 0.207764   Schwarz criterion: -5.09499
Log likelihood: 1532.993      F-statistic: 1029.282
Durbin-Watson stat: 2.171695  Prob(F-statistic): 0.0000
Interpretation
As can be easily seen from the ADF test, the null hypothesis of a unit root can be rejected, as the estimated value is -29.994, which in absolute value is greater than all the critical values at the 1%, 5% and 10% levels of significance.
The absence of a unit root means the series is stationary. Combined with the phenomenon of volatility clustering, this implies that volatility can be predicted and the forecasting ability of the different models can be generalized to other time periods as well.
REGRESSION ANALYSIS
Dependent variable: Variance
Independent variables: Return^2, Lag Variance
GARCH (1,6) Model
Coefficients(a)
                 Unstandardized Coefficients   Standardized
Model            B          Std. Error         Beta        t          Sig.
1 (Constant)     0.000167   4.78E-05                       3.491208   0.000814
Return^2         0.007883   0.004951           0.17688     1.592206   0.115601
Lag variance     0.236369   0.111409           0.235694    2.121633   0.037217
a. Dependent Variable: Variance

The fitted variance equation is:
σ²_t = 0.000167 + 0.007883·R²_{t−6} + 0.236369·σ²_{t−1}
GARCH (1,2)
Dependent Variable: SER43 Variance
Method: ML ARCH
Date: 05/14/07  Time: 12:50
Sample (adjusted): 1 117
Included observations: 117 after adjusting endpoints
Convergence achieved after 1 iteration

              Coefficient   Std. Error   z-Statistic   Prob.
Variance Equation
C             9.11E-08      1.45E-07     0.628415      0.5297
ARCH(1)       0.133333      0.199803     0.667325      0.5046
GARCH(1)      0.533333      1.354867     0.393642      0.6938
GARCH(2)      0.044444      1.321249     0.033638      0.9732

R-squared: -0.85557           Mean dependent var: 0.000265
Adjusted R-squared: -0.90483  S.D. dependent var: 0.000287
S.E. of regression: 0.000397  Akaike info criterion: -12.6746
Sum squared resid: 1.78E-05   Schwarz criterion: -12.5802
Log likelihood: 745.4646      Durbin-Watson stat: 0.811168
GARCH (1,1)
Dependent Variable: Variance
Method: ML ARCH
Date: 05/13/07  Time: 13:58
Sample (adjusted): 1 118
Included observations: 118 after adjusting endpoints
Convergence achieved after 1 iteration

              Coefficient   Std. Error   z-Statistic   Prob.
Variance Equation
C             9.10E-08      1.47E-07     0.618426      0.5363
ARCH(1)       0.15          0.196358     0.763913      0.4449
GARCH(1)      0.6           0.095394     6.289721      0.0000

R-squared: -0.85736           Mean dependent var: 0.000264
Adjusted R-squared: -0.88966  S.D. dependent var: 0.000286
S.E. of regression: 0.000394  Akaike info criterion: -12.6548
Sum squared resid: 1.78E-05   Schwarz criterion: -12.5844
Log likelihood: 749.6338      Durbin-Watson stat: 0.866372
GARCH (2,1)
Dependent Variable: SER40 Variance
Method: ML ARCH
Date: 05/14/07  Time: 12:42
Sample (adjusted): 1 118
Included observations: 118 after adjusting endpoints
Convergence achieved after 1 iteration

              Coefficient   Std. Error   z-Statistic   Prob.
Variance Equation
C             9.72E-08      1.51E-07     0.644293      0.5194
ARCH(1)       0.133333      0.255025     0.522824      0.6011
ARCH(2)       0.044444      0.232875     0.190851      0.8486
GARCH(1)      0.533333      0.110066     4.845583      0.0000

R-squared: -0.56078            Mean dependent var: 0.000264
Adjusted R-squared: -0.709748  S.D. dependent var: 0.000286
S.E. of regression: 0.000396   Akaike info criterion: -12.6744
Sum squared resid: 1.78E-05    Schwarz criterion: -12.5805
Log likelihood: 751.7879       Durbin-Watson stat: 0.810855
CONCLUSION
As we try to forecast the NIFTY closing prices for 36 months at different lags, we get the highest significance at the GARCH (1,6) level. We took the alpha and beta and tried to forecast the variance for the remaining period. We obtained the graph for the residual values, which are not correlated with the actual variance.
We tried to forecast NIFTY at different lags, like GARCH (1,2), GARCH (1,3) and GARCH (2,1), but we find values of R^2 of -0.5609, -0.85736 and -0.8557, which are not significant enough to show forecasting power.
At last we can say that we are not able to forecast the volatility in the NIFTY index with the help of the GARCH model.
ANNEXURE
[Figure: CNX S&P daily index, 1/1/1997 to 1/1/2007; index values range from roughly 0 to 4,500.]
Correlogram of S&P CNX Nifty monthly return
Date: 05/13/07 Time: 18:41
Sample: 1 2507
Included observations: 119
Lag    AC      PAC     Q-Stat   Prob
 1    -0.004  -0.004    0.0022  0.963
 2     0.083   0.083    0.8509  0.653
 3     0.035   0.036    1.0021  0.801
 4    -0.081  -0.089    1.8336  0.766
 5    -0.029  -0.036    1.9389  0.858
 6     0.136   0.152    4.2984  0.636
 7    -0.070  -0.059    4.9287  0.669
 8    -0.075  -0.113    5.6558  0.686
 9    -0.016  -0.018    5.6889  0.771
10     0.064   0.121    6.2332  0.795
11     0.047   0.057    6.5328  0.836
12     0.076   0.012    7.3115  0.836
13    -0.010  -0.019    7.3239  0.885
14    -0.072  -0.047    8.0277  0.888
15     0.002   0.013    8.0285  0.923
16    -0.118  -0.138    9.9807  0.868
17     0.078   0.078   10.830   0.865
18    -0.042  -0.018   11.077   0.891
19    -0.100  -0.096   12.505   0.863
20    -0.063  -0.075   13.087   0.874
21     0.038   0.063   13.294   0.898
22    -0.035   0.003   13.478   0.919
23     0.138   0.070   16.336   0.841
24     0.125   0.127   18.709   0.767
25     0.049   0.086   19.072   0.794
26    -0.060  -0.078   19.636   0.808
27    -0.081  -0.158   20.667   0.802
28    -0.035   0.011   20.865   0.831
29    -0.041  -0.001   21.130   0.854
30     0.020   0.000   21.192   0.882
31    -0.035  -0.030   21.392   0.901
32     0.063   0.125   22.039   0.906
33     0.056   0.103   22.563   0.914
34     0.014  -0.101   22.597   0.932
35     0.073  -0.024   23.507   0.930
36    -0.064  -0.060   24.221   0.933
[Figure: S&P CNX Nifty daily return series over the sample period, observations 1 to roughly 2,813.]
[Figure: ACF of VAR00001 for lags 1 to 16, with coefficients and upper and lower confidence limits.]
Bibliography
BOOKS
1. Basic Econometrics: By Damodar N. Gujarati
2. Introductory Econometrics: By Ramu Ramanathan
ECONOMETRICS SOFTWARE PACKAGES
1. EViews
2. SPSS
References
Bollerslev, T. (1986). Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics, 31:307-327.
Bollerslev, T. (1987). A conditionally heteroskedastic time series model for speculative prices and rates of return. Review of Economics and Statistics, 69:542-547.
Bollerslev, T., Chou, R., and Kroner, K. (1992). ARCH modelling in finance: A review of the theory and empirical evidence. Journal of Econometrics, 52:5-59.
Christoffersen, P. (1998). Evaluating interval forecasts. International Economic Review, 39(4):841-864.
Crouhy, M., Galai, D., and Mark, R. (1998). The new 1998 regulatory framework for capital adequacy. In Alexander, C., editor, Risk Management and Analysis, volume 1, chapter 1, pages 1-37. John Wiley, New York.
Dowd, K. (1998). Beyond Value at Risk: The New Science of Risk Management. John Wiley & Sons, England.
Duffie, D. and Pan, J. (1997). An overview of value at risk. Journal of Derivatives, 4:7-49.
Geman, S., Bienenstock, E., and Doursat, R. (1992). Neural networks and the bias/variance dilemma. Neural Computation, 4:1-58.
Hornik, K., Stinchcombe, M., and White, H. (1989). Multilayer feedforward networks are universal approximators. Neural Networks, 2:359-366.
Kupiec, P. (1995). Techniques for verifying the accuracy of risk management models. Journal of Derivatives, 3:73-84.
Lopez, J. (1998). Methods for evaluating value-at-risk estimates. Economic Policy Review, 4:119-124.