Vector Autoregressions and Structural VARs for Monetary Policy Analysis Using EViews
User Guide
Prepared by
Dr Thomas Bwire
Senior Principal Economist
Research Department, Bank of Uganda
Published by
COMESA Monetary Institute (CMI)
First Published 2019
COMESA Monetary Institute (CMI) C/O Kenya School of Monetary Studies P.O. Box 65041 – 00618 Noordin Road Nairobi, KENYA TEL: +254-20-8646207 http://cmi.comesa.int
Copyright © 2019, COMESA Monetary Institute
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without prior permission from the COMESA Monetary Institute.
Disclaimer
The Author is solely responsible for the views expressed herein. These opinions do not in any way represent the official position of COMESA, its Member States, or the affiliated institution of the Author.
Typesetting and Design
Mercy W. Macharia
TABLE OF CONTENTS
List of Tables ..............................................................................................................vii
List of Figures .......................................................................................................... viii
List of Acronyms ........................................................................................................ ix
Preface ........................................................................................................................... x
Acknowledgements .................................................................................................... xi
1. Introduction to Modelling for Analysis of Monetary Policy Transmission Mechanism .............................................................. 1
1.1 Introduction to Modelling ................................................................................. 1
1.2. EViews Operation Environment, Basics and Data ...................................... 3
1.2.1 EViews Software ........................................................................................... 3
1.2.2 Getting familiarity with EViews Window ...................................................... 3
1.2.3 Getting Data into EViews ............................................................................. 4
1.2.4 Viewing the Data ........................................................................................ 11
2. Theoretical Aspects of Monetary Policy Transmission Channels ....................................................................................... 13
2.1 Introduction...................................................................................................... 13
2.2 Interest Rate Channel...................................................................................... 15
2.3 Money Channel ................................................................................................ 17
2.4 The Wealth Channel........................................................................................ 17
2.5 The Exchange Rate Channel ......................................................................... 18
2.6 Bank Based Channels ...................................................................................... 18
2.6.1 The traditional bank lending channel ........................................................... 19
2.6.2 Bank capital channel ................................................................................... 19
2.7 Balance Sheet Channel .................................................................................... 20
2.8 Expectations Channel ..................................................................................... 21
3. Motivating VAR Modelling for Analysis of MTM ....................... 23
3.1 Introduction to VAR ...................................................................................... 23
3.2 Exploring the VAR(p) methods for the analysis of MTM ....................... 24
4. Taking MTM Theories and the VAR Model to the Data ........... 27
4.1 Adjustments for Seasonal Effects ................................................................. 28
4.2 Deriving the Output GAP ............................................................................. 34
4.3 Unit Root Testing ............................................................................................ 42
4.3.1 Demonstration of unit root testing using ADF test ....................................... 51
4.4 Setting up a MTM VAR model ..................................................................... 62
4.4.1 Determination of lag length .......................................................................... 62
4.4.2 VAR(2) residual statistical properties ......................................................... 67
4.4.3 Stability of VAR(2) ................................................................................... 71
5. Structural Vector Autoregressive Models (SVARs) ...................... 73
5.1 Motivation ......................................................................................................... 73
5.2 VAR Identification .......................................................................................... 73
5.3.1 Imposing a recursive Identification Scheme .................................................... 76
5.4 Generating Impulse Response Functions and Forecast Error Variance Decomposition ................................................................................................ 88
5.5 Non-recursive Identification Scheme ........................................................... 94
5.6 Comparing Recursively and Non-recursively Identified SVAR ............... 97
6. VAR and Vector Error Correction Model (VECM) .................... 101
6.1 Deriving the VECM Framework ................................................................ 101
6.2 Demonstration: Determination of Cointegration and Estimation of VECM ............................................................................................................. 105
6.3 Estimating a VECM ...................................................................................... 109
6.4 Long-run Exclusion Tests ............................................................................ 112
6.5 Long-run Weak Exogeneity Tests............................................................... 113
6.6 Granger Non-causality Test ......................................................................... 116
7. Granger Causality Test ............................................................... 119
8. References ................................................................................... 120
List of Tables
Table 1: Unit root test in CPI (levels) ...................................................................... 53
Table 2: Unit root test in LCPI (difference) ...................................................................... 54
Table 3: Unit root test in exr ........................................................................................... 55
Table 4: Unit root test in lgdp ......................................................................................... 56
Table 5: Unit root test in lM2 .......................................................................................... 57
Table 6: Unit root test in loil_price .................................................................................. 58
Table 7: Unit root test in lpsc .......................................................................................... 59
Table 8: Unit root test in lrate ......................................................................................... 60
Table 9: Unit root test in tb91 ......................................................................................... 61
Table 10: VAR(2) Estimates ............................................................................................. 64
Table 11: VAR Lag order selection criteria ...................................................................... 66
Table 12: Residual Serial Correlation LM test ................................................................. 67
Table 13: VAR Residual Normality Tests ......................................................................... 70
Table 14: VAR Residual Heteroskedasticity Test............................................................. 71
Table 15: VAR (2) Roots of the characteristic polynomial .............................................. 72
Table 16: Just-Identified SVAR estimates (short-run text form) ..................................... 81
Table 17: Over-identified SVAR estimates (short-run text form) ................................... 83
Table 18: SVAR estimates (short-run pattern matrix option) ......................................... 87
Table 19: Variance decomposition ................................................................................. 93
Table 20: Just-identified Non-recursive SVAR estimates (matrix form) ......................... 96
Table 21: Johansen's trace test results ......................................................................... 108
Table 22: Vector Error Correction Estimates ................................................................ 111
Table 23: A test of monetary policy neutrality ............................................................. 113
Table 24: A test of weak exogeneity ............................................................................. 115
Table 25: VEC Granger Causality/Block Exogeneity test ............................................... 117
List of Figures
Figure 1: Theoretical Monetary Transmission Mechanisms ........................................... 14
Figure 2: Quarterly GDP, CPI and oil_price in levels ....................................................... 31
Figure 3: Comparing seasonally unadjusted and seasonally adjusted series ................. 34
Figure 4: Output gap: Hodrick-Prescott Filter (λ=1600) ................................................. 36
Figure 5: Variables in levels and first differences ........................................................... 38
Figure 6: Level, first difference autocorrelations ............................................................ 46
Figure 7: Residual plots ................................................................................................... 69
Figure 8: IRF to one standard-deviation structural shock in tb91 (recursive structural factorisation) .................................................................................................... 91
Figure 9: IRF to one standard-deviation structural shock in tb91 (non-recursive structural factorisation).................................................................................... 97
List of Acronyms
ADF Augmented Dickey Fuller
AIC Akaike Information Criterion
CPI Consumer Price Index
DGP Data Generating Process
FEVD Forecast Error Variance Decomposition
GDP Gross domestic product
HP Hodrick-Prescott
HQ Hannan-Quinn
IFS International Financial Statistics
IMF International Monetary Fund
IRF Impulse Response Function
LR Likelihood Ratio
MTM Monetary Transmission Mechanisms
OLS Ordinary Least Squares
PSC Private Sector Credit
PWT Penn World Tables
SBC Schwarz Bayesian Criterion
UN United Nations
VAR Vector Autoregressive
VECM Vector Error Correction Model
WB World Bank
Preface
The preparation of this User Guide followed a directive to the COMESA Monetary Institute (CMI) by the 23rd Meeting of the COMESA Committee of Governors of Central Banks, held in March 2018 in Djibouti. The Governors noted that understanding and analysing the monetary policy transmission mechanism is crucial, since COMESA member central banks are currently moving gradually to inflation targeting monetary policy frameworks. The overall objective of this User Guide is therefore to equip users with skills in modelling and analysing the monetary policy transmission mechanism.
The User Guide utilizes strictly time series data for the analysis of monetary policy transmission mechanisms. It demonstrates all steps, from data organization to the interpretation of results, using the EViews software. The key questions which the analysis in the User Guide addresses are: (i) which transmission channels, or combination of channels (interest rate, bank lending, exchange rate, asset prices), are likely to be the most effective in transmitting monetary policy; and (ii) what are the timing and magnitude of the effects of policy changes on macroeconomic variables. The User Guide therefore provides an analytical guide on the application of VAR, SVAR and VECM models for the analysis of the transmission mechanism of monetary policy.
It is hoped that the Guide will enable users to have a firm understanding of the
process involved in the analysis of transmission mechanism of monetary policy.
It is also hoped that the Guide will be used by COMESA member central
banks as a reference material to train their staff.
Ibrahim Zeidy
Director and Chief Executive Officer
Acknowledgements
The Author is grateful to the COMESA Monetary Institute (CMI) for the opportunity given to prepare the User Guide. He acknowledges the technical and professional support of the Director of CMI, Mr. Ibrahim Zeidy, and the Senior Economist, Dr. Lucas Njoroge. The Author also acknowledges the earlier COMESA training material on the subject by Dr. Ole Rummel of the Bank of England's Centre for Central Banking Studies.

The Author is especially grateful for the comments from the participants of the Validation Workshop held from 7th – 11th May 2018 in Nairobi, Kenya, which provided the final inputs to the User Guide. The workshop was attended by participants from the central banks of the following COMESA member countries: Burundi, Djibouti, DR Congo, Eswatini, Kenya, Malawi, Mauritius, Sudan, Uganda, Zambia, and Zimbabwe.
Chapter 1
Introduction to Modelling for Analysis of Monetary Policy Transmission Mechanism
1.1 Introduction to Modelling
The magnitudes of economic influences are not directly observable, so they must be modelled. Measuring economic influences is the business of a wider field called econometrics, which utilizes the inputs of mathematicians, statisticians and economists (theoretical and applied). Uncovering economic influences makes use of economic data, which in general bears three key characteristics. First, unlike in the natural sciences, economic data is non-experimental and comes in raw form, gathered by local government agencies – such as national statistics agencies, central banks, and ministries of finance, planning and economic development – and various international agencies such as the World Bank (WB), the International Monetary Fund (IMF) – International Financial Statistics (IFS), the United Nations (UN), the Penn World Tables (PWT), etc. Second, the data is random, so working with it involves modelling the randomness and may require undertaking some transformations. Third, in addition to measurement errors, the randomness of the data brings into play the error term – which we explore in due course.
Economic data falls under three broad categories: cross-sectional data, i.e., data collected at a point in time for a number of economic agents or institutional units, e.g., household consumption in a given year, or GDP data collected for a given number of countries at each given date (annually, semi-annually or quarterly); time series data, collected sequentially in time at regular intervals, e.g., Uganda's annual GDP for the period 2000 – 2017; and panel/pooled data, which combines both time series and cross-sectional data. In general, economic data can be disaggregated or aggregated. Disaggregated data is usually for microeconomic modelling (a discipline called microeconometrics) while aggregated data is usually for macroeconomic modelling (a discipline called macroeconometrics).
This User Guide utilizes strictly time series data, which is applied to modern
time series models suitable for the analysis of monetary policy transmission
mechanism using EViews econometric software. Analysis of monetary policy
transmission mechanism begins with model building – which, by and large, is
an art - perfected through continuous research, i.e., the knowledge of the
theory of what is to be modelled. Based on the constructed model(s), economic
relationships and outcomes can both be measured and predicted. In principle, econometric models are essential for statistical inference – estimation, hypothesis testing and forecasting – and for policy analysis.
In practice, the analysis of the monetary policy transmission mechanism has been done using a variety of software packages, of which EViews is just one. Other common ones have been Microfit, Stata, RATS, CATS in RATS, OxMetrics (PcGive), Gauss, Matlab, Dynare, Iris, etc. Importantly, the software chosen depends on the complexity of the model to be estimated. The good news, though, is that almost any of these econometric software packages can perform most of the standard econometric estimations, although they may differ in speed of program execution depending on the complexity of the model. In addition, some are more user-friendly and easier to calibrate than others. Similarly, some have clear advantages in the estimation of models related to certain disciplines
than others. In the following, we familiarize ourselves with the EViews operational environment.
1.2. EViews Operation Environment, Basics and Data
1.2.1 EViews Software
EViews – short for Econometric Views – is a user-friendly and powerful statistical software package designed to provide sophisticated data analysis and forecasting tools. The software comes with an extensive User Guide which contains many useful examples, explanations and tutorials and is extremely well presented. Once you have gained familiarity with the basic concepts and operations of the program, you should be able to perform most operations without consulting the User Guide. Moreover, the program has a very extensive Help menu (one of the entries in the main menu) which is simple to use and sufficient for most users, so one may seldom need to consult the User Guide when conducting statistical analysis. This introductory chapter aims to familiarise users with the fundamentals of working with EViews pertinent to the purpose at hand, i.e., the analysis of monetary transmission mechanisms (MTM). A detailed and in-depth exposition of EViews fundamentals is given in User Guide I, which is in the drop-down menu of the Help menu in earlier versions of EViews, or among the documents in the PDF Docs file in the drop-down menu of the Help menu in later versions, notably EViews 10. All illustrations and output in this User Guide derive from EViews 9.0.
1.2.2 Getting familiarity with EViews Window
Launch the program (i.e., double click on the EViews icon) – this assumes the program has been properly installed on your computer (installation is usually done by authorized IT staff). This brings forth the EViews window as in the screen print below. We want to familiarize ourselves with the title bar, main menu, command window, work space and status line therein.
At the very top of the main window is the title bar, labelled EViews (generally white in later versions). Immediately below the title bar is the main menu. If you move the cursor to an entry in the main menu, say Object, and click the left mouse button, a drop-down menu will appear. Some of the items in the drop-down menu, as shown in the screen print herewith, are listed in black whilst others are in gray. Black items are executable while gray items are not, or are simply unavailable. Clicking on a black entry in the drop-down menu selects the highlighted item.
Below the menu bar is the command window, in which EViews commands, such as log and first-difference transformations, are entered or typed. Each entered command is executed as soon as you press the ENTER key on the keyboard. The area in the middle of the EViews window is the work space, where the various objects that EViews creates are displayed; it is analogous to a sheet of paper in an exercise book or work book on your desk. To close EViews, select File from the main menu and, in the drop-down menu, choose and execute Exit. Alternatively, and much more directly, click on the x button in the upper right-hand corner of the EViews window. If necessary, EViews will warn you and give you the opportunity to save any unsaved work. An EViews work file is saved in much the same way as any other computer-generated document: select File in the main menu, then Save As… in the drop-down menu, and save the file. Subsequently, click on Save through the File menu or use the keyboard shortcut (press the Ctrl key and, whilst holding it down, press S) to save any changes to the file. It is advisable that you save your work continuously.
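To make the command window concrete, here is a minimal sketch of the kind of one-line commands that can be typed there (the series name gdp is illustrative; any series in the work file will do):

    series lgdp = log(gdp)     ' create the natural log of GDP
    series dlgdp = d(lgdp)     ' first difference of log GDP

Each line is executed on pressing ENTER, and the resulting series appear as new objects in the work space.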
1.2.3 Getting Data into EViews
In this demonstration, we use quarterly time series data for Uganda provided in an Excel sheet in the MTM_data folder. The folder includes data points for a suite of variables commonly used in the analysis of the various monetary policy transmission mechanisms: gross domestic product (GDP) in 2009/10 constant prices, the core consumer price index (core_CPI), the mid-rate Uganda shilling/US dollar nominal exchange rate (exr), international oil pump prices (oil_price), the 91-day Treasury bill rate (tbill_91day) – a measure of monetary policy, the end-period stock of private sector credit (PSC), the lending rate (lr) and a measure of broad money (M2), all observed over the period 2000Q1 – 2017Q3, i.e., some 71 seasonally unadjusted observations. Given the data, we would like to read it into an EViews work file so that it can be subjected to statistical analysis. EViews provides sophisticated tools for reading in data from a variety of common data formats and sources, but here I will demonstrate a few of the many different ways of doing this from an Excel file, starting with the easiest – a simple copy and paste.
Launch the EViews program to see the EViews window we have already described above. On the main menu, left click on File and, in the drop-down menu, point the cursor at New and navigate through to Workfile. Click on Workfile, and it is in the Workfile dialog box which pops up (given in the screen shot below) that the user supplies critical information about the data to the software. Starting with the work file structure type, choose from the drop-down menu the type of series at our disposal. You will notice from the MTM_data folder that our data is dated (2000 – 2017) and is at a regular frequency (quarterly); given this, we choose Dated – regular frequency (which incidentally is the EViews default entry). Similarly, under Date specification, choose Quarterly from the drop-down menu. We then supply the Start date and the End date (in the form 2000Q1 for the start date and 2017Q3 for the end date). Although the work file name is optional, here we provide, for convenience, COMESA2018 (for WF) and Demo (for Page). It is important to provide page names, just as it is with sheets in an Excel book, because in the process of executing the analysis or handling multiple tasks it is often the case that we end up with multiple pages/sheets within the same work file. These entries are what we see in the EViews snapshot hereof.
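For users who prefer typing to clicking, the same work file can be created in one line from the command window – a sketch assuming the names chosen above:

    wfcreate(wf=comesa2018, page=demo) q 2000q1 2017q3     ' quarterly work file, 2000Q1 to 2017Q3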
With these entries, press OK to open a new window, given here in the screen print, with two variables: C and resid. These two, namely the constant, C (also known as the intercept), and the residual, form an integral part of an econometric model. They serve to capture important information about the regressand, or dependent variable. The C means that, if we were to hold everything else constant (the famous ceteris paribus notion in economics), the minimum value the dependent variable would take is equal to the magnitude of the constant (be it positive or negative), subject to its statistical significance and the model meeting standard statistical criteria for evaluating the estimated results. Moreover, because it cannot be claimed that the whole of economics, or even the whole of economic theory, can be encompassed in a model – although a well devised model can bring out certain features of interdependence among economic quantities that are not easily comprehended without its help (Beach, 1958) – resid captures, among other things, the unexplained component of the dependent variable. Importantly, note that C and resid are not observable a priori, but are generated with the execution or estimation of a regression model.
The window gives both the Range and the Sample of the data, spanning 2000Q1 to 2017Q3 – some 71 quarterly observations. In addition, the entries we made for the work file and page name in the previous step are also reflected, as COMESA2018 and demo, respectively. Note that if we had not provided the page name upfront (as in the previous screen shot), the software would instead baptise the page (in the place of demo in this screen print) with a default name – Untitled.
Because, as stressed above, it is very important that work file pages are named, if the user has not provided the page name upfront but finds it reasonable to do so, it is equally straightforward to rename the untitled page. All we do is point the cursor at Untitled – the page we want to rename – right click the mouse and select, from the options, Rename Workfile Page…, whereupon we provide the page name of choice.
It is at this point that we load the data. To do so, interactively open the Excel data file in the MTM_data folder. In the Excel file, select and copy the series data for the eight variables, the cell variable labels inclusive – ensure, for emphasis, that both the series names and the series data have been copied. The column for the sample period should be excluded in this process because it has already been declared. In the open EViews window, click on the Quick command in the main menu and choose Empty Group (Edit Series). A click on this produces a spreadsheet similar to that in Excel.
The first column displays the sample range period, spanning from the start to the end date that we supplied in the preceding steps. As can be seen, the first two rows are by default empty but, unlike the second, the first is highlighted blue. Either of these can be used to paste in the data, albeit with different implications which, if curious, you might want to check in practice. In the sheet (though not exactly printed to reflect this), the sample period starts from the third row of the first column.
Click on the second cell in the second column and, while maintaining the cursor in there, make a right-hand click on your mouse and paste in the copied data. As you will see, this simple way of importing the data also brings on board the series names, which then automatically become visible in the work space window.

As can be seen in the screen print, the number of entries in the work space window, in addition to C and resid, grows by the number of series we have imported. Once sure the copy and paste command is complete, click on the x – a small box-like command, highlighted in red at the top-most right-hand corner of the open Excel-like sheet window (containing the copied-in data). In the prompt message, select Yes. A window with all the series, plus C and resid, appears as shown in the screen print.
Note that any mismatch between the actual series length and the length implied by the start and end dates may prompt the software to reject the data paste command. Also, if during the copying a series name (label) is omitted, the software will assign the unnamed column a default name, usually SER01 (SER for series) for the first series column or, in general, SER0i, i = 1, 2, ..., k, for k multiple series. This, however, can easily be replaced and/or renamed with the actual series name through the Genr (generate) command (see the pointer in the print screen above) in the work space window. Alternatively, highlight the variable/series you want to rename and, while keeping the cursor in there, right click on the mouse and choose Rename, then provide the name for the variable/series in question. Also take note that, at times – actually quite often – the name given to a series may matter, and may be rejected by EViews, especially if the name is reserved by the software. In the event that this is so, try changing the name until it is acceptable.
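A one-line alternative, typed in the command window, is the rename command – a sketch assuming the default name SER01 was assigned to, say, the GDP column:

    rename ser01 gdp     ' replace the default series name with the intended one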
The other easy way involves direct importation of the data from the data source, a process we next describe in detail, as before.
To proceed, double click on the EViews icon for the EViews window described earlier. Click on File and, among the entries in the drop-down menu, point the cursor at Import and, following the arrow, choose Import from file… Selecting this takes us to the computer window, whereupon we choose the path where our MTM_data folder is stored. On my PC, the MTM_data folder is stored on the desktop, so I chose the desktop, then navigated through to and opened the COMESA folder, then 2018, and finally double clicked MTM_data.
Executing the steps above should generate an EViews screen as displayed herein. As can be seen from the screenshot, this is a 3-step procedure (see at the top: Excel Read – Step 1 of 3), in the order Next (Step 1 of 3), Next (Step 2 of 3), Next (Step 3 of 3) and Finish. We can also see, under Predefined range (highlighted), that our data is being picked from sheet2 of the Excel workbook, noting that an appropriate choice of Excel book sheet can still be made from the arrow in the dialogue box. Clearly the screen gives the data as it was in Excel, arranged in columns. We also have the option to transpose the incoming data to rows, but only if we have a sufficient reason to do so.
Click Next (Step 2 of 3) and then Next (Step 3 of 3). In step 3, which is Structure of the Data to be Imported, adjust the Basic structure, from the drop-down menu, to Dated – regular frequency (the EViews default is Unstructured/Undated). With this choice, we need to declare the Frequency/date specification. In the dialogue box for Frequency, we choose Quarterly (from the drop-down menu thereof) and type 2000q1 in the dialogue box for Start date (the EViews default is 1), as shown in the screen print.
With these entries, click Finish, which gives a screen like the final screen we had under the copy and paste command. The work file page is automatically named after the Excel folder name, MTM_data.
The other, and possibly the last, way of reading data into EViews that I would like to demonstrate in this User Guide is the file option. As with the previous Import option, double click on the EViews icon for the EViews window described earlier. Click on File and, among the entries in the drop-down menu, point the cursor at Open and, following the arrow, navigate through to Foreign Data as Workfile… Selecting this takes us to the computer window, whereupon we choose the path where our MTM_data folder is stored, pretty much in a similar way as already described under direct importation.
Like the previous description, this file option also loads the data in a 3-step procedure, in the order Next (Step 1 of 3), Next (Step 2 of 3), Next (Step 3 of 3) and Finish. Do not forget that, as described above, Step 3, i.e., Structure of the Data to be Imported, involves some editing: the Basic structure has to be changed, from the drop-down menu, from the EViews default of Unstructured/Undated to Dated – regular frequency, and consequently we have to declare the Frequency/date specification in the respective dialogue boxes. Accordingly, we choose Quarterly (from the drop-down menu) in the dialogue box for Frequency and type 2000q1 in the dialogue box for Start date (the EViews default is 1), as has been described before and shown in the screen print in the text. We realize, at the end of the procedure, that the work file page is automatically named after the Excel folder name, MTM_data, as shown in the screen shot.
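For completeness, this third route also has a command-window counterpart – a sketch assuming the file path used above (adjust to your own location):

    wfopen "c:\comesa\2018\mtm_data.xls"     ' open the Excel file directly as a new work file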
As mentioned earlier, there are many ways of importing data into EViews, but the simplicity or complexity associated with each of them is user specific. It is important that, as users, we aim to choose the easiest way possible – and any one of the above is simple in my judgement.
1.2.4 Viewing the Data
Viewing data in EViews is important, not least because we must verify and be satisfied that the data now in the EViews work space is the actual data that was called in from Excel, as the data may have been distorted during the process of reading it into EViews.

To view the data within the EViews work space, click on and highlight any one series and, while maintaining the cursor in the highlighted series, hold down the Ctrl key on your keyboard (using the alternative hand) and select the rest of the series, one at a time. Once all the series of interest are highlighted, and while maintaining the cursor in the highlighted series, right click on your mouse and choose Open as a Group. This road map is shown in the adjacent screen print.

Execute the Open as a Group command to display the variables' data, as shown in the EViews print screen herewith.
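The same group can equally be opened from the command window – a sketch assuming the series carry the names introduced in Section 1.2.3:

    group g_all gdp core_cpi exr oil_price tbill_91day psc lr m2     ' bundle the series into a group object
    show g_all     ' open the group as a spreadsheet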
Examine the data to ensure it represents the actual data that was intended to be read into EViews. Always remember, whenever you enter data into EViews, to check it thoroughly and carefully so as to ensure that it is what you expect it to be and that it has not been distorted in the process of calling it into EViews. For emphasis: checking your data is boring, but extremely vital.
The open window can then be closed using the x command, highlighted in red at the top-most right-hand corner of the open spreadsheet-like window. In the prompt message, select Yes to revert to the EViews work space in the previous screen print. You are now ready to undertake any data manipulation and analysis. In what follows, we will want to explore the data, including adjusting for seasonality and applying log transformations where necessary, and to generate the output gap – a variable common in monetary policy making circles.
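As a small preview of the transformations taken up in Chapter 4, the following command-window sketch derives a log series and a Hodrick-Prescott output gap (series names are illustrative; λ = 1600 is the conventional smoothing value for quarterly data):

    series lgdp = log(gdp)                  ' log of real GDP
    lgdp.hpf(lambda=1600) lgdp_trend        ' Hodrick-Prescott trend of log GDP
    series gap = lgdp - lgdp_trend          ' output gap as deviation from trend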
In the next section, we discuss the known channels of monetary transmission
mechanism to inform the variable and data needs of this extensive exercise.
Chapter 2
Theoretical Aspects of Monetary Policy Transmission Channels
2.1 Introduction
The monetary transmission mechanism (MTM) describes the process through which monetary policy decisions impact an economy. Simply put, when a central bank decides on a monetary policy action, it sets in motion a sequence of events, with the initial impact on financial markets. This impact, in turn, slowly (with a lag) permeates its way through to changes in aggregate demand (private consumption and investment), which then influences current production levels, wages and employment, and in the process steers domestic prices in the direction desired by the monetary authority – low and stable inflation, the ultimate goal of monetary policy. It is this chain of developments, triggered by monetary policy actions, which constitutes the monetary policy transmission mechanism. As we delve into this discussion, it is in order that I put in a disclaimer: a great majority of the discussion in this section is adapted from Mugume (2011, 5-10), with express permission, and is slightly modified only where necessary.
Regardless of the monetary policy framework used in practice, successful implementation of monetary policy requires an accurate assessment of how fast the effects of policy changes propagate to other parts of the economy and of the timing and size of these effects. Cecchetti (1995), Mishkin (1995) and Christiano et al. (1997) give comprehensive surveys of the literature on the monetary transmission mechanism. Whilst a consensus on the monetary transmission mechanism has not emerged from this literature (Mugume, 2011), the general thinking is that, traditionally, monetary policy impulses are thought to be transmitted via money or credit channels – the so-called money versus credit view of monetary policy (Davoodi et al., 2013).
In the former, changes in the nominal quantity of money affect spending directly. Transmission in the latter case is indirect: it is triggered when the central bank changes the monetary base through an open market operation to withdraw or inject liquidity in the banking system, with implications for interest rates. This eventually permeates to changes in the prices of a variety of domestic and foreign assets. The literature also agrees on the existence of nominal rigidities: both currency and bank reserves are nominally denominated, and their quantities are measured in terms of the economy's unit of account. This suggests implicitly that if policy-induced movements in the nominal monetary base are to have real effects, nominal prices must respond with a lag to the policy movements, rather than adjusting immediately in a way that leaves the real value of the monetary base unchanged (Mugume, 2011, emphasis mine). Such sources of nominal rigidities, which limit the ability of households to participate in financial markets, include sticky prices and wages and market imperfections. Figure 1 opens up the 'black box' with an eclectic view of monetary policy transmission, identifying the major channels common in the literature.
Figure 1: Theoretical Monetary Transmission Mechanisms
Source: Adapted from Adam (2011, p.9)
The process begins with the transmission of open market operations to market interest rates, either through commercial banks' excess reserves or through the supply of and demand for money more broadly. From there, transmission may proceed through any of several channels. In what follows, we cover in more detail how each channel works.
2.2 Interest Rate Channel
This is the primary mechanism at work in conventional macroeconomic
models, and hinges on the notion that the transmission of monetary policy
depends on private expenditures being interest elastic. The basic idea is that
given some degree of price stickiness, an increase in nominal interest rates, for
example, translates into an increase in the real rate of interest and the user cost
of capital – changes which in turn lead to a postponement in consumption or a
reduction in investment spending and, hence, in the output level and prices. Note for emphasis that, while the central bank has a direct grip only on the short-term nominal interest rate, the effectiveness of monetary policy in this channel will depend on its ability to affect the real interest rate and on the sensitivity of consumption and investment to changes in the price of intertemporal substitution.
This is the mechanism embodied in the New Keynesian macro models developed by Rotemberg and Woodford (1997) and Clarida, Galí and Gertler (1999). The New Keynesian model consists of three equations – the aggregate demand (IS) curve, the aggregate supply (Phillips) curve and the uncovered interest rate parity (UIP) equation – involving three variables: output $y_t$, inflation $\pi_t$ and the short-term nominal interest rate $i_t$. These are given in equations (1)-(3).
The aggregate demand curve for an open economy is represented as:

$$y_t = \alpha_1 y_{t-1} + \alpha_2 r_t + \alpha_3 rer_t + \varepsilon_t^{y} \qquad (1)$$

where $y_t$ is output, $r_t$ is the real interest rate, $rer_t$ is the real exchange rate, $\varepsilon_t^{y}$ is an aggregate demand shock and the coefficients $\alpha_1$, $\alpha_2$ and $\alpha_3$ are the persistence of output, the impact of the interest rate on output and the impact of the exchange rate on output, respectively.
The aggregate supply curve is defined as:

$$\pi_t = \beta_1 \pi_{t-1} + \beta_2 y_t + \beta_3 rer_t + \varepsilon_t^{\pi} \qquad (2)$$

where $\pi_t$ is inflation, $\varepsilon_t^{\pi}$ is an aggregate supply shock and the coefficients $\beta_1$, $\beta_2$ and $\beta_3$ are the persistence of inflation, the impact of output on inflation and the impact of the exchange rate on inflation, respectively.
The uncovered interest rate parity equation that captures the relationship of the domestic economy with the rest of the world can be represented as:

$$S_t = \phi_1 S_{t-1} + (i_t - i_t^{*}) + prem + \varepsilon_t^{s} \qquad (3)$$

where $S_t$ is the nominal exchange rate, $i_t$ is the domestic nominal interest rate, $i_t^{*}$ is the foreign nominal interest rate, $prem$ is the risk premium, $\varepsilon_t^{s}$ is the exchange rate shock and the coefficient $\phi_1$ is the persistence of exchange rate movements.

The real exchange rate can be derived as

$$rer_t = S_t \prod_{i=1}^{n} \left( \frac{p^{i}}{p} \right)^{w_i}$$

where $p$ is the domestic consumer price index, $p^{i}$ is the consumer price index of country $i$, $w_i$ is the weight attached to country $i$ in the basket of countries that trade most with the domestic economy and $n$ is the number of trading partners.
The fourth equation is the monetary policy reaction function, which follows the Taylor rule (Taylor, 1993):

$$i_t = \rho + \vartheta \pi_{t+1} + \gamma y_t + \varepsilon_t \qquad (4)$$

Consistent with the behaviour of inflation targeting central banks, the central bank systematically adjusts the short-term nominal interest rate in response to movements in expected inflation ($\pi_{t+1}$) and the output gap ($y_t$).
This description of monetary policy in terms of interest rates reflects the
observation, noted above, that most central banks today conduct monetary
policy using targets for the interest rate as opposed to any of the monetary
aggregates. In this New Keynesian model, monetary policy operates through
the traditional Keynesian interest rate channel. A monetary tightening, in the
form of a shock to the Taylor rule, that increases the short-term nominal
interest rate translates into an increase in the real interest rate as well when
nominal prices move sluggishly due to costly or staggered price setting. This
rise in the real interest rate then causes households to cut back on their
spending as summarized by the IS curve. Finally, through the Phillips curve,
the decline in output puts downward pressure on inflation, which adjusts only
gradually after the shock.
However, as Bernanke and Gertler (1995) have pointed out, the
macroeconomic response to policy-induced interest rate changes is
considerably larger than that implied by conventional estimates of the interest
elasticities of consumption and investment. This observation suggests that
mechanisms other than the narrow interest rate channel may also be at work in
the transmission of monetary policy. Such alternative paths include the following:
2.3 Money Channel
This channel effectively assumes changes in reserve money are transmitted to
broad money via the money multiplier; that banks are in the business of
creating inside money. The money view of monetary policy assumes aggregate
demand and price levels move in line with money balances used to finance
transactions. It is this idea that forms the basis for broad money representing
the intermediate target in many central bankers’ money-focused monetary
policies (Mishkin, 1998).
2.4 The Wealth Channel
The wealth channel is built on the life-cycle model of consumption developed
by Ando and Modigliani (1963), in which households’ wealth is a key
determinant of consumption spending. The link to monetary policy comes
through the link between interest rates and asset prices. A policy-induced
interest rate increase reduces the value of long-lived assets (stocks, bonds, and real estate), shrinking households' resources and leading to a fall in consumption.
Asset values also play an important role in the broad credit channel developed by
Bernanke and Gertler (1989), but in a manner distinct from that of the wealth
channel. In the broad credit channel, asset prices are especially important in
that they determine the value of the collateral that firms and consumers may
present when obtaining a loan. In 'frictionless' credit markets, a fall in the value of borrowers' collateral will not affect investment decisions; but in the presence of information or agency costs, declining collateral values will increase the premium borrowers must pay for external finance, which in turn will reduce consumption and investment. Thus, the impact of policy-induced changes in interest rates may be magnified through this 'financial accelerator' effect. A direct effect of monetary policy on the firm's balance sheet comes
about when an increase in interest rates works to increase the payments that
the firm must make to service its floating rate debt. An indirect effect arises,
too, when the same increase in interest rates works to reduce the capitalized
value of the firm’s long-lived assets. Hence, a policy induced increase in the
short-term interest rate not only acts immediately to depress spending through
the traditional interest rate channel, it also acts, possibly with a lag, to raise each
firm’s cost of capital through the balance sheet channel, deepening and
extending the initial decline in output and employment.
2.5 The Exchange Rate Channel
This is an important element in conventional open-economy macroeconomic
models. The chain of transmission here runs from interest rates to the
exchange rate via the uncovered interest rate parity condition relating interest
rate differentials to expected exchange rate movements. Thus, under floating
exchange rates and perfect capital mobility, arbitrage between domestic and
foreign short-term government securities causes incipient capital flows, which
change the equilibrium value of the exchange rate required to sustain
uncovered interest parity. With sticky prices, this change in the nominal
exchange rate is reflected in a real exchange rate depreciation that induces
expenditure switching between domestic and foreign goods. The effectiveness
of this channel depends on the central bank’s willingness to allow the exchange
rate to move, on the degree of capital mobility, on the strength of expenditure
switching effects (this depends on the commodity composition of production
and consumption), on the importance of currency mismatches, and on the
degree of exchange rate pass through.
2.6 Bank Based Channels
There are two distinct bank-based transmission channels. In both, banks play a
special role in the transmission process because bank loans are imperfect
substitutes for other funding sources.
2.6.1 The traditional bank lending channel
According to this view, banks play a special role in the financial system because
they are especially well suited to solve asymmetric information problems in
credit markets. Because of banks’ special role, certain borrowers will not have
access to credit markets unless they borrow from banks. As long as there is no
perfect substitutability of retail bank deposits with other sources of funds, the
bank-lending channel operates as follows. Expansionary monetary policy,
which increases bank reserves and bank deposits, increases the quantity of bank
loans available. Because many borrowers are dependent on bank loans to
finance their activities, this increase in loans will cause investment and
consumer spending to rise. An important implication of the bank lending
channel is that monetary policy will have a greater effect on expenditure by
smaller firms, which are more dependent on bank loans, than it will on large
firms, which can get funds directly through stock and bond markets (and not
only through banks).
2.6.2 Bank capital channel
In this channel, the state of banks’ and other financial intermediaries’ balance
sheets has an important impact on lending. A fall in asset prices can lead to
losses in banks’ loan portfolios; alternatively, a decline in credit quality, because
borrowers are less able or unwilling to pay back their loans, may also reduce the
value of bank assets. The resulting losses in bank assets can result in a
diminution of bank capital, as has occurred during the recent financial crisis.
The shortage of bank capital can then lead to a cutback in the supply of bank
credit, as external financing for banks can be costly, particularly during a period
of declining asset prices, implying that the most cost-effective way for banks to
increase their capital to asset ratio is to shrink their asset base by cutting back
on lending. This deleveraging process means that bank-dependent borrowers
are now no longer able to get credit and so they will cut back their spending
and aggregate demand will fall. Expansionary monetary policy can lead to
improved bank balance sheets in two ways. First, lower short-term interest
rates tend to increase net interest margins and so lead to higher bank profits
which result in an improvement in bank balance sheets over time. Second,
expansionary monetary policy can raise asset prices and lead to immediate
increases in bank capital. In the bank capital channel, expansionary monetary
policy boosts bank capital, lending, and hence aggregate demand by enabling
bank-dependent borrowers to spend more.
2.7 Balance Sheet Channel
Like the bank-lending channel, the balance sheet channel arises from the
presence of asymmetric information problems in credit markets. When an
agent’s net worth falls, adverse selection and moral hazard problems increase in
credit markets. Lower net worth means that the agent has less collateral,
thereby increasing adverse selection and increasing the incentive to boost risk
taking, thus exacerbating the moral hazard problem. As a result, lenders will be
more reluctant to make loans (either by demanding higher risk premia or
curtailing the quantity lent), leading to a decline in spending and aggregate
demand. A particularly convenient, and widely adopted, model of this type is the financial accelerator framework of Bernanke and Gertler (1989) and
Bernanke, Gertler and Gilchrist (1999), in which lower net worth increases the
problems associated with asymmetric information in debt financing, thereby
increasing the external finance premium.
Monetary policy affects firms’ balance sheets in several ways. Contractionary
monetary policy leads to a decline in asset prices, particularly equity prices,
which lowers the net worth of firms, which leads to a decline in lending,
spending and aggregate demand. Another way is through cash flow, the
difference between cash receipts and cash expenditure. Contractionary
monetary policy, which raises interest rates, causes firms’ interest payments to
rise, thereby causing a fall in cash flow. With less cash flow, the firm has fewer
internal funds and must raise funds externally. Because external funding is
subject to asymmetric information problems and hence an external finance
premium, additional reliance on external funds boosts the cost of capital,
curtailing lending, investment and economic activity. An interesting feature of
the cash flow channel is that nominal interest rates affect firms’ cash flow, in
contrast to the role of the real interest rate emphasized in neoclassical channels.
Furthermore, the short-term interest rate plays a special role in this
transmission mechanism, because interest payments on short-term (rather than
long-term) debt typically have the greatest impact on firms’ cash flow.
Additional asset price channels are highlighted by Tobin’s (1969) q‐theory of
investment and Ando and Modigliani’s (1963) life‐cycle theory of consumption.
Tobin’s q measures the ratio of the stock market value of a firm to the
replacement cost of the physical capital that is owned by that firm. All else
being equal, a policy‐induced increase in the short‐term nominal interest rate
makes debt instruments more attractive than equities in the eyes of investors;
hence, following a monetary tightening, equilibrium across securities markets
must be re-established in part through a fall in equity prices. Faced with a lower
value of q, each firm must issue more new shares of stock in order to finance
any new investment project; in this sense, investment becomes costlier for the
firm. Therefore, in the aggregate, across all firms, investment projects that were
only marginally profitable before the monetary tightening go unfunded after
the fall in q, leading output and employment to shrink as well. Meanwhile,
Ando and Modigliani’s life‐cycle theory of consumption assigns a role to
wealth as well as income as key determinants of consumer spending. Hence,
this theory also identifies a channel of monetary transmission: if stock prices
fall after a monetary tightening, household financial wealth declines, leading to
a fall in consumption, output, and employment.
2.8 Expectations Channel
Expectations are so central to monetary transmission that they deserve to be
analysed in detail. Assuming rational expectations, the precise effect of a policy
change on expectations can vary at different points in time or in the business
cycle. The market’s response will depend on the external and internal
environments and on the policy regime. The uncertainty about the impact of the
policy change on the economy enhances the need for a credible and
transparent regime. Central bank credibility will play a leading role, permitting
agents to evaluate more clearly the consistency of a specific policy decision.
With a credible inflation target, for example, monetary policy will be anchored
to the target in the medium term, allowing agents to generate a clearer and less
erratic expectation of the future behaviour of the policy rate, and diminishing
the impact of temporary disturbances that are likely to reverse in the future. If
the nominal objective is credible, the term structure associated with a reduction in
the policy rate, for example, must be consistent with the fact that future
expected policy rates—that partially determine long-term interest rates today—
reflect compliance with the policy goal. If, on the contrary, the target is not
credible or—more generally—there is no clarity on the central bank’s objective,
the effect on the rate structure will be more ambiguous. The market will infer
future policy actions by looking at the currently available information. Thus,
the impact of a policy decision today on the global rate structure of the
economy should be more predictable—with a given financial structure— the
greater the degree of credibility in the goals of the central bank.
Chapter 3
Motivating VAR Modelling for Analysis of MTM
3.1 Introduction to VAR
Monetary economics focuses on the behaviour of prices, monetary aggregates, nominal and real interest rates, and output. Vector autoregressive (VAR) methods, with origins in the seminal work of Sims (1980), have become 'a workhorse' in much of the empirical analysis of the interrelationships between these variables and for uncovering the impact of monetary policy on the real economy – the so-called monetary policy transmission mechanism. The novelty of VAR methods stems from their structure (Hamilton, 1994: 326-7), which offers both empirical tractability and a link between data and theory in economics, as it uncovers and describes data facts and characteristics. The technique takes into account the interactions between macro variables over time and, as shall neatly be shown, allows a distinction in estimating the long-run (equilibrium) and short-run (adjustment to equilibrium) relations – a popularity tied to the concept of cointegration (Engle and Granger, 1987). There is one equation for each and every variable, so all variables in the system are treated as potentially endogenous. Each variable is explained by its own lags and lagged values of the other variables.
Reduced-form VAR models treat the economy as a black box and aim only to identify the linkages between some key inputs and key outputs of interest. Specifically, assumptions about exogeneity are tested for directly, avoiding strong a priori assumptions; thus, by design, a key advantage of reduced-form VAR models is that they allow the data to speak freely about the empirical content of the model. The approach is much more atheoretical, i.e., one does not have to maintain the existence of, estimate or test specific theoretical formulations of any economic relationship; rather, it invokes economic theory to choose the variables to include in the analysis, to select the appropriate normalization and to interpret the results.
Based on extensive empirical research, Christiano et al. (1999) derived stylized facts about the effects of contractionary monetary policy shocks (defined as deviations from the monetary policy rule, obtained by considering an exogenous shock which does not alter the response of the monetary policy-maker to macroeconomic conditions) in a closed economy setting. They conclude that plausible models of the MTM should be consistent with at least the following evidence on prices, output and interest rates: i) the aggregate price level initially responds very slowly; ii) interest rates initially rise; and iii) aggregate output initially falls, with a J-shaped response and a zero long-run effect of the monetary policy shock (long-run monetary policy neutrality). However, VAR models of monetary policy shocks have concentrated exclusively on simulating the response of the economy to shocks, leaving out the systematic component of monetary policy, as formalized in the monetary policy reaction function. In what follows, we first explore the theoretical aspects of reduced-form VAR methods, which are a building block for the formulation of the structural VAR applied to the analysis of the MTM.
3.2 Exploring the VAR(p) Methods for the Analysis of MTM
For ease of understanding, consider a bivariate VAR system for $z_1$ and $z_2$, each lagged once, i.e. a VAR(1), set out as in eqn. 5:

$$z_{1t} = a_{10} + a_{11} z_{1,t-1} + a_{12} z_{2,t-1} + \varepsilon_{1t}$$
$$z_{2t} = a_{20} + a_{21} z_{1,t-1} + a_{22} z_{2,t-1} + \varepsilon_{2t}$$

In matrix notation,

$$\underbrace{\begin{pmatrix} z_{1t} \\ z_{2t} \end{pmatrix}}_{\mathbf{Z}_t} = \underbrace{\begin{pmatrix} a_{10} \\ a_{20} \end{pmatrix}}_{\mathbf{A}_0} + \underbrace{\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}}_{\mathbf{A}_1} \underbrace{\begin{pmatrix} z_{1,t-1} \\ z_{2,t-1} \end{pmatrix}}_{\mathbf{Z}_{t-1}} + \underbrace{\begin{pmatrix} \varepsilon_{1t} \\ \varepsilon_{2t} \end{pmatrix}}_{\boldsymbol{\varepsilon}_t}$$

compacted to

$$\mathbf{Z}_t = \mathbf{A}_0 + \mathbf{A}_1 \mathbf{Z}_{t-1} + \boldsymbol{\varepsilon}_t \qquad (5)$$
In all these equivalent expressions, each observed variable is expressed as a linear function of its own past values (own lags) and past values of the other variable, plus a white noise residual, with no current-dated variables on the right-hand side (RHS); i.e. all regressors are predetermined. In other words, all variables are potentially endogenous. Each of the $\varepsilon_{it}$ is assumed to be stationary or, more technically, integrated of order zero, I(0), with zero mean and homoskedastic variance, that is, $E[\varepsilon_{1t}] = E[\varepsilon_{2t}] = 0$ and $E[\varepsilon_{1t}^2] = E[\varepsilon_{2t}^2] = \delta_i^2$; the mean and variance are time invariant. A further assumption is that the autocovariances are zero, i.e. $E[\varepsilon_{1t}\varepsilon_{1,t-1}] = 0$ and $E[\varepsilon_{2t}\varepsilon_{2,t-1}] = 0$. Critically, in the VAR set-up, shocks such as $\varepsilon_{1t}$ and $\varepsilon_{2t}$ can be contemporaneously correlated across equations. In this case, $E[\varepsilon_{1t}\varepsilon_{2t}] \neq 0$, which guarantees that the structural/causal relationships are embedded in the data, although not explicitly modelled, at least at first. More generally, a VAR(p) system of k autoregressive equations becomes:
$$\mathbf{Z}_t = \mathbf{A}_0 + \mathbf{A}_1 \mathbf{Z}_{t-1} + \mathbf{A}_2 \mathbf{Z}_{t-2} + \cdots + \mathbf{A}_p \mathbf{Z}_{t-p} + \boldsymbol{\varepsilon}_t \qquad (6)$$

where $\mathbf{Z}_t$ is a (k×1) vector of variables (= equations) at time t; $\mathbf{A}_0$ is a (k×1) vector of constants; the $\mathbf{A}_i$ are (k×k) matrices of coefficients; and $\boldsymbol{\varepsilon}_t$ is a (k×1) vector of errors at time t, assumed to be identically and independently distributed, i.e. the errors have zero mean ($E(\varepsilon_t) = 0$) and homoskedastic variance ($E(\varepsilon_t^2) = \delta^2$), are serially uncorrelated ($E(\varepsilon_t \varepsilon_{t-p}') = 0$ for $p \neq 0$), and have a time-invariant positive definite variance-covariance matrix $\Omega$, i.e. $E(\varepsilon_t \varepsilon_t') = \Omega$. Thus the error terms follow a white noise process, $\varepsilon_t \sim (0, \Omega)$. The residual covariance matrix $\Omega$ has dimensions k × k and contains information about possible contemporaneous effects. Showing this is straightforward. Consider a four-variable VAR, such that in equation 6:
Consider a four variable VAR, such that in equation 6;
𝛆t = (
1𝑡2𝑡3𝑡4𝑡
)
We want to show that 𝐸(휀𝑡휀𝑡′) = Ω and that Ω has dimensions k x k , i.e.,
contains information about possible contemporaneous effects.
Given 𝛆t above,
𝐸(휀𝑡휀𝑡′) = Ω = 𝐸 (
1𝑡2𝑡3𝑡4𝑡
) [1𝑡 2𝑡 3𝑡 4𝑡]
Vector and Structural Vector Autoregressions
|26
Ω = 𝐸
(
1𝑡2 1𝑡2𝑡 1𝑡3𝑡 1𝑡4𝑡
2𝑡1𝑡 2𝑡2 2𝑡3𝑡 2𝑡4𝑡
3𝑡1𝑡 3𝑡2𝑡 3𝑡2 3𝑡4𝑡
4𝑡1𝑡 4𝑡2𝑡 4𝑡3𝑡 4𝑡2)
Exploring the knowledge that 𝐸(휀𝑡2) = 𝛿2, and 𝐸(휀𝑡휀𝑡−𝑝
′ ) = 0 for 𝑝 ≠ 0),
then
Ω=
(
𝛿12 0 0 0
0 𝛿22 0 0
0 0 𝛿32 0
0 0 0 𝛿42)
We also know that 𝐸(휀𝑡2) = 𝛿2 = 1, thus
Ω = (
1 0 0 00 1 0 00 0 1 00 0 0 1
)
⏟ 4 𝑋 4
= 𝐼𝑘 (7)
Eqn. 7 shows that, under these assumptions, an innovation in any one variable has a contemporaneous effect only on itself, signalling a key flaw of the unrestricted VAR in impulse response analysis. The fact that $E(\varepsilon_{it}\varepsilon_{jt}) = 0$ for $i \neq j$ strips the innovations in the model of interactions among the variables, the very interaction that is the essence of the VAR: a shock to $\varepsilon_{it}$ affects variable i but has no contemporaneous effect on variable j. Put otherwise, the VAR does not describe the economic 'structural form', although we may be able to recover it by testing the VAR. Thus, traditional VARs such as the one in eqn. 6 are of limited use. They have been subject to the Lucas critique (Lucas, 1976), primarily for lacking a complete structural economic specification (they are atheoretical), and moreover they are prone to the spurious regression problem when the data are non-stationary.
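Before turning to the data, it may help to see these mechanics outside EViews. The following is a minimal Python sketch (the parameter values, and the use of numpy, are illustrative assumptions, not part of the guide): it simulates the bivariate VAR(1) of eqn. 5 with contemporaneously correlated shocks and recovers the reduced-form coefficients by equation-by-equation OLS, confirming that the residual covariance matrix is generally non-diagonal.

    import numpy as np

    # Simulate the bivariate VAR(1) of eqn. (5) with correlated shocks
    rng = np.random.default_rng(0)
    T = 500
    A0 = np.array([0.1, 0.2])                   # intercepts a10, a20
    A1 = np.array([[0.5, 0.1], [0.2, 0.4]])     # coefficient matrix A1
    Omega = np.array([[1.0, 0.4], [0.4, 1.0]])  # E[eps_t eps_t'], non-diagonal

    eps = rng.multivariate_normal(np.zeros(2), Omega, size=T)
    Z = np.zeros((T, 2))
    for t in range(1, T):
        Z[t] = A0 + A1 @ Z[t - 1] + eps[t]

    # OLS equation by equation: regress Z_t on a constant and Z_{t-1}
    X = np.column_stack([np.ones(T - 1), Z[:-1]])
    B, *_ = np.linalg.lstsq(X, Z[1:], rcond=None)  # column j holds equation j
    resid = Z[1:] - X @ B
    print("Estimated coefficients (intercept row first):\n", B.round(2))
    print("Residual covariance, non-diagonal:\n", np.cov(resid.T).round(2))

Estimating each equation separately by OLS is valid here because every equation shares the same regressors, which is the standard way reduced-form VARs are estimated.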
Chapter 4
Taking MTM Theories and the VAR Model to the Data
To demonstrate as many as possible of the monetary transmission mechanisms discussed in Chapter 2 above, we have provided data in the COMESA2018 EViews work file on seven variables, covering the period 2000Q1-2017Q3 (some 71 observations), all on Uganda except the international oil price index. The variables
are quarterly time series observations on core consumer price index
(Core_CPI), nominal exchange rate (exr)-UGX/USD rate, gross domestic
product (GDP) – in 2009/10 constant prices, weighted average bank lending
rate (lrate), a measure of monetary aggregates – broad money (M2), credit to
the private sector (PSC) and three months Treasury bill rate (tb91) – which is
the key short-term interest rate used by the Bank of Uganda to signal its
monetary policy stance. Two of these variables, i.e., output and price are non-
policy variables and are, as discussed earlier, a focus in the transmission of
monetary policy, irrespective of the monetary policy regime in force. The rest of the variables capture the channels most likely to operate in the COMESA region for which data are readily available, here for Uganda. These include the
interest rate (tb91) channel, the exchange rate (exr) channel, the credit and/or
bank lending (lrate and/or PSC) channel and the money (M2) channel. An
index of international commodity prices (oil_price) is included, but as an
exogenous variable.
Given the data and the purpose of this compilation, it is important to mention upfront that we must focus on the following data-related technical issues:
i. Seasonal adjustment and transformation of series to natural
logarithms
ii. Pre-testing for order of integration
iii. Determining the lag length of VAR and suitability of VAR
iv. Stability of the VAR
The data being of relatively high frequency, it is important that, before anything else, they are adjusted for seasonal effects.
4.1 Adjustments for Seasonal Effects
Economic analysis is focused on business cycles. As such, performing an analysis on variables with seasonality in them is a recipe for incorrectly characterizing cyclical behaviour, and the ensuing results would be spurious (Dejong and Dave, 2007; Nyanzi and Bwire, 2017). Testing and adjusting series for seasonal effects in EViews is done using the popular X12 method of the US Census Bureau. Like other built-in routine procedures in EViews, the X12 method is automated, but it is available only for monthly and quarterly series spanning at least 3 full years.
The series at hand are of quarterly frequency and span 17 years, making them suitable candidates for seasonal adjustment. In applied time series, it is a common suspicion that quarterly GDP, CPI, oil_price and M2 series embed strong seasonal components. This claim can easily be verified with series plots, a process we briefly describe.
To plot these series in EViews, click on any one of the four series, which then gets highlighted. While holding the control key down, click on each of the remaining three series, one at a time. This simple procedure highlights all four series of interest. Once they are highlighted, and while keeping the cursor in the highlighted area, right-click the mouse, then from the drop-down menu choose Open and, following the arrow, choose as Group, as shown in the accompanying road map.
Executing the above procedure opens the series in a spreadsheet-like view, shown in the screen print. While in this view, click on View (at the extreme left-hand corner) and, from the drop-down menu, navigate to Graph…, as highlighted.
Click on Graph and, by default, EViews highlights Basic type under Option Pages and Line & Symbol under Graph type, as in the screen here.
Click OK and you will see the graphs in the screen print. We could as well use the Quick menu in the top-most window for the same results, except that there, instead of opening the series data points, we would be required to highlight the series names.

However, as can be seen, there are scaling issues, especially with the CPI and oil price series: relative to the GDP and M2 series they are hardly noticeable. Given this, we have to allow for dual-axis scaling, whereby we can plot both the CPI and oil price series on the secondary axis. Achieving this is also straightforward.
While in the above EViews graph window, double-click on the legend to yield the screen reprinted here. Here we can do all sorts of manipulation of the graph(s), including choices of secondary axis, which we explore. We first want to specify upfront, from the legend, which variables to put on the secondary axis, or rhs. As we can see, under Legend, Attributes is highlighted, and we can easily edit legend entries by clicking on the legend of interest, e.g. CPI, and editing it to include (rhs), and likewise for oil, as shown, noting that CPI is ordered first while oil is in the fourth position.
Click on Axes & Scaling, one of the entries under Option Pages (at the left-hand side of the window). As we see from the extreme right-hand side of the screen that pops up, in the dialogue box for Series axis assignment all our series are, by default, assigned to the left-hand (primary) axis. Recall that, as per the ordering above, CPI came first, and this is the exact order EViews follows: the first series is highlighted and indicated as #1 CPI (rhs). We want to transfer this series and the one in position 4 (oil (rhs)) to the right axis. Check Right (provided 1 Left is still highlighted), then click on 4 Left and check Right, and in the Dual axis scaling dialogue box change to Overlap scales (lines cross), to get to the following screen print. Click OK to yield Figure 2.
Figure 2: Quarterly GDP, CPI and oil_price in levels
Indeed, as can be seen, GDP and, to some extent, oil prices are ragged, reflecting a degree of seasonality. Within years, GDP tended to be highest in the third quarter and lowest in the first quarter, while oil prices peaked in the first and second quarters (and sometimes, to a lesser extent, in the third quarter). It is still possible, whilst in the graph window, to retrieve the series spreadsheet: click on View and, in the drop-down menu, select Spread Sheet.
While the nominal exchange rate could also embed seasonal components, it has been argued in empirical settings that the seasonality here is not regular and that adjusting the series for seasonal effects would contradict the assumption of rational behaviour in financial markets (Bwire, Opolot and Anguyo 2013). In what follows, we describe the implementation of the X12 procedure in EViews on the quarterly GDP series, a procedure which can then be replicated for the rest of the qualifying series. The seasonality F-tests reported below take the null hypothesis of no seasonality, so rejection indicates that seasonality is present.
We implement this in the COMESA2018 work file, using data in the Demo page. In the workspace, highlight and open the GDP series. In the open series window (an Excel-like sheet), click Proc, point the cursor to Seasonal Adjustment in the drop-down menu of Proc, and select Census X12..., a road map highlighted in the print screen herewith.
Selecting Census X12... brings forth the X12 Options screen, also replicated in the screen print below. In the box for X11 Method, under Seasonal Adjustment, a choice must be made between the EViews default of Multiplicative and Additive, the two most popular adjustment methods in applied time series. The multiplicative method applies when the seasonal fluctuations grow with the level of the series; it requires the series values to be strictly positive. The additive method applies when the seasonal component is roughly constant in magnitude, irrespective of the level of the series. GDP in our application here is trending, with seasonal swings that grow with the level of the series, and has no negative entries.
Given this, we retain the default of Multiplicative, keeping all the other default options as given. Observe that, under Component Series to Save, the Base name for the series to be adjusted is given as GDP, and the final seasonally adjusted series will appear in the workspace window with the extension _SA (where the suffix SA means seasonally adjusted). This box is checked by default. Click OK to implement the procedure.
The resulting output tables are usually very large, but for purposes of demonstration, an extract of the three broadly available F-tests for seasonality in the large results table, performed on the quarterly GDP series, is provided: 1) Test for the presence of seasonality assuming stability; 2) Nonparametric test for the presence of seasonality assuming stability; and 3) Moving seasonality test.
D 8.A F-tests for seasonality

Test for the presence of seasonality assuming stability
                    Sum of Squares   Dgrs. of Freedom   Mean Square    F-Value
Between quarters          716.9275                  3     238.97584   26.200**
Residual                  611.1111                 67       9.12106
Total                    1328.0386                 70
**Seasonality present at the 0.1 per cent level.

Nonparametric test for the presence of seasonality assuming stability
Kruskal-Wallis Statistic   Degrees of Freedom   Probability Level
                 34.1783                    3              0.000%
Seasonality present at the one percent level.

Moving seasonality test
                    Sum of Squares   Dgrs. of Freedom   Mean Square    F-Value
Between Years             185.1699                 16     11.573117      2.005
Error                     277.0066                 48      5.770970
Moving seasonality present at the five percent level.
As can be seen, both the test for the presence of seasonality assuming stability and the nonparametric test find evidence that seasonality is present, at the 0.1 and one percent levels respectively, and so does the moving seasonality test, at the five percent level. Replicating this procedure for CPI and for oil reveals, as expected, the presence of seasonality.
And, as can be seen in the screen print here, all these series bear the extension _SA. Note, however, that the series in question will bear this extension whenever the procedure has been run, even if the tests find no evidence of seasonality. How to use the series going forward is a matter for the user's technical judgement, but generally, in subsequent economic modelling and analysis, the user should revert to the unadjusted series when a variable has no seasonal component, and use the adjusted series where seasonality is present.
The corresponding pairs of unadjusted and seasonally adjusted series are plotted in Figure 3, for visual appreciation of the difference between the true economic cycle and seasonal ups and downs.
Figure 3: Comparing seasonally unadjusted and seasonally adjusted series
[Figure 3 comprises four panels, each plotting a raw series against its seasonally adjusted counterpart over 2000Q1-2017Q3: OIL vs OIL_SA, GDP vs GDP_SA, CPI vs CPI_SA, and M2 vs M2_SA.]
Based on the evidence of seasonal effects, core_cpi, GDP, M2 and oil_price are, going forward, adjusted for seasonal effects, but for simplicity the suffix _sa is dropped.
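For readers who want a rough parallel outside EViews, the sketch below uses Python's statsmodels. Note the hedges: classical multiplicative decomposition is a much simpler method than the Census X12 procedure, and the file and column names are illustrative assumptions.

    import pandas as pd
    from statsmodels.tsa.seasonal import seasonal_decompose

    # Load quarterly GDP (file and column names are assumptions for illustration)
    gdp = pd.read_csv("comesa2018.csv", parse_dates=["date"], index_col="date")["gdp"]
    gdp = gdp.asfreq("QS")  # declare quarterly frequency

    # Multiplicative decomposition with period 4, in the spirit of the X11
    # Multiplicative option (classical decomposition, not X12 proper)
    dec = seasonal_decompose(gdp, model="multiplicative", period=4)
    gdp_sa = gdp / dec.seasonal  # analogous to GDP_SA in the work file
    print(gdp_sa.head())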
4.2 Deriving the Output GAP
Another important variable transformation, common in monetary policy making circles, is the output gap. Though it may not be used here, because we are not analyzing a gap model, it is important to understand how it is derived. It is the deviation of actual output from trend, where trend output is generated in EViews using the built-in Hodrick-Prescott (HP) filter routine applied to the GDP series available to the user. A positive output gap indicates an 'excess demand' position, implying that output is above its trend level; the reverse is true when it is negative. The output gap is an important premise of the Taylor principle in the wider standard neo-Keynesian macroeconomic models used by central banks for policy analysis and forecasting. Here, we describe how the output gap is generated in EViews using the built-in HP filter routine.
Turning to the Demo page of the COMESA2018 EViews work file (WF) that we have used all along, there, among other series, is GDP_sa. The first thing we want to do, also standard practice in applied time series modelling, is to transform the GDP_sa series to natural logarithms. To do this, type in the command window a command of the form genr lgdp = log(gdp_sa), then press the Enter key. This adds a new series, lgdp, to the list of variables in the workspace window.
Now double-click on lgdp to open the series in an Excel-like worksheet. Whilst in the open lgdp spreadsheet, click on Proc; in the drop-down window is Hodrick-Prescott Filter…, as shown in the screen shot.

Clicking on Hodrick-Prescott Filter… brings up the Hodrick-Prescott Filter window, also printed here. This specifies the Smoothed series as hptrend01, which is no more than the GDP trend series (in this case). The Cycle series here is the output gap, computed behind the scenes as the difference between actual (log) GDP and the software's computed GDP trend. Here, I have intentionally declared its series name upfront as 'output_gap', so upon execution it will appear among the other series in the workspace window. If it is not declared upfront, it will not be computed, and the user would have to compute it through the command window with the following command: genr output_gap = lgdp - hptrend01, which inevitably is a longer route. The value of the smoothing parameter, lambda (λ), varies with data frequency:
$$\lambda = \begin{cases} 100 & \text{annual frequency} \\ 1600 & \text{quarterly frequency} \\ 14400 & \text{monthly frequency} \end{cases}$$

Click OK for the output in Figure 4.
Figure 4: Output gap: Hodrick-Prescott Filter (λ=1600)
[Figure 4 plots LGDP and its HP trend on the secondary axis, with the cycle (output gap) on the primary axis, over 2000Q1-2017Q3.]
It is the cycle, or output gap, that is of interest for policy analysis and forecasting, and this appears in our EViews workspace as output_gap. Note, however, for emphasis, that we are not analyzing a gap model; so while generating the output gap is a useful exercise for our knowledge, the variable is not used here.
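The HP-filter step has a direct analogue in statsmodels, shown below as a minimal sketch (the CSV file and column names are illustrative assumptions; the hpfilter function and its lamb parameter are part of statsmodels).

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.filters.hp_filter import hpfilter

    # Load seasonally adjusted GDP and take logs, mirroring genr lgdp = log(gdp_sa)
    gdp_sa = pd.read_csv("comesa2018.csv", parse_dates=["date"],
                         index_col="date")["gdp_sa"]
    lgdp = np.log(gdp_sa)

    # lambda = 1600 for quarterly data, as in the text; hpfilter returns (cycle, trend)
    output_gap, hptrend01 = hpfilter(lgdp, lamb=1600)
    print(output_gap.tail())  # the cycle series, i.e. the output gap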
The next thing we do is express the rest of the variables (duly tested for seasonal effects) in natural logarithms, to stabilize the second moment, the variance; the exception is the interest rates, which are percentages.
In the active COMESA2018 work file, supply, for demonstration purposes, in
the command window, commands of the following form:
genr lcpi = log(cpi_sa)
genr lexr = log(exr)
genr lm2 = log(m2_sa)
genr loil = log(oil_sa)
genr lpsc = log(psc)
Each of these commands is executed on pressing the Enter key. As a result, a whole host of new corresponding series, lcpi, lexr, lgdp, lm2, loil and lpsc, are created and automatically added to the list of variables in the workspace, as shown in the screen print.
All the data are from Bank of Uganda databases. Prior to any empirical analysis of this kind of data and models, it is always a good idea to take some time simply to examine the data. As a precursor, this often requires the modeller to produce a graphical exposition of the level and first difference of each series to, among other things, unearth important data features.
The series in level and first difference are given in Figure 5. Plotting the series
is important, not least because it gives us the opportunity to get familiar with
the data’s important characteristics - features that make modelling even far
more exciting. If we do not know what the features of our data are, we are not
going to be able to develop a particularly good model of them. It is in general
vital that we know our data, especially in time series modelling.
Figure 5: Variables in levels and first differences
[Figure 5 comprises paired panels over 2000Q1-2017Q3: each of lcpi, lexr, lgdp, lM2, loil and lpsc in levels alongside its first difference (dlcpi, dlexr, dlgdp, dlM2, dloil, dlpsc), plus the interest rate series lrate (%) and tb91 (%) in levels.]
Save for tb91 and, to a slight extent, lrate, whose mean-reversion behaviour is somewhat unclear, the series clearly exhibit trend-like behaviour in levels (they are not mean-reverting) but appear stationary in first difference. The first-difference plots also seem to point to a dip in GDP and oil prices around 2008 and a spike in inflation around 2011. These correspond to events that we can all pin down quite easily, namely the 2008 global financial crisis and the increases in world commodity prices beginning in the third quarter of 2011, amplified by unprecedented exchange rate depreciation and rapid expansion in private sector credit in Uganda in 2011. We might want to think of shift dummies to capture these shocks, but only if they can be found to improve the analysis.
4.3 Unit Root Testing
After visual inspection of the data, the series are formally tested for their order of integration, or non-stationarity, and the degree of differencing required to induce stationarity, using the widely used Augmented Dickey-Fuller (ADF) unit root test (Dickey and Fuller, 1979). But before deploying the test statistic, some explanatory notes are in order.
Economic time series are commonly characterised by strong ‘trend-like’
behaviour. Orthodox methods of estimation and hypothesis testing assume
that all variables are stationary (loosely speaking, 'trend-free'). If no account is taken of this trend-like behaviour, the OLS estimator can give rise to highly misleading results (spurious regression; Granger and Newbold, 1974),
often characterised by significant t-ratios and a high explanatory power, even
though the regressors are economically unrelated to the variable being
explained.
In this section, we show how the univariate tests for non-stationarity can be
used to test for the existence of real (as opposed to ‘spurious’) relationships
between data series. Because (most) economic time series have strong trend
components we often find that many different regressions will apparently
‘explain’ the same variable of interest. Each competing regression may have
significant t-ratios and high explanatory power, even though the regressors are
economically unrelated to the variable being explained. This arises due to the
presence of trends in the variables, nothing more. Any trending variable will be
highly correlated with any other, even though the reason for their trends is
separate (in general, correlation does not mean causation). Moreover, where
trends are present in the data, test statistics (t and F) are not distributed
according to their usual distributions and thus standard critical values are
almost always incorrect.
Although the presence of trend-like behaviour is apparent in the series in levels
simply by plotting the data (as in Figure 5 above), the behaviour can be
generated in two distinct ways. Thus, in order to be able to remove the trend in
the data, it is essential that we know what type of mechanism is creating the
trend in the first place. To go about this, consider the autoregressive [AR]
model,
$$z_t = c_1 + c_2 t + \rho z_{t-1} + \varepsilon_t; \qquad \varepsilon_t \sim nid(0, \delta^2) \qquad (8)$$
This class of process embodies a wide range of processes, some of which
exhibit trend-like behaviour, and some that do not.
If we assume that $c_1 = c_2 = \rho = 0$, then $z_t = \varepsilon_t$ is devoid of any 'structure' (i.e., $z_t$ is a completely random process) and thus clearly cannot have any trend-like behaviour.

For $c_1 \neq 0$, $c_2 = 0$, the process has a non-zero mean, but still no trend.

For $0 < \rho < 1$, the series has some structure (it is related to its previous value, i.e. what happens today, and in the past, influences what happens tomorrow: some dependence) but no trend. Such processes are called stationary, or integrated of order zero, denoted I(0).

If we now allow $c_2 \neq 0$, $z_t$ has a deterministic (linear) trend and is said to be stationary about trend, or trend stationary. Note that series of this sort will always rise ($c_2 > 0$) or fall ($c_2 < 0$), albeit with some 'noise'. Such deterministic trend behaviour approximates many processes we tend to find in the real world.

A different class of series, called non-stationary and often integrated of order one, denoted I(1), are said to contain a unit root; they occur when, in (8), $\rho = 1$.
Besides exhibiting trend-like behaviour, I(1) processes may owe their unit root to a stochastic trend. A stochastic trend is generated through the accumulation of the random process $\varepsilon_t$. It may seem counter-intuitive, but the accumulation of a white noise process such as $\varepsilon_t$ produces a series that has long swings of trend-like behaviour, which we describe as a stochastic trend. Over (arbitrarily) long periods of time, the 'trend' may be positive and then become negative. The point at which the 'trend' switches, the magnitude of the 'trend' and the duration of the period in which a 'trend' is apparent are completely random, depending simply on the run of disturbances that make up the stochastic process $\varepsilon_t$ (the inverted commas are used because the behaviour is not actually due to the presence of a deterministic linear trend; it simply looks like it). Asymptotically (i.e. as $t \to \infty$) the distinction between stochastic and deterministic trends is obvious; however, even in samples of 100 observations they can easily be confused, even though the mechanisms generating the behaviours are poles apart (one is deterministic, the other stochastic).
Random Walk process:

The idea of stochastic trends is fundamental to modern time series analysis, not least because they seem to mimic rather well the real-world series that we observe in economics. To fix the idea of the stochastic trend, consider the following analogy. Imagine we are tracing the journey of a man who takes a step forward to the left if the toss of a coin comes up tails and a step forward to the right when it comes up heads. Each step forward (whether to the left or the right) on the path of his randomly selected footsteps will take him away from his starting point. Successive 'tails' will mean he walks forward in a leftwards direction. Occasional 'heads' on this part of his journey will cause his path to deviate, but he will still be walking in an approximately leftward direction. Only when there are a number of successive heads will he appear to change course and head off in the rightwards direction. Changes in direction are akin to changes in trend and, because they are caused by chance (successive flips of the coin that come up heads or tails), a time series that contains stochastic trends is often called a random walk. So, although each step is randomly to the right or left, this does not mean that he stays put; far from it. Because his journey represents the accumulation of his steps, some in the leftwards direction and some in the rightwards direction, he meanders further away from his starting point the more steps there are to his journey.
It may strike you as odd, but this random walk behaviour mimics rather well the sort of 'trend-like' behaviour we observe in economic time series, in that sometimes an economic time series (say, GDP) trends upward, sometimes downward, and sometimes it appears to move without any trend-like behaviour at all. In contrast, if a true (linear) deterministic trend were driving the process, the trend would relentlessly take the variable in one direction only; deviations from upward (or downward) trend would only be transient. It is not plausible to observe a series display a mixture of upward, downward (or indeed no) trend if a deterministic trend is driving it (except if the trend varies across sub-periods). Such deterministic behaviour of relentless growth or decline, while possible in economics, is not really plausible, particularly over long time scales.
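The distinction is easy to see in a small simulation. The sketch below (illustrative values, not from the guide) generates a pure random walk, the accumulation of white noise, alongside a trend-stationary series, and shows that first differencing reduces the random walk back to white noise.

    import numpy as np

    rng = np.random.default_rng(1)
    T = 200
    eps = rng.standard_normal(T)  # white noise disturbances

    # Random walk: z_t = z_{t-1} + eps_t, i.e. the accumulation of eps
    random_walk = np.cumsum(eps)

    # Trend-stationary series: y_t = 0.1*t + eps_t (the slope 0.1 is arbitrary)
    det_trend = 0.1 * np.arange(T) + eps

    # Differencing the random walk recovers eps exactly (up to the first value);
    # de-trending the deterministic-trend series likewise leaves only noise
    print(np.allclose(np.diff(random_walk), eps[1:]))        # True
    print((det_trend - 0.1 * np.arange(T)).std().round(2))   # pure noise remains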
As we will be showing, it is important to be able to distinguish between
deterministic and stochastic trends. Testing for the existence of trend is called
testing for non-stationarity (or testing for a unit root) and this is now a
standard pre-test that is conducted prior to all regression analysis involving
time series data. Note that if the ‘trend’ is deterministic, the process is
sometimes called trend-stationary whereas if the ‘trend’ is stochastic, the process
is said to be difference-stationary. These additional labels simply reflect that ‘de-
trending’ removes a deterministic trend whereas differencing removes a
stochastic trend. Strictly speaking, taking the first difference will remove both
types of trend-like behaviour. Subtracting a linear trend on the other hand will
not remove a stochastic trend, but only a deterministic trend. The downside is
that while this removes the trends in the data, it throws away valuable
information about the ‘long-run’ behaviour of the variables (about which
economic theory is informative) leaving only the ‘short-run’ behaviour (the
deviation about trend), about which economic theory has little to say. As we show later, modern estimation techniques for the non-stationary environment allow us to describe both short- and long-run behaviour while avoiding the 'spurious regression' that is so common with I(1) data. In the literature, the terms difference stationary, I(1), unit root and non-stationary all denote a non-stationary process and are used interchangeably.
Given the discussion above, visual inspection of the graphs (as in Figure 5), complemented by a look at the level and first-difference autocorrelation plots (a process we describe below), is a really good starting point for categorising the series.
Taking the example of lgdp, now familiar: to generate the autocorrelations, double-click on the lgdp series in the COMESA2018 EViews file to open the series. In the window containing the data spreadsheet, click on the View icon at the extreme left-hand corner and select Correlogram…, the route shown in the screen print.
In the small window that opens from the above route (also shown here), under Correlogram of, choose Level (for level data) or 1st difference (for first-differenced data). We first choose Level. Click OK to generate the figure in the left-hand panel of Figure 6. Now go through the same process, but this time choose 1st difference, to generate the figure in the right-hand panel of Figure 6.
Figure 6: Level, first difference autocorrelations
[Each panel reports the autocorrelation and partial correlation functions: levels on the left, first differences on the right.]
The autocorrelations of a non-stationary series tend to decay very slowly to zero as the lag length grows, while the decay is rapid for a stationary series. The autocorrelation behaviour in levels and first differences thus mimics non-stationarity and stationarity, respectively. Nonetheless, appropriate categorization requires a formal test.
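The correlogram comparison can be reproduced in Python with statsmodels' plot_acf, sketched below (the CSV file and column names are illustrative assumptions).

    import numpy as np
    import pandas as pd
    from statsmodels.graphics.tsaplots import plot_acf

    # Log seasonally adjusted GDP (file and column names are assumptions)
    lgdp = np.log(pd.read_csv("comesa2018.csv", index_col=0)["gdp_sa"])

    plot_acf(lgdp, lags=20)                   # level: slow decay, as for a unit root
    plot_acf(lgdp.diff().dropna(), lags=20)   # first difference: rapid decay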
In theory, a vector $z_t$ is said to be integrated of order d (i.e., $z_t \sim I(d)$) if the variables in $z_t$ must be differenced d times to induce stationarity. A commonly used test of non-stationarity is the Augmented Dickey-Fuller (ADF) test (Dickey and Fuller, 1979). This test is designed to distinguish between stationary (about mean or trend) and non-stationary processes.

Rather than estimating equation (8) directly, Dickey and Fuller proposed subtracting $z_{t-1}$ from both sides, a manipulation that gives:

$$\Delta z_t = c_1 + c_2 t + \gamma z_{t-1} + \varepsilon_t \qquad (9)$$

where $\gamma = \rho - 1$, since testing for the presence of a unit root is testing that $\rho = 1$. To allow for more general autoregressive [AR(p)] processes, the ADF regression takes the general form given by

$$\Delta z_t = c_0 + c_2 t + \gamma z_{t-1} + \sum_{i=1}^{p} \beta_i \Delta z_{t-i} + \varepsilon_t \qquad (10)$$

where $c_0$ is the intercept term, $c_2$ and $\gamma$ are the coefficients of the time trend and of the level of the lagged dependent variable, respectively, $\Delta$ is the first difference operator and $\varepsilon_t$ are white noise residuals. $p$ is the number of lagged $\Delta z_t$ terms required to achieve white noise residuals in the ADF regression of $z_t$.
The appropriate order of $p$ is an empirical issue; in practice, $p$ is set high enough to include the 'true' lag length for the variable and is selected on the basis of the Akaike (AIC), Schwarz (SC) and Hannan-Quinn (HQ) information criteria. As shown in the screen shot, in the drop-down menu under Test type, the EViews default is Augmented Dickey-Fuller (ADF for short).
These criteria have the same basic formulation, i.e. they derive from the log likelihood (LR) function but penalise the loss of degrees of freedom from extra lags to different degrees (see Juselius 2006: 70-1 for a detailed exposition of these frameworks). In practice, we choose the model which minimizes each model selection criterion. The marginal cost (or penalty) of adding redundant regressors is greater with the Schwarz Bayesian Criterion (SBC) than with the AIC, and hence they need not all select the same preferred model. Although this lack of unanimity is frustrating, unless the unit root inference changes between models of differing lag length, the choice of lag length is, from a practical perspective, irrelevant. Remember that lagged dependent variables are introduced to remove autocorrelation, so if the model does not have any, you do not need to correct for it. Overall, the SBC has superior large-sample properties (it is consistent), but because it is prone to selecting an under-parameterised (and hence potentially serially correlated) model in small (i.e. usual) samples, the AIC is often preferred.
In eqn. 10, if $\gamma = 0$, the sequence $z_t$ contains a unit root; otherwise it is stationary. The ADF test is therefore conducted under the null hypothesis that $\gamma = 0$, which is rejected if the calculated t-statistic is more negative than the critical value reported by Dickey and Fuller (1981), the τ statistic (see the ADF table in the attachment, or Table A in Enders, 2010: 488). The t-ratio on $\gamma$ is called the ADF test statistic, and it is constructed to account for the fact that the critical values of the t-statistic depend on whether an intercept ($c_0$) and/or a time trend ($t$), or neither, is included in the regression equation, and on the sample size (Enders 2010: 206).
In an estimated ADF equation, an exercise we turn to at length shortly, this appears in the upper results window, shown here for the ADF test equation (eqn. 10) on lrate.
Based on the same equation, we also evaluate whether the data generating process (DGP) is characterized by non-stationarity with or without a linear deterministic trend and a drift. This involves testing joint hypotheses on the coefficients of interest, i.e. $\gamma$, $c_0$ and $c_2$. However, under non-stationarity the computed ADF test statistic does not follow a standard t-distribution but rather a Dickey-Fuller (DF) distribution, and so the critical values for these joint tests are also non-standard. They follow the non-standard F-statistics denoted $\phi_2$ and $\phi_3$, which are constructed in the same way as ordinary F-tests (adapted from Enders, 2010: 207), i.e.

$$\phi_i = \frac{[SSR(\text{restricted}) - SSR(\text{unrestricted})]/r}{SSR(\text{unrestricted})/(T - k)} \qquad (11)$$

where $SSR(\text{restricted})$ and $SSR(\text{unrestricted})$ are the sums of squared residuals from the restricted and unrestricted models, $r$ is the number of restrictions, $T$ is the number of usable observations and $k$ is the number of parameters estimated in the unrestricted model.
The joint hypothesis $c_0 = c_2 = \gamma = 0$, i.e. the significance or otherwise of the constant term and time trend together with non-stationarity, is tested using the $\phi_2$-statistic. The null hypothesis is that the data are generated by the restricted model and the alternative is that they are generated by the unrestricted model. Thus, if $\phi_2$ (calculated) is smaller than $\phi_2$ (critical), we accept the restricted model and conclude that the restriction is not binding.

Similarly, the joint hypothesis $c_2 = \gamma = 0$, i.e. that the sequence $z_t$ contains a unit root and no linear deterministic trend, is tested using the $\phi_3$-statistic, and is evaluated on exactly the same grounds as the $\phi_2$-statistic: the restricted model is accepted, and the restriction is not binding, if $\phi_3$ (calculated) is smaller than $\phi_3$ (critical). Critical values for the $\phi_i$-statistics are obtained from Table B in Enders (2010: 489).
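The arithmetic of eqn. 11 is simple enough to code directly. The helper below is a sketch; the SSR values fed to it are hypothetical numbers chosen purely to show the calculation, not results from the guide.

    def phi_statistic(ssr_restricted: float, ssr_unrestricted: float,
                      r: int, T: int, k: int) -> float:
        """F-type statistic of eqn. (11): r restrictions, T usable
        observations, k parameters estimated in the unrestricted model."""
        return ((ssr_restricted - ssr_unrestricted) / r) / (ssr_unrestricted / (T - k))

    # Hypothetical inputs purely to illustrate the arithmetic; the result
    # would be compared with the non-standard critical values in Table B
    print(round(phi_statistic(0.0070, 0.0057, 2, 67, 6), 2))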
The ADF test is known to have low power if the series has undergone a
(permanent) regime shift during the period under consideration (Harris and
Sollis, 2005: 57) or if there are outliers in regression residuals. Such economic
behaviour needs to be included in the deterministic part of the model and is
likely to bias estimates and result in invalid inference if ignored (Juselius, 2003).
More precisely, Perron (1989) argues that when there are structural breaks, the various Dickey-Fuller test statistics are biased towards non-rejection of a unit root, when in reality the series could simply be trend-stationary but characterized by a structural break, which the test fails to take into account. Thus, because a break in the deterministic time trend introduces a spurious unit root, one econometric procedure for testing for unit roots in the presence of a structural break involves splitting the sample into the q regimes arising from the structural breaks and subjecting each part of the sample to the ADF test. Taking, for example, the simplest case where q = 2, i.e. one structural break, the sample period would have to be split into two sub-regimes, $1 \leq t \leq \tau T$ and $\tau T < t \leq T$, with $\tau$ being the proportion of the way through the sample at which the break occurs.
However, this approach is limited on two grounds. First, the degrees of freedom for each of the resulting regressions are greatly diminished, creating the likelihood that the probability estimates and associated critical values are unreliable for inference and lack power in small samples (MacKinnon, 1996). Second, the break point may not be known a priori (only known to be identifiable with the CATS procedure). It is therefore preferable to have a single test based on the full sample (Perron, 1989). The test is based on estimates of the following equation:

$$\Delta z_t = \alpha_0 + \alpha_1 z_{t-1} + \alpha_2 t + \mu_2 D_L + \sum_{i=1}^{k} \beta_i \Delta z_{t-i} + \varepsilon_t \qquad (12)$$
where $t$ denotes the time trend and $D_L$ is, in general, a dummy that allows for a one-time change in the drift, or a one-time change in both the mean and the drift, such that $D_L = 0$ for $t = 1, \ldots, \tau$ and $D_L = 1$ for $t = \tau + 1, \ldots, T$. The subscript L indicates that the level of the dummy changes. This equation is the formal Phillips-Perron (P-P) procedure for testing for unit roots in the presence of structural change at time period $\tau + 1$, based on the full sample, and tests $H_0: a_1 = 1$, that is, that $z_t$ is a unit root process, allowing for a structural break. Note that, unlike the ADF test, for which we can evaluate the significance of the $c_1$, $c_2$ and $\gamma$ coefficients against the critical values, the P-P test only generates critical values for testing the significance of the $a_1$ coefficient, also the P-P statistic.
4.3.1 Demonstration of unit root testing using ADF test
Given the exposition above, in the following we conduct unit root tests for all the variables in Figure 5, using the most popular test, the ADF. To proceed, make our COMESA2018 EViews work file active; for demonstration purposes, we test for a unit root (or roots) in the lcpi variable.

Double-click on lcpi. This opens a spreadsheet of the lcpi raw data. In this spreadsheet window, click on View (at the extreme left-hand corner of the window); in the drop-down menu lies the Unit Root Test… entry, as highlighted here in the screen print.
Choosing Unit Root Test… brings forth the Unit Root Test dialogue box, also printed here.

As can be seen, under Test Type, checking the drop-down key reveals all the available unit root tests, of which we have discussed only the ADF and P-P, in both of which the null hypothesis is that the series contains a unit root (or roots). Please note that you will need to understand each of the other tests fully before you can ably apply them; the null hypothesis of non-stationarity does not hold for all of them! As mentioned earlier, I have limited this demonstration to the ADF unit root test, as it appears in the Test Type box. The next thing the user has to decide is whether to test for a unit root in the level, 1st difference or 2nd difference.
EViews understands that you will rarely encounter a time series integrated of order more than two. The practice is first to test for a unit root in the level and, depending on our judgement of the series' order of integration, i.e. the number of times it must be differenced to become stationary, decide whether it is I(1) or I(2), where the former calls for the 1st difference and the latter for the 2nd difference.

We first conduct the unit root test on lcpi in level. We also have to decide on the deterministic terms to include in the ADF equation, given by eqn. 10 in the text. EViews provides all possible combinations of deterministic terms under Include in test equation. An appropriate specification of eqn. 10 corresponds to the second option here, as indicated. However, after the equation has been estimated, we do a post mortem to evaluate the statistical significance of each of the deterministic terms (intercept and trend) in the ADF model, which then informs the true specification of the data generating process (d.g.p.) for the series at hand. This is notwithstanding the fact that statistical significance is not necessarily economic significance, so some modeller's judgement may be exercised. Should there be a need to drop either or both of these deterministic terms, the model is re-estimated for a true characterization of the unit root process.
Lastly, we must decide how to determine the lag length, the size of $p$ in eqn. 10. EViews again provides all the known lag-length criteria in the Lag length drop-down key, some of which we have discussed. Here, as argued above and as shown in the screen print, I adopt the Modified Akaike and, in the interest of preserving degrees of freedom, set Maximum lags to 5, informed by the rule of thumb of $(f + 1)$, where $f$ is the data frequency. With these entries, click OK for the test results in Table 1.
Table 1: Unit root test in lcpi (levels)

Null Hypothesis: LCPI has a unit root
Exogenous: Constant, Linear Trend
Lag Length: 3 (Automatic - based on Modified AIC, maxlag=5)

                                          t-Statistic   Prob.*
Augmented Dickey-Fuller test statistic      -2.585515   0.2881
Test critical values:   1% level            -4.100935
                        5% level            -3.478305
                        10% level           -3.166788
*MacKinnon (1996) one-sided p-values.

Augmented Dickey-Fuller Test Equation
Dependent Variable: D(LCPI)
Method: Least Squares
Date: 09/11/18   Time: 09:48
Sample (adjusted): 2001Q1 2017Q3
Included observations: 67 after adjustments

Variable            Coefficient   Std. Error   t-Statistic   Prob.
LCPI(-1)              -0.064924     0.025111     -2.585515   0.0121
D(LCPI(-1))            0.606732     0.117779      5.151461   0.0000
D(LCPI(-2))            0.109973     0.141216      0.778753   0.4391
D(LCPI(-3))           -0.211118     0.117674     -1.794095   0.0778
C                      0.258458     0.097636      2.647169   0.0103
@TREND("2000Q1")       0.001191     0.000444      2.680725   0.0094

R-squared            0.483579    Mean dependent var      0.015174
Adjusted R-squared   0.441250    S.D. dependent var      0.012906
S.E. of regression   0.009647    Akaike info criterion  -6.359048
Sum squared resid    0.005677    Schwarz criterion      -6.161612
Log likelihood       219.0281    Hannan-Quinn criter.   -6.280922
F-statistic          11.42415    Durbin-Watson stat      1.737229
Prob(F-statistic)    0.000000
The results show that we needed no more than 3 lags to fit the lcpi equation appropriately, with no trace of remaining serial correlation. The ADF statistic is -2.586, which is greater than the 5% critical value of -3.478, so we cannot reject the null hypothesis: the series has a unit root. In the lower panel, both the intercept C and Trend, i.e. $c_0$ and $t$ in eqn. 10, are statistically significant, with t-statistics of 2.647 and 2.681, respectively. This shows that the model as specified in EViews is the true d.g.p.
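The same test can be replicated with statsmodels' adfuller, sketched below. Note the assumptions: the CSV file and column names are illustrative, and statsmodels offers ordinary AIC rather than the Modified AIC, so the chosen lag and statistic will differ slightly from the EViews output.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.stattools import adfuller

    # Log seasonally adjusted core CPI (file and column names are assumptions)
    lcpi = np.log(pd.read_csv("comesa2018.csv", index_col=0)["cpi_sa"])

    # Level test with intercept and trend ('ct'), maxlag=5, lag chosen by AIC
    stat, pvalue, usedlag, nobs, crit, icbest = adfuller(
        lcpi, maxlag=5, regression="ct", autolag="AIC")
    print(f"level: ADF={stat:.3f}  p={pvalue:.3f}  lags={usedlag}  crit={crit}")

    # First-difference test with intercept only ('c'), as in Table 2
    stat_d, pval_d, *_ = adfuller(lcpi.diff().dropna(), maxlag=3,
                                  regression="c", autolag="AIC")
    print(f"first difference: ADF={stat_d:.3f}  p={pval_d:.3f}")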
After the test in level, we test the same series in first difference. While in the same unit root output window, click on View and in the drop-down menu choose Unit Root Test…, as in the case above, to see the now familiar Unit Root Test dialogue box. As in the level case, the test type is the ADF, but this time we are testing for a unit root in the 1st difference. In practice, for reasons we explore later, when testing for a unit root in differences we often choose None for the deterministic terms, but this is not always the case. The ADF statistic is sensitive to this choice. Here I chose Intercept, maintained Modified Akaike under Lag length, and provided for the true lag length of 3 as determined earlier. With these entries, click OK for the test results in Table 2.
Clearly, the ADF statistic is now smaller (-4.030) than the 5% critical value of -2.904. Thus the series, in first difference, is stationary.
Table 2: Unit root test in LCPI (difference)
Null Hypothesis: D(LCORE_CPI) has a unit root
Exogenous: Constant
Lag Length: 0 (Automatic - based on Modified AIC, maxlag=3)
t-Statistic Prob.*
Augmented Dickey-Fuller test statistic -4.029994 0.0023
Test critical values: 1% level -3.528515
5% level -2.904198
10% level -2.589562
*MacKinnon (1996) one-sided p-values.
Augmented Dickey-Fuller Test Equation
Dependent Variable: D(LCORE_CPI,2)
Method: Least Squares
Date: 04/17/18 Time: 15:16
Sample (adjusted): 2000Q3 2017Q3
Included observations: 69 after adjustments
Variable Coefficient Std. Error t-Statistic Prob.
D(LCORE_CPI(-1)) -0.393687 0.097689 -4.029994 0.0001
C 0.006047 0.001956 3.091753 0.0029
R-squared 0.195107 Mean dependent var -4.31E-05
Adjusted R-squared 0.183093 S.D. dependent var 0.011411
S.E. of regression 0.010313 Akaike info criterion -6.282233
Sum squared resid 0.007126 Schwarz criterion -6.217476
Log likelihood 218.7370 Hannan-Quinn criter. -6.256542
F-statistic 16.24086 Durbin-Watson stat 1.807840
Prob(F-statistic) 0.000145
Replicating these steps yields the following results for the rest of the series in
Figure 5, both in level and differences.
Table 3: Unit root test in exr
Null Hypothesis: LEXR has a unit root
Exogenous: Constant, Linear Trend
Lag Length: 3 (Automatic - based on Modified AIC, maxlag=5)
t-Statistic Prob.*
Augmented Dickey-Fuller test statistic -1.539293 0.8059
Test critical values: 1% level -4.100935
5% level -3.478305
10% level -3.166788
*MacKinnon (1996) one-sided p-values.
Augmented Dickey-Fuller Test Equation
Dependent Variable: D(LEXR)
Method: Least Squares
Date: 04/17/18 Time: 15:34
Sample (adjusted): 2001Q1 2017Q3
Included observations: 67 after adjustments
Variable Coefficient Std. Error t-Statistic Prob.
LEXR(-1) -0.079718 0.051789 -1.539293 0.1289
D(LEXR(-1)) 0.223365 0.124018 1.801068 0.0766
D(LEXR(-2)) -0.069814 0.124810 -0.559360 0.5780
D(LEXR(-3)) -0.129550 0.123666 -1.047582 0.2990
C 0.577234 0.377574 1.528796 0.1315
@TREND("2000Q1") 0.001242 0.000600 2.070689 0.0426
R-squared 0.155751 Mean dependent var 0.010189
Adjusted R-squared 0.086550 S.D. dependent var 0.044306
S.E. of regression 0.042345 Akaike info criterion -3.400650
Sum squared resid 0.109379 Schwarz criterion -3.203215
Log likelihood 119.9218 Hannan-Quinn criter. -3.322524
F-statistic 2.250708 Durbin-Watson stat 1.989774
Prob(F-statistic) 0.060471
Null Hypothesis: D(LEXR) has a unit root
Exogenous: None
Lag Length: 0 (Automatic - based on Modified AIC, maxlag=3)
t-Statistic Prob.*
Augmented Dickey-Fuller test statistic -6.163611 0.0000
Test critical values: 1% level -2.598907
5% level -1.945596
10% level -1.613719
*MacKinnon (1996) one-sided p-values.
Augmented Dickey-Fuller Test Equation
Dependent Variable: D(LEXR,2)
Method: Least Squares
Date: 04/17/18 Time: 15:35
Sample (adjusted): 2000Q3 2017Q3
Included observations: 69 after adjustments
Variable Coefficient Std. Error t-Statistic Prob.
D(LEXR(-1)) -0.714894 0.115986 -6.163611 0.0000
R-squared 0.358399 Mean dependent var -0.000387
Adjusted R-squared 0.358399 S.D. dependent var 0.055950
S.E. of regression 0.044816 Akaike info criterion -3.358135
Sum squared resid 0.136574 Schwarz criterion -3.325757
Log likelihood 116.8557 Hannan-Quinn criter. -3.345290
Durbin-Watson stat 1.926933
Table 4: Unit root test in lgdp
Null Hypothesis: LGDP has a unit root
Exogenous: Constant
Lag Length: 2 (Automatic - based on Modified AIC, maxlag=5)
t-Statistic Prob.*
Augmented Dickey-Fuller test statistic -1.180061 0.6785
Test critical values: 1% level -3.530030
5% level -2.904848
10% level -2.589907
*MacKinnon (1996) one-sided p-values.
Augmented Dickey-Fuller Test Equation
Dependent Variable: D(LGDP)
Method: Least Squares
Date: 04/17/18 Time: 15:40
Sample (adjusted): 2000Q4 2017Q3
Included observations: 68 after adjustments
Variable Coefficient Std. Error t-Statistic Prob.
LGDP(-1) -0.010365 0.008783 -1.180061 0.2423
D(LGDP(-1)) -0.430965 0.119319 -3.611863 0.0006
D(LGDP(-2)) -0.231227 0.119291 -1.938348 0.0570
C 0.119063 0.080656 1.476173 0.1448
R-squared 0.185826 Mean dependent var 0.014714
Adjusted R-squared 0.147662 S.D. dependent var 0.022733
S.E. of regression 0.020987 Akaike info criterion -4.832759
Sum squared resid 0.028190 Schwarz criterion -4.702200
Log likelihood 168.3138 Hannan-Quinn criter. -4.781028
F-statistic 4.869091 Durbin-Watson stat 2.053308
Prob(F-statistic) 0.004111
Null Hypothesis: D(LGDP) has a unit root
Exogenous: Constant
Lag Length: 0 (Automatic - based on Modified AIC, maxlag=2)
t-Statistic Prob.*
Augmented Dickey-Fuller test statistic -11.91474 0.0001
Test critical values: 1% level -3.528515
5% level -2.904198
10% level -2.589562
*MacKinnon (1996) one-sided p-values.
Augmented Dickey-Fuller Test Equation
Dependent Variable: D(LGDP,2)
Method: Least Squares
Date: 04/17/18 Time: 15:41
Sample (adjusted): 2000Q3 2017Q3
Included observations: 69 after adjustments
Variable Coefficient Std. Error t-Statistic Prob.
D(LGDP(-1)) -1.356456 0.113847 -11.91474 0.0000
C 0.019423 0.003066 6.334388 0.0000
R-squared 0.679366 Mean dependent var -0.000156
Adjusted R-squared 0.674580 S.D. dependent var 0.037697
S.E. of regression 0.021504 Akaike info criterion -4.812554
Sum squared resid 0.030984 Schwarz criterion -4.747797
Log likelihood 168.0331 Hannan-Quinn criter. -4.786863
F-statistic 141.9610 Durbin-Watson stat 2.119827
Prob(F-statistic) 0.000000
Notes: The trend was not significant, so the ADF equation assumes the intercept only. Critical values therefore differ from the ADF under intercept and trend assumptions.
Table 5: Unit root test in lM2
Null Hypothesis: LM2 has a unit root
Exogenous: Constant
Lag Length: 0 (Automatic - based on Modified AIC, maxlag=5)
t-Statistic Prob.*
Augmented Dickey-Fuller test statistic -1.386003 0.5844
Test critical values: 1% level -3.527045
5% level -2.903566
10% level -2.589227
*MacKinnon (1996) one-sided p-values.
Augmented Dickey-Fuller Test Equation
Dependent Variable: D(LM2)
Method: Least Squares
Date: 04/17/18 Time: 15:55
Sample (adjusted): 2000Q2 2017Q3
Included observations: 70 after adjustments
Variable Coefficient Std. Error t-Statistic Prob.
LM2(-1) -0.005858 0.004227 -1.386003 0.1703
C 0.086515 0.035314 2.449884 0.0169
R-squared 0.027474 Mean dependent var 0.037807
Adjusted R-squared 0.013172 S.D. dependent var 0.029234
S.E. of regression 0.029041 Akaike info criterion -4.212060
Sum squared resid 0.057350 Schwarz criterion -4.147818
Log likelihood 149.4221 Hannan-Quinn criter. -4.186542
F-statistic 1.921004 Durbin-Watson stat 1.899254
Prob(F-statistic) 0.170274
Null Hypothesis: D(LM2) has a unit root
Exogenous: Constant
Lag Length: 0 (Automatic - based on Modified AIC, maxlag=0)
t-Statistic Prob.*
Augmented Dickey-Fuller test statistic -7.639987 0.0000
Test critical values: 1% level -3.528515
5% level -2.904198
10% level -2.589562
*MacKinnon (1996) one-sided p-values.
Augmented Dickey-Fuller Test Equation
Dependent Variable: D(LM2,2)
Method: Least Squares
Date: 04/17/18 Time: 15:56
Sample (adjusted): 2000Q3 2017Q3
Included observations: 69 after adjustments
Variable Coefficient Std. Error t-Statistic Prob.
D(LM2(-1)) -0.934056 0.122259 -7.639987 0.0000
C 0.035477 0.005865 6.049077 0.0000
R-squared 0.465579 Mean dependent var -0.000138
Adjusted R-squared 0.457603 S.D. dependent var 0.040139
S.E. of regression 0.029562 Akaike info criterion -4.176120
Sum squared resid 0.058551 Schwarz criterion -4.111363
Log likelihood 146.0761 Hannan-Quinn criter. -4.150429
F-statistic 58.36940 Durbin-Watson stat 2.004644
Prob(F-statistic) 0.000000
Notes: The trend was not significant, so the ADF equation assumes the intercept only. Critical values therefore differ from the ADF under intercept and trend assumptions.
Table 6: Unit root test in loil_price
Null Hypothesis: LOIL_PRICE has a unit root
Exogenous: Constant
Lag Length: 1 (Automatic - based on Modified AIC, maxlag=5)
t-Statistic Prob.*
Augmented Dickey-Fuller test statistic -1.886398 0.3368
Test critical values: 1% level -3.528515
5% level -2.904198
10% level -2.589562
*MacKinnon (1996) one-sided p-values.
Augmented Dickey-Fuller Test Equation
Dependent Variable: D(LOIL_PRICE)
Method: Least Squares
Date: 04/17/18 Time: 16:20
Sample (adjusted): 2000Q3 2017Q3
Included observations: 69 after adjustments
Variable Coefficient Std. Error t-Statistic Prob.
LOIL_PRICE(-1) -0.060237 0.031932 -1.886398 0.0636
D(LOIL_PRICE(-1)) 0.270850 0.116427 2.326341 0.0231
C 0.250947 0.130140 1.928292 0.0581
R-squared 0.111004 Mean dependent var 0.009438
Adjusted R-squared 0.084065 S.D. dependent var 0.145567
S.E. of regression 0.139314 Akaike info criterion -1.061662
Sum squared resid 1.280962 Schwarz criterion -0.964527
Log likelihood 39.62733 Hannan-Quinn criter. -1.023125
F-statistic 4.120528 Durbin-Watson stat 1.928181
Prob(F-statistic) 0.020592
Null Hypothesis: D(LOIL_PRICE) has a unit root
Exogenous: Constant
Lag Length: 0 (Automatic - based on Modified AIC, maxlag=1)
t-Statistic Prob.*
Augmented Dickey-Fuller test statistic -6.340849 0.0000
Test critical values: 1% level -3.528515
5% level -2.904198
10% level -2.589562
*MacKinnon (1996) one-sided p-values.
Augmented Dickey-Fuller Test Equation
Dependent Variable: D(LOIL_PRICE,2)
Method: Least Squares
Date: 04/17/18 Time: 16:22
Sample (adjusted): 2000Q3 2017Q3
Included observations: 69 after adjustments
Variable Coefficient Std. Error t-Statistic Prob.
D(LOIL_PRICE(-1)) -0.749102 0.118139 -6.340849 0.0000
C 0.007505 0.017113 0.438567 0.6624
R-squared 0.375037 Mean dependent var 0.001733
Adjusted R-squared 0.365709 S.D. dependent var 0.178234
S.E. of regression 0.141949 Akaike info criterion -1.038134
Sum squared resid 1.350027 Schwarz criterion -0.973377
Log likelihood 37.81562 Hannan-Quinn criter. -1.012443
F-statistic 40.20637 Durbin-Watson stat 1.911960
Prob(F-statistic) 0.000000
Notes: The trend was not significant, so the ADF equation assumes the intercept only. Critical values therefore differ from the ADF under intercept and trend assumptions.
Table 7: Unit root test in lpsc
Null Hypothesis: LPSC has a unit root
Exogenous: Constant, Linear Trend
Lag Length: 3 (Automatic - based on Modified AIC, maxlag=5)
t-Statistic Prob.*
Augmented Dickey-Fuller test statistic -1.892457 0.6474
Test critical values: 1% level -4.100935
5% level -3.478305
10% level -3.166788
*MacKinnon (1996) one-sided p-values.
Augmented Dickey-Fuller Test Equation
Dependent Variable: D(LPSC)
Method: Least Squares
Date: 04/17/18 Time: 16:26
Sample (adjusted): 2001Q1 2017Q3
Included observations: 67 after adjustments
Variable Coefficient Std. Error t-Statistic Prob.
LPSC(-1) -0.171435 0.090588 -1.892457 0.0632
D(LPSC(-1)) 0.106514 0.124477 0.855693 0.3955
D(LPSC(-2)) 0.134873 0.125092 1.078185 0.2852
D(LPSC(-3)) -0.280375 0.125619 -2.231950 0.0293
C 1.139737 0.560926 2.031887 0.0465
@TREND("2000Q1") 0.007888 0.004534 1.739677 0.0870
R-squared 0.192919 Mean dependent var 0.044521
Adjusted R-squared 0.126765 S.D. dependent var 0.129410
S.E. of regression 0.120930 Akaike info criterion -1.301924
Sum squared resid 0.892068 Schwarz criterion -1.104489
Log likelihood 49.61445 Hannan-Quinn criter. -1.223798
F-statistic 2.916207 Durbin-Watson stat 1.379892
Prob(F-statistic) 0.020010
Null Hypothesis: D(LPSC) has a unit root
Exogenous: None
Lag Length: 1 (Automatic - based on Modified AIC, maxlag=3)
t-Statistic Prob.*
Augmented Dickey-Fuller test statistic -4.714456 0.0000
Test critical values: 1% level -2.599413
5% level -1.945669
10% level -1.613677
*MacKinnon (1996) one-sided p-values.
Augmented Dickey-Fuller Test Equation
Dependent Variable: D(LPSC,2)
Method: Least Squares
Date: 04/17/18 Time: 16:27
Sample (adjusted): 2000Q4 2017Q3
Included observations: 68 after adjustments
Variable Coefficient Std. Error t-Statistic Prob.
D(LPSC(-1)) -0.762826 0.161806 -4.714456 0.0000
D(LPSC(-1),2) -0.138716 0.121621 -1.140561 0.2582
R-squared 0.453744 Mean dependent var -0.000383
Adjusted R-squared 0.445468 S.D. dependent var 0.180986
S.E. of regression 0.134775 Akaike info criterion -1.141454
Sum squared resid 1.198839 Schwarz criterion -1.076174
Log likelihood 40.80943 Hannan-Quinn criter. -1.115588
Durbin-Watson stat 1.929444
Table 8: Unit root test in lrate
Null Hypothesis: LRATE has a unit root
Exogenous: Constant
Lag Length: 0 (Automatic - based on Modified AIC, maxlag=5)
t-Statistic Prob.*
Augmented Dickey-Fuller test statistic -2.334926 0.1642
Test critical values: 1% level -3.527045
5% level -2.903566
10% level -2.589227
*MacKinnon (1996) one-sided p-values.
Augmented Dickey-Fuller Test Equation
Dependent Variable: D(LRATE)
Method: Least Squares
Date: 04/17/18 Time: 17:02
Sample (adjusted): 2000Q2 2017Q3
Included observations: 70 after adjustments
Variable Coefficient Std. Error t-Statistic Prob.
LRATE(-1) -0.148448 0.063577 -2.334926 0.0225
C 3.170276 1.365202 2.322203 0.0232
R-squared 0.074224 Mean dependent var -0.000143
Adjusted R-squared 0.060609 S.D. dependent var 1.223533
S.E. of regression 1.185875 Akaike info criterion 3.206994
Sum squared resid 95.62833 Schwarz criterion 3.271236
Log likelihood -110.2448 Hannan-Quinn criter. 3.232512
F-statistic 5.451879 Durbin-Watson stat 1.862622
Prob(F-statistic) 0.022507
Null Hypothesis: D(LRATE) has a unit root
Exogenous: None
Lag Length: 0 (Automatic - based on Modified AIC, maxlag=0)
t-Statistic Prob.*
Augmented Dickey-Fuller test statistic -8.295674 0.0000
Test critical values: 1% level -2.598907
5% level -1.945596
10% level -1.613719
*MacKinnon (1996) one-sided p-values.
Augmented Dickey-Fuller Test Equation
Dependent Variable: D(LRATE,2)
Method: Least Squares
Date: 04/17/18 Time: 17:04
Sample (adjusted): 2000Q3 2017Q3
Included observations: 69 after adjustments
Variable Coefficient Std. Error t-Statistic Prob.
D(LRATE(-1)) -1.003975 0.121024 -8.295674 0.0000
R-squared 0.502986 Mean dependent var -0.004783
Adjusted R-squared 0.502986 S.D. dependent var 1.742939
S.E. of regression 1.228758 Akaike info criterion 3.264272
Sum squared resid 102.6696 Schwarz criterion 3.296650
Log likelihood -111.6174 Hannan-Quinn criter. 3.277117
Durbin-Watson stat 1.990924
Notes: The trend was not significant, so the ADF equation assumes the intercept only. Critical values therefore differ from the ADF under intercept and trend assumptions.
Table 9: Unit root test in tb91
Null Hypothesis: TB91 has a unit root
Exogenous: Constant
Lag Length: 0 (Automatic - based on Modified AIC, maxlag=5)
t-Statistic Prob.*
Augmented Dickey-Fuller test statistic -2.921201 0.0480
Test critical values: 1% level -3.527045
5% level -2.903566
10% level -2.589227
*MacKinnon (1996) one-sided p-values.
Augmented Dickey-Fuller Test Equation
Dependent Variable: D(TB91)
Method: Least Squares
Date: 04/17/18 Time: 17:07
Sample (adjusted): 2000Q2 2017Q3
Included observations: 70 after adjustments
Variable Coefficient Std. Error t-Statistic Prob.
TB91(-1) -0.222967 0.076327 -2.921201 0.0047
C 2.436678 0.897054 2.716311 0.0084
R-squared 0.111499 Mean dependent var 0.000571
Adjusted R-squared 0.098433 S.D. dependent var 2.912467
S.E. of regression 2.765413 Akaike info criterion 4.900412
Sum squared resid 520.0306 Schwarz criterion 4.964655
Log likelihood -169.5144 Hannan-Quinn criter. 4.925930
F-statistic 8.533413 Durbin-Watson stat 1.722144
Prob(F-statistic) 0.004728
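Before moving on, readers who wish to replicate the unit root tests in Tables 3–9 outside EViews can do so with a few lines of Python. The sketch below is illustrative only: it assumes the series already sit in a pandas DataFrame called df with columns such as tb91 (hypothetical names), and statsmodels selects the lag length by the standard AIC rather than the Modified AIC used above, so the chosen lags, and hence the statistics, may differ slightly.

# A minimal sketch, assuming a pandas DataFrame `df` of the quarterly series.
import pandas as pd
from statsmodels.tsa.stattools import adfuller

def adf_report(series, regression="c", maxlag=5):
    # `regression` mirrors the EViews 'Exogenous' choice:
    # "c" = constant, "ct" = constant and linear trend, "n" = none
    stat, pvalue, usedlag, nobs, crit, _ = adfuller(
        series.dropna(), maxlag=maxlag, regression=regression, autolag="AIC")
    print(f"ADF stat {stat:.4f}, p-value {pvalue:.4f}, lags used {usedlag}")
    for level, cv in crit.items():
        print(f"  {level} critical value: {cv:.4f}")

adf_report(df["tb91"], regression="c")          # level: H0 of a unit root
adf_report(df["tb91"].diff(), regression="c")   # first difference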
However, with seven endogenous variables and only 71 observations, there is a limit on how much a VAR can bear due to diminishing degrees of freedom (d.o.f.), defined as N − K, where N is the number of observations and K is the total number of coefficients to be estimated, including the constants: K = k(kρ) + n = k²ρ + n, where k is the number of variables in the VAR system, ρ is the VAR lag length, and n is the number of constants (one per equation, so n = k). For small samples, therefore, it is pertinent that we model as few variables as possible while keeping the number of lags to a minimum – ensuring there are still enough lags to remove any remaining residual serial correlation. Given this, it is not possible to analyse all the possible channels of the MTM embedded in our data in one system at the same time.
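To see the constraint concretely, the count below applies the formula above to a VAR(2) estimated on our sample of N = 71 observations:

$$K_{k=7} = k^2\rho + n = 7^2(2) + 7 = 105 > 71 = N, \qquad K_{k=5} = 5^2(2) + 5 = 55 < 71.$$

A seven-variable VAR(2) would thus require more coefficients than there are observations, whereas a five-variable system leaves the degrees of freedom positive.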
For purposes of demonstration and brevity, we consider a five-variable VAR model with two non-policy variables (GDP and CPI) and three policy variables (tb91, M2 and exr), capturing the interest rate, money and exchange rate channels, respectively. We also add a sixth variable, oil price, which, as indicated earlier, enters the model exogenously. Central to VAR specification is
the determination of the order of VAR or lag-length that describes the true
d.g.p.
4.4 Setting up an MTM VAR model
A key consideration before any VAR estimation can be attempted is the form
the variables must have when they enter the VAR: whether in levels, gaps or
first differences. The answer is simply ‘it depends’ on what the VAR is going to
be used for. If the VAR is for forecasting, we must avoid potential spurious regressions that may result in spurious forecasts. For the purpose of identifying
shocks, we have to care about the statistical properties of the residuals, the
stability of the VAR and the reliability of the impulse response functions. For a
given choice and form of variables, setting up a VAR starts with the
appropriate specification, i.e., lag length specification.
4.4.1 Determination of lag length
In practice, when choosing the lag-length, we want to reduce the number of
lags as much as possible to get as simple a model as is possible, but at the same
time we want enough lags to remove autocorrelation of the VAR residuals. The
appropriate lag-length (ρ) of the VAR (in eqn. 6) is chosen using the minimum of the information criteria: the Akaike (AIC), Schwarz (SC) and Hannan-Quinn (HQ) information criteria. These criteria share the same basic formulation, i.e., they derive from the log likelihood ratio (LR) function but penalise the loss of degrees of freedom due to extra ρ lags to different degrees; hence, in practice, they need not select the same preferred model, and often they do not. Juselius (2006, 70-1) gives a very detailed exposition of these frameworks. AIC asymptotically overestimates the order with positive probability, HQ estimates the order consistently (i.e., $\operatorname{plim}\,\hat{\rho} = \rho$), and SC is even more strongly consistent ($\hat{\rho} \to \rho$ almost surely) (Lütkepohl and Krätzig, 2004); that is, SC tends to select a shorter lag than the other criteria. It is further shown that even in small samples of fixed size $n \ge 16$ the relation $\hat{\rho}_{SC} \le \hat{\rho}_{HQ} \le \hat{\rho}_{AIC}$ holds, hence the reason why, in applied work, SC is usually favoured in choosing the appropriate order of the VAR.
This notwithstanding, it is very important to note that the residuals from the
estimated VAR should be well behaved, i.e., there should be no problems with
autocorrelation.
As stated earlier, SC usually favours a shorter lag, which in a great majority of cases results in an under-parameterised (and hence potentially serially correlated) model, especially in small samples. Given this, the most crucial assumption in the VAR environment is that of time independence of the residuals (Juselius, 2006). Thus, whilst the AIC, SC or HQ may be good starting points for determining the lag-length of the VAR, it is crucial to check for residual autocorrelation – a practical process we now describe.
To recap, we have considered a five-variable VAR model (lGDP, lcpi, tb91, lM2 and lexr). With this in mind, make active our work-horse EViews work file, COMESA2018. Highlight the five series above (click on, for example, lcpi and then, while holding down the Control key on your keyboard, click on each of the remaining series in turn – all five will appear highlighted). With the Control key still held down, point the cursor at any of the highlighted variables and right-click. Follow the Open command and, in the drop-down menu, select as VAR…, as shown in the screen print.
Selecting this gives the VAR Specification window, shown herewith.
As seen, this estimates an unrestricted VAR over the estimation sample, corresponding to our data points. All five variables are endogenous. The lag interval is for now left at the EViews default, 1 2, where the first entry (1) is the first lag included and the second (2) is the last, so the default specification is a VAR(2). And in the exogenous variables box, in addition to c (the default), I have input loil as an exogenous variable. Click OK to generate the output in Table 10.
Table 10: VAR(2) Estimates
Vector Autoregression Estimates
Date: 04/21/18 Time: 14:07
Sample (adjusted): 2000Q3 2017Q3
Included observations: 69 after adjustments
Standard errors in ( ) & t-statistics in [ ]
LCPI LEXR LGDP LM2 TB91
LCPI(-1) 1.358467 -0.616656 -0.297740 -0.608976 108.2125
(0.12836) (0.49656) (0.27758) (0.34306) (31.4928)
[ 10.5836] [-1.24185] [-1.07264] [-1.77512] [ 3.43611]
LCPI(-2) -0.437665 0.603780 0.178212 0.637168 -108.9942
(0.11656) (0.45092) (0.25206) (0.31153) (28.5980)
[-3.75492] [ 1.33900] [ 0.70701] [ 2.04530] [-3.81125]
LEXR(-1) 0.021762 0.847628 0.008170 -0.134891 14.13248
(0.03576) (0.13835) (0.07734) (0.09558) (8.77456)
[ 0.60850] [ 6.12656] [ 0.10564] [-1.41122] [ 1.61062]
LEXR(-2) -0.018084 -0.204031 -0.056458 -0.047848 -6.988324
(0.03350) (0.12961) (0.07245) (0.08954) (8.22007)
[-0.53978] [-1.57420] [-0.77925] [-0.53435] [-0.85015]
LGDP(-1) 0.006963 -0.433617 0.420086 0.019165 -3.544776
(0.06061) (0.23446) (0.13106) (0.16198) (14.8699)
[ 0.11488] [-1.84942] [ 3.20522] [ 0.11831] [-0.23839]
LGDP(-2) 0.003725 -0.129905 0.234311 0.047588 -10.74061
(0.06135) (0.23735) (0.13268) (0.16398) (15.0533)
[ 0.06072] [-0.54731] [ 1.76599] [ 0.29020] [-0.71351]
LM2(-1) 0.081943 0.290661 -0.051731 0.768638 0.676937
(0.05227) (0.20222) (0.11304) (0.13971) (12.8248)
[ 1.56768] [ 1.43739] [-0.45764] [ 5.50186] [ 0.05278]
LM2(-2) -0.053812 0.047443 0.244084 0.247399 3.315820
(0.05552) (0.21478) (0.12006) (0.14839) (13.6218)
[-0.96927] [ 0.22089] [ 2.03298] [ 1.66726] [ 0.24342]
TB91(-1) 0.000930 0.002967 -0.000547 -0.000764 0.679838
(0.00049) (0.00191) (0.00107) (0.00132) (0.12127)
[ 1.88188] [ 1.55178] [-0.51160] [-0.57842] [ 5.60606]
TB91(-2) -0.000570 -0.000523 0.000907 -0.000297 -0.080569
(0.00050) (0.00192) (0.00107) (0.00133) (0.12176)
[-1.14774] [-0.27259] [ 0.84480] [-0.22386] [-0.66172]
C -0.013058 5.455254 2.561532 0.703407 49.46761
(0.37736) (1.45988) (0.81607) (1.00859) (92.5876)
[-0.03460] [ 3.73679] [ 3.13887] [ 0.69742] [ 0.53428]
LOIL 0.003467 -0.081395 -0.016110 -0.025377 -0.211361
(0.00523) (0.02023) (0.01131) (0.01398) (1.28316)
[ 0.66287] [-4.02303] [-1.42446] [-1.81551] [-0.16472]
R-squared 0.999374 0.980472 0.995834 0.999153 0.756174
Adj. R-squared 0.999254 0.976704 0.995030 0.998989 0.709119
Sum sq. resids 0.005302 0.079346 0.024794 0.037872 319.1515
S.E. equation 0.009644 0.037310 0.020856 0.025776 2.366252
F-statistic 8277.875 260.1746 1238.768 6110.235 16.07026
Log likelihood 228.9412 135.5909 175.7220 161.1071 -150.7456
Akaike AIC -6.288151 -3.582345 -4.745565 -4.321945 4.717262
Schwarz SC -5.899611 -3.193805 -4.357025 -3.933404 5.105803
Mean dependent 4.548391 7.693571 9.162380 8.372681 10.89884
S.D. dependent 0.353023 0.244445 0.295854 0.810731 4.387363
Determinant resid covariance (dof adj.) 1.08E-13
Determinant resid covariance 4.15E-14
Log likelihood 573.4909
Akaike information criterion -14.88380
Schwarz criterion -12.94109
Taking MTM Theories and the VAR Model to the Data
65 |
Each column in the table corresponds to the equation for one of the endogenous variables in the VAR. For each right-hand side variable, EViews reports the coefficient point estimate, the estimated coefficient standard error (in round brackets) and the t-statistic (in square brackets).
As pointed out earlier, this traditional VAR output is not economically intuitive, partly because it lacks the economic structure that would aid interpretation of the estimated coefficients, but also because, as can be deduced from the very high R-squared values and, on average, strong t-ratios, it is spurious owing to the non-stationarity of the data. Instead, the result from this implementation is a step towards the specification of an appropriate VAR.
What we really want is to unravel the true lag structure of the VAR, at which the VAR residuals are independent of each other across time (i.e., serially uncorrelated).
To achieve this, within this very results window, click on View (at the extreme left-hand side of the results window); in the drop-down menu lies Lag Structure and, following the highlights in the screen print, we choose Lag Length Criteria….
This gives the Lag Specification window, where the only modification I have made is changing the lags to include from the default of 6 to 5, for the reasons given earlier. Click OK to generate the results in Table 11.
Table 11: VAR Lag order selection criteria
VAR Lag Order Selection Criteria
Endogenous variables: LCPI LEXR LGDP LM2 TB91
Exogenous variables: C LOIL
Date: 04/21/18 Time: 14:12
Sample: 2000Q1 2017Q3
Included observations: 66
Lag LogL LR FPE AIC SC HQ
0 124.1461 NA 2.16e-08 -3.458974 -3.127208 -3.327877
1 521.4094 710.2585 2.74e-13 -14.73968 -13.57850* -14.28084
2 557.6360 59.28000* 1.98e-13* -15.07988 -13.08928 -14.29330*
3 582.8379 37.42098 2.05e-13 -15.08600* -12.26599 -13.97168
4 600.4145 23.43541 2.76e-13 -14.86104 -11.21162 -13.41898
5 628.0872 32.70420 2.88e-13 -14.94204 -10.46320 -13.17224
* indicates lag order selected by the criterion
LR: sequential modified LR test statistic (each test at 5% level)
FPE: Final prediction error
AIC: Akaike information criterion
SC: Schwarz information criterion
HQ: Hannan-Quinn information criterion
As expected, the lag length choices of the AIC, HQ and SC differ: AIC selects 3 lags, HQ favours 2 lags, while SC suggests 1 lag, which, as discussed earlier, is not entirely surprising.
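As an aside, the same lag-order comparison can be sketched outside EViews with statsmodels; the objects endog (a DataFrame of the five endogenous series) and exog (holding loil) are assumptions, and the reported statistics will differ slightly from Table 11 because the sample handling and penalty terms are not identical to EViews'.

# A minimal sketch, assuming `endog` holds LCPI, LEXR, LGDP, LM2, TB91
# and `exog` holds LOIL; the constant enters via the default trend="c".
from statsmodels.tsa.api import VAR

model = VAR(endog, exog=exog)
order = model.select_order(maxlags=5)   # computes AIC, BIC (analogue of SC), FPE and HQ
print(order.summary())                  # criteria by lag, analogous to Table 11
print(order.selected_orders)            # e.g. {'aic': 3, 'bic': 1, 'hqic': 2, ...}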
In applied work, it is the lag structure chosen by SC that is often preferred. However, because SC often favours a shorter lag, such a choice is likely to under-parameterise the model, resulting in potentially serially correlated residuals, more so in small samples. Given this, it is highly recommended that we test for serial correlation in the VAR residuals. This is achieved using the Autocorrelation LM Test – one of the default tests in the battery of EViews Residual Tests. Note that in this exercise, it is the Residual Serial Correlation LM Test that we use to pin down the actual lag structure. Implementing this is also straightforward.
In the open VAR results window, click on View and, in the drop-down menu, select Residual Tests; following the arrow is a battery of residual misspecification tests, the Autocorrelation LM Test… among them. Clicking on this brings forth the Lag Specification dialogue box, in this case with 3 lags to include (it is always better to adjust this to accommodate the choices of all the information criteria: AIC, HQ and SC). Click OK to generate the VAR Residual Serial Correlation LM Test output in Table 12.
Table 12: Residual Serial Correlation LM test
VAR Residual Serial Correlation LM Tests
Null Hypothesis: no serial correlation at lag order h
Date: 04/21/18 Time: 14:28
Sample: 2000Q1 2017Q3
Included observations: 69
Lags LM-Stat Prob
1 24.48707 0.4914
2 29.87300 0.2291
3 26.61641 0.3753
Probs from chi-square with 25 df.
All lags 1–3 appear to guarantee time independence of the VAR residuals. Note that the guiding principle when choosing the lag length is that we want to reduce the number of lags as much as possible to get as simple a model as possible, but at the same time we want enough lags to remove autocorrelation of the VAR residuals. Any of the three lag lengths would be admissible, but for purposes of uncovering the dynamics embedded in the data while preserving the degrees of freedom, our exercise will estimate a VAR(2) model.
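In the same illustrative Python set-up, the residual autocorrelation check is one call; note that statsmodels provides a multivariate Portmanteau (adjusted Ljung-Box) whiteness test rather than the LM test used above, so it is an analogue of Table 12, not a replication.

# A minimal sketch, continuing from the `model` object assumed above.
res = model.fit(2)                                    # the chosen VAR(2)
white = res.test_whiteness(nlags=10, adjusted=True)   # H0: no residual autocorrelation
print(white.summary())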
4.4.2 VAR(2) residual statistical properties
It is also important, upon establishing the true order of the VAR, to assess the suitability of the chosen model in terms of a battery of residual misspecification tests (see inter alia Godfrey, 1988). These comprise: the residual plots; normality; autocorrelation; and ARCH effects.
The idea with residual plots is to see if there are outlier observations and/or changes in behaviour over time. Such features, if detected, would require the modeller to take appropriate modelling action to account for such outlier observations or mean shifts – the use of dummies has usually been found to be very helpful, particularly in the interest of preserving the degrees of freedom.
To generate the residual plots, re-estimate the VAR(2) model. You need not go through the entire process described above; instead, while in the above VAR Residual Serial Correlation LM Test window, click on Estimate in the results window menu bar. This should give the VAR Specification window we have seen before, with 2 lags in the Lag Intervals for Endogenous combo box. Then click OK to estimate the traditional VAR as we have done before, noting that these are still spurious results with no economic structure, and so are not economically intuitive.
To generate residual plots from this VAR(2) output, click on View in the menu bar of the results window. In the drop-down menu, select Residuals and, following the arrow, select Graphs – a road map is given in the screen print. The resulting residual plots are given in Figure 7 and look, on average, well behaved.
Figure 7: Residual plots
[Figure 7 comprises five panels of VAR(2) residual plots over 2000Q3–2017Q3: LCPI Residuals, LEXR Residuals, LGDP Residuals, LM2 Residuals and TB91 Residuals.]
Following the same route, we can undertake several of the Residual Tests, including the normality and heteroskedasticity tests.
Table 13: VAR Residual Normality Tests
VAR Residual Normality Tests
Orthogonalization: Cholesky (Lutkepohl)
Null Hypothesis: residuals are multivariate normal
Date: 04/21/18 Time: 14:49
Sample: 2000Q1 2017Q3
Included observations: 69
Component Skewness Chi-sq df Prob.
1 0.677531 5.279061 1 0.0216
2 -0.046807 0.025195 1 0.8739
3 0.818639 7.706948 1 0.0055
4 -0.374700 1.614602 1 0.2038
5 -0.809223 7.530684 1 0.0061
Joint 22.15649 5 0.0005
Component Kurtosis Chi-sq df Prob.
1 5.413742 16.75018 1 0.0000
2 3.103978 0.031083 1 0.8601
3 4.175388 3.971919 1 0.0463
4 2.810353 0.103402 1 0.7478
5 4.863281 9.981467 1 0.0016
Joint 30.83805 5 0.0000
Component Jarque-Bera df Prob.
1 22.02924 2 0.0000
2 0.056277 2 0.9723
3 11.67887 2 0.0029
4 1.718004 2 0.4236
5 17.51215 2 0.0002
Joint 52.99454 10 0.0000
We know the standard normal distribution has skewness of 0 and kurtosis of 3. Looking at the results, normality of the residuals is rejected at the conventional 5 percent level of significance for the first, third and fifth residuals. Consistent with the component tests, the joint Jarque-Bera statistic is highly significant, indicating that the assumption of multivariate normality is not supported. The non-normality of the residuals results from a relatively large degree of skewness – usually due to large outliers – seen in CPI (2011), GDP (2008) and tb91 (2000), and from excess kurtosis (fat tails).
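For completeness, the multivariate normality check can also be sketched in the assumed Python set-up; statsmodels' test_normality applies its own orthogonalisation of the residuals, so the component breakdown will not match Table 13 exactly.

# A minimal sketch, continuing from the fitted VAR(2) `res` assumed above.
norm = res.test_normality()   # multivariate Jarque-Bera; H0: Gaussian residuals
print(norm.summary())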
Rather than EViews' default setting of Cholesky of covariance (Lütkepohl) as the orthogonalisation method, some authors prefer to use Square root of correlation (Doornik-Hansen). This is because we must choose a factorisation of the residuals for the multivariate normality test, such that the residuals are orthogonal to each other. The approach due to Doornik and Hansen (2008) has two advantages over the one in Lütkepohl (2005, pp. 174-181). First, Lütkepohl's test uses the inverse of the lower triangular Cholesky factor of the residual covariance matrix, resulting in a test which is not invariant to a re-ordering of the dependent variables. Second, Doornik and Hansen perform a small-sample correction to the transformed residuals before computing their statistics. This problem of non-normality notwithstanding, the good news is that estimates of the VAR model are robust to deviations from normality provided the residuals are not autocorrelated (Juselius, 2006).
The results of multivariate test for ARCH effects are given in Table 14 and
reveal moderate ARCH effects in the system (p-value = 0.0324). Rahbek et al.
(2002) cited in Juselius (2006) and Dennis (2006) show that the rank tests are
robust to moderate ARCH effects, so this should not be a problem here.
Table 14: VAR Residual Heteroskedasticity Test
VAR Residual Heteroskedasticity Tests: No Cross Terms (only levels and squares)
Date: 04/21/18 Time: 14:57
Sample: 2000Q1 2017Q3
Included observations: 69
Joint test:
Chi-sq df Prob.
379.0245 330 0.0324
4.4.3 Stability of VAR(2)
The last thing we want to do is check the stability of the VAR. If the VAR is not stable, certain results (such as impulse response standard errors) are not valid. In this procedure there will be (n × p) roots overall, where n is the number of endogenous variables (i.e., 5) and p is the lag length (i.e., 2). It is easy to check for stability in EViews: go to View, then Lag Structure, and click on AR Roots Table. You should get the results shown in Table 15.
Table 15: VAR (2) Roots of the characteristic polynomial
Roots of Characteristic Polynomial
Endogenous variables: LCPI LEXR LGDP LM2 TB91
Exogenous variables: C LOIL
Lag specification: 1 2
Date: 04/21/18 Time: 15:00
Root Modulus
0.990389 0.990389
0.852970 0.852970
0.750850 - 0.307318i 0.811308
0.750850 + 0.307318i 0.811308
0.684853 0.684853
0.345303 - 0.209800i 0.404043
0.345303 + 0.209800i 0.404043
-0.354578 0.354578
-0.145642 - 0.162338i 0.218095
-0.145642 + 0.162338i 0.218095
No root lies outside the unit circle.
VAR satisfies the stability condition.
The VAR is stable as none of the roots lie outside the unit circle (EViews very
kindly tells you this at the bottom of the table): all the moduli of the roots of
the characteristic polynomial are less than one in magnitude. In case one or
more roots fall outside the unit circle, adding a time trend (denoted @trend in
EViews) as an exogenous variable can help, but this does not always correct
VAR instability.
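The stability condition itself is easy to verify by hand: stack the estimated lag matrices into the companion form and inspect the eigenvalue moduli, which correspond to the (inverse) roots EViews reports in Table 15. A hedged sketch, continuing from the fitted VAR(2) res assumed above:

# A minimal sketch: companion-matrix eigenvalues of the fitted VAR(2).
import numpy as np

k, p = res.neqs, res.k_ar                  # 5 variables, 2 lags -> 10 roots
companion = np.zeros((k * p, k * p))
companion[:k, :] = np.hstack(res.coefs)    # res.coefs has shape (p, k, k)
companion[k:, :-k] = np.eye(k * (p - 1))   # identity blocks below the top row
moduli = np.abs(np.linalg.eigvals(companion))
print(np.sort(moduli)[::-1])               # stable if every modulus < 1
print(res.is_stable())                     # statsmodels' built-in check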
To circumvent the shortcomings associated with unrestricted VAR frameworks discussed above, models that identify a set of behavioural relationships between endogenous variables – relationships that together explain the overall working of the economy – have been advocated, giving rise to structural VAR (SVAR) models, i.e., models in which economic theory is used to inform the construction of the model. SVAR models are discussed in detail in the next section.
Chapter 5
Structural Vector Autoregressive Models (SVARs)
5.1 Motivation
Structural econometric models may be used to explain and predict the effect of
policies on key macroeconomic aggregates. This could for instance include how
a change in monetary policy that is achieved through a reduction/increase in
the central bank policy interest rate propagates to the rest of the economy.
While it is possible, as in the previous section, to construct impulse response functions based on reduced-form VARs, the framework relies on an unrealistic assumption, whereby the interpretation of the effect of each shock assumes that all other shocks are held constant. In other words, as shown in eqn. 7, shocks are taken to be orthogonal or uncorrelated. However, from the perspective of the Wold representation, we know that shocks are not in general orthogonal, owing to
impose a structure on the evolution of the shock processes in a VAR to
achieve contemporaneity, where the value of a variable depends on the
contemporaneous values of other variables, as opposed to a variable depending
solely on the past values of other variables. SVARs also ensure that the shocks
have economic meaning (technology, demand, labour supply, monetary policy
etc.).
5.2 VAR Identification
When Sims (1980a) first advocated for the use of VARs in economics, it was in
response to the prevailing orthodoxy at the time that all economic models
should be structural models, i.e., that they should include identifying
restrictions. Instead, he argued for the use of an unrestricted VAR wherein no
distinction is made between endogenous and exogenous variables. The aim was
to free-up econometric modelling from the constraints applied by economic
theory and, in effect, to ‘let the data speak’. However, as pointed out earlier,
unrestricted VARs have been criticised for their lack of economic structure
(Lucas, 1972), which forms a basis for structural analysis.
To undertake the structural analysis, a reduced-form VAR is first fitted to
summarise the data and then a structural VAR is proposed whose structural
equation errors are taken to be the structural, primitive or economic shocks or
innovations. The parameters of these structural equations are then ‘identified’
(estimated) by utilising the information in the estimated reduced-form VAR. In
other words, the VAR in a reduced-form model summarises the data, while the
SVAR provides an interpretation of the data.
It follows then that when estimating a SVAR model, the starting point is the
estimation of a reduced form VAR, given by eqn. 6, here compressed to take
the form:
$$y_t = F y_{t-1} + \varepsilon_t, \qquad \varepsilon_t \sim N(0, \Omega) \qquad (13)$$
A corresponding underlying structural system of equations in EViews is of the
form:
$$A y_t = C(L) y_{t-1} + B u_t,$$
$$y_t = A^{-1} C(L) y_{t-1} + A^{-1} B u_t, \qquad u_t \sim N(0, I) \qquad (14)$$
where u_t is a vector of normally distributed structural shocks, i.e., u_t ~ N(0, Σ), with Σ specified as a diagonal matrix – essentially because the structural shocks are assumed to originate from independent sources, i.e., they are purely exogenous and mutually uncorrelated. Moreover, Σ is frequently normalised so that E(u_t u_t') = Σ = I_n, giving u_t ~ N(0, I). In other words, the assumptions underlying u_t are that there are as many structural shocks as there are variables in the model and that these shocks are by definition mutually uncorrelated, which implies that Σ is diagonal.
The matrices A, B and the C_i's (i = 1, 2, …, p) are not separately observable from the estimated variance-covariance matrix, E(ε_t ε_t') = Ω, of the reduced-form shocks ε_t in eqn. 7, which, unlike Σ, is in general not diagonal. The vector of non-policy and policy variables is contained in y_t; the contemporaneous relations among the variables – through which the structural shocks are recovered from the reduced-form shocks – are contained in matrix A; and B maps the structural shocks into the reduced-form shocks. However, eqn. 14 cannot be estimated directly due to identification issues and must therefore be recovered from eqn. 13. The major task here, then, is how to recover eqn. 14 from eqn. 13. This is achieved by imposing restrictions informed by economic theory on the unrestricted VAR (i.e., eqn. 13) to identify an underlying structure embedded in the data (i.e., eqn. 14).
The process of finding “enough” restrictions is arguably the most difficult part of estimating SVAR models. As a general rule, however, this process is largely guided by our knowledge of economic theory. Some of the most popular approaches found in the literature include: the use of recursive and non-recursive orderings of the shock processes; imposing parametric restrictions on the A matrix; and imposing parametric restrictions on the responses to the shocks in u_t, i.e., the impulse responses.
While there are many types of restrictions that can be used to identify a SVAR, EViews allows two different types. One type imposes restrictions on the short-run behaviour of the system, while the other, introduced by Blanchard and Quah (1989), imposes restrictions on the long run. The Blanchard and Quah restriction scheme considers the vector moving-average representation, and thus the impulse responses, concentrating on the long-run impact that shocks have on the variables. However, in the current application of identifying monetary policy shocks, long-run restrictions are often not admissible because it is now a stylised fact that monetary policy shocks have a zero long-run effect – the so-called long-run monetary policy neutrality (Christiano et al., 1999). Given this, in this User Guide we limit our application to the imposition of short-run restrictions. Nonetheless, where permissible, starting with EViews 7.1 it is possible to impose both long- and short-run restrictions at the same time. It is also possible to impose sign restrictions to identify SVARs, subject to having access to MATLAB software on the same computer for it to work – a real limitation when it comes to imposing sign restrictions in EViews.
5.3 Imposing short-run identifications
To impose short-run restrictions in EViews, we use eqn. 14, in which we estimate the random stochastic residual A⁻¹Bu_t from the residuals ε_t of the estimated VAR given in Table 10. Hence, in view of eqns. 13 and 14, ε_t becomes:

$$\varepsilon_t = A^{-1} B u_t \qquad (15)$$

or equivalently, as written in EViews,

$$A \varepsilon_t = B u_t \qquad (15^*)$$
In requiring that restrictions or identifying schemes are of the form given by
eqn. 15*, EViews follows what is known as the AB model, where A and B are
interpreted as above.
From eqn. 14, we can show that:

$$\varepsilon_t \varepsilon_t' = A^{-1} B u_t u_t' B' (A^{-1})'. \qquad (16)$$

Applying the expectation operator to both sides, we get:

$$E(\varepsilon_t \varepsilon_t') = A^{-1} B\, E(u_t u_t')\, B' (A^{-1})' = A^{-1} B B' (A^{-1})', \quad \text{since } E(u_t u_t') = I_n, \qquad (17)$$

which clearly relates the coefficients of the structural and reduced-form equations. Under the assumptions that E(u_t u_t') = Σ = I_n and E(ε_t ε_t') = Ω, eqn. 17 gives:

$$E(\varepsilon_t \varepsilon_t') = \Omega = A^{-1} B B' (A^{-1})', \qquad (18)$$

where Ω is a symmetric matrix that has k(k + 1)/2 distinct elements. This matrix plays a key role in the identification scheme.
The key question is whether we can identify all the elements in A and B from Ω. A necessary condition for identification in eqn. 18 is that the number of equations in the system equal the number of unknown parameters. A sufficient condition is that the equations in eqn. 18 not be linear combinations of one another. When the VAR model has k endogenous variables, the symmetry of the variance-covariance matrix E(ε_t ε_t') = Ω in eqn. 18 implies that it supplies only k(k + 1)/2 distinct equations with which to pin down the 2k² unknown elements in A and B. Since the two matrices A and B have k² elements each, we need to impose a total of $2k^2 - k(k+1)/2 = (3k^2 - k)/2$ restrictions on the two matrices to achieve identification, either recursively or non-recursively. In so doing, we impose restrictions on the SVAR given by eqn. 14.
5.3.1 Imposing a recursive Identification Scheme
This approach involves imposing restrictions on the A matrix so that it is lower triangular, with the structural shocks uncorrelated. In practical terms, the approach makes use of the Cholesky decomposition for estimation.
In this case, and using the above AB model, we have a VAR model with k = 5 endogenous variables. This requires us to impose at least $(3(5^2) - 5)/2 = 35$ restrictions, and the resultant Cholesky decomposition scheme may be represented as:
$$A = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ a_{21} & 1 & 0 & 0 & 0 \\ a_{31} & a_{32} & 1 & 0 & 0 \\ a_{41} & a_{42} & a_{43} & 1 & 0 \\ a_{51} & a_{52} & a_{53} & a_{54} & 1 \end{bmatrix}, \qquad B = \begin{bmatrix} b_{11} & 0 & 0 & 0 & 0 \\ 0 & b_{22} & 0 & 0 & 0 \\ 0 & 0 & b_{33} & 0 & 0 \\ 0 & 0 & 0 & b_{44} & 0 \\ 0 & 0 & 0 & 0 & b_{55} \end{bmatrix} \qquad (19)$$
The 35 restrictions comprise 10 zero restrictions in matrix A, 20 zero restrictions in matrix B and 5 normalisation restrictions on the diagonal of matrix A. Here the Cholesky decomposition assumes that shocks are propagated in the order output, inflation, money, interest rate and exchange rate. This ordering is intended to ensure that the non-policy variables, output and inflation, respond slowly to changes in the policy variables (money, interest rate and exchange rate), as monetary policy would not be expected to affect output over a one-period horizon.
These five equations tell a nice story about the initial or short-run responses of the endogenous variables. The set-up is such that a given endogenous variable is contemporaneously determined by those variables “above it” in the system but not by those “below it”, which affect it with a lag. For example, in our ordering scheme, output only responds to lags of itself and to lags of the other four variables, i.e., in the short run, output does not respond to any variable contemporaneously. Inflation, on the other hand, reacts to output contemporaneously, with the sign and magnitude of this contemporaneous response given by the coefficient a_21. Putting it all together, we readily see that the variable ordered first does not react contemporaneously to the variables ordered below it but does affect all of them contemporaneously; equivalently, each variable is affected by the variables ordered below it only with a lag, not on impact. Similarly, the variable ordered second responds contemporaneously to the variable above it but with a delay to the variable ordered third in the scheme, and so on. Finally, the variable ordered last responds contemporaneously to all the variables ordered above it or, alternatively, it does not affect the variables above it in the current period.
Put simply, if we take u_1t to be a demand shock, u_2t a supply or cost-push shock, u_3t a money shock, u_4t a monetary or interest-rate shock, and u_5t an exchange rate shock, then: the cost-push shock does not contemporaneously affect output, i.e., output is not affected on impact by a cost-push shock. Neither output nor inflation is affected on impact by a money shock, i.e., the money shock affects inflation and output only with a lag. Output, inflation and money are not affected on impact by the interest rate shock, and so on. In other words, according to the interest rate rule, the impact of monetary policy on money, inflation and output is felt with a lag.
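Numerically, this recursive scheme is nothing more than a Cholesky factorisation of the reduced-form covariance matrix: the lower triangular factor plays the role of A⁻¹B in eqn. 15, and solving with it recovers the structural shocks. A minimal sketch, continuing from the assumed statsmodels set-up with the columns ordered lgdp, lcpi, lm2, tb91, lexr:

# A minimal sketch of recursive (Cholesky) identification.
import numpy as np

omega = res.sigma_u.values                  # reduced-form covariance, Omega
P = np.linalg.cholesky(omega)               # lower triangular: P @ P.T = Omega
u = np.linalg.solve(P, res.resid.values.T)  # structural shocks: u_t = P^{-1} eps_t
print(np.round(np.cov(u), 4))               # approximately the identity matrix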
In EViews, these restrictions can be imposed either in Text or in Matrix form, starting from the estimated unrestricted VAR in Table 10. Imposing restrictions in Matrix form is the more straightforward of the two; nonetheless, we start this implementation with an illustration of the Text form restrictions.
5.3.1.1 Text form restrictions
In terms of implementation in EViews, we need to navigate the route leading to the VAR(2) results in Table 10, and here I proceed on that very assumption. However, unlike before, this time the variables are ordered consistently with eqn. 19, i.e., lgdp, lcpi, lm2, tb91, lexr.
With the VAR(2) results as in Table 10, to impose the 35 restrictions given in eqn. 19 in Text format, select Proc and, from the drop-down menu, select Estimate Structural Factorization, as shown in the screen print here.
This brings to the fore the SVAR Options dialogue box (shown here), which consists of Identifying restrictions and Optimization Control options. We deal with the Identifying restrictions option for now, which allows us to impose short-run and long-run restrictions under the Identifying restrictions (Ae = Bu where E[uu'] = I) box – the representation given by eqn. 15* in the text. Select Text. This option presupposes that each endogenous variable has a specific variable number; for example, in a k = 5 variable SVAR:
@e1 for LGDP residuals
@e2 for LCPI residuals
@e3 for LM2 residuals
@e4 for TB91 residuals
@e5 for LEXR residuals
The identifying restrictions are imposed in terms of the ε's, which are the residuals from the reduced-form VAR estimates, and the u's, which are the structural, fundamental or ‘primitive’ random (stochastic) errors in the structural system, formulated for estimation according to the relationship in eqn. 15*.
At this point, we enter the following in the text box (it is easiest to simply copy the suggested short-run factorisation example in the Identifying restrictions box and paste it into the empty Identifying Restrictions box at the bottom), as in the above screen print:
@e1 = C(1)*@u1
@e2 = C(2)*@e1 + C(3)*@u2
@e3 = C(4)*@e1 + C(5)*@e2 + C(6)*@u3
@e4 = C(7)*@e1 + C(8)*@e2 + C(9)*@e3 + C(10)*@u4
@e5 = C(11)*@e1 + C(12)*@e2 + C(13)*@e3 + C(14)*@e4 + C(15)*@u5
The way to interpret these restrictions is that they represent the entries of the matrices linking ε_t (@e) and u_t (@u) via eqn. 15, i.e., ε_t = A⁻¹Bu_t.
Given the matrices A and B (eqn. 19), the matrix representation of ε_t = A⁻¹Bu_t is no more than:

$$\begin{pmatrix} \varepsilon_{1t} \\ \varepsilon_{2t} \\ \varepsilon_{3t} \\ \varepsilon_{4t} \\ \varepsilon_{5t} \end{pmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ a_{21} & 1 & 0 & 0 & 0 \\ a_{31} & a_{32} & 1 & 0 & 0 \\ a_{41} & a_{42} & a_{43} & 1 & 0 \\ a_{51} & a_{52} & a_{53} & a_{54} & 1 \end{bmatrix}^{-1} \begin{bmatrix} b_{11} & 0 & 0 & 0 & 0 \\ 0 & b_{22} & 0 & 0 & 0 \\ 0 & 0 & b_{33} & 0 & 0 \\ 0 & 0 & 0 & b_{44} & 0 \\ 0 & 0 & 0 & 0 & b_{55} \end{bmatrix} \begin{pmatrix} u_{1t} \\ u_{2t} \\ u_{3t} \\ u_{4t} \\ u_{5t} \end{pmatrix} \qquad (20)$$
Exploiting the fact that the inverse of a lower (upper) triangular matrix is itself lower (upper) triangular readily facilitates the matrix algebra that yields the above set of EViews restrictions – though, frankly, the algebra involved is quite tedious. Click OK, and the resultant output is given in Table 16.
Table 16: Just-Identified SVAR estimates (short-run text form)
Structural VAR Estimates
Date: 04/29/18 Time: 11:48
Sample (adjusted): 2000Q3 2017Q3
Included observations: 69 after adjustments
Estimation method: method of scoring (analytic derivatives)
Convergence achieved after 15 iterations
Structural VAR is just-identified
Model: Ae = Bu where E[uu']=I
Restriction Type: short-run text form
@e1 = C(1)*@u1
@e2 = C(2)*@e1 + C(3)*@u2
@e3 = C(4)*@e1 + C(5)*@e2 + C(6)*@u3
@e4 = C(7)*@e1 + C(8)*@e2 + C(9)*@e3 + C(10)*@u4
@e5 = C(11)*@e1 + C(12)*@e2 + C(13)*@e3 + C(14)*@e4 + C(15)*@u5
where
@e1 represents LGDP residuals
@e2 represents LCPI residuals
@e3 represents LM2 residuals
@e4 represents TB91 residuals
@e5 represents LEXR residuals
Coefficient Std. Error z-Statistic Prob.
C(2) -0.124607 0.053609 -2.324360 0.0201
C(4) 0.241603 0.151700 1.592638 0.1112
C(5) 0.200320 0.328061 0.610617 0.5415
C(7) -4.083252 13.96435 -0.292405 0.7700
C(8) -3.757790 29.73856 -0.126361 0.8994
C(9) -22.57606 10.88356 -2.074328 0.0380
C(11) 0.287316 0.178884 1.606159 0.1082
C(12) 2.183418 0.380760 5.734368 0.0000
C(13) 0.305309 0.143611 2.125945 0.0335
C(14) 0.003931 0.001541 2.550563 0.0108
C(1) 0.020856 0.001775 11.74734 0.0000
C(3) 0.009287 0.000791 11.74734 0.0000
C(6) 0.025309 0.002154 11.74734 0.0000
C(10) 2.288076 0.194774 11.74734 0.0000
C(15) 0.029292 0.002494 11.74734 0.0000
Log likelihood 540.5339
Estimated A matrix:
1.000000 0.000000 0.000000 0.000000 0.000000
0.124607 1.000000 0.000000 0.000000 0.000000
-0.241603 -0.200320 1.000000 0.000000 0.000000
4.083252 3.757790 22.57606 1.000000 0.000000
-0.287316 -2.183418 -0.305309 -0.003931 1.000000
Estimated B matrix:
0.020856 0.000000 0.000000 0.000000 0.000000
0.000000 0.009287 0.000000 0.000000 0.000000
0.000000 0.000000 0.025309 0.000000 0.000000
0.000000 0.000000 0.000000 2.288076 0.000000
0.000000 0.000000 0.000000 0.000000 0.029292
The correspondence between the estimated residuals ε_t (denoted by @e) and the structural shocks u_t (denoted by @u) in EViews should be obvious. And so is the correspondence between:
C(1) = b_11 = 0.021***
C(2) = a_21 = 0.125**
C(3) = b_22 = 0.009***
C(4) = a_31 = 0.242
C(5) = a_32 = 0.200
C(6) = b_33 = 0.025***
C(7) = a_41 = 4.083
C(8) = a_42 = 3.758
C(9) = a_43 = 22.576**
C(10) = b_44 = 2.288***
C(11) = a_51 = 0.287
C(12) = a_52 = 2.183***
C(13) = a_53 = 0.305**
C(14) = a_54 = 0.004**
C(15) = b_55 = 0.029***
Asterisks ***, ** and * indicate the 1%, 5% and 10% levels of significance, respectively. All numbers are in absolute values.
Structural VAR proponents try to avoid over-identifying the VAR structure and propose just enough restrictions to identify the parameters uniquely, which is what we have just done in Table 16 – where the EViews output explicitly states that the structural VAR is just-identified. Accordingly, in most cases, provided the identification scheme is appropriate, SVAR models will be just-identified. It is always a good idea to consider this recursive solution first, as it then serves as a benchmark for later analysis. Based on it, we can assess whether there is anything unreasonable about the recursive solution and think about how the system could be modified. In our output in Table 16, five freely estimated coefficients in the A matrix [C(4), C(5), C(7), C(8) and C(11)] are not statistically different from zero, i.e., are insignificant. In other words, the recursive output suggests we could impose five additional zero (over-identifying) restrictions on the A matrix in eqn. 19, so that the decomposition scheme now takes the form:
$$A = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ a_{21} & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & a_{43} & 1 & 0 \\ 0 & a_{52} & a_{53} & a_{54} & 1 \end{bmatrix}, \qquad B = \begin{bmatrix} b_{11} & 0 & 0 & 0 & 0 \\ 0 & b_{22} & 0 & 0 & 0 \\ 0 & 0 & b_{33} & 0 & 0 \\ 0 & 0 & 0 & b_{44} & 0 \\ 0 & 0 & 0 & 0 & b_{55} \end{bmatrix} \qquad (21)$$
We can test this empirical finding further and assess the impact of setting C(4), C(5), C(7), C(8) and C(11) equal to zero. In text form, these over-identifying restrictions become:
@e1 = C(1)*@u1
@e2 = C(2)*@e1 + C(3)*@u2
@e3 = C(6)*@u3
@e4 = C(9)*@e3 + C(10)*@u4
@e5 = C(12)*@e2 + C(13)*@e3 + C(14)*@e4 + C(15)*@u5
Note the absence of C(4) and C(5) in the equation for @e3, of C(7) and C(8) in the equation for @e4, and of C(11) in the equation for @e5.
Re-estimating the model with these five additional restrictions yields the results in Table 17.
Table 17: Over-identified SVAR estimates (short-run text form)
Structural VAR Estimates
Date: 04/30/18 Time: 08:01
Sample (adjusted): 2000Q3 2017Q3
Included observations: 69 after adjustments
Estimation method: method of scoring (analytic derivatives)
Convergence achieved after 10 iterations
Structural VAR is over-identified (5 degrees of freedom)
Model: Ae = Bu where E[uu']=I
Restriction Type: short-run text form
@e1 = C(1)*@u1
@e2 = C(2)*@e1 + C(3)*@u2
@e3 = C(6)*@u3
@e4 = C(9)*@e3 + C(10)*@u4
@e5 = C(12)*@e2 + C(13)*@e3 + C(14)*@e4 + C(15)*@u5
where
@e1 represents LGDP residuals
@e2 represents LCPI residuals
@e3 represents LM2 residuals
@e4 represents TB91 residuals
@e5 represents LEXR residuals
Coefficient Std. Error z-Statistic Prob.
C(2) -0.124607 0.053609 -2.324360 0.0201
C(9) -23.18650 10.69300 -2.168381 0.0301
C(12) 2.013359 0.372419 5.406169 0.0000
C(13) 0.345457 0.144009 2.398854 0.0164
C(14) 0.003844 0.001569 2.450259 0.0143
C(1) 0.020856 0.001775 11.74734 0.0000
C(3) 0.009287 0.000791 11.74734 0.0000
C(6) 0.025776 0.002194 11.74734 0.0000
C(10) 2.289530 0.194898 11.74734 0.0000
C(15) 0.029835 0.002540 11.74734 0.0000
Log likelihood 537.9611
LR test for over-identification:
Chi-square(5) 5.145566 Probability 0.3984
Estimated A matrix:
1.000000 0.000000 0.000000 0.000000 0.000000
0.124607 1.000000 0.000000 0.000000 0.000000
0.000000 0.000000 1.000000 0.000000 0.000000
0.000000 0.000000 23.18650 1.000000 0.000000
0.000000 -2.013359 -0.345457 -0.003844 1.000000
Estimated B matrix:
0.020856 0.000000 0.000000 0.000000 0.000000
0.000000 0.009287 0.000000 0.000000 0.000000
0.000000 0.000000 0.025776 0.000000 0.000000
0.000000 0.000000 0.000000 2.289530 0.000000
0.000000 0.000000 0.000000 0.000000 0.029835
By imposing five additional (zero) restrictions on C(4), C(5), C(7), C(8) and C(11), we have more restrictions than are necessary to identify the SVAR: we needed to impose a minimum of 35 restrictions to achieve identification. Counting restrictions in the A and B matrices, we now have 15 zero restrictions in matrix A, 20 zero restrictions in matrix B and another five normalisation restrictions on the diagonal of matrix A, giving a total of 40 restrictions. The five extra restrictions are reflected in a new entry in Table 17, a likelihood ratio (LR) test for over-identification, with a p-value of 0.398. The fact that the LR test of the over-identifying restrictions is not significant indicates that we cannot reject the null hypothesis that the five elements C(4), C(5), C(7), C(8) and C(11) are equal to zero. We could therefore legitimately set them equal to zero.
Further to that, the value of the test statistic, 5.146, is exactly twice the difference between the log-likelihood values of the unrestricted (540.534) and the restricted (537.961) models. In essence, under these five restrictions the re-estimated relationship between the reduced-form errors ε_t and the structural errors u_t is such that the estimated reduced-form shocks in the @e1 and @e3 equations are each driven by a single structural shock, so the scheme is no longer the recursive (Cholesky) one – which is why additional restrictions are very rarely tested and almost never imposed. We also note that the new estimates of C(2), C(9), C(12), C(13) and C(14) are qualitatively little changed from their earlier values. The same is true for the diagonal entries of the B matrix, which scale the structural shocks. In short, unless we have a (very) good reason for doing so, we should be wary of over-identifying SVARs.
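For completeness, the over-identification statistic in Table 17 can be reproduced by hand from the two reported log-likelihoods:

$$LR = 2(\ell_{\text{just}} - \ell_{\text{over}}) = 2(540.5339 - 537.9611) = 5.146 \sim \chi^2(5), \qquad p = 0.398.$$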
5.3.1.2 Matrix form restrictions
An alternative approach to inputting the identifying restrictions is to use the matrix option. Under this option, you create the two matrices, A and B, with the following entries:
$$A = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ NA & 1 & 0 & 0 & 0 \\ NA & NA & 1 & 0 & 0 \\ NA & NA & NA & 1 & 0 \\ NA & NA & NA & NA & 1 \end{bmatrix}, \qquad B = \begin{bmatrix} NA & 0 & 0 & 0 & 0 \\ 0 & NA & 0 & 0 & 0 \\ 0 & 0 & NA & 0 & 0 \\ 0 & 0 & 0 & NA & 0 \\ 0 & 0 & 0 & 0 & NA \end{bmatrix} \qquad (22)$$
Unlike in the text form above, the matrices are created by going to Object on the main menu of the EViews work file window, then New Object, as shown in the screen print. This opens a New Object window of the form:
From the list of available possibilities under Type of object, choose the Matrix-Vector-Coef option. Under Name for object, make sure to give the matrix an appropriate name – here matrix_a and, later, matrix_b. Click OK.
In the New Matrix window that pops up, the EViews default Type is Matrix. It also provides for the Dimension of the matrix, where I have input 5 rows and 5 columns (the size of matrix A in the text), as shown in the screen print. Click OK.
With this execution, all entries in the matrix that comes up are zero. This must be edited to reflect the structure in eqn. 22 above before it can be used in the estimation of the SVAR. To do this, select the Edit +/- option in the open matrix window to edit the individual cells of the matrix we have created. We then enter the individual elements of the first matrix, matrix A, as in eqn. 22, consisting of ones, zeros and NAs, where NA represents those elements of the matrix that are unknown coefficients to be estimated. Once we are done, click Edit +/- again, and then close the window.
Repeat the procedure for the second matrix, matrix B. The inputs, consistent with the entries of matrices A and B in eqn. 22, are shown here.
Once the two matrices have been created, both matrix_a and matrix_b will appear among the objects listed in the EViews work file. The next step is to return to the estimated VAR(2) model in Table 10, ordering the variables, as with the text form, in the order lgdp, lcpi, lm2, tb91, lexr.
Go to Proc and, in the drop-down menu, navigate to Estimate Structural Factorization, as described before; select Matrix and then short-run pattern, providing the names of matrices A (matrix_a) and B (matrix_b) as appropriate, as shown in the screen print. Click OK.
Table 18: SVAR estimates (short-run pattern matrix option)
Structural VAR Estimates
Date: 04/30/18 Time: 11:32
Sample (adjusted): 2000Q3 2017Q3
Included observations: 69 after adjustments
Estimation method: method of scoring (analytic derivatives)
Convergence achieved after 1 iterations
Structural VAR is just-identified
Model: Ae = Bu where E[uu']=I
Restriction Type: short-run pattern matrix
A =
1 0 0 0 0
C(1) 1 0 0 0
C(2) C(5) 1 0 0
C(3) C(6) C(8) 1 0
C(4) C(7) C(9) C(10) 1
B =
C(11) 0 0 0 0
0 C(12) 0 0 0
0 0 C(13) 0 0
0 0 0 C(14) 0
0 0 0 0 C(15)
Coefficient Std. Error z-Statistic Prob.
C(1) 0.124607 0.053609 2.324360 0.0201
C(2) -0.241603 0.151700 -1.592638 0.1112
C(3) 4.083252 13.96435 0.292405 0.7700
C(4) -0.287316 0.178884 -1.606159 0.1082
C(5) -0.200320 0.328061 -0.610617 0.5415
C(6) 3.757790 29.73856 0.126361 0.8994
C(7) -2.183418 0.380760 -5.734368 0.0000
C(8) 22.57606 10.88356 2.074328 0.0380
C(9) -0.305309 0.143611 -2.125945 0.0335
C(10) -0.003931 0.001541 -2.550563 0.0108
C(11) 0.020856 0.001775 11.74734 0.0000
C(12) 0.009287 0.000791 11.74734 0.0000
C(13) 0.025309 0.002154 11.74734 0.0000
C(14) 2.288076 0.194774 11.74734 0.0000
C(15) 0.029292 0.002494 11.74734 0.0000
Log likelihood 540.5339
Estimated A matrix:
1.000000 0.000000 0.000000 0.000000 0.000000
0.124607 1.000000 0.000000 0.000000 0.000000
-0.241603 -0.200320 1.000000 0.000000 0.000000
4.083252 3.757790 22.57606 1.000000 0.000000
-0.287316 -2.183418 -0.305309 -0.003931 1.000000
Estimated B matrix:
0.020856 0.000000 0.000000 0.000000 0.000000
0.000000 0.009287 0.000000 0.000000 0.000000
0.000000 0.000000 0.025309 0.000000 0.000000
0.000000 0.000000 0.000000 2.288076 0.000000
0.000000 0.000000 0.000000 0.000000 0.029292
The resulting output, given here in Table 18, is identical to that obtained in Table 16 using the text form.
5.4 Generating Impulse Response Functions and Forecast Error Variance Decomposition
Two very useful outputs from the above VAR modelling process are the
impulse response function (IRF) and the forecast error variance decomposition
(FEVD). An IRF traces the effect of a shock to one of the innovations of the
VAR on current and future values of the endogenous variables. As such, a
shock to the i-th variable directly affects the i-th variable itself
contemporaneously, and is also transmitted to all of the endogenous variables
with a lag through the dynamic structure of the VAR.
In the case of identified SVARs, impulse responses show how the different variables in the system respond to the identified structural shocks, i.e., they show the dynamic interactions between the endogenous variables in the VAR(p) process. Since we have 'identified' the SVAR, the impulse responses will depict the responses to a structural shock – here, the structural shock to the interest rate. We focus on these because the results of the impulse response analysis are often more informative than the parameter estimates of the SVAR coefficients themselves.
The same is true for forecast error variance decompositions, which are also
popular tools for interpreting VAR models. While IRFs trace the effect of a
shock to one endogenous variable onto the other variables in the system,
forecast error variance decompositions (or variance decompositions for short)
separate the variation in an endogenous variable into the contributions
explained by the component shocks in the VAR. In other words, the variance
decomposition tells us the proportion of the movements in a variable due to its
‘own’ shock versus shocks to the other variables. Thus, the variance
decomposition provides information about the relative importance of each
(structural) shock in affecting the variables in the VAR. In much empirical
work, it is typical for a variable to explain almost all of its own forecast error
variance at short horizons and smaller proportions at longer horizons. Such a
delayed effect of the other endogenous variables is not unexpected, as the
effects from the other variables are propagated through the reduced-form VAR
with lags. In what follows, we attempt to generate IRFs using the Cholesky
decomposition following a shock to tb91.
Taking MTM Theories and the VAR Model to the Data
89 |
To do this, in the open SVAR output, click on the Impulse button at the top of the estimated SVAR output box. This opens the impulse responses window, where we select the Multiple Graphs option under Display Format, then the Analytic (asymptotic) option for Response Standard Errors. We enter (under Display Information) the variable we want to shock (Impulses) – here tb91 – and the variables for which we want to observe the responses (Responses) – here lgdp lcpi lm2 tb91 lexr – noting that these appear in the order in which the SVAR has been estimated. Alternatively, one may simply enter the numbers corresponding to the ordering of the variables. As shown in the screen print, we type tb91 (or 4) in the Impulses box and leave all five variables as they are in the Responses box. This option will show the impulse response of each variable to a structural shock to tb91 (or u_{4t}). We plot impulse responses over 20 periods (a horizon of some five years). The two standard error bands of the impulse response functions are based on analytical (or asymptotic, i.e., large-sample) results, as in the screen print.
We also need to define the nature of the impulse responses we are interested in estimating. This is done under the Impulse Definition option. The default option in EViews is Cholesky-dof adjusted, which would be the appropriate choice for the unrestricted VAR estimated in Table 10. However, as our model is structural, we need to choose the Structural Decomposition option, which makes use of the identification scheme that we earlier specified in estimating the SVAR, whether in text or matrix form. Then click OK, which yields Figure 8.
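As a scripted counterpart, a sketch continuing from the statsmodels example after eqn. 22 (same assumed data and fitted object res) generates the structural impulse responses directly; note that statsmodels' error-band options for SVAR IRFs differ from EViews' analytic bands:

    # Structural impulse responses over 20 quarters from the fitted SVAR
    irf = res.irf(periods=20)
    # Responses of all five variables to the tb91 shock (cf. Figure 8),
    # with Monte Carlo error bands rather than EViews' analytic ones.
    irf.plot(impulse="tb91", stderr_type="mc", repl=100, seed=29)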
Figure 8 consists of five charts of impulse response functions, corresponding to the five endogenous variables we have modelled. Specifically, it shows the impact of a one standard deviation shock, defined as an exogenous, one-time positive shock to the short-term interest rate (i.e., a monetary policy shock). The solid line in each graph is the estimated response, while the dashed lines denote a two
standard error confidence band around the estimate. The short-term interest rate obviously increases as a result of a one-time positive shock to itself (the increase is equal to 2.3, a value we have come across before as C(14) in Table 18, or C(10) in Table 16, i.e., the standard deviation of the structural monetary policy shock), but the effect of the monetary shock decays to zero as t → ∞, which is consistent with the Christiano et al. (1999) stylized fact of long-run monetary policy neutrality.
In response to a positive one standard deviation structural shock to tb91, gdp first falls for some two periods before rising for some two periods, but the long-run effect of the monetary policy shock is zero. The aggregate price level responds quite strongly – but with the opposite sign to what theory would predict. While this is a manifestation of the so-called price (or 'inflation') puzzle – we would not expect inflation to increase with a policy rate tightening – it is also true that inflation responds to policy tightening with a lag owing to sticky prices. M2 falls as expected, while the exchange rate depreciates (another puzzle). Once we add the plus and minus two standard error bands, we can see how significant these effects are. The negative response of GDP is insignificant throughout, and so are the responses of inflation and M2. The response of tb91 to its own shock is significant and persists for some four periods before becoming insignificant, while exr shows a significant response to tb91 for the first three periods.
Figure 8: IRF to one standard-deviation structural shock in tb91 (recursive structural factorisation)
[Figure: five panels over a 20-quarter horizon – Response of LGDP to Shock4, Response of LCPI to Shock4, Response of LM2 to Shock4, Response of TB91 to Shock4 and Response of LEXR to Shock4 – shown as responses to structural one S.D. innovations ± 2 S.E.]
We now demonstrate how to generate forecast error variance decompositions
for the recursively identified SVAR, assuming we can still navigate through the
process to the results in either Table 16 or Table 18.
While in the SVAR results window, select View and Variance Decomposition…, and in the VAR Variance Decomposition dialogue box that pops up (in the screen print here), select, under Display Format, the Table option and 12 periods, and Structural Decomposition under Factorization. The EViews default is Cholesky Decomposition, the option we would pick if we were generating a variance decomposition from the unrestricted VAR given in Table 10. Click OK for the output in Table 19.
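A scripted counterpart (continuing the same statsmodels sketch) is below. Note that statsmodels builds the decomposition from orthogonalized (Cholesky) innovations; because the identification scheme used here is recursive, this coincides with the structural decomposition reported in Table 19:

    # 12-period forecast error variance decomposition from the fitted model
    fevd = res.fevd(periods=12)
    fevd.summary()   # one panel per variable; shares sum to one per horizon
    fevd.plot()      # optional graphical display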
Table 19: Variance decomposition
Variance Decomposition of LGDP:
Period S.E. Shock1 Shock2 Shock3 Shock4 Shock5
1 0.020856 100.0000 2.25E-30 2.27E-28 2.76E-27 5.41E-31
2 0.023083 98.23935 1.319503 0.170100 0.260295 0.010749
3 0.025898 93.32557 3.897530 2.186899 0.217606 0.372391
4 0.028073 87.52252 7.435605 3.585088 0.366931 1.089853
5 0.030473 80.68891 10.75726 5.506875 0.864001 2.182953
6 0.032984 74.09178 13.92871 6.822371 1.764045 3.393101
7 0.035614 68.31438 16.62026 7.584801 2.932796 4.547764
8 0.038248 63.67140 18.78895 7.846986 4.164553 5.528115
9 0.040792 60.17446 20.43094 7.805573 5.291171 6.297856
10 0.043171 57.66623 21.62332 7.614181 6.225563 6.870701
11 0.045342 55.92972 22.46042 7.377514 6.948161 7.284192
12 0.047294 54.75151 23.03587 7.151855 7.480612 7.580151
Variance Decomposition of LCPI:
Period S.E. Shock1 Shock2 Shock3 Shock4 Shock5
1 0.009644 7.261370 92.73863 1.33E-31 0.000000 0.000000
2 0.016862 5.883174 91.10335 0.971154 1.899416 0.142909
3 0.022703 5.327304 89.35431 1.883251 3.185051 0.250080
4 0.027045 5.083353 87.59703 3.366052 3.728604 0.224960
5 0.030023 4.868406 85.85633 5.323202 3.766082 0.185980
6 0.031967 4.583894 83.99350 7.611276 3.522957 0.288369
7 0.033242 4.271190 81.80704 10.00898 3.259347 0.653448
8 0.034170 4.081406 79.14278 12.25134 3.196361 1.328113
9 0.034993 4.208826 75.96740 14.10455 3.450010 2.269207
10 0.035861 4.800899 72.39206 15.43595 4.006707 3.364390
11 0.036836 5.895774 68.62550 16.23871 4.758733 4.481283
12 0.037925 7.421278 64.89101 16.60111 5.571908 5.514690
Variance Decomposition of LM2:
Period S.E. Shock1 Shock2 Shock3 Shock4 Shock5
1 0.025776 3.072632 0.520949 96.40642 4.80E-29 2.49E-32
2 0.033667 4.427342 4.552485 88.86896 0.773812 1.377400
3 0.042345 6.270284 9.073304 75.66169 4.106420 4.888305
4 0.050956 8.543194 12.87681 62.38385 7.858896 8.337249
5 0.059205 11.46141 15.19725 51.55981 11.01162 10.76992
6 0.066749 14.66089 16.48401 43.41708 13.22545 12.21258
7 0.073476 17.89582 17.12614 37.45861 14.55706 12.96236
8 0.079378 20.94977 17.40697 33.12309 15.22674 13.29344
9 0.084538 23.69892 17.50004 29.94770 15.45788 13.39545
10 0.089083 26.08451 17.51274 27.58868 15.42920 13.38487
11 0.093157 28.09800 17.50815 25.80202 15.26426 13.32757
12 0.096895 29.76242 17.52230 24.41610 15.04103 13.25816
Variance Decomposition of TB91:
Period S.E. Shock1 Shock2 Shock3 Shock4 Shock5
1 2.366252 0.562070 0.105622 5.830751 93.50156 3.02E-33
2 3.232146 2.314144 14.87812 3.951660 77.21564 1.640431
3 3.838388 4.813424 22.00545 3.172673 67.19508 2.813371
4 4.156673 6.675522 23.52914 4.007825 62.67250 3.115019
5 4.295984 8.118661 23.15342 5.465784 60.24070 3.021438
6 4.355723 8.875209 22.54899 6.865976 58.75413 2.955692
7 4.394796 9.038630 22.35688 7.744007 57.77082 3.089663
8 4.436156 8.898150 22.58065 8.067919 57.08877 3.364506
9 4.480784 8.746168 22.98271 8.044305 56.58043 3.646386
10 4.522175 8.726422 23.35126 7.909328 56.16557 3.847414
11 4.554874 8.839252 23.59371 7.802269 55.81333 3.951439
12 4.577234 9.017644 23.71177 7.760710 55.52457 3.985312
Variance Decomposition of LEXR:
Period S.E. Shock1 Shock2 Shock3 Shock4 Shock5
1 0.037310 0.071858 30.31975 2.158130 5.811360 61.63891
2 0.050002 1.388463 22.64017 5.450584 11.54432 58.97646
3 0.056614 4.860304 18.95208 9.688222 13.79740 52.70200
4 0.060763 8.709735 16.93905 13.84737 13.64499 46.85885
5 0.063471 11.38523 15.79756 17.10348 12.74365 42.97009
6 0.065198 12.68156 15.13367 19.25494 12.09297 40.83686
7 0.066297 13.00616 14.72476 20.46982 11.96752 39.83174
8 0.067031 12.87898 14.45275 21.02417 12.23125 39.41286
9 0.067563 12.67738 14.25879 21.19181 12.63849 39.23354
10 0.067978 12.58546 14.11501 21.17524 13.00522 39.11907
11 0.068319 12.64150 14.00704 21.09330 13.25427 39.00388
12 0.068608 12.81189 13.92312 21.00285 13.38703 38.87510
Factorization: Structural
Table 19 gives the variance decomposition of the five variables in the VAR with respect to the identified structural shocks. As you may have realized during implementation, the variance decompositions in Table 19 are given without standard errors.

Of some interest is the effect of nominal variables on real variables, as it is sometimes found that monetary policy shocks explain only a small part of the variance of output. Indeed, the short-term interest rate explains only a small percentage of the variance of output here – some 7.5% after 12 periods. The share of the variance of output due to a shock to itself is as high as 55%, with a combined contribution of the policy variables (money, interest and exchange rate) of about 22% at the end of the horizon. The picture is the same for CPI and tb91, where the percentage of the variance explained by the other variables amounts to 35% and 44.5%, respectively, with more than half of the variance of each explained by its own shock in period 12.
5.5 Non-recursive Identification Scheme
Under a non-recursive identification scheme, we are not concerned with a particular ordering of the endogenous variables; rather, our interest is in ensuring that the short-run restrictions make economic sense. Recall that in our recursive ordering scheme we assumed that the non-policy variables come first and the policy variables come last; under a non-recursive scheme we could instead start with policy variables and end with non-policy variables, while ensuring all the contemporaneous relationships are identified. Recall also that the main requirement of just-identification is that we can uniquely recover all the parameters in the A and B matrices from the variance-covariance matrix Ω of the estimated residuals, which we achieve by imposing the necessary (3k² − k)/2 additional restrictions.
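To fix ideas, with k = 5 endogenous variables this amounts to

$$\frac{3k^{2}-k}{2}=\frac{3(5)^{2}-5}{2}=\frac{70}{2}=35$$

additional restrictions – the figure we shall meet again below when the order condition is checked.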
We can therefore impose identifying structures different from the recursive one, here for the variables ordered exr, GDP, CPI, tb91 and M2. Accordingly, as with eqn. 19, we postulate the following non-recursive system:
$$A=\begin{bmatrix}1&a_{12}&a_{13}&a_{14}&a_{15}\\0&1&0&0&0\\0&a_{32}&1&0&0\\0&a_{42}&a_{43}&1&a_{45}\\0&a_{52}&a_{53}&0&1\end{bmatrix},\qquad B=\begin{bmatrix}b_{11}&0&0&0&0\\0&b_{22}&0&0&0\\0&0&b_{33}&0&0\\0&0&0&b_{44}&0\\0&0&0&0&b_{55}\end{bmatrix}\tag{23}$$
Here, the first row now allows all of the other variables in the system to affect exr contemporaneously, while in the second row GDP has a contemporaneous relationship only with itself. In the third row, there is a contemporaneous effect of GDP on CPI; in the fourth, a contemporaneous effect of GDP, CPI and M2 on the interest rate; and finally, in the fifth row, a contemporaneous effect of GDP and CPI on money. Note that while we have the same number of zeros as in eqn. 19, we have located them in different places; the basic structures of matrices A and B remain unchanged, and matrix A still has ones on the main diagonal (this is a normalisation).
These restrictions are most easily imposed using the matrix form in EViews. As before, we need to create the two matrices A and B by the procedure described earlier; for clarity, they take the form:
$$A=\begin{bmatrix}1&NA&NA&NA&NA\\0&1&0&0&0\\0&NA&1&0&0\\0&NA&NA&1&NA\\0&NA&NA&0&1\end{bmatrix},\qquad B=\begin{bmatrix}NA&0&0&0&0\\0&NA&0&0&0\\0&0&NA&0&0\\0&0&0&NA&0\\0&0&0&0&NA\end{bmatrix}\tag{24}$$
I have named these matrix_a2 and matrix_b2, respectively, in the EViews work file. Once you have created the two new matrices, as I have, estimate the VAR(2) as before, noting the ordering of variables for this round (lexr, lgdp, lcpi, tb91, lm2). In the estimated VAR(2) results window, select Proc, Estimate Structural Factorisation..., select Matrix, Short-run pattern and enter the two matrices (matrix_a2 and matrix_b2) in the appropriate places. Click OK to estimate the elements of the two matrices, as shown in Table 20.
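In the statsmodels sketch introduced earlier, the same non-recursive pattern of eqn. 24 would be written as follows (a sketch only, with B as before and the column order matching this round's ordering):

    # Pattern of eqn. 24; variable order: lexr, lgdp, lcpi, tb91, lm2
    A2 = np.array([[1, "E", "E", "E", "E"],
                   [0, 1, 0, 0, 0],
                   [0, "E", 1, 0, 0],
                   [0, "E", "E", 1, "E"],
                   [0, "E", "E", 0, 1]], dtype=object)

    endog2 = data[["lexr", "lgdp", "lcpi", "tb91", "lm2"]]
    res2 = SVAR(endog2, svar_type="AB", A=A2, B=B).fit(maxlags=2)
    print(res2.A)  # cf. the estimated A matrix in Table 20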
Table 20: Just-identified Non-recursive SVAR estimates (matrix form)
Structural VAR Estimates
Date: 05/02/18 Time: 09:53
Sample (adjusted): 2000Q3 2017Q3
Included observations: 69 after adjustments
Estimation method: method of scoring (analytic derivatives)
Convergence achieved after 1 iterations
Structural VAR is just-identified
Model: Ae = Bu where E[uu']=I
Restriction Type: short-run pattern matrix
A =
1 C(1) C(5) C(8) C(9)
0 1 0 0 0
0 C(2) 1 0 0
0 C(3) C(6) 1 C(10)
0 C(4) C(7) 0 1
B =
C(11) 0 0 0 0
0 C(12) 0 0 0
0 0 C(13) 0 0
0 0 0 C(14) 0
0 0 0 0 C(15)
Coefficient Std. Error z-Statistic Prob.
C(1) -0.287316 0.178884 -1.606159 0.1082
C(2) 0.124607 0.053609 2.324360 0.0201
C(3) 4.083252 13.96435 0.292405 0.7700
C(4) -0.241603 0.151700 -1.592638 0.1112
C(5) -2.183418 0.380760 -5.734368 0.0000
C(6) 3.757790 29.73856 0.126361 0.8994
C(7) -0.200320 0.328061 -0.610617 0.5415
C(8) -0.003931 0.001541 -2.550563 0.0108
C(9) -0.305309 0.143611 -2.125945 0.0335
C(10) 22.57606 10.88356 2.074328 0.0380
C(11) 0.029292 0.002494 11.74734 0.0000
C(12) 0.020856 0.001775 11.74734 0.0000
C(13) 0.009287 0.000791 11.74734 0.0000
C(14) 2.288076 0.194774 11.74734 0.0000
C(15) 0.025309 0.002154 11.74734 0.0000
Log likelihood 540.5339
Estimated A matrix:
1.000000 -0.287316 -2.183418 -0.003931 -0.305309
0.000000 1.000000 0.000000 0.000000 0.000000
0.000000 0.124607 1.000000 0.000000 0.000000
0.000000 4.083252 3.757790 1.000000 22.57606
0.000000 -0.241603 -0.200320 0.000000 1.000000
Estimated B matrix:
0.029292 0.000000 0.000000 0.000000 0.000000
0.000000 0.020856 0.000000 0.000000 0.000000
0.000000 0.000000 0.009287 0.000000 0.000000
0.000000 0.000000 0.000000 2.288076 0.000000
0.000000 0.000000 0.000000 0.000000 0.025309
5.6 Comparing Recursively and Non-recursively Identified SVAR
One natural question is which of the two schemes, recursive or non-recursive, is better – something we can evaluate using the maximised value of the log-likelihood function, where a model with a higher log-likelihood is generally preferred. In both the non-recursive (Table 20) and the recursive (Table 18) models, the log-likelihood is exactly the same, at 540.5339. This means that the two are observationally equivalent: we cannot choose between them on the fit of the data alone, and other criteria may be required. One possibility in this regard is visual inspection of the IRFs. Since we have described the process of generating IRFs before, leading to the charts in Figure 8, there is no need to replicate the same steps here. I therefore proceed, following strictly the same procedure, to produce impulse response charts from the non-recursive scheme in Figure 9.
Figure 9: IRF to one standard-deviation structural shock in tb91 (non-recursive structural factorisation)
[Figure: five panels over a 20-quarter horizon – Response of LEXR to Shock4, Response of LGDP to Shock4, Response of LCPI to Shock4, Response of TB91 to Shock4 and Response of LM2 to Shock4 – shown as responses to structural one S.D. innovations ± 2 S.E.]
These new impulse responses are broadly aligned with those in Figure 8; notably, the price and exchange rate puzzles persist following a contractionary monetary policy shock. So far, then, we are unable to judge the superiority of one ordering over the other!

Nonetheless, some caution about the non-recursive ordering scheme is in order. While there may be arguments favouring a non-recursive ordering, there is also a substantial cost. A broader set of economic relations must be identified to make sense of the non-recursive structure, and it turns out, in practice, that it is not enough just to have the same number of restrictions in the A matrix. Assume, for example, as with eqn. 19, the following non-recursive system:
$$A=\begin{bmatrix}1&a_{12}&0&0&0\\0&1&a_{23}&a_{24}&0\\a_{31}&0&1&0&0\\a_{41}&0&a_{43}&1&a_{45}\\a_{51}&a_{52}&a_{53}&0&1\end{bmatrix},\qquad B=\begin{bmatrix}b_{11}&0&0&0&0\\0&b_{22}&0&0&0\\0&0&b_{33}&0&0\\0&0&0&b_{44}&0\\0&0&0&0&b_{55}\end{bmatrix}\tag{25}$$
such that cpi enters the gdp equation contemporaneously (row 1); M2 and the interest rate enter the cpi equation (row 2); GDP enters the M2 equation (row 3); GDP, M2 and exr enter the interest rate equation (row 4); and, finally, GDP, CPI and M2 enter the exr equation (row 5). Again, we have the same number of zeros as in eqn. 19, except that we have located them in different places of the A matrix. Furthermore, the basic structures of matrices A and B remain unchanged, with matrix A having ones on the main diagonal.
As with the earlier cases, these restrictions are easily imposed using the matrix form in EViews. For clarity again, we create the two matrices A and B (matrix_a3 and matrix_b3) by the same procedure described earlier. The matrices take the form:
$$A=\begin{bmatrix}1&NA&0&0&0\\0&1&NA&NA&0\\NA&0&1&0&0\\NA&0&NA&1&NA\\NA&NA&NA&0&1\end{bmatrix},\qquad B=\begin{bmatrix}NA&0&0&0&0\\0&NA&0&0&0\\0&0&NA&0&0\\0&0&0&NA&0\\0&0&0&0&NA\end{bmatrix}\tag{26}$$
Once these have been created, as I have, estimate the VAR(2) as before, noting the ordering of variables for this round (lgdp, lcpi, lm2, tb91, lexr). In the estimated VAR(2) results window, select Proc, Estimate Structural Factorisation..., select Matrix, Short-run pattern and enter the two matrices (matrix_a3 and matrix_b3) in the appropriate places. Click OK to estimate the elements of the two matrices – but this time, instead of the usual neat output, we receive an error message!

Note that even with the right number of restrictions for identification (35), such that the order condition for identification is fulfilled, we encounter a problem with the estimation: the Hessian matrix is near-singular at the final iteration parameter values.
Although such problems can sometimes be resolved by going to the Optimisation Control tab and changing the option for Starting Values from Fixed to Draw from Uniform (0,1), as in the screen print, sadly, this time around the problem has no quick fix: we have run afoul of the rank condition for identification, which is described in, inter alia, Hamilton (1994, p. 332). Taking this into account makes coming up with a good economic rationale for non-recursive restrictions even harder.
Chapter 6
VAR and Vector Error Correction Model (VECM)
6.1 Deriving the VECM Framework
On the basis of unit root testing, we treat all variables, save for tb91 (which is borderline stationary), as unit root non-stationary, i.e., I(1). I(1) behaviour allows (potentially) for cointegration, i.e., a linear combination of I(1) variables that is I(0) – the statistical analogue of economic equilibrium. The logic for this is straightforward: if the trend in one variable is cancelled out by the trend in one (or a linear combination) of the other variable(s), then there must exist forces ensuring that these variables 'do not move too far apart' (i.e., they share a common trend). In economics, such forces constitute the 'equilibrium' relationships posited by economic theory. If there were no relationship tying one variable to another (or to a set of other variables), they would drift arbitrarily far apart, independently of one another.
The link between the VAR in eqn. 6 and cointegration is accommodated in a vector error correction model (VECM) (Johansen, 1988), to which we now turn. Provided the k series of the system are I(1), it is possible to rewrite the VAR(p) in eqn. 6 as a VECM of the form:

$$\Delta Z_t = A_0 + \Gamma_1\Delta Z_{t-1} + \Gamma_2\Delta Z_{t-2} + \cdots + \Gamma_{p-1}\Delta Z_{t-p+1} + \Pi Z_{t-p} + \varepsilon_t \tag{27}$$

where

$$\Gamma_i = -(I - A_1 - A_2 - \cdots - A_i),\quad i = 1,2,\ldots,p-1,\qquad \Pi = -(I - A_1 - A_2 - \cdots - A_p)$$

To make the derivation simpler, consider the VAR(2) model:
$$Z_t = A_0 + A_1 Z_{t-1} + A_2 Z_{t-2} + \varepsilon_t \tag{28}$$

Subtract $Z_{t-1}$ from both sides to give

$$\Delta Z_t = A_0 + (A_1 - I)Z_{t-1} + A_2 Z_{t-2} + \varepsilon_t \tag{29}$$

From the RHS, subtract and add $(A_1 - I)Z_{t-2}$ to give

$$\Delta Z_t = A_0 + \underbrace{(A_1 - I)Z_{t-1} - (A_1 - I)Z_{t-2}}_{(A_1 - I)\Delta Z_{t-1}} + (A_1 - I)Z_{t-2} + A_2 Z_{t-2} + \varepsilon_t \tag{30}$$

$$\Delta Z_t = A_0 + (A_1 - I)\Delta Z_{t-1} + (A_1 + A_2 - I)Z_{t-2} + \varepsilon_t \tag{31}$$

$$\Delta Z_t = A_0 + \Gamma_1\Delta Z_{t-1} + \Pi Z_{t-2} + \varepsilon_t \tag{32}$$

where $\Gamma_1 = -(I - A_1)$ and $\Pi = -(I - A_1 - A_2)$.
In general, the VAR(p) in VECM form is:

$$\Delta Z_t = A_0 + \Pi Z_{t-p} + \sum_{i=1}^{p-1}\Gamma_i\Delta Z_{t-i} + \varepsilon_t \tag{33}$$

where p − 1 is the number of lagged differences – a lag length chosen, on statistical grounds, to be sufficient to remove any remaining serial correlation in the model. We readily see from this expression that the VECM expresses the VAR in I(0) space: the dependent variable is now ΔZ_t ~ I(0), as are the lagged differences ΔZ_{t−i} ~ I(0), which facilitates standard hypothesis testing.
The advantage of this parametrization lies in the economic interpretation of the coefficients. The short-run dynamics are isolated in the Γ_i, which is also useful in examining the concept of Granger causality (Appendix 1). The Π matrix contains the long-run (cointegrating) relationships and the nature of long-run causality – the idea behind Granger's Representation Theorem, namely that for any set of cointegrated variables there exists an error correction representation.

Johansen (1988) shows that we can write the long-run matrix Π as

$$\Pi = \alpha\beta'$$

where α and β are both matrices of dimension (k × r). Under this decomposition, eqn. 33 becomes:
$$\Delta Z_t = A_0 + \alpha\beta' Z_{t-p} + \sum_{i=1}^{p-1}\Gamma_i\Delta Z_{t-i} + \varepsilon_t \tag{36}$$
The r columns of β represent the cointegrating vectors that quantify the 'long-run' (or equilibrium) relation(s) between the variables in the system – as we have suggested, the statistical analogue of economic equilibrium. The r columns of error correction coefficients α load deviations from equilibrium into ΔZ_t for correction, thereby ensuring that the equilibrium is maintained.
Note that:

If Z_t ~ I(1), r can be at most k − 1. In other words, where k = 2, there can be at most one equilibrium relationship between the two variables (i.e., r = 1).

If r = 0, there is no cointegration, and the VECM collapses to a VAR in first differences, that is,

$$\Delta Z_t = A_0 + \sum_{i=1}^{p-1}\Gamma_i\Delta Z_{t-i} + \varepsilon_t \tag{37}$$

If r = 1, there is a single cointegrating relationship – an equilibrium which is a 'long-run' property of the data.
In addition, some interpretation is in order. For clarity, we continue with the VAR(2) model in eqn. 32, but for k = 2 variables, z_1 and z_2, which, when unpacked (ignoring the A_0 term), is:

$$\begin{bmatrix}\Delta Z_{1t}\\\Delta Z_{2t}\end{bmatrix} = \begin{bmatrix}\alpha_{11}\\\alpha_{21}\end{bmatrix}\begin{bmatrix}\beta_{11}&\beta_{21}\end{bmatrix}\begin{bmatrix}Z_{1t-2}\\Z_{2t-2}\end{bmatrix} + \Gamma_1\begin{bmatrix}\Delta Z_{1t-1}\\\Delta Z_{2t-1}\end{bmatrix} + \begin{bmatrix}\varepsilon_{1t}\\\varepsilon_{2t}\end{bmatrix} \tag{38}$$
Ignoring the Γ_1 term for the moment, we focus on the decomposition of the long-run matrix:

$$\Pi = \alpha\beta' \;\Rightarrow\; \begin{bmatrix}\alpha_{11}\\\alpha_{21}\end{bmatrix}\begin{bmatrix}\beta_{11}&\beta_{21}\end{bmatrix}\begin{bmatrix}Z_{1t-2}\\Z_{2t-2}\end{bmatrix} \tag{39}$$

The β_{ji} describe the cointegrating or 'long-run' relationship, while the error correction coefficients, α_{ij}, quantify the speed at which deviations from equilibrium are corrected.

Literally, cointegration implies that in equilibrium:

$$\beta_{11}z_1 + \beta_{21}z_2 = 0 \tag{40}$$
Normalizing on, say, z_1, this relation implies

$$z_1 = \beta z_2, \tag{41}$$

where β = −β_{21}/β_{11}. This is the cointegrating (equilibrium) relationship, and β is called the cointegrating or long-run parameter – noting that the normalization is arbitrary.

Out of equilibrium, deviations are non-zero, i.e.,

$$\beta_{11}z_{1t-2} + \beta_{21}z_{2t-2} \neq 0 \tag{42}$$

and serve to measure the extent to which the variables are out of their equilibrium, with the 'disequilibrium' error

$$e = z_1 - \beta z_2. \tag{43}$$
Because of cointegration, e is I(0). Incorporating the notion of e, eqn. 38 can be rewritten as:

$$\begin{bmatrix}\Delta Z_{1t}\\\Delta Z_{2t}\end{bmatrix} = \Gamma_1\begin{bmatrix}\Delta Z_{1t-1}\\\Delta Z_{2t-1}\end{bmatrix} + \begin{bmatrix}\alpha_{11}\\\alpha_{21}\end{bmatrix}e_{t-2} + \begin{bmatrix}\varepsilon_{1t}\\\varepsilon_{2t}\end{bmatrix} \tag{44}$$
The error correction coefficients (α_{ij}) measure how fast each variable adjusts to deviations from equilibrium (e), if at all. Moreover, if the system variables are indeed cointegrated, then at least one of the α_{ij} ≠ 0. This is essentially the Granger (Nobel Prize) Representation Theorem: cointegration ⇔ error correction. This error correction is the means by which equilibrium is maintained, and whether one or all of the coefficients are significant can be tested.

Where α_{11} = 0, z_1 does not adjust to disequilibrium; in this case it is 'weakly exogenous' for the long run, i.e., z_1 drives the long-run relationship but is not itself driven by the other variables in that relationship – this will constitute part of the restrictions to be imposed in practical applications.
Lastly, we examine the related concept of 'Granger causality' in the VECM, which relates to the short-run parameters contained in the Γ_1 matrix and which, for exposition, we unpack to:

$$\begin{bmatrix}\Delta Z_{1t}\\\Delta Z_{2t}\end{bmatrix} = \begin{bmatrix}\Gamma_{11}&\Gamma_{12}\\\Gamma_{21}&\Gamma_{22}\end{bmatrix}\begin{bmatrix}\Delta Z_{1t-1}\\\Delta Z_{2t-1}\end{bmatrix} + \begin{bmatrix}\alpha_{11}\\\alpha_{21}\end{bmatrix}e_{t-2} + \begin{bmatrix}\varepsilon_{1t}\\\varepsilon_{2t}\end{bmatrix} \tag{45}$$
If Γ_{12} = 0, then z_{2t} is not a Granger cause of z_{1t}. Thus if, from eqn. 44, z_{1t} is weakly exogenous and z_{2t} does not Granger-cause it, then z_{1t} is also 'strongly exogenous'.
I want to stress that this extension of the VAR to a VECM is, in the sort of exercise at hand (i.e., MTM analysis), only necessary for hypothesis testing – specifically, testing for weak exogeneity – and only where theory is mixed regarding the ordering of variables in the SVAR. We limit VECM estimation here to hypothesis testing largely because, as we may know, and as has indeed been established from the IRFs, monetary policy has a zero long-run effect on real variables – the so-called long-run monetary policy neutrality. Implementing this useful extension demands that we think carefully about:
i. the deterministic terms that enter the cointegrating equation;
ii. determining that cointegration exists;
iii. determining whether all or a subset of the variables matter for the existing long-run equilibrium, i.e., long-run exclusion tests;
iv. testing for weak exogeneity; and
v. performing, possibly, Granger non-causality tests.
6.2 Demonstration: Determination of Cointegration and Estimation of VECM
We have already established that four of the five variables – lgdp, lcpi, lm2 and lexr – are unit root non-stationary, i.e., I(1), while tb91 is borderline stationary. Arguably, therefore, a linear combination involving lgdp, lcpi, lexr and lm2 – all I(1) – and tb91, the only I(0) variable, could be cointegrated. Note that in a multivariate model a combination of I(1) and I(0) data is permissible, but the number of cointegrating relations increases correspondingly with every stationary variable included in the system (Harris and Sollis, 2005: 112).

In what follows, we evaluate the existence of an equilibrium relation among the five endogenous variables using Johansen's (1988) trace statistic (given in eqn. 46), as this is asymptotically more correct than the bottom-to-top alternative of the Max-Eigen statistic (Juselius, 2006: 131-134).

$$\lambda_{trace} = -T\sum_{i=r+1}^{n}\log(1-\hat{\lambda}_i) \tag{46}$$
where T is the effective sample size and the λ̂_i are the estimated eigenvalues.

Johansen's (1988) trace test has, however, since been shown to have a finite-sample bias, with the implication that it often indicates too many cointegrating relations, i.e., the test is over-sized (Juselius, 2006: 140-2; Cheung and Lai, 1993b; Reimers, 1992). Hence, for the sort of small samples we encounter quite often, especially for developing COMESA countries, it can be helpful to adjust for finite-sample bias. This is achieved using the correction suggested by Reimers (1992), as in eqn. 47 (Harris and Sollis, 2005: 122-24):

$$\lambda_{trace}^{\#} = -(T-nk)\sum_{i=r+1}^{n}\log(1-\hat{\lambda}_i) \tag{47}$$

where T is as defined before, n is the number of variables in the system (5 in this case) and k is the lag length used when testing for cointegration (2 in this case).
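The correction is easy to verify by hand: eqn. 47 simply rescales the trace statistic of eqn. 46 by (T − nk)/T. A quick Python check against the statistics later reported in Table 21 (a sketch; small discrepancies with EViews reflect rounding of the reported figures):

    # Reimers (1992) small-sample correction factor
    T, n, k = 68, 5, 2                 # effective sample, system size, lag length
    scale = (T - n * k) / T            # = 58/68
    print(98.57763 * scale)            # ~84.081, the corrected r = 0 statistic
    print(60.13727 * scale)            # ~51.294, the corrected r <= 1 statistic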
With the order of the VAR determined beforehand, what remains prior to implementation of the trace test is a choice of the deterministic components (linear trend and constant) that enter the cointegrating space (Johansen, 1994). Note that from the visual inspection of the data in Fig. 2, it looks reasonable to supply the standard VECM given by eqn. 33 with a restricted trend and an unrestricted constant, at least initially. This is because, whilst the variables in levels appear to be trending, we cannot be sure these linear trends will cancel out in the cointegrating relation. We therefore allow for the possibility of a linear trend in the data by restricting the trend and leaving the constant unrestricted. Including an unrestricted constant allows for linear trends both in the cointegrating space and in the variables in levels, produces a non-zero mean in the cointegrating relation, and avoids the creation of quadratic trends in the levels, which would arise if both the constant and the trend were unrestricted (Juselius, 2006: 99-100). Accordingly, I left the intercept, A_0, unrestricted but restricted the trend to zero (i.e., excluded it).
With a lag of 2, an unrestricted intercept and no trend – corresponding to option 3 of the deterministic trend assumptions in the Cointegration Test Specification dialogue box of EViews – we implement the Johansen (1988) trace test for cointegration through the steps described hereunder.
Turn to and make active our COMESA2018 EViews work file, as we have done before. Select or highlight the five series: lcpi, lexr, lgdp, lm2 and tb91. Click on Quick in the top menu and navigate through the drop-down menu to Group Statistics; follow the arrow through to Johansen Cointegration Test, as shown in the screen print.

Choosing this brings up the Series List box, highlighting the variables we have selected and on which we want to perform the cointegration test (note that this list can be adjusted). Click OK for the Johansen Cointegration Test dialogue box.
As already alluded to, we allow for an unrestricted intercept but no trend in the deterministic trend assumption of the test, i.e., option 3 – which incidentally is the EViews default. Under Exog variables I have entered loil, just as with the VAR, though it is of no added value here since the test critical values assume no exogenous series. The Lag Intervals default of 2 coincides with what was determined beforehand. Click OK for the results in Table 21.
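For a scripted cross-check of the trace test, statsmodels provides a Johansen routine. A minimal sketch follows, assuming the same DataFrame as in the earlier sketches; det_order=0 (a constant) is the closest counterpart of EViews' option 3, and, as in EViews, the tabulated critical values make no allowance for exogenous series such as loil:

    from statsmodels.tsa.vector_ar.vecm import coint_johansen

    jres = coint_johansen(data[["lcpi", "lexr", "lgdp", "lm2", "tb91"]],
                          det_order=0, k_ar_diff=2)  # lags 1 to 2 in differences
    print(jres.lr1)   # trace statistics for r = 0, r <= 1, ..., r <= 4
    print(jres.cvt)   # 90%/95%/99% critical values for the trace test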
Table 21: Johansen's trace test results

Date: 04/23/18  Time: 16:56
Sample (adjusted): 2000Q4 2017Q3
Included observations: 68 after adjustments
Trend assumption: Linear deterministic trend
Series: LCPI LEXR LGDP LM2 TB91
Exogenous series: LOIL
Warning: Critical values assume no exogenous series
Lags interval (in first differences): 1 to 2

Unrestricted Cointegration Rank Test (Trace)

Hypothesized No. of CE(s) | Eigenvalue | Trace Statistic | Trace Statistic# | 0.05 Critical Value | Prob.**
None *      | 0.43181  | 98.57763 | 84.08097 | 69.81889 | 0.0001
At most 1 * | 0.40296  | 60.13727 | 51.29361 | 47.85613 | 0.0023
At most 2   | 0.183746 | 25.06482 | 21.37888 | 29.79707 | 0.1591
At most 3   | 0.133213 | 11.25883 | 9.60316  | 15.49471 | 0.1961
At most 4   | 0.022356 | 1.537441 | 1.31136  | 3.841466 | 0.2150

Trace test indicates 2 cointegrating eqn(s) at the 0.05 level
* denotes rejection of the hypothesis at the 0.05 level
**MacKinnon-Haug-Michelis (1999) p-values
# small-sample correction
Looking at the results, we see that the values of both the trace statistic and the small-sample-corrected trace statistic# in the rows for None and At most 1 exceed the corresponding 5% critical values, i.e., 98.578 (84.081) > 69.819 and 60.137 (51.294) > 47.856, respectively. This suggests that the presence of two equilibrium (stationary) relations among the variables cannot be rejected at the conventional 5% level of significance, even after correcting for small-sample bias. EViews helpfully flags this with * in the output table and states it explicitly in the first line of the notes beneath the table.
However, we know that the tb91 variable is I(0) while the rest of the variables are I(1). This implies that the I(0) variable may potentially have added an extra cointegrating relation to the model. Thus, adjusting the number of cointegrating equations for the one I(0) variable leaves only one long-run relation.
6.3 Estimating a VECM
With a unique long-run relationship among the five endogenous variables, as established above, identification of the long-run relation becomes relatively direct. As normalization is arbitrary, the single cointegrating relation will here be normalized on GDP, translating into the VECM estimates in Table 22. The procedure for achieving this is detailed as follows.
As with the VAR estimation procedure described earlier, select and highlight the five series in the cointegrating relationship. Point the cursor at any of the highlighted variables and right-click. Follow the Open command and, in the drop-down menu, select as VAR…. The menu bar of the VAR Specification dialogue box that pops up, as in the screen print, has three parts: i) Basics, ii) Cointegration, and iii) VEC Restrictions. We begin with the Basics window, where, under VAR Type, we select Vector Error Correction (the default is Unrestricted VAR). The rest of the entries in the screen print are as before.
Next, click Cointegration in the menu bar and input the number of cointegrating relations (Rank), as determined above (i.e., 1 in our case), allowing for an intercept but no trend in the cointegrating equation (CE) and VAR, i.e., option 3 in EViews, as shown in the screen print.
Finally, we turn to the VEC Restrictions window. It is in this window that we are able to impose long-run normalization, long-run exclusion and weak exogeneity restrictions, so some detailed illustration is warranted.

Restrictions may be placed on the coefficients B(r,k) of the r-th cointegrating relation:

B(r,1)*LCPI + B(r,2)*LEXR + B(r,3)*LGDP + B(r,4)*LM2 + B(r,5)*TB91 (48)

This equation is akin to eqn. 39 of the theoretical model and is the long-run equilibrium. For r = 1, as determined in Table 21, imposing the restriction B(1,i)=1 for one chosen i among 1,…,5 is the equivalent of the normalization given by eqn. 41. In the screen print, the restriction b(1,3)=1 implies that the normalization is on GDP, or that we are estimating the GDP equation in a VECM framework. Note that the normalization is arbitrary and can be done on any one variable, provided it leans on some economic intuition. In what follows, we demonstrate the VECM output. In the screen print above, tick the Impose Restrictions box and provide, in the box thereunder, the restriction b(1,3)=1, i.e., normalize on GDP. Click OK for the VECM results in Table 22.
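A scripted counterpart of this estimation step (a sketch, continuing with the same assumed DataFrame) uses statsmodels' VECM class; statsmodels normalizes β on the first listed variable, so lgdp is placed first to mimic the b(1,3)=1 normalization, and deterministic="co" (constant outside the cointegrating relation) is the closest counterpart of option 3:

    from statsmodels.tsa.vector_ar.vecm import VECM

    vecm_res = VECM(data[["lgdp", "lcpi", "lexr", "lm2", "tb91"]],
                    k_ar_diff=2, coint_rank=1,
                    deterministic="co").fit()
    print(vecm_res.beta)    # cointegrating vector (cf. Table 22, upper panel)
    print(vecm_res.alpha)   # error correction (adjustment) coefficients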
Table 22: Vector Error Correction Estimates
Vector Error Correction Estimates
Date: 04/24/18 Time: 15:05
Sample (adjusted): 2000Q4 2017Q3
Included observations: 68 after adjustments
Standard errors in ( ) & t-statistics in [ ]
Cointegration Restrictions:
B(1,3)=1
Convergence achieved after 1 iterations.
Restrictions identify all cointegrating vectors
Restrictions are not binding (LR test not available)
Cointegrating Eq: CointEq1
LCPI(-1) -0.104111
(0.26980)
[-0.38588]
LEXR(-1) 0.779756
(0.16835)
[ 4.63169]
LGDP(-1) 1.000000
LM2(-1) -0.612232
(0.12040)
[-5.08508]
TB91(-1) 0.005176
(0.00372)
[ 1.39244]
C -9.617435
Error Correction: D(LCPI) D(LEXR) D(LGDP) D(LM2) D(TB91)
CointEq1 0.004642 -0.354163 -0.018260 -0.141348 -9.499563
(0.02223) (0.07782) (0.04327) (0.06011) (5.37615)
[ 0.20883] [-4.55104] [-0.42197] [-2.35149] [-1.76698]
D(LCPI(-1)) 0.651879 -0.164099 -0.293812 -0.359738 164.7688
(0.15481) (0.54194) (0.30135) (0.41860) (37.4395)
[ 4.21074] [-0.30280] [-0.97498] [-0.85937] [ 4.40093]
D(LCPI(-2)) -0.218317 -0.865829 -0.055406 -0.106765 -114.4386
(0.15478) (0.54181) (0.30128) (0.41850) (37.4303)
[-1.41054] [-1.59804] [-0.18391] [-0.25511] [-3.05738]
D(LEXR(-1)) -0.014779 0.184653 0.046651 -0.088046 3.661002
(0.03445) (0.12060) (0.06706) (0.09315) (8.33130)
[-0.42899] [ 1.53117] [ 0.69568] [-0.94519] [ 0.43943]
D(LEXR(-2)) 0.002703 0.011326 -0.021026 -0.177891 1.229858
(0.03328) (0.11651) (0.06479) (0.08999) (8.04887)
[ 0.08122] [ 0.09721] [-0.32455] [-1.97671] [ 0.15280]
D(LGDP(-1)) 0.034705 -0.151297 -0.457682 0.157668 9.235799
(0.06493) (0.22730) (0.12639) (0.17557) (15.7030)
[ 0.53448] [-0.66562] [-3.62108] [ 0.89802] [ 0.58815]
D(LGDP(-2)) -0.017465 -0.506523 -0.307409 0.057205 -15.22198
(0.06291) (0.22024) (0.12247) (0.17012) (15.2151)
[-0.27760] [-2.29987] [-2.51015] [ 0.33627] [-1.00045]
D(LM2(-1)) 0.100083 -0.096778 -0.062131 -0.159615 0.597304
(0.05470) (0.19148) (0.10648) (0.14791) (13.2285)
[ 1.82967] [-0.50541] [-0.58352] [-1.07917] [ 0.04515]
D(LM2(-2)) 0.003324 -0.251315 0.331758 0.190495 -7.798826
(0.05418) (0.18966) (0.10546) (0.14650) (13.1026)
[ 0.06136] [-1.32508] [ 3.14574] [ 1.30033] [-0.59521]
D(TB91(-1)) 0.001116 0.003931 0.000297 0.000737 0.052385
(0.00049) (0.00173) (0.00096) (0.00134) (0.11957)
[ 2.25752] [ 2.27132] [ 0.30892] [ 0.55124] [ 0.43810]
D(TB91(-2)) 7.38E-05 0.003318 0.001898 0.000450 0.080294
(0.00047) (0.00164) (0.00091) (0.00127) (0.11340)
[ 0.15749] [ 2.02111] [ 2.07882] [ 0.35500] [ 0.70803]
C -0.020032 0.309989 0.042652 0.153174 6.871951
(0.02231) (0.07809) (0.04342) (0.06032) (5.39455)
[-0.89805] [ 3.96980] [ 0.98230] [ 2.53955] [ 1.27387]
LOIL 0.006054 -0.064604 -0.005462 -0.026815 -1.839598
(0.00527) (0.01843) (0.01025) (0.01424) (1.27328)
[ 1.14982] [-3.50524] [-0.53299] [-1.88357] [-1.44477]
R-squared 0.500961 0.489082 0.387289 0.301453 0.415049
Adj. R-squared 0.392080 0.377609 0.253606 0.149043 0.287423
Sum sq. resids 0.005599 0.068612 0.021215 0.040936 327.4575
S.E. equation 0.010090 0.035320 0.019640 0.027282 2.440035
F-statistic 4.600991 4.387442 2.897078 1.977904 3.252080
Log likelihood 223.2711 138.0715 177.9793 155.6310 -149.9307
Akaike AIC -6.184444 -3.678572 -4.852333 -4.195030 4.792080
Schwarz SC -5.760127 -3.254255 -4.428016 -3.770712 5.216398
Mean dependent 0.015397 0.011209 0.014714 0.037783 -0.101765
S.D. dependent 0.012940 0.044770 0.022733 0.029574 2.890549
Determinant resid covariance (dof adj.) 1.30E-13
Determinant resid covariance 4.48E-14
Log likelihood 562.5742
Akaike information criterion -14.48748
Schwarz criterion -12.20269
(In Table 22, the Cointegrating Eq panel reports the elements of β′, while the Error Correction panel reports the adjustment coefficients α, on CointEq1, together with the short-run Γ_i coefficients.)
Each column in the table corresponds to the equation for one endogenous variable in the VAR. For each right-hand-side variable, EViews reports the coefficient point estimate, the estimated coefficient standard error (in round brackets) and the t-statistic (in square brackets).
6.4 Long-run Exclusion Tests
Based on the estimated VECM, zero restrictions on β imply long-run exclusion, i.e., they test whether (or not) the corresponding variable can be excluded from the estimated long-run relation. If accepted, the variable is redundant to the long-run relation(s) (Juselius, 2006: 176) and so can, at most, have a short-run impact. For purposes of demonstration, we may want to check the long-held view of monetary policy neutrality, i.e., that monetary policy has a zero long-run effect on real variables. To test this view, we impose the VEC restriction B(1,5)=0, while maintaining the normalization b(1,3)=1.
To do this, while in the VECM result (Table 22) window (assuming this is still
live or else re-estimate), click on Estimate to get back to the VEC
Restrictions dialogue box above. Click VEC Restrictions in the menu bar of
the window and impose a restriction as shown in the screen print. Execute the
restriction to generate the output in Table 23.
Table 23: A test of monetary policy neutrality
Vector Error Correction Estimates
Date: 04/24/18 Time: 15:44
Sample (adjusted): 2000Q4 2017Q3
Included observations: 68 after adjustments
Standard errors in ( ) & t-statistics in [ ]
Cointegration Restrictions:
B(1,3)=1, B(1,5)=0
Convergence achieved after 61 iterations.
Restrictions identify all cointegrating vectors
LR test for binding restrictions (rank = 1):
Chi-square(1) 0.207866
Probability 0.648445
Cointegrating Eq: CointEq1
LCPI(-1) -0.182368
(0.24636)
[-0.74025]
LEXR(-1) 0.853532
(0.14676)
[ 5.81591]
LGDP(-1) 1.000000
LM2(-1) -0.595378
(0.10569)
[-5.63348]
TB91(-1) 0.000000
C -9.913518
Error Correction: D(LCPI) D(LEXR) D(LGDP) D(LM2) D(TB91)
CointEq1 0.004871 -0.410354 -0.020845 -0.144820 -4.414180
(0.02371) (0.08012) (0.04614) (0.06435) (5.86383)
[ 0.20544] [-5.12147] [-0.45179] [-2.25041] [-0.75278]
Clearly, the p-value resulting from the restriction B(1,5)=0 is 0.648. On this basis, long-run exclusion of tb91 under the GDP normalization cannot be rejected, amounting to the finding that monetary policy has no long-run impact but could, at best, have a short-run impact. This is consistent not only with the long-held view of monetary policy neutrality, i.e., that monetary policy has a zero long-run effect on real variables (Christiano et al., 1999), but also with our IRF estimates.
6.5 Long-run Weak Exogeneity Tests
These constitute restrictions on the adjustment coefficients α_i, given as A(k,r), where k = 1,…,5 indexes the equations (k = 1 for the D(LCPI) equation, k = 2 for D(LEXR), k = 3 for D(LGDP), k = 4 for D(LM2) and k = 5 for D(TB91)) and r = 1 in the VEC Restrictions window; weak exogeneity is imposed by a zero row in α (Johansen, 1996). Each α_i measures the speed at which the corresponding variable in ΔZ_t in eqn. 44 adjusts to deviations from the equilibrium. A zero coefficient therefore implies that the variable impacts on the long-run stochastic path of the other variables of the system while not itself being influenced by them (Juselius, 2006: 193); it is, as such, considered weakly exogenous for the long-run parameters β.
For purposes of demonstration, we may want to check for example if exchange
rate is weakly exogenous to the domestic variables, a test we accomplish by
imposing a VEC restriction that A(2,1)=0.
To do this, while in the VECM result window (assuming this is still live or else
re-estimate), click on Estimate to get back to the VEC Restrictions dialogue
box above. Click VEC Restrictions in the menu bar of the window and
impose a restriction as shown in the screen print. Execute the restriction to
generate the following output in Table 24.
Table 24: A test of weak exogeneity
Vector Error Correction Estimates
Date: 04/25/18 Time: 07:12
Sample (adjusted): 2000Q4 2017Q3
Included observations: 68 after adjustments
Standard errors in ( ) & t-statistics in [ ]
Cointegration Restrictions:
B(1,3)=1, A(2,1)=0
Maximum iterations (500) reached.
Restrictions identify all cointegrating vectors
LR test for binding restrictions (rank = 1):
Chi-square(1) 2.717220
Probability 0.099271
Cointegrating Eq: CointEq1
LCPI(-1) 1.843897
(1.38927)
[ 1.32724]
LEXR(-1) -0.878474
(0.86690)
[-1.01335]
LGDP(-1) 1.000000
LM2(-1) -1.028502
(0.61996)
[-1.65897]
TB91(-1) 0.099142
(0.01914)
[ 5.17959]
C -3.263896
Error Correction: D(LCPI) D(LEXR) D(LGDP) D(LM2) D(TB91)
CointEq1 0.000308 0.000000 -0.000692 -0.013772 -4.715715
(0.00415) (0.00000) (0.00880) (0.01219) (0.92670)
[ 0.07413] [NA] [-0.07865] [-1.12955] [-5.08870]
The p-value resulting from the restriction A(2,1)=0 is 0.099 – significant at the 10% level, if only borderline so, though not at the 5% level. On this basis, weak exogeneity of exr can (marginally) be rejected, amounting to the finding that the exchange rate is endogenous, i.e., influenced by the other variables in the system.
6.6 Granger Non-causality Test
Determining, as above, that the variables are cointegrated implies there must be Granger causality in at least one direction. To illustrate this, we run the VEC Granger causality test in the VECM framework to obtain the output in Table 25. Assuming we can navigate through the process leading to the results in Table 22, once in the results window, click on View and, in the drop-down menu, choose Lag Structure; following the arrow, choose Granger Causality/Block Exogeneity Tests – a road map shown here in the screen print. Implement this for the results in Table 25.
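The same block exogeneity idea can be checked in the statsmodels sketch via a Wald test on the fitted VECM (an illustration only, continuing from the VECM fitted above):

    # Test that the remaining four variables jointly do not Granger-cause tb91
    gc = vecm_res.test_granger_causality(caused="tb91")
    print(gc.summary())   # cf. the D(TB91) block of Table 25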
Table 25: VEC Granger Causality/Block Exogeneity test
VEC Granger Causality/Block Exogeneity Wald Tests
Date: 04/25/18 Time: 09:38
Sample: 2000Q1 2017Q3
Included observations: 68
Dependent variable: D(LCPI)
Excluded Chi-sq df Prob.
D(LEXR) 0.184041 2 0.9121
D(LGDP) 0.583467 2 0.7470
D(LM2) 3.412090 2 0.1816
D(TB91) 5.096852 2 0.0782
All 9.179960 8 0.3273
Dependent variable: D(LEXR)
Excluded Chi-sq df Prob.
D(LCPI) 4.998432 2 0.0821
D(LGDP) 5.383924 2 0.0677
D(LM2) 1.836494 2 0.3992
D(TB91) 8.722769 2 0.0128
All 18.07900 8 0.0206
Dependent variable: D(LGDP)
Excluded Chi-sq df Prob.
D(LCPI) 1.858613 2 0.3948
D(LEXR) 0.520750 2 0.7708
D(LM2) 11.18510 2 0.0037
D(TB91) 4.355405 2 0.1133
All 15.87207 8 0.0442
Dependent variable: D(LM2)
Excluded Chi-sq df Prob.
D(LCPI) 1.651997 2 0.4378
D(LEXR) 5.755381 2 0.0563
D(LGDP) 0.807757 2 0.6677
D(TB91) 0.407811 2 0.8155
All 11.09241 8 0.1965
Dependent variable: D(TB91)
Excluded Chi-sq df Prob.
D(LCPI) 19.66035 2 0.0001
D(LEXR) 0.252507 2 0.8814
D(LGDP) 2.203567 2 0.3323
D(LM2) 0.376372 2 0.8285
All 31.29667 8 0.0001
The null hypothesis of the test, in part, is that, individually, variable i is excludable from each of the five system equations and that, collectively, all system variables are excludable from each of the five system equations. The results table has five blocks of variable exclusion tests, corresponding to the five equations in the system; interpretation is readily facilitated by the p-values.
In summary, we see two-way causation between inflation and interest rates, with influences running from CPI to interest rates (the D(TB91) block) and from interest rates to inflation (the D(LCPI) block), i.e., cpi ↔ tb91. This is intuitive, the more so in an inflation-targeting environment: dynamics in inflation very much influence monetary policy signals, and monetary policy signals are in turn expected to anchor inflation expectations. We also find unidirectional influences from cpi → exr, gdp → exr, tb91 → exr, money → gdp, and exr → money, and not vice versa. All of these are economically intuitive.
Appendix 1
Granger Causality Test
Determining, as above, that variables are cointegrated implies there must be Granger causality in at least one direction. On this account, and following Granger (1969), y is said to be Granger-caused by x if x helps in the prediction of y or, equivalently, if the coefficients on the lagged x's are statistically significant in the y equation, and vice versa. Thus, the Granger non-causality model is specified as follows:
$$y_t = \delta_{10} + \sum_{i=1}^{n}\delta_{11i}\,y_{t-i} + \sum_{i=1}^{n}\delta_{12i}\,x_{t-i} + \varepsilon_{1t} \tag{34}$$

$$x_t = \delta_{20} + \sum_{i=1}^{n}\delta_{21i}\,y_{t-i} + \sum_{i=1}^{n}\delta_{22i}\,x_{t-i} + \varepsilon_{2t} \tag{35}$$
where n is the maximum lag length, and ε_{1t} and ε_{2t} are additive stochastic error terms, assumed normally distributed with zero mean and constant variance. In light of equations (34) and (35), we can deduce two testable hypotheses:

that δ_{12i} = 0 for all i while δ_{11i} ≠ 0, i.e., x does not Granger-cause y (no causality from x to y);

that δ_{21i} = 0 for all i while δ_{22i} ≠ 0, i.e., y does not Granger-cause x (no causality from y to x).

Acceptance of one (but not both) of these hypotheses would suggest unidirectional causality between x and y; feedback between x and y may be understood to exist if both δ_{12i} ≠ 0 and δ_{21i} ≠ 0. Alternatively, no causality between x and y exists if δ_{12i} = 0 and δ_{21i} = 0.
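For a quick pairwise check of eqns. (34)-(35) in Python, statsmodels offers grangercausalitytests. This is a minimal sketch in which the file name and the column names y and x are placeholders for the user's own data:

    import pandas as pd
    from statsmodels.tsa.stattools import grangercausalitytests

    # Tests whether the series in the SECOND column Granger-causes the series
    # in the FIRST column; here, whether x Granger-causes y, with n = 2 lags.
    df = pd.read_csv("your_data.csv")        # assumed columns: 'y' and 'x'
    grangercausalitytests(df[["y", "x"]], maxlag=2)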
References

Abuka, C., Alinda, R.K., Minoiu, C., Peydro, J-L. and Presbitero, A.F. (2015) Monetary Policy in a Developing Country: Loan Applications and Real Effects, IMF Working Paper WP/15/270.
Bernanke, B.S. and Gertler, M. (1995) Inside the Black Box: The Credit Channel of Monetary Policy Transmission, Journal of Economic Perspectives 9(4), 27-48.
Bwire, T. (2018) Modelling and Forecasting Volatility in Financial Markets Using EViews, practical manual developed for COMESA central banks (forthcoming).
Blanchard, O.J and Quah, D.T. (1989) The dynamic effects of aggregate demand
and supply disturbances, American Economic Review 79 (4), 655 – 73.
Cecchetti, S. (1995) Distinguishing Theories of the Monetary Policy Transmission Mechanism, Federal Reserve Bank of St. Louis Review 77, 83-97.
Cheung, Y-W., and Lai, K.S. (1993a) A Fractional Cointegration Analysis of
Purchasing Power Parity, Journal of Business and Economic Statistics 11: 103-
112
_____. (1993b) Finite-Sample Sizes of Johansen’s Likelihood Ratio Tests for
Cointegration, Oxford Bulletin of Economics and Statistics 55: 313-328
Christiano, L. J., Eichenbaum, M. and Evans, C. (1997) Sticky prices and limited
participation models of money: a comparison, European Economic Review,
41, 1201-49
______. (1999) Monetary policy shocks: what have we learned and to what end? in
Taylor, JB and Woodford, M (eds), Handbook of Macroeconomics, Vol. 1A,
Amsterdam, Elsevier Science, North-Holland, pp 65 – 148.
Clarida, R., Galí, J. and Gertler, M. (1998) Monetary Policy Rules in Practice: Some International Evidence, European Economic Review 42, 1033-1067.
Davoodi, H.R, Dixit, S., Pinter, G. (2013) Monetary Transmission Mechanisms in
the East African Community: An Empirical Investigation, IMF working
paper, WP/13/39
Dennis, G. J. (2006) CATS in RATS Cointegration Analysis of Time Series, Version 2,
Estima, Evanston, Illinois, USA
Dickey, D.A. and Fuller, W.A. (1979) Distribution of the Estimators for Autoregressive Time Series with a Unit Root, Journal of the American Statistical Association 74(366), 427-431.
______. (1981) Likelihood Ratio Statistics for Autoregressive Time Series with a Unit Root, Econometrica 49(4), 1057-1072.
Doornik, J.A. and Hansen, H. (2008) An omnibus test for Univariate and
Multivariate Normality, Oxford Bulletin of Economics and Statistics 70, 927-939.
Enders, Walter (2010) Applied Econometric Time Series, Wiley.
Engle, Robert F. (1982) Autoregressive Conditional Heteroskedasticity with Estimates of the Variance of United Kingdom Inflation, Econometrica 50(4), 987-1007.
Engle, R.F. and Granger, C.W.J. (1987) Co-integration and Error Correction: Representation, Estimation, and Testing, Econometrica 55(2), 251-276.
Godfrey, L.G. (1988) Misspecification Tests in Econometrics, Cambridge: Cambridge
University Press
Granger, C. and Newbold, P. (1974) Spurious Regressions in Econometrics, Journal
of Econometrics 2, 111-20
Hamilton, J.D. (1994) Time Series Analysis, Princeton University Press, Princeton.
Harris, R. and Sollis, R. (2005) Applied Time Series Modelling and Forecasting, John
Wiley & sons Ltd
Johansen, S. (1988) Statistical Analysis of Cointegrating Vectors, Journal of Economic Dynamics and Control 12, 231-254.
______. (1994) The role of Constant and Linear Terms in Cointegration Analysis
of Non-stationary variables, Econometric Reviews 13, 205-229.
Juselius, K. (2006) The Cointegrated VAR Model: Methodology and Applications, Advanced
Texts in Econometrics, Oxford University Press, Oxford
______. (2003) The Cointegrated VAR model: Econometric Methodology and
Macroeconomic Applications, manuscript.
Lucas, R. (1972) Expectations and the Neutrality of Money, Journal of Economic
Theory 4: 103-124
Lütkepohl, H. (2005) New Introduction to Multiple Time Series Analysis, Berlin,
Springer Verlag.
Lütkepohl, H., and Krätzig, M. (2004) Applied Time Series Econometrics: Themes in
Modern Econometrics, Cambridge University Press.
Rahbek, A., Hansen, E., and Dennis, J.D. (2002) ARCH innovations and their
impact on cointegration rank testing, Working Paper, Department of Applied
Mathematics and Statistics, University of Copenhagen.
Reimers, H.-E. (1992) Comparison of Tests for Multivariate Cointegration, Statistical Papers 33, 335-359.
Rotemberg, Julio J. and Woodford, Michael (1997) An Optimization-Based Econometric Framework for the Evaluation of Monetary Policy, in NBER Macroeconomics Annual, 297-346, Cambridge, MA: MIT Press.
Sims, C. (1980) Macroeconomics and Reality, Econometrica 48(1), 1-48.
MacKinnon, J.G. (1996) Numerical Distribution Functions for Unit Root and Cointegration Tests, Journal of Applied Econometrics 11, 601-618.
Mishkin, F. (1995) Symposium on the Monetary Transmission Mechanism, Journal
of Economic Perspectives 9(4) 3-10.
Montiel, P. (2013) The Monetary Transmission Mechanism in Uganda, S-43002-
UGA-1
Mugume, A. (2011) Monetary Policy Transmission Mechanisms in Uganda, Bank
of Uganda Working Paper Series, BOUWPS/01/2011, Bank of Uganda
Opolot, J. (2013) Bank lending channel of the Monetary Policy Transmission
Mechanism in Uganda: Evidence from Panel Data Analysis, Bank of
Uganda Working Paper, BOUWPS 01/2013.
Perron, P. (1989) The Great Crash, the Oil Price Shock, and the Unit Root Hypothesis, Econometrica 57, 1361-1401.
Taylor, John B. (1993). Discretion versus Policy Rules in Practice, Carnegie –
Rochester Conference Series on Public Policy 39, 195-214