Bootstrap-after-Bootstrap Prediction Intervals for Autoregressive Models


Author(s): Jae H. Kim. Source: Journal of Business & Economic Statistics, Vol. 19, No. 1 (Jan., 2001), pp. 117-128. Published by: American Statistical Association. Stable URL: http://www.jstor.org/stable/1392547. Accessed: 30/09/2011 13:56.


Bootstrap-after-Bootstrap Prediction Intervals for Autoregressive Models

Jae H. Kim
School of Business, La Trobe University, Bundoora, Victoria, Australia ([email protected])

The use of the Bonferroni prediction interval based on the bootstrap-after-bootstrap is proposed for autoregressive (AR) models. Monte Carlo simulations are conducted using a number of AR models including stationary, unit-root, and near-unit-root processes. The major finding is that the bootstrap-after-bootstrap provides a superior small-sample alternative to asymptotic and standard bootstrap prediction intervals. The latter are often too narrow, substantially underestimating future uncertainty, especially when the model has unit roots or near unit roots. Bootstrap-after-bootstrap prediction intervals are found to provide accurate and conservative assessment of future uncertainty under nearly all circumstances considered.

KEY WORDS: Bias correction; Interval forecasting; Nonnormality; Unit roots.

The bootstrap method (Efron and Tibshirani 1993) has been found to be a useful small-sample alternative to conventional methods of constructing confidence intervals in autoregressive (AR) models. Past studies on AR forecasting include those of Findley (1986), Stine (1987), Masarotto (1990), Thombs and Schucany (1990), Kabaila (1993), McCullough (1994), Breidt, Davis, and Dunsmuir (1995), Grigoletto (1998), and Kim (1999). The standard (nonparametric) bootstrap employed in past studies generates bootstrap replicates necessarily biased in small samples, due to biases present in AR parameter estimators (see Tjostheim and Paulsen 1983; Nicholls and Pope 1988; Shaman and Stine 1988; Pope 1990). Recently, Kilian (1998a) proposed what is called the bootstrap-after-bootstrap to construct confidence intervals for impulse responses of vector autoregressive (VAR) models. It has a built-in bias-correction procedure that adjusts biases in bootstrap replicates and is implemented by two successive applications of the standard bootstrap. Monte Carlo simulations by Kilian (1998a) revealed that bootstrap-after-bootstrap intervals perform substantially better than those based on the conventional methods in small samples.

In this article, the bootstrap-after-bootstrap is applied to prediction intervals for AR models. A feature fundamentally different from Kilian's (1998a) procedure is the use of the backward AR model in resampling. This is to incorporate the conditionality of AR forecasts on past observations into the bootstrap forecasts. This extends the works of Thombs and Schucany (1990) and Kim (1999), where the bootstrap with resampling based on the backward AR model is used to construct what may be called the standard bootstrap prediction intervals (BPI's). Although these authors concluded that the bootstrap provides a useful small-sample alternative to asymptotic prediction intervals (API's), they found that both API's and BPI's exhibit unsatisfactory performances when the model has characteristic roots close to unity. If biases of parameter estimators are the major cause of their poor performances, the bootstrap-after-bootstrap can provide a superior alternative.

The bootstrap prediction intervals proposed by Masarotto (1990) and Grigoletto (1998) use forward AR models to generate bootstrap forecasts. The bootstrap-after-bootstrap can be applied to their framework to generate bias-free bootstrap replicates, but the major weakness is that their bootstrap forecasts are not conditional on past observations. The bootstrap based on backward AR resampling adopted here requires innovations to be normal for asymptotic validity, because backward AR innovations are independent only when forward innovations are normal (see Thombs and Schucany 1990). To circumvent this problem, Kabaila (1993) and Breidt et al. (1995) proposed bootstrap intervals applicable to AR models with nonnormal innovations. However, due to their high computational costs and unknown small-sample properties, direct resampling of backward residuals adopted in this article seems to be a more practical alternative (see Kilian and Demiroglu 2000). Nevertheless, because there is evidence of nonnormality in economic and business time series (see Sims 1988; Kilian 1998b), the properties of prediction intervals under nonnormal innovations are examined in this article.

The purpose of this article is to evaluate properties of bootstrap-after-bootstrap prediction intervals (BBPI's) for univariate and VAR models. Small-sample properties of BBPI's are compared with those of asymptotic (Lütkepohl 1991) and standard bootstrap prediction intervals (Thombs and Schucany 1990; Kim 1999). As in Kim (1999), the use of Bonferroni-type joint prediction intervals is considered. Monte Carlo simulations are conducted using a number of univariate and bivariate AR models of orders 1 and 2, including stationary and unit-root AR models, under normal and nonnormal innovations. Only the results associated with the bivariate AR(1) models are presented in this article because qualitatively similar results were evident from the other AR models. Bootstrap intervals are constructed based on the percentile and percentile-t methods detailed by Efron and Tibshirani (1993, p. 160, p. 170). Other bootstrap procedures that are potentially superior to the percentile method are the BC and BCa methods (Efron and Tibshirani 1993, p. 184). However, past studies reported that they are inferior to the percentile-t in econometric applications: see Rilstone and Veall (1996) for statistical inference in the seemingly unrelated regression context and Kim (1999) for VAR forecasting. In view of these results, the BC and BCa methods are not considered in this article.
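For readers unfamiliar with the two bootstrap constructions named above, the percentile and percentile-t methods can be sketched for a generic scalar statistic. This is a minimal illustration, not the article's implementation: the function names and the sample-mean example are invented for exposition.

```python
import numpy as np

def percentile_interval(boot_stats, alpha=0.05):
    """Percentile method: quantiles of the bootstrap distribution
    of the statistic itself."""
    lo, hi = np.quantile(boot_stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi

def percentile_t_interval(theta_hat, se_hat, boot_stats, boot_ses, alpha=0.05):
    """Percentile-t method: bootstrap the studentized statistic
    t* = (theta* - theta_hat) / se*, then invert its quantiles
    (note the reversed quantiles in the returned endpoints)."""
    t_star = (boot_stats - theta_hat) / boot_ses
    q_lo, q_hi = np.quantile(t_star, [alpha / 2, 1 - alpha / 2])
    return theta_hat - q_hi * se_hat, theta_hat - q_lo * se_hat

# toy illustration: bootstrap intervals for a sample mean
rng = np.random.default_rng(0)
x = rng.normal(size=30)
theta_hat = x.mean()
se_hat = x.std(ddof=1) / np.sqrt(len(x))

B = 2000
boot_stats = np.empty(B)
boot_ses = np.empty(B)
for i in range(B):
    xb = rng.choice(x, size=len(x), replace=True)
    boot_stats[i] = xb.mean()
    boot_ses[i] = xb.std(ddof=1) / np.sqrt(len(xb))

p_lo, p_hi = percentile_interval(boot_stats)
t_lo, t_hi = percentile_t_interval(theta_hat, se_hat, boot_stats, boot_ses)
```

The percentile-t version studentizes each bootstrap replicate by its own standard error, which is what makes it second-order accurate in many settings and explains why the article reports it behaving differently from the plain percentile interval.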

© 2001 American Statistical Association
Journal of Business & Economic Statistics, January 2001, Vol. 19, No. 1


The major finding of the article is that the bootstrap-after-bootstrap provides a superior alternative to the asymptotic and standard bootstrap prediction intervals. BBPI's tend to provide the most accurate and conservative assessment of future uncertainty, especially when the sample size is small, under nearly all circumstances including the AR models with roots close to or equal to unity. In Section 1, asymptotic and bootstrap prediction intervals for AR models are presented. The asymptotic validity of the bootstrap is also given in Section 1. Section 2 presents the experimental design, followed by Section 3 in which simulation results are presented. In Section 4, the case of a VAR model with high AR order and dimension is presented using an empirical example and simulation. The conclusions are drawn in Section 5.

1. ASYMPTOTIC AND BOOTSTRAP PREDICTION INTERVALS

We consider the K-dimensional stationary AR(p) model of the form

    Y_t = v + A_1 Y_{t-1} + ... + A_p Y_{t-p} + u_t,    t = 0, ±1, ±2, ...,    (1)

where the K × 1 vector of iid innovations u_t is such that E(u_t) = 0 and E(u_t u_t') = Σ_u is a K × K symmetric positive definite matrix with finite elements. Note that the AR order p is assumed to be finite and known. The backward AR(p) model associated with the forward model (1) can be written as

    Y_t = μ + H_1 Y_{t+1} + ... + H_p Y_{t+p} + v_t,    t = 0, ±1, ±2, ...,    (2)

where E(v_t) = 0 and E(v_t v_t') = Σ_v is a K × K symmetric positive definite matrix with finite elements. The backward model is used for the bootstrap as a means of generating bootstrap forecasts conditionally on the last p observations of the original series. Maekawa (1987) showed for the univariate AR(1) case that the conditionality on past observations is an important determinant of forecast bias and variability in small samples. Chatfield (1993) also stressed the importance of considering this conditionality for prediction intervals. The forward and backward models (1) and (2) are closely related in the VAR case; see Kim (1997, 1998) for details.
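For intuition, the forward model (1) and the backward model (2) can be fitted side by side in the univariate AR(1) case. The sketch below is illustrative (the names and simulated series are not from the article): both directions are estimated by ordinary least squares, and for a stationary AR(1) the backward slope estimate is close to the forward one because both equal the lag-one autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(1)

# simulate a stationary univariate AR(1): y_t = v + a * y_{t-1} + u_t
v_true, a_true, n = 1.0, 0.6, 5000
y = np.empty(n)
y[0] = v_true / (1 - a_true)           # start near the process mean
for t in range(1, n):
    y[t] = v_true + a_true * y[t - 1] + rng.normal()

def ols_ar1(dep, lag):
    """Least squares fit of dep_t = c + b * lag_t + error."""
    X = np.column_stack([np.ones(len(lag)), lag])
    coef, *_ = np.linalg.lstsq(X, dep, rcond=None)
    return coef  # (intercept, slope)

# forward model (1): regress y_t on y_{t-1}
c_f, b_f = ols_ar1(y[1:], y[:-1])
# backward model (2): regress y_t on y_{t+1}
c_b, b_b = ols_ar1(y[:-1], y[1:])
```

Resampling from the backward fit lets pseudo-series be generated backward in time from the observed endpoint, which is how the conditionality on the last p observations is built into the bootstrap forecasts described below.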

1.1 Asymptotic Prediction Intervals

The unknown coefficient matrices in (1) and (2) are estimated using the least squares (LS) method. Let Â = (v̂, Â_1, ..., Â_p) and Ĥ = (μ̂, Ĥ_1, ..., Ĥ_p) denote the LS estimators for A = (v, A_1, ..., A_p) and H = (μ, H_1, ..., H_p). Forecasts are generated in the usual way using the estimated coefficients as

    Ŷ_n(h) = v̂ + Â_1 Ŷ_n(h-1) + ... + Â_p Ŷ_n(h-p),

where Ŷ_n(j) = Y_{n+j} for j ≤ 0.


Stage 1b. Using the standard nonparametric bootstrap, obtain the bootstrap estimators A* and H* for Â and Ĥ based on the LS method. For the former case, pseudodatasets are generated as

    Y*_t = v̂ + Â_1 Y*_{t-1} + ... + Â_p Y*_{t-p} + u*_t,

where u*_t is a random draw with replacement from {û_t}, and, for the latter,

    Y*_t = μ̂ + Ĥ_1 Y*_{t+1} + ... + Ĥ_p Y*_{t+p} + v*_t,

where v*_t is a random draw from {v̂_t} with replacement. The estimates of the biases of Â and Ĥ are calculated as bias(Â) = A* - Â and bias(Ĥ) = H* - Ĥ. Adapting the procedure proposed by Kilian (1998a), the bias-corrected estimators are calculated from bias(Â) and bias(Ĥ) and denoted as Âᶜ and Ĥᶜ.

Stage 2. Generate pseudodatasets recursively based on (2) as

    Y*_t = μ̂ᶜ + Ĥᶜ_1 Y*_{t+1} + ... + Ĥᶜ_p Y*_{t+p} + v*_t,    (3)

where the p starting values are set equal to the last p values of the original series. Using these pseudodatasets, the coefficient matrices of the forward model (1) are estimated using the LS method and the estimators are denoted A*. Adapting again the bias-correction procedure of Kilian (1998a), the biases in A* are corrected to yield the bias-corrected estimator A*ᶜ. Note that bias(Â) calculated in Stage 1 is used for this purpose to ease the burden of computation, as suggested by Kilian (1998a). The bootstrap replicates of AR forecasts are generated recursively as

    Y*_n(h) = v̂*ᶜ + Â*ᶜ_1 Y*_n(h-1) + ... + Â*ᶜ_p Y*_n(h-p) + u*_{n+h},    (4)

where Y*_n(j) = Y_{n+j} for j ≤ 0 and u*_{n+h} is a random draw from {û_t} with replacement. Repeated generation of pseudodatasets as in (4), say B times, will yield the bootstrap forecast distribution {Y*_n(h; i)}, i = 1, ..., B.

Note that, for simplicity of exposition, the notation Y*_t is used to indicate different types of pseudodatasets in Stages 1b and 2. By generating pseudodatasets and bootstrap forecasts as in (3) and (4), the conditionality of AR forecasts on the last p observations of the original series can explicitly be incorporated into the bootstrap forecasts.

The BBPI for the kth AR component, based on the percentile method with the nominal coverage rate of 100(1 - α/K)%, can be defined as

    BBPI_{p,k} = [Y*_{k,n}(h, τ), Y*_{k,n}(h, 1 - τ)],    (5)

where Y*_{k,n}(h, τ) is the 100τth percentile of the kth component of the bootstrap distribution {Y*_n(h; i)}, i = 1, ..., B, and τ = .5(α/K). The BBPI for the kth AR component, based on the percentile-t method with the nominal coverage rate of 100(1 - α/K)%, can be defined as

    BBPI_{pt,k} = [Ŷᶜ_{k,n}(h) - z*_{k,n}(h, 1 - τ) σ̂ᶜ_k(h),
                   Ŷᶜ_{k,n}(h) - z*_{k,n}(h, τ) σ̂ᶜ_k(h)],    (6)

where Ŷᶜ_{k,n}(h) is the kth component of the AR forecasts generated using Âᶜ and σ̂ᶜ_k(h) is the square root of the kth diagonal element of the asymptotic MSE matrix of Ŷ_n(h) calculated using Âᶜ. Note that z*_{k,n}(h, τ) is the 100τth percentile of {Z*_{k,n}(h; i)}, i = 1, ..., B, where

    Z*_{k,n}(h; i) = [Y*_{k,n}(h; i) - Ŷ_{k,n}(h)] / σ̂*_k(h),

and σ̂*_k(h) is the bootstrap counterpart of σ̂_k(h). By Bonferroni's method, the rectangular regions formed jointly by the K Bonferroni BBPI's have the nominal coverage rate of at least 100(1 - α)%.

The standard bootstrap prediction intervals proposed by Thombs and Schucany (1990) and Kim (1999) are constructed using the pseudodatasets (3) and bootstrap replicates (4), both generated with Ĥ and Â instead of their bias-corrected versions. This version of the percentile interval (5) is called the standard bootstrap prediction interval (BPI) based on the percentile method and denoted as BPI_p. Similarly, based on the bootstrap replicates generated with Ĥ and Â, the standard bootstrap version of (6) can be constructed using Ŷ_{k,n}(h) and σ̂_k(h) instead of Ŷᶜ_{k,n}(h) and σ̂ᶜ_k(h). This is called the standard bootstrap interval based on the percentile-t method and denoted as BPI_pt. As mentioned earlier, Thombs and Schucany (1990) and Kim (1999) found that these BPI's exhibit unsatisfactory small-sample performances when the model is a near-unit-root process. Because BBPI's are constructed using bias-corrected parameter estimators, they have strong potential to perform better than the BPI's.

For the bootstrap to be a valid small-sample alternative to the asymptotic method, it should be shown to satisfy some desirable asymptotic properties. As the following theorem states, the bootstrap-after-bootstrap estimators and forecasts satisfy desirable asymptotic properties under certain conditions.

Theorem (Asymptotic Validity of the Bootstrap). Consider a stationary AR process given in (1). Under the assumption that u_t follows an iid normal distribution, along almost all sample sequences, conditionally on the data, (a) Âᶜ and A*ᶜ converge to A in conditional probability, (b) Ĥᶜ converges to H in conditional probability, and (c) Y*_n(h) converges to Y_{n+h} in distribution.

The proof of the preceding theorem is given in the Appendix. It should be noted that both the standard bootstrap and bootstrap-after-bootstrap procedures have been shown to be asymptotically valid only for stationary AR processes, but we follow Kilian (1998a,b) in also investigating their performance in small samples for exact unit-root processes.

In establishing the asymptotic validity of the bootstrap in this article, the assumption of normal innovations is required. This is because the backward disturbance terms v_t in (2) are independent only when the forward disturbance terms u_t in (1) are normally distributed. It should be noted that the API also relies on the normality assumption for its asymptotic validity (see Lütkepohl 1991, p. 33). For bootstrap intervals, an alternative under nonnormal innovations is to resample forward residuals, whose underlying innovations are independent, and obtain backward residuals by using the relationship between the forward and backward AR models (Breidt et al. 1992; McCullough 1994). An investigation of these methods


is beyond the scope of this article. Instead, the bootstrap intervals proposed in this article are simulated under a wide range of nonnormal innovations.
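The two-stage procedure of this section can be sketched in the univariate AR(1) case. This is a simplified, illustrative implementation under several assumptions that go beyond the article: p = 1 and K = 1 (so τ = α/2), the bias is estimated as the mean of the Stage-1 bootstrap estimates minus the LS estimate, only the slope is bias-corrected, Kilian's (1998a) stationarity adjustment is omitted, and the Stage-1 bias estimate is reused for the Stage-2 replicates as the text suggests. All function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_ar1(dep, lag):
    """LS fit of dep_t = c + b * lag_t + e_t; returns (c, b, residuals)."""
    X = np.column_stack([np.ones(len(lag)), lag])
    coef, *_ = np.linalg.lstsq(X, dep, rcond=None)
    c, b = coef
    return c, b, dep - X @ coef

def bbpi_ar1(y, h=1, alpha=0.05, B1=200, B2=999):
    """Bootstrap-after-bootstrap percentile prediction interval,
    univariate AR(1) sketch."""
    n = len(y)
    c_f, b_f, res_f = fit_ar1(y[1:], y[:-1])   # forward model, for forecasts
    c_b, b_b, res_b = fit_ar1(y[:-1], y[1:])   # backward model, for resampling
    res_f = res_f - res_f.mean()
    res_b = res_b - res_b.mean()

    # Stage 1: bootstrap estimate of the small-sample bias of the slope
    slopes = np.empty(B1)
    for i in range(B1):
        ys = np.empty(n)
        ys[0] = y[0]
        u = rng.choice(res_f, size=n, replace=True)
        for t in range(1, n):
            ys[t] = c_f + b_f * ys[t - 1] + u[t]
        _, slopes[i], _ = fit_ar1(ys[1:], ys[:-1])
    bias_b = slopes.mean() - b_f

    # Stage 2: backward resampling conditional on the last observation,
    # bias-corrected re-estimation, and recursive bootstrap forecasts
    fcasts = np.empty(B2)
    for i in range(B2):
        ys = np.empty(n)
        ys[-1] = y[-1]                          # condition on y_n
        v = rng.choice(res_b, size=n, replace=True)
        for t in range(n - 2, -1, -1):
            ys[t] = c_b + b_b * ys[t + 1] + v[t]
        c_s, b_s, _ = fit_ar1(ys[1:], ys[:-1])
        b_s -= bias_b                           # reuse the Stage-1 bias estimate
        f = y[-1]
        for _ in range(h):
            f = c_s + b_s * f + rng.choice(res_f)
        fcasts[i] = f
    tau = alpha / 2
    return np.quantile(fcasts, [tau, 1 - tau])

# illustration on a simulated near-unit-root AR(1) series
y = np.empty(80)
y[0] = 0.0
for t in range(1, 80):
    y[t] = 0.95 * y[t - 1] + rng.normal()
lo, hi = bbpi_ar1(y, h=4)
```

Generating the pseudo-series backward from y_n is what distinguishes this scheme from forward-resampling bootstraps: every Stage-2 replicate shares the observed endpoint, so the forecast distribution is conditional on the sample path.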

2. EXPERIMENTAL DESIGN

Table 1 presents the coefficient matrices of five bivariate AR(1) models simulated, labeled M1 to M5. These models are chosen so that important parts of the parameter space (for example, unit root, near unit root, and stationarity) are systematically dealt with in the simulations. Note that the model M4 is cointegrated. Other design settings include v ∈ {(0, 0)', (1, 1)'} and vech(Σ_u) = (1, .3, 1)', similar to the design adopted by Kilian (1998a,b), where vech is the column-stacking operator that stacks elements on and below the diagonal only. Sample sizes considered are 50, 100, and 200 to represent small, moderate, and large sample sizes. The forecast horizon h ranges from 1 to 8. The number of Monte Carlo trials is set to 500, while the number of bootstrap replications in Stage 1 is set to 500 and that in Stage 2 is set to 999 (the latter choice is to avoid the discreteness problem; see Booth and Hall 1994). The nominal coverage rate (1 - α) for joint Bonferroni prediction intervals is set to .95. The results associated with the nominal coverage rate of .9 provided qualitatively similar results.

The criteria of comparison are the conditional coverage rate and the average area covered by the joint prediction interval. The former is the average frequency of the true future values jointly belonging to the K Bonferroni prediction intervals, and the latter is the Kth root of the mean area (volume) of the rectangle (rectangular region) formed jointly by the K Bonferroni prediction intervals. To calculate the conditional coverage rate, 100 true future values are generated conditionally on the last p observations of the AR process for each Monte Carlo trial (see Thombs and Schucany 1990).
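The conditional coverage criterion can be sketched in a univariate setting: generate future values conditionally on the last observation of the process and count how often they fall inside a given interval. The parameter values and the exact conditional normal interval used as a check below are illustrative, not taken from the article's bivariate design.

```python
import numpy as np

rng = np.random.default_rng(3)

def conditional_coverage(y_last, a, interval, h=4, n_future=100):
    """Frequency with which true h-step-ahead values, generated
    conditionally on the last observation y_last of a zero-mean
    AR(1) with slope a and N(0, 1) innovations, fall in `interval`."""
    lo, hi = interval
    hits = 0
    for _ in range(n_future):
        f = y_last
        for _ in range(h):
            f = a * f + rng.normal()
        hits += (lo <= f <= hi)
    return hits / n_future

# check with the exact conditional normal interval, which should
# cover close to its 95% nominal rate
a, y_last, h = 0.9, 1.5, 4
mean_h = a**h * y_last                            # conditional forecast mean
sd_h = np.sqrt(sum(a**(2 * j) for j in range(h)))  # h-step forecast error sd
exact = (mean_h - 1.96 * sd_h, mean_h + 1.96 * sd_h)
rate = conditional_coverage(y_last, a, exact, h=h, n_future=2000)
```

In the article's design the same idea is applied jointly: a Monte Carlo trial counts a future draw as covered only if every one of the K components lies in its Bonferroni interval.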

3. SIMULATION RESULTS

3.1 Stationary Models

Figure 1 exhibits conditional coverage rates of the stationary VAR(1) models M1 to M3 when v = (0, 0)'. It can be seen from the results associated with models M1 and M2 that API's and BPI's underestimate the nominal coverage rate to a degree. The degree of underestimation, however, decreases as the sample size increases. When n = 200, API's and BPI's exhibit conditional coverage rates nearly identical to 95% in most cases. There is a tendency for BPI_pt to perform better than API and

Table 1. Design of the Bivariate AR(1) Models

          M1       M2      M3         M4      M5
A_1       [entries illegible in this transcript]
Roots     2, -2    2       1.03, 2    1, 2    1

NOTE: The entries in the second row indicate the values of the A_1 matrices; model M4 is cointegrated; v ∈ {(0, 0)', (1, 1)'}; and, unless stated otherwise, u_t ~ iid N(0, Σ_u) with vech(Σ_u) = (1, .3, 1)'.

BPI_p. Contrary to API's and BPI's, BBPI's overestimate the nominal coverage rate. BBPI_p substantially overestimates the nominal coverage rate when the sample size is small, while BBPI_pt performs much better than BBPI_p, slightly overestimating the nominal coverage rate. The degree of overestimation by BBPI's decreases as the sample size increases. For the case of the near-unit-root model M3, BBPI's perform much better than the others. When n ∈ {50, 100}, API's and BPI's seriously underestimate the nominal coverage rate and the degree of underestimation increases as h increases. BBPI's do not show such sensitivity to the increasing values of h when n ∈ {50, 100}, although they overestimate the nominal coverage rate to a degree. It is also evident that BBPI_pt performs better than BBPI_p in most cases. When n = 200, API's and BPI's perform satisfactorily, outperforming BBPI's, which slightly overestimate the nominal coverage rate.

Figure 2 reports average areas of joint prediction intervals for models M2 and M3 when v = (0, 0)'. Plausibly, the average area increases as h increases. BBPI's are wider than API's and BPI's in all cases, although the differences in average areas become smaller as n increases. For model M2, BBPI_p can be much wider than BBPI_pt, especially when n is small. For model M3, they show virtually identical values of average area. Taking into account the coverage properties reported in Figure 1, it can be said that API's and BPI's are often too narrow and BBPI_p is often too wide when the sample size is small or moderate. Because the average area properties observed in Figure 2 are also evident for all other models under all possible situations simulated, further details of the average area properties are not reported.

3.2 Unit-Root Models

Figure 3 presents conditional coverage rates for models M4 and M5. The first panel exhibits the case of model M4 when v = (0, 0)'. As in the near-unit-root case, serious underestimation of the nominal coverage rate by API's and BPI's is evident, especially when h is long. When n = 200, however, API's and BPI's perform satisfactorily, with their conditional coverage rates fairly close to 95%. BBPI's slightly overestimate the nominal coverage rate in most cases but do not deteriorate substantially as h increases. As before, BBPI_p shows a higher degree of overestimation than BBPI_pt. The second panel shows the case of model M5 when v = (1, 1)'. The API exhibits a substantial degree of underestimation, especially when n is small and h is long. BPI's perform much better than API's, but they underestimate the nominal coverage rate to a degree, especially when n is small. API's and BPI's underestimate the nominal coverage rate to some extent even when n = 200. BBPI's overestimate the nominal coverage rate to a degree when n = 50 but provide fairly accurate conditional coverage rates when the sample size is larger. It should also be noted that the simulation results associated with model M4 when v = (1, 1)' are qualitatively similar to the case when v = (0, 0)'.

3.3 Robustness to Nonnormality

It was mentioned earlier that the assumption of normality is crucial for the asymptotic validity of the bootstrap proposed


Figure 1. Conditional Coverage Rates for Bonferroni Prediction Intervals: Stationary AR Models, Normal Innovations, Nominal Coverage Rate of at Least 95% for Joint Bonferroni Intervals. The data-generation process is Y_t = v + A_1 Y_{t-1} + u_t, with u_t ~ iid N(0, Σ_u), vech(Σ_u) = (1, .3, 1)', and v = (0, 0)'. (Panels: models M1, M2, and M3, each for n = 50, 100, 200, plotting coverage against h for api, bbpi, bbpit, bpi, and bpit.)

in this article. However, it is often the case in practice that innovations are suspected of being nonnormal. It is therefore instructive to examine the small-sample properties of the asymptotic and bootstrap prediction intervals under nonnormal innovations. Nonnormal innovations are generated using the Student-t(5) and χ²(4) distributions, which are representatives of fat-tailed and asymmetric distributions. These nonnormal distributions are adjusted so that they have zero means and the same variance-covariance structure as Σ_u given in Section 2.

Figure 4 presents the conditional coverage rates of prediction intervals for model M4 under nonnormal innovations when v = (0, 0)'. The results reported here can be compared with those under normal innovations in Figure 3. Under Student-t innovations, the results are virtually unchanged, except that the performance of API's has somewhat worsened. However, under χ²(4) innovations, some prediction intervals show drastic changes. BBPI_p and API do not show any noticeable changes, while BPI_p performs slightly better than under normal innovations. It is noticeable that BBPI_pt and BPI_pt show substantial deterioration when h is short. The degree of deterioration does not decrease as n increases. The third and fourth panels of Figure 4 present conditional coverage rates from model M5 when v = (1, 1)' under nonnormal innovations.


Figure 2. Average Area of Bonferroni Prediction Intervals for AR Models: Stationary AR Models, Normal Innovations, Nominal Coverage Rate of at Least 95% for Joint Bonferroni Intervals. The data-generation process is Y_t = v + A_1 Y_{t-1} + u_t, with u_t ~ iid N(0, Σ_u), vech(Σ_u) = (1, .3, 1)', and v = (0, 0)'. (Panels: models M2 and M3, each for n = 50, 100, 200.)

These figures can be compared to those for model M5 in Figure 3 under normal innovations. As in the case of model M4, BBPI_pt and BPI_pt show substantial deterioration under χ²(4) innovations, especially when h is short and n is large. Under Student-t innovations, all prediction intervals seem to be robust to the departure from normality, except for API, which shows a higher degree of underestimation. These findings suggest that bootstrap intervals are reasonably robust to nonnormal innovations. However, care should be taken when asymmetry of the innovation distribution is suspected. In this case, the bootstrap prediction intervals based on the percentile-t method may perform undesirably. Rather, the use of the bootstrap intervals based on the percentile method is strongly recommended.

One may also be interested in the performances of asymptotic and bootstrap prediction intervals when innovations are generated with conditional heteroscedasticity. As a further extension, the case of VAR(1) models with autoregressive conditional heteroscedasticity (ARCH) innovations (Engle 1982) is also examined. We consider an ARCH(1) process that can be written as

    e_t = η_t (ψ_0 + ψ_1 e²_{t-1})^{1/2},    (7)

where η_t is generated randomly from the standard normal distribution. The parameter values are chosen such that ψ_1 ∈ {.3, .6, .9} and ψ_0 = 1. For the bivariate case, two independent ARCH(1) processes are generated as in (7) and transformed so that they share the same variance-covariance structure as vech(Σ_u) = (1, .3, 1)'. Figure 5 presents simulation results for models M4 and M5 when ARCH(1) innovations are generated with ψ_1 = .6. From the first panel, in which the results from model M4 with v = (0, 0)' are presented, it is again evident that BBPI's perform much better than the other alternatives, especially when the sample size is small. API's and BPI's underestimate the nominal coverage rate substantially when n ∈ {50, 100} and h is long, while BBPI's provide accurate coverage properties for all sample sizes. A similar feature is evident from model M5 with v = (1, 1)', in which BBPI's perform much better than the others even when the sample size is 200. The case of model M5 with v = (0, 0)' provided qualitatively similar results. It is also evident from both models that BBPI_p performs slightly better than BBPI_pt, especially when the sample size is small. Comparing Figure 5 with Figure 3, it can be seen that the performances of all prediction intervals are somewhat worsened under ARCH(1) innovations. Another interesting feature under ARCH(1) innovations is that BBPI's show a tendency to slightly underestimate the nominal coverage rate, a feature not generally observed in the case of the other innovations. This may be due to the volatility present in ARCH(1) innovations.

The results presented in Sections 3.1 and 3.2 indicate that the use of BBPI's is highly desirable under normality of innovations, especially for near- or exact-unit-root AR models. This section provides simulation evidence that the desirable properties of BBPI's remain mostly unchanged when innovations


Figure 3. Conditional Coverage Rates for Bonferroni Prediction Intervals: Unit-Root AR Models, Normal Innovations, Nominal Coverage Rate of at Least 95% for Joint Bonferroni Intervals. The data-generation process is Y_t = v + A_1 Y_{t-1} + u_t, with u_t ~ iid N(0, Σ_u), vech(Σ_u) = (1, .3, 1)'. For model M4, v = (0, 0)', and for model M5, v = (1, 1)'. (Panels: models M4 and M5, each for n = 50, 100, 200.)

are generated from nonnormal innovations including the conditional heteroscedasticity.

3.4 Further Simulation Findings and Discussions

It is of interest to examine the variability of the conditional coverage rates of the joint Bonferroni prediction intervals. This is because the conditional coverage rates associated with BBPI's may show high variability as a result of bias correction. Table 2 reports the results associated with models M2 and M4 when n = 100 and v = (0, 0)', because the other results are qualitatively similar. The conditional coverage rates of API's and BPI's show similar degrees of variability, but BBPI's generate less variable conditional coverage rates than API's and BPI's in all cases. It is interesting to observe that the conditional coverage rates of BBPI_p are far less variable than those associated with BBPI_pt. However, as evident from the simulation results, BBPI_p shows a higher degree of overestimation than BBPI_pt in most cases. It seems that the use of BBPI_pt is highly attractive, because its conditional coverage rates are fairly close to the nominal coverage rate in most cases and less variable than those of API's and BPI's.

Because AR models with a deterministic time trend are popular for forecasting in practice, it is of interest to examine the small-sample properties of asymptotic and bootstrap prediction intervals when a deterministic linear time trend is included in the model. The results associated with models M2 to M4 when v = (1, 1)' are presented in Figure 6. These models are generated with the coefficients of the linear time-trend variables set to zeros, but they are treated as unknowns for estimation and forecasting. The results are in accordance with those from the case without the linear trend. For the unit-root models M3 and M4, API's and BPI's exhibit serious underestimation when n is small and h is long. BBPI's overestimate the nominal coverage rate to a degree but provide much more accurate coverage properties than API's and BPI's in all cases. For model M2, API's and BPI's show underestimation of the nominal coverage rate when n = 50, but their performances improve as n increases. As in the case of the no-trend model, BBPI's overestimate the nominal coverage rate for model M2 even when n = 200. It is again evident that BBPI_pt provides the most accurate conditional coverage rates, especially for the models with near or exact unit roots. The results suggest that BBPI's perform in general better than the other alternatives when AR models with a deterministic time trend are considered.

An interesting feature associated with BBPI's is that they tend to overestimate the nominal coverage rate in most cases. This tendency is particularly strong for stationary AR models whose roots are far from unity. The following conjecture can be made. For those AR models whose biases of parameter estimators are likely to be negligible, the bias-correction procedure of the bootstrap-after-bootstrap can bring more sampling variability than gain in precision. This may result in overestimation.


Figure 4. Conditional Coverage Rates of Bonferroni Prediction Intervals: Nonnormal Innovations, Student-t(5) and χ²(4) Distributions, Nominal Coverage Rate of at Least 95% for Joint Bonferroni Intervals. The data-generation process is Y_t = v + A_1 Y_{t-1} + u_t, vech(Σ_u) = (1, .3, 1)'. For model M4, v = (0, 0)', and for model M5, v = (1, 1)'. (Panels: M4 with Student-t innovations, M4 with chi-squared innovations, M5 with drift and Student-t innovations, and M5 with drift and chi-squared innovations, each for n = 50, 100, 200.)

  • 8/4/2019 Bootstrap-After-Bootstrap Prediction Intervals for Auto Regressive Models

    10/13

    Kim:Bootstrap-After-Bootstraprediction ntervals orAutoregressiveModels 125ModelM4n=50 n=100 n=200

    100 100 100

    -4-api

    909-85--9- 90-------------- -b--------5-bpi----bpit

    80 -- ---- - -- -- 80-- - ---- 80 --

    75 75 ,175 ,,1 2 3 4 5 6 7 8 1 2 3 4 5 6 7 8 1 2 3 4 5 6 7 8h h h

    Model M5(withdrfit)n=50 n=100 n=200

    100 100 10095 95 ------ ------95 --90-90 90 ----4---api85 -- -- 80---85----------- - 85---------------- - -Ubbpi

    ----- -----

    ---A-bbpit80 ---------- -----------80------------------ 80 - ------80------- p75---------------75 ---------------------------75 ---------------------------G -bpft70 ---- --------- 70 ------------------------ 7065 65 65

    1 2 3 4 5 6 7 8 1 2 3 4 5 6 7 8 1 2 3 4 5 6 7 8h h h

    Figure5. ConditionalCoverageRates of Bonferroni rediction ntervals: RCH(1)nnovations 0= 1 and 8 = .6; NominalCoverageRate ofat Least 95%for JointBonferronintervals.Thedata-generation rocess is Yt= v +A1Yt-,1 ut, vech(_,) = (1,.3,1)'. Formodel M4, v= (0,0)',and for model M5, , = (1,1)'.mationof the nominalcoveragerate.For AR models with nearor exact unit roots, BBPI's still show a small degreeof over-estimationeven when n = 200. These AR models may havefairly small biases of parameterestimatorswhen the sample

    Table2. StandardDeviationsof ConditionalCoverageRates forBonferroni rediction ntervalsh API BBPIp BBPIpt BPIp BPIpt

    n = 100, Model M41 3.19 3.21 3.20 3.65 3.442 3.67 2.88 3.01 3.72 3.533 3.99 2.52 3.33 4.38 4.154 4.71 2.47 3.59 4.76 4.455 5.28 2.62 4.16 5.70 5.336 6.20 2.84 4.36 6.28 5.967 6.84 2.75 5.18 6.87 6.628 7.39 3.03 5.55 7.33 7.34

    n = 100, Model M21 3.06 3.08 3.05 3.73 3.332 3.22 2.65 2.72 3.41 3.263 3.15 2.36 2.74 3.46 3.264 3.38 2.38 3.06 3.49 3.545 3.55 2.31 3.18 3.78 3.726 3.31 2.07 3.00 3.77 3.807 3.41 2.11 3.13 3.79 3.838 3.58 2.15 3.06 3.80 3.78NOTE: orbothmodels,v (0, 0)' undernormalnnovations; PI: symptotic redictionnter-vals; BBPIp:bootstrap-after-bootstrapredictionntervalsbased on the percentilemethod;BBPIpt:ootstrap-after-bootstrapredictionntervals ased on the percentile-tmethod;BPIp:standardbootstrappredictionntervals ased on the percentilemethod;and BPIpt:tandardbootstrappredictionntervals ased on the percentile-tmethod.

    size is as largeas 200. Similarlyto the case of stationaryARmodels whose roots are far from unity, gain in precisioncanbe outweighedby extrasampling variabilitybroughtaboutbybias correction.4. A MODELWITHHIGHERORDERANDDIMENSION: MPIRICALXAMPLEANDSIMULATION

Figure 6. Conditional Coverage Rates of Bonferroni Prediction Intervals: AR Models With Time Trend, Normal Innovations; Nominal Coverage Rate of at Least 95%. The data-generation process is Y_t = v + δt + A_1 Y_{t-1} + u_t, with vech(Σ_u) = (1, .3, 1)' and δ = 0. For all models M2 to M4, v = (1, 1)'. (Panels: models M2, M3, and M4; n = 50, 100, 200; h = 1 to 8.)

There have been many empirical studies on VAR forecasting, but the use of interval forecasts has been neglected. For example, Litterman (1986), McNees (1986), Bessler and Babula (1987), and Simkins (1995) used only point forecasts to evaluate the forecasting ability of VAR models. However, as Chatfield (1993) and Christoffersen (1998) stressed, interval forecasts should be used (together with, or instead of, point forecasts) for a more thorough and informative assessment of future uncertainty. As an application, the five-dimensional VAR model examined by Simkins (1995) is used for interval forecasting of U.S. macroeconomic time series. The dataset was given by Simkins (1995); it includes 172 quarterly observations from 48:1 to 90:4 on real gross national product (GNP), the GNP deflator, real business fixed investment, money supply (M1), and the unemployment rate.

A VAR(6) model is fitted following Simkins (1995). The parameters are estimated using the data points from 48:1 to 88:4, and prediction intervals are calculated for the period from 89:1 to 90:4. All data are transformed to natural logarithms for estimation and forecasting. Five of the characteristic roots calculated from the estimated coefficients are close to the unit circle, with their moduli all less than 1.1. Tests for normality of the innovations (Lütkepohl 1991; Kilian and Demiroglu 2000) are conducted. The test statistics for skewness and kurtosis yield 7.57 and 18.77, respectively, both asymptotically following the χ²(5) distribution. However, Kilian and Demiroglu (2000) found that asymptotic critical values can be highly misleading and proposed the use of bootstrap critical values. These are calculated as suggested by Kilian and Demiroglu (2000) and found to be 5.95 and 51.81 for skewness and kurtosis, respectively, at the 5% level of significance. They are drastically different from the asymptotic critical value of 11.07, as expected from the simulation results of Kilian and Demiroglu (2000) for large VAR models. The inference based on the bootstrap critical values indicates that
there is evidence for the presence of asymmetric innovations. This is also supported by visual inspection of other descriptive measures. A simple Lagrange multiplier (LM) test for ARCH errors (Franses 1998, p. 165) also finds evidence of ARCH(1) errors in the residuals of the unemployment rate, real business fixed investment, and money supply equations. The previous section found that bootstrap intervals, especially those based on the percentile method, are reasonably robust to this type of nonnormality.

API's, BPI's, and BBPI's are calculated for a nominal coverage rate of at least 95% for joint Bonferroni prediction intervals. For BBPI's, the numbers of bootstrap replications for stages 1 and 2 are set to 500 and 4,999, respectively. Although the details are not given here, the results are found
Figure 7. Conditional Coverage Rates for Bonferroni Prediction Intervals for the Five-Dimensional VAR(6) Model. Estimated coefficients from Simkins (1995) are used as the data-generation process; n = 100 with normal innovations, and the nominal coverage rate for Bonferroni prediction intervals is 95%.
to be compatible with those of the Monte Carlo simulations conducted in this article. That is, API's and BPI's behave similarly, and they are narrower than BBPI's in all cases. BBPIp's and BBPIpt's are also found to be similar for all variables. Another feature worth mentioning is that prediction intervals for all variables in the system can be very wide, especially for h > 4, indicating a high degree of future uncertainty. This suggests that evaluation of point forecasts alone may provide a misleading interpretation of future uncertainty.

In addition, a simulation experiment is conducted using the estimated coefficients of the preceding VAR model as the data-generating process when the sample size is 100. The model contains a nonzero intercept vector, as estimated from the preceding example. Normal innovations are generated treating the estimated Σ_u as the true variance-covariance matrix of innovations. Other design settings are the same as those summarized in Section 2. The results are presented in Figure 7, where the conditional coverage rates of asymptotic and bootstrap Bonferroni prediction intervals are plotted. API substantially underestimates the 95% nominal rate even when h = 1, and the degree of underestimation sharply increases with h. BPI's exhibit a similar pattern, but with far less underestimation than API. The desirable properties of BBPI's are again evident, providing conditional coverage rates fairly close to 95% for all h values. Simulations conducted with the nominal coverage rate (1 - α) set to .75 yield qualitatively similar results.

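The Bonferroni construction used throughout forms each of the K marginal intervals at level 1 - α/K, so that the joint coverage is at least 1 - α. The inversion step for the percentile method can be sketched as follows; the function name and the illustrative replicate array are hypothetical, not from the article:

```python
import numpy as np

def bonferroni_percentile_intervals(boot_forecasts, alpha=0.05):
    # boot_forecasts: (B, K) array of bootstrap replicates of the K-variable
    # h-step-ahead forecast. Each marginal interval is taken at level
    # 1 - alpha/K, so joint coverage is at least 1 - alpha by Bonferroni.
    B, K = boot_forecasts.shape
    a = alpha / K                               # Bonferroni adjustment
    lower = np.quantile(boot_forecasts, a / 2, axis=0)
    upper = np.quantile(boot_forecasts, 1.0 - a / 2, axis=0)
    return lower, upper

# Illustrative replicates: 4,999 draws for a five-variable system
rng = np.random.default_rng(0)
reps = rng.normal(size=(4999, 5))
lo, hi = bonferroni_percentile_intervals(reps, alpha=0.05)
```

Each marginal interval is wider than an unadjusted 95% percentile interval, which is the price paid for the guaranteed joint coverage.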
Although only a pilot study, the simulation presented in this section demonstrates that the results obtained in the bivariate AR case are highly suggestive for AR models of higher order and dimension. It seems that the serious underestimation of future uncertainty by API is even more pronounced in a large VAR system with the rich lag structure widely used in practice. The use of the bootstrap-after-bootstrap is strongly recommended when interval forecasting is conducted with a large VAR system.

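As a rough illustration of the two-stage idea, a minimal univariate AR(1) sketch is given below: stage 1 estimates the bias of the slope estimator by bootstrap and corrects it; stage 2 generates forecast replicates from the bias-corrected model and takes percentile quantiles. This is a simplification, not the article's algorithm: the article treats multivariate AR(p) models, corrects the full coefficient matrices, and conditions stage-2 resampling on the last p observations through the backward AR representation (Thombs and Schucany 1990), whereas this sketch conditions only on the last observation in a forward recursion. All names and settings are illustrative.

```python
import numpy as np

def fit_ar1(y):
    # LS fit of y_t = v + a * y_{t-1} + u_t; returns centered residuals
    x, z = y[:-1], y[1:]
    a = np.cov(x, z, bias=True)[0, 1] / np.var(x)
    v = z.mean() - a * x.mean()
    resid = z - v - a * x
    return v, a, resid - resid.mean()

def bbpi_ar1(y, h, B1=500, B2=4999, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    v, a, resid = fit_ar1(y)

    # Stage 1: bootstrap estimate of the bias of the slope estimator
    boot_a = np.empty(B1)
    for b in range(B1):
        u = rng.choice(resid, size=n)
        ys = np.empty(n)
        ys[0] = y[0]
        for t in range(1, n):
            ys[t] = v + a * ys[t - 1] + u[t]
        boot_a[b] = fit_ar1(ys)[1]
    a_c = a - (boot_a.mean() - a)       # bias-corrected slope

    # Stage 2: forecast replicates from the bias-corrected model,
    # conditioning (in this simplified sketch) on the last observation
    fc = np.empty(B2)
    for b in range(B2):
        yf = y[-1]
        for _ in range(h):
            yf = v + a_c * yf + rng.choice(resid)
        fc[b] = yf
    return np.quantile(fc, [alpha / 2, 1 - alpha / 2])

# Hypothetical series: stationary AR(1) with slope .5
rng = np.random.default_rng(1)
y = np.zeros(120)
for t in range(1, 120):
    y[t] = 0.5 * y[t - 1] + rng.normal()
lo, hi = bbpi_ar1(y, h=4, B1=200, B2=999)
```

Because the LS slope estimator is biased downward in small samples, the corrected slope is typically larger than the raw estimate, which widens the stage-2 forecast distribution and counteracts the undercoverage documented for standard bootstrap intervals.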
5. CONCLUDING REMARKS

This article proposes the use of prediction intervals based on the bootstrap-after-bootstrap for AR models. Simulation results suggest that bootstrap-after-bootstrap Bonferroni prediction intervals perform substantially better than their asymptotic and standard bootstrap alternatives in most cases, especially when the sample size is small or moderate. The performances of API's and BPI's deteriorate substantially for models with near unit roots, but bootstrap-after-bootstrap prediction intervals seem robust. Asymptotic and standard bootstrap prediction intervals tend to be too narrow compared to BBPI's, substantially underestimating the nominal coverage rate. BBPI's also tend to perform satisfactorily for integrated and cointegrated AR processes under both normal and nonnormal innovations. Hence, the use of the bootstrap-after-bootstrap is strongly recommended for AR forecasting of economic and business time series in practice.

In general, BBPI's based on the percentile-t method perform better than those based on the percentile method, but the latter can provide a superior alternative when the model is near or exactly unit-root nonstationary. Simulation results suggest that BBPI's are reasonably robust to innovations generated with a fat-tailed distribution or conditional heteroscedasticity. When innovations are generated from an asymmetric distribution, however, bootstrap prediction intervals based on the percentile method should be favored over those based on the percentile-t method.

This article examined the small-sample properties of alternative Bonferroni prediction intervals for AR models under the ideal condition where the AR order is known and there is no model misspecification. Future research efforts should be directed toward investigation of the effect of unknown AR order and model misspecification (see Kilian 1998c). The use of pretest information related to the presence of unit roots and cointegrating restrictions can affect the small-sample performance of prediction intervals (Diebold and Kilian 2000) and is a subject of future research. Given that the assumption of normality is essential for the bootstrap-after-bootstrap employed in this article, an interesting extension in future research is the imposition of normality when the residuals from the backward AR model are resampled. Another possibility for AR models with nonnormal innovations is the use of resampling based on the block bootstrap instead of backward AR models.
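The percentile versus percentile-t comparison above can be made concrete with a small sketch of the percentile-t inversion step: each bootstrap forecast error is studentized by a replicate-specific standard error, and the resulting quantiles are inverted around the sample forecast. How the replicate standard errors are computed is not shown here; the replicate arrays below are hypothetical inputs, not the article's:

```python
import numpy as np

def percentile_t_interval(point, se, boot_point, boot_future, boot_se, alpha=0.05):
    # Studentize each bootstrap forecast error with its replicate's own
    # standard error, then invert the quantiles around the sample forecast:
    # P(q_lo <= (yhat - y)/se <= q_hi) ~ 1 - alpha
    #   =>  y in [yhat - q_hi * se, yhat - q_lo * se].
    t = (boot_point - boot_future) / boot_se
    q_lo, q_hi = np.quantile(t, [alpha / 2, 1 - alpha / 2])
    return point - q_hi * se, point - q_lo * se

# Hypothetical replicate inputs
rng = np.random.default_rng(0)
B = 4999
boot_future = rng.normal(0.0, 1.0, size=B)   # future-value replicates
boot_point = rng.normal(0.0, 0.2, size=B)    # point-forecast replicates
boot_se = np.full(B, 1.0)                    # replicate standard errors
lo, hi = percentile_t_interval(0.0, 1.0, boot_point, boot_future, boot_se)
```

Unlike the percentile method, this interval need not be symmetric in the bootstrap distribution itself; its accuracy depends on how well the replicate standard errors track the true forecast-error scale, which is one reason the two methods can rank differently across designs.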

ACKNOWLEDGMENTS

I thank the editor and anonymous referees for their constructive and detailed comments that greatly improved the article in many ways. All remaining errors are mine. An earlier version of this article was presented at the 1999 Australasian Meetings of the Econometric Society held in Sydney, Australia. This research was partly supported by a grant from James Cook University.

APPENDIX: PROOF OF THEOREM

The forward AR model (1) can be written as $W_t = \Pi W_{t-1} + U_t$, where $W_t = (Y_t', \ldots, Y_{t-p+1}')'$ and $U_t = (u_t', 0, \ldots, 0)'$ are $Kp \times 1$ vectors and $\Pi$ is a $Kp \times Kp$ matrix of the form
$$\Pi = \begin{bmatrix} A^* & A_p \\ I & 0 \end{bmatrix},$$
where $I$ is a $K(p-1)$ identity matrix, $0$ is a $K(p-1) \times K$ null matrix, and $A^* = [A_1, \ldots, A_{p-1}]$. Similarly, the backward model (2) can be expressed as $W_t = \Omega W_{t+1} + V_t$, where $V_t = (0, \ldots, 0, v_{t-p+1}')'$ is a $Kp \times 1$ vector and $\Omega$ is a $Kp \times Kp$ matrix of the form
$$\Omega = \begin{bmatrix} 0 & I \\ H_p & H^* \end{bmatrix}, \qquad \text{with } H^* = [H_{p-1}, \ldots, H_1].$$
Note that $\Pi$ and $\Omega$ are related as $\Omega = \Gamma \Pi' \Gamma^{-1}$, where $\Gamma = E(W_t W_t')$. The LS estimators of $\Pi$ and $\Omega$ can be written as
$$\hat{\Pi} = \Big(\sum W_t W_{t-1}'\Big)\Big(\sum W_{t-1} W_{t-1}'\Big)^{-1} = \Pi + \omega_n \Gamma_n^{-1}$$
and
$$\hat{\Omega} = \Big(\sum W_t W_{t+1}'\Big)\Big(\sum W_{t+1} W_{t+1}'\Big)^{-1} = \Omega + \delta_n D_n^{-1},$$
where $\omega_n = (1/n)\sum U_t W_{t-1}'$, $\Gamma_n = (1/n)\sum W_{t-1} W_{t-1}'$, $\delta_n = (1/n)\sum V_t W_{t+1}'$, and $D_n = (1/n)\sum W_{t+1} W_{t+1}'$. The usual asymptotic theory applies, so that $\mathrm{plim}(\omega_n) = 0$, $\mathrm{plim}(\Gamma_n) = \Gamma$, $\mathrm{plim}(\delta_n) = 0$, and $\mathrm{plim}(D_n) = \Gamma$, where "plim" indicates the probability limit, implying the consistency of the LS estimators. The bootstrap LS estimators based on residual resampling can be expressed as
$$\hat{\Pi}^* = \hat{\Pi} + \omega_n^* \Gamma_n^{*-1}, \qquad \hat{\Omega}^* = \hat{\Omega} + \delta_n^* D_n^{*-1},$$
where $\omega_n^*$, $\Gamma_n^*$, $\delta_n^*$, and $D_n^*$ are the bootstrap counterparts of $\omega_n$, $\Gamma_n$, $\delta_n$, and $D_n$. Theorem 4 of Freedman (1984) indicates that, as $n$ increases, $\Gamma_n^*$ and $D_n^*$ converge to $\Gamma$ in conditional probability. Moreover, the conditional law of $n^{1/2}\omega_n^*$ has the same limit as the unconditional law of $n^{1/2}\omega_n$, and that of $n^{1/2}\delta_n^*$ has the same limit as the unconditional law of $n^{1/2}\delta_n$. This means that $\hat{A}^*$ and $\hat{H}^*$, respectively, converge to $A$ and $H$ in conditional probability. Because the bias correction is negligible in large samples, the same is true for $\hat{A}^c$ and $\hat{H}^c$. In a similar way, it can be shown that $\hat{A}^{c*}$ converges to $A$ in conditional probability.

Since $\hat{A}^{c*}$ converges to $A$ in conditional probability and $u_{n+h}^*$ converges to $u_{n+h}$ in distribution by Theorem 4 of Freedman (1984) as the sample size increases, it follows that $Y_n^*(h) = \hat{A}_1^{c*} Y_n^*(h-1) + \cdots + \hat{A}_p^{c*} Y_n^*(h-p) + u_{n+h}^*$ converges to $Y_{n+h}$ in distribution for all $h$.
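Assuming the reconstruction above, the relation $\Omega = \Gamma \Pi' \Gamma^{-1}$ can be checked numerically in the simplest case $p = 1$, where $\Pi = A$ and $\Omega = H$: the LS coefficient of the backward regression of $Y_t$ on $Y_{t+1}$ should approach $\Gamma A' \Gamma^{-1}$. The coefficient matrix and sample size below are illustrative:

```python
import numpy as np

# Stationary bivariate VAR(1): Y_t = A Y_{t-1} + u_t, u_t ~ N(0, I)
A = np.array([[0.5, 0.1],
              [0.2, 0.4]])
rng = np.random.default_rng(0)
n = 100_000
Y = np.zeros((n, 2))
for t in range(1, n):
    Y[t] = A @ Y[t - 1] + rng.standard_normal(2)

# LS coefficient of the backward regression of Y_t on Y_{t+1}
X, Z = Y[1:], Y[:-1]                        # X plays Y_{t+1}, Z plays Y_t
H_hat = (Z.T @ X) @ np.linalg.inv(X.T @ X)

# Theoretical backward matrix H = Gamma A' Gamma^{-1}, where
# Gamma = E(Y_t Y_t') solves the fixed point Gamma = A Gamma A' + I
G = np.eye(2)
for _ in range(200):
    G = A @ G @ A.T + np.eye(2)
H = G @ A.T @ np.linalg.inv(G)
```

With a long simulated series the two matrices agree to a couple of decimal places, illustrating why the backward representation can be estimated consistently by LS.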

[Received December 1998. Revised April 2000.]

REFERENCES

Bessler, D. A., and Babula, R. A. (1987), "Forecasting Wheat Exports: Do Exchange Rates Matter?" Journal of Business & Economic Statistics, 5, 397-406.
Booth, J. G., and Hall, P. (1994), "Monte Carlo Approximation and the Iterated Bootstrap," Biometrika, 81, 331-340.
Breidt, F. J., Davis, R. A., and Dunsmuir, W. T. M. (1992), "On Backcasting in Linear Time Series Models," in New Directions in Time Series Analysis Part I, eds. D. Brillinger et al., New York: Springer-Verlag, pp. 25-40.
Breidt, F. J., Davis, R. A., and Dunsmuir, W. T. M. (1995), "Improved Bootstrap Prediction Intervals for Autoregressions," Journal of Time Series Analysis, 16, 177-200.
Chatfield, C. (1993), "Calculating Interval Forecasts," Journal of Business & Economic Statistics, 11, 121-135.
Christoffersen, P. F. (1998), "Evaluating Interval Forecasts," International Economic Review, 39, 841-862.
Diebold, F. X., and Kilian, L. (2000), "Unit-Root Tests Are Useful for Selecting Forecasting Models," Journal of Business & Economic Statistics, 18, 265-273.
Efron, B., and Tibshirani, R. J. (1993), An Introduction to the Bootstrap, New York: Chapman & Hall.
Engle, R. F. (1982), "Autoregressive Conditional Heteroskedasticity With Estimates of the Variance of U.K. Inflation," Econometrica, 50, 987-1008.
Findley, D. F. (1986), "On Bootstrap Estimates of Forecast Mean Square Errors for Autoregressive Processes," in Computer Science and Statistics: The Interface, ed. D. M. Allen, Amsterdam: Elsevier Science, pp. 11-17.
Franses, P. H. (1998), Time Series Models for Business and Economic Forecasting, Cambridge, U.K.: Cambridge University Press.
Freedman, D. A. (1984), "On Bootstrapping Two-Stage Least-Squares Estimates in Stationary Linear Models," The Annals of Statistics, 12, 827-842.
Grigoletto, M. (1998), "Bootstrap Prediction Intervals for Autoregressions: Some Alternatives," International Journal of Forecasting, 14, 447-456.
Kabaila, P. (1993), "On Bootstrap Predictive Inference for Autoregressive Processes," Journal of Time Series Analysis, 14, 473-484.
Kilian, L. (1998a), "Small-Sample Confidence Intervals for Impulse Response Functions," The Review of Economics and Statistics, 80, 218-230.
Kilian, L. (1998b), "Confidence Intervals for Impulse Responses Under Departures From Normality," Econometric Reviews, 17, 1-29.
Kilian, L. (1998c), "Accounting for Lag-Order Uncertainty in Autoregressions: The Endogenous Lag Order Bootstrap Algorithm," Journal of Time Series Analysis, 19, 531-548.
Kilian, L., and Demiroglu, U. (2000), "Residual-Based Bootstrap Tests for Normality in Autoregressions: Asymptotic Theory and Simulation Evidence," Journal of Business & Economic Statistics, 18, 40-50.
Kim, J. H. (1997), "Relationship Between the Forward and Backward Representations of the Stationary VAR Model, Problem 97.5.2," Econometric Theory, 13, 889-890.
Kim, J. H. (1998), "Relationship Between the Forward and Backward Representations of the Stationary VAR Model, Solution 97.5.2," Econometric Theory, 14, 691-693.
Kim, J. H. (1999), "Asymptotic and Bootstrap Prediction Regions for Vector Autoregression," International Journal of Forecasting, 15, 393-403.
Litterman, R. B. (1986), "Forecasting With Bayesian Vector Autoregressions: Five Years of Experience," Journal of Business & Economic Statistics, 4, 25-38.
Lütkepohl, H. (1991), Introduction to Multiple Time Series Analysis, Berlin: Springer-Verlag.
Maekawa, K. (1987), "Finite Sample Properties of Several Predictors From an Autoregressive Model," Econometric Theory, 3, 359-370.
Masarotto, G. (1990), "Bootstrap Prediction Intervals for Autoregression," International Journal of Forecasting, 6, 229-239.
McCullough, B. D. (1994), "Bootstrapping Forecast Intervals: An Application to AR(p) Models," Journal of Forecasting, 13, 51-66.
McNees, S. K. (1986), "Forecasting Accuracy of Alternative Techniques: A Comparison of U.S. Macroeconomic Forecasts," Journal of Business & Economic Statistics, 4, 5-15.
Nicholls, D. F., and Pope, A. L. (1988), "Bias in Estimation of Multivariate Autoregression," Australian Journal of Statistics, 30A, 296-309.
Pope, A. L. (1990), "Biases of Estimators in Multivariate Non-Gaussian Autoregressions," Journal of Time Series Analysis, 11, 249-258.
Rilstone, P., and Veall, M. (1996), "Using Bootstrapped Confidence Intervals for Improved Inferences With Seemingly Unrelated Regression Equations," Econometric Theory, 12, 569-580.
Shaman, P., and Stine, R. A. (1988), "The Bias of Autoregressive Coefficient Estimators," Journal of the American Statistical Association, 83, 842-848.
Simkins, S. (1995), "Forecasting With Vector Autoregressive (VAR) Models Subject to Business Cycle Restrictions," International Journal of Forecasting, 11, 569-583.
Sims, C. A. (1988), "Bayesian Skepticism on Unit Root Econometrics," Journal of Economic Dynamics and Control, 12, 463-474.
Stine, R. A. (1987), "Estimating Properties of Autoregressive Forecasts," Journal of the American Statistical Association, 82, 1072-1078.
Thombs, L. A., and Schucany, W. R. (1990), "Bootstrap Prediction Intervals for Autoregression," Journal of the American Statistical Association, 85, 486-492.
Tjøstheim, D., and Paulsen, J. (1983), "Bias of Some Commonly Used Time Series Estimates," Biometrika, 70, 389-399.