
Dynamic Identification of DSGE Models

Ivana Komunjer∗ Serena Ng†

September 2009

Preliminary: Comments Welcome

Abstract

This paper studies identification of the parameters of a DSGE model using the entire autocovariance structure of the data. Classical results for identification of structural parameters do not apply because some reduced form parameters in DSGE models cannot be identified. We use the parameter restrictions that two models with identical spectrum must satisfy to obtain the rank and order conditions for identification. Both conditions depend on the parametrization of the model alone, and should be checked before any observations are considered. The results are established in a general setup that allows fewer shocks than endogenous variables. Three examples are considered to illustrate the results.

∗University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093. Email: [email protected]
†Columbia University, 420 W. 118 St. MC 3308, New York, NY 10027. Email: [email protected]

This paper was presented at the 2009 NBER summer institute, as well as the econometrics seminar at UC Davis. We thank Roger Farmer, Frank Schorfheide, Giuseppe Ragusa, and Leon Wegge for helpful comments. Part of this work was done while the first author was visiting Columbia University whose warm hospitality is gratefully acknowledged. The second author acknowledges financial support from the National Science Foundation (SES-0549978).


1 Introduction

Dynamic stochastic general equilibrium (DSGE) models have now reached the level of sophistication

to permit analysis of important policy and theoretical macroeconomic issues. Whereas the param-

eters in these models used to be calibrated, numerical advances in the last two decades have made

it possible to estimate models with as many as a hundred parameters. Researchers are, however,

aware that not all deep or structural parameters of DSGE models can be consistently estimated

due to identification problems. A DSGE model is identifiable whenever changing the values of the

model parameters induces a variation in the reduced form parameter that is not observationally

equivalent to its original value. In spite of the recognition of this problem, a procedure has yet

to be developed that tells us in a systematic manner how many parameters are identifiable, and

if so which ones. This is not a trivial problem because unlike in the classical setup, the reduced

form parameters of DSGE models are generally not identifiable. The contribution of this paper is

to propose rank and order conditions for identification of parameters of interest. These can either

be the deep parameters of the optimizing model, or the structural parameters of the log-linearized

model. Hereafter, we will let θ denote the parameter vector of interest. Given that these models

are dynamic, identification is obtained from the autocovariance structure (spectral density) of the

observables. The results are new and not yet seen in the literature.

Since the solutions to log-linearized DSGE models are merely systems of linear simultaneous

equations with VARMA representations, it is useful to explain why new identification results are

needed. First and foremost, economic theory often considers fewer shocks than observed endogenous

variables, a condition known as ‘stochastic singularity’. This singular structure can arise because

of adding up constraints as in demand systems, or common shocks as in factor models, of which

DSGE is an example. As a consequence, VARMA representations of DSGE models involve matrices

that are generally not square. This violates the usual invertibility assumption used in the classical

work by Hannan (1971), Zellner and Palm (1974), Hatanaka (1975), and Wallis (1977), for example.

Even in models that are nonsingular, identification of VARMA models requires the so-called left-co

prime condition to rule out common factors in the autoregressive and moving average polynomials.

The identified model is the canonical form obtained from solving the Kronecker indices of the

system; these methods, discussed in Solo (1986), Reinsel (2003), and Lütkepohl (2005), are cumbersome and rarely used in economic analysis. Furthermore,

we are not interested in identifying the VARMA parameters per se, but the ones that determine

the VARMA parameters. More importantly, canonical forms are a useful (and perhaps the only)

starting point if we have no other information about the model. As pointed out in Glover and

Willems (1974) in control analysis, when theory imposes a lot of restrictions on a linear dynamical


system, we can identify the parameters of interest without appealing to the canonical form. In

particular, in any DSGE model we typically know the number of endogenous state variables and

the number of exogenous shocks. Thus, unlike a typical VARMA model, we know the order of the

model (also known as the McMillan degree), which is useful for identification analysis.

Classical work on the identification of linear simultaneous equations pioneered by Koopmans

(1950) exploits the structure of the model, but several features of DSGE models make those results

inapplicable. First, DSGE models are dynamic. Whereas in a static case, the parameters of

two observationally equivalent structures must be related by a constant matrix, this definition of

equivalence is inadequate because two models with different dynamic structures can have identical

impulse response functions. Therefore, exclusion restrictions that permit identification of static

models may not ensure identification of dynamic models. This problem motivated Rubio-Ramírez,

Waggoner, and Zha (2007) to develop results for identification of linear structural VARs. Second, in

the traditional work on identification, latent shocks are assumed to be independent and identically

distributed (iid) across time. The iid property in particular implies that the distribution of the

observables remains constant across time. Identification is then discussed under the assumption

that an infinite time series of the data is available, which allows the econometrician to eventually

observe the distribution of the DSGE model variables. The iid assumption on the shocks is overly

restrictive for time series data. It is common to use the weaker white noise assumption so that

identification can be based solely on the autocovariances of the observables and not their entire

distribution, which may well vary across time. Finally, DSGE models have no exogenous variables

other than the shocks, unlike in Deistler (1976). Latent shocks cannot be used for identification.

There is another important feature of DSGE models that invalidates the classical rank conditions

summarized by Fisher (1966) and Rothenberg (1971). In many economic models covered by the

classical analysis, the reduced form parameter of the model is known to be identifiable. In such cases,

the parameters of the model θ are (locally) identifiable from the distribution of the observables if

and only if the mapping from θ to the reduced form parameters is (locally) one to one. The

identification problem then becomes simply a question of uniqueness of solutions to systems of

equations. This leads to the well known rank conditions for (local) identification (see, e.g., Theorem

6 in Rothenberg, 1971). What makes Rothenberg’s (1971) analysis inapplicable here is that the

reduced form parameters are generally not identifiable in DSGE models as we will see.

For all the reasons above, a new identification theory needs to be developed that allows for

stochastically singular models and the possibility that the reduced form parameters are not iden-

tifiable. To circumvent the problem of stochastic singularity, it is not uncommon to complete the

system by specifying enough shocks so as to match the number of endogenous variables. However,


as argued in Chari, Kehoe, and McGrattan (2009), many shocks in moderate to large new Key-

nesian models are not structural. We therefore consider a general setup that does not restrict the

dimension of the shocks.

Our identification results have implications for frequentist estimation as non-identifiable param-

eters are not consistently estimable; the results are also useful in a Bayesian context. As Poirier

(1998) pointed out, Bayesian analysis is not immune to identification problems. It is well known

that local non identification leads to pathological behavior of the posteriors when flat priors are

used. This has serious consequences for the posterior computation in practice. Indeed, the local non

identification problems lead to reducible Markov chains in which the region of locally unidentified

parameter values becomes an absorbing state. This violates the convergence conditions typically

imposed for Gibbs samplers. A solution to the problem is to use informative priors (see, e.g.,

Chao and Phillips, 1998; Kleibergen and van Dijk, 1998). Knowing which parameters are locally

unidentifiable would help researchers pinpoint where the specification of such priors is the most

important.

Parameters that are not identifiable cannot be consistently estimated. In spite of the impor-

tance of this problem, the literature on identification of DSGE models is small. Beyer and Farmer

(2007) shows that a determinate solution can be observationally equivalent to one that is not.

While indeterminate solutions cannot be ruled out, most solutions are determinate, and identifica-

tion of determinate solutions is not even completely understood. Canova and Sala (2009) suggests

examining the properties of the impulse responses evaluated at different parameter values, while

Consolo, Favero, and Paccagnini (2009) compares the properties of the VAR implied by the DSGE

model with those of a factor augmented VAR, also implied by the DSGE model. Both approaches,

while useful, have not been shown to be necessary or sufficient for identification. Iskrev (2009)

suggests using all the autocovariances for identification, but does not exploit the parameter re-

strictions that equivalent systems must satisfy. Moreover, Iskrev’s (2009) analysis assumes that

the number of shocks equals the number of variables in the system, and that the reduced form

parameters of the DSGE model are identifiable from the autocovariances. Our point of departure

is that this is not true in general.

The plan of the paper is as follows. In Section 2 we present our setup, state the model assump-

tions, and study the properties of the observables. In Section 3, we derive operational restrictions

that two observationally equivalent reduced form parameters must satisfy. We also make precise the

sense in which some components of the DSGE model reduced form parameter are not identifiable

in general. Section 4 presents our rank and order conditions for identification. The key feature of

the order condition is that it only depends on the dimension of the variables in the DSGE model;


hence it is easy to compute. In Section 5 we then use three examples to illustrate our results.

Finally, Section 6 discusses the implications and extensions of this work, and concludes the paper.

2 Setup

In this section we introduce a generic DSGE model and discuss its properties. In particular, we focus

on the features of the model that make the classical identification results inapplicable. A careful

analysis of the observables at the end of the section prepares the ground for the new identification

results that follow.

2.1 Model

Consider a generic nonlinear discrete time DSGE model in which the parameter of interest θ belongs

to a parameter set Θ ⊆ R^{n_θ} (by assumption, parameters known to be only identifiable from the steady state are excluded from θ). Assume that a solution to the log-linearized version of the model

exists and is unique (see, e.g., Sims, 2002). Equivalent representations of the solution can be obtained

via algorithms of Anderson and Moore (1985), Uhlig (1999), Sims (2002), Klein (2000), and King

and Watson (2002), for example. In this paper, we focus on the following representation. Let Kt

be a vector of observed endogenous (state) variables whose values are known at time t, Wt be a

vector of observed endogenous (jump) variables, Zt be a vector of latent exogenous variables, and

εt be a vector of unobserved shocks that satisfy Etεt+1 = 0. These vectors (as well as their leads

and lags) are of dimensions nK , nW , nZ and nZ , respectively. The variables evolve according to

the recursive equilibrium law of motion given by:

K_{t+1} = P(θ)K_t + Q(θ)Z_t

W_t = R(θ)K_t + S(θ)Z_t                      (1)

Z_{t+1} = Ψ(θ)Z_t + ε_{t+1}.

The matrices P (θ), Q(θ), R(θ), S(θ) and Ψ(θ) are of dimensions (nK×nK), (nK×nZ), (nW ×nK),

(nW ×nZ), and (nZ ×nZ), respectively. We will refer to the equations in (1) as the (reduced form)

solution equations (for details on how (1) can be obtained using alternative solution methods, see, e.g., Uhlig, 1999). The solution (1) consists of two sets of equations: first are the equations for

the endogenous state and jump variables Y_t ≡ (K_t′, W_t′)′. The second set of equations specifies the

law of motion of the exogenous shock process.
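To fix ideas, the recursive law of motion in (1) can be simulated in a few lines. The sketch below (in Python with NumPy) uses illustrative placeholder matrices with n_K = n_W = n_Z = 1; the numbers are not implied by any particular DSGE model.

import numpy as np

# Illustrative reduced form matrices for (1); placeholders, not model-implied values.
P, Q = np.array([[0.9]]), np.array([[1.0]])      # K_{t+1} = P K_t + Q Z_t
R, S = np.array([[0.5]]), np.array([[0.3]])      # W_t     = R K_t + S Z_t
Psi   = np.array([[0.7]])                        # Z_{t+1} = Psi Z_t + eps_{t+1}
Sigma = np.array([[0.04]])                       # E(eps_t eps_t') = Sigma

rng = np.random.default_rng(0)
T = 200
K, Z, W = np.zeros((T + 1, 1)), np.zeros((T + 1, 1)), np.zeros((T, 1))
for t in range(T):
    W[t]     = R @ K[t] + S @ Z[t]
    K[t + 1] = P @ K[t] + Q @ Z[t]
    Z[t + 1] = Psi @ Z[t] + rng.multivariate_normal(np.zeros(1), Sigma)
Y = np.hstack([K[:T], W])                        # observables Y_t = (K_t', W_t')'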

As written, the equations in (1) do not include identities even though variables defined by

identities are often of interest. Consider a new set of endogenous jump variables W̄_t defined
through n_W̄ identities:

W̄_t = ( A(θ)  B(θ) ) (K_{t+1}′, W_t′)′.

Then letting W̃_t ≡ (W_t′, W̄_t′)′, the new solution equations can be written as:

K_{t+1} = P(θ)K_t + Q(θ)Z_t

W̃_t = R̃(θ)K_t + S̃(θ)Z_t

with a new n_W̃ × n_K matrix R̃ and a new n_W̃ × n_Z matrix S̃ (n_W̃ ≡ n_W + n_W̄) defined as:

R̃(θ) ≡ (R(θ)′, (A(θ)P(θ) + B(θ)R(θ))′)′   and   S̃(θ) ≡ (S(θ)′, (A(θ)Q(θ) + B(θ)S(θ))′)′.

The above system of solution equations is thus quite general.

We now state the main model assumptions:

Assumption 1 For every θ ∈ Θ and every (t, s) ≥ 1, E(ε_t) = 0 and E(ε_t ε_s′) = δ_{t−s} Σ(θ) with Σ(θ)
positive definite.

Assumption 2 For every θ ∈ Θ and any z ∈ C, det(Id − Ψ(θ)z) det(Id − P(θ)z) = 0 implies
|z| > 1. Moreover, for every (θ_0, θ_1) ∈ Θ², P(θ_0) and Ψ(θ_1) have no eigenvalues in common.

Assumption 3 n_K ≤ n_Z ≤ n_K + n_W.

Assumption 4 For every θ ∈ Θ, we have: (i) Q(θ) full rank; (ii) (Q(θ)′, S(θ)′)′ full rank.

Assumption 5 Let Λ(θ) ≡ (P(θ), Q(θ), R(θ), S(θ), Ψ(θ), Σ(θ)) denote the reduced form parameter.
Then the mapping Λ : Θ → R^{n_Λ}, with Θ an open subset of R^{n_θ} and n_Λ ≡ (n_K + n_W)(n_Z + n_K) + 2n_Z²,
which to each θ ∈ Θ assigns Λ(θ), is continuously differentiable on Θ.

Assumption 1 states that the shocks εt follow a vector generalization of white noise, which

we denote by εt ∼ WN(0,Σ(θ)). This property of the errors is weaker than that of being iid.

DSGE models with rational expectations assume ε_t to be a martingale difference sequence,

E_t ε_{t+1} = 0, whose mean and across-time covariance properties are the same as those of a white

noise process. The key additional feature of Assumption 1 is to require the contemporaneous shocks

covariance matrix Σ(θ) to be constant. Note that Σ(θ) is not required to be of the form σ2Id (with

σ > 0). In other words, we allow different components of the shock vector εt to be correlated as

in Curdia and Reis (2009). Since Σ(θ) is nonsingular, the shocks εt are required to be linearly

independent.


The first requirement of Assumption 2 holds when all the eigenvalues of the matrices Ψ(θ) and

P(θ) lie inside the unit circle, i.e. when Ψ(θ) and P(θ) are stable. The second restriction rules out

situations in which the internal propagating mechanism of one structure (as given by P(θ_0)) coincides

with the dynamics of the exogenous process in another structure (as given by Ψ(θ_1)). Assumption

3 assumes at least as many shocks as endogenous state variables, but fewer than the total number

of endogenous variables. Most DSGE models have this property.

When n_K ≤ n_Z ≤ n_K + n_W, Assumptions 4(i) and 4(ii) ensure that the n_K × n_Z matrix Q(θ)

and the (nK + nW ) × nZ matrix (Q(θ)′, S(θ)′)′ are of rank nK and nZ , respectively. When those

rank conditions fail, then the dynamics of Zt are no longer fully transmitted to Yt; hence, not

all information about the shocks can be recovered from the observables. Note that Assumption

4(ii) allows for the identities to be included in the system (1). If the original system in (1) satisfies

Assumptions 3 and 4(ii), then the new system is such that n_K ≤ n_Z ≤ n_K + n_W̃, and (Q(θ)′, S̃(θ)′)′

also has rank n_Z. Indeterminate solutions can be considered by suitably expanding the state vector.

However, the rank conditions rule out cases when the solution is unique but an expanded state vector

is assumed.
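Given a candidate reduced form Λ(θ), Assumptions 2 through 4 are simple matrix conditions and can be verified numerically. A minimal sketch, reusing the illustrative matrices introduced above:

# Assumption 2: eigenvalues of P and Psi strictly inside the unit circle,
# and no eigenvalue shared between P and Psi.
eig_P, eig_Psi = np.linalg.eigvals(P), np.linalg.eigvals(Psi)
stable    = max(abs(eig_P)) < 1 and max(abs(eig_Psi)) < 1
no_common = not np.any(np.isclose(eig_P[:, None], eig_Psi[None, :]))

# Assumption 3: n_K <= n_Z <= n_K + n_W;  Assumption 4: Q has rank n_K, (Q', S')' has rank n_Z.
n_K, n_Z = Q.shape
n_W      = S.shape[0]
order_ok = n_K <= n_Z <= n_K + n_W
rank_4i  = np.linalg.matrix_rank(Q) == n_K
rank_4ii = np.linalg.matrix_rank(np.vstack([Q, S])) == n_Z
print(stable, no_common, order_ok, rank_4i, rank_4ii)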

Finally, Assumption 5 puts smoothness requirements on the mapping from the parameter vector

of interest θ to system matrices P (θ), Q(θ), R(θ), S(θ),Ψ(θ) in (1), and the covariance matrix Σ(θ)

in Assumption 1. In what follows, we collect all the components of those matrices in the reduced

form parameter

Λ(θ) ≡ (P (θ), Q(θ), R(θ), S(θ),Ψ(θ),Σ(θ))

of dimension n_Λ ≡ (n_K + n_W)(n_Z + n_K) + 2n_Z². Given the reduced form solution (1), the objective

of the exercise is to estimate θ (or Λ) and eventually to conduct inference about (functions of) θ

from the DSGE model observables. Before that, we need to discuss the properties of Yt.

2.2 Observables

Under Assumptions 1 and 2 the process for Y_t, whose dimension we denote by n_Y ≡ n_K + n_W,

has the following VMA(∞) representation:

Y_t = (K_t′, W_t′)′ = Σ_{j=0}^{∞} h(j; θ) ε_{t−j}.                      (2)

The (nK + nW ) × nZ matrices h(j; θ) are the Markov parameters of the sequence Yt, obtained

from the transfer function (also called impulse response function):


H(z; θ) ≡ (H_K(z; θ)′, H_W(z; θ)′)′ = Σ_{j=0}^{∞} h(j; θ) z^j, with

H_K(z; θ) = z[Id − P(θ)z]^{−1} Q(θ) [Id − Ψ(θ)z]^{−1},

H_W(z; θ) = (R(θ) z[Id − P(θ)z]^{−1} Q(θ) + S(θ)) [Id − Ψ(θ)z]^{−1}.                      (3)

Under our assumptions, the sequence of endogenous variables Yt has two important properties

which we now present in the form of two Lemmas.

Lemma 1 (Covariance Stationarity) Let Assumptions 1 and 2 hold. Then for every θ ∈ Θ, Y_t
is weakly stationary with E(Y_t) = 0 and E(Y_t Y_s′) ≡ Γ(s − t; θ) = Σ_{j=0}^{∞} h(j; θ) Σ(θ) h(j + s − t; θ)′,

for all (t, s) ≥ 1.

Lemma 1 is an immediate consequence of (2) and (3). The covariance stationarity result is

crucial for subsequent developments as we can exploit the time invariant second moment properties

of the endogenous variables Yt to identify the parameter of interest θ. Indeed, when the process

is weakly stationary, the autocovariances completely summarize the properties of Yt, and the

econometrician eventually observes its entire autocovariance generating function, which for any

z ∈ C is defined as:

Ω(z; θ) ≡ Σ_{j=−∞}^{+∞} Γ(j; θ) z^{−j}.

Note that for every θ ∈ Θ, Ω(z; θ) is a square nY ×nY matrix that satisfies Ω(z−1; θ)′ = Ω(z; θ), for

every z ∈ C. The real function Ω(exp(iω); θ), defined for any ω ∈ R, is called the spectral density

(or spectrum) of the observables Yt. Furthermore,

Ω(exp(iω); θ) = Γ(0; θ) + 2 Σ_{j=1}^{∞} Γ(j; θ) cos(jω),

which is always positive semi definite. The autocovariances Γ(j; θ) are then obtained from the

spectral density Ω(exp(iω); θ) via the following inversion formula:

Γ(j; θ) = (1/(2π)) ∫_{−π}^{π} Ω(exp(iω); θ) exp(−iωj) dω.

The autocovariance matrices Γ(j; θ) satisfy Γ(j; θ) = Γ(−j; θ)′. With a slight abuse of terminology,

we shall hereafter refer to Ω(z; θ) as the spectral density as well.
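The autocovariances of Lemma 1 are easy to compute from the Markov parameters. One convenient route, sketched below, stacks X_t = (K_t′, Z_t′)′ so that h(j; θ) = C A^j B reproduces the transfer function (3) term by term; the truncation point J is an assumption of the sketch, and the matrices are again the illustrative placeholders from above.

# Augmented state X_t = (K_t', Z_t')': X_{t+1} = A X_t + B eps_{t+1}, Y_t = C X_t,
# so the Markov parameters in (2) are h(j) = C A^j B.
A = np.block([[P, Q], [np.zeros((n_Z, n_K)), Psi]])
B = np.vstack([np.zeros((n_K, n_Z)), np.eye(n_Z)])
C = np.block([[np.eye(n_K), np.zeros((n_K, n_Z))], [R, S]])

J = 500                                           # truncation (assumed long enough)
h = [C @ np.linalg.matrix_power(A, j) @ B for j in range(J)]

def Gamma(tau):
    """Autocovariance Gamma(tau) = sum_j h(j) Sigma h(j + tau)'  (Lemma 1)."""
    return sum(h[j] @ Sigma @ h[j + tau].T for j in range(J - tau))

print(Gamma(0))                                   # contemporaneous covariance of Y_t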

It is important to emphasize that the white noise Assumption 1 does not prevent the observ-

ables from having higher moments that change over time, nor does it prevent the unobservables ε_t from being

conditionally heteroskedastic. Imposing further restrictions on ε_t, such as being homoskedastic or

iid would imply further properties on the observables that could be used for identification.


From the VMA(∞) representation in (2) it follows that the process Y_t is causal with
Σ_{j=0}^{∞} |h(j; θ)| < ∞. Under our assumptions, Y_t is also invertible, meaning that ε_t is spanned by the past and

present values of Y_t, or that ε_t is ‘fundamental’ for Y_t.

Lemma 2 (Invertibility) Let Assumptions 1 through 4 hold. Then for every θ ∈ Θ, Y_t is invert-

ible and we can write ε_t = Σ_{j=0}^{∞} g(j; θ) Y_{t−j} for all t ≥ 1, with Σ_{j=0}^{∞} |g(j; θ)| < ∞.

Proof of Lemma 2 is in Appendix A. The invertibility condition is usually stated in terms of the

eigenvalues of H(z; θ), which is sufficient. The necessary condition that the rank of H(z; θ) is n_Z
is, however, enough for us to obtain a VAR(∞) representation of the model (see, e.g., Hannan and

Deistler, 1988). Economic theory usually cannot rule out non-fundamentalness, a situation that

arises when agents have more information than the econometrician. Giannone and Reichlin (2006)

argue that superior information is much less likely to happen when there are more variables than

shocks in the system. Fernandez-Villaverde, Rubio-Ramirez, Sargent, and Watson (2007) suggest

a way to check fundamentalness.

We need invertibility of Yt to ensure that the spectral density matrix Ω(z; θ) has constant

rank nZ almost everywhere in C. This minimum phase (or ‘miniphase’) property will allow us to

go from the spectral density matrix to the transfer function that generated it. (A linear, time invariant system is said to have minimum phase if the system and its inverse are invertible and stable; under Assumptions 1 through 4, the reduced form solution of our DSGE model has this property.) Under regularity

conditions (see, e.g., Deistler, 2006):

Ω(z; θ) = H(z; θ) Σ(θ) H(z^{−1}; θ)′                      (4)

with the transfer function for H(z; θ) as defined in (3).
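Equations (3) and (4) translate directly into code. The sketch below evaluates the transfer function and the spectrum at a point on the unit circle, again using the illustrative matrices introduced earlier.

def transfer(P, Q, R, S, Psi, z):
    """Transfer function H(z; theta) in (3), stacked as (H_K', H_W')'."""
    IK, IZ = np.eye(P.shape[0]), np.eye(Psi.shape[0])
    MK = z * np.linalg.inv(IK - P * z) @ Q
    return np.vstack([MK, R @ MK + S]) @ np.linalg.inv(IZ - Psi * z)

def spectrum(P, Q, R, S, Psi, Sigma, z):
    """Spectral density Omega(z; theta) = H(z) Sigma H(1/z)' in (4)."""
    return transfer(P, Q, R, S, Psi, z) @ Sigma @ transfer(P, Q, R, S, Psi, 1.0 / z).T

print(spectrum(P, Q, R, S, Psi, Sigma, np.exp(0.5j)))   # n_Y x n_Y, rank at most n_Z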

3 Identifiability and Observational Equivalence

In this section, we ask the following question: when do two distinct values of the reduced form

parameter of a DSGE model yield the same spectrum? We start our analysis with a definition of

observational equivalence that applies to dynamic models. We then derive necessary and sufficient

conditions under which two sets of reduced form parameters Λ(θ0) and Λ(θ1) of the DSGE model

(1) are observationally equivalent.

3.1 Definitions

In static analysis, observational equivalence is typically defined with reference to the distribution (or

density) of the observed variables Y_t. In a dynamic setting, this distribution may well vary across


time. However, in models that are linear, the process Yt is known to be covariance stationary. In

such cases, observational equivalence is defined with reference to the entire autocovariance structure

of the observables. In other words, we need to work with spectral densities of the observables.

It is clear from (3) and (4) that the spectral density Ω(z; θ) depends on the deep parameter

θ only through the reduced form parameter Λ(θ) and we have Ω(z; θ) = Ω(z; Λ(θ)). Each value

θ0 of θ implies the value Λ0 ≡ Λ(θ0) which completely characterizes the spectrum of Yt. Now

the determination of Ω(z; θ) from Λ(θ) is straightforward once a value of θ is given. The converse

problem, which is that of identification, is to start with a prescribed spectral density Ω(z; θ) and

ask which sextuple Λ(θ) is such that (3) and (4) hold.

The question of identification from the spectral density can be stated as follows: having observed

the spectral density function Ω(z; θ0), under what conditions is it possible to uncover the value θ0

that generated it? It is possible that two different values θ0 and θ1 of the parameter θ in (1) lead

to the reduced form parameters Λ(θ0) and Λ(θ1) with the same spectral density. We say that Λ(θ0)

and Λ(θ1) are observationally equivalent if Ω(z; θ1) = Ω(z; θ0) for all z ∈ C, or more formally, if for

every z ∈ C:

H(z; θ_0) Σ(θ_0) H(z^{−1}; θ_0)′ = H(z; θ_1) Σ(θ_1) H(z^{−1}; θ_1)′.                      (5)

The problem of identifying the DSGE model parameter θ from the spectral density of the observ-

ables can be defined as follows.

Definition 1 The DSGE model (1) is said to be locally identifiable from the spectral density of

Yt at a point θ0 ∈ Θ if there exists an open neighborhood of θ0 such that for every θ1 in this

neighborhood Λ(θ0) and Λ(θ1) are observationally equivalent only if θ1 = θ0.

Put in words, in a neighborhood of θ0 there are no two distinct reduced form parameters

that generated the same spectral density of the observables. As already pointed out, the cru-

cial assumption of Fisher (1966) and Rothenberg (1971)—that Λ(θ) itself is (globally or locally)

identifiable—does not hold in DSGE models. To see this, let U be a full rank nZ ×nZ matrix, and

consider the system:

K_{t+1} = P(θ)K_t + Q̃(θ)Z̃_t

W_t = R(θ)K_t + S̃(θ)Z̃_t                      (6)

with Z̃_t ≡ U^{−1}Z_t, and Q̃(θ) ≡ Q(θ)U, S̃(θ) ≡ S(θ)U, Ψ̃(θ) ≡ U^{−1}Ψ(θ)U, Σ̃(θ) ≡ U^{−1}Σ(θ)U^{−1}′.

We then have that Λ̃(θ) ≡ (P(θ), Q̃(θ), R(θ), S̃(θ), Ψ̃(θ), Σ̃(θ)) is observationally equivalent to Λ(θ),

since Ω(z; Λ̃(θ)) = Ω(z; Λ(θ)) for every z ∈ C. Yet, Λ̃(θ) is different from Λ(θ). The reduced form

parameter Λ(θ) is thus not identifiable from the spectral density of the observables. This means


that Rothenberg’s (1971) rank condition alone will in general not be sufficient to guarantee the

identifiability of θ.
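This source of non-identifiability is easy to confirm numerically: for any full rank U, the transformed reduced form in (6) produces exactly the same spectrum. A minimal check using the spectrum function sketched earlier:

U    = np.array([[2.0]])                          # any full rank n_Z x n_Z matrix
Uinv = np.linalg.inv(U)
z    = np.exp(0.7j)                               # arbitrary point on the unit circle
Om0  = spectrum(P, Q, R, S, Psi, Sigma, z)
Om1  = spectrum(P, Q @ U, R, S @ U, Uinv @ Psi @ U, Uinv @ Sigma @ Uinv.T, z)
print(np.allclose(Om0, Om1))                      # True: Lambda and Lambda-tilde are equivalent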

It is known from work by Glover (1973) and Glover and Willems (1974) that how to identify the

parameter θ in (1) ultimately depends on whether or not the inputs εt are observed. If they are

observed, then the parameters of the model can be identified from the transfer function in (3) if

the mapping from the parameter space to the Markov parameters is locally one-to-one. When the

input noises are unobserved, then in general, identification can only be based on the autocovariance

generating function since the properties of stationary processes are completely summarized by their

second moments. (There are, however, exceptions when identification can be achieved through the Markov parameters even if the input is not observed; see Grewal and Glover (1976) and Glover and Willems (1974).)

To derive conditions for identification of θ from the spectrum, we proceed along the lines of work

by Glover and Willems (1974) in control theory, but take into account that our state vector is par-

tially observed. First, we look for all reduced form parameter values Λ(θ1) that are observationally

equivalent to Λ(θ0). For instance, if it is possible to find a value θ1 such that the transformation

in (6) can be written as Λ̃(θ_0) = Λ(θ_1), then Λ(θ_0) and Λ(θ_1) are observationally equivalent. The

transformation in (6) is one example from which observationally equivalent parameters can be ob-

tained; however, we need to find all such transformations. Second, once the set of all reduced form

parameters Λ(θ1) that are observationally equivalent to Λ(θ0) is obtained, we impose conditions

that ensure that in this set θ1 is always equal to θ0.

Now from (5), many observationally equivalent transformations Λ(θ1) with Ω(z; θ1) = Ω(z; θ0)

can arise because: (i) each H(z; θ) can potentially be obtained from a multitude of quintuples

(P(θ), Q(θ), R(θ), S(θ), Ψ(θ)) in (3), and (ii) there can be many transfer functions H(z; θ) and shock

covariance matrices Σ(θ) which jointly satisfy (4). In the next two subsections, we derive the

necessary and sufficient conditions for both sources of observational equivalence.

3.2 Equivalence of Transfer Functions

Assume for the time being that εt in (1) is observable. Then, from (2), the econometrician can

observe the entire transfer function H(z; θ). The question which we now ask is as follows: given

H(z; θ) is it possible to uniquely recover the quintuple (P (θ), Q(θ), R(θ), S(θ),Ψ(θ)) that generated

it?

It is clear from (3) that to any value θ0 of θ we can associate a unique transfer function

H(z; θ0). In addition, to each θ0 corresponds a unique quintuple (P (θ0), Q(θ0), R(θ0), S(θ0),Ψ(θ0))

which we call a realization of the transfer function H(z; θ0). We now characterize all realiza-

tions (P (θ1), Q(θ1), R(θ1), S(θ1),Ψ(θ1)) that are equivalent to (P (θ0), Q(θ0), R(θ0), S(θ0),Ψ(θ0)) in5There are, however, exceptions when identification can be achieved through the Markov parameters even if the

input is not observed. See Grewal and Glover (1976) and Glover and Willems (1974).


the sense that they yield the same transfer function H(z; θ_1) = H(z; θ_0) everywhere in C.

Lemma 3 (Similarity Transform) Let Assumptions 1 through 4 hold. Consider the realizations

(P (θ0), Q(θ0), R(θ0), S(θ0),Ψ(θ0)) and (P (θ1), Q(θ1), R(θ1), S(θ1),Ψ(θ1)) of the transfer functions

H(z; θ0) and H(z; θ1), respectively, with (θ0, θ1) ∈ Θ2. Then the two realizations are equivalent, i.e.

H(z; θ0) = H(z; θ1) for every z ∈ C, if and only if P (θ1) = P (θ0), R(θ1) = R(θ0), and there exists

a full rank nZ × nZ matrix T such that:

Q(θ_1) = Q(θ_0)T^{−1},   S(θ_1) = S(θ_0)T^{−1}   and   Ψ(θ_1) = TΨ(θ_0)T^{−1}.

Proof of Lemma 3 is in Appendix B. The proof goes roughly as follows. We consider the

state space representation of our model and show that it is minimal, so that the dimension of

the latent state vector is no larger than that of any other system with the same output spectral density.

To show minimality, we need to establish that the system is controllable and observable. (Controllability means that for any initial state, it is always possible to design an input sequence that puts the system in a desired final state; observability means that we can always reconstruct the initial state from observing the evolution of the output, provided that we also know the evolution of the input. Formally, these conditions hold when the controllability and observability matrices are full rank.) Now a well

known result in control theory (see, for example, Theorem 8.13 of Gourieroux and Monfort, 1997) is that all equivalent minimal realizations must be linearly related

through a similarity transformation, which is simply a linear change in the coordinates of the

latent state variables. Our state variables are partially observed, but the similarity transforms

can still be applied to the system. Solving the similarity transform equations taking into account

that our state vector is partially observed, and expressing them in terms of the system matrices

P (θ), Q(θ), R(θ), S(θ) and Ψ(θ) gives the stated results.
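Controllability and observability can be checked mechanically on the stacked realization (A, B, C) built earlier; full rank of both matrices below is what minimality requires. This is only one natural realization of (1); the formal argument in Appendix B may proceed differently.

# Minimality check for the realization X_{t+1} = A X_t + B eps_{t+1}, Y_t = C X_t.
n    = A.shape[0]                                 # n = n_K + n_Z (the McMillan degree)
ctrb = np.hstack([np.linalg.matrix_power(A, j) @ B for j in range(n)])
obsv = np.vstack([C @ np.linalg.matrix_power(A, j) for j in range(n)])
print(np.linalg.matrix_rank(ctrb) == n, np.linalg.matrix_rank(obsv) == n)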

Lemma 3 says the following: assume that it is possible for the econometrician to observe the

transfer function H(z; θ). Then from the transfer function, the econometrician can uniquely recover

the system matrices P(θ) and R(θ) that generated it. However, the matrices Q(θ), S(θ) and Ψ(θ)

can only be recovered up to an unknown full rank matrix T. Put differently, Lemma 3 shows that

P (θ) and R(θ) are exactly identified from the transfer function. However, further restrictions are

needed in order to also identify Q(θ), S(θ), and Ψ(θ).

3.3 Equivalence of Spectral Densities

Now the econometrician only observes the autocovariance generating function of the observables

Yt, or equivalently, its spectral density Ω(z; θ). Each spectral density is obtained from the

transfer function H(z; θ) and the errors’ covariance matrix Σ(θ) according to the equation in (4).

The question which we now ask is as follows: given a spectral density Ω(z; θ), is it possible to

uniquely recover the couple (H(z; θ), Σ(θ)) that generated it?


It is clear that there can be an infinity of transfer functions H(z; θ) and shock covariance

matrices Σ(θ) which jointly satisfy (4). However, invertible systems have the minimum phase

property, and in minimum phase systems, all observationally equivalent couples (H(z; θ0),Σ(θ0))

and (H(z; θ_1), Σ(θ_1)) are related through a particular transformation. A necessary and sufficient

condition for two couples (H(z; θ0),Σ(θ0)) and (H(z; θ1),Σ(θ1)) to be observationally equivalent is

as follows:

Lemma 4 (Minimum Phase) Let Assumptions 1 through 4 hold. Consider two couples (H(z; θ0),

Σ(θ0)) and (H(z; θ1),Σ(θ1)) with spectral densities Ω(z; θ0) and Ω(z; θ1), respectively, where (θ0, θ1)

∈ Θ2. Then the two couples are equivalent, i.e. Ω(z; θ0) = Ω(z; θ1) for every z ∈ C, if and only if

there exists a full rank nZ × nZ matrix U such that:

for every z ∈ C:   H(z; θ_1) = H(z; θ_0)U,   and   UΣ(θ_1)U′ = Σ(θ_0).

A proof is given in Appendix B. The result of Lemma 4 is an equivalence statement that contains

two implications. The sufficiency part is obvious: indeed, if the two couples (H(z; θ0),Σ(θ0)) and

(H(z; θ1),Σ(θ1)) are related to each other by the above transformation involving a full rank matrix

U , then

H(z; θ_0)Σ(θ_0)H(z^{−1}; θ_0)′ = H(z; θ_0)U Σ(θ_1)U′ H(z^{−1}; θ_0)′ = H(z; θ_1)Σ(θ_1)H(z^{−1}; θ_1)′

so the two spectral density matrices are the same.

The crux of Lemma 4 is the necessity part, which consists in showing that the two spectral

densities are the same only if (H(z; θ0),Σ(θ0)) and (H(z; θ1),Σ(θ1)) are related to each other by

the above transformation involving U . The problem of finding the transfer function from a given

spectral density is problem known in control theory as ‘spectral factorization’ (see, e.g., Anderson,

1969). Its solution requires that the rank of the spectral density matrix Ω(z; θ) of Yt be constant

and known almost everywhere in C. When Yt is causal and invertible, the constant rank property

holds provided the shocks εt have a full rank covariance matrix Σ(θ). In addition, the rank of Ω(z; θ)

is then shown to be equal to nZ , the size of the shocks. It is worth pointing out that the result

of Lemma 4 does not require the size of the shocks (nZ) to be the same as that of the observables

(nK + nW ). In other words, Lemma 4 applies in systems that are possibly stochastically singular.

A sketch of the proof is as follows. We first show that for any θ0 ∈ Θ, the rank of H(z; θ0)

is n_Z for all |z| > 1. For this, we call L(θ) the Cholesky triangle of Σ(θ) (i.e. Σ(θ) = L(θ)L(θ)′)

and we consider an n_Z × (n_K + n_W) matrix W_d(z) ≡ L(θ_0)′H(z^{−1}; θ_0)′ that is a ‘spectral factor’,

meaning that Ω(z; θ_0) = W_d(z^{−1})′W_d(z). We next show that W_d(z) has rank n_Z. Results in

Anderson (1969) and Newcomb and Anderson (1968) can then be used to show the following: if


W̃_d(z) = L(θ_1)′H(z^{−1}; θ_1)′ is another minimum spectral factor, then we must have W̃_d(z) = V W_d(z)

where V is an orthogonal n_Z × n_Z matrix (i.e. V′V = Id_{n_Z}). Letting U ≡ L(θ_0)V′L(θ_1)^{−1} then

yields the result. We refer to this as a miniphase lemma because the spectral factors of minimum

phase systems have a constant rank of n_Z.

Combining Lemma 4 with the similarity results obtained in Lemma 3 leads to our first Propo-

sition.

Proposition 1 Let Assumptions 1 through 4 hold. Consider two reduced form parameters Λ(θ0)

and Λ(θ1) with (θ0, θ1) ∈ Θ2. Then Λ(θ0) and Λ(θ1) are observationally equivalent if and only if:

P (θ1) = P (θ0), R(θ1) = R(θ0)

and there exists a full rank nZ × nZ matrix U such that:

Q(θ_1) = Q(θ_0)U,   S(θ_1) = S(θ_0)U,   Ψ(θ_1) = U^{−1}Ψ(θ_0)U,   Σ(θ_1) = U^{−1}Σ(θ_0)U^{−1}′.

A proof is given in Appendix B. Proposition 1 establishes that the two transformation

matrices T and U defined in Lemma 3 and Lemma 4, respectively, are related to each other through

T = U−1.

The conditions in Proposition 1 are necessary and sufficient for Λ(θ0) and Λ(θ1) to have the

same spectrum. Like for the equivalence results derived in Lemma 3 and Lemma 4, the sufficiency

part of Proposition 1 is obvious. On the other hand, the necessity part is nontrivial to derive. We

now give an interpretation of the necessity statement.

We know from Lemma 4 that Λ(θ_1) and Λ(θ_0) are observationally equivalent if and only if the

Markov parameters obtained under the two reduced form parameters satisfy:

Q(θ_1) = Q(θ_0)U                      (7)

S(θ_1) = S(θ_0)U

Σ_{j=0}^{k} P(θ_1)^j Q(θ_1) Ψ(θ_1)^{k−j} = [ Σ_{j=0}^{k} P(θ_0)^j Q(θ_0) Ψ(θ_0)^{k−j} ] U

S(θ_1)Ψ(θ_1)^k + R(θ_1) Σ_{j=0}^{k−1} P(θ_1)^j Q(θ_1) Ψ(θ_1)^{k−1−j} = [ S(θ_0)Ψ(θ_0)^k + R(θ_0) Σ_{j=0}^{k−1} P(θ_0)^j Q(θ_0) Ψ(θ_0)^{k−1−j} ] U

for every k ≥ 0, where U is an n_Z × n_Z full rank matrix (these equations are obtained by applying Lemma 4 to the Markov parameters h(k; θ_1) and h(k; θ_0) in (2)). Proposition 1 provides the conditions under
which the above infinite set of equations holds. It provides an operational definition of observational
equivalence of two spectral densities in terms of the system matrices P(θ), Q(θ), R(θ), S(θ), Ψ(θ)


and Σ(θ). Its key implication is that some of the reduced form parameters are not identifiable. If the

parameter of interest is the reduced form parameter Λ(θ) itself, then Proposition 1 shows that only

the components of Λ(θ) corresponding to system matrices P (θ) and R(θ) in (1) are identifiable from

the spectrum. These features hold even when there are as many shocks as endogenous variables in

the system.

It is important to point out that the restriction to minimal state systems allows us to focus

on systems related by similarity transform, while the minimum phase property narrows down

equivalent models to those couples (H(z; θ), Σ(θ)) that satisfy Lemma 4. If one (or both) of these properties

fails to hold, observational equivalence can only be defined in terms of weaker restrictions on the

spectral density (see, e.g. Glover and Willems, 1974).

4 Rank and Order Conditions for Identification

Having pinned down how two reduced form parameters with the same spectrum must be related, we

now use those restrictions to derive conditions that are necessary and sufficient for θ to be locally

identified. We start by considering the case when identification is possible from the spectrum alone.

We then turn to the case when additional a priori restrictions on θ are needed.

4.1 Identifiability from the spectrum

Let λ : Θ × R^{n_Z²} → R^{n_Λ} be a mapping which to each deep parameter θ ∈ Θ and n_Z × n_Z
full rank matrix U assigns a transformed reduced form parameter vector λ(θ, U) of dimension
n_Λ ≡ (n_K + n_W)(n_Z + n_K) + 2n_Z², defined by:

λ(θ, U) ≡ ( vec(P(θ))′, vec(Q(θ)U)′, vec(R(θ))′, vec(S(θ)U)′, vec(U^{−1}Ψ(θ)U)′, vec(U^{−1}Σ(θ)U^{−1}′)′ )′.                      (8)

Then θ0 is identifiable from the spectrum of Yt if and only if the system of nΛ equations given

by λ(θ0, Id) = λ(θ, U) has a unique solution θ = θ0 and U = Id. We have the following result:

Lemma 5 Let Assumptions 1 through 5 hold. Then the DSGE model (1) is locally identifiable

from the spectral density of Yt at a point θ0 ∈ Θ if and only if the mapping λ is locally injective

at θ = θ_0 and U = Id. (If A ∈ R^{m×n} then vec(A) is defined to be the nm-vector formed by stacking the columns of A on top of one another, i.e. vec(A) ∈ R^{nm}. Note that vec(A′) = T_{m,n} vec(A), where T_{m,n} is an nm × nm permutation matrix that satisfies T_{n,m}T_{m,n} = Id_{mn}, T_{n,m} = T_{m,n}^{−1}, and T_{m,n} = T_{n,m}′. Moreover, if B ∈ R^{p×q} then A ⊗ B denotes the Kronecker product (or tensor product) of A and B, i.e. A ⊗ B ∈ R^{mp×nq}.)


A proof of Lemma 5 is in Appendix C. The mapping in (8) defines a system of n_Λ = (n_K +
n_W)(n_Z + n_K) + 2n_Z² equations in (n_θ + n_Z²) unknowns. We now look for necessary and sufficient

conditions under which a (locally) unique solution to this system is θ = θ0 and U = Id. It is a

well known result (Fisher, 1966) that if the rank of the matrix of partial derivatives of λ(θ, U)

with respect to the components of θ and U remains constant in a neighborhood of (θ0, Id), then a

necessary and sufficient condition for λ to be locally injective is that:

rank ( ∂λ(θ_0, Id)/∂θ    ∂λ(θ_0, Id)/∂vec U ) = n_θ + n_Z².                      (9)

See also Glover and Willems (1974) for a similar result. Necessary and sufficient conditions for

identifiability of θ from the spectrum are given in the following Proposition:

Proposition 2 Let Assumptions 1 through 5 hold. If the rank of ∆(θ) remains constant in a
neighborhood of θ_0, where

∆(θ) ≡ ( ∂λ(θ, Id)/∂θ    ∂λ(θ, Id)/∂vec U )

       =  [ ∂vec P(θ)/∂θ      0_{n_K² × n_Z²}                                ]
          [ ∂vec Q(θ)/∂θ      Id_{n_Z} ⊗ Q(θ)                                ]
          [ ∂vec R(θ)/∂θ      0_{n_K n_W × n_Z²}                             ]
          [ ∂vec S(θ)/∂θ      Id_{n_Z} ⊗ S(θ)                                ]
          [ ∂vec Ψ(θ)/∂θ      Id_{n_Z} ⊗ Ψ(θ) − Ψ(θ)′ ⊗ Id_{n_Z}             ]
          [ ∂vec Σ(θ)/∂θ      −(Id_{n_Z²} + T_{n_Z,n_Z})(Σ(θ) ⊗ Id_{n_Z})    ],

then a necessary and sufficient rank condition for the DSGE model (1) to be locally identified from
the spectrum of Y_t at a point θ_0 in Θ is: rank ∆(θ_0) = n_θ + n_Z². Moreover, a necessary order
condition is:

n_θ ≤ (n_K + n_W)(n_Z + n_K) + n_Z(n_Z + 1)/2.

The proof is in Appendix C. Proposition 2 contains two results. First is a rank condition which

is both necessary and sufficient for the DSGE model (1) to be identified. This condition is entirely

new and not yet seen in the literature on identification. It extends the rank conditions of Glover

(1973) and Glover and Willems (1974) that are necessary and sufficient for θ to be identifiable

from the transfer function. Glover’s (1973) and Glover and Willems’s (1974) results require the

shocks εt to be observed. When they are unobserved, then our rank condition applies. The second

result of Proposition 2 is an order restriction on nθ without which θ cannot be identified from the

spectrum. The intuition behind it is simple: the number of linearly independent equations defined by

the mapping λ(θ, U) in (8) needs to be at least as large as the number of unknowns n_θ + n_Z². Since

the last n_Z² equations correspond to the matrix U^{−1}Σ(θ)U^{−1}′ which is always symmetric, they only

contain n_Z(n_Z + 1)/2 independent components. Hence the order restriction follows.
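In practice ∆(θ) can be assembled numerically: the left block by finite differences of the reduced form mapping, the right block from the closed-form expressions in Proposition 2. The sketch below assumes a user-supplied function reduced_form(theta) returning (P, Q, R, S, Psi, Sigma) for a 1-D NumPy array theta (for instance a wrapper around one's model solver); it is a sketch under that assumption, not the paper's own code.

import numpy as np

def vec(A):
    return np.asarray(A, dtype=float).flatten(order="F")      # column-major stacking

def commutation(m, n):
    """T_{m,n} with vec(A') = T_{m,n} vec(A) for A in R^{m x n}."""
    T = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            T[i * n + j, j * m + i] = 1.0
    return T

def delta_matrix(theta, reduced_form, eps=1e-6):
    """Numerical Delta(theta) of Proposition 2; reduced_form maps theta to (P,Q,R,S,Psi,Sigma)."""
    P, Q, R, S, Psi, Sigma = reduced_form(theta)
    n_Z = Psi.shape[0]
    lam = lambda th: np.concatenate([vec(M) for M in reduced_form(th)])
    # left block: finite-difference Jacobian of lambda(theta, Id) with respect to theta
    J = np.column_stack([(lam(theta + eps * e) - lam(theta - eps * e)) / (2 * eps)
                         for e in np.eye(len(theta))])
    # right block: derivative with respect to vec(U) at U = Id, as given in Proposition 2
    IZ, IZ2 = np.eye(n_Z), np.eye(n_Z * n_Z)
    right = np.vstack([
        np.zeros((P.size, n_Z ** 2)),
        np.kron(IZ, Q),
        np.zeros((R.size, n_Z ** 2)),
        np.kron(IZ, S),
        np.kron(IZ, Psi) - np.kron(Psi.T, IZ),
        -(IZ2 + commutation(n_Z, n_Z)) @ np.kron(Sigma, IZ),
    ])
    return np.hstack([J, right])

# Local identification at theta0: np.linalg.matrix_rank(delta_matrix(theta0, reduced_form))
# should equal n_theta + n_Z**2.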


Whether or not the DSGE model is identifiable from the spectrum crucially depends on the

dynamics of Zt in (1). This is because in both the rank and order conditions, the dimension nθ

of θ depends on the assumptions placed on the system matrix Ψ and the covariance matrix Σ. In

particular, when in the DSGE model (1) the shocks are mutually uncorrelated univariate AR(1)

processes, Ψ and Σ are diagonal matrices, which contributes 2n_Z unknowns to n_θ, instead of

n_Z² + n_Z(n_Z + 1)/2 unknowns when the shocks follow a VAR(1) and those matrices are unrestricted

(we recall that Σ is always required to be symmetric). Hence, it may well be the case that the

unrestricted DSGE model with VAR(1) shocks fails to be identified, while the opposite holds under

the AR(1) specification.

What makes Proposition 2 important is the result that the deep parameter θ can be identified

from the spectrum even though the reduced form parameter Λ(θ) is itself not identifiable. An

immediate consequence of the rank condition in Proposition 2 is the following simple necessary

condition for identification:

rank ( ∂vec(Λ(θ_0))/∂θ ) = n_θ.                      (10)

While necessary, the above condition by itself is not sufficient to guarantee identifiability. Not

surprisingly, our rank condition in Proposition 2 is stronger than (10). This is because the reduced

form parameter Λ(θ) is not identifiable so a stronger requirement is needed to identify θ. Intu-

itively, the identifiability of the deep parameter obtains whenever the deviations in the reduced

form parameter Λ(θ) induced by a change in θ are incompatible with observationally equivalent

transformations in Proposition 1.

Canova and Sala (2009) and Iskrev (2007, 2008) suggest checking the rank of ∂vec(Λ(θ_0))/∂θ, just as

in (10). However, their recommendation comes as a consequence of considering ∂ log L/∂θ =

(∂ log L/∂vec Λ)(∂vec Λ/∂θ), and assuming that ∂ log L/∂vec Λ is full rank. But the reduced form parameter Λ(θ) is in fact

not identifiable, as Proposition 1 shows, and we make no reference to the likelihood L. The rank

condition on ∂vec(Λ(θ_0))/∂θ alone is still necessary for identification; it is, however, not sufficient. This

result has a classical flavor, even though we work with assumptions that would not be valid in a

classical setup.

Finally, it is worth emphasizing that the results of Proposition 2 do not involve the spectrum

itself. This makes our approach fundamentally different from approaches based on the informa-

tion matrix (Iskrev, 2007). Whether or not the DSGE model (1) is identifiable depends on the

parametrization alone and our rank and order conditions should be checked before any data obser-

vations are considered.


4.2 Identifiability Under A Priori Restrictions

When the order condition in Proposition 2 fails, then it is not possible to identify θ0 from the

spectrum alone. We need to impose a set of a priori restrictions on some of the components of θ.

Let θ0 satisfy a set of a priori restrictions:

ϕ(θ0) = 0 (11)

where ϕ : Θ → R^{n_ϕ} is assumed to be continuous.

The constraints imposed by Proposition 1 are already defined by the mapping λ in (8). We now

look for conditions under which the mapping χ : Θ × R^{n_Z²} → R^{n_ϕ + n_Λ}, which to each deep parameter
θ and full rank n_Z × n_Z matrix U assigns

χ(θ, U) ≡ ( ϕ(θ)′, λ(θ, U)′ )′,

is locally injective at θ = θ_0 and U = Id. As before, this condition is necessary and sufficient for

θ to be locally identifiable from the spectrum and the a priori restrictions (11). If the rank of

the matrix of partial derivatives of χ(θ, U) remains constant in a neighborhood of (θ0, Id), then a

necessary and sufficient condition for χ to be locally injective is that:

rank ( ∂χ(θ_0, Id)/∂θ    ∂χ(θ_0, Id)/∂vec U ) = n_θ + n_Z².                      (12)

We then have the following result:

Proposition 3 Let Assumptions 1 through 5 hold and assume that ϕ is continuously differentiable
on Θ. If the rank of ∆_ϕ(θ) remains constant in a neighborhood of θ_0, where

∆_ϕ(θ) ≡ ( ∂χ(θ, Id)/∂θ    ∂χ(θ, Id)/∂vec U )

        =  [ ∂ϕ(θ)/∂θ          0_{n_ϕ × n_Z²}                                ]
           [ ∂vec P(θ)/∂θ      0_{n_K² × n_Z²}                               ]
           [ ∂vec Q(θ)/∂θ      Id_{n_Z} ⊗ Q(θ)                               ]
           [ ∂vec R(θ)/∂θ      0_{n_K n_W × n_Z²}                            ]
           [ ∂vec S(θ)/∂θ      Id_{n_Z} ⊗ S(θ)                               ]
           [ ∂vec Ψ(θ)/∂θ      Id_{n_Z} ⊗ Ψ(θ) − Ψ(θ)′ ⊗ Id_{n_Z}            ]
           [ ∂vec Σ(θ)/∂θ      −(Id_{n_Z²} + T_{n_Z,n_Z})(Σ(θ) ⊗ Id_{n_Z})   ],

then a necessary and sufficient rank condition for the DSGE model (1) satisfying the a priori
restrictions (11) to be locally identified from the spectrum of Y_t at θ_0 is: rank ∆_ϕ(θ_0) = n_θ + n_Z².
Moreover, a necessary order condition for identification is:

n_ϕ ≥ n_θ − [ (n_K + n_W)(n_Z + n_K) + n_Z(n_Z + 1)/2 ].


A proof of Proposition 3 is in Appendix C. When the rank condition in Proposition 3 holds,

we say that θ is conditionally identified at θ0, where the conditioning information is given by the

constraints ϕ(θ) = 0. Proposition 3 shows that in this case, which is likely true for many models, the

full rank condition on ∆(θ_0) is not the appropriate identification condition, as we also need to take

into account the additional a priori restrictions. Typically, those restrictions would fix the

values of some components of θ. Proposition 3 shows how many of these restrictions are necessary

for identification, and is very useful in empirical work.
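When the a priori restrictions simply fix components of θ, the extra block ∂ϕ(θ)/∂θ consists of rows of the identity matrix, so ∆_ϕ(θ) is obtained by stacking those rows on top of ∆(θ). A sketch building on the delta_matrix function above (reduced_form is again the user-supplied solver wrapper assumed earlier):

def delta_phi_matrix(theta, reduced_form, fixed_indices):
    """Delta_phi(theta) of Proposition 3 when phi(theta) fixes the listed components of theta."""
    theta   = np.asarray(theta, dtype=float)
    n_theta = len(theta)
    D    = delta_matrix(theta, reduced_form)
    dphi = np.eye(n_theta)[list(fixed_indices)]              # rows of d phi / d theta
    top  = np.hstack([dphi, np.zeros((len(fixed_indices), D.shape[1] - n_theta))])
    return np.vstack([top, D])

# Conditional identification at theta0 requires
# np.linalg.matrix_rank(delta_phi_matrix(theta0, reduced_form, [...])) == n_theta + n_Z**2.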

The fact that identification conditions for θ differ depending on whether or not n_θ ≤ (n_K +

n_W)(n_Z + n_K) + n_Z(n_Z + 1)/2 is entirely new and not seen in the existing literature on identification.

Aforementioned work by Canova and Sala (2009) and Iskrev (2007, 2008) does not include a priori

restrictions on the deep parameter. The rank and order conditions for identification are simple and

can be checked even prior to solving the model. This is unlike the problem of weak identification,

which is a finite sample problem.

It is worth emphasizing that our identification conditions do not depend on any population

moments of the data. This is what distinguishes our approach from either likelihood or moment

based identification methods, which typically require a full rank Fisher information matrix (see, e.g.,

Iskrev, 2007), or a full rank Hessian of the GMM objective function. Unlike ∆(θ) and ∆ϕ(θ),

these matrices depend on the sample moments of the data. Using our approach, researchers

can establish whether or not their DSGE models are identified prior to collecting any data,

thus abstracting from issues relating to measurement error. Since the solution parameter Λ(θ) is a

function of the system matrices obtained in popular numerical solutions to DSGE models (which

we will show), the rank conditions are easy to verify.

5 Examples

This section presents three examples. Example 1 consists of one equation. Example 2 is a

simple growth model with fewer shocks than variables in the system. This example is also used

to illustrate how output from different solution algorithms can be used in evaluating the rank and

order conditions. Example 3 is a model with three shocks and three endogenous variables. This

example helps to understand the usefulness of the P,Q,R, S representation in identification.

5.1 Example 1

An and Schorfheide (2007) considered two structural models with the same reduced form:

y_t = ψ y_{t−1} + η_t,    η_t ∼ iid(0, 1).                      (13)

Model M1 has ψ = ρ, and is given by:

y_t = (1/α) E_t y_{t+1} + u_t,    u_t = ρ u_{t−1} + ε_t,    ε_t ∼ iid(0, (1 − ρ/α)²)

with deep parameter θ ≡ (α, ρ). Model M2 has ψ = (1/2)(α − √(α² − 4φα)) and is given by:

y_t = (1/α) E_t y_{t+1} + ε_t,    ε_t ∼ iid(0, [(α + √(α² − 4φα))/(2α)]²)

with deep parameter θ ≡ (α, φ).

The reduced form (13) is a special case of (1) obtained by setting Zt = ηt, Kt = yt, and leaving

W_t empty, so n_Z = 1, n_K = 1, n_W = 0. The reduced form parameters are P(θ) = ψ, Q(θ) = 1,

Ψ(θ) = 0, and Σ(θ) = 1. They satisfy Assumptions 1 through 4 whenever 0 < |ψ| < 1. Now note

that n_Λ = (n_W + n_K)(n_K + n_Z) + 2n_Z² = 4, n_Z = 1, n_θ = 2, so for both models we have:

n_θ = 2 < (n_W + n_K)(n_K + n_Z) + n_Z(n_Z + 1)/2 = 3.

In other words, both models satisfy our order condition in Proposition 2. We now examine the

rank condition. For this, note that the 4 × 3 matrix ∆(θ) defined in Proposition 2 calculated for

each of the two models is:

M1 : ∆(θ) = [ 0   1    0 ]
            [ 0   0    1 ]
            [ 0   0    0 ]
            [ 0   0   −2 ]

and

M2 : ∆(θ) = [ (1/2)(1 − (α − 2φ)/√(α² − 4φα))    α/√(α² − 4φα)    0 ]
            [ 0                                   0                1 ]
            [ 0                                   0                0 ]
            [ 0                                   0               −2 ].

Hence, for both models the rank of ∆(θ) is 2 < n_θ + n_Z² = 3, so θ is not identifiable. This is not

surprising as in model M1 the component α of the deep parameter can never be identified from

(13), while in M2 the two components α and φ cannot be identified separately.

We now search for a priori restrictions that would allow for θ to be identified in each of the

models. For model M1, any scalar restriction (11) such that ∂ϕ/∂α ≠ 0 will be such that the

corresponding 5 × 3 matrix ∆_ϕ(θ) becomes of rank 3. Hence, θ will be identifiable under any such

restriction. A simple example is the restriction α = ᾱ, i.e. ϕ(θ) = α − ᾱ. Of course, letting

ϕ(θ) = ρ − ρ̄ would fail to satisfy the rank condition. For model M2, the restrictions that will

satisfy the rank condition must be such that:

det [ ∂ϕ(θ)/∂α                               ∂ϕ(θ)/∂φ        ]
    [ (1/2)(1 − (α − 2φ)/√(α² − 4φα))        α/√(α² − 4φα)   ]  ≠ 0.

For instance, setting α = ᾱ is sufficient to identify θ at any value θ_0 = (α_0, φ_0) such that α_0 ≠ 0.

Similarly, setting φ = φ̄ is also sufficient for identification provided φ_0 ≠ 0.
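The 4 × 3 matrices above are simple enough to reproduce symbolically. The sketch below (using SymPy) rebuilds ∆(θ) for model M2 from the mapping (8); since n_Z = 1 and n_W = 0 there are no R or S blocks, so λ(θ, U) = (P, QU, U^{−1}ΨU, U^{−1}ΣU^{−1}′)′ with U a scalar u.

import sympy as sp

alpha, phi, u = sp.symbols('alpha phi u', positive=True)
psi = sp.Rational(1, 2) * (alpha - sp.sqrt(alpha**2 - 4 * phi * alpha))   # Model M2

# lambda(theta, U) for (13): P = psi, Q = 1, Psi = 0, Sigma = 1, and U = u (scalar).
lam = sp.Matrix([psi, u, sp.Integer(0), u**(-2)])
Delta = lam.jacobian([alpha, phi, u]).subs(u, 1)
print(Delta)
print(Delta.rank())          # 2 < n_theta + n_Z**2 = 3: theta is not identifiable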


5.2 Example 2

Consider a one-sector stochastic growth model with inelastic labor supply, allowing for the possibil-

ity of costly adjustment of capital. The planner’s problem is to choose consumption (Ct) and capital

(K_t) to maximize her expected utility E_t Σ_{s=0}^{∞} β^{t+s} C_{t+s}^{1−ν}/(1 − ν), subject to the given production technol-

ogy and a feasibility constraint: Q_t = Z_t K_t^α (1 − Φ_t) and Q_t = C_t + K_{t+1} − (1 − δ)K_t. The exogenous

technology process is specified as Z_t = Z_0 exp(z_t) where z_t = ψ z_{t−1} + ε_t and ε_t ∼ WN(0, σ²) with

Z_0 given. The adjustment cost function is Φ_t(K_{t+1}, K_t) = (φ/2)[K_{t+1}/K_t − 1]².

The solution to the log-linearized model takes the form:

k_{t+1} = λ_{kk} k_t + λ_{kz} z_t

c_t = λ_{ck} k_t + λ_{cz} z_t

with z_t = ψ z_{t−1} + ε_t and ε_t ∼ WN(0, σ²), and where (λ_{kk}, λ_{kz}, λ_{ck}, λ_{cz}) are defined in Appendix D.

Here, P(θ) = λ_{kk}, Q(θ) = λ_{kz}, R(θ) = λ_{ck}, S(θ) = λ_{cz}, Ψ(θ) = ψ and Σ(θ) = σ², and the variables

in (1) are Z_t ≡ z_t, K_t ≡ k_t, W_t ≡ c_t, with n_Z = 1, n_K = 1, and n_W = 1. The parameter of interest

is θ ≡ (α, β, δ, φ, ν, ψ, σ²). The reduced form parameter is Λ(θ) ≡ (λ_{ck}, λ_{cz}, λ_{kk}, λ_{kz}, ψ, σ²). The

parameter space Θ is chosen so that Λ(θ) satisfies Assumptions 1 through 4.

We consider three versions of the stochastic growth model, all with

$$\theta_0 = (\alpha_0, \beta_0, \delta_0, \phi_0, \nu_0, \psi_0, \sigma_0^2) = (0.36,\ 0.95,\ 0.025,\ \phi,\ \nu,\ 0.85,\ 0.04).$$

The three models differ in the restrictions imposed on ν and φ. As this model is sufficiently simple, the method of undetermined coefficients can be used to obtain numerical values of λ for any given θ. Appendix D shows how to use the algorithms of Sims (2002), Klein (2000), and Dynare to compute the P, Q, R, S representation.
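The rank checks reported for Models I–III below can be automated once a routine returning the reduced form is available. The sketch that follows is ours, not part of the paper's procedure: `reduced_form` is a placeholder for any user-supplied mapping θ ↦ (P, Q, R, S, Ψ, Σ) (e.g. wrapping a solver's output or the Appendix D formulas), the θ-block of ∆(θ) in (C.9) is approximated by forward finite differences (a pragmatic choice), and the U-block is built analytically; vectorization is column-major, matching the paper's vec operator.

```python
import numpy as np

def commutation(n):
    """Permutation matrix T with T @ vec(X) = vec(X.T) for n x n X (column-major vec)."""
    T = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            T[j * n + i, i * n + j] = 1.0
    return T

def vec(M):
    return np.asarray(M).flatten(order="F")            # column-major vec, as in the paper

def delta(reduced_form, theta, h=1e-6):
    """Delta(theta) of (C.9); reduced_form(theta) must return (P, Q, R, S, Psi, Sigma)."""
    P, Q, R, S, Psi, Sigma = [np.atleast_2d(M) for M in reduced_form(theta)]
    nZ = Psi.shape[0]
    lam0 = np.concatenate([vec(M) for M in (P, Q, R, S, Psi, Sigma)])
    # left block: numerical Jacobian of lambda(theta, Id) with respect to theta
    J = np.zeros((lam0.size, len(theta)))
    for i in range(len(theta)):
        tp = np.array(theta, dtype=float); tp[i] += h
        lam1 = np.concatenate([vec(np.atleast_2d(M)) for M in reduced_form(tp)])
        J[:, i] = (lam1 - lam0) / h
    # right block: analytic derivative with respect to vec U, evaluated at U = Id
    I, T = np.eye(nZ), commutation(nZ)
    right = np.vstack([np.zeros((P.size, nZ * nZ)),
                       np.kron(I, Q),
                       np.zeros((R.size, nZ * nZ)),
                       np.kron(I, S),
                       np.kron(I, Psi) - np.kron(Psi.T, I),
                       -(np.eye(nZ * nZ) + T) @ np.kron(Sigma, I)])
    return np.hstack([J, right])
```

Comparing `np.linalg.matrix_rank(delta(reduced_form, theta0))` with nθ + nZ² then implements the check of Proposition 2 at θ0.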

Model I: With ν = 1 and φ = 0, utility is logarithmic and capital is costless to adjust. In this version of the model, the dimension of the deep parameter θ is nθ = 5. Since (nK + nW)(nZ + nK) + nZ(nZ + 1)/2 = 5, the order condition of Proposition 2 holds. We now check the rank condition. The matrix ∆(θ) evaluated at θ0 = (0.36, 0.95, 0.025, 0.85, 0.04) has rank 6 = nθ + nZ², which satisfies the rank condition in Proposition 2. Hence, the value θ0 of the deep parameter θ in Model I is identifiable from the spectrum of (kt, ct)′.

Model II: With ν > 0 unrestricted and φ = 0, utility is of the power form and capital adjustment remains costless. Now, nθ = 6 and the order condition of Proposition 2 fails to hold. Since nθ − [(nK + nW)(nZ + nK) + nZ(nZ + 1)/2] = 1, we need to introduce at least one restriction.



Consider fixing a particular component of θ to its true value, for example fixing β to 0.95. This corresponds to letting ϕ(θ) = β − 0.95 in (11). We now need to check that the rank of the matrix ∆ϕ(θ) defined in Proposition 3 equals nθ + nZ² = 7. When evaluated at θ0 = (0.36, 0.95, 0.025, 2, 0.85, 0.04), the rank of ∆ϕ(θ) equals 6, which fails to satisfy the rank condition.

Table 1 summarizes the rank results obtained by successively fixing one component θi of θ.

    θi             α    β    δ    ν    ψ    σ
    rank ∆ϕ(θ0)    7    6    7    7    6    7

Table 1: Rank of ∆ϕ(θ0) in Model II (θ0 = (0.36, 0.95, 0.025, 2, 0.85, 0.04))

As seen from Table 1, fixing any component of θ other than β or ψ is sufficient for θ0 to be conditionally identifiable from the spectrum of (kt, ct)′.
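A Table-1-type scan amounts to stacking the row ∂ϕ/∂θ = e_i′ (a unit vector bordered by zeros) on top of ∆(θ0) and recomputing the rank. The fragment below is a hedged illustration built on the `delta` sketch given earlier in this example; the function and parameter names (including `growth_model_reduced_form` and the Model II parameter ordering) are ours and purely illustrative.

```python
import numpy as np

def rank_with_restriction(reduced_form, theta0, fixed_index, n_Z):
    """Rank of Delta_phi(theta0) when phi(theta) fixes the component theta0[fixed_index]."""
    D = delta(reduced_form, theta0)                        # delta() from the earlier sketch
    row = np.zeros((1, len(theta0) + n_Z * n_Z))
    row[0, fixed_index] = 1.0                              # d phi / d theta_i = 1, zeros elsewhere
    return np.linalg.matrix_rank(np.vstack([row, D]))

# Illustrative scan over single restrictions (Model II layout assumed):
# theta0 = np.array([0.36, 0.95, 0.025, 2.0, 0.85, 0.04])
# for i, name in enumerate(["alpha", "beta", "delta", "nu", "psi", "sigma2"]):
#     print(name, rank_with_restriction(growth_model_reduced_form, theta0, i, n_Z=1))
```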

Model III: With both ν > 0 and φ > 0 unrestricted, we extend Model II to allow for costly adjustment of capital. Now, nθ = 7 and nθ − [(nK + nW)(nZ + nK) + nZ(nZ + 1)/2] = 2, so at least two restrictions are necessary to satisfy the order condition in Proposition 2. As above, we consider all restrictions obtained by fixing the values of two distinct components θi and θj of θ. Table 2 reports the ranks of ∆ϕ(θ) evaluated at θ0 = (0.36, 0.95, 0.025, 2, 0.03, 0.85, 0.04). Recall that for θ to be conditionally identified under ϕ(θ) = 0 we now need rank ∆ϕ(θ0) = nθ + nZ² = 8.

    θi \ θj    β    δ    φ    ν    ψ    σ
    α          8    8    8    8    7    8
    β          -    8    8    8    7    8
    δ               -    8    8    7    8
    φ                    -    8    7    8
    ν                         -    7    8
    ψ                              -    7

Table 2: Rank of ∆ϕ(θ0) in Model III (θ0 = (0.36, 0.95, 0.025, 2, 0.03, 0.85, 0.04))

As seen from Table 2, the two restrictions on α and β, for example, are sufficient to identify θ. The rank condition is, however, not trivially satisfied for all parameter restrictions. For example, restricting ψ together with any other component of θ gives a reduced rank of ∆ϕ(θ). Note also that comparing the results of Table 2 with those of Table 1 suggests that when the constraints are put on β and φ, i.e., when ϕ(θ) = (β − 0.95, φ)′, the point θ1 = (0.36, 0.95, 0.025, 2, 0, 0.85, 0.04) is not a regular point of ∆ϕ(θ), i.e. the rank of ∆ϕ(θ1) differs from the rank of ∆ϕ(θ) when θ is in a neighborhood of θ1. Indeed, we have that



rank ∆ϕ(θ1) = 7, while for any small departure of φ from φ1 = 0 we get rank ∆ϕ(θ) = 8.
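Non-regularity of this kind can be detected numerically by evaluating the rank of ∆ϕ at the candidate point and at small perturbations of the relevant component. The snippet below is a schematic illustration in the spirit of the earlier sketches; `delta_phi` is a placeholder for a routine returning ∆ϕ(θ).

```python
import numpy as np

def rank_profile(delta_phi, theta1, index, epsilons=(0.0, 1e-3, 1e-2)):
    """Rank of Delta_phi at theta1 and at nearby points obtained by perturbing one component."""
    for eps in epsilons:
        t = np.array(theta1, dtype=float)
        t[index] += eps
        print(f"perturbation {eps:g}: rank = {np.linalg.matrix_rank(delta_phi(t))}")
```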

5.3 Example 3

An and Schorfheide (2007) consider a model whose log-linearized solution is given by:

$$y_t = E_t y_{t+1} + g_t - E_t g_{t+1} - \frac{1}{\tau}\big(r_t - E_t\pi_{t+1} - E_t z_{t+1}\big)$$
$$\pi_t = \beta E_t\pi_{t+1} + \kappa(y_t - g_t)$$
$$c_t = y_t - g_t$$
$$r_t = \rho_r r_{t-1} + (1-\rho_r)\psi_1\pi_t + (1-\rho_r)\psi_2(y_t - g_t) + \varepsilon_{rt}$$
$$g_t = \rho_g g_{t-1} + \varepsilon_{gt}$$
$$z_t = \rho_z z_{t-1} + \varepsilon_{zt}$$

with εrt ∼ WN(0, σr²), εgt ∼ WN(0, σg²), and εzt ∼ WN(0, σz²) mutually uncorrelated. In the above model, κ = τ(1 − ν)/(νπ²φ), and π is the steady state inflation rate. The parameter vector of interest is θ = (τ, β, κ, ψ1, ψ2, ρr, ρg, ρz, σr², σg², σz²), which is of dimension nθ = 11. The definition of θ in An and Schorfheide (2007) reflects the fact that ν, π, and φ are not separately identified.

This model has three exogenous shocks εt ≡ (εrt, εgt, εzt)′ (nZ = 3), one observed state variable Kt ≡ rt−1 (nK = 1), two latent state processes zt and gt, and three additional endogenous variables Wt ≡ (yt, πt, ct)′ (nW = 3). Letting Zt ≡ (εrt, gt, zt)′, the P, Q, R, S representation of An and Schorfheide's (2007) model obtained by the DYNARE (2009), Sims (2002), or Klein (2000) algorithm takes the form:

$$K_{t+1} = \lambda_{rr}K_t + \begin{pmatrix} \lambda_{r\varepsilon_r} & 0 & \lambda_{r\varepsilon_z}\end{pmatrix} Z_t$$

$$W_t = \begin{pmatrix} \lambda_{yr} \\ \lambda_{\pi r} \\ \lambda_{cr}\end{pmatrix} K_t + \begin{pmatrix} \lambda_{y\varepsilon_r} & 1 & \lambda_{y\varepsilon_z} \\ \lambda_{\pi\varepsilon_r} & 0 & \lambda_{\pi\varepsilon_z} \\ \lambda_{c\varepsilon_r} & 0 & \lambda_{c\varepsilon_z}\end{pmatrix} Z_t$$

$$Z_{t+1} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & \rho_g & 0 \\ 0 & 0 & \rho_z \end{pmatrix} Z_t + \varepsilon_{t+1},$$

in which the λ coefficients are nonlinear functions of θ. Unlike in Example 2, analytic expressions for the λ's are not available; their numerical values are, however, easily obtained using one of the algorithms above. The choice of the parameter space Θ is such that the above system satisfies our Assumptions 1 through 4.

In this example we have nΛ = (nK + nW)(nZ + nK) + 2nZ² = 34. Since nθ = 11 < (nK + nW)(nZ + nK) + nZ(nZ + 1)/2 = 22, the order condition in Proposition 2 is satisfied. We now check



the rank condition. For this, we choose θ0 as in Table 3 of An and Schorfheide (2007),

$$\theta_0 = (2,\ 0.9975,\ 0.33,\ 1.5,\ 0.125,\ 0.75,\ 0.95,\ 0.9,\ 4\times10^{-6},\ 36\times10^{-6},\ 9\times10^{-6}).$$

Then, rank ∆(θ0) = 20 = nθ + nZ², so our rank condition in Proposition 2 is satisfied. Hence, θ0 is identifiable from the spectrum of (rt−1, yt, πt, ct)′.

As a simple check of our procedure, if we let the parameter of interest be θ = (τ, β, ν, φ, π, ψ1, ψ2, ρr, ρg, ρz, σr², σg², σz²), then nθ = 13, which still satisfies the order condition. However, we now have that rank ∆(θ0) = 21 < nθ + nZ² = 22, which shows that θ0 is not identifiable. This is not surprising as we have already noted that the components ν, φ, and π are not separately identifiable.
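The dimension counts used throughout the three examples follow directly from the order condition in Proposition 2; a short helper (ours) makes the arithmetic explicit.

```python
def order_bound(n_K, n_W, n_Z):
    """Right-hand side of the order condition: n_theta <= (n_K+n_W)(n_Z+n_K) + n_Z(n_Z+1)/2."""
    return (n_K + n_W) * (n_Z + n_K) + n_Z * (n_Z + 1) // 2

assert order_bound(1, 3, 3) == 22   # Example 3: both n_theta = 11 and n_theta = 13 satisfy it
assert order_bound(1, 1, 1) == 5    # Example 2: binding for Model I, violated for Models II-III
assert order_bound(1, 0, 1) == 3    # Example 1
```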

6 Discussion

In the preceding sections, we obtained rank and order conditions for identification of θ from the spectral density Ω(z; θ) of the observables by assuming that the shocks are fundamental and that the model has a minimal state representation. Both properties were shown to hold under Assumptions 1 through 4. There are, however, important examples of DSGE models in which those assumptions fail, for example when there are sunspot shocks of the type discussed in Beyer and Farmer (2007). To accommodate this possibility, the state vector may be larger than necessary, which violates the minimal state property needed to obtain Lemma 3.

In systems in which some of our assumptions fail, the identification arguments developed in the

previous sections no longer apply. In particular, we can no longer rely on the results of Proposition

1 to characterize all observationally equivalent values of the reduced form parameter Λ(θ) in (1).

Instead, we need to look directly at the equivalence relation (5):

H(z; θ1)Σ(θ1)H(z−1; θ1)′ = H(z; θ0)Σ(θ0)H(z−1; θ0)′,

for all z in C, and find the necessary and sufficient conditions under which the above holds only

when θ1 = θ0. Without Proposition 1, all we can use for identification are the spectra or the

autocovariance generating functions. This is essentially the approach taken in Iskrev (2009), who

asked whether θ can be identified from the autocovariance matrices Γ(j; θ) of the observables. In

this case, one looks at the necessary and sufficient conditions under which the infinite system of

equalities:

Γ(j; θ1) = Γ(j; θ0)

obtained for every j (−∞ < j < +∞) can only hold if θ1 = θ0. Whether it is based on the

spectrum or on the autocovariance matrices, this identification approach remains valid even when

the assumptions of Lemmas 3 and 4 do not hold. However, for the problem considered here, it

23

Page 25: Dynamic Identi cation of DSGE Modelsecon.ucsb.edu/~doug/245a/Papers/DSGE Model Identification...Dynamic Identi cation of DSGE Models Ivana Komunjer Serena Ngy September 2009 Preliminary:

amounts to studying local injectivity properties of an infinite system of nonlinear equations which

is both theoretically challenging and computationally difficult.
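A truncated numerical version of this autocovariance-based approach can nonetheless be sketched: compute Γ(j; θ) for j = 0, …, J from a state space (A, B, C, Σ) via the discrete Lyapunov equation, stack them, and inspect the rank of their numerical Jacobian in θ. The code below is our sketch in the spirit of that approach, not Iskrev's (2009) implementation; `state_space` is a placeholder returning (A, B, C, Σ), and a full-rank outcome at a finite J is conclusive for local identification from those autocovariances, while a rank deficiency may disappear as J grows.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def stacked_autocovariances(state_space, theta, J=10):
    """vec of Gamma(0),...,Gamma(J) for Y_t = C S_t, S_{t+1} = A S_t + B eps_{t+1}."""
    A, B, C, Sigma = state_space(theta)
    V = solve_discrete_lyapunov(A, B @ Sigma @ B.T)        # Var(S_t): V = A V A' + B Sigma B'
    out, G = [], V
    for j in range(J + 1):
        out.append((C @ G @ C.T).flatten(order="F"))       # Gamma(j) = C A^j V C'
        G = A @ G
    return np.concatenate(out)

def autocov_jacobian_rank(state_space, theta0, J=10, h=1e-6):
    g0 = stacked_autocovariances(state_space, theta0, J)
    Jac = np.zeros((g0.size, len(theta0)))
    for i in range(len(theta0)):
        t = np.array(theta0, dtype=float); t[i] += h
        Jac[:, i] = (stacked_autocovariances(state_space, t, J) - g0) / h
    return np.linalg.matrix_rank(Jac)   # rank n_theta signals local identification from Gamma(0..J)
```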

The starting point of our analysis is to fully exploit the restrictions that must hold between observationally equivalent parameters of the DSGE model (1). Unsurprisingly, observational equivalence in a dynamic setting involves different restrictions on the model parameters than in static models. We provide an operational definition of spectral equivalence and show that some reduced form parameters of DSGE models are generally not identifiable. We then use these restrictions to develop necessary and sufficient conditions for identification. Our results have implications for estimation, both with full information methods and, in particular, for two-step estimation that requires a first-step estimate of the reduced form model. This issue is analyzed in our companion paper Komunjer and Ng (2009).



Appendix A Proofs of Section 2

Proof of Lemma 2. Recall that for every t ≥ 1 we have:

$$Y_t = H(L;\theta)\,\varepsilon_t$$

where the transfer function H(z; θ) is an (nK + nW) × nZ matrix of polynomials defined in (3). To show invertibility we use the following result: if, for every θ ∈ Θ, H(z−1; θ) is of rank nZ in |z| ≥ 1, then H(z−1; θ) has a left inverse, i.e. there exists a matrix G(z−1; θ) of dimension nZ × (nK + nW) such that G(z−1; θ)H(z−1; θ) = Id. This implies that we can write G(L; θ)Yt = εt, so the process is invertible. We now show that for every θ ∈ Θ, rank H(z−1; θ) = nZ for every z ∈ C such that |z| ≥ 1. From (3) we have that:

z−1[Id− P (θ)z−1]−1Q(θ)[Id−Ψ(θ)z−1]−1R(θ)z−1[Id− P (θ)z−1]−1Q(θ) + S(θ)

[Id−Ψ(θ)z−1]−1

)= rank

[(z[Id− P (θ)z−1] 0−R(θ) Id

)−1(Q(θ)S(θ)

)[Id−Ψ(θ)z−1]−1

]

= rank(Q(θ)S(θ)

)= nZ for every |z| > 1

where the second to last equality follows by Assumption 2, while the last equality follows by

combining Assumptions 3 and 4(ii).

Appendix B Proofs of Section 3

In the proofs of the results stated in Section 3, we define the state vector

$$S_t \equiv \begin{pmatrix} Z_t \\ K_t \end{pmatrix}$$

and we use the following state space representation of the DSGE model (1):

$$S_{t+1} = A(\theta)S_t + B(\theta)\varepsilon_{t+1}$$
$$Y_t = C(\theta)S_t \qquad (B.1)$$

where Yt = (Kt′, Wt′)′ as before, while the matrices A(θ), B(θ), C(θ) are given by:

$$A(\theta) \equiv \begin{pmatrix} \Psi(\theta) & 0 \\ Q(\theta) & P(\theta)\end{pmatrix}, \qquad B(\theta) \equiv \begin{pmatrix} Id \\ 0 \end{pmatrix}, \qquad C(\theta) \equiv \begin{pmatrix} 0 & Id \\ S(\theta) & R(\theta) \end{pmatrix} \qquad (B.2)$$

Recall that for every t ≥ 1 we have Yt = H(L; θ)εt. The transfer function H(z; θ) can be related to the matrices (A(θ), B(θ), C(θ)) in (B.2) as:

$$H(z;\theta) = C(\theta)\,[Id - zA(\theta)]^{-1}B(\theta) \qquad (B.3)$$



In what follows, we call (A(θ), B(θ), C(θ)) a realization of the transfer function H(z; θ). For sim-

plicity, we drop θ from the notations below. We first give several definitions that have been adapted

from control theory (see, e.g., Antsaklis and Michel, 1997):

Definition 2 (Controllability) Let $\mathcal{C}$ be the (nZ + nK) × (nZ(nZ + nK)) matrix defined as:

$$\mathcal{C} \equiv \begin{pmatrix} B & AB & \ldots & A^{n_Z+n_K-1}B\end{pmatrix}$$

A realization (A, B, C) is controllable if and only if rank $\mathcal{C}$ = nZ + nK.

Definition 3 (Observability) Let $\mathcal{O}$ be the (nK + nW)(nZ + nK) × (nZ + nK) matrix defined as:

$$\mathcal{O} \equiv \begin{pmatrix} C \\ CA \\ \vdots \\ CA^{n_Z+n_K-1}\end{pmatrix}$$

A realization (A, B, C) is observable if and only if rank $\mathcal{O}$ = nZ + nK.

Definition 4 (Minimality) A realization (A,B,C) is minimal (irreducible, of least order) if and

only if it is both controllable and observable.

Note that the matrix C is called the controllability matrix, while O is referred to as the observ-

ability matrix.
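These definitions translate directly into a numerical minimality check. The helper below is our plain transcription of Definitions 2–4 for a given realization (A, B, C) with state dimension n = nZ + nK; it is a sketch, not part of the proofs.

```python
import numpy as np

def controllability(A, B):
    """C = [B, AB, ..., A^{n-1}B], where n is the state dimension."""
    n = A.shape[0]
    blocks, M = [], B.copy()
    for _ in range(n):
        blocks.append(M)
        M = A @ M
    return np.hstack(blocks)

def observability(A, C):
    """O = [C; CA; ...; CA^{n-1}]."""
    n = A.shape[0]
    blocks, M = [], C.copy()
    for _ in range(n):
        blocks.append(M)
        M = M @ A
    return np.vstack(blocks)

def is_minimal(A, B, C):
    """Minimal (Definition 4) iff both controllable and observable."""
    n = A.shape[0]
    return (np.linalg.matrix_rank(controllability(A, B)) == n and
            np.linalg.matrix_rank(observability(A, C)) == n)
```

Applied to the (A(θ), B(θ), C(θ)) of (B.2), this reproduces numerically the conclusion of Lemma 6 below.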

We first show the following Lemma whose result will be useful in our subsequent proofs.

Lemma 6 Let Assumptions 1 through 4 hold. Then, for every θ ∈ Θ, the realization (A(θ), B(θ),

C(θ)) in (B.2) is minimal.

Proof of Lemma 6. The proof of Lemma 6 is done in two steps.

Step 1. We first show controllability. For this, note that for any 1 ≤ k ≤ nZ + nK − 1, we have:

$$A^k B = \begin{pmatrix} \Psi^k \\ P^{k-1}Q + P^{k-2}Q\Psi + \ldots + PQ\Psi^{k-2} + Q\Psi^{k-1} \end{pmatrix}$$

so it holds that for every 1 ≤ k ≤ nZ + nK − 1:

$$A^k B - A^{k-1}B\Psi = \begin{pmatrix} 0 \\ P^{k-1}Q \end{pmatrix} \qquad (B.4)$$

Now consider the following (nZ(nZ + nK)) × (nZ(nZ + nK)) matrix $\mathcal{J}$:

$$\mathcal{J} \equiv \begin{pmatrix} Id & -\Psi & & (0) \\ & \ddots & \ddots & \\ & & \ddots & -\Psi \\ (0) & & & Id \end{pmatrix}$$



Since $\mathcal{J}$ is nonsingular, we have that:

$$\operatorname{rank}(\mathcal{C}\mathcal{J}) = \operatorname{rank}(\mathcal{C})$$

Now using the property in (B.4), we have that:

$$\mathcal{C}\mathcal{J} = \begin{pmatrix} Id & 0 & 0 & \ldots & 0 \\ 0 & Q & PQ & \ldots & P^{n_Z+n_K-2}Q \end{pmatrix}$$

Under Assumption 4(i) we have that rank(Q) = nK. Then, it holds that:

$$\operatorname{rank}\begin{pmatrix} Id & 0 \\ 0 & Q\end{pmatrix} = n_Z + n_K$$

The above implies that rank $\mathcal{C}$ = nZ + nK, so (A, B, C) is controllable.

Step 2. To show observability, first note that for any 1 ≤ k ≤ nZ + nK − 1, we have:

$$CA^k = \begin{pmatrix} P^{k-1}Q + P^{k-2}Q\Psi + \ldots + PQ\Psi^{k-2} + Q\Psi^{k-1} & P^k \\ RP^{k-1}Q + RP^{k-2}Q\Psi + \ldots + RPQ\Psi^{k-2} + RQ\Psi^{k-1} + S\Psi^k & RP^k \end{pmatrix}$$

Now consider the (nK + nW)(nZ + nK) × (nK + nW)(nZ + nK) matrix $\mathcal{Q}$ defined as:

$$\mathcal{Q} \equiv \begin{pmatrix}
Id & 0 & 0 & 0 & & & (0) \\
0 & Id & 0 & 0 & & & \\
-P & 0 & Id & 0 & & & \\
0 & 0 & -R & Id & & & \\
& & \ddots & \ddots & \ddots & \ddots & \\
& & & -P & 0 & Id & 0 \\
(0) & & & & 0 & -R & Id
\end{pmatrix}$$

Note that det $\mathcal{Q}$ = 1, so $\mathcal{Q}$ is of full rank and we have:

$$\operatorname{rank}(\mathcal{Q}\mathcal{O}) = \operatorname{rank}\mathcal{O}$$

Now, straightforward calculations yield:

$$\mathcal{Q}\mathcal{O} = \begin{pmatrix}
0 & Id \\
S & R \\
Q & 0 \\
S\Psi & 0 \\
\vdots & \vdots \\
Q\Psi^{n_Z+n_K-2} & 0 \\
S\Psi^{n_Z+n_K-1} & 0
\end{pmatrix}$$

Under Assumption 4(ii) it follows that

$$\operatorname{rank}\begin{pmatrix} 0 & Id \\ S & R \\ Q & 0\end{pmatrix} = n_Z + n_K$$

which shows that rank($\mathcal{O}$) = rank($\mathcal{Q}\mathcal{O}$) = nZ + nK, and (A, B, C) is observable.



Proof of Lemma 3. According to Lemma 6, in our DSGE model (1), under Assumptions 1 to

4, every realization (A(θ), B(θ), C(θ)) of the transfer function H(z; θ) in (B.3) is minimal. We can

then use the following standard result from control theory:

Theorem 3.10 (Antsaklis and Michel, 1997). Let (A(θ0), B(θ0), C(θ0)) and (A(θ1), B(θ1), C(θ1)) be realizations of H(z; θ0). If (A(θ0), B(θ0), C(θ0)) is a minimal realization, then (A(θ1), B(θ1), C(θ1)) is also a minimal realization if and only if the two realizations are equivalent, i.e., if and only if there exists a nonsingular (nZ + nK) × (nZ + nK) matrix $\mathcal{T}$ such that A(θ1) = $\mathcal{T}$A(θ0)$\mathcal{T}$⁻¹, B(θ1) = $\mathcal{T}$B(θ0), and C(θ1) = C(θ0)$\mathcal{T}$⁻¹.

Now our model can be put in an (A, B, C) form like (B.2), and by Lemma 6, all realizations are minimal. By the above theorem, they must then be related by a similarity transformation. Given that Kt is observed, our $\mathcal{T}$ matrix must be of the form:

$$\mathcal{T} = \begin{pmatrix} Id & 0 \\ 0 & T \end{pmatrix}$$

with T full rank. Plugging in the definitions of A, B, C gives the result as desired.

Proof of Lemma 4. We first recall several useful results from control theory (Anderson, Hitz, and Diem, 1974). In the spectral factorization problem, one is given an n × n matrix φ(z) of real rational functions of a complex variable z, with φ(z−1)′ = φ(z) and φ(exp(iω)) positive semi-definite for all real ω. One is required to find a W(z) such that W(z−1)′W(z) = φ(z), with W(z) real and rational, of entries analytic in |z| ≥ 1, and possibly with W(z) of constant rank in |z| ≥ 1. A spectral factor Wd(z) of dimension r × n is termed minimum phase if Wd(z−1)′Wd(z) = φ(z) and Wd(z) has rank r everywhere in |z| ≥ 1, where r is equal to the rank almost everywhere of φ(z), i.e. rank φ(z) = r for almost every z ∈ C. As shown in Anderson (1969) or Anderson, Hitz, and Diem (1974), for example, minimum phase spectral factors are unique up to left multiplication by an arbitrary real constant orthogonal r × r matrix V, i.e. any other minimum phase factor $\widetilde{W}_d(z)$ is of the form:

$$\widetilde{W}_d(z) = V\,W_d(z) \quad\text{where } V'V = Id$$

Now, fix θ0 ∈ Θ and consider a reduced form parameter Λ(θ0) = (P (θ0), Q(θ0), R(θ0), S(θ0),Ψ(θ0),

Σ(θ0)) that generates the spectral density Ω(z; θ0) of the observables Yt. Under Assumption 1,

the matrix Σ(θ0) is real, symmetric and positive definite and L(θ0) is the lower triangular matrix

obtained by the Cholesky decomposition of Σ(θ0), i.e. Σ(θ0) = L(θ0)L(θ0)′. Note that since Σ(θ0)

has real entries, L(θ0) can be assumed to have real entries as well. In addition, L(θ0) is positive



definite. Now consider the following nZ × (nK + nW ) matrix:

Wd(z) ≡ L(θ0)′H(z−1; θ0)′. (B.5)

We have

Wd(z−1)′Wd(z) = H(z; θ0)L(θ0)L(θ0)′H(z−1; θ0)′

= H(z; θ0)Σ(θ0)H(z−1; θ0)′

≡ Ω(z; θ0)

for every z ∈ C. Note that by (3), Ω(z; θ0) is a matrix of real rational functions of a complex

variable z, with Ω(z−1; θ0)′ = Ω(z; θ0) and Ω(exp(iω); θ0) positive semi-definite for all real ω.

We now show that rank Wd(z) = nZ for every z ∈ C such that |z| ≥ 1. From the proof of Lemma 2 we know that rank H(z−1; θ0) = nZ for every |z| ≥ 1. Using (B.5) and the fact that L(θ0) is positive definite then shows that rank Wd(z) = nZ for every |z| ≥ 1. It remains to be shown that rank Ω(z; θ0) = nZ for almost every z ∈ C. For this, recall again from the proof of Lemma 2 that:

$$H(z^{-1};\theta_0) = \begin{pmatrix} z[Id - P(\theta_0)z^{-1}] & 0 \\ -R(\theta_0) & Id\end{pmatrix}^{-1}\begin{pmatrix} Q(\theta_0) \\ S(\theta_0)\end{pmatrix}[Id - \Psi(\theta_0)z^{-1}]^{-1}$$

which is rank deficient only for those z ∈ C such that det[zId − P(θ0)] det[zId − Ψ(θ0)] = 0, i.e. only at the eigenvalues of the matrices P(θ0) and Ψ(θ0). Given that there are at most nZ + nK distinct eigenvalues, H(z−1; θ0) is of full rank almost everywhere in C. Since, by Assumption 1, Σ(θ0) is full rank, it follows that Ω(z; θ0) is of rank nZ almost everywhere in C.

We can now apply the result in Anderson, Hitz, and Diem (1974) to show that if $\widetilde{W}_d(z)$ ≡ L(θ1)′H(z−1; θ1)′ is another minimum phase factor, then necessarily we have:

$$L(\theta_1)'H(z^{-1};\theta_1)' = \widetilde{W}_d(z) = V\,W_d(z) = V\,L(\theta_0)'H(z^{-1};\theta_0)'$$

where V is an orthogonal nZ × nZ matrix, V′V = Id. Now let:

$$U \equiv L(\theta_0)V'L(\theta_1)^{-1}$$

be a full rank nZ × nZ matrix. It then follows that H(z; θ1) = H(z; θ0)U for every z ∈ C and UL(θ1)V = L(θ0), so UL(θ1)V V′L(θ1)′U′ = L(θ0)L(θ0)′, i.e. UΣ(θ1)U′ = Σ(θ0).
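The conclusion of Lemma 4 is easy to verify numerically: replacing H(z; θ0) by H(z; θ0)U and Σ(θ0) by U⁻¹Σ(θ0)U⁻¹′ leaves the spectral density unchanged at every z. The check below is our sketch on a randomly drawn system, with H computed from (B.3); the dimensions and draws are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_state, n_Z, n_Y = 3, 2, 2

A = 0.5 * rng.standard_normal((n_state, n_state))       # scaled so (I - z A) stays well conditioned
B = rng.standard_normal((n_state, n_Z))
C = rng.standard_normal((n_Y, n_state))
L = rng.standard_normal((n_Z, n_Z)); Sigma0 = L @ L.T    # positive definite innovation variance

def H(z):
    return C @ np.linalg.inv(np.eye(n_state) - z * A) @ B   # transfer function as in (B.3)

U = rng.standard_normal((n_Z, n_Z))                      # any nonsingular n_Z x n_Z matrix
Uinv = np.linalg.inv(U)
Sigma1 = Uinv @ Sigma0 @ Uinv.T                          # Sigma(theta_1) = U^{-1} Sigma(theta_0) U^{-1}'

for z in [0.3 + 0.4j, -0.8, 1.7j]:
    Om0 = H(z) @ Sigma0 @ H(1 / z).T
    Om1 = (H(z) @ U) @ Sigma1 @ (H(1 / z) @ U).T          # H(.; theta_1) = H(.; theta_0) U
    assert np.allclose(Om0, Om1)                          # identical spectral density at every z
```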

Proof of Proposition 1. The proof is obtained by combining the results of Lemma 3 and Lemma

4. From Lemma 4 we know that Λ(θ0) and Λ(θ1) are observationally equivalent if and only if, for

every z ∈ C we have H(z; θ1) = H(z; θ0)U with U nonsingular. Expressing this equality in terms



of the (A,B,C) matrices in (B.3) and using the similarity transformation result of Lemma 3 then

gives the following necessary and sufficient condition for observational equivalence of Λ(θ0) and

Λ(θ1):

A(θ1) = T A(θ0)T −1, B(θ1) = T B(θ0)U, C(θ1) = C(θ0)T −1, (B.6)

where U is a nonsingular nZ × nZ matrix, and $\mathcal{T}$ is a nonsingular (nK + nZ) × (nK + nZ) matrix. Let

$$\mathcal{T} \equiv \begin{pmatrix} T_1 & T_2 \\ T_3 & T_4\end{pmatrix}$$

be a full rank (nK + nZ) × (nK + nZ) matrix, with sub-matrices (T1, T2, T3, T4) of dimensions nK × nK, nK × nZ, nZ × nK, and nZ × nZ, respectively. Then, writing the above equations for

(B(θ1), A(θ1), C(θ1)) gives the following:

0 = T2 U (B.7)

Id = T4 U (B.8)

P (θ1)T1 +Q(θ1)T3 = T1P (θ0) (B.9)

P (θ1)T2 +Q(θ1)T4 = T1Q(θ0) + T2Ψ(θ0) (B.10)

Ψ(θ1)T3 = T3P (θ0) (B.11)

Ψ(θ1)T4 = T3Q(θ0) + T4Ψ(θ0) (B.12)

R(θ1)T1 + S(θ1)T3 = R(θ0) (B.13)

R(θ1)T2 + S(θ1)T4 = S(θ0) (B.14)

T1 = Id (B.15)

T2 = 0 (B.16)

Now, we can rewrite Equation (B.11) as:

$$(Id_{n_K}\otimes\Psi(\theta_1))\,vec(T_3) = (P(\theta_0)'\otimes Id_{n_Z})\,vec(T_3)$$

where vec(T3) denotes the vectorization of the matrix T3 formed by stacking the columns of T3 into a single column vector. It follows that:

$$\big[(Id_{n_K}\otimes\Psi(\theta_1)) - (P(\theta_0)\otimes Id_{n_Z})'\big]vec(T_3) = 0$$

which has a unique solution T3 = 0 if and only if Ψ(θ1) and P(θ0) have no eigenvalues in common (Assumption 2). Then, combining the above with (B.15), (B.16), and (B.8) gives that the matrix $\mathcal{T}$ is necessarily of the form:

$$\mathcal{T} = \begin{pmatrix} Id & 0 \\ 0 & U^{-1}\end{pmatrix}$$


Then, from (B.9) and (B.10) we get:

P (θ1) = P (θ0) and Q(θ1) = Q(θ0)U (B.17)

Results for R(θ1), S(θ1), and Ψ(θ1) are obtained from Equations (B.13), (B.14), and (B.12), re-

spectively. Using the fact that in addition Σ(θ1) = U−1Σ(θ0)U−1′ from Lemma 4 then gives the

desired result.

Appendix C Proofs of Section 4

Proof of Lemma 5 The proof is similar to that of Theorem 1 in Glover and Willems (1974).

The necessity is straightforward. We now prove sufficiency by proving the contrapositive: suppose

that θ0 is not locally identifiable. Then there exists an infinite sequence of deep parameter vectors

θ1, . . . , θk, . . . (of dimension nθ) approaching θ0 such that:

Ω(z; θk) = Ω(z; θ0) for all z ∈ C.

From Proposition 1 we know that the above holds if and only if there exists an infinite sequence of

full rank nZ × nZ matrices T1, . . . , Tk, . . . such that:

$$P(\theta_k) = P(\theta_0), \quad Q(\theta_k)T_k = Q(\theta_0), \quad R(\theta_k) = R(\theta_0), \quad S(\theta_k)T_k = S(\theta_0), \qquad (C.1)$$
$$T_k^{-1}\Psi(\theta_k)T_k = \Psi(\theta_0) \quad\text{and}\quad T_k^{-1}\Sigma(\theta_k)T_k^{-1\prime} = \Sigma(\theta_0). \qquad (C.2)$$

In other words, using the notation in (8) we have that:

λ(θk, Tk) = λ(θ0, Id)

In order to show that the mapping λ is not locally injective, it suffices to show that the sequence T1, . . . , Tk, . . . approaches the identity matrix Id. For this, note that from (C.1) we have:

$$\begin{pmatrix} Q(\theta_k) \\ S(\theta_k)\end{pmatrix} T_k = \begin{pmatrix} Q(\theta_0) \\ S(\theta_0)\end{pmatrix}$$

Now, from Assumption 4(ii) we know that the (nK + nW) × nZ matrix (Q(θk)′, S(θk)′)′ is of rank nZ; hence, it is left invertible and we have

$$T_k = \left[\begin{pmatrix} Q(\theta_k)' & S(\theta_k)'\end{pmatrix}\begin{pmatrix} Q(\theta_k) \\ S(\theta_k)\end{pmatrix}\right]^{-1}\begin{pmatrix} Q(\theta_k)' & S(\theta_k)'\end{pmatrix}\begin{pmatrix} Q(\theta_0) \\ S(\theta_0)\end{pmatrix}$$

By continuity of Q(θ) and S(θ) (Assumption 5), it then follows that the sequence T1, . . . , Tk, . . . approaches the identity matrix Id. Hence, λ is not injective in the neighborhood of (θ0, IdnZ).



Proof of Proposition 2 The proof of Proposition 2 is done in two steps.

Step 1. We start by establishing the necessary and sufficient (rank) condition. To compute

the matrix of partial derivatives of λ, we use the product rule: if F (X) and G(X) are, respectively,

m× p- and p× q-variate and differentiable functions with respect to X, then

$$\frac{\partial vec(F(X)G(X))}{\partial vec X} = (G(X)'\otimes Id_m)\,\frac{\partial vec(F(X))}{\partial vec X} + (Id_q\otimes F(X))\,\frac{\partial vec(G(X))}{\partial vec X}$$

For example, if X is an nZ × nZ matrix, then applying the above rule to F(X) = X⁻¹ (respectively F(X) = X′) and G(X) = X we get the following useful results:

$$\frac{\partial vec(X^{-1})}{\partial vec X} = -(X^{-1\prime}\otimes X^{-1}) \qquad\text{and}\qquad \frac{\partial vec(X^{-1\prime})}{\partial vec X} = -T_{n_Z,n_Z}(X^{-1\prime}\otimes X^{-1})$$

where $T_{n_Z,n_Z}$ is the nZ² × nZ² permutation matrix that transforms vec X into vec X′, i.e. $T_{n_Z,n_Z}\,vec X = vec X'$. Note that $T_{n_Z,n_Z}$ is an orthogonal matrix: $T_{n_Z,n_Z} = T_{n_Z,n_Z}^{-1}$ and $T_{n_Z,n_Z} = T_{n_Z,n_Z}'$. In addition, note that we have rank(Id_{nZ²} + T_{nZ,nZ}) = nZ(nZ + 1)/2.
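The commutation (permutation) matrix and the rank fact just stated are easy to verify numerically; the snippet below is our sketch.

```python
import numpy as np

def commutation(n):
    """T with T @ vec(X) = vec(X.T) for any n x n matrix X (column-major vec)."""
    T = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            T[j * n + i, i * n + j] = 1.0
    return T

n = 3
T = commutation(n)
X = np.random.default_rng(1).standard_normal((n, n))
assert np.allclose(T @ X.flatten(order="F"), X.T.flatten(order="F"))   # T vec X = vec X'
assert np.allclose(T, T.T) and np.allclose(T @ T, np.eye(n * n))       # T = T' = T^{-1}
assert np.linalg.matrix_rank(np.eye(n * n) + T) == n * (n + 1) // 2    # rank(I + T) = n(n+1)/2
```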

Step 1a. We first compute the partial derivatives of λ(θ, U) with respect to the components of θ. Using the product rule, it immediately follows that:

$$\frac{\partial\lambda(\theta,U)}{\partial\theta} = \begin{pmatrix}
\dfrac{\partial vec P(\theta)}{\partial\theta} \\[1ex]
(U'\otimes Id_{n_K})\,\dfrac{\partial vec Q(\theta)}{\partial\theta} \\[1ex]
\dfrac{\partial vec R(\theta)}{\partial\theta} \\[1ex]
(U'\otimes Id_{n_W})\,\dfrac{\partial vec S(\theta)}{\partial\theta} \\[1ex]
(U'\otimes U^{-1})\,\dfrac{\partial vec\Psi(\theta)}{\partial\theta} \\[1ex]
(U^{-1}\otimes U^{-1})\,\dfrac{\partial vec\Sigma(\theta)}{\partial\theta}
\end{pmatrix} \qquad (C.3)$$

Step 1b. We now compute the derivatives of λ(θ, U) with respect to U. For vec(Q(θ)U) and vec(S(θ)U), applying the product rule immediately gives

$$\frac{\partial vec(Q(\theta)U)}{\partial vec U} = (Id_{n_Z}\otimes Q(\theta)) \qquad (C.4)$$
$$\frac{\partial vec(S(\theta)U)}{\partial vec U} = (Id_{n_Z}\otimes S(\theta)) \qquad (C.5)$$

For vec(U⁻¹Ψ(θ)U), we let F(U) = U⁻¹ and G(U) = Ψ(θ)U. Then,

$$\frac{\partial vec(U^{-1}\Psi(\theta)U)}{\partial vec U} = -\big((\Psi(\theta)U)'\otimes Id_{n_Z}\big)(U^{-1\prime}\otimes U^{-1}) + (Id_{n_Z}\otimes U^{-1})(Id_{n_Z}\otimes\Psi(\theta))$$
$$= -\big((U'\Psi(\theta)'U^{-1\prime})\otimes U^{-1}\big) + \big(Id_{n_Z}\otimes(U^{-1}\Psi(\theta))\big)$$
$$= (U'\otimes U^{-1})\big((Id_{n_Z}\otimes\Psi(\theta)) - (\Psi(\theta)'\otimes Id_{n_Z})\big)\big((U^{-1})'\otimes Id_{n_Z}\big) \qquad (C.6)$$



where the second and third equality use the fact that for any conformable matrices A, B, C, D we

have (A⊗B)(C ⊗D) = (AC ⊗BD). Finally, letting F (U) = U−1 and G(U) = Σ(θ)U−1′ we get:

$$\frac{\partial vec(U^{-1}\Sigma(\theta)U^{-1\prime})}{\partial vec U} = \big((U^{-1}\Sigma(\theta))\otimes Id_{n_Z}\big)\frac{\partial vec(U^{-1})}{\partial vec U} + (Id_{n_Z}\otimes U^{-1})\frac{\partial vec(U^{-1\prime})}{\partial vec U}$$
$$= -(Id_{n_Z^2} + T_{n_Z,n_Z})\big((U^{-1}\Sigma(\theta)U^{-1\prime})\otimes U^{-1}\big) \qquad (C.7)$$

Combining (C.4) through (C.7) we then get:

$$\frac{\partial\lambda(\theta,U)}{\partial vec U} = \begin{pmatrix}
0_{n_K^2\times n_Z^2} \\
Id_{n_Z}\otimes Q(\theta) \\
0_{n_K n_W\times n_Z^2} \\
Id_{n_Z}\otimes S(\theta) \\
(U'\otimes U^{-1})\big((Id_{n_Z}\otimes\Psi(\theta)) - (\Psi(\theta)'\otimes Id_{n_Z})\big)\big((U^{-1})'\otimes Id_{n_Z}\big) \\
-(Id_{n_Z^2} + T_{n_Z,n_Z})\big((U^{-1}\Sigma(\theta)U^{-1\prime})\otimes U^{-1}\big)
\end{pmatrix} \qquad (C.8)$$

Now let

$$\Delta(\theta) \equiv \begin{pmatrix} \dfrac{\partial\lambda(\theta,Id)}{\partial\theta} & \dfrac{\partial\lambda(\theta,Id)}{\partial vec U}\end{pmatrix} = \begin{pmatrix}
\dfrac{\partial vec P(\theta)}{\partial\theta} & 0_{n_K^2\times n_Z^2} \\[1ex]
\dfrac{\partial vec Q(\theta)}{\partial\theta} & Id_{n_Z}\otimes Q(\theta) \\[1ex]
\dfrac{\partial vec R(\theta)}{\partial\theta} & 0_{n_K n_W\times n_Z^2} \\[1ex]
\dfrac{\partial vec S(\theta)}{\partial\theta} & Id_{n_Z}\otimes S(\theta) \\[1ex]
\dfrac{\partial vec\Psi(\theta)}{\partial\theta} & Id_{n_Z}\otimes\Psi(\theta) - \Psi(\theta)'\otimes Id_{n_Z} \\[1ex]
\dfrac{\partial vec\Sigma(\theta)}{\partial\theta} & -(Id_{n_Z^2} + T_{n_Z,n_Z})(\Sigma(\theta)\otimes Id_{n_Z})
\end{pmatrix} \qquad (C.9)$$

and note that using (C.8) we can write

$$\begin{pmatrix} \dfrac{\partial\lambda(\theta,U)}{\partial\theta} & \dfrac{\partial\lambda(\theta,U)}{\partial vec U}\end{pmatrix} = M(U)\,\Delta(\theta)\,N(U)$$

where M(U) and N(U) are, respectively, an nΛ × nΛ block-diagonal matrix and an (nθ + nZ²) × (nθ + nZ²) block-diagonal matrix defined as:

$$M(U) \equiv \operatorname{diag}\Big(Id_{n_K^2},\ U'\otimes Id_{n_K},\ Id_{n_K n_W},\ U'\otimes Id_{n_W},\ U'\otimes U^{-1},\ U^{-1}\otimes U^{-1}\Big), \qquad N(U) \equiv \begin{pmatrix} Id_{n_\theta} & 0 \\ 0 & U^{-1\prime}\otimes Id_{n_Z}\end{pmatrix}$$

Since U is full rank, both M(U) and N(U) are full rank, so we have that

$$\operatorname{rank}\begin{pmatrix} \dfrac{\partial\lambda(\theta,U)}{\partial\theta} & \dfrac{\partial\lambda(\theta,U)}{\partial vec U}\end{pmatrix} = \operatorname{rank}\Delta(\theta)$$



Note that if the rank of ∆(θ) remains constant in a neighborhood of θ0, then the rank of the partial

derivatives of λ remains constant in a neighborhood of (θ0, Id).

Step 2. We now establish the necessary (order) condition. Counting the number of linearly independent rows in the matrix ∆(θ) in (C.9), it is easy to show that:

$$\operatorname{rank}\Delta(\theta) \leq (n_W + n_K)(n_Z + n_K) + n_Z^2 + \frac{n_Z(n_Z+1)}{2}$$

where we have used the fact that Σ(θ) is always symmetric, so:

$$\operatorname{rank}\begin{pmatrix} \dfrac{\partial vec\Sigma(\theta)}{\partial\theta} & -(Id_{n_Z^2} + T_{n_Z,n_Z})(\Sigma(\theta)\otimes Id_{n_Z})\end{pmatrix} \leq \frac{n_Z(n_Z+1)}{2}.$$

Hence, a necessary condition for (9) to hold is that:

$$n_\theta + n_Z^2 \leq (n_W + n_K)(n_Z + n_K) + n_Z^2 + \frac{n_Z(n_Z+1)}{2}, \quad\text{i.e.}\quad n_\theta \leq (n_K + n_W)(n_Z + n_K) + \frac{n_Z(n_Z+1)}{2}$$

Proof of Proposition 3 To begin, note that the matrix of partial derivatives of χ can be written as:

$$\begin{pmatrix} \dfrac{\partial\chi(\theta,U)}{\partial\theta} & \dfrac{\partial\chi(\theta,U)}{\partial vec U}\end{pmatrix} = \begin{pmatrix} \dfrac{\partial\varphi(\theta)}{\partial\theta} & 0_{n_\varphi\times n_Z^2} \\[1ex] \dfrac{\partial\lambda(\theta,U)}{\partial\theta} & \dfrac{\partial\lambda(\theta,U)}{\partial vec U}\end{pmatrix}$$

Using the same reasoning as in the proof of Proposition 2, the above can further be written as:

$$\begin{pmatrix} \dfrac{\partial\chi(\theta,U)}{\partial\theta} & \dfrac{\partial\chi(\theta,U)}{\partial vec U}\end{pmatrix} = \begin{pmatrix} Id_{n_\varphi} & 0_{n_\varphi\times n_\Lambda} \\ 0_{n_\Lambda\times n_\varphi} & M(U)\end{pmatrix}\Delta_\varphi(\theta)\,N(U)$$

where the nΛ × nΛ matrix M(U) and the (nθ + nZ²) × (nθ + nZ²) matrix N(U) are as in the proof of Proposition 2, and where we have let ∆ϕ(θ) stack the row block (∂ϕ(θ)/∂θ, 0) on top of ∆(θ):

$$\Delta_\varphi(\theta) \equiv \begin{pmatrix} \dfrac{\partial\chi(\theta,Id)}{\partial\theta} & \dfrac{\partial\chi(\theta,Id)}{\partial vec U}\end{pmatrix} = \begin{pmatrix}
\dfrac{\partial\varphi(\theta)}{\partial\theta} & 0_{n_\varphi\times n_Z^2} \\[1ex]
\dfrac{\partial vec P(\theta)}{\partial\theta} & 0_{n_K^2\times n_Z^2} \\[1ex]
\dfrac{\partial vec Q(\theta)}{\partial\theta} & Id_{n_Z}\otimes Q(\theta) \\[1ex]
\dfrac{\partial vec R(\theta)}{\partial\theta} & 0_{n_K n_W\times n_Z^2} \\[1ex]
\dfrac{\partial vec S(\theta)}{\partial\theta} & Id_{n_Z}\otimes S(\theta) \\[1ex]
\dfrac{\partial vec\Psi(\theta)}{\partial\theta} & Id_{n_Z}\otimes\Psi(\theta) - \Psi(\theta)'\otimes Id_{n_Z} \\[1ex]
\dfrac{\partial vec\Sigma(\theta)}{\partial\theta} & -(Id_{n_Z^2} + T_{n_Z,n_Z})(\Sigma(\theta)\otimes Id_{n_Z})
\end{pmatrix}$$

It then follows that:

$$\operatorname{rank}\begin{pmatrix} \dfrac{\partial\chi(\theta,U)}{\partial\theta} & \dfrac{\partial\chi(\theta,U)}{\partial vec U}\end{pmatrix} = \operatorname{rank}\Delta_\varphi(\theta)$$



from which the rank condition follows. For the order condition, note that we again have:

$$\operatorname{rank}\begin{pmatrix} \dfrac{\partial\chi(\theta,U)}{\partial\theta} & \dfrac{\partial\chi(\theta,U)}{\partial vec U}\end{pmatrix} \leq n_\varphi + (n_K + n_W)(n_Z + n_K) + n_Z^2 + \frac{n_Z(n_Z+1)}{2}$$

so a necessary condition for identifiability of θ under the a priori restrictions (11) is:

$$n_\theta + n_Z^2 \leq n_\varphi + (n_K + n_W)(n_Z + n_K) + n_Z^2 + \frac{n_Z(n_Z+1)}{2}, \quad\text{i.e.}\quad n_\varphi \geq n_\theta - \Big[(n_K + n_W)(n_Z + n_K) + \frac{n_Z(n_Z+1)}{2}\Big]$$

Appendix D Details of Example 2 in Section 5

Let Q∗, C∗, K∗ be the steady state values of output, consumption, and capital, and let R∗ = (1 − δ) + αQ∗/K∗. The log-linearized model is

$$q_t = \alpha k_t + z_t$$
$$q_t = \Big(1 - \delta\frac{K^*}{Q^*}\Big)c_t + \frac{K^*}{Q^*}k_{t+1} - (1-\delta)\frac{K^*}{Q^*}k_t$$
$$0 = E_t\Big[\nu(c_{t+1} - c_t) - \phi\beta\frac{Q^*}{K^*}k_{t+2} + \big(\alpha(1-\alpha) + \beta\phi(2-\delta)\big)\beta\frac{Q^*}{K^*}k_{t+1} - \phi\beta(1-\delta)\frac{Q^*}{K^*}k_t - \alpha\beta\frac{Q^*}{K^*}z_{t+1}\Big]$$
$$z_t = \psi z_{t-1} + \sigma\varepsilon_t$$

The structural model can be rewritten as

$$q_t = \gamma_1 k_t + z_t$$
$$q_t = \gamma_2 c_t + \gamma_3 k_{t+1} + (1 - \gamma_2 - \gamma_3)k_t$$
$$0 = E_t\Big[\gamma_4(c_{t+1} - c_t) + \gamma_5 k_{t+2} + \Big(\frac{\gamma_1(1-\gamma_1)}{\gamma_1 + \gamma_2 + \gamma_3 - 1} + \frac{(1-\gamma_2-2\gamma_3)\gamma_5}{\gamma_3}\Big)k_{t+1} - \frac{(1-\gamma_2-\gamma_3)\gamma_5}{\gamma_3}k_t - \frac{\gamma_1}{\gamma_1 + \gamma_2 + \gamma_3 - 1}z_{t+1}\Big]$$
$$z_t = \gamma_6 z_{t-1} + \gamma_7\varepsilon_t$$

where the structural parameters are

$$\gamma = (\gamma_1, \gamma_2, \gamma_3, \gamma_4, \gamma_5, \gamma_6, \gamma_7) \equiv \Big(\alpha,\ 1 - \delta\frac{K^*}{Q^*},\ \frac{K^*}{Q^*},\ \nu,\ -\phi\beta\frac{Q^*}{K^*},\ \psi,\ \sigma\Big).$$

For this model, λkk is given by:

$$\lambda_{kk} = \frac{-B + \sqrt{B^2 - 4AC}}{2A}$$



where

$$A \equiv -\nu\frac{K^*}{C^*} - \phi\beta\frac{Q^*}{K^*}, \qquad B \equiv \nu\Big(1 + \frac{1}{\beta}\Big)\frac{K^*}{C^*} + \big[\alpha(1-\alpha) + \phi(2-\delta)\big]\beta\frac{Q^*}{K^*}, \qquad C \equiv -\frac{\nu}{\beta}\frac{K^*}{C^*} - \phi\beta(1-\delta)\frac{Q^*}{K^*}.$$

And λkz, λck, λcz equal:

$$\lambda_{kz} = \frac{(1-\psi)\nu\frac{Q^*}{C^*} + \alpha\beta\psi\frac{Q^*}{K^*}}{\nu\big[\frac{1}{\beta} - \lambda_{kk} + 1 - \psi\big]\frac{K^*}{C^*} + \beta\big[\alpha(1-\alpha) - \phi(2-\delta) - \phi\lambda_{kk} - \psi\phi\big]\frac{Q^*}{K^*}}$$
$$\lambda_{ck} = \frac{K^*}{C^*}\Big[\frac{1}{\beta} - \lambda_{kk}\Big]$$
$$\lambda_{cz} = \frac{K^*}{C^*}\Big[\frac{Q^*}{K^*} - \lambda_{kz}\Big].$$
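For readers who want numerical values of the λ's, the formulas above translate directly into code. The sketch below is ours: the function name is arbitrary, the steady-state ratios Q*/K* = (1/β − 1 + δ)/α and K*/C* = 1/(Q*/K* − δ) are our assumed standard steady-state relations, and the (1 + 1/β) term in B follows our reading of the coefficient displayed above.

```python
import numpy as np

def growth_lambdas(alpha, beta, delta, phi, nu, psi):
    """lambda_kk, lambda_kz, lambda_ck, lambda_cz from the Appendix D formulas (our transcription)."""
    QK = (1.0 / beta - 1.0 + delta) / alpha        # Q*/K* from the steady-state Euler equation (assumed)
    KC = 1.0 / (QK - delta)                        # K*/C* from the resource constraint (assumed)
    A = -nu * KC - phi * beta * QK
    B = nu * (1.0 + 1.0 / beta) * KC + (alpha * (1.0 - alpha) + phi * (2.0 - delta)) * beta * QK
    Cc = -nu / beta * KC - phi * beta * (1.0 - delta) * QK
    lam_kk = (-B + np.sqrt(B * B - 4.0 * A * Cc)) / (2.0 * A)
    lam_kz = ((1.0 - psi) * nu * QK * KC + alpha * beta * psi * QK) / (
        nu * (1.0 / beta - lam_kk + 1.0 - psi) * KC
        + beta * (alpha * (1.0 - alpha) - phi * (2.0 - delta) - phi * lam_kk - psi * phi) * QK)
    lam_ck = KC * (1.0 / beta - lam_kk)
    lam_cz = KC * (QK - lam_kz)
    return lam_kk, lam_kz, lam_ck, lam_cz

# Model I calibration from Section 5.2 (nu = 1, phi = 0):
print(growth_lambdas(0.36, 0.95, 0.025, 0.0, 1.0, 0.85))
```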



References

An, S., and F. Schorfheide (2007): “Bayesian Analysis of DSGE Models,” Econometric Reviews,

26: 2-4, 113–172.

Anderson, B. D. O. (1969): “The Inverse Problem of Stationary Covariance Generation,” Journal

of Statistical Physics, 1, 133–147.

Anderson, B. D. O., K. L. Hitz, and N. D. Diem (1974): “Recursive Algorithm for Spectral

Factorization,” IEEE Transactions on Circuits and Systems, 21, 742–750.

Anderson, G., and G. Moore (1985): “A Linear Algebraic Procedure for Solving Linear Perfect Foresight Models,” Economics Letters, 17(3), 247–252.

Antsaklis, P., and A. Michel (1997): Linear Systems. McGraw-Hill, New York.

Beyer, A., and R. Farmer (2007): “Indeterminacy of Determinacy and Indeterminacy,” American Economic Review, 97:1, 524–529.

Canova, F., and L. Sala (2009): “Back to Square One: Identification Issues in DSGE Models,”

Journal of Monetary Economics, 56, 431–449.

Chao, J., and P. Phillips (1998): “Bayesian Posterior Distributions in Limited Information Analysis of the Simultaneous Equation Model Using the Jeffreys Prior,” Journal of Econometrics, 87, 49–86.

Chari, V. V., P. Kehoe, and E. McGrattan (2009): “New Keynesian Models: Not Yet Useful for Policy Analysis,” American Economic Journal: Macroeconomics, 1:1, 242–66.

Consolo, A., C. Favero, and A. Paccagnini (2009): “On the Statistical Identification of

DSGE Models,” Journal of Econometrics, 150, 99–115.

Curdia, V., and R. Reis (2009): “Correlated Disturbances and U.S. Business Cycles,” unpub-

lished manuscript.

Deistler, M. (1976): “The Identifiability of Linear Econometric Models with Autocorrelated

Errors,” International Economic Review, 17, 26–46.

Deistler, M. (2006): “Stationary Processes and Linear Systems,” in Harmonic Analysis and

Rational Approximation, ed. by J.-D. Fournier, J. Grimm, J. Leblond, and J. R. Partington, pp.

159–179. Springer, Berlin.



DYNARE (2009): “Version 4.”

Fernandez-Villaverde, J., J. Rubio-Ramirez, T. Sargent, and M. Watson (2007): “A,B,Cs and (D)’s for Understanding VARs,” American Economic Review, 97:3, 1021–1026.

Fisher, F. (1966): The Identification Problem in Econometrics. McGraw-Hill.

Giannone, D., and L. Reichlin (2006): “Does Information Help Recover Structural Shocks

From Past Observations,” Journal of the European Economic Association, 4:2-3, 455–465.

Glover, K. (1973): “Structural Aspects of System Identification,” Unpublished Dissertation,

M.I.T.

Glover, K., and J. Willems (1974): “Parameterizations of Linear Dynamical Systems: Canon-

ical Forms and Identifiability,” IEEE Transactions on Automatic Control, 19:6, 640–646.

Gourieroux, C., and A. Monfort (1997): Time Series and Dynamic Models. Cambridge Uni-

versity Press.

Grewal, K., and K. Glover (1976): “Identifiability of Linear and Nonlinear Dynamic Systems,”

IEEE Transactions on Automatic Control, 21:6, 833–837.

Hannan, E. (1971): “The Identification Problem for Multiple Equation Systems with Moving

Average Errors,” Econometrica, 39:5, 751–765.

Hannan, E., and M. Deistler (1988): The Statistical Theory of Linear Systems. Wiley, New York.

Hatanaka, M. (1975): “On the Global Identification of the Dynamic Simultaneous Equations

Model with Stationary Disturbances,” International Economic Review, 16:3, 545–553.

Iskrev, N. (2007): “Evaluating the Information Matrix in Linearized DSGE models,” Economics

Letters, 99:3, 607–610.

(2008): “How Much Do We Learn from Estimation of DSGE models? A Case Study of

Identification Issues in a New Keynesian Business Cycle Model,” University of Michigan.

(2009): “Local Identification in DSGE Models,” Banco de Portugal.

King, R. G., and M. W. Watson (2002): “System Reduction and Solution Algorithms for

Singular Linear Difference Systems under Rational Expectations,” Computational Economics,

20, pp. 57–86.



Kleibergen, F., and H. K. van Dijk (1998): “Bayesian Simultaneous Equations Analysis Using Reduced Rank Structures,” Econometric Theory, 14, 701–743.

Klein, P. (2000): “Using the Generalized Schur Form to Solve a Multivariate Linear Rational Expectations Model,” Journal of Economic Dynamics and Control, 4:2, 257–271.

Komunjer, I., and S. Ng (2009): “Issues in Limited Information Identification of DSGE models,”

unpublished manuscript, Columbia University.

Koopmans, T. C. (ed.) (1950): Statistical Inference in Dynamic Economic Models, vol. 10 of

Cowles Commission Monograph. John Wiley & Sons, Inc.

Lutkepohl, H. (2005): New Introduction to Multiple Time Series Analysis. Springer Verlag,

Berlin.

Newcomb, R., and B. Anderson (1968): “On the Generation of All Spectral Factors,” IEEE Transactions on Information Theory, 14:3, 512–514.

Poirier, D. (1998): “Revising Beliefs in Nonidentified Models,” Econometric Theory, 14, 183–209.

Reinsel, G. (2003): Elements of Multivariate Time Series Analysis. Springer, New York, 2nd edn.

Rothenberg, T. (1971): “Identification in Parametric Models,” Econometrica, 39:3, 577–591.

Rubio-Ramírez, J. F., D. F. Waggoner, and T. Zha (2007): “Structural Vector Autoregressions: Theory of Identification and Algorithms for Inference,” Review of Economic Studies, Forthcoming.

Sims, C. (2002): “Solving Linear Rational Expectations Models,” Computational Economics, pp.

1–20.

Solo, V. (1986): “Topics in Advanced Time Series,” in Lectures in Probability and Statistics, ed. by G. del Pino, and R. Rebolledo, vol. 1215. Springer-Verlag.

Uhlig, H. (1999): “A Toolkit for Analyzing Nonlinear Dynamic Stochastic Models Easily,” in

Computational Methods for the Study of Dynamic Economies, ed. by R. Marimon, and A. Scott,

pp. 30–61. Oxford University Press.

Wallis, K. (1977): “Multiple Time Series Analysis and the Final Form of Econometric Models,”

Econometrica, 45:6, 1481–1497.

Zellner, A., and F. Palm (1974): “Time Series Analysis and Simultaneous Equation Economet-

ric Models,” Journal of Econometrics, 2, 17–54.
