extremum_estimators_introduction
7/30/2019 extremum_estimators_introduction
Outline: Examples · Consistency · Asymptotic Normality · Two-Step Estimators · Estimating the Asymptotic Variance · Hypothesis Testing
Extremum Estimators: Introduction
(Estimadores Extremos: Introdução)

Cristine Campos de Xavier Pinto
CEDEPLAR/UFMG
May 2010
Sequences and Convergence

Deterministic sequences vs. random sequences.

Convergence of deterministic sequences: a sequence of nonrandom numbers $\{a_N : N = 1, 2, \ldots\}$ converges to a limit $a$ if for all $\varepsilon > 0$ there exists $N_\varepsilon$ such that $N > N_\varepsilon$ implies $|a_N - a| < \varepsilon$. In this case, $a_N \to a$ as $N \to \infty$.
Convergence in probability: a sequence of random variables $\{X_N : N = 1, 2, \ldots\}$ converges in probability to a constant $a$ if for all $\varepsilon > 0$,
$$\Pr\left[|X_N - a| > \varepsilon\right] \to 0 \text{ as } N \to \infty$$
In this case,
$$X_N \to_p a, \qquad \operatorname{plim} X_N = a$$
Bounded deterministic sequences: a sequence of nonrandom numbers $\{a_N : N = 1, 2, \ldots\}$ is bounded if and only if there is some $b < \infty$ such that $|a_N| \le b$ for all $N = 1, 2, \ldots$

Bounded in probability: a sequence of random variables $\{X_N : N = 1, 2, \ldots\}$ is bounded in probability if and only if for every $\varepsilon > 0$ there exist $b_\varepsilon < \infty$ and an integer $N_\varepsilon$ such that
$$\Pr\left[|X_N| \ge b_\varepsilon\right] < \varepsilon \text{ for all } N \ge N_\varepsilon$$
Notation

Deterministic sequences:
A sequence of nonrandom numbers $\{a_N\}$ is $O(N^{\lambda})$ if $N^{-\lambda} a_N$ is bounded. If $\lambda = 0$, $\{a_N\}$ is bounded and we can write $a_N = O(1)$.
$\{a_N\}$ is $o(N^{\lambda})$ if $N^{-\lambda} a_N \to 0$. When $\lambda = 0$, $a_N \to 0$, and we say $a_N = o(1)$.
Random sequences:
If $\{X_N\}$ is bounded in probability, we write $X_N = O_p(1)$.
If $X_N \to_p 0$, we write $X_N = o_p(1)$.
Lemma: if $X_N \to_p a$, then $X_N = O_p(1)$.
Random sequences: a sequence of random variables $\{X_N\}$ is $O_p(R_N)$, where $\{R_N\}$ is a nonrandom positive sequence, if
$$\frac{X_N}{R_N} = O_p(1)$$
We write $X_N = O_p(R_N)$.

Similarly, a random sequence $\{X_N\}$, with $\{R_N\}$ a nonrandom positive sequence, is $o_p(R_N)$ if
$$\frac{X_N}{R_N} = o_p(1)$$
We write $X_N = o_p(R_N)$.
Examples:
Let us get back to the OLS estimator, under the classical linear model assumptions:
$$\sqrt{N}\left(\hat{\beta}_{OLS} - \beta_0\right) = \left(\frac{1}{N}\sum_{i=1}^N X_i' X_i\right)^{-1}\left(\frac{1}{\sqrt{N}}\sum_{i=1}^N X_i' \varepsilon_i\right)$$
$$\left(\frac{1}{N}\sum_{i=1}^N X_i' X_i\right)^{-1} - \left(E\left[X'X\right]\right)^{-1} = o_p(1) \quad \text{(using the WLLN)}$$
$$\frac{1}{\sqrt{N}}\sum_{i=1}^N X_i' \varepsilon_i = O_p(1) \quad \text{(by the CLT)}$$
At the end,
$$\sqrt{N}\left(\hat{\beta}_{OLS} - \beta_0\right) = \left(E\left[X'X\right]\right)^{-1}\left(\frac{1}{\sqrt{N}}\sum_{i=1}^N X_i' \varepsilon_i\right) + o_p(1) = O_p(1)$$
However,
$$\frac{1}{N}\sum_{i=1}^N X_i' \varepsilon_i - \underbrace{E\left[X'\varepsilon\right]}_{=0 \text{ by assumption}} = o_p(1)$$
so
$$\hat{\beta}_{OLS} - \beta_0 = \left(E\left[X'X\right]\right)^{-1} \cdot 0 + o_p(1) = o_p(1)$$
Properties:
1. $o_p(1) + o_p(1) = o_p(1)$
2. $O_p(1) + O_p(1) = O_p(1)$
3. $o_p(1) + O_p(1) = O_p(1)$
4. $O_p(1)\,O_p(1) = O_p(1)$
5. $o_p(1)\,O_p(1) = o_p(1)$
6. $o_p(1)\,o_p(1) = o_p(1)$
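The two rates in the OLS example can be seen in a short simulation. This is a minimal sketch under an assumed scalar model $y = \beta_0 x + \varepsilon$ with illustrative distribution choices (not from the notes): the raw deviation $\hat{\beta} - \beta_0$ shrinks with $N$ (it is $o_p(1)$), while $\sqrt{N}(\hat{\beta} - \beta_0)$ settles at a stable order of magnitude (it is $O_p(1)$).

```python
import numpy as np

# Monte Carlo illustration of the o_p / O_p rates for OLS.
# The scalar model y = beta0*x + eps and the distributions are assumptions.
rng = np.random.default_rng(0)
beta0 = 2.0

def mean_abs_deviation(N, reps=200):
    """Average |beta_hat - beta0| over Monte Carlo replications."""
    devs = []
    for _ in range(reps):
        x = rng.normal(1.0, 1.0, N)
        y = beta0 * x + rng.normal(0.0, 1.0, N)
        devs.append(abs((x @ y) / (x @ x) - beta0))
    return float(np.mean(devs))

dev_small = mean_abs_deviation(100)
dev_large = mean_abs_deviation(10_000)

# o_p(1): the deviation itself vanishes as N grows.
# O_p(1): scaled by sqrt(N), it stays at a stable order of magnitude.
scaled_small = np.sqrt(100) * dev_small
scaled_large = np.sqrt(10_000) * dev_large
```

With a 100-fold increase in $N$ the raw deviation falls by roughly a factor of ten, while the scaled deviation barely moves.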
Extremum estimators (M-estimators): estimators obtained by either minimizing or maximizing a certain function defined over a parameter space.

An estimator $\hat{\theta}$ is an extremum estimator if there is an objective function $\hat{Q}_N(\theta)$ such that $\hat{\theta}$ maximizes $\hat{Q}_N(\theta)$ subject to $\theta \in \Theta$, where $\Theta$ is the set of possible parameter values.

In this course, we will work with four examples of extremum estimators: MLE, NLS, GMM and CMD.
Example 1: Maximum Likelihood Estimator (MLE)
Suppose we have a random sample $(Z_1, \ldots, Z_N)$ with p.d.f. $f(Z \mid \theta_0)$ equal to some member of a family of p.d.f.s $f(Z \mid \theta)$. The MLE maximizes
$$\hat{Q}_N(\theta) = \frac{1}{N}\sum_{i=1}^N \ln f(Z_i \mid \theta)$$
where $\hat{Q}_N(\theta)$ is the normalized log-likelihood function.
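A minimal MLE sketch, under an assumed Exponential($\theta_0$) model with density $f(z \mid \theta) = \theta e^{-\theta z}$ (the family and sample size are illustrative choices, not from the notes). A grid maximizer of the normalized log-likelihood agrees with the closed-form maximizer $1/\bar{z}$:

```python
import numpy as np

# Assumed model: z ~ Exponential(theta0) with f(z|theta) = theta*exp(-theta*z).
rng = np.random.default_rng(1)
theta0 = 1.5
z = rng.exponential(1.0 / theta0, 5_000)

def Q_N(theta):
    """Normalized log-likelihood (1/N) * sum_i log f(z_i | theta)."""
    return float(np.mean(np.log(theta) - theta * z))

# Maximize Q_N over a grid; for this family the maximizer is 1 / sample mean.
grid = np.linspace(0.1, 5.0, 2_000)
theta_grid = float(grid[np.argmax([Q_N(t) for t in grid])])
theta_closed = float(1.0 / z.mean())
```

Both estimates land near the true rate $\theta_0 = 1.5$.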
Example 2: Nonlinear Least Squares (NLS)
We have a random sample $(Y_i, X_i)_{i=1}^N$ with $E[Y \mid X] = h(X, \theta_0)$. The estimator maximizes
$$\hat{Q}_N(\theta) = -\frac{1}{N}\sum_{i=1}^N \left[Y_i - h(X_i, \theta)\right]^2$$
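A minimal NLS sketch with an assumed conditional mean $h(x, \theta) = e^{\theta x}$; the design and error distribution are illustrative assumptions, not from the notes:

```python
import numpy as np

# Assumed design: E[y|x] = exp(theta0 * x) with additive normal noise.
rng = np.random.default_rng(2)
theta0 = 0.5
x = rng.uniform(0.0, 2.0, 4_000)
y = np.exp(theta0 * x) + rng.normal(0.0, 0.1, 4_000)

def Q_N(theta):
    """Minus the mean squared residual; the NLS estimator maximizes this."""
    return float(-np.mean((y - np.exp(theta * x)) ** 2))

grid = np.linspace(0.0, 1.0, 2_000)
theta_nls = float(grid[np.argmax([Q_N(t) for t in grid])])
```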
Example 3: Generalized Method of Moments (GMM)
Suppose that there is a vector of moment functions $g(Z, \theta)$ such that the population moments satisfy
$$E\left[g(Z, \theta_0)\right] = 0$$
The GMM estimator minimizes a squared Euclidean distance of the sample moments from their population analog (zero).
Let $\hat{W}$ be a positive semi-definite matrix, so that $\left(m'\hat{W}m\right)^{1/2}$ is a measure of the distance from $m$ to zero. The GMM estimator maximizes
$$\hat{Q}_N(\theta) = -\left[\frac{1}{N}\sum_{i=1}^N g(Z_i, \theta)\right]'\hat{W}\left[\frac{1}{N}\sum_{i=1}^N g(Z_i, \theta)\right]$$
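A small GMM sketch for a scalar parameter with two (overidentifying) moment conditions, assuming $z \sim N(\theta_0, 1)$ so that $E[z - \theta_0] = 0$ and $E[z^2 - \theta_0^2 - 1] = 0$; the model and the identity weighting matrix are illustrative choices:

```python
import numpy as np

# Assumed model: z ~ N(theta0, 1); two moment conditions identify theta.
rng = np.random.default_rng(3)
theta0 = 1.0
z = rng.normal(theta0, 1.0, 5_000)
W = np.eye(2)  # identity weighting matrix (an illustrative choice)

def g_bar(theta):
    """Sample average of the moment functions g(z, theta)."""
    return np.array([np.mean(z - theta), np.mean(z**2 - theta**2 - 1.0)])

def Q_N(theta):
    m = g_bar(theta)
    return float(-m @ W @ m)  # maximized by the GMM estimator

grid = np.linspace(0.0, 2.0, 4_000)
theta_gmm = float(grid[np.argmax([Q_N(t) for t in grid])])
```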
Example 4: Classical Minimum Distance Estimator (CMD)
Suppose that there is a vector of estimators $\hat{\pi} \to_p \pi_0$ and a vector of functions $h(\theta)$ with $\pi_0 = h(\theta_0)$.
An estimator of $\theta$ can be constructed by maximizing
$$\hat{Q}_N(\theta) = -\left[\hat{\pi} - h(\theta)\right]'\hat{W}\left[\hat{\pi} - h(\theta)\right]$$
Remarks

There is a different framework, minimum distance estimation: the class of estimators such that $\hat{\theta}$ maximizes $\hat{Q}_N(\theta)$ subject to $\theta \in \Theta$, where
$$\hat{Q}_N(\theta) = -\hat{g}_N(\theta)'\hat{W}\hat{g}_N(\theta)$$
with $\hat{g}_N(\theta)$ a vector of data and parameters such that $\hat{g}_N(\theta_0) \to_p 0$, and $\hat{W}$ positive definite.
GMM and CMD are special cases of minimum distance.
This framework is useful for obtaining the asymptotic distribution of GMM and CMD.
General idea: if $\hat{Q}_N(\theta)$ converges in probability to $Q_0(\theta)$ for every $\theta$, and $Q_0(\theta)$ is maximized at the true parameter $\theta_0$, then the limit of the maximizer ($\hat{\theta}$) should be the maximizer of the limit ($\theta_0$), under some regularity conditions.

To get consistency of an extremum estimator, we need to define uniform convergence in probability.
Uniform convergence in probability: $\hat{Q}_N(\theta)$ converges uniformly in probability to $Q_0(\theta)$ if
$$\sup_{\theta \in \Theta}\left|\hat{Q}_N(\theta) - Q_0(\theta)\right| \to_p 0$$
Theorem. If there is a function $Q_0(\theta)$ such that
(i) $Q_0(\theta)$ is uniquely maximized at $\theta_0$;
(ii) $\Theta$ is compact;
(iii) $Q_0(\theta)$ is continuous;
(iv) $\hat{Q}_N(\theta)$ converges uniformly in probability to $Q_0(\theta)$;
then $\hat{\theta} \to_p \theta_0$.
Comments on this theorem:

Condition (i) is the identification condition. It is related to the general idea of identification: the distribution of the data at the true parameter is different from the distribution at any other possible parameter value.

Condition (ii) is very important, and strong. It requires that bounds on the true parameter value are known. The practice of ignoring the compactness restriction is justified for estimators where compactness can be dropped without affecting consistency. One nice result (next theorem) applies when the objective function is concave.

Conditions (iii) and (iv) are the regularity conditions for consistency. These assumptions are satisfied if the moments of certain functions exist and there is some continuity in $\hat{Q}_N(\theta)$ or in the distribution of the data.
Theorem. If there is a function $Q_0(\theta)$ such that
(i) $Q_0(\theta)$ is uniquely maximized at $\theta_0$;
(ii) $\theta_0$ is an element of the interior of a convex set $\Theta$ and $\hat{Q}_N(\theta)$ is concave;
(iii) $\hat{Q}_N(\theta) \to_p Q_0(\theta)$ for all $\theta \in \Theta$;
then $\hat{\theta}$ exists with probability approaching one and $\hat{\theta} \to_p \theta_0$.
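The consistency argument can be checked numerically. A minimal sketch under the assumed Exponential($\theta_0$) log-likelihood (an illustrative model, not from the notes): here $\hat{Q}_N(\theta) = \ln\theta - \theta\bar{z}$ converges uniformly on a grid to $Q_0(\theta) = \ln\theta - \theta/\theta_0$, whose unique maximizer is $\theta_0$, and the grid maximizer of $\hat{Q}_N$ tracks it.

```python
import numpy as np

# Assumed model: z ~ Exponential(theta0); Q_hat depends on z only through z_bar.
rng = np.random.default_rng(4)
theta0 = 2.0
grid = np.linspace(0.5, 5.0, 500)

def sup_gap_and_argmax(N):
    z_bar = rng.exponential(1.0 / theta0, N).mean()
    Q_hat = np.log(grid) - grid * z_bar     # (1/N) sum_i log f(z_i | theta)
    Q_0 = np.log(grid) - grid / theta0      # pointwise probability limit
    # sup-norm gap over the grid, and the grid maximizer of Q_hat
    return float(np.max(np.abs(Q_hat - Q_0))), float(grid[np.argmax(Q_hat)])

gap_small, argmax_small = sup_gap_and_argmax(50)
gap_large, argmax_large = sup_gap_and_argmax(50_000)
```

As $N$ grows the sup-norm gap collapses and the maximizer of $\hat{Q}_N$ converges to $\theta_0 = 2$.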
Idea: in large samples, estimators are approximately equal to linear combinations of sample averages, so we can apply the CLT and the LLN.

Let us assume that $\theta_0$ is in the interior of $\Theta$, which means that $\Theta$ must have nonempty interior. Since $\hat{\theta} \to_p \theta_0$, $\hat{\theta}$ is in the interior of $\Theta$ with probability approaching one. If $\hat{Q}_N(\theta)$ is continuously differentiable, then (with probability approaching one) $\hat{\theta}$ solves the FOC
$$s_N(Z, \hat{\theta}) = 0$$
where $s_N(Z, \hat{\theta})$ is the vector of partial derivatives of $\hat{Q}_N(\theta)$.
If $\hat{Q}_N(\theta)$ is twice continuously differentiable, then we can expand the FOC around $\theta_0$:
$$s_N(Z_i, \hat{\theta}) = s_N(Z_i, \theta_0) + H_N(\tilde{\theta}, Z)\left(\hat{\theta} - \theta_0\right)$$
where $H_N(\tilde{\theta}, Z_i)$ is a matrix of second derivatives evaluated at a mean value $\tilde{\theta}$. Since this mean value lies between $\hat{\theta}$ and $\theta_0$, it must converge in probability to $\theta_0$. Combining the results above,
$$0 = \frac{1}{\sqrt{N}}\sum_{i=1}^N s(Z_i, \theta_0) + \left[\frac{1}{N}\sum_{i=1}^N H_N(\tilde{\theta}, Z_i)\right]\sqrt{N}\left(\hat{\theta} - \theta_0\right)$$
Lemma: if $Z_i$ is i.i.d., $H(Z, \theta)$ is continuous at $\theta_0$ with probability one, and there is a neighborhood $\mathcal{N}$ of $\theta_0$ such that $E\left[\sup_{\theta \in \mathcal{N}}\|H(Z, \theta)\|\right] < \infty$, then for any $\tilde{\theta} \to_p \theta_0$,
$$\frac{1}{N}\sum_{i=1}^N H(Z_i, \tilde{\theta}) \to_p E\left[H(Z, \theta_0)\right]$$
We use this lemma to show that
$$\frac{1}{N}\sum_{i=1}^N H_N(\tilde{\theta}, Z_i) \to_p E\left[H(Z, \theta_0)\right]$$
If $H_0 \equiv E\left[H(Z, \theta_0)\right]$ is nonsingular, then $\frac{1}{N}\sum_{i=1}^N H_N(\tilde{\theta}, Z_i)$ is nonsingular with probability approaching one, and
$$\sqrt{N}\left(\hat{\theta} - \theta_0\right) = -\left[\frac{1}{N}\sum_{i=1}^N H_N(\tilde{\theta}, Z_i)\right]^{-1}\left(\frac{1}{\sqrt{N}}\sum_{i=1}^N s(Z_i, \theta_0)\right)$$
Since $\frac{1}{\sqrt{N}}\sum_{i=1}^N s(Z_i, \theta_0)$ is the average of i.i.d. random vectors with mean zero, multiplied by $\sqrt{N}$, we can apply the CLT to this term.
At the end,
$$\sqrt{N}\left(\hat{\theta} - \theta_0\right) \to_d N\left(0,\; H_0^{-1}\Sigma H_0^{-1}\right)$$
where $\Sigma = E\left[s(Z_i, \theta_0)\,s(Z_i, \theta_0)'\right]$.
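The sandwich limit can be checked by Monte Carlo. A sketch under the assumed Exponential($\theta_0$) MLE (illustrative model, not from the notes): there the score is $s = 1/\theta - z$, so $\Sigma = \operatorname{Var}(z) = 1/\theta_0^2$, $H_0 = -1/\theta_0^2$, and $H_0^{-1}\Sigma H_0^{-1} = \theta_0^2$.

```python
import numpy as np

# Assumed model: z ~ Exponential(theta0); MLE is 1 / sample mean.
# Theory: Avar of sqrt(N)*(theta_hat - theta0) is theta0**2 = 4 here.
rng = np.random.default_rng(5)
theta0, N, reps = 2.0, 2_000, 2_000

draws = rng.exponential(1.0 / theta0, (reps, N))
theta_hat = 1.0 / draws.mean(axis=1)       # MLE in each replication
scaled = np.sqrt(N) * (theta_hat - theta0)

avar_mc = float(scaled.var())              # Monte Carlo estimate of the Avar
```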
We will get the asymptotically linear representation of each estimator:
$$\sqrt{N}\left(\hat{\theta} - \theta_0\right) = \frac{\sum_{i=1}^N \psi(Z_i)}{\sqrt{N}} + o_p(1)$$
where $E[\psi(Z)] = 0$ and $E\left[\psi(Z)\psi(Z)'\right]$ exists. Asymptotic normality of $\hat{\theta}$ results from the CLT applied to $\frac{\sum_{i=1}^N \psi(Z_i)}{\sqrt{N}}$.

Influence function: $\psi(Z_i)$.

For asymptotic normality, we have two basic results: one for extremum estimators, and another one for minimum distance estimators.
Theorem. Suppose that $\hat{\theta}$ maximizes $\hat{Q}_N(\theta)$ subject to $\theta \in \Theta$, and $\hat{\theta} \to_p \theta_0$. If
(i) $\theta_0 \in \operatorname{interior}(\Theta)$;
(ii) $\hat{Q}_N(\theta)$ is twice continuously differentiable in a neighborhood $\mathcal{N}$ of $\theta_0$;
(iii) $\sqrt{N}\,\nabla_\theta \hat{Q}_N(\theta_0) \to_d N(0, \Sigma)$;
(iv) there is $H(\theta)$ that is continuous at $\theta_0$ and $\sup_{\theta \in \mathcal{N}}\left\|\nabla_{\theta\theta}\hat{Q}_N(\theta) - H(\theta)\right\| \to_p 0$;
(v) $H \equiv H(\theta_0)$ is nonsingular;
then
$$\sqrt{N}\left(\hat{\theta} - \theta_0\right) \to_d N\left(0,\; H^{-1}\Sigma H^{-1}\right)$$
Theorem. Suppose that $\hat{\theta}$ maximizes $\hat{Q}_N(\theta)$ subject to $\theta \in \Theta$, where
$$\hat{Q}_N(\theta) = -\hat{g}_N(\theta)'\hat{W}\hat{g}_N(\theta)$$
with $\hat{\theta} \to_p \theta_0$, $\hat{W} \to_p W$, and $W$ positive semi-definite; and
Theorem (continued).
(i) $\theta_0 \in \operatorname{interior}(\Theta)$;
(ii) $\hat{g}_N(\theta)$ is continuously differentiable in a neighborhood $\mathcal{N}$ of $\theta_0$;
(iii) $\sqrt{N}\,\hat{g}_N(\theta_0) \to_d N(0, \Lambda)$;
(iv) there is $G(\theta)$ that is continuous at $\theta_0$ and $\sup_{\theta \in \mathcal{N}}\left\|\nabla_\theta \hat{g}_N(\theta) - G(\theta)\right\| \to_p 0$;
(v) for $G \equiv G(\theta_0)$, $G'WG$ is nonsingular;
then
$$\sqrt{N}\left(\hat{\theta} - \theta_0\right) \to_d N\left(0,\; \left(G'WG\right)^{-1}G'W\Lambda WG\left(G'WG\right)^{-1}\right)$$
A two-step estimator is one that depends on some preliminary, "first-step" estimator of a parameter vector.

The feasible GLS estimator and the IV estimator are examples of two-step estimators.

Question: does the first step affect the asymptotic variance of the second? If it does, how?

A general type of estimator $\hat{\theta}$ is one that, with probability approaching one, solves
$$\frac{1}{N}\sum_{i=1}^N s(Z_i, \theta, \hat{\gamma}) = 0$$
where $s$ is a vector of functions with the same dimension as $\theta$, and $\hat{\gamma}$ is a first-step estimator.
The first question is: when will $\hat{\theta}$ be consistent for $\theta_0$?
To get identification, we need to know about the asymptotic behavior of $\hat{\gamma}$.

General assumption: $\hat{\gamma} \to_p \gamma^*$.
Note that $\gamma^*$ does not need to be a parameter indexing some interesting feature of the distribution.

Example: two-stage least squares. We ask that $\hat{\Pi} \to_p \Pi^*$; we did not ask that $\hat{\Pi} \to_p \Pi_0$, where
$$X = Z\,\Pi_0 + v$$
Identification condition: $Q_0(\theta, \gamma^*)$ is uniquely maximized at $\theta_0$.
The consistency result is the same as before, but using $Q_0(\theta, \gamma^*)$.

Theorem. If there is a function $Q_0(\theta, \gamma^*)$ such that
(i) $Q_0(\theta, \gamma^*)$ is uniquely maximized at $\theta_0$;
(ii) $\Theta$ is compact;
(iii) $Q_0(\theta, \gamma^*)$ is continuous;
(iv) $\hat{Q}_N(\theta, \hat{\gamma})$ converges uniformly in probability to $Q_0(\theta, \gamma^*)$;
then $\hat{\theta} \to_p \theta_0$.
Asymptotic Normality

Two cases can happen:
1. The asymptotic variance of $\sqrt{N}(\hat{\theta} - \theta_0)$ does not depend on the asymptotic variance of $\sqrt{N}(\hat{\gamma} - \gamma^*)$.
2. The asymptotic variance of $\sqrt{N}(\hat{\theta} - \theta_0)$ must be adjusted to account for the first-stage estimation of $\gamma^*$.

Question: when can we ignore the first-stage estimation error?
Using arguments similar to the ones above,
$$\sqrt{N}\left(\hat{\theta} - \theta_0\right) = -H(\theta_0)^{-1}\left(\frac{1}{\sqrt{N}}\sum_{i=1}^N s(Z_i, \theta_0, \hat{\gamma})\right) + o_p(1)$$
where $H(\theta_0) = E\left[H(Z; \theta_0, \gamma^*)\right]$.
Doing a mean value expansion for the second term,
$$\frac{1}{\sqrt{N}}\sum_{i=1}^N s(Z_i, \theta_0, \hat{\gamma}) = \frac{1}{\sqrt{N}}\sum_{i=1}^N s(Z_i, \theta_0, \gamma^*) + F_0\,\sqrt{N}\left(\hat{\gamma} - \gamma^*\right) + o_p(1)$$
where $F_0 = E\left[\nabla_\gamma s(Z, \theta_0, \gamma^*)\right]$.
If $E\left[\nabla_\gamma s(Z, \theta_0, \gamma^*)\right] = 0$, we can ignore the first-stage estimation error: the asymptotic variance of $\sqrt{N}(\hat{\theta} - \theta_0)$ is the same as if $\gamma^*$ were plugged in.

When this condition fails, we need to adjust the variance of $\sqrt{N}(\hat{\theta} - \theta_0)$.
To do the adjustment, we get the first-order representation of $\sqrt{N}(\hat{\gamma} - \gamma^*)$:
$$\sqrt{N}\left(\hat{\gamma} - \gamma^*\right) = \frac{\sum_{i=1}^N \xi(Z_i)}{\sqrt{N}} + o_p(1)$$
with $E\left[\xi(Z_i)\right] = 0$.
Using this linear representation, we can write
$$\sqrt{N}\left(\hat{\theta} - \theta_0\right) = -H(\theta_0)^{-1}\left(\frac{1}{\sqrt{N}}\sum_{i=1}^N g(Z_i, \theta_0, \gamma^*)\right) + o_p(1)$$
where $g(Z_i, \theta_0, \gamma^*) \equiv s(Z_i, \theta_0, \gamma^*) + F_0\,\xi(Z_i)$. Note that
$$E\left[g(Z_i, \theta_0, \gamma^*)\right] = 0$$
In this case,
$$\operatorname{Avar}\left[\sqrt{N}\left(\hat{\theta} - \theta_0\right)\right] = H_0^{-1} D H_0^{-1}$$
where $D \equiv E\left[g(Z_i, \theta_0, \gamma^*)\,g(Z_i, \theta_0, \gamma^*)'\right] = \operatorname{Var}\left[g(Z_i, \theta_0, \gamma^*)\right]$.
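The adjustment matters whenever $F_0 \neq 0$. A Monte Carlo sketch under an assumed design (not from the notes): $(x, y)$ bivariate normal with correlation $\rho$, first stage $\hat{\gamma} = \bar{x} \to_p \gamma^* = E[x]$, and second-step moment $s(Z, \theta, \gamma) = (y - \gamma) - \theta$, so $\hat{\theta} = \bar{y} - \bar{x}$, $H = -1$, and $F_0 = -1 \neq 0$. The naive variance $\operatorname{Var}(y - \gamma^*) = 1$ ignores the first stage; the adjusted variance is $\operatorname{Var}(g)$ with $g = s + F_0\xi = (y - x) - \theta_0$, i.e. $2(1 - \rho)$.

```python
import numpy as np

# Assumed design: (x, y) bivariate normal, corr rho; theta0 = E[y] - E[x] = 0.
rng = np.random.default_rng(9)
N, reps, rho = 2_000, 2_000, 0.8
cov = np.array([[1.0, rho], [rho, 1.0]])
draws = rng.multivariate_normal([0.0, 0.0], cov, (reps, N))
x, y = draws[..., 0], draws[..., 1]

# Second step solves (1/N) sum s(Z_i, theta, gamma_hat) = 0, giving
# theta_hat = y_bar - x_bar; the first stage enters through x_bar.
theta_hat = y.mean(axis=1) - x.mean(axis=1)
avar_mc = float(N * theta_hat.var())

avar_naive = 1.0                    # Var(y - gamma*): ignores the first stage
avar_adjusted = 2.0 * (1.0 - rho)   # Var(g) with g = (y - x) - theta0
```

With $\rho = 0.8$ the correct variance is 0.4, well below the naive value 1: ignoring a first stage can bias the variance in either direction.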
Let us consider the case when we do not have a nuisance parameter $\gamma$:
$$\sqrt{N}\left(\hat{\theta} - \theta_0\right) \to_d N\left(0,\; H_0^{-1}\Sigma H_0^{-1}\right)$$
In this case, we need to get estimators for $H_0$ and $\Sigma$.

Under some regularity conditions that ensure uniform convergence of the matrix of second derivatives (condition (iv)),
$$\frac{1}{N}\sum_{i=1}^N H(Z_i, \hat{\theta}) \to_p H_0$$

Advantage: always available in problems with twice continuously differentiable objective functions.
Drawbacks: requires calculation of second derivatives, and it is NOT guaranteed to be positive semi-definite.
If we have more information about the structure of our problem, we can use a different estimator. Suppose that we can partition $Z$ into $X$ and $Y$, and that $\theta_0$ indexes some feature of the distribution of $Y$ given $X$. Define
$$A(X, \theta_0) = E\left[H(Z, \theta_0) \mid X\right]$$
$A(X, \theta_0)$ is a function of $X$, and by the law of iterated expectations
$$E\left[A(X, \theta_0)\right] = E\left[H(Z, \theta_0)\right] = H_0$$
Under standard regularity conditions,
$$\frac{1}{N}\sum_{i=1}^N A(X_i, \hat{\theta}) \to_p H_0$$
Getting an estimator for $\Sigma$ is easy:
$$\frac{1}{N}\sum_{i=1}^N s(Z_i, \hat{\theta})\,s(Z_i, \hat{\theta})' \to_p \Sigma$$
Combining these estimators, we can consistently estimate $\operatorname{Avar}\left[\sqrt{N}(\hat{\theta} - \theta_0)\right]$:
$$\widehat{\operatorname{Avar}}\left[\sqrt{N}\left(\hat{\theta} - \theta_0\right)\right] = \hat{H}^{-1}\hat{\Sigma}\hat{H}^{-1}$$
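A plug-in sketch of $\hat{H}^{-1}\hat{\Sigma}\hat{H}^{-1}$ for the assumed Exponential($\theta_0$) MLE (an illustrative model, not from the notes), where $s(z, \theta) = 1/\theta - z$ and $H(z, \theta) = -1/\theta^2$:

```python
import numpy as np

# Assumed model: z ~ Exponential(theta0); true Avar is theta0**2 = 4.
rng = np.random.default_rng(6)
theta0, N = 2.0, 5_000
z = rng.exponential(1.0 / theta0, N)
theta_hat = 1.0 / z.mean()

s = 1.0 / theta_hat - z              # score evaluated at theta_hat
H_hat = -1.0 / theta_hat**2          # Hessian term (constant in z here)
Sigma_hat = float(np.mean(s**2))     # (1/N) sum s_i s_i'

avar_hat = Sigma_hat / H_hat**2      # H_hat^{-1} * Sigma_hat * H_hat^{-1}
se = float(np.sqrt(avar_hat / N))    # standard error of theta_hat
```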
The asymptotic standard errors are obtained from the matrix
$$\widehat{\operatorname{Avar}}\left[\hat{\theta}\right] = \hat{H}^{-1}\hat{\Sigma}\hat{H}^{-1}\big/\,N$$
which (up to the factor $1/N$) can be expressed as
$$\left[\frac{1}{N}\sum_{i=1}^N H(Z_i, \hat{\theta})\right]^{-1}\left[\frac{1}{N}\sum_{i=1}^N s(Z_i, \hat{\theta})\,s(Z_i, \hat{\theta})'\right]\left[\frac{1}{N}\sum_{i=1}^N H(Z_i, \hat{\theta})\right]^{-1}$$
or
$$\left[\frac{1}{N}\sum_{i=1}^N A(X_i, \hat{\theta})\right]^{-1}\left[\frac{1}{N}\sum_{i=1}^N s(Z_i, \hat{\theta})\,s(Z_i, \hat{\theta})'\right]\left[\frac{1}{N}\sum_{i=1}^N A(X_i, \hat{\theta})\right]^{-1}$$
In the case of a two-step estimator, we may need to adjust for the estimation error in the first stage.
If $E\left[\nabla_\gamma s(Z, \theta_0, \gamma^*)\right] = 0$, the estimator will be the same as above, but with $H(Z_i, \hat{\theta}, \hat{\gamma})$ or $A(X_i, \hat{\theta}, \hat{\gamma})$, and $s(Z_i, \hat{\theta}, \hat{\gamma})$.
If $E\left[\nabla_\gamma s(Z, \theta_0, \gamma^*)\right] \ne 0$, the asymptotic variance estimator of $\hat{\theta}$ needs to be adjusted, taking into account the asymptotic variance of $\hat{\gamma}$.
In this case, we can estimate $H_0$ using $H(Z_i, \hat{\theta}, \hat{\gamma})$ or $A(X_i, \hat{\theta}, \hat{\gamma})$, but we need to estimate $D$.

To get an estimator for $D$, first we need to estimate $F_0$. We can use
$$\hat{F} = \frac{1}{N}\sum_{i=1}^N \nabla_\gamma s(Z_i, \hat{\theta}, \hat{\gamma})$$
Then,
$$\hat{D} = \frac{1}{N}\sum_{i=1}^N g(Z_i, \hat{\theta}, \hat{\gamma})\,g(Z_i, \hat{\theta}, \hat{\gamma})'$$
where $g(Z_i, \hat{\theta}, \hat{\gamma}) \equiv s(Z_i, \hat{\theta}, \hat{\gamma}) + \hat{F}\,\xi(Z_i, \hat{\gamma})$.
Wald Test

The Wald test is easy if you know the form of the asymptotic variance. To test $Q$ restrictions
$$H_0 : c(\theta_0) = 0$$
we can form the Wald statistic
$$W \equiv c(\hat{\theta})'\left[\hat{C}\hat{V}\hat{C}'\right]^{-1}c(\hat{\theta})$$
where $\hat{V}$ is an asymptotic variance estimator of $\hat{\theta}$ and $\hat{C} = C(\hat{\theta})$, where $C(\theta)$ is the $Q \times K$ matrix of first derivatives (Jacobian) of $c(\theta)$.
If $\hat{V}$ is a robust estimator of the variance, $\theta_0 \in \operatorname{interior}(\Theta)$, and $C(\theta_0) = \nabla_\theta c(\theta_0)$ has full rank, then
$$W \to_d \chi^2_Q \quad \text{under } H_0$$
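A Wald test sketch for a single restriction in the assumed Exponential model (illustrative choices, not from the notes): $c(\theta) = \theta - 2$, so $C = 1$, and $\hat{V}$ is the sandwich estimate of the asymptotic variance of $\sqrt{N}(\hat{\theta} - \theta_0)$ (hence the explicit factor $N$ below).

```python
import numpy as np

# Assumed model: z ~ Exponential(theta_null); data generated under H0.
rng = np.random.default_rng(7)
theta_null, N = 2.0, 5_000
z = rng.exponential(1.0 / theta_null, N)
theta_hat = 1.0 / z.mean()

s = 1.0 / theta_hat - z                      # score of log(theta) - theta*z
V_hat = float(np.mean(s**2) * theta_hat**4)  # H^{-1} Sigma H^{-1}, H = -1/theta**2

c = theta_hat - theta_null                   # c(theta) = theta - 2, C = 1
W = float(N * c**2 / V_hat)                  # compare with chi2(1) critical values
```

Under $H_0$, $W$ is typically well below the 5% critical value $\chi^2_1(0.95) \approx 3.84$.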
Lagrange Multiplier

The LM test only requires estimation under the null. If the unrestricted model is difficult to estimate, LM is a good option.
Assume that there are $Q$ continuously differentiable restrictions imposed on $\theta_0$ under $H_0$.
Assume that the restrictions define a mapping $h : \mathbb{R}^{K-Q} \to \mathbb{R}^{K}$. Under the null, $\theta_0 = h(\lambda_0)$, where $\lambda_0$ is $(K-Q) \times 1$ and $\theta_0$ is $K \times 1$.
We need to assume that $\lambda_0$ is in the interior of its parameter space $\Lambda$ under $H_0$.
Assume that $h$ is twice continuously differentiable on the interior of $\Lambda$.
Let $\tilde{\lambda}$ be the solution of the constrained minimization problem
$$\min_{\lambda \in \Lambda} \sum_{i=1}^N q(Z_i, h(\lambda))$$
The constrained estimator of $\theta_0$ is simply $\tilde{\theta} \equiv h(\tilde{\lambda})$.
The LM statistic is based on the limiting distribution of
$$\frac{\sum_{i=1}^N s_i(\tilde{\theta})}{\sqrt{N}}$$
under $H_0$.
If $\tilde{\theta}$ is replaced by $\hat{\theta}$, this statistic is equal to zero.
Under the given assumptions,
$$\sqrt{N}\left(\tilde{\lambda} - \lambda_0\right) = O_p(1)$$
Using the delta method,
$$\sqrt{N}\left(\tilde{\theta} - \theta_0\right) = O_p(1)$$
A standard mean value expansion gives
$$\frac{1}{\sqrt{N}}\sum_{i=1}^N s_i(\tilde{\theta}) = \frac{1}{\sqrt{N}}\sum_{i=1}^N s_i(\theta_0) + H\,\sqrt{N}\left(\tilde{\theta} - \theta_0\right) + o_p(1)$$
under $H_0$.
Let us play with the restrictions:
$$0 = \sqrt{N}\,c(\tilde{\theta}) = \sqrt{N}\,c(\theta_0) + \ddot{C}\,\sqrt{N}\left(\tilde{\theta} - \theta_0\right)$$
where $\ddot{C}$ is the Jacobian matrix evaluated at a mean value between $\tilde{\theta}$ and $\theta_0$. Under $H_0$, $c(\theta_0) = 0$ and $\operatorname{plim}\ddot{C} = C(\theta_0) \equiv C$. Under $H_0$,
$$C\,\sqrt{N}\left(\tilde{\theta} - \theta_0\right) = o_p(1)$$
and
$$CH^{-1}\frac{1}{\sqrt{N}}\sum_{i=1}^N s_i(\tilde{\theta}) = CH^{-1}\frac{1}{\sqrt{N}}\sum_{i=1}^N s_i(\theta_0) + o_p(1)$$
We know that, by the CLT,
$$CH^{-1}\frac{1}{\sqrt{N}}\sum_{i=1}^N s_i(\theta_0) \to_d N\left(0,\; CH^{-1}\Sigma H^{-1}C'\right)$$
where
$$\Sigma = E\left[s(Z_i, \theta_0)\,s(Z_i, \theta_0)'\right]$$
Under the assumptions we impose, $CH^{-1}\Sigma H^{-1}C'$ has full rank, so
$$\left(\frac{\sum_{i=1}^N s_i(\tilde{\theta})}{\sqrt{N}}\right)'H^{-1}C'\left[CH^{-1}\Sigma H^{-1}C'\right]^{-1}CH^{-1}\left(\frac{\sum_{i=1}^N s_i(\tilde{\theta})}{\sqrt{N}}\right) \to_d \chi^2_Q$$
The score, or LM, statistic is
$$LM = \frac{1}{N}\left(\sum_{i=1}^N s_i(\tilde{\theta})\right)'\tilde{H}^{-1}\tilde{C}'\left[\tilde{C}\tilde{H}^{-1}\tilde{\Sigma}\tilde{H}^{-1}\tilde{C}'\right]^{-1}\tilde{C}\tilde{H}^{-1}\left(\sum_{i=1}^N s_i(\tilde{\theta})\right)$$
where all the estimated quantities are evaluated at $\tilde{\theta}$. Under $H_0$, $LM \to_d \chi^2_Q$.
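An LM sketch in the simplest case, where the null pins $\theta$ down completely (so $\tilde{\theta} = \theta_{\text{null}}$), for the assumed Exponential model with $q(z, \theta) = -(\ln\theta - \theta z)$; the model is an illustrative choice, not from the notes. With scalar $\theta$ and $C = 1$, the general formula collapses to $LM = \left(\sum_i s_i\right)^2 / \left(N\tilde{\Sigma}\right)$.

```python
import numpy as np

# Assumed model: z ~ Exponential(theta_null); data generated under H0.
rng = np.random.default_rng(10)
theta_null, N = 2.0, 5_000
z = rng.exponential(1.0 / theta_null, N)

s = -(1.0 / theta_null - z)                    # score of q at the restricted estimate
Sigma_tilde = float(np.mean(s**2))             # (1/N) sum s_i**2
LM = float(s.sum() ** 2 / (N * Sigma_tilde))   # compare with chi2(1) critical values
```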
Criterion Function Statistics

This statistic is convenient when both the restricted and unrestricted models are easy to estimate.
In the case of two-step estimators, we have to assume that $\hat{\gamma}$ has no effect on the asymptotic distribution of the M-estimator.
Let us consider the case in which
$$E\left[s(Z_i, \theta_0)\,s(Z_i, \theta_0)'\right] = E\left[H(Z, \theta_0)\right]$$
Note that
$$Q_N(\tilde{\theta}) - Q_N(\hat{\theta}) = \sum_{i=1}^N q(Z_i, \tilde{\theta}) - \sum_{i=1}^N q(Z_i, \hat{\theta})$$
Doing a second-order expansion,
$$\sum_{i=1}^N q(Z_i, \tilde{\theta}) - \sum_{i=1}^N q(Z_i, \hat{\theta}) = \sum_{i=1}^N s(Z_i, \hat{\theta})'\left(\tilde{\theta} - \hat{\theta}\right) + \frac{1}{2}\left(\tilde{\theta} - \hat{\theta}\right)'\left[\sum_{i=1}^N \ddot{H}_i\right]\left(\tilde{\theta} - \hat{\theta}\right)$$
where $\ddot{H}_i$ is the Hessian evaluated at a mean value between $\tilde{\theta}$ and $\hat{\theta}$. Under $H_0$,
$$\frac{\sum_{i=1}^N \ddot{H}_i}{N} = H + o_p(1) \quad \text{and} \quad \sqrt{N}\left(\tilde{\theta} - \hat{\theta}\right) = O_p(1)$$
so
$$2\left[\sum_{i=1}^N q(Z_i, \tilde{\theta}) - \sum_{i=1}^N q(Z_i, \hat{\theta})\right] = \sqrt{N}\left(\tilde{\theta} - \hat{\theta}\right)'H\,\sqrt{N}\left(\tilde{\theta} - \hat{\theta}\right) + o_p(1)$$
From before, we know that
$$\sqrt{N}\left(\tilde{\theta} - \hat{\theta}\right) = H^{-1}\frac{\sum_{i=1}^N s(Z_i, \tilde{\theta})}{\sqrt{N}} + o_p(1)$$
Using these two equations,
$$QLR \equiv 2\left[\sum_{i=1}^N q(Z_i, \tilde{\theta}) - \sum_{i=1}^N q(Z_i, \hat{\theta})\right] = \left(\frac{\sum_{i=1}^N s(Z_i, \tilde{\theta})}{\sqrt{N}}\right)'H^{-1}\left(\frac{\sum_{i=1}^N s(Z_i, \tilde{\theta})}{\sqrt{N}}\right) + o_p(1)$$
Under $H_0$, $QLR \to_d \chi^2_Q$.
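A QLR sketch for the same fully-restricted null in the assumed Exponential model (an illustrative choice, not from the notes), with criterion $q(z, \theta) = -(\ln\theta - \theta z)$ to be minimized; the information-type equality $E[ss'] = E[H]$ holds here because the likelihood is correctly specified.

```python
import numpy as np

# Assumed model: z ~ Exponential(theta_null); data generated under H0.
rng = np.random.default_rng(8)
theta_null, N = 2.0, 5_000
z = rng.exponential(1.0 / theta_null, N)

def sum_q(theta):
    """Criterion sum_i q(z_i, theta) = sum_i (theta*z_i - log(theta))."""
    return float(np.sum(theta * z - np.log(theta)))

theta_hat = 1.0 / z.mean()   # unrestricted minimizer
QLR = float(2.0 * (sum_q(theta_null) - sum_q(theta_hat)))  # ~ chi2(1) under H0
```

Since $\hat{\theta}$ minimizes the criterion, $QLR \ge 0$ by construction.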
Local Alternatives

So far, we have only derived the limiting distribution of the statistics under the null hypothesis.
We need to know their behavior under the alternative hypothesis in order to choose the test with the highest power.
Local alternative: a hypothesis under which we can approximate the finite-sample power of test statistics for alternatives "close" to $H_0$.
If $H_0 : c(\theta_0) = 0$, then a sequence of local alternatives is
$$H_1^N : c(\theta_{0,N}) = \frac{\delta_0}{\sqrt{N}}$$
where $\delta_0$ is a given $Q \times 1$ vector.
Each of the statistics has a well-defined limiting distribution under the local alternative that differs from its limiting distribution under $H_0$.
Under the local alternatives (and some regularity conditions), the Wald and LM statistics have a limiting noncentral chi-squared distribution with $Q$ degrees of freedom.
The noncentrality parameter depends on $C$, $H$, $\Sigma$, and $\delta_0$.
For various $\delta_0$, we can estimate the asymptotic local power of the test statistics.
We can compare the test statistics using their power under local alternatives.
References

Amemiya, Chapter 4.
Wooldridge, Chapter 12.
Newey, W. and D. McFadden (1994). "Large Sample Estimation and Hypothesis Testing", Handbook of Econometrics, Volume IV, Chapter 36.