Lecture 1 Maximum Likelihood


  • Maximum Likelihood

  • Much estimation theory is presented in a rather ad hoc fashion. Minimising squared errors seems a good idea, but why not minimise the absolute error or the cube of the absolute error? The answer is that there is an underlying approach which justifies a particular minimisation strategy, conditional on certain assumptions. This is the maximum likelihood principle.

  • The idea is to assume a particular model with unknown parameters; we can then define the probability of observing a given event conditional on a particular set of parameter values. Having observed a set of outcomes in the real world, we choose the parameter values which are most likely to have produced the observed results. This is maximum likelihood. In most cases it is both consistent and efficient, and it provides a standard against which to compare other estimation techniques.

  • An example: Suppose we sample a set of goods for quality and find 5 defective items in a sample of 10. What is our estimate of the proportion of bad items in the whole population? Intuitively, of course, it is 50%. Formally, in a sample of size n the probability of finding B bad items when the true proportion is p is $P = \binom{n}{B} p^{B}(1-p)^{n-B}$.

  • If the true proportion is 0.1, P = 0.0015; if it is 0.2, P ≈ 0.0264, and so on, so we could simply search for the most likely value. Or we can solve the problem analytically: setting $\partial \log P/\partial p = B/p - (n-B)/(1-p) = 0$ gives $\hat{p} = B/n$.

  • So the maximum likelihood estimate of the population proportion of bad items is 0.5. This basic procedure can be applied in many cases: once we can define the probability density function for a particular event, we have a general estimation strategy.
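
    A minimal numerical sketch of this example (plain Python; the 1% grid and the printed checks are illustrative choices, not part of the lecture):

```python
from math import comb

n, B = 10, 5                       # sample size and number of defectives from the slide

def likelihood(p):
    """P(B defectives in n draws | proportion p): the binomial probability."""
    return comb(n, B) * p**B * (1 - p)**(n - B)

# Search over a grid of candidate proportions, as suggested on the slide.
grid = [i / 100 for i in range(1, 100)]
p_hat = max(grid, key=likelihood)
print(p_hat, likelihood(0.1), likelihood(0.2))   # 0.5, ~0.0015, ~0.0264

# The analytic answer: d(log L)/dp = B/p - (n-B)/(1-p) = 0  =>  p = B/n
print(B / n)                                      # 0.5
```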

  • A general statement: Consider a sample $(X_1, \dots, X_n)$ which is drawn from a probability distribution $P(X \mid A)$, where A is a vector of parameters. If the Xs are independent, each with probability density function $P(X_i \mid A)$, the joint probability of the whole sample is

    $L(A) = \prod_{i=1}^{n} P(X_i \mid A)$

    and this may be maximised with respect to A to give the maximum likelihood estimates.

  • It is often convenient to work with the log of the likelihood function, $\log L(A) = \sum_{i=1}^{n} \log P(X_i \mid A)$, which is maximised at the same parameter values.

    The advantage of this approach is that it is extremely general, but if the model is misspecified the estimates may be particularly sensitive to that misspecification.
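
    A minimal sketch of this general recipe for an assumed i.i.d. normal sample, using SciPy for the density and the optimiser (the data and starting values are purely illustrative):

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=500)     # illustrative data

def neg_log_likelihood(params, data):
    """Negative joint log-likelihood: -sum_i log P(x_i | mu, sigma)."""
    mu, log_sigma = params                        # optimise log(sigma) so that sigma > 0
    return -np.sum(stats.norm.logpdf(data, loc=mu, scale=np.exp(log_sigma)))

# Maximise the log-likelihood by minimising its negative.
result = optimize.minimize(neg_log_likelihood, x0=[0.0, 0.0], args=(x,))
print(result.x[0], np.exp(result.x[1]))           # close to the sample mean and standard deviation
```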

  • The likelihood function for the general non-linear model: if Y is a vector of n endogenous variables with $Y_t = f(X_t, \theta) + u_t$ and $u_t \sim N(0, \Omega)$, then the likelihood function for one period is

    $L_t = (2\pi)^{-n/2}\,|\Omega|^{-1/2}\exp\{-\tfrac{1}{2} u_t' \Omega^{-1} u_t\}$.

  • Dropping some constants and taking logs, the sample log-likelihood is $\log L = -\tfrac{T}{2}\log|\Omega| - \tfrac{1}{2}\sum_t u_t' \Omega^{-1} u_t$. If the covariance structure is constant and has zero off-diagonal elements, this reduces to single-equation OLS.

  • Two important matrices. The efficient score matrix is made up of the first derivatives of the log-likelihood at each point in time, $s_t = \partial \log L_t / \partial \theta$, stacked into a $T \times k$ matrix S. Its outer product provides a measure of the dispersion of the ML estimates.

  • The information matrix (Hessian) is defined as $I(\theta) = -E\big[\partial^2 \log L / \partial\theta\,\partial\theta'\big]$. This is a measure of how `pointy' the likelihood function is. The variance of the parameters is given either by the inverse Hessian or by the inverse of the outer product of the score matrix, $(S'S)^{-1}$.
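
    A rough numerical illustration of the two variance estimates for an assumed i.i.d. normal sample (the finite-difference scores and the use of BFGS's built-in inverse-Hessian approximation are pragmatic choices, not the lecture's method):

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)
x = rng.normal(1.0, 2.0, size=400)                     # illustrative data

def loglik_t(params, xi):
    """Log-likelihood contribution of a single observation."""
    mu, log_sigma = params
    return stats.norm.logpdf(xi, loc=mu, scale=np.exp(log_sigma))

def neg_loglik(params):
    return -np.sum(stats.norm.logpdf(x, loc=params[0], scale=np.exp(params[1])))

res = optimize.minimize(neg_loglik, x0=[0.0, 0.0], method="BFGS")
theta = res.x

# Efficient score matrix: one row of first derivatives per observation, at the ML point.
S = np.array([optimize.approx_fprime(theta, loglik_t, 1e-6, xi) for xi in x])

# Outer-product-of-the-score covariance estimate (BHHH) ...
cov_opg = np.linalg.inv(S.T @ S)
# ... versus the inverse-Hessian estimate (BFGS maintains an approximation to it).
cov_hess = res.hess_inv
print(np.sqrt(np.diag(cov_opg)))
print(np.sqrt(np.diag(cov_hess)))                      # the two sets of standard errors should be close
```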

  • The Cramer-Rao lower bound. This is an important theorem which establishes the efficiency of the ML estimate. The Cramer-Rao lower bound is the smallest variance which can in theory be achieved by an unbiased estimator: $\operatorname{var}(\hat\theta) \ge I(\theta)^{-1}$, which is the Cramer-Rao inequality. ML attains this bound asymptotically, so any other estimation technique can at best only equal it.

  • Concentrating the likelihood function: suppose we split the parameter vector into two sub-vectors, $\theta = (\theta_1, \theta_2)$.

    Now suppose that, if we knew $\theta_1$, we could sometimes derive a formula for the ML estimate of $\theta_2$, e.g. $\hat\theta_2 = g(\theta_1)$.

    Then we could write the likelihood function as $L_c(\theta_1) = L(\theta_1, g(\theta_1))$; this is the concentrated likelihood function. This process is often very useful in practical estimation as it reduces the number of parameters which need to be estimated.

  • An example of concentrating the LF. The log-likelihood function for a standard single-variable normal non-linear model, $y_t = f(x_t, \theta) + e_t$ with $e_t \sim N(0, \sigma^2)$, is

    $\log L = -\tfrac{T}{2}\log(2\pi) - \tfrac{T}{2}\log\sigma^2 - \tfrac{1}{2\sigma^2}\sum_t e_t^2$.

    We can concentrate this with respect to the variance as follows. The first-order condition for a maximum with respect to the variance is $\partial \log L/\partial\sigma^2 = -\tfrac{T}{2\sigma^2} + \tfrac{1}{2\sigma^4}\sum_t e_t^2 = 0$,

  • which implies that $\hat\sigma^2 = \tfrac{1}{T}\sum_t e_t^2$, so the concentrated log-likelihood becomes $\log L_c = -\tfrac{T}{2}(\log 2\pi + 1) - \tfrac{T}{2}\log\big(\tfrac{1}{T}\sum_t e_t^2\big)$.
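
    A small sketch of concentration in practice, assuming a simple linear-in-parameters model with normal errors (the data and optimiser settings are illustrative): maximising the full likelihood over (a, b, sigma) and the concentrated likelihood over (a, b) gives the same estimates of a and b.

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, size=200)
y = 1.0 + 0.5 * x + rng.normal(0, 1.0, size=200)      # illustrative data

def full_negll(params):
    """Full negative log-likelihood in (a, b, log sigma)."""
    a, b, log_s = params
    e = y - a - b * x
    s2 = np.exp(2 * log_s)
    return 0.5 * len(y) * np.log(2 * np.pi * s2) + np.sum(e**2) / (2 * s2)

def conc_negll(params):
    """Concentrated negative log-likelihood: sigma^2 replaced by sum(e^2)/T."""
    a, b = params
    e = y - a - b * x
    T = len(y)
    return 0.5 * T * (np.log(2 * np.pi) + 1) + 0.5 * T * np.log(np.mean(e**2))

full = optimize.minimize(full_negll, x0=[0.0, 0.0, 0.0])
conc = optimize.minimize(conc_negll, x0=[0.0, 0.0])
print(full.x[:2], conc.x)    # the (a, b) estimates coincide; one fewer parameter to search over
```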

  • Prediction error decomposition

    We assumed above that the observations were independent. This will not generally be true, especially in the presence of lagged dependent variables. However, the prediction error decomposition allows us to extend standard ML procedures to dynamic models.

    From the basic definition of conditional probability, $P(A, B) = P(A \mid B)\,P(B)$. This may be applied directly to the likelihood function:

  • $L(Y_T, Y_{T-1}, \dots, Y_1) = P(Y_T \mid Y_{T-1}, \dots, Y_1)\,P(Y_{T-1}, \dots, Y_1)$. The first term is the conditional probability of $Y_T$ given all past values. We can then condition the second term in the same way, and so on, to give $\log L = \sum_t \log P(Y_t \mid Y_{t-1}, \dots, Y_1)$; that is, a series of one-step-ahead prediction errors conditional on actual lagged Y.

  • Testing hypotheses.

    If a restriction on a model is acceptable, this means that the reduction in the likelihood value caused by imposing the restriction is not `significant'.

    This gives us a very general basis for constructing hypothesis tests, but to implement the tests we need some definite metric to judge them against, i.e. what counts as significant.

  • [Figure: the likelihood function plotted against a parameter, marking the unrestricted maximum likelihood value Lu and the restricted value LR]

  • Consider how the likelihood function changes as we move around the parameter space. We can evaluate this by taking a Taylor series expansion around the ML point,

    $\log L(\theta) = \log L(\hat\theta) + (\theta-\hat\theta)'\frac{\partial \log L}{\partial \theta} + \tfrac{1}{2}(\theta-\hat\theta)'\frac{\partial^2 \log L}{\partial \theta\,\partial \theta'}(\theta-\hat\theta) + \dots$

    and of course the first derivative is zero at the maximum.

  • So $\log L(\hat\theta) - \log L(\theta) \approx -\tfrac{1}{2}(\theta-\hat\theta)'\frac{\partial^2 \log L}{\partial \theta\,\partial \theta'}(\theta-\hat\theta)$, and it is possible to demonstrate that, when $\theta$ satisfies m valid restrictions, twice this difference is asymptotically distributed as $\chi^2(m)$, where m is the number of restrictions, and so

  • $2(\log L_u - \log L_r) \sim \chi^2(m)$. This gives us a metric for judging the significance of likelihood-based tests.

  • Three test procedures.

    To construct the basic test we need an estimate of the likelihood value at the unrestricted point and at the restricted point, and we compare the two. There are three ways of deriving this.

    The likelihood ratio test: we simply estimate the model twice, once unrestricted and once restricted, and compare the two likelihood values.

    The Wald test: this estimates only the unrestricted point and uses an estimate of the second derivatives to `guess' at the restricted likelihood value. Standard `t' tests are a form of Wald test.

    The Lagrange multiplier test: this estimates only the restricted model and again uses an estimate of the second derivatives to guess at the unrestricted likelihood value.

  • If the likelihood function were quadratic then LR = LM = W. In general, however, W ≥ LR ≥ LM.
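
    A small sketch comparing the three tests in a setting where they have simple closed forms: a linear regression with normal errors and the variance concentrated out. The data, the single zero restriction, and the residual-sum-of-squares formulas are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 200
x = rng.normal(size=T)
y = 0.5 + 0.3 * x + rng.normal(size=T)         # illustrative data; H0: the slope is zero

X_u = np.column_stack([np.ones(T), x])         # unrestricted regressors
X_r = np.ones((T, 1))                           # restricted: constant only

def rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

rss_u, rss_r = rss(X_u, y), rss(X_r, y)

# Gaussian log-likelihoods with sigma^2 concentrated out (constants cancel in the tests).
LR = T * np.log(rss_r / rss_u)                  # likelihood ratio: estimate both models
W = T * (rss_r - rss_u) / rss_u                # Wald: needs only the unrestricted fit
LM = T * (rss_r - rss_u) / rss_r               # Lagrange multiplier: needs only the restricted fit
print(LR, W, LM)                                # all ~ chi-squared(1) under H0, and W >= LR >= LM
```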

  • A special form of the LM test

    The LM test can be calculated in a particularly convenient way under certain circumstances.

    The general form of the LM test is $LM = s(\tilde\theta)'\,I(\tilde\theta)^{-1}\,s(\tilde\theta)$, where s is the score vector and I the information matrix, both evaluated at the restricted estimates $\tilde\theta$. Now suppose the model has normally distributed errors e with variance $\sigma^2$, and that the subset of parameters $\theta_1$ is fixed according to a set of restrictions g = 0 (G being the matrix of derivatives of the model with respect to these parameters, evaluated at the restricted estimates).

  • Now the score is proportional to $G'e$ and the relevant block of the information matrix is approximately $G'G/\sigma^2$, and so the LM test becomes $LM = e'G(G'G)^{-1}G'e/\sigma^2$.

  • And, replacing $\sigma^2$ with its restricted estimate $e'e/T$, $LM = T\,e'G(G'G)^{-1}G'e / e'e$, which may be interpreted as $TR^2$ from a regression of e on G. This form is used in many tests for serial correlation, heteroskedasticity, functional form, etc.

    Here e is the vector of residuals from the restricted model and G contains the derivatives of the model with respect to the parameters, evaluated at the restricted estimates.

  • An Example: Serial correlation

    Suppose $y_t = x_t'\beta + u_t$ with $u_t = \rho_1 u_{t-1} + \dots + \rho_m u_{t-m} + e_t$. The restriction that $\rho_1 = \dots = \rho_m = 0$ may be tested with an LM test as follows: estimate the model without serial correlation and save the residuals u; then regress u on the original regressors and on $u_{t-1}, \dots, u_{t-m}$; then $TR^2$ from this auxiliary regression is an LM(m) test for serial correlation.
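
    A sketch of this TR-squared procedure on simulated data (the regressors, the lag order m, and the use of a centred R-squared are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
T, m = 300, 2                                    # sample size and lag order of the test
x = rng.normal(size=T)
y = 1.0 + 0.5 * x + rng.normal(size=T)           # illustrative data (no serial correlation under H0)

def ols_resid(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Step 1: estimate the model without serial correlation and save the residuals u.
X = np.column_stack([np.ones(T), x])
u = ols_resid(X, y)

# Step 2: regress u on the original regressors and m lags of u (missing lags padded with zeros).
lags = np.column_stack([np.concatenate([np.zeros(i), u[:-i]]) for i in range(1, m + 1)])
Z = np.column_stack([X, lags])
e = ols_resid(Z, u)
R2 = 1 - np.sum(e**2) / np.sum((u - u.mean()) ** 2)

LM = T * R2                                       # ~ chi-squared(m) under the null of no serial correlation
print(LM)
```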

  • Quasi Maximum Likelihood

    ML rests on the assumption that the errors follow a particular distribution (OLS is only ML if the errors are normal, etc.). What happens if we make the wrong assumption?

    White (1982), Econometrica, 50(1), p. 1, demonstrates that, under very broad assumptions about the misspecification of the error process, ML is still a consistent estimator. The estimation is then referred to as Quasi Maximum Likelihood.

    But the covariance matrix of the estimates is no longer the standard ML one; instead it is given by the `sandwich' form $\operatorname{cov}(\hat\theta) = H^{-1}(S'S)H^{-1}$, where H is the Hessian and S the score matrix. Generally we may construct valid Wald and LM tests by using this corrected covariance matrix, but the LR test is invalid as it works directly from the value of the likelihood function.

  • Numerical optimisation

    In simple cases (e.g. OLS) we can calculate the maximum likelihood estimates analytically, but in many cases we cannot; we then resort to numerical optimisation of the likelihood function.

    This amounts to hill climbing in parameter space. There are many algorithms, and many computer programmes implement them for you.

    It is useful to understand the broad steps of the procedure.

  • 1. set an arbitrary initial set of parameters.

    2. determine a direction of movement

    3. determine a step length to move

    4. examine some termination criteria and either stop or go back to 2.
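
    A bare-bones sketch of these four steps using a Newton-type update (the line search, tolerances, and the toy normal-mean example are illustrative choices, not a production algorithm):

```python
import numpy as np

def newton_maximise(loglik, grad, hess, theta0, tol=1e-8, max_iter=100):
    """Bare-bones Newton hill climbing on a log-likelihood (illustrative only)."""
    theta = np.asarray(theta0, dtype=float)        # 1. arbitrary starting values
    for _ in range(max_iter):
        g = grad(theta)
        H = hess(theta)
        step = -np.linalg.solve(H, g)              # 2. direction (and raw length) of movement
        lam = 1.0
        while loglik(theta + lam * step) < loglik(theta) and lam > 1e-6:
            lam /= 2                                # 3. shrink the step until the likelihood improves
        theta = theta + lam * step
        if np.max(np.abs(g)) < tol:                # 4. termination criterion, else go back to 2
            break
    return theta

# Example: ML estimate of the mean of a normal sample with known unit variance.
x = np.random.default_rng(5).normal(2.0, 1.0, size=100)
ll = lambda th: -0.5 * np.sum((x - th[0]) ** 2)
gr = lambda th: np.array([np.sum(x - th[0])])
he = lambda th: np.array([[-float(len(x))]])
print(newton_maximise(ll, gr, he, [0.0]))           # close to the sample mean
```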


  • Important classes of maximisation techniques.

    Gradient methods. These base the direction of movement on the first derivatives of the LF with respect to the parameters. Often the step length is also determined by (an approximation to) the second derivatives, so that each iteration takes the form $\theta_{i+1} = \theta_i - \lambda_i H_i^{-1} g_i$, where $g_i$ is the gradient vector and $H_i$ the (approximate) Hessian. These methods include Newton, quasi-Newton, scoring, steepest descent, Davidon-Fletcher-Powell, BHHH, etc.

  • Derivative-free techniques. These do not use derivatives, so they are less efficient but more robust to extreme non-linearities, e.g. Powell's method or the non-linear simplex.

    All of these techniques can be sensitive to starting values and `tuning' parameters.

  • Some special LFs

    Qualitative response models. These are cases where we have only partial information, in one form or another (e.g. the insects and poison experiment).

    We assume an underlying continuous model, $y_i^* = X_i\beta + u_i$, but we only observe certain limited information, e.g. a binary indicator related to y: $z_i = 1$ if $y_i^* > 0$ and $z_i = 0$ otherwise.

  • Then we can group the data into the two outcomes and form a likelihood function of the following form: $L = \prod_{z_i=1} F(X_i\beta)\,\prod_{z_i=0}\big[1 - F(X_i\beta)\big]$, where F is a particular distribution function, e.g. the standard normal cumulative distribution (the probit model) or perhaps the logistic function (the logit model).
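
    A short sketch fitting both a probit and a logit by maximising this likelihood numerically (the simulated data, the clipping of probabilities, and the optimiser defaults are illustrative choices):

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(6)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
z = (X @ np.array([0.2, 1.0]) + rng.normal(size=n) > 0).astype(float)   # observed 0/1 outcomes

def neg_loglik(beta, F):
    """-log L = -[ sum_{z=1} log F(Xb) + sum_{z=0} log(1 - F(Xb)) ]."""
    p = np.clip(F(X @ beta), 1e-10, 1 - 1e-10)    # guard against log(0)
    return -np.sum(z * np.log(p) + (1 - z) * np.log(1 - p))

probit = optimize.minimize(neg_loglik, x0=np.zeros(2), args=(stats.norm.cdf,))
logit = optimize.minimize(neg_loglik, x0=np.zeros(2), args=(stats.logistic.cdf,))
print(probit.x, logit.x)     # logit coefficients are roughly 1.6 to 1.8 times the probit ones
```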

  • ARCH and GARCH

    These are an important class of models which have time-varying variances.

    Suppose $y_t = x_t'\beta + u_t$ with $u_t \sim N(0, h_t)$ and $h_t = \alpha_0 + \sum_{i=1}^{p} \alpha_i u_{t-i}^2$ (an ARCH(p) process). Then the log-likelihood function for this model is $\log L = -\tfrac{1}{2}\sum_t \big[\log(2\pi) + \log h_t + u_t^2/h_t\big]$, which is a specialisation of the general normal LF with a time-varying variance.
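
    A sketch of the ARCH(1) special case of this likelihood (initialising h_1 at the sample variance and using Nelder-Mead are pragmatic choices, not part of the lecture):

```python
import numpy as np
from scipy import optimize

def arch1_negloglik(params, u):
    """-log L for ARCH(1) errors: h_t = a0 + a1*u_{t-1}^2, u_t ~ N(0, h_t)."""
    a0, a1 = params
    if a0 <= 0 or a1 < 0:
        return np.inf                             # keep the variance positive
    h = a0 + a1 * np.concatenate([[np.var(u)], u[:-1] ** 2])   # h_1 set to the sample variance
    return 0.5 * np.sum(np.log(2 * np.pi) + np.log(h) + u**2 / h)

# Illustrative use on simulated ARCH(1) errors.
rng = np.random.default_rng(7)
u = np.zeros(1000)
for t in range(1, 1000):
    u[t] = rng.normal(0, np.sqrt(0.2 + 0.5 * u[t - 1] ** 2))

res = optimize.minimize(arch1_negloglik, x0=[0.1, 0.1], args=(u,), method="Nelder-Mead")
print(res.x)                                      # close to (0.2, 0.5)
```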

  • An alternative approach

    Method of moments

    A widely used technique in estimation is the Generalised Method of Moments (GMM). This is an extension of the standard method of moments.

    The idea here is that if we have random drawings from an unknown probability distribution, then the sample statistics we calculate will converge in probability to some constant. This constant will be a function of the unknown parameters of the distribution. If we want to estimate k of these parameters, we compute k statistics (or moments) whose probability limits are known functions of the parameters.

  • These k sample moments are set equal to the corresponding functions of the parameters, and the resulting equations are inverted (solved) for the parameters.

  • A simple example: suppose the first moment (the mean) of the distribution is a known function of the single parameter $\theta$, $E(X) = \mu(\theta)$. The observed moment from a sample of n observations is $\bar{X} = \tfrac{1}{n}\sum_i X_i$. Hence $\operatorname{plim}\bar{X} = \mu(\theta)$, and the method of moments estimate is $\hat\theta = \mu^{-1}(\bar{X})$.
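
    A tiny sketch of the same idea, assuming an exponential distribution for illustration (its first moment is E(X) = 1/lambda, so the inverted moment condition is lambda = 1/Xbar):

```python
import numpy as np

rng = np.random.default_rng(8)
x = rng.exponential(scale=1 / 2.5, size=1000)     # assumed example: exponential with lambda = 2.5

sample_mean = x.mean()          # the observed first moment
lam_hat = 1 / sample_mean       # invert E(X) = 1/lambda  =>  lambda = 1/E(X)
print(lam_hat)                   # close to 2.5
```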

  • Method of Moments Estimation (MM)

    This is a direct extension of the method of moments into a much more useful setting.

    The idea here is that we have a model which implies certain things about the distribution or covariances of the variables and the errors. So we know what some moments of the distribution should be. We then invert the model to give us estimates of the unknown parameters of the model which match the theoretical moments for a given sample.

    So suppose we have a model $y_t = f(X_t, \theta)$, where $\theta$ contains k parameters, and we have k conditions (or moments) which should be met by the model, $E[g(y, X, \theta)] = 0$.

  • Then we approximate E(g) with its sample analogue, $\bar{g}(\theta) = \tfrac{1}{T}\sum_t g(y_t, X_t, \theta)$, and invert (solve) $\bar{g}(\hat\theta) = 0$ for $\hat\theta$.

  • Examples

    OLS: In OLS estimation we make the assumption that the regressors (Xs) are orthogonal to the errors. Thus $E[x_i(y - X\beta)] = 0$ for each regressor $x_i$. The sample analogue for each $x_i$ is $\tfrac{1}{T}\sum_t x_{it}(y_t - x_t'\beta) = 0$, and so the method of moments estimator in this case is the value of $\beta$ which simultaneously solves these k equations. This will be identical to the OLS estimate.
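
    A small sketch confirming that solving the sample moment conditions reproduces OLS (the simulated data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(9)
T = 100
X = np.column_stack([np.ones(T), rng.normal(size=(T, 2))])
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=T)      # illustrative data

# Sample moment conditions: (1/T) * X'(y - X beta) = 0, one equation per regressor.
# Solving them directly gives beta = (X'X)^{-1} X'y, the OLS estimator.
beta_mm = np.linalg.solve(X.T @ X, X.T @ y)
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(beta_mm, beta_ols))                         # True
```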

  • Maximum likelihood as an MM estimator

    In maximum likelihood we have a general likelihood function, $\log L(\theta) = \sum_t \log P(y_t \mid \theta)$, and this will be maximised when the following k first-order conditions are met: $E\big[\partial \log P(y_t \mid \theta)/\partial\theta\big] = 0$. This gives rise to the following k sample conditions: $\tfrac{1}{T}\sum_t \partial \log P(y_t \mid \theta)/\partial\theta = 0$. Simultaneously solving these equations for $\theta$ gives the MM equivalent of maximum likelihood.

  • Generalised Method of Moments (GMM)

    In the previous cases there are as many moment conditions as unknown parameters, so the parameters are uniquely and exactly determined. If there were fewer moment conditions we would not be able to solve them for a unique set of parameters (the model would be under-identified). If there are more moment conditions than parameters then all the conditions cannot be met exactly at the same time; the model is over-identified and we have GMM estimation.

    Basically, if we cannot satisfy all the conditions at the same time, we have to trade them off against each other, making them all as close to zero as possible simultaneously. We therefore need a criterion function to minimise.

  • Suppose we have k parameters but L moment conditions, with L > k. Then we need to make all L moments as small as possible simultaneously. One way is a weighted least squares criterion, $\min_\theta\; \bar{g}(\theta)'\,A^{-1}\,\bar{g}(\theta)$; that is, a weighted squared sum of the moments.

    This gives a consistent estimator for any positive definite matrix A (not a function of $\theta$).
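
    A sketch of an over-identified GMM problem (two parameters, three assumed moment conditions), using the identity matrix as the weight for simplicity:

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(10)
x = rng.normal(1.0, 2.0, size=1000)                 # illustrative data

def gbar(theta):
    """Sample averages of L = 3 moment conditions for k = 2 parameters (mu, sigma^2)."""
    mu, s2 = theta
    g1 = x - mu                                     # E(x - mu) = 0
    g2 = (x - mu) ** 2 - s2                         # E[(x - mu)^2 - sigma^2] = 0
    g3 = (x - mu) ** 3                              # E[(x - mu)^3] = 0 under symmetry
    return np.array([g1.mean(), g2.mean(), g3.mean()])

def criterion(theta, W):
    g = gbar(theta)
    return g @ W @ g                                # weighted squared sum of the moments

W = np.eye(3)    # weight (the inverse of A in the text); the identity keeps the sketch simple
res = optimize.minimize(criterion, x0=[0.0, 1.0], args=(W,), method="Nelder-Mead")
print(res.x)                                        # close to the sample mean and variance
```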

  • The optimal A

    If any positive definite weighting matrix gives a consistent estimator, the resulting estimators clearly cannot all be equally efficient, so what is the optimal choice of A?

    Hansen (1982) established the basic properties of the optimal A and how to construct the covariance of the parameter estimates.

    The optimal A is simply the covariance matrix of the moment conditions, so that the criterion weights the moments by the inverse of their covariance matrix (just as GLS weights by the inverse of the error covariance matrix).

    Thus the estimator solves $\min_\theta\; \bar{g}(\theta)'\,\hat{A}^{-1}\,\bar{g}(\theta)$, where $\hat{A}$ is an estimate of the covariance matrix of the moment conditions.

  • The parameter estimates which solve this criterion function are consistent and asymptotically normal, $\hat\theta \overset{a}{\sim} N(\theta, V)$ with $V = \tfrac{1}{T}(G'A^{-1}G)^{-1}$, where G is the matrix of derivatives of the moments with respect to the parameters and A is the covariance matrix of the moments evaluated at the true parameter values.

  • Conclusion: Both ML and GMM are very flexible estimation strategies.

    They are equivalent ways of approaching the same problem in many instances.