Mean Squared Error and Maximum Likelihood


Page 1: Mean Squared Error and Maximum Likelihood

Mean Squared Error and Maximum Likelihood

Lecture XVIII

Page 2: Mean Squared Error and Maximum Likelihood

Mean Squared Error

• As stated in our discussion on closeness, one potential measure for the goodness of an estimator is

$$E\left[\left(\hat{\theta} - \theta\right)^2\right]$$

Page 3: Mean Squared Error and Maximum Likelihood

• In the preceding example, the mean squared error of the estimate can be written as

$$\mathrm{MSE} = E\left[\left(T - \theta\right)^2\right]$$

where $\theta$ is the true parameter value between zero and one.

Page 4: Mean Squared Error and Maximum Likelihood

• This expected value is conditioned on the probability of T at each value of $\theta$. For the two independent Bernoulli($\theta$) draws $X_1$ and $X_2$ underlying $T = (X_1 + X_2)/2$, the joint probability is

$$P\left(X_1, X_2\right) = P\left(X_1\right) P\left(X_2\right) = \theta^{X_1 + X_2}\left(1 - \theta\right)^{2 - X_1 - X_2}$$

so that

$$\mathrm{MSE}(T) = (0 - \theta)^2 P(0,0) + (0.5 - \theta)^2\left[P(1,0) + P(0,1)\right] + (1 - \theta)^2 P(1,1)$$

Page 5: Mean Squared Error and Maximum Likelihood

• Similarly, for $S = X_1$ and the constant estimator $W = 0.5$:

$$\mathrm{MSE}(S) = (0 - \theta)^2 P(0) + (1 - \theta)^2 P(1)$$

$$\mathrm{MSE}(W) = (0.5 - \theta)^2$$

Page 6: Mean Squared Error and Maximum Likelihood

MSEs of Each Estimator

[Figure: the MSE of each estimator (T, S, and W) plotted against $\theta$ over $[0, 1]$; the vertical axis runs from 0 to 0.25.]
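The curves can be reproduced numerically. The Python sketch below is an editorial illustration, not part of the original slides; it uses the closed-form MSEs implied by the Bernoulli probabilities above, $\mathrm{MSE}(T) = \theta(1-\theta)/2$, $\mathrm{MSE}(S) = \theta(1-\theta)$, and $\mathrm{MSE}(W) = (0.5-\theta)^2$.

```python
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0.0, 1.0, 201)

# T = (X1 + X2)/2 is unbiased, so its MSE is its variance: theta*(1-theta)/2
mse_T = theta * (1 - theta) / 2
# S = X1 is unbiased with MSE theta*(1-theta)
mse_S = theta * (1 - theta)
# W = 0.5 is constant, so its MSE is pure squared bias: (0.5 - theta)^2
mse_W = (0.5 - theta) ** 2

plt.plot(theta, mse_T, label="T = (X1 + X2)/2")
plt.plot(theta, mse_S, label="S = X1")
plt.plot(theta, mse_W, label="W = 0.5")
plt.xlabel("theta")
plt.ylabel("MSE")
plt.legend()
plt.show()
```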

Page 7: Mean Squared Error and Maximum Likelihood

• Definition 7.2.1. Let X and Y be two estimators of $\theta$. We say that X is better (or more efficient) than Y if $E(X - \theta)^2 \leq E(Y - \theta)^2$ for all $\theta$ in $\Theta$ and strictly less than for at least one $\theta$ in $\Theta$.

Page 8: Mean Squared Error and Maximum Likelihood

• When an estimator is dominated by another estimator, the dominated estimator is inadmissible. For example, in the preceding comparison S is inadmissible because $\mathrm{MSE}(T) \leq \mathrm{MSE}(S)$ for all $\theta$, with strict inequality on the interior of the parameter space.

• Definition 7.2.2. Let $\hat{\theta}$ be an estimator of $\theta$. We say that $\hat{\theta}$ is inadmissible if there is another estimator that is better in the sense that it produces a lower mean squared error of the estimate. An estimator that is not inadmissible is admissible.

Page 9: Mean Squared Error and Maximum Likelihood

Strategies for Choosing an Estimator:

• Subjective strategy: This strategy considers the likely outcome of $\theta$ and selects the estimator that is best in that likely neighborhood.

• Minimax strategy: According to the minimax strategy, we choose the estimator whose largest possible mean squared error is the smallest:

Page 10: Mean Squared Error and Maximum Likelihood

• Definition 7.2.3: Let $\hat{\theta}$ be an estimator of $\theta$. It is a minimax estimator if, for any other estimator $\tilde{\theta}$ of $\theta$, we have:

$$\max_{\theta} E\left[\left(\hat{\theta} - \theta\right)^2\right] \leq \max_{\theta} E\left[\left(\tilde{\theta} - \theta\right)^2\right]$$
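As an illustrative sketch (not from the slides), the worst-case MSE of each of the three estimators above can be computed numerically, confirming that T is the minimax choice among them:

```python
import numpy as np

theta = np.linspace(0.0, 1.0, 2001)

# Worst-case (maximum over theta) MSE for each estimator.
worst = {
    "T": np.max(theta * (1 - theta) / 2),  # 0.125, at theta = 0.5
    "S": np.max(theta * (1 - theta)),      # 0.25,  at theta = 0.5
    "W": np.max((0.5 - theta) ** 2),       # 0.25,  at theta = 0 or 1
}
minimax = min(worst, key=worst.get)
print(worst, "-> minimax estimator:", minimax)  # T
```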

Page 11: Mean Squared Error and Maximum Likelihood

Best Linear Unbiased Estimator:

• Definition 7.2.4: $\hat{\theta}$ is said to be an unbiased estimator of $\theta$ if

$$E\left(\hat{\theta}\right) = \theta$$

for all $\theta$ in $\Theta$. We call

$$E\left(\hat{\theta}\right) - \theta$$

the bias.

Page 12: Mean Squared Error and Maximum Likelihood

• In our previous discussion, T and S are unbiased estimators, while W is biased.

• Theorem 7.2.10: The mean squared error is the sum of the variance and the bias squared. That is, for any estimator $\hat{\theta}$ of $\theta$,

$$E\left[\left(\hat{\theta} - \theta\right)^2\right] = V\left(\hat{\theta}\right) + \left[E\left(\hat{\theta}\right) - \theta\right]^2$$
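The proof, omitted on the slide, is one line: add and subtract $E(\hat{\theta})$ inside the square,

$$E\left[\left(\hat{\theta} - \theta\right)^2\right] = E\left[\left(\hat{\theta} - E(\hat{\theta})\right) + \left(E(\hat{\theta}) - \theta\right)\right]^2 = V\left(\hat{\theta}\right) + \left[E(\hat{\theta}) - \theta\right]^2$$

where the cross term vanishes because $E\left[\hat{\theta} - E(\hat{\theta})\right] = 0$.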

Page 13: Mean Squared Error and Maximum Likelihood

• Theorem 7.2.11: Let $\{X_i\}$, $i = 1, 2, \ldots, n$, be independent and have a common mean $\mu$ and variance $\sigma^2$. Consider the class of linear estimators of $\mu$ which can be written in the form

$$\sum_{i=1}^{n} a_i X_i$$

and impose the unbiasedness condition

$$E\left(\sum_{i=1}^{n} a_i X_i\right) = \mu$$

Page 14: Mean Squared Error and Maximum Likelihood

Then

$$V\left(\bar{X}\right) \leq V\left(\sum_{i=1}^{n} a_i X_i\right)$$

for all $a_i$ satisfying the unbiasedness condition. Further, this condition holds with equality only for $a_i = 1/n$.

Page 15: Mean Squared Error and Maximum Likelihood

• To prove these points, note that the $a_i$ must sum to one for unbiasedness:

$$E\left(\sum_{i=1}^{n} a_i X_i\right) = \sum_{i=1}^{n} a_i E\left(X_i\right) = \mu \sum_{i=1}^{n} a_i = \mu \;\Rightarrow\; \sum_{i=1}^{n} a_i = 1$$

• The final condition can be demonstrated through the identity

$$\sum_{i=1}^{n} a_i^2 = \sum_{i=1}^{n}\left(a_i - \frac{1}{n}\right)^2 + \frac{2}{n}\sum_{i=1}^{n} a_i - \frac{1}{n}$$

Page 16: Mean Squared Error and Maximum Likelihood

With $\sum_{i=1}^{n} a_i = 1$ imposed, the identity reduces to

$$\sum_{i=1}^{n} a_i^2 = \sum_{i=1}^{n}\left(a_i - \frac{1}{n}\right)^2 + \frac{1}{n} \geq \frac{1}{n}$$

so that $V\left(\sum_{i=1}^{n} a_i X_i\right) = \sigma^2 \sum_{i=1}^{n} a_i^2$ is minimized only when $a_i = 1/n$.
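A quick Monte Carlo check of the theorem (an editorial illustration, using the hypothetical values n = 5 and σ² = 1):

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma2 = 5, 1.0

# For a linear unbiased estimator sum(a_i * X_i), the variance is
# sigma^2 * sum(a_i^2); Theorem 7.2.11 says a_i = 1/n minimizes it.
best = sigma2 / n
for _ in range(10_000):
    a = rng.uniform(0.0, 1.0, size=n)
    a /= a.sum()  # impose the unbiasedness condition: sum(a_i) = 1
    assert sigma2 * (a ** 2).sum() >= best - 1e-12
print("no random unbiased weighting beat a_i = 1/n")
```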

Page 17: Mean Squared Error and Maximum Likelihood

• Theorem 7.2.12: Consider the problem of minimizing

$$\sum_{i=1}^{n} a_i^2$$

with respect to $\{a_i\}$ subject to the condition

$$\sum_{i=1}^{n} a_i b_i = 1$$

Page 18: Mean Squared Error and Maximum Likelihood

The solution to this problem is given by

$$a_i = \frac{b_i}{\sum_{j=1}^{n} b_j^2}$$
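The slide states the solution without derivation; a standard Lagrangian argument (supplied here for completeness) yields it:

$$\mathcal{L} = \sum_{i=1}^{n} a_i^2 - \lambda\left(\sum_{i=1}^{n} a_i b_i - 1\right) \;\Rightarrow\; \frac{\partial \mathcal{L}}{\partial a_i} = 2a_i - \lambda b_i = 0 \;\Rightarrow\; a_i = \frac{\lambda}{2} b_i$$

Substituting into the constraint gives $\frac{\lambda}{2}\sum_{j=1}^{n} b_j^2 = 1$, and therefore $a_i = b_i \big/ \sum_{j=1}^{n} b_j^2$.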

Page 19: Mean Squared Error and Maximum Likelihood

Asymptotic Properties

• Definition 7.2.5. We say that $\hat{\theta}$ is a consistent estimator of $\theta$ if

$$\underset{n \to \infty}{\mathrm{plim}}\, \hat{\theta} = \theta$$
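As an illustrative sketch (not from the slides, using a hypothetical normal population with true mean 7.5), the consistency of the sample mean can be seen by letting n grow:

```python
import numpy as np

rng = np.random.default_rng(42)
mu = 7.5  # hypothetical true mean, chosen for illustration

for n in (10, 100, 10_000, 1_000_000):
    x = rng.normal(loc=mu, scale=1.0, size=n)
    print(n, x.mean())  # the sample mean converges in probability to mu
```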

Page 20: Mean Squared Error and Maximum Likelihood

Maximum Likelihood

• The basic concept behind maximum likelihood estimation is to choose the set of parameters that maximizes the likelihood of drawing a particular sample.
– Let the sample be X = {5, 6, 7, 8, 9, 10}. The probability of each of these points, based on the unknown mean $\mu$, can be written as

Page 21: Mean Squared Error and Maximum Likelihood

$$f\left(5 \mid \mu\right) = \frac{1}{\sqrt{2\pi}} \exp\left[-\frac{(5-\mu)^2}{2}\right]$$

$$f\left(6 \mid \mu\right) = \frac{1}{\sqrt{2\pi}} \exp\left[-\frac{(6-\mu)^2}{2}\right]$$

$$\vdots$$

$$f\left(10 \mid \mu\right) = \frac{1}{\sqrt{2\pi}} \exp\left[-\frac{(10-\mu)^2}{2}\right]$$

Page 22: Mean Squared Error and Maximum Likelihood

• Assuming that the observations in the sample are independent, so that the joint distribution function can be written as the product of the marginal distribution functions, the probability of drawing the entire sample for a given mean can then be written as:

$$L\left(X \mid \mu\right) = \frac{1}{(2\pi)^{6/2}} \exp\left[-\frac{(5-\mu)^2}{2} - \frac{(6-\mu)^2}{2} - \cdots - \frac{(10-\mu)^2}{2}\right]$$

Page 23: Mean Squared Error and Maximum Likelihood

• The value of $\mu$ that maximizes the likelihood function of the sample can then be defined by

$$\max_{\mu} L\left(X \mid \mu\right)$$

Under the current scenario, however, we find it easier to maximize the natural logarithm of the likelihood function:

Page 24: Mean Squared Error and Maximum Likelihood

$$\max_{\mu} \ln L\left(X \mid \mu\right) = K - \frac{(5-\mu)^2}{2} - \frac{(6-\mu)^2}{2} - \cdots - \frac{(10-\mu)^2}{2}$$

where $K$ collects the constants. The first-order condition is

$$(5-\mu) + (6-\mu) + \cdots + (10-\mu) = 0$$

which yields

$$\hat{\mu}_{MLE} = \frac{5+6+7+8+9+10}{6} = 7.5$$
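A short numerical check (an editorial illustration; the function name negloglik is ours) confirms that maximizing the log-likelihood recovers the sample mean:

```python
import numpy as np
from scipy.optimize import minimize_scalar

x = np.array([5, 6, 7, 8, 9, 10])

# Negative log-likelihood for N(mu, 1), dropping the constant K above.
def negloglik(mu):
    return 0.5 * np.sum((x - mu) ** 2)

res = minimize_scalar(negloglik, bounds=(0, 20), method="bounded")
print(res.x, x.mean())  # both approximately 7.5
```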