Lecture-3: Simple Regression Model Properties-II



In Today's Class

Recap

Simple regression model estimation

Gauss-Markov Theorem

Hand calculation of regression estimates

So the OLS estimated slope is

$$\hat{\beta}_1 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n}(x_i - \bar{x})^2},$$

provided that $\sum_{i=1}^{n}(x_i - \bar{x})^2 > 0$.


Summary of OLS slope estimate

The slope estimate is the sample covariance between x and y divided by the sample variance of x

If x and y are positively correlated, the slope will be positive

If x and y are negatively correlated, the slope will be negative

We only need x to vary in our sample (otherwise the slope's denominator is zero); a quick hand calculation is sketched below
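For concreteness, here is a minimal sketch (in Python, with made-up data, not from the lecture) of the hand calculation: the slope is the sample covariance of x and y divided by the sample variance of x, and the intercept follows from the sample means.

```python
# Minimal sketch of the OLS "hand" calculation (hypothetical data).
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

x_bar, y_bar = x.mean(), y.mean()
beta1_hat = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
beta0_hat = y_bar - beta1_hat * x_bar   # the OLS line passes through (x_bar, y_bar)

print(beta1_hat, beta0_hat)
```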


More OLS

Intuitively, OLS is fitting a line through the sample points such that the sum of squared residuals is as small as possible, hence the term least squares

The residual, û, is an estimate of the error term, u, and is the difference between the fitted line (sample regression function) and the sample point

Simple Regression Function (SRF)

[Figure: scatter of sample points $(x_1, y_1), \dots, (x_4, y_4)$ with the fitted line $\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x$; the vertical distances $\hat{u}_1, \dots, \hat{u}_4$ between the points and the line are the residuals.]

PRF and SRF Relationship

Observe the relationship between the error term, $u$, and the residuals, $\hat{u}$


Algebraic Properties of OLS

The sum of the OLS residuals is zero

Thus, the sample average of the OLS residuals is zero as well

The sample covariance between the regressors and the OLS residuals is zero

The OLS regression line always goes through the point of sample means, $(\bar{x}, \bar{y})$

Algebraic Properties (precise)

$$\sum_{i=1}^{n} \hat{u}_i = 0, \quad \text{and thus} \quad \frac{1}{n}\sum_{i=1}^{n} \hat{u}_i = 0$$

$$\sum_{i=1}^{n} x_i \hat{u}_i = 0$$

$$\bar{y} = \hat{\beta}_0 + \hat{\beta}_1 \bar{x}$$
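As a quick numerical check (a minimal sketch with made-up numbers, not from the lecture), these three properties can be verified on any sample:

```python
# Numerical check of the three algebraic properties of OLS (hypothetical data).
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
u_hat = y - (b0 + b1 * x)                        # OLS residuals

print(np.isclose(u_hat.sum(), 0.0))              # residuals sum to zero
print(np.isclose(np.sum(x * u_hat), 0.0))        # zero covariance with the regressor
print(np.isclose(b0 + b1 * x.mean(), y.mean()))  # line passes through (x_bar, y_bar)
```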

Algebraic Properties (2)

We can think of each observation as being made up of an explained part and an unexplained part: $y_i = \hat{y}_i + \hat{u}_i$. We then define the following:

$\sum_i (y_i - \bar{y})^2$ is the total sum of squares (SST)

$\sum_i (\hat{y}_i - \bar{y})^2$ is the explained sum of squares (SSE)

$\sum_i \hat{u}_i^2$ is the residual sum of squares (SSR)

Algebraic Properties (3)

Then SST = SSE + SSR

Proof that SST = SSE + SSR

$$\sum_i (y_i - \bar{y})^2 = \sum_i \left[(y_i - \hat{y}_i) + (\hat{y}_i - \bar{y})\right]^2 = \sum_i \left[\hat{u}_i + (\hat{y}_i - \bar{y})\right]^2$$

$$= \sum_i \hat{u}_i^2 + 2\sum_i \hat{u}_i(\hat{y}_i - \bar{y}) + \sum_i (\hat{y}_i - \bar{y})^2 = \text{SSR} + 2\sum_i \hat{u}_i(\hat{y}_i - \bar{y}) + \text{SSE}$$

and we know that $\sum_i \hat{u}_i(\hat{y}_i - \bar{y}) = 0$


Goodness-of-Fit (1)

How do we think about how well our sample regression line fits our sample data?

We can compute the fraction of the total sum of squares (SST) that is explained by the model; call this the R-squared of the regression:

$$R^2 = \text{SSE}/\text{SST} = 1 - \text{SSR}/\text{SST} = \frac{\sum_i (\hat{y}_i - \bar{y})^2}{\sum_i (y_i - \bar{y})^2}$$
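A short sketch (same made-up data as above) computing R-squared both ways and confirming they agree:

```python
# R-squared computed as SSE/SST and as 1 - SSR/SST (hypothetical data).
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
y_hat = b0 + b1 * x

sst = np.sum((y - y.mean()) ** 2)        # total sum of squares
sse = np.sum((y_hat - y.mean()) ** 2)    # explained sum of squares
ssr = np.sum((y - y_hat) ** 2)           # residual sum of squares

print(sse / sst, 1 - ssr / sst)          # the two expressions agree
```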


Unbiasedness of OLS

Assume the population model is linear in parameters as $y = \beta_0 + \beta_1 x + u$

Assume we can use a random sample of size n, $\{(x_i, y_i): i = 1, 2, \dots, n\}$, from the population model. Thus we can write the sample model $y_i = \beta_0 + \beta_1 x_i + u_i$

Assume E(u|x) = 0 and thus E(ui|xi) = 0

Assume there is variation in the xi

Unbiasedness of OLS (2)

In order to think about unbiasedness, we need to rewrite our estimator in terms of the population parameter

Start with a simple rewrite of the formula as

$$\hat{\beta}_1 = \frac{\sum_i (x_i - \bar{x})\, y_i}{s_x^2}, \quad \text{where} \quad s_x^2 = \sum_i (x_i - \bar{x})^2$$

Unbiasedness of OLS (cont)

Substituting $y_i = \beta_0 + \beta_1 x_i + u_i$ into the numerator gives

$$\sum_i (x_i - \bar{x})\, y_i = \sum_i (x_i - \bar{x})(\beta_0 + \beta_1 x_i + u_i)$$

$$= \beta_0 \sum_i (x_i - \bar{x}) + \beta_1 \sum_i (x_i - \bar{x})\, x_i + \sum_i (x_i - \bar{x})\, u_i$$

Unbiasedness of OLS (cont)

$$\sum_i (x_i - \bar{x}) = 0, \qquad \sum_i (x_i - \bar{x})\, x_i = \sum_i (x_i - \bar{x})^2 = s_x^2$$

so the numerator can be rewritten as $\beta_1 s_x^2 + \sum_i (x_i - \bar{x})\, u_i$, and thus

$$\hat{\beta}_1 = \beta_1 + \frac{\sum_i (x_i - \bar{x})\, u_i}{s_x^2}$$

Unbiasedness of OLS (cont)

Let $d_i = x_i - \bar{x}$, so that $\hat{\beta}_1 = \beta_1 + \left(1/s_x^2\right)\sum_i d_i u_i$. Then

$$E(\hat{\beta}_1) = \beta_1 + \left(1/s_x^2\right)\sum_i d_i\, E(u_i) = \beta_1$$


Unbiasedness Summary

The OLS estimates of $\beta_1$ and $\beta_0$ are unbiased

Proof of unbiasedness depends on our 4 assumptions – if any assumption fails, then OLS is not necessarily unbiased

Remember unbiasedness is a description of the estimator – in a given sample we may be “near” or “far” from the true parameter
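As an illustration, here is a minimal Monte Carlo sketch (the true values beta0 = 1 and beta1 = 2 and all distributions are assumptions, not from the lecture): across many random samples, the average of the slope estimates should land close to the true slope.

```python
# Monte Carlo sketch of unbiasedness: mean of beta1_hat across samples ~ beta1.
import numpy as np

rng = np.random.default_rng(0)
beta0, beta1, n, reps = 1.0, 2.0, 50, 10_000  # assumed true values

estimates = np.empty(reps)
for r in range(reps):
    x = rng.uniform(0, 10, n)
    u = rng.normal(0, 1, n)          # E(u|x) = 0 holds by construction
    y = beta0 + beta1 * x + u
    estimates[r] = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)

print(estimates.mean())              # approximately 2.0
```

In any single sample the estimate may be "near" or "far" from 2; unbiasedness is only a statement about the average over repeated samples.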


Variance of the OLS Estimators

Now we know that the sampling distribution of our estimate is centered around the true parameter

Want to think about how spread out this distribution is

Much easier to think about this variance under an additional assumption, so

Assume Var(u|x) = σ² (Homoskedasticity)


Variance of OLS (cont)

Var(u|x) = E(u²|x) − [E(u|x)]²

E(u|x) = 0, so σ² = E(u²|x) = E(u²) = Var(u)

Thus σ² is also the unconditional variance, called the error variance

σ, the square root of the error variance, is called the standard deviation of the error

Can say: E(y|x) = β₀ + β₁x and Var(y|x) = σ²

Homoskedasticity

[Figure: graphical illustration of homoskedasticity]

The variability of the unobserved influences does not depend on the value of the explanatory variable

Heteroskedasticity

[Figure: an example of heteroskedasticity: wage and education]

The variance of the unobserved determinants of wages increases with the level of education

Variance of OLS (cont)

$$\text{Var}(\hat{\beta}_1) = \text{Var}\!\left(\beta_1 + \frac{1}{s_x^2}\sum_i d_i u_i\right) = \left(\frac{1}{s_x^2}\right)^2 \text{Var}\!\left(\sum_i d_i u_i\right) = \left(\frac{1}{s_x^2}\right)^2 \sum_i d_i^2\, \text{Var}(u_i)$$

$$= \left(\frac{1}{s_x^2}\right)^2 \sum_i d_i^2\, \sigma^2 = \sigma^2 \left(\frac{1}{s_x^2}\right)^2 \sum_i d_i^2 = \sigma^2 \left(\frac{1}{s_x^2}\right)^2 s_x^2 = \frac{\sigma^2}{s_x^2}$$


Variance of OLS Summary

The larger the error variance, σ², the larger the variance of the slope estimate

The larger the variability in the xi, the smaller the variance of the slope estimate

As a result, a larger sample size should decrease the variance of the slope estimate

A problem is that the error variance is unknown
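A companion sketch to the unbiasedness simulation above (same caveats; σ = 1.5 is an assumption) holds x fixed across replications and compares the empirical variance of the slope estimates with the formula σ²/s²ₓ:

```python
# Sketch: with x held fixed, Var(beta1_hat) should match sigma^2 / SST_x.
import numpy as np

rng = np.random.default_rng(1)
beta0, beta1, sigma, n, reps = 1.0, 2.0, 1.5, 50, 20_000  # assumed values

x = rng.uniform(0, 10, n)                 # drawn once, then held fixed
sst_x = np.sum((x - x.mean()) ** 2)

estimates = np.empty(reps)
for r in range(reps):
    y = beta0 + beta1 * x + rng.normal(0, sigma, n)
    estimates[r] = np.sum((x - x.mean()) * (y - y.mean())) / sst_x

print(estimates.var(), sigma ** 2 / sst_x)   # the two should be close
```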


Estimating the Error Variance

We don't know what the error variance, σ², is, because we don't observe the errors, uᵢ

What we observe are the residuals, ûi

We can use the residuals to form an estimate of the error variance


Error Variance Estimate (cont)

2/ˆ

2

is ofestimator unbiasedan Then,

ˆˆ

ˆˆ

ˆˆˆ

22

2

1100

1010

10

nSSRun

u

xux

xyu

i

i

iii

iii

s

s

Why n − 2? Because the residuals must satisfy the two restrictions $\sum_i \hat{u}_i = 0$ and $\sum_i x_i \hat{u}_i = 0$, only n − 2 of them are free; dividing by n − 2 rather than n makes the estimator unbiased.


Error Variance Estimate (cont)

$\hat{\sigma} = \sqrt{\hat{\sigma}^2}$ is the standard error of the regression

Recall that $\text{sd}(\hat{\beta}_1) = \sigma / s_x$, where $s_x = \left(\sum_i (x_i - \bar{x})^2\right)^{1/2}$

If we substitute $\hat{\sigma}$ for $\sigma$, then we have the standard error of $\hat{\beta}_1$:

$$\text{se}(\hat{\beta}_1) = \hat{\sigma} \Big/ \left(\sum_i (x_i - \bar{x})^2\right)^{1/2}$$
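A minimal sketch (made-up data again) of the error-variance estimate and the resulting standard errors; the formula for se(β̂₀) matches the summary table at the end of the lecture:

```python
# Error variance estimate, standard error of the regression, and coefficient
# standard errors (hypothetical data).
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
n = len(x)

sst_x = np.sum((x - x.mean()) ** 2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / sst_x
b0 = y.mean() - b1 * x.mean()
u_hat = y - (b0 + b1 * x)

sigma2_hat = np.sum(u_hat ** 2) / (n - 2)   # SSR / (n - 2)
ser = np.sqrt(sigma2_hat)                   # standard error of the regression
se_b1 = ser / np.sqrt(sst_x)                # se(beta1_hat)
se_b0 = np.sqrt(sigma2_hat * np.sum(x ** 2) / (n * sst_x))  # se(beta0_hat)

print(sigma2_hat, ser, se_b1, se_b0)
```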

Gauss-Markov Assumptions (1)

Standard assumptions for the linear regression model

Assumption SLR.1 (Linear in parameters)

Assumption SLR.2 (Random sampling)

In the population, the relationship between y and x is linear

The data is a random sample drawn from the population

Each data point therefore follows the population equation

Gauss-Markov Assumptions (2)

Assumptions for the linear regression model (cont.)

Assumption SLR.3 (Sample variation in explanatory variable)

Assumption SLR.4 (Zero conditional mean)

Assumption SLR.5 (Homoskedasticity)

The values of the explanatory variable are not all the same (otherwise it would be impossible to study how different values of the explanatory variable lead to different values of the dependent variable)

The value of the explanatory variable must contain no information about the mean of the unobserved factors

The value of the explanatory variable must contain no information about the variability of the unobserved factors

Simple Regression Estimation

Coefficient of x: $\hat{\beta}_1 = \sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y}) \Big/ \sum_{i=1}^{n} (x_i - \bar{x})^2$

Intercept coefficient: $\hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}$

Coefficient of determination: $r^2 = \sum_i (\hat{y}_i - \bar{y})^2 \Big/ \sum_i (y_i - \bar{y})^2$

Standard error of $\hat{\beta}_1$: $\text{se}(\hat{\beta}_1) = \hat{\sigma} \Big/ \left(\sum_i (x_i - \bar{x})^2\right)^{1/2}$, where $\hat{\sigma}^2 = \sum_i \hat{u}_i^2 / (n - 2)$

Standard error of $\hat{\beta}_0$: $\text{se}(\hat{\beta}_0) = \left(\hat{\sigma}^2 \sum_i x_i^2 \Big/ \left(n \sum_i (x_i - \bar{x})^2\right)\right)^{1/2}$

t-stat: coefficient / standard error
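To close, a sketch that assembles the whole table on one hypothetical data set (not a lecture exercise), ending with the t-statistics:

```python
# Full "hand calculation" of the simple regression table (hypothetical data).
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
y = np.array([5.1, 8.9, 13.2, 17.4, 20.8, 25.3])
n = len(x)

sst_x = np.sum((x - x.mean()) ** 2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / sst_x   # coefficient of x
b0 = y.mean() - b1 * x.mean()                          # intercept coefficient
u_hat = y - (b0 + b1 * x)

r2 = 1 - np.sum(u_hat ** 2) / np.sum((y - y.mean()) ** 2)  # coefficient of determination
sigma2 = np.sum(u_hat ** 2) / (n - 2)                      # error variance estimate
se_b1 = np.sqrt(sigma2 / sst_x)
se_b0 = np.sqrt(sigma2 * np.sum(x ** 2) / (n * sst_x))

print("b1:", b1, "t:", b1 / se_b1)   # t-stat = coefficient / standard error
print("b0:", b0, "t:", b0 / se_b0)
print("r2:", r2)
```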