Chapter 8: Linear regression


Page 1: Chapter 8:  Linear regression

CHAPTER 8: LINEAR REGRESSION

AP Statistics, Unit 2

Page 2: Chapter 8:  Linear regression

FAT VERSUS PROTEIN: AN EXAMPLE

The following is a scatterplot of total fat versus protein for 30 items on the Burger King menu:

Page 3: Chapter 8:  Linear regression

THE LINEAR MODEL

The correlation in this example is 0.83.

It says, “There seems to be a linear association between these two variables,” but it doesn’t tell what that association is.

We can say more about the linear relationship between two quantitative variables with a model.

A model simplifies reality to help us understand underlying patterns and relationships.

Page 4: Chapter 8:  Linear regression

THE LINEAR MODEL (CONT.)

The linear model is just an equation of a straight line through the data.

The points in the scatterplot don’t all line up, but a straight line can summarize the general pattern with only a couple of parameters.

The linear model can help us understand how the values are associated.

Page 5: Chapter 8:  Linear regression

RESIDUALS

The model won’t be perfect, regardless of the line we draw.

Some points will be above the line and some will be below.

The estimate made from a model is the predicted value (denoted ŷ, pronounced y-hat).

Page 6: Chapter 8:  Linear regression

RESIDUALS (CONT.)

The difference between the observed value and its associated predicted value on the line is called the residual.

To find the residuals, we always subtract the predicted value from the observed one:

residual = observed − predicted = y − ŷ

Page 7: Chapter 8:  Linear regression

RESIDUALS (CONT.)

A negative residual means the predicted value’s too big (an overestimate).

A positive residual means the predicted value’s too small (an underestimate).

In the figure, the estimated fat of the BK Broiler chicken sandwich is 36 g, while the true value of fat is 25 g, so the residual is –11 g of fat.

residual = observed − predicted = y − ŷ
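The arithmetic above can be sketched in a few lines of Python, using the BK Broiler values quoted on the slide:

```python
# Residual = observed - predicted, using the BK Broiler chicken
# sandwich values from the slide.
observed_fat = 25    # g, the true fat content
predicted_fat = 36   # g, the value the line estimates

residual = observed_fat - predicted_fat
print(residual)  # -11: negative, so the model overestimated the fat
```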

Page 8: Chapter 8:  Linear regression

“BEST FIT” MEANS LEAST SQUARES

Some residuals are positive, others are negative, and, on average, they cancel each other out.

So, we can’t assess how well the line fits by adding up all the residuals.

Similar to what we did with deviations, we square the residuals and add the squares. The smaller the sum, the better the fit.

The line of best fit is the line for which the sum of the squared residuals is smallest, the least squares line.
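As a sketch of this idea, the snippet below (with made-up data, not the BK menu) computes the least-squares slope and intercept from the closed-form formulas and checks that other candidate lines have a larger sum of squared residuals:

```python
# Least-squares sketch on hypothetical data: the fitted line minimizes
# the sum of squared residuals.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]
n = len(xs)

def sum_sq_resid(slope, intercept):
    """Sum of squared residuals for a candidate line y = intercept + slope*x."""
    return sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))

# Closed-form least-squares estimates
x_bar = sum(xs) / n
y_bar = sum(ys) / n
sxx = sum((x - x_bar) ** 2 for x in xs)
sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
b1 = sxy / sxx            # slope
b0 = y_bar - b1 * x_bar   # intercept

# Any other line we try does worse:
assert sum_sq_resid(b1, b0) <= sum_sq_resid(2.0, 0.5)
assert sum_sq_resid(b1, b0) <= sum_sq_resid(1.8, 0.0)
```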

Page 9: Chapter 8:  Linear regression

CORRELATION AND THE LINE

The figure shows the scatterplot of z-scores for fat and protein.

If a burger has average protein content, it should have about average fat content too.

Moving one standard deviation away from the mean in x moves us r standard deviations away from the mean in y.

Page 10: Chapter 8:  Linear regression

CORRELATION AND THE LINE (CONT.)

In z-scores, the regression line passes through the origin with slope r: the predicted z-score for y is ẑy = r · zx.

Page 11: Chapter 8:  Linear regression

HOW BIG CAN PREDICTED VALUES GET?

r cannot be bigger than 1 (in absolute value), so each predicted y tends to be closer to its mean (in standard deviations) than its corresponding x was.

This property of the linear model is called regression to the mean; the line is called the regression line.

Page 12: Chapter 8:  Linear regression

EXAMPLE

A scatterplot of house Price (in thousands of $) vs. house Size (in thousands of sq. ft.) for houses sold recently in Saratoga, NY shows a relationship that is straight, with only moderate scatter and no outliers. The correlation between house price and house size is 0.77.

1) You go to an open house and find that the house is 1 standard deviation above the mean in size. What would you guess about its price?

2) A friend tells you about a house whose size in sq. meters is 1.5 standard deviations above the mean. What would you guess about its size in sq. ft.?
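A quick sketch of question 1 in Python, using the r = 0.77 from the slide: a house 1 SD above the mean in size is predicted to be only r × 1 SD above the mean in price.

```python
# Regression-to-the-mean sketch: the predicted z-score for y is r times
# the z-score for x (r = 0.77 from the Saratoga example).
r = 0.77
z_size = 1.0           # house size, in standard deviations
z_price = r * z_size   # predicted price, in standard deviations
print(z_price)  # 0.77 SDs above the mean price
```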

Page 13: Chapter 8:  Linear regression

THE REGRESSION LINE IN REAL UNITS

Remember from Algebra that a straight line can be written as:

In Statistics we use a slightly different notation:

We write ŷ to emphasize that the points that satisfy this equation are just our predicted values, not the actual data values.

This model says that our predictions from our model follow a straight line.

If the model is a good one, the data values will scatter closely around it.

y = mx + b

ŷ = b0 + b1x

Page 14: Chapter 8:  Linear regression

THE REGRESSION LINE IN REAL UNITS (CONT.)

We write b1 and b0 for the slope and intercept of the line.

b1 is the slope, which tells us how rapidly ŷ changes with respect to x.

b0 is the y-intercept, which tells where the line crosses (intercepts) the y-axis.

Page 15: Chapter 8:  Linear regression

THE REGRESSION LINE IN REAL UNITS (CONT.)

In our model, we have a slope (b1). The slope is built from the correlation and the standard deviations:

Our slope is always in units of y per unit of x.

b1 = r · (sy / sx)

Page 16: Chapter 8:  Linear regression

THE REGRESSION LINE IN REAL UNITS (CONT.)

In our model, we also have an intercept (b0). The intercept is built from the means and the slope:

Our intercept is always in units of y.

b0 = ȳ − b1x̄
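The two formulas can be sketched together; the summary statistics below are hypothetical, chosen only to make the arithmetic easy to follow.

```python
# Slope and intercept from summary statistics (hypothetical values):
#   b1 = r * (sy / sx)       b0 = y_bar - b1 * x_bar
r = 0.8
s_x, s_y = 2.0, 10.0        # standard deviations of x and y
x_bar, y_bar = 5.0, 40.0    # means of x and y

b1 = r * s_y / s_x          # 0.8 * 10 / 2 = 4.0 units of y per unit of x
b0 = y_bar - b1 * x_bar     # 40 - 4 * 5 = 20.0, in units of y
print(b1, b0)  # 4.0 20.0
```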

Page 17: Chapter 8:  Linear regression

FAT VERSUS PROTEIN: AN EXAMPLE

 

The predicted fat content for a BK Broiler chicken sandwich (with 30 g of protein) is ŷ = 6.8 + 0.97(30) = 35.9 g of fat.

Page 18: Chapter 8:  Linear regression

THE REGRESSION LINE IN REAL UNITS (CONT.)

Since regression and correlation are closely related, we need to check the same conditions for regressions as we did for correlations:

Quantitative Variables Condition

Straight Enough Condition

Outlier Condition

Page 19: Chapter 8:  Linear regression

SARATOGA EXAMPLE (CONTINUED):

Below is the regression model for the relationship between the house price (in thousands of $) and the house size (in thousands of sq. ft.):

3) What does the slope of 94.454 mean?

4) What are the units of the slope?

5) Is the y-intercept meaningful?

Page 20: Chapter 8:  Linear regression

RESIDUALS REVISITED

The linear model assumes that the relationship between the two variables is a perfect straight line. The residuals are the part of the data that hasn’t been modeled.

Data = Model + Residual or (equivalently)

Residual = Data − Model. Or, in symbols:

Day 2

e = y − ŷ

Page 21: Chapter 8:  Linear regression

RESIDUALS REVISITED (CONT.)

Residuals help us to see whether the model makes sense.

When a regression model is appropriate, nothing interesting should be left behind.

After we fit a regression model, we usually plot the residuals in the hope of finding…nothing.

Page 22: Chapter 8:  Linear regression

RESIDUALS REVISITED (CONT.)

The residuals for the BK menu regression look appropriately boring. There should be no direction or shape.

Page 23: Chapter 8:  Linear regression

SARATOGA EXAMPLE (CONTINUED):

Our linear model uses the Size to estimate the price. Suppose you’re thinking of buying a home there.

6) Would you prefer to find a home with a negative or positive residual?

7) You are looking for a home of about 3000 sq. ft. About how much should you expect to pay?

8) You find a nice home that size that is selling for $300,000. What is the residual?

Page 24: Chapter 8:  Linear regression

THE RESIDUAL STANDARD DEVIATION

The standard deviation of the residuals, se, measures how much the points spread around the regression line.

Check to make sure the residual plot has about the same amount of scatter throughout. Check the Equal Variance Assumption with the Does the Plot Thicken? Condition.

We estimate the SD of the residuals using:

se = √( Σe² / (n − 2) )
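A minimal sketch of the formula, with a hypothetical list of residuals:

```python
import math

# se = sqrt( sum(e^2) / (n - 2) ): hypothetical residuals, not BK data.
residuals = [1.0, -2.0, 0.5, -0.5, 1.0]
n = len(residuals)

se = math.sqrt(sum(e ** 2 for e in residuals) / (n - 2))
print(round(se, 3))  # 1.472
```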

Page 25: Chapter 8:  Linear regression

THE RESIDUAL STANDARD DEVIATION (CONT.)

We don’t need to subtract the mean because the mean of the residuals is always zero.

Make a histogram or normal probability plot of the residuals. It should look unimodal and roughly symmetric.

Then we can apply the 68-95-99.7 Rule to see how well the regression model describes the data.

Page 26: Chapter 8:  Linear regression

R2—THE VARIATION ACCOUNTED FOR

The variation in the residuals is the key to assessing how well the model fits.

In the BK menu items example, total fat has a standard deviation of 16.4 grams. The standard deviation of the residuals is 9.2 grams.

Page 27: Chapter 8:  Linear regression

R2—THE VARIATION ACCOUNTED FOR (CONT.)

If the correlation were 1.0 and the model predicted the fat values perfectly, the residuals would all be zero and have no variation.

As it is, the correlation is 0.83—not perfection.

However, we did see that the model residuals had less variation than total fat alone.

We can determine how much of the variation is accounted for by the model and how much is left in the residuals.

Page 28: Chapter 8:  Linear regression

R2—THE VARIATION ACCOUNTED FOR (CONT.)

The squared correlation, r2, gives the fraction of the data’s variance accounted for by the model.

Thus, 1 – r2 is the fraction of the original variance left in the residuals.

For the BK model, r2 = (0.83)(0.83) = 0.69, so 1 − 0.69 = 0.31; that is, 31% of the variability in total fat has been left in the residuals.
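The BK numbers from the slide, checked in Python:

```python
# r = 0.83 for the BK data, so r^2 of the variance is accounted for by
# the model and 1 - r^2 is left in the residuals.
r = 0.83
r_sq = r ** 2
print(round(r_sq, 2), round(1 - r_sq, 2))  # 0.69 0.31
```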

Page 29: Chapter 8:  Linear regression

R2—THE VARIATION ACCOUNTED FOR (CONT.)

All regression analyses include this statistic, although by tradition, it is written R2 (pronounced “R-squared”). An R2 of 0 means that none of the variance in the data is in the model; all of it is still in the residuals.

When interpreting a regression model you need to Tell what R2 means. In the BK example, 69% of the variation in total fat is accounted for by variation in the protein content.

Page 30: Chapter 8:  Linear regression

INTERPRETING R2

Teacher’s Disclosure: Interpreting R2 can be problematic.

Many students are eager to say “Height explains 46% of a person’s weight” instead of “Differences in height explain 46% of the variability in weight” or some similar statement about variability. They have trouble both hearing and understanding the difference.

An easier-to-understand example can help. Suppose we open a pizza place, selling our plain pizzas for $8 and adding $1.50 per topping. Create a scatterplot graphing cost of pizza against the number of toppings. Because this association is perfectly linear we know that r = 1.00. Which statement correctly describes this situation?

A: The number of toppings explains 100% of the cost of a pizza.

B: Differences in the number of toppings explain 100% of the variation in the cost of pizzas.

You should now see that the first statement is false. It attributes all of the cost to the toppings, ignoring the base price of a plain pizza. The second correctly explains the association.

Page 31: Chapter 8:  Linear regression

SARATOGA EXAMPLE (CONTINUED):

Let’s go back to the regression of house Price (in thousands of $) on house Size (in thousands of sq. ft.). The R2 value is reported as 59.5%, and the standard deviation of the residuals is 53.79.

9) What does the R2 value mean about the relationship of price and size?

10) Is the correlation of price and size positive or negative? Why?

11) If we measure house size in sq. m instead, would R2 change? How about the slope of the line?

Page 32: Chapter 8:  Linear regression

HOW BIG SHOULD R2 BE?

R2 is always between 0% and 100%. What makes a “good” R2 value depends on the kind of data you are analyzing and on what you want to do with it.

The standard deviation of the residuals can give us more information about the usefulness of the regression by telling us how much scatter there is around the line.

Page 33: Chapter 8:  Linear regression

REPORTING R2

Along with the slope and intercept for a regression, you should always report R2 so that readers can judge for themselves how successful the regression is at fitting the data.

Statistics is about variation, and R2 measures the success of the regression model in terms of the fraction of the variation of y accounted for by the regression.

Page 34: Chapter 8:  Linear regression

ASSUMPTIONS AND CONDITIONS

Quantitative Variables Condition: Regression can only be done on two quantitative variables (not two categorical variables), so make sure to check this condition.

Straight Enough Condition: The linear model assumes that the relationship between the variables is linear.

A scatterplot will let you check that the assumption is reasonable.

Page 35: Chapter 8:  Linear regression

ASSUMPTIONS AND CONDITIONS (CONT.)

If the scatterplot is not straight enough, stop here. You can’t use a linear model for any two variables, even if they are related. They must have a linear association or the model won’t mean a thing.

Some nonlinear relationships can be saved by re-expressing the data to make the scatterplot more linear.

Page 36: Chapter 8:  Linear regression

ASSUMPTIONS AND CONDITIONS (CONT.)

It’s a good idea to check linearity again after computing the regression when we can examine the residuals.

Does the Plot Thicken? Condition: Look at the residual plot. For the standard deviation of the residuals to summarize the scatter, the residuals should share the same spread. Check for changing spread in the residual scatterplot.

Page 37: Chapter 8:  Linear regression

ASSUMPTIONS AND CONDITIONS (CONT.)

Outlier Condition: Watch out for outliers. Outlying points can dramatically change a regression model. Outliers can even change the sign of the slope, misleading us about the underlying relationship between the variables.

If the data seem to clump or cluster in the scatterplot, that could be a sign of trouble worth looking into further.

Page 38: Chapter 8:  Linear regression

THE TALE OF TWO REGRESSIONS

Remember that when you create a regression line, you are creating a prediction equation for the predicted y-value. You cannot use the same equation to solve for x; you would have to create a new equation with a predicted x-value (x̂). When finding the slope for predicted y in terms of x, we used:

b1 = r · (sy / sx)

To find the slope for predicted x in terms of y, we would use:

b1 = r · (sx / sy)

Thus, you need to create a whole new model to make predictions in the other direction.
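A sketch (with hypothetical data) confirming that the two slopes differ, so one fitted line cannot simply be inverted to predict in the other direction:

```python
import math

# Hypothetical data for the two-regressions sketch.
xs = [1, 2, 3, 4, 5]
ys = [2, 2, 4, 5, 6]
n = len(xs)

x_bar, y_bar = sum(xs) / n, sum(ys) / n
sxx = sum((x - x_bar) ** 2 for x in xs)
syy = sum((y - y_bar) ** 2 for y in ys)
sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))

r = sxy / math.sqrt(sxx * syy)
s_x = math.sqrt(sxx / (n - 1))
s_y = math.sqrt(syy / (n - 1))

b_y_on_x = r * s_y / s_x    # slope for predicting y from x
b_x_on_y = r * s_x / s_y    # slope for predicting x from y
print(b_y_on_x, b_x_on_y)   # not reciprocals unless |r| = 1
```

A side note on the design: the product of the two slopes is r², which is another way to see that the lines coincide only when the correlation is perfect.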

Page 39: Chapter 8:  Linear regression

STATISTICAL SOFTWARE OUTPUTS

Page 40: Chapter 8:  Linear regression

REALITY CHECK: IS THE REGRESSION REASONABLE?

Statistics don’t come out of nowhere. They are based on data. The results of a statistical analysis should reinforce your common sense, not fly in its face.

If the results are surprising, then either you’ve learned something new about the world or your analysis is wrong.

When you perform a regression, think about the coefficients and ask yourself whether they make sense.

Page 41: Chapter 8:  Linear regression

WHAT CAN GO WRONG?

Don’t fit a straight line to a nonlinear relationship.

Beware extraordinary points (y-values that stand off from the linear pattern or extreme x-values).

Don’t extrapolate beyond the data—the linear model may no longer hold outside of the range of the data.

Don’t infer that x causes y just because there is a good linear model for their relationship—association is not causation.

Don’t choose a model based on R2 alone.

Page 42: Chapter 8:  Linear regression

RECAP

When the relationship between two quantitative variables is fairly straight, a linear model can help summarize that relationship.

The regression line doesn’t pass through all the points, but it is the best compromise in the sense that it has the smallest sum of squared residuals.

Page 43: Chapter 8:  Linear regression

RECAP (CONT.)

The correlation tells us several things about the regression:

The slope of the line is based on the correlation, adjusted for the units of x and y.

For each SD in x that we are away from the x mean, we expect to be r SDs in y away from the y mean.

Since r is always between –1 and +1, each predicted y is fewer SDs away from its mean than the corresponding x was (regression to the mean).

R2 gives us the fraction of the variation in the response accounted for by the regression model.

Page 44: Chapter 8:  Linear regression

RECAP (CONT.)

The linear model makes no sense unless the Linear Relationship Assumption is satisfied.

Also, we need to check the Straight Enough Condition and Outlier Condition with a scatterplot.

For the standard deviation of the residuals, we must make the Equal Variance Assumption. We check it by looking at both the original scatterplot and the residual plot for Does the Plot Thicken? Condition.

Page 45: Chapter 8:  Linear regression

EXAMPLE

Colleges use SAT scores in the admissions process because they believe these scores provide some insight into how a high school student will perform at the college level. Suppose the entering freshmen at a certain college have a mean combined SAT score of 1833, with a standard deviation of 123. In the first semester these students attained a mean GPA of 2.66, with a standard deviation of 0.56. A scatterplot showed the association to be reasonably linear, and the correlation between SAT score and GPA was 0.47.

1) Write the equation of the regression line.
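Problem 1 needs only the summary statistics above and the slope/intercept formulas from earlier in the chapter; a sketch:

```python
# Regression line from summary statistics (from the slide):
# SAT: mean 1833, SD 123; GPA: mean 2.66, SD 0.56; r = 0.47.
r = 0.47
x_bar, s_x = 1833, 123    # combined SAT score
y_bar, s_y = 2.66, 0.56   # first-semester GPA

b1 = r * s_y / s_x        # GPA points per SAT point
b0 = y_bar - b1 * x_bar
print(round(b1, 5), round(b0, 2))  # 0.00214 -1.26
```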

Page 46: Chapter 8:  Linear regression

EXAMPLE (CONT.)

2) Explain what the y-intercept of the regression line indicates.

3) Interpret the slope of the regression line.

4) Predict the GPA of a freshman who scored a combined 2100.

Page 47: Chapter 8:  Linear regression

EXAMPLE 2

Try the calculator example on page 187. Use the following data for Arizona State tuition:

Years since 1990 | Tuition ($)
0  | 6546
1  | 6996
2  | 6996
3  | 7350
4  | 7500
5  | 7978
6  | 8377
7  | 8710
8  | 9110
9  | 9411
10 | 9800
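Alongside the calculator, the same fit can be sketched in Python with the closed-form least-squares formulas, using the tuition data from the slide:

```python
# Least-squares fit of Tuition on Years-since-1990 for the slide's data.
years = list(range(11))
tuition = [6546, 6996, 6996, 7350, 7500, 7978,
           8377, 8710, 9110, 9411, 9800]
n = len(years)

x_bar = sum(years) / n
y_bar = sum(tuition) / n
sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(years, tuition))
sxx = sum((x - x_bar) ** 2 for x in years)

b1 = sxy / sxx            # about 326 dollars per year
b0 = y_bar - b1 * x_bar   # about 6440 dollars at year 0
print(round(b1, 1), round(b0, 1))  # 326.1 6440.0
```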

Page 48: Chapter 8:  Linear regression

CHAPTER 8 BOOK PROBLEMS

PP. 192–199:
Day 1: # 1-6, 13, 21, 33
Day 2: # 7-9, 11, 12
Day 3: # 15-17, 19, 20, 22, 23
Day 4: # 27, 29, 31, 35, 37
Day 5: # 38, 41, 43, 53