
Chapter 2: Simple Comparative Experiments (SCE)

• Simple comparative experiments: experiments that compare two conditions (treatments)
  – The hypothesis testing framework
  – The two-sample t-test
  – Checking assumptions, validity


Portland Cement Formulation (page 23)

• Average tension bond strengths (ABS) differ by what seems a nontrivial amount.

• It is not obvious that this difference is large enough to imply that the two formulations really are different.

• The difference may be due to sampling fluctuation, and the two formulations may really be identical.

• Possibly another two samples would give the opposite result, with the strength of MM exceeding that of UM.

• Hypothesis testing can be used to assist in comparing these formulations.

• Hypothesis testing allows the comparison to be made on objective terms, with knowledge of the risks associated with reaching the wrong conclusion.


Graphical View of the Data: Dot Diagram, Fig. 2-1, pp. 24

• The response variable is a random variable
• Random variables are either:
  1. Discrete
  2. Continuous


Box Plots, Fig. 2-3, pp. 26

• Displays min, max, lower and upper quartile, and the median

• Histogram

Probability Distributions


• The probability structure of a random variable, y, is described by its probability distribution.

• If y is discrete: p(y) is the probability function of y (Fig. 2-4a)

• If y is continuous: f(y) is the probability density function of y (Fig. 2-4b)

Probability Distributions: Properties of probability distributions


• y discrete:

  0 ≤ p(y_j) ≤ 1   for all values of y_j
  P(y = y_j) = p(y_j)   for all values of y_j
  Σ_{all y_j} p(y_j) = 1

• y continuous:

  f(y) ≥ 0
  P(a ≤ y ≤ b) = ∫_a^b f(y) dy
  ∫_{−∞}^{∞} f(y) dy = 1

Probability Distributions: mean, variance, and expected values

  E(y) = μ = ∫_{−∞}^{∞} y f(y) dy   (y continuous)
           = Σ_{all y} y p(y)       (y discrete)

  V(y) = σ² = E[(y − μ)²] = ∫_{−∞}^{∞} (y − μ)² f(y) dy   (y continuous)
            = Σ_{all y} (y − μ)² p(y)                     (y discrete)

Probability Distributions: Basic Properties

1. E(c) = c
2. E(y) = μ
3. E(cy) = c E(y) = cμ
4. V(c) = 0
5. V(y) = σ²
6. V(cy) = c² V(y) = c²σ²

Probability Distributions: Basic Properties

• E(y₁ + y₂) = E(y₁) + E(y₂) = μ₁ + μ₂

• Cov(y₁, y₂) = E[(y₁ − μ₁)(y₂ − μ₂)]

• Covariance: a measure of the linear association between y₁ and y₂.

• If y₁ and y₂ are independent: E(y₁y₂) = E(y₁)E(y₂) = μ₁μ₂

• In general, E(y₁/y₂) ≠ E(y₁)/E(y₂)

• V(y₁ ± y₂) = V(y₁) + V(y₂) ± 2 Cov(y₁, y₂)

Sampling and Sampling Distributions

• The objective of statistical inference is to draw conclusions about a population using a sample from that population.

• Random Sampling: each of the N!/[(N − n)! n!] possible samples has an equal probability of being chosen.

• Statistic: any function of observations in a sample that does not contain unknown parameters.

• Sample mean and sample variance are both statistics.

  ȳ = (Σ_{i=1}^{n} y_i) / n ,    S² = Σ_{i=1}^{n} (y_i − ȳ)² / (n − 1)

Properties of sample mean and variance

• The sample mean is a point estimator of the population mean μ
• The sample variance is a point estimator of the population variance σ²

• A point estimator should be unbiased: its long-run average should be the parameter that is being estimated.

• An unbiased estimator should have minimum variance: a minimum-variance point estimator has a variance that is smaller than that of any other estimator of the parameter.

  E(S²) = E(SS)/(n − 1) = σ² ,   since E(SS) = (n − 1)σ²
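The unbiasedness of S² can be illustrated with a small simulation (not from the slides; numpy is assumed available): averaged over many repeated samples, S² with the n − 1 divisor converges to σ², while the n divisor underestimates it.

```python
import numpy as np

# Illustrative check: the long-run average of the sample variance S^2
# (with the n-1 divisor) approximates sigma^2; the n divisor is biased.
rng = np.random.default_rng(42)
mu, sigma, n, reps = 5.0, 2.0, 10, 20000

samples = rng.normal(mu, sigma, size=(reps, n))
s2 = samples.var(axis=1, ddof=1)         # divisor n-1 (unbiased)
s2_biased = samples.var(axis=1, ddof=0)  # divisor n (biased)

print(s2.mean())         # close to sigma^2 = 4
print(s2_biased.mean())  # close to (n-1)/n * sigma^2 = 3.6
```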

Degrees of freedom

• The quantity (n − 1) in the previous equation is called the number of degrees of freedom (DOF) of the sum of squares.

• The number of DOF of a sum of squares is equal to the number of independent elements in the sum of squares.

• Because Σ_{i=1}^{n} (y_i − ȳ) = 0, only (n − 1) of the n elements y_i − ȳ are independent, implying that SS has (n − 1) DOF.

The normal and other sampling distributions

• Normal Distribution:

  f(y) = [1/(σ√(2π))] e^{−(1/2)[(y − μ)/σ]²} ,   −∞ < y < ∞

• y is distributed normally with mean μ and variance σ²:  y ~ N(μ, σ²)

• Standard normal distribution: μ = 0 and σ² = 1

  z = (y − μ)/σ ,   z ~ N(0, 1)

Central Limit Theorem

• If y₁, y₂, …, yₙ is a sequence of n independent and identically distributed random variables with E(y_i) = μ and V(y_i) = σ² (both finite), and x = y₁ + y₂ + … + yₙ, then

  z = (x − nμ) / √(nσ²)

  has an approximate N(0, 1) distribution.

• This implies that the distribution of the sample average follows a normal distribution with mean μ and variance σ²/n.

• This approximation requires a relatively large sample size (n ≥ 30).
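A quick simulation makes the theorem concrete (an illustrative sketch, not from the slides; numpy assumed available): averages of n = 30 draws from a decidedly non-normal uniform distribution behave approximately as N(μ, σ²/n).

```python
import numpy as np

# Averages of n = 30 Uniform(0, 1) draws are approximately normal
# with mean mu = 0.5 and variance sigma^2 / n = (1/12) / 30.
rng = np.random.default_rng(0)
n, reps = 30, 50000
mu, sigma2 = 0.5, 1.0 / 12.0   # mean and variance of Uniform(0, 1)

ybar = rng.uniform(0.0, 1.0, size=(reps, n)).mean(axis=1)

print(ybar.mean())  # close to mu = 0.5
print(ybar.var())   # close to sigma2 / n ≈ 0.00278
```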

Chi-Square or χ² distribution

• If z₁, z₂, …, z_k are normally and independently distributed random variables with mean 0 and variance 1, NID(0, 1), then the random variable

  x = z₁² + z₂² + … + z_k²

  follows the chi-square distribution with k DOF, with density

  f(x) = x^{(k/2)−1} e^{−x/2} / [2^{k/2} Γ(k/2)] ,   x > 0

Chi-Square or χ² distribution

• The distribution is asymmetric (skewed), with mean μ = k and variance σ² = 2k

• Appendix III

Chi-Square or χ² distribution

• If y₁, y₂, …, yₙ is a random sample from N(μ, σ²), then

  SS/σ² = Σ_{i=1}^{n} (y_i − ȳ)² / σ²

  is distributed as chi-square with n − 1 DOF.

Chi-Square or χ² distribution

• If the observations in the sample are NID(μ, σ²), then the distribution of S² is

  [σ²/(n − 1)] χ²_{n−1}

• Thus, the sampling distribution of the sample variance is a constant times the chi-square distribution if the population is normally distributed.

Chi-Square or χ² distribution

• Example: The Acme Battery Company has developed a new cell phone battery. On average, the battery lasts 60 minutes on a single charge. The standard deviation is 4.14 minutes.

a) Suppose the manufacturing department runs a quality control test. They randomly select 7 batteries. The standard deviation of the selected batteries is 6 minutes. What would be the chi-square statistic represented by this test?

b) If another sample of 7 batteries were selected, what is the probability that the sample standard deviation would be greater than 6?

DOX 6E Montgomery 19

Chi-Square or χ² distribution – Solution

• a) We know the following:

– The standard deviation of the population is 4.14 minutes.

– The standard deviation of the sample is 6 minutes.

– The number of sample observations is 7.

• To compute the chi-square statistic, we plug these data into the chi-square equation, as shown below.

  χ² = (n − 1)s² / σ² = (7 − 1) · 6² / 4.14² = 12.6


• b) To find the probability of having a sample standard deviation S > 6, we refer to the chi-square distribution tables and find the probability corresponding to χ² = 12.6 with 6 degrees of freedom.

• This gives 0.05, which is the probability of having S > 6.
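The battery example can be checked numerically; this sketch assumes scipy is available and uses its chi-square survival function in place of the printed tables.

```python
from scipy.stats import chi2

# The battery example: chi-square statistic and upper-tail probability.
n, s, sigma = 7, 6.0, 4.14

chi_sq = (n - 1) * s**2 / sigma**2   # (7 - 1) * 6^2 / 4.14^2 ≈ 12.6
p = chi2.sf(chi_sq, df=n - 1)        # P(chi^2_6 > 12.6) ≈ 0.05

print(round(chi_sq, 2), round(p, 3))
```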


t distribution with k DOF

• If z and χ²_k are independent standard normal and chi-square random variables, the random variable

  t_k = z / √(χ²_k / k)

follows the t distribution with k DOF, with density

  f(t) = Γ[(k + 1)/2] / [√(kπ) Γ(k/2)] · [t²/k + 1]^{−(k+1)/2} ,   −∞ < t < ∞

t distribution with k DOF

• μ = 0 and σ² = k/(k − 2) for k > 2

• As k → ∞, t becomes the standard normal distribution

• If y₁, y₂, …, yₙ is a random sample from N(μ, σ²), then

  t = (ȳ − μ) / (S/√n)

  is distributed as t with n − 1 DOF.

t distribution - example

Example: • Acme Corporation manufactures light bulbs. The

CEO claims that an average Acme light bulb lasts 300 days. A researcher randomly selects 15 bulbs for testing. The sampled bulbs last an average of 290 days, with a standard deviation of 56 days. If the CEO's claim were true, what is the probability that 15 randomly selected bulbs would have an average life of no more than 290 days?


t distribution – Example

Solution

To find P(x̄ < 290):

• The first thing we need to do is compute the t score, based on the following equation:

  t = (x̄ − μ) / (s/√n) = (290 − 300) / (56/√15) = −0.692

• By symmetry, P(t < −0.692) is equivalent to P(t > 0.692)


t distribution – Example

• From the t-distribution tables, the probability is approximately 0.25 (25%), which corresponds to the probability of having the sample average less than 290.
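The light-bulb example can likewise be verified numerically; this sketch assumes scipy is available and replaces the table lookup with the t cumulative distribution function.

```python
import math
from scipy.stats import t

# The light-bulb example: t score and lower-tail probability.
n, xbar, mu, s = 15, 290.0, 300.0, 56.0

t0 = (xbar - mu) / (s / math.sqrt(n))  # ≈ -0.692
p = t.cdf(t0, df=n - 1)                # P(t_14 < -0.692) ≈ 0.25

print(round(t0, 3), round(p, 2))
```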


F distribution

• If χ²_u and χ²_v are two independent chi-square random variables with u and v DOF, then the ratio

  F_{u,v} = (χ²_u / u) / (χ²_v / v)

follows the F distribution with u numerator DOF and v denominator DOF, with density

  h(x) = Γ[(u + v)/2] (u/v)^{u/2} x^{(u/2)−1} / { Γ(u/2) Γ(v/2) [(u/v)x + 1]^{(u+v)/2} } ,   0 < x < ∞

F distribution

• Consider two independent normal populations with common variance σ². If y₁₁, y₁₂, …, y₁ₙ₁ is a random sample of n₁ observations from the first population and y₂₁, y₂₂, …, y₂ₙ₂ is a random sample of n₂ observations from the second population, then

  S₁²/S₂² ~ F_{n₁−1, n₂−1}


The Hypothesis Testing Framework

• Statistical hypothesis testing is a useful framework for many experimental situations

• We will use a procedure known as the two-sample t-test


The Hypothesis Testing Framework

• Sampling from a normal distribution
• Statistical hypotheses:

  H₀: μ₁ = μ₂
  H₁: μ₁ ≠ μ₂

Estimation of Parameters

  ȳ = (1/n) Σ_{i=1}^{n} y_i   estimates the population mean μ

  S² = [1/(n − 1)] Σ_{i=1}^{n} (y_i − ȳ)²   estimates the variance σ²

Summary Statistics (pg. 36)

  Formulation 1        Formulation 2
  "New recipe"         "Original recipe"

  ȳ₁ = 16.76           ȳ₂ = 17.04
  S₁² = 0.100          S₂² = 0.061
  S₁ = 0.316           S₂ = 0.248
  n₁ = 10              n₂ = 10

How the Two-Sample t-Test Works:

• Use the sample means to draw inferences about the population means:

  ȳ₁ − ȳ₂ = 16.76 − 17.04 = −0.28   (difference in sample means)

• Standard deviation of a sample mean:  σ²_ȳ = σ²/n

• This suggests a statistic:

  Z₀ = (ȳ₁ − ȳ₂) / √(σ₁²/n₁ + σ₂²/n₂)

How the Two-Sample t-Test Works:

• Use S₁² and S₂² to estimate σ₁² and σ₂². The previous ratio becomes

  (ȳ₁ − ȳ₂) / √(S₁²/n₁ + S₂²/n₂)

• However, we have the case where σ₁² = σ₂² = σ². Pool the individual sample variances:

  S_p² = [(n₁ − 1)S₁² + (n₂ − 1)S₂²] / (n₁ + n₂ − 2)

How the Two-Sample t-Test Works:

• The test statistic is

  t₀ = (ȳ₁ − ȳ₂) / [S_p √(1/n₁ + 1/n₂)]

• Values of t₀ that are near zero are consistent with the null hypothesis

• Values of t₀ that are very different from zero are consistent with the alternative hypothesis

• t₀ is a "distance" measure: how far apart the averages are, expressed in standard deviation units

• Notice the interpretation of t₀ as a signal-to-noise ratio


The Two-Sample (Pooled) t-Test

  S_p² = [(n₁ − 1)S₁² + (n₂ − 1)S₂²] / (n₁ + n₂ − 2)
       = [9(0.100) + 9(0.061)] / (10 + 10 − 2) = 0.081

  S_p = 0.284

  t₀ = (ȳ₁ − ȳ₂) / [S_p √(1/n₁ + 1/n₂)]
     = (16.76 − 17.04) / [0.284 √(1/10 + 1/10)] = −2.20

The two sample means are a little over two standard deviations apart. Is this a "large" difference?
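The pooled computation above can be reproduced in a few lines from the summary statistics alone; this is a plain-Python sketch of the same arithmetic.

```python
import math

# The Portland cement pooled t-test from the summary statistics.
y1bar, s1_sq, n1 = 16.76, 0.100, 10
y2bar, s2_sq, n2 = 17.04, 0.061, 10

sp_sq = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)
sp = math.sqrt(sp_sq)  # pooled standard deviation
t0 = (y1bar - y2bar) / (sp * math.sqrt(1 / n1 + 1 / n2))

print(round(sp_sq, 4), round(sp, 3), round(t0, 2))  # ≈ 0.0805, 0.284, -2.21
```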

The Two-Sample (Pooled) t-Test

• So far, we haven't really done any "statistics"

• We need an objective basis for deciding how large the test statistic t₀ really is.

• t₀ = −2.20

The Two-Sample (Pooled) t-Test

• A value of t₀ between −2.101 and 2.101 is consistent with equality of means

• It is possible for the means to be equal and t₀ to exceed either 2.101 or −2.101, but it would be a "rare event" … this leads to the conclusion that the means are different

• Could also use the P-value approach

• t₀ = −2.20

Use of P-value in Hypothesis testing

• P-value: smallest level of significance that would lead to rejection of the null hypothesis Ho

• It is customary to call the test statistic significant when Ho is rejected. Therefore, the P-value is the smallest level at which the data are significant.


Minitab Two-Sample t-Test Results

Checking Assumptions – The Normal Probability Plot

• Assumptions:
  1. Equal variance
  2. Normality

• Procedure:
  1. Rank the observations in the sample in ascending order.
  2. Plot the ordered observations vs. the observed cumulative frequency (j − 0.5)/n.
  3. If the plotted points deviate significantly from a straight line, the hypothesized model is not appropriate.


Checking Assumptions – The Normal Probability Plot

• The mean is estimated as the 50th percentile on the probability plot.

• The standard deviation is estimated as the difference between the 84th and 50th percentiles.

• The assumption of equal population variances is simply verified by comparing the slopes of the two straight lines in Fig. 2-11.
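The percentile-based estimates can be demonstrated on simulated data (an illustrative sketch, not from the slides; numpy assumed available). Since the standard normal 84th percentile is z ≈ 0.99, the 84th-minus-50th percentile difference recovers σ almost exactly.

```python
import numpy as np

# For a normal sample, the 50th percentile estimates mu and the
# 84th-minus-50th percentile difference estimates sigma (z_0.84 ≈ 1).
rng = np.random.default_rng(1)
mu, sigma = 10.0, 2.0
y = rng.normal(mu, sigma, size=10000)

p50, p84 = np.percentile(y, [50, 84])
print(p50)        # close to mu = 10
print(p84 - p50)  # close to sigma = 2
```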


Importance of the t-Test

• Provides an objective framework for simple comparative experiments

• Could be used to test all relevant hypotheses in a two-level factorial design.


Confidence Intervals (See pg. 43)

• Hypothesis testing gives an objective statement concerning the difference in means, but it doesn't specify "how different" they are

• General form of a confidence interval:

  L ≤ θ ≤ U ,   where  P(L ≤ θ ≤ U) = 1 − α

• The 100(1 − α)% confidence interval on the difference in two means:

  ȳ₁ − ȳ₂ − t_{α/2, n₁+n₂−2} S_p √(1/n₁ + 1/n₂) ≤ μ₁ − μ₂ ≤ ȳ₁ − ȳ₂ + t_{α/2, n₁+n₂−2} S_p √(1/n₁ + 1/n₂)
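Applied to the cement data, the interval can be computed directly; this sketch assumes scipy is available for the t percentage point.

```python
import math
from scipy.stats import t

# 95% confidence interval on mu1 - mu2 for the cement data.
y1bar, s1_sq, n1 = 16.76, 0.100, 10
y2bar, s2_sq, n2 = 17.04, 0.061, 10

sp = math.sqrt(((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2))
half_width = t.ppf(0.975, n1 + n2 - 2) * sp * math.sqrt(1 / n1 + 1 / n2)

diff = y1bar - y2bar
print(diff - half_width, diff + half_width)  # roughly (-0.55, -0.01)
```

Because the interval excludes zero, it agrees with the earlier test's rejection of equal means.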

Hypothesis testing: the case where σ₁² ≠ σ₂²

• The test statistic becomes

  t₀ = (ȳ₁ − ȳ₂) / √(S₁²/n₁ + S₂²/n₂)

• This statistic is not distributed exactly as t.

• The distribution of t₀ is well approximated by t if we use

  v = (S₁²/n₁ + S₂²/n₂)² / [ (S₁²/n₁)²/(n₁ − 1) + (S₂²/n₂)²/(n₂ − 1) ]

as the DOF.
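The approximate-DOF formula is easy to sanity-check in code. The numbers below are made up for illustration, not from the slides; they show that when the two per-sample variances of the mean are equal, the formula collapses to the pooled value n₁ + n₂ − 2, and unequal variances pull the DOF down.

```python
# Sketch of the approximate (Satterthwaite-type) degrees of freedom.
def approx_df(s1_sq, n1, s2_sq, n2):
    num = (s1_sq / n1 + s2_sq / n2) ** 2
    den = (s1_sq / n1) ** 2 / (n1 - 1) + (s2_sq / n2) ** 2 / (n2 - 1)
    return num / den

# Equal variances, equal sizes: reduces to n1 + n2 - 2 = 18.
print(approx_df(4.0, 10, 4.0, 10))
# Unequal variances lower the degrees of freedom:
print(approx_df(9.0, 10, 1.0, 10))
```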

Hypothesis testing: the case where σ₁² and σ₂² are known

• The test statistic becomes

  z₀ = (ȳ₁ − ȳ₂) / √(σ₁²/n₁ + σ₂²/n₂)

• If both populations are normal, or if the sample sizes are large enough, the distribution of z₀ is N(0, 1) if the null hypothesis is true. Thus, the critical region would be found using the normal distribution rather than the t.

• We would reject H₀ if |z₀| > z_{α/2}, where z_{α/2} is the upper α/2 percentage point of the standard normal distribution.

Hypothesis testing: the case where σ₁² and σ₂² are known

• The 100(1 − α) percent confidence interval:

  ȳ₁ − ȳ₂ − z_{α/2} √(σ₁²/n₁ + σ₂²/n₂) ≤ μ₁ − μ₂ ≤ ȳ₁ − ȳ₂ + z_{α/2} √(σ₁²/n₁ + σ₂²/n₂)

Hypothesis testing: comparing a single mean to a specified value

• The hypotheses are: H₀: μ = μ₀ and H₁: μ ≠ μ₀

• If the population is normal with known variance, or if the population is non-normal but the sample size is large enough, then the hypothesis may be tested by direct application of the normal distribution.

• Test statistic:

  z₀ = (ȳ − μ₀) / (σ/√n)

• If H₀ is true, then the distribution of z₀ is N(0, 1). Therefore, H₀ is rejected if |z₀| > z_{α/2}.

Hypothesis testing: comparing a single mean to a specified value

The value of μ₀ is usually determined in one of three ways:

1. From past evidence, knowledge, or experimentation

2. The result of some theory or model describing the situation under study

3. The result of contractual specifications

Hypothesis testing: comparing a single mean to a specified value

• If the variance of the population is unknown, we must assume that the population is normally distributed.

• Test statistic:

  t₀ = (ȳ − μ₀) / (S/√n)

• H₀ is rejected if |t₀| > t_{α/2, n−1}

• The 100(1 − α) percent confidence interval:

  ȳ − t_{α/2, n−1} S/√n ≤ μ ≤ ȳ + t_{α/2, n−1} S/√n

The paired comparison problem: the two-tip hardness experiment

• Statistical model:

  y_ij = μ_i + β_j + ε_ij ,   i = 1, 2 ;  j = 1, 2, …, 10

• jth paired difference:  d_j = y_1j − y_2j

• Expected value of the paired difference:  μ_d = μ₁ − μ₂

• Testing hypotheses: H₀: μ_d = 0 and H₁: μ_d ≠ 0

• Test statistic:

  t₀ = d̄ / (S_d/√n) ,   where  d̄ = (1/n) Σ_{j=1}^{n} d_j
  and  S_d = [ Σ_{j=1}^{n} (d_j − d̄)² / (n − 1) ]^{1/2}

The paired comparison problem: the two-tip hardness experiment

• Randomized block design

• Block: homogeneous experimental unit

• The block represents a restriction on complete randomization because the treatment combinations are only randomized within the block

                              Randomized Block    Complete Randomization
  DOF                         n − 1 = 9           2n − 2 = 18
  Standard deviation          S_d = 1.2           S_p = 2.32
  Confidence interval
  on μ₁ − μ₂                  −0.1 ± 0.86         −0.1 ± 2.18
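The paired analysis can be sketched in plain Python. The hardness readings below are hypothetical, constructed so that the differences reproduce the summary values in the table above (d̄ = −0.10, S_d ≈ 1.20); they are not the book's data.

```python
import math

# Hypothetical paired hardness readings (tip 1 vs. tip 2 on the same
# ten specimens), chosen to match d-bar = -0.10 and S_d ≈ 1.20.
tip1 = [7, 3, 3, 4, 8, 3, 2, 9, 5, 4]
tip2 = [6, 3, 5, 3, 8, 2, 4, 9, 4, 5]

d = [a - b for a, b in zip(tip1, tip2)]       # paired differences
n = len(d)
dbar = sum(d) / n
sd = math.sqrt(sum((x - dbar) ** 2 for x in d) / (n - 1))
t0 = dbar / (sd / math.sqrt(n))               # t with n - 1 = 9 DOF

print(dbar, round(sd, 2), round(t0, 3))
```

Note how pairing removes the specimen-to-specimen variability from S_d, at the cost of the smaller n − 1 degrees of freedom shown in the table.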

Inferences about the variability of normal distributions

• H₀: σ² = σ₀² and H₁: σ² ≠ σ₀²

• Test statistic:

  χ₀² = SS/σ₀² = Σ_{i=1}^{n} (y_i − ȳ)² / σ₀² = (n − 1)S²/σ₀²

• H₀ is rejected if χ₀² > χ²_{α/2, n−1} or χ₀² < χ²_{1−α/2, n−1}

• The 100(1 − α) percent confidence interval:

  (n − 1)S²/χ²_{α/2, n−1} ≤ σ² ≤ (n − 1)S²/χ²_{1−α/2, n−1}

Inferences about the variability of normal distributions

• H₀: σ₁² = σ₂² and H₁: σ₁² ≠ σ₂²

• Test statistic:

  F₀ = S₁²/S₂²

• H₀ is rejected if F₀ > F_{α/2, n₁−1, n₂−1} or F₀ < F_{1−α/2, n₁−1, n₂−1}
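As a sketch, this F test can be applied to the cement data to check the equal-variance assumption used by the pooled t-test (scipy assumed available for the F percentage points).

```python
from scipy.stats import f

# F test for equality of variances, applied to the cement data.
s1_sq, n1 = 0.100, 10
s2_sq, n2 = 0.061, 10

F0 = s1_sq / s2_sq                    # ≈ 1.64
upper = f.ppf(0.975, n1 - 1, n2 - 1)  # F_{0.025, 9, 9} ≈ 4.03
lower = f.ppf(0.025, n1 - 1, n2 - 1)  # F_{0.975, 9, 9} = 1/4.03

# F0 lies between the critical values, so there is no evidence
# against the assumption of equal variances.
print(round(F0, 2), lower < F0 < upper)
```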