Chapter 2


“Undergraduate Econometrics” by: R. Carter Hill, William E. Griffiths, George G. Judge. John Wiley & Sons, Inc. Publishers

Transcript of Chapter 2


Some Basic Probability Concepts

2.1 Experiments, Outcomes and Random Variables

• A random variable is a variable whose value is unknown until it is observed. The

value of a random variable results from an experiment; it is not perfectly predictable.

• A discrete random variable can take only a finite number of values, which can be

counted by using the positive integers.

• Discrete variables are also commonly used in economics to record qualitative, or

nonnumerical, characteristics. In this role they are sometimes called dummy

variables.

• A continuous random variable can take any real value (not just whole numbers) in an

interval on the real number line.


2.2 The Probability Distribution of a Random Variable

• The values of random variables are not known until an experiment is carried out, and

all possible values are not equally likely. We can make probability statements about

certain values occurring by specifying a probability distribution for the random

variable.

• If event A is an outcome of an experiment, then the probability of A, which we write

as P(A), is the relative frequency with which event A occurs in many repeated trials of

the experiment. For any event, 0 ≤ P(A) ≤ 1, and the total probability of all possible

events is one.

2.2.1 Probability Distributions of Discrete Random Variables

• When the values of a discrete random variable are listed with their chances of

occurring, the resulting table of outcomes is called a probability function or a

probability density function.


• The probability density function spreads the total of 1 “unit” of probability over the set

of possible values that a random variable can take.

• Consider a discrete random variable, X = the number of heads obtained in a single flip

of a coin. The values that X can take are x = 0,1. If the coin is “fair” then the

probability of a head occurring is 0.5. The probability density function, say f(x), for

the random variable X is

Coin Side x f(x)

tail 0 0.5

head 1 0.5

• “The probability that X takes the value 1 is 0.5” means that the two values 0 and 1

have an equal chance of occurring and, if we flipped a fair coin a very large number of

times, the value x = 1 would occur 50 percent of the time. We can denote this as

P[X = 1] = f(1) = 0.5, where P[X = 1] is the probability of the event that the random

variable X = 1.

• For a discrete random variable X the value of the probability density function f(x) is

the probability that the random variable X takes the value x, f(x) = P(X=x).

• Therefore, 0 ≤ f(x) ≤ 1 and, if X takes n values x1, …, xn, then

$f(x_1) + f(x_2) + \cdots + f(x_n) = 1$.
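As a minimal illustration, the coin-flip probability function above can be stored and checked in plain Python; the dictionary f and the checks below are illustrative, not from the text:

```python
# Probability function of X = number of heads in one flip of a fair coin.
f = {0: 0.5, 1: 0.5}  # value x -> f(x) = P(X = x); 0 = tail, 1 = head

# Each probability lies between 0 and 1 ...
assert all(0.0 <= p <= 1.0 for p in f.values())
# ... and the single "unit" of probability is fully spread over the values.
assert sum(f.values()) == 1.0

print("P(X = 1) =", f[1])  # 0.5
```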

2.2.2 The Probability Density Function of a Continuous Random Variable

• For the continuous random variable Y the probability density function f(y) can be

represented by an equation, which can be described graphically by a curve. For

continuous random variables the area under the probability density function

corresponds to probability.

• For example, the probability density function of a continuous random variable Y might

be represented as in Figure 2.1. The total area under a probability density function is 1,


and the probability that Y takes a value in the interval [a, b], or P[a ≤ Y ≤ b], is the area

under the probability density function between the values y = a and y = b. This is

shown in Figure 2.1 by the shaded area.

• Since a continuous random variable takes an uncountably infinite number of values,

the probability of any one occurring is zero. That is, P[Y = a] = P[a ≤ Y ≤ a] = 0.

• In calculus, the integral of a function defines the area under it, and therefore

$P[a \le Y \le b] = \int_{y=a}^{b} f(y)\,dy$.

• For any random variable x, the probability that x is less than or equal to a is denoted

F(a). F(x) is the cumulative distribution function (cdf).

• For a discrete random variable,

$F(x) = \sum_{X \le x} f(x) = \mathrm{Prob}(X \le x)$


In view of the definition of f(x),

$f(x_i) = F(x_i) - F(x_{i-1})$

• For a continuous random variable,

$F(x) = \int_{-\infty}^{x} f(t)\,dt$

and

$f(x) = \frac{dF(x)}{dx}$

• In both the continuous and discrete cases, F(x) must satisfy the following properties:


1. $0 \le F(x) \le 1$.

2. If $x \ge y$, then $F(x) \ge F(y)$.

3. $F(+\infty) = 1$.

4. $F(-\infty) = 0$.

5. $\mathrm{Prob}(a < x \le b) = F(b) - F(a)$.
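As a sketch, the cdf of a discrete random variable can be built from its pdf and checked against these properties; the four-point distribution below is illustrative:

```python
# Build the cdf F from the pdf f of a discrete random variable, then
# check the properties above. The values are exact binary fractions so
# the printed results are clean.
from itertools import accumulate

xs = [1, 2, 3, 4]               # possible values, in increasing order
fs = [0.125, 0.25, 0.5, 0.125]  # f(x_i) = P(X = x_i)

Fs = list(accumulate(fs))       # F(x_i) = P(X <= x_i)

assert all(0.0 <= F <= 1.0 for F in Fs)         # property 1
assert all(a <= b for a, b in zip(Fs, Fs[1:]))  # property 2 (monotone)
assert Fs[-1] == 1.0                            # property 3 at the top value

# f(x_i) = F(x_i) - F(x_{i-1}), taking F = 0 below the smallest value
recovered = [Fs[0]] + [b - a for a, b in zip(Fs, Fs[1:])]
assert recovered == fs

# property 5: Prob(a < X <= b) = F(b) - F(a), here with a = 1, b = 3
print("P(1 < X <= 3) =", Fs[2] - Fs[0])  # 0.25 + 0.5 = 0.75
```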


2.3 Expected Values Involving a Single Random Variable

• When working with random variables, it is convenient to summarize their probability

characteristics using the concept of mathematical expectation. These expectations will

make use of summation notation.

2.3.1 The Rules of Summation

1. If X takes n values x1, ..., xn then their sum is

$\sum_{i=1}^{n} x_i = x_1 + x_2 + \cdots + x_n$

2. If a is a constant, then

$\sum_{i=1}^{n} a = na$


3. If a is a constant then it can be pulled out in front of a summation

$\sum_{i=1}^{n} a x_i = a \sum_{i=1}^{n} x_i$

4. If X and Y are two variables, then

$\sum_{i=1}^{n} (x_i + y_i) = \sum_{i=1}^{n} x_i + \sum_{i=1}^{n} y_i$

5. If a and b are constants, then

$\sum_{i=1}^{n} (a + b x_i) = na + b \sum_{i=1}^{n} x_i$

6. If X and Y are two variables, then

$\sum_{i=1}^{n} (a x_i + b y_i) = a \sum_{i=1}^{n} x_i + b \sum_{i=1}^{n} y_i$


7. The arithmetic mean (average) of n values of X is

$\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n} = \frac{x_1 + x_2 + \cdots + x_n}{n}.$

Also,

$\sum_{i=1}^{n} (x_i - \bar{x}) = 0$

8. We often use an abbreviated form of the summation notation. For example, if f(x) is a

function of the values of X,

$\sum_{i=1}^{n} f(x_i) = f(x_1) + f(x_2) + \cdots + f(x_n)$

$= \sum_{i} f(x_i)$    ("Sum over all values of the index $i$")

$= \sum_{x} f(x)$    ("Sum over all possible values of $X$")


9. Several summation signs can be used in one expression. Suppose the variable Y takes

n values and X takes m values, and let f(x,y) = x+y. Then the double summation of

this function is

$\sum_{i=1}^{m} \sum_{j=1}^{n} f(x_i, y_j) = \sum_{i=1}^{m} \sum_{j=1}^{n} (x_i + y_j)$

To evaluate such expressions, work from the innermost sum outward. First set i = 1 and

sum over all values of j, and so on. To illustrate, let m = 2 and n = 3. Then

$\sum_{i=1}^{2} \sum_{j=1}^{3} f(x_i, y_j) = \sum_{i=1}^{2} \left[ f(x_i, y_1) + f(x_i, y_2) + f(x_i, y_3) \right]$

$= f(x_1, y_1) + f(x_1, y_2) + f(x_1, y_3)$

$\quad + f(x_2, y_1) + f(x_2, y_2) + f(x_2, y_3)$

The order of summation does not matter, so

$\sum_{i=1}^{m} \sum_{j=1}^{n} f(x_i, y_j) = \sum_{j=1}^{n} \sum_{i=1}^{m} f(x_i, y_j)$
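A quick numerical check of rule 7 and rule 9, using illustrative values for X and Y:

```python
# Verify two of the summation rules with made-up data.
x = [2.0, 5.0, 11.0]  # n = 3 values of X
y = [1.0, 4.0]        # m = 2 values of Y

# Rule 7: deviations from the arithmetic mean sum to zero.
xbar = sum(x) / len(x)
assert abs(sum(xi - xbar for xi in x)) < 1e-12

# Rule 9: the order of summation in a double sum does not matter.
s_xy = sum(xi + yj for xi in x for yj in y)  # x outside, y inside
s_yx = sum(xi + yj for yj in y for xi in x)  # y outside, x inside
assert s_xy == s_yx
print("double sum =", s_xy)  # 2*sum(x) + 3*sum(y) = 36 + 15 = 51
```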


2.3.2 The Mean of a Random Variable

• The expected value of a random variable X is the average value of the random variable

in an infinite number of repetitions of the experiment (repeated samples); it is denoted

E[X].

• If X is a discrete random variable which can take the values x1, x2,…,xn with

probability density values f(x1), f(x2),…, f(xn), the expected value of X is

$E[X] = x_1 f(x_1) + x_2 f(x_2) + \cdots + x_n f(x_n)$

$= \sum_{i=1}^{n} x_i f(x_i)$

$= \sum_{x} x f(x)$    (2.3.1)
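As an illustration of Equation (2.3.1), the sketch below computes E[X] for a fair six-sided die, an assumed example distribution; exact fractions keep the arithmetic clean:

```python
# E[X] = sum over x of x f(x), for X = the number shown by a fair die.
from fractions import Fraction

values = range(1, 7)
f = {x: Fraction(1, 6) for x in values}  # f(x) = P(X = x)

EX = sum(x * f[x] for x in values)
print("E[X] =", EX, "=", float(EX))      # 7/2 = 3.5
```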


• If X is a continuous random variable, the expected value of X is

$E[X] = \int_{x} x f(x)\,dx$

The notation $\int_x$ means integral over the entire range of values of $x$.

2.3.3 Expectation of a Function of a Random Variable

• If X is a discrete random variable and g(X) is a function of it, then

$E[g(X)] = \sum_{x} g(x) f(x)$    (2.3.2a)


However, $E[g(X)] \ne g(E[X])$ in general.
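A sketch contrasting E[g(X)] with g(E[X]) for the same fair die, with g(x) = x², an assumed choice of function:

```python
# E[g(X)] via Equation (2.3.2a) versus g(E[X]): the two differ in general.
from fractions import Fraction

values = range(1, 7)
f = {x: Fraction(1, 6) for x in values}

g = lambda x: x ** 2
E_gX = sum(g(x) * f[x] for x in values)  # E[g(X)] = sum of g(x) f(x)
g_EX = g(sum(x * f[x] for x in values))  # g(E[X])

print("E[X^2] =", E_gX)  # 91/6, about 15.17
print("E[X]^2 =", g_EX)  # 49/4 = 12.25, so E[g(X)] != g(E[X])
```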

• If X is a discrete random variable and g(X) = g1(X) + g2(X), where g1(X) and g2(X) are

functions of X, then

$E[g(X)] = E[g_1(X) + g_2(X)] = \sum_{x} [g_1(x) + g_2(x)] f(x)$

$= \sum_{x} g_1(x) f(x) + \sum_{x} g_2(x) f(x)$

$= E[g_1(X)] + E[g_2(X)]$    (2.3.2b)

The expected value of a sum of functions of random variables, or the expected value

of a sum of random variables, is always the sum of the expected values.


• The idea of how to determine the expected value of a function of a continuous random

variable Y, say g(y), is exactly the same as in the discrete case. The terms g(y) must be

weighted by f(y) and then all those products summed. This operation is carried out via

integration, but the interpretation of the result is the same. Specifically, if Y is a

continuous random variable, then

$E[g(Y)] = \int_{y} g(y) f(y)\,dy$

• Some properties of mathematical expectation work for both discrete and continuous

random variables. For the discrete case, these results are as follows:

1. If c is a constant,

$E[c] = c$    (2.3.3a)

2. If c is a constant and X is a random variable, then

$E[cX] = cE[X]$    (2.3.3b)


3. If a and c are constants and X is a random variable, then

$E[a + cX] = a + cE[X]$    (2.3.3c)

4. If a, b, and c are constants and X and Y are random variables, then

$E[aX + bY + c] = aE[X] + bE[Y] + c$

• A conditional mean is the mean of the conditional distribution and is defined by

$E[y|x] = \sum_{y} y f(y|x)$  if $y$ is discrete

$E[y|x] = \int_{y} y f(y|x)\,dy$  if $y$ is continuous

The conditional mean function E[y|x] is called the regression of y on x.


2.3.4 The Variance of a Random Variable

• The variance of a discrete or continuous random variable X, based on the rules in

Section 2.3.3, is defined as the expected value of $g(X) = [X - E(X)]^2$. Algebraically,

$\mathrm{var}(X) = \sigma^2 = E[g(X)] = E[(X - E(X))^2] = E[(X - \mu)^2]$

$= E[X^2] - [E(X)]^2$

$= \sum_{x} (x - \mu)^2 f(x)$  if $X$ is discrete

$= \int_{x} (x - \mu)^2 f(x)\,dx$  if $X$ is continuous    (2.3.4)

where $E[X] = \mu$. Examining $g(X) = [X - E(X)]^2$, we observe that the variance of a

random variable is the average squared difference between the random variable X and

its mean E[X]. Thus, the variance of a random variable is the weighted


average of the squared differences (or distances) between the values x of the random

variable X and the mean (center of the probability density function) of the random

variable. The larger the variance of a random variable, the greater the average squared

distance between the values of the random variable and its mean, or the more “spread

out” are the values of the random variable.

• Let a and c be constants, and let Z = a + cX. Then Z is a random variable and its

variance is

$\mathrm{var}(a + cX) = E[(a + cX) - E(a + cX)]^2 = c^2\,\mathrm{var}(X)$    (2.3.5)

The result in Equation (2.3.5) says that if you:

1. Add a constant to a random variable, it does not affect its variance, or dispersion.

This fact follows, since adding a constant to a random variable shifts the location of

its probability density function but leaves its shape, and dispersion, unaffected.


2. Multiply a random variable by a constant, the variance is multiplied by the square

of the constant.
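Equation (2.3.5) can be checked exactly on a small discrete distribution; the pdf and the constants a and c below are assumptions for illustration:

```python
# Check var(a + cX) = c**2 * var(X) on an illustrative three-point pdf.
values = [0, 1, 2]
f = {0: 0.25, 1: 0.5, 2: 0.25}
a, c = 10.0, 3.0

def mean(pdf):
    return sum(v * p for v, p in pdf.items())

def var(pdf):
    m = mean(pdf)
    return sum((v - m) ** 2 * p for v, p in pdf.items())

# Z = a + cX has the same probabilities attached to shifted, scaled values.
z_pdf = {a + c * v: f[v] for v in values}

assert var(z_pdf) == c ** 2 * var(f)
print("var(X) =", var(f), " var(a + cX) =", var(z_pdf))  # 0.5 and 4.5
```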

• The square root of the variance of a random variable is called the standard deviation;

it is denoted by σ. It, too, measures the spread or dispersion of a distribution, and it

has the advantage of being in the same units of measure as the random variable.

• A conditional variance is the variance of the conditional distribution:

$\mathrm{var}[y|x] = E[(y - E[y|x])^2 \mid x]$

$= \sum_{y} (y - E[y|x])^2 f(y|x)$  if $y$ is discrete

$\mathrm{var}[y|x] = \int_{y} (y - E[y|x])^2 f(y|x)\,dy$  if $y$ is continuous

The computation can be simplified by using $\mathrm{var}[y|x] = E[y^2|x] - (E[y|x])^2$.


• Two other measures often used to describe a probability distribution are

$\text{skewness} = E[(x - \mu)^3]$

and

$\text{kurtosis} = E[(x - \mu)^4]$

Skewness is a measure of the asymmetry of a distribution. For symmetric

distributions, $f(\mu - x) = f(\mu + x)$ and skewness = 0. For asymmetric distributions, the

skewness will be positive (negative) if the “long tail” is in the positive (negative)

direction. Kurtosis is a measure of the thickness of the tails of the distribution.


• Two common measures are

$\text{skewness coefficient} = \frac{E[(x - \mu)^3]}{\sigma^3}$

and

$\text{degree of excess} = \frac{E[(x - \mu)^4]}{\sigma^4} - 3$

The second is based on the normal distribution, which has an excess of zero.
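A sketch computing both standardized measures for a small, deliberately right-skewed distribution; the pdf is illustrative:

```python
# Skewness coefficient and degree of excess from a discrete pdf.
values = [0, 1, 2, 3]
f = {0: 0.4, 1: 0.3, 2: 0.2, 3: 0.1}

mu = sum(x * p for x, p in f.items())

def central_moment(k):
    return sum((x - mu) ** k * p for x, p in f.items())

sigma = central_moment(2) ** 0.5
skew_coef = central_moment(3) / sigma ** 3
excess = central_moment(4) / sigma ** 4 - 3  # zero for a normal distribution

print("skewness coefficient =", skew_coef)  # about 0.6: long right tail
print("degree of excess     =", excess)     # about -0.8: thin tails
```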


2.4 Using Joint Probability Density Functions

Frequently we want to make probability statements about more than one random variable

at a time. To answer probability questions involving two or more random variables, we

must know their joint probability density function. For the continuous random variables

X and Y, we use f(x,y) to represent their joint density function. A typical joint density

function might look something like Figure 2.3. See Example 2.5.

2.4.1 Marginal Probability Density Functions

• If X and Y are two discrete random variables then


$f(x) = \sum_{y} f(x, y)$  for each value $X$ can take

$f(y) = \sum_{x} f(x, y)$  for each value $Y$ can take    (2.4.1)

Note that the summations in Equation (2.4.1) are over the other random variable, the

one that we are eliminating from the joint probability density function. If the random

variables are continuous the same idea works, with integrals replacing the summation

sign as follows:

$f(x) = \int_{y} f(x, y)\,dy$

$f(y) = \int_{x} f(x, y)\,dx$
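A sketch of Equation (2.4.1) with the joint pdf held as a table; the four joint probabilities are illustrative (exact binary fractions, so the printed marginals are clean):

```python
# Marginal pdfs from a joint pdf: sum out the other random variable.
joint = {  # (x, y) -> f(x, y)
    (0, 0): 0.125, (0, 1): 0.375,
    (1, 0): 0.125, (1, 1): 0.375,
}
xs = sorted({x for x, _ in joint})
ys = sorted({y for _, y in joint})

fx = {x: sum(joint[(x, y)] for y in ys) for x in xs}  # sum out y
fy = {y: sum(joint[(x, y)] for x in xs) for y in ys}  # sum out x

print("f(x):", fx)  # {0: 0.5, 1: 0.5}
print("f(y):", fy)  # {0: 0.25, 1: 0.75}
assert sum(fx.values()) == sum(fy.values()) == 1.0
```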


2.4.2 Conditional Probability Density Functions

• Often the chances of an event occurring are conditional on the occurrence of another

event. For discrete random variables X and Y, conditional probabilities can be

calculated from the joint probability density function f(x,y) and the marginal

probability density function of the conditioning random variable. Specifically, the

probability that the random variable X takes the value x given that Y = y, is written P[X

= x|Y = y]. This conditional probability is given by the conditional probability density

function f(x|y):


$f(x|y) = P[X = x \mid Y = y] = \frac{f(x, y)}{f(y)}$

$f(y|x) = P[Y = y \mid X = x] = \frac{f(x, y)}{f(x)}$    (2.4.2)

2.4.3 Independent Random Variables

• Two random variables are statistically independent, or independently distributed, if

knowing the value that one will take does not reveal anything about what value the

other may take. When random variables are statistically independent, their joint

probability density function factors into the product of their individual probability

density functions, and vice versa. If X and Y are independent random variables, then

$f(x, y) = f(x) f(y)$    (2.4.3)


for each and every pair of values x and y. The converse is also true.

• If X1, …, Xn are statistically independent, the joint probability density function can be

factored and written as

$f(x_1, x_2, \ldots, x_n) = f_1(x_1) f_2(x_2) \cdots f_n(x_n)$    (2.4.4)

• If X and Y are independent random variables, then the conditional probability density

function of X, given that Y = y is

$f(x|y) = \frac{f(x, y)}{f(y)} = \frac{f(x) f(y)}{f(y)} = f(x)$    (2.4.5)

for each and every pair of values x and y. The converse is also true.
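A sketch of Equations (2.4.2) and (2.4.5) on a joint pdf that is independent by construction; the marginals are illustrative:

```python
# Conditional pdfs from a joint table; under independence f(x|y) = f(x).
fx = {0: 0.4, 1: 0.6}  # marginal pdf of X
fy = {0: 0.3, 1: 0.7}  # marginal pdf of Y
joint = {(x, y): fx[x] * fy[y] for x in fx for y in fy}  # f(x,y) = f(x)f(y)

# f(x|y) = f(x, y) / f(y), and here it collapses to f(x) for every y.
for y in fy:
    f_x_given_y = {x: joint[(x, y)] / fy[y] for x in fx}
    assert all(abs(f_x_given_y[x] - fx[x]) < 1e-12 for x in fx)

print("f(x | y = 1) =", {x: joint[(x, 1)] / fy[1] for x in fx})
```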


2.5 The Expected Value of a Function of Several Random Variables: Covariance

and Correlation

In economics we are usually interested in exploring relationships between economic

variables. The covariance literally indicates the amount of covariation exhibited by the

two random variables.

• If X and Y are random variables, then their covariance is

$\mathrm{cov}(X, Y) = E[(X - E[X])(Y - E[Y])]$    (2.5.1)

• If X and Y are discrete random variables, f(x,y) is their joint probability density

function, and g(X,Y) is a function of them, then


$E[g(X, Y)] = \sum_{x} \sum_{y} g(x, y) f(x, y)$    (2.5.2)

• If X and Y are discrete random variables and f(x,y) is their joint probability density

function, then

$\mathrm{cov}(X, Y) = E[(X - E[X])(Y - E[Y])]$

$= \sum_{x} \sum_{y} [x - E(X)][y - E(Y)] f(x, y)$    (2.5.3)

• If X and Y are continuous random variables, then the definition of covariance is similar,

with integrals replacing the summation signs as follows:

$\mathrm{cov}(X, Y) = \int_{x} \int_{y} [x - E(X)][y - E(Y)] f(x, y)\,dx\,dy$


• The sign of the covariance between two random variables indicates whether their

association is positive (direct) or negative (inverse). The covariance between X and Y

is the expected, or average, value of the random product [X – E(X)][Y – E(Y)]. If two

random variables have positive covariance then they tend to be positively (or directly)

related. See Figure 2.4. The values of two random variables with negative covariance

tend to be negatively (or inversely) related. See Figure 2.5. Zero covariance implies

that there is neither positive nor negative association between pairs of values. See

Figure 2.6.

• The magnitude of covariance is difficult to interpret because it depends on the units of

measurement of the random variables. The meaning of covariation is revealed more

clearly if we divide the covariance between X and Y by their respective standard

deviations. The resulting ratio is defined as the correlation between the random

variables X and Y. If X and Y are random variables then their correlation is


$\rho = \frac{\mathrm{cov}(X, Y)}{\sqrt{\mathrm{var}(X)}\,\sqrt{\mathrm{var}(Y)}} = \frac{\sigma_{xy}}{\sigma_x \sigma_y}$    (2.5.4)

• If X and Y are independent random variables then the covariance and correlation

between them are zero. The converse of this relationship is not true.

• Independent random variables X and Y have zero covariance, indicating that there is

no linear association between them. However, just because the covariance or

correlation between two random variables is zero does not mean that they are

necessarily independent. Zero covariance means that there is no linear association

between the random variables. Even if X and Y have zero covariance, they might have

a nonlinear association, like $X^2 + Y^2 = 1$.
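The following sketch computes covariance and correlation from a joint table, as in Equations (2.5.3) and (2.5.4), for an assumed pair that is fully dependent yet has zero covariance, a discrete analogue of the point just made:

```python
# Covariance and correlation from a joint pdf table, applied to
# X uniform on {-1, 0, 1} and Y = X**2: dependent, but cov = 0.
def cov_corr(joint):
    xs = sorted({x for x, _ in joint})
    ys = sorted({y for _, y in joint})
    fx = {x: sum(joint.get((x, y), 0.0) for y in ys) for x in xs}
    fy = {y: sum(joint.get((x, y), 0.0) for x in xs) for y in ys}
    EX = sum(x * fx[x] for x in xs)
    EY = sum(y * fy[y] for y in ys)
    cov = sum((x - EX) * (y - EY) * joint.get((x, y), 0.0)
              for x in xs for y in ys)
    sx = sum((x - EX) ** 2 * fx[x] for x in xs) ** 0.5
    sy = sum((y - EY) ** 2 * fy[y] for y in ys) ** 0.5
    return cov, cov / (sx * sy)

joint = {(-1, 1): 1 / 3, (0, 0): 1 / 3, (1, 1): 1 / 3}
cov, rho = cov_corr(joint)
print("cov =", cov, " rho =", rho)  # both 0: no linear association,
                                    # yet Y is completely determined by X
```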

• If a, b, c, and d are constants and X and Y are random variables, then

$\mathrm{cov}(aX + bY, cX + dY) = ac\,\mathrm{var}(X) + bd\,\mathrm{var}(Y) + (ad + bc)\,\mathrm{cov}(X, Y)$


Proof:

$\mathrm{cov}(aX + bY, cX + dY)$

$= E[((aX + bY) - E[aX + bY])((cX + dY) - E[cX + dY])]$

$= E[(aX + bY - aE[X] - bE[Y])(cX + dY - cE[X] - dE[Y])]$

$= E[(a(X - E[X]) + b(Y - E[Y]))(c(X - E[X]) + d(Y - E[Y]))]$

$= E[ac(X - E[X])^2 + bd(Y - E[Y])^2 + (ad + bc)(X - E[X])(Y - E[Y])]$

$= acE[(X - E[X])^2] + bdE[(Y - E[Y])^2] + (ad + bc)E[(X - E[X])(Y - E[Y])]$

$= ac\,\mathrm{var}(X) + bd\,\mathrm{var}(Y) + (ad + bc)\,\mathrm{cov}(X, Y)$


2.5.1 The Mean of a Weighted Sum of Random Variables

• Let the function g(X,Y) = aX + bY where a and b are constants. This is called a

weighted sum. Now use Equation (2.5.2) to find the expectation

$E[aX + bY] = aE[X] + bE[Y]$    (2.5.5)

This rule says that the expected value of a weighted sum of two random variables is

the weighted sum of their expected values. This rule works for any number of random

variables whether they are discrete or continuous.

• If X and Y are random variables, then

$E[X + Y] = E[X] + E[Y]$    (2.5.6)


In general, the expected value of any sum is the sum of the expected values.

2.5.2 The Variance of a Weighted Sum of Random Variables

• If X, Y, and Z are random variables and a, b, and c are constants, then

$\mathrm{var}[aX + bY + cZ] = a^2\,\mathrm{var}[X] + b^2\,\mathrm{var}[Y] + c^2\,\mathrm{var}[Z]$

$\quad + 2ab\,\mathrm{cov}[X, Y] + 2ac\,\mathrm{cov}[X, Z] + 2bc\,\mathrm{cov}[Y, Z]$    (2.5.7)

Proof:


$\mathrm{var}[aX + bY + cZ]$

$= E[((aX + bY + cZ) - E[aX + bY + cZ])^2]$

$= E[(a(X - E[X]) + b(Y - E[Y]) + c(Z - E[Z]))^2]$

$= E[a^2(X - E[X])^2 + b^2(Y - E[Y])^2 + c^2(Z - E[Z])^2$

$\quad + 2ab(X - E[X])(Y - E[Y]) + 2ac(X - E[X])(Z - E[Z]) + 2bc(Y - E[Y])(Z - E[Z])]$

$= a^2 E[(X - E[X])^2] + b^2 E[(Y - E[Y])^2] + c^2 E[(Z - E[Z])^2]$

$\quad + 2ab\,E[(X - E[X])(Y - E[Y])] + 2ac\,E[(X - E[X])(Z - E[Z])] + 2bc\,E[(Y - E[Y])(Z - E[Z])]$

$= a^2\,\mathrm{var}(X) + b^2\,\mathrm{var}(Y) + c^2\,\mathrm{var}(Z) + 2ab\,\mathrm{cov}(X, Y) + 2ac\,\mathrm{cov}(X, Z) + 2bc\,\mathrm{cov}(Y, Z)$

• If X, Y, and Z are independent, or uncorrelated, random variables, then the covariance

terms are zero and:

$\mathrm{var}[aX + bY + cZ] = a^2\,\mathrm{var}[X] + b^2\,\mathrm{var}[Y] + c^2\,\mathrm{var}[Z]$    (2.5.8)


• If X, Y, and Z are independent, or uncorrelated, random variables, and if a = b = c = 1,

then

var[X + Y + Z] = var[X] + var[Y] + var[Z] (2.5.9)

• When the “variance of a sum is the sum of the variances,” the random variables

involved must be independent, or uncorrelated.
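A Monte Carlo sketch of Equation (2.5.8); the three distributions and the weights are assumptions chosen only to make the draws independent:

```python
# Simulate W = aX + bY + cZ for independent X, Y, Z and compare the
# sample variance against a^2 var(X) + b^2 var(Y) + c^2 var(Z).
import random

random.seed(0)
n = 200_000
a, b, c = 2.0, -1.0, 0.5

X = [random.gauss(0, 1) for _ in range(n)]       # var(X) = 1
Y = [random.uniform(0, 1) for _ in range(n)]     # var(Y) = 1/12
Z = [random.expovariate(1.0) for _ in range(n)]  # var(Z) = 1

W = [a * x + b * y + c * z for x, y, z in zip(X, Y, Z)]

def sample_var(d):
    m = sum(d) / len(d)
    return sum((v - m) ** 2 for v in d) / (len(d) - 1)

theory = a**2 * 1 + b**2 * (1 / 12) + c**2 * 1   # = 4.333...
print("sample:", round(sample_var(W), 3), " theory:", round(theory, 3))
```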


2.6 The Normal Distribution

• If X is a normally distributed random variable with mean β and variance σ²,

symbolized as X ~ N(β,σ²), then its probability density function is expressed

mathematically as:

$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[ \frac{-(x - \beta)^2}{2\sigma^2} \right], \quad -\infty < x < \infty$    (2.6.1)

where exp[a] denotes the exponential function $e^a$. The mean β and variance σ² are the

parameters of this distribution and they determine its location and dispersion. The

range of the continuous normal random variable is minus infinity to plus infinity. See

Figure 2.7.


• A standard normal random variable is one that has a normal probability density

function with mean 0 and variance 1. If X ~ N(β,σ²), then

$Z = \frac{X - \beta}{\sigma} \sim N(0, 1)$

• If X ~ N(β,σ²) and a is a constant, then

$P[X \ge a] = P\left[ \frac{X - \beta}{\sigma} \ge \frac{a - \beta}{\sigma} \right] = P\left[ Z \ge \frac{a - \beta}{\sigma} \right]$    (2.6.2)

• If X ~ N(β,σ²) and a and b are constants, then

$P[a \le X \le b] = P\left[ \frac{a - \beta}{\sigma} \le \frac{X - \beta}{\sigma} \le \frac{b - \beta}{\sigma} \right] = P\left[ \frac{a - \beta}{\sigma} \le Z \le \frac{b - \beta}{\sigma} \right]$    (2.6.3)


• When using Table 1 and Equations (2.6.2) and (2.6.3) to compute normal

probabilities, remember that:

1. The standard normal probability density function is symmetric about zero.

2. The total amount of probability under the density function is 1.

3. Half the probability is on either side of zero.

For example, if X ~ N(3,9), then using Table 1,

P[4 ≤ X ≤ 6] = P[.33 ≤ Z ≤ 1] = .3413 – .1293 = .212
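The same calculation can be reproduced without Table 1 by evaluating the standard normal cdf through the error function; this sketch assumes only the Python standard library:

```python
# P[4 <= X <= 6] for X ~ N(3, 9), using Phi(z) = (1 + erf(z/sqrt(2))) / 2.
from math import erf, sqrt

def Phi(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

beta, sigma = 3.0, 3.0  # N(3, 9): mean 3, standard deviation sqrt(9) = 3
a, b = 4.0, 6.0

p = Phi((b - beta) / sigma) - Phi((a - beta) / sigma)
print("P[4 <= X <= 6] =", round(p, 4))
# about 0.2108; the table value .212 comes from rounding z = 1/3 to .33
```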

• See Figure 2.8. For the purposes of statistical testing, it is useful to know that:

1. The probability that a single observation of a normally distributed variable X will

lie within 1.96 standard deviations of its mean is approximately 95%.

2. The probability that a single observation of a normally distributed variable X will

lie within 2.57 standard deviations of its mean is approximately 99%.


• If $X_1 \sim N(\beta_1, \sigma_1^2)$, $X_2 \sim N(\beta_2, \sigma_2^2)$, $X_3 \sim N(\beta_3, \sigma_3^2)$, and $c_1, c_2, c_3$ are constants,

then

$Z = c_1 X_1 + c_2 X_2 + c_3 X_3 \sim N[E(Z), \mathrm{var}(Z)]$    (2.6.4)
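A sketch of Equation (2.6.4) with illustrative parameters, under the added assumption that X₁, X₂, X₃ are independent so the covariance terms of Section 2.5.2 vanish:

```python
# For independent normals, E(Z) = sum of c_i * beta_i and
# var(Z) = sum of c_i**2 * sigma_i**2; Z is itself normal.
betas = [1.0, 2.0, 3.0]   # means beta_1, beta_2, beta_3 (assumed)
sigma2 = [4.0, 1.0, 9.0]  # variances sigma_1^2, sigma_2^2, sigma_3^2
cs = [0.5, 2.0, -1.0]     # weights c_1, c_2, c_3

EZ = sum(c * b for c, b in zip(cs, betas))
varZ = sum(c**2 * s2 for c, s2 in zip(cs, sigma2))
print(f"Z ~ N({EZ}, {varZ})")  # N(1.5, 14.0)
```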


Exercises

2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.10, 2.11, 2.15, 2.16, 2.18, 2.19, 2.21, 2.22, 2.24