STA301_LEC25
Transcript of STA301_LEC25
Virtual University of Pakistan
Lecture No. 25 of the course on
Statistics and Probability
by
Miss Saleha Naghmi Habibullah
IN THE LAST LECTURE, YOU LEARNT
• Chebychev’s Inequality
• Concept of Continuous Probability Distribution
• Distribution Function of a Continuous Probability Distribution
TOPICS FOR TODAY
• Mathematical Expectation, Variance & Moments of a Continuous Probability Distribution
• BIVARIATE Probability Distribution
You will recall that, in the last lecture, we were dealing with an example of a continuous probability distribution in which we were interested in computing a conditional probability.
We now discuss this particular concept:
EXAMPLE
a) Find the value of k so that the function f(x), defined as follows, may be a density function:

f(x) = kx, 0 < x < 2
     = 0, elsewhere
b) Compute P(X = 1).
c) Compute P(X > 1).
d) Compute the distribution function F(x).
e) Compute P(X ≤ 1/2 | 1/3 ≤ X ≤ 2/3).
SOLUTION
We had

f(x) = kx, 0 < x < 2
     = 0, elsewhere

and we obtained k = 1/2. Hence:

f(x) = x/2, for 0 < x < 2
     = 0, elsewhere.
e) Applying the definition of conditional probability, we get

P(X ≤ 1/2 | 1/3 ≤ X ≤ 2/3)
  = P(1/3 ≤ X ≤ 1/2) / P(1/3 ≤ X ≤ 2/3)
  = [∫_{1/3}^{1/2} (x/2) dx] / [∫_{1/3}^{2/3} (x/2) dx]
  = [x²/4]_{1/3}^{1/2} / [x²/4]_{1/3}^{2/3}
  = [(1/4)(1/4 − 1/9)] / [(1/4)(4/9 − 1/9)]
  = (5/36) / (1/3)
  = (5/36) × (3/1)
  = 5/12.
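The conditional probability of part (e) is easy to check by machine. The sketch below uses only Python's standard fractions module and the distribution function F(x) = x²/4 obtained from f(x) = x/2; the helper name `cdf` is ours, not part of the lecture.

```python
from fractions import Fraction as F

# F(x) = x^2 / 4 is the distribution function on (0, 2),
# obtained by integrating the density f(x) = x/2.
def cdf(x):
    return F(x) ** 2 / 4

num = cdf(F(1, 2)) - cdf(F(1, 3))   # P(1/3 <= X <= 1/2) = 5/144
den = cdf(F(2, 3)) - cdf(F(1, 3))   # P(1/3 <= X <= 2/3) = 1/12
print(num / den)                     # 5/12
```

Because the arithmetic is done with exact rationals, the answer 5/12 comes out exactly rather than as a rounded decimal.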
The above example was of the simplest case when the graph of our continuous probability distribution is in the form of a straight line.
Let us now consider a slightly more complicated situation:
EXAMPLE
A continuous random variable X has the d.f. F(x) as follows:

F(x) = 0, for x < 0,
     = 2x²/5, for 0 ≤ x < 1,
     = (2/5)(3x − x²/2) − 3/5, for 1 ≤ x < 2,
     = 1, for x ≥ 2.

Find the p.d.f. and P(|X| < 1.5).
SOLUTION
By definition, we have f(x) = (d/dx) F(x).

Therefore f(x) = 4x/5, for 0 < x < 1,
              = (2/5)(3 − x), for 1 < x < 2,
              = 0, elsewhere.
Now P(|X| < 1.5) = P(−1.5 < X < 1.5)
  = ∫_{−1.5}^{0} 0 dx + ∫_{0}^{1} (4x/5) dx + ∫_{1}^{1.5} (2/5)(3 − x) dx
  = 0 + [2x²/5]_{0}^{1} + (2/5)[3x − x²/2]_{1}^{1.5}
  = 2/5 + (2/5)[(4.5 − 1.125) − (3 − 0.5)]
  = 0.40 + (2/5)(0.875)
  = 0.40 + 0.35 = 0.75.
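This probability can also be read directly off the distribution function, since P(−1.5 < X < 1.5) = F(1.5) − F(−1.5). Here is a minimal sketch in exact rational arithmetic; the function name `cdf` is ours.

```python
from fractions import Fraction as F

def cdf(x):
    # Piecewise distribution function from the example.
    x = F(x)
    if x < 0:
        return F(0)
    if x < 1:
        return 2 * x**2 / 5
    if x < 2:
        return F(2, 5) * (3 * x - x**2 / 2) - F(3, 5)
    return F(1)

# P(|X| < 1.5) = F(1.5) - F(-1.5)
p = cdf(F(3, 2)) - cdf(F(-3, 2))
print(p)  # 3/4, i.e. 0.75
```

Working through the d.f. rather than integrating the p.d.f. again is a useful cross-check that the two answers agree.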
Let us now discuss the mathematical expectation of continuous random variables through the following example:
EXAMPLE
Find the expected value of the random variable X having the p.d.f.
f(x) = 2(1 − x), 0 < x < 1
     = 0, elsewhere.
SOLUTION
Now

E(X) = ∫_{0}^{1} x f(x) dx
     = ∫_{0}^{1} x · 2(1 − x) dx
     = 2 ∫_{0}^{1} (x − x²) dx
     = 2 [x²/2 − x³/3]_{0}^{1}
     = 2 (1/2 − 1/3)
     = 1/3.
As indicated earlier, the term ‘expected value’ implies the mean value.
The graph of the above probability density function and its mean value are presented in the following figure:
[Figure: graph of the density f(x) = 2(1 − x) on the interval (0, 1), with the mean value E(X) = 0.33 marked on the X-axis.]
Suppose that we are interested in verifying the properties of mathematical expectation that are valid in the case of univariate probability distributions.
You will recall that, in the last lecture, we noted that if X is a discrete random variable and if a and b are constants, then
E(aX + b) = a E(X) + b.
This property is equally valid in the case of continuous probability distributions.
In this example, suppose that a = 3 and b = 5. Then, we wish to verify that
E(3X + 5) = 3 E(X) + 5.
The right-hand-side of the above equation is:
3 E(X) + 5 = 3(1/3) + 5 = 1 + 5 = 6.
In order to compute the left-hand-side, we proceed as follows:
E(3X + 5) = ∫_{0}^{1} (3x + 5) · 2(1 − x) dx
          = 2 ∫_{0}^{1} (5 − 2x − 3x²) dx
          = 2 [5x − x² − x³]_{0}^{1}
          = 2 (5 − 1 − 1)
          = 6.
Since the left-hand-side is equal to the right-hand-side, therefore the property is verified.
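Both expectations above are integrals of polynomials over (0, 1), so they can be verified exactly with a few lines of Python. This is a sketch using only the standard library; the helper `poly_integral_01` is our own name.

```python
from fractions import Fraction as F

def poly_integral_01(coeffs):
    # Integrate sum(c_k * x^k) over [0, 1]; coeffs[k] is c_k.
    return sum(F(c, 1) / (k + 1) for k, c in enumerate(coeffs))

# f(x) = 2(1 - x) on (0, 1).
# E(X): integrand x * f(x) = 2x - 2x^2  ->  coefficients [0, 2, -2]
EX = poly_integral_01([0, 2, -2])
print(EX)  # 1/3

# E(3X + 5): integrand (3x + 5) * 2(1 - x) = 10 - 4x - 6x^2
E3X5 = poly_integral_01([10, -4, -6])
print(E3X5)  # 6
```

Note that 3 · (1/3) + 5 = 6, so the computed left-hand side agrees with the property E(aX + b) = a E(X) + b.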
SPECIAL CASE:
We have
E(aX + b) = a E(X) + b.
If b = 0, the above property takes the following simple form:
E(aX) = a E(X).
Next, let us consider the computation of the moments and moment-ratios in the case of a continuous probability distribution:
EXAMPLE
A continuous random variable X has the p.d.f.

f(x) = (3/4) x (2 − x), 0 ≤ x ≤ 2
     = 0, otherwise.

Find the first four moments about the mean and the moment-ratios.
We first calculate the moments about origin as:
μ₁′ = E(X) = ∫_{0}^{2} x f(x) dx
    = (3/4) ∫_{0}^{2} x² (2 − x) dx
    = (3/4) [2x³/3 − x⁴/4]_{0}^{2}
    = (3/4) (16/3 − 16/4)
    = (3/4) (4/3) = 1;
μ₂′ = E(X²) = ∫_{0}^{2} x² f(x) dx
    = (3/4) ∫_{0}^{2} x³ (2 − x) dx
    = (3/4) [x⁴/2 − x⁵/5]_{0}^{2}
    = (3/4) (8 − 32/5)
    = (3/4) (8/5) = 6/5;
μ₃′ = E(X³) = ∫_{0}^{2} x³ f(x) dx
    = (3/4) ∫_{0}^{2} x⁴ (2 − x) dx
    = (3/4) [2x⁵/5 − x⁶/6]_{0}^{2}
    = (3/4) (64/5 − 64/6)
    = (3/4) (64/30) = 8/5;
μ₄′ = E(X⁴) = ∫_{0}^{2} x⁴ f(x) dx
    = (3/4) ∫_{0}^{2} x⁵ (2 − x) dx
    = (3/4) [x⁶/3 − x⁷/7]_{0}^{2}
    = (3/4) (64/3 − 128/7)
    = (3/4) (64/21) = 16/7.
Next, we find the moments about the mean as follows:

μ₁ = 0;

μ₂ = μ₂′ − (μ₁′)² = 6/5 − (1)² = 1/5;

μ₃ = μ₃′ − 3μ₂′μ₁′ + 2(μ₁′)³
   = 8/5 − 3(6/5)(1) + 2(1)³
   = 8/5 − 18/5 + 2 = 0;

μ₄ = μ₄′ − 4μ₃′μ₁′ + 6μ₂′(μ₁′)² − 3(μ₁′)⁴
   = 16/7 − 4(8/5)(1) + 6(6/5)(1)² − 3(1)⁴
   = 16/7 − 32/5 + 36/5 − 3 = 3/35.
The first moment-ratio is
b₁ = μ₃² / μ₂³ = (0)² / (1/5)³ = 0.
This implies that this particular continuous probability distribution is absolutely symmetric.
The second moment-ratio is
b₂ = μ₄ / μ₂² = (3/35) / (1/5)² = (3/35)(25) = 15/7 ≈ 2.14.
This implies that this particular continuous probability distribution may be regarded as platykurtic, i.e. flatter than the normal distribution.
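The whole chain of computations above — raw moments, moments about the mean, and the moment-ratios — can be reproduced exactly in Python. This is a sketch in rational arithmetic; the helper names `poly_integral_02` and `raw_moment` are ours.

```python
from fractions import Fraction as F

def poly_integral_02(coeffs):
    # Integrate sum(c_k * x^k) over [0, 2] exactly: each term gives c_k * 2^(k+1) / (k+1).
    return sum(F(c) * F(2) ** (k + 1) / (k + 1) for k, c in enumerate(coeffs))

def raw_moment(r):
    # E(X^r) with f(x) = (3/4) x (2 - x): integrand is (3/2) x^(r+1) - (3/4) x^(r+2).
    coeffs = [0] * (r + 1) + [F(3, 2), F(-3, 4)]
    return poly_integral_02(coeffs)

m1, m2, m3, m4 = (raw_moment(r) for r in range(1, 5))

# Moments about the mean, via the standard conversion formulas.
mu2 = m2 - m1**2
mu3 = m3 - 3*m2*m1 + 2*m1**3
mu4 = m4 - 4*m3*m1 + 6*m2*m1**2 - 3*m1**4

# Moment-ratios.
b1 = mu3**2 / mu2**3
b2 = mu4 / mu2**2

print(m1, mu2, mu3, mu4)  # 1 1/5 0 3/35
print(b1, b2)             # 0 15/7  (15/7 is approximately 2.14)
```

The exact value b₂ = 15/7 confirms that 2.14 in the lecture is a rounded figure, and b₁ = 0 confirms the symmetry of the distribution.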
The students are encouraged to draw the graph of this distribution in order to develop a visual picture in their minds.
We begin the concept of bivariate probability distribution by introducing the term ‘Joint Distributions’:
JOINT DISTRIBUTIONS:
The distribution of two or more random variables which are observed simultaneously when an experiment is performed, is called their JOINT distribution.
It is customary to call the distribution of a single random variable as univariate.
Likewise, a distribution involving two, three or many r.v.’s simultaneously is referred to as bivariate, trivariate or multivariate.
A bivariate distribution may be discrete when the possible values of (X, Y) are finite or countably infinite. It is continuous if (X, Y) can assume all values in some non-countable set of the plane. A bivariate distribution is said to be mixed when one r.v. is discrete and the other is continuous.
Bivariate Probability Function:
Let X and Y be two discrete r.v.’s defined on the same sample space S, X taking the values x1, x2, …, xm
and Y taking the values y1, y2, …, yn.
Then the probability that X takes on the value xi and, at the same time, Y takes on the value yj, denoted by f(xi, yj) or pij, is defined to be the joint probability function or simply the joint distribution of X and Y.
Thus the joint probability function, also called the bivariate probability function f(x, y) is a function whose value at the point (xi, yj) is given by
f(xi, yj) = P(X = xi and Y = yj),
i = 1, 2, …, m.j = 1, 2, …, n.
The joint or bivariate probability distribution consisting of all pairs of values (xi, yj) and their associated probabilities f(xi, yj) i.e. the set of triples [xi, yj, f(xi, yj)] can either be shown in the following two-way table:
Joint Probability Distribution of X and Y

X\Y        y1        y2        …  yj        …  yn        P(X = xi)
x1         f(x1,y1)  f(x1,y2)  …  f(x1,yj)  …  f(x1,yn)  g(x1)
x2         f(x2,y1)  f(x2,y2)  …  f(x2,yj)  …  f(x2,yn)  g(x2)
…
xi         f(xi,y1)  f(xi,y2)  …  f(xi,yj)  …  f(xi,yn)  g(xi)
…
xm         f(xm,y1)  f(xm,y2)  …  f(xm,yj)  …  f(xm,yn)  g(xm)
P(Y = yj)  h(y1)     h(y2)     …  h(yj)     …  h(yn)     1
or be expressed by means of a formula for f(x, y). The probabilities f(x, y) can be obtained by substituting appropriate values of x and y in the table or formula.
A joint probability function has the following properties:
PROPERTIES:
i) f(xi, yj) ≥ 0, for all (xi, yj), i.e. for i = 1, 2, …, m; j = 1, 2, …, n.
ii) Σi Σj f(xi, yj) = 1.
Next, we consider the concept of MARGINAL PROBABILITY FUNCTIONS:
The point to be understood here is that, from the joint probability function for (X, Y), we can obtain the INDIVIDUAL probability function of X and Y. Such individual probability functions are called MARGINAL probability functions.
Let f(x, y) be the joint probability function of two discrete r.v.’s X and Y. Then the marginal probability function of X is defined as

g(xi) = Σ_{j=1}^{n} f(xi, yj)
      = f(xi, y1) + f(xi, y2) + … + f(xi, yn)
      = P(X = xi),

as xi must occur either with y1 or y2 or … or yn. That is, the individual probability function of X is found by adding over the rows of the two-way table.
Similarly, the marginal probability function for Y is obtained by adding over the columns as

h(yj) = Σ_{i=1}^{m} f(xi, yj) = P(Y = yj).
The values of the marginal probabilities are often written in the margins of the joint table as they are the row and column totals in the table. The probabilities in each marginal probability function add to 1.
Next, we consider the concept of CONDITIONAL PROBABILITY FUNCTION:
Let X and Y be two discrete r.v.’s with joint probability function f(x, y).
Then the conditional probability function for X given Y = y, denoted as f(x|y), is defined by

f(xi | yj) = P(X = xi | Y = yj)
           = P(X = xi and Y = yj) / P(Y = yj)
           = f(xi, yj) / h(yj),

for i = 1, 2, …; j = 1, 2, …, where h(yj) is the marginal probability of yj, and h(yj) > 0.
It gives the probability that X takes on the value xi given that Y has taken on the value yj.
The conditional probability f(xi | yj) is non-negative and (for a given fixed yj) sums to 1 over i, and hence is a probability function.
Similarly, the conditional probability function for Y given X = x is
f(yj | xi) = P(Y = yj | X = xi)
           = P(Y = yj and X = xi) / P(X = xi)
           = f(xi, yj) / g(xi),

where g(xi) > 0.
INDEPENDENCE:
Two discrete r.v.’s X and Y are said to be statistically independent, if and only if, for all possible pairs of values (xi, yj) the joint probability function f(x, y) can be expressed as the product of the two marginal probability functions.
f(x, y) = P(X = xi and Y = yj)
        = P(X = xi) · P(Y = yj), for all i and j,
        = g(x) h(y).

That is, X and Y are independent, if f(x, y) = g(x) h(y) for all (x, y).
It should be noted that the joint probability function of X and Y when they are independent, can be obtained by MULTIPLYING together their marginal probability functions.
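The independence criterion is mechanical to check once the joint table is in hand: compute both marginals and compare every cell with the product of its row and column totals. A minimal sketch (the function name `is_independent` and the sample tables are ours, for illustration only):

```python
from fractions import Fraction as F

def is_independent(joint):
    # joint: dict mapping (x, y) -> probability. X and Y are independent
    # iff f(x, y) = g(x) * h(y) for every pair (x, y); missing cells are 0.
    xs = {x for x, _ in joint}
    ys = {y for _, y in joint}
    g = {x: sum(joint.get((x, y), F(0)) for y in ys) for x in xs}   # row totals
    h = {y: sum(joint.get((x, y), F(0)) for x in xs) for y in ys}   # column totals
    return all(joint.get((x, y), F(0)) == g[x] * h[y] for x in xs for y in ys)

# Two independent fair coins: every cell equals 1/2 * 1/2.
fair = {(x, y): F(1, 4) for x in (0, 1) for y in (0, 1)}
print(is_independent(fair))       # True

# Perfectly correlated coins: f(0, 1) = 0 but g(0) * h(1) = 1/4.
dependent = {(0, 0): F(1, 2), (1, 1): F(1, 2)}
print(is_independent(dependent))  # False
```

A single cell that fails the product test is enough to rule out independence, which is exactly how the check short-circuits here.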
Let us now illustrate all these concepts with the help of an example:
EXAMPLE:
An urn contains 3 black, 2 red and 3 green balls and 2 balls are selected at random from it. If X is the number of black balls and Y is the number of red balls selected, then find
i) the joint probability function f(x, y);
ii) P(X + Y < 1);
iii) the marginal p.d.’s g(x) and h(y);
iv) the conditional p.d. f(x | 1);
v) P(X = 0 | Y = 1); and
vi) Are X and Y independent?
i) The sample space S for this experiment contains C(8, 2) = 28 sample points. The possible values of X are 0, 1, and 2, and those for Y are 0, 1, and 2.
The values that (X, Y) can take on are (0, 0), (0, 1), (1, 0), (1, 1), (0, 2) and (2, 0). We desire to find f(x, y) for each value (x, y).
The total number of ways in which 2 balls can be drawn out of a total of 8 balls is C(8, 2) = (8 × 7)/2 = 28.
Now f(0, 0) = P(X = 0 and Y = 0), where the event (X = 0 and Y = 0) represents that neither a black nor a red ball is selected, implying that the 2 selected balls are green. This event therefore contains C(3, 0) C(2, 0) C(3, 2) = 3 sample points, and

f(0, 0) = P(X = 0 and Y = 0) = 3/28.
Again, f(0, 1) = P(X = 0 and Y = 1)
= P(none is black, 1 is red and 1 is green)
= C(3, 0) C(2, 1) C(3, 1) / 28
= 6/28.
Similarly, f(1, 1) = P(X = 1 and Y = 1)
= P(1 is black, 1 is red and none is green)
= C(3, 1) C(2, 1) C(3, 0) / 28
= 6/28.
Similar calculations give the probabilities of other values and the joint probability function of X and Y is given as:
Joint Probability Distribution of X and Y

X\Y               0      1      2      P(X = xi) = g(x)
0                 3/28   6/28   1/28   10/28
1                 9/28   6/28   0      15/28
2                 3/28   0      0      3/28
P(Y = yj) = h(y)  15/28  12/28  1/28   1
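The full joint table above can be generated, and its marginals checked, directly from the hypergeometric counting rule used in the example. This is a sketch using only the standard library; note that `fractions.Fraction` reduces terms, so 6/28 prints as 3/14 and 10/28 as 5/14.

```python
from fractions import Fraction as F
from math import comb

# Urn: 3 black, 2 red, 3 green; 2 balls drawn. X = # black, Y = # red.
total = comb(8, 2)  # 28
joint = {}
for x in range(3):
    for y in range(3):
        g = 2 - x - y              # number of green balls in the draw
        if g >= 0:
            joint[(x, y)] = F(comb(3, x) * comb(2, y) * comb(3, g), total)

print(joint[(0, 0)], joint[(0, 1)], joint[(1, 1)])  # 3/28 3/14 3/14  (3/14 = 6/28)

# Marginal probability functions: row and column totals of the table.
gx = {x: sum(p for (a, _), p in joint.items() if a == x) for x in range(3)}
hy = {y: sum(p for (_, b), p in joint.items() if b == y) for y in range(3)}
print(gx[0], gx[1], gx[2])  # 5/14 15/28 3/28  (i.e. 10/28, 15/28, 3/28)
print(sum(joint.values()))  # 1

# Independence check on one cell: f(2, 0) = 3/28 but g(2) * h(0) = (3/28)(15/28).
print(joint[(2, 0)] == gx[2] * hy[0])  # False
```

The final comparison already shows that X and Y are not independent here, which is intuitive: drawing 2 black balls forces the number of red balls drawn to be 0.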
IN TODAY’S LECTURE, YOU LEARNT
• Mathematical Expectation, Variance & Moments of Continuous Probability Distributions
• BIVARIATE Probability Distribution (Discrete case)
IN THE NEXT LECTURE, YOU WILL LEARN
• BIVARIATE Probability Distributions (Discrete and Continuous)
• Properties of Expected Values in the case of Bivariate Probability Distributions
• Covariance & Correlation