http://statwww.epfl.ch
4. Continuous Random Variables
4.1: Definition. Density and distribution functions. Examples:
uniform, exponential, Laplace, gamma. Expectation, variance.
Quantiles.
4.2: New random variables from old.
4.3: Normal distribution. Use of normal tables. Continuity
correction. Normal approximation to binomial distribution.
4.4: Moment generating functions.
4.5: Mixture distributions.
References: Ross (Chapter 4); Ben Arous notes (IV.1, IV.3–IV.6).
Exercises: 79–88, 91–93, 107, 108 of the Recueil d’exercices.
Probabilité et Statistique I — Chapter 4
Petit Vocabulaire Probabiliste
Mathematics English Français
P(A | B) probability of A given B la probabilité de A sachant B
independence l'indépendance
(mutually) independent events les événements (mutuellement) indépendants
pairwise independent events les événements indépendants deux à deux
conditionally independent events les événements conditionnellement indépendants
X, Y, ... random variable une variable aléatoire
I indicator random variable une variable indicatrice
fX probability mass/density function fonction de masse/fonction de densité
FX probability distribution function fonction de répartition
E(X) expected value/expectation of X l'espérance de X
E(X^r) rth moment of X r-ième moment de X
E(X | B) conditional expectation of X given B l'espérance conditionnelle de X, sachant B
var(X) variance of X la variance de X
MX(t) moment generating function of X, or la fonction génératrice des moments,
the Laplace transform of fX(x) ou la transformée de Laplace de fX(x)
4.1 Continuous Random Variables
Up to now we have supposed that the support of X is countable, so
X is a discrete random variable. Now consider what happens when
D = {x ∈ R : X(ω) = x for some ω ∈ Ω} is uncountable. Note that this
implies that Ω itself is uncountable.
Example 4.1: The time to the end of the lecture lies in (0, 45) min. •
Example 4.2: Our (height, weight) pairs lie in (0, ∞)². •
Definition: Let X be a random variable. Its cumulative
distribution function (CDF) (fonction de répartition) is

FX(x) = P(X ≤ x) = P(Ax), x ∈ R,

where Ax is the event {ω : X(ω) ≤ x}, for x ∈ R.
Recall the following properties of FX :
Theorem: Let (Ω, F, P) be a probability space and X : Ω → R a
random variable. Its cumulative distribution function FX satisfies:
(a) limx→−∞ FX(x) = 0;
(b) limx→∞ FX(x) = 1;
(c) FX is non-decreasing, that is, FX(x) ≤ FX(y) whenever x ≤ y;
(d) FX is continuous to the right, that is,
lim_{t↓0} FX(x + t) = FX(x), x ∈ R;
(e) P(X > x) = 1 − FX(x);
(f) if x < y, then P(x < X ≤ y) = FX(y) − FX(x).
•
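Properties (a)–(c), (e), and (f) can be sanity-checked numerically with any concrete CDF. A minimal sketch, using the exponential CDF F(x) = 1 − e^{−λx} as the test case (the rate λ = 1 and the evaluation points are arbitrary choices):

```python
import math

lam = 1.0  # arbitrary rate for the test CDF

def F(x):
    # CDF of exp(lam): 0 for x <= 0, 1 - e^{-lam*x} for x > 0
    return 0.0 if x <= 0 else 1 - math.exp(-lam * x)

assert F(-100.0) == 0.0                 # (a) F_X(x) -> 0 as x -> -infinity
assert abs(F(100.0) - 1.0) < 1e-12      # (b) F_X(x) -> 1 as x -> +infinity
xs = [-1.0, 0.0, 0.5, 1.0, 3.0]
assert all(F(x) <= F(y) for x, y in zip(xs, xs[1:]))  # (c) non-decreasing
x, y = 0.5, 2.0
# (f) P(x < X <= y) = F(y) - F(x) = e^{-lam*x} - e^{-lam*y} for the exponential
assert abs((F(y) - F(x)) - (math.exp(-lam * x) - math.exp(-lam * y))) < 1e-12
```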
Definition: A random variable X is continuous if there exists a
function fX(x), called the probability density function (la
densité) of X, such that

P(X ≤ x) = FX(x) = ∫_{−∞}^{x} fX(u) du, x ∈ R.
The properties of FX imply (i) fX(x) ≥ 0, and (ii) ∫_{−∞}^{∞} fX(x) dx = 1.
Note: The fundamental theorem of calculus gives fX(x) = dFX(x)/dx.
Note: As P(x < X ≤ y) = ∫_{x}^{y} fX(u) du when x < y, for any x ∈ R,

P(X = x) = lim_{y↓x} P(x < X ≤ y) = lim_{y↓x} ∫_{x}^{y} fX(u) du = ∫_{x}^{x} fX(u) du = 0.
Note: If X is discrete, then its pmf fX(x) is also called its density.
Some Examples
Example 4.3 (Uniform distribution): The random variable U
with density function
f(u) = 1/(b − a), a < u < b; 0 otherwise (where a < b),
is called a uniform random variable. We write U ∼ U(a, b). •
Example 4.4 (Exponential distribution): The random variable
X with density function
f(x) = λ e^{−λx}, x > 0; 0 otherwise (where λ > 0),
is called an exponential random variable with rate λ. We write
X ∼ exp(λ). Establish the lack of memory property for X, that is,
P(X > x + t | X > t) = P(X > x) for t, x > 0. •
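The lack-of-memory property can be verified numerically from the survival function P(X > x) = e^{−λx} alone. A minimal sketch; the rate λ = 2 and the points t, x are arbitrary choices:

```python
import math

lam = 2.0  # arbitrary rate

def surv(x):
    # Survival function of exp(lam): P(X > x) = e^{-lam*x} for x >= 0
    return math.exp(-lam * x)

t, x = 0.7, 1.3  # arbitrary positive values
# P(X > x + t | X > t) = P(X > x + t) / P(X > t), since {X > x+t} implies {X > t}
cond = surv(x + t) / surv(t)
assert abs(cond - surv(x)) < 1e-12  # lack of memory: equals P(X > x)
```

The cancellation e^{−λ(x+t)} / e^{−λt} = e^{−λx} is exactly the algebra behind the property, so the check passes to machine precision.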
Example 4.5 (Laplace distribution): The random variable X
with density function
f(x) = (λ/2) e^{−λ|x−η|}, x ∈ R (where η ∈ R, λ > 0),
is called a Laplace (or sometimes a double exponential) random
variable. •
Example 4.6 (Gamma distribution): The random variable X
with density function
f(x) = λ^α x^{α−1} e^{−λx} / Γ(α), x > 0; 0 otherwise (where λ, α > 0),

is called a gamma random variable with shape parameter α and rate
parameter λ. Here Γ(α) = ∫_{0}^{∞} u^{α−1} e^{−u} du is the gamma function.
Note that setting α = 1 yields the exponential density. •
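The reduction to the exponential at α = 1 is easy to confirm pointwise, since Γ(1) = 1. A sketch, with an arbitrary rate λ = 1.5 and arbitrary evaluation points:

```python
import math

def gamma_pdf(x, alpha, lam):
    # f(x) = lam^alpha * x^(alpha-1) * e^{-lam*x} / Gamma(alpha), for x > 0
    return lam**alpha * x**(alpha - 1) * math.exp(-lam * x) / math.gamma(alpha)

def exp_pdf(x, lam):
    # f(x) = lam * e^{-lam*x}, for x > 0
    return lam * math.exp(-lam * x)

lam = 1.5  # arbitrary rate
for x in (0.1, 1.0, 3.0):
    # alpha = 1 collapses the gamma density to the exponential density
    assert abs(gamma_pdf(x, 1.0, lam) - exp_pdf(x, lam)) < 1e-12
```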
[Figure: densities f(x) of the exp(1), Gamma(shape=5, rate=3), Gamma(shape=0.5, rate=0.5), and Gamma(shape=8, rate=2) distributions, for x ∈ (−2, 8).]
Moments of Continuous Random Variables
Definition: Let g(x) be a real-valued function and X a continuous
random variable with density function fX(x). Then the expectation
of g(X) is defined to be
E{g(X)} = ∫_{−∞}^{∞} g(x) fX(x) dx,

provided E|g(X)| < ∞. In particular the mean and variance of
X are

E(X) = ∫_{−∞}^{∞} x fX(x) dx,  var(X) = ∫_{−∞}^{∞} {x − E(X)}² fX(x) dx.
Example 4.7: Compute the mean and variance of (a) the U(a, b),
(b) the exp(λ), (c) the Laplace, and (d) the gamma distributions. •
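Part (a) can be cross-checked by numerical integration of the two defining integrals against the known answers E(U) = (a + b)/2 and var(U) = (b − a)²/12. A sketch using the midpoint rule; the endpoints a = 2, b = 5 are arbitrary choices:

```python
a, b = 2.0, 5.0  # arbitrary U(a, b) endpoints
N = 100_000
h = (b - a) / N
# midpoint rule over (a, b); the density is 1/(b - a) there and 0 outside
mids = [a + (i + 0.5) * h for i in range(N)]
mean = sum(x / (b - a) for x in mids) * h          # E(U) = integral of x*f(x)
var = sum((x - mean) ** 2 / (b - a) for x in mids) * h  # var(U)
assert abs(mean - (a + b) / 2) < 1e-6
assert abs(var - (b - a) ** 2 / 12) < 1e-6
```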
Quantiles
Definition: Let 0 < p < 1. The p quantile of distribution function
F (x) is defined as
xp = inf{x : F(x) ≥ p}.
For most continuous random variables, xp is unique and is found as
xp = F−1(p), where F−1 is the inverse function of F . In particular,
the 0.5 quantile is called the median of F .
Example 4.8 (Uniform distribution): Let U ∼ U(0, 1). Show
that xp = p. •
Example 4.9 (Exponential distribution): Let X ∼ exp(λ).
Show that xp = −λ−1 log(1 − p). •
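The claimed quantile in Example 4.9 can be checked directly: plugging xp back into the exponential CDF must return p. A sketch with an arbitrary rate λ = 2:

```python
import math

lam = 2.0  # arbitrary rate

def F(x):
    # CDF of exp(lam): F(x) = 1 - e^{-lam*x} for x > 0
    return 1 - math.exp(-lam * x)

for p in (0.1, 0.5, 0.9):
    xp = -math.log(1 - p) / lam  # claimed p quantile
    assert abs(F(xp) - p) < 1e-12  # F(x_p) = p, so x_p = F^{-1}(p)
```

In particular the median (p = 0.5) is log 2 / λ.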
Exercise: Find the quantiles of the Laplace distribution. •
4.2 New Random Variables From Old
Often in practice we consider Y = g(X), where g is a known
function, and want to find FY (y) and fY (y).
Theorem: Let Y = g(X) be a random variable. Then

FY(y) = P(Y ≤ y) = ∫_{Ay} fX(x) dx if X is continuous, Σ_{x ∈ Ay} fX(x) if X is discrete,

where Ay = {x ∈ R : g(x) ≤ y}. When g is monotone increasing and
has inverse function g⁻¹, we have

FY(y) = FX{g⁻¹(y)},  fY(y) = (d g⁻¹(y)/dy) fX{g⁻¹(y)},

with a similar result if g is monotone decreasing. •
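The monotone-transformation formula is easy to test on a concrete case. Taking X ∼ exp(1) and g(x) = √x (an illustrative choice, not from the notes), g⁻¹(y) = y², so FY(y) = FX(y²) = 1 − e^{−y²} and fY(y) = 2y e^{−y²}; the density can then be compared against a numerical derivative of FY:

```python
import math

# X ~ exp(1); Y = g(X) = sqrt(X) is monotone increasing with g^{-1}(y) = y^2
def F_X(x): return 1 - math.exp(-x)

def F_Y(y): return F_X(y * y)                 # F_Y(y) = F_X{g^{-1}(y)}

def f_Y(y): return 2 * y * math.exp(-y * y)   # (d/dy)(y^2) * f_X(y^2) = 2y e^{-y^2}

# the theorem's density should match the numerical derivative of F_Y
h = 1e-6
for y in (0.3, 1.0, 2.0):
    numeric = (F_Y(y + h) - F_Y(y - h)) / (2 * h)
    assert abs(numeric - f_Y(y)) < 1e-6
```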
Example 4.10: Let Y = Xβ, where X ∼ exp(λ). Find FY (y) and
fY (y). •
Example 4.11: Let Y = ⌈X⌉, where X ∼ exp(λ) (thus Y is the
smallest integer no smaller than X). Find FY (y) and fY (y). •
Example 4.12: Let Y = − log(1 − U), where U ∼ U(0, 1). Find
FY (y) and fY (y). Find also the density and distribution functions of
W = − log U . Explain. •
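Example 4.12 is the inverse-transform method in miniature: Y = −log(1 − U) has CDF P(Y ≤ y) = P(U ≤ 1 − e^{−y}) = 1 − e^{−y}, i.e. Y ∼ exp(1). A simulation sketch (sample size and seed are arbitrary choices):

```python
import math
import random

random.seed(42)
n = 100_000
# If U ~ U(0, 1), then Y = -log(1 - U) has CDF 1 - e^{-y}: the exp(1) distribution
samples = [-math.log(1 - random.random()) for _ in range(n)]

mean = sum(samples) / n
assert abs(mean - 1.0) < 0.02  # E(Y) = 1 for exp(1)

# empirical CDF at y = 1 should be near 1 - e^{-1} ~ 0.632
frac = sum(s <= 1.0 for s in samples) / n
assert abs(frac - (1 - math.exp(-1))) < 0.01
```

Since 1 − U is also U(0, 1), W = −log U has the same distribution, which is the point of the "Explain" part.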
Example 4.13: Let X1 and X2 be the results when two fair dice
are rolled independently. Find the distribution of X1 − X2. •
Example 4.14: Let a, b be constants. Find the distribution and
density functions of Y = a + bX in terms of FX , fX . •
4.3 Normal Distribution
Definition: A random variable X with density function
f(x) = 1/{(2π)^{1/2} σ} exp{−(x − µ)²/(2σ²)}, x ∈ R (where µ ∈ R, σ > 0),
is a normal random variable with mean µ and variance σ2: we
write X ∼ N(µ, σ2).
When µ = 0, σ2 = 1, the corresponding random variable Z is
standard normal, Z ∼ N(0, 1), with density φ(z) = (2π)−1/2e−z2/2,
for z ∈ R. The corresponding cumulative distribution function is
P(Z ≤ x) = Φ(x) = ∫_{−∞}^{x} φ(z) dz = (2π)^{−1/2} ∫_{−∞}^{x} e^{−z²/2} dz.
This integral is tabulated in the formulaire and can be obtained
electronically.
Standard Normal Density Function
[Figure: the N(0, 1) density φ(x), for x ∈ (−3, 3).]
Properties of the Normal Distribution
Theorem : The density function φ(z), cumulative distribution
function Φ(z), and quantiles zp of Z ∼ N(0, 1) satisfy:
(a) the density is symmetric about z = 0, φ(z) = φ(−z) for all z ∈ R;
(b) P(Z ≤ −z) = Φ(−z) = 1 − Φ(z) = 1 − P(Z ≤ z), for all z ∈ R;
(c) the standard normal quantiles zp satisfy zp = −z1−p, for all
0 < p < 1;
(d) zrφ(z) → 0 as z → ±∞, for all r > 0;
(e) φ′(z) = −zφ(z), φ′′(z) = (z2 − 1)φ(z), etc.
•
Example 4.15: Show that the mean and variance of X ∼ N(µ, σ2)
are indeed µ and σ2. •
Example 4.16: Find the p quantile of Y = µ + σZ, where
Z ∼ N(0, 1). •
Example 4.17: Find the distribution and density functions of
Y = |Z| and W = Z2, where Z ∼ N(0, 1). •
Example 4.18: Find P(Z ≤ −2), P(Z ≤ 0.5), P(−2 < Z < 0.5),
P(Z ≤ 1.75), z0.05, z0.95, z0.5, z0.8, and z0.15. •
Note: The table below is an extract from the Formulaire showing the
function Φ(z).
z .00 .01 .02 .03 .04 .05 .06 .07 .08 .09
0.0 .50000 .50399 .50798 .51197 .51595 .51994 .52392 .52790 .53188 .53586
0.1 .53983 .54380 .54776 .55172 .55567 .55962 .56356 .56750 .57142 .57535
0.2 .57926 .58317 .58706 .59095 .59483 .59871 .60257 .60642 .61026 .61409
0.3 .61791 .62172 .62552 .62930 .63307 .63683 .64058 .64431 .64803 .65173
0.4 .65542 .65910 .66276 .66640 .67003 .67364 .67724 .68082 .68439 .68793
0.5 .69146 .69497 .69847 .70194 .70540 .70884 .71226 .71566 .71904 .72240
0.6 .72575 .72907 .73237 .73565 .73891 .74215 .74537 .74857 .75175 .75490
0.7 .75804 .76115 .76424 .76730 .77035 .77337 .77637 .77935 .78230 .78524
0.8 .78814 .79103 .79389 .79673 .79955 .80234 .80511 .80785 .81057 .81327
0.9 .81594 .81859 .82121 .82381 .82639 .82894 .83147 .83398 .83646 .83891
1.0 .84134 .84375 .84614 .84850 .85083 .85314 .85543 .85769 .85993 .86214
1.1 .86433 .86650 .86864 .87076 .87286 .87493 .87698 .87900 .88100 .88298
1.2 .88493 .88686 .88877 .89065 .89251 .89435 .89617 .89796 .89973 .90147
1.3 .90320 .90490 .90658 .90824 .90988 .91149 .91309 .91466 .91621 .91774
1.4 .91924 .92073 .92220 .92364 .92507 .92647 .92786 .92922 .93056 .93189
1.5 .93319 .93448 .93574 .93699 .93822 .93943 .94062 .94179 .94295 .94408
1.6 .94520 .94630 .94738 .94845 .94950 .95053 .95154 .95254 .95352 .95449
1.7 .95543 .95637 .95728 .95818 .95907 .95994 .96080 .96164 .96246 .96327
1.8 .96407 .96485 .96562 .96638 .96712 .96784 .96856 .96926 .96995 .97062
1.9 .97128 .97193 .97257 .97320 .97381 .97441 .97500 .97558 .97615 .97670
2.0 .97725 .97778 .97831 .97882 .97932 .97982 .98030 .98077 .98124 .98169
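The tabulated values can be reproduced electronically via the error function, using the standard identity Φ(x) = {1 + erf(x/√2)}/2. A sketch checked against three entries of the table above:

```python
import math

def Phi(x):
    # Standard normal CDF via the error function:
    # Phi(x) = (1 + erf(x / sqrt(2))) / 2
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# compare with the tabulated values
assert abs(Phi(0.0) - 0.50000) < 5e-6
assert abs(Phi(1.0) - 0.84134) < 5e-6
assert abs(Phi(1.96) - 0.97500) < 5e-6
# symmetry: Phi(-z) = 1 - Phi(z)
assert abs(Phi(-2.0) - (1 - Phi(2.0))) < 1e-15
```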
Normal Approximation to Binomial Distribution
Before computers were widespread, one use of the normal
distribution was as an approximation to the binomial distribution.
Theorem (de Moivre–Laplace): Let Xn ∼ B(n, p), where
0 < p < 1, set µn = E(Xn) = np, σn² = var(Xn) = np(1 − p), and let
Z ∼ N(0, 1). Then as n → ∞,

P{(Xn − µn)/σn ≤ z} → Φ(z), z ∈ R; that is, (Xn − µn)/σn →D Z. •
This gives an approximation for the probability that Xn ≤ r:

P(Xn ≤ r) = P{(Xn − µn)/σn ≤ (r − µn)/σn} ≐ Φ{(r − µn)/σn}.

In practice this should be used only when min{np, n(1 − p)} ≥ 5.
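The quality of the approximation (here without continuity correction) can be seen by comparing the exact binomial CDF with Φ{(r − µn)/σn}. A sketch for B(100, 0.5) at r = 55 (both parameter choices arbitrary, but within the min{np, n(1 − p)} ≥ 5 range):

```python
import math

n, p = 100, 0.5  # arbitrary choices with np = n(1-p) = 50 >= 5
mu = n * p
sigma = math.sqrt(n * p * (1 - p))

def Phi(x):
    # standard normal CDF via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def binom_cdf(r):
    # Exact P(X_n <= r) for X_n ~ B(n, p)
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(r + 1))

r = 55
exact = binom_cdf(r)
approx = Phi((r - mu) / sigma)
assert abs(exact - approx) < 0.06  # rough agreement for this n
```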
Normal and Poisson Approximations to Binomial
[Figure: four panels comparing the B(16, 0.5) and B(16, 0.1) mass functions with their normal and Poisson approximations, for r = 0, ..., 15.]
Continuity Correction
A better approximation to P(Xn ≤ r) is given by replacing r by
r + 1/2; the 1/2 is known as a continuity correction.
[Figure: the B(15, 0.4) mass function with its normal approximation.]
Example 4.19: Let X ∼ B(15, 0.4). Compute exact and
approximate values of P(X ≤ r) for r = 1, 8, 10, with and without
continuity correction. Comment. •
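Part of Example 4.19 can be sketched in code: for r = 8 the corrected approximation Φ{(r + 1/2 − µn)/σn} lands much closer to the exact CDF than the uncorrected one (in the far tails, e.g. r = 1, the picture is less clear-cut, which is worth commenting on):

```python
import math

n, p = 15, 0.4  # the distribution of Example 4.19
mu, sigma = n * p, math.sqrt(n * p * (1 - p))

def Phi(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def binom_cdf(r):
    # Exact P(X <= r) for X ~ B(n, p)
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(r + 1))

r = 8
exact = binom_cdf(r)
plain = Phi((r - mu) / sigma)              # no continuity correction
corrected = Phi((r + 0.5 - mu) / sigma)    # with continuity correction
assert abs(exact - corrected) < abs(exact - plain)  # correction helps here
assert abs(exact - corrected) < 0.01
```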
4.4 Moment Generating Functions
Recall that the moment generating function of a random variable
X is defined as MX(t) = E{exp(tX)}, for t ∈ R such that
MX(t) < ∞.
MX(t) is also called the Laplace transform of fX(x).
Example 4.20: Find MX(t) when X ∼ exp(λ). •
Example 4.21: Find the moment generating function of the
Laplace distribution. •
Example 4.22: Find MX(t) when X ∼ N(µ, σ2). •
Example 4.23: Let X ∼ exp(λ). Find the moment generating
functions of Y = 2X , of X conditional on the event X < a, and of
W = min(X, 3). •
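For Example 4.20 the answer is MX(t) = λ/(λ − t) for t < λ, and this can be verified by crude numerical integration of E{exp(tX)} = ∫_{0}^{∞} e^{tx} λe^{−λx} dx. A sketch; λ = 2, t = 0.5, and the truncation point are arbitrary choices:

```python
import math

lam, t = 2.0, 0.5  # need t < lam for the MGF to exist

def integrand(x):
    # e^{t*x} * lam * e^{-lam*x}: the integrand of E{exp(tX)}
    return math.exp(t * x) * lam * math.exp(-lam * x)

# trapezoidal rule on [0, 40]; the integrand decays like e^{-1.5x},
# so the tail beyond 40 is negligible
N, b = 100_000, 40.0
h = b / N
total = 0.5 * (integrand(0.0) + integrand(b))
total += sum(integrand(i * h) for i in range(1, N))
total *= h
assert abs(total - lam / (lam - t)) < 1e-4  # M_X(t) = lam/(lam - t)
```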
4.5 Mixture Distributions
In practice random variables are almost always either discrete or
continuous. Exceptions can arise, however.
Example 4.24 (Petrol): Describe the distribution of the money
spent by motorists buying petrol at an automatic pump. •
Example 4.25 (Mixture): Let X1 ∼ Geom(p) and X2 ∼ exp(λ).
Suppose that X = X1 with probability γ and X = X2 with
probability 1 − γ. Find FX , fX , E(X) and var(X). •
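The mean of such a mixture follows from E(X) = γ E(X1) + (1 − γ) E(X2), and this can be checked by simulation. A sketch under the assumption (my choice of convention, not stated in the notes) that Geom(p) is supported on {1, 2, ...} with mean 1/p; the weight γ = 0.3 and the parameters p, λ are arbitrary:

```python
import math
import random

random.seed(1)
w, p, lam = 0.3, 0.5, 2.0  # arbitrary mixture weight gamma and parameters

def draw():
    if random.random() < w:
        # Geom(p) on {1, 2, ...} by inverse transform: ceil(log(1-U)/log(1-p));
        # max(1, ...) guards the measure-zero case U = 0
        u = random.random()
        return max(1, math.ceil(math.log(1 - u) / math.log(1 - p)))
    # otherwise exp(lam), again by inverse transform
    return -math.log(1 - random.random()) / lam

n = 200_000
xs = [draw() for _ in range(n)]
mean = sum(xs) / n
# E(X) = gamma * (1/p) + (1 - gamma) * (1/lam)
expect = w * (1 / p) + (1 - w) * (1 / lam)
assert abs(mean - expect) < 0.02
```

The CDF is the same convex combination, FX = γ F_{X1} + (1 − γ) F_{X2}, with jumps at the integers inherited from the discrete component.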