Chapter 4: Probability Distributions
4.1 Random Variables
A random variable is a function X that assigns a numerical value x to each possible outcome in the sample space.

An event can be associated with a single value of the random variable, or it can be associated with a range of values of the random variable. The probability of an event can then be described as

$$P(A) = P(X = x_i) \quad\text{or}\quad P(A) = P(x_l \le X \le x_u)$$

There could also be other topologies for the random variable that describe the event. If $x_i$, $i = 1, 2, \cdots, k$ are all the possible values of the random variable associated with the sample space, then

$$\sum_{i=1}^{k} P(X = x_i) = 1$$
[Tree diagram: composite outcomes built from ratings $M_1$ or $M_2$, $P_1$ or $P_2$, and $C_1$, $C_2$ or $C_3$, each branch labeled with its probability: 0.03, 0.06, 0.07, 0.02, 0.01, 0.01, 0.09, 0.16, 0.01, …]
e.g. Each (composite) outcome consists of 3 ratings (M, P, C). Let $M_1$, $P_1$ and $C_1$ be the preferred ratings. Let X be the function that assigns to each outcome the number of preferred ratings that outcome possesses.

Since each outcome has a probability, we can compute the probability of getting each value x = 0, 1, 2, 3 of the function X:
[Diagram: each composite outcome labeled with its RV value x: 3, 2, 2, 2, 1, 1, 2, 1, 1, …]

x | P(X = x)
3 | 0.03
2 | 0.29
1 | 0.50
0 | 0.18
Random variables X can be classified by the number of values x they can assume. The two common types are
discrete random variables, with a finite or countably infinite number of values
continuous random variables, having a continuum of values for x

1. A value of a random variable may correspond to several random events.
2. An event may correspond to a range of values (or ranges of values) of a random variable.
3. But a given value (in its legal range) of a random variable corresponds to a random event.
4. Different values of the random variable correspond to mutually exclusive random events.
5. Each value of a random variable has a corresponding probability.
6. All possible values of a random variable correspond to the entire sample space.
7. The sum of the probabilities corresponding to all values of a random variable must equal unity.
A fundamental problem is to find the probability of occurrence for each possible value x of the random variable X.
$$P(X = x) = \sum_{\text{all outcomes } A \text{ assigned value } x} P(A)$$
This is the problem of identifying the probability distribution for a random variable. The probability distribution of a discrete random variable X can be listed as a table of the possible values x together with the probability P(X = x) for each, e.g.

$x_1$ | $P(X = x_1)$
$x_2$ | $P(X = x_2)$
$x_3$ | $P(X = x_3)$
…

It is standard notation to refer to the values P(X = x) of the probability distribution by f(x):

$$f(x) \equiv P(X = x)$$
The probability distribution always satisfies the conditions

$$f(x) \ge 0 \quad\text{and}\quad \sum_{\text{all } x} f(x) = 1$$

e.g. Can $f(x) = \frac{x-2}{2}$ for x = 1, 2, 3, 4 serve as a probability distribution? No: $f(1) = -\frac{1}{2} < 0$, violating the first condition.

e.g. Can $f(x) = \frac{x^2}{25}$ for x = 0, 1, 2, 3, 4 serve as a probability distribution? No: the values sum to $\frac{30}{25} \ne 1$, violating the second condition.
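Both checks are mechanical, so here is a minimal Python sketch (the helper name `is_valid_pmf` is ours, not from the notes) that tests the two conditions exactly, using rational arithmetic to avoid floating-point equality issues:

```python
from fractions import Fraction

def is_valid_pmf(f, support):
    """Check f(x) >= 0 for all x in the support and that the values sum to 1."""
    values = [f(x) for x in support]
    return all(v >= 0 for v in values) and sum(values) == 1

# f(x) = (x-2)/2 for x = 1,...,4: fails because f(1) = -1/2 < 0
print(is_valid_pmf(lambda x: Fraction(x - 2, 2), range(1, 5)))   # False

# f(x) = x^2/25 for x = 0,...,4: fails because the sum is 30/25, not 1
print(is_valid_pmf(lambda x: Fraction(x * x, 25), range(0, 5)))  # False
```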
Since the probability distribution for a discrete random variable is a tabular list, it can also be represented as a histogram, the probability histogram.
For a discrete random variable, the height of the bin at value x is f(x); the width of the bin is meaningless. The probability histogram is commonly drawn either with touching bins (left) or in Pareto style (right - also referred to as a bar chart).

[Probability histograms: f(x) for the number of preferred ratings, drawn with touching bins (left) and in Pareto style (right)]
Of course one can also compute the cumulative distribution function (or cumulative probability function)

$$F(x) = P(X \le x) \quad\text{for } -\infty \le x \le \infty$$

and plot it in the ways learned in Chapter 2 (with consideration that the x-axis is not continuous but discrete).
[Plot: F(x) for the number of preferred ratings]

We now start to discuss the probability distributions for many discrete random variables that occur in nature.
4.2 Binomial Distribution
Bernoulli distribution: In probability theory and statistics, the Bernoulli distribution, named after Swiss scientist Jacob Bernoulli, is a discrete probability distribution which takes value 1 with success probability $p$ and value 0 with failure probability $q = 1 - p$. So if X is a random variable with this distribution, we have:

$$P(X = 1) = p; \qquad P(X = 0) = q = 1 - p.$$

Mean and variance of a random variable X:

(1) Mean (mathematical expectation, expectation, average, etc.):

$$\mu = \bar{x} = E(X) = \sum_x x \, P(X = x)$$

(2) Variance:

$$\mathrm{Var}(X) = E[(x - \mu)^2] = \sigma^2 = \sum_x (x - \mu)^2 \, P(X = x)$$

$\sigma$ is called the standard deviation. For a random variable with the Bernoulli distribution, we have

$$\mu = E(X) = p$$
$$\mathrm{Var}(X) = \sigma^2 = (1-p)^2 p + p^2 q = q^2 p + p^2 q = pq(q + p) = pq$$
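As a quick numerical sanity check (a sketch of ours, not part of the notes; the choice p = 0.3 is arbitrary), one can simulate Bernoulli draws and compare the sample mean and variance with $p$ and $pq$:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.3
x = (rng.random(100_000) < p).astype(float)  # Bernoulli(p) draws as 0/1 values

print(x.mean())  # ~ mu      = p        = 0.3
print(x.var())   # ~ sigma^2 = p(1 - p) = 0.21
```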
Binomial Distribution: We can refer to the ordered sequence of length n as a series of n repeated trials, where each trial produces a result that is either "success" or "failure". We are interested in the random variable that reports the number x of successes in n trials. Each trial is a Bernoulli trial, which satisfies
a) there are only two outcomes for each trial
b) the probability of success is the same for each trial
c) the outcomes for different trials are independent

We are talking about the events $A_i$ in the sample space S where
$A_1$ = s _ _ _ _ …. _;  $A_2$ = _ s _ _ _ …. _;  $A_3$ = _ _ s _ _ …. _;  … ;  $A_n$ = _ _ _ _ _ …. s;
where by b) $P(A_1) = P(A_2) = \cdots = P(A_n)$ and by c) $P(A_i \cap A_j) = P(A_i) \cdot P(A_j)$ for all distinct pairs i, j.
e.g. police roadblock checking for drivers who are wearing seatbelts
condition a): two outcomes: "y" or "n"
conditions b) & c): if the events $A_1$ to $A_n$ contain all cars stopped, then b) and c) will be satisfied

If, however, event $A_1$ is broken into two (mutually exclusive) sub-events, $A_{1<}$, which is all outcomes s _ _ _ … _ where driver 1 is less than 21, and $A_{1\ge}$, which is all outcomes s _ _ _ … _ where driver 1 is 21 or older, it is entirely likely that $P(A_{1<}) \ne P(A_{1\ge})$, and we would not be dealing with Bernoulli trials.

If someone caught not wearing a seatbelt began to warn oncoming cars approaching the roadblock, then $P(A_i \cap A_j) \ne P(A_i) \cdot P(A_j)$ for all i, j pairs and we would also not be dealing with Bernoulli trials.
Note that in our definition of Bernoulli trials the number of trials n is fixed in advance.

All Bernoulli trials of length n have the same probability distribution!!!! (a consequence of the assumptions behind the definition of Bernoulli trials) This probability distribution is called the Binomial probability distribution for n. (It is called this because each trial has a binomial outcome, "s" or "f", and the sequences generated (the composite outcomes) are binomial sequences.)
e.g. Binomial probability distribution for n = 3. The sample space has $2^3 = 8$ outcomes:

sss | ssf sfs fss | sff fsf ffs | fff
RV values: 3 | 2 | 1 | 0

$P(sss) = 1/8 = \frac12 \cdot \frac12 \cdot \frac12$;  $P(ssf) = 1/8 = \frac12 \cdot \frac12 \cdot (1-\frac12)$;  $P(fsf) = 1/8 = (1-\frac12) \cdot \frac12 \cdot (1-\frac12)$; etc.

Probability Distribution:

x | $\binom{3}{x} (\frac12)^x (1-\frac12)^{3-x}$ | f(x)
0 | $\binom{3}{0} (\frac12)^0 (1-\frac12)^3$ | 1/8
1 | $\binom{3}{1} (\frac12)^1 (1-\frac12)^2$ | 3/8
2 | $\binom{3}{2} (\frac12)^2 (1-\frac12)^1$ | 3/8
3 | $\binom{3}{3} (\frac12)^3 (1-\frac12)^0$ | 1/8
From this example, we see that the binomial probability distribution, which governs Bernoulli trials of length n, is:

$$f(x) \equiv b(x; n, p) = \binom{n}{x} p^x (1-p)^{n-x} \qquad \text{(BPD)}$$

where p is the (common) probability of success in any trial, and x = 0, 1, 2, …, n.

Note:
1. The term on the RHS of (BPD) is the x'th term of the binomial expansion of $[p + (1-p)]^n$, i.e.

$$[p + (1-p)]^n = \sum_{x=0}^{n} \binom{n}{x} p^x (1-p)^{n-x}$$

which also proves that

$$\sum_{x=0}^{n} \binom{n}{x} p^x (1-p)^{n-x} = 1^n = 1$$
2. (BPD) is a 2-parameter family of distribution functions, characterized by the choice of n and p.
e.g. In 60% of all solar-heat installations, the utility bill is reduced by at least 1/3. What is the probability that the utility bill will be reduced by at least 1/3 in
a) 4 of 5 installations?
b) at least 4 of 5 installations?

a) "s" = "at least 1/3" (i.e. 1/3 or greater); "f" = "less than 1/3". $P(A_i) = p = 0.6$. Assume c) of the Bernoulli trial assumptions holds. Then

$$f(4) = b(4; 5, 0.6) = \binom{5}{4} (0.6)^4 (0.4)^1 = 0.259$$

b) We want

$$f(4) + f(5) = b(4; 5, 0.6) + b(5; 5, 0.6) = \binom{5}{4} (0.6)^4 (0.4)^1 + \binom{5}{5} (0.6)^5 (0.4)^0 = 0.337$$
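These values are easy to verify with scipy.stats (a sketch; we assume SciPy is available):

```python
from scipy.stats import binom

# a) exactly 4 successes in 5 trials with p = 0.6
print(binom.pmf(4, n=5, p=0.6))                     # 0.2592

# b) at least 4 of 5: f(4) + f(5)
print(binom.pmf(4, 5, 0.6) + binom.pmf(5, 5, 0.6))  # 0.33696
```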
Cumulative binomial probability distribution

$$B(x; n, p) \equiv \sum_{k=0}^{x} b(k; n, p) \qquad \text{(CBPD)}$$

is the probability of x or fewer successes in n Bernoulli trials, where p is the probability of success on each trial. From (CBPD) we see

$$b(x; n, p) = B(x; n, p) - B(x-1; n, p)$$

Values of $B(x; n, p)$ are tabulated for various n and p values in Table 1 of Appendix B.
e.g. The probability is 0.05 for flange failure under a given load L. What is the probability that, among 16 columns,
a) at most 2 will fail
b) at least 4 will fail

a) $B(2; 16, 0.05) = b(0; 16, 0.05) + b(1; 16, 0.05) + b(2; 16, 0.05)$
b) $1.0 - B(3; 16, 0.05)$
e.g. Claim: the probability of repair for a hard drive within 12 months is 0.10. Preliminary data show 5 of 20 hard drives required repair in the first 12 months of manufacture. Does the initial production run support the claim?

"s" = repair within 12 months. p = 0.10. Assume Bernoulli trials. $1.0 - B(4; 20, 0.10) = 0.0432$ is the probability of seeing 5 or more hard drives requiring repair in 12 months. This says that in only 4% of all year-long periods (i.e. in roughly 1 year out of 25) should one see 5 or more hard drives needing repair. The fact that we saw this happen in the very first year makes us suspicious of the manufacturer's claim (but does NOT prove that the manufacturer's claim is wrong!!!!!!!)
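The tail probability $1 - B(4; 20, 0.10)$ can be computed directly (a sketch using scipy.stats; `binom.sf` is the survival function, $1 - \text{CDF}$):

```python
from scipy.stats import binom

# P(X >= 5) = 1 - B(4; 20, 0.10) = 1 - P(X <= 4)
print(1 - binom.cdf(4, n=20, p=0.10))  # 0.0432
print(binom.sf(4, 20, 0.10))           # same value, computed more directly
```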
Shape of binomial probability histograms, e.g. b(x; 5, p):

[Probability histograms of b(x; 5, p): positively skewed (p < 0.5), symmetric (p = 0.5), negatively skewed (p > 0.5)]

b(x; n, 0.5) will always be symmetric:

$$b(x; n, 0.5) = b(n - x; n, 0.5)$$

b(x; n, p) will always be positively skewed for p < 0.5 (tail on positive side) and negatively skewed for p > 0.5 (tail on negative side).
4.3 Hypergeometric Probability Distribution
In Bernoulli trials, one can get "s" with probability p and "f" with probability 1−p in every trial (i.e. Bernoulli trials can be thought of as "sampling with replacement").

Consider a variation of the problem, in which there is a total of only a outcomes available that are successes (have RV value "s") and N − a outcomes that are failures. (e.g. there are N radios, a of them are defective and N − a of them work.)

We want to run n trials (e.g. in each trial we pick a radio), but outcomes are sampled without replacement (that is, once a radio is picked, it is no longer available to be picked again).

As we run each trial, we assume that whatever outcomes are left, whether having RV value "s" or "f", have the same chance of being selected in the next trial (i.e. we are assuming classical probability, where the chance of picking a particular value of an RV is in proportion to the number of outcomes that have that RV value).
Thus, for x ≤ a, the probability of getting x successes in n trials, if there will be a successes in N trials, is

$$f(x) = \frac{\text{the number of } n\text{-arrangements (permutations) having } x \text{ successes and } n-x \text{ failures}}{\text{the number of } n\text{-arrangements (permutations) of } N \text{ things}}$$

That is, over the n trial slots _ _ _ _ _ … _ :
pick x of the trials: $\binom{n}{x}$ ways
pick x of the a success outcomes and arrange them in all possible ways in those x trials: $\frac{a!}{(a-x)!}$ ways
pick n−x of the N−a failure outcomes and arrange them in all possible ways in the remaining n−x trials: $\frac{(N-a)!}{(N-a-(n-x))!}$ ways
total possible n-arrangements of N things: $\frac{N!}{(N-n)!}$

Therefore

$$f(x) = \binom{n}{x} \cdot \frac{a!}{(a-x)!} \cdot \frac{(N-a)!}{(N-a-(n-x))!} \cdot \frac{(N-n)!}{N!} = \frac{\binom{a}{x}\binom{N-a}{n-x}}{\binom{N}{n}}$$

This defines the hypergeometric probability distribution

$$h(x; n, a, N) = \frac{\binom{a}{x}\binom{N-a}{n-x}}{\binom{N}{n}}, \qquad x = 0, 1, 2, \ldots, n; \quad x \le a$$
e.g. PC has 20 identical car chargers, 5 of which are defective. PC will randomly ship 10. What is the probability that 2 of those shipped will be defective?

$$h(2; 10, 5, 20) = \frac{\binom{5}{2}\binom{15}{8}}{\binom{20}{10}} = \frac{\dfrac{5!}{3!\,2!} \cdot \dfrac{15!}{7!\,8!}}{\dfrac{20!}{10!\,10!}} = \frac{10 \cdot 6435}{184756} = 0.348$$
e.g. redo using 100 car chargers, 25 of them defective:

$$h(2; 10, 25, 100) = \frac{\binom{25}{2}\binom{75}{8}}{\binom{100}{10}} = 0.292$$
e.g. approximate this using the binomial distribution:

$$b\left(2; 10, p \approx \tfrac{25}{100}\right) = \binom{10}{2} (0.25)^2 (0.75)^8 = 0.282$$
The hypergeometric distribution $h(x; n, a, N)$ approaches the binomial distribution $b(x; n, p = \frac{a}{N})$ in the limit $N \to \infty$, i.e. the binomial distribution can be used to approximate the hypergeometric distribution when $n \le \frac{N}{10}$.
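Both car-charger calculations, and the quality of the binomial approximation, can be reproduced with scipy.stats (a sketch; note that scipy's hypergeom takes its arguments as pmf(k, M, n, N) = (number observed, population size, successes in population, sample size), a different letter convention from the notes' h(x; n, a, N)):

```python
from scipy.stats import hypergeom, binom

# h(2; n=10, a=5, N=20): scipy order is pmf(k, population, #successes, #draws)
print(hypergeom.pmf(2, 20, 5, 10))    # 0.348
print(hypergeom.pmf(2, 100, 25, 10))  # 0.292
print(binom.pmf(2, 10, 0.25))         # 0.282  (binomial approximation, p = a/N)
```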
4.4 Mean and Variance of a Probability Distribution
Consider the values $x_1, x_2, \cdots, x_n$. As discussed in Chapter 2, the sample mean is

$$\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n} = \sum_{i=1}^{n} x_i \cdot \frac{1}{n}$$

We can view each term in the RHS as $x_i \cdot f(x_i)$, where $f(x_i) = \frac{1}{n}$ is the probability associated with each value (each value appears once in the list, and each is equally likely).

Let X be a discrete random variable having values $x_1, x_2, \cdots, x_k$, with probabilities $f(x_i)$. The mean value of the RV, a.k.a. the mean value of the probability distribution, is

$$\mu = \sum_{\text{all } x} x \cdot f(x)$$
e.g. Mean value for the probability distribution of the number of heads obtained in 3 flips of a coin.
There are $2^3 = 8$ outcomes. The RV "number of heads in 3 flips" has 4 possible values, 0, 1, 2, and 3 heads, having probabilities f(0) = 1/8; f(1) = 3/8; f(2) = 3/8; f(3) = 1/8. Therefore the mean value is

$$\mu = 0 \cdot \tfrac{1}{8} + 1 \cdot \tfrac{3}{8} + 2 \cdot \tfrac{3}{8} + 3 \cdot \tfrac{1}{8} = \tfrac{3}{2}$$
The mean value for the Binomial distribution:

$$\mu = \sum_{x=0}^{n} x \cdot b(x; n, p) = \sum_{x=0}^{n} x \cdot \binom{n}{x} p^x (1-p)^{n-x}$$

$$= \sum_{x=1}^{n} x \cdot \frac{n!}{(n-x)!\,x!}\, p^x (1-p)^{n-x}$$

$$= np \sum_{x=1}^{n} \frac{(n-1)!}{(n-x)!\,(x-1)!}\, p^{x-1} (1-p)^{n-x}$$

Let y = x − 1 and m = n − 1:

$$\mu = np \sum_{y=0}^{m} \frac{m!}{(m-y)!\,y!}\, p^y (1-p)^{m-y} = np\,[p + (1-p)]^m = np \cdot 1^m$$

The mean value for the binomial distribution $b(x; n, p)$ is $\mu = np$.
e.g. Since the RV "number of heads in three tosses" is a Bernoulli trial RV with p = 0.5, its mean value must be $np = 3 \cdot \frac12 = \frac32$, as shown on the previous slide.
The mean value of the hypergeometric distribution $h(x; n, a, N)$ is given by

$$\mu = n \cdot \frac{a}{N}$$

(This is "easy" to remember. The formula is similar to the binomial distribution if one "recognizes" $p = \frac{a}{N}$ as the hypergeometric probability of success in the limit of large N.)
e.g. PC has 20 identical car chargers, 5 of which are defective. PC will randomly ship 10. On average (over many trials of shipping 10), how many defective car chargers will be included in the order?

We want the mean of $h(x; 10, 5, 20)$. The mean value is $\mu = 10 \cdot 5/20 = 2.5$
Recall from Chapter 2 that the sum of the sample deviations $\sum_{i=1}^{n} (x_i - \bar{x}) = 0$.

If μ is the mean of the probability distribution f(x), then note that

$$\sum_{\text{all } x} (x - \mu) \cdot f(x) = \sum_{\text{all } x} x \cdot f(x) - \mu \sum_{\text{all } x} f(x) = \mu - \mu = 0$$

Therefore, in analogy to the sample variance defined in Chapter 2, we define the variance of the probability distribution f(x) as

$$\sigma^2 = \sum_{\text{all } x} (x - \mu)^2 \cdot f(x)$$

Similarly, we define the standard deviation of the probability distribution f(x) as

$$\sigma = \sqrt{\sigma^2} = \sqrt{\sum_{\text{all } x} (x - \mu)^2 \cdot f(x)}$$
The variance for the binomial distribution $b(x; n, p)$:

$$\sigma^2 = n \cdot p \cdot q = n \cdot p \cdot (1 - p)$$

e.g. The standard deviation for the number of heads in 3 flips of a coin is

$$\sigma = \sqrt{3 \cdot \tfrac12 \cdot (1 - \tfrac12)} = \sqrt{\tfrac34} = \frac{\sqrt{3}}{2} = 0.866$$
The variance for the hypergeometric distribution is

$$\sigma^2 = n \frac{a}{N}\left(1 - \frac{a}{N}\right) \frac{N - n}{N - 1}$$

e.g. The standard deviation for the number of defective car chargers in shipments of 10 is

$$\sigma = \sqrt{10 \cdot \frac{5}{20}\left(1 - \frac{5}{20}\right) \frac{20 - 10}{20 - 1}} = \sqrt{\frac{75}{76}} = 0.99$$

(The finite-population correction factor $\frac{N-n}{N-1} \to 1$ as $N \to \infty$.)
The moments of a probability distribution: The k'th moment about the origin (usually just called the k'th moment) of a probability distribution is defined as

$$\mu_k' = \sum_{\text{all } x} x^k \cdot f(x)$$

Note: the mean of a probability distribution is the 1'st moment (about the origin).

The k'th moment about the mean of a probability distribution is defined as

$$\mu_k = \sum_{\text{all } x} (x - \mu)^k \cdot f(x)$$

Notes:
the 1'st moment about the mean, $\mu_1 = 0$
the 2'nd moment about the mean, $\mu_2$, is the variance
the 3'rd moment about the mean gives the skewness $\mu_3/\sigma^3$ (describes the symmetry)
the 4'th moment about the mean gives the kurtosis $\mu_4/\sigma^4$ (describes the "peakedness")
Note:

$$\mu_2 = \sum_{\text{all } x} (x - \mu)^2 \cdot f(x) = \sum_{\text{all } x} (x^2 - 2x\mu + \mu^2)\, f(x)$$

$$= \sum_{\text{all } x} x^2 f(x) - 2\mu \sum_{\text{all } x} x\, f(x) + \mu^2 \sum_{\text{all } x} f(x) = \mu_2' - 2\mu^2 + \mu^2$$

Therefore we have the result

$$\sigma^2 = \mu_2 = \mu_2' - \mu^2$$

Since computation of $\mu_2'$ and $\mu^2$ does not involve squaring differences within the sum, they can be more straightforward to compute.
e.g. Consider the R.V. which is the number of points obtained on a single roll of a die. The R.V. has values 1,2,3,4,5,6. What is the variance of the probability distribution behind this RV?
The probability distribution is f(x) = 1/6 for each x. Therefore the mean is
$$\mu = 1 \cdot \tfrac16 + 2 \cdot \tfrac16 + 3 \cdot \tfrac16 + 4 \cdot \tfrac16 + 5 \cdot \tfrac16 + 6 \cdot \tfrac16 = \frac{6 \cdot 7}{2 \cdot 6} = \frac{7}{2}$$

The second moment about the origin is

$$\mu_2' = 1^2 \cdot \tfrac16 + 2^2 \cdot \tfrac16 + 3^2 \cdot \tfrac16 + 4^2 \cdot \tfrac16 + 5^2 \cdot \tfrac16 + 6^2 \cdot \tfrac16 = \frac{91}{6}$$

Therefore

$$\sigma^2 = \frac{91}{6} - \frac{49}{4} = \frac{35}{12}$$
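The die calculation generalizes to a small helper (our own sketch, not from the notes) that computes moments of any tabulated distribution, again with exact rational arithmetic:

```python
from fractions import Fraction

def moment(xs, fs, k, about=0):
    """k'th moment of a discrete distribution about a given point."""
    return sum(f * (x - about) ** k for x, f in zip(xs, fs))

xs = [1, 2, 3, 4, 5, 6]
fs = [Fraction(1, 6)] * 6

mu = moment(xs, fs, 1)              # 7/2
mu2_prime = moment(xs, fs, 2)       # 91/6
print(mu2_prime - mu**2)            # variance = 35/12
print(moment(xs, fs, 2, about=mu))  # same result via the moment about the mean
```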
4.5 Chebyshevβs Theorem
Theorem 4.1 If a probability distribution has mean μ and standard deviation σ, then the probability of getting a value that deviates from μ by at least kσ is at most $\frac{1}{k^2}$, i.e. the probability of getting a result x such that $|x - \mu| \ge k\sigma$ satisfies

$$P(|x - \mu| \ge k\sigma) \le \frac{1}{k^2}$$

Chebyshev's theorem quantifies the statement that the probability of getting a result x decreases as x moves further away from μ.
Note: k can be any positive number (it does not have to be an integer).
Corollary 4.1 If a probability distribution has mean μ and standard deviation σ, then the probability of getting a value that deviates from μ by at most kσ is at least $1 - \frac{1}{k^2}$:

$$P(|x - \mu| \le k\sigma) \ge 1 - \frac{1}{k^2}$$
e.g. The number of customers who visit a car dealer's showroom on a Saturday morning is an RV with mean 18 and standard deviation 2.5. With what probability can we assert there will be more than 8 but fewer than 28 customers?

This problem sets kσ = 10, making k = 4. Thus

$$P(|x - 18| \le 4 \cdot 2.5) \ge 1 - \frac{1}{4^2} = \frac{15}{16}$$
Chebyshevβs theorem holds for all probability distributions, but it works better for some than for others (gives a βsharperβ estimate).
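To illustrate how conservative the bound can be, here is a short sketch comparing Chebyshev's bound with the exact tail probability of a binomial distribution (the choice b(x; 100, 0.5) is our own, purely for illustration):

```python
import numpy as np
from scipy.stats import binom

n, p = 100, 0.5
mu, sigma = n * p, np.sqrt(n * p * (1 - p))
x = np.arange(n + 1)
pmf = binom.pmf(x, n, p)

for k in (1.5, 2.0, 3.0):
    exact = pmf[np.abs(x - mu) >= k * sigma].sum()  # P(|X - mu| >= k*sigma)
    print(f"k={k}: exact={exact:.4f}  Chebyshev bound={1 / k**2:.4f}")
```

The exact tail probabilities come out far below the 1/k² bound, which is the sense in which Chebyshev "works better for some distributions than for others."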
4.6 Poisson Distribution
Consider the binomial distribution

$$b(x; n, p) = \binom{n}{x} p^x (1-p)^{n-x}$$

Write p as $p = \lambda/n$, where λ is a constant. In the limit $n \to \infty$, $p \to 0$ and the binomial distribution becomes the Poisson probability distribution

$$f(x; \lambda) = \frac{\lambda^x e^{-\lambda}}{x!} \qquad \text{for } x = 0, 1, 2, 3, \ldots$$
As derived, the Poisson distribution describes the probability distribution for an infinite (in practice, very large) number of Bernoulli trials when the probability of success in each trial is vanishingly small (in practice, very small).
As the Poisson distribution describes probabilities for a sample space in which each outcome is countably infinite in length, we technically have to modify the third axiom (property) that probabilities must obey to include such sample spaces. The third axiom stated that the probability function is an additive set function. The appropriate modification is

Axiom 3′ If $A_1, A_2, A_3, \cdots$ is a countably infinite sequence of mutually exclusive events in S, then

$$P(A_1 \cup A_2 \cup A_3 \cup \cdots) = P(A_1) + P(A_2) + P(A_3) + \cdots$$
Note that the Poisson distribution satisfies $\sum_{\text{all } x} f(x; \lambda) = 1$. Proof:

$$\sum_{x=0}^{\infty} \frac{\lambda^x e^{-\lambda}}{x!} = e^{-\lambda} \sum_{x=0}^{\infty} \frac{\lambda^x}{x!} = e^{-\lambda} e^{\lambda} = 1$$

(using the Taylor series expansion of $e^{\lambda}$)
The cumulative Poisson distribution $F(x; \lambda) = \sum_{k=0}^{x} f(k; \lambda)$ is tabulated for select values of x and λ in Appendix B (Table 2).
e.g. 5% of bound books have defective bindings. What is the probability that 2 out of 100 books will have defective bindings, using (a) the binomial distribution, (b) the Poisson distribution as an approximation?

(a) $b(2; 100, 0.05) = \binom{100}{2} (0.05)^2 (0.95)^{98} = 0.081$

(b) $\lambda = 0.05 \cdot 100 = 5$. $f(2; 5) = \dfrac{5^2 e^{-5}}{2!} = 0.084$
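A quick check of the approximation with scipy.stats (a sketch):

```python
from scipy.stats import binom, poisson

print(binom.pmf(2, 100, 0.05))  # 0.0812 (exact binomial)
print(poisson.pmf(2, 5))        # 0.0842 (Poisson approximation, lambda = np = 5)
```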
e.g. There are 3,840 generators. The probability is 1/1,200 that any one will fail in a year. What is the probability of finding 0, 1, 2, 3, 4, … failures in any given year?

λ = 3840/1200 = 3.2. We want the probabilities f(0; 3.2), f(1; 3.2), f(2; 3.2), etc. Using the property $f(x; \lambda) = F(x; \lambda) - F(x-1; \lambda)$ we can compute these probabilities from Table 2, Appendix B:

x         | 0     | 1     | 2     | 3     | 4     | 5     | 6     | 7     | 8
f(x; 3.2) | 0.041 | 0.130 | 0.209 | 0.223 | 0.178 | 0.114 | 0.060 | 0.028 | 0.011
The mean value for the Poisson probability distribution is $\mu = \lambda$. The variance for the Poisson probability distribution is $\sigma^2 = \lambda$, i.e. the standard deviation for the Poisson distribution is $\sigma = \sqrt{\lambda}$.

Proof for the mean:

$$\mu = \sum_{x=0}^{\infty} x\, \frac{\lambda^x e^{-\lambda}}{x!} = \lambda e^{-\lambda} \sum_{x=1}^{\infty} \frac{\lambda^{x-1}}{(x-1)!}$$

Let y = x − 1:

$$\mu = \lambda e^{-\lambda} \sum_{y=0}^{\infty} \frac{\lambda^y}{y!} = \lambda e^{-\lambda} e^{\lambda} = \lambda$$
The average Ξ» is usually approximated by running many long (but finite) trials.
e.g. An average of 1.3 gamma rays per millisec is recorded coming from a radioactive substance. Assuming the RV "number of gamma rays per millisec" has a probability distribution that is Poisson (a.k.a. is a Poisson process), what is the probability of seeing 1 or more gamma rays in the next millisec?

λ = 1.3. We want

$$P(X \ge 1) = 1.0 - P(X = 0) = 1.0 - \frac{1.3^0 e^{-1.3}}{0!} = 1.0 - e^{-1.3} = 0.727$$
4.7 Poisson Processes
Consider a random process (a physical process controlled, wholly or in part, by a chance mechanism) in time. To find the probability of the process generating x successes over a time interval T, divide T into n equal intervals $\Delta t = T/n$ (n is large, Δt is small). Assume the following hold:
1. The probability of success during Δt is $\alpha\,\Delta t$
2. The probability of more than one success during Δt is negligible
3. The probability of success during each time interval Δt does not depend on what happened in a prior interval

These assumptions describe Bernoulli trials, with $n = T/\Delta t$ and $p = \alpha\,\Delta t$, and the probability of x successes in n intervals is $b(x; T/\Delta t, \alpha\,\Delta t)$.

As $n \to \infty$, $p \to 0$ (as $\Delta t \to 0$) and the probability of x successes is governed by the Poisson probability distribution with $\lambda = np = \alpha T$.

Since λ is the mean (average) number of successes over time T, we see that α is the mean number of successes per unit time.
e.g. A bank receives, on average, 6 bad checks per day. What are the probabilities it will receive (a) 4 bad checks on a given day, (b) 10 bad checks over a 2-day period?

(a) α = 6, λ = 6 · 1 = 6. Therefore

$$f(4; 6) = \frac{6^4 e^{-6}}{4!} = 0.134$$

(b) α = 6, λ = 6 · 2 = 12. Therefore

$$f(10; 12) = \frac{12^{10} e^{-12}}{10!} = F(10; 12) - F(9; 12) = 0.105$$
e.g. A process generates 0.2 imperfections per minute. Find the probabilities of (a) 1 imperfection in 3 minutes, (b) at least 2 imperfections in 5 minutes, (c) at most 1 imperfection in 15 minutes.

(a) λ = 0.2 · 3 = 0.6. Want $f(1; 0.6) = F(1; 0.6) - F(0; 0.6)$
(b) λ = 0.2 · 5 = 1.0. Want $1.0 - F(1; 1.0)$
(c) λ = 0.2 · 15 = 3.0. Want $F(1; 3.0)$
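The three answers can be evaluated directly with the Poisson pmf/CDF (a sketch using scipy.stats.poisson; `sf` is the survival function, 1 − CDF):

```python
from scipy.stats import poisson

alpha = 0.2  # imperfections per minute

print(poisson.pmf(1, alpha * 3))   # (a) exactly 1 in 3 min:  f(1; 0.6)     = 0.329
print(poisson.sf(1, alpha * 5))    # (b) at least 2 in 5 min: 1 - F(1; 1.0) = 0.264
print(poisson.cdf(1, alpha * 15))  # (c) at most 1 in 15 min: F(1; 3.0)     = 0.199
```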
4.8 Geometric and Negative Binomial Distributions
Consider the sample space of outcomes for countably infinite Bernoulli trials (i.e. the three Bernoulli assumptions hold). In particular, "s" occurs with probability p and "f" with probability 1−p. We want to know the probability that the first success occurs on the x'th trial.

Divide the sample space into the following events:

$A_1$: s _ _ _ _ _ _ _ _ …   $\bar{A}_1$: f _ _ _ _ _ _ _ _ …   $A_1 \cup \bar{A}_1 = S$
$A_2$: f s _ _ _ _ _ _ _ …   $\bar{A}_2$: f f _ _ _ _ _ _ _ …   $A_2 \cup \bar{A}_2 = \bar{A}_1$
$A_3$: f f s _ _ _ _ _ _ …   $\bar{A}_3$: f f f _ _ _ _ _ _ …   $A_3 \cup \bar{A}_3 = \bar{A}_2$
$A_4$: f f f s _ _ _ _ _ …   $\bar{A}_4$: f f f f _ _ _ _ _ …   $A_4 \cup \bar{A}_4 = \bar{A}_3$
etc.
[Diagram: nested partition of the sample space into $A_1, A_2, A_3, \ldots$, with $P(A_1) = p$, $P(A_2) = p(1-p)$, $P(A_3) = p(1-p)^2$, $P(A_4) = p(1-p)^3$, $P(A_5) = p(1-p)^4$, $P(A_6) = p(1-p)^5$, …]
Since the sum of the probabilities of all outcomes must equal 1, from the diagram we see that

$$P(A_1) + P(A_2) + P(A_3) + P(A_4) + \cdots = p + p(1-p) + p(1-p)^2 + p(1-p)^3 + \cdots = \sum_{x=1}^{\infty} p(1-p)^{x-1} = 1$$
Let the sample space consist of outcomes, each of which is a countably infinite sequence of Bernoulli trials. Let p be the probability of success in each Bernoulli trial. Then the geometric probability distribution

$$g(x; p) = p(1-p)^{x-1}, \qquad x = 1, 2, 3, 4, \ldots$$

describes the probability that the first success occurs on the x'th trial.
e.g. A measuring device has a 5% probability of showing excessive drift during a measurement. What is the probability that the first time the device exhibits excessive drift occurs on the sixth measurement?

p = 0.05. We want $g(6; 0.05) = 0.05\,(0.95)^5 = 0.039$
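scipy.stats.geom uses exactly this convention (support starting at x = 1), so the drift example is a one-liner (sketch):

```python
from scipy.stats import geom

# g(6; 0.05) = 0.05 * 0.95**5: first excessive drift on the 6th measurement
print(geom.pmf(6, 0.05))  # 0.0387
```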
Assume you are dealing with Bernoulli trials governed by probability p, and you would like to know how many trials x you need to make in order to observe r successes. (Clearly $r \le x$.) To have exactly r successes in x trials, the r'th success has to occur on trial x, and the previous r − 1 successes have to occur in the previous x − 1 trials. Therefore the probability that the r'th success occurs on the x'th trial must be

f(x) = (probability of r − 1 successes in x − 1 trials) × (probability of "s" on trial x) = $b(r-1; x-1, p) \cdot p$

$$f(x) = \binom{x-1}{r-1} p^{r-1} (1-p)^{x-r} \cdot p = \binom{x-1}{r-1} p^r (1-p)^{x-r}$$

This is the negative binomial probability distribution

$$f(x) = \binom{x-1}{r-1} p^r (1-p)^{x-r} \qquad \text{for } x = r, r+1, r+2, \ldots$$

As $\binom{n}{k} = \binom{n}{n-k}$, the negative binomial probability distribution can also be written

$$f(x) = \binom{x-1}{x-r} p^r (1-p)^{x-r}$$

It can be shown that $\binom{x-1}{x-r} = (-1)^{x-r} \binom{-r}{x-r}$, explaining the name "negative" binomial distribution.
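One caution when checking these values with scipy.stats: scipy's nbinom counts the number of failures before the r'th success, not the total number of trials x, so x maps to k = x − r. A sketch (the numbers r = 3, p = 0.4, x = 7 are hypothetical, chosen only for illustration):

```python
from math import comb
from scipy.stats import nbinom

r, p, x = 3, 0.4, 7  # hypothetical: probability the 3rd success lands on trial 7

# direct formula from the notes: C(x-1, r-1) p^r (1-p)^(x-r)
print(comb(x - 1, r - 1) * p**r * (1 - p)**(x - r))  # 0.124416

# scipy parameterization: k = x - r failures before the r'th success
print(nbinom.pmf(x - r, r, p))                       # same value
```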
Recap: Sample space: outcomes are Bernoulli trials of fixed length n. Probability of "s" is p.
Probability of getting x successes in the n trials is given by the binomial distribution $b(x; n, p)$, $x = 0, 1, 2, 3, \ldots, n$.
If n is large and p is small, $b(x; n, p) \approx f(x; \lambda)$, where $\lambda = np$ and $f(x; \lambda)$ is the Poisson distribution.

Sample space: outcomes are Bernoulli trials of countably infinite length. Probability of "s" is p.
Probability of getting the first success on the x'th trial is given by the geometric distribution $g(x; p)$, $x = 1, 2, 3, 4, \ldots$.
Probability that the r'th success occurs on the x'th trial is given by $f(x) = b(r-1; x-1, p) \cdot p$, $x = r, r+1, r+2, \ldots$
Recap: Sample space: time recordings of a random process occurring over a continuous time interval T. The random process produces only "s" or "f".

Let α denote the average number of "s" produced per unit time. Further assume
1. the probability of "s" during a small time interval Δt is $\alpha\,\Delta t$
2. the probability of more than one "s" in Δt is negligible
3. the probability of "s" in a later Δt is independent of what occurs earlier

Then: the probability of x successes during time interval T is given by the Poisson distribution $f(x; \lambda)$, where $\lambda = \alpha T$.
4.9 The Multinomial Distribution
Sample space: sequences of trials of length n. We assume:
1) Each trial has k possible distinct outcomes: type 1, type 2, type 3, …, type k
2) Outcome type i occurs with probability $p_i$ for each trial, where $\sum_{i=1}^{k} p_i = 1$
3) The outcomes for different trials are independent (i.e. we assume "multinomial Bernoulli" trials)

In the n trials, we want to know the probability $f(x_1, x_2, x_3, \ldots, x_k)$ that there are
$x_1$ outcomes of type 1, $x_2$ outcomes of type 2, …, $x_k$ outcomes of type k,
where $\sum_{i=1}^{k} x_i = n$.

For fixed values of $x_1, x_2, x_3, \ldots, x_k$, there are

$$\binom{n}{x_1}\binom{n-x_1}{x_2}\binom{n-x_1-x_2}{x_3} \cdots \binom{n-x_1-x_2-\cdots-x_{k-1}}{x_k} = \frac{n!}{x_1!\, x_2!\, x_3! \cdots x_k!}$$

outcomes that have these k values.
(AMS 301 students will recognize this as $P(n; x_1, x_2, x_3, \ldots, x_k)$, the number of ways to arrange n objects when there are $x_1$ of type 1, $x_2$ of type 2, …, and $x_k$ of type k.) Each outcome has probability $p_1^{x_1} p_2^{x_2} p_3^{x_3} \cdots p_k^{x_k}$. Summing the probabilities for these outcomes, we have

$$f(x_1, x_2, x_3, \ldots, x_k) = \frac{n!}{x_1!\, x_2!\, x_3! \cdots x_k!}\, p_1^{x_1} p_2^{x_2} p_3^{x_3} \cdots p_k^{x_k}$$

This is the multinomial probability distribution, with the conditions that each $x_i \ge 0$ and that $\sum_{i=1}^{k} x_i = n$.
e.g. Suppose
1. 30% of light bulbs will survive less than 40 hours of continuous use
2. 50% will survive from 40 to 80 hours of continuous use
3. 20% will survive longer than 80 hours of continuous use
What is the probability that, among 8 light bulbs, 2 will be of type 1, 5 of type 2 and 1 of type 3?

We want

$$f(2, 5, 1) = \frac{8!}{2!\, 5!\, 1!} (0.3)^2 (0.5)^5 (0.2)^1 = 0.0945$$
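The same number from scipy.stats.multinomial (a sketch):

```python
from scipy.stats import multinomial

# 8 bulbs split as (2, 5, 1) across the three lifetime classes
print(multinomial.pmf([2, 5, 1], n=8, p=[0.3, 0.5, 0.2]))  # 0.0945
```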
4.10 Generating discrete random variables that obey different probability distributions
Observation: It is relatively simple to generate the random values 0, 1, 2, …, 9 with equal likelihood (i.e. each with probability 1/10):
draw the numbers (with replacement) from a hat
roll a balanced, 10-sided die

It is also relatively straightforward to write a computer program that generates the integers 0, 1, 2, …, 9 with equal likelihood.

Consequently, it is possible to generate
all 2-digit outcomes 00 to 99 with equal likelihood (1/100)
all 3-digit outcomes 000 to 999 with equal likelihood (1/1000)
etc.
Consider the RV "number of heads in 3 tosses of a coin". The probability distribution for this RV is

x    | 0           | 1           | 2           | 3
f(x) | 1/8 = 0.125 | 3/8 = 0.375 | 3/8 = 0.375 | 1/8 = 0.125
F(x) | 0.125       | 0.500       | 0.875       | 1.000

[Diagram: the 3-digit outcomes 000–999 partitioned at the cumulative values F(0), F(1), F(2) into the RV values $x_1 = 0$, $x_2 = 1$, $x_3 = 2$, $x_4 = 3$]

i.e. all the outcomes 0 – 124 are assigned the RV value 0
all the outcomes 125 – 499 are assigned the RV value 1
all the outcomes 500 – 874 are assigned the RV value 2
all the outcomes 875 – 999 are assigned the RV value 3

Thus RV value 0 occurs with probability 1/8, RV value 1 with probability 3/8, RV value 2 with probability 3/8, and RV value 3 with probability 1/8.
Thus the sequence of outcomes generated randomly (with equal likelihood)
197, 365, 157, 520, 946, 951, 948, 568, 586, 089
is interpreted as the random values (number of heads)
1, 1, 1, 2, 3, 3, 3, 2, 2, 0
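This table-lookup scheme is exactly inverse-CDF sampling. A minimal sketch (function name ours) that reproduces the mapping from 3-digit outcomes to RV values:

```python
import numpy as np

def sample_discrete(cdf_values, rv_values, outcomes):
    """Map equally likely 3-digit outcomes (0-999) to RV values via the CDF."""
    cuts = [int(round(1000 * F)) for F in cdf_values]  # 125, 500, 875, 1000
    return [rv_values[np.searchsorted(cuts, o, side='right')] for o in outcomes]

F = [0.125, 0.500, 0.875, 1.000]
outcomes = [197, 365, 157, 520, 946, 951, 948, 568, 586, 89]
print(sample_discrete(F, [0, 1, 2, 3], outcomes))
# [1, 1, 1, 2, 3, 3, 3, 2, 2, 0]
```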
Table 7 in Appendix B presents a long list of the integers 0, …, 9 generated with equal likelihood. One can use the table to randomly generate lists of 1-digit, 2-digit, 3-digit, etc. outcomes (by taking non-overlapping combinations and starting in different places).
e.g. RV = number of cars arriving at a toll booth per minute

x    | 0     | 1     | 2     | 3     | 4     | 5     | 6     | 7     | 8     | 9
f(x) | 0.082 | 0.205 | 0.256 | 0.214 | 0.134 | 0.067 | 0.028 | 0.010 | 0.003 | 0.001
F(x) | 0.082 | 0.287 | 0.543 | 0.757 | 0.891 | 0.958 | 0.986 | 0.996 | 0.999 | 1.000

[Diagram: the unit interval partitioned at F(0), F(1), F(2), F(3), F(4), … into the RV values 0, 1, 2, 3, 4, …]
Classical probability versus frequentist probability

Recall: classical probability counts outcomes and assumes all outcomes occur with equal likelihood. Frequentist probability measures the frequency of occurrence of outcomes from past "experiments". So what do two dice really do when thrown at the same time?

Classical probability:
distinct (i.e. different colored) dice: There are 36 distinct outcomes, each appearing with equal likelihood; therefore the (unordered) outcome 1,2 has probability 2/36.
identical dice: There are 21 distinct outcomes, each appearing with equal likelihood; therefore the (unordered) outcome 1,2 has probability 1/21.

Frequentist probability:
distinct dice: The (unordered) outcome 1,2 has measured probability 2/36, in agreement with classical probability.
identical dice: The (unordered) outcome 1,2 has measured probability 2/36 (!!), in disagreement with classical probability.

For identical dice, the classical view of probability for throwing two identical dice assumes all 21 outcomes occur with equal probability. This is not what occurs in practice. In practice, each of the (unordered) outcomes i, j where i ≠ j occurs more frequently than the outcomes i, i.
"Why" is the frequentist approach correct? Clearly the frequency of getting unordered outcomes cannot depend on the color of the dice being thrown (i.e. the color of the dice cannot affect frequency of occurrence). Thus two identical dice must generate outcomes with the same frequencies as two differently-colored dice. Note: that is not to say that the classical probability view is completely wrong. The classical view correctly counts the number of different outcomes in each case (identical and different dice). However, it computes probability incorrectly for the identical case. The frequentist view concentrates on assigning probabilities to each outcome. In the frequentist view, the number of outcomes for two identical dice is still 21, but the probabilities assigned to the i,i and i,j outcomes are different.
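A simulation makes the point directly (our own sketch): generate many throws of two dice as ordered pairs, then tabulate the unordered outcomes.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
d1 = rng.integers(1, 7, n)  # the dice are physically distinct, whether or not
d2 = rng.integers(1, 7, n)  # we can tell them apart

lo, hi = np.minimum(d1, d2), np.maximum(d1, d2)  # unordered outcome {lo, hi}
print(np.mean((lo == 1) & (hi == 2)))  # ~ 2/36 = 0.0556, not 1/21 = 0.0476
print(np.mean((lo == 1) & (hi == 1)))  # ~ 1/36 = 0.0278
```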