PROBABILITY
DEFINITIONS AND AXIOMS
Probability theory deals with the study of random phenomena which, under repeated
experiments, yield different outcomes that have certain underlying patterns about
them. The notion of an experiment assumes a set of repeatable conditions that
allow any number of identical repetitions. When an experiment is performed under
these conditions, certain elementary events ωi occur in different but completely
uncertain ways. We can assign a nonnegative number P(ωi), the probability of the
event ωi, in various ways:
Laplace's Classical Definition: The probability of an event A is defined
a priori, without actual experimentation, as

P(A) = (Number of outcomes favorable to A) / (Total number of possible outcomes)

provided all these outcomes are equally likely.
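Whenever the equally likely outcomes can be listed explicitly, the classical definition can be checked by direct enumeration. A minimal Python sketch (the die event is illustrative, not from the text):

```python
from fractions import Fraction

def classical_probability(outcomes, event):
    """Laplace's definition: favorable outcomes over total outcomes.
    Valid only when every outcome in `outcomes` is equally likely."""
    favorable = sum(1 for outcome in outcomes if event(outcome))
    return Fraction(favorable, len(outcomes))

# Illustrative event on a fair die: "the outcome is even".
die = range(1, 7)
p_even = classical_probability(die, lambda x: x % 2 == 0)
print(p_even)  # 1/2
```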
Examples
1. Consider a box with n white and m red balls. In this case, there are two
elementary outcomes: white ball or red ball.

Probability of "selecting a white ball" = n / (n + m).

2. We can use the above classical definition to determine the probability that
a given number is divisible by a prime p.

If p is a prime number, then every pth number (beginning from p) is divisible
by p. Thus among p consecutive integers there is one favorable outcome, and hence

P{a given number is divisible by a prime p} = 1/p.
Relative Frequency Definition: The probability of an event A is defined as

P(A) = lim (n→∞) nA/n

where nA is the number of occurrences of A and n is the total number of
trials. We can use the relative frequency definition to derive the probability
that a given number is divisible by a prime p as well.

To do this we argue that among the integers 1, 2, 3, …, n only the numbers
p, 2p, 3p, … are divisible by p. Thus there are n/p such numbers between 1
and n. Hence

P{a given number N is divisible by a prime p} = lim (n→∞) (n/p)/n = 1/p.
In a similar manner, it follows that

P{p² divides any given number N} = 1/p²

and

P{pq divides any given number N} = 1/(pq).
The axiomatic approach to probability, due to Kolmogorov, is based on a
set of axioms. This approach is recognized as superior to the above two
definitions, as it provides a solid foundation for complicated applications.
Before stating the axioms of probability we need to define the concepts of
sets, subsets, etc.
Sample Space and Events

Sample Space
The set of all possible outcomes, denoted by Ω.

Sample Points
The individual outcomes are called sample points.

Events
Certain subsets of Ω are referred to as events.

[Figure: the sample space Ω drawn as a region of sample points, with an event
shown as a subset of those points.]
Example

Consider measuring the lifetime of a light bulb. Since any nonnegative real
number can be considered as the lifetime of the light bulb (in hours), the
sample space is Ω = {x : x ≥ 0}.

The event E = {x : x ≥ 100} is the event that the light bulb lasts at least
100 hours.
The event F = {x : x ≤ 1000} is the event that it lasts at most 1000 hours.
The event G = {505.5} is the event that it lasts exactly 505.5 hours.
A bus with a capacity of 34 passengers stops at a station some time between
11:00 AM and 11:40 AM every day. What is the sample space of the experiment
that consists of counting the number of passengers on the bus and measuring
the arrival time of the bus?

Ans: Ω = {(x, t) : 0 ≤ x ≤ 34, 11:00 ≤ t ≤ 11:40}

What is the event F = {(27, t) : 11⅓ ≤ t ≤ 11⅔}?

Ans: The bus stops at the station between 11:20 and 11:40 with exactly 27
passengers on board.
Relations of Events

Subset
An event E is said to be a subset of the event F if, whenever E occurs,
F also occurs: E ⊆ F.

Equality
Events E and F are said to be equal if the occurrence of E implies the
occurrence of F, and vice versa: E = F if and only if E ⊆ F and F ⊆ E.

Intersection
An event is called the intersection of two events E and F if it occurs only
when E and F occur simultaneously. It is denoted by E ∩ F.
General form: E1 ∩ E2 ∩ … ∩ En.
Relations of Events (Cont'd)

Union
An event is called the union of events E and F if it occurs whenever at least
one of them occurs. It is denoted by E ∪ F.
General form: E1 ∪ E2 ∪ … ∪ En.

Complement
An event is called the complement of the event E if it occurs only when E
does not occur; it is denoted by E^c.

Difference
An event is called the difference of two events E and F if it occurs whenever
E occurs but F does not; it is denoted by E − F.

Note that E^c = Ω − E and E − F = E ∩ F^c.
Relations of Events (Cont'd)

Certainty
An event is called certain if its occurrence is inevitable. The sample space
Ω is a certain event.

Impossibility
An event is called impossible if its nonoccurrence is certain. The empty set
∅ is an impossible event.

Mutual Exclusiveness
If the joint occurrence of two events E and F is impossible, we say that E
and F are mutually exclusive. That is, E ∩ F = ∅: their intersection is the
impossible event, the empty set.
Example
At a busy international airport, arriving planes land on a first-come
first-served basis. Let
E = there are at least 5 planes waiting to land,
F = there are at most 3 planes waiting to land,
H = there are exactly 2 planes waiting to land. Then:
E^c is the event that at most 4 planes are waiting to land.
F^c is the event that at least 4 planes are waiting to land.
E is a subset of F^c; that is, E ∩ F^c = E.
H is a subset of F; that is, F ∩ H = H.
E and F, and E and H, are mutually exclusive: they cannot occur together.
F ∩ H^c is the event that the number of planes waiting to land is 0, 1, or 3.
The totality of all elementary events ωi, known a priori, constitutes a set Ω,
the set of all experimental outcomes:

Ω = {ω1, ω2, …, ωk, …}

Ω has subsets A, B, C, …. Recall that if A is a subset of Ω, then ω ∈ A
implies ω ∈ Ω. From A and B, we can generate other related subsets.

The union A ∪ B (also denoted by A + B) and intersection A ∩ B (also denoted
by AB) of two subsets are defined as

A ∪ B = {ω | ω ∈ A or ω ∈ B}
A ∩ B = {ω | ω ∈ A and ω ∈ B}

The complement of a subset A is denoted by A^c and represents the subset of
all elements that are not elements of A:

A^c = {ω | ω ∉ A}

We further define the "empty set" ∅, which contains no elements of Ω.
Clearly, A ∪ A^c = Ω and A ∩ A^c = ∅.
The operations with subsets can be conveniently visualized using graphical
representations called Venn diagrams.

[Figure: Venn diagrams illustrating A ⊆ B, A ∪ B, A ∩ B, and the
complement A^c.]

If A ∩ B = ∅, the empty set, then A and B are said to be mutually
exclusive (M.E.).

A partition of Ω is a collection of mutually exclusive subsets of Ω such
that their union is Ω:

Ai ∩ Aj = ∅ for i ≠ j, and A1 ∪ A2 ∪ … ∪ An = Ω.

[Figure: two mutually exclusive sets A and B, and a partition A1, A2, …, An
of Ω.]
ALGEBRA

Commutative laws
A ∪ B = B ∪ A,  A ∩ B = B ∩ A

Associative laws
(A ∪ B) ∪ C = A ∪ (B ∪ C)
(A ∩ B) ∩ C = A ∩ (B ∩ C)

Distributive laws
(A ∪ B) ∩ C = (A ∩ C) ∪ (B ∩ C)
(A ∩ B) ∪ C = (A ∪ C) ∩ (B ∪ C)
Useful Laws

Commutative Laws: E ∩ F = F ∩ E,  E ∪ F = F ∪ E

Associative Laws:
E ∪ (F ∪ G) = (E ∪ F) ∪ G,  E ∩ (F ∩ G) = (E ∩ F) ∩ G

Distributive Laws:
(E ∪ F) ∩ H = (E ∩ H) ∪ (F ∩ H),  (E ∩ F) ∪ H = (E ∪ H) ∩ (F ∪ H)

De Morgan's First Law:
(E ∪ F)^c = E^c ∩ F^c, and in general (E1 ∪ … ∪ En)^c = E1^c ∩ … ∩ En^c

De Morgan's Second Law:
(E ∩ F)^c = E^c ∪ F^c, and in general (E1 ∩ … ∩ En)^c = E1^c ∪ … ∪ En^c

[Figure: Venn diagrams illustrating De Morgan's laws.]
Often it is meaningful to talk about at least some of the subsets of Ω as
events, for which we must have a mechanism to compute their probabilities.

Example: Consider the experiment where two coins are simultaneously tossed.
The various elementary events are

ω1 = (H, H), ω2 = (H, T), ω3 = (T, H), ω4 = (T, T)

and Ω = {ω1, ω2, ω3, ω4}.

The subset A = {ω1, ω2, ω3} is the same as "heads has occurred at least once"
and qualifies as an event.

Suppose two subsets A and B are both events; then consider:
"Does an outcome belong to A or B?" — A ∪ B
"Does an outcome belong to A and B?" — A ∩ B
"Does an outcome fall outside A?" — A^c

Thus the sets A ∪ B, A ∩ B, A^c, etc., also qualify as events.

Assuming that the probability Pi  P(i ) of elementary


outcomes  i of  are apriori defined, how does one assign
probabilities to more ‘complicated’ events such as A, B, AB,
etc., above?
The three axioms of probability defined below can be used
to achieve that goal.
Axioms of Probability
For any event A, we assign a number P(A), called the probability of the event
A. This number satisfies the following three conditions, which act as the
axioms of probability:

(i) P(A) ≥ 0 (probability is a nonnegative number)
(ii) P(Ω) = 1 (probability of the whole sample space is unity)
(iii) If A ∩ B = ∅, then P(A ∪ B) = P(A) + P(B)

Note that (iii) states that if A and B are mutually exclusive (M.E.) events,
the probability of their union is the sum of their probabilities.
The following conclusions follow from these axioms:

a. Since A ∪ A^c = Ω, we have, using (ii),
P(A ∪ A^c) = P(Ω) = 1.
But A ∩ A^c = ∅, so using (iii),
P(A ∪ A^c) = P(A) + P(A^c) = 1, or P(A^c) = 1 − P(A).

b. Similarly, for any A, A ∩ ∅ = ∅. Hence it follows that
P(A ∪ ∅) = P(A) + P(∅).
But A ∪ ∅ = A, and thus P(∅) = 0.

c. Suppose A and B are not mutually exclusive (M.E.). How does one compute
P(A ∪ B)?
To compute the above probability, we should re-express A ∪ B in terms of
M.E. sets so that we can make use of the probability axioms. With the aid of
a Venn diagram we have

A ∪ B = A ∪ (A^c ∩ B), where A and A^c ∩ B are clearly M.E. events.

Thus, using axiom (iii), we obtain

P(A ∪ B) = P(A ∪ (A^c ∩ B)) = P(A) + P(A^c ∩ B).

To compute P(A^c ∩ B), we can express B as

B = B ∩ Ω = B ∩ (A ∪ A^c) = (B ∩ A) ∪ (B ∩ A^c).

Thus P(B) = P(B ∩ A) + P(B ∩ A^c), since B ∩ A and B ∩ A^c are M.E. events;
hence P(A^c ∩ B) = P(B) − P(A ∩ B), and we obtain the desired probability as

P(A ∪ B) = P(A) + P(B) − P(A ∩ B).
Relations between the "sizes" of subsets: A ⊆ B is read as "B contains A",
i.e., all elements of A are also elements of B. It follows that:

(1) If A ⊆ B and B ⊆ A, then A = B.

(2) If A ⊆ B, then P(A) ≤ P(B).
Mutually Exclusive Events

Two events, A and B, are mutually exclusive if they cannot occur at the same
time.

[Figure: two Venn diagrams — disjoint sets A and B (mutually exclusive) and
overlapping sets A and B (not mutually exclusive).]
Mutually Exclusive Events

Example:
Decide if the two events are mutually exclusive.
Event A: Roll a number less than 3 on a die. Event B: Roll a 4 on a die.

Here A = {1, 2} and B = {4} are disjoint. These events cannot happen at the
same time, so the events are mutually exclusive.
Mutually Exclusive Events

Example:
Decide if the two events are mutually exclusive.
Event A: Select a Jack from a deck of cards. Event B: Select a heart from a
deck of cards.

Because the card can be a Jack and a heart at the same time (the Jack of
hearts), the events are not mutually exclusive.
The Addition Rule
P(A ∪ B) = P(A or B).
The probability that event A or B will occur is given by
P(A or B) = P(A) + P(B) − P(A and B).
If events A and B are mutually exclusive, then the rule can be simplified to
P(A or B) = P(A) + P(B).

Example:
You roll a die. Find the probability that you roll a number less than 3 or a 4.
The events are mutually exclusive.
P(roll a number less than 3 or roll a 4)
= P(number is less than 3) + P(4)
= 2/6 + 1/6 = 3/6 = 0.5
The Addition Rule

Example:
A card is randomly selected from a deck of cards. Find the probability that
the card is a Jack or a heart.
The events are not mutually exclusive because the Jack of hearts occurs in
both events.

P(select a Jack or select a heart)
= P(Jack) + P(heart) − P(Jack of hearts)
= 4/52 + 13/52 − 1/52 = 16/52 ≈ 0.308
The Addition Rule
Example:
100 college students were surveyed and asked how many hours a week they spent
studying. The results are in the table below. Find the probability that a
student spends between 5 and 10 hours or more than 10 hours studying.

          Less than 5   5 to 10   More than 10   Total
Male          11           22          16          49
Female        13           24          14          51
Total         24           46          30         100

The events are mutually exclusive.
P(5 to 10 hours or more than 10 hours) = P(5 to 10) + P(more than 10)
= 46/100 + 30/100 = 76/100 = 0.76
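The table computation can be reproduced directly from the counts; a small sketch (the column labels are shorthand, not from the text):

```python
# Survey counts from the table (rows: gender, columns: weekly study hours).
counts = {
    "male":   {"<5": 11, "5-10": 22, ">10": 16},
    "female": {"<5": 13, "5-10": 24, ">10": 14},
}
total = sum(sum(row.values()) for row in counts.values())  # 100 students

def p(column):
    """Probability that a randomly chosen student falls in the given column."""
    return sum(row[column] for row in counts.values()) / total

# The columns are mutually exclusive, so the addition rule is a plain sum.
p_5_10_or_more = p("5-10") + p(">10")
print(p_5_10_or_more)  # 0.76
```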
EXAMPLES
1. You are in a restaurant and order 2 dishes. With probability 0.6 you will
like the first dish; with probability 0.4 you will like the second dish; with
probability 0.3 you will like both of them. What is the probability you will
like neither dish?
Let Ai be the event "you like dish i". Then the probability you like at least
one is
P(A1 ∪ A2) = P(A1) + P(A2) − P(A1 ∩ A2) = 0.6 + 0.4 − 0.3 = 0.7
The event that you like neither dish is the complement of liking at least
one, so
P(you will like neither dish) = P((A1 ∪ A2)^c) = 1 − P(A1 ∪ A2) = 0.3
2. A die is thrown twice and the number on each throw is recorded. Assuming
the die is fair, what is the probability of obtaining at least one 6?
There are clearly 6 possible outcomes for the first throw and 6 for the
second throw. By the counting principle, there are 36 possible outcomes for
the two throws. Let Ai be the event "a 6 is obtained on throw i". The
probability we are interested in is

P(A1 ∪ A2) = P(A1) + P(A2) − P(A1 ∩ A2) = 1/6 + 1/6 − 1/36 = 11/36
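The 11/36 answer can be confirmed by enumerating all 36 equally likely outcomes:

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of two throws of a fair die.
outcomes = list(product(range(1, 7), repeat=2))
at_least_one_six = [o for o in outcomes if 6 in o]
p = Fraction(len(at_least_one_six), len(outcomes))
print(p)  # 11/36
```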
3. Calculate P(A ∪ B ∪ C).

We note that A ∪ B ∪ C = (A ∪ B) ∪ C, hence
P(A ∪ B ∪ C) = P(A ∪ B) + P(C) − P[(A ∪ B) ∩ C]
= P(A) + P(B) + P(C) − P(A ∩ B) − P[(A ∪ B) ∩ C]

The last term can be written as
(A ∪ B) ∩ C = (A ∩ C) ∪ (B ∩ C), and its probability is
P[(A ∪ B) ∩ C] = P(A ∩ C) + P(B ∩ C) − P[(A ∩ C) ∩ (B ∩ C)]
Note that (A ∩ C) ∩ (B ∩ C) = A ∩ B ∩ C.

Combining all terms, the answer is obtained as

P(A ∪ B ∪ C) = P(A) + P(B) + P(C)
− P(A ∩ B) − P(A ∩ C) − P(B ∩ C) + P(A ∩ B ∩ C)

Example: Calculate the probability of obtaining an arbitrarily chosen number
at least once in three successive throws of a fair die. Calculate also the
probabilities of exactly 1, 2, and 3 favorable outcomes.

Answer: Let A, B, C refer to the desired outcome on the first, second, and
third throws, respectively. The probabilities of the events A, B, and C are
easily obtained as
P(A) = P(B) = P(C) = 1/6.
Consider the event where the chosen number is the outcome in at least one of
the throws. The probability of obtaining at least one favorable outcome is
91/6³. Since the throws are independent,
P(A ∩ B) = P(A ∩ C) = P(B ∩ C) = 1/6², because P(A ∩ B) = P(A)P(B).
Note also that P(A ∩ B ∩ C) = 1/6³. Substituting these results we obtain

P(A ∪ B ∪ C) = 3·(1/6) − 3·(1/6²) + 1/6³ = 91/6³ ≈ 0.42

P(1) = P(A)·P((B ∪ C)^c) + P(B)·P((A ∪ C)^c) + P(C)·P((A ∪ B)^c) = 75/6³, since
P((A ∪ B)^c) = 1 − P(A ∪ B) = 1 − [P(A) + P(B) − P(A ∩ B)]
= 1 − 2/6 + 1/6² = 25/6², etc.
Similarly,
P(2) = P(A ∩ B)·P(C^c) + P(A ∩ C)·P(B^c) + P(B ∩ C)·P(A^c) = 3·(5/6³) = 15/6³

P(3) = P(A ∩ B ∩ C) = 1/6³, and P(1) + P(2) + P(3) = P(A ∪ B ∪ C).


Example: Chevalier de Méré, who was a keen gambler, realized empirically that
he could earn money by betting that, if you throw a fair die 4 times, at
least one 6 will appear. What is the probability of this event?
Answer: The complementary event is to never obtain a 6 when you throw the
die 4 times. Since the throws are independent,
P(A^c ∩ B^c ∩ C^c ∩ D^c) = P(A^c)P(B^c)P(C^c)P(D^c) = (5/6)⁴.
The probability of interest is given by 1 − (5/6)⁴ ≈ 0.5177.

Example: Chevalier de Méré decided to extend this "trick" to two dice and
conjectured that he would still make money by betting that, if you throw two
dice 24 times, at least one double 6 will appear. He started losing money and
asked Fermat and Pascal to explain why.
Answer: The complementary event is to never obtain a double 6 when you throw
the two dice 24 times; it has probability (35/36)²⁴.
Hence the probability of interest is given by 1 − (35/36)²⁴ ≈ 0.491.
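Both of de Méré's bets can be evaluated exactly with the same complement rule:

```python
from fractions import Fraction

# Exact complement computations for de Méré's two bets.
p_single = 1 - Fraction(5, 6) ** 4       # at least one 6 in 4 throws of one die
p_double = 1 - Fraction(35, 36) ** 24    # at least one double 6 in 24 double throws

print(float(p_single))  # ~0.5177, a favorable bet
print(float(p_double))  # ~0.4914, an unfavorable bet
```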
Conditional Probability and Independence
In N independent trials, suppose NA, NB, NAB denote the number of times
events A, B, and A ∩ B occur, respectively. According to the frequency
interpretation of probability, for large N

P(A) ≈ NA/N, P(B) ≈ NB/N, P(AB) ≈ NAB/N.

Among the NA occurrences of A, only NAB of them are also found among the NB
occurrences of B. Thus the ratio

NAB/NB = (NAB/N)/(NB/N) = P(AB)/P(B)

is a measure of "the event A given that B has already occurred". We denote
this conditional probability by
P(A|B) = probability of "the event A given that B has occurred".
We define

P(A|B) = P(AB)/P(B),

provided P(B) ≠ 0. As we show below, the above definition satisfies all the
probability axioms discussed earlier.

We have:

(i) P(A|B) = P(AB)/P(B) ≥ 0, since P(AB) ≥ 0 and P(B) > 0.

(ii) P(Ω|B) = P(Ω ∩ B)/P(B) = P(B)/P(B) = 1, since Ω ∩ B = B.

(iii) Suppose A ∩ C = ∅. Then
P(A ∪ C|B) = P((A ∪ C) ∩ B)/P(B) = P(AB ∪ CB)/P(B).
But AB ∩ CB = ∅, hence P(AB ∪ CB) = P(AB) + P(CB), so
P(A ∪ C|B) = P(AB)/P(B) + P(CB)/P(B) = P(A|B) + P(C|B),

satisfying all probability axioms and thus defining a legitimate probability
measure.

Properties of Conditional Probability:

a. If B ⊆ A, then AB = B, and
P(A|B) = P(AB)/P(B) = P(B)/P(B) = 1,
since if B ⊆ A, the occurrence of B implies the automatic occurrence of the
event A. As an example, consider the events
A = {outcome is even}, B = {outcome is 2}
in a die-tossing experiment. Then B ⊆ A, and P(A|B) = 1.

b. If A ⊆ B, then AB = A, and
P(A|B) = P(AB)/P(B) = P(A)/P(B) ≥ P(A).
c. Suppose A and B are independent. Then

P(A|B) = P(AB)/P(B) = P(A)P(B)/P(B) = P(A).

Thus if A and B are independent, the event that B has occurred does not shed
any more light on the event A: it makes no difference to A whether B has
occurred or not.
Independent Events

Two events are independent if the occurrence of one of the events does not
affect the probability of the other event. Two events A and B are
independent if
P(B|A) = P(B) or if P(A|B) = P(A).
Events that are not independent are dependent.

Example:
Decide if the events are independent or dependent.
Selecting a diamond from a standard deck of cards (A), putting it back in the
deck, and then selecting a spade from the deck (B).

P(B|A) = 13/52 = 1/4 and P(B) = 13/52 = 1/4. The occurrence of A does not
affect the probability of B, so the events are independent.
Example: A box contains 6 white and 4 black balls. Remove two balls at random
without replacement. What is the probability that the first one is white and
the second one is black?
Let W1 = "first ball removed is white"
and B2 = "second ball removed is black".
We need P(W1 ∩ B2). We have W1 ∩ B2 = W1B2 = B2W1.
Using the conditional probability rule,

P(W1B2) = P(B2W1) = P(B2|W1)P(W1).

But P(W1) = 6/(6 + 4) = 6/10 = 3/5, and P(B2|W1) = 4/(5 + 4) = 4/9,
and hence

P(W1B2) = (3/5)·(4/9) = 12/45 ≈ 0.27.
Are the events W1 and B2 independent? Our common sense says no.
To verify this we need to compute P(B2). Of course the fate of the second
ball very much depends on that of the first ball. The first ball has two
options: W1 = "first ball is white" or B1 = "first ball is black".
Note that W1 ∩ B1 = ∅ and W1 ∪ B1 = Ω. Hence W1 together with B1 form a
partition. Thus

P(B2) = P(B2|W1)P(W1) + P(B2|B1)P(B1)
= (4/9)·(3/5) + (3/9)·(2/5) = 12/45 + 6/45 = 18/45 = 2/5,

and P(B2)P(W1) = (2/5)·(3/5) = 6/25 ≠ P(B2W1) = 12/45.
As expected, the events W1 and B2 are dependent.
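The dependence of W1 and B2 can be confirmed by enumerating every ordered pair of distinct balls, all equally likely (a sketch; the ball labels are illustrative):

```python
from fractions import Fraction
from itertools import permutations

# 6 white and 4 black balls; draw two without replacement.
balls = ["W"] * 6 + ["B"] * 4
draws = list(permutations(range(10), 2))  # 90 ordered pairs of distinct balls

def prob(event):
    """Probability of an event over all equally likely ordered draws."""
    return Fraction(sum(1 for d in draws if event(d)), len(draws))

p_w1 = prob(lambda d: balls[d[0]] == "W")                           # 3/5
p_b2 = prob(lambda d: balls[d[1]] == "B")                           # 2/5
p_w1_b2 = prob(lambda d: balls[d[0]] == "W" and balls[d[1]] == "B") # 12/45

print(p_w1, p_b2, p_w1_b2)
print(p_w1_b2 == p_w1 * p_b2)  # False: the events are dependent
```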
Multiplication Rule

The probability that two events A and B will occur in sequence is
P(A and B) = P(AB) = P(A)·P(B|A).
If events A and B are independent, then the rule can be simplified to
P(A and B) = P(A)·P(B).

Example:
Two cards are selected, without replacement, from a deck. Find the
probability of selecting a diamond, and then selecting a spade.
Because the card is not replaced, the events are dependent.
P(diamond and spade) = P(diamond)·P(spade|diamond)
= (13/52)·(13/51) = 169/2652 ≈ 0.064
Multiplication Rule

Example:
A die is rolled and two coins are tossed.
Find the probability of rolling a 5 and flipping two tails.
P(rolling a 5) = 1/6.
Whether or not the roll is a 5, P(tail) = 1/2, so the events are independent.

P(5 and T and T) = P(5)·P(T)·P(T)
= (1/6)·(1/2)·(1/2) = 1/24 ≈ 0.042
BAYES THEOREM
Recall
P(B|A) = P(B ∩ A)/P(A) = P(A ∩ B)/P(A),
P(A|B) = P(B ∩ A)/P(B) = P(A ∩ B)/P(B).
Hence we can write

P(A|B) = [P(B|A)/P(B)]·P(A)

This equation is known as Bayes' theorem and has an interesting
interpretation: P(A) represents the a priori probability of the event A.
Suppose B has occurred, and assume that A and B are not independent. How can
this new information be used to update our knowledge about A? Bayes' rule
takes into account the new information ("B has occurred") and gives out the
a posteriori probability of A given B.
We can also view the event B as new knowledge obtained from a fresh
experiment. We know something about A as P(A). The new information is
available in terms of B. The new information should be used to improve our
knowledge/understanding of A. Bayes' theorem gives the exact mechanism for
incorporating such new information.
A more general version of Bayes' theorem involves a partition of Ω:

P(Ai|B) = P(B|Ai)P(Ai)/P(B)
= P(B|Ai)P(Ai) / [P(B|A1)P(A1) + … + P(B|An)P(An)]

To obtain the above formula, recall the definition of a partition:
Ai ∩ Aj = ∅ for i ≠ j, and A1 ∪ A2 ∪ … ∪ An = Ω.
Thus Ai, i = 1, …, n, are mutually exclusive events with a priori
probabilities P(Ai), i = 1, …, n. With the new information "B has occurred",
the information about Ai can be updated by the n conditional probabilities
P(B|Ai), i = 1, …, n, using Bayes' formula.
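The partition form of Bayes' theorem translates directly into code; a minimal sketch (the two-hypothesis numbers are illustrative, not from the text):

```python
def bayes_posterior(priors, likelihoods):
    """Posterior P(Ai|B) over a partition A1..An, given priors P(Ai) and
    likelihoods P(B|Ai); the denominator is the total probability P(B)."""
    p_b = sum(p * l for p, l in zip(priors, likelihoods))
    return [p * l / p_b for p, l in zip(priors, likelihoods)]

# Hypothetical two-hypothesis case: equally likely causes, unequal evidence.
posterior = bayes_posterior([0.5, 0.5], [0.9, 0.3])
print(posterior)  # [0.75, 0.25]
```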
Example: Two boxes B1 and B2 contain 100 and 200 light bulbs, respectively.
The first box (B1) has 15 defective bulbs and the second has 5. Suppose a
box is selected at random and one bulb is picked out.
(a) What is the probability that it is defective?

Answer: Note that box B1 has 85 good and 15 defective bulbs. Similarly, box
B2 has 195 good and 5 defective bulbs. Let D = "defective bulb is picked
out". Then
P(D|B1) = 15/100 = 0.15, P(D|B2) = 5/200 = 0.025.
Since a box is selected at random, the boxes are equally likely:
P(B1) = P(B2) = 1/2.
Thus B1 and B2 form a partition, and we can write
P(D) = P(D|B1)P(B1) + P(D|B2)P(B2)
= 0.15·(1/2) + 0.025·(1/2) = 0.0875.
Thus, there is about a 9% probability that a bulb picked at random is
defective.
(b) Suppose we test the bulb and it is found to be defective. What is the
probability that it came from box 1, i.e., P(B1|D)?
Answer: Notice that initially P(B1) = 0.5; then we picked a box at random
and tested a bulb that turned out to be defective. Can this information shed
some light on which box we picked? Using Bayes' rule we obtain

P(B1|D) = P(D|B1)P(B1)/P(D) = (0.15 × 1/2)/0.0875 ≈ 0.857.

Thus P(B1|D) ≈ 0.857 > 0.5, and indeed it is now more likely that we chose
box 1 rather than box 2. (Recall that box 1's proportion of defective bulbs
is six times that of box 2.)
Example: HIV blood tests are very accurate: when people are infected, the
HIV test is 99.8% positive; when they are not infected, the test is 99.99%
negative. In the UK, the prevalence of HIV in adults with no risk factors is
around 1 in 10000. For an adult with no risk factors, (a) what is the
probability of having HIV given a positive test result? (b) what is the
probability of having HIV given a negative test result?

Answer: (a) Consider the events A = "adult has HIV" and B = "HIV test is
positive". We are interested in P(A|B). Noting that A and its complement
form a partition, i.e., A ∪ A^c = Ω and A ∩ A^c = ∅, we make use of Bayes'
rule:

P(A|B) = P(B|A)P(A)/P(B) = P(B|A)P(A) / [P(B|A)P(A) + P(B|A^c)P(A^c)]

Here we have
P(A) = 1/10000 = 1 − P(A^c); P(B|A) = 0.998; P(B|A^c) = 0.0001.
So

P(A|B) = (0.998/10000) / (0.998/10000 + 0.0001 × 0.9999) ≈ 0.5

(b) The probability of having HIV given a negative test result is P(A|B^c),
which can be calculated using Bayes' rule as

P(A|B^c) = P(B^c|A)P(A)/P(B^c)
= P(B^c|A)P(A) / [P(B^c|A)P(A) + P(B^c|A^c)P(A^c)]

where
P(B^c|A) = 1 − P(B|A) = 1 − 0.998 = 0.002; P(B^c|A^c) = 0.9999.
So

P(A|B^c) = (0.002/10000) / (0.002/10000 + 0.9999 × 0.9999) ≈ 2 × 10⁻⁷

There is a 2 × 10⁻⁵ % chance that the person has HIV even though he/she
tested negative.
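Both posterior probabilities in this example follow from the same two-line Bayes computation:

```python
# HIV example: prior 1/10000, sensitivity 0.998, specificity 0.9999.
prior = 1 / 10000
sens = 0.998    # P(positive | infected)
spec = 0.9999   # P(negative | not infected)

# (a) Posterior given a positive test.
p_pos = sens * prior + (1 - spec) * (1 - prior)
p_hiv_given_pos = sens * prior / p_pos

# (b) Posterior given a negative test.
p_neg = (1 - sens) * prior + spec * (1 - prior)
p_hiv_given_neg = (1 - sens) * prior / p_neg

print(p_hiv_given_pos)  # ≈ 0.5
print(p_hiv_given_neg)  # ≈ 2e-7
```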

Example: Three doors A, B, C are closed. Behind one of them there is a
Ferrari; behind the others there is a goat. The player selects one door, say
A for the sake of simplicity. The TV presenter, who knows where the car is,
tells him that the car is not behind door B and offers him the possibility to
revise his choice (that is, to select door C).
Should the player revise his choice?
Answer: Let A (resp. B, C) be the event "the Ferrari is behind door A"
(resp. B, C). Let E be the event "the presenter tells the player that it is
not behind door B". We want to compute P(A|E).
We have P(A) = P(B) = P(C) = 1/3 and
P(E|A) = 1/2; P(E|B) = 0; P(E|C) = 1.
Indeed, if the car is behind A, the presenter has the choice between B and C.
If the car is behind C, the presenter can only pick door B, as he/she cannot
select A.
Hence we have

P(E) = P(E|A)P(A) + P(E|B)P(B) + P(E|C)P(C)
= (1/2)·(1/3) + 0 + 1·(1/3) = 1/2

and

P(A|E) = P(E|A)P(A)/P(E) = (1/2)·(1/3)/(1/2) = 1/3.

Since P(C|E) = 1 − P(A|E) = 1 − 1/3 = 2/3,
you should definitely revise your choice.
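The 1/3 versus 2/3 split can also be checked by simulation; a sketch in which the player always starts with door 0:

```python
import random

def monty_hall_trial(switch, rng):
    """One round: the car is placed uniformly at random, the player picks
    door 0, and the presenter opens a goat door (choosing at random between
    the two goats when the player's first pick is correct)."""
    car = rng.randrange(3)
    pick = 0
    if car == pick:
        opened = rng.choice([1, 2])                    # both hide goats
    else:
        opened = next(d for d in (1, 2) if d != car)   # the only goat door left
    if switch:
        pick = next(d for d in range(3) if d not in (pick, opened))
    return pick == car

rng = random.Random(42)  # fixed seed for reproducibility
n = 100_000
p_switch = sum(monty_hall_trial(True, rng) for _ in range(n)) / n
p_stay = sum(monty_hall_trial(False, rng) for _ in range(n)) / n
print(p_switch, p_stay)  # close to 2/3 and 1/3
```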


Example: In a game of bridge, West has no aces. What is the probability of
his partner's having (a) no aces? (b) exactly 1 ace? (c) 2 or more aces?
(d) What would the probabilities be if West had exactly 1 ace?

a) There are 52 − 13 = 39 cards outside West's hand, of which 39 − 4 = 35 are
not aces. Thus

P(no aces) = C(35,13)/C(39,13) = (26·25·24·23)/(39·38·37·36) ≈ 0.1818

b) Suppose the partner has one of the four aces; the remaining 12 cards
should have no aces. Thus

P(one ace) = 4·C(35,12)/C(39,13) = 4·(26·25·24·13)/(39·38·37·36) ≈ 0.4109

Let us also calculate the probabilities of the partner having exactly 2, 3,
and 4 aces. We recall that k aces can be selected among the four in C(4,k)
ways; thus

P(2 aces) = 6·C(35,11)/C(39,13) = 6·(26·25·13·12)/(39·38·37·36) ≈ 0.3082

P(3 aces) = 4·C(35,10)/C(39,13) = 4·(26·13·12·11)/(39·38·37·36) ≈ 0.0904

P(4 aces) = C(35,9)/C(39,13) = (13·12·11·10)/(39·38·37·36) ≈ 0.0087

Note that the probabilities of having 1, 2, 3, 4 aces sum up to
1 − P(no aces), i.e., the probability of having at least one ace.

c) This probability is equal to the sum of the probabilities of having 2, 3,
and 4 aces (mutually exclusive events). It can also be calculated by
subtracting the probability of having exactly one ace from the probability of
having one or more aces. The result is ≈ 0.4073.
d) Given that West has exactly 1 ace, the number of available aces reduces to
3, and 36 of the 39 remaining cards are not aces. Proceeding in a similar way
we obtain:

P(no aces) = C(36,13)/C(39,13) ≈ 0.2845
P(1 ace) = 3·C(36,12)/C(39,13) ≈ 0.4623
P(2 aces) = 3·C(36,11)/C(39,13) ≈ 0.2219
P(3 aces) = C(36,10)/C(39,13) ≈ 0.0313

Note that the probabilities of having 1, 2, 3 aces again sum up to
1 − P(no aces).
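These hypergeometric probabilities are convenient to compute with `math.comb`; a sketch covering parts (a) and (b) and the 2-to-4 ace cases:

```python
from math import comb

# The partner's 13 cards come from the 39 cards outside West's hand:
# 4 aces and 35 non-aces.
def p_aces(k, aces=4, others=35, hand=13):
    """P(partner holds exactly k aces): a hypergeometric probability."""
    return comb(aces, k) * comb(others, hand - k) / comb(aces + others, hand)

probs = [p_aces(k) for k in range(5)]
print([round(p, 4) for p in probs])  # ≈ [0.1818, 0.4109, 0.3082, 0.0904, 0.0087]
```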
Example: How can 20 balls, 10 white and 10 black, be put into two urns so as
to maximize the probability of drawing a white ball if an urn is selected at
random and a ball is drawn at random from it?
The probability of selecting a white ball from either urn is equal to the
product of the probability of selecting that urn (which is 1/2) with the
probability of selecting a white ball from it. Clearly, we should maximize
the latter probability. Thus the answer is: put 1 white and 0 black balls in
one urn, and the remaining 9 white and 10 black balls in the other urn. This
gives P(white) = (1/2)·1 + (1/2)·(9/19) ≈ 0.737.
Example: Let A and B be two events with nonzero probabilities. State whether
each of the following statements is (i) necessarily true, (ii) necessarily
false, or (iii) possibly true.

(a) If A and B are mutually exclusive, then they are independent.
Necessarily false. Mutually exclusive events cannot happen together:
P(A ∩ B) = 0. But for independent events we have P(A ∩ B) = P(A)·P(B) > 0.

(b) If A and B are independent, then they are mutually exclusive.
Necessarily false. P(A ∩ B) = P(A)·P(B) > 0, so A ∩ B ≠ ∅.

(c) P(A) = P(B) = 0.6, and A and B are mutually exclusive.
Necessarily false. We would need P(A ∪ B) = P(A) + P(B) = 1.2 > 1.

(d) P(A) = P(B) = 0.6, and A and B are independent.
Possibly true. From P(A ∪ B) ≤ 1 we must have P(A ∩ B) ≥ 0.2, which is
satisfied when A and B are independent:
P(A ∩ B) = P(A)P(B) = 0.36 ≥ 0.2. Thus we also have
P(A ∪ B) = P(A) + P(B) − P(A)P(B) = 1.2 − 0.36 = 0.84 ≤ 1.
