
STATISTICS I

PROBABILITY: ELEMENTARY CONCEPTS

ELEMENTARY CONCEPTS OF PROBABILITY THEORY

In everyday life we often speak about the "chance" or "likelihood" of occurrence of some events. Probability
is the numerical measure of chance; the chance of occurrence of an event is expressed as a decimal or
fractional number in the range 0 to 1 inclusive. An event having probability 0 is an impossible or null
event; an event having probability 1 is a sure or certain event.

Generally, the phenomena we observe in everyday life are of two kinds: deterministic or chance-dependent
(probabilistic). In a deterministic phenomenon we can predict the outcome with certainty, but in a
probabilistic phenomenon we cannot, so we speak about the probability of events. Such probabilistic
phenomena are frequently observed in economics, business and the social sciences.

DEFINITION OF DIFFERENT TERMS IN PROBABILITY:

RANDOM EXPERIMENT: An experiment is called a random experiment if, when conducted repeatedly under
essentially homogeneous conditions, the result is not unique but may be any one of various possible outcomes.
In such an experiment we cannot say what will happen, but we can say what the different possibilities are.

SAMPLE SPACE: The set of all possible outcomes of an experiment is called the sample space for that
experiment. The list of all possible outcomes of a random experiment is called a 'Collectively Exhaustive List'.

Examples:
1. The sample space of a coin-tossing experiment is S = {Head, Tail}.
2. For the experiment of throwing a die, the sample space is S = {1, 2, 3, 4, 5, 6}.
3. If a card is drawn from a pack of 52 cards, the sample space has 52 members:
S = {A♥, K♥, Q♥, …, 2♥, A♦, K♦, Q♦, …, 2♦, A♣, K♣, Q♣, …, 2♣, A♠, K♠, Q♠, …, 2♠}
4. A box contains 4 balls B1, B2, B3 and B4; if 2 balls are drawn together, the sample space is S = {B1B2, B1B3, B1B4, B2B3, B2B4, B3B4}.

Note that a sample space is a set, analogous to the concept of 'Universe' in set theory. The
members of the sample space are called 'cases' or 'simple events'.
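
For readers who like to check such lists by computer, here is a small Python sketch (not part of the handout) that enumerates the sample spaces of examples 1, 2 and 4 with the standard itertools module:

    from itertools import product, combinations

    # Example 1: tossing a coin
    coin = ["Head", "Tail"]
    print(coin)                                  # ['Head', 'Tail']

    # Example 2: throwing a die
    die = list(range(1, 7))
    print(die)                                   # [1, 2, 3, 4, 5, 6]

    # Example 4: drawing 2 balls together from a box of 4 (order ignored)
    balls = ["B1", "B2", "B3", "B4"]
    print(list(combinations(balls, 2)))          # 6 simple events: ('B1','B2'), ..., ('B3','B4')

    # Tossing two coins (used in later examples): ordered pairs of faces
    print(list(product("HT", repeat=2)))         # ('H','H'), ('H','T'), ('T','H'), ('T','T')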

TRIAL AND EVENT: Performing a random experiment is called a Trial and an outcome or combination of
outcomes is called the Event. Events are denoted by capital letters.

Examples: When a coin is tossed, it shows either a head or a tail. We cannot say beforehand which
face will show, so tossing a coin is a random experiment (a trial), and getting a head or getting a tail is an
event.
Similarly, throwing a die is a trial, and getting any one of the faces 1, 2, 3, 4, 5 or 6 is an event; getting
an odd number is another event, and getting an even number is also an event. In the terminology of set
theory, events are subsets of the sample space.

EXHAUSTIVE CASES: The total number of possible outcomes of a random experiment is called the number of exhaustive cases.
For example, in a toss of a coin we can get either a head or a tail, so the number of exhaustive cases is 2. If r
balls are drawn at random from a box containing n balls, the number of exhaustive cases is nCr (the number of
combinations of n things taken r at a time); however, if the balls are drawn one by one with replacement, the
number of exhaustive cases is n^r. In other words, it is the number of elements in the sample space.
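
As a quick illustration (not from the handout), the two counting formulas can be checked with Python's math module, using the 4-ball box of example 4 above:

    from math import comb

    n, r = 4, 2
    print(comb(n, r))   # 6 exhaustive cases when r balls are drawn together (without replacement)
    print(n ** r)       # 16 exhaustive cases when the balls are drawn one by one with replacement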

FAVOURABLE CASES: In the performance of a random experiment, some cases entail (or result in) the happening
of a predefined event A; those cases are called cases favourable to A. The remaining cases, which do not
result in the happening of A, are called cases unfavourable to A, or cases favourable to Ac (the complement of A).

Examples:
i) In the toss of two coins, suppose A = event that exactly one head occurs; then the cases HT and TH are
favourable to A, and the cases HH and TT are unfavourable to A. In the notation of set theory we
write A = {HT, TH}, so we can also define an event as a subset of the sample space.
ii) In the experiment of rolling a die, suppose B = event that an odd number occurs; then the cases
favourable to B are 1, 3 and 5, and the other cases 2, 4 and 6 are favourable to Bc.


iii) Consider the experiment of rolling two dice. List all the favourable cases for the following events:
A = event that the sum is 7, B = event that the sum is 8.
iv) Consider the experiment of drawing 2 cards from a deck of 52 cards. Suppose A = event that both cards
are hearts; list (or count) all the favourable cases for A.
In the calculation of probability we use the methods learned in combinatorial analysis for counting the
number of exhaustive cases and favourable cases whenever listing all the cases is not desirable or feasible.
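
As an illustration of such counting (this sketch is mine, not the handout's), the favourable cases for the two-dice events in (iii) and the number of favourable cases for 'both hearts' in (iv) can be enumerated as follows; the card encoding is an arbitrary choice:

    from itertools import product, combinations

    # (iii) Two dice: favourable cases for A (sum = 7) and B (sum = 8)
    rolls = list(product(range(1, 7), repeat=2))        # 36 exhaustive cases
    A = [r for r in rolls if sum(r) == 7]
    B = [r for r in rolls if sum(r) == 8]
    print(len(A), A)    # 6 cases: (1,6), (2,5), (3,4), (4,3), (5,2), (6,1)
    print(len(B), B)    # 5 cases: (2,6), (3,5), (4,4), (5,3), (6,2)

    # (iv) Two cards from 52: number of favourable cases for "both cards are hearts"
    ranks, suits = "A23456789TJQK", "HDCS"
    deck = [r + s for s in suits for r in ranks]
    both_hearts = [p for p in combinations(deck, 2) if p[0][-1] == "H" and p[1][-1] == "H"]
    print(len(both_hearts))   # 78 favourable cases out of C(52, 2) = 1326 exhaustive cases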

Class Work: For each of the following trials, write the sample space, or determine the number of exhaustive
cases if writing the sample space is not feasible. Then define at least two events and list their favourable
cases.
1. tossing 3 coins.
2. rolling 2 dice.
3. 3 balls are drawn at random from a box containing 6 red and 7 black balls.
4. 2 cards are drawn from a deck of standard cards.

MUTUALLY EXCLUSIVE CASES OR EVENTS: Two or more events or cases are said to be mutually exclusive (disjoint)
if they cannot happen simultaneously; the happening of one of them excludes the happening of all the others.
In the notation of set theory, two events A and B are mutually exclusive if A∩B = φ (the null set).

Examples:
i) In the toss of a coin, head and tail cannot fall simultaneously, if head comes tail does not and vice
versa, so the cases Head and Tail are mutually exclusive. Similarly, in rolling a die, only one face can
show up, so the cases 1, 2, 3, 4, 5, 6 are mutually exclusive.
ii) Consider the experiment of rolling a die, suppose, A= event that an odd number comes and B= event
that an even number comes, then A and B are mutually exclusive. But if C= event that an even
number comes and D = event that a number divisible by 3 comes then the events C and D are not
mutually exclusive because if 6 comes, then we say that C and D happened simultaneously.
iii) Suppose two cards are drawn from a standard deck of 52 cards. Let A = event that both cards
are hearts, B = event that both cards are diamonds, and C = event that both cards are face cards (J, Q, K).
Then A and B are mutually exclusive, but A and C are not mutually exclusive; similarly B and C are not
mutually exclusive.
iv) Suppose two cards are drawn from a standard deck of 52 cards. Let A = event that both cards
are hearts, B = event that both cards are Twos, C = event that both cards are face cards (J, Q, K), and
D = event that both cards are red. Discuss the disjointness of these events (a checking sketch follows below).
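
A minimal checking sketch for example (iv), assuming the same arbitrary card encoding as before: an event is represented by its set of favourable pairs, and two events are mutually exclusive exactly when those sets are disjoint.

    from itertools import combinations

    ranks, suits = "A23456789TJQK", "HDCS"      # 'H' hearts, 'D' diamonds, 'C' clubs, 'S' spades
    deck = [r + s for s in suits for r in ranks]
    pairs = set(combinations(deck, 2))

    def event(pred):
        """Favourable cases: pairs in which both cards satisfy the predicate."""
        return {p for p in pairs if all(pred(card) for card in p)}

    A = event(lambda c: c[-1] == "H")            # both cards are hearts
    B = event(lambda c: c[0] == "2")             # both cards are Twos
    C = event(lambda c: c[0] in "JQK")           # both cards are face cards
    D = event(lambda c: c[-1] in "HD")           # both cards are red

    print(A.isdisjoint(B))   # True:  there is only one 2 of hearts, so A and B cannot both happen
    print(A.isdisjoint(C))   # False: e.g. (J of hearts, Q of hearts) belongs to both A and C
    print(B.isdisjoint(C))   # True:  a Two is never a face card
    print(B.isdisjoint(D))   # False: (2 of hearts, 2 of diamonds) belongs to both B and D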

THREE TYPES OF PROBABILITY:

There are basically three ways of assigning probabilities to events. They represent rather different
conceptual approaches to the study of probability theory; the three approaches are:

1. Classical or 'a priori' or Mathematical approach
2. Statistical or Empirical or Relative Frequency approach
3. Subjective or Judgmental approach

1. CLASSICAL PROBABILITY: Classical probability uses the classical (mathematical, or a priori) definition of
probability, which is stated below:

Definition: If a trial results in N number of exhaustive, mutually exclusive and equally likely outcomes (cases)
out of which 'm' are favourable to the happening of event A, then the probability of occurrence of A is
denoted by P(A) and is defined as

P(A) = (number of cases favourable to A) / (total number of cases) = m/N


Remarks and Examples:


1. To apply the above definition of probability, we should first count the total number of cases and ensure
that all the cases are equally likely and mutually exclusive.

2. If m cases are favourable to the happening of A, then the remaining (N - m) cases are unfavourable to A,
i.e. favourable to Ac. According to the above definition,
P(Ac) = (N - m)/N = 1 - m/N = 1 - P(A)
3. Classical probability is often called 'a priori' probability because we can assign probabilities to events
in advance (a priori), without performing the experiment. We do not have to toss a fair coin, deal from a
standard deck of cards, or roll an unbiased die to make probability statements about them; we can make such
statements objectively, by logical and mathematical reasoning alone. No experiment needs to be performed and
no judgement is involved in assigning the probability.

4. Example: According to this definition, it is easy to verify that
P(getting a head in a toss of a coin) = 0.5, P(even number in a roll of a die) = 3/6 = 1/2,
P(getting a total of 7 in a roll of two fair dice) = 6/36, etc.
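
These classical probabilities can be reproduced by simple counting, as in the following illustrative sketch:

    from fractions import Fraction
    from itertools import product

    # P(even number in a roll of a die) = favourable cases / exhaustive cases
    die = range(1, 7)
    print(Fraction(sum(1 for x in die if x % 2 == 0), 6))               # 1/2

    # P(total of 7 in a roll of two fair dice)
    rolls = list(product(range(1, 7), repeat=2))
    print(Fraction(sum(1 for r in rolls if sum(r) == 7), len(rolls)))   # 1/6 (= 6/36)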

Limitations of this approach:


Classical probability also has disadvantages: it ignores situations that are very unlikely but could
nevertheless happen, and it assumes that all outcomes of the experiment are equally likely, which may not be
so in real life, particularly in business and management situations. Real-life situations are rarely as
orderly as the definition assumes. For example, if you bid for a contract, the possible outcomes are 'you get
the contract' and 'you do not get the contract', but the probabilities of these two events are not equal. In
our example of a die, the outcomes will not be equally likely if the die is biased (tampered with). In such
cases we cannot apply the classical approach to assigning probability.

Odds: Another way of expressing probability


The probability of an event is sometimes expressed in terms of odds (especially in gaming situations), using
either the phrase 'odds in favour of an event A' or the phrase 'odds against A'.
The 'odds in favour of A' is the ratio P(A)/P(Ac), and the 'odds against A' is the ratio P(Ac)/P(A), provided
that the denominator is not zero. Odds are usually expressed as a ratio p:q of two positive integers.
The statements P(A) = m/N, 'odds in favour of A are m:(N-m)' and 'odds against A are (N-m):m' all convey the
same meaning. For example, saying that the odds for Tyson to win a bout against Holyfield are 4:5 means that
P(Tyson wins) = 4/9; saying that the odds against him are 7:11 means that P(Tyson wins) = 11/18. These are
simply other ways of expressing the probability.
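
A small sketch of the conversion described above (the helper names are my own, not standard terminology):

    from fractions import Fraction

    def prob_from_odds_in_favour(p, q):
        """Odds in favour of A are p:q, so P(A) = p / (p + q)."""
        return Fraction(p, p + q)

    def prob_from_odds_against(p, q):
        """Odds against A are p:q, so P(A) = q / (p + q)."""
        return Fraction(q, p + q)

    print(prob_from_odds_in_favour(4, 5))   # 4/9
    print(prob_from_odds_against(7, 11))    # 11/18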

2. RELATIVE FREQUENCY OR THE EMPIRICAL PROBABILITY: This approach defines probability as either the observed
relative frequency of an event in a very large number of trials, or the proportion of times that the event
occurs in the long run when the experiment is repeated under homogeneous conditions.

Definition: Suppose that an event A happens m times in N repetitions of a random experiment. Then the ratio
m/N gives the relative frequency of the event A. As N becomes sufficiently large, m/N becomes stable, and
this stable value gives the probability of A:
P(A) = lim(N→∞) m/N
This method uses the relative frequencies of past occurrences as probabilities.

Remarks and Examples:

1. It may be remarked that the probability defined above can never be obtained exactly in practice; however,
we can get a close estimate of P(A) by making N sufficiently large, so the greater the number of trials, the
higher the accuracy of the estimate. Note also that for different values of N the relative frequency m/N does
not take a unique value. Suppose a coin is tossed: the classical definition states that the probability of
getting a head is 0.5, but the following table shows how the relative frequency may change over the first
10000 tosses of that coin.


Number of tosses (N)    Number of heads (m)    Relative frequency (m/N)
        10                       3                     0.3
        20                       8                     0.4
       100                      67                     0.67
      1000                     482                     0.482
      5000                    2000                     0.4
     10000                    5500                     0.55
(Also see Figure 4-1, page 166 of the textbook.)
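
The stabilisation of m/N is easy to demonstrate by simulation. The following sketch (which generates its own random tosses, not the data in the table above) tosses a simulated fair coin 10000 times and prints the running relative frequency at the same checkpoints:

    import random

    random.seed(42)              # arbitrary seed, only for reproducibility
    heads = 0
    for n in range(1, 10001):
        heads += random.random() < 0.5        # one toss of a fair coin
        if n in (10, 20, 100, 1000, 5000, 10000):
            print(f"{n:>6} tosses: relative frequency of heads = {heads / n:.4f}")
    # The ratio wanders for small n and settles close to 0.5 as n grows.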

2. The most confusing phrase in the above definition is 'sufficiently large'.
For example, in a bout between two boxers A and B, it is not logical to say that the probability of A winning
is 0.5; rather, we may use data on the past bouts between the boxers. Suppose that in the last 15 bouts
between them A has won 11 times; then the relative frequency approach says that the probability of A winning
is 11/15. However, we cannot say how many past bouts are necessary to evaluate this probability.

In the decision-making context, the empirical approach to assigning probability is, in fact, a search for
past experience under identical conditions or situations.

3. An example from Kalosha Industries: distribution of respondent income ($000)

Respondent Income    Frequency    Percent    Cumulative Percent
10-20                   115         28.8           28.8
20-30                   124         31.0           59.8
30-40                    79         19.8           79.5
40-50                    41         10.3           89.8
50-60                    28          7.0           96.8
60-70                     6          1.5           98.3
70-80                     5          1.3           99.5
80-90                     1          0.3           99.8
90-100                    1          0.3          100.0
Total                   400        100.0
Look at the Percent column: it shows that the percentage of persons earning between $10,000 and $20,000 is
28.8. In other words, the probability that a randomly selected person earns in the range $10,000-$20,000 is
0.288. The relative frequency table you calculated in previous sessions gives, in the same way, the
probability of a randomly selected observation falling into each class.
Questions for thinking:
i) How would you obtain the probability that a 60-year-old person dies within one year? Do you
think this probability would remain constant year after year?
ii) Suppose you are going to get married on February 14, 2005. How would you make a statement
about the probability of rain on that day?
iii) The old people of Kathmandu say that on the day of the "Bhoto Jatra of Macchindranath" it will
rain. Keeping aside the religious belief, do you think there is any empirical reason for this?

3. SUBJECTIVE APPROACH OR JUDGMENTAL APPROACH: It is not always possible to use mathematical logic or past
data to assign probabilities to events occurring in real-life situations. Subjective reasoning may help in
such situations: the individual assigns probabilities based on his/her beliefs and the available evidence.
The evidence may be in the form of relative frequencies of past occurrences, a judgment based on his/her
analysis of the situation, or simply an educated guess. It is important to point out that the probabilities
assigned to the same event by two individuals may not be the same.

Decision makers have to use subjective probabilities in making high-level social and managerial decisions,
because in such decision-making processes the manager may be making a particular decision for the first time
and may not have faced an identical situation before. In fact, managers often face unique situations, and
they have to use the subjective approach whenever no identical situation has been experienced in the past.


Example: Suppose there is going to be a football match between two teams A and B, and you are going to bet
on one of them. The classical approach simply says that the probability of each team winning is 0.5. In the
relative frequency approach, one would look at the past matches between those teams under identical
conditions. In the subjective approach, one may analyse the performance of the teams last season, the fitness
of the players, the goal-saving record of the keepers (in case of a penalty shoot-out), ground conditions,
audience support, the records of the strikers, etc. before making a probability assessment.

Questions for thinking:


i) Suppose you are a manager in a construction company and you are bidding for a contract. How
would you assess the probability of getting the contract?
ii) "Tomorrow will be a rainy day," Mr. X said on a certain day when he had pain in his left hand, which
was fractured in an accident some 5 years ago. "I have observed for 4 years that pain in my left
hand is an indication of rain on the following day." Do you think this is subjective reasoning in
assessing the chance of rain tomorrow?
iii) Suppose there is going to be a tennis match between Kournikova and Seles, and you are going to bet
$100 in favour of one of them. You know that in the past 8 matches between them, Seles has lost 5.
Is this information sufficient for you to go for Kournikova?

PROBABILITY RULES:

In assigning probability, events may be simple or composite. To compute the probabilities of composite
events, the usual way of listing favourable cases may not be feasible, so we require rules for assigning
probabilities to different composite events, such as
a. the composite event where one event or another happens;
b. the composite event where both events happen.

To study the probability rules, first we define some terms.

DEPENDENCE AND INDEPENDENCE OF EVENTS: Two (or more) events are said to be independent if the happening or
non-happening of one of them does not affect, and is not affected by, the happening or non-happening of the
others; otherwise they are termed dependent events.

Remarks and Examples:

i) Suppose two coins are tossed; then the event of getting a head on the first coin is independent of the
event of getting a tail on the second coin.
ii) Suppose that from a bag containing 1 red and 4 white balls, a ball is drawn and replaced, and then a
second ball is drawn. Let A = event of getting a red ball in the first draw and B = event of getting a
red ball in the second draw. It is apparent that the events A and B are independent. However, if the
ball drawn is not replaced in the bag before the second draw, the happening or non-happening of A
affects B, and the events are dependent.
iii) Situations where the trials are described as "with replacement" give rise to independent events, and
situations where the trials are described as "without replacement" give rise to dependent events.

CONDITIONAL PROBABILITY: The conditional probability of an event B, given that another event A has already
happened, is denoted by P(B|A); it is the probability of B conditioned on the occurrence of A. If A and B are
independent events then P(A|B) = P(A) and P(B|A) = P(B). The following examples will clarify the matter.

Examples:
i) Suppose that from a bag containing 2 red balls and 3 black balls, a ball is drawn and not replaced before
making a second draw. Let A = event of getting a red ball in the first draw and B = event of getting a
red ball in the second draw. Then
P(B|A) = P(red ball in second draw | red ball in first draw) = 1/4
P(B|Ac) = P(red ball in second draw | black ball in first draw) = 2/4
However, if the ball were replaced before the second draw, then


P(B|A) = 2/5 = P(B). This example shows that sampling with replacement gives rise to independent
events and sampling without replacement gives rise to dependent events.
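
An illustrative check of example (i), not from the handout: enumerate all ordered draws without replacement and compute the conditional probabilities as ratios of counts.

    from fractions import Fraction
    from itertools import permutations

    bag = ["R", "R", "B", "B", "B"]              # 2 red and 3 black balls
    draws = list(permutations(range(5), 2))      # 20 ordered draws without replacement

    A   = [d for d in draws if bag[d[0]] == "R"]         # red on the first draw
    AB  = [d for d in A if bag[d[1]] == "R"]             # red on both draws
    Ac  = [d for d in draws if bag[d[0]] == "B"]         # black on the first draw
    AcB = [d for d in Ac if bag[d[1]] == "R"]            # black first, red second

    print(Fraction(len(AB), len(A)))     # P(B|A)  = 1/4
    print(Fraction(len(AcB), len(Ac)))   # P(B|Ac) = 1/2 (= 2/4)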

ii) Suppose there are 3 boxes B1, B2 and B3 which contain balls of different colours as shown below,
B1: 5 red and 3 green, B2: 4 black and 2 green, B3: 5 red and 4 black
Suppose a box is selected at random, and a ball is drawn from the selected box. Let A = event of
getting a black ball; then whether A happens depends on which box is selected, so we give the
conditional probabilities as follows: P(A|B1) = 0, P(A|B2) = 4/6, and P(A|B3) = 4/9

iii) The following table shows the cross-classification of sex and job satisfaction of Kalosha Industries
employees.

            Very Satisfied    Moderately Satisfied    A Little Dissatisfied    Very Dissatisfied    Total
Male              108                 100                      19                      6             233
Female             77                  71                       8                     11             167
Total             185                 171                      27                     17             400

i) What is the probability that a randomly selected employee is very satisfied and is male? (108/400)
ii) What is the probability that a randomly selected male is moderately satisfied? (100/233)
iii) What is the probability that a 'Very Dissatisfied' employee is female? (11/17)

Solution: Class Work

iv) The formula for conditional probability follows straightforwardly by transposing the formula for the
multiplication rule of probability (given below). In examples (i) and (ii) above, the conditional
probabilities were found by logical reasoning.
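
The three answers above can be read directly off the table; the sketch below (not part of the handout) shows how joint, marginal and conditional probabilities all come from the same counts.

    from fractions import Fraction

    # Counts from the job-satisfaction table (rows: sex, columns: satisfaction level)
    table = {
        "Male":   {"Very Satisfied": 108, "Moderately Satisfied": 100,
                   "A Little Dissatisfied": 19, "Very Dissatisfied": 6},
        "Female": {"Very Satisfied": 77,  "Moderately Satisfied": 71,
                   "A Little Dissatisfied": 8,  "Very Dissatisfied": 11},
    }
    total = sum(sum(row.values()) for row in table.values())                      # 400

    # (i) Joint probability: very satisfied AND male
    print(Fraction(table["Male"]["Very Satisfied"], total))                       # 27/100 (= 108/400)

    # (ii) Conditional probability: moderately satisfied GIVEN male
    male_total = sum(table["Male"].values())                                      # 233
    print(Fraction(table["Male"]["Moderately Satisfied"], male_total))            # 100/233

    # (iii) Conditional probability: female GIVEN very dissatisfied
    vd_total = sum(table[sex]["Very Dissatisfied"] for sex in table)              # 17
    print(Fraction(table["Female"]["Very Dissatisfied"], vd_total))               # 11/17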

RULE OF ADDITION OF PROBABILITY:

Let A and B be events. Then A∪B denotes the event that A or B happens (at least one of them occurs), and
A∩B (or simply AB) denotes the event that both happen simultaneously (A and B happen). We state the following
rules of addition:
When A and B are not mutually exclusive,
P(A∪B) = P(A) + P(B) - P(AB)
When A and B are mutually exclusive, they cannot happen simultaneously, i.e. P(AB) = 0, so
P(A∪B) = P(A) + P(B)
For three events A, B and C we have
P(A∪B∪C) = P(A) + P(B) + P(C) - P(AB) - P(BC) - P(AC) + P(ABC)
and when A, B and C are mutually exclusive, P(A∪B∪C) = P(A) + P(B) + P(C).

Examples:

i) Suppose A = event of getting a spade in a draw of a card from a deck of cards and
B = event of getting a face card.
Then P(A) = 13/52, P(B) = 12/52, P(AB) = 3/52,
so P(A∪B) = P(spade or face card) = 22/52.
This is a case of events that are not mutually exclusive.
ii) Suppose A = event of getting a sum of 6 on a roll of two fair dice and
B = event of getting a sum of 7. Then A and B are mutually exclusive events. We know that P(A) = 5/36
and P(B) = 6/36, so P(A or B) = P(getting 6 or 7) = 11/36.
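
A quick arithmetic check of both examples, using exact fractions (illustrative only):

    from fractions import Fraction

    # Example i: spade or face card in one draw (not mutually exclusive)
    P_A, P_B, P_AB = Fraction(13, 52), Fraction(12, 52), Fraction(3, 52)
    print(P_A + P_B - P_AB)                    # 11/26 (= 22/52)

    # Example ii: sum of 6 or sum of 7 on two dice (mutually exclusive, so P(AB) = 0)
    print(Fraction(5, 36) + Fraction(6, 36))   # 11/36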


MULTIPLICATION RULE OF PROBABILITY:

The multiplication rule of probability gives the probability of the simultaneous happening of two or more
events. P(AB) gives the probability of the simultaneous happening of events A and B; this probability is
called the joint probability of occurrence of A and B.
It has different formulas under the conditions of dependence and independence.

Case 1: when the events A and B are independent, P(AB) = P(A)P(B)

Case 2: when the events A and B are dependent, P(AB) = P(A)P(B|A) = P(B)P(A|B)

Example:

i) Consider the experiment of rolling a pair of fair dice and tossing a coin. Let A = event of getting a head
and B = event that the total on the dice is 4. Then A and B are independent and
P(AB) = P(head and total 4) = P(A) x P(B) = 1/2 x 3/36 = 1/24
ii) Consider the experiment of drawing two balls successively, without replacement, from a box containing
2 red and 3 white balls. Let A = event of getting a red ball on the first draw and B = event of
getting a red ball on the second draw. We know that P(A) = P(red ball in first draw) = 2/5 and
P(B|A) = 1/4, so the joint probability of A and B, i.e. the probability of getting a red ball in
both draws, is given by P(AB) = P(A) P(B|A) = 2/20
iii) Missing-Card Example: Suppose that 2 cards are missing from a pack of cards. Let A1 = event that both
missing cards are red, A2 = event that both missing cards are black, and A3 = event that the missing cards
are one of each colour. Further suppose that one card is drawn from this pack, and let X = event that a
black card turns up. Then the event X is dependent on A1, A2 and A3, and the conditional probabilities are
given by P(X|A1) = 26/50, P(X|A2) = 24/50 and P(X|A3) = 25/50
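
An arithmetic check of the three examples with exact fractions (illustrative only; the numbers are those used above):

    from fractions import Fraction

    # Example i: independent events, P(AB) = P(A) * P(B)
    print(Fraction(1, 2) * Fraction(3, 36))    # 1/24  (head, and a total of 4 on the dice)

    # Example ii: dependent events, P(AB) = P(A) * P(B|A)
    print(Fraction(2, 5) * Fraction(1, 4))     # 1/10 (= 2/20)  (red ball on both draws)

    # Example iii: conditional probabilities of drawing a black card, given the missing cards
    for missing, black_left in [("both red", 26), ("both black", 24), ("one of each", 25)]:
        print(missing, Fraction(black_left, 50))   # 26/50, 24/50, 25/50 in lowest terms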

Self Learning Question:


1. Students often find it difficult to express the literal definition of an event in symbols. To develop
this skill, they have to use the concepts of set theory and Venn diagrams. Suppose A, B and C are events.
Express the following events in symbols.
Literal Expression                          Symbolic Expression
All of A, B and C happen
A and B happen but C does not
At least one of A, B, C happens
At most one of A, B, C happens
None of them happens
Exactly two of the three events happen
At least two happen
One and only one happens
Exactly one happens

2. Finding out Joint, Marginal and Conditional Probabilities from a Two-way table (Cross tables). Many
problems in your textbook may be quickly solved using a cross table.

See §4.5 page 176 of your Text Book to understand Tree-Diagrams


RULE OF TOTAL PROBABILITY:

Let A be any event which can happen only in conjunction with one and only one of the mutually exclusive
events E1, E2, …, En (that is, A may happen only after the happening of one of these events). Then the
marginal probability of the happening of A is given by
P(A) = Σ(j=1 to n) P(A∩Ej) = Σ(j=1 to n) P(Ej) P(A|Ej)
Note that this formula is a mixture of the rule of addition and the rule of multiplication of probabilities.
Further note that the events (A∩Ej), which give the probability that A happens in conjunction with Ej, are
mutually exclusive.

Example: i) Suppose there are 3 boxes B1, B2 and B3 which contain balls of different colours as shown below:
B1: 5 red and 3 green, B2: 4 black and 2 green, B3: 5 red and 4 black
A box is selected at random and a ball is drawn from the selected box. What is the probability that it is
a green ball? Hint: here the event A of getting a green ball happens only after one of the events of selecting
a box; use the rule of total probability.

ii) In the Missing Card Example what is the probability of drawing a black card?
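
A sketch of both computations, assuming exact fractions. For the missing-card question the prior probabilities of A1, A2 and A3 are obtained here by counting pairs of missing cards, a step the handout leaves to the reader.

    from fractions import Fraction
    from math import comb

    # (i) Boxes example: P(green) = sum over boxes of P(box) * P(green | box)
    p_green_given_box = {
        "B1": Fraction(3, 8),    # 5 red, 3 green
        "B2": Fraction(2, 6),    # 4 black, 2 green
        "B3": Fraction(0, 9),    # 5 red, 4 black: no green ball
    }
    p_box = Fraction(1, 3)       # each box is chosen at random
    print(sum(p_box * p for p in p_green_given_box.values()))    # 17/72

    # (ii) Missing-card example: P(black card drawn)
    p_A1 = Fraction(comb(26, 2), comb(52, 2))    # both missing cards red
    p_A2 = p_A1                                  # both missing cards black (same count)
    p_A3 = Fraction(26 * 26, comb(52, 2))        # one missing card of each colour
    p_black = p_A1 * Fraction(26, 50) + p_A2 * Fraction(24, 50) + p_A3 * Fraction(25, 50)
    print(p_black)                               # 1/2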

See §4.6 page 185 of your Text Book

BAYES' THEOREM AND POSTERIOR PROBABILITIES:

Class Demonstration: How to have an idea about the Missing Cards?

Definition: Let A be any event which can happen only in conjunction with one and only one of the mutually
exclusive events E1, E2, …, En (that is, A may happen only after the happening of one of these events). Then
the marginal probability of the happening of A is given by
P(A) = Σ(j=1 to n) P(A∩Ej) = Σ(j=1 to n) P(Ej) P(A|Ej)
Further, if the event A actually happens, then we are interested in the probability of the event (Ej|A),
that is, the probability that the happening of A was due to Ej. This probability is given by Bayes' theorem:
P(Ej|A) = P(A∩Ej) / P(A) = P(A∩Ej) / Σ(k=1 to n) P(Ek) P(A|Ek)
Here the events Ej are called the elementary events, and the probabilities P(Ej|A) are called the posterior
probabilities.
Example (very important): Suppose there are 4 kinds of dice in a pot, in equal numbers. The dice are
deformed (biased, weighted or tampered with). On the first kind of die an ace (one dot) comes up 40% of the
time, on the second kind an ace comes up only 30% of the time, on the third kind 60% of the time, and on the
fourth kind 90% of the time.
A die is chosen at random. At this point we believe that each kind of die is chosen with equal probability
0.25. We roll the chosen die once and obtain an ace; now we are in a position to revise the prior
probabilities on the basis of this outcome. Let A = event of getting an ace from the chosen die and
Ej = event of choosing the jth kind of die (j = 1, 2, 3, 4). We calculate the posterior probabilities using
the following table.


Elementary event (Ej)    P(Ej)    P(A|Ej)    P(A∩Ej) = P(Ej)P(A|Ej)    Posterior P(Ej|A) = P(A∩Ej)/P(A)
E1                        0.25      0.4             0.100                        0.1818
E2                        0.25      0.3             0.075                        0.1364
E3                        0.25      0.6             0.150                        0.2727
E4                        0.25      0.9             0.225                        0.4091
Total                     1.00                  P(A) = 0.550                     1.0000

Conclusion: Because the roll produced an ace, we have been able to revise the prior probability of 0.25 that
the die in our hand is of the 4th kind up to the posterior probability of about 0.41.
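
The table can be reproduced in a few lines of Python; the same sketch, adjusted as indicated in the comments, also handles the variations listed below (illustrative only).

    from fractions import Fraction

    priors = [Fraction(1, 4)] * 4                                   # each kind of die equally likely
    p_ace  = [Fraction(2, 5), Fraction(3, 10), Fraction(3, 5), Fraction(9, 10)]   # 40%, 30%, 60%, 90%

    joint = [p * q for p, q in zip(priors, p_ace)]                  # P(Ej) * P(A|Ej)
    p_A = sum(joint)                                                # total probability of an ace = 0.55
    posterior = [j / p_A for j in joint]
    for j, post in enumerate(posterior, start=1):
        print(f"E{j}: posterior = {post} = {float(post):.4f}")
    # E1 = 0.1818, E2 = 0.1364, E3 = 0.2727, E4 = 0.4091

    # Variation 2: two independent rolls both show an ace, so square each likelihood
    # (for variation 1 use 1 - q, and for variation 3 use (1 - q) ** 2, in place of q * q)
    joint2 = [p * q * q for p, q in zip(priors, p_ace)]
    posterior2 = [j / sum(joint2) for j in joint2]
    print([f"{float(p):.4f}" for p in posterior2])                  # E4 rises to about 0.57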

Variations of the above problem:


1. Suppose the roll did not show an ace. Revise the probabilities.
2. Suppose the selected die was rolled twice and we obtained an ace on both rolls. Revise the posterior
probabilities.
3. Suppose that in rolling the selected die twice we did not get an ace on either roll. Revise the probabilities.

Value of Bayes' Theorem:


Prior estimates of probability are always based on limited information. Bayes' theorem lets us use
additional information (if any) to revise our prior estimates; that is why the revised probability is also
called the posterior probability. (See article 4.7 of the textbook.)

Some Problems on Bayes Theorem:

i) In 1999 there were 5 candidates for the post of Principal of a college. Their probabilities of
appointment are in the ratio 1:4:5:2:3. All the candidates had promised to buy a snooker board for the
college, but their respective probabilities of buying one are 0.5, 0.8, 0.1, 0.6 and 0.7.
a. What is the probability that there will be a snooker board in the college?
b. Someone was appointed and bought a snooker board; find the posterior probabilities of
selection of the different candidates.

ii) The contents of urns I, II and III are as follows.

Urn I: 1 white, 2 black, 3 red balls; Urn II: 2 white, 1 black, 1 red ball; Urn III: 4 white, 5 black, 3
red balls. An urn is chosen at random and two balls are drawn; they happen to be white and red.
What is the probability that they came from Urn I, II or III?

iii) One card is missing from a pack of cards. From this pack 13 cards were drawn in succession without
replacement, and all turned out to be red. What is the probability that the missing card is red?

End of Basic Concepts of Probability Theory

