








We would first like to thank our institution, and extend sincere thanks to Principal Prof. A. E. Lakdawala and Vice Principal Prof. Kamala Arunachalam, for supporting us and giving us the opportunity to take the B&I course and complete this project.

We would also like to extend our profound and sincere gratitude to our project guide Prof. NEETA, who has guided our project with her vast fund of knowledge, advice, and constant encouragement. We deeply appreciate her implicit and valuable contribution in drawing up this project.

We also thank all our colleagues, without whom this project would not have been completed.

Thank you all for your contribution towards the project, whether big or small; we will forever be indebted to each and every one of you. We also thank all those whom we have forgotten to mention in this space.


PROBABILITY



Probability theory is the branch of mathematics concerned with the analysis of random phenomena. The central objects of probability theory are random variables and events, mathematical abstractions of non-deterministic events or measured quantities that may either be single occurrences or evolve over time in an apparently random fashion. Although an individual coin toss or the roll of a die is a random event, if repeated many times the sequence of random events will exhibit certain statistical patterns, which can be studied and predicted. Two representative mathematical results describing such patterns are the law of large numbers and the central limit theorem. As a mathematical foundation for statistics, probability theory is essential to many human activities that involve quantitative analysis of large sets of data. Methods of probability theory also apply to descriptions of complex systems given only partial knowledge of their state, as in statistical mechanics. A great discovery of twentieth century physics was the probabilistic nature of physical phenomena at atomic scales, described in quantum mechanics.



The study of probability helps us figure out the likelihood of something happening. For instance, when you roll a pair of dice, you might ask how likely you are to roll a seven. In math, we call the "something happening" an "event." The probability of the occurrence of an event can be expressed as a fraction or a decimal from 0 to 1. Events that are unlikely will have a probability near 0, and events that are likely to happen have probabilities near 1. In any probability problem, it is very important to identify all the different outcomes that could occur. For instance, in the question about the dice, you must figure out all the different ways the dice could land, and all the different ways you could roll a seven. Note that when you're dealing with an infinite number of possible outcomes, an event that could conceivably happen might have probability zero. Consider the example of picking a random number between 1 and 10 - what is the probability that you'll pick exactly 5.0724? It's zero, but it could happen. Likewise, when dealing with infinities, a probability of 1 doesn't guarantee the event: when choosing a random number between 1 and 10, what is the probability that you'll choose a number other than 5.0724? It's 1.

History of probability
In the seventeenth century Galileo wrote down some ideas about dice games. This led to discussions and papers which formed the earlier parts of probability theory. There have been a variety of contributors to probability theory since then, but it is still a fairly poorly understood area of mathematics. The scientific study of probability is a modern development. Gambling shows that there has been an interest in quantifying the ideas of probability for millennia, but exact mathematical descriptions of use in those problems arose only much later. According to Richard Jeffrey, "Before the middle of the seventeenth century, the term 'probable' (Latin probabilis) meant approvable, and was applied in that sense,



univocally, to opinion and to action. A probable action or opinion was one such as sensible people would undertake or hold, in the circumstances." However, in legal contexts especially, 'probable' could also apply to propositions for which there was good evidence.

Aside from some elementary considerations made by Girolamo Cardano in the 16th century, the doctrine of probabilities dates to the correspondence of Pierre de Fermat and Blaise Pascal (1654). Christiaan Huygens (1657) gave the earliest known scientific treatment of the subject. Jakob Bernoulli's Ars Conjectandi (posthumous, 1713) and Abraham de Moivre's Doctrine of Chances (1718) treated the subject as a branch of mathematics. See Ian Hacking's The Emergence of Probability and James Franklin's The Science of Conjecture for histories of the early development of the very concept of mathematical probability.

The theory of errors may be traced back to Roger Cotes's Opera Miscellanea (posthumous, 1722), but a memoir prepared by Thomas Simpson in 1755 (printed 1756) first applied the theory to the discussion of errors of observation. The reprint (1757) of this memoir lays down the axioms that positive and negative errors are equally probable, and that there are certain assignable limits within which all errors may be supposed to fall; continuous errors are discussed and a probability curve is given. Pierre-Simon Laplace (1774) made the first attempt to deduce a rule for the combination of observations from the principles of the theory of probabilities. He represented the law of probability of errors by a curve y = φ(x), x being any error and y its probability, and laid down three properties of this curve: 1. it is symmetric about the y-axis; 2. the x-axis is an asymptote, the probability of an infinitely large error being 0; 3. the area enclosed is 1, it being certain that an error exists. He also gave (1781) a formula for the law of facility of error (a term due to Lagrange, 1774), but one which led to unmanageable equations.
Daniel Bernoulli (1778) introduced the principle of the maximum product of the probabilities of a system of concurrent errors. The method of least squares is due to Adrien-Marie Legendre (1805), who introduced it in his Nouvelles méthodes pour la détermination des orbites des comètes (New Methods for Determining the Orbits of Comets). In ignorance of Legendre's contribution, an Irish-American writer, Robert Adrain, editor of "The Analyst" (1808), first deduced the



law of facility of error, φ(x) = c·e^(-h²x²), h being a constant depending on precision of observation, and c a scale factor ensuring that the area under the curve equals 1. He gave two proofs, the second being essentially the same as John Herschel's (1850). Gauss gave the first proof which seems to have been known in Europe (the third after Adrain's) in 1809. Further proofs were given by Laplace (1810, 1812), Gauss (1823), James Ivory (1825, 1826), Hagen (1837), Friedrich Bessel (1838), W. F. Donkin (1844, 1856), and Morgan Crofton (1870). Other contributors were Ellis (1844), De Morgan (1864), Glaisher (1872), and Giovanni Schiaparelli (1875). Peters's (1856) formula for r, the probable error of a single observation, is well known.

In the nineteenth century authors on the general theory included Laplace, Sylvestre Lacroix (1816), Littrow (1833), Adolphe Quetelet (1853), Richard Dedekind (1860), Helmert (1872), Hermann Laurent (1873), Liagre, Didion, and Karl Pearson. Augustus De Morgan and George Boole improved the exposition of the theory. Andrey Markov introduced the notion of Markov chains (1906), which play an important role in the theory of stochastic processes and its applications. The modern theory of probability, based on measure theory, was developed by Andrey Kolmogorov (1931). On the geometric side (see integral geometry) contributors to The Educational Times were influential (Miller, Crofton, McColl, Wolstenholme, Watson, and Artemas Martin).

Basic Concepts
Random Experiment: An experiment is said to be a random experiment if its outcome cannot be predicted with certainty. For example, if a coin is tossed, we cannot say whether heads or tails will appear, so it is a random experiment.

What is a sample space? The sample space is the set consisting of all the possible outcomes of an event (like drawing a marble from a jar, or picking a card from a deck). The number of different ways you can choose something from the sample space is the total number of possible outcomes. Because each probability is a fraction of the sample space, the sum of the probabilities of all the possible outcomes equals one. The probability of the occurrence of an event is always one minus the probability that it doesn't occur.



For example, if the probability of picking a red marble from a jar that contains 4 red marbles and 6 blue marbles is 4/10 or 2/5, then the probability of not picking a red marble is equal to 1 - 4/10 = 6/10 or 3/5, which is also the probability of picking a blue marble. Given the only two events that are possible in this example (picking a red marble or picking a blue marble), if you don't do the first, then you must do the second. That is, given this example, the probability of picking a red marble plus the probability of picking a blue marble will equal 1 (or 100 percent).

- One event, all outcomes equally likely

Suppose we have a jar with 4 red marbles and 6 blue marbles, and we want to find the probability of drawing a red marble at random. In this case we know that all outcomes are equally likely: any individual marble has the same chance of being drawn. To find a basic probability with all outcomes equally likely, we use a fraction:

    probability = (number of favorable outcomes) / (total number of possible outcomes)

What's a favorable outcome? In our example, where we want to find the probability of drawing a red marble at random, our favorable outcome is drawing a red marble. What's the total number of possible outcomes? The total number of possible outcomes forms a set called a sample space. In our problem, the sample space consists of all ten marbles in the jar, because we are equally likely to draw any one of them. Using our basic probability fraction, we see that the probability of drawing a red marble at random is:

    (number of red marbles) / (total marbles in jar) = 4/10

Since 4/10 reduces to 2/5, the probability of drawing a red marble where all outcomes are equally likely is 2/5. Expressed as a decimal, 4/10 = 0.4; as a percent, 4/10 = 40/100 = 40%.



Suppose we number the marbles 1 to 10. What is the probability of picking out number 5? Well, there is only one number 5 marble, and there are still 10 marbles in the jar, so the answer is 1 marble (favorable outcome) divided by 10 marbles (size of sample space) = 1/10 or 10 percent.
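The favorable-over-total computations above can be sketched in a few lines of Python (a minimal illustration, not part of the original text; the helper name `probability` is ours):

```python
from fractions import Fraction

def probability(favorable, total):
    """Basic probability with equally likely outcomes:
    number of favorable outcomes divided by the size of the sample space."""
    return Fraction(favorable, total)

# Jar with 4 red and 6 blue marbles: P(red) = 4/10 = 2/5
p_red = probability(4, 10)
print(p_red)               # 2/5
print(float(p_red))        # 0.4

# Marbles numbered 1..10: P(drawing the number-5 marble) = 1/10
print(probability(1, 10))  # 1/10
```

`Fraction` keeps the results exact, so 4/10 is automatically reduced to 2/5 just as in the text.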


- Two events, all outcomes equally likely

Now let's suppose we have two events: first you draw 1 marble from the 10, and then I draw another marble from the nine that remain. What is the probability that you will draw a blue one first? What is the probability that I will draw a red one second (given that you have already drawn a blue one)? Again, we'll use our fraction. When you draw the first marble, there are 10 marbles in the jar of which 6 are blue, so your probability of drawing a blue one is 6/10 (60 percent) or 3/5. After you draw it's my turn, but now the size of our sample space has changed because there are only 9 marbles left; 4 of them are red, so the probability that I'll draw a red marble (again, assuming that you have already drawn a blue one) is 4/9.


- Two events, second outcome dependent upon the first

We just calculated the probability for two events whose outcomes were equally likely: in the first, you drew a blue marble; in the second, I drew a red one after you had drawn. But suppose we want to know the probability of your drawing a blue marble and my drawing a red one? This may seem like the same question, but it is not, because we now have more than one event. Here are the possibilities that make up the sample space:

(a) You draw a blue marble and then I draw a blue marble.
(b) You draw a blue marble and then I draw a red marble.
(c) You draw a red marble and then I draw a blue marble.
(d) You draw a red marble and then I draw a red marble.

These are the only four possibilities - but they are not all equally likely. When we have an event made up of two separate events joined by the word "and," where the outcome of the second event is dependent on the outcome of the first, we multiply the individual probabilities to get the answer.

The probability of example (b) above is your probability of drawing a blue marble (3/5) multiplied by my probability of drawing a red marble (4/9): 3/5 × 4/9 = 12/45 or, reduced, 4/15. How about the probability of example (a)? We've already calculated the probability of your drawing a blue marble; it's 3/5. How about the probability of my drawing a blue marble too? Well, after you draw a blue, there are 9 marbles left and 5 of them are blue, so for me the probability will be 5/9. Multiply 3/5 by 5/9 and you get 15/45 or, reduced, 1/3.
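The multiplication of dependent probabilities can be checked with a short Python sketch (illustrative only; the variable names are ours):

```python
from fractions import Fraction

# Jar: 4 red, 6 blue. You draw first; I draw from the 9 marbles that remain.
p_you_blue = Fraction(6, 10)          # 6 blue out of 10
p_me_red_given_blue = Fraction(4, 9)  # 4 red out of the 9 left
p_me_blue_given_blue = Fraction(5, 9) # 5 blue out of the 9 left

# Example (b): blue then red -> multiply the individual probabilities
print(p_you_blue * p_me_red_given_blue)   # 4/15

# Example (a): blue then blue
print(p_you_blue * p_me_blue_given_blue)  # 1/3
```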

According to the definition, probability is a function on the subsets of a sample space. Let's see how it could be defined on the simplest sample space, that of a single coin toss, {H, T}. The two-element sample space {H, T} has four subsets: ∅ = { }, {H}, {T}, and {H, T} = Ω.

To be a probability, a function P defined on these four sets must be non-negative and must not exceed 1. In addition, on the two fundamental sets ∅ and Ω it must take on the prescribed values: P(∅) = 0 and P(Ω) = 1. The values P({H}) and P({T}), which we shall write more concisely as P(H) and P(T), must be somewhere in between. P(H) is expected to be the probability of the coin landing heads up; P(T) should be the probability of its landing tails up. It is up to us to assign those probabilities. Intuitively, those numbers should express our notion of the certainty with which the coin lands one way or the other. Since, for a fair coin, there is no way to prefer one side to the other, the most natural and common way is to make the two probabilities equal: (1) P(H) = P(T).

As in real life, the choices we make have consequences. Once we have decided that the two probabilities are equal, we are no longer at liberty to choose their common value. The definitions take over and dictate the result. Indeed, the two events {H} and {T} are mutually exclusive, so a probability function should satisfy the additivity requirement:

(2) P({H}) + P({T}) = P({H} ∪ {T}) = P({H, T}) = P(Ω) = 1.


The combination of (1) and (2) leads inevitably to the conclusion that a probability function that models a toss of a fair coin is bound to satisfy P(H) = P(T) = 1/2. Two events that have equal probabilities are said to be equiprobable. It's a common approach, especially in introductory probability courses, to define a probability function on a finite sample space by declaring all elementary events equiprobable and building up the function using the additivity requirement. Having a formal definition of a probability function avoids the apparent circularity of the construction hinted at elsewhere.

Let's consider the experiment of rolling a die. The sample space consists of 6 possible outcomes, {1, 2, 3, 4, 5, 6}, which, with no indication that the die used is loaded, are declared to be equiprobable. From here, the additivity requirement leads necessarily to: P(1) = P(2) = P(3) = P(4) = P(5) = P(6) = 1/6. Since all 6 elementary events - {1}, {2}, {3}, {4}, {5}, {6} - are mutually exclusive, we may readily apply the required additivity, for example: P({1, 2}) = P({1}) + P({2}) = 1/6 + 1/6 = 1/3, and similarly P({4, 5, 6}) = P({4}) + P({5}) + P({6}) = 1/6 + 1/6 + 1/6 = 1/2. Note that the 2-element event {1, 2} has the probability 1/3 = 2 × 1/6, whereas the 3-element event {4, 5, 6} has the probability 1/2 = 3 × 1/6.
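These additivity computations can be verified by enumeration (a Python sketch, not part of the original text; the helper `p` is ours):

```python
from fractions import Fraction

sample_space = {1, 2, 3, 4, 5, 6}  # equiprobable outcomes of one fair die

def p(event):
    """Probability of an event (a set of outcomes) by additivity:
    each favorable elementary outcome contributes 1/6."""
    return Fraction(len(event & sample_space), len(sample_space))

print(p({1, 2}))     # 1/3
print(p({4, 5, 6}))  # 1/2
```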

Let X be the random variable associated with the experiment of rolling the die. The introduction of a random variable allows for naming various sets in a convenient manner, e.g.,

{1, 2} = {x: x < 3},

and, for the probability, P({1, 2}) = P(X < 3) = 1/3. Similarly, P({4, 5, 6}) = P(X > 3) = 1/2. Here are a few additional examples: P({2, 4, 6}) = P(X is even) = 1/2, P({1, 2, 4, 5}) = P(X is not divisible by 3) = 2/3, P({2, 3, 5}) = P(X is prime) = 1/2. In general, if an event A has m favorable elementary outcomes, the additivity requirement implies P(A) = m/6. In other experiments, with n possible equiprobable elementary outcomes, we would have P(A) = m/n. For example, under normal circumstances, drawing a particular card from a deck of 52 cards is assigned a probability of 1/52. Drawing a named card (A, K, Q, J), of which there are 4 × 4 = 16, has a probability of 16/52. The event of drawing a black card has the probability 26/52 = 1/2, that of drawing a heart the probability 13/52 = 1/4, and the probability of drawing a 10 is 4/52 = 1/13.

Later on, we shall have examples of sample spaces where considering the elementary events as equiprobable is unjustified. However, whenever this is possible, the evaluation of probabilities becomes a combinatorial problem that requires finding the total number n of possible outcomes and the number m of the outcomes favorable to the event at hand. It is then natural that properties of combinatorial counting have bearings on the assignment and evaluation of probabilities.

When tossing two distinct (say, first and second) coins there are four possible outcomes, {HH, HT, TH, TT}, and no reason to declare one more likely than another. Thus each elementary event is assigned the probability of 1/4. Here are more examples: P(H popped up at least once) = P({HH, HT, TH}) = 3/4, P(first coin came up heads) = P({HH, HT}) = 2/4 = 1/2, P(the two outcomes were different) = P({HT, TH}) = 2/4 = 1/2. We consider tossing two coins as completely independent experiments, the outcome of one having no effect on the outcome of the other.
It follows then from the Sequential, or Product, Rule that the size of the sample space of the two experiments is the product of the sizes of the two sample spaces, and the same holds for the probabilities. For example,

P({HT}) = 1/4 = 1/2 × 1/2 = P({H}) × P({T}).
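The product rule for two distinct coins can be confirmed by brute-force enumeration (an illustrative Python sketch; the names are ours):

```python
from fractions import Fraction
from itertools import product

# All equally likely outcomes of tossing two distinct coins
outcomes = list(product("HT", repeat=2))  # ('H','H'), ('H','T'), ('T','H'), ('T','T')

def p(event_holds):
    """Probability of an event given as a predicate over outcomes."""
    return Fraction(sum(1 for o in outcomes if event_holds(o)), len(outcomes))

p_ht = p(lambda o: o == ("H", "T"))
p_first_h = p(lambda o: o[0] == "H")
p_second_t = p(lambda o: o[1] == "T")

print(p_ht)                    # 1/4
print(p_first_h * p_second_t)  # 1/4 -- the product rule
```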

More generally, suppose we are given two sample spaces S1 and S2 with the numbers of equiprobable outcomes n1 and n2, and two events E1 (on S1) and E2 (on S2) with the numbers of favorable outcomes m1 and m2. Then P(E1) = m1/n1 and P(E2) = m2/n2. The sample space of two successive experiments has n1 × n2 outcomes. The event E1E2, which occurs if E1 took place followed by E2 taking place, consists of m1 × m2 favorable outcomes, so that P(E1E2) = (m1 × m2)/(n1 × n2) = (m1/n1) × (m2/n2) = P(E1) × P(E2).

The two coins may be indistinguishable and, when thrown together, may produce only three possible outcomes, {{H, H}, {H, T}, {T, T}}, where the set notation is used to emphasize that the order of the outcomes of the two coins is irrelevant in this case. However, assigning each of these elementary events the probability of 1/3 is probably a bad choice. A more reasonable assignment is P({H, H}) = 1/4, P({H, T}) = 1/2, P({T, T}) = 1/4. Why? This is because the results of the two experiments won't change if we imagine the two coins different; say, if we think of them as being blue and red. But, for different coins, the number of elementary events is 4, with two of them - HT and TH - destined to coalesce into one - {H, T} - when we back off from our fantasy. The other two - HH and TT - will still have the probabilities of 1/4, and the remaining total of 1/2 should be given to {H, T}.

When rolling two dice, the sample space consists of 36 equiprobable elementary events, each with probability 1/36. The possible sums of the two dice range from 2 through 12, and the number of favorable events for each sum can be found by enumerating all 36 pairs:

Using S for the random variable equal to the sum of the two dice, the additivity requirement leads to the following probabilities:

P(S = 2) = 1/36, P(S = 3) = 2/36 = 1/18, P(S = 4) = 3/36 = 1/12, P(S = 5) = 4/36 = 1/9, P(S = 6) = 5/36, P(S = 7) = 6/36 = 1/6, P(S = 8) = 5/36, P(S = 9) = 4/36 = 1/9, P(S = 10) = 3/36 = 1/12, P(S = 11) = 2/36 = 1/18, P(S = 12) = 1/36.
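The same distribution can be built by enumerating all 36 pairs (a Python sketch, not part of the original text):

```python
from fractions import Fraction
from itertools import product
from collections import Counter

# Count how many of the 36 equiprobable (first die, second die) pairs give each sum
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
dist = {s: Fraction(c, 36) for s, c in sorted(counts.items())}

for s, prob in dist.items():
    print(s, prob)

# The events S = 2, ..., S = 12 are exhaustive: their probabilities sum to 1
print(sum(dist.values()))  # 1
# And P(S = 4) < P(S = 5), as noted below
print(dist[4] < dist[5])   # True
```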

Note that the events are mutually exclusive and exhaustive: their probabilities add up to 1. (As a curiosity, note that both the sum of 4 and the sum of 5 come up in two ways if order is ignored, viz., 4 = 1 + 3, 4 = 2 + 2, 5 = 1 + 4, and 5 = 2 + 3. However, as we just saw, P(S = 4) < P(S = 5). That this is so may be bewildering to the uninitiated.)

Let's return to tossing a coin. (For a historic example, see the Chevalier de Méré's Problem.) With 3 coins, the sample space consists of 8 = 2^3 possible outcomes. For 4 coins the number grows to 16 = 2^4, and so on. We obtain a curious sample space by tossing a coin until the first tail comes up. The probability P(T) that it will happen on the first toss equals 1/2. The probability P(HT) that it will happen on the second toss is evaluated under the assumption that the first toss showed heads, for, otherwise, the experiment would have stopped right after the first toss. The outcome of the first toss has no effect on the outcome of the second, so P(HT) = P(H)P(T) = 1/2 × 1/2 = 1/4. Continuing in this way, P(HHT) = 1/2 × 1/2 × 1/2 = 1/8 is the probability of getting the first tail on the third toss; P(HHHT) = 1/16 is the probability of getting it on the fourth toss, and so on. The events are mutually exclusive and exhaustive: P(T) + P(HT) + P(HHT) + ... = 1/2 + 1/4 + 1/8 + ... = (1/2)/(1 - 1/2) = 1, as the sum of a geometric series starting at 1/2 with common ratio also 1/2.
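The geometric series for the first tail can be verified numerically (an illustrative Python sketch; the function names are ours):

```python
from fractions import Fraction

# P(first tail on toss k) = (1/2)^k : k-1 heads followed by one tail
def p_first_tail_on(k):
    return Fraction(1, 2) ** k

# Probability that the first tail appears within n tosses
def p_tail_within(n):
    return sum(p_first_tail_on(k) for k in range(1, n + 1))

print(p_tail_within(4))   # 15/16
print(p_tail_within(50))  # just below 1
```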



This is a curiosity because there is one event that has been left over: the event in which the outcome T never occurs. An infinite number of coin tosses is called for, each with the outcome of heads: HHHH... Although abstractly this event is complementary to the possibility of having a tail in a finite number of steps, it is practically impossible because it requires an infinite number of coin tosses. Deservedly, it is assigned the probability of 0. The probability that tails will show up in four tosses or less equals P(T) + P(HT) + P(HHT) + P(HHHT) = 1/2 + 1/4 + 1/8 + 1/16 = (1/2)(1 - 1/2^4)/(1 - 1/2) = 15/16. More generally, the probability that the tails will show up in at most n tosses equals the sum 1/2 + 1/4 + 1/8 + ... + 1/2^n = (1/2)(1 - 1/2^n)/(1 - 1/2). The interpretation of the infinite sum 1/2 + 1/4 + 1/8 + ... is that this is the probability of the tails showing up in a finite number of steps. This probability is 1, so one should expect to get the tails sooner or later. For this sample space, an event with probability 0 is conceivable but practically impossible. In continuous sample spaces, events with probability 0 are a regular phenomenon and far from being impossible.

Mutually (Jointly) Independent Events

Two events A and B are independent if P(A ∩ B) = P(A)P(B). This definition extends to the notion of independence of a finite number of events. Let K be a finite set of indices. Events Ak, k ∈ K, are said to be mutually (or jointly) independent if

P(∩_{m ∈ M} Am) = ∏_{m ∈ M} P(Am),

for any subset M ⊆ K. For example, for four events A, B, C, D to be mutually independent, we must have

P(A ∩ B ∩ C ∩ D) = P(A)P(B)P(C)P(D),
P(A ∩ B ∩ C) = P(A)P(B)P(C),
P(A ∩ B ∩ D) = P(A)P(B)P(D),
P(A ∩ C ∩ D) = P(A)P(C)P(D),
P(B ∩ C ∩ D) = P(B)P(C)P(D),

P(A ∩ B) = P(A)P(B), P(A ∩ C) = P(A)P(C), P(A ∩ D) = P(A)P(D), P(B ∩ C) = P(B)P(C), P(B ∩ D) = P(B)P(D), P(C ∩ D) = P(C)P(D).



Thus, by the definitions, mutual independence implies pairwise independence. For two events, the definitions actually coincide; for more than two events, they do not. There are pairwise independent events that are not mutually independent. Two examples were produced by S. N. Bernstein years ago and discussed more recently (2007) by C. Stepniak.

Consider an urn containing four balls, labeled 110, 101, 011 and 000, from which one ball is drawn at random. For k = 1, 2, 3 let Ak be the event of drawing a ball with 1 in the kth position. Then P(Ak) = 1/2 and, for any two distinct indices k and m, P(Ak ∩ Am) = 1/4 = P(Ak)P(Am), so the three events are pairwise independent. However, since A1 ∩ A2 ∩ A3 = ∅, they are not mutually independent.

For a second example, let Bk be the event of drawing a ball with 0 in position k. Now, for k = 1, 2, 3, P(Bk) = 1/2, because in any of the three positions 0 appears exactly twice out of four possibilities. For any two distinct indices k and m, P(Bk ∩ Bm) = 1/4, for just one ball out of four has zeros in both positions k and m. Therefore, the events Bk are (pairwise) independent. However, P(B1 ∩ B2 ∩ B3) = 1/4, which is different from P(B1)P(B2)P(B3) = 1/8, meaning that the events are not mutually independent. The two examples are essentially different because in the first the intersection of the A's is empty whereas in the second the intersection of the B's is not.
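Bernstein's urn can be checked mechanically (a Python sketch, not part of the original text; positions are indexed from 0 here):

```python
from fractions import Fraction
from itertools import combinations

balls = ["110", "101", "011", "000"]  # one ball drawn at random, each with probability 1/4

def p(event):
    """Probability of an event given as a set of balls."""
    return Fraction(len(event), len(balls))

# A[k]: the drawn ball has '1' in position k
A = [set(b for b in balls if b[k] == "1") for k in range(3)]

# Pairwise independent: P(Ak ∩ Am) = P(Ak)P(Am) for every pair
for X, Y in combinations(A, 2):
    print(p(X & Y) == p(X) * p(Y))  # True, three times

# ...but not mutually independent: the triple intersection is empty
print(p(A[0] & A[1] & A[2]))        # 0
print(p(A[0]) * p(A[1]) * p(A[2]))  # 1/8
```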

Noting this, Stepniak proceeds to prove that Bernstein's are the only possible examples in a space with four outcomes. Thus assume that three (pairwise) independent events



A, B, C are defined in the space with four outcomes, none being the whole of the space. None may consist of a single outcome. For assume A = {x}. Then x may or may not belong to, say, B. If x ∈ B, then A ∩ B = A and P(A ∩ B) = P(A) ≠ P(A)P(B). On the other hand, if x ∉ B, then A and B are disjoint and, therefore, are not independent. It follows that each of the events contains at least two elements. Since the complements of two events are independent only if the events themselves are, we see that the complements of the events A, B, C also consist of at least 2 outcomes each. We conclude that each consists of exactly two outcomes. There are just two possibilities. Either there is an outcome common to all three events, which gives the configuration of the second example, or there is no outcome common to all events, which gives the configuration of the first example.

Probability in our lives

i) Weather forecasting
Suppose you want to go on a picnic this afternoon, and the weather report says that the chance of rain is 70%. Do you ever wonder where that 70% came from? Forecasts like these can be calculated by the people who work for the National Weather Service: they look at all other days in their historical database that have the same weather characteristics (temperature, pressure, humidity, etc.) and determine that on 70% of similar days in the past, it rained. As we've seen, to find basic probability we divide the number of favorable outcomes by the total number of possible outcomes in our sample space. If we're looking for the chance it will rain, this will be the number of days in our database on which it rained divided by the total number of similar days in our database. If our meteorologist has data for 100 days with similar weather conditions (the sample space and therefore the denominator of our fraction), and on 70 of these days it rained (a favorable outcome), the probability of rain on the next similar day is 70/100 or 70%. Since a 50% probability means that an event is as likely to occur as not, 70%, which is greater than 50%, means that it is more likely to rain than not. But what is the probability that it won't rain? Remember that because the favorable outcomes represent all the possible ways that an event can occur, the sum of the various probabilities must equal 1 or 100%, so 100% - 70% = 30%, and the probability that it won't rain is 30%.

ii) Batting averages



Let's say your favorite baseball player is batting 300. What does this mean? A batting average involves calculating the probability of a player's getting a hit. The sample space is the total number of at-bats a player has had, not including walks. A hit is a favorable outcome. Thus if in 10 at-bats a player gets 3 hits, his or her batting average is 3/10 or 30%. For baseball stats we multiply all the percentages by 10, so a 30% probability translates to a 300 batting average. This means that when a Major Leaguer with a batting average of 300 steps up to the plate, he has only a 30% chance of getting a hit - and since most batters hit below 300, you can see how hard it is to get a hit in the Major Leagues!

iii) Business
Probability is used throughout business to evaluate financial and decision-making risks. Every decision made by management carries some chance of failure, so probability analysis is conducted both formally ("math") and informally (i.e., "I hope"). Math is the preferred method, but requires some advanced training, such as college courses.

Future Exploration
In probability theory, the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed. For example, a single roll of a six-sided die produces one of the numbers 1, 2, 3, 4, 5, 6, each with equal probability. Therefore, the expected value of a single die roll is (1 + 2 + 3 + 4 + 5 + 6)/6 = 3.5. According to the law of large numbers, if a large number of dice are rolled, the average of their values (sometimes called the sample mean) is likely to be close to 3.5, with the accuracy increasing as more dice are rolled. Similarly, when a fair coin is flipped once, the expected value of the number of heads is equal to one half. Therefore, according to the law of large numbers, the proportion of heads in a large number of coin flips should be roughly one half. In particular, the proportion of heads after n flips will almost surely converge to one half as n approaches infinity. Though the proportion of heads (and tails) approaches one half, almost surely the absolute (nominal) difference in the number of heads and tails will become large as the number of flips becomes large. That is, the probability that the absolute difference is a small number approaches zero as the number of flips becomes large. Also, almost surely the ratio of the absolute difference to the number of flips will



approach zero. Intuitively, the expected absolute difference grows, but at a slower rate than the number of flips, as the number of flips grows. The LLN is important because it "guarantees" stable long-term results for random events. For example, while a casino may lose money on a single spin of the roulette wheel, its earnings will tend towards a predictable percentage over a large number of spins. Any winning streak by a player will eventually be overcome by the parameters of the game. It is important to remember that the LLN only applies (as the name indicates) when a large number of observations is considered. There is no principle that a small number of observations will converge to the expected value, or that a streak of one value will immediately be "balanced" by the others.
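The convergence of the sample mean toward 3.5 can be watched in a small simulation (an illustrative Python sketch; the seed is arbitrary and only fixes the run for reproducibility):

```python
import random

random.seed(0)  # arbitrary fixed seed so the run is reproducible

# Sample means of die rolls for increasing numbers of trials:
# by the LLN they should drift toward the expected value 3.5.
for n in (10, 1000, 100000):
    rolls = [random.randint(1, 6) for _ in range(n)]
    mean = sum(rolls) / n
    print(n, round(mean, 3))
```

The small sample typically lands noticeably away from 3.5, while the large one sits very close to it.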

Probability measures how likely a particular event is among all the outcomes that can possibly happen. It is expressed as the ratio of the number of ways the particular event can happen to the total number of possible outcomes in the system. The formula for probability is P = n/N, where P is the probability, N is the number of elements in the set (the sample space), and n is the number of favorable elements. The conclusion is that a probability always lies between 0 and 1.

Conclusion of Probability - Example Problems:

Ex: 1 A basketball spinner has 3 equal sectors colored brown, black, and white. What are the chances of landing on blue after spinning? What are the chances of landing on black? Solution: Given that the 3 colors are brown, black, and white, N = 3. There is no blue sector, so P(blue) = 0/3 = 0. For black, the number of favorable elements is n = 1, so P(black) = number of favorable elements / total number of elements = 1/3.


This is the conclusion of the probability for this example.

Ex: 2 A tanker lorry contains 6 red tins, 5 green tins, 8 blue tins and 3 yellow tins. If a single tin is chosen at random, what is the probability of choosing a red tin from the tanker lorry? Solution: Given that the 4 colors are red, green, blue and yellow, N = 6 + 5 + 8 + 3 = 22. The number of favorable elements is n = 6, so P(red) = 6/22 = 3/11. This is the conclusion of probability for this example.
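Both worked examples reduce to the same favorable-over-total fraction (a Python sketch, not part of the original solutions; the variable names are ours):

```python
from fractions import Fraction

# Ex. 1: spinner with 3 equal sectors (brown, black, white)
sectors = ["brown", "black", "white"]
print(Fraction(sectors.count("blue"), len(sectors)))   # 0  (no blue sector)
print(Fraction(sectors.count("black"), len(sectors)))  # 1/3

# Ex. 2: 6 red, 5 green, 8 blue, 3 yellow tins
tins = {"red": 6, "green": 5, "blue": 8, "yellow": 3}
total = sum(tins.values())               # 22
print(Fraction(tins["red"], total))      # 3/11
```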

Conclusion of Probability - Practice Problems:

1). A bus is chosen at random in the city from bus number 1 to bus number 5. What is the probability of getting bus number 3? Ans = 1/5 = 0.2

2). A computer is used by two students. What is the probability of each student using the system separately? Ans =