
ENGINEERING MANAGEMENT
Bautista, Cristel V. Engr. Mark Anthony Castro
BS ECE – 2B February 11, 2020

HOMEWORK

Bayesian analysis is a statistical procedure which endeavors to estimate parameters of an underlying distribution based on the observed distribution. The purpose of Bayesian analysis is to revise and update the initial assessments of the event probabilities generated by the alternative solutions.

Bayes' theorem states that:

Posterior ∝ Likelihood × Prior

This can also be stated as P(A | B) = (P(B | A) * P(A)) / P(B), where P(A | B) is the probability of A given B, also called the posterior, and P(B) is the evidence, which normalizes the result.
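The formula above can be sketched numerically. The numbers below are illustrative assumptions, not from the text: suppose 1% of items are defective (the prior P(A)), a test flags 95% of defective items (the likelihood P(B | A)), and it also flags 10% of good items.

```python
p_a = 0.01            # prior P(A): 1% of items are defective (assumed)
p_b_given_a = 0.95    # likelihood P(B|A): test flags a defective item (assumed)
p_b_given_not_a = 0.10  # false-positive rate on good items (assumed)

# Evidence P(B) by the law of total probability
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Posterior P(A|B) by Bayes' theorem
p_a_given_b = p_b_given_a * p_a / p_b
print(round(p_a_given_b, 4))  # about 0.0876
```

Even with a sensitive test, the posterior stays below 9% because the prior probability of a defect is so small, which is exactly the kind of revision of an initial assessment described above.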

The prior is the probability distribution representing knowledge or uncertainty about the parameters before observing the data. The posterior is the conditional probability distribution representing which parameter values are likely after observing the data. The likelihood is the probability of observing the data given a particular parameter value, while the evidence is the overall probability of the data, which normalizes the posterior.

The prior belief may be based on anything, including an assessment of the relative likelihoods of parameters or the results of non-Bayesian observations. In practice, it is common to assume a uniform distribution over the appropriate range of values for the prior distribution.

Given the prior distribution, collect data to obtain the observed distribution. Then calculate the
likelihood of the observed distribution as a function of parameter values, multiply this likelihood function
by the prior distribution, and normalize to obtain a unit probability over all possible values. This is called
the posterior distribution. The mode of the distribution is then the parameter estimate, and "probability
intervals" (the Bayesian analog of confidence intervals) can be calculated using the standard procedure.
Bayesian analysis is somewhat controversial because the validity of the result depends on how valid the
prior distribution is, and this cannot be assessed statistically.
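The procedure just described (multiply prior by likelihood, normalize, take the mode) can be sketched on a grid of candidate values. The data here are assumed for illustration: estimating a coin's heads probability from 7 heads in 10 flips, with a uniform prior.

```python
import numpy as np

n, k = 10, 7                          # assumed data: 7 heads in 10 flips
grid = np.linspace(0, 1, 1001)        # candidate parameter values
prior = np.ones_like(grid)            # uniform prior over the range

# Binomial likelihood of the data at each candidate value (constant factor omitted)
likelihood = grid**k * (1 - grid)**(n - k)

# Multiply by the prior and normalize to unit probability over all values
posterior = prior * likelihood
posterior /= posterior.sum()

# The mode of the posterior distribution is the parameter estimate
estimate = grid[np.argmax(posterior)]
print(estimate)  # 0.7, matching the observed proportion under a flat prior
```

With a uniform prior the mode coincides with the maximum-likelihood estimate; a strong prior would pull the mode toward its own peak, as the example below illustrates.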
EXAMPLE:
20 students are randomly picked off a city street in Pampanga. Whether they are male or female is
noted on 20 identical pieces of paper, put into a hat and the hat is brought to me. I have not seen these 20
people. I take out five pieces of paper from the hat and read them - three are female. I am then asked to
estimate the number of females in the original group of twenty.

The total population is N = 20 and the sample size is n = 5; the number observed in the sample with the required property is x = 3. The number of females in the population, D, is unknown; it is denoted q and is the parameter to be estimated.
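The setup above can be sketched in code. Sampling without replacement makes the likelihood hypergeometric; the text says the prior peaks at q = 10, so a simple triangular prior centered at 10 stands in for it here (an assumption, not the author's exact prior).

```python
from math import comb

N, n, x = 20, 5, 3                     # population, sample size, females observed
qs = range(N + 1)                      # candidate numbers of females q = 0..20

# Hypergeometric likelihood P(x = 3 | q): comb() returns 0 for impossible draws
likelihood = [comb(q, x) * comb(N - q, n - x) / comb(N, n) for q in qs]

# Assumed triangular prior peaking at q = 10 (stand-in for the prior in the text)
prior = [max(0, 10 - abs(q - 10)) for q in qs]

# Multiply, then normalize to a unit posterior distribution
unnorm = [p * l for p, l in zip(prior, likelihood)]
total = sum(unnorm)
posterior = [u / total for u in unnorm]

mode = max(qs, key=lambda q: posterior[q])
print(mode)
```

Under this assumed prior the posterior mode stays at q = 10 even though the sample proportion alone (3/5 of 20 = 12) would suggest more females, consistent with the discussion below of a strong prior dominating a small sample.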

[Figure: prior, likelihood, and posterior distributions over q.]

The prior is very strong and the amount of information embedded in the likelihood function is small, so the posterior distribution is quite close to the prior.
The posterior distribution is a balance between the prior and likelihood function. Hence, the peak
of the posterior distribution now lies somewhere between the peaks of the prior and likelihood function.
The effect of the likelihood function is small because the sample is small (a sample of five) and because it
is in reasonable agreement with the prior (the prior has a maximum at q = 10, and this value of q also
produces one of the highest likelihood function values).

REFERENCE:
https://towardsdatascience.com/probability-concepts-explained-bayesian-inference-for-parameter-estimation-90e8930e5348
https://www.oreilly.com/library/view/machine-learning-with/9781785889936/ff082869-751b-4de3-9a59-edff60ad4e94.xhtml
