YBE: Bayes' Theorem is named after Thomas Bayes. It involves two kinds of
probabilities: the prior probability and the posterior probability. According to
Bayes' Theorem, the posterior equals the prior times the likelihood, divided by
the evidence. (SLIDE 3)
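As a quick numeric sketch of the rule just stated, the snippet below plugs in made-up numbers (they do not come from the slides) for a simple diagnostic-test scenario:

```python
# Bayes' Theorem: posterior = prior * likelihood / evidence
# Illustrative numbers only: a rare condition and an imperfect test.
prior = 0.01          # P(condition)
likelihood = 0.95     # P(positive test | condition)
# Evidence via the law of total probability: P(positive test)
evidence = 0.01 * 0.95 + 0.99 * 0.05
posterior = prior * likelihood / evidence
print(round(posterior, 3))  # → 0.161
```

Note how a high likelihood can still yield a modest posterior when the prior is small; this is exactly the prior/likelihood interplay the lecture returns to later.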
YBE: Bayes' Rule is applied here to calculate the posterior from the prior and the
likelihood, because the latter two are generally easier to obtain from a
probability model.
YBE: The Naive Bayes classifier technique is particularly suited to problems where
the dimensionality of the inputs is high. Despite its simplicity, Naive Bayes can
often outperform more sophisticated classification methods. (SLIDE 4)
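A minimal hand-rolled sketch of a Naive Bayes classifier on toy categorical data may make the idea concrete. The training samples and feature names here are invented for illustration; the classifier multiplies the class prior by per-feature likelihoods, assuming the features are conditionally independent given the class:

```python
from collections import Counter, defaultdict

# Toy training data (invented): each sample is a tuple of feature values
# plus a class label.
train = [(("sunny", "hot"), "no"), (("sunny", "mild"), "no"),
         (("rainy", "mild"), "yes"), (("rainy", "hot"), "yes"),
         (("rainy", "mild"), "yes")]

priors = Counter(label for _, label in train)   # class counts
counts = defaultdict(Counter)                   # counts[(feature_index, label)][value]
for features, label in train:
    for i, value in enumerate(features):
        counts[(i, label)][value] += 1

def posterior_score(features, label):
    # Unnormalised posterior: prior * product of per-feature likelihoods,
    # with add-one (Laplace) smoothing; each feature has 2 possible values here.
    score = priors[label] / len(train)
    for i, value in enumerate(features):
        score *= (counts[(i, label)][value] + 1) / (priors[label] + 2)
    return score

def classify(features):
    # Pick the class with the highest unnormalised posterior; the evidence
    # term is the same for every class, so it can be ignored.
    return max(priors, key=lambda label: posterior_score(features, label))

print(classify(("sunny", "hot")))  # → no
```

The conditional-independence assumption is what keeps the method tractable in high dimensions: each extra feature adds one multiplicative factor rather than blowing up the joint distribution.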
CHES: In this example, the objects can be classified as either GREEN or RED.
Our task is to classify new cases as they arrive, that is, to decide which class
label they belong to, based on the currently existing objects. (SLIDE 5)
CHES: We can then calculate the priors (that is, the probability of each class
among all objects) based on previous experience. Thus: the prior probability
for GREEN equals the number of GREEN objects divided by the total
number of objects, and the prior probability for RED equals the number of
RED objects divided by the total number of objects. (SLIDE 6)
CHES: Although the prior probabilities indicate that X may belong to GREEN
(given that there are twice as many GREEN objects as RED), the likelihood
indicates otherwise: that the class membership of X is RED (given that there are
more RED objects than GREEN in the vicinity of X). In Bayesian analysis, the
final classification is produced by combining both sources of information (that is, the
prior and the likelihood) to form a posterior probability using Bayes' Rule. (SLIDE 11)
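The combination of prior and likelihood just described can be worked through numerically. The counts below are assumptions for illustration (the transcript only says there are twice as many GREEN objects as RED, and more RED than GREEN near X):

```python
# GREEN/RED example with illustrative counts: 40 GREEN and 20 RED objects
# in total; in the vicinity of X, 1 GREEN and 3 RED.
total_green, total_red = 40, 20
near_green, near_red = 1, 3

prior_green = total_green / (total_green + total_red)  # 2/3
prior_red = total_red / (total_green + total_red)      # 1/3
likelihood_green = near_green / total_green            # 1/40
likelihood_red = near_red / total_red                  # 3/20

# Unnormalised posteriors; the evidence term cancels when comparing classes.
post_green = prior_green * likelihood_green
post_red = prior_red * likelihood_red
print("RED" if post_red > post_green else "GREEN")  # → RED
```

Even though GREEN has the larger prior, the likelihood dominates here and X is classified as RED, matching the slide's conclusion.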
YBE: Each node in a directed acyclic graph represents a random variable. These
variables may be discrete or continuous valued, and they may correspond to
actual attributes given in the data. (SLIDE 15)
YBE: For example, lung cancer is influenced by a person's family history of lung
cancer, as well as by whether or not the person is a smoker. It is worth noting that
the variable PositiveXray is independent of whether the patient has a family history
of lung cancer or is a smoker, given that we know the patient has lung
cancer. (SLIDE 16)
YBE: The conditional probability table for the values of the variable LungCancer
(LC), showing each possible combination of the values of its parent nodes
FamilyHistory (FH) and Smoker (S), is as follows: (SLIDE 17)
The table shows that when the FamilyHistory and Smoker variables are both
positive, the probability of LungCancer is 0.8 for positive and 0.2 for negative. If
the FamilyHistory variable is positive and the Smoker variable is negative, the
probability of LungCancer is 0.5 for both positive and negative. If the FamilyHistory
variable is negative and the Smoker variable is positive, the probability of LungCancer
is 0.7 for positive and 0.3 for negative. And if the FamilyHistory and
Smoker variables are both negative, the probability of LungCancer is 0.1 for
positive and 0.9 for negative.
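The CPT read out above can be transcribed directly as a lookup structure; a minimal sketch (the function name and representation are my own, not from the slides):

```python
# CPT for LungCancer (LC) given FamilyHistory (FH) and Smoker (S),
# transcribed from the values above. Keys are (FH, S) pairs; values are
# (P(LC=yes), P(LC=no)).
cpt = {
    (True, True): (0.8, 0.2),
    (True, False): (0.5, 0.5),
    (False, True): (0.7, 0.3),
    (False, False): (0.1, 0.9),
}

def p_lung_cancer(fh, s, positive=True):
    """Look up P(LC | FH=fh, S=s) from the conditional probability table."""
    p_yes, p_no = cpt[(fh, s)]
    return p_yes if positive else p_no

print(p_lung_cancer(True, False))  # → 0.5
# Sanity check: each row must sum to 1, as a conditional distribution must.
assert all(abs(sum(row) - 1.0) < 1e-9 for row in cpt.values())
```

Each row of the table is a full conditional distribution over LC for one configuration of its parents, which is why the positive and negative entries in every row sum to 1.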