
Expert Systems

Presented by

Mohammad Saniee
December 2, 2003

Department of Computer Engineering


Sharif University of Technology
Expert Systems

A branch of Artificial Intelligence that makes extensive use of specialized knowledge to solve problems at the level of a human expert.
(Diagram: Expert Systems shown as one branch of AI, alongside Vision, Natural Language, Robotics, and Neural Networks.)
Why do we need Expert Systems?

• Increased availability
• Permanence
• Reduced Danger
• Reduced Cost
• Multiple expertise
• Increased Reliability
• Explanation facility
• Fast Response
• Steady, unemotional & complete response
• Intelligent tutor
Expert System building Process
• Selecting a specific domain
• Scoping the project – the purpose/functionality of the expert system
• Identifying human resources such as the domain expert, knowledge engineer, etc.
• Knowledge Acquisition
• Designing user interface
• Implementing the expert system
• Maintenance and update of Knowledge Base/System
Expert System components
• Working Memory
– A global database of facts used by the system
• Knowledge Base
– Contains the domain knowledge
• Inference Engine
– The brain of the Expert system. Makes logical deductions based upon the
knowledge in the KB.
• User Interface
– A facility for the user to interact with the Expert system.
• Explanation Facility
– Explains reasoning of the system to the user
• Knowledge Acquisition Facility
– An automatic way to acquire knowledge
Expert System Structure
(Diagram: the Inference Engine connects the Knowledge Base and the Working Memory; the Explanation Facility and the Knowledge Acquisition Facility attach to them, and the User Interface links the user to the system.)
Knowledge Types

• The knowledge base of an expert system contains both factual and heuristic knowledge.

– Factual knowledge is that knowledge of the task domain that is widely shared,
typically found in textbooks or journals, and commonly agreed upon by those
knowledgeable in the particular field.

• The capital of Italy is Rome
• A day consists of 24 hours
• Bacterium type A causes flu type B

– Heuristic knowledge is the less rigorous, more experiential, more judgmental knowledge of performance.

• For instance, in a medical expert system – if the patient has spots, it's probably chickenpox
• In a mechanical troubleshooting system – if the engine doesn't turn over, check the battery
Knowledge Representation
• Knowledge representation formalizes and organizes the knowledge. The two most widely used representations are:

– Production rules: A rule consists of an IF part and a THEN part (also called a condition and an action). If the IF part of the rule is satisfied, the THEN part can be concluded, or its problem-solving action taken. Rule-based expert systems use this representation, e.g.,

IF the stain of the organism is gram-negative AND the morphology of the organism is rod AND the aerobicity of the organism is anaerobic
THEN there is strongly suggestive evidence (0.8) that the class of the organism is Enterobacteriaceae.

– Frames or units: A unit is an assemblage of associated symbolic knowledge about the represented entity. Typically, a unit consists of a list of properties of the entity and associated values for those properties.
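As a rough illustration (not part of the original slides), these two representations might be written as simple Python data structures; the rule content comes from the MYCIN-style example above, while the frame slots are made up:

```python
# Hypothetical encodings of the two representations discussed above.

# Production rule: an IF part (conditions) and a THEN part
# (conclusion plus a certainty factor), mirroring the rule above.
rule = {
    "if": [
        "stain of the organism is gram-negative",
        "morphology of the organism is rod",
        "aerobicity of the organism is anaerobic",
    ],
    "then": ("class of the organism is Enterobacteriaceae", 0.8),
}

# Frame (unit): a named entity with a list of properties and their values.
# The slots here are invented examples.
patient_frame = {
    "entity": "patient-001",
    "properties": {
        "age": 42,
        "symptoms": ["spots", "fever"],
        "diagnosis": None,   # slot to be filled by the inference engine
    },
}
```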
Rule-Based Expert Systems

Expert systems that represent domain knowledge using production rules. There are two types of rule-based systems:

– Forward chaining systems
– Backward chaining systems


Forward Chaining Systems
Forward chaining systems support chaining of IF-THEN rules to form a line of reasoning. The chaining starts from a set of conditions and moves toward a conclusion.

Question: Does employee John get a computer?
Rule: If John is an employee, he gets a computer.
Fact: John is an employee.
Conclusion: John gets a computer.
Forward Chaining
• The rules are of the form:
left hand side (LHS) ==> right hand side (RHS).

• The execution cycle is:

– Select a rule whose left-hand-side conditions match the current state as stored in the working memory.

– Execute the right-hand side of that rule, thus changing the current state.

– Repeat until no rules apply.
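A minimal sketch of this cycle in Python, assuming ground facts and no variables (illustrative only; it uses the John/computer rule from the previous slide):

```python
# Toy forward-chaining cycle: fire any rule whose conditions are all in
# working memory and whose conclusion is not yet there; repeat until
# no rule applies.

rules = [
    {"if": ["John is an employee"], "then": "John gets a computer"},
]
working_memory = {"John is an employee"}

changed = True
while changed:
    changed = False
    for rule in rules:
        lhs_holds = all(cond in working_memory for cond in rule["if"])
        if lhs_holds and rule["then"] not in working_memory:
            working_memory.add(rule["then"])   # execute the RHS: update the state
            changed = True

print(working_memory)
# {'John is an employee', 'John gets a computer'}
```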


Forward Chaining
• Facts are represented in a working memory which is continually updated.

• Rules represent possible actions to take when specified conditions hold on items in the working memory.

• The conditions are usually patterns that must match items in the working memory, while the actions usually involve adding or deleting items from the working memory.
Forward Chaining (example)
First we'll look at a very simple set of rules (capital letters indicate variables):

1. IF (lecturing X) AND (marking-practicals X) THEN ADD (overworked X)
2. IF (month February) THEN ADD (lecturing Alison)
3. IF (month February) THEN ADD (marking-practicals Alison)
4. IF (overworked X) OR (slept-badly X) THEN ADD (bad-mood X)
5. IF (bad-mood X) THEN DELETE (happy X)
6. IF (lecturing X) THEN DELETE (researching X)

Initial working memory: (month February) (happy Alison) (researching Alison)

• Rules 2 & 3 apply; let's assume rule 2 is chosen. Working memory becomes:
(lecturing Alison) (month February) (happy Alison) (researching Alison)
• Rules 3 & 6 now apply; assume rule 3 is chosen. This cycle continues, and we end up with:
(bad-mood Alison) (overworked Alison) (marking-practicals Alison) (lecturing Alison) (month February)
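The same trace can be reproduced with a small toy engine (a sketch, not an OPS-style implementation). Facts are (predicate, argument) pairs, "X" is the single pattern variable, and rule 4's OR is split into two rules for simplicity:

```python
# Toy forward chainer with a single pattern variable "X".
# Rules: (name, conditions, facts to ADD, facts to DELETE).
rules = [
    ("R1", [("lecturing", "X"), ("marking-practicals", "X")], [("overworked", "X")], []),
    ("R2", [("month", "February")], [("lecturing", "Alison")], []),
    ("R3", [("month", "February")], [("marking-practicals", "Alison")], []),
    ("R4a", [("overworked", "X")], [("bad-mood", "X")], []),   # OR split into R4a / R4b
    ("R4b", [("slept-badly", "X")], [("bad-mood", "X")], []),
    ("R5", [("bad-mood", "X")], [], [("happy", "X")]),
    ("R6", [("lecturing", "X")], [], [("researching", "X")]),
]
facts = {("month", "February"), ("happy", "Alison"), ("researching", "Alison")}

def instantiate(pattern, value):
    """Replace the variable X in a (predicate, argument) pattern."""
    pred, arg = pattern
    return (pred, value if arg == "X" else arg)

def bindings(conditions, facts):
    """Yield values of X for which every condition is in working memory."""
    for value in {arg for _, arg in facts}:
        if all(instantiate(c, value) in facts for c in conditions):
            yield value

changed = True
while changed:        # naive recognize-act cycle: fire the first rule that changes anything
    changed = False
    for name, conds, adds, deletes in rules:
        for x in bindings(conds, facts):
            new = {instantiate(p, x) for p in adds} - facts
            gone = {instantiate(p, x) for p in deletes} & facts
            if new or gone:
                facts |= new
                facts -= gone
                changed = True
                break
        if changed:
            break

print(facts)
# ends with (bad-mood Alison), (overworked Alison), (marking-practicals Alison),
# (lecturing Alison), (month February); (happy Alison) and (researching Alison) deleted
```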
Example of Forward Chaining System

• XCON
– Developed by DEC to configure computers.
– Starts with the data about the customer order and
works forward toward a configuration based on
that data.
– Written in OPS5, a forward-chaining rule-based language.
Backward Chaining System
If the conclusion is known (goal to be achieved) but
the path to that conclusion is not known, then
reasoning backwards is called for, and the method is
backward chaining.
• The consequent part of a rule specifies combinations of facts (goals) to be matched against the Working Memory.
• The condition part of the rule is then used as a set of
further sub-goals to be proven / satisfied.
Backward Chaining example
Question: Does employee John get a computer?
Statement: John gets a computer.
Rule: If an employee is a programmer, then he gets a computer.

Backward Chaining:
Check the rule base to see what has to be true for John to get a computer: he must be a programmer. Is it a fact that John is a programmer? If so, then he gets a computer.
Backward Chaining

• Start with a goal state

• The system first checks whether the goal matches the initial facts given. If it does, the goal succeeds. If it doesn't, the system looks for rules whose conclusions match the goal.

• One such rule will be chosen, and the system will then try to prove any
facts in the preconditions of the rule using the same procedure, setting these
as new goals to prove.

• The system needs to keep track of the goals it must prove in order to establish its main hypothesis.
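A minimal recursive sketch of this procedure (illustrative only; the facts and rules below are based on the John/programmer example, not taken from the slides):

```python
# Toy backward chainer: a goal succeeds if it is a known fact, or if some
# rule concludes it and every condition of that rule can be proven in turn
# (the conditions become new sub-goals).

facts = {"John is an employee", "John is a programmer"}
rules = [
    {"if": ["John is a programmer"], "then": "John gets a computer"},
]

def prove(goal):
    if goal in facts:                      # goal matches a known fact
        return True
    return any(rule["then"] == goal and all(prove(sub) for sub in rule["if"])
               for rule in rules)          # otherwise try rules concluding the goal

print(prove("John gets a computer"))   # True: sub-goal "John is a programmer" is a fact
print(prove("Mary gets a computer"))   # False: no fact or rule supports it
```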
Backward Chaining (example)
1. IF (lecturing X) AND (marking-practicals X) THEN (overworked X)
2. IF (month February) THEN (lecturing Alison)
3. IF (month February) THEN (marking-practicals Alison)
4. IF (overworked X) THEN (bad-mood X)
5. IF (slept-badly X) THEN (bad-mood X)
6. IF (month February) THEN (weather cold)
7. IF (year 1993) THEN (economy bad)

• Initial facts: (month February) (year 1993)
• Goal to be proved: (bad-mood Alison)
• The goal is not satisfied by the initial facts.
• Rules 4 & 5 apply; assume rule 4 is chosen. New goal: (overworked Alison)
• Rule 1 applies. New goal: (lecturing Alison)
Conflict Resolution (I)
• Conflict resolution is the method used when more than one rule matches the facts asserted. There are several approaches:

– First in, first served
• Fire the first rule that matches the contents of the working memory (the facts asserted).

– Last in, first served
• The rule applied will be the last rule that was matched.

– Prioritization
• The rule to apply is selected based on priorities set on the rules, with priority information usually provided by an expert or knowledge engineer.
Conflict Resolution (II)
• Specificity – The rule applied is usually the most specific rule, or the rule that matches the most facts.

• Recency – The rule applied is the rule that matches the most recently derived facts.

• Fired rules – Involves not applying rules that have already been used.
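The strategies on the last two slides can be illustrated with a toy conflict set; the bookkeeping fields below (match order, priority, number of conditions, recency of the newest matched fact, fired flag) are hypothetical:

```python
# Illustrative sketch: a "conflict set" of rules that all match, and how
# each conflict-resolution strategy picks the rule to fire.

conflict_set = [
    {"name": "R2", "matched_order": 0, "priority": 1, "n_conditions": 1,
     "newest_fact_time": 0, "already_fired": False},
    {"name": "R3", "matched_order": 1, "priority": 5, "n_conditions": 1,
     "newest_fact_time": 0, "already_fired": True},
    {"name": "R7", "matched_order": 2, "priority": 2, "n_conditions": 2,
     "newest_fact_time": 4, "already_fired": False},
]

first_served  = min(conflict_set, key=lambda r: r["matched_order"])     # first in, first served
last_served   = max(conflict_set, key=lambda r: r["matched_order"])     # last in, first served
by_priority   = max(conflict_set, key=lambda r: r["priority"])          # prioritization
most_specific = max(conflict_set, key=lambda r: r["n_conditions"])      # specificity
most_recent   = max(conflict_set, key=lambda r: r["newest_fact_time"])  # recency
not_yet_fired = [r for r in conflict_set if not r["already_fired"]]     # fired-rules filter

print(first_served["name"], last_served["name"], by_priority["name"],
      most_specific["name"], most_recent["name"],
      [r["name"] for r in not_yet_fired])
# R2 R7 R3 R7 R7 ['R2', 'R7']
```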
Conflict Resolution (example)
First we'll look at a very simple set of rules (capital letters indicate variables):

1. IF (lecturing X) AND (marking-practicals X) THEN ADD (overworked X)
2. IF (month February) THEN ADD (lecturing Alison)
3. IF (month February) THEN ADD (marking-practicals Alison)
4. IF (overworked X) OR (slept-badly X) THEN ADD (bad-mood X)
5. IF (bad-mood X) THEN DELETE (happy X)
6. IF (lecturing X) THEN DELETE (researching X)
7. IF (marking-practicals X) THEN ADD (needs-rest X)

Working memory: (month February) (researching Alison) (overworked Alison)

• First in, first served: apply rule 2
• Last in, first served: apply rule 3

After rule 3 fires, working memory is: (month February) (researching Alison) (overworked Alison) (marking-practicals Alison)

• Recency: apply the rule that matches the most recent fact – rule 7
• Fired rules: don't fire the same rule again
• Specificity: if two rules match but one of them matches more facts, we'll choose that rule
• Prioritization: if we add priorities to these rules, the higher-priority rule will be fired
Uncertainty

• The expert system must deal with the uncertainty that comes from the individual rules, conflict resolution, and incompatibilities among the rules. Certainty factors can be assigned to the rules, as in the case of MYCIN.
Uncertainty in MYCIN
• Rules contain certainty factors (CFs).

– They make inexact inferences on a confidence scale of -1.0 to 1.0.
– 1.0 represents complete confidence that a statement is true.
– -1.0 represents complete confidence that it is false.
– The CFs are measurements of the association between the premise and action clauses of each rule.

When a production rule succeeds because its premise clauses are true in the current context, the CFs of the component clauses (which indicate how strongly each clause is believed) are combined, and the resulting CF is used to modify the CF specified in the action clause.
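This combination step is often described with the following certainty-factor arithmetic (a sketch of the scheme commonly attributed to MYCIN; the numbers are illustrative):

```python
# Sketch of MYCIN-style certainty-factor arithmetic (illustrative values).

def premise_cf(clause_cfs):
    """CF of a conjunctive premise: the weakest clause dominates."""
    return min(clause_cfs)

def conclusion_cf(rule_cf, premise):
    """CF attached to the action clause, scaled by how strongly the premise holds."""
    return rule_cf * max(0.0, premise)

def combine(cf1, cf2):
    """Combine two CFs obtained for the same conclusion from different rules."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

p = premise_cf([0.9, 0.7, 1.0])     # e.g., three premise clauses with their own CFs
print(conclusion_cf(0.8, p))        # ~0.56: the rule's 0.8 scaled by the premise CF
print(combine(0.56, 0.4))           # ~0.74: a second rule strengthens the same conclusion
```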
Explanation facilities

• Explains the reasoning process used to arrive at a conclusion.

– Provides the user with a means of understanding the system's behavior.
– This is important because a consultation with a human expert will often require some explanation.
– Many people would not always accept the answers of an expert without some form of justification.
– e.g., a medical expert providing a diagnosis/treatment for a patient is expected to explain the reasoning behind his/her conclusions: the uncertain nature of this type of decision may demand a detailed explanation so that the patient concerned is aware of any risks, alternative treatments, etc.
Expert System Tools (I)

• PROLOG
– A programming language that uses backward chaining.
• ART-IM (Inference Corporation)
– Following the distribution of NASA's CLIPS, Inference Corporation implemented a forward-chaining
only derivative of ART/CLIPS called ART-IM.
• ART (Inference Corporation)
– In 1984, Inference Corporation developed the Automated Reasoning Tool (ART), a forward chaining
system.
• CLIPS
– NASA took the forward chaining capabilities and syntax of ART and introduced the "C Language
Integrated Production System" (i.e., CLIPS) into the public domain.
• OPS5 (Carnegie Mellon University)
– First AI language used for production systems; used to implement XCON.
• Eclipse (The Haley Enterprise, Inc.)
– Eclipse is the only C/C++ inference engine that supports both forward and backward chaining.
Expert Systems Tools (II)

• Expert System Shells
– Provide mechanisms for knowledge representation, reasoning, and explanation, e.g., EMYCIN.
• Knowledge Acquisition Tools
– Programs that interact with experts to extract domain knowledge. They support entering knowledge and maintaining knowledge-base consistency and completeness, e.g., Mole, Salt.
Expert System Examples
• MYCIN (1972-80)
– MYCIN is an interactive program that diagnoses certain infectious
diseases, prescribes antimicrobial therapy, and can explain its reasoning in
detail

• PROSPECTOR
– Provides advice on mineral exploration

• XCON
– Configures VAX computers

• DENDRAL (1965-83)
– A rule-based expert system that analyzes molecular structure. Using a plan-generate-test search paradigm and data from mass spectrometry and other sources, DENDRAL proposes plausible candidate structures for new or unknown chemical compounds.
LIMITATIONS

• NARROW DOMAIN
• LIMITED FOCUS
• INABILITY TO LEARN
• MAINTENANCE PROBLEMS
• DEVELOPMENTAL COST
