Bidirectional Search
It searches forward from the initial state and backward from the goal state until the two searches meet at a common state. The path from the initial state is then concatenated with the inverse of the path from the goal state, so each search covers only about half of the total path.
The idea of a bidirectional search is to reduce the search time by searching forward from the start and backward
from the goal simultaneously. When the two search frontiers intersect, the algorithm can reconstruct a single path
that extends from the start state through the frontier intersection to the goal.
A new problem arises during a bidirectional search, namely ensuring that the two search frontiers actually meet.
For example, a depth-first search in both directions is not likely to work well because its small search frontiers are
likely to pass each other by. Breadth-first search in both directions would be guaranteed to meet.
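The two-frontier scheme described above can be sketched in a few lines; the adjacency-dict graph format and the function names here are illustrative, assuming undirected edges so the backward search can reuse the same structure:

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """BFS from both ends; stops when the two frontiers intersect."""
    if start == goal:
        return [start]
    # One parent map per direction, used for path reconstruction.
    fwd_parent, bwd_parent = {start: None}, {goal: None}
    fwd_frontier, bwd_frontier = deque([start]), deque([goal])

    def expand(frontier, parent, other_parent):
        node = frontier.popleft()
        for nbr in graph.get(node, []):
            if nbr not in parent:
                parent[nbr] = node
                frontier.append(nbr)
                if nbr in other_parent:   # frontiers intersect here
                    return nbr
        return None

    while fwd_frontier and bwd_frontier:
        meet = expand(fwd_frontier, fwd_parent, bwd_parent)
        if meet is None:
            meet = expand(bwd_frontier, bwd_parent, fwd_parent)
        if meet is not None:
            # Concatenate the forward path with the inverse backward path.
            path, n = [], meet
            while n is not None:
                path.append(n)
                n = fwd_parent[n]
            path.reverse()
            n = bwd_parent[meet]
            while n is not None:
                path.append(n)
                n = bwd_parent[n]
            return path
    return None

# A small chain graph: the two searches meet in the middle, at C.
chain = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"],
         "D": ["C", "E"], "E": ["D"]}
route = bidirectional_search(chain, "A", "E")
```

Each frontier only has to reach depth d/2 instead of d, which is where the time saving comes from.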
A simple reflex agent is the most basic of the intelligent agents. It performs actions based only on the current situation. When something happens in its environment, the agent quickly scans its knowledge base for how to respond to the situation at hand based on predetermined rules.
Simple reflex agents ignore the rest of the percept history and act only on the basis of the current percept. Percept history is the history of all that an agent has perceived to date. The agent function is based on the condition-action rule. A condition-action rule is a rule that maps a state, i.e. a condition, to an action. If the condition is true, the action is taken; otherwise it is not. This agent function only succeeds when the environment is fully observable. For simple reflex agents operating in partially observable environments, infinite loops are often unavoidable; it may be possible to escape them if the agent can randomize its actions. Problems with simple reflex agents:
Very limited intelligence.
No knowledge of non-perceptual parts of state.
Usually too big to generate and store.
If there is any change in the environment, the collection of rules needs to be updated.
Link: https://www.youtube.com/watch?v=KZFfbebQPAU
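The condition-action rules above can be sketched as a tiny rule table. The two-cell vacuum world and the function name here are illustrative; the point is that the agent sees only the current percept, with no history:

```python
def simple_reflex_vacuum_agent(percept):
    """A condition-action rule table for a two-cell vacuum world.
    `percept` is the current percept only: (location, status)."""
    location, status = percept
    if status == "Dirty":    # condition: current cell dirty -> clean it
        return "Suck"
    if location == "A":      # condition: clean and in A -> move right
        return "Right"
    return "Left"            # condition: clean and in B -> move left
```

Because the agent never remembers past percepts, in a partially observable variant (say, it cannot sense its location) it could bounce between the two cells forever, which is the infinite-loop problem mentioned above.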
Heuristic functions:
A heuristic function h(n) gives an estimate of the cost of getting from node n to the goal state, so that the least-cost node can be selected from the possible choices.
There are two common heuristic functions:
1) Admissible (underestimates the true cost)
2) Non-admissible (may overestimate the true cost)
An admissible heuristic is used to estimate the cost of reaching the goal state in an informed search algorithm. In
order for a heuristic to be admissible to the search problem, the estimated cost must always be lower than or equal
to the actual cost of reaching the goal state. The search algorithm uses the admissible heuristic to find an estimated
optimal path to the goal state from the current node. For example, in A* search the evaluation function (where n
is the current node) is:
f(n)=g(n)+h(n)
where
f(n)= the evaluation function.
g(n)= the cost from the start node to the current node
h(n)= estimated cost from current node to goal.
h(n) is calculated using the heuristic function. With a non-admissible heuristic, the A* algorithm could overlook
the optimal solution to a search problem due to an overestimation in f(n).
A non-admissible heuristic may overestimate the cost of reaching the goal. It may or may not result in an optimal
solution. However, the advantage is that sometimes, a non-admissible heuristic expands much fewer nodes. Thus,
the total cost (= search cost + path cost) may actually be lower than an optimal solution using an admissible
heuristic.
Link:https://www.youtube.com/watch?v=KF8JRfzSRWc
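As a sketch of how f(n) = g(n) + h(n) guides the search, here is a minimal A* implementation on a small hypothetical graph; the node names, edge costs, and heuristic values are made up for illustration, with h chosen to be admissible (it never exceeds the true remaining cost):

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search with f(n) = g(n) + h(n).
    `graph` maps node -> {neighbor: edge_cost}; `h` maps node -> estimate."""
    open_list = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path
        for nbr, cost in graph[node].items():
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(open_list, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None

# Hypothetical graph; h never overestimates the true cost to G.
graph = {"S": {"A": 1, "B": 4}, "A": {"B": 2, "G": 12},
         "B": {"G": 3}, "G": {}}
h = {"S": 6, "A": 4, "B": 2, "G": 0}
```

Running `a_star(graph, h, "S", "G")` returns the optimal cost 6 via S, A, B, G. If h overestimated (say h["B"] were very large), the expensive edge A-G could be taken first and the optimal path overlooked, which is exactly the risk described above.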
A knowledge base is not merely a space for data storage, but can be an artificial intelligence tool for delivering
intelligent decisions. Various knowledge representation techniques, including frames and scripts, represent
knowledge. The services offered are explanation, reasoning and intelligent decision support.
The two major types of knowledge bases are human readable and machine readable.
Human-readable knowledge bases enable people to access and use the knowledge. They store help documents, manuals, troubleshooting information and frequently asked questions.
Machine-readable knowledge bases store knowledge only in system-readable forms. Solutions are offered through automated deductive reasoning and are less interactive, since they rely on query systems whose software interrogates the knowledge base to narrow down a solution.
Link:https://www.youtube.com/watch?v=9iN3O_oL2ac
Knowledge representation and reasoning (KR, KRR) is the part of Artificial Intelligence concerned with how AI agents think and how thinking contributes to their intelligent behavior.
It is responsible for representing information about the real world so that a computer can understand it and utilize this knowledge to solve complex real-world problems, such as diagnosing a medical condition or communicating with humans in natural language.
It is also a way which describes how we can represent knowledge in artificial intelligence.
Knowledge representation is not just storing data into some database, but it also enables an intelligent
machine to learn from that knowledge and experiences so that it can behave intelligently like a human.
There are mainly four ways of knowledge representation, which are given as follows:
1. Logical Representation
2. Semantic Network Representation
3. Frame Representation
4. Production Rules
1. Logical Representation
Logical representation is a language with some concrete rules which deals with propositions and has no ambiguity
in representation. Logical representation means drawing a conclusion based on various conditions. This
representation lays down some important communication rules. It consists of precisely defined syntax and
semantics which supports the sound inference. Each sentence can be translated into logics using syntax and
semantics.
Syntax:
Syntax comprises the rules which decide how we can construct legal sentences in the logic.
Semantics:
Semantics comprises the rules by which we can interpret the sentences in the logic.
Logical representation can be categorized into mainly two logics:
1. Propositional logic
2. Predicate logic
2. Semantic Network Representation
Semantic networks represent knowledge in the form of graphical networks, with nodes for objects and arcs for the relations between them. This representation mainly uses two types of relations:
1. IS-A relation (inheritance)
2. Kind-of relation
3. Frame Representation
A frame is a record-like structure which consists of a collection of attributes and their values to describe an entity in the world. Frames are the AI data structure which divides knowledge into substructures by representing stereotyped situations. A frame consists of a collection of slots and slot values; these slots may be of any type and size. Slots have names and values, which are called facets.
Facets: The various aspects of a slot are known as facets. Facets are features of frames which enable us to put constraints on frames.
4. Production Rules
A production rules system consists of (condition, action) pairs, which mean "If condition, then action". It has mainly three parts:
The set of production rules
Working memory
The recognize-act cycle
In a production system, the agent checks for the condition, and if the condition holds, the production rule fires and the corresponding action is carried out. The condition part of a rule determines which rule may be applied to a problem, and the action part carries out the associated problem-solving steps. This complete process is called a recognize-act cycle.
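The recognize-act cycle can be sketched as follows. The rule encoding here (a set of condition facts plus a single fact to add) and the function names are a simplification for illustration, not a standard production-system API:

```python
def recognize_act(working_memory, rules, max_cycles=10):
    """A minimal recognize-act cycle for a toy production system.
    `rules` is a list of (condition_facts, fact_to_add) pairs."""
    for _ in range(max_cycles):
        fired = False
        for condition, action in rules:
            # Recognize: the rule matches if all its condition facts hold.
            if condition <= working_memory and action not in working_memory:
                working_memory.add(action)   # Act: carry out the action.
                fired = True
                break
        if not fired:                        # No rule applies: stop.
            break
    return working_memory

# Example: "if raining and outside then wet", "if wet then cold".
memory = recognize_act({"raining", "outside"},
                       [({"raining", "outside"}, "wet"),
                        ({"wet"}, "cold")])
```

Each pass through the loop is one recognize-act cycle: match the conditions against working memory, fire one matching rule, repeat until quiescence.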
For example, in the game Guess Who, the players each begin with a set of character cards from which to choose.
They then take turns asking yes-or-no questions about the other player's choice. The set of cards is the search
space for this game. This set is finite and known ahead of time.
In chess, the search space is much more complicated. The search space is the set of all possible valid moves. For a
given turn the space is finite, but the set of all possible games is infinite. Since the player is trying to maximize the
probability of winning, they must search through many turns. These possibilities must be generated during the
search.
In the context of AI, the search space is used to define the optimization power of the AI. Eliezer Yudkowsky defines optimization power as follows: if S is the search space and S+ is the set of options better than or equal to the chosen one, then the optimization power is -log2(|S+| / |S|) bits.
Figure: search space
10. What are the major component/components for measuring the performance of problem solving?
Search as a black box will result in an output that is either failure or a solution. We will evaluate a search algorithm's performance in four ways:
1. Completeness: Is the algorithm guaranteed to find a solution when one exists?
2. Optimality: Does the algorithm always find the optimal solution?
3. Time complexity: How much time does the search algorithm take to find a solution?
4. Space complexity: How much memory is required to run the search algorithm?
Time and Space in complexity analysis are measured with respect to the number of nodes the problem graph has
in terms of asymptotic notations.
In AI, complexity is expressed by three factors b, d and m:
b: the branching factor, the maximum number of successors of any node;
d: the depth of the shallowest goal node;
m: the maximum length of any path in the state space.
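For instance, breadth-first search generates on the order of b^d nodes in the worst case; a quick check for a branching factor of 10 and depth 3:

```python
def worst_case_nodes(b, d):
    """Worst-case nodes generated by breadth-first search:
    1 + b + b^2 + ... + b^d, which is O(b^d)."""
    return sum(b ** i for i in range(d + 1))

total = worst_case_nodes(10, 3)   # 1 + 10 + 100 + 1000
```

So even a modest branching factor of 10 already means 1111 generated nodes at depth 3, which is why these factors dominate both time and space complexity.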
A Turing Test is a method of inquiry in artificial intelligence (AI) for determining whether or not a computer is capable of thinking like a human being. The test is named after its creator, Alan Turing, an English computer scientist, cryptanalyst, mathematician and theoretical biologist.
Turing proposed that a computer can be said to possess artificial intelligence if it can mimic human responses
under specific conditions. The original Turing Test requires three terminals, each of which is physically separated
from the other two. One terminal is operated by a computer, while the other two are operated by humans.
During the test, one of the humans functions as the questioner, while the second human and the computer function
as respondents. The questioner interrogates the respondents within a specific subject area, using a specified format
and context. After a preset length of time or number of questions, the questioner is then asked to decide which
respondent was human and which was a computer.
Link:https://www.youtube.com/watch?v=VZqSCFt6OsM
Deep Blue was a supercomputer developed by IBM specifically for playing chess. It is best known for being the first computer chess-playing system to win both a chess game and a chess match against a reigning world champion, Grandmaster Garry Kasparov, under regular time controls. Deep Blue lost its first six-game match against Kasparov in 1996 by a score of 2–4; it was then heavily upgraded and finally beat Kasparov in a six-game rematch in May 1997 by 3½–2½.
Technical specifications of Deep Blue include:
RS/6000 SP Thin P2SC-based system
30 nodes
The traditional goals of AI research include the following capabilities:
Knowledge
Reasoning
Problem solving
Perception
Learning
Planning
Machine learning is also a core part of AI. Learning without any kind of supervision requires an ability to identify patterns in streams of inputs, whereas learning with adequate supervision involves classification and numerical regression.
Link: https://www.youtube.com/watch?v=J7LqgglEfQw
Informed search algorithms are more useful for large search spaces. Informed search algorithms use the idea of a heuristic, so they are also called heuristic searches.
Informed Search
A search using domain-specific knowledge.
Suppose that we have a way to estimate how close a state is to the goal, with an evaluation function.
General strategy: expand the best state in the open list first. It's called a best-first search or ordered state-space
search.
In general the evaluation function is imprecise, which makes the method a heuristic (works well in most cases).
The evaluation is often based on empirical observations.
Examples
The vacuum cleaner: it might move from a cell towards the dirtiest adjacent cell.
Path search in a graph with a geometrical representation: give preference to the neighbors which are closest to the target based on the Euclidean distance (which may or may not be an indication of a good path). This is called a greedy search algorithm.
Game playing: the algorithm may move to a state giving it a strategic advantage (capturing the opponent's queen).
In informed search we will discuss two main algorithms, which are given below:
Greedy Best-First Search
A* Search Algorithm
link: https://www.youtube.com/watch?v=japhjrVxJdg
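The Euclidean-distance example above can be sketched as a greedy best-first search. The graph and coordinate tables here are illustrative stand-ins for a real map representation:

```python
import heapq
import math

def greedy_best_first(graph, coords, start, goal):
    """Greedy best-first search: always expand the node with the smallest
    straight-line (Euclidean) distance to the goal."""
    def h(n):
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return math.hypot(x1 - x2, y1 - y2)

    frontier = [(h(start), start, [start])]
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nbr in graph[node]:
            if nbr not in visited:
                visited.add(nbr)
                heapq.heappush(frontier, (h(nbr), nbr, path + [nbr]))
    return None

graph = {"A": ["B"], "B": ["C"], "C": ["G"], "G": []}
coords = {"A": (0, 0), "B": (1, 0), "C": (2, 0), "G": (3, 0)}
route = greedy_best_first(graph, coords, "A", "G")
```

Unlike A*, this ignores the cost already incurred (g(n)), so it is fast but, as the text notes, the heuristic may or may not lead to a good path.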
An agent can be anything that perceives its environment through sensors and acts upon that environment through actuators. An agent runs in a cycle of perceiving, thinking, and acting. An agent can be:
Human agent: A human agent has eyes, ears, and other organs which work as sensors, and hands, legs, and a vocal tract which work as actuators.
Robotic Agent: A robotic agent can have cameras, infrared range finder, NLP for sensors and
various motors for actuators.
Software Agent: Software agent can have keystrokes, file contents as sensory input and act on
those inputs and display output on the screen.
Hence the world around us is full of agents such as thermostats, cellphones, and cameras; even we ourselves are agents.
Application of AI
Artificial Intelligence has various applications in today's society. It is becoming essential for our time because it can solve complex problems in an efficient way across multiple industries, such as healthcare, entertainment, finance, and education. AI is making our daily life more comfortable and fast.
Following are some sectors which have the application of Artificial Intelligence:
1. AI in Astronomy
Artificial Intelligence can be very useful to solve complex universe problems. AI technology can
be helpful for understanding the universe such as how it works, origin, etc.
2. AI in Healthcare
In the last five to ten years, AI has become more advantageous for the healthcare industry and is going to have a significant impact on it.
Healthcare industries are applying AI to make better and faster diagnoses than humans. AI can help doctors with diagnoses and can warn when patients are worsening, so that medical help can reach the patient before hospitalization.
3. AI in Gaming
AI can be used for gaming purposes. AI machines can play strategic games like chess, where the machine needs to think about a large number of possible positions.
4. AI in Finance
AI and finance industries are the best matches for each other. The finance industry is implementing
automation, chatbot, adaptive intelligence, algorithm trading, and machine learning into financial
processes.
5. AI in Data Security
The security of data is crucial for every company, and cyber-attacks are growing rapidly in the digital world. AI can be used to make data more safe and secure. Examples such as the AEG bot and the AI2 platform are used to detect software bugs and cyber-attacks more effectively.
6. AI in Social Media
Social Media sites such as Facebook, Twitter, and Snapchat contain billions of user profiles, which
need to be stored and managed in a very efficient way. AI can organize and manage massive amounts of
data. AI can analyze lots of data to identify the latest trends, hashtag, and requirement of different users.
7. AI in Travel & Transport
AI is in high demand in the travel industry. AI is capable of performing various travel-related tasks, from making travel arrangements to suggesting hotels, flights, and the best routes to customers. Travel companies are using AI-powered chatbots which can hold human-like interactions with customers for better and faster responses.
Etc.
18. What do you mean by Intelligent Agent? Briefly explain different types of Agent.
An intelligent agent is an autonomous entity which acts upon an environment using sensors and actuators to achieve goals. An intelligent agent may learn from the environment to achieve its goals. A thermostat is an example of an intelligent agent.
Types of Agents
Agents can be grouped into five classes based on their degree of perceived intelligence and capability:
1. Simple reflex agents
Simple reflex agents are the simplest agents. These agents take decisions on the basis of the current percepts and ignore the rest of the percept history.
The Simple reflex agent does not consider any part of percepts history during their decision and
action process.
The Simple reflex agent works on Condition-action rule, which means it maps the current state to
action. Such as a Room Cleaner agent, it works only if there is dirt in the room.
2. Model-based reflex agents
The model-based agent can work in a partially observable environment and track the situation.
Model: knowledge about "how things happen in the world"; this internal model is why the agent is called model-based.
3. Goal-based agents
Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
The agent needs to know its goal which describes desirable situations.
Goal-based agents expand the capabilities of the model-based agent by having the "goal"
information.
These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved or not. Such consideration of different scenarios is called searching and planning, which makes an agent proactive.
4. Utility-based agents
These agents are similar to the goal-based agent but provide an extra component of utility
measurement which makes them different by providing a measure of success at a given state.
Utility-based agents act based not only on goals but also on the best way to achieve the goal.
The Utility-based agent is useful when there are multiple possible alternatives, and an agent has to
choose in order to perform the best action.
The utility function maps each state to a real number to check how efficiently each action achieves
the goals.
5. Learning Agents
A learning agent in AI is the type of agent which can learn from its past experiences; that is, it has learning capabilities.
It starts to act with basic knowledge and is then able to act and adapt automatically through learning.
Hence, learning agents are able to learn, analyze performance, and look for new ways to improve
the performance.
Link: https://www.youtube.com/watch?v=BkedAnQfJ_U
Propositional logic (PL) is the simplest form of logic where all the statements are made by propositions. A
proposition is a declarative statement which is either true or false. It is a technique of knowledge representation in
logical and mathematical form.
Example:
a) It is Sunday.
b) The Sun rises in the West. (a false proposition)
Propositional logic is also called Boolean logic, as it works on the values 0 and 1.
Inference using propositional logic: inference is used to create new sentences that logically follow from a given set of sentences.
Below are some inference rules: Modus Ponens, Modus Tollens, Hypothetical Syllogism, and Disjunctive Syllogism.
link: https://www.youtube.com/watch?v=gKHsecJl_iM
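A brute-force truth-table check (a sketch, not a standard library) shows why an inference rule like Modus Ponens is sound: the conclusion must hold in every model where all premises hold.

```python
from itertools import product

def is_valid(premises, conclusion, symbols):
    """A rule is valid iff its conclusion is true in every truth
    assignment (model) where all of its premises are true."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if all(p(model) for p in premises) and not conclusion(model):
            return False
    return True

# Modus Ponens: from P and (P -> Q), infer Q.  (P -> Q means ~P or Q.)
modus_ponens = is_valid([lambda m: m["P"],
                         lambda m: (not m["P"]) or m["Q"]],
                        lambda m: m["Q"], ["P", "Q"])

# Affirming the consequent: from Q and (P -> Q), infer P -- NOT valid.
affirm_consequent = is_valid([lambda m: m["Q"],
                              lambda m: (not m["P"]) or m["Q"]],
                             lambda m: m["P"], ["P", "Q"])
```

The second check fails on the model P = false, Q = true, which is the counterexample that makes affirming the consequent a fallacy.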
b) If it is raining and you are outside then you will get wet.
f) It is a warm day. Use Resolution-Refutation to prove that “You are not swimming.”
The figure above illustrates a resolution refutation proof of "You are not swimming" using the facts about a summer day. Since we want to prove ~swimming, we begin the proof by assuming that its opposite, swimming, is true. We then look for a contradiction between our established set of facts about a summer day and this new assumption. First, (swimming) and Statement 1, (~swimming ∨ wet), can be resolved since they contain complementary terms. In order for both of these clauses to be true, the clause (wet) must be true. However, (wet) can be resolved with Statement 4, (~wet), indicating a contradiction. Thus, given that the original six statements were true, the contradiction must have been caused by our assumption that (swimming) was true. Since (swimming) cannot be true, (~swimming) must be true.
Here is this same argument recast into English.
To prove that “You are not swimming” we will assume that “You are swimming” and
show that this assumption leads to a contradiction. Statement (1) says that either “You
are not swimming” or “You are wet”, but since we are assuming that “You are
swimming” then, logically, you must be wet. However, Statement (4) says, “You are not
wet”. Hence, we have found a contradiction. If all of our original statements were true to
begin with, our assumption “You are swimming” must be incorrect. So, we conclude,
that you are not swimming.
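The swimming proof can be mechanized in a few lines. Clauses are sets of string literals with "~" marking negation; only the two statements the proof actually uses are encoded here, as a minimal sketch:

```python
def resolve(c1, c2):
    """All resolvents of two clauses (complementary literals cancel)."""
    out = []
    for lit in c1:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {comp})))
    return out

def refutes(kb, assumption):
    """Add the negated goal (here: the assumption `swimming`) to the KB
    and saturate; deriving the empty clause proves the original goal."""
    clauses = set(kb) | {assumption}
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a == b:
                    continue
                for r in resolve(a, b):
                    if not r:            # empty clause: contradiction
                        return True
                    new.add(r)
        if new <= clauses:               # nothing new: no refutation
            return False
        clauses |= new

# Statement 1: ~swimming or wet.  Statement 4: ~wet.  Assume: swimming.
kb = [frozenset({"~swimming", "wet"}), frozenset({"~wet"})]
proved = refutes(kb, frozenset({"swimming"}))
```

The run mirrors the English argument: (swimming) and (~swimming ∨ wet) resolve to (wet), which then resolves with (~wet) to the empty clause, the contradiction.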
21. Explain the working principle of Breadth-First Search (BFS) and Depth-First Search (DFS).
Example:
Question: Which solution would BFS find to move from node S to node G if run on the graph below?
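The working principle of BFS can be sketched as follows: a FIFO queue means the shallowest unexpanded node is always expanded first, so the first path found to the goal uses the fewest edges. (The graph here is illustrative; DFS would be the same code with a stack, i.e. `pop()` instead of `popleft()`.)

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: expand nodes level by level via a FIFO queue."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()        # shallowest path so far
        node = path[-1]
        if node == goal:
            return path
        for nbr in graph.get(node, []):
            if nbr not in visited:
                visited.add(nbr)
                queue.append(path + [nbr])
    return None

# Illustrative graph: S branches to A and B, both reach G.
g = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
route = bfs(g, "S", "G")
```

On this graph BFS returns the path S, A, G (two edges), exploring all depth-1 nodes before any depth-2 node.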
A Prolog program consists of a number of clauses. Each clause is either a fact or a rule. After a Prolog program is loaded (or consulted) in a Prolog interpreter, users can submit goals or queries, and the Prolog interpreter will give results (answers) according to the facts and rules.
Facts
A fact must start with a predicate (which is an atom) and end with a full stop. The predicate may be followed by
one or more arguments which are enclosed by parentheses. The arguments can be atoms (in this case, these atoms
are treated as constants), numbers, variables or lists. Arguments are separated by commas.
If we consider the arguments in a fact to be objects, then the predicate of the fact describes a property of the
objects.
In a Prolog program, a presence of a fact indicates a statement that is true. An absence of a fact indicates a
statement that is not true. See the following example:
Rules
A rule can be viewed as an extension of a fact with added conditions that also have to be satisfied for it to be true.
It consists of two parts. The first part is similar to a fact (a predicate with arguments). The second part consists of
other clauses (facts or rules which are separated by commas) which must all be true for the rule itself to be true.
These two parts are separated by ":-". You may interpret this operator as "if" in English.
See the following example:
Take Rule 3 as an example. It means that "grandfather(X, Y)" is true if both "father(X, Z)" and "parent(Z, Y)" are true. The comma between the two conditions can be considered a logical AND operator.
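A sketch of what such a program might look like (the family facts and names are illustrative; only the shape of Rule 3 is taken from the text above):

```prolog
% Facts: father(X, Y) means X is the father of Y.
father(tom, bob).
parent(bob, ann).

% Rule: X is a grandfather of Y if X is the father of some Z
% and Z is a parent of Y.  ":-" reads as "if"; the comma as AND.
grandfather(X, Y) :- father(X, Z), parent(Z, Y).
```

Given these clauses, the query `?- grandfather(tom, Who).` would succeed with `Who = ann`, since the interpreter binds Z to bob along the way.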
Link:
what is prolog : https://www.youtube.com/watch?v=hBz3DgXlg0Q
facts and rules: https://www.youtube.com/watch?v=h9jLWM2lFr0
23. What is the importance of Search Algorithm in AI? What is the difference between Uninformed Search
and Informed Search.
Search plays a major role in solving many Artificial Intelligence (AI) problems; it is a universal problem-solving mechanism in AI. In many problems, the sequence of steps required to reach a solution is not known in advance and must be determined by systematic trial-and-error exploration of alternatives.
The problems that are addressed by AI search algorithms fall into three general classes: single-agent path-finding problems, two-player games, and constraint-satisfaction problems.
Link: search algo: https://www.youtube.com/watch?v=uZfS2B-lWic&list=PLrjkTql3jnm_yol-ZK1QqPSn5YSg0NF9r&index=10
informed and uninformed search: https://www.youtube.com/watch?v=AneIXxdu_g4&list=PLrjkTql3jnm_yol-ZK1QqPSn5YSg0NF9r&index=11
Step 1: Place the starting node into the OPEN list.
Step 2: If the OPEN list is empty, stop and return failure.
Step 3: Remove the node n from the OPEN list which has the lowest value of h(n), and place it in the CLOSED list.
Step 4: Expand the node n and generate its successors.
Step 5: Check each successor of node n to find whether any of them is a goal node. If any successor is a goal node, return success and terminate the search; otherwise proceed to Step 6.
Step 6: For each successor node, the algorithm computes the evaluation function f(n) and then checks whether the node is already in the OPEN or CLOSED list. If the node is in neither list, add it to the OPEN list.
Step 7: Return to Step 2.
Best-first search can switch between BFS and DFS, thereby gaining the advantages of both algorithms.
1) A hill climbing algorithm is a local search algorithm which continuously moves in the direction of increasing elevation/value to find the peak of the mountain or the best solution to the problem. It terminates when it reaches a peak where no neighbor has a higher value.
2) The hill climbing algorithm is a technique used for optimizing mathematical problems. One of the most widely discussed examples is the Traveling Salesman Problem, in which we need to minimize the distance traveled by the salesman.
3) It is also called greedy local search, as it only looks to its good immediate neighbor state and not beyond that.
4) A node of the hill climbing algorithm has two components: state and value.
5) In this algorithm, we don't need to maintain and handle the search tree or graph, as it only keeps a single current state.
1) Generate and test variant: hill climbing is a variant of the generate-and-test method. The generate-and-test method produces feedback which helps to decide which direction to move in the search space.
2) Greedy approach: hill climbing search moves in the direction which optimizes the cost.
3) No backtracking: it does not backtrack in the search space, as it does not remember previous states.
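The loop described above can be sketched as steepest-ascent hill climbing; the neighbor-generating and value functions are supplied by the caller, and the one-dimensional example is illustrative:

```python
def hill_climb(start, neighbors, value):
    """Steepest-ascent hill climbing: move to the best neighbor until no
    neighbor improves on the current state (a peak, possibly only local)."""
    current = start
    while True:
        candidates = neighbors(current)
        if not candidates:
            return current
        best = max(candidates, key=value)
        if value(best) <= value(current):   # no uphill move left: stop
            return current
        current = best

# Maximize f(x) = -(x - 3)^2 over the integers, starting from 0.
peak = hill_climb(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2)
```

On this single-peaked function the climb reaches the global maximum at x = 3; on a function with several peaks the same loop would stop at whichever local maximum it climbs first, which is exactly the weakness the next paragraphs describe.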
Different regions in the state space landscape:
Local Maximum: Local maximum is a state which is better than its neighbor states, but there is also another state
which is higher than it.
Global Maximum: Global maximum is the best possible state of state space landscape. It has the highest value of
objective function.
Current state: It is a state in a landscape diagram where an agent is currently present.
Flat local maximum: It is a flat space in the landscape where all the neighbor states of current states have the same
value.
Shoulder: It is a plateau region which has an uphill edge.
Types of hill climbing algorithm:
1) Simple hill climbing
2) Steepest-ascent hill climbing
3) Stochastic hill climbing
Simulated Annealing:
A hill-climbing algorithm which never makes a move towards a lower value is guaranteed to be incomplete, because it can get stuck on a local maximum. If the algorithm instead applies a random walk by moving to random successors, it may be complete but is not efficient. Simulated annealing is an algorithm which yields both efficiency and completeness.
In metallurgy, annealing is the process of heating a metal or glass to a high temperature and then cooling it gradually, which allows the material to reach a low-energy crystalline state. The same idea is used in simulated annealing, in which the algorithm picks a random move instead of the best move. If the random move improves the state, it is accepted; otherwise, the algorithm accepts the downhill move only with a probability of less than 1.
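A minimal sketch of that acceptance rule, with the classic exp(delta / T) downhill probability; the temperature, cooling rate, and step count here are illustrative, not tuned values:

```python
import math
import random

def simulated_annealing(start, neighbor, value, t0=10.0,
                        cooling=0.95, steps=500):
    """Pick a random move each step; always accept an improvement, and
    accept a downhill move with probability exp(delta / T), which
    shrinks toward zero as the temperature T cools."""
    current = start
    t = t0
    for _ in range(steps):
        nxt = neighbor(current)
        delta = value(nxt) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt
        t *= cooling              # gradual (geometric) cooling schedule
        if t < 1e-6:
            break
    return current

random.seed(0)
result = simulated_annealing(0,
                             lambda x: x + random.choice([-1, 1]),
                             lambda x: -(x - 3) ** 2)
```

Early on, the high temperature makes downhill moves likely, letting the search escape local maxima; as T cools the behavior converges toward plain hill climbing.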
Genetic Algorithms
Genetic Algorithms (GAs) are adaptive heuristic search algorithms that belong to the larger class of evolutionary algorithms. Genetic algorithms are based on the ideas of natural selection and genetics. They are an intelligent exploitation of random search, provided with historical data, to direct the search into regions of better performance in the solution space. They are commonly used to generate high-quality solutions for optimization and search problems.
Genetic algorithms are based on an analogy with genetic structure and behavior of chromosome of the population.
Following is the foundation of GAs based on this analogy:
1. Individuals in a population compete for resources and mates.
2. Those individuals who are most successful (fittest) mate to create more offspring than others.
3. Genes from the fittest parents propagate through the generations; sometimes parents create offspring better than either parent.
4. Thus each successive generation becomes more suited to its environment.
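The selection-crossover-mutation loop can be sketched on bit-string chromosomes; the population size, mutation rate, and the OneMax fitness (count of 1 bits) are illustrative choices, not part of any standard API:

```python
import random

def genetic_algorithm(fitness, length=8, pop_size=20, generations=50):
    """A minimal GA: keep the fitter half (selection), breed children by
    single-point crossover, and occasionally flip a bit (mutation)."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)     # fittest first
        survivors = pop[: pop_size // 2]        # selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)  # two fit parents mate
            cut = random.randrange(1, length)   # single-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:           # mutation
                i = random.randrange(length)
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# OneMax: fitness = number of 1 bits; the optimum is all ones.
random.seed(1)
best = genetic_algorithm(sum)
```

Keeping the survivors each generation makes the best fitness monotonically non-decreasing, mirroring point 4 above: each generation is at least as well suited as the last.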
Link : hill climbing : https://www.youtube.com/watch?v=FU6ZzRs6szE
simulated annealing: https://www.youtube.com/watch?v=4pVYqxoPE94
Genetic algo: https://www.youtube.com/watch?v=FwPgHgbncPk
26. A* Search algorithm. Considering a graph find shortest path using A* algorithm.
Link: https://www.youtube.com/watch?v=tvAh0JZF2YE
27. Adversarial Search. Mini-Max Algorithm. Importance of Mini-Max Algorithm in Artificial Intelligence.
Limitation of Mini-Max Algorithm. Alpha-Beta Pruning algorithm How does it work?
Adversarial search, also known as Minimax search is known for its usefulness in calculating the best move in two
player games where all the information is available, such as chess or tic tac toe. It consists of navigating through a
tree which captures all the possible moves in the game, where each move is represented in terms of loss and gain
for one of the players.
It follows that this can only be used to make decisions in zero-sum games, where one player's loss is the other player's gain. Theoretically, this search algorithm is based on von Neumann's minimax theorem, which states that in these types of games there is always a set of strategies which leads to both players gaining the same value, and that, since this is the best value one can expect to gain, one should employ this set of strategies.
Mini-Max Algorithm in Artificial Intelligence
1) The mini-max algorithm is a recursive or backtracking algorithm used in decision-making and game theory. It provides an optimal move for the player, assuming that the opponent also plays optimally.
2) The mini-max algorithm uses recursion to search through the game tree.
3) The min-max algorithm is mostly used for game playing in AI, such as chess, checkers, tic-tac-toe, Go, and various other two-player games. The algorithm computes the minimax decision for the current state.
4) In this algorithm two players play the game; one is called MAX and the other is called MIN.
5) The two players compete, since the opponent gets the minimum benefit while they get the maximum benefit.
6) Both players of the game are opponents of each other, where MAX will select the maximized value and MIN will select the minimized value.
7) The minimax algorithm performs a depth-first search for the exploration of the complete game tree.
8) The minimax algorithm proceeds all the way down to the terminal nodes of the tree, then backtracks up the tree as the recursion unwinds.
1) Complete- The min-max algorithm is complete. It will definitely find a solution (if one exists) in a finite search tree.
2) Optimal- The min-max algorithm is optimal if both opponents play optimally.
3) Time complexity- As it performs DFS on the game tree, the time complexity of the min-max algorithm is O(b^m), where b is the branching factor of the game tree and m is the maximum depth of the tree.
4) Space complexity- The space complexity of the mini-max algorithm, as for DFS, is O(bm).
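The depth-first recursion described above can be sketched as follows; the `children` and `evaluate` callables are placeholders for a real game's move generator and static evaluator:

```python
def minimax(node, depth, maximizing, children, evaluate):
    """Plain minimax: recurse to the leaves, then back values up,
    MAX taking the largest child value and MIN the smallest."""
    succ = children(node)
    if depth == 0 or not succ:
        return evaluate(node)      # terminal node: static evaluation
    if maximizing:                 # MAX's turn
        return max(minimax(c, depth - 1, False, children, evaluate)
                   for c in succ)
    return min(minimax(c, depth - 1, True, children, evaluate)
               for c in succ)      # MIN's turn

# A tiny illustrative two-ply tree with leaf values.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaves = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}
best_value = minimax("root", 2, True,
                     lambda n: tree.get(n, []),
                     lambda n: leaves.get(n, 0))
```

Here MIN reduces branch a to min(3, 5) = 3 and branch b to min(2, 9) = 2, so MAX chooses branch a and the root value is 3.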
Alpha-Beta Pruning
1) Alpha-beta pruning is a modified version of the minimax algorithm; it is an optimization technique for minimax.
2) As we have seen, the number of game states the minimax search algorithm has to examine is exponential in the depth of the tree. We cannot eliminate the exponent, but we can cut it roughly in half. There is a technique by which we can compute the correct minimax decision without checking every node of the game tree, and this technique is called pruning. It involves two threshold parameters, alpha and beta, for future expansion, so it is called alpha-beta pruning. It is also called the alpha-beta algorithm.
3) Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only the tree leaves but entire sub-trees.
1. Alpha: The best (highest-value) choice we have found so far at any point along the path of
Maximizer. The initial value of alpha is -∞.
2. Beta: The best (lowest-value) choice we have found so far at any point along the path of
Minimizer. The initial value of beta is +∞.
5) Alpha-beta pruning applied to a standard minimax algorithm returns the same move as the standard algorithm, but it removes all the nodes that do not really affect the final decision and only make the algorithm slow. Pruning these nodes makes the algorithm fast.
The main condition required for alpha-beta pruning is:
1. α >= β
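A sketch of minimax with the α >= β cutoff applied; as above, the `children` and `evaluate` callables stand in for a real game, and the small tree is illustrative:

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax with alpha-beta pruning: alpha starts at -inf, beta at
    +inf, and a branch is cut off as soon as alpha >= beta."""
    succ = children(node)
    if depth == 0 or not succ:
        return evaluate(node)
    if maximizing:
        best = float("-inf")
        for c in succ:
            best = max(best, alphabeta(c, depth - 1, alpha, beta,
                                       False, children, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:      # beta cutoff: MIN avoids this branch
                break
        return best
    best = float("inf")
    for c in succ:
        best = min(best, alphabeta(c, depth - 1, alpha, beta,
                                   True, children, evaluate))
        beta = min(beta, best)
        if alpha >= beta:          # alpha cutoff: MAX avoids this branch
            break
    return best

tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaves = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}
value = alphabeta("root", 2, float("-inf"), float("inf"), True,
                  lambda n: tree.get(n, []), lambda n: leaves.get(n, 0))
```

The result, 3, is identical to plain minimax, but leaf b2 is never examined: once MIN finds b1 = 2 under branch b, alpha (3, from branch a) already exceeds beta, so the rest of that sub-tree is pruned.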
28. Propositional Logic (PL)? Limitations of Propositional Logic? First Order Logic. Syntax of FOPL.
Universal Quantifier and Existential Quantifier in FOPL with examples.
Example:
Link:
1)propositional logic : https://www.youtube.com/watch?v=tpDU9UXqsUo
2)predicate or fol : https://www.youtube.com/watch?v=pcV2lL6yNZ8
29. Steps for Resolution in First Order Logic. Solve example problems for resolution by refutation.
For example of resolution by refutation see answer 20.
link: resolution : https://www.youtube.com/watch?v=eaCVH8XWaPc
https://www.youtube.com/watch?v=TR7iWaN_nHQ
https://www.youtube.com/watch?v=-LT96Et6b0A
30. Expert Systems? Components of Expert Systems?
An Expert System is defined as an interactive and reliable computer-based decision-making system which uses
both facts and heuristics to solve complex decision-making problems. It is designed to perform at the level of
human intelligence and expertise. It is a computer application which solves the most complex issues in a specific
domain.
The expert system can resolve many issues which generally would require a human expert. It is based on
knowledge acquired from an expert. It is also capable of expressing and reasoning about some domain of
knowledge. Expert systems were the predecessor of the current day artificial intelligence, deep learning and
machine learning systems.
Given a set of rules like these, there are essentially two ways we can use them to generate new knowledge:
Forward chaining starts with the facts and sees what rules apply (and hence what should be done) given those
facts; it is data-driven.
Backward chaining starts with something to find out and looks for rules that will help in answering it; it is
goal-driven.
1) forward : https://www.youtube.com/watch?v=PBTSdx_C9WM
2) backward: https://www.youtube.com/watch?v=W5O8QAWu-OM
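The data-driven loop of forward chaining can be sketched as follows. This is a minimal illustration, not a full expert-system engine; the rule base and fact names are hypothetical.

```python
# Minimal forward chaining: rules are (premises, conclusion) pairs.
# Starting from the known facts, repeatedly fire any rule whose premises
# all hold, until no new facts can be derived (data-driven inference).
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)      # new knowledge derived
                changed = True
    return facts

# Hypothetical toy rule base for illustration.
rules = [
    ({"has_fever", "has_cough"}, "has_flu"),
    ({"has_flu"}, "needs_rest"),
]
print(forward_chain({"has_fever", "has_cough"}, rules))
# derives both "has_flu" and "needs_rest"
```

Backward chaining would instead start from a goal such as "needs_rest" and search for rules whose conclusions match it, recursively trying to establish their premises.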
Natural language processing (NLP) is the ability of a computer program to understand human language as it is
spoken. NLP is a component of artificial intelligence (AI).
The development of NLP applications is challenging because computers traditionally require humans to "speak" to
them in a programming language that is precise, unambiguous and highly structured, or through a limited number
of clearly enunciated voice commands. Human speech, however, is not always precise -- it is often ambiguous and
the linguistic structure can depend on many complex variables, including slang, regional dialects and social
context.
Phases of NLP:
Syntactic Analysis (Parsing) − It involves analysis of the words in a sentence for grammar, and arranging the
words in a manner that shows the relationships among them. A sentence such as “The school goes to boy” is
rejected by an English syntactic analyzer.
Semantic Analysis − It draws the exact meaning, or the dictionary meaning, from the text. The text is checked for
meaningfulness. This is done by mapping syntactic structures to objects in the task domain. The semantic analyzer
disregards sentences such as “hot ice-cream”.
Discourse Integration − The meaning of any sentence depends upon the meaning of the sentence just before it. In
addition, it also influences the meaning of the immediately succeeding sentence.
Pragmatic Analysis − During this phase, what was said is re-interpreted in terms of what it actually meant. It
involves deriving those aspects of language which require real-world knowledge.
Learning:
An agent is learning if it improves its performance on future tasks after making observations about the world.
Learning can range from the trivial, as exhibited by jotting down a phone number, to the profound, as exhibited by
Albert Einstein, who inferred a new theory of the universe.
FORMS OF LEARNING
Based on how they represent knowledge, AI learning models can be classified into two main types: inductive and
deductive.
b) DEDUCTIVE LEARNING
This type of AI learning technique starts with a series of rules and infers new rules that are more efficient in the
context of a specific AI algorithm. Explanation-Based Learning (EBL) and Relevance-Based Learning (RBL) are
examples of deductive techniques. EBL extracts general rules from examples by “generalizing” the
explanation. RBL focuses on identifying attributes and deductive generalizations from simple examples.
Based on their feedback characteristics, AI learning models can be classified as supervised, unsupervised, semi-
supervised or reinforcement learning.
c) UNSUPERVISED LEARNING
Unsupervised models focus on learning a pattern in the input data without any external feedback. Clustering is a
classic example of unsupervised learning models.
In unsupervised learning the agent learns patterns in the input even though no explicit feedback is supplied. The
most common unsupervised learning task is clustering: detecting potentially useful clusters of input examples. For
example, a taxi agent might gradually develop a concept of “good traffic days” and “bad traffic days” without ever
being given labeled examples of each by a teacher.
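The clustering idea can be illustrated with a tiny 1-D k-means sketch. The data points and starting centers are made up; this is only a sketch of how structure can be found in unlabeled data.

```python
# 1-D k-means with k=2: alternate between assigning each point to its
# nearest center and moving each center to the mean of its cluster.
def kmeans_1d(xs, centers, iters=10):
    for _ in range(iters):
        # assignment step: attach each point to its nearest center
        clusters = [[] for _ in centers]
        for x in xs:
            i = min(range(len(centers)), key=lambda c: abs(x - centers[c]))
            clusters[i].append(x)
        # update step: move each center to the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

data = [1.0, 1.2, 0.8, 8.0, 8.5, 7.9]   # two obvious groups, no labels
centers, clusters = kmeans_1d(data, [0.0, 10.0])
print(centers)   # one center settles near 1.0, the other near 8.1
```

No "teacher" ever labels the points; the two groups emerge purely from the pattern in the input, which is exactly the taxi agent's "good days / bad days" situation above.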
d) SUPERVISED LEARNING
Supervised learning models use external feedback to learn functions that map inputs to output observations. In
those models the external environment acts as a “teacher” of the AI algorithms.
In supervised learning the agent observes some example input–output pairs and learns a function that maps from
input to output.
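A minimal example of learning from input–output pairs is 1-nearest-neighbour classification; the training pairs below are made up for illustration.

```python
# 1-nearest-neighbour: predict the label of a new input by looking up the
# closest labeled training example (the observed input-output pairs).
def nearest_neighbour(train, x):
    # train is a list of (input_value, label) pairs acting as the "teacher"
    best_input, best_label = min(train, key=lambda pair: abs(pair[0] - x))
    return best_label

train = [(1.0, "small"), (2.0, "small"), (9.0, "large"), (10.0, "large")]
print(nearest_neighbour(train, 1.5))   # "small"
print(nearest_neighbour(train, 9.5))   # "large"
```

The labeled pairs play the role of the external feedback: the learned mapping is entirely determined by the examples the "teacher" supplied.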
e) SEMI-SUPERVISED LEARNING:
Semi-Supervised learning uses a set of curated, labeled data and tries to infer new labels/attributes on new data sets.
Semi-Supervised learning models are a solid middle ground between supervised and unsupervised models.
In semi-supervised learning we are given a few labeled examples and must make what we can of a large collection
of unlabeled examples. Even the labels themselves may not be the oracular truths that we hope for. Imagine that
you are trying to build a system to guess a person’s age from a photo. You gather some labeled examples by snapping
pictures of people and asking their age. That’s supervised learning. But in reality some of the people lied about their
age. It’s not just that there is random noise in the data; rather the inaccuracies are systematic, and to uncover them
is an unsupervised learning problem involving images, self-reported ages, and true (unknown) ages. Thus, both
noise and lack of labels create a continuum between supervised and unsupervised learning.
f ) REINFORCEMENT LEARNING
Reinforcement learning models use opposite dynamics such as rewards and punishment to “reinforce” different
types of knowledge. This type of learning technique is becoming really popular in modern AI solutions.
In reinforcement learning the agent learns from a series of reinforcements—rewards or punishments. For example,
the lack of a tip at the end of the journey gives the taxi agent an indication that it did something wrong. The two
points for a win at the end of a chess game tell the agent it did something right. It is up to the agent to decide which
of the actions prior to the reinforcement were most responsible for it.
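A tiny Q-learning sketch illustrates this credit-assignment idea. The environment (a 4-state chain rewarded only at the rightmost state) and all parameter values are made up for illustration.

```python
# Q-learning on a 4-state chain: reward 1.0 only for reaching the last
# state. The agent must learn which earlier actions were responsible for
# the eventual reward, propagating credit backward through the Q-values.
import random

random.seed(0)
n_states, actions = 4, [0, 1]          # action 0 = left, 1 = right
Q = [[0.0, 0.0] for _ in range(n_states)]
alpha, gamma, eps = 0.5, 0.9, 0.2      # learning rate, discount, exploration

for _ in range(500):                   # episodes
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action choice
        a = random.choice(actions) if random.random() < eps else (
            0 if Q[s][0] > Q[s][1] else 1)
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == n_states - 1 else 0.0
        # temporal-difference update of the action value
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After training, moving right is preferred in every non-terminal state.
print([0 if q[0] > q[1] else 1 for q in Q[:-1]])  # [1, 1, 1]
```

Although only the final step is rewarded, the discounted update spreads value back to the earlier "move right" actions, which is exactly the credit-assignment problem described above.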
36. What do you mean by Supervised Learning, Unsupervised Learning and Reinforcement learning
37. Probabilistic reasoning: Representing knowledge in an uncertain domain, the semantics of Bayesian
networks, Dempster-Shafer theory, Fuzzy sets, and fuzzy logics
There are various ways of representing uncertainty. Here we consider three different approaches, representing three
different areas of uncertainty:
Probability theory:
Probabilistic assertions and queries are not usually about particular possible worlds, but about sets of them.
In probability theory, the set of all possible worlds is called the sample space. The Greek letter Ω (uppercase
omega) is used to refer to the sample space, and ω (lowercase omega) refers to elements of the space, that is,
particular possible worlds.
A fully specified probability model associates a numerical probability P(ω) with each possible world. The basic
axioms of probability theory say that every possible world has a probability between 0 and 1 and that the total
probability of the set of possible worlds is 1:
0 ≤ P(ω) ≤ 1 for every ω, and Σ P(ω) = 1, summing over all ω in Ω.
i. If a coin is flipped there is an equal chance of it landing on the head side or the tail side. Let H1 denote the first
toss landing heads and H2 the second toss landing heads. Then P(H1) = 0.5 and P(H2) = 0.5.
ii. The probability of the 1st and 2nd toss both landing on heads is 0.5 * 0.5 = 0.25.
iii. We can write this as P(H1 ∧ H2) = 0.25, and in general, for two independent events P and Q, P(P ∧ Q) = P(P) * P(Q).
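The coin example can be checked by enumerating the sample space directly; this sketch assumes two fair, independent tosses.

```python
from itertools import product

# Sample space Ω of two coin tosses: 4 equally likely possible worlds,
# each with probability 1/4; the total probability is 1 (probability axiom).
worlds = list(product(["H", "T"], repeat=2))
p = {w: 1 / len(worlds) for w in worlds}
assert abs(sum(p.values()) - 1.0) < 1e-9   # axiom: probabilities sum to 1

# P(H1 ∧ H2): probability of the single world where both tosses are heads.
p_both_heads = sum(pr for w, pr in p.items() if w == ("H", "H"))
print(p_both_heads)  # 0.25, matching P(H1) * P(H2) = 0.5 * 0.5
```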
Fuzzy logic:
In the existing expert systems, uncertainty is dealt with through a combination of predicate logic and probability-
based methods. A serious shortcoming of these methods is that they are not capable of coming to grips with the
pervasive fuzziness of information in the knowledge base, and, as a result, are mostly ad hoc in nature.
An alternative approach to the management of uncertainty is based on the use of
fuzzy logic, which is the logic underlying approximate or, equivalently, fuzzy reasoning. A feature of fuzzy logic
which is of particular importance to the management of uncertainty in expert systems is that it provides a
systematic framework for dealing with fuzzy quantifiers, e.g., most, many, few, not very many, almost all,
infrequently, about 0.8, etc.
In this way, fuzzy logic subsumes both predicate logic and probability theory, and makes it possible to deal with
different types of uncertainty within a single conceptual framework.
In fuzzy logic, the deduction of a conclusion from a set of premises is reduced, in general, to the solution of a
nonlinear program through the application of projection and extension principles. This approach to deduction
leads to various basic syllogisms which may be used as rules of combination of evidence in expert systems.
To choose their actions, reasoning programs must be able to make assumptions and subsequently revise their
beliefs when discoveries contradict these assumptions.
The Truth Maintenance System (TMS) is a problem solver subsystem for performing these functions by recording
and maintaining the reasons for program beliefs. Such recorded reasons are useful in constructing explanations of
program actions and in guiding the course of action of a problem solver.
TMS are another form of knowledge representation which is best visualized in terms of graphs.
It stores the latest truth value of any predicate. The system is developed with the idea that the truthfulness of a
predicate can change with time, as new knowledge is added or existing knowledge is updated.
Bayes’ theorem can be used to calculate the probability that a certain event will occur or that a certain
proposition is true
The theorem is stated as follows:
P(B|A) = P(A|B)P(B) / P(A)
P(B) is called the prior probability of B. P(B|A), as well as being called the conditional probability, is also known
as the posterior probability of B.
The theorem follows from the definition of conditional probability:
P(A ∧ B) = P(A|B)P(B)
Note that due to the commutativity of ∧, we can also write
P(A ∧ B) = P(B|A)P(A)
Hence, we can deduce: P(B|A)P(A) = P(A|B)P(B)
This can then be rearranged to give Bayes’ theorem:
P(B|A) = P(A|B)P(B) / P(A)
In its general form, for a set of hypotheses H1, ..., Hn and evidence E, Bayes’ theorem states:
P(Hi|E) = P(E|Hi)P(Hi) / Σn P(E|Hn)P(Hn)
This reads: given some evidence E, the probability that hypothesis Hi is true is equal to the ratio of the probability
that E will be true given Hi times the a priori probability of Hi, to the sum, over the set of all hypotheses, of the
probability of E given each hypothesis times the probability of that hypothesis.
The set of all hypotheses must be mutually exclusive and exhaustive.
Thus, to use medical evidence to diagnose an illness, we must know the prior probability of finding each
symptom and also the probability of having an illness given that certain symptoms are observed.
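The diagnosis case can be worked numerically. All numbers below are made up for illustration; the two hypotheses (ill / healthy) are mutually exclusive and exhaustive, as the theorem requires.

```python
# Bayes' theorem for a hypothetical diagnosis: P(ill | symptom).
p_ill = 0.01                      # prior P(ill)
p_healthy = 0.99                  # prior P(healthy)
p_sym_given_ill = 0.9             # P(symptom | ill)
p_sym_given_healthy = 0.05        # P(symptom | healthy)

# Denominator: total probability of observing the symptom,
# summed over all (mutually exclusive, exhaustive) hypotheses.
p_sym = p_sym_given_ill * p_ill + p_sym_given_healthy * p_healthy

# Posterior: P(ill | symptom) = P(symptom | ill) P(ill) / P(symptom).
p_ill_given_sym = p_sym_given_ill * p_ill / p_sym
print(round(p_ill_given_sym, 3))  # ≈ 0.154
```

Even with a reliable test, the rare prior keeps the posterior modest: this is why the prior probabilities must be known, not just the symptom likelihoods.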
Bayesian networks are also called Belief Networks or Probabilistic Inference Networks.
Fuzzy logic is derived from fuzzy set theory dealing with reasoning that is approximate rather than precisely
deduced from classical predicate logic.
−Fuzzy logic is capable of handling inherently imprecise concepts.
−Fuzzy logic allows set membership values for imprecise concepts like "slightly", "quite" and "very" to be
expressed in linguistic form.
−Fuzzy set theory defines Fuzzy Operators on Fuzzy Sets.
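The standard fuzzy operators can be sketched directly; the membership values below are made up. This uses the common min/max/complement definitions, one of several operator families in fuzzy set theory.

```python
# Fuzzy set operators on membership degrees in [0, 1].
def fuzzy_and(a, b): return min(a, b)   # intersection: min of memberships
def fuzzy_or(a, b):  return max(a, b)   # union: max of memberships
def fuzzy_not(a):    return 1.0 - a     # complement

tall = 0.7    # degree to which a person belongs to the fuzzy set "tall"
heavy = 0.4   # degree of membership in "heavy"

print(fuzzy_and(tall, heavy))        # 0.4  (tall AND heavy)
print(fuzzy_or(tall, heavy))         # 0.7  (tall OR heavy)
print(round(fuzzy_not(tall), 2))     # 0.3  (NOT tall)
```

Unlike classical two-valued logic, the result of each operation is itself a degree of membership, so imprecise concepts compose without ever being forced to true/false.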
38. Depth limited search, bidirectional search, comparing uniform search strategies
BIDIRECTIONAL SEARCH
It searches forward from initial state and backward from goal state till both meet to identify a common state. The
path from initial state is concatenated with the inverse path from the goal state. Each search is done only up to half
of the total path.
The idea behind bidirectional search is to run two simultaneous searches: one forward from the initial state and the
other backward from the goal, hoping that the two searches meet in the middle (Figure 3.20). The motivation is that
b^(d/2) + b^(d/2) is much less than b^d, or in the figure, the area of the two small circles is less than the area of one
big circle centered on the start and reaching to the goal.
Bidirectional search is implemented by replacing the goal test with a check to see whether the frontiers of the two
searches intersect; if they do, a solution has been found. It is important to realize that the first such solution found
may not be optimal, even if the two searches are both breadth-first; some additional search is required to make sure
there isn’t another short-cut across the gap.
Fig. 4.2: A schematic view of a bidirectional search that is about to succeed when a branch from the start node meets
a branch from the goal node.
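The frontier-intersection check described above can be sketched as follows. This is a minimal illustration on an undirected graph given as an adjacency-list dict; the graph itself is made up.

```python
from collections import deque

# Bidirectional BFS: expand one node from each frontier in turn; when a
# neighbour is already known to the other search, the frontiers intersect
# and the two half-paths are joined at that node.
def bidirectional_bfs(graph, start, goal):
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}   # for path rebuild
    frontier_f, frontier_b = deque([start]), deque([goal])

    def walk(parents, node):          # follow parent links back to the root
        path = []
        while node is not None:
            path.append(node)
            node = parents[node]
        return path

    while frontier_f and frontier_b:
        for frontier, parents, other in ((frontier_f, parents_f, parents_b),
                                         (frontier_b, parents_b, parents_f)):
            node = frontier.popleft()
            for nbr in graph[node]:
                if nbr not in parents:
                    parents[nbr] = node
                    frontier.append(nbr)
                if nbr in other:      # frontiers meet: concatenate the halves
                    return walk(parents_f, nbr)[::-1] + walk(parents_b, nbr)[1:]
    return None                       # no path exists

graph = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(bidirectional_bfs(graph, "A", "D"))  # ['A', 'B', 'C', 'D']
```

The forward half-path is reversed before concatenation, matching the note above that the path from the initial state is joined with the inverse path from the goal state.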
Comparison:
Created by
Aman Adarsh
(CSE), JISCE