
ARTIFICIAL INTELLIGENCE (CS701)

1. What is bi-directional searching?

Bidirectional Search
It searches forward from the initial state and backward from the goal state until the two searches meet at a common state.
The path from the initial state is then concatenated with the inverse of the path from the goal state. Each search is done only up to
about half of the total path.
The idea of a bidirectional search is to reduce the search time by searching forward from the start and backward
from the goal simultaneously. When the two search frontiers intersect, the algorithm can reconstruct a single path
that extends from the start state through the frontier intersection to the goal.
A new problem arises during a bidirectional search, namely ensuring that the two search frontiers actually meet.
For example, a depth-first search in both directions is not likely to work well because its small search frontiers are
likely to pass each other by. Breadth-first search in both directions would be guaranteed to meet.
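The idea can be sketched as a bidirectional breadth-first search in Python (an illustrative implementation; the graph and node names are made up):

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """BFS from both ends, one layer at a time; stop when the frontiers meet."""
    if start == goal:
        return [start]
    fwd_parent = {start: None}          # parent maps for path reconstruction
    bwd_parent = {goal: None}
    fwd_frontier, bwd_frontier = deque([start]), deque([goal])

    def expand(frontier, parent, other_parent):
        for _ in range(len(frontier)):  # expand exactly one BFS layer
            node = frontier.popleft()
            for nbr in graph.get(node, []):
                if nbr not in parent:
                    parent[nbr] = node
                    if nbr in other_parent:   # frontiers intersect here
                        return nbr
                    frontier.append(nbr)
        return None

    while fwd_frontier and bwd_frontier:
        meet = expand(fwd_frontier, fwd_parent, bwd_parent) \
               or expand(bwd_frontier, bwd_parent, fwd_parent)
        if meet:
            path = []                    # forward half: start -> meet
            n = meet
            while n is not None:
                path.append(n)
                n = fwd_parent[n]
            path.reverse()
            n = bwd_parent[meet]         # inverse of backward half: meet -> goal
            while n is not None:
                path.append(n)
                n = bwd_parent[n]
            return path
    return None

graph = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['C', 'E'], 'E': ['D']}
print(bidirectional_search(graph, 'A', 'E'))  # ['A', 'B', 'C', 'D', 'E']
```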

Video link: https://www.youtube.com/watch?v=SABX6YggDTU

2. What is the difference between BFS and DFS?

Link : 1)bfs : https://www.youtube.com/watch?v=qul0f79gxGs


2)dfs: https://www.youtube.com/watch?v=f8luGFRtshY
3. What is Simple reflexive Agent?

A simple reflex agent is the most basic of the intelligent agents. It performs actions based on the current
situation: when something happens in the environment of a simple reflex agent, the agent quickly scans its
knowledge base for how to respond to the situation at hand, based on predetermined rules.

Simple reflex agents ignore the rest of the percept history and act only on the basis of the current percept. Percept
history is the history of all that an agent has perceived till date. The agent function is based on the condition-action
rule. A condition-action rule is a rule that maps a state i.e, condition to an action. If the condition is true, then the
action is taken, else not. This agent function only succeeds when the environment is fully observable. For simple
reflex agents operating in partially observable environments, infinite loops are often unavoidable. It may be
possible to escape from infinite loops if the agent can randomize its actions. Problems with Simple reflex agents
are :
 Very limited intelligence.
 No knowledge of non-perceptual parts of state.
 The table of condition-action rules is usually too big to generate and store.
 If there occurs any change in the environment, then the collection of rules need to be updated.
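The condition-action rule idea can be sketched in Python for a hypothetical two-cell vacuum world (the rules and percepts here are illustrative, not from the notes):

```python
# Condition-action rules for a hypothetical two-cell vacuum world.
# A percept is (location, dirty?); the agent ignores all percept history.
RULES = {
    ('A', True):  'Suck',
    ('A', False): 'Right',
    ('B', True):  'Suck',
    ('B', False): 'Left',
}

def simple_reflex_agent(percept):
    """Map the current percept directly to an action via the rule table."""
    return RULES[percept]

print(simple_reflex_agent(('A', True)))   # Suck
print(simple_reflex_agent(('B', False)))  # Left
```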
Link: https://www.youtube.com/watch?v=KZFfbebQPAU

4. How will you solve 8-puzzle problem using A* algorithm?


Link: https://www.youtube.com/watch?v=wJu3IZq1NFs
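As a sketch of the approach (not from the notes), the following Python code solves the 8-puzzle with A* using the admissible Manhattan-distance heuristic; states are 9-tuples with 0 for the blank:

```python
import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)      # 0 marks the blank tile

def manhattan(state):
    """Admissible heuristic: sum over tiles of |row - goal row| + |col - goal col|."""
    dist = 0
    for i, tile in enumerate(state):
        if tile:                          # skip the blank
            g = tile - 1                  # goal index of this tile
            dist += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return dist

def neighbors(state):
    """States reachable by sliding one adjacent tile into the blank."""
    b = state.index(0)
    r, c = divmod(b, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            n = nr * 3 + nc
            s = list(state)
            s[b], s[n] = s[n], s[b]
            yield tuple(s)

def astar(start):
    """Return the length (number of moves) of an optimal solution."""
    frontier = [(manhattan(start), 0, start)]     # entries are (f, g, state)
    best_g = {start: 0}
    while frontier:
        f, g, state = heapq.heappop(frontier)
        if state == GOAL:
            return g
        for nxt in neighbors(state):
            if g + 1 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + manhattan(nxt), g + 1, nxt))
    return None

print(astar((1, 2, 3, 4, 5, 6, 0, 7, 8)))  # 2 moves from this start state
```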

5. What are the common heuristic functions applied in AI search methods?

Heuristic functions:
A heuristic function h(n) gives an estimate of the cost of getting from node n to the goal state, so the least-cost node can
be selected from the possible choices.
There are two common heuristic functions:
1) Admissible (under estimate)
2) Non Admissible (over estimate)

An admissible heuristic is used to estimate the cost of reaching the goal state in an informed search algorithm. In
order for a heuristic to be admissible to the search problem, the estimated cost must always be lower than or equal
to the actual cost of reaching the goal state. The search algorithm uses the admissible heuristic to find an estimated
optimal path to the goal state from the current node. For example, in A* search the evaluation function (where n
is the current node) is:
f(n)=g(n)+h(n)
where
f(n)= the evaluation function.
g(n)= the cost from the start node to the current node
h(n)= estimated cost from current node to goal.

h(n) is calculated using the heuristic function. With a non-admissible heuristic, the A* algorithm could overlook
the optimal solution to a search problem due to an overestimation in f(n).
A non-admissible heuristic may overestimate the cost of reaching the goal. It may or may not result in an optimal
solution. However, the advantage is that sometimes, a non-admissible heuristic expands much fewer nodes. Thus,
the total cost (= search cost + path cost) may actually be lower than an optimal solution using an admissible
heuristic.
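A small illustration of admissibility, using a made-up map: the straight-line (Euclidean) distance to the goal can never exceed the true road distance, so it is an admissible h(n):

```python
import math

# Hypothetical map: straight-line coordinates and (longer) road distances.
coords = {'S': (0, 0), 'A': (3, 4), 'G': (6, 8)}
roads = {('S', 'A'): 6, ('A', 'G'): 7}      # actual road lengths

def h(n):
    """Straight-line (Euclidean) distance from n to the goal G."""
    (x1, y1), (x2, y2) = coords[n], coords['G']
    return math.hypot(x2 - x1, y2 - y1)

true_cost = roads[('S', 'A')] + roads[('A', 'G')]   # cheapest S -> G is 13
print(h('S'))                         # 10.0, never more than the true cost 13
print(h('A') <= roads[('A', 'G')])    # True: 5.0 <= 7
```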

Link:https://www.youtube.com/watch?v=KF8JRfzSRWc

6. What are the different types of knowledge?

There are 5 main types of knowledge representation in Artificial Intelligence.


 Meta Knowledge – knowledge about knowledge and how to acquire it.
 Heuristic Knowledge – represents the knowledge of an expert in a field or subject.
 Procedural Knowledge – gives information/knowledge about how to achieve something.
 Declarative Knowledge – statements that describe a particular object and its attributes,
including some behavior in relation to it.
 Structural Knowledge – describes what relationships exist between concepts/objects.

7. What is knowledge-base in AI?

A knowledge base is a database used for knowledge sharing and management.


It promotes the collection, organization and retrieval of knowledge. Many knowledge bases are structured around
artificial intelligence and not only store data but find solutions for further problems using data from previous
experience stored as part of the knowledge base.

A knowledge base is not merely a space for data storage, but can be an artificial intelligence tool for delivering
intelligent decisions. Various knowledge representation techniques, including frames and scripts, represent
knowledge. The services offered are explanation, reasoning and intelligent decision support.
The two major types of knowledge bases are human readable and machine readable.
 Human readable knowledge bases enable people to access and use the knowledge. They store help
documents, manuals, troubleshooting information and frequently answered questions.

 Machine readable knowledge bases store knowledge, but only in system readable forms. Solutions
are offered based upon automated deductive reasoning and are not so interactive as this relies on query
systems that have software that can respond to the knowledge base to narrow down a solution.

Link:https://www.youtube.com/watch?v=9iN3O_oL2ac

8. How knowledge is represented?

Knowledge representation and reasoning (KR, KRR) is the part of Artificial Intelligence concerned with how AI
agents think and how thinking contributes to the intelligent behavior of agents.

 It is responsible for representing information about the real world so that a computer can
understand it and can utilize this knowledge to solve complex real-world problems, such as diagnosing a
medical condition or communicating with humans in natural language.
 It is also a way which describes how we can represent knowledge in artificial intelligence.
Knowledge representation is not just storing data into some database, but it also enables an intelligent
machine to learn from that knowledge and experiences so that it can behave intelligently like a human.
There are mainly four ways of knowledge representation which are given as follows:

1. Logical Representation

2. Semantic Network Representation

3. Frame Representation

4. Production Rules

1. Logical Representation
Logical representation is a language with some concrete rules which deals with propositions and has no ambiguity
in representation. Logical representation means drawing a conclusion based on various conditions. This
representation lays down some important communication rules. It consists of precisely defined syntax and
semantics which supports the sound inference. Each sentence can be translated into logics using syntax and
semantics.
Syntax:

 Syntaxes are the rules which decide how we can construct legal sentences in the logic.

 It determines which symbol we can use in knowledge representation.

 How to write those symbols.


Semantics:

 Semantics are the rules by which we can interpret the sentence in the logic.

 Semantic also involves assigning a meaning to each sentence.


Logical representation can be categorised into mainly two logics:

1. Propositional Logics

2. Predicate logics

2.Semantic Network Representation


Semantic networks are alternative of predicate logic for knowledge representation. In Semantic networks, we can
represent our knowledge in the form of graphical networks. This network consists of nodes representing objects
and arcs which describe the relationship between those objects. Semantic networks can categorize the object in
different forms and can also link those objects. Semantic networks are easy to understand and can be easily
extended.
This representation consist of mainly two types of relations:

1. IS-A relation (Inheritance)

2. Kind-of-relation

3.Frame Representation
A frame is a record-like structure which consists of a collection of attributes and their values to describe an entity in
the world. Frames are an AI data structure which divides knowledge into substructures by representing
stereotyped situations. A frame consists of a collection of slots and slot values. These slots may be of any type and size.
Facets: The various aspects of a slot are known as facets. Facets are features of frames that let us place constraints on slot values.

4.Production Rules
A production rule system consists of (condition, action) pairs, which mean "if condition, then action". It has mainly
three parts:

 The set of production rules

 Working Memory

 The recognize-act-cycle
In a production rule system, the agent checks the condition; if the condition holds, the production rule fires and the
corresponding action is carried out. The condition part of a rule determines which rule may be applied to a
problem, and the action part carries out the associated problem-solving steps. This complete process is called the
recognize-act cycle.
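The recognize-act cycle can be sketched in Python (the rules and facts are illustrative):

```python
# Rules for a hypothetical production system: (condition facts, fact to add).
rules = [
    ({'raining', 'outside'}, 'wet'),
    ({'wet'}, 'cold'),
]

def recognize_act(working_memory):
    """Recognize-act cycle: fire every applicable rule until no rule
    can add a new fact to working memory."""
    changed = True
    while changed:
        changed = False
        for condition, action in rules:
            # recognize: all condition facts present, action not yet derived
            if condition <= working_memory and action not in working_memory:
                working_memory.add(action)      # act
                changed = True
    return working_memory

print(sorted(recognize_act({'raining', 'outside'})))
# ['cold', 'outside', 'raining', 'wet']
```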

9. What is search space ?


A search space is the set or domain through which an algorithm searches. In computer science, the space may be a
well-defined and finite data structure. Or, as in decision theory, it may be a vast and possibly infinite set whose
elements need to be individually generated during the search.

For example, in the game Guess Who, the players each begin with a set of character cards from which to choose.
They then take turns asking yes-or-no questions about the other player's choice. The set of cards is the search
space for this game. This set is finite and known ahead of time.
In chess, the search space is much more complicated. The search space is the set of all possible valid moves. For a
given turn the space is finite, but the set of all possible games is infinite. Since the player is trying to maximize the
probability of winning, they must search through many turns. These possibilities must be generated during the
search.
In the context of AI, the search space is used to define the optimization power of the AI. Eliezer
Yudkowsky defines optimization power as follows: if S is the search space and S+ is the set of options better than or
equal to the chosen one, then the optimization power (measured in bits) is

log2(|S| / |S+|)

10. What are the major component/components for measuring the performance of problem solving?

Search as a black box will result in an output that is either failure or a solution, We will evaluate a search
algorithm`s performance in four ways:
1. Completeness: is it guaranteed that our algorithm always finds a solution when one exists?
2. Optimality: does our algorithm always find the optimal solution?
3. Time complexity: how much time does our search algorithm take to find a solution?
4. Space complexity: how much memory is required to run the search algorithm?
Time and Space in complexity analysis are measured with respect to the number of nodes the problem graph has
in terms of asymptotic notations.
In AI, complexity is expressed by three factors b,d and m:

1. b the branching factor is the maximum number of successors of any node.


2. d, the depth of the shallowest goal node.
3. m, the maximum length of any path in the state space.
11. What is Blind search?

Blind (uninformed) search uses no domain-specific knowledge beyond the problem definition itself; it has no heuristic
estimate of how close a state is to the goal. Breadth-first search and depth-first search are typical examples.

12. What is “Turing Test” ?

A Turing Test is a method of inquiry in artificial intelligence (AI) for determining whether or not a computer is
capable of thinking like a human being. The test is named after Alan Turing, the English computer scientist,
cryptanalyst, mathematician and theoretical biologist who proposed it.
Turing proposed that a computer can be said to possess artificial intelligence if it can mimic human responses
under specific conditions. The original Turing Test requires three terminals, each of which is physically separated
from the other two. One terminal is operated by a computer, while the other two are operated by humans.
During the test, one of the humans functions as the questioner, while the second human and the computer function
as respondents. The questioner interrogates the respondents within a specific subject area, using a specified format
and context. After a preset length of time or number of questions, the questioner is then asked to decide which
respondent was human and which was a computer.

Link:https://www.youtube.com/watch?v=VZqSCFt6OsM

13. What is “Deep Blue ?

Deep Blue was a chess-playing supercomputer developed by IBM. It is known for being the first computer chess-
playing system to win both a chess game and a chess match against a reigning world champion, Grandmaster
Garry Kasparov, under regular time controls. Deep Blue lost its first six-game match to Kasparov in 1996 by a
score of 4–2, was then heavily upgraded, and finally won the six-game rematch against Kasparov in May 1997 by a
score of 3½–2½.
Technical specifications of Deep Blue include:
 RS/6000 SP Thin P2SC-based system

 30 nodes

 120 MHz P2SC microprocessor per node

 480 special-purpose VLSI chess chips per node

 AI programmed in C running on the AIX operating System

14. What is AI?


Artificial intelligence is a branch of computer science that aims to create intelligent machines. It has become an
essential part of the technology industry.
Research associated with artificial intelligence is highly technical and specialized. The core problems of artificial
intelligence include programming computers for certain traits such as:

 Knowledge

 Reasoning

 Problem solving

 Perception

 Learning

 Planning

 Ability to manipulate and move objects

Machine learning is also a core part of AI. Learning without supervision requires an ability to identify patterns in
streams of inputs, whereas learning with adequate supervision involves classification and numerical regression.

Link: https://www.youtube.com/watch?v=J7LqgglEfQw

15. What is Informed search strategy ?

The informed search algorithm is more useful for large search space. Informed search algorithm uses the idea of
heuristic, so it is also called Heuristic search.
Informed Search
A search using domain-specific knowledge.
Suppose that we have a way to estimate how close a state is to the goal, with an evaluation function.
General strategy: expand the best state in the open list first. It's called a best-first search or ordered state-space
search.
In general the evaluation function is imprecise, which makes the method a heuristic (works well in most cases).
The evaluation is often based on empirical observations.
Examples:

 The vacuum cleaner might move from a cell toward the dirtiest adjacent cell.

 For a path search in a graph with a geometric representation, give preference to the neighbors which are the
closest to the target by Euclidean distance (which may or may not be an indication of a good path). This
is called a greedy search algorithm.

 In game playing, the algorithm may move to a state giving it a strategic advantage (e.g., capturing the opponent's
queen).
In the informed search we will discuss two main algorithms which are given below:

 Best First Search Algorithm(Greedy search)

 A* Search Algorithm

link: https://www.youtube.com/watch?v=japhjrVxJdg

16. What is an agent ?

An agent can be anything that perceives its environment through sensors and acts upon that environment through
actuators. An agent runs in a cycle of perceiving, thinking, and acting. An agent can be:

 Human-Agent: A human agent has eyes, ears, and other organs which work for sensors and hand,
legs, vocal tract work for actuators.

 Robotic Agent: A robotic agent can have cameras, infrared range finder, NLP for sensors and
various motors for actuators.

 Software Agent: Software agent can have keystrokes, file contents as sensory input and act on
those inputs and display output on the screen.
Hence the world around us is full of agents such as thermostat, cellphone, camera, and even we are also agents.

17. Applications of Artificial Intelligence?

Application of AI
Artificial Intelligence has various applications in today's society. It is becoming essential for our time because
it can solve complex problems in an efficient way across multiple industries, such as healthcare, entertainment,
finance, and education. AI is making our daily lives more comfortable and faster.
Following are some sectors which have the application of Artificial Intelligence:

1. AI in Astronomy

 Artificial Intelligence can be very useful to solve complex universe problems. AI technology can
be helpful for understanding the universe such as how it works, origin, etc.
2. AI in Healthcare

In the last five to ten years, AI has become more advantageous for the healthcare industry and is going
to have a significant impact on it.

 Healthcare industries are applying AI to make better and faster diagnoses than humans. AI can
help doctors with diagnoses and can warn them when a patient's condition is worsening, so that medical help can reach
the patient before hospitalization.
3. AI in Gaming

 AI can be used for gaming purposes. AI machines can play strategic games like chess, where
the machine needs to think about a large number of possible positions.
4. AI in Finance

 AI and finance industries are the best matches for each other. The finance industry is implementing
automation, chatbot, adaptive intelligence, algorithm trading, and machine learning into financial
processes.
5. AI in Data Security

 The security of data is crucial for every company, and cyber-attacks are growing very rapidly in the
digital world. AI can be used to make your data more safe and secure. Examples such as the AEG bot and the
AI2 platform are used to detect software bugs and cyber-attacks more effectively.
6. AI in Social Media

 Social Media sites such as Facebook, Twitter, and Snapchat contain billions of user profiles, which
need to be stored and managed in a very efficient way. AI can organize and manage massive amounts of
data. AI can analyze lots of data to identify the latest trends, hashtag, and requirement of different users.
7. AI in Travel &Transport

 AI is in high demand in the travel industry. AI is capable of doing various travel-related
tasks, from making travel arrangements to suggesting hotels, flights, and the best routes to
customers. Travel companies are using AI-powered chatbots which can hold human-like interactions with
customers for better and faster responses.

Etc.

18. What do you mean by Intelligent Agent? Briefly explain different types of Agent.

An intelligent agent is an autonomous entity which acts upon an environment using sensors and actuators to
achieve its goals. An intelligent agent may learn from the environment in pursuit of those goals. A thermostat is an
example of an intelligent agent.

Types of Agents
Agents can be grouped into five classes based on their degree of perceived intelligence and capability:

 Simple Reflex Agents


 Model-Based Reflex Agents
 Goal-Based Agents
 Utility-Based Agents
 Learning Agent
1. Simple Reflex agent:

 The Simple reflex agents are the simplest agents. These agents take decisions on the basis of the
current percepts and ignore the rest of the percept history.

 These agents only succeed in the fully observable environment.

 The Simple reflex agent does not consider any part of percepts history during their decision and
action process.

 The Simple reflex agent works on Condition-action rule, which means it maps the current state to
action. Such as a Room Cleaner agent, it works only if there is dirt in the room.

2. Model-based reflex agent

 The Model-based agent can work in a partially observable environment, and track the situation.

 A model-based agent has two important factors:

 Model: It is knowledge about "how things happen in the world," so it is called a Model-based
agent.

 Internal State: It is a representation of the current state based on percept history.


 These agents have the model, "which is knowledge of the world" and based on the model they
perform actions.

 Updating the agent state requires information about:

 How the world evolves

 How the agent's action affects the world.

3. Goal-based agents

 The knowledge of the current state of the environment is not always sufficient for an agent to
decide what to do.

 The agent needs to know its goal which describes desirable situations.

 Goal-based agents expand the capabilities of the model-based agent by having the "goal"
information.

 They choose an action, so that they can achieve the goal.

 These agents may have to consider a long sequence of possible actions before deciding whether the
goal is achieved or not. Such considerations of different scenario are called searching and planning, which
makes an agent proactive.

4. Utility-based agents

 These agents are similar to the goal-based agent but provide an extra component of utility
measurement which makes them different by providing a measure of success at a given state.

 The utility-based agent acts based not only on goals but also on the best way to achieve them.

 The Utility-based agent is useful when there are multiple possible alternatives, and an agent has to
choose in order to perform the best action.
 The utility function maps each state to a real number to check how efficiently each action achieves
the goals.

5. Learning Agents

 A learning agent in AI is the type of agent which can learn from its past experiences, or it has
learning capabilities.

 It starts to act with basic knowledge and is then able to act and adapt automatically through learning.

 A learning agent has mainly four conceptual components, which are:

1. Learning element: It is responsible for making improvements by learning from the environment.

2. Critic: The learning element takes feedback from the critic, which describes how well the agent is
doing with respect to a fixed performance standard.

3. Performance element: It is responsible for selecting external actions.

4. Problem generator: This component is responsible for suggesting actions that will lead to new
and informative experiences.

 Hence, learning agents are able to learn, analyze performance, and look for new ways to improve
the performance.

Link: https://www.youtube.com/watch?v=BkedAnQfJ_U

19. Propositional Logic and inference using Propositional Logic .

Propositional logic (PL) is the simplest form of logic where all the statements are made by propositions. A
proposition is a declarative statement which is either true or false. It is a technique of knowledge representation in
logical and mathematical form.

Example:

a) It is Sunday.
b) The Sun rises in the West. (false proposition)

Propositional logic is also called Boolean logic, as it works on 0 and 1.

Inference using propositional logic: inference is used to create new sentences that logically follow from a given
set of sentences. Common inference rules include Modus Ponens, Modus Tollens, And-Elimination, and Resolution.

link: https://www.youtube.com/watch?v=gKHsecJl_iM
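One way to check inference mechanically is truth-table entailment: KB |= query holds when the query is true in every model that makes all KB sentences true. A minimal Python sketch (sentences are modeled as functions of a truth assignment):

```python
from itertools import product

def entails(kb, query, symbols):
    """Truth-table entailment: KB |= query iff the query is true in every
    model (truth assignment) that makes all KB sentences true."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if all(s(model) for s in kb) and not query(model):
            return False                 # counterexample model found
    return True

# KB: (P => Q), written as (not P or Q), together with P.
kb = [lambda m: (not m['P']) or m['Q'], lambda m: m['P']]

# By Modus Ponens the KB entails Q, but it does not entail ~Q.
print(entails(kb, lambda m: m['Q'], ['P', 'Q']))       # True
print(entails(kb, lambda m: not m['Q'], ['P', 'Q']))   # False
```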

20. Consider the following statements.

a) If you go swimming you will get wet.

b) If it is raining and you are outside then you will get wet.

c) If it is warm and there is no rain then it is a pleasant day.

d) You are not wet.

e) You are outside.

f) It is a warm day. Use Resolution-Refutation to prove that “You are not swimming.”
A resolution refutation proof of "You are not swimming" from the facts about a summer day proceeds as follows.
Since we want to prove ~swimming, we begin the proof by assuming that its opposite, swimming, is true. We
then look for a contradiction between our established set of facts about a summer day and this new assumption.
First, (swimming) and Statement 1, (~swimming ∨ wet), can be resolved since they contain complementary literals. In
order for both of these clauses to be true, the clause (wet) must be true. However, (wet) can be resolved with
Statement 4, (~wet), producing a contradiction. Thus, given that the original six statements were true, the
contradiction must have been caused by our assumption that (swimming) was true. Since (swimming) cannot be
true, (~swimming) must be true.
Here is this same argument recast into English.

To prove that “You are not swimming” we will assume that “You are swimming” and
show that this assumption leads to a contradiction. Statement (1) says that either “You
are not swimming” or “You are wet”, but since we are assuming that “You are
swimming” then, logically, you must be wet. However, Statement (4) says, “You are not
wet”. Hence, we have found a contradiction. If all of our original statements were true to
begin with, our assumption “You are swimming” must be incorrect. So, we conclude,
that you are not swimming.
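This refutation can be mechanized. The Python sketch below (an illustrative implementation, not part of the original notes) represents each clause as a set of literals, adds the negated goal, and resolves until the empty clause appears:

```python
def negate(lit):
    """Flip a literal: p <-> ~p."""
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolve(c1, c2):
    """All resolvents of two clauses (each a set of literals)."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def refutes(clauses, goal):
    """Prove goal by refutation: add its negation, then resolve until the
    empty clause (a contradiction) appears or nothing new is produced."""
    clauses = set(clauses) | {frozenset([negate(goal)])}
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a is not b:
                    for r in resolve(a, b):
                        if not r:            # empty clause: contradiction
                            return True
                        new.add(r)
        if new <= clauses:                   # fixed point: no proof
            return False
        clauses |= new

# Statements (a)-(f) in clause form:
kb = [
    frozenset(['~swimming', 'wet']),              # a) swimming => wet
    frozenset(['~raining', '~outside', 'wet']),   # b) raining & outside => wet
    frozenset(['~warm', 'raining', 'pleasant']),  # c) warm & ~raining => pleasant
    frozenset(['~wet']),                          # d) not wet
    frozenset(['outside']),                       # e) outside
    frozenset(['warm']),                          # f) warm
]
print(refutes(kb, '~swimming'))  # True: "You are not swimming" is proved
```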

21. Explain the working principle of Breadth First Search (BFS) and Depth-First-Search(DFS).

Depth First Search


Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data structures. The algorithm
starts at the root node (selecting some arbitrary node as the root node in the case of a graph) and explores as far as
possible along each branch before backtracking.
Breadth First Search
Breadth-first search (BFS) is an algorithm for traversing or searching tree or graph data structures. It starts at the
tree root (or some arbitrary node of a graph, sometimes referred to as a ‘search key’), and explores all of the
neighbor nodes at the present depth prior to moving on to the nodes at the next depth level.
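Both traversals can be sketched in Python; the only difference is the frontier data structure (FIFO queue for BFS, LIFO stack for DFS). The example graph is made up:

```python
from collections import deque

def bfs(graph, start):
    """Visit nodes level by level using a FIFO queue."""
    visited, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nbr in graph.get(node, []):
            if nbr not in visited:
                visited.add(nbr)
                queue.append(nbr)
    return order

def dfs(graph, start):
    """Go as deep as possible before backtracking, using a LIFO stack."""
    visited, order, stack = set(), [], [start]
    while stack:
        node = stack.pop()
        if node not in visited:
            visited.add(node)
            order.append(node)
            # push in reverse so neighbors are explored in listed order
            for nbr in reversed(graph.get(node, [])):
                if nbr not in visited:
                    stack.append(nbr)
    return order

graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['D'], 'C': [], 'D': ['G'], 'G': []}
print(bfs(graph, 'S'))  # ['S', 'A', 'B', 'C', 'D', 'G']
print(dfs(graph, 'S'))  # ['S', 'A', 'C', 'B', 'D', 'G']
```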


Bfs link : https://www.youtube.com/watch?v=qul0f79gxGs

Dfs link: https://www.youtube.com/watch?v=f8luGFRtshY


22. What is Prolog? Explain the concept of fact and rule in Prolog.

A Prolog program consists of a number of clauses. Each clause is either a fact or a rule. After a Prolog program is
loaded (or consulted) into a Prolog interpreter, users can submit goals or queries, and the Prolog interpreter will
give results (answers) according to the facts and rules.

Facts
A fact must start with a predicate (which is an atom) and end with a full stop. The predicate may be followed by
one or more arguments which are enclosed by parentheses. The arguments can be atoms (in this case, these atoms
are treated as constants), numbers, variables or lists. Arguments are separated by commas.
If we consider the arguments in a fact to be objects, then the predicate of the fact describes a property of the
objects.
In a Prolog program, the presence of a fact indicates a statement that is true, and the absence of a fact indicates a
statement that is not true. See the following example (the names are illustrative): a fact such as

father(john, mary).

states that john is the father of mary.

Rules
A rule can be viewed as an extension of a fact with added conditions that also have to be satisfied for it to be true.
It consists of two parts. The first part is similar to a fact (a predicate with arguments). The second part consists of
other clauses (facts or rules which are separated by commas) which must all be true for the rule itself to be true.
These two parts are separated by ":-". You may interpret this operator as "if" in English.
See the following example (illustrative clauses; the first two are facts and the third is Rule 3):

father(john, mary).
parent(mary, ann).
grandfather(X, Y) :- father(X, Z), parent(Z, Y).

Take Rule 3 as an example. It means that "grandfather(X, Y)" is true if both "father(X, Z)" and "parent(Z, Y)" are
true. The comma between the two conditions can be considered a logical-AND operator.

Link:
what is prolog : https://www.youtube.com/watch?v=hBz3DgXlg0Q
facts and rules: https://www.youtube.com/watch?v=h9jLWM2lFr0

23. What is the importance of Search Algorithm in AI? What is the difference between Uninformed Search
and Informed Search.

Search plays a major role in solving many Artificial Intelligence (AI) problems; it is a universal problem-solving
mechanism in AI. In many problems, the sequence of steps required for a solution is not known in advance and
must be determined by systematic trial-and-error exploration of alternatives.
The problems that are addressed by AI search algorithms fall into three general classes:
single-agent path-finding problems, two-player games, and constraint-satisfaction problems.
Link: serach algo : https://www.youtube.com/watch?v=uZfS2B-lWic&list=PLrjkTql3jnm_yol-
ZK1QqPSn5YSg0NF9r&index=10
informed and uninformed search:
https://www.youtube.com/watch?v=AneIXxdu_g4&list=PLrjkTql3jnm_yol-
ZK1QqPSn5YSg0NF9r&index=11

24. Best-first Search Algorithm or Greedy Search work.


Best-first Search Algorithm (Greedy Search):
The greedy best-first search algorithm always selects the path which appears best at the moment. It combines
aspects of depth-first and breadth-first search, guided by a heuristic function.
Best-first search allows us to take the advantages of both algorithms: with the help of best-first search, at each
step, we can choose the most promising node. In the greedy best-first search algorithm, we expand the node which
appears closest to the goal node, where closeness is estimated by the heuristic function, i.e.
f(n) = h(n)
where h(n) = estimated cost from node n to the goal.
Greedy best-first search is implemented with a priority queue.
Best first search algorithm:

Step 1:Place the starting node into the OPEN list.

Step 2:If the OPEN list is empty, Stop and return failure.

Step 3: Remove the node n from the OPEN list which has the lowest value of h(n), and place it in the
CLOSED list.
Step 4:Expand the node n, and generate the successors of node n.

Step 5:Check each successor of node n, and find whether any node is a goal node or not. If any successor
node is goal node, then return success and terminate the search, else proceed to Step 6.

Step 6: For each successor node, the algorithm computes the evaluation function f(n) and checks whether the node
is already in the OPEN or CLOSED list. If the node is in neither list, add it to the OPEN
list.

Step 7:Return to Step 2.


Advantages:

Best first search can switch between BFS and DFS by gaining the advantages of both the algorithms.

This algorithm is more efficient than BFS and DFS algorithms.
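The steps above can be sketched in Python with a priority queue ordered by h(n) (the graph and heuristic values are illustrative):

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Expand the node with the smallest h(n) from the OPEN list first."""
    open_list = [(h[start], start, [start])]     # entries are (h, node, path)
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for nbr in graph.get(node, []):
            if nbr not in closed:
                heapq.heappush(open_list, (h[nbr], nbr, path + [nbr]))
    return None

# Hypothetical graph with heuristic estimates h(n) of the distance to G.
graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['G'], 'G': []}
h = {'S': 5, 'A': 2, 'B': 4, 'G': 0}
print(greedy_best_first(graph, h, 'S', 'G'))  # ['S', 'A', 'G']
```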


25. Hill-climbing, Simulated-Annealling and Genetic algorithm. How do these algorithm work?

Hill Climbing Algorithm in Artificial Intelligence

1) The hill climbing algorithm is a local search algorithm which continuously moves in the direction of
increasing elevation/value to find the peak of the mountain or the best solution to the problem. It terminates
when it reaches a peak where no neighbor has a higher value.

2) Hill climbing algorithm is a technique which is used for optimizing the mathematical problems.
One of the widely discussed examples of Hill climbing algorithm is Traveling-salesman Problem in which
we need to minimize the distance traveled by the salesman.

3) It is also called greedy local search as it only looks to its good immediate neighbor state and not
beyond that.

4) A node of hill climbing algorithm has two components which are state and value.

5) Hill Climbing is mostly used when a good heuristic is available.

6) In this algorithm, we don't need to maintain and handle the search tree or graph as it only keeps a
single current state.

Features of Hill Climbing:


Following are some main features of Hill Climbing Algorithm:

1) Generate and Test variant: Hill climbing is a variant of the Generate and Test method. The Generate and Test method produces feedback which helps to decide which direction to move in the search space.

2) Greedy approach: Hill-climbing algorithm search moves in the direction which optimizes the cost.

3) No backtracking: It does not backtrack the search space, as it does not remember the previous
states.
Different regions in the state space landscape:
Local Maximum: Local maximum is a state which is better than its neighbor states, but there is also another state
which is higher than it.
Global Maximum: Global maximum is the best possible state of state space landscape. It has the highest value of
objective function.
Current state: It is a state in a landscape diagram where an agent is currently present.
Flat local maximum: It is a flat space in the landscape where all the neighbor states of current states have the same
value.
Shoulder: It is a plateau region which has an uphill edge.

Types of Hill Climbing Algorithm:

1) Simple hill Climbing:

2) Steepest-Ascent hill-climbing:

3) Stochastic hill Climbing:
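The steepest-ascent variant can be sketched in a few lines; the 1-D landscape f and the neighbour function here are hypothetical illustrations:

```python
def hill_climbing(start, neighbours, value):
    """Steepest-ascent hill climbing: repeatedly move to the best neighbour
    and stop when no neighbour has a higher value (a peak)."""
    current = start
    while True:
        best = max(neighbours(current), key=value, default=current)
        if value(best) <= value(current):   # no uphill move left
            return current                  # local (or global) maximum
        current = best

# Hypothetical 1-D landscape with its peak at x = 3
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
print(hill_climbing(0, step, f))  # → 3
```

Because the loop keeps only the single current state and never backtracks, it will stop at a local maximum if one lies between the start and the global peak.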

Simulated Annealing:
A hill-climbing algorithm which never makes a move towards a lower value is guaranteed to be incomplete, because it can get stuck on a local maximum. If the algorithm instead performs a pure random walk, moving to a randomly chosen successor, it may be complete but is not efficient. Simulated annealing is an algorithm which yields both efficiency and completeness.
In mechanical terms, annealing is the process of heating a metal or glass to a high temperature and then cooling it gradually, which allows the material to reach a low-energy crystalline state. Simulated annealing uses the same idea: the algorithm picks a random move instead of the best move. If the random move improves the state, it is always accepted; otherwise the move is accepted with some probability less than 1, so the algorithm occasionally moves downhill and escapes local maxima.
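One way to sketch this acceptance rule is below; the landscape, neighbour function and geometric cooling schedule are all hypothetical choices for illustration:

```python
import math
import random

def simulated_annealing(start, neighbour, value, t0=10.0, cooling=0.95, steps=500):
    """Pick a random move each step; always accept uphill moves, and accept
    downhill moves with probability exp(delta / T), which shrinks as T cools."""
    current = best = start
    t = t0
    for _ in range(steps):
        nxt = neighbour(current)
        delta = value(nxt) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt                       # accept the move
            if value(current) > value(best):
                best = current                  # remember the best state seen
        t *= cooling                            # gradual cooling schedule
    return best

# Hypothetical landscape: local maximum at x = 0, global maximum at x = 10
random.seed(0)
f = lambda x: -abs(x - 10) if x > 4 else -abs(x) - 4
move = lambda x: x + random.choice([-1, 1])
result = simulated_annealing(0, move, f)
```

Early on, the high temperature T makes downhill moves likely, letting the search cross the valley; as T cools, the behaviour approaches plain hill climbing.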

Genetic Algorithms
Genetic Algorithms (GAs) are adaptive heuristic search algorithms that belong to the larger class of evolutionary algorithms. Genetic algorithms are based on the ideas of natural selection and genetics. They are an intelligent exploitation of random search, guided by historical data, that directs the search into regions of better performance in the solution space. They are commonly used to generate high-quality solutions for optimization and search problems.
Genetic algorithms are based on an analogy with genetic structure and behavior of chromosome of the population.
Following is the foundation of GAs based on this analogy –
1. Individuals in a population compete for resources and mates.
2. Those individuals who are most successful (fittest) mate to create more offspring than others.
3. Genes from the "fittest" parents propagate through the generation; sometimes parents create offspring that are better than either parent.
4. Thus each successive generation becomes better suited to its environment.
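The selection/crossover/mutation cycle can be sketched on a toy problem; OneMax (maximise the number of 1-bits in a string) and every parameter value here are illustrative choices, not from the notes:

```python
import random

def genetic_algorithm(pop_size=20, length=12, generations=60):
    """Toy GA on the OneMax problem (fitness = number of 1-bits), with
    truncation selection, single-point crossover and bit-flip mutation."""
    fitness = lambda ind: sum(ind)
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)     # fittest individuals first
        parents = pop[: pop_size // 2]          # the fittest compete to mate
        children = [pop[0][:]]                  # elitism: keep the best so far
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)   # single-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:           # occasional bit-flip mutation
                i = random.randrange(length)
                child[i] = 1 - child[i]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

random.seed(1)
best = genetic_algorithm()
```

Each generation mirrors the analogy above: the fittest parents mate, their genes mix through crossover, and mutation keeps diversity in the population.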
Link : hill climbing : https://www.youtube.com/watch?v=FU6ZzRs6szE
simulated annealing: https://www.youtube.com/watch?v=4pVYqxoPE94
Genetic algo: https://www.youtube.com/watch?v=FwPgHgbncPk

26. A* Search algorithm. Considering a graph find shortest path using A* algorithm.
Link: https://www.youtube.com/watch?v=tvAh0JZF2YE
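A minimal A* sketch over a small weighted graph; the node names, edge costs and (admissible) heuristic values are made up for illustration:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: expand the node minimising f(n) = g(n) + h(n)."""
    open_list = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}                           # cheapest known cost to each node
    while open_list:
        f, g, n, path = heapq.heappop(open_list)
        if n == goal:
            return path, g
        for succ, cost in graph.get(n, []):
            g2 = g + cost
            if g2 < best_g.get(succ, float('inf')):   # found a cheaper route
                best_g[succ] = g2
                heapq.heappush(open_list, (g2 + h[succ], g2, succ, path + [succ]))
    return None, float('inf')

# Hypothetical weighted graph with an admissible heuristic
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 12)], 'B': [('G', 3)]}
h = {'S': 7, 'A': 6, 'B': 2, 'G': 0}
path, cost = a_star(graph, h, 'S', 'G')
print(path, cost)  # → ['S', 'A', 'B', 'G'] 6
```

With an admissible heuristic (one that never overestimates), A* returns a shortest path; here it prefers the S-A-B-G route of cost 6 over the direct but costlier alternatives.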
27. Adversarial Search. Mini-Max Algorithm. Importance of Mini-Max Algorithm in Artificial Intelligence.
Limitation of Mini-Max Algorithm. Alpha-Beta Pruning algorithm How does it work?

Adversarial search, also known as Minimax search is known for its usefulness in calculating the best move in two
player games where all the information is available, such as chess or tic tac toe. It consists of navigating through a
tree which captures all the possible moves in the game, where each move is represented in terms of loss and gain
for one of the players.
It follows that this can only be used to make decisions in zero-sum games, where one player's loss is the other player's gain. Theoretically, this search algorithm is based on von Neumann's minimax theorem, which states that in these types of games there is always a set of strategies which leads to both players gaining the same value; since this is the best possible value one can expect to gain, one should employ this set of strategies.
Mini-Max Algorithm in Artificial Intelligence

1) Mini-max algorithm is a recursive or backtracking algorithm which is used in decision-making and


game theory. It provides an optimal move for the player assuming that opponent is also playing optimally.

2) Mini-Max algorithm uses recursion to search through the game-tree.

3) The Min-Max algorithm is mostly used for game playing in AI, such as chess, checkers, tic-tac-toe, Go, and various other two-player games. The algorithm computes the minimax decision for the current state.

4) In this algorithm two players play the game, one is called MAX and other is called MIN.

5) The two players play against each other: each tries to gain the maximum benefit while leaving the opponent the minimum benefit.

6) Both Players of the game are opponent of each other, where MAX will select the maximized value
and MIN will select the minimized value.

7) The minimax algorithm performs a depth-first search to explore the complete game tree.

8) The minimax algorithm proceeds all the way down to the terminal nodes of the tree, then backs the values up the tree as the recursion unwinds.
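These steps can be sketched recursively; the tiny game tree below is a hypothetical example with utility values at the leaves:

```python
def minimax(node, is_max, tree):
    """Recursive minimax: MAX picks the largest child value, MIN the smallest.
    Leaves hold their utility values directly."""
    children = tree.get(node)
    if children is None:                       # terminal node: return its utility
        return node
    values = [minimax(c, not is_max, tree) for c in children]
    return max(values) if is_max else min(values)

# Hypothetical game tree: internal nodes map to children, leaves are utilities
tree = {'root': ['L', 'R'], 'L': [3, 5], 'R': [2, 9]}
print(minimax('root', True, tree))  # → 3
```

MIN reduces L to 3 and R to 2; MAX then chooses the larger of the two, so the root's minimax value is 3.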

Properties of Mini-Max algorithm:

1) Complete- The Min-Max algorithm is complete. It will definitely find a solution (if one exists) in a finite search tree.

2) Optimal- Min-Max algorithm is optimal if both opponents are playing optimally.

3) Time complexity- As it performs a DFS over the game tree, the time complexity of the Min-Max algorithm is O(b^m), where b is the branching factor of the game tree and m is the maximum depth of the tree.

4) Space Complexity- The space complexity of the Mini-Max algorithm is similar to DFS, which is O(bm).

Limitation of the minimax Algorithm:


The main drawback of the minimax algorithm is that it gets really slow for complex games such as chess, Go, etc. These games have a huge branching factor, so the player has many choices to consider. This limitation of the minimax algorithm can be mitigated by alpha-beta pruning.

Alpha-Beta Pruning

1) Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization technique


for the minimax algorithm.

2) As we saw with the minimax search algorithm, the number of game states it has to examine is exponential in the depth of the tree. While we cannot eliminate the exponent, we can effectively cut it in half. There is a technique by which we can compute the correct minimax decision without checking every node of the game tree; this technique is called pruning. Because it involves two threshold parameters, alpha and beta, for future expansion, it is called alpha-beta pruning. It is also called the Alpha-Beta algorithm.

3) Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only the leaves but entire sub-trees.

4) The two-parameter can be defined as:

1. Alpha: The best (highest-value) choice we have found so far at any point along the path of
Maximizer. The initial value of alpha is -∞.

2. Beta: The best (lowest-value) choice we have found so far at any point along the path of
Minimizer. The initial value of beta is +∞.

5) Applied to a standard minimax search, alpha-beta pruning returns the same move as the standard algorithm, but it avoids exploring nodes that do not affect the final decision and only slow the algorithm down. Pruning these nodes makes the algorithm faster.
The main condition required for alpha-beta pruning (the cut-off condition) is:

1. α ≥ β
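A sketch of minimax with the α ≥ β cut-off; the tree is the same kind of hypothetical example a plain minimax would search:

```python
def alphabeta(node, is_max, tree, alpha=float('-inf'), beta=float('inf')):
    """Minimax with alpha-beta pruning: stop exploring a node's remaining
    children as soon as alpha >= beta."""
    children = tree.get(node)
    if children is None:                 # leaf: return its utility value
        return node
    if is_max:
        value = float('-inf')
        for c in children:
            value = max(value, alphabeta(c, False, tree, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:            # pruning condition: cut off this branch
                break
        return value
    value = float('inf')
    for c in children:
        value = min(value, alphabeta(c, True, tree, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# Hypothetical game tree with utilities at the leaves
tree = {'root': ['L', 'R'], 'L': [3, 5], 'R': [2, 9]}
print(alphabeta('root', True, tree))  # → 3
```

While evaluating R, the first leaf (2) drives beta down to 2, which is below the alpha of 3 established by L, so the leaf 9 is never examined; the returned move is still the same as plain minimax.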

For worked examples, see:

link:1) minimax: https://www.youtube.com/watch?v=Ntu8nNBL28o
2)alpha beta: https://www.youtube.com/watch?v=dEs_kbvu_0s

28. Propositional Logic (PL)? Limitations of Propositional Logic? First Order Logic. Syntax of FOPL.
Universal Quantifier and Existential Quantifier in FOPL with examples.
Example:
Link:
1)propositional logic : https://www.youtube.com/watch?v=tpDU9UXqsUo
2)predicate or fol : https://www.youtube.com/watch?v=pcV2lL6yNZ8

29. Steps for Resolution in First Order Logic. Solve example problems for resolution by refutation.
For example of resolution by refutation see answer 20.
link: resolution : https://www.youtube.com/watch?v=eaCVH8XWaPc
https://www.youtube.com/watch?v=TR7iWaN_nHQ
https://www.youtube.com/watch?v=-LT96Et6b0A
30. Expert Systems? Components of Expert Systems?

An Expert System is defined as an interactive and reliable computer-based decision-making system which uses both facts and heuristics to solve complex decision-making problems. It is designed to perform at the level of human intelligence and expertise. It is a computer application which solves the most complex issues in a specific domain.
The expert system can resolve many issues which generally would require a human expert. It is based on
knowledge acquired from an expert. It is also capable of expressing and reasoning about some domain of
knowledge. Expert systems were the predecessor of the current day artificial intelligence, deep learning and
machine learning systems.

Examples of Expert Systems


Following are examples of Expert Systems
1) MYCIN: It was based on backward chaining and could identify various bacteria that could cause
acute infections. It could also recommend drugs based on the patient's weight.
2) DENDRAL: Expert system used for chemical analysis to predict molecular structure.
3) PXDES: Expert system used to predict the degree and type of lung cancer
4) CaDet: Expert system that could identify cancer at early stages
The expert System consists of the following given components:
User Interface
The user interface is the most crucial part of the expert system. This component takes the user's query in a
readable form and passes it to the inference engine. After that, it displays the results to the user. In other words, it's
an interface that helps the user communicate with the expert system.
Inference Engine
The inference engine is the brain of the expert system. Inference engine contains rules to solve a specific problem.
It refers the knowledge from the Knowledge Base. It selects facts and rules to apply when trying to answer the
user's query. It provides reasoning about the information in the knowledge base. It also helps in deducting the
problem to find the solution. This component is also helpful for formulating conclusions.
Knowledge Base
The knowledge base is a repository of facts. It stores all the knowledge about the problem domain. It is like a large
container of knowledge which is obtained from different experts of a specific field.
Thus we can say that the success of the Expert System mainly depends on the highly accurate and precise
knowledge.

31. Rule-Based Architecture of Expert System.

Rule-Based System Architecture


-A collection of rules
-A collection of facts
-An inference engine
Given a rule-based system, we might want to:
-See what new facts can be derived
-Ask whether a fact is implied by the knowledge base and the already known facts

Given a set of rules like these, there are essentially two ways we can use them to generate new knowledge:
-Forward chaining starts with the facts and sees what rules apply (and hence what should be done) given the facts. It is data-driven.
-Backward chaining starts with something to find out and looks for rules that will help in answering it. It is goal-driven.

A Simple Example (I)


R1: IF hot AND smoky THEN fire
R2: IF alarm_beeps THEN smoky
R3: If fire THEN switch_on_sprinklers
F1: alarm_beeps [Given]
F2: hot [Given]
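The rules and facts above can be run through a tiny data-driven forward-chaining loop; this sketch derives smoky from the alarm, then fire, then switch_on_sprinklers:

```python
def forward_chain(rules, facts):
    """Data-driven forward chaining: repeatedly fire any rule whose
    antecedents are all known facts, until nothing new can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)      # fire the rule: add its conclusion
                changed = True
    return facts

# Rules R1-R3 and facts F1-F2 from the example above
rules = [({'hot', 'smoky'}, 'fire'),           # R1
         ({'alarm_beeps'}, 'smoky'),           # R2
         ({'fire'}, 'switch_on_sprinklers')]   # R3
facts = forward_chain(rules, ['alarm_beeps', 'hot'])
print('switch_on_sprinklers' in facts)  # → True
```

Backward chaining would instead start from the goal switch_on_sprinklers and work backwards through R3, R1 and R2 to the given facts.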
32. Forward Chaining and Backward Chaining in Expert System. What are the differences between
Forward Chaining and Backward Chaining?
Link:

1) forward : https://www.youtube.com/watch?v=PBTSdx_C9WM
2) backward: https://www.youtube.com/watch?v=W5O8QAWu-OM

33. Natural Language processing (NLP)?

Natural language processing (NLP) is the ability of a computer program to understand human language as it is
spoken. NLP is a component of artificial intelligence (AI).
The development of NLP applications is challenging because computers traditionally require humans to "speak" to
them in a programming language that is precise, unambiguous and highly structured, or through a limited number
of clearly enunciated voice commands. Human speech, however, is not always precise -- it is often ambiguous and
the linguistic structure can depend on many complex variables, including slang, regional dialects and social
context.

Benefits of NLP
NLP hosts benefits such as:

1) Improved accuracy and efficiency of documentation.


2) The ability to automatically make a readable summary text.

3) Useful for personal assistants such as Alexa.

4) Allows an organization to use chatbots for customer support.


5) Easier to perform sentiment analysis.

34. Common applications of Natural Language processing.

Applications of nlp :

#1. Sentiment Analysis


Mostly used in web and social media monitoring, Natural Language Processing is a great tool to comprehend and analyse responses to business messages published on social media platforms. It helps to analyse the attitude and emotional state of the writer (the person commenting on or engaging with posts). This application is also known as opinion mining.
#2. Chatbots
We hear a lot about Chatbots these days, chatbots are the solution for consumer frustration regarding customer
care call assistance. They provide modern-day virtual assistance for simple problems of the customer and offload
low-priority, high turnover tasks which require no skill. Intelligent Chatbots are going to offer personalised
assistance to the customer in the near future.
#3. Customer Service
Ensuring customer loyalty by keeping them content and happy is the supreme challenge and responsibility of
every business organisation. NLP has aided in multiple functions of customer service and served as an excellent
tool to gain insight into audience tastes, preferences and perceptions. One example is speech separation, where the AI identifies each voice, matches it to the corresponding speaker, and answers each caller separately.
#4. Managing the Advertisement Funnel
What does your consumer need? Where is your consumer looking for his or her needs? Natural Language
Processing is a great source for intelligent targeting and placement of advertisements in the right place at the right
time and for the right audience. Reaching out to the right patron of your product is the ultimate goal for any
business. NLP matches the right keywords in the text and helps to hit the right customers. Keyword matching is
the simple task of NLP yet highly remunerative for businesses.
#5. Market Intelligence
Business markets are influenced and impacted by market knowledge and information exchange between various
organisations, stakeholders, governments and regulatory bodies. It is vital to stay up to date with industry trends
and changing standards. NLP is a useful technology for tracking and monitoring market intelligence reports and extracting the information businesses need to build new strategies.
35. Steps involved in NLP. Different form of Learning in AI.

There are general five steps −


Lexical Analysis − It involves identifying and analyzing the structure of words. The lexicon of a language is the collection of words and phrases in that language. Lexical analysis divides the whole chunk of text into paragraphs, sentences, and words.

Syntactic Analysis (Parsing) − It involves analyzing the words in a sentence for grammar and arranging them in a manner that shows the relationships among the words. A sentence such as "The school goes to boy" is rejected by an English syntactic analyzer.

Semantic Analysis − It draws the exact meaning or the dictionary meaning from the text. The text is checked for meaningfulness. This is done by mapping syntactic structures to objects in the task domain. The semantic analyzer disregards sentences such as "hot ice-cream".

Discourse Integration − The meaning of any sentence depends upon the meaning of the sentence just before it. In addition, it also influences the meaning of the immediately succeeding sentence.

Pragmatic Analysis − During this, what was said is re-interpreted on what it actually meant. It involves deriving
those aspects of language which require real world knowledge.

Learning:

An agent is learning if it improves its performance on future tasks after making observations about the world.
Learning can range from the trivial, as exhibited by jotting down a phone number, to the profound, as exhibited by
Albert Einstein, who inferred a new theory of the universe.

FORMS OF LEARNING
Based on how they represent knowledge, AI learning models can be classified into two main types: inductive and deductive.

 a) INDUCTIVE LEARNING


This type of AI learning model is based on inferring a general rule from datasets of input-output pairs. Algorithms such as Knowledge-Based Inductive Learning (KBIL) are a great example of this type of AI learning technique. KBIL focuses on finding inductive hypotheses on a dataset with the help of background information.

b) DEDUCTIVE LEARNING
This type of AI learning technique starts with a series of rules and infers new rules that are more efficient in the context of a specific AI algorithm. Explanation-Based Learning (EBL) and Relevance-Based Learning (RBL) are examples of deductive techniques. EBL extracts general rules from examples by "generalizing" the explanation. RBL focuses on identifying attributes and deductive generalizations from simple examples.
Based on their feedback characteristics, AI learning models can be classified as supervised, unsupervised, semi-supervised or reinforced.

c) UNSUPERVISED LEARNING
Unsupervised models focus on learning a pattern in the input data without any external feedback. Clustering is a
classic example of unsupervised learning models.
In unsupervised learning the agent learns patterns in the input even though no explicit feedback is supplied. The
most common unsupervised learning task is clustering: detecting potentially useful clusters of input examples. For
example, a taxi agent might gradually develop a concept of “good traffic days” and “bad traffic days” without ever
being given labeled examples of each by a teacher.

d)  SUPERVISED LEARNING
Supervised learning models use external feedback to learn functions that map inputs to output observations. In these models the external environment acts as a "teacher" of the AI algorithms.
In supervised learning the agent observes some example input–output pairs and learns a function that maps from
input to output.

e) SEMI-SUPERVISED LEARNING:
Semi-Supervised learning uses a set of curated, labeled data and tries to infer new labels/attributes on new data sets.
Semi-Supervised learning models are a solid middle ground between supervised and unsupervised models.
In semi-supervised learning we are given a few labeled examples and must make what we can of a large collection
of unlabeled examples. Even the labels themselves may not be the oracular truths that we hope for. Imagine that
you are trying to build a system to guess a person’s age from a photo. You gather some labeled examples by snapping
pictures of people and asking their age. That’s supervised learning. But in reality some of the people lied about their
age. It’s not just that there is random noise in the data; rather the inaccuracies are systematic, and to uncover them
is an unsupervised learning problem involving images, self-reported ages, and true (unknown) ages. Thus, both
noise and lack of labels create a continuum between supervised and unsupervised learning.

 f ) REINFORCEMENT LEARNING
Reinforcement learning models use opposite dynamics such as rewards and punishment to “reinforce” different
types of knowledge. This type of learning technique is becoming really popular in modern AI solutions.
In reinforcement learning the agent learns from a series of reinforcements: rewards or punishments. For example, the lack of a tip at the end of the journey gives the taxi agent an indication that it did something wrong. The two points for a win at the end of a chess game tell the agent it did something right. It is up to the agent to decide which of the actions prior to the reinforcement were most responsible for it.

36. What do you mean by Supervised Learning, Unsupervised Learning and Reinforcement learning

solution (see answer 35)

37. Probabilistic reasoning: Representing knowledge in an uncertain domain, the semantics of Bayesian
networks, Dempster-Shafer theory, Fuzzy sets, and fuzzy logics

There are various ways of representing uncertainty. Here we consider three different approaches, representing three
different areas of uncertainty:
Probability theory:

Probabilistic assertions and queries are not usually about particular possible worlds, but about sets of them.

In probability theory, the set of all possible worlds is called the sample space. The Greek letter Ω (uppercase
omega) is used to refer to the sample space, and ω (lowercase omega) refers to elements of the space, that is,
particular possible worlds.

A fully specified probability model associates a numerical probability P(ω) with each possible world. The basic axioms of probability theory say that every possible world has a probability between 0 and 1 and that the total probability of the set of possible worlds is 1:

0 ≤ P(ω) ≤ 1 for every ω, and Σ_{ω ∈ Ω} P(ω) = 1

i. If a coin is flipped there is an equal chance of it landing on head side or tail side, consider H1 is for heads and
H2 for tails. This scenario is expressed as P(H1)=0.5 and P(H2)=0.5.

ii. The probability of 1st and 2nd toss both landing on heads is 0.5*0.5=0.25.

iii. We can write this as P(H1 ∧ H2) = 0.25, and in general for any two independent events P and Q, P(P ∧ Q) = P(P) × P(Q).

Fuzzy logic:

In the existing expert systems, uncertainty is dealt with through a combination of predicate logic and probability-
based methods. A serious shortcoming of these methods is that they are not capable of coming to grips with the
pervasive fuzziness of information in the knowledge base, and, as a result, are mostly ad hoc in nature.

An alternative approach to the management of uncertainty which is suggested in this paper is based on the use of
fuzzy logic, which is the logic underlying approximate or, equivalently, fuzzy reasoning. A feature of fuzzy logic
which is of particular importance to the management of uncertainty in expert systems is that it provides a
systematic framework for dealing with fuzzy quantifiers, e.g., most, many, few, not very many, almost all,
infrequently, about 0.8, etc.

In this way, fuzzy logic subsumes both predicate logic and probability theory, and makes it possible to deal with
different types of uncertainty within a single conceptual framework.

In fuzzy logic, the deduction of a conclusion from a set of premises is reduced, in general, to the solution of a
nonlinear program through the application of projection and extension principles. This approach to deduction
leads to various basic syllogisms which may be used as rules of combination of evidence in expert systems.

Truth maintenance System:

To choose their actions, reasoning programs must be able to make assumptions and subsequently revise their
beliefs when discoveries contradict these assumptions.

The Truth Maintenance System (TMS) is a problem solver subsystem for performing these functions by recording
and maintaining the reasons for program beliefs. Such recorded reasons are useful in constructing explanations of
program actions and in guiding the course of action of a problem solver

TMS are another form of knowledge representation which is best visualized in terms of graphs.

It stores the latest truth value of any predicate. The system is developed with the idea that the truthfulness of a predicate can change with time, as new knowledge is added or existing knowledge is updated.

It keeps a record showing which items of knowledge is currently believed or disbelieved.

Bayesian probabilistic inference

Bayes' theorem can be used to calculate the probability that a certain event will occur or that a certain proposition is true.

P(B) is called the prior probability of B. P(B|A), as well as being called the conditional probability, is also known as the posterior probability of B. For a conjunction of events:

P(A ∧ B) = P(A|B)P(B)

Note that due to the commutativity of ∧, we can also write:

P(A ∧ B) = P(B|A)P(A)

Hence, we can deduce: P(B|A)P(A) = P(A|B)P(B)

This can then be rearranged to give Bayes' theorem:

P(B|A) = P(A|B)P(B) / P(A)

Over a set of mutually exclusive and exhaustive hypotheses Hi, given some evidence E, Bayes' theorem states:

P(Hi|E) = P(E|Hi)P(Hi) / Σj P(E|Hj)P(Hj)

This reads: given some evidence E, the probability that hypothesis Hi is true equals the probability of E given Hi, times the a priori probability of Hi, divided by the sum, over the set of all hypotheses, of the probability of E given each hypothesis times the probability of that hypothesis. The set of all hypotheses must be mutually exclusive and exhaustive.

Thus, to diagnose an illness from medical evidence, we must know the prior probabilities of finding each symptom and also the probability of having an illness given that certain symptoms are observed.

Bayesian networks are also called Belief Networks or Probabilistic Inference Networks.
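A small numeric sketch of Bayes' theorem over two mutually exclusive, exhaustive hypotheses; the diagnosis labels and probabilities are hypothetical:

```python
def bayes(prior, likelihood):
    """Bayes' theorem over mutually exclusive, exhaustive hypotheses:
    P(Hi|E) = P(E|Hi) P(Hi) / sum_j P(E|Hj) P(Hj)."""
    evidence = sum(likelihood[h] * prior[h] for h in prior)   # P(E)
    return {h: likelihood[h] * prior[h] / evidence for h in prior}

# Hypothetical diagnosis: prior P(illness) and likelihood P(fever | illness)
prior = {'flu': 0.1, 'cold': 0.9}
likelihood = {'flu': 0.9, 'cold': 0.2}
posterior = bayes(prior, likelihood)
print(round(posterior['flu'], 2))  # → 0.33
```

Even though fever is much more likely under flu, the low prior keeps the posterior P(flu | fever) at 0.09 / 0.27 = 1/3.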

DEMPSTER- SHAFER THEORY


The Dempster-Shafer theory, also known as the theory of belief functions, is a generalization of the Bayesian
theory of subjective probability.
Whereas the Bayesian theory requires probabilities for each question of interest, belief functions allow us to base
degrees of belief for one question on probabilities for a related question. These degrees of belief may or may not
have the mathematical properties of probabilities;
The Dempster-Shafer theory owes its name to work by A. P. Dempster (1968) and Glenn Shafer (1976), but the theory came to the attention of AI researchers in the early 1980s, when they were trying to adapt probability theory to expert systems.
Dempster-Shafer degrees of belief resemble the certainty factors in MYCIN, and this resemblance suggested that
they might combine the rigor of probability theory with the flexibility of rule-based systems.
The Dempster-Shafer theory remains attractive because of its relative flexibility.
The Dempster-Shafer theory is based on two ideas:
1. the idea of obtaining degrees of belief for one question from subjective probabilities for a related question;
2. Dempster's rule for combining such degrees of belief when they are based on independent items of evidence.

Benefits of Dempster-Shafer Theory:


Allows a proper distinction between reasoning and decision taking
No modeling restrictions (e.g. DAGs)
It properly represents partial and total ignorance
Ignorance is quantified:
o a low degree of ignorance means
- high confidence in results
- enough information available for taking decisions
o a high degree of ignorance means
- low confidence in results
- more information is needed before taking decisions
What is Fuzzy Set ?


• The word "fuzzy" means "vagueness". Fuzziness occurs when the boundary of a piece of information is not clear-
cut.
• Fuzzy sets have been introduced by Lotfi A. Zadeh (1965) as an extension of the classical notion of set.
• Classical set theory allows the membership of the elements in the set in binary terms, a bivalent condition - an
element either belongs or does not belong to the set.
Fuzzy set theory permits the gradual assessment of the membership of elements in a set, described with the aid of a
membership function valued in the real unit interval [0, 1].
• Example:
Words like young, tall, good, or high are fuzzy.
− There is no single quantitative value which defines the term young.
− For some people, age 25 is young, and for others, age 35 is young.
− The concept young has no clean boundary.
− Age 1 is definitely young and age 100 is definitely not young;

Fuzzy logic is derived from fuzzy set theory dealing with reasoning that is approximate rather than precisely
deduced from classical predicate logic.
−Fuzzy logic is capable of handling inherently imprecise concepts.
−Fuzzy logic allows in linguistic form the set membership values to imprecise concepts like "slightly", "quite" and
"very".
−Fuzzy set theory defines Fuzzy Operators on Fuzzy Sets.
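A membership function for a fuzzy set such as "young" can be sketched as a simple piecewise function; the breakpoints at ages 25 and 45 are hypothetical choices, since the concept has no clean boundary:

```python
def young(age):
    """Membership in the fuzzy set 'young': fully young up to 25,
    not at all young past 45, and a gradual linear transition between."""
    if age <= 25:
        return 1.0
    if age >= 45:
        return 0.0
    return (45 - age) / 20   # degree of membership in [0, 1]

# Gradual membership instead of a crisp yes/no boundary
print(young(20), young(35), young(50))  # → 1.0 0.5 0.0
```

Fuzzy operators can then be defined on such memberships in the standard way, for example union as the maximum and intersection as the minimum of the membership degrees.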

38. Depth limited search, bidirectional search, comparing uniform search strategies

DEPTH LIMITED SEARCH


The embarrassing failure of depth-first search in infinite state spaces can be alleviated by supplying depth-first search with a predetermined depth limit l. That is, nodes at depth l are treated as if they have no successors. This approach is called depth-limited search. The depth limit solves the infinite-path problem. Unfortunately, it also introduces an additional source of incompleteness if we choose l < d, that is, if the shallowest goal is beyond the depth limit. (This is likely when d is unknown.) Depth-limited search will also be nonoptimal if we choose l > d. Its time complexity is O(b^l) and its space complexity is O(bl). Depth-first search can be viewed as a special case of depth-limited search with l = ∞.
Depth limited search (limit)
Let fringe be a list containing the initial state
Loop
if fringe is empty return failure
Node ← remove-first (fringe)
if Node is a goal
then return the path from initial state to Node
else if depth of Node = limit return cutoff
else add generated nodes to the front of fringe
End Loop
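The pseudocode above can be made concrete in Python; this recursive sketch distinguishes failure (no solution at any depth) from cutoff (the limit was hit), and the example graph is hypothetical:

```python
def depth_limited_search(graph, start, goal, limit):
    """Recursive depth-limited search: returns a path to the goal,
    'cutoff' if the depth limit was reached, or None on failure."""
    if start == goal:
        return [start]
    if limit == 0:
        return 'cutoff'                  # limit reached: node treated as leaf
    cutoff = False
    for succ in graph.get(start, []):
        result = depth_limited_search(graph, succ, goal, limit - 1)
        if result == 'cutoff':
            cutoff = True
        elif result is not None:
            return [start] + result      # prepend this node to the found path
    return 'cutoff' if cutoff else None

# Hypothetical graph: the goal G sits at depth 2
graph = {'S': ['A', 'B'], 'A': ['G'], 'B': []}
print(depth_limited_search(graph, 'S', 'G', 2))  # → ['S', 'A', 'G']
print(depth_limited_search(graph, 'S', 'G', 1))  # → cutoff
```

The cutoff result is what iterative deepening exploits: when a search returns cutoff rather than failure, it is worth retrying with a larger limit.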

BIDIRECTIONAL SEARCH

It searches forward from initial state and backward from goal state till both meet to identify a common state. The
path from initial state is concatenated with the inverse path from the goal state. Each search is done only up to half
of the total path.
The idea behind bidirectional search is to run two simultaneous searches, one forward from the initial state and the other backward from the goal, hoping that the two searches meet in the middle (Figure 3.20). The motivation is that b^(d/2) + b^(d/2) is much less than b^d, or in the figure, the area of the two small circles is less than the area of one big circle centered on the start and reaching to the goal.
Bidirectional search is implemented by replacing the goal test with a check to see whether the frontiers of the two
searches intersect; if they do, a solution has been found. It is important to realize that the first such solution found
may not be optimal, even if the two searches are both breadth-first; some additional search is required to make sure
there isn’t another short-cut across the gap.

Fig. 4.2: A schematic view of a bidirectional search that is about to succeed when a branch from the start node meets
a branch from the goal node.
Comparison:

Created by Aman Adarsh (CSE), JISCE
