
Problem solving: State space search and control strategies

Introduction
Problem solving
State space
Difference between AI and traditional search methods
Methods of problem solving:
  General purpose methods
  Special purpose methods
General problem solving
Methods of modeling problems:
  Production system
  State space search
How to achieve the goal? Control strategies

Production system
Helps AI programs carry out the search process more conveniently in state-space problems.
It consists of:
  Start state and final state
  Knowledge representation schemes
  Production rules, whose left side (current state) determines the applicability of the rule and whose right side (new state) describes the action to be performed if the rule is applied
  Control strategies

Example: the water jug problem; formulate the production rules to solve it.
Problem statement: two jugs, a 5-gallon and a 3-gallon, with no measuring marks on them. There is an endless supply of water through a tap. Our task is to get exactly 4 gallons of water into the 5-gallon jug.

Solution
The state space for this problem can be described as the set of ordered pairs of integers (X, Y), where X represents the number of gallons of water in the 5-gallon jug and Y the number in the 3-gallon jug.
Start state is (0, 0).
Goal state is (4, N) for any value N <= 3.

Possible operations:
  Fill the 5-gallon jug from the tap; empty the 5-gallon jug by throwing the water down the drain.
  Fill the 3-gallon jug from the tap; empty the 3-gallon jug by throwing the water down the drain.
  Pour water from the 5-gallon jug into the 3-gallon jug, either some of it or enough to fill the 3-gallon jug.
  Pour water from the 3-gallon jug into the 5-gallon jug, either some of it or all of it.

Production rules for water jug problem.

Solution path -1
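As an illustration of these production rules, here is a minimal Python sketch (not from the original slides): the state is the pair (x, y) of gallons in the 5-gallon and 3-gallon jugs, the rule names and the solve() helper are our own, and breadth-first search is used to find one solution path.

from collections import deque

# A state is (x, y): gallons in the 5-gallon and 3-gallon jugs.
# Each production rule is (name, condition, action) over the current state.
RULES = [
    ("fill 5g jug",   lambda x, y: x < 5, lambda x, y: (5, y)),
    ("fill 3g jug",   lambda x, y: y < 3, lambda x, y: (x, 3)),
    ("empty 5g jug",  lambda x, y: x > 0, lambda x, y: (0, y)),
    ("empty 3g jug",  lambda x, y: y > 0, lambda x, y: (x, 0)),
    ("pour 5g into 3g", lambda x, y: x > 0 and y < 3,
        lambda x, y: (max(0, x - (3 - y)), min(3, y + x))),
    ("pour 3g into 5g", lambda x, y: y > 0 and x < 5,
        lambda x, y: (min(5, x + y), max(0, y - (5 - x)))),
]

def solve(start=(0, 0), goal_x=4):
    """Breadth-first search over the production system; returns rule names."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        (x, y), path = frontier.popleft()
        if x == goal_x:
            return path
        for name, cond, act in RULES:
            if cond(x, y):
                nxt = act(x, y)
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, path + [name]))
    return None

print(solve())  # e.g. ['fill 5g jug', 'pour 5g into 3g', 'empty 3g jug', ...]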

State space search


Using this search, one can find a path from a start state to a goal state.
It consists of four components:
  A set S containing the start states of the problem
  A set G containing the goal states of the problem
  A set of nodes
  A set of arcs connecting the nodes

Problem Formulation
A problem space consists of:
  The current state of the world (initial state)
  A description of the actions we can take to transform one state of the world into another (operators)
A solution consists of the goal state, or a path to the goal state.

Missionaries and cannibals problem


Problem statement: Three missionaries and three cannibals are on the left bank of a river.
There is one boat, which can hold one or two people.
Find a way to get everyone to the right bank without ever leaving a group of missionaries in one place outnumbered by the cannibals in that place.

Possible operators (missionaries M and cannibals C in the boat):
2M0C, 1M1C, 0M2C, 1M0C, 0M1C

Solution
The state space for this problem can be described as the set of ordered pairs of the left and right banks of the river, (L, R), where each bank is represented as a list [nM, mC, B]:
  n = number of missionaries M
  m = number of cannibals C
  B = boat

Start state: ([3M, 3C, 1B], [0M, 0C, 0B]), or [331:000]; 1B means the boat is present and 0B means the boat is absent.
Goal state: ([0M, 0C, 0B], [3M, 3C, 1B]), or [000:331].

Missionaries and Cannibals: step-by-step solution

(331:000): initial state
A missionary and a cannibal cross      -> (220:111)
One missionary returns                 -> (321:010)
Two cannibals cross                    -> (300:031)
A cannibal returns                     -> (311:020)
Two missionaries cross                 -> (110:221)
A missionary and a cannibal return     -> (221:110)
Two missionaries cross                 -> (020:311)
A cannibal returns                     -> (031:300)
Two cannibals cross                    -> (010:321)
A cannibal returns                     -> (021:310)
The last two cannibals cross           -> (000:331): goal state

Solution = the sequence of left-bank states along the path:
[(3,3,1) (2,2,0) (3,2,1) (3,0,0) (3,1,1) (1,1,0) (2,2,1) (0,2,0) (0,3,1) (0,1,0) (0,2,1) (0,0,0)]
Cost = 11 crossings

Solution
Start state: ([3M, 3C, 1B], [0M, 0C, 0B]); 1B means the boat is present and 0B means the boat is absent.
Any state: ([n1 M, m1 C, _], [n2 M, m2 C, _]) with the constraints n1 >= m1 or n1 = 0; n2 >= m2 or n2 = 0; n1 + n2 = 3; m1 + m2 = 3; the boat can be on either side.
Goal state: ([0M, 0C, 0B], [3M, 3C, 1B]). A sketch of this state check and of the resulting successor moves is given below.
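A minimal Python sketch (ours, not from the slides) of the state check and successor generation implied by these constraints; the left-bank state (m, c, b) and the move names follow the 2M0C/1M1C/... convention above.

# State: (m, c, b) = missionaries, cannibals, boat (1 = present) on the LEFT bank.
MOVES = {"2M0C": (2, 0), "1M1C": (1, 1), "0M2C": (0, 2),
         "1M0C": (1, 0), "0M1C": (0, 1)}

def valid(m, c):
    """Missionaries are never outnumbered on either bank (or are absent)."""
    if not (0 <= m <= 3 and 0 <= c <= 3):
        return False
    left_ok = (m == 0) or (m >= c)
    right_ok = ((3 - m) == 0) or ((3 - m) >= (3 - c))
    return left_ok and right_ok

def successors(state):
    m, c, b = state
    direction = -1 if b == 1 else +1       # the boat carries people away from its bank
    for name, (dm, dc) in MOVES.items():
        nm, nc = m + direction * dm, c + direction * dc
        if valid(nm, nc):
            yield name, (nm, nc, 1 - b)

print(list(successors((3, 3, 1))))  # the moves applicable from the start state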

Production rules

Solution path

Problem Formulation of 8 puzzle problem


It has a 3x3 grid with 8 numbered tiles (1 to 8) arranged on it in random order, plus one empty cell.
At any point, a tile adjacent to the empty cell can move into it, creating a new empty cell.
We have to rearrange the tiles so that the goal state is reached from the start state.

Partial search tree for Eight puzzle problem

Problem Formulation 8-Puzzle Problem


[Figure: a sample initial state and the goal state of the 8-puzzle, and a partial search tree of the successor states generated from the initial state]

Operators: slide blank up, slide blank down, slide blank left, slide blank right

Solution: sb-down, sb-left, sb-up, sb-right, sb-down. Path cost: 5 steps to reach the goal. A sketch of the slide-blank operators follows.
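The slide-blank operators can be sketched in Python as follows (our illustration); the board is assumed to be a tuple of nine values read row by row, with 0 standing for the blank.

# Board: tuple of 9 values read row by row, 0 marks the blank cell.
MOVES = {"sb-up": -3, "sb-down": +3, "sb-left": -1, "sb-right": +1}

def successors(board):
    """Yield (operator, new_board) for every legal slide of the blank."""
    i = board.index(0)
    row, col = divmod(i, 3)
    for name, delta in MOVES.items():
        if name == "sb-up" and row == 0:
            continue
        if name == "sb-down" and row == 2:
            continue
        if name == "sb-left" and col == 0:
            continue
        if name == "sb-right" and col == 2:
            continue
        j = i + delta
        new = list(board)
        new[i], new[j] = new[j], new[i]    # swap blank with the adjacent tile
        yield name, tuple(new)

goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)
for op, b in successors(goal):
    print(op, b)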

Control Strategies
There are two directions in which a search can proceed:
  Data-driven search, i.e. forward chaining (8-puzzle)
  Goal-driven search, i.e. backward chaining (NIM game playing)

Characteristics of the problem


Type of problem:
  Ignorable (proving mathematical theorems)
  Recoverable (water jug, single-player games)
  Irrecoverable (chess, snakes and ladders, two-player games)
Decomposability of the problem (integration)
Role of knowledge (rules, facts)
Consistency of the problem
Requirement of the solution:
  Absolute vs. relative solution
  Any-path vs. best-path problem

Basic search algorithms


Uninformed (blind) / exhaustive searches: breadth-first, depth-first, depth-limited, iterative deepening, and bidirectional search
Informed (heuristic) searches, where the search is guided by an evaluation function: branch and bound, hill climbing, greedy best-first, A*, IDA*, and beam search
Constraint satisfaction

Blind search
Types of blind search:
  Breadth-first search
  Depth-first search
  Depth-first iterative deepening (DFID)
  Bidirectional search

Breadth first search


Expands all the states one step away from the start state, then all the states two steps away, and so on, until a goal state is reached.
It is implemented using two lists:
  OPEN list: the states that are yet to be expanded
  CLOSED list: the states that have already been expanded

Breadth First Search

Application 1: Given the following state space (tree search), give the sequence of visited nodes when using BFS (assume that node O is the goal state).

[Figure: search tree rooted at A with children B, C, D, E; B has children F, G; C has child H; D has children I, J; G has children K, L; I has child M; J has child N; L has child O]

Visiting order, level by level:
A
A, B, C, D, E
A, B, C, D, E, F, G, H, I, J
A, B, C, D, E, F, G, H, I, J, K, L, M, N
Goal state O is reached next.

The returned solution is the sequence of operators in the path: A, B, G, L, O

Water Jug problem using BFS

What criteria are used to compare different search techniques?

Completeness: Is the technique guaranteed to find an answer (if there is one)?
Optimality: Does it find the highest-quality solution when there are several solutions to the problem?
Time complexity: How long does it take to find a solution?
Space complexity: How much memory does it take to find a solution?

Time and space complexity are measured in terms of:
  The (maximum) branching factor b: the maximum number of nodes created when a node is expanded
  The depth d of the path to a goal
  The maximum length m of any path in the state space

Breadth First Search (BFS)


Complete? Yes.
Optimal? Yes (with unit step costs).
Time complexity: O(b^d)
Space complexity: O(b^d)

Algorithm (BFS)
Input: START and GOAL states
Local variables: OPEN, CLOSED, STATE-X, SUCCs, FOUND
Output: Yes or No
Method:
  Initialize the OPEN list with START, CLOSED = empty, FOUND = false;
  While (OPEN is not empty and FOUND = false) do
  {
    Remove the first state from OPEN and call it STATE-X;
    Put STATE-X at the front of the CLOSED list (maintained as a stack);
    If STATE-X = GOAL then FOUND = true
    else
    {
      Perform the EXPAND operation on STATE-X, producing a list of SUCCs;
      Remove from SUCCs any states that are already in the CLOSED list;
      Append SUCCs at the end of the OPEN list; /* queue */
    }
  } /* end of while */
  If FOUND = true then return Yes else return No
  Stop
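A runnable Python version of the algorithm above (our sketch); the expand function is supplied by the caller, and the small tree at the end is our reconstruction of the example from Application 1.

from collections import deque

def bfs(start, goal, expand):
    """Breadth-first search following the OPEN/CLOSED scheme above.
    `expand(state)` returns the list of successor states."""
    open_list = deque([start])      # queue of states to be expanded
    closed = []                     # states already expanded (front = most recent)
    while open_list:
        state_x = open_list.popleft()
        closed.insert(0, state_x)
        if state_x == goal:
            return True             # "Yes"
        succs = [s for s in expand(state_x)
                 if s not in closed and s not in open_list]
        open_list.extend(succs)     # append at the end: FIFO queue
    return False                    # "No"

# Example tree (children listed left to right), reconstructed from the slides.
TREE = {"A": ["B", "C", "D", "E"], "B": ["F", "G"], "C": ["H"], "D": ["I", "J"],
        "G": ["K", "L"], "I": ["M"], "J": ["N"], "L": ["O"]}
print(bfs("A", "O", lambda s: TREE.get(s, [])))  # True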

Depth First Search (DFS)

Application 2: Given the same state space (tree search), give the sequence of visited nodes when using DFS (assume that node O is the goal state).

Visiting order, always expanding the deepest node first (left to right):
A
A, B
A, B, F
A, B, F, G
A, B, F, G, K
A, B, F, G, K, L
A, B, F, G, K, L, O: goal state

The returned solution is the sequence of operators in the path: A, B, G, L, O

Depth First Search (DFS)


Main idea: Expand the node at the deepest level (breaking ties left to right).

Complete? No (yes on finite trees with no loops).
Optimal? No.
Time complexity: O(b^m), where m is the maximum depth.
Space complexity: O(bm), i.e. linear in the maximum depth.
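For comparison with the BFS code earlier, a minimal recursive DFS sketch (ours); the tree fragment below is taken from the running example.

def dfs(state, goal, expand, path=None, visited=None):
    """Depth-first search, expanding the deepest node first (left to right)."""
    path = (path or []) + [state]
    visited = visited or set()
    if state == goal:
        return path
    visited.add(state)
    for child in expand(state):
        if child not in visited:
            found = dfs(child, goal, expand, path, visited)
            if found:
                return found
    return None

TREE = {"A": ["B", "C", "D", "E"], "B": ["F", "G"], "G": ["K", "L"], "L": ["O"]}
print(dfs("A", "O", lambda s: TREE.get(s, [])))  # ['A', 'B', 'G', 'L', 'O']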

Depth First Iterative Deepening Search (DFID)

DFID combines the benefits of BFS and DFS: like DFS, its memory requirements are very modest (O(d)); like BFS, it is complete when the branching factor is finite.

In general, iterative deepening is the preferred uninformed search method when there is a large search space and the depth of the solution is not known.

Depth First Iterative Deepening Search

Given the same state space (tree search), give the sequence of visited nodes when using IDS. Depth-limited search (DLS) is repeated with limits 0, 1, 2, 3, 4.

DLS with bound = 0: A; failure.
DLS with bound = 1: A, B, C, D, E; failure.
DLS with bound = 2: A, B, F, G, C, H, D, I, J, E; failure.
DLS with bound = 3: A, B, F, G, K, L, C, H, D, I, M, J, N, E; failure.
DLS with bound = 4: A, B, F, G, K, L, O: goal state.

The returned solution is the sequence of operators in the path: A, B, G, L, O

Depth First Iterative Deepening Search


Advantage: It gives the best (shallowest) solution by combining the benefits of BFS and DFS.
Disadvantage: It performs wasted (repeated) computation at the shallower depths before reaching the goal depth. A sketch is given below.
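A compact sketch (ours) of DFID built from a depth-limited DFS, in the same style as the earlier code; the example tree fragment is again our reconstruction.

def dls(state, goal, expand, limit):
    """Depth-limited DFS: returns the path to the goal, or None within this limit."""
    if state == goal:
        return [state]
    if limit == 0:
        return None
    for child in expand(state):
        found = dls(child, goal, expand, limit - 1)
        if found:
            return [state] + found
    return None

def iddfs(start, goal, expand, max_depth=20):
    for limit in range(max_depth + 1):      # limit = 0, 1, 2, ...
        path = dls(start, goal, expand, limit)
        if path:
            return path
    return None

TREE = {"A": ["B", "C", "D", "E"], "B": ["F", "G"], "G": ["K", "L"], "L": ["O"]}
print(iddfs("A", "O", lambda s: TREE.get(s, [])))  # ['A', 'B', 'G', 'L', 'O'], found at limit 4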

Bi-directional Search (BDS)


Main idea: Start searching from both the initial state and the goal state, and meet in the middle.

Complete? Yes.
Optimal? Yes.
Time complexity: O(b^(d/2)), where d is the depth of the solution.
Space complexity: O(b^(d/2)), where d is the depth of the solution.

Bi-directional Search (BDS)


Advantages:
The merit of bidirectional search is its speed: the sum of the time taken by the two searches (forward and backward) is much less than the O(b^d) complexity of a single search.
It requires less memory.
Disadvantages:
Implementation is difficult because additional logic must be included to decide which search tree to extend at each step.
The goal state must be known in advance.
The algorithm must be efficient enough to find the intersection of the two search trees.

Basic Search Algorithms


COMPARISON OF SEARCH ALGORITHMS

Comparison of search algorithms


b: branching factor; d: depth of the solution; m: maximum depth; l: depth limit

Travelling Salesman Problem


The above-mentioned searches are blind and not of much use in real-life applications. We need intelligent searches that take relevant problem information into account and find a solution faster.
Statement: In the TSP, one is required to find the shortest route that visits every city exactly once and returns to the starting point. Assume that there are n cities and that the distance between each pair of cities is given.

Travelling Salesman Problem

A brute-force search would require (n-1)! paths to be examined. As the number of cities grows, the time required becomes prohibitively large. This phenomenon is called combinatorial explosion.

Travelling Salesman Problem


A simple improvement (branch and bound) consists of:
A) Start generating complete paths, keeping track of the shortest path found so far.
B) Stop exploring any path as soon as its partial length becomes greater than the shortest path length found so far, as sketched below.
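A small Python sketch (ours) of this branch and bound idea for the TSP; the 4-city distance matrix is a made-up example, and the tour always starts from city 0.

def tsp_branch_and_bound(dist):
    """Generate tours depth-first, abandoning a partial tour as soon as its
    length exceeds the best complete tour found so far. `dist` is a matrix."""
    n = len(dist)
    best_len, best_tour = float("inf"), None

    def extend(tour, length):
        nonlocal best_len, best_tour
        if length >= best_len:               # bound: prune this partial path
            return
        if len(tour) == n:                   # complete: close the tour
            total = length + dist[tour[-1]][tour[0]]
            if total < best_len:
                best_len, best_tour = total, tour
            return
        for city in range(n):
            if city not in tour:
                extend(tour + [city], length + dist[tour[-1]][city])

    extend([0], 0)
    return best_tour, best_len

D = [[0, 10, 15, 20], [10, 0, 35, 25], [15, 35, 0, 30], [20, 25, 30, 0]]
print(tsp_branch_and_bound(D))  # ([0, 1, 3, 2], 80)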

Travelling Salesman Problem

The table shows the possible paths generated using the modified approach.
Continue till all the paths have been explored; in this case there are 4! = 24 possible paths.


Therefore, some kind of heuristic (rule of thumb) may be applied. We need intelligent methods that can make use of problem knowledge and improve the search time.

Informed/Heuristic Search Strategies

Heuristic Search Strategies


A heuristic technique is a criterion for determining which among several alternatives will be the most effective for achieving some goal.
This technique improves the efficiency of the search process. It no longer guarantees to find the best solution, but it almost always finds a very good solution.

Heuristic Search Strategies


Branch and bound (uniform cost search)
Hill climbing
Beam search
Greedy best-first
A*
IDA*

Basic Search Algorithms Uninformed Search


BRANCH AND BOUND/UNIFORM COST SEARCH (UCS)

Branch and Bound(Uniform cost search)


In the branch and bound search method, a cost function g(X) is designed that assigns the expense of the path from the start node to the current node X obtained by applying a sequence of operators.
While generating the search space, the least-cost path obtained so far is expanded at each iteration, until we reach the goal state.
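A minimal Python sketch (ours) of branch and bound / uniform cost search using a priority queue ordered by g; the weighted graph at the end is hypothetical and only for illustration.

import heapq

def uniform_cost_search(start, goal, neighbors):
    """Expand the least-cost path so far; `neighbors(n)` yields (cost, next_node)."""
    frontier = [(0, start, [start])]          # (g, node, path)
    best_g = {start: 0}
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for step_cost, nxt in neighbors(node):
            new_g = g + step_cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g, nxt, path + [nxt]))
    return None, float("inf")

# Hypothetical weighted graph for illustration.
GRAPH = {"S": [(2, "A"), (5, "B")], "A": [(1, "C"), (7, "G")],
         "B": [(3, "G")], "C": [(3, "G")]}
print(uniform_cost_search("S", "G", lambda n: GRAPH.get(n, [])))  # (['S', 'A', 'C', 'G'], 6)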

Uniform Cost Search (UCS)

[Figure: UCS expansion on a small weighted graph; at each step the node with the least path cost g(n) is expanded, and the goal state is reached with path cost g(n) = 6]

Since the shortest (least-cost) path is always chosen for extension, the path that first reaches the goal is certain to be optimal, but the search is not guaranteed to find the solution quickly.
In the branch and bound method, if g(X) = 1 for all operators, it degenerates to simple breadth-first search.

Hill Climbing
The hill climbing search algorithm (also known as greedy local search) uses a loop that continually moves in the direction of increasing (or decreasing) values, that is, uphill (or downhill).

It terminates when it reaches a peak where no neighbor has a higher (or lower) value.
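A minimal hill climbing sketch in Python (ours); the neighbor function and the toy objective f(x) = -(x - 3)^2 are made-up illustrations.

def hill_climb(start, neighbors, value, maximize=True):
    """Move to the best neighbor while it improves on the current state;
    stop at a peak (or trough), which may only be a local optimum."""
    current = start
    while True:
        candidates = neighbors(current)
        if not candidates:
            return current
        best = max(candidates, key=value) if maximize else min(candidates, key=value)
        better = value(best) > value(current) if maximize else value(best) < value(current)
        if not better:
            return current               # no neighbor improves: local optimum
        current = best

# Toy example: maximize f(x) = -(x - 3)^2 over integer states.
f = lambda x: -(x - 3) ** 2
print(hill_climb(0, lambda x: [x - 1, x + 1], f))  # 3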

Hill Climbing in Action

[Figure: evaluation (cost) plotted against states; the current solution moves step by step toward lower cost until it stops at a local minimum rather than the global minimum]

Problems with Hill climbing


The search process may reach a position that is not a solution but from which no move improves the solution. This happens when we have reached:
a) a local maximum, b) a plateau, or c) a ridge.

A) Local maximum: a state that is better than all its neighbors, but not better than some other states that are farther away.

B) Plateau: a flat area of the search space where all the neighboring states have the same value, so it is not possible to determine the best direction in which to move.

C) Ridge: a long, thin region of high land with low land on either side. When looking in one of the four directions, finding the right direction can be very tricky, as one can fall off on either side.

Beam search
NARROWING THE WIDTH OF THE BREADTH-FIRST SEARCH

BEAM SEARCH
Beam search is a heuristic search algorithm that explores a graph by expanding the most promising nodes in a limited set.
Beam search uses breadth-first search to build its search tree. At each level of the tree, it generates all successors of the states at the current level and sorts them in increasing order of heuristic cost, keeping only a pre-fixed number of best states (the beam width) at each level. The greater the beam width, the fewer states are pruned; with an infinite beam width, no states are pruned and beam search is identical to breadth-first search.
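A short Python sketch (ours) of beam search as described above; the tree, heuristic values and beam width below are hypothetical.

def beam_search(start, goal, expand, h, width=2):
    """Breadth-first level by level, but keep only the `width` states with the
    lowest heuristic cost h at each level."""
    level = [start]
    while level:
        if goal in level:
            return goal
        successors = [s for state in level for s in expand(state)]
        successors.sort(key=h)            # increasing heuristic cost
        level = successors[:width]        # prune everything beyond the beam
    return None

# Hypothetical tree and heuristic values for illustration.
TREE = {"S": ["A", "B", "C"], "A": ["D", "E"], "B": ["F"],
        "C": ["G1"], "E": ["GOAL"], "F": ["H"]}
H = {"S": 9, "A": 3, "B": 4, "C": 8, "D": 5, "E": 1,
     "F": 6, "G1": 7, "GOAL": 0, "H": 2}
print(beam_search("S", "GOAL", lambda s: TREE.get(s, []), H.get, width=2))  # 'GOAL'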

Beam search example:

Assume a pre-fixed WIDTH (for example, 2).
Perform breadth-first search, BUT only keep the WIDTH best new nodes, depending on the heuristic, at each new level; all other nodes are ignored.
Optimization: ignore leaf nodes that are not goal nodes.

[Figure: four levels of a beam search of width 2 starting from S; at each depth only the two nodes with the best heuristic values are retained and the rest are marked "ignore", until the goal node G with heuristic value 0.0 is reached]

Local Beam Search

[Figure: local beam search illustrated on a cost versus states landscape, tracking several current states in parallel at each step]

Best First search/Greedy search


Best first search is based on expanding the best partial path from the current node towards the goal node.

It should be noted that in hill climbing, sorting is done only on the successor nodes, whereas in best first search, sorting is done on the entire list of open nodes.

It is not guaranteed to find an optimal solution, but it generally finds some solution faster than the other methods.
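A minimal greedy best-first sketch in Python (ours); the graph and heuristic values are modelled on the example that follows, as reconstructed from the slides.

import heapq

def greedy_best_first(start, goal, expand, h):
    """Always expand the open node with the smallest heuristic value h(n);
    the entire OPEN list is kept ordered (here, in a priority queue)."""
    frontier = [(h(start), start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:                 # repeated-state check (needed for completeness)
            continue
        visited.add(node)
        for child in expand(node):
            heapq.heappush(frontier, (h(child), child, path + [child]))
    return None

# Graph and straight-line-distance-style heuristic, reconstructed from the example below.
GRAPH = {"A": ["B", "E"], "E": ["F", "G"], "G": ["H"], "H": ["I"], "F": ["I"]}
H = {"A": 366, "B": 374, "E": 253, "F": 178, "G": 193, "H": 98, "I": 0}
print(greedy_best_first("A", "I", lambda n: GRAPH.get(n, []), H.get))  # ['A', 'E', 'F', 'I']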

Greedy Search / Best First Search

Evaluation function: f(n) = h(n) = straight-line distance heuristic.

[Figure: search graph with start state A and goal state I; step costs A-B = 75, A-C = 118, A-E = 140, C-D = 111, E-G = 80, E-F = 99, G-H = 97, H-I = 101, F-I = 211; heuristic values h(A) = 366, h(B) = 374, h(C) = 329, h(D) = 244, h(E) = 253, h(F) = 178, h(G) = 193, h(H) = 98, h(I) = 0]

Greedy tree search always expands the open node with the smallest h value: from A it chooses E [253], then F [178], then reaches the goal I [0].
Heuristic cost of the path A-E-F-I = 253 + 178 + 0 = 431; actual distance of A-E-F-I = 140 + 99 + 211 = 450.

Greedy Search: optimal?
The alternative path A-E-G-H-I has actual distance 140 + 80 + 97 + 101 = 418, which is shorter than 450, so greedy search is not optimal.

Greedy Search: complete?
If the heuristic value of C is changed from 329 to 250, greedy tree search keeps alternating between C [250] and D [244] on an infinite branch, so greedy search is incomplete without systematic checking of repeated states.

Greedy Search: time and space complexity?
Greedy search is not optimal.
Greedy search is incomplete without systematic checking of repeated states.
In the worst case, the time and space complexity of greedy search are both O(b^m), where b is the branching factor and m the maximum path length.

Informed Search Strategies

A* SEARCH
Evaluation function: f(n) = g(n) + h(n)

A* (A Star)
Greedy search minimizes a heuristic h(n), the estimated cost from a node n to the goal state. Although greedy search can considerably cut the search time (it is efficient), it is neither optimal nor complete.
Uniform cost search minimizes the cost g(n) from the initial state to n. UCS is optimal and complete, but not efficient.
New strategy: combine greedy search and UCS to get an efficient algorithm that is both complete and optimal.

A* (A Star)
A* uses an evaluation function that combines g(n) and h(n):
f(n) = g(n) + h(n)
g(n) is the exact cost of the path from the start node to node n.
h(n) is an estimate of the remaining cost to reach the goal from n.
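A minimal A* sketch in Python (ours); the graph and heuristic values are modelled on the route-finding example below, as reconstructed from the slides.

import heapq

def astar(start, goal, neighbors, h):
    """A* search: expand the node with the smallest f(n) = g(n) + h(n).
    `neighbors(n)` yields (step_cost, next_node)."""
    frontier = [(h(start), 0, start, [start])]     # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for cost, nxt in neighbors(node):
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + h(nxt), new_g, nxt, path + [nxt]))
    return None, float("inf")

# Graph and heuristic reconstructed from the example below (A is the start, I the goal).
GRAPH = {"A": [(75, "B"), (118, "C"), (140, "E")], "C": [(111, "D")],
         "E": [(80, "G"), (99, "F")], "G": [(97, "H")], "H": [(101, "I")], "F": [(211, "I")]}
H = {"A": 366, "B": 374, "C": 329, "D": 244, "E": 253, "F": 178, "G": 193, "H": 98, "I": 0}
print(astar("A", "I", lambda n: GRAPH.get(n, []), H.get))  # (['A', 'E', 'G', 'H', 'I'], 418)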

A* Search

[Figure: the same search graph and straight-line-distance heuristic as in the greedy example above]

f(n) = g(n) + h(n), where g(n) is the exact cost to reach node n from the initial state.

A* Search: Tree Search

Expanding A puts C [118+329=447], E [140+253=393] and B [75+374=449] on the frontier; E has the lowest f and is expanded.
Expanding E adds G [140+80+193=413] and F [140+99+178=417]; G is expanded next.
Expanding G adds H [140+80+97+98=415]; H is expanded next.
Expanding H adds the goal I [140+80+97+101+0=418]; F [417] is still cheaper, so F is expanded and reaches I with f = 450.
Now I [418] has the lowest f value, so A* returns the optimal path A-E-G-H-I with cost 418.

A* Search
Let us consider an example of the eight puzzle solved by the A* algorithm. The evaluation function is f(x) = g(x) + h(x), where
h(x) = the number of tiles not in their goal position in the given state, and
g(x) = the depth of node x in the search tree.

[Figure: A* search tree for the 8-puzzle; starting from the initial state with f = 0 + 4, the blank is moved up, down, left or right, each node is labelled with its (g + h) value, and the node with the lowest f is expanded at each step until the goal state is reached]

Informed Search Strategies


ITERATIVE DEEPENING A*

Iterative Deepening A*:IDA*


IDA* is a combination of the DFID and A* algorithms.
Each iteration is a depth-first search with a cutoff on the f value of the expanded nodes.

Iterative Deepening A*:IDA*


Iterative deepening A* (IDA*) is similar to iterative-deepening depth-first search, but with the following modifications:
The depth bound is modified to be an f-limit:
1. Start with f-limit = h(start).
2. Prune any node if f(node) > f-limit.
3. The next f-limit is the minimum f value among the nodes pruned in the previous iteration, as in the sketch below.
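A compact IDA* sketch in Python (ours), following the three steps above; the graph and heuristic values reuse the earlier route-finding example as reconstructed from the slides.

def ida_star(start, goal, neighbors, h):
    """IDA*: repeated depth-first searches with an increasing f-limit."""
    def dfs(node, g, limit, path):
        f = g + h(node)
        if f > limit:
            return None, f                 # pruned; report its f for the next limit
        if node == goal:
            return path, f
        smallest = float("inf")
        for cost, nxt in neighbors(node):
            if nxt not in path:            # avoid trivial cycles
                found, value = dfs(nxt, g + cost, limit, path + [nxt])
                if found:
                    return found, value
                smallest = min(smallest, value)
        return None, smallest

    limit = h(start)                       # 1. start with f-limit = h(start)
    while True:
        found, value = dfs(start, 0, limit, [start])
        if found:
            return found
        if value == float("inf"):
            return None                    # nothing was pruned: goal unreachable
        limit = value                      # 3. next f-limit = min f of pruned nodes

GRAPH = {"A": [(75, "B"), (118, "C"), (140, "E")], "E": [(80, "G"), (99, "F")],
         "G": [(97, "H")], "H": [(101, "I")], "F": [(211, "I")]}
H = {"A": 366, "B": 374, "C": 329, "E": 253, "F": 178, "G": 193, "H": 98, "I": 0}
print(ida_star("A", "I", lambda n: GRAPH.get(n, []), H.get))  # ['A', 'E', 'G', 'H', 'I']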

Iterative Deepening A*: IDA*

[Figure: IDA* on a small search tree. First iteration (threshold = 5): the root's children have f values 1+5=6, 1+7=8, 1+3=4 and 1+8=9, and every node whose f exceeds the threshold is pruned. Second iteration (threshold = 6): the search goes one level deeper but still fails. Third iteration (threshold = 7): the goal is reached.]

Optimal solution by A*
Underestimation, overestimation, admissibility, monotonic function

UNDERESTIMATION/OVERESTIMATION
If we can guarantee that h never overestimates the actual cost from the current state to the goal, then the A* algorithm is guaranteed to find an optimal path to a goal, if one exists.
By overestimating we may find a worse solution, and the shortest path is not guaranteed.

A* algorithm
When h(n) = actual cost to goal: only nodes on the correct path are expanded, and the optimal solution is found.
When h(n) < actual cost to goal: additional nodes are expanded, but the optimal solution is still found.
When h(n) > actual cost to goal: the optimal solution can be overlooked.

ADMISSIBILITY
A search algorithm is admissible if, for any graph, it always terminates with an optimal path from the start state to the goal state.
We have seen earlier that if the heuristic function h underestimates the actual cost from the current state to the goal state, then the search is bound to give an optimal solution.
So we can say that A* always terminates with the optimal path when h is an admissible heuristic function.

Monotonic function
A heuristic function h is monotone if:
1. For all states Xi and Xj such that Xj is a successor of Xi, h(Xi) - h(Xj) <= cost(Xi, Xj), the actual cost of going from Xi to Xj.
2. h(Goal) = 0.

Conclusions
Frustration with uninformed search led to the idea of using domain-specific knowledge in a search, so that one can intelligently explore only the relevant part of the search space that has a good chance of containing the goal state. These new techniques are called informed (heuristic) search strategies.
Even though heuristics improve the performance of informed search algorithms, they are still time consuming, especially for large problem instances.

Constraint Satisfaction Problems (CSPs)


Constraint satisfaction problems (CSPs) are mathematical problems defined as a set of objects whose state must satisfy a number of constraints or limitations. CSPs often exhibit high complexity, requiring a combination of heuristics and combinatorial search methods to be solved in a reasonable time.

Examples of simple problems that can be modelled as constraint satisfaction problems:
  Eight queens puzzle
  Map colouring problem
  Sudoku, Futoshiki, Kakuro (Cross Sums), Numbrix, Hidato, and many other logic puzzles

Varieties of constraints
Unary constraints involve a single variable, e.g., SA ≠ green
Binary constraints involve pairs of variables, e.g., SA ≠ WA
Higher-order constraints involve 3 or more variables, e.g., cryptarithmetic column constraints

Example: Map-Coloring

Variables: WA, NT, Q, NSW, V, SA, T
Domains: Di = {red, green, blue}
Constraints: adjacent regions must have different colors, e.g., WA ≠ NT

Solutions are complete and consistent assignments, e.g.,
WA = red, NT = green, Q = red, NSW = green, V = red, SA = blue, T = green
A state may be incomplete, e.g., just WA = red.

Constraint satisfaction problem


A CSP is defined by:
  a set of variables
  a domain of possible values for each variable
  a set of constraints between variables
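A minimal backtracking sketch in Python (ours) for the map-colouring CSP above; the adjacency list is the usual one for the Australian map, and the variable ordering is fixed.

def backtrack(assignment, variables, domains, constraints):
    """Simple backtracking search for a CSP: pick an unassigned variable, try each
    value in its domain, and recurse while all constraints remain satisfied."""
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(check(assignment) for check in constraints):
            result = backtrack(assignment, variables, domains, constraints)
            if result:
                return result
        del assignment[var]
    return None

variables = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
domains = {v: ["red", "green", "blue"] for v in variables}
adjacent = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"), ("SA", "Q"),
            ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"), ("NSW", "V")]
constraints = [lambda a, x=x, y=y: a.get(x) is None or a.get(y) is None or a[x] != a[y]
               for x, y in adjacent]
print(backtrack({}, variables, domains, constraints))  # one complete, consistent assignment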

Example: Cryptarithmetic
(TWO + TWO = FOUR)
Variables: F, T, U, W, R, O, X1, X2, X3 (the Xi are the carry digits)
Domains: {0,1,2,3,4,5,6,7,8,9}
Constraints:
  Alldiff(F, T, U, W, R, O)
  O + O = R + 10*X1
  X1 + W + W = U + 10*X2
  X2 + T + T = O + 10*X3
  X3 = F, T ≠ 0, F ≠ 0

Real-world CSPs
Assignment problems, e.g., who teaches what class
Timetabling problems, e.g., which class is offered when and where?
Transportation scheduling
Factory scheduling
Notice that many real-world problems involve real-valued variables.
