Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.


Transcript of Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Page 1: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Lecture 02 – Part B: Problem Solving by Searching

Search Methods: Informed (Heuristic) Search

Page 2: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Using problem-specific knowledge to aid searching

Without incorporating knowledge into the search, one has no bias (i.e., a preference) over the search space.

Without a bias, one is forced to look everywhere to find the answer. Hence, the complexity of uninformed search is intractable.


Search everywhere!!

Page 3: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Using problem-specific knowledge to aid searching

With knowledge, one can search the state space as if given "hints" when exploring a maze. Heuristic information in search = hints.

Leads to dramatic speed up in efficiency.

[Figure: a search tree rooted at A with children B, C, D, E and descendants F through O; with hints, exploration is restricted to a single subtree – "Search only in this subtree!!"]

Page 4: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

More formally, why do heuristic functions work?

In any search problem where there are at most b choices at each node and the goal lies at depth d, a naive search algorithm would, in the worst case, have to examine around O(b^d) nodes before finding a solution (exponential time complexity).

Heuristics improve the efficiency of search algorithms by reducing the effective branching factor from b to (ideally) a low constant b* such that 1 <= b* << b.
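One standard way to make b* precise (this formulation comes from Chapter 3 of Russell and Norvig, cited in the references at the end of this lecture, rather than from the slide itself): if a search expands a total of N nodes to find a solution at depth d, the effective branching factor b* is the branching factor that a uniform tree of depth d would need in order to contain N + 1 nodes, i.e.

N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d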

Page 5: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Heuristic Functions

A heuristic function is a function f(n) that gives an estimate of the "cost" of getting from node n to the goal state – so that the node with the least estimated cost among all possible choices can be selected for expansion first.

Three approaches to defining f (a short code sketch of all three follows this list):

1. f measures the value of the current state (its "goodness").

2. f measures the estimated cost of getting to the goal from the current state: f(n) = h(n), where h(n) is an estimate of the cost to get from n to a goal.

3. f measures the estimated cost of getting to the goal state from the current state plus the cost of the existing path to it. Often, in this case, we decompose f: f(n) = g(n) + h(n), where g(n) is the cost to get to n (from the initial state).
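As a small illustration of how the three choices differ (this sketch is not from the slides; the node fields and helper functions named here are assumptions):

# Illustration only: a search node is assumed to carry the state it represents
# and the cost g of the path used to reach it; state_value and h are assumed helpers.

def f_value(node):
    return state_value(node.state)    # Approach 1: "goodness" of the state itself

def f_greedy(node):
    return h(node.state)              # Approach 2: f(n) = h(n)

def f_astar(node):
    return node.g + h(node.state)     # Approach 3: f(n) = g(n) + h(n)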

Page 6: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Approach 1: f Measures the Value of the Current State

Usually the case when solving optimization problems: finding a state such that the value of the metric f is optimized.

Often, in these cases, f could be a weighted sum of a set of component values.

N-Queens example: the number of queens under attack.

Data mining example: the "predictiveness" (a.k.a. accuracy) of a discovered rule.

Page 7: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Approach 2: f Measures the Cost to the Goal

A state X would be better than a state Y if the estimated cost of getting from X to the goal is lower than that of Y – because X would then be closer to the goal than Y.

• 8–Puzzle

h1: The number of misplaced tiles (squares with number).

h2: The sum of the distances of the tiles from their goal positions.

Page 8: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Approach 3: f measures the total cost of the solution path (Admissible Heuristic Functions)

A heuristic function f(n) = g(n) + h(n) is admissible if h(n) never overestimates the cost to reach the goal. Admissible heuristics are "optimistic": "the cost is not that much …"

g(n) is the exact cost to reach node n from the initial state. Therefore, f(n) never overestimates the true cost of a solution that passes through node n.

Theorem: such a search is optimal if h(n) is admissible – i.e., the search using an admissible h(n) returns an optimal solution.

Given h2(n) ≥ h1(n) for all n (both admissible), it is always at least as efficient to use h2(n): h2 is more realistic than h1 (more informed), though both are optimistic.

Page 9: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Traditional informed search strategies

Greedy Best First Search: "always chooses the successor node with the best f value," where f(n) = h(n). We choose the node that appears nearest to the final state among all possible choices.

A* Search: best-first search using an "admissible" heuristic function f that also takes into account the current cost g. Always returns the optimal solution path.

Page 10: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Greedy Best-First Search
eval-fn: f(n) = h(n)

Informed (Heuristic) Search Strategies

Page 11: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

An implementation of Best First Search

function BEST-FIRST-SEARCH(problem, eval-fn) returns a solution sequence, or failure
    queuing-fn ← a function that sorts nodes by eval-fn
    return GENERIC-SEARCH(problem, queuing-fn)
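For concreteness, here is a minimal Python sketch of the same idea (my own illustration, not code from the lecture): a priority queue ordered by eval_fn plays the role of queuing-fn, and the problem object is assumed to expose initial_state, is_goal(state), and successors(state) yielding (state, step-cost) pairs.

import heapq
from itertools import count

def best_first_search(problem, eval_fn):
    """Generic best-first tree search; greedy and A* differ only in eval_fn."""
    tie = count()  # tie-breaker so heapq never has to compare states
    start = problem.initial_state
    frontier = [(eval_fn(start, 0), next(tie), start, 0, [start])]
    while frontier:
        _, _, state, g, path = heapq.heappop(frontier)
        if problem.is_goal(state):
            return path, g  # solution path and its cost
        for succ, step_cost in problem.successors(state):
            g2 = g + step_cost
            heapq.heappush(frontier, (eval_fn(succ, g2), next(tie), succ, g2, path + [succ]))
    return None  # failure

# Greedy best-first search: eval_fn = lambda state, g: h(state)
# A* search:                eval_fn = lambda state, g: g + h(state)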

Page 12: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Greedy Best First Search

State   Heuristic: h(n)
A       366
B       374
C       329
D       244
E       253
F       178
G       193
H       98
I       0

f(n) = h(n) = straight-line distance heuristic

[Figure: search graph. Start node A, Goal node I. Edges with step costs: A-B 75, A-C 118, A-E 140, C-D 111, E-F 99, E-G 80, F-I 211, G-H 97, H-I 101.]


Page 22: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Greedy Best First Search

Search tree: the root A (Start), h = 366.

Page 23: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Greedy Search: Tree Search

Expanding A (Start): successors B [374] (edge 75), C [329] (edge 118), E [253] (edge 140). E has the lowest h.

Page 24: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Greedy Search: Tree Search

Expanding E [253]: successors F [178] (edge 99), G [193] (edge 80) and A [366] (back to the start). F has the lowest h.

Page 25: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Greedy Search: Tree Search

Expanding F [178]: successors I [0] (edge 211, the Goal) and E [253] (back edge). The goal I is selected next.

Page 26: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Greedy Search: Tree Search

The greedy solution path: dist(A-E-F-I) = 140 + 99 + 211 = 450.

Page 27: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Greedy Best First Search : Optimal ?

[Figure: the same search graph, f(n) = h(n) = straight-line distance heuristic.]

dist(A-E-G-H-I) = 140 + 80 + 97 + 101 = 418

The path A-E-G-H-I costs only 418, whereas the path found by greedy search (A-E-F-I) costs 450 – so greedy best-first search is not optimal.

Page 28: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Greedy Best First Search: Optimal?

Now suppose the heuristic value of C is 250 instead of 329 (marked ** in the table); all other values are unchanged.

State h(n): A 366, B 374, ** C 250, D 244, E 253, F 178, G 193, H 98, I 0

f(n) = h(n) = straight-line distance heuristic

[Figure: the same search graph as before.]

Page 29: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Greedy Search: Tree Search

Search tree: the root A (Start). (Heuristic values as on the previous slide, with h(C) = 250.)

Page 30: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Greedy Search: Tree Search

Expanding A (Start): successors B [374] (edge 75), C [250] (edge 118), E [253] (edge 140). C now has the lowest h.

Page 31: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Greedy Search: Tree Search

Expanding C [250]: successor D [244] (edge 111).

Page 32: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Greedy Search: Tree Search

Expanding D [244]: successor C [250] again – an infinite branch!

Page 33: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Greedy Search: Tree Search

Expanding C [250] again: successor D [244]. The search keeps alternating between C and D – an infinite branch!


Page 35: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Greedy Search: Time and Space Complexity ?

[Figure: the same search graph as before (Start A, Goal I).]

• Greedy search is not optimal.

• Greedy search is incomplete without systematic checking of repeated states.

• In the worst case, the time and space complexity of greedy search are both O(b^m), where b is the branching factor and m the maximum path length.

Page 36: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

A* Search

eval-fn: f(n) = g(n) + h(n)

Informed Search Strategies

Page 37: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

A* (A Star)

Greedy search minimizes a heuristic h(n), an estimate of the cost from a node n to the goal state. Although greedy search can considerably cut the search time (it is efficient), it is neither optimal nor complete.

Uniform Cost Search (UCS) minimizes the cost g(n) from the initial state to n. UCS is optimal and complete, but not efficient.

New strategy: combine Greedy Search and UCS to get an efficient algorithm which is complete and optimal.

Page 38: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

A* (A Star)

A* uses an evaluation function which combines g(n) and h(n): f(n) = g(n) + h(n)

g(n) is the exact cost to reach node n from the initial state (the cost so far up to node n).

h(n) is an estimate of the remaining cost to reach the goal.
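Using the hypothetical best_first_search sketch from the Best-First Search slide above, greedy search and A* then differ only in the evaluation function passed in (again an illustration, not code from the lecture; h is an assumed heuristic function over states):

def greedy_best_first_search(problem, h):
    # Greedy best-first ignores g(n) and sorts the frontier by h(n) only.
    return best_first_search(problem, eval_fn=lambda state, g: h(state))

def a_star_search(problem, h):
    # A* is best-first search with eval_fn(n) = g(n) + h(n).
    return best_first_search(problem, eval_fn=lambda state, g: g + h(state))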

Page 39: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

A* (A Star)

[Figure: a path from the start node to node n has cost g(n); the estimated remaining cost from n to the goal is h(n); f(n) = g(n) + h(n).]

Page 40: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

A* Search

State h(n): A 366, B 374, C 329, D 244, E 253, F 178, G 193, H 98, I 0

f(n) = g(n) + h(n), where g(n) is the exact cost to reach node n from the initial state.

[Figure: the same search graph as before (Start A, Goal I).]

Page 41: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

A* Search: Tree Search

Search tree: the root A (Start), f = 0 + 366 = 366.

Page 42: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

A* Search: Tree Search

Expanding A (Start): B [449] = 75 + 374, C [447] = 118 + 329, E [393] = 140 + 253. E has the lowest f.

Page 43: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

A* Search: Tree Search

Expanding E [393]: F [417] = (140 + 99) + 178 and G [413] = (140 + 80) + 193. G now has the lowest f.

Page 44: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

A* Search: Tree Search

Expanding G [413]: H [415] = (220 + 97) + 98. H now has the lowest f.

Page 45: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

A* Search: Tree Search

Expanding H [415]: I (Goal) [418] = (317 + 101) + 0. The cheapest frontier node, however, is now F [417], so the goal is not selected yet.

Page 46: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

A* Search: Tree Search

Expanding F [417]: a second copy of the goal, I [450] = (239 + 211) + 0. The cheapest node on the frontier is now the goal node I [418], so the search can stop: A* returns the path A-E-G-H-I with cost 418.


Page 49: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Conditions for Optimality: Admissibility

An admissible heuristic never overestimates the cost to reach the goal.

Straight-line distance h_SLD is obviously an admissible heuristic.

Admissibility / Monotonicity: an admissible heuristic h is consistent (or satisfies the monotone restriction) if for every node N and every successor N' of N:

h(N) ≤ c(N,N') + h(N')   (triangle inequality)

A consistent heuristic is admissible.

A* is optimal if h is admissible or consistent.

[Figure: triangle formed by N, its successor N', and the goal, with sides c(N,N'), h(N) and h(N').]
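As a concrete illustration of the monotone restriction, here is a small sketch (my own, not from the lecture) that checks consistency of a heuristic over an explicit edge list, using the toy graph and h-values from the greedy/A* trace above:

# Toy graph from the earlier slides: Start A, Goal I, undirected edges with costs.
EDGES = {("A", "B"): 75, ("A", "C"): 118, ("A", "E"): 140, ("C", "D"): 111,
         ("E", "F"): 99, ("E", "G"): 80, ("F", "I"): 211, ("G", "H"): 97,
         ("H", "I"): 101}
H = {"A": 366, "B": 374, "C": 329, "D": 244, "E": 253,
     "F": 178, "G": 193, "H": 98, "I": 0}

def is_consistent(edges, h):
    """Check h(N) <= c(N, N') + h(N') for every edge, in both directions."""
    for (n, m), cost in edges.items():
        if h[n] > cost + h[m] or h[m] > cost + h[n]:
            return False
    return True

print(is_consistent(EDGES, H))  # prints True for this particular heuristic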

Page 50: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

A* with systematic checking for repeated states

An Example: Map Searching

A* Algorithm

Page 51: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

SLD Heuristic h(): Straight-Line Distances to Bucharest

Town        SLD      Town        SLD
Arad        366      Mehadia     241
Bucharest     0      Neamt       234
Craiova     160      Oradea      380
Dobreta     242      Pitesti      98
Eforie      161      Rimnicu     193
Fagaras     178      Sibiu       253
Giurgiu      77      Timisoara   329
Hirsova     151      Urziceni     80
Iasi        226      Vaslui      199
Lugoj       244      Zerind      374

We can use straight-line distances as an admissible heuristic, as they will never overestimate the cost to the goal: there is no shorter route between two cities than the straight-line distance.
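If one wanted to feed this heuristic to the earlier search sketches, the table transcribes directly into a Python dictionary (values copied from the table above, nothing added):

# Straight-line distances (SLD) to Bucharest, from the table above.
SLD_TO_BUCHAREST = {
    "Arad": 366, "Bucharest": 0, "Craiova": 160, "Dobreta": 242, "Eforie": 161,
    "Fagaras": 178, "Giurgiu": 77, "Hirsova": 151, "Iasi": 226, "Lugoj": 244,
    "Mehadia": 241, "Neamt": 234, "Oradea": 380, "Pitesti": 98, "Rimnicu": 193,
    "Sibiu": 253, "Timisoara": 329, "Urziceni": 80, "Vaslui": 199, "Zerind": 374,
}

def h_sld(city):
    return SLD_TO_BUCHAREST[city]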

Page 52: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Distances Between Cities

[Figure: road map of Romania with the distances between neighbouring cities. The route distances used in the examples include Arad-Zerind 75, Arad-Timisoara 118, Arad-Sibiu 140, Sibiu-Fagaras 99, Sibiu-Rimnicu 80, Rimnicu-Pitesti 97, Pitesti-Bucharest 101 and Fagaras-Bucharest 211.]

Page 53: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Map Searching

Greedy Search in Action …

Page 54: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

This slide animates a greedy search of the Romanian map featured in the previous slide. Note: throughout the animation all nodes are labelled with f(n) = h(n); we use the abbreviations f and h to keep the notation simpler.

We begin with the initial state of Arad. The straight-line distance from Arad to Bucharest (the h value) is 366 miles, giving a total value of f = h = 366 miles. Expanding Arad produces Zerind (f = 374), Sibiu (f = 253) and Timisoara (f = 329).

Once Arad is expanded we look for the node with the lowest cost. Sibiu has the lowest value for f (the straight-line distance from Sibiu to the goal state is 253 miles, giving a total of 253 miles), so we expand Sibiu, which produces Fagaras (f = 178), Oradea (f = 380) and Rimnicu (f = 193).

We now expand Fagaras (the node with the lowest value of f), which produces Bucharest (f = 0) – the goal.

Page 55: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Map Searching

A* in Action …

Page 56: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

This slide animates an A* search of the Romanian map featured in the previous slide. Note: throughout the animation all nodes are labelled with f(n) = g(n) + h(n); we use the abbreviations f, g and h to keep the notation simpler.

We begin with the initial state of Arad. The cost of reaching Arad from Arad (the g value) is 0 miles, and the straight-line distance from Arad to Bucharest (the h value) is 366 miles. This gives a total value of f = g + h = 0 + 366 = 366 miles. Expanding Arad produces Zerind (f = 75 + 374 = 449), Sibiu (f = 140 + 253 = 393) and Timisoara (f = 118 + 329 = 447).

Once Arad is expanded we look for the node with the lowest cost. Sibiu has the lowest value for f (the cost to reach Sibiu from Arad is 140 miles, and the straight-line distance from Sibiu to the goal state is 253 miles, giving a total of 393 miles), so we expand Sibiu: Fagaras (f = 239 + 178 = 417), Oradea (f = 291 + 380 = 671) and Rimnicu (f = 220 + 193 = 413).

We now expand Rimnicu (the node with the lowest value of f): Pitesti (f = 317 + 98 = 415) and Craiova (f = 366 + 160 = 526).

Once Rimnicu is expanded we look for the node with the lowest cost. Pitesti has the lowest value for f (the cost to reach Pitesti from Arad is 317 miles, and the straight-line distance from Pitesti to the goal state is 98 miles, giving a total of 415 miles), so we expand Pitesti, which reveals Bucharest with f = 418 + 0 = 418.

We have just expanded a node (Pitesti) that revealed Bucharest, but it has a cost of 418. If there is any lower-cost node (and in this case there is one cheaper node, Fagaras, with a cost of 417), then we need to expand it, in case it leads to a better solution to Bucharest than the 418 solution we have already found.

In actual fact, the algorithm will not yet recognise that we have found Bucharest. It just keeps expanding the lowest-cost nodes (based on f) until it finds a goal state AND that goal has the lowest value of f. So we must now move to Fagaras and expand it, which reveals a second copy of Bucharest, Bucharest(2), with f = 450 + 0 = 450.

Once Fagaras is expanded we look for the lowest-cost node. We now have two Bucharest nodes. One of these (Arad – Sibiu – Rimnicu – Pitesti – Bucharest) has an f value of 418; the other (Arad – Sibiu – Fagaras – Bucharest(2)) has an f value of 450. We therefore move to the first Bucharest node and expand it.

We have now arrived at Bucharest. As this is the lowest-cost node AND the goal state, we can terminate the search. The solution returned by the A* search (Arad – Sibiu – Rimnicu – Pitesti – Bucharest) is in fact the optimal solution.

Page 57: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Iterative Deepening A*

Informed Search Strategies

Page 58: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Iterative Deepening A* (IDA*)

Use f(N) = g(N) + h(N) with an admissible and consistent h.

Each iteration is a depth-first search with a cutoff on the value of f of expanded nodes.

Page 59: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

IDA* Algorithm

In the first iteration, we determine an "f-cost limit" (cut-off value) f(n0) = g(n0) + h(n0) = h(n0), where n0 is the start node.

We expand nodes using the depth-first algorithm and backtrack whenever f(n) for an expanded node n exceeds the cut-off value.

If this search does not succeed, determine the lowest f-value among the nodes that were visited but not expanded.

Use this f-value as the new cut-off limit and do another depth-first search.

Repeat this procedure until a goal node is found.
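A compact Python sketch of this procedure (an illustration, not code from the lecture; the successors, h and is_goal callables are assumed, and the check that avoids revisiting states already on the current path is a small addition the slide does not mention):

import math

def ida_star(start, h, successors, is_goal):
    """Minimal IDA* sketch. successors(state) yields (next_state, step_cost)."""
    bound = h(start)                      # first cutoff: f(n0) = h(n0)
    path = [start]

    def dfs(g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f                      # exceeded the cutoff: report this f
        if is_goal(node):
            return "FOUND"
        minimum = math.inf                # lowest f seen beyond the cutoff
        for succ, cost in successors(node):
            if succ in path:              # avoid trivial cycles on the current path
                continue
            path.append(succ)
            result = dfs(g + cost, bound)
            if result == "FOUND":
                return "FOUND"
            minimum = min(minimum, result)
            path.pop()
        return minimum

    while True:
        result = dfs(0, bound)
        if result == "FOUND":
            return path                   # solution path, start .. goal
        if result == math.inf:
            return None                   # no solution
        bound = result                    # new cutoff: lowest f that exceeded the old one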

Page 60: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

8-Puzzle

f(N) = g(N) + h(N) with h(N) = number of misplaced tiles

[Figure: 8-puzzle search tree, Cutoff = 4; node f-values shown: 4, 6.]

Page 61: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

8-Puzzle

f(N) = g(N) + h(N) with h(N) = number of misplaced tiles

[Figure: Cutoff = 4; node f-values shown so far: 4, 4, 6, 6.]

Page 62: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

8-Puzzle

f(N) = g(N) + h(N) with h(N) = number of misplaced tiles

[Figure: Cutoff = 4; node f-values shown so far: 4, 4, 6, 6, 5.]

Page 63: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

8-Puzzle

f(N) = g(N) + h(N) with h(N) = number of misplaced tiles

[Figure: Cutoff = 4; node f-values shown so far: 4, 4, 6, 6, 5, 5.]

Page 64: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

8-Puzzle

f(N) = g(N) + h(N) with h(N) = number of misplaced tiles

[Figure: Cutoff = 4; node f-values shown so far: 4, 4, 6, 6, 5, 5, 6.]

Page 65: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

8-Puzzle

f(N) = g(N) + h(N) with h(N) = number of misplaced tiles

[Figure: second iteration, Cutoff = 5; node f-values shown: 4, 6.]

Page 66: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

8-Puzzle

f(N) = g(N) + h(N) with h(N) = number of misplaced tiles

[Figure: Cutoff = 5; node f-values shown so far: 4, 4, 6, 6.]

Page 67: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

8-Puzzle

f(N) = g(N) + h(N) with h(N) = number of misplaced tiles

[Figure: Cutoff = 5; node f-values shown so far: 4, 4, 6, 6, 5.]

Page 68: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

8-Puzzle

f(N) = g(N) + h(N) with h(N) = number of misplaced tiles

[Figure: Cutoff = 5; node f-values shown so far: 4, 4, 6, 6, 5, 7.]

Page 69: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

8-Puzzle

f(N) = g(N) + h(N) with h(N) = number of misplaced tiles

[Figure: Cutoff = 5; node f-values shown so far: 4, 4, 6, 6, 5, 7, 5.]

Page 70: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

8-Puzzle

f(N) = g(N) + h(N) with h(N) = number of misplaced tiles

[Figure: Cutoff = 5; node f-values shown so far: 4, 4, 6, 6, 5, 7, 5, 5.]


Page 72: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

The Effect of Heuristic Accuracy on Performance

An Example: 8-puzzle

Heuristic Evaluation

Page 73: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Admissible heuristics. E.g., for the 8-puzzle:
h1(n) = number of misplaced tiles
h2(n) = total Manhattan distance (i.e., the number of squares each tile is from its desired location)

[Figure: an example start state S and the goal state of the 8-puzzle.]

h1(S) = ?  h2(S) = ?

Page 74: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Admissible heuristics. E.g., for the 8-puzzle:
h1(n) = number of misplaced tiles
h2(n) = total Manhattan distance (i.e., the number of squares each tile is from its desired location)

h1(S) = 8
h2(S) = 3 + 1 + 2 + 2 + 2 + 3 + 3 + 2 = 18
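A small sketch of these two heuristics in code (my own illustration; representing a state as a tuple of 9 values in row-major order with 0 for the blank, and the particular goal layout, are assumptions):

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)  # assumed goal layout, row-major, 0 = blank

def h1(state, goal=GOAL):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    """Total Manhattan distance of the tiles from their goal positions."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = goal.index(tile)
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total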

Page 75: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Dominance: if h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1 and h2 is better for search.

Why? Typical search costs (average number of nodes expanded):

d = 12:  IDS = 3,644,035 nodes;  A*(h1) = 227 nodes;  A*(h2) = 73 nodes
d = 24:  IDS = too many nodes;  A*(h1) = 39,135 nodes;  A*(h2) = 1,641 nodes

Self Study

Page 76: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

When to Use Search Techniques

Use search techniques when:

• The search space is small, and there are no other available techniques, or it is not worth the effort to develop a more efficient technique.

• The search space is large, and there is no other available technique, and there exist "good" heuristics.

Page 77: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

Conclusions

Frustration with uninformed search led to the idea of using domain-specific knowledge in a search, so that one intelligently explores only the relevant part of the search space that has a good chance of containing the goal state. These new techniques are called informed (heuristic) search strategies.

Even though heuristics improve the performance of informed search algorithms, they can still be time-consuming, especially for large problem instances.

Page 78: Lecture 02 – Part B Problem Solving by Searching Search Methods : Informed (Heuristic) search.

References

Chapter 3 of "Artificial Intelligence: A Modern Approach" by Stuart Russell and Peter Norvig.

Chapter 4 of "Artificial Intelligence Illuminated" by Ben Coppin.