
Informed search algorithms

Chapter 4

Modified by Vali Derhami


Material

• Chapter 4


Outline

• Best-first search
• Greedy best-first search
• A* search
• Heuristics
• Local search algorithms
• Hill-climbing search
• Simulated annealing search
• Local beam search
• Genetic algorithms


Review: Tree search

• A search strategy is defined by picking the order of node expansion


Best-first search

• Idea: use an evaluation function f(n) for each node
  – an estimate of "desirability"
  – expand the most desirable unexpanded node

• Implementation: order the nodes in the fringe in decreasing order of desirability (see the sketch below)

• Special cases:
  – greedy best-first search
  – A* search

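A minimal generic sketch of this ordering idea in Python. The names (`best_first_search`, `successors`, `goal_test`) and the priority-queue implementation are illustrative assumptions, not from the slides; the fringe is a min-heap keyed on f(n), so the most desirable node is always popped first:

```python
import heapq
import itertools

def best_first_search(start, goal_test, successors, f):
    """Generic best-first search: always expand the fringe node with the lowest f."""
    counter = itertools.count()          # tie-breaker so the heap never compares states
    fringe = [(f(start), next(counter), start, [start])]
    explored = set()
    while fringe:
        _, _, state, path = heapq.heappop(fringe)
        if goal_test(state):
            return path
        if state in explored:
            continue
        explored.add(state)
        for succ in successors(state):
            if succ not in explored:
                heapq.heappush(fringe, (f(succ), next(counter), succ, path + [succ]))
    return None                          # no solution found
```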


Romania with step costs in km


Greedy best-first search

• Evaluation function f(n) = h(n) (heuristic) = estimate of cost from n to goal

• e.g., hSLD(n) = straight-line distance from n to Bucharest

• Greedy best-first search expands the node that appears to be closest to the goal
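A sketch of greedy best-first search on the Romania example, reusing `best_first_search` from above. The `roads` and `h_sld` tables below carry only the cities relevant to the Arad-to-Bucharest run, with the usual step costs and straight-line distances for this example:

```python
# Straight-line distances to Bucharest, h_SLD (the usual values for this example).
h_sld = {'Arad': 366, 'Sibiu': 253, 'Fagaras': 176,
         'Rimnicu Vilcea': 193, 'Pitesti': 100, 'Bucharest': 0}

# Road segments with step costs in km (subset of the Romania map).
roads = {'Arad': {'Sibiu': 140},
         'Sibiu': {'Fagaras': 99, 'Rimnicu Vilcea': 80},
         'Fagaras': {'Bucharest': 211},
         'Rimnicu Vilcea': {'Pitesti': 97},
         'Pitesti': {'Bucharest': 101},
         'Bucharest': {}}

path = best_first_search('Arad',
                         goal_test=lambda s: s == 'Bucharest',
                         successors=lambda s: roads[s],
                         f=lambda s: h_sld[s])        # greedy: f(n) = h(n)
print(path)  # ['Arad', 'Sibiu', 'Fagaras', 'Bucharest'] -- 450 km, not the optimal 418
```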

Greedy best-first search example

(figure sequence)


Properties of greedy best-first search

• Complete? No – can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt → …

• Time? O(b^m), but a good heuristic can give dramatic improvement

• Space? O(b^m) – keeps all nodes in memory

• Optimal? No


A* search

• Idea: avoid expanding paths that are already expensive

• Evaluation function f(n) = g(n) + h(n), as computed in the sketch below
  – g(n) = cost so far to reach n
  – h(n) = estimated cost from n to goal
  – f(n) = estimated total cost of path through n to goal
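A compact A* sketch under the same `roads`/`h_sld` tables, again with illustrative names. Each fringe entry carries its own g, and a state is re-queued whenever a cheaper path to it is found, so the more expensive path is the one discarded:

```python
import heapq
import itertools

def a_star(start, goal_test, neighbors, h):
    """A* search: expand the fringe node with the lowest f(n) = g(n) + h(n)."""
    counter = itertools.count()
    fringe = [(h(start), 0, next(counter), start, [start])]   # (f, g, tie, state, path)
    best_g = {start: 0}
    while fringe:
        f, g, _, state, path = heapq.heappop(fringe)
        if goal_test(state):
            return path, g
        for succ, cost in neighbors(state).items():
            g2 = g + cost
            if g2 < best_g.get(succ, float('inf')):           # keep only the cheaper path
                best_g[succ] = g2
                heapq.heappush(fringe,
                               (g2 + h(succ), g2, next(counter), succ, path + [succ]))
    return None, float('inf')

path, cost = a_star('Arad', lambda s: s == 'Bucharest',
                    lambda s: roads[s], lambda s: h_sld[s])
print(path, cost)  # ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'] 418
```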


A* search example

(figure sequence)


Admissible heuristics

• A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n.

• Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal

• An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic

• Example: hSLD(n) (never overestimates the actual road distance)


Optimality of A* (proof)

• Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.

• f(n) = g(n) + h(n)

• f(G2) = g(G2)   since h(G2) = 0

• g(G2) > g(G)    since G2 is suboptimal

• f(G) = g(G)     since h(G) = 0

• f(G2) > f(G) = C*   from above


Optimality of A* (proof, continued)

• Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.

• f(G2) > C* = f(G)            (1)  from above
• h(n) ≤ h*(n)                 since h is admissible
• g(n) + h(n) ≤ g(n) + h*(n) = C*
• f(n) ≤ C* = f(G)             (2)

From (1) and (2), f(G2) > f(n), so A* will never select G2 for expansion.


Non-optimality of A* in graph search

• Note that A* with graph search is not guaranteed to be optimal, because in graph search the optimal path to a repeated state may be discarded.

• Reminder: in the graph-search method described earlier, when a repeated state is detected, the newly discovered path to it is deleted.

• Solutions:
  – Discard whichever of the two paths to the node is more expensive.
  – Arrange the search so that the optimal path to any repeated state is guaranteed to be the first one followed, as in uniform-cost search.


Consistent heuristics

• A heuristic is consistent if for every node n and every successor n' of n generated by any action a,
  h(n) ≤ c(n,a,n') + h(n')

• That is, a node's heuristic value never exceeds the step cost to a successor plus the successor's heuristic value.

• Every consistent heuristic is also admissible.

• If h is consistent, we have
  f(n') = g(n') + h(n') = g(n) + c(n,a,n') + h(n') ≥ g(n) + h(n) = f(n)

• i.e., f(n) is non-decreasing along any path.

• Theorem: If h(n) is consistent, A* using GRAPH-SEARCH is optimal.
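A small sketch that tests the consistency inequality edge by edge, assuming the `roads` and `h_sld` tables introduced with the greedy example:

```python
def is_consistent(h, graph):
    """Check h(n) <= c(n, a, n') + h(n') for every edge (n, n') in the graph."""
    return all(h[n] <= cost + h[succ]
               for n, edges in graph.items()
               for succ, cost in edges.items())

print(is_consistent(h_sld, roads))   # True: straight-line distance is consistent
```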


Optimality of A*

• A* expands nodes in order of increasing f value

• Contour i contains all nodes with f = f_i, where f_i < f_{i+1}

• Gradually adds "f-contours" of nodes


Properties of A*

• Complete? Yes (unless there are infinitely many nodes with f ≤ f(G))

• Optimal? Yes

• Among algorithms that extend search paths from the root, no other optimal algorithm is guaranteed to expand fewer nodes than A*.

• Time? Exponential

• Space? Keeps all nodes in memory; in practice the algorithm runs out of memory long before it runs out of time.


Memory-bounded heuristic search

• The algorithms presented so far have memory problems.

• Two algorithms for reducing memory consumption:
  – Iterative Deepening A* (IDA*)
  – SMA*: similar to A*, but with a bounded queue size


Iterative Deepening A* (IDA*)

• Difference from iterative deepening search: the cutoff used is the cost f = g + h, not the depth.

• Each iteration of the algorithm is a depth-first search that proceeds up to the current f-cost limit, as in the sketch below.
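A minimal IDA* sketch under the same interface as the A* code above. Each call to `dfs` is a depth-first search cut off at the current f-limit; the next limit is the smallest f value that exceeded the old one:

```python
def ida_star(start, goal_test, neighbors, h):
    """IDA*: iterative deepening with an f = g + h cutoff instead of a depth cutoff."""
    def dfs(path, g, limit):
        node = path[-1]
        f = g + h(node)
        if f > limit:
            return f, None                    # cutoff reached; f seeds the next limit
        if goal_test(node):
            return f, list(path)
        next_limit = float('inf')
        for succ, cost in neighbors(node).items():
            if succ not in path:              # avoid cycles along the current path
                t, found = dfs(path + [succ], g + cost, limit)
                if found:
                    return t, found
                next_limit = min(next_limit, t)
        return next_limit, None

    limit = h(start)
    while True:
        limit, found = dfs([start], 0, limit)
        if found:
            return found
        if limit == float('inf'):
            return None                       # no solution exists

print(ida_star('Arad', lambda s: s == 'Bucharest',
               lambda s: roads[s], lambda s: h_sld[s]))
```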


Simplified Memory-Bounded A* (SMA*)

• Uses all of the available memory.

• Expands nodes for as long as memory allows.

• When memory is full, it deletes the worst leaf node (the one with the highest f value) from memory and backs that node's f value up to its parent.

• Question: what happens if all the leaf nodes have the same f value?

• SMA* is complete if any solution is reachable, and optimal if the optimal solution is reachable (within the available memory).


SMA* Example (memory limited to 3 nodes)


SMA* Code

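The original listing is not reproduced in this transcript. Below is a much-simplified sketch that keeps only two of SMA*'s ingredients: a fringe bounded at `max_nodes` entries, and dropping the worst leaf (highest f) when memory fills. Real SMA* additionally keeps the search tree in memory, backs forgotten f values up to parent nodes, and regenerates forgotten subtrees on demand; none of that is modeled here.

```python
def sma_star_sketch(start, goal_test, neighbors, h, max_nodes=3):
    """Memory-bounded best-first search in the spirit of SMA* (heavily simplified)."""
    fringe = [(h(start), 0, start, [start])]          # (f, g, state, path)
    while fringe:
        fringe.sort(key=lambda e: e[0])               # tiny fringe; sorting is fine
        f, g, state, path = fringe.pop(0)
        if goal_test(state):
            return path, g
        for succ, cost in neighbors(state).items():
            if succ in path:
                continue
            g2 = g + cost
            entry = (max(f, g2 + h(succ)), g2, succ, path + [succ])  # f never decreases
            if len(fringe) >= max_nodes:
                worst = max(fringe, key=lambda e: e[0])
                if entry[0] >= worst[0]:
                    continue                          # no better than the worst leaf; forget it
                fringe.remove(worst)                  # memory full: drop the worst leaf
            fringe.append(entry)
    return None, float('inf')

print(sma_star_sketch('Arad', lambda s: s == 'Bucharest',
                      lambda s: roads[s], lambda s: h_sld[s], max_nodes=3))
# (['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'], 418)
```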


Admissible heuristics

E.g., for the 8-puzzle:

• h1(n) = number of misplaced tiles

• h2(n) = total Manhattan distance, i.e., the sum of the distances of the tiles from their goal positions (number of squares from the desired location of each tile)

• h1(S) = ?
• h2(S) = ?


Admissible heuristics

E.g., for the 8-puzzle:

• h1(n) = number of misplaced tiles

• h2(n) = total Manhattan distance (i.e., no. of squares from desired location of each tile)

• h1(S) = ? 8
• h2(S) = ? 3+1+2+2+2+3+3+2 = 18
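Both heuristics as Python sketches. The state encoding (a 9-tuple read row by row, 0 for the blank, tile t's goal square at index t) is an assumption, and S is the start state of the standard AIMA 8-puzzle figure, which yields exactly the values above:

```python
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)       # 0 is the blank; tile t belongs at index t

def h1(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL[i])

def h2(state):
    """Total Manhattan distance of the tiles from their goal squares."""
    dist = 0
    for i, t in enumerate(state):
        if t != 0:
            dist += abs(i // 3 - t // 3) + abs(i % 3 - t % 3)   # goal index of tile t is t
    return dist

S = (7, 2, 4, 5, 0, 6, 8, 3, 1)          # start state of the standard AIMA figure
print(h1(S), h2(S))                       # 8 18
```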


Dominance

• Effective branching factor b*:
  N + 1 = 1 + b* + (b*)^2 + ··· + (b*)^d
  where N is the total number of nodes generated by A* and d is the solution depth; b* is the branching factor that a uniform tree of depth d would have to have in order to contain N + 1 nodes (a numeric sketch for computing b* appears at the end of this slide).

• If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1, and h2 is better for search.

• Typical search costs (average number of nodes expanded):
  – d = 12: IDS = 3,644,035 nodes; A*(h1) = 227 nodes; A*(h2) = 73 nodes (b* = 1.24)
  – d = 24: IDS = too many nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes (b* = 1.26)
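A numeric sketch that recovers b* from N and d by bisection on the polynomial above. Note that the slide's b* figures are averages over many problem instances, so a single (N, d) pair reproduces them only approximately:

```python
def effective_branching_factor(N, d, tol=1e-6):
    """Solve N + 1 = 1 + b + b^2 + ... + b^d for b = b* by bisection."""
    def tree_size(b):
        return sum(b ** i for i in range(d + 1))   # nodes in a uniform tree of depth d
    lo, hi = 1.0, float(N + 1)                     # b* always lies in this interval
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if tree_size(mid) < N + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(effective_branching_factor(14, 3), 3))    # 2.0: a full binary tree of depth 3
print(round(effective_branching_factor(73, 12), 2))   # ~1.26 for the d = 12, A*(h2) row
```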