Algorithms of Artificial Intelligence
Lecture 3: Search methods
E. Tyugu
Spring 2001
© Enn Tyugu

Page 2

Search

Search is a universal method of problem solving which can be applied when no other method of problem solving is applicable. People apply search constantly in their everyday life, without paying attention to it. Very little must be known in order to apply a general search algorithm in a formal setting of the search problem. However, the efficiency of search can be much improved if additional knowledge can be exploited to guide it. Search is present in some form in almost every AI program, and its efficiency is often critical to the performance of the whole program.

Page 3

Search problem

In its most general form, a search problem is a description of the condition which must be satisfied by its solution, together with a way to find candidates for the solution. The latter can be given in different forms. We shall use the predicate

    good(x)

to designate the condition for the solution. For generating the objects which are candidates to be tested, we can use the functions

    first()  -- finds an object when no other objects have been given
    next(x)  -- finds a new object, when an object x is known.

This already gives us the possibility to build a simple search algorithm:

A.2.1:
    x := first();
    until good(x) do x := next(x) od
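As an executable illustration, A.2.1 can be sketched in Python; the parameters first, next_ and good mirror the pseudocode functions and are supplied by the caller (the concrete example, multiples of 7, is ours, not from the lecture):

```python
def generate_and_test(first, next_, good):
    """Generate-and-test search, a direct transcription of A.2.1.

    first -- returns an initial candidate
    next_ -- returns a new candidate, given the current one
    good  -- the predicate a solution must satisfy
    The loop terminates only if a good candidate is reachable.
    """
    x = first()
    while not good(x):
        x = next_(x)
    return x

# Example: the first multiple of 7 greater than 100.
result = generate_and_test(lambda: 0,
                           lambda x: x + 1,
                           lambda x: x > 100 and x % 7 == 0)
print(result)  # 105
```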

Page 4

Search space

Another way to represent a search problem is to give the predicate good(x) and, besides that, the set of testable objects as a whole, leaving the development of the search operators to the designer of algorithms. The set of testable objects is called the search space, and the objects can be called states.

A.2.2:
    for x ∈ searchSpace do
        if good(x) then success(x) fi
    od;
    failure()
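A.2.2 maps naturally onto iteration over a finite collection; in this Python sketch, returning the element plays the role of success(x) and returning None the role of failure() (our convention, not the lecture's):

```python
def exhaustive_search(search_space, good):
    """Exhaustive search over a finite search space (A.2.2)."""
    for x in search_space:
        if good(x):
            return x      # success(x)
    return None           # failure()

print(exhaustive_search(range(10), lambda x: x * x == 49))  # 7
```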

Page 5

Narrowing the selection

• To improve the search, one can use a set called options, which contains only the objects which may be interesting to test at a certain step of the search, instead of the whole search space. This set is modified during the search by the procedure modify(options). The function select(options) performs the selection of a new object to test at each search step.

• Often options is a set of close neighbours of the current object x. Then we have local search.

A.2.3:
    initialize(options);
    while not empty(options) do
        x := select(options);
        if good(x) then success(x) else modify(options) fi
    od;
    failure()
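A Python sketch of A.2.3, with the set of options passed around explicitly; here modify also receives the rejected element as a second argument, a small deviation from the pseudocode made for clarity:

```python
def narrowed_search(initial_options, select, good, modify):
    """Search over a narrowed candidate set (A.2.3).

    select(options)    -- picks the object to test next
    modify(options, x) -- returns the updated option set after x failed
    """
    options = set(initial_options)
    while options:                   # not empty(options)
        x = select(options)
        if good(x):
            return x                 # success(x)
        options = modify(options, x)
    return None                      # failure()

# Example: test candidates in ascending order, discarding failures.
print(narrowed_search({3, 5, 8, 12}, min,
                      lambda x: x % 6 == 0,
                      lambda opts, x: opts - {x}))  # 12
```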

Page 6

Search graph and search tree

• A search space is often represented as a graph, with nodes representing the states of search and arcs leading from a state to all states which can be immediately reached from it. This is a search graph.

• If the information associated with each search state also determines the way in which the state was obtained, the search graph becomes a tree. This is a search tree.

• There is an interesting class of search spaces called and-or trees, whose states are search problems. And-or trees appear often in search related to games.

• Example: search in a labyrinth (explain the search tree and search graph for this case).

Page 7

And-or tree

• An and-or tree is a tree where each node represents a problem. Besides that, each node is either an and-node or an or-node.

• For solving the problem of an and-node, one has to solve the problems of all its immediate descendants. For solving the problem of an or-node, one has to solve the problem of one of its descendants.

• Problems associated with the terminal nodes (leaves) of an and-or tree are primitive, i.e. either immediately solvable or obviously unsolvable.

• Example: game of two persons.
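The solvability of an and-or tree follows from a simple recursion: an and-node needs all of its children solved, an or-node needs at least one. The Python sketch below (our data representation, not the lecture's) makes that precise:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """A node of an and-or tree: an "and"-node, an "or"-node,
    or a "leaf" whose problem is primitive."""
    kind: str                        # "and", "or", or "leaf"
    solvable: bool = False           # meaningful only for leaves
    children: List["Node"] = field(default_factory=list)

def solved(n: Node) -> bool:
    """A leaf is solved iff it is solvable; an and-node needs all
    children solved; an or-node needs at least one."""
    if n.kind == "leaf":
        return n.solvable
    if n.kind == "and":
        return all(solved(c) for c in n.children)
    return any(solved(c) for c in n.children)    # or-node
```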

Page 8

Breadth-first search

The following specific functions should work on a given search tree:

    next(p) -- changes p to the node next to p on the same layer of the search tree;
    down(p) -- changes p to the first node on the level following the layer of p in the search tree.

The program takes the root p of a search tree as an argument.

A.2.4: bf(p) =
    while not empty(p) do
        while not empty(p) do
            if good(p) then success(p) fi;
            next(p)
        od;
        down(p)
    od;
    failure()

Page 9

Breadth-first search order

[Figure: a search tree whose nodes are numbered in the order breadth-first search visits them -- layer by layer, left to right: 1 on the first level; 2, 3, 4 on the second; 5, 6, 7, 8, 9 on the third.]

Page 10

Breadth-first search continued

• Example: building solution candidates in some regular way, starting from the simplest cases. The level in the tree shows the complexity of a solution, e.g. the number of elements in a solution.

• The difficulty is in memorizing all passed states on one and the same level. In other words, the search operators are complicated in this case.

Page 11

Using an open-list

open        -- list of objects to be tested
first(open) -- the first node of the list open
rest(open)  -- the list open without its first node
succ(x)     -- successors of x in the tree
(L,L1)      -- the list obtained from the list L by adding the list L1 to its end.

The breadth-first search algorithm Bf(p), which starts the search from the given root p of a search tree, can now be represented as follows:

A.2.5: Bf(p):
    open := (p);
    while not empty(open) do
        x := first(open);
        if good(x) then success(x)
        else open := (rest(open), succ(x)) fi
    od;
    failure()
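A.2.5 is the classical queue-based formulation; in Python, collections.deque gives first/rest directly (succ and good are caller-supplied, as in the pseudocode):

```python
from collections import deque

def bf(root, succ, good):
    """Breadth-first search with an open list (A.2.5)."""
    open_list = deque([root])
    while open_list:                  # not empty(open)
        x = open_list.popleft()       # x := first(open); open := rest(open)
        if good(x):
            return x                  # success(x)
        open_list.extend(succ(x))     # append succ(x) to the end
    return None                       # failure()

# Example: a tree where node x has children 2x and 2x+1 (up to 15).
succ = lambda x: [2 * x, 2 * x + 1] if x < 8 else []
print(bf(1, succ, lambda x: x == 6))  # 6, found layer by layer
```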

Page 12

Depth-first search

The algorithm is represented by the recursive function Df(p), which takes the root p of a nonempty search tree as an argument:

A.2.6: Df(p) =
    if not empty(p) then
        if good(p) then success(p)
        else for q ∈ succ(p) do Df(q) od
        fi
    fi

A drawback of this method is that it may go deep down (maybe even endlessly) and fail to find a solution which is close to the root.
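A recursive Python sketch of A.2.6; since Python has no non-local success() exit, the first solution found is propagated back through the return values (our convention):

```python
def df(p, succ, good):
    """Recursive depth-first search (A.2.6)."""
    if p is None:                     # empty(p)
        return None
    if good(p):
        return p                      # success(p)
    for q in succ(p):
        found = df(q, succ, good)
        if found is not None:
            return found
    return None

succ = lambda x: [2 * x, 2 * x + 1] if x < 8 else []
print(df(1, succ, lambda x: x == 6))  # 6, reached via branch 1 -> 3 -> 6
```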

Page 13

Depth-first search order

[Figure: the same search tree with nodes numbered in the order depth-first search visits them: 1 at the root; 2, 5, 6 on the second level; 3, 4, 7, 8, 9 on the third.]

Page 14

Iteratively deepening depth-first search

A.2.7: Idf(p):
    d := initd; success := false;
    while not success & d < dmax do
        idf(p,d);
        d := d + incrd
    od

idf(p,d) =
    if d > 0 & not empty(p) then
        if good(p) then success(p)
        else for q ∈ succ(p) do idf(q,d-1) od
        fi
    fi

The drawback is that some search is repeated when the depth d has been increased.
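A Python sketch of A.2.7 with initd = 0 and incrd = 1 (our choice of constants); the inner depth-limited search corresponds to idf(p,d):

```python
def iddfs(root, succ, good, dmax=20):
    """Iteratively deepening depth-first search (A.2.7)."""
    def idf(p, d):
        """Depth-limited DFS: explore at most d levels below p."""
        if good(p):
            return p
        if d == 0:
            return None
        for q in succ(p):
            found = idf(q, d - 1)
            if found is not None:
                return found
        return None

    for d in range(dmax + 1):          # d := initd; ...; d := d + incrd
        found = idf(root, d)
        if found is not None:
            return found
    return None                        # d reached dmax without success
```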

Page 15

Search on binary trees

left(p)  -- the left descendant of the node p
right(p) -- the right descendant of the node p

A.2.8: ulr(p) =
    if not empty(p) then
        if good(p) then success(p) fi;
        ulr(left(p)); ulr(right(p))
    fi

A.2.9: rul(p) =
    if not empty(p) then
        rul(right(p));
        if good(p) then success(p) fi;
        rul(left(p))
    fi

(These simple algorithms do not report failure.)
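In Python, with a node represented as a (value, left, right) tuple and None as the empty tree, the two traversal orders can be sketched as follows (this sketch assumes node values are truthy, since `or` chains the three steps):

```python
def ulr(p, good):
    """Root-left-right search (A.2.8)."""
    if p is None:
        return None
    value, left, right = p
    if good(value):
        return value
    return ulr(left, good) or ulr(right, good)

def rul(p, good):
    """Right-root-left search (A.2.9)."""
    if p is None:
        return None
    value, left, right = p
    return (rul(right, good)
            or (value if good(value) else None)
            or rul(left, good))
```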

Page 16

Best-first search

We change the functions down(p) and next(p) of the depth-first search so that they go down to some given level l, but choose the node estimated to be the best. Let us call the new functions bestdown(p) and bestnext(p) respectively. Then we get a simple version of the best-first search algorithm:

A.2.10: BestFirst(p,l) =
    if not empty(p) then
        if good(p) then success(p)
        else BestFirst(bestdown(p),l); BestFirst(bestnext(p),l) fi
    fi

Page 17

Beam search

open         -- the set of nodes to be tested currently (the beam)
initstate    -- the element of the search space where the search starts
succstate(x) -- produces an expansion of the set x by adding all untested successors of its elements
score(x)     -- evaluates the elements of the set x and assigns them estimates of their fitness
prune(x)     -- produces a pruned state by dropping the worst elements of x
goodin(x)    -- applies the test good() in a loop over all elements of x.

A.2.11:
    open := {initstate};
    while not empty(open) do
        if goodin(open) then success fi;
        candidates := succstate(open);
        score(candidates);
        open := prune(candidates)
    od;
    failure()
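A Python sketch of A.2.11. The pseudocode leaves score() and prune() abstract; here we make one concrete choice, keeping the `width` highest-scoring candidates (the parameter name `width` is ours):

```python
def beam_search(initstate, succstate, score, width, good):
    """Beam search (A.2.11) with a fixed beam width."""
    open_set = {initstate}
    while open_set:                              # not empty(open)
        for x in open_set:                       # goodin(open)
            if good(x):
                return x
        candidates = set()
        for x in open_set:                       # succstate(open)
            candidates |= set(succstate(x))
        # score and prune: keep the `width` best elements
        open_set = set(sorted(candidates, key=score, reverse=True)[:width])
    return None                                  # failure()

# Example: climb towards 10 with a beam of width 1.
print(beam_search(0, lambda x: [x + 1, x + 2] if x < 10 else [],
                  lambda x: x, 1, lambda x: x == 10))  # 10
```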

Page 18

Improved beam search

The set closed consists of all the already visited elements of the search space. This gives us the following improved algorithm:

A.2.12:
    open := {initstate}; closed := {};
    while not empty(open) do
        if goodin(open) then success fi;
        closed := closed ∪ open;
        candidates := succstate(open) \ closed;
        score(candidates);
        open := prune(candidates)
    od;
    failure()

Page 19

Improved best-first search

If there is a possibility to find the best candidate from open, then the following algorithm is applicable:

A.2.13:
    open := {initstate}; closed := {};
    while not empty(open) do
        x := BestFrom(open);
        if good(x) then success(x) fi;
        closed := closed ∪ {x};
        candidates := open ∪ succ(x) \ closed;
        score(candidates);
        open := prune(candidates)
    od;
    failure()
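With a priority queue as open, A.2.13 can be sketched in Python using heapq; this version keeps a closed set but omits the prune step, and treats lower scores as better (both are our simplifications):

```python
import heapq

def best_first(initstate, succ, score, good):
    """Best-first search with a closed set (after A.2.13)."""
    open_heap = [(score(initstate), initstate)]
    closed = set()
    while open_heap:                              # not empty(open)
        _, x = heapq.heappop(open_heap)           # x := BestFrom(open)
        if good(x):
            return x                              # success(x)
        closed.add(x)
        for q in succ(x):
            if q not in closed:
                heapq.heappush(open_heap, (score(q), q))
    return None                                   # failure()

# Example: reach 11 by steps of +3 or +5, guided by distance to 11.
print(best_first(0, lambda x: [x + 3, x + 5],
                 lambda x: abs(11 - x), lambda x: x == 11))  # 11
```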

Page 20

Hill-climbing

If the beam contains precisely one element, we get the hill-climbing algorithm. In this case we can use, instead of the functions score() and prune(), a simpler function BestFrom(candidates), which gives the best element of candidates, or an empty element if the set candidates is empty. If BestFrom is based on heuristics, then this is the simplest heuristic search algorithm.

A.2.14:
    x := initstate; closed := {};
    while not empty(x) do
        if good(x) then success(x) fi;
        closed := closed ∪ {x};
        candidates := succ(x) \ closed;
        x := BestFrom(candidates)
    od;
    failure()
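A.2.14 in Python; BestFrom is passed in by the caller, and returning None plays the role of the empty element:

```python
def hill_climb(initstate, succ, best_from, good):
    """Hill-climbing (A.2.14): beam search with a beam of one element."""
    x, closed = initstate, set()
    while x is not None:                 # not empty(x)
        if good(x):
            return x                     # success(x)
        closed.add(x)
        candidates = set(succ(x)) - closed
        x = best_from(candidates)        # None if candidates is empty
    return None                          # failure()

# Example: always move to the largest successor until good() holds.
print(hill_climb(0, lambda x: [x + 1, x + 2],
                 lambda c: max(c) if c else None,
                 lambda x: x >= 9))  # 10
```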

Page 21

Constrained hill-climbing

One can use a given set of forbidden objects, forbidden. Then we get hill-climbing with constraints:

A.2.15:
    x := initstate; closed := forbidden;
    while not empty(x) do
        if good(x) then success(x) fi;
        closed := closed ∪ {x};
        candidates := succ(x) \ closed;
        x := BestFrom(candidates)
    od;
    failure()

This seemingly small difference can cause considerable difficulties, even if the search is otherwise performed in a well-behaved space, e.g. a Euclidean space.

Page 22

Search by backtracking

• Search by backtracking can be explained in a very simple way by factorizing the search space into several subspaces D1, D2, ..., Dn, so that the whole search space is their product D1*D2*...*Dn.

• The problem is to find a tuple x = (a1, a2, ..., an) which satisfies the predicate good(x), where a1 ∈ D1, a2 ∈ D2, ..., an ∈ Dn.

• The predicate good() in this setting is extended so that it can be applied also to partial solutions (a1, a2, ..., ak), k < n. We shall call this function accept(a1, a2, ..., ak).

Page 23

Search by backtracking continued

• The search starts by selecting a candidate a1 in the subspace D1.

• Thereafter, the candidate a2 in D2 is selected and a partial solution (a1,a2) is formed, etc.

• This process is continued until the whole solution is found, or it becomes clear that a partial solution, say (a1,a2,...,ak), k<n, is unsatisfactory. If the latter happens, the last component ak of the partial solution is dropped and, if there are still unchecked elements in Dk, a new candidate from Dk is selected. This is a backtracking step.

Page 24

Backtracking

The function F((), D, D), where D = (D1, ..., Dn), finds the answer (a1, ..., an), if such a tuple exists in D1*...*Dn.

V1, ..., Vn are the sets of untested alternatives in D1, ..., Dn respectively.
accept(x, ..., y) is true, if x, ..., y is an acceptable combination of elements from D1, ..., Di.
select(x) produces an element of x, if x is nonempty.

A.2.16: F((a1, ..., ai), (V1, ..., Vn), D) =
    if i = n then (a1, ..., an)
    elif Vi+1 ≠ ∅ & accept(a1, ..., ai, select(Vi+1)) then
        F((a1, ..., ai, select(Vi+1)),
          (V1, ..., Vi, Vi+1 \ {select(Vi+1)}, Di+2, ..., Dn),
          (D1, ..., Dn))
    elif i = 0 then failure()
    else F((a1, ..., ai-1),
           (V1, ..., Vi \ {ai}, Di+1, ..., Dn),
           (D1, ..., Dn))
    fi
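A.2.16 can be rendered in Python; this sketch replaces the recursion with an explicit stack of untested-alternative sets Vi, which is equivalent but easier to follow in an imperative language (the iterative form is our rewriting):

```python
def backtrack(domains, accept):
    """Backtracking search over D1*...*Dn (after A.2.16).

    domains -- the list (D1, ..., Dn)
    accept  -- must hold for every prefix (a1, ..., ak) of a solution
    """
    n = len(domains)
    solution = []                        # the partial solution (a1, ..., ai)
    untested = [set(domains[0])]         # stack of Vi sets
    while untested:
        v = untested[-1]                 # alternatives for the next position
        if v:
            a = v.pop()                  # select(Vi+1), now marked as tested
            if accept(solution + [a]):
                solution.append(a)
                if len(solution) == n:
                    return tuple(solution)
                untested.append(set(domains[len(solution)]))
        else:
            untested.pop()               # backtracking step: drop ak
            if solution:
                solution.pop()
    return None                          # failure()

# Example: a tuple of three pairwise distinct values.
distinct = lambda p: len(set(p)) == len(p)
print(sorted(backtrack([[1, 2, 3]] * 3, distinct)))  # [1, 2, 3]
```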