
CS 416 Artificial Intelligence

Lecture 6: Informed Searches


A sad day

No more Western Union telegrams

• 1844 - First telegram (Morse): “What hath God wrought”

• 1858 - First transatlantic telegram, from Queen Victoria to President Buchanan

• Break, break, break… 1866 working again

  – A few words per minute

  – Punctuation cost extra. “stop” was cheaper.


Assignment 1

Getting Visual Studio

Signing up for Thursday (3-5) or Friday (2-3:30)

Explanation of IsNormal()


A* without admissibility

[Figure: search from start A to goal D with heuristic values h(A)=9, h(B)=4, h(C)=200, h(D)=0 and edge costs of 4 and 5. Because h(C)=200 grossly overestimates, node C is never explored, and A* returns the costlier path through B.]


A* with admissibility

[Figure: the same graph with h(C) lowered to 1 (h(A)=9, h(B)=4, h(C)=1, h(D)=0). The heuristic is now admissible, C is explored, and A* finds the optimal path.]


Another A* without admissibility

[Figure: another graph, with h(A)=9, h(B)=1, h(C)=6, h(D)=0 and edge costs of 4 and 5. Again an overestimate keeps one branch from ever being explored, so A* can miss the optimum.]


Admissible w/o Consistency

[Figure: graph with h(A)=3, h(B)=2, h(C)=1, h(D)=1 and edge costs between 0 and 2. Each h value is a lower bound on the true cost (admissible), but along one edge h drops by more than the edge cost, violating consistency: h(n) <= c(n, n') + h(n').]
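To make these four examples concrete, here is a minimal A* sketch in Python. It is an illustration, not code from the lecture, and the graph literals below are a reconstruction from the slides' numbers (edge costs of 4 and 5), so treat them as assumptions.

import heapq

def astar(graph, h, start, goal):
    """Best-first search ordered by f(n) = g(n) + h(n); returns (cost, path)."""
    frontier = [(h[start], 0, start, [start])]   # entries are (f, g, node, path)
    explored = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in explored:
            continue
        explored.add(node)
        for succ, cost in graph.get(node, []):
            heapq.heappush(frontier, (g + cost + h[succ], g + cost, succ, path + [succ]))
    return None

# Reconstructed graph: start A, goal D, optimal route A-C-D with cost 9.
graph = {'A': [('B', 5), ('C', 4)], 'B': [('D', 5)], 'C': [('D', 5)]}

# Inadmissible h(C) = 200: C is never explored, A* returns A-B-D with cost 10.
print(astar(graph, {'A': 9, 'B': 4, 'C': 200, 'D': 0}, 'A', 'D'))
# Admissible h(C) = 1: C is explored and the optimal cost 9 is found.
print(astar(graph, {'A': 9, 'B': 4, 'C': 1, 'D': 0}, 'A', 'D'))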


Meta-foo

What does “meta” mean in AI?

• Frequently it means stepping back a level from foo

• Metareasoning = reasoning about reasoning

• These informed search algorithms have pros and cons regarding how they choose to explore new levels

  – a metalevel learning algorithm may learn how to combine search techniques to suit the application domain


Heuristic Functions

8-puzzle problem

• Average solution depth = 22

• Branching factor ≈ 3

• An exhaustive tree search would visit about 3^22 ≈ 3.1 x 10^10 states

• Graph search cuts this by a factor of about 170,000, since only 9!/2 = 181,440 distinct states are reachable


Heuristics

h1: the number of misplaced tiles

• Admissible because at least n moves are required to fix n misplaced tiles

h2: the summed distance from each tile to its goal position

• No diagonal moves, so use Manhattan Distance

  – As if walking around rectilinear city blocks

• also admissible
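A short sketch of the two heuristics in Python, assuming a hypothetical state encoding (a 9-tuple read row by row, 0 for the blank) and goal layout; neither comes from the lecture.

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)   # assumed goal layout, blank first

def h1(state):
    """Number of misplaced tiles (the blank doesn't count)."""
    return sum(1 for i, tile in enumerate(state) if tile != 0 and tile != GOAL[i])

def h2(state):
    """Sum of Manhattan (city-block) distances of tiles from their goal squares."""
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            g = GOAL.index(tile)
            total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total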


Compare these two heuristics

Effective Branching Factor, b*

• If A* generates N nodes to find the goal at depth d

  – b* = the branching factor such that a uniform tree of depth d contains N+1 nodes (we add one for the root node that wasn’t included in N)

N + 1 = 1 + b* + (b*)^2 + … + (b*)^d
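The polynomial has no closed-form inverse, but its left-hand side grows monotonically in b*, so b* can be recovered numerically. A sketch (the N and d below are made-up values, not slide data):

def effective_branching_factor(N, d, tol=1e-6):
    """Solve N + 1 = 1 + b + b^2 + ... + b^d for b by bisection."""
    def tree_size(b):
        return sum(b ** i for i in range(d + 1))   # uniform tree with branching b
    lo, hi = 1.0, float(N + 1)                     # b* must lie in this interval
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if tree_size(mid) < N + 1:
            lo = mid                               # tree too small: b* is larger
        else:
            hi = mid
    return (lo + hi) / 2

print(round(effective_branching_factor(N=52, d=5), 2))   # about 1.92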


Compare these two heuristics

Effective Branching Factor, b*

• b* close to 1 is ideal

  – because this means the heuristic guided the A* search nearly linearly

  – If b* were 100, the heuristic on average had to consider 100 children for each node

  – Compare heuristics based on their b*

Compare these two heuristics

[Comparison table (nodes generated and effective branching factors for h1 and h2) was shown here; not captured in the transcript.]


Compare these two heuristics

h2 is always better than h1

• for any node n, h2(n) >= h1(n)

• h2 dominates h1

• Recall that all nodes with f(n) < C* will be expanded

  – This means all nodes with h(n) + g(n) < C* will be expanded, i.e., all nodes where h(n) < C* - g(n)

  – Every node h2 expands will also be expanded by h1, and because h1 is smaller, other nodes will be expanded as well
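Dominance can be seen empirically by counting expansions under each heuristic. A sketch that reuses the hypothetical GOAL, h1, and h2 from above; the start state is the standard textbook instance (26 steps deep), so the h1 run may take a few seconds.

import heapq
from itertools import count

def neighbors(state):
    """Successor states of an 8-puzzle tuple: slide a tile into the blank."""
    b = state.index(0)
    r, c = divmod(b, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            s = list(state)
            j = nr * 3 + nc
            s[b], s[j] = s[j], s[b]
            yield tuple(s)

def expansions(start, h):
    """A* to GOAL; returns how many nodes were expanded along the way."""
    tie = count()                       # tie-breaker so the heap never compares states
    frontier = [(h(start), next(tie), 0, start)]
    best_g, expanded = {start: 0}, 0
    while frontier:
        _, _, g, state = heapq.heappop(frontier)
        if state == GOAL:
            return expanded
        if g > best_g[state]:
            continue                    # stale queue entry
        expanded += 1
        for succ in neighbors(state):
            if g + 1 < best_g.get(succ, float('inf')):
                best_g[succ] = g + 1
                heapq.heappush(frontier, (g + 1 + h(succ), next(tie), g + 1, succ))

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)
print(expansions(start, h2), expansions(start, h1))   # h2 expands far fewer nodes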


Inventing admissible heuristic funcs

How can you create h(n)?

• Simplify the problem by reducing restrictions on actions

  – Allow 8-puzzle pieces to sit atop one another

  – Call this a relaxed problem

  – The cost of an optimal solution to the relaxed problem is an admissible heuristic for the original problem

    The original problem is at least as expensive to solve


Examples of relaxed problems

Original rule: a tile can move from square A to square B if A is horizontally or vertically adjacent to B, and B is blank

• A tile can move from A to B if A is adjacent to B (overlap)

• A tile can move from A to B if B is blank (teleport)

• A tile can move from A to B (teleport and overlap)

Solutions to these relaxed problems can be computed without search, and therefore the heuristic is easy to compute

(The “adjacent” relaxation yields the Manhattan-distance heuristic h2; “teleport and overlap” yields the misplaced-tiles heuristic h1.)


Multiple Heuristics

If multiple heuristics are available:

• h(n) = max {h1(n), h2(n), …, hm(n)}
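As a one-line sketch, using the earlier hypothetical h1 and h2 as the components:

def h_combined(state):
    """The max of admissible heuristics is still admissible, and dominates each one."""
    return max(h1(state), h2(state))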


Use solution to subproblem as heuristic

What is the optimal cost of solving some portion of the original problem?

• the subproblem solution cost is a heuristic for the original problem (and an admissible one: the full problem costs at least as much)


Pattern Databases

Store optimal solutions to subproblems in a database

• Use an exhaustive search to solve every permutation of the 1,2,3,4-piece subproblem of the 8-puzzle

• During solution of the 8-puzzle, look up the optimal cost to solve the 1,2,3,4-piece subproblem and use it as the heuristic
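A sketch of how such a database might be built: abstract away all tiles except 1-4 (and the blank), then breadth-first search outward from the abstracted goal, recording the depth of every abstract state. It reuses the hypothetical neighbors() and GOAL from the sketches above; the wildcard encoding is an assumption.

from collections import deque

PATTERN = (1, 2, 3, 4)                 # the tiles the database keeps track of

def abstract(state):
    """Keep pattern tiles and the blank; all other tiles become wildcards (-1)."""
    return tuple(t if t in PATTERN or t == 0 else -1 for t in state)

def build_pattern_db():
    """BFS from the abstracted goal; every move costs 1, so BFS depth is the
    optimal cost of solving the subproblem from that abstract state."""
    root = abstract(GOAL)
    db, queue = {root: 0}, deque([root])
    while queue:
        state = queue.popleft()
        for succ in neighbors(state):  # the move generator works on wildcards too
            if succ not in db:
                db[succ] = db[state] + 1
                queue.append(succ)
    return db

PDB = build_pattern_db()               # 9*8*7*6*5 = 15,120 abstract states

def h_pdb(state):
    """Exact subproblem cost looked up in O(1); admissible for the full puzzle."""
    return PDB[abstract(state)]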


Learning

Could also build the pattern database while solving cases of the 8-puzzle

• Must keep track of intermediate states and the true final cost of the solution

• Inductive learning builds a mapping of state -> cost

• Because there are too many permutations of actual states

  – Construct important features to reduce the size of the space


Local Search Algorithms and Optimization Problems


Characterize Techniques

Uninformed Search

• Looking for a solution where the solution is a path from start to goal

• At each intermediate point along a path, we have no prediction of the future value of the path

Informed Search

• Again, looking for a path from start to goal

• This time we have more insight regarding the value of intermediate solutions


Now change things a bit

What if the path isn’t important, just the goal?

• So the goal is unknown

• The path to the goal need not be solved

Examples

• What quantities of quarters, nickels, and dimes add up to $17.45 while minimizing the total number of coins?

• Is the price of Microsoft stock going up tomorrow?


Local Search

Local search does not keep track of previous solutions

• Instead it keeps track of the current solution (current state)

• Uses a method of generating alternative solution candidates

Advantages

• Uses a small amount of memory (usually a constant amount)

• Can find reasonable (note: we aren’t saying optimal) solutions in infinite search spaces


Optimization Problems

Objective Function

• A function with vector inputs and scalar output

  – the goal is to search through candidate input vectors in order to minimize or maximize the objective function

Example

• f(q, d, n) = 1,000,000 if q*0.25 + d*0.10 + n*0.05 != 17.45
             = q + n + d otherwise

• minimize f
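A direct transcription of this objective in Python. The one liberty taken is doing the arithmetic in integer cents, since comparing floating-point dollar sums with != is unreliable:

def f(q, d, n):
    """Penalize any mix that isn't exactly $17.45; otherwise count the coins."""
    if 25 * q + 10 * d + 5 * n != 1745:
        return 1_000_000
    return q + d + n

print(f(69, 2, 0))   # feasible: 69 quarters + 2 dimes = $17.45, 71 coins (the optimum)
print(f(1, 1, 1))    # infeasible: 1,000,000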


Search Space

The realm of feasible input vectors

• Also called the state-space landscape

• Usually described by

  – number of dimensions (3 for our change example)

  – domain of each dimension (#quarters is discrete from 0 to 69…)

  – functional relationship between input vector and objective function output

    no relationship (chaos or seemingly random)

    smoothly varying

    discontinuities


Search Space

Looking for the global maximum (or minimum)


Hill Climbing

Also called Greedy Search

• Select a starting point and set current

• evaluate(current)

• loop do

  – neighbor = highest-valued successor of current

  – if evaluate(neighbor) <= evaluate(current), return current

  – else current = neighbor
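A runnable sketch of this loop; the one-dimensional toy landscape at the bottom is an illustration, not from the lecture.

import random

def hill_climb(evaluate, start, successors):
    """Greedy local search: move to the best neighbor until none improves."""
    current = start
    while True:
        neighbor = max(successors(current), key=evaluate)
        if evaluate(neighbor) <= evaluate(current):
            return current             # a local (possibly global) maximum
        current = neighbor

# Maximize -(x - 3)^2 over the integers, stepping left or right by 1.
print(hill_climb(lambda x: -(x - 3) ** 2,
                 random.randint(-10, 10),
                 lambda x: [x - 1, x + 1]))   # always climbs to 3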


Hill climbing gets stuck

Hiking metaphor (you are wearing glasses that limit your vision to 10 feet)

• Local maxima

  – Ridges (in cases when you can’t walk along the ridge)

• Plateau

  – why is this a problem?


Hill Climbing Gadgets

Variants on hill climbing play special roles

• stochastic hill climbing

  – don’t always choose the best successor

• first-choice hill climbing

  – pick the first good successor you find

    useful if the number of successors is large

• random restart (sketched after this list)

  – follow steepest ascent from multiple starting states

  – the probability of finding the global max increases with the number of starts
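A sketch of the random-restart wrapper around the hill_climb above; the restart count is arbitrary.

def random_restart(evaluate, random_start, successors, restarts=10):
    """Hill climb from several random starts and keep the best local maximum."""
    best = None
    for _ in range(restarts):
        result = hill_climb(evaluate, random_start(), successors)
        if best is None or evaluate(result) > evaluate(best):
            best = result
    return best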


Hill Climbing Usefulness

It Depends

• The shape of the state space greatly influences hill climbing

• local maxima are its Achilles’ heel

• what is the cost of evaluation?

• what is the cost of finding a random starting location?


Simulated Annealing

A term borrowed from metalworking

• We want metal molecules to find a stable location relative to neighbors

• heating causes metal molecules to jump around and to take on undesirable (high energy) locations

• during cooling, molecules reduce their movement and settle into a more stable (low energy) position

• annealing is the process of heating metal and letting it cool slowly to lock in the stable locations of the molecules


Simulated Annealing

“Be the Ball”

• You have a wrinkled sheet of metal

• Place a BB on the sheet and what happens?

  – BB rolls downhill

  – BB stops at the bottom of a hill (local or global min?)

  – BB momentum may carry it out of one hill into another (local or global)

• By shaking the metal sheet, you are adding energy (heat)

• How hard do you shake?


Our Simulated Annealing Algorithm

“You’re not being the ball, Danny” (Caddyshack)

• Gravity is great because it tells the ball which way is downhill at all times

• We don’t have gravity, so how do we find a successor state?

  – Randomness

    AKA Monte Carlo

    AKA Stochastic


Algorithm Outline

Select some initial guess of evaluation function parameters, x

Evaluate the evaluation function, v = f(x)

Compute a random displacement, x' = x + Δx

• The Monte Carlo event

Evaluate v' = f(x')

• If v' < v: set the new state, x = x'

• Else set x = x' with probability Prob(E, T)

  – This is the Metropolis step

Repeat with updated state and temperature
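A runnable sketch of this outline, minimizing rather than maximizing. The geometric cooling schedule and the neighbor moves are assumptions; it reuses f from the change-counting example, and the moves swap coins of equal value so the search never leaves the feasible region.

import math, random

def simulated_annealing(objective, x0, neighbor, T0=1.0, cooling=0.995, steps=20000):
    """Downhill moves are always accepted; uphill moves are accepted with
    probability exp(-dE / T), which shrinks as T cools (the Metropolis step)."""
    x, v, T = x0, objective(x0), T0
    best, best_v = x, v
    for _ in range(steps):
        x2 = neighbor(x)               # the Monte Carlo event
        v2 = objective(x2)
        dE = v2 - v
        if dE < 0 or random.random() < math.exp(-dE / T):
            x, v = x2, v2
            if v < best_v:
                best, best_v = x, v
        T *= cooling                   # annealing schedule
    return best, best_v

def swap_coins(state):
    """Exchange coins of equal value: 1 quarter <-> 5 nickels, 1 dime <-> 2 nickels."""
    q, d, n = state
    dq, dd, dn = random.choice([(1, 0, -5), (-1, 0, 5), (0, 1, -2), (0, -1, 2)])
    if q + dq >= 0 and d + dd >= 0 and n + dn >= 0:
        return (q + dq, d + dd, n + dn)
    return state

# Start from 349 nickels; typically settles at (69, 2, 0): 71 coins.
print(simulated_annealing(lambda s: f(*s), (0, 0, 349), swap_coins))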


Metropolis Step

We approximate nature’s alignment of molecules by allowing uphill transitions with some probability

• Prob(in energy state E) ~ e^(-E/kT)

  – Boltzmann Probability Distribution

  – Even when T is small, there is still a chance of being in a high energy state

• Prob(transferring from E1 to E2) = e^(-(E2 - E1)/kT)

  – Metropolis Step

  – if E2 < E1, prob() is greater than 1 (so the transfer is always made)

  – if E2 > E1, we may transfer to the higher energy state

The rate at which T is decreased and the amount it is decreased are prescribed by an annealing schedule
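The acceptance rule as a function (with k folded into the temperature units, a common simplification):

import math

def accept_probability(E1, E2, T):
    """Metropolis rule: always accept downhill; accept uphill with Boltzmann weight."""
    return min(1.0, math.exp(-(E2 - E1) / T))

print(accept_probability(5.0, 4.0, T=1.0))   # downhill: 1.0
print(accept_probability(4.0, 5.0, T=1.0))   # uphill, hot:  ~0.37
print(accept_probability(4.0, 5.0, T=0.1))   # uphill, cold: ~0.00005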


What have we got?

Always move downhill if possible

Sometimes go uphill

• More likely at the start, when T is high

Optimality guaranteed with a slow annealing schedule

No need for a smooth search space

• We do not need to know what a nearby successor is

Can be a discrete search space

• Traveling salesman problem

More info: Numerical Recipes in C (online), Chapter 10.9


Local Beam Search

Keep more previous states in memory

• Simulated Annealing just kept one previous state in memory

• This search keeps k states in memory

Generate k initial states

if any state is a goal, terminate

else, generate all successors and select the best k

repeat
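A sketch of the loop; the parameter values are arbitrary, and is_goal is optional for pure optimization problems.

import heapq

def local_beam_search(evaluate, random_state, successors, k=8, iters=100, is_goal=None):
    """Keep the k best states; each round, pool every state's successors and
    keep the k best of the pool, wherever they came from."""
    states = [random_state() for _ in range(k)]
    for _ in range(iters):
        if is_goal is not None:
            for s in states:
                if is_goal(s):
                    return s
        pool = [succ for s in states for succ in successors(s)]
        if not pool:
            break
        states = heapq.nlargest(k, pool, key=evaluate)
    return max(states, key=evaluate)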


Isn’t this steepest ascent in parallel?

Information is shared between the k search points

• Each of the k states generates successors

• The best k successors are selected from the combined pool

• Some search points may contribute none of the best successors

• One search point may contribute all k successors

  – “Come over here, the grass is greener” (Russell and Norvig)

• If executed in parallel as independent searches, no search points would be terminated like this


Beam Search

Premature termination of search paths?

• Stochastic beam search

  – Instead of choosing the best k successors

  – Choose k successors at random, weighting each successor’s chance by its value
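The only change to the beam-search sketch above is the selection line: instead of heapq.nlargest, sample in proportion to value. This assumes values are positive (shift them first if they can be negative), and note that random.choices samples with replacement.

import random

def stochastic_select(pool, evaluate, k):
    """Pick k successors at random, weighted by their evaluation."""
    weights = [evaluate(s) for s in pool]
    return random.choices(pool, weights=weights, k=k)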