Introduction: Combinatorial Problems and Search


HEURISTIC OPTIMIZATION

Introduction:

Combinatorial Problems and Search

slightly adapted from slides for SLS:FA, Chapter 1

Outline

1. Combinatorial Problems

2. Two Prototypical Combinatorial Problems

3. Computational Complexity

4. Search Paradigms

5. Stochastic Local Search


Combinatorial Problems

Combinatorial problems arise in many areas of computer science and application domains:

- finding shortest/cheapest round trips (TSP)
- finding models of propositional formulae (SAT)
- planning, scheduling, time-tabling
- vehicle routing
- location and clustering
- internet data packet routing
- protein structure prediction


Combinatorial problems involve finding a grouping, ordering, or assignment of a discrete, finite set of objects that satisfies given conditions.

Candidate solutions are combinations of solution components that may be encountered during a solution attempt but need not satisfy all given conditions.

Solutions are candidate solutions that satisfy all given conditions.


Example:

- Given: Set of points in the Euclidean plane
- Objective: Find the shortest round trip

Note:

- a round trip corresponds to a sequence of points (= assignment of points to sequence positions)
- solution component: trip segment consisting of two points that are visited one directly after the other
- candidate solution: round trip
- solution: round trip with minimal length


Problem vs problem instance:

- Problem: Given any set of points X, find a shortest round trip
- Solution: Algorithm that finds shortest round trips for any X
- Problem instance: Given a specific set of points P, find a shortest round trip
- Solution: Shortest round trip for P

Technically, problems can be formalised as sets of problem instances.


Decision problems:

solutions = candidate solutions that satisfy given logical conditions

Example: The Graph Colouring Problem

- Given: Graph G and set of colours C
- Objective: Assign to all vertices of G a colour from C such that two vertices connected by an edge are never assigned the same colour


Every decision problem has two variants:

- Search variant: Find a solution for given problem instance (or determine that no solution exists)
- Decision variant: Determine whether a solution for given problem instance exists

Note: Search and decision variants are closely related; algorithms for one can be used for solving the other.


Optimisation problems:

- can be seen as generalisations of decision problems
- objective function f measures solution quality (often defined on all candidate solutions)
- typical goal: find solution with optimal quality
  - minimisation problem: optimal quality = minimal value of f
  - maximisation problem: optimal quality = maximal value of f

Example:

Variant of the Graph Colouring Problem where the objective is to find a valid colour assignment that uses a minimal number of colours.

Note: Every minimisation problem can be formulated as a maximisation problem and vice versa.


Variants of optimisation problems:

- Search variant: Find a solution with optimal objective function value for given problem instance
- Evaluation variant: Determine optimal objective function value for given problem instance

Every optimisation problem has associated decision problems:

Given a problem instance and a fixed solution quality bound b, find a solution with objective function value ≤ b (for minimisation problems) or determine that no such solution exists.
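
The connection can be made concrete: with an integer-valued objective function, an algorithm for the associated decision problems can be used to locate the optimal value by searching over the bound b. A minimal Python sketch (illustrative; decide(instance, b) is a hypothetical decision-variant oracle returning True iff a solution with objective function value ≤ b exists):

def optimum_via_decision(instance, decide, lower, upper):
    """Locate the minimal objective function value by binary search over the
    quality bound b, using only a decision-variant oracle.
    Assumes integer objective values and that a solution of value <= upper exists."""
    while lower < upper:
        mid = (lower + upper) // 2
        if decide(instance, mid):
            upper = mid          # a solution with value <= mid exists
        else:
            lower = mid + 1      # the optimum must be larger than mid
    return lower                 # optimal objective function value

This solves the evaluation variant with a logarithmic number of oracle calls in the size of the value range.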


Many optimisation problems have an objective function as well as logical conditions that solutions must satisfy.

A candidate solution is called feasible (or valid) iff it satisfies the given logical conditions.

Note: Logical conditions can always be captured by an objective function such that feasible candidate solutions correspond to solutions of an associated decision problem with a specific bound.


Note:

- Algorithms for optimisation problems can be used to solve associated decision problems
- Algorithms for decision problems can often be extended to related optimisation problems.
- Caution: This does not always solve the given problem most efficiently.


Two Prototypical Combinatorial Problems

Studying conceptually simple problems facilitates development, analysis and presentation of algorithms.

Two prominent, conceptually simple problems:

- Finding satisfying variable assignments of propositional formulae (SAT) – prototypical decision problem
- Finding shortest round trips in graphs (TSP) – prototypical optimisation problem


SAT: A simple example

- Given: Formula F := (x1 ∨ x2) ∧ (¬x1 ∨ ¬x2)
- Objective: Find an assignment of truth values to variables x1, x2 that renders F true, or decide that no such assignment exists.

General SAT Problem (search variant):

- Given: Formula F in propositional logic
- Objective: Find an assignment of truth values to variables in F that renders F true, or decide that no such assignment exists.
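
For very small formulae the search variant can be handled by brute force: enumerate all 2^n assignments and return the first one that satisfies F. A Python sketch (illustrative; the formula is represented as a predicate over a tuple of truth values):

from itertools import product

def sat_search_brute_force(formula, n_vars):
    """Search variant of SAT by exhaustive enumeration: return a satisfying
    assignment (tuple of booleans) of the n_vars variables, or None."""
    for assignment in product([False, True], repeat=n_vars):
        if formula(assignment):
            return assignment
    return None  # F is unsatisfiable

# The simple example F := (x1 or x2) and (not x1 or not x2):
F = lambda a: (a[0] or a[1]) and (not a[0] or not a[1])
print(sat_search_brute_force(F, 2))  # (False, True)

Enumeration is exponential in the number of variables, which is exactly why the complexity results and search paradigms discussed later matter.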


Definition:

- Formula in propositional logic: well-formed string that may contain
  - propositional variables x1, x2, ..., xn;
  - truth values ⊤ ('true'), ⊥ ('false');
  - operators ¬ ('not'), ∧ ('and'), ∨ ('or');
  - parentheses (for operator nesting).
- Model (or satisfying assignment) of a formula F: Assignment of truth values to the variables in F under which F becomes true (under the usual interpretation of the logical operators)
- Formula F is satisfiable iff there exists at least one model of F, unsatisfiable otherwise.


Definition:

- A formula is in conjunctive normal form (CNF) iff it is of the form

  ⋀_{i=1}^{m} ⋁_{j=1}^{k(i)} l_ij = (l_11 ∨ ... ∨ l_1k(1)) ∧ ... ∧ (l_m1 ∨ ... ∨ l_mk(m))

  where each literal l_ij is a propositional variable or its negation. The disjunctions (l_i1 ∨ ... ∨ l_ik(i)) are called clauses.

- A formula is in k-CNF iff it is in CNF and all clauses contain exactly k literals (i.e., for all i, k(i) = k).

Note: Every propositional formula can be transformed into a satisfiability-equivalent formula in 3-CNF.


Concise definition of SAT:

- Given: Formula F in propositional logic.
- Objective: Decide whether F is satisfiable.

Note:

- In many cases, the restriction of SAT to CNF formulae is considered.
- The restriction of SAT to k-CNF formulae is called k-SAT.


Example:

F := (¬x2 ∨ x1) ∧ (¬x1 ∨ ¬x2 ∨ ¬x3) ∧ (x1 ∨ x2) ∧ (¬x4 ∨ x3) ∧ (¬x5 ∨ x3)

- F is in CNF.
- Is F satisfiable? Yes, e.g., x1 := x2 := ⊤, x3 := x4 := x5 := ⊥ is a model of F.
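
With clauses encoded as lists of signed variable indices (a common convention, not part of the slides: literal k stands for x_k, literal -k for ¬x_k), the claimed model can be checked mechanically:

# F = (-x2 v x1) & (-x1 v -x2 v -x3) & (x1 v x2) & (-x4 v x3) & (-x5 v x3)
F = [[-2, 1], [-1, -2, -3], [1, 2], [-4, 3], [-5, 3]]

def is_model(assignment, cnf):
    """True iff the assignment (dict: variable index -> bool) satisfies every clause."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in cnf)

a = {1: True, 2: True, 3: False, 4: False, 5: False}
print(is_model(a, F))  # True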


TSP: A simple example

[Figure: map of 16 locations around the Mediterranean — Gibraltar, Bonifaccio, Circeo, Ustica, Stromboli, Favignana, Messina, Taormina, Malea, Corfu, Ithaca, Zakinthos, Troy, Maronia, Djerba, Birzebbuga — a small TSP instance.]


Definition:

- Hamiltonian cycle in graph G := (V, E): cyclic path that visits every vertex of G exactly once (except start/end point).
- Weight of path p := (u1, ..., uk) in edge-weighted graph G := (V, E, w): total weight of all edges on p, i.e.:

  w(p) := Σ_{i=1}^{k-1} w((u_i, u_{i+1}))
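
The weight definition carries over directly to code. A minimal sketch, assuming edge weights are stored in a dictionary keyed by vertex pairs (an illustration, not from the slides):

def path_weight(path, w):
    """Total weight of path (u1, ..., uk): sum of w[(u_i, u_{i+1})] for i = 1 .. k-1."""
    return sum(w[(path[i], path[i + 1])] for i in range(len(path) - 1))

# A Hamiltonian cycle is a path that returns to its start vertex:
w = {("a", "b"): 2, ("b", "c"): 4, ("c", "a"): 3}
print(path_weight(("a", "b", "c", "a"), w))  # 9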


The Travelling Salesman Problem (TSP)

- Given: Directed, edge-weighted graph G.
- Objective: Find a minimal-weight Hamiltonian cycle in G.

Types of TSP instances:

- Symmetric: For all edges (v, v′) of the given graph G, (v′, v) is also in G, and w((v, v′)) = w((v′, v)). Otherwise: asymmetric.
- Euclidean: Vertices = points in a Euclidean space, weight function = Euclidean distance metric.
- Geographic: Vertices = points on a sphere, weight function = geographic (great circle) distance.


Computational Complexity

Fundamental question:

How hard is a given computational problem to solve?

Important concepts:

- Time complexity of a problem Π: Computation time required for solving a given instance π of Π using the most efficient algorithm for Π.
- Worst-case time complexity: Time complexity in the worst case over all problem instances of a given size, typically measured as a function of instance size.


Time complexity

- time complexity gives the amount of time taken by an algorithm as a function of the input size
- time complexity often described by big-O notation (O(·))
- let f and g be two functions
- we say f(n) = O(g(n)) if two positive numbers c and n0 exist such that for all n ≥ n0 we have

  f(n) ≤ c · g(n)

- we call an algorithm polynomial-time if its time complexity is bounded by a polynomial p(n), i.e. f(n) = O(p(n))
- we call an algorithm exponential-time if its time complexity cannot be bounded by a polynomial


Theory of NP-completeness

- formal theory based upon abstract models of computation (e.g. Turing machines) (here an informal view is taken)
- focus on decision problems
- main complexity classes
  - P: Class of problems solvable by a polynomial-time algorithm
  - NP: Class of decision problems that can be solved in polynomial time by a nondeterministic algorithm
- intuition: non-deterministic, polynomial-time algorithm guesses correct solution which is then verified in polynomial time
- Note: nondeterministic ≠ randomised
- P ⊆ NP


- non-deterministic algorithms appear to be more powerful than deterministic, polynomial-time algorithms: if Π ∈ NP, then there exists a polynomial p such that Π can be solved by a deterministic algorithm in time O(2^p(n))
- concept of polynomial reducibility: A problem Π′ is polynomially reducible to a problem Π if there exists a polynomial-time algorithm that transforms every instance of Π′ into an instance of Π, preserving the correctness of the "yes" answers (see the sketch below)
- Π is at least as difficult as Π′
- if Π is polynomially solvable, then so is Π′
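
Polynomial reducibility can be illustrated with a concrete reduction of the Graph Colouring Problem (the decision-problem example above) to SAT; the encoding below is a standard one and is given only as a sketch, not as part of the slides:

def colouring_to_sat(vertices, edges, k):
    """Polynomial-time reduction of Graph k-Colouring to SAT (CNF clauses over
    signed integer literals). The variable for (v, c) is true iff vertex v
    receives colour c; a model of the CNF yields a valid colouring and vice versa."""
    var = {(v, c): i + 1
           for i, (v, c) in enumerate((v, c) for v in vertices for c in range(k))}
    clauses = [[var[(v, c)] for c in range(k)] for v in vertices]   # some colour per vertex
    for (u, v) in edges:                                            # adjacent vertices differ
        clauses += [[-var[(u, c)], -var[(v, c)]] for c in range(k)]
    return clauses

# Triangle with 2 colours -> an unsatisfiable CNF instance; with 3 colours -> satisfiable
print(colouring_to_sat("abc", [("a", "b"), ("b", "c"), ("a", "c")], 2))

The transformation runs in time polynomial in the instance size and preserves "yes" answers in both directions, so a polynomial-time algorithm for SAT would immediately give one for Graph Colouring.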


NP-completeness

Definition: A problem Π is NP-complete if

(i) Π ∈ NP
(ii) for all Π′ ∈ NP it holds that Π′ is polynomially reducible to Π.

- NP-complete problems are the hardest problems in NP
- the first problem that was proven to be NP-complete is SAT
- nowadays many hundreds of NP-complete problems are known
- for no NP-complete problem has a polynomial-time algorithm been found

The main open question in theoretical computer science is whether P = NP.


Definition: A problem Π is NP-hard if for all Π′ ∈ NP it holds that Π′ is polynomially reducible to Π.

- extension of the hardness results to optimisation problems, which are not in NP
- optimisation variants are at least as difficult as their associated decision problems


Many combinatorial problems are hard:

- SAT for general propositional formulae is NP-complete.
- SAT for 3-CNF is NP-complete.
- TSP is NP-hard; the associated decision problem for optimal solution quality is NP-complete.
- The same holds for Euclidean TSP instances.
- The Graph Colouring Problem is NP-complete.
- Many scheduling and timetabling problems are NP-hard.


Approximation algorithms

- general question: if one relaxes the requirement of finding optimal solutions, can one give any quality guarantees that are obtainable with algorithms that run in polynomial time?
- approximation ratio is measured by

  R(π, s) = max( OPT / f(s), f(s) / OPT )

  where π is an instance of Π, s a solution and OPT the optimum solution value
- TSP case
  - general TSP instances are inapproximable, that is, R(π, s) is unbounded
  - if the triangle inequality holds, i.e. w(x, y) ≤ w(x, z) + w(z, y), the best approximation ratio of 1.5 is obtained with Christofides' algorithm


Practically solving hard combinatorial problems:

- Subclasses can often be solved efficiently (e.g., 2-SAT);
- Average-case vs worst-case complexity (e.g. Simplex Algorithm for linear optimisation);
- Approximation of optimal solutions: sometimes possible in polynomial time (e.g., Euclidean TSP), but in many cases also intractable (e.g., general TSP);
- Randomised computation is often practically more efficient;
- Asymptotic bounds vs true complexity: constants matter!


Example: polynomial vs. exponential

[Figure: run-time as a function of instance size n for a polynomial-time algorithm (10 · n^4) and exponential-time algorithms (10^-6 · 2^(n/25) and 10^-6 · 2^n); the exponential curves eventually dominate every polynomial.]


Example: Impact of constants

[Figure: the same run-time curves as above — for small to moderate instance sizes the constants decide: 10^-6 · 2^(n/25) stays below 10 · n^4 up to fairly large n, even though it grows exponentially.]


Search Paradigms

Solving combinatorial problems through search:

- iteratively generate and evaluate candidate solutions
- decision problems: evaluation = test whether it is a solution
- optimisation problems: evaluation = check objective function value
- evaluating candidate solutions is typically computationally much cheaper than finding (optimal) solutions


Perturbative search

- search space = complete candidate solutions
- search step = modification of one or more solution components

Example: SAT

- search space = complete variable assignments
- search step = modification of truth values for one or more variables


Constructive search (aka construction heuristics)

- search space = partial candidate solutions
- search step = extension with one or more solution components

Example: Nearest Neighbour Heuristic (NNH) for TSP

- start with a single vertex (chosen uniformly at random)
- in each step, follow a minimal-weight edge to a yet unvisited vertex
- complete the Hamiltonian cycle by adding the initial vertex to the end of the path

Note: NNH typically does not find very high quality solutions, but it is often and successfully used in combination with perturbative search methods.
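
A minimal Python sketch of the NNH (illustrative; dist(u, v) is assumed to be a symmetric distance function, not something defined on the slides):

import random

def nearest_neighbour_tour(vertices, dist):
    """Nearest Neighbour Heuristic: start at a random vertex, repeatedly follow a
    minimal-weight edge to a yet unvisited vertex, then close the Hamiltonian cycle."""
    unvisited = set(vertices)
    tour = [random.choice(list(vertices))]
    unvisited.remove(tour[0])
    while unvisited:
        nxt = min(unvisited, key=lambda v: dist(tour[-1], v))
        unvisited.remove(nxt)
        tour.append(nxt)
    tour.append(tour[0])  # add the initial vertex to the end of the path
    return tour

Each step is a greedy choice point; the backtracking variant discussed below revisits exactly these choice points.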


Systematic search:

- traverse search space for given problem instance in a systematic manner
- complete: guaranteed to eventually find (optimal) solution, or to determine that no solution exists

Local Search:

- start at some position in search space
- iteratively move from position to neighbouring position
- typically incomplete: not guaranteed to eventually find (optimal) solutions, cannot determine insolubility with certainty


Example: Uninformed random walk for SAT

procedure URW-for-SAT(F, maxSteps)
  input: propositional formula F, integer maxSteps
  output: model of F or ∅
  choose assignment a of truth values to all variables in F uniformly at random;
  steps := 0;
  while not(a satisfies F) and (steps < maxSteps) do
    randomly select variable x in F;
    change value of x in a;
    steps := steps + 1;
  end
  if a satisfies F then
    return a
  else
    return ∅
  end
end URW-for-SAT


Local search ≠ perturbative search:

- Construction heuristics can be seen as local search methods, e.g., the Nearest Neighbour Heuristic for TSP.

  Note: Many high-performance local search algorithms combine constructive and perturbative search.

- Perturbative search can provide the basis for systematic search methods.


Tree search

- Combination of constructive search and backtracking, i.e., revisiting of choice points after construction of complete candidate solutions.
- Performs systematic search over constructions.
- Complete, but visiting all candidate solutions becomes rapidly infeasible with growing size of problem instances.


Example: NNH + Backtracking

- Construct complete candidate round trip using NNH.
- Backtrack to most recent choice point with unexplored alternatives.
- Complete tour using NNH (possibly creating new choice points).
- Recursively iterate backtracking and completion.


Efficiency of tree search can be substantially improved by pruning choices that cannot lead to (optimal) solutions.

Example: Branch & bound / A* search for TSP

- Compute lower bound on length of completion of given partial round trip.
- Terminate search on branch if length of current partial round trip + lower bound on length of completion exceeds length of shortest complete round trip found so far.
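
A compact branch & bound sketch for the symmetric TSP; the lower bound used here — the cheapest way of leaving the current vertex and every remaining vertex — is one simple choice among many, and the routine is only an illustration of the idea, not the slides' algorithm:

import math

def tsp_branch_and_bound(vertices, dist):
    """Exact TSP by constructive search with backtracking and pruning.
    dist(u, v) is a symmetric edge-weight function; returns (length, tour)."""
    start, best = vertices[0], [math.inf, None]

    def lower_bound(last, remaining):
        # cheapest edge leaving `last` plus cheapest edge leaving each remaining vertex
        bound = min(dist(last, v) for v in remaining)
        for u in remaining:
            bound += min(dist(u, v) for v in (remaining - {u}) | {start})
        return bound

    def extend(partial, length, remaining):
        if not remaining:                                  # close the round trip
            total = length + dist(partial[-1], start)
            if total < best[0]:
                best[0], best[1] = total, partial + [start]
            return
        if length + lower_bound(partial[-1], remaining) >= best[0]:
            return                                         # prune: cannot beat incumbent
        for v in sorted(remaining, key=lambda v: dist(partial[-1], v)):  # NNH-like ordering
            extend(partial + [v], length + dist(partial[-1], v), remaining - {v})

    extend([start], 0.0, set(vertices[1:]))
    return best[0], best[1]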


Variations on simple backtracking:

- Dynamic selection of solution components in construction or choice points in backtracking.
- Backtracking to other than most recent choice points (back-jumping).
- Randomisation of construction method or selection of choice points in backtracking → randomised systematic search.


Systematic vs Local Search:

- Completeness: Advantage of systematic search, but not always relevant, e.g., when existence of solutions is guaranteed by construction or in real-time situations.
- Any-time property: Positive correlation between run-time and solution quality or probability; typically more readily achieved by local search.
- Complementarity: Local and systematic search can be fruitfully combined, e.g., by using local search for finding solutions whose optimality is proven using systematic search.


Systematic search is often better suited when ...

- proofs of insolubility or optimality are required;
- time constraints are not critical;
- problem-specific knowledge can be exploited.

Local search is often better suited when ...

- reasonably good solutions are required within a short time;
- parallel processing is used;
- problem-specific knowledge is rather limited.


Stochastic Local Search

Many prominent local search algorithms use randomised choices in generating and modifying candidate solutions.

These stochastic local search (SLS) algorithms are one of the most successful and widely used approaches for solving hard combinatorial problems.

Some well-known SLS methods and algorithms:

- Evolutionary Algorithms
- Simulated Annealing
- Lin-Kernighan Algorithm for TSP


Stochastic local search — global view

[Figure: neighbourhood graph with current search position c and (optimal) solution s.]

- vertices: candidate solutions (search positions)
- edges: connect neighbouring positions
- s: (optimal) solution
- c: current search position


Stochastic local search — local view

[Figure: local view of the neighbourhood graph around the current search position c.]

Next search position is selected from the local neighbourhood based on local information, e.g., heuristic values.


Definition: Stochastic Local Search Algorithm (1)

For given problem instance π:

- search space S(π) (e.g., for SAT: set of all complete truth assignments to propositional variables)
- solution set S′(π) ⊆ S(π) (e.g., for SAT: models of given formula)
- neighbourhood relation N(π) ⊆ S(π) × S(π) (e.g., for SAT: neighbouring variable assignments differ in the truth value of exactly one variable)


Definition: Stochastic Local Search Algorithm (2)

- set of memory states M(π) (may consist of a single state, for SLS algorithms that do not use memory)
- initialisation function init: ∅ ↦ D(S(π) × M(π)) (specifies probability distribution over initial search positions and memory states)
- step function step: S(π) × M(π) ↦ D(S(π) × M(π)) (maps each search position and memory state onto probability distribution over subsequent, neighbouring search positions and memory states)
- termination predicate terminate: S(π) × M(π) ↦ D({⊤, ⊥}) (determines the termination probability for each search position and memory state)


procedure SLS-Decision(π)
  input: problem instance π ∈ Π
  output: solution s ∈ S′(π) or ∅
  (s, m) := init(π);
  while not terminate(π, s, m) do
    (s, m) := step(π, s, m);
  end
  if s ∈ S′(π) then
    return s
  else
    return ∅
  end
end SLS-Decision


procedure SLS-Minimisation(π′)
  input: problem instance π′ ∈ Π′
  output: solution ŝ ∈ S′(π′) or ∅
  (s, m) := init(π′);
  ŝ := s;
  while not terminate(π′, s, m) do
    (s, m) := step(π′, s, m);
    if f(π′, s) < f(π′, ŝ) then
      ŝ := s;
    end
  end
  if ŝ ∈ S′(π′) then
    return ŝ
  else
    return ∅
  end
end SLS-Minimisation


Note:

- Procedural versions of init, step and terminate implement sampling from respective probability distributions.
- Memory state m can consist of multiple independent attributes, i.e., M(π) := M1 × M2 × ... × Ml(π).
- SLS algorithms realise Markov processes: behaviour in any search state (s, m) depends only on current position s and (limited) memory m.


Example: Uninformed random walk for SAT (1)

- search space S: set of all truth assignments to variables in given formula F
- solution set S′: set of all models of F
- neighbourhood relation N: 1-flip neighbourhood, i.e., assignments are neighbours under N iff they differ in the truth value of exactly one variable
- memory: not used, i.e., M := {0}


Example: Uninformed random walk for SAT (continued)

- initialisation: uniform random choice from S, i.e., init()(a′, m) := 1/#S for all assignments a′ and memory states m
- step function: uniform random choice from current neighbourhood, i.e., step(a, m)(a′, m) := 1/#N(a) for all assignments a and memory states m, where N(a) := {a′ ∈ S | N(a, a′)} is the set of all neighbours of a.
- termination: when model is found, i.e., terminate(a, m) := 1 if a is a model of F, and 0 otherwise.


Definition:

- neighbourhood (set) of candidate solution s: N(s) := {s′ ∈ S | N(s, s′)}
- neighbourhood graph of problem instance π: GN(π) := (S(π), N(π))

Note: Diameter of GN = worst-case lower bound for number of search steps required for reaching (optimal) solutions.

Example:

SAT instance with n variables, 1-flip neighbourhood: GN = n-dimensional hypercube; diameter of GN = n.


Definition:

k-exchange neighbourhood: candidate solutions s, s′ are neighbours iff s differs from s′ in at most k solution components

Examples:

- 1-flip neighbourhood for SAT (solution components = single variable assignments)
- 2-exchange neighbourhood for TSP (solution components = edges in given graph)


Search steps in the 2-exchange neighbourhood for the TSP

[Figure: a 2-exchange step on a round trip through vertices u1, u2, u3, u4: two edges of the tour are removed and replaced by two new edges, reversing the tour segment in between.]
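
On a tour stored as a vertex list, a 2-exchange step is a segment reversal. A minimal Python sketch (illustrative; i < j select the two edges to be removed, and the closing edge back to the first vertex is implicit):

def two_exchange(tour, i, j):
    """Remove the edges (tour[i], tour[i+1]) and (tour[j], tour[j+1]) and reconnect
    the tour with (tour[i], tour[j]) and (tour[i+1], tour[j+1]) by reversing the
    segment tour[i+1 .. j]."""
    assert 0 <= i < j < len(tour)
    return tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]

print(two_exchange(["u1", "u2", "u3", "u4"], 0, 2))
# ['u1', 'u3', 'u2', 'u4']: edges (u1,u2), (u3,u4) replaced by (u1,u3), (u2,u4)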


Definition:

- Search step (or move): pair of search states s, s′ for which s′ can be reached from s in one step, i.e., N(s, s′) and step(s, m)(s′, m′) > 0 for some memory states m, m′ ∈ M.
- Search trajectory: finite sequence of search states (s0, s1, ..., sk) such that (s_{i-1}, s_i) is a search step for any i ∈ {1, ..., k} and the probability of initialising the search at s0 is greater than zero, i.e., init(s0, m) > 0 for some memory state m ∈ M.
- Search strategy: specified by init and step function; to some extent independent of problem instance and other components of SLS algorithm.


Uninformed Random Picking

- N := S × S
- does not use memory
- init, step: uniform random choice from S, i.e., for all s, s′ ∈ S, init(s) := step(s)(s′) := 1/#S

Uninformed Random Walk

- does not use memory
- init: uniform random choice from S
- step: uniform random choice from current neighbourhood, i.e., for all s, s′ ∈ S, step(s)(s′) := 1/#N(s) if N(s, s′), and 0 otherwise

Note: These uninformed SLS strategies are quite ineffective, but play a role in combination with more directed search strategies.


Evaluation function:

- function g(π): S(π) ↦ ℝ that maps candidate solutions of a given problem instance π onto real numbers, such that global optima correspond to solutions of π;
- used for ranking or assessing neighbours of current search position to provide guidance to search process.

Evaluation vs objective functions:

- Evaluation function: part of SLS algorithm.
- Objective function: integral part of optimisation problem.
- Some SLS methods use evaluation functions different from given objective function (e.g., dynamic local search).


Iterative Improvement (II)

- does not use memory
- init: uniform random choice from S
- step: uniform random choice from improving neighbours, i.e., step(s)(s′) := 1/#I(s) if s′ ∈ I(s), and 0 otherwise, where I(s) := {s′ ∈ S | N(s, s′) ∧ g(s′) < g(s)}
- terminates when no improving neighbour available (to be revisited later)
- different variants through modifications of step function (to be revisited later)

Note: II is also known as iterative descent or hill-climbing.


Example: Iterative Improvement for SAT (1)

- search space S: set of all truth assignments to variables in given formula F
- solution set S′: set of all models of F
- neighbourhood relation N: 1-flip neighbourhood (as in Uninformed Random Walk for SAT)
- memory: not used, i.e., M := {0}
- initialisation: uniform random choice from S, i.e., init()(a′) := 1/#S for all assignments a′


Example: Iterative Improvement for SAT (continued)

- evaluation function: g(a) := number of clauses in F that are unsatisfied under assignment a (Note: g(a) = 0 iff a is a model of F.)
- step function: uniform random choice from improving neighbours, i.e., step(a)(a′) := 1/#I(a) if a′ ∈ I(a), and 0 otherwise, where I(a) := {a′ | N(a, a′) ∧ g(a′) < g(a)}
- termination: when no improving neighbour is available, i.e., terminate(a) = ⊤ if I(a) = ∅, and ⊥ otherwise.
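
This specification translates almost line by line into code. A Python sketch of II for SAT using the clause-list encoding from the earlier SAT example (illustrative, not a reference implementation):

import random

def g(cnf, a):
    """Evaluation function: number of clauses unsatisfied under assignment a."""
    return sum(not any(a[abs(lit)] == (lit > 0) for lit in clause) for clause in cnf)

def iterative_improvement_sat(cnf, n_vars):
    """Iterative Improvement with the 1-flip neighbourhood: start from a uniformly
    random assignment, flip a uniformly chosen improving variable, and stop in a
    local minimum of g (g == 0 means a model has been found)."""
    a = {x: random.choice([False, True]) for x in range(1, n_vars + 1)}
    while True:
        current = g(cnf, a)
        improving = []
        for x in a:              # examine all 1-flip neighbours
            a[x] = not a[x]
            if g(cnf, a) < current:
                improving.append(x)
            a[x] = not a[x]      # undo the flip
        if not improving:
            return a, current    # no improving neighbour: local minimum
        x = random.choice(improving)
        a[x] = not a[x]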


Incremental updates (aka delta evaluations)

- Key idea: calculate effects of differences between current search position s and neighbours s′ on evaluation function value.
- Evaluation function values often consist of independent contributions of solution components; hence, g(s) can be efficiently calculated from g(s′) by differences between s and s′ in terms of solution components.
- Typically crucial for the efficient implementation of II algorithms (and other SLS techniques).


Example: Incremental updates for TSP

- solution components = edges of given graph G
- standard 2-exchange neighbourhood, i.e., neighbouring round trips p, p′ differ in two edges
- w(p′) := w(p) − (total weight of edges in p but not in p′) + (total weight of edges in p′ but not in p)

Note: Constant time (4 arithmetic operations), compared to linear time (n arithmetic operations for graph with n vertices) for computing w(p′) from scratch.
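
For the 2-exchange step of the earlier sketch, the incremental update touches only the four affected edge weights; this is an illustration under the same assumptions (symmetric weight function w(u, v)):

def delta_two_exchange(tour, i, j, w):
    """Weight change caused by two_exchange(tour, i, j): the two removed edges are
    subtracted and the two added edges are added -- constant time, independent of n."""
    a, b = tour[i], tour[i + 1]
    c, d = tour[j], tour[(j + 1) % len(tour)]   # closing edge wraps around
    return (w(a, c) + w(b, d)) - (w(a, b) + w(c, d))

# w(p') = w(p) + delta_two_exchange(p, i, j, w)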


Definition:

- Local minimum: search position without improving neighbours w.r.t. given evaluation function g and neighbourhood N, i.e., position s ∈ S such that g(s) ≤ g(s′) for all s′ ∈ N(s).
- Strict local minimum: search position s ∈ S such that g(s) < g(s′) for all s′ ∈ N(s).
- Local maxima and strict local maxima: defined analogously.


Simple mechanisms for escaping from local optima:

- Restart: re-initialise search whenever a local optimum is encountered. (Often rather ineffective due to cost of initialisation.)
- Non-improving steps: in local optima, allow selection of candidate solutions with equal or worse evaluation function value, e.g., using minimally worsening steps. (Can lead to long walks in plateaus, i.e., regions of search positions with identical evaluation function value.)

Note: Neither of these mechanisms is guaranteed to always escape effectively from local optima.


Diversification vs Intensification

- Goal-directed and randomised components of SLS strategy need to be balanced carefully.
- Intensification: aims to greedily increase solution quality or probability, e.g., by exploiting the evaluation function.
- Diversification: aims to prevent search stagnation by preventing the search process from getting trapped in confined regions.

Examples:

- Iterative Improvement (II): intensification strategy.
- Uninformed Random Walk (URW): diversification strategy.

Balanced combination of intensification and diversification mechanisms forms the basis for advanced SLS methods.
