Heuristic Optimization Methods: VNS, GLS

  • Heuristic Optimization Methods: VNS, GLS

  • Summary of the Previous Lecture
    Simulated Annealing: overview and repetition
    Threshold Accepting: deterministic variation of SA
    Generalized Hill-Climbing Algorithm: generalization of SA
    Iterated Local Search: searches in the space of local optima

  • Agenda
    Variable Neighborhood Search
    Guided Local Search

  • FYI
    Further information about the methods discussed in this course can be found easily.
    Just ask if you are interested in reading more about any particular method/technique.
    Also, if you have heard about some method that you think is interesting, we can include it in the lectures.
    Some methods/topics you should know well: Simulated Annealing, Tabu Search, Genetic Algorithms, Scatter Search, ... You will be given hand-outs about these.
    For others you only need to know the big picture: TA, Generalized Hill-Climbing, ILS, VNS, GLS, ...

  • Variable Neighborhood Search (1)
    Based on the following central observations:
    A local minimum w.r.t. one neighborhood structure is not necessarily locally minimal w.r.t. another neighborhood structure.
    A global minimum is locally optimal w.r.t. all neighborhood structures.
    For many problems, local minima with respect to one or several neighborhoods are relatively close to each other.

  • Variable Neighborhood Search (2)
    Basic principle: change the neighborhood during the search.
    Variations:
    Variable Neighborhood Descent
    Basic Variable Neighborhood Search
    Reduced Variable Neighborhood Search
    Variable Neighborhood Decomposition Search

  • Variable Neighborhood Search (3)
    The first task is to generate the different neighborhood structures.
    For many problems different neighborhood structures already exist, e.g., for the VRP: 2-Opt, Cross, Swap, Exchange, ...
    Find neighborhoods that depend on some parameter: k-Opt (k = 2, 3, ...).
    Flip-neighborhoods can be extended to double-flip, triple-flip, etc.
    Some neighborhoods are associated with distance measures: the distance can be increased.


  • Variable Neighborhood Descent
    The final solution is locally optimal with respect to all neighborhoods N1, N2, ..., Nkmax.
    First Improvement could be used instead of Best Improvement.
    Typically, neighborhoods are ordered from smallest to largest.
    If the Local Search algorithms can be treated as black-box procedures:
    Sort the procedures
    Apply them in the given order
    Possibly reiterate, starting from the first one
    Advantage: both solution quality and speed.
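
A minimal sketch of the VND loop, assuming each neighborhood is supplied as a procedure that returns an improving neighbor (or None if the current solution is locally optimal for that neighborhood); the names are illustrative, not from the lecture:

```python
def variable_neighborhood_descent(s, f, improve_in):
    """improve_in: list of procedures, one per neighborhood N_1..N_kmax,
    each returning an improving neighbor of s, or None if none exists."""
    k = 0
    while k < len(improve_in):
        s_new = improve_in[k](s)
        if s_new is not None and f(s_new) < f(s):
            s, k = s_new, 0      # improvement found: restart from the first neighborhood
        else:
            k += 1               # no improvement in N_k: move on to the next neighborhood
    return s                     # locally optimal w.r.t. all neighborhoods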

  • Basic Variable Neighborhood Search
    Use neighborhood structures N1, N2, ..., Nkmax.
    Standard (Best Improvement) Local Search is applied in N1.
    The other neighborhoods are explored only randomly.
    Explorations of the other neighborhoods are perturbations, as in ILS.
    The perturbation is systematically varied.
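
A minimal sketch of Basic VNS, assuming shake[k] draws a random solution from neighborhood N_{k+1} and local_search is a standard best-improvement descent; the names and the simple iteration budget are assumptions for illustration:

```python
def basic_vns(s0, f, shake, local_search, max_rounds=100):
    """shake: list of procedures; shake[k](s) returns a random solution in N_{k+1}(s)."""
    best = local_search(s0)
    for _ in range(max_rounds):
        k = 0
        while k < len(shake):
            s_pert = shake[k](best)        # shaking: random point in neighborhood N_{k+1}
            s_loc = local_search(s_pert)   # best-improvement local search (in N_1)
            if f(s_loc) < f(best):
                best, k = s_loc, 0         # improvement: move there and restart with k = 1
            else:
                k += 1                     # no improvement: try a larger neighborhood
    return best
```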


  • Variations of Basic VNS
    The order of the neighborhoods:
    Forward VNS: start with k = 1 and increase
    Backward VNS: start with k = kmax and decrease
    Extended version: parameters kmin and kstep; set k = kmin, and increase k by kstep if there is no improvement
    Accepting worse solutions:
    With some probability
    Skewed VNS: accept if f(s*) - α·d(s, s*) < f(s), where d(s, s*) measures the distance between the solutions
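
As a small illustration of the Skewed VNS acceptance rule, a sketch in which the distance function d and the weight alpha are problem-dependent placeholders:

```python
def skewed_accept(s_new, s_cur, f, d, alpha):
    """Accept s_new even if it is somewhat worse, provided it lies far from s_cur."""
    return f(s_new) - alpha * d(s_new, s_cur) < f(s_cur)
```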

  • Final Notes on VNS (1)
    Other variations exist:
    Reduced VNS: same as Basic VNS, but without the Local Search procedure; can be fast
    Variable Neighborhood Decomposition Search: fix some components of the solution, and perform Local Search on the remaining free components

  • Final Notes on VNS (2)
    ILS and VNS are based on different underlying philosophies:
    ILS: perturb and do Local Search
    VNS: exploit different neighborhoods
    ILS and VNS are also similar in many respects.
    ILS can be more flexible w.r.t. the optimization of the interaction between modules.
    VNS gives rise to approaches such as VND for obtaining more powerful Local Search procedures.

  • Conclusions about ILS and VNS
    Based on simple principles
    Easy to understand
    Basic versions are easy to implement
    Robust
    Highly effective

  • Guided Local Search
    A general strategy (metaheuristic) used to guide a neighborhood search.
    Tries to overcome local optima by removing them: it changes the topography of the search space.
    Uses an extended move evaluation function: the original objective function plus penalties.
    Focuses on promising parts of the search space.

  • Features of a Solution
    GLS assumes that we can find some features of a solution that we can penalize.
    What is a feature?
    Something that is characteristic of the solution
    Something that might be different in another solution
    Problem dependent
    Examples:
    TSP: whether city A follows immediately after city B in the tour or not
    Knapsack: whether item 5 is included or not

  • Features - Example: TSP
    A solution (trip) is an ordered set of edges.
    An edge is a good choice of feature:
    It is either in a solution or not
    Feature cost = edge length
    Let the set of all edges e_ij be the features; the cost of feature e_ij is given by d_ij in the distance matrix.
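
A small illustration of edge features for the TSP; the tour representation (a list of city indices), the helper names, and the assumption of a symmetric distance matrix are all choices made for this sketch:

```python
def tour_edges(tour):
    """Return the set of undirected edges used by a tour given as a list of cities."""
    return {frozenset((tour[i], tour[(i + 1) % len(tour)])) for i in range(len(tour))}

def indicator(edge, tour):
    """I_i(s): 1 if the tour uses the edge (feature i), 0 otherwise."""
    return 1 if edge in tour_edges(tour) else 0

def feature_cost(edge, dist):
    """Cost c_i of an edge feature: its length d_ij from the (symmetric) distance matrix."""
    i, j = tuple(edge)
    return dist[i][j]
```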

  • Features & GLS
    The modification of the move evaluation in GLS is based on features.
    Each feature has a cost, which represents (directly or indirectly) its influence on the (extended) move evaluation function; the cost can be constant or variable (dependent on other features).
    GLS tries to avoid costly features.
    We use an indicator function as follows:
    I_i(s) = 1 if solution s has feature i
    I_i(s) = 0 if solution s does not have feature i

  • Extended Move Evaluation (1)
    Until now we have only seen the use of the objective function in move evaluations.
    We let f be the objective function; f(s) gives us the value of a solution s.
    We have always taken the best neighbor to be the neighbor s for which f(s) has the best value.
    This makes the search stop when a local optimum has been found.

  • Extended Move Evaluation (2)
    Let the set of features be denoted by F = {1, ..., G}.
    We have our indicator function:
    I_i(s) = 1 if solution s has feature i
    I_i(s) = 0 if solution s does not have feature i
    We create a penalty vector p = [p_i], i = 1, ..., G, where p_i is the number of times feature i has been penalized so far.
    The extended move evaluation function becomes h(s) = f(s) + λ · Σ_{i=1..G} p_i · I_i(s).

  • Extended Move Evaluation (3)
    The extended move evaluation function has two parts:
    The original objective function f(s)
    A penalty term, which penalizes certain features of the solution
    The parameter λ adjusts the influence of the penalties.
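
A direct transcription of h(s) = f(s) + λ · Σ_i p_i · I_i(s) as code; the container types (a list of feature ids, a dict of penalties, an indicator function) are illustrative assumptions:

```python
def extended_eval(s, f, features, penalties, indicator, lam):
    """h(s) = f(s) + lambda * sum over features i of p_i * I_i(s)."""
    penalty_term = sum(penalties[i] * indicator(i, s) for i in features)
    return f(s) + lam * penalty_term
```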

  • Penalties (1)
    The penalties are initially equal to 0.
    When the search has reached a local optimum (with respect to the extended move evaluation function), the penalty is increased for some of the features of the current (locally optimal) solution.
    This makes the current solution look worse in comparison to some neighboring solutions (those that do not have the penalized features, and thus do not get the penalty).

  • Penalties (2)
    How do we select which feature to penalize?
    Define the utility of a feature i in solution s as follows: u_i(s) = I_i(s) · c_i / (1 + p_i)
    Here, c_i is the cost of the feature (in the objective function) and p_i is its current penalty.
    In a local optimum s, increase the penalty for the feature that has the highest utility value u_i(s).
    NB: penalties are only adjusted when the search has reached a local optimum, and only for features present in the local optimum.
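
A sketch of the penalty update performed in a local optimum, following the utility formula u_i(s) = I_i(s) · c_i / (1 + p_i); breaking ties by taking the first maximizer is an assumption of this sketch:

```python
def update_penalties(s, features, cost, penalties, indicator):
    """In a local optimum s, increase the penalty of the feature with the highest utility."""
    best_i = max(features, key=lambda i: indicator(i, s) * cost[i] / (1 + penalties[i]))
    penalties[best_i] += 1
```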


  • Comments on GLS
    Uses the extended move evaluation function when deciding which neighbor is best.
    Could also use a First Improvement strategy.
    Uses the normal objective function value to keep track of the best solution found: if f(current) < f(best), the incumbent is updated.
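
Putting the pieces together, a minimal GLS loop could look like the following; it reuses extended_eval and update_penalties from the sketches above, assumes local_search(s, eval_fn) descends with respect to the given evaluation function, and uses an arbitrary iteration budget. This is an illustrative sketch, not the lecture's reference implementation:

```python
def guided_local_search(s0, f, features, cost, indicator, local_search, lam, iters=100):
    penalties = {i: 0 for i in features}                        # all penalties start at 0
    h = lambda s: extended_eval(s, f, features, penalties, indicator, lam)
    best = s = local_search(s0, h)                              # descend w.r.t. h, not f
    for _ in range(iters):
        update_penalties(s, features, cost, penalties, indicator)  # penalize in local optimum of h
        s = local_search(s, h)                                  # continue from the same point
        if f(s) < f(best):                                      # the incumbent is judged by f
            best = s
    return best
```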
  • How to Select Lambda
    The control parameter λ dictates the influence of the penalty on the extended move evaluation function.
    Low value: intensification
    High value: diversification
    It can be problematic to find a good value for λ, even though the method is robust over a range of values.
    Generally, λ is set as a fraction of the objective function value at a local minimum: λ = a · f(local minimum) / (#features present in the local minimum), where a is a tuning coefficient.
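
A small helper capturing this rule of thumb; the function name and argument names are hypothetical:

```python
def suggest_lambda(a, f_local_min, n_features_in_local_min):
    """Rule of thumb: lambda = a * f(local minimum) / (#features in the local minimum)."""
    return a * f_local_min / n_features_in_local_min
```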

  • GLS - Example: TSP (1)
    Features: edges. Cost of the features: edge length.
    The feature associated with e26 is penalized in the solution shown on the slide (figure with the tour and distance matrix omitted).
    In the next round of LS the move evaluation function is as before, f(s), except when e26 is in the solution, in which case the value becomes f(s) + λ.

  • GLS - Example: TSP (2)
    After the next LS run, e34 is penalized as well.
    After this the move evaluation function is as before, f(s), except that λ is added for each of the penalized edges e26 and e34 that appears in the solution.

  • Applications of GLS
    A fairly recent metaheuristic (evolved from another method called GENET).
    Some applications:
    Workforce scheduling at BT (Tsang, Voudouris 1997)
    TSP (Voudouris, Tsang 1998)
    VRP (Kilby et al. 1997)
    Function optimization (Voudouris 1998)

  • GLS has been used for function optimization, with good results on several benchmark functions (list omitted).

  • Possibilities and Extensions
    Limited lifetime of penalties
    Diminishing penalties
    Awards in addition to penalties
    Automatic regulation of λ
    New utility functions for selecting the features to penalize

  • GLS versus SA
    It is difficult in SA to find the right cooling schedule (it is problem dependent):
    High temperature gives bad solutions
    Low temperature gives convergence to a local minimum
    SA is non-deterministic.
    GLS visits local minima, but can escape them, and not by random uphill moves as in SA.
    GLS is deterministic.
    GLS does not converge to a local minimum; penalties are added until the search escapes.

  • GLS vs. Tabu Search (1)
    GLS is often said to be closely related to Tabu Search.
    Both have mechanisms to guide the Local Search away from local optima:
    GLS penalizes features in the solutions
    TS bans (makes taboo) features in the solutions
    Both incorporate memory structures:
    GLS has the accumulated penalties
    TS has different memory structures: short term, long term, frequency, recency, ...

  • GLS vs. Tabu Search (2)

    Information that is used:
      Tabu Search: a modified neighborhood
      GLS: a modified move evaluation function

    Memory:
      Tabu Search: tabu list, frequency based memory
      GLS: penalizing of features

    When?
      Tabu Search: every iteration, or every N'th iteration
      GLS: in local minima defined by the extended move evaluation function

    The "nature" of the search:
      Tabu Search: avoid stops in local minima and reversing of moves; diversification by penalizing moves that are done often, or attributes appearing often in solutions
      GLS: escape local minima; distribute the search effort based on the cost of features

    Intensification / diversification:
      GLS: the parameter λ

    Reduction of the neighborhood:
      Tabu Search: candidate lists
      GLS: Fast Local Search (FLS)

  • Summary of Today's Lecture
    VNS
    Idea: utilize different neighborhoods
    Several variations, e.g. Variable Neighborhood Descent
    GLS
    Idea: remove local optima by changing the evaluation of solutions
    Has some similarities with Tabu Search

  • Topics for the Next Lecture
    An introduction to Tabu Search
    Tabu Criteria
    Aspiration
    Short Term Memory
    Long Term Memory
    Intensification
    Diversification
