Lecture 1: An Introduction to Optimization –
Classification and Case Study
An Introduction to Metaheuristics: Produced by Qiangfu Zhao (Since 2012), All rights reserved (C) Lec01/1
Un-constrained Optimization
β’ Generally speaking, an optimization problem has an objective function f(x).
β’ The problem is represented by
min (or max) f(x), over all x
• This is called an unconstrained optimization problem.
Un-constrained Optimization
• Usually, x is a "point" in an N-dimensional Euclidean space R^N, and f(x) is a point in R^M.
• In this course, we study only the case in which M = 1. That is, we have only one objective to optimize.
β’ Some special considerations are needed to extend the results obtained here to βmultiple objectiveβ cases.
β’ Interested students may also study optimization in βnon-Euclideanβ spaces (i.e. manifolds).
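As a concrete illustration of unconstrained minimization, the sketch below applies plain gradient descent to a made-up quadratic objective f(x) = (x1 − 1)^2 + (x2 + 2)^2, whose minimum is clearly at (1, −2). The objective, learning rate, and step count are all assumptions chosen for this example, not part of the lecture.

```python
def grad_f(x):
    """Gradient of the assumed objective f(x) = (x[0]-1)^2 + (x[1]+2)^2."""
    return [2 * (x[0] - 1), 2 * (x[1] + 2)]

def gradient_descent(grad, x0, lr=0.1, steps=200):
    """Repeatedly step opposite the gradient to decrease f."""
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

x_star = gradient_descent(grad_f, [5.0, 5.0])
print(x_star)  # converges close to [1.0, -2.0]
```

For a convex objective like this one, gradient descent reliably reaches the unique minimum; later slides show why that is no longer guaranteed once local optima appear.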
Constrained Optimization
• The domain can be a subset D of R^N.
• We then have a constrained optimization problem:

  min (or max) f(x)
  subject to x ∈ D

• D in turn can be defined by some functions:
  – x_i > 0, i = 1, 2, …
  – g_j(x) > 0, j = 1, 2, …
Linear Programming
• If both f(x) and all the g_j(x) are linear functions, we have a linear optimization problem, usually called linear programming (LP).
• For LP, very efficient algorithms already exist, and meta-heuristics are not needed.
Non-linear Programming
• If f(x) or any g_j(x) is non-linear, we have a non-linear optimization problem, often called non-linear programming (NLP).
β’ Many methods have been proposed to solve this class of problems.
• However, conventional methods usually find only locally optimal solutions. Meta-heuristic methods are useful for finding globally optimal solutions.
Local optima and global optima
• For a minimization problem:
  – A solution x* is a local optimum if f(x*) ≤ f(x) for all x in the ε-neighborhood of x*, where ε > 0 is a real number giving the radius of the neighborhood.
  – A solution x* is a global optimum if f(x*) ≤ f(x) for all x in the search space (problem domain).
β’ Meta-heuristics are useful for obtaining global optimal solutions efficiently.
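The difference between local and global optima can be seen numerically. In this sketch, the test function f(x) = (x^2 − 1)^2 + 0.3x is an assumed example with a local minimum near x = +1 and a better global minimum near x = −1; plain gradient descent gets stuck in whichever basin it starts in.

```python
def f(x):
    """Assumed test function with two minima of different depths."""
    return (x * x - 1) ** 2 + 0.3 * x

def df(x):
    """Derivative of f, used by gradient descent."""
    return 4 * x * (x * x - 1) + 0.3

def descend(x, lr=0.01, steps=2000):
    """Gradient descent from a given start point."""
    for _ in range(steps):
        x -= lr * df(x)
    return x

x_local = descend(0.9)    # lands near +0.96 (local optimum only)
x_global = descend(-0.9)  # lands near -1.04 (the global optimum)
print(f(x_local) > f(x_global))  # True: the start point decided the outcome
```

This start-point sensitivity is exactly the weakness of conventional descent methods that meta-heuristics (e.g., with randomized restarts or jumps) try to overcome.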
Example 1: Linear Programming
• Two materials are used for making two products.
• The prices of the products are 25 and 31 (in million yen), and those of the materials are 0.5 and 0.8 (in million yen).
• Suppose that we produce x1 units of product 1, and x2 units of product 2.
• We can get 25*x1 + 31*x2 million yen by selling the products.
• On the other hand, we must pay (7*x1 + 5*x2)*0.5 + (4*x1 + 8*x2)*0.8 million yen to buy the materials.
             Material used in Product 1   Material used in Product 2
Material 1               7                            5
Material 2               4                            8
Example 1: Linear Programming
β’ The problem can be formulated as follows:
max f(x1, x2) = 18.3*x1 + 22.1*x2
s.t. x1 > 0; x2 > 0;
     6.7*x1 + 8.9*x2 < B

(Here 18.3 = 25 − (7×0.5 + 4×0.8) and 22.1 = 31 − (5×0.5 + 8×0.8) are the per-unit profits, 6.7 and 8.9 are the per-unit material costs, and B is the budget.)
β’ The first set of constraints means that both products should be produced to satisfy social needs; and the second constraint is the budget limitation.
β’ This is a typical linear programming problem, and can be solved efficiently using the well-known simplex algorithm.
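As a sanity check of the formulation, the sketch below solves the example by comparing the vertices (corner points) of the feasible region, which is the core idea behind the simplex method: for a linear objective the optimum always lies at a vertex. The budget B = 100 is an assumed value, since the slide leaves B unspecified, and the strict inequalities are relaxed to non-strict ones to make the region closed.

```python
# Assumed budget; the lecture leaves B unspecified.
B = 100.0

def profit(x1, x2):
    """Objective from the slide: 18.3*x1 + 22.1*x2."""
    return 18.3 * x1 + 22.1 * x2

# Vertices of the region {x1 >= 0, x2 >= 0, 6.7*x1 + 8.9*x2 <= B}.
vertices = [(0.0, 0.0), (B / 6.7, 0.0), (0.0, B / 8.9)]
best = max(vertices, key=lambda v: profit(*v))
print(best, profit(*best))  # all budget goes to product 1 in this instance
```

Product 1 wins here because its profit per unit of budget (18.3/6.7 ≈ 2.73) exceeds product 2's (22.1/8.9 ≈ 2.48); a real LP solver would reach the same vertex without enumerating all of them.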
Example 2: Non-linear programming
• Given n observations (x1, f(x1)), (x2, f(x2)), …, (xn, f(xn)) of an unknown function f(x).
• Find a polynomial p(x) = a0 + a1*x + a2*x^2 such that

  min E(a0, a1, a2) = Σ_{i=1}^{n} [f(xi) − p(xi)]^2 + λ R(p(x))

• Note that in this problem p(x) is also a function of a0, a1, and a2.
• The first term is the approximation error, and the second term is a regularization term that can make the solution better (e.g., smoother).
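Because the objective is quadratic in (a0, a1, a2), this particular fitting problem actually has a closed-form solution. The sketch below uses a ridge-style penalty R(p) = a0^2 + a1^2 + a2^2, which is one common but here assumed choice of regularizer, and fits made-up noiseless data:

```python
import numpy as np

def fit_quadratic(xs, ys, lam=1e-3):
    """Minimize sum_i (ys_i - p(xs_i))^2 + lam * ||a||^2 in closed form."""
    X = np.vander(xs, 3, increasing=True)   # design matrix: columns 1, x, x^2
    A = X.T @ X + lam * np.eye(3)           # normal equations plus ridge term
    return np.linalg.solve(A, X.T @ ys)     # coefficients a0, a1, a2

xs = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
ys = 1.0 + 2.0 * xs + 3.0 * xs**2           # assumed true f(x), no noise
a = fit_quadratic(xs, ys)
print(a)  # close to [1, 2, 3]
```

The point of the example: when the model is linear in its parameters, NLP machinery is unnecessary; it is non-linearity in the parameters (or non-smooth regularizers) that forces iterative or meta-heuristic search.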
Combinatorial optimization problems
• If f(x) or the g_j(x) cannot be given analytically (in closed form), we have a combinatorial problem.
• For example, if x takes M discrete values (e.g., integers), and if there are K variables, the number of all possible solutions will be M^K.
β’ It is difficult to check all possible solutions in order to find the best one(s).
β’ In such cases, meta-heuristics can provide efficient ways for obtaining good solutions using limited resources (e.g. time and memory space).
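The M^K growth is easy to demonstrate: with small assumed values M = 3 and K = 4, exhaustive enumeration is trivial, but the count explodes as K grows.

```python
from itertools import product

M, K = 3, 4                                   # assumed small example sizes
values = range(M)                             # the M discrete values
solutions = list(product(values, repeat=K))   # every assignment to K variables
print(len(solutions))                         # 3**4 = 81, still checkable
print(3 ** 20)                                # 3,486,784,401 already at K = 20
```

This is why exhaustive search is only a baseline: each extra variable multiplies the work by M, so the budget of "limited resources" mentioned above is exhausted almost immediately.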
Example 3: Traveling salesman problem (TSP)
• Given N users located in N different places (cities).
• The problem is to find a route such that the salesman visits every user once (and only once), starting from and returning to his own place (i.e., finding a Hamiltonian cycle).
[Figure: an example TSP tour, from Wikipedia]
Example 3: Traveling salesman problem (TSP)
β’ In TSP, we have a route map which can be represented by a graph.
β’ Each node is a user, and the edge between each pair of nodes has a cost (distance or time).
β’ The evaluation function to be minimized is the total cost of the route.
[Figure: a TSP route map represented as a graph, from Wikipedia]
For TSP, the number of all possible solutions is N!, and it is a well-known NP-hard combinatorial problem.
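A brute-force illustration of this blow-up: for an assumed 4-city symmetric cost matrix, fixing the start city and enumerating all orderings of the remaining cities is still feasible, but the number of orderings grows factorially with N.

```python
from itertools import permutations

# Assumed symmetric 4-city cost matrix (made-up distances).
COST = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]

def tour_cost(order):
    """Total cost of the cycle 0 -> order... -> 0."""
    tour = (0,) + order + (0,)
    return sum(COST[a][b] for a, b in zip(tour, tour[1:]))

# Enumerate every ordering of the non-start cities and keep the cheapest.
best = min(permutations(range(1, 4)), key=tour_cost)
print(best, tour_cost(best))  # cheapest tour costs 18 for this matrix
```

Here only 3! = 6 tours need checking, but at N = 20 cities the same loop would face 19! ≈ 1.2 × 10^17 orderings, which is exactly where meta-heuristics become attractive.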
NP-hard and NP-complete
• Problems that can be solved by a deterministic algorithm in polynomial time form the class P.
• NP is the class of decision problems that can be solved by a non-deterministic algorithm in polynomial time.
• A problem H is NP-hard if it is at least as hard as every problem in NP.
• NP-hard problems that are themselves in NP are called NP-complete.
• For NP-complete and NP-hard problems, meta-heuristics can often find good (if not provably optimal) solutions far more efficiently than exhaustive search.
[Figure: Venn diagram of P, NP, NP-complete, and NP-hard]
Example 4: The Knapsack problem
• The Knapsack problem is another NP-hard problem, defined by:
  – There are N objects;
  – Each object has a weight and a value;
  – The knapsack has a capacity;
  – The user has a quota (a minimum desired total value).
An Introduction to Metaheuristics: Produced by Qiangfu Zhao (Since 2012), All rights reserved (C) Lec01/15
The problem is to find a subset of the objects that fits into the knapsack and maximizes the total value.
Example 4: The Knapsack problem

KNAPSACK(in  OS : set of objects; QUOTA : number; CAPACITY : number;
         out S : set of objects; FOUND : boolean)
begin
  S := empty;
  total_value := 0; total_weight := 0; FOUND := false;
  pick an order L over the objects;
  loop
    choose an object O in L;
    add O to S;
    total_value := total_value + O.value;
    total_weight := total_weight + O.weight;
    if total_weight > CAPACITY then
      fail
    else if total_value >= QUOTA then
      FOUND := true;
      succeed;
    end
    delete all objects up to O from L;
  end
end
This is a non-deterministic algorithm: each time we run the program, we may get a different answer. By chance, we may get the best answer.
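A runnable, simplified Python rendering of this non-deterministic procedure: the "choose" step is simulated with a random pick, and a failed run returns None, so repeated runs can give different subsets. The object data, CAPACITY, and QUOTA are made-up, and the backtracking step ("delete all objects up to O") is omitted for brevity.

```python
import random

# Made-up instance: name -> (weight, value).
OBJECTS = {"a": (3, 4), "b": (4, 5), "c": (5, 7), "d": (2, 3)}
CAPACITY, QUOTA = 9, 10

def knapsack_once(rng):
    """One non-deterministic run: random choices, no backtracking."""
    remaining = list(OBJECTS)
    chosen, weight, value = [], 0, 0
    while remaining:
        name = rng.choice(remaining)     # the non-deterministic "choose" step
        remaining.remove(name)
        w, v = OBJECTS[name]
        weight += w
        value += v
        if weight > CAPACITY:
            return None                  # this run fails (over capacity)
        chosen.append(name)
        if value >= QUOTA:
            return chosen                # this run succeeds (quota reached)
    return None

rng = random.Random(0)
print([knapsack_once(rng) for _ in range(3)])  # runs may differ
```

Running the procedure many times and keeping the best successful answer is, in miniature, the strategy many meta-heuristics refine with smarter-than-uniform randomness.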
Example 5: Learning problems
• Many optimization problems related to machine learning (learning from a given set of training data) are NP-hard or NP-complete.
• Examples include:
  – Finding the smallest feature subset;
  – Finding the most informative training data set;
  – Finding the smallest decision tree;
  – Finding the best clusters;
  – Finding the best neural network;
  – Interpreting a learned neural network.
Homework
β’ Try to find some other examples of optimization problems (at least two) from the Internet.
• State whether the problems are NP-hard, NP-complete, in NP, or in P.
β’ Provide a solution (not necessarily the best one) for each of the problems.
• Summarize your answers in a PDF file, and submit a printed copy before next week's class.