Optimal capacitor placement in distribution systems by genetic algorithm




Gary Boone and Hsiao-Dong Chiang
School of Electrical Engineering, Cornell University, Ithaca, NY 14853, USA

The genetic algorithm uses a biological analogy to evolve a population of search-space points toward an optimal solution, and has been applied to several disciplines in engineering and the sciences. The algorithm is attractive due to its ease of implementation, its lack of differentiability requirements on the objective function, and its ability to find globally optimal solutions. These properties allow optimization of a practical formulation of the capacitor placement problem which includes the discrete nature of capacitor installations. In this paper, the genetic algorithm's structure, its application to the capacitor placement problem in distribution systems, and experimental numerical results are presented. Additionally, several implementation issues, including selection pressure, fitness scaling and ranking, unity crossover probability, and the selection of generalized control parameters, are examined in detail.

Keywords: genetic algorithm, power distribution, capacitor placement, artificial evolution

Received 4 May 1992; accepted 30 November 1992

I. Introduction
Capacitors are widely installed in distribution systems for reactive power compensation to achieve power and energy loss reduction, voltage regulation, and system capacity release. The extent of these benefits depends greatly on how the capacitors are placed and controlled on the system. The general capacitor placement problem is concerned with how to choose the locations, types, sizes, and control schemes for the capacitors in a general distribution system such that these benefits are achieved and/or maximized against the cost of the capacitors, while load constraints at each bus and operational constraints (e.g., voltage profile, current magnitude) at each node and each branch during varying loading conditions are satisfied.

In the past, considerable effort has been put into the capacitor placement problem, most of it centred on simpler versions of the general problem. The early approaches include those using the dynamic programming technique to handle the discrete nature of the capacitor size 1 and those employing analytical methods in conjunction with heuristics 2-7. Recently, the growing need for Distribution Automation and Control has regenerated interest in the capacitor placement problem. In Reference 8, Fawzi et al. incorporated the released substation kVA and the voltage rise at light-load level into a model developed by Neagle and Samson 2. Ponnavaikko and Prakasa Rao 9 proposed a model which considered the load growth, system capacity release, voltage rise at light-load level and the discrete nature of capacitor size, and used a local optimization technique called the method of local variations to solve the problem. In Reference 10, Kaplan presented a formulation for feeders with multiple laterals and suggested a heuristic solution algorithm. Another approach, pioneered by Grainger et al. 12, is to formulate the problem as a non-linear programming problem by treating the capacitor locations and sizes as continuous variables 11. Grainger and Civanlar formulated the capacitor placement and voltage regulator problem and proposed a decoupled solution methodology for general distribution systems 12. El-Kib et al. 13 extended the methodologies developed in Reference 11 to unbalanced three-phase feeders. In References 14 and 15, Baran and Wu presented a problem formulation similar to that of Grainger et al., a non-linear optimization problem, but incorporated the following directly into the model: (i) the distribution power flow equation, (ii) the constraints on node voltage magnitudes at different load levels and (iii) the discrete nature of capacitor locations. The resulting formulation represents a mixed-integer programming problem.



They developed a solution algorithm by using decomposition techniques and the Phase I - Phase II feasible direction method 15. Recently, Chiang et al. presented a general capacitor placement problem formulation by taking practical aspects of capacitors and the load and operational constraints at different load levels into consideration 16. This extends the formulation in References 14 and 15 by also incorporating the following: (i) the cost associated with capacitor placement is considered to be a step-like function (rather than a continuously differentiable function, since capacitors in practice are grouped in banks of standard discrete capacities) and (ii) the capacitor sizes and control settings are treated as discrete variables. The result is a combinatorial optimization problem with a non-differentiable objective function, making most non-linear optimization techniques awkward to apply. They developed a solution algorithm based on the simulated annealing technique to solve the general capacitor placement problem in Reference 17. An attractive feature of this solution algorithm is its ability to find the globally optimal solution for the problem.

This paper considers a general distribution system with n_c possible locations for capacitors and n_t different loading conditions. The objective function consists of two terms: the cost of capacitor placements (the sum of purchase, installation, and maintenance costs) and the total cost of energy loss (the sum of the power losses for each loading condition multiplied by the duration of the loading period). In practice, capacitors are grouped in banks of standard discrete capacities and tuned in discrete steps. These two factors make the capacitor cost function discontinuous and non-differentiable. There are two types of capacitors: switchable capacitors and fixed-type capacitors. Due to practical considerations, fixed-type capacitors are chosen in several distribution systems. This paper considers only fixed-type capacitor placement. Let N_T = {1, 2, ..., n_t}. The problem can then be formulated as follows:

$$\min_{u}\ \sum_{k=1}^{n_c} C_k(u_k^0) \;+\; K_c \sum_{i=1}^{n_t} T_i\, P_{loss,i}(x^i, u^i)$$

subject to

$$F(x^i, u^i) = 0, \quad i \in N_T \quad \text{(load flow constraints)}$$

$$H(x^i) \le 0, \quad i \in N_T \quad \text{(operational constraints)}$$

$$u_k \text{ is a discrete variable}, \qquad u_k \le u_k^0$$

where u^0 is the fixed capacitor size vector whose components u_k^0 are multiples of the standard capacitor bank size, C_k(u_k^0) is the cost of capacitor placement at location k with size u_k^0, and u_k is the control size at bus k. For a fixed capacitor, the control size is fixed for all loading conditions. This formulation is detailed in Reference 16.
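As a rough illustration of this cost structure only, the following Python sketch (not the authors' code; the names are hypothetical and the power_loss_kw load-flow routine is assumed) writes the objective as a step-like placement cost plus an energy-loss cost:

    # Sketch of the fixed-capacitor objective described above (hypothetical names).
    BANK_SIZE_KVAR = 300      # standard capacitor bank size
    FIXED_COST = 1000.0       # installation cost per capacitor location, $
    BANK_COST = 900.0         # purchase cost per bank, $
    ENERGY_COST = 0.06        # cost per kWh of energy loss (the K_c factor)

    def capacitor_cost(banks_per_bus):
        """Step-like placement cost: a fixed charge for each non-empty
        location plus a per-bank purchase cost."""
        return sum(FIXED_COST + BANK_COST * banks
                   for banks in banks_per_bus if banks > 0)

    def total_cost(banks_per_bus, load_levels, durations_h, power_loss_kw):
        """Placement cost plus the sum over load levels of T_i * P_loss,i * K_c.
        power_loss_kw(level, banks_per_bus) stands in for a distribution
        load-flow solver returning the real power loss in kW."""
        energy = sum(ENERGY_COST * t * power_loss_kw(level, banks_per_bus)
                     for level, t in zip(load_levels, durations_h))
        return capacitor_cost(banks_per_bus) + energy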

The fixed-type capacitor placement problem is a constrained, combinatorial optimization problem with a non-differentiable objective function. Because it is a combinatorial optimization problem, many search methods have been employed to find good solutions in reasonable computation time. Methods based on derivatives of the underlying objective function, such as the steepest descent or secant methods, can be forced into service only by approximating the cost function. Further, these methods suffer from local minima deception. Simulated annealing, popular for its global search capability, is guaranteed to find the global minimum only under certain conditions 19. In practice, it can be slow. The genetic algorithm, however, is well-suited to this type of problem: it requires only a binary representation of the search variables and is not concerned with the nature of the search function. The algorithm is attractive due to its ease of implementation, its lack of search function constraints, and its ability to find the global optimal solution.

II. The genetic algorithm
In this section, the genetic algorithm will be briefly discussed. For a detailed introduction, see References 20 and 21.

The genetic algorithm is a search technique based on a biological metaphor. Populations of 'chromosomes', binary strings, are subjected to genetic operators: selection, crossover, and mutation. These operators use the cost returned by the search function to assign a 'fitness' to each string. Because the operators cause the average fitness to increase as the simulation progresses, the search function is maximized. Search functions can be minimized by defining lower-cost strings as fitter.

The genetic algorithm begins with a population of random binary strings, generating successive populations until an arbitrary iteration limit is reached or until the population converges, i.e., consists entirely of copies of only one string. To translate the binary strings into search-space points, a decoding function is required. Note that an encoding function is not required; the translation is one-way. The decoded strings are evaluated by the search function to determine each string's fitness. Next, the three genetic operators are applied. Selection chooses a 'mating pool' in which representation is proportional to fitness. Crossover takes random pairs from the mating pool and produces two new strings, each made of one part of each 'parent' string. Mutation inverts bits randomly.
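A minimal generational skeleton of this loop, as a Python sketch (illustrative only; the helpers roulette_select, single_point_crossover, and mutate are sketched in the subsections below, and an even population size is assumed):

    import random

    def genetic_algorithm(fitness_fn, n_bits, pop_size=40, max_gen=300,
                          p_cross=1.0, p_mut=0.001):
        """Generational GA: random initial strings, then repeated
        decode/evaluate, selection, crossover, and mutation until the
        generation limit is reached or the population converges."""
        population = [[random.randint(0, 1) for _ in range(n_bits)]
                      for _ in range(pop_size)]
        for _ in range(max_gen):
            if len(set(map(tuple, population))) == 1:
                break                                        # converged to one string
            fitnesses = [fitness_fn(s) for s in population]  # decode and evaluate
            pool = roulette_select(population, fitnesses)    # mating pool (section II.2)
            random.shuffle(pool)                             # random pairs, no replacement
            population = []
            while pool:
                p1, p2 = pool.pop(), pool.pop()
                c1, c2 = single_point_crossover(p1, p2, p_cross)      # section II.3
                population += [mutate(c1, p_mut), mutate(c2, p_mut)]  # section II.4
        return max(population, key=fitness_fn)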

II.1 Decoding

The decoding function, which translates strings into search-space points, can be discontinuous or many-to-one. Provided a given string has a consistent cost, the algorithm will generate a selection pressure toward better solutions. Choosing a decoding function requires the user to determine a binary representation of the search-space variables. A simple approach is to consider each string as an ordered list of binary numbers. For example, an n-dimensional vector given b bits per element would be represented by {s_11, ..., s_1b, ..., s_n1, ..., s_nb}. The corresponding decoding function would read b bits at a time from the string, converting them to a decimal number which can be multiplied by a ranging factor, if necessary.

The decoded vectors are passed to the search function to return raw fitnesses. For example, to maximize x^2, let the binary string 1100 decode to 12, giving a raw fitness of 144. Raw fitnesses can be scaled or ranked to determine the final fitnesses (see 'Selection pressure', Section III). If the search function is to be minimized, the raw fitnesses can be subtracted from a large number. Because the genetic algorithm maximizes fitnesses, maximizing the additive inverse fitnesses minimizes the search function. Note that fitness is positive by definition; the raw fitnesses cannot simply be negated.
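A sketch of such a decoding step, together with the x^2 example above (function names are illustrative):

    def decode(bits, bits_per_element, scale=1.0):
        """Read the string b bits at a time and convert each group to a
        decimal number, optionally multiplied by a ranging factor."""
        return [int(''.join(str(b) for b in bits[i:i + bits_per_element]), 2) * scale
                for i in range(0, len(bits), bits_per_element)]

    # The example above: the 4-bit string 1100 decodes to 12, raw fitness 12**2 = 144.
    x = decode([1, 1, 0, 0], 4)[0]     # 12.0
    raw_fitness = x ** 2               # 144.0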



Figure 1. The crossover operator: Parent 1 and Parent 2 are cut at a randomly chosen crossover point, and Offspring 1 and Offspring 2 are formed by exchanging the segments on either side of it

II.2 Selection
The selection operator determines the mating pool by choosing strings randomly from the population in proportion to their fitnesses. A simple method for this is called 'roulette wheel selection' and involves spinning a roulette wheel in which each string occupies an area of the wheel equal to the string's share of the total fitness. For example, if the sum of the fitnesses is 4, then a string with fitness 1 would be given 1/4 of the wheel's area. After the wheel is proportioned, it is spun p times, where p is the population size. The resulting mating pool will have more copies of the fittest strings and few, if any, copies of the least fit. The population size remains constant, but the population contains better solutions as the simulation evolves.
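A sketch of roulette wheel selection as described (a hypothetical helper for the skeleton shown earlier):

    import random

    def roulette_select(population, fitnesses):
        """Spin the wheel p times; each string owns a slice proportional to
        its share of the total fitness, so fitter strings appear more often
        in the mating pool while the population size stays constant."""
        total = sum(fitnesses)
        pool = []
        for _ in range(len(population)):
            r = random.uniform(0.0, total)
            running = 0.0
            for string, fit in zip(population, fitnesses):
                running += fit
                if running >= r:
                    pool.append(list(string))
                    break
        return pool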

After the mating pool has been selected, strings are taken from the pool in random pairs without replacement. Each pair is then copied to two new strings which are put into the next population. The parents are discarded; the next generation consists entirely of offspring. Other researchers have allowed generational overlap 20, 22. In these schemes, called 'steady-state reproduction', new strings randomly replace old ones. However, in the basic algorithm, generations are independent.

II.3 Crossover
As the strings are copied, the remaining genetic operators are applied with fixed probability. Crossover switches the parent strings during copying, causing the offspring to be composites of the parent strings. If crossover occurs, only one crossover point is chosen randomly, as shown in Figure 1. Note that if crossover does not occur, the offspring pair is identical to the parent pair. In this way, there is a fixed probability that the new generation will contain some of the previous generation's strings; there can be generational overlap without explicitly retaining parents. Crossover probability is a control parameter.
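A single-point crossover sketch matching this description (illustrative names):

    import random

    def single_point_crossover(parent1, parent2, p_cross=1.0):
        """With probability p_cross, cut both parents at one random point and
        exchange the tails; otherwise the offspring are copies of the parents."""
        if random.random() < p_cross and len(parent1) > 1:
            point = random.randint(1, len(parent1) - 1)   # crossover point
            return (parent1[:point] + parent2[point:],
                    parent2[:point] + parent1[point:])
        return list(parent1), list(parent2)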

II.4 Mutation
There is also a fixed probability that the bits will be mis-copied. The mutation operator simply inverts bits and is applied to all the bits in each string, as shown in Figure 2. Note that the mutation probability must be small because crossover is intended to be the primary means of creating new points. Too high a mutation rate reduces the algorithm to a random walk. However, too small a mutation rate may allow variation to disappear from a given bit position due to the finite size of the population. This condition is called premature convergence.
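And a bit-inversion mutation sketch along the same lines:

    import random

    def mutate(bits, p_mut=0.001):
        """Invert each bit independently with a small probability: too large a
        value degenerates the search toward a random walk, while zero risks
        premature convergence."""
        return [1 - b if random.random() < p_mut else b for b in bits]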

Crossover and mutation are applied until the mating pool is empty. Once the new population is completed, the process repeats: decode, select, crossover, mutate. The process can be stopped at an arbitrary generation count or when the variation in the population falls below a threshold. The above procedures are illustrated in Figures 3 and 5.

III. Selection pressure
The genetic algorithm favours the best sample solutions and discourages the poor ones. In this way, the population is subjected to a 'selection pressure' which evolves it toward optimal solutions. We can characterize this pressure by defining 'expectation' as the ratio of each string's fitness to the average fitness. For example, by roulette wheel selection, we expect a string twice as fit as the average to be selected twice as often.

In our simulation, to convert the raw fitnesses returned by the search function into the final fitness values, each raw fitness is subtracted from the largest value in the current population. This causes the search function to be minimized as the fitnesses are maximized by the algorithm. However, this also serves another purpose: by subtracting from the largest value rather than an arbitrary large number, the selection pressure remains constant over time. Following Whitley 23, we can illustrate this constancy by considering a population we seek to maximize whose average fitness is 500, but whose largest fitness is 750. We expect 1.5 copies in the mating pool per string with fitness 750; there is strong pressure for this string to propagate. Later in the simulation, the average is 1000 with a maximum fitness of 1100. Here only 1.1 copies of the best string(s) are expected. The effectiveness of the selection operator has been reduced: as the variation decreases, all strings become similarly good. Scaling prevents this by considering the strings relative to each other instead of by absolute raw fitness. For example, if the minimum raw fitnesses in the above cases were 250 and 900, the expected number of copies of the best strings under scaling would be (750 - 250)/(500 - 250) = (1100 - 900)/(1000 - 900) = 2.
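A small Python sketch of these fitness assignments (the subtract-from-the-largest conversion used in the simulation, plus the windowed-scaling arithmetic of the example; names are illustrative):

    def fitnesses_from_costs(raw_costs):
        """Fitness = (largest cost in the current population) - (string's cost),
        so minimizing cost becomes maximizing fitness and the selection
        pressure tracks the population itself rather than a fixed constant."""
        worst = max(raw_costs)
        return [worst - c for c in raw_costs]

    def expected_copies(fitness, fitnesses):
        """'Expectation': a string's fitness divided by the population average."""
        return fitness * len(fitnesses) / sum(fitnesses)

    # Windowed scaling in the maximization example above: subtracting the
    # population minimum keeps the best string's expectation at 2 in both cases.
    assert (750 - 250) / (500 - 250) == (1100 - 900) / (1000 - 900) == 2.0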

Figure 2. The mutation operator: randomly chosen bits are inverted as a string is copied

(1) Initialization
    (1) Input control data
    (2) Input power system configuration and initialize
(2) Optimization
    (1) Input genetic algorithm control data
    (2) Initialize population with random strings
    (3) Loop until generation count equals maximum allowed:
        (1) Decode and evaluate strings
        (2) Apply selection, crossover, and mutation to create new population
(3) Output
    (1) Write final configuration and statistics

Figure 3. Program structure




Another way to maintain constant selection pressure is to use each string's rank as its fitness. This technique is called 'ranking' 23 or 'linear normalization' 20. The strings are first sorted according to their raw fitnesses, worst to best. Then the rank is taken as the fitness; i.e., the fifth worst string has fitness five, the tenth has fitness ten, etc. The rankings can also be subjected to linear or non-linear functions. To minimize the search function, the sorting order can be reversed. In addition to maintaining a constant selection pressure, ranking also arbitrarily sets the distribution of the fitnesses. For example, a linear ranking scheme will scale raw fitnesses clustered at either extreme toward the average because ranks are equidistant. Whether or not this aspect is desirable is an open question. Finally, note that functions applied to the rankings can be used to bias the selection pressure: f = r^2 will propagate fitter strings more aggressively than will f = r.
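A rank-based fitness sketch along these lines (for minimization, 'worst' means highest cost; the exponent gives the non-linear variant such as f = r^2):

    def rank_fitnesses(raw_costs, exponent=1.0):
        """Sort worst (highest cost) to best and use each string's rank,
        optionally raised to a power, as its fitness."""
        worst_first = sorted(range(len(raw_costs)), key=lambda i: -raw_costs[i])
        fitness = [0.0] * len(raw_costs)
        for rank, i in enumerate(worst_first, start=1):
            fitness[i] = float(rank) ** exponent
        return fitness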

IV. Implementation
The genetic algorithm implementation is straightforward: a coding scheme and a decoding function must be devised. Once programmed, however, appropriate operating parameters must be determined empirically, a major task. As described above, our simulation considers each string to be an ordered list of binary numbers corresponding to elements of the capacitor size vector. The binary numbers represent the number of capacitor banks at each node; the decoded binary numbers are subsequently multiplied by the standard capacitor bank size. Large capacitors are truncated to the maximum capacitor size, as shown in Figure 4. The number of bits per capacitor is a control parameter.

The overall simulation structure is outlined in Figure 3. Figure 5 details the operation of the scaling genetic algorithm.

To compare the relative merits of scaling and ranking, we have included both in our simulation.

Figure 4. Decoding example (bits per capacitor = 3, bank size = 300, maximum capacitor size = 1500): the binary string 001 110 011 for buses 1, 2 and 3 decodes to 1, 6 and 3 banks; multiplying by the bank size gives 300, 1800 and 900, and the 1800 value is truncated to the 1500 maximum, yielding the capacitor size vector (300, 1500, 900)
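A sketch that mirrors this decoding (constants taken from Figure 4; the helper name is illustrative):

    def decode_capacitors(bits, bits_per_cap=3, bank_kvar=300, max_kvar=1500):
        """Each group of bits is the number of banks at a bus; the size is the
        bank count times the bank size, truncated to the maximum capacitor size."""
        sizes = []
        for i in range(0, len(bits), bits_per_cap):
            banks = int(''.join(str(b) for b in bits[i:i + bits_per_cap]), 2)
            sizes.append(min(banks * bank_kvar, max_kvar))
        return sizes

    # Figure 4: 001 110 011 -> 1, 6, 3 banks -> 300, 1800 (truncated to 1500), 900
    assert decode_capacitors([0, 0, 1, 1, 1, 0, 0, 1, 1]) == [300, 1500, 900]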

(2) Optimization by Scaling Genetic Algorithm:
    (1) Input genetic algorithm control data
    (2) Initialize population with random strings
    (3) Outer loop: while gen. < max. gen.
        (1) For each string:
            (1) Decode into a test configuration
            (2) For each load demand: high, low, and medium:
                (1) Apply load demand
                (2) Call distribution load flow solver
                (3) Check feasibility; if infeasible, replace string with a random choice from the current population. Go to decode ...
                (4) Determine real power loss
                (5) Undo load demand
            (3) Compute total cost function
        (2) Find highest cost in population
        (3) Each fitness = highest cost - string's cost
        (4) Calculate sum of fitnesses and average
        (5) Selection via roulette wheel:
            (1) While the pool is incomplete:
                (1) Choose a random number, R, between 1 and the total fitness
                (2) Scan population, accumulating fitnesses, until the sum exceeds R
                (3) Choose the string that caused the overflow
        (6) Use the pool to create new population:
            (1) Choose a pair from the pool randomly without replacement
            (2) If (a random number) < crossover probability, then: [crossover occurs]
                (1) Pick a random crossover position
                (2) Copy from parent1 to offspring1 between bit 1 and crossoverBit, copy from parent2 to offspring2 between bit 1 and crossoverBit, inverting bits as they are copied according to mutation probability
                (3) Copy from parent2 to offspring1 between crossoverBit+1 and lastBit, copy from parent1 to offspring2 between crossoverBit+1 and lastBit, inverting bits as they are copied according to mutation probability
            (3) If no crossover, copy string from parent1 to offspring1, copy string from parent2 to offspring2, inverting bits as they are copied according to mutation probability
            (4) Continue until mating pool is empty

Figure 5. Solution algorithm structure

Note that genetic algorithm minimization essentially requires scaling, since it is difficult to choose a priori a raw-fitness minuend large enough to guarantee positive fitnesses yet not dramatically reduce selection pressure.

Note that the percentage of ones in the initial population is under parametric control. This allows us to bias the initial population toward all-zero strings. Because the power loss cost function saturates as capacitors are added, while purchase costs increase monotonically, it is known prior to simulation that fewer placements are favoured.
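A sketch of such a biased initialization (the 10% figure is an assumed example; the paper does not state the value used):

    import random

    def biased_initial_population(pop_size, n_bits, ones_fraction=0.10):
        """Initial random strings with the percentage of ones under parametric
        control, biasing the population toward all-zero strings (few placements)."""
        return [[1 if random.random() < ones_fraction else 0 for _ in range(n_bits)]
                for _ in range(pop_size)]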

V. Simulation results
The simulated distribution architecture is a 12.66 kV system with 69 buses and 7 laterals, as detailed in Table 5. The energy cost is 0.06 $/kWh. The capacitor cost includes a $1000 fixed installation cost and a $900 per-bank purchase cost, where each bank is 300 kvar. The system was subjected to the load conditions shown in Table 1.

The optimal capacitor placement for the 69-bus system is shown in Table 2, and Table 3 shows this placement's voltage profile.



Table 1. Load duration data for the test system

          Load levels              Time intervals
System    S0      S1      S2       T0       T1       T2
          1.8     1.0     0.5      1000     6760     1000

Table 2. Optimal capacitor placement

Bus        Cap. size
17         300.0
50         1200.0
others     0.0

Table 3. Voltage profile results

Load      Profile    Initial    Optimal
Light     Vmin       0.9747     0.9899
          Vmax       1.0000     1.0000
Medium    Vmin       0.9478     0.9674
          Vmax       1.0000     1.0000
Heavy     Vmin       0.9009     0.9239
          Vmax       0.9999     1.0000

Table 4. Optimization results

                                     Initial       Optimal
Real loss in light load case         26.63         23.36
Real loss in medium load case        111.58        42.63
Real loss in heavy load case         393.02        197.00
Cost due to energy loss              70437.01      30513.29
Purchase and installation cost       0.00          6500.00
Total cost                           70437.01      37013.29

The effectiveness of the genetic algorithm optimization is shown in Table 4.
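As a rough cross-check of Tables 1, 2 and 4 against the cost data above (assuming the loss figures are in kW and the time intervals in hours): the optimal placement uses two locations and 1500/300 = 5 banks in total, so the purchase and installation cost is 2 x $1000 + 5 x $900 = $6500, and the energy-loss cost of the optimal configuration is approximately 0.06 x (23.36 x 1000 + 42.63 x 6760 + 197.00 x 1000), about $30,500, both consistent with the Table 4 entries.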

Because of its lack of constraints on the search function, such as continuity or differentiability, the genetic algorithm is easy to implement. However, one must determine the optimal control parameters empirically. The results presented here represent the best runs after repeated control parameter variation.

Population size variation trials showed that populations of greater than 20 and less than 60 were most effective. Smaller populations maintained less variation and thus evolved more slowly and were more likely to converge prematurely. Larger populations suffered from increased processing overhead without a corresponding improvement in performance. Populations between 20 and 60 were similarly effective.

Crossover probability variation showed that probabilities approaching unity were more effective. In fact, experiments with multiple, random crossover suggested further improvement, at least for the capacitor placement problem. These results support evidence presented in References 20 and 22.

Finally, algorithm effectiveness was maximized by a mutation probability near 0.001. As expected, setting this value to zero caused the algorithm to converge prematurely. Higher values interfered with the effectiveness of the selection and crossover operators, eventually reducing the algorithm to a random search.

The genetic algorithm's solution progress is characterized by a rapid initial convergence, as shown in Figure 6. The graph also illustrates the effectiveness of increased selection pressure. Non-linear ranking consistently provided faster convergence than scaling or linear ranking. As discussed above, by increasing the selection pressure, the good strings are more strongly propagated, speeding convergence.

VI. Discussion
The genetic algorithm is characterized by two fundamental, dichotomous forces competing within the evolution of the population: exploitation and exploration. The selection operator exploits the current knowledge of the solution space by propagating the better guesses and discarding the poorer ones. The crossover and mutation operators explore the search space by creating new guesses. The balance is adjusted by changing the crossover probability. However, the management of these competing forces remains an open question. In the simple algorithm, the crossover probability is fixed, but crossover becomes less effective as the variety decreases because, eventually, crossover swaps identical substrings.

Another primary force acting within the genetic algorithm has been termed 'implicit parallelism'. See Reference 21 for a detailed introduction. Notice that the longer a given substring, the greater the likelihood of it being split by crossover. Therefore, the algorithm is said to favour short substrings. In fact, the algorithm can be seen as multiplying and recombining these substrings, exploiting the good ones by increasing their quantities within the population. Each substring is also a 'defining schema'. That is, by specifying bit positions, a subset of the search space is chosen. For example, we could specify 'the strings with 1 in bit positions 1 and 3' as a schema. Now any strings that begin with 101 or 111 are representatives of this schema. By assigning fitnesses to these strings, the algorithm implicitly scores the schema. Since any string represents many schemata, the algorithm in fact processes a large number of schemata simultaneously. This implicit parallelism suggests that the algorithm's power lies in its ability to consider a large number of hyperplanes rather than a small population of sample points.

Figure 6. Algorithm comparison: solution progress (cost in $ versus generation) for scaling, linear ranking, and non-linear ranking



Table 5. Test system data and initial state

Line   Recv.   Send.                                Send.-end bus load
no.    bus     bus     Resistance    Reactance      P(kW)       Q(kVAR)
1      0       1       0.0005        0.0012         0.0000      0.0000
2      1       2       0.0005        0.0012         0.0000      0.0000
3      2       2e      0.0000        0.0000         0.0000      0.0000
4      2e      3       0.0015        0.0036         0.0000      0.0000
5      3       4       0.0251        0.0294         0.0000      0.0000
6      4       5       0.3660        0.1864         2.6000      2.2000
7      5       6       0.3811        0.1941         40.4000     30.0000
8      6       7       0.0922        0.0470         75.0000     54.0000
9      7       8       0.0493        0.0251         30.0000     22.0000
10     8       9       0.8190        0.2707         28.0000     19.0000
11     9       10      0.1872        0.0619         145.0000    104.0000
12     10      11      0.7114        0.2351         145.0000    104.0000
13     11      12      1.0300        0.3400         8.0000      5.5000
14     12      13      1.0440        0.3450         8.0000      5.5000
15     13      14      1.0580        0.3496         0.0000      0.0000
16     14      15      0.1966        0.0650         45.5000     30.0000
17     15      16      0.3744        0.1238         60.0000     35.0000
18     16      17      0.0047        0.0016         60.0000     35.0000
19     17      18      0.3276        0.1083         0.0000      0.0000
20     18      19      0.2106        0.0696         1.0000      0.6000
21     19      20      0.3416        0.1129         114.0000    81.0000
22     20      21      0.0140        0.0046         5.3000      3.5000
23     21      22      0.1591        0.0526         0.0000      0.0000
24     22      23      0.3463        0.1145         28.0000     20.0000
25     23      24      0.7488        0.2475         0.0000      0.0000
26     24      25      0.3089        0.1021         14.0000     10.0000
27     25      26      0.1732        0.0572         14.0000     10.0000
28     2       27      0.0044        0.0108         26.0000     18.6000
29     27      28      0.0640        0.1565         26.0000     18.6000
30     28      29      0.3978        0.1315         0.0000      0.0000
31     29      30      0.0702        0.0232         0.0000      0.0000
32     30      31      0.3510        0.1160         0.0000      0.0000
33     31      32      0.8390        0.2816         14.0000     10.0000
34     32      33      1.7080        0.5646         19.5000     14.0000
35     33      34      1.4740        0.4873         6.1000      4.0000
36     2e      27e     0.0044        0.0108         26.0000     18.5500
37     27e     28e     0.0640        0.1565         26.0000     18.5500
38     28e     65      0.1053        0.1230         0.0000      0.0000
39     65      66      0.0304        0.0355         24.0000     17.0000
40     66      67      0.0018        0.0021         24.0000     17.0000
41     67      68      0.7283        0.8509         1.2000      1.0000
42     68      69      0.3100        0.3623         0.0000      0.0000
43     69      70      0.0410        0.0478         6.0000      4.3000
44     70      88      0.0092        0.0116         0.0000      0.0000
45     88      89      0.1089        0.1373         39.2200     26.3000
46     89      90      0.0009        0.0012         39.2200     26.3000
47     3       35      0.0034        0.0084         0.0000      0.0000
48     35      36      0.0851        0.2083         79.0000     56.4000
49     36      37      0.2898        0.7091         384.7000    274.5000
50     37      38      0.0822        0.2011         384.7000    274.5000
51     7       40      0.0928        0.0473         40.5000     28.3000
52     40      41      0.3319        0.1114         3.6000      2.7000
53     8       42      0.1740        0.0886         4.3500      3.5000
54     42      43      0.2030        0.1034         26.4000     19.0000
55     43      44      0.2842        0.1447         24.0000     17.2000
56     44      45      0.2813        0.1433         0.0000      0.0000



Table 5. Continued

Line   Recv.   Send.                                Send.-end bus load
no.    bus     bus     Resistance    Reactance      P(kW)       Q(kVAR)
57     45      46      1.5900        0.5337         0.0000      0.0000
58     46      47      0.7837        0.2630         0.0000      0.0000
59     47      48      0.3042        0.1006         100.0000    72.0000
60     48      49      0.3861        0.1172         0.0000      0.0000
61     49      50      0.5075        0.2585         1244.0000   888.0000
62     50      51      0.0974        0.0496         32.0000     23.0000
63     51      52      0.1450        0.0738         0.0000      0.0000
64     52      53      0.7105        0.3619         227.0000    162.0000
65     53      54      1.0410        0.5302         59.0000     42.0000
66     10      55      0.2012        0.0611         18.0000     13.0000
67     55      56      0.0047        0.0014         18.0000     13.0000
68     11      57      0.7394        0.2444         28.0000     20.0000
69     57      58      0.0047        0.0016         28.0000     20.0000


The primary control parameters of the genetic algorithm are the population size, the crossover probability, and the mutation probability. Choosing optimal control parameters remains an open question. Consequently, considerable experimental effort must be undertaken to ensure that the genetic algorithm is operating efficiently. Another topic of investigation is whether or not parameter choice is problem-dependent. Studies which claim problem-independence have been presented 24. There is also debate over the relative importance of control parameters 24, 25. Our experimentation suggests strong problem-independence for crossover and moderate problem-dependence for population size and mutation.

Although crossover probability is a control parameter, we have found no advantage for non-unity crossover probability. It may be thought that allowing offspring which are identical to parent strings will help prevent the destruction of good parent strings. However, this 'good string preservation' assumption fails to appreciate the implicit parallelism of the genetic algorithm: good substrings will propagate. As multiple copies of good strings are selected, it becomes increasingly likely that the parents will be identical, negating crossover and preserving the good strings outright. Further, by propagating good substrings, any destroyed good string will be likely to be recreated as its constituent substrings multiply and circulate through the population. Additionally, reducing crossover to preserve good strings also reduces the rate at which bad strings are discarded! Finally, note that no non-unity crossover probability can guarantee the preservation of good strings. Thus, crossover probability appears to moderate algorithm processing speed; it does not alter processing ability or character. Our experimentation shows a consistent decrease in evolution speed as the crossover is reduced. This suggests that multiple and uniform crossover should increase processing rate, as claimed by References 20 and 22.

VII. Conclusion

The genetic algorithm is well-suited to the fixed capacitor placement problem in electric distribution systems because of its ability to handle non-differentiable objective functions, its ability to find the global optimal solution, and its computational speed. Our experimental results reveal that the genetic algorithm can quickly find the region in the search space containing the global optimal solution of the fixed capacitor placement problem. These results encourage the extension of the genetic algorithm to the fixed/switched capacitor placement problem and to other applications in optimal design and planning for distribution systems.

We have presented several results regarding genetic algorithm operation. Using the genetic algorithm for minimization requires fitness scaling, but we have found a slight advantage for fitness ranking. Unity crossover probability is most effective for single-point crossover. This suggests that multiple or uniform crossover may further improve processing efficiency. Finally, only moderate population sizes are necessary.

VIII. Acknowledgment

The authors gratefully acknowledge support in part from NSF under grant numbers ECS-8810544, ECS-8957878 and ECS-8913074.

IX. References
1 Duran, H 'Optimum number, location, and size of shunt capacitors in radial distribution feeders: a dynamic programming approach' IEEE Trans. on Power Apparatus and Systems Vol 87 (1968) pp 1769-1774
2 Neagle, N M and Samson, D R 'Loss reduction from capacitors installed on primary feeders' AIEE Trans. Part III Vol 75 (1956) pp 950-959



3 Distribution Systems Westinghouse Electric Corp, East Pittsburgh, PA (1965)
4 Cook, R F 'Optimizing the application of shunt capacitors for reactive volt-ampere control and loss reduction' AIEE Trans. Vol 80 (1961)
5 Chang, N E 'Locating shunt capacitors on primary feeders for voltage control and loss reduction' IEEE Trans. on Power Apparatus and Systems Vol 88 (1969) pp 1574-1577
6 Bae, Y G 'Analytical method of capacitor application on distribution primary feeders' IEEE Trans. on Power Apparatus and Systems Vol 97 (1978) pp 1232-1237
7 Berg Jr, R, Hawkins, E S and Pleines, W W 'Mechanized calculation of unbalanced load flow on radial distribution circuits' IEEE Trans. on Power Apparatus and Systems Vol PAS-86 (1967) pp 415-421
8 Fawzi, T H, El-Sobki, S M and Abdel-Halim, M A 'A new approach for the application of shunt capacitors to the primary distribution feeders' IEEE Trans. on Power Apparatus and Systems Vol 102 (1983) pp 10-13
9 Ponnavaikko, M and Prakasa Rao, K S 'Optimal choice of fixed and switched shunt capacitors on radial distribution feeders by the method of local variations' IEEE Trans. on Power Apparatus and Systems Vol 102 (1983) pp 1607-1614
10 Kaplan, M 'Optimization of number, location, size, control type, and control setting of shunt capacitors on radial distribution feeders' IEEE Trans. on Power Apparatus and Systems Vol 103 (1984) pp 2659-2665
11 Grainger, J J and Lee, S H 'Optimum size and location of shunt capacitors for reduction of losses on distribution feeders' IEEE Trans. on Power Apparatus and Systems Vol 100 (1981) pp 1105-1118
12 Grainger, J J, Civanlar, S, Clinard, K N and Gale, L J 'Optimal voltage dependent continuous time control of reactive power on primary distribution feeders' IEEE Trans. on Power Apparatus and Systems Vol 103 (1984) pp 2714-2723
13 El-Kib, A A, Grainger, J J, Clinard, K N and Gale, L J 'Placement of fixed and/or non-simultaneously switched capacitors on unbalanced three-phase feeders involving laterals' IEEE Trans. on Power Apparatus and Systems Vol 104 (1985) pp 3298-3305
14 Baran, M E and Wu, F F 'Optimal capacitor placement on radial distribution systems' IEEE Trans. on Power Delivery Vol 4 (1989) pp 725-734
15 Baran, M E and Wu, F F 'Optimal sizing of capacitors placed on a radial distribution system' IEEE PES Winter Meeting (1988) paper no 88WM 065-5
16 Chiang, H D, Wang, J C, Cockings, O and Shin, H D 'Optimal capacitor placements in distribution systems. Part 1: A new formulation and the overall problem' IEEE Trans. on Power Delivery Vol 5 No 2 (1990) pp 634-642
17 Chiang, H D, Wang, J C, Cockings, O and Shin, H D 'Optimal capacitor placements in distribution systems. Part 2: Solution algorithms and numerical results' IEEE Trans. on Power Delivery Vol 5 No 2 (1990) pp 643-649
18 Kersting, W H and Mendive, D L 'Application of ladder network theory to the solution of three-phase radial load flow problems' IEEE Winter Power Meeting, New York (1976)
19 Mitra, D, Romeo, F and Sangiovanni-Vincentelli, A 'Convergence and finite-time behavior of simulated annealing' Proc. 24th Conf. on Decision and Control (Dec 1985) pp 761-767
20 Davis, L Handbook of Genetic Algorithms Van Nostrand Reinhold, New York (1991)
21 Goldberg, D E Genetic Algorithms in Search, Optimization and Machine Learning Addison-Wesley, Reading, MA (1989)
22 Syswerda, G 'Uniform crossover in genetic algorithms' Proc. 3rd Int. Conf. on Genetic Algorithms Morgan Kaufmann Publishers, San Mateo, CA (1989) pp 2-9
23 Whitley, D 'The GENITOR algorithm and selection pressure: why rank-based allocation of reproductive trials is best' Proc. 3rd Int. Conf. on Genetic Algorithms Morgan Kaufmann Publishers, San Mateo, CA (1989) pp 116-121
24 Schaffer, J 'A study of control parameters affecting online performance of genetic algorithms for function optimization' Proc. 3rd Int. Conf. on Genetic Algorithms Morgan Kaufmann Publishers, San Mateo, CA (1989) pp 51-60
25 Jog, P 'The effects of population size, heuristic crossover and local improvement on a genetic algorithm for the traveling salesman problem' Proc. 3rd Int. Conf. on Genetic Algorithms Morgan Kaufmann Publishers, San Mateo, CA (1989) pp 110-115
