
Transcript of kodanda pjt-06


Teaching load frequency control using MATLAB and SIMULINK

Abstract: In this paper, an attractive approach for teaching automatic load frequency control of a single-area system is presented. This approach is based primarily on using SIMULINK in building the system model and simulating its behaviour. A detailed design example for such an application is also presented.

2 INTRODUCTION

Automatic load frequency control (ALFC) is usually a major topic in an undergraduate power system control course. An attractive way of teaching such a topic is the use of MATLAB and SIMULINK [1]. Both software packages are accessible to students in most colleges and universities. SIMULINK is an interactive environment for modeling and simulating a wide variety of dynamic systems, including linear, nonlinear, discrete-time, continuous-time and hybrid systems. It provides a graphical user interface (GUI) for building models as block diagrams, using click-and-drag mouse operations. The user can change model parameters on-the-fly and display results 'live' during a simulation. SIMULINK is built on top of the MATLAB technical computing environment. In this paper, the application of SIMULINK to the analysis of ALFC of a single-area system is introduced and a design example for such an application is presented.

3 AUTOMATIC LOAD FREQUENCY CONTROL OF A SINGLE-AREA SYSTEM

The basic role of ALFC is to maintain the desired megawatt output of a generator unit and assist in controlling the frequency of the larger interconnection. The ALFC also helps to keep the net interchange of power between pool members at predetermined values. Control should be applied in such a fashion that the highly differing response characteristics of units of various types (hydro, nuclear, fossil, etc.) are recognized. Also, unnecessary power output changes should be kept to a minimum in order to reduce wear of control valves.

The ALFC loop will maintain control only during normal (small and slow) changes in load and frequency. It is typically unable to provide adequate control during emergency situations, when large megawatt imbalances occur. In such cases, more drastic 'emergency controls' must be applied.

3.1 Mathematical model

(ii) Determine the critical gain K_1,crit for critical stability using the MATLAB function rlocfind (Fig. 2). The range of K_1 for a stable system is therefore 0 < K_1 < K_1,crit.

(iii) Using SIMULINK, build the ALFC model of Fig. 1. Such a model (Fig. 4) is composed of three transfer functions, two gains, two summing junctions and an integrator. The forcing function is a step representing a step change in the power demand. The change in the system frequency is displayed during the simulation using a scope.

(iv) Run the SIMULINK model using different values of the integral gain K_1.
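
The transcript does not reproduce the numerical data of the design example, but steps (ii)-(iv) can be sketched in MATLAB (with the Control System Toolbox) as follows. The time constants and gains below are typical textbook values (cf. [2]) assumed here for illustration rather than the paper's confirmed data; they do, however, reproduce the -0.0235 Hz static drop and a stability boundary near K_1 = 0.95 quoted in the discussion below.

    % Hedged sketch of the single-area ALFC loop of Fig. 1 (assumed parameter values)
    Tg = 0.08;  Tt = 0.3;          % governor and turbine time constants (s), assumed
    Kp = 120;   Tp = 20;           % generator/load gain (Hz/p.u.) and time constant (s), assumed
    R  = 2.4;                      % speed droop (Hz/p.u. MW), assumed

    Ggov  = tf(1,  [Tg 1]);        % speed governor
    Gturb = tf(1,  [Tt 1]);        % non-reheat turbine
    Gload = tf(Kp, [Tp 1]);        % generator and load

    % Primary (droop) loop, used as the fixed part of the root-locus study
    Gprim = feedback(Ggov*Gturb*Gload, 1/R);

    % Step (ii): root locus in K_1 of the integral-control loop
    L = tf(1, [1 0]) * Gprim;      % integrator in series with the primary loop
    rlocus(L); grid on
    K1crit = rlocfind(L);          % click where the locus crosses the imaginary axis

    % Step (iv): frequency deviation for a 0.01 p.u. step load change,
    % repeated for several integral gains K_1 inside the stable range
    dPd = 0.01;  figure; hold on
    for K1 = [0 0.3 0.6 0.95]
        Gcl = -Gload / (1 + Ggov*Gturb*Gload*(1/R + tf(K1, [1 0])));   % delta_f / delta_Pd
        step(dPd*Gcl)
    end
    legend('K_1 = 0', 'K_1 = 0.3', 'K_1 = 0.6', 'K_1 = 0.95'), grid on

The rlocfind call waits for a mouse click on the root-locus plot; clicking where the locus crosses the imaginary axis returns the critical gain used in step (ii).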

4.1 Discussion of simulation results


Fig. 5 displays transient time responses of the ALFC loop following a step change of 0.01 p.u. in the power demand. The following observations are worth noting:

K_1 = 0 represents the case with no integral control. The static frequency drop is -0.0235 Hz.

The higher the integral control gain K_1 (within the stable range), the shorter the time for the frequency to settle back to its original value (i.e., Δf = 0).

K_1 = 0.95: the system is on the stability boundary and the response is oscillatory with constant amplitude and frequency.

K_1 = 1.1 represents an unstable case. The system has two complex-conjugate roots in the right half of the s-plane. The system response in this case is oscillatory with exponentially growing amplitude.

5 CONCLUSIONS

In this paper, an attractive approach for teaching automatic load frequency control of a single-area system has been presented. This approach is based primarily on using SIMULINK to build the system model and simulate its behaviour. An advantage of this approach is that students become familiar with some of the features of SIMULINK, and the full power of the software is then available to them for more advanced courses or project work. Furthermore, the use of this type of educational method significantly improves students' understanding of physical system behaviour, which is one of the main objectives of engineering education.

6 REFERENCES

[1] MATLAB 5 and SIMULINK 2, The MathWorks, Inc., 24 Prime Park Way, Natick, MA 01760-1500 (1997)

[2] Elgerd, O. I., Electric Energy Systems Theory - An Introduction, 2nd Ed., McGraw-Hill, pp. 299-362 (1982)

[3] Dorf, R. C. and Bishop, R. H., Modern Control Systems, 7th Ed., Addison-Wesley, pp. 315-386 (1995)

ABSTRACTS - FRENCH, GERMAN, SPANISH

(French) Teaching load frequency control using MATLAB and SIMULINK


This article presents an attractive approach to teaching automatic load frequency control of a single-area system. The approach is based primarily on using SIMULINK to build the system model and simulate its behaviour. A detailed design example for such an application is also presented.

(German) Teaching load frequency control using MATLAB and SIMULINK

This paper presents an attractive method for teaching automatic load frequency control of a single-area system. The method relies primarily on the use of SIMULINK in building the system model and simulating its behaviour. A detailed design example for such an application is also presented.

Performance analysis of a steam power plant with different governors

Abstract: A comparative study of three steam plant governors is made using MATLAB. Such an exercise trains students in using engineering software for simulation. It also adds motivation to learning control as a subject by showing the practical application of the theory taught in class.

1 INTRODUCTION

A governor is a control component which greatly influences the dynamic behaviour of a power system and is responsible in many respects for keeping the system stable. There are different types of governor [1], and since governor models exist in the literature [2-5], a study of these different models can be very enriching for students and practising engineers. However, a governor as a control device is rarely studied in either control or power system courses, so students do not fully understand its use and function in a power system. Today, with the availability of software tools such as MATLAB, it is easy to analyse and simulate systems under different control strategies, and the study of the performance of a governor can be a very attractive self-learning exercise. Educators use software tools [6, 7] to enhance students' learning or to encourage self-study through senior projects. This paper describes such a project in the fields of control and power systems. It consists of comparing the performance of three governor models for steam power plants.

The paper is organised as follows: Section 2 outlines the assumptions made for the problem under consideration, Section 3 shows the block diagram models of the three systems under study, while Section 4 presents the state-space models used for simulation. Results are discussed in Section 5 and Section 6 gives the conclusion.


2 ASSUMPTIONS

The system models are derived to address power and frequency (pf) control and accordingly the following assumptions are made:

(i) Small variations of the variables permit linearisation of the system equations around an arbitrarily chosen reference operating state.

(ii) For small changes in power demand, the two problems, load and frequency control and reactive-power/voltage control, are decoupled and can be considered separately.

(iii) A low-order representation of the turbine-generator dynamics is used.

The systems under consideration are exposed to a small change in load during their normal operation so that linear models are sufficient for their dynamic representations.

3 SYSTEM BLOCK DIAGRAM REPRESENTATION

Three models are analysed, and each has the same turbine/generator group. The steam turbine is modelled as a non-reheat single-stage turbine, and in modelling the generator the excitation system is neglected, as justified by assumption (ii) above. In each case a different governor is used, giving the following three models:

Model 1: Mechanical Hydraulic Governor System. In this model the time constant of the relay is assumed to be negligible and it is shown in Fig. 1.

Model 2: System with Mechanical Hydraulic Governor with Speed Relay. In this model, the delay introduced by the speed relay is taken into account in the model development as shown in Fig. 2.

Model 3: System with Electro-Hydraulic Governor. The complete block diagram is shown in Fig. 3 and it includes a turbine output feedback. It does not include a speed relay, but instead the speed transducer signal is amplified by a gain K and fed directly to the servomotor.

4 MATHEMATICAL MODELS

The incremental state space models of the three systems above are the following:

5 RESULTS AND DISCUSSIONS

The following system data values are taken for analysis purposes:

The eigenvalues of the three models are shown in Table 1.
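
The state-space matrices and data values are not reproduced in this transcript, so the following is only a hedged MATLAB sketch of how eigenvalues such as those in Table 1 can be computed for Model 1 (mechanical-hydraulic governor with the relay time constant neglected). All numerical values are illustrative assumptions, not the paper's data.

    % Hedged sketch: eigenvalues and load-step response of a linearised
    % single-area model with a mechanical-hydraulic governor (Model 1).
    Tg = 0.2;  Tt = 0.3;          % governor and turbine time constants (s), assumed
    M  = 10;   D  = 0.8;          % M = 2H inertia constant and damping (p.u.), assumed
    R  = 0.05;                    % speed droop (p.u.), assumed

    % States x = [dPg; dPt; df]: governor valve position, turbine power, frequency
    A = [ -1/Tg     0     -1/(R*Tg) ;
           1/Tt   -1/Tt    0        ;
           0       1/M    -D/M     ];
    B = [0; 0; -1/M];             % input: step load disturbance dPL
    C = [0 0 1];                  % output: frequency deviation df
    sys = ss(A, B, C, 0);

    eig(A)                        % eigenvalues, the quantity compared in Table 1
    step(0.1*sys), grid on        % frequency deviation for a 0.1 p.u. load step

Model 2 would add one further state for the speed-relay lag, and Model 3 would replace the relay with a gain K plus a turbine-output feedback; in each case the comparison reduces to inspecting eig(A) of the corresponding state matrix.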


The time variations of the frequency deviations of the three models are shown in Fig. 4. It is found that Model 3 responds faster, settles to a steady value within 10 s and has a smaller overshoot compared to Models 1 and 2. As expected, the frequency of the three systems does not return to its nominal value following a load demand, since the uncontrolled case is being studied and only the governor is taking any necessary action. Since the generator slows down, this results in a negative frequency deviation.

The turbine output variations of the three models are shown in Fig. 5. The negative deviation in frequency results in a positive deviation in turbine output power, following the governor action, to cater for the load demand. This is so because the governor reacts to the reduction in speed by further opening the valves to allow more steam to flow to the turbine. Again, Model 3 has the best performance with a smaller overshoot and settling time. It is found that the power output does not settle down to 0.1 pu because of the frequency deviations that exist when steady state is attained. The offsets can be reduced to zero by appropriately changing the reference input to the systems.

The effect of the time delay introduced in Model 2 clearly shows a deterioration in performance. Here one can see the importance of accurate modelling of systems for simulation and control purposes: neglecting significant delays in system models can give erroneous results. The simulations also give better insight into the effects of governor models on the overall stability of systems.

The eigenvalues of the three systems indicate that Model 3 has a faster response and is more stable than the others, since its eigenvalues are further to the left of the imaginary axis in the s-plane. Comparing the eigenvalues of Model 1 with those of Model 2, it is found that the introduction of the relay in Model 2 has decreased the system's relative stability.

The electro-hydraulic governor, with its feedback loop as shown in its block diagram representation, provides improved performance as compared to mechanical-hydraulic governors. The frequency overshoot is less. The frequency and the torque output deviations settle down to a steady value within a reasonable period of 10 s as compared to the larger settling time in the other models.

6 CONCLUSION

In this paper, three steam power systems incorporating different governors are modelled and simulated. The effect of a sudden load disturbance on the frequency and power output of the systems is studied.

The advantage of such simulations is that system parameters can very easily be modified in the models, so that students can use values from real systems to make the analysis more realistic, predict performance and make comparisons. Since control as a subject is very mathematical and abstract to students, such an exercise can stimulate their interest and let them see the practical application of the theory they learn in the classroom. It can also become a tool for self-study and a motivation to learn in other areas where control principles can be applied.

REFERENCES

[1] Woodward Governor Company, The Control of Prime Mover Speed: Part IIA, Speed Governor Fundamentals, Manual 25031 (1981)

[2] Elgerd, O. I. and Fosha, 'The megawatt-frequency control problem: a new approach via optimal control theory', IEEE Transactions on Power Apparatus and Systems (April, 1970)

[3] IEEE Committee, 'Dynamic Models for Steam and Hydro Turbines in Power Systems Studies', IEEE Transactions on Power Apparatus and Systems (Nov./Dec., 1973)

[4] Hovey, L. M. and Bateman, L. A., 'Speed regulation tests on a hydrostation supplying an isolated load', Trans. Am. Inst. Electr. Eng., 81, No. 3 (1962)

[5] Elgerd, O. I., Electric Energy Systems Theory: An Introduction, McGraw-Hill (1977)

[6] Souza, R. F. and Caballero, C. A., 'Observation of Solitons with MATHEMATICA(TM)', IEEE Trans. on Education, 39, No. 1 (Feb., 1996)

[7] Chau, K. T., 'A software tool for learning the dynamic behaviour of power electronic circuits', IEEE Trans. on Education, 39, No. 1 (Feb., 1996)

ABSTRACTS - FRENCH, GERMAN, SPANISH

(French) Performance analysis of a steam power plant with different governors

A comparative study of three speed governors for steam units is carried out using MATLAB. Such an exercise trains students in the use of scientific software for simulation. It also adds motivation to learning control as a subject by showing the practical application of the theory taught in class.


Using MATLAB, SIMULINK and Control System Toolbox - A practical approach

Using MATLAB, SIMULINK and Control System Toolbox - A practical approach: A. CAVALLO, R. SETOLA and F. VASCA (Prentice Hall, 1996, 405 pp., £23.95)

This is a very useful book for students, researchers and technical professionals who wish to use MATLAB to simulate continuous and discrete systems. Unlike MATLAB and its toolbox manuals, this book contains a brief overview of the mathematical and engineering background for MATLAB operations and many detailed examples.

This book is divided into three parts: the introduction to MATLAB, the use of SIMULINK, and the use of the Control System Toolbox.

The first part contains 9 chapters. In this part, fundamentals such as input/output files, matrix and scalar operations, polynomials and interpolation, and graphics are described. MATLAB programming and debugging are also discussed. Further, a class of operators for numerical analysis is described, for example derivatives and integrals, optimization of nonlinear equations, and the solution of differential equations.

The second part contains 9 chapters. In this part, the main features of SIMULINK are described with examples of building and analysing SIMULINK schemes. SIMULINK is described in further detail with special emphasis on modelling continuous and discrete (or hybrid) nonlinear systems, time-varying systems, and multivariable systems, with examples included. The methods of grouping SIMULINK blocks into multilevel or hierarchical blocks and an overview of the functions of SIMULINK blocks are also included.

The third part, containing 7 chapters, describes the use of the Control System toolbox. This includes models of continuous-time linear time-invariant (LTI) systems, time-domain and frequency domain responses, root locus, state feedback such as feedback gain design, Kalman filter, and many more. The corresponding methods for discrete-time systems are then described. Many practical examples are included in these chapters.

In the Appendices, readers can find more details on advanced graphics functions, graphical user interface design and writing S-functions. This book contains 404 pages, which is not thick given the coverage of its contents, yet it contains the most essential and useful information for the user.

IRENE Y. H. GU Department of Applied Electronics, Chalmers University of Technology, Gothenburg, Sweden

Optimal decentralized load frequency control using HPSO algorithms in deregulated power systems

Large-scale power systems are normally composed of interconnected subsystems or control areas. The connection between the control areas is made using tie lines. Each area has its own generator or group of generators and is responsible for its own load and scheduled interchanges with neighboring areas. Because the loading of a given power system is never constant, and to ensure the quality of power supply, a load frequency controller is needed to maintain the system frequency at the desired nominal value. It is known that changes in real power affect mainly the system frequency, and the input mechanical power to generators is used to control the frequency of the output electrical power. In a deregulated power system, each control area contains different kinds of uncertainties and various disturbances due to increased complexity, system modeling errors and changing power system structure. A well designed and operated power system should cope with changes in the load and with system disturbances, and it should provide an acceptably high level of power quality while maintaining both voltage and frequency within tolerable limits (1-6).

During the last three decades, various control strategies for LFC have been proposed (1-18). This extensive research is due to the fact that LFC constitutes an important function in power system operation, where the main objective is to regulate the output power of each generator at prescribed levels while keeping the frequency fluctuations within pre-defined limits. Robust adaptive control schemes have been developed (4-7) to deal with changes in system parameters under LFC strategies. A different algorithm has been presented (8) to improve the performance of multi-area power systems. Viewing a multi-area power system under LFC as a decentralized control design problem for a multi-input multi-output system, it has been shown (9) that a group of local controllers with tuning parameters can guarantee overall system stability and performance. The results reported in (4-9) demonstrate clearly the importance of robustness and stability issues in LFC design. In addition, several practical points have been addressed in (10-15), including recent technology used by vertically integrated utilities, augmentation of filtered area control error with LFC schemes, and hybrid LFC that encompasses an independent system operator and bilateral LFC.

The applications of artificial neural networks, genetic algorithms, fuzzy logic and optimal control to LFC have been reported in (16-18). The objective of this study is to investigate the load frequency control and inter-area tie-power control problem for a multi-area power system, taking into consideration the uncertainties in the system parameters.

PI-type and I-type controllers are considered for LFC. An optimal tuning scheme based on a hybrid particle swarm optimization (HPSO) algorithm is used for tuning the parameters of these PI and I controllers. The proposed controller is simulated for a two-area power system.

To show the effectiveness of the proposed method and to compare the performance of the two controllers, several changes in the demand of the first area, the demand of the second area, and the demand of both areas simultaneously are applied. Simulation results indicate that the HPSO-tuned controllers guarantee good performance under various load conditions.

MATERIALS AND METHODS

A two-control-area power system, shown in Fig. 1, is considered as a test system (14). The state-space model of the foregoing system is given in (1) (14).

[FIGURE 1 OMITTED]


dx/dt = Ax + Bu,  y = Cx (1)

where:

u = [ΔP_D1, ΔP_D2, u_1, u_2]

y = [y_1, y_2] = [Δf_1, Δf_2, ΔP_tie]

x = [ΔP_G1, ΔP_T1, Δf_1, ΔP_tie, ΔP_G2, ΔP_T2, Δf_2]

[SYSTEM MATRICES A, B, C OMITTED]

The model parameters are defined as follows:

Δ = Deviation from nominal value

M = 2H = Inertia constant

D = Damping constant

R = Gain of speed droop feedback loop

T_t = Turbine time constant

T_G = Governor time constant

Typical values of the system parameters for the nominal operating condition are given in the appendix (12).

This study focuses on optimal tuning of controllers for LFC and tie-power control using the HPSO algorithm. The aim of the optimization is to search for the optimum controller parameter settings that maximize the minimum damping ratio of the system. The goals of this study are control of frequency and inter-area tie-power with good oscillation damping, good performance under all operating conditions and various loads, and a low-order controller for easy implementation.

PSO AND HPSO ALGORITHMS

A novel population-based optimization approach, called particle swarm optimization (PSO), was first introduced in (19). In a PSO system, multiple candidate solutions coexist and collaborate simultaneously. Each candidate solution, called a 'particle', flies through the problem space (similar to the food-search process of a bird swarm) looking for the optimal position. Over time, a particle adjusts its position according to its own experience and to the experience of neighboring particles. If a particle discovers a promising new solution, all the other particles move closer to it, exploring the region more thoroughly in the process.

This approach has many advantages: it is simple, fast, can be coded in a few lines and has minimal storage requirements. Moreover, it is advantageous over evolutionary and genetic algorithms in several ways. First, PSO has memory: every particle remembers its own best solution (personal best) as well as the best solution found by the swarm (global best). Another advantage of PSO is that the initial population is maintained, so there is no need to apply operators to the population, a process that is time- and memory-consuming. In addition, PSO is based on constructive cooperation between particles, in contrast with genetic algorithms, which are based on survival of the fittest (19-22).

Steps of PSO: The steps of PSO, as implemented for optimization, are (19-29):

Step 1: Initialize an array of particles with random positions and their associated velocities to satisfy the inequality constraints.

Step 2: Check for the satisfaction of the equality constraints and modify the solution if required.

Step 3: Evaluate the fitness function of each particle.

Step 4: Compare the current value of the fitness function with the particle's previous best value (pbest). If the current fitness value is less, then assign the current fitness value to pbest and assign the current coordinates (positions) to pbestx.

Step 5: Determine the current global minimum fitness value among the current positions.

Step 6: Compare the current global minimum with the previous global minimum (gbest). If the current global minimum is better than gbest, then assign the current global minimum to gbest and assign the current coordinates (positions) to gbestx.

Step 7: Change the velocities.

Step 8: Move each particle to the new position and return to step 2.

Step 9: Repeat steps 2-8 until a stopping criterion is satisfied or the maximum number of iterations is reached.
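
As a hedged illustration of steps 1-9, using the velocity and position updates of equations (4)-(6) given below, a minimal MATLAB sketch might look as follows. The objective function, swarm size, bounds and iteration limit are placeholders, not the settings used in this study.

    % Minimal PSO sketch (assumed settings; the fitness function is a placeholder)
    fitness = @(X) sum(X.^2, 2);           % placeholder objective, minimised
    nVar = 2;  nPart = 20;  iterMax = 100;
    xmax = 10;  vmax = 0.1*xmax;           % velocity limit, cf. eq. (7)
    wmax = 0.9;  wmin = 0.4;  c1 = 2;  c2 = 2;

    % Step 1: random initial positions and velocities
    X = xmax*(2*rand(nPart, nVar) - 1);
    V = vmax*(2*rand(nPart, nVar) - 1);
    P = X;  fP = fitness(P);               % personal bests (pbest)
    [fg, ig] = min(fP);  g = P(ig, :);     % global best (gbest)

    for iter = 1:iterMax
        w = wmax - (wmax - wmin)/iterMax*iter;                       % eq. (6)
        % Steps 7-8: velocity update, eq. (4), and position update, eq. (5)
        V = w*V + c1*rand(nPart, nVar).*(P - X) + c2*rand(nPart, nVar).*(g - X);
        V = max(min(V, vmax), -vmax);      % clamp velocities to +/- vmax
        X = X + V;
        % Steps 3-6: evaluate fitness, update pbest and gbest
        fX = fitness(X);
        better = fX < fP;
        P(better, :) = X(better, :);  fP(better) = fX(better);
        [fBest, iBest] = min(fP);
        if fBest < fg, fg = fBest; g = P(iBest, :); end
    end
    fprintf('best fitness %.4g\n', fg)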

PSO and HPSO algorithm definition: The PSO definition is presented as follows (19), (22), (26):

* Each individual particle i has the following properties:


x_i = the current position in the search space.

v_i = the current velocity in the search space.

y_i = the personal best position in the search space.

* The personal best position p_i corresponds to the position in the search space where particle i has the smallest error as determined by the objective function f, assuming a minimization task.

* The global best position, denoted by g, represents the position yielding the lowest error among all the p_i.

Equations 2 and 3 define how the personal and global best values are updated at time k, respectively. Below, it is assumed that the swarm consists of s particles; thus, i ∈ {1, ..., s}.

p_i^(k+1) = p_i^k if f(x_i^(k+1)) >= f(p_i^k), and p_i^(k+1) = x_i^(k+1) otherwise (2)

g^k ∈ {p_1^k, p_2^k, ..., p_s^k} such that f(g^k) = min{f(p_1^k), f(p_2^k), ..., f(p_s^k)} (3)

During each iteration, every particle in the swarm is updated using (4) and (5). Two pseudorandom sequences r_1 ~ U(0,1) and r_2 ~ U(0,1) are used to introduce the stochastic nature of the algorithm.

v_i^(k+1) = w v_i^k + c_1 rand()_1 (p_i^k - x_i^k) + c_2 rand()_2 (g^k - x_i^k) (4)

x_i^(k+1) = x_i^k + v_i^(k+1) (5)

w = w_max - ((w_max - w_min)/iter_max) iter (6)

v_max = k x_max, 0.1 <= k <= 1 (7)

Where:

v_i^k = velocity of the ith particle at the kth iteration.

v_i^(k+1) = velocity of the ith particle at the (k+1)th iteration.

w = inertia weight.

x_i^k = position of the ith particle at the kth iteration.

x_i^(k+1) = position of the ith particle at the (k+1)th iteration.


c_1, c_2 = positive constants, both equal to 2.

iter, iter_max = iteration number and maximum number of iterations.

rand()_1, rand()_2 = random numbers selected between 0 and 1.

Evolutionary operators such as selection, crossover and mutation have been applied to PSO. By applying a selection operation, the particles with the best performance are copied into the next generation, so PSO can always keep the best-performing particles. By applying a crossover operation, information can be exchanged or swapped between two particles so that they can fly to a new search area, as in evolutionary programming and genetic algorithms. Among the three evolutionary operators, mutation is the most commonly applied in PSO; its purpose is to increase the diversity of the population and the ability of the PSO to escape local minima (19-28). HPSO uses the mechanism of PSO together with a natural selection mechanism taken from genetic algorithms.
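
The exact hybridisation operators are not reproduced in this transcript; one common selection-based hybrid, assumed here purely for illustration and continuing the sketch above, replaces the worse half of the swarm with copies of the better half after each iteration:

    % Hedged HPSO add-on: GA-style natural selection applied once per iteration
    % (an assumed variant; the study's exact operators are not reproduced here)
    [~, order] = sort(fitness(X));                     % rank particles, best first
    nKeep = floor(nPart/2);
    X(order(nKeep+1:end), :) = X(order(1:nKeep), :);   % clone positions of the best half
    V(order(nKeep+1:end), :) = V(order(1:nKeep), :);   % clone their velocities as well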

CONTROLLER DESIGN USING HPSO ALGORITHM

In this study, PI-type and I-type controllers optimized by HPSO are designed for LFC and tie-power control. The goals are control of frequency and inter-area tie-power with good oscillation damping, while also obtaining good overall performance. The structure of the system with the PI controller is shown in Fig. 2 (26), (29). The area control error (ACE) for the ith area is defined as:

ACE_i = ΔP_tie,i + Δf_i (8)

With the PI controller, the conventional automatic generation controller has a control equation of the form (9).

[FIGURE 2 OMITTED]

ΔPC_i = K_Pi (ΔP_tie,i + Δf_i) + K_Ii ∫ (ΔP_tie,i + Δf_i) dt (9)

With the I controller, the conventional automatic generation controller has a linear integral control strategy as in (10).

ΔPC_i = C_i = K_Ii ∫ (ΔP_tie,i + Δf_i) dt (10)

where K_Pi is the gain of the proportional controller and K_Ii is the gain of the integral controller for the ith area.
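
For concreteness, equations (8)-(10) can be evaluated at a single simulation step roughly as in the hedged MATLAB fragment below; the deviation values and the time step are placeholder numbers, while the gains are the PI values reported in Table 1 later in this section.

    % Area control error and supplementary control commands for area i, per (8)-(10)
    Kp_i = 2.2264;  Ki_i = 6.6567;    % example gains (PI values from Table 1)
    dt = 0.01;  intACE = 0;           % integration step and integral state, assumed
    dPtie_i = 0.004;  df_i = -0.02;   % example deviations at the current step, assumed

    ACEi   = dPtie_i + df_i;          % area control error, eq. (8)
    intACE = intACE + ACEi*dt;        % running integral of the ACE
    dPC_PI = Kp_i*ACEi + Ki_i*intACE; % PI control law, eq. (9)
    dPC_I  = Ki_i*intACE;             % I control law, eq. (10)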

In this study, the optimum values of the parameters K_P and K_I for the PI controller, and K_I for the I controller, which minimize a set of performance indices, are computed easily and accurately using the HPSO algorithm. In a typical run of the HPSO, an initial population is randomly generated. This initial population is referred to as the 0th generation. Each individual in the initial population has an associated performance index value. Using this performance index information, the HPSO then produces a new population.

In order to obtain the value of the performance index for each individual in the current population, the system must be simulated. The HPSO then produces the next generation of individuals using the reproduction, crossover and mutation operators.

These processes are repeated until the population converges and the optimum parameter values are found. To simplify the analysis, the two interconnected areas were considered identical, so the optimal parameter values are such that:

K_P1 = K_P2 = K_P and K_I1 = K_I2 = K_I

The nominal system parameters are given in the appendix. The performance index considered in this study is of the form:

J = ∫ t (α|Δf_1| + β|Δf_2| + γ|ΔP_tie|) dt (11)

To compute the optimum parameter values, a unit step load change is assumed in area 1 and the performance index is minimized using the HPSO algorithm. In the next section, the optimum values of the parameters K_P and K_I for the PI controller, and K_I for the I controller, resulting from minimizing the performance index, are presented. In this case the performance index was considered with:

α = 1, β = 1, γ = 1

(frequency deviations in both areas and tie-power deviation are equally penalized).

It should be noted that α, β and γ are weighting coefficients chosen by the designer. The optimum values of the parameters K_P and K_I obtained using the HPSO algorithm are summarized in Table 1. The optimum value of the parameter K_I obtained using the HPSO algorithm is summarized in Table 2.
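
As a hedged sketch of how an ITAE-type index of the form (11) can be evaluated from a simulated response in MATLAB (the response vectors below are synthetic placeholders, not outputs of the paper's model):

    % Time-weighted absolute-error performance index, cf. eq. (11)
    alpha = 1;  beta = 1;  gamma = 1;            % equal weights, as in the study
    t  = (0:0.01:20)';                           % placeholder time vector
    df1   = 0.010*exp(-t).*sin(2*t);             % placeholder responses, illustration only
    df2   = 0.008*exp(-t).*sin(2*t + 0.5);
    dPtie = 0.005*exp(-t).*sin(2*t - 0.3);
    J = trapz(t, t.*(alpha*abs(df1) + beta*abs(df2) + gamma*abs(dPtie)))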

Table 1: Optimum values of K_P and K_I for the PI controller

K_P = 2.2264; K_I = 6.6567; Performance index = 0.6146

Table 2: Optimum value of K_I for the I controller

K_I = 0.6812; Performance index = 3.7226


Tables 1 and 2 give the optimum values of K_P and K_I and the corresponding values of the performance index for the two cases considered.

RESULTS AND DISCUSSION

In this section, different comparative cases are examined to show the effectiveness of the proposed HPSO method for optimizing the controller parameters (PI and I type). These cases have been evaluated extensively by time-domain simulation, using a commercially available software package (30).

It is clear that the PI-type controller results in a lower optimum value of the performance index. This in turn leads to increased damping in the dynamic response of the system and clearly shows that the PI controller performs better than the I controller for LFC. The simulation results below illustrate this.

Step increase in demand of the first area (ΔP_D1): As the first test case, a step increase in the demand of the first area (ΔP_D1) is applied at operating point 1 (the nominal operating point). The frequency deviation of the first area (Δω_1), the frequency deviation of the second area (Δω_2) and the inter-area tie-power signals of the closed-loop system are shown in Fig. 3-5. Using the PI controller, the frequency deviations and inter-area tie-power are quickly driven back to zero, and the PI controller has the best performance in control and damping of frequency and tie-power compared to the I controller. The responses without any controller cannot be driven back to zero and exhibit a steady-state error.

Step increase in demand of the second area (ΔP_D2): In this case, a step increase in the demand of the second area (ΔP_D2) is applied at operating point 2. The frequency deviation of the first area (Δω_1), the frequency deviation of the second area (Δω_2) and the inter-area tie-power signals of the closed-loop system are shown in Fig. 6-8. Using the PI controller, the frequency deviations and inter-area tie-power are quickly driven back to zero, and the PI controller has the best performance in control and damping of frequency and tie-power compared to the I controller. The responses without any controller cannot be driven back to zero and exhibit a steady-state error.

[FIGURE 4 OMITTED]

[FIGURE 6 OMITTED]

[FIGURE 7 OMITTED]

[FIGURE 8 OMITTED]

Step increase in demand of the first and second areas simultaneously: In this case, a 0.5 step increase in the demand of the first area (ΔP_D1) and a step increase in the demand of the second area (ΔP_D2) are applied simultaneously at operating point 3. The signals of the closed-loop system are shown in Fig. 9-11. Using the optimized PI controller, the frequency deviations and inter-area tie-power are quickly driven back to zero, and the PI controller has the best performance compared to the optimized I controller. The responses without any controller cannot be driven back to zero and exhibit a steady-state error.

[FIGURE 10 OMITTED]

[FIGURE 11 OMITTED]

CONCLUSION

In this study, HPSO has been successfully applied to tune the parameters of conventional automatic generation control systems with PI-type and I-type controllers. A two-area power system is used to demonstrate the proposed method. The performance index is the integral of the time-multiplied absolute value of the error. For this performance index, a digital simulation of the system is carried out and optimization of the parameters of the automatic generation control (AGC) systems is achieved in a simple and elegant manner through the effective application of the HPSO algorithm. These results, and the suitability of HPSO for nonlinear problems, open the door to studying the effect of generation rate constraints on the optimal values of the AGC parameters.

REFERENCES

(1.) Shayegi, H., H.A. Shayanfar and O.P. Malik, 2007. Robust decentralized neural networks based LFC in a deregulated power system. Elec. Power Syst. Res., 77: 241-251.

(2.) Shayeghi, H. et al., 2007. Robust modified GA based multi-stage fuzzy LFC. Energy Conversion and Manag., 48: 1656-1670.

(3.) Sivaramakrishnan, A.Y., M.V. Hariharan and M.C. Srisailam, 1984. Design of variable structure load frequency controller using pole assignment technique. Int. J. Control, 40: 487-498.

(4.) Lim, K.Y. et al., 1996. Robust decentralized load frequency control of multi-area power system. IEE Proceedings-C, 143 (5): 377-386.

(5.) Taher, S.A. and R. Hematti, 2008. Robust decentralized load frequency control using multi-variable QFT method in deregulated power systems. Am. J. Applied Sci., 5 (7): 818-828.

(6.) Wang, Y., D.J. Hill and G. Guo, 1998. Robust decentralized control for multi-machine power system, IEEE Trans. on circuits and systems: Fund. Theory and Applications., Vol. 45, No. 3.


(7.) Stankovic, et al., 1998. On robust control analysis and design for LFC regulation. IEEE Trans. PWRS, 13 (2): 449-455.

(8.) Pan, C.T. and C.M. Liaw, 1989. An adaptive control for power system LFC. IEEE Trans. PWRS, 4 (1): 122-128.

(9.) Yamashita, K. and H. Miagi, 1991. Multi variable self-tuning regulator for LFC system with interaction of voltage on load demand. IEE proceeding conference. Theory and Applications., 138 (2): 177-183.

(10.) Rubaai, A. and V. Udo, 1994. Self-tuning LFC: Multi-level adaptive approach. IEE Proc.-C, 141, (4): 285-290.

(11.) Talaq, J. and H. Al-basri, 1999. Adaptive fuzzy gain scheduling for LFC. IEEE Trans. PWRS, 14(1): 145-150.

(12.) Aldeen, M. and J.F. Marah, 1991. Decentralized PI design method for inter-connected power systems. IEE Proc.-C, Vol. 138, No. 4.

(13.) Yang, T.C. and H. Cimen, 1996. Applying structured singular values and a new LQR design to robust decentralized power system LFC. Proceeding of the IEEE Int. Conf. on Ind. Technol.

(14.) Yang, T.C., H. Cimen and Q.M. Zhu, 1998. Decentralized LFC design based on structural sing. Val., IEE Proc.-C, 145 (1): 137-146.

(15.) Moon, Y.H., H.S. Ryu, J.G. Lee and S. Kim, 2001. Power system LFC using noise tolerable PID feedback. ISIE 2001, Pusan, Korea.

(16.) Birch, A.P., A.T. Sapeluk and C.S. Ozveren, 1994. An enhanced neural network LFC technique. Control 94, IEE Conf. Publication, No. 398.

(17.) Rerkpreedapong, D. et al., 2003. Robust LFC using genetic algorithms and LMI. IEEE Trans. PWRS, 18 (2): 855-861.

(18.) Liu, F. et al., 2003. Optimal LFC in restructured power systems. IEE Proc.-C, 150 (1): 87-95.

(19.) Kennedy, J. and R.C. Eberhart, 1995. Particle swarm optimization. In: Proc. IEEE Int. Conf. on Neural Network, Perth, Australia, pp: 1942-1948.

(20.) Eberhart, R.C. and Y. Shi, 2001. Particle swarm optimization: Development, applications and resources, evolutionary comp. In: proc. 2001 Cong. on Evolutionary. Comput., 1: 81-86.


(21.) Al-Awami, A.T. et al., 2007. A particle-swarm-based approach of power system stability enhancement with UPFC. Elec. Power and Energy Syst., (29): 251-259.

(22.) Jaung, C.F. and C.F. Lu, 2006. Load frequency control by hybrid evolutionary fuzzy PI controller. IEE Proc.-C, 153 (2): 196-204.

(23.) Mukherjee, V. and S.P. Ghoshal, 2007. Intelligent particle swarm optimized fuzzy PID controller for AVR system. Elec. Power Syst. Res., (77): 1689-1698.

(24.) Miranda, V. and N. Fonseca, 2002. New evolutionary particle swarm algorithm (EPSO) applied to voltage/VAR control. The 14th PSCC Conf. (PSCC'02-2002), Seville, Spain.

(25.) Blackwell, T. and P.J. Bentley, 2002. Don't push me! Collision-avoiding swarms. IEEE Cong. on Evol. Comput., Honolulu, Hawaii USA.

(26.) Taher, S.A. and S.M.H. Tabei, 2008. A multi-objective HPSO algorithm approach for optimally location of UPFC in deregulated power systems. Am. J. Applied Sci., 5 (7): 835-843.

(27.) Krink, T., J.S. Vesterstrøm and J. Riget, 2002. PSO with spatial particle extension. Proc. of the 4th Congress on Evol. Comput., (CEC-2002).

(28.) Ratnaweera, A., S.K. Halgamuge and H.C. Watson, 2004. Self-organizing hierarchical PSO optimizer with time varying accelerating coefficients. IEEE 2004 Trans. Evol. Comput., (Accepted for special issue on PSO).

(29.) Abdel-Magid, Y.L. and M.M. Dawood, 1995. Genetic algorithms applications in load frequency control. Proc. IEE Conference on genetic algorithms in Eng. Syst., (414): 207-213.

(30.) Matlab Software, 2006. The Mathworks, Inc.

Seyed Abbas Taher, Reza Hematti, Ali Abdolalipour and Seyed Hadi Tabei Department of Electrical Engineering, University of Kashan, Kashan, Iran

Corresponding Author: Seyed Abbas Taher, Department of Electrical Engineering, University of Kashan, Kashan, Iran


Utility frequency

[FIGURE OMITTED: The waveform of 230 V, 50 Hz compared with 110 V, 60 Hz]

The utility frequency (American English) or mains frequency (British English) is the frequency at which alternating current (AC) is transmitted from a power plant to the end user. In most parts of the Americas it is typically 60 Hz, and in most parts of the rest of the world it is typically 50 Hz. Precise details are shown in the list of countries with mains power plugs, voltages and frequencies.

During the development of commercial electric power systems in the late 19th and early 20th centuries, many different frequencies (and voltages) had been used. Large investment in equipment at one frequency made standardization a slow process. However, as of the turn of the 21st century, places that now use the 50 Hz frequency tend to use 220-240 V, and those that now use 60 Hz tend to use 100-120 V. Both frequencies co-exist today (some countries such as Japan use both) with no technical reason to prefer one over the other and no apparent desire for complete worldwide standardization.

Unless specified by the manufacturer to operate on both 50 and 60 Hz, appliances may not operate efficiently or even safely if used on anything other than the intended frequency.


Operating factors

Several factors influence the choice of frequency in an AC system. [1] Lighting, motors, transformers, generators and transmission lines all have characteristics which depend on the power frequency.

All of these factors interact and make selection of a power frequency a matter of considerable importance. The best frequency is a compromise between contradictory requirements. In the late 19th century, designers would pick a relatively high frequency for systems featuring transformers and arc lights, so as to economize on transformer materials, but would pick a lower frequency for systems with long transmission lines or feeding primarily motor loads or rotary converters for producing direct current. When large central generating stations became practical, the choice of frequency was made based on the nature of the intended load. Eventually the improvements in machine design allowed a single frequency to be used both for lighting and motor loads; a unified system improved the economics of electricity production since system load was more uniform during the course of a day.

Lighting

The first applications of commercial electric power were incandescent lighting and commutator-type electric motors. Both devices operate well on DC, but DC cannot be easily transmitted long distances at utilization voltage and also cannot be easily changed in voltage.

If an incandescent lamp is operated on a low-frequency current, the filament cools on each half-cycle of the alternating current, leading to perceptible change in brightness and flicker of the lamps; the effect is more pronounced with arc lamps, and the later mercury-vapor and fluorescent lamps.

Rotating machines

Commutator-type motors do not operate well on high-frequency AC since the rapid changes of current are opposed by the inductance of the motor field; even today, although commutator-type universal motors are common in 50 Hz and 60 Hz household appliances, they are small motors, less than 1 kW. The induction motor was found to work well on frequencies around 50 to 60 Hz but with the materials available in the 1890s would not work well at a frequency of, say, 133 Hz. There is a fixed relationship between the number of magnetic poles in the induction motor field, the frequency of the alternating current, and the rotation speed; so, a given standard speed limits the choice of frequency (and the reverse). Once induction motors became common, it was important to standardize frequency for compatibility with the customer's equipment.
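
The fixed relationship referred to above is the standard synchronous-speed formula, shown here with a short worked example:

    N_s = 120 f / P  (rev/min), where f is the supply frequency in Hz and P the number of poles.

For example, the 8-pole, 2000 RPM machines mentioned under History below correspond to f = P x N_s / 120 = 8 x 2000 / 120 = 133 1/3 Hz, while a 4-pole machine on a 50 Hz supply runs at N_s = 120 x 50 / 4 = 1500 RPM.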

Generators operated by slow-speed reciprocating engines will produce lower frequencies, for a given number of poles, than those operated by, for example, a high-speed steam turbine. For very slow prime mover speeds, it would be costly to build a generator with enough poles to provide a high AC frequency. As well, synchronizing two generators to the same speed was found to be easier at lower speeds. While belt drives were common as a way to increase the speed of slow engines, in very large ratings (thousands of kilowatts) these were expensive, inefficient and unreliable. Direct-driven generators off steam turbines after about 1906 favored higher frequencies. The steadier rotation speed of high-speed machines allowed for satisfactory operation of commutators in rotary converters. [2]

Direct-current power was not entirely displaced by alternating current and was useful in railway and electrochemical processes. Prior to the development of mercury arc valve rectifiers, rotary converters were used to produce DC power from AC. Like other commutator-type machines, these worked better with lower frequencies.

Transmission and transformers

With AC, transformers can be used to step down high transmission voltages to lower utilization voltage. Since, for a given power level, the dimensions of a transformer are roughly inversely proportional to frequency, a system with many transformers would be more economical at a higher frequency.

Electric power transmission over long lines favors lower frequencies. The effects of the distributed capacitance and inductance of the line are less at low frequency.

System interconnection

Generators can only be interconnected to operate in parallel if they are of the same frequency and wave-shape. By standardizing the frequency used, generators in a geographic area can be interconnected in a grid, providing reliability and cost savings.

History

(Figure: map of utility frequencies currently in use.)

Many different power frequencies were used in the 19th century.

Very early isolated AC generating schemes used arbitrary frequencies based on convenience for steam engine, water turbine and electrical generator design. Frequencies between 16⅔ Hz and 133⅓ Hz were used on different systems. For example, the city of Coventry, England, in 1895 had a unique 87 Hz single-phase distribution system that was in use until 1906. [3] The proliferation of frequencies grew out of the rapid development of electrical machines in the period 1880 through 1900. In the early incandescent lighting period, single-phase AC was common and typical generators were 8-pole machines operated at 2000 RPM, giving a frequency of 133 cycles per second.

Though many theories exist, and quite a few entertaining urban legends, there is little certitude in the details of the history of 60 Hz vs. 50 Hz.


The German company AEG (descended from a company founded by Edison in Germany) built the first German generating facility to run at 50 Hz, a choice thought by some to reflect a more "metric-friendly" number than 60. At the time, AEG had a virtual monopoly and their standard spread to the rest of Europe. After observing flicker of lamps operated by the 40 Hz power transmitted by the Lauffen-Frankfurt link in 1891, AEG raised their standard frequency to 50 Hz in 1891.[4]

Westinghouse Electric decided to standardize on a lower frequency to permit operation of both electric lighting and induction motors on the same generating system. Although 50 Hz was suitable for both, in 1890 Westinghouse considered that existing arc-lighting equipment operated slightly better on 60 Hz, and so that frequency was chosen.[5] Frequencies much below 50 Hz gave noticeable flicker of arc or incandescent lighting. The operation of Tesla's induction motor required a lower frequency than the 133 Hz common for lighting systems in 1890. In 1893 General Electric Corporation, which was affiliated with AEG in Germany, built a generating project at Mill Creek, California using 50 Hz, but changed to 60 Hz a year later to maintain market share with the Westinghouse standard.

25 Hz origins

The first generators at the Niagara Falls project, built by Westinghouse in 1895, were 25 Hz because the turbine speed had already been set before alternating current power transmission had been definitively selected. Westinghouse would have selected a low frequency of 30 Hz to drive motor loads, but the turbines for the project had already been specified at 250 RPM. The machines could have been made to deliver 16⅔ Hz power suitable for heavy commutator-type motors but the Westinghouse company objected that this would be undesirable for lighting, and suggested 33⅓ Hz. Eventually a compromise of 25 Hz, with 12 pole 250 RPM generators, was chosen. [6] Because the Niagara project was so influential on electric power systems design, 25 Hz prevailed as the North American standard for low-frequency AC.

40 Hz origins

A General Electric study concluded that 40 Hz would have been a good compromise between lighting, motor, and transmission needs, given the materials and equipment available in the first quarter of the 20th century. Several 40 Hz systems were built. The Lauffen-Frankfurt demonstration used 40 Hz to transmit power 175 km in 1891. A large interconnected 40 Hz network existed in north-east England (the Newcastle-upon-Tyne Electric Supply Company, NESCO) until the advent of the National Grid (UK) in the late 1920s, and projects in Italy used 42 Hz.[7] The oldest continuously-operating commercial hydroelectric power plant in the United States, at Mechanicville, New York, still produces electric power at 40 Hz and supplies power to the local 60 Hz transmission system through frequency changers. Industrial plants and mines in North America and Australia sometimes were built with 40 Hz electrical systems which were maintained until too uneconomic to continue. Although frequencies near 40 Hz found much commercial use, these were bypassed by the standardized frequencies of 25, 50 and 60 Hz preferred by higher-volume equipment manufacturers.

Standardization

In the early days of electrification, so many frequencies were used that no one value prevailed (London in 1918 had 10 different frequencies). As the 20th century continued, more power was produced at 60 Hz (North America) or 50 Hz (Europe and most of Asia). Standardization allowed international trade in electrical equipment. Much later, the use of standard frequencies allowed interconnection of power grids. It was not until after World War II, with the advent of affordable electrical consumer goods, that more uniform standards were enacted.

In Britain, implementation of the National Grid starting in 1926 compelled the standardization of frequencies among the many interconnected electrical service providers. The 50 Hz standard was completely established only after World War II.

Because of the cost of conversion, some parts of the distribution system may continue to operate on original frequencies even after a new frequency is chosen. 25 Hz power was used in Ontario, Quebec, the northern USA, and for railway electrification. In the 1950s, many 25 Hz systems, from the generators right through to household appliances, were converted and standardized. Some 25 Hz generators still exist at the Beck 1 and Rankine generating stations near Niagara Falls to provide power for large industrial customers who did not want to replace existing equipment; and some 25 Hz motors and a 25 Hz generating station exist in New Orleans for floodwater pumps [1]. Some of the metre gauge railway lines in Switzerland operate at 16⅔ Hz, which can be obtained from the local 50 Hz three-phase power grid through frequency converters.

In some cases, where most load was to be railway or motor loads, it was considered economic to generate power at 25 Hz and install rotary converters for 60 Hz distribution.[8] Converters for producing DC from alternating current were larger and more efficient at 25 Hz than at 60 Hz. Remnant fragments of older systems may be tied to the standard frequency system via a rotary converter or static inverter frequency changer. These allow energy to be interchanged between two power networks at different frequencies, but the systems are large, costly, and consume some energy in operation.

Rotating-machine frequency changers used to convert between 25 Hz and 60 Hz systems were awkward to design; a 60 Hz machine with 24 poles would turn at the same speed as a 25 Hz machine with 10 poles, making the machines large, slow-speed and expensive. A ratio of 60/30 would have simplified these designs, but the installed base at 25 Hz was too large to be abandoned economically.
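As a worked check of that arithmetic (not spelled out in the source): two armatures on a common shaft must satisfy 120·f1/P1 = 120·f2/P2, so P1/P2 = f1/f2 = 25/60 = 5/12, and the smallest even pole counts in that ratio are 10 and 24, giving 120·25/10 = 120·60/24 = 300 RPM. Had the ratio been 30/60, a 2-pole/4-pole pair running at 1800 RPM would have sufficed, a far smaller and cheaper machine.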

In the United States, the Southern California Edison company had standardized on 50 Hz.[9] Much of Southern California operated on 50 Hz and did not completely change the frequency of their generators and customer equipment to 60 Hz until around 1948. Some projects by the Au Sable Electric Company used 30 Hz at transmission voltages up to 110,000 volts in 1914.[10]

In Japan, the western part of the country (Kyoto and west) uses 60 Hz and the eastern part (Tokyo and east) uses 50 Hz. This originates in the first purchases of generators from AEG in 1895, installed for Tokyo, and General Electric in 1896, installed in Osaka.

Utility Frequencies in Use in 1897 in North America [11]

Cycles   Description
140      Wood arc-lighting dynamo
133      Stanley-Kelly Company
125      General Electric single-phase
66.7     Stanley-Kelly Company
62.5     General Electric "monocyclic"
60       Many manufacturers, becoming "increasingly common" in 1897
58.3     General Electric, Lachine Rapids
40       General Electric
33       General Electric at Portland, Oregon, for rotary converters
27       Crocker-Wheeler, for calcium carbide furnaces
25       Westinghouse, Niagara Falls, 2-phase - for operating motors

Even by the middle of the 20th century, utility frequencies were still not entirely standardized at the now-common 50 Hz or 60 Hz. In 1946, a reference manual for designers of radio equipment [12] listed the following now obsolete frequencies as in use. Many of these regions also had 50 cycle, 60 cycle or direct current supplies.

Frequencies in Use in 1946 (As well as 50 Hz and 60 Hz)

Cycles   Region
25       Canada (Southern Ontario), Panama Canal Zone(*), France, Germany, Sweden, UK, China, Hawaii, India, Manchuria
40       Jamaica, Belgium, Switzerland, UK, Federated Malay States, Egypt, West Australia(*)
42       Czechoslovakia, Hungary, Italy, Monaco(*), Portugal, Romania, Yugoslavia, Libya (Tripoli)
43       Argentina
45       Italy, Libya (Tripoli)
76       Gibraltar(*)
100      Malta(*), British East Africa

Where regions are marked (*), this is the only utility frequency shown for that region.


Railways
Main article: List of current systems for electric rail traction

Other power frequencies are also used. Germany, Austria, Switzerland, Sweden and Norway use traction power networks for railways, distributing single-phase AC at 16⅔ Hz. A frequency of 25 Hz was used for the Austrian railway Mariazeller Bahn and some railway systems in New York and Pennsylvania (Amtrak) in the USA. Other railway systems are energized at the local commercial power frequency, 50 Hz or 60 Hz. Traction power may be derived from commercial power supplies by frequency converters, or in some cases may be produced by dedicated generating stations. In the 19th century, frequencies as low as 8 Hz were contemplated for operation of electric railways with commutator motors.[13]

400 Hz

Frequencies as high as 400 Hz are used in aerospace and some special-purpose computer power supplies and hand-held machine tools. Such high frequencies cannot be economically transmitted long distances, so 400 Hz power systems are usually confined to a building or vehicle. Transformers and motors for 400 Hz are much smaller and lighter than at 50 or 60 Hz, which is an advantage in aircraft and ships.

Stability

Long-term stability and clock synchronization

Regulation of power system frequency for timekeeping accuracy was not commonplace until after 1926 and the invention of the electric clock driven by a synchronous motor. Network operators will regulate the daily average frequency so that clocks stay within a few seconds of correct time. In practice the nominal frequency is raised or lowered by a specific percentage to maintain synchronization. Over the course of a day, the average frequency is maintained at the nominal value within a few hundred parts per million.[14] In the continental European UCTE grid, the deviation between network phase time and UTC is calculated at 08:00 each day in a control center in Switzerland, and the target frequency is then adjusted by up to ±0.02% from 50 Hz as needed, to ensure a long-term frequency average of exactly 3600×24×50 cycles per day is maintained.[15] In North America, whenever the error exceeds 2 seconds for the east, 3 seconds for Texas, or 10 seconds for the west, a correction of ±0.02 Hz (0.033%) is applied. Time error corrections start and end either on the hour or on the half hour.[16][17] A real-time frequency meter for power generation in the United Kingdom is available online.[2] Smaller power systems may not maintain frequency with the same degree of accuracy.
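A minimal numerical sketch of this time-error bookkeeping, with all values assumed purely for illustration (they are not taken from any particular grid code), might look as follows in MATLAB:

    % Accumulated clock error of a synchronous-motor clock, and the +/-0.02 %
    % frequency offset used to work it off (illustrative values only).
    f_nom  = 50;                                  % nominal frequency, Hz
    f_avg  = 49.99;                               % assumed hourly average frequency, Hz
    t_obs  = 3600;                                % observation interval, s
    t_err  = (f_avg - f_nom)/f_nom*t_obs          % clock error: -0.72 s (clock runs slow)
    f_corr = f_nom*(1 + 0.0002*sign(-t_err))      % offset target: 50.01 Hz while correcting
    t_clr  = abs(t_err)/(abs(f_corr - f_nom)/f_nom*3600)   % about 1 hour to clear the error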

Frequency and load

The primary reason for accurate frequency control is to allow the flow of alternating current power from multiple generators through the network to be controlled. The trend in system frequency is a measure of mismatch between demand and generation, and so is a necessary parameter for load control in interconnected systems.

Frequency of the system will vary as load and generation change. Increasing the mechanical input power to a synchronous generator will not greatly affect the system frequency, but will produce more electric power from that unit. During a severe overload caused by tripping or failure of generators or transmission lines, the power system frequency will decline, due to an imbalance of load versus generation. Loss of an interconnection while exporting power (relative to system total generation) will cause system frequency to rise. AGC (automatic generation control) is used to maintain scheduled frequency and interchange power flows. Control systems in power plants detect changes in the network-wide frequency and adjust mechanical power input to generators to bring the frequency back to its target value. This counteraction usually takes a few tens of seconds due to the large rotating masses involved. Temporary frequency changes are an unavoidable consequence of changing demand. Exceptional or rapidly changing mains frequency is often a sign that an electricity distribution network is operating near its capacity limits; dramatic examples of this can sometimes be observed shortly before major outages.
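How a load change maps into a frequency deviation can be illustrated with a very small single-area simulation in MATLAB; all parameters below are assumed for illustration only:

    % Frequency deviation after a 1 % load step, with primary (droop) control only.
    % Per-unit swing equation: 2H*d(df)/dt = dPm - dPL - D*df.
    H = 5; D = 1; R = 0.05; Tg = 0.5;     % inertia, load damping, droop, governor lag (assumed)
    dPL = 0.01;                           % 1 % step increase in load, pu
    dt = 0.01; T = 0:dt:30;
    df = zeros(size(T)); dPm = zeros(size(T));
    for k = 1:numel(T)-1
        dPm(k+1) = dPm(k) + dt/Tg*(-df(k)/R - dPm(k));         % governor/turbine lag
        df(k+1)  = df(k)  + dt/(2*H)*(dPm(k) - dPL - D*df(k)); % acceleration of the area
    end
    plot(T, 50*df), xlabel('time (s)'), ylabel('\Deltaf (Hz on a 50 Hz base)')

With these assumed numbers the frequency settles about 0.024 Hz low; primary control alone leaves a steady-state offset, which is what supplementary (integral) frequency control is there to remove.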

Frequency protection relays on the power system network sense the decline of frequency and automatically initiate load shedding or tripping of interconnection lines, to preserve the operation of at least part of the network. Even small frequency deviations (e.g. 0.5 Hz on a 50 Hz or 60 Hz network) will result in automatic load shedding or other control actions to restore system frequency.

Smaller power systems, not extensively interconnected with many generators and loads, will not maintain frequency with the same degree of accuracy. Where system frequency is not tightly regulated during heavy load periods, the system operators may allow system frequency to rise during periods of light load, to maintain a daily average frequency of acceptable accuracy.

Audible noise and interference

AC-powered appliances can give off a characteristic hum (often referred to as the "60 cycle hum" or "mains hum") at multiples of the AC power frequency that they use. This often occurs in poorly made amplifiers. Most countries have chosen their television standard to approximate their mains supply frequency. This helps prevent power line hum and magnetic interference from causing visible beat frequencies in the displayed picture.

See also
Mains electricity
Mains power systems
List of countries with mains power plugs, voltages and frequencies
Power connector


Further reading
Furfari, F.A., The Evolution of Power-Line Frequencies 133⅓ to 25 Hz, Industry Applications Magazine, IEEE, Sep/Oct 2000, Volume 6, Issue 5, Pages 12-14, ISSN 1077-2618.

Rushmore, D.B., Frequency, AIEE Transactions, Volume 31, 1912, pages 955-983, and discussion on pages 974-978.

Blalock, Thomas J., Electrification of a Major Steel Mill - Part II Development of the 25 Hz System, Industry Applications Magazine, IEEE, Sep/Oct 2005, Pages 9-12, ISSN 1077-2618.

References
1. B. G. Lamme, The Technical Story of the Frequencies, Transactions AIEE, January 1918; reprinted in the Baltimore Amateur Radio Club newsletter The Modulator, January-March 2007.
2. Lamme, The Technical Story of the Frequencies.
3. Gordon Woodward, City of Coventry Single and Two Phase Generation and Distribution, retrieved from http://www.iee.org/OnComms/pn/History/HistoryWk_Single_&_2_phase.pdf, October 30, 2007.
4. Owen, The Origins of 60-Hz as a Power Frequency.
5. Owen, E.L., The Origins of 60-Hz as a Power Frequency, Industry Applications Magazine, IEEE, Volume 3, Issue 6, Nov.-Dec. 1997, Pages 8, 10, 12-14.
6. Lamme, The Technical Story of the Frequencies.
7. Thomas P. Hughes, Networks of Power: Electrification in Western Society 1880-1930, The Johns Hopkins University Press, Baltimore, 1983, ISBN 0-8018-2873-2, pages 282-283.
8. Samuel Insull, Central-Station Electric Service, private printing, Chicago, 1915, available on the Internet Archive, page 72.
9. Central Station Engineers of the Westinghouse Electric Corporation, Electrical Transmission and Distribution Reference Book, 4th Ed., Westinghouse Electric Corporation, East Pittsburgh, PA, 1950, no ISBN.
10. Hughes, as above.
11. Edwin J. Houston and Arthur Kennelly, Recent Types of Dynamo-Electric Machinery, copyright American Technical Book Company 1897, published by P.F. Collier and Sons, New York, 1902.
12. H.T. Kohlhaas (ed.), Reference Data for Radio Engineers, 2nd Edition, Federal Telephone and Radio Corporation, New York, 1946, no ISBN.
13. B. G. Lamme, The Technical Story of the Frequencies, Transactions AIEE, January 1918; reprinted in the Baltimore Amateur Radio Club newsletter The Modulator, January-March 2007.
14. Donald G. Fink and H. Wayne Beaty, Standard Handbook for Electrical Engineers, Eleventh Edition, McGraw-Hill, New York, 1978, ISBN 0-07-020974-X, pages 16-15, 16-16.
15. Load Frequency Control and Performance
16. Manual Time Error Correction


Retrieved from "http://en.wikipedia.org/wiki/Utility_frequency"

Neural network

Traditionally, the term neural network had been used to refer to a network or circuit of biological neurons. The modern usage of the term often refers to artificial neural networks, which are composed of artificial neurons or nodes. Thus the term has two distinct usages:

1. Biological neural networks are made up of real biological neurons that are connected or functionally related in the peripheral nervous system or the central nervous system. In the field of neuroscience, they are often identified as groups of neurons that perform a specific physiological function in laboratory analysis.

2. Artificial neural networks are made up of interconnecting artificial neurons (programming constructs that mimic the properties of biological neurons). Artificial neural networks may either be used to gain an understanding of biological neural networks, or for solving artificial intelligence problems without necessarily creating a model of a real biological system. The real, biological nervous system is highly complex and includes some features that may seem superfluous based on an understanding of artificial networks.

This article focuses on the relationship between the two concepts; for detailed coverage of each, refer to the separate articles Biological neural network and Artificial neural network.

Overview

In general a biological neural network is composed of a group or groups of chemically connected or functionally associated neurons. A single neuron may be connected to many other neurons and the total number of neurons and connections in a network may be extensive. Connections, called synapses, are usually formed from axons to dendrites, though dendrodendritic microcircuits[1] and other connections are possible. Apart from the electrical signaling, there are other forms of signaling that arise from neurotransmitter diffusion, which have an effect on electrical signaling. As such, neural networks are extremely complex.

Artificial intelligence and cognitive modeling try to simulate some properties of neural networks. While similar in their techniques, the former has the aim of solving particular tasks, while the latter aims to build mathematical models of biological neural systems.

In the artificial intelligence field, artificial neural networks have been applied successfully to speech recognition, image analysis and adaptive control, in order to construct software agents (in computer and video games) or autonomous robots. Most of the currently employed artificial neural networks for artificial intelligence are based on statistical estimation, optimization and control theory.

The cognitive modelling field involves the physical or mathematical modeling of the behaviour of neural systems; ranging from the individual neural level (e.g. modelling the spike response curves of neurons to a stimulus), through the neural cluster level (e.g. modelling the release and effects of dopamine in the basal ganglia) to the complete organism (e.g. behavioural modelling of the organism's response to stimuli).

History of the neural network analogy
Main article: Connectionism

The concept of neural networks started in the late 1800s as an effort to describe how the human mind worked. These ideas began to be applied to computational models with Turing's B-type machines and the Perceptron.

In the early 1950s, Friedrich Hayek was one of the first to posit the idea of spontaneous order in the brain arising out of decentralized networks of simple units (neurons). In the late 1940s, Donald Hebb made one of the first hypotheses for a mechanism of neural plasticity (i.e. learning), Hebbian learning. Hebbian learning is considered to be a 'typical' unsupervised learning rule and it (and variants of it) was an early model for long-term potentiation.

The Perceptron is essentially a linear classifier: it classifies input data x, using parameters w and b, with the output function f = w'x + b. Its parameters are adapted with an ad hoc rule similar to stochastic steepest gradient descent. Because the inner product is a linear operator in the input space, the Perceptron can only perfectly classify a set of data for which the different classes are linearly separable in the input space, while it often fails completely for non-separable data. While the development of the algorithm initially generated some enthusiasm, partly because of its apparent relation to biological mechanisms, the later discovery of this inadequacy caused such models to be abandoned until the introduction of non-linear models into the field.
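A minimal sketch of such a classifier in MATLAB (data, learning rate and epoch count are all assumed for illustration, not any particular historical implementation):

    % Perceptron learning on a linearly separable toy problem (logical AND).
    X = [0 0; 0 1; 1 0; 1 1]';       % four 2-D samples, one per column
    y = [-1 -1 -1 +1];               % target classes
    w = zeros(2,1); b = 0; eta = 1;  % weights, bias, learning rate (assumed)
    for epoch = 1:20
        for i = 1:size(X,2)
            yhat = sign(w'*X(:,i) + b);
            if yhat == 0, yhat = -1; end
            if yhat ~= y(i)                      % update only on misclassified samples
                w = w + eta*y(i)*X(:,i);
                b = b + eta*y(i);
            end
        end
    end
    % sign(w'*x + b) now separates the two classes; on non-separable data such as
    % XOR the loop never settles, which is the inadequacy noted above.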

The Cognitron (1975) was an early multilayered neural network with a training algorithm. The actual structure of the network and the methods used to set the interconnection weights change from one neural strategy to another, each with its advantages and disadvantages. Networks can propagate information in one direction only, or they can bounce back and forth until self-activation at a node occurs and the network settles on a final state. The ability for bi-directional flow of inputs between neurons/nodes was produced with Hopfield's network (1982), and specialization of these node layers for specific purposes was introduced through the first hybrid network.

The parallel distributed processing of the mid-1980s became popular under the name connectionism.

Page 33: kodanda pjt-06

The rediscovery of the backpropagation algorithm was probably the main reason behind the repopularisation of neural networks after the publication of "Learning Internal Representations by Error Propagation" in 1986 (though backpropagation itself dates from 1974). The original network utilised multiple layers of weighted-sum units of the type f = g(w'x + b), where g was a sigmoid or logistic function such as used in logistic regression. Training was done by a form of stochastic steepest gradient descent. The employment of the chain rule of differentiation in deriving the appropriate parameter updates results in an algorithm that seems to 'backpropagate errors', hence the nomenclature. However, it is essentially a form of gradient descent. Determining the optimal parameters in a model of this type is not trivial, and steepest gradient descent methods cannot be relied upon to give the solution without a good starting point. In recent times, networks with the same architecture as the backpropagation network are referred to as Multi-Layer Perceptrons. This name does not impose any limitations on the type of algorithm used for learning.
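A compact MATLAB sketch of this kind of network and update rule (layer sizes, data, learning rate and iteration count are all assumed for illustration):

    % One-hidden-layer network trained by backpropagation on XOR.
    X = [0 0; 0 1; 1 0; 1 1]'; y = [0 1 1 0];   % inputs (columns) and targets
    nh = 4; eta = 0.5;                          % hidden units and step size (assumed)
    W1 = randn(nh,2); b1 = zeros(nh,1);         % input-to-hidden weights
    W2 = randn(1,nh); b2 = 0;                   % hidden-to-output weights
    g  = @(z) 1./(1+exp(-z));                   % logistic (sigmoid) unit
    for it = 1:20000
        h  = g(W1*X + repmat(b1,1,4));          % forward pass
        o  = g(W2*h + b2);
        do = (o - y).*o.*(1-o);                 % output-layer error term
        dh = (W2'*do).*h.*(1-h);                % error 'backpropagated' via the chain rule
        W2 = W2 - eta*(do*h');  b2 = b2 - eta*sum(do);
        W1 = W1 - eta*(dh*X');  b1 = b1 - eta*sum(dh,2);
    end
    % round(o) should reproduce XOR; the outcome depends on the random starting
    % weights, illustrating the sensitivity to the starting point mentioned above.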

The backpropagation network generated much enthusiasm at the time and there was much controversy about whether such learning could be implemented in the brain or not, partly because a mechanism for reverse signalling was not obvious at the time, but most importantly because there was no plausible source for the 'teaching' or 'target' signal.

The brain, neural networks and computers

Neural networks, as used in artificial intelligence, have traditionally been viewed as simplified models of neural processing in the brain, even though the relation between this model and brain biological architecture is debated.

A subject of current research in theoretical neuroscience is the question surrounding the degree of complexity and the properties that individual neural elements should have to reproduce something resembling animal intelligence.

Historically, computers evolved from the von Neumann architecture, which is based on sequential processing and execution of explicit instructions. On the other hand, the origins of neural networks are based on efforts to model information processing in biological systems, which may rely largely on parallel processing as well as implicit instructions based on recognition of patterns of 'sensory' input from external sources. In other words, at its very heart a neural network is a complex statistical processor (as opposed to being tasked to sequentially process and execute).

Neural networks and artificial intelligence
Main article: Artificial neural network

An artificial neural network (ANN), also called a simulated neural network (SNN) or commonly just neural network (NN), is an interconnected group of artificial neurons that uses a mathematical or computational model for information processing based on a connectionist approach to computation. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network.

In more practical terms neural networks are non-linear statistical data modeling or decision making tools. They can be used to model complex relationships between inputs and outputs or to find patterns in data.

Background

An artificial neural network involves a network of simple processing elements (artificial neurons) which can exhibit complex global behaviour, determined by the connections between the processing elements and element parameters. Artificial neurons were first proposed in 1943 by Warren McCulloch, a neurophysiologist, and Walter Pitts, an MIT logician.[1] One classical type of artificial neural network is the Hopfield net.

In a neural network model simple nodes, which can be called variously "neurons", "neurodes", "Processing Elements" (PE) or "units", are connected together to form a network of nodes — hence the term "neural network". While a neural network does not have to be adaptive per se, its practical use comes with algorithms designed to alter the strength (weights) of the connections in the network to produce a desired signal flow.

In modern software implementations of artificial neural networks the approach inspired by biology has more or less been abandoned for a more practical approach based on statistics and signal processing. In some of these systems neural networks, or parts of neural networks (such as artificial neurons) are used as components in larger systems that combine both adaptive and non-adaptive elements.

The concept of a neural network appears to have first been proposed by Alan Turing in his 1948 paper "Intelligent Machinery".

Applications

The utility of artificial neural network models lies in the fact that they can be used to infer a function from observations and also to use it. This is particularly useful in applications where the complexity of the data or task makes the design of such a function by hand impractical.

Real life applications

The tasks to which artificial neural networks are applied tend to fall within the following broad categories:

Function approximation, or regression analysis, including time series prediction and modelling.

Classification, including pattern and sequence recognition, novelty detection and sequential decision making.

Data processing, including filtering, clustering, blind signal separation and compression.

Application areas include system identification and control (vehicle control, process control), game-playing and decision making (backgammon, chess, racing), pattern recognition (radar systems, face identification, object recognition, etc.), sequence recognition (gesture, speech, handwritten text recognition), medical diagnosis, financial applications, data mining (or knowledge discovery in databases, "KDD"), visualization and e-mail spam filtering.

Neural network software

Main article: Neural network software
Neural network software is used to simulate, research, develop and apply artificial neural networks, biological neural networks and, in some cases, a wider array of adaptive systems.

Learning paradigms

There are three major learning paradigms, each corresponding to a particular abstract learning task. These are supervised learning, unsupervised learning and reinforcement learning. Usually any given type of network architecture can be employed in any of those tasks.

Supervised learning

In supervised learning, we are given a set of example pairs and the aim is to find a function f in the allowed class of functions that matches the examples. In other words, we wish to infer the mapping implied by the data; the cost function is related to the mismatch between our mapping and the data.

Unsupervised learning

In unsupervised learning, we are given some data x and a cost function to be minimized, which can be any function of the data x and the network's output f. The cost function is determined by the task formulation. Most applications fall within the domain of estimation problems such as statistical modeling, compression, filtering, blind source separation and clustering.

Reinforcement learning

In reinforcement learning, data x is usually not given, but generated by an agent's interactions with the environment. At each point in time t, the agent performs an action y_t and the environment generates an observation x_t and an instantaneous cost c_t, according to some (usually unknown) dynamics. The aim is to discover a policy for selecting actions that minimises some measure of a long-term cost, i.e. the expected cumulative cost. The environment's dynamics and the long-term cost for each policy are usually unknown, but can be estimated. ANNs are frequently used in reinforcement learning as part of the overall algorithm. Tasks that fall within the paradigm of reinforcement learning are control problems, games and other sequential decision making tasks.
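As a very small illustration of this interaction loop, the sketch below uses a toy two-state problem with assumed dynamics and costs, and a simple tabular estimate of long-term cost rather than an ANN:

    % Toy reinforcement-learning loop: act, observe, incur cost, update estimate.
    nS = 2; nA = 2;                        % states and actions (assumed toy problem)
    Q = zeros(nS,nA);                      % estimated long-term cost of each (state, action)
    alpha = 0.1; gamma = 0.9; pexp = 0.1;  % step size, discount, exploration rate (assumed)
    s = 1;
    for t = 1:5000
        if rand < pexp
            a = ceil(rand*nA);             % occasionally try a random action
        else
            [cmin, a] = min(Q(s,:));       % otherwise pick the cheapest-looking action
        end
        if a == 2
            s2 = 3 - s;                    % assumed dynamics: action 2 toggles the state
        else
            s2 = s;                        % action 1 keeps the state
        end
        c = double(s2 ~= 2);               % assumed cost: 0 when the next state is 2, else 1
        Q(s,a) = Q(s,a) + alpha*(c + gamma*min(Q(s2,:)) - Q(s,a));   % update the estimate
        s = s2;
    end
    % The learned policy moves to state 2 and stays there, minimising cumulative cost.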

Learning algorithms

There are many algorithms for training neural networks; most of them can be viewed as a straightforward application of optimization theory and statistical estimation.

Evolutionary computation methods, simulated annealing, expectation maximization and non-parametric methods are among other commonly used methods for training neural networks. See also machine learning.

Recent developments in this field have also seen particle swarm optimization and other swarm intelligence techniques used in the training of neural networks.

Neural networks and neuroscience

Theoretical and computational neuroscience is the field concerned with the theoretical analysis and computational modeling of biological neural systems. Since neural systems are intimately related to cognitive processes and behaviour, the field is closely related to cognitive and behavioural modeling.

The aim of the field is to create models of biological neural systems in order to understand how biological systems work. To gain this understanding, neuroscientists strive to make a link between observed biological processes (data), biologically plausible mechanisms for neural processing and learning (biological neural network models) and theory (statistical learning theory and information theory).

Types of models

Many models are used in the field, each defined at a different level of abstraction and trying to model different aspects of neural systems. They range from models of the short-term behaviour of individual neurons, through models of how the dynamics of neural circuitry arise from interactions between individual neurons, to models of how behaviour can arise from abstract neural modules that represent complete subsystems. These include models of the long-term and short-term plasticity of neural systems and its relation to learning and memory, from the individual neuron to the system level.

Current research

While initially research had been concerned mostly with the electrical characteristics of neurons, a particularly important part of the investigation in recent years has been the exploration of the role of neuromodulators such as dopamine, acetylcholine, and serotonin on behaviour and learning.


Biophysical models, such as BCM theory, have been important in understanding mechanisms for synaptic plasticity, and have had applications in both computer science and neuroscience. Research is ongoing in understanding the computational algorithms used in the brain, with some recent biological evidence for radial basis networks and neural backpropagation as mechanisms for processing data.

Criticism

A common criticism of neural networks, particularly in robotics, is that they require a large diversity of training for real-world operation. Dean Pomerleau, in his research presented in the paper "Knowledge-based Training of Artificial Neural Networks for Autonomous Robot Driving," uses a neural network to train a robotic vehicle to drive on multiple types of roads (single lane, multi-lane, dirt, etc.). A large amount of his research is devoted to (1) extrapolating multiple training scenarios from a single training experience, and (2) preserving past training diversity so that the system does not become overtrained (if, for example, it is presented with a series of right turns, it should not learn to always turn right). These issues are common in neural networks that must decide from amongst a wide variety of responses.

A. K. Dewdney, a former Scientific American columnist, wrote in 1997, "Although neural nets do solve a few toy problems, their powers of computation are so limited that I am surprised anyone takes them seriously as a general problem-solving tool." (Dewdney, p.82)

Arguments against Dewdney's position are that neural nets have been successfully used to solve many complex and diverse tasks, ranging from autonomously flying aircraft[2] to detecting credit card fraud[3].

Technology writer Roger Bridgman commented on Dewdney's statements about neural nets:

Neural networks, for instance, are in the dock not only because they have been hyped to high heaven, (what hasn't?) but also because you could create a successful net without understanding how it worked: the bunch of numbers that captures its behaviour would in all probability be "an opaque, unreadable table...valueless as a scientific resource". In spite of his emphatic declaration that science is not technology, Dewdney seems here to pillory neural nets as bad science when most of those devising them are just trying to be good engineers. An unreadable table that a useful machine could read would still be well worth having.[2]