
Advances in Intelligent Systems and Computing 546

Kusum Deep · Jagdish Chand Bansal · Kedar Nath Das · Arvind Kumar Lal · Harish Garg · Atulya K. Nagar · Millie Pant, Editors

Proceedings of Sixth International Conference on Soft Computing for Problem Solving: SocProS 2016, Volume 1


Advances in Intelligent Systems and Computing

Volume 546

Series editor

Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland; e-mail: [email protected]


About this Series

The series “Advances in Intelligent Systems and Computing” contains publications on theory, applications, and design methods of Intelligent Systems and Intelligent Computing. Virtually all disciplines such as engineering, natural sciences, computer and information science, ICT, economics, business, e-commerce, environment, healthcare and life science are covered. The list of topics spans all the areas of modern intelligent systems and computing.

The publications within “Advances in Intelligent Systems and Computing” are primarily textbooks and proceedings of important conferences, symposia and congresses. They cover significant recent developments in the field, both of a foundational and applicable character. An important characteristic feature of the series is the short publication time and worldwide distribution. This permits a rapid and broad dissemination of research results.

Advisory Board

Chairman

Nikhil R. Pal, Indian Statistical Institute, Kolkata, India; e-mail: [email protected]

Members

Rafael Bello Perez, Universidad Central “Marta Abreu” de Las Villas, Santa Clara, Cuba; e-mail: [email protected]

Emilio S. Corchado, University of Salamanca, Salamanca, Spain; e-mail: [email protected]

Hani Hagras, University of Essex, Colchester, UK; e-mail: [email protected]

László T. Kóczy, Széchenyi István University, Győr, Hungary; e-mail: [email protected]

Vladik Kreinovich, University of Texas at El Paso, El Paso, USA; e-mail: [email protected]

Chin-Teng Lin, National Chiao Tung University, Hsinchu, Taiwan; e-mail: [email protected]

Jie Lu, University of Technology, Sydney, Australia; e-mail: [email protected]

Patricia Melin, Tijuana Institute of Technology, Tijuana, Mexico; e-mail: [email protected]

Nadia Nedjah, State University of Rio de Janeiro, Rio de Janeiro, Brazil; e-mail: [email protected]

Ngoc Thanh Nguyen, Wroclaw University of Technology, Wroclaw, Poland; e-mail: [email protected]

Jun Wang, The Chinese University of Hong Kong, Shatin, Hong Kong; e-mail: [email protected]

More information about this series at http://www.springer.com/series/11156


Kusum Deep · Jagdish Chand Bansal · Kedar Nath Das · Arvind Kumar Lal · Harish Garg · Atulya K. Nagar · Millie Pant, Editors

Proceedings of Sixth International Conference on Soft Computing for Problem Solving: SocProS 2016, Volume 1


Editors

Kusum Deep
Department of Mathematics
Indian Institute of Technology Roorkee
Roorkee, India

Jagdish Chand Bansal
Department of Mathematics
South Asian University
New Delhi, India

Kedar Nath Das
Department of Mathematics
National Institute of Technology, Silchar
Silchar, Assam, India

Arvind Kumar Lal
School of Mathematics
Thapar Institute of Engineering and Technology University
Patiala, Punjab, India

Harish Garg
School of Mathematics
Thapar University Patiala
Patiala, Punjab, India

Atulya K. Nagar
Department of Mathematics and Computer Science
Liverpool Hope University
Liverpool, UK

Millie Pant
Department of Applied Science and Engineering
Indian Institute of Technology Roorkee
Roorkee, India

ISSN 2194-5357  ISSN 2194-5365 (electronic)
Advances in Intelligent Systems and Computing
ISBN 978-981-10-3321-6  ISBN 978-981-10-3322-3 (eBook)
DOI 10.1007/978-981-10-3322-3

Library of Congress Control Number: 2017931564

© Springer Nature Singapore Pte Ltd. 2017
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Printed on acid-free paper

This Springer imprint is published by Springer Nature. The registered company is Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore.


Contents

Adaptive Scale Factor Based Differential Evolution Algorithm . . . . 1
Nikky Choudhary, Harish Sharma, and Nirmala Sharma

Adaptive Balance Factor in Particle Swarm Optimization . . . . 12
Siddhi Kumari Sharma and R.S. Sharma

Community Detection in Complex Networks: A Novel Approach Based on Ant Lion Optimizer . . . . 22
Maninder Kaur and Abhay Mahajan

Hybrid SOMA: A Tool for Optimizing TMD Parameters . . . . 35
Shilpa Pal, Dipti Singh, and Varun Kumar

Fast Convergent Spider Monkey Optimization Algorithm . . . . 42
Neetu Agarwal and S.C. Jain

Bi-level Problem and SMD Assessment Delinquent for Single Impartial Bi-level Optimization . . . . 52
Srinivas Vadali, Deekshitulu G.V.S.R., and Murthy J.V.R.

An Adaptive Firefly Algorithm for Load Balancing in Cloud Computing . . . . 63
Gundipika Kaur and Kiranbir Kaur

Review on Inertia Weight Strategies for Particle Swarm Optimization . . . . 73
Ankush Rathore and Harish Sharma

Hybridized Gravitational Search Algorithms with Real Coded Genetic Algorithms for Integer and Mixed Integer Optimization Problems . . . . 84
Amarjeet Singh and Kusum Deep

Spider Monkey Optimization Algorithm Based on Metropolis Principle . . . . 113
Garima Hazrati, Harish Sharma, Nirmala Sharma, Vani Agarwal, and D.C. Tiwari

Introducing Biasedness in NSGA-II to Construct Boolean Function Having Best Trade-Off Among Its Properties . . . . 122
Rajni Goyal and Anupama Panigrahi

Generating Distributed Query Plans Using Modified Cuckoo Search Algorithm . . . . 128
T.V. Vijay Kumar and Monika Yadav

Locally Informed Shuffled Frog Leaping Algorithm . . . . 141
Pragya Sharma, Nirmala Sharma, and Harish Sharma

An Astute Artificial Bee Colony Algorithm . . . . 153
Avadh Kishor, Manik Chandra, and Pramod Kumar Singh

Exploitative Gravitational Search Algorithm . . . . 163
Aditi Gupta, Nirmala Sharma, and Harish Sharma

A Systematic Review of Software Testing Using Evolutionary Techniques . . . . 174
Deepti Bala Mishra, Rajashree Mishra, Kedar Nath Das, and Arup Abhinna Acharya

On the Hybridization of Spider Monkey Optimization and Genetic Algorithms . . . . 185
Anivesh Agrawal, Pushpa Farswan, Vani Agrawal, D.C. Tiwari, and Jagdish Chand Bansal

An Analysis of Modeling and Optimization Production Cost Through Fuzzy Linear Programming Problem with Symmetric and Right Angle Triangular Fuzzy Number . . . . 197
Rajesh Kumar Chandrawat, Rakesh Kumar, B.P. Garg, Gaurav Dhiman, and Sumit Kumar

A New Intuitionistic Fuzzy Entropy of Order-α with Applications in Multiple Attribute Decision Making . . . . 212
Rajesh Joshi and Satish Kumar

The Relationship Between Intuitionistic Fuzzy Programming and Goal Programming . . . . 220
Sandeep Kumar

A Fuzzy Dual SBM Model with Fuzzy Weights: An Application to the Health Sector . . . . 230
Alka Arya and Shiv Prasad Yadav

An Approach for Purchasing a Sedan Car from Indian Car Market Under Fuzzy Environment . . . . 239
Mukesh Chand, Deeksha Hatwal, Shalini Singh, Varsha Mundepi, Vidhi Raturi, Rashmi, and Shwetank Avikal

Fuzzy Subtractive Clustering for Polymer Data Mining for SAW Sensor Array Based Electronic Nose . . . . 245
T. Sonamani Singh, Prabha Verma, and R.D.S. Yadava

Clustering of Categorical Data Using Intuitionistic Fuzzy k-modes . . . . 254
Darshan Mehta and B.K. Tripathy

Some Properties of Rough Sets on Intuitionistic Fuzzy Approximation Spaces and Their Application in Computer Vision . . . . 264
B.K. Tripathy and R.R. Mohanty

Interval Type-II Fuzzy Multiple Group Decision Making Based Ranking for Customer Purchase Frequency Determinants in Online Brand Community . . . . 276
S. Choudhury, A.K. Patra, A.K. Parida, and S. Chatterjee

User Localization in an Indoor Environment Using Fuzzy Hybrid of Particle Swarm Optimization & Gravitational Search Algorithm with Neural Networks . . . . 286
Jayant G. Rohra, Boominathan Perumal, Swathi Jamjala Narayanan, Priya Thakur, and Rajen B. Bhatt

An Analysis of Decision Theoretic Kernalized Rough Intuitionistic Fuzzy C-Means . . . . 296
Ryan Serrao, B.K. Tripathy, and A. Jayaram Reddy

Analysis of Fuzzy Controller for H-bridge Flying Capacitor Multilevel Converter . . . . 307
P. Ponnambalam, K. Aroul, P. Prasad Reddy, and K. Muralikumar

Analysis of Stacked Multicell Converter with Fuzzy Controller . . . . 318
P. Ponnambalam, M. Praveenkumar, Challa Babu, and P. Dhambi Raj

Implementation of Fuzzy Logic on FORTRAN Coded Free Convection Around Vertical Tube . . . . 331
Jashanpreet Singh, Chanpreet Singh, and Satish Kumar

Availability Analysis of the Butter Oil Processing Plant Using Intuitionistic Fuzzy Differential Equations . . . . 342
Neha Singhal and S.P. Sharma

Applying Fuzzy Probabilistic PROMETHEE on a Multi-Criteria Decision Problem . . . . 353
Susmita Bandyopadhyay and Indraneel Mandal

Erratum to: An Analysis of Decision Theoretic Kernalized Rough Intuitionistic Fuzzy C-Means . . . . E1
Ryan Serrao, B.K. Tripathy, and A. Jayaram Reddy

Author Index . . . . 361


About the Editors

Prof. Kusum Deep is Professor at the Department of Mathematics, Indian Institute of Technology Roorkee, India. Over the past 25 years, her research has made her a central international figure in the areas of nature-inspired optimization techniques, genetic algorithms and particle swarm optimization.

Dr. Jagdish Chand Bansal is Assistant Professor with South Asian University, New Delhi, India. Holding an excellent academic record and having written several research papers in journals of national and international repute, he is an outstanding researcher in the field of swarm intelligence at both national and international levels.

Dr. Kedar Nath Das is Assistant Professor at the Department of Mathematics, National Institute of Technology, Silchar, Assam, India. Over the past 10 years, he has made substantial contributions to research on ‘soft computing’. He has published several research papers in prominent national and international journals. His chief area of interest includes evolutionary and bio-inspired algorithms for optimization.

Dr. Arvind Kumar Lal is currently associated with the School of Mathematics and Computer Applications at Thapar University, Patiala. He received his B.Sc. Honors (mathematics) and M.Sc. (mathematics) from Bihar University, Muzaffarpur in 1984 and 1987, respectively. He completed his Ph.D. (mathematics) at the University of Roorkee (now the IIT Roorkee) in 1995. Dr. Lal has over 130 publications in journals and conference proceedings to his credit. His research areas include applied mathematics (modeling of stellar structure and pulsations), reliability analysis and numerical analysis.

Dr. Harish Garg is Assistant Professor at the School of Mathematics at Thapar University, Patiala, Punjab, India. He received his B.Sc. (computer applications) and M.Sc. (mathematics) from Punjabi University, Patiala before completing his Ph.D. (applied mathematics) at the Indian Institute of Technology Roorkee. He is currently teaching undergraduate and postgraduate students and is pursuing innovative and insightful research in the area of reliability theory using evolutionary algorithms and fuzzy set theory, with their application in numerous industrial engineering areas. Dr. Garg has produced 62 publications, which include 6 book chapters, 50 journal papers and 6 conference papers.

Prof. Atulya K. Nagar holds the Foundation Chair as Professor of Mathematical Sciences and is Dean of the Faculty of Science at Liverpool Hope University, UK. Professor Nagar is an internationally respected scholar working on the cutting edge of theoretical computer science, applied mathematical analysis, operations research and systems engineering.

Dr. Millie Pant is Associate Professor at the Department of Paper Technology, Indian Institute of Technology Roorkee, India. She has published several research papers in national and international journals and is a prominent figure in the field of swarm intelligence and evolutionary algorithms.


Adaptive Scale Factor Based Differential Evolution Algorithm

Nikky Choudhary(B), Harish Sharma, and Nirmala Sharma

Rajasthan Technical University, Kota, India; e-mail: [email protected]

Abstract. In DE, the exploration and exploitation capabilities depend on two processes, namely mutation and crossover. In these two processes, the exploration and exploitation capabilities are balanced by tuning the scale factor F and the crossover probability CR. In DE, for high values of CR and F, there is always a fair chance of skipping the true solution due to the large step size in the solution search space. Therefore, in this article, a self-adaptive scale factor strategy is proposed in which the scale factor is adapted over the iterations. In the proposed strategy, the value of F is kept high in the early iterations to keep the step size large, while in later iterations the value of F is kept small to keep the step size short. The proposed strategy is named the Adaptive Scale Factor based Differential Evolution (ASFDE) algorithm. Further, to increase the exploration capability of the algorithm, a limit is associated with every solution to count the number of iterations for which it is not updated. If this count crosses the pre-defined limit, the solution is randomly re-initialized. The proposed algorithm is tested over 12 different benchmark functions and compared with standard DE and two swarm intelligence based algorithms, namely the artificial bee colony (ABC) algorithm and the particle swarm optimization (PSO) algorithm. The obtained results reveal that ASFDE is a competitive variant of DE.

Keywords: Evolutionary Algorithm · Differential Evolution Algorithm · Optimization · Nature inspired algorithms

1 Introduction

Nature-inspired algorithms (NIA) are inspired by natural behavior and solve various real-world optimization problems [11]. An Evolutionary Algorithm (EA) is a method that searches for the optimum value of a problem by evolving a population of solutions over a number of generations. Differential Evolution (DE) is a population-based, stochastic search technique and a comparatively easy method for finding an optimum value for optimization problems. There are situations in which the population stagnates without even converging to a local optimum, due to which it cannot reach the global optimum [7].

Mezura-Montes et al. [12] examined the different forms of DE for global optimization and brought out that DE demonstrates a degraded performance and stays ineffectual in analyzing the search space, particularly for multimodal functions. Price et al. [5] concluded the same. The problems of premature convergence and stagnation have to be considered seriously for developing a comparatively efficient differential evolution algorithm. Research is constantly under way to mitigate the premature convergence of DE [3,8,13]. Some recent variants of DE with remarkable purposes are depicted in [9].

In this paper, the modified version is adaptive through the iterations, which makes the step size adaptive; as a result, a proper balance is maintained between the two mechanisms, exploration and exploitation, which helps remove stagnation and achieve a good convergence speed. Further, to avoid stagnation, a limit-based counter is associated with every solution to track how many iterations the solution has gone without update.

The remainder of the paper is organized as follows: Sect. 2 gives a condensed summary of DE. The Adaptive Scale Factor based Differential Evolution (ASFDE) algorithm is presented in Sect. 3. The performance of ASFDE is tested on several benchmark functions in Sect. 4. Finally, Sect. 5 comprises the summary and conclusion.

2 Condensed Summary of DE

Price and Storn propounded the DE algorithm [2] in 1995. It is a fast, easy and population-based random probabilistic search technique. The DE/rand/1/bin scheme is used: rand denotes that the parent is elected arbitrarily, 1 represents the number of differential vectors, and bin indicates binomial crossover. DE comprises three essential components: mutation, crossover and selection. In the initial phase, a uniformly distributed population is generated randomly. Mutation results in the generation of an experimental (trial) vector, which is further used within crossover to create an offspring, and then selection is performed to elect the best for the next generation [10]. A Dim-dimensional vector (xi1, xi2, ..., xiDim) is used to represent a point in the Dim-dimensional search area, with i = 1, 2, ..., S, where S is the population size. The initialization of the jth component of the ith vector is displayed in Eq. 1:

Xi,j = Xj,lo + randi,j [0, 1] ∗ (Xj,hi − Xj,lo) (1)

where Xi,j is a position, lo and hi are the lower and upper limits of the search area, and randij is a uniformly distributed random number in the range 0 to 1.
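To make this initialization step concrete, a minimal NumPy sketch of Eq. 1 is given below; the population size, dimension and bounds in the example call are placeholder values, not settings from the paper.

import numpy as np

def initialize_population(size, dim, lo, hi, rng=np.random.default_rng()):
    # Eq. 1: uniformly random solutions in [lo, hi], one row per individual.
    return lo + rng.random((size, dim)) * (hi - lo)

# Example: S = 50 solutions in a 30-dimensional search space bounded by [-10, 10].
population = initialize_population(size=50, dim=30, lo=-10.0, hi=10.0)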

2.1 Mutation

For every individual of the current population, an experimental (trial) vector is produced by the mutation operator. The experimental vector is created when a parent is perturbed with a weighted difference vector, which then produces an offspring in the crossover operation. For generating an experimental vector ui(t), the mutation operation is defined as follows:


– The parent xi1(t) is elected randomly from the initialized population, with i ≠ i1.
– Two further candidates, xi2 and xi3, are elected arbitrarily from the population with the condition that i ≠ i1 ≠ i2 ≠ i3.
– After this, the experimental vector is computed by mutating the parent using Eq. 2:

ui(t) = xi1(t) + F × (xi2(t) − xi3(t))  (2)

Here, F ∈ [0, 1] controls the differential variation.
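As an illustration, a minimal NumPy sketch of this DE/rand/1 mutation follows; the index-sampling helper is an assumption made for the example, not code from the paper.

import numpy as np

def mutate(population, i, F, rng=np.random.default_rng()):
    # Eq. 2: u_i = x_i1 + F * (x_i2 - x_i3), with i, i1, i2, i3 all distinct.
    candidates = [j for j in range(len(population)) if j != i]
    i1, i2, i3 = rng.choice(candidates, size=3, replace=False)
    return population[i1] + F * (population[i2] - population[i3])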

2.2 Crossover

Crossover is applied to get the offspring y′i(t), which is produced by crossing the parent xi(t) with the experimental vector ui(t), as depicted in Eq. 3:

y′ij(t) = uij(t), if j ∈ Q; xij(t), otherwise.  (3)

Here Q is the set of crossover points that undergo perturbation, and xij(t) is the jth element of the vector xi(t). Basically, two types of crossover are used in DE; the presented variant ASFDE uses binomial crossover. Here, R(1, Dim) denotes a uniform random draw between 1 and Dim, and the crossover points Q ⊆ {1, 2, ..., Dim} are selected in a random fashion, governed by the crossover probability CR. Algorithm 1 shows the binomial crossover.

Q (set of crossover points) = empty set; q* ~ R(1, Dim);
Q ← Q ∪ {q*};
for each q ∈ 1 ... Dim (problem dimension) do
  if R(0, 1) < CR (crossover probability) and q ≠ q* then
    Q ← Q ∪ {q};
  end if
end for

Algorithm 1. Binomial Crossover.
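A compact NumPy version of Algorithm 1 might look as follows; the boolean mask plays the role of the set Q, and forcing one randomly chosen position to True reproduces the q* rule.

import numpy as np

def binomial_crossover(x, u, CR, rng=np.random.default_rng()):
    # Take u_j where rand < CR, keep x_j otherwise; position q* always comes from u.
    mask = rng.random(len(x)) < CR
    mask[rng.integers(len(x))] = True  # guarantee at least one trial component
    return np.where(mask, u, x)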

2.3 Selection

The solution having the lower objective (cost) value is chosen to survive into the next generation. Selection elects the better of the parent and the offspring, depending on the objective cost, for the next generation:

xi(t + 1) = y′i(t), if f(y′i(t)) ≤ f(xi(t)); xi(t), otherwise.  (4)

The solution having the lower objective value survives into the next generation.
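The greedy selection of Eq. 4 reduces to a one-line comparison; in the sketch below, f stands for the objective function being minimized.

def select(x, y_prime, f):
    # Eq. 4: the offspring survives only if it is no worse than the parent.
    return y_prime if f(y_prime) <= f(x) else x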


Fig. 1. Flow chart of the Differential Evolution (DE) algorithm

3 Adaptive Scale Factor Based DE

To avoid premature convergence and stagnation, a modification is made in the mutation phase of DE. The DE algorithm has two control parameters, the scaling factor F and the crossover probability CR. CR controls the perturbation rate of the algorithm, and F is used to maintain the step size of individuals. CR is directly proportional to the perturbation rate, which in turn is directly proportional to the exploration of the search area. When F is larger, the resulting step size explores the search area; when F is smaller, exploitation is performed. In basic DE, F is constant, due to which the step size does not adapt.

To remedy this drawback of DE, the concept of an adaptive step size is introduced into the algorithm, by which the step size is initially large and later decreases gradually. The modified equation for the mutation phase is given below:

ui(t) = xi1(t) + r × (1 − iter/Max iterations) × (xi2(t) − xi3(t)) (5)


In Eq. 5 above, r is a random number in (0.1, 1), iter is the current iteration and Max iterations is the maximum number of iterations. From Eq. 5 there is a uniform change in step size, which supports exploration initially, while an increase in the iteration number results in exploitation of the search space. This modification yields a uniform decrease in step size, so the global optimum is less likely to be skipped.

Hence, the modified version is adaptive through the iterations, which makes the step size adaptive; a proper balance is thereby maintained between the two mechanisms, exploration and exploitation, which helps remove stagnation and achieve a good convergence speed.

Further, to reduce the possibility of premature convergence, a counter is associated with every solution; it is incremented by one whenever the solution is not updated in an iteration. If a solution gets updated, the counter is reset to zero. If the counter reaches a predefined threshold, the associated solution is randomly re-initialized in the solution search space, on the assumption that it has become stuck in a local optimum.

Based on the above discussion, the pseudo code of the proposed strategy is shown in Algorithm 2:

Initialize the control parameters, r and CR;
Initialize the population, S(0), of S individuals;
while the termination criterion is not met do
  for each solution xi(t) ∈ S(t) do
    Find the objective value f(xi(t));
    Produce the experimental vector as depicted in Eq. 6:
      ui(t) = xi1(t) + r × (1 − iter/Max iterations) × (xi2(t) − xi3(t))  (6)
    Produce the offspring y′i(t) using crossover;
    if f(y′i(t)) is more fit than f(xi(t)) then
      Add y′i(t) to P(t + 1);
    else
      Add xi(t) to P(t + 1);
    end if
  end for
  if the counter associated with xi has reached the threshold then
    Reset the counter to 0 and randomly re-initialize xi;
  end if
end while
Return the best solution;

Algorithm 2. Adaptive Scale Factor based DE.
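Putting the pieces together, the following Python sketch follows Algorithm 2: the scale factor shrinks linearly with the iteration count as in Eq. 6, and a per-solution counter triggers random re-initialization once it crosses a threshold. The threshold of 50 and the sphere objective in the example are illustrative assumptions, not settings reported in the paper.

import numpy as np

def asfde(f, dim, lo, hi, size=50, CR=0.9, max_iters=1000, limit=50,
          rng=np.random.default_rng()):
    # A sketch of Algorithm 2 (Adaptive Scale Factor based DE).
    pop = lo + rng.random((size, dim)) * (hi - lo)
    fitness = np.array([f(x) for x in pop])
    stall = np.zeros(size, dtype=int)  # per-solution "not updated" counters
    for it in range(max_iters):
        for i in range(size):
            # Eq. 6: adaptive scale factor, r ~ U[0.1, 1], shrinking with it.
            F = rng.uniform(0.1, 1.0) * (1.0 - it / max_iters)
            i1, i2, i3 = rng.choice([j for j in range(size) if j != i],
                                    size=3, replace=False)
            u = pop[i1] + F * (pop[i2] - pop[i3])           # mutation
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            y = np.clip(np.where(mask, u, pop[i]), lo, hi)  # binomial crossover
            fy = f(y)
            if fy <= fitness[i]:                            # greedy selection
                pop[i], fitness[i], stall[i] = y, fy, 0
            else:
                stall[i] += 1
                if stall[i] >= limit:                       # escape stagnation
                    pop[i] = lo + rng.random(dim) * (hi - lo)
                    fitness[i], stall[i] = f(pop[i]), 0
    return pop[np.argmin(fitness)]

# Example on the sphere function (an assumed objective, not one of f1-f12):
best = asfde(lambda x: float(np.sum(x ** 2)), dim=10, lo=-5.0, hi=5.0,
             max_iters=200)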

4 Outcomes and Discussions

4.1 Test Problems Under Consideration

To examine the performance of ASFDE, 12 different benchmark functions (f1 to f12) are selected, as displayed in Table 1.


Table 1. Test function problems (Test Problem; Objective Function; Search Range; Optimum Value; dimension D; Acceptable Error AE)

Cosine Mixture: f1(x) = Σ_{i=1..D} xi² − 0.1 (Σ_{i=1..D} cos 5πxi) + 0.1D; [−1, 1]; f(0) = −D × 0.1; D = 30; AE = 1.0E−05

Step function: f2(x) = Σ_{i=1..D} (⌊xi + 0.5⌋)²; [−100, 100]; f(−0.5 ≤ x ≤ 0.5) = 0; D = 30; AE = 1.0E−05

Inverted cosine wave: f3(x) = −Σ_{i=1..D−1} exp(−(xi² + x_{i+1}² + 0.5 xi x_{i+1})/8) · I, where I = cos(4 √(xi² + x_{i+1}² + 0.5 xi x_{i+1})); [−5, 5]; f(0) = −D + 1; D = 10; AE = 1.0E−05

Levy montalvo 1: f4(x) = (π/D) (10 sin²(πy1) + Σ_{i=1..D−1} (yi − 1)² (1 + 10 sin²(πy_{i+1})) + (yD − 1)²), where yi = 1 + (1/4)(xi + 1); [−10, 10]; f(−1) = 0; D = 30; AE = 1.0E−05

Colville: f5(x) = 100[x2 − x1²]² + (1 − x1)² + 90(x4 − x3²)² + (1 − x3)² + 10.1[(x2 − 1)² + (x4 − 1)²] + 19.8(x2 − 1)(x4 − 1); [−10, 10]; f(1) = 0; D = 4; AE = 1.0E−05

Kowalik: f6(x) = Σ_{i=1..11} [ai − x1(bi² + bi x2)/(bi² + bi x3 + x4)]²; [−5, 5]; f(0.192833, 0.190836, 0.123117, 0.135766) = 0.000307486; D = 4; AE = 1.0E−05

2D Tripod: f7(x) = p(x2)(1 + p(x1)) + |x1 + 50 p(x2)(1 − 2p(x1))| + |x2 + 50(1 − 2p(x2))|, where p(x) = 1 for x ≥ 0 and 0 otherwise; [−100, 100]; f(0, −50) = 0; D = 2; AE = 1.0E−04

Shifted Rosenbrock: f8(x) = Σ_{i=1..D−1} (100(zi² − z_{i+1})² + (zi − 1)²) + f_bias, z = x − o + 1, x = [x1, x2, ..., xD], o = [o1, o2, ..., oD]; [−100, 100]; f(o) = f_bias = 390; D = 10; AE = 1.0E−01

Goldstein-Price: f9(x) = (1 + (x1 + x2 + 1)² (19 − 14x1 + 3x1² − 14x2 + 6x1x2 + 3x2²)) · (30 + (2x1 − 3x2)² (18 − 32x1 + 12x1² + 48x2 − 36x1x2 + 27x2²)); [−2, 2]; f(0, −1) = 3; D = 2; AE = 1.0E−14

Hosaki Problem: f10(x) = (1 − 8x1 + 7x1² − (7/3)x1³ + (1/4)x1⁴) x2² exp(−x2), subject to 0 ≤ x1 ≤ 5, 0 ≤ x2 ≤ 6; [0, 5], [0, 6]; fmin = −2.3458; D = 2; AE = 1.0E−05

Meyer and Roth Problem: f11(x) = Σ_{i=1..5} (x1 x3 ti / (1 + x1 ti + x2 vi) − yi)²; [−10, 10]; f(3.13, 15.16, 0.78) = 0.4E−04; D = 3; AE = 1.0E−03

Sinusoidal Problem: f12(x) = −[A Π_{i=1..n} sin(xi − z) + Π_{i=1..n} sin(B(xi − z))], A = 2.5, B = 5, z = 30; [0, 180]; f(90 + z) = −(A + 1); D = 10; AE = 1.0E−02


Table 2. Comparison of the results of test function problems

TF Algorithm SD ME AFE SR

f1 DE 4.72E−02 1.33E−02 37386.0 92

ASFDE 8.08E−07 8.99E−06 39007.5 100

ABC 2.30E−06 7.20E−06 22897.5 100

PSO 7.05E−02 3.70E−02 77107.5 77

f2 DE 2.92E−01 7.00E−02 26625.5 94

ASFDE 0.00E+00 0.00E+00 25508.5 100

ABC 0.00E+00 0.00E+00 11615.0 100

PSO 9.95E−02 1.00E−02 38549.5 99

f3 DE 6.59E−01 9.83E−01 173894.5 18

ASFDE 7.03E−01 8.38E−01 170203.0 30

ABC 1.18E−01 2.36E−02 91146.9 93

PSO 6.95E−01 1.37E+00 195749.5 7

f4 DE 1.03E−02 1.05E−03 21515.0 99

ASFDE 1.00E−06 8.80E−06 34098.0 100

ABC 2.23E−06 7.52E−06 19553.0 100

PSO 6.71E−07 9.31E−06 33939.0 100

f5 DE 3.66E−01 8.49E−02 32264.5 86

ASFDE 2.59E−03 6.71E−03 7264.9 100

ABC 1.04E−01 1.48E−01 200022.3 0

PSO 2.18E−04 8.01E−04 49955.0 100

f6 DE 2.00E−03 4.39E−04 55793.0 74

ASFDE 2.71E−04 1.90E−04 34975.0 88

ABC 7.54E−05 1.87E−04 184167.6 18

PSO 1.02E−05 9.20E−05 35835.0 100

f7 DE 3.67E−01 1.60E−01 34831.0 84

ASFDE 2.55E−01 7.01E−02 17044.8 93

ABC 2.24E−07 6.56E−07 12356.6 100

PSO 3.57E−01 1.51E−01 46957.0 84

f8 DE 1.78E+00 2.46E+00 194758.5 3

ASFDE 2.12E−03 9.83E−02 66069.7 100

ABC 1.08E+00 7.85E−01 172996.3 20

PSO 2.98E+00 4.88E−01 190951.5 60

f9 DE 4.00E−15 4.29E−15 3806.0 100

ASFDE 4.32E−15 4.88E−15 3875.6 100

ABC 2.96E−06 5.85E−07 111670.7 65

PSO 3.01E−15 5.27E−15 9759.5 100


Table 2. (Continued)

TF Algorithm SD ME AFE SR

f10 DE 6.37E−06 5.86E−06 34787.0 83

ASFDE 6.50E−06 5.82E−06 16845.1 92

ABC 6.52E−06 5.96E−06 657.0 100

PSO 3.36E−06 5.85E−06 23302.5 89

f11 DE 8.84E−05 1.96E−03 7751.5 97

ASFDE 2.85E−06 1.95E−03 1981.5 100

ABC 3.02E−06 1.95E−03 25993.6 100

PSO 2.50E−06 1.95E−03 3401.5 100

f12 DE 2.37E−01 5.07E−01 199314.5 1

ASFDE 1.89E−01 1.64E−01 169981.6 37

ABC 2.11E−03 7.66E−03 57215.8 100

PSO 3.05E−01 4.01E−01 177590.5 25

4.2 Trial Settings

To analyze the performance of the developed algorithm ASFDE, a comparison is made among ASFDE, DE, ABC [6] and PSO [4]. The following experimental settings are used to test DE, ASFDE, ABC and PSO on the considered test problems:

– Number of runs = 100,
– Population size S = 50,
– r = U[0.1, 1],
– Settings for ABC [6] and PSO [4] are taken from their original papers.

4.3 Outcomes

The outcomes of the algorithms are displayed in Table 2 in the form of standard deviation (SD), mean error (ME), average number of function evaluations (AFE) and success rate (SR). The results in Table 2 show that, in most cases, ASFDE outperforms the other algorithms in terms of reliability, efficiency and accuracy.
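For clarity, the sketch below shows one way these four statistics could be derived from per-run records; the record format, toy values and success threshold are assumptions for the example.

import numpy as np

# Each run is summarized as (final_error, function_evaluations_used): toy data.
runs = [(3.2e-06, 18400), (9.1e-06, 22150), (4.7e-02, 200000)]
errors = np.array([e for e, _ in runs])
evals = np.array([n for _, n in runs])
acceptable_error = 1.0e-05                     # assumed threshold, as in Table 1

SD = errors.std()                              # standard deviation of the error
ME = errors.mean()                             # mean error
AFE = evals.mean()                             # average function evaluations
SR = int((errors <= acceptable_error).sum())   # count of successful runs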

Further, to compare the examined algorithms in terms of consolidated performance, a boxplot [1] study of AFE is carried out. A boxplot presents the empirical distribution of results graphically. The boxplots for DE, ASFDE, ABC and PSO are depicted in Fig. 2. The outcomes clearly show that the interquartile range and median of ASFDE are comparatively low.

Further, the Mann–Whitney U rank sum test [8] is performed between ASFDE–DE, ASFDE–ABC and ASFDE–PSO. Table 3 displays the compared outcomes of the mean function evaluations and the Mann–Whitney test for 100 simulations. In the Mann–Whitney test, we check for a significant difference between two data sets. If no significant difference is seen, an = symbol is recorded; when a significant difference is observed, a comparison is performed on the AFEs, and a + or − symbol is used: + represents that ASFDE is superior to the examined algorithm, and − that it is inferior. The total number of + signs in the last line of Table 3 confirms the superiority of ASFDE over the chosen algorithms.

Fig. 2. Boxplot for AFE
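In practice such a test can be run with SciPy, as in the hedged sketch below; the AFE samples are placeholder arrays, and the 0.05 significance level is an assumption, since the paper does not state its level.

import numpy as np
from scipy.stats import mannwhitneyu

# AFEs of ASFDE and a competitor over repeated runs (placeholder data).
afe_asfde = np.array([38900, 39200, 38750, 39480, 39010])
afe_other = np.array([77000, 76400, 78100, 77550, 77900])

stat, p = mannwhitneyu(afe_asfde, afe_other, alternative="two-sided")
if p >= 0.05:
    verdict = "="                    # no significant difference
else:
    verdict = "+" if afe_asfde.mean() < afe_other.mean() else "-"
print(verdict)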

Further, all the examined algorithms are analyzed in terms of ME, SR and AFE through performance index (PI) graphs [1], which are computed for DE, ASFDE, ABC and PSO and shown in Fig. 3.

Table 3. Evaluation of the outcomes in Table 2 (Mann–Whitney U test)

Test problems ASFDE Vs DE ASFDE Vs ABC ASFDE Vs PSO

f1 = − +

f2 + − +

f3 = + +

f4 + − =

f5 + + +

f6 + + +

f7 + − +

f8 + + +

f9 = + +

f10 + − +

f11 + + +

f12 + − +

Total number of + sign 09 06 11

It is evident from Fig. 3 that the PI of the ASFDE algorithm is superior to the others. At every stage, ASFDE performs better than the other established algorithms.


Fig. 3. Performance index; (a) SR, (b) AFE and (c) ME.

5 Conclusion

This paper presents a variant of the DE algorithm, known as Adaptive Scale Factor based DE (ASFDE). In ASFDE, the algorithm is adaptive through the iterations, which makes the step size adaptive; a proper balance is thereby maintained between the two mechanisms, exploration and exploitation, which helps remove stagnation and achieve a good convergence speed. Further, to avoid stagnation, a limit-based counter is associated with every solution to track how many iterations it has gone without update. The proposed algorithm is compared with DE, ABC and PSO over different benchmark functions. The obtained results state that ASFDE is a competitive variant of DE and a good choice for solving continuous optimization problems. In future, the newly developed algorithm may be used to solve various real-world optimization problems of a continuous nature.

References

1. Bansal, J.C., Sharma, H., Arya, K.V., Nagar, A.: Memetic search in artificial bee colony algorithm. Soft Comput. 17(10), 1911–1928 (2013)
2. Das, S., Mullick, S.S., Suganthan, P.N.: Recent advances in differential evolution – an updated survey. Swarm Evol. Comput. 27, 1–30 (2016)
3. Das, S., Suganthan, P.N.: Differential evolution: a survey of the state-of-the-art. IEEE Trans. Evol. Comput. 15(1), 4–31 (2011)
4. Eberhart, R.C., Kennedy, J., et al.: A new optimizer using particle swarm theory. In: Proceedings of the Sixth International Symposium on Micro Machine and Human Science, New York, NY, vol. 1, pp. 39–43 (1995)
5. Engelbrecht, A.P.: Computational Intelligence: An Introduction. Wiley, New York (2007)
6. Karaboga, D., Basturk, B.: A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. J. Glob. Optim. 39(3), 459–471 (2007)
7. Lampinen, J., Zelinka, I.: On stagnation of the differential evolution algorithm (2000)
8. Neri, F., Tirronen, V.: Scale factor local search in differential evolution. Memetic Comput. 1(2), 153–171 (2009)
9. Panigrahi, B.K., Suganthan, P.N., Das, S.: Swarm, Evolutionary, and Memetic Computing. LNCS, vol. 8947. Springer, Cham (2015)
10. Price, K.V.: Differential evolution: a fast and simple numerical optimizer. In: 1996 Biennial Conference of the North American Fuzzy Information Processing Society, NAFIPS 1996, pp. 524–527. IEEE (1996)
11. Storn, R., Price, K.: Differential evolution – a simple and efficient adaptive scheme for global optimization over continuous spaces, vol. 3. ICSI, Berkeley (1995)
12. Yan, J.-Y., Ling, Q., Sun, D.: A differential evolution with simulated annealing updating method. In: 2006 International Conference on Machine Learning and Cybernetics, pp. 2103–2106. IEEE (2006)
13. Yang, X.-S.: Nature-Inspired Metaheuristic Algorithms. Luniver Press, Bristol (2010)


Adaptive Balance Factor in Particle Swarm Optimization

Siddhi Kumari Sharma and R. S. Sharma(B)

Rajasthan Technical University, Kota, India; e-mail: [email protected]

Abstract. Particle Swarm Optimization (PSO) is a refined optimization method that has drawn the interest of researchers in different areas because of its simplicity and efficiency. In standard PSO, particles roam over the search area with the help of two acceleration parameters. In this paper, an Adaptive Balance Factor PSO (ABF-PSO) is proposed, in which the social acceleration parameter is adapted over the iterations to balance exploration and exploitation. The proposed algorithm is tested over 12 benchmark test functions and compared with basic PSO and two other algorithms, the Gravitational Search Algorithm (GSA) and Biogeography Based Optimization (BBO). The results reveal that ABF-PSO is a competitive variant of PSO.

Keywords: Meta-heuristic optimization techniques · Particle swarm optimization algorithm · Swarm intelligence · Acceleration coefficients · Nature inspired algorithm

1 Introduction

Generally, real-world optimization problems are very difficult to solve. Optimization tools are used to solve these kinds of problems, though there is no guarantee of always obtaining the optimal solution. Thus, several problems are solved by trial and error using different optimization methods [8]. The development of swarm intelligence and bio-inspired algorithms forms a new subject inspired by nature. Based on the origin of their motivation, these kinds of meta-heuristic algorithms can be classified as swarm-intelligence-based or bio-inspired algorithms [6]. Particle Swarm Optimization (PSO) is a refined optimization method that has drawn the interest of researchers in different areas because of its simplicity and efficiency, and different versions of PSO have already been suggested. PSO is a swarm-based, adaptive stochastic search technique first suggested by James Kennedy and Russell Eberhart (1995). The algorithm is inspired by mimicking the collective behavior of natural swarms such as fish and birds, and even common human routines [9]. In standard PSO (SPSO), particles roam over the search area with the help of two acceleration parameters. One parameter, known as the cognitive parameter, controls the local exploration of the particles, while the second parameter, known as the social parameter, guides the global search capability of the particles. Generally, the diversification and intensification properties are managed by these two parameters. Various researchers have

c© Springer Nature Singapore Pte Ltd. 2017K. Deep et al. (eds.), Proceedings of Sixth International Conferenceon Soft Computing for Problem Solving, Advances in Intelligent Systemsand Computing 546, DOI 10.1007/978-981-10-3322-3 2


found that, in SPSO, particles quickly reach a fine local solution but then get stuck to that solution for the rest of the iterations without further improvement [7,12,13].

In order to increase the convergence speed as well as the exploration capability of PSO, an acceleration parameter strategy is presented in this paper. A time-varying acceleration parameter scheme is introduced to efficiently manage the global search and the convergence toward the global best solution. The primary aim of this modification is to avoid premature convergence in the initial phases and to enhance convergence to the global optimum in the later phases [14].

The rest of the paper is structured as follows. In Sect. 2, the particle swarm optimization algorithm is discussed. Section 3 introduces the proposed algorithm, and its quality is tested on several benchmark problems in Sect. 4. To show the quality of the proposed strategy, a comparative study is carried out among the proposed strategy, basic PSO and other algorithms, namely the Gravitational Search Algorithm (GSA) [10] and Biogeography Based Optimization (BBO) [15]. The simulation results reveal that the proposed strategy outperforms the aforementioned algorithms. Finally, Sect. 5 presents the summary and conclusion of the work.

2 Particle Swarm Optimization Algorithm (PSO)

The particle swarm optimization (PSO) algorithm is a swarm-based, adaptive stochastic search technique first suggested by James Kennedy and Russell Eberhart (1995). As the name suggests, it is a swarm intelligence algorithm, inspired by mimicking the collective behavior of natural swarms such as fish and birds, and even common human routines. It can be simply implemented and applied to solve various function optimization problems, as well as problems that can be converted into function optimization problems [9].

To search for the optimal solution, the swarm members are distributed to spot the food site. While the members search for food here and there, there is always a member that may sense the food most strongly, i.e., it is aware of the site where the food can be found and keeps the knowledge of the best food site. While seeking the food site, the members transfer this knowledge, especially the best knowledge, at each step; guided by the best knowledge, the swarm finally arrives at the food site [2]. Every particle modifies its flight based on its own flying experience and the flying experience of its colleagues. Y. Shi and Russell Eberhart (1998) termed the former the cognition section and the latter the social section. For the social section, James Kennedy and Russell Eberhart (1995) introduced the Ubest (universal best, based on the experience of all members) and Lbest (local best, based on the individual member's experience) components [5,18].

The primary PSO algorithm contains a group of swarm members roaming in an n-dimensional, real-valued search area of feasible problem solutions. Generally, a suitable fitness function is defined so that the members can evaluate distinct problem solutions. Each member i at time t has the following attributes:


– pi(t) is the position vector;
– si(t) is the speed (velocity) vector;
– Li(t) is the limited memory saving its individual finest position seen earlier;
– Ui(t) is the overall finest position.

The PSO process proceeds in the following steps [11]:

At the first step, the population (swarm) size is assumed to be N. The value of N should be moderate, so as to give enough different positions to reach the optimum solution.

At the second step, the initial population p is generated in arbitrary order to get p1, p2, p3, ..., pn. The objective function evaluation for every member is given by f[p1(0)], f[p2(0)], f[p3(0)], ..., f[pn(0)].

At the third step, the speed of each member is updated. The members roam toward the optimum solution with a speed. At the initial point, the speed of all members is taken as 0. Set iteration t = 1. Now, at the tth iteration, find the following necessary components for every member j:

– The values of the local best (Lbest) and universal best (Ubest).
– When the speed is updated, the member moves to a new position. The new position is computed by adding the new speed to the earlier position:

pi(t + 1) = pi(t) + si(t + 1)  (1)

– The speed update is computed by the following relation:

si(t + 1) = w si(t) + a1 r1 (Lbest(t) − pi(t)) + a2 r2 (Ubest(t) − pi(t))  (2)

where w represents the inertia weight constant, r1 and r2 are random numbers, a1 and a2 represent constant values, and p represents the position of the member.

At the final step, check whether the latest solution has converged. If yes, stop the iteration; otherwise, repeat the last step with t = t + 1 and recompute the values of the local best (Lbest) and universal best (Ubest).

for every member do
  Evaluate the fitness value. If the current fitness value is better than the previous best fitness value (Lbest), then set the current value as the latest Lbest.
end for
Select the member that has the best fitness value among all members as the Ubest.
for every member do
  Evaluate the latest speed:
    si(t+1) = w si(t) + a1 r1 (Lbest(t) − pi(t)) + a2 r2 (Ubest(t) − pi(t))
  Update the position of the member:
    pi(t+1) = pi(t) + si(t+1)
end for
Repeat until the stopping pattern is found.

Algorithm 1. PSO algorithm
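A minimal NumPy sketch of one iteration of these updates (Eqs. 1 and 2) is given below; the parameter values are common defaults assumed for the example, not the paper's settings.

import numpy as np

rng = np.random.default_rng()

def pso_step(p, s, lbest, ubest, w=0.7, a1=1.5, a2=1.5):
    # Eq. 2 (velocity), then Eq. 1 (position). p, s, lbest: (N, D); ubest: (D,).
    r1, r2 = rng.random(p.shape), rng.random(p.shape)
    s_new = w * s + a1 * r1 * (lbest - p) + a2 * r2 * (ubest - p)
    return p + s_new, s_new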


3 Adaptive Balance Factor in Particle Swarm Optimization (ABF-PSO)

In population-based search algorithms, exploration and exploitation are the two key properties of any Nature Inspired Algorithm (NIA). A proper balance between these two properties is required to find the global optimum. Exploration identifies the promising regions by searching the given search space, while exploitation helps find the optimum solution within the promising regions. From the explanation of particle swarm optimization, it is noted that the search process is managed with the help of two acceleration parameters (the cognitive parameter and the social parameter). So, suitable discipline of these two parameters is really necessary to reach the optimal solution precisely as well as efficiently.

Mostly, in swarm-based optimization strategies, it is necessary to encourage the members to roam over the full search area, instead of gathering around local optima, during the initial stages of the optimization, while during the later stages it is necessary to increase the convergence speed in order to find the optimal solution efficiently.

Suganthan [16] evaluated a form of linearly decreasing both acceleration parameters with time, but found that fixed acceleration parameters at the value 2 produce superior solutions. However, through experimental exercises he suggested that the acceleration parameters should not always be equal to 2. In PSO, for a high value of these two parameters, there is always a fair chance of skipping the true solution due to the large step size in the solution search space.

Following these contributions, in this paper a time-varying acceleration parameter scheme is introduced for the PSO technique. The purpose of this extension is to increase the step size, and hence the global search, in the initial stage, and to decrease the step size of the members so as to motivate them to converge toward the global optimum in the last stage of the search. In this scheme, the social parameter is modified over the iterations to manage the step size, and a balance between the cognitive section and the social section is established to get the optimal result. The proposed speed updating strategy is shown in the following equation:

si(t+1) = w si(t) + a1 · r1 (Lbest(t) − pi(t)) + a0 · (1 − (it/MaxIt)) · r2 (Ubest(t) − pi(t))  (3)

where w is the inertia constant, r1 and r2 are random values, a1 has a constant value, a2 = a0 · (1 − (it/MaxIt)) with a0 = 2.5, it is the current iteration number, MaxIt is the total number of iterations, and p is the particle position.

It is clear from the above equation that initially the social parameter will be large, so the step size will also be large, which helps in exploration of the search area. At a later stage, as the iterations increase, the value of the social parameter decreases, so the step size also gradually decreases; this helps in exploitation.


Based on the above explanation, the algorithmic representation is as follows:

for every member do
  Evaluate the fitness value. If the current fitness value is better than the previous best fitness value (Lbest), then set the current value as the latest Lbest.
end for
Select the member that has the best fitness value among all members as the Ubest.
for every member do
  Evaluate the latest speed:
    si(t+1) = w si(t) + a1 · r1 (Lbest(t) − pi(t)) + 2.5 · (1 − (it/MaxIt)) · r2 (Ubest(t) − pi(t))
  Update the position of the member:
    pi(t+1) = pi(t) + si(t+1)
end for
Repeat until the stopping pattern is found.

Algorithm 2. ABF-PSO algorithm
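Relative to the standard update, the only change is the time-varying social coefficient; the sketch below uses the paper's a0 = 2.5, with the remaining parameter values assumed for illustration.

import numpy as np

rng = np.random.default_rng()

def abf_pso_step(p, s, lbest, ubest, it, max_it, w=0.7, a1=1.5, a0=2.5):
    # Eq. 3: the social coefficient decays linearly from a0 toward 0.
    a2 = a0 * (1.0 - it / max_it)
    r1, r2 = rng.random(p.shape), rng.random(p.shape)
    s_new = w * s + a1 * r1 * (lbest - p) + a2 * r2 * (ubest - p)
    return p + s_new, s_new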

4 Experimental Results

4.1 Test Problems Under Consideration

To study the quality of the proposed algorithm ABF-PSO, 12 distinct global optimization problems (f1–f12) are selected, as listed in Table 1. All are continuous optimization problems with distinct degrees of difficulty. Test problems f1 to f12 are taken from [1,17] with their associated offset values.

4.2 Experimental Setting

To verify the quality of the proposed algorithm ABF-PSO, a performance analysis is carried out among ABF-PSO, basic PSO and other algorithms, namely the Gravitational Search Algorithm (GSA) [10] and Biogeography Based Optimization (BBO) [15]. To analyze ABF-PSO, PSO, GSA and BBO on the specified problems, the following empirical settings are used:

– Number of simulations/runs = 30,
– Population size nPop = 100 and number of food sources SN = nPop/2,
– rij = rand[0, 1],
– a1 = 1.5 and a2 = 2.5 in the PSO update equation.

4.3 Results Comparison

The numerical outcomes under the above empirical settings are given in Table 2. This table reports the outcomes of the proposed and other considered algorithms in terms of standard deviation (SD), mean error (ME), average number of function


Table 1. Test problems (S. No.; Test problem; Objective function; Search space; Objective value; Dimension; Acceptable error AE)

1. Ackley: f1(x) = −20 + e + exp(−(0.2/D) √(Σ_{i=1..D} xi³)); [−1, 1]; f(0) = 0; D = 30; AE = 1.0E−05

2. Alpine: f2(x) = Σ_{i=1..D} |xi sin xi + 0.1 xi|; [−10, 10]; f(0) = 0; D = 30; AE = 1.0E−05

3. Michalewicz: f3(x) = −Σ_{i=1..D} sin xi (sin(i xi²/π))^20; [0, π]; fmin = −9.66015; D = 10; AE = 1.0E−05

4. Cosine Mixture: f4(x) = Σ_{i=1..D} xi² − 0.1 (Σ_{i=1..D} cos 5πxi) + 0.1D; [−1, 1]; f(0) = −D × 0.1; D = 30; AE = 1.0E−05

5. Schewel: f5(x) = Σ_{i=1..D} |xi| + Π_{i=1..D} |xi|; [−10, 10]; f(0) = 0; D = 30; AE = 1.0E−05

6. Salomon Problem: f6(x) = 1 − cos(2π √(Σ_{i=1..D} xi²)) + 0.1 √(Σ_{i=1..D} xi²); [−100, 100]; f(0) = 0; D = 30; AE = 1.0E−01

7. Levy montalvo 1: f7(x) = (π/D) (10 sin²(πy1) + Σ_{i=1..D−1} (yi − 1)² (1 + 10 sin²(πy_{i+1})) + (yD − 1)²), where yi = 1 + (1/4)(xi + 1); [−10, 10]; f(−1) = 0; D = 30; AE = 1.0E−05

8. Levy montalvo 2: f8(x) = 0.1 (sin²(3πx1) + Σ_{i=1..D−1} (xi − 1)² (1 + sin²(3πx_{i+1})) + (xD − 1)² (1 + sin²(2πxD))); [−5, 5]; f(1) = 0; D = 30; AE = 1.0E−05

9. Branins's function: f9(x) = a(x2 − b x1² + c x1 − d)² + e(1 − f) cos x1 + e; x1 ∈ [−5, 10], x2 ∈ [0, 15]; fmin = 0.3979; D = 2; AE = 1.0E−05

10. Kowalik: f10(x) = Σ_{i=1..11} [ai − x1(bi² + bi x2)/(bi² + bi x3 + x4)]²; [−5, 5]; f(0.192833, 0.190836, 0.123117, 0.135766) = 0.000307486; D = 4; AE = 1.0E−05

11. Shifted Rastrigin: f11(x) = Σ_{i=1..D} (zi² − 10 cos(2πzi) + 10) + f_bias, z = (x − o), x = (x1, x2, ..., xD), o = (o1, o2, ..., oD); [−5, 5]; f(o) = f_bias = −330; D = 10; AE = 1.0E−02

12. Six-hump camel back: f12(x) = (4 − 2.1x1² + x1⁴/3) x1² + x1x2 + (−4 + 4x2²) x2²; [−5, 5]; f(−0.0898, 0.7126) = −1.0316; D = 2; AE = 1.0E−05

evaluations (AFE), and success rate (SR). According to the outcomes in Table 2, in most cases ABF-PSO shows the best outcomes in terms of performance, accuracy and efficiency compared with the considered algorithms PSO, GSA and BBO.

Moreover, a boxplot study of AFE is accomplished to compare the studied algorithms in terms of central tendency, since it can simply illustrate the empirical distribution of the statistical data graphically. The boxplots for ABF-PSO, PSO,


Table 2. Comparison of the results of test problems

Test problem Algorithm SD ME AFE SR

f1 PSO 8.26E−01 7.79E−01 107183.33 15

ABF-PSO 4.46E−07 9.39E−06 38833.33 30

GSA 5.83E−07 9.37E−06 161030 30

BBO 1.27E−06 8.61E−06 60383.33 30

f2 PSO 5.51E−04 1.40E−04 86393.33 19

ABF-PSO 1.41E−06 9.12E−06 33290 30

GSA 5.59E−07 9.29E−06 154615 30

BBO 5.78E−03 1.04E−02 200000 00

f3 PSO 9.42E−01 1.69E+00 200000 00

ABF-PSO 4.78E−01 5.79E−01 191080 02

GSA 2.35E−01 4.75E−01 197296.67 01

BBO 2.81E−01 6.20E−01 200000 00

f4 PSO 4.28E−01 9.95E−01 200000 00

ABF-PSO 7.32E−02 3.45E−02 69750 24

GSA 8.13E−07 8.66E−06 111176.67 30

BBO 1.97E−01 2.36E−01 156776.67 07

f5 PSO 4.47E−02 1.32E−02 158420 07

ABF-PSO 6.05E−07 9.26E−06 33406.67 30

GSA 4.94E−07 9.42E−06 181821.67 30

BBO 9.51E−07 9.09E−06 48673.33 30

f6 PSO 6.53E−02 3.20E−01 175233.33 04

ABF-PSO 4.42E−02 2.27E−01 84983.33 22

GSA 5.82E−02 8.00E−01 200000 00

BBO 1.12E−01 6.73E−01 200000 00

f7 PSO 1.85E−01 1.35E−01 116980 13

ABF-PSO 3.63E−01 2.73E−01 101920 17

GSA 8.82E−07 8.81E−06 90630 30

BBO 8.52E−01 1.17E+00 182070 03

f8 PSO 2.48E−02 6.64E−03 20616.67 28

ABF-PSO 8.29E−07 9.11E−06 26570 30

GSA 6.12E−07 9.00E−06 95498.33 30

BBO 1.93E−06 8.66E−06 18816.67 30

f9 PSO 3.18E−05 3.43E−05 1146.67 30

ABF-PSO 2.84E−05 2.65E−05 903.33 30

GSA 3.29E−05 4.93E−05 37113.33 30

BBO 1.87E−05 7.40E−05 54626.67 30

f10 PSO 7.39E−05 2.82E−04 303.33 30

ABF-PSO 7.28E−05 2.83E−04 303.33 30

GSA 1.07E−04 2.23E−04 75 30

BBO 5.51E−05 2.75E−04 110 30

f11 PSO 7.07E+00 1.39E+01 200000 00

ABF-PSO 1.84E+00 3.32E+00 188313.33 02

GSA 1.56E+00 5.14E+00 200000 00

BBO 3.35E+00 8.56E+00 200000 00

f12 PSO 9.83E−06 1.44E−05 1263.33 30

ABF-PSO 1.15E−05 1.59E−05 956.67 30

GSA 1.16E−05 1.17E−05 49801.67 30

BBO 3.74E−01 2.45E−01 76745 21


Fig. 1. Box plot graphs (average function evaluation)

GSA and BBO are displayed in Fig. 1. The outcomes declare that the interquartile ranges and medians of ABF-PSO are relatively small.

Next, all the studied algorithms are also compared by giving weighted priority to the AFE, ME and SR. This comparison is calculated using the performance criterion detailed in [3,4]. The values of PI for ABF-PSO, PSO, GSA and BBO are measured, and the resulting performance index (PI) graphs are exhibited in Fig. 2.

The graphs corresponding to the cases of giving weighted priority to the SR, AFE and ME (as described in [3,4]) are displayed in Fig. 2[a], [b] and [c], respectively. In these diagrams, the horizontal axis shows the weight and the vertical axis specifies the PI.

Fig. 2. Performance index for test problems; [a] for SR, [b] for AFE and [c] for ME.


It is clear from Fig. 2 that the PI of ABF-PSO is better than those of the studied algorithms in every case; i.e., the performance of ABF-PSO on the studied test problems is superior to that of PSO, GSA and BBO.

5 Conclusion

To balance the step size of the members during the solution search process, a new variant of PSO is presented, namely the Adaptive Balance Factor PSO (ABF-PSO) algorithm. In ABF-PSO, the social parameter is modified such that in the initial iterations the step size of the members is high, whereas in later iterations it is low. Therefore, by managing the step size, an effort is made to balance the diversification and intensification properties of the PSO algorithm. The proposed ABF-PSO algorithm is applied to 12 standard functions and compared with the PSO, GSA and BBO algorithms. Through intensive analysis of the outcomes, it can be stated that ABF-PSO is an efficient variant of PSO and can be applied to solve real-world complex optimization problems.

References

1. Montaz Ali, M., Khompatraporn, C., Zabinsky, Z.B.: A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems. J. Global Optim. 31(4), 635–672 (2005)
2. Aote, S.S., Raghuwanshi, M.M., Malik, L.: A brief review on particle swarm optimization: limitations & future directions. Intl. J. Comput. Sci. Eng. (IJCSE) 14, 196–200 (2013)
3. Bansal, J.C., Sharma, H.: Cognitive learning in differential evolution and its application to model order reduction problem for single-input single-output systems. Memetic Comput. 4(3), 209–229 (2012)
4. Bansal, J.C., Sharma, H., Arya, K.V., Nagar, A.: Memetic search in artificial bee colony algorithm. Soft Comput. 17(10), 1911–1928 (2013)
5. Eberhart, R.C., Kennedy, J., et al.: A new optimizer using particle swarm theory. In: Proceedings of the Sixth International Symposium on Micro Machine and Human Science, vol. 1, New York, NY, pp. 39–43 (1995)
6. Fister Jr., I., Yang, X.-S., Fister, I., Brest, J., Fister, D.: A brief review of nature-inspired algorithms for optimization. arXiv preprint arXiv:1307.4186 (2013)
7. Jadon, S.S., Sharma, H., Bansal, J.C., Tiwari, R.: Self adaptive acceleration factor in particle swarm optimization. In: Proceedings of Seventh International Conference on Bio-Inspired Computing: Theories and Applications (BIC-TA 2012), pp. 325–340. Springer (2013)
8. Kennedy, J.: How it works: collaborative trial and error. Intl. J. Comput. Intell. Res. 4(2), 71–78 (2008)
9. Kennedy, J.: Particle swarm optimization. In: Encyclopedia of Machine Learning, pp. 760–766. Springer, New York (2011)
10. Rashedi, E., Nezamabadi-Pour, H., Saryazdi, S.: GSA: a gravitational search algorithm. Inf. Sci. 179(13), 2232–2248 (2009)
11. Rini, D.P., Shamsuddin, S.M., Yuhaniz, S.S.: Particle swarm optimization: technique, system and challenges. Intl. J. Comput. Appl. 14(1), 19–26 (2011)