Gijet volume1 issue2

GRENZE International Journal of Engineering & Technology GIJET Volume 1 No 2 July 2015 Grenze ID: 01.GIJET.1.2.F2 © Grenze Scientific Society, 2015

Description

The Grenze International Journal of Engineering and Technology (GIJET) is a multi-disciplinary journal covering all aspects of scientific, engineering and technical disciplines, including applications of scientific inventions for engineering, technological and industrial purposes, and advances in engineering, technology and science. Published twice yearly, it focuses on frontier topics in Computer Science, Civil, Mechanical, and Electrical and Electronics Engineering.

Transcript of Gijet volume1 issue2

Page 1: Gijet volume1 issue2

GRENZE International Journal of Engineering & Technology

GIJET

Volume 1 No 2 July 2015

Grenze ID: 01.GIJET.1.2.F2 © Grenze Scientific Society, 2015

Page 2: Gijet volume1 issue2

ISSN 2395-5295

GRENZE International Journal of Engineering & Technology

GIJET

Grenze ID: 01.GIJET.1.2.F1 © Grenze Scientific Society, 2015

Page 3: Gijet volume1 issue2

Copyright © 2015 by GRENZE Scientific Society All rights reserved

This work is subject to copyright. All rights are reserved. No part of this journal may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without permission in writing from the publisher. Permission requests should be addressed to the General Chair. Email: [email protected] The papers in this journal are copyrighted and published by the GRENZE Scientific Society, Kerala, India. Email: [email protected] Website: http://www.thegrenze.com/ This journal is also available in the GRENZE Digital Library. Opinions expressed in the papers are those of the author(s) and do not necessarily express the opinions of the editors or the GRENZE Scientific Society. The papers are published as presented and without change, in the interests of timely dissemination.

GRENZE Scientific Society GRENZE International Journal of Engineering and Technology (GIJET)

ISSN 2395-5287 (Print); ISSN 2395-5295 (Online)

Additional copies may be ordered from: GRENZE Scientific Society

Ravi Nagar-84, Peroorkada, Trivandrum, Kerala - 695005 Email: [email protected]

THIS BOOK IS NOT FOR SALE

Grenze ID: 01.GIJET.1.2.F3 © Grenze Scientific Society, 2015

Page 4: Gijet volume1 issue2

Editorial Board

Editor-in-Chief: Dr. Janahanlal Stephen (Matha College of Technology, India)
Editors: Dr. Harish Chandran (MG University, India); Dr. Dinesh Kumar Tyagi (BITS-Pilani, India); Dr. Anjali Mohapatra (IIIT-Bhubaneswar, India); Dr. Amitabha Sinha (West Bengal University of Technology, India)
Associate Editors: Dr. Gylson Thomas (Thejus Engineering College, India); Dr. Anoop HL (SXCCE, India); Dr. Anju JA (Imanuel Engineering College, India)

Grenze ID: 01.GIJET.1.2.F4 © Grenze Scientific Society, 2015

Page 5: Gijet volume1 issue2

Table of Contents

1. Choroidal Segmentation and Volume Measurement of Optical Coherence Tomography Images in Eyes using Intensity-Threshold Method (Neeru Rai, John S Werner and Raju Poddar), pp. 1-5
2. Participation of Doubly Fed Induction Generator based Wind Turbines in Power System Primary Frequency Regulation (Renuka T K and Reji P), pp. 6-10
3. Strength Characteristics of Stabilized Peat Soil using Fly Ash (Lakshmi Sai A and Ramya K), pp. 11-15
4. An Efficient Image Denoising Method using SVM Classification (Divya V and Sasikumar M), pp. 16-20
5. Study of the Lateral Load Carrying Capacities of Piles in Layered Soils using PLAXIS 3D (Neerad Mohan and Ramya K), pp. 21-25
6. Change in Shrinkage Characteristics of Fiber Amended Clay Liner Material (Radhika V and Niranjana K), pp. 26-30
7. Secure Authentication using Hybrid Graphical Passwords (Shalaka Jadhav and Abhay Kolhe), pp. 31-36
8. Interactive Image Segmentation based on Seeded Region Growing and Energy Optimization Algorithms (Rasitha K R, Sherin Thomas and Vijaykumar P M), pp. 37-42
9. An Innovative Method to Reduce Power Consumption using Look-Ahead Clock Gating Implemented on Novel Auto-Gated Flip Flops (Roshini Nair), pp. 43-50
10. Methods for Reduction of Stray Loss in Flange-Bolt Regions of Large Power Transformers using Ansys (Linu Alias and Malathi V), pp. 51-56
11. Development & Implementation of Mixing Unit using PLC (Dilbin Sebastian, George Jacob, Hani Mol A A, Indu Lakshmi B and Rajani S H), pp. 57-61

Grenze ID: 01.GIJET.1.2.F5 © Grenze Scientific Society, 2015

Page 6: Gijet volume1 issue2

12. Cryptography in the Field of Cloud Computing for Enhancing Security (Nikhitha K Nair, Navin K S and Soya Chandra C S), pp. 62-65
13. A Comparative Study of Harmonic Compensation Techniques in Micro-Grids using Active Power Filters (Neeraj N, Ramakrishnan P V and Mini P R), pp. 66-72
14. Speed Control of Vehicle using Fractional Network based Controller (Abida K, Nafeesa K and Labeeb M), pp. 73-80
15. Securing Images using Elliptic Curve Cryptography (Blessy Joy A and Girish R), pp. 81-85
16. Hole Detection and Healing Techniques in WSN (Soumya P V and Shreeja R), pp. 86-90
17. A Study on M-Sand Bentonite Mixture for Landfill Liners (Anna Rose Varghese and Anjana T R), pp. 91-96
18. Microstructure, Mechanical & Wear Characteristics of Al 336/ (0-10) Wt. % SiCp Composites (Harikrishnan T, Sarathchandradas M R and Rajeev V R), pp. 97-101
19. Controller based Auto Agricultural System (Shah Payal Jayeshkumar), pp. 102-107
20. Design and Simulation of Generation Control Loops for Multi Area Interconnected System (Abhilash M G and Frenish C Francis), pp. 108-115
21. PID Controller Tuning by using Fuzzy Logic Control and Particle Swarm Optimization for Isolated Steam Turbine (Frenish C Francis and Abhilash M G), pp. 116-123
22. Seismic Analysis of Performance based Design of Reinforced Concrete Building (Rehan A Khan and Naqvi T), pp. 124-128

Page 7: Gijet volume1 issue2

Choroidal Segmentation and Volume Measurement of Optical Coherence Tomography Images in Eyes using

Intensity-Threshold Method

1Neeru Rai, 2John S. Werner and 3Raju Poddar 1Department of Bio-Engineering, Birla Institute of Technology-Mesra, Ranchi, JH 835 215, India.

2Vision Science and Advanced Retinal Imaging laboratory, Department of Ophthalmology and Vision Science, University of California Davis, Sacramento, CA 95817, USA

3Corresponding author: Department of Bio-Engineering, Birla Institute of Technology-Mesra, Ranchi 835215, India. Ph.: +91-651-2276223; TeleFax: +91-651-2275401,

Email: [email protected]

Abstract— We present a relatively new and robust method for automated segmentation of the choroid in healthy and pathological eyes. The 1 µm swept-source optical coherence tomography (OCT) images were utilized for this purpose due to their deeper penetration into the choroid. The algorithm is built on an intensity-threshold technique. The method is demonstrated on the eyes of healthy subjects and of age-related macular degeneration (AMD) patients. The total choroidal volume is calculated automatically. The results correlate well with available reports. Keywords: Coherence tomography, optical biopsy, binarization technique, Gaussian filter.

I. INTRODUCTION

Optical Coherence Tomography (OCT) is an emerging technology for biomedical imaging and optical biopsy. It was first demonstrated in 1991 for imaging the internal cross-sectional microstructure of tissues using a low-coherence interferometer system (Fujimoto et al. (2000) & Huang et al. (1991)). Since its introduction it has found potential use in the field of retinal imaging, revealing changes in the morphology of the retina in normal and diseased eyes (Adhi et al. (2014)). Time-domain optical coherence tomography (TD-OCT) was first used for retinal imaging, but due to its poor resolution and inability to capture 3-D images it was soon replaced by spectral-domain optical coherence tomography (SD-OCT) systems, which provide higher resolution and 3-D imaging possibilities. Swept-source (SS) OCT is now an attractive alternative to SD-OCT for the 1 µm spectral band (1000-1100 nm). Its main advantages include robustness to sample motion, a long measurement range in depth due to short instantaneous line-width, linear sampling in wavenumber (k-clock trigger), compactness, increased detection efficiency (balanced detection scheme) and high imaging speed (Michalewska et al. (2013), Choma et al. (2003) & Wojtkowski (2010)). The use of longer wavelengths gives deeper light penetration, allowing full-depth volumetric imaging of the choroid. The choroid is the most vascular part of the eye, characterized as the region below the RPE and above the chorio-scleral interface. It performs the vital role of supplying the eye with oxygen and other essential nutrients (Caneiro et al. (2013)). A number of diseases affecting the macula, such as age-related

Grenze ID: 01.GIJET.1.2.18 © Grenze Scientific Society, 2015


Page 8: Gijet volume1 issue2


macular degeneration (AMD), polypoidal choroidal vasculopathy (PCV), and central serous chorioretinopathy (CSC), have been found to be correlated with choroidal dysfunction. Earlier, the vast majority of studies examining choroidal thickness and volume using OCT instruments utilized manual segmentation methods, which are time consuming and prone to subjective error. Recently, a small number of methods have been reported for fully automatic segmentation of the choroidal layer. A two-stage statistical model was used by Kajic et al. (2011) to automatically segment the choroidal region in normal and pathological 1060 nm OCT images. Hu et al. (2013) used a graph-based search theory for semi-automatic segmentation of the choroid, and Tian et al. (2013) used graph-based search for fully automatic segmentation of the choroid. In the current work, we have implemented a method that uses an intensity-threshold based binarization (ITB) technique for fully automatic segmentation of the choroidal layer. Although the concept is very simple, there are several difficulties in applying the ITB technique to OCT images, mainly because of the depth-dependent signal decay due to scattering in the sample. To avoid this intrinsic problem, en face images are extracted at a constant distance from the RPE rather than at a constant distance from the zero-delay point. The signal decay is nearly even in such an en face image; hence the ITB technique can be applied.
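To make the en face extraction step concrete, the following minimal Python sketch averages an intensity slab taken at a fixed axial offset below a known RPE depth map. Everything here (function name, array shapes, the assumption that the RPE depth map is already available) is illustrative, not code from the paper.

```python
import numpy as np

def en_face_slab(volume, rpe_depth, offset_px, thickness_px):
    """Average an en face slab taken at a constant axial offset below
    the RPE, so the depth-dependent signal decay is nearly even.

    volume    : 3-D OCT intensity array (n_bscans, n_depth, n_ascans)
    rpe_depth : (n_bscans, n_ascans) array of RPE depth indices (pixels)
    """
    n_b, n_z, n_a = volume.shape
    slab = np.zeros((n_b, n_a))
    for b in range(n_b):
        for a in range(n_a):
            z0 = int(rpe_depth[b, a]) + offset_px
            z1 = min(z0 + thickness_px, n_z)
            if z0 < z1:
                slab[b, a] = volume[b, z0:z1, a].mean()
    return slab
```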

II. MATERIALS AND METHODS

A. Imaging system and scanning protocol
SS-OCT data sets were obtained in the Vision Science and Advanced Retinal Imaging laboratory (VSRI) at the University of California Davis Medical Center on a 62-year-old healthy subject with normal ocular media and on two AMD patients. Written informed consent, approved by the institutional review board (IRB), was obtained prior to imaging. The description of the SS-OCT system, which allows posterior segment imaging, was reported in our previous work, Poddar et al. (2014). The light source is an external cavity tunable laser (ECTL), a swept-source laser (Axsun Technologies), with a central wavelength of 1060 nm, sweep bandwidth of 110 nm, repetition rate of 100 kHz, 46% duty cycle and average output power of ~23 mW. The subject's head position was fixed during acquisition using a custom bite-bar and forehead rest. There was no need for pupil dilation. The scanning area of the retina was 1.5 x 1.5 mm2, with 4.2 µm spacing between consecutive A-scans and between BM-scans. The A-line exposure time was 7.2 µs and the spectral data were saved in a binary file format for post-processing in custom-made software. All images shown in this manuscript were acquired in vivo at 100,000 axial scans (A-scans) per second. Each B-scan consisted of 440 A-scans acquired over a 1.5 mm lateral scanning range.

B. Segmentation algorithm
All the SS-OCT data sets saved in binary format are first imported into the FIJI/ImageJ software for registration of the B-scans contained in the volumetric scan; this aligns all the frames of the volume scan into the same coordinate system. The images are then imported into the custom-made software for segmentation of the choroid using the ITB technique. For the segmentation we have utilized a method similar to that presented by Yasuno et al. (2006). Two boundaries, the anterior and the posterior boundary, are extracted. The anterior boundary is represented by the outer segment of the highly reflective RPE layer (Bruch's membrane). To suppress imaging noise, the OCT images are passed through a Gaussian filter with a standard deviation of 2 pixels (σ = 2) to smooth the edges. The thresholding data is obtained by iterated measurement of the image histogram. The histogram result is divided into four groups, and the group of pixels corresponding to the highest intensities is used to threshold the gradient-magnitude images, which yields the binary images. Small particles in the binary image are removed using a 3 x 3 erosion. The resulting binary image represents the RPE layer, whose edges are found by a differentiation method with a 2 x 2 matrix; the segmented line is obtained from this matrix.
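A minimal sketch of the filtering and binarization steps is given below, assuming grayscale B-scans held in NumPy arrays. The four-group histogram split is approximated here with a simple 75th-percentile cut, and the final edge-differentiation step is omitted; function and variable names are illustrative only.

```python
import numpy as np
from scipy import ndimage

def rpe_binary_map(bscan):
    # Suppress imaging noise with a Gaussian filter (sigma = 2).
    smooth = ndimage.gaussian_filter(bscan.astype(float), sigma=2)

    # Approximate the four-group histogram split: keep only the
    # highest-intensity quartile when thresholding the gradient image.
    grad = ndimage.gaussian_gradient_magnitude(smooth, sigma=2)
    binary = grad > np.percentile(grad, 75)

    # Remove small particles with a 3 x 3 erosion.
    return ndimage.binary_erosion(binary, structure=np.ones((3, 3)))
```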

C. Choroidal volume determination
In the segmented image, the area between the upper and lower edges of the choroid was calculated from the OCT volume scan. Each pixel dimension was first converted to the actual physical dimension of the image (the image scanning length divided by the corresponding total number of pixels). The resulting pixel area was then multiplied by the total number of pixels in the segmented choroid region. This gives the

Page 9: Gijet volume1 issue2


area of the choroid of a single B-scan. The areas of all the B-scans in a volumetric scan are then summed to determine the cumulative volume.
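In code form this is a short reduction; the sketch below, with illustrative names and units, mirrors the pixel-to-millimetre conversion and summation just described.

```python
import numpy as np

def choroid_volume(masks, dx_mm, dz_mm, dy_mm):
    """Cumulative choroidal volume from per-B-scan segmentation masks.

    masks        : boolean array (n_bscans, n_depth, n_ascans); True = choroid
    dx_mm, dz_mm : lateral and axial pixel sizes (scan length / pixel count)
    dy_mm        : spacing between consecutive B-scans
    """
    # Area of the choroid in each B-scan = pixel count * pixel area.
    areas = masks.reshape(masks.shape[0], -1).sum(axis=1) * dx_mm * dz_mm
    # Sum the B-scan areas and scale by the B-scan spacing.
    return areas.sum() * dy_mm
```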

III. RESULTS AND DISCUSSION

The acquired OCT images were segmented to reveal the chorio-retinal and choroid-sclera interfaces. Figure 1 shows the results for the healthy subject. The left panels A(a) and A(c) show the unsegmented OCT images, whereas the right panels A(b) and A(d) show the OCT images after segmentation. The line indicated by the yellow arrow is the chorio-retinal interface, and the line indicated by the green arrow is the choroid-sclera interface. The region between these two lines represents the choroid. The cumulative volume of the choroidal region was found to be 22.90477 mm3 (Caneiro et al. (2013) & Kajic et al. (2011)). Similarly, Figures 2 and 3 demonstrate the OCT images before and after segmentation for the two AMD patients. The yellow arrow points to the chorio-retinal interface and the green arrow to the choroid-sclera interface. The OCT images of the AMD patients show an irregular RPE layer, visible as certain peaks; the automated segmentation demonstrated here nevertheless segments these peak regions robustly. The cumulative volume for AMD patient-1 is found to be 22.03656 mm3 and for patient-2 it is 23.13005 mm3; see Table I. The results correlate well with the existing reports of Caneiro et al. (2013) & Kajic et al. (2011).

TABLE I. CUMULATIVE CHOROIDAL THICKNESS AND VOLUME OF NORMAL AND DISEASED SUBJECTS

Subjects         Choroidal Area (mm2)   Cumulative Choroidal Volume (mm3)
Normal           0.34651                22.90477
AMD patient-1    0.24701                22.03656
AMD patient-2    0.46989                23.13005

Figure 1. SS-OCT images of a healthy posterior segment eye at 3° temporal eccentricity from the fovea. A(a) and A(c): original OCT images (without segmentation); A(b) and A(d): segmented choroidal layer represented by the lines between the arrows (yellow arrow: boundary between retina and choroid; green arrow: boundary between choroid and sclera). Scale bar: 300 µm

IV. CONCLUSION

A new and robust algorithm for automatic segmentation of the anterior and posterior choroidal boundaries is demonstrated. The method uses an intensity-threshold based binarization technique to segment the two boundaries. The choroid-sclera interface was detected at a constant depth from the RPE layer. The approach is tested and evaluated on different data sets of normal and pathological subjects, and the algorithm shows high accuracy even for AMD patients with deformed RPE layers. The fully automated segmentation method developed here can provide many medically essential histopathological findings in the field of ophthalmology.

Page 10: Gijet volume1 issue2


Figure 2. SS-OCT images of the posterior eye of the 1st AMD patient. A(a) & A(c): original OCT images; A(b) & A(d): segmented choroidal layer represented by lines between the arrows (yellow arrow: boundary between retina and choroid; green arrow: boundary between choroid and sclera). Scale bar: 300 µm

Figure 3. SS-OCT images of the posterior eye of the 2nd AMD patient. A(a) & A(c): original OCT images; A(b) & A(d): segmented choroidal layer represented by lines between the arrows (yellow arrow: boundary between retina and choroid; green arrow: boundary between choroid and sclera). Scale bar: 300 µm

REFERENCES

[1] Adhi M., Liu J. J., Qavi A. H., et al. (2014), "Choroidal analysis in healthy eyes using swept-source optical coherence tomography compared to spectral domain optical coherence tomography", American Journal of Ophthalmology 157, 1272-1281.

[2] Caneiro D. A., Read S. A., & Collins M. J. (2013), "Automatic segmentation of choroidal thickness in optical coherence tomography", Biomedical Optics Express 4(12), 2795-2812.

[3] Choma M. A., Sarunic M. V., Yang C., et al. (2003), "Sensitivity advantage of swept source and Fourier domain OCT", Optics Express 11(18), 2183-2190.

[4] Fujimoto J. G., Pitris C., Boppart S. A., et al. (2000), "Optical coherence tomography: an emerging technology for biomedical imaging and optical biopsy", Nature America 2(1-2), 9-25.

[5] Huang D., Swanson E. A., Lin C. P., et al. (1991), "Optical coherence tomography", Science 254(5035), 1178-1181.

[6] Hu Z., Wu X., Ouyang Y., et al. (2013), "Semi-automated segmentation of the choroid in spectral-domain optical coherence tomography volume scans", Investigative Ophthalmology and Visual Science 54(3), 1722-1729.

Page 11: Gijet volume1 issue2


[7] Kajic V., Esmaeelpour M., Povazay B., et al. (2011), "Automated choroidal segmentation of 1060 nm optical coherence tomography in healthy and pathologic eyes using a statistical model", Biomedical Optics Express 3(1), 86-103.

[8] Michalewska Z., Michalewski J., & Nawrocki J. (2013), "Swept source optical coherence tomography", Retina Today, 50-56.

[9] Poddar R., Kim D. Y., Werner J. S., et al. (2014), "In vivo imaging of human vasculature in chorioretinal complex using phase variance contrast method with phase stabilized 1 µm swept-source optical coherence tomography", Journal of Biomedical Optics, 1-12.

[10] Schindelin J., Arganda-Carreras I., Frise E., et al. (2012), "Fiji: an open-source platform for biological-image analysis", Nature Methods 9(7), 676-682.

[11] Tian J., Marziliano P., Baskaran M., et al. (2013), "Automatic measurements of choroidal thickness in EDI-OCT images", Biomedical Optics Express 4(3), 397-411.

[12] Wojtkowski M. (2010), "High speed optical coherence tomography: basics and applications", Applied Optics 49(16), D30-D61.

[13] Yasuno Y., Makita S., Hong Y., et al. (2006), "Optical coherence angiography", Optics Express 14(17), 7821-7840.

Page 12: Gijet volume1 issue2

Participation of Doubly Fed Induction Generator based Wind Turbines in Power System Primary

Frequency Regulation

1Renuka T K and 2Dr. Reji P
1Department of Electrical & Electronics Engineering, MES College of Engineering, Kuttippuram, Kerala State, India
2Department of Electrical & Electronics Engineering, Government Engineering College, Thrissur, Kerala State, India

Abstract—The increasing penetration of Doubly Fed Induction Generator (DFIG) based wind turbines in the power system will reduce the total system inertia, requiring new methods to control the grid frequency. This paper proposes two control methods for primary frequency control, namely speed delay recovery control and droop control. Small-perturbation, linear, dynamic transfer-function models are used to simulate primary frequency regulation services for single-area and two-area power systems with a mix of conventional generators and DFIG-based wind generators. The effect of varying DFIG penetration levels on system frequency control performance has been examined. Keywords: DFIG, frequency regulation, speed delay recovery, droop control

I. INTRODUCTION

Nowadays the worldwide trend is to integrate more wind energy into the power system. Modern wind farms are equipped with Variable Speed Wind Turbines (VSWT), and the most frequently applied variable speed drive concept is the Doubly-Fed Induction Generator (DFIG) drive. The kinetic energy of wind turbines is stored in the rotating mass of their blades, but in DFIG based turbines this energy does not contribute to the inertia of the grid, as the rotational speed is decoupled from the grid frequency by a power electronic converter. This reduces the total system inertia of a wind-integrated power system, and hence the frequency change during a load/generation mismatch will be larger. Even though the penetration of wind turbines into the power grid has increased, frequency regulation and AGC tasks are mainly undertaken by conventional generation units. Modern wind turbines are equipped with maximum power point tracking facilities, so that they deliver maximum power output under all possible conditions. A sustained increase in power is not possible, and therefore wind turbines cannot participate in 'secondary response' services in the way conventional plants can. Although the steady-state active power delivered to the grid by a VSWT depends on the mechanical energy transferred from the wind, these turbines can be modified to increase their output power almost instantaneously: the electric power has to be transiently controlled by using the kinetic energy stored in the mechanical system. Recently, grid codes have been revised to ensure that wind turbines contribute to frequency control.

Grenze ID: 01.GIJET.1.2.505 © Grenze Scientific Society, 2015


Page 13: Gijet volume1 issue2


II. COMPARISON OF THE FREQUENCY RESPONSE OF DFIG BASED WIND TURBINE AND CONVENTIONAL GENERATION PLANTS

In conventional generating plants based on synchronous generators, any decrease in power system frequency manifests as a change in the speed of the stator-led rotating flux. Such speed changes are resisted by the rotating mass (the generator rotor and the turbine rotor), leading to a transfer of rotational energy to the power system via the stator. Synchronous generators that carry spinning reserve also activate primary control by supplying active power proportional to the frequency deviation, based on their droop characteristics. After primary control, the system operator can successively activate secondary and tertiary controls to recover the frequency to its nominal value. Figure 1 shows the block diagram of a conventional generator with frequency control.

Fig.1. Block diagram of power system comprising a conventional generator with frequency control

Fig. 2. Block diagram of DFIG

The stator of the DFIG is directly connected to the grid, whereas the rotor is connected to the grid via a power electronic converter, as shown in figure 2. Two-way transfer of power is possible through the converter system: the grid-side converter provides a dc supply to the rotor-side converter. When the wind speed falls, the rotor speed also drops and the generator operates in a sub-synchronous mode, in which the DFIG rotor absorbs power from the grid. During high wind speed, the DFIG wind turbine runs at super-synchronous speed and delivers power from the rotor through the converters to the network. Thus the rotational speed of the DFIG determines whether power is delivered to the grid through the stator only or through both the stator and the rotor. As wind penetration is expected to grow considerably in the coming decade, the total system inertia will fall and greater rates of frequency change will be observed during system contingencies (e.g. loss of a generating unit or sudden load variations). One solution is to mimic the inherent inertial response of traditional synchronous generators, i.e. to add a control loop that makes the inertia of the DFIG based turbine available to the grid.

III. DFIG FREQUENCY CONTROL

The frequency variations can be controlled using two methods, namely speed delay recovery control and droop control. The speed delay recovery module consists of the DFIG mechanical inertia block together with a speed regulator, as shown in figure 3. The DFIG mechanical inertia block provides an output based on the measured speed and a reference speed (obtained from the measured electrical power Pmeas). This output is sent to the DFIG speed regulator, which consists of PI controllers; the speed delay recovery module thus provides a power set point to the wind turbine. The proposed droop control is similar to the one usually used in synchronous generators. The droop loop, characterized by a regulation R, injects an active power proportional to the difference between the nominal and measured frequency. The DFIG droop loop is shown in figure 4. This loop is activated only for a short duration, when there is a change in frequency; the filter ensures that a permanent frequency deviation has no effect on the control.
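The droop-plus-washout behaviour just described can be captured in a few lines. The sketch below is an assumption-laden illustration (simple forward-Euler discretization, illustrative parameter names), not the paper's implementation:

```python
import numpy as np

def droop_power(delta_f, R, Tw, dt):
    """Active-power set point from a DFIG droop loop with washout.

    delta_f : samples of frequency deviation (pu)
    R       : droop regulation (pu frequency / pu power)
    Tw      : washout time constant (s); removes the effect of a
              permanent frequency deviation so the loop acts transiently
    dt      : simulation step (s)
    """
    dp = np.zeros_like(delta_f)
    x = 0.0                       # washout filter state
    for i, f in enumerate(delta_f):
        u = -f / R                # raw droop signal
        y = u - x                 # washout output: s*Tw/(1 + s*Tw)
        x += dt * y / Tw          # forward-Euler state update
        dp[i] = y
    return dp
```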

Page 14: Gijet volume1 issue2


Fig 3. Speed Delay Recovery control of DFIG Fig. 4. Droop Loop Control of DFIG

IV. PRIMARY FREQUENCY REGULATION IN A SINGLE-AREA SYSTEM WITH DFIG-BASED WIND TURBINE

The block diagram of a power system comprising a conventional generator and a DFIG-based wind turbine with frequency control is shown in Figure 5. Here ΔPd is the incremental active power demand, ΔPw the incremental value of wind generation, ΔPc the incremental value of conventional generation and Δf is the incremental frequency change.

Fig.5. Block diagram of a power system comprising a Conventional generator and a DFIG-based wind turbine

Simulations are done for a 0.02 per unit load perturbation, with and without the DFIG wind turbine, to examine its contribution to primary frequency regulation. Plots for these simulations are presented in Figures 6 to 9. It is assumed that the DFIG-based wind turbines are at their optimal mechanical speed with the maximum power obtainable from the wind. The penetration level of wind power is varied by changing parameters such as the system inertia constant and the permanent droop: an x% wind penetration means that the existing generating units are reduced by x%, i.e. an x% reduction in inertia and a corresponding increase in permanent droop.
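As a rough illustration of this scaling, the following sketch steps a simplified single-area swing equation (governor dynamics neglected) under a 0.02 pu load step; all parameter values are illustrative, not the paper's:

```python
import numpy as np

def frequency_response(dP_load=0.02, wind_pen=0.2, H=5.0, D=1.0,
                       R=0.05, t_end=50.0, dt=0.01):
    """Step response of a simplified single-area system, with x% wind
    penetration modelled as an x% reduction in conventional inertia
    and a corresponding increase in permanent droop."""
    Heff = H * (1 - wind_pen)          # reduced system inertia
    Reff = R / (1 - wind_pen)          # increased permanent droop
    f, out = 0.0, []
    for _ in np.arange(0.0, t_end, dt):
        dP_gov = -f / Reff             # primary (governor) response
        dfdt = (dP_gov - dP_load - D * f) / (2 * Heff)
        f += dt * dfdt
        out.append(f)
    return np.array(out)               # frequency deviation, pu
```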

Fig. 6. Primary frequency regulation for 2% load change & 5% wind power penetration

Fig. 7. Primary frequency regulation for 2% load change & 20% wind power penetration

[Plot data not reproduced: Figures 6 and 7 show, in per unit versus time (sec), the frequency deviation with and without DFIG inertia control, together with the change in mechanical speed and the change in wind power generation, for 5% and 20% wind power penetration respectively. The Figure 5 block diagram relates ΔPd, ΔPw, ΔPc and Δf through the droop loop control, speed delay recovery control, conventional generation, electrical system and wind turbine blocks.]

Page 15: Gijet volume1 issue2


Fig. 8. Primary frequency regulation for 2% load change & 30% wind power penetration

Fig. 9. Primary frequency regulation for 2% load change & 40% wind power penetration

Table I shows the magnitudes of the peak values and the steady-state error of the frequency response curves for different levels of wind penetration. From the frequency response plots and Table I it is clear that, following the disturbance, the response is improved in terms of lower frequency excursion with DFIG participation. With 5% penetration there is no significant improvement in frequency response compared with the case without DFIG frequency control, but as the penetration of wind energy increases from 5% to 40% there is considerable improvement in the peak values. When the load increases at t = 0, the DFIG instantly releases its kinetic energy by reducing its mechanical speed, and hence increases its output to participate in primary frequency regulation. Thereafter the DFIG output decreases, because the speed is no longer optimal and the power extracted from the wind is reduced. The DFIG speed controller then acts to recover the optimal speed, so that the DFIG power output returns to its nominal value.

TABLE I: COMPARISON OF THE FREQUENCY RESPONSE FOR VARIOUS WIND PENETRATION LEVELS

Wind          Steady state error (pu)             Peak values (pu)
penetration   With DFIG       Without DFIG        With DFIG      Without DFIG
              freq. control   freq. control       freq. control  freq. control
5%            -0.0602         -0.0601             -0.1012        -0.1059
10%           -0.0634         -0.06326            -0.0990        -0.1085
15%           -0.06717        -0.06679            -0.0970        -0.1112
20%           -0.07114        -0.0707             -0.0949        -0.1148
25%           -0.07576        -0.07515            -0.09299       -0.1194
30%           -0.08102        -0.080172           -0.09111       -0.1241
40%           -0.094137       -0.09253            -0.08751       -0.13463

V. PRIMARY FREQUENCY REGULATION IN A TWO-AREA SYSTEM WITH DFIG-BASED WIND TURBINE

The dynamic performance of a two-area interconnected system can be analyzed using a small-perturbation transfer-function model. During the simulations it has been assumed that, in both areas, the DFIG-based wind turbines are in their maximum power tracking mode and the wind speed remains constant. Figures 13 to 16 show the tie-line power increment for 10% to 40% DFIG-based wind power penetration, with and without the wind generation contributing to frequency regulation. It can be observed that, with DFIG-based wind power penetration with frequency control, the increment in tie-line power reduces as wind penetration increases, and the settling time also improves with increased penetration.

VI. CONCLUSION

This paper attempts to address the frequency regulation issues associated with integrating DFIG based wind generation with conventional generation sources in the total energy supply mix of the power system. Small-perturbation transfer-function models in state-space form are used to simulate both single-area and two-area power systems with a mix of conventional generation and wind generation. To control frequency variations, two methods, namely speed delay recovery control and droop control, are proposed.



Page 16: Gijet volume1 issue2


Fig. 13. Change in Tie line power for 2% load change with and without frequency regulation (10% wind power penetration)

Fig. 14. Change in Tie line power for 2% load change with and without frequency regulation (20% wind power penetration)

Fig. 15. Change in Tie line power for 2% load change with and without frequency regulation (30% wind power penetration)

Fig. 16. Change in Tie line power for 2% load change with and without frequency regulation (40% wind power penetration)

As the penetration of wind energy increases there is considerable improvement in the peak frequency excursion values. For the two-area system, with these controls, the increment in tie-line power reduces with increasing wind penetration, and the settling time also improves with increased penetration.

REFERENCES

[1] Erlich I. & Wilch M. (2012), "Primary frequency control by wind turbines", 3rd IEEE PES ISGT Europe, Berlin, Germany, October 14-17.

[2] Jalali M. & Bhattacharya K. (2013), "Frequency regulation and AGC in isolated systems with DFIG-based wind turbines", Power and Energy Society General Meeting (PES), pp. 1-5.

[3] Johan Morren, Sjoerd W. H. de Haan, Wil L. Kling, and J. A. Ferreira, "Wind Turbines Emulating Inertia and Supporting Primary Frequency Control", IEEE Transactions on Power Systems, Vol. 21, No. 1, February 2006.

[4] Michael Z. Bernard, T. H. Mohamed, Raheel Ali, Yasunori Mitani, Yaser Soliman Qudaih, "PI-MPC Frequency Control of Power System in the Presence of DFIG Wind Turbines", Scientific Research, Engineering, 2013, 5, 43-50.

[5] Sun HaiShun, Liu Ju, Wen JinYu, Cheng ShiJie, Luo Cheng & Yao LiangZhong, "Participation of large-scale wind power generation in power system frequency regulation", Chinese Science Bulletin, Vol. 58, December 2013.


Page 17: Gijet volume1 issue2

Strength Characteristics of Stabilized Peat Soil using

Fly Ash

1Lakshmi Sai A and 2Ramya K
1PG Scholar, Dept. of Civil Engineering, Thejus Engineering College, Thrissur, Kerala, India, [email protected]
2Assistant Professor, Dept. of Civil Engineering, Thejus Engineering College, Thrissur, Kerala, India, [email protected]

Abstract— Stabilization of soft and weak soils is an effective method of improving their strength characteristics, since removal and replacement of the soil involves high cost. Peat is one of the soft soils on which construction is difficult. This paper describes the stabilization of peat soil, with the objective of improving its strength by treating it with fly ash, a relatively inexpensive industrial by-product. As the demand for land increases day by day, it is necessary to use the available area effectively. Index Terms— Peat soil, stabilization, fly ash, unconfined compressive strength

I. INTRODUCTION

Peat is considered an extreme form of soft, weak soil, and in most cases construction on such soils is avoided. These soils are found in many countries throughout the world. In general, peat is mainly composed of fibrous organic matter, i.e. partly decomposed plants such as leaves and stems; it consists largely of organic residues of plants, incompletely decomposed through lack of oxygen. Peat is identified as a very soft and difficult soil with low shear strength, high organic matter content, low bearing capacity and high compressibility. These characteristics cause excessive settlement, which is very challenging to geotechnical engineers and the construction industry at large. Because of this problematic nature, construction on peat becomes a very challenging task for geotechnical and civil engineers, who regard peat as among the worst foundation soils for supporting structures. Peat represents an accumulation of disintegrated plant remains preserved under conditions of incomplete aeration and high water content. It accumulates wherever conditions are suitable, that is, in areas with excess rainfall where the ground is poorly drained, irrespective of latitude or altitude; peat deposits tend to be most common in regions with a comparatively cool, wet climate. As demand for land increases and its supply becomes limited, construction on weak soils such as peat cannot be avoided. Much research is under way to find the best methods of stabilizing and improving peat soil, concentrating mainly on modification and stabilization. The purpose of stabilizing and modifying peat soil is to improve its ability to perform well, by increasing its strength and decreasing the excessive settlement when the soil is subjected to loads from structures [2]. The most common treatment for soft or peat soil is to excavate it and replace it with good granular or sandy soil, but this is not encouraged because of the uneconomical

Grenze ID: 01.GIJET.1.2.506 © Grenze Scientific Society, 2015


Page 18: Gijet volume1 issue2


design. If heavily loaded buildings are to be constructed on soft peat layers, piled foundations can be used to transfer the loading to rock; for lightly loaded buildings, however, piled foundations are not economical. Deformation of a peat soil is influenced by the orientation of the solid particles in the soil, and this arrangement is controlled by the way the particles are deposited. The particle arrangement influences the rate of water flow as water tries to escape from the soil under loading. Fly ash, the most widely used supplementary cementitious material in concrete, is a by-product of the combustion of pulverized coal in electric power generating plants. Upon ignition in the furnace, most of the volatile matter and carbon in the coal are burned off. During combustion, the coal's mineral impurities (such as clay, feldspar, quartz, and shale) fuse in suspension and are carried away from the combustion chamber by the exhaust gases. In the process, the fused material cools and solidifies into spherical glassy particles called fly ash, which is then collected from the exhaust gases by electrostatic precipitators or bag filters. Fly ash is a finely divided powder resembling Portland cement.

II. MATERIALS

The soil sample was collected from Ooty, Nilgiris district, Tamil Nadu; the site was a waterlogged area. During the site visit it was noticed that climatic factors such as temperature, humidity and rainfall, among others, are the most important factors behind peat soil formation and development; these factors have direct and indirect influences on peat soil formation, development and characteristics. Among these climatic factors, humidity and temperature were identified as the most important in facilitating the decomposition, transformation and development of organic matter. The soil sample collected was black to dark brown in colour and very spongy. The materials used for the work were the soil sample and fly ash.

III. TEST PROGRAMS

Tests were conducted to examine the effect of fly ash on the strength characteristics of peat soil. The strength of the soil without any binder was evaluated first, to determine the percentage increase after treating the soil with fly ash. The amount of fly ash added to the peat soil sample, as a percentage of the dry soil mass, was in the range of 10-30%.

IV. RESULTS AND DISCUSSIONS

The properties of the virgin soil were evaluated before any treatment; knowledge of the strength characteristics of untreated peat soil allows the percentage increase in strength from the addition of fly ash to be determined. The basic properties of the peat soil are given in Table I. From the tests it is determined that the soil is rich in peat; if the organic content of the soil is less than 15%, it can be termed Class II peat [15]. The unconfined compressive strength of the soil sample implies that the soil is very soft and not capable of carrying even moderate loads. The soil was treated with fly ash at 10%, 15%, 20%, 25% and 30% of the weight of the dry soil mass.

TABLE I: BASIC PROPERTIES OF PEAT SOIL

Description                               Value
Water content (%)                         43.45
Organic matter content (%)                12.5
Free swell index                          Low
Specific gravity                          2.0
Optimum moisture content (%)              15
Maximum dry density (g/cc)                1.77
Unconfined compressive strength, qu       20.9 kN/m2
Shear strength, qu/2                      10.5 kN/m2

Page 19: Gijet volume1 issue2


The fly ash is incorporated into the soil mass as an addition to the total weight of the untreated soil. The variation in the optimum moisture content (OMC) and maximum dry density (MDD) of the soil with different percentages of fly ash is shown in Table II.

TABLE II: VARIATION IN OMC AND MDD WITH DIFFERENT PERCENTAGES OF FLY ASH

Description           OMC (%)   MDD (g/cc)
Soil + 0% fly ash     14.4      1.77
Soil + 10% fly ash    25.9      1.44
Soil + 15% fly ash    27.27     1.42
Soil + 20% fly ash    28.0      1.39
Soil + 25% fly ash    29.1      1.32
Soil + 30% fly ash    22.2      1.46

As fly ash is added, the optimum moisture content of the sample increases up to 25% fly ash and then decreases, as shown in Figure 1. Likewise, the maximum dry density of the sample decreases and then increases at 30% fly ash, as shown in Figure 2.


Figure 1- Curve showing variation in OMC with different percentages of fly ash


Figure 2-Curve showing variation in MDD with different percentages of fly ash

V. UNCONFINED COMPRESSIVE STRENGTH

To study the strength characteristics of the soil alone and with the addition of fly ash, the unconfined compression test was conducted. The preparation of samples for the unconfined compression test faced several difficulties, which again indicates the low strength of the soil. The unconfined compressive strength of the soil for each percentage of fly ash is shown in Table III. The results show that there is a gradual

Page 20: Gijet volume1 issue2


increase in the strength value with the addition of fly ash. On adding 30% fly ash to the soil, there is an overall increase in strength of about 68%. This is because, when fly ash is added to the soil mass, the voids in the soil are reduced, which creates adequate bonding and increases the strength of the soil mass.

TABLE III: UNCONFINED COMPRESSIVE STRENGTH FOR DIFFERENT PERCENTAGES OF FLY ASH

Description            Unconfined compressive strength, qu (kN/m2)
Soil + 0% fly ash      20.9
Soil + 10% fly ash     21.9
Soil + 15% fly ash     26.6
Soil + 20% fly ash     27.4
Soil + 25% fly ash     30.4
Soil + 30% fly ash     35.0
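As a quick arithmetic check on the gain quoted above: qu rises from 20.9 kN/m2 (untreated) to 35.0 kN/m2 at 30% fly ash, a relative increase of (35.0 - 20.9) / 20.9 x 100 ≈ 67.5%, consistent with the figure of about 68%.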

V. CONCLUSIONS

Stabilization of soft soils improves the engineering and index properties of the soil. Peat is a soft, weak soil with low strength, low load bearing capacity and high compressibility, so stabilizing it by suitable means increases its usefulness. Fly ash is an industrial by-product that is easily available and economical, and its disposal is an environmental issue that can be partly solved by using it for stabilization. Adding fly ash to the soil improves its properties: the strength of the soil increased by up to 70% with the addition of 30% fly ash.


Figure 3 - Bar chart showing the UCC strength values for different percentages of fly ash

REFERENCES

[1] Akol A. K. (2012), "Stabilization of peat soil using lime as a stabilizer", Petronas.
[2] Boobathiraja S., Balamurugan P., et al. (2014), "Study on strength of peat soil stabilized with cement and other pozzolanic materials", International Journal of Civil Engineering Research, 5(4), 431-438.
[3] Deboucha S., et al., "Engineering properties of stabilized tropical peat soils", EJGE, 13.
[4] Hendry M. T., et al. (2012), "Evaluating the effect of fiber reinforcement on the anisotropic undrained stiffness and strength of peat", Journal of Geotechnical and Geoenvironmental Engineering ASCE, 1-11.
[5] Huat B. K., et al. (2005), "Effect of chemical admixtures on the engineering properties of tropical peat soils", American Journal of Applied Sciences, 2(7), 1113-112.
[6] Kalantari B., Huat B. K. (2004), "Peat soil stabilization using Ordinary Portland Cement, polypropylene fibers and air-curing technique", EJGE, 13.
[7] Kolay P. K., Pui M. P. (2010), "Peat stabilization using gypsum and fly ash", Journal of Civil Engineering, 1(2).

Page 21: Gijet volume1 issue2


[8] Mesri G., Ajilouni M. (2007), "Engineering properties of fibrous peat", Journal of Geotechnical and Geoenvironmental Engineering ASCE, 133(7), 850-866.
[9] Santagata M., et al. (2008), "One-dimensional compression behaviour of a soil with high organic matter content", Journal of Geotechnical and Geoenvironmental Engineering ASCE, 134(1), 1-13.
[10] Sauer J. J., et al. (2012), "Trace elements leaching from organic soils stabilized with high carbon fly ash", Journal of Geotechnical and Geoenvironmental Engineering ASCE, 138(8), 968-980.
[11] Shabani M., Kalantari B. (2012), "Mass stabilization technique for peat soil", ARPN Journal of Science and Technology, 2(5), 512-516.
[12] Sing W. L., et al. (2008), "Behavior of stabilized peat soils in unconfined compression tests", American Journal of Engineering and Applied Sciences, 1(4), 274-279.
[13] Tastan E. O., et al. (2011), "Stabilization of organic soils with fly ash", Journal of Geotechnical and Geoenvironmental Engineering ASCE, 137(9), 819-833.
[14] Tan Y. and Paikowsky S. G. (2008), "Performance of sheet pile wall in peat", Journal of Geotechnical and Geoenvironmental Engineering ASCE, 134(4), 445-458.
[15] Thomas P. K., et al., "Quality and quantity of peat material reserves in the Nilgiris", Soil Conservation Research, 40(6).
[16] Wehling T. M., et al. (2003), "Nonlinear dynamic properties of a fibrous organic soil", Journal of Geotechnical and Geoenvironmental Engineering ASCE, 129(10), 929-939.
[17] IS 2720, "Indian Standard methods of test for soils".
[18] Arora K. R., "Soil Mechanics and Foundation Engineering".

Page 22: Gijet volume1 issue2

An Efficient Image Denoising Method using SVM

Classification

1Divya V and 2Dr. Sasikumar M
1Marian Engineering College, Trivandrum, India

2Professor, Marian Engineering College, Trivandrum, India

Abstract— Image denoising algorithms are usually dependent on the type of noise present in the image, so there is a great need for a more generally usable, noise-independent denoising algorithm. In this paper, an image denoising technique is proposed in which the image is first transformed to the nonsubsampled contourlet transform (NSCT) domain, detail coefficients are extracted, and a feature vector for each pixel in the noisy image is formed from the spatial regularity. A support vector machine (SVM) is then used to classify noisy pixels from edge-related ones. Finally, denoising is done by a shrinkage method in which an adaptive Bayesian threshold is utilized to remove noise. Experimental results show that the method gives good performance in terms of visual quality as well as objective metrics such as peak signal-to-noise ratio (PSNR). Keywords: Image denoising; Nonsubsampled contourlet transform; Support Vector Machine classifier

I. INTRODUCTION

The main challenge in image denoising is to preserve information-bearing structures such as edges and textures, to obtain satisfactory visual quality while improving the signal-to-noise ratio (SNR). Initially, conventional techniques of spatial and transform-domain filtering were used for denoising, including the mean filter, median filter, order statistics filters and adaptive filters. Image denoising based on total variation [Rudin et al. (1992)], anisotropic diffusion [Gerig (1992)], bilateral filtering [Tomasi et al. (1998)], mixture models [Portilla et al. (2003)] and non-local means [Brox et al. (2008)] emerged as improvements for noisy images. However, these methods either exhibited certain disturbing artifacts or were efficient only for particular kinds of noise. The field of image denoising realised a boom in performance with the use of wavelets [Luisier et al. (2007)]. Wavelets in 2-D are good at isolating discontinuities at edge points, but do not "see" the smoothness along contours; in addition, separable wavelets can capture only limited directional information [Do et al. (2005)]. This led to the rise of directional, redundant transforms such as the contourlet transform. In applications such as denoising, enhancement and contour detection, a redundant representation can significantly outperform a non-redundant one. Subsequently, the shift-invariant nonsubsampled contourlet transform (NSCT) was designed [Cunha et al. (2006)]. More importantly, it captures the spatial relationship between pixels in the original image through a few large, spatially contiguous coefficients in the NSCT domain, which represent features of the image and should be retained as much as possible during denoising.

Grenze ID: 01.GIJET.1.2.507 © Grenze Scientific Society, 2015


Page 23: Gijet volume1 issue2


The use of the Support Vector Machine (SVM) for denoising has evolved recently. An SVM-based classifier is built to minimize the structural misclassification risk, whereas conventional classification techniques often minimize the empirical risk [Cheng et al. (2004)]. SVM is therefore claimed to yield enhanced generalization properties, and its application results in a global solution to the classification problem. In the proposed method, SVM is used to classify the noisy NSCT coefficients from the non-noisy ones.

II. PROPOSED METHOD

The proposed method works well for both gray scale images as well as colour images. The process flow is as explained below:

Figure 1: Process flow of the proposed method

A. NSCT decomposition
Perform a J-level NSCT decomposition on the noisy image to obtain a low-pass subband A1 and a series of high-pass subbands D_k^s (k = 1, 2, ..., J; s = 1, 2, ..., H). Here k denotes the decomposition level, s denotes the decomposition orientation, and H is the maximum number of decomposition directions.

B. Binary map
Form a preliminary binary label for each NSCT coefficient; collectively these labels form a binary map. The NSCT of the noisy image generates coefficients C(x,y), from which the preliminary binary map I(x,y) is created:

I(x,y) = 1 if |C(x,y)| > τ, and 0 otherwise    (1)

where τ is the threshold for selecting valid coefficients in the construction of the binary NSCT coefficient map. τ is calculated by Otsu thresholding, so the threshold depends on the between-class variance of the image rather than on the noise variance.

C. Spatial regularity extraction
Because of spatial regularity, the resulting NSCT subbands generally do not contain isolated coefficients. Spatial regularity in the preliminary binary map is used to further examine the role of each valid NSCT coefficient: whether it is isolated noise or part of a spatial feature. The number of supporting binary values around a particular nonzero value I(x,y) is used to make the judgement. The support value is the sum of all I(x,y) that support the current binary value, i.e. the total number of valid NSCT coefficients spatially connected to the current I(x,y).
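A small sketch of this step is given below, assuming 8-neighbour connectivity for "spatially connected" (the paper does not spell out the neighbourhood) and a τ obtained from Otsu's method; names are illustrative.

```python
import numpy as np
from scipy import ndimage

def binary_map_and_support(coeffs, tau):
    """Preliminary binary map (Eq. 1) and per-pixel support values."""
    I = (np.abs(coeffs) > tau).astype(int)
    # Count valid coefficients in the 8-neighbourhood of each pixel;
    # multiplying by I keeps support only at valid positions.
    kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
    V = ndimage.convolve(I, kernel, mode='constant') * I
    return I, V
```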

D. Feature vector formation
For each subband D_k^s, the preliminary binary map I_k^s[x,y] and support value V_k^s[x,y] are computed. N_k^s NSCT coefficients with the maximum support value are selected as the feature vector F_k^s1, and N_k^s NSCT coefficients with support value 0 are randomly selected as the feature vector F_k^s2. Finally, the I_k^s[x,y] corresponding to the selected NSCT coefficients are regarded as the training objectives O_k^s1 and O_k^s2, respectively.

E. SVM training and classification
Train the SVM model, with F_k^s1 and F_k^s2 as the feature vectors for training and O_k^s1 and O_k^s2 as the training objectives. Using the trained SVM model, all high-frequency NSCT coefficients are classified into noise-related coefficients and edge-related ones.

Page 24: Gijet volume1 issue2


F. Adaptive Bayesian thresholding
Calculate the denoising threshold for each detail subband D_k^s. In this paper, a level-adaptive Bayesian threshold is used, in which an exponentially decaying inter-scale model describes the inter-scale dependency of the image NSCT coefficients [Wang et al. (2010)]. The level-adaptive Bayesian threshold can be computed as follows:

(i) Calculate the noise standard deviation σn, estimated from the subband by the robust median estimator:

    σn = median(|C(x,y)|) / 0.6745,  C(x,y) ∈ D_k^s    (2)

(ii) Estimate the signal standard deviation σ_k^s (k = 1, 2, ..., J; s = 1, 2, ..., H) for the noisy coefficients of each detail subband D_k^s, using

    σ_k^s = sqrt( max( 0, (1/(m n)) Σ_x Σ_y [D_k^s(x,y)]^2 - σn^2 ) )    (3)

where m and n are the image dimensions.

(iii) Calculate the discriminating threshold σth by exploiting the near-exponential prior of the NSCT coefficients across scales,

    σth = σn (Σ_k 2^(-k) σ_k) / (Σ_k 2^(-k))    (4)

where the sums run over the decomposition scales.

(iv) Calculate the denoising threshold T(k, σ_k^s) for each detail subband with σ_k^s < σth:

    T(k, σ_k^s) = 2^((J-k)/2) σn^2 / σ_k^s    (5)

where k is the current scale and J is the largest (coarsest) scale undergoing denoising.

(v) Process the noise-related NSCT coefficients in the high-frequency subbands with soft thresholding:

    Ĉ_k^s(x,y) = sgn(C_k^s(x,y)) (|C_k^s(x,y)| - T) if |C_k^s(x,y)| ≥ T, and 0 otherwise    (6)

where k = 1, 2, ..., J; s = 1, 2, ..., H.
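A compact sketch of steps (ii)-(v) for one subband follows. The 2^((J-k)/2) factor mirrors the reconstructed Eq. (5) above and should be read as an assumption; σn would come from the Eq. (2) median estimator, e.g. np.median(np.abs(D)) / 0.6745.

```python
import numpy as np

def denoise_subband(D, sigma_n, k, J):
    """Soft-threshold one detail subband D at scale k (of J scales)."""
    # Signal std estimate, Eq. (3).
    sigma_s = np.sqrt(max(0.0, float((D ** 2).mean()) - sigma_n ** 2))
    if sigma_s == 0.0:
        return np.zeros_like(D)
    # Level-adaptive Bayesian threshold, Eq. (5) (reconstructed form).
    T = 2 ** ((J - k) / 2) * sigma_n ** 2 / sigma_s
    # Soft thresholding, Eq. (6).
    return np.sign(D) * np.maximum(np.abs(D) - T, 0.0)
```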

G. NSCT Reconstruction Perform the inverse NSCT transform on the denoised NSCT high frequency components and the low pass component to reconstruct the denoised image.

H. Colour image denoising
When processing colour images, the RGB image is first converted to the YUV space, because the RGB colour space suffers from high correlation among the three planes and the transformation from RGB to YUV is simple. Each channel is extracted and the same process flow is followed to denoise each channel.
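The per-channel wrapper might look like the sketch below, using OpenCV's colour conversions; denoise_channel is a hypothetical stand-in for the grayscale pipeline described above:

```python
import cv2

def denoise_colour(img_rgb, denoise_channel):
    """Denoise an RGB image channel-by-channel in YUV space."""
    yuv = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2YUV)
    for c in range(3):
        # Apply the grayscale NSCT/SVM pipeline to each plane.
        yuv[:, :, c] = denoise_channel(yuv[:, :, c])
    return cv2.cvtColor(yuv, cv2.COLOR_YUV2RGB)
```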

III. SIMULATION RESULTS

The proposed method was simulated on standard 8-bit grayscale images such as Cameraman, Lena, House, Boat and Peppers, as well as on standard colour images such as Barbara, Peppers, Parrot, Mandrill and Tower. The types of noise considered in this paper were salt-and-pepper, Gaussian, Poisson and speckle (multiplicative) noise. 'dmaxflat' was used as the directional filter for the NSCT decomposition. A 3-level NSCT

Page 25: Gijet volume1 issue2


decomposition was performed and the method produced good results. The visual quality of the denoised images obtained using the proposed method is satisfactory. The PSNR value was chosen as the objective quality metric (Tables I and II). Table III lists the denoised outputs.

TABLE I: PSNR VALUES (IN DB) FOR VARIOUS GRAYSCALE IMAGES

Image        Gaussian (noise variance)       Salt & Pepper   Poisson   Speckle
             10       20       30
Lena         28.61    27.92    25.53          29.39           28.32     28.90
House        28.52    28.34    26.26          30.13           28.27     29.56
Peppers      27.65    27.46    25.34          29.20           28.32     29.45
Barbara      28.24    27.82    25.74          28.92           28.12     29.26
Cameraman    27.52    26.94    25.42          28.02           27.95     27.92

Table II: PSNR Values (in dB) for Various Colour Images

Image    | Gaussian | Salt & Pepper | Poisson | Speckle
Parrot   | 51.45 | 53.99 | 51.32 | 52.12
Peppers  | 52.24 | 54.23 | 52.11 | 53.49
Barbara  | 50.13 | 51.72 | 50.21 | 50.82
Tower    | 52.35 | 53.66 | 52.22 | 52.79
Mandrill | 50.11 | 51.25 | 50.15 | 51.13

Table III: Denoising of Grayscale and Colour Images for Various Types of Noise

[Image panels showing the original, noisy and denoised versions of: House (Gaussian noise of variance σ = 30), Cameraman (speckle noise), Peppers (Poisson noise) and Parrot (salt-and-pepper noise).]


IV. CONCLUSION

Many works on image denoising are noise dependent and perform well only for a particular type of noise. The proposed method has the advantage of achieving good visual quality with very few disturbing artifacts. The method utilizes the directional properties of the NSCT to preserve information-bearing structures such as edges, and the excellent classification properties of the SVM to separate noise-related coefficients from edge-related ones. This technique using NSCT and SVM achieves high performance in terms of quality and clarity, irrespective of the type of noise.

REFERENCES

[1] Brox T., Kleinschmidt O., Cremers D.,(2008) “Efficient Nonlocal Means for Denoising of Textural Patterns”, IEEE Transactions on Image Processing, 17(7), pp.1057-1092.

[2] Cheng H., Tian J. W., Liu J., Yu Q. Z.,(2004) “Wavelet Domain Image Denoising via Support Vector Regression”, Electronics Letters, 40(23), pp. 1479-1480.

[3] Cunha A. L., Zhou J., Do M.N. (2006), “The Non Subsampled Contourlet Transform: Theory, Design and Applications”, IEEE Transactions on Image Processing, 15(10), pp. 3089-3101.

[4] Do, M. N., & Vetterli, M. (2005). “The contourlet transform: an efficient directional multiresolution image representation”. IEEE Transactions on Image Processing, 14(12), 2091-2106.

[5] Gerig G., Kubler O., Kikinis R., Jolesz F. A.,(1992) “Nonlinear Anisotropic Filtering of MRI Data”, IEEE Transactions on Medical Imaging, 11(2): 221-232.

[6] Luisier F., Blu T., Unser M., (2007) “A New SURE Approach to Image Denoising: Interscale Orthonormal Wavelet Thresholding”, IEEE Transactions on Image Processing, 16(3), pp. 593-606.

[7] Portilla J., Strela V., Wainwright M. J., Simoncelli E.P., (2003) “Image Denoising using scale mixtures of Gaussians in the Wavelet Domain”, IEEE Transactions on Image Processing, 12(11), pp.2851-2862.

[8] Rudin L. I., Osher S., Fatemi E., (1992) “Nonlinear Total Variation based Noise Removal Algorithms”, Physica D, 60, pp. 259-268.

[9] Tomasi C., Manduchi R., (1998) “Bilateral Filtering for Gray and Color Images”, In International Conference on Computer Vision and Pattern Recognition (CVPR), IEEE.

[10] Wang X. Y., Yang H. Y., Fu Z. K. (2010), “A new Wavelet based Image Denoising using Undecimated Wavelet Transform and Least Square Support Vector Machine”, Expert Systems with Applications, 37(10), pp. 556-567.


Study of the Lateral Load Carrying Capacities of Piles

in Layered Soils using PLAXIS 3D

1Neerad Mohan and 2Ramya K
1P.G. Scholar, Civil Engineering Department, Calicut University, Thejus Engineering College, Thrissur, [email protected]
2Assistant Professor, Civil Engineering Department, Thejus Engineering College, Thrissur, [email protected]

Abstract—The lateral load carrying capacity of piles is a largely neglected area in pile design, for reasons such as the high cost of and lack of experts for carrying out in-situ lateral load tests, and the lack of an appropriate theoretical method for the analysis. The lateral load carrying capacity of piles thus remains an unresolved problem. Most studies of the uncertainties in the lateral load capacities of piles have been done in homogeneous soils, and the lateral behavior of piles in layered soils is less explored. This paper studies the behavior of laterally loaded piles in layered soils using PLAXIS 3D.
Index Terms— Piles, Long pile, pile model, Lateral loads, Numerical Analysis, PLAXIS 3D, Layered soils, Homogeneous soil, Load deformation response, IS 2911(Part IV)-1985.

I. INTRODUCTION

Structures like tall chimneys, television/transmission towers, high retaining walls, offshore structures, high-rise buildings, quay and harbor structures, and passive piles in slopes and embankments are subjected to lateral loads due to wind forces, wave forces, earthquakes, lateral earth pressure, etc. These piles or pile groups should resist not only vertical movements but also lateral movements. In many cases external horizontal loads act at the pile head; such loading is called active loading. Common examples are lateral loads (and moments) transmitted to the pile from superstructures like buildings, bridges and offshore platforms. Sometimes the applied horizontal load acts in a distributed way over part of the pile shaft; such loading is called passive loading. Examples of passive loading are loads acting on piles due to the movement of slopes, or on piles supporting open excavations. Thus, piles in most cases are subjected to lateral loads, and consequently proper analysis of laterally loaded piles is very important. Many theoretical and experimental investigations have been carried out on single piles or groups of vertical piles subjected to lateral loads. Generalized solutions for laterally loaded vertical piles are given by Matlock and Reese (1960). The effect of vertical loads in addition to lateral loads has been evaluated by Davisson (1960) in terms of non-dimensional parameters. Broms (1964a, 1964b) and Poulos and Davis (1980) have given different approaches for solving laterally loaded pile problems. Broms' method is ingenious and is based primarily on the use of limiting values of soil resistance; the method of Poulos and Davis is based on the theory of elasticity. The finite difference method of solving the differential equation for laterally loaded piles, together with numerical packages such as PLAXIS and ANSYS, is very much in use where computer facilities are available. Here the study is done in PLAXIS 3D, and the factors under consideration are the modulus of


elasticity and Poisson's ratio of the soil and pile material, cohesion, angle of internal friction and unit weight of the soils. Many other factors, such as the fixity of the pile head, movement of the soil around the pile, history of previous loading, vertical loads acting on the piles, whether the piles are installed in groups, and the slenderness ratio of the piles, should also be considered in lateral load studies of piles. A comparative study of these various factors is thus possible, and the major factor causing a real difference can be identified.

II. MATERIAL USED FOR ANALYSIS

The numerical analysis has been carried out with PLAXIS 3D 2013.1. The PLAXIS 3D Foundation program consists of four basic components, namely Input, Calculation, Output and Curves. In the Input program the boundary conditions and the problem geometry with appropriate material properties are defined. The problem geometry is the representation of a real three-dimensional problem and is defined by work-planes and boreholes. The model includes an idealized soil profile, structural objects, construction stages and loading. The model should be large enough that the boundaries do not influence the results. Boreholes are points in the geometry model that define the idealized soil layers and the groundwater table at that point. Multiple boreholes are used to define the variable soil profile of the project. During 3D mesh generation, soil layers are interpolated between the boreholes so that the boundaries between the soil layers coincide with the boundaries of the elements. The mesh element size can be adjusted by using a general mesh size varying from very coarse to very fine, and also by using line, cluster and point refinements. After defining the model geometry and generating the 3D mesh, initial stresses are applied using either the K0-procedure or gravity loading. The calculation procedure can be performed automatically or manually. The construction stages are defined by activating or deactivating the structural elements or soil clusters in the work-planes, so that a simulation of the construction process can be achieved. A construction period can also be specified for each construction stage. The soil material model selected here is the Mohr-Coulomb model, which requires a total of five parameters: Poisson's ratio, dilatancy angle, friction angle, modulus of elasticity and cohesion. The most important calculation type in PLAXIS 3D Foundation is staged construction. In every calculation step, the material properties, model geometry, loading condition and groundwater level can be redefined. During the calculations in each construction step, a multiplier that controls the staged construction process (ΣMstage) is increased from zero to the ultimate level, which is generally 1.0.
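For illustration, the five Mohr-Coulomb inputs can be collected as below; this is a hypothetical container, not the PLAXIS scripting interface, and the dilatancy angle is assumed zero since it is not reported in Table I.

```python
from dataclasses import dataclass

@dataclass
class MohrCoulombSoil:
    """The five Mohr-Coulomb input parameters used in the model."""
    E: float      # modulus of elasticity (kN/m^2)
    nu: float     # Poisson's ratio
    c: float      # cohesion (kN/m^2)
    phi: float    # angle of internal friction (degrees)
    psi: float    # dilatancy angle (degrees); assumed 0, not given in Table I

# Example: the first soil layer of Table I
dense_clayey_sand = MohrCoulombSoil(E=250e3, nu=0.35, c=10.0, phi=30.0, psi=0.0)
```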

III. STUDY BACKGROUND

Previous studies show that several factors influence the lateral load carrying capacity of piles. For example, adhesion between the soil and the pile shaft has a significant effect on the lateral response of piles: when adhesion increases, the lateral load capacity increases. In the case of sloping ground, the ground inclination directly influences the increase in lateral deflection of piles (K. Georgiadis and M. Georgiadis, 2010). An increase in the frequency of cyclic loading, as in the case of earthquakes, susceptibility to liquefaction, etc., increases the lateral pile deflection (S. Kucukarslan and P. K. Banerjee, 2002). When the pile is vertically loaded, the soil around the pile becomes confined, which supports the pile when lateral loads act (M. N. Hussein et al., 2013). Similarly, the recent load history also influences the lateral load capacity of piles (N. H. Levy et al., 2007). The difference in response between single piles and pile groups is very pronounced: group piles deflect more than a single pile under the same lateral load due to the shadow effect of the pile group, by which the leading piles in the group tend to take more load than the trailing ones (Poulos and Davis, 1980). The behavior of short and long piles is also very different under lateral loads. Long piles fail when the moment at any point exceeds the resisting moment at that point on the pile shaft, whereas short piles fail when the lateral deflection exceeds the limiting value (Duncan and Philip, 1994). The layering of the soil is also an important factor, making the behavior really different from that of a pile in homogeneous soil (Yang and Jeremic, 2004). These factors, including the water table effect, can be made the subject of further studies.

IV. EXPERIMENTAL STUDY

The study has been carried out by modeling an embedded pile using the PLAXIS software. The pile was modeled as a long pile, installed first in layered soil; lateral load was applied to it at various depths and the most reliable result was sought. The soil includes five layers, namely dense clayey sand, dense sand, medium dense


sand, silty clay with gravel and very dense sand. The behavior of this pile in layered soil was then compared with that of the same pile installed in homogeneous medium dense sand under the same loading. The parameters used for the analysis were taken from the literature and are given in the tables below.

TABLE I: SOIL LAYERS AND CORRESPONDING PARAMETERS

Soil                   | γ (kN/m³) | γsat (kN/m³) | Modulus of elasticity, E (kN/m²) | Poisson's ratio | Cohesion, C (kN/m²) | Friction angle (degrees)
Dense clayey sand      | 17 | 20 | 250E3 | 0.35 | 10 | 30
Dense sand             | 16 | 18 | 75E3  | 0.3  | 0  | 40
Medium dense sand      | 16 | 17 | 50E3  | 0.3  | 0  | 40
Silty clay with gravel | 19 | 21 | 45E3  | 0.3  | 50 | 28
Very dense sand        | 17 | 18 | 80E3  | 0.4  | 0  | 45

TABLE II: PILE PROPERTIES

Modulus of elasticity, E        | 30E6 kN/m²
Unit weight                     | 6 kN/m³
Diameter                        | 1000 mm
Max. traction at top and bottom | 200 kN/m and 500 kN/m
Base resistance                 | 10000 kN

A. Lateral Load – Deformation Response and Lateral Load Capacities of Piles
The lateral load capacity of the pile is estimated based on the maximum permissible deformation criteria prescribed in IS:2911 (Part 4) - 1985, and is taken as the minimum of the following: a) fifty percent of the final load at which the total displacement increases to 12 mm; b) the final load at which the total displacement corresponds to 5 mm; c) the load corresponding to any other specified displacement as per performance requirements. Figures 1 and 2 depict the load-displacement curves of the pile in layered soil and homogeneous soil respectively. It can be observed that the deflection steadily increases with increasing load in both cases.

Figure 1: Load – displacement curve for the layered soil

Figure 2: Load – Displacement curve of homogeneous medium dense sand

The deformation meshes for the soils, obtained from the software, are shown below (Figures 3 and 4).

[Figures 1 and 2: load-displacement plots with Load (kN) on the horizontal axis and Displacement (mm) on the vertical axis.]


Table III and Table IV list the pile deflections corresponding to various applied loads in layered soil and homogeneous soil. The tables show that increasing the pile load increases the lateral deflection.

TABLE III: LOAD-DEFLECTION RESPONSE OF PILE IN LAYERED SOILS

Load (kN) | Displacement (mm)
40  | 0.43
80  | 0.93
120 | 1.53
160 | 2.18
200 | 2.90
240 | 3.70
280 | 4.50
320 | 5.31
360 | 6.20
400 | 7.20
440 | 8.10
480 | 9.07
600 | 12.30

TABLE IV: LOAD-DEFLECTION RESPONSE OF PILE IN MEDIUM DENSE SAND

Load (kN) | Displacement (mm)
40  | 0.3898
240 | 3.15
400 | 6.025
680 | 12.18

Figure 3: Deformation mesh for layered soil as obtained from PLAXIS

Figure 4: Deformation mesh for homogeneous medium dense sand as obtained from PLAXIS


The ultimate lateral load capacity of the pile is evaluated as follows, by interpolation.

For the pile in layered soil:
a) load at 12 mm displacement = 589 kN; pile capacity = 1/2 × 589 = 294.5 kN
b) load at 5 mm displacement = 304 kN
Therefore, the pile capacity in layered soil = 294.5 kN (the lesser of the two).

For the pile in homogeneous soil (medium dense sand):
a) load at 12 mm displacement = 673 kN; pile capacity = 1/2 × 673 = 336.5 kN
b) load at 5 mm displacement = 345 kN
Therefore, the pile capacity in homogeneous medium dense sand = 336.5 kN.
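As an illustration of this criterion, the following minimal sketch (illustrative names) linearly interpolates the load-displacement data of Table III and applies the IS:2911 (Part 4) rule:

```python
import numpy as np

def lateral_capacity(loads, displacements):
    """Safe lateral capacity per IS:2911 (Part 4) - 1985: the lesser of
    (a) half the load at 12 mm total displacement and
    (b) the load at 5 mm total displacement.
    loads in kN, displacements in mm (monotonically increasing)."""
    load_at_12 = np.interp(12.0, displacements, loads)
    load_at_5 = np.interp(5.0, displacements, loads)
    return min(0.5 * load_at_12, load_at_5)

# Layered-soil data from Table III
loads = [40, 80, 120, 160, 200, 240, 280, 320, 360, 400, 440, 480, 600]
disps = [0.43, 0.93, 1.53, 2.18, 2.90, 3.70, 4.50, 5.31, 6.20,
         7.20, 8.10, 9.07, 12.30]
print(lateral_capacity(loads, disps))   # about 294.5 kN, matching the text
```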

V. CONCLUSION

A comparison of the computed results reveals that the lateral load carrying capacity of the pile in layered soils is lower than that in homogeneous medium dense sand, as expected. This can be attributed to the idealized homogeneous and isotropic conditions of the homogeneous soil. The software can be used to predict the lateral load carrying capacity of piles. However, additional research is needed to understand how the software responds to additional factors such as adhesion between soil and pile, previous loading history, combined loading, pile group effects, pile length and pile diameter.

REFERENCES

[1] Ashour, M., Norris, G., (2003) “Lateral loaded pile response in liquefiable soil”, ASCE, J. Geotech. Geoenviron. Eng., 129:404-414.
[2] Choi, H., Lee, S., Park, H., Kim, D., (2013) “Evaluation of Lateral Load Capacity of Bored Piles in Weathered Granite Soil”, ASCE, J. Geotech. Geoenviron. Eng., 139:1477-1489.
[3] Georgiadis, M., Georgiadis, K., (2010) “Undrained lateral pile response in sloping ground”, ASCE, J. Geotech. Geoenviron. Eng., 136:1489-1500.
[4] Haldar, S., Sivakumar, B., (2007) “Effect of soil spatial variability on the response of laterally loaded pile in undrained clay”, ScienceDirect.
[5] Hussein, M. N., Tobita, T., Karray, M., Lai, S., (2014) “On the influence of vertical loads on the lateral response of pile foundation”, ScienceDirect.
[6] Jeremic, B., Yang, Z., (2004) “Study of soil layering effects on lateral loading behavior of piles”, ASCE.
[7] Kelesoglu, K. M., Cinicioglu, F. S., (2010) “Free-Field Measurements to Disclose Lateral Reaction Mechanism of Piles Subjected to Soil Movements”, ASCE, J. Geotech. Geoenviron. Eng., 136:331-343.
[8] Leminitzer, A. M., Tehrani, K. P., Ahlberg, E. R., Wallace, G. W., Stewart, J. P., “Non linear efficiency of bored pile group under lateral loading”, ASCE, J. Geotech. Geoenviron. Eng., 136:1673-1685.
[9] Levy, N. H., Einav, I., Randolph, M. F., (2007) “Effect of recent load history on laterally loaded piles in normally consolidated clay”, ASCE, Int. J. Geomech., 7:277-286.
[10] Mokhtar, A., Motaal, A., Wahidy, M., (2014) “Lateral displacement and pile instability due to soil liquefaction using numerical model”, ScienceDirect.
[11] Phillipi, S. K., Duncan, J. M., (1994) “Lateral load analysis of group of piles and drilled shafts”, ASCE, J. Geotech. Engrg., 120:1034-1050.
[12] Reese, C. L., Brown, A. D., Morrison, C., (1988) “Lateral load behavior of pile group in sand”, ASCE, J. Geotech. Engrg., 114:1261-1276.
[13] Reese, C. L., Brown, A. D., O'Neill, W. M., (1987) “Cyclic lateral loading of large scale pile group”, ASCE, J. Geotech. Engrg., 113:1326-1343.


Change in Shrinkage Characteristics of Fiber

Amended Clay Liner Material

1Radhika V and 2Niranjana K
1PG Scholar, Thejus Engineering College, Vellarakkad, Thrissur
2Assistant Professor, Department of Civil Engineering, Thejus Engineering College, Vellarakkad, Thrissur

Abstract—The change in shrinkage characteristics, an undesirable behavior of clayey soils, is the subject of this investigation. The focus is on the impact of short random fiber inclusion on the shrinkage characteristics of a CH soil used as clay liner material. To examine the possible improvements in soil characteristics, soils were reinforced with 0.2, 0.4, 0.6 and 0.8 percent fibers by dry weight of soil, with lengths of 10, 15, 20 and 25 mm. Results indicated that the shrinkage limit increased, whereas the shrinkage index, shrinkage ratio and volumetric shrinkage decreased with increasing fiber content and length.
Index Terms— Clay Liner, Nylon Fiber, Shrinkage limit, Shrinkage Index, Shrinkage ratio, Volumetric shrinkage.

I. INTRODUCTION

Waste generation is increasing day by day, and methods for its storage or disposal are in great demand. The most frequently adopted disposal option for solid waste is the landfill. Low-permeability clays are commonly used for the construction of environmental barriers. The hydraulic properties of the soil can be affected by the formation of shrinkage cracks, which reduce the efficiency of the barrier system. Cracks increase the hydraulic conductivity of the system and lead to leakage of leachate or gas into the surrounding soil. Cracks in liner materials occur mainly due to the low tensile strength of the soil, so amending the soil with a material of high tensile strength can offer a solution to this problem. Lime, cement and sand have been the most common additives used for potential crack reduction. The effects of these additives on the hydraulic conductivity and volumetric shrinkage of clayey soils have been investigated, and it has been reported that soil shrinkage reduced while hydraulic conductivity increased in certain cases. The soil plasticity was also found to decrease, thus decreasing the potential for cracking due to shear forces. Because of the shortcomings of these common materials, synthetic fibers have been used in recent years to reinforce the clay liner and improve its strength and performance.

II. LITERATURE REVIEW

There is some potential for the use of fibrillated fibres with clays. The fibres are effective in reducing the desiccation cracking that occurs in clays subjected to drying. However, when subjected to wet/dry cycles, the effectiveness of the fibres is not as evident. The inclusion of fibres also increased the tensile strength of the clay and provided a ductile behavior that was not present in the specimens without fibres. It was demonstrated that the usefulness of the fibrillated fibre might be improved if it could interact more effectively with clays subjected to negligible overburden pressure, such as through the use of grid-


like screen fibres [9]. From a series of shrinkage tests on a block sample of kaolin clayey soil, it was found that kaolin clay experiences shrinkage in all directions. Shrinkage of the clay layer was followed by cracking due to desiccation. Using fiber causes a large decrease in the crack intensity factor, and increasing the fiber content directly decreases the linear shrinkage [1]. Samples consisting of 75% kaolinite and 25% montmorillonite were reinforced with 1, 2, 4 and 8 percent fibers by dry weight of soil, with 5, 10 and 15 mm lengths. Results indicated that consolidation settlements and swelling of fiber-reinforced samples reduced substantially, whereas hydraulic conductivities increased slightly with increasing fiber content and length. Shrinkage limits also showed an increase with increasing fiber content and length. This meant that the samples experienced much smaller volumetric changes due to desiccation, and the extent of crack formation was significantly reduced.

III. RESEARCH MATERIAL

A. Soil Type
According to Environmental Protection Agency norms, the usually adopted clay liner materials are CH, CL and SC soils, and the maximum hydraulic conductivity should be 1×10⁻⁹ m/sec. The soil used for this project was a highly plastic clay (CH). Atterberg limit (IS:2720 (Part 5, Part 6)-1985) and specific gravity (IS:2720 (Part 3)-1973) tests were carried out on representative samples. The soil had a liquid limit of 58%, a plastic limit of 30%, a plasticity index of 28%, a shrinkage limit of 15.55% and a specific gravity of 2.50. The grain size distribution was 17.9% gravel, 15.4% sand and 66.7% fines. The hydraulic conductivity was 7.39×10⁻¹⁰ m/sec. The index properties of the CH soil are shown in Table I.

B. Fiber Type
Nylon fiber was selected for amending the soil. It is one of the most commonly used synthetic materials, mainly because of its low cost and the ease with which it mixes with soils. It has a relatively high melting point, low thermal and electrical conductivity and a high ignition point. It is also a hydrophobic and chemically inert material which does not absorb or react with soil moisture or leachate. Nylon fibers with 10 mm, 15 mm, 20 mm and 25 mm lengths and contents of 0.2%, 0.4%, 0.6% and 0.8% by dry weight of soil were adopted in this research. The properties of the nylon fiber are shown in Table II.

TABLE I. INDEX PROPERTIES OF CH SOIL

Sl. No | Index property            | Value
1      | Specific Gravity          | 2.50
2      | Liquid Limit (%)          | 58
3      | Plastic Limit (%)         | 30
4      | Plasticity Index (%)      | 28
5      | Shrinkage Limit (%)       | 15.55
6      | Free Swell Index (%)      | 50
7      | Grain Size Distribution   | Gravel 17.9%, Sand 15.4%, Fines 66.7%

TABLE II. PROPERTIES OF NYLON FIBER

Sl. No | Property                  | Value
1      | Diameter                  | 0.78 × 10⁻³ m
2      | Water Absorption Capacity | 8.63%


IV. EXPERIMENTAL PROGRAM

A. Shrinkage Limits
The shrinkage limit test on soil samples amended with fibers was done in the same way as for the unamended samples. The effects of random fiber inclusion on the shrinkage characteristics of the soil were evaluated as functions of fiber length and content, as shown in Figures 1, 2 and 3. Prior to fiber inclusion, the shrinkage limit of the unreinforced soil sample was determined; it is also shown in these figures for comparison with the fibrous samples. It can be observed from Figures 1, 2 and 3 that increasing the fiber content (0.2%, 0.4%, 0.6% and 0.8%) increased the shrinkage limit of the samples. Minimum and maximum shrinkage limit values of 15.55% and 28.86% were measured for the unreinforced sample and for the sample reinforced with 0.8% fibers of 25 mm length, respectively. The other shrinkage characteristics, namely shrinkage index, shrinkage ratio and volumetric shrinkage, were also determined and the changes in their values observed.

V. TEST RESULTS

A. Shrinkage Limits
Variations of the shrinkage limit as a function of fiber content and length are shown in Fig. 1, which shows that increasing fiber content and length increased the shrinkage limit of the samples. The shrinkage limit determined for the unreinforced sample was 15.55%; this increased to 28.86% for the sample reinforced with 0.8% fibers of 25 mm length. This is because samples reinforced with randomly included fibers experienced smaller volumetric changes due to desiccation. The increase in shrinkage limit indicates that longer fibers, having greater surface contact with the soil, offered greater resistance to volume change on desiccation. It can be said that random fiber inclusion improved the soil tensile strength very effectively, thus resisting shrinkage on desiccation.

B. Shrinkage Index Values
The shrinkage index values decreased as the fiber content and fiber length increased. For the unreinforced soil the value was 14.55%. The minimum value, 1.14%, was obtained at 25 mm length and 0.8% fiber content. The variation of the shrinkage index values is shown in Fig. 2.

C. Shrinkage Ratio Values
The shrinkage ratio values also decreased as the length and fiber content increased. For the unreinforced soil the value was 1.66. The minimum value, 1.45, was obtained at 25 mm length and 0.8% fiber content. The variation in the values is shown in Fig. 3.

D. Volumetric Shrinkage Values
The volumetric shrinkage values decreased as the fiber content and fiber length increased. For the unreinforced soil the volumetric shrinkage was 69.95%. The minimum value, 60.89%, was obtained at 25 mm length and 0.8% fiber content. The variation is shown in Fig. 4.
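To make these quantities concrete, the following is a minimal sketch of the standard shrinkage computations in the spirit of IS:2720 (Part VI), assuming wet-pat and dry-pat measurements; the shrinkage index is taken here as plastic limit minus shrinkage limit, which is one common definition, and all names are illustrative.

```python
def shrinkage_parameters(w, V, Vd, Md, w1, plastic_limit, rho_w=1.0):
    """Standard shrinkage computations (IS:2720 Part VI style).
    w  : water content of the wet soil pat (%)
    V  : volume of the wet pat (cm^3);  Vd : volume of the dry pat (cm^3)
    Md : dry mass of the pat (g);       w1 : reference water content (%)
    rho_w : density of water (g/cm^3)"""
    SL = w - 100.0 * (V - Vd) * rho_w / Md   # shrinkage limit (%)
    SR = Md / (Vd * rho_w)                   # shrinkage ratio
    VS = (w1 - SL) * SR                      # volumetric shrinkage (%)
    SI = plastic_limit - SL                  # shrinkage index (assumed PL - SL)
    return SL, SR, VS, SI
```

With the unreinforced values of Table III (SL = 15.55%, SR = 1.66) and w1 taken at the liquid limit of 58%, VS evaluates to about 70%, of the same order as the reported 69.95%.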

TABLE III. SHRINKAGE CHARACTERISTICS OF CH SOIL

Sl. No | Shrinkage characteristic | Value
1      | Shrinkage Limit          | 15.55%
2      | Shrinkage Index          | 14.55%
3      | Shrinkage Ratio          | 1.66
4      | Volumetric Shrinkage     | 69.95%


Figure 1. Shrinkage limits of fiber-amended CH soil

Figure 2. Shrinkage index values of fiber-amended CH soil

Figure 3. Shrinkage ratio values of fiber-amended CH soil

Figure 4. Volumetric shrinkage values of fiber-amended CH soil


VI. CONCLUSIONS

In the current study, the inclusion of randomly distributed Nylon fibers as reinforcing material affected the shrinkage characteristics of the CH soil investigated. By analyzing the experimental results, the following conclusions were made:

Preliminary investigations showed that there is a maximum fiber content and length that can be used because of workability problems making uniform mixing of fibers with soil very difficult. In this investigation the maximum fiber content and length determined were 0.8% and 25mm respectively.

Shrinkage limit of the CH soil reinforced with fibers was significantly increased as a result of increasing the fiber content and length.

Shrinkage index values also decreased as the length and percentage of fiber increased. The shrinkage ratio of the soil likewise decreased with increasing length and percentage of fiber. Fiber reinforcement significantly reduced the volumetric shrinkage of the CH soil.

The overall effects of random fiber inclusion on clays observed here suggest potential applications of fiber-reinforced soils in shallow foundations, embankments over soft soils, liners, covers and other earthworks that may suffer excess deformations.

REFERENCES

[1] Chegenizadeh, A., and Nikraz, H., (2011), “Investigation on shrinkage behaviour of Kaolin clay”, Unsaturated Soils: Theory and Practice, 663-666.
[2] IS: 2720 (Part XL) 1977, “Determination of Free Swell Index of Soils”.
[3] IS: 2720 (Part IV) 1985, “Grain Size Analysis”.
[4] IS: 2720 (Part V) 1985, “Determination of Liquid and Plastic Limit”.
[5] IS: 2720 (Part VI) 1972, “Determination of Shrinkage Factors”.
[6] IS: 2720 (Part II) 1973, “Determination of Water Content”.
[7] Southen, J. M., and Rowe, R. K. (2005), “Laboratory Investigation of Geosynthetic Clay Liner Desiccation in a Composite Liner Subjected to Thermal Gradients”, Journal of Geotechnical and Geoenvironmental Engineering, ASCE, 131(7), 925-935.
[8] Zhou, Y., and Rowe, R. K., (2005), “Modeling of Clay Liner Desiccation”, International Journal of Geomechanics, ASCE, 5(1), 1-9.
[9] Ziegler, S., et al. (1998), “Effect of Short Polymeric Fibers on Crack Development in Clays”, Soils and Foundations, 38(1), 247-253.


Secure Authentication using Hybrid Graphical

Passwords

1Shalaka Jadhav and 2Abhay Kolhe 1, 2Computer Department, MPSTME, Narsee Monjee Institute Of Management Studies, Vile Parle(W)

Mumbai, Maharashtra 400056, India

Abstract— Passwords are used to restrict access to data. A user can gain access to a password-protected file only by entering the correct password. Generally users prefer short passwords or ones which are easy to remember. Such passwords save time while logging in but are prone to various kinds of attacks; if an attacker is able to break the password, he gains unauthorised access to someone's private data. To solve this problem, researchers have come up with various techniques to strengthen the authentication system, one of which is based on graphical passwords. Graphical password techniques have been introduced as an alternative to conventional textual password techniques. The password technique presented in this paper marks a transition from textual passwords to graphical passwords: simple textual characters are treated as graphical forms. Users need to understand the graphical password system and the password entry mechanism.
Index Terms— Graphical Passwords, Brute Force, Shoulder Surfing, Memory Ability, Session.

I. INTRODUCTION

Authentication is required to secure a system, and usually passwords are used for authentication. Since textual passwords are prone to various kinds of attacks, graphical passwords are used instead. Researchers have introduced many graphical password schemes which enhance security, are easy to remember and take minimal time to log in. Haichang Gao, Xiyang Liu & Ruyi Dai (2009) stated that it is well known that people can memorise pictorial information better than textual information; if passwords are created out of pictures, the burden of long, complex textual passwords is reduced. Some pure graphical passwords are also vulnerable to shoulder surfing attacks. To prevent any breach of security, hybrid graphical passwords have been introduced. These retain the characteristics of textual passwords and introduce new graphical features. In these graphical password schemes the passwords are generated from an N x N grid, where the value of “N” can be decided by the user. This grid is filled with alphanumeric and special characters. Using these characters, a pattern has to be visualised; the user simply selects the appropriate regions by clicking on them, as explained by I. Jermyn, A. Mayer, M. Reiter & A. Rubin (1999). Thus a new graphical password is created. To secure the passwords, various encryption techniques or one-way hash functions can be used, as explained


by Wei-Chi Ku & Maw-Jinn Tsaur (2005). In this way the computational load can be reduced. Care has to be taken to maintain the integrity of the data, and any attempt at unauthorised modification should be detected by the server. The rest of the paper is organised as follows: related work is described in Section II, the proposed system is explained in Section III, analysis of the proposed system is presented in Section IV, the conclusion is given in Section V and references are listed in Section VI.

II. RELATED WORK

This section deals with the various existing graphical password strategies. Researchers have introduced both pure graphical password schemes and hybrid graphical password schemes. Zhao and Li (2007) have described a triangle-based graphical password scheme. It generates a randomised N x N grid consisting of the alphabets A-Z and a-z, the numbers 0-9 and special characters. The user selects a primary textual password. Using this password on the randomised grid, a new graphical password is generated. The user has to consider every three consecutive characters of his textual password and visualise a triangle on the randomised grid. Once the virtual triangle is formed, the user has to click inside that triangle or enter a character present inside it. An example is shown in Fig. 1. This method has a few limitations: if the three characters fall in the same row or column, the triangle cannot be formed and thus the graphical password cannot be generated.

Figure 1: Triangle based password generation using three consecutive characters from primary password

Rao and Yalamanchili (2012) have described a rectangle based method. It has a similar randomised N x N grid consisting of alphabets, numbers and special characters. The grid is divided into quadrants. The user has to consider pairs of consecutive characters of the textual password and locate the pair on the grid and find its mirror coordinate characters across the axes. An example is shown in Fig. 2. Thus four characters forming a closed figure will be visualised. The user has to click inside this closed figure. After repeating these steps for the entire textual password a new graphical password will be generated.

Figure 2: Rectangle based password generation using pairs of characters from primary password


Gao, Chen and Wang (2008) have described a graphical password strategy that not only enhances the strength of the password but also removes the restrictions on the user. It uses the neighbourhood grid concept. Every stroke would be represented by the number given in the neighbourhood grid. Pen-up and pen-down actions are represented by ‘5’. After following every stroke a coded string is generated. This string is the graphical password. A similar sketching concept has been used by Yuxin Meng (2012), where initially the user has to select an image from a pool of images. The next step involves password sketching on a grid. The stroke sequence to be followed is shown in Fig. 3.

Figure 3: Password generation using neighbourhood grid concept

Zheng and Liu (2009) have described a stroke based textual scheme. The user has to select a stroke and sketch it on an N x N grid. This grid will be filled with a set of characters. A vector will be generated containing those characters covered by the stroke. This vector is the generated graphical password. An example is shown in Fig. 4.

Figure 4: Stroke based password generation

The vector generated for the grid used in Fig. 4 will be [1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0].

III. PROPOSED SYSTEM

A. Description
This section describes a hybrid graphical password scheme. It works on an N x N grid consisting of alphabets, numbers and special characters. In this technique the user selects a primary textual password. The basic idea of using graphical passwords is that the user never has to enter his primary password: the user has to visualise accurate patterns and enter the new graphical password accordingly, so the primary password remains secure. To make the password more secure, the grid is randomised. This ensures that the characters are always placed in different cells. If the grid itself keeps changing at every run, the generated graphical password also keeps changing, so even if an attacker is able to capture the entered password it is of no use in subsequent login sessions. This technique uses an N x N grid as shown in Fig. 5. Depending on the value of ‘N’, the set of password icons is decided. In the grid shown in Fig. 5 the diagonals are left blank; these blank diagonals act as the dividers of the four regions.

If ‘N’ is odd:

Total no. of password icons = (N − 1)²     (1)

If ‘N’ is even:

Total no. of password icons = (N − 1)² − 1     (2)


Figure 5: Password Grid for the proposed system with diagonals as separators

B. Password Entry Mechanism
The graphical password is entered in ‘m’ rounds, where ‘m’ is the length of the user's primary textual password. For every character in the primary password, four inputs are made. If the character is found in a region then ‘Y’ or ‘y’ is entered; if the character is not present in a region then ‘N’ or ‘n’ is entered. We begin with the first character of the primary password and check in which region it lies. If it lies in the right region, the four inputs to be made are “NYNN” or “nynn”. The order followed is clockwise, starting from the top region. After one round is completed, the grid is reset, giving a new randomised grid. This is done to ensure that an attacker is unable to keep track of the password characters. The procedure is carried out similarly for all the characters of the primary password. Once all ‘m’ rounds are done we have a matrix of dimension ‘m’ by 4. This matrix is compared with the correct input sequence; if they match, the user is granted access.

Example N = 5

Set of Password Icons = {‘A’, ‘E’, ‘I’, ‘O’, ‘U’, ‘0’, ‘1’, ‘2’, ‘3’, ‘4’, ‘5’, ‘6’, ‘7’, ‘8’, ‘9’, ‘#’}

Primary Textual Password = “A15#E”

Since the length of textual password = 5, there will be five rounds as shown in Fig. 6.

m = 5

Figure 6: Rounds of password set procedure, panels (a)-(e)


The password matrix should be as shown below:

Figure 7: Password Matrix

If the user enters the correct sequence authentication would be successful. If the correct sequence of inputs is not entered, then the user will not be granted access. Here the count as well as the position of input is important.
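A minimal simulation sketch of the grid construction and one honest login session for the example above; all names are illustrative, and the diagonal-based region test assumes the four triangular regions described in Section III-A.

```python
import random
import string

ICONS = list("AEIOU" + string.digits + "#")      # 16 icons for N = 5
REGIONS = ("top", "right", "bottom", "left")      # clockwise from the top

def random_grid(n, icons):
    """Place the icons randomly on an n x n grid, leaving both diagonals
    blank; the diagonals act as the dividers of the four regions."""
    cells = [(r, c) for r in range(n) for c in range(n)
             if r != c and r + c != n - 1]
    return dict(zip(cells, random.sample(icons, len(cells))))

def region_of(r, c, n):
    """Which of the four triangular regions a cell belongs to."""
    if r < c:
        return "top" if r + c < n - 1 else "right"
    return "left" if r + c < n - 1 else "bottom"

def expected_row(grid, n, ch):
    """The four Y/N entries an honest user makes for character `ch`."""
    (r, c) = next(cell for cell, icon in grid.items() if icon == ch)
    reg = region_of(r, c, n)
    return ["Y" if reg == name else "N" for name in REGIONS]

# Simulated login for the primary password "A15#E" (m = 5 rounds); the
# grid is re-randomised after every round, so the m x 4 matrix differs
# between sessions.
secret, n = "A15#E", 5
matrix = []
for ch in secret:
    grid = random_grid(n, ICONS)     # the grid shown to the user this round
    matrix.append(expected_row(grid, n, ch))
print(matrix)                        # the m x 4 password matrix
```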

C. Possible Improvements
An alternative can be considered for entering the password: a ‘y’ input can be replaced by a left mouse click and an ‘n’ input by a right mouse click. This makes it easier to input the password and also takes less time, and it becomes more difficult for an attacker to keep track of the mouse clicks. Password entry by mouse clicks is faster than entry by keyboard, and it is also more difficult to capture for any possible attack. The user simply clicks on the appropriate regions with the corresponding mouse button: the left mouse button for a character match and the right mouse button for a character mismatch.

IV. ANALYSIS

A. Usability
Since this method is a transition from textual passwords to hybrid graphical passwords, it is user friendly. It involves conventional characters that users are familiar with. Users only need to remember the sequence of password entry, i.e. clockwise, starting from the top region.

B. Time taken to login Once the user understands the idea behind this method, it will become easier to simply enter the password matrix. Depending on the length of primary password it would take 15 – 20 seconds to be authenticated. Password entry by mouse clicks is faster than that by keyboard. Thus it will take less time to login.

C. Memory ability This method does not involve any complex procedure. The user has to remember the sequence of password entry. Then he just needs to click ‘Y’ or ‘N’ to complete the password matrix or click left or right mouse button accordingly.

D. Resistance to Attacks
A randomised grid is used to generate the password. This ensures that all characters appear in different cells in subsequent runs, making it nearly impossible to predict the password by a brute force attack. Since the grid resets after every round, the attacker cannot find any intersecting region where a character might lie. It also ensures that a password, once generated, will not be the same for the next session, which prevents shoulder surfing attacks.
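As a rough, illustrative quantification of this resistance (an added sketch, not from the paper): each round's correct row contains exactly one 'Y' among the four region flags, so a blind guess succeeds with probability 1/4 per round.

```python
def blind_guess_success(m):
    """Probability that m rounds of blind guessing all succeed, given
    that each correct row is one of the four one-'Y' patterns."""
    return 4.0 ** (-m)

print(blind_guess_success(5))   # 1/1024, about 0.001 for a 5-character password
```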

V. CONCLUSION

Thus a secure and user-friendly technique for authentication using hybrid graphical passwords has been proposed. It enhances security and at the same time eases the restrictions on users. Users do not have to worry about anyone spying while they log in, as the passwords differ for every login session. The proposed technique is user friendly, easy to understand and can be adopted on existing systems. It takes less time to log in and is able to resist various kinds of attacks, such as brute force, shoulder surfing and random click attacks. Since the proposed technique is a transition from conventional textual passwords to hybrid graphical passwords, users will find it easier to adapt to such a


system. The randomised nature of the password grid ensures that an attacker will not be able to capture the original password by any form of spyware, and any random click attack will be blocked by the grid reset feature.

REFERENCES

[1] Haichang Gao, X. Guo, X. Chen, L. Wang & X. Liu (2008). “Yet Another Graphical Password Strategy”, Annual Computer Security Applications Conference, 121-129.

[2] Haichang Gao, Xiyang Liu & Ruyi Dai (2009). “Analysis and Evaluation of Colour Login Graphical Password Scheme”, Fifth International Conference on Image and Graphics, 722-727.

[3] I. Jermyn, A. Mayer, M. Reiter & A. Rubin (1999). “The Design and Analysis of Graphical Passwords”, Proceedings of the 8th USENIX Security Symposium, Volume: 8, 1-14.

[4] Wei-Chi Ku & Maw-Jinn Tsaur (2005). “A Remote User Authentication Scheme using Strong Graphical Passwords”, Proceedings of IEEE Conference on Local Computer Networks, 351-357.

[5] Yuxin Meng (2012). “Designing Click-Draw based Graphical Password Scheme for Better Authentication”, IEEE Seventh International Conference on Networking, Architecture and Storage, 39-48.

[6] M. Kameswara Rao & Sushma Yalamanchili (2012). “Novel Shoulder-Surfing Resistant Authentication Scheme using Text-Graphical Passwords”, International Journal of Information and Network Security, Volume: 1, No. 3, 163-170.

[7] Huanyu Zhao & Xiaolin Li (2007). “A Scalable Shoulder-Surfing Resistant Textual-Graphical Password Authentication Scheme”, Advanced Information Networking and Applications Workshops, Volume: 2, 467-472.

[8] Z. Zheng, X. Liu, L. Yin & Z. Liu (2009). “A Stroke Based Textual Password Authentication Scheme”, First International Workshop on Education Technology and Computer Science, Volume: 3, 90-95.


Interactive Image Segmentation based on Seeded

Region Growing and Energy Optimization Algorithms

1K. R. Rasitha, 2Sherin Thomas and 3P. M. Vijaykumar
1, 2Dept. of ECE, Anna University, Maharaja Prithvi Engineering College, Avinashi, Tamilnadu, India
3Assistant Professor, Dept. of ECE, Maharaja Prithvi Engineering College, Avinashi, Tamilnadu, India

Abstract—The proposed system develops a novel image superpixel segmentation approach using an energy optimization algorithm. This algorithm with self-loops has the merit of segmenting weak boundaries using new global probability maps and a commute-time strategy. However, it is difficult to find and track the exact contours of an object in complex images using the energy optimization algorithm alone. To mitigate this problem and improve segmentation quality, this paper presents a seeded region growing (SRG) algorithm used together with the energy optimization algorithm. We discuss the popular seeded region growing methodology used for segmenting weak boundaries. The method uses centroid calculation of the different regions appearing in an image and can handle almost every shape appearing in the image. The work is divided into two stages: in the first stage the region of interest is calculated and the seed is placed at its centroid; in the second stage the region grows from the initial seed until the homogeneity criterion is satisfied. The experimental results demonstrate that this method achieves better performance than previous approaches.
Index Terms— Image Segmentation, Region Growing, Seed Placement, Commute Time, Optimization, Texture.

I. INTRODUCTION

Image segmentation is the division of an image into regions or categories which correspond to different objects or parts of objects. The purpose of dividing an image is to analyse each object present in the image and to extract some high-level information. Most segmentation techniques are either edge-based or region-based. An edge or linear feature is manifested as an abrupt change or a discontinuity in the digital number of pixels along a certain direction in an image. Edge-based segmentation locates the pixels in the image that correspond to the boundaries of the objects seen in the image. Region-based segmentation is the partitioning of an image into similar areas of connected pixels through the application of similarity criteria among candidate sets of pixels. Application fields of image segmentation are security systems, object recognition, computer graphics, medical imaging, satellite images, etc. Pixels are the basic building blocks of an image. Superpixels are commonly defined by contracting and grouping uniform pixels in an image. The main merit of superpixels is to provide a meaningful representation of an image: they reduce the number of image primitives and improve segmentation efficiency. The proposed system develops a superpixel image segmentation approach using an energy optimization algorithm. This approach consists of two main steps. The first step is to obtain the superpixels using the LRW algorithm with initial seed points. In the second step, the initial superpixels are optimized to improve the


performance. The energy optimization includes two terms: the data term makes the superpixels more homogeneous and of regular size by relocating the seed positions, and the smooth term makes the superpixels adhere better to the texture edges by dividing large irregular superpixels into small regular ones. The LRW algorithm is then performed to obtain better superpixel boundaries with the new seed positions. The superpixel optimization and LRW steps are executed iteratively to achieve the final result. It is an efficient algorithm for obtaining better image segmentation. But in the case of complex images it is difficult to find and track the exact contours of an object, especially objects in images received from satellites. In such cases, to improve efficiency and enhance segmentation quality, the seeded region growing algorithm is added alongside the energy optimization algorithm. With this enhancement, the merit of segmenting weak boundaries and complicated texture regions can be achieved. The seeded region growing algorithm consists of two stages. In the first stage the area of interest (a specific part of the image) is calculated based on the background and object properties of the image; the generated region is used to find the centroid at which the seed is placed. In the second stage the region grows from the initial seed placed in the first stage. The growth of the region depends on the intensity values of the neighbouring pixels as well as a threshold value. If the intensity of the eight neighbouring pixels (left, right, up, down, top right, bottom right, top left, bottom left) is similar and lies within the given threshold, the region grows. Previously visited pixels are also checked: if a pixel has already been grown, i.e. is part of the region, it is not visited again even if it appears as a neighbouring pixel. This reduces computational overhead. As the region grows in the second stage, a stopping gradient is needed to limit the growth to the area of interest. This is achieved by calculating the intensity values of the neighbours: if the intensity of the neighbouring pixels changes abruptly, the region stops growing at that point. The finally grown region is the required segmented region.

II. PROPOSED SYSTEM

The proposed system is a superpixel image segmentation based on an energy optimization algorithm. The method begins by initializing the seed positions, and the initial superpixels are iteratively optimized by the energy function, which is defined on the commute time and a texture measurement. The commute time in this algorithm computes the return time from the seeds to the pixels. The labelled boundaries of the superpixels are obtained from the commute time as follows:

$$R(x_i) = \arg\min_l CT(c_l, x_i) = \arg\max_l f_l^k(x_i) \tag{1}$$

where $c_l$ denotes the centre of the superpixel, and the label $l$ is assigned to each pixel $x_i$ to obtain the boundaries of the superpixels. The energy optimization algorithm can find the optimal path from the seed to the pixel; the label of the seed with the minimal commute time is then assigned to the corresponding pixel as the final superpixel label. The performance of the superpixels is improved with the following energy function:

$$E = \sum_{l} \left(\mathrm{Area}(S_l) - \overline{\mathrm{Area}(S)}\right)^2 + \sum_{x} W_x\, CT(c_l, x)^2 \tag{2}$$

Figure 1. Illustration of computing the commute time

where the first term is the data term and the second term the smooth term. The data term makes the texture information of the image distribute uniformly over the superpixels, which produces more homogeneous superpixels. The smooth term makes the boundaries of the superpixels more consistent with the object boundaries in the image. When the commute time $CT(c_l, x)$ between seed point $c_l$ and pixel $x$ is small, $W_x$


(a penalty function) takes a large value. This makes the optimized superpixel more compact and more homogeneous in texture regions. But in the case of complex images it is difficult to obtain the correct boundaries.
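As a schematic illustration of how Eqs. (1) and (2) could be evaluated, assuming an L x P array of commute times has already been produced by the LRW step; array names are illustrative, not the authors' implementation.

```python
import numpy as np

def assign_labels(CT):
    """Eq. (1): each pixel takes the label of the seed with the minimal
    commute time (equivalently, the maximal labelling probability).
    CT: L seeds x P pixels."""
    return np.argmin(CT, axis=0)

def energy(areas, ct_to_own_seed, W):
    """Eq. (2): the data term keeps super-pixel areas close to their mean;
    the smooth term penalises large weighted commute times from each
    pixel to the seed of its own super pixel."""
    data = np.sum((areas - areas.mean()) ** 2)
    smooth = np.sum(W * ct_to_own_seed ** 2)
    return data + smooth
```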

Figure 2. Input image with user seeds

Figure 3. Probability map obtained by the energy optimization method

Figure 4. Segmentation result obtained by the energy optimization method

Figure 2 shows the input image selected for segmentation, where the green line is used to select the foreground and the blue line is used to select the background. Figures 3 and 4 show the probability map and the segmentation result obtained by the energy optimization algorithm alone; they show that in the complicated regions it is difficult to get a proper segmentation.

III. SEEDED REGION GROWING METHOD

Seeded region growing is a simple and robust method of segmentation which is rapid and free of tuning parameters. User control over the high-level knowledge of image components in the seed selection process makes it a good choice for easy implementation and for application to larger datasets. Seeded region growing is based on the conventional growing postulate of similarity of pixels within regions. SRG is controlled by choosing a (usually small) number of pixels known as seeds. This form of control, and the corresponding result, is readily conceptualized, which allows relatively unskilled users to achieve good segmentation on their first attempt.

IV. REGION GROWING PROCESS

The goal of region growing is to map the input image data into sets of connected pixels, called regions, according to a prescribed criterion which generally examines the properties of local groups of pixels. Growing starts from a pixel in the proximity of the seed point initially selected by the user. The pixel can be chosen based on either its distance from the seed points or the statistical properties of its neighbourhood. Each of the 4 or 8 neighbours of that pixel is then visited to determine whether it belongs to the same region. The growing expands further by visiting the neighbours of each of these 4 or 8 neighbouring pixels. This recursive process continues until either some termination criterion is met or all pixels in the image have been examined. The result is a set of connected pixels determined to be located within the region of interest.


Advantages:
1. The region growing method can correctly separate the regions that have the same properties we define.
2. It can provide good segmentation results for original images that have clear edges.
3. The concept is simple.
4. We can determine the seed points and the criteria we want to apply.
5. Multiple criteria can be chosen at the same time.
6. It performs well with respect to noise.

V. SEEDED REGION GROWING ALGORITHM

The seeded region growing approach to image segmentation segments an image into regions with respect to a set of q seeds. Given the seeds S1, S2, ..., Sq, each step of SRG involves identifying one additional pixel for one of these seed sets. The initial seeds are further replaced by the centroids of the generated homogeneous regions R1, R2, ..., Rq as additional pixels are incorporated step by step. Pixels in the same region are labelled by the same symbol and pixels in different regions by different symbols; all labelled pixels are called allocated pixels and the others unallocated pixels. The algorithm is presented as follows (a code sketch is given after the list):
1. Select seed pixels within the image.
2. From each seed pixel grow a region:
2.1 Set the region prototype to be the seed pixel;
2.2 Calculate the similarity between the region prototype and the candidate pixel;
2.3 Calculate the similarity between the candidate and its nearest neighbour in the region;
2.4 Include the candidate pixel if both similarity measures are higher than experimentally set thresholds;
2.5 Update the region prototype by calculating the new principal component;
2.6 Go to the next pixel to be examined.
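A minimal sketch of the growing stage, assuming a grayscale image and using the running region mean as the homogeneity measure in place of the principal-component prototype of step 2.5; the threshold test doubles as the stopping gradient described earlier, and all names are illustrative.

```python
import numpy as np
from collections import deque

def seeded_region_grow(img, seed, threshold):
    """Grow a region from `seed` = (row, col): an 8-connected neighbour
    joins the region when its intensity is within `threshold` of the
    current region mean.  Visited pixels are never re-examined, which
    keeps the computational overhead low."""
    h, w = img.shape
    visited = np.zeros((h, w), dtype=bool)
    region = np.zeros((h, w), dtype=bool)
    visited[seed] = region[seed] = True
    total, count = float(img[seed]), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < h and 0 <= cc < w \
                        and not visited[rr, cc]:
                    visited[rr, cc] = True        # never revisit a pixel
                    if abs(float(img[rr, cc]) - total / count) <= threshold:
                        region[rr, cc] = True     # pixel joins the region
                        total += float(img[rr, cc])
                        count += 1
                        queue.append((rr, cc))    # continue growing from it
    return region
```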

VI. INTERACTIVE IMAGE SEGMENTATION

Image segmentation is the process that partitions an image into regions. Although automated image segmentation has been widely studied, it is still difficult to segment the region of interest in arbitrary images. Automatic segmentation methods are not generic and require some form of intervention to correct anomalies in the segmentation. They are still far from human segmentation performance and face several problems, such as finding faint object boundaries and separating the object from a complicated background in natural images. To solve these problems, an interactive segmentation method is often preferred; manual guidance thus remains important. Interactive image segmentation is the process of extracting an object in an image with additional hints from the user; it aims to separate an object of interest from the rest of an image. To shorten the processing time and decrease the effort required of users, many interactive image segmentation methods based on various technologies have been proposed. One such method is interactive image segmentation based on the region growing algorithm, in which the user views the image and chooses the seed points based on personal judgement. Figure 6 shows the segmentation result obtained after adding the enhancement to the energy optimization method, for the same input image as shown in Figure 2; it gives a better result, with proper segmentation in the complex regions.

VII. CONCLUSION

In this paper we have presented a novel image segmentation approach using energy optimization and seeded region growing algorithms. The energy optimization algorithm is first run to obtain an initial result by placing the seed positions on the input image, and the labels of the superpixels are then optimized to improve the performance. Since it is difficult to track objects in complex images, the seeded region growing algorithm is added to the energy optimization algorithm to mitigate this problem and improve the performance. It consists of two stages: in stage 1, the area of interest (a specific part of the image) is calculated based


on the background and object properties of the image. The generated region is used to find the centroid at which the seed is placed. In stage 2, the region grows from the initial seed placed in stage 1. The growth of the region depends on the intensity values of the neighbouring pixels as well as a threshold value. If the intensity of the eight neighbouring pixels is similar and lies within the given threshold, the region grows. Previously visited pixels are also checked: a pixel that is already part of the region is not visited again even if it appears as a neighbour, which reduces computational overhead. As the region grows in stage 2, a stopping gradient limits the growth to the area of interest. The algorithm is capable of obtaining good boundary adherence in complicated texture and weak boundary regions, and it improves the quality of the segmentation.

Figure 5. Flow chart of region growing algorithm

Figure 6. Segmentation obtained by adding SRG along with energy optimization

ACKNOWLEDGMENTS

I extend my deep sense of gratitude and sincere thanks to Ms. S. Gowsalya, M.E., (Ph.D.), Head of the Electronics and Communication Department, and Mr. P. M. Vijaykumar, M.E., Assistant Professor, for their support, encouragement and guidance throughout the period of this work.



An Innovative Method to Reduce Power Consumption using Look-Ahead Clock Gating Implemented on Novel Auto-Gated Flip Flops

Roshini Nair

M.Tech, MET’S School Of Engineering, Kerala, India

Abstract—Clock gating is extremely useful for decreasing the power consumed by digital circuits. This paper proposes a new method of look-ahead clock gating. The existing systems for clock gating are synthesis-based clock gating, data-driven clock gating and clock gating on auto-gated flip-flops, but all of these techniques have a number of disadvantages. This work eliminates the drawbacks of the existing systems with Look-Ahead Clock Gating (LACG), which combines all three. LACG computes the clock enabling signal of each FF one cycle ahead of time, and a DETFF and a dual pulse generator are further used to increase performance. Index Terms— clock gating, clock networks, dynamic power consumption.

I. INTRODUCTION

The major source of dynamic power consumption in computing and consumer electronics products is the circuit's clock signal, which is responsible for about 30% to 70% of the total switching power consumption [2]. Different methods to reduce dynamic power have been implemented, and clock gating is predominant among them. Normally, when a logic unit is clocked, its underlying sequential elements receive the clock signal regardless of whether their data will toggle in the next cycle. Gated clocking is a popular method for reducing power dissipation in synchronous digital systems: the clock is not delivered to a flip-flop when the circuit is idle, which considerably reduces power consumption. Clock gating is employed at all levels: system architecture, block design, logic design and gates. The most popular approach is synthesis-based; it derives the clock enabling signals from the logic of the underlying system, but it leaves the majority of the clock pulses driving the flip-flops (FFs) redundant. To eliminate this redundancy, data-driven clock gating was proposed, in which the clock signal driving a FF is disabled (gated) when the FF's state is not changing in the next clock cycle; its implementation, however, is extremely complex. A third method, auto-gated FFs (AGFF), is simple but produces comparatively small power savings. To avoid all these drawbacks, the new method of look-ahead clock gating is used. LACG finds the clock enabling signal of each FF one cycle ahead of time and takes AGFF a leap forward. The main objective of LACG is to reduce dynamic power consumption while remaining considerably simpler; to improve the technique further, a novel method of joint gating is used along with it.

Grenze ID: 01.GIJET.1.2.527 © Grenze Scientific Society, 2015

II. CLOCK GATING

Clock gating is a popular technique used in many synchronous circuits for reducing dynamic power dissipation. Clock gating saves power by adding logic to a circuit to prune the clock tree: pruning the clock disables portions of the circuitry so that the flip-flops in them do not have to switch states. Switching states consumes power; when a flip-flop is not being switched, its switching power consumption goes to zero and only leakage currents are incurred. Clock gating works by taking the enable conditions attached to registers and using them to gate the clocks. It is therefore imperative that a design contain these enable conditions in order to use and benefit from clock gating. A behavioural sketch of the effect is given below.
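To make the saving concrete, the short Python sketch below (a behavioural model, not hardware) counts the clock events delivered to one register over a stream of cycles, with and without gating by its enable condition. The cycle count, the 10% toggling probability and the function name are illustrative assumptions.

import random

def clock_events(enables, gated):
    """Count clock events delivered to one register over a cycle stream.
    `enables` is the per-cycle enable condition attached to the register;
    with gating, the clock pulse is pruned whenever the enable is low."""
    return sum(1 for en in enables if en or not gated)

random.seed(0)
cycles = 10_000
enables = [random.random() < 0.1 for _ in range(cycles)]  # register toggles rarely
ungated = clock_events(enables, gated=False)   # every cycle clocks the FF
gated = clock_events(enables, gated=True)      # only enabled cycles clock it
print(f"ungated: {ungated} clock events, gated: {gated} "
      f"({100 * (1 - gated / ungated):.1f}% of clock events pruned)")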

Figure 1 Combinational Clock-Gating

III. EXISTING SYSTEM

Clock gating is the predominant technique for power saving in synchronous circuits, reducing dynamic power dissipation by pruning the clock tree as described above. There are mainly three existing methods of clock gating: the synthesis-based method, the data-driven method, and the method using auto-gated flip-flops.

A. Synthesis-Based Method
Synthesis-based gating is the most popular method of clock gating. It derives the clock enabling signals from the logic of the underlying system. Normally, when a logic unit is clocked, its underlying sequential elements always receive the clock signal, whether or not they toggle in the next cycle. In clock gating, the clock signals are ANDed with predefined enabling signals. Clock gating can be applied at all levels: system level, block level, logic level, and gates [2], [3]. Different methods that take advantage of this technique are described in [5]-[7]; these are known as synthesis-based methods, and synthesis-based clock gating is the technique most commonly used by EDA tools [8]. Even so, the utilization of the clock pulses left after the employment of synthesis-based gating, measured by the data-to-clock toggling ratio, can still be very low. The enabling signals are estimated using a mix of logic synthesis and manual definitions, and the blocks are arranged in increasing order of their data-to-clock activity ratio. Due to the rapid increase in design complexity, computer-aided design tools are widely used; even though design productivity is increasing, these tools require a long chain of automatic synthesis algorithms. Unfortunately, this automation gives rise to a number of unnecessary clock toggling problems, which further increases the number of redundant clock signals at flip-flops (FFs). The synthesis-based method leaves the majority of the clock pulses driving the flip-flops redundant, so other methods have to be used to overcome this disadvantage.

B. Data-Driven Method
The data-driven method stops the majority of the redundant clock pulses and produces higher power savings. Here, the clock signal driving a FF is disabled when the FF's state is not subject to change in the next clock cycle [9]. In an effort to decrease the area overhead of the gating logic, a number of FFs are controlled by the same clock pulse, generated by ORing the enabling signals of the individual FFs [8]. Based on the toggling probability, a model was developed to derive the group size that maximizes the power savings. A comparison of the synthesis-based and data-driven gating methods showed that the latter outperforms for control and arithmetic circuits, while the former outperforms for register-file-based circuits [10].


The data-driven gating method is depicted in Fig 2. A FF detects that its clock pulse can be disabled in the next cycle by XORing its output with the present input data. The outputs of the XOR gates are then ORed to produce a joint gating signal for the FFs, which is latched to avoid glitches. The combination of a latch with an AND gate is called an Integrated Clock Gate (ICG) [5]. It is advantageous to group FFs whose switching activity patterns are highly similar. The work in [7] dealt with the questions of which FFs should be placed in a group to maximize the power savings, and how to identify those groups. A behavioural sketch of the joint enabling signal follows.
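The gating decision itself is simple to state in code. The sketch below is a behavioural model rather than gate-level hardware: it derives the joint enable for a group of FFs by XORing each FF's current output with its next-cycle data and ORing the results. The grouping policy and the latch that suppresses glitches are abstracted away.

def data_driven_enable(q_outputs, d_inputs):
    """Joint clock-enable for a group of flip-flops (behavioural model):
    each FF contributes Q XOR D; the OR of these gates the shared clock.
    The clock pulse is suppressed when no FF in the group will toggle."""
    return any(q != d for q, d in zip(q_outputs, d_inputs))

# The group is clocked only when at least one member changes state:
assert data_driven_enable([0, 1, 0], [0, 1, 1]) is True   # third FF toggles
assert data_driven_enable([0, 1, 0], [0, 1, 0]) is False  # no toggles: gate clock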

Figure 2 Data Driven Clock Gating

The data-driven method has the disadvantage of a very short time window in which the gating circuit can work without trouble: the overall delay of the XOR, OR, latch and AND gates must not exceed the setup time of the FF. These constraints may exclude 5%-10% of the FFs from being gated because they lie on critical paths [11]. Data-driven gating also suffers from a very complex design methodology. To maximize the power savings, the FFs must be grouped such that their toggling activity is highly similar, which requires a number of extensive and expensive simulations. In most cases the target applications are unknown in advance, and for specific applications the number of wasted clock pulses can increase considerably.

C. Auto-Gated Flip-Flops
Auto-gated FFs (AGFF) are simple but yield relatively small power savings; only the slave latches are gated. The basic circuit used for LACG is the auto-gated flip-flop (AGFF) illustrated in Fig 3 [8]. The FF's master latch becomes transparent on the falling edge of the clock, and its output must stabilize no later than a setup time prior to the arrival of the clock's rising edge, when the master latch becomes opaque and the XOR gate indicates whether or not the slave latch should change its state. If it should not, its clock pulse is stopped; otherwise it is passed.

Figure 3 Auto-Gated Flip-Flops

In [8] a significant power reduction was reported for small register-based circuits. AGFF can also be used for general logic, but with major drawbacks: only the slave latches are gated, so half of the clock load remains ungated; serious timing constraints are imposed on FFs residing on critical paths, which prevents their gating; and it yields only small power savings.


To avoid these drawbacks, that is, to decrease the dynamic power consumption, relax the tight timing constraints and save overhead, a new technique was needed.

IV. LOOK-AHEAD CLOCK GATING (LACG)

LACG computes the clock enabling signal of each FF one cycle ahead of time, based on the present-cycle data of those FFs on which it depends. It avoids the tight timing constraints of AGFF and data-driven gating and takes AGFF a leap forward. Similarly to data-driven gating, it is capable of stopping the majority of redundant clock pulses; furthermore, unlike data-driven gating, whose optimization requires knowledge of the FFs' data toggling vectors, LACG is independent of those. LACG uses the XOR output in Fig 4 to generate the clock enabling signals of other FFs in the system whose data depend on that FF. There is a problem, though: the XOR output is valid only during a narrow window [-tsetup, tccq] around the clock rising edge, where tsetup and tccq are the FF's setup time and clock-to-output contamination delay, respectively. After a tccq delay the XOR output is corrupted and eventually turns to zero. To be valid during the entire positive half cycle it must be latched, as shown in Fig 4; Fig 5 shows the symbol of the enhanced AGFF with the XOR output. The power consumed by the new latch can be reduced by gating its clock input clk_g. The working of look-ahead clock gating using auto-gated flip-flops is shown in Fig 4.

Figure 4 Enhanced AGFF Used For LACG

Figure 5 Symbol for enhanced AGFF

A. Working of LACG
Fig 6 illustrates how LACG works. We call FF'' the target and FF' a source. A target FF depends on K > 1 source FFs, and it is required that the logic driving a target FF has no input external to the block. Let X(D'') denote the set of the XOR outputs of the source FFs, and denote by Q(D'') the set of their corresponding outputs. The source FFs can be found by traversing the logic paths from D'' back to Q(D''). Using a FF for gating is a considerable overhead that consumes power of its own; this can be significantly reduced by gating FF''' as shown in Fig 6. Since FF''' is oppositely clocked and its data is sampled at the clock's falling edge, its clock enabling signal X''' must be negated. Also, FF''' is an ordinary FF whose internal XOR gate is connected between D''' and Q'''. A behavioural sketch of the look-ahead computation follows.
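As a behavioural illustration of the look-ahead idea (not the gate-level circuit of Fig 6), the sketch below computes each target FF's clock enable for the next cycle as the OR of the XOR indicators of its source FFs in the present cycle. The dictionary-based fan-in description is an assumption made for the example.

def lacg_enables(xor_indicators, sources_of):
    """xor_indicators[i] is True when source FF i toggles this cycle.
    sources_of maps each target FF to the source FFs its data depends on.
    Returns next-cycle clock enables: a target is clocked one cycle ahead
    only if at least one of its sources toggled in the present cycle."""
    return {target: any(xor_indicators[s] for s in srcs)
            for target, srcs in sources_of.items()}

# Target "t0" depends on sources 0 and 1; "t1" depends on source 2 only.
enables = lacg_enables({0: False, 1: True, 2: False},
                       {"t0": [0, 1], "t1": [2]})
assert enables == {"t0": True, "t1": False}  # t1's clock pulse is gated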


Figure 6 LACG Of General Logic

B. Modeling the Power Savings
While modeling the power savings, independence between FFs should not be assumed; this corresponds to the worst-case condition. The toggling probability and the number of source FFs must be kept in mind, and to determine when power dissipation is actually reduced, the power-saving break-even curve must be analysed.

Figure 7 Power Saving Break even Curve

C. Minimizing the Gating Logic
The gating logic can be shared among several target FFs, which further reduces the overhead, and a logic-sharing model is developed to minimize the gating cost. The idea is illustrated in Fig 8, showing two target FFs, FFi and FFj, with their corresponding OR trees driven by ki and kj source FFs, respectively. A different implementation is shown in Fig 9, where the OR logic is merged and a single gater is used for the two FFs.

Figure 8 Overlapping OR Gate Logic

Figure 9 Merging OR Logic For Joint Gating


V. INNOVATIVE METHOD OF DETFF

The power consumption of a system is a crucial parameter in modern VLSI circuits, especially for low-power applications. To further reduce power consumption, double-edge-triggered flip-flops are used; the proposed DETFF has fewer clocked transistors than existing designs. The most popular synchronous digital circuit elements are edge-triggered flip-flops. The total clock-related power consumption in synchronous VLSI circuits comprises the power consumed in the clock network, in the clock buffers, and in the flip-flops [3]. By using double-edge-triggered flip-flops (DETFFs), the clock frequency can be significantly reduced, ideally cut in half, while preserving the rate of data processing. A simple DETFF is implemented with about 50% more transistors than a traditional single-edge-triggered flip-flop; however, this issue is also being resolved. The basic working of a simple DETFF is shown in Figure 10: D is sampled on both the rising and falling edges of the clock, doubling the data rate per clock cycle. During the rising edge of the clock the upper D flip-flop operates, during the falling edge the lower D flip-flop operates, and the output is produced by the multiplexer accordingly. A behavioural model of this structure is sketched below the figure.

Figure 10 Double Edge Triggered Flip-Flop
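A behavioural Python model of the structure in Figure 10 follows: two paths sampled on opposite clock edges, with the output selected by the clock level. This is a functional sketch of double-edge triggering under assumed initial values, not a transistor-level description of the proposed circuit.

class DETFF:
    """Behavioural double-edge-triggered FF: D is captured on both clock
    edges, and the multiplexer selects the path matching the clock level."""
    def __init__(self):
        self.q_rise = 0   # value captured on the rising edge (upper path)
        self.q_fall = 0   # value captured on the falling edge (lower path)
        self.clk = 0

    def tick(self, clk, d):
        if clk and not self.clk:       # rising edge: upper path samples D
            self.q_rise = d
        elif self.clk and not clk:     # falling edge: lower path samples D
            self.q_fall = d
        self.clk = clk
        return self.q_rise if clk else self.q_fall  # output multiplexer

ff = DETFF()
stream = [(1, 1), (0, 0), (1, 1), (0, 1)]          # (clk, d) per half-cycle
outputs = [ff.tick(clk, d) for clk, d in stream]   # one output per clock edge
print(outputs)  # data moves through on both edges: [1, 0, 1, 1]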

VI. NEW PROPOSED SYSTEM

The conventional structures have a number of drawbacks, such as an increased number of transistors, increased area overhead and complexity. To avoid these, a new structure of DETFF is used. The proposed double-edge-triggered flip-flop (DETFF) design is shown in Figure 11. Its operation is similar to that of the DETFF in Figure 10, but the number of clocked transistors is reduced from 10 to 6 by replacing the transmission gates with n-type pass transistors. As a result, the proposed DETFF in Figure 11 is free from the threshold-voltage-loss problem of pass transistors. Along with this, the feedback network is changed by replacing the p-type pass transistor with an n-type pass transistor, as the area used by an NMOS device is less than that of a PMOS device. Thus the proposed double-edge-triggered flip-flop becomes more efficient in terms of area, power and speed.

Figure 11 Proposed DETFF


A. Dual Pulse Generator
Along with the DETFF, a dual pulse generator is used instead of the plain clock signal. In dual-edge triggering the flip-flop is triggered on both edges of the clock; instead of applying the clock signal directly to the flip-flop, a dual pulse is applied using the dual pulse generator scheme shown in Figure 12, which is integrated with the DETFF. The pulse generator consists of two transmission gates and four inverters, as shown in Figure 12. When clk = 1, the upper TG is ON, the lower TG is OFF and the output pulse = 0.

Figure 12 Dual Pulse Generator Circuit

When clk transits from 1 to 0, the pulse momentarily becomes 1; that is, the output of inverter I3 is '1'. Similarly, when clk = 0 the lower TG is responsible for producing the pulse at the negative edge of the clock.

VII. EXPERIMENTAL RESULTS

The data-driven method, auto-gated flip-flops and LACG have been implemented, and the time period, area and total estimated power have been compared. In the data-driven method the minimum time period is around 2.722 ns, the gate count is 373 and the total estimated power is around 63. In the method using auto-gated flip-flops the required time is lower, around 2.452 ns, the gate count is considerably reduced to 44 and the estimated power is 48; however, this method cannot be implemented practically in large circuits, so it is not preferable and we opt for LACG. In LACG the time period is around 2.245 ns, the gate count is 218 and the total estimated power is 47, the maximum power reduction among the methods. The flip-flops in LACG are therefore replaced by DETFFs, which further reduces the power dissipation by 30-40%.

VIII. CONCLUSION

One of the major sources of dynamic power consumption is the system's clock signal. Earlier, different methods of clock gating were used to reduce the power consumed by the clock signals; the main methods were the synthesis-based method, the data-driven method and clock gating using auto-gated flip-flops, but all of these had certain drawbacks. To avoid them, the look-ahead clock gating method on auto-gated flip-flops was adopted. The gating logic is further optimized by joint gating, which reduces the hardware overhead. The power dissipation and area are further reduced by using DETFFs in place of the D flip-flops in LACG.

REFERENCES

[1] L. Benini, A. Bogliolo, and G. De Micheli, "A survey on design techniques for system-level dynamic power management," IEEE Trans. VLSI Syst., vol. 8, no. 3, Jun. 2000.
[2] M. S. Hosny and W. Yuejian, "Low power clocking strategies in deep submicron technologies," in Proc. IEEE Int. Conf. Integr. Circuit Design Technol. (ICICDT), 2008.
[3] C. Chunhong, K. Changjun, and S. Majid, "Activity-sensitive clock tree construction for low power," in Proc. ISLPED, 2002.
[4] A. Farrahi, C. Chen, A. Srivastava, G. Tellez, and M. Sarrafzadeh, "Activity-driven clock design," IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst., vol. 20, no. 6, 2001.
[5] W. Shen, Y. Cai, X. Hong, and J. Hu, "Activity and register placement aware gated clock network design," in Proc. ISPD, 2008.
[6] S. Wimer and I. Koren, "The optimal fan-out of clock network for power minimization by adaptive gating," IEEE Trans. VLSI Syst., vol. 20, no. 10, Oct. 2012.
[7] S. Wimer and I. Koren, "Design flow for flip-flop grouping in data-driven clock gating," IEEE Trans. VLSI Syst., to be published.
[8] J. Oh and M. Pedram, "Gated clock routing for low-power microprocessor design," IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst., vol. 20, no. 6, Jun. 2001.
[9] A. G. M. Strollo and D. De Caro, "Low power flip-flop with clock gating on master and slave latches," Electron. Lett., vol. 36, no. 4, Feb. 2000.
[10] X. Wang and W. H. Robinson, "A low-power double edge-triggered flip-flop with transmission gates and clock gating," IEEE Conference, pp. 205-208, 2010.
[11] C.-C. Yu, "Low-power double edge-triggered flip-flop circuit design," in Proc. Third Int. Conf. Innovative Computing Information and Control (ICICIC'08), 2008.
[12] J. Kathuria, M. Ayoub, M. Khan, and A. Noor, "A review of clock gating techniques," MIT Int. J. Electron. and Commun. Engin., vol. 1, no. 2, Aug. 2011.
[13] S. Wimer, "On optimal flip-flop grouping for VLSI power minimization," Oper. Res. Lett., vol. 41, no. 5, Sep. 2013.

AUTHOR

Roshini Nair is an M.Tech (VLSI) scholar at MET's School of Engineering. A very proactive student, she obtained her B.Tech from the University of Calicut, Kerala (First Class with Honours), and has received many honours and commendations for her presentations and publications. Her keen interest in conceiving and designing low-power, high-speed integrated circuits and systems has motivated her to innovate new tools and techniques for addressing problems in high-performance computing architectures. She is ardently dedicated to research and development work on integratable low-power circuits from the point of view of VLSI product design.


Methods for Reduction of Stray Loss in Flange-Bolt Regions of Large Power Transformers using Ansys

1Linu Alias and 2Dr. V. Malathi

1EEE Dept, Anna University Regional Office, Madurai, Tamil Nadu, India 2Professor, EEE Dept, Anna University Regional Office, Madurai, Tamil Nadu, India

Abstract—In large power transformers, more than 20% of the total load loss is stray loss in structural components, and the biggest part of the stray loss takes place in the transformer tank. As transformer ratings increase, the stray loss problem also increases significantly, resulting in higher temperatures and local hot spots that reduce transformer life. Due to the heavy current flow in the Low Voltage (LV) windings, the strong magnetic flux linking the transformer tank causes overheating of the tank walls; in particular, if there is loose contact between the wall and the tank cover, it may lead to hot spots in the flange bolts near the high-current bushings of the transformer. To ensure good contact between the cover and the tank body, copper links are used, significantly reducing the overheating of the flange-bolt region. This work presents a 3-D Finite Element Analysis of the geometry of interest to verify the copper-link solution for the overheating of flange bolts. The overheating results are analyzed and discussed for the case of a 315 MVA, 400/220/33 kV autotransformer. Index Terms— Stray loss, local hot spots, high current bushings, copper link, Autotransformer

I. INTRODUCTION

Power and distribution transformers are expensive and vital components in electric power transmission and distribution systems. The statistics of failures in power transformers are as follows: 41% of faults are related to the tap changer, 19% to the windings, 3% to the core, 12% to the terminals, 13% to the tank and fluids, and 12% to the accessories. Hot-spot failures in the tank are included in this 13%. Consequently, it is important to analyze the causes and consequences of tank hot spots as well as to present solutions to the problem of bolt heating. In distribution transformers, overheating of flange-bolt regions is not important, whereas in large power transformers rated 100 MVA and above this overheating can lead to transformer failure. Three main methods are utilized to alleviate the heating of transformer tanks: 1) use of magnetic shunts; 2) use of electromagnetic shields; and 3) varying the distance of the LV leads to the tank wall. For this it is essential to evaluate the stray losses first, which has been done using different methods: 1) two-dimensional methods; 2) three-dimensional formulations; and 3) three-dimensional Finite Element Method (FEM) analysis. After the evaluation of the stray losses, these methods are incorporated to avoid the heating of transformer tanks. The biggest part of the stray loss takes place in the transformer tank; as transformer ratings increase, the stray loss problem grows, resulting in higher temperatures and local hot spots that reduce transformer life. Stray losses in transformer covers depend on the distribution of leakage flux produced by strong induced fields. The induced stray currents are forced to circulate through the flange-bolt region, which overheats if there is no good contact between the tank and the cover. This effect can damage the properties of the insulating oil, the tank's sealing system, the painting, and the insulation of the high-current conductors. Figure 1 shows the after-effect of flange-bolt overheating.

Grenze ID: 01.GIJET.1.2.530 © Grenze Scientific Society, 2015

Figure 1. Aftereffect of flange bolt overheating

II. CASES UNDER CONSIDERATION

A. Case A
This case simulates the situation of a loose bolt. It corresponds to bad contact between the bolt and the tank surface: the nitrile gasket has a height 2 mm in excess, simulating inadequate tightening. High stray currents are induced in the tank and circulate through the flange, producing high stray losses and overheating in the flange-bolt region. In this case, the tank and cover are at different electrical potentials. Figure 2 shows the loose-bolt condition.

Figure 2. Loose bolt condition

B. Case B
Case B simulates the same situation of bad contact between the bolt and the tank surface, again taken into consideration by an excess of 2 mm in the height of the nitrile gasket. Here, by means of a copper link, the top side of the bolt is connected to its bottom side to ensure good contact between the tank and its cover. The copper link is placed across the bolt as shown in Figure 3 to avoid the heating problems.

Figure 3. Loose bolt with copper-link


C. Copper-link method
In transformer manufacturing, three methods have been employed to reduce and avoid the heating of transformer tanks: the use of magnetic shunts, the use of electromagnetic shields, and varying the distance of the LV leads to the tank wall.

Figure 4. Geometry of the proposed solution: 1. stainless steel bolt, IS 1367; 2. copper link; 3. Belleville washers, nonmagnetic material AISI-304; 4. flat washers, nonmagnetic material IS 2016; 5. stainless steel nut, nonmagnetic material IS 1367; 6. flange for connection of tank and cover, IS 2062:2006; 7. toothed washer, nonmagnetic material.

In this work, bridges of copper links are installed at the junction of the cover and tank to remove the hot spots. The configuration adopted is shown in Figure 4. Copper was selected because of its high conductivity, and nonmagnetic stainless steel nuts and bolts were used to reduce corrosion and ensure good contact with the walls of the tank. The link provides a low-impedance path for the stray currents and keeps both parts at the same electrical potential. Note that the reluctance of the flange-bolt region is more than twice that of a solid wall even when the bolts have good contact. A rough numerical illustration of the benefit of the copper path is given below.
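As a back-of-the-envelope illustration of why the low-impedance copper path helps, the sketch below compares the ohmic loss of a given stray current flowing through a steel flange path versus a copper link of the same geometry. Every dimension and the current value are illustrative assumptions, not values from the paper's FEA model.

# Resistivities in ohm-metres (room-temperature textbook values).
RHO_COPPER = 1.68e-8
RHO_STEEL = 6.9e-7   # carbon steel, order-of-magnitude value

def path_loss(rho, length_m, area_m2, current_a):
    """Ohmic loss P = I^2 * R with R = rho * L / A (uniform conductor)."""
    resistance = rho * length_m / area_m2
    return current_a ** 2 * resistance

# Assumed illustrative geometry: 60 mm path, 2 cm x 3 mm cross-section,
# 100 A of induced stray current circulating through the region.
L, A, I = 0.06, 0.02 * 0.003, 100.0
p_steel = path_loss(RHO_STEEL, L, A, I)
p_copper = path_loss(RHO_COPPER, L, A, I)
print(f"steel path: {p_steel:.2f} W, copper link: {p_copper:.2f} W")
# The copper path dissipates roughly 40x less, and it also equalizes the
# electrical potential of tank and cover, suppressing the stray current.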

III. INTRODUCTION TO FEA AND ANSYS WORKBENCH

A. FEA
In computer-aided engineering, the behavior of components under real operating conditions is studied through Finite Element Analysis (FEA). FEA is a computing technique used to obtain approximate solutions of boundary value problems; it uses a numerical method called the Finite Element Method (FEM). FEA involves a computer model of a design that is loaded and analyzed for specific results. The main advantages of FEA are that it reduces the amount of prototype testing, thereby saving cost and time, and that it helps to optimize a design and to create more reliable, high-quality and competitive designs.

B. ANSYS Workbench
ANSYS Workbench is a computer-aided Finite Element Modeling and Finite Element Analysis tool. In its graphical user interface, the user can generate 3-D and FEA models, perform analyses and generate the analysis results. The steps involved are: 1) build the geometry; 2) define the material properties; 3) generate the mesh; 4) apply the loads; 5) obtain the solution; 6) present the results. Among the different analyses available in ANSYS, thermal analysis is used here.

C. Thermal Analysis
Thermal analysis is used to determine the temperature distribution and related thermal quantities such as the amount of heat lost or gained, thermal gradients and thermal fluxes. All primary heat transfer modes (conduction, convection and radiation) can be simulated. There are two types of thermal analysis:
Steady-State Thermal Analysis: the system is studied under thermal loads that are steady with respect to time.
Transient Thermal Analysis: the system is studied under thermal loads that vary with time.


IV. RESULTS AND DISCUSSIONS

A. Steady State Thermal Analysis of the Flange-Bolt Region Using ANSYS
In steady-state thermal analysis, the thermal load does not vary with time and remains constant throughout the period of application; only steady loads are considered. These thermal loads include convection, radiation, heat fluxes, heat generation rates, and constant-temperature boundaries. A steady-state thermal analysis may be either linear or non-linear with respect to material properties that depend on temperature. The thermal properties of most materials do vary with temperature, so the analysis is usually non-linear; including radiation effects or temperature-dependent convection in a model also makes it non-linear. Here, steady-state thermal analysis is performed with an ambient temperature of 22 °C (see Table I) for case A on the inner surface (6 faces) of the tank walls; a convection boundary on the 6 faces is selected, with the medium set as stagnant air (horizontal cylindrical). Figure 5 shows the steady-state thermal analysis of the flange-bolt region without the copper link. For case B the ambient temperature is likewise 22 °C (see Table II), and the convection boundary is selected including the two faces of the copper link. Figure 6 shows the steady-state thermal analysis of the flange-bolt region with the copper link. From Table I it is clear that the temperature near the flange without the copper link is about 394.23 °C; at this temperature there is a high possibility of hot spots at the flange bolt. From Table II the temperature with the copper link is about 83.285 °C, a great reduction of about 310.945 °C when the copper link is inserted on the bolts, which removes the hot spots near the bolt region. A toy one-dimensional illustration of such a steady-state solve is sketched below.
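To give a feel for what a steady-state thermal solve does (the real model is a 3-D ANSYS FEA; this is only a 1-D finite-difference toy under assumed boundary temperatures and uniform conductivity), the sketch below solves the steady heat equation along a bolt-like bar and prints the interior temperatures.

import numpy as np

# Toy 1-D steady-state conduction: fixed temperatures at both ends,
# no internal heat generation, uniform conductivity. The steady heat
# equation d2T/dx2 = 0 discretizes to T[i-1] - 2*T[i] + T[i+1] = 0.
n = 6                            # interior nodes (illustrative)
t_left, t_right = 394.23, 22.0   # assumed boundary temperatures in deg C

A = np.zeros((n, n))
b = np.zeros(n)
for i in range(n):
    A[i, i] = -2.0
    if i > 0:
        A[i, i - 1] = 1.0
    if i < n - 1:
        A[i, i + 1] = 1.0
b[0] -= t_left             # move known boundary values to the RHS
b[-1] -= t_right

T = np.linalg.solve(A, b)  # a steady-state solve is one linear system
print(np.round(T, 2))      # linear temperature profile along the bar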

B. Steady State Thermal Analysis of Flange Bolt Region Without Copper Link

Figure 5. Result showing Steady state thermal analysis of transformer tank’s flange bolt region without copper link

TABLE I. MODEL - STEADY STATE THERMAL ANALYSIS OF FLANGE BOLT WITHOUT COPPER LINK SETTINGS

Definition
  Initial Temperature: Uniform Temperature
  Initial Temperature Value: 22 °C
Step Controls
  Number of Steps: 1
  Current Step Number: 1
  Step End Time: 1 s
Radiosity Controls
  Flux Convergence: 1e-4
  Maximum Iteration: 1000
  Solver Tolerance: 0.1
  Over Relaxation: 0.1
  Hemicube Resolution: 10
Results                 Temperature     Heat Flux
  Minimum               394.23 °C       9.039e-10 W/mm²
  Maximum               400.01 °C       8.2506e-3 W/mm²
  Minimum Occurs On     Part 4          Part 1
  Maximum Occurs On     Part 3          -


C. Steady State Thermal Analysis of Flange Bolt Region With Copper Link

Figure 6. Steady state thermal analysis of flange bolt region with copper link

TABLE II. STEADY STATE THERMAL SOLUTION OF FLANGE BOLT WITH COPPER LINK

Definition
  Initial Temperature Value: 22 °C
  Solution Output: Solver Output
  Update Interval: 2.5 s
  Display Points: All
Results                 Temperature     Heat Flux
  Minimum               83.285 °C       3.0452e-9 W/mm²
  Maximum               84 °C           9.8544e-4 W/mm²
  Minimum Occurs On     Part 3          Part 2
  Maximum Occurs On     Part 2          Part 4

V. CONCLUSION

This work has shown the unfavourable effect of bad physical contact between the cover and walls of autotransformer tanks. The results indicate that loose bolts increase the stray currents, as the bolts serve as current paths; this overheats the region and could result in life-threatening fires or explosions. To avoid such hot spots, it is important that the bolts are always kept tight, using an arrangement such as the one in this work (Belleville washers, anti-seize paste). Furthermore, when copper links are installed between the bolts, connecting the tank and cover, they help to remove the potential differences that produce the stray currents and provide a greater surface for heat dissipation. As the cost of each copper link is extremely low ($0.5) compared with the very high cost of the transformer ($2,000,000), the installation of copper links is a good remedy for hot-spot removal in the flange-bolt regions of power transformers.



Development & Implementation of Mixing Unit using PLC

1Dilbin Sebastian, 2George Jacob, 3Hani Mol A A, 4Indu Lakshmi B and 5Rajani S H
1, 2UG Student, Dept. of AE&I, ASIET, Kalady, Kerala, 683574, India
3Student, Dept. of AE&I, ASIET, Kalady, Kerala, 683574, India
4, 5Asst. Professor, Dept. of AE&I, ASIET, Kalady, Kerala, 683574, India

Abstract—Automation plays an increasingly important role in the world economy. It uses control systems and information technologies to reduce human effort in industry. In this project we develop a system that applies automation to paint mixing: the paint is produced by mixing different colors according to a fixed proportion. The entire system is continuously monitored by a Supervisory Control And Data Acquisition (SCADA) system and controlled by a Programmable Logic Controller (PLC). The process has three main steps: paint mixing, transportation and packing. Initially the paint is formed from the required proportion of colors and is then transported to the packing section via a conveyor belt, where it is finally packed and sealed. The system is controlled using the PLC, and the user can enter the required color proportion via SCADA. Index Terms— Ratio control, PLC-programmable logic controller, SCADA-supervisory control and data acquisition, Paint mixing, Packing, Online tuning.

I. INTRODUCTION

The field of automation has had a notable impact on a wide range of industries beyond manufacturing. Automation is the use of control systems and information technologies to reduce the need for human work in the production of goods and services. In the scope of industrialization, automation is a step beyond mechanization: whereas mechanization provides human operators with machinery to assist them with the muscular requirements of work, automation greatly decreases the need for human sensory and mental effort as well. One important application of automation is in mixing processes where a definite ratio of colors has to be mixed. For these kinds of applications the trend is moving away from individual devices or machines toward continuous automation solutions. Totally Integrated Automation puts this continuity into consistent practice, covering the complete production line, from receipt of goods, through the production process, filling and packaging, to shipment of goods. Our project is also an application of automation: we have developed a paint mixing, transportation and capping system. The various processes are controlled using a PLC (Programmable Logic Controller) and monitored using SCADA (Supervisory Control And Data Acquisition) software. The required proportion of the different colours is given as an input to the PLC via the SCADA software. Based on the given requirements, the PLC controls the flow sensor attached to the DC solenoid valve so that the ratio of each paint is controlled and the mixing is performed; the mixing process is pre-programmed in the PLC. After mixing, the result is transported to another location using the conveyor belt and DC servo motors. The colors required for the paint mixing can be selected online based on customer requirements; these features are made available to the designer through the SCADA software. The mixing is done in a bottle or small bucket, and the filled bucket is finally passed to the packing section, where it is capped automatically. The filling, transportation and capping processes are all carried out automatically by the PLC program, and the final sealed product is given to the customer for use.

Grenze ID: 01.GIJET.1.2.538 © Grenze Scientific Society, 2015

Figure 1. Paint mixing system

II. LITERATURE SURVEY

T. Kalaiselvi, Aakanksha R, Dhanya S and R. Praveena [1], "PLC Based Automatic Bottle Filling and Capping System With User Defined Volume Selection": this system fills and caps bottles simultaneously. Bottles are held in position in a carton on a conveyor belt and are sensed, using IR sensors, to detect their presence. Depending on the sensor output, the corresponding pumps switch on and the filling operation takes place; if a particular bottle is not present, the pump in that position is switched off, avoiding wastage of the liquid. The filling operation is accompanied by a user-defined volume selection menu which enables the user to choose the volume of liquid to be filled. Filling is done on a timing basis: depending on the preset value of the timer, the pump is switched on for that period of time. The entire filling and capping operation is controlled by a PLC and monitored using SCADA (Supervisory Control and Data Acquisition). Dr. D. V. Pushpa Latha [2], "Simulation of PLC based Smart Street Lighting Control using LDR", presents an approach to meet the demand for flexible public lighting systems using a Programmable Logic Controller (PLC). In the conventional street-light system the lights are switched on in the evening and switched off in the morning, so a large amount of power is wasted needlessly. The proposed system operates the street lights based on the light intensity measured by an LDR, taking seasonal variations into account; the PLC is programmed with the required timing and sequencing, achieving large energy savings. Dhanoj Mohan, Rathikarani and Gopukumar [3], "Automation in ration shop using PLC", proposed a methodology for ration-shop automation using an embedded PLC. Updating the government database with the available stock and customer details was not carried out; only the ration commodities allotted to customers are delivered, with the delivery process and the entire control action handled by the PLC.

III. AN OVERVIEW OF PLC

A PLC, or programmable controller, is a digital computer used for the automation of typically industrial electromechanical processes, such as the control of machinery on factory assembly lines, amusement rides, or light fixtures. PLCs are used in many industries and machines. They are designed for multiple analogue and digital input and output arrangements, extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact. Programs to control machine operation are typically stored in battery-backed-up or non-volatile memory. A PLC is an example of a "hard" real-time system, since output results must be produced in response to input conditions within a limited time, otherwise unintended operation will result. Before the PLC, the control, sequencing, and safety interlock logic for manufacturing automobiles was mainly composed of relays, cam timers, drum sequencers, and dedicated closed-loop controllers. Since these could number in the hundreds or even thousands, the process of updating such facilities for the yearly model change-over was very time-consuming and expensive, as electricians needed to individually rewire the relays to change their operational characteristics. Digital computers, being general-purpose programmable devices, were soon applied to the control of industrial processes. Early computers required specialist programmers and stringent operating environmental control of temperature, cleanliness, and power quality; using a general-purpose computer for process control required protecting the computer from the plant-floor conditions. An industrial control computer would have several attributes: it would tolerate the shop-floor environment, support discrete (bit-form) input and output in an easily extensible manner, not require years of training to use, and permit its operation to be monitored. The response time of any computer system must be fast enough to be useful for control, the required speed varying according to the nature of the process [1]. Since many industrial processes have timescales easily addressed by millisecond response times, modern (fast, small, reliable) electronics greatly facilitate building reliable controllers, especially because performance can be traded off for reliability.

Figure 2. Programmable Logic Controller

IV. SUPERVISORY CONTROL AND DATA ACQUISITION

SCADA (supervisory control and data acquisition) is an industrial control system at the core of many modern industries such as manufacturing, energy, water, power and transportation. SCADA systems deploy multiple technologies that allow organizations to monitor, gather, and process data as well as send commands to the points that are transmitting data. Virtually anywhere you look in today's world, you will find some version of a SCADA system running, whether at your local supermarket, a refinery, a waste-water treatment plant, or even your own home. SCADA systems range from simple configurations to large, complex projects. Most SCADA systems utilize HMI (human-machine interface) software that allows users to interact with and control the machines and devices to which the HMI is connected, such as valves, pumps and motors. SCADA software receives its information from RTUs (remote terminal units) or PLCs (programmable logic controllers), which in turn receive their information from sensors or manually entered values. From here, the data can be used to effectively monitor, collect and analyze operations, potentially reducing waste and improving efficiency, resulting in savings of both time and money. Numerous case studies have been published highlighting the benefits and savings of using a modern SCADA software solution such as Ignition.

V. SYSTEM WORKING

The main blocks of the system are the sensors, power supply, filling mechanism, transportation, conveyor belt and capping system. The sensor is an object sensor which checks for the presence of bottles or buckets. After the presence of a bottle is detected, the required color proportion is given to the PLC through SCADA, and the selected colors are mixed to obtain the paint under PLC control. The paint mixing is done using solenoid valves. The mixed paint is then carried to the capping section on conveyor belts operated by DC servo motors.


Figure 3. SCADA system

The speed of the motor can also be controlled using the PLC. The color combination for the paint is selected over the internet according to the customer's preference. The mixed paint is then passed to the capping section, where proper caps are fitted to the bottles or buckets. The capping section is another mechanical part like the filling section, and both are synchronously executed by means of the PLC and SCADA. The final product of the system is sealed paint matching the customer's choice. We decided to mix two colors to obtain the final product, with the ratio of the colors specified by the user. The PLC used in this system is the Siemens S7-1200 model, which suits our requirements; it provides 24 digital I/O points in total (14 inputs and 10 outputs, as detailed below) and can be programmed to obtain a precise result.

Figure 4. Proposed Methodology

A. Hardware details
The hardware we have developed is a high-speed system that takes only several seconds to complete the entire operation: the initial filling of a bottle takes about 3-5 s, after which the bottle is passed to the packing section, where packing is completed within 8 s. The PLC used is the Siemens S7-1200, which has 14 digital inputs, 10 digital outputs and 1 analog I/O. We also use solenoid valves (1/2 inch), a flow sensor and proximity sensors for measuring the flow rate and detecting the presence of the bottle to be filled. A behavioural sketch of the resulting control sequence is given below.
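The control sequence is easy to express as a simple state machine. The Python sketch below is a behavioural stand-in for the ladder program (the PLC itself is programmed in ladder logic, not Python): the fill times are derived from the requested colour ratio, and the timings and I/O names are illustrative assumptions based on the figures quoted above.

import time

FILL_TIME_TOTAL = 4.0   # seconds; filling takes about 3-5 s (assumed midpoint)
PACK_TIME = 8.0         # seconds; packing completes within 8 s

def mix_and_pack(ratio_a, ratio_b):
    """Run one bottle through the mixing/transport/capping sequence.
    ratio_a:ratio_b is the user-entered colour proportion from SCADA."""
    total = ratio_a + ratio_b
    # Open each solenoid valve for a time proportional to its share.
    for name, share in (("valve_A", ratio_a / total), ("valve_B", ratio_b / total)):
        print(f"{name} open for {share * FILL_TIME_TOTAL:.2f} s")
        time.sleep(share * FILL_TIME_TOTAL)   # stand-in for a timed valve output
    print("conveyor on: transporting bottle to the packing section")
    print(f"capping for {PACK_TIME:.0f} s")
    time.sleep(PACK_TIME)
    print("bottle sealed")

mix_and_pack(ratio_a=3, ratio_b=1)  # e.g. a 3:1 colour proportion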

VI. ADVANTAGES AND DISADVANTAGES

The advantages of the system are higher accuracy than hand mixing, low labour cost, the ability to obtain the required color proportion, and minimized human effort. The proposed system can be used for mixing paint in a required proportion and finds wide application in the automobile industry, architectural design and similar fields; it can be used to enhance overall appearance, and in the coming years such systems can be used for the mass production of custom paints.

VII. CONCLUSION

The system we developed is an automatic paint mixing unit which reduces human effort; it requires human effort only at the beginning, for providing the base colors. Manual mixing generates substantial errors which affect the entire process: mixing paints in critical ratios by hand is a tedious task, and the desired result may not be achieved. To remove this difficulty, the system is made fully automatic by means of a PLC. The system also packs the mixed paint in tins and places the lid on top of each tin.

REFERENCES

[1] Bolton, (2006), "Programmable Logic Controllers".
[2] Chan C. H., H. L. T. Dy, A. H. Fernando, R. L. Tiu, and P. J. G. Viernes, (2007), "Design, Fabrication, and Testing of a PLC Training Module Using Siemens S7-300 PLC", DLSU Engineering e-Journal, Vol. 1, No. 1, pp. 43-54.
[3] Festo Didactic, (Textbook TP 301, GmbH & Co., D-73770 Denkendorf), "Programmable Logic Controllers, Basic Level".
[4] Daniels Axel and Wayne Salter, (CERN-CNL-2000-003, Vol. XXXV, issue no. 3), "What is SCADA".
[5] Ghosh S, S. Bairagya, C. Roy, S. Dey, S. Goswami, and A. Ghosh, (2002, ET&TE-2008), "Bottle Filling System using PLC".
[6] Saha A, S. Kundu, and A. Ghosh, (6-7 February 2012), "Mini Cement Plant using PLC", Conference proceedings, Burdwan University.


Cryptography in the Field of Cloud Computing for Enhancing Security

1Nikhitha K Nair, 2Navin K S and 3Soya Chandra C S
1, 3Department of Computer Science and Engineering, Sarabhai Institute of Science and Technology, Vellanad, Thiruvananthapuram, Kerala
2Department of Computer Science and Engineering, L.B.S. Institute of Technology for Women, Poojapura, Thiruvananthapuram, Kerala

Abstract—Cryptography is widely used in the field of cloud computing nowadays. Cloud computing provides users with the facility to store large amounts of data in the cloud from different locations, and data sharing can be done efficiently with cloud data. But these facilities bring new challenges to users, mainly concerning security and privacy; these are the main issues which can restrict users from uploading their data to the cloud and from using cloud services for various applications. In such cases, cryptographic techniques play a major role in enhancing security in the field of cloud computing. Index Terms— Cryptography, Encryption, Cloud, Security.

I. INTRODUCTION

In the modern era, cloud computing is an emerging technology, often considered a metaphor for the internet. Cloud computing is growing very quickly, and this growth has helped Information Technology diversify its services for a large number of users. Cloud computing brings the facility to utilize resources handled by different users: the cloud acts as a storage area in which clients store their files, data and resources instead of keeping them on their own hard disks. This also paves the way for different clients to access and share the files and resources stored in the cloud through the internet, making data access and data sharing more efficient and greatly reducing the workload on individual client machines. Beneath all these facilities, the main problems in cloud computing are security and privacy. To secure the cloud, we must secure the data stored in it, the calculations carried out, and the databases hosted by the cloud service providers; the goals of security include the integrity, confidentiality and availability of cloud data. The real importance of cryptography lies in protecting data from unauthorized users. Data cryptography scrambles the contents of the files to be stored in the cloud so that they become meaningless or unreadable, a process known as "encryption"; the opposite process, in which the original data is retrieved from the encrypted contents, is called "decryption". In cloud computing, both symmetric and asymmetric encryption techniques can be used, but symmetric techniques are found to be more efficient since a large number of databases are kept in cloud storage. Cryptography is a mechanism which can be applied in any area, ranging from simple applications involving calculations to highly efficient cloud computing services where data sharing is involved.

Grenze ID: 01.GIJET.1.2.540 © Grenze Scientific Society, 2015

II. OBJECTIVES OF CRYPTOGRAPHY

The main objectives under the study of cryptography include the following:

A. Data Integrity
Data integrity is the property concerned with preventing unauthorized modification of the data stored in the cloud.

B. Non-Repudiation
This property prevents the sender and receiver from denying having sent or received messages or files.

C. Confidentiality
This property deals with protecting the contents of the files or messages stored in the cloud and with not allowing any third party to access that information.

D. Authentication
This property deals with verifying the identities of the sender and receiver involved in exchanging messages, and of a sender who wants to store data in the cloud to be shared with a receiver.

E. Access Control
This property deals with allowing only authorized persons to access the data in the cloud, while preventing others from access by using keys.

III. ISSUES DEALING WITH SECURITY

A. Privacy
Cloud computing uses virtualization. When users store data in the cloud, that data is scattered among various virtual data centers rather than residing in a single physical location. Users can also leak personal or hidden information when they access cloud computing services. The privacy of the data stored in the cloud is therefore affected.

B. Trust
Trust in cloud computing concerns the integrity and assurance of the data and of the clients involved in the entire process of data storage and data sharing.

C. Security The cloud service providers should employ the techniques of encryption, authorization and authentication in order to mitigate the security issues.

D. Performance and Availability Issues
Most clients dealing with business organizations worry about obtaining proper levels of performance and availability for the hosted applications and services.

E. Data Portability
Many people worry about the process of switching from one cloud provider to another and transferring their data. Porting and conversion of data depend on the nature of the cloud service provider's data retrieval format and type. As open standards become more popular, the portability and conversion process gradually becomes easier.


IV. CRYPTOGRAPHIC TECHNIQUES UNDER CLOUD COMPUTING

Cryptography is a technique for keeping data as secure as possible by converting it into a format that is unreadable to outsiders. The broad classification of encryption techniques includes symmetric-key (private-key) encryption, asymmetric-key (public-key) encryption and hybrid techniques. Symmetric-key algorithms use only a single key (the private key): both the sender and the receiver use the same key for encryption and decryption. Symmetric-key algorithms include both block ciphers and stream ciphers; they are very fast and easy to implement. Examples include DES, AES, Triple DES, Blowfish and IDEA. Asymmetric-key algorithms use two different keys (a private key and a public key) for encryption and decryption: the receiver's public key is used by the sender to encrypt the plaintext into ciphertext, and the receiver uses its own private key to decrypt the ciphertext back into plaintext. Examples of this type include RSA, DSA, Diffie-Hellman and ElGamal. Nowadays, several encryption techniques are often combined and applied in a hybrid manner; using several techniques for a particular task makes the handling of data security more efficient. A toy illustration of symmetric encryption is given below.
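As a toy illustration of symmetric encryption (a keystream XOR, far weaker than the DES/AES family named above and shown only to make the encrypt/decrypt symmetry concrete), the following Python sketch derives a keystream from a shared secret and XORs it with the data. The key-derivation construction is an assumption made for the example, not a recommended scheme.

import hashlib

def keystream(secret: bytes, nonce: bytes, length: int) -> bytes:
    """Derive `length` pseudo-random bytes from a shared secret and nonce
    by chaining SHA-256 blocks (illustrative construction only)."""
    out, block = b"", b""
    while len(out) < length:
        block = hashlib.sha256(secret + nonce + block).digest()
        out += block
    return out[:length]

def xor_cipher(secret: bytes, nonce: bytes, data: bytes) -> bytes:
    """Symmetric toy cipher: the same call encrypts and decrypts,
    because (data XOR ks) XOR ks == data."""
    ks = keystream(secret, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

secret, nonce = b"shared-private-key", b"unique-nonce"
ciphertext = xor_cipher(secret, nonce, b"cloud file contents")
assert xor_cipher(secret, nonce, ciphertext) == b"cloud file contents"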

V. RELATED WORK

Paper [4] provides an overview of various cloud deployment models, including public, private, hybrid and community clouds, and discusses several issues concerning cloud data. It attempts to build a repository for storage and sharing, with data confidentiality across the cloud. Paper [5] outlines the benefits of cloud computing, which include cost savings, reliability, flexibility and mobile access, and surveys existing symmetric and asymmetric key encryption techniques: the symmetric algorithms covered are Blowfish, DES, 3DES, RC5 and AES, and the asymmetric techniques are DSA, RSA, Diffie-Hellman and ElGamal. Paper [6] provides a more detailed survey of encryption techniques and presents homomorphic encryption, covering both additive and multiplicative homomorphic schemes. Paper [7] addresses attack detection and proactive resolution in a single-cloud environment, and describes a proposed approach for enhancing data security in the cloud, focusing on its motivation and method of implementation.

VI. OBJECTIVE OF PROPOSED WORK

The main objective of the proposed work is to provide tight security for data stored in the cloud. Encryption techniques such as AES and XOR encryption schemes are therefore applied on both the data owner side and the data user side. This paper also brings in the scenario of public auditability, in which a third-party auditor verifies the integrity of the cloud data.

VII. PROPOSED WORK

The proposed work aims to bring out the different concepts relating cloud computing and cryptography. Cryptography is the study of providing security to data, and data security and privacy are the main concerns of cloud computing. Different encryption techniques can be clustered and combined at several levels to provide security. In an organization, encryption techniques can be applied on the data owner side, the data user side, the cloud side and even the third-party auditor side. Utilizing different encryption techniques at the same time increases the security of the cloud data. Since the cloud is an environment where a large number of users are involved and a large amount of data is stored at any time, strict cryptographic methods addressing privacy concerns should be used throughout. The objectives of this proposed work are:
a. To design a system that handles security-related issues and addresses privacy concerns.


b. To develop a system based on different encryption techniques at various levels.
c. To develop a system where the data user can upload the required files to the cloud in a secure manner.
d. To develop a system where the user can create a signature so that the receiver can identify the proper owner of a file.
e. To develop a system where the cloud performs additional encryption on the already encrypted data.
f. To develop a system where the user can download the encrypted data from the cloud and decrypt it using the cloud key and the private key (a sketch of this two-layer idea follows).
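A minimal sketch of the two-layer storage idea in objectives (e) and (f), assuming AES-GCM for the owner's layer and a simple repeating-key XOR for the cloud's additional layer; the real system's algorithms, key sizes and key distribution are not specified at this level of detail:

import os
from itertools import cycle
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def xor_layer(data: bytes, key: bytes) -> bytes:
    # XOR with a repeating key stream; XOR is its own inverse
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

owner_key = AESGCM.generate_key(bit_length=128)   # held by the data owner/user
cloud_key = os.urandom(16)                        # held by the cloud (assumed)
nonce = os.urandom(12)

# Owner side: AES-GCM encryption before upload
stored = AESGCM(owner_key).encrypt(nonce, b"shared file", None)
# Cloud side: additional encryption over the already encrypted data
stored = xor_layer(stored, cloud_key)
# User side: strip the cloud layer, then decrypt with the owner's key
plaintext = AESGCM(owner_key).decrypt(nonce, xor_layer(stored, cloud_key), None)
assert plaintext == b"shared file"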

VIII. CONCLUSION

Cryptography, the study of encrypting data, can be used in cloud computing to provide additional security for the data stored in the cloud. Various encryption techniques, both symmetric and asymmetric, can be used to enhance security. The proposed system develops a scenario that provides high security and efficiency with the help of cryptographic techniques.

REFERENCES

[1] Cong Wang, S. S. M. Chow, Qian Wang, Kui Ren, "Privacy-Preserving Public Auditing for Secure Cloud Storage," IEEE Transactions on Computers, Vol. 62, No. 2, February 2013.

[2] N. Cao, C. Wang, M. Li, K. Ren, and W. Lou, "Privacy-Preserving Multi-Keyword Ranked Search over Encrypted Cloud Data," Proc. IEEE INFOCOM, pp. 829–837, Apr. 2011.

[3] Y.-C. Chang and M. Mitzenmacher, "Privacy Preserving Keyword Searches on Remote Encrypted Data," Proc. Third Int'l Conf. Applied Cryptography and Network Security, 2005.

[4] Mohit Marwaha, Rajeev Bedi, "Applying Encryption Algorithm for Data Security and Privacy in Cloud Computing," IJCSI International Journal of Computer Science Issues, Vol. 10, Issue 1, No. 1, January 2013.

[5] Randeep Kaur, Supriya Kinger, "Analysis of Security Algorithms in Cloud Computing," International Journal of Application or Innovation in Engineering & Management (IJAIEM), Volume 3, Issue 3, March 2014, ISSN 2319-4847.

[6] Rashmi Nigoti, Manoj Jhuria, Dr. Shailendra Singh, "A Survey of Cryptographic Algorithms for Cloud Computing," International Journal of Emerging Technologies in Computational and Applied Sciences (IJETCAS), ISSN (Print): 2279-0047, ISSN (Online): 2279-0055.

[7] Aysan Shiralizadeh, Abdulreza Hatamlou and Mohammad Masdari, "Presenting a new data security solution in cloud computing," Journal of Scientific Research and Development 2 (2): 30-36, 2015, ISSN 1115-7569.


A Comparative Study of Harmonic Compensation Techniques in Micro-Grids using Active Power Filters

1Neeraj N, 2Ramakrishnan P V and 3Mini P R 1-3SCET, Kodakara, Kerala, 680684, India

[email protected]

Abstract—Power quality issues like voltage sag, voltage swell, harmonics, etc. are common in micro-grids. These power quality issues arise due to the increased use of non-linear loads and power-electronics-interfaced Distributed Generation systems. Various methods are used for the improvement of power quality, of which one of the most advanced is the employment of an Active Power Filter. In this paper a comparative study of various methods of harmonic compensation in micro-grids using active power filters is carried out. The methods are analysed on the basis of their topologies and control strategies, and the conclusions so arrived at are presented in comparison tables. This could help designers select an appropriate topology or control strategy for an APF employed in a particular micro-grid. The review was conducted by analysing many publications, which are appended here for reference. Index Terms—Micro-grid, Power quality, Active power filters, APF topologies, Time domain control algorithms.

I. INTRODUCTION

Micro-grids can be defined as a group of Distributed Generation (DG) units interfaced with an electrical distribution network through power electronic devices such as voltage source converters. A micro-grid can operate in two modes: grid-connected operation and island operation. If the voltage unbalance is severe, the circuit breaker between the micro-grid and the utility grid opens and the micro-grid operates in island mode; if the voltage unbalance is within limits, the circuit breaker remains closed, which is called grid-connected operation of the micro-grid. Islanding can happen if the converter is not prevented from injecting current within a short period of time and continues to feed local loads after tripping of the grid. Islanding can pose a safety risk to utility workers if they assume that the line is de-energized after disconnecting it from the grid. Moreover, closing the upstream circuit breakers during islanding can cause major damage to the converters due to unsynchronized reconnection to the grid. Another issue is that, due to a mismatch between the active and reactive power delivered by the converters and consumed by the loads, the voltage and frequency of an islanded DG system might shift considerably from the nominal values. Therefore, islanding is potentially a hazard to people and to grid-connected converters, and should be effectively detected and avoided, Khani et al. (2013). Based on IEEE Standard 1547, which applies to converters with a rated power of less than 10 MVA connected to primary or secondary distribution systems, the converters shall detect islanding and cease to energize the area within two seconds of the formation of an islanding event, Diarmaid J. Hogan et al. (2014), Jayawardena et al. (2012), Wang Jinquan et al. (2012), Jinwei He et al. (2014).



II. REVIEW OF VARIOUS PQ PROBLEMS IN MICRO-GRID

The power quality problems that commonly affect the utility grid are the presence of harmonic content, load unbalance, increased reactive power demand and fluctuation of the system voltage. Generally, current harmonics and voltage-frequency imbalance increase losses in ac power lines. A current control loop based on the synchronous reference frame and a conventional PI regulator is used for voltage-frequency regulation, Fujita et al. (2000). Power quality conditioning is supported by voltage-source-inverter-interfaced distributed energy resources, which need conventional filters in order to detect the voltage unbalance and the harmonics in the main system. Filter design for three-phase power systems has involved adopting band-pass and band-stop filters to eliminate harmonics in the micro-grid, Shuai (2011). Flexible Distributed Generation (FDG), which is functionally related to FACTS, has been proposed for active power flow control and to mitigate harmonics, unbalanced loads and voltage flickering, Gupta et al. (2012). The current controller functions to inject sinusoidal current into the grid even in the presence of non-linear loads and unbalanced voltage distortion, Dash et al. (2013), Wang Jinquan et al. (2012). To attain a fixed switching frequency the controller complexity must be raised, although a hysteresis controller can be used.
One of the major concerns in the electricity industry today is power quality problems affecting sensitive loads. Presently, the majority of power quality problems are due to different fault conditions. These conditions cause voltage sag, which may trip apparatus, shut down commercial, domestic and industrial equipment, and disturb the operation of drive systems. The proposed system can provide a cost-effective solution to mitigate voltage sag by establishing the appropriate voltage quality level required by the customer, and is recently being used as the active solution.
Distributed Generation is a back-up electric power generating unit used in many commercial buildings, industrial facilities, hospitals, campuses and department stores. Most of these back-up units are used primarily to provide emergency power when grid-connected power is not available, and they are installed within the premises of the consumer where the electric demand exists. Installing the back-up units close to the demand centre avoids transmission losses and the cost of transmitting power. These back-up generating units are currently termed distributed generation to differentiate them from the traditional centralized power generation model, which has proven to be an economical and reliable source of energy production. However, without a significant increase in new generating capacity, or expansion of existing capacity, to meet today's power demand, the whole electrical power industry faces serious challenges and is looking for a solution, Jayawardena et al. (2012), Khani et al. (2013). Advances in power technology have opened a path for modern industries to develop innovative technologies within their own limits for the fulfilment of their industrial goals. Optimizing production while minimizing production cost, and thereby maximizing profit while ensuring continuous production throughout the period, has become the ultimate goal, Menniti et al. (2008), Shuai et al. (2011).
A stable supply of uninterruptible power has to be guaranteed during the production process. Modern manufacturing and process equipment, which operates at high efficiency, requires a high-quality, defect-free power supply for the successful operation of its machines; that is why high-quality power is in such demand. To be precise, most modern machine components are designed to be very sensitive to power supply variations. Adjustable speed drives, automation devices and power electronic components are only some examples of such equipment. Failure to provide the required power quality may sometimes cause a complete shutdown of an industry, leading to major financial loss for the industry concerned, Anjana et al. (2014), Jelani et al. (2012). Thus industry always needs high-quality power from the supplier or the utility. But the responsibility for degraded quality cannot be placed solely in the hands of the utility itself: it has been found that most of the conditions that can disrupt processes are generated within the industry, as most of the non-linear loads within industries cause transients which can affect the reliability of the power supply. Improvement of power quality has been given considerable attention due to the increase in power quality issues, in addition to the limits required by international standards such as IEEE 519-1992, IEC 1000-3-2 and IEC 1000-3-4; those limits were set in order to restrict disturbances and avoid major problems in distribution power systems. Conventionally, passive filters are used for current harmonic mitigation while capacitor banks are used for power factor correction. Neither solves the problem in a suitable way, and they usually cause other problems, such as resonances. Moreover, their performance depends on the system impedance and also suffers from the ageing of the passive filter components, Fahmy et al. (2014), Shuai et al. (2011).

III. REVIEW ON GENERAL ACTIVE SOLUTIONS FOR MITIGATION OF PQ ISSUES IN MICRO-GRID

The mitigation of harmonics was first done using conventional passive filters. But these passive filters had certain drawbacks, such as low efficiency, large filter size and resonance problems. If the network voltage contains a frequency at which the passive filter has low impedance, that voltage component may cause a severe rise in current in the passive filter. Anti-resonance between the source impedance and the filter impedance can also occur, with the flowing harmonic current causing a severe voltage increase. These problems can be solved by the simultaneous use of passive and series active filters. Shunt active power filters (APFs) have attracted considerable attention as an efficient way to perform power conditioning tasks such as harmonic elimination, reactive power compensation, load balancing and neutral current elimination. APFs offer high efficiency and perform effectively on lower-order harmonics such as the 3rd, 5th and 7th, which are generated by nonlinear loads, Jayawardena et al. (2012). A shunt APF's DC-link voltage must be kept constant so that it can compensate harmonics and reactive power and mitigate the neutral current effectively. Because of their simple implementation and tuning, PI controllers find extensive application in the DC-link voltage controllers of shunt APFs, Fahmy et al. (2014), Tiwari et al. (2014), Zamani et al. (2014). However, PI controllers require an exact mathematical model of the system and offer poor robustness in the transient state. Occasionally, DC-link voltage overshoot and inrush source current occur, which may lead to protection tripping or even semiconductor failure when the APF's operation is started. Recently, the Fuzzy Logic Controller (FLC) has received noticeable attention as an APF controller: FLCs offer strong robustness to variable parameters, good dynamic response and limited overshoot in the transient response. In the hybrid arrangement, the active and passive filters are connected in series with each other, and the hybrid filter is connected in parallel with other loads in the vicinity of the secondary of a distribution transformer installed at the utility-consumer point of common coupling (PCC). It therefore differs, in its point of installation, from pure active filters and hybrid active filters that have been installed in the vicinity of harmonic-producing loads. The purpose of installing the hybrid filter is to damp harmonic resonance in industrial power systems, as well as to mitigate harmonic voltages and currents. When an overcurrent flows into the passive filter, the active filter controls its gain to a positive value; thus the active filter acts as a positive resistor, preventing the passive filter from absorbing an excessive 5th-harmonic current.
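For reference, a generic discrete PI loop of the kind described above for shunt-APF DC-link voltage regulation is sketched below in Python; the gains, sampling time, setpoint and output limits are illustrative assumptions rather than values from the surveyed papers:

def make_pi(kp, ki, dt, u_min, u_max):
    integral = 0.0
    def step(ref, meas):
        nonlocal integral
        error = ref - meas
        integral += error * dt
        u = kp * error + ki * integral
        return min(max(u, u_min), u_max)   # clamp to converter limits
    return step

pi = make_pi(kp=0.5, ki=20.0, dt=1e-4, u_min=-50.0, u_max=50.0)
v_dc, v_ref = 680.0, 700.0      # measured and reference DC-link voltage (V), assumed
print(pi(v_ref, v_dc))          # charging-current reference for the first sample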

Fig. 2.1 Series active power filter in a D-G system

In some applications, combining several different types of filters into a hybrid system can achieve better performance. Several hybrid configurations have been reported, including parallel active filter with series active filter, series active filter with parallel passive filter, parallel active filter with parallel passive filter, and active filter in series with parallel passive power filter, Dash et al. (2013), Fujita et al. (2000), Shuai et al. (2011). Among these configurations, the active filter in series with parallel passive filters, also known as the hybrid active power filter (HAPF), shows great promise.


Fig. 2.2 Shunt active power filter in a D-G system

Fig. 2.3 Hybrid active power filter in a D-G system

In particular, the concept of the injection-type hybrid active power filter (IHAPF), owing to its lack of fundamental-wave voltage and its suitability for high-voltage grids, has become a focal point of extensive research, Wang Jinquan et al. (2012), Ilango et al. (2012). A unified power quality conditioner is an advanced concept in the power quality control field. The unified power quality conditioner (UPQC) is implemented based on the idea of integrating a series active filter and a shunt active filter that share a single DC link. A UPQC can be applied in a power system for current harmonic compensation, voltage compensation and reactive power control, but its main drawback is that it cannot provide frequency regulation. This drawback is overcome by introducing the constant frequency unified power quality conditioner (CF-UPQC), which is a combination of a unified power quality conditioner and a matrix converter. This modified unified power quality conditioner enables the PWM converter to perform active filtering, while the matrix converter performs the function of frequency regulation. The Pulse Width Modulation (PWM) technique is commonly used to control all these converters. The switching rate is high, so the PWM converter can produce a controlled current or voltage waveform with high fidelity. The CF-UPQC can simultaneously compensate the load current harmonics and supply voltage harmonics and provide frequency regulation.

Fig. 2.4 CF-UPQC for PQ improvement in a D-G system


TABLE I: COMPARISON OF VARIOUS APF TOPOLOGIES IN MICRO-GRID

APF Topology    Compatibility in Micro-grid
Series APF      **
Shunt APF       ***
Hybrid APF      ****
UPQC            ****

*Poor performance, **Average performance, ***More than average performance, ****Good performance, *****Superior performance

IV. COMPARATIVE STUDY OF VARIOUS APF CONTROL STRATEGIES

The analysis has been made on the basis of several factors, such as compactness of the configurations, the nature of the supply system and economic aspects. The control strategies were analysed for various types of supply: balanced sinusoidal supply, unbalanced sinusoidal supply and balanced non-sinusoidal supply. Consider the control scheme used to calculate the compensation voltage for the active filter. The voltage vector on the load side and the source current vector are the input signals. By means of a calculation block, the α-β components can be determined. The product of these vectors gives the instantaneous real power, whose mean value is obtained with a low-pass filter (LPF); this is the power consumed by the passive-filter-and-load set. The mean power is divided by the square of the rms value of the fundamental current component, as illustrated in the sketch below. Various conclusions derived from the survey are tabulated, which helps designers select a particular control strategy compatible with a particular configuration and application. After analysing the various control strategies, the results are tabulated.
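A minimal numerical sketch of this calculation chain (Clarke transform, instantaneous real power, and a low-pass filter) is given below in Python; the test waveforms, sampling rate and the one-cycle moving-average LPF are assumptions made purely for illustration:

import numpy as np

f, fs = 50.0, 10_000.0                      # fundamental and sampling frequencies, assumed
t = np.arange(0, 0.2, 1 / fs)

# Balanced three-phase voltages and a distorted load current (5th harmonic added)
va, vb, vc = (np.sin(2*np.pi*f*t + ph) for ph in (0, -2*np.pi/3, 2*np.pi/3))
ia, ib, ic = (np.sin(2*np.pi*f*t + ph) + 0.2*np.sin(5*(2*np.pi*f*t + ph))
              for ph in (0, -2*np.pi/3, 2*np.pi/3))

# Power-invariant Clarke (alpha-beta) transform
T = np.sqrt(2/3) * np.array([[1, -0.5, -0.5],
                             [0, np.sqrt(3)/2, -np.sqrt(3)/2]])
v_ab = T @ np.vstack([va, vb, vc])
i_ab = T @ np.vstack([ia, ib, ic])

# Instantaneous real power and its mean via a one-cycle moving-average LPF
p = v_ab[0]*i_ab[0] + v_ab[1]*i_ab[1]
n = int(fs / f)
p_mean = np.convolve(p, np.ones(n)/n, mode="same")
# The oscillating part p - p_mean is the harmonic power to be compensated
print(f"mean power ~ {p_mean[n:-n].mean():.3f}, ripple ~ {np.std(p - p_mean):.3f}")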

TABLE II: COMPLEXITY OF VARIOUS APF CONTROL STRATEGIES

Formulation     Compactness
p-q             Simple strategy
Modified p-q    Moderately complex strategy
d-q             Complex strategy
p-q-r           Moderately complex strategy
vectorial       Complex strategy

Obtaining a sinusoidal source current in phase with the positive-sequence symmetrical component of the fundamental harmonic of the applied voltage is taken as the compensation target, and this configuration is used as the ideal reference. Under these conditions, p-q, modified p-q, p-q-r and the vectorial formulation give a null compensator average power, while d-q requires a non-null compensator average power; p-q, p-q-r, d-q and the vectorial formulation achieve a null neutral current, whereas modified p-q does not manage to clear the neutral current. Only the vectorial and d-q formulations achieve null distortion in all cases. p-q and p-q-r allow control algorithms in cases 2 and 3 with distortion below 10%; modified p-q goes over that value. In summary, it can be said that only the vectorial formulation is adequate for establishing APLC compensation strategies with any kind of load and any kind of supply. Nevertheless, the original formulation presents good performance, which can be improved in the search for adequate compensation strategies if its representation through the mapping matrix is replaced by a vectorial representation.

TABLE III: CONTROL STRATEGIES IN MICRO-GRID OPERATION

Formulation     Effectiveness in micro-grid
p-q             ***
Modified p-q    **
d-q             ***
p-q-r           **
vectorial       ****

*Poor performance, **Average performance, ***More than average performance, ****Good performance, *****Superior performance



V. CONCLUSION

This theoretical analysis has investigated how to maintain power quality in a micro-grid using various configurations of active power filters. Control based on the instantaneous power theories, such as the p-q, d-q and vectorial formulations, was analysed. The main advantage of this control approach lies in the fact that all sensitive loads connected to the PCC are immunized from the power quality problem. A parallel active filter will increase the harmonic current and may cause overcurrent of the load when the load is a harmonic voltage source; instead, it has been verified that the series active filter is better suited for compensating a harmonic voltage source such as a diode rectifier with a smoothing dc capacitor. When a parallel active filter is installed in a power system network, for example at a point of common coupling, the network impedance and the main harmonic sources downstream of the installation point should be investigated in order to obtain good performance and to minimize the influence on the downstream loads. In some cases a combined system of parallel and series active filters may be necessary, utilizing the harmonic isolation function of the series active filter. Without any doubt we can say that active filters are superior to passive filters when used in their niche applications.

REFERENCES

[1] Pradeep Anjana, Vikas Gupta, Harpal Tiwari, “Reducing Harmonics in Micro Grid Distribution System Using APF with PI Controller”, T&D Conference and Exposition, 2014 IEEE PES.

[2] Santanu Kumar Dash, Gayadhar Panda, “Development of 1-ph Hybrid Active Power Filter with an Efficient FPGA Platform for Power Conditioning”, 2013 IEEE 1st International Conference on Condition Assessment Techniques in Electrical Systems.

[3] M. Fahmy, A. K. Abdelsalam, A. B. Kotb, "4-Leg Shunt Active Power Filter with Hybrid Predictive Fuzzy-logic Controller", International Symposium on Industrial Electronics (ISIE), 2014 IEEE.

[4] Hideaki Fujita, Takahiro Yamasaki, Hirofumi Akagi, “A Hybrid Active Filter for Damping of Harmonic Resonance in Industrial Power Systems”, IEEE Transactions On Power Electronics, VOL. 15, NO. 2, MARCH 2000.

[5] Narayan Prasad Gupta, Preeti Gupta, Dr. Deepika Masand, "Power Quality Improvement Using Hybrid Active Power Filter for a DFIG Based Wind Energy Conversion System", 2012 Nirma University International Conference on Engineering, NUiCONE-2012, 06-08 December 2012.

[6] Diarmaid J. Hogan, Fran Gonzalez-Espin, John G. Hayes, Gordon Lightbody, Michael G. Egan, “Adaptive Resonant Current-Control for Active Power Filtering within a Microgrid”, Energy Conversion Congress and Exposition (ECCE), 2014 IEEE.

[7] A. V. Jayawardena, L. G. Meegahapola, S. Perera, D. A. Robinson, "Dynamic Characteristics of a Hybrid Microgrid with Inverter and Non-Inverter Interfaced Renewable Energy Sources: A Case Study", Power System Technology (POWERCON), 2012 IEEE.

[8] Nadeem Jelani, Marta Molinas, "Shunt Active Filtering by Constant Power Load in Microgrid Based on IRP p-q and CPC Reference Signal Generation Schemes", Power System Technology (POWERCON), 2012 IEEE.

[9] Wang Jinquan, Li Jianke, Xu Ye, Cui Chenhua, Chen Donghao, "Impact Analysis of Frequency Fluctuations on the Performance of Passive Filters in Microgrids", Power and Energy Engineering Conference (APPEEC), 2012.

[10] Jinwei He, Yun Wei Li, Frede Blaabjerg, "Flexible Microgrid Power Quality Enhancement Using Adaptive Hybrid Voltage and Current Controller", IEEE Transactions on Industrial Electronics, Vol. 61, No. 6, June 2014.

[11] Ilango K, Bhargav A, Trivikram A, Kavya P S, Mounika G, Manjula G. Nair, "Performance Comparison of Shunt Active Filter Interfacing Algorithm for Renewable Energy Sources", 2012 IEEE International Conference on Power Electronics, Drives and Energy Systems, December 16-19, 2012, Bengaluru, India.

[12] S.Khani, L. Mohammadian, S.H. Hosseini, “Modified p-q Theory Applied to Flexible Photovoltaic Systems at the 3-Phase 4-Wire Distribution Grids”. Electrical Engineering (ICEE), 2013 21st Iranian Conference on; 01/2013 (2013)

[13] Li Shengqing, Zeng Huanyue, Xu Wenxiang, Li Weizhou, "A Harmonic Current Forecasting Method for Microgrid HAPF based on the EMD-SVR Theory", 2013 Third International Conference on Intelligent System Design and Engineering Applications.

[14] D. Menniti, A. Burgio, A. Pinnarelli, N. Sorrentino, “Grid-Interfacing Active Power Filters to Improve The Power Quality in a Microgrid”, Harmonics and Quality of Power, 2008. ICHQP 2008.

[15] Adel M. Sharaf, Adel A. Aktaibi, “A Novel Hybrid Facts Based Renewable Energy Scheme for Village Electricity”, Innovations in Intelligent Systems and Applications (INISTA), 2012 International Symposium.

[16] Z. Shuai, A. Luo, C. Tu, D. Liu, “New control method of injection-type hybrid active power filter”, IET Power Electron., 2011, Vol. 4, Iss. 9, pp. 1051–1057 1051, doi: 10.1049/iet-pel.2010.0353.

[17] Mukhtiar Singh, Vinod Khadkikar, Ambrish Chandra, Rajiv K. Varma, “Grid Interconnection of Renewable Energy Sources at the Distribution Level With Power-Quality Improvement Features”, IEEE Transactions On Power Delivery, VOL. 26, NO. 1, JANUARY 2011.


[18] Dr. H.P. Tiwari, Pradeep Anjana, Dr. Vikas Gupta, “Power Quality Improvement of Micro-Grid Using APF's With APC Theory”, IEEE International Conference on Recent Advances and Innovations in Engineering (ICRAIE-2014), May 09-11,2014, Jaipur, India.

[19] A. Zamani, H. Shahalami, "Performance of a Hybrid Series Filter in Mitigating Power Quality Problem of a Grid-Connected PV Array Interfaced with a Line-Commutated Inverter", The 22nd Iranian Conference on Electrical Engineering (ICEE 2014), May 20-22, 2014, Shahid Beheshti University.

[20] Lei Zhang, Linchuan Li, Wei Cui, Shaobo Li, “Study on Improvement of Micro-grid's Power Quality Based on APF and FESS”, Innovative Smart Grid Technologies - Asia (ISGT Asia), 2012 IEEE.


Speed Control of Vehicle using Fractional Network-based Controller

1Abida K, 2Dr. Nafeesa K and 3Labeeb M 1-3Department of EEE, MES CE Kuttippuram, Malappuram, Kerala, India

[email protected]

Abstract—Vehicle dynamics plays an important role in driving systems, especially in congested traffic situations at very low speeds. In order to ensure safety during driving, accurate controllers are needed. The design of a fractional PI controller for the low-speed control problem is attempted in this paper. A system to adapt the vehicle's speed so as to avoid or mitigate possible accidents is developed. A performance comparison of various controllers, namely the PI controller, the fuzzy PI controller and the fractional PI controller, is carried out. Results show that the fractional PI controller gives better performance than the other controllers in terms of settling time, rise time, maximum overshoot, etc. Index Terms—Fractional-order control; adaptive cruise control; gain scheduling; vehicle-to-infrastructure (V2I) communication; vehicle-to-vehicle (V2V) communication.

I. INTRODUCTION

The idea of vehicles driving automatically is a distant aim. Tasks such as parking assistance or keeping a safe distance from other vehicles are discussed in [1]. Most of these advances focus on improving the safety of passengers and pedestrians; however, human failure remains the main cause of serious accidents. The safety initiative includes research projects based on vehicle-to-vehicle (V2V) or vehicle-to-infrastructure (V2I) communications designed to reduce the number of vehicle accidents. Intelligent cooperative systems based on wireless communications will play a key role in the development of new advanced driver assistance systems (ADAS). Indeed, the development of dedicated short-range communications (DSRC) as a band reserved for communications among vehicles reflects the importance of applying communication systems to increase road safety [2]. The approach is based on control stations that coordinate the traffic within a zone. However, the main drawback of wireless communication systems is that they introduce delays that can cause failures in the control system, since information has to go from the vehicle and the infrastructure to a control station and then return to the vehicles and devices in the infrastructure for the most appropriate action to be taken. Most road fatalities are due to excessive speed, so management of the delays in the information exchange between a control station and the vehicles in its vicinity can be critical for collision avoidance. A scenario with a number of vehicles driving in a common area and a control station capable of communicating with all of them through a wireless network is described in [3]. The control station is responsible for sending each vehicle its specific target speed so as to avoid the possibility of collisions. The design and development of a fractional-order proportional-integral (PI) controller to manage a vehicle's speed in the common area at low speeds, together with a gain scheduler that adapts the local speed controller from the control station, is a new venture. In this paper a local fractional PI controller capable of performing efficient networked control by modifying its gains with an external gain β is designed. The gain β is determined as an optimal function of the current network conditions.



Among the various networked control strategies, the gain scheduling technique is the most widely used one because of the frequent changes in operating network conditions; replacing a proven and widely used controller by a new one to gain efficient network-based capability may be costly and time consuming [4]. Gain scheduling can be considered the simplest classical strategy of adaptive control, and several gain scheduling approaches have been applied successfully in networked control systems. Fractional-order control (FOC) is the generalization of traditional controllers or control schemes to non-integer orders. Its applications have become an important research field in recent years because the time and frequency responses of the control system can be adjusted, which gives robust performance.

II. SYSTEM DESCRIPTION

The architecture is mainly based on local control stations for the traffic control of cooperating vehicles. These receive information from the vehicles in their domain, analyse it to determine when a situation of risk might arise, and modify the vehicles' speeds in order to avoid an accident. Although the control stations are capable of analysing traffic conditions in real time, delays caused by the wireless communications between each vehicle and the control station may cause inappropriate control signals to be sent to the vehicles [5]. This paper describes a controller that is capable of overcoming this problem.

A. Local station A local control station is responsible for detecting traffic risk situations in its control area and then modifying the speeds of the vehicles involved so as to avoid or mitigate possible accidents. A wireless access point is used to detect any vehicle in its vicinity, establishing vehicle-to-infrastructure (V2I) communication following the Wi-Fi standard. Although vehicle-to-vehicle (V2V) solutions have been proposed for several automotive applications, including adaptive cruise control (ACC) [6], V2I communications play a key role in the management of driving areas, since they significantly reduce the number of communication channels required. One of the major concerns in the V2I communication field is the ability of the infrastructure (the control station) to manage the information coming from hundreds of vehicles in real time with minimum delay. To obtain a suitable intelligent infrastructure management system, it is vital to deal with the effects of traffic density on the communication requirements. The vehicle is able to send its current position and speed to the control station, which takes into account the vehicle's proximity to a curved stretch of road, an intersection or a traffic-merging segment, together with the measured condition of the network, to determine the reference speed. It then sends this data, together with the measured network delay, to the vehicle so that it can adapt its speed to the road's layout.

B. Vehicle The vehicle has been modified to permit actions on the brake by means of control commands. For the brake automation, a brake-by-wire system was installed in the trunk of the vehicle. It consists of a pump attached to a dc motor that permits different pressures to be applied to the brake shoes. The vehicle is equipped with an on-board control unit which is responsible for receiving the control commands to be applied to the vehicle's actuators. Since it is impossible to obtain the exact dynamics that describe the vehicle, this paper uses a simplified model developed in [7] based on the vehicle's experimental response. The model is implemented in the MATLAB environment, and by taking a chirp signal as the input to the vehicle's throttle, the transfer function is obtained as

G(s) = K / (s² + 2δωn s + ωn²)    (1)

where K = 7.8473 × 10⁴, δ = 160, and ωn = 55.87. The poles of (1) are p1 = −0.1746 and p2 = −1.7878 × 10⁴, so the vehicle dynamics are evidently governed by p1, the pole with the larger time constant. As a result, (1) is reduced to the first-order function

G(s) = (K/|p2|) / (s + 0.1746) ≈ 4.39 / (s + 0.1746)    (2)
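The dominant-pole reduction can be checked numerically; the following sketch (Python with SciPy, used here purely for illustration) compares the step responses of the full model (1) and the reduced model (2):

import numpy as np
from scipy import signal

K, delta, wn = 7.8473e4, 160, 55.87
full = ([K], [1, 2 * delta * wn, wn**2])       # second-order model (1)
reduced = ([K / 1.7878e4], [1, 0.1746])        # K/|p2| ~ 4.39, model (2)

t = np.linspace(0, 40, 2000)
_, y_full = signal.step(full, T=t)
_, y_red = signal.step(reduced, T=t)
print(f"max deviation between models: {np.max(np.abs(y_full - y_red)):.4f}")
print(f"DC gains: {y_full[-1]:.2f} vs {y_red[-1]:.2f}")   # both ~ K/wn^2 ~ 25.1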


III. V2I COMMUNICATIONS

Point-to-point communications open an exponentially growing number of communication channels when many vehicles are driving in the same area. A wide-area architecture based on five levels is therefore presented, reducing the number of communications in order to obtain a safe and efficient system. The following points must be considered when developing a wide-area control system that improves on local-area control [8]:
1. A central unit of control must exist inside every local area, managing everything that happens inside it.
2. Every vehicle must know the information of the other vehicles in the local area.
3. A common zone among local stations must exist to ensure hand-over from one zone to another without loss of information.
4. The local control unit must communicate with the surrounding local units in order to exchange the information of the vehicles in the common zones.
The V2I architecture is divided into five steps, as shown in Fig. 1.

Fig. 1 V2I architecture

1. Perception: All the sensorial information is sent from the vehicles (cars, trucks, motorbikes, etc.) and the infrastructure (traffic signals, light panels, etc.) to the local control station (LCS). From the vehicle's standpoint, this information can come from Global Navigation Satellite Systems (GNSS), an Inertial Measurement Unit (IMU), a compass or any sensor accessible in the vehicle (e.g. via the CAN bus). From the infrastructure's standpoint, sensors can be used to obtain extra information. The LCS is limited to the area that can be covered by the local communication system.
2. Management: All the sensorial information received by the LCS is analysed and efficiently sorted to find the best way to resolve risky traffic situations.
3. Coordination: All the information on the vehicles driving in common areas is sent to the LCSs. Taking this information into account, the LCS sends the data to the vehicles and the infrastructure in order to permit safe manoeuvres.
4. Planning: With the information coming from the LCSs, the vehicles and the infrastructure evaluate the conditions and choose the best alternative in order to improve the traffic flow.
5. Actuation: The options selected in the previous stage are sent to the actuators. For a traffic light this may be to change the light; for a vehicle, to modify its speed.
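The five steps can be read as a control pipeline. The toy Python sketch below walks one LCS cycle from perception to actuation; the message format and the gap-based risk rule are invented for illustration and are not part of the architecture specification:

from dataclasses import dataclass

@dataclass
class VehicleState:          # 1. Perception: state reported by each vehicle
    vid: int
    position: float          # distance along the road segment (m)
    speed: float             # m/s

def lcs_cycle(states, min_gap=25.0):
    # 2. Management: sort vehicles by position to find leader/follower pairs
    ordered = sorted(states, key=lambda s: s.position, reverse=True)
    commands = {}
    for lead, follow in zip(ordered, ordered[1:]):
        # 3. Coordination / 4. Planning: slow the follower if the gap is risky
        if lead.position - follow.position < min_gap:
            commands[follow.vid] = max(lead.speed - 1.0, 0.0)
    return commands          # 5. Actuation: target speeds sent to the vehicles

print(lcs_cycle([VehicleState(1, 100.0, 10.0), VehicleState(2, 85.0, 12.0)]))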

IV. BRAKING SYSTEM

The main prerequisite is to obtain a brake-by-wire system coexisting with the original braking system. The solution is to design a hydraulic system equipped with electronic components that permit handling by computer-generated signals through an input/output device. It is also necessary to determine the maximum braking pressure in order to avoid excessive system stress. This value is determined experimentally by means of a manometer: a wheel is removed, a manometer is connected in lieu of the brake shoe, and a pressure of 160 bars is measured when the brake pedal is completely pressed down [9]. Fig. 2 shows the design scheme of the braking system. The hydraulic system consists of a one-litre brake fluid tank, a gear pump coupled to a 350-watt, 12-volt dc motor, and a pressure-limiter tube, set at 160 bars, added in order to protect the vehicle elements involved in the braking process. This system provides the maximum pressure that the original braking system is able to apply on the wheels.


Fig. 2 Braking system design

Electronic components are needed to regulate this pressure as required by the computer. Two electronic components are included: one is used to regulate the pressure between 0 and the maximum value, and the other to transmit this pressure from the pump to the wheels. In order to regulate the pressure flow, an electro-proportional pilot is installed with a nominal pressure between 12 and 250 bars; its control voltage varies between 0 and 10 volts. The electro-proportional pilot yields a non-null minimum pressure, and hence always exerts some small pressure on the wheels. The second element, a spool directional valve, is used to resolve this problem: it is normally open, and is only closed when the proportional pilot is actuated. These two elements cause delays that cannot be disregarded if good behaviour of the system is desired. At the first sampling period after brake actuation is requested, a signal is sent simultaneously to both valves, and the actual delay corresponds to that of the slower element, the spool directional valve, whose switching time is about 30 ms. For subsequent sampling periods the spool directional valve is already closed, and the delay corresponds only to that of the electro-proportional pilot, being at most 10 ms even in the worst case. Following the design of the hydraulic and electronic components, the system needs to be plugged into the existing vehicle braking system. To this end, a shuttle valve is installed to form the junction between the two systems. This valve permits flow from either of two inlet ports to a common outlet: a free-floating metal ball shuttles back and forth according to the relative pressure at the two inlets, and flow from the higher-pressure inlet moves the ball to close the opposite inlet. This valve is thus responsible for switching between the two braking systems. The model selected is the Hydraulic WV 6-S, chosen because the small flow through a braking system permits the valve of least diameter, which also has the smallest floating ball, thus minimizing the switching time. The valve is mounted so that, under gravity, the ball keeps the standard braking system open when the electro-hydraulic system is switched off. The shuttle valve introduces a delay associated with the movement of the metal ball between the two inlets; the delay time calculated for the selected model is less than 1 ms at the minimum pressure of 10 bars. The connection between the shuttle valve and the electro-hydraulic braking system is through the output of the spool directional valve, which is connected to one of the inputs of the shuttle valve. Two shuttle valves are therefore used to switch between the conventional and electro-hydraulic braking systems. The outputs of the two shuttle valves are connected to the ABS (Anti-lock Braking System) inputs; finally, the ABS performs the distribution of the braking.

V. CONTROLLER DESIGN

The design process of the local controller and its network-based adaptation is described here. The use of a fractional-order strategy is a new perspective in longitudinal control at low speeds in the automotive field, because of its capacity to provide more adjustable time and frequency responses [10]. For low-speed control purposes, the most important mechanical and practical requirement of the vehicle to be addressed during the design process is that the vehicle response has to be smooth, so that its acceleration


will be less than the well-known comfort acceleration, that is, less than 2 m/s². Apart from the inherent vehicle issues, the controller has a twofold purpose: 1) robustness against non-modelled dynamics and imprecision in measurements, and 2) the desired closed-loop response, with an overshoot Mp close to 0% and a rise time tr ≈ 4 s, or equivalently a phase margin and a crossover frequency around 90 deg and 0.45 rad/s, respectively (it has been tested that higher values of both parameters cause worse system performance) [11]. In previous works, some traditional PI controllers were designed without significantly good results. As fractional-order controllers have been applied in several fields with better results than the traditional ones, a PIα controller, given by (3), is designed to fulfil the desired system specifications. The use of a PID controller instead of the proposed PIα might introduce problems with high-frequency noise, since the derivative action is sensitive to measurement noise.

C(s) = kp + ki/s^α = kp (1 + z/s^α), with z = ki/kp    (3)

Here ωc denotes the gain crossover frequency, φm the specified phase margin, and the output disturbance rejection is defined by a desired value of the sensitivity function S(s) over a desired frequency range. To meet the system stability and robustness requirements, the following specifications are considered.

A. Phase margin specification

Arg[Gol(jωc)] = Arg[C(jωc)G(jωc)] = −π + φm    (4)

B. Gain crossover frequency specification

|Gol(jωc)| = |C(jωc)G(jωc)| = 1    (5)

C. Output disturbance rejection for ω ≤ ωs = 0.035 rad/s

|S(jω)| = |1 / (1 + C(jω)G(jω))| ≤ −20 dB    (6)

The phase and magnitude of the open-loop frequency response Gol(jω) of the system can be written as

Arg(Gol(jω)) = −arctan[ z ω^(−α) sin φ / (1 + z ω^(−α) cos φ) ] − arctan(ω/p)    (7)

|Gol(jω)| = ( K kp / √(ω² + p²) ) · √[ (1 + z ω^(−α) cos φ)² + (z ω^(−α) sin φ)² ]    (8)

where φ = απ/2. The phase margin specification (4) leads to

−arctan[ z ωc^(−α) sin φ / (1 + z ωc^(−α) cos φ) ] − arctan(ωc/p) = −π + φm    (9)

In accordance with the gain crossover specification (5),

( K kp / √(ωc² + p²) ) · √[ (1 + z ωc^(−α) cos φ)² + (z ωc^(−α) sin φ)² ] = 1    (10)

Specification (6) gives

|S(jω)| = 1 / | 1 + kp [1 + z ω^(−α) cos φ − j z ω^(−α) sin φ] · K/(jω + p) |    (11)


Fig. 3 Bode plot for designed controller

Solving the above set of equations, the controller parameters obtained are kp = 0.09, ki = 0.025 and α = 0.8. Fig. 3 shows the Bode plot of the controlled system with the designed controller applied. As can be observed, the crossover frequency is ωc = 0.46 rad/s and the phase margin is φm = 87.79 deg, which roughly fulfils the design specifications.
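These figures can be reproduced numerically. The sketch below evaluates C(jω)G(jω) for the reduced model (2) with the tuned PIα parameters; the use of Python/NumPy is an assumption of this illustration, not of the paper:

import numpy as np

K, p = 4.39, 0.1746                 # reduced vehicle model (2)
kp, ki, alpha = 0.09, 0.025, 0.8    # tuned fractional PI parameters

def open_loop(w):
    C = kp + ki / (1j * w) ** alpha     # fractional PI, C(jw), from (3)
    G = K / (1j * w + p)                # vehicle model, G(jw)
    return C * G

w = np.logspace(-2, 1, 20000)
mag = np.abs(open_loop(w))
wc = w[np.argmin(np.abs(mag - 1.0))]             # gain crossover frequency
pm = 180 + np.degrees(np.angle(open_loop(wc)))   # phase margin
print(f"wc = {wc:.3f} rad/s, phase margin = {pm:.1f} deg")
# Expected: wc ~ 0.46 rad/s, phase margin ~ 88 deg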

D. Networked Control The future trend in the automotive field is to develop ADAS that ensure safe driving within a zone by using a communication network for traffic control. For this, the local station has to be able to adapt each vehicle's speed according to the circumstances. In this sense, the previous local speed controller has to be moved towards a networked speed controller that takes the network effects into account [11].

Fig. 4 Scheme for gain scheduling

This section presents the adaptation of the fractional controller for networked control by minimizing the effects of network-induced delays. The idea is to enable the designed local controller to perform efficient networked control by means of an external gain, based on the approach proposed in [12]. The controller output is adapted by a gain β (β > 0) with respect to the current network condition, obtained by estimating the current network delay. Thus β will be a function of the delay, β = fop(τnetwork), which optimally adapts the vehicle's speed to the current network conditions and enables the local controller to perform efficiently over the network; a sketch of this idea follows this paragraph. A scheme of this fractional gain scheduling strategy is shown in Fig. 4. There are three basic components in this scheme. 1) The central decision unit, whose function is to measure or estimate the current network condition; this measurement is then used by the gain scheduler and also to determine the reference speed. 2) The gain scheduler, which modifies the controller output with respect to the current network condition; it is determined by an offline optimal study of the system. 3) The remote system, in this case a production vehicle. In [13] the authors address the stability of the system roughly using the root locus, so that an approximation is needed for the delays. Here, by contrast, the Nyquist stability criterion is applied to calculate the maximum value of β that guarantees the system's stability. Hence the structure of the gain scheduling and the way β is tuned ensure closed-loop stability for any given τnetwork [14]. The application of gain scheduling to the PIα controller, referred to as a fractional gain scheduled controller (FGSC), is presented for networked vehicle speed adaptation at low speeds.
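A heavily simplified sketch of the scheduling idea follows; the delay breakpoints and β values below are invented placeholders for the offline-optimized mapping fop, not data from this work:

import numpy as np

tau_grid = np.array([0.0, 0.1, 0.2, 0.4, 0.8])    # delay breakpoints (s), assumed
beta_grid = np.array([1.0, 0.8, 0.6, 0.4, 0.25])  # offline-optimized gains, assumed

def beta_of(tau: float) -> float:
    """fop(tau): interpolate the scheduled gain for the current delay."""
    return float(np.interp(tau, tau_grid, beta_grid))

def networked_output(u_local: float, tau: float) -> float:
    # Scale the local fractional controller's output before it is applied
    return beta_of(tau) * u_local

print(networked_output(1.0, 0.15))   # reduced control effort under delay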


VI. PERFORMANCE COMPARISON

Comparing the step responses of the system with the various controllers, it is observed that the fractional PI controller gives better performance than the others. The system responses with the ordinary PI controller, the fuzzy PI controller and the fractional PI controller are shown in Figs. 5, 6 and 7 respectively. An overshoot occurs with the fuzzy PI and ordinary PI controllers, but it is absent with the fractional PI controller. It can therefore be inferred that the fractional PI controller, in which more parameters can be adjusted, gives better performance than the others.

Fig.5 Response of PI controller

Fig.6 Response of fuzzy PI controller

Fig.7 Response of FOPI controller

(Axes of Figs. 5-7: amplitude versus time in seconds; each response is plotted against the reference input.)


TABLE I. PERFORMANCE COMPARISON WITH VARIOUS CONTROLLERS

Controller      Rise time (s)   Settling time (s)   Overshoot (%)
PI              4.74            18.72               4.7
Fuzzy PI        4.12            18.57               2.3
FOPI            1.75            7.53                0.5

Table I shows the rise time, settling time and maximum overshoot for the three controllers, from which it can be seen that the fractional-order controller is the best.

VII. CONCLUSIONS

A fractional-order proportional-integral (PIα) controller to manage a vehicle's speed in a common area at low speeds has been designed and simulated. A gain scheduler was designed to adapt the local speed controller from the control station, compensating for delays in the system. The system performance with various controllers, namely the ordinary PI controller, the fuzzy PI controller and the fractional PI controller, was compared, and it was found that the fractional PI controller gives better results than the others in terms of settling time, rise time and percentage overshoot.

REFERENCES

[1] Ines Tejado, Vicente Milanes, Jorge Villagra, and Blas M. Vinagre, "Fractional Network-Based Control for Vehicle Speed Adaptation via Vehicle-to-Infrastructure Communications," IEEE Trans. Control Syst. Technol., May 2012.

[2] V. Milanés, C. González, J. E. Naranjo, E. Onieva, and T. de Pedro, "Electro-hydraulic braking system for autonomous vehicles," Int. J. Autom. Technol., vol. 11, no. 1, pp. 89–95, 2010.

[3] V. Milanés, J. E. Naranjo, C. González, J. Alonso, and T. de Pedro, "Autonomous vehicle based in cooperative GPS and inertial systems," Robotica, vol. 26, no. 5, pp. 627–633, 2008.

[4] S. Koo and H.-S. Tan, "Tire dynamic deflection and its impact on vehicle longitudinal dynamics and control," IEEE-ASME Trans. Mech., vol. 12, no. 6, pp. 623–631, Dec. 2007.

[5] A. Kamga and A. Rachid, "Speed, steering angle and path tracking controls for a tricycle robot," in Proc. IEEE Int. Symp. Comput. Aided Control Syst. Design, Sep. 1996, pp. 56–61.

[6] E. Onieva, V. Milanés, C. González, T. de Pedro, J. Pérez, and J. Alonso, "Throttle and brake pedals automation for populated areas," Robotica, vol. 28, no. 4, pp. 509–516, 2010.

[7] J. Villagrá, V. Milanés, J. Pérez, and J. Godoy, "Dynamic modelling and nonlinear parameter identification for longitudinal vehicle control," Mech. Syst. Signal Process., to be published.

[8] Y. Tipsuwan and M.-Y. Chow, "Gain scheduler middleware: A methodology to enable existing controllers for networked control and teleoperation - part I: Networked control," IEEE Trans. Ind. Electron., vol. 51, no. 6, pp. 1218–1227, Dec. 2004.

[9] R. Cosmad, A. Cabellos-Aparicio, J. Domenech-Benlloch, J. Gimenez-Guzman, J. Martinez-Bauset, M. Cristian, A. Fuentetaja, A. Lopez, J. Domingo-Pascual, and J. Quemada, "Measurement-based analysis of the performance of several wireless technologies," in Proc. 16th IEEE Workshop Local Metropol. Area Netw., Sep. 2008, pp. 19–24.

[10] R. C. L. Gámez, P. Martí, M. Velasco, and J. M. Fuertes, "Wireless network delay estimation for time-sensitive applications," Autom. Control Dept., Technical Univ. Catalonia, Catalonia, Spain, Tech. Rep. ESAII RR-06-12, 2006.

[11] Y. Tipsuwan and M.-Y. Chow, "Control methodologies in networked control systems," Control Eng. Pract., vol. 11, no. 10, pp. 1099–1111, 2003.

[12] H. S. Li, Y. Luo, and Y. Q. Chen, "A fractional order proportional and derivative (FOPD) motion controller: Tuning rule and experiments," IEEE Trans. Control Syst. Technol., vol. 18, no. 5, pp. 516–520, Mar. 2010.

[13] Y. Q. Chen, I. Petras, and D. Xue, "Fractional order control - A tutorial," in Proc. Amer. Control Conf., pp. 1397–1411, 2009.

[14] Harald Voit and Anuradha Annaswamy, "Adaptive Control of a Networked Control System with Hierarchical Scheduling," in Proc. Amer. Control Conf., June 29 - July 01, 2011.


Securing Images using Elliptic Curve Cryptography

1Blessy Joy A and 2Girish R

1, 2Department of IT, Nehru College of Engineering and Research Centre, Thrissur, Kerala, India

Abstract—The use of multimedia files has increased drastically. These files may contain sensitive information such as military images, medical images and technical blueprints, and they are passed through open networks which are insecure. This paper proposes a method to encrypt images according to the required level of security. In the paper, images are classified into three groups: images requiring low, intermediate and high security, and different encryption methods are used for these three categories. The main contribution of this paper is RGB image encryption using elliptic curve cryptography. The approach gives the user more freedom to select the encryption method and the image quality. After encryption, the images are compressed using the JPEG method. Since ECC is used for encryption, this method is highly suitable for mobile environments, where processing power and battery life are limited. Index Terms—RGB image encryption, bitplanes, Elliptic curve cryptography, XORing images.

I. INTRODUCTION

With the growth of technology, the security of images has become a major concern. To overcome the insecurity of images, encryption is the most popular method used. In image encryption, the original image is converted into another image that is hard to understand; the original image can be retrieved only if the correct key is used in the decryption phase. The encryption of images differs from that of text: when encrypting images, the characteristics of multimedia files should be considered. Multimedia files are loss-tolerant and synchronous, and they consume a large amount of power. Considering these characteristics, multimedia encryption is classified into two types. The first method is called naïve encryption, in which the entire set of multimedia frames is encrypted. This is obviously not an efficient method, since it introduces considerable delay at the receiver's side, and this delay interrupts the continuous nature of the multimedia files. So another method, called selective encryption, was introduced, in which only selected frames or parts of the multimedia files are encrypted. A subgroup of selective encryption is called perceptual encryption, in which the encrypted multimedia data remain partially perceptible, giving a sense of the high-quality version. Conventional encryption algorithms are used to encrypt the images, and they can be of two types: symmetric and asymmetric key encryption. In symmetric key encryption a shared key is used to encrypt and decrypt the data. The problems with symmetric key encryption are the privacy of the secret key and the difficulty of storing the keys of all the users: for example, if there are n users in a network, each user has to store (n − 1) keys. Examples of symmetric key encryption include the Advanced Encryption Standard (AES) and the Data Encryption Standard (DES). Asymmetric key cryptosystems use two large keys for the encryption and decryption processes.



These keys are called the public and private keys, and either of them can be used for encryption or decryption. Examples of asymmetric key cryptosystems are the Rivest-Shamir-Adleman (RSA) and El-Gamal cryptosystems. The hardness of the underlying mathematical problem provides the fundamental security of all protocols in the public-key family; hence, asymmetric key cryptography is slower. Applying symmetric keys to multimedia networking applications is not practical because each participating entity must store the keys of all other entities. Elliptic Curve Cryptography (ECC) is an asymmetric key cryptosystem with low computational complexity and low power consumption. In this paper, ECC is used to encrypt the image.
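As a small illustration of the selective (perceptual) encryption idea mentioned above, the sketch below XORs a keystream into only the most significant bitplane of an image array, leaving the rest of the image perceptible; the toy image and keystream are assumptions of this example, not the scheme proposed in this paper:

import numpy as np

rng = np.random.default_rng(seed=1)
image = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)     # stand-in image
keystream = rng.integers(0, 2, size=image.shape, dtype=np.uint8)

msb_mask = np.uint8(0x80)
encrypted = image ^ (keystream * msb_mask)   # flip only bit 7 where key = 1
decrypted = encrypted ^ (keystream * msb_mask)
assert np.array_equal(decrypted, image)      # XOR is self-inverse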

II. RELATED WORKS

ECC has many advantages over other encryption algorithms, so recently many methods have been proposed for image encryption using ECC. Some of these methods are described below.

A. An ethical way of image encryption using ECC

In this paper, ECC was used to encrypt the message without compression: each pixel in the image was encoded into a point on the elliptic curve, and this point was encrypted and sent to the receiver. No compression method was used in this technique, and the resulting encrypted image was large. Since a single pixel value was encoded into affine coordinates, the size of the encrypted image will be double that of the original image.

B. Public key cryptosystem technique elliptic curve cryptography with generator g for image encryption

In this paper, ECC points are converted into cipher image pixels at the sender side, and a decryption algorithm is used to recover the original image within a very short time, with a very high level of security, at the receiver side. In this method also, pixel-wise encoding is done and no compression algorithm is used.

C. A Novel Public Key Image Encryption Based on Elliptic Curves over Prime Group Field
In this paper, a new mapping method was introduced to convert an image pixel value to a point on a predefined elliptic curve over the finite field GF(p) using a map table. This mapping technique is very fast, with low complexity and computation, and is easy to implement; for low-entropy plain images, the mapping results in a high distribution of different points for repetitive intensity values.

III. ELLIPTIC CURVE CRYPTOGRAPHY

In the proposed system, ECC is used to encrypt the image, and after encryption, compression is also performed. Before moving on to the image encryption, let us discuss the overall process of ECC. ECC is a public key cryptosystem which has two keys: a private key and a public key. The public key is distributed among the group of users. ECC works based on elliptic curve theory; the standard equation for the elliptic curve can be written as follows.

y^2 = (x^3 + ax + b) mod p (1)

In the above equation, p is the prime number based on which the elliptic curve is generated. Not all elliptic curves can be used for cryptography; the condition to be satisfied is the following.

(4a^3 + 27b^2) mod p ≠ 0 (2)

Now that the elliptic curve is selected, the next step is to find the points on it. Consider the following example: let the prime number p = 5 and the constants a = 1, b = 1.

First verify that (4a^3 + 27b^2) mod p ≠ 0. Here 4a^3 + 27b^2 = 31, and 31 mod 5 = 1 ≠ 0. Next, determine the quadratic residues of 5: Q5 = {1, 4}. Then, for 0 ≤ x < p, compute y^2 = (x^3 + ax + b) mod 5.


Table I. Finding Points On The Elliptical Curve

x          0     1     2     3     4
y^2        1     3     1     1     4
y^2 ∈ Q5   yes   no    yes   yes   yes
y1         1     -     1     1     2
y2         4     -     4     4     3
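To cross-check Table I, the following minimal Python sketch (our illustration, not part of the original paper) enumerates the affine points of y^2 = x^3 + x + 1 over GF(5):

    # Enumerate the points of y^2 = x^3 + x + 1 over GF(5), reproducing Table I.
    p, a, b = 5, 1, 1
    assert (4 * a**3 + 27 * b**2) % p != 0   # Eq. (2): the curve is non-singular

    points = []
    for x in range(p):
        rhs = (x**3 + a * x + b) % p         # y^2 = (x^3 + ax + b) mod p, Eq. (1)
        points += [(x, y) for y in range(p) if (y * y) % p == rhs]

    print(points)  # [(0, 1), (0, 4), (2, 1), (2, 4), (3, 1), (3, 4), (4, 2), (4, 3)]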

The points on the elliptic curve are Ep(a,b) = {(0,1), (0,4), (2,1), (2,4), (3,1), (3,4), (4,2), (4,3), O}, where O is the point at infinity. The next step is to select a generator point G from this set of points. Then the binary or ASCII values of the message are converted to affine coordinates; for this encoding we have used Koblitz's method, after which the data are points on the elliptic curve. The next phase is to encrypt those points. Let A and B be the two communicating entities and m the message to be encrypted. The first step is to encode the message into a point Pm on the elliptic curve using Koblitz's method. The two parties A and B each select their own private key; let nA and nB be the private keys of the entities. To generate the public keys, multiply the private keys by the generator point G. For encryption,

Cm = {kG, Pm + kPB} (3)

where Cm is the ciphertext and k is a random positive integer chosen by the sender. For decrypting the data,

Pm + kPB − nB(kG) = Pm + k(nBG) − nB(kG) = Pm (4)
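The scheme of Eqs. (3)-(4) can be exercised end to end on the toy curve. The sketch below is our own illustration: the choices of G, nB, k, and the encoded point Pm are assumed values, not taken from the paper.

    # EC-ElGamal on y^2 = x^3 + x + 1 over GF(5); None represents O, the
    # point at infinity.
    p, a = 5, 1

    def add(P, Q):
        """Point addition by the chord-and-tangent rule."""
        if P is None: return Q
        if Q is None: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None                                       # P + (-P) = O
        if P == Q:
            lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p  # tangent slope (y1 != 0 on this curve)
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, p) % p         # chord slope
        x3 = (lam * lam - x1 - x2) % p
        return (x3, (lam * (x1 - x3) - y1) % p)

    def mul(n, P):
        """Scalar multiplication nP by double-and-add."""
        R = None
        while n:
            if n & 1:
                R = add(R, P)
            P, n = add(P, P), n >> 1
        return R

    def neg(P):
        return None if P is None else (P[0], -P[1] % p)

    G, nB, k, Pm = (0, 1), 3, 2, (3, 4)        # assumed generator, key, nonce, message point
    PB = mul(nB, G)                            # B's public key
    C1, C2 = mul(k, G), add(Pm, mul(k, PB))    # Cm = {kG, Pm + kPB}, Eq. (3)
    assert add(C2, neg(mul(nB, C1))) == Pm     # decryption recovers Pm, Eq. (4)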

IV. PROPOSED SYSTEM

A. Image encryption using ECC
In some instances, complex cryptographic algorithms are not needed to encrypt the images: if the images are shared in an environment secured by a firewall, complex encryption algorithms are just a waste of computational resources. In order to conserve computational resources, this paper categorizes the images into three groups: images requiring low, intermediate, and high security. Different algorithms are used for these different categories. If the images are shared in a secured network like an intranet, they need not be encrypted using complex cryptographic algorithms, so for such images simple pixel-wise XORing is performed: the original image is XORed with a key image to produce the final encrypted image. XORing of images is a symmetric key encryption; if the image is XORed with a key image, only the intended user who knows the key image can decrypt the image, so this method can ensure authentication too.

Figure 1. Xoring operation for images having low security

In Figure 1, the XORing operation is illustrated. User 1 is the sender; he encrypts the original image using key image 1, and the result is broadcast on the intranet. Users 2 and 3 receive the encrypted image and both try to decrypt it using the keys they have. Only user 2 has the same key as user 1, so only user 2 can retrieve the original image. Since user 3 does not have the right key, he will not be able to retrieve the original message.
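A minimal NumPy sketch of this low-security mode, assuming two equal-sized 8-bit greyscale arrays (our illustration, not the paper's code):

    import numpy as np

    def xor_image(image, key_image):
        """Pixel-wise XOR; applying the same key twice restores the original."""
        return np.bitwise_xor(image, key_image)

    rng = np.random.default_rng(0)
    original = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
    key = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)            # shared key image

    encrypted = xor_image(original, key)
    assert np.array_equal(xor_image(encrypted, key), original)          # user 2: right key
    assert not np.array_equal(xor_image(encrypted, key + 1), original)  # user 3: wrong key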


10010101 10101010 11001101 11001100
10100100 11000111 00000110 00101010
10010100 10111100 00000001 11100101
01010101 11001010 10111000 00110001

MSB bitplane / 8th bitplane: 1111110011010110

Figure 2. The pixel-wise representation of an image and formation of bitplanes

For the remaining two categories, the image encryption is done on the basis of bitplanes. A bitplane of an image is the set of bits corresponding to a given bit position: for 8-bit data representation there are 8 bitplanes, where the first bitplane contains the set of least significant bits and the 8th contains the most significant bits. Let Figure 2 represent a 16-pixel image in which each pixel is represented by 8 bits. By selecting the MSB of every pixel, we can generate the MSB bitplane of the image, shown below the image representation. The next step is to encrypt those bitplanes. The bitplanes are grouped into 8-bit segments; if the bitplane length is not a multiple of 8, the last segment is padded accordingly. These 8-bit segments are then encoded into points on the elliptic curve and encrypted. After encryption, the image is compressed using JPEG. For images requiring intermediate security, the user is provided with an option to convert the RGB image to a greyscale image. First we discuss the encryption of greyscale images. Greyscale pixels are represented using 8 bits: let b0b1b2b3b4b5b6b7 be a single pixel of the greyscale image, where each bit is either 0 or 1. Hence we can form eight binary images from each bi of all the pixels in the greyscale image. The higher-order bits usually contain most of the significant visual information, while the lower-order bits contain the subtle details, so by encrypting the higher bitplanes we preserve almost all the information of the image. Nevertheless, the user is provided with an option to choose the number of bitplanes to be encrypted; if the user needs a high-quality image, he should select more bitplanes. Each group of 8 bits in a bitplane is encoded into a point on the elliptic curve and then encrypted into two points, giving four cipher values, where each value is represented by 32 bits. The cipher values are stored in the LSB bitplane, since it contains only subtle details, and grouped as blocks, each containing 4 cipher values. As a result, each encrypted segment is associated with a block of 128 bits, and the block number is stored in place of the original segment. Since the segment values range from 0 to 255, at most 256 blocks are required to store the cipher values of all segments. For instance, to encrypt a bitplane of size 256 × 256 bits, only half of the LSB bitplane size (256 × 128 bits) is required to store all the blocks of the cipher values. Then the entire ciphertext is compressed using JPEG.
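The bitplane decomposition just described is a one-liner per plane; the sketch below (ours) reproduces the MSB bitplane of the 16-pixel example of Figure 2:

    import numpy as np

    def bitplane(image, i):
        """Bitplane i collects bit i of every pixel (i = 7 is the MSB plane)."""
        return (image >> i) & 1

    img = np.array([[0b10010101, 0b10101010, 0b11001101, 0b11001100],
                    [0b10100100, 0b11000111, 0b00000110, 0b00101010],
                    [0b10010100, 0b10111100, 0b00000001, 0b11100101],
                    [0b01010101, 0b11001010, 0b10111000, 0b00110001]], dtype=np.uint8)

    msb = bitplane(img, 7).flatten()
    print(''.join(map(str, msb)))   # 1111110011010110, the MSB bitplane of Figure 2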

Figure 3. RGB image encryption process



RGB image encryption differs from greyscale image encryption. An RGB image has three components representing the red, green, and blue channels, with 8 bits used to represent each component; thus a total of 24 bits represents a single pixel of an RGB image. To perform encryption, the three components are first extracted: three images are generated, representing the red, green, and blue channels, in which each pixel is 8 bits long. Encryption is then performed on each channel separately, based on bitplanes. RGB image encryption is proposed for images that require high quality and high security, so it is recommended to encrypt all the bitplanes so that this requirement is met, though the user can still select the number of bitplanes to be encrypted. The images in this category include military images describing the location of an operation, medical images like scanning reports, and blueprints of drawings. After the encryption process, the images are compressed using JPEG.
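The channel-extraction step amounts to slicing the third axis of the image array; a brief sketch (our illustration):

    import numpy as np

    def split_channels(rgb):
        """Split an H x W x 3 RGB array into three 8-bit channel images."""
        return rgb[..., 0], rgb[..., 1], rgb[..., 2]

    rgb = np.zeros((2, 2, 3), dtype=np.uint8)    # placeholder image
    r, g, b = split_channels(rgb)
    planes = [(c >> i) & 1 for c in (r, g, b) for i in range(8)]
    assert len(planes) == 24                     # 24 bitplanes, 8 per channel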

V. CONCLUSIONS

In this paper, image encryption techniques are introduced. The paper categorizes images into three groups depending upon the required level of security; by providing a less complex cryptographic mechanism for less sensitive images, we conserve computational resources. Color image encryption is also proposed, and the paper gives importance to user privileges. Since ECC is used to encrypt the images, the real-time constraints of multimedia files are maintained. ECC is highly suitable for environments where computational power and battery power are limited; by using ECC for image encryption, image encryption can be implemented efficiently in mobile devices, and the method can also be used for encryption in sensor networks. Regarding the security of the system, the strength of ECC lies in the elliptic curve discrete logarithm problem: it is very difficult to extract the original message even if the attacker is provided with the intermediate values of the encryption. Since bitplane encryption is used, the attacker will be able to retrieve the entire image if and only if he retrieves every bitplane, which increases the security of the system. After encryption, JPEG compression is also applied, so the resulting image is smaller. This system can be enhanced into an intelligent system which analyzes the destination address and automatically chooses the encryption mechanism. For ensuring more security, a two-phase encryption method can also be used: first the original image is XORed with a key image, and the resulting image is then encrypted based on the bitplanes.

REFERENCES
[1] Lo'ai Tawalbeh et al. (2013), "Use of elliptic curve cryptography for multimedia encryption", IET Information Security, 7(2), 67–74.
[2] Gupta, K., Silakari, S., Gupta, R., Khan, S.A. (2009), "An ethical way of image encryption using ECC", First Int. Conf. on Computational Intelligence, Communication Systems and Networks, Indore.
[3] Gupta, K., Silakari, S. (2009), "Efficient image encryption using MRF and ECC", International Journal of Information Technology, 245–248.
[4] Yadav, V.K., Malviya, A.K., Gupta, D.L., Singh, S., Chandra, G. (2013), "Public key cryptosystem technique elliptic curve cryptography with generator g for image encryption", International Journal of Computer Technology and Application, (1), 298–302.
[5] Ali Soleymani et al. (2013), "A Novel Public Key Image Encryption Based on Elliptic Curves over Prime Group Field", Journal of Image and Graphics, 1(1), 43–49.
[6] Chandravathi, D., Roja, P.P. (2010), "Encoding and decoding of a message in the implementation of elliptic curve cryptography using Koblitz's method", International Journal for Computer Science and Engineering, 1904–1907.


Hole Detection and Healing Techniques in WSN

1Soumya P V and 2Shreeja R

1, 2Computer Science and Engineering Dept, MES College of Engineering, Kuttippuram, Kerala, India

Abstract— One of the major functions of a wireless sensor network is the monitoring of a particular area. Coverage is considered an important measure of the quality of service provided by a wireless sensor network, and the emergence of holes in the area of interest is unavoidable due to the nature of WSN nodes. If there is a hole in the network, data transmitted across the hole will move along the hole boundary nodes again and again, causing depletion of the energy of the boundary nodes. So detection and healing of such coverage holes is an important concern. A number of techniques have been introduced to detect and heal holes in WSNs; this work reviews these techniques and explains their merits and demerits. Index Terms— Coverage, Hole, Hole detection, Hole healing, WSN.

I. INTRODUCTION

A wireless sensor network includes a number of sensor nodes with the capability of communication and computation. Sensor nodes are low-power devices equipped with a power supply, actuator, processor, memory, and radio. In a WSN, sensor nodes use radio transmitters and receivers to communicate with each other. Coverage in a wireless sensor network is defined as the ability of the sensor nodes to monitor a particular area; coverage of an entire area means that every single point within the field of interest is within the sensing range of at least one sensor node. WSNs have many applications, such as weather forecasting, battlefield surveillance, threat identification, health monitoring, environment monitoring, and wildlife monitoring. Such interdisciplinary applications demand random deployment of sensor nodes, and the uncontrolled external environment may cause holes in the wireless sensor network. A hole in a wireless sensor network is an area where a group of sensor nodes stops working and does not take part in any computation or communication. Holes act as a barrier to communication and thus affect the performance of the network: data transmitted around a hole moves along the hole boundary nodes repeatedly, which leads to the creation of a larger hole due to the premature exhaustion of energy at the boundary nodes. Detection of holes identifies the damaged, attacked, and inaccessible nodes in the network. A coverage hole is formed when the sensor nodes are arranged unsystematically in the area.

II. LITERATURE SURVEY

Approaches for detecting holes in a wireless sensor network can be classified by the information used, by the computational model, and by network dynamics. The first category can be further divided into geographical approaches, which use location information; topological approaches, which use connectivity information; and statistical approaches, which use mathematical calculations.


The second category, based on the computational model, can be further classified into centralized methods, which use one or two nodes at a central location, and distributed methods, in which multiple nodes work together to detect the hole. The third category distinguishes techniques that use static sensors from those that use mobile sensors.

A. HDAR
Yang et al. (2010) proposed a hole detection and adaptive geographical routing (HDAR) algorithm to detect holes in wireless sensor networks. It is a geographical approach and hence uses the location information of the sensor nodes. HDAR begins its hole detection algorithm when the angle between two adjacent edges of a node is greater than 120 degrees. The ratio of the network distance to the Euclidean distance is used as the metric to detect a hole and is called the hole detection ratio. If there exists a node N whose hole detection ratio is greater than a predefined threshold value D, then N is considered to be sitting on a hole. One of the main advantages of this approach is that a single node can efficiently detect the hole.
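A hedged sketch of the test just described (the threshold value D and the coordinates are illustrative; the paper leaves D unspecified):

    import math

    def detour_ratio(path):
        """Network (hop-path) distance divided by the straight-line distance."""
        network = sum(math.dist(path[i], path[i + 1]) for i in range(len(path) - 1))
        return network / math.dist(path[0], path[-1])

    def sits_on_hole(path, D=1.8):               # D: assumed threshold value
        return detour_ratio(path) > D

    # A route forced around an obstacle is far longer than the direct line:
    print(sits_on_hole([(0, 0), (0, 5), (5, 5), (5, 0)]))   # True: ratio 3.0 > 1.8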

B. Hop based approach
Zeadally et al. (2012) proposed a hop-based approach to find holes in sensor networks. There are three phases: an information collection phase, where each node exchanges information to build a list of x-hop neighbors; a path construction phase, where communication links between sensor nodes in the list of x-hop neighbors are identified; and finally a path checking phase, where paths are examined to infer boundary and inner nodes. If the communication path of the x-hop neighbors of a node is broken, then it is a boundary node. The algorithm works for a node degree of 7 or higher, which is better than some of the other approaches, but there is a large communication overhead involved in identifying x-hop neighbors.

C. Rips complex method
Martins et al. (2011) used the concepts of the Rips complex and the Cech complex to discover coverage holes. Cech complex: given a collection of sets U = {Uα}, the Cech complex of U, C(U), is the abstract simplicial complex whose k-simplices correspond to nonempty intersections of k + 1 distinct elements of U. Rips complex: given a set of points X = {Xα} in Euclidean n-space and a fixed radius ε, the Rips complex of X, R(X), is the abstract simplicial complex whose k-simplices correspond to unordered (k + 1)-tuples of points in X which are pairwise within Euclidean distance ε of each other. After constructing the neighbor graph, each node checks whether there exists a Hamiltonian cycle in the graph; if not, the node is on a hole boundary. After making its decision, each node broadcasts its status to its neighbors. The algorithm further finds the cycles bounding holes.

D. DBRA
Hsieh et al. (2009) introduced a Distributed Boundary Recognition Algorithm (DBRA) consisting of four phases. The first phase identifies the closure nodes (CNs) which enclose the holes and the boundary of the sensing field. In the second phase, those closure nodes are connected with each other to form coarse boundary cycles (CBCs) identifying each obstacle. The third phase discovers the exact boundary nodes (BNs) and connects them to refine the CBCs into the final boundaries. To find the boundary nodes, first some BNs near the obstacles are selected to initiate the procedure. Since some CN ring-shaped areas are cut off by obstacles, the flooding of packets along these ring-shaped areas must be stopped by the boundaries of the obstacles. Hence, the main idea for selecting the initial BNs is to let each CN flood packets along its two adjacent CN ring-shaped areas; the nodes in those areas having maximum hop counts are then selected as the initial BNs.

E. 3MeSH-DR
Li et al. (2008) proposed 3MeSH (triangular mesh self-healing hole detection algorithm), a distributed, coordinate-free hole detection algorithm. It is assumed that each node has a uniform sensing radius R and a communication radius 2R. Initially a subset of active nodes is selected; an active node x is a neighbor of active node y if they are between R and 2R apart. Nodes that lie within the sensing range of an active node are called redundant nodes. Connectivity information is collected by each active node from its neighbors. If a node detects the presence of a 3MeSH ring defined by all its neighbors, then it is a boundary node. The 3MeSH-DR hole recovery steps are: active node election, broadcasting of neighbor messages, comparison of neighbor messages, hop count calculation, and hole detection and recovery.

F. Voronoi method
J. Kanno et al. (2009) proposed the Voronoi method, in which Voronoi diagrams are used to detect coverage holes. A Voronoi diagram is a diagram of boundaries around each sensor such that every point within a sensor's boundary is closer to that sensor than to any other sensor in the network. The Voronoi edges of a Voronoi cell are the perpendicular bisectors of the lines connecting a particular node to its neighbors. To detect a hole, each node checks whether its Voronoi polygon is covered by its sensing area; if not, a coverage hole exists. After detecting the hole, any of several methods can be used to move mobile nodes to heal it. In the vector-based algorithm (VEC), sensor nodes are pushed from dense regions to sparse regions so that nodes are evenly distributed. The Voronoi-based algorithm (VOR) is a pull algorithm which pulls nodes towards sparse regions. In the minimax algorithm, the target location is the center of the node's Voronoi polygon.
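A sketch of the per-node coverage check using SciPy's Voronoi routine (our illustration; border cells with unbounded regions would need a boundary-aware test):

    import numpy as np
    from scipy.spatial import Voronoi

    def uncovered_sensors(points, sensing_radius):
        """A cell is fully covered iff its farthest Voronoi vertex is within range."""
        vor = Voronoi(points)
        holes = []
        for i, region_idx in enumerate(vor.point_region):
            region = vor.regions[region_idx]
            if not region or -1 in region:
                continue   # unbounded border cell: skipped in this sketch
            verts = vor.vertices[region]
            if np.max(np.linalg.norm(verts - points[i], axis=1)) > sensing_radius:
                holes.append(i)
        return holes

    sensors = np.array([[0, 0], [4, 0], [0, 4], [4, 4], [2, 2]], dtype=float)
    print(uncovered_sensors(sensors, sensing_radius=1.5))   # [4]: centre cell uncovered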

G. CHDM
Zhao et al. (2011) proposed a coverage hole detection method (CHDM) based on mathematical analysis. It is assumed that the network consists of mobile nodes, each with sensing radius r and communication radius 2r. A node p is defined as a neighbor of node q if it lies in q's communication range. On the basis of the central angle between neighbor sensors, different cases for finding coverage holes in the communication circle around a redundant movable node are considered. To patch a hole, a redundant node is moved to an appropriate position inside the hole.

H. HEAL
Senouci et al. (2014) proposed HEAL, a distributed and localized algorithm for hole detection and healing in which node mobility is utilized to recover from holes. HEAL has two main phases:

Hole detection:
- Hole identification
- Hole discovery
- Network boundary identification

Hole healing:
- HHA calculation
- Node relocation

The hole detection step identifies stuck nodes, where packets can possibly get stuck in multi-hop forwarding; a local rule, the TENT rule, is applied at each node in the network to test whether it is a stuck node. To identify holes in the network, HEAL proceeds in three steps. The first step assesses the existence of a hole by identifying stuck nodes: each node in the network executes the TENT rule to check whether it is stuck. First, it orders all 1-hop neighbors of node p counterclockwise; let u and v be a pair of angularly adjacent nodes. Second, it draws the perpendicular bisectors of up and vp, l1 and l2, which intersect at a point o and divide the plane into four quadrants; only the points in the quadrant containing p are closer to p than to u and v. Finally, if o is outside the communication range of p, the angle ∠vpu is a stuck angle and p considers itself a stuck node. In the second step, all the nodes marked as stuck by the TENT rule trigger the discovery of holes; the aim of this step is to find the boundary of the hole and to compute the hole's characteristics. In the third phase, the network boundary nodes also execute the TENT rule and, as a result, detect that they are stuck, which would launch the hole discovery and healing processes even though these nodes are not actually stuck (they are the borders of the network); that is why network boundary-node identification is necessary, to prevent the hole discovery process from being launched by those nodes. Hole healing introduces a novel concept, the hole healing area (HHA), and consists of two sub-tasks: HHA determination and node relocation. This allows a local healing in which only the nodes located at an appropriate distance from the hole are involved in the healing process. The HHA determines the number of nodes that must be relocated to ensure a local repair of the hole.


To determine the HHA, the algorithm estimates the radius of the circle that defines it. After the HHA is determined, nodes move towards the hole center.
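The TENT test reduces to elementary geometry: the intersection o of the perpendicular bisectors of up and vp is the circumcenter of the triangle u, p, v. The following sketch is our own construction with illustrative coordinates, not HEAL's code:

    import numpy as np

    def circumcenter(p, u, v):
        """Intersection of the perpendicular bisectors of pu and pv."""
        ax, ay = p; bx, by = u; cx, cy = v
        d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
        ox = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
              + (cx**2 + cy**2) * (ay - by)) / d
        oy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
              + (cx**2 + cy**2) * (bx - ax)) / d
        return np.array([ox, oy])

    def is_stuck_angle(p, u, v, comm_range):
        """Angle upv is stuck when o falls outside p's communication range."""
        return np.linalg.norm(circumcenter(p, u, v) - np.array(p)) > comm_range

    # A wide angular gap (about 151 degrees) pushes o far from p:
    print(is_stuck_angle(p=(0, 0), u=(1, 0), v=(-0.9, 0.5), comm_range=1.0))  # True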

III. PERFORMANCE ANALYSIS

A comparative study of the algorithms discussed so far is presented here. Communication overhead, node density, scalability, and mobility are taken as the parameters for comparison. The performance comparison is summarized in Table I.

TABLE I: COMPARISON OF TECHNIQUES

Technique          Communication Overhead   Node Density   Scalability   Mobility
HDAR               Low                      Medium         Medium        No
Hop based method   High                     Medium         Medium        No
Rips method        High                     Medium         Medium        No
DBRA               High                     Low            Medium        No
3MeSH-DR           High                     Medium         High          Yes
Voronoi method     Low                      Medium         High          Yes
CHDM               Low                      Medium         High          Yes
HEAL               Low                      Low            High          Yes

IV. CONCLUSION

Geographical approaches for hole detection require GPS-enabled sensors and are expensive; they consume a lot of energy, and it is not practical for sensors to know their exact location in a hostile environment. Topological approaches provide realistic results but involve communication overhead. Mobile sensor networks give better coverage, but sensors moved over a long distance consume more energy; if the energy of a sensor is so low that it dies shortly after being relocated to the target region, the effort is wasted. So a combination of a geographical technique and mobile sensor nodes would provide large-area coverage with low communication overhead, and it should consider the energy of the sensor nodes.

REFERENCES

[1] Sharma (2012), "Holes in wireless sensor networks", IJCSI, Vol. II, Issue 4.
[2] Zongming Fei, Jianjun Yang (2010), "HDAR: Hole Detection and Adaptive Geographic Routing for Ad Hoc Networks", IEEE, Vol. 10, Issue 4.
[3] S. Zeadally, N. Jabeur, and I.M. Khan (2012), "Hop-based approach for holes and boundary detection in wireless sensor networks", IET Wireless Sensor Systems, Vol. 2, No. 4.
[4] Feng Yan, Philippe Martins, Laurent Decreusefond (2011), "Connectivity based Distributed Coverage Hole Detection in Wireless Sensor Networks", IEEE Global Telecommunications Conference (GLOBECOM 11), pp. 1-6.
[5] Kun-Ying Hsieh, Jang-Ping Sheu (2009), "Hole Detection and Boundary Recognition in Wireless Sensor Networks", IEEE, Vol. 9, Issue 4.
[6] Xiaoyun Li, David K. Hunter (2008), "Distributed Coordinate-free Hole Recovery", Proceedings of the IEEE International Conference on Communications Workshops (ICC 08), pp. 189-194.
[7] J. Kanno, J.G. Buchart, R.R. Selmic, and V. Phoha (2009), "Detecting coverage holes in wireless sensor networks", Proceedings of the 17th Mediterranean Conference on Control and Automation, pp. 452-457, Thessaloniki, Greece.
[8] Erdun Zhao, Juan Yao (2011), "A Coverage Hole Detection Method and Improvement Scheme in WSNs", IEEE, Vol. 11, Issue 5.
[9] Mustapha Reda Senouci, Abdelhamid Mellouk, and Khalid Assnoune (2014), "Localized Movement-Assisted Sensor Deployment Algorithm for Hole Detection and Healing", IEEE Transactions on Parallel and Distributed Systems, Vol. 25, No. 5.


A Study on M-Sand Bentonite Mixture for Landfill Liners

1Anna Rose Varghese and 2Anjana T R
1P.G. Student, Dept. of Civil Engineering, Thejus Engineering College
2Asst. Professor, Dept. of Civil Engineering, Thejus Engineering College

Abstract—In modern landfills, the waste is contained by a liner system. The primary purpose of the liner system is to isolate the landfill contents from the environment and, therefore, to protect the soil and ground water from pollution originating in the landfill. Due to its high swelling and adsorption capability, bentonite is a commonly used material for liners. Sand is the basic material used with bentonite to improve its properties: mixing sand with appropriate bentonite contents yields sand-bentonite mixtures having low hydraulic conductivity that can be used as hydraulic containment liners. A series of laboratory experiments was conducted to evaluate changes in soil properties such as consistency limits, compaction characteristics, hydraulic conductivity, strength characteristics, and free swell index. In this study, compaction tests were conducted to determine the OMC and MDD of compacted M-Sand-bentonite mixtures, and hydraulic conductivity tests were conducted to assess their hydraulic conductivity. The M-Sand-bentonite mixture is tested to find the exact proportion of the mixture which satisfies the landfill liner requirements. Index Terms— M-Sand bentonite mixture, bentonite, hydraulic conductivity, liners, compaction, Atterberg limits, UCC, free swell index, combinations, leachate.

I. INTRODUCTION

Waste liquids in the environment may result from several sources, e.g., uncontrolled dumping of pure solvents, spills, or infiltration of water through solid waste in landfill disposals resulting in contaminated leachate. Contaminants contained in this leachate can lead to significant damage to the environment and to human health due to their mobility and solubility. One of the preferred methods of dealing with this kind of environmental problem is to dispose of the waste in landfills. In landfills, compacted clay liners (CCLs) and geosynthetic clay liners (GCLs) are the most common materials used in the construction of impermeable liners. According to international regulations, landfills must be constructed with containment systems with low hydraulic conductivity (≤ 1×10^-9 m/sec) in order to avoid contamination of groundwater and soil. Easy availability and economic feasibility make clayey soils the most preferred liner materials. Clayey soil liners are suitable only when the temperature and moisture fluctuations are not high; otherwise, they can form cracks that cause the hydraulic conductivity to rise. Bentonite clay is widely used in these composites for waste containment due to its low hydraulic conductivity, high swelling potential, and high cation exchange capacity. But when exposed to high concentrations of inorganic pollutants, the bentonite clay can degrade, with a subsequent increase in hydraulic conductivity.


According to Daniel (1993), any material to be used as a liner should have the following properties:
(i) Low permeability: permeability defines the fluid transmission capability of a soil and is the measure of the material's ability to contain the leachate. A low permeability, generally 10^-9 m/s, is required.
(ii) Durability and resistance to weathering: the quality of the material to withstand the forces of alternating wet/dry and freeze/thaw cycles.
(iii) Constructability: the material should be reasonably workable in terms of placement and compaction under field conditions.
(iv) Compatibility with leachate: the liner material must maintain its strength and low permeability even after prolonged contact with leachate.

For a landfill liner the following requirements are necessary:
- Hydraulic conductivity ≤ 1×10^-7 cm/sec
- Fines ≥ 20-30%
- Gravel ≤ 30%
- Plasticity index ≥ 7-10%
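The two figures quoted for the permeability requirement are the same value expressed in different units:

\[ 1 \times 10^{-7}\ \mathrm{cm/s} \;=\; 1 \times 10^{-7} \times 10^{-2}\ \mathrm{m/s} \;=\; 1 \times 10^{-9}\ \mathrm{m/s} \]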

For most countries there is a need for a landfill liner that is natural, locally available, and installable in an inexpensive way. The incorporation of bentonite fines into a naturally occurring soil, e.g. sand, significantly alters the physical and chemical properties of the soil. One material that can meet the hydraulic conductivity criteria without suffering from shrinkage cracking is a sand-bentonite mixture [15]. Due to the unavailability of river sand, M-Sand can be used as an alternative; the sand particles provide mechanical stability and prevent shrinkage of the bentonite. An investigation is carried out on M-Sand-bentonite mixtures with different percentages of bentonite addition, starting from 5%, to obtain the optimum M-Sand-bentonite mixture which satisfies the landfill liner requirements. Several standard tests were also performed to obtain the properties of the M-Sand and the bentonite. Based on the test results, a suitable M-Sand-bentonite mixture that yields low hydraulic conductivity is selected for use as a liner in hydraulic containment applications.

II. MATERIALS AND PROPERTIES

A. Bentonite
The bentonite used in this study was powdered sodium bentonite procured from Karnataka. This bentonite is generally used as drilling mud in boring activities. The liquid limit of this particular bentonite is obtained as 445% and the free swell index as 220%. The basic properties of bentonite are presented in Table I.

B. M-Sand
The sand used in this study was local M-Sand, which is typically used as a construction material. The basic properties of M-Sand are shown in Table II.

III. EXPERIMENTAL PROGRAMME

A. Atterberg Limits
Atterberg limits were determined according to IS 2720 (Part 5) - 1985. Liquid limits have been shown to be useful indicators of clay behavior.

TABLE I: BASIC PROPERTIES OF BENTONITE

Property                         Value
Liquid Limit                     445%
Plastic Limit                    63%
Plasticity Index                 382%
Specific gravity                 2.6
Free swell index                 220%
Moisture content (as supplied)   13.64%
Max Dry Density                  10.69 kN/m³
OMC                              53%


TABLE II: BASIC PROPERTIES OF M-SAND

Property                       Value
Uniformity Coefficient, Cu     0.76
Coefficient of Curvature, Cc   6.07
Percentage gravel              0.70%
Percentage sand                87.70%
Percentage fines               11.60%
D10                            0.13
Specific gravity               2.70
Fineness modulus               3.32

B. Compaction test
Compaction tests were carried out in accordance with IS 2720 (Part 7) - 1973 on the M-Sand bentonite mixture for the different combinations. The OMC and MDD for all the combinations were determined; the water content was determined by the oven drying method.

C. Unconfined compression test
Unconfined compression tests were conducted as per IS 2720 (Part 10) - 1973, and the strength behavior of the M-Sand bentonite mixture was determined.

D. Free Swell Test
Free swell tests were carried out in accordance with IS 2720 (Part XL) - 1977 for the various combinations of the M-Sand bentonite mixture.

E. Consolidation test
The experiments were carried out in a standard consolidation apparatus as per IS 2720 (Part 15) - 1986 specifications. The samples were carefully filled into the consolidation mould and fully saturated, and a seating load of 0.05 kg/cm² was applied.

IV. RESULTS AND DISCUSSIONS

A. Atterberg limits for all combinations
Liquid limit, plastic limit, and shrinkage limit tests were conducted for four combinations of the M-Sand bentonite mixture. The results obtained are shown in Table III.

TABLE III: CONSISTENCY LIMITS

            Bentonite content (%)   LL (%)   PL (%)   PI (%)   SL (%)
Comb. 1     5                       27       -        NP       -
Comb. 2     10                      31       -        NP       28.47
Comb. 3     15                      36       25.11    10.89    25.74
Comb. 4     20                      40       26.98    13.02    21.64

For combinations 1 and 2 the plastic limit could not be determined, and those combinations are reported as non-plastic. For combination 4 the plasticity index is obtained as 17.14%, which is greater than 10%; combination 4 satisfies the landfill liner criterion for Atterberg limits.

V. COMPACTION CHARACTERISTICS

The compaction tests were carried out for different combinations of M-Sand bentonite mixtures. Figures 1 and 2 show the variation of maximum dry density and optimum moisture content with the addition of bentonite. When more bentonite was added, the OMC increased and the MDD decreased; this is due to the activity of bentonite. The volume of adsorbed water film around the clay particles increases the water content and decreases the dry unit weight.


Figure 1: Variation of OMC with bentonite addition
Figure 2: Variation of MDD with bentonite content

When fine content (i.e., bentonite) is mixed with M-Sand, more water is required in compaction in order to achieve the maximum dry unit weight. When water is added to the mixture, the water acts as a lubricant that allows soil particles to move closer to each other; air voids are minimized, and a higher unit weight can be achieved. When additional water was added beyond the optimum water content, the dry unit weight of the compacted sand-bentonite mixtures decreased drastically, particularly at high bentonite contents. The bentonite swelled further when more water was added; at this stage, the additional water and swelled bentonite, which is lighter than sand, occupied more space in the compaction mould, resulting in a decrease in the dry unit weight of the mixture.

A. Effect of Bentonite on Unconfined Compressive Strength of the M-Sand Bentonite Mixture
The unconfined compressive strength of the M-Sand bentonite mixture increases as the bentonite content increases. The results are shown in Figure 3.

Figure 3: Variation of UCC value with bentonite addition

B. Free Swell Index
The results of the free swell index tests are shown in Figure 4. From the test results it is observed that the free swell index increases with the percentage of bentonite; the increase is marginal up to 10% and rapid beyond 15%.

Figure 4: Variation of Free Swell index with bentonite addition


C. Effect of Bentonite on Hydraulic Conductivity of M-Sand-Bentonite Mixtures
The hydraulic conductivity of M-Sand-bentonite mixtures decreases with increasing bentonite content; the results are shown in Table IV. The common regulatory requirement for compacted soil liners states that the hydraulic conductivity should be less than 1×10^-9 m/s. Combination 1, at 1.45×10^-9 m/s, does not meet this criterion: a very low bentonite content leads to an uneven distribution of bentonite within the sand matrix, which results in preferential flow paths.

TABLE IV: VARIATION OF COEFFICIENT OF PERMEABILITY WITH BENTONITE ADDITION

            Bentonite (%)   k (m/s)
Comb. 1     5               1.45×10^-9
Comb. 2     10              1.158×10^-9
Comb. 3     15              7.356×10^-10
Comb. 4     20              6.86×10^-10

VI. CONCLUSIONS

From this study it can be concluded that the hydraulic conductivity of M-Sand bentonite mixtures decreases as the bentonite content increases. A combination of 80% M-Sand and 20% bentonite is selected for use in hydraulic containment liners. Table V shows the results for Combination 4 (80% M-Sand and 20% bentonite). For combination 4 the liquid limit is obtained as 40% and the plastic limit as 22.86%, hence the plasticity index is 17.14%, which is greater than 10%. The hydraulic conductivity is obtained as 6.86×10^-10 m/s, which is less than the required value of 1×10^-9 m/s. Combination 4 satisfies all the criteria for landfill liners and is therefore selected as the optimum mixture for landfill liners.
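The plasticity index quoted for combination 4 follows directly from the measured limits:

\[ PI = LL - PL = 40\% - 22.86\% = 17.14\% > 10\% \]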

TABLE V: RESULTS OF COMBINATION 4

Results of Combination 4
LL    40%
PL    22.86%
PI    17.14%
SL    21.64%
OMC   20.00%
MDD   17.27 kN/m³
k     6.86×10^-10 m/s

REFERENCES

[1] Ahonen, L. et al. (2008), "Quality Assurance of the Bentonite Material", Geological Survey of Finland.
[2] Chalermyanont, T. and Arrykul, S. (2005), "Compacted sand-bentonite mixtures for hydraulic containment liners", Songklanakarin J. Sci. Technol., Vol. 27, No. 2.
[3] Daniel, D.E. (1993), "Clay Liners", in Geotechnical Practice for Waste Disposal (ed. David E. Daniel), Chapman & Hall, London, UK, 137-163.
[4] Divya, G. and Sowmya, V.K. (2013), "Zeolite-Sand-Bentonite mixture for hydraulic containment liner", Proceedings of Indian Geotechnical Conference, December 22-24.
[5] Gleason, H.M., Daniel, D.E. and Eykholt, R.G. (1996), "Calcium and Sodium Bentonite for Hydraulic Containment Applications", Journal of Geotechnical and Geoenvironmental Engineering, Vol. 123, No. 5.
[6] Gueddouda, M.K., Lamara, M., Abou-bekr, N. and Taibi, S. (2010), "Hydraulic behaviour of dune sand bentonite mixtures under confining stress", Global Journal of Researches in Engineering, Vol. 10, Issue 1.
[7] Irene, M.C., Lo, M., Alex, F.T.L. and Xiaoyun, Y. (2004), "Migration of heavy metals in saturated sand bentonite or soil admixture", Journal of Environmental Engineering, Vol. 130, No. 8.
[8] Irvanian, A. and Bilsel, H. (2009), "Characterisation of compacted sand bentonite mixtures as landfill barriers in North Cyprus", 2nd International Conference on New Developments in Soil Mechanics and Geotechnical Engineering, 28th-30th May 2009.
[9] Mollins, L.H., Stewart, D.I. and Cousens, T.W. (1996), "Predicting the Properties of Bentonite-Sand Mixtures", Clay Minerals, 31, 243-252.
[10] Muntohar, A.S. (2004), "Swelling and compressibility characteristics of soil-bentonite mixtures", Department of Civil and Environmental Engineering, University of Malaya.
[11] Rimsheena, T.P. and Remya, S. (2013), "Effect of Organic and Inorganic Fluids on Bentonite-Sand Mixtures as Landfill Liners", Proceedings of Indian Geotechnical Conference.
[12] Salim, K. (2011), "Preservation of desert environments from urban pollution", International Journal of Water Resources and Arid Environments, Vol. 5.
[13] Sivapullaiah, P.V., Sridharan, A. and Stalin, V.K. (2000), "Hydraulic conductivity of bentonite-sand mixtures", Canadian Geotechnical Journal, Vol. 37, 406-413.
[14] Studds, P.G., Stewart, D.I. and Cousens, T.W. (1998), "The effect of salt solutions on the properties of bentonite-sand mixtures", Clay Minerals, 33, 651-660.
[15] Tsai, T. and Vesilind, P.A. (1998), "A New Landfill Liner to Reduce Ground-water Contamination from Heavy Metals", Journal of Environmental Engineering, Vol. 124, No. 11.


Microstructure, Mechanical & Wear Characteristics of Al 336/ (0-10) Wt. % SiCp Composites

1Harikrishnan. T, 2M.R. Sarathchandradas and 3V.R. Rajeev
1,2Mechanical Engineering Department, Kerala University, Sree Chitra Thirunal College of Engineering, Trivandrum, Kerala-695018, India
3Mechanical Engineering Department, Kerala University, College of Engineering Trivandrum, Trivandrum, Kerala-695016, India

Abstract—In the present study, Al 336/ (0-10) wt. % SiCp composites were prepared using the stir casting method. The microstructure of Al 336 alloy showed α-aluminium and eutectic silicon; apart from these, SiC particles were found to be uniformly distributed in the Al 336 - 5 wt. % SiCp and Al 336 - 10 wt. % SiCp composites. The ultimate tensile strength of the Al 336 - 10 wt. % SiCp composite (241 MPa) was found to be the highest compared to Al 336 - 5 wt. % SiCp (192 MPa) and Al 336 alloy (130 MPa). The hardness of the Al 336 - 10 wt. % SiCp composite (76 BHN) was likewise the highest compared to the Al 336 - 5 wt. % SiCp composite (64 BHN) and Al 336 alloy (50 BHN). Wear characteristics of the Al 336/ (0-10) wt. % SiCp composites were studied using a pin-on-disc configuration. It was found that as the load increases from 10 N to 30 N, the wear loss of the composites increases, attributed to increased metallic intimacy. As the sliding velocity increases from 1 m/s to 4 m/s, the wear loss decreases, attributed to less time of contact between the asperities of the mating surfaces. The wear resistance of the Al 336 - 10 wt. % SiCp composite was found to be better than that of the Al 336 - 5 wt. % SiCp composite and the Al 336 alloy. Index Terms— Al 336 alloy, Al 336/ SiCp composites, Stir casting method, Microstructure, Wear test, Pin on disc tribometer.

I. INTRODUCTION

Metal matrix composites (MMCs) are a range of advanced materials providing properties that cannot normally be achieved by conventional materials. These properties include increased strength, higher elastic modulus, higher service temperature, improved wear resistance, decreased part weight, low thermal shock, high electrical and thermal conductivity, and low coefficient of thermal expansion compared to conventional metals and alloys. The excellent mechanical properties of these materials and their relatively low production cost make them very attractive for a variety of applications in the automotive and aerospace industries. Nowadays particulate-reinforced metal-matrix composites have attracted considerable attention over other MMCs. Silicon carbide, boron carbide, and aluminium oxide are the key particulate reinforcements and can be obtained in varying levels of purity and size distribution.


In this work SiC particulate was selected as the reinforcement, due to the excellent combination of its mechanical properties, such as high strength, low density, low thermal expansion, high thermal conductivity, high hardness, high elastic modulus, excellent thermal shock resistance, and superior chemical inertness, at a lower cost. Aluminium matrix composites (AMCs) can be used in high-tech structural and functional applications including aerospace, defense, automotive, and thermal management areas: AMCs find application in space shuttles, aircraft, and automobile components like braking systems, pistons, cylinder heads, and crankshafts. Al 336 alloy was selected as the matrix material; it finds wide application in automotive and diesel pistons, pulleys, sheaves, and other applications where good high-temperature strength, higher thermal conductivity, low coefficient of thermal expansion, and good resistance to wear are required. Thus in this work Al 336 alloy is the matrix material and SiC the reinforcement. The main objective of this work is to find the influence of SiC particles on the wear behaviour of Al 336 alloy matrix composites.

II. MATERIALS AND PREPARATIONS

A. Al 336 alloy preparation
Al 336 alloy was prepared using an oil-fired tilting furnace. The main charge components used for the preparation of Al 336 alloy were primary aluminium ingot (6063 aluminium wrought alloy) and Al-50 wt. % Si, Al-50 wt. % Cu, and Al-10 wt. % Ni master alloys. Al 336 alloy has the following chemical composition (in wt. %).

TABLE I. CHEMICAL COMPOSITION (IN WT. %) OF AL 336 ALLOY

Element   Al      Mg     Si   Fe     Mn     Cu    Ni   Zn
wt. %     82.77   0.72   12   0.65   0.22   1.5   2    0.14

B. Al 336/ (0-10) wt. % SiCp composite preparation using the stir casting method
The liquid-phase stir casting method was used for the preparation of Al 336/ (0-10) wt. % SiCp composites. Initially the temperature of the stir casting furnace was set to 900 ºC, and the Al 336 alloy, selected as the matrix alloy, was charged into the furnace when the furnace temperature reached the set value. The SiCp reinforcement was preheated at 500 ºC for 5 hours in a heating furnace. After the required molten condition was achieved, the temperature of the stir casting furnace was lowered to 720 ºC. When the molten Al 336 alloy reached 720 ºC, lumps of magnesium (1-2 wt. %) wrapped in aluminium foil were plunged into the melt; this was done to improve the wettability and fluidity between the matrix and the SiC reinforcement. A stirrer driven by a variable-speed motor was inserted into the graphite crucible containing the molten Al 336 alloy, thereby creating a vortex in the melt. After the formation of the vortex, preheated SiC particles were added at a uniform rate using an injecting gun. The stirring of the composite slurry was performed at 450 rpm for 5 minutes, and an upward and downward feed was given to the stirrer rod to obtain uniform mixing of the SiC reinforcement in the Al 336 alloy matrix. After thorough stirring, the molten Al 336/ SiCp composite mixture was taken out of the crucible and poured into a metallic mould.

III. RESULTS

A. Microstructure characteristics of Al 336/ (0-10) wt. % SiCp composites
Microstructure study of the aluminium is very important in predicting the nature of the interaction between the phases in the alloy. The microstructures of the prepared Al 336 alloy, Al 336/ 5 wt. % SiC, and Al 336/ 10 wt. % SiC composites were obtained and analyzed. The microstructure of Al 336 alloy consists mainly of α-aluminium dendrites and needle-shaped eutectic silicon; the morphology of the eutectic silicon, namely its size and shape, plays an important role in determining the mechanical properties of this alloy.



Apart from α-aluminium and eutectic silicon, SiC particles were found to be uniformly distributed in the Al 336/ 5 wt. % SiCp and Al 336/ 10 wt. % SiCp composites.

B. Mechanical and physical characteristics of Al 336/ (0-10) wt. % SiCp composites
The densities of the prepared Al 336 alloy, Al 336/ 5 wt. % SiC, and Al 336/ 10 wt. % SiC composites were measured using Archimedes' principle; their density values were found to be 2.64 g/cc, 2.49 g/cc, and 2.32 g/cc respectively. A Brinell hardness tester was used to measure the hardness of the Al 336/ (0-10) wt. % SiC composites. The hardness of the Al 336 alloy, Al 336/ 5 wt. % SiC, and Al 336/ 10 wt. % SiC composites after heat treatment was obtained as 50 BHN, 64 BHN, and 76 BHN respectively; thus the hardness of the Al 336/ 10 wt. % SiCp composite (76 BHN) was found to be the highest. Also, the ultimate tensile strength of the Al 336/ 10 wt. % SiCp composite (241 MPa) was found to be the highest compared to Al 336/ 5 wt. % SiCp (192 MPa) and Al 336 alloy (130 MPa).
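The paper does not spell out its density calculation; the standard Archimedes relation, assuming the specimen is weighed in air (Wa) and fully immersed in water (Ww), would be:

\[ \rho_{\mathrm{composite}} = \frac{W_a}{W_a - W_w}\,\rho_{\mathrm{water}} \]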


Figure 1. Microstructures of: (a) Al 336 alloy, (b) Al 336/ 5 wt. % SiCp and (c) Al 336/ 10 wt. % SiCp composites at 1000 X magnification

C. Wear test
The wear tests were carried out using a pin-on-disc tribometer. Pin samples for the wear test were made from Al 336 alloy, Al 336/ 5 wt. % SiC, and Al 336/ 10 wt. % SiC composites as per the ASTM standard. During the wear test, the test specimens were made to slide against a rotating disc for a given amount of time; thereafter the specimens were removed, cleaned, and weighed to determine the weight loss due to wear under dry sliding conditions. The wear test was carried out at room temperature in order to find the wear characteristics of the Al 336 alloy, Al 336 - 5 wt. % SiC, and Al 336 - 10 wt. % SiC composites. The effect of variation in sliding distance, applied load, and sliding velocity on wear loss was analyzed for each sample prepared.

Figure 2. Comparison of wear loss versus sliding distance at different applied loads (10 N, 20 N, 30 N) between (a) Al 336 alloy & Al 336/ 5 wt. % SiCp composites and (b) Al 336/ 5 wt. % SiCp & Al 336/ 10 wt. % SiCp composites, sliding at a constant sliding velocity of 2 m/s

Page 106: Gijet volume1 issue2

100

In the first stage of the wear test, the effect of applied load and sliding distance on wear loss was analyzed. The variation of wear loss with sliding distance was studied at 3 different loads (10 N, 20 N & 30 N), with the sliding velocity kept constant at 2 m/s. The wear test was performed on Al 336/ (0-10) wt. % SiCp composites at 250 m, 500 m, 750 m & 1000 m sliding distance; the results obtained are shown graphically in Figure 2. Analyzing the results, it is clear that wear loss increases with increasing applied load in all three cases (Al 336 alloy, Al 336/ 5 wt. % SiC, and Al 336/ 10 wt. % SiC composites). This is because, as the load is increased, the intensity of contact between the sliding surfaces increases; more and more asperity-to-asperity contact is established, and thereby the wear loss increases. We can also see from Figure 2 that the wear loss of the Al 336 alloy, Al 336/ 5 wt. % SiC, and Al 336/ 10 wt. % SiC composites increases with sliding distance: increased sliding distance causes more asperity-to-asperity contact time and results in an increased real area of contact, which in turn increases the wear loss.

Figure 3. Comparison of wear loss versus sliding velocity at different applied loads (10 N, 20 N, 30 N) between (a) Al 336 alloy & Al 336 - 5 wt. % SiCp composite and (b) Al 336 - 5 wt. % SiCp and Al 336 - 10 wt. % SiCp composites, sliding through a constant sliding distance of 500 m

We can also see from Figure 2(a) that the wear loss of the Al 336/ 5 wt. % SiC composite is lower compared to the Al 336 alloy, and similarly from Figure 2(b) that the wear loss of the Al 336/ 10 wt. % SiC composite is lower compared to the Al 336/ 5 wt. % SiC composite. That is, the Al 336/ 10 wt. % SiC composite possesses better wear resistance than the Al 336/ 5 wt. % SiC composite and the Al 336 alloy, and the Al 336/ 5 wt. % SiC composite possesses better wear resistance than the Al 336 alloy. In the second stage of the wear test, the variation of wear loss with sliding velocity was studied at 3 different loads (10 N, 20 N & 30 N), with the sliding distance kept constant at 500 m. The wear test was performed on Al 336/ (0-10) wt. % SiCp composites at sliding velocities of 1 m/s, 2 m/s, 3 m/s, and 4 m/s; the results obtained are shown graphically in Figure 3. Analyzing the results, it is clear that the wear loss of the Al 336 alloy, Al 336/ 5 wt. % SiC, and Al 336/ 10 wt. % SiC composites decreases with increasing sliding velocity; this decrease in wear loss is due to less time for asperity-to-asperity contact. By analyzing Figures 2 and 3, we can conclude that the Al 336/ 10 wt. % SiC composite has higher wear resistance than the Al 336/ 5 wt. % SiC composite and the Al 336 alloy, and similarly the Al 336/ 5 wt. % SiC composite has higher wear resistance than the Al 336 alloy.

IV. CONCLUSION

• Al 336/ (0-10) wt. % SiCp composites were successfully prepared using the stir casting method.
• Microstructure features of Al 336 alloy showed α-aluminium and eutectic silicon. Apart from α-aluminium and eutectic silicon, SiC particles were found to be uniformly distributed in the Al 336/ 5 wt. % SiCp and Al 336/ 10 wt. % SiCp composites.
• Al 336/ (0-10) wt. % SiCp composites showed the maximum ultimate tensile strength for the Al 336/ 10 wt. % SiCp composite (241 MPa) compared to Al 336/ 5 wt. % SiCp (192 MPa) and Al 336 alloy (130 MPa) respectively.
• Al 336/ (0-10) wt. % SiCp composites showed the maximum hardness for the Al 336/ 10 wt. % SiCp composite (76 BHN) compared to Al 336/ 5 wt. % SiCp (64 BHN) and Al 336 alloy (50 BHN) respectively.
• As the applied load is increased from 10 N to 30 N, the wear loss of the Al 336/ (0-10) wt. % SiCp composites was found to increase, attributed to increased metallic intimacy. Further, the wear resistance of the Al 336/ 10 wt. % SiCp composite was found to be better compared to the Al 336/ 5 wt. % SiCp composite and the Al 336 alloy respectively.
• As the sliding velocity is increased from 1 m/s to 4 m/s, the wear loss of the Al 336/ (0-10) wt. % SiCp composites was found to decrease, attributed to less time of contact between the asperities of the mating surfaces. Further, the wear resistance of the Al 336/ 10 wt. % SiCp composite was found to be better compared to the Al 336/ 5 wt. % SiCp composite and the Al 336 alloy respectively.
• The wear loss of the Al 336/ (0-10) wt. % SiCp composites was found to decrease with increasing sliding velocity; this decrease in wear loss is due to less time for asperity-to-asperity contact.

ACKNOWLEDGEMENTS

The authors are grateful for the funds received from Centre for Engineering Research and Development (CERD), Government of Kerala for this work; and for the support received from the staff of College of Engineering Trivandrum, University of Kerala.

REFERENCES

[1] Rajeev, V.R., Dwivedi, D.K. & Jain, S.C. (2010), "Dry reciprocating wear of Al–Si–SiCp composites: A statistical analysis", Tribology International 43, pp. 1532–1541.
[2] Uyyuru, R.K., Surappa, M.K. & Brusethaug, S. (2007), "Tribological behaviour of Al-Si-SiCp composites/automobile brake pad system under dry sliding conditions", Tribology International 40, pp. 365–373.
[3] Basavarajappa, S. & Chandramohan, G. (2005), "Wear Studies on Metal Matrix Composites: a Taguchi Approach", Journal of Materials Science & Technology, Vol. 21, No. 6, pp. 845–850.
[4] Vijayarangan, S. & Rajedran, I. (2006), "Wear behaviour of A356/25 SiCp aluminium matrix composites sliding against automobile friction material", Wear 261, pp. 812–822.
[5] Mitrovic, S., Babic, M., Stojanovic, B., Miloradovic, N., Pantic, M. & Dzunic, D. (2012), "Tribological Potential of Hybrid Composites based on Zinc and Aluminium Alloys Reinforced with SiC and Graphite Particles", Tribology in Industry, Vol. 34, No. 4, pp. 177–185.
[6] Naher, S., Brabazon, D. & Looney, L. (2004), "Development and assessment of a new quick quench stir caster design for the production of metal matrix composites", Journal of Materials Processing Technology 166, pp. 430–439.
[7] Surappa, M.K. (2003), "Aluminium matrix composites: challenges and opportunities", Sadhana, Vol. 28, pp. 319–334.
[8] Srivatsan, T.S. (1991), "Processing techniques for particulate-reinforced metal aluminium matrix composites", Journal of Materials Science, Vol. 26, pp. 5965–5978.

Page 108: Gijet volume1 issue2

Controller based Auto Agricultural System

Shah Payal Jayeshkumar

Faculty of Technology and Engineering, M S University, Kalabhavan, Vadodara, Gujarat, India

Abstract— An AUTO AGRICULTURE SYSTEM is developed not only to optimize the water used for agricultural crops but also to reduce the effort of farmers and to provide exactly the quantity of water that the crop requires. The actual water requirement of a crop depends on the moisture content already present in the soil, the actual temperature and relative humidity of the atmosphere, the wind speed, and the type of crop. In this system, a gateway (control) unit handles the sensor information and triggers the actuators (valves). The system has a distributed wireless network of soil moisture and compost sensors placed in the root zone of the plants, and a wind sensor placed at 2 m height above ground level. Index Terms— microcontroller, solenoid valve, QEI, sensor, irrigation, latitude, sunshine hour.

I. INTRODUCTION

In today's fast-paced world we want everything to be automated: our lifestyle demands everything be fast, automated and remote controlled. In the world of advanced electronics, life should be simpler and more convenient; hence the idea of a CONTROLLER BASED AUTO AGRICULTURE SYSTEM is presented, a model for controlling irrigation and other facilities to help millions of farmers. The motivation comes from countries whose economy depends on agriculture and whose climatic conditions lead to a lack of rain, creating the need for a smart and efficient way of carrying out agricultural activities such as irrigation. Farmers working the land depend on rains and bore wells; even if the farm has a water pump, manual involvement by the farmer is required to turn the pump on and off whenever needed. In this method of irrigation, sensor technology is used with a microcontroller to make a smart switching device for the pump and the different solenoid valves. This paper is organized as follows: Section II describes the different auto irrigation methods presently used, Section III gives the total water requirement for a crop (numerical analysis), Section IV explains the system block diagram and the working of the basic building blocks in brief, Section V concludes, and the references follow.

II. DIFFERENT AUTO IRRIGATION METHODS PRESENTLY USED

Research on 'automatic irrigation' is ongoing, and some of these methods have even been implemented, but they have only a timer mechanism: for a specific time period the solenoid valve is switched ON to provide water to the crops. The actual water requirement of a crop, however, is not a constant quantity. It depends on the moisture content already present in the soil, the actual temperature and relative humidity of the atmosphere, the wind speed and the type of crop, so real-time soil moisture measurement and some


databases relating to the effects of crop type, temperature variation, atmospheric relative humidity variation and wind velocity variation are also required to determine the actual moisture requirement; this is not included in the presently available auto irrigation systems [4][5][6]. In addition, a compost sensor is also interfaced here. It provides information on the amount of nitrogen, potassium and phosphorus present in the soil, which helps decide when, and how much, compost is required. In this system, a soil moisture sensor, a wind velocity sensor, an atmospheric temperature and RH sensor, and a compost sensor for the soil are interfaced to the microcontroller.

III. WATER REQUIREMENT FOR CROPS

Several formulas and methods are available to calculate the water requirement [1]:
A. Blaney–Criddle method
B. Radiation method
C. Pan evaporation
D. Penman's equation

Of all these methods, the Penman method gives the most accurate result and uses the largest number of parameters as input, so it is the one adopted here [1][3][12]. Penman's equation is based on sound theoretical reasoning and is obtained by a combination of the energy-balance and mass-transfer approaches. Incorporating some of the modifications suggested by other investigators, it is

$$PET = \frac{A\,H_n + \gamma\,E_a}{A + \gamma} \qquad \text{[1]}$$

where PET = daily potential evapotranspiration in mm per day; A = slope of the saturation vapour pressure versus temperature curve at the mean air temperature, in mm of mercury per °C (Table I); Hn = net radiation in mm of evaporable water per day; Ea = parameter including wind velocity and saturation deficit; γ = psychrometric constant = 0.49 mm of mercury per °C. The net radiation is estimated by

$$H_n = H_a(1-r)\left(a + b\,\frac{n}{N}\right) - \sigma\,T_a^{4}\left(0.56 - 0.092\sqrt{e_a}\right)\left(0.10 + 0.90\,\frac{n}{N}\right) \qquad \text{[12]}$$

where Ha = incident solar radiation outside the atmosphere on a horizontal surface, expressed in mm of evaporable water per day (a function of the latitude and the period of the year, as indicated in Table II); a = a constant depending upon the latitude Φ, given by a = 0.29 cos Φ; b = a constant with an average value of 0.52; n = actual duration of bright sunshine in hours; N = maximum possible hours of bright sunshine (a function of latitude, as indicated in Table III); r = reflection coefficient (albedo), representative ranges of which are given in [15]; σ = Stefan–Boltzmann constant = 2.01 × 10⁻⁹ mm/day; Ta = mean air temperature in kelvin = 273 + °C; ea = actual mean vapour pressure in the air in mm of mercury. The parameter Ea is estimated as

$$E_a = 0.35\left(1 + \frac{u_2}{160}\right)(e_w - e_a) \qquad \text{[12]}$$

in which u2 = mean wind speed at 2 m above ground in km/day; ew = saturation vapour pressure at the mean air temperature in mm of mercury; and ea = actual vapour pressure, defined earlier.

TABLE I. SLOPE OF THE SATURATION VAPOUR PRESSURE TEMPERATURE CURVE

Temperature (°C)    A (mm of Hg/°C)
0                   0.30
5                   0.45
7.5                 0.54
10                  0.60
12.5                0.71
15                  0.80
17.5                0.95
20                  1.05
22.5                1.24
25                  1.40
27.5                1.61
30                  1.85
32.5                2.07
35                  2.35
37.5                2.66
40                  2.95
45                  3.66

TABLE II. MEAN MONTHLY SOLAR RADIATION AT TOP OF ATMOSPHERE, Ha, IN MM OF EVAPORABLE WATER PER DAY


TABLE III. MEAN MONTHLY VALUES OF POSSIBLE SUNSHINE HOURS, N
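To make the Penman formulation concrete, the following minimal Python sketch evaluates the reconstructed equations above for one day of data. All sample inputs (Ha, n, N, r, ew, ea, u2, and the Table I slope) are illustrative assumptions chosen by this sketch, not values taken from the paper or from the partly reproduced tables.

```python
import math

def net_radiation(Ha, r, a, b, n, N, Ta_c, ea, sigma=2.01e-9):
    """Hn = Ha(1-r)(a + b n/N) - sigma*Ta^4 (0.56 - 0.092 sqrt(ea)) (0.10 + 0.90 n/N),
    with Hn in mm of evaporable water per day."""
    Ta = 273.0 + Ta_c                       # mean air temperature in kelvin
    short_wave = Ha * (1.0 - r) * (a + b * n / N)
    long_wave = sigma * Ta**4 * (0.56 - 0.092 * math.sqrt(ea)) * (0.10 + 0.90 * n / N)
    return short_wave - long_wave

def wind_term(u2, ew, ea):
    """Ea = 0.35 (1 + u2/160)(ew - ea); u2 in km/day, pressures in mm Hg."""
    return 0.35 * (1.0 + u2 / 160.0) * (ew - ea)

def penman_pet(A, Hn, Ea, gamma=0.49):
    """Daily PET in mm/day: PET = (A*Hn + gamma*Ea) / (A + gamma)."""
    return (A * Hn + gamma * Ea) / (A + gamma)

# Hypothetical one-day inputs at latitude ~19 deg N:
phi = math.radians(19.0)
a = 0.29 * math.cos(phi)   # latitude-dependent constant, a = 0.29 cos(phi)
b = 0.52                   # average value quoted in the text
Hn = net_radiation(Ha=14.0, r=0.25, a=a, b=b, n=9.0, N=12.0, Ta_c=30.0, ea=22.0)
Ea = wind_term(u2=120.0, ew=31.8, ea=22.0)
print("PET = %.2f mm/day" % penman_pet(A=1.85, Hn=Hn, Ea=Ea))  # A from Table I at 30 C
```

With these assumed inputs the sketch yields a PET of roughly 5.4 mm/day, a plausible order of magnitude for a warm, sunny day.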

IV. SYSTEM BLOCK DIAGRAM AND WORKING

As the purpose of this model is to measure the moisture content of the agricultural soil and other parameters such as temperature, wind velocity, sunshine and RH in real time (to minimize manual involvement by the farmer), a microcontroller with different sensors is used. The amount of moisture present in the soil is sensed by a moisture sensor, the atmospheric temperature and RH are measured by a temperature–RH sensor, and the wind velocity is sensed by a wind velocity sensor. The ARM Cortex-M3 based LPC1768 has very low power consumption, so it is used as the microcontroller. Conductivity-based moisture measurement of the soil is done using the FC-28 sensor; as it provides an analog output, the built-in ADC is used to convert it to digital. To measure the atmospheric temperature and relative humidity the DHT11 sensor is used. It features a temperature and humidity sensor complex with a calibrated digital signal output; it provides a total of 5 bytes serially, two representing relative humidity, two representing temperature, and a final checksum byte.

Figure 1. DHT11 interfacing [15]; Figure 2. DHT11 [15]
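The frame format just described can be illustrated with a small decoding sketch. It assumes the byte order given in the DHT11 datasheet (humidity bytes first, then temperature, then a checksum equal to the low 8 bits of the sum of the first four bytes) and only parses an already-received frame; the single-wire bit timing on the LPC1768 is not shown.

```python
def parse_dht11(frame):
    """Decode a 5-byte DHT11 frame.

    Datasheet byte order: humidity integral, humidity decimal,
    temperature integral, temperature decimal, checksum (low 8 bits
    of the sum of the first four bytes).
    Returns (relative_humidity_percent, temperature_celsius).
    """
    if len(frame) != 5:
        raise ValueError("DHT11 frame must be exactly 5 bytes")
    rh_i, rh_d, t_i, t_d, checksum = frame
    if (rh_i + rh_d + t_i + t_d) & 0xFF != checksum:
        raise ValueError("DHT11 checksum mismatch - retry the read")
    return rh_i + rh_d / 10.0, t_i + t_d / 10.0

# Example frame as it might arrive from the sensor: 45.0 %RH, 28.0 C
print(parse_dht11([45, 0, 28, 0, (45 + 0 + 28 + 0) & 0xFF]))
```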

A 24 V, 20 A supply is required to operate a solenoid valve. There are several techniques for interfacing it; one of them is shown in Figure 3. To interface the solenoid valve with the controller, a Darlington transistor pair is required. Instead of using an individual Darlington pair for each valve, a ULN2003 IC is used here [5].

Figure 3 Solenoid valve interfacing


Wind speed can be easily measured using an LED/photo-transistor pair. Taking the forward voltage across the LED as 0.7 V and an assumed current of 5 mA, resistors are connected in series to drop the extra voltage. When no fan blade (baffle) is between the LED and the photo-transistor, the transistor conducts and the output is 1; otherwise it is 0. The output is connected to a pin of the controller and, by configuring that pin as an interrupt, the number of these edges in one minute can be counted, from which the wind velocity is obtained.

Figure 4 Wind velocity sensor interfacing
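A minimal host-side sketch of the edge-counting logic described above follows. On the LPC1768, `on_edge` would be the GPIO interrupt handler; the calibration constant `K_CAL` is a hypothetical value of this sketch that would have to be measured for the actual baffle geometry.

```python
class WindPulseCounter:
    """Counts baffle interruptions of the LED/photo-transistor beam and
    converts the pulse rate to a wind speed estimate."""

    K_CAL = 0.12  # assumed calibration constant: km/h per (pulse/minute)

    def __init__(self):
        self.pulses = 0

    def on_edge(self):
        """Call once per falling edge, i.e. each time the baffle cuts the beam."""
        self.pulses += 1

    def speed_kmh(self, window_minutes=1.0):
        """Convert the pulse count over the sampling window to wind speed."""
        ppm = self.pulses / window_minutes
        self.pulses = 0  # restart the count for the next window
        return ppm * self.K_CAL

counter = WindPulseCounter()
for _ in range(150):   # pretend 150 edges arrived during one minute
    counter.on_edge()
print("wind speed ~ %.1f km/h" % counter.speed_kmh())
```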

After collecting all the data, the controller calculates the PET (potential evapotranspiration rate) using the Penman equation. Since the water requirement for irrigation also depends on the type of crop, multiplying this PET by the crop coefficient Kc [1] gives the actual water requirement for a particular crop. From this, the time period for which the solenoid valve and pump should be ON to supply water to the field is calculated. Beyond irrigation, a compost sensor can also be interfaced to the controller so that the system can itself apply a particular fertilizer whenever necessary.
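A minimal sketch of the valve-timing computation just described is shown below. The crop coefficient, plot area and pump flow rate are illustrative assumptions of this sketch, not values from the paper.

```python
def valve_on_minutes(pet_mm_day, kc, soil_moisture_mm, field_area_m2, flow_lpm):
    """Minutes the solenoid valve must stay ON to meet one day's demand.

    Crop water need = PET * Kc (mm/day); moisture already stored in the
    root zone (expressed in mm of water) is credited against it.
    1 mm of water over 1 m^2 equals 1 litre.
    """
    demand_mm = max(pet_mm_day * kc - soil_moisture_mm, 0.0)
    litres = demand_mm * field_area_m2          # 1 mm over 1 m^2 = 1 L
    return litres / flow_lpm

# Hypothetical example: PET 5.4 mm/day, mid-season Kc ~ 1.15, 1 mm already
# in the root zone, a 500 m^2 plot, and a 60 L/min pump:
print("run valve for %.0f minutes" % valve_on_minutes(5.4, 1.15, 1.0, 500.0, 60.0))
```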

Figure 5. Block diagram of the system

As part of the author's dissertation work, the actual interfacing of all the sensors with the ARM Cortex-M3 controller, to detect the humidity of the soil (agricultural field), the temperature and the wind speed, and to supply water to the field according to its requirement, has been implemented in full. It is a microcontroller-based design which controls the water supply and the fertilizer requirement of the field to be irrigated. The solenoid valves are not activated while enough water is present for the specific crop on the field. Once the field gets dry, the sensors sense the requirement of water, the microcontroller then supplies water to the particular field that requires it for a specific time, and the valves are deactivated again. As farms are normally large, a single sensor cannot judge the moisture content properly; therefore section-wise sensors and valves are arranged. The NXP LPC1768 microcontroller is selected as the controller because of the following features [14]: • ARM Cortex-M3 (ARMv7) 32-bit CPU


• Up to 100 MHz main (core) frequency
• 512 kB program memory (Flash), 2 × 32 kB RAM
• Supports ARM Cortex ETM trace
• 10/100 Mbit Ethernet with RMII interface, DMA controller
• 12-bit analog-to-digital converter, 8 channels
• 10-bit digital-to-analog converter, 1 channel
• Four 32-bit timers
• 6 PWM channels, 1 motor-control PWM, 8 DMA channels
• USB 2.0 interface with integrated transceiver
• CAN 2.0B with 2 channels
• 4 × UART, 2 × SSP, 1 × SPI, 3 × I2C, 1 × I2S
• Interface for quadrature encoder
• Low-power RTC, unique ID
• Internal 4 MHz RC oscillator

V. CONCLUSION

The paper "AUTO AGRICULTURE SYSTEM" covers automatic irrigation and compost measurement, for which a wired or wireless network can be used. As an enhancement, options for the mode of irrigation, such as DRIP irrigation and SPRINKLER irrigation, can also be provided, as can AUTO and MANUAL options. In AUTO mode the system works as discussed earlier; in MANUAL mode it works exactly like present irrigation systems, in which the farmer selects the time interval for which the valve is kept ON. In the analysis and data collection, the reviews and suggestions of various farmers and agriculture experts were taken into consideration, so it can be a really helpful system for farmers.

REFERENCES

[1] T. Yellamanda Reddy and G. H. Sankara Reddy, Principles of Agronomy; Ministry of Water Conservation, India.
[2] Joaquín Gutiérrez, Juan Francisco Villa-Medina, Alejandra Nieto-Garibay, and Miguel Ángel Porta-Gándara, "Automated Irrigation System Using a Wireless Sensor Network"; agricultural experts, Krishi Mantralaya, Govt. of India.
[3] Venkata Naga Rohit Gunturi, "Micro Controller Based Automatic Plant Irrigation System", April 2013.
[4] "Design and Implementation of Real Time Irrigation System using a Wireless Sensor Network", January 2014.
[5] "Code of Good Agricultural Practice for the Protection of Water" (The Water Code), Defra Publications, 1998.
[6] Mamta Kumari, N. R. Patel, and Payshanbe Yaftalov Khayruloevich, "Estimation of crop water requirement in rice-wheat system from multi-temporal AWIFS satellite data", 2013.
[7] "Taking Water Responsibly – Government decisions following consultation on changes to the water abstraction licensing system in England and Wales", Defra Publications, 1999.
[8] "The Water Supply (Water Fittings) Regulations 1999 and The Water Byelaws 2000 (Scotland) – What are they and how do they affect you?", The Water Regulations Advisory Scheme, 2001.
[9] "Water Supply Systems: Prevention of Contamination and Waste of Drinking Water Supplies – Agricultural Premises", The Water Regulations Advisory Scheme, 2010.
[10] Roger Bailey, Irrigated Crops and Their Management, Farming Press, 1990; Defra Land Management Improvement Division, Irrigation Best Practice Manual, 2002.
[11] User manual, NXP LPC1768.
[12] User manual, DHT11 temperature–RH sensor.
[13] Michael D. Dukes, Mary Shedd, and Bernard Cardenas-Lailhacar, "Smart Irrigation Controllers: How Do Soil Moisture Sensor (SMS) Irrigation Controllers Work?", edis.ifas.ufl.edu/TOPIC_SERIES_Smart_Irrigation_Controllers.
[14] http://www.sjrwmd.com/floridawaterstar/pdfs/SMS_field_guide.pdf; Dukes, M. D., Cardenas-Lailhacar, B., and Miller, G. L. (2005, June), Irrigation Research at UF/IFAS, retrieved June 27, 2008, from Institute of Food and Agricultural Sciences: http://irrigation.ifas.ufl.edu/SMS/pubs/June05_Resource_sensor_irrig.pdf.


Design and Simulation of Generation Control Loops for Multi Area Interconnected System

1Abhilash M G and 2Frenish C Francis
1,2Department of EEE, Thejus Engineering College, Thrissur, Kerala, India

[email protected], [email protected]

Abstract—This work deals with the automatic generation control (AGC) of interconnected thermal systems in combination with automatic voltage regulation (AVR). In this particular work three thermal areas connected by tie-lines are considered. The primary purpose of the AGC is to balance the total system generation against system load plus losses so that the desired frequency and power interchange with neighboring areas are maintained. Any mismatch between generation and demand causes the system frequency to deviate from the scheduled value, and a high frequency deviation may lead to system collapse. The role of the automatic voltage regulator is to hold the terminal voltage magnitude of the synchronous generator at a specified level. The interaction between frequency deviation and voltage deviation is analyzed in this paper, and system performance is evaluated at various loading disturbances. Index Terms— Automatic Generation Control, Automatic Voltage Control, Frequency deviation, Tie-line flow, Fuzzy Logic Controller.

I. INTRODUCTION

With the growth of power system networks and the increase in their complexity, many factors have become influential in electric power generation, demand and load management. The increasing demand for electric power, coupled with resource and environmental constraints, poses several challenges to system planners. Stability of power systems has been, and continues to be, of major concern in system operation and control. The chief preoccupation of power system engineers is the control of real and reactive power, because it is the governing element of revenue; increased size and demand have forced them to design more effective and efficient control schemes to maintain the power system at desired operating levels, characterized by nominal system frequency and voltage. The main function of a power system is to supply the real and reactive power demand with good quality, in terms of constancy of voltage and frequency. Furthermore, for an interconnected power system the tie-line power flow between utilities must be maintained within prescribed limits. It is in fact impossible to maintain both active and reactive power without control, which would result in variation of voltage and frequency levels; to cancel the effect of load variation and to keep frequency and voltage constant, a control system is required. Though the active and reactive powers have a combined effect on the frequency and voltage, the control problems of frequency and voltage can be separated: frequency depends mostly on the active power, and voltage depends mostly on the reactive power. Thus the issue of controlling power systems can be separated into two independent problems. The active


power and frequency control is called load frequency control (or automatic generation control). The most important task of AGC is to keep the frequency constant against the varying active power load, which is also referred to as an unknown external disturbance.

Figure 1. Simplified representation of a three-area interconnected system

Managing the power exchange error is another important task of AGC. A power system is generally composed of several generating units; to improve the fault tolerance of the whole power system, these units are connected through tie-lines. The use of tie-line power creates a new error in the control problem: the tie-line power exchange error. When a sudden change in active power load occurs in an area, that area will draw energy through the tie-lines from the other areas. Eventually, however, the area subject to the load change should balance it without external support, or else there will be economic conflicts between the areas. This is why each area requires a separate load frequency controller to regulate the tie-line power exchange error, so that all the areas in an interconnected system can set their set points differently. In short, the AGC has two major duties: to maintain the desired value of frequency, and to keep the tie-line power exchange on schedule in the presence of any load changes. The AGC also has to be unaffected by unknown external disturbances and by system model and parameter variation. In the AVR loop, on the other hand, the excitation of the generators must be regulated to match the reactive power demand; otherwise the voltages at various buses may go beyond the prescribed limits. The maximum permissible change in frequency is about ±5%, and in voltage about ±5%; outside these limits, highly undesirable conditions such as frequency and voltage fluctuations arise in the power system, so it is necessary to keep the frequency and voltage at a constant level.

A. Automatic Generation Control
The AGC controls the frequency deviation by maintaining the real power balance in the system. Its main functions are to maintain steady frequency, control the tie-line flows and distribute the load among the participating generating units. The control signals are the tie-line deviation ∆Ptie (measured from the tie-line flows) and the frequency deviation ∆f (obtained by measuring the angle deviation ∆δ). These error signals are amplified, mixed and transformed into a real power signal, which then controls the valve position; depending on the valve position, the turbine changes its output power to establish the real power balance. The combining equations for tie-line power are

$$\Delta P_{tie,1} = \Delta P_{tie,12} + \Delta P_{tie,13} \qquad (1)$$
$$\Delta P_{tie,2} = \Delta P_{tie,21} + \Delta P_{tie,23} \qquad (2)$$
$$\Delta P_{tie,3} = \Delta P_{tie,31} + \Delta P_{tie,32} \qquad (3)$$
$$\Delta P_{tie,12}(s) = \frac{2\pi T_{12}}{s}\left[\Delta f_1(s) - \Delta f_2(s)\right] \qquad (4)$$


$$\Delta P_{tie,13}(s) = \frac{2\pi T_{13}}{s}\left[\Delta f_1(s) - \Delta f_3(s)\right] \qquad (5)$$
$$\Delta P_{tie,21}(s) = \frac{2\pi T_{21}}{s}\left[\Delta f_2(s) - \Delta f_1(s)\right] \qquad (6)$$
$$\Delta P_{tie,23}(s) = \frac{2\pi T_{23}}{s}\left[\Delta f_2(s) - \Delta f_3(s)\right] \qquad (7)$$
$$\Delta P_{tie,31}(s) = \frac{2\pi T_{31}}{s}\left[\Delta f_3(s) - \Delta f_1(s)\right] \qquad (8)$$
$$\Delta P_{tie,32}(s) = \frac{2\pi T_{32}}{s}\left[\Delta f_3(s) - \Delta f_2(s)\right] \qquad (9)$$
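As a sketch of how equations (1)-(9) behave in the time domain, the snippet below Euler-integrates d(∆Ptie,ij)/dt = 2πTij(∆fi − ∆fj) and then forms a frequency-bias area control error, ACE_i = ∆Ptie,i + Bi·∆fi. The synchronizing coefficients Tij and the frequency traces are illustrative assumptions of this sketch; only the bias values echo Table II.

```python
import math

# Assumed synchronizing coefficients (p.u./rad); real values would come
# from the system data. Bias factors taken as 2, as in Table II.
T = {(1, 2): 0.08, (1, 3): 0.07, (2, 3): 0.06}
B = {1: 2.0, 2: 2.0, 3: 2.0}

def simulate_tie_lines(df_traces, dt=0.01):
    """Integrate the tie-line deviations and return each area's ACE at
    the final instant, using dPtie_ji = -dPtie_ij."""
    dptie = {ij: 0.0 for ij in T}
    for k in range(len(df_traces[1])):
        for (i, j) in T:
            dptie[(i, j)] += 2 * math.pi * T[(i, j)] * (
                df_traces[i][k] - df_traces[j][k]) * dt
    net = {1: dptie[(1, 2)] + dptie[(1, 3)],     # equations (1)-(3)
           2: -dptie[(1, 2)] + dptie[(2, 3)],
           3: -dptie[(1, 3)] - dptie[(2, 3)]}
    return {i: net[i] + B[i] * df_traces[i][-1] for i in net}

# Synthetic decaying frequency deviations after a load change in area 1:
n = 1000
df = {1: [-0.02 * math.exp(-0.005 * k) for k in range(n)],
      2: [-0.01 * math.exp(-0.005 * k) for k in range(n)],
      3: [-0.01 * math.exp(-0.005 * k) for k in range(n)]}
print(simulate_tie_lines(df))
```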

Figure 2. Block diagram representation of AGC including AVR and Fuzzy control (Area 1)

B. Automatic Voltage Regulator
The voltage of the generator is proportional to the speed and excitation (flux) of the generator. The speed being constant, the excitation is used to control the voltage; the voltage control system is therefore also called the excitation control system or automatic voltage regulator (AVR). The generator terminal voltage Vt is compared with a voltage reference Vref to obtain a voltage error signal ∆V. This signal is applied to the voltage regulator, shown as a block with transfer function KA/(1+sTA). The output of the regulator is applied to the exciter, shown as a block with transfer function Ke/(1+sTe). The output of the exciter, Efd, is then applied to the field winding, which adjusts the generator terminal voltage. The generator field can be represented by a block with transfer function KF/(1+sTF).

Figure 3.Block diagram representation of AVR

C. Single Machine Connected to an Infinite Bus
The rotational inertia equations describe the effect of unbalance between the electromagnetic torque and the mechanical torque of the individual machines in an LFC system. For a small perturbation and a small deviation in speed, the complete swing equation becomes


$$\Delta f(s) = \frac{1}{2Hs + D}\left[\Delta P_m(s) - \left(\Delta P_e(s) + \Delta P_L(s)\right)\right] \qquad (10)$$

where ∆Pe is the deviation of the internal electrical power, which is sensitive to the load characteristics. A combined model includes the load frequency control and the AVR system; this model can be used to show the mutual effect between the LFC and AVR loops and to depict the slight change in the response of the turbine output power in the steady state. The AGC and AVR loops are often considered independently, since the excitation control of the generator has a small time constant, contributed by the field winding, whereas the AGC loop is a slow-acting loop whose major time constant is contributed by the turbine and the generator moment of inertia; transients in the excitation control loop thus vanish much faster and do not affect the AGC loop. Practically, however, the two loops are not non-interacting: interaction exists, but in the opposite direction, since the AVR loop affects the magnitude of the generated e.m.f., this e.m.f. determines the magnitude of the real power, and so the AVR loop is felt in the AGC loop. When we include the small effect of voltage on real power, we get

$$\Delta P_e = P_s\,\Delta\delta + K_2\,E' \qquad (11)$$

where K2 is the change in electrical power for a small change in stator e.m.f. and Ps is the synchronizing power coefficient. Including the small effect of rotor angle upon the generator terminal voltage, we may write

$$\Delta V_t = K_5\,\Delta\delta + K_6\,E' \qquad (12)$$

where K5 is the change in terminal voltage for a small change in rotor angle at constant stator e.m.f. and K6 is the change in terminal voltage for a small change in stator e.m.f. at constant rotor angle. Finally, modifying the generator field transfer function to include the effect of rotor angle (K4 accounting for the demagnetizing effect of a rotor angle change), we may express the stator e.m.f. as

$$E' = \frac{K_F}{1 + sT_F}\left(V_f - K_4\,\Delta\delta\right) \qquad (13)$$

Figure 4. Block diagram representation for single machine connected to infinite bus


II. CONTROLLERS

A. PID Controller
The PID controller has all the necessary dynamics: fast reaction to a change of the controller input (D mode), an increase in the control signal to drive the error towards zero (I mode), and suitable action inside the control error area to eliminate oscillations (P mode). The derivative mode improves the stability of the system and enables an increase in gain K and a decrease in integral time constant Ti, which increases the speed of the controller response. The transfer function of the PID controller is

$$G_c(s) = K_p + \frac{K_i}{s} + K_d\,s \qquad (14)$$

The PID controller is used when dealing with higher-order capacitive processes (processes with more than one energy storage) whose dynamics are not similar to those of an integrator (as in many thermal processes). It is often used in industry, but also in the control of mobile objects (course and trajectory following included) when stability and precise reference following are required; conventional autopilots are, for the most part, PID-type controllers.
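A minimal discrete-time realization of equation (14) is sketched below, purely to make the three modes concrete; the sampling period is an arbitrary choice, and the gains in the usage line echo Table IV later in this paper.

```python
class PID:
    """Discrete realization of equation (14): u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0      # I mode: accumulated error
        self.prev_error = 0.0    # remembered for the D mode difference

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# With the gains of Table IV and an arbitrary 10 ms sampling period:
pid = PID(kp=1.0, ki=0.35, kd=0.15, dt=0.01)
print(pid.update(0.5))   # controller output for an error of 0.5
```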

B. Fuzzy Controller
In contrast to conventional control techniques, fuzzy logic control (FLC) is best utilized in complex, ill-defined processes that can be controlled by a skilled human operator without much knowledge of their underlying dynamics. The basic idea behind FLC is to incorporate the "expert experience" of a human operator in the design of the controller for a process whose input–output relationship is described by a collection of fuzzy control rules involving linguistic variables, rather than by a complicated dynamic model. The use of linguistic variables, fuzzy control rules and approximate reasoning provides a means to incorporate human expert experience in designing the controller. A typical architecture of an FLC is shown below; it comprises four principal components: a fuzzifier, a fuzzy rule base, an inference engine, and a defuzzifier.

Figure 5. Fuzzy editor: one input and output of the FLC

Figure 6. (a) Membership functions for input error, (b) Membership functions for input change in error, (c) Membership functions for fuzzy output


TABLE I. RULES FOR FUZZY LOGIC CONTROLLER

e/de   NB   NM   NS   Z    PS   PM   PB
NB     NB   NB   NB   NB   NM   NM   Z
NM     NB   NB   NB   NM   NS   Z    PM
NS     NB   NB   NM   NS   Z    PS   PM
Z      NB   NM   NS   Z    PS   PM   PB
PS     NM   NS   Z    PS   PM   PB   PB
PM     NS   Z    PS   PM   PB   PB   PB
PB     Z    PS   PM   PB   PB   PB   PB
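A minimal sketch of how the rule base of Table I can be evaluated follows: triangular membership functions on a normalized universe [-1, 1], min-composition for the rule firing strength, and weighted-average defuzzification. The membership centers and widths are assumptions of this sketch, not values read from Figure 6.

```python
LABELS = ["NB", "NM", "NS", "Z", "PS", "PM", "PB"]
CENTERS = dict(zip(LABELS, [-1.0, -2/3, -1/3, 0.0, 1/3, 2/3, 1.0]))

RULES = {   # RULES[e_label][k] -> output label for de = LABELS[k] (Table I)
    "NB": ["NB", "NB", "NB", "NB", "NM", "NM", "Z"],
    "NM": ["NB", "NB", "NB", "NM", "NS", "Z", "PM"],
    "NS": ["NB", "NB", "NM", "NS", "Z", "PS", "PM"],
    "Z":  ["NB", "NM", "NS", "Z", "PS", "PM", "PB"],
    "PS": ["NM", "NS", "Z", "PS", "PM", "PB", "PB"],
    "PM": ["NS", "Z", "PS", "PM", "PB", "PB", "PB"],
    "PB": ["Z", "PS", "PM", "PB", "PB", "PB", "PB"],
}

def mu(x, label, width=1/3):
    """Triangular membership of normalized x in [-1, 1] for one label."""
    return max(0.0, 1.0 - abs(x - CENTERS[label]) / width)

def flc(e, de):
    """Mamdani-style inference over Table I with weighted-average defuzzification."""
    num = den = 0.0
    for el in LABELS:
        for k, dl in enumerate(LABELS):
            w = min(mu(e, el), mu(de, dl))   # rule firing strength
            num += w * CENTERS[RULES[el][k]]  # consequent singleton
            den += w
    return num / den if den else 0.0

print(flc(0.4, -0.1))   # small positive error, slightly decreasing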

Figure 7. Fuzzy controller subsystem

III. EXPERIMENTAL RESULTS

In the simulation study, the proposed combined model is applied to a three-area power system. To illustrate its performance, simulations are performed for the possible operating conditions of a power system; here the performance of the combined model is analyzed with a load change in each area. In order to demonstrate the effectiveness of the fuzzy controller, the Simulink model of the three-area power system is simulated and the frequency response is plotted for a period of 50 seconds. The change in frequency for different loads is obtained, and the transient responses are found to be stable. It is clear from the simulation results that the FLC can bring the frequency back to its rated value immediately after the disturbance and without oscillations. Figure 8 shows the frequency response obtained for a sudden load increase of 1 p.u.

Figure 8. (a) AGC response for Areas 1, 2 and 3; (b) AVR response for Areas 1, 2 and 3



TABLE II. ASSUMPTIONS USED FOR SIMULATION OF AGC

AGC                      Area 1   Area 2   Area 3
Governor time constant   0.2      0.3      0.2
Turbine time constant    0.5      0.4      0.4
Inertia                  5        5        6
D                        8        1        5
1/R                      20       10       5
Bias                     2        2        2

TABLE III. ASSUMPTIONS USED FOR SIMULATION OF AVR

AVR         Time constant   Gain
Amplifier   0.1             10
Exciter     0.4             1
Generator   1               1
Sensor      0.05            1
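Using the block gains and time constants of Table III, the following sketch Euler-integrates the closed AVR loop of Figure 3 for a unit step in Vref. Since the forward loop gain is 10, the terminal voltage should settle near 10/11 ≈ 0.91 p.u.; the integration step and horizon are arbitrary choices of this sketch.

```python
def avr_step(t_end=10.0, dt=0.001):
    """Closed-loop AVR step response for the Table III blocks:
    amplifier 10/(1+0.1s), exciter 1/(1+0.4s), generator 1/(1+s),
    sensor 1/(1+0.05s); unit step on Vref, simple Euler integration."""
    ka, ta = 10.0, 0.1
    ke, te = 1.0, 0.4
    kg, tg = 1.0, 1.0
    kr, tr = 1.0, 0.05
    vr = ve = vt = vs = 0.0    # amplifier, exciter, generator, sensor states
    vref = 1.0
    for _ in range(int(t_end / dt)):
        e = vref - vs                      # error from the comparator
        vr += dt * (ka * e - vr) / ta      # amplifier block
        ve += dt * (ke * vr - ve) / te     # exciter block
        vt += dt * (kg * ve - vt) / tg     # generator field block
        vs += dt * (kr * vt - vs) / tr     # sensor feedback block
    return vt

print("terminal voltage after 10 s ~ %.3f p.u." % avr_step())
```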

TABLE IV. PARAMETERS OF PID CONTROLLER

Parameter   Value
Kp          1.0
Ki          0.35
Kd          0.15

IV. CONCLUSION

In this paper an attempt is made to develop an AGC model together with AVR. In this scheme, coupling between AGC and AVR is employed, and interaction between frequency and voltage, i.e. cross-coupling, does exist: the AVR loop affects the magnitude of the internal e.m.f., and this e.m.f. determines the magnitude of the real power, so changes in the AVR loop are felt in the AGC loop. In this study a fuzzy control approach is employed for the automatic generation control (AGC) of an interconnected power system with automatic voltage regulation. The effectiveness of the proposed controller in increasing the damping of local and inter-area modes of oscillation is demonstrated on a three-area interconnected power system.



PID Controller Tuning by using Fuzzy Logic Control and Particle Swarm Optimization for Isolated Steam Turbine

1Frenish C Francis and 2Abhilash M G
1,2PG Scholars, Power Systems, Thejus Engineering College, Kerala, India

[email protected], [email protected]

Abstract—The PID controller is employed in every facet of industrial automation. It is necessary to improve the quality of any production technology in a complex way, by replacing the technology itself as well as by continuously optimizing its operation. It is here that new opportunities for the fuzzy controller open up, as a substitute for the human operator in controlling and optimizing processes. This paper presents the design of a PID controller using the Ziegler-Nichols (ZN) technique and a fuzzy controller for an isolated steam turbine; PID controller tuning using PSO is also proposed. Simulation results are demonstrated, and the performance analysis shows the effectiveness of the proposed fuzzy logic controller compared to the ZN-tuned PID controller and to tuning by particle swarm optimization. Index Terms— Ziegler-Nichols (ZN), particle swarm optimization (PSO).

I. INTRODUCTION

Since many industrial processes are of a complex nature, it is difficult to develop a closed-loop control model for such high-level processes. The human operator is often required to provide on-line adjustment, which makes the process performance greatly dependent on the experience of the individual operator. It would therefore be extremely useful if some kind of systematic methodology could be developed for a process control model that suits industrial processes. Some variables in a continuous DCS (distributed control system) suffer from many unexpected disturbances during operation (noise, parameter variation, model uncertainties, etc.), so human supervision (adjustment) is necessary and frequent; if the operator has little experience, the system may be damaged or operated at lower efficiency. One such system is the control of turbine speed, where a PI controller is the main controller used to control the process variable. The process is exposed to unexpected conditions, the controller can fail to keep the process variable in a satisfactory condition, and retuning the controller becomes necessary. The fuzzy controller is one of the successful controllers used in process control in the presence of model uncertainties, but it may be difficult for a fuzzy controller to articulate the accumulated knowledge to encompass all circumstances; hence it is essential to provide a tuning capability, and there are many parameters in a fuzzy controller that may be adapted. The construction and operation of the turbine speed control unit will be described. An adaptive controller is suggested here to adapt the normalized fuzzy controller, mainly its output/input scale factors. The algorithm is tested on an experimental model of the turbine speed control system. A comparison between the conventional method and the adaptive fuzzy controller is made, along with PID controller tuning by particle swarm


optimization. The suggested control algorithm consists of two controllers: a process variable controller and an adaptive controller (normalized fuzzy controller). Finally, the fuzzy supervisory adaptation is implemented and compared with the conventional method and with PSO-based tuning.

II. RELATED WORKS

The aim is to understand the various control techniques and to improve the speed control of a steam turbine by tuning of controllers; the application of the PID controller spans from small industry to high-technology industry. In this paper it is proposed that the controller be tuned using an adaptive fuzzy controller (AFC), a stochastic global search method that emulates the process of natural evolution. Such methods have been shown to be capable of locating high-performance areas in complex domains without experiencing the difficulties associated with high dimensionality or false optima, as may occur with gradient-descent techniques. Using a fuzzy controller to perform the tuning results in the optimum controller being evaluated for the system every time. For this study, the model selected is a turbine speed control system; this model is often encountered in refineries in the form of a steam turbine that uses a hydraulic governor to control the speed of the turbine. The PID controller of the model will be designed using the classical method and the results analyzed; the same model will then be redesigned using the AFC method. A steam turbine is a device that extracts thermal energy from pressurized steam and uses it to do mechanical work on a rotating output shaft; because the turbine generates rotary motion, it is particularly suited to driving an electrical generator. The control of a turbine with a governor is essential, as turbines need to be run up slowly to prevent damage, and some applications (such as the generation of alternating-current electricity) require precise speed control. Uncontrolled acceleration of the turbine rotor can lead to an overspeed trip, which causes the nozzle valves that control the flow of steam to the turbine to close; if this fails, the turbine may continue accelerating until it breaks apart, often catastrophically. Tuning the parameters of a PID controller is very important in PID control. Ziegler and Nichols proposed the well-known Ziegler-Nichols method to tune the coefficients of a PID controller; this tuning method is very simple, but cannot be guaranteed to be always effective. This work investigates the effectiveness of different controllers for the speed control of a tandem-compound single-reheat steam turbine. In contrast to conventional control techniques, fuzzy logic control (FLC) is best utilized in complex, ill-defined processes that can be controlled by a skilled human operator without much knowledge of their underlying dynamics. The basic idea behind FLC is to incorporate the "expert experience" of a human operator in the design of the controller for a process whose input–output relationship is described by a collection of fuzzy control rules (e.g., IF-THEN rules) involving linguistic variables rather than by a complicated dynamic model. The use of linguistic variables, fuzzy control rules and approximate reasoning provides a means to incorporate human expert experience in designing the controller.

III. MODELING OF STEAM TURBINE

For this study, the model selected is a turbine speed control system, since this model is often encountered in refineries in the form of a steam turbine that uses a hydraulic governor to control the speed of the turbine, as illustrated in Figure 1.

Figure 1. Steam turbine model


The complexities of the electronic governor controller will not be taken into consideration here; the electronic governor is a big subject by itself and is beyond the scope of this study. Instead, this study focuses on the model made up of the steam turbine and the hydraulic governor controlling the speed of the turbine. In the context of refineries, the steam turbine can be considered the heart of the plant, since refineries have many high-capacity compressors running on steam turbines; this makes the control and tuning optimization of the steam turbine significant. The model used in this paper was presented earlier; the transfer function of the open-loop system can be approximated by the third-order transfer function

$$G(s) = \frac{1}{s\,(s+1)(s+5)}$$

The identified model is approximated as linear, though the actual closed loop is nonlinear due to the limitation on the control signal.
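To make the model concrete, the sketch below simulates a unit-step set-point response of this plant under an ideal PID in the forward path; the plant is written in controllable canonical form and integrated with a plain Euler scheme. The gains used are the Ziegler-Nichols values for this plant (derived in Section IV); the step size and horizon are arbitrary choices of the sketch.

```python
def simulate_speed_loop(kp, ki, kd, t_end=20.0, dt=0.001):
    """Unit-step set-point response of G(s) = 1/(s(s+1)(s+5)) under an
    ideal PID. Plant in canonical form: x''' + 6x'' + 5x' = u, y = x1."""
    x1 = x2 = x3 = 0.0
    integral = 0.0
    prev_e = 0.0
    y = []
    for k in range(int(t_end / dt)):
        e = 1.0 - x1                                   # unit step reference
        integral += e * dt
        deriv = (e - prev_e) / dt if k > 0 else 0.0    # skip the step-edge kick
        prev_e = e
        u = kp * e + ki * integral + kd * deriv
        dx3 = u - 6.0 * x3 - 5.0 * x2                  # x''' = u - 6x'' - 5x'
        x1 += dt * x2
        x2 += dt * x3
        x3 += dt * dx3
        y.append(x1)
    return y

# ZN gains for this plant (Ku = 30, Tu = 2*pi/sqrt(5) ~ 2.81 s; see Section IV):
y = simulate_speed_loop(kp=18.0, ki=12.8, kd=6.32)
print("final value %.3f, peak %.3f" % (y[-1], max(y)))
```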

IV. PID CONTROLLER

The PID controller is the most common form of feedback. It was an essential element of early governors, and it became the standard tool when process control emerged in the 1940s. In process control today more than 95% of the control loops are of PID type, and most of those are actually PI loops. PID controllers are found in all areas where control is used, and they come in many different forms: there are stand-alone systems in boxes for one or a few loops, manufactured by the thousands yearly, and PID control is an important ingredient of distributed control systems. The controllers are also embedded in many special-purpose control systems. PID control is often combined with logic, sequential functions, selectors and simple function blocks to build the complicated automation systems used for energy production, transportation and manufacturing. Many sophisticated control strategies, such as model predictive control, are also organized hierarchically: PID control is used at the lowest level, with the multivariable controller giving the set points to the controllers at the lower level. The PID controller can thus be said to be the "bread and butter" of control engineering and an important component in every control engineer's toolbox. PID controllers have survived many changes in technology, from mechanics and pneumatics to microprocessors via electronic tubes, transistors and integrated circuits. The microprocessor has had a dramatic influence on the PID controller: practically all PID controllers made today are based on microprocessors, which has created opportunities to provide additional features like automatic tuning, gain scheduling, and continuous adaptation.

Figure 2 PID controller block

The controller attempts to minimize the error by adjusting the process control inputs. The PID controller algorithm involves three separate constant parameters and is accordingly sometimes called three-term control: the proportional, integral and derivative values, denoted P, I and D. Heuristically, these values can be interpreted in terms of time: P depends on the present error, I on the accumulation of past errors, and D is a prediction of future errors based on the current rate of change. The weighted sum of these three actions is used to adjust the process via a control element. Tuning the parameters of a PID controller is very important in PID control. Ziegler and Nichols proposed the well-known Ziegler-Nichols method to tune the coefficients of a PID controller; this method is very simple, but cannot be guaranteed to be always effective. The PID control scheme is named after its three correcting terms, whose sum constitutes the


manipulated variable (MV). The proportional, integral, and derivative terms are summed to calculate the output of the PID controller. Defining u(t) as the controller output, the final form of the PID algorithm is

$$u(t) = K_p\,e(t) + K_i\int_0^{t} e(\tau)\,d\tau + K_d\,\frac{de(t)}{dt}$$

where Kp is the proportional gain, Ki the integral gain and Kd the derivative gain (all tuning parameters), and e is the error. Tuning a control loop is the adjustment of its control parameters (proportional band/gain, integral gain/reset, derivative gain/rate) to the optimum values for the desired control response.

A. PID Tuning
Tuning is the adjustment of control parameters to the optimum values for the desired control response. Stability is a basic requirement, but different systems have different behavior, different applications have different requirements, and requirements may conflict with one another. PID tuning is a difficult problem: even though there are only three parameters and the problem is in principle simple to describe, the tuning must satisfy complex criteria within the limitations of PID control. There are accordingly various methods for loop tuning, among them the manual tuning method, the Ziegler–Nichols tuning method, and PID tuning software methods.

B. Manual Tuning Method
In the manual tuning method, parameters are adjusted by watching system responses: Kp, Ki and Kd are changed until the desired or required system response is obtained. Although this method is simple, it should be used only by experienced personnel.

C. One Manual Tuning Method Example
Firstly, Ki and Kd are set to zero. Then Kp is increased until the output of the loop oscillates; having found this Kp value, the gain should be set to approximately half of it for a "quarter amplitude decay" type response. Then Ki is increased until any offset is corrected in sufficient time for the process, noting that too much Ki will cause instability. Finally, Kd is increased until the loop is acceptably quick in reaching its reference after a load disturbance, noting that too much Kd will cause excessive response and overshoot. A fast PID loop tuning usually overshoots slightly to reach the set point more quickly; however, some systems cannot accept overshoot, in which case an over-damped closed-loop system is required, which in turn requires a Kp setting significantly less than half the Kp that causes oscillation.

D. Ziegler–Nichols Tuning Method
This method was introduced by John G. Ziegler and Nathaniel B. Nichols in the 1940s. The Ziegler-Nichols closed-loop method is based on experiments executed on an established control loop (a real or simulated system). The tuning procedure is as follows:
1. Bring the process to (or as close as possible to) the specified operating point of the control system, to ensure that the controller during tuning "feels" representative process dynamics and to minimize the chance that variables reach limits during tuning. The process is brought to the operating point by manually adjusting the control variable, with the controller in manual mode, until the process variable is approximately equal to the set-point.
2. Turn the PID controller into a P controller by setting Ti = ∞ and Td = 0. Initially the gain Kp is set to 0. Close the control loop by setting the controller to automatic mode.
3. Increase Kp until there are sustained oscillations in the signals of the control system, e.g. in the process measurement, after an excitation of the system (the sustained oscillations correspond to the system being on the stability limit). This Kp value is denoted the ultimate (or critical) gain, Kpu. The excitation can be a step in the set-point; the step must be small, for example 5% of the maximum set-point range, so that the process is not driven too far from the operating point where its dynamic properties may differ, but it must not be too small either, or the oscillations may be hard to observe amid the inevitable measurement noise. It is important that Kpu is found without the control signal being driven to any saturation limit (maximum or minimum value) during the oscillations: if such limits are reached, there will be sustained oscillations for any large value of Kp, e.g. 1000000, and the resulting Kp value is useless (the control system will probably be unstable). In other words, Kpu must be the smallest Kp value that drives the control loop into sustained oscillations.
4. Measure the ultimate (or critical) period Pu of the sustained oscillations.


5. Calculate the controller parameter values according to Table I, and use these values in the controller. If the stability of the control loop is poor, improve it by decreasing Kp, for example by 20%.

TABLE I. CONTROL PARAMETER VALUES

Control type   Kp        Ki           Kd
P              0.5*Ku    -            -
PI             0.45*Ku   1.2*Kp/Tu    -
PID            0.6*Ku    2*Kp/Tu      Kp*Tu/8
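The rules of Table I are easy to mechanize, as sketched below. For the plant of Section III, the Routh criterion gives an ultimate gain Ku = 30 with sustained oscillations at ω = √5 rad/s, i.e. Tu = 2π/√5 ≈ 2.81 s; the helper name is this sketch's own.

```python
import math

def ziegler_nichols(ku, tu, kind="PID"):
    """Map ultimate gain Ku and ultimate period Tu to controller gains
    using the closed-loop Ziegler-Nichols rules of Table I."""
    kp = {"P": 0.5, "PI": 0.45, "PID": 0.6}[kind] * ku
    ki = {"P": 0.0, "PI": 1.2 * kp / tu, "PID": 2.0 * kp / tu}[kind]
    kd = kp * tu / 8.0 if kind == "PID" else 0.0
    return kp, ki, kd

# For G(s) = 1/(s(s+1)(s+5)), Routh's criterion gives Ku = 30 and
# oscillation at w = sqrt(5) rad/s, i.e. Tu = 2*pi/sqrt(5) ~ 2.81 s:
print(ziegler_nichols(30.0, 2.0 * math.pi / math.sqrt(5.0)))
```

These are the same gains used in the simulation sketch of Section III.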

The derivative mode improves the stability of the system and enables an increase in gain K and a decrease in integral time constant Ti, which increases the speed of the controller response. The PID controller is used when dealing with higher-order capacitive processes (processes with more than one energy storage) whose dynamics are not similar to those of an integrator (as in many thermal processes). It is often used in industry, but also in the control of mobile objects (course and trajectory following included) when stability and precise reference following are required; conventional autopilots are, for the most part, PID-type controllers.

V. FUZZY LOGIC CONTROLLER

Fuzzy controllers are inherently nonlinear controllers, and hence fuzzy control technology can be viewed as a new, cost-effective and practical way of developing nonlinear controllers. The major advantage of this technology over traditional control technology is its capability of capturing and utilizing qualitative human experience and knowledge in a quantitative manner, through the use of fuzzy sets, fuzzy rules and fuzzy logic. There exist two different types of fuzzy controllers: the Mamdani type and the Takagi–Sugeno (TS) type. They differ mainly in the fuzzy rule consequent: a Mamdani fuzzy controller uses fuzzy sets as the consequent, whereas a TS fuzzy controller employs linear functions of the input variables. Although it is possible to design a fuzzy logic controller by a simple modification of a conventional one, by inserting some meaningful fuzzy-logic IF-THEN rules into the control system, such approaches in general complicate the overall design and do not produce new fuzzy PID controllers that capture the essential characteristics and nature of the conventional PID controllers; besides, they generally do not have analytic formulas to use for control specification and stability analysis. The fuzzy PID controllers introduced below are natural extensions of their conventional versions, which preserve the linear structures of the PID controllers, with simple and conventional analytical formulas as the final results of the design; thus they can directly replace conventional PID controllers in any operating control system (plant, process). The conventional design of the PID controller is somewhat modified and a new hybrid fuzzy PID controller is designed: instead of the summation, a Mamdani-based fuzzy inference system is implemented, whose inputs are the error and the change in error. FLC is strongly based on the concepts of fuzzy sets, linguistic variables and approximate reasoning introduced above. A typical architecture of an FLC is shown below; it comprises four principal components: a fuzzifier, a fuzzy rule base, an inference engine, and a defuzzifier.

Figure 3 Typical architecture of FLC


If the output from the defuzzifier is not a control action for a plant, then the system is a fuzzy logic decision system. The fuzzifier has the effect of transforming crisp measured data (e.g. speed is 10 mph) into suitable linguistic values (i.e. fuzzy sets, for example "speed is too slow"). The fuzzy rule base stores the empirical knowledge of the operation of the process from the domain experts. The inference engine is the kernel of an FLC, and it has the capability of simulating human decision making by performing approximate reasoning to achieve a desired control strategy. The defuzzifier is utilized to yield a non-fuzzy decision or control action from the fuzzy control action inferred by the inference engine.

A. Fuzzy Inference Engine
The main difference is that these fuzzy PID controllers are designed by employing fuzzy logic control principles and techniques, to obtain new controllers that possess analytical formulas very similar to the conventional digital PID controllers. In a fuzzy logic system, the membership function is the operation that translates crisp input data into a membership degree. The principle of fuzzy self-tuning PID is first to find the fuzzy relationship between the three PID parameters and the error (e) and change in error (ec). The fuzzy inference engine then modifies the three parameters on-line, to satisfy the demands of the control system, by constantly checking e and ec; thus the real plant obtains better dynamic and steady-state performance.

Figure 4. Two inputs and three outputs of the FLC

Figure 5. The membership functions of e and ec

The inference engine is the kernel of the FLC in modeling human decision making within the conceptual framework of fuzzy logic and approximate reasoning. The generalized modus ponens (forward, data-driven inference) plays an especially important role in approximate reasoning; it can be written as: Premise 1: IF x is A, THEN y is B. Premise 2: x is A′. Conclusion: y is B′, where A, A′, B and B′ are fuzzy predicates (fuzzy sets or relations) in the universal sets U, U′, V and V′, respectively. In general, a fuzzy control rule (e.g. Premise 1) is a fuzzy relation, expressed as a fuzzy implication


R = A → B. According to the compositional rule of inference, the conclusion B′ can be obtained by taking the composition of the fuzzy set A′ and the fuzzy relation (here a fuzzy implication) A → B: B′ = A′ ∘ R = A′ ∘ (A → B). Fuzzy systems are showing good promise in consumer products, industrial and commercial systems, and decision support systems. The term "fuzzy" refers to the ability to deal with imprecise or vague inputs: instead of using complex mathematical equations, fuzzy logic uses linguistic descriptions to define the relationship between the input information and the output action. In engineering systems, fuzzy logic provides a convenient and user-friendly front end for developing control programs, helping designers to concentrate on the functional objectives rather than on the mathematics. This introductory discussion has described the nature of fuzziness, shown how fuzzy operations are performed, and shown how fuzzy rules can incorporate the underlying knowledge. Fuzzy logic is a very powerful tool that is pervading every field, with many successful implementations.

VI. PARTICLE SWARM OPTIMIZATION

In PSO, each particle contains the three components Kp, Ki and Kd and updates them in each iteration to find pbest and gbest, so that the program converges to the optimal solution. PSO has many similarities with evolutionary computation techniques such as genetic algorithms (GA): the system is initialized with a population of random solutions and searches for optima by updating generations. In PSO, however, the potential solutions, called particles, fly through the problem space by following the current optimum particles. It has been demonstrated that PSO has advantages over other methods with respect to run time, cost and quality of result. Another attraction of PSO is that there are few parameters to adjust: one version, with slight variations, works well in a wide variety of applications. Particle swarm optimization has been used both for approaches that can be applied across a wide range of applications and for specific applications focused on a specific requirement.

$$v_i^{k+1} = w\,v_i^{k} + c_1\,\mathrm{rand}()\,\bigl(pbest_i - s_i^{k}\bigr) + c_2\,\mathrm{rand}()\,\bigl(gbest - s_i^{k}\bigr)$$

where v_i^k is the velocity of agent i at iteration k, w is the weighting (inertia) function, c1 and c2 are weighting factors, rand is a uniformly distributed random number between 0 and 1, s_i^k is the current position of agent i at iteration k, pbest_i is the pbest of agent i, and gbest is the gbest of the group.
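A minimal self-contained sketch of the update law above follows, with inertia weight w and clamped positions. The demo cost is a toy quadratic standing in for the error integral of the speed loop; the bounds, swarm size and coefficients are illustrative assumptions of this sketch.

```python
import random

def pso(cost, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
        lo=0.0, hi=20.0):
    """Minimal PSO implementing the velocity update above. `cost` maps a
    position vector (e.g. [Kp, Ki, Kd]) to a scalar to be minimized."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = pbest_cost.index(min(pbest_cost))
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            c = cost(pos[i])
            if c < pbest_cost[i]:           # update the particle's own best
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:          # and, if better, the group best
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Demo on a toy quadratic with minimum near ZN-like gains; in the paper's
# setting `cost` would instead run the speed-loop simulation and return
# an error integral such as the ISE.
target = (18.0, 12.8, 6.3)
print(pso(lambda p: sum((p[d] - target[d])**2 for d in range(3)), dim=3))
```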

VII. SIMULATION RESULTS


Figure 6. Combined output of PID, fuzzy and PSO controllers


VIII. CONCLUSION

A fuzzy self-adapting PID controller is used for turbine speed control, and the robustness of the system under the AFPIDC is compared with that of the system under the traditional PID controller. The MATLAB simulation results show that the AFPIDC improves robustness and gives smaller overshoot and faster response than the conventional PID. In the area of turbine speed control, the faster the response settles to stability, the better the result for the plant; comparing the proposed controllers, PSO-based tuning gives lower settling time, lower overshoot and a more stable response than the fuzzy controller.



Seismic Analysis of Performance based Design of Reinforced Concrete Building

1Prof. Rehan A Khan and 2Prof. T Naqvi
1,2Civil Engineering Department, AMU, Aligarh-202002, U.P, India

[email protected], [email protected]

Abstract—Performance-based design aims at controlling structural damage based on precise estimation of the relevant response parameters. Performance-based seismic design explicitly evaluates how a building is likely to perform, given the potential hazard it is likely to experience, considering uncertainties inherent in the quantification of the potential hazard and in the assessment of the actual building response. In the present study, performance-based seismic design is performed using a simple computer-based pushover analysis technique in SAP2000. The proposed method is illustrated by finding the seismic performance point for a four-storey reinforced concrete framed building located in Zone IV, symmetrical in plan and designed according to IS 456:2000. Two performance levels are considered: 1) under the Design Basis Earthquake (DBE), damage must be limited to Grade 2 (slight structural damage, moderate nonstructural damage) in order to enable Immediate Occupancy after the DBE; 2) under the Maximum Considered Earthquake (MCE), damage must be limited to Grade 3 (moderate structural damage, heavy nonstructural damage) in order to ensure Life Safety after the MCE. An extensive parametric study is conducted to investigate the effect of several important parameters on the performance point, including changes in column reinforcement and in the sizes of columns and beams, individually and in different combinations. The results of the analysis are compared in terms of base shear, spectral acceleration, spectral displacement and storey displacements.

Index Terms— Performance-based design, Pushover analysis, Design Basis Earthquake, Maximum Considered Earthquake.

I. INTRODUCTION

Amongst the natural hazards, earthquakes have the potential to cause the greatest damage. Since earthquake forces are random in nature and unpredictable, the engineering tools for analyzing structures under the action of these forces need to be sharpened. Performance-based design is gaining a new dimension in the seismic design philosophy, wherein the near-field ground motion (usually acceleration) is to be considered. Earthquake loads are to be carefully modeled so as to assess the real behavior of the structure, with a clear understanding that damage is expected but should be regulated. In this context, pushover analysis, an iterative procedure, can be looked upon as an alternative to the orthodox analysis procedures. This study focuses on pushover analysis of multistorey RC framed buildings, subjecting them to monotonically increasing lateral forces with an invariant height-wise distribution until a preset performance level (target displacement) is reached. The promise of performance-based seismic engineering (PBSE) is to produce structures with predictable seismic performance. To turn this promise into reality, a comprehensive and




well-coordinated effort by professionals from several disciplines is required. Finally, a parametric study is carried out to examine the effect of the chosen design parameters on the performance level of the RC building under earthquake forces.

II. MODELING APPROACH

The general finite element package SAP2000 has been used for the analyses. A three-dimensional model of the structure has been created to undertake the nonlinear analysis. Beams and columns are modeled as nonlinear frame elements with lumped plasticity at the start and the end of each element. SAP2000 provides default hinge properties and recommends PMM hinges for columns and M3 hinges for beams, as described in FEMA-356.
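The default hinges follow the piecewise-linear FEMA-356 backbone (points A-B-C-D-E). As a hedged illustration of how such a backbone is evaluated, the sketch below interpolates a normalized moment-rotation curve for an M3 hinge; the breakpoint rotations and moment ratios are placeholder values, not those assigned by SAP2000 or used in this paper.

    # Normalized FEMA-356-style moment-rotation backbone for an M3 hinge.
    # Breakpoints (plastic rotation in rad, M/My) are illustrative placeholders.
    POINTS = [(0.000, 0.0),   # A: origin
              (0.000, 1.0),   # B: yield (no plastic rotation before yield)
              (0.020, 1.1),   # C: capping point with slight hardening
              (0.020, 0.2),   # D: residual strength after the drop
              (0.040, 0.2)]   # E: ultimate rotation

    def moment_ratio(theta):
        # Linear interpolation along the backbone; beyond point E the hinge fails.
        if theta >= POINTS[-1][0]:
            return 0.0
        for (t0, m0), (t1, m1) in zip(POINTS, POINTS[1:]):
            if t0 <= theta <= t1:
                if t1 == t0:                 # vertical segment (yield, strength drop)
                    return m1
                return m0 + (m1 - m0) * (theta - t0) / (t1 - t0)
        return 0.0

    print(moment_ratio(0.010))               # 1.05, on the hardening branch B-C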

A. Assumptions
1. The material is homogeneous, isotropic and linearly elastic.
2. All column supports are considered fixed at the foundation.
3. The tensile strength of concrete is ignored in sections subjected to bending.
4. The superstructure is analyzed independently of the foundation and soil medium, on the assumption that the foundations are fixed.
5. The floors act as diaphragms, rigid in the horizontal plane.
6. The maximum target displacement of the structure is kept at 2.5% of the building height: (2.5/100) × 14 m = 0.35 m = 350 mm.

III. NUMERICAL STUDY

To illustrate the PBD procedure for finding the performance point, a four-storey concrete frame as shown in Figure 1 is taken as an example. The frame is designed according to IS 456:2000 (with the superimposed vertical loads) using STAAD Pro. The natural frequencies of the concrete frame are given in Table I; it is seen that they are quite widely spaced. The mass participation factor in the first mode is approximately 78%, which means the dynamic response is dominated by the first mode, so only the first four modes are considered. The frame is subjected to the response spectrum per IS 1893:2002 for 5% damping. The RC building (designed according to IS 456:2000) is analyzed using pushover analysis, then redesigned by changing the main reinforcement of various frame elements and analyzed again. The performance-based seismic engineering technique known as nonlinear static pushover analysis has been effectively used in this regard. The pushover analysis has been carried out using SAP2000. The various cases are described in Table IX.

A. Pushover analysis using SAP2000
Pushover analysis of the four-storey RC framed building, subjecting it to monotonically increasing lateral forces with an invariant height-wise distribution, is performed using SAP2000. Table II shows the roof displacement and ductility demand of the frame for the different performance levels. As expected, both the roof displacement and the ductility demand increase as the performance level moves from Operational to Collapse Prevention.
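The mechanics of such a pushover can be illustrated with a minimal load-controlled sketch on an idealized four-storey shear building with bilinear storey springs. This is not the SAP2000 model of the paper; the storey stiffnesses, yield shears, hardening ratio and load pattern below are illustrative assumptions only.

    import numpy as np

    def pushover(k, vy, alpha, pattern, target, dlam=0.001):
        # Scale an invariant lateral load pattern monotonically until the roof
        # displacement reaches the target. In a shear building the storey shears
        # are statically determinate, so each storey drift follows its own
        # bilinear backbone directly.
        lam, roof, curve = 0.0, 0.0, []
        while roof < target:
            lam += dlam
            F = lam * pattern                      # lateral force at each floor (bottom to top)
            V = np.cumsum(F[::-1])[::-1]           # storey shear = sum of forces at and above
            drift = np.where(V <= vy, V / k,       # elastic branch
                             vy / k + (V - vy) / (alpha * k))   # post-yield branch
            roof = drift.sum()                     # roof displacement = sum of storey drifts
            curve.append((roof, V[0]))             # capacity curve: (roof disp, base shear)
        return curve

    # Illustrative frame: storey stiffness (kN/m), yield shear (kN), 5% post-yield
    # stiffness, inverted-triangle load pattern, 0.35 m target displacement.
    k = np.array([40e3, 40e3, 35e3, 30e3])
    vy = np.array([220.0, 200.0, 160.0, 100.0])
    pattern = np.array([1.0, 2.0, 3.0, 4.0])       # proportional to height
    curve = pushover(k, vy, alpha=0.05, pattern=pattern, target=0.35)
    print("base shear at target: %.1f kN" % curve[-1][1])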

B. Effect of change of reinforcement in the columns (Case B to Case G)
Table III shows the effect of a change of reinforcement in the columns on the performance point. It is seen that as the reinforcement increases, the base shear increases and the roof displacement decreases, and vice versa.

C. Effect of change of size of the beams (Case H to Case K)
Table IV shows the effect of a change of beam size on the performance point. It is seen that as the size increases, the base shear increases and the roof displacement decreases, and vice versa.

D. Effect of change of size of the columns (Case L to Case O)
Table V shows the effect of a change of column size on the performance point. It is seen that as the size increases, the base shear increases and the roof displacement decreases drastically.



E. Effect of change of size of the columns and beams simultaneously (Case P to Case S)
Table VI shows the effect of changing the sizes of the columns and beams together on the performance point. It is seen that as the sizes increase, the base shear increases and the roof displacement decreases.

F. Effect of change of Response Reduction Factor (R) Table VII shows that the performance point is slightly affected by variation of Response Reduction Factor (R).

G. Performance Based Design
Table VIII compares the target roof displacement with the actual displacement observed at the Operational, Immediate Occupancy, Life Safety and Collapse Prevention performance levels. The performance-based design is obtained by increasing the main reinforcement of various frame elements by trial and error, so that the building performance level (after performing pushover analysis) lies at the Immediate Occupancy level, i.e., a roof displacement of 0.7% of the total building height (98 mm). It is seen that the actual roof displacement is less than the target displacement, so the design is safe.

Figure 1: Elevation of four storey two-bay RC frame

TABLE I: NATURAL FREQUENCIES

Mode   Period (sec)   Frequency (cycles/sec)
1      0.58738        1.7024
2      0.18571        5.3847
3      0.10453        9.5661

TABLE II: ROOF DISPLACEMENT AND DUCTILITY DEMAND

S.No.   Performance Level      Roof Displacement (mm)   Ductility Demand
1       Operational            21.602                   1.000
2       Immediate Occupancy    32.971                   1.526
3       Life Safety            85.271                   3.947
4       Collapse Prevention    165.426                  7.657
5       Complete Collapse      ∞                        ∞
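A quick sanity check on Table II: assuming the ductility demand is defined as the roof displacement at each level divided by the displacement at first yield (the Operational level), the tabulated values are reproduced to within rounding.

    yield_disp = 21.602   # roof displacement at the Operational level, mm (Table II)
    levels = [("Immediate Occupancy", 32.971),
              ("Life Safety", 85.271),
              ("Collapse Prevention", 165.426)]
    for name, disp in levels:
        print(name, round(disp / yield_disp, 3))   # 1.526, 3.947, 7.658 (Table II: 7.657)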

TABLE III: EFFECT OF CHANGE OF REINFORCEMENT IN COLUMNS ON PERFORMANCE POINT

S.No.   Case   Base Shear (kN)   % Change in Base Shear   Roof Displacement (mm)   % Change in Roof Displacement
1       A      134.663           -                        71.00                    -
2       B      134.722           -0.0468                  70.60                    0.56
3       C      134.795           -0.0980                  70.10                    1.27
4       D      135.259           -0.4422                  69.92                    1.52
5       E      134.659           0.0030                   71.50                    -0.70
6       F      134.646           0.0126                   71.90                    -1.27
7       G      134.323           0.2525                   72.30                    -1.83
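The sign convention in Tables III through VI appears to be %change = (baseline - case) / baseline × 100, so a reduction relative to case A shows as positive. Checking case C of Table III:

    base_A, disp_A = 134.663, 71.00   # case A baseline (Table III)
    base_C, disp_C = 134.795, 70.10   # case C: 20% increase in column reinforcement
    print(round((base_A - base_C) / base_A * 100, 4))   # -0.098: base shear rose slightly
    print(round((disp_A - disp_C) / disp_A * 100, 2))   # 1.27: roof displacement fell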



TABLE IV: EFFECT OF SIZE OF BEAMS OF THE FRAME ON PERFORMANCE POINT

S.No.   Case   Roof Displacement (m)   % Change in Roof Displacement   Base Shear (kN)   % Change in Base Shear
1       A      0.071                   -                               134.664           -
2       H      0.069                   2.817                           147.353           -9.423
3       I      0.066                   7.042                           159.494           -18.438
4       J      0.074                   -4.225                          122.194           9.260
5       K      0.080                   -12.676                         109.857           18.421

TABLE V: EFFECT OF SIZE OF COLUMNS OF THE FRAME ON PERFORMANCE POINT

S.No.   Case   Roof Displacement (m)   % Change in Roof Displacement   Base Shear (kN)   % Change in Base Shear
1       A      0.071                   -                               134.664           -
2       L      0.058                   18.310                          141.791           -5.292
3       M      0.057                   19.718                          148.504           -10.277
4       N      0.075                   -5.634                          126.317           6.198
5       O      0.083                   -16.901                         116.944           13.159

TABLE VI: EFFECT OF SIZE OF BEAMS AND COLUMNS OF THE FRAME ON PERFORMANCE POINT

S.No.   Case   Roof Displacement (m)   % Change in Roof Displacement   Base Shear (kN)   % Change in Base Shear
1       A      0.071                   -                               134.664           -
2       P      0.054                   23.944                          155.217           -15.262
3       Q      0.051                   28.169                          177.278           -31.645
4       R      0.086                   -21.127                         114.459           15.004
5       S      0.092                   -29.577                         95.163            29.333

TABLE VII: EFFECT ON PERFORMANCE POINT BY CHANGING THE DIFFERENT VALUES OF R

S.No.   Response Reduction Factor (R)   Spectral Displacement (Sd)   Spectral Acceleration (Sa)   Base Shear (V, kN)   Roof Displacement (∆, m)
1       2.0                             0.0589                       0.224                        134.685              0.0692
2       2.5                             0.0589                       0.224                        134.681              0.0695
3       3.0                             0.0590                       0.225                        134.676              0.0699
4       3.5                             0.0591                       0.225                        134.673              0.0704
5       4.0                             0.0591                       0.225                        134.671              0.0708
6       4.5                             0.0592                       0.226                        134.667              0.0709
7       5.0                             0.0593                       0.226                        134.664              0.0710

TABLE VIII: COMPARISON OF TARGET ROOF DISPLACEMENT AND ACTUAL DISPLACEMENT OBSERVED AT VARIOUS PERFORMANCE LEVELS

S.No.   Performance Level      Target Roof Displacement (% of Height)   Actual Displacement (% of Height)
1       Operational            0.37                                     0.15
2       Immediate Occupancy    0.70                                     0.23
3       Life Safety            2.50                                     0.61
4       Collapse Prevention    5.00                                     1.18
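For the 14 m building height given in the assumptions, the percentages in Table VIII convert to absolute roof displacements as follows; note that 0.7% of 14 m recovers the 98 mm Immediate Occupancy target quoted in Section G.

    H = 14000.0   # building height in mm (14 m, as in the assumptions)
    rows = [("Operational", 0.37, 0.15),
            ("Immediate Occupancy", 0.70, 0.23),
            ("Life Safety", 2.50, 0.61),
            ("Collapse Prevention", 5.00, 1.18)]
    for name, target_pct, actual_pct in rows:
        target, actual = H * target_pct / 100, H * actual_pct / 100
        print(f"{name}: target {target:.0f} mm, actual {actual:.0f} mm, safe: {actual < target}")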




TABLE IX: DESCRIPTION OF VARIOUS CASES

S.No.   Case   Description of Case
1       A      Basic structure
2       B      10% increase in columns reinforcement
3       C      20% increase in columns reinforcement
4       D      30% increase in columns reinforcement
5       E      10% decrease in columns reinforcement
6       F      20% decrease in columns reinforcement
7       G      30% decrease in columns reinforcement
8       H      10% increase in beams size
9       I      20% increase in beams size
10      J      10% decrease in beams size
11      K      20% decrease in beams size
12      L      10% increase in columns size
13      M      20% increase in columns size
14      N      10% decrease in columns size
15      O      20% decrease in columns size
16      P      10% increase in columns & beams size
17      Q      20% increase in columns & beams size
18      R      10% decrease in columns & beams size
19      S      20% decrease in columns & beams size

IV. CONCLUSIONS

Based on the present study, the following conclusions can be drawn:
1. Pushover analysis provides valuable information for the performance-based seismic design of building frames.
2. The ductility demand increases as the frame is pushed into the plastic range, and at infinite demand the structure collapses through a plastic mechanism.
3. The performance point obtained satisfies the acceptance criteria.
4. An increase in column reinforcement results in only a nominal change in base shear and roof displacement.
5. As the member sizes increase, the roof displacement decreases whereas the base shear increases.
6. As the member sizes decrease, the roof displacement increases whereas the base shear decreases.
7. The performance point is only slightly affected by variation of the Response Reduction Factor (R).

REFERENCES

[1] ASCE (2000), Prestandard and Commentary for the Seismic Rehabilitation of Buildings, FEMA 356 Report, prepared by the American Society of Civil Engineers for the Federal Emergency Management Agency, Washington, D.C.

[2] Computers and Structures Inc., SAP2000: Three Dimensional Static and Dynamic Finite Element Analysis and Design of Structures, Berkeley, California, U.S.A.

[3] A. Der Kiureghian (1981), "A Response Spectrum Method for Random Vibration Analysis of MDOF Systems," Earthquake Engineering and Structural Dynamics, 9, pp. 419-435.



Author Index

A
Abhay Kolhe 31
Abhilash M G 108, 116
Abida K 73
Anjana T R 91
Anna Rose Varghese 91
B
Blessy Joy A 81
D
Dilbin Sebastian 57
Divya V 16
F
Frenish C Francis 108, 116
G
George Jacob 57
Girish R 81
H
Hani Mol A A 57
Harikrishnan T 97
I
Indu Lakshmi B 57
J
John S Werner 1
L
Labeeb M 73
Lakshmi Sai A 11
Linu Alias 51
M
Malathi V 51
Mini P R 66
N
Nafeesa K 73
Naqvi T 124
Navin K S 62
Neerad Mohan 21
Neeraj N 66
Neeru Rai 1
Nikhitha K Nair 62
Niranjana K 26
R
Radhika V 26
Rajani S H 57
Rajeev V R 97
Raju Poddar 1
Ramakrishnan P V 66
Ramya K 11, 21
Rasitha K R 37
Rehan A Khan 124
Reji P 6
Renuka T K 6
Roshini Nair 43
S
Sarathchandradas M R 97
Sasikumar M 16
Shah Payal Jayeshkumar 102
Shalaka Jadhav 31
Sherin Thomas 37
Shreeja R 86
Soumya P V 86
Soya Chandra C S 62
V
Vijaykumar P M 37

Grenze ID: 01.GIJET.1.2.F6 © Grenze Scientific Society, 2015