
CHAPTER 5

ANN MODELING

Introduction

Neural Networks are capable of learning complex relationships in data. By

mimicking the functions of the brain, they can discern patterns in data, and then

extrapolate predictions when given new data. The problems Neural Networks are used

for can be divided into two general groups:

• Classification Problems: Problems in which you are trying to determine what type

of category an unknown item falls into. Examples include medical diagnoses and

prediction of credit repayment ability.

• Numeric Problems: Situations where you need to predict a specific numeric

outcome. Examples include stock price forecasting and predicting the level of sales

during a future time period.

NeuralTools Package

A neural network is a system that takes numeric inputs, performs computations

on these inputs, and outputs one or more numeric values. When a neural net is

designed and trained for a specific application, it outputs approximately correct values

for given inputs. For example, a net could have inputs representing some easily

measured characteristics of an abalone (a sea animal), such as length, diameter and

weight. The computations performed inside the net would result in a single number,

which is generally close to the age of the animal (the age of an abalone is harder to

determine). The inspiration for neural nets comes from the structure of the brain.

A brain consists of a large number of cells, referred to as "neurons". A neuron receives

impulses from other neurons through a number of "dendrites". Depending on the

impulses received, a neuron may send a signal to other neurons, through its single

"axon", which connects to dendrites of other neurons. Like the brain, artificial neural

nets consist of elements, each of which receives a number of inputs, and generates a

single output, where the output is a relatively simple function of the inputs.

The Structure of a Neural Net

The structure of a neural net consists of connected units referred to as "nodes" or

"neurons". Each neuron performs a portion of the computations inside the net: a

neuron takes some numbers as inputs, performs a relatively simple computation on

these inputs, and returns an output. The output value of a neuron is passed on as one

of the inputs for another neuron, except for neurons that generate the final output

values of the entire system. Neurons are arranged in layers. The input layer neurons

receive the inputs for the computations, like the length, diameter, and weight of an

individual abalone. These values are passed to the neurons in the first hidden layer,

which perform computations on their inputs and pass their outputs to the next layer.

This next layer could be another hidden layer, if there is one. The outputs from the

neurons in the last hidden layer are passed to the neuron or neurons that generate the

final outputs of the net, like the age of the abalone.
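The layer-by-layer flow of values just described can be sketched in a few lines of Python. This is a hypothetical illustration, not NeuralTools code: the weights, biases, and layer sizes are made up, and a tanh function stands in for the neurons' "relatively simple computation".

```python
import math

def neuron_output(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias, passed through tanh.
    s = sum(i * w for i, w in zip(inputs, weights)) + bias
    return math.tanh(s)

def feed_forward(inputs, layers):
    # 'layers' is a list of layers; each layer is a list of (weights, bias)
    # pairs, one per neuron. Each layer's outputs feed the next layer.
    values = inputs
    for layer in layers:
        values = [neuron_output(values, w, b) for w, b in layer]
    return values

# Toy net: 3 inputs (length, diameter, weight) -> 2 hidden neurons -> 1 output.
hidden = [([0.5, -0.2, 0.1], 0.0), ([0.3, 0.8, -0.5], 0.1)]
output = [([1.0, 1.0], 0.0)]
print(feed_forward([0.4, 0.3, 0.2], [hidden, output]))
```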

Numeric and Category Prediction

When neural nets are used to predict numeric values, they typically have just

one output. This is because single-output nets are more reliable than multiple-output

nets, and almost any prediction problem can be addressed using single-output nets.

For example, instead of constructing a single net to predict the volume and the price

for a stock on the following day, it is better to build one net for price predictions, and

one for volume predictions. On the other hand, neural nets have multiple outputs when

used for classification/category prediction. For example, suppose that we want to

predict whether the price of a stock the following day will "rise more than 1%", "fall more

than 1%", or "not change more than 1%". Then the net will have three numeric

outputs, and the greatest output will indicate the category selected by the net.
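Selecting the category from the numeric outputs is simply a matter of taking the largest one; a minimal sketch (the category labels are illustrative):

```python
def classify(outputs, categories):
    # The predicted category corresponds to the neuron with the highest output.
    best = max(range(len(outputs)), key=lambda i: outputs[i])
    return categories[best]

categories = ["rise > 1%", "fall > 1%", "change <= 1%"]
print(classify([0.2, 0.7, 0.1], categories))  # -> "fall > 1%"
```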

Training a Net

Training a net is the process of fine-tuning the parameters of the computation,

where the purpose is to make the net output approximately correct values for given

inputs. This process is guided by training data on the one hand, and the training

algorithm on the other. The training algorithm selects various sets of computation

parameters, and evaluates each set by applying the net to each training case to

determine how good the answers given by the net are. Each set of parameters is a

"trial"; the training algorithm selects new sets of parameters based on the results of

previous trials.

Computer Processing of Neural Nets

A neural net is a model of computations that can be implemented in various

types of computer hardware. A neural net could be built from small processing

elements, with each performing the work of a single neuron. However, neural nets

are typically implemented on a computer with a single powerful processor, like

most computers currently in use. With single-processor computers, a program like

NeuralTools uses the same processor to perform each neuron's computations; in this

case the concept of a neuron describes part of the computations needed to obtain a

prediction, as opposed to a physical processing element.

Types of Neural Networks

There are various types of neural networks, differing in structure, kinds of

computations performed inside neurons, and training algorithms. One type offered in

NeuralTools is the Multi-Layer Feedforward Network. With MLF nets, a

NeuralTools user can specify if there should be one or two layers of hidden neurons,

and how many neurons the hidden layers should contain (NeuralTools provides help

with making appropriate selections, as described in the section on MLF nets).

NeuralTools also offers Generalized Regression Neural Nets and Probabilistic

Neural Nets; these are closely related, with the former used for numeric prediction,

and the latter for category prediction/classification. With GRN/PN nets there is no

need for the user to make decisions about the structure of a net. These nets always

have two hidden layers of neurons, with one neuron per training case in the first

hidden layer, and the size of the second layer determined by some facts about

training data.

The remaining sections of this chapter discuss in more detail each type of neural

network offered in NeuralTools.

Multi-Layer Feedforward Nets

Multi-Layer Feedforward Networks (also referred to as "Multi-Layer Perceptron

Networks") are systems capable of approximating complex functions, and thus

capable of modeling complex relationships between independent variables and a

dependent one.

MLF Architecture

The diagram below shows an MLF net for numeric prediction with three independent

numeric variables; the net was configured to have 2 neurons/nodes in the first hidden

layer, and 3 neurons/nodes in the second hidden layer.

The behavior of the net is determined by:

• Its topology (the number of hidden layers and the numbers of nodes in those layers)

• The "weights" of connections (a parameter assigned to each connection) and bias terms (a parameter assigned to each neuron)

• The activation/transfer function, used to convert the inputs of each neuron into its output

Specifically, a hidden neuron with n inputs first computes a weighted sum of its

inputs:

Sum = in1 * w1 + in2 * w2 + ... + inn * wn + bias

where in1 to inn are the outputs of neurons in the previous layer, while w1 to wn are the

connection weights; each neuron has its own bias value.

Then the activation function is applied to the Sum to generate the output of the

neuron.

A sigmoid (s-shaped) function is used as the activation function in hidden layer

neurons. Specifically, NeuralTools uses the hyperbolic tangent function. In

NeuralTools the output neuron uses identity as the activation function; that is, it

simply returns the weighted sum of its inputs. Neural nets are sometimes constructed

with sigmoid activation functions in output neurons. However, that is not needed for

a neural net to be able to approximate complex functions.

Moreover, sigmoid functions have restricted output range (-1 to 1 for the

hyperbolic tangent function), and there will typically be dependent values outside the

range. Thus using a sigmoid function in the output neuron would force an additional

transformation of output values before passing training data to the net.
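The hidden-neuron and output-neuron computations described above can be sketched as follows (the weights and bias values are made-up illustrations; in NeuralTools they are set by training):

```python
import math

def hidden_neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus the neuron's bias...
    s = sum(i * w for i, w in zip(inputs, weights)) + bias
    # ...then the hyperbolic-tangent activation, bounded in (-1, 1).
    return math.tanh(s)

def output_neuron(inputs, weights, bias):
    # Identity activation: the output is simply the weighted sum itself,
    # so its range is unrestricted.
    return sum(i * w for i, w in zip(inputs, weights)) + bias

h = hidden_neuron([1.0, 2.0], [0.5, -0.25], 0.1)   # tanh(0.1)
print(output_neuron([h], [2.0], 0.0))
```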

When MLF nets are used for classification, they have multiple output neurons,

one corresponding to each possible dependent category. A net classifies a case by

computing its numeric outputs; the selected category is the one corresponding to the

neuron that outputs the highest value.

MLF Net Training

Training an MLF net consists of finding a set of connection weights and bias terms

that will get the net to generally give right answers when presented with new cases

(for simplicity the bias term will be omitted in the presentation below). Training

starts by assigning a set of randomly selected connection weights. A prediction is

made for each training case (by presenting independent values as inputs to obtain the

output). The output will most likely be different from the known dependent value.

Thus for each training case we have an error value. From these we compute an error

measure for the entire training set; it tells us how well the net does given the initial

weights.

The net will probably not do very well with the random initial assignment of weights,

and we proceed to subsequent trials: other assignments of weights. However, the

assignments of weights are no longer random, but rather are decided by our training

algorithm: the method for selecting connection weights based on results of previous

trials. The problem is one of optimization: we want to minimize the error measure by

changing connection weights.
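The trial-and-error structure of training can be sketched with a deliberately simple random search over a one-weight model; real training algorithms choose each new trial far more cleverly, based on the results of previous trials, but the loop has the same shape (the data and model are toy examples):

```python
import random

# Toy training data: dependent value is roughly 2x the independent value.
cases = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

def error_measure(w):
    # Mean squared error of the one-parameter "net" y = w * x over all cases.
    return sum((w * x - y) ** 2 for x, y in cases) / len(cases)

random.seed(0)
best_w, best_err = None, float("inf")
for trial in range(200):
    w = random.uniform(-5, 5)   # a new trial set of "connection weights"
    err = error_measure(w)      # evaluate the trial on the training set
    if err < best_err:
        best_w, best_err = w, err

print(best_w, best_err)
```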

Error Measures

The error measure used when training numeric prediction nets is the Mean Squared

Error over all the training cases, that is the mean squared difference between the

correct answer, and the answer given by the net. With classification, we have more

than one output for each training case (with one output corresponding to each

dependent category). We compute the Mean Squared Error over all the outputs for all

the training cases, by reference to the desired output values: for each training case we

want the output value to be close to 1 for the output corresponding to the correct

category, and we want the remaining output values to be close to 0.
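Both error measures can be written out directly, using the one-of-n target values of 1 and 0 described above (a generic sketch, not NeuralTools' internal code):

```python
def mse_numeric(predictions, targets):
    # Mean squared difference between net outputs and correct answers.
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

def mse_classification(outputs, correct_indices):
    # 'outputs' holds one list of output values per training case; the desired
    # value is 1 for the correct category's output and 0 for every other output.
    total, count = 0.0, 0
    for outs, correct in zip(outputs, correct_indices):
        for i, o in enumerate(outs):
            desired = 1.0 if i == correct else 0.0
            total += (o - desired) ** 2
            count += 1
    return total / count

print(mse_numeric([2.5, 0.0], [3.0, 1.0]))          # (0.25 + 1.0) / 2 = 0.625
print(mse_classification([[0.9, 0.2, 0.1]], [0]))
```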

Training Time

The NeuralTools MLF training algorithm restarts itself multiple times from different

initial starting weights. Therefore, the longer a net is trained, the better: the more

times it is allowed to restart itself, the more likely it is that the global minimum of the

error function will be found.

Topology Selection

The selection of the number of layers and the numbers of neurons in the layers

determines whether the net is capable of learning the relationship between the

independent variables and the dependent one. Typically a net with a single hidden

layer and two hidden neurons will not train to a satisfactory error level. However,

increasing the number of layers and neurons comes at a price that is often not worth

paying. A single hidden layer is sufficient for almost any problem; using two layers

will typically result in unnecessarily long training times. Moreover, a few neurons in

a single hidden layer are typically sufficient.

NeuralTools can auto-configure the net topology based on training data.

However, the Best Net Search feature offers a more reliable approach. As part of the

Best Net Search a range of single-hidden-layer nets with different numbers of neurons

will be trained. By default, five MLF nets, with 2 to 6 hidden neurons will be

included. If sufficient time is available, the range can be broadened; but it is

recommended that it start with a 2-neuron net, for reasons related to preventing over-training.

Preventing Over-Training

The term "over-training" refers to the situation where the net learns not only the

general characteristics of the relationship between independent variables and the

dependent one, but also particular facts about the training cases that will not apply

in general; that is, they will not apply to cases not included in training. Sometimes to

address this problem, the testing set is divided into a testing-while-training set, and the

proper testing set, to be used after training. The error on the testing-while-training set

is periodically computed during training. When it starts to increase, this is taken as

evidence that the net is beginning to over-train, and training is stopped.
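A minimal sketch of the testing-while-training rule just described (the error values are illustrative, not real training output; here training is cut off after two consecutive rises in the testing error):

```python
def stop_epoch(test_errors, patience=2):
    # Stop once the testing-while-training error has risen for 'patience'
    # consecutive checks; return the epoch index at which training stops.
    rises = 0
    for epoch in range(1, len(test_errors)):
        if test_errors[epoch] > test_errors[epoch - 1]:
            rises += 1
            if rises >= patience:
                return epoch
        else:
            rises = 0
    return len(test_errors) - 1

# Error falls, then starts climbing: training is cut off at the climb.
errors = [0.90, 0.55, 0.40, 0.35, 0.37, 0.41, 0.48]
print(stop_epoch(errors))  # -> 5
```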

NeuralTools takes a different approach to preventing over-training. The

approach with two distinct testing sets is often unrealistic, insofar as typically there

is not enough data to split into a training set and two testing sets. Also, the increase

of error on a testing-while-training set is not a reliable indicator of over-training; the

increase could be local, and the error might continue to decrease with more training.

NeuralTools’ Best Net Search is designed to prevent over-training. With default

settings, Best Net Search will start with a net with 2 neurons, which is typically too

small to get over-trained. With default settings it will train nets with up to 6 neurons.

If the nets with 5 and 6 neurons over-train that will show in the results from the

single testing set; one of the nets with 2, 3 or 4 neurons will have the lowest testing

error.

Generalized Regression Neural Nets

GRN nets are used for numeric prediction/function approximation.

Architecture

A Generalized Regression Neural Net for two independent numeric variables is

structured as shown in the graph (assuming there are just three training cases):

The Pattern Layer contains one node for each training case. Presenting a

training case to the net consists here of presenting two independent numeric values.

Each neuron in the pattern layer computes its distance from the presented case. The

values passed to the Numerator and Denominator Nodes are functions of the distance

and the dependent value. The two nodes in the Summation Layer sum their inputs,

while the Output Node divides them to generate the prediction.

The distance function computed in the Pattern Layer neurons uses "smoothing

factors"; every input has its own "smoothing factor" value. With a single input, the

greater the value of the smoothing factor, the more significant distant training cases

become for the predicted value. With 2 inputs, the smoothing factor relates to the

distance along one axis on a plane, and in general, with multiple inputs, to one

dimension in multi-dimensional space.

Training a GRN net consists of optimizing smoothing factors to minimize the

error on the training set, and the Conjugate Gradient Descent optimization method is

used to accomplish that. The error measure used during training to evaluate different

sets of smoothing factors is the Mean Squared Error. However, when computing the

Squared Error for a training case, that case is temporarily excluded from the Pattern

Layer. This is because the excluded neuron would compute a zero distance, making

other neurons insignificant in the computation of the prediction.
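The pattern/summation/output computation described above can be sketched with a generic GRNN formula, using a Gaussian weighting of distances and one smoothing factor per input (an illustrative version; NeuralTools' internals may differ in detail):

```python
import math

def grnn_predict(x, train_x, train_y, smoothing):
    # Pattern layer: each training case contributes a weight based on its
    # distance to the presented case, scaled per-input by a smoothing factor.
    num = den = 0.0
    for tx, ty in zip(train_x, train_y):
        d2 = sum(((a - b) / s) ** 2 for a, b, s in zip(x, tx, smoothing))
        k = math.exp(-d2 / 2.0)
        num += ty * k    # numerator node: distance-weighted dependent values
        den += k         # denominator node: sum of the weights
    return num / den     # output node: the division yields the prediction

train_x = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
train_y = [0.0, 1.0, 2.0]
print(grnn_predict((1.0, 1.0), train_x, train_y, smoothing=(0.5, 0.5)))
```

Larger smoothing factors shrink the distances, so distant training cases contribute more to the prediction, matching the description above.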

Advantages of GRN/PN nets:

• Train fast

• Do not require topology specification (numbers of hidden layers and nodes)

• PN nets not only classify, but also return the probabilities that the case falls in the different possible dependent categories

Advantages of MLF nets:

• Smaller in size, thus faster to make predictions

• More reliable outside the range of training data (for example, when the value of some independent variable falls outside the range of values for that variable in the training data); though note that prediction outside the range of training data is still risky with MLF nets

• Capable of generalizing from very small training sets

Input Transformation

NeuralTools scales numeric variables before training, so that the values of

each variable are approximately in the same range. This is done to equalize the effect

variables have on net output during initial stages of training. When a variable is not

significant for making correct predictions, this will be reflected during training by

reducing the weights of connections leading from an input to first-hidden-layer

neurons. However, if that insignificant variable happens to have a larger order of

magnitude than other variables, its connection weights must be reduced that much more to

compensate for the greater values.

The scaling uses the mean and the standard deviation for each variable,

computed on the training set. The mean is subtracted from each value, and the result

is divided by the standard deviation. The same scaling parameters are used when

testing the trained net or using it to make predictions.
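The scaling step just described is standard z-score scaling; a sketch (function names are illustrative, not NeuralTools API):

```python
import statistics

def fit_scaler(values):
    # Compute the scaling parameters on the training set only.
    return statistics.mean(values), statistics.pstdev(values)

def scale(value, mean, std):
    # Subtract the training-set mean, divide by the training-set std. dev.
    return (value - mean) / std

train = [10.0, 20.0, 30.0]
mean, std = fit_scaler(train)
print([scale(v, mean, std) for v in train])   # centred on 0
print(scale(40.0, mean, std))                 # same parameters for new cases
```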

Category/symbolic data cannot be used directly with a neural net, which takes

numbers as inputs. Consequently, every independent category variable is represented

by a number of numeric net inputs, one for every possible category. The "one-of-n"

conversion method is used.
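A sketch of the one-of-n conversion (the category values here are arbitrary examples):

```python
def one_of_n(value, categories):
    # One numeric input per possible category:
    # 1 for the case's own category, 0 for every other category.
    return [1.0 if value == c else 0.0 for c in categories]

categories = ["red", "green", "blue"]
print(one_of_n("green", categories))  # -> [0.0, 1.0, 0.0]
```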

CONJUGATE GRADIENT DESCENT METHOD

The conjugate-gradient method is a general purpose simultaneous equation

solving method ideal for geophysical inversion and imaging. A simple form of the

algorithm iteratively searches the plane of the gradient and the previous step.

INTRODUCTION

The solution time for simultaneous linear equations grows cubically with the

number of unknowns. For equations with hundreds of unknowns the solutions require

minutes to hours. The number of unknowns must somehow be reduced by theoretical

means, or else numerical approximation methods must be used. A numerical technique

known as the conjugate-gradient method provides good approximations.

The conjugate-gradient method is an all-purpose optimizer and simultaneous

equation solver. It is useful for systems of arbitrarily high order because its iterations

can be interrupted at any stage and the partial result is an approximation that is often

useful. Like most simultaneous-equation solvers, it attains the exact answer (assuming

exact arithmetic) in a finite number of steps. The conjugate-gradient method is

really a family of methods. There are perhaps a dozen or more forms of the conjugate-

gradient algorithm. The various methods differ in treatment of underdetermined

systems, accuracy in treating ill conditioned systems, space requirements, and

numbers of dot products.

CHOICE OF DIRECTION

Any collection of search lines can be used for function minimization. Even if

the lines are random, the descent can reach the desired extremum because if the value

does not decrease when moving one way along the line, it almost certainly decreases

when moving the other way.

In the conjugate-gradient method a line is not searched. Instead a plane is

searched. A plane is made from an arbitrary linear combination of two vectors. Take

one vector to be the gradient vector g. Take the other vector to be the previous

descent step vector, say s = x_j − x_{j−1}. Instead of αg, a linear combination is needed, say

αg + βs (α and β are the distances to be determined). For minimizing quadratic

functions the plane search requires only the solution of a two-by-two set of linear

equations for α and β. (For nonquadratic functions a plane search is considered

intractable, whereas a line search proceeds by bisection.)
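For a symmetric positive-definite system Ax = b, the classic textbook form of the algorithm can be sketched as follows (a generic version, not the geophysical variant discussed above); note how each new search direction combines the current gradient (residual) with the previous direction:

```python
def conjugate_gradient(A, b, iterations=None):
    # Solve A x = b for symmetric positive-definite A (pure-Python lists).
    n = len(b)
    x = [0.0] * n
    r = b[:]                  # residual b - A x (x starts at zero)
    s = r[:]                  # search direction
    rr = sum(v * v for v in r)
    for _ in range(iterations or n):   # exact in n steps, with exact arithmetic
        As = [sum(A[i][j] * s[j] for j in range(n)) for i in range(n)]
        alpha = rr / sum(s[i] * As[i] for i in range(n))
        x = [x[i] + alpha * s[i] for i in range(n)]
        r = [r[i] - alpha * As[i] for i in range(n)]
        rr_new = sum(v * v for v in r)
        if rr_new < 1e-20:
            break
        beta = rr_new / rr     # mixes the gradient with the previous direction
        s = [r[i] + beta * s[i] for i in range(n)]
        rr = rr_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
print(conjugate_gradient(A, b))  # -> [1/11, 7/11], approximately
```

Interrupting the loop early returns the partial result, which, as noted above, is often a useful approximation.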

ANN RESULTS:

Table 7 – ANN Predictions for Solid and Multichannel Electrodes

Trial P I Ton Toff Pd | SOLID: MRR EWR SR | 1MM: MRR EWR SR | 1.5MM: MRR EWR SR | 2MM: MRR EWR SR

1 + 4 200 20 0.25 19.67 1.85 3.39 18.82 4.83 3.44 32.85 7.12 3.88 22.91 8.06 3.01

2 + 4 200 20 0.5 19.67 1.66 4.57 16.14 4.70 3.44 32.85 12.82 4.29 22.92 19.79 5.85

3 + 4 200 20 0.75 18.03 1.69 5.44 19.71 4.50 3.35 32.86 12.85 4.29 28.02 20.07 3.05

4 + 4 200 40 0.25 19.67 1.83 4.86 15.14 4.77 3.13 32.87 7.24 3.95 23.06 6.38 3.00

5 + 4 200 40 0.5 19.67 1.50 5.99 14.73 4.61 3.47 32.85 7.85 3.74 24.76 6.43 3.00

6 + 4 200 40 0.75 18.07 1.50 5.67 18.38 4.35 3.87 32.85 7.65 4.29 22.97 7.34 4.48

7 + 4 200 60 0.25 19.70 1.61 3.06 13.68 4.70 4.04 48.38 6.96 4.06 26.94 6.36 3.01

8 + 4 200 60 0.5 28.49 1.58 4.38 13.52 4.49 4.05 32.85 7.59 3.73 19.88 1.44 3.00

9 + 4 200 60 0.75 67.96 1.56 4.42 15.79 4.18 4.06 32.85 10.18 3.82 18.55 1.37 3.00

10 + 4 400 20 0.25 19.67 1.69 7.36 25.43 2.42 5.11 32.86 7.10 5.36 75.17 4.33 5.80

11 + 4 400 20 0.5 18.84 1.69 4.36 23.71 2.25 5.10 32.85 7.91 5.86 52.26 4.45 5.86

12 + 4 400 20 0.75 14.33 1.69 4.97 29.89 2.13 5.09 33.07 12.97 5.18 48.88 12.12 5.86

13 + 4 400 40 0.25 19.74 1.50 7.54 33.74 2.32 5.09 40.67 1.79 5.31 57.25 5.35 3.07

14 + 4 400 40 0.5 34.13 1.64 5.16 40.30 2.18 5.14 32.79 1.82 5.85 43.75 6.37 4.60

15 + 4 400 40 0.75 35.84 1.69 6.22 45.74 2.08 5.65 32.76 9.24 5.35 48.34 6.18 5.86

16 + 4 400 60 0.25 70.01 1.41 6.83 53.69 2.24 5.65 109.19 1.80 5.31 61.54 6.38 7.08

17 + 4 400 60 0.5 69.84 1.08 4.39 55.76 2.12 5.65 17.29 2.60 5.30 43.90 5.78 3.00

18 + 4 400 60 0.75 36.27 1.46 4.41 65.74 2.03 5.65 16.87 7.53 5.49 47.38 1.32 3.10

19 + 4 600 20 0.25 11.00 0.83 9.54 72.11 1.36 5.22 36.12 3.28 9.25 103.78 4.33 6.98

20 + 4 600 20 0.5 68.78 1.75 11.71 73.68 1.18 5.11 32.31 3.29 9.24 108.12 4.33 5.86

21 + 4 600 20 0.75 130.00 2.22 11.69 74.63 1.05 5.10 84.92 3.80 5.87 94.82 4.34 5.86

22 + 4 600 40 0.25 37.71 1.69 7.62 33.68 1.26 5.10 106.67 3.27 8.70 91.38 4.33 7.66

23 + 4 600 40 0.5 37.18 1.69 6.73 31.10 1.32 5.65 16.11 3.28 6.96 78.97 4.54 5.89

24 + 4 600 40 0.75 56.74 1.69 10.75 33.13 1.49 5.66 16.25 3.28 5.86 53.12 6.30 5.86

25 + 4 600 60 0.25 38.49 1.08 6.11 16.16 1.18 5.65 108.60 2.07 8.66 104.93 2.27 7.10

26 + 4 600 60 0.5 36.26 1.18 5.00 15.99 1.05 5.65 64.07 1.77 5.37 57.89 6.37 7.08

27 + 4 600 60 0.75 36.34 1.27 5.76 46.36 1.39 5.69 48.14 3.30 5.86 47.47 2.19 5.86

28 + 8 200 20 0.25 19.67 6.64 5.85 18.94 9.28 4.32 32.86 7.84 5.41 22.96 35.80 3.00

29 + 8 200 20 0.5 19.75 7.06 3.83 18.81 9.25 4.25 33.10 14.32 4.29 21.33 24.72 4.66

30 + 8 200 20 0.75 22.24 2.61 4.57 29.27 9.21 3.79 107.38 14.34 4.29 29.86 15.07 5.86

31 + 8 200 40 0.25 19.67 6.99 6.54 18.84 9.27 3.13 32.88 7.35 5.20 22.69 35.13 3.00

32 + 8 200 40 0.5 19.91 7.34 5.31 21.52 9.23 3.13 32.86 13.19 4.29 25.88 18.56 3.00

33 + 8 200 40 0.75 20.87 5.77 5.06 23.68 9.17 3.32 33.21 14.32 4.29 27.70 17.10 3.11

34 + 8 200 60 0.25 50.36 7.31 6.85 19.40 9.25 3.66 52.98 10.67 5.24 52.05 17.15 3.12

35 + 8 200 60 0.5 70.10 6.64 5.50 43.53 9.20 3.68 32.85 13.88 3.75 41.56 6.94 3.00

36 + 8 200 60 0.75 67.27 2.73 5.50 61.53 9.14 3.68 32.86 13.49 4.47 40.31 18.04 3.00

37 + 8 400 20 0.25 73.17 3.70 5.58 45.02 8.49 6.29 33.03 8.61 6.22 79.23 17.50 3.63

38 + 8 400 20 0.5 84.28 2.47 5.16 70.69 8.65 6.29 91.27 8.61 5.86 90.00 19.47 5.87

39 + 8 400 20 0.75 142.81 2.91 10.52 82.04 8.69 6.25 194.79 13.49 5.85 108.96 18.84 5.87

40 + 8 400 40 0.25 58.61 2.95 5.43 66.47 8.37 5.10 43.36 3.54 5.31 98.11 6.38 4.64

41 + 8 400 40 0.5 84.05 2.22 4.01 92.63 8.60 5.11 33.09 7.60 5.86 83.44 7.07 3.02

42 + 8 400 40 0.75 88.35 2.40 5.81 92.28 8.67 5.77 104.74 14.32 5.85 95.06 14.76 5.86

43 + 8 400 60 0.25 70.42 2.69 6.83 94.40 8.23 5.65 123.93 6.19 5.31 103.41 2.51 7.10

44 + 8 400 60 0.5 68.84 2.16 6.17 61.18 8.55 5.66 31.92 6.16 5.69 97.33 1.32 3.05

45 + 8 400 60 0.75 33.80 2.67 5.43 16.30 8.66 5.96 32.67 9.61 7.03 94.24 1.75 3.00

46 + 8 600 20 0.25 124.03 2.78 11.83 241.75 5.65 6.66 82.06 3.37 9.26 180.12 4.36 12.59

47 + 8 600 20 0.5 157.21 2.92 12.20 193.33 6.15 8.48 194.53 6.39 9.25 177.43 7.88 8.72

48 + 8 600 20 0.75 179.23 2.92 11.05 201.59 7.07 9.16 195.31 8.56 6.04 177.51 18.23 8.66

49 + 8 600 40 0.25 94.19 1.53 9.62 241.47 5.58 5.97 118.03 3.28 9.24 129.82 4.41 9.74

50 + 8 600 40 0.5 93.47 2.40 11.08 201.61 5.98 8.24 81.72 3.28 8.96 154.00 6.16 9.15

51 + 8 600 40 0.75 151.91 2.88 8.73 201.60 6.81 8.56 193.09 3.28 5.88 132.38 1.51 8.04

52 + 8 600 60 0.25 93.65 1.62 6.60 106.41 5.53 7.08 118.93 6.39 8.71 113.45 6.32 8.78

53 + 8 600 60 0.5 91.71 1.58 10.72 199.22 5.84 8.43 94.48 5.04 6.32 110.45 1.36 8.36

54 + 8 600 60 0.75 83.28 2.79 10.03 201.58 6.56 8.57 117.70 4.00 7.96 52.35 1.35 6.35

55 + 12 200 20 0.25 113.91 6.04 5.70 36.80 7.13 4.33 96.96 17.92 5.86 39.63 34.98 5.62

56 + 12 200 20 0.5 138.71 6.17 3.31 39.85 7.12 4.35 194.87 17.56 4.59 35.78 30.83 5.43

57 + 12 200 20 0.75 139.30 6.57 3.38 161.29 7.09 4.66 195.32 15.03 4.45 82.58 30.94 7.98

58 + 12 200 40 0.25 34.76 6.04 4.81 36.78 7.13 4.32 33.16 13.45 5.38 40.03 30.82 4.62

59 + 12 200 40 0.5 117.74 6.04 5.61 36.78 7.11 4.25 110.75 15.37 6.19 40.53 30.91 4.20

60 + 12 200 40 0.75 174.39 6.04 4.65 37.90 7.07 4.40 195.00 14.54 6.52 48.73 35.14 3.83

61 + 12 200 60 0.25 70.29 6.52 7.94 36.78 7.12 3.59 58.50 20.09 7.51 40.61 30.87 5.68

62 + 12 200 60 0.5 42.58 6.04 7.84 36.94 7.09 3.80 33.24 19.09 7.10 42.97 34.15 3.28

63 + 12 200 60 0.75 34.78 6.14 5.50 85.39 7.05 5.01 124.75 18.82 6.54 43.75 44.44 3.17

64 + 12 400 20 0.25 122.71 6.60 8.22 134.60 6.55 6.44 194.65 9.78 8.33 123.92 22.82 5.96

65 + 12 400 20 0.5 185.99 6.55 11.32 134.69 6.32 7.78 195.32 8.83 5.89 189.98 22.85 8.79

66 + 12 400 20 0.75 155.57 2.54 8.92 156.29 5.99 7.91 195.32 13.88 6.94 195.90 15.02 8.81

67 + 12 400 40 0.25 134.41 6.04 6.91 140.68 6.63 7.13 108.32 8.85 6.48 104.82 31.01 9.89

68 + 12 400 40 0.5 187.44 6.05 9.01 173.91 6.49 7.89 194.84 8.67 8.02 110.01 20.84 5.94

69 + 12 400 40 0.75 137.79 2.23 10.27 174.25 6.28 8.36 195.32 14.35 8.11 158.42 15.56 8.42

70 + 12 400 60 0.25 125.46 10.58 7.82 43.93 6.67 6.96 125.24 7.84 7.56 102.36 18.63 10.04

71 + 12 400 60 0.5 89.94 6.03 7.96 97.65 6.59 7.33 109.44 9.74 8.11 102.00 19.27 7.24

72 + 12 400 60 0.75 103.98 9.29 7.34 195.28 6.46 7.36 224.33 13.89 8.11 102.82 24.47 5.94

73 + 12 600 20 0.25 157.33 6.26 11.36 205.63 3.27 9.22 202.28 8.40 9.26 276.90 18.02 12.83

74 + 12 600 20 0.5 156.09 1.47 11.37 241.61 3.09 9.22 195.32 8.61 9.60 199.16 18.08 8.81

75 + 12 600 20 0.75 166.25 1.66 9.53 173.81 2.97 9.22 195.34 8.61 9.82 183.76 14.94 8.81

76 + 12 600 40 0.25 165.21 5.52 12.01 241.17 3.58 9.22 286.12 3.31 11.10 276.21 7.19 10.04

77 + 12 600 40 0.5 155.44 1.58 13.16 201.07 3.28 9.77 202.86 3.29 11.49 248.79 14.45 11.27

78 + 12 600 40 0.75 127.31 2.67 12.94 183.39 3.09 9.75 227.33 3.44 8.12 183.83 15.06 8.81

79 + 12 600 60 0.25 125.52 13.35 12.08 241.73 4.03 8.62 201.47 7.75 11.11 229.42 1.31 10.04

80 + 12 600 60 0.5 109.89 4.09 12.59 184.63 3.59 8.58 265.03 7.74 9.20 203.85 1.37 10.04

81 + 12 600 60 0.75 112.70 11.68 13.61 201.60 3.29 8.57 211.81 7.78 8.11 154.10 6.70 6.67

82 - 4 200 20 0.25 19.67 2.05 4.32 19.08 4.88 3.40 33.07 24.64 3.73 23.86 28.50 5.86

83 - 4 200 20 0.5 19.28 1.75 5.43 79.86 5.63 3.25 32.85 18.34 4.18 23.94 18.08 5.86

84 - 4 200 20 0.75 18.05 1.69 5.45 31.72 6.07 3.23 32.87 14.22 4.29 33.00 19.92 5.86

85 - 4 200 40 0.25 19.67 2.05 4.00 78.68 5.29 3.23 97.85 15.37 3.73 24.06 21.42 3.00

86 - 4 200 40 0.5 18.51 2.05 4.41 83.63 5.90 3.24 32.84 14.50 3.73 24.14 6.66 5.71

87 - 4 200 40 0.75 18.03 1.76 5.71 15.75 6.19 3.26 32.82 13.24 4.29 22.40 17.60 5.86

88 - 4 200 60 0.25 19.67 1.45 4.38 83.74 5.65 3.52 112.54 10.41 3.73 33.24 22.16 3.95

89 - 4 200 60 0.5 18.12 1.53 4.38 32.48 6.08 3.80 22.56 8.13 3.73 18.85 6.53 3.00

90 - 4 200 60 0.75 18.91 6.77 4.40 15.74 6.25 3.82 18.48 7.64 3.74 18.48 11.62 3.94

91 - 4 400 20 0.25 47.31 2.01 5.97 55.95 3.65 5.86 80.87 4.33 5.44 79.21 4.33 5.87

92 - 4 400 20 0.5 63.85 1.69 4.52 24.10 3.66 5.72 32.63 11.70 5.28 66.39 4.64 5.86

93 - 4 400 20 0.75 64.54 1.85 5.67 17.12 3.73 7.45 39.57 14.12 4.29 50.97 15.23 5.86

94 - 4 400 40 0.25 21.13 2.02 3.25 83.50 3.65 5.16 108.96 2.48 5.31 71.76 4.33 8.78

95 - 4 400 40 0.5 39.73 2.01 3.22 15.75 3.68 6.06 17.90 7.65 4.89 47.52 4.67 5.86

96 - 4 400 40 0.75 32.93 2.01 5.09 15.74 3.83 8.31 16.22 7.56 4.29 48.47 6.41 5.86

97 - 4 400 60 0.25 21.44 1.61 6.20 24.55 3.66 5.82 108.56 3.55 5.30 83.49 6.11 7.10

98 - 4 400 60 0.5 52.14 2.38 4.38 50.74 3.73 7.29 103.76 7.57 5.01 68.07 6.43 5.68

99 - 4 400 60 0.75 65.25 11.75 4.38 50.42 4.00 8.47 21.91 7.54 4.25 44.82 5.45 5.86

100 - 4 600 20 0.25 34.90 2.19 13.54 162.74 2.39 9.07 108.44 3.43 8.74 104.91 4.33 9.93

101 - 4 600 20 0.5 88.65 2.23 11.79 183.48 2.24 8.71 21.43 3.31 9.25 110.33 4.33 5.87

102 - 4 600 20 0.75 106.68 2.25 11.63 183.63 2.14 8.15 169.16 3.96 5.92 107.00 4.33 5.86

103 - 4 600 40 0.25 49.02 1.63 5.86 16.52 2.30 8.06 108.56 2.88 8.70 88.65 4.33 9.96

104 - 4 600 40 0.5 80.83 1.68 8.53 85.90 2.18 8.57 98.94 3.29 8.51 85.61 4.33 8.81

105 - 4 600 40 0.75 88.75 10.99 10.18 182.12 2.10 8.57 44.05 8.91 5.86 65.86 4.38 5.86

106 - 4 600 60 0.25 116.71 1.75 6.77 15.74 2.23 8.92 133.83 1.85 8.70 96.77 4.34 7.15

107 - 4 600 60 0.5 83.17 11.41 4.53 15.82 2.13 8.58 141.01 2.50 5.33 70.50 5.22 9.96

108 - 4 600 60 0.75 83.08 11.46 8.96 27.16 2.07 8.57 59.56 9.01 6.08 49.25 6.63 6.04

109 - 8 200 20 0.25 20.41 6.56 3.32 18.81 11.94 4.51 33.16 28.58 3.74 23.36 33.88 5.27

110 - 8 200 20 0.5 74.59 4.54 3.25 18.89 11.97 6.00 40.96 29.67 4.29 23.33 35.51 5.86

111 - 8 200 20 0.75 81.28 1.40 3.26 54.34 11.99 7.15 189.90 29.86 4.29 38.22 33.37 5.86

112 - 8 200 40 0.25 19.67 5.63 5.21 18.83 11.93 3.66 103.67 28.52 3.74 36.21 35.76 3.01

113 - 8 200 40 0.5 18.05 6.79 5.54 49.38 11.97 5.41 32.88 28.45 3.80 35.53 34.76 3.27

114 - 8 200 40 0.75 23.98 7.84 5.79 38.73 11.98 6.02 44.08 28.62 5.40 42.84 38.35 5.86

115 - 8 200 60 0.25 19.69 4.72 6.75 36.73 11.91 4.11 125.16 26.93 4.37 67.32 19.54 6.78

116 - 8 200 60 0.5 23.29 8.90 5.50 83.67 11.96 6.31 37.10 25.48 5.90 42.05 35.34 3.00

117 - 8 200 60 0.75 29.58 10.91 5.50 15.76 11.98 6.60 32.37 23.44 6.54 39.62 40.04 3.01

118 - 8 400 20 0.25 95.93 8.23 4.88 28.13 11.53 9.20 93.83 24.12 7.18 84.71 33.72 7.09

119 - 8 400 20 0.5 101.84 6.86 6.69 116.72 11.43 9.22 187.28 29.01 5.85 117.98 18.04 6.66

120 - 8 400 20 0.75 107.70 5.28 8.91 112.47 11.37 9.22 195.30 27.06 4.37 125.48 18.79 6.37

121 - 8 400 40 0.25 20.21 4.06 5.43 56.56 11.61 8.61 122.77 18.59 5.32 101.85 20.40 7.40

122 - 8 400 40 0.5 65.65 4.11 4.36 19.34 11.56 8.12 39.30 22.63 6.21 92.77 14.62 6.02

123 - 8 400 40 0.75 155.40 6.41 9.52 50.57 11.52 8.58 185.38 17.90 6.48 99.09 18.83 5.95

124 - 8 400 60 0.25 69.73 5.02 6.83 75.39 11.66 8.57 108.62 22.76 7.17 103.62 22.19 7.14

125 - 8 400 60 0.5 67.58 10.48 4.38 15.75 11.63 8.57 111.70 21.88 7.55 101.40 4.15 6.39

126 - 8 400 60 0.75 34.32 10.51 5.41 15.77 11.61 8.57 58.58 14.26 6.71 93.80 15.04 5.84

127-

8 600 20 0.25 159.25 2.2310.36 241.45 8.84 9.32 264.76 17.14 9.26

183.05 4.47 12.90

128-

8 600 20 0.5 169.29 2.68 8.46 173.78 8.67 9.22 184.08 12.24 9.26177.59 13.01 8.83

129-

8 600 20 0.75 202.16 11.85 8.99 200.38 8.57 9.22 182.34 7.55 7.05177.42 17.99 8.80

130-

8 600 40 0.25 196.84 2.5211.02 179.57 9.12 9.60 113.95 13.79 8.78

164.30 4.33 12.89

131-

8 600 40 0.5 200.85 12.1411.82 201.52 8.85 9.10 277.58 7.43 10.79

168.28 4.35 12.85

132-

8 600 40 0.75 201.78 11.55 9.67 197.29 8.68 8.66 210.83 9.90 8.10157.33 6.06 8.80

133-

8 600 60 0.25 116.09 14.90 7.76 55.36 9.5211.03 141.04 12.78 10.92

117.40 4.79 10.02

134-

8 600 60 0.5 84.69 11.1112.32 194.25 9.13 8.59 148.20 7.79 8.38

115.15 5.71 12.71

135-

8 600 60 0.75 123.96 10.5510.05 191.16 8.86 8.57 219.41 10.97 8.11 58.02 10.00 11.38

136 - 12 200 20 0.25 88.47 6.74 5.70 36.77 29.35 7.26 188.74 30.06 5.17 39.66 35.12 6.01

137 - 12 200 20 0.5 125.57 6.70 3.26 36.92 28.95 7.26 195.30 30.06 6.44 42.11 32.76 8.80

138-

12 200 20 0.75 136.25 14.39 4.68 103.49 28.37 7.26 195.32 30.06 6.54114.09 42.83 8.80

139 - 12 200 40 0.25 74.24 6.43 5.78 36.73 29.62 7.26 117.34 30.07 6.50 40.10 31.88 6.04

140 - 12 200 40 0.5 77.74 15.59 4.65 33.50 29.37 7.26 190.33 30.07 6.54 41.17 42.06 5.91

141 - 12 200 40 0.75 67.40 15.38 4.66 41.07 28.96 7.25 195.31 30.06 6.54 49.25 45.02 8.72

142 - 12 200 60 0.25 28.39 17.01 7.94 36.58 29.78 6.15 125.49 29.42 6.68 40.93 41.12 9.91

143 - 12 200 60 0.5 30.49 16.62 5.51 35.81 29.63 6.62 51.26 33.44 5.99 41.65 45.00 5.75

144 - 12 200 60 0.75 88.04 15.10 5.50 30.24 29.38 6.61 192.35 34.49 6.54 41.91 45.08 5.61

145-

12 400 20 0.25 144.04 6.7310.92 204.46 23.29 7.49 257.67 24.33 10.90

189.78 33.84 10.40

146-

12 400 20 0.5 173.55 5.4310.77 205.78 24.51 7.49 195.32 29.82 8.11

196.25 34.18 8.81

147-

12 400 20 0.75 191.88 11.94 8.92 191.83 24.94 7.50 195.32 30.06 7.79197.48 34.02 8.81

148-

12 400 40 0.25 170.25 12.04 7.78 95.24 22.99 7.51 280.47 19.12 7.69129.61 35.43 10.04

149-

12 400 40 0.5 182.18 12.3111.80 239.13 24.41 7.51 198.30 27.73 8.11

131.84 34.47 8.70

150-

12 400 40 0.75 175.71 11.9510.34 173.78 24.93 8.06 199.88 30.00 7.90

185.71 42.43 8.81

151-

12 400 60 0.25 67.55 14.37 7.74 71.44 22.79 7.32 128.13 26.56 7.56104.87 25.07 10.04

Page 16: ANN modelling

152-

12 400 60 0.5 107.74 12.23 8.18 33.47 24.36 6.95 298.35 29.19 8.09101.89 44.15 10.02

153-

12 400 60 0.75 144.07 11.2710.46 183.54 24.97 6.90 213.42 29.13 7.97

106.34 42.19 6.89

154-

12 600 20 0.25 221.81 1.8911.36 227.40 15.62 9.91 287.67 19.03 11.47

277.75 33.83 12.90

155-

12 600 20 0.5 202.44 11.8111.37 183.47 16.29 9.23 196.65 20.57 11.51

217.03 18.05 9.63

156-

12 600 20 0.75 227.95 11.94 8.92 173.78 17.99 9.22 214.16 25.82 11.18184.15 16.87 8.81

157-

12 600 40 0.25 235.00 11.9512.05 241.57 15.53

12.56 294.86 19.01 11.51

280.55 25.72 12.27

158-

12 600 40 0.5 184.63 11.9513.16 173.79 16.04 9.73 298.37 18.95 11.51

268.29 15.97 12.90

159-

12 600 40 0.75 159.71 11.9510.87 242.07 17.43 9.77 211.18 24.40 8.17

186.16 24.84 8.82

160-

12 600 60 0.25 200.22 15.7312.47 213.96 15.47

13.04 295.62 23.43 10.96

267.45 20.11 11.04

161-

12 600 60 0.5 160.09 13.9912.55 210.25 15.86 9.12 303.50 23.39 10.75

256.38 10.12 11.29

162-

12 600 60 0.75 159.61 11.6911.37 215.60 16.95 8.63 229.96 28.13 8.11

173.29 23.66 12.84

CONFIRMATION TEST:

Once the ANN predictions are obtained, they need to be verified against experimental results. Input parameter combinations are therefore selected at random and experiments are conducted under these conditions. The selected input parameters are shown in Table 8. The comparison of the ANN predictions with the confirmation test results, together with the prediction errors, is shown in Table 9.
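The prediction error reported in Table 9 is the signed percentage deviation of the ANN prediction from the experimental value, i.e. error (%) = (experimental − predicted) / experimental × 100. A minimal sketch of this calculation (the function name is illustrative, not from the original work), checked against the first solid-electrode confirmation run:

```python
def prediction_error_pct(experimental, predicted):
    """Signed percentage prediction error: positive when the ANN
    under-predicts, negative when it over-predicts."""
    return (experimental - predicted) / experimental * 100.0

# First solid-electrode confirmation run from Table 9:
# experimental MRR = 16.56 mg/min, ANN-predicted MRR = 18.07 mg/min
print(round(prediction_error_pct(16.56, 18.07), 2))  # -9.12, as tabulated
```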

Table 8 Input parameters for verification experiments

Expt No   P   Ip (amps)   Ton (µs)   Toff (µs)   Pf (kg/cm²)
1         +   4           200        40          0.75
2         -   4           400        60          0.5
3         +   8           200        40          0.75
4         -   8           400        20          0.75
5         +   12          400        20          0.25
6         -   12          600        60          0.25

Table 9 Comparison of experimental results with the ANN model prediction

(Each row gives, for verification experiments 1 to 6: experimental MRR (mg/min), EWR (mg/min), SR (µm) | ANN-predicted MRR, EWR, SR | prediction error (%) for MRR, EWR, SR. The last row of each block is the mean absolute prediction error.)

Solid electrode
16.56 1.72 5.1 | 18.07 1.50 5.67 | -9.12 12.79 11.18
46.44 2.86 4.66 | 52.14 2.38 4.38 | 12.27 -16.78 6.01
19.28 4.56 5.68 | 20.87 5.77 5.06 | 8.25 26.54 10.92
94.78 6.02 8.2 | 107.70 5.28 8.91 | -13.63 -12.29 8.66
110.63 5.78 8.83 | 122.71 6.60 8.22 | 10.92 14.19 6.91
213.16 13.88 11.92 | 200.22 15.73 12.47 | 6.07 -13.33 4.61
Mean absolute error: 10.04 15.99 8.05

1mm MCE
18.06 3.56 3.44 | 18.38 4.35 3.87 | -1.77 -22.19 12.50
56.38 4.52 5.66 | 50.74 3.73 7.29 | -10.00 -17.48 28.80
19.62 8.8 3.54 | 23.68 9.17 3.32 | 20.69 4.20 6.21
98.06 12.56 9.06 | 112.47 11.37 9.22 | -14.70 -9.47 1.77
120.39 6.34 7.22 | 134.60 6.55 6.44 | 11.80 3.31 10.80
220.17 14.59 12.39 | 213.96 15.47 13.04 | 2.82 -6.03 5.25
Mean absolute error: 10.30 10.45 10.89

1.5mm MCE
26.52 6.12 4.1 | 32.85 7.65 4.29 | -23.87 -25.00 4.63
93.2 6.58 4.62 | 103.76 7.57 5.01 | 11.33 15.05 8.44
28.66 15.62 4.18 | 33.21 14.32 4.29 | 15.88 -8.32 2.63
170.82 23.84 5.68 | 195.30 27.06 4.37 | -14.33 13.51 23.06
176.94 10.38 8.9 | 194.65 9.78 8.33 | 10.01 -5.78 6.40
288.96 21.88 11.48 | 295.62 23.43 10.96 | -2.30 -7.08 4.53
Mean absolute error: 12.95 12.46 8.28

2mm MCE
21.32 6.52 3.62 | 22.97 7.34 4.48 | -7.74 -12.58 23.76
76.48 7.04 5.2 | 68.07 6.43 5.68 | -11.00 -8.66 9.23
23.81 19.82 3.83 | 27.69 17.10 3.11 | 16.30 -13.72 18.80
114.39 21.66 6.94 | 125.48 18.79 6.37 | -9.69 -13.25 8.21
130.25 19.38 6.7 | 123.92 22.82 5.96 | -4.86 17.75 11.04
270.06 22.46 12.56 | 267.45 20.11 11.04 | 0.97 10.46 12.10
Mean absolute error: 8.43 12.74 13.86

The number of neurons and the number of epochs were varied to reach the minimum root mean square error. The results predicted by this model were compared with the actual values and found to be in good agreement, as shown in Figure 6. The proposed model can therefore be employed successfully to predict the MRR, EWR and SR of the stochastic and complex EDM process.
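The search over neuron counts and epochs described above amounts to a grid search that keeps the configuration with the lowest root mean square error. A hedged sketch of that selection loop, where `train_and_predict` is a placeholder for the actual ANN training routine (not part of the original work):

```python
import itertools
import math

def rmse(predicted, actual):
    """Root mean square error between two equal-length sequences."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

def select_architecture(train_and_predict, actual, neuron_grid, epoch_grid):
    """Try every (neurons, epochs) pair and keep the one with minimum RMSE.
    `train_and_predict(neurons, epochs)` must return predictions aligned
    with `actual` (a hypothetical interface for illustration)."""
    best = None
    for neurons, epochs in itertools.product(neuron_grid, epoch_grid):
        err = rmse(train_and_predict(neurons, epochs), actual)
        if best is None or err < best[0]:
            best = (err, neurons, epochs)
    return best  # (minimum RMSE, best neuron count, best epoch count)
```

In practice the two grids would hold the neuron counts and epoch counts actually tried during model development.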

Figure 6. Actual vs Predicted values


OPTIMIZATION:

The optimization criterion differs with the performance characteristic considered: for MRR the optimum condition is the one that maximizes the response, whereas for EWR and SR it is the one that minimizes it.

Using the ANN, the response parameters are predicted for all 162 combinations of the machining conditions. The optimum conditions for MRR, EWR and SR for each electrode are then read directly from the ANN results. Confirmation experiments are conducted at these optimum machining conditions, the response variables are measured, and the predicted and experimental values are compared.
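With 2 polarities and 3 levels each of peak current, pulse-on time, pulse-off time and flushing pressure, enumerating the full design gives the 162 conditions over which the ANN predictions are searched. A sketch of that enumeration and of picking the optimum, where `predict` is a hypothetical stand-in for the trained network's prediction of one response:

```python
import itertools

# Design levels used in the chapter.
POLARITY = ['+', '-']
IP = [4, 8, 12]          # peak current, amps
TON = [200, 400, 600]    # pulse-on time, µs
TOFF = [20, 40, 60]      # pulse-off time, µs
PF = [0.25, 0.5, 0.75]   # flushing pressure, kg/cm^2

# All 2 * 3 * 3 * 3 * 3 = 162 machining conditions.
CONDITIONS = list(itertools.product(POLARITY, IP, TON, TOFF, PF))

def optimum(predict, maximize):
    """Maximize the predicted response for MRR, minimize it for EWR and SR."""
    return (max if maximize else min)(CONDITIONS, key=predict)
```

The same enumeration, fed through the trained network once per response, is what makes reading off the optimum conditions in Table 10 cheap compared with running all 162 experiments.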

Table 10 Comparison of Optimum Conditions

(MRR and EWR in mg/min, SR in µm; Ip in amps, Ton and Toff in µs, Pf in kg/cm².)

Electrode   Response   P   Ip   Ton   Toff   Pf     Experimental   Predicted   % Error
Solid       MRR        -   12   600   40     0.25   240.17         235         2.15
Solid       EWR        +   4    600   20     0.25   0.74           0.83        -12.16
Solid       SR         +   4    200   60     0.25   2.8            3.06        -9.29
1mm MCE     MRR        -   12   600   40     0.75   246.89         242.07      1.95
1mm MCE     EWR        +   4    600   60     0.5    1.26           1.05        16.67
1mm MCE     SR         +   4    200   60     0.25   3.2            3.13        2.19
1.5mm MCE   MRR        -   12   600   60     0.5    315.36         303.5       3.76
1.5mm MCE   EWR        +   4    400   40     0.25   1.65           1.79        -8.48
1.5mm MCE   SR         +   4    200   60     0.5    3.58           3.73        -4.19
2mm MCE     MRR        -   12   600   40     0.25   286.44         280.55      2.06
2mm MCE     EWR        +   4    400   60     0.75   1.58           1.32        16.46
2mm MCE     SR         +   4    200   40     0.25   2.56           3.00        -17.19


CONCLUDING REMARKS:

The ANN provides a means of estimating the response variables for all the machining combinations, whereas conducting experiments for every combination would be time-consuming and costly. Further, from the ANN-predicted results the optimum condition for each of MRR, EWR and SR can be identified. The responses predicted by the ANN model are in very good agreement with the experimental values, and the method has also been tested for its ability to predict responses for non-experimental (unseen) patterns.
