Artificial Neural Network and Fuzzy System
Lab Manual ME CSE First Semester
G.H.RAISONI COLLEGE OF ENGINEERING, NAGPUR
Department of Computer Science & Engineering
LAB MANUAL
Program : Computer Science & Engineering
Subject: Artificial Intelligence and Expert System
Branch/Semester: M.Tech CSE/I Odd sem
Index
1. Write a program to implement the single layer perceptron algorithm.
2. Write a program to implement the back propagation learning algorithm.
3. Design a multilayer feed forward network using the backpropagation algorithm.
4. Study of fuzzy inference systems.
5. To study a fuzzy logic controller using the Fuzzy Logic Toolbox.
6. Write a program to implement SDPTA.
7. Write a program to implement RDPTA.
8. To study various defuzzification techniques.
9. Write a program to implement fuzzy set operations.
Experiment No: 1
Aim: Write a program to implement the single layer perceptron algorithm.
Theory:
Single-layer Neural Networks (Perceptrons)
Input is multi-dimensional (i.e. input can be a vector):
input x = ( I1, I2, .., In)
Input nodes (or units) are connected (typically fully) to a node (or multiple nodes) in the
next layer. A node in the next layer takes a weighted sum of all its inputs:
Summed input = w1I1 + w2I2 + .. + wnIn
Fig.1
Example
Fig. 2
input x = ( I1, I2, I3) = ( 5, 3.2, 0.1 ).
Summed input = 5w1 + 3.2w2 + 0.1w3
The rule
The output node has a "threshold" t.
Rule: If summed input ≥ t, then it "fires" (output y = 1).
Else (summed input < t) it doesn't fire (output y = 0).
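The rule can be sketched as a small function. This is a minimal sketch, not code prescribed by the manual; the weight values are hypothetical.

```python
# Threshold-unit rule: fire (output 1) if the weighted sum of the
# inputs reaches the threshold t, otherwise output 0.

def perceptron_output(inputs, weights, t):
    """Return 1 if the summed input w1*I1 + ... + wn*In >= t, else 0."""
    summed = sum(w * i for w, i in zip(weights, inputs))
    return 1 if summed >= t else 0

x = (5, 3.2, 0.1)
w = (0.5, 1.0, -2.0)   # hypothetical weights
# Summed input = 5*0.5 + 3.2*1.0 + 0.1*(-2.0) = 5.5, which is >= 2
print(perceptron_output(x, w, t=2))   # fires, so prints 1
```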
This implements a function
Obviously this implements a simple function from multi-dimensional real input to binary
output. What kind of functions can be represented in this way?
We can imagine multi-layer networks. Output node is one of the inputs into next layer.
"Perceptron" has just 2 layers of nodes (input nodes and output nodes). Often called a
single-layer network on account of having 1 layer of links, between input and output.
Fully connected?
Note: to make an input node irrelevant to the output, set its weight to zero. E.g. if w1=0
here, then the summed input is the same no matter what is in the 1st dimension of the input.
Weights may also become negative (higher positive input tends to lead to not fire).
Some inputs may be positive, some negative (cancel each other out).
The brain
A similar kind of thing happens in neurons in the brain (if excitation greater than
inhibition, send a spike of electrical activity on down the output axon), though
researchers generally aren't concerned if there are differences between their models and
natural ones.
A big breakthrough was the proof that you could wire up a certain class of artificial
nets to form any general-purpose computer.
The other breakthrough was the discovery of powerful learning methods, by which nets
could learn to represent initially unknown I-O relationships (see previous).
Sample Perceptrons
Perceptron for AND:
2 inputs, 1 output.
w1=1, w2=1, t=2.
Q. This is just one example. What is the general set of inequalities for w1, w2 and t that
must be satisfied for an AND perceptron?
Perceptron for OR:
2 inputs, 1 output.
w1=1, w2=1, t=1.
Q. This is just one example. What is the general set of inequalities that must be satisfied
for an OR perceptron?
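The two example perceptrons above can be checked over all four binary inputs; a minimal sketch:

```python
# Verify that w1=w2=1 with t=2 realizes AND and with t=1 realizes OR,
# by checking the firing rule on every binary input pair.

def fires(i1, i2, w1, w2, t):
    return 1 if w1 * i1 + w2 * i2 >= t else 0

for i1 in (0, 1):
    for i2 in (0, 1):
        assert fires(i1, i2, 1, 1, t=2) == (i1 and i2)   # AND perceptron
        assert fires(i1, i2, 1, 1, t=1) == (i1 or i2)    # OR perceptron
print("AND and OR verified")
```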
Question - Perceptron for NOT?
What is the general set of inequalities that must be satisfied?
What is the perceptron doing?
The perceptron is simply separating the input into 2 categories, those that cause a fire,
and those that don't. It does this by looking at (in the 2-dimensional case):
w1I1 + w2I2 < t
If the LHS is < t, it doesn't fire, otherwise it fires. That is, it is drawing the line:
w1I1 + w2I2 = t
and looking at where the input point lies. Points on one side of the line fall into 1
category, points on the other side fall into the other category. And because the weights
and thresholds can be anything, this is just any line across the 2 dimensional input space.
So what the perceptron is doing is simply drawing a line across the 2-d input space.
Inputs to one side of the line are classified into one category; inputs on the other side are
classified into another. e.g. the OR perceptron, w1=1, w2=1, t=0.5, draws the line:
I1 + I2 = 0.5
across the input space, thus separating the points (0,1),(1,0),(1,1) from the point (0,0):
Fig. 3
As you might imagine, not every set of points can be divided by a line like this. Those
that can be are called linearly separable.
In 2 input dimensions, we draw a 1 dimensional line. In n dimensions, we are drawing the
(n-1) dimensional hyperplane:
w1I1 + .. + wnIn = t
Perceptron for XOR:
XOR outputs 1 if exactly one input is 1 (one is 1 and the other is 0), and 0 otherwise.
Need:
1.w1 + 0.w2 must cause a fire, i.e. >= t
0.w1 + 1.w2 must cause a fire, i.e. >= t
0.w1 + 0.w2 must not fire, i.e. < t
1.w1 + 1.w2 must not fire, i.e. < t
That is:
w1 >= t
w2 >= t
0 < t
w1 + w2 < t
Adding the first two gives w1 + w2 >= 2t, and since t > 0 we have 2t > t, so
w1 + w2 > t. This contradicts w1 + w2 < t.
Note: we need all 4 inequalities for the contradiction. If the weights are negative, e.g.
weights = -4 and t = -5, then each weight can be greater than t and yet their sum less
than t; the condition t > 0 rules this out.
A "single-layer" perceptron can't implement XOR. The reason is because the classes in
XOR are not linearly separable. You cannot draw a straight line to separate the points
(0,0),(1,1) from the points (0,1),(1,0).
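The contradiction can also be illustrated by brute force. The sketch below searches a grid of candidate weights and thresholds (the grid bounds and step are arbitrary choices) and finds settings that realize AND but none that realize XOR:

```python
# Search a grid of (w1, w2, t) for a single threshold unit that matches
# a target Boolean function on all four binary inputs.
import itertools

def fires(i1, i2, w1, w2, t):
    return 1 if w1 * i1 + w2 * i2 >= t else 0

def realizable(target):
    grid = [x / 2 for x in range(-8, 9)]   # candidates -4.0 .. 4.0, step 0.5
    for w1, w2, t in itertools.product(grid, repeat=3):
        if all(fires(i1, i2, w1, w2, t) == target(i1, i2)
               for i1 in (0, 1) for i2 in (0, 1)):
            return True
    return False

print(realizable(lambda a, b: a & b))   # AND: True
print(realizable(lambda a, b: a ^ b))   # XOR: False, not linearly separable
```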
Led to invention of multi-layer networks.
Q. Prove that a perceptron can't implement NOT(XOR).
(It requires the same separation as XOR.)
Linearly separable classifications
If the classification is linearly separable, we can have any number of classes with a
perceptron.
For example, consider classifying furniture according to height and width:
Fig. 4
Each category can be separated from the other 2 by a straight line, so we can have a
network that draws 3 straight lines, and each output node fires if you are on the right side
of its straight line:
Fig. 5
3-dimensional output vector.
Problem: More than 1 output node could fire at same time.
Perceptron Learning Rule
We don't have to design these networks. We could have learnt those weights and
thresholds, by showing it the correct answers we want it to generate.
Let input x = ( I1, I2, .., In) where each Ii = 0 or 1.
And let output y = 0 or 1.
Algorithm is repeat forever:
1. Given input x = ( I1, I2, .., In). Perceptron produces output y. We are told correct
output O.
2. If we had wrong answer, change wi's and t, otherwise do nothing.
Motivation for weight change rule:
1. If output y=0, and "correct" output O=1, then y is not large enough, so reduce
threshold and increase wi's. This will increase the output. (Note inputs cannot be
negative, so high positive weights means high positive summed input.)
Q. Why not just send threshold to minus infinity? Then output will definitely be 1.
Q. Or send weights to plus infinity?
Note: Only need to increase wi's along the input lines that are active, i.e. where
Ii=1. If Ii=0 for this exemplar, then the weight wi had no effect on the error this
time, so it is pointless to change it (it may be functioning perfectly well for other
inputs).
2. Similarly, if y=1, O=0, output y is too large, so increase threshold and reduce wi's
(where Ii=1). This will decrease the output.
3. If y=1, O=1, or y=0, O=0, no change in weights or thresholds.
Hence algorithm is repeat forever:
1. Given input x = ( I1, I2, .., In). Perceptron produces output y. We are told correct
output O.
2. For all i:
wi := wi + C (O-y) Ii
3. t := t - C (O-y)
where C is some (positive) learning rate.
Note the threshold is learnt as well as the weights.
If O=y there is no change in weights or thresholds.
If Ii=0 there is no change in wi
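The learning rule above can be sketched as follows; the learning rate, epoch count, and initialization range are illustrative choices, not values prescribed by the manual:

```python
# Perceptron learning rule: wi := wi + C(O - y)Ii and t := t - C(O - y),
# applied after each example, here used to learn AND.
import random

def train_perceptron(examples, C=0.1, epochs=500, seed=0):
    rng = random.Random(seed)
    n = len(examples[0][0])
    w = [rng.uniform(-0.5, 0.5) for _ in range(n)]   # random initial weights
    t = rng.uniform(-0.5, 0.5)                       # random initial threshold
    for _ in range(epochs):
        for x, O in examples:
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= t else 0
            for i in range(n):
                w[i] += C * (O - y) * x[i]   # no change when O == y or Ii == 0
            t -= C * (O - y)                 # threshold is learnt too
    return w, t

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, t = train_perceptron(AND)
for x, O in AND:
    y = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= t else 0
    assert y == O
print("learned AND:", [round(v, 2) for v in w], round(t, 2))
```

Because AND is linearly separable, the perceptron convergence theorem guarantees the loop settles on correct weights.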
Result: The single layer perceptron algorithm is implemented and studied for linearly
separable problems.
Experiment No: 2
Aim: Write a program to implement the backpropagation learning algorithm.
Theory:
1. Propagates inputs forward in the usual way, i.e.
o All outputs are computed using sigmoid thresholding of the inner product
of the corresponding weight and input vectors.
o All outputs at stage n are connected to all the inputs at stage n+1.
2. Propagates the errors backwards by apportioning them to each unit according to the
amount of this error the unit is responsible for.
We now derive the stochastic Backpropagation algorithm for the general case. The
derivation is simple, but unfortunately the book-keeping is a little messy.
xj = input vector for unit j (xji = ith input to the jth unit)
wj = weight vector for unit j (wji = weight on xji)
zj = Σi wji xji, the weighted sum of inputs for unit j
oj = output of unit j (oj = σ(zj), where σ(z) = 1/(1 + e^-z) is the sigmoid function)
tj = target for unit j
Downstream(j) = set of units whose immediate inputs include the output of j
Outputs = set of output units in the final layer
Since we update after each training example, we can simplify the notation somewhat by
imagining that the training set consists of exactly one example and so the error can
simply be denoted by E.
We want to calculate ∂E/∂wji for each input weight wji of each unit j. Note first that
since zj is a function of wji regardless of where in the network unit j is located,
∂E/∂wji = (∂E/∂zj)(∂zj/∂wji) = (∂E/∂zj) xji
Furthermore, ∂E/∂zj is the same regardless of which input weight of unit j we are trying
to update. So we define δj = -∂E/∂zj; the minus sign makes the eventual weight update
an addition.
Consider the case when j ∈ Outputs. We know
E = (1/2) Σk∈Outputs (tk - ok)^2
Since the outputs of all units k ≠ j are independent of wji, we can drop the summation
and consider just the contribution to E by j:
δj = -∂E/∂zj = -∂/∂zj (1/2)(tj - oj)^2 = (tj - oj) ∂oj/∂zj = (tj - oj) oj (1 - oj)
Thus
∂E/∂wji = -δj xji = -(tj - oj) oj (1 - oj) xji     (17)
Now consider the case when j is a hidden unit. Like before, we make the following two
important observations.
1. For each unit k downstream from j, zk is a function of zj
2. The contribution to error by all units in the same layer as j is independent of wji
We want to calculate ∂E/∂wji for each input weight wji of each hidden unit j. Note that
wji influences just zj, which influences oj, which influences zk for each
k ∈ Downstream(j), each of which influence E. So we can write
∂E/∂wji = Σk∈Downstream(j) (∂E/∂zk)(∂zk/∂oj)(∂oj/∂zj)(∂zj/∂wji)
Again note that all the terms except xji in the above product are the same regardless of
which input weight of unit j we are trying to update. Like before, we denote this common
quantity by δj. Also note that ∂E/∂zk = -δk, ∂zk/∂oj = wkj, and ∂oj/∂zj = oj(1 - oj).
Substituting,
δj = oj(1 - oj) Σk∈Downstream(j) δk wkj
Thus,
∂E/∂wji = -oj(1 - oj) [Σk∈Downstream(j) δk wkj] xji     (18)
We are now in a position to state the Backpropagation algorithm formally.
Formal statement of the algorithm:
Stochastic Backpropagation(training examples, η, ni, nh, no)
Each training example is of the form (x, t), where x is the input vector and t is the
target vector. η is the learning rate (e.g., .05). ni, nh and no are the number of input,
hidden and output nodes respectively. Input from unit i to unit j is denoted xji and its
weight is denoted by wji.
Create a feed-forward network with ni inputs, nh hidden units, and no output units.
Initialize all the weights to small random values (e.g., between -.05 and .05)
Until the termination condition is met, Do
o For each training example (x, t), Do
1. Input the instance x and compute the output ou of every unit u.
2. For each output unit k, calculate δk = ok(1 - ok)(tk - ok)
3. For each hidden unit h, calculate δh = oh(1 - oh) Σk∈Downstream(h) δk wkh
4. Update each network weight wji as follows: wji := wji + η δj xji
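The steps above can be sketched compactly with one hidden layer of sigmoid units, trained on XOR by stochastic updates. The layer size, learning rate, and epoch count are illustrative choices, not values prescribed by the manual:

```python
# Stochastic backpropagation on XOR: forward pass, output delta,
# hidden deltas from downstream deltas, then weight updates.
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_xor(nh=4, eta=0.5, epochs=10000, seed=1):
    rng = random.Random(seed)
    # each unit carries a bias as a weight on a constant input of 1.0
    W1 = [[rng.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(nh)]
    W2 = [rng.uniform(-0.5, 0.5) for _ in range(nh + 1)]
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    for _ in range(epochs):
        for (x1, x2), t in data:
            xs = (x1, x2, 1.0)
            h = [sigmoid(sum(w * x for w, x in zip(row, xs))) for row in W1]
            o = sigmoid(sum(w * v for w, v in zip(W2, h + [1.0])))
            d_o = o * (1 - o) * (t - o)                    # delta_k for output
            d_h = [h[j] * (1 - h[j]) * d_o * W2[j]         # delta_h for hidden
                   for j in range(nh)]
            for j in range(nh + 1):
                W2[j] += eta * d_o * (h + [1.0])[j]        # w := w + eta*delta*x
            for j in range(nh):
                for i in range(3):
                    W1[j][i] += eta * d_h[j] * xs[i]
    return W1, W2

W1, W2 = train_xor()
for (x1, x2), t in [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]:
    xs = (x1, x2, 1.0)
    h = [sigmoid(sum(w * x for w, x in zip(row, xs))) for row in W1]
    o = sigmoid(sum(w * v for w, v in zip(W2, h + [1.0])))
    print((x1, x2), round(o))
```

With four hidden units the network almost always escapes shallow local minima and rounds to the XOR truth table.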
Result: The back propagation algorithm is implemented and the effect of different
weights and bias values is observed. The algorithm uses one hidden layer only and is
suited to problems whose classes are not linearly separable.
Experiment No: 3
Aim: Write a program to study Fuzzy Inference Systems.
Introduction:
What Are Fuzzy Inference Systems?
Fuzzy inference is the process of formulating the mapping from a given input to an
output using fuzzy logic. The mapping then provides a basis from which decisions can be
made, or patterns discerned. The process of fuzzy inference involves all of the pieces that
are described in the previous sections: Membership Functions, Logical Operations, and
If-Then Rules. You can implement two types of fuzzy inference systems in the toolbox:
Mamdani-type and Sugeno-type. These two types of inference systems vary somewhat in
the way outputs are determined. See the Bibliography for references to descriptions of
these two types of fuzzy inference systems.
Fuzzy inference systems have been successfully applied in fields such as automatic
control, data classification, decision analysis, expert systems, and computer vision.
Because of their multidisciplinary nature, fuzzy inference systems are associated with a
number of names, such as fuzzy-rule-based systems, fuzzy expert systems, fuzzy
modeling, fuzzy associative memory, fuzzy logic controllers, and simply (and
ambiguously) fuzzy systems.
Theory:
Mamdani's fuzzy inference method is the most commonly seen fuzzy methodology.
Mamdani's method was among the first control systems built using fuzzy set theory. It
was proposed in 1975 by Ebrahim Mamdani as an attempt to control a steam engine and
boiler combination by synthesizing a set of linguistic control rules obtained from
experienced human operators. Mamdani's effort was based on Lotfi Zadeh's 1973 paper
on fuzzy algorithms for complex systems and decision processes. Although the inference
process described in the next few sections differs somewhat from the methods described
in the original paper, the basic idea is much the same.
Mamdani-type inference, as defined for the toolbox, expects the output membership
functions to be fuzzy sets. After the aggregation process, there is a fuzzy set for each
output variable that needs defuzzification. It is possible, and in many cases much more
efficient, to use a single spike as the output membership function rather than a distributed
fuzzy set. This type of output is sometimes known as a singleton output membership
function, and it can be thought of as a pre-defuzzified fuzzy set. It enhances the
efficiency of the defuzzification process because it greatly simplifies the computation
required by the more general Mamdani method, which finds the centroid of a two-
dimensional function. Rather than integrating across the two-dimensional function to find
the centroid, you use the weighted average of a few data points. Sugeno-type systems
support this type of model. In general, Sugeno-type systems can be used to model any
inference system in which the output membership functions are either linear or constant.
Overview of Fuzzy Inference Process
This section describes the fuzzy inference process in more detail, using the example of
the two-input, one-output, three-rule tipping problem (The Basic Tipping Problem) that
you saw in the introduction. The basic structure of this example is shown in the following
diagram:
Fig. 1
Information flows from left to right, from two inputs to a single output. The parallel
nature of the rules is one of the more important aspects of fuzzy logic systems. Instead of
sharp switching between modes based on breakpoints, logic flows smoothly from regions
where the system's behavior is dominated by either one rule or another.
The fuzzy inference process comprises five parts: fuzzification of the input variables,
application of the fuzzy operator (AND or OR) in the antecedent, implication from the
antecedent to the consequent, aggregation of the consequents across the rules, and
defuzzification. These sometimes cryptic and odd names have very specific meanings
that are defined in the following steps.
Step 1. Fuzzify Inputs
The first step is to take the inputs and determine the degree to which they belong to each
of the appropriate fuzzy sets via membership functions. In Fuzzy Logic Toolbox
software, the input is always a crisp numerical value limited to the universe of discourse
of the input variable (in this case the interval between 0 and 10) and the output is a fuzzy
degree of membership in the qualifying linguistic set (always the interval between 0 and
1). Fuzzification of the input amounts to either a table lookup or a function evaluation.
This example is built on three rules, and each of the rules depends on resolving the inputs
into a number of different fuzzy linguistic sets: service is poor, service is good, food is
rancid, and food is delicious, and so on. Before the rules can be evaluated, the inputs
must be fuzzified according to each of these linguistic sets. For example, to what extent is
the food really delicious? The following figure shows how well the food at the
hypothetical restaurant (rated on a scale of 0 to 10) qualifies (via its membership
function) as the linguistic variable delicious. In this case, we rated the food as an 8,
which, given the graphical definition of delicious, corresponds to µ = 0.7 for the
delicious membership function.
Fig. 2
In this manner, each input is fuzzified over all the qualifying membership functions
required by the rules.
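Fuzzification amounts to evaluating a membership function at the crisp input. Below is a plain-Python sketch (not the toolbox) with a hypothetical trapezoidal membership function for delicious, whose breakpoints are chosen so that a food rating of 8 fuzzifies to µ = 0.7 as in the example:

```python
# Fuzzify a crisp food rating into a degree of membership in "delicious".

def trapmf(x, a, b, c, d):
    """Trapezoidal MF: 0 outside [a, d], rises on [a, b], 1 on [b, c], falls on [c, d]."""
    if x < a or x > d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def delicious(food):
    # hypothetical shape on the 0..10 food scale, chosen so delicious(8) = 0.7
    return trapmf(food, 5.2, 9.2, 10.0, 10.0)

print(round(delicious(8), 2))   # 0.7
```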
Step 2. Apply Fuzzy Operator
After the inputs are fuzzified, you know the degree to which each part of the antecedent
is satisfied for each rule. If the antecedent of a given rule has more than one part, the
fuzzy operator is applied to obtain one number that represents the result of the antecedent
for that rule. This number is then applied to the output function. The input to the fuzzy
operator is two or more membership values from fuzzified input variables. The output is
a single truth value.
As described in the Logical Operations section, any number of well-defined methods can
fill in for the AND operation or the OR operation. In the toolbox, two built-in AND
methods are supported: min (minimum) and prod (product). Two built-in OR methods are
also supported: max (maximum), and the probabilistic OR method probor. The
probabilistic OR method (also known as the algebraic sum) is calculated according to the
equation
probor(a,b) = a + b - ab
In addition to these built-in methods, you can create your own methods for AND and OR
by writing any function and setting that to be your method of choice.
The following figure shows the OR operator max at work, evaluating the antecedent of
rule 3 for the tipping calculation. The two pieces of the antecedent (service is excellent
and food is delicious) yielded the fuzzy membership values 0.0 and 0.7 respectively.
The fuzzy OR operator simply selects the maximum of the two values, 0.7, and the fuzzy
operation for rule 3 is complete. The probabilistic OR method would still result in 0.7.
Fig. 3
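The operator choices above can be sketched directly, in plain Python rather than the toolbox, on the rule 3 values from the example (0.0 for service is excellent, 0.7 for food is delicious):

```python
# Built-in operator choices: min/prod for AND, max/probor for OR.

def fuzzy_and_min(a, b):  return min(a, b)
def fuzzy_and_prod(a, b): return a * b
def fuzzy_or_max(a, b):   return max(a, b)
def probor(a, b):         return a + b - a * b   # probabilistic OR (algebraic sum)

excellent, delicious = 0.0, 0.7
print(fuzzy_or_max(excellent, delicious))   # 0.7
print(probor(excellent, delicious))         # 0.7, since 0 + 0.7 - 0*0.7 = 0.7
```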
Step 3. Apply Implication Method
Before applying the implication method, you must determine the rule's weight. Every rule
has a weight (a number between 0 and 1), which is applied to the number given by the
antecedent. Generally, this weight is 1 (as it is for this example) and thus has no effect at
all on the implication process. From time to time you may want to weight one rule
relative to the others by changing its weight value to something other than 1.
After proper weighting has been assigned to each rule, the implication method is
implemented. A consequent is a fuzzy set represented by a membership function, which
weights appropriately the linguistic characteristics that are attributed to it. The
consequent is reshaped using a function associated with the antecedent (a single number).
The input for the implication process is a single number given by the antecedent, and the
output is a fuzzy set. Implication is implemented for each rule. Two built-in methods are
supported, and they are the same functions that are used by the AND method: min
(minimum), which truncates the output fuzzy set, and prod (product), which scales the
output fuzzy set.
Fig. 4
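The two implication methods can be sketched pointwise, in plain Python rather than the toolbox, on a sampled output membership function; the triangular consequent below is a hypothetical stand-in for a set like tip is generous:

```python
# Implication: truncate the consequent with min, or scale it with prod.

def trimf(x, a, b, c):
    """Triangular MF peaked at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

xs = [i * 0.5 for i in range(0, 61)]           # tip axis, 0..30
consequent = [trimf(x, 15, 25, 30) for x in xs]
w = 0.7                                        # antecedent result for the rule

truncated = [min(w, mu) for mu in consequent]  # min: chops the set off at 0.7
scaled    = [w * mu for mu in consequent]      # prod: squashes the set
print(max(truncated), max(scaled))             # both peak at 0.7
```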
Step 4. Aggregate All Outputs
Because decisions are based on the testing of all of the rules in a FIS, the rules must be
combined in some manner in order to make a decision. Aggregation is the process by
which the fuzzy sets that represent the outputs of each rule are combined into a single
fuzzy set. Aggregation only occurs once for each output variable, just prior to the fifth
and final step, defuzzification. The input of the aggregation process is the list of truncated
output functions returned by the implication process for each rule. The output of the
aggregation process is one fuzzy set for each output variable.
As long as the aggregation method is commutative (which it always should be), then the
order in which the rules are executed is unimportant. Three built-in methods are
supported:
max (maximum)
probor (probabilistic OR)
sum (simply the sum of each rule's output set)
In the following diagram, all three rules have been placed together to show how the
output of each rule is combined, or aggregated, into a single fuzzy set whose membership
function assigns a weighting for every output (tip) value.
Fig. 5
Step 5. Defuzzify
The input for the defuzzification process is a fuzzy set (the aggregate output fuzzy set)
and the output is a single number. As much as fuzziness helps the rule evaluation during
the intermediate steps, the final desired output for each variable is generally a single
number. However, the aggregate of a fuzzy set encompasses a range of output values, and
so must be defuzzified in order to resolve a single output value from the set.
Perhaps the most popular defuzzification method is the centroid calculation, which
returns the center of area under the curve. There are five built-in methods supported:
centroid, bisector, middle of maximum (the average of the maximum value of the output
set), largest of maximum, and smallest of maximum.
Fig. 6
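Aggregation and centroid defuzzification can be sketched over a sampled universe of discourse, again in plain Python rather than the toolbox; the two truncated output sets below are hypothetical rule outputs:

```python
# Aggregate rule outputs with max, then defuzzify with the centroid
# (weighted average of sample points under the aggregate curve).

def trimf(x, a, b, c):
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

xs = [i * 0.1 for i in range(0, 301)]                   # tip axis, 0..30
rule1 = [min(0.3, trimf(x, 0, 5, 10)) for x in xs]      # output truncated at 0.3
rule2 = [min(0.7, trimf(x, 15, 25, 30)) for x in xs]    # output truncated at 0.7

aggregate = [max(a, b) for a, b in zip(rule1, rule2)]   # max aggregation

centroid = sum(x * mu for x, mu in zip(xs, aggregate)) / sum(aggregate)
print(round(centroid, 1))
```

The centroid lands nearer the more strongly fired rule, which is the behavior the figure above illustrates.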
The Fuzzy Inference Diagram
The fuzzy inference diagram is the composite of all the smaller diagrams presented so far
in this section. It simultaneously displays all parts of the fuzzy inference process you
have examined. Information flows through the fuzzy inference diagram as shown in the
following figure.
Fig. 7
In this figure, the flow proceeds up from the inputs in the lower left, then across each
row, or rule, and then down the rule outputs to finish in the lower right. This compact
flow shows everything at once, from linguistic variable fuzzification all the way through
defuzzification of the aggregate output.
The following figure shows the actual full-size fuzzy inference diagram. There is a lot to
see in a fuzzy inference diagram, but after you become accustomed to it, you can learn a
lot about a system very quickly. For instance, from this diagram with these particular
inputs, you can easily see that the implication method is truncation with the min function.
The max function is being used for the fuzzy OR operation. Rule 3 (the bottom-most row
in the diagram shown previously) has the strongest influence on the output, and so on.
The Rule Viewer described in The Rule Viewer is a MATLAB implementation of the
fuzzy inference diagram.
Fig. 8
Customization
One of the primary goals of Fuzzy Logic Toolbox software is to have an open and easily
modified fuzzy inference system structure. The toolbox is designed to give you as much
freedom as possible, within the basic constraints of the process described, to customize
the fuzzy inference process for your application.
Building Systems with Fuzzy Logic Toolbox Software describes exactly how to build and
implement a fuzzy inference system using the tools provided. To learn how to customize
a fuzzy inference system, see Building Fuzzy Inference Systems Using Custom
Functions.
Result: Various aspects of fuzzy inference system are studied.
Experiment No: 4
Aim: To study Fuzzy Logic Controller using Fuzzy logic toolbox
Theory:
What is Fuzzy Logic?
Fuzzy logic is a convenient way to map an input space to an output space. E.g., how
much to tip at a hotel? The input space is the quality of service and the output space is
the amount of the tip.
Things that can go in the black box:
o Fuzzy Systems
o Linear Systems
o Expert Systems
o Neural Networks
o Differential Equations
o Interpolated multidimensional lookup tables
Fuzzy logic is often the best way because it is fast and cheap.
Fuzzy Sets
A fuzzy set is a set without crisp, clearly defined boundaries. It can contain elements
with only a partial degree of membership. A classical set either wholly includes or
wholly excludes any given element, e.g., the set of days of the week. An example of a
fuzzy set would be the set of days that make up the weekend.
In fuzzy logic, the truth of any statement becomes a matter of degree. Fuzzy reasoning
gives us the ability to reply to a yes-no question with a not-quite-yes-or-no answer. This
is the kind of thing that humans do all the time.
Reasoning in fuzzy logic is just a matter of generalizing the familiar yes-no (Boolean)
logic. If we give "true" the numerical value of 1 and "false" the numerical value of 0,
fuzzy logic also permits in-between values like 0.2 and 0.7345.
Fig 1
Fig 1 (a)
Fig. 1 (b)
In the plot on the left (Fig. 1 (a)), notice that at midnight on Friday, just as the second
hand sweeps past 12, the weekend-ness value jumps discontinuously from 0 to 1. It
doesn't really connect with our real-world experience of weekend-ness. The plot on the
right (Fig. 1 (b)) shows a smoothly varying curve that accounts for the fact that all of
Friday, and parts of Thursday to a small degree, partake of the quality of weekend-ness
and thus deserve partial membership in the fuzzy set of weekend moments. The curve
that defines the weekend-ness of any instant in time is a function that maps the input
space (time of week) to the output space (weekend-ness). Specifically it is known as a
membership function.
Membership Functions
A membership function (MF) is a curve that defines how each point in the input space is
mapped to a membership value (or degree of membership) between 0 and 1. The input
space is sometimes referred to as the universe of discourse.
Fig. 2
The output axis is a number known as the membership value, between 0 and 1. The curve
is known as a membership function and is often given the designation µ. Examples of
membership functions representing different seasons of the year are given in Fig. 3.
Fig. 3
The only condition a membership function must satisfy is that it must vary between 0 and
1.
A classical set might be described as
A= { x | x>6 }
A fuzzy set is an extension of a classical set. If X is the universe of discourse and its
elements are denoted by x, then the fuzzy set A in X is defined as a set of ordered pairs:
A = { (x, µA(x)) | x ∈ X }
µA(x) is called the membership function (or MF) of x in A. The membership function
maps each element of X to a membership value between 0 and 1. The fuzzy logic toolbox
includes 11 built-in membership function types: triangular; trapezoidal; a simple
Gaussian curve and a two-sided composite of two different Gaussian curves; a
generalized bell; sigmoidal; the difference between two sigmoidal functions and the
product of two sigmoidal functions; and polynomial-based Z, S and pi curves. The Fuzzy
Logic toolbox also allows you to create your own membership functions.
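A few of the membership function shapes named above can be sketched in plain Python; the parameter conventions are common ones and the specific values are illustrative:

```python
# Sketches of triangular, trapezoidal, Gaussian, and generalized bell MFs,
# plus a check of the one condition every MF must satisfy: values in [0, 1].
import math

def trimf(x, a, b, c):
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trapmf(x, a, b, c, d):
    if x < a or x > d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def gaussmf(x, c, sigma):
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def gbellmf(x, a, b, c):
    return 1.0 / (1.0 + abs((x - c) / a) ** (2 * b))

for x in [i * 0.5 for i in range(0, 21)]:       # sample the universe 0..10
    for mu in (trimf(x, 2, 5, 8), trapmf(x, 1, 3, 7, 9),
               gaussmf(x, 5, 1.5), gbellmf(x, 2, 4, 5)):
        assert 0.0 <= mu <= 1.0
print("all values in [0, 1]")
```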
Fuzzy Logic Operators
Fig. 4
We know what's so fuzzy about fuzzy logic, but what about the logic? Fuzzy logic is a
superset of standard Boolean logic. If we keep the fuzzy values at the extremes of 1
(completely true) and 0 (completely false), the standard logical operators will hold.
The input values can be real numbers between 0 and 1. What function will preserve the
results of the classical logic truth table and also extend to all real numbers between 0 and
1? For AND, one answer is the min operation, so that A AND B becomes equivalent to
min(A, B). We can replace the OR operation with the max function, so that A OR B
becomes equivalent to max(A, B). Finally, the operation NOT A becomes equivalent to
1 - A. Fuzzy intersection or conjunction (AND),
using the classical operators for these functions: AND=min, OR=max, and NOT=
additive complement or using customized functions. Fuzzy logic toolbox uses the
classical operator for the fuzzy complement, but the AND and OR operators can be easily
customized if desired.
Fig. 5
If-Then Rules
Fuzzy sets and fuzzy operators are the subjects and verbs of fuzzy logic. Conditional
statements, i.e. if-then rules, are the things that make fuzzy logic useful.
A single fuzzy if-then rule assumes the form:
If x is A then y is B
Where A and B are linguistic values defined by fuzzy sets on the ranges X and Y,
respectively. The if-part of the rule “x is A” is called the antecedent or premise, while the
then-part of the rule “y is B” is called the consequent or conclusion. An example of such
a rule might be
“If service is good then tip is average”
Note that the antecedent is an interpretation that returns a single number between 0 and 1,
whereas the consequent is an assignment that assigns the entire fuzzy set B to the output
variable y. A less confusing way of writing the rule would be
"If service == good then tip = average".
So the input to an if-then rule is the current value of the input variable (service) and the
output is an entire fuzzy set (average). Interpreting an if-then rule involves two distinct
parts: first, evaluating the antecedent (which involves fuzzifying the input and applying
any necessary fuzzy operators); second, applying the result to the consequent (known as
implication).
In the case of two-valued or binary logic, if the premise is true, then the conclusion is
true. In case of fuzzy if-then rule if the antecedent is true to some degree of membership,
then the consequent part is also true to that same degree.
In binary logic: p ->q (p and q are either true or false)
In fuzzy logic: 0.5 p ->0.5 q (partial antecedents apply partially)
The antecedents of a rule can have multiple parts
If sky is gray and wind is strong and barometer is falling then…
In which case all the parts of the antecedent are calculated simultaneously and resolved to
a single number using the fuzzy logical operators discussed previously. The consequent
of a rule can also have multiple
parts:
If temperature is cold then hot water valve is open and cold-water valve is shut
In which case all consequents are affected equally by the result of the antecedent. The
consequent specifies a fuzzy set to be assigned to the output. The implication function
then modifies the fuzzy set to the degree specified by the antecedent. The most common
ways to modify the output fuzzy sets are truncation using the min function (where the
fuzzy set is chopped off, as shown in Figure 4, part 3) or scaling using the prod
function (where the output fuzzy set is squashed). Both are supported by the fuzzy logic
toolbox.
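The difference between the two implication methods can be sketched with numpy; the shape of the output set and the firing strength are assumed values.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0, 1)

x = np.linspace(0, 30, 301)            # universe of discourse for "tip" (assumed)
average = trimf(x, 5, 15, 25)          # output fuzzy set "average" (assumed shape)
firing = 0.4                           # degree to which the antecedent held (assumed)

truncated = np.fmin(firing, average)   # min: the set is chopped off at 0.4
scaled = firing * average              # prod: the whole set is squashed

print(truncated.max(), scaled.max())   # both peak at 0.4, but the shapes differ
```

Truncation leaves a flat plateau; scaling keeps the triangular shape at reduced height, so the truncated set encloses more area.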
One rule by itself does not do much good. What is needed are two or more rules that can
play off one another. The output of each rule is a fuzzy set, but in general we want the
output for an entire collection of rules to be a single number. How are all these fuzzy sets
distilled into a single crisp result for the output variable? First the output fuzzy sets for
each rule are aggregated into a single output fuzzy set. Then the resulting set is
defuzzified, or resolved to a single number.
Fuzzy Inference Systems
Fuzzy inference is the actual process of mapping from a given input to an output using
fuzzy logic. The process involves all the pieces we have discussed previously, i.e.,
membership functions, fuzzy logic operators, and if-then rules.
Example:
We will see how everything fits together using the two-input (service, food),
one-output (tip), three-rule tipping problem.
Fig. 6
Information flows from left to right, from the two inputs to the single output.
In the fuzzy logic toolbox, there are five parts of the fuzzy inference process:
1. Fuzzification of the input variables
The first step is to take the inputs and determine the degree to which they belong to each
of the appropriate fuzzy sets via membership functions. The input is always a crisp
numerical value limited to the universe of discourse of the input variable, and the output
is a fuzzy degree of membership (always in the interval [0, 1]).
2. Application of the fuzzy operator (AND or OR) in the antecedent
If the antecedent of a given rule has more than one part, the fuzzy operator is applied to
obtain one number that represents the result of the antecedent for that rule. This number
will then be applied to the output function. Any number of well-defined methods can fill
in for the AND operation or the OR operation. In the fuzzy logic toolbox, two built-in AND
methods are supported: min (minimum) and prod (product). Two built-in OR methods are
also supported: max (maximum) and the probabilistic OR method, probor.
3. Implication from the antecedent to the consequent
The implication method is defined as the shaping of the consequent (a fuzzy set) based
on the antecedent (a single number). The input for the implication process is a single
number given by the antecedent, and the output is a fuzzy set. Implication occurs for each
rule. Two built-in methods are supported, min (minimum) which truncates the output
fuzzy set, and prod (product) which scales the output fuzzy set.
4. Aggregation of the consequents across the rules
Unify the outputs of each rule
5. Defuzzification
The input to the defuzzification phase is the unified fuzzy set formed by aggregating the
consequents, and the output is a crisp number. If there is more than one output variable,
the final output for each is a crisp number. The most popular defuzzification method is
the centroid calculation, which returns the center of the area under the curve. Five
built-in methods are supported: centroid, bisector, middle of maximum (the average of the
maximum values of the output set), largest of maximum, and smallest of maximum.
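The five steps above can be strung together into a minimal Mamdani-style sketch of the tipper system in Python with numpy. The membership-function shapes and the three rules are assumptions modeled loosely on the toolbox demo, not its exact parameters.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function over a scalar or array x."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0, 1)

tip = np.linspace(0, 30, 301)            # output universe of discourse
cheap    = trimf(tip, 0, 5, 10)          # output fuzzy sets (assumed shapes)
average  = trimf(tip, 10, 15, 20)
generous = trimf(tip, 20, 25, 30)

def tipper(service, food):
    # Step 1: fuzzification of the crisp inputs (assumed triangular sets on [0, 10]).
    s_poor = trimf(service, -5, 0, 5)
    s_good = trimf(service, 0, 5, 10)
    s_exc  = trimf(service, 5, 10, 15)
    f_rancid    = trimf(food, -5, 0, 5)
    f_delicious = trimf(food, 5, 10, 15)
    # Step 2: fuzzy operator in the antecedent (OR = max here).
    w1 = max(s_poor, f_rancid)       # if service is poor or food is rancid -> cheap
    w2 = s_good                      # if service is good -> average
    w3 = max(s_exc, f_delicious)     # if service is excellent or food is delicious -> generous
    # Step 3: implication (min truncates each rule's output set).
    r1, r2, r3 = np.fmin(w1, cheap), np.fmin(w2, average), np.fmin(w3, generous)
    # Step 4: aggregation of the consequents (max across rules).
    agg = np.maximum(np.maximum(r1, r2), r3)
    # Step 5: defuzzification by the centroid of the aggregated set.
    return float(np.sum(tip * agg) / np.sum(agg))

print(round(tipper(3, 8), 2))
```

Truncation via min and aggregation via max correspond to the default Mamdani settings; replacing np.fmin with multiplication would give prod implication instead.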
Result: In this experiment, a rule-based fuzzy inference system is studied; various rules
are entered in the rule editor of the fuzzy logic toolbox, and their effect on the tipper
fuzzy inference system is observed.
Experiment No: 5
Aim: Write a program to implement the Single Discrete Perceptron Training
Algorithm
Theory:
The Perceptron
After the linear networks, the perceptron is the simplest type of neural network, and it is
typically used for classification. In the one-output case it consists of a neuron with a step
function. Figure 1 is a graphical illustration of a perceptron with inputs x1, ..., xn and
output y.
Figure 1. A perceptron classifier.
As indicated, the weighted sum of the inputs and the unity bias are first summed and then
processed by a step function to yield the output
y = UnitStep(w1 x1 + w2 x2 + ... + wn xn + b)
where {w1, ..., wn} are the weights applied to the input vector and b is the bias weight.
Each weight is indicated with an arrow in Figure 1. The UnitStep function is 0 for
arguments less than 0 and 1 elsewhere, so the output can take the values 0 or 1,
depending on the value of the weighted sum. Consequently, the perceptron can indicate
two classes corresponding to these two output values. In the training process, the weights
(input and bias) are adjusted so that input data is mapped correctly to one of the two
classes.
The perceptron can be trained to solve any two-class classification problem where the
classes are linearly separable. In two-dimensional problems (where x is a two-component
row vector), the classes may be separated by a straight line, and in higher-dimensional
problems it means that the classes are separable by a hyperplane.
If the classification problem is not linearly separable, then it is impossible to obtain a
perceptron that correctly classifies all training data. If some misclassifications can be
accepted, then a perceptron could still constitute a good classifier.
Because of its simplicity, the perceptron is often inadequate as a model for many
problems. Nevertheless, many classification problems have simple solutions for which it
may apply. Also, important insights may be gained from using the perceptron, which may
shed some light in considering more complicated neural network models.
Perceptron classifiers are trained with a supervised training algorithm. This presupposes
that the true classes of the training data are available and incorporated in the training
process. More specifically, as individual inputs are presented to the perceptron, its
weights are adjusted iteratively by the training algorithm so as to produce the correct
class mapping at the output. This training process continues until the perceptron correctly
classifies all the training data or when a maximum number of iterations has been reached.
It is possible to choose a judicious initialization of the weight values, which in many
cases makes the iterative learning unnecessary.
Classification problems involving a number of classes greater than two can be handled by
a multi-output perceptron that is defined as a number of perceptrons in parallel. It
contains one perceptron, as shown in Figure 1, for each output, and each output
corresponds to a class.
The training process of such a multi-output perceptron structure attempts to map each
input of the training data to the correct class by iteratively adjusting the weights to
produce 1 at the output of the corresponding perceptron and 0 at the outputs of all the
remaining perceptrons. However, it is quite possible that a number of input vectors may map
to multiple classes, indicating that these vectors could belong to several classes. Such
cases may require special handling. It may also be that the perceptron classifier cannot
make a decision for a subset of input vectors because of the nature of the data or
insufficient complexity of the network structure itself.
Training Algorithm
The training of a one-output perceptron will be described in the following section. In the
case of a multi-output perceptron, each of the outputs may be described similarly.
A perceptron is defined parametrically by its weights {w, b}, where w is a column vector
of length equal to the dimension of the input vector x and b is a scalar. Given the input, a
row vector x = {x1, ..., xn}, the output of a perceptron is described in compact form by
y = UnitStep(x w + b)
This description can also be used when a set of input vectors is considered. Let x be a
matrix with one input vector in each row. Then y in the above equation becomes a column
vector with the corresponding outputs in its rows.
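The compact batch form can be sketched with numpy; the weight and input values below are assumed for illustration.

```python
import numpy as np

unit_step = lambda s: np.where(s < 0, 0, 1)   # 0 for arguments < 0, 1 elsewhere

w = np.array([[1.0], [-2.0]])    # column vector of weights (assumed values)
b = 0.5                          # bias weight (assumed)
x = np.array([[1.0, 0.0],        # one input vector per row
              [0.0, 1.0],
              [2.0, 2.0]])

y = unit_step(x @ w + b)         # y = UnitStep(x w + b): one output per input row
print(y.ravel())                 # -> [1 0 0]
```

Each row of y classifies the corresponding row of x into one of the two classes.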
The weights {w, b} are obtained by iteratively training the perceptron with a known data
set containing input-output pairs, one input vector in each row of a matrix x, and one
output in each row of a matrix y. Given N such pairs in the data set, the training algorithm
is defined by
w(i+1) = w(i) + η xᵀ ε(i)
b(i+1) = b(i) + η Σj εj(i)
where i is the iteration number, η is a scalar step size, and ε(i) = y - ŷ(x, w(i), b(i)) is a
column vector with N components of classification errors corresponding to the N data
samples of the training set. The components of the error vector can take only three values,
namely 0, 1, and -1. At any iteration i, values of 0 indicate that the corresponding data
samples have been classified correctly, while all the others have been classified incorrectly.
The training algorithm in the above equation begins with initial values for the weights
{w, b} and i = 0, and iteratively updates these weights until all data samples have been
classified correctly or the iteration number has reached its maximum value.
The step size η, or learning rate as it is often called, has a default value that
compensates for the range of the input data x and for the number of data samples N, so it
should work well for many classification problems independent of the number of data
samples and their numerical range. It is also possible to use a step size of choice rather
than the default value. However, although larger values of η might accelerate the training
process, they may induce oscillations that slow down convergence.
SDPTA
We will begin to examine neural network classifiers that derive their weights during the
learning cycle.
The sample pattern vectors X1, X2, …, Xp, called the training sequence, are presented to
the machine along with the correct response.
Based on the perceptron learning rule seen earlier.
Given are P training pairs
{X1, d1, X2, d2, ..., XP, dP}, where
Xi is (n*1),
di is (1*1),
i = 1, 2, ..., P
Yi = augmented input pattern (obtained by appending 1 to the input vector),
i = 1, 2, ..., P
In the following, k denotes the training step and p denotes the step counter within the
training cycle
Step 1: c > 0 is chosen.
Step 2: Weights are initialized at small values; w is (n+1)*1. Counters and error are
initialized:
k = 1, p = 1, E = 0
Step 3: The training cycle begins here. Input is presented and output computed:
Y = Yp, d = dp
o = sgn(wᵀY)
Step 4: Weights are updated:
w = w + (1/2) c (d - o) Y
Step 5: Cycle error is computed:
E = (1/2)(d - o)² + E
Step 6: If p < P, then p = p + 1, k = k + 1, and go to Step 3;
otherwise go to Step 7.
Step 7: The training cycle is completed. For E = 0, terminate the training session,
outputting the weights and k.
If E > 0, then set E = 0, p = 1, and enter a new training cycle by going to Step 3.
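A direct Python transcription of Steps 1-7 follows. The training set (the bipolar AND function), the value of c, and the cycle limit are assumptions chosen for illustration.

```python
import numpy as np

def sdpta(X, d, c=1.0, max_cycles=100):
    """SDPTA, Steps 1-7: discrete perceptron with bipolar outputs.
    X holds one pattern per row; d holds desired outputs in {-1, +1}."""
    P, n = X.shape
    Y = np.hstack([X, np.ones((P, 1))])        # augmented patterns (append 1)
    w = 0.01 * np.ones(n + 1)                  # Step 2: small initial weights
    k = 0                                      # training-step counter
    for _ in range(max_cycles):
        E = 0.0                                # cycle error
        for p in range(P):                     # Step 3: present input, compute output
            o = 1.0 if w @ Y[p] >= 0 else -1.0     # o = sgn(w'Y)
            w = w + 0.5 * c * (d[p] - o) * Y[p]    # Step 4: weight update
            E += 0.5 * (d[p] - o) ** 2             # Step 5: cycle error
            k += 1
        if E == 0.0:                           # Step 7: every pattern was correct
            break
    return w, k

# Assumed training set: the bipolar AND function (linearly separable).
X = np.array([[-1.0, -1.0], [-1.0, 1.0], [1.0, -1.0], [1.0, 1.0]])
d = np.array([-1.0, -1.0, -1.0, 1.0])
w, k = sdpta(X, d)
print(w, k)
```

Because the data are linearly separable, the perceptron convergence theorem guarantees the loop terminates with E = 0 after a finite number of cycles.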
Result: The SDPTA algorithm is implemented, and the weights derived during the
training cycles are observed.
Experiment No: 6
Aim: Write a program to implement the R-Category Discrete Perceptron Training
Algorithm
Introduction:
Perceptron Learning Algorithm
The perceptron learning rule was originally developed by Frank Rosenblatt in the late
1950s. Training patterns are presented to the network's inputs; the output is computed.
Then the connection weights wj are modified by an amount that is proportional to the
product of
the difference between the actual output, y, and the desired output, d, and
the input pattern, x.
The algorithm is as follows:
1. Initialize the weights and threshold to small random numbers.
2. Present a vector x to the neuron inputs and calculate the output.
3. Update the weights according to:
wj(t + 1) = wj(t) + η (d - y) xj
where
o d is the desired output,
o t is the iteration number, and
o η is the gain or step size, where 0.0 < η < 1.0.
4. Repeat steps 2 and 3 until:
o the iteration error is less than a user-specified error threshold or
o a predetermined number of iterations have been completed.
Notice that learning only occurs when an error is made, otherwise the weights are left
unchanged.
This rule is thus a modified form of Hebb learning.
During training, it is often useful to measure the performance of the network as it
attempts to find the optimal weight set.
A common error measure, or cost function, is the sum-squared error. It is computed over
all of the input vector/output vector pairs in the training set and is given by the equation
below:
E = (1/2) Σ i=1..p (di - yi)²
where p is the number of input/output vector pairs in the training set.
RDPTA
Given are P training pairs
{X1, d1, X2, d2, ..., XP, dP}, where
Xi is (n*1),
di is (R*1),
number of categories = R,
i = 1, 2, ..., P
Yi = augmented input pattern (obtained by appending 1 to the input vector),
i = 1, 2, ..., P
In the following, k denotes the training step and p denotes the step counter within the
training cycle
Step 1: c > 0 and Emin are chosen.
Step 2: Weights are initialized at small values; each wi is (n+1)*1.
Counters and error are initialized:
k = 1, p = 1, E = 0
Step 3: The training cycle begins here. Input is presented and output computed:
Y = Yp, d = dp
oi = sgn(wiᵀY) for i = 1, 2, ..., R
Step 4: Weights are updated:
wi = wi + (1/2) c (di - oi) Y for i = 1, 2, ..., R
Step 5: Cycle error is computed:
E = (1/2)(di - oi)² + E for i = 1, 2, ..., R
Step 6: If p < P, then p = p + 1, k = k + 1, and go to Step 3;
otherwise go to Step 7.
Step 7: The training cycle is completed. For E = 0, terminate the training session,
outputting the weights and k.
If E > 0, then set E = 0, p = 1, and enter a new training cycle by going to Step 3.
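A sketch of the R-category version follows, keeping one weight vector per category. The three-cluster data set and the bipolar target coding (+1 for the correct category, -1 elsewhere) are assumptions.

```python
import numpy as np

def rdpta(X, D, c=1.0, max_cycles=200):
    """RDPTA sketch: one weight vector per category, trained together.
    X holds one pattern per row; each row of D is the desired (R*1) output,
    +1 for the correct category and -1 elsewhere (assumed bipolar coding)."""
    P, n = X.shape
    R = D.shape[1]
    Y = np.hstack([X, np.ones((P, 1))])           # augmented patterns
    W = 0.01 * np.ones((R, n + 1))                # row i holds w_i
    k = 0
    for _ in range(max_cycles):
        E = 0.0
        for p in range(P):
            o = np.where(W @ Y[p] >= 0, 1.0, -1.0)      # o_i = sgn(w_i'Y), i = 1..R
            W = W + 0.5 * c * np.outer(D[p] - o, Y[p])  # Step 4 for every w_i
            E += 0.5 * np.sum((D[p] - o) ** 2)          # Step 5 summed over i
            k += 1
        if E == 0.0:
            break
    return W, k

# Assumed data: three linearly separable clusters in the plane (R = 3).
X = np.array([[0.0, 2.0], [0.5, 2.5],          # category 1
              [2.0, 0.0], [2.5, 0.5],          # category 2
              [-2.0, -2.0], [-2.5, -2.5]])     # category 3
D = np.array([[1, -1, -1], [1, -1, -1],
              [-1, 1, -1], [-1, 1, -1],
              [-1, -1, 1], [-1, -1, 1]], dtype=float)
W, k = rdpta(X, D)
print(k)
```

Each row of W is an independent single perceptron, so the whole structure converges whenever every category is linearly separable from the rest.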
Result: The RDPTA algorithm is implemented, and the cost function is computed over the
different input/output vector pairs.
Experiment No. 7
Aim: To study various defuzzification techniques
Introduction
Fuzzy logic is a rule-based system whose rules are written in the form of Horn clauses
(i.e., if-then rules). These rules are stored in the knowledge base of the system. The input
to the fuzzy system is a scalar value that is fuzzified. The set of rules is applied to the
fuzzified input, and the output of each rule is fuzzy. These fuzzy outputs need to be
converted into a scalar output quantity so that the nature of the action to be performed can
be determined by the system. The process of converting the fuzzy output is called
defuzzification. Before an output is defuzzified, all the fuzzy outputs of the system are
aggregated with a union operator. The union is the max of the given membership
functions and can be expressed as
µ(x) = max[µ1(x), µ2(x), ..., µn(x)] (1)
There are many defuzzification techniques but primarily only three of them are in
common use. These defuzzification techniques are discussed below in detail.
Maximum Defuzzification Technique
This method gives as output the value with the highest membership degree. It is very fast
but is accurate only for a peaked output set. The technique is given by the algebraic
expression
µ(x*) ≥ µ(x) for all x ∈ X (2)
where x* is the defuzzified value. This is shown graphically in Figure 1.
Figure 1 Max-membership defuzzification method
Centroid Defuzzification Technique
This method is also known as center of gravity or center of area defuzzification. This
technique was developed by Sugeno in 1985. This is the most commonly used technique
and is very accurate. The centroid defuzzification technique can be expressed as
x* = ∫ µ(x) x dx / ∫ µ(x) dx (3)
where x* is the defuzzified output, µ(x) is the aggregated membership function, and x is
the output variable. The only disadvantage of this method is that it is computationally
expensive for complex membership functions.
Weighted Average Defuzzification Technique
In this method the output is obtained as the weighted average of the outputs of the
rules stored in the knowledge base of the system. The weighted average defuzzification
technique can be expressed as
x* = (Σi µi wi) / (Σi µi) (4)
where x* is the defuzzified output, µi is the membership of the output of each rule, and wi
is the weight associated with each rule. This method is computationally faster and easier
and gives fairly accurate results.
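The three techniques can be compared on one assumed aggregated output set; the membership shapes, the firing strengths (0.4 and 0.8), and the per-rule weights below are all illustrative.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function over an array x."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0, 1)

x = np.linspace(0, 10, 1001)
# Assumed aggregated output: two rule outputs truncated at their firing
# strengths (0.4 and 0.8) and combined with max, as in Eq. (1).
mu = np.maximum(np.fmin(0.4, trimf(x, 0, 3, 6)),
                np.fmin(0.8, trimf(x, 4, 7, 10)))

# Maximum defuzzification, Eq. (2): an x where mu is largest.
x_max = x[np.argmax(mu)]

# Centroid defuzzification, Eq. (3), on the discretized universe.
x_centroid = np.sum(x * mu) / np.sum(mu)

# Weighted average, Eq. (4): per-rule memberships mu_i and weights w_i
# (taken here as the rule peaks), instead of the full aggregated curve.
mu_i = np.array([0.4, 0.8])
w_i = np.array([3.0, 7.0])
x_wavg = np.sum(mu_i * w_i) / np.sum(mu_i)

print(x_max, round(x_centroid, 2), round(x_wavg, 2))
```

The three answers differ: the maximum method ignores the smaller lobe entirely, the centroid accounts for the whole curve, and the weighted average needs only one number per rule.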
Experiment No. 8
Aim: Write a program to implement fuzzy set operation.
Introduction:
A fuzzy set operation is an operation on fuzzy sets. These operations are generalizations
of the crisp set operations, and more than one generalization is possible. The most widely
used operations are called the standard fuzzy set operations: fuzzy complements, fuzzy
intersections, and fuzzy unions.
Theory:
Standard fuzzy set operations
Standard complement
cA(x) = 1 − A(x)
Standard intersection
(A ∩ B)(x) = min [A(x), B(x)]
Standard union
(A ∪ B)(x) = max [A(x), B(x)]
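The three standard operations act pointwise on membership degrees, so a sketch over a small discrete universe is immediate; the sets A and B below are assumed.

```python
# Fuzzy sets over a small discrete universe, represented as dicts mapping
# each element to its membership degree (the values are assumed).
A = {"x1": 0.2, "x2": 0.7, "x3": 1.0}
B = {"x1": 0.5, "x2": 0.3, "x3": 0.8}

complement_A = {x: 1 - A[x] for x in A}          # cA(x) = 1 - A(x)
intersection = {x: min(A[x], B[x]) for x in A}   # (A ∩ B)(x) = min[A(x), B(x)]
union        = {x: max(A[x], B[x]) for x in A}   # (A ∪ B)(x) = max[A(x), B(x)]

print(complement_A, intersection, union)
```

For membership values restricted to {0, 1} these formulas reduce to the ordinary crisp complement, intersection, and union.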
Fuzzy complements
A(x) is defined as the degree to which x belongs to A. Let cA denote a fuzzy complement
of A of type c. Then cA(x) is the degree to which x belongs to cA, and the degree to which
x does not belong to A. (A(x) is therefore the degree to which x does not belong to cA.)
Let a complement cA be defined by a function
c : [0,1] → [0,1]
c(A(x)) = cA(x)
Axioms for fuzzy complements
Axiom c1. Boundary condition
c(0) = 1 and c(1) = 0
Axiom c2. Monotonicity
For all a, b ∈ [0, 1], if a ≤ b, then c(a) ≥ c(b)
Axiom c3. Continuity
c is continuous function.
Axiom c4. Involutions
c is an involution, which means that c(c(a)) = a for each a ∈ [0,1]
Fuzzy intersections
The intersection of two fuzzy sets A and B is specified in general by a binary operation on
the unit interval, a function of the form
i:[0,1]×[0,1] → [0,1].
(A ∩ B)(x) = i[A(x), B(x)] for all x.
Axioms for fuzzy intersection
Axiom i1. Boundary condition
i(a, 1) = a
Axiom i2. Monotonicity
b ≤ d implies i(a, b) ≤ i(a, d)
Axiom i3. Commutativity
i(a, b) = i(b, a)
Axiom i4. Associativity
i(a, i(b, d)) = i(i(a, b), d)
Axiom i5. Continuity
i is a continuous function
Axiom i6. Subidempotency
i(a, a) ≤ a
Fuzzy unions
The union of two fuzzy sets A and B is specified in general by a binary operation on the
unit interval, a function of the form
u:[0,1]×[0,1] → [0,1].
(A ∪ B)(x) = u[A(x), B(x)] for all x
Axioms for fuzzy union
Axiom u1. Boundary condition
u(a, 0) = a
Axiom u2. Monotonicity
b ≤ d implies u(a, b) ≤ u(a, d)
Axiom u3. Commutativity
u(a, b) = u(b, a)
Axiom u4. Associativity
u(a, u(b, d)) = u(u(a, b), d)
Axiom u5. Continuity
u is a continuous function
Axiom u6. Superidempotency
u(a, a) ≥ a
Axiom u7. Strict monotonicity
a1 < a2 and b1 < b2 implies u(a1, b1) < u(a2, b2)
Aggregation operations
Aggregation operations on fuzzy sets are operations by which several fuzzy sets are
combined in a desirable way to produce a single fuzzy set.
An aggregation operation on n fuzzy sets (n ≥ 2) is defined by a function
h : [0,1]ⁿ → [0,1]
Axioms for aggregation operations on fuzzy sets
Axiom h1. Boundary condition
h(0, 0, ..., 0) = 0 and h(1, 1, ..., 1) = 1
Axiom h2. Monotonicity
For any pair <a1, a2, ..., an> and <b1, b2, ..., bn> of n-tuples such that ai, bi ∈ [0,1]
for all i ∈ Nn, if ai ≤ bi for all i ∈ Nn, then h(a1, a2, ...,an) ≤ h(b1, b2, ..., bn); that is,
h is monotonic increasing in all its arguments.
Axiom h3. Continuity
h is a continuous function.
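One family satisfying axioms h1-h3 is the generalized means, sketched below; the membership degrees are assumed, and note that for p < 0 a zero degree would need special handling.

```python
# Generalized means h_p(a1, ..., an) = ((a1^p + ... + an^p) / n)^(1/p), p != 0:
# a family of aggregation operations satisfying axioms h1 (boundary),
# h2 (monotonicity), and h3 (continuity) for positive degrees.
def generalized_mean(degrees, p):
    n = len(degrees)
    return (sum(a ** p for a in degrees) / n) ** (1 / p)

degrees = [0.3, 0.6, 0.9]              # memberships of x in three fuzzy sets (assumed)
print(generalized_mean(degrees, 1))    # arithmetic mean
print(generalized_mean(degrees, -1))   # harmonic mean (closer to min)
print(generalized_mean(degrees, 3))    # cubic mean (closer to max)
```

Varying p sweeps the aggregate between min-like and max-like behavior, which is why this family is a standard example of an averaging operation.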
Result: Different fuzzy set operations, such as complement, intersection, union, and
aggregation, are implemented.