NEURAL NETWORKS FOR MODELLING APPLICATIONS

Emil M. Petriu, Dr. Eng., P.Eng., FIEEE
Professor, School of Information Technology and Engineering
University of Ottawa, Ottawa, ON, Canada
http://www.site.uottawa.ca/~petriu/
[email protected]

University of Ottawa School of Information Technology - SITE
Sensing and Modelling Research Laboratory SMRLab - Prof. Emil M. Petriu

Page 1

Page 2

Biological Neurons

Incoming signals to a dendrite may be inhibitory or excitatory. The strength of any input signal is determined by the strength of its synaptic connection. A neuron sends an impulse down its axon if excitation exceeds inhibition by a critical amount (threshold/offset/bias) within a time window (the period of latent summation).

Biological neurons are rather slow (~10^-3 s) when compared with modern electronic circuits. ==> The brain can nevertheless outperform an electronic computer because of its massively parallel structure. The brain has approximately 10^11 highly connected neurons (approx. 10^4 connections per neuron).

Dendrites carry electrical signals into the neuron body. The neuron body integrates and thresholds the incoming signals. The axon is a single long nerve fiber that carries the signal from the neuron body to other neurons.

Memories are formed by the modification of the synaptic strengths, which can change during the entire life of the neural system.

[Figure: biological neuron - body, axon, dendrites, synapse]

A synapse is the connection between the axon of one neuron and a dendrite of another neuron.


Page 3

W. McCulloch & W. Pitts (1943) published the first theory on the fundamentals of neural computing (neuro-logical networks), "A Logical Calculus of the Ideas Immanent in Nervous Activity" ==> the McCulloch-Pitts neuron model; (1947) "How We Know Universals" - an essay on networks capable of recognizing spatial patterns invariant to geometric transformations.

Cybernetics: attempt to combine concepts from biology, psychology, mathematics, and engineering.

1940s

Natural components of mind-like machines are simple abstractions based on the behavior of biological nerve cells, and such machines can be built by interconnecting such elements.

Historical Sketch of Neural Networks

D.O. Hebb (1949) "The Organization of Behavior," the first theory of psychology built on conjectures about neural networks (neural networks might learn by constructing internal representations of concepts in the form of "cell-assemblies" - subfamilies of neurons that would learn to support one another's activities). ==> Hebb's learning rule: "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."


Page 4

1950s

Cybernetic machines were developed as specific architectures to perform specific functions. ==> "machines that could learn to do things they aren't built to do"

M. Minsky (1951) built a reinforcement-based network learning system.

F. Rosenblatt (1958) built the first practical Artificial Neural Network (ANN) - the perceptron - described in "The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain."

IRE Symposium "The Design of Machines to Simulate the Behavior of the Human Brain" (1955), with four panel members: W.S. McCulloch, A.G. Oettinger, O.H. Schmitt, N. Rochester; invited questioners: M. Minsky, M. Rubinoff, E.L. Gruenberg, J. Mauchly, M.E. Moran, W. Pitts; and the moderator H.E. Tompkins.

By the end of the 1950s, the NN field became dormant because of the new AI advances based on serial processing of symbolic expressions.


Page 5

1960s

Connectionism (Neural Networks) - versus - Symbolism (Formal Reasoning)

B. Widrow & M.E. Hoff (1960) "Adaptive Switching Circuits" presents an adaptive perceptron-like network. The weights are adjusted so as to minimize the mean square error between the actual and desired output ==> the Least Mean Square (LMS) error algorithm. (1961) Widrow and his students, "Generalization and Information Storage in Networks of Adaline Neurons."

M. Minsky & S. Papert (1969) "Perceptrons," a formal analysis of perceptron networks explaining their limitations and indicating directions for overcoming them ==> relationship between the perceptron's architecture and what it can learn: "no machine can learn to recognize X unless it possesses some scheme for representing X."

The limitations of the perceptron networks led to a pessimistic view of the NN field as having no future ==> no more interest and funds for NN research!!!


Page 6

1970s

Memory aspects of the Neural Networks.

T. Kohonen (1972) "Correlation Matrix Memories," a mathematically oriented paper proposing a correlation matrix model for associative memory which is trained, using Hebb's rule, to learn associations between input and output vectors.

J.A. Anderson (1972) "A Simple Neural Network Generating an Interactive Memory," a physiologically oriented paper proposing a "linear associator" model for associative memory, using Hebb's rule, to learn associations between input and output vectors.

S. Grossberg (1976) "Adaptive Pattern Classification and Universal Recoding: I. Parallel Development and Coding of Neural Feature Detectors" describes a self-organizing NN model of the visual system consisting of short-term and long-term memory mechanisms. ==> a continuous-time competitive network that forms the basis for the Adaptive Resonance Theory (ART) networks.


Page 7

1980s

Revival of Learning Machines.

D.E. Rumelhart & J.L. McClelland, eds. (1986) "Parallel Distributed Processing: Explorations in the Microstructure of Cognition" represents a milestone in the resurgence of NN research.

International Neural Network Society (1988) … IEEE Trans. Neural Networks (1990).

J.A. Anderson & E. Rosenfeld (1988) "Neurocomputing: Foundations of Research" contains over forty seminal papers in the NN field.

DARPA Neural Network Study (1988), a comprehensive review of the theory and applications of Neural Networks.

[Minsky]: "The marvelous powers of the brain emerge not from any single, uniformly structured connectionist network but from highly evolved arrangements of smaller, specialized networks which are interconnected in very specific ways."


Page 8

Artificial Neural Networks (ANN)

McCulloch-Pitts model of an artificial neuron:

y = f(w1·p1 + … + wj·pj + … + wR·pR + b)

[Figure: artificial neuron - inputs p1 … pj … pR, weights w1 … wj … wR, summing junction Σ with bias b producing z, transfer function f, output y]

Some transfer functions "f":

Hard Limit: y = 0 if z < 0; y = 1 if z >= 0
Symmetrical Hard Limit: y = -1 if z < 0; y = +1 if z >= 0
Log-Sigmoid: y = 1/(1 + e^(-z))
Linear: y = z

p = (p1, … , pR)T is the input column-vector
W = (w1, … , wR) is the weight row-vector

y = f(W·p + b)

*) The bias b can be treated as a weight whose input is always 1.
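The neuron equation and the transfer functions above can be sketched in a few lines of code (Python is used here purely for illustration; all names and example values are our own):

```python
import math

def neuron(p, w, b, f):
    """McCulloch-Pitts style neuron: y = f(w1*p1 + ... + wR*pR + b)."""
    z = sum(wj * pj for wj, pj in zip(w, p)) + b
    return f(z)

# The transfer functions listed above
hardlim = lambda z: 1 if z >= 0 else 0          # hard limit
hardlims = lambda z: 1 if z >= 0 else -1        # symmetrical hard limit
logsig = lambda z: 1.0 / (1.0 + math.exp(-z))   # log-sigmoid
purelin = lambda z: z                           # linear

# z = 2*0.5 + 1*(-1) - 0.5 = -0.5, so the hard limit outputs 0
y = neuron([0.5, -1.0], [2.0, 1.0], -0.5, hardlim)
```

Swapping `f` changes the neuron's response without touching the weighted-sum part, which is the point of keeping the transfer function as a separate parameter.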


Page 9

The Architecture of an ANN:

§ Number of inputs and outputs of the network;

§ Number of layers;

§ How the layers are connected to each other;

§ The transfer function of each layer;

§ Number of neurons in each layer.

ANNs map input/stimulus values to output/response values: Y = F(P).

Intelligent systems generalize: their behavioral repertoires exceed their experience. An intelligent system is said to have a creative behaviour if it provides appropriate responses when faced with new stimuli. Usually the new stimuli P' resemble known stimuli P, and their corresponding responses Y' = F(P') resemble the known/learned responses Y = F(P).

Measure of system F's creativity: the ratio between the volume of the "stimulus ball" BP and the volume of the "response ball" BY.


[Figure: stimulus ball BP around P containing P'; response ball BY around Y = F(P) containing Y' = F(P')]

Page 10

Most of the mapping functions can be implemented by a two-layer ANN: a sigmoid layer feeding a linear output layer.

ANNs with biases can represent richer relationships between inputs and outputs than networks without biases.

Feed-forward ANNs cannot implement temporal relationships. Recurrent ANNs have internal feedback paths that allow them to exhibit temporal behaviour.

Feed-forward architecture with three layers

[Figure: inputs p1 … pR feed Layer 1 (neurons N(1,1) … N(1,R1), outputs y(1,1) … y(1,R1)), which feeds Layer 2 (neurons N(2,1) … N(2,R2), outputs y(2,1) … y(2,R2)), which feeds Layer 3 (neurons N(3,1) … N(3,R3), outputs y(3,1) … y(3,R3))]

Recurrent architecture (Hopfield NN)

[Figure: a single layer of neurons N(1) … N(R) whose outputs y(1) … y(R) are fed back as the layer's inputs]

The ANN is usually supplied with an initial input vector and then the outputs are used as inputs for each succeeding cycle.
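The feed-forward data flow, layer by layer, can be illustrated with a minimal Python sketch (the logistic sigmoid stands in for the sigmoid layer; the weights are arbitrary example values, not taken from the slides):

```python
import math

def layer(p, W, b, f):
    """One layer: each neuron i forms z = W[i]·p + b[i] and applies f."""
    return [f(sum(wij * pj for wij, pj in zip(row, p)) + bi)
            for row, bi in zip(W, b)]

def feedforward(p, layers):
    """Propagate the input vector through the consecutive layers."""
    y = p
    for W, b, f in layers:
        y = layer(y, W, b, f)
    return y

sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
linear = lambda z: z

# A sigmoid hidden layer feeding a linear output layer (made-up weights)
net = [([[1.0, -1.0], [0.5, 0.5]], [0.0, -0.5], sigmoid),
       ([[2.0, -1.0]], [0.1], linear)]
y = feedforward([1.0, 0.0], net)   # a single linear output value
```

A recurrent (Hopfield-style) network would instead feed `y` back in as the next cycle's `p` until the outputs stop changing.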


Page 11

Learning Rules (Training Algorithms)

A learning rule is a procedure/algorithm to adjust the weights and biases in order for the ANN to perform the desired task.

Supervised Learning

[Figure: neuron with inputs pj (j = 1,…,R), weights wj, bias b, summing junction Σ, transfer function f, output y; a learning rule block receives the target t and the error e = t − y and updates the weights and bias]

For a given training set of pairs {p(1),t(1)}, ..., {p(n),t(n)}, where p(i) is an instance of the input vector and t(i) is the corresponding target value for the output y, the learning rule calculates the updated value of the neuron weights and bias.

Reinforcement Learning

Similar to supervised learning, but instead of being provided with the correct output value for each given input, the algorithm is only provided with a grade/score as a measure of the ANN's performance.

Unsupervised Learning

The weights and biases are adjusted based on inputs only. Most algorithms of this type learn to cluster input patterns into a finite number of classes. ==> e.g. vector quantization applications


Page 12

THE PERCEPTRON

The perceptron is a neuron with a hard limit transfer function and a weight adjustment mechanism ("learning") that compares the actual and the expected output responses for any given input/stimulus.

[Minsky]: "Perceptrons make decisions / determine whether or not an event fits a certain pattern by adding up evidence obtained from many small experiments."

Frank Rosenblatt (1958), Marvin Minsky & Seymour Papert (1969)

[Figure: perceptron - inputs p1 … pj … pR, weights w1 … wj … wR, summing junction Σ with bias b producing z, hard limit transfer function f, output y]

Perceptrons are well suited for pattern classification/recognition.

The weight adjustment/training mechanism is called the perceptron learning rule.

y = f (W. p + b)

NB: W is a row-vector and p is a column-vector.


Page 13

§ Supervised learning

t <== the target value
e = t − y <== the error

[Figure: perceptron with inputs p1 … pj … pR, weights w1 … wj … wR, bias b, hard limit transfer function f, output y, trained from t and e]

p = (p1, … , pR)T is the input column-vector
W = (w1, … , wR) is the weight row-vector

Because of the perceptron's hard limit transfer function, y and t can take only the values 0 and 1, so e = t − y can take only the values −1, 0, +1.

Perceptron Learning Rule:

Wnew = Wold + e·pT
bnew = bold + e

if e = 1, then Wnew = Wold + pT, bnew = bold + 1;
if e = -1, then Wnew = Wold - pT, bnew = bold - 1;
if e = 0, then Wnew = Wold.
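The rule above translates directly to code. A short illustrative Python sketch (the function name and the AND training set used to exercise it are our own choices):

```python
def perceptron_train(P, T, W, b, max_epochs=100):
    """Train a perceptron with the rule Wnew = Wold + e*p, bnew = bold + e.
    P: list of input vectors, T: list of 0/1 targets, W: weights, b: bias."""
    hardlim = lambda z: 1 if z >= 0 else 0
    for _ in range(max_epochs):
        errors = 0
        for p, t in zip(P, T):
            y = hardlim(sum(wi * pi for wi, pi in zip(W, p)) + b)
            e = t - y                                       # e is -1, 0 or +1
            if e != 0:
                W = [wi + e * pi for wi, pi in zip(W, p)]   # Wnew = Wold + e*pT
                b = b + e                                   # bnew = bold + e
                errors += 1
        if errors == 0:           # all training vectors classified correctly
            break
    return W, b

# AND is linearly separable, so the rule is guaranteed to converge
W, b = perceptron_train([[0, 0], [0, 1], [1, 0], [1, 1]], [0, 0, 0, 1],
                        [0.0, 0.0], 0.0)
```

The convergence guarantee holds only for linearly separable training sets, which is exactly the limitation discussed on the following pages.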


Page 14

The hard limit transfer function (threshold function) provides the ability to classify input vectors by deciding whether an input vector belongs to one of two linearly separable classes.

Two-Input Perceptron

[Figure: perceptron with inputs p1, p2, weights w1, w2, bias b, hard limit transfer function f, output y]

y = hardlim(z) = hardlim{[w1 , w2]·[p1 , p2]T + b}

The two classes (linearly separable regions) in the two-dimensional input space (p1, p2) are separated by the decision boundary, the line of equation z = 0:

w1·p1 + w2·p2 + b = 0

[Figure: the boundary line crosses the p1 axis at -b/w1 and the p2 axis at -b/w2; on the origin side of the boundary y = hardlim(b), on the other side y takes the opposite value]

The boundary is always orthogonal to the weight vector W.
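These boundary properties can be checked numerically. A small Python sketch with arbitrary example values for w1, w2 and b:

```python
# Example weights and bias (arbitrary illustrative values)
w1, w2, b = 2.0, 3.0, -6.0

# Axis intercepts of the boundary w1*p1 + w2*p2 + b = 0
p1_intercept = -b / w1   # where the line crosses the p1 axis
p2_intercept = -b / w2   # where the line crosses the p2 axis

# A direction vector along the boundary joins the two intercepts...
d = (0.0 - p1_intercept, p2_intercept - 0.0)

# ...and its dot product with W = (w1, w2) is zero:
# the boundary is orthogonal to the weight vector
dot = d[0] * w1 + d[1] * w2
```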


Page 15

q Example #1: Teaching a two-input perceptron to classify five input vectors into two classes

p(1) = (0.6, 0.2)T, t(1) = 1
p(2) = (-0.2, 0.9)T, t(2) = 1
p(3) = (-0.3, 0.4)T, t(3) = 0
p(4) = (0.1, 0.1)T, t(4) = 0
p(5) = (0.5, -0.6)T, t(5) = 0

[Figure: the five input vectors plotted in the (p1, p2) plane]

The MATLAB solution is:

P = [0.6 -0.2 -0.3 0.1 0.5; 0.2 0.9 0.4 0.1 -0.6];
T = [1 1 0 0 0];
W = [-2 2]; b = -1;
plotpv(P,T); plotpc(W,b);
nepoc = 0;
Y = hardlim(W*P+b);
while any(Y~=T)
    Y = hardlim(W*P+b);
    E = T - Y;
    [dW,db] = learnp(P,E);
    W = W + dW;
    b = b + db;
    nepoc = nepoc + 1;
    disp('epochs='), disp(nepoc), disp(W), disp(b);
    plotpv(P,T); plotpc(W,b);
end


Page 16

q Example #1 (continued):

After nepoc = 11 epochs of training, starting from the initial weight vector W = [-2 2] and bias b = -1, the weights are w1 = 2.4, w2 = 3.1 and the bias is b = -2.

[Figure: "Input Vector Classification" - the five input vectors and the final decision boundary in the (p1, p2) plane]


Page 17

Ø The larger an input vector p is, the larger its effect on the weight vector W during the learning process. Long training times can be caused by the presence of an "outlier," i.e. an input vector whose magnitude is much larger, or smaller, than the other input vectors.

Normalized perceptron learning rule - the effect of each input vector on the weights has the same magnitude:

Wnew = Wold + e·pT/||p||
bnew = bold + e

Perceptron Networks for Linearly Separable Vectors

The hard limit transfer function of the perceptron provides the ability to classify input vectors by deciding whether an input vector belongs to one of two linearly separable classes.

AND: p = [0 0 1 1; 0 1 0 1], tAND = [0 0 0 1] ==> solution W = [2 2], b = -3
OR: p = [0 0 1 1; 0 1 0 1], tOR = [0 1 1 1] ==> solution W = [2 2], b = -1

[Figure: the AND and OR training vectors in the (p1, p2) plane with their decision boundaries]
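The AND/OR solutions above can be verified in a few lines (a Python sketch mirroring MATLAB's hardlim):

```python
hardlim = lambda z: 1 if z >= 0 else 0

def perceptron(p, W, b):
    """Two-input perceptron: y = hardlim(W·p + b)."""
    return hardlim(W[0] * p[0] + W[1] * p[1] + b)

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]

# W = [2 2], b = -3 implements AND; W = [2 2], b = -1 implements OR
and_out = [perceptron(p, [2, 2], -3) for p in inputs]  # [0, 0, 0, 1]
or_out  = [perceptron(p, [2, 2], -1) for p in inputs]  # [0, 1, 1, 1]
```

Only the bias changes between the two gates: it shifts the same decision boundary direction W = [2 2] closer to or further from the origin.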


Page 18

Three-Input Perceptron

[Figure: perceptron with three inputs p1, p2, p3, weights w1, w2, w3, bias b, hard limit transfer function f, output y]

y = hardlim(z) = hardlim{[w1 , w2 , w3]·[p1 , p2 , p3]T + b}

The two classes in the 3-dimensional input space (p1, p2, p3) are separated by the plane of equation z = 0.

EXAMPLE:

P = [-1 1 1 -1 -1 1 1 -1; -1 -1 1 1 -1 -1 1 1; -1 -1 -1 -1 1 1 1 1]
T = [0 1 0 0 1 1 1 0]

[Figure: the eight input vectors at the corners of the cube [-1, 1]^3 in (p1, p2, p3) space]


Page 19

One-layer multi-perceptron classification of linearly separable patterns

Demo P3 in the "MATLAB Neural Network Toolbox - User's Guide"

P = [0.1 0.7 0.8 0.8 1.0 0.3 0.0 -0.3 -0.5 -1.5; 1.2 1.8 1.6 0.6 0.8 0.5 0.2 0.8 -1.5 -1.3]
T = [1 1 1 0 0 1 1 1 0 0; 0 0 0 0 0 1 1 1 1 1]

R = 2 inputs, S = 2 neurons. Target legend: 00 = O; 10 = +; 01 = *; 11 = x.

[Figure: the ten input vectors and the two decision boundaries in the (p1, p2) plane; training error vs. number of epochs]

MATLAB representation:

y = hardlim(W*p + b)

[Figure: input p (Rx1) feeding a perceptron layer with weight matrix W (SxR), bias b (Sx1), summer output z (Sx1) and output y (Sx1), where R = # inputs and S = # neurons]


Page 20

p = [0 0 1 1; 0 1 0 1], tXOR = [0 1 1 0]

[Figure: the XOR vectors in the (p1, p2) plane - no single straight line separates the two classes]

If a straight line cannot be drawn between the set of input vectors associated with targets of value 0 and the input vectors associated with targets of value 1, then a perceptron cannot classify these input vectors.

Perceptron Networks for Linearly Non-Separable Vectors

One solution is to use a two-layer architecture; the perceptrons in the first layer are used as preprocessors producing linearly separable vectors for the second layer. (Alternatively, it is possible to use linear ANNs or back-propagation networks.)

A solution for XOR:

Layer 1: W1 = [w11,1 w11,2; w12,1 w12,2] = [1 1; 1 1], b1 = [b11; b12] = [-1.5; -0.5]
Layer 2: W2 = [w21,1 w21,2] = [-1 1], b2 = [b21] = [-0.5]

[Figure: two first-layer perceptrons fed by p1 and p2, whose outputs y11 and y12 feed a second-layer perceptron producing y21; all neurons use the hard limit transfer function]
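The two-layer XOR solution with the weights given above can be verified directly (Python sketch; the comments note the role each first-layer perceptron plays):

```python
hardlim = lambda z: 1 if z >= 0 else 0

def xor_net(p1, p2):
    # Layer 1: two preprocessing perceptrons, W1 = [1 1; 1 1], b1 = [-1.5; -0.5]
    y11 = hardlim(1 * p1 + 1 * p2 - 1.5)   # fires only for (1,1): an AND detector
    y12 = hardlim(1 * p1 + 1 * p2 - 0.5)   # fires when any input is 1: an OR detector
    # Layer 2: W2 = [-1 1], b2 = -0.5, combines the now linearly separable y11, y12
    return hardlim(-1 * y11 + 1 * y12 - 0.5)

out = [xor_net(p1, p2) for p1, p2 in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```

The first layer maps the four XOR points onto (y11, y12) pairs that a single straight line can separate, which is exactly the preprocessing role described above.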


The row index of a weight indicates the destination neuron of the weight and the column index indicates which source is the input for that weight.

Page 21

LINEAR NEURAL NETWORKS (ADALINE NETWORKS)

Widrow-Hoff Learning Rule (The Δ Rule)

(ADALINE <== ADAptive LInear NEuron)

[Figure: linear neuron with inputs pj (j = 1,…,R), weights wj, bias b, and linear transfer function y = z; an LMS learning rule block receives the target t and the error e = t − y]

The LMS algorithm will adjust ADALINE's weights and biases in such a way as to minimize the mean-square error E[e2] between all sets of the desired response and the network's actual response:

E[(t-y)2] = E[(t - (w1 … wR b)·(p1 … pR 1)T)2] = E[(t - W·p)2]

(NB: E[…] denotes the "expected value"; here p is the input column-vector with the constant bias input 1 appended, and W is the weight row-vector with b appended.)

MATLAB representation:

y = purelin(W*p + b)

[Figure: input p (Rx1) feeding a linear neuron layer with weight matrix W (SxR), bias b (Sx1), summer output z (Sx1) and output y (Sx1), where R = # inputs and S = # neurons]

q Linear neurons have a linear transfer function that allows the use of a Least Mean-Square (LMS) procedure - the Widrow-Hoff learning rule - to adjust weights and biases according to the magnitude of the errors.

q Linear neurons suffer from the same limitation as the perceptron networks: they can only solve linearly separable problems.


Page 22

>> Widrow-Hoff algorithm

As t(k) and p(k) - both affecting e(k) - are independent of W(k), we obtain the final expression of the Widrow-Hoff learning rule:

W(k+1) = W(k) + 2·µ·e(k)·p(k)T
b(k+1) = b(k) + 2·µ·e(k)

where µ is the "learning rate" and e(k) = t(k) − y(k) = t(k) − W(k)·p(k).

E[e2] = E[(t - W·p)2] = E[t2] - 2·W·E[t·p] + W·E[p·pT]·WT (for deterministic signals the expectation becomes a time-average)

Here E[p·pT] is the input correlation matrix and E[t·p] is the cross-correlation between the input vector and its associated target. If the input correlation matrix is positive definite, the LMS algorithm will converge, as there will be a unique minimum of the mean-square error.


q The W-H rule is an iterative algorithm that uses the "steepest-descent" method to reduce the mean-square error. The key point of the W-H algorithm is that it replaces the estimation of E[e2] by the squared error of iteration k: e2(k). At each iteration step k it estimates the gradient of this error with respect to W as a vector consisting of the partial derivatives of e2(k) with respect to each weight:

∇k* = ∂e2(k)/∂W(k) = [ ∂e2(k)/∂w1(k), … , ∂e2(k)/∂wR(k), ∂e2(k)/∂b(k) ]

The weight vector is then modified in the direction that decreases the error:

W(k+1) = W(k) − µ·∇k* = W(k) − µ·∂e2(k)/∂W(k) = W(k) + 2·µ·e(k)·p(k)T
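The update rule above translates directly to code. A Python sketch for a one-input ADALINE (the learning rate and epoch count are our own illustrative choices; the two training points are the ones used in the Demo Lin 2 example on the next page):

```python
def lms_train(P, T, w, b, mu=0.1, epochs=100):
    """Widrow-Hoff / LMS training of a one-input ADALINE (linear neuron)."""
    for _ in range(epochs):
        for p, t in zip(P, T):
            y = w * p + b              # linear transfer function: y = z
            e = t - y                  # e(k) = t(k) - y(k)
            w = w + 2 * mu * e * p     # W(k+1) = W(k) + 2*mu*e(k)*p(k)
            b = b + 2 * mu * e         # b(k+1) = b(k) + 2*mu*e(k)
    return w, b

# Training points (p = 1.0, t = 0.5) and (p = -1.2, t = 1.0),
# starting from w = -0.96, b = -0.90
w, b = lms_train([1.0, -1.2], [0.5, 1.0], w=-0.96, b=-0.90)
```

With two free parameters and two training points, steepest descent drives the error at both points toward zero; the exact stopping point (and hence the reported w, b) depends on the learning rate and the stopping criterion.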

Page 23

>> Widrow-Hoff algorithm

Demo Lin 2 in the “MATLAB Neural Network Toolbox - User’s Guide”

P = [1.0 -1.2]
T = [0.5 1.0]

A one-neuron one-input ADALINE, starting from the random values w = -0.96 and b = -0.90 and using the "trainwh" MATLAB NN toolbox function, reaches the target after 12 epochs with an error e < 0.001. The solution found for the weight and bias is:

w = -0.2354 and b = 0.7066

[Figure: error surface over the (weight W, bias b) plane with the training trajectory; error vs. weight W and bias b]


Page 24

Back-Propagation Learning - The Generalized Δ Rule

P. Werbos (Ph.D. thesis 1974); D. Parker (1985), Yann Le Cun (1985), D. Rumelhart, G. Hinton, R. Williams (1986)

q A two-layer ANN can approximate any function with a finite number of discontinuities, arbitrarily well, given sufficient neurons in the hidden layer.

e2 = (t - y2)2 = (t - purelin(W2*tansig(W1*p+b1) + b2))2

The error is an indirect function of the weights in the hidden layers.

q Back-propagation ANNs often have one or more hidden layers of sigmoid neurons followed by an output layer of linear neurons.

[Figure: input p (Rx1) feeding a sigmoid neuron layer, y1 = tansig(W1*p + b1) with W1 (S1xR) and b1 (S1x1), followed by a linear neuron layer, y2 = purelin(W2*y1 + b2) with W2 (S2xS1) and b2 (S2x1)]

q Single-layer ANNs are suitable only for solving linearly separable classification problems. Multiple feed-forward layers give an ANN greater freedom: any reasonable function can be modeled by a two-layer architecture, a sigmoid layer feeding a linear output layer.

q Widrow-Hoff learning applies only to single-layer networks ==> generalized W-H algorithm (Δ rule) ==> back-propagation learning.
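A minimal sketch of the generalized Δ rule for a 1-S1-1 tansig/purelin network (Python; the network size, learning rate, initialization and toy linear target below are our own assumptions, not the toolbox implementation):

```python
import math
import random

def train_bp(samples, S1=3, mu=0.05, epochs=2000, seed=1):
    """Back-propagation for a 1-S1-1 network: tansig hidden layer, purelin output."""
    rnd = random.Random(seed)
    W1 = [rnd.uniform(-0.5, 0.5) for _ in range(S1)]
    b1 = [rnd.uniform(-0.5, 0.5) for _ in range(S1)]
    W2 = [rnd.uniform(-0.5, 0.5) for _ in range(S1)]
    b2 = rnd.uniform(-0.5, 0.5)
    for _ in range(epochs):
        for p, t in samples:
            # Phase I: forward pass
            y1 = [math.tanh(W1[i] * p + b1[i]) for i in range(S1)]  # tansig layer
            y2 = sum(W2[i] * y1[i] for i in range(S1)) + b2         # purelin layer
            e = t - y2
            # Phase II: back-propagate the error (chain rule; tanh' = 1 - y1^2)
            d1 = [e * W2[i] * (1 - y1[i] ** 2) for i in range(S1)]
            for i in range(S1):
                W2[i] += 2 * mu * e * y1[i]   # output layer: steepest descent on e^2
                W1[i] += 2 * mu * d1[i] * p   # hidden layer: error propagated back
                b1[i] += 2 * mu * d1[i]
            b2 += 2 * mu * e
    return W1, b1, W2, b2

def predict(net, p):
    W1, b1, W2, b2 = net
    y1 = [math.tanh(w * p + b) for w, b in zip(W1, b1)]
    return sum(w * y for w, y in zip(W2, y1)) + b2

# A simple toy target, t = 0.5*p + 0.2, sampled at five points
samples = [(-1.0, -0.3), (-0.5, -0.05), (0.0, 0.2), (0.5, 0.45), (1.0, 0.7)]
net = train_bp(samples)
```

The hidden-layer update is the "indirect function" point made above: the error reaches W1 only through W2 and the tanh derivative, which is what the chain-rule factor in `d1` expresses.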


Page 25

>> Back-Propagation

q Back-propagation is an iterative steepest-descent algorithm, in which the performance index is the mean square error E[e2] between the desired response and the network's actual response, e = t − yN.

Phase I: The input vector p is propagated forward (fed forward) through the consecutive layers of the ANN, producing the output yN.

Phase II: The errors are recursively back-propagated through the layers and appropriate weight changes ∆Wj | j = N, N-1, …, 1, 0 are made. Because the output error is an indirect function of the weights in the hidden layers, we have to use the "chain rule" of calculus when calculating the derivatives with respect to the weights and biases in the hidden layers. These derivatives of the squared error are computed first at the last (output) layer and then propagated backward from layer to layer using the "chain rule."

[Figure: input p (Rx1) feeding an N-layer network with output yN (SN x 1); the error e = t − yN drives the backward weight updates]


Page 26

EXAMPLE: Function Approximation by Back-Propagation

Demo BP4 in the "MATLAB Neural Network Toolbox - User's Guide"

R = 1 input, S1 = 5 neurons in layer #1, S2 = 1 neuron in layer #2, Q = 21 input vectors.

[Figure: batch input P (RxQ) feeding a sigmoid neuron layer, y1 = tansig(W1*P + b1) with W1 (S1xR) and b1 (S1x1), followed by a linear neuron layer, y2 = purelin(W2*y1 + b2) with W2 (S2xS1) and b2 (S2x1)]

[Figure: target vector T vs. input vector P over [-1, 1]; training error vs. epochs]

The back-propagation algorithm took 454 epochs to approximate the 21 target vectors with an error < 0.02.


Page 27

HARDWARE NEURAL NETWORK ARCHITECTURES

ANNs / Neurocomputers ==> architectures optimized for neuron model implementation:
§ general-purpose, able to emulate a wide range of NN models;
§ special-purpose, dedicated to a specific NN model.

ANN VLSI Architectures:
• analog ==> compact, high speed, asynchronous, no quantization errors, convenient weight "+" and "X";
• digital ==> more efficient VLSI technology, robust, convenient weight storage.

Pulse Data Representation:
• Pulse Amplitude Modulation (PAM) - not satisfactory for NN processing;
• Pulse Width Modulation (PWM);
• Pulse Frequency Modulation (PFM).

[Figure: number of nodes (up to 10^12) vs. node complexity (VLSI area/node, up to 10^12) for RAMs, special-purpose neurocomputers, general-purpose neurocomputers, systolic arrays, computational arrays, conventional parallel computers, and sequential computers; from P. Treleaven, M. Pacheco, M. Vellasco, "VLSI Architectures for Neural Networks," IEEE Micro, Dec. 1989, pp. 8-27]

Pulse Stream ANNs: a combination of different pulse data representation methods and opportunistic use of both analog and digital implementation techniques.


Page 28

HARDWARE NEURAL NETWORK ARCHITECTURES USING RANDOM-PULSE DATA REPRESENTATION


Looking for a model to prove that algebraic operations with analog variables can be performed by logical gates, von Neumann advanced in 1956 the idea of representing analog variables by the mean rate of random-pulse streams [J. von Neumann, "Probabilistic logics and the synthesis of reliable organisms from unreliable components," in Automata Studies, (C.E. Shannon, Ed.), Princeton, NJ, Princeton University Press, 1956].

The "random-pulse machine" concept [S.T. Ribeiro, "Random-pulse machines," IEEE Trans. Electron. Comp., vol. EC-16, no. 3, pp. 261-276, 1967], a.k.a. "noise computer," "stochastic computing," or "dithering," deals with analog variables represented by the mean rate of random-pulse streams, allowing digital circuits to perform the arithmetic operations. This concept presents a good tradeoff between electronic circuit complexity and computational accuracy. The resulting neural network architecture has a high packing density and is well suited for very large scale integration (VLSI).
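The mean-rate representation can be simulated in a few lines (Python sketch; the full scale FS, stream length and random seed are our own choices). A uniform dither is added to the analog value and a 1-bit quantizer keeps only the sign; the mean of the resulting ±1 pulse stream recovers the analog value:

```python
import random

def encode(x, fs, n, rng):
    """Analog value x in [-fs, +fs] -> stream of n random pulses in {-1, +1}.
    A uniform random dither in [-fs, +fs] is added to x before 1-bit quantization,
    so P(pulse = +1) = (x + fs) / (2*fs) and the mean pulse value is x/fs."""
    return [1 if x + rng.uniform(-fs, fs) >= 0 else -1 for _ in range(n)]

def decode(pulses, fs):
    """The mean pulse rate, rescaled by fs, estimates the original analog value."""
    return fs * sum(pulses) / len(pulses)

rng = random.Random(42)
stream = encode(0.5, fs=1.0, n=200000, rng=rng)
estimate = decode(stream, fs=1.0)   # close to 0.5; accuracy grows with n
```

This illustrates the tradeoff mentioned above: the hardware per variable is trivial (a comparator and a noise source), while accuracy is bought with averaging time.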

Page 29

Neuron Structure

[Figure: m synapses (SYN) multiply the inputs X1 ... Xi ... Xm by the weights w1j ... wij ... wmj; the products are summed (Σ) and passed through the activation function F, giving Yj = F[ Σ(i=1..m) wij · Xi ].]

One-Bit “Analog / Random Pulse” Converter

[Figure: an analog random signal generator produces a dither VR uniformly distributed over [-FS, +FS], with p.d.f. p(R) = 1/(2·FS); the sum V = X + VR of the analog input X and the dither is applied to a 1-bit quantizer clocked by CLK, producing the random-pulse stream XQ ∈ {-1, +1} whose mean rate represents X/FS.]
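A behavioural sketch of this converter, assuming FS = 1: adding a dither that is uniform on [-FS, +FS] before the 1-bit quantizer makes the pulse stream's mean rate track X/FS; the sample counts below are illustrative:

```python
import random

FS = 1.0  # assumed full-scale value

def one_bit_converter(x, n, rng=None):
    """Dither the analog input with VR uniform on [-FS, +FS], then
    1-bit quantize: P(x + r >= 0) = (x + FS) / (2*FS) for |x| <= FS,
    so the +/-1 stream has mean x / FS."""
    rng = rng or random.Random(0)
    pulses = []
    for _ in range(n):
        r = rng.uniform(-FS, FS)                # analog random signal VR
        pulses.append(1 if x + r >= 0 else -1)  # clocked 1-bit quantizer
    return pulses

pulses = one_bit_converter(-0.25, 200_000)
recovered = FS * sum(pulses) / len(pulses)      # close to -0.25
```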

HARDWARE ANN USING RANDOM-PULSE (1-BIT) DATA REPRESENTATION


[ E.M. Petriu, K. Watanabe, T. Yeap, "Applications of Random-Pulse Machine Concept to Neural Network Design," IEEE Trans. Instrum. Meas., Vol. 45, No. 2, pp. 665-669, 1996. ]

Page 30

HARDWARE ANN USING MULTI-BIT RANDOM-DATA REPRESENTATION

Generalized b-bit analog/random-data conversion and its quantization characteristics

[Figure: the dither R is now uniformly distributed over [-∆/2, +∆/2] (p.d.f. 1/∆), where ∆ is the quantization step; the sum V + R is applied to a b-bit quantizer clocked by CLK. An analog value V = (k-β)·∆ lying between the levels (k-1)·∆ and k·∆ produces a random-data stream XQ that takes the adjacent values k-1 and k with probabilities β and (1-β), so its mean equals V.]


[ E.M. Petriu , L. Zhao, S.R. Das, and A. Cornell, "Instrumentation Applications of Random-Data Representation," Proc. IMTC/2000, IEEE Instrum. Meas. Technol. Conf., pp. 872-877, Baltimore, MD, May 2000]

[ L. Zhao, "Random Pulse Artificial Neural Network Architecture," M.A.Sc. Thesis, University of Ottawa, 1998]

Page 31

“Random Pulse / Digital” Converter using a Moving Average Algorithm

[Figure: the incoming random pulse feeds the UP input of an N-bit up/down counter and the D input of an N-bit shift register; the pulse leaving the shift register drives the DOWN input, so on every CLK the counter holds the sum of the last N pulses.]
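The shift-register/counter arrangement can be sketched as follows (the window size N is a free parameter, and the pulse values are the bipolar ±1 of the 1-bit representation):

```python
from collections import deque

def moving_average_decode(pulses, window):
    """'Random pulse / digital' conversion by an N-stage moving average.

    Mirrors the shift-register + up/down counter: the counter adds each
    incoming pulse (UP) and subtracts the pulse falling out of the N-bit
    shift register (DOWN), so it always holds the sum of the last
    `window` pulses."""
    shift_reg = deque([0] * window, maxlen=window)
    counter = 0
    estimates = []
    for p in pulses:                 # p is +1 or -1
        counter += p                 # UP input
        counter -= shift_reg[0]      # DOWN input: oldest pulse leaving
        shift_reg.append(p)          # new pulse enters, oldest is evicted
        estimates.append(counter / window)  # digital mean-rate estimate
    return estimates
```

For a constant +1 stream the estimate ramps to 1.0 within one window; for a balanced ±1 stream it settles at 0.0.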

>>> Random-Pulse Hardware ANN

Random Pulse Addition

[Figure: a random number generator drives a 1-out-of-m demultiplexer whose select lines S1 ... Sj ... Sm route, at each CLK, one of the input pulse streams X1 ... Xj ... Xm to the output y, so the output mean rate is Y = (X1 + ... + Xm)/m.]
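A behavioural sketch of the demultiplexer-based addition; the RNG seed and stream lengths are illustrative:

```python
import random

def random_pulse_add(streams, rng=None):
    """1-out-of-m demultiplexer addition: each clock the random number
    generator selects one input stream and passes its pulse through, so
    the output stream's mean rate is (X1 + ... + Xm) / m."""
    rng = rng or random.Random(7)
    m, n = len(streams), len(streams[0])
    return [streams[rng.randrange(m)][t] for t in range(n)]

# Two constant streams representing +1 and -1: the scaled sum is 0.
out = random_pulse_add([[1] * 100_000, [-1] * 100_000])
```

Note that the result is the scaled sum, not the raw sum, which keeps the output inside the representable range without any adder hardware.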

Page 32

>>> Random-Pulse Hardware ANN

Random Pulse Implementation of a Synapse

[Figure: a synapse address decoder (select lines S11 ... Sij ... Smp, inputs SYNADD, DATIN, MODE) loads the weight wij into a 2^n-bit shift register; a random-pulse multiplication block forms the product stream DTij = wij · Xi from the weight and the input pulse stream Xi.]

Neuron Body Structure

[Figure: the product streams DT1j ... DTij ... DTmj are combined by a random-pulse addition block (Σ); a random-pulse/digital interface (clocked by CLK*) feeds the activation function F, and a digital/random-pulse converter outputs Yj = F[ Σ(i=1..m) wij · Xi ].]
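For bipolar ±1 streams the synaptic product DTij = wij · Xi reduces to one logic gate per clock (an XNOR in {0,1} logic, an element-wise product in ±1 form). A minimal sketch, with illustrative values for the weight and the input:

```python
import random

def bipolar_stream(v, n, rng):
    """Encode v in [-1, 1] as a +/-1 stream with mean v."""
    return [1 if rng.random() < (v + 1) / 2 else -1 for _ in range(n)]

rng = random.Random(3)
x = bipolar_stream(0.8, 200_000, rng)    # input pulse stream Xi
w = bipolar_stream(-0.5, 200_000, rng)   # weight pulse stream for wij
# Element-wise product of independent +/-1 streams (one XNOR gate in
# {0,1} logic): the product stream's mean is 0.8 * (-0.5) = -0.4.
dt = [xi * wi for xi, wi in zip(x, w)]
```

The multiplication is exact in expectation only when the two streams are statistically independent, which is why each synapse re-randomizes its weight stream.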

Page 33

Moving Average “Random Pulse-to-Digital” Conversion

[Figure: simulation waveforms over 500 clock cycles showing the analog input x2, its dithered version x2dit, the random-quantized stream x2RQ, and the moving-average estimate MAVx2RQ recovered by the converter.]


>>> Random-Pulse Hardware ANN

Page 34

[Figure: simulation waveforms for random-pulse addition - the inputs x1 and x2, their random-quantized streams x1RQ and x2RQ with moving averages MAVx1RQ and MAVx2RQ, the sum stream SUMRQX, and its moving average MAVSUMRQX tracking x1 + x2.]


>>> Random-Pulse Hardware ANN

Random Pulse Addition

Page 35

[Figure: simulation waveforms for random-pulse multiplication - the input x1, its dithered version x1dit and random-quantized stream x1RQ, the weight w1, the product stream x1W1RQ, and its moving average MAVx1W1RQ tracking w1 · x1.]


>>> Random-Pulse Hardware ANN

Random Pulse Multiplication

Page 36

Mean square error as a function of the moving average window size

[Figure: mean square error (0 to 0.18) versus moving average window size (0 to 70), with one curve for the 1-bit and one for the 2-bit representation; the error decreases as the window size grows.]


>>> Random-Pulse Hardware ANN

Page 37

Auto-associative memory NN architecture

P1, t1 P2, t2 P3, t3

Training set for the auto-associative memory

[Figure: a 30-element input pattern P (30×1) is multiplied by the weight matrix W (30×30); the net input n (30×1) passes through the hard-limit activation to give the output a (30×1).]

a = hardlim( W · P )
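The recall step a = hardlim(W · P) can be sketched in a few lines; the 6-pixel patterns and the Hebbian weight matrix below are hypothetical stand-ins for the 30-pixel training set on the slide:

```python
import numpy as np

def hardlim(n):
    """Hard-limit activation: 1 where the net input is >= 0, else 0."""
    return (n >= 0).astype(int)

# Hypothetical 6-pixel patterns in bipolar {-1, +1} encoding standing
# in for the slide's 30-pixel digits; Hebbian rule W = sum_k p_k p_k^T.
patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1]])
W = sum(np.outer(p, p) for p in patterns)

def recall(p):
    """Auto-associative recall a = hardlim(W @ p), output in {0, 1}."""
    return hardlim(W @ p)

occluded = patterns[0].copy()
occluded[4:] = 0          # roughly 30% of the pattern occluded (zeroed)
out = recall(occluded)    # recovers the stored pattern
```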


>>> Random-Pulse Hardware ANN

Page 38

Recovery of 30% occluded patterns


>>> Random-Pulse Hardware ANN

Page 39

Modelling allows one to simulate the behavior of a system for a variety of initial conditions, excitations, and system configurations - often in a much shorter time than would be required to physically build and test a prototype experimentally.

The quality and degree of approximation of the model can be determined only by validation against experimental measurements.

The convenience of the model means that it is capable of performing extensive parametric studies, in which independent parameters describing the model can be varied over a specified range in order to gain a global understanding of the response.

A more relevant model might be one which provides results more rapidly - even if some degradation in solution accuracy results.

NEURAL NETWORK MODELS OF PHYSICAL PROCESSES

Page 40

Analog Computer vs. Neural Network Tools for Physical Processes Modelling

• Both Analog Computers and Neural Networks are continuous modelling devices.

• The Analog Computer (AC) solves the linear or nonlinear differential and/or integral equations representing the mathematical model of a given physical process. The coefficients of these equations must be exactly known, as they are used to program/adjust the coefficient-potentiometers of the AC's computing elements (OpAmps). The AC doesn't follow a sequential computation; all its computing elements perform simultaneously and continuously. As an interesting note, "because of the difficulties inherent in analog differentiation the [differential] equation is rearranged so that it can be solved by integration rather than differentiation." [A.S. Jackson, Analog Computation, McGraw-Hill Book Co., 1960].
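The integration-only practice quoted above can be illustrated numerically: a sketch (with assumed values a = 2 and y(0) = 1) that solves dy/dt = -a·y by repeatedly feeding -a·y back into an integrator, as an AC patch would, never differentiating:

```python
import math

def integrate(a=2.0, y0=1.0, dt=1e-4, t_end=1.0):
    """Explicit Euler integration of dy/dt = -a*y: the derivative is
    computed algebraically from the current state and accumulated by
    the integrator -- no differentiation of a signal is ever needed."""
    y = y0
    for _ in range(int(t_end / dt)):
        y += dt * (-a * y)   # integrator input is the feedback -a*y
    return y

y_numeric = integrate()      # approximates y(1) = exp(-a) = exp(-2)
y_exact = math.exp(-2.0)
```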

Page 41

• The Neural Network (NN) doesn't require a prior mathematical model. A learning algorithm is used to adjust, sequentially by trial and error during the learning phase, the synaptic weights/coefficient-potentiometers of the neurons/computing elements. Like the AC, the NN doesn't follow a sequential computation; all its neurons perform simultaneously and continuously. The neurons are also integrative-type computing/processing elements.


>> Analog Computer vs. Neural Network Tools for Physical Processes Modelling

Page 42

Neural Network Models

[Diagram: an analytic model (numerical solution) and/or experimental measurement data feed the off-line training of a neural network; the trained on-line NN model then drives the rendering.]

Page 43

EMC Modelling for Electronic Design Automation

Optimum Approach to EMC Design

• {Design+Test+Analysis} Synergy

• EMC_Behavior = F (Design_Principle, Analysis&Modeling&Simulation_Tools, Test_Methodology&Instrumentation)

EMC Design Levels: System, Sub-System, Equipment, Motherboard, P.C. Board, Component/Device

Page 44

Product Design Cycles for Traditional and Virtual Prototyping

[Diagram: the traditional flow - Electronics → Hand Partitioning → Placement → Eng. Review → Routing → Eng. Review → Analysis, with stage times of 1-2 to 2-4 weeks and 2-4 prototypes - reaches an acceptable design in 10-14 weeks. The virtual-prototyping flow - Electronics → Virtual Prototyping (floorplan & partition, trade-off analysis, design optimization) → Placement → Routing → Analysis, with a single prototype - reaches an optimized design in 3-4 weeks.]

Page 45

Electromagnetic Compatibility (EMC) Modelling Methods:

• circuit theory to describe the conducted disturbances (such as overvoltages, voltage dips, voltage interruptions, harmonics, common ground coupling);

• equivalent circuits with either distributed or lumped parameters (such as in low frequency electromagnetic field coupling expressed in terms of mutual inductances and stray capacitances, field-to-line coupling using the transmission line approximation, and cable crosstalk);

• formal solutions to Maxwell's equations and the appropriate field boundary conditions (as, for example, in problems involving antenna scattering and radiation).

Page 46

Electromagnetic Models Based on Maxwell's Equations

linear phenomena: appropriate models can be developed directly from Maxwell's equations;

nonlinear behavior: it is necessary to combine measurements of the nonlinear behavior together with the Maxwell equations.

Page 47

* Classical numerical EM modelling using sequential algorithms such as TLM (transmission-line matrix) or FEM (finite element method) is computer intensive, particularly as spatial discretization, geometry complexity, and domain size requirements become more demanding.

* More efficient parallel and distributed computing techniques must be developed to reduce the execution time of these methods so that they can be used in commercial CAD software. Speed of execution is particularly important when the field analysis is to be coupled with optimization, which may require several hundred analyses to be performed within a reasonable time. ==> NN models

Parallel and Distributed Processing Techniques for Electromagnetic Field Solution

Page 48

NN modeling of the 3D EM field radiated by a dielectric-ring resonator antenna

Finite Element Method (FEM): 1400 frequency steps over 2-16 GHz; 31 dielectric constants; a = d = 5.14 mm.

Maxwell's equations:

∇ × H = (σ + jωε)E
∇ × E = -jωµH

which combine into:

∇ × ∇ × H = -jωµ(σ + jωε)H


[ I. Ratner, H.O. Ali, E.M. Petriu, "Neural Network Simulation of a Dielectric Ring Resonator Antenna," J. Systems Architecture, vol. 44, No. 8, pp. 569-581, 1998. ]

Page 49

NEURAL NETWORK
§ Two input neurons (frequency, dielectric constant) + two hidden layers (5 neurons each, with hyperbolic tangent activation function) + one output linear neuron;
§ Backpropagation using the Levenberg-Marquardt algorithm;
§ 55 s / 200 epochs to train the NN off line on a SPARC 10 UNIX station;
§ 0.5 s to render on line 5,000 points of the EM field surface model on the SPARC 10 UNIX station.

FEM numerical solution => 1.3 × 10^5 s on the SPARC 10 UNIX station


>> NN modeling of dielectric-ring resonator antenna EMF

Page 50

Modeling Single Stripline Interconnects

[Figure: stripline cross-section - a strip of width w and thickness t embedded in a dielectric of permittivity εr between ground planes, at heights H1 and H2; one NN model is built for Z0 and another for C0 and L0.]


[ Mao Jie, “NN Modeling of Single Stripline Interconnects,” Technical Report, SMRLab, SITE, University of Ottawa, 1998 ]

Page 51

NN architecture modelling Z0

[Figure: the inputs w, h, t, and εr feed a hidden layer of tansig neurons followed by a purelin output layer producing Z0; the figure labels the layer blocks with 12 and 14 neurons.]


>> Modeling Single Stripline Interconnects

Page 52

[Figure: recalling result for Z0 - expected vs. recalled values of Z0 (Ohm/ft, 0-150) over 120 recalling data points, and the corresponding recalling error (absolute error, 0-1 Ohm/ft).]


>> Modeling Single Stripline Interconnects

Page 53

NN architecture modelling C0 and L0

[Figure: the inputs w, h, t, and εr feed two two-layer networks - a tansig-purelin network with 12 hidden neurons and a logsig-purelin network with 14 hidden neurons - producing the outputs C0 and L0.]


>> Modeling Single Stripline Interconnects

Page 54

[Figure: recalling result for C0 - expected vs. recalled values of the capacitance (pF/ft, 0-800) over 700 recalling data points, and the corresponding recalling error (absolute error, 0-40 pF/ft).]


>> Modeling Single Stripline Interconnects

Page 55

[Figure: recalling result for L0 - expected vs. recalled values of the inductance (nH/ft, 0-250) over 250 recalling data points, and the corresponding recalling error (absolute error, 0-2.5 nH/ft).]


>> Modeling Single Stripline Interconnects

Page 56

• The problem was solved by the “Vector Finite Element Method” (VFEM), and the values of the microstrip characteristic impedance for both plain and grooved geometries were obtained. These values, describing the frequency-dependent and/or groove-dependent behaviour of each microstrip geometry, were used to train the NN models.

NEURAL NETWORK MODELLING OFPLAIN AND GROOVED MICROSTRIPS

• A feedforward network with backpropagation, having one or more hidden layers with non-linear transfer functions and one output layer with a linear transfer function, is capable of approximating any function with a finite number of discontinuities to arbitrary accuracy. A two-layer sigmoid/linear NN can represent any functional relationship between inputs and outputs if the sigmoid layer has enough neurons.
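A quick numerical illustration of the two-layer sigmoid/linear claim; this sketch fixes random tansig hidden weights and solves only the linear output layer by least squares (not the backpropagation training used in this work), so the target function, neuron count, and weight scales below are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)[:, None]   # sample points on the interval
y = np.sin(x)                          # target function to represent

H = 50                                 # hidden tansig neurons
W1 = rng.normal(0, 2, (1, H))          # fixed random input-to-hidden weights
b1 = rng.normal(0, 2, H)               # fixed random hidden biases
hidden = np.tanh(x @ W1 + b1)          # tansig hidden layer outputs

# purelin output layer: solve the linear weights by least squares
w2, *_ = np.linalg.lstsq(hidden, y, rcond=None)

y_hat = hidden @ w2                    # network output on the samples
max_err = np.abs(y_hat - y).max()      # small over the fitted interval
```

Even with random, untrained hidden weights the linear readout fits the smooth target closely, which is the representational point of the claim; backpropagation additionally tunes the hidden layer.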


[ A. Chubukjan, “Computational Aspects in Modelling Electromagnetic Field Parameters in Microstrips,“ Ph.D. Thesis, University of Ottawa, 2000 ]

Page 57

• The grooved microstrip was modelled initially by two separate one-hidden-layer NN architectures, having 50 and 60 hidden neurons respectively. These networks were trained both by decimation and in the standard way. The error obtained by decimation was comparable to that obtained by standard training and, at times, was superior. The networks reached the desired error goal easily, with excellent sum-squared error figures. Nevertheless, the NN architecture with the 60-neuron hidden layer gave better results than the 50-neuron hidden-layer architecture, and it was selected for further modelling.


>> NN modelling of microstrips

Page 58

Error performance for standard and decimated training of a “60 neuron one hidden-layer” NN model of grooved microstrip.

[Figure: “Groove Depth = 0.030 mm, 1 HL -- 60 Neurons -- Training Comparison” - relative error profile (10^-14 to 10^-2, log scale) versus frequency (10-28 GHz) for decimated training (RelErr-DecTrg), standard training (RelErr-StdTrg), and the error goal.]


>> NN modelling of microstrips

Page 59

[Figure: “Groove Depth = 0.045 mm, 1 HL -- 60 Neurons -- 6+6+5=17 Epochs” - relative error profile (10^-10 to 10^-3, log scale) versus frequency (10-28 GHz), showing the relative error (RelErr) and the error goal.]


>> NN modelling of microstrips

Error performance for standard and decimated training of a “60 neuron one hidden-layer” NN model of grooved microstrip.

Page 60

MODEL CALIBRATION

The whole idea of virtual prototyping relies on the ability to develop models of the physical objects and phenomena that represent reality very closely.

There is a need for calibration techniques able to validate the conformance with physical reality of the models incorporated in the new prototyping tools.

Page 61

Experimental Measurements

The EM field training data are conveniently obtained as analytical estimations of far-field values in 3D space and frequency from near-field data, using the finite element method combined with integral absorbing boundary conditions.

The near-field data could be obtained analytically and/or by physically measuring EM field values for given frequency values and 3D space locations.

This approach allows one to replace the usual cumbersome open-site far-field measurement technique with anechoic chamber measurements.

Page 62

The amount and extent of the area of measurements is significantly reduced by collecting data in the near field only and then calculating the far-field values using Poggio's equation:

H(r') = (1/4π) ∫S1 [ G(r, r') · ∂H(r)/∂n − H(r) · ∂G(r, r')/∂n ] dS1

where S1 is the surface on which measurements are made (closed, or made closed), n is the normal to S1, and G(r, r') is the free-space Green's function.

• This equation states that if the field values and their derivatives are known on a closed surface enclosing all inhomogeneities, then the field outside the surface can be calculated.

Page 63

Experimental setup for the noninvasive measurement of the 3D near field data

Page 64

Computer vision recovery of the 3D position of the EM probe

Page 65

NEURAL NETWORK MODELLING OF HUMAN AVATARS IN INTERACTIVE VIRTUAL ENVIRONMENTS

[Diagram: story-level instructions pass through a compiler, inverse kinematic control, a scheduler, and a concurrency manager to produce avatar machine-level instructions; the animation script issues face muscle-activation instructions to the Face Model (Facial Action Coding) and joint-activation instructions to the Body Model (Joint Control) of the 3-D articulated avatar, with a voice synthesizer alongside.]

Page 66

Scripting Language: Abstraction Levels

• Three levels of abstraction for the avatar animation scripting language:
– Highest: story-level description
  • constrained English-like description
  • syntactic and semantic analysis to extract information such as: main player(s), action, subject and object of the action, relative location, degree, etc.
  • translated into a set of skill-level instructions, which may be executed sequentially or concurrently
– Middle: skill-level macro-instructions
  • describe basic body and facial skills (such as walk, smile, wave hand, etc.)
  • each skill involves a number of muscle/joint activation instructions that may be executed sequentially or concurrently
– Lowest: muscle/joint activation instructions
  • activation of individual muscles or joints to control the face, body or hand movement

Page 67

Personalizing Skills

• Add “personality” to skill-level macro-instructions
– different avatars may perform a certain skill in a “personalized” way
  • examples: “walk like Charlie Chaplin”, “write like Emil”
– there is a skill generalization/specialization relationship (similar to object-oriented systems) between
  • a generic skill
  • one or more specialized (or personalized) skills
• Personalizing skills by using Neural Network models
  • off-line training
  • on-line rendering

Page 68

STORY-LEVEL DESCRIPTION
…
DanielA sits on the red chair. DanielA writes “Hello” on stationery. DanielA sees HappyCat under the white table and starts smiling. HappyCat grins back.
…

SKILL-LEVEL (“MACRO”) INSTRUCTIONS
…
DanielA’s right hand moves the pen to follow the trace representing “H”.
DanielA’s right hand moves the pen to follow the trace representing “e”.
DanielA’s right hand moves the pen to follow the trace representing “l”.
DanielA’s right hand moves the pen to follow the trace representing “l”.
DanielA’s right hand moves the pen to follow the trace representing “o”.
…

Page 69

SKILL-LEVEL MACRO-INSTRUCTIONS
…
DanielA’s right hand moves the pen to follow the trace representing “H”.

[Diagram: the macro-instruction invokes DanielA’s specific style of moving his right arm joints to write “H” (a NN model capturing DanielA’s writing personality), which issues “Rotate Wrist to αi”, “Rotate Elbow to βj”, and “Rotate Shoulder to γk” commands to the 3-D model of DanielA’s right hand (wrist, elbow, and shoulder joints in x, y, z space).]

Page 70

References

• W. McCulloch and W. Pitts, “A Logical Calculus of the Ideas Immanent in Nervous Activity,” Bulletin of Mathematical Biophysics, Vol. 5, pp. 115-133, 1943.
• D.O. Hebb, The Organization of Behavior, Wiley, N.Y., 1949.
• J. von Neumann, “Probabilistic logics and the synthesis of reliable organisms from unreliable components,” in Automata Studies, (C.E. Shannon, Ed.), Princeton, NJ, Princeton University Press, 1956.
• F. Rosenblatt, “The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain,” Psychological Review, Vol. 65, pp. 386-408, 1958.
• B. Widrow and M.E. Hoff, “Adaptive Switching Circuits,” 1960 IRE WESCON Convention Record, IRE Part 4, pp. 94-104, 1960.
• M. Minsky and S. Papert, Perceptrons, MIT Press, Cambridge, MA, 1969.
• J.S. Albus, “A Theory of Cerebellar Function,” Mathematical Biosciences, Vol. 10, pp. 25-61, 1971.
• T. Kohonen, “Correlation Matrix Memories,” IEEE Tr. Comp., Vol. 21, pp. 353-359, 1972.
• J.A. Anderson, “A Simple Neural Network Generating an Interactive Memory,” Mathematical Biosciences, Vol. 14, pp. 197-220, 1972.
• S. Grossberg, “Adaptive Pattern Classification and Universal Recoding: I. Parallel Development and Coding of Neural Feature Detectors,” Biological Cybernetics, Vol. 23, pp. 121-134, 1976.
• J.J. Moré, “The Levenberg-Marquardt Algorithm: Implementation and Theory,” in Numerical Analysis, pp. 105-116, Springer Verlag, 1977.
• K. Fukushima, S. Miyake, and T. Ito, “Neocognitron: A Neural Network Model for a Mechanism of Visual Pattern Recognition,” IEEE Tr. Syst. Man Cyber., Vol. 13, No. 5, pp. 826-834, 1983.

Page 71

• D.E. Rumelhart, G.E. Hinton, and R.J. Williams, “Learning Internal Representations by Error Propagation,” in Parallel Distributed Processing, (D.E. Rumelhart and J.L. McClelland, Eds.), Vol. 1, Ch. 8, MIT Press, 1986.
• D.W. Tank and J.J. Hopfield, “Simple ‘Neural’ Optimization Networks: An A/D Converter, Signal Decision Circuit, and a Linear Programming Circuit,” IEEE Tr. Circuits Systems, Vol. 33, No. 5, pp. 533-541, 1986.
• M.J.D. Powell, “Radial Basis Functions for Multivariable Interpolation: A Review,” in Algorithms for the Approximation of Functions and Data, (J.C. Mason and M.G. Cox, Eds.), Clarendon Press, Oxford, UK, 1987.
• G.A. Carpenter and S. Grossberg, “ART2: Self-Organization of Stable Category Recognition Codes for Analog Input Patterns,” Applied Optics, Vol. 26, No. 23, pp. 4919-4930, 1987.
• B. Kosko, “Bidirectional Associative Memories,” IEEE Tr. Syst. Man Cyber., Vol. 18, No. 1, pp. 49-60, 1988.
• T. Kohonen, Self-Organization and Associative Memory, Springer-Verlag, 1989.
• K. Hornik, M. Stinchcombe, and H. White, “Multilayer Feedforward Networks Are Universal Approximators,” Neural Networks, Vol. 2, pp. 359-366, 1989.
• B. Widrow and M.A. Lehr, “30 Years of Adaptive Neural Networks: Perceptron, Madaline, and Backpropagation,” Proc. IEEE, pp. 1415-1442, Sept. 1990.
• B. Kosko, Neural Networks and Fuzzy Systems: A Dynamical Systems Approach to Machine Intelligence, Prentice Hall, 1992.
• E. Sanchez-Sinencio and C. Lau, (Eds.), Artificial Neural Networks, IEEE Press, 1992.
• A. Hamilton, A.F. Murray, D.J. Baxter, S. Churcher, H.M. Reekie, and L. Tarassenko, “Integrated Pulse Stream Neural Networks: Results, Issues, and Pointers,” IEEE Tr. Neural Networks, Vol. 3, No. 3, pp. 385-393, May 1992.
• S. Haykin, Neural Networks: A Comprehensive Foundation, Macmillan, New York, 1994.
• M. Brown and C. Harris, Neurofuzzy Adaptive Modelling and Control, Prentice Hall, NY, 1994.



• C.M. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, NY, 1995.
• M.T. Hagan, H.B. Demuth, and M. Beale, Neural Network Design, PWS Publishing Co., 1996.
• S.V. Kartalopoulos, Understanding Neural Networks and Fuzzy Logic: Basic Concepts and Applications, IEEE Press, 1996.
• C.H. Chen (Ed.), Fuzzy Logic and Neural Network Handbook, McGraw-Hill, 1996.
• ***, “Special Issue on Artificial Neural Network Applications,” Proc. IEEE, (E. Gelenbe and J. Barhen, Eds.), Vol. 84, No. 10, Oct. 1996.
• J.-S.R. Jang, C.-T. Sun, and E. Mizutani, Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence, Prentice Hall, 1997.
• C. Alippi and V. Piuri, “Neural Methodology for Prediction and Identification of Non-linear Dynamic Systems,” in Instrumentation and Measurement Technology and Applications, (E.M. Petriu, Ed.), pp. 477-485, IEEE Technology Update Series, 1998.

• ***, “Special Issue on Pulse Coupled Neural Networks,” IEEE Tr. Neural Networks, (J.L. Johnson, M.L. Padgett, and O. Omidvar, Eds.), Vol. 10, No. 3, May 1999.

• C. Citterio, A. Pelagotti, V. Piuri, and L. Roca, “Function Approximation – A Fast-Convergence Neural Approach Based on Spectral Analysis,” IEEE Tr. Neural Networks, Vol. 10, No. 4, pp. 725-740, July 1999.

• ***, “Special Issue on Computational Intelligence,” Proc. IEEE, (D.B. Fogel, T. Fukuda, andL. Guan, Eds.), Vol. 87, No. 9, Sept. 1999.

• L.I. Perlovsky, Neural Networks and Intellect: Using Model-Based Concepts, Oxford University Press, NY, 2001.
