Artificial Neural Networks Lect7: Neural networks based on competition


Page 1: Artificial Neural Networks Lect7: Neural networks based on competition

CS407 Neural Computation

Lecture 7: Neural Networks Based on Competition

Lecturer: A/Prof. M. Bennamoun

Page 2: Artificial Neural Networks Lect7: Neural networks based on competition

Neural Networks Based on Competition.

Introduction
Fixed weight competitive nets
– Maxnet
– Mexican Hat
– Hamming Net
Kohonen Self-Organizing Maps (SOM)
SOM in Matlab
References and suggested reading

Page 3: Artificial Neural Networks Lect7: Neural networks based on competition

Introduction… Competitive nets (Fausett)

The most extreme form of competition among a group of neurons is called "Winner Take All". As the name suggests, only one neuron in the competing group will have a nonzero output signal when the competition is completed. A specific competitive net that performs Winner-Take-All (WTA) competition is the Maxnet. A more general form of competition, the "Mexican Hat", will also be described in this lecture (instead of a nonzero o/p for the winner and zeros for all other competing nodes, we have a bubble around the winner). All of the other nets we discuss in this lecture use WTA competition as part of their operation. With the exception of the fixed-weight competitive nets (namely Maxnet, Mexican Hat, and Hamming net), all of the other nets combine competition with some form of learning to adjust the weights of the net (i.e. the weights that are not part of any interconnections in the competitive layer).

Page 4: Artificial Neural Networks Lect7: Neural networks based on competition

Introduction… Competitive nets (Fausett)

The form of learning depends on the purpose for which the net is being trained:

– LVQ and counterpropagation nets are trained to perform mappings; the learning in this case is supervised.

– SOM (used for clustering of input data): a common use of unsupervised learning.

– ART nets are also clustering nets: also unsupervised.

Several of the nets discussed use the same learning algorithm, known as "Kohonen learning": the units that update their weights do so by forming a new weight vector that is a linear combination of the old weight vector and the current input vector. Typically, the unit whose weight vector was closest to the input vector is allowed to learn.

Page 5: Artificial Neural Networks Lect7: Neural networks based on competition

5

Introduction… Competitive nets (Fausett)

The weight update for output (or cluster) unit j is given as:

w.j(new) = w.j(old) + α [x − w.j(old)] = (1 − α) w.j(old) + α x

where x is the input vector, w.j is the weight vector for unit j, and α is the learning rate, which decreases as learning proceeds.

Two methods of determining the closest weight vector to a pattern vector are commonly used for self-organizing nets. Both are based on the assumption that the weight vector for each cluster (output) unit serves as an exemplar for the input vectors that have been assigned to that unit during learning.

– The first method of determining the winner uses the squared Euclidean distance between the I/P vector

Page 6: Artificial Neural Networks Lect7: Neural networks based on competition

Introduction… Competitive nets (Fausett)

and the weight vector, and chooses the unit whose weight vector has the smallest Euclidean distance from the I/P vector.

– The second method uses the dot product of the I/P vector and the weight vector. The dot product can be interpreted as giving the correlation between the I/P and weight vectors.

In general, and for consistency, we will use the squared Euclidean distance. Many NNs use the idea of competition among neurons to enhance the contrast in activations of the neurons. In the most extreme situation, the case of Winner-Take-All, only the neuron with the largest activation is allowed to remain "on".

Page 7: Artificial Neural Networks Lect7: Neural networks based on competition


Page 8: Artificial Neural Networks Lect7: Neural networks based on competition

Introduction… Competitive nets

1. Euclidean distance between two vectors:

‖x − x_i‖ = sqrt( (x − x_i)^T (x − x_i) )

2. Cosine of the angle between two vectors:

cos ψ = (x^T x_i) / (‖x‖ ‖x_i‖)

(A smaller angle ψ between x and x_i indicates more similar vectors.)
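As a quick plain-Matlab illustration of these two measures (the vectors here are made up for the sketch):

% Two similarity measures between an input x and a stored vector xi
x  = [1; 0; 1];                             % hypothetical input vector
xi = [0.9; 0.1; 0.8];                       % hypothetical weight/exemplar vector
d2 = sum((x - xi).^2);                      % squared Euclidean distance: smaller = more similar
cospsi = (x' * xi) / (norm(x) * norm(xi));  % cosine of the angle: larger = more similar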

Page 9: Artificial Neural Networks Lect7: Neural networks based on competition

Neural Networks Based on Competition.

Introduction
Fixed weight competitive nets
– Maxnet
– Mexican Hat
– Hamming Net
Kohonen Self-Organizing Maps (SOM)
SOM in Matlab
References and suggested reading

Page 10: Artificial Neural Networks Lect7: Neural networks based on competition

MAXNET

The activations are updated recurrently:

y^(k+1) = Γ[ W_M y^k ]

where the weight matrix W_M has 1 on the diagonal and −ε everywhere else:

W_M =
[  1   −ε   …   −ε ]
[ −ε    1   …   −ε ]
[  …    …   …    … ]
[ −ε   −ε   …    1 ]

with 0 < ε < 1/p (p = number of competitors), and Γ applies the activation function

f(net) = net if net ≥ 0; 0 if net < 0

Page 11: Artificial Neural Networks Lect7: Neural networks based on competition

Hamming Network and MAXNET

MAXNET
– A recurrent network involving both excitatory and inhibitory connections
• Positive self-feedbacks and negative cross-feedbacks
– After a number of recurrences, the only non-zero node will be the one with the largest initializing entry from the i/p vector

Page 12: Artificial Neural Networks Lect7: Neural networks based on competition

Maxnet: Fixed-weight competitive net (Fausett)
• Lateral inhibition between competitors

Activation function for the Maxnet:

f(x) = x if x > 0; 0 otherwise

Competition:
• an iterative process until the net stabilizes (at most one node with positive activation)

– Step 0: Initialize activations and weights (set 0 < ε < 1/m), where m is the # of nodes (competitors)

Page 13: Artificial Neural Networks Lect7: Neural networks based on competition

Algorithm

Step 0: Initialize activations and weights:

a_j(0) = input to node A_j
w_ij = 1 if i = j; −ε otherwise

Step 1: While stopping condition is false, do steps 2-4.

Step 2: Update the activation of each node, for j = 1, …, m:

a_j(new) = f[ a_j(old) − ε Σ_{k≠j} a_k(old) ]

(the argument of f is the total i/p to node A_j from all nodes, including itself)

Step 3: Save activations for use in the next iteration:

a_j(old) = a_j(new), for j = 1, …, m

Step 4: Test stopping condition: if more than one node has a nonzero activation, continue; otherwise, stop.

Page 14: Artificial Neural Networks Lect7: Neural networks based on competition

Example

Note that in step 2, the i/p to the function f is the total i/p to node A_j from all nodes, including itself. Some precautions should be incorporated to handle the situation in which two or more units have the same, maximal, input.

Example: ε = .2 and the initial activations (input signals) are:

a1(0) = .2, a2(0) = .4, a3(0) = .6, a4(0) = .8

(The printed values a3(0) = .4 and f(−.12) are inconsistent with the iterates below; a3(0) = .6 and f(−.16) are the values that reproduce them.) As the net iterates, the activations are:

a1(1) = f{a1(0) − .2 [a2(0) + a3(0) + a4(0)]} = f(−.16) = 0
a1(1) = .0, a2(1) = .08, a3(1) = .32, a4(1) = .56
a1(2) = .0, a2(2) = .0, a3(2) = .192, a4(2) = .48
a1(3) = .0, a2(3) = .0, a3(3) = .096, a4(3) = .442
a1(4) = .0, a2(4) = .0, a3(4) = .008, a4(4) = .422
a1(5) = .0, a2(5) = .0, a3(5) = .0, a4(5) = .421

A4 is the only node to remain on.
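A minimal plain-Matlab sketch of this competition (using the ε and initial activations of the example above):

% Maxnet winner-take-all iteration (sketch of the algorithm above)
epsilon = 0.2;                    % 0 < epsilon < 1/m, with m = 4 competitors
a = [0.2 0.4 0.6 0.8];            % initial activations a_j(0)
while nnz(a) > 1                  % stop when at most one node is still on
    % total input to each node: own activation minus epsilon times the others
    a = max(a - epsilon*(sum(a) - a), 0);   % f(x) = max(x, 0)
end
disp(a)                           % only a4 survives (approx. 0.421)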

Page 15: Artificial Neural Networks Lect7: Neural networks based on competition

Neural Networks Based on Competition.

Introduction
Fixed weight competitive nets
– Maxnet
– Mexican Hat
– Hamming Net
Kohonen Self-Organizing Maps (SOM)
SOM in Matlab
References and suggested reading

Page 16: Artificial Neural Networks Lect7: Neural networks based on competition

Mexican Hat: Fixed-weight competitive net
Architecture

Each neuron is connected with excitatory links (positively weighted) to a number of "cooperative neighbours": neurons that are in close proximity. Each neuron is also connected with inhibitory links (negatively weighted) to a number of "competitive neighbours": neurons that are somewhat further away. There may also be a number of neurons, further away still, to which the neuron is not connected. All of these connections are within a particular layer of a neural net. The neurons receive an external signal in addition to these interconnection signals (just like Maxnet). This pattern of interconnections is repeated for each neuron in the layer. The interconnection pattern for unit Xi is as follows:

Page 17: Artificial Neural Networks Lect7: Neural networks based on competition

Mexican Hat: Fixed-weight competitive net

The size of the region of cooperation (positive connections) and the region of competition (negative connections) may vary, as may the relative magnitudes of the +ve and −ve weights and the topology of the regions (linear, rectangular, hexagonal, etc.).

Figure legend:
si = external activation
w1 = excitatory connection
w2 = inhibitory connection

Page 18: Artificial Neural Networks Lect7: Neural networks based on competition

Mexican Hat: Fixed-weight competitive net

The contrast enhancement of the signal si received by unit Xi is accomplished by iteration for several time steps. The activation of unit Xi at time t is given by:

x_i(t) = f[ s_i(t) + Σ_k w_k x_{i+k}(t − 1) ]

where the terms in the summation are the weighted signals from other units (cooperative and competitive neighbours) at the previous time step.

[Figure: a unit receives its external input together with excitatory (+) links from near neighbours and inhibitory (−) links from units further away.]

Page 19: Artificial Neural Networks Lect7: Neural networks based on competition

Mexican Hat: Fixed-weight competitive net
Nomenclature

R2 = radius of region of interconnections; Xi is connected to units X(i+k) and X(i−k) for k = 1, …, R2
R1 = radius of region with +ve reinforcement; R1 < R2
wk = weight on interconnections between Xi and units X(i+k) and X(i−k):
wk is positive for 0 ≤ k ≤ R1
wk is negative for R1 < k ≤ R2
x = vector of activations
x_old = vector of activations at previous time step
t_max = total number of iterations of contrast enhancement
s = external signal

Page 20: Artificial Neural Networks Lect7: Neural networks based on competition

Mexican Hat: Fixed-weight competitive net
Algorithm

As presented, the algorithm corresponds to the external signal being given only for the first iteration (step 1).

Step 0: Initialize parameters t_max, R1, R2 as desired.
Initialize weights:
w_k = C1 for k = 0, …, R1 (C1 > 0)
w_k = C2 for k = R1+1, …, R2 (C2 < 0)
Initialize x_old to 0.

Step 1: Present external signal s:
x = s
Save activations in array x_old (for i = 1, …, n): x_old_i = x_i
Set iteration counter: t = 1.

Page 21: Artificial Neural Networks Lect7: Neural networks based on competition

Mexican Hat: Fixed-weight competitive net
Algorithm

Step 2: While t is less than t_max, do steps 3-7.

Step 3: Compute net input (i = 1, …, n):

x_i = C1 Σ_{k=−R1}^{R1} x_old_{i+k} + C2 Σ_{k=−R2}^{−R1−1} x_old_{i+k} + C2 Σ_{k=R1+1}^{R2} x_old_{i+k}

Step 4: Apply activation function f (ramp from 0 to x_max, slope 1):

f(x) = 0 if x < 0; x if 0 ≤ x ≤ x_max; x_max if x > x_max

i.e. x_i = min(x_max, max(0, x_i)), i = 1, …, n.

Page 22: Artificial Neural Networks Lect7: Neural networks based on competition

Mexican Hat: Fixed-weight competitive net

Step 5: Save current activations in x_old:

x_old_i = x_i (i = 1, …, n)

Step 6: Increment iteration counter: t = t + 1.

Step 7: Test stopping condition: if t < t_max, continue; otherwise, stop.

Note: the +ve reinforcement from nearby units and the −ve reinforcement from units that are further away have the effect of
– increasing the activation of units with larger initial activations, and
– reducing the activations of those that had a smaller external signal.

Page 23: Artificial Neural Networks Lect7: Neural networks based on competition

Mexican Hat Example

Let's consider a simple net with 7 units. The activation function for this net is:

f(x) = 0 if x < 0; x if 0 ≤ x ≤ 2; 2 if x > 2

Step 0: Initialize parameters: R1 = 1, R2 = 2, C1 = .6, C2 = −.4

Step 1: t = 0. The external signal is s = (.0, .5, .8, 1, .8, .5, .0):

x(0) = (0.0, 0.5, 0.8, 1.0, 0.8, 0.5, 0.0)

Save in x_old: x_old = (0.0, 0.5, 0.8, 1.0, 0.8, 0.5, 0.0)

(The middle unit, which receives the largest signal, is the node to be enhanced.)

Step 2: t = 1; the update formulas used in step 3 are listed as follows for reference:

Page 24: Artificial Neural Networks Lect7: Neural networks based on competition

Mexican Hat Example

x1 = 0.6 x1_old + 0.6 x2_old − 0.4 x3_old
x2 = 0.6 x1_old + 0.6 x2_old + 0.6 x3_old − 0.4 x4_old
x3 = −0.4 x1_old + 0.6 x2_old + 0.6 x3_old + 0.6 x4_old − 0.4 x5_old
x4 = −0.4 x2_old + 0.6 x3_old + 0.6 x4_old + 0.6 x5_old − 0.4 x6_old
x5 = −0.4 x3_old + 0.6 x4_old + 0.6 x5_old + 0.6 x6_old − 0.4 x7_old
x6 = −0.4 x4_old + 0.6 x5_old + 0.6 x6_old + 0.6 x7_old
x7 = −0.4 x5_old + 0.6 x6_old + 0.6 x7_old

Step 3, t = 1:

x1 = 0.6(0.0) + 0.6(0.5) − 0.4(0.8) = −0.02
x2 = 0.6(0.0) + 0.6(0.5) + 0.6(0.8) − 0.4(1.0) = 0.38
x3 = −0.4(0.0) + 0.6(0.5) + 0.6(0.8) + 0.6(1.0) − 0.4(0.8) = 1.06
x4 = −0.4(0.5) + 0.6(0.8) + 0.6(1.0) + 0.6(0.8) − 0.4(0.5) = 1.16
x5 = −0.4(0.8) + 0.6(1.0) + 0.6(0.8) + 0.6(0.5) − 0.4(0.0) = 1.06
x6 = −0.4(1.0) + 0.6(0.8) + 0.6(0.5) + 0.6(0.0) = 0.38
x7 = −0.4(0.8) + 0.6(0.5) + 0.6(0.0) = −0.02

Page 25: Artificial Neural Networks Lect7: Neural networks based on competition

Mexican Hat Example

Step 4 (apply f):

x(1) = (0.0, 0.38, 1.06, 1.16, 1.06, 0.38, 0.0)

Steps 5-7: Bookkeeping for next iteration.

Step 3, t = 2:

x1 = 0.6(0.0) + 0.6(0.38) − 0.4(1.06) = −0.196
x2 = 0.6(0.0) + 0.6(0.38) + 0.6(1.06) − 0.4(1.16) = 0.40
x3 = −0.4(0.0) + 0.6(0.38) + 0.6(1.06) + 0.6(1.16) − 0.4(1.06) = 1.14
x4 = −0.4(0.38) + 0.6(1.06) + 0.6(1.16) + 0.6(1.06) − 0.4(0.38) = 1.66
x5 = −0.4(1.06) + 0.6(1.16) + 0.6(1.06) + 0.6(0.38) − 0.4(0.0) = 1.14
x6 = −0.4(1.16) + 0.6(1.06) + 0.6(0.38) + 0.6(0.0) = 0.40
x7 = −0.4(1.06) + 0.6(0.38) + 0.6(0.0) = −0.196

Step 4 (apply f):

x(2) = (0.0, 0.40, 1.14, 1.66, 1.14, 0.40, 0.0)

The middle node, x4, which had the largest external signal, has been enhanced.

Steps 5-7: Bookkeeping for next iteration.
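A short plain-Matlab sketch of these two iterations (using the example's parameters; units beyond the ends of the row are simply ignored):

% Mexican Hat contrast enhancement (sketch of the example above)
C1 = 0.6; C2 = -0.4; R1 = 1; R2 = 2; x_max = 2; t_max = 2;
s = [0 .5 .8 1 .8 .5 0];          % external signal, given at t = 0 only
n = numel(s);
x_old = s;                        % Step 1: x = s, saved in x_old
for t = 1:t_max
    x = zeros(1, n);
    for i = 1:n                   % Step 3: net input to unit i
        for k = -R2:R2
            j = i + k;
            if j >= 1 && j <= n   % "missing" units off the edge are ignored
                if abs(k) <= R1, w = C1; else, w = C2; end
                x(i) = x(i) + w * x_old(j);
            end
        end
    end
    x = min(x_max, max(0, x));    % Step 4: ramp activation
    x_old = x;                    % Step 5: bookkeeping
end
disp(x)                           % unit 4, with the largest signal, is enhanced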

Page 26: Artificial Neural Networks Lect7: Neural networks based on competition

Neural Networks Based on Competition.

Introduction
Fixed weight competitive nets
– Maxnet
– Mexican Hat
– Hamming Net
Kohonen Self-Organizing Maps (SOM)
SOM in Matlab
References and suggested reading

Page 27: Artificial Neural Networks Lect7: Neural networks based on competition

Hamming Network: Fixed-weight competitive net

The Hamming distance between two vectors is the number of components in which the vectors differ. For two bipolar vectors x and y of dimension n:

x · y = a − d

where a is the number of bits in agreement in x and y, and d is the number of bits in which x and y differ: d = HD(x, y) and a = n − d. Hence

x · y = n − 2 HD(x, y), i.e. 0.5 (x · y) + 0.5 n = a = n − HD(x, y)

so a larger dot product x · y means a larger a, and hence a shorter Hamming distance. (For example, the vectors 011 and 010 differ in one component, so their Hamming distance is 1.)

The Hamming net uses Maxnet as a subnet to find the unit with the largest input (see figure on next slide).
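A one-line check of this relation in Matlab (using the two exemplar vectors that appear in the example later in this section):

% Verify x.y = n - 2*HD(x,y) for two bipolar vectors
x = [1 -1 -1 -1];                  % e(1) from the example below
y = [-1 -1 -1 1];                  % e(2) from the example below
HD = sum(x ~= y);                  % number of differing components: 2
fprintf('x.y = %d, n - 2*HD = %d\n', x*y', numel(x) - 2*HD);   % both are 0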

Page 28: Artificial Neural Networks Lect7: Neural networks based on competition

Hamming Network: Fixed-weight competitive net
Architecture

The sample architecture shown assumes that input vectors are 4-tuples, to be categorized as belonging to one of 2 classes.

Page 29: Artificial Neural Networks Lect7: Neural networks based on competition

Hamming Network: Fixed-weight competitive net
Algorithm

Given a set of m bipolar exemplar vectors e(1), e(2), …, e(m), the Hamming net can be used to find the exemplar that is closest to the bipolar i/p vector x.

The net input y_in_j to unit Y_j gives the number of components in which the i/p vector and the exemplar vector for unit Y_j, e(j), agree (n minus the Hamming distance between these vectors).

Nomenclature:
n: number of input nodes = number of components of any i/p vector
m: number of o/p nodes = number of exemplar vectors
e(j): the jth exemplar vector, e(j) = (e_1(j), …, e_i(j), …, e_n(j))

Page 30: Artificial Neural Networks Lect7: Neural networks based on competition

Hamming Network: Fixed-weight competitive net
Algorithm

Step 0: To store the m exemplar vectors, initialise the weights and biases:

w_ij = 0.5 e_i(j),  b_j = 0.5 n

Step 1: For each vector x, do steps 2-4.

Step 2: Compute the net input to each unit Y_j:

y_in_j = b_j + Σ_i x_i w_ij, for j = 1, …, m

Step 3: Initialise activations for Maxnet:

y_j(0) = y_in_j, for j = 1, …, m

Step 4: Maxnet iterates to find the best-match exemplar.

Page 31: Artificial Neural Networks Lect7: Neural networks based on competition

Hamming Network: Fixed-weight competitive net

Upper layer: Maxnet
– it takes y_in_j as its initial value, then iterates toward a stable state (the best-match exemplar)
– the one output node with the highest y_in will be the winner, because its weight vector is closest to the input vector

Page 32: Artificial Neural Networks Lect7: Neural networks based on competition

Hamming Network: Fixed-weight competitive net
Example (see architecture, n = 4)

Given the exemplar vectors:
e(1) = (1, −1, −1, −1)
e(2) = (−1, −1, −1, 1)
(m = 2 exemplar patterns in this case)

The Hamming net can be used to find the exemplar that is closest to each of the bipolar i/p patterns (1, 1, −1, −1), (1, −1, −1, −1), (−1, −1, −1, 1), and (−1, −1, 1, 1).

Step 0: Store the m exemplar vectors in the weights, w_ij = 0.5 e_i(j):

W =
[  .5   −.5 ]
[ −.5   −.5 ]
[ −.5   −.5 ]
[ −.5    .5 ]

Initialize the biases: b_1 = b_2 = n/2 = 2, where n = number of input nodes (4 in this case).

Page 33: Artificial Neural Networks Lect7: Neural networks based on competition

Hamming Network: Fixed-weight competitive net
Example

For the vector x = (1, 1, −1, −1), let's do the following steps:

y_in_1 = b_1 + Σ_i x_i w_i1 = 2 + 1 = 3
y_in_2 = b_2 + Σ_i x_i w_i2 = 2 − 1 = 1

y_1(0) = 3, y_2(0) = 1

These values represent the Hamming similarity, because (1, 1, −1, −1) agrees with e(1) = (1, −1, −1, −1) in the 1st, 3rd, and 4th components, and (1, 1, −1, −1) agrees with e(2) = (−1, −1, −1, 1) in only the 3rd component.

Since y_1(0) > y_2(0), Maxnet will find that unit Y_1 has the best-match exemplar for input vector x = (1, 1, −1, −1).

Page 34: Artificial Neural Networks Lect7: Neural networks based on competition

Hamming Network: Fixed-weight competitive net
Example

For the vector x = (1, −1, −1, −1), let's do the following steps:

y_in_1 = b_1 + Σ_i x_i w_i1 = 2 + 2 = 4
y_in_2 = b_2 + Σ_i x_i w_i2 = 2 + 0 = 2

y_1(0) = 4, y_2(0) = 2

Note that the input agrees with e(1) in all 4 components and agrees with e(2) in the 2nd and 3rd components.

Since y_1(0) > y_2(0), Maxnet will find that unit Y_1 has the best-match exemplar for input vector x = (1, −1, −1, −1).

Page 35: Artificial Neural Networks Lect7: Neural networks based on competition

Hamming Network: Fixed-weight competitive net
Example

For the vector x = (−1, −1, −1, 1), let's do the following steps:

y_in_1 = b_1 + Σ_i x_i w_i1 = 2 + 0 = 2
y_in_2 = b_2 + Σ_i x_i w_i2 = 2 + 2 = 4

y_1(0) = 2, y_2(0) = 4

Note that the input agrees with e(1) in the 2nd and 3rd components and agrees with e(2) in all four components.

Since y_2(0) > y_1(0), Maxnet will find that unit Y_2 has the best-match exemplar for input vector x = (−1, −1, −1, 1).

Page 36: Artificial Neural Networks Lect7: Neural networks based on competition

Hamming Network: Fixed-weight competitive net
Example

For the vector x = (−1, −1, 1, 1), let's do the following steps:

y_in_1 = b_1 + Σ_i x_i w_i1 = 2 − 1 = 1
y_in_2 = b_2 + Σ_i x_i w_i2 = 2 + 1 = 3

y_1(0) = 1, y_2(0) = 3

Note that the input agrees with e(1) in the 2nd component and agrees with e(2) in the 1st, 2nd, and 4th components.

Since y_2(0) > y_1(0), Maxnet will find that unit Y_2 has the best-match exemplar for input vector x = (−1, −1, 1, 1).
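These four cases can be checked with a small plain-Matlab sketch (the Maxnet stage is replaced here by a simple max, which is the state Maxnet converges to):

% Hamming net: score each input against the stored exemplars
e = [ 1 -1 -1 -1 ;     % e(1), one exemplar per row
     -1 -1 -1  1 ];    % e(2)
X = [ 1  1 -1 -1 ;     % the four test vectors, one per row
      1 -1 -1 -1 ;
     -1 -1 -1  1 ;
     -1 -1  1  1 ];
n = size(e, 2);
W = 0.5 * e';          % w_ij = 0.5 e_i(j)
b = 0.5 * n;           % b_j = 0.5 n
Y = X*W + b;           % y_in_j = number of agreeing components
[~, J] = max(Y, [], 2);        % winner for each input, as Maxnet would find
disp(Y), disp(J')      % scores: rows (3 1), (4 2), (2 4), (1 3); winners 1 1 2 2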

Page 37: Artificial Neural Networks Lect7: Neural networks based on competition

Neural Networks Based on Competition.

Introduction
Fixed weight competitive nets
– Maxnet
– Mexican Hat
– Hamming Net
Kohonen Self-Organizing Maps (SOM)
SOM in Matlab
References and suggested reading

Page 38: Artificial Neural Networks Lect7: Neural networks based on competition

Kohonen Self-Organizing Maps (SOM) (Fausett)

Teuvo Kohonen, born 11-07-34, Finland; author of the book "Self-Organizing Maps", Springer (1997).
SOM = "topology-preserving maps".
SOM assume a topological structure among the cluster units.
This property is observed in the brain, but not found in other ANNs.

Page 39: Artificial Neural Networks Lect7: Neural networks based on competition

Kohonen Self-Organizing Maps (SOM) (Fausett)

Page 40: Artificial Neural Networks Lect7: Neural networks based on competition

Kohonen Self-Organizing Maps (SOM) (Fausett)

Architecture:

Page 41: Artificial Neural Networks Lect7: Neural networks based on competition

Kohonen Self-Organizing Maps (SOM) (Fausett)

Architecture:

Linear array of cluster units:

* * {* (* [#] *) *} * *

{ } R = 2, ( ) R = 1, [ ] R = 0

This figure represents the neighbourhood of the unit designated by # in a 1-D topology (with 10 cluster units).

[Figure: neighbourhoods for a rectangular grid: nested squares with radii R = 0, 1, 2 around the winning unit #; each unit has 8 neighbours.]

Page 42: Artificial Neural Networks Lect7: Neural networks based on competition

Kohonen Self-Organizing Maps (SOM) (Fausett)

Architecture:

[Figure: neighbourhoods for a hexagonal grid: nested regions with radii R = 0, 1, 2 around the winning unit #; each unit has 6 neighbours.]

In each of these illustrations, the "winning unit" is indicated by the symbol # and the other units are denoted by *.

NOTE: What if the winning unit is on the edge of the grid?
• Winning units that are close to the edge of the grid will have some neighbourhoods with fewer units than those shown in the respective figures.
• Neighbourhoods do not "wrap around" from one side of the grid to the other; "missing" units are simply ignored.

Page 43: Artificial Neural Networks Lect7: Neural networks based on competition

Training SOM Networks

Which nodes are trained? How are they trained?
– Training functions
– Are all nodes trained the same amount?

Two basic types of training sessions:
– Initial formation
– Final convergence

How do you know when to stop?
– Small number of changes in input mappings
– Max # of epochs reached

Page 44: Artificial Neural Networks Lect7: Neural networks based on competition

Kohonen Self-Organizing Maps (SOM) (Fausett)

Algorithm:

Step 0: Initialise weights w_ij (randomly).
Set topological neighbourhood parameters.
Set learning rate parameters.

Step 1: While stopping condition is false, do steps 2-8.

Step 2: For each i/p vector x, do steps 3-5.

Step 3: For each j, compute:

D(j) = Σ_i (w_ij − x_i)²

Step 4: Find index J such that D(J) is a minimum.

Step 5: For all units j within a specified neighbourhood of J, and for all i:

w_ij(new) = w_ij(old) + α [x_i − w_ij(old)]

Step 6: Update learning rate.

Step 7: Reduce radius of topological neighbourhood at specified times.

Step 8: Test stopping condition.

Page 45: Artificial Neural Networks Lect7: Neural networks based on competition

Kohonen Self-Organizing Maps (SOM) (Fausett)

Alternative structure for reducing R and α:

The learning rate α is a slowly decreasing function of time (or training epochs). Kohonen indicates that a linearly decreasing function is satisfactory for practical computations; a geometric decrease would produce similar results.

The radius of the neighbourhood around a cluster unit also decreases as the clustering process progresses.

The formation of a map occurs in 2 phases: the initial formation of the correct order, and the final convergence. The second phase takes much longer than the first and requires a small value for the learning rate.

Many iterations through the training set may be necessary, at least in some applications.
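For illustration, the two schedules mentioned above can be compared with a few lines of Matlab (the constants are made up for the sketch):

% Linear vs. geometric decrease of the learning rate (illustrative constants)
T = 1000; t = 0:T;
alpha_lin = 0.9 * (1 - t/T);      % linear: reaches 0 at t = T
alpha_geo = 0.9 * 0.995.^t;       % geometric: decays by a fixed factor per step
plot(t, alpha_lin, t, alpha_geo), legend('linear', 'geometric')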

Page 46: Artificial Neural Networks Lect7: Neural networks based on competition

Kohonen Self-Organizing Maps (SOM) (Fausett)

Applications and Examples:

The SOM has had many applications:
– computer-generated music (Kohonen 1989)
– the traveling salesman problem

Example: a Kohonen SOM to cluster 4 vectors.

– Let the 4 vectors to be clustered be:
s(1) = (1, 1, 0, 0); s(2) = (0, 0, 0, 1)
s(3) = (1, 0, 0, 0); s(4) = (0, 0, 1, 1)
– The maximum number of clusters to be formed is m = 2 (4 input nodes, 2 output nodes).
– Suppose the learning rate is α(0) = 0.6 and α(t+1) = 0.5 α(t).
– With only 2 clusters available, the neighbourhood of node J (step 4) is set so that only one cluster unit updates its weights at each step (i.e. R = 0).

Page 47: Artificial Neural Networks Lect7: Neural networks based on competition

Kohonen Self-Organizing Maps (SOM) (Fausett)

Example:

Step 0: Initial weight matrix (one cluster unit per column):

W =
[ .2  .8 ]
[ .6  .4 ]
[ .5  .7 ]
[ .9  .3 ]

Initial radius: R = 0
Initial learning rate: α(0) = 0.6

Step 1: Begin training.

Step 2: For s(1) = (1, 1, 0, 0), do steps 3-5.

Step 3:
D(1) = (.2 − 1)² + (.6 − 1)² + (.5 − 0)² + (.9 − 0)² = 1.86
D(2) = (.8 − 1)² + (.4 − 1)² + (.7 − 0)² + (.3 − 0)² = 0.98

Page 48: Artificial Neural Networks Lect7: Neural networks based on competition

Kohonen Self-Organizing Maps (SOM) (Fausett)

Example:

Step 4: The i/p vector is closest to o/p node 2, so J = 2.

Step 5: The weights on the winning unit are updated:

w_i2(new) = w_i2(old) + 0.6 [x_i − w_i2(old)] = 0.4 w_i2(old) + 0.6 x_i

This gives the weight matrix:

W =
[ .2  .92 ]
[ .6  .76 ]
[ .5  .28 ]
[ .9  .12 ]

(The first column of weights has not been modified.)

Page 49: Artificial Neural Networks Lect7: Neural networks based on competition

Kohonen Self-Organizing Maps (SOM) (Fausett)

Example:

Step 2: For s(2) = (0, 0, 0, 1), do steps 3-5.

Step 3:
D(1) = (.2 − 0)² + (.6 − 0)² + (.5 − 0)² + (.9 − 1)² = 0.66
D(2) = (.92 − 0)² + (.76 − 0)² + (.28 − 0)² + (.12 − 1)² = 2.2768

Step 4: The i/p vector is closest to o/p node 1, so J = 1.

Step 5: The weights on the winning unit are updated. This gives the weight matrix:

W =
[ .08  .92 ]
[ .24  .76 ]
[ .20  .28 ]
[ .96  .12 ]

(The second column of weights has not been modified.)

Page 50: Artificial Neural Networks Lect7: Neural networks based on competition

Kohonen Self-Organizing Maps (SOM) (Fausett)

Example:

Step 2: For s(3) = (1, 0, 0, 0), do steps 3-5.

Step 3:
D(1) = (.08 − 1)² + (.24 − 0)² + (.20 − 0)² + (.96 − 0)² = 1.8656
D(2) = (.92 − 1)² + (.76 − 0)² + (.28 − 0)² + (.12 − 0)² = 0.6768

Step 4: The i/p vector is closest to o/p node 2, so J = 2.

Step 5: The weights on the winning unit are updated. This gives the weight matrix:

W =
[ .08  .968 ]
[ .24  .304 ]
[ .20  .112 ]
[ .96  .048 ]

(The first column of weights has not been modified.)

Page 51: Artificial Neural Networks Lect7: Neural networks based on competition

Kohonen Self-Organizing Maps (SOM) (Fausett)

Example:

Step 2: For s(4) = (0, 0, 1, 1), do steps 3-5.

Step 3:
D(1) = (.08 − 0)² + (.24 − 0)² + (.20 − 1)² + (.96 − 1)² = 0.7056
D(2) = (.968 − 0)² + (.304 − 0)² + (.112 − 1)² + (.048 − 1)² = 2.724

Step 4: The i/p vector is closest to o/p node 1, so J = 1.

Step 5: The weights on the winning unit are updated. This gives the weight matrix:

W =
[ .032  .968 ]
[ .096  .304 ]
[ .680  .112 ]
[ .984  .048 ]

(The second column of weights has not been modified.)

Step 6: Reduce the learning rate: α = 0.5 (0.6) = 0.3.

Page 52: Artificial Neural Networks Lect7: Neural networks based on competition

Kohonen Self-Organizing Maps (SOM) (Fausett)

Example:

The weight update equations are now:

w_iJ(new) = w_iJ(old) + 0.3 [x_i − w_iJ(old)] = 0.7 w_iJ(old) + 0.3 x_i

The weight matrix after the 2nd epoch of training is:

W =
[ .016  .980 ]
[ .047  .360 ]
[ .630  .055 ]
[ .999  .024 ]

Modifying the learning rate in this way, after 100 iterations (epochs) the weight matrix appears to be converging to:

W =
[ 0.0  1.0 ]
[ 0.0  0.5 ]
[ 0.5  0.0 ]
[ 1.0  0.0 ]

The 1st column is the average of the 2 vectors in cluster 1, and the 2nd column is the average of the 2 vectors in cluster 2.
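A compact plain-Matlab sketch reproducing this example (R = 0, winner-only updates, learning rate halved every epoch; the subtraction W - x relies on implicit expansion, available from Matlab R2016b, otherwise use bsxfun):

% Kohonen SOM for the 4-vector clustering example above
S = [1 1 0 0;  0 0 0 1;  1 0 0 0;  0 0 1 1];   % training vectors, one per row
W = [.2 .8; .6 .4; .5 .7; .9 .3];              % initial weights, one cluster per column
alpha = 0.6;
for epoch = 1:100
    for p = 1:size(S, 1)
        x = S(p, :)';                          % current input as a column
        D = sum((W - x).^2, 1);                % squared distance to each cluster
        [~, J] = min(D);                       % winning cluster unit
        W(:, J) = W(:, J) + alpha*(x - W(:, J));   % Kohonen update, R = 0
    end
    alpha = 0.5 * alpha;                       % alpha(t+1) = 0.5 alpha(t)
end
disp(W)   % columns approach the means of the two clusters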

Page 53: Artificial Neural Networks Lect7: Neural networks based on competition

Example of a SOM that uses a 2-D neighborhood

Character recognition: Fausett, pp. 176-178.

Page 54: Artificial Neural Networks Lect7: Neural networks based on competition

Traveling Salesman Problem (TSP) by SOM

The aim of the TSP is to find a tour of a given set of cities that is of minimum length. A tour consists of visiting each city exactly once and returning to the starting city. The net uses the city coordinates as input (n = 2); there are as many cluster units as there are cities to be visited. The net has a linear topology (with the first and last unit also connected).

The figure shows the initial position of the cluster units and the locations of the cities.

Page 55: Artificial Neural Networks Lect7: Neural networks based on competition

Traveling Salesman Problem (TSP) by SOM

The first figure shows the result after 100 epochs of training with R = 1 (learning rate decreasing from 0.5 to 0.4).

The second figure shows the final tour after 100 epochs of training with R = 0.

Two candidate solutions:
ADEFGHIJBC
ADEFGHIJCB

Page 56: Artificial Neural Networks Lect7: Neural networks based on competition

Neural Networks Based on Competition.

Introduction
Fixed weight competitive nets
– Maxnet
– Mexican Hat
– Hamming Net
Kohonen Self-Organizing Maps (SOM)
SOM in Matlab
References and suggested reading

Page 57: Artificial Neural Networks Lect7: Neural networks based on competition

Matlab SOM Example

Page 58: Artificial Neural Networks Lect7: Neural networks based on competition


Architecture

Page 59: Artificial Neural Networks Lect7: Neural networks based on competition

Creating a SOM

newsom -- Create a self-organizing map

Syntax

net = newsom

net = newsom(PR,[D1,D2,...],TFCN,DFCN,OLR,OSTEPS,TLR,TND)

Description

net = newsom creates a new network with a dialog box.

net = newsom(PR,[D1,D2,...],TFCN,DFCN,OLR,OSTEPS,TLR,TND) takes,

PR - R x 2 matrix of min and max values for R input elements.

Di - Size of ith layer dimension, defaults = [5 8].

TFCN - Topology function, default ='hextop'.

DFCN - Distance function, default ='linkdist'.

OLR - Ordering phase learning rate, default = 0.9.

OSTEPS - Ordering phase steps, default = 1000.

TLR - Tuning phase learning rate, default = 0.02.

TND - Tuning phase neighborhood distance, default = 1.

and returns a new self-organizing map.

The topology function TFCN can be hextop, gridtop, or randtop. The distance function can be linkdist, dist, or mandist.
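As a usage sketch (following the old Neural Network Toolbox conventions described above; the data here is made up):

% Create and train a SOM on random 2-D data (usage sketch)
P = rand(2, 400);                   % 400 two-dimensional input vectors
net = newsom(minmax(P), [5 8]);     % 5-by-8 map, default hextop/linkdist
net.trainParam.epochs = 100;        % number of training epochs
net = train(net, P);                % train on the data
plotsom(net.iw{1,1}, net.layers{1}.distances)   % plot the trained map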

Page 60: Artificial Neural Networks Lect7: Neural networks based on competition

Neural Networks Based on Competition.

Introduction
Fixed weight competitive nets
– Maxnet
– Mexican Hat
– Hamming Net
Kohonen Self-Organizing Maps (SOM)
SOM in Matlab
References and suggested reading

Page 61: Artificial Neural Networks Lect7: Neural networks based on competition

Suggested Reading

L. Fausett, "Fundamentals of Neural Networks", Prentice-Hall, 1994, Chapter 4.

J. Zurada, "Artificial Neural Systems", West Publishing, 1992, Chapter 7.

Page 62: Artificial Neural Networks Lect7: Neural networks based on competition

References

These lecture notes were based on the references of the previous slide, and the following references:

1. Berlin Chen, lecture notes: National Taiwan Normal University, Taipei, Taiwan, ROC. http://140.122.185.120
2. Dan St. Clair, University of Missouri-Rolla, http://web.umr.edu/~stclair/class/classfiles/cs404_fs02/Misc/CS404_fall2001/Lectures/ Lecture 14.