Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.


Page 1: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons

MVN and MLMVN

1

Page 2: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

A threshold function is a linearly separable function

[Figure: the four points (±1, ±1) in the plane; the single point labeled "1" can be separated from the three points labeled "-1" by a straight line.]

f(1, 1) = 1
f(1, -1) = -1
f(-1, 1) = -1
f(-1, -1) = -1

f(x1, x2) is the OR function (in the ±1 alphabet used here, Boolean "0" is encoded by 1 and Boolean "1" by -1).

Linear separability means that it is possible to separate the "1"s and the "-1"s by a hyperplane.
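Below is a minimal sketch (not part of the slides) of a single real-valued threshold neuron reproducing the truth table above. The weights (-1, 1, 1) are one illustrative choice of separating hyperplane, not the only one.

```python
# A single real-valued threshold neuron f(x1, x2) = sign(w0 + w1*x1 + w2*x2)
# implementing the truth table above. The weights (-1, 1, 1) are one
# illustrative (assumed) choice of a separating hyperplane.
w0, w1, w2 = -1.0, 1.0, 1.0

def threshold_neuron(x1, x2):
    z = w0 + w1 * x1 + w2 * x2        # weighted sum
    return 1 if z > 0 else -1          # hard threshold (sign) activation

for x1, x2 in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
    print((x1, x2), threshold_neuron(x1, x2))
# Prints 1, -1, -1, -1 -- the OR function in the +/-1 encoding used above.
```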

2

Page 3: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

Threshold Boolean Functions

• A threshold (linearly separable) function can be learned by a single neuron.

• The number of threshold functions is very small in comparison with the number of all Boolean functions (104 of 256 for n = 3, about 2000 of 65536 for n = 4, etc.).

• Non-threshold (non-linearly separable) functions cannot be learned by a single neuron (Minsky and Papert, 1969); they can be learned only by a neural network.

3

Page 4: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

XOR – a classical non-threshold (non-linearly separable) function

[Figure: the four points (±1, ±1) in the plane; the two points labeled "1" and the two points labeled "-1" cannot be separated by a straight line.]

f(1, 1) = 1
f(1, -1) = -1
f(-1, 1) = -1
f(-1, -1) = 1

Non-linear separability means that it is impossible to separate the "1"s and the "-1"s by a hyperplane.

4

Page 5: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

Multi-valued mappings

• The first artificial neurons could learn only Boolean functions.

• However, Boolean functions can describe only a very limited class of problems.

• Thus, the ability to learn and implement not only Boolean, but also multiple-valued and continuous functions is very important for solving pattern recognition, classification, and approximation problems.

• This determines the importance of neurons that can learn and implement multiple-valued and continuous mappings.

5

Page 6: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

The traditional approach to learning multiple-valued mappings by a neuron:

• Sigmoid activation function (the most popular):

$F(z) = \dfrac{1}{1 + e^{-z}}$

[Figure: the sigmoid curve, passing through the value 0.5 at z = 0.]
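For reference only, a tiny sketch (not part of the slides) of the standard logistic sigmoid written out in code:

```python
import math

# The logistic sigmoid F(z) = 1 / (1 + exp(-z)) referred to above
# (the standard definition, shown here only for reference).
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(0.0))   # 0.5 -- the midpoint value visible in the plot
```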

6

Page 7: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

Sigmoidal neurons: limitations

• The sigmoid activation function has limited plasticity and limited flexibility.

• Thus, to learn functions whose behavior is quite different from that of the sigmoid, it is necessary to build a network: a single sigmoidal neuron is not able to learn such functions.

7

Page 8: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

Is it possible to overcome the Minsky-Papert limitation for the classical perceptron?

Yes!!!

8

Page 9: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

We can overcome the Minsky-Papert limitation using complex-valued weights and a complex activation function.

9

Page 10: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

Is it possible to learn XOR and Parity n functions using a single neuron?

• Any classical monograph/textbook on neural networks claims that a network of at least three neurons is needed to learn the XOR function.

• This is true for real-valued neurons and real-valued neural networks.

• However, it is not true for complex-valued neurons!

• A jump to the complex domain is the right way to overcome the Minsky-Papert limitation and to learn multiple-valued and Boolean non-linearly separable functions using a single neuron.

10

Page 11: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

NEURAL NETWORKS

Traditional Neurons

Neuro-Fuzzy Networks

Complex-Valued Neurons

Generalizations of Sigmoidal Neurons

Multi-Valued and Universal Binary Neurons

11

Page 12: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

Complex numbers

Unlike a real number, which is geometrically a point on a line, a complex number is a point on a plane.

Its coordinates are called the real (Re, horizontal) and the imaginary (Im, vertical) parts of the number.

i is the imaginary unit; r is the modulus (absolute value) of the number.

Algebraic form of a complex number: $z = x + iy$

[Figure: a complex number as a point in the plane, at distance r from the origin.]

12

Page 13: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

Complex numbers

Trigonometric and exponential (Euler's) forms of a complex number:

$z = r(\cos\varphi + i\sin\varphi) = re^{i\varphi}$

[Figure: the unit circle in the complex plane.]

φ is the argument (the phase, in terms of physics) of a complex number.

13

Page 14: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

Complex numbers

Complex-conjugated numbers

$z = x + iy = re^{i\varphi} = r(\cos\varphi + i\sin\varphi)$

$\bar{z} = x - iy = re^{-i\varphi} = r(\cos\varphi - i\sin\varphi)$
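A small sketch (not part of the slides) illustrating the three forms and conjugation with Python's standard cmath module; the number 3 + 4i is an arbitrary example:

```python
import cmath

z = 3 + 4j                      # algebraic form: z = x + iy
r, phi = cmath.polar(z)         # modulus r = |z| and argument (phase) phi
print(r, phi)                   # 5.0, ~0.9273 rad

# exponential (Euler's) form: z = r * e^{i*phi}
z_exp = r * cmath.exp(1j * phi)
print(abs(z - z_exp) < 1e-12)   # True: the same number

# complex conjugate: x - iy = r * e^{-i*phi}
zbar = z.conjugate()
print(zbar, r * cmath.exp(-1j * phi))
```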

14

Page 15: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

XOR problem

n = 2, m = 4: four sectors

W = (0, 1, i) – the weighting vector

x1   x2   z = w0 + w1·x1 + w2·x2   P_B(z)   f(x1, x2)
 1    1          1 + i                1         1
 1   -1          1 - i               -1        -1
-1    1         -1 + i               -1        -1
-1   -1         -1 - i                1         1

[Figure: the complex plane divided into four sectors by the points 1, i, -1, -i on the unit circle; the sectors are labeled alternately P_B = 1 and P_B = -1.]
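A minimal sketch (not from the slides) reproducing the table above in Python. It assumes the alternating-sector activation P_B(z) = (-1)^j, where j is the index of the sector containing arg(z), which is what the P_B column and the sector figure show:

```python
import cmath
import math

# Single complex-valued neuron with weighting vector W = (0, 1, i) and an
# m = 4 sector activation P_B(z) = (-1)**j for the sector j containing arg(z).
w0, w1, w2 = 0, 1, 1j
m = 4

def P_B(z):
    j = int(cmath.phase(z) % (2 * math.pi) // (2 * math.pi / m))  # sector index 0..m-1
    return (-1) ** j

for x1, x2 in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
    z = w0 + w1 * x1 + w2 * x2
    print((x1, x2), z, P_B(z))
# Reproduces the XOR column 1, -1, -1, 1: a non-linearly separable function
# implemented by a single neuron, thanks to the complex weights.
```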

15

Page 16: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

Parity 3 problem

n = 3, m = 6: six sectors

W = (0, ε, 1, 1) – the weighting vector, where ε = e^{iπ/3} is the primitive 6th root of unity

[Figure: the complex plane divided into six sectors bounded by the 6th roots of unity; the sectors are labeled alternately P_B = 1 and P_B = -1.]

x1   x2   x3   Z = w0 + w1·x1 + w2·x2 + w3·x3   P_B(Z)   f(x1, x2, x3)
 1    1    1               2 + ε                   1            1
 1    1   -1                 ε                    -1           -1
 1   -1    1                 ε                    -1           -1
 1   -1   -1              -2 + ε                   1            1
-1    1    1               2 - ε                  -1           -1
-1    1   -1                -ε                     1            1
-1   -1    1                -ε                     1            1
-1   -1   -1              -2 - ε                  -1           -1
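A minimal sketch (not from the slides) checking the Parity 3 table above numerically. It assumes ε = exp(iπ/3) and the alternating-sector activation P_B(z) = (-1)^j; a small tolerance is added because some weighted sums (e.g., Z = ε) fall exactly on a sector border, which by the sector definition belongs to the upper sector:

```python
import cmath
import math
from itertools import product

m = 6
eps = cmath.exp(1j * math.pi / 3)         # primitive 6th root of unity
w = [0, eps, 1, 1]                        # weighting vector (w0, w1, w2, w3)

def P_B(z):
    # Sector index j such that 2*pi*j/m <= arg(z) < 2*pi*(j+1)/m.
    # The small tolerance assigns values lying numerically on a border
    # (such as arg(z) = pi/3 here) to the upper sector, as in the definition.
    j = int((cmath.phase(z) % (2 * math.pi)) / (2 * math.pi / m) + 1e-9)
    return (-1) ** j

for x1, x2, x3 in product((1, -1), repeat=3):
    z = w[0] + w[1] * x1 + w[2] * x2 + w[3] * x3
    parity = x1 * x2 * x3                 # Parity 3 in the +/-1 encoding
    print((x1, x2, x3), P_B(z), parity)   # P_B(z) matches the parity on all 8 rows
```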

16

Page 17: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

Multi-Valued Neuron (MVN)

• A Multi-Valued Neuron is a neural element with n inputs and one output lying on the unit circle, and with complex-valued weights.

• The theoretical background behind the MVN is the Multiple-Valued (k-valued) Threshold Logic over the field of complex numbers

17

Page 18: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

Multi-valued mappings and multiple-valued logic

• We traditionally use Boolean functions and Boolean (two-valued) logic to present two-valued mappings:

$x_1, \ldots, x_n \in \{0, 1\}; \quad f(x_1, \ldots, x_n) \in \{0, 1\}$

or, in the bipolar alphabet,

$x_1, \ldots, x_n \in \{1, -1\}; \quad f(x_1, \ldots, x_n) \in \{1, -1\}$

• To present multi-valued mappings, we should use multiple-valued logic.

18

Page 19: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

Multiple-Valued Logic: classical view

• The values of multiple-valued (k-valued) logic are traditionally encoded by the integers {0,1, …, k-1}

• On the one hand, this approach looks natural.

• On the other hand, it presents only the quantitative properties and cannot present the qualitative properties.

19

Page 20: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

Multiple-Valued Logic: classical view

• For example, we need to present different colors in terms of multiple-valued logic. Let Red=0, Orange=1, Yellow=2, Green=3, etc.

• What does it mean?

• Is it true that Red < Orange < Yellow < Green ??!

20

Page 21: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

Multiple-Valued (k-valued) logic over the field of complex numbers

• To represent and handle both the quantitative properties and the qualitative properties, it is possible to move to the field of complex numbers.

• In this case, the argument (phase) may be used to represent the quality and the amplitude may be used to represent the quantity

21

Page 22: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

Multiple-Valued (k-valued) logic over the field of complex numbers

$\varepsilon = \exp(i\,2\pi/k)$ – the primitive kth root of unity

$j \in \{0, 1, \ldots, k-1\}$ – the regular values of k-valued logic

$j \mapsto \varepsilon^{j} = \exp(i\,2\pi j/k)$ – a one-to-one correspondence between the values of k-valued logic and the kth roots of unity

[Figure: the unit circle with the kth roots of unity $\varepsilon^0, \varepsilon^1, \varepsilon^2, \ldots, \varepsilon^{k-1}$ marked, corresponding to the integers 0, 1, 2, ..., k-1.]

The kth roots of unity $\varepsilon^0, \varepsilon^1, \varepsilon^2, \ldots, \varepsilon^{k-1}$ are the values of k-valued logic over the field of complex numbers.
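A small illustration (not from the slides) of the correspondence j → ε^j; the choice k = 4 is just an example:

```python
import cmath
import math

# The one-to-one correspondence j -> eps**j between the values {0, ..., k-1}
# of k-valued logic and the kth roots of unity on the unit circle.
k = 4
eps = cmath.exp(2j * math.pi / k)        # primitive kth root of unity
for j in range(k):
    root = eps ** j                      # eps^0, eps^1, ..., eps^(k-1)
    print(j, complex(round(root.real, 3), round(root.imag, 3)))
# For k = 4: 0 -> 1, 1 -> i, 2 -> -1, 3 -> -i
```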

22

Page 23: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

Important advantage

• In multiple-valued logic over the field of complex numbers, all the values of this logic are algebraically (arithmetically) on an equal footing: they are normalized and their absolute values are equal to 1.

• In the example with the colors, in terms of multiple-valued logic over the field of complex numbers, they are coded by different phases. Hence, their quality is represented by the phase.

• Since the phase determines the corresponding frequency, this representation meets the physical nature of the colors.

23

Page 24: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

Discrete-Valued (k-valued) Activation Function

$P(z) = \varepsilon^{j} = \exp(i\,2\pi j/k), \quad \text{if}\ \ 2\pi j/k \le \arg z < 2\pi (j+1)/k$

[Figure: the complex plane divided into k equal sectors 0, 1, ..., k-1; a weighted sum Z falling into sector j is mapped to the kth root of unity $\varepsilon^{j}$.]

Function P maps the complex plane into the set of the kth roots of unity.
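A minimal sketch (not from the slides) of this activation function in Python; the test value 2 - i is arbitrary:

```python
import cmath
import math

# Discrete k-valued activation: P(z) = exp(i*2*pi*j/k) for the sector j
# with 2*pi*j/k <= arg(z) < 2*pi*(j+1)/k.
def P_discrete(z, k):
    arg = cmath.phase(z) % (2 * math.pi)      # arg(z) brought into [0, 2*pi)
    j = int(arg // (2 * math.pi / k)) % k     # sector index; % k guards rounding at 2*pi
    return cmath.exp(2j * math.pi * j / k)    # the corresponding kth root of unity

print(P_discrete(2 - 1j, k=16))               # the output lies on the unit circle
```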

24

Page 25: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

Discrete-Valued (k-valued) Activation Function

[Figure: the complex plane divided into k = 16 equal sectors, each mapped to one of the 16th roots of unity.]

25

Page 26: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

Multi-Valued Neuron (MVN)

$f(x_1, \ldots, x_n) = P(w_0 + w_1 x_1 + \ldots + w_n x_n)$

f is a function of k-valued logic (a k-valued threshold function)

[Figure: a neuron with inputs $x_1, \ldots, x_n$, weighted sum $z = w_0 + w_1 x_1 + \ldots + w_n x_n$, activation $P(z)$, and output $f(x_1, \ldots, x_n)$.]
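A minimal sketch (not from the slides) of the MVN input/output mapping above; the weights, inputs, and k = 4 are made-up illustrative values:

```python
import cmath
import math

# Discrete MVN mapping f(x1, ..., xn) = P(w0 + w1*x1 + ... + wn*xn).
def mvn(weights, inputs, k):
    z = weights[0] + sum(w * x for w, x in zip(weights[1:], inputs))   # weighted sum
    j = int((cmath.phase(z) % (2 * math.pi)) // (2 * math.pi / k)) % k # sector of z
    return cmath.exp(2j * math.pi * j / k)                             # P(z)

print(mvn([0.2 + 0.1j, 1, 1j], [1, -1j], k=4))    # -> (1+0j), a 4th root of unity
```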

26

Page 27: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

MVN: main properties

• The key properties of MVN:
  – Complex-valued weights
  – The activation function is a function of the argument of the weighted sum
  – Complex-valued inputs and output lying on the unit circle (the kth roots of unity)
  – Higher functionality than that of the traditional neurons (e.g., sigmoidal)
  – Simplicity of learning

27

Page 28: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

MVN Learning

• Learning is reduced to movement along the unit circle.

• No derivative is needed; learning is based on the error-correction rule.

$\varepsilon^{q} - \varepsilon^{s}$ – the error, which completely determines the weights adjustment

$\varepsilon^{q}$ – the desired output

$\varepsilon^{s}$ – the actual output

[Figure: the unit circle with the actual output $\varepsilon^{s}$, the desired output $\varepsilon^{q}$, and the error connecting them.]

28

Page 29: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

Learning Algorithm for the Discrete MVN with

the Error-Correction Learning Rule

$W_{r+1} = W_r + \dfrac{\alpha_r}{n+1}\,(\varepsilon^{q} - \varepsilon^{s})\,\bar{X}$

W – the weighting vector; X – the input vector

$\bar{X}$ – the vector complex-conjugated to X

$\alpha_r$ – the learning rate (should always be equal to 1)

r – the current iteration; r+1 – the next iteration

$\varepsilon^{q}$ – the desired output (sector q)

$\varepsilon^{s}$ – the actual output (sector s)

29
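Below is a minimal sketch (not from the slides) of how the rule above can be used in a simple training loop. The toy setup (k = 8, n = 2, a "teacher" MVN that generates the desired sectors so that a solution is known to exist, and all function names) is an illustrative assumption:

```python
import cmath
import math
import random

# Discrete error-correction learning: W_{r+1} = W_r + (1/(n+1))*(eps^q - eps^s)*conj(X)
k, n = 8, 2
root = lambda j: cmath.exp(2j * math.pi * j / k)            # the jth root of unity

def sector(z):
    return int((cmath.phase(z) % (2 * math.pi)) // (2 * math.pi / k)) % k

def forward(W, X):
    z = W[0] + sum(w * x for w, x in zip(W[1:], X))
    return sector(z)                                        # actual output sector s

random.seed(0)
teacher = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(n + 1)]
samples = [tuple(root(random.randrange(k)) for _ in range(n)) for _ in range(10)]
desired = [forward(teacher, X) for X in samples]            # desired sectors q

W = [0j] * (n + 1)                                          # start from zero weights
for epoch in range(10000):
    mistakes = 0
    for X, q in zip(samples, desired):
        s = forward(W, X)
        if s != q:
            mistakes += 1
            delta = root(q) - root(s)                       # eps^q - eps^s
            W = [w + delta / (n + 1) * xc
                 for w, xc in zip(W, [1] + [x.conjugate() for x in X])]
    if mistakes == 0:
        print("all samples learned after", epoch, "epochs")
        break
else:
    print("did not converge within the epoch cap")
```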

Page 30: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

Continuous-Valued Activation Function

Continuous-valued case (k → ∞):

$P(z) = \exp(i \arg z) = \dfrac{z}{|z|}$

Function P maps the complex plane into the unit circle.

[Figure: the weighted sum Z is projected onto the unit circle at the point z/|z|.]

30

Page 31: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

Continuous-Valued Activation Function

31

Page 32: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

Continuous-Valued Activation Function

32

Page 33: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

Learning Algorithm for the Continuous MVN with

the Error Correction Learning Rule

$W_{r+1} = W_r + \dfrac{\alpha_r}{n+1}\left(\varepsilon^{q} - \dfrac{z}{|z|}\right)\bar{X}$

W – the weighting vector; X – the input vector

$\bar{X}$ – the vector complex-conjugated to X

$\alpha_r$ – the learning rate (should always be equal to 1)

r – the current iteration; r+1 – the next iteration; z – the weighted sum

$\varepsilon^{q}$ – the desired output

$z/|z|$ – the actual output

$\varepsilon^{q} - z/|z|$ – the neuron's error

33
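A minimal sketch (not from the slides) applying the continuous rule once and checking numerically that the corrected weighted sum equals z + δ, the property derived on the following slide about the role of the factor 1/(n+1). The weights, inputs, and desired output are made-up values:

```python
import cmath

# One step of the continuous MVN error-correction rule
#   W <- W + (1/(n+1)) * (eps_q - z/|z|) * conj(X)
n = 2
W = [0.5 - 0.3j, 1.2 + 0.4j, -0.7 + 0.1j]        # (w0, w1, w2), made up
X = [cmath.exp(0.4j), cmath.exp(-1.1j)]          # inputs on the unit circle
eps_q = cmath.exp(2.0j)                          # desired output on the unit circle

z = W[0] + sum(w * x for w, x in zip(W[1:], X))  # weighted sum
delta = eps_q - z / abs(z)                       # neuron's error

W_new = [w + delta / (n + 1) * xc
         for w, xc in zip(W, [1] + [x.conjugate() for x in X])]
z_new = W_new[0] + sum(w * x for w, x in zip(W_new[1:], X))

# The corrected weighted sum moves by exactly the error delta:
print(abs(z_new - (z + delta)) < 1e-12)          # True
```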


Page 35: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

A role of the factor 1/(n+1) in the Learning Rule

$\delta = \varepsilon^{q} - \dfrac{z}{|z|}$ – the neuron's error

The weights after the correction:

$\tilde{w}_0 = w_0 + \dfrac{\delta}{n+1}\,, \qquad \tilde{w}_1 = w_1 + \dfrac{\delta}{n+1}\,\bar{x}_1\,, \quad \ldots, \quad \tilde{w}_n = w_n + \dfrac{\delta}{n+1}\,\bar{x}_n$

The weighted sum after the correction (the inputs lie on the unit circle, so $\bar{x}_i x_i = |x_i|^2 = 1$):

$\tilde{z} = \tilde{w}_0 + \tilde{w}_1 x_1 + \ldots + \tilde{w}_n x_n = (w_0 + w_1 x_1 + \ldots + w_n x_n) + (n+1)\,\dfrac{\delta}{n+1} = z + \delta$

– exactly what we are looking for: the factor 1/(n+1) shares the error evenly among the n+1 weights, so the corrected weighted sum moves by exactly the error δ.

35

Page 36: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

Self-Adaptation of the Learning Rate

$W_{r+1} = W_r + \dfrac{\alpha_r}{(n+1)\,|z_r|}\left(\varepsilon^{q} - \dfrac{z}{|z|}\right)\bar{X}$

$|z_r|$ – the absolute value of the weighted sum on the previous (rth) iteration

$1/|z_r|$ is a self-adaptive part of the learning rate

[Figure: the unit circle; the factor 1/|z_r| enlarges the correction when |z| < 1 and shrinks it when |z| > 1.]

36

Page 37: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

Modified Learning Rules with the Self-Adaptive Learning Rate

Discrete MVN:

$W_{r+1} = W_r + \dfrac{\alpha_r}{(n+1)\,|z_r|}\,(\varepsilon^{q} - \varepsilon^{s})\,\bar{X}$

Continuous MVN:

$W_{r+1} = W_r + \dfrac{\alpha_r}{(n+1)\,|z_r|}\left(\varepsilon^{q} - \dfrac{z}{|z|}\right)\bar{X}$

$1/|z_r|$ is a self-adaptive part of the learning rate

37
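A minimal, self-contained sketch (not from the slides) of a single weight update with the self-adaptive factor 1/|z_r|; all numerical values are made up for illustration:

```python
import cmath

# One update of the modified (self-adaptive) continuous rule above.
n = 2
W = [0.1 + 0.2j, 0.3 - 0.1j, -0.2 + 0.4j]        # (w0, w1, w2), made up
X = [1j, -1]                                      # inputs on the unit circle
z = W[0] + W[1] * X[0] + W[2] * X[1]              # weighted sum z_r
eps_q = cmath.exp(1j * 2.0)                       # desired output (on the unit circle)
delta = eps_q - z / abs(z)                        # neuron's error
W = [w + delta / ((n + 1) * abs(z)) * xc          # self-adaptive factor 1/|z_r|
     for w, xc in zip(W, [1] + [x.conjugate() for x in X])]
print(W)
```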

Page 38: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

Convergence of the learning algorithm

• It is proven that the MVN learning algorithm converges after not more than k! iterations for the k-valued activation function.

• For the continuous MVN, the learning algorithm converges with precision λ after not more than (π/λ)! iterations, because in this case it is reduced to learning in π/λ-valued logic.

38

Page 39: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

MVN as a model of a biological neuron

Inhibition – no impulses (zero frequency)

Intermediate state – medium frequency

Excitation – high frequency

The state of a biological neuron is determined by the frequency of the generated impulses.

The amplitude of the impulses is always constant.

39

Page 40: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

MVN as a model of a biological neuron

40

Page 41: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

MVN as a model of a biological neuron

[Figure: the unit circle as the state space of the MVN, with points labeled "maximal inhibition", "maximal excitation", and intermediate states in between.]

41

Page 42: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

MVN as a model of a biological neuron

[Figure: the unit circle with the points of maximal inhibition and maximal excitation marked.]

42

Page 43: Multi-Valued Neurons and Multilayer Neural Network based on Multi-Valued Neurons MVN and MLMVN 1.

MVN:

• Learns faster
• Adapts better
• Learns even highly nonlinear functions
• Opens new, very promising opportunities for network design
• Is much closer to the biological neuron
• Allows using the Fourier phase spectrum as a source of features for solving different recognition/classification problems
• Allows hybrid (discrete/continuous) inputs and outputs

43