

Radial Basis Function Networks
Notes © Bill Wilson, 2008

Aim • to study radial basis function networks

Reference Haykin, Chapter 7 provides a theoretical treatment.

Another view is found in section 5.3 of Karray & de Silva: Soft Computing and

Intelligent Systems Design, Addison-Wesley, 2004. ISBN 0 321 11617 8

Unfortunately this is not currently in UNSW Library (but is on order).

Keywords radial basis function, receptive field, hybrid learning

Plan • define radial basis function (RBF)

• introduce RBF network

• training (estimating parameters for) RBFNs

• example

• applications


Introduction

• Radial basis function networks are feedforward networks in which some nodes use a different activation function from the regular sigmoid node of a backprop net – a radial activation function:

[Diagram: inputs x_1, ..., x_n connect (with fixed weight 1) to a hidden layer of RBF nodes, which connect through weights w_ij to output units o_1, ..., o_m.]

• The connections from the input units to the hidden units are fixed at 1.

• Each RBF/hidden node has two parameters, termed the centre $\mathbf{v}$ (a vector) and the scalar width $\sigma$. The function computed by the $i$-th node will be of the form

$g_i(\mathbf{x}) = r_i\!\left( \dfrac{\|\mathbf{x} - \mathbf{v}_i\|}{\sigma_i} \right)$


Radial Activation Functions

• Clearly the value of $g_i(\mathbf{x})$ depends on the distance of the input vector $\mathbf{x}$ from the centre $\mathbf{v}_i$, scaled by the width $\sigma_i$. All points $\mathbf{x}$ that are equally close to $\mathbf{v}_i$ will be treated in the same way by the function $g_i(\mathbf{x})$. Hence the name radial activation function.

• The most widely used radial basis function is the Gaussian function:

$g_i(\mathbf{x}) = \exp\!\left( -\dfrac{\|\mathbf{x} - \mathbf{v}_i\|^2}{2\sigma_i^2} \right)$
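• As a concrete illustration, here is a minimal sketch of this Gaussian activation in Python/NumPy (the function and argument names are ours, not from the notes):

    import numpy as np

    def gaussian_rbf(x, v, sigma):
        """Gaussian radial activation: g(x) = exp(-||x - v||^2 / (2 sigma^2)).

        x     : input vector
        v     : centre of this RBF node
        sigma : scalar width of this RBF node
        """
        return np.exp(-np.sum((x - v) ** 2) / (2.0 * sigma ** 2))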

• There is no squashing function in the output neurons: the value from the $j$-th output unit of our RBF network, assuming $n$ hidden layer units and $r$ output units, is

$o_j(\mathbf{x}) = \sum_{i=1}^{n} w_{ij}\, g_i(\mathbf{x}), \qquad j = 1, \ldots, r$

• Another possible radial activation function is $g_i(\mathbf{x}) = \dfrac{1}{1 + \exp\!\left( \|\mathbf{x} - \mathbf{v}_i\|^2 / \sigma_i^2 \right)}$.

• Note that in both cases $g_i(\mathbf{x}) \to 0$, and so $o_j(\mathbf{x}) \to 0$, as $\|\mathbf{x} - \mathbf{v}_i\| \to \infty$.
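• Putting the two layers together, a sketch of the full forward pass might look like the following (Python/NumPy again, with illustrative names; it assumes Gaussian RBFs):

    import numpy as np

    def rbf_forward(x, centres, widths, W):
        """Forward pass of an RBF network with n hidden nodes and r outputs.

        centres : (n, d) array, one centre v_i per hidden node
        widths  : (n,)   array of widths sigma_i
        W       : (n, r) array of hidden-to-output weights w_ij
        Returns the (r,) output vector o(x), where o_j(x) = sum_i w_ij g_i(x).
        """
        sq_dists = np.sum((centres - x) ** 2, axis=1)  # ||x - v_i||^2 for each node
        g = np.exp(-sq_dists / (2.0 * widths ** 2))    # Gaussian activations g_i(x)
        return g @ W                                   # linear output: no squashing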


“Training” Radial Activation Functions

• Training the RBF involves estimating the best values for the $\mathbf{v}_i$, the $\sigma_i$, and the $w_{ij}$.

• The standard technique is the hybrid approach, which is a two-stage learning strategy:

1) Unsupervised clustering algorithm used to find the $\mathbf{v}_i$ and $\sigma_i$ for the RBFs. There are a number of possible clustering algorithms to try: the random input vector method, the k-means-based method, the maximum likelihood estimate-based method, the standard deviations-based method, and the self-organising map method.

2) Supervised algorithm used to find values for the $w_{ij}$. There is only one layer with weights, so this is not as complicated as with MLPs/backprop. We'll describe one way to do this.


Finding Centres and Widths

• The random input vector method involves choosing the locations of centres randomly from the training data set. All RBFs get the same width $\sigma = \dfrac{d_{\max}}{\sqrt{2 m_1}}$, where $d_{\max}$ is the maximum distance between the chosen centres, and $m_1$ is the number of centres.
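• A sketch of this method in Python/NumPy (the names are ours; it assumes the training inputs are the rows of an array X):

    import numpy as np

    def random_centres_and_width(X, m1, rng=None):
        """Random input vector method: pick m1 centres from the rows of X
        and give every RBF the same width sigma = d_max / sqrt(2 * m1)."""
        if rng is None:
            rng = np.random.default_rng()
        centres = X[rng.choice(len(X), size=m1, replace=False)]
        # d_max: maximum pairwise distance between the chosen centres
        diffs = centres[:, None, :] - centres[None, :, :]
        d_max = np.sqrt((diffs ** 2).sum(axis=2)).max()
        sigma = d_max / np.sqrt(2 * m1)
        return centres, sigma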

• An alternative is to give centres differing widths depending on the local data density (broader width if density is low).

• The k-means-based method is a clustering algorithm:

1 Initialise: choose small random values for the initial (but distinct) centres $\mathbf{t}_k(0)$.

2 Sample: choose a vector $\mathbf{x}$ from the input space – we'll call this $\mathbf{x}(n)$.

3 Match: let $k(\mathbf{x})$ be the index of the centre $\mathbf{t}_k(n)$ that minimises $\|\mathbf{x}(n) - \mathbf{t}_k(n)\|$.

4 Update: set $\mathbf{t}_k(n+1) = \mathbf{t}_k(n) + \eta\,[\mathbf{x}(n) - \mathbf{t}_k(n)]$ for $k = k(\mathbf{x})$; $\mathbf{t}_k(n+1) = \mathbf{t}_k(n)$ otherwise.

5 Continue: add 1 to $n$ and go to step 2. Continue until no further significant changes.

• The width(s) can then be set in the same way as in the random input vector method.
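• The online k-means procedure above translates directly into code; a minimal sketch (the learning rate eta and the stopping test are our own illustrative choices):

    import numpy as np

    def kmeans_centres(X, k, eta=0.1, max_steps=10_000, tol=1e-6, seed=0):
        """Online k-means, following steps 1-5 above: repeatedly sample an
        input and move only the nearest ("matching") centre towards it."""
        rng = np.random.default_rng(seed)
        # Step 1: small random (and, almost surely, distinct) initial centres
        t = rng.normal(scale=0.01, size=(k, X.shape[1]))
        for _ in range(max_steps):
            x = X[rng.integers(len(X))]                  # Step 2: sample x(n)
            j = np.argmin(np.sum((t - x) ** 2, axis=1))  # Step 3: nearest centre
            step = eta * (x - t[j])                      # Step 4: update winner only
            t[j] += step
            if np.linalg.norm(step) < tol:               # Step 5: stop on tiny changes
                break
        return t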


Finding Weights

• A range of supervised learning methods can be used to find the weight values. Here is one that is appropriate for function interpolation, given a set of input vectors $\mathbf{x}_k$ and corresponding output values $\mathbf{d}_k$. In this case, it makes sense to use one RBF node for each pair $(\mathbf{x}_k, \mathbf{d}_k)$, and the centres for the RBF nodes are just the $\mathbf{x}_k$, i.e. $\mathbf{v}_k = \mathbf{x}_k$.

• Let $D$ be the matrix of desired outputs – i.e. if the function to be interpolated is $f : \mathbb{R}^n \to \mathbb{R}^m$, and $\mathbf{d}_k$ is the desired output for the input $\mathbf{x}_k$, so that $f(\mathbf{x}_k) = \mathbf{d}_k$, then $D = [\mathbf{d}_k]$, $k = 1, \ldots, p$, where $p$ is the number of hidden/RBF nodes. We can also write

$d_{kj} = f_j(\mathbf{x}_k) = \sum_{i=1}^{p} w_{ij}\, g_i(\mathbf{x}_k)$

• Let $g_{ki}$ be $g_i(\mathbf{x}_k)$, i.e. $\exp\!\left( -\dfrac{\|\mathbf{x}_k - \mathbf{x}_i\|^2}{2\sigma_i^2} \right)$ if we are using Gaussian RBFs. We can write $G = [g_{ki}]$. Then

$d_{kj} = f_j(\mathbf{x}_k) = \sum_{i=1}^{p} w_{ij}\, g_i(\mathbf{x}_k) = \sum_{i=1}^{p} w_{ij}\, g_{ki} = \sum_{i=1}^{p} g_{ki}\, w_{ij} = [GW]_{kj}$,

that is, $D = GW$.
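• In code, forming $G$ is straightforward once the centres are fixed at the training inputs; here is a sketch assuming Gaussian RBFs with a single shared width sigma for simplicity (the per-node $\sigma_i$ case is a small variation):

    import numpy as np

    def interpolation_matrix(X, sigma):
        """G[k, i] = g_i(x_k) = exp(-||x_k - x_i||^2 / (2 sigma^2)),
        with one Gaussian RBF node centred on each training input (v_i = x_i)."""
        sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
        return np.exp(-sq_dists / (2.0 * sigma ** 2))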


Finding Weights 2

• Since $D = GW$, if $G$ is invertible, then $W = G^{-1}D$.

• If $G$ is not invertible (because $|G| = 0$, or $G$ is non-square, or $G$ is ill-conditioned, i.e. $|G|$ is close to 0), then we replace $G^{-1}$ by $G^+$, the Moore-Penrose pseudo-inverse of $G$ (a kind of generalised inverse), which is the unique matrix satisfying:

(1) $G = G G^+ G$  (2) $G^+ = G^+ G G^+$  (3)¹ $(G G^+)^* = G G^+$  (4) $(G^+ G)^* = G^+ G$

• If the columns of $G$ are linearly independent, then $G^T G$ is invertible, and then $G^+ = (G^T G)^{-1} G^T$. Note that then $G^+ G = (G^T G)^{-1} G^T G = I$. So $W = G^+ D$.

¹ $A^*$ is the conjugate transpose of $A$. This is the same as $A^T$ if $A$ is all real (not complex).
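• Numerically, one would not apply the formula for $G^+$ by hand; a sketch using NumPy's built-in Moore-Penrose pseudo-inverse (computed via the SVD):

    import numpy as np

    def solve_weights(G, D):
        """Solve D = G W for the output weights: W = G+ D.

        np.linalg.pinv handles the non-invertible / ill-conditioned
        cases discussed above."""
        return np.linalg.pinv(G) @ D

np.linalg.lstsq(G, D) would be an equally reasonable choice when G is ill-conditioned.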


Effect of Width on Approximation Performance

• The diagrams on the next slide use the same centres, but three different values of $\sigma$ – with the same $\sigma$ for all RBFs in each graph. The top pair of graphs show $\sigma = 0.5$, the middle pair have $\sigma = 2.1$, and the bottom pair have $\sigma = 8.5$.

• Inspection shows us that with the small value of $\sigma$ the approximation is “too wiggly”, for the large value of $\sigma$ the approximation is too smooth, and the middle value is “just right”.

• In practice, one would use an unsupervised algorithm to estimate the best value for $\sigma$, as described above.

• The diagram comes from Karray & de Silva, Figure 5.8 a, b & c:

[Figure 5.8 a, b & c from Karray & de Silva: plots of the RBF approximation with $\sigma$ = 0.5, 2.1, and 8.5 respectively.]

Applications of RBF Networks

• RBF networks have universal approximation capabilities (on compact subsets of $\mathbb{R}^n$), and they have efficient training algorithms.

• They have been used for

– control systems

– audio and video signal processing

– pattern recognition

– chaotic time series prediction (e.g. weather forecasting, power load forecasting).


David Broomhead

Professor of Applied Mathematics at UMIST, he works in applied nonlinear dynamical systems and mathematical biology. Since the early 1980s he has developed methods for time series analysis and nonlinear signal processing using techniques from nonlinear dynamics, including modelling oculomotor control. In 1989 he was awarded the John Benjamin Memorial Prize for his work on radial basis function neural networks.

D.S. Broomhead and D. Lowe. Multivariable functional interpolation and adaptive networks. Complex Systems, 2:321–355, 1988.
