Introduction to Neural Networks (undergraduate course), Lecture 3 of 9


Page 1

Neural Networks

Dr. Randa Elanwar

Lecture 3

Page 2

Lecture Content

• Basic models of ANN

– Activation functions

– Interconnections (different NN structures)

– Important notations


Page 3

Basic models of ANN

• Activation function

• Interconnections

• Learning rules

Page 4

Activation function

• Bipolar binary and unipolar binary functions are called hard-limiting activation functions; they are used in the discrete neuron model.

• Unipolar continuous and bipolar continuous functions are called soft-limiting activation functions; they have sigmoidal characteristics.


Page 5

Activation functions


[Figure: plots of the bipolar continuous (sigmoidal) function and the bipolar binary (sign) function]
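For reference, the standard definitions behind these two plots (a sketch: the steepness parameter λ and the argument name net are assumptions, since the slide shows only the curves):

    f(net) = \frac{2}{1 + e^{-\lambda\, net}} - 1
    \qquad \text{(bipolar continuous, sigmoidal)}

    f(net) = \operatorname{sgn}(net) =
    \begin{cases} +1, & net > 0 \\ -1, & net < 0 \end{cases}
    \qquad \text{(bipolar binary, sign)}

As λ grows, the sigmoid steepens and approaches the sign function, which is why the two are grouped together as bipolar.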

Page 6

Activation functions


[Figure: plots of the unipolar continuous (sigmoidal) function and the unipolar binary (step) function]
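And the standard definitions behind these two plots (same assumptions as on the previous page; the step function's value at net = 0 is a matter of convention):

    f(net) = \frac{1}{1 + e^{-\lambda\, net}}
    \qquad \text{(unipolar continuous, sigmoidal)}

    f(net) =
    \begin{cases} 1, & net > 0 \\ 0, & net \le 0 \end{cases}
    \qquad \text{(unipolar binary, step)}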

Page 7

Activation functions

• An output of 1 represents firing of a neuron down the axon.

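A minimal Python sketch of all four activation functions (the function names and the default steepness lam are my own choices, not from the slides):

    import math

    def bipolar_binary(net):
        # Hard-limiting sign function: output is +1 or -1.
        return 1 if net > 0 else -1

    def unipolar_binary(net):
        # Hard-limiting step function: output is 1 or 0.
        return 1 if net > 0 else 0

    def bipolar_continuous(net, lam=1.0):
        # Soft-limiting sigmoid with range (-1, 1); lam sets the steepness.
        return 2.0 / (1.0 + math.exp(-lam * net)) - 1.0

    def unipolar_continuous(net, lam=1.0):
        # Soft-limiting sigmoid with range (0, 1); lam sets the steepness.
        return 1.0 / (1.0 + math.exp(-lam * net))

    # A strongly positive net input drives every function toward its
    # firing value of (close to) 1.
    for f in (bipolar_binary, unipolar_binary,
              bipolar_continuous, unipolar_continuous):
        print(f.__name__, f(3.0))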

Page 8

Basic models of ANN

• Activation function

• Interconnections

• Learning rules

Page 9

Classification based on interconnections

• Feed-forward networks

– Single layer
– Multilayer

• Feedback (recurrent) networks

– Single layer
– Multilayer

Page 10

The Perceptron

• First studied in the late 1950s (Rosenblatt).

• Definition: an arrangement of one input layer (more than 1 unit/node) of McCulloch-Pitts neurons feeding forward to one output layer of McCulloch-Pitts neurons is known as a Perceptron.

• Any number of McCulloch-Pitts neurons can be connected together in this layered way; such arrangements are also known as layered feed-forward networks.

• We can use McCulloch-Pitts neurons to implement the basic logic gates. All we need to do is find the appropriate connection weights and neuron thresholds to produce the right outputs for each set of inputs.
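For instance, one McCulloch-Pitts neuron suffices for AND, OR, or NOT; here is a sketch with one valid choice of weights and thresholds (the particular values are assumptions, not taken from the lecture):

    def mp_neuron(inputs, weights, threshold):
        # McCulloch-Pitts neuron: fire (1) iff the weighted input sum
        # reaches the threshold.
        net = sum(w * x for w, x in zip(weights, inputs))
        return 1 if net >= threshold else 0

    def AND(x1, x2): return mp_neuron((x1, x2), (1, 1), 2)
    def OR(x1, x2):  return mp_neuron((x1, x2), (1, 1), 1)
    def NOT(x):      return mp_neuron((x,), (-1,), 0)

    for x1 in (0, 1):
        for x2 in (0, 1):
            print(x1, x2, "AND:", AND(x1, x2), "OR:", OR(x1, x2))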


Page 11


The Perceptron


Page 12

Single layer Feedforward Network


Page 13

Feedforward Network

• Its output and input vectors are, respectively,

$\mathbf{o} = [o_1, o_2, \ldots, o_m]^T, \qquad \mathbf{x} = [x_1, x_2, \ldots, x_n]^T$

• Weight $w_{ij}$ connects the $i$'th neuron with the $j$'th input. The activation rule of the $i$'th neuron is

$o_i = f(\mathbf{w}_i^T \mathbf{x}), \qquad i = 1, 2, \ldots, m$

where $\mathbf{w}_i = [w_{i1}, w_{i2}, \ldots, w_{in}]^T$ is the vector of weights into neuron $i$.
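A minimal numeric sketch of this activation rule (the weight and input values are invented for illustration):

    import numpy as np

    def sign(net):
        # Bipolar binary activation of the discrete neuron model.
        return np.where(net > 0, 1, -1)

    # n = 3 inputs, m = 2 output neurons; W[i, j] connects input j to neuron i.
    W = np.array([[0.5, -1.0, 0.2],
                  [1.0,  0.3, -0.7]])
    x = np.array([1.0, 0.5, -1.0])

    o = sign(W @ x)   # o_i = f(w_i^T x) for each output neuron i
    print(o)          # -> [-1  1]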


Page 14

Multilayer feed-forward network

With one or more hidden layers between input and output, the network can be used to solve more complicated problems; see the sketch below.
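The classic example of such a problem is XOR, which no single-layer network can represent but a two-layer network can. A hand-wired sketch (the weights and thresholds are chosen by hand, not from the slides):

    def step(net):
        return 1 if net > 0 else 0

    def xor_net(x1, x2):
        # Two-layer feed-forward net computing XOR.
        h1 = step(x1 + x2 - 0.5)    # hidden unit acting as OR
        h2 = step(x1 + x2 - 1.5)    # hidden unit acting as AND
        return step(h1 - h2 - 0.5)  # output: OR and not AND

    for x1 in (0, 1):
        for x2 in (0, 1):
            print(x1, x2, "->", xor_net(x1, x2))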


Page 15

Feedback network


When outputs are directed back as inputs to nodes of the same or a preceding layer, a feedback network is formed.

Page 16

Lateral feedback


If the outputs of the processing elements are directed back as inputs to processing elements in the same layer, this is called lateral feedback.

Page 17

Recurrent networks


• Types:

– Single node with own feedback
– Competitive nets
– Single-layer recurrent networks
– Multilayer recurrent networks

Feedback networks with closed loops are called recurrent networks: the response at instant k+1 depends on the entire history of the network starting at k = 0, as the sketch below illustrates.
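A sketch of the simplest case, a single node with its own feedback (the weights, input, and activation are illustrative assumptions):

    import math

    def sigmoid(net):
        return 1.0 / (1.0 + math.exp(-net))

    w_in, w_fb = 1.0, 0.8   # input weight and self-feedback weight
    x = 0.5                 # constant external input
    o = 0.0                 # initial output at k = 0

    # The output at instant k+1 depends on the previous output, and hence
    # on the entire history of the network since k = 0.
    for k in range(5):
        o = sigmoid(w_in * x + w_fb * o)
        print("k+1 =", k + 1, " o =", round(o, 4))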

Page 18

A Brief History

• 1943 McCulloch and Pitts proposed the McCulloch-Pitts neuron model.

• 1949 Hebb published his book The Organization of Behavior, in which the Hebbian learning rule was proposed.

• 1958 Rosenblatt introduced the simple single-layer networks now called perceptrons.

• 1969 Minsky and Papert's book Perceptrons demonstrated the limitations of single-layer perceptrons, and almost the whole field went into hibernation.

• 1982 Hopfield published a series of papers on Hopfield networks.

• 1982 Kohonen developed the Self-Organizing Maps that now bear his name.

• 1986 The back-propagation learning algorithm for multilayer perceptrons was rediscovered and the whole field took off again.

• 1990s The sub-field of Radial Basis Function Networks was developed.

• 2000s The power of neural network ensembles and Support Vector Machines became apparent.

Page 19

Linearly Separable Functions

• Consider a perceptron with inputs X1, X2, weights W1, W2, and threshold $\theta$.

• Its output is

– 1, if $W_1 X_1 + W_2 X_2 > \theta$
– 0, otherwise

• In terms of feature space:

– hence, it can only classify examples if a line can separate the positive examples from the negative examples; see the sketch below.
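A small sketch makes the geometry concrete, using AND as the linearly separable example (the weights and threshold are hand-picked assumptions):

    def perceptron(x1, x2, w1, w2, theta):
        # Output 1 iff (x1, x2) lies on the positive side of the line
        # w1*x1 + w2*x2 = theta.
        return 1 if w1 * x1 + w2 * x2 > theta else 0

    # The line X1 + X2 = 1.5 separates AND's one positive example (1, 1)
    # from its three negative examples, so AND is linearly separable.
    for x1 in (0, 1):
        for x2 in (0, 1):
            print(x1, x2, "->", perceptron(x1, x2, 1, 1, 1.5))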


Page 20

Learning Linearly Separable Functions

• What can perceptrons learn?

• Bad news:

– There are not many linearly separable functions.

• Good news:

– There is a perceptron algorithm that will learn any linearly separable function, given enough training examples; see the sketch below.
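A sketch of that algorithm, the perceptron learning rule w ← w + η(target − output)·x (the learning rate, epoch count, and the bias-as-extra-input trick are standard choices, not specified in the slides):

    def train_perceptron(samples, eta=0.1, epochs=100):
        # Perceptron learning rule: w <- w + eta * (target - output) * x.
        # Each input is extended with a constant 1 so the threshold is
        # learned as a bias weight.
        w = [0.0, 0.0, 0.0]
        for _ in range(epochs):
            for x1, x2, target in samples:
                x = (x1, x2, 1.0)
                out = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
                w = [wi + eta * (target - out) * xi for wi, xi in zip(w, x)]
        return w

    # AND is linearly separable, so the rule converges to a separating w.
    AND = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
    print(train_perceptron(AND))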


Page 21


Important notations

• One neuron can't do much on its own. Usually we have many neurons, labeled by indices k, i, j, with activation flowing between them via links of strengths $w_{ki}$ and $w_{ij}$.
