Back-Propagation MLP Neural Network Optimizer
ECE 539, Andrew Beckwith
Back-Propagation MLP Network Optimizer: Purpose, Methods, Features
Purpose
Configuring a neural network and its parameters is often a long, experimental process involving a great deal of guesswork. Let the computer do it for you. Design and implement a program that can test multiple network configurations with minimal setup. Allow the user to modify the data appropriately by enhancing important features and minimizing features with little importance or detrimental qualities.
Methods
Use the back-propagation algorithm with momentum. To test multiple configurations, use a brute-force search and keep track of the most successful configuration (a sketch of this search follows below). The only parameter the user cannot control is the number of neurons per hidden layer: each configuration is tested with 2, 3, 5, and 10 neurons per hidden layer, and a final trial draws a random neuron count between 1 and 10 for each layer.
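As a rough illustration of the brute-force search over hidden-layer sizes, the sketch below tries the four fixed sizes plus one random draw per layer and keeps the best result. The helper train_and_score is hypothetical; it stands in for a full training run that returns a classification rate.

    import random

    def search_hidden_sizes(n_hidden_layers, train_and_score):
        """Brute-force the per-layer neuron count; other parameters stay fixed.

        train_and_score(sizes) is assumed to train a network with the given
        hidden-layer sizes and return its classification rate.
        """
        # The four fixed sizes, applied uniformly to every hidden layer ...
        candidates = [[n] * n_hidden_layers for n in (2, 3, 5, 10)]
        # ... plus one trial with an independent random size per layer.
        candidates.append([random.randint(1, 10) for _ in range(n_hidden_layers)])
        best_rate, best_sizes = -1.0, None
        for sizes in candidates:
            rate = train_and_score(sizes)
            if rate > best_rate:
                best_rate, best_sizes = rate, sizes
        return best_sizes, best_rate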
Use the hyperbolic tangent activation function for hidden neurons and the sigmoid activation function for output neurons; either could be changed in the source code if desired. A sketch of these activations and the momentum update follows.
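For concreteness, here is a minimal sketch of the two activations and of one momentum weight update, assuming the standard formulation in which each update carries a fraction of the previous step; the learning-rate and momentum values shown are placeholders, not the program's defaults.

    import numpy as np

    def tanh(x):
        return np.tanh(x)                 # hidden-neuron activation

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))   # output-neuron activation

    def momentum_step(w, grad, velocity, lr=0.1, mu=0.8):
        """One back-propagation weight update with momentum.

        Keeping a fraction mu of the previous update smooths the
        descent direction from one step to the next.
        """
        velocity = mu * velocity - lr * grad
        return w + velocity, velocity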
Features
Allow the user to open a data file and view the mean and standard deviation of each feature for each class, as an aid to modifying the data (see the statistics sketch after this list).
Allow the user to enter ranges and numbers of trials for parameters such as max epoch, epoch size, learning rate, momentum constant, and number of hidden layers (see the sweep sketch after this list).
Allow the user to set a tolerance for achieving the maximum classification rate.
Allow the user to view the entire network: network configuration, weight values, etc.
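A minimal sketch of the per-class feature statistics, assuming the data file loads into a 2-D NumPy array whose last column is the class label (the actual file format is not stated in the source):

    import numpy as np

    def class_feature_stats(data):
        """Print the mean and standard deviation of each feature, per class.

        data: one row per sample; the last column is assumed to hold the label.
        """
        features, labels = data[:, :-1], data[:, -1]
        for c in np.unique(labels):
            subset = features[labels == c]
            print(f"class {c}: mean={subset.mean(axis=0)}, std={subset.std(axis=0)}")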
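And a sketch of how user-supplied ranges, trial counts, and the tolerance might drive the configuration sweep. Evenly spaced trial values and stopping once the tolerance is reached are assumptions for illustration; train_and_score is again a hypothetical stand-in for a full training run.

    import itertools
    import numpy as np

    def sweep(ranges, trials, train_and_score, tolerance=0.95):
        """Brute-force every combination of the requested parameter values.

        ranges: maps a parameter name to its (low, high) interval.
        trials: maps a parameter name to how many values to try in that interval.
        """
        grids = {k: np.linspace(lo, hi, trials[k]) for k, (lo, hi) in ranges.items()}
        best_rate, best_cfg = -1.0, None
        for values in itertools.product(*grids.values()):
            cfg = dict(zip(grids, values))
            rate = train_and_score(cfg)
            if rate > best_rate:
                best_rate, best_cfg = rate, cfg
            if rate >= tolerance:   # the user-set tolerance ends the search early
                break
        return best_cfg, best_rate

For example, ranges might be {"learning_rate": (0.01, 0.5), "momentum": (0.0, 0.9)} with trials = {"learning_rate": 5, "momentum": 3}.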