BPGD-AG: A New Improvement Of Back-Propagation Neural Network Learning Algorithms With Adaptive Gain

Nazri Mohd Nawi, R.S. Ransing, Norhamreeza Abdul Hamid


The back-propagation algorithm is one of the most popular learning algorithms for training feed-forward neural networks. However, its convergence is slow, mainly because the algorithm requires the designer to arbitrarily select parameters such as the network topology, initial weights and biases, learning rate, activation function, gain value of the activation function, and momentum. An improper choice of these parameters can cause the training process to come to a standstill or become stuck in a local minimum. Previous research demonstrated that in the back-propagation algorithm, the slope of the activation function is directly influenced by a parameter referred to as ‘gain’. In this paper, the influence of varying the ‘gain’ on the learning ability of a back-propagation neural network is analysed. Multilayer feed-forward neural networks have been assessed. A physical interpretation of the relationship between the gain value, the learning rate, and the weight values is given. Instead of a constant ‘gain’ value, we propose an algorithm that changes the gain value adaptively for each node. The efficiency of the proposed algorithm is verified by simulation on a function approximation problem using both sequential and batch modes of training. The results show that the proposed algorithm significantly improves the learning speed of the standard back-propagation algorithm.
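To illustrate the role of the gain parameter described above, the following is a minimal sketch of a logistic activation with an explicit gain, plus a gradient-style per-node gain update. The update rule shown is only an illustration derived from the slope of the sigmoid; the paper's actual adaptive gain formula is not given in this abstract, and the function names and learning-rate value are assumptions.

```python
import math

def sigmoid(net, gain=1.0):
    # Logistic activation with a slope ('gain') parameter c:
    #   f(net) = 1 / (1 + exp(-c * net))
    # A larger gain steepens the slope of the activation function.
    return 1.0 / (1.0 + math.exp(-gain * net))

def adaptive_gain_step(net, error, gain, lr=0.1):
    # Illustrative per-node gain update (an assumption, not the paper's
    # exact rule): move the gain along the gradient of the output with
    # respect to c, where df/dc = net * f * (1 - f) for the logistic f.
    f = sigmoid(net, gain)
    return gain + lr * error * net * f * (1.0 - f)
```

For example, `sigmoid(0.0, gain)` is 0.5 for any gain, while at `net = 1.0` a gain of 2.0 yields a larger output than a gain of 1.0, reflecting the steeper slope that the abstract attributes to higher gain values.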


Neural networks; gain; activation function; learning rate; training efficiency.


ISSN : 2229-8460

e-ISSN : 2600-7924

