

PUBMED FOR HANDHELDS



  • Title: On the weight convergence of Elman networks.
    Author: Song Q.
    Journal: IEEE Trans Neural Netw; 2010 Mar; 21(3):463-80. PubMed ID: 20129857.
    Abstract:
    An Elman network (EN) can be viewed as a feedforward (FF) neural network with an additional set of inputs from the context layer (feedback from the hidden layer). Therefore, instead of the offline backpropagation-through-time (BPTT) algorithm, a standard online (real-time) backpropagation (BP) algorithm, usually called Elman BP (EBP), can be applied to train ENs for discrete-time sequence prediction. However, the standard BP training algorithm is not the most suitable for ENs: a low learning rate can improve training but also results in very slow convergence and poor generalization performance, whereas a high learning rate can make training unstable in the sense of weight divergence. An optimal or suboptimal tradeoff between training speed and weight convergence, with good generalization capability, is therefore desired for ENs. This paper develops a robust extended EBP (eEBP) training algorithm for ENs with a new adaptive dead zone scheme. The adaptive learning rate and adaptive dead zone optimize the training of ENs for each individual output and improve the generalization performance of eEBP training. In particular, for the proposed eEBP training algorithm, convergence of the ENs' weights under the adaptive dead zone estimates is proven in the sense of Lyapunov functions. Computer simulations are carried out to demonstrate the improved performance of eEBP for discrete-time sequence predictions.
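A minimal illustrative sketch of the architecture and training scheme the abstract describes, assuming tanh hidden units, a linear output layer, and a fixed dead-zone threshold. The paper's actual eEBP update rules, adaptive learning rate, and adaptive dead zone estimates are not given in the abstract, so the threshold and learning-rate choices below are placeholders, not the published algorithm.

    import numpy as np

    class ElmanNetwork:
        """Feedforward network plus a context layer that feeds back
        the previous hidden activation (the EN structure above)."""
        def __init__(self, n_in, n_hidden, n_out, seed=0):
            rng = np.random.default_rng(seed)
            self.W_xh = rng.normal(0.0, 0.1, (n_hidden, n_in))      # input -> hidden
            self.W_ch = rng.normal(0.0, 0.1, (n_hidden, n_hidden))  # context -> hidden
            self.W_hy = rng.normal(0.0, 0.1, (n_out, n_hidden))     # hidden -> output
            self.context = np.zeros(n_hidden)

        def forward(self, x):
            # The hidden layer sees the current input plus the context
            # layer, i.e., a copy of the previous hidden activation.
            self.h = np.tanh(self.W_xh @ x + self.W_ch @ self.context)
            self.context = self.h.copy()
            return self.W_hy @ self.h

    def train_ebp(net, inputs, targets, lr=0.01, dead_zone=1e-3):
        """Online EBP: one gradient step per time step, treating the
        context as an ordinary fixed input (no BPTT). Updates are
        suppressed per output when the error is inside the dead zone."""
        for x, d in zip(inputs, targets):
            h_prev = net.context.copy()
            y = net.forward(x)
            e = d - y
            # Dead zone (fixed here; adaptive in the paper): zero out
            # the error for outputs that are already accurate enough.
            e = np.where(np.abs(e) > dead_zone, e, 0.0)
            # Backprop through the linear output layer ...
            delta_h = (net.W_hy.T @ e) * (1.0 - net.h ** 2)
            net.W_hy += lr * np.outer(e, net.h)
            # ... and into the input and context weights.
            net.W_xh += lr * np.outer(delta_h, x)
            net.W_ch += lr * np.outer(delta_h, h_prev)

    # Example usage on a toy scalar sequence-prediction task:
    net = ElmanNetwork(n_in=1, n_hidden=8, n_out=1)
    xs = [np.array([np.sin(0.1 * t)]) for t in range(200)]
    ds = xs[1:] + [xs[0]]  # target: the next value in the sequence
    train_ebp(net, xs, ds)

The dead zone suppresses weight updates for outputs whose error is already small, which is one way to trade training speed against weight convergence as the abstract discusses; the paper's adaptive scheme replaces the fixed threshold used in this sketch.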