  • Title: An improvement of extreme learning machine for compact single-hidden-layer feedforward neural networks.
    Author: Huynh HT, Won Y, Kim JJ.
    Journal: Int J Neural Syst; 2008 Oct; 18(5):433-41. PubMed ID: 18991365.
    Abstract:
    Recently, a novel learning algorithm called the extreme learning machine (ELM) was proposed for efficiently training single-hidden-layer feedforward neural networks (SLFNs). It was much faster than traditional gradient-descent-based learning algorithms because the output weights are determined analytically once the input weights and hidden-layer biases have been chosen at random. However, this algorithm often requires a large number of hidden units and thus responds slowly to new observations. The evolutionary extreme learning machine (E-ELM) was proposed to overcome this problem; it uses the differential evolution algorithm to select the input weights and hidden-layer biases. However, E-ELM required considerable time to search for optimal parameters through its iterative process and was not suitable for data sets with a large number of input features. In this paper, a new approach for training SLFNs is proposed, in which the input weights and biases of the hidden units are determined by a fast regularized least-squares scheme. Experimental results for many real applications, with both small and large numbers of input features, show that our proposed approach can achieve good generalization performance with much more compact networks and extremely high speed for both learning and testing.
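
For context, the basic ELM training step described in the abstract (random input weights and hidden biases, with output weights obtained analytically from a least-squares fit) can be sketched in a few lines of NumPy. This is a minimal illustration of the standard ELM background algorithm, not the regularized scheme the paper proposes; the function names, the sigmoid activation, and the pseudoinverse solver are illustrative assumptions.

    import numpy as np

    def elm_train(X, T, n_hidden, seed=0):
        # X: (n_samples, n_features) inputs; T: (n_samples, n_outputs) targets.
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
        b = rng.standard_normal(n_hidden)                # random hidden biases
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # hidden-layer outputs (sigmoid)
        beta = np.linalg.pinv(H) @ T                     # analytical output weights
        return W, b, beta

    def elm_predict(X, W, b, beta):
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
        return H @ beta

Because the output weights are computed in a single pseudoinverse step rather than by iterative gradient descent, training cost is dominated by one matrix factorization, which is the source of the speed advantage the abstract mentions.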