
BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

144 related articles for article (PubMed ID: 18249806)

  • 1. Classification ability of single hidden layer feedforward neural networks.
    Huang GB; Chen YQ; Babri HA
    IEEE Trans Neural Netw; 2000; 11(3):799-801. PubMed ID: 18249806

  • 2. Upper bounds on the number of hidden neurons in feedforward networks with arbitrary bounded nonlinear activation functions.
    Huang GB; Babri HA
    IEEE Trans Neural Netw; 1998; 9(1):224-9. PubMed ID: 18252445

  • 3. A learning rule for very simple universal approximators consisting of a single layer of perceptrons.
    Auer P; Burgsteiner H; Maass W
    Neural Netw; 2008 Jun; 21(5):786-95. PubMed ID: 18249524

  • 4. A constructive algorithm to solve "convex recursive deletion" (CoRD) classification problems via two-layer perceptron networks.
    Cabrelli C; Molter U; Shonkwiler R
    IEEE Trans Neural Netw; 2000; 11(3):811-6. PubMed ID: 18249809

  • 5. A constructive method for multivariate function approximation by multilayer perceptrons.
    Geva S; Sitte J
    IEEE Trans Neural Netw; 1992; 3(4):621-4. PubMed ID: 18276462

  • 6. Specification of training sets and the number of hidden neurons for multilayer perceptrons.
    Camargo LS; Yoneyama T
    Neural Comput; 2001 Dec; 13(12):2673-80. PubMed ID: 11705406

  • 7. A Single Hidden Layer Feedforward Network with Only One Neuron in the Hidden Layer Can Approximate Any Univariate Function.
    Guliyev NJ; Ismailov VE
    Neural Comput; 2016 Jul; 28(7):1289-304. PubMed ID: 27171269

  • 8. Methods of training and constructing multilayer perceptrons with arbitrary pattern sets.
    Liang X; Xia S
    Int J Neural Syst; 1995 Sep; 6(3):233-47. PubMed ID: 8589861

  • 9. Universal approximation using incremental constructive feedforward networks with random hidden nodes.
    Huang GB; Chen L; Siew CK
    IEEE Trans Neural Netw; 2006 Jul; 17(4):879-92. PubMed ID: 16856652

  • 10. Multilayer neural networks and Bayes decision theory.
    Funahashi K
    Neural Netw; 1998 Mar; 11(2):209-13. PubMed ID: 12662832

  • 11. On the approximation by single hidden layer feedforward neural networks with fixed weights.
    Guliyev NJ; Ismailov VE
    Neural Netw; 2018 Feb; 98:296-304. PubMed ID: 29301110

  • 12. A two-layer paradigm capable of forming arbitrary decision regions in input space.
    Deolalikar V
    IEEE Trans Neural Netw; 2002; 13(1):15-21. PubMed ID: 18244405

  • 13. Bounds on the number of hidden neurons in three-layer binary neural networks.
    Zhang Z; Ma X; Yang Y
    Neural Netw; 2003 Sep; 16(7):995-1002. PubMed ID: 14692634

  • 14. Extreme Learning Machine for Multilayer Perceptron.
    Tang J; Deng C; Huang GB
    IEEE Trans Neural Netw Learn Syst; 2016 Apr; 27(4):809-21. PubMed ID: 25966483

  • 15. Deterministic neural classification.
    Toh KA
    Neural Comput; 2008 Jun; 20(6):1565-95. PubMed ID: 18194103

  • 16. An analytical framework for local feedforward networks.
    Weaver S; Baird L; Polycarpou M
    IEEE Trans Neural Netw; 1998; 9(3):473-82. PubMed ID: 18252471

  • 17. A generalized feedforward neural network architecture for classification and regression.
    Arulampalam G; Bouzerdoum A
    Neural Netw; 2003; 16(5-6):561-8. PubMed ID: 12850008

  • 18. Generalization and capacity of extensively large two-layered perceptrons.
    Rosen-Zvi M; Engel A; Kanter I
    Phys Rev E Stat Nonlin Soft Matter Phys; 2002 Sep; 66(3 Pt 2A):036138. PubMed ID: 12366215

  • 19. Local coupled feedforward neural network.
    Sun J
    Neural Netw; 2010 Jan; 23(1):108-13. PubMed ID: 19596550

  • 20. High-order and multilayer perceptron initialization.
    Thimm G; Fiesler E
    IEEE Trans Neural Netw; 1997; 8(2):349-59. PubMed ID: 18255638
