

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

145 related articles for article (PubMed ID: 8823625)

  • 1. Constructive training methods for feedforward neural networks with binary weights.
    Mayoraz E; Aviolat F
    Int J Neural Syst; 1996 May; 7(2):149-66. PubMed ID: 8823625

  • 2. Hardware prototypes of a Boolean neural network and the simulated annealing optimization method.
    Niittylahti J
    Int J Neural Syst; 1996 Mar; 7(1):45-52. PubMed ID: 8828049

  • 3. The target switch algorithm: a constructive learning procedure for feed-forward neural networks.
    Campbell C; Vicente CP
    Neural Comput; 1995 Nov; 7(6):1245-64. PubMed ID: 7584901

  • 4. New training strategies for constructive neural networks with application to regression problems.
    Ma L; Khorasani K
    Neural Netw; 2004 May; 17(4):589-609. PubMed ID: 15109686

  • 5. A learning rule for very simple universal approximators consisting of a single layer of perceptrons.
    Auer P; Burgsteiner H; Maass W
    Neural Netw; 2008 Jun; 21(5):786-95. PubMed ID: 18249524

  • 6. Two constructive methods for designing compact feedforward networks of threshold units.
    Amaldi E; Guenin B
    Int J Neural Syst; 1997; 8(5-6):629-45. PubMed ID: 10065840

  • 7. Constructive approximation to multivariate function by decay RBF neural network.
    Hou M; Han X
    IEEE Trans Neural Netw; 2010 Sep; 21(9):1517-23. PubMed ID: 20693108

  • 8. Nonlinear time series analysis by neural networks: a case study.
    Saxén H
    Int J Neural Syst; 1996 May; 7(2):195-201. PubMed ID: 8823629

  • 9. Training pi-sigma network by online gradient algorithm with penalty for small weight update.
    Xiong Y; Wu W; Kang X; Zhang C
    Neural Comput; 2007 Dec; 19(12):3356-68. PubMed ID: 17970657

  • 10. Generalization and selection of examples in feedforward neural networks.
    Franco L; Cannas SA
    Neural Comput; 2000 Oct; 12(10):2405-26. PubMed ID: 11032040

  • 11. A penalty-function approach for pruning feedforward neural networks.
    Setiono R
    Neural Comput; 1997 Jan; 9(1):185-204. PubMed ID: 9117898

  • 12. Boundedness and convergence of online gradient method with penalty for feedforward neural networks.
    Zhang H; Wu W; Liu F; Yao M
    IEEE Trans Neural Netw; 2009 Jun; 20(6):1050-4. PubMed ID: 19435681

  • 13. A new constructive algorithm for architectural and functional adaptation of artificial neural networks.
    Islam MM; Sattar MA; Amin MF; Yao X; Murase K
    IEEE Trans Syst Man Cybern B Cybern; 2009 Dec; 39(6):1590-605. PubMed ID: 19502131

  • 14. Universal perceptron and DNA-like learning algorithm for binary neural networks: LSBF and PBF implementations.
    Chen F; Chen GR; He G; Xu X; He Q
    IEEE Trans Neural Netw; 2009 Oct; 20(10):1645-58. PubMed ID: 23460987

  • 15. Sensitivity-based adaptive learning rules for binary feedforward neural networks.
    Zhong S; Zeng X; Wu S; Han L
    IEEE Trans Neural Netw Learn Syst; 2012 Mar; 23(3):480-91. PubMed ID: 24808553

  • 16. A new formulation for feedforward neural networks.
    Razavi S; Tolson BA
    IEEE Trans Neural Netw; 2011 Oct; 22(10):1588-98. PubMed ID: 21859600

  • 17. Role of Synaptic Stochasticity in Training Low-Precision Neural Networks.
    Baldassi C; Gerace F; Kappen HJ; Lucibello C; Saglietti L; Tartaglione E; Zecchina R
    Phys Rev Lett; 2018 Jun; 120(26):268103. PubMed ID: 30004730

  • 18. Batch gradient method with smoothing L1/2 regularization for training of feedforward neural networks.
    Wu W; Fan Q; Zurada JM; Wang J; Yang D; Liu Y
    Neural Netw; 2014 Feb; 50():72-8. PubMed ID: 24291693

  • 19. On sequential construction of binary neural networks.
    Muselli M
    IEEE Trans Neural Netw; 1995; 6(3):678-90. PubMed ID: 18263353

  • 20. A selective learning method to improve the generalization of multilayer feedforward neural networks.
    Galván IM; Isasi P; Aler R; Valls JM
    Int J Neural Syst; 2001 Apr; 11(2):167-77. PubMed ID: 14632169
