

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

217 related articles for article (PubMed ID: 18255774)

  • 1. Synthesis of fault-tolerant feedforward neural networks using minimax optimization.
    Deodhare D; Vidyasagar M; Sathiya Keerthi S
    IEEE Trans Neural Netw; 1998; 9(5):891-900. PubMed ID: 18255774

  • 2. Maximally fault tolerant neural networks.
    Neti C; Schneider MH; Young ED
    IEEE Trans Neural Netw; 1992; 3(1):14-23. PubMed ID: 18276402

  • 3. ADMM-Based Algorithm for Training Fault Tolerant RBF Networks and Selecting Centers.
    Wang H; Feng R; Han ZF; Leung CS
    IEEE Trans Neural Netw Learn Syst; 2018 Aug; 29(8):3870-3878. PubMed ID: 28816680

  • 4. The superior fault tolerance of artificial neural network training with a fault/noise injection-based genetic algorithm.
    Su F; Yuan P; Wang Y; Zhang C
    Protein Cell; 2016 Oct; 7(10):735-748. PubMed ID: 27502185

  • 5. A Projection Neural Network for Constrained Quadratic Minimax Optimization.
    Liu Q; Wang J
    IEEE Trans Neural Netw Learn Syst; 2015 Nov; 26(11):2891-900. PubMed ID: 25966485

  • 6. On-line learning with minimal degradation in feedforward networks.
    de Angulo VR; Torras C
    IEEE Trans Neural Netw; 1995; 6(3):657-68. PubMed ID: 18263351

  • 7. Investigating the fault tolerance of neural networks.
    Tchernev EB; Mulvaney RG; Phatak DS
    Neural Comput; 2005 Jul; 17(7):1646-64. PubMed ID: 15901410

  • 8. Two highly efficient second-order algorithms for training feedforward networks.
    Ampazis N; Perantonis SJ
    IEEE Trans Neural Netw; 2002; 13(5):1064-74. PubMed ID: 18244504

  • 9. Performance and fault-tolerance of neural networks for optimization.
    Protzel PW; Palumbo DL; Arras MK
    IEEE Trans Neural Netw; 1993; 4(4):600-14. PubMed ID: 18267761

  • 10. A local linearized least squares algorithm for training feedforward neural networks.
    Stan O; Kamen E
    IEEE Trans Neural Netw; 2000; 11(2):487-95. PubMed ID: 18249777

  • 11. On solving constrained optimization problems with neural networks: a penalty method approach.
    Lillo WE; Loh MH; Hui S; Zak SH
    IEEE Trans Neural Netw; 1993; 4(6):931-40. PubMed ID: 18276523

  • 12. Improving convergence and solution quality of Hopfield-type neural networks with augmented Lagrange multipliers.
    Li SZ
    IEEE Trans Neural Netw; 1996; 7(6):1507-16. PubMed ID: 18263545

  • 13. Sensor Fault and Delay Tolerant Control for Networked Control Systems Subject to External Disturbances.
    Han SY; Chen YH; Tang GY
    Sensors (Basel); 2017 Mar; 17(4):. PubMed ID: 28350336

  • 14. Gradient Descent Ascent for Minimax Problems on Riemannian Manifolds.
    Huang F; Gao S
    IEEE Trans Pattern Anal Mach Intell; 2023 Jul; 45(7):8466-8476. PubMed ID: 37018266

  • 15. Extended least squares based algorithm for training feedforward networks.
    Yam JF; Chow TS
    IEEE Trans Neural Netw; 1997; 8(3):806-10. PubMed ID: 18255683

  • 16. A Dual-Dimer method for training physics-constrained neural networks with minimax architecture.
    Liu D; Wang Y
    Neural Netw; 2021 Apr; 136():112-125. PubMed ID: 33476947

  • 17. An iterative pruning algorithm for feedforward neural networks.
    Castellano G; Fanelli AM; Pelillo M
    IEEE Trans Neural Netw; 1997; 8(3):519-31. PubMed ID: 18255656

  • 18. Two-timescale recurrent neural networks for distributed minimax optimization.
    Xia Z; Liu Y; Wang J; Wang J
    Neural Netw; 2023 Aug; 165():527-539. PubMed ID: 37348433

  • 19. The layer-wise method and the backpropagation hybrid approach to learning a feedforward neural network.
    Rubanov NS
    IEEE Trans Neural Netw; 2000; 11(2):295-305. PubMed ID: 18249761

  • 20. Distributed fault tolerance in optimal interpolative nets.
    Simon D
    IEEE Trans Neural Netw; 2001; 12(6):1348-57. PubMed ID: 18249964
