These tools will no longer be maintained as of December 31, 2024. An archived website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

139 related articles for article (PubMed ID: 34293509)

  • 1. A distributed optimisation framework combining natural gradient with Hessian-free for discriminative sequence training.
    Haider A; Zhang C; Kreyssig FL; Woodland PC
    Neural Netw; 2021 Nov; 143():537-549. PubMed ID: 34293509

  • 2. Speed and convergence properties of gradient algorithms for optimization of IMRT.
    Zhang X; Liu H; Wang X; Dong L; Wu Q; Mohan R
    Med Phys; 2004 May; 31(5):1141-52. PubMed ID: 15191303

  • 3. Study of a fast discriminative training algorithm for pattern recognition.
    Li Q; Juang BH
    IEEE Trans Neural Netw; 2006 Sep; 17(5):1212-21. PubMed ID: 17001982

  • 4. Variable three-term conjugate gradient method for training artificial neural networks.
    Kim H; Wang C; Byun H; Hu W; Kim S; Jiao Q; Lee TH
    Neural Netw; 2023 Feb; 159():125-136. PubMed ID: 36565690

  • 5. The general inefficiency of batch training for gradient descent learning.
    Wilson DR; Martinez TR
    Neural Netw; 2003 Dec; 16(10):1429-51. PubMed ID: 14622875

  • 6. A generalized LSTM-like training algorithm for second-order recurrent neural networks.
    Monner D; Reggia JA
    Neural Netw; 2012 Jan; 25(1):70-83. PubMed ID: 21803542

  • 7. A hybrid neural learning algorithm using evolutionary learning and derivative free local search method.
    Ghosh R; Yearwood J; Ghosh M; Bagirov A
    Int J Neural Syst; 2006 Jun; 16(3):201-13. PubMed ID: 17044241

  • 8. Robust combination of neural networks and hidden Markov models for speech recognition.
    Trentin E; Gori M
    IEEE Trans Neural Netw; 2003; 14(6):1519-31. PubMed ID: 18244596

  • 9. Variational quantum classifiers through the lens of the Hessian.
    Sen P; Bhatia AS; Bhangu KS; Elbeltagi A
    PLoS One; 2022; 17(1):e0262346. PubMed ID: 35051206

  • 10. Fully complex conjugate gradient-based neural networks using Wirtinger calculus framework: Deterministic convergence and its application.
    Zhang B; Liu Y; Cao J; Wu S; Wang J
    Neural Netw; 2019 Jul; 115():50-64. PubMed ID: 30974301

  • 11. Evolutionary cross-domain discriminative Hessian Eigenmaps.
    Si S; Tao D; Chan KP
    IEEE Trans Image Process; 2010 Apr; 19(4):1075-86. PubMed ID: 19887315

  • 12. Novel maximum-margin training algorithms for supervised neural networks.
    Ludwig O; Nunes U
    IEEE Trans Neural Netw; 2010 Jun; 21(6):972-84. PubMed ID: 20409990

  • 13. Optimization in Quaternion Dynamic Systems: Gradient, Hessian, and Learning Algorithms.
    Xu D; Xia Y; Mandic DP
    IEEE Trans Neural Netw Learn Syst; 2016 Feb; 27(2):249-61. PubMed ID: 26087504

  • 14. Critical Point-Finding Methods Reveal Gradient-Flat Regions of Deep Network Losses.
    Frye CG; Simon J; Wadia NS; Ligeralde A; DeWeese MR; Bouchard KE
    Neural Comput; 2021 May; 33(6):1469-1497. PubMed ID: 34496389

  • 15. Parameter inference for discretely observed stochastic kinetic models using stochastic gradient descent.
    Wang Y; Christley S; Mjolsness E; Xie X
    BMC Syst Biol; 2010 Jul; 4():99. PubMed ID: 20663171

  • 16. Accelerating deep neural network training with inconsistent stochastic gradient descent.
    Wang L; Yang Y; Min R; Chakradhar S
    Neural Netw; 2017 Sep; 93():219-229. PubMed ID: 28668660

  • 17. A fast kernel extreme learning machine based on conjugate gradient.
    He C; Xu F; Liu Y; Zheng J
    Network; 2018; 29(1-4):70-80. PubMed ID: 30688136

  • 18. Mutual Information Based Learning Rate Decay for Stochastic Gradient Descent Training of Deep Neural Networks.
    Vasudevan S
    Entropy (Basel); 2020 May; 22(5):. PubMed ID: 33286332

  • 19. A fast and scalable recurrent neural network based on stochastic meta descent.
    Liu Z; Elhanany I
    IEEE Trans Neural Netw; 2008 Sep; 19(9):1652-8. PubMed ID: 18779096

  • 20. Backpropagation Neural Tree.
    Ojha V; Nicosia G
    Neural Netw; 2022 May; 149():66-83. PubMed ID: 35193079
