These tools will no longer be maintained as of December 31, 2024. The archived website can be found here. The PubMed4Hh GitHub repository can be found here. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

144 related articles for article (PubMed ID: 33667166)

  • 21. Stochastic Training of Neural Networks via Successive Convex Approximations.
    Scardapane S; Di Lorenzo P
    IEEE Trans Neural Netw Learn Syst; 2018 Oct; 29(10):4947-4956. PubMed ID: 29994756

  • 22. Hybrid reconstruction method for multispectral bioluminescence tomography with log-sum regularization.
    Yu J; Tang Q; Li Q; Guo H; He X
    J Opt Soc Am A Opt Image Sci Vis; 2020 Jun; 37(6):1060-1066. PubMed ID: 32543609

  • 23. Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models.
    Xie X; Zhou P; Li H; Lin Z; Yan S
    IEEE Trans Pattern Anal Mach Intell; 2024 Jul; PP():. PubMed ID: 38963744

  • 24. Fast curvature matrix-vector products for second-order gradient descent.
    Schraudolph NN
Neural Comput; 2002 Jul; 14(7):1723-1738. PubMed ID: 12079553

  • 25. A Stochastic Quasi-Newton Method for Large-Scale Nonconvex Optimization With Applications.
    Chen H; Wu HC; Chan SC; Lam WH
    IEEE Trans Neural Netw Learn Syst; 2020 Nov; 31(11):4776-4790. PubMed ID: 31902778

  • 26. Robust Stochastic Gradient Descent With Student-t Distribution Based First-Order Momentum.
    Ilboudo WEL; Kobayashi T; Sugimoto K
    IEEE Trans Neural Netw Learn Syst; 2022 Mar; 33(3):1324-1337. PubMed ID: 33326388

  • 27. Learning Rates for Nonconvex Pairwise Learning.
    Li S; Liu Y
    IEEE Trans Pattern Anal Mach Intell; 2023 Aug; 45(8):9996-10011. PubMed ID: 37030773

  • 28. On Consensus-Optimality Trade-offs in Collaborative Deep Learning.
    Jiang Z; Balu A; Hegde C; Sarkar S
    Front Artif Intell; 2021; 4():573731. PubMed ID: 34595470

  • 29. PID Controller-Based Stochastic Optimization Acceleration for Deep Neural Networks.
    Wang H; Luo Y; An W; Sun Q; Xu J; Zhang L
    IEEE Trans Neural Netw Learn Syst; 2020 Dec; 31(12):5079-5091. PubMed ID: 32011265

  • 30. Stochastic momentum methods for non-convex learning without bounded assumptions.
    Liang Y; Liu J; Xu D
    Neural Netw; 2023 Aug; 165():830-845. PubMed ID: 37418864

  • 31. Efficient Implementation of Second-Order Stochastic Approximation Algorithms in High-Dimensional Problems.
    Zhu J; Wang L; Spall JC
    IEEE Trans Neural Netw Learn Syst; 2020 Aug; 31(8):3087-3099. PubMed ID: 31536020

  • 32. ϵ-Approximation of Adaptive Learning Rate Optimization Algorithms for Constrained Nonconvex Stochastic Optimization.
    Iiduka H
    IEEE Trans Neural Netw Learn Syst; 2023 Oct; 34(10):8108-8115. PubMed ID: 35089865

  • 33. Online Learning for DNN Training: A Stochastic Block Adaptive Gradient Algorithm.
    Liu J; Li B; Zhou Y; Zhao X; Zhu J; Zhang M
    Comput Intell Neurosci; 2022; 2022():9337209. PubMed ID: 35694581

  • 34. Stochastic Gradient Descent for Nonconvex Learning Without Bounded Gradient Assumptions.
    Lei Y; Hu T; Li G; Tang K
    IEEE Trans Neural Netw Learn Syst; 2020 Oct; 31(10):4394-4400. PubMed ID: 31831449

  • 35. A multivariate adaptive gradient algorithm with reduced tuning efforts.
    Saab S; Saab K; Phoha S; Zhu M; Ray A
    Neural Netw; 2022 Aug; 152():499-509. PubMed ID: 35640371

  • 36. Novel maximum-margin training algorithms for supervised neural networks.
    Ludwig O; Nunes U
    IEEE Trans Neural Netw; 2010 Jun; 21(6):972-84. PubMed ID: 20409990

  • 37. Value iteration for streaming data on a continuous space with gradient method in an RKHS.
    Liu J; Xu W; Wang Y; Lian H
    Neural Netw; 2023 Sep; 166():437-445. PubMed ID: 37566954

  • 38. Accelerated Stochastic Variance Reduction Gradient Algorithms for Robust Subspace Clustering.
    Liu H; Yang L; Zhang L; Shang F; Liu Y; Wang L
    Sensors (Basel); 2024 Jun; 24(11):. PubMed ID: 38894450

  • 39. Stochastic Recursive Gradient Support Pursuit and Its Sparse Representation Applications.
    Shang F; Wei B; Liu Y; Liu H; Wang S; Jiao L
    Sensors (Basel); 2020 Aug; 20(17):. PubMed ID: 32872609

  • 40. Variance Reduced Methods for Non-Convex Composition Optimization.
    Liu L; Liu J; Tao D
    IEEE Trans Pattern Anal Mach Intell; 2022 Sep; 44(9):5813-5825. PubMed ID: 33826512
