129 related articles for article (PubMed ID: 35342226)

  • 1. Calibrating the Adaptive Learning Rate to Improve Convergence of ADAM.
    Tong Q; Liang G; Bi J
    Neurocomputing (Amst); 2022 Apr; 481():333-356. PubMed ID: 35342226

  • 2. Stochastic momentum methods for non-convex learning without bounded assumptions.
    Liang Y; Liu J; Xu D
    Neural Netw; 2023 Aug; 165():830-845. PubMed ID: 37418864

  • 3. A Unified Analysis of AdaGrad With Weighted Aggregation and Momentum Acceleration.
    Shen L; Chen C; Zou F; Jie Z; Sun J; Liu W
    IEEE Trans Neural Netw Learn Syst; 2023 Jun; PP():. PubMed ID: 37310828

  • 4. Convergence analysis of AdaBound with relaxed bound functions for non-convex optimization.
    Liu J; Kong J; Xu D; Qi M; Lu Y
    Neural Netw; 2022 Jan; 145():300-307. PubMed ID: 34785445

  • 5. Painless Stochastic Conjugate Gradient for Large-Scale Machine Learning.
    Yang Z
    IEEE Trans Neural Netw Learn Syst; 2023 Jun; PP():. PubMed ID: 37285250

  • 6. AdaSAM: Boosting sharpness-aware minimization with adaptive learning rate and momentum for training deep neural networks.
    Sun H; Shen L; Zhong Q; Ding L; Chen S; Sun J; Li J; Sun G; Tao D
    Neural Netw; 2024 Jan; 169():506-519. PubMed ID: 37944247

  • 7. Towards Understanding Convergence and Generalization of AdamW.
    Zhou P; Xie X; Lin Z; Yan S
    IEEE Trans Pattern Anal Mach Intell; 2024 Mar; PP():. PubMed ID: 38536692

  • 8. Learning Rates for Nonconvex Pairwise Learning.
    Li S; Liu Y
    IEEE Trans Pattern Anal Mach Intell; 2023 Aug; 45(8):9996-10011. PubMed ID: 37030773

  • 9. Appropriate Learning Rates of Adaptive Learning Rate Optimization Algorithms for Training Deep Neural Networks.
    Iiduka H
    IEEE Trans Cybern; 2022 Dec; 52(12):13250-13261. PubMed ID: 34495862

  • 10. The WuC-Adam algorithm based on joint improvement of Warmup and cosine annealing algorithms.
    Zhang C; Shao Y; Sun H; Xing L; Zhao Q; Zhang L
    Math Biosci Eng; 2024 Jan; 21(1):1270-1285. PubMed ID: 38303464

  • 11. Stochastic learning via optimizing the variational inequalities.
    Tao Q; Gao QK; Chu DJ; Wu GW
    IEEE Trans Neural Netw Learn Syst; 2014 Oct; 25(10):1769-78. PubMed ID: 25291732

  • 12. A multivariate adaptive gradient algorithm with reduced tuning efforts.
    Saab S; Saab K; Phoha S; Zhu M; Ray A
    Neural Netw; 2022 Aug; 152():499-509. PubMed ID: 35640371

  • 13. Communication-Efficient Nonconvex Federated Learning With Error Feedback for Uplink and Downlink.
    Zhou X; Chang L; Cao J
    IEEE Trans Neural Netw Learn Syst; 2023 Nov; PP():. PubMed ID: 37995164

  • 14. A novel adaptive cubic quasi-Newton optimizer for deep learning based medical image analysis tasks, validated on detection of COVID-19 and segmentation for COVID-19 lung infection, liver tumor, and optic disc/cup.
    Liu Y; Zhang M; Zhong Z; Zeng X
    Med Phys; 2023 Mar; 50(3):1528-1538. PubMed ID: 36057788

  • 15. Convergence of the RMSProp deep learning method with penalty for nonconvex optimization.
    Xu D; Zhang S; Zhang H; Mandic DP
    Neural Netw; 2021 Jul; 139():17-23. PubMed ID: 33662649

  • 16. Mutual Information Based Learning Rate Decay for Stochastic Gradient Descent Training of Deep Neural Networks.
    Vasudevan S
    Entropy (Basel); 2020 May; 22(5):. PubMed ID: 33286332

  • 17. Stochastic Gradient Descent for Nonconvex Learning Without Bounded Gradient Assumptions.
    Lei Y; Hu T; Li G; Tang K
    IEEE Trans Neural Netw Learn Syst; 2020 Oct; 31(10):4394-4400. PubMed ID: 31831449

  • 18. On Consensus-Optimality Trade-offs in Collaborative Deep Learning.
    Jiang Z; Balu A; Hegde C; Sarkar S
    Front Artif Intell; 2021; 4():573731. PubMed ID: 34595470

  • 19. FastAdaBelief: Improving Convergence Rate for Belief-Based Adaptive Optimizers by Exploiting Strong Convexity.
    Zhou Y; Huang K; Cheng C; Wang X; Hussain A; Liu X
    IEEE Trans Neural Netw Learn Syst; 2023 Sep; 34(9):6515-6529. PubMed ID: 35271450

  • 20. Robust Stochastic Gradient Descent With Student-t Distribution Based First-Order Momentum.
    Ilboudo WEL; Kobayashi T; Sugimoto K
    IEEE Trans Neural Netw Learn Syst; 2022 Mar; 33(3):1324-1337. PubMed ID: 33326388

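For quick reference, most of the articles listed above analyze variants of the standard Adam update (Kingma and Ba, 2015). The following minimal Python sketch shows that baseline step only; the hyperparameter names (lr, beta1, beta2, eps) are the usual defaults, and it does not reproduce the specific calibration or variant proposed in any of the cited papers.

    # Minimal illustrative sketch of one standard Adam step (not any paper's variant).
    import numpy as np

    def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        """One Adam update of parameters `theta` given gradient `grad` at step t >= 1."""
        m = beta1 * m + (1 - beta1) * grad        # first-moment (momentum) estimate
        v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment (adaptive-rate) estimate
        m_hat = m / (1 - beta1 ** t)              # bias correction for early steps
        v_hat = v / (1 - beta2 ** t)
        theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
        return theta, m, v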