


118 related articles for article (PubMed ID: 32989377)

  • 1. Robust Asynchronous Stochastic Gradient-Push: Asymptotically Optimal and Network-Independent Performance for Strongly Convex Functions.
    Spiridonoff A; Olshevsky A; Paschalidis IC
    J Mach Learn Res; 2020; 21():. PubMed ID: 32989377
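The seed article (entry 1) analyzes an asynchronous variant of stochastic gradient-push. As background, a minimal *synchronous* sketch of gradient-push (push-sum averaging combined with local SGD steps) over a directed ring is given below; the node count, local objectives, step size, and noise level are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Synchronous stochastic gradient-push over a directed ring of n nodes.
# Node i holds f_i(x) = 0.5 * (x - a_i)^2, so the global minimizer of
# sum_i f_i is mean(a). W is column-stochastic (columns sum to 1), as
# push-sum requires on directed graphs.

rng = np.random.default_rng(0)
n, T = 5, 2000
a = np.arange(n, dtype=float)            # local optima; global optimum = a.mean() = 2.0

# Directed ring: node i keeps half its mass, pushes half to node (i+1) mod n.
W = 0.5 * np.eye(n)
for i in range(n):
    W[(i + 1) % n, i] += 0.5             # column-stochastic mixing matrix

x = np.zeros(n)                          # push-sum numerators
y = np.ones(n)                           # push-sum weights (denominators)

for t in range(1, T + 1):
    z = x / y                            # de-biased local estimates
    grad = (z - a) + 0.1 * rng.standard_normal(n)  # noisy local gradients
    step = 1.0 / t                       # diminishing step size
    x = W @ (x - step * grad)            # gradient step, then push numerators
    y = W @ y                            # push weights

z = x / y                                # all nodes approach mean(a)
```

The `y` variables correct the bias that column-stochastic (rather than doubly stochastic) mixing introduces, which is what lets push-sum run over directed, weight-unbalanced graphs; the asynchronous analysis in the paper removes the synchronized-rounds assumption used here.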

  • 2. A Sharp Estimate on the Transient Time of Distributed Stochastic Gradient Descent.
    Pu S; Olshevsky A; Paschalidis IC
    IEEE Trans Automat Contr; 2022 Nov; 67(11):5900-5915. PubMed ID: 37284602

  • 3. Push-Sum Distributed Online Optimization With Bandit Feedback.
    Wang C; Xu S; Yuan D; Zhang B; Zhang Z
    IEEE Trans Cybern; 2022 Apr; 52(4):2263-2273. PubMed ID: 32609617

  • 4. Hybrid-DCA: A double asynchronous approach for stochastic dual coordinate ascent.
    Pal S; Xu T; Yang T; Rajasekaran S; Bi J
    J Parallel Distrib Comput; 2020 Sep; 143():47-66. PubMed ID: 32699464

  • 5. Distributed Stochastic Constrained Composite Optimization Over Time-Varying Network With a Class of Communication Noise.
    Yu Z; Ho DWC; Yuan D; Liu J
    IEEE Trans Cybern; 2023 Jun; 53(6):3561-3573. PubMed ID: 34818207

  • 6. Distributed Nesterov Gradient and Heavy-Ball Double Accelerated Asynchronous Optimization.
    Li H; Cheng H; Wang Z; Wu GC
    IEEE Trans Neural Netw Learn Syst; 2021 Dec; 32(12):5723-5737. PubMed ID: 33048761

  • 7. Dualityfree Methods for Stochastic Composition Optimization.
    Liu L; Liu J; Tao D
    IEEE Trans Neural Netw Learn Syst; 2019 Apr; 30(4):1205-1217. PubMed ID: 30222587

  • 8. Distributed Optimization for Two Types of Heterogeneous Multiagent Systems.
    Sun C; Ye M; Hu G
    IEEE Trans Neural Netw Learn Syst; 2021 Mar; 32(3):1314-1324. PubMed ID: 32310791

  • 9. Stochastic Strongly Convex Optimization via Distributed Epoch Stochastic Gradient Algorithm.
    Yuan D; Ho DWC; Xu S
    IEEE Trans Neural Netw Learn Syst; 2021 Jun; 32(6):2344-2357. PubMed ID: 32614775

  • 10. Asymptotic Network Independence in Distributed Stochastic Optimization for Machine Learning.
    Pu S; Olshevsky A; Paschalidis IC
    IEEE Signal Process Mag; 2020 May; 37(3):114-122. PubMed ID: 33746471

  • 11. Privacy Masking Stochastic Subgradient-Push Algorithm for Distributed Online Optimization.
    Lu Q; Liao X; Xiang T; Li H; Huang T
    IEEE Trans Cybern; 2021 Jun; 51(6):3224-3237. PubMed ID: 32149669

  • 12. Stochastic Gradient Descent for Nonconvex Learning Without Bounded Gradient Assumptions.
    Lei Y; Hu T; Li G; Tang K
    IEEE Trans Neural Netw Learn Syst; 2020 Oct; 31(10):4394-4400. PubMed ID: 31831449

  • 13. Distributed Stochastic Gradient Tracking Algorithm With Variance Reduction for Non-Convex Optimization.
    Jiang X; Zeng X; Sun J; Chen J
    IEEE Trans Neural Netw Learn Syst; 2023 Sep; 34(9):5310-5321. PubMed ID: 35536804

  • 14. Sign Stochastic Gradient Descents without bounded gradient assumption for the finite sum minimization.
    Sun T; Li D
    Neural Netw; 2022 May; 149():195-203. PubMed ID: 35248809

  • 15. Training Neural Networks by Lifted Proximal Operator Machines.
    Li J; Xiao M; Fang C; Dai Y; Xu C; Lin Z
    IEEE Trans Pattern Anal Mach Intell; 2022 Jun; 44(6):3334-3348. PubMed ID: 33382647

  • 16. Preconditioned Stochastic Gradient Descent.
    Li XL
    IEEE Trans Neural Netw Learn Syst; 2018 May; 29(5):1454-1466. PubMed ID: 28362591

  • 17. Variable Smoothing for Convex Optimization Problems Using Stochastic Gradients.
    Boţ RI; Böhm A
    J Sci Comput; 2020; 85(2):33. PubMed ID: 33122873

  • 18. Distributed Online Constrained Optimization With Feedback Delays.
    Wang C; Xu S
    IEEE Trans Neural Netw Learn Syst; 2024 Feb; 35(2):1708-1720. PubMed ID: 35830400

  • 19. Distributed Randomized Gradient-Free Optimization Protocol of Multiagent Systems Over Weight-Unbalanced Digraphs.
    Wang D; Yin J; Wang W
    IEEE Trans Cybern; 2021 Jan; 51(1):473-482. PubMed ID: 30640644

  • 20. Optimization and Learning With Randomly Compressed Gradient Updates.
    Huang Z; Lei Y; Kabán A
    Neural Comput; 2023 Jun; 35(7):1234-1287. PubMed ID: 37187168
