These tools will no longer be maintained as of December 31, 2024. The archived website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

119 related articles for article (PubMed ID: 37187168)

  • 1. Optimization and Learning With Randomly Compressed Gradient Updates.
    Huang Z; Lei Y; Kabán A
    Neural Comput; 2023 Jun; 35(7):1234-1287. PubMed ID: 37187168

  • 2. The Strength of Nesterov's Extrapolation in the Individual Convergence of Nonsmooth Optimization.
    Tao W; Pan Z; Wu G; Tao Q
    IEEE Trans Neural Netw Learn Syst; 2020 Jul; 31(7):2557-2568. PubMed ID: 31484139

  • 3. Stochastic Gradient Descent-like relaxation is equivalent to Metropolis dynamics in discrete optimization and inference problems.
    Angelini MC; Cavaliere AG; Marino R; Ricci-Tersenghi F
    Sci Rep; 2024 May; 14(1):11638. PubMed ID: 38773255

  • 4. Accelerating Minibatch Stochastic Gradient Descent Using Typicality Sampling.
    Peng X; Li L; Wang FY
    IEEE Trans Neural Netw Learn Syst; 2020 Nov; 31(11):4649-4659. PubMed ID: 31899442

  • 5. Weighted SGD for ℓp Regression with Randomized Preconditioning.
    Yang J; Chow YL; Ré C; Mahoney MW
    Proc Annu ACM SIAM Symp Discret Algorithms; 2016 Jan; 2016():558-569. PubMed ID: 29782626

  • 6. Primal Averaging: A New Gradient Evaluation Step to Attain the Optimal Individual Convergence.
    Tao W; Pan Z; Wu G; Tao Q
    IEEE Trans Cybern; 2020 Feb; 50(2):835-845. PubMed ID: 30346303

  • 7. Stochastic Gradient Descent for Nonconvex Learning Without Bounded Gradient Assumptions.
    Lei Y; Hu T; Li G; Tang K
    IEEE Trans Neural Netw Learn Syst; 2020 Oct; 31(10):4394-4400. PubMed ID: 31831449

  • 8. A Sharp Estimate on the Transient Time of Distributed Stochastic Gradient Descent.
    Pu S; Olshevsky A; Paschalidis IC
    IEEE Trans Automat Contr; 2022 Nov; 67(11):5900-5915. PubMed ID: 37284602

  • 9. Stochastic Gradient Descent Introduces an Effective Landscape-Dependent Regularization Favoring Flat Solutions.
    Yang N; Tang C; Tu Y
    Phys Rev Lett; 2023 Jun; 130(23):237101. PubMed ID: 37354404

  • 10. Learning Rates for Nonconvex Pairwise Learning.
    Li S; Liu Y
    IEEE Trans Pattern Anal Mach Intell; 2023 Aug; 45(8):9996-10011. PubMed ID: 37030773

  • 11. Learning Rates for Stochastic Gradient Descent With Nonconvex Objectives.
    Lei Y; Tang K
    IEEE Trans Pattern Anal Mach Intell; 2021 Dec; 43(12):4505-4511. PubMed ID: 33755555

  • 12. Communication-Censored Distributed Stochastic Gradient Descent.
    Li W; Wu Z; Chen T; Li L; Ling Q
    IEEE Trans Neural Netw Learn Syst; 2022 Nov; 33(11):6831-6843. PubMed ID: 34086584

  • 13. A(DP)²SGD: Asynchronous Decentralized Parallel Stochastic Gradient Descent With Differential Privacy.
    Xu J; Zhang W; Wang F
    IEEE Trans Pattern Anal Mach Intell; 2022 Nov; 44(11):8036-8047. PubMed ID: 34449356

  • 14. Accelerating deep neural network training with inconsistent stochastic gradient descent.
    Wang L; Yang Y; Min R; Chakradhar S
    Neural Netw; 2017 Sep; 93():219-229. PubMed ID: 28668660

  • 15. Efficient high-resolution refinement in cryo-EM with stochastic gradient descent.
    Toader B; Brubaker MA; Lederman RR
    ArXiv; 2023 Nov; ():. PubMed ID: 38076514

  • 16. An Efficient Preconditioner for Stochastic Gradient Descent Optimization of Image Registration.
    Qiao Y; Lelieveldt BPF; Staring M
    IEEE Trans Med Imaging; 2019 Oct; 38(10):2314-2325. PubMed ID: 30762536

  • 17. Preconditioned Stochastic Gradient Descent.
    Li XL
    IEEE Trans Neural Netw Learn Syst; 2018 May; 29(5):1454-1466. PubMed ID: 28362591

  • 18. Accelerated Mini-batch Randomized Block Coordinate Descent Method.
    Zhao T; Yu M; Wang Y; Arora R; Liu H
    Adv Neural Inf Process Syst; 2014 Dec; 27():5614. PubMed ID: 25620860

  • 19. The inverse variance-flatness relation in stochastic gradient descent is critical for finding flat minima.
    Feng Y; Tu Y
    Proc Natl Acad Sci U S A; 2021 Mar; 118(9):. PubMed ID: 33619091

  • 20. Surface structure feature matching algorithm for cardiac motion estimation.
    Zhang Z; Yang X; Tan C; Guo W; Chen G
    BMC Med Inform Decis Mak; 2017 Dec; 17(Suppl 3):172. PubMed ID: 29297330
