These tools are no longer maintained as of December 31, 2024. An archived version of the website and the PubMed4Hh GitHub repository remain available. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

132 related articles for article (PubMed ID: 38773255)

  • 1. Stochastic Gradient Descent-like relaxation is equivalent to Metropolis dynamics in discrete optimization and inference problems.
    Angelini MC; Cavaliere AG; Marino R; Ricci-Tersenghi F
    Sci Rep; 2024 May; 14(1):11638. PubMed ID: 38773255

  • 2. Parameter inference for discretely observed stochastic kinetic models using stochastic gradient descent.
    Wang Y; Christley S; Mjolsness E; Xie X
    BMC Syst Biol; 2010 Jul; 4():99. PubMed ID: 20663171

  • 3. Optimization and Learning With Randomly Compressed Gradient Updates.
    Huang Z; Lei Y; Kabán A
    Neural Comput; 2023 Jun; 35(7):1234-1287. PubMed ID: 37187168

  • 4. Weighted SGD for ℓp Regression with Randomized Preconditioning.
    Yang J; Chow YL; Ré C; Mahoney MW
    Proc Annu ACM SIAM Symp Discret Algorithms; 2016 Jan; 2016():558-569. PubMed ID: 29782626

  • 5. Accelerating Minibatch Stochastic Gradient Descent Using Typicality Sampling.
    Peng X; Li L; Wang FY
    IEEE Trans Neural Netw Learn Syst; 2020 Nov; 31(11):4649-4659. PubMed ID: 31899442

  • 6. The inverse variance-flatness relation in stochastic gradient descent is critical for finding flat minima.
    Feng Y; Tu Y
    Proc Natl Acad Sci U S A; 2021 Mar; 118(9):. PubMed ID: 33619091

  • 7. Anomalous diffusion dynamics of learning in deep neural networks.
    Chen G; Qu CK; Gong P
    Neural Netw; 2022 May; 149():18-28. PubMed ID: 35182851

  • 8. A(DP)²SGD: Asynchronous Decentralized Parallel Stochastic Gradient Descent With Differential Privacy.
    Xu J; Zhang W; Wang F
    IEEE Trans Pattern Anal Mach Intell; 2022 Nov; 44(11):8036-8047. PubMed ID: 34449356

  • 9. A Monte Carlo Metropolis-Hastings algorithm for sampling from distributions with intractable normalizing constants.
    Liang F; Jin IH
    Neural Comput; 2013 Aug; 25(8):2199-234. PubMed ID: 23607562

  • 10. Surface structure feature matching algorithm for cardiac motion estimation.
    Zhang Z; Yang X; Tan C; Guo W; Chen G
    BMC Med Inform Decis Mak; 2017 Dec; 17(Suppl 3):172. PubMed ID: 29297330

  • 11. Comparative Monte Carlo efficiency by Monte Carlo analysis.
    Rubenstein BM; Gubernatis JE; Doll JD
    Phys Rev E Stat Nonlin Soft Matter Phys; 2010 Sep; 82(3 Pt 2):036701. PubMed ID: 21230207

  • 12. Variance Reduction in Stochastic Gradient Langevin Dynamics.
    Dubey A; Reddi SJ; Póczos B; Smola AJ; Xing EP; Williamson SA
    Adv Neural Inf Process Syst; 2016 Dec; 29():1154-1162. PubMed ID: 28713210

  • 13. Stochastic Gradient Descent Introduces an Effective Landscape-Dependent Regularization Favoring Flat Solutions.
    Yang N; Tang C; Tu Y
    Phys Rev Lett; 2023 Jun; 130(23):237101. PubMed ID: 37354404

  • 14. Communication-Censored Distributed Stochastic Gradient Descent.
    Li W; Wu Z; Chen T; Li L; Ling Q
    IEEE Trans Neural Netw Learn Syst; 2022 Nov; 33(11):6831-6843. PubMed ID: 34086584

  • 15. The Limiting Dynamics of SGD: Modified Loss, Phase-Space Oscillations, and Anomalous Diffusion.
    Kunin D; Sagastuy-Brena J; Gillespie L; Margalit E; Tanaka H; Ganguli S; Yamins DLK
    Neural Comput; 2023 Dec; 36(1):151-174. PubMed ID: 38052080

  • 16. Preconditioned Stochastic Gradient Descent.
    Li XL
    IEEE Trans Neural Netw Learn Syst; 2018 May; 29(5):1454-1466. PubMed ID: 28362591

  • 17. Accelerating deep neural network training with inconsistent stochastic gradient descent.
    Wang L; Yang Y; Min R; Chakradhar S
    Neural Netw; 2017 Sep; 93():219-229. PubMed ID: 28668660

  • 18. A mean field view of the landscape of two-layer neural networks.
    Mei S; Montanari A; Nguyen PM
    Proc Natl Acad Sci U S A; 2018 Aug; 115(33):E7665-E7671. PubMed ID: 30054315

  • 19. Stochastic gradient Langevin dynamics with adaptive drifts.
    Kim S; Song Q; Liang F
    J Stat Comput Simul; 2022; 92(2):318-336. PubMed ID: 35559269

  • 20. An Improvised Sentiment Analysis Model on Twitter Data Using Stochastic Gradient Descent (SGD) Optimization Algorithm in Stochastic Gate Neural Network (SGNN).
    Vidyashree KP; Rajendra AB
    SN Comput Sci; 2023; 4(2):190. PubMed ID: 36748096

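The seed article (entry 1) argues that SGD-like relaxation on discrete optimization and inference problems is equivalent to Metropolis dynamics. As a minimal sketch of the Metropolis side of that correspondence (the bit-string energy function, target, temperature, and step count below are invented for illustration and are not taken from the paper):

```python
import math
import random

def metropolis_step(state, energy, beta, rng):
    # Propose flipping one randomly chosen bit, then accept the move
    # with probability min(1, exp(-beta * dE)) -- the Metropolis rule.
    i = rng.randrange(len(state))
    proposal = state[:]
    proposal[i] = 1 - proposal[i]
    dE = energy(proposal) - energy(state)
    if dE <= 0 or rng.random() < math.exp(-beta * dE):
        return proposal
    return state

# Toy discrete inference problem (hypothetical): the energy counts
# disagreements with a hidden target bit string.
target = [1, 0, 1, 1, 0, 1, 0, 0]
energy = lambda s: sum(a != b for a, b in zip(s, target))

rng = random.Random(0)
state = [0] * len(target)
best = state
for _ in range(2000):
    state = metropolis_step(state, energy, beta=4.0, rng=rng)
    if energy(state) < energy(best):
        best = state
```

Here `beta` plays the role of an inverse temperature: large `beta` makes uphill moves rare, so the chain relaxes toward low-energy configurations, which is the regime the cited paper compares against SGD-like dynamics.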