121 related articles for PubMed ID 32525810 (Gradient Descent Learning With Floats)

  • 1. Gradient Descent Learning With Floats.
    Sun T; Tang K; Li D
    IEEE Trans Cybern; 2022 Mar; 52(3):1763-1771. PubMed ID: 32525810

  • 2. Nonergodic Complexity of Proximal Inertial Gradient Descents.
    Sun T; Qiao L; Li D
    IEEE Trans Neural Netw Learn Syst; 2021 Oct; 32(10):4613-4626. PubMed ID: 32997636

  • 3. Novel Convergence Results of Adaptive Stochastic Gradient Descents.
    Sun T; Qiao L; Liao Q; Li D
    IEEE Trans Image Process; 2021; 30():1044-1056. PubMed ID: 33237857

  • 4. Sign Stochastic Gradient Descents without bounded gradient assumption for the finite sum minimization.
    Sun T; Li D
    Neural Netw; 2022 May; 149():195-203. PubMed ID: 35248809

  • 5. A multivariate adaptive gradient algorithm with reduced tuning efforts.
    Saab S; Saab K; Phoha S; Zhu M; Ray A
    Neural Netw; 2022 Aug; 152():499-509. PubMed ID: 35640371

  • 6. Efficient Implementation of Second-Order Stochastic Approximation Algorithms in High-Dimensional Problems.
    Zhu J; Wang L; Spall JC
    IEEE Trans Neural Netw Learn Syst; 2020 Aug; 31(8):3087-3099. PubMed ID: 31536020

  • 7. Sign-Based Gradient Descent With Heterogeneous Data: Convergence and Byzantine Resilience.
    Jin R; Liu Y; Huang Y; He X; Wu T; Dai H
    IEEE Trans Neural Netw Learn Syst; 2024 Jan; PP():. PubMed ID: 38215315

  • 8. Value iteration for streaming data on a continuous space with gradient method in an RKHS.
    Liu J; Xu W; Wang Y; Lian H
    Neural Netw; 2023 Sep; 166():437-445. PubMed ID: 37566954

  • 9. Stochastic Gradient Descent for Nonconvex Learning Without Bounded Gradient Assumptions.
    Lei Y; Hu T; Li G; Tang K
    IEEE Trans Neural Netw Learn Syst; 2020 Oct; 31(10):4394-4400. PubMed ID: 31831449

  • 10. Error analysis of stochastic gradient descent ranking.
    Chen H; Tang Y; Li L; Yuan Y; Li X; Tang Y
    IEEE Trans Cybern; 2013 Jun; 43(3):898-909. PubMed ID: 24083315

  • 11. Suspicion Distillation Gradient Descent Bit-Flipping Algorithm.
    Ivaniš P; Brkić S; Vasić B
    Entropy (Basel); 2022 Apr; 24(4):. PubMed ID: 35455221

  • 12. Stochastic quasi-gradient methods: variance reduction via Jacobian sketching.
    Gower RM; Richtárik P; Bach F
    Math Program; 2021; 188(1):135-192. PubMed ID: 34720193

  • 13. Nonsingular Gradient Descent Algorithm for Interval Type-2 Fuzzy Neural Network.
    Han H; Sun C; Wu X; Yang H; Qiao J
    IEEE Trans Neural Netw Learn Syst; 2024 Jun; 35(6):8176-8189. PubMed ID: 37015616

  • 14. A learning rule for very simple universal approximators consisting of a single layer of perceptrons.
    Auer P; Burgsteiner H; Maass W
    Neural Netw; 2008 Jun; 21(5):786-795. PubMed ID: 18249524

  • 15. DisSAGD: A Distributed Parameter Update Scheme Based on Variance Reduction.
    Pan H; Zheng L
    Sensors (Basel); 2021 Jul; 21(15):. PubMed ID: 34372361

  • 16. Stochastically Controlled Compositional Gradient for Composition Problems.
    Liu L; Liu J; Hsieh CJ; Tao D
    IEEE Trans Neural Netw Learn Syst; 2023 Feb; 34(2):611-622. PubMed ID: 34383655

  • 17. Accelerating Minibatch Stochastic Gradient Descent Using Typicality Sampling.
    Peng X; Li L; Wang FY
    IEEE Trans Neural Netw Learn Syst; 2020 Nov; 31(11):4649-4659. PubMed ID: 31899442

  • 18. Understanding and Optimizing Asynchronous Low-Precision Stochastic Gradient Descent.
    De Sa C; Feldman M; Ré C; Olukotun K
    Proc Int Symp Comput Archit; 2017 Jun; 2017():561-574. PubMed ID: 29391770

  • 19. The general inefficiency of batch training for gradient descent learning.
    Wilson DR; Martinez TR
    Neural Netw; 2003 Dec; 16(10):1429-1451. PubMed ID: 14622875

  • 20. Quantum State Tomography via Nonconvex Riemannian Gradient Descent.
    Hsu MC; Kuo EJ; Yu WH; Cai JF; Hsieh MH
    Phys Rev Lett; 2024 Jun; 132(24):240804. PubMed ID: 38949351
