
132 related articles for article (PubMed ID: 37285250)

  • 1. Painless Stochastic Conjugate Gradient for Large-Scale Machine Learning.
    Yang Z
    IEEE Trans Neural Netw Learn Syst; 2023 Jun; [Epub ahead of print]. PubMed ID: 37285250

  • 2. Faster Stochastic Quasi-Newton Methods.
    Zhang Q; Huang F; Deng C; Huang H
    IEEE Trans Neural Netw Learn Syst; 2022 Sep; 33(9):4388-4397. PubMed ID: 33667166

  • 3. A Minibatch Proximal Stochastic Recursive Gradient Algorithm Using a Trust-Region-Like Scheme and Barzilai-Borwein Stepsizes.
    Yu T; Liu XW; Dai YH; Sun J
    IEEE Trans Neural Netw Learn Syst; 2021 Oct; 32(10):4627-4638. PubMed ID: 33021942

  • 4. Stochastic Conjugate Gradient Algorithm With Variance Reduction.
    Jin XB; Zhang XY; Huang K; Geng GG
    IEEE Trans Neural Netw Learn Syst; 2019 May; 30(5):1360-1369. PubMed ID: 30281486

  • 5. Distributed Stochastic Gradient Tracking Algorithm With Variance Reduction for Non-Convex Optimization.
    Jiang X; Zeng X; Sun J; Chen J
    IEEE Trans Neural Netw Learn Syst; 2023 Sep; 34(9):5310-5321. PubMed ID: 35536804

  • 6. Stochastic learning via optimizing the variational inequalities.
    Tao Q; Gao QK; Chu DJ; Wu GW
    IEEE Trans Neural Netw Learn Syst; 2014 Oct; 25(10):1769-1778. PubMed ID: 25291732

  • 7. Learning Rates for Nonconvex Pairwise Learning.
    Li S; Liu Y
    IEEE Trans Pattern Anal Mach Intell; 2023 Aug; 45(8):9996-10011. PubMed ID: 37030773

  • 8. Stochastic momentum methods for non-convex learning without bounded assumptions.
    Liang Y; Liu J; Xu D
    Neural Netw; 2023 Aug; 165():830-845. PubMed ID: 37418864

  • 9. Online Stochastic DCA With Applications to Principal Component Analysis.
    Le Thi HA; Luu HPH; Dinh TP
    IEEE Trans Neural Netw Learn Syst; 2024 May; 35(5):7035-7047. PubMed ID: 36315540

  • 10. Dualityfree Methods for Stochastic Composition Optimization.
    Liu L; Liu J; Tao D
    IEEE Trans Neural Netw Learn Syst; 2019 Apr; 30(4):1205-1217. PubMed ID: 30222587

  • 11. Fast compressed sensing-based CBCT reconstruction using Barzilai-Borwein formulation for application to on-line IGRT.
    Park JC; Song B; Kim JS; Park SH; Kim HK; Liu Z; Suh TS; Song WY
    Med Phys; 2012 Mar; 39(3):1207-1217. PubMed ID: 22380351

  • 12. ϵ-Approximation of Adaptive Leaning Rate Optimization Algorithms for Constrained Nonconvex Stochastic Optimization.
    Iiduka H
    IEEE Trans Neural Netw Learn Syst; 2023 Oct; 34(10):8108-8115. PubMed ID: 35089865

  • 13. The Strength of Nesterov's Extrapolation in the Individual Convergence of Nonsmooth Optimization.
    Tao W; Pan Z; Wu G; Tao Q
    IEEE Trans Neural Netw Learn Syst; 2020 Jul; 31(7):2557-2568. PubMed ID: 31484139

  • 14. Primal Averaging: A New Gradient Evaluation Step to Attain the Optimal Individual Convergence.
    Tao W; Pan Z; Wu G; Tao Q
    IEEE Trans Cybern; 2020 Feb; 50(2):835-845. PubMed ID: 30346303

  • 15. Stochastic quasi-gradient methods: variance reduction via Jacobian sketching.
    Gower RM; Richtárik P; Bach F
    Math Program; 2021; 188(1):135-192. PubMed ID: 34720193

  • 16. Stochastically Controlled Compositional Gradient for Composition Problems.
    Liu L; Liu J; Hsieh CJ; Tao D
    IEEE Trans Neural Netw Learn Syst; 2023 Feb; 34(2):611-622. PubMed ID: 34383655

  • 17. Algorithms for accelerated convergence of adaptive PCA.
    Chatterjee C; Kang Z; Roychowdhury VP
    IEEE Trans Neural Netw; 2000; 11(2):338-55. PubMed ID: 18249765

  • 18. Speed and convergence properties of gradient algorithms for optimization of IMRT.
    Zhang X; Liu H; Wang X; Dong L; Wu Q; Mohan R
    Med Phys; 2004 May; 31(5):1141-52. PubMed ID: 15191303

  • 19. Variance Reduction in Stochastic Gradient Langevin Dynamics.
    Dubey A; Reddi SJ; Póczos B; Smola AJ; Xing EP; Williamson SA
    Adv Neural Inf Process Syst; 2016 Dec; 29():1154-1162. PubMed ID: 28713210

  • 20. Calibrating the Adaptive Learning Rate to Improve Convergence of ADAM.
    Tong Q; Liang G; Bi J
    Neurocomputing (Amst); 2022 Apr; 481():333-356. PubMed ID: 35342226
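Several of the entries above (e.g., #2-#5, #15, #19) concern variance-reduced stochastic gradient methods. As an illustrative sketch only (not taken from any of the listed papers), a minimal SVRG-style update, where a periodic full-gradient snapshot corrects each stochastic gradient, can be written as:

```python
import numpy as np

def svrg(grad, grad_full, w0, n, step=0.02, epochs=50, m=None, seed=0):
    """Minimal SVRG sketch (snapshot-based variance reduction).

    grad(w, i)   -- gradient of the i-th sample's loss at w
    grad_full(w) -- full-batch gradient at w
    n            -- number of samples
    """
    rng = np.random.default_rng(seed)
    m = m if m is not None else n           # inner-loop length
    w = np.asarray(w0, dtype=float).copy()
    for _ in range(epochs):
        w_snap = w.copy()                   # snapshot iterate
        mu = grad_full(w_snap)              # full gradient at the snapshot
        for _ in range(m):
            i = rng.integers(n)
            # Variance-reduced estimate: unbiased, and its variance
            # shrinks as w and w_snap approach the optimum.
            v = grad(w, i) - grad(w_snap, i) + mu
            w -= step * v
    return w
```

On a noiseless least-squares problem (grad(w, i) = a_i (a_i · w - b_i)), this converges to the exact solution rather than a noise ball, which is the point of the variance-reduction correction; the function and parameter names here are hypothetical, chosen for the sketch.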

    Page 1 of 7.