BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

114 related articles for the article with PubMed ID 34804146

  • 1. AdaCN: An Adaptive Cubic Newton Method for Nonconvex Stochastic Optimization.
    Liu Y; Zhang M; Zhong Z; Zeng X
    Comput Intell Neurosci; 2021; 2021():5790608. PubMed ID: 34804146

  • 2. A novel adaptive cubic quasi-Newton optimizer for deep learning based medical image analysis tasks, validated on detection of COVID-19 and segmentation for COVID-19 lung infection, liver tumor, and optic disc/cup.
    Liu Y; Zhang M; Zhong Z; Zeng X
    Med Phys; 2023 Mar; 50(3):1528-1538. PubMed ID: 36057788

  • 3. Faster Stochastic Quasi-Newton Methods.
    Zhang Q; Huang F; Deng C; Huang H
    IEEE Trans Neural Netw Learn Syst; 2022 Sep; 33(9):4388-4397. PubMed ID: 33667166

  • 4. A Stochastic Quasi-Newton Method for Large-Scale Nonconvex Optimization With Applications.
    Chen H; Wu HC; Chan SC; Lam WH
    IEEE Trans Neural Netw Learn Syst; 2020 Nov; 31(11):4776-4790. PubMed ID: 31902778

  • 5. Communication-efficient distributed cubic Newton with compressed lazy Hessian.
    Zhang Z; Che K; Yang S; Xu W
    Neural Netw; 2024 Jun; 174():106212. PubMed ID: 38479185

  • 6. Stochastic Optimization for Nonconvex Problem With Inexact Hessian Matrix, Gradient, and Function.
    Liu L; Liu X; Hsieh CJ; Tao D
    IEEE Trans Neural Netw Learn Syst; 2023 Dec; PP():. PubMed ID: 38039170

  • 7. Asynchronous Parallel Stochastic Quasi-Newton Methods.
    Tong Q; Liang G; Cai X; Zhu C; Bi J
    Parallel Comput; 2021 Apr; 101():. PubMed ID: 33363295

  • 8. Stochastic quasi-gradient methods: variance reduction via Jacobian sketching.
    Gower RM; Richtárik P; Bach F
    Math Program; 2021; 188(1):135-192. PubMed ID: 34720193

  • 9. Preconditioned Stochastic Gradient Descent.
    Li XL
    IEEE Trans Neural Netw Learn Syst; 2018 May; 29(5):1454-1466. PubMed ID: 28362591

  • 10. Gradient regularization of Newton method with Bregman distances.
    Doikov N; Nesterov Y
    Math Program; 2024; 204(1-2):1-25. PubMed ID: 38371323

  • 11. Towards Understanding Convergence and Generalization of AdamW.
    Zhou P; Xie X; Lin Z; Yan S
    IEEE Trans Pattern Anal Mach Intell; 2024 Mar; PP():. PubMed ID: 38536692

  • 12. Convergence of the RMSProp deep learning method with penalty for nonconvex optimization.
    Xu D; Zhang S; Zhang H; Mandic DP
    Neural Netw; 2021 Jul; 139():17-23. PubMed ID: 33662649

  • 13. Painless Stochastic Conjugate Gradient for Large-Scale Machine Learning.
    Yang Z
    IEEE Trans Neural Netw Learn Syst; 2023 Jun; PP():. PubMed ID: 37285250

  • 14. Convergence analysis of AdaBound with relaxed bound functions for non-convex optimization.
    Liu J; Kong J; Xu D; Qi M; Lu Y
    Neural Netw; 2022 Jan; 145():300-307. PubMed ID: 34785445

  • 15. diffGrad: An Optimization Method for Convolutional Neural Networks.
    Dubey SR; Chakraborty S; Roy SK; Mukherjee S; Singh SK; Chaudhuri BB
    IEEE Trans Neural Netw Learn Syst; 2020 Nov; 31(11):4500-4511. PubMed ID: 31880565

  • 16. Stochastic Training of Neural Networks via Successive Convex Approximations.
    Scardapane S; Di Lorenzo P
    IEEE Trans Neural Netw Learn Syst; 2018 Oct; 29(10):4947-4956. PubMed ID: 29994756

  • 17. Stochastic Gradient Descent for Nonconvex Learning Without Bounded Gradient Assumptions.
    Lei Y; Hu T; Li G; Tang K
    IEEE Trans Neural Netw Learn Syst; 2020 Oct; 31(10):4394-4400. PubMed ID: 31831449

  • 18. Learning Rates for Nonconvex Pairwise Learning.
    Li S; Liu Y
    IEEE Trans Pattern Anal Mach Intell; 2023 Aug; 45(8):9996-10011. PubMed ID: 37030773

  • 19. Subsampled Hessian Newton Methods for Supervised Learning.
    Wang CC; Huang CH; Lin CJ
    Neural Comput; 2015 Aug; 27(8):1766-95. PubMed ID: 26079755

  • 20. ϵ-Approximation of Adaptive Leaning Rate Optimization Algorithms for Constrained Nonconvex Stochastic Optimization.
    Iiduka H
    IEEE Trans Neural Netw Learn Syst; 2023 Oct; 34(10):8108-8115. PubMed ID: 35089865
