These tools will no longer be maintained as of December 31, 2024. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

118 related articles for article (PubMed ID: 35535049)

  • 1. Local convergence of tensor methods.
    Doikov N; Nesterov Y
    Math Program; 2022; 193(1):315-336. PubMed ID: 35535049

  • 2. Gradient regularization of Newton method with Bregman distances.
    Doikov N; Nesterov Y
    Math Program; 2024; 204(1-2):1-25. PubMed ID: 38371323

  • 3. Minimizing Uniformly Convex Functions by Cubic Regularization of Newton Method.
    Doikov N; Nesterov Y
    J Optim Theory Appl; 2021; 189(1):317-339. PubMed ID: 34720181

  • 4. Implementable tensor methods in unconstrained convex optimization.
    Nesterov Y
    Math Program; 2021; 186(1):157-183. PubMed ID: 33627889

  • 5. Scalable Proximal Jacobian Iteration Method With Global Convergence Analysis for Nonconvex Unconstrained Composite Optimizations.
    Zhang H; Qian J; Gao J; Yang J; Xu C
    IEEE Trans Neural Netw Learn Syst; 2019 Sep; 30(9):2825-2839. PubMed ID: 30668503

  • 6. Fast Augmented Lagrangian Method in the convex regime with convergence guarantees for the iterates.
    Boţ RI; Csetnek ER; Nguyen DK
    Math Program; 2023; 200(1):147-197. PubMed ID: 37215306

  • 7. Novel projection neurodynamic approaches for constrained convex optimization.
    Zhao Y; Liao X; He X
    Neural Netw; 2022 Jun; 150():336-349. PubMed ID: 35344705

  • 8. A modified subgradient extragradient method for solving monotone variational inequalities.
    He S; Wu T
    J Inequal Appl; 2017; 2017(1):89. PubMed ID: 28515617

  • 9. Subgradient ellipsoid method for nonsmooth convex problems.
    Rodomanov A; Nesterov Y
    Math Program; 2023; 199(1-2):305-341. PubMed ID: 37155414

  • 10. Distributed Subgradient Method With Random Quantization and Flexible Weights: Convergence Analysis.
    Xia Z; Du J; Jiang C; Poor HV; Han Z; Ren Y
    IEEE Trans Cybern; 2024 Feb; 54(2):1223-1235. PubMed ID: 38117628

  • 11. An incremental mirror descent subgradient algorithm with random sweeping and proximal step.
    Boţ RI; Böhm A
    Optimization; 2019; 68(1):33-50. PubMed ID: 30828224

  • 12. On the Convergence Analysis of the Optimized Gradient Method.
    Kim D; Fessler JA
    J Optim Theory Appl; 2017 Jan; 172(1):187-205. PubMed ID: 28461707

  • 13. A fast continuous time approach for non-smooth convex optimization using Tikhonov regularization technique.
    Karapetyants MA
    Comput Optim Appl; 2024; 87(2):531-569. PubMed ID: 38357400

  • 14. The Strength of Nesterov's Extrapolation in the Individual Convergence of Nonsmooth Optimization.
    Tao W; Pan Z; Wu G; Tao Q
    IEEE Trans Neural Netw Learn Syst; 2020 Jul; 31(7):2557-2568. PubMed ID: 31484139

  • 15. Distributed Primal-Dual Subgradient Method for Multiagent Optimization via Consensus Algorithms.
Yuan D; Xu S; Zhao H
    IEEE Trans Syst Man Cybern B Cybern; 2011 Dec; 41(6):1715-24. PubMed ID: 21824853

  • 16. Inducing strong convergence into the asymptotic behaviour of proximal splitting algorithms in Hilbert spaces.
    Boţ RI; Csetnek ER; Meier D
    Optim Methods Softw; 2019; 34(3):489-514. PubMed ID: 31057305

  • 17. Piecewise convexity of artificial neural networks.
    Rister B; Rubin DL
    Neural Netw; 2017 Oct; 94():34-45. PubMed ID: 28732233

  • 18. MOCCA: Mirrored Convex/Concave Optimization for Nonconvex Composite Functions.
    Barber RF; Sidky EY
    J Mach Learn Res; 2016; 17(144):1-51. PubMed ID: 29391859

  • 19. Incremental and Parallel Machine Learning Algorithms With Automated Learning Rate Adjustments.
    Hishinuma K; Iiduka H
    Front Robot AI; 2019; 6():77. PubMed ID: 33501092

  • 20. Exponential Convergence of Primal-Dual Dynamics Under General Conditions and Its Application to Distributed Optimization.
    Guo L; Shi X; Cao J; Wang Z
    IEEE Trans Neural Netw Learn Syst; 2024 Apr; 35(4):5551-5565. PubMed ID: 36178998
