These tools are no longer maintained as of December 31, 2024. The archived website can be found here. The PubMed4Hh GitHub repository can be found here. Contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

119 related articles for article (PubMed ID: 38640343)

  • 1. Quantization avoids saddle points in distributed optimization.
    Bo Y; Wang Y
    Proc Natl Acad Sci U S A; 2024 Apr; 121(17):e2319625121. PubMed ID: 38640343

  • 2. Distributed Subgradient Method With Random Quantization and Flexible Weights: Convergence Analysis.
    Xia Z; Du J; Jiang C; Poor HV; Han Z; Ren Y
    IEEE Trans Cybern; 2024 Feb; 54(2):1223-1235. PubMed ID: 38117628

  • 3. Noise Helps Optimization Escape From Saddle Points in the Synaptic Plasticity.
    Fang Y; Yu Z; Chen F
    Front Neurosci; 2020; 14():343. PubMed ID: 32410937

  • 4. A collective neurodynamic penalty approach to nonconvex distributed constrained optimization.
    Jia W; Huang T; Qin S
    Neural Netw; 2024 Mar; 171():145-158. PubMed ID: 38091759

  • 5. Optimal design of connectivity in neural network training.
    Jordanov I; Brown R
    Biomed Sci Instrum; 2000; 36():27-32. PubMed ID: 10834204

  • 6. Stochastic Optimization for Nonconvex Problem With Inexact Hessian Matrix, Gradient, and Function.
    Liu L; Liu X; Hsieh CJ; Tao D
    IEEE Trans Neural Netw Learn Syst; 2023 Dec; PP():. PubMed ID: 38039170

  • 7. NGDE: A Niching-Based Gradient-Directed Evolution Algorithm for Nonconvex Optimization.
    Yu Q; Liang X; Li M; Jian L
    IEEE Trans Neural Netw Learn Syst; 2024 Apr; PP():. PubMed ID: 38619963

  • 8. Two-timescale projection neural networks in collaborative neurodynamic approaches to global optimization and distributed optimization.
    Huang B; Liu Y; Jiang YL; Wang J
    Neural Netw; 2024 Jan; 169():83-91. PubMed ID: 37864998

  • 9. Communication-efficient distributed cubic Newton with compressed lazy Hessian.
    Zhang Z; Che K; Yang S; Xu W
    Neural Netw; 2024 Jun; 174():106212. PubMed ID: 38479185

  • 10. Learning of Gaussian Processes in Distributed and Communication Limited Systems.
    Tavassolipour M; Motahari SA; Shalmani MTM
    IEEE Trans Pattern Anal Mach Intell; 2020 Aug; 42(8):1928-1941. PubMed ID: 30908258

  • 11. Distributed Time-Varying Convex Optimization With Dynamic Quantization.
    Chen Z; Yi P; Li L; Hong Y
    IEEE Trans Cybern; 2023 Feb; 53(2):1078-1092. PubMed ID: 34437083

  • 12. Distributed Constrained Optimization With Delayed Subgradient Information Over Time-Varying Network Under Adaptive Quantization.
    Liu J; Yu Z; Ho DWC
    IEEE Trans Neural Netw Learn Syst; 2022 May; PP():. PubMed ID: 35552140

  • 13. Distributed Certifiably Correct Pose-Graph Optimization.
    Tian Y; Khosoussi K; Rosen DM; How JP
    IEEE Trans Robot; 2021 Dec; 37(6):2137-2156. PubMed ID: 35140552

  • 14. On Consensus-Optimality Trade-offs in Collaborative Deep Learning.
    Jiang Z; Balu A; Hegde C; Sarkar S
    Front Artif Intell; 2021; 4():573731. PubMed ID: 34595470

  • 15. Convergence of the RMSProp deep learning method with penalty for nonconvex optimization.
    Xu D; Zhang S; Zhang H; Mandic DP
    Neural Netw; 2021 Jul; 139():17-23. PubMed ID: 33662649

  • 16. Stochastic proximal gradient methods for nonconvex problems in Hilbert spaces.
    Geiersbach C; Scarinci T
    Comput Optim Appl; 2021; 78(3):705-740. PubMed ID: 33707813

  • 17. Communication-Censored Distributed Stochastic Gradient Descent.
    Li W; Wu Z; Chen T; Li L; Ling Q
    IEEE Trans Neural Netw Learn Syst; 2022 Nov; 33(11):6831-6843. PubMed ID: 34086584

  • 18. Distributed Consensus Optimization in Multiagent Networks With Time-Varying Directed Topologies and Quantized Communication.
    Li H; Huang C; Chen G; Liao X; Huang T
    IEEE Trans Cybern; 2017 Aug; 47(8):2044-2057. PubMed ID: 28371788

  • 19. Appropriate Learning Rates of Adaptive Learning Rate Optimization Algorithms for Training Deep Neural Networks.
    Iiduka H
    IEEE Trans Cybern; 2022 Dec; 52(12):13250-13261. PubMed ID: 34495862

  • 20. Neural network for a class of sparse optimization with L
    Wei Z; Li Q; Wei J; Bian W
    Neural Netw; 2022 Jul; 151():211-221. PubMed ID: 35439665
