

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

111 related articles for article (PubMed ID: 37437199)

  • 1. Mirror Descent of Hopfield Model.
    Soh H; Kim D; Hwang J; Jo J
    Neural Comput; 2023 Aug; 35(9):1529-1542. PubMed ID: 37437199

  • 2. Hopfield Neural Network Flow: A Geometric Viewpoint.
    Halder A; Caluya KF; Travacca B; Moura SJ
    IEEE Trans Neural Netw Learn Syst; 2020 Nov; 31(11):4869-4880. PubMed ID: 31940561

  • 3. Piecewise convexity of artificial neural networks.
    Rister B; Rubin DL
    Neural Netw; 2017 Oct; 94():34-45. PubMed ID: 28732233

  • 4. Optimizing neural networks for medical data sets: A case study on neonatal apnea prediction.
    Shirwaikar RD; Acharya U D; Makkithaya K; M S; Srivastava S; Lewis U LES
    Artif Intell Med; 2019 Jul; 98():59-76. PubMed ID: 31521253

  • 5. Stochastic Mirror Descent on Overparameterized Nonlinear Models.
    Azizan N; Lale S; Hassibi B
    IEEE Trans Neural Netw Learn Syst; 2022 Dec; 33(12):7717-7727. PubMed ID: 34270431

  • 6. Improved GWO and its application in parameter optimization of Elman neural network.
    Liu W; Sun J; Liu G; Fu S; Liu M; Zhu Y; Gao Q
    PLoS One; 2023; 18(7):e0288071. PubMed ID: 37418374

  • 7. Microscale Adaptive Optics: Wave-Front Control with a μ-Mirror Array and a VLSI Stochastic Gradient Descent Controller.
    Weyrauch T; Vorontsov MA; Bifano TG; Hammer JA; Cohen M; Cauwenberghs G
    Appl Opt; 2001 Aug; 40(24):4243-53. PubMed ID: 18360462

  • 8. Initialization-Based k-Winners-Take-All Neural Network Model Using Modified Gradient Descent.
    Zhang Y; Li S; Geng G
    IEEE Trans Neural Netw Learn Syst; 2023 Aug; 34(8):4130-4138. PubMed ID: 34752408

  • 9. Gradient Descent with Random Initialization: Fast Global Convergence for Nonconvex Phase Retrieval.
    Chen Y; Chi Y; Fan J; Ma C
    Math Program; 2019 Jul; 176(1-2):5-37. PubMed ID: 33833473

  • 10. An incremental mirror descent subgradient algorithm with random sweeping and proximal step.
    Boţ RI; Böhm A
    Optimization; 2019; 68(1):33-50. PubMed ID: 30828224

  • 11. Particle swarm optimization-based automatic parameter selection for deep neural networks and its applications in large-scale and high-dimensional data.
    Ye F
    PLoS One; 2017; 12(12):e0188746. PubMed ID: 29236718

  • 12. A Novel Machine Learning Model for Dose Prediction in Prostate Volumetric Modulated Arc Therapy Using Output Initialization and Optimization Priorities.
    Jensen PJ; Zhang J; Koontz BF; Wu QJ
    Front Artif Intell; 2021; 4():624038. PubMed ID: 33969289

  • 13. Accelerating gradient descent and Adam via fractional gradients.
    Shin Y; Darbon J; Karniadakis GE
    Neural Netw; 2023 Apr; 161():185-201. PubMed ID: 36774859

  • 14. A multivariate adaptive gradient algorithm with reduced tuning efforts.
    Saab S; Saab K; Phoha S; Zhu M; Ray A
    Neural Netw; 2022 Aug; 152():499-509. PubMed ID: 35640371

  • 15. Meta-SpikePropamine: learning to learn with synaptic plasticity in spiking neural networks.
    Schmidgall S; Hays J
    Front Neurosci; 2023; 17():1183321. PubMed ID: 37250397

  • 16. Fractional-order gradient descent learning of BP neural networks with Caputo derivative.
    Wang J; Wen Y; Gou Y; Ye Z; Chen H
    Neural Netw; 2017 May; 89():19-30. PubMed ID: 28278430

  • 17. HyAdamC: A New Adam-Based Hybrid Optimization Algorithm for Convolution Neural Networks.
    Kim KS; Choi YS
    Sensors (Basel); 2021 Jun; 21(12):. PubMed ID: 34204695

  • 18. Multi-class DTI Segmentation: A Convex Approach.
    Xie Y; Chen T; Ho J; Vemuri BC
    Med Image Comput Comput Assist Interv; 2012 Oct; 2012():115-123. PubMed ID: 25177735

  • 19. Intelligent breast cancer diagnostic system empowered by deep extreme gradient descent optimization.
    Khan MBS; Rahman AU; Nawaz MS; Ahmed R; Khan MA; Mosavi A
    Math Biosci Eng; 2022 May; 19(8):7978-8002. PubMed ID: 35801453

  • 20. Parameter Efficient Neural Networks With Singular Value Decomposed Kernels.
    Vander Mijnsbrugge D; Ongenae F; Van Hoecke S
    IEEE Trans Neural Netw Learn Syst; 2023 Sep; 34(9):5682-5692. PubMed ID: 34941526
