

BIOMARKERS

Molecular Biopsy of Human Tumors - a resource for Precision Medicine

128 related articles for article (PubMed ID: 37112413)

  • 1. Training a Two-Layer ReLU Network Analytically.
    Barbu A
    Sensors (Basel); 2023 Apr; 23(8):. PubMed ID: 37112413

  • 2. Improved Linear Convergence of Training CNNs With Generalizability Guarantees: A One-Hidden-Layer Case.
    Zhang S; Wang M; Xiong J; Liu S; Chen PY
    IEEE Trans Neural Netw Learn Syst; 2021 Jun; 32(6):2622-2635. PubMed ID: 32726280

  • 3. Magnitude and angle dynamics in training single ReLU neurons.
    Lee S; Sim B; Ye JC
    Neural Netw; 2024 Oct; 178():106435. PubMed ID: 38970945

  • 4. Non-differentiable saddle points and sub-optimal local minima exist for deep ReLU networks.
    Liu B; Liu Z; Zhang T; Yuan T
    Neural Netw; 2021 Dec; 144():75-89. PubMed ID: 34454244

  • 5. Locally adaptive activation functions with slope recovery for deep and physics-informed neural networks.
    Jagtap AD; Kawaguchi K; Em Karniadakis G
    Proc Math Phys Eng Sci; 2020 Jul; 476(2239):20200334. PubMed ID: 32831616

  • 6. Dynamics of stochastic gradient descent for two-layer neural networks in the teacher-student setup.
    Goldt S; Advani MS; Saxe AM; Krzakala F; Zdeborová L
    J Stat Mech; 2020 Dec; 2020(12):124010. PubMed ID: 34262607

  • 7. Mutual Information Based Learning Rate Decay for Stochastic Gradient Descent Training of Deep Neural Networks.
    Vasudevan S
    Entropy (Basel); 2020 May; 22(5):. PubMed ID: 33286332

  • 8. Piecewise convexity of artificial neural networks.
    Rister B; Rubin DL
    Neural Netw; 2017 Oct; 94():34-45. PubMed ID: 28732233

  • 9. A Novel Learning Algorithm to Optimize Deep Neural Networks: Evolved Gradient Direction Optimizer (EVGO).
    Karabayir I; Akbilgic O; Tas N
    IEEE Trans Neural Netw Learn Syst; 2021 Feb; 32(2):685-694. PubMed ID: 32481228

  • 10. Robust Stochastic Gradient Descent With Student-t Distribution Based First-Order Momentum.
    Ilboudo WEL; Kobayashi T; Sugimoto K
    IEEE Trans Neural Netw Learn Syst; 2022 Mar; 33(3):1324-1337. PubMed ID: 33326388

  • 11. Backpropagation Neural Tree.
    Ojha V; Nicosia G
    Neural Netw; 2022 May; 149():66-83. PubMed ID: 35193079

  • 12. diffGrad: An Optimization Method for Convolutional Neural Networks.
    Dubey SR; Chakraborty S; Roy SK; Mukherjee S; Singh SK; Chaudhuri BB
    IEEE Trans Neural Netw Learn Syst; 2020 Nov; 31(11):4500-4511. PubMed ID: 31880565

  • 13. The effect of choosing optimizer algorithms to improve computer vision tasks: a comparative study.
    Hassan E; Shams MY; Hikal NA; Elmougy S
    Multimed Tools Appl; 2023; 82(11):16591-16633. PubMed ID: 36185324

  • 14. Optimizing neural networks for medical data sets: A case study on neonatal apnea prediction.
    Shirwaikar RD; Acharya UD; Makkithaya K; M S; Srivastava S; Lewis LES
    Artif Intell Med; 2019 Jul; 98():59-76. PubMed ID: 31521253

  • 15. Critical Point-Finding Methods Reveal Gradient-Flat Regions of Deep Network Losses.
    Frye CG; Simon J; Wadia NS; Ligeralde A; DeWeese MR; Bouchard KE
    Neural Comput; 2021 May; 33(6):1469-1497. PubMed ID: 34496389

  • 16. Re-Thinking the Effectiveness of Batch Normalization and Beyond.
    Peng H; Yu Y; Yu S
    IEEE Trans Pattern Anal Mach Intell; 2024 Jan; 46(1):465-478. PubMed ID: 37747867

  • 17. Correspondence between neuroevolution and gradient descent.
    Whitelam S; Selin V; Park SW; Tamblyn I
    Nat Commun; 2021 Nov; 12(1):6317. PubMed ID: 34728632

  • 18. A novel adaptive cubic quasi-Newton optimizer for deep learning based medical image analysis tasks, validated on detection of COVID-19 and segmentation for COVID-19 lung infection, liver tumor, and optic disc/cup.
    Liu Y; Zhang M; Zhong Z; Zeng X
    Med Phys; 2023 Mar; 50(3):1528-1538. PubMed ID: 36057788

  • 19. Universality of gradient descent neural network training.
    Welper G
    Neural Netw; 2022 Jun; 150():259-273. PubMed ID: 35334438

  • 20. Singular Values for ReLU Layers.
    Dittmer S; King EJ; Maass P
    IEEE Trans Neural Netw Learn Syst; 2020 Sep; 31(9):3594-3605. PubMed ID: 31714239
