These tools are no longer maintained as of December 31, 2024. The archived website and the PubMed4Hh GitHub repository remain available; contact NLM Customer Service if you have questions.


BIOMARKERS

Molecular Biopsy of Human Tumors

- a resource for Precision Medicine

140 related articles for article (PubMed ID: 37223467)

  • 21. A selective overview of deep learning.
    Fan J; Ma C; Zhong Y
    Stat Sci; 2021 May; 36(2):264-290. PubMed ID: 34305305

  • 22. Shaping the learning landscape in neural networks around wide flat minima.
    Baldassi C; Pittorino F; Zecchina R
    Proc Natl Acad Sci U S A; 2020 Jan; 117(1):161-170. PubMed ID: 31871189

  • 23. The Dropout Learning Algorithm.
    Baldi P; Sadowski P
    Artif Intell; 2014 May; 210():78-122. PubMed ID: 24771879

  • 24. On the problem in model selection of neural network regression in overrealizable scenario.
    Hagiwara K
    Neural Comput; 2002 Aug; 14(8):1979-2002. PubMed ID: 12180410

  • 25. A mathematical framework for improved weight initialization of neural networks using Lagrange multipliers.
    de Pater I; Mitici M
    Neural Netw; 2023 Sep; 166():579-594. PubMed ID: 37586258

  • 26. Mutual Information Based Learning Rate Decay for Stochastic Gradient Descent Training of Deep Neural Networks.
    Vasudevan S
    Entropy (Basel); 2020 May; 22(5):. PubMed ID: 33286332

  • 27. Efficient Approximation of High-Dimensional Functions With Neural Networks.
    Cheridito P; Jentzen A; Rossmannek F
    IEEE Trans Neural Netw Learn Syst; 2022 Jul; 33(7):3079-3093. PubMed ID: 33513112

  • 28. Fast generalization error bound of deep learning without scale invariance of activation functions.
    Terada Y; Hirose R
    Neural Netw; 2020 Sep; 129():344-358. PubMed ID: 32593931

  • 29. Learning through atypical phase transitions in overparameterized neural networks.
    Baldassi C; Lauditi C; Malatesta EM; Pacelli R; Perugini G; Zecchina R
    Phys Rev E; 2022 Jul; 106(1-1):014116. PubMed ID: 35974501

  • 30. A Frobenius Norm Regularization Method for Convolutional Kernel Tensors in Neural Networks.
    Guo PC
    Comput Intell Neurosci; 2022; 2022():3277730. PubMed ID: 36579174

  • 31. Magnitude and angle dynamics in training single ReLU neurons.
    Lee S; Sim B; Ye JC
    Neural Netw; 2024 Jun; 178():106435. PubMed ID: 38970945

  • 32. A mean field view of the landscape of two-layer neural networks.
    Mei S; Montanari A; Nguyen PM
    Proc Natl Acad Sci U S A; 2018 Aug; 115(33):E7665-E7671. PubMed ID: 30054315

  • 33. Locally adaptive activation functions with slope recovery for deep and physics-informed neural networks.
    Jagtap AD; Kawaguchi K; Em Karniadakis G
    Proc Math Phys Eng Sci; 2020 Jul; 476(2239):20200334. PubMed ID: 32831616

  • 34. Understanding Double Descent Using VC-Theoretical Framework.
    Lee EH; Cherkassky V
    IEEE Trans Neural Netw Learn Syst; 2024 Apr; PP():. PubMed ID: 38669171

  • 35. Improved Linear Convergence of Training CNNs With Generalizability Guarantees: A One-Hidden-Layer Case.
    Zhang S; Wang M; Xiong J; Liu S; Chen PY
    IEEE Trans Neural Netw Learn Syst; 2021 Jun; 32(6):2622-2635. PubMed ID: 32726280

  • 36. Deep ReLU neural networks in high-dimensional approximation.
    Dũng D; Nguyen VK
    Neural Netw; 2021 Oct; 142():619-635. PubMed ID: 34392126

  • 37. Error bounds for deep ReLU networks using the Kolmogorov-Arnold superposition theorem.
    Montanelli H; Yang H
    Neural Netw; 2020 Sep; 129():1-6. PubMed ID: 32473577

  • 38. Improving generalization of deep neural networks by leveraging margin distribution.
    Lyu SH; Wang L; Zhou ZH
    Neural Netw; 2022 Jul; 151():48-60. PubMed ID: 35395512

  • 39. Stochastic Gradient Descent Introduces an Effective Landscape-Dependent Regularization Favoring Flat Solutions.
    Yang N; Tang C; Tu Y
    Phys Rev Lett; 2023 Jun; 130(23):237101. PubMed ID: 37354404

  • 40. A privacy preservation framework for feedforward-designed convolutional neural networks.
    Li D; Wang J; Li Q; Hu Y; Li X
    Neural Netw; 2022 Nov; 155():14-27. PubMed ID: 36027662
