103 related articles for the article with PubMed ID 35647529

  • 1. Shallow Univariate ReLU Networks as Splines: Initialization, Loss Surface, Hessian, and Gradient Flow Dynamics.
    Sahs J; Pyle R; Damaraju A; Caro JO; Tavaslioglu O; Lu A; Anselmi F; Patel AB
    Front Artif Intell; 2022; 5():889981. PubMed ID: 35647529

  • 2. A comparison of deep networks with ReLU activation function and linear spline-type methods.
    Eckle K; Schmidt-Hieber J
    Neural Netw; 2019 Feb; 110():232-242. PubMed ID: 30616095

  • 3. Learning in the machine: The symmetries of the deep learning channel.
    Baldi P; Sadowski P; Lu Z
    Neural Netw; 2017 Nov; 95():110-133. PubMed ID: 28938130

  • 4. ReLU Networks Are Universal Approximators via Piecewise Linear or Constant Functions.
    Huang C
    Neural Comput; 2020 Nov; 32(11):2249-2278. PubMed ID: 32946706

  • 5. Optimal approximation of piecewise smooth functions using deep ReLU neural networks.
    Petersen P; Voigtlaender F
    Neural Netw; 2018 Dec; 108():296-330. PubMed ID: 30245431

  • 6. Theoretical issues in deep networks.
    Poggio T; Banburski A; Liao Q
    Proc Natl Acad Sci U S A; 2020 Dec; 117(48):30039-30045. PubMed ID: 32518109

  • 7. High frequency accuracy and loss data of random neural networks trained on image datasets.
    Rorabaugh AK; Caíno-Lores S; Johnston T; Taufer M
    Data Brief; 2022 Feb; 40():107780. PubMed ID: 35036484

  • 8. On minimal representations of shallow ReLU networks.
    Dereich S; Kassing S
    Neural Netw; 2022 Apr; 148():121-128. PubMed ID: 35123261

  • 9. Improved Linear Convergence of Training CNNs With Generalizability Guarantees: A One-Hidden-Layer Case.
    Zhang S; Wang M; Xiong J; Liu S; Chen PY
    IEEE Trans Neural Netw Learn Syst; 2021 Jun; 32(6):2622-2635. PubMed ID: 32726280

  • 10. Residual D²NN: training diffractive deep neural networks via learnable light shortcuts.
    Dou H; Deng Y; Yan T; Wu H; Lin X; Dai Q
    Opt Lett; 2020 May; 45(10):2688-2691. PubMed ID: 32412442

  • 11. Neural networks-based regularization for large-scale medical image reconstruction.
    Kofler A; Haltmeier M; Schaeffter T; Kachelrieß M; Dewey M; Wald C; Kolbitsch C
    Phys Med Biol; 2020 Jul; 65(13):135003. PubMed ID: 32492660

  • 12. Macromolecular crowding: chemistry and physics meet biology (Ascona, Switzerland, 10-14 June 2012).
    Foffi G; Pastore A; Piazza F; Temussi PA
    Phys Biol; 2013 Aug; 10(4):040301. PubMed ID: 23912807

  • 13. Random Sketching for Neural Networks With ReLU.
    Wang D; Zeng J; Lin SB
    IEEE Trans Neural Netw Learn Syst; 2021 Feb; 32(2):748-762. PubMed ID: 32275612

  • 14. Error bounds for approximations with deep ReLU networks.
    Yarotsky D
    Neural Netw; 2017 Oct; 94():103-114. PubMed ID: 28756334

  • 15. Non-differentiable saddle points and sub-optimal local minima exist for deep ReLU networks.
    Liu B; Liu Z; Zhang T; Yuan T
    Neural Netw; 2021 Dec; 144():75-89. PubMed ID: 34454244

  • 16. Recursion Newton-Like Algorithm for l2,0-Regularized Group Sparse Optimization.
    Zhang H; Yuan Z; Xiu N
    IEEE Trans Neural Netw Learn Syst; 2023 Sep; 34(9):5882-5896. PubMed ID: 34898441

  • 17. Entropic Dynamics in Neural Networks, the Renormalization Group and the Hamilton-Jacobi-Bellman Equation.
    Caticha N
    Entropy (Basel); 2020 May; 22(5):. PubMed ID: 33286359

  • 18. Studying the Evolution of Neural Activation Patterns During Training of Feed-Forward ReLU Networks.
    Hartmann D; Franzen D; Brodehl S
    Front Artif Intell; 2021; 4():642374. PubMed ID: 35005614

  • 19. Analysis on the Number of Linear Regions of Piecewise Linear Neural Networks.
    Hu Q; Zhang H; Gao F; Xing C; An J
    IEEE Trans Neural Netw Learn Syst; 2022 Feb; 33(2):644-653. PubMed ID: 33180735

  • 20. Improved robustness of reinforcement learning policies upon conversion to spiking neuronal network platforms applied to Atari Breakout game.
    Patel D; Hazan H; Saunders DJ; Siegelmann HT; Kozma R
    Neural Netw; 2019 Dec; 120():108-115. PubMed ID: 31500931
