115 related articles for PubMed ID 35230946

  • 1. A Function Space Analysis of Finite Neural Networks With Insights From Sampling Theory.
    Giryes R
    IEEE Trans Pattern Anal Mach Intell; 2023 Jan; 45(1):27-37. PubMed ID: 35230946

  • 2. Optimal approximation of piecewise smooth functions using deep ReLU neural networks.
    Petersen P; Voigtlaender F
    Neural Netw; 2018 Dec; 108():296-330. PubMed ID: 30245431

  • 3. Integral representations of shallow neural network with rectified power unit activation function.
    Abdeljawad A; Grohs P
    Neural Netw; 2022 Nov; 155():536-550. PubMed ID: 36166980

  • 4. Neural networks with ReLU powers need less depth.
    Cabanilla KIM; Mohammad RZ; Lope JEC
    Neural Netw; 2024 Apr; 172():106073. PubMed ID: 38159509

  • 5. Universality and Approximation Bounds for Echo State Networks With Random Weights.
    Li Z; Yang Y
IEEE Trans Neural Netw Learn Syst; 2023 Dec; PP (epub ahead of print). PubMed ID: 38090874

  • 6. Approximation of smooth functionals using deep ReLU networks.
    Song L; Liu Y; Fan J; Zhou DX
    Neural Netw; 2023 Sep; 166():424-436. PubMed ID: 37549610

  • 7. Predicting the outputs of finite deep neural networks trained with noisy gradients.
    Naveh G; Ben David O; Sompolinsky H; Ringel Z
    Phys Rev E; 2021 Dec; 104(6-1):064301. PubMed ID: 35030925

  • 8. Upper bound of the expected training error of neural network regression for a Gaussian noise sequence.
    Hagiwara K; Hayasaka T; Toda N; Usui S; Kuno K
    Neural Netw; 2001 Dec; 14(10):1419-29. PubMed ID: 11771721

  • 9. Convergence of deep convolutional neural networks.
    Xu Y; Zhang H
    Neural Netw; 2022 Sep; 153():553-563. PubMed ID: 35839599

  • 10. Simultaneous neural network approximation for smooth functions.
    Hon S; Yang H
    Neural Netw; 2022 Oct; 154():152-164. PubMed ID: 35882083

  • 11. Deep ReLU neural networks in high-dimensional approximation.
    Dũng D; Nguyen VK
    Neural Netw; 2021 Oct; 142():619-635. PubMed ID: 34392126

  • 12. Analytical Bounds on the Local Lipschitz Constants of ReLU Networks.
    Avant T; Morgansen KA
IEEE Trans Neural Netw Learn Syst; 2023 Jun; PP (epub ahead of print). PubMed ID: 37368808

  • 13. Upper bounds on the number of hidden neurons in feedforward networks with arbitrary bounded nonlinear activation functions.
    Huang GB; Babri HA
    IEEE Trans Neural Netw; 1998; 9(1):224-9. PubMed ID: 18252445

  • 14. Simultaneous approximation of a smooth function and its derivatives by deep neural networks with piecewise-polynomial activations.
    Belomestny D; Naumov A; Puchkin N; Samsonov S
    Neural Netw; 2023 Apr; 161():242-253. PubMed ID: 36774863

  • 15. Improved Linear Convergence of Training CNNs With Generalizability Guarantees: A One-Hidden-Layer Case.
    Zhang S; Wang M; Xiong J; Liu S; Chen PY
    IEEE Trans Neural Netw Learn Syst; 2021 Jun; 32(6):2622-2635. PubMed ID: 32726280

  • 16. Design of double fuzzy clustering-driven context neural networks.
    Kim EH; Oh SK; Pedrycz W
    Neural Netw; 2018 Aug; 104():1-14. PubMed ID: 29689457

  • 17. Smooth Function Approximation by Deep Neural Networks with General Activation Functions.
    Ohn I; Kim Y
Entropy (Basel); 2019 Jun; 21(7). PubMed ID: 33267341

  • 18. Approximation rates for neural networks with general activation functions.
    Siegel JW; Xu J
    Neural Netw; 2020 Aug; 128():313-321. PubMed ID: 32470796

  • 19. A new adaptive backpropagation algorithm based on Lyapunov stability theory for neural networks.
    Man Z; Wu HR; Liu S; Yu X
    IEEE Trans Neural Netw; 2006 Nov; 17(6):1580-91. PubMed ID: 17131670

  • 20. Error bounds for approximations with deep ReLU networks.
    Yarotsky D
    Neural Netw; 2017 Oct; 94():103-114. PubMed ID: 28756334

Page 1 of 6.