BIOMARKERS

Molecular Biopsy of Human Tumors: a resource for Precision Medicine

162 related articles for article (PubMed ID: 32473577)

  • 1. Error bounds for deep ReLU networks using the Kolmogorov-Arnold superposition theorem.
    Montanelli H; Yang H
    Neural Netw; 2020 Sep; 129():1-6. PubMed ID: 32473577

  • 2. The Kolmogorov-Arnold representation theorem revisited.
    Schmidt-Hieber J
    Neural Netw; 2021 May; 137():119-126. PubMed ID: 33592434

  • 3. Deep ReLU neural networks in high-dimensional approximation.
    Dũng D; Nguyen VK
    Neural Netw; 2021 Oct; 142():619-635. PubMed ID: 34392126

  • 4. Dimension independent bounds for general shallow networks.
    Mhaskar HN
    Neural Netw; 2020 Mar; 123():142-152. PubMed ID: 31869651

  • 5. Optimal approximation of piecewise smooth functions using deep ReLU neural networks.
    Petersen P; Voigtlaender F
    Neural Netw; 2018 Dec; 108():296-330. PubMed ID: 30245431

  • 6. Error bounds for approximations with deep ReLU networks.
    Yarotsky D
    Neural Netw; 2017 Oct; 94():103-114. PubMed ID: 28756334

  • 7. Neural networks with ReLU powers need less depth.
    Cabanilla KIM; Mohammad RZ; Lope JEC
    Neural Netw; 2024 Apr; 172():106073. PubMed ID: 38159509

  • 8. Approximation in shift-invariant spaces with deep ReLU neural networks.
    Yang Y; Li Z; Wang Y
    Neural Netw; 2022 Sep; 153():269-281. PubMed ID: 35763879

  • 9. Low dimensional approximation and generalization of multivariate functions on smooth manifolds using deep ReLU neural networks.
    Labate D; Shi J
    Neural Netw; 2024 Jun; 174():106223. PubMed ID: 38458005

  • 10. Approximation of smooth functionals using deep ReLU networks.
    Song L; Liu Y; Fan J; Zhou DX
    Neural Netw; 2023 Sep; 166():424-436. PubMed ID: 37549610

  • 11. On the approximation of functions by tanh neural networks.
    De Ryck T; Lanthaler S; Mishra S
    Neural Netw; 2021 Nov; 143():732-750. PubMed ID: 34482172

  • 12. A deep network construction that adapts to intrinsic dimensionality beyond the domain.
    Cloninger A; Klock T
    Neural Netw; 2021 Sep; 141():404-419. PubMed ID: 34146968

  • 13. Deep Network With Approximation Error Being Reciprocal of Width to Power of Square Root of Depth.
    Shen Z; Yang H; Zhang S
    Neural Comput; 2021 Mar; 33(4):1005-1036. PubMed ID: 33513325

  • 14. On the capacity of deep generative networks for approximating distributions.
    Yang Y; Li Z; Wang Y
    Neural Netw; 2022 Jan; 145():144-154. PubMed ID: 34749027

  • 15. Simultaneous neural network approximation for smooth functions.
    Hon S; Yang H
    Neural Netw; 2022 Oct; 154():152-164. PubMed ID: 35882083

  • 16. Efficient Approximation of High-Dimensional Functions With Neural Networks.
    Cheridito P; Jentzen A; Rossmannek F
    IEEE Trans Neural Netw Learn Syst; 2022 Jul; 33(7):3079-3093. PubMed ID: 33513112

  • 17. A comparison of deep networks with ReLU activation function and linear spline-type methods.
    Eckle K; Schmidt-Hieber J
    Neural Netw; 2019 Feb; 110():232-242. PubMed ID: 30616095

  • 18. On the Kolmogorov neural networks.
    Ismayilova A; Ismailov VE
    Neural Netw; 2024 Aug; 176():106333. PubMed ID: 38688072

  • 19. ReLU Networks Are Universal Approximators via Piecewise Linear or Constant Functions.
    Huang C
    Neural Comput; 2020 Nov; 32(11):2249-2278. PubMed ID: 32946706

  • 20. Non-differentiable saddle points and sub-optimal local minima exist for deep ReLU networks.
    Liu B; Liu Z; Zhang T; Yuan T
    Neural Netw; 2021 Dec; 144():75-89. PubMed ID: 34454244
