191 related articles for the article with PubMed ID 34392126; the first 20 are listed below.

  • 1. Deep ReLU neural networks in high-dimensional approximation.
    Dũng D; Nguyen VK
    Neural Netw; 2021 Oct; 142():619-635. PubMed ID: 34392126

  • 2. Optimal approximation of piecewise smooth functions using deep ReLU neural networks.
    Petersen P; Voigtlaender F
    Neural Netw; 2018 Dec; 108():296-330. PubMed ID: 30245431

  • 3. Approximation of smooth functionals using deep ReLU networks.
    Song L; Liu Y; Fan J; Zhou DX
    Neural Netw; 2023 Sep; 166():424-436. PubMed ID: 37549610

  • 4. Error bounds for approximations with deep ReLU networks.
    Yarotsky D
    Neural Netw; 2017 Oct; 94():103-114. PubMed ID: 28756334

  • 5. Simultaneous neural network approximation for smooth functions.
    Hon S; Yang H
    Neural Netw; 2022 Oct; 154():152-164. PubMed ID: 35882083

  • 6. Approximation in shift-invariant spaces with deep ReLU neural networks.
    Yang Y; Li Z; Wang Y
    Neural Netw; 2022 Sep; 153():269-281. PubMed ID: 35763879

  • 7. Neural networks with ReLU powers need less depth.
    Cabanilla KIM; Mohammad RZ; Lope JEC
    Neural Netw; 2024 Apr; 172():106073. PubMed ID: 38159509

  • 8. On the capacity of deep generative networks for approximating distributions.
    Yang Y; Li Z; Wang Y
    Neural Netw; 2022 Jan; 145():144-154. PubMed ID: 34749027

  • 9. Low dimensional approximation and generalization of multivariate functions on smooth manifolds using deep ReLU neural networks.
    Labate D; Shi J
    Neural Netw; 2024 Jun; 174():106223. PubMed ID: 38458005

  • 10. Error bounds for deep ReLU networks using the Kolmogorov-Arnold superposition theorem.
    Montanelli H; Yang H
    Neural Netw; 2020 Sep; 129():1-6. PubMed ID: 32473577

  • 11. Dimension independent bounds for general shallow networks.
    Mhaskar HN
    Neural Netw; 2020 Mar; 123():142-152. PubMed ID: 31869651

  • 12. Capacity bounds for hyperbolic neural network representations of latent tree structures.
    Kratsios A; Hong R; Sáez de Ocáriz Borde H
    Neural Netw; 2024 Oct; 178():106420. PubMed ID: 38901097

  • 13. On the approximation of functions by tanh neural networks.
    De Ryck T; Lanthaler S; Mishra S
    Neural Netw; 2021 Nov; 143():732-750. PubMed ID: 34482172

  • 14. Nonlinear approximation via compositions.
    Shen Z; Yang H; Zhang S
    Neural Netw; 2019 Nov; 119():74-84. PubMed ID: 31401528

  • 15. Simultaneous approximation of a smooth function and its derivatives by deep neural networks with piecewise-polynomial activations.
    Belomestny D; Naumov A; Puchkin N; Samsonov S
    Neural Netw; 2023 Apr; 161():242-253. PubMed ID: 36774863

  • 16. Approximate Policy Iteration With Deep Minimax Average Bellman Error Minimization.
    Kang L; Liu Y; Luo Y; Yang JZ; Yuan H; Zhu C
IEEE Trans Neural Netw Learn Syst; 2024 Jan (epub ahead of print). PubMed ID: 38194389

  • 17. Approximation rates for neural networks with encodable weights in smoothness spaces.
    Gühring I; Raslan M
    Neural Netw; 2021 Feb; 134():107-130. PubMed ID: 33310376

  • 18. Deep Network With Approximation Error Being Reciprocal of Width to Power of Square Root of Depth.
    Shen Z; Yang H; Zhang S
    Neural Comput; 2021 Mar; 33(4):1005-1036. PubMed ID: 33513325

  • 19. ReLU Networks Are Universal Approximators via Piecewise Linear or Constant Functions.
    Huang C
    Neural Comput; 2020 Nov; 32(11):2249-2278. PubMed ID: 32946706

  • 20. A comparison of deep networks with ReLU activation function and linear spline-type methods.
    Eckle K; Schmidt-Hieber J
    Neural Netw; 2019 Feb; 110():232-242. PubMed ID: 30616095
