
163 related articles for the article with PubMed ID 32946706 (item 1 below). All concern approximation by ReLU and piecewise linear neural networks; a brief illustrative sketch of the shared construction follows the list.

  • 1. ReLU Networks Are Universal Approximators via Piecewise Linear or Constant Functions.
    Huang C
    Neural Comput; 2020 Nov; 32(11):2249-2278. PubMed ID: 32946706

  • 2. Optimal approximation of piecewise smooth functions using deep ReLU neural networks.
    Petersen P; Voigtlaender F
    Neural Netw; 2018 Dec; 108():296-330. PubMed ID: 30245431

  • 3. Deep ReLU neural networks in high-dimensional approximation.
    Dũng D; Nguyen VK
    Neural Netw; 2021 Oct; 142():619-635. PubMed ID: 34392126

  • 4. Neural networks with ReLU powers need less depth.
    Cabanilla KIM; Mohammad RZ; Lope JEC
    Neural Netw; 2024 Apr; 172():106073. PubMed ID: 38159509

  • 5. Error bounds for approximations with deep ReLU networks.
    Yarotsky D
    Neural Netw; 2017 Oct; 94():103-114. PubMed ID: 28756334

  • 6. Approximation of smooth functionals using deep ReLU networks.
    Song L; Liu Y; Fan J; Zhou DX
    Neural Netw; 2023 Sep; 166():424-436. PubMed ID: 37549610

  • 7. Configuration of continuous piecewise-linear neural networks.
    Wang S; Huang X; Junaid KM
    IEEE Trans Neural Netw; 2008 Aug; 19(8):1431-45. PubMed ID: 18701372

  • 8. Deep Network With Approximation Error Being Reciprocal of Width to Power of Square Root of Depth.
    Shen Z; Yang H; Zhang S
    Neural Comput; 2021 Mar; 33(4):1005-1036. PubMed ID: 33513325

  • 9. Singular Values for ReLU Layers.
    Dittmer S; King EJ; Maass P
    IEEE Trans Neural Netw Learn Syst; 2020 Sep; 31(9):3594-3605. PubMed ID: 31714239

  • 10. On minimal representations of shallow ReLU networks.
    Dereich S; Kassing S
    Neural Netw; 2022 Apr; 148():121-128. PubMed ID: 35123261

  • 11. Simultaneous approximation of a smooth function and its derivatives by deep neural networks with piecewise-polynomial activations.
    Belomestny D; Naumov A; Puchkin N; Samsonov S
    Neural Netw; 2023 Apr; 161():242-253. PubMed ID: 36774863

  • 12. Nonlinear approximation via compositions.
    Shen Z; Yang H; Zhang S
    Neural Netw; 2019 Nov; 119():74-84. PubMed ID: 31401528

  • 13. Locally linear attributes of ReLU neural networks.
    Sattelberg B; Cavalieri R; Kirby M; Peterson C; Beveridge R
    Front Artif Intell; 2023; 6():1255192. PubMed ID: 38075385

  • 14. Universal approximation using incremental constructive feedforward networks with random hidden nodes.
    Huang GB; Chen L; Siew CK
    IEEE Trans Neural Netw; 2006 Jul; 17(4):879-892. PubMed ID: 16856652

  • 15. Neural-network approximation of piecewise continuous functions: application to friction compensation.
    Selmic RR; Lewis FL
    IEEE Trans Neural Netw; 2002; 13(3):745-51. PubMed ID: 18244470

  • 16. Error bounds for deep ReLU networks using the Kolmogorov-Arnold superposition theorem.
    Montanelli H; Yang H
    Neural Netw; 2020 Sep; 129():1-6. PubMed ID: 32473577

  • 17. A Deep-Network Piecewise Linear Approximation Formula.
    Zeng GL
    IEEE Access; 2021; 9():120665-120674. PubMed ID: 34532202

  • 18. PWLU: Learning Specialized Activation Functions With the Piecewise Linear Unit.
    Zhu Z; Zhou Y; Dong Y; Zhong Z
    IEEE Trans Pattern Anal Mach Intell; 2023 Oct; 45(10):12269-12286. PubMed ID: 37314901

  • 19. A comparison of deep networks with ReLU activation function and linear spline-type methods.
    Eckle K; Schmidt-Hieber J
    Neural Netw; 2019 Feb; 110():232-242. PubMed ID: 30616095

  • 20. Integrating geometries of ReLU feedforward neural networks.
    Liu Y; Caglar T; Peterson C; Kirby M
    Front Big Data; 2023; 6():1274831. PubMed ID: 38033354
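The common thread running through these articles is that a network with ReLU activations computes a continuous piecewise linear (CPwL) function, and conversely any CPwL function of one variable can be written exactly as a shallow ReLU network with one hidden unit per breakpoint (the correspondence behind titles such as items 1, 2, and 17). The Python sketch below is a minimal illustration of that construction; the breakpoints and values are hypothetical examples, not data from any listed paper.

    import numpy as np

    # Hypothetical CPwL target on [0, 4]: knots t_i and values f(t_i),
    # chosen purely for illustration.
    breakpoints = np.array([0.0, 1.0, 2.5, 4.0])
    values      = np.array([0.0, 2.0, 1.0, 3.0])

    def target(x):
        # Reference evaluation: linear interpolation between the knots.
        return np.interp(x, breakpoints, values)

    # Slope on each interval [t_i, t_{i+1}].
    slopes = np.diff(values) / np.diff(breakpoints)

    # One-hidden-layer ReLU network reproducing the target exactly:
    #   f(x) = f(t_0) + s_0*relu(x - t_0) + sum_i (s_i - s_{i-1})*relu(x - t_i)
    # Each hidden unit contributes one kink at one breakpoint.
    hidden_biases  = breakpoints[:-1]
    output_weights = np.concatenate(([slopes[0]], np.diff(slopes)))
    output_bias    = values[0]

    def relu_net(x):
        h = np.maximum(0.0, x[:, None] - hidden_biases[None, :])  # hidden layer
        return output_bias + h @ output_weights                   # linear readout

    xs = np.linspace(0.0, 4.0, 101)
    assert np.allclose(relu_net(xs), target(xs))  # exact match on the domain

In this one-dimensional picture the representation is exact with one hidden unit per kink; the papers above quantify how width, depth, and activation choice trade off when the target is merely smooth or high-dimensional rather than already piecewise linear.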
