164 related articles for article (PubMed ID: 36774863)

  • 1. Simultaneous approximation of a smooth function and its derivatives by deep neural networks with piecewise-polynomial activations.
    Belomestny D; Naumov A; Puchkin N; Samsonov S
    Neural Netw; 2023 Apr; 161():242-253. PubMed ID: 36774863

  • 2. Approximation of smooth functionals using deep ReLU networks.
    Song L; Liu Y; Fan J; Zhou DX
    Neural Netw; 2023 Sep; 166():424-436. PubMed ID: 37549610

  • 3. Optimal approximation of piecewise smooth functions using deep ReLU neural networks.
    Petersen P; Voigtlaender F
    Neural Netw; 2018 Dec; 108():296-330. PubMed ID: 30245431

  • 4. Smooth Function Approximation by Deep Neural Networks with General Activation Functions.
    Ohn I; Kim Y
    Entropy (Basel); 2019 Jun; 21(7):. PubMed ID: 33267341

  • 5. Neural networks with ReLU powers need less depth.
    Cabanilla KIM; Mohammad RZ; Lope JEC
    Neural Netw; 2024 Apr; 172():106073. PubMed ID: 38159509

  • 6. Low dimensional approximation and generalization of multivariate functions on smooth manifolds using deep ReLU neural networks.
    Labate D; Shi J
    Neural Netw; 2024 Jun; 174():106223. PubMed ID: 38458005

  • 7. Deep ReLU neural networks in high-dimensional approximation.
    Dũng D; Nguyen VK
    Neural Netw; 2021 Oct; 142():619-635. PubMed ID: 34392126

  • 8. Simultaneous neural network approximation for smooth functions.
    Hon S; Yang H
    Neural Netw; 2022 Oct; 154():152-164. PubMed ID: 35882083

  • 9. The deep arbitrary polynomial chaos neural network or how Deep Artificial Neural Networks could benefit from data-driven homogeneous chaos theory.
    Oladyshkin S; Praditia T; Kroeker I; Mohammadi F; Nowak W; Otte S
    Neural Netw; 2023 Sep; 166():85-104. PubMed ID: 37480771

  • 10. Approximation rates for neural networks with encodable weights in smoothness spaces.
    Gühring I; Raslan M
    Neural Netw; 2021 Feb; 134():107-130. PubMed ID: 33310376

  • 11. Approximation in shift-invariant spaces with deep ReLU neural networks.
    Yang Y; Li Z; Wang Y
    Neural Netw; 2022 Sep; 153():269-281. PubMed ID: 35763879

  • 12. On the approximation of functions by tanh neural networks.
    De Ryck T; Lanthaler S; Mishra S
    Neural Netw; 2021 Nov; 143():732-750. PubMed ID: 34482172

  • 13. Approximate Policy Iteration With Deep Minimax Average Bellman Error Minimization.
    Kang L; Liu Y; Luo Y; Yang JZ; Yuan H; Zhu C
    IEEE Trans Neural Netw Learn Syst; 2024 Jan; PP():. PubMed ID: 38194389

  • 14. Theory of deep convolutional neural networks III: Approximating radial functions.
    Mao T; Shi Z; Zhou DX
    Neural Netw; 2021 Dec; 144():778-790. PubMed ID: 34688019

  • 15. Neural network approximation: Three hidden layers are enough.
    Shen Z; Yang H; Zhang S
    Neural Netw; 2021 Sep; 141():160-173. PubMed ID: 33906082

  • 16. Neural network interpolation operators optimized by Lagrange polynomial.
    Wang G; Yu D; Zhou P
    Neural Netw; 2022 Sep; 153():179-191. PubMed ID: 35728337

  • 17. Random Sketching for Neural Networks With ReLU.
    Wang D; Zeng J; Lin SB
    IEEE Trans Neural Netw Learn Syst; 2021 Feb; 32(2):748-762. PubMed ID: 32275612

  • 18. Deep Network With Approximation Error Being Reciprocal of Width to Power of Square Root of Depth.
    Shen Z; Yang H; Zhang S
    Neural Comput; 2021 Mar; 33(4):1005-1036. PubMed ID: 33513325

  • 19. Depth Selection for Deep ReLU Nets in Feature Extraction and Generalization.
    Han Z; Yu S; Lin SB; Zhou DX
    IEEE Trans Pattern Anal Mach Intell; 2022 Apr; 44(4):1853-1868. PubMed ID: 33079656

  • 20. Error bounds for approximations with deep ReLU networks.
    Yarotsky D
    Neural Netw; 2017 Oct; 94():103-114. PubMed ID: 28756334
